\begin{document} \preprint{ } \title{Duality and helicity: a symplectic viewpoint } \author{ M. Elbistan$^{1}$\footnote{mailto:elbistan@impcas.ac.cn.}, C. Duval$^{2}$\footnote{mailto:duval@cpt.univ-mrs.fr}, P. A. Horv\'athy$^{1,3}$\footnote{mailto:horvathy@lmpt.univ-tours.fr}, P.-M. Zhang$^{1}$\footnote{e-mail:zhpm@impcas.ac.cn}, } \affiliation{ ${}^1$ Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, (China) \\ ${}^2$ Aix Marseille Univ, Universit\'e de Toulon, CNRS, CPT, Marseille, France \\ ${}^3$ Laboratoire de Math\'ematiques et de Physique Th\'eorique, Universit\'e de Tours, (France) } \date{\today} \begin{abstract} The theorem which says that helicity is the conserved quantity associated with the duality symmetry of the vacuum Maxwell equations is proved by viewing electromagnetism as an infinite dimensional symplectic system. In fact, it is shown that helicity is the moment map of duality acting as an $\SO(2)$ group of canonical transformations on the symplectic space of all solutions of the vacuum Maxwell equations. \end{abstract} \pacs{\\ 11.30.-j Symmetry and conservation laws\\ 11.30.Cp Lorentz and Poincar\'e invariance\\ } \maketitle \section{Introduction} The usual electromagnetic action in the vacuum,\footnote{Integration is performed over Minkowski spacetime, $M$, endowed with metric $g=g_{\mu\nu}\,dx^\mu{}dx^\nu$ of signature $(+,-,-,-)$. 
Let us stress that we will content ourselves with a special relativistic treatment of duality, although our main results spelled out in the next sections clearly hold true (with minor modifications) in a fixed gravitational background.} \beq S= -\frac{1}{4}\int_MF_{\mu\nu}F^{\mu\nu} \,d^4x\,, \label{emaction} \eeq suffers from well-known but nevertheless inconvenient defects, namely the \emph{non-invariance} of the Lagrange density under various symmetry transformations and the consequent non-symmetric form of its energy-momentum tensor, requiring one to resort to various ``improvements'' \cite{Jackson,BBN}\footnote{We refer to, e.g., \cite{JMS74} for a geometric standpoint associated with the principle of general covariance, enabling us to circumvent these difficulties.}. In particular, while the vacuum Maxwell equations are invariant w.r.t. \emph{duality trans\-formations}, \beq F\mapsto \hF = \cos\theta\, F + \sin\theta\,\star(F), \label{emdual} \eeq for any real $\theta$ (where $F={\half} F_{\mu\nu}dx^\mu\wedge dx^\nu$ and ${\star(F)}=\frac{1}{4}\epsilon_{\mu\nu\rho\sigma} F^{\rho\sigma}dx^\mu\wedge dx^\nu$ is the Hodge dual electromagnetic field strength), the Lagrange density in (\ref{emaction}) is \emph{not invariant}. The apparent contradiction can be resolved by observing that a duality rotation (\ref{emdual}) changes the Lagrange density by a mere surface term. It is therefore a symmetry of the action \cite{BBN,DeTe} and generates, according to Noether's theorem, a conserved quantity identified here as the optical \emph{helicity} \cite{Calkin}. The proof given in \cite{Calkin} is rather laborious, though, due to the complicated behavior of the vector potential and the subsequent use of the Hertz vector --- a rather subtle, non-gauge-invariant tool. The treatment in \cite{DeTe} is also quite involved. 
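In three-dimensional terms, and in a sign convention where the Hodge star acts on the fields as $(\bE,\bB)\mapsto(\bB,-\bE)$ (conventions vary with the choice of orientation), the duality rotation (\ref{emdual}) mixes $\bE$ and $\bB$. A small numerical sketch of ours, not part of the original argument, illustrates the point just made: the energy density is duality-invariant, while the Lagrange density is not, rotating together with $\bE\cdot\bB$ through the angle $2\theta$.

```python
import numpy as np

rng = np.random.default_rng(0)
E, B = rng.normal(size=3), rng.normal(size=3)   # generic field values at a point
theta = 0.7

# duality rotation, in a convention where the star maps (E, B) -> (B, -E)
Eh = np.cos(theta) * E + np.sin(theta) * B
Bh = np.cos(theta) * B - np.sin(theta) * E

lagrangian = lambda E, B: 0.5 * (E @ E - B @ B)   # ~ -F_{mu nu}F^{mu nu}/4
energy     = lambda E, B: 0.5 * (E @ E + B @ B)

print(np.isclose(energy(Eh, Bh), energy(E, B)))          # True: invariant
print(np.isclose(lagrangian(Eh, Bh), lagrangian(E, B)))  # False generically

# (L, E.B) rotate into each other with angle 2*theta
L, P = lagrangian(E, B), E @ B
Lh, Ph = lagrangian(Eh, Bh), Eh @ Bh
print(np.allclose([Lh, Ph],
                  [np.cos(2*theta)*L + np.sin(2*theta)*P,
                   -np.sin(2*theta)*L + np.cos(2*theta)*P]))
```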
Another proposal \cite{Ranada,BBN,Camer3} is to embed the Maxwell theory into a manifestly duality-symmetric one for which Noether's theorem yields a seemingly different expression, namely, \beq \chi_{_\mathrm{CS}}=\frac{1}{2}\int_{\bR^3}\!(\bA\cdot \bB - \bC\cdot\bE)\,d^3\br \label{CShel} \eeq \textit{\`a la Chern-Simons}, where $\bA$ and $\bC$ are vector potentials for the magnetic and the electric fields, $\bnabla\times\bA=\bB$ and $\bnabla\times\bC=-\bE$, respectively. It is worth noting that the second term in Eq. \#~(14) of \cite{Calkin} and, respectively, in Eq. \#~(2.9) of \cite{DeTe}, both represent the vector potential for the dual field strength --- a fact recognized by none of these authors. See \cite{AfSt,BBN,Camer3} for comprehensive presentations. \goodbreak In the first term in (\ref{CShel}) we recognize the (magnetic) \emph{helicity}, $\chi_\mathrm{mag}=\frac{1}{2}\int\!\bA\cdot \bB\,d^3\br$, widely studied in (magneto)\-hydrodynamics \cite{Moffatt}, where it measures the winding of magnetic lines of force and/or fluid vortex lines. It is worth stressing that the magnetic helicity alone is {not} a constant of the motion in general, and the clue leading to (\ref{CShel}) is that its non-conservation, \beq \frac{d}{dt}\chi_\mathrm{mag}= -\int_{\bbR^3}{\bE\cdot\bB \,d^3\br}, \label{FstarF} \eeq is precisely compensated by that of the second term \cite{AfSt}. A remarkable fact is that (\ref{CShel}) combines two \emph{Chern-Simons invariants} \cite{ChernSimons}, for both the electromagnetic field and its dual. Duality and helicity have attracted considerable recent attention, notably in optics \cite{BBN,FeCo,Camer3} and in heavy ion physics \cite{Manuel}. Our own interest stems from studying the helicity of semiclassical chiral particles \cite{EDHZ-heli}. In this Note we explain duality and helicity from yet another viewpoint, which bypasses Lagrangians and gauge fixing altogether. 
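As a concrete illustration of (\ref{CShel}), one can check symbolically that for a circularly polarized plane wave the helicity density $\frac{1}{2}(\bA\cdot\bB-\bC\cdot\bE)$ is constant, with a sign set by the polarization. The following sketch is ours, under assumed conventions ($c=1$, temporal gauge, propagation along $z$); it is not taken from the cited references.

```python
import sympy as sp

t, z, k, A0 = sp.symbols('t z k A0', positive=True)
s = sp.symbols('s')                  # polarization sign, s = +1 or -1
phi = k*z - k*t                      # phase, with c = 1 so omega = k

# circularly polarized plane wave in temporal gauge (our example)
A = sp.Matrix([A0*sp.cos(phi), s*A0*sp.sin(phi), 0])
E = -A.diff(t)
B = sp.Matrix([-A[1].diff(z), A[0].diff(z), 0])        # curl of A(z)
C = sp.Matrix([-s*A0*sp.sin(phi), A0*sp.cos(phi), 0])  # chosen so curl C = -E

# verify curl C = -E
curlC = sp.Matrix([-C[1].diff(z), C[0].diff(z), 0])
assert sp.simplify(curlC + E) == sp.zeros(3, 1)

# helicity density (1/2)(A.B - C.E): constant; its sign flips with s
density = sp.simplify((A.dot(B) - C.dot(E))/2)
print(density)   # constant in space and time, proportional to -s
```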
Our clue is to view the set of solutions of electromagnetism as (an infinite-dimensional) \emph{symplectic} space \cite{SSD,CW,ABR}. \section{Electromagnetism in the symplectic framework} In the framework of Hamiltonian mechanics \cite{SSD} one works with manifolds endowed with a closed two-form~$\omega$. If $\ker(\omega)$ has constant but nonzero dimension, $\omega$ is called presymplectic; if its kernel is zero-dimensional, it is called symplectic. In the physical applications we have in mind, we start with a presymplectic manifold $(\cV,\omega)$, referred to as an \textit{``evolution space''}, where the dynamics takes place. The characteristic leaves which integrate $\ker(\omega)$ are identified with the motions of the system. The quotient of $\cV$ by the characteristic foliation of $\omega$, namely $\cM=\cV/\ker(\omega)$, is therefore endowed with a symplectic two-form $\Omega$, whose pull-back to $\cV$ is $\omega$. Then $(\cM,\Omega)$ is what has been called the \textit{``space of motions''} in \cite{SSD}. Crnkovi\v{c} and Witten \cite{CW} call it the ``true phase space''. The next ingredient is a Lie group $G$ of canonical transformations, i.e., of diffeomorphisms of $\cV$ preserving the two-form~$\omega$. Denote by $\fg$ the Lie algebra of $G$, and by $Z_{\cV}$ the infinitesimal action (fundamental vector field) on $\cV$ associated with $Z\in\fg$. We thus have $L_{Z_{\cV}}\omega=0$ so that $\omega(Z_{\cV},\,\cdot\,)$ is a closed one-form for all $Z\in\fg$. 
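A finite-dimensional warm-up (our toy example, not from the cited references): for rotations of the $(q,p)$ phase plane with $\omega=dq\wedge dp$, the closed one-form $\omega(Z,\,\cdot\,)$ is in fact exact, $\omega(Z,\,\cdot\,)=-dJ$ with $J=\frac{1}{2}(q^2+p^2)$ the harmonic-oscillator energy (up to a sign depending on the orientation of the rotation).

```python
import sympy as sp

q, p = sp.symbols('q p')

# SO(2) acting on the (q, p) plane; infinitesimal generator Z = (-p, q)
Z = sp.Matrix([-p, q])

# omega = dq ^ dp contracted with Z on a test vector v: Z_q*v_p - Z_p*v_q
v_q, v_p = sp.symbols('v_q v_p')
omega_Z_v = Z[0]*v_p - Z[1]*v_q

# candidate J = (q^2 + p^2)/2; compare with -dJ(v) = -(J_q v_q + J_p v_p)
J = (q**2 + p**2)/2
minus_dJ_v = -(sp.diff(J, q)*v_q + sp.diff(J, p)*v_p)

assert sp.simplify(omega_Z_v - minus_dJ_v) == 0   # omega(Z, .) = -dJ
```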
We now say that $J:\cV\to\fg^*$ is a \emph{moment map} for $(\cV,\omega,G)$ if the stronger condition \begin{equation}\label{J} \omega(Z_{\cV},\,\cdot\,)=-d(J\cdot{}Z) \end{equation} holds for all $Z\in\fg$.\footnote{For each point $x$ of $\cV$, the quantity $J(x)$ belongs to the dual $\fg^*$ of the Lie algebra $\fg$, and contracting with $Z\in\fg$ yields a function $x\mapsto{}J(x)\cdot{}Z$ on $\cV$.} If the equations of motion are given by $\ker(\omega)$, as happens in the mechanics of finite dimensional systems \cite{SSD} and, as we will prove below, also for Maxwell's electromagnetism, then~$J$ clearly descends to the space of motions, $\cM=\cV/\ker(\omega)$, as the \emph{Noetherian quantity} associated with the symmetry group $G$: indeed $J\cdot{}Z$ is a \emph{constant of the motion} for all~$Z\in\fg$. Below we boldly extend this framework to the infinite-dimensional ``manifold'' $\cM$ which consists of all \emph{solutions} of the vacuum Maxwell equations modulo gauge transformations, and which we endow with a \emph{symplectic structure}.\footnote{A rigorous treatment of this infinite-dimensional differentiable structure would require the use of, e.g., diffeology \cite{PIZ}, especially when dealing with differential forms on this ``diffeological space''.} Let us show how all this comes about. Our first aim is to translate the usual variational approach into a symplectic language. The actual physical variable is the potential one-form $A=A_\mu\,dx^\mu$ locally defined by $F=dA$.\footnote{One-forms and vector fields are identified by raising and lowering indices using the Minkowski metric.} Then the variation of the action (\ref{emaction}) with respect to a variation $\delta A=\delta A_{\mu}\,dx^\mu$ of the $4$-potential is \beq \delta S = \int_M\big[\p_\nu (F^{\mu\nu}\delta A_\mu)+(\p_\mu F^{\mu\nu})\delta A_\nu\big] \,d^4x\,. 
\label{Maxvar} \eeq Assuming that the fields drop off sufficiently rapidly at infinity --- or that the variations~$\delta A$ have compact support --- the surface term can be dropped, allowing us to deduce the vacuum Maxwell equations $\p_{[\mu}F_{\nu\rho]}=0$ and $\p_\mu F^{\mu\nu}=0$, also written as \beq dF=0 \qquad \hbox{and} \qquad d\star(F)=0. \label{Maxeqn} \eeq Denote by ${\cV}$ the space of one-forms $A$ of Minkowski space $M$ whose associated field strength, $F=dA$, is a \emph{solution} of (\ref{Maxeqn}). We contend that ${\cV}$, which can be thought of as an infinite-dimensional manifold (affine space), is an ``evolution space'' for the Maxwell theory. Firstly, a variation of a \emph{solution}, $\delta A$, is a ``tangent vector'' to ${\cV}$ at $A\in{\cV}$ if $A+\delta A$ is still a solution of the field equations which vanishes at spatial infinity (as $A$ does). Since the associated field strength is $F+\delta F$, where $\delta F = d(\delta A)$, it follows that $\delta F$ also satisfies the Maxwell equations, $ d(\delta F)=0$ and $d\star(\delta{F})=0. $ Now, adapting Souriau's procedure in \cite{SSD}, Sec. 7, to field theory, we define a symplectic form on the space of all solutions of the linear system (\ref{Maxeqn}). To this end, we consider the action (\ref{emaction}) by integrating over the domain $M'=[t_0,t_1]\times\Sigma\subset{M}$ defined by a Cauchy $3$-surface $\Sigma$ with \emph{arbitrary dates}~$t_0$ and $t_1\neq{}t_0$, where $t$ is some given time-function. 
When~$F$ is a solution of the Maxwell equations, the variation vanishes, $\delta S=0$, and therefore Eq.~(\ref{Maxvar}) boils down to \beq 0= \int_M\p_\nu (F^{\mu\nu}\delta A_\mu)\,d^4x= \int_{\Sigma_1}\!\star(F(\delta{A})) - \int_{\Sigma_0}\!\star(F(\delta{A})) \,, \nn \eeq where $\Sigma_i=\{t_i\}\times\Sigma$ for $i=0,1$, implying that the integral does not depend on the choice of $t_0$ and $t_1$; the one-form\footnote{ In a coordinate system where the metric is $g=dt^2-d\bx^2$ and $\Sigma$ given by $t=\const$, Eq. (\ref{CartanMax}) reads \begin{equation} \label{CartanMaxBis} \alpha(\delta{A})= \int{\! F^{\mu\nu}\delta A_\mu\p_\nu t\,d^3\bx}. \end{equation} } \begin{equation} \label{CartanMax} \medbox{ \alpha(\delta{A})= \int_\Sigma{\!\star(F(\delta{A}))} = -\int_\Sigma{\star(F)\wedge\delta A} } \end{equation} is therefore well-defined; it is the \emph{Cartan one-form}. The expression~(\ref{CartanMax}) represents the \emph{flux} of the vector field $F(\delta A)=(F^{\mu\nu}\delta A_\mu)\p_\nu$ across the Cauchy surface $\Sigma$. Calculating the exterior derivative, $\omega=d\alpha$, via $d\alpha(\delta A,\delta'\!{A})=\delta(\alpha(\delta'\!{A}))-\delta'(\alpha(\delta A))-\alpha([\delta,\delta']A)$, we find \begin{equation} \label{2FormMaxBis} \omega(\delta{A},\delta'\!{A})= \int_\Sigma\! \delta{A}\wedge\star(\delta'F)-\delta'\!{A}\wedge\star(\delta F). \end{equation} The two-form (\ref{2FormMaxBis}) corresponds \emph{exactly} to that given by Eq. \# (23) in~\cite{CW}. From this point on, we do not use any Lagrangian; the starting point of all our subsequent investigations will be the two-form (\ref{2FormMaxBis}). Let us now show that $({\cV},\omega)$ becomes a formal \emph{presymplectic space}. To that end, let us compute its characteristic distribution. 
We thus must determine the kernel of $\omega$, i.e., all variations $\delta A$ of a solution $A\in{\cV}$ such that $\omega(\delta A,\delta'\!{A})=0$ for all $\delta'\!{A}$, subject to the constraint $\delta'(d\star(F))=0$ to comply with the field equations. Using a Lagrange multiplier,~$f$, we look for all solutions $\delta A$~of \begin{eqnarray} \int_\Sigma{\! \delta{A}\wedge\star(\delta'F)-\delta'\!{A}\wedge\star(\delta F)} = -\int_\Sigma{\!f\,d(\star(\delta'F))} = \int_\Sigma{\!df\wedge{}\star(\delta'F)} \label{Eq2} \end{eqnarray} for all compactly supported variations $\delta'\!{A}$. Eq. (\ref{Eq2}) readily yields that the kernel is indeed given by all gauge transformations, \begin{equation}\label{keromega} \medbox{ \delta A\in\ker(\omega) \quad \iff \quad \delta A=df } \end{equation} for some smooth function $f$. (Note that we duly have $\delta F=0$.) Then, the leaves of the characteristic distribution $\ker(\omega)$ are identified with the orbits of the \emph{electromagnetic gauge group} $\cJ$ generated by smooth functions $\varphi$ on $M$, which acts on ${\cV}$ according to $A\mapsto {}A+d\varphi$. Finally, the quotient \begin{equation}\label{MaxSympl} \medbox{ \cM={\cV}/\cJ } \end{equation} is the \emph{``space of motions''} of electromagnetism, to which $\omega$ projects as the canonical \emph{symplectic two-form} $\Omega$; it is identified with the space of all vector potentials which are solutions of the free Maxwell equations modulo gauge transformations. \section{Duality symmetry} Let us now consider duality rotations (\ref{emdual}) which form, as said before, a manifest symmetry group for the free Maxwell equations.\footnote{ The field equations being linear, any real linear transformation $\hF=a F + b \star(F)$ \& $\star(\hF)= c F+d\star(F)$, with $ad-bc\neq0$, permutes the solutions of (\ref{Maxeqn}). Now, the Hodge star defines a complex structure on the $2$-dimensional space spanned by $F$ and $\star(F)$, since $\star^2=-\bone$. 
Restricting our considerations to transformations that preserve the ``star'' $\star$, i.e., to $\mathrm{Sp}(1,\bbR)\cong\SL(2,\bbR)$, an easy calculation shows that $c=-b$ and $d=a$, implying $a^2+b^2=1$; hence $a=\cos\theta$ and $b=\sin\theta$ as in Eq. (\ref{emdual}). } Using our symplectic language, we claim that the two-form $\omega$ in (\ref{2FormMaxBis}) is invariant under (\ref{emdual}), implemented on the potentials as \begin{equation} \label{hA} \hA=\cos\theta\, A + \sin\theta\, C, \qquad \hC=\cos\theta\, C - \sin\theta\, A \end{equation} where $A$ and $C$ are (local) $4$-potentials for the field and its dual, $F=dA$ and $\star(F)=dC$. Note that $A$ and $C$ here are \emph{not} independent since their field strengths are each other's duals. Using the properties of the Hodge star operation, $\star$, one shows indeed that \begin{equation}\label{Symplectomorphism} \medbox{ \omega(\delta{\hA},\delta'\hA) = \omega(\delta{A},\delta'A)\, } \end{equation} for all variations $\delta A$ and $\delta'A$ compatible with the constraints (\ref{Maxeqn}). This proves that \emph{the duality transformation (\ref{emdual}), implemented as above, is a canonical trans\-formation of the evolution space, $({\cV},\omega)$, and therefore also of the space of motions, $(\cM,\Omega)$}. We now turn to the \emph{moment map} of the duality symmetry. The infinitesimal duality action on ${\cV}$ is given by $ \delta_\varepsilon{A}=\varepsilon\, C $ and $ \delta_\varepsilon{C}=-\varepsilon\, A, $ where $\varepsilon\in\bbR$. 
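The classification recalled in the preceding footnote can be checked symbolically: among real matrices acting on the span of $F$ and $\star(F)$, commuting with the star forces $c=-b$ and $d=a$, and the unit-determinant condition then forces a rotation as in (\ref{hA}). A brief sketch of ours:

```python
import sympy as sp

a, b, c, d, theta = sp.symbols('a b c d theta', real=True)
M = sp.Matrix([[a, b], [c, d]])   # action on the 2d span of F and star(F)
J = sp.Matrix([[0, -1], [1, 0]])  # the Hodge star, with star^2 = -1

# requiring M to commute with the star forces c = -b, d = a ...
sol = sp.solve(list(M*J - J*M), [c, d])
print(sol)                        # {c: -b, d: a}

# ... and then det M = a^2 + b^2, so det M = 1 gives a rotation
M2 = M.subs(sol)
assert sp.expand(M2.det()) == a**2 + b**2
R = sp.Matrix([[sp.cos(theta), sp.sin(theta)],
               [-sp.sin(theta), sp.cos(theta)]])
assert sp.simplify(R*J - J*R) == sp.zeros(2, 2)
assert sp.simplify(R.det()) == 1
```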
A straightforward calculation then shows that, for all $\delta'A$ compatible with the constraints (\ref{Maxeqn}), we have \begin{eqnarray} \omega(\delta_\varepsilon{A},\delta'\!A) = \int_\Sigma{\left\{ \delta'(\star(F))\wedge\varepsilon{}C+\varepsilon{}F\wedge\delta'\!A \right\}} = \half\varepsilon\,\delta'\!\!\int_\Sigma{\left\{ C\wedge\star(F) + A\wedge{}F \right\}} \end{eqnarray} since $\delta'\!A\wedge{}F\equiv\half\delta'(A\wedge{}F)$ and, likewise, $\delta'C\wedge\star(F)\equiv\half\delta'(C\wedge\star(F))$ --- modulo an exact three-form. It follows that we \emph{do actually have a moment map} $J:{\cV}\to\bbR$, i.e., such that $ \omega(\delta_\varepsilon{A},\delta'A)=-\delta'\big(J(A)\varepsilon\big) $ for the duality group acting on $(\cV,\omega)$, and thus on the space of motions of all solutions of the Maxwell equations, namely \begin{equation} \label{MaxMomentMap} \medbox{ J(A) = -\half\int_\Sigma{A\wedge{}dA+C\wedge{}dC\,,} } \end{equation} which is indeed the geometric form of the helicity (\ref{CShel}). The conservation of (\ref{MaxMomentMap}) can also be checked directly: the two Chern-Simons three-forms are both antiderivatives of the \emph{same} Pontryagin density, but with \emph{opposite signs}, \beq d\big(A\wedge F\big) = F\wedge{}F = -\star(F)\wedge\star(F) = -d\big(C\wedge \star(F)\big). \label{Pontriagin} \eeq Let us consider two Cauchy surfaces $\Sigma_0$ and $\Sigma_1$ with dates $t=t_0$ and $t=t_1$ and view them as the boundary of a four-volume $V$. The integral of the four-form $-\half{}d\big(A\wedge F+C\wedge\star(F)\big)$ on~$V$ vanishes in view of (\ref{Pontriagin}), proving that the fluxes across $\Sigma_0$ and $\Sigma_1$ are equal, and that the moment map $J$ in (\ref{MaxMomentMap}) is therefore independent of $\Sigma$. 
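The algebraic identity underlying (\ref{Pontriagin}), $F\wedge F=-\star(F)\wedge\star(F)$, together with $\star^2=-\bone$, can be spot-checked numerically on a random antisymmetric $F_{\mu\nu}$. The sketch below is ours, with $\epsilon_{0123}=+1$ and signature $(+,-,-,-)$ as assumed conventions:

```python
import numpy as np
from itertools import permutations

def parity(p):
    # sign of a permutation, via cycle sorting
    s, p = 1, list(p)
    for i in range(4):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

eps = np.zeros((4, 4, 4, 4))                 # Levi-Civita, eps[0,1,2,3] = +1
for p in permutations(range(4)):
    eps[p] = parity(p)

eta = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
F = X - X.T                                  # random antisymmetric F_{mu nu}

def star(F):
    Fup = eta @ F @ eta                      # raise both indices
    return 0.5 * np.einsum('mnrs,rs->mn', eps, Fup)

assert np.allclose(star(star(F)), -F)        # star^2 = -1 on 2-forms

# wedge "densities": eps^{mnrs} G_{mn} H_{rs}, up to an overall constant
wedge = lambda G, H: np.einsum('mnrs,mn,rs->', eps, G, H)
assert np.isclose(wedge(F, F), -wedge(star(F), star(F)))
```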
The equivalence of (\ref{MaxMomentMap}) with the optical formula in the literature which says that \emph{the optical helicity is in fact the difference of the numbers of left- and right-handed photons}, \beq \chi_{\mathrm{O}}=N_L-N_R, \label{optihel} \eeq can be shown along the lines followed in \cite{Calkin,AfSt}. Here we just mention an alternative yet incomplete approach: the general form (\ref{CShel}) was narrowly missed by Ra\~nada \cite{Ranada}, who did correctly identify both terms --- without, however, adding them, and considering only the special case $\bE\cdot\bB=0$, when both terms are separately conserved; cf. (\ref{FstarF}). Under such a condition he could show that the two integrals are indeed the degrees, $N_L$ and $N_R$, of suitable Hopf maps $S^3\to{}S^2$, confirming (\ref{optihel}) in that case. Extension of this approach to the general case is under investigation. \section{Conclusion} In this ``variation on a theme''-type Note we re-derive, using the symplectic framework in infinite dimensions, the helicity formula (\ref{MaxMomentMap}), equivalent to the form (\ref{CShel}) proposed in the literature. Unlike those of previous authors \cite{BBN, Calkin, AfSt}, our derivation is gauge-invariant, as it does not require any choice of gauge. We note also that our two-form (\ref{2FormMaxBis}) is manifestly duality-invariant, whereas the Cartan one-form $\alpha$ in~(\ref{CartanMax}) is clearly \emph{not}, as follows from the non-invariance of the standard Maxwell Lagrangian (\ref{emaction}). This highlights the advantage of using the presymplectic Maxwell two-form~(\ref{2FormMaxBis}) to deal with symmetries, and in particular with duality. The situation is reminiscent of what happens for a Dirac monopole, for which no manifestly radially symmetric vector potential and thus no symmetric Lagrangian or Cartan one-form can exist, whereas the two-form which represents the field strength resp. the dynamics is perfectly rotationally invariant \cite{HPA81}. 
We also mention that this formula can be obtained using the Pauli-Lubanski approach \cite{BB-Perjes}, also followed in \cite{EDHZ-heli}. \begin{acknowledgments} CD warmly thanks H. P. K\"unzle and M. J. Gotay for enlightening discussions at the early stage of this work. PH would like to thank J. Balog and K. Bliokh for discussions. ME and PH are grateful to the IMP of the CAS for hospitality in Lanzhou. This work was supported by the Major State Basic Research Development Program in China (No. 2015CB856903) and the National Natural Science Foundation of China (Grants No. 11575254 and No. 11175215). \end{acknowledgments} \bigskip\goodbreak\newpage
\begin{document} \begin{frontmatter} \title{Asymptotic behavior of Aldous' gossip process\thanksref{T1}} \runtitle{Aldous' gossip process} \thankstext{T1}{Supported in part by NSF Grant DMS-07-04996 from the probability program.} \begin{aug} \author[A]{\fnms{Shirshendu} \snm{Chatterjee}\ead[label=e1]{sc499@cornell.edu}} and \author[B]{\fnms{Rick} \snm{Durrett}\corref{}\ead[label=e2]{rtd@math.duke.edu}} \runauthor{S. Chatterjee and R. Durrett} \affiliation{Cornell University and Duke University} \address[A]{School of Operations Research\\ \quad and Information Engineering\\ Department of Mathematics\\ Cornell University\\ Ithaca, New York 14853\\ USA\\ \printead{e1}} \address[B]{Mathematics Department\\ Duke University\\ Box 90320\\ Durham, North Carolina 27708-0320\\ USA\\ \printead{e2}} \end{aug} \received{\smonth{5} \syear{2010}} \revised{\smonth{9} \syear{2010}} \begin{abstract} Aldous [(2007) Preprint] defined a gossip process in which space is a discrete $N \times N$ torus, and the state of the process at time $t$ is the set of individuals who know the information. Information spreads from a~site to its nearest neighbors at rate $1/4$ each and at rate $N^{-\alpha}$ to a site chosen at random from the torus. We will be interested in the case in which $\alpha< 3$, where the long range transmission significantly accelerates the time at which everyone knows the information. We prove three results that precisely describe the spread of information in a~slightly simplified model on the real torus. The time until everyone knows the information is asymptotically $T=(2-2\alpha/3) N^{\alpha/3} \log N$. If~$\rho_s$ is the fraction of the population who know the information at time $s$ and $\ep$ is small then, for large $N$, the time until $\rho_s$ reaches~ $\ep$ is $T(\ep) \approx T + N^{\alpha/3} \log (3\ep/M)$, where $M$ is a random variable determined by the early spread of the information. 
The value of $\rho_s$ at time $s = T(1/3) + t N^{\alpha/3}$ is almost a deterministic function $h(t)$ which satisfies an odd looking integro-differential equation. The last result confirms a heuristic calculation of Aldous. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{60K35} \kwd[; secondary ]{60J80}. \end{keyword} \begin{keyword} \kwd{Gossip} \kwd{branching process} \kwd{first-passage percolation} \kwd{integro-differential equation}. \end{keyword} \end{frontmatter} \section{Introduction}\label{intro} We study a model introduced by \citet{Ald07} for the spread of gossip and other more economically useful information. His paper considers various game theoretic aspects of random percolation of information through networks. Here we concentrate on one small part, a first passage percolation model with nearest neighbor and long-range jumps introduced in his Section~6.2. The work presented here is also related to work of \citet{FilMau04} and \citet{CanMarMon06}, who considered the impact of long-range dispersal on the spread of epidemics and invading species. Space is the discrete torus $\Lambda(N) = (\bbz\bmod N)^2$. The state of the process at time $t$ is $\xi_t \subset\Lambda(N)$, the set of individuals who know the information at time $t$. Information spreads from $i$ to $j$ at rate \[ \nu_{ij} = \cases{ 1/4, &\quad if $j$ is a (nearest) neighbor of $i$,\cr \lambda_N/N^2, &\quad if not.} \] If $\lambda_N=0$, this is ordinary first passage percolation on the torus. If we start with $\xi_0 = \{(0,0)\}$, then the shape theorem for nearest-neighbor\vadjust{\goodbreak} first passage percolation, see \citet{CoxDur81} or \citet{Kes86}, implies that until the process exits $(-N/2,N/2)^2$, the radius of the set $\xi_t$ grows linearly and~$\xi_t$ has an asymptotic shape. 
From this we see that if $\lambda _N=0$, then there is a~constant $c_0$ so that the time $T_N$, until everyone knows the information, satisfies \[ \frac{T_N}{N} \eqp c_0, \] where $\eqp$ denotes convergence in probability. To simplify things, we will remove the randomness from the nearest neighbor part of the process, and formulate it on the (real) torus $\Gamma(N) = (\bbr\bmod N)^2$. One should be able to prove a similar result for the first passage percolation model but there are two difficulties. The first and easier to handle is that the limiting shape is not round. The second and more difficult issue is that the growth is not deterministic but has fluctuations. One should be able to handle both of these problems, but the proof is already long enough. We consider what we call the ``balloon process,'' in which the state of the process at time $t$ is $\mathcal{C}_t \subset\Gamma(N)$. It starts with one ``center'' chosen uniformly from the torus at time 0. When a center is born at $x$, a disk with radius 0 is put there, and its radius grows deterministically as $r(s) = s/\sqrt{2\pi }$, so that the area of the disk at time $s$ after its birth is $s^2/2$. If the area covered at time $t$ is $C_t$, then births of new centers occur at rate $\lambda_NC_t$. The location of each new center is chosen uniformly from the torus. If the new point lands at $x\in\mathcal{ C}_t$, it will never contribute anything to the growth of the set, but we will count it in the total number of centers, which we denote by $\tilde X_t$. Before turning to the details of our analysis we would like to point out that a related balloon process was used by Barbour and Reinert (\citeyear{BR01}) in their study of distances on the small world graph. Consider a circle of radius $L$ and introduce a Poisson mean $\rho L/2$ number of chords with length~0 connecting randomly chosen points on the circle. 
To study the distance between a fixed point $O$ and a point chosen at random one wants to examine $S(t) = \{ x \dvtx\operatorname{dist}(O,x) \le t \}$. If we ignore overlaps and let $M(t)$ be the number of intervals in $S(t)$ then $S'(t) = 2M(t)$ and $M(t)$ is a Yule process with births at rate $2\rho M(t)$ due to the interval ends encountering points in the Poisson process of chords. This is a balloon process in which the new births come from the boundaries. As in our case, one first studies the growth of the balloon process and then estimates the difference from the real process to prove the desired result; see Section 5.2 of Durrett (\citeyear{D07}) for a proof. There are interesting parallels and differences between the two proofs. Here we will be concerned with $\lambda_N = N^{-\alpha}$. To begin we will get rid of trivial cases. If the diameter of $\mathcal{C}_t$ grows linearly, then $\int_0^{c_0 N} C_t \,dt = O(N^3)$. So if $\alpha> 3$, with probability tending to 1 as $N$ goes to $\infty$, there is no long range jump before the initial disk covers the entire torus, and the time $T_N$ until the entire torus is covered satisfies \[ \frac{T_N}{N} \eqp c_1 \qquad \mbox{where } c_1=\sqrt{\pi}. \] If $\alpha=3$, then with probabilities bounded away from 0, (i) there is no long range jump and $T_N \approx c_1N$, and (ii) there is one that lands close enough to $(N/2,N/2)$ to make $T_N \le(1-\delta) Nc_1$. Using $\Rightarrow$ for weak convergence, this suggests that \setcounter{theorem}{-1} \begin{theorem}\label{theo0} When $\alpha=3$, $T_N/N \Rightarrow$ a random limit concentrated on $[0,c_1]$ and with an atom at $c_1$. \end{theorem} \begin{pf} Suppose without loss of generality that the initial center is at~0, and view the torus as $(-N/2,N/2]^2$. The key observation is that the set-valued process $\{ \mathcal{C}_{Nt}/N , t\ge0 \}$ converges to a limit $\mathcal{D}_t$. 
Before the first long-range dispersal, the state of $\mathcal{D}_t$ is the intersection of the disk of radius $t/\sqrt{2\pi}$ with $(-1/2,1/2]^2$. Long range births occur at rate equal to the area of $\mathcal{D}_t$ and are dispersed uniformly. Since the distance from $(0,0)$ to $(1/2,1/2)$ is $1/\sqrt{2}$, if there are no long range births before time $c_1=\sqrt{\pi}$ or if all long range births land inside $\mathcal{D}_t$ then the torus is covered at time $c_1$. Computing the distribution of the cover time when it is $< c_1$ is complicated, but the answer is a continuous functional of the limit process, and standard weak convergence results give the result. \end{pf} For the remainder of the paper we suppose $\lambda_N = N^{-\alpha}$ with $\alpha<3$. The overlaps between disks in $\mathcal{C}_t$ pose a difficulty in analyzing the process, so we begin by studying a simpler ``balloon branching process'' $\mathcal{A}_t$, in which $A_t$ is the sum of the areas of all of the disks at time $t$, births of new centers occur at rate $\lambda_NA_t$, and the location of each new center is chosen uniformly from the torus. Let $X_t$ be the number of centers at time $t$ in $\mathcal{A}_t$. Suppose we start $\mathcal{C}_0$ and $\mathcal{A}_0$ from the same randomly chosen point. The areas satisfy $C_t=A_t$ until the time of the first birth, which can be made the same in the two processes. 
If we couple the location of the new centers at that time, and continue in the obvious way letting $\mathcal{C}_t$ and $\mathcal{A}_t$ give birth at the same time with the maximum rate possible, to the same place when they give birth simultaneously, and letting\vadjust{\goodbreak} $\mathcal{A}_t$ give birth by itself otherwise, then we will have \begin{equation}\label{couple} \mathcal{C}_t \subset\mathcal{A}_t,\qquad C_t \le A_t,\qquad \tilde X_t \le X_t \qquad\mbox{for all $t\ge0$.} \end{equation} $X_t$ is a Crump--Mode--Jagers branching process, but saying these words does not magically solve our problems. Define the length process $L_t$ to be $\sqrt{2\pi}$ times the sum of the radii of all the disks at time $t$. \begin{eqnarray} \label{LA} L_t &=&\int_0^t (t-s) \,dX_s = \int_0^t X_s \,ds, \nonumber\\[-8pt]\\[-8pt] A_t &=&\int_0^t \frac{(t-s)^2}{2} \,dX_s = \int_0^t L_s \,ds. \nonumber \end{eqnarray} Here and later we use $\intt$ for integration over the closed interval $[0,t]$, that is, we include the contribution from the atom in $dX_s$ at 0 ($X_0=1$ while $X_s=0$ for $s<0$). For the second equality on each line integrate by parts or note that $L_t'=X_t$ and $A_t'=L_t$. Since $X_t$ increases by 1 at rate $\lambda_NA_t$, $(X_t,L_t,A_t)$ is a Markov process. To simplify formulas, we will often drop the subscript $N$ from $\lambda_N$. For comparison with $C_t$, the parameter $\lambda$ is important, but in the analysis of $A_t$ it is not. If we let \begin{equation}\label{scale}\qquad X^1_t = X(t\lambda^{-1/3}),\qquad L^1_t = \lambda^{1/3} L(t\lambda^{-1/3}),\qquad A^1_t = \lambda^{2/3} A(t\lambda^{-1/3}), \end{equation} then $(X_t^1,L^1_t,A^1_t)$ is the process with $\lambda=1$. To study the growth of $A_t$, first we will compute the means of $X_t$, $L_t$ and~$A_t$. Let $F(t) = \lambda t^3/3!$. 
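The Markov dynamics of $(X_t,L_t,A_t)$ can be checked by direct simulation: $X$ jumps by 1 at rate $\lambda A_t$, while $L_t'=X_t$ and $A_t'=L_t$ between jumps. The Euler sketch below is ours (step size, horizon and replica count are arbitrary choices); the sample mean of $X_t$ should be close to the series $\sum_k \lambda^k t^{3k}/(3k)!$ obtained below from the renewal equation.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, T, dt, reps = 1.0, 1.0, 1e-3, 20000

X = np.ones(reps)        # number of centers, X_0 = 1
L = np.zeros(reps)       # length process
A = np.zeros(reps)       # area process
for _ in range(int(T / dt)):
    X += rng.random(reps) < lam * A * dt   # births at rate lam * A
    A += L * dt                            # A' = L
    L += X * dt                            # L' = X

# first terms of E X_t = sum_k lam^k t^{3k}/(3k)!
mean_exact = 1 + lam * T**3 / 6 + lam**2 * T**6 / 720
print(X.mean(), mean_exact)   # close, for small dt and many replicas
```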
Using the independent and identical behavior of all the disks in $\mathcal{A}_t$ it is easy to show that (see the proof of Lemma \ref{XLAlem}) \[ EX_t = 1 + \int_0^t EX_{t-s} \,dF(s). \] Solving the above renewal equation and using (\ref{LA}), we can show \begin{eqnarray}\label{mean} EX_t &=& \sum_{k=0}^\infty F^{*k}(t) = V(t) = \sum_{k=0}^\infty\frac {\lambda^k t^{3k}}{(3k)!}, \nonumber\\ EL_t &=& \sum_{k=0}^\infty\frac{\lambda^k t^{3k+1}}{(3k+1)!},\\ EA_t &=& \sum_{k=0}^\infty \frac{\lambda^k t^{3k+2}}{(3k+2)!}. \nonumber \end{eqnarray} To evaluate $V(t)$ we note that $V'''(t)\!=\!\lambda V(t)$ with $V(0)\!=\!1, V'(0)\!=\!V''(0)\!=\!0$, so \begin{equation}\label{Vtdef} V(t)=\tfrac{1}{3}[\exp(\lambda^{1/3}t) +\exp(\lambda^{1/3}\omega t) +\exp(\lambda^{1/3}\omega^2 t)]. \end{equation} Here $\omega=(-1+i\sqrt{3})/2$ is one of the complex cube roots of 1 and $\omega^2=(-1-i\sqrt{3})/2$ is the other. Note that each of $\omega$ and $\omega^2$ has real part $-1/2$. So the second and third terms in (\ref{Vtdef}) go to 0 exponentially fast. If $\mathcal{F}_s=\sigma\{X_r, L_r, A_r\dvtx r\le s\}$, then \begin{equation}\label{infgen} \frac{d}{dt} E\left.\left.\left[ \matrix{X_t \cr L_t \cr A_t} \right| \mathcal{F}_s \right] \right|_{t=s} = \pmatrix{ 0 & 0 & \lambda\cr 1 & 0 & 0 \cr 0 & 1 & 0 } \left[\matrix{X_s \cr L_s \cr A_s}\right]. \end{equation} Let $Q$\vspace*{1pt} be the matrix in (\ref{infgen}). By computing the determinant of $Q-\eta I$ it is easy to see that $Q$ has eigenvalues $\eta= \lambda^{1/3}, \omega\lambda^{1/3}, \omega^2 \lambda^{1/3}$, and $e^{-\eta t} ( X_t + \eta L_t + \eta^2 A_t )$ is a (complex) martingale. 
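Both the closed form (\ref{Vtdef}) and the spectrum of $Q$ are easy to verify numerically; the following sketch is ours, with arbitrarily chosen $\lambda$ and $t$:

```python
import numpy as np
from math import factorial

lam, t = 2.0, 1.3
r = lam ** (1 / 3)
w = (-1 + 1j * np.sqrt(3)) / 2   # complex cube root of unity

# closed form (Vtdef) against the series sum_k lam^k t^{3k}/(3k)!
V_closed = ((np.exp(r*t) + np.exp(r*w*t) + np.exp(r*w**2*t)) / 3).real
V_series = sum(lam**k * t**(3*k) / factorial(3*k) for k in range(25))
assert np.isclose(V_closed, V_series)

# Q from (infgen) has eigenvalues lam^{1/3} * {1, omega, omega^2}
Q = np.array([[0, 0, lam], [1, 0, 0], [0, 1, 0]])
ev = np.linalg.eigvals(Q)
assert np.allclose(ev**3, lam)        # each eigenvalue is a cube root of lam
assert np.isclose(max(ev.real), r)    # the leading eigenvalue is real
```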
To treat the three martingales separately, let \begin{eqnarray*} I_t &=& X_t + \lambda^{1/3}L_t + \lambda^{2/3}A_t, \qquad M_t=\exp(-\lambda^{1/3}t) I_t,\\ J_t &=& X_t + (\omega\lambda^{1/3})L_t + (\omega\lambda ^{1/3})^2A_t, \qquad \tilde J_t=\exp(-\omega\lambda^{1/3}t) J_t,\\ K_t &=& X_t + (\omega^2\lambda^{1/3})L_t + (\omega^2\lambda ^{1/3})^2A_t, \qquad \tilde K_t=\exp(-\omega^2\lambda^{1/3}t) K_t, \end{eqnarray*} so that $M_t$ is the real martingale, and $\tilde J_t$ and $\tilde K_t $ are the complex ones. \begin{theorem} \label{th1} $\{M_t\dvtx t\ge0\}$ is a positive square integrable martingale with respect to the filtration $\{ \mathcal F_t\dvtx t\ge0\}$. $EM_t=M_0=1$. \begin{eqnarray*} & EM_t^2 =\frac{8}{7}-\frac{1}{3} \exp(-\lambda^{1/3}t) + O\bigl(\exp(-5\lambda^{1/3}t/2)\bigr),& \\ & E |\tilde J_t|^2,\ E|\tilde K_t|^2 = \frac{1}{6} \exp(2\lambda ^{1/3}t) + O\bigl( \exp(\lambda^{1/3}t/2)\bigr).& \end{eqnarray*} If we let $M = \lim_{t\to\infty} M_t$, then $P(M>0)=1$ and \[ \exp(-\lambda^{1/3}t)X_t,\mbox{ } \lambda^{1/3}\exp(-\lambda^{1/3}t)L_t,\mbox{ } \lambda^{2/3}\exp(-\lambda^{1/3}t)A_t \to M/3 \] a.s. and in $L^2$. The distribution of $M$ does not depend on $\lambda$. \end{theorem} The last result follows from (\ref{scale}), which with (\ref{LA}) explains why the three quantities converge to the same limit. The key to the proof of the convergence results is to note that $1+\omega+\omega^2=0$ implies \begin{eqnarray*} 3X_t &=& I_t + J_t + K_t, \\ 3\lambda^{1/3} L_t &=& I_t + \omega^2 J_t + \omega K_t, \\ 3\lambda^{2/3} A_t &=& I_t + \omega J_t + \omega^2 K_t. \nonumber \end{eqnarray*} The real parts of $\omega$ and $\omega^2$ are $-1/2$. Although the results for $E|\tilde J_t|^2$ and $E|\tilde K_t|^2$ show that the martingales $\tilde J_t$ and $\tilde K_t$ are not $L^2$ bounded, it is easy to show that $\exp(-\lambda^{1/3}t)J_t$ and $\exp(-\lambda^{1/3}t) K_t \to0$ a.s. 
and in $L^2$, and Theorem \ref{th1} then follows from $M_t = \exp(-\lambda ^{1/3}t) I_t \to M$. Recall that $\lambda_N = N^{-\alpha}$ and let \begin{eqnarray}\label{a} a(t) &=& (1/3) N^{2\alpha/3} \exp( N^{-\alpha/3} t),\qquad l(t)=N^{-\alpha/3}a(t),\nonumber\\[-8pt]\\[-8pt] x(t) &=& N^{-2\alpha/3} a(t),\nonumber \end{eqnarray} so that $A_t/a(t), L_t/l(t), X_t/x(t) \to M$ a.s. Let \begin{equation}\label{S} S(\ep) = N^{\alpha/3}[ (2-2\alpha/3) \log N + \log(3\ep) ], \end{equation} so $a(S(\ep))=\ep N^2$. Let \begin{equation} \label{sigtau} \sigma(\ep) = \inf\{ t \dvtx A_t \ge \ep N^2 \} \quad\mbox{and}\quad \tau(\ep) = \inf\{ t \dvtx C_t \ge\ep N^2 \}. \end{equation} The first of these is easy to study. \begin{theorem} \label{th2} If $0<\ep< 1$, then as $N\to\infty$ \[ N^{-\alpha/3} \bigl(\sigma(\ep) - S(\ep)\bigr) \eqp- \log(M). \] The coupling in (\ref{couple}) implies $\tau(\ep)\ge\sigma(\ep)$. In the other direction, for any $\gamma>0$ \[ \limsup_{N\to\infty} P\bigl[ \tau(\ep) > \sigma\bigl((1+\gamma)\ep\bigr) \bigr] \le P\bigl( M \le(1+\gamma)\ep^{1/3} \bigr) + 11\frac{\ep^{1/3}} {\gamma}. \] \end{theorem} The last result implies that for $\ep<1$ \begin{equation} \label{tauLLN} \tau(\ep) \sim (2-2\alpha/3) N^{\alpha/3}\log N. \end{equation} Our next goal is to obtain more precise information about $\tau(\ep)$ and about how $|C_t|/N^2$ increases from a small positive level to reach 1. The first result in Theorem \ref{th2} shows that $(\sigma(\ep)-S(\ep))/N^{\alpha/3}$ is determined by the random variable $M$ from Theorem \ref{th1}, which in turn is determined by what happens early in the growth of the branching balloon process. Let \begin{equation} \label{R} R = N^{\alpha/3}[(2-2\alpha/3)\log N - \log(M)], \end{equation} $R$ is defined so that $a(R) = (1/3) N^2 /M$, and hence $A_R/N^2 \eqp1/3$. 
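The algebra behind $a(S(\ep))=\ep N^2$ is elementary but easy to get wrong. Here is a throwaway numerical check of the definitions in (\ref{a}) and (\ref{S}), with $\alpha$, $N$ and $\ep$ chosen arbitrarily for illustration (NumPy assumed):

```python
import numpy as np

alpha, N, eps = 0.5, 10 ** 4, 0.1

# a(t) from (a); lambda_N = N^{-alpha}, so lambda^{1/3} = N^{-alpha/3}
def a(t):
    return (1 / 3) * N ** (2 * alpha / 3) * np.exp(N ** (-alpha / 3) * t)

# S(eps) from (S)
S = N ** (alpha / 3) * ((2 - 2 * alpha / 3) * np.log(N) + np.log(3 * eps))

assert np.isclose(a(S), eps * N ** 2)                                    # a(S(eps)) = eps N^2
assert np.isclose(N ** (-alpha / 3) * a(S), eps * N ** (2 - alpha / 3))  # l(S(eps))
```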
Define \begin{equation}\label{psiWI}\qquad \psi(t)\equiv R+N^{\alpha/3}t,\qquad W\equiv\psi(\log(3\ep))\quad \mbox{and}\quad I_{\ep,t}=[\log(3\ep) , t] \end{equation} for $\log(3\ep) \le t$. $W$ is defined so that $a(W)=\ep N^2/M$ and hence $A_W/N^2 \eqp\ep$. The arguments that led to Theorem \ref{th2} will show that if $\ep$ is small then~$C_W/A_W$ is close to 1 with high probability. To get a lower bound on the growth of $C_t$ after time $W$ we declare the centers in $\mathcal{C}_W$ and $\mathcal{A}_W$ to be generation 0 in $\mathcal{C}_t$ and $\mathcal{A}_t$, respectively, and we number the succeeding generations in the obvious way: a center born from an area of generation $k$ is in generation $k+1$. For $t\ge \log(3\ep)$, let $C_{W,\psi(t)}^k$ and $A_{W,\psi(t)}^k$ denote\vadjust{\goodbreak} the areas covered at time $\psi(t)$ by respective centers of generations $j\in\{0, 1, \ldots, k\}$ and let \begin{eqnarray}\label{gfdef} g_{0}(t)&=&\ep\biggl[1+\bigl(t-\log(3\ep)\bigr)+\frac{(t-\log(3\ep))^2}{2}\biggr],\nonumber\\[-8pt]\\[-8pt] f_0(t)&=&g_0(t)-\ep^{7/6}.\nonumber \end{eqnarray} To explain these definitions, we note that Lemma \ref{B0bounds} will show that for any~$t$, there is an $\ep_0=\ep_0(t)$ so that for any $0 < \ep< \ep_0$ \begin{eqnarray*} \lim_{N\to\infty} P\Bigl(\sup_{s\in I_{\ep,t}} \bigl|N^{-2}A^0_{W,\psi(s)}-g_0(s)\bigr|>\eta\Bigr)&=&0\qquad \mbox{for any $\eta>0$},\\ P\Bigl( \inf_{s\in I_{\ep,t}} N^{-2} \bigl(C^0_{W,\psi(s)} - A^0_{W,\psi(s)}\bigr) < - \ep^{7/6} \Bigr) &\le& P( M < \ep^{1/3} ) + \ep^{1/12}. \end{eqnarray*} Since $C_{W,\psi(t)}^0 \le A_{W,\psi(t)}^0$, if $\ep$ is small, with high probability $g_0(t)$ and $f_0(t)$ provide upper and lower bounds, respectively, for $C_{W,\psi(t)}^0$. To begin to improve these bounds we let \[ f_1(t) = 1-\bigl(1-f_0(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} f_{0}(s) \,ds\biggr), \] and define $g_1$ similarly.
To explain this equation note that an $x \notin C_{W,\psi(t)}^0$ will not be in $C_{W,\psi(t)}^1$ if and only if no generation 1 center is born in the space--time cone \[ K_{x,t}^\ep\equiv\bigl\{(y,s)\in\Gamma(N)\times[W,\psi(t)]\dvtx |y-x| \le\bigl(\psi(t)-s\bigr)/\sqrt{2\pi}\bigr\}. \] Lemma \ref{f1lb} shows that for $0< \ep<\ep_0$ and $\delta>0$, \[ \limsup_{N\to\infty} P\Bigl( \inf_{s\in I_{\ep,t}} N^{-2}C^1_{W,\psi(s)} - f_1(s) < - \delta\Bigr) \le P( M < \ep^{1/3} ) + \ep^{1/12}. \] To iterate this we will let \[ f_{k+1}(t) = 1 - \bigl(1-f_{k}(t)\bigr) \exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}\bigl(f_k(s)-f_{k-1}(s)\bigr) \,ds\biggr) \] for $k\ge1$. The difference $f_k(s)-f_{k-1}(s)$ in the integral comes from the fact that a new point in generation $k+1$ must come from a point that is in generation $k$ but not in generation $k-1$. Combining these equations we have\looseness=-1 \begin{eqnarray*} && 1-f_{k+1}(t)\\ &&\qquad = \bigl(1-f_k(t)\bigr) \exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} \bigl(f_k(s)-f_{k-1}(s)\bigr) \,ds\biggr)\\ &&\qquad = \bigl(1-f_{k-1}(t)\bigr)\exp\Biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} \sum_{l=k-1}^k\bigl(f_l(s)-f_{l-1}(s)\bigr) \,ds\Biggr)\cdots\\ &&\qquad = \bigl(1-f_0(t)\bigr)\exp\Biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} \sum_{l=1}^k \bigl(f_l(s)-f_{l-1}(s)\bigr) + f_0(s) \,ds\Biggr) \end{eqnarray*}\looseness=0 so that \begin{equation} \label{fkinteq} f_{k+1}(t) = 1-\bigl(1-f_0(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} f_{k}(s) \,ds\biggr). \end{equation} Since $f_1(t) \ge f_0(t)$, letting $k\to\infty$, $f_k(t)\uparrow f_\ep(t)$, where $f_\ep$ is the unique solution of \begin{equation} \label{fepinteq} f_\ep(t)=1-\bigl(1-f_0(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} f_\ep(s) \,ds\biggr) \end{equation} with $f_\ep(\log(3\ep))=\ep-\ep^{7/6}$. $g_k(t)$ and $g_\ep(t)$ are defined similarly. 
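The iteration (\ref{fkinteq}) can be carried out numerically on a grid. The sketch below is our own illustration (NumPy assumed): it starts from $f_0$ in (\ref{gfdef}) and iterates to an approximate fixed point of (\ref{fepinteq}); as the discussion suggests, the limit is increasing and rises from about $\ep$ toward 1.

```python
import numpy as np

eps = 0.05
t0 = np.log(3 * eps)                 # left endpoint log(3 eps)
ts = np.linspace(t0, 3.0, 600)

def trap(y, x):                      # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# f_0 from (gfdef)
f0 = eps * (1 + (ts - t0) + (ts - t0) ** 2 / 2) - eps ** (7 / 6)
f = f0.copy()

for _ in range(100):                 # iterate (fkinteq) until it stabilizes
    I = np.array([trap((t - ts[: i + 1]) ** 2 / 2 * f[: i + 1], ts[: i + 1])
                  for i, t in enumerate(ts)])
    f_new = 1 - (1 - f0) * np.exp(-I)
    if np.max(np.abs(f_new - f)) < 1e-12:
        f = f_new
        break
    f = f_new

assert np.all(np.diff(f) > 0)        # the approximate f_eps is increasing
assert abs(f[0] - (eps - eps ** (7 / 6))) < 1e-12   # boundary value eps - eps^{7/6}
assert f[-1] > 0.9                   # rises toward 1 on the right
```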
$g_\ep(t)$ and $f_\ep(t)$ provide upper and lower bounds on the growth of $C_{\psi(t)}$ for $t \ge\log(3\ep)$. To close the gap between these bounds we let $\ep\to0$. \begin{lemma}\label{h} For any $t<\infty$, if $I_{\ep,t}=[\log(3\ep),t]$, then as $\ep\to 0$, \[ \sup_{s\in I_{\ep,t}} |f_\ep(s)-h(s)|\mbox{, } \sup_{s\in I_{\ep,t}} |g_\ep(s)-h(s)| \to0 \] for some nondecreasing $h$ with \textup{(a)} $\lim_{t\to-\infty} h(t) = 0$, \textup{(b)} $\lim_{t\to\infty} h(t) = 1$, {\renewcommand{\theequation}{c} \begin{equation} h(t) = 1-\exp\biggl(-\int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds\biggr) \end{equation} } \vspace*{6pt} \vspace*{-\baselineskip} \noindent and \textup{(d)} $0 < h(t) < 1$ for all $t$. \end{lemma} If one removes the 2 from inside the exponential, this is equation (36) in \citet{Ald07}. Since there is no initial condition, the solution is only unique up to time translation. \begin{theorem}\label{th3} Let $h$ be the function in Lemma \ref{h}. For any $t<\infty$ and $\delta>0$, \[ \lim_{N\to\infty} P\Bigl(\sup_{s\le t} \bigl|N^{-2}C_{\psi(s)}-h(s)\bigr| \le \delta\Bigr)=1. \] \end{theorem} This result shows that the displacement of $\tau(\ep)$ from $(2-2\alpha/3)N^{\alpha/3} \log N$ on the scale $N^{\alpha/3}$ is dictated by the random variable $M$ that gives the rate of growth of the branching balloon process, and that once $C_t$ reaches $\ep N^2$, the growth is deterministic. The solution $h(t)$ never reaches 1, so we need a little more work to show that \begin{theorem} \label{th4} Let $T_N$ be the first time the torus is covered. As $N\to\infty$ \[ T_N / (N^{\alpha/3} \log N) \eqp2-2\alpha/3.\vadjust{\goodbreak} \] \end{theorem} The remainder of the paper is organized as follows. In Section \ref{sec2}, we prove the properties of $\mathcal{A}_t$ presented in Theorem \ref{th1}. In Section \ref{sec3}, we prove the properties of the hitting times $\sigma(\ep)$ and $\tau(\ep)$ stated in Theorem \ref{th2}.
In Section \ref{sec4}, we prove the limiting behavior of $\mathcal{C}_t$ mentioned in Theorem \ref{th3}. Finally, in Section \ref{sec5}, we prove Theorem \ref{th4}. \section{Properties of the balloon branching process $\mathcal{A}_t$}\label{sec2} \begin{lemma}\label{conv} $\intt s^m(t-s)^n \,ds=\frac{m!n!}{(m+n+1)!}t^{m+n+1}$. \end{lemma} \begin{pf} If you can remember the definition of the beta distribution, this is trivial. If you cannot, then integrate by parts and use induction. \end{pf} Let $F(t)=\lambda t^3/3!$ for $t \ge0$, and $F(t)=0$ for $t<0$. Let $V(t)=\sum_{k=0}^\infty F^{*k}(t)$, where $*k$ indicates the $k$-fold convolution. \begin{lemma}\label{V} If $\omega=(-1+i\sqrt{3})/2$, then \[ V(t)=\sum_{k=0}^\infty\frac{\lambda^k t^{3k}}{(3k)!} =\frac{1}{3}[\exp(\lambda^{1/3}t)+\exp (\lambda^{1/3}\omega t)+\exp(\lambda^{1/3}\omega^2 t)]. \] \end{lemma} \begin{pf} We first use induction to show that \begin{equation}\label{Fconv} F^{*k}(t)= \cases{ \lambda^kt^{3k}/(3k)!, &\quad$t\ge0$,\cr 0, &\quad$t<0$.} \end{equation} This holds for $k=0, 1$ by our assumption. If the equality holds for $k=n$, then using Lemma \ref{conv} we have for $t \ge0$ \[ F^{*(n+1)}(t)=\int_0^t F^{*n}(t-s) \,dF(s) =\int_0^t \frac{\lambda^n(t-s)^{3n}}{(3n)!}\frac{\lambda s^2}{2} \,ds =\frac{\lambda^{n+1}t^{3n+3}}{(3n+3)!}. \] It follows by induction that $V(t)=\sum_{k=0}^\infty\lambda ^kt^{3k}/(3k)!$. To evaluate the sum we note that setting $\lambda=1$, $U(t)=\sum_{k=0}^\infty t^{3k}/(3k)!$ solves \[ U'''(t)=U(t) \qquad\mbox{with $U(0)=1$ and $U'(0)=U''(0)=0$.} \] This differential equation has solutions of the form $e^{\gamma t}$, where $\gamma^3=1$, that is, $\gamma=1, \omega$ and $\omega^2$. This leads to the general solution \[ U(t)=Ae^t+Be^{\omega t}+Ce^{\omega^2 t} \] for some constants $A, B, C$. Using the initial conditions for $U(t)$ we have \[ A+B+C=1,\qquad A+B\omega+C\omega^2=0,\qquad A+B\omega^2+C\omega=0. \] Since $1+ \omega+ \omega^2=0$, we have $A=B=C=1/3$.
Since $V(t) = U(\lambda^{1/3}t)$, we have proved the desired result. \end{pf} Our next step is to compute the first two moments of $X_t, L_t$ and $A_t$. For that we need the following lemma in addition to the previous one. \begin{lemma}\label{renewaleq} Let $\{N_t\dvtx t\ge0\}$ be a Poisson process on $[0,\infty)$ with intensity $\lambda(\cdot)$ and let $\Pi_t$ be the set of points at time $t$. If $\{Y_t,Z_t\dvtx t\ge0\}$ are two complex-valued stochastic processes satisfying \[ Y_t=y(t)+\sum_{s_i\in\Pi_t} Y^i_{t-s_i},\qquad Z_t=z(t)+\sum _{s_i\in\Pi_t} Z^i_{t-s_i}, \] where $(Y^i, Z^i)$, $i=1, 2, \ldots,$ are i.i.d. copies of $(Y,Z)$, and independent of~$N$, then \begin{eqnarray*} EY_t &=&y(t)+\intt EY_{t-s}\lambda(s) \,ds, \\ E(Y_tZ_t) &=&(EY_t)(EZ_t)+\intt E(Y_{t-s}Z_{t-s})\lambda(s) \,ds. \end{eqnarray*} \end{lemma} \begin{pf} $N_t$ has a Poisson distribution with mean $\Lambda_t=\intt\lambda(s) \,ds$. Given $N_t=n$, the conditional distribution of $\Pi_t$ is the same as the distribution of $\{t_1, \ld, t_n\}$, where $t_1, \ldots, t_n$ are i.i.d. from $[0,t]$ with density $\beta(\cdot)=\lambda(\cdot)/\Lambda_t$. Hence \[ E(Y_t|N_t)=y(t)+\sum_{i=1}^{N_t} EY^i_{t-t_i} =y(t)+N_t\int_0^t EY_{t-s} \beta(s) \,ds, \] and taking expected values $EY_t=y(t)+\intt EY_{t-s}\lambda(s) \,ds$. Similarly $EZ_t=z(t)+\intt EZ_{t-s}\lambda(s) \,ds$. Using the conditional distribution of $\Pi_t$ given $N_t$, \begin{eqnarray*} E(Y_tZ_t|N_t)&=&y(t)z(t) + y(t) E\sum_{i=1}^{N_t} Z^i_{t-t_{i}} +z(t) E\sum _{i=1}^{N_t} Y^i_{t-t_{i}}\\[-2pt] &&{}+E\Biggl[\sum_{i=1}^{N_t} Y^i_{t-t_i}Z^i_{t-t_i}+\sum_{i\ne j}Y^i_{t-t_i}Z^j_{t-t_j}\Biggr]\\[-2pt] &=&y(t)z(t)+y(t)N_t\int_0^t EZ_{t-s} \beta(s) \,ds\\[-2pt] &&{} + z(t)N_t\int _0^t EY_{t-s} \beta(s) \,ds+N_t\int_0^t E(Y_{t-s}Z_{t-s}) \beta(s) \,ds\\[-2pt] &&{}+N_t(N_t-1)\int_0^t EY_{t-s} \beta(s) \,ds \int_0^t EZ_{t-s}\beta (s) \,ds.
\end{eqnarray*} Taking expectation on both sides and using $EN_t(N_t-1)=\Lambda^2_t$, we get \[ E(Y_tZ_t) = (EY_t)(EZ_t)+\intt E(Y_{t-s}Z_{t-s})\lambda(s) \,ds, \] which completes the proof.\vadjust{\goodbreak} \end{pf} Now we use Lemmas \ref{V} and \ref{renewaleq} to compute the first moments. \begin{lemma} $E(X_t, L_t,A_t) = (V(t),V''(t)/\lambda, V'(t)/\lambda)$. \label{XLAlem} \end{lemma} \begin{pf} Recall that $F(t)=\lambda t^3/3!$. In the balloon branching process, the initial center $x$ gives birth to new centers at rate $F'(t) = \lambda t^2/2$, and all the centers behave independently and with the same distribution as the one at~$x$. So \begin{equation} \label{Xbreakup} X_t=1+\sum_{s_i \in\Pi_t} X^i_{t-s_i}, \end{equation} where $\Pi_t \subset[0,t]$ is the set of times when new centers are born in $\mathcal{A}_t$ and~$X^i$, $i=1, 2, \ldots,$ are i.i.d. copies of $X$, and using Lemma \ref{renewaleq}, \[ EX_t =1 + \int_0^t EX_{t-s} \,dF(s). \] Using (4.5) from Chapter 3 of \citet{Dur10} and then (\ref{LA}): \begin{eqnarray}\label{meanXLA} EX_t &=& V(t) = \sum_{k=0}^\infty\frac{\lambda^k t^{3k}}{(3k)!}, \nonumber\\ EL_t &=&\int_0^t EX_s \,ds =\sum_{k=0}^\infty\frac{\lambda^k t^{3k+1}}{(3k+1)!}, \\ EA_t &=& \int_0^t EL_s \,ds =\sum_{k=0}^\infty\frac{\lambda^k t^{3k+2}}{(3k+2)!}. \nonumber \end{eqnarray} Since $V(t) = 1 + \sum_{k=0}^\infty\lambda^{k+1} t^{3k+3}/(3k+3)!$, it is easy to see that $EA_t=V'(t)/\lambda$ and $EL_t=V''(t)/\lambda$. \end{pf} \begin{lemma}\label{mart} If $M_t =\exp(-\lambda^{1/3}t)[X_t+\lambda^{1/3}L_t+\lambda^{2/3}A_t]$, then $\{M_t\dvtx t\ge0\}$ is a square integrable martingale with respect to the filtration $\{ \mathcal F_t\dvtx t\ge0\}$. $EM_t=1$ and \[ EM_t^2=\tfrac{8}{7}-\tfrac{1}{3} \exp(-\lambda^{1/3}t) + \theta_t \qquad\mbox{where } |\theta_t| \le\tfrac{4}{15} \exp(-5\lambda^{1/3}t/2) \] and hence $(8/7) - EM_t^2 \le\exp(-\lambda^{1/3}t)$.
\end{lemma} \begin{pf} Let $h(t,x,\ell,a) = \exp(-\lambda^{1/3}t)[x+\lambda ^{1/3}\ell+\lambda^{2/3}a]$, and let $\mathcal{L}$ be the generator of the Markov process $(t,X_t,L_t,A_t)$. Equation (\ref{infgen}) implies $\mathcal{L}h=0$, so $M_t$ is a martingale from Dynkin's formula. $EM_t=EM_0=1$. To compute $EM_t^2$ we use Lemma \ref{renewaleq} as follows. Let $Y_t=Z_t=X_t+\lambda^{1/3}L_t+\lambda^{2/3}A_t$ and $g(t)\equiv (EY_t)^2$. Since $EM_t=1$, $g(t)=\exp(2\lambda^{1/3}t)$. Combining (\ref{LA})\vadjust{\goodbreak} and (\ref{Xbreakup}), letting $L_t^i = \int_0^t X_s^i \,ds$ and $A_t^i = \int_0^t L_s^i \,ds, i=1, 2, \ldots,$ and changing the variables $u=s-s_i$, we see that \[ L_t=\int_0^t \biggl[1 + \sum_{s_i \in\Pi_s} X_{s-s_i}^i \biggr] \,ds = t + \sum_{s_i \in\Pi_t} \int_0^{t-s_i}X_u^i \,du = t + \sum_{s_i \in\Pi_t} L_{t-s_i}^i \] and hence \[ A_t=\int_0^t \biggl[t + \sum_{s_i \in\Pi_s} L_{s-s_i}^i \biggr] \,ds = t^2/2 + \sum_{s_i \in\Pi_t} \int_0^{t-s_i}L_u^i \,du = t^2/2 + \sum_{s_i \in\Pi_t} A_{t-s_i}^i. \] Thus all of $X_t, L_t$ and $A_t$ satisfy the hypothesis of Lemma \ref{renewaleq} and so do~$Y_t$ and~$Z_t$, as they are linear combinations of $X_t, L_t$ and $A_t$. So applying Lem\-ma~\ref{renewaleq} \[ EY_t^2=g(t)+\intt EY_{t-s}^2 \,dF(s). \] Solving the renewal equation using (4.8) in Chapter 3 of \citet{Dur10}, \[ EY_t^2=g*V(t)=\exp(2\lambda^{1/3}t)+\intt \exp\bigl(2\lambda^{1/3}(t-s)\bigr) V'(s) \,ds, \] where $V=\sum_{k=0}^\infty F^{*k}$. 
To evaluate the integral we use Lemma \ref{V} to conclude \begin{eqnarray*} &&\int_0^t \exp(-2\lambda^{1/3}s) V'(s) \,ds \\ &&\qquad=\frac{1}{3} \int_0^t \exp(-2\lambda^{1/3}s) \\ &&\qquad\quad\hspace*{21pt}{}\times\lambda^{1/3}[\exp(\lambda^{1/3}s)+\omega \exp(\lambda^{1/3}\omega s) +\omega^2 \exp(\lambda^{1/3}\omega^2 s)] \,ds \\ &&\qquad=\frac{1}{3}\biggl[ \frac{1}{1-2}\{ \exp(-\lambda^{1/3}t)-1\} +\frac{\omega}{\omega-2}\bigl\{\exp\bigl((\omega-2)\lambda ^{1/3} t\bigr)-1\bigr\}\\ &&\qquad\quad\hspace*{112.1pt}{} +\frac{\omega^2}{\omega^2-2}\bigl\{\exp\bigl((\omega ^2-2)\lambda^{1/3} t\bigr)-1\bigr\}\biggr]. \end{eqnarray*} Now using $1= -\omega-\omega^2$ and $\omega^3=1$, \[ 1-\frac{\omega}{\omega-2}-\frac{\omega^2}{\omega^2-2} =1-\frac{\omega^3 - 2\omega+\omega^3 -2\omega^2}{\omega^3-2\omega-2\omega^2+4} =1-\frac{4}{7}=\frac{3}{7}. \] Since $\omega= (-1+i\sqrt{3})/2$ and $\omega^2 = (-1-i\sqrt{3})/2$, the remaining error satisfies \begin{eqnarray*} 3|\theta_t| &=& \biggl|\frac{\omega}{\omega-2}\exp\bigl((\omega-2)\lambda^{1/3} t\bigr)\biggr| + \biggl| \frac{\omega^2}{\omega^2-2}\exp\bigl((\omega ^2-2)\lambda^{1/3} t\bigr) \biggr| \\ &=& \biggl( \frac{1}{|\omega-2|} + \frac{1}{|\omega^2-2|} \biggr) \exp(-5\lambda^{1/3}t/2) \le2\cdot\frac{2}{5} \exp(-5\lambda^{1/3}t/2), \end{eqnarray*} since $\omega-2$ and $\omega^2-2$ each have real part $-5/2$. Putting it all together \begin{equation}\label{intbd} \int_0^t \exp(-2\lambda^{1/3}s) V'(s) \,ds =\frac{1}{7} - \frac{1}{3} \exp(-\lambda^{1/3}t)+ \theta_t. \end{equation} Since $EM_t^2=\exp(-2\lambda^{1/3}t)EY_t^2$, the desired result follows. \end{pf} We use the previous calculation to get bounds for $EA_t^2, EL_t^2$ and $EX_t^2$, which will be useful later. \begin{lemma}\label{sqbound} Let $a(\cdot), l(\cdot)$ and $x(\cdot)$ be as in (\ref{a}). Then \[ EA_t^2 \le\tfrac{27}{2} a^2(t),\qquad EL_t^2 \le\tfrac{27}{2} l^2(t),\qquad EX_t^2 \le\tfrac{27}{2} x^2(t).
\] \end{lemma} \begin{pf} By (\ref{intbd}) we have \begin{equation}\label{intbd1} \int_0^t\exp(-2\lambda^{1/3}s) V'(s) \,ds \le\frac{1}{7} + \frac{4}{15} = \frac{43}{105} \le\frac{1}{2}. \end{equation} Now using Lemma \ref{renewaleq} \begin{eqnarray*} EA_t^2&=&(EA_t)^2+\intt EA_{t-s}^2 \,dF(s),\qquad EL_t^2=(EL_t)^2+\intt EL_{t-s}^2 \,dF(s),\\ EX_t^2&=&(EX_t)^2+\intt EX_{t-s}^2 \,dF(s). \end{eqnarray*} Solving the renewal equations gives $EA_t^2=\phi_a*V(t)$, $EL_t^2=\phi_l*V(t)$ and $EX_t^2=\phi_x*V(t)$, where $V(\cdot)$ is as in Lemma \ref{V} and $\phi_a(t)=(EA_t)^2, \phi_l(t)=(EL_t)^2$ and $\phi _x(t)=(EX_t)^2$. A crude upper bound for $\phi_a(t)$ is $9a^2(t)$. Since $a(t-s)=a(t)\exp(-\lambda^{1/3}s)$, \begin{equation} \label{a2bd} a^2*V(t)=a^2(t)\biggl[1+\intt\exp(-2\lambda^{1/3} s) V'(s) \,ds\biggr] \le\frac{3a^2(t)}{2} \end{equation} by (\ref{intbd1}). Hence $EA_t^2\le9a^2*V(t)\le(27/2)a^2(t)$. Similarly using the bounds $9l^2(t)$ and $9x^2(t)$ for $\phi_l(t)$ and $\phi_x(t)$, respectively, and noting that $l(t-s)/l(t)=x(t-s)/x(t)=\exp(-\lambda^{1/3} s)$, we get the desired bounds for $EL_t^2$ and $EX_t^2$. \end{pf} \begin{lemma} \label{JKbds} Let $\tilde J_t, \tilde K_t = e^{-\eta t}(X_t + \eta L_t + \eta^2 A_t)$ with $\eta= \omega\lambda^{1/3}$, $\omega^2\lambda^{1/3}$, respectively. Then $\tilde J_t$ and $\tilde K_t$ are complex martingales with respect to the filtration $\mathcal{F}_t$, and \[ E|\tilde J_t|^2, E|\tilde K_t|^2 = \tfrac{1}{6} \exp(2\lambda^{1/3}t)+\tfrac{1}{2} +\theta_t\qquad \mbox{where } |\theta_t|\le \tfrac23 \exp(\lambda^{1/3}t/2), \] and hence $E|\tilde J_t|^2, E|\tilde K_t|^2 \le(4/3) \exp (2\lambda^{1/3}t)$. \end{lemma} \begin{pf} Let $h(t,x,\ell,a)=e^{-\eta t}(x+\eta \ell+\eta^2a)$, and let $\mathcal L$ be the generator of the Markov process $(t,X_t,L_t,A_t)$.
Equation (\ref{infgen}) implies $\mathcal{L}h=0$ when $\eta=\lambda^{1/3}\omega, \lambda^{1/3}\omega^2$, so that $\tilde J_t$ and $\tilde K_t$ are complex martingales by Dynkin's formula.\vspace*{1pt} First we compute $E|J_t|^2$, where $J_t=\exp(\lambda^{1/3}\omega t) \tilde J_t$. For that we use Lemma \ref{renewaleq} with $Y_t=J_t$ and $Z_t=\bar J_t$, the complex conjugate. Since $\tilde J_t$ is a~complex martingale with $\tilde J_0=1$ and $\omega= (-1+i\sqrt{3})/2$, $E\tilde J_t=1$ and hence \[ |EJ_t|^2 = \exp(-\lambda^{1/3} t). \] Using Lemma \ref{renewaleq} $E|J_t|^2=|EJ_t|^2 + \intt E|J_{t-s}|^2 \,dF(s)$. Solving the renewal equation as we have done twice before \[ E|J_t|^2 = \exp(-\lambda^{1/3} t) + \int_0^t \exp\bigl(-\lambda^{1/3}(t-s)\bigr) V'(s) \,ds. \] Repeating the first part of the proof for $K_t=\exp(\lambda^{1/3}\omega^2 t) \tilde K_t$, we see that $E|K_t|^2$ is also equal to the right-hand side above. The integral is $\exp(-\lambda^{1/3}t)$ times \begin{eqnarray*} &&\frac{1}{3} \int_0^t \exp(\lambda^{1/3}s) \cdot \lambda^{1/3}[\exp(\lambda^{1/3}s)+\omega \exp(\lambda^{1/3}\omega s) +\omega^2 \exp(\lambda^{1/3}\omega^2 s)] \,ds \\ &&\qquad=\frac{1}{3}\biggl[ \frac{1}{1+1}\{ \exp(2\lambda^{1/3}t)-1\} +\frac{\omega}{\omega+1}\bigl\{\exp\bigl((\omega+1)\lambda ^{1/3} t\bigr)-1\bigr\}\\ &&\qquad\quad\hspace*{108.7pt}{} +\frac{\omega^2}{\omega^2+1}\bigl\{\exp\bigl((\omega ^2+1)\lambda^{1/3} t\bigr)-1\bigr\}\biggr]. \end{eqnarray*} Now using $1= -\omega-\omega^2$ and $\omega^3=1$, \[ -\frac12-\frac{\omega}{\omega+1}-\frac{\omega^2}{\omega^2+1} =-\frac 12-\frac{\omega^3+\omega+\omega^3+\omega^2}{\omega^3+\omega ^2+\omega+1} =-\frac32. 
\] Since $\omega= (-1+i\sqrt{3})/2$ and $\omega^2 = (-1-i\sqrt{3})/2$, if we take \[ \theta_t = \frac13\biggl[\frac{\omega}{\omega+1}\exp \bigl((\omega+1)\lambda^{1/3} t\bigr) + \frac{\omega^2}{\omega^2+1}\exp\bigl((\omega^2+1)\lambda^{1/3} t\bigr) \biggr], \] then \[ 3|\theta_t| \le\biggl( \frac{1}{|\omega+1|} + \frac{1}{|\omega^2+1|} \biggr) \exp(\lambda^{1/3}t/2) \le 2 \exp(\lambda^{1/3}t/2), \] since each of $\omega+1$ and $\omega^2+1$ has real part $1/2$. Putting it all together \begin{equation}\label{Jbd} E|J_t|^2\le\tfrac16 \exp(\lambda^{1/3}t) + \tfrac12 \exp(-\lambda^{1/3}t) + \tfrac23 \exp(-\lambda^{1/3}t/2), \end{equation} which completes the proof, since $E|\tilde J_t|^2/E|J_t|^2=\exp(\lambda^{1/3} t)=E|\tilde K_t|^2/E|K_t|^2$. \end{pf} \begin{lemma} If $M = \lim_{t\to\infty} M_t$, we have $P(M>0)=1$ and \[ \exp(-\lambda^{1/3} t) X_t\mbox{, } \lambda^{1/3}\exp(-\lambda^{1/3} t) L_t\mbox{, } \lambda^{2/3}\exp(-\lambda^{1/3} t) A_t \to\frac{M}{3} \] a.s. and in $L^2$. \end{lemma} \begin{pf} $M = \lim_{t\to\infty} M_t$ exists a.s. and in $L^2$, since $M_t$ is an $L^2$ bounded martingale. Recall that \begin{eqnarray*} I_t &=& X_t + \lambda^{1/3} L_t + \lambda^{2/3} A_t,\\ J_t &=& X_t + \omega\lambda^{1/3} L_t + \omega^2 \lambda^{2/3} A_t,\\ K_t &=& X_t + \omega^2 \lambda^{1/3} L_t + \omega\lambda^{2/3} A_t. \end{eqnarray*} Since $1+\omega+\omega^2=0$ and $\omega^3=1$, \begin{eqnarray}\label{lincomb} 3X_t &=& I_t + J_t + K_t, \nonumber\\ 3\lambda^{1/3} L_t &=& I_t + \omega^2 J_t + \omega K_t, \\ 3\lambda^{2/3} A_t &=& I_t + \omega J_t + \omega^2 K_t. \nonumber \end{eqnarray} Since $M_t= \exp(-\lambda^{1/3} t) I_t \to M$, it suffices to show that $\exp(-\lambda^{1/3} t) J_t$ and $\exp(-\lambda^{1/3} t) K_t$ go to 0 a.s. and in $L^2$. We will only prove this for $J_t$, since the argument for $K_t$ is almost identical. $\tilde J_t$ is a complex martingale, so $|\tilde J_t|$ is a real submartingale.
Using the $L^2$ maximal inequality, (4.3) in Chapter~4 of \citet{Dur10} and Lemma \ref{JKbds}, \begin{equation} \label{L2max} E\Bigl( \max_{0\le s \le t} |\tilde J_s|^2 \Bigr) \le4 E|\tilde J_t|^2 \le \frac{16}{3}\exp(2\lambda^{1/3}t). \end{equation} The real part of $\omega$ is $-1/2$. So writing $\tilde J_s=\exp(\lambda^{1/3}(1-\omega)s) \cdot\exp(-\lambda^{1/3}s)J_s$, we see that \begin{equation} \label{hammer} E\Bigl( \max_{u\le s \le t} |\tilde J_s|^2 \Bigr) \ge\exp(3\lambda^{1/3}u) E\Bigl( {\max_{u \le s \le t}} |{\exp}(-\lambda^{1/3}s)J_s|^2 \Bigr). \end{equation} Combining these bounds with Chebyshev inequality, and taking $t_n=\break2\lambda^{-1/3}\log n$ for $n=1, 2, \ldots$ \begin{eqnarray}\label{supJbd}\qquad P\Bigl( {\max_{t_n \le s \le t_{n+1}}} |{\exp}(-\lambda^{1/3}s)J_s|^2 \ge\ep\Bigr) &\le&\ep^{-2} E \Bigl( {\max_{t_n \le s \le t_{n+1}}} |{\exp}(-\lambda^{1/3}s)J_s|^2 \Bigr) \nonumber \\ &\le&\frac{16}{3} \ep^{-2} \exp\bigl(\lambda ^{1/3}(2t_{n+1}-3t_n)\bigr)\\ &=& \frac{16}{3}\ep^{-2} \frac{(n+1)^4}{n^6}\nonumber \end{eqnarray} for any $\ep>0$. Summing over $n$, and using the Borel--Cantelli lemma \[ |{\exp}(-\lambda^{1/3}s)J_s| \to0 \qquad\mbox{a.s.} \] To get convergence in $L^2$ we use (\ref{Jbd}). \[ E|{\exp}(-\lambda^{1/3}t)J_t|^2 \le \tfrac{4}{3}\exp(-\lambda^{1/3} t) \to0 \qquad\mbox{as } t\to\infty. \] To prove that $P(M>0)=1$ we begin by noting that convergence in $L^2$ implies that $P(M>0)>0$. Every time a new balloon is born it has positive probability of starting a process with a positive limit, so this will happen eventually and $P(M>0)=1$. \end{pf} \section{Hitting times for $\mathcal{A}_t$ and $\mathcal{C}_t$}\label{sec3} Recall that $\sigma(\ep) = \inf\{ t \dvtx A_t \ge\ep N^2 \}$ and $\tau(\ep)=\inf\{t\dvtx C_t\ge\ep N^2\}$. Also recall the definitions of $a(\cdot), l(\cdot), x(\cdot)$ and $S(\cdot)$ from (\ref{a}) and (\ref{S}). Note that $a(S(\ep)) = \ep N^2$ and $A_t/a(t), L_t/l(t), X_t/\allowbreak x(t) \to M$ a.s. 
by Theorem \ref{th1}. We begin by estimating the difference between~$M$ and each of $A_t/a(t), L_t/l(t)$ and $X_t/x(t)$. \begin{lemma}\label{supbound} For any $\gamma, u>0$ \[ P\Bigl({\sup_{t\ge u}} |A_t/a(t)-M| \ge\gamma^2\Bigr) \le C\gamma^{-4}\exp(-\lambda^{1/3} u) \] for some constant $C$. The same bound holds for $ P({\sup_{t\ge u}} |L_t/l(t)-M| \ge\gamma^2)$ and $P({\sup_{t\ge u}} |X_t/x(t)-M| \ge\gamma^2)$. \end{lemma} \begin{pf} Using (\ref{lincomb}) $A_t/a(t)=M_t+\omega \exp(-\lambda^{1/3}t) J_t+\omega^2 \exp(-\lambda^{1/3}t) K_t$. For $0<u\le t$ the triangle inequality implies \begin{equation}\label{bd1} \quad|A_t/a(t) - M| \le|M_t-M| + |{\exp}(-\lambda^{1/3}t) J_t| + |{\exp}(-\lambda^{1/3}t) K_t|. \end{equation} Taking the supremum over $t$, \begin{eqnarray}\label{supbd} &&P\Bigl({\sup_{t\ge u} }|A_t/a(t)-M| \ge\gamma^2\Bigr)\nonumber\\ &&\qquad \le P\Bigl({\sup_{t\ge u} }|M_t-M| \ge\gamma^2/3\Bigr) + P\Bigl({\sup_{t\ge u} }|{\exp}(-\lambda^{1/3}t) J_t| \ge\gamma^2/3\Bigr)\\ &&\qquad\quad{} + P\Bigl({\sup_{t\ge u}} |{\exp}(-\lambda^{1/3}t) K_t| \ge\gamma^2/3\Bigr).\nonumber \end{eqnarray} To bound the first term in the right-hand side of (\ref{supbd}) we note that \[ E\Bigl({\sup_{t\ge u}}|M_t-M|^2\Bigr) = \lim_{U\to\infty} E\Bigl({\max_{u\le t\le U}} |M_t-M|^2\Bigr). \] Using triangle inequality $|M_t-M|\le|M_t-M_u|+|M_u-M|$. Taking supremum over $t\in[u,U]$ and using the inequality $(a+b)^2\le2(a^2+b^2)$, \[ E\Bigl({\max_{u\le t\le U}} |M_t-M|^2\Bigr) \le 2\Bigl(E\Bigl({\max_{u\le t\le U}} |M_t-M_u|^2 \Bigr)+E|M_u-M|^2\Bigr). \] Using the $L^2$ maximal inequality, (4.3) in Chapter 4 of \citet{Dur10} and orthogonality of martingale increments, \[ E\Bigl({\max_{u\le t\le U} }|M_t-M_u|^2\Bigr) \le 4E(M_U-M_u)^2=4(EM_U^2-EM_u^2). \] Since the martingale $M_t$ converges to $M$ in $L^2$, $EM^2=\lim_{t\to\infty} EM_t^2=8/7$. Then using orthogonality of martingale increments and Lemma \ref{mart}, \[ E(M_u-M)^2 = EM^2 - EM_u^2 \le\exp(-\lambda^{1/3}u). 
\] Combining the last four bounds with Lemma \ref{mart}, and using Chebyshev inequality \begin{equation}\label{supbd1} P\Bigl({\sup_{t\ge u}} |M_t-M| \ge\gamma^2/3\Bigr) \le 9\gamma^{-4}\cdot10\exp(-\lambda^{1/3} u). \end{equation} To bound the second term in the right-hand side of (\ref{supbd}) we take $t_n=u+2\lambda^{-1/3}\log n$ for $n=1, 2, \ldots$ and use an argument similar to the one leading to (\ref{supJbd}) together with Chebyshev inequality to get \begin{eqnarray} \label{supbd2} P\Bigl({\sup_{t\ge u}} |{\exp}(-\lambda^{1/3}t) J_t| \ge \gamma ^2/3\Bigr) &\le&\sum_{n=1}^\infty P\Bigl({\max_{t_n\le t\le t_{n+1}}} |{\exp}(-\lambda^{1/3}t) J_t| \ge\gamma ^2/3\Bigr) \nonumber\\ &\le& 9\gamma^{-4} \sum_{n=1}^\infty E\Bigl({\max_{t_n\le t\le t_{n+1}}} |{\exp}(-\lambda^{1/3}t) J_t|\Bigr)^2 \nonumber\\[-8pt]\\[-8pt] &\le& 9 \cdot\frac{16}{3}\gamma^{-4} \sum_{n=1}^\infty \exp\bigl(\lambda^{1/3}(2t_{n+1}-3t_n)\bigr) \nonumber\\ &=& 48 \gamma^{-4} \exp(-\lambda^{1/3}u) \sum_{n=1}^\infty\frac{(n+1)^4}{n^6}.\nonumber \end{eqnarray} Repeating the previous argument for the third term in the right-hand side of (\ref{supbd}) we get the same upper bound as in (\ref{supbd2}). Combining (\ref{supbd}), (\ref{supbd1}) and~(\ref{supbd2}) we get the desired bound for $A_t/a(t)$. The bound in (\ref{bd1}) also works for both $L_t/l(t)$ and $X_t/x(t)$, since using~(\ref{lincomb}) \begin{eqnarray*} L_t/l(t) &=& M_t+\omega^2\exp(-\lambda^{1/3}t)J_t + \omega\exp (-\lambda^{1/3}t)K_t,\\ X_t/x(t) &=& M_t+ \exp(-\lambda^{1/3}t)J_t+ \exp(-\lambda^{1/3}t)K_t, \end{eqnarray*} and so the assertion of this lemma holds if $A_t/a(t)$ is replaced by $L_t/l(t)$ or~$X_t/x(t)$. \end{pf} We now use Lemma \ref{supbound} to study the limiting behavior of $\sigma(\ep)$. \begin{lemma}\label{ALXbd} Let $W_\ep=S(\ep/M)$, where $S(\cdot)$ is as in (\ref{S}) and $M$ is the limit random variable in Theorem \ref{th1}.
Then for any $\eta>0$ \begin{eqnarray*} \lim_{N\to\infty} P(|A_{W_\ep}-\ep N^2|>\eta N^2) &=& \lim_{N\to\infty} P(|L_{W_\ep}-\ep N^{2-\alpha/3}|>\eta N^{2-\alpha/3})\\ &=& \lim_{N\to\infty} P(|X_{W_\ep}-\ep N^{2-2\alpha/3}|>\eta N^{2-2\alpha/3})\\ &=&0. \end{eqnarray*} \end{lemma} \begin{pf} Since $P(M>0)=1$, given $\theta>0$, we can choose $\gamma=\gamma (\theta)>0$ so that $\gamma<\eta/\ep$ and \begin{equation}\label{Mnot0} P(M<\gamma)<\theta. \end{equation} Using Lemma \ref{supbound} we can choose a constant $b=b(\gamma,\theta)$ such that \[ P\Bigl( {\sup_{t\ge b N^{\alpha/3}}} |A_t/a(t) - M | > \gamma^2 \Bigr) < \theta. \] Combining with (\ref{Mnot0}) \[ P\Bigl( {\sup_{t\ge bN^{\alpha/3}}} |A_t/a(t) - M | > \gamma M \Bigr) < 2\theta. \] Since $a(W_\ep)=\ep N^2/M$, by the choices of $\gamma$ and $b$, \begin{eqnarray*} P( |A_{W_\ep} - \ep N^2| \ge\eta N^2) &\le& P( |A_{W_\ep} - \ep N^2| \ge\ep\gamma N^2) \\ &=& P\bigl(|A_{W_\ep}/a(W_\ep)-M|\ge\gamma M\bigr) \\ &<& 2\theta +P(W_\ep<bN^{\alpha/3}). \end{eqnarray*} By the definition of $S(\cdot)$, \[ P( W_\ep< b N^{\alpha/3}) = P\biggl(M > \frac{3\ep}{b} N^{2-2\alpha/3}\biggr) \to0 \] as $N\to\infty$, and so $\limsup_{N\to\infty} P(|A_{W_\ep }-\ep N^2|>\eta N^2) \le2\theta$. Since $\theta>0$ is arbitrary, we have shown that \[ \lim_{N\to\infty} P( |A_{W_\ep}-\ep N^2| \ge\eta N^2 ) = 0. \] Repeating\vspace*{1pt} the argument for $L_{W_\ep}$ and $X_{W_\ep}$, and noting that $l(W_\ep)=\break\ep N^{2-\alpha/3}/M$ and $x(W_\ep)=\ep N^{2-2\alpha/3}/M$, we get the other two assertions.~ \end{pf} As a corollary of Lemma \ref{ALXbd} we get the first conclusion of Theorem \ref{th2}. \begin{corollary} \label{th2part1} As $N\to\infty$, $N^{-\alpha/3} (\sigma(\ep)-S(\ep)) \eqp-\log(M)$. \end{corollary} \begin{pf} For any $\eta>0$ choose $\gamma>0$ so that $\log(1+\gamma)<\eta$ and $\log(1-\gamma)>-\eta$. Let $W_\ep$ be as in Lemma \ref{ALXbd}. 
Clearly $W_{(1+\gamma)\ep}=S(\ep)+N^{\alpha/3}[\log(1+\gamma )-\log M]$ and $W_{(1-\gamma)\ep}=S(\ep)+N^{\alpha/3}[\log (1-\gamma)-\log M]$. Using Lemma~\ref{ALXbd} \begin{eqnarray*} && P\bigl[N^{-\alpha/3} \bigl(\sigma(\ep)-S(\ep)\bigr)>-\log M+\eta\bigr] \\ &&\qquad \le P\bigl(\sigma(\ep) >W_{(1+\gamma)\ep}\bigr) =P(A_{W_{(1+\gamma)\ep}}<\ep N^2)\to0,\\ && P\bigl[N^{-\alpha/3} \bigl(\sigma(\ep)-S(\ep)\bigr)<-\log M-\eta\bigr] \\ &&\qquad \le P\bigl(\sigma(\ep) < W_{(1-\gamma)\ep}\bigr) =P(A_{W_{(1-\gamma)\ep}} > \ep N^2)\to0 \end{eqnarray*} as $N\to\infty$, and the proof is complete. \end{pf} The second conclusion in Theorem \ref{th2} follows from $C_t \le A_t$. To get the third we have to wait till Lemma \ref{tausigma}. First we need to show that when~$A_t/N^2$ is small, $C_t/N^2$ is not very much smaller. To prepare for that we need the following result. \begin{lemma}\label{renewalineq} Let $F(t)=\lambda t^3/3!$. If $u(\cdot)$ and $\beta(\cdot)$ are functions such that $u(t) \le\beta(t)+\intt u(t-s) \,dF(s)$ for all $t\ge0$, then \[ u(t) \le\beta* V(t) = \beta(t)+\intt\beta(t-s) \,dV(s), \] where $V(\cdot)$ is as in Lemma \ref{V}. \end{lemma} \begin{pf} Define $\tilde\beta(t)\equiv\beta(t)+\intt u(t-s) \,dF(s)-u(t)$. So $\tilde\beta(t) \ge0$ for all $t\ge0$. If $\hat\beta(t)\equiv \beta(t)-\tilde\beta(t)$, then \[ u(t)=\hat\beta(t)+\intt u(t-s) \,dF(s). \] Solving the renewal equation we get $u(t)=\hat\beta* V(t)$, where $V(\cdot)$ is as in Lem\-ma~\ref{V}. Since $\hat\beta(t)\le\beta (t)$ for all $t\ge0$, we get the result. \end{pf} We now apply Lemma \ref{renewalineq} to estimate the difference between $EA_t$ and~$EC_t$. \begin{lemma}\label{compare1} For any $t\ge0$ and $a(\cdot)$ as in (\ref{a}), \[ EC_t\ge EA_t - \frac{11 a^2(t)}{ N^2}. \] \end{lemma} \begin{pf} In either of our processes, if a center is born at time $s$, then the radius of the corresponding disk at time $t>s$ will be $(t-s)/\sqrt{2\pi}$. 
Thus $x$ will be covered at time $t$ if and only if there is a center in\vadjust{\goodbreak} the space--time cone \begin{equation}\label{cone} K_{x,t}\equiv\bigl\{(y,s)\in\Gamma(N) \times[0,t]\dvtx|y-x| \le (t-s)/\sqrt{2\pi}\bigr\}. \end{equation} If $0=s_0, s_1, s_2,\ldots$ are the birth times of new centers in $\mathcal{C}_t$, then \[ P( x \notin\mathcal{C}_t | s_0, s_1, s_2, \ldots) = \prod_{i\dvtx s_i\le t} \biggl[1-\frac{(t-s_i)^2}{2N^2}\biggr] \le\exp\biggl[-\sum_{i\dvtx s_i\le t} \frac{(t-s_i)^2}{2N^2}\biggr], \] since $1-x\le e^{-x}$. Let $q(t)\equiv P(x \notin\mathcal{C}_t)$, which does not depend on $x$, since we have a randomly chosen starting point. Recall that $\tilde X_t$ is the number of centers born by time $t$ in $\mathcal{C}_t$. Using the last inequality \[ q(t) \le E\exp\biggl[-\int_0^t \frac{(t-s)^2}{2N^2} \,d\tilde X_s\biggr] \] and $E C_t= N^2(1- q(t))$. Integrating $e^{-y} \ge1-y$ gives $1-e^{-x}\ge x-x^2/2$ for $x\ge0$. So \begin{eqnarray} \label{eq7} E C_t & \ge & N^2E\biggl[1-\exp\biggl(-\int_0^t \frac {(t-s)^2}{2N^2} \,d\tilde X_s\biggr)\biggr]\nonumber\\[-8pt]\\[-8pt] & \ge & N^2 E\biggl[\int_0^t \frac{(t-s)^2}{2N^2} \,d\tilde X_s -\frac12\biggl(\int_0^t \frac{(t-s)^2}{2N^2} \,d\tilde X_s \biggr)^2\biggr]. \nonumber \end{eqnarray} For the first term on the right we use $E\tilde X_t=1+\lambda\intt EC_s \,ds$. For the second term on the right, we use the coupling between $\mathcal{C}_t$ and $\mathcal{A}_t$ described in the \hyperref [intro]{Introduction}, see (\ref{couple}), so that we have $\int_0^t (t-s)^2 \,d\tilde X_s \le\int_0^t (t-s)^2 \,dX_s$.
Combining these two facts \begin{eqnarray}\label{eq6} EC_t & \ge & \frac{t^2}{2} + \int_0^t \frac{(t-s)^2}{2} \lambda EC_s \,ds -\frac{1}{2N^2} E\biggl[\int_0^t \frac{(t-s)^2}{2}\,dX_s\biggr]^2 \nonumber\\[-8pt]\\[-8pt] &=& \frac{t^2}{2} + \intt\frac{(t-s)^2}{2} \lambda EC_s \,ds-\frac {EA_t^2}{2N^2}.\nonumber \end{eqnarray} The last equality follows from (\ref{LA}), as does the next equation for $EA_t$: \begin{equation}\label{eq13} EA_t=\frac{t^2}{2} +\int_0^t \frac{(t-s)^2}{2} V'(s) \,ds = \frac{t^2}{2} + \int_0^t \frac{(t-s)^2}{2} \lambda EA_s \,ds. \end{equation} Here $V(\cdot)$ is as in Lemma \ref{V} and $EA_t=V'(t)/\lambda$ by Lemma \ref{XLAlem}. Combining~(\ref{eq6}) and (\ref{eq13}), if $u(t)\equiv EA_t-EC_t$, and $F(s)=\lambda s^3/3!$, then \[ u(t) \le\frac{EA_t^2}{2N^2} + \int_0^t \frac{(t-s)^2}{2} \lambda u(s) \,ds = \frac{EA_t^2}{2N^2} + \int_0^t u(t-r) \,dF(r), \] where the last step is obtained by changing variables $s \mapsto t-r$. If $\beta(t) = EA_t^2/2N^2$, then by Lemma \ref{sqbound} $\beta(t)\le27a^2(t)/4N^2$, and using\vadjust{\goodbreak} Lemma \ref{renewalineq} and (\ref{a2bd}) \[ u(t) \le\beta* V(t)\le\frac{27}{4N^2} (a^2)*V(t) \le \frac{27}{4N^2} \frac32 a^2(t), \] which gives the result, since $81/8 \le11$. \end{pf} To complete the proof of Theorem \ref{th2} it remains to establish its third conclusion, which we state as the following lemma and prove using Lemma \ref{compare1}. \begin{lemma} \label{tausigma} For any $\gamma>0$ \[ \limsup_{N\to\infty} P\bigl( \tau(\ep) > \sigma\bigl((1+\gamma)\ep\bigr) \bigr) \le P\bigl( M \le(1+\gamma)\ep^{1/3} \bigr) + 11 \frac{\ep^{1/3}}{\gamma}. \] \end{lemma} \begin{pf} Let $U=\sigma((1+\gamma)\ep)$ and $T=S(\ep^{2/3})$, where $S(\cdot)$ is as in (\ref{S}). Now \[ S(\ep^{2/3}) - S\bigl((1+\gamma)\ep\bigr) = N^{\alpha/3} \bigl[ - \tfrac{1}{3} \log(\ep) - \log(1+\gamma) \bigr].
\] It follows from Corollary \ref{th2part1} that \begin{eqnarray*} \limsup_{N\to\infty} P( U \ge T ) &\le& P\biggl( - \log(M) \ge- \frac{1}{3} \log(\ep) - \log(1+\gamma) \biggr) \\ &=& P\bigl( M \le(1+\gamma) \ep^{1/3} \bigr). \end{eqnarray*} Using Markov's inequality, Lemma \ref{compare1}, and $a(T) = \ep^{2/3}N^2$, \begin{equation}\label{b3}\quad P(|A_{T}-C_{T}|>\gamma\ep N^2) \le \frac{E(A_{T}-C_{T})}{\gamma\ep N^2} \le\frac{11 (a(T))^2}{\gamma \ep N^4} = 11 \cdot\frac{\ep^{1/3}}{\gamma}. \end{equation} Using these two bounds and the fact that $|A_t-C_t|$ is nondecreasing in $t$, we get \begin{eqnarray*} &&\limsup_{N\to\infty} P\bigl[\tau(\ep)>\sigma\bigl((1+\gamma)\ep\bigr)\bigr]\\ &&\qquad= \limsup_{N\to\infty} P[|A_U-C_U| >\gamma\ep N^2]\\ &&\qquad \le\limsup_{N\to\infty} P( U \ge T ) + \limsup_{N\to\infty} P[|A_U-C_U| >\gamma\ep N^2, U < T]\\ &&\qquad \le\limsup_{N\to\infty} P( U \ge T ) + P( |A_{T}-C_{T}|>\gamma\ep N^2), \end{eqnarray*} which completes the proof. \end{pf} \section{Limiting behavior of $\mathcal{C}_t$}\label{sec4} Let $\mathcal{C}_{s,t}^0$ be the set of points covered in $\mathcal{C}_t$ at time $t$ by the balloons born before time $s$. If we number the generations of centers in $\mathcal{C}_t$ starting with those existing at time $s$ as $\mathcal{C}_t$-centers of generation~0, then $\mathcal{C}_{s,t}^0$ is the set of points covered at time $t$ by the generation~0 centers of\vadjust{\goodbreak} $\mathcal{C}_t$. Let $\mathcal{C}^1_{s,t}$ be the set of points that are either in $\mathcal{C}^0_{s,t}$ or are covered at time $t$ by a balloon born from this area. This is the set of points covered by $\mathcal{C}_t$-centers of generations $\le1$ at time~$t$, ignoring births from $\mathcal{C}^1_{s,t} \setminus\mathcal{C}^0_{s,t}$, which are second generation centers.
Continuing by induction, we let $\mathcal{C}^k_{s,t}$ be the set of points covered by $\mathcal{C}_t$-centers of generations $0\le j \le k$ at time $t$, and $C_{s,t}^k=|\mathcal{C}_{s,t}^k|$ its total area. Similarly $A_{s,t}^k$ denotes the total area of the balloons in $\mathcal{A}_t$ of generations $j\in\{0, 1, \ldots, k\}$ at time~$t$, where generation 0 centers are those existing at time $s$. Recall the following definitions from (\ref{a}), (\ref{S}), (\ref{R}) and (\ref{psiWI}). \begin{eqnarray*} a(t) &=& (1/3)N^{2\alpha/3}\exp(N^{-\alpha/3} t),\\ S(\ep) &=& N^{\alpha/3}[(2-2\alpha/3)\log N + \log(3\ep)], \\ R &=& N^{\alpha/3}[(2-2\alpha/3)\log N - \log(M)], \end{eqnarray*} where $M$ is the limit random variable in Theorem \ref{th1}, and for $\log(3\ep) \le t$, \[ \psi(t)\equiv R+N^{\alpha/3}t,\qquad W\equiv\psi(\log(3\ep))\quad \mbox{and}\quad I_{\ep,t}=[\log(3\ep) , t]. \] Note that $\psi(t) \le0$ only if $M\ge e^t N^{2-2\alpha/3}$.\vspace*{1pt} Obviously $C_{s,t}^0 \le A_{s,t}^0$. For the other direction we have the following lemma. \begin{lemma}\label{compare2} For any $0<s<t$, \[ EC_{s,t}^0 \ge EA_{s,t}^0-\frac{a^2(s)}{N^2}p\bigl((t-s)\lambda ^{1/3}\bigr), \] where for some positive constants $c_1, c_2$ and $c_4$, \begin{equation} \label{pxdef} p(x)=c_1+c_2x^2/2!+c_4x^4/4!. \end{equation} \end{lemma} \begin{pf} By the definition of $A_{s,t}^0$, \begin{equation}\label{Ast} A_{s,t}^0=\int_0^s \frac{(t-r)^2}{2} \,dX_r = \frac{(t-s)^2}{2}X_s+(t-s)L_s+A_s. \end{equation} For the second equality we have written $(t-r)^2=(t-s)^2+2(t-s)(s-r)+(s-r)^2$ and used (\ref{LA}). As in Lemma \ref{compare1}, a point $x$ is not covered by time~$t$ by the balloons born before time $s$, if and only if no center is born in the truncated space--time cone \[ K_{x,s,t} \equiv\bigl\{(y,r)\in\Gamma(N)\times[0,s]\dvtx |y-x| \le(t-r)/\sqrt{2\pi}\bigr\}.
\] So using arguments similar to the ones for (\ref{eq7}) and $1-e^{-x} \ge x - x^2/2$, \begin{eqnarray*} EC_{s,t}^0 &\ge& N^2E\biggl[1-\exp\biggl(-\int_0^s \frac{(t-r)^2}{2N^2} \,d\tilde X_r\biggr)\biggr]\\ &\ge& N^2\biggl[E\int_0^s \frac{(t-r)^2}{2N^2} \,d\tilde X_r-\frac12 E\biggl(\int_0^s \frac{(t-r)^2}{2N^2} \,d\tilde X_r\biggr)^2\biggr]. \end{eqnarray*} For the first term on the right, we use $E\tilde X_t=1+\lambda\intt EC_s \,ds$. For the second term on the right, we use the coupling between $\mathcal{C}_t$ and $\mathcal{A}_t$ described in the \hyperref[intro]{Introduction}, see (\ref{couple}), to conclude that \[ \int_0^s (t-r)^2 \,d\tilde X_r \le\int_0^s (t-r)^2 \,dX_r=2A_{s,t}^0. \] Combining these two facts, using the first equality in (\ref{Ast}), $EX_t=1+\lambda\intt EA_s \,ds$, and Lemma \ref{compare1}, \begin{eqnarray}\label{eq1}\quad EC_{s,t}^0 & \ge & \frac{t^2}{2} + \int_0^s \frac{(t-r)^2}{2} \lambda EC_r \,dr -\frac{E(A_{s,t}^0)^2}{2N^2} \nonumber\\[2pt] & \ge & \frac{t^2}{2} + \int_0^s \frac{(t-r)^2}{2} \lambda EA_r \,dr - 11\int_0^s \frac{(t-r)^2}{2}\frac{\lambda a^2(r)}{N^2} \,dr-\frac {E(A_{s,t}^0)^2}{2N^2} \\[2pt] &=& EA_{s,t}^0 - 11\int_0^s \frac{(t-r)^2}{2}\frac{\lambda a^2(r)}{N^2} \,dr-\frac{E(A_{s,t}^0)^2}{2N^2}.\nonumber \end{eqnarray} To estimate the second term in the right-hand side of (\ref{eq1}), we write \[ (t-r)^2/2=(t-s)^2/2+(t-s)(s-r)+(s-r)^2/2, \] change variables $r = s-q$, and note $a(s-q)=a(s)\exp(-\lambda ^{1/3}q)$, to get \begin{eqnarray}\label{2nd} &&\int_0^s\frac{(t-r)^2}{2} \lambda a^2(r) \,dr\nonumber\\[2pt] &&\qquad= a^2(s) \biggl[\frac{(t-s)^2}{2}\lambda^{2/3} \int_0^s\lambda^{1/3}\exp(-2\lambda^{1/3}q) \,dq \nonumber\\[2pt] &&\qquad\quad\hspace*{28.1pt}{} +(t-s)\lambda^{1/3} \int_0^s\lambda^{2/3} q \exp(-2\lambda^{1/3}q) \,dq\\[2pt] &&\qquad\quad\hspace*{84pt}{}+ \int_0^s\lambda\frac{q^2}{2} \exp(-2\lambda^{1/3}q ) \,dq\biggr] \nonumber\\[2pt] &&\qquad\le\frac{a^2(s)}{2}\biggl[\frac{(t-s)^2}{2}\lambda 
^{2/3}+(t-s)\lambda^{1/3}+1\biggr].\nonumber \end{eqnarray} For the last inequality we have used \[ \int_0^s r^k\exp(-\mu r) \,dr \le\int_0^\infty r^k\exp(-\mu r) \,dr = \frac{k!}{\mu^{k+1}}. \] To estimate the third term in the right-hand side of (\ref{eq1}) we use (\ref{Ast}) to get \[ E[(A_{s,t}^0)^2] \le 3[EX_s^2(t-s)^4/4+EL_s^2(t-s)^2+EA_s^2]. \] Applying Lemma \ref{sqbound} and using the fact that $a(s)=\lambda^{-1/3}l(s)=\lambda^{-2/3}x(s)$, \begin{eqnarray}\label{3rd} E[(A_{s,t}^0)^2] & \le & 3 \cdot\frac{27}{2} \biggl[x^2(s)\frac{(t-s)^4}{4} + l^2(s) (t-s)^2 + a^2(s)\biggr] \nonumber\\[-9pt]\\[-9pt] & \le & 243 a^2(s) \biggl[\frac{(t-s)^4}{4!}\lambda^{4/3} + \frac{(t-s)^2}{2!}\lambda ^{2/3}+1\biggr].\nonumber \end{eqnarray} Combining (\ref{eq1}), (\ref{2nd}) and (\ref{3rd}) we get the result. \end{pf} To show uniform convergence of $C_{W,\psi(\cdot)}^k$ to $C_{\psi(\cdot)}$, we also need to bound the difference between $A_t$ and $A_{s,t}^k$ for suitable choices of $s$ and $t$. \begin{lemma}\label{Abound} If $T=S(\ep^{2/3})$, where $S(\cdot)$ is as in (\ref{S}), then for any $t>0$ \[ EA_{T+tN^{\alpha/3}}-EA_{T,T+tN^{\alpha/3}}^k \le 3\ep^{2/3}N^2\sum_{j=k+1}^\infty\frac{t^j}{j!}. \] \end{lemma} \begin{pf} By (\ref{Ast}) $EA_{s,t}^0=EA_s+EL_s (t-s)+EX_s (t-s)^2/2$. If $X_{s,t}^k$ and~$L_{s,t}^k$ denote the number of centers and sum of radii of all the balloons in $\mathcal{A}_t$ of generations $j\in\{1, 2, \ldots, k\}$ at time $t$, where generation 0 centers are those which are born before time $s$, then for $t>s$, \[ \frac{d}{dt} EX_{s,t}^1 = N^{-\alpha}EA_{s,t}^0,\qquad \frac{d}{dt} EL_{s,t}^1 = EX_{s,t}^1,\qquad \frac{d}{dt} EA_{s,t}^1 = EL_{s,t}^1.
\] Integrating over $[s,t]$ and using (\ref{Ast}) we have \begin{eqnarray*} EX_{s,t}^1 &=& N^{-\alpha}\biggl[(t-s)EA_s+\frac{(t-s)^2}{ 2!} EL_s+\frac{(t-s)^3}{3!}EX_s\biggr],\\[-2pt] EL_{s,t}^1 &=& N^{-\alpha}\biggl[\frac{(t-s)^2}{2!}EA_s+\frac{(t-s)^3}{3!} EL_s+\frac{(t-s)^4}{4!}EX_s\biggr],\\[-2pt] EA_{s,t}^1 &=& N^{-\alpha}\biggl[\frac{(t-s)^3}{3!}EA_s+\frac{(t-s)^4}{4!} EL_s+\frac{(t-s)^5}{5!}EX_s\biggr]. \end{eqnarray*} Turning to other generations, for $k\ge2$ and $t>s$, \begin{eqnarray*} \frac{d}{dt} (EX_{s,t}^k-EX_{s,t}^{k-1}) &=& N^{-\alpha }(EA_{s,t}^{k-1}-EA_{s,t}^{k-2}),\\[-2pt] \frac{d}{dt} (EL_{s,t}^k-EL_{s,t}^{k-1}) &=& (EX_{s,t}^k-EX_{s,t}^{k-1}),\\[-2pt] \frac{d}{dt} (EA_{s,t}^k-EA_{s,t}^{k-1}) &=& (EL_{s,t}^k-EL_{s,t}^{k-1}), \end{eqnarray*} and using induction on $k$ we have \[ EA_{s,t}^k=\sum_{j=0}^k N^{-\alpha j}\biggl[\frac{(t-s)^{3j}}{(3j)!} EA_s + \frac{(t-s)^{3j+1}}{(3j+1)!} EL_s +\frac{(t-s)^{3j+2}}{(3j+2)!} EX_s\biggr].\vadjust{\goodbreak} \] Since $A_{s,t}^k \uparrow A_t$ for any $s<t$, $EA_t=\lim_{k\to\infty} EA_{s,t}^k$ by the monotone convergence theorem. Replacing $s$ by $T$ and $t$ by $T+t N^{\alpha/3}$, \begin{eqnarray}\label{eq5}\qquad && EA_{T+t N^{\alpha/3}}-EA_{T,T+t N^{\alpha/3}}^k \nonumber\\[-8pt]\\[-8pt] &&\qquad=\sum_{j=k+1}^\infty\biggl[\frac{t^{3j}}{(3j)!} EA_T +\frac{t^{3j+1}}{(3j+1)!} N^{\alpha/3} EL_T+ \frac {t^{3j+2}}{(3j+2)!} N^{2\alpha/3} EX_T\biggr]. \nonumber \end{eqnarray} Using the fact that $EA_T+ N^{\alpha/3}EL_T+N^{2\alpha /3}EX_T-3a(T)=0$ and $a(T)=\ep^{2/3}N^2$, the right-hand side of (\ref{eq5}) is $\le3\ep^{2/3}N^2\sum_{j=k+1}^\infty t^j/j!$, which completes the proof. \end{pf} Recall the definitions of $\psi(\cdot), W$ and $I_{\ep,t}$ from the displays before Lem\-ma~\ref{compare2} and that for $\log(3\ep)\le t$, \begin{equation}\label{gdef2} g_0(t)=\ep\biggl[1+\bigl(t-\log(3\ep)\bigr)+\frac{(t-\log(3\ep))^2}{2}\biggr].
\end{equation} \begin{lemma}\label{B0bounds} For any $t<\infty$, there is an $\ep_0=\ep_0(t)>0$ so that for $0< \ep< \ep_0$, \begin{eqnarray*} \lim_{N\to\infty} P\Bigl( \sup_{s\in I_{\ep,t}} \bigl|N^{-2} A^0_{W,\psi(s)} -g_0(s) \bigr|>\eta\Bigr) &=& 0 \qquad\mbox{for any } \eta>0,\\ P\Bigl( \inf_{s\in I_{\ep,t}} N^{-2} \bigl( C^0_{W,\psi(s)} - A^0_{W,\psi(s)} \bigr) < -\ep ^{7/6} \Bigr) & \le & P(M<\ep^{1/3})+\ep^{1/12}. \end{eqnarray*} \end{lemma} \begin{pf} To prove the first result we use (\ref{Ast}) to conclude \[ A^0_{W,\psi(t)}=\frac{(t-\log(3\ep))^2}{2}N^{2\alpha/3}X_W+\bigl(t-\log (3\ep)\bigr)N^{\alpha/3}L_W+A_W. \] Applying Lemma \ref{ALXbd} \begin{eqnarray*} &&\lim_{N\to\infty} P\Bigl(\sup_{s\in I_{\ep,t}} \bigl|N^{-2} A^0_{W,\psi(s)} -g_0(s)\bigr|>\eta\Bigr) \\ &&\qquad \le\lim_{N\to\infty} P\biggl( \bigl|N^{-(2-2\alpha/3)}X_W-\ep\bigr| > \frac{2\eta}{3(t-\log(3\ep))^2} \biggr)\\ &&\qquad\quad{} + \lim_{N\to\infty} P\biggl( \bigl|N^{-(2-\alpha/3)}L_W-\ep\bigr| > \frac{\eta}{3(t-\log(3\ep))} \biggr) \\ &&\qquad\quad{} + \lim_{N\to\infty} P\biggl(|N^{-2}A_W-\ep| > \frac{\eta}{3}\biggr) =0. \end{eqnarray*} Let $\ep_0=\ep_0(t)$ be such that $\ep^{1/12}p(t-\log(3\ep)) \le 1$ for all $0<\ep<\ep_0$, where $p(\cdot)$ is the polynomial in (\ref{pxdef}). Let $T=S(\ep^{2/3})$, where $S(\cdot)$ is defined in (\ref{S}), and $T' = T+(t-\log(3\ep))N^{\alpha/3}$. Using the fact that\vadjust{\goodbreak} $A^0_{s,s+t}-C^0_{s,s+t}$ is nondecreasing in~$s$, Markov's inequality, and then Lemma \ref{compare2} we see that \begin{eqnarray*} && P\Bigl(\sup_{s\in I_{\ep,t}} \bigl|A^0_{W,\psi(s)}-C^0_{W,\psi (s)}\bigr| > \ep^{7/6} N^2, W \le T\Bigr)\\ &&\qquad \le P(|A^0_{T,T'} - C^0_{T,T'}| >\ep^{7/6} N^2) \le\frac{E|A^0_{T,T'} - C^0_{T,T'}|}{\ep^{7/6} N^2} \\ &&\qquad \le\frac{a^2(T) p(t-\log(3\ep))}{\ep^{7/6} N^4}.
\end{eqnarray*} Noting that $P(W >T)=P(M<\ep^{1/3}), a(T)=\ep^{2/3}N^2$ and $\ep^{1/12}p(t-\log(3\ep))<1$ for $\ep<\ep_0$ we have \[ P\Bigl(\sup_{s\in I_{\ep,t}} \bigl|A_{W,\psi(s)}-C_{W,\psi (s)}\bigr|>\ep^{7/6} N^2\Bigr) \le P(M<\ep^{1/3}) + \ep^{1/12}, \] which completes the proof. \end{pf} Our next step is to improve the lower bound in Lemma \ref{B0bounds}. Let \[ \rho^0_t = N^{-2} A^0_{W,\psi(t)} - \ep^{7/6}. \] On the event \begin{equation} \label{Fdef} F = \bigl\{ N^{-2}\bigl|\mathcal{C}^0_{W,\psi (s)}\bigr| \ge\rho^0_s \mbox{ for all $s\in I_{\ep,t}$} \bigr\}, \end{equation} which has probability tending to 1 as $\ep\to0$ by Lemma \ref{B0bounds}, $\mathcal {C}^0_{W,\psi(s)}$ can be coupled with a process $\mathcal{B}^0_{\psi (s)}$ so that $N^{-2}|\mathcal{B}^0_{\psi(s)}|=\rho^0_s$ and $\mathcal{C}^0_{W,\psi(s)} \supseteq\mathcal{B}^0_{\psi(s)}$ for $s\in I_{\ep,t}$. If for $k \ge1$, $\mathcal{B}^k_{\psi(t)}$ is obtained from $\mathcal{B}^0_{\psi(t)}$ in the same way as $\mathcal{C}^k_{W,\psi(t)}$ is obtained from $\mathcal{C}^0_{W,\psi(t)}$, then, on $F$, $\mathcal {C}^k_{W,\psi(s)} \supseteq\mathcal{B}^k_{\psi(s)}$ for $s\in I_{\ep,t}$. For $k\ge1$ let \[ \rho^k_s = N^{-2} \bigl|\mathcal{B}^k_{\psi(s)}\bigr|. \] We begin with the case $k=1$. For $f_0(t)=g_0(t)-\ep^{7/6}$, where $g_0$ is as in (\ref{gdef2}), let \begin{equation}\label{f1eq2} f_1(t) = 1 - \bigl(1-f_0(t)\bigr) \exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}f_0(s)\,ds\biggr). \end{equation} \begin{lemma}\label{f1lb} For any $t<\infty$ there is an $\ep_0=\ep_0(t)>0$ so that for $0 < \ep< \ep_0$ and any $\delta>0$, \[ \limsup_{N\to\infty} P\Bigl[\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C^1_{W,\psi(s)}-f_1(s)\bigr) < -\delta\Bigr] \le P(M<\ep ^{1/3})+\ep^{1/12}.
\] \end{lemma} \begin{pf} As in Lemma \ref{compare1}, if $x\notin\mathcal{B}^0_{\psi(t)}$, then $x\notin\mathcal{B}^1_{\psi(t)}$ if and only if no generation 1 center is born in the space--time cone \[ K_{x,t}^\ep\equiv\bigl\{(y,s)\in\Gamma(N)\times[W,\psi(t)]\dvtx |y-x| \le\bigl(\psi(t)-s\bigr)/\sqrt{2\pi}\bigr\}.\vadjust{\goodbreak} \] Conditioning on $\mathcal{G}^0_t=\sigma\{ \mathcal{B}^0_{\psi(s)} \dvtx s\in I_{\ep,t}\}$, the locations of generation 1 centers in $\mathcal{B}^1_{t}$ form a Poisson point process on $\Gamma(N) \times [W,\psi(t)]$ with intensity \[ N^{-2} \times|\mathcal{B}^0_{s}|N^{-\alpha} = \rho^0_{\psi ^{-1}(s)} N^{-\alpha}. \] Using this and then changing variables $s=\psi(r)$, where $\psi(r)=R+N^{\alpha/3}r$, \begin{eqnarray*} P\bigl( x \notin\mathcal{B}^1_{\psi(t)} | \mathcal{G}^0_t \bigr) &=& (1-\rho^0_t) \exp\biggl(- \int_W^{\psi(t)} \frac{(\psi(t)-s)^2}{2}\rho^0_{\psi^{-1}(s)} N^{-\alpha} \,ds\biggr)\\ &=& (1-\rho^0_t) \exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-r)^2}{2}\rho^0_r \,dr\biggr). \end{eqnarray*} Let $E_{x,t} = \{ x \notin\mathcal{B}^1_{t}\}$. Since $K_{x,t}^\ep$ and $K_{y,t}^\ep$ are disjoint if $|x-y|>2(t-\log(3\ep))N^{\alpha/3}/\sqrt{2\pi}$, the events\vspace*{1pt} $E_{x,t}$ and $E_{y,t}$ are conditionally independent given $\mathcal {G}^0_t$ if this holds. Define the random variables $Y_x$, $x\in\Gamma(N)$, so that $Y_x=1$ if $E_{x,t}$ occurs, and $Y_x=0$ otherwise. This gives \begin{equation} \label{cmu1} E( Y_x | \mathcal{G}^0_t) =(1-\rho^0_t)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}\rho^0_s \,ds\biggr).
\end{equation} Using independence of $Y_x$ and $Y_z$ for $|x-z|>2(t-\log(3\ep))N^{\alpha/3}/\sqrt{2\pi}$, and the fact that $\{z\dvtx|x-z| \le2(t-\log(3\ep))N^{\alpha/3}/\sqrt{2\pi }\}$ has area $2(t-\log(3\ep))^2 N^{2\alpha/3}$, \begin{eqnarray}\label{cvar1} &&\operatorname{var} \biggl( \int_{x\in\Gamma(N)} Y_x \,dx \big| \mathcal{G}^0_t\biggr) \nonumber\\ &&\qquad=\int_{x,z\in\Gamma(N)} [E( Y_xY_z | \mathcal{G}^0_t) -E( Y_x | \mathcal{G}^0_t) E( Y_z | \mathcal {G}^0_t)] \,dx \,dz \\ &&\qquad\le N^2\cdot2\bigl(t-\log(3\ep)\bigr)^2 N^{2\alpha/3}.\nonumber \end{eqnarray} Using Chebyshev's inequality, we see that \begin{eqnarray} \label{cch1} &&P\biggl(\biggl|\int_{x\in\Gamma(N)} \bigl(Y_x-E( Y_x | \mathcal{G}^0_t) \bigr)\, dx \biggr|> \frac{\eta}{2} N^2 \Big| \mathcal{G}^0_t \biggr) \nonumber\\[-8pt]\\[-8pt] &&\qquad\le \frac{4\operatorname{var}(\int_{x\in\Gamma(N)} Y_x \,dx | \mathcal{G}^0_t)}{\eta^2N^4}.\nonumber \end{eqnarray} Combining (\ref{cmu1}), (\ref{cvar1}) and (\ref{cch1}) gives \begin{eqnarray*} &&P\biggl( \biggl|(1-\rho^1_t) - (1-\rho^0_t)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}\rho^0_s \,ds\biggr) \biggr| > \frac{\eta}{2} \Big| \mathcal{G}^0_t \biggr) \\ &&\qquad\le\frac {8(t-\log(3\ep))^2}{\eta^2N^{2-2\alpha/3}}. \end{eqnarray*} The same bound holds for the unconditional probability. By Lemma \ref{B0bounds} if $\eta>0$ and \[ F_{0,\eta} \equiv\Bigl\{{\sup_{s\in I_{\ep,t}}} |\rho^0_s-f_0(s)| \le\eta\Bigr\}\qquad \mbox{then } \lim_{N\to\infty} P(F_{0,\eta}^c) = 0. \] Let $\eta'=\eta[1+(t-\log(3\ep))^3/3!]^{-1}/2$. 
Using (\ref{f1eq2}) and the fact that for $x,y\ge0$ \begin{equation}\label{eineq} |e^{-x}-e^{-y}| = \biggl| \int_x^y e^{-z} \,dz \biggr| \le|x-y|, \end{equation} we see that on the event $F_{0,\eta'}$, we have for any $s\in I_{\ep,t}$ \begin{eqnarray*} && \biggl|(1-\rho^0_s) \exp\biggl(-\int_{\log(3\ep)}^s \frac{(s-r)^2}{2}\rho^0_r \,dr\biggr)-\bigl(1-f_1(s)\bigr)\biggr| \\ &&\qquad \le\bigl|(1-\rho^0_s)-\bigl(1-f_0(s)\bigr)\bigr| + \eta' \int_{\log (3\ep)}^s \frac{(s-r)^2}{2} \,dr\\ &&\qquad\le\eta' + \eta' \frac{(s-\log(3\ep))^3}{3!} \le\frac{\eta}{2}. \end{eqnarray*} So for any $s\in I_{\ep,t}$ \begin{eqnarray*} && \lim_{N\to\infty} P\bigl( |\rho^1_s - f_1(s)| > \eta\bigr)\\ &&\qquad\le\lim_{N\to\infty} P(F_{0,\eta'}^c )\\ &&\qquad\quad{}+ \lim_{N\to\infty}P\biggl(\biggl|(1-\rho^1_s )-(1-\rho^0_s) \exp\biggl( -\int_{\log(3\ep)}^s \frac{(s-r)^2}{2}\rho^0_r \,dr\biggr)\biggr|>\frac{\eta}{2}\biggr) \\ &&\qquad= 0. \end{eqnarray*} Since $\eta>0$ is arbitrary, the two quantities being compared are increasing and continuous, and on the event $F$ defined in (\ref{Fdef}) $N^{-2} C^1_{W,\psi(s)} \ge\rho^1_s$ for $s\in I_{\ep,t}$, \begin{eqnarray*} & & \limsup_{N\to\infty} P\Bigl[\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C^1_{W,\psi(s)}-f_1(s)\bigr) < -\delta\Bigr]\\ &&\qquad \le P(F^c) +\limsup_{N\to\infty} P\Bigl(\sup_{s\in I_{\ep,t}} |\rho^1_s-f_1(s)| >\delta\Bigr) \le P(F^c), \end{eqnarray*} and the desired conclusion follows from Lemma \ref{B0bounds}. \end{pf} To improve this we will let \begin{equation}\label{fiter2} f_{k+1}(t) = 1 - \bigl(1-f_{k}(t)\bigr) \exp\biggl(-\int_{\log(3\ep)}^t \frac {(t-s)^2}{2}\bigl(f_k(s)-f_{k-1}(s)\bigr)\,ds\biggr),\hspace*{-38pt} \end{equation} and recall from (\ref{fepinteq}) that as $k\uparrow\infty$, $f_k(t) \uparrow f_\ep(t)$. 
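The monotone convergence $f_k(t)\uparrow f_\ep(t)$ is easy to observe numerically. The following sketch is our own illustration, not part of the proof: the parameter values, grid and trapezoid quadrature are choices made here. It iterates the telescoped form of the recursion, $f_{k+1}(t)=1-(1-f_0(t))\exp(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}f_k(s)\,ds)$, with $f_0=g_0-\ep^{7/6}$.

```python
import math

def iterate_f(eps=0.01, t_max=2.0, n_grid=200, n_iter=20):
    """Iterate f_{k+1}(t) = 1 - (1 - f_0(t)) * exp(-I_k(t)) on a uniform
    grid over [log(3*eps), t_max], where
    I_k(t) = int_{log(3 eps)}^t ((t - s)^2 / 2) f_k(s) ds  (trapezoid rule)
    and f_0 = g_0 - eps^{7/6}.  Returns the grid and all iterates."""
    t0 = math.log(3 * eps)
    ts = [t0 + (t_max - t0) * i / (n_grid - 1) for i in range(n_grid)]
    h = ts[1] - ts[0]
    # f_0(t) = g_0(t) - eps^{7/6}, g_0(t) = eps*(1 + u + u^2/2), u = t - log(3 eps)
    f0 = [eps * (1 + (t - t0) + (t - t0) ** 2 / 2) - eps ** (7.0 / 6.0) for t in ts]
    iterates = [f0]
    f = f0
    for _ in range(n_iter):
        new = []
        for i, t in enumerate(ts):
            w = [((t - ts[j]) ** 2 / 2) * f[j] for j in range(i + 1)]
            integral = h * (sum(w) - (w[0] + w[-1]) / 2) if i > 0 else 0.0
            new.append(1 - (1 - f0[i]) * math.exp(-integral))
        iterates.append(new)
        f = new
    return ts, iterates
```

Each iterate dominates the previous one pointwise and the sequence stabilizes after a few steps, reflecting the factorial convergence rate established in the next proof.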
\begin{lemma}\label{fklb} For any $t<\infty$ there is an $\ep_0=\ep_0(t)>0$ so that for $0 < \ep< \ep_0$ and any $\delta>0$, \[ \limsup_{N\to\infty} P\Bigl[\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-f_\ep(s)\bigr) < -\delta\Bigr] \le P(M<\ep ^{1/3})+\ep^{1/12}. \] \end{lemma} \begin{pf} Conditioning on $\mathcal{G}^k_t=\sigma\{ \mathcal{B}^j_{\psi (s)} \dvtx0\le j\le k, s\in I_{\ep,t}\}$, we have \[ P\bigl( x \notin\mathcal{B}^{k+1}_{\psi(t)} | \mathcal{G}^k_t \bigr) = (1-\rho^k_t) \exp\biggl(-\intt\frac{(t-s)^2}{2} (\rho^k_s-\rho^{k-1}_s) \,ds\biggr). \] Let $F_{k,\eta}=\{\sup_{s\in I_{\ep,t}} |\rho^k_s-f_k(s)|\le\eta\}$, and $\eta'=\eta[1+2(t-\log(3\ep))^3/3!]^{-1}/2$. Using (\ref{fiter2}) and $|e^{-x}-e^{-y}| \le|x-y|$ for $x,y\ge0$, we see that on the event $G_{k,\eta'}=F_{k,\eta'} \cap F_{k-1,\eta'}$, for any $s\in I_{\ep,t}$ \begin{eqnarray*} &&\biggl|(1-\rho^k_t) \exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} (\rho^k_s-\rho^{k-1}_s) \,ds \biggr)-\bigl(1-f_{k+1}(t)\bigr)\biggr| \\[-2pt] &&\qquad \le\bigl|(1-\rho_t^k)-\bigl(1-f_k(t)\bigr)\bigr| + 2\eta' \int_{\log(3\ep)}^t \frac{(t-s)^2}{2} \,ds \\[-2pt] &&\qquad \le\eta'+2\eta' \bigl(t-\log(3\ep)\bigr)^3/3!\le\eta/2. \end{eqnarray*} Bounding the variance as before we can conclude by induction on $k$ that for any $\eta>0$ \begin{equation}\label{rhokbd} \lim_{N\to\infty} P\Bigl( {\sup_{s\in I_{\ep,t}}} |\rho^k_s - f_k(s)| > \eta\Bigr)= 0. \end{equation} Next we bound the difference between $f_k(t)$ and $f_\ep(t)$. Let $G(t)=t^3/3!$ for $t\ge0$ and $G(t)=0$ for $t<0$. If $*k$ indicates the $k$-fold convolution, then for $k\ge1$, using arguments similar to the ones in the proof of Lemma \ref{V}, $G^{*k}(t)=t^{3k}/(3k)!$ for $t\ge0$ and $G^{*k}(t)=0$ for $t<0$.
Now if $f*G^{*k}(t) = \intt f(t-r) \,dG^{*k}(r)$, $\tilde f_k(\cdot)= f_k(\cdot+\log(3\ep))$ and $\tilde f_\ep(\cdot)=f_\ep(\cdot+\log (3\ep))$, then changing variables $s\mapsto t-r$ in (\ref{fkinteq}) and (\ref{fepinteq}), and using the inequality in (\ref{eineq}), \begin{eqnarray*} & & \bigl|\tilde f_k\bigl(t-\log(3\ep)\bigr)-\tilde f_\ep\bigl(t-\log (3\ep)\bigr)\bigr| \\ &&\qquad \le \bigl|{\exp}\bigl(-\tilde f_{k-1}*G\bigl(t-\log(3\ep)\bigr)\bigr)-\exp\bigl(-\tilde f_\ep*G\bigl(t-\log(3\ep)\bigr)\bigr)\bigr|\\ &&\qquad \le |\tilde f_{k-1}-\tilde f_\ep|*G\bigl(t-\log(3\ep)\bigr). \end{eqnarray*} Iterating the above inequality and using $|\tilde f_\ep(s)-\tilde f_0(s)|=\tilde f_\ep(s)-\tilde f_0(s)\le1$, \begin{eqnarray} \label{fgap} |f_k(t)-f_\ep(t)| &=& \bigl|\tilde f_k\bigl(t-\log(3\ep)\bigr)-\tilde f_\ep\bigl(t-\log (3\ep)\bigr)\bigr| \nonumber\\ & \le & |\tilde f_0-\tilde f_\ep|*G^{*k}\bigl(t-\log(3\ep)\bigr) \\ & \le & G^{*k}\bigl(t-\log(3\ep)\bigr) = \frac{(t-\log(3\ep))^{3k}}{(3k)!}, \nonumber \end{eqnarray} where the last equality comes from (\ref{Fconv}).\vadjust{\goodbreak} Choose $K=K(\ep,t)$ so that $(t-\log(3\ep))^{3K}/(3K)! <\delta/2$. Since $C_{\psi(t)} \ge C^k_{W,\psi(t)}$ for any $k\ge0$, and on the event $F$ defined in (\ref{Fdef}) we have $C^k_{W,\psi(t)} \ge|\mathcal{B}^k_{\psi(t)}|$, it follows that \[ P\Bigl(\inf_{s\in I_{\ep,t} } \bigl(N^{-2} C_{\psi(s)}-f_\ep (s)\bigr)<-\delta\Bigr) \le P(F^c) + P\Bigl({\sup_{s\in I_{\ep,t} }}|\rho ^K_s-f_K(s)| > \delta/2\Bigr). \] Using (\ref{rhokbd}) and Lemma \ref{B0bounds} we get the result. \end{pf} It is now time to get upper bounds on $C_{\psi(s)}$.
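Before turning to the upper bounds, we note that the factorial decay in (\ref{fgap}) rests on the convolution identity $G^{*k}(t)=t^{3k}/(3k)!$ used above. A short numerical sanity check (our own illustration; the grid size and trapezoid quadrature are choices made here) forms the convolutions $G^{*k}(t)=\int_0^t G^{*(k-1)}(t-s)\,dG(s)$ with $dG(s)=(s^2/2)\,ds$:

```python
import math

def G_convolutions(t_max=2.0, k_max=3, n=800):
    """Compute G^{*k}(t_max) numerically for G(t) = t^3/3!, using
    (G^{*k})(t) = int_0^t G^{*(k-1)}(t - s) (s^2 / 2) ds  (trapezoid rule),
    to compare with the closed form t_max^{3k}/(3k)!."""
    h = t_max / n
    ts = [i * h for i in range(n + 1)]
    Gk = [t ** 3 / 6 for t in ts]  # G^{*1} = G
    values = {1: Gk[-1]}
    for k in range(2, k_max + 1):
        new = []
        for i, t in enumerate(ts):
            w = [Gk[i - j] * ts[j] ** 2 / 2 for j in range(i + 1)]
            new.append(h * (sum(w) - (w[0] + w[-1]) / 2) if i > 0 else 0.0)
        Gk = new
        values[k] = Gk[-1]
    return values
```

At $t=2$ the computed values agree with $2^{3k}/(3k)!$ to well within the discretization error.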
Recall $g_0(t)$ defined in (\ref{gdef2}), let $g_{-1}(t)=0$ and for $k \ge1$ let \begin{eqnarray}\label{giter} g_k(t) &=& 1-\bigl(1-g_{k-1}(t)\bigr)\nonumber\\[-8pt]\\[-8pt] &&{}\times\exp\biggl(-\int_{\log(3\ep)}^t \frac {(t-s)^2}{2} \bigl(g_{k-1}(s)-g_{k-2}(s)\bigr) \,ds\biggr).\nonumber \end{eqnarray} As in the case of $f_k(t)$, the equations above imply \[ g_k(t) = 1-\bigl(1-g_{0}(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} g_{k-1}(s) \,ds\biggr), \] so we have $g_k(t) \uparrow g_\ep(t)$ as $k\uparrow\infty$, where $g_\ep(t)$ satisfies \[ g_\ep(t) = 1-\bigl(1-g_{0}(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} g_\ep(s) \,ds\biggr). \] \begin{lemma}\label{glb} For any $t<\infty$ there exists $\ep_0=\ep_0(t)>0$ such that for $0 < \ep< \ep_0$ and any $\delta>0$, \[ \limsup_{N\to\infty} P\Bigl[\sup_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-g_\ep(s)\bigr) > \delta\Bigr] \le P(M<\ep^{1/3})+\ep^{2/3}. \] \end{lemma} \begin{pf} Clearly $C^0_{W,\psi(t)} \le A^0_{W,\psi(t)}$. If $\phi^0_t = N^{-2} A^0_{W,\psi(t)}$ is the fraction\vspace*{1pt} of area covered by generation 0 balloons at time $\psi(t)$, generation 1 centers are born at rate $N^{2-\alpha}\phi^0_{\psi^{-1}(\cdot)}$. If $\phi^1_t$ denotes\vspace*{1pt} the fraction of area covered by centers of generations $\le1$ at time $\psi(t)$, then an argument similar to the one for Lemma \ref{f1lb} gives \[ \lim_{N\to\infty} P\Bigl( \sup_{s\in I_{\ep,t}} \phi^1_s - g_1(s) > \eta\Bigr) = 0 \] for any $\eta>0$. Continuing by induction, let $\phi_t^k$ be the fraction of area covered by centers of generations $0\le j\le k$. Since (\ref{giter}) and (\ref{fiter2}) are the same except for the\vadjust{\goodbreak} letter they use, by an argument identical to the one for Lemma \ref{fklb}, \begin{equation}\label{eq9} \lim_{N\to\infty} P\Bigl( {\sup_{s\in I_{\ep,t}}} |\phi^k_s - g_k(s)| > \eta\Bigr) = 0 \end{equation} for any $\eta>0$.
Now using an argument similar to the one for (\ref{fgap}) \begin{equation}\label{eq8} {\sup_{s\in I_{\ep,t}}} |g_k(s)-g_\ep(s)| \le\frac {(t-\log(3\ep))^{3k}}{(3k)!} . \end{equation} Next we bound the difference between $C^k_{W,\psi(t)}$ and $C_{\psi (t)}$. Let $T=S(\ep^{2/3})$, where $S(\cdot)$ is as in (\ref{S}). Using the coupling between $\mathcal{C}_t$ and $\mathcal{A}_t$, \[ C_{\psi(t)}- C^k_{W,\psi(t)} \le A_{\psi(t)}-A_{W,\psi(t)}^k. \] Using the fact that $EA_{s+t}-EA_{s,s+t}^k$ is nondecreasing in $s$, the definitions of $W$ and $T$, Markov's inequality, and Lemma \ref{Abound}, we have for $T'=T+(t-\log(3\ep))N^{\alpha/3}$, \begin{eqnarray*} && P\biggl(\sup_{s\in I_{\ep,t}}\bigl(C_{\psi(s)}-C^k_{W,\psi(s)} \bigr) > \frac{\delta N^2}{4} \biggr) \\ &&\qquad \le P(W>T) + P\biggl( A_{T'}-A^k_{T,T'} > \frac{\delta N^2}{4} \biggr)\\ &&\qquad \le P(M<\ep^{1/3})+ \frac{4}{\delta N^{2}} E( A_{T'}-A^k_{T,T'})\\ &&\qquad \le P(M<\ep^{1/3}) + \frac{12\ep^{2/3}}{\delta} \sum_{j=k+1}^\infty \frac{(t-\log(3\ep))^j}{j!}. \end{eqnarray*} Choose $K=K(\ep,t)$ large enough so that $\sum_{j=K+1}^\infty (t-\log(3\ep))^j/j! < \delta/12$. If we let \[ F_K =\Bigl\{\sup_{s\in I_{\ep,t}} \bigl(C_{\psi(s)} - C^K_{W,\psi(s)}\bigr) \le(\delta/4)N^2\Bigr\}, \] then \[ P(F_K^c)\le P(M<\ep^{1/3})+\ep^{2/3}. \] By the choice of $K$ and (\ref{eq8}), ${\sup_{s\in I_{\ep,t}}} |g_K(s)-g_\ep(s)|\le\delta/2$. Combining the last two inequalities and using the fact that $N^{-2}C^K_{W,\psi(s)} \le\phi^K_s = N^{-2}A^K_{W,\psi(s)}$, \[ P\Bigl(\sup_{s\in I_{\ep,t}} N^{-2}C_{\psi(s)}-g_\ep(s) > \delta \Bigr) \le P(F_K^c)+ P\Bigl({\sup_{s\in I_{\ep,t}}} |\phi ^K_s-g_K(s)| > \delta/4\Bigr). \] So using (\ref{eq9}) we have the desired result. \end{pf} Our next goal is: \begin{pf*}{Proof of Lemma \ref{h}} We prove the result in two steps. To begin we consider a function $h_\ep(\cdot)$ satisfying $h_\ep(t) = e^t/3$ for $t < \log(3\ep)$ and
\begin{equation}\label{hep} h_\ep(t)=1-\exp\biggl(-\int_{-\infty}^{\log(3\ep)} \frac{(t-s)^2}{2} \frac{e^s}{3} \,ds - \int_{\log(3\ep)}^t \frac {(t-s)^2}{2} h_\ep(s) \,ds\biggr)\hspace*{-32pt} \end{equation} for $t\ge\log(3\ep)$, and prove that $h_\ep(\cdot)$ converges to some $h(\cdot)$ with the desired properties. \begin{lemma} \label{hepmono} For fixed $t$, $h_\ep(t)$ in (\ref{hep}) is monotone increasing in $\ep$. \end{lemma} \begin{pf} If we change variables $s = t-u$ and integrate by parts, or recall the first two moments of the exponential with mean 1, then \begin{eqnarray} \label{id1} \int_{-\infty}^t (t-s) e^s \,ds &=& \int_0^\infty u e^{t-u} \,du = e^t, \nonumber\\[-8pt]\\[-8pt] \int_{-\infty}^t \frac{(t-s)^2}{2} e^s \,ds &=& \int_0^\infty\frac {u^2}{2} e^{t-u} \,du = e^t\int_0^\infty u e^{-u} \,du = e^t.\nonumber \end{eqnarray} Using $(t-s)^2/2 = (t-r)^2/2 + (t-r)(r-s) + (r-s)^2/2$ now gives the following identity \begin{equation} \label{id} \int_{-\infty}^r \frac{(t-s)^2}{2} e^s \,ds = e^r\biggl[\frac{(t-r)^2}{2}+(t-r)+1\biggr]. \end{equation} Using (\ref{hep}), the inequality $1-e^{-x}\le x$, (\ref{id1}), and changing variables $s=t-u$, \begin{eqnarray*} h_\ep(t)-\frac13 e^t &\le& \int_{\log(3\ep)}^t \frac {(t-s)^2}{2}\biggl(h_\ep(s)-\frac13 e ^s\biggr) \,ds \\ &=&\int_0^{t-\log(3\ep)} \biggl(h_\ep(t-u)-\frac13 e^{t-u} \biggr)\frac{u^2}{2} \,du. \end{eqnarray*} Applying Lemma \ref{renewalineq} with $\lambda=1$ and $\beta(\cdot )\equiv0$ to $h_\ep(\cdot+\log(3\ep))-\exp(\cdot+\log(3\ep))/3$, \[ h_\ep(t)-\tfrac13 e^t\le0 \qquad\mbox{for any $t\ge\log(3\ep)$}. \] This shows that if $0<\ep<\delta<1$, then $h_\delta(t) \ge h_\ep(t)$ for $t\le\log(3\delta)$.
To compare the exponentials for $t > \log(3\delta)$, we note that \begin{eqnarray*} &&\int_{\log(3\ep)}^{\log(3\delta)} \frac{(t-s)^2}{2} \biggl(h_\ep(s)-\frac13 e^s\biggr) \,ds +\int_{\log(3\delta)}^t \frac{(t-s)^2}{2}\bigl(h_\ep(s)-h_\delta (s)\bigr) \,ds\\ &&\qquad \le0+ \int_0^{t-\log(3\delta)} \bigl(h_\ep(t-u)-h_\delta (t-u)\bigr) \frac{u^2}{2} \,du. \end{eqnarray*} Applying Lemma \ref{renewalineq} with $\lambda=1$ and $\beta(\cdot )\equiv0$ to $h_\ep(\cdot+\log(3\delta))-h_\delta(\cdot+\log(3\delta))$, we see that $h_\ep(t)-h_\delta(t)\le0$ for $t\ge\log(3\delta)$. \end{pf} \begin{lemma} $h(t) =\lim_{\ep\to0} h_\ep(t)$ exists. If $h \not\equiv0$ then $h$ has properties \textup{(a)--(d)} in Lemma \ref{h}. \end{lemma} \begin{pf} Lemma \ref{hepmono} implies that the limit exists. Since $0\le h_\ep(t)\le e^t/3$, $ 0\le h(t)\le e^t/3$ and so $\lim_{t\to -\infty} h(t)=0$. To show that \begin{equation}\label{hsatint} h(t) = 1 - \exp\biggl( -\int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds \biggr), \end{equation} we need to show that as $\ep\to0$ \begin{equation}\label{eq10} \int_{\log(3\ep)}^t\frac{(t-s)^2}{2} h_\ep(s) \,ds \to\int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds. \end{equation} Given $\eta>0$, choose $\delta=\delta(\eta)>0$ so that \[ \delta\bigl[1+\bigl(t-\log(3\delta)\bigr)+\bigl(t-\log(3\delta)\bigr)^2/2\bigr] < \eta/4. \] By the bounded convergence theorem, as $\ep\to0$, \[ \int_{\log(3\delta)}^t \frac{(t-s)^2}{2} h_\ep(s) \,ds \to \int_{\log(3\delta)}^t \frac{(t-s)^2}{2} h(s) \,ds. \] So we can choose $\ep_0=\ep_0(\eta)$ so that the difference between the two integrals is at most $\eta/2$ for any $\ep<\ep_0$. Therefore if $\ep<\ep_0$, then \begin{eqnarray*} && \biggl|\int_{\log(3\ep)}^t \frac{(t-s)^2}{2} h_\ep(s) \,ds -\int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds\biggr|\\ &&\qquad \le\frac{\eta}{2} + 2\int_{-\infty}^{\log(3\delta)} \frac {(t-s)^2}{2} \frac13 e^s \,ds.
\end{eqnarray*} Using the identity in (\ref{id}) we conclude that the second term is \[ \le2\delta\bigl[1+\bigl(t-\log(3\delta)\bigr)+\bigl(t-\log(3\delta)\bigr)^2/2 \bigr]\le\frac{\eta}{2}. \] This shows that (\ref{eq10}) holds, and with (\ref{hep}) and (\ref{id}) proves (\ref{hsatint}). To prove $\lim_{t\to\infty} h(t)=1$, note that if $h(\cdot)\not \equiv0$, then there is an $r$ with $h(r)>0$, and so for $t>r$ \[ \int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds \ge h(r)\int_r^t \frac {(t-s)^2}{2} \,ds = h(r) \frac{(t-r)^3}{3!} \to\infty \] as $t\to\infty$. So in view of (\ref{hsatint}), $h(t)\to1$ as $t\to\infty$, if $h(\cdot)\not\equiv0$. The last detail is to show that if $h(\cdot) \not\equiv0$, then $h(t) \in(0,1)$ for all $t$. Suppose, if possible, $h(t_0)=0$. Equation (\ref{hsatint}) implies $\int_{-\infty}^{t_0} h(s)[(t_0-s)^2/2] \,ds=0$, and hence $h(s)=0$ for $s\le t_0$. Changing variables $s\mapsto t-r$, and using (\ref{hsatint}) again with the inequality $1-e^{-x} \le x$, imply that for any $t>t_0$ \[ h(t)\le\int_{-\infty}^t \frac{(t-s)^2}{2} h(s) \,ds=\int_0^{t-t_0} h(t-r) \frac{r^2}{2} \,dr. \] Applying Lemma \ref{renewalineq} with $\lambda=1$ and $\beta(\cdot)\equiv0$ to the function $h(\cdot+ t_0)$, we see that $h(t)\le0$ for any $t>t_0$. But $h(t)\ge0$ for any $t$, and hence $h \equiv0$, a~contradiction. \end{pf} To complete the proof of Lemma \ref{h} it suffices to show that $|f_\ep(\cdot)-h_\ep(\cdot)|$ and $|g_\ep(\cdot)-h_\ep(\cdot)|$ converge to 0 as $\ep \to 0$. To do this, note that if \[ h_0(t)=1-\exp\biggl(-\int_{-\infty}^{\log(3\ep)}\frac{(t-s)^2}{2} \frac{e^s}{3} \,ds\biggr), \] then \[ h_\ep(t)=1-\bigl(1-h_0(t)\bigr)\exp\biggl(-\int_{\log(3\ep)}^t \frac {(t-s)^2}{2} h_\ep(s) \,ds\biggr), \] and so using the inequality $ |e^{-x}-e^{-y}|\le|x-y|$ for $x,y\ge 0$, \[ |h_\ep(t)-g_\ep(t)| \le|h_0(t)-g_0(t)| +\int_{\log(3\ep)}^t \frac{(t-s)^2}{2}|h_\ep(s)-g_\ep (s)| \,ds. 
\] Using the inequality $0\le e^{-x}-1+x\le x^2/2$ and the identity in (\ref{id}), \begin{eqnarray*} |h_0(t)-g_0(t)| &\le&\frac12 \biggl[\ep+\ep\bigl(t-\log(3\ep)\bigr)+\ep \frac{(t-\log(3\ep))^2}{2}\biggr]^2\\ & \le & \frac32 \ep^2\biggl[1+\bigl(t-\log(3\ep)\bigr)^2+\frac{(t-\log(3\ep ))^4}{4}\biggr]. \end{eqnarray*} Applying Lemma \ref{renewalineq} with $\lambda=1$ and $\beta(t)=1+t^2+t^4/4$ to the function \[ \bigl|h_\ep\bigl(\cdot+\log(3\ep)\bigr)-g_\ep\bigl(\cdot+\log(3\ep)\bigr)\bigr|, \] we have $|h_\ep(t)-g_\ep(t)| \le(3\ep^2/2)\beta*V(t-\log(3\ep))$, where $V(\cdot)$ is as in Lem\-ma~\ref{V}. Using $\lambda=1$ in the expression for $V(\cdot)$ and Lemma \ref{conv}, \begin{eqnarray*} \beta*V(t) &=&\beta(t)+\intt\beta(t-s) V'(s) \,ds \\ &=& \sum_{k=0}^\infty\biggl[\frac{t^{3k}}{(3k)!}+2\frac{t^{3k+2}}{(3k+2)!} +6\frac{t^{3k+4}}{(3k+4)!}\biggr] \le6e^t. \end{eqnarray*} So $|h_\ep(t)-g_\ep(t)| \le(3\ep^2/2) \cdot6\exp(t-\log(3\ep ))$, and so \[ {\sup_{s\in I_{\ep,t}}} |h_\ep(s)-g_\ep(s)| \le3\ep e^t. \] Repeating the argument for $f_\ep(\cdot)$, and noting that $|h_0(t)-f_0(t)|\le|h_0(t)-g_0(t)|+\ep^{7/6}$, \[ {\sup_{s\in I_{\ep,t}}} |h_\ep(s)-f_\ep(s)| \le \bigl(9 \ep^2+\ep^{7/6}\bigr) \exp\bigl(t-\log(3\ep)\bigr) = \biggl(\frac13 \ep^{1/6}+ 3\ep\biggr) e^t. \] This completes the second step and we have proved Lemma \ref{h}. \end{pf*} Now we have all the ingredients to prove Theorem \ref{th3}. \begin{pf*}{Proof of Theorem \ref{th3}} Let $h(\cdot)$ be as in Lemma \ref{h}. Choose $\ep\in(0,\delta/6)$ small enough so that \[ {\sup_{s\in I_{\ep,t}}} |g_\ep(s)-h(s)| <\delta/2,\qquad {\sup_{s\in I_{\ep,t}}} |f_\ep(s)-h(s)| <\delta/2. \] Let $D=\{M\le3\ep N^{2-2\alpha/3}\}$. On the event $D$, $W=\psi(\log(3\ep))>0$. 
So \begin{eqnarray} \label{eq11} && P\Bigl(\sup_{s\le t} \bigl|N^{-2}C_{\psi(s)}-h(s)\bigr|>\delta \Bigr)\nonumber\\ &&\qquad\le P(D^c) + P\bigl(N^{-2}C_W+h(\log(3\ep))>\delta\bigr) \nonumber\\[-8pt]\\[-8pt] &&\qquad\quad{} + P\Bigl(\sup_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-h(s)\bigr)>\delta\Bigr) \nonumber\\ &&\qquad\quad{} + P\Bigl(\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-h(s)\bigr) < -\delta\Bigr).\nonumber \end{eqnarray} To estimate the second term in (\ref{eq11}) note that $h(\log(3\ep ))\!\le\!(1/3)\exp(\log(3\ep))\!<\delta/2$ and \[ P(N^{-2}C_W>\delta/2) \le P\bigl(A_W>(\delta /2)N^2\bigr)\to0 \] as $N\to\infty$ by Lemma \ref{ALXbd}. To estimate the third term in (\ref{eq11}) we use Lemma \ref{glb} to get \begin{eqnarray*} && \limsup_{N\to\infty} P\Bigl( \sup_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-h(s)\bigr) >\delta\Bigr)\\ &&\qquad \le\limsup_{N\to\infty} P\Bigl( \sup_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-g_\ep(s)\bigr)>\delta/2 \Bigr) \\ &&\qquad\le P(M<\ep^{1/3})+\ep^{2/3}. \end{eqnarray*} For the fourth term in (\ref{eq11}) use Lemma \ref{fklb} to get \begin{eqnarray*} && \limsup_{N\to\infty} P\Bigl(\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-h(s)\bigr)<-\delta\Bigr)\\ &&\qquad \le\limsup_{N\to\infty} P\Bigl(\inf_{s\in I_{\ep,t}} \bigl(N^{-2}C_{\psi(s)}-f_\ep(s)\bigr)<-\delta/2\Bigr) \\ &&\qquad\le P(M<\ep^{1/3})+\ep^{1/12}. \end{eqnarray*} Letting $\ep\to0$, we see that for any $\delta>0$, \begin{equation}\label{eq2} \lim_{N\to\infty} P\Bigl(\sup_{s\le t} \bigl|N^{-2}C_{\psi(s)}-h(s)\bigr|>\delta\Bigr)=0. \end{equation} It remains to show that $h(\cdot)\not\equiv0$. Let $\ep, \gamma$ be such that \[ P[M\le(1+\gamma)\ep^{1/3}] + 11 \frac{\ep^{1/3}}{\gamma} <1. \] Fix any $\eta>0$ and let $t_0=\log(3\ep(1+\gamma)+3\eta)$. 
Using Lemmas \ref{ALXbd} and \ref{tausigma}, \begin{eqnarray*} &&\limsup_{N\to\infty} P\bigl(N^{-2} C_{\psi(t_0)} < \ep\bigr) \\ &&\qquad= \limsup_{N\to\infty} P\bigl(\tau(\ep)>\psi(t_0)\bigr)\\ &&\qquad\le\limsup_{N\to\infty} P\bigl[\tau(\ep) > \sigma\bigl(\ep(1+\gamma)\bigr)\bigr] + \limsup_{N\to\infty} P\bigl[\sigma\bigl(\ep(1+\gamma)\bigr)>\psi(t_0)\bigr]\\ &&\qquad\le\limsup_{N\to\infty} P\bigl[\tau(\ep) > \sigma\bigl(\ep(1+\gamma)\bigr)\bigr]\\ &&\qquad\quad{} + \limsup_{N\to\infty} P\bigl(|N^{-2} A_{W_{\ep(1+\gamma )+\eta}} - \ep(1+\gamma)-\eta| > \eta\bigr)\\ &&\qquad \le P[M\le(1+\gamma)\ep^{1/3}] + 11\frac{\ep^{1/3}}{\gamma} < 1. \end{eqnarray*} But if $h(t_0)=0$, we get a contradiction to (\ref{eq2}). This proves $h(\cdot)\not\equiv0$. \end{pf*} \section{Asymptotics for the cover time}\label{sec5} \mbox{} \begin{pf*}{Proof of Theorem \ref{th4}} Theorem \ref{th3} gives a lower bound on the area covered which implies that if $\delta>0$ and $N$ is large, then with high probability the number of centers in $\mathcal{C}_{\psi(0)}$ dominates a Poisson random variable with mean $\lambda(\delta) N^{2-(2\alpha/3)}$, where \[ \lambda(\delta) = \int_{-\infty}^0 \bigl(h(s)-\delta\bigr)^+ \,ds. \] If $\delta_0$ is small enough, $\lambda_0\equiv\lambda(\delta_0) >0$. Dividing the torus into disjoint squares of size $\kappa N^{\alpha/3} \sqrt{\log N}$, where $\kappa$ is a large constant, the probability that a given square is vacant is $\exp(-\lambda_0\kappa^2 \log N)$. If $\kappa\sqrt{\log N} \ge1$, the number of squares is $\le N^{2-(2\alpha/3)}$. So if $\lambda_0\kappa^2 \ge2$, then with high probability none of our squares is vacant. Thus, even if no further centers are born, the entire torus will be covered by a time $\psi(0)+O(N^{\alpha/3} \sqrt{\log N})$. \end{pf*}
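The limit function $h$ is characterized only implicitly by the fixed-point equation (\ref{hsatint}), but it is easy to approximate numerically: truncate at $T_0=\log(3\ep)$ as in (\ref{hep}), evaluate the tail contribution of the profile $e^s/3$ in closed form with the identity (\ref{id}), and iterate the fixed-point map on a grid. The Python sketch below is our illustration only (the grid bounds, step size and iteration count are arbitrary numerical choices, not taken from the text); it exhibits the qualitative properties of Lemma \ref{h}: $h$ is nondecreasing, vanishes at $-\infty$ and tends to $1$ at $+\infty$.

```python
import numpy as np

# Truncation point T0 = log(3*eps); for s <= T0 we use the profile e^s/3,
# whose contribution is given in closed form by identity (id).
eps = 1e-4
T0, T1, m = np.log(3 * eps), 12.0, 1200
t = np.linspace(T0, T1, m)
dt = t[1] - t[0]

# tail(t) = int_{-inf}^{T0} ((t-s)^2/2) (e^s/3) ds, via identity (id)
tail = (np.exp(T0) / 3.0) * ((t - T0) ** 2 / 2 + (t - T0) + 1)

# Kernel K[i, j] = (t_i - t_j)^2 / 2 for t_j <= t_i (simple Riemann sum)
K = np.tril((t[:, None] - t[None, :]) ** 2 / 2.0)

h = np.minimum(np.exp(t) / 3.0, 1.0)   # start from the upper bound h <= e^t/3
for _ in range(500):
    h_new = 1.0 - np.exp(-(tail + K @ h * dt))
    if np.max(np.abs(h_new - h)) < 1e-12:
        h = h_new
        break
    h = h_new
```

Starting from the upper bound $\min(e^t/3,1)$ makes the iterates decrease monotonically toward the nontrivial fixed point, which is bounded away from $0$ because the tail term is strictly positive.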
TITLE: Show that the set of even functions in $C[-1,1]$ is a proper closed subalgebra of $C[-1,1]$. QUESTION [2 upvotes]: I know that an algebra is a vector space $A$ on which a multiplication is defined $(f,g)\mapsto fg$ (from $A\times A$ into $A$) satisfying: i) $(fg)h = f(gh)$, for all $f,g,h \in A$; ii) $f(g + h) = fg + fh, (f + g)h = fh + gh$, for all $f,g,h \in A$; iii) $\alpha(fg) = (\alpha f)g = f(\alpha g)$, for all scalars $\alpha$ and all $f,g \in A$. Question: How do I show that the set of even functions in $C[-1,1]$ is a proper closed subalgebra of $C[-1,1]$? Should I show that the set of even functions in $C[-1,1]$ satisfies all the properties listed above, and is closed? Thanks in advance! REPLY [2 votes]: No, you have to show that sums, products, scalar multiples, and uniform limits of even continuous functions are again even continuous functions. And, to show that it is a proper subalgebra, you need to exhibit a function $f\in C[-1,1]$ that is not even.
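A quick numerical illustration (not a proof, just a sanity check of the closure properties and of the witness function at sample points):

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 201)                  # sample points in [-1, 1]
is_even = lambda F: np.allclose(F(xs), F(-xs))    # checks f(x) == f(-x)

f = lambda x: x**2          # even
g = lambda x: np.cos(x)     # even

assert is_even(f) and is_even(g)
assert is_even(lambda x: f(x) + g(x))    # sum of even functions is even
assert is_even(lambda x: f(x) * g(x))    # product is even
assert is_even(lambda x: 3.0 * f(x))     # scalar multiple is even
assert not is_even(lambda x: x)          # f(x) = x: the subalgebra is proper
```

The same pointwise identities $(f+g)(-x)=(f+g)(x)$, $(fg)(-x)=(fg)(x)$ are what the actual proof verifies, together with the fact that a uniform (indeed pointwise) limit of even functions is even.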
Puffco Plus Replacement Parts.

Plus Mouthpiece Features:
- Use: Concentrates
- Compatible with Puffco Plus
- Durable and Easy to Clean
- Silicone Top
- Built-in Carb Cap
- Scoop Loading Tool
- Scoop Concentrates with Ease

What's in the Box:
- 1x - Plus Replacement Mouthpiece with Dart by Puffco
simple web 2.0 logo for women network

Budget $30-250 USD

We need a simple web 2.0 logo for a community/social network only for women. Light colours needed. Important - we need it, inter alia, as a vector file.

- The logo should be very simple, in a web 2.0 style.
- It should probably contain the shapes of two women.
- Glossy style, probably in a square; it should also work well as a favicon (see the examples!).
- We will provide the colours.
- The text of our company should be designed right next to the logo.

Serious bidders only! Before awarding, we want to see examples of your work!

Awarded to: hi, just check PM for portfolio, and read our reviews. It would be your guidance. (Why we are best for your project)

34 freelancers are bidding on average $53 for this job

Dear sir, We are ready to start project. Please check pmb for details

OK. tell me the name of website and the colors. your logo today! [url removed, login to view] for portfolio
\begin{document} \title{Polar Duality, John Ellipsoid, and Generalized Gaussians} \author{Maurice de Gosson and Charlyne de Gosson\thanks{maurice.de.gosson@univie.ac.at}\\University of Vienna\\Faculty of Mathematics (NuHAG)\\Oskar-Morgenstern-Platz 1\\1090 Vienna AUSTRIA} \maketitle \begin{abstract} We apply the notion of polar duality from convex geometry to the study of quantum covariance ellipsoids in symplectic phase space. We consider in particular the case of \textquotedblleft quantum blobs\textquotedblright\ introduced in previous work; quantum blobs are the smallest symplectic invariant regions of the phase space compatible with the uncertainty principle in its strong Robertson--Schr\"{o}dinger form. We show that they can be characterized by a simple condition using polar duality, thus improving previous results. We apply these geometric results to the characterization of pure Gaussian states in terms of partial information on the covariance ellipsoid. \end{abstract} \textbf{Keywords}: polar duality; Lagrangian plane; symplectic capacity; John and L\"{o}wner ellipsoids; uncertainty principle \textbf{MSC 2020}: 52A20, 52A05, 81S10, 42B35 \section{Introduction} In a recent paper \cite{gopolar} we pointed out the usefulness of the geometric notion of polar duality in expressing the uncertainty principle of quantum mechanics. In our discussion of polar duality we suggested that a quantum system localized in the position representation in a set $X$ cannot be localized in the momentum representation in a set smaller than its polar dual $X^{\hbar}$, the latter being defined as the set of all $p$ in momentum space such that $p\cdot x\leq\hbar$ for all $x\in X$. In the present work we go several steps further by studying the product sets $X\times X^{\hbar}$. 
In particular we find that when $X$ is an ellipsoid, then the John ellipsoid of $X\times X^{\hbar}$ is a \textquotedblleft quantum blob\textquotedblright\ (as defined in previous work \cite{Birk,blob,goluPR}) to which one canonically associates a squeezed coherent state. The two main results of this paper are \begin{itemize} \item Theorem \ref{Thm1}: we prove that a centered phase space ellipsoid $\Omega$ is a quantum blob (\textit{i.e.} a symplectic ball with radius $\sqrt{\hbar}$) if and only if the polar dual of the projection of $\Omega$ on the position space is the intersection of $\Omega$ with the momentum space; this considerably strengthens a previous result obtained in \cite{gopolar}; \item Theorem \ref{Thm2}: it is an analytical version of Theorem \ref{Thm1}, which we use to give a simple characterization of pure Gaussian states in terms of partial information on the covariance ellipsoid of a Gaussian state. This result is related to the so-called \textquotedblleft Pauli problem\textquotedblright. \end{itemize} \begin{notation} The configuration space of a system with $n$ degrees of freedom will in general be written $\mathbb{R}_{x}^{n}$, and its dual (the momentum space) $\mathbb{R}_{p}^{n}$. The position variables will be written $x=(x_{1} ,...,x_{n})$ and the momentum variables $p=(p_{1},...,p_{n})$. The duality form (identified with the usual inner product) is $p\cdot x=p_{1}x_{1} +\cdot\cdot\cdot+p_{n}x_{n}$. The product $\mathbb{R}_{x}^{n}\times \mathbb{R}_{p}^{n}$ is identified with $\mathbb{R}^{2n}$ and is equipped with the standard symplectic form $\sigma$ defined by $\sigma(z,z^{\prime})=p\cdot x^{\prime}-p^{\prime}\cdot x$ if $z=(x,p)$, $z^{\prime}=(x^{\prime},p^{\prime })$. The corresponding symplectic group is denoted $\operatorname*{Sp}(n)$: $S\in\operatorname*{Sp}(n)$ if and only if $\sigma(Sz,Sz^{\prime})=\sigma (z,z^{\prime})$ for all $z,z^{\prime}$. 
We denote by $\operatorname*{Sym} _{++}(n,\mathbb{R})$ the cone of real positive definite symmetric $n\times n$ matrices, and by $GL(n,\mathbb{R})$ the general (real) linear group (the invertible real $n\times n$ matrices). \end{notation} \section{A Geometric Quantum Phase Space} \subsection{Polar duality and quantum states} Let $X\subset\mathbb{R}_{x}^{n}$ be a convex body: $X$ is compact\ and convex and has non-empty interior $\operatorname*{int}(X)$. If $0\in \operatorname*{int}(X)$ we define the $\hbar$-polar dual $X^{\hslash} \subset\mathbb{R}_{p}^{n}$ of $X$ by \begin{equation} X^{\hslash}=\{p\in\mathbb{R}_{p}^{n}:\sup\nolimits_{x\in X}(p\cdot x)\leq \hbar\}\label{omo2} \end{equation} where $\hbar$ is a positive constant (we have $X^{\hslash}=\hbar X^{o}$ where $X^{o}$ is the traditional polar dual from convex geometry). The following properties of polar duality are obvious \cite{Vershynin}: \begin{itemize} \item $(X^{\hslash})^{\hbar}=X$ (reflexivity) and $X\subset Y\Longrightarrow Y^{\hslash}\subset X^{\hslash}$ (anti-monotonicity), \item For all $L\in GL(n,\mathbb{R})$: \begin{equation} (LX)^{\hbar}=(L^{T})^{-1}X^{\hslash} \label{scaling} \end{equation} (scaling property). In particular $(\lambda X)^{\hbar}=\lambda^{-1}X^{\hslash }$ for all $\lambda\in\mathbb{R}$, $\lambda\neq0$. \end{itemize} We can view $X$ and $X^{\hslash}$ as subsets of phase space by the identifications $\mathbb{R}_{x}^{n}\equiv\mathbb{R}_{x}^{n}\times0$ and $\mathbb{R}_{p}^{n}\equiv0\times\mathbb{R}_{p}^{n}$. Writing $\ell _{X}=\mathbb{R}_{x}^{n}\times0$ and $\ell_{P}=0\times\mathbb{R}_{p}^{n}$, the transformation $X\longrightarrow X^{\hslash}$ is a mapping $\ell _{X}\longrightarrow\ell_{P}$. With this interpretation formula (\ref{scaling}) can be rewritten in symplectic form as \begin{equation} (M_{L^{-1}}X)^{\hbar}=M_{L^{T}}X^{\hslash}\label{ML} \end{equation} where $M_{L^{-1}}= \begin{pmatrix} L^{-1} & 0\\ 0 & L^{T} \end{pmatrix} $ is in $\operatorname*{Sp}(n)$. 
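The scaling property (\ref{scaling}) is easy to test numerically for ellipsoids. The following Python sketch is our illustration (the random bodies and the constant $\hbar=1$ are arbitrary choices); it uses the standard support-function formula $\sup\{p\cdot x: Bx\cdot x\le\hbar\}=\sqrt{\hbar\, p\cdot B^{-1}p}$, a consequence of the Cauchy--Schwarz inequality, so that $p$ belongs to an $\hbar$-polar dual precisely when this value is $\le\hbar$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, hbar = 3, 1.0

# X = {x : Ax·x <= hbar} with A symmetric positive definite, and a
# generic invertible matrix L (both randomly chosen here).
R = rng.standard_normal((n, n))
A = R @ R.T + n * np.eye(n)
L = rng.standard_normal((n, n)) + n * np.eye(n)

def support(B, p, hbar):
    # sup { p·x : Bx·x <= hbar } = sqrt(hbar * p·B^{-1}p)  (Cauchy-Schwarz);
    # p lies in the hbar-polar dual iff this value is <= hbar.
    return np.sqrt(hbar * p @ np.linalg.solve(B, p))

# Quadratic form of the transformed body LX = {Lx : Ax·x <= hbar}:
B = np.linalg.inv(L).T @ A @ np.linalg.inv(L)

for _ in range(100):
    p = rng.standard_normal(n)
    # p ∈ (LX)^hbar iff L^T p ∈ X^hbar, i.e. (LX)^hbar = (L^T)^{-1} X^hbar
    assert np.isclose(support(B, p, hbar), support(A, L.T @ p, hbar))
```

The two sides are computed along different numerical paths (inverting the transformed form versus transforming the test covector), so their agreement is a genuine consistency check of (\ref{scaling}).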
Notice that $M_{L^{-1}}:\ell_{X} \longrightarrow\ell_{X}$ and $M_{L^{-1}}:\ell_{P}\longrightarrow\ell_{P}$. Suppose now that $X$ is an ellipsoid centered at the origin: \begin{equation} X=\{x\in\mathbb{R}_{x}^{n}:Ax\cdot x\leq\hbar\} \label{Xell} \end{equation} where $A\in\operatorname*{Sym}_{++}(n,\mathbb{R})$. The polar dual $X^{\hslash}$ is the ellipsoid \begin{equation} X^{\hslash}=\{p\in\mathbb{R}_{p}^{n}:A^{-1}p\cdot p\leq\hbar\}. \label{Xelldual} \end{equation} In particular, the polar dual of the ball $B_{X}^{n}(\sqrt{\hbar})=\{x:|x|\leq\sqrt{\hbar}\}$ is $(B_{X}^{n}(\sqrt{\hbar}))^{\hbar}=B_{P}^{n}(\sqrt{\hbar})$. Let $\Omega$ be a convex body in $\mathbb{R}^{2n}$. Recall \cite{Ball} that the John ellipsoid $\Omega_{\mathrm{John}}$ is the unique ellipsoid in $\mathbb{R}^{2n}$ with maximum volume contained in $\Omega$. If $M\in GL(2n,\mathbb{R} )$ then \begin{equation} (M(\Omega))_{\mathrm{John}}=M(\Omega_{\mathrm{John}}). \label{JL1} \end{equation} In previous work \cite{blob,goluPR} we called the image of the phase space ball $B^{2n}(\sqrt{\hbar})$ under some $S\in\operatorname*{Sp}(n)$ a \textquotedblleft quantum blob\textquotedblright. Quantum blobs are minimum quantum uncertainty phase space units. The product $X\times X^{\hslash}$ contains a \emph{unique} quantum blob: \begin{proposition} \label{Prop1}Let $X=\{x:Ax\cdot x\leq\hbar\}$. The John ellipsoid of the product $X\times X^{\hslash}$ is a quantum blob, namely \begin{equation} (X\times X^{\hslash})_{\mathrm{John}}=M_{A^{1/2}}(B^{2n}(\sqrt{\hbar})) \label{Johnma} \end{equation} where $M_{A^{1/2}}= \begin{pmatrix} A^{-1/2} & 0\\ 0 & A^{1/2} \end{pmatrix} \in\operatorname*{Sp}(n)$. \end{proposition} \begin{proof} That $M_{A^{1/2}}\in\operatorname*{Sp}(n)$ is clear. Let $B_{X}^{n} (\sqrt{\hbar})$ and $B_{P}^{n}(\sqrt{\hbar})$ be the balls with radius $\sqrt{\hbar}$ in $\mathbb{R}_{x}^{n}$ and $\mathbb{R}_{p}^{n}$, respectively. 
We have, by (\ref{Xell}), (\ref{Xelldual}), and (\ref{JL1}), \begin{align*} (X\times X^{\hslash})_{\mathrm{John}} & =(A^{-1/2}B_{X}^{n}(\sqrt{\hbar })\times A^{1/2}B_{P}^{n}(\sqrt{\hbar}))_{\mathrm{John}}\\ & =M_{A^{1/2}}(B_{X}^{n}(\sqrt{\hbar})\times B_{P}^{n}(\sqrt{\hbar }))_{\mathrm{John}}. \end{align*} Let us show that \[ (B_{X}^{n}(\sqrt{\hbar})\times B_{P}^{n}(\sqrt{\hbar}))_{\mathrm{John} }=B^{2n}(\sqrt{\hbar}); \] this will prove our assertion. The inclusion$\ B^{2n}(\sqrt{\hbar})\subset B_{X}^{n}(\sqrt{\hbar})\times B_{P}^{n}(\sqrt{\hbar})$ is obvious, and we cannot have $B^{2n}(R)\subset B_{X}^{n}(\sqrt{\hbar})\times B_{P}^{n} (\sqrt{\hbar})$ if $R>\sqrt{\hbar}$. Assume now that the John ellipsoid $\Omega _{\mathrm{John}}$ of $\Omega=B_{X}^{n}(\sqrt{\hbar})\times B_{P}^{n} (\sqrt{\hbar})$ is defined by \[ Ax\cdot x+Bx\cdot p+Cp\cdot p\leq\hbar \] where $A,C>0$ are real symmetric $n\times n$ matrices and $B$ is a real $n\times n$ matrix. Since $\Omega$ is invariant by the transformation $(x,p)\longmapsto(p,x)$ so is $\Omega_{\mathrm{John}}$ and we must thus have $A=C$ and $B=B^{T}$. Similarly, $\Omega$ being invariant by the partial reflection $(x,p)\longmapsto(-x,p)$ we get $B=0$ so $\Omega_{\mathrm{John}}$ is defined by $Ax\cdot x+Ap\cdot p\leq \hbar$. The next step is to observe that $\Omega$, and hence $\Omega _{\mathrm{John}}$, are invariant under all symplectic rotations $(x,p)\longmapsto(Hx,Hp)$ where $H\in O(n,\mathbb{R})$ so we must have $AH=HA$ for all $H\in O(n,\mathbb{R})$, but this is only possible if $A=\lambda I_{n\times n}$ for some $\lambda\in\mathbb{R}$. The John ellipsoid is thus of the type $B^{2n}(\sqrt{\hbar/\lambda})$ for some $\lambda\geq1$ and this concludes the proof in view of the inclusion $B^{2n}(\sqrt{\hbar})\subset B_{X} ^{n}(\sqrt{\hbar})\times B_{P}^{n}(\sqrt{\hbar})$ since we cannot have $\lambda>1$. 
\end{proof} \begin{remark} \label{Rem1}The John ellipsoid $(X\times X^{\hslash})_{\mathrm{John}}$ is the set of all $(x,p)\in\mathbb{R}_{z}^{2n}$ such that $Ax\cdot x+A^{-1}p\cdot p\leq\hbar$. The orthogonal projections of $(X\times X^{\hslash} )_{\mathrm{John}}$ on the coordinate planes $\ell_{X}=\mathbb{R}_{x}^{n} \times0$ and $\ell_{P}=0\times\mathbb{R}_{p}^{n}$ are therefore $\Pi _{X}(X\times X^{\hslash})_{\mathrm{John}}=X$ and $\Pi_{P}(X\times X^{\hslash })_{\mathrm{John}}=X^{\hslash}$. \end{remark} The construction above shows that we have a canonical identification between the ellipsoids $X=\{x:Ax\cdot x\leq\hbar\}$ and the squeezed coherent states \begin{equation} \phi_{A}(x)=(\pi\hbar)^{-n/4}(\det A)^{1/4}e^{-Ax\cdot x/2\hbar}.\label{coh1} \end{equation} In fact, the covariance ellipsoid of $\phi_{A}$ is precisely the John ellipsoid of the product $X\times X^{\hslash}$ as can be seen calculating the Wigner transform of $\phi_{A}$ \begin{equation} W\phi_{A}(z)=(\pi\hbar)^{-n}\exp\left[ -\frac{1}{\hbar}(Ax\cdot x+A^{-1}p\cdot p)\right] \label{wfa} \end{equation} which corresponds to the canonical bijection \[ X\longmapsto(X\times X^{\hslash})_{\mathrm{John}} \] between (centered) configuration space ellipsoids $X$ and John ellipsoids of $X\times X^{\hslash}$ (we will have more to say about this correspondence in the forthcoming sections). \subsection{Polar duality and the symplectic camel} Symplectic capacities (see \cite{cielibak,goluPR} for reviews) are numerical invariants that serve as a fundamental tool in the study of various symplectic and Hamiltonian rigidity phenomena; they are closely related to Gromov's symplectic non-squeezing theorem \cite{gr85}. 
We denote by $\operatorname*{Symp}(n)$ the group of all symplectomorphisms $(\mathbb{R}_{z}^{2n},\sigma)\longrightarrow(\mathbb{R}_{z}^{2n},\sigma).$ That is, $f\in\operatorname*{Symp}(n)$ if and only if $f$ is a diffeomorphism of $\mathbb{R}_{z}^{2n}$ whose Jacobian matrix $Df(z)$ is in $\operatorname*{Sp} (n)$ for every $z\in\mathbb{R}_{z}^{2n}$. A (normalized) symplectic capacity on $(\mathbb{R}^{2n},\sigma)$ associates to every subset $\Omega\subset\mathbb{R}_{z}^{2n}$ a number $c(\Omega )\in\mathbb{[}0,+\infty\mathbb{]}$ such that the following properties hold: \begin{description} \item[SC1] \textit{Monotonicity}: If $\Omega\subset\Omega^{\prime}$ then $c(\Omega)\leq c(\Omega^{\prime})$; \item[SC2] \textit{Conformality}: For every $\lambda\in\mathbb{R}$ we have $c(\lambda\Omega)=\lambda^{2}c(\Omega)$; \item[SC3] \textit{Symplectic invariance}: $c(f(\Omega))=c(\Omega)$ for every $f\in\operatorname*{Symp}(n)$; \item[SC4] \textit{Normalization}: For $1\leq j\leq n$ we have $c(B^{2n} (r))=\pi r^{2}=c(Z_{j}^{2n}(r))$ where $Z_{j}^{2n}(r)$ is the cylinder with radius $r$ based on the $x_{j},p_{j}$ plane. \end{description} There exists a symplectic capacity, denoted by $c_{\max}$, such that $c\leq c_{\max}$ for every symplectic capacity. It is defined by \begin{equation} c_{\max}(\Omega)=\inf_{f\in\operatorname*{Symp}(n)}\{\pi r^{2}:f(\Omega )\subset Z_{j}^{2n}(r)\} \label{cmax} \end{equation} where $Z_{j}^{2n}(r)$ is the phase space cylinder defined by $x_{j}^{2} +p_{j}^{2}\leq r^{2}$ and $\operatorname*{Symp}(n)$ the group of all symplectomorphisms of $\mathbb{R}^{2n}$ equipped with the standard symplectic structure. Similarly, there exists a smallest symplectic capacity $c_{\min}$; it is defined by \[ c_{\min}(\Omega)=\sup_{f\in\operatorname*{Symp}(n)}\{\pi r^{2}:f(B^{2n} (r))\subset\Omega\}. 
\] One shows \cite{armios08,arkaos13} that if $X\subset\mathbb{R} _{x}^{n}$ and $P\subset\mathbb{R}_{p}^{n}$ are centrally symmetric convex bodies then we have \begin{equation} c_{\max}(X\times P)=4\hbar\sup\{\lambda>0:\lambda X^{\hbar}\subset P\}. \label{yaron1} \end{equation} In particular, \begin{equation} c_{\max}(X\times X^{\hbar})=4\hbar~. \label{yaron3} \end{equation} One also has the weaker notion of linear symplectic capacity, obtained by replacing condition (SC3) with \begin{description} \item[SC3lin] \textit{Linear} \textit{symplectic invariance}: $c(S(\Omega ))=c(\Omega)$ for every $S\in\operatorname*{Sp}(n)$ and $c(\Omega +z)=c(\Omega)$ for every $z\in\mathbb{R}^{2n}$. \end{description} One then defines the corresponding minimal and maximal linear symplectic capacities $c_{\min}^{\mathrm{lin}}$ and $c_{\max}^{\mathrm{lin}}$ \begin{align} c_{\min}^{\mathrm{lin}}(\Omega) & =\sup_{S\in\operatorname*{Sp}(n)}\{\pi R^{2}:S(B^{2n}(z,R))\subset\Omega,\ z\in\mathbb{R}^{2n}\}\label{clin1}\\ c_{\max}^{\mathrm{lin}}(\Omega) & =\inf_{S\in\operatorname*{Sp}(n)}\{\pi r^{2}:S(\Omega)\subset Z_{j}^{2n}(z,r),\ z\in\mathbb{R}^{2n}\}. \label{clin2} \end{align} It turns out that all symplectic capacities agree on ellipsoids. They are calculated as follows: assume that \[ \Omega=\{z\in\mathbb{R}^{2n}:Mz\cdot z\leq r^{2}\} \] where $M\in\operatorname*{Sym}^{+}(2n,\mathbb{R})$, and let $\lambda _{1}^{\sigma},\lambda_{2}^{\sigma},...,\lambda_{n}^{\sigma}$ be the symplectic eigenvalues of $M$, \textit{i.e.} the numbers $\lambda_{j}^{\sigma}>0$ ($1\leq j\leq n$) such that the $\pm i\lambda_{j}^{\sigma}$ are the eigenvalues of the antisymmetric matrix $M^{1/2}JM^{1/2}$. Then \begin{equation} c(\Omega)=\pi r^{2}/\lambda_{\max}^{\sigma}\label{capellipse} \end{equation} where $\lambda_{\max}^{\sigma}=\max\{\lambda_{1}^{\sigma},\lambda_{2}^{\sigma },...,\lambda_{n}^{\sigma}\}$ (see \cite{go09,goluPR}). 
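Formula (\ref{capellipse}) is straightforward to implement: the symplectic eigenvalues of $M$ are the moduli of the eigenvalues of $JM$ (each modulus occurring twice). The following Python sketch is our illustration only (function names and the test values are our choices); for $n=1$ it recovers the familiar fact that the capacity of an ellipse is its area.

```python
import numpy as np

def symplectic_eigenvalues(M):
    """Symplectic eigenvalues of M in Sym_{++}(2n, R): the moduli of the
    eigenvalues of J M, each of which occurs twice (once for +i*lam and
    once for -i*lam); cf. Williamson's diagonalization."""
    n = M.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    moduli = np.sort(np.abs(np.linalg.eigvals(J @ M)))
    return moduli[::2]                       # keep one copy of each pair

def capacity_of_ellipsoid(M, r):
    # c(Omega) = pi r^2 / lambda_max  for Omega = {z : Mz·z <= r^2}
    return np.pi * r**2 / symplectic_eigenvalues(M).max()

# n = 1: Omega = {a x^2 + b p^2 <= r^2} has the single symplectic
# eigenvalue sqrt(ab), so c(Omega) = pi r^2 / sqrt(ab), the area of the
# ellipse -- as it must be in dimension 2.
a, b = 2.0, 8.0
assert np.isclose(symplectic_eigenvalues(np.diag([a, b]))[0], np.sqrt(a * b))
assert np.isclose(capacity_of_ellipsoid(np.diag([a, b]), 1.0), np.pi / 4)
```

For $n>1$ the same routine picks out the plane in which the ellipsoid's cross-section is smallest, in accordance with (\ref{capellipse}).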
The following technical Lemma will allow us to prove a refinement of formula (\ref{yaron3}). \begin{lemma} \label{Lemmaclin}Let $\Omega\subset\mathbb{R}^{2n}$ be a centrally symmetric body. We have \begin{equation} c_{\min}^{\mathrm{lin}}(\Omega)=\sup_{S\in\operatorname*{Sp}(n)}\{\pi R^{2}:S(B^{2n}(R))\subset\Omega\}~. \label{clinmin} \end{equation} \end{lemma} \begin{proof} Since $\Omega$ is centrally symmetric we have $S(B^{2n}(z_{0},R))\subset \Omega$ if and only if $S(B^{2n}(-z_{0},R))\subset\Omega$. The ellipsoid $S(B^{2n}(R))$ is interpolated between $S(B^{2n}(z_{0},R))$ and $S(B^{2n} (-z_{0},R))$ using the mapping $t\longmapsto$ $z(t)=z-2tz_{0}$ where $z\in S(B^{2n}(z_{0},R))$, and is hence contained in $\Omega$ by convexity. \end{proof} \begin{proposition} Let $c_{\min}^{\mathrm{lin}}$ be the smallest linear symplectic capacity and $X\subset\mathbb{R}_{x}^{n}$ a centered ellipsoid. We have \begin{equation} c_{\min}^{\mathrm{lin}}(X\times X^{\hbar})=\pi\hbar.\label{clinmax} \end{equation} \end{proposition} \begin{proof} In view of Lemma \ref{Lemmaclin}, $c_{\min}^{\mathrm{lin}}(X\times X^{\hbar})$ is the greatest number $\pi R^{2}$ such that $X\times X^{\hbar}$ contains a symplectic ball $S(B^{2n}(R))$, $S\in\operatorname*{Sp}(n)$. In view of Proposition \ref{Prop1}, $M_{A^{1/2}}(B^{2n}(\sqrt{\hbar}))$ is such a symplectic ball; since it is also the largest ellipsoid contained in $X\times X^{\hbar}$ we must have \[ c_{\min}^{\mathrm{lin}}(X\times X^{\hbar})=c_{\min}^{\mathrm{lin}}(M_{A^{1/2} }(B^{2n}(\sqrt{\hbar})))=\pi\hbar. \] \end{proof} \section{Projections of Quantum Blobs} In this section we generalize the observation made in Remark \ref{Rem1}. \subsection{Block matrix notation} For $M\in\operatorname*{Sym}_{++}(2n,\mathbb{R})$ we define the phase space ellipsoid \begin{equation} \Omega=\{z\in\mathbb{R}^{2n}:Mz\cdot z\leq\hbar\}. 
\label{Mellipse} \end{equation} Setting $M=\frac{1}{2}\hbar\Sigma^{-1}$ we can visualize $\Omega$ as the covariance ellipsoid of a (classical or quantum) state: \begin{equation} \Omega=\{z\in\mathbb{R}^{2n}:\tfrac{1}{2}\Sigma^{-1}z\cdot z\leq1\}. \label{covellipse} \end{equation} Let us write $M$ and $\Sigma$ in block form \begin{equation} M= \begin{pmatrix} M_{XX} & M_{XP}\\ M_{PX} & M_{PP} \end{pmatrix} \text{ \ },\text{ \ }\Sigma= \begin{pmatrix} \Sigma_{XX} & \Sigma_{XP}\\ \Sigma_{PX} & \Sigma_{PP} \end{pmatrix} \label{M} \end{equation} where the blocks are $n\times n$ matrices. The condition $M>0$ ensures that $M_{XX}>0$, $M_{PP}>0$, and $M_{PX}=M_{XP}^{T}$ (\textit{resp}. $\Sigma _{XX}>0$, $\Sigma_{PP}>0$, and $\Sigma_{PX}=\Sigma_{XP}^{T}$; see \cite{zhang}). Using classical formulas for the inversion of block matrices \cite{Tzon} we have \begin{equation} M^{-1}= \begin{pmatrix} (M/M_{PP})^{-1} & -(M/M_{PP})^{-1}M_{XP}M_{PP}^{-1}\\ -M_{PP}^{-1}M_{PX}(M/M_{PP})^{-1} & (M/M_{XX})^{-1} \end{pmatrix} \label{Minverse} \end{equation} where $M/M_{PP}$ and $M/M_{XX}$ are the Schur complements: \begin{align} M/M_{PP} & =M_{XX}-M_{XP}M_{PP}^{-1}M_{PX}\label{schurm1}\\ M/M_{XX} & =M_{PP}-M_{PX}M_{XX}^{-1}M_{XP}. \label{schurm2} \end{align} Similarly, \begin{equation} \Sigma^{-1}= \begin{pmatrix} (\Sigma/\Sigma_{PP})^{-1} & -(\Sigma/\Sigma_{PP})^{-1}\Sigma_{XP}\Sigma _{PP}^{-1}\\ -\Sigma_{PP}^{-1}\Sigma_{PX}(\Sigma/\Sigma_{PP})^{-1} & (\Sigma/\Sigma _{XX})^{-1} \end{pmatrix} \label{covinv} \end{equation} Notice that these formulas imply \begin{gather} \Sigma_{XX}=\frac{\hbar}{2}(M/M_{PP})^{-1}\text{ },\text{ }\Sigma_{PP} =\frac{\hbar}{2}(M/M_{XX})^{-1}\label{msig1}\\ \Sigma_{XP}=-\frac{\hbar}{2}(M/M_{PP})^{-1}M_{XP}M_{PP}^{-1}. 
\label{msig2} \end{gather} \begin{lemma} \label{LemmaBlob}The ellipsoid $\Omega$ is a quantum blob $S(B^{2n} (\sqrt{\hbar}))$, $S\in\operatorname*{Sp}(n)$ if and only if the block entries of $M=(SS^{T})^{-1}$ satisfy \begin{equation} M_{XX}M_{PP}-M_{XP}^{2}=I_{n\times n}\text{ , }M_{PX}M_{PP}=M_{PP}M_{XP}. \label{RSMatrixM} \end{equation} \end{lemma} \begin{proof} The ellipsoid $\Omega$ is the set of all $z\in\mathbb{R}^{2n}$ such that $(SS^{T})^{-1}z\cdot z\leq\hbar$. The positive definite matrix $M=(SS^{T})^{-1}$ is thus symplectic. This condition is equivalent to the matrix relation $MJM=J$, hence (\ref{RSMatrixM}). \end{proof} Notice that the conditions above can be written, in terms of the covariance matrix, \begin{equation} \Sigma_{XX}\Sigma_{PP}-\Sigma_{XP}^{2}=\tfrac{1}{4}\hbar^{2}I_{n\times n}\text{ \textit{and} }\Sigma_{PX}\Sigma_{PP}=\Sigma_{PP}\text{ }\Sigma_{XP}. \label{RSMatrix} \end{equation} This is a matrix form of the saturated Robertson--Schr\"{o}dinger uncertainty principle \cite{go09,goluPR}. \subsection{Orthogonal projections and intersections} Let $M$ be the symmetric positive definite matrix (\ref{M}). The following result is well-known (see for instance \cite{gopolar}): \begin{lemma} \label{LemmaProj}The orthogonal projections $\Pi_{\ell_{X}}\Omega$ and $\Pi_{\ell_{P}}\Omega$ of $\Omega$ on the coordinate subspaces $\ell_{X}=\mathbb{R} _{x}^{n}\times0$ and $\ell_{P}=0\times\mathbb{R}_{p}^{n}$ are the ellipsoids \begin{align} \Pi_{\ell_{X}}\Omega & =\{x\in\mathbb{R}_{x}^{n}:(M/M_{PP})x^{2}\leq \hbar\}\label{boundb}\\ \Pi_{\ell_{P}}\Omega & =\{p\in\mathbb{R}_{p}^{n}:(M/M_{XX})p^{2}\leq\hbar\}. \label{bounda} \end{align} In terms of the covariance matrix $\Sigma$ and the formulas (\ref{msig1}) this is \begin{align} \Pi_{\ell_{X}}\Omega & =\{x\in\mathbb{R}_{x}^{n}:\tfrac{1}{2}\Sigma_{XX} ^{-1}x^{2}\leq1\}\label{boundc}\\ \Pi_{\ell_{P}}\Omega & =\{p\in\mathbb{R}_{p}^{n}:\tfrac{1}{2}\Sigma_{PP} ^{-1}p^{2}\leq1\}. 
\label{boundd} \end{align} \end{lemma} Orthogonal projections and intersections are exchanged by polar duality: \begin{lemma} \label{Propinter}For every linear subspace $\ell$ of $\mathbb{R}^{n}$ we have \begin{equation} (X\cap\ell)^{\hbar}=\Pi_{\ell}(X^{\hslash})\text{ \textit{and} }(\Pi_{\ell }X)^{\hbar}=X^{\hslash}\cap\ell\label{projint} \end{equation} where $\Pi_{\ell}$ is the orthogonal projection $\mathbb{R}_{x}^{n} \longrightarrow\ell$. (In both equalities, the operation of taking the polar set in the left hand side is made inside $\ell$.) \end{lemma} \begin{proof} (See Vershynin \cite{Vershynin}). Let us first show that $\Pi_{\ell }(X^{\hslash})\subset(X\cap\ell)^{\hbar}$. Let $p\in X^{\hslash}$. We have, for every $x\in X\cap\ell$, \[ x\cdot\Pi_{\ell}p=\Pi_{\ell}x\cdot p=x\cdot p\leq\hbar \] hence $\Pi_{\ell}p\in(X\cap\ell)^{\hbar}$. To prove the reverse inclusion we note that it is sufficient, by the anti-monotonicity property of polar duality, to prove that $(\Pi_{\ell}(X^{\hslash}))^{\hbar}\subset X\cap\ell$. Let $x\in(\Pi_{\ell}(X^{\hslash}))^{\hbar}$; we have $x\cdot\Pi_{\ell} p\leq\hbar$ for every $p\in X^{\hslash}$. Since $x\in\ell$ (because the dual of a subset of $\ell$ is in $\ell$) we also have \[ \hbar\geq x\cdot\Pi_{\ell}p=\Pi_{\ell}x\cdot p=x\cdot p \] from which it follows that $x\in(X^{\hbar})^{\hbar}=X$, which shows that $x\in X\cap\ell$. This completes the proof of the first formula in (\ref{projint}). The second formula in (\ref{projint}) follows by duality, noting that in view of the reflexivity of polar duality we have \[ (X^{\hslash}\cap\ell)^{\hbar}=\Pi_{\ell}\bigl((X^{\hslash})^{\hbar}\bigr)=\Pi_{\ell}X \] and hence $X^{\hslash}\cap\ell=(\Pi_{\ell}X)^{\hbar}$. \end{proof} \subsection{Quantum blobs from projections and intersections} In \cite{gopolar} we proved that if $\Omega$ is a quantum blob then $\Pi _{\ell_{X}}\Omega$ and $\Pi_{\ell_{P}}\Omega$ are polar duals of each other. 
The following result considerably improves this statement: \begin{theorem} \label{Thm1}A centered phase space ellipsoid \[ \Omega=\{z\in\mathbb{R}_{z}^{2n}:Mz\cdot z\leq\hbar\} \] ($M\in\operatorname*{Sym}_{++}(2n,\mathbb{R})$) is a quantum blob $S(B^{2n}(\sqrt{\hbar}))$, $S\in\operatorname*{Sp}(n)$, if and only if the equivalent conditions \begin{equation} (\Pi_{\ell_{X}}\Omega)^{\hbar}=\Omega\cap\ell_{P}\text{ \ , \ }\Pi_{\ell_{X} }\Omega=(\Omega\cap\ell_{P})^{\hbar} \label{bon1} \end{equation} are satisfied. In terms of the matrix $M$ these conditions are equivalent to the identity \begin{equation} M_{PP}(M/M_{PP})=I_{n\times n}. \label{cond0} \end{equation} \end{theorem} \begin{proof} That the conditions $(\Pi_{\ell_{X}}\Omega)^{\hbar}=\Omega\cap\ell_{P}$ and $\Pi_{\ell_{X}}\Omega=(\Omega\cap\ell_{P})^{\hbar}$ are equivalent is clear by the reflexivity of polar duality. Writing $M$ in block matrix form, the condition $z=(x,p)\in\Omega$ means that \[ M_{XX}x^{2}+2M_{PX}xp+M_{PP}p^{2}\leq\hbar \] (we are using again the abbreviations $M_{XX}x\cdot x=M_{XX}x^{2}$, etc.) and the intersection $\Omega\cap\ell_{P}$ is therefore \[ \Omega\cap\ell_{P}=\{p:M_{PP}p^{2}\leq\hbar\}. \] On the other hand, in view of Lemma \ref{LemmaProj}, \[ \Pi_{\ell_{X}}\Omega=\{x:(M/M_{PP})x^{2}\leq\hbar\} \] and the polar dual $(\Pi_{\ell_{X}}\Omega)^{\hbar}$ is \[ (\Pi_{\ell_{X}}\Omega)^{\hbar}=\{p:(M/M_{PP})^{-1}p^{2}\leq\hbar\} \] so we have to prove that $\Omega$ is a quantum blob if and only if (\ref{cond0}) holds. Using the explicit expression (\ref{schurm1}) of the Schur complement this is equivalent to the condition \begin{equation} (M_{XX}-M_{XP}M_{PP}^{-1}M_{PX})M_{PP}=I_{n\times n}. \label{cond1} \end{equation} Assume now that $\Omega$ is a quantum blob; then $\Omega=S(B^{2n}(\sqrt{\hbar }))$ for some $S\in\operatorname*{Sp}(n)$; then $z\in\Omega$ if and only if $Mz\cdot z\leq\hbar$ where $M=(S^{T})^{-1}S^{-1}$.
Since $M\in \operatorname*{Sp}(n)\cap\operatorname*{Sym}_{++}(2n,\mathbb{R})$ we have $M_{PP}M_{XP}=M_{PX}M_{PP}$ (second formula (\ref{RSMatrixM}) in Lemma \ref{LemmaBlob}) and hence \begin{align*} (M_{XX}-M_{XP}M_{PP}^{-1}M_{PX})M_{PP} & =M_{XX}M_{PP}-M_{XP}M_{PP} ^{-1}(M_{PX}M_{PP})\\ & =M_{XX}M_{PP}-(M_{XP})^{2}. \end{align*} Using the first formula (\ref{RSMatrixM}) in Lemma \ref{LemmaBlob} we thus have \begin{equation} (M_{XX}-M_{XP}M_{PP}^{-1}M_{PX})M_{PP}=I_{n\times n} \label{mim} \end{equation} which implies that $(\Pi_{\ell_{X}}\Omega)^{\hbar}=\Omega\cap\ell_{P}$, so we have proven the necessity of the condition (\ref{bon1}). Let us prove that this condition is sufficient as well. Let us perform a Williamson diagonalization \cite{Birk} of the matrix $M$: there exists $S_{0}\in\operatorname*{Sp}(n)$ such that \begin{equation} M=S_{0}^{T}DS_{0}\text{ \ , \ }D= \begin{pmatrix} \Lambda^{\omega} & 0_{n\times n}\\ 0_{n\times n} & \Lambda^{\omega} \end{pmatrix} \label{Williamson} \end{equation} where $\Lambda^{\omega}=\operatorname*{diag}(\lambda_{1}^{\omega} ,...,\lambda_{n}^{\omega})$; here $\lambda_{1}^{\omega},...,\lambda _{n}^{\omega}$ are the symplectic eigenvalues of $M$ (\textit{i.e.} the moduli of the usual eigenvalues of the matrix $JM$; they are the same as those of the antisymmetric matrix $M^{1/2}JM^{1/2}$ and are hence of the type $\pm i\lambda$, $\lambda>0$). Since a symplectic automorphism transforms a quantum blob into another quantum blob, we can reduce the proof of the sufficiency of (\ref{bon1}) to the case where $\Omega$ is the ellipsoid \[ \Omega_{0}=\{z\in\mathbb{R}^{2n}:\Lambda^{\omega}x^{2}+\Lambda^{\omega} p^{2}\leq\hbar\}. \] We have here $\Pi_{\ell_{X}}\Omega_{0}=\{x:\Lambda^{\omega}x^{2}\leq\hbar\}$ hence $(\Pi_{\ell_{X}}\Omega_{0})^{\hbar}=\{p:(\Lambda^{\omega})^{-1}p^{2} \leq\hbar\}$ and $\Omega_{0}\cap\ell_{P}=\{p:\Lambda^{\omega}p^{2}\leq\hbar\}$.
The equality $(\Pi_{\ell_{X}}\Omega_{0})^{\hbar}=\Omega_{0}\cap\ell_{P}$ thus implies that $\Lambda^{\omega}=I_{n\times n}$ hence $M=S_{0}^{T}S_{0} \in\operatorname*{Sp}(n)$. \end{proof} \section{Gaussian Quantum Phase Space} \subsection{The Wigner transform} Recall that the Wigner transform (or function) of a square integrable function $\psi:\mathbb{R}^{n}\longrightarrow\mathbb{C}$ is the function $W\psi :\mathbb{R}^{2n}\longrightarrow\mathbb{R}$ defined by the absolutely convergent integral \begin{equation} W\psi(x,p)=\left( \tfrac{1}{2\pi\hbar}\right) ^{n}\int e^{-\frac{i}{\hbar }py}\psi(x+\tfrac{1}{2}y)\psi^{\ast}(x-\tfrac{1}{2}y)d^{n}y~. \label{wigtra} \end{equation} It satisfies the Moyal identity \begin{equation} (W\psi|W\phi)_{L^{2}(\mathbb{R}^{2n})}=(2\pi\hbar)^{-n}|(\psi|\phi )|_{L^{2}(\mathbb{R}^{n})}^{2} \label{Moyal} \end{equation} which implies that $||W\psi||_{L^{2}(\mathbb{R}^{2n})}=(2\pi\hbar )^{-n/2}||\psi||_{L^{2}(\mathbb{R}^{n})}^{2}$. An important property satisfied by the Wigner transform is its symplectic covariance: for every $S\in\operatorname*{Sp}(n)$ and $\psi\in L^{2} (\mathbb{R}^{n})$ we have \begin{equation} W\psi(S^{-1}z)=W(\widehat{S}\psi)(z) \label{symco} \end{equation} where $\widehat{S}\in\operatorname*{Mp}(n)$ is one of the two metaplectic operators projecting onto $S$ (recall \cite{Birk} that $\operatorname*{Mp} (n)$, the metaplectic group, is a unitary representation in $L^{2} (\mathbb{R}^{n})$ of the double cover of $\operatorname*{Sp}(n)$). The covering projection $\pi^{\operatorname*{Mp}}:\operatorname*{Mp} (n)\longrightarrow\operatorname*{Sp}(n)$ is uniquely determined by its action on the generators of $\operatorname*{Mp}(n)$. Here is a basic example. Let $X\in\operatorname*{Sym}_{++}(n,\mathbb{R})$ and $Y\in\operatorname*{Sym}(n,\mathbb{R})$. The associated generalized Gaussian $\psi_{X,Y}$ is defined by \begin{equation} \psi_{X,Y}(x)=(\pi\hbar)^{-n/4}(\det X)^{1/4}e^{-\tfrac{1}{2\hbar}(X+iY)x^{2} }.
\label{fay} \end{equation} Its Wigner transform is given by \cite{Bas,Birk,Wigner} \begin{equation} W\psi_{X,Y}(z)=(\pi\hbar)^{-n}e^{-\tfrac{1}{\hbar}Gz\cdot z} \label{phagauss} \end{equation} where \begin{equation} G= \begin{pmatrix} X+YX^{-1}Y & YX^{-1}\\ X^{-1}Y & X^{-1} \end{pmatrix} . \label{gsym} \end{equation} It is essential to observe that $G=G^{T}\in\operatorname*{Sp}(n)$; this is most easily seen using the factorization \begin{equation} G=S^{T}S\text{ \ },\text{ }S= \begin{pmatrix} X^{1/2} & 0\\ X^{-1/2}Y & X^{-1/2} \end{pmatrix} \in\operatorname*{Sp}(n). \label{bi} \end{equation} \subsection{Gaussian density operators} Let $\widehat{\rho}\in\mathcal{L}^{1}(L^{2}(\mathbb{R}^{n}))$ be a trace class operator on $L^{2}(\mathbb{R}^{n})$. If $\operatorname*{Tr}(\widehat{\rho})=1$ and $\widehat{\rho}$ is positive semidefinite ($\widehat{\rho}\geq0$) one says that $\widehat{\rho}$ is a density operator (it represents the mixed states in quantum mechanics). One shows, using the spectral theorem for compact operators, that the Weyl symbol of $\widehat{\rho}$ can be written as $(2\pi\hbar)^{n}\rho$ where $\rho$ (the \textquotedblleft Wigner distribution of $\widehat{\rho}$\textquotedblright) is a convex sum \[ \rho=\sum_{j}\lambda_{j}W\psi_{j}\text{ , }\lambda_{j}\geq0\text{ , }\sum _{j}\lambda_{j}=1 \] where $(\psi_{j})_{j}$ is an orthonormal set of vectors in $L^{2} (\mathbb{R}^{n})$ (the series is absolutely convergent in $L^{2} (\mathbb{R}^{n})$).
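As a quick numerical sanity check (an aside, with ad hoc variable names), one can verify the factorization (\ref{bi}): the matrix $G$ built from a symmetric positive definite $X$ and a symmetric $Y$ is symmetric, symplectic, and equal to $S^{T}S$:

```python
import numpy as np
from scipy.linalg import sqrtm

n = 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)); X = A @ A.T + n * np.eye(n)   # symmetric positive definite
B = rng.standard_normal((n, n)); Y = (B + B.T) / 2             # symmetric
Xh = np.real(sqrtm(X))                      # X^{1/2}
Xmh = np.linalg.inv(Xh)                     # X^{-1/2}
Xi = np.linalg.inv(X)
G = np.block([[X + Y @ Xi @ Y, Y @ Xi], [Xi @ Y, Xi]])
S = np.block([[Xh, np.zeros((n, n))], [Xmh @ Y, Xmh]])
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
print(np.allclose(G, G.T))                  # True: G is symmetric
print(np.allclose(S.T @ J @ S, J))          # True: S is symplectic
print(np.allclose(S.T @ S, G))              # True: the factorization G = S^T S
```

In particular $G$ itself is then symplectic, as claimed, being the product $S^{T}S$ of two symplectic matrices.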
Of particular interest are Gaussian density operators; by definition these are the density operators whose Wigner distribution can be written \begin{equation} \rho(z)=\frac{1}{(2\pi)^{n}\sqrt{\det\Sigma}}e^{-\frac{1}{2}\Sigma ^{-1}(z-z_{0})(z-z_{0})} \label{rhoG} \end{equation} where $z_{0}\in\mathbb{R}_{z}^{2n}$ and the covariance matrix $\Sigma \in\operatorname*{Sym}_{++}(2n,\mathbb{R})$ (we will from now on choose $z_{0}=0$, but all the statements on the covariance matrix and ellipsoid that follow are not influenced by this assumption). While the operator $\widehat{\rho}$ with Weyl symbol $(2\pi\hbar)^{n}\rho$ automatically has trace one, the condition $\widehat{\rho}\geq0$ is equivalent to \cite{cogoni,dutta,Birk} \begin{equation} \Sigma+\frac{i\hbar}{2}J\geq0 \label{quant0} \end{equation} (that is, the eigenvalues of the Hermitian matrix $\Sigma+\frac{i\hbar}{2}J$ are $\geq0$). By definition the purity of a density operator $\widehat{\rho}$ is the number $\mu(\widehat{\rho})=\operatorname*{Tr}(\widehat{\rho}^{2})$. We have $0<\mu(\widehat{\rho})\leq1$ and $\mu(\widehat{\rho})=1$ if and only if the Wigner distribution $\rho$ of $\widehat{\rho}$ consists of a single term: $\rho=W\psi$ for some $\psi\in L^{2}(\mathbb{R}^{n})$. \begin{proposition} \label{PropBlob}Let $\widehat{\rho}$ be a Gaussian density operator with covariance matrix $\Sigma$. (i) The condition $\Sigma+\frac{i\hbar}{2}J\geq0$ holds if and only if the covariance ellipsoid $\Omega$ associated with $\Sigma$ contains a quantum blob. (ii) We have $\mu(\widehat{\rho})=1$ if and only if $\Omega$ is a quantum blob, and we have in this case $\rho=W\psi_{X,Y}$ for some pair of matrices $(X,Y)$. \end{proposition} \begin{proof} We have proven part (i) in \cite{Birk,go09} (also see \cite{goluPR}).
To prove (ii) we note that the purity of a Gaussian state $\widehat{\rho}$ is \cite{Birk} \[ \mu(\widehat{\rho})=\left( \frac{\hbar}{2}\right) ^{n}(\det\Sigma)^{-1/2} \] hence $\mu(\widehat{\rho})=1$ if and only if $\det\Sigma=(\hbar/2)^{2n}$. Let $\lambda_{1}^{\omega},...,\lambda_{n}^{\omega}$ be the symplectic eigenvalues of $\Sigma$ as in the proof of Theorem \ref{Thm1}; in view of Williamson's symplectic diagonalization theorem there exists $S\in\operatorname*{Sp}(n)$ such that $\Sigma=S^{-1}D(S^{T})^{-1}$ where $D= \begin{pmatrix} \Lambda^{\omega} & 0\\ 0 & \Lambda^{\omega} \end{pmatrix} $ with $\Lambda^{\omega}=\operatorname*{diag}(\lambda_{1}^{\omega} ,...,\lambda_{n}^{\omega})$. The quantum condition (\ref{quant0}) is equivalent to $\lambda_{j}^{\omega}\geq\hbar/2$ for all $j$, hence \[ \det\Sigma=(\lambda_{1}^{\omega})^{2}\cdot\cdot\cdot(\lambda_{n}^{\omega} )^{2}=(\hbar/2)^{2n} \] if and only if $\lambda_{j}^{\omega}=\hbar/2$ for all $j$, hence $\Sigma =\frac{\hbar}{2}S^{-1}(S^{T})^{-1}$ and $\Omega=S(B^{2n}(\sqrt{\hbar}))$ is a quantum blob. \end{proof} \subsection{A characterization of Gaussian density operators} We are going to apply Theorem \ref{Thm1} to characterize pure Gaussian density operators without prior knowledge of the full covariance matrix. This is related to the so-called \textquotedblleft Pauli reconstruction problem\textquotedblright\ \cite{Pauli} we have discussed in \cite{gopauli}.
The latter can be reformulated in terms of the Wigner transform as follows: given a function $\psi\in L^{1}(\mathbb{R}^{n})\cap L^{2}(\mathbb{R}^{n})$ whose Fourier transform is also in $L^{1}(\mathbb{R}^{n})\cap L^{2} (\mathbb{R}^{n})$, the question is whether one can reconstruct $\psi$ from the knowledge of the marginal distributions \begin{equation} \int W\psi(x,p)d^{n}p=|\psi(x)|^{2}\text{ \ },\text{ \ }\int W\psi (x,p)d^{n}x=|\widehat{\psi}(p)|^{2}\label{marg} \end{equation} where the Fourier transform $\widehat{\psi}$ of $\psi$ is given by \begin{equation} \widehat{\psi}(p)=\left( \frac{1}{2\pi\hbar}\right) ^{n/2}\int e^{-\frac {i}{\hbar}px}\psi(x)d^{n}x.\label{FT} \end{equation} The answer to Pauli's question is negative; the study of this problem has led to many developments, one of them being the theory of symplectic quantum tomography (see \textit{e.g.} \cite{ib}). The following result is essentially an analytic restatement of Theorem \ref{Thm1}: \begin{theorem} \label{Thm2}Let $\widehat{\rho}\in\mathcal{L}^{1}(L^{2}(\mathbb{R}^{n}))$ be a density operator with Gaussian Wigner distribution \[ \rho(z)=\frac{1}{(2\pi)^{n}\sqrt{\det\Sigma}}e^{-\frac{1}{2}\Sigma^{-1}z\cdot z}. \] Then $\widehat{\rho}$ is a pure density operator if and only if \begin{equation} \Phi(x)=2^{n}\int\rho(x,p)d^{n}p\label{fiw} \end{equation} where $\Phi$ is the Fourier transform of the function $p\longmapsto \rho(0,p/2)$.
\end{theorem} \begin{proof} We begin by noting that by the well-known formula for marginals in probability theory we have \begin{equation} \int\rho(x,p)d^{n}p=\frac{1}{(2\pi)^{n/2}\sqrt{\det\Sigma_{XX}}}e^{-\tfrac {1}{2}\Sigma_{XX}^{-1}x\cdot x}.\label{psix} \end{equation} Returning to the notation $M=\frac{\hbar}{2}\Sigma^{-1}$ we have \[ \rho(z)=(\pi\hbar)^{-n}(\det M)^{1/2}e^{-\frac{1}{\hbar}Mz\cdot z} \] and the marginal formula (\ref{psix}) reads \begin{equation} \int\rho(x,p)d^{n}p=(\pi\hbar)^{-n/2}(\det M/M_{PP})^{1/2}e^{-\frac{1}{\hbar }(M/M_{PP})x\cdot x}.\label{romm} \end{equation} Assume now that $\widehat{\rho}$ is a pure density operator and let us show that (\ref{fiw}) holds (also see Remark \ref{Rem2} below). In view of Proposition \ref{PropBlob} we then have $\rho=W\psi_{X,Y}$ for some Gaussian (\ref{fay}) and thus $\rho(z)=(\pi\hbar)^{-n}e^{-\tfrac{1}{\hbar}Gz\cdot z}$ where $G$ is the symmetric symplectic matrix (\ref{gsym}). Using the first marginal property (\ref{marg}) and the definition of $\psi_{X,Y}$ it follows that \[ \int\rho(x,p)d^{n}p=|\psi_{X,Y}(x)|^{2}=(\pi\hbar)^{-n/2}(\det X)^{1/2} e^{-\tfrac{1}{\hbar}Xx\cdot x}. \] On the other hand \[ W\psi_{X,Y}(0,p/2)=(\pi\hbar)^{-n}e^{-\frac{1}{4\hbar}X^{-1}p\cdot p} \] and its Fourier transform is \[ \Phi(x)=\left( \frac{2}{\pi\hbar}\right) ^{n}(\det X)^{1/2}e^{-\frac {1}{\hbar}Xx\cdot x} \] hence the equality (\ref{fiw}). Assume now that, conversely, (\ref{fiw}) holds. We have \[ \rho(0,p/2)=(\pi\hbar)^{-n}(\det M)^{1/2}e^{-\frac{1}{4\hbar}M_{PP}p\cdot p} \] and the Fourier transform $\Phi$ of the function $p\longmapsto\rho(0,p/2)$ is given by \[ \Phi(x)=\left( \frac{2}{\pi\hbar}\right) ^{n}(\det M)^{1/2}(\det M_{PP})^{-1/2}e^{-\frac{1}{\hbar}M_{PP}^{-1}x\cdot x}.
\] The equality (\ref{fiw}) requires that \[ (\det M)^{1/2}(\det M_{PP})^{-1/2}e^{-\frac{1}{\hbar}M_{PP}^{-1}x\cdot x}=(\det M/M_{PP})^{1/2}e^{-\frac{1}{\hbar}(M/M_{PP})x\cdot x} \] that is, equivalently, \begin{gather*} M_{PP}^{-1}=(M/M_{PP})\\ (\det M)^{1/2}(\det M_{PP})^{-1/2}=(\det M/M_{PP})^{1/2}. \end{gather*} The first of these two conditions implies that the covariance ellipsoid $\Omega$ is a quantum blob (formula (\ref{cond0}) in Theorem \ref{Thm1}); the second condition is then automatically satisfied since $\det M=1$ in this case. \end{proof} \begin{remark} \label{Rem2}Condition (\ref{fiw}) is actually satisfied by \emph{all} even Wigner transforms (and hence by all pure density operators corresponding to an even function $\psi$). Suppose indeed that $\rho=W\psi$ for some suitable even function $\psi\in L^{2}(\mathbb{R}^{n})$. Then \[ W\psi(0,p/2)=(\pi\hbar)^{-n}\int e^{\frac{i}{\hbar}p\cdot y}|\psi(y)|^{2} d^{n}y. \] Taking the Fourier transform of both sides and using the first marginal property (\ref{marg}) yields the identity (\ref{fiw}). \end{remark} \section{Perspectives and Comments} Among all states (classical or quantum) the Gaussians are those which are entirely characterized by their covariance matrices. The notion of polar duality thus appears, informally, as a generalization of the uncertainty principle of quantum mechanics as expressed in terms of variances and covariances. Polar duality actually is a more general concept than the usual uncertainty principle, expressed in terms of covariances and variances of position and momentum variables (and the derived notion of quantum blob). As was already observed in the work of Uffink and Hilgevoord \cite{hi,hiuf}, variances and covariances are satisfactory measures of uncertainty only for Gaussian (or almost Gaussian) distributions. For more general distributions having nonvanishing \textquotedblleft tails\textquotedblright\ they can lead to gross errors and misinterpretations.
Another advantage of the notion of polar duality is that it can be extended to the study of uncertainties when non-Gaussianity appears. Instead of considering ellipsoids $X$ in configuration space $\mathbb{R}_{x}^{n}$ one might want to consider sets $X$ which are only convex. In this case the polar dual $X^{\hbar}$ is still well-defined, and one might envisage using the machinery of the Minkowski functional to generalize the results presented here to general non-centrally symmetric convex bodies in $\mathbb{R}_{x}^{n}$. The difficulty comes from the fact that we then need to choose the correct center with respect to which the polar duality is defined, since there is no privileged \textquotedblleft center\textquotedblright\ \cite{arkami}; different choices may lead to polar duals with very different sizes and volumes. These are difficult questions, but they may lead to a better understanding of very general uncertainty principles for the density operators of quantum mechanics. \begin{acknowledgement} This work has been financed by the Grant P 33447 N of the Austrian Research Foundation FWF. \end{acknowledgement}
\begin{document} \font\fFt=eusm10 \font\fFa=eusm7 \font\fFp=eusm5 \def\K{\mathchoice {\hbox{\,\fFt K}} {\hbox{\,\fFt K}} {\hbox{\,\fFa K}} {\hbox{\,\fFp K}}} \def\T{\mathchoice {\hbox{\,\fFt T}} {\hbox{\,\fFt T}} {\hbox{\,\fFa T}} {\hbox{\,\fFp T}}} \begin{abstract} We establish sharp inequalities for the classical beta function by studying inequalities for the trigonometric sine function. \end{abstract} \maketitle \section{Introduction} For $x,y>0$, the classical \emph{gamma function} $\Gamma$, the {\it digamma function} $\psi$ and the \emph{beta function} $B(\cdot,\cdot)$ are defined by $$\Gamma(x) = \int^\infty_0 e^{-t}t^{x-1}\,dt,\quad \psi(x) = \frac{\Gamma'(x)}{\Gamma(x)},\quad B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)},$$ respectively. The study of these functions is important because of their wide-ranging applications in various branches of engineering and mathematics \cite{roy}. Over the last half century, numerous authors have established functional inequalities for these functions using a variety of approaches; see, e.g., \cite{alz2,alz3,dab,qv}. In this paper, we establish inequalities for the beta function by studying the well-known Jordan inequality \cite{kvv,mit,neusan}. The functions $\Gamma$ and $\psi$ satisfy the following recurrence relations \begin{equation}\label{recgam} \Gamma(1+x)=x\Gamma(x),\quad \psi(1+x)=\psi(x)+\frac{1}{x}.
\end{equation} Weierstrass gave the following infinite product representations of the gamma function and the sine function: $$\frac{1}{\Gamma(x)}=xe^{\gamma x}\prod_{n=1}^\infty\left(1+\frac{x}{n}\right)e^{-x/n},\quad \sin(\pi x)=\pi x\prod_{n\neq 0}\left(1-\frac{x}{n}\right)e^{x/n},$$ where $\gamma$ is the Euler--Mascheroni constant \cite{as} defined by $$\gamma=\lim_{n\to \infty}\left(\sum_{k=1}^n \frac{1}{k}-\log(n)\right)\approx 0.57721.$$ These definitions give the following relation \begin{equation}\label{eulref} \Gamma(t)\Gamma(1-t)=\frac{\pi}{\sin (\pi t)},\quad t\notin \mathbb{Z}, \end{equation} which is known as Euler's reflection formula \cite[6.1.17]{as}. We refer the reader to \cite{dab} for the historical background and the properties of the gamma and beta functions. Dragomir et al. \cite{dab} established the following inequality \begin{equation}\label{ineq0704} B(x,y)\leq \frac{1}{xy},\quad x,y\in(0,1), \end{equation} which was refined by Alzer \cite{alz1} as follows \begin{equation}\label{ineq0704a} \frac{1}{xy}\left(1-a\frac{1-x}{1+x}\frac{1-y}{1+y}\right)< B(x,y)< \frac{1}{xy}\left(1-b\frac{1-x}{1+x}\frac{1-y}{1+y}\right),\quad x,y\in(0,1), \end{equation} with the best possible constants $a=2\pi^2/3-4\approx 2.57973$ and $b=1$. Recently, the inequality \eqref{ineq0704} was refined by Iv\'ady \cite{ivady}: \begin{equation}\label{ineq0704b} \frac{1}{xy}\left(x+y-xy\right)\leq B(x,y)\leq \frac{1}{xy}\frac{x+y}{1+xy},\quad x,y\in(0,1). \end{equation} \begin{lemma}\label{newlema} Let $f$ be twice differentiable on $(0,\pi)$ and let $g(x)=f(x)/\sin (x)$ for $x \in (0, \pi)$. Then $\sin(x)^2g'(x)= h(x)$, where $h(x)=f'(x)\sin(x)-f(x)\cos(x)$, and the sign of $h'(x)$ is that of $F(x)= f(x)+ f''(x)$. \end{lemma} \begin{proof} One has $$\sin (x)^2 g'(x)= f'(x)\sin(x)-f(x)\cos (x)=h(x),$$ and $$h'(x)= (f(x)+f''(x))\sin(x)=F(x)\sin(x).$$ As $\sin (x)>0$ for all $x \in (0,\pi)$, the result follows.
\end{proof} \begin{theorem} For $t\in(0,1)$, we have \begin{equation}\label{ineq1104} \frac{3 (1-t)}{\pi t^2-\pi t+\pi }<\frac{\sin (\pi t)}{\pi t}<\frac{\pi (1-t)}{\pi t^2-\pi t+\pi }, \end{equation} \begin{equation}\label{ineq0505} 1-(2-t)t^2<\frac{\sin(\pi t)}{\pi t}<\frac{16}{5\pi}\left(1-(2-t)t^2\right). \end{equation} \end{theorem} \begin{proof} Let $g(x)= f(x)/\sin (x)$ for $x\in(0, \pi)$, where $$f(x)= (\pi x-x^2)/(\pi^2-\pi x+x^2).$$ We get $$\frac{(\pi^2-\pi x+x^2)^3}{x(\pi-x)}F(x)=(\pi^2-\pi x+x^2)^2- 6\pi^2 = A(x)B(x),$$ where $B(x)=x^2-\pi x+\pi^2+\pi\sqrt{6}>0$ always, and $A(x)=x^2-\pi x+\pi^2- \pi\sqrt{6}$. The roots of the equation $A(x)=0$ are $x_1= (\pi-\sqrt{4\sqrt{6}\pi-3\pi^2})/2$, which is in $(0, \pi/2)$, and $x_2= (\pi+ \sqrt{4\sqrt{6}\pi-3\pi^2})/2$, which is in $(\pi/2, \pi)$. Let $x\in(0, x_1)$; then $A(x)>0$, so $F(x)>0$, giving $h'(x)>0$. This implies $h(x)>h(0)=0$, so $g'(x)>0$ by Lemma \ref{newlema}. Again, let $x\in[x_1, \pi/2)$; then $A(x)\leq 0$, giving $F(x)\leq 0$, i.e. $h'(x)\leq 0$. This implies $h(x)>h(\pi/2)=0$, so $g'(x)\geq 0$ here too. We have proved that $g'(x)>0$ for all $x$ in $(0, \pi/2)$. Let now $x$ be in $(\pi/2, x_2)$. Then $A(x)<0$, so $h'(x)<0$, implying $h(x)<h(\pi/2)=0$. For $x$ in $[x_2, \pi)$ one has $h'(x)\geq 0$, so $h(x)\leq h(\pi)=0$. Therefore, for all $x$ in $(\pi/2, \pi)$ one has $h(x)<0$, i.e. $g'(x)<0$ here. In both cases we had $g'(x)=0$ only for $x=\pi/2$. Consequently, the function $g$ is strictly increasing in $(0,\pi/2)$ and strictly decreasing in $(\pi/2,\pi)$; it attains its maximum $1/3$ at $x=\pi/2$, and $g$ tends to $1/\pi$ as $x$ tends to $0$ or $\pi$. This proves \eqref{ineq1104} upon letting $x=\pi t$.
For the proof of \eqref{ineq0505}, let $$f(x)=\frac{\sin(x)}{x(x^3-2\pi x^2+\pi^3)}.$$ A simple calculation gives $$(x \,k(x))^2 f'(x)= g(x),$$ where $k(x)= x^3-2\pi x^2 +\pi^3$ and $$g(x)=\cos (x)\,(x^4-2\pi x^3 +\pi^3x)- \sin (x)\,(4x^3 -6\pi x^2 +\pi^3).$$ It is immediate that $g(0)= g(\pi/2)= g(\pi)=0$. One has $g'(x)= -x\sin(x)h(x),$ with $$h(x)= x^3-2\pi x^2+12x+\pi^3-12\pi.$$ Here $h(0)= \pi(\pi^2-12)<0$, $h(\pi/2)= 5\pi^3/8-6\pi >0$ as $5\pi^2 >48$, and $h(\pi)=0.$ Further, $h'(x)= 3x^2-4\pi x +12$ and $h''(x)= 2(3x-2\pi)$. Here $\pi/2< 2\pi/3< \pi$. The roots of $h'(x)=0$ are $x_1= (2\pi-2\sqrt{\pi^2-9})/3\approx 1.47271$, which is in $(0, \pi/2)$, and $x_2=(2\pi+2\sqrt{\pi^2-9})/3\approx 2.71608$, which is in $(2\pi/3, \pi).$ Therefore, $h(x)$ is strictly increasing in $(0, x_1)$ and $(x_2, \pi)$, while strictly decreasing in $(x_1, x_2).$ Let $x \in (0, \pi/2)$; then, as $h(0)<0$ and $h(\pi/2)>0$, $h$ has a single root $x_0$ and a maximum point at $x_1$. Thus $h(x)<0$ in $(0, x_0)$, and $h(x)>0$ in $(x_0, \pi/2)$. Therefore, $g'(x)>0$ for $x\in(0, x_0)$ and $g'(x)<0$ in $(x_0, \pi/2)$. Thus $g(x)>g(0)=0$ in $(0, x_0)$ and $g(x)>g(\pi/2)=0$ in $(x_0, \pi/2)$. In all cases, $g(x)>0$ for $x \in (0, \pi/2)$. This means that $f(x)$ is strictly increasing in $(0, \pi/2)$. When $x$ is in $(\pi/2, \pi)$, the proof runs as above, by remarking that, since $h(2\pi/3)<0$, there exists a unique $x^*_0 \in (\pi/2, \pi)$ such that $h(x^*_0)=0$. Since $h(x)>0$ in $(\pi/2, x^*_0)$ and $h(x)<0$ in $(x^*_0, \pi)$, we get that $g(x)<g(\pi/2)=0$ in $(\pi/2, x^*_0)$, while $g(x)<g(\pi)=0$ in $(x^*_0, \pi)$; so in all cases $g(x)<0$ when $x$ is in $(\pi/2, \pi)$. Thus $f(x)$ is strictly decreasing in $(\pi/2, \pi)$. This completes the proof. \end{proof} The inequalities in \eqref{ineq1104} and \eqref{ineq0505} are not comparable. From the proof of \eqref{ineq0505} we get the following corollary.
\begin{corollary} For $x\in(0,\pi/2)$, we have $$\frac{x^3-2 \pi x^2+\pi ^3}{4 x^3-6 \pi x^2+\pi ^3}>\frac{\tan(x)}{x},$$ and the inequality reverses for $x\in(\pi/2,\pi)$. \end{corollary} \begin{theorem}\label{mainthm} We have \begin{enumerate} \item $\displaystyle\frac{\alpha}{xy}\frac{x+y}{1+xy}<B(x,y)< \displaystyle\frac{\beta}{xy}\frac{x+y}{1+xy},\quad x\in(0,1)\, {\rm with}\, y=1-x,$\\ with the best possible constants $\alpha=5\pi/16\approx 0.98175$ and $\beta=1$, \item $B(x,y)<\displaystyle\frac{1}{xy}\frac{x+y}{1+xy},\quad x,y\in(0,1)$,\\ and the inequality reverses for $x>1$. \end{enumerate} \end{theorem} \begin{proof} Utilizing \eqref{eulref}, the first inequality in \eqref{ineq0505} can be written as $$t(1-t)(1+t(1-t))<\frac{1}{\Gamma(t)\Gamma(1-t)},$$ which is equivalent to $$\frac{t(1-t)(1+t(1-t))}{t+1-t}<\frac{\Gamma(t+1-t)}{\Gamma(t)\Gamma(1-t)}.$$ Letting $x=t$ and $y=1-t$, we get the first inequality. The second inequality follows similarly from the second inequality of \eqref{ineq0505}. This completes the proof. \end{proof} \begin{remark} The inequality $$1-\frac{z}{\pi}<\frac{\sin(z)}{z},\quad z\in(0,\pi),$$ can be written as $$\Gamma\left(1+\frac{z}{\pi}\right)\Gamma\left(1-\frac{z}{\pi}\right)<\frac{1}{1-z/\pi},$$ by \eqref{eulref}. This implies \eqref{ineq0704} if we let $x=z/\pi$ and $y=1-z/\pi$. \end{remark} \begin{lemma}\label{lema1004} We have \begin{enumerate} \item $\psi(1+x)-\psi(x+y)<\displaystyle\frac{1-y}{x+y-xy},\quad x>1, \,y\in(0,1)$,\\ \item $\psi(2-x)-\psi(1+x)<\displaystyle\frac{1-2x}{1-(1-x)x},\quad x\in(0,1/2)$,\\ and the inequality reverses for $x\in(1/2,1)$. \end{enumerate} \end{lemma} \begin{proof} For $x>1$ and $y\in(0,1)$, we define $$g_x(y)=\psi(1+x)-\psi(x+y)-\displaystyle\frac{1-y}{x+y-xy}.$$ Differentiating twice with respect to $y$ we get \begin{eqnarray*} g_x''(y)&=&-\frac{2 (1-x)^2 (1-y)}{(x(1-y)+y)^3}-\frac{2 (1-x)}{(x(1-y)+y)^2}- \psi''(x+y)\\ &=&\frac{2 x-2}{(x(1-y)+y)^3}-\psi''(x+y)>0, \end{eqnarray*} since $\psi''(x+y)<0$.
Thus, $g_x$ is convex in $y$; clearly $g_x(0)=g_x(1)=0$. This implies that the function $g_x$ lies under the line segment joining the origin and the point $(1,0)$, i.e. $g_x(y)<0$ for $y\in(0,1)$, which proves (1). For (2), write $$f(x)=\psi(2-x)-\psi(1+x)-\frac{1-2x}{1-(1-x)x},\quad x\in(0,1).$$ One has \begin{eqnarray*} f''(x)&=&\left(\frac{2 (2 x-1)^2}{(1-(1-x) x)^3}-\frac{2}{(1-(1-x) x)^2}\right) (2 x-1)\\ & &-\frac{4 (2 x-1)}{(1-(1-x) x)^2}+\psi''(2-x)-\psi''(x+1)\\ &=&\frac{2 (x-2) (x+1) (2 x-1)}{((x-1) x+1)^3}+\psi''(2-x)-\psi''(x+1). \end{eqnarray*} Clearly, the function $\psi''$ is increasing and negative. So, it is not difficult to see that $f''$ is positive for $x\in(0,1/2)$, and negative for $x\in(1/2,1)$. This implies the convexity and concavity of $f$ in $x\in(0,1/2)$ and $x\in(1/2,1)$, respectively. Clearly, $f(0)=f(1/2)=f(1)=0$. This completes the proof. \end{proof} \begin{theorem}\label{thm1004} We have \begin{enumerate} \item $\displaystyle \frac{1}{xy}(x+y-xy)>B(x,y), \quad x>1,\,y\in(0,1)$,\\ and the inequality reverses for $x\in(0,1)$, \item $B(x,y)<\displaystyle\frac{\pi}{3}\frac{1}{xy}(x+y-xy), \quad x\in(0,1),\,{\rm with}\, y=1-x$. \end{enumerate} \end{theorem} \begin{proof} The inequality in (1) can be written as $$h_y(x)=\log(x+y-xy)-\log(\Gamma(1+x))-\log(\Gamma(1+y))+\log(\Gamma(x+y))>0.$$ Clearly, $h_y(1)=0$. Differentiation with respect to $x$ yields $$h_y'(x)=\frac{1-y}{x+y-xy}-\psi(1+x)+\psi(x+y)=-g_x(y),$$ which is positive by Lemma \ref{lema1004}(1). Thus the function $h_y(x)$ is increasing in $x>1$, which implies (1). For part (2), let $$h(x)= \log \left(\frac{1}{3} \pi (1-(1-x) x)\right)- \log (\Gamma (2-x))-\log (\Gamma (x+1));$$ clearly $h(1/2)=0$. One has $$h'(x)=\psi(2-x)-\psi(1+x)-\frac{1-2x}{1-(1-x)x},$$ which is negative in $x\in(0,1/2)$ and positive in $x\in(1/2,1)$ by Lemma \ref{lema1004} (2). This implies that $h$ is decreasing in $x\in(0,1/2)$ and increasing in $x\in(1/2,1)$, so that $h(x)\geq h(1/2)=0$. Thus the proof follows. \end{proof} \vspace{.5cm}
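As a numerical aside (not part of the paper), the reflection formula \eqref{eulref}, the digamma recurrence in \eqref{recgam}, and the bounds of Theorem \ref{mainthm}(1) can be spot-checked with SciPy; a small tolerance is needed on the lower bound because the constant $\alpha=5\pi/16$ is attained at $x=1/2$:

```python
import numpy as np
from scipy.special import beta, gamma, psi

# Euler's reflection formula and the digamma recurrence at a sample point
t = 0.3
print(np.isclose(gamma(t) * gamma(1 - t), np.pi / np.sin(np.pi * t)))  # True
print(np.isclose(psi(1 + t), psi(t) + 1 / t))                          # True

# bounds of Theorem mainthm(1) for B(x, 1-x) on a grid in (0, 1)
x = np.linspace(0.01, 0.99, 99)
y = 1 - x
bound = (x + y) / (x * y * (1 + x * y))
alpha = 5 * np.pi / 16                      # lower constant; equality at x = 1/2
B = beta(x, y)
print(np.all(alpha * bound <= B + 1e-12) and np.all(B < bound))        # True
```

At $x=1/2$ both sides of the lower bound equal $\pi$, which is why the constant $\alpha$ is best possible.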
TITLE: The integral of an exact form over an orientable closed manifold is $0$ QUESTION [0 upvotes]: A form $\beta^{p}$ is closed if $d\beta=0$. A form $\beta^{p}$ is exact if $\beta^{p}=d\alpha^{p-1}$, for some form $\alpha^{p-1}$. An orientable closed manifold is a compact manifold without boundary. What is a compact manifold? Why is an orientable closed manifold the same as a compact manifold without boundary? If a manifold is compact, does it not, by definition, have a boundary? How do you prove that the integral of an exact form over an orientable closed manifold is $0$? REPLY [4 votes]: I will try to answer your questions: 1) A manifold is compact if every open covering (i.e. a collection of open sets whose union contains the manifold) has a finite subcovering. So from this open covering, which might have infinitely many sets, you can choose finitely many, and these must still cover the manifold. 2) On this website https://en.wikipedia.org/wiki/Classification_of_manifolds a closed manifold is defined as a compact manifold without boundary. 3) No, a compact manifold need not have a boundary. For example, consider the circle $S^1$. It is definitely compact and does not have a boundary. 4) As we have just concluded that this manifold has no boundary, we can just use Stokes' theorem for our manifold $M$: \begin{equation} \int_{M} d\omega = \int_{\partial M} \omega. \end{equation} We have an exact form, let's call it $\beta^p$. As you already said, this means that $\beta^p=d\alpha^{p-1}$. Also, because $M$ has no boundary, the integral over $\partial M$ will be zero: \begin{equation} \int_M \beta^p = \int_M d\alpha^{p-1} = \int_{\partial M} \alpha^{p-1} =0. \end{equation}
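To make point 4) concrete, here is a small numerical illustration (an aside, not part of the original answer): on the closed manifold $S^1$, an exact 1-form is $df$ for a smooth periodic function $f$, and its integral over the circle vanishes, in agreement with Stokes' theorem:

```python
import numpy as np

# sample S^1 as the interval [0, 2*pi] with endpoints identified
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
# the exterior derivative df, written out analytically for the arbitrarily
# chosen periodic function f(t) = sin(3 t) + cos(t)^2
df = 3.0 * np.cos(3.0 * theta) - np.sin(2.0 * theta)
# trapezoidal rule for the integral of df over the circle
integral = float(np.sum((df[1:] + df[:-1]) * np.diff(theta)) / 2.0)
print(abs(integral) < 1e-10)   # True: the integral of the exact form vanishes
```

The same computation with a 1-form that is closed but not exact on $S^1$ (such as $d\theta$ itself) gives $2\pi$ instead of $0$, which is exactly why exactness matters here.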
TITLE: Solving a first order nonlinear nonhomogeneous ODE QUESTION [4 upvotes]: Edit: from the answers, I have learnt that the differential equation can be solved by expressing it as a hypergeometric differential equation. My question now is: how may a function of the form $x(1-x)^2y''(x)+(1-x)^2y'(x)+ay(x)=0$ be transformed into the form $\eta(1-\eta)f''(\eta)+(b-c\eta)f'(\eta)+df(\eta)=0$? Original question: How may one solve a differential equation of the form: $\frac{dy}{dx}=P(x) -ky^2$ I have attempted to reduce it to a second order homogeneous equation of the form $\frac{d^2u}{dx^2}=kP(x)u$ by making the substitution $ky= \frac{du/dx}{u}$. However, I am still unable to solve this. Are there any methods for solving either equation? If it helps, $P(x)$ is the derivative of: $f(x)=\frac{a-be^{cx+d}}{1-e^{cx+d}}$ where $a,b,c,d$ are constants. Additionally, $y=0$ when $x=0$, and $y\to 0$ as $x\to$ infinity. A numerical approach to solving the equation with randomly chosen values for the constants substituted in gives the following graph: Link which looks like (maybe) a chi-square distribution....?
REPLY [3 votes]: The equation you are working with is the Riccati equation: $$y'+ky^{2}=P(x)$$ where $$P(x)=\frac{d}{dx}\frac{a-be^{cx+d}}{1-e^{cx+d}}=\frac{c(a-b)e^{cx+d}}{(1-e^{cx+d})^{2}}$$ By doing your substitution $$y(x)=\frac{1}{k}\frac{\frac{du(x)}{dx}}{u(x)}$$ you indeed have the second order linear ODE $$\frac{d^{2}u}{dx^{2}}=kP(x)u(x)=k\frac{c(a-b)e^{cx+d}}{(1-e^{cx+d})^{2}}u(x)$$ First it is nice to clean the equation up a little by letting $z=cx+d$ and $k(a-b)/c=\alpha$; then $$\frac{d^{2}u(z)}{dz^{2}}=\alpha\frac{e^{z}}{(1-e^{z})^{2}}u(z)$$ Then you may also want a change of variables $$\xi=e^{z}$$ so that $$\xi(1-\xi)^{2}\frac{d^{2}u(\xi)}{d\xi^{2}}+(1-\xi)^{2}\frac{du(\xi)}{d\xi}-\alpha{u}(\xi)=0$$ Then you let $1-\sqrt{4\alpha+1}=\gamma$ and do the following substitutions $$\sigma=(\xi-1)$$ $$u(\xi)=\sigma^{\frac{1-\gamma}{2}}f(\sigma)$$ to give the Gauss hypergeometric ODE $$\sigma(1-\sigma)\frac{d^{2}}{d\sigma^{2}}f(\sigma)+(\gamma+(1-\gamma)\sigma)\frac{d}{d\sigma}f(\sigma)-\frac{1}{4}\gamma^{2}f(\sigma)=0$$
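As a quick symbolic check of the first step (an aside, using an arbitrarily chosen test function), SymPy confirms that the substitution $ky=u'/u$ turns the Riccati equation into the linear equation $u''=kP(x)u$, i.e. that $y'+ky^{2}=u''/(ku)$ holds identically:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
u = sp.exp(sp.sin(x)) + 2          # arbitrary smooth positive test function
y = sp.diff(u, x) / (k * u)        # the substitution k*y = u'/u
lhs = sp.diff(y, x) + k * y**2     # left-hand side of the Riccati equation
rhs = sp.diff(u, x, 2) / (k * u)   # so y' + k y^2 = P forces u'' = k P u
print(sp.simplify(lhs - rhs))      # 0
```

Since the identity holds for any smooth nonvanishing $u$, solving the linear equation for $u$ and then setting $y=u'/(ku)$ indeed solves the original Riccati equation.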
There are too many things I collect (or accumulate or hoard....): records, obviously; vans; records with vans on the cover; and records featuring covers of the song "Compared to What". Leave it to OFF! to combine all of these oddities into one 7" record. Never mind the fact that you could get the single with 2 different covers on 2 different colors of vinyl! I was definitely curious about how a band whose entire recorded output was only twice as long as the average length of the song "Compared to What" would approach this funky classic. So here is an example of what "Compared to What" usually sounds like. OFF! make the song their own, and if you didn't know the lyrics to the original, I don't think you would realize it is a cover. Sounds like OFF! A good OFF! single, but not a crucial cover of one of my favorite songs.
We’re very happy to have today on the blog a wonderful UF author: Suzanne McLeod. Her series Spellcrackers.com is unique, and if you haven’t tried it yet I can only recommend it. The fourth book of the saga, The Shifting Price of Prey, will be released on August 30th in the UK. If you want to learn more about the novels you can read the reviews here: A big thank you to Suzanne for answering our questions. You can visit her website for more information about the series: ———————————— Can you introduce yourself in a few words? Hello, everyone, and thanks so much for having me here, Melliane. I’m Suzanne McLeod, a UK author who writes the Spellcrackers.com urban fantasy series set in London and featuring Genny Taylor, a sidhe fae. Genny’s stories are full of magic, mayhem and murder – liberally spiced with hot guys, kick-ass chicks and super-cool supes! How many books do you intend to write for this series? The Shifting Price of Prey #4 comes out August 30 (which isn’t long! *excitedly bites knuckles*), and I’m writing #5 just now, then I’ll be on to #6 – which is the last book I’m currently contracted to write. After that, well, I have plans! I could tell you about them but then I’d have to kill you! *evil cackle* Was it difficult to write the first book? How long did it take? Did it become easier with the following books? It was difficult as it was the first book I’d ever written so I was learning all the time – still am – and it took me around two years from when I first came up with the idea of Genny. Writing the following books was, and is, easier, in that I now know my characters and their world a lot more, but I’m always trying to improve and make the books better, which makes writing them harder. :- ) How do you find your titles? Did you imagine them all when you started the series or do you brainstorm each time?
The title of book #1 was Spellcrackers.com but when my publishers bought it, and the next two, they wanted to have that as the series title. So I had a massive brainstorming session with a couple of writer friends and we came up with about six different sets of titles. I gave them to my publishers and they chose the ones they liked best. The titles for books 4, 5 & 6 came into being after another brainstorm once I’d worked out where exactly Genny’s story was going next. 🙂 How did you end up writing Urban Fantasy books? Is there any other genre that appeals to you? I write urban fantasy because that’s what I love reading the most, and I think you should always write what you love. As for any other genres that might appeal – maybe something dystopian, or steampunk’ish, or science fiction. Does the inspiration for the characters come from people you know? Not usually. Unless it’s by special request. There are three characters in The Shifting Price of Prey who are tuckerisations: my fab publicist and his partner, and a wonderful reader who won an auction to be a baddie in the book. Though, to be honest, it’s only their names that I’ve used, not their personalities. My usual character inspiration comes from finding out what makes my characters tick by imagining their backstory and life, either before they arrive on the page, or after. It’s what makes writing characters fun for me. 🙂 Is there a character more difficult than the others to write? Hah, yes! Malik and Finn have both been extremely difficult in the past! It’s not always been easy knowing what they really want and then getting only so much as the current story needs of that on the page, especially as the reader only sees them through Genny’s eyes. 😀 If you had to choose between Malik and Finn, who would it be? Lol! I’m their creator – I get to have both! *g* Was it difficult to have so many creatures in the same book? Not really.
Part of the two years it took to write the first book was working out the history of Genny’s world, and where all the different creatures fit into that history, and what sort of interactions they all have with each other. Which is why the vamps only became celebrities once the goblins agreed to work with them; humans trust the goblins to make sure the vamps stick to the ‘vamp licensing laws’ and keep them safe. Of course, the vamps know how to get round the laws, if they want to. *g* Who is your favorite character in the series? I don’t have an overall favourite but I always love the ‘gift’ characters. They’re the ones who appear from nowhere (a.k.a. my subconscious) and they’re always fun to write. Ricou, a naiad, and Sylvia, a dryad, who are now friends of Genny’s, started out as ‘gift’ characters. And like them, ‘gift’ characters often insist on getting bigger parts as the stories move forward. 🙂 What is your favorite book in the series? The Cold Kiss of Death for three reasons: it has my favourite scene (the one where Genny is running away from some baddies in Southwark, which is a fun scene but also sums up the whole theme of the book); it was the quickest to write (so far); and Mr Mac and I spent a fabulous few days in London doing a lot of the research, so it brings back some fantastic memories. (Of course, we didn’t have to deal with ghosts stalking us, or being on the run from the police, or evading baddies like Genny does.) *g* Do you already have other plans for future series? Or is it top secret? I do have plans! And like I said before, I could tell you but . . . yes, they’re still top secret 🙂 Can you tell us a little something about the fourth book? Well, with the events at the end of book 3, Genny believed that she could get on with her life without worrying about the fae, or anybody else. But when Tavish asks for her help with a problem, one a mysterious vamp called the Emperor holds the answer to, if Genny can find him, she turns to Malik for help.
Only she discovers Malik is wrestling with his own demons . . . and that he might need Genny’s help more than she needs his! Did you need to do a lot of research for your books? I tend to do research as it’s needed, either at the plotting stage, or when I’m writing. A good bit of the research is visiting London and the places where the books are set so that I can get the descriptions right. I also find visiting the places can spark new ideas, which is why Covent Garden is witch central and home to the Witches’ Market, and why Spellcrackers has their office there; that idea came from one of my early research trips. Other times I know I want to write about say, oh, eels (there is an eel ‘character’ in book 3, and in book 4, too! *teases*) so I’ll research the internet and books until I feel I have enough information for what I’m writing about. Do you have a favorite author? Or a favorite book? I have a ton of favourite authors, way too many to mention here, though I will pick out two – Jaye Wells and Ann Aguirre. Not only do I love all their books but I’ve been privileged to read their works in progress (and so get lots of sneaky early looks! Yes, I’ve read Ann’s Outpost and Endgame! And the start of Jaye’s new series! *makes everyone jealous* ;p). I’ve also been very privileged that Jaye and Ann have given me input on my books. Oh, and if you’re a fan of Jaye’s awesome Sabina Kane series then look out for the Giguhl easter eggs in The Shifting Price of Prey. *g* Where is your favorite place to write? Somewhere quiet with no interruptions and plenty of cups of tea – which usually, but not always, means home. 🙂 Have you ever been to France? I have! Mr Mac and I LOVE it and have spent many holidays there. We’ve been to a few places in Brittany, around the Loire Valley and quite a few trips to Paris, which is my favourite city after London. :-). And now I want to visit Paris again! Thanks again for having me, Melliane, and I hope you’ve all enjoyed reading!
———————————— Thanks to Suzanne McLeod you have the possibility to win a book from the series: The Sweet Scent of Blood, The Cold Kiss of Death, The Bitter Seed of Magic or The Shifting Price of Prey. The giveaway is international if Book Depository ships to you and ends on August 30th. I haven’t heard of this series but it seems quite good, thank you for the discovery Thanks, Miki, great you like the sound of it 🙂 Yeah! Thanks for this new discovery, my favourite little temptress! ♥ Aah, you caved, Madame Ebook. I’m impressed. You’re going to love this series! <3 What a fantastic giveaway. Thanks, this series sounds really great 😀 Mariska Thanks, Mariska 😀 This is such a great series! I can’t wait to see what happens next for Genny. :0) Thanks, Sarah! Hope you enjoy Genny’s next adventure 🙂 I really liked the first book in this series. I must get the next book one of these days. Glad you enjoyed the first, thanks 🙂 #andyesreadthenext #theygetbettersoeveryonesays ;p I have the first book, Sweet Scent of Blood. Thank you! Hope you enjoy/enjoyed 🙂 Awesome! Sounds great. I love discovering new series. Will put it in my TBR list. 🙂 Thanks! Hope you have fun reading 🙂 Thanks for the giveaway! 🙂 You’re welcome 🙂 I am gonna be naughty and enter even if I have read the book 😉 I want print too, mowuahaha totally understand you for that, I also would like my copy of the 4th book. Aw, thanks, ladies 🙂 <3 What a great interview Melliane! And what a great series 🙂 I have an e-ARC of the book, but I’d love an actual book as well – I do that with all my faves. Happy reading! Aw, thanks, Lexxie 🙂 <3 I have the first two books and I need this whole series so I can dive in all at once. 🙂 Oh and I loved the interview. I need to learn how to write so I can claim all the book hotties I create. LOL I agree it could be fun! Thanks, Melissa, glad you enjoyed. And reading them all at once would have you all Spellcrackered out! (sorry, couldn’t resist *g*) Great interview!
These books sound fun! Thanks for sharing them! 😀 Thanks, Liesel, glad you enjoyed. Melliane’s a fab interviewer 🙂 I keep meaning to read book 3. I got derailed by other books while waiting for it. What? There are other books out there … *teases* Hope you enjoy #3 when you get to it, thanks 😀 I love reading about story inspirations and writing process, so thanks for interview. Brandi from Blkosiner’s Book Blog Great you enjoyed,Brandi, thanks 🙂 Oh I love these covers. You really won one over on the cover god. ;D And stories to get back into. Thank you! The cover gods have indeed been good to me! (must’ve been all those sacrifices, err, I mean prayers . . . *g*) Thanks, Melissa The books all look amazing. I must hurry & catch up. Thanks, and hope you enjoy 🙂 Thanks for the lovely interview, I really enjoyed reading it. I still have the third book to read, but I am looking forward to it. Glad you enjoyed, and have fun reading #3 when you get to it 😀 A great interview, thank you for sharing it and I am looking forward to reading The Shifting price of Prey Thank you! And I hope you enjoy reading the book, too 🙂 Thank you very much for the giveaway! The interview is amazing, very interesting! Great you enjoyed it! Thanks 😀 I can’t believe there’s an UF set in London I haven’t heard about – where have I been??!! I will have to check out the series partly because Mellanie loves it so much and partly as Suzanne sounds lovely! 🙂 Aw, thanks, Mel. I hope you enjoy if you try 🙂 I am anxiously awaiting the Shifting Price of Prey, so hopefully I will win it! Aw, thanks, Michelle! I hope you enjoy it: lots of stuff happens *teases* *g* Thank you for the competition. 
🙂 You’re welcome 🙂 I wish I could visit a place that would inspire a witch central for me 🙂 Cambonified{at}yahoo{dot}com Covent Garden is a fab market to visit even without the witches (though it would be way more fun with them *g*) Thank you for the very cool giveaway. I can’t wait for The Shifting Price of Prey to come out! Thanks again, Gwen. You’re welcome and thanks, I hope you enjoy reading it 😀 I’ve read the first two books and I really liked them. Can’t wait to dive into the next 2. Aw, thank you! I’m so glad you enjoyed them and hope you enjoy the next two as much – lots of stuff happens . . . *teases* 😀 I love this series, I can’t wait for the next book! BTW: *Team Finn* lol Oh, Team Finn! Yay! Finn thanks you *g* I hope you enjoy the next one! Because of you, Melliane, I bought the first two books! I keep my fingers crossed to win the third! 😉 And thank you for this awesome interview and giveaway! I love reading authors when they talk about their books… 🙂 Aw, thank you, Zendastark! I do hope you enjoy reading the first 2! And glad you enjoyed the interview 😀 Thanks for this giveaway, I’m happy to enter! Thanks, Lyric 🙂 I love these books! I wish they were easier to get in the US. Aw, thanks so much, Sara! I wish they were too 🙂 Great post! I’d never heard of this author but from your post it sounds very interesting. Would love to read this. Thank you very much! Glad you enjoyed! Thank you 🙂 The series sounds vaguely familiar but I would love to start reading it!! Even if I don’t win, I may just buy the books =) x I hope you enjoy if you do! Thanks, Heather 🙂 thanks for the great giveaway^^ You’re welcome! And thank you 🙂 thx 4 the chance of winning those wonderful books…. i hope i am one of the lucky ones 🙂 You’re welcome! Thank you for dropping by 🙂 This is a new series to me. Thanks for the introduction to a new author. Love the interview. And thanks for making the giveaway international. tamsyn5(at)yahoo(dot)com Thanks, Tamsyn, glad you liked the interview!
(Melliane’s a great interviewer :-)) I’ve read the first books of the series and loved them. I’m definitely eager to finish reading this series and would love The Bitter Seed of Magic.
An unidentified member of India's Parliament covers his face with a handkerchief after being affected by pepper spray in New Delhi, India. / AP NEW DELHI (AP) - A lawmaker sprayed pepper spray inside India's parliament. A parliament member was rushed to a hospital, but no information on his condition was available. Copyright 2014 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. Read the original story: Lawmaker uses pepper spray in Indian parliament
animation feed:
- EPIC Official Trailer (2013) [HD] (3y ago)
- How Elevators Think - elevator algorithms in NYC and Dubai (35m ago)
- Empire, 2010, installation at the Wood Street Galleries, Pittsburgh (47m ago)
- Crocobear (My Uncle) (1h ago)
- [WIP] Blood Ancestors - Death animation (2h ago)
- The Humanitarian Impact Bond (2h ago)
- Hundred OP (2h ago)
- SING Trailer # 2 (Animation Blockbuster - 2016) (3h ago)
- BAMPA (3h ago)
- Beating malaria: mobilising support for a malaria-free future (3h ago)
- So Sorry: बंगाल के पछी विपछी भाग 2 (3h ago)
- Andy Baker and Kyle Platts - Toxic Mega Fade (4h ago)
- Stash 117 Preview (5h ago)
- Ave Espelita | Animation Reel 2016 (6h ago)
- Rénovation du Centre de Contrôle Jupiter 2 (6h ago)
- Polynoid // DJI Osmo / Director's Cut (6h ago)
- UI/UX - animated tab bar navigation // Hype 3 Pro Tumult (7h ago)
- Pornography!!! - Animated Story Times! (9h ago)
- Lets Have A Chat (9h ago)
- Tsunami Forecast Model Animation: Chile 1960 (10h ago)
- A Temporarily Embarrassed Believer (11h ago)
- last sunset (11h ago)
- NEW #PAWPATROL PUP TRACKER w SUPER PUP AOLLO Finger Family | Daddy Finger #ANIMATION Nursery Song (11h ago)
- opolis (12h ago)
- "Wounded Warrior Recovery" Pastor Bob DAILY! (12h ago)
February 5, 2004 The marker on the grave of Graf Spee captain Hans Langsdorff at a cemetery in Buenos Aires, Argentina. Picture: Reuters A famous World War II battleship may soon surface from where it has lain for 60 years. Mary Milliken reports from Montevideo. off their sleepy shores. But tales of the pride of the Nazi fleet keep its memory alive, and this week a team of divers will begin raising pieces of the pocket battleship. "It was a masterpiece in its time," said Mensun Bound, a marine archaeologist from Oxford University weaned on tales of the Battle of the River Plate. "And it doesn't have a dark history. Its captain was a man of great dignity and honour. It was a battle in which both sides came out with their honour intact." Under the command of Captain Hans Langsdorff, the Graf Spee was sighted and attacked by British warships on December 13, 1939. Langsdorff took his badly damaged ship to port in Montevideo, where he was allowed to bury his 36 dead. Believing he would be met by a beefed-up British fleet, Langsdorff evacuated his men to ships headed to Argentina, then sank the Graf Spee with explosives to stop it from falling into enemy hands. Two days after scuttling his ship, Langsdorff took his own life in Buenos Aires. Survivors who stayed in Uruguay and Argentina often spoke of recovering the Graf Spee, located seven kilometres off the coast in waters no deeper than 12 metres. In 1997, Mr Bound and Uruguayan partner Hector Bado found the ship was in much better condition than expected. Today they will attempt to raise the range finder, a component that held the first radar antenna installed in a warship. The ship will remain in Uruguay. "It will be rebuilt on land and will be the best ship museum in the world," said Mr Bado. "This is the last salvageable German battleship." - Reuters
This really isn't a recipe but more of a tutorial on how to take a shot of tequila, or tequila cruda, the Hollywood way. Tequila connoisseurs will tell you that you should savor the taste and aroma slowly when drinking straight tequila. However, this method is a hit at parties and with tourists in Mexico, and there is an order that should be followed for it to make sense. A decent gold or silver tequila or Mezcal is best; avoid the cheaper brands, which almost certainly bring on hangovers. Ingredients: - 1 1/2 oz tequila - lemon or lime wedge - 1 pinch of salt
Check The perfect place for digging at Tule Elk Park Early Education School I just got back from an amazing conference called Engaging Our Grounds: 2011 International Green Schoolyard Conference in San Francisco and my head is reeling from… Added by Cynthia Gentry on September 22, 2011 at 10:28pm — No Comments "It is easier to build strong children than to repair broken men." - Frederick Douglass Added by Cynthia Gentry on September 9, 2011 at 9:49am — No Comments (Google office in Zurich - designed by the playful and brilliant architects at CamenzindEvolution) I remember going to buy shoes in Baltimore at age 3. I entered the store via a SLIDE!! I remember building a ghost house for my neighborhood at age 9 or so. You entered… Added by Cynthia Gentry on September 6, 2011 at 7:37… Added by Cynthia Gentry on September 1, 2011 at 8:00pm — 1 Comment
China called for calm Friday after North Korea, reacting to U.N. Security Council approval Thursday of the new sanctions, announced it had scrapped its peace pacts with South Korea and threatened pre-emptive nuclear strikes. The Foreign Ministry spokesman, whose name was not reported, was quoted by North Korea's government-run Korean Central News Agency as saying the Security Council resolution shows the United Nations is a tool of a U.S. plan "to destroy the Democratic People's Republic of Korea by disarming and suffocating it economically," South Korea's Yonhap News Agency reported. "The DPRK vehemently denounces and totally rejects the resolution on sanctions against the DPRK, a product of the U.S. hostile policy toward it," the spokesman said. The new sanctions resolution aims to punish North Korea for its latest underground nuclear test. A commentary in a North Korean government newspaper threatened the United States with "real war" if it goes ahead with joint military exercises with South Korea that began last week and include a period of computer-simulated drills scheduled for March 11-21. The Pyongyang government threatened to cut the emergency hotline with the South and cancel two non-aggression agreements.
TITLE: Probability of picking $2$ diamonds and $1$ non-diamond QUESTION [0 upvotes]: I'm having trouble understanding when to use combinations for counting, and when to multiply probabilities. Say the question is: what's the probability of drawing $2$ diamonds and one card that isn't a diamond? Why is one of these approaches wrong: $$\dfrac{_{13}C_2\,\cdot\, _{39}C_1}{_{52}C_3}$$ Here, I am choosing $2$ diamonds from the $13$ diamonds, times choosing $1$ non-diamond from the $39$ non-diamonds, all over the total number of ways you can choose $3$ cards. $$\frac{13}{52}\cdot\frac{12}{51}\cdot\frac{39}{50}$$ Here, first the probability of choosing $1$ diamond, then the second diamond from the remaining $51$ cards and $12$ diamonds, and then finally choosing one non-diamond from the remaining $50$ cards. These both give $2$ different answers. They're off by a factor of $3$. Which is correct? Why? REPLY [1 votes]: The first approach is correct. For the second approach, you have assumed the order DDN (diamond, diamond, non-diamond) but ignored DND and NDD. Each of the three orders has the same probability, so multiplying your product by $3$ makes the two answers agree.
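The factor-of-$3$ discrepancy can be verified in a couple of lines (a standalone check, not part of the original exchange):

```python
from math import comb

# Combinatorial count: C(13,2) * C(39,1) / C(52,3).
exact = comb(13, 2) * comb(39, 1) / comb(52, 3)

# Sequential product for the single fixed order D, D, N.
one_order = (13 / 52) * (12 / 51) * (39 / 50)

# The three orders DDN, DND, NDD each have the same probability
# (the numerators 13, 12, 39 and denominators 52, 51, 50 just permute),
# so multiplying by 3 recovers the combinatorial answer.
agreement = abs(exact - 3 * one_order)
```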
This test consists of 5 multiple choice questions, 5 short answer questions, and 10 short essay questions. Short Answer Questions 1. Who is Pat Furlong? 2. What year is it in Children of Silence, Chapter 9? 3. What does Loretta force Maureen to do for Pat? 4. Why does Bernard give Jules three hundred dollars? 5. What happens to Maureen's notebook? Short Essay Questions 1. How does Loretta feel about Howard's absence? 2. What are some of Maureen's responsibilities around the house after Loretta marries Pat? 3. What happens to Maureen as a result of her method of earning income? 4. What does Loretta think about her relationship with her children while she is in the shelter? 5. How does Jules seduce Nadine? 6. How does Maureen make money so she can leave her parents' house? 7. What is Loretta's reputation as a teenager? 8. What do Jules and Nadine end up doing in Chapter 5 of the second book? 9. How does Mrs. Wendall react to the barn fire? 10. How do Mort and Vera differ in their ideas about social change?
TITLE: Convergence radius of $\sum _{ n=1 }^{ \infty }{ \frac { { \left( -1 \right) }^{ \left\lfloor \sqrt { n } \right\rfloor } }{ n } } { x }^{ n }$ QUESTION [2 upvotes]: I need to find the radius of convergence of the series $$\sum _{ n=1 }^{ \infty }{ \frac { { \left( -1 \right) }^{ \left\lfloor \sqrt { n } \right\rfloor } }{ n } } { x }^{ n }$$ where $\left\lfloor \sqrt { n} \right\rfloor $ is the floor function. I haven't got any ideas. Any help will be appreciated. REPLY [1 votes]: Use the Cauchy root test: $$\frac { 1 }{ R } =\limsup_{n\rightarrow\infty} \sqrt [ n ]{ \left| \frac { { \left( -1 \right) }^{ \left\lfloor \sqrt { n } \right\rfloor } }{ n } \right| } =\lim _{ n\rightarrow \infty }{ \frac { 1 }{ \sqrt [ n ]{ n } } } =1,$$ so $R=1$ and the series converges for $$\left| x \right| <1.$$
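As a numerical illustration (my addition, not part of the answer), the $n$-th root term $(1/n)^{1/n}$ increases toward $1$, consistent with the Cauchy root test giving $R=1$:

```python
def nth_root_term(n):
    # |a_n|^(1/n) with a_n = (-1)^floor(sqrt(n)) / n; the sign has
    # modulus 1, so only (1/n)^(1/n) matters for the root test.
    return (1.0 / n) ** (1.0 / n)

# (1/n)^(1/n) = exp(-ln(n)/n) climbs monotonically toward 1.
vals = [nth_root_term(n) for n in (10, 100, 10_000, 1_000_000)]
```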
Striker not back on PSG radar following failed move in January as club denies speculation over summer move
Paris Saint Germain have quashed speculation linking them with a summer move for Saint Etienne forward Dimitri Payet after missing out on him in January. The 24-year-old Frenchman was close to joining the capital-based club during the winter transfer window, but negotiations collapsed. Rumours began to resurface over the weekend that he was destined for PSG after he was seen at the Parc des Princes as the home side beat Lyon 1-0. However the club have moved quickly to play down suggestions he was there for talks ahead of developments later this year. A statement released by PSG reads: "The presence of the player from AS Saint-Etienne at the Parc de Princes is nothing more than a personal initiative on his part. The club denies there was a meeting between Alain Roche [director of sport] and Payet." Saint Etienne are currently sixth in Ligue 1 following their 2-1 win over Nancy in which Payet scored both goals.
Robbing My Joy What is it that robbed you of your joy? What are the things that emotionally take you from a place of absolute appreciation and happiness every single day? What if for most people the one thing was the same thing? In this episode Michael shares the one thing that he believes robs most people of most of their joy. Enjoy!
Chic & Unique Design
Fun to Wear!
Good Quality
Handmade in EU
Warranty 24 Months
Specification:
- Japanese Quartz Movement (Citizen)
- Genuine Leather Strap
- Stainless Steel Back Case
- Diameter 39 mm (1.5″), case color: golden
- Strap width 20 mm (0.8″), strap color: gray
- Interchangeable Straps & Battery
- Splash Resistant
Additional information:
- Shipping worldwide from 4 USD
- Express delivery with FedEx
- Return within 30 days
by Mark Roe for Project Syndicate, June 20th, 2011... Someone naïve in the ways of US corporations might say that these rules are paper-thin, because shareholders can just elect new directors if the incumbents are recalcitrant. As long as they can elect the directors, one might think, shareholders rule the firm. That would be plausible if American corporate ownership were concentrated and powerful, with major shareholders owning, say, 25% of a company’s stock – a structure common in most other advanced countries, where families, foundations, or financial institutions more often have that kind of authority inside large firms. But that is neither how US firms are owned, nor how US corporate elections work. Ownership in large American firms is diffuse, with block-holding shareholders scarce, even today. Hedge funds with big blocks of stock are news, not the norm. (continue reading… ) 0 Responses to “How Capitalist is America?”
TITLE: Understanding a Sobolev Embedding Theorem QUESTION [4 upvotes]: In my adv. Analysis course, we have studied the following Sobolev Embedding Theorem: Let $m\in\mathbb{N}$ and $s>m+d/2$. Then $$H^s(\mathbb{R}^d)\hookrightarrow C_0^m(\mathbb{R}^d)$$That is: $H^s(\mathbb{R}^d)$ embeds into $C_0^m(\mathbb{R}^d)$ The proof we've studied basically starts by noticing that the Schwartz space $\mathcal{S}(\mathbb{R}^d)$ is dense in $H^s$, and then it goes on to prove that the inclusion map $$i:H^s\overset{\mathrm{dense}}{\supseteq}\mathcal{S} \longrightarrow C_0^m$$ is continuous. So by existence (and uniqueness) of an extended (injective) linear bounded operator, we have in fact an embedding from the Sobolev space $H^s(\mathbb{R}^d)$ into $C_0^m(\mathbb{R}^d)$. But I'm asking myself about the nature of such an extended embedding, more precisely: Does it mean that, under the hypotheses of the theorem, the Sobolev space $H^s$ is a "subset" of $C_0^m$? In the sense that every function in $H^s$ has a representative (of the a.e. equivalence class) in $C_0^m$? In other words, does the extension of the inclusion behave as an inclusion? Thanks REPLY [2 votes]: As Giuseppe has stated, this does, indeed, mean that you can view $H^s$ as a subset of $C^m_0$, and, as you guessed, exactly in the best possible sense: any function class in $H^s$ has a representative in $C^m_0$. To see this, let $u\in H^s$ (or rather, let $u$ be a representative of such a class) and pick a sequence $(u_n)_{n\in\mathbb{N}}\subseteq \mathcal{S}$ such that $u_n\to u$ in $H^s$. Since $i$ is a bounded operator from $H^s$ to $C^m_0$, we see that $(u_n)$ must also be convergent in $C^m_0$. This, in particular, implies that $u_n$ is pointwise convergent with a limit $f\in C^m_0$. However, $u_n$ being convergent in $H^s$ implies, in particular, that $u_n$ converges to $u$ in $L^2$. This, in turn, implies that there is a subsequence $u_{n_k}$ which converges to $u$ almost everywhere.
However, this implies that $f=u$ almost everywhere, and hence, any element of $H^s$ admits a classically differentiable representative.
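In compressed form (my paraphrase of the argument above, with $C$ the operator norm of the extended inclusion $i$):

```latex
% u_n in S, u_n -> u in H^s; boundedness of i gives a Cauchy sequence in C^m_0:
\|u_n - u_m\|_{C^m_0} \;\le\; C\,\|u_n - u_m\|_{H^s} \;\longrightarrow\; 0
\quad\Longrightarrow\quad u_n \to f \ \text{uniformly, for some } f \in C^m_0;
% H^s-convergence also gives L^2-convergence, hence an a.e. convergent subsequence:
u_n \to u \ \text{in } L^2
\quad\Longrightarrow\quad u_{n_k} \to u \ \text{a.e.}
\quad\Longrightarrow\quad f = u \ \text{a.e.}
```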
Seven Islands State Birding Park is the only park in Tennessee that primarily focuses on managing habitat for birds. In addition to creating a premier birding destination, the park provides educational programming and participates in several bird monitoring and research projects. Join us via Zoom on Tuesday, August 23, at 7 p.m. EDT to learn more about the park, some of the behind-the-scenes projects and how you can get involved. This program is presented by the UT Arboretum Society and the Tennessee Citizens for Wilderness Planning. Clare Dattilo is the seasonal interpreter at Seven Islands State Birding Park, where she leads educational programs, participates in bird research and organizes community science volunteers. She has over 20 years of experience in natural history interpretation and environmental education and is passionate about sharing her love of the natural world with visitors at the park. The program is free, but registration is required to receive your link. Register here. This program will be recorded, and closed captioning is available. Please note this program is scheduled on Eastern time. Contact UT Arboretum education coordinator, Michelle Campanis, at [email protected] with any questions or registration issues. Due to continued concerns regarding COVID-19, the UT Arboretum Society’s educational programs are currently not on-site activities. The UT Arboretum Society is pleased to bring the public some great online options. Learn more about the Arboretum Society here. Butterfly Festival is Sept. 17. One of the day’s highlights will be the release of 500 painted lady butterflies promptly at noon. Please plan enough time for arrival and parking before the release. It is suggested that butterflies be purchased early in the day due to limited supply, on a first come first served basis. Children are invited to help release the butterflies, which will be offered for $5 per butterfly to cover costs. Cash or credit cards will be accepted.
For the safety of all, the use of butterfly nets at this event is strictly prohibited. Speakers: Two speakers will give presentations in the air-conditioned UT Arboretum Auditorium. From 10-10:45, Stephen Lyn Bales will present “Our Beloved Butterflies and Their Hosts.” At 11 a.m., Georgann Eubanks will present “Habitat Heroes: Saving the Wild South for Us All.” Both speakers will have materials for sale, as will local artisans Kathy Fahey, Brad Greenwood, Kris Light and Teresa Myrick, all offering butterfly-themed merchandise. Melanie Staten is a public relations consultant with her husband, Vince.
Board & Brush Board & Brush offers hands on workshops to build on-trend, farmhouse-classic, and inspirational pieces of décor for your home or office. Each workshop teaches you the techniques needed to create a custom piece that looks professionally made. Address: 8851 Macon Hwy, Bldg 300, Athens, GA 30606 Phone: (706) 202-5969 Hours: Times vary
\begin{document} \maketitle \begin{abstract} For a pseudo-Anosov homeomorphism $f$ on a closed surface of genus $g\geq 2$, for which the entropy is of the order $\frac{1}{g}$ (the lowest possible order), Farb-Leininger-Margalit showed that the volume of the mapping torus is bounded, independent of $g$. We show that the analogous result fails for a surface of fixed genus $g$ with $n$ punctures, by constructing pseudo-Anosov homeomorphisms with entropy of the minimal order $\frac{\log n}{n}$, and volume tending to infinity. \end{abstract} \section{Introduction} Let $l_{g,n} = \min\{\log(\lambda(f)) \mid f : S_{g,n} \to S_{g,n} \text{ pseudo-Anosov}\}$ denote the logarithm of the minimal dilatation of a pseudo-Anosov homeomorphism on an orientable surface $S_{g,n}$ of genus $g$ with $n$ punctures, that is, the minimal topological entropy. When $n=0$, Penner showed that \[\frac{\log2}{12g-12}<l_{g,0}<\frac{\log 11}{g}.\] See \cite{penner}. These bounds have been improved since Penner's original work \cite{bound1,bound2,bound3,bound4,bound5,bound6}. To better understand where minimal dilatation pseudo-Anosov homeomorphisms come from, in \cite{flm}, the authors consider the set \[ \Psi_L=\{f:S_{g,0}\to S_{g,0} \mid f \text{ is pseudo-Anosov, } \log(\lambda(f)) \leq \frac{L}{g}\}. \] They show that for any $L>0$ there exists a finite number of hyperbolic 3-manifolds $M_1, \dots, M_n$, such that for each $f\in \Psi_L$, the mapping torus $M_f$ of $f$ is obtained by Dehn fillings on some $M_i$. See \cite[Corollary 1.4]{flm}. As a consequence, the volume of $M_f$ is bounded by a constant depending only on $L$; see \cite[Corollary 1.5]{flm}. See also \cite{agol2,kojima,brock}. For punctured surfaces of a fixed genus, Tsai \cite{tsai} proved that $l_{g,n}$ has a different asymptotic behavior. \begin{thm}[Tsai] For any fixed $g\geq2$, there is a constant $c_g\geq1$ depending on $g$ such that for all $n\geq3$, \[ \frac{\log n}{c_gn}< l_{g,n} < \frac{c_g\log{n}}{n}.\] \end{thm} See also \cite{yazdi,yazdi2,valdivia,bound5}.
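For orientation, the two asymptotic regimes just quoted can be set side by side; this display merely restates Penner's bound (closed surfaces) and Tsai's bound (fixed genus, many punctures):

```latex
\frac{\log 2}{12g-12} \;<\; l_{g,0} \;<\; \frac{\log 11}{g}
\quad (n=0,\ g\to\infty),
\qquad
\frac{\log n}{c_g\, n} \;<\; l_{g,n} \;<\; \frac{c_g \log n}{n}
\quad (g\ \text{fixed},\ n\to\infty).
```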
For fixed $g\geq 2$ and $n\geq 0$, let \[ \Psi_{g,L}=\{f:S_{g,n}\to S_{g,n} \mid f \text{ is pseudo-Anosov, } \log(\lambda(f)) \leq L\frac{\log{n}}{n}\}. \] We show that the analogue of the results of \cite{flm} fails for $\Psi_{g,L}$. Specifically, we prove the following. \begin{mthm*} For any fixed $g\geq 2$ and $L\geq 162g$, there exists a sequence $\{M_{f_i}\}_{i=1}^{\infty}$, with $f_i\in \Psi_{g,L}$, such that $\displaystyle \lim_{i \to \infty} \vol(M_{f_i}) = \infty$. \end{mthm*} As a consequence, we have the following. \begin{col} For any $g\geq 2$, there exists $L$ such that there is no finite set $\Omega$ of 3-manifolds with the property that every $M_f$ with $f\in \Psi_{g,L}$ is obtained by Dehn filling on some $M \in \Omega$. \end{col} The construction in the proof of the Main Theorem is based on the example in \cite{tsai} of $f_{g,n}:S_{g,n}\to S_{g,n}$ with \[ \log(\lambda(f_{g,n})) < \frac{c_g\log{n}}{n}. \] But for each $g$, one can show that the mapping tori $\{M_{f_{g,n}}\}_{n=1}^\infty$ are all obtained by Dehn fillings on a finite number of 3-manifolds, so we have to modify this construction. See also the examples constructed by Kin-Takasawa \cite{bound5}. The idea is to compose $f_{g,n}$ with homeomorphisms supported in a subsurface of $S_{g,n}$ that become more and more complicated as $n$ gets larger. This has to be balanced with keeping the stretch factor bounded by a fixed multiple of $\frac{\log n }{n}$. In Section 2 we recall some of the background we will need on fibered 3-manifolds, hyperbolic geometry, and Dehn surgery. In Section 3 we state Theorem \ref{main}, which is a version of the Main Theorem for punctured spheres based on a construction of \cite{hironaka}, and then prove the Main Theorem from it. 
In Section 4 we give the complete proof of Theorem \ref{main} by giving the construction of the sequence $\{M_{f_i}\}_{i=1}^{\infty}$, which are obtained by cutting open and gluing in an increasing number of copies of a certain manifold with totally geodesic boundary, then applying Dehn fillings. Based on the Main Theorem, we have the following question: if we only consider the minimizers of the entropy, can we still find a sequence with unbounded volume? \section{Background} \subsection{Fibered 3-manifolds} Let $S$ be a closed surface minus a finite number of points. We sometimes consider $S$ as a compact surface with boundary components, and will confuse punctures with boundary components when convenient (the former obtained from the latter by removing the boundary). The following theorem is from \cite{thurston}. \begin{thm}[Thurston] Any diffeomorphism $f$ on $S$ is isotopic to a map $f'$ satisfying one of the following conditions: \begin{enumerate} \item[(i)]$f'$ has finite order. \item[(ii)]$f'$ preserves a disjoint union of essential simple closed curves. \item[(iii)]There exist $\lambda>1$ and two transverse measured foliations $\mathcal{F}^s$ and $\mathcal{F}^u$, called the stable and unstable foliations, respectively, such that \[f'(\mathcal{F}^s)=(1/\lambda)\mathcal{F}^s, \quad f'(\mathcal{F}^u)=\lambda\mathcal{F}^u.\] \end{enumerate} \end{thm} The three cases are called {\em periodic}, {\em reducible} and {\em pseudo-Anosov}, respectively. The number $\lambda=\lambda(f)$ in case (iii) is called the {\em stretch factor} of $f$. The topological entropy of a pseudo-Anosov homeomorphism $f:S\to S$ is $\log (\lambda(f))$. Let $M$ be the interior of a compact, connected, orientable, irreducible, atoroidal 3-manifold that fibers over $S^1$ with fiber $S\subset M$ and monodromy $f$. 
That is, $M$ is the mapping torus of $f$: \[M=M_f=S\times[0,1] / (x,1)\sim(f(x),0).\] Then $S$ is an orientable surface with a finite number of punctures and negative Euler characteristic, and $f$ is pseudo-Anosov with a unique expanding invariant foliation up to isotopy. Associated to $(M,S)$ we also have \begin{enumerate} \item[(i)]$F\subset H^1(M,\mathbb{R})$, the open face of the unit ball in the Thurston norm with $[S]\in (F\cdot \mathbb{R}^+)$. See \cite{thurstonnorm}. \item[(ii)]A suspension flow $\psi$ on $M$, and a 2-dimensional foliation obtained by suspending the stable and unstable foliations of $f$. See \cite{fried1}. \end{enumerate} $F$ is called a {\em fibered face} of the Thurston norm ball. The segments \[\{x\}\times [0,1] \subset S \times [0,1]\] glued together in $M_f$ are leaves of the 1-dimensional foliation $\Psi$ of $M$, the flow lines of $\psi$. The following theorem is from \cite{fried1} and \cite{fried2}. \begin{thm}[Fried]\label{fried} Let $(M,S)$, $F$ and $\Psi$ be as above. Then any integral class in $F\cdot \mathbb{R}^+$ is represented by a fiber $S'$ of a fibration of $M$ over the circle which can be isotoped to be transverse to $\Psi$, and the first return map of $\psi$ to $S'$ coincides with the pseudo-Anosov monodromy $f'$, up to isotopy. Moreover, if $S' \subset M$ is any orientable surface with $S' \pitchfork \Psi$, then $[S']\in \overline{F\cdot \mathbb{R}^+}$. \end{thm} If $f: S\to S$ is pseudo-Anosov on a surface with punctures, and $G\subset S$ is a spine, then we can homotope $f$ to a map $g: S\to G$ so that $g|_G:G\to G$ is a graph map; that is, $g$ sends vertices to vertices and edges to edge paths. The growth rate of $g|_G$ is the largest absolute value of any eigenvalue of the Perron-Frobenius block of the transition matrix $T$ induced by $g$, and is an upper bound for $\lambda(f)$; see \cite{bestvina}. 
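To make the transition-matrix bound concrete, here is a small, hypothetical sketch (the matrix below is illustrative, not a transition matrix arising from the paper's construction): it computes the growth rate of a non-negative integral matrix and checks it against counts of length-$l$ paths in the associated directed graph.

```python
import numpy as np

# Hypothetical transition matrix of a graph map g|_G (illustrative only):
# T[i][j] = number of times the image of edge i crosses edge j.
T = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]])

# Growth rate: largest absolute value of an eigenvalue of T,
# which bounds the stretch factor lambda(f) from above.
growth = max(abs(np.linalg.eigvals(T)))

# Row i of T^l counts the length-l paths emanating from vertex V_i in the
# directed graph Gamma, so growth**l <= max_i N(V_i, l) for every l >= 1.
for l in range(1, 6):
    N = np.linalg.matrix_power(T, l).sum(axis=1)  # N(V_i, l) for each i
    assert growth**l <= N.max() + 1e-9

print(f"growth rate: {growth:.4f}")
```

Taking larger $l$ typically gives a sharper per-step bound $(\max_i N(V_i,l))^{1/l}$ on the growth rate than the raw row sums of $T$ itself.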
The Perron-Frobenius Theorem tells us that the largest eigenvalue of a Perron-Frobenius matrix is bounded above by the largest row sum of the matrix. Recall that associated to a non-negative integral matrix $T=\{e_{ij}\}$, $1\leq i,j \leq n$, is a directed graph $\Gamma$, where $\{V_1, V_2, \dots, V_n\}$ is the vertex set of $\Gamma$ corresponding to the columns/rows of $T$, and $e_{ij}$ is the number of edges pointing from $V_i$ to $V_j$. We have the following proposition. See \cite{gantmacher}. \begin{prop}\label{pf} Let $\Gamma$ be the directed graph of an integral Perron-Frobenius matrix $T$ with largest eigenvalue $\lambda$. Let $N(V_i,l)$ be the number of length-$l$ paths emanating from the vertex $V_i$ in $\Gamma$. Then $\lambda^l \leq \max_i{N(V_i,l)}$. \end{prop} \subsection{Hyperbolic geometry} \begin{figure}[!ht]\centering \label{AA} \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.8,xscale=0.8] \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (168.75,135.37) .. controls (170.08,122.38) and (195.84,111.64) .. (231.05,108.08) -- (258.51,137.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (168.75,135.37) .. controls (170.08,122.38) and (195.84,111.64) .. (231.05,108.08) ; \draw [draw opacity=0] (241.25,162) .. controls (239.52,155.77) and (238.52,148.54) .. (238.52,140.82) .. controls (238.52,118.35) and (246.95,99.97) .. (257.6,98.61) -- (258.86,140.82) -- cycle ; \draw (241.25,162) .. controls (239.52,155.77) and (238.52,148.54) .. (238.52,140.82) .. controls (238.52,118.35) and (246.95,99.97) .. (257.6,98.61) ; \draw (258.16,140.82) .. controls (258.16,140.82) and (258.16,140.82) .. (258.16,140.82) .. controls (258.16,140.82) and (258.16,140.82) .. (258.16,140.82) .. controls (258.16,140.82) and (258.16,140.82) .. (258.16,140.82) .. controls (258.16,140.82) and (258.16,140.82) .. (258.16,140.82)(168,140.82) .. controls (168,91.03) and (208.37,50.66) .. (258.16,50.66) .. 
controls (307.95,50.66) and (348.32,91.03) .. (348.32,140.82) .. controls (348.32,190.61) and (307.95,230.98) .. (258.16,230.98) .. controls (208.37,230.98) and (168,190.61) .. (168,140.82) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (257.14,98.76) .. controls (257.86,98.6) and (258.6,98.53) .. (259.34,98.54) .. controls (270.57,98.66) and (279.47,117.69) .. (279.21,141.05) .. controls (279.13,148.23) and (278.19,154.98) .. (276.6,160.89) -- (258.86,140.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (257.14,98.76) .. controls (257.86,98.6) and (258.6,98.53) .. (259.34,98.54) .. controls (270.57,98.66) and (279.47,117.69) .. (279.21,141.05) .. controls (279.13,148.23) and (278.19,154.98) .. (276.6,160.89) ; \draw [draw opacity=0] (348.3,138.07) .. controls (346.58,154.28) and (306.82,166.8) .. (258.19,166.2) .. controls (209.05,165.6) and (169.3,151.85) .. (168.75,135.37) -- (258.56,136) -- cycle ; \draw (348.3,138.07) .. controls (346.58,154.28) and (306.82,166.8) .. (258.19,166.2) .. controls (209.05,165.6) and (169.3,151.85) .. (168.75,135.37) ; \draw [draw opacity=0] (220.35,159.8) .. controls (217.48,154.09) and (215.86,147.64) .. (215.86,140.82) .. controls (215.86,117.46) and (234.8,98.52) .. (258.16,98.52) .. controls (281.52,98.52) and (300.45,117.46) .. (300.45,140.82) .. controls (300.45,147.79) and (298.77,154.37) .. (295.78,160.17) -- (258.16,140.82) -- cycle ; \draw (220.35,159.8) .. controls (217.48,154.09) and (215.86,147.64) .. (215.86,140.82) .. controls (215.86,117.46) and (234.8,98.52) .. (258.16,98.52) .. controls (281.52,98.52) and (300.45,117.46) .. (300.45,140.82) .. controls (300.45,147.79) and (298.77,154.37) .. (295.78,160.17) ; \draw [draw opacity=0] (290.32,168.29) .. controls (282.56,177.36) and (271.03,183.12) .. (258.16,183.12) .. controls (245.07,183.12) and (233.38,177.18) .. (225.62,167.84) -- (258.16,140.82) -- cycle ; \draw (290.32,168.29) .. controls (282.56,177.36) and (271.03,183.12) .. 
(258.16,183.12) .. controls (245.07,183.12) and (233.38,177.18) .. (225.62,167.84) ; \draw [draw opacity=0] (261.06,182.86) .. controls (260.34,183.02) and (259.61,183.11) .. (258.86,183.11) .. controls (253.15,183.11) and (247.98,178.21) .. (244.29,170.32) -- (258.86,140.82) -- cycle ; \draw (261.06,182.86) .. controls (260.34,183.02) and (259.61,183.11) .. (258.86,183.11) .. controls (253.15,183.11) and (247.98,178.21) .. (244.29,170.32) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (287.67,107.62) .. controls (322.65,111.89) and (347.87,123.14) .. (348.36,136.22) -- (258.56,136) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (287.67,107.62) .. controls (322.65,111.89) and (347.87,123.14) .. (348.36,136.22) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (234.72,107.74) .. controls (236.73,107.57) and (238.77,107.42) .. (240.84,107.3) -- (258.51,137.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (234.72,107.74) .. controls (236.73,107.57) and (238.77,107.42) .. (240.84,107.3) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (251.49,105.84) .. controls (253.9,105.8) and (256.33,105.78) .. (258.79,105.8) .. controls (261.42,105.82) and (264.03,105.88) .. (266.61,105.97) -- (258.56,136) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (251.49,105.84) .. controls (253.9,105.8) and (256.33,105.78) .. (258.79,105.8) .. controls (261.42,105.82) and (264.03,105.88) .. (266.61,105.97) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (275.26,106.8) .. controls (277.63,106.93) and (279.98,107.09) .. (282.29,107.28) -- (262.35,136.6) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (275.26,106.8) .. controls (277.63,106.93) and (279.98,107.09) .. (282.29,107.28) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (273.56,170.36) .. controls (270.06,177.61) and (265.33,182.29) .. (260.12,182.9) -- (259.32,140.68) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (273.56,170.36) .. 
controls (270.06,177.61) and (265.33,182.29) .. (260.12,182.9) ; \draw (252.93,162.49) -- (260.9,166.47) -- (252.93,170.46) ; \draw (242.66,140.19) -- (238.53,148.08) -- (234.7,140.04) ; \draw (242.66,136.8) -- (238.53,144.69) -- (234.7,136.65) ; \draw (242.66,133.41) -- (238.53,141.3) -- (234.7,133.26) ; \draw (205.46,67.56) -- (231.05,108.08) ; \draw (310.34,67.6) -- (283.93,107.95) ; \draw (236.46,176.44) -- (209.34,216.44) ; \draw (277.83,177.79) -- (304.27,219.16) ; \draw (293.76,84.67) -- (299.45,83.65) -- (300.39,89.36)(296.07,81.4) -- (301.76,80.39) -- (302.7,86.1) ; \draw (293.25,201.49) -- (292.09,193.1) -- (285.62,197.82) -- cycle ; \draw [fill={rgb, 255:red, 3; green, 3; blue, 3 } ,fill opacity=1 ] (216.43,84.57) -- (216.59,93.03) -- (223.57,89.11) -- cycle ; \draw (228.69,195.51) -- (220.15,200.47) -- (221.31,190.66) -- (222.58,196.78) -- cycle ; \draw (8,136.83) .. controls (8,96.98) and (40.31,64.67) .. (80.17,64.67) .. controls (120.02,64.67) and (152.33,96.98) .. (152.33,136.83) .. controls (152.33,176.69) and (120.02,209) .. (80.17,209) .. controls (40.31,209) and (8,176.69) .. (8,136.83) -- cycle ; \draw [draw opacity=0] (152.04,139.86) .. controls (148.7,154.89) and (117.8,166.67) .. (80.17,166.67) .. controls (40.31,166.67) and (8,153.46) .. (8,137.17) .. controls (8,136.94) and (8,136.71) .. (8.02,136.49) -- (80.17,137.17) -- cycle ; \draw (152.04,139.86) .. controls (148.7,154.89) and (117.8,166.67) .. (80.17,166.67) .. controls (40.31,166.67) and (8,153.46) .. (8,137.17) .. controls (8,136.94) and (8,136.71) .. (8.02,136.49) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (152.33,136.83) .. controls (149,121.81) and (118.1,110.03) .. (80.46,110.03) .. controls (40.61,110.03) and (8.3,123.24) .. (8.3,139.53) .. controls (8.3,139.76) and (8.3,139.99) .. (8.32,140.21) -- (80.46,139.53) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (152.33,136.83) .. controls (149,121.81) and (118.1,110.03) .. (80.46,110.03) .. 
controls (40.61,110.03) and (8.3,123.24) .. (8.3,139.53) .. controls (8.3,139.76) and (8.3,139.99) .. (8.32,140.21) ; \draw [draw opacity=0] (75.84,208.95) .. controls (59.81,207.48) and (47,175.76) .. (47,136.83) .. controls (47,97.94) and (59.79,66.23) .. (75.8,64.72) -- (77,136.83) -- cycle ; \draw (75.84,208.95) .. controls (59.81,207.48) and (47,175.76) .. (47,136.83) .. controls (47,97.94) and (59.79,66.23) .. (75.8,64.72) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (78.16,208.95) .. controls (94.19,207.48) and (107,175.76) .. (107,136.83) .. controls (107,97.11) and (93.66,64.88) .. (77.16,64.67) -- (77,136.83) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (78.16,208.95) .. controls (94.19,207.48) and (107,175.76) .. (107,136.83) .. controls (107,97.11) and (93.66,64.88) .. (77.16,64.67) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (518.34,58.53) .. controls (510.58,37.08) and (500.05,23.33) .. (488.36,21.79) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (518.34,58.53) .. controls (510.58,37.08) and (500.05,23.33) .. (488.36,21.79) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (396.75,142.37) .. controls (398.08,129.38) and (423.84,118.64) .. (459.05,115.08) -- (486.51,144.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (396.75,142.37) .. controls (398.08,129.38) and (423.84,118.64) .. (459.05,115.08) ; \draw [draw opacity=0] (469.25,169) .. controls (467.52,162.77) and (466.52,155.54) .. (466.52,147.82) .. controls (466.52,125.35) and (474.95,106.97) .. (485.6,105.61) -- (486.86,147.82) -- cycle ; \draw (469.25,169) .. controls (467.52,162.77) and (466.52,155.54) .. (466.52,147.82) .. controls (466.52,125.35) and (474.95,106.97) .. (485.6,105.61) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (485.14,105.76) .. controls (485.86,105.6) and (486.6,105.53) .. (487.34,105.54) .. controls (498.57,105.66) and (507.47,124.69) .. (507.21,148.05) .. 
controls (507.13,155.23) and (506.19,161.98) .. (504.6,167.89) -- (486.86,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (485.14,105.76) .. controls (485.86,105.6) and (486.6,105.53) .. (487.34,105.54) .. controls (498.57,105.66) and (507.47,124.69) .. (507.21,148.05) .. controls (507.13,155.23) and (506.19,161.98) .. (504.6,167.89) ; \draw [draw opacity=0] (435.97,167.41) .. controls (412.54,161.78) and (397.09,152.6) .. (396.75,142.37) -- (486.56,143) -- cycle ; \draw (435.97,167.41) .. controls (412.54,161.78) and (397.09,152.6) .. (396.75,142.37) ; \draw [draw opacity=0] (448.35,166.8) .. controls (445.48,161.09) and (443.86,154.64) .. (443.86,147.82) .. controls (443.86,124.46) and (462.8,105.52) .. (486.16,105.52) .. controls (509.52,105.52) and (528.45,124.46) .. (528.45,147.82) .. controls (528.45,154.79) and (526.77,161.37) .. (523.78,167.17) -- (486.16,147.82) -- cycle ; \draw (448.35,166.8) .. controls (445.48,161.09) and (443.86,154.64) .. (443.86,147.82) .. controls (443.86,124.46) and (462.8,105.52) .. (486.16,105.52) .. controls (509.52,105.52) and (528.45,124.46) .. (528.45,147.82) .. controls (528.45,154.79) and (526.77,161.37) .. (523.78,167.17) ; \draw [draw opacity=0] (518.32,175.29) .. controls (510.56,184.36) and (499.03,190.12) .. (486.16,190.12) .. controls (473.07,190.12) and (461.38,184.18) .. (453.62,174.84) -- (486.16,147.82) -- cycle ; \draw (518.32,175.29) .. controls (510.56,184.36) and (499.03,190.12) .. (486.16,190.12) .. controls (473.07,190.12) and (461.38,184.18) .. (453.62,174.84) ; \draw [draw opacity=0] (489.06,189.86) .. controls (488.34,190.02) and (487.61,190.11) .. (486.86,190.11) .. controls (481.15,190.11) and (475.98,185.21) .. (472.29,177.32) -- (486.86,147.82) -- cycle ; \draw (489.06,189.86) .. controls (488.34,190.02) and (487.61,190.11) .. (486.86,190.11) .. controls (481.15,190.11) and (475.98,185.21) .. (472.29,177.32) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (515.67,114.62) .. 
controls (550.65,118.89) and (575.87,130.14) .. (576.36,143.22) -- (486.56,143) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (515.67,114.62) .. controls (550.65,118.89) and (575.87,130.14) .. (576.36,143.22) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (462.72,114.74) .. controls (464.73,114.57) and (466.77,114.42) .. (468.84,114.3) -- (486.51,144.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (462.72,114.74) .. controls (464.73,114.57) and (466.77,114.42) .. (468.84,114.3) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (479.49,112.84) .. controls (481.9,112.8) and (484.33,112.78) .. (486.79,112.8) .. controls (489.42,112.82) and (492.03,112.88) .. (494.61,112.97) -- (486.56,143) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (479.49,112.84) .. controls (481.9,112.8) and (484.33,112.78) .. (486.79,112.8) .. controls (489.42,112.82) and (492.03,112.88) .. (494.61,112.97) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (503.26,113.8) .. controls (505.63,113.93) and (507.98,114.09) .. (510.29,114.28) -- (490.35,143.6) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (503.26,113.8) .. controls (505.63,113.93) and (507.98,114.09) .. (510.29,114.28) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (501.56,177.36) .. controls (498.06,184.61) and (493.33,189.29) .. (488.12,189.9) -- (487.32,147.68) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (501.56,177.36) .. controls (498.06,184.61) and (493.33,189.29) .. 
(488.12,189.9) ; \draw (480.93,169.49) -- (488.9,173.47) -- (480.93,177.46) ; \draw (470.66,147.19) -- (466.53,155.08) -- (462.7,147.04) ; \draw (470.66,143.8) -- (466.53,151.69) -- (462.7,143.65) ; \draw (470.66,140.41) -- (466.53,148.3) -- (462.7,140.26) ; \draw (416.83,43.9) -- (442.83,88.15) ; \draw (557.83,42.9) -- (511.93,114.95) ; \draw (464.46,183.44) -- (448.83,207.15) ; \draw (505.83,183.79) -- (551.83,256.9) ; \draw (521.76,91.67) -- (527.45,90.65) -- (528.39,96.36)(524.07,88.4) -- (529.76,87.39) -- (530.7,93.1) ; \draw (521.25,208.49) -- (520.09,200.1) -- (513.62,204.82) -- cycle ; \draw [fill={rgb, 255:red, 3; green, 3; blue, 3 } ,fill opacity=1 ] (448.43,96.57) -- (448.59,105.03) -- (455.57,101.11) -- cycle ; \draw (458.69,199.51) -- (450.15,204.47) -- (451.31,194.66) -- (452.58,200.78) -- cycle ; \draw (487.32,147.68) .. controls (487.32,147.68) and (487.32,147.68) .. (487.32,147.68) .. controls (487.32,147.68) and (487.32,147.68) .. (487.32,147.68) .. controls (487.32,147.68) and (487.32,147.68) .. (487.32,147.68) .. controls (487.32,147.68) and (487.32,147.68) .. (487.32,147.68)(361.25,147.68) .. controls (361.25,78.05) and (417.69,21.61) .. (487.32,21.61) .. controls (556.95,21.61) and (613.4,78.05) .. (613.4,147.68) .. controls (613.4,217.31) and (556.95,273.75) .. (487.32,273.75) .. controls (417.69,273.75) and (361.25,217.31) .. 
(361.25,147.68) ; \draw (444.66,147.19) -- (440.53,155.08) -- (436.7,147.04) ; \draw (444.66,143.8) -- (440.53,151.69) -- (436.7,143.65) ; \draw (444.66,140.41) -- (440.53,148.3) -- (436.7,140.26) ; \draw [fill={rgb, 255:red, 3; green, 3; blue, 3 } ,fill opacity=1 ] (424.43,56.57) -- (424.59,65.03) -- (431.57,61.11) -- cycle ; \draw (541.76,60.67) -- (547.45,59.65) -- (548.39,65.36)(544.07,57.4) -- (549.76,56.39) -- (550.7,62.1) ; \draw (543.25,243.49) -- (542.09,235.1) -- (535.62,239.82) -- cycle ; \draw (434.69,234.51) -- (426.15,239.47) -- (427.31,229.66) -- (428.58,235.78) -- cycle ; \draw [draw opacity=0] (576.3,145.07) .. controls (574.58,161.28) and (534.82,173.8) .. (486.19,173.2) .. controls (472.66,173.04) and (459.85,171.87) .. (448.36,169.93) -- (486.56,143) -- cycle ; \draw (576.3,145.07) .. controls (574.58,161.28) and (534.82,173.8) .. (486.19,173.2) .. controls (472.66,173.04) and (459.85,171.87) .. (448.36,169.93) ; \draw [draw opacity=0] (481.86,273.45) .. controls (458.72,267.44) and (440.61,213.49) .. (440.61,147.82) .. controls (440.61,80.18) and (459.83,24.97) .. (483.96,21.79) -- (486.16,147.82) -- cycle ; \draw (481.86,273.45) .. controls (458.72,267.44) and (440.61,213.49) .. (440.61,147.82) .. controls (440.61,80.18) and (459.83,24.97) .. (483.96,21.79) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (525.61,84.72) .. controls (524.46,79.24) and (523.17,74.02) .. (521.75,69.1) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (525.61,84.72) .. controls (524.46,79.24) and (523.17,74.02) .. (521.75,69.1) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (531.45,161.15) .. controls (531.62,156.77) and (531.7,152.32) .. (531.7,147.82) .. controls (531.7,131.33) and (530.56,115.58) .. (528.48,101.14) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (531.45,161.15) .. controls (531.62,156.77) and (531.7,152.32) .. (531.7,147.82) .. controls (531.7,131.33) and (530.56,115.58) .. 
(528.48,101.14) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (526.03,208.85) .. controls (528.1,198.5) and (529.66,187.25) .. (530.61,175.38) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (526.03,208.85) .. controls (528.1,198.5) and (529.66,187.25) .. (530.61,175.38) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (522.44,224.12) .. controls (523.13,221.61) and (523.78,219.02) .. (524.4,216.36) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (522.44,224.12) .. controls (523.13,221.61) and (523.78,219.02) .. (524.4,216.36) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (495.9,271.1) .. controls (504.57,265.87) and (512.31,253.81) .. (518.36,237.06) -- (486.16,147.82) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (495.9,271.1) .. controls (504.57,265.87) and (512.31,253.81) .. (518.36,237.06) ; \draw (448.43,96.57) -- (459.05,115.08) ; \draw (443.83,214.15) -- (416.83,252.9) ; \draw [draw opacity=0] (459.11,62.04) .. controls (468.85,58.97) and (479.29,57.52) .. (490.08,57.99) .. controls (539.69,60.16) and (578.16,102.13) .. (575.99,151.74) .. controls (573.82,201.35) and (531.85,239.82) .. (482.24,237.65) .. controls (473.78,237.28) and (465.64,235.76) .. (457.98,233.23) -- (486.16,147.82) -- cycle ; \draw (459.11,62.04) .. controls (468.85,58.97) and (479.29,57.52) .. (490.08,57.99) .. controls (539.69,60.16) and (578.16,102.13) .. (575.99,151.74) .. controls (573.82,201.35) and (531.85,239.82) .. (482.24,237.65) .. controls (473.78,237.28) and (465.64,235.76) .. (457.98,233.23) ; \draw [draw opacity=0] (445.53,227.32) .. controls (437.22,222.96) and (429.47,217.22) .. (422.6,210.1) .. controls (388.13,174.35) and (389.16,117.43) .. (424.91,82.96) .. controls (431,77.08) and (437.71,72.23) .. (444.82,68.42) -- (487.32,147.68) -- cycle ; \draw (445.53,227.32) .. controls (437.22,222.96) and (429.47,217.22) .. (422.6,210.1) .. 
controls (388.13,174.35) and (389.16,117.43) .. (424.91,82.96) .. controls (431,77.08) and (437.71,72.23) .. (444.82,68.42) ; \draw (74.93,162.49) -- (82.9,166.47) -- (74.93,170.46) ; \draw (50.66,140.19) -- (47.12,148.08) -- (43.83,140.04) ; \draw (50.66,136.8) -- (47.12,144.69) -- (43.83,136.65) ; \draw (50.66,133.41) -- (47.12,141.3) -- (43.83,133.26) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (40,84.25) .. controls (40,83.47) and (40.63,82.83) .. (41.42,82.83) .. controls (42.2,82.83) and (42.83,83.47) .. (42.83,84.25) .. controls (42.83,85.03) and (42.2,85.67) .. (41.42,85.67) .. controls (40.63,85.67) and (40,85.03) .. (40,84.25) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (113,84.25) .. controls (113,83.47) and (113.63,82.83) .. (114.42,82.83) .. controls (115.2,82.83) and (115.83,83.47) .. (115.83,84.25) .. controls (115.83,85.03) and (115.2,85.67) .. (114.42,85.67) .. controls (113.63,85.67) and (113,85.03) .. (113,84.25) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (38,186.25) .. controls (38,185.47) and (38.63,184.83) .. (39.42,184.83) .. controls (40.2,184.83) and (40.83,185.47) .. (40.83,186.25) .. controls (40.83,187.03) and (40.2,187.67) .. (39.42,187.67) .. controls (38.63,187.67) and (38,187.03) .. (38,186.25) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (113,189.25) .. controls (113,188.47) and (113.63,187.83) .. (114.42,187.83) .. controls (115.2,187.83) and (115.83,188.47) .. (115.83,189.25) .. controls (115.83,190.03) and (115.2,190.67) .. (114.42,190.67) .. controls (113.63,190.67) and (113,190.03) .. (113,189.25) -- cycle ; \draw (134,145) node {$\delta _{0}$}; \draw (66,90) node {$\delta _{1}$}; \end{tikzpicture} \caption{Left: $\Sigma_4$. Middle: $A_0$. 
Right: $A$.} \end{figure} \begin{figure}[!ht]\centering \label{AA} \tikzset{every picture/.style={line width=0.75pt}} \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (447.82,154.64) .. controls (447.82,146.03) and (464.39,139.04) .. (484.82,139.04) .. controls (505.26,139.04) and (521.83,146.03) .. (521.83,154.64) .. controls (521.83,163.26) and (505.26,170.24) .. (484.82,170.24) .. controls (464.39,170.24) and (447.82,163.26) .. (447.82,154.64) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (461.11,140.4) .. controls (461.11,134.51) and (465.12,129.73) .. (470.07,129.73) .. controls (475.02,129.73) and (479.04,134.51) .. (479.04,140.4) .. controls (479.04,146.29) and (475.02,151.07) .. (470.07,151.07) .. controls (465.12,151.07) and (461.11,146.29) .. (461.11,140.4) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (491.47,140.4) .. controls (491.47,134.51) and (495.48,129.73) .. (500.44,129.73) .. controls (505.39,129.73) and (509.4,134.51) .. (509.4,140.4) .. controls (509.4,146.29) and (505.39,151.07) .. (500.44,151.07) .. controls (495.48,151.07) and (491.47,146.29) .. (491.47,140.4) -- cycle ; \draw (238.82,198.79) .. controls (238.82,191.97) and (255.39,186.45) .. (275.82,186.45) .. controls (296.26,186.45) and (312.83,191.97) .. (312.83,198.79) .. controls (312.83,205.61) and (296.26,211.14) .. (275.82,211.14) .. controls (255.39,211.14) and (238.82,205.61) .. (238.82,198.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (252.11,187.52) .. controls (252.11,182.86) and (256.12,179.08) .. (261.07,179.08) .. controls (266.02,179.08) and (270.04,182.86) .. (270.04,187.52) .. 
controls (270.04,192.18) and (266.02,195.96) .. (261.07,195.96) .. controls (256.12,195.96) and (252.11,192.18) .. (252.11,187.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (282.47,187.52) .. controls (282.47,182.86) and (286.48,179.08) .. (291.44,179.08) .. controls (296.39,179.08) and (300.4,182.86) .. (300.4,187.52) .. controls (300.4,192.18) and (296.39,195.96) .. (291.44,195.96) .. controls (286.48,195.96) and (282.47,192.18) .. (282.47,187.52) -- cycle ; \draw (18.83,142) .. controls (18.83,114.39) and (49.05,92) .. (86.33,92) .. controls (123.61,92) and (153.83,114.39) .. (153.83,142) .. controls (153.83,169.61) and (123.61,192) .. (86.33,192) .. controls (49.05,192) and (18.83,169.61) .. (18.83,142) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (48,142.25) .. controls (48,141.47) and (48.63,140.83) .. (49.42,140.83) .. controls (50.2,140.83) and (50.83,141.47) .. (50.83,142.25) .. controls (50.83,143.03) and (50.2,143.67) .. (49.42,143.67) .. controls (48.63,143.67) and (48,143.03) .. (48,142.25) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (84.92,143.42) .. controls (84.92,142.63) and (85.55,142) .. (86.33,142) .. controls (87.12,142) and (87.75,142.63) .. (87.75,143.42) .. controls (87.75,144.2) and (87.12,144.83) .. (86.33,144.83) .. controls (85.55,144.83) and (84.92,144.2) .. (84.92,143.42) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (123.92,144.42) .. controls (123.92,143.63) and (124.55,143) .. (125.33,143) .. controls (126.12,143) and (126.75,143.63) .. (126.75,144.42) .. controls (126.75,145.2) and (126.12,145.83) .. (125.33,145.83) .. controls (124.55,145.83) and (123.92,145.2) .. (123.92,144.42) -- cycle ; \draw (32,141) .. controls (32,129.95) and (47.67,121) .. (67,121) .. controls (86.33,121) and (102,129.95) .. (102,141) .. 
controls (102,152.05) and (86.33,161) .. (67,161) .. controls (47.67,161) and (32,152.05) .. (32,141) -- cycle ; \draw (338.75,99) -- (338.75,197) .. controls (338.75,208.6) and (305.9,218) .. (265.38,218) .. controls (224.85,218) and (192,208.6) .. (192,197) -- (192,99) .. controls (192,87.4) and (224.85,78) .. (265.38,78) .. controls (305.9,78) and (338.75,87.4) .. (338.75,99) .. controls (338.75,110.6) and (305.9,120) .. (265.38,120) .. controls (224.85,120) and (192,110.6) .. (192,99) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (232.58,95.7) -- (230.86,203.31) .. controls (230.84,204.06) and (228.8,204.63) .. (226.3,204.59) .. controls (223.8,204.55) and (221.78,203.91) .. (221.79,203.16) -- (223.52,95.55) .. controls (223.53,94.8) and (225.57,94.23) .. (228.07,94.27) .. controls (230.58,94.31) and (232.6,94.95) .. (232.58,95.7) .. controls (232.57,96.45) and (230.53,97.03) .. (228.03,96.98) .. controls (225.53,96.94) and (223.51,96.3) .. (223.52,95.55) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (265.58,95.7) -- (263.86,203.31) .. controls (263.84,204.06) and (261.8,204.63) .. (259.3,204.59) .. controls (256.8,204.55) and (254.78,203.91) .. (254.79,203.16) -- (256.52,95.55) .. controls (256.53,94.8) and (258.57,94.23) .. (261.07,94.27) .. controls (263.58,94.31) and (265.6,94.95) .. (265.58,95.7) .. controls (265.57,96.45) and (263.53,97.03) .. (261.03,96.98) .. controls (258.53,96.94) and (256.51,96.3) .. (256.52,95.55) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (296.58,96.7) -- (294.86,204.31) .. controls (294.84,205.06) and (292.8,205.63) .. (290.3,205.59) .. controls (287.8,205.55) and (285.78,204.91) .. (285.79,204.16) -- (287.52,96.55) .. controls (287.53,95.8) and (289.57,95.23) .. (292.07,95.27) .. controls (294.58,95.31) and (296.6,95.95) .. (296.58,96.7) .. controls (296.57,97.45) and (294.53,98.03) .. (292.03,97.98) .. controls (289.53,97.94) and (287.51,97.3) .. (287.52,96.55) ; \draw (279.84,97.45) .. 
controls (279.83,104.27) and (263.26,109.78) .. (242.82,109.77) .. controls (222.38,109.75) and (205.81,104.21) .. (205.82,97.4) .. controls (205.82,90.58) and (222.4,85.06) .. (242.84,85.08) .. controls (263.28,85.09) and (279.84,90.63) .. (279.84,97.45) -- cycle ; \draw (69,146) .. controls (69,134.95) and (84.67,126) .. (104,126) .. controls (123.33,126) and (139,134.95) .. (139,146) .. controls (139,157.05) and (123.33,166) .. (104,166) .. controls (84.67,166) and (69,157.05) .. (69,146) -- cycle ; \draw (413.82,214.64) .. controls (413.82,206.03) and (430.39,199.04) .. (450.82,199.04) .. controls (471.26,199.04) and (487.83,206.03) .. (487.83,214.64) .. controls (487.83,223.26) and (471.26,230.24) .. (450.82,230.24) .. controls (430.39,230.24) and (413.82,223.26) .. (413.82,214.64) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (427.11,200.4) .. controls (427.11,194.51) and (431.12,189.73) .. (436.07,189.73) .. controls (441.02,189.73) and (445.04,194.51) .. (445.04,200.4) .. controls (445.04,206.29) and (441.02,211.07) .. (436.07,211.07) .. controls (431.12,211.07) and (427.11,206.29) .. (427.11,200.4) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (457.47,200.4) .. controls (457.47,194.51) and (461.48,189.73) .. (466.44,189.73) .. controls (471.39,189.73) and (475.4,194.51) .. (475.4,200.4) .. controls (475.4,206.29) and (471.39,211.07) .. (466.44,211.07) .. controls (461.48,211.07) and (457.47,206.29) .. (457.47,200.4) -- cycle ; \draw (546.75,83.01) -- (546.75,215.9) .. controls (546.75,228.06) and (513.9,237.92) .. (473.38,237.92) .. controls (432.85,237.92) and (400,228.06) .. (400,215.9) -- (400,83.01) .. controls (400,70.86) and (432.85,61) .. (473.38,61) .. controls (513.9,61) and (546.75,70.86) .. (546.75,83.01) .. 
controls (546.75,95.17) and (513.9,105.03) .. (473.38,105.03) .. controls (432.85,105.03) and (400,95.17) .. (400,83.01) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (440.59,83.01) -- (438.85,219.71) .. controls (438.84,220.46) and (436.8,221.03) .. (434.3,220.98) .. controls (431.8,220.92) and (429.77,220.27) .. (429.78,219.52) -- (431.52,82.82) .. controls (431.53,82.07) and (433.57,81.5) .. (436.08,81.56) .. controls (438.58,81.61) and (440.6,82.26) .. (440.59,83.01) .. controls (440.58,83.76) and (438.55,84.33) .. (436.04,84.28) .. controls (433.54,84.23) and (431.52,83.57) .. (431.52,82.82) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (473.59,83.01) -- (471.85,219.71) .. controls (471.84,220.46) and (469.8,221.03) .. (467.3,220.98) .. controls (464.8,220.92) and (462.77,220.27) .. (462.78,219.52) -- (464.52,82.82) .. controls (464.53,82.07) and (466.57,81.5) .. (469.08,81.56) .. controls (471.58,81.61) and (473.6,82.26) .. (473.59,83.01) .. controls (473.58,83.76) and (471.55,84.33) .. (469.04,84.28) .. controls (466.54,84.23) and (464.52,83.57) .. (464.52,82.82) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (504.59,84.27) -- (502.85,220.97) .. controls (502.84,221.72) and (500.8,222.29) .. (498.3,222.24) .. controls (495.8,222.19) and (493.77,221.54) .. (493.78,220.79) -- (495.52,84.09) .. controls (495.53,83.34) and (497.57,82.77) .. (500.08,82.82) .. controls (502.58,82.87) and (504.6,83.52) .. (504.59,84.27) .. controls (504.58,85.02) and (502.55,85.59) .. (500.04,85.54) .. controls (497.54,85.49) and (495.52,84.84) .. (495.52,84.09) ; \draw (487.84,85.58) .. controls (487.83,94.19) and (471.26,101.16) .. (450.82,101.14) .. controls (430.38,101.13) and (413.82,94.13) .. (413.82,85.51) .. controls (413.82,76.9) and (430.4,69.93) .. (450.84,69.94) .. controls (471.27,69.96) and (487.84,76.96) .. 
(487.84,85.58) -- cycle ; \draw (449.17,73.65) -- (441.27,69.54) -- (449.3,65.69) ; \draw (452.56,73.65) -- (444.66,69.53) -- (452.69,65.68) ; \draw (455.95,73.64) -- (448.05,69.52) -- (456.08,65.67) ; \draw (240.17,89.65) -- (232.27,85.54) -- (240.3,81.69) ; \draw (243.56,89.65) -- (235.66,85.53) -- (243.69,81.68) ; \draw (246.95,89.64) -- (239.05,85.52) -- (247.08,81.67) ; \draw (65.17,125.65) -- (57.27,121.54) -- (65.3,117.69) ; \draw (68.56,125.65) -- (60.66,121.53) -- (68.69,117.68) ; \draw (71.95,125.64) -- (64.05,121.52) -- (72.08,117.67) ; \draw (104.93,161.49) -- (112.9,165.47) -- (104.93,169.46) ; \draw (270.93,206.49) -- (278.9,210.47) -- (270.93,214.46) ; \draw (479.93,166.49) -- (487.9,170.47) -- (479.93,174.46) ; \draw (452.17,203.65) -- (444.27,199.54) -- (452.3,195.69) ; \draw (455.56,203.65) -- (447.66,199.53) -- (455.69,195.68) ; \draw (458.95,203.64) -- (451.05,199.52) -- (459.08,195.67) ; \draw (65,176) node {$\delta _{1}$}; \draw (116,113) node {$\delta _{0}$}; \end{tikzpicture} \caption{Left: $\Sigma_4$. Middle: $A_0$. Right: $A$.} \end{figure} The following construction is given by Agol in \cite{agol}. Let $\Sigma_4$ denote the 4-punctured sphere, and let $\delta_0, \delta_1 \subset \Sigma_4$ be the two circles on $\Sigma_4$ shown in Figure 1. Let $A_0$ be $\Sigma_4\times [0,1]\backslash (\delta_0 \times \{0\} \cup \delta_1 \times \{1\})$. Let $V_8$ denote the volume of a regular, ideal, hyperbolic octahedron. \begin{prop}[Agol]\label{agol} $A_0$ has a complete hyperbolic metric with totally geodesic boundary, with $\vol(A_0)=2V_8$. \end{prop} For our purposes, it is more useful to draw the 4-punctured sphere as a 3-punctured disk; the manifolds $A$ and $A_0$ are then as shown in Figure 2.
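As an aside (not needed for the arguments below, and stated under the standard normalization, e.g. in Thurston's notes), the constant $V_8$ has a closed form in terms of the Lobachevsky function $\Lambda$:

```latex
% Volume of the regular ideal hyperbolic octahedron, via the Lobachevsky
% function \Lambda(\theta) = -\int_0^\theta \log|2\sin t|\, dt:
\[
  V_8 \;=\; 8\,\Lambda\!\Bigl(\frac{\pi}{4}\Bigr) \;\approx\; 3.66386,
  \qquad\text{so}\qquad
  \vol(A_0) \;=\; 2V_8 \;\approx\; 7.32772.
\]
```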
Let $A$ denote the manifold obtained by isometrically gluing two copies of $A_0$ along $\Sigma_4\times \{0\}\backslash (\delta_0 \times \{0\})$. Then we have \[A\cong\Sigma_4\times [0,1]\backslash (\delta_1 \times \{0,1\} \cup \delta_0 \times \{1/2\}),\] and $A$ is a hyperbolic 3-manifold with totally geodesic boundary and \[\vol(A)=4V_8.\] We will also need the following theorem, due to Adams \cite{adams}. \begin{thm}[Adams]\label{adams} Any properly embedded incompressible thrice-punctured sphere in a hyperbolic 3-manifold $M$ is isotopic to a totally geodesic properly embedded thrice-punctured sphere in $M$. \end{thm} From this theorem one easily obtains the following. \begin{col} The components of a disjoint union of pairwise non-isotopic properly embedded thrice-punctured spheres in a hyperbolic 3-manifold $M$ are simultaneously isotopic to pairwise disjoint totally geodesic thrice-punctured spheres in $M$. \end{col} \subsection{Dehn surgery} Let $M$ be a compact 3-manifold with boundary $\partial M=\partial_1M\sqcup \dots \sqcup \partial_kM$ such that the interior of $M$ is a complete hyperbolic manifold, where $\partial_iM$ is a torus for each $1\leq i\leq k$. Choose a basis $\mu_i,\nu_i$ for $H_1(\partial_i M)=\pi_1(\partial_i M)$. Then the isotopy class of any essential simple closed curve $\beta_i$ on $\partial_iM$, called a {\em slope}, is represented by $p_i\mu_i+q_i\nu_i$ in $H_1(\partial_iM)$ for coprime integers $p_i,q_i$. Since we do not care about the orientation of $\beta_i$, we use the notation $\beta_i=\frac{p_i}{q_i}\in\mathbb{Q}\cup\{\infty\}$. Given $\beta=(\beta_1,\dots, \beta_k)$, where each $\beta_i$ is a slope, let $M_\beta$ denote the manifold obtained by gluing a solid torus to each $\partial_iM$ so that the slope $\beta_i$ on $\partial_iM$ is identified with the meridian of the corresponding solid torus. We call $\beta=\{\beta_1,\dots,\beta_k\}$ the Dehn surgery coefficients. The following is from \cite{thurstonnote,dehnsurgery}.
\begin{thm}[Thurston]\label{thurston} If the interior of $M$ is a complete hyperbolic 3-manifold of finite volume and $\beta=\{\beta_1,\dots,\beta_k\}$ are the Dehn surgery coefficients, then for all but finitely many slopes $\beta_i$, for each $i$, $M_\beta$ is hyperbolic and $\vol(M_\beta)<\vol(M)$. If $\beta^n=(\beta^n_1,\dots, \beta^n_k)$, with $\{\beta_i^n\}_{n=1}^{\infty}$ an infinite sequence of distinct slopes on $\partial_iM$ for each $1\leq i\leq k$, then $\displaystyle{\lim_{n \to \infty} \vol(M_{\beta^n})= \vol(M)}$. \end{thm} Let $||[M]||$ denote the {\em Gromov norm} of the fundamental class $[M]\in H_3(M;\partial M)$. Then we have the following two theorems; see \cite[Theorem 6.2, Proposition 6.5.2, Lemma 6.5.4]{thurstonnote}. \begin{thm}[Gromov]\label{gromov} If the interior of $M$ admits a complete hyperbolic metric of finite volume, then \[ ||[M]||=\frac{\vol(M)}{v_3}. \] \end{thm} \begin{thm}[Thurston]\label{thurston1} For any Dehn fillings with Dehn surgery coefficients $\beta=\{\beta_1,\dots,\beta_k\}$, \[ ||[M_\beta]||\leq||[M]||. \] \end{thm} We will be interested in a special case of Dehn surgery in which $M$ is obtained from a mapping torus \[M_f=S\times [0,1]/(x,1)\sim(f(x),0)\] by removing neighborhoods of disjoint curves $\alpha_1, \alpha_2, \dots, \alpha_k$, $\alpha_i \subset S\times \{t_i\}$, for some \[0<t_1<t_2<\dots<t_k<1.\] Then we can choose a basis $\mu_i,\nu_i$ of $H_1(\partial_i M)$ so that if $\beta_i=\frac{1}{r_i}$, then \[M_{\beta}=M_{T_{\alpha_k}^{r_k}T_{\alpha_{k-1}}^{r_{k-1}} \dots T_{\alpha_1}^{r_1}f}.\] See, for example, \cite{stallings}. \section{Reduction} Consider the sphere with $n+m+2$ punctures, $S_{0,n+m+2}$. We can distribute the punctures as shown in Figure 3. Let $x$, $y$ and $z$ be three of the punctures, as shown. Let $X,Y\subset S_{0,n+m+2}$ be two embedded punctured disks centered at $x$ and $y$ as shown in Figure 3.
There are $n$ punctures in $X$ arranged around $x$ and $m$ punctures in $Y$ arranged around $y$, with one puncture shared by $X$ and $Y$. Let $p_n$ denote the homeomorphism that is supported inside $X$, fixes $x$, and rotates the punctures around $x$ by one counterclockwise. Let $q_m$ denote the homeomorphism that is supported inside $Y$, fixes $y$, and rotates the punctures around $y$ by one clockwise. For any $n,m>6$, let $f_{n,m}:S_{0,n+m+2} \rightarrow S_{0,n+m+2}$ be $f_{n,m}=q_mp_n$. These homeomorphisms $f_{n,m}$ were constructed by Hironaka and Kin in \cite{hironaka} and were shown to be pseudo-Anosov. Let $V_1, V_2, \dots, V_n$ be the punctures in $X$, starting with $V_1$ in $X\cap Y$, ordered counterclockwise, as shown in Figure 3. Let $\Sigma_0 \subset S_{0,n+m+2}$ be the subsurface containing the three consecutive punctures $\{V_i, V_{i+1}, V_{i+2}\}$, with $\partial\Sigma_0=\beta$ as shown in Figure 3. Let $\alpha,\gamma\subset\Sigma_0$ be the two essential closed curves shown. We will consider the composition $hf^3_{n,m}$, where $h: S_{0,n+m+2}\to S_{0,n+m+2}$ is a homeomorphism supported in $\Sigma_0$. Note that if we replace $h$ by $p_n^khp^{-k}_n$ for $1\leq k \leq n-(i+3)$, which is supported on $p^k_n(\Sigma_0)$, then $q_m$ commutes with $p_n^jhp_n^{-j}$ for $1\leq j \leq k$. So we have \begin{equation}\notag \begin{split} f^k_{n,m}hf^3_{n,m}f^{-k}_{n,m} & = f^{k-1}_{n,m}q_m(p_nhp_n^{-1})p_nf^{-k+3}_{n,m} \\ & = f^{k-1}_{n,m}(p_nhp_n^{-1})q_mp_nf^{-k+3}_{n,m} \\ & = f^{k-1}_{n,m}(p_nhp_n^{-1})f^{-k+4}_{n,m} \\ & = f^{k-2}_{n,m}q_m(p^2_nhp_n^{-2})p_nf^{-k+4}_{n,m} \\ & = \dots \\ & = q_m(p_n^{k}hp_n^{-k})p_nf^{2}_{n,m} \\ & =(p_n^{k}hp_n^{-k})f^{3}_{n,m}. \\ \end{split} \end{equation} That is, $hf^3_{n,m}$ is conjugate to $p_n^khp^{-k}_nf^3_{n,m}$. In particular, we can assume $\Sigma_0$ surrounds $V_i,V_{i+1},V_{i+2}$ for any $2\leq i\leq n-5$ at the expense of a conjugation, which affects neither the stretch factor nor the homeomorphism type of the mapping torus.
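The commutation used in the chain above rests on a disjoint-support argument, which can be made explicit with a small toy check. Everything in the snippet is our own illustrative assumption: the punctures $V_1,\dots,V_n$ of $X$ are modelled by their indices, $p_n$ is taken to act on labels as the cyclic shift $V_j\mapsto V_{j+1}$, and the sample values $n=12$, $i=2$ are hypothetical.

```python
# Toy bookkeeping check for q_m (p_n^j h p_n^{-j}) = (p_n^j h p_n^{-j}) q_m.
# Punctures V_1, ..., V_n in X are modelled by their indices 1..n (V_1 lies
# in X ∩ Y); we track only supports, i.e. sets of puncture labels.

n, i = 12, 2                  # hypothetical sample values; any n > 6, 2 <= i <= n - 5
sigma0 = {i, i + 1, i + 2}    # punctures surrounded by Sigma_0 (the support of h)
supp_qm = {1}                 # the only puncture of X that q_m can touch is V_1

def shift(support, j, n):
    """Support of p_n^j h p_n^{-j}: Sigma_0 rotated j steps around x."""
    return {(v - 1 + j) % n + 1 for v in support}

# q_m commutes with p_n^j h p_n^{-j} whenever their supports are disjoint,
# which holds for every 1 <= j <= k as long as k <= n - (i + 3).
for k in range(1, n - (i + 3) + 1):
    for j in range(1, k + 1):
        assert shift(sigma0, j, n).isdisjoint(supp_qm)
```

The check only verifies the label-level disjointness of supports, not the homeomorphisms themselves; it confirms that for $k\leq n-(i+3)$ the rotated copies $p_n^j(\Sigma_0)$ never reach the shared puncture $V_1$, which is what the computation above uses.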
For this reason, in the following statements, $\Sigma_0$ is allowed to surround the punctures $V_i,V_{i+1},V_{i+2}$ for any $2\leq i\leq n-5$. \begin{figure}[!ht]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [fill={rgb, 255:red, 255; green, 0; blue, 0 } ,fill opacity=0.46 ] (134.36,270.73) .. controls (114.48,271.15) and (97.85,246.7) .. (97.21,216.12) .. controls (96.57,185.55) and (112.16,160.42) .. (132.04,160.01) .. controls (151.92,159.59) and (168.56,184.04) .. (169.2,214.62) .. controls (169.84,245.19) and (154.24,270.32) .. (134.36,270.73) -- cycle ; \draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=0.33 ] (201.13,211.42) .. controls (223.45,198.66) and (262.61,225.2) .. (288.61,270.68) .. controls (314.61,316.17) and (317.59,363.38) .. (295.28,376.14) .. controls (272.96,388.89) and (233.8,362.36) .. (207.8,316.87) .. controls (181.8,271.38) and (178.82,224.17) .. (201.13,211.42) -- cycle ; \draw (75,218.92) .. controls (75,101.88) and (169.88,7) .. (286.92,7) .. controls (403.96,7) and (498.83,101.88) .. (498.83,218.92) .. controls (498.83,335.96) and (403.96,430.83) .. (286.92,430.83) .. controls (169.88,430.83) and (75,335.96) .. (75,218.92) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (94.11,211.45) .. controls (94.11,153.12) and (141.39,105.84) .. (199.72,105.84) .. controls (258.05,105.84) and (305.33,153.12) .. (305.33,211.45) .. controls (305.33,269.78) and (258.05,317.06) .. (199.72,317.06) .. controls (141.39,317.06) and (94.11,269.78) .. (94.11,211.45) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (278.06,211.45) .. controls (278.06,153.12) and (325.34,105.84) .. (383.67,105.84) .. controls (441.99,105.84) and (489.28,153.12) .. (489.28,211.45) .. controls (489.28,269.78) and (441.99,317.06) .. (383.67,317.06) .. controls (325.34,317.06) and (278.06,269.78) .. 
(278.06,211.45) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (290,213.69) .. controls (290,212.23) and (291.18,211.05) .. (292.64,211.05) .. controls (294.1,211.05) and (295.28,212.23) .. (295.28,213.69) .. controls (295.28,215.15) and (294.1,216.33) .. (292.64,216.33) .. controls (291.18,216.33) and (290,215.15) .. (290,213.69) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (290,376.14) .. controls (290,374.68) and (291.18,373.5) .. (292.64,373.5) .. controls (294.1,373.5) and (295.28,374.68) .. (295.28,376.14) .. controls (295.28,377.59) and (294.1,378.77) .. (292.64,378.77) .. controls (291.18,378.77) and (290,377.59) .. (290,376.14) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (197.08,211.45) .. controls (197.08,209.99) and (198.26,208.81) .. (199.72,208.81) .. controls (201.18,208.81) and (202.36,209.99) .. (202.36,211.45) .. controls (202.36,212.91) and (201.18,214.09) .. (199.72,214.09) .. controls (198.26,214.09) and (197.08,212.91) .. (197.08,211.45) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (381.03,214.09) .. controls (381.03,212.63) and (382.21,211.45) .. (383.67,211.45) .. controls (385.12,211.45) and (386.31,212.63) .. (386.31,214.09) .. controls (386.31,215.55) and (385.12,216.73) .. (383.67,216.73) .. controls (382.21,216.73) and (381.03,215.55) .. (381.03,214.09) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (264.92,163.52) .. controls (264.92,162.07) and (266.1,160.89) .. (267.56,160.89) .. controls (269.01,160.89) and (270.19,162.07) .. (270.19,163.52) .. controls (270.19,164.98) and (269.01,166.16) .. (267.56,166.16) .. controls (266.1,166.16) and (264.92,164.98) .. (264.92,163.52) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (232.67,140.83) .. controls (232.67,139.37) and (233.85,138.19) .. (235.31,138.19) .. 
controls (236.76,138.19) and (237.94,139.37) .. (237.94,140.83) .. controls (237.94,142.29) and (236.76,143.47) .. (235.31,143.47) .. controls (233.85,143.47) and (232.67,142.29) .. (232.67,140.83) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (263.72,267.44) .. controls (263.72,265.98) and (264.91,264.8) .. (266.36,264.8) .. controls (267.82,264.8) and (269,265.98) .. (269,267.44) .. controls (269,268.9) and (267.82,270.08) .. (266.36,270.08) .. controls (264.91,270.08) and (263.72,268.9) .. (263.72,267.44) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (312.7,161.13) .. controls (312.7,159.68) and (313.88,158.5) .. (315.33,158.5) .. controls (316.79,158.5) and (317.97,159.68) .. (317.97,161.13) .. controls (317.97,162.59) and (316.79,163.77) .. (315.33,163.77) .. controls (313.88,163.77) and (312.7,162.59) .. (312.7,161.13) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (344.95,139.63) .. controls (344.95,138.18) and (346.13,137) .. (347.59,137) .. controls (349.04,137) and (350.22,138.18) .. (350.22,139.63) .. controls (350.22,141.09) and (349.04,142.27) .. (347.59,142.27) .. controls (346.13,142.27) and (344.95,141.09) .. (344.95,139.63) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (381.98,130.08) .. controls (381.98,128.62) and (383.16,127.44) .. (384.61,127.44) .. controls (386.07,127.44) and (387.25,128.62) .. (387.25,130.08) .. controls (387.25,131.54) and (386.07,132.72) .. (384.61,132.72) .. controls (383.16,132.72) and (381.98,131.54) .. (381.98,130.08) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (241.03,284.16) .. controls (241.03,282.71) and (242.21,281.53) .. (243.67,281.53) .. controls (245.12,281.53) and (246.31,282.71) .. (246.31,284.16) .. controls (246.31,285.62) and (245.12,286.8) .. (243.67,286.8) .. controls (242.21,286.8) and (241.03,285.62) .. 
(241.03,284.16) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (195.64,126.5) .. controls (195.64,125.04) and (196.82,123.86) .. (198.28,123.86) .. controls (199.73,123.86) and (200.92,125.04) .. (200.92,126.5) .. controls (200.92,127.95) and (199.73,129.13) .. (198.28,129.13) .. controls (196.82,129.13) and (195.64,127.95) .. (195.64,126.5) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (157.42,138.44) .. controls (157.42,136.98) and (158.6,135.8) .. (160.06,135.8) .. controls (161.51,135.8) and (162.69,136.98) .. (162.69,138.44) .. controls (162.69,139.9) and (161.51,141.08) .. (160.06,141.08) .. controls (158.6,141.08) and (157.42,139.9) .. (157.42,138.44) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (212.36,292.53) .. controls (212.36,291.07) and (213.54,289.89) .. (215,289.89) .. controls (216.46,289.89) and (217.64,291.07) .. (217.64,292.53) .. controls (217.64,293.98) and (216.46,295.16) .. (215,295.16) .. controls (213.54,295.16) and (212.36,293.98) .. (212.36,292.53) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (164.58,282.97) .. controls (164.58,281.51) and (165.77,280.33) .. (167.22,280.33) .. controls (168.68,280.33) and (169.86,281.51) .. (169.86,282.97) .. controls (169.86,284.43) and (168.68,285.61) .. (167.22,285.61) .. controls (165.77,285.61) and (164.58,284.43) .. (164.58,282.97) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (135.92,255.5) .. controls (135.92,254.04) and (137.1,252.86) .. (138.56,252.86) .. controls (140.01,252.86) and (141.19,254.04) .. (141.19,255.5) .. controls (141.19,256.95) and (140.01,258.13) .. (138.56,258.13) .. controls (137.1,258.13) and (135.92,256.95) .. (135.92,255.5) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (305.53,261.47) .. controls (305.53,260.01) and (306.71,258.83) .. (308.17,258.83) .. 
controls (309.62,258.83) and (310.81,260.01) .. (310.81,261.47) .. controls (310.81,262.93) and (309.62,264.11) .. (308.17,264.11) .. controls (306.71,264.11) and (305.53,262.93) .. (305.53,261.47) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (334.2,287.75) .. controls (334.2,286.29) and (335.38,285.11) .. (336.84,285.11) .. controls (338.29,285.11) and (339.47,286.29) .. (339.47,287.75) .. controls (339.47,289.2) and (338.29,290.39) .. (336.84,290.39) .. controls (335.38,290.39) and (334.2,289.2) .. (334.2,287.75) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (377.2,299.69) .. controls (377.2,298.24) and (378.38,297.05) .. (379.84,297.05) .. controls (381.29,297.05) and (382.47,298.24) .. (382.47,299.69) .. controls (382.47,301.15) and (381.29,302.33) .. (379.84,302.33) .. controls (378.38,302.33) and (377.2,301.15) .. (377.2,299.69) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (422.59,293.72) .. controls (422.59,292.26) and (423.77,291.08) .. (425.22,291.08) .. controls (426.68,291.08) and (427.86,292.26) .. (427.86,293.72) .. controls (427.86,295.18) and (426.68,296.36) .. (425.22,296.36) .. controls (423.77,296.36) and (422.59,295.18) .. (422.59,293.72) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (456.03,263.86) .. controls (456.03,262.4) and (457.21,261.22) .. (458.67,261.22) .. controls (460.13,261.22) and (461.31,262.4) .. (461.31,263.86) .. controls (461.31,265.31) and (460.13,266.5) .. (458.67,266.5) .. controls (457.21,266.5) and (456.03,265.31) .. (456.03,263.86) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (467.98,225.64) .. controls (467.98,224.18) and (469.16,223) .. (470.61,223) .. controls (472.07,223) and (473.25,224.18) .. (473.25,225.64) .. controls (473.25,227.09) and (472.07,228.27) .. (470.61,228.27) .. controls (469.16,228.27) and (467.98,227.09) .. 
(467.98,225.64) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (114.42,217.27) .. controls (114.42,215.82) and (115.6,214.64) .. (117.05,214.64) .. controls (118.51,214.64) and (119.69,215.82) .. (119.69,217.27) .. controls (119.69,218.73) and (118.51,219.91) .. (117.05,219.91) .. controls (115.6,219.91) and (114.42,218.73) .. (114.42,217.27) -- cycle ; \draw [draw opacity=0] (197.81,77.81) .. controls (200.67,73.6) and (205.36,70.69) .. (210.83,70.28) .. controls (220.36,69.57) and (228.65,76.72) .. (229.36,86.24) .. controls (229.52,88.32) and (229.3,90.33) .. (228.77,92.22) -- (212.12,87.53) -- cycle ; \draw (197.81,77.81) .. controls (200.67,73.6) and (205.36,70.69) .. (210.83,70.28) .. controls (220.36,69.57) and (228.65,76.72) .. (229.36,86.24) .. controls (229.52,88.32) and (229.3,90.33) .. (228.77,92.22) ; \draw (207.47,75.36) -- (196.19,80.35) -- (198.97,68.33) ; \draw [draw opacity=0] (369.76,76.62) .. controls (366.89,72.41) and (362.21,69.5) .. (356.74,69.09) .. controls (347.21,68.38) and (338.91,75.52) .. (338.2,85.05) .. controls (338.05,87.12) and (338.27,89.14) .. (338.8,91.03) -- (355.45,86.33) -- cycle ; \draw (369.76,76.62) .. controls (366.89,72.41) and (362.21,69.5) .. (356.74,69.09) .. controls (347.21,68.38) and (338.91,75.52) .. (338.2,85.05) .. controls (338.05,87.12) and (338.27,89.14) .. (338.8,91.03) ; \draw (360.1,74.16) -- (371.37,79.15) -- (368.6,67.14) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (129.95,174.27) .. controls (129.95,172.82) and (131.13,171.64) .. (132.58,171.64) .. controls (134.04,171.64) and (135.22,172.82) .. (135.22,174.27) .. controls (135.22,175.73) and (134.04,176.91) .. (132.58,176.91) .. controls (131.13,176.91) and (129.95,175.73) .. (129.95,174.27) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (422.59,149.19) .. controls (422.59,147.73) and (423.77,146.55) .. (425.22,146.55) .. 
controls (426.68,146.55) and (427.86,147.73) .. (427.86,149.19) .. controls (427.86,150.65) and (426.68,151.83) .. (425.22,151.83) .. controls (423.77,151.83) and (422.59,150.65) .. (422.59,149.19) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (458.42,187.41) .. controls (458.42,185.96) and (459.6,184.78) .. (461.06,184.78) .. controls (462.52,184.78) and (463.7,185.96) .. (463.7,187.41) .. controls (463.7,188.87) and (462.52,190.05) .. (461.06,190.05) .. controls (459.6,190.05) and (458.42,188.87) .. (458.42,187.41) -- cycle ; \draw (135.62,163.85) .. controls (144.69,166.49) and (147.69,183.63) .. (142.3,202.13) .. controls (136.92,220.63) and (125.19,233.48) .. (116.11,230.84) .. controls (107.03,228.2) and (104.04,211.06) .. (109.42,192.56) .. controls (114.81,174.06) and (126.54,161.21) .. (135.62,163.85) -- cycle ; \draw (109.28,211.55) .. controls (116.25,206.94) and (129.51,214.72) .. (138.9,228.92) .. controls (148.29,243.12) and (150.26,258.37) .. (143.28,262.98) .. controls (136.31,267.59) and (123.05,259.81) .. (113.66,245.61) .. controls (104.27,231.41) and (102.31,216.16) .. (109.28,211.55) -- cycle ; \draw (72,150.75) -- (128.09,173.52) ; \draw [shift={(129.95,174.27)}, rotate = 202.1] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (60.83,220.17) -- (112.42,217.38) ; \draw [shift={(114.42,217.27)}, rotate = 536.9100000000001] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (114.83,289.17) -- (137.4,257.13) ; \draw [shift={(138.56,255.5)}, rotate = 485.17] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. 
(10.93,3.29) ; \draw (200.42,197.42) node {$x$}; \draw (387.95,195.03) node {$y$}; \draw (294.78,359.86) node {$z$}; \draw (226.7,48.81) node {$p_{n}$}; \draw (343.75,50) node {$q_{m}$}; \draw (270.89,338.76) node {$\Sigma _{1}$}; \draw (156.22,102.26) node {$X$}; \draw (417.81,99.87) node {$Y$}; \draw (159.5,209.54) node {$\Sigma _{0}$}; \draw (207.92,334.55) node {$\tau _{1}$}; \draw (320.2,316.63) node {$\tau _{2}$}; \draw (170.62,167.51) node {$\beta $}; \draw (117.37,189.94) node {$\alpha $}; \draw (133.81,235.28) node {$\gamma $}; \draw (296,201.75) node {$V_{1}$}; \draw (268,149.75) node {$V_{2}$}; \draw (236,127.75) node {$V_{3}$}; \draw (184,119.75) node {$V_{4}$}; \draw (148,136.75) node {$V_{5}$}; \draw (60,148.75) node {$V_{6}$}; \draw (49,219.75) node {$V_{7}$}; \draw (101,289.75) node {$V_{8}$}; \draw (157,292.75) node {$V_{9}$}; \draw (215,278.75) node {$V_{10}$}; \draw (238,270.75) node {$V_{11}$}; \draw (262,250.75) node {$V_{12}$}; \end{tikzpicture} \caption{$S_{0,n+m+2}$ for $n=m=12$} \end{figure} \begin{thm}\label{main} For any $k=1,2,3,\dots$, there exists $B_k$ such that if \[h_k=T_{\alpha}^{u_1}T_{\gamma}^{v_1}\dots T_{\alpha}^{u_{k-1}}T_{\gamma}^{v_{k-1}}T_{\alpha}^{u_k}T_{\beta}^{v_k}\] where $u_i,v_i\geq B_k$ for all $i$, then for $h_kf^3_{n,m}: S_{0,n+m+2}\to S_{0,n+m+2}$, we have \begin{enumerate} \item[(1)] $h_kf^3_{n,m}$ is pseudo-Anosov. \item[(2)] $\vol(M_{h_kf^3_{n,m}})\geq3kV_8$. \item[(3)] there exists $N=N_k$ such that if $n=m>N$, then \[ \log\lambda (h_kf^3_{n,n})\leq 54\frac{\log(2n+2)}{2n+2}. \] \end{enumerate} \end{thm} Assuming this theorem, we prove the Main Theorem from the introduction. \begin{mthm*} For any fixed $g\geq 2$ and $L\geq 162g$, there exists a sequence $\{M_{f_i}\}_{i=1}^{\infty}$, with $f_i\in \Psi_{g,L}$, so that $\displaystyle{\lim_{i \to \infty} \vol(M_{f_i})= \infty}$.
\end{mthm*} \begin{proof} For any $g\geq 2$, \cite{tsai} gives a construction of an appropriate cover $\pi: S_{g,s}\rightarrow S_{0,n+m+2}$ with $s=(2g+1)(n+m+1)+1$ such that \[f_{n,m}: S_{0,n+m+2}\rightarrow S_{0,n+m+2}\] lifts to $S_{g,s}$. Moreover, it is clear from her construction that each of $\alpha, \beta, \gamma$ lifts, so $h_k$ lifts. Let $\widetilde{f_k}: S_{g,s}\rightarrow S_{g,s}$ be the lift of $h_k\circ f^3_{n,m}$. Then $\log(\lambda(\widetilde{f_k}))=\log(\lambda(h_kf^3_{n,m}))$. By Theorem \ref{main}, for $n=m>N_k$ sufficiently large, \[ \log(\lambda(\widetilde{f_k})) \leq 54\frac{\log(n+m+2)}{n+m+2}<54\frac{\log(s)}{\frac{s-1}{2g+1}+1}<162g\frac{\log s}{s}. \] Furthermore, $\vol(M_{\widetilde{f_k}})=\deg(\pi)\vol(M_{h_kf^3_{n,m}}) \geq 3kV_8\deg(\pi)$. Therefore, each $M_{\widetilde{f_k}}$ belongs to the family considered in the theorem, and $\vol(M_{\widetilde{f_k}})\rightarrow \infty$ as $k\to\infty$. \end{proof} \begin{col*} For any $g\geq 2$, there exists $L$ such that there is no finite set $\Omega$ of 3-manifolds so that all $M_f$, $f\in \Psi_{g,L}$, are obtained by Dehn filling on some $M \in \Omega$. \end{col*} \begin{proof} Let $L\geq 162g$. If such a finite set $\Omega$ existed, then by Theorems \ref{gromov} and \ref{thurston1}, \[ \vol(M_f)\leq v_3\max_{M\in \Omega}\{||[M]||\}<\infty, \] which contradicts the Main Theorem. \end{proof} \section{Proof of Theorem \ref{main}} Now fix some $n,m>6$ and let $f=f^3_{n,m}$. Let $M_f$ be the mapping torus. The proof of the following lemma is almost identical to the proof of \cite[Theorem B]{long}. \begin{lem} $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ is hyperbolic. \end{lem} \begin{proof} Let \[ \Sigma=S_{0,n+m+2}\times \{1/2\}, \qquad \Sigma'=\Sigma\backslash ((\alpha \cup \beta) \times \{1/2\})\subset M_f. \] Let $T_0\subset M_f$ be an embedded incompressible torus. After applying an isotopy, we can make every component of $T_0\backslash \Sigma'$ an annulus.
Any annulus component either misses no fiber or has boundary components parallel to $\alpha$ or $\beta$, lying on opposite sides of some small neighborhood of $\alpha$ or $\beta$. Since $\alpha$ and $\beta$ bound different numbers of punctures, a component parallel to $\alpha$ can never connect to a component parallel to $\beta$. Also, $f^{k_1}(\alpha)$ will never close up with $f^{k_2}(\alpha)$ if $k_1\neq k_2$, since $f$ is pseudo-Anosov. By Thurston's hyperbolization theorem (see \cite{thurston1,morgan,otal}), $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ is hyperbolic. \end{proof} For any $k$, let $L_k \subset M_f$ be \[ L_k=\alpha \times \left\{\frac{2}{4k}, \frac{4}{4k}, \dots, \frac{2k+2}{4k}\right\} \cup \gamma \times \left\{\frac{3}{4k}, \frac{5}{4k}, \dots, \frac{2k+1}{4k}\right\} \cup \beta \times \left\{\frac{1}{4k}\right\}. \] Let $N(L_k)$ denote a tubular neighborhood of $L_k$ and let $M_k=M_f\backslash N(L_k)$. We can order the boundary components of $M_k$ as \[\partial M_k=\partial_1M_k\sqcup \dots \sqcup \partial_{2k+2}M_k ,\] where \[ \begin{cases} \partial_{2i}M_k=\alpha \times \{ \frac{2i}{4k}\} & \text{for any } 1\leq i \leq k+1 \\ \partial_{2i+1}M_k=\gamma \times \{ \frac{2i+1}{4k}\} & \text{for any } 1\leq i \leq k\\ \partial_1M_k=\beta \times \{ \frac{1}{4k}\}. & \end{cases} \] \begin{lem} The interior of $M_f\backslash N(L_k)$ is hyperbolic and \[\vol(\mathrm{int} (M_f\backslash N(L_k)))\geq 4kV_8.\] \end{lem} \begin{proof} Glue $k$ copies of $A$, top to bottom, to get \[A_k\cong (S_{0,4}\times [0,1])\backslash \left(\alpha \times \left\{\frac{0}{2k}, \frac{2}{2k}, \dots, \frac{2k}{2k}\right\} \cup \gamma \times \left\{\frac{1}{2k}, \frac{3}{2k}, \dots, \frac{2k-1}{2k}\right\}\right),\] with the $i$-th copy identified with \[ \left(S_{0,4}\times \left[\frac{2i-2}{2k},\frac{2i}{2k}\right]\right)\backslash\left(\alpha \times \left\{\frac{2i-2}{2k}, \frac{2i}{2k}\right\}\cup \gamma \times\left\{\frac{2i-1}{2k}\right\}\right).
\] By Theorem \ref{adams}, $A_k$ has four totally geodesic thrice-punctured sphere boundary components, and $\vol(A_k)=4kV_8$. Cut $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ along the two thrice-punctured spheres, i.e. the two regions shown in Figure 4. The two thrice-punctured spheres can be assumed to be totally geodesic by Corollary 2. So the cut-open manifold has four totally geodesic thrice-punctured sphere boundary components. Now glue the top boundary of $A_k$ to the top of the cut by an isometry, with the marked curves and colored faces glued correspondingly. Then apply the same to the bottom boundary. After applying an isotopy to adjust the height, we see that the result is homeomorphic to $M_f\backslash N(L_k)$. Moreover, $A_k$ is isometrically embedded in $M_f\backslash N(L_k)$. Since $\vol(A_k)\geq 4kV_8$, we have $\vol(M_f\backslash N(L_k))\geq 4kV_8$. \begin{figure}[!ht]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw [fill={rgb, 255:red, 249; green, 46; blue, 46 } ,fill opacity=0.64 ] (80.45,147.49) .. controls (80.45,134.3) and (105.33,123.61) .. (136.02,123.61) .. controls (166.71,123.61) and (191.59,134.3) .. (191.59,147.49) .. controls (191.59,160.68) and (166.71,171.37) .. (136.02,171.37) .. controls (105.33,171.37) and (80.45,160.68) .. (80.45,147.49) -- cycle ; \draw [fill={rgb, 255:red, 126; green, 211; blue, 33 } ,fill opacity=1 ] (93.63,147.39) .. controls (93.63,141.54) and (104.67,136.8) .. (118.29,136.8) .. controls (131.91,136.8) and (142.96,141.54) .. (142.96,147.39) .. controls (142.96,153.25) and (131.91,157.99) .. (118.29,157.99) .. controls (104.67,157.99) and (93.63,153.25) .. (93.63,147.39) -- cycle ; \draw (290.85,56.44) -- (290.12,211.64) .. controls (290.01,234.16) and (234.8,252.42) .. (166.79,252.42) .. controls (98.79,252.42) and (43.75,234.16) .. (43.85,211.64) -- (44.59,56.44) .. controls (44.69,33.92) and (99.91,15.67) .. 
(167.91,15.67) .. controls (235.91,15.67) and (290.96,33.92) .. (290.85,56.44) .. controls (290.74,78.96) and (235.53,97.21) .. (167.53,97.21) .. controls (99.52,97.21) and (44.48,78.96) .. (44.59,56.44) ; \draw [draw opacity=0] (290.68,150.84) .. controls (285.43,171.31) and (232.44,187.4) .. (167.85,187.4) .. controls (99.77,187.4) and (44.59,169.53) .. (44.59,147.49) .. controls (44.59,147.23) and (44.59,146.97) .. (44.61,146.72) -- (167.85,147.49) -- cycle ; \draw (290.68,150.84) .. controls (285.43,171.31) and (232.44,187.4) .. (167.85,187.4) .. controls (99.77,187.4) and (44.59,169.53) .. (44.59,147.49) .. controls (44.59,147.23) and (44.59,146.97) .. (44.61,146.72) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (44.61,146.72) .. controls (45.74,124.99) and (100.49,107.49) .. (167.85,107.49) .. controls (233.77,107.49) and (287.6,124.24) .. (290.95,145.32) -- (167.85,147.4) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (44.61,146.72) .. controls (45.74,124.99) and (100.49,107.49) .. (167.85,107.49) .. controls (233.77,107.49) and (287.6,124.24) .. (290.95,145.32) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (97.45,147.43) .. controls (97.45,146.2) and (98.45,145.2) .. (99.69,145.2) .. controls (100.92,145.2) and (101.92,146.2) .. (101.92,147.43) .. controls (101.92,148.66) and (100.92,149.66) .. (99.69,149.66) .. controls (98.45,149.66) and (97.45,148.66) .. (97.45,147.43) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (133.79,147.49) .. controls (133.79,146.26) and (134.79,145.26) .. (136.02,145.26) .. controls (137.25,145.26) and (138.25,146.26) .. (138.25,147.49) .. controls (138.25,148.72) and (137.25,149.72) .. (136.02,149.72) .. controls (134.79,149.72) and (133.79,148.72) .. (133.79,147.49) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (167.85,147.49) .. controls (167.85,146.26) and (168.84,145.26) .. (170.08,145.26) .. 
controls (171.31,145.26) and (172.31,146.26) .. (172.31,147.49) .. controls (172.31,148.72) and (171.31,149.72) .. (170.08,149.72) .. controls (168.84,149.72) and (167.85,148.72) .. (167.85,147.49) -- cycle ; \draw (397.82,253.79) .. controls (397.82,246.97) and (413.42,241.45) .. (432.66,241.45) .. controls (451.9,241.45) and (467.5,246.97) .. (467.5,253.79) .. controls (467.5,260.61) and (451.9,266.14) .. (432.66,266.14) .. controls (413.42,266.14) and (397.82,260.61) .. (397.82,253.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (410.33,242.52) .. controls (410.33,237.86) and (414.11,234.08) .. (418.77,234.08) .. controls (423.43,234.08) and (427.21,237.86) .. (427.21,242.52) .. controls (427.21,247.18) and (423.43,250.96) .. (418.77,250.96) .. controls (414.11,250.96) and (410.33,247.18) .. (410.33,242.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (438.91,242.52) .. controls (438.91,237.86) and (442.69,234.08) .. (447.36,234.08) .. controls (452.02,234.08) and (455.8,237.86) .. (455.8,242.52) .. controls (455.8,247.18) and (452.02,250.96) .. (447.36,250.96) .. controls (442.69,250.96) and (438.91,247.18) .. (438.91,242.52) -- cycle ; \draw (426.82,218.79) .. controls (426.82,211.97) and (442.42,206.45) .. (461.66,206.45) .. controls (480.9,206.45) and (496.5,211.97) .. (496.5,218.79) .. controls (496.5,225.61) and (480.9,231.14) .. (461.66,231.14) .. controls (442.42,231.14) and (426.82,225.61) .. (426.82,218.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (439.33,207.52) .. controls (439.33,202.86) and (443.11,199.08) .. (447.77,199.08) .. controls (452.43,199.08) and (456.21,202.86) .. (456.21,207.52) .. 
controls (456.21,212.18) and (452.43,215.96) .. (447.77,215.96) .. controls (443.11,215.96) and (439.33,212.18) .. (439.33,207.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (467.91,207.52) .. controls (467.91,202.86) and (471.69,199.08) .. (476.36,199.08) .. controls (481.02,199.08) and (484.8,202.86) .. (484.8,207.52) .. controls (484.8,212.18) and (481.02,215.96) .. (476.36,215.96) .. controls (471.69,215.96) and (467.91,212.18) .. (467.91,207.52) -- cycle ; \draw (397.82,184.79) .. controls (397.82,177.97) and (413.42,172.45) .. (432.66,172.45) .. controls (451.9,172.45) and (467.5,177.97) .. (467.5,184.79) .. controls (467.5,191.61) and (451.9,197.14) .. (432.66,197.14) .. controls (413.42,197.14) and (397.82,191.61) .. (397.82,184.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (410.33,173.52) .. controls (410.33,168.86) and (414.11,165.08) .. (418.77,165.08) .. controls (423.43,165.08) and (427.21,168.86) .. (427.21,173.52) .. controls (427.21,178.18) and (423.43,181.96) .. (418.77,181.96) .. controls (414.11,181.96) and (410.33,178.18) .. (410.33,173.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (438.91,173.52) .. controls (438.91,168.86) and (442.69,165.08) .. (447.36,165.08) .. controls (452.02,165.08) and (455.8,168.86) .. (455.8,173.52) .. controls (455.8,178.18) and (452.02,181.96) .. (447.36,181.96) .. controls (442.69,181.96) and (438.91,178.18) .. (438.91,173.52) -- cycle ; \draw (428.82,151.79) .. controls (428.82,144.97) and (444.42,139.45) .. (463.66,139.45) .. controls (482.9,139.45) and (498.5,144.97) .. (498.5,151.79) .. controls (498.5,158.61) and (482.9,164.14) .. (463.66,164.14) .. 
controls (444.42,164.14) and (428.82,158.61) .. (428.82,151.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (441.33,140.52) .. controls (441.33,135.86) and (445.11,132.08) .. (449.77,132.08) .. controls (454.43,132.08) and (458.21,135.86) .. (458.21,140.52) .. controls (458.21,145.18) and (454.43,148.96) .. (449.77,148.96) .. controls (445.11,148.96) and (441.33,145.18) .. (441.33,140.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (469.91,140.52) .. controls (469.91,135.86) and (473.69,132.08) .. (478.36,132.08) .. controls (483.02,132.08) and (486.8,135.86) .. (486.8,140.52) .. controls (486.8,145.18) and (483.02,148.96) .. (478.36,148.96) .. controls (473.69,148.96) and (469.91,145.18) .. (469.91,140.52) -- cycle ; \draw (530.27,35.49) -- (529.26,248.88) .. controls (529.19,264) and (493.56,276.25) .. (449.67,276.25) .. controls (405.78,276.25) and (370.27,264) .. (370.34,248.88) -- (371.34,35.49) .. controls (371.41,20.37) and (407.05,8.12) .. (450.94,8.12) .. controls (494.82,8.12) and (530.34,20.37) .. (530.27,35.49) .. controls (530.2,50.61) and (494.56,62.86) .. (450.68,62.86) .. controls (406.79,62.86) and (371.27,50.61) .. (371.34,35.49) ; \draw (399.06,28.76) .. controls (399.06,21.95) and (414.66,16.42) .. (433.9,16.42) .. controls (453.15,16.42) and (468.74,21.95) .. (468.74,28.76) .. controls (468.74,35.58) and (453.15,41.11) .. (433.9,41.11) .. controls (414.66,41.11) and (399.06,35.58) .. (399.06,28.76) -- cycle ; \draw (397.82,116.79) .. controls (397.82,109.97) and (413.42,104.45) .. (432.66,104.45) .. controls (451.9,104.45) and (467.5,109.97) .. (467.5,116.79) .. controls (467.5,123.61) and (451.9,129.14) .. (432.66,129.14) .. controls (413.42,129.14) and (397.82,123.61) .. 
(397.82,116.79) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (410.33,105.52) .. controls (410.33,100.86) and (414.11,97.08) .. (418.77,97.08) .. controls (423.43,97.08) and (427.21,100.86) .. (427.21,105.52) .. controls (427.21,110.18) and (423.43,113.96) .. (418.77,113.96) .. controls (414.11,113.96) and (410.33,110.18) .. (410.33,105.52) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (438.91,105.52) .. controls (438.91,100.86) and (442.69,97.08) .. (447.36,97.08) .. controls (452.02,97.08) and (455.8,100.86) .. (455.8,105.52) .. controls (455.8,110.18) and (452.02,113.96) .. (447.36,113.96) .. controls (442.69,113.96) and (438.91,110.18) .. (438.91,105.52) -- cycle ; \draw (431.38,80.97) .. controls (431.38,74.15) and (446.98,68.62) .. (466.22,68.62) .. controls (485.46,68.62) and (501.06,74.15) .. (501.06,80.97) .. controls (501.06,87.78) and (485.46,93.31) .. (466.22,93.31) .. controls (446.98,93.31) and (431.38,87.78) .. (431.38,80.97) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (440.16,72.8) .. controls (440.16,68.14) and (443.94,64.36) .. (448.6,64.36) .. controls (453.26,64.36) and (457.04,68.14) .. (457.04,72.8) .. controls (457.04,77.46) and (453.26,81.24) .. (448.6,81.24) .. controls (443.94,81.24) and (440.16,77.46) .. (440.16,72.8) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (469.99,70.32) .. controls (469.99,65.65) and (473.77,61.88) .. (478.43,61.88) .. controls (483.09,61.88) and (486.87,65.65) .. (486.87,70.32) .. controls (486.87,74.98) and (483.09,78.76) .. (478.43,78.76) .. controls (473.77,78.76) and (469.99,74.98) .. 
(469.99,70.32) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (428.54,28.19) -- (424.86,257.45) .. controls (424.85,258.44) and (422.14,259.21) .. (418.82,259.15) .. controls (415.5,259.1) and (412.82,258.25) .. (412.83,257.25) -- (416.51,28) .. controls (416.53,27) and (419.24,26.24) .. (422.56,26.29) .. controls (425.88,26.34) and (428.56,27.19) .. (428.54,28.19) .. controls (428.53,29.19) and (425.82,29.95) .. (422.5,29.9) .. controls (419.18,29.85) and (416.5,28.99) .. (416.51,28) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (484.56,29.14) -- (480.86,259.71) .. controls (480.84,260.56) and (478.54,261.21) .. (475.71,261.17) .. controls (472.88,261.12) and (470.6,260.4) .. (470.61,259.55) -- (474.31,28.97) .. controls (474.33,28.12) and (476.63,27.47) .. (479.46,27.52) .. controls (482.29,27.56) and (484.57,28.29) .. (484.56,29.14) .. controls (484.55,29.99) and (482.24,30.64) .. (479.41,30.59) .. controls (476.58,30.55) and (474.3,29.82) .. (474.31,28.97) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (455.58,28.04) -- (451.86,259.59) .. controls (451.85,260.5) and (449.35,261.21) .. (446.28,261.16) .. controls (443.22,261.11) and (440.75,260.33) .. (440.76,259.41) -- (444.48,27.86) .. controls (444.5,26.94) and (446.99,26.23) .. (450.06,26.28) .. controls (453.12,26.33) and (455.59,27.12) .. (455.58,28.04) .. controls (455.56,28.96) and (453.07,29.66) .. (450,29.61) .. controls (446.94,29.56) and (444.47,28.78) .. 
(444.48,27.86) ; \draw (147.33,132.36) node {$\alpha $}; \draw (204.96,138.38) node {$\beta $}; \draw (370.85,32.71) node {$\alpha \times \{0\}$}; \draw (371.19,112.48) node {$\alpha \times \left\{\frac{2}{6}\right\}$}; \draw (369.87,183.08) node {$\alpha \times \left\{\frac{4}{6}\right\}$}; \draw (530.66,77.95) node {$\gamma \times \left\{\frac{1}{6}\right\}$}; \draw (532.66,148.95) node {$\gamma \times \left\{\frac{3}{6}\right\}$}; \draw (530.66,217.95) node {$\gamma \times \left\{\frac{5}{6}\right\}$}; \draw (368.06,242.59) node {$\alpha \times \{1\}$}; \end{tikzpicture} \caption{Cut and glue $A_k$ to $M_f\backslash ((\alpha \cup \beta) \times \{1/2\})$ when $k=3$} \end{figure} \end{proof} \begin{prop} Given $k$, there exists $B_k$ such that if $u_i,v_i>B_k$, then $h_kf$ is pseudo-Anosov and $\vol(M_{h_kf})\geq 3kV_8$. \end{prop} \begin{proof} Let $M=M_f\backslash N(L_k)$ and let $\beta=\{\frac{1}{v_k}, \frac{1}{u_k},\dots, \frac{1}{v_1},\frac{1}{u_1} \}$. Then by Theorem \ref{thurston}, $M_{h_kf}=M_\beta$, and when $u_i,v_i$ are large enough, the volume is approximately equal to $\vol(M_f\backslash N(L_k))$. In particular, if $u_i,v_i$ are large enough, \[\vol(int(M_{h_kf}))\geq \vol(int(M_f\backslash N(L_k)))-kV_8 \geq 3kV_8\] by Lemma 2. \end{proof} \begin{lem} For $n,m>3$, $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n+3,m}}\cong M_{h_kf^3_{n,m+3}}$. \end{lem} \begin{proof} By Proposition 1, $\interior{M}=M_{h_kf}=M_{h_kf^3_{n,m}}$ is hyperbolic. Let $\Sigma_1$ be the subsurface in $S_{0,n+m+2}$ shown in Figure 3, containing 3 punctures, and let $\tau_1$ and $\tau_2$ denote the two components of $\partial\Sigma_1$, where $\tau_1$ and $\tau_2$ are two arcs connecting $x$ and $z$, with $\tau_2=f^3_{n,m}(\tau_1)$. Construct a surface $\Sigma_2\subset M$ as follows. 
First, define a map \[\eta=(\eta_1,\eta_2):\Sigma_1\rightarrow S\times[0,1]\] so that $\eta(\Sigma_1)\cap S\times\{0\}=\tau_2\times\{0\}$, $\eta(\Sigma_1)\cap S\times\{1\}=\tau_1\times\{1\}$ and $\eta_1$ is the inclusion of $\Sigma_1$ into $S$. Since $f(\tau_1)=\tau_2$, composing $\eta$ with the projection $p:S\times[0,1] \rightarrow M_f$ defines an embedding of $\Sigma_1/(\tau_1\isEquivTo{f} \tau_2)$, that is, of $\Sigma_1$ with $\tau_1$ glued to $\tau_2$ by $f$. Since $\eta_1$ is the inclusion, $\Sigma_2=p\circ \eta(\Sigma_1/\tau_1\isEquivTo{f} \tau_2)$ is transverse to the suspension flow. By Theorem \ref{fried}, $[\Sigma_2] \in \overline{F\cdot \mathbb{R}^+}$. We now define a surface $S\textprime$ such that $[S\textprime]=[S]+[\Sigma_2]$ in $H^1(M_f)$, as follows. Let $S_{\tau_2}$ denote the surface obtained by cutting $S$ along $\tau_2$. Then $S_{\tau_2}$ has two boundary components, denoted $\tau^+_2$ and $\tau^-_2$. Since $\Sigma_2=p\circ\eta(\Sigma_1)$ and $p\circ\eta(\tau_1)=p\circ\eta(\tau_2)=\tau_2 \subset S \subset M_f$, we can construct $S\textprime$ in $M_f$ by gluing $\tau^+_2$ to $\eta(\tau_2)$ and $\tau^-_2$ to $\eta(\tau_1)$, perturbed slightly to be embedded. Then $[S\textprime]=[S]+[\Sigma_2]$ and $S\textprime \pitchfork \Psi$. So $S\textprime$ is a fiber representing a class in $F\cdot \mathbb{R}^+ \subset H^1(M_f)$. By Theorem \ref{fried}, the first return map of $\Psi$ is the monodromy $f\textprime:S\textprime \rightarrow S\textprime$, given by \[ f\textprime(x)= \begin{cases} \eta(x) & \text{if } x \in \Sigma_1 \\ f\circ\eta^{-1}(x) & \text{if } x \in \eta(\Sigma_1)\\ f(x) & \text{otherwise.} \end{cases} \] See Figure 5. As indicated by Figure 6, $S\textprime \cong S_{0,n+m+5}$, and up to conjugation, $f\textprime=f^3_{n+3,m}$. Therefore, $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n+3,m}}$. Similarly, if we pick another subsurface in $Y$ homeomorphic to $\Sigma_0$, one can show $M_{h_kf^3_{n,m}}\cong M_{h_kf^3_{n,m+3}}$. 
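As a quick consistency check on the homeomorphism type of $S\textprime$ (a heuristic puncture count only, not part of the argument): $S_{\tau_2}$ keeps all $n+m+2$ punctures of $S$, since we only cut along the arc $\tau_2$, while the glued-in copy of $\Sigma_1$ contributes its $3$ punctures. Hence

```latex
% Heuristic puncture count for S', assuming the gluing described above:
\[
\#\{\text{punctures of } S\textprime\}
  \;=\; \underbrace{(n+m+2)}_{\text{from } S}
  \;+\; \underbrace{3}_{\text{from } \Sigma_1}
  \;=\; n+m+5,
\]
```

in agreement with the identification $S\textprime \cong S_{0,n+m+5}$ read off from Figure 6.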
\begin{figure}[H]\centering \tikzset{every picture/.style={line width=0.75pt}} \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.75,xscale=0.75] \draw (280.83,108.86) -- (280.83,238.97) .. controls (280.83,262.28) and (231.85,281.17) .. (171.42,281.17) .. controls (110.99,281.17) and (62,262.28) .. (62,238.97) -- (62,108.86) .. controls (62,85.56) and (110.99,66.67) .. (171.42,66.67) .. controls (231.85,66.67) and (280.83,85.56) .. (280.83,108.86) .. controls (280.83,132.16) and (231.85,151.05) .. (171.42,151.05) .. controls (110.99,151.05) and (62,132.16) .. (62,108.86) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.05,110.12) .. controls (145.05,109.32) and (145.7,108.67) .. (146.5,108.67) .. controls (147.3,108.67) and (147.94,109.32) .. (147.94,110.12) .. controls (147.94,110.91) and (147.3,111.56) .. (146.5,111.56) .. controls (145.7,111.56) and (145.05,110.91) .. (145.05,110.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (166.05,121.12) .. controls (166.05,120.32) and (166.7,119.67) .. (167.5,119.67) .. controls (168.3,119.67) and (168.94,120.32) .. (168.94,121.12) .. controls (168.94,121.91) and (168.3,122.56) .. (167.5,122.56) .. controls (166.7,122.56) and (166.05,121.91) .. (166.05,121.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (177.05,105.12) .. controls (177.05,104.32) and (177.7,103.67) .. (178.5,103.67) .. controls (179.3,103.67) and (179.94,104.32) .. (179.94,105.12) .. controls (179.94,105.91) and (179.3,106.56) .. (178.5,106.56) .. controls (177.7,106.56) and (177.05,105.91) .. (177.05,105.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (205.05,110.12) .. controls (205.05,109.32) and (205.7,108.67) .. (206.5,108.67) .. controls (207.3,108.67) and (207.94,109.32) .. (207.94,110.12) .. controls (207.94,110.91) and (207.3,111.56) .. (206.5,111.56) .. 
controls (205.7,111.56) and (205.05,110.91) .. (205.05,110.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (162.05,92.12) .. controls (162.05,91.32) and (162.7,90.67) .. (163.5,90.67) .. controls (164.3,90.67) and (164.94,91.32) .. (164.94,92.12) .. controls (164.94,92.91) and (164.3,93.56) .. (163.5,93.56) .. controls (162.7,93.56) and (162.05,92.91) .. (162.05,92.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (135.05,83.12) .. controls (135.05,82.32) and (135.7,81.67) .. (136.5,81.67) .. controls (137.3,81.67) and (137.94,82.32) .. (137.94,83.12) .. controls (137.94,83.91) and (137.3,84.56) .. (136.5,84.56) .. controls (135.7,84.56) and (135.05,83.91) .. (135.05,83.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (121.05,104.12) .. controls (121.05,103.32) and (121.7,102.67) .. (122.5,102.67) .. controls (123.3,102.67) and (123.94,103.32) .. (123.94,104.12) .. controls (123.94,104.91) and (123.3,105.56) .. (122.5,105.56) .. controls (121.7,105.56) and (121.05,104.91) .. (121.05,104.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (153.05,132.12) .. controls (153.05,131.32) and (153.7,130.67) .. (154.5,130.67) .. controls (155.3,130.67) and (155.94,131.32) .. (155.94,132.12) .. controls (155.94,132.91) and (155.3,133.56) .. (154.5,133.56) .. controls (153.7,133.56) and (153.05,132.91) .. (153.05,132.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (199.05,85.12) .. controls (199.05,84.32) and (199.7,83.67) .. (200.5,83.67) .. controls (201.3,83.67) and (201.94,84.32) .. (201.94,85.12) .. controls (201.94,85.91) and (201.3,86.56) .. (200.5,86.56) .. controls (199.7,86.56) and (199.05,85.91) .. (199.05,85.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (225.05,92.12) .. controls (225.05,91.32) and (225.7,90.67) .. (226.5,90.67) .. 
controls (227.3,90.67) and (227.94,91.32) .. (227.94,92.12) .. controls (227.94,92.91) and (227.3,93.56) .. (226.5,93.56) .. controls (225.7,93.56) and (225.05,92.91) .. (225.05,92.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (232.05,113.12) .. controls (232.05,112.32) and (232.7,111.67) .. (233.5,111.67) .. controls (234.3,111.67) and (234.94,112.32) .. (234.94,113.12) .. controls (234.94,113.91) and (234.3,114.56) .. (233.5,114.56) .. controls (232.7,114.56) and (232.05,113.91) .. (232.05,113.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (159.43,127.66) .. controls (159.43,126.86) and (160.07,126.22) .. (160.87,126.22) .. controls (161.67,126.22) and (162.32,126.86) .. (162.32,127.66) .. controls (162.32,128.46) and (161.67,129.11) .. (160.87,129.11) .. controls (160.07,129.11) and (159.43,128.46) .. (159.43,127.66) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (220.05,130.12) .. controls (220.05,129.32) and (220.7,128.67) .. (221.5,128.67) .. controls (222.3,128.67) and (222.94,129.32) .. (222.94,130.12) .. controls (222.94,130.91) and (222.3,131.56) .. (221.5,131.56) .. controls (220.7,131.56) and (220.05,130.91) .. (220.05,130.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (191.05,123.12) .. controls (191.05,122.32) and (191.7,121.67) .. (192.5,121.67) .. controls (193.3,121.67) and (193.94,122.32) .. (193.94,123.12) .. controls (193.94,123.91) and (193.3,124.56) .. (192.5,124.56) .. controls (191.7,124.56) and (191.05,123.91) .. (191.05,123.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (206.05,134.12) .. controls (206.05,133.32) and (206.7,132.67) .. (207.5,132.67) .. controls (208.3,132.67) and (208.94,133.32) .. (208.94,134.12) .. controls (208.94,134.91) and (208.3,135.56) .. (207.5,135.56) .. controls (206.7,135.56) and (206.05,134.91) .. 
(206.05,134.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (176.05,140.12) .. controls (176.05,139.32) and (176.7,138.67) .. (177.5,138.67) .. controls (178.3,138.67) and (178.94,139.32) .. (178.94,140.12) .. controls (178.94,140.91) and (178.3,141.56) .. (177.5,141.56) .. controls (176.7,141.56) and (176.05,140.91) .. (176.05,140.12) -- cycle ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (62,109.18) .. controls (62,109.08) and (62,108.97) .. (62,108.86) .. controls (62,85.56) and (110.99,66.67) .. (171.42,66.67) .. controls (231.25,66.67) and (279.87,85.19) .. (280.82,108.17) -- (171.42,108.86) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (62,109.18) .. controls (62,109.08) and (62,108.97) .. (62,108.86) .. controls (62,85.56) and (110.99,66.67) .. (171.42,66.67) .. controls (231.25,66.67) and (279.87,85.19) .. (280.82,108.17) ; \draw [draw opacity=0][dash pattern={on 4.5pt off 4.5pt}] (62.01,238.15) .. controls (62.72,214.92) and (111.43,196.17) .. (171.42,196.17) .. controls (227.33,196.17) and (273.45,212.46) .. (280.03,233.49) -- (171.42,238.67) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (62.01,238.15) .. controls (62.72,214.92) and (111.43,196.17) .. (171.42,196.17) .. controls (227.33,196.17) and (273.45,212.46) .. (280.03,233.49) ; \draw [draw opacity=0] (147,239.62) .. controls (154.31,232.37) and (166.88,232.87) .. (175.4,240.87) .. controls (184.08,249.02) and (185.25,261.88) .. (178.02,269.59) .. controls (177.85,269.77) and (177.67,269.94) .. (177.5,270.12) -- (162.29,254.83) -- cycle ; \draw (147,239.62) .. controls (154.31,232.37) and (166.88,232.87) .. (175.4,240.87) .. controls (184.08,249.02) and (185.25,261.88) .. (178.02,269.59) .. controls (177.85,269.77) and (177.67,269.94) .. (177.5,270.12) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (145.05,240.12) .. controls (145.05,239.32) and (145.7,238.67) .. (146.5,238.67) .. 
controls (147.3,238.67) and (147.94,239.32) .. (147.94,240.12) .. controls (147.94,240.91) and (147.3,241.56) .. (146.5,241.56) .. controls (145.7,241.56) and (145.05,240.91) .. (145.05,240.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (166.05,251.12) .. controls (166.05,250.32) and (166.7,249.67) .. (167.5,249.67) .. controls (168.3,249.67) and (168.94,250.32) .. (168.94,251.12) .. controls (168.94,251.91) and (168.3,252.56) .. (167.5,252.56) .. controls (166.7,252.56) and (166.05,251.91) .. (166.05,251.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (177.05,235.12) .. controls (177.05,234.32) and (177.7,233.67) .. (178.5,233.67) .. controls (179.3,233.67) and (179.94,234.32) .. (179.94,235.12) .. controls (179.94,235.91) and (179.3,236.56) .. (178.5,236.56) .. controls (177.7,236.56) and (177.05,235.91) .. (177.05,235.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (205.05,240.12) .. controls (205.05,239.32) and (205.7,238.67) .. (206.5,238.67) .. controls (207.3,238.67) and (207.94,239.32) .. (207.94,240.12) .. controls (207.94,240.91) and (207.3,241.56) .. (206.5,241.56) .. controls (205.7,241.56) and (205.05,240.91) .. (205.05,240.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (162.05,222.12) .. controls (162.05,221.32) and (162.7,220.67) .. (163.5,220.67) .. controls (164.3,220.67) and (164.94,221.32) .. (164.94,222.12) .. controls (164.94,222.91) and (164.3,223.56) .. (163.5,223.56) .. controls (162.7,223.56) and (162.05,222.91) .. (162.05,222.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (135.05,213.12) .. controls (135.05,212.32) and (135.7,211.67) .. (136.5,211.67) .. controls (137.3,211.67) and (137.94,212.32) .. (137.94,213.12) .. controls (137.94,213.91) and (137.3,214.56) .. (136.5,214.56) .. controls (135.7,214.56) and (135.05,213.91) .. 
(135.05,213.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (121.05,234.12) .. controls (121.05,233.32) and (121.7,232.67) .. (122.5,232.67) .. controls (123.3,232.67) and (123.94,233.32) .. (123.94,234.12) .. controls (123.94,234.91) and (123.3,235.56) .. (122.5,235.56) .. controls (121.7,235.56) and (121.05,234.91) .. (121.05,234.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (153.05,262.12) .. controls (153.05,261.32) and (153.7,260.67) .. (154.5,260.67) .. controls (155.3,260.67) and (155.94,261.32) .. (155.94,262.12) .. controls (155.94,262.91) and (155.3,263.56) .. (154.5,263.56) .. controls (153.7,263.56) and (153.05,262.91) .. (153.05,262.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (199.05,215.12) .. controls (199.05,214.32) and (199.7,213.67) .. (200.5,213.67) .. controls (201.3,213.67) and (201.94,214.32) .. (201.94,215.12) .. controls (201.94,215.91) and (201.3,216.56) .. (200.5,216.56) .. controls (199.7,216.56) and (199.05,215.91) .. (199.05,215.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (225.05,222.12) .. controls (225.05,221.32) and (225.7,220.67) .. (226.5,220.67) .. controls (227.3,220.67) and (227.94,221.32) .. (227.94,222.12) .. controls (227.94,222.91) and (227.3,223.56) .. (226.5,223.56) .. controls (225.7,223.56) and (225.05,222.91) .. (225.05,222.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (232.05,243.12) .. controls (232.05,242.32) and (232.7,241.67) .. (233.5,241.67) .. controls (234.3,241.67) and (234.94,242.32) .. (234.94,243.12) .. controls (234.94,243.91) and (234.3,244.56) .. (233.5,244.56) .. controls (232.7,244.56) and (232.05,243.91) .. (232.05,243.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (159.43,257.66) .. controls (159.43,256.86) and (160.07,256.22) .. (160.87,256.22) .. 
controls (161.67,256.22) and (162.32,256.86) .. (162.32,257.66) .. controls (162.32,258.46) and (161.67,259.11) .. (160.87,259.11) .. controls (160.07,259.11) and (159.43,258.46) .. (159.43,257.66) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (220.05,260.12) .. controls (220.05,259.32) and (220.7,258.67) .. (221.5,258.67) .. controls (222.3,258.67) and (222.94,259.32) .. (222.94,260.12) .. controls (222.94,260.91) and (222.3,261.56) .. (221.5,261.56) .. controls (220.7,261.56) and (220.05,260.91) .. (220.05,260.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (191.05,253.12) .. controls (191.05,252.32) and (191.7,251.67) .. (192.5,251.67) .. controls (193.3,251.67) and (193.94,252.32) .. (193.94,253.12) .. controls (193.94,253.91) and (193.3,254.56) .. (192.5,254.56) .. controls (191.7,254.56) and (191.05,253.91) .. (191.05,253.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (206.05,264.12) .. controls (206.05,263.32) and (206.7,262.67) .. (207.5,262.67) .. controls (208.3,262.67) and (208.94,263.32) .. (208.94,264.12) .. controls (208.94,264.91) and (208.3,265.56) .. (207.5,265.56) .. controls (206.7,265.56) and (206.05,264.91) .. (206.05,264.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (176.05,270.12) .. controls (176.05,269.32) and (176.7,268.67) .. (177.5,268.67) .. controls (178.3,268.67) and (178.94,269.32) .. (178.94,270.12) .. controls (178.94,270.91) and (178.3,271.56) .. (177.5,271.56) .. controls (176.7,271.56) and (176.05,270.91) .. (176.05,270.12) -- cycle ; \draw [draw opacity=0] (175.73,141.7) .. controls (168.29,147.49) and (156.81,146.55) .. (148.87,139.1) .. controls (140.19,130.94) and (139.01,118.08) .. (146.25,110.37) .. controls (146.33,110.29) and (146.42,110.2) .. (146.5,110.12) -- (161.98,125.13) -- cycle ; \draw (175.73,141.7) .. controls (168.29,147.49) and (156.81,146.55) .. (148.87,139.1) .. 
controls (140.19,130.94) and (139.01,118.08) .. (146.25,110.37) .. controls (146.33,110.29) and (146.42,110.2) .. (146.5,110.12) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (146.5,240.12) -- (146.5,111.56) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (177.5,268.67) -- (177.5,140.12) ; \draw (353.83,81) -- (617.83,81) -- (617.83,279.42) -- (353.83,279.42) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (434,80.92) .. controls (434,78.2) and (436.2,76) .. (438.92,76) .. controls (441.63,76) and (443.83,78.2) .. (443.83,80.92) .. controls (443.83,83.63) and (441.63,85.83) .. (438.92,85.83) .. controls (436.2,85.83) and (434,83.63) .. (434,80.92) -- cycle ; \draw [color={rgb, 255:red, 255; green, 255; blue, 255 } ,draw opacity=1 ][fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (524,279.92) .. controls (524,277.2) and (526.2,275) .. (528.92,275) .. controls (531.63,275) and (533.83,277.2) .. (533.83,279.92) .. controls (533.83,282.63) and (531.63,284.83) .. (528.92,284.83) .. controls (526.2,284.83) and (524,282.63) .. (524,279.92) -- cycle ; \draw (434,80.92) -- (533.83,279.92) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (434,80.92) -- (434,278.42) ; \draw [dash pattern={on 4.5pt off 4.5pt}] (533.83,82.42) -- (533.83,279.92) ; \draw (473.83,247.58) -- (473.83,195.58) ; \draw [shift={(473.83,193.58)}, rotate = 450] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. 
(10.93,3.29) ; \draw (145,97.75) node {$x$}; \draw (208,97.75) node {$y$}; \draw (186,136.75) node {$z$}; \draw (145,227.75) node {$x$}; \draw (208,227.75) node {$y$}; \draw (186,266.75) node {$z$}; \draw (132.78,124.36) node {$\tau _{1}$}; \draw (160.27,243.55) node {$\tau _{2}$}; \draw (102.27,52.67) node {$f( \tau _{1}) =\tau _{2}$}; \draw (515.27,288.55) node {$\tau ^{-}_{2}$}; \draw (557.27,289.55) node {$\tau ^{+}_{2} =\tau '_{2}$}; \draw (434.27,288.55) node {$\tau _{1}$}; \draw (480,292) node {$\Sigma _{1}$}; \draw (531,167) node {$\eta ( \Sigma _{1}) =\Sigma _{2}$}; \draw (629,292) node {$S\times \{0\}$}; \draw (582,67) node {$S\times \{1\}$}; \draw (163,184.17) node {$\Sigma _{2}$}; \draw (464,218.17) node {$\eta $}; \draw (429.27,67.55) node {$\eta ( \tau _{1})$}; \end{tikzpicture} \caption{Obtain $\Sigma_2$ from $\eta:\Sigma_1\rightarrow S\times[0,1]$ and $S\textprime$ from $S$ and $\Sigma_2$ as shown.} \end{figure} \begin{figure}[H]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (105.07,151.94) .. controls (117.29,144.95) and (138.73,159.48) .. (152.97,184.39) .. controls (167.2,209.3) and (168.84,235.15) .. (156.62,242.13) .. controls (144.4,249.12) and (122.96,234.59) .. (108.72,209.68) .. controls (94.48,184.77) and (92.85,158.92) .. (105.07,151.94) -- cycle ; \draw (36,156.04) .. controls (36,91.95) and (87.95,40) .. (152.04,40) .. controls (216.13,40) and (268.08,91.95) .. (268.08,156.04) .. controls (268.08,220.13) and (216.13,272.08) .. (152.04,272.08) .. controls (87.95,272.08) and (36,220.13) .. (36,156.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (46.47,151.95) .. controls (46.47,120.02) and (72.36,94.12) .. (104.3,94.12) .. controls (136.23,94.12) and (162.13,120.02) .. (162.13,151.95) .. controls (162.13,183.89) and (136.23,209.78) .. (104.3,209.78) .. controls (72.36,209.78) and (46.47,183.89) .. 
(46.47,151.95) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (147.19,151.95) .. controls (147.19,120.02) and (173.08,94.12) .. (205.02,94.12) .. controls (236.96,94.12) and (262.85,120.02) .. (262.85,151.95) .. controls (262.85,183.89) and (236.96,209.78) .. (205.02,209.78) .. controls (173.08,209.78) and (147.19,183.89) .. (147.19,151.95) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (153.73,153.18) .. controls (153.73,152.38) and (154.38,151.74) .. (155.18,151.74) .. controls (155.97,151.74) and (156.62,152.38) .. (156.62,153.18) .. controls (156.62,153.98) and (155.97,154.62) .. (155.18,154.62) .. controls (154.38,154.62) and (153.73,153.98) .. (153.73,153.18) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (153.73,242.13) .. controls (153.73,241.33) and (154.38,240.69) .. (155.18,240.69) .. controls (155.97,240.69) and (156.62,241.33) .. (156.62,242.13) .. controls (156.62,242.93) and (155.97,243.58) .. (155.18,243.58) .. controls (154.38,243.58) and (153.73,242.93) .. (153.73,242.13) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (102.85,151.95) .. controls (102.85,151.16) and (103.5,150.51) .. (104.3,150.51) .. controls (105.09,150.51) and (105.74,151.16) .. (105.74,151.95) .. controls (105.74,152.75) and (105.09,153.4) .. (104.3,153.4) .. controls (103.5,153.4) and (102.85,152.75) .. (102.85,151.95) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (203.58,153.4) .. controls (203.58,152.6) and (204.22,151.95) .. (205.02,151.95) .. controls (205.82,151.95) and (206.47,152.6) .. (206.47,153.4) .. controls (206.47,154.2) and (205.82,154.84) .. (205.02,154.84) .. controls (204.22,154.84) and (203.58,154.2) .. (203.58,153.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (140,125.71) .. controls (140,124.91) and (140.64,124.27) .. (141.44,124.27) .. controls (142.24,124.27) and (142.88,124.91) .. 
(142.88,125.71) .. controls (142.88,126.51) and (142.24,127.15) .. (141.44,127.15) .. controls (140.64,127.15) and (140,126.51) .. (140,125.71) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (122.34,113.28) .. controls (122.34,112.48) and (122.98,111.84) .. (123.78,111.84) .. controls (124.58,111.84) and (125.23,112.48) .. (125.23,113.28) .. controls (125.23,114.08) and (124.58,114.73) .. (123.78,114.73) .. controls (122.98,114.73) and (122.34,114.08) .. (122.34,113.28) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (139.34,182.61) .. controls (139.34,181.82) and (139.99,181.17) .. (140.79,181.17) .. controls (141.58,181.17) and (142.23,181.82) .. (142.23,182.61) .. controls (142.23,183.41) and (141.58,184.06) .. (140.79,184.06) .. controls (139.99,184.06) and (139.34,183.41) .. (139.34,182.61) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (166.16,124.4) .. controls (166.16,123.6) and (166.81,122.96) .. (167.6,122.96) .. controls (168.4,122.96) and (169.05,123.6) .. (169.05,124.4) .. controls (169.05,125.2) and (168.4,125.85) .. (167.6,125.85) .. controls (166.81,125.85) and (166.16,125.2) .. (166.16,124.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (183.82,112.63) .. controls (183.82,111.83) and (184.46,111.18) .. (185.26,111.18) .. controls (186.06,111.18) and (186.71,111.83) .. (186.71,112.63) .. controls (186.71,113.43) and (186.06,114.07) .. (185.26,114.07) .. controls (184.46,114.07) and (183.82,113.43) .. (183.82,112.63) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (204.09,107.4) .. controls (204.09,106.6) and (204.74,105.95) .. (205.54,105.95) .. controls (206.34,105.95) and (206.98,106.6) .. (206.98,107.4) .. controls (206.98,108.19) and (206.34,108.84) .. (205.54,108.84) .. controls (204.74,108.84) and (204.09,108.19) .. 
(204.09,107.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (126.91,191.77) .. controls (126.91,190.97) and (127.56,190.33) .. (128.36,190.33) .. controls (129.16,190.33) and (129.8,190.97) .. (129.8,191.77) .. controls (129.8,192.57) and (129.16,193.21) .. (128.36,193.21) .. controls (127.56,193.21) and (126.91,192.57) .. (126.91,191.77) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (102.06,105.43) .. controls (102.06,104.64) and (102.71,103.99) .. (103.5,103.99) .. controls (104.3,103.99) and (104.95,104.64) .. (104.95,105.43) .. controls (104.95,106.23) and (104.3,106.88) .. (103.5,106.88) .. controls (102.71,106.88) and (102.06,106.23) .. (102.06,105.43) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (81.13,111.97) .. controls (81.13,111.18) and (81.78,110.53) .. (82.57,110.53) .. controls (83.37,110.53) and (84.02,111.18) .. (84.02,111.97) .. controls (84.02,112.77) and (83.37,113.42) .. (82.57,113.42) .. controls (81.78,113.42) and (81.13,112.77) .. (81.13,111.97) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (111.22,196.35) .. controls (111.22,195.55) and (111.86,194.9) .. (112.66,194.9) .. controls (113.46,194.9) and (114.11,195.55) .. (114.11,196.35) .. controls (114.11,197.15) and (113.46,197.79) .. (112.66,197.79) .. controls (111.86,197.79) and (111.22,197.15) .. (111.22,196.35) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (85.05,186.12) .. controls (85.05,185.32) and (85.7,184.67) .. (86.5,184.67) .. controls (87.3,184.67) and (87.94,185.32) .. (87.94,186.12) .. controls (87.94,186.91) and (87.3,187.56) .. (86.5,187.56) .. controls (85.7,187.56) and (85.05,186.91) .. (85.05,186.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (72.36,173.07) .. controls (72.36,172.27) and (73,171.63) .. (73.8,171.63) .. controls (74.6,171.63) and (75.25,172.27) .. 
(75.25,173.07) .. controls (75.25,173.87) and (74.6,174.52) .. (73.8,174.52) .. controls (73,174.52) and (72.36,173.87) .. (72.36,173.07) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (162.23,179.34) .. controls (162.23,178.54) and (162.88,177.9) .. (163.68,177.9) .. controls (164.48,177.9) and (165.12,178.54) .. (165.12,179.34) .. controls (165.12,180.14) and (164.48,180.79) .. (163.68,180.79) .. controls (162.88,180.79) and (162.23,180.14) .. (162.23,179.34) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (177.93,193.73) .. controls (177.93,192.93) and (178.58,192.29) .. (179.38,192.29) .. controls (180.17,192.29) and (180.82,192.93) .. (180.82,193.73) .. controls (180.82,194.53) and (180.17,195.18) .. (179.38,195.18) .. controls (178.58,195.18) and (177.93,194.53) .. (177.93,193.73) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (201.48,200.27) .. controls (201.48,199.47) and (202.12,198.83) .. (202.92,198.83) .. controls (203.72,198.83) and (204.37,199.47) .. (204.37,200.27) .. controls (204.37,201.07) and (203.72,201.72) .. (202.92,201.72) .. controls (202.12,201.72) and (201.48,201.07) .. (201.48,200.27) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (226.33,197) .. controls (226.33,196.2) and (226.98,195.56) .. (227.78,195.56) .. controls (228.57,195.56) and (229.22,196.2) .. (229.22,197) .. controls (229.22,197.8) and (228.57,198.45) .. (227.78,198.45) .. controls (226.98,198.45) and (226.33,197.8) .. (226.33,197) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (244.65,180.65) .. controls (244.65,179.85) and (245.29,179.21) .. (246.09,179.21) .. controls (246.89,179.21) and (247.53,179.85) .. (247.53,180.65) .. controls (247.53,181.45) and (246.89,182.1) .. (246.09,182.1) .. controls (245.29,182.1) and (244.65,181.45) .. 
(244.65,180.65) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (251.19,159.72) .. controls (251.19,158.92) and (251.83,158.28) .. (252.63,158.28) .. controls (253.43,158.28) and (254.08,158.92) .. (254.08,159.72) .. controls (254.08,160.52) and (253.43,161.17) .. (252.63,161.17) .. controls (251.83,161.17) and (251.19,160.52) .. (251.19,159.72) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (60.58,157.14) .. controls (60.58,156.34) and (61.23,155.7) .. (62.03,155.7) .. controls (62.83,155.7) and (63.47,156.34) .. (63.47,157.14) .. controls (63.47,157.94) and (62.83,158.59) .. (62.03,158.59) .. controls (61.23,158.59) and (60.58,157.94) .. (60.58,157.14) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (66.09,131.6) .. controls (66.09,130.8) and (66.73,130.15) .. (67.53,130.15) .. controls (68.33,130.15) and (68.98,130.8) .. (68.98,131.6) .. controls (68.98,132.39) and (68.33,133.04) .. (67.53,133.04) .. controls (66.73,133.04) and (66.09,132.39) .. (66.09,131.6) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (226.33,117.86) .. controls (226.33,117.06) and (226.98,116.42) .. (227.78,116.42) .. controls (228.57,116.42) and (229.22,117.06) .. (229.22,117.86) .. controls (229.22,118.66) and (228.57,119.31) .. (227.78,119.31) .. controls (226.98,119.31) and (226.33,118.66) .. (226.33,117.86) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (245.95,138.79) .. controls (245.95,137.99) and (246.6,137.35) .. (247.4,137.35) .. controls (248.2,137.35) and (248.84,137.99) .. (248.84,138.79) .. controls (248.84,139.59) and (248.2,140.24) .. (247.4,140.24) .. controls (246.6,140.24) and (245.95,139.59) .. (245.95,138.79) -- cycle ; \draw (326,159.04) .. controls (326,94.95) and (377.95,43) .. (442.04,43) .. controls (506.13,43) and (558.08,94.95) .. (558.08,159.04) .. controls (558.08,223.13) and (506.13,275.08) .. 
(442.04,275.08) .. controls (377.95,275.08) and (326,223.13) .. (326,159.04) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (336.47,154.95) .. controls (336.47,123.02) and (362.36,97.12) .. (394.3,97.12) .. controls (426.23,97.12) and (452.13,123.02) .. (452.13,154.95) .. controls (452.13,186.89) and (426.23,212.78) .. (394.3,212.78) .. controls (362.36,212.78) and (336.47,186.89) .. (336.47,154.95) -- cycle ; \draw [dash pattern={on 4.5pt off 4.5pt}] (437.19,154.95) .. controls (437.19,123.02) and (463.08,97.12) .. (495.02,97.12) .. controls (526.96,97.12) and (552.85,123.02) .. (552.85,154.95) .. controls (552.85,186.89) and (526.96,212.78) .. (495.02,212.78) .. controls (463.08,212.78) and (437.19,186.89) .. (437.19,154.95) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (443.73,156.18) .. controls (443.73,155.38) and (444.38,154.74) .. (445.18,154.74) .. controls (445.97,154.74) and (446.62,155.38) .. (446.62,156.18) .. controls (446.62,156.98) and (445.97,157.62) .. (445.18,157.62) .. controls (444.38,157.62) and (443.73,156.98) .. (443.73,156.18) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (443.73,245.13) .. controls (443.73,244.33) and (444.38,243.69) .. (445.18,243.69) .. controls (445.97,243.69) and (446.62,244.33) .. (446.62,245.13) .. controls (446.62,245.93) and (445.97,246.58) .. (445.18,246.58) .. controls (444.38,246.58) and (443.73,245.93) .. (443.73,245.13) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (392.85,154.95) .. controls (392.85,154.16) and (393.5,153.51) .. (394.3,153.51) .. controls (395.09,153.51) and (395.74,154.16) .. (395.74,154.95) .. controls (395.74,155.75) and (395.09,156.4) .. (394.3,156.4) .. controls (393.5,156.4) and (392.85,155.75) .. (392.85,154.95) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (493.58,156.4) .. controls (493.58,155.6) and (494.22,154.95) .. (495.02,154.95) .. 
controls (495.82,154.95) and (496.47,155.6) .. (496.47,156.4) .. controls (496.47,157.2) and (495.82,157.84) .. (495.02,157.84) .. controls (494.22,157.84) and (493.58,157.2) .. (493.58,156.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (430,128.71) .. controls (430,127.91) and (430.64,127.27) .. (431.44,127.27) .. controls (432.24,127.27) and (432.88,127.91) .. (432.88,128.71) .. controls (432.88,129.51) and (432.24,130.15) .. (431.44,130.15) .. controls (430.64,130.15) and (430,129.51) .. (430,128.71) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (412.34,116.28) .. controls (412.34,115.48) and (412.98,114.84) .. (413.78,114.84) .. controls (414.58,114.84) and (415.23,115.48) .. (415.23,116.28) .. controls (415.23,117.08) and (414.58,117.73) .. (413.78,117.73) .. controls (412.98,117.73) and (412.34,117.08) .. (412.34,116.28) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (433.34,183.61) .. controls (433.34,182.82) and (433.99,182.17) .. (434.79,182.17) .. controls (435.58,182.17) and (436.23,182.82) .. (436.23,183.61) .. controls (436.23,184.41) and (435.58,185.06) .. (434.79,185.06) .. controls (433.99,185.06) and (433.34,184.41) .. (433.34,183.61) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (456.16,127.4) .. controls (456.16,126.6) and (456.81,125.96) .. (457.6,125.96) .. controls (458.4,125.96) and (459.05,126.6) .. (459.05,127.4) .. controls (459.05,128.2) and (458.4,128.85) .. (457.6,128.85) .. controls (456.81,128.85) and (456.16,128.2) .. (456.16,127.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (473.82,115.63) .. controls (473.82,114.83) and (474.46,114.18) .. (475.26,114.18) .. controls (476.06,114.18) and (476.71,114.83) .. (476.71,115.63) .. controls (476.71,116.43) and (476.06,117.07) .. (475.26,117.07) .. controls (474.46,117.07) and (473.82,116.43) .. 
(473.82,115.63) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (494.09,110.4) .. controls (494.09,109.6) and (494.74,108.95) .. (495.54,108.95) .. controls (496.34,108.95) and (496.98,109.6) .. (496.98,110.4) .. controls (496.98,111.19) and (496.34,111.84) .. (495.54,111.84) .. controls (494.74,111.84) and (494.09,111.19) .. (494.09,110.4) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (426.91,189.77) .. controls (426.91,188.97) and (427.56,188.33) .. (428.36,188.33) .. controls (429.16,188.33) and (429.8,188.97) .. (429.8,189.77) .. controls (429.8,190.57) and (429.16,191.21) .. (428.36,191.21) .. controls (427.56,191.21) and (426.91,190.57) .. (426.91,189.77) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (392.06,108.43) .. controls (392.06,107.64) and (392.71,106.99) .. (393.5,106.99) .. controls (394.3,106.99) and (394.95,107.64) .. (394.95,108.43) .. controls (394.95,109.23) and (394.3,109.88) .. (393.5,109.88) .. controls (392.71,109.88) and (392.06,109.23) .. (392.06,108.43) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (371.13,114.97) .. controls (371.13,114.18) and (371.78,113.53) .. (372.57,113.53) .. controls (373.37,113.53) and (374.02,114.18) .. (374.02,114.97) .. controls (374.02,115.77) and (373.37,116.42) .. (372.57,116.42) .. controls (371.78,116.42) and (371.13,115.77) .. (371.13,114.97) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (419.22,193.35) .. controls (419.22,192.55) and (419.86,191.9) .. (420.66,191.9) .. controls (421.46,191.9) and (422.11,192.55) .. (422.11,193.35) .. controls (422.11,194.15) and (421.46,194.79) .. (420.66,194.79) .. controls (419.86,194.79) and (419.22,194.15) .. (419.22,193.35) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (369.05,188.12) .. controls (369.05,187.32) and (369.7,186.67) .. (370.5,186.67) .. 
controls (371.3,186.67) and (371.94,187.32) .. (371.94,188.12) .. controls (371.94,188.91) and (371.3,189.56) .. (370.5,189.56) .. controls (369.7,189.56) and (369.05,188.91) .. (369.05,188.12) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (356.36,171.07) .. controls (356.36,170.27) and (357,169.63) .. (357.8,169.63) .. controls (358.6,169.63) and (359.25,170.27) .. (359.25,171.07) .. controls (359.25,171.87) and (358.6,172.52) .. (357.8,172.52) .. controls (357,172.52) and (356.36,171.87) .. (356.36,171.07) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (452.23,182.34) .. controls (452.23,181.54) and (452.88,180.9) .. (453.68,180.9) .. controls (454.48,180.9) and (455.12,181.54) .. (455.12,182.34) .. controls (455.12,183.14) and (454.48,183.79) .. (453.68,183.79) .. controls (452.88,183.79) and (452.23,183.14) .. (452.23,182.34) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (467.93,196.73) .. controls (467.93,195.93) and (468.58,195.29) .. (469.38,195.29) .. controls (470.17,195.29) and (470.82,195.93) .. (470.82,196.73) .. controls (470.82,197.53) and (470.17,198.18) .. (469.38,198.18) .. controls (468.58,198.18) and (467.93,197.53) .. (467.93,196.73) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (491.48,203.27) .. controls (491.48,202.47) and (492.12,201.83) .. (492.92,201.83) .. controls (493.72,201.83) and (494.37,202.47) .. (494.37,203.27) .. controls (494.37,204.07) and (493.72,204.72) .. (492.92,204.72) .. controls (492.12,204.72) and (491.48,204.07) .. (491.48,203.27) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (516.33,200) .. controls (516.33,199.2) and (516.98,198.56) .. (517.78,198.56) .. controls (518.57,198.56) and (519.22,199.2) .. (519.22,200) .. controls (519.22,200.8) and (518.57,201.45) .. (517.78,201.45) .. controls (516.98,201.45) and (516.33,200.8) .. 
(516.33,200) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (534.65,183.65) .. controls (534.65,182.85) and (535.29,182.21) .. (536.09,182.21) .. controls (536.89,182.21) and (537.53,182.85) .. (537.53,183.65) .. controls (537.53,184.45) and (536.89,185.1) .. (536.09,185.1) .. controls (535.29,185.1) and (534.65,184.45) .. (534.65,183.65) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (541.19,162.72) .. controls (541.19,161.92) and (541.83,161.28) .. (542.63,161.28) .. controls (543.43,161.28) and (544.08,161.92) .. (544.08,162.72) .. controls (544.08,163.52) and (543.43,164.17) .. (542.63,164.17) .. controls (541.83,164.17) and (541.19,163.52) .. (541.19,162.72) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (351.58,154.14) .. controls (351.58,153.34) and (352.23,152.7) .. (353.03,152.7) .. controls (353.83,152.7) and (354.47,153.34) .. (354.47,154.14) .. controls (354.47,154.94) and (353.83,155.59) .. (353.03,155.59) .. controls (352.23,155.59) and (351.58,154.94) .. (351.58,154.14) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (356.09,134.6) .. controls (356.09,133.8) and (356.73,133.15) .. (357.53,133.15) .. controls (358.33,133.15) and (358.98,133.8) .. (358.98,134.6) .. controls (358.98,135.39) and (358.33,136.04) .. (357.53,136.04) .. controls (356.73,136.04) and (356.09,135.39) .. (356.09,134.6) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (516.33,120.86) .. controls (516.33,120.06) and (516.98,119.42) .. (517.78,119.42) .. controls (518.57,119.42) and (519.22,120.06) .. (519.22,120.86) .. controls (519.22,121.66) and (518.57,122.31) .. (517.78,122.31) .. controls (516.98,122.31) and (516.33,121.66) .. (516.33,120.86) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (535.95,141.79) .. controls (535.95,140.99) and (536.6,140.35) .. (537.4,140.35) .. 
controls (538.2,140.35) and (538.84,140.99) .. (538.84,141.79) .. controls (538.84,142.59) and (538.2,143.24) .. (537.4,143.24) .. controls (536.6,143.24) and (535.95,142.59) .. (535.95,141.79) -- cycle ; \draw (394.3,153.51) .. controls (363.83,197.08) and (403.73,275.13) .. (443.73,245.13) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (389.22,199.35) .. controls (389.22,198.55) and (389.86,197.9) .. (390.66,197.9) .. controls (391.46,197.9) and (392.11,198.55) .. (392.11,199.35) .. controls (392.11,200.15) and (391.46,200.79) .. (390.66,200.79) .. controls (389.86,200.79) and (389.22,200.15) .. (389.22,199.35) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (397.22,198.35) .. controls (397.22,197.55) and (397.86,196.9) .. (398.66,196.9) .. controls (399.46,196.9) and (400.11,197.55) .. (400.11,198.35) .. controls (400.11,199.15) and (399.46,199.79) .. (398.66,199.79) .. controls (397.86,199.79) and (397.22,199.15) .. (397.22,198.35) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (406.22,197.35) .. controls (406.22,196.55) and (406.86,195.9) .. (407.66,195.9) .. controls (408.46,195.9) and (409.11,196.55) .. (409.11,197.35) .. controls (409.11,198.15) and (408.46,198.79) .. (407.66,198.79) .. controls (406.86,198.79) and (406.22,198.15) .. (406.22,197.35) -- cycle ; \draw (394.3,156.4) .. controls (429.83,160.08) and (394.83,243.08) .. (443.73,245.13) ; \draw (392.85,154.95) .. controls (432.85,124.95) and (461.83,234.08) .. (446.62,245.13) ; \draw (278.83,159.08) -- (310.83,159.08) ; \draw [shift={(312.83,159.08)}, rotate = 180] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (327.83,217.58) -- (406.84,223.44) ; \draw [shift={(408.83,223.58)}, rotate = 184.24] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. 
controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw (104.68,144.27) node {$x$}; \draw (207.36,142.96) node {$y$}; \draw (156.35,249.22) node {$z$}; \draw (142.27,221.67) node {$\Sigma _{1}$}; \draw (108.78,219.36) node {$\tau _{1}$}; \draw (172.27,210.55) node {$\tau _{2}$}; \draw (394.68,147.27) node {$x$}; \draw (497.36,145.96) node {$y$}; \draw (446.35,252.22) node {$z$}; \draw (431.78,230.36) node {$\tau _{1}$}; \draw (460.27,215.55) node {$\tau _{2}$}; \draw (305,214.67) node {$\eta ( \Sigma _{1})$}; \draw (436.27,212.67) node {$\Sigma _{1}$}; \end{tikzpicture} \caption{Left: $S$. Right: $S'$.} \end{figure} \end{proof} \begin{lem} For fixed $k$, and fixed $u_i,v_i\geq B_k$ (the constant from Proposition 3), there exists $R>0$ so that if $n=m\geq R$, then $h_kf^3_{n,n}: S_{0,2n+2}\to S_{0,2n+2}$ satisfies $\log\lambda(h_kf^3_{n,n})\leq 54\frac{\log(2n+2)}{2n+2}$. \end{lem} \begin{proof} Consider the spine $G$ of $S_{0,n+m+2}$ shown in Figure 7. This is in fact a train track for $f_{n,m}$, as described in \cite{hironaka}, and hence also for $f$. Then $f$ induces a map $g:G\to G$. \begin{figure}[!ht]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-0.8,xscale=0.8] \draw (177.52,207.36) .. controls (171.45,216.18) and (175.89,220.15) .. (186.21,221.45) ; \draw (186.21,221.45) .. controls (168.28,223.86) and (164.37,231.88) .. (178.97,239.38) ; \draw (178.97,239.38) .. controls (166.6,238.11) and (163.04,238.1) .. (160.15,249.63) ; \draw (158.7,202.24) .. controls (161.6,213.76) and (167.39,215.05) .. (177.52,207.36) ; \draw (177.52,207.36) -- (220.95,154.85) ; \draw (186.21,221.45) -- (310.71,220.92) ; \draw (178.97,239.38) -- (247.01,263.72) ; \draw (160.15,249.63) -- (161.6,308.55) ; \draw (158.7,202.24) -- (157.25,138.2) ; \draw (157.25,138.2) .. controls (159.14,126.99) and (138.37,106.11) .. (156.31,106.95) ..
controls (174.24,107.78) and (158.2,120.31) .. (157.25,138.2) -- cycle ; \draw (220.95,154.85) .. controls (229.41,146.34) and (224.77,118.83) .. (239.32,128.14) .. controls (253.87,137.45) and (232.71,140.27) .. (220.95,154.85) -- cycle ; \draw (247.01,263.72) .. controls (258.25,269.15) and (287.57,259.12) .. (280.49,273.72) .. controls (273.41,288.33) and (265.66,270.68) .. (247.01,263.72) -- cycle ; \draw (161.6,308.54) .. controls (159.85,319.77) and (180.87,340.44) .. (162.93,339.78) .. controls (144.98,339.12) and (160.87,326.44) .. (161.6,308.54) -- cycle ; \draw (310.71,220.92) .. controls (319.56,220.76) and (322.28,220.44) .. (325.55,211.31) ; \draw (474.96,234.62) .. controls (481.09,225.84) and (476.67,221.84) .. (466.36,220.49) ; \draw (466.36,220.49) .. controls (484.3,218.17) and (488.27,210.17) .. (473.71,202.6) ; \draw (473.71,202.6) .. controls (486.07,203.93) and (489.63,203.96) .. (492.6,192.44) ; \draw (493.75,239.84) .. controls (490.92,228.3) and (485.14,226.99) .. (474.96,234.62) ; \draw (474.96,234.62) -- (431.2,286.92) ; \draw (466.36,220.49) -- (341.85,220.41) ; \draw (473.71,202.6) -- (405.82,177.92) ; \draw (492.6,192.44) -- (491.52,133.52) ; \draw (493.75,239.84) -- (494.79,303.89) ; \draw (494.79,303.89) .. controls (492.83,315.08) and (513.47,336.06) .. (495.54,335.14) .. controls (477.61,334.22) and (493.73,321.77) .. (494.79,303.89) -- cycle ; \draw (431.2,286.92) .. controls (422.69,295.39) and (427.16,322.92) .. (412.66,313.54) .. controls (398.16,304.15) and (419.34,301.44) .. (431.2,286.92) -- cycle ; \draw (405.82,177.92) .. controls (394.62,172.43) and (365.24,182.32) .. (372.41,167.75) .. controls (379.58,153.19) and (387.22,170.87) .. (405.82,177.92) -- cycle ; \draw (491.52,133.52) .. controls (493.34,122.3) and (472.45,101.52) .. (490.39,102.28) .. controls (508.33,103.03) and (492.36,115.63) .. (491.52,133.52) -- cycle ; \draw (341.85,220.41) .. controls (333,220.52) and (328.45,217.71) .. 
(325.55,211.31) ; \draw (325.55,211.31) .. controls (327.98,194.74) and (301.23,163.86) .. (324.34,165.09) .. controls (347.45,166.33) and (326.77,184.85) .. (325.55,211.31) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (234.47,135.84) .. controls (234.47,135.84) and (234.47,135.84) .. (234.47,135.84) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (324.95,182.02) .. controls (324.95,182.02) and (324.95,182.02) .. (324.95,182.02) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (157.19,116.99) .. controls (157.19,116.99) and (157.19,116.99) .. (157.19,116.99) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (273.11,272.5) .. controls (273.11,272.5) and (273.11,272.5) .. (273.11,272.5) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (161.43,331.88) .. controls (161.43,331.88) and (161.43,331.88) .. (161.43,331.88) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (164.73,226.32) .. controls (164.73,226.32) and (164.73,226.32) .. (164.73,226.32) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (491.77,111.33) .. controls (491.77,111.33) and (491.77,111.33) .. (491.77,111.33) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (382.44,170.24) .. controls (382.44,170.24) and (382.44,170.24) .. (382.44,170.24) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (489.89,217.36) .. controls (489.89,217.36) and (489.89,217.36) .. 
(489.89,217.36) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (418.26,306.9) .. controls (418.26,306.9) and (418.26,306.9) .. (418.26,306.9) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (495.54,327.64) .. controls (495.54,327.64) and (495.54,327.64) .. (495.54,327.64) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (328.72,333.76) .. controls (328.72,333.76) and (328.72,333.76) .. (328.72,333.76) ; \draw (511.17,198.88) .. controls (505.09,207.7) and (509.53,211.67) .. (519.85,212.97) ; \draw (519.85,212.97) .. controls (501.93,215.38) and (498.01,223.4) .. (512.62,230.9) ; \draw (512.62,230.9) .. controls (500.24,229.62) and (496.69,229.62) .. (493.79,241.14) ; \draw (492.35,193.75) .. controls (495.24,205.28) and (501.03,206.56) .. (511.17,198.88) ; \draw (511.17,198.88) -- (554.6,146.36) ; \draw (519.85,212.97) -- (609.67,213.36) ; \draw (512.62,230.9) -- (580.66,255.23) ; \draw (554.6,146.36) .. controls (563.06,137.86) and (558.41,110.35) .. (572.97,119.66) .. controls (587.52,128.97) and (566.36,131.79) .. (554.6,146.36) -- cycle ; \draw (580.66,255.23) .. controls (591.89,260.67) and (621.21,250.64) .. (614.13,265.24) .. controls (607.06,279.84) and (599.31,262.19) .. (580.66,255.23) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (568.12,127.36) .. controls (568.12,127.36) and (568.12,127.36) .. (568.12,127.36) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (606.76,264.02) .. controls (606.76,264.02) and (606.76,264.02) .. (606.76,264.02) ; \draw (603.99,213.25) .. controls (615.17,215.27) and (636.3,194.75) .. (635.25,212.68) .. controls (634.19,230.6) and (621.86,214.41) .. 
(603.99,213.25) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (625.19,213.43) .. controls (625.19,213.43) and (625.19,213.43) .. (625.19,213.43) ; \draw (141.31,243.11) .. controls (147.44,234.32) and (143.03,230.33) .. (132.71,228.97) ; \draw (132.71,228.97) .. controls (150.65,226.65) and (154.62,218.65) .. (140.07,211.08) ; \draw (140.07,211.08) .. controls (152.43,212.41) and (155.98,212.44) .. (158.95,200.93) ; \draw (160.1,248.32) .. controls (157.28,236.78) and (151.49,235.47) .. (141.31,243.11) ; \draw (141.31,243.11) -- (97.55,295.4) ; \draw (132.71,228.97) -- (46.99,228.91) ; \draw (140.07,211.08) -- (72.18,186.41) ; \draw (97.55,295.4) .. controls (89.04,303.87) and (93.51,331.4) .. (79.01,322.02) .. controls (64.52,312.64) and (85.7,309.92) .. (97.55,295.4) -- cycle ; \draw (72.18,186.41) .. controls (60.98,180.92) and (31.6,190.8) .. (38.77,176.24) .. controls (45.94,161.67) and (53.57,179.36) .. (72.18,186.41) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (48.8,178.72) .. controls (48.8,178.72) and (48.8,178.72) .. (48.8,178.72) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (84.61,315.38) .. controls (84.61,315.38) and (84.61,315.38) .. (84.61,315.38) ; \draw (51.86,229) .. controls (40.7,226.87) and (19.38,247.2) .. (20.6,229.28) .. controls (21.81,211.37) and (34,227.68) .. (51.86,229) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=3.75] [line join = round][line cap = round] (30.65,228.62) .. controls (30.65,228.62) and (30.65,228.62) .. 
(30.65,228.62) ; \draw (245.31,211.24) node {$e_{1}$}; \draw (194.41,166.94) node {$e_{2}$}; \draw (147.29,161.29) node {$e_{3}$}; \draw (212.32,256.48) node {$e_{8}$}; \draw (180.28,284.75) node {$e_{7}$}; \draw (394.23,208.41) node {$e'_{1}$}; \draw (431.93,170.71) node {$e'_{2}$}; \draw (504.5,159.4) node {$e'_{3}$}; \draw (514.87,269.67) node {$e'_{7}$}; \draw (441.35,257.42) node {$e'_{8}$}; \draw (325.42,156.57) node {$a_{1}$}; \draw (244.37,121.7) node {$a_{2}$}; \draw (157.66,98.14) node {$a_{3}$}; \draw (163.31,348.84) node {$a_{7}$}; \draw (292.43,279.1) node {$a_{8}$}; \draw (185.93,212.18) node [scale=0.8] {$b_{1}$}; \draw (168.97,201.81) node [scale=0.8] {$b_{2}$}; \draw (183.1,231.97) node [scale=0.8] {$b_{n}$}; \draw (174.62,248) node [scale=0.8] {$b_{n-1}$}; \draw (365.01,159.4) node {$a'_{2}$}; \draw (491.3,94.37) node {$a'_{3}$}; \draw (405.54,323.4) node {$a'_{8}$}; \draw (496.96,346.96) node {$a'_{7}$}; \draw (166.14,217.84) node [scale=0.8] {$x$}; \draw (489.42,209.35) node [scale=0.8] {$y$}; \draw (468.68,210.3) node [scale=0.8] {$b'_{1}$}; \draw (483.76,194.27) node [scale=0.8] {$b'_{2}$}; \draw (468.68,228.2) node [scale=0.8] {$b'_{m}$}; \draw (482.82,245.17) node [scale=0.8] {$b'_{m-1}$}; \draw (329.19,323.4) node {$z$}; \draw (532.77,156.57) node {$e'_{4}$}; \draw (520.52,203.7) node [scale=0.8] {$b'_{4}$}; \draw (502.61,194.27) node [scale=0.8] {$b'_{3}$}; \draw (577.07,110.39) node {$a'_{4}$}; \draw (643.05,210.3) node {$a'_{5}$}; \draw (623.25,264.96) node {$a'_{6}$}; \draw (517.69,221.61) node [scale=0.8] {$b'_{5}$}; \draw (505.44,239.51) node [scale=0.8] {$b'_{6}$}; \draw (569.53,202.76) node {$e'_{5}$}; \draw (551.62,255.54) node {$e'_{6}$}; \draw (67.18,321.51) node {$a_{6}$}; \draw (13.45,221.61) node {$a_{5}$}; \draw (113.36,260.25) node {$e_{6}$}; \draw (89.8,217.84) node {$e_{5}$}; \draw (151.06,247.05) node [scale=0.8] {$b_{6}$}; \draw (135.04,235.74) node [scale=0.8] {$b_{5}$}; \draw (138.81,217.84) node [scale=0.8] {$b_{4}$}; 
\draw (150.12,200.87) node [scale=0.8] {$b_{3}$}; \draw (103.93,185.79) node {$e_{4}$}; \draw (31.36,167.88) node {$a_{4}$};
\end{tikzpicture}
\caption{Spine of $S_{0,n+m+2}$ when $n=m=8$}
\end{figure}
The graph $G$ contains the loop edges $a_1, a_2, \dots, a_n$ and $a'_2, a'_3, \dots, a'_m$, on which $g$ acts as a permutation, and ``peripheral'' edges $b_1, b_2, \dots, b_n$ and $b'_1, b'_2, \dots, b'_m$, on which $g$ also acts as a permutation. The transition matrix has the block form
\[
T= \left[ \begin{array}{c|c} A & *\\ \hline 0 & P\\ \end{array} \right]
\]
where the block $P$ corresponds to $e_1, e_2, \dots, e_n$, $e'_1, e'_2, \dots, e'_m$, and $A$ is a permutation matrix corresponding to $a_1, a_2, \dots, a_n$, $a'_1, a'_2, \dots, a'_m$, $b_1, b_2, \dots, b_n$, $b'_1, b'_2, \dots, b'_m$. Since every eigenvalue of a permutation matrix has absolute value $1$, the largest eigenvalue of $T$ (in absolute value) is the largest eigenvalue of $P$. If we remove all the non-contributing edges, we have
\[
\begin{array}{rcl}
e_i & \to & e_{i+3} \quad \mbox{for } 1\leq i\leq n-3 \\
e'_i & \to & e'_{i+3} \quad \mbox{for } 1< i\leq m-2 \\
e'_1 & \to & e'_4e'_4e'_3e'_3e'_2e'_2e'_1e_1e_2e_2e_3e_3e_4 \\
e_n & \to & e_3e_3e_2e_2e_1e'_1e'_2e'_2e'_3e'_3e'_4 \\
e'_m & \to & e'_3e'_3e'_2e'_2e'_1e_1e_2e_2e_3 \\
e_{n-1} & \to & e_2e_2e_1e'_1e'_2e'_2e'_3 \\
e'_{m-1} & \to & e'_2e'_2e'_1e_1e_2 \\
e_{n-2} & \to & e_1e'_1e'_2
\end{array}
\]
\begin{figure}[!ht]\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (7,150.45) ..
controls (7,81.01) and (105.56,24.73) .. (227.13,24.73) .. controls (348.71,24.73) and (447.27,81.01) .. (447.27,150.45) .. controls (447.27,219.88) and (348.71,276.17) .. (227.13,276.17) .. controls (105.56,276.17) and (7,219.88) .. (7,150.45) -- cycle ; \draw (31.29,150.45) .. controls (31.29,88.68) and (118.97,38.6) .. (227.13,38.6) .. controls (335.3,38.6) and (422.98,88.68) .. (422.98,150.45) .. controls (422.98,212.22) and (335.3,262.3) .. (227.13,262.3) .. controls (118.97,262.3) and (31.29,212.22) .. (31.29,150.45) -- cycle ; \draw (54.74,150.45) .. controls (54.74,96.07) and (131.92,51.99) .. (227.13,51.99) .. controls (322.34,51.99) and (399.53,96.07) .. (399.53,150.45) .. controls (399.53,204.82) and (322.34,248.9) .. (227.13,248.9) .. controls (131.92,248.9) and (54.74,204.82) .. (54.74,150.45) -- cycle ; \draw (80.28,150.45) .. controls (80.28,104.13) and (146.03,66.58) .. (227.13,66.58) .. controls (308.23,66.58) and (373.98,104.13) .. (373.98,150.45) .. controls (373.98,196.77) and (308.23,234.31) .. (227.13,234.31) .. controls (146.03,234.31) and (80.28,196.77) .. (80.28,150.45) -- cycle ; \draw (106.42,150.45) .. controls (106.42,112.37) and (160.47,81.51) .. (227.13,81.51) .. controls (293.8,81.51) and (347.85,112.37) .. (347.85,150.45) .. controls (347.85,188.52) and (293.8,219.39) .. (227.13,219.39) .. controls (160.47,219.39) and (106.42,188.52) .. (106.42,150.45) -- cycle ; \draw (133.69,150.45) .. controls (133.69,120.98) and (175.53,97.08) .. (227.13,97.08) .. controls (278.74,97.08) and (320.57,120.98) .. (320.57,150.45) .. controls (320.57,179.92) and (278.74,203.81) .. (227.13,203.81) .. controls (175.53,203.81) and (133.69,179.92) .. (133.69,150.45) -- cycle ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (194.83,7.83) -- (255.6,7.83) -- (255.6,114.17) -- (194.83,114.17) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (128.39,37.65) .. 
controls (128.39,37.65) and (128.39,37.65) .. (128.39,37.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (161.61,29.98) .. controls (161.61,29.98) and (161.61,29.98) .. (161.61,29.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (163.31,44.46) .. controls (163.31,44.46) and (163.31,44.46) .. (163.31,44.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (130.09,52.98) .. controls (130.09,52.98) and (130.09,52.98) .. (130.09,52.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (165.02,58.09) .. controls (165.02,58.09) and (165.02,58.09) .. (165.02,58.09) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (133.5,67.46) .. controls (133.5,67.46) and (133.5,67.46) .. (133.5,67.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (165.87,74.28) .. controls (165.87,74.28) and (165.87,74.28) .. (165.87,74.28) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (136.91,84.5) .. controls (136.91,84.5) and (136.91,84.5) .. (136.91,84.5) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (166.72,90.46) .. controls (166.72,90.46) and (166.72,90.46) .. (166.72,90.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (142.02,101.54) .. controls (142.02,101.54) and (142.02,101.54) .. (142.02,101.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (169.28,108.35) .. controls (169.28,108.35) and (169.28,108.35) .. 
(169.28,108.35) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (145.43,124.54) .. controls (145.43,124.54) and (145.43,124.54) .. (145.43,124.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (325.02,37.65) .. controls (325.02,37.65) and (325.02,37.65) .. (325.02,37.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (291.8,29.98) .. controls (291.8,29.98) and (291.8,29.98) .. (291.8,29.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (290.1,44.46) .. controls (290.1,44.46) and (290.1,44.46) .. (290.1,44.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (323.32,52.98) .. controls (323.32,52.98) and (323.32,52.98) .. (323.32,52.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (288.4,58.09) .. controls (288.4,58.09) and (288.4,58.09) .. (288.4,58.09) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (319.91,67.46) .. controls (319.91,67.46) and (319.91,67.46) .. (319.91,67.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (287.54,74.28) .. controls (287.54,74.28) and (287.54,74.28) .. (287.54,74.28) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (316.51,83.65) .. controls (316.51,83.65) and (316.51,83.65) .. (316.51,83.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (286.69,90.46) .. controls (286.69,90.46) and (286.69,90.46) .. 
(286.69,90.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (311.4,101.54) .. controls (311.4,101.54) and (311.4,101.54) .. (311.4,101.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (284.14,108.35) .. controls (284.14,108.35) and (284.14,108.35) .. (284.14,108.35) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (308.84,124.54) .. controls (308.84,124.54) and (308.84,124.54) .. (308.84,124.54) ; \draw (174.54,25.04) -- (182.39,26.99) -- (176.12,32.11) ; \draw (139.61,31) -- (147.47,32.95) -- (141.19,38.07) ; \draw (174.54,39.52) -- (182.39,41.47) -- (176.12,46.59) ; \draw (139.61,46.34) -- (147.47,48.28) -- (141.19,53.4) ; \draw (174.54,53.15) -- (182.39,55.1) -- (176.12,60.22) ; \draw (143.02,59.97) -- (150.88,61.91) -- (144.6,67.03) ; \draw (174.54,68.48) -- (182.39,70.43) -- (176.12,75.55) ; \draw (145.15,76.99) -- (153.23,77.47) -- (147.99,83.64) ; \draw (175.39,84.67) -- (183.25,86.62) -- (176.97,91.74) ; \draw (175.13,102.16) -- (183.15,103.3) -- (177.43,109.03) ; \draw (147.65,94.15) -- (155.74,94.42) -- (150.67,100.73) ; \draw (151.68,114.6) -- (159.69,113.47) -- (155.79,120.56) ; \draw (268.07,24.18) -- (275.33,27.76) -- (268.12,31.42) ; \draw (305.31,29.62) -- (311.92,34.29) -- (304.22,36.78) ; \draw (269.45,37.35) -- (276.15,41.89) -- (268.5,44.52) ; \draw (267.75,50.98) -- (274.45,55.52) -- (266.8,58.15) ; \draw (266.9,66.31) -- (273.6,70.85) -- (265.94,73.49) ; \draw (269.45,82.49) -- (276.15,87.03) -- (268.5,89.67) ; \draw (268.29,99.17) -- (274.36,104.52) -- (266.43,106.17) ; \draw (304.9,43.81) -- (311,49.13) -- (303.08,50.82) ; \draw (301.52,57.42) -- (307.58,62.79) -- (299.65,64.42) ; \draw (299.1,73.53) -- (305,79.08) -- (297.02,80.47) ; \draw (297.86,90.35) -- (303.17,96.46) -- (295.1,97.04) ; \draw (294.79,109.82) -- (299.66,116.28) -- (291.56,116.3) ; 
\draw (225.93,56.74) node {$D$}; \draw (162.04,22.31) node [scale=0.7] {$e'_{m-2}$}; \draw (124.56,29.98) node [scale=0.7] {$e'_{m-5}$}; \draw (162.04,37.65) node [scale=0.7] {$e'_{m-3}$}; \draw (126.26,46.17) node [scale=0.7] {$e'_{m-6}$}; \draw (129.67,60.65) node [scale=0.7] {$e'_{m-7}$}; \draw (161.19,52.13) node [scale=0.7] {$e'_{m-4}$}; \draw (163.74,67.46) node [scale=0.7] {$e_{n-3}$}; \draw (133.93,76.83) node [scale=0.7] {$e_{n-6}$}; \draw (138.19,93.02) node [scale=0.7] {$e_{n-7}$}; \draw (164.59,83.65) node [scale=0.7] {$e_{n-4}$}; \draw (168,99.83) node [scale=0.7] {$e_{n-5}$}; \draw (139.89,115.17) node [scale=0.7] {$e_{n-8}$}; \draw (93.89,35.23) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (99.85,52.27) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (104.96,66.75) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (110.07,82.09) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (116.04,101.68) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (125.41,124.68) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (291.52,22.31) node [scale=0.7] {$e'_{7}$}; \draw (325.59,29.98) node [scale=0.7] {$e'_{10}$}; \draw (291.52,37.65) node [scale=0.7] {$e'_{6}$}; \draw (324.74,47.02) node [scale=0.7] {$e'_{9}$}; \draw (288.96,51.28) node [scale=0.7] {$e'_{5}$}; \draw (322.19,60.65) node [scale=0.7] {$e'_{8}$}; \draw (288.11,67.46) node [scale=0.7] {$e_{6}$}; \draw (317.93,77.69) node [scale=0.7] {$e_{9}$}; \draw (287.26,83.65) node [scale=0.7] {$e_{5}$}; \draw (312.81,94.72) node [scale=0.7] {$e_{8}$}; \draw (284.7,100.69) node [scale=0.7] {$e_{7}$}; \draw (311.96,116.87) node [scale=0.7] {$e_{10}$}; \draw (346.89,33.53) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (343.48,50.57) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (340.93,65.05) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (334.96,82.94) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (331.56,99.12) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (329,122.12) node 
[scale=0.9,rotate=-17.84] {$\dotsc $}; \end{tikzpicture} \caption{The directed graph $\Gamma$ associated to $f$.} \end{figure} \begin{figure}[!ht]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (87,329.55) -- (551.93,329.55) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (305.67,86.22) .. controls (305.67,84.49) and (307.07,83.08) .. (308.8,83.08) .. controls (310.53,83.08) and (311.93,84.49) .. (311.93,86.22) .. controls (311.93,87.95) and (310.53,89.35) .. (308.8,89.35) .. controls (307.07,89.35) and (305.67,87.95) .. (305.67,86.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (451.67,86.22) .. controls (451.67,84.49) and (453.07,83.08) .. (454.8,83.08) .. controls (456.53,83.08) and (457.93,84.49) .. (457.93,86.22) .. controls (457.93,87.95) and (456.53,89.35) .. (454.8,89.35) .. controls (453.07,89.35) and (451.67,87.95) .. (451.67,86.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (183.67,329.22) .. controls (183.67,327.49) and (185.07,326.08) .. (186.8,326.08) .. controls (188.53,326.08) and (189.93,327.49) .. (189.93,329.22) .. controls (189.93,330.95) and (188.53,332.35) .. (186.8,332.35) .. controls (185.07,332.35) and (183.67,330.95) .. (183.67,329.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (306.67,329.22) .. controls (306.67,327.49) and (308.07,326.08) .. (309.8,326.08) .. controls (311.53,326.08) and (312.93,327.49) .. (312.93,329.22) .. controls (312.93,330.95) and (311.53,332.35) .. (309.8,332.35) .. controls (308.07,332.35) and (306.67,330.95) .. (306.67,329.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (431.67,329.22) .. controls (431.67,327.49) and (433.07,326.08) .. (434.8,326.08) .. controls (436.53,326.08) and (437.93,327.49) .. (437.93,329.22) .. controls (437.93,330.95) and (436.53,332.35) .. 
(434.8,332.35) .. controls (433.07,332.35) and (431.67,330.95) .. (431.67,329.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (185.67,131.22) .. controls (185.67,129.49) and (187.07,128.08) .. (188.8,128.08) .. controls (190.53,128.08) and (191.93,129.49) .. (191.93,131.22) .. controls (191.93,132.95) and (190.53,134.35) .. (188.8,134.35) .. controls (187.07,134.35) and (185.67,132.95) .. (185.67,131.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (186.67,177.22) .. controls (186.67,175.49) and (188.07,174.08) .. (189.8,174.08) .. controls (191.53,174.08) and (192.93,175.49) .. (192.93,177.22) .. controls (192.93,178.95) and (191.53,180.35) .. (189.8,180.35) .. controls (188.07,180.35) and (186.67,178.95) .. (186.67,177.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (186.67,225.22) .. controls (186.67,223.49) and (188.07,222.08) .. (189.8,222.08) .. controls (191.53,222.08) and (192.93,223.49) .. (192.93,225.22) .. controls (192.93,226.95) and (191.53,228.35) .. (189.8,228.35) .. controls (188.07,228.35) and (186.67,226.95) .. (186.67,225.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (186.67,277.22) .. controls (186.67,275.49) and (188.07,274.08) .. (189.8,274.08) .. controls (191.53,274.08) and (192.93,275.49) .. (192.93,277.22) .. controls (192.93,278.95) and (191.53,280.35) .. (189.8,280.35) .. controls (188.07,280.35) and (186.67,278.95) .. (186.67,277.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (418.67,132.22) .. controls (418.67,130.49) and (420.07,129.08) .. (421.8,129.08) .. controls (423.53,129.08) and (424.93,130.49) .. (424.93,132.22) .. controls (424.93,133.95) and (423.53,135.35) .. (421.8,135.35) .. controls (420.07,135.35) and (418.67,133.95) .. (418.67,132.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (421.67,176.08) .. 
controls (421.67,174.35) and (423.07,172.95) .. (424.8,172.95) .. controls (426.53,172.95) and (427.93,174.35) .. (427.93,176.08) .. controls (427.93,177.81) and (426.53,179.22) .. (424.8,179.22) .. controls (423.07,179.22) and (421.67,177.81) .. (421.67,176.08) -- cycle ; \draw (189.8,225.22) -- (454.8,86.22) ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (422.67,225.22) .. controls (422.67,223.49) and (424.07,222.08) .. (425.8,222.08) .. controls (427.53,222.08) and (428.93,223.49) .. (428.93,225.22) .. controls (428.93,226.95) and (427.53,228.35) .. (425.8,228.35) .. controls (424.07,228.35) and (422.67,226.95) .. (422.67,225.22) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (423.67,276.22) .. controls (423.67,274.49) and (425.07,273.08) .. (426.8,273.08) .. controls (428.53,273.08) and (429.93,274.49) .. (429.93,276.22) .. controls (429.93,277.95) and (428.53,279.35) .. (426.8,279.35) .. controls (425.07,279.35) and (423.67,277.95) .. (423.67,276.22) -- cycle ; \draw (308.8,86.22) -- (434.8,329.22) ; \draw (268.57,45.98) .. controls (268.57,23.76) and (286.58,5.75) .. (308.8,5.75) .. controls (331.02,5.75) and (349.03,23.76) .. (349.03,45.98) .. controls (349.03,68.2) and (331.02,86.22) .. (308.8,86.22) .. controls (286.58,86.22) and (268.57,68.2) .. 
(268.57,45.98) -- cycle ; \draw (188.8,131.22) -- (308.8,86.22) ; \draw (308.8,86.22) -- (189.8,177.22) ; \draw (189.8,225.22) -- (308.8,86.22) ; \draw (308.8,86.22) -- (189.8,277.22) ; \draw (186.8,329.22) -- (308.8,86.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (308.8,86.22) -- (421.8,132.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (308.8,86.22) -- (424.8,176.08) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (308.8,86.22) -- (425.8,225.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=1.5] (426.8,276.22) -- (308.8,86.22) ; \draw (188.8,131.22) -- (309.8,329.22) ; \draw (189.8,177.22) -- (309.8,329.22) ; \draw (189.8,225.22) -- (309.8,329.22) ; \draw (189.8,277.22) -- (309.8,329.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (188.8,131.22) -- (424.8,176.08) ; \draw (188.8,131.22) -- (425.8,225.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (188.8,131.22) -- (426.8,276.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,225.22) -- (421.8,132.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,225.22) -- (424.8,176.08) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,225.22) -- (308.46,250.75) -- (426.8,276.22) ; \draw (189.8,277.22) -- (421.8,132.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,277.22) -- (424.8,176.08) ; \draw (186.8,329.22) -- (426.8,276.22) ; \draw (260.59,56.87) -- (268.59,40.33) -- (277.02,56.66) ; \draw (141.33,79.75) -- (156,87.08) -- (141.33,94.42) ; \draw (123.33,123.75) -- (138,131.08) -- (123.33,138.42) ; \draw (121.33,168.75) -- (136,176.08) -- (121.33,183.42) ; \draw 
(123.33,217.75) -- (138,225.08) -- (123.33,232.42) ; \draw (120.33,269.75) -- (135,277.08) -- (120.33,284.42) ; \draw (119.33,321.75) -- (134,329.08) -- (119.33,336.42) ; \draw (490.33,322.75) -- (505,330.08) -- (490.33,337.42) ; \draw (493.33,269.75) -- (508,277.08) -- (493.33,284.42) ; \draw (492.33,218.75) -- (507,226.08) -- (492.33,233.42) ; \draw (494.33,168.75) -- (509,176.08) -- (494.33,183.42) ; \draw (493.33,122.75) -- (508,130.08) -- (493.33,137.42) ; \draw (495.33,79.75) -- (510,87.08) -- (495.33,94.42) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (308.8,86.22) -- (454.8,86.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (188.8,131.22) -- (421.8,132.22) ; \draw (551.93,130.55) -- (421.8,132.22) ; \draw (87,130.55) -- (188.8,131.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,177.22) -- (424.8,176.08) ; \draw (87,176.55) -- (189.8,177.22) ; \draw (551.93,176.55) -- (424.8,176.08) ; \draw (87,86.42) -- (308.8,86.22) ; \draw (454.8,86.22) -- (551.93,86.42) ; \draw (87,225.55) -- (189.8,225.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (189.8,225.22) -- (425.8,225.22) ; \draw (425.8,225.22) -- (551.93,225.55) ; \draw (87,276.55) -- (189.8,277.22) ; \draw [color={rgb, 255:red, 255; green, 0; blue, 0 } ,draw opacity=1 ][line width=2.25] (426.8,276.22) -- (189.8,277.22) ; \draw (426.8,276.22) -- (551.93,276.55) ; \draw (189.8,177.22) -- (426.8,276.22) ; \draw (312,73.42) node {$e'_{1}$}; \draw (458,72.42) node {$e'_{4}$}; \draw (172,117.42) node {$e'_{m}$}; \draw (434,119.42) node {$e'_{3}$}; \draw (437,166.42) node {$e'_{2}$}; \draw (172,165.42) node {$e'_{m-1}$}; \draw (173,212.42) node {$e_{n}$}; \draw (171,265.42) node {$e_{n-1}$}; \draw (170,316.42) node {$e_{n-2}$}; \draw (439,214.42) node {$e_{3}$}; \draw (439,265.42) node {$e_{2}$}; \draw (448,316.42) node 
{$e_{4}$}; \draw (311,338.42) node {$e_{1}$};
\end{tikzpicture}
\caption{$D$: thick edges denote two directed edges between the corresponding vertices}
\end{figure}
Assuming $n=m$, we get the directed graph $\Gamma$ associated to $f$ (or $g$) and $T$ (with only the contributing edges), as shown in Figure 8. The graph is made of $6$ big ``loops'' going clockwise, together with a subgraph $D$. The subgraph $D$ is given by the relations determined by $g$ above, as shown in Figure 9, and contains one loop, at $e'_1$. For simplicity, the graph of $D$ in Figure 9 omits most of the arrows; all edges with omitted arrows implicitly point from left to right. A thick edge indicates two directed edges connecting the corresponding vertices. Thus, a path of a given length passing through $D$ once will either
\begin{itemize}
\item go directly from left to right, with length 1;
\item go from the left to $e'_1$, wrap around the loop at $e'_1$ some number of times, then exit to the right; or
\item pass through $e_1$ and go to $e_4$.
\end{itemize}
Given two vertices, the number of paths of length $\frac{n}{13}$ between them which pass through $D$ is therefore at most $2$. Now we let $\Sigma_0$ surround $V_{\lfloor \frac{n}{2} \rfloor-1}, V_{\lfloor \frac{n}{2} \rfloor},V_{\lfloor \frac{n}{2} \rfloor+1}$, fix $h_k$, and consider a graph map $g_k\simeq h_kf$ and its matrix $T_k$. Note that $h_k$ is supported in a neighborhood of $\Sigma_0$. Let $a_j, a_{j+1}, a_{j+2}$ denote the three loops wrapping around the three punctures in $\Sigma_0$. If we remove all the non-contributing edges, then after a homotopy, $h_k$ sends $e_j,e_{j+1}, e_{j+2}$ to a combination of $e_j,e_{j+1}, e_{j+2}$ without acting on the other edges. Thus $g_k\simeq h_kf$ sends $e_{j-3},e_{j-2}, e_{j-1}$ to a combination of $e_j,e_{j+1}, e_{j+2}$ and acts on the remaining edges as $g\simeq f$ does. Then we get the directed graph $\Gamma_k$ associated to $T_k$ and $g_k$, as shown in Figure 10.
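The eigenvalue estimate below rests on a standard path-counting fact: for a nonnegative integer matrix $T$, the $(i,j)$ entry of $T^{\ell}$ counts directed paths of length $\ell$ from vertex $i$ to vertex $j$, so a uniform bound on the number of such paths yields $\lambda_0 \leq \big(\max_i \sum_j (T^{\ell})_{ij}\big)^{1/\ell}$ for the leading eigenvalue. A minimal numerical sketch, using a hypothetical $3\times 3$ toy matrix (not the actual transition matrix $T_k$):

```python
# Path-counting bound on the spectral radius: for a nonnegative integer
# matrix T, the (i, j) entry of T**l counts directed paths of length l
# from vertex i to vertex j, so the leading eigenvalue lambda_0 satisfies
#     lambda_0 <= (max_i sum_j (T**l)_{ij}) ** (1 / l).
# The matrix below is a hypothetical toy example, NOT the transition
# matrix T_k from the proof.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(T, l):
    """Compute T**l by repeated multiplication; entries count paths."""
    n = len(T)
    R = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(l):
        R = mat_mul(R, T)
    return R

def path_count_bound(T, l):
    """Bound (max row sum of T**l)**(1/l) on the spectral radius."""
    P = mat_pow(T, l)
    return max(sum(row) for row in P) ** (1.0 / l)

# Toy directed graph on 3 vertices: a 3-cycle with one doubled edge,
# whose characteristic polynomial is x**3 - 2, so the spectral radius
# is 2**(1/3).
T = [[0, 2, 0],
     [0, 0, 1],
     [1, 0, 0]]
```

Here $T^{3}=2I$, so the bound is attained exactly at $\ell=3$, while at $\ell=1$ it only gives the cruder estimate $\lambda_0 \leq 2$; in the proof the same mechanism is applied with $\ell=\frac{n}{13}$ and the path count $2nN_k$.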
The graph $\Gamma_k$ is the same as $\Gamma$ away from $e_{j-3},e_{j-2}, e_{j-1},e_j,e_{j+1}, e_{j+2}$, and has a subgraph $D_k$ given by $h_k$. The subgraph $D_k$ is a bipartite graph with $3$ vertices in each set, $\{e_j,e_{j+1}, e_{j+2}\}$ and $\{e_{j-3},e_{j-2}, e_{j-1}\}$. All edges of $D_k$ point from right to left, from $\{e_{j-3},e_{j-2}, e_{j-1}\}$ to $\{e_j,e_{j+1}, e_{j+2}\}$. The number of edges between any two vertices in different sets is bounded above by some $E_k>0$ depending on $h_k$; see Figure 11. When $n=m$ is big enough, no path of length $\frac{n}{13}$ can pass through both $D$ and $D_k$. Thus, given any two vertices, the number of paths of length $\frac{n}{13}$ between them is bounded above by $N_k=\max\{2, E_k\}$. The number of paths of length $\frac{n}{13}$ emanating from a given vertex is therefore at most $2nN_k$. Then for $\lambda_0$, the leading eigenvalue of $T_k$, Proposition \ref{pf} gives
\[\log \lambda_0 \leq \frac{\log (2nN_k)}{\frac{n}{13}}.\]
When $n>N_k$ is large enough, we have
\[ \log \lambda_0 \leq \frac{\log (2nN_k)}{\frac{n}{13}} < \frac{2\log (2n+2)}{\frac{2n}{26}} < \frac{2\log (2n+2)}{\frac{2n+2}{27}}=54\,\frac{\log (2n+2)}{2n+2}. \]
The result follows since $\lambda(h_kf)\leq \lambda_0$.
\end{proof}
\begin{figure}[H]\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (7,151.45) .. controls (7,82.01) and (105.56,25.73) .. (227.13,25.73) .. controls (348.71,25.73) and (447.27,82.01) .. (447.27,151.45) .. controls (447.27,220.88) and (348.71,277.17) .. (227.13,277.17) .. controls (105.56,277.17) and (7,220.88) .. (7,151.45) -- cycle ; \draw (31.29,151.45) .. controls (31.29,89.68) and (118.97,39.6) .. (227.13,39.6) .. controls (335.3,39.6) and (422.98,89.68) .. (422.98,151.45) .. controls (422.98,213.22) and (335.3,263.3) .. (227.13,263.3) .. controls (118.97,263.3) and (31.29,213.22) .. (31.29,151.45) -- cycle ; \draw (54.74,151.45) ..
controls (54.74,97.07) and (131.92,52.99) .. (227.13,52.99) .. controls (322.34,52.99) and (399.53,97.07) .. (399.53,151.45) .. controls (399.53,205.82) and (322.34,249.9) .. (227.13,249.9) .. controls (131.92,249.9) and (54.74,205.82) .. (54.74,151.45) -- cycle ; \draw (80.28,151.45) .. controls (80.28,105.13) and (146.03,67.58) .. (227.13,67.58) .. controls (308.23,67.58) and (373.98,105.13) .. (373.98,151.45) .. controls (373.98,197.77) and (308.23,235.31) .. (227.13,235.31) .. controls (146.03,235.31) and (80.28,197.77) .. (80.28,151.45) -- cycle ; \draw (106.42,151.45) .. controls (106.42,113.37) and (160.47,82.51) .. (227.13,82.51) .. controls (293.8,82.51) and (347.85,113.37) .. (347.85,151.45) .. controls (347.85,189.52) and (293.8,220.39) .. (227.13,220.39) .. controls (160.47,220.39) and (106.42,189.52) .. (106.42,151.45) -- cycle ; \draw (133.69,151.45) .. controls (133.69,121.98) and (175.53,98.08) .. (227.13,98.08) .. controls (278.74,98.08) and (320.57,121.98) .. (320.57,151.45) .. controls (320.57,180.92) and (278.74,204.81) .. (227.13,204.81) .. controls (175.53,204.81) and (133.69,180.92) .. (133.69,151.45) -- cycle ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (194.83,8.83) -- (255.6,8.83) -- (255.6,115.17) -- (194.83,115.17) -- cycle ; \draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (195.61,194.82) -- (255.31,194.82) -- (255.31,241.25) -- (195.61,241.25) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (128.39,38.65) .. controls (128.39,38.65) and (128.39,38.65) .. (128.39,38.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (161.61,30.98) .. controls (161.61,30.98) and (161.61,30.98) .. (161.61,30.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (163.31,45.46) .. 
controls (163.31,45.46) and (163.31,45.46) .. (163.31,45.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (130.09,53.98) .. controls (130.09,53.98) and (130.09,53.98) .. (130.09,53.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (165.02,59.09) .. controls (165.02,59.09) and (165.02,59.09) .. (165.02,59.09) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (133.5,68.46) .. controls (133.5,68.46) and (133.5,68.46) .. (133.5,68.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (165.87,75.28) .. controls (165.87,75.28) and (165.87,75.28) .. (165.87,75.28) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (136.91,85.5) .. controls (136.91,85.5) and (136.91,85.5) .. (136.91,85.5) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (166.72,91.46) .. controls (166.72,91.46) and (166.72,91.46) .. (166.72,91.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (142.02,102.54) .. controls (142.02,102.54) and (142.02,102.54) .. (142.02,102.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (169.28,109.35) .. controls (169.28,109.35) and (169.28,109.35) .. (169.28,109.35) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (145.43,125.54) .. controls (145.43,125.54) and (145.43,125.54) .. (145.43,125.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (325.02,38.65) .. controls (325.02,38.65) and (325.02,38.65) .. 
(325.02,38.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (291.8,30.98) .. controls (291.8,30.98) and (291.8,30.98) .. (291.8,30.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (290.1,45.46) .. controls (290.1,45.46) and (290.1,45.46) .. (290.1,45.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (323.32,53.98) .. controls (323.32,53.98) and (323.32,53.98) .. (323.32,53.98) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (288.4,59.09) .. controls (288.4,59.09) and (288.4,59.09) .. (288.4,59.09) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (319.91,68.46) .. controls (319.91,68.46) and (319.91,68.46) .. (319.91,68.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (287.54,75.28) .. controls (287.54,75.28) and (287.54,75.28) .. (287.54,75.28) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (316.51,84.65) .. controls (316.51,84.65) and (316.51,84.65) .. (316.51,84.65) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (286.69,91.46) .. controls (286.69,91.46) and (286.69,91.46) .. (286.69,91.46) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (311.4,102.54) .. controls (311.4,102.54) and (311.4,102.54) .. (311.4,102.54) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (284.14,109.35) .. controls (284.14,109.35) and (284.14,109.35) .. 
(284.14,109.35) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=2.25] [line join = round][line cap = round] (308.84,125.54) .. controls (308.84,125.54) and (308.84,125.54) .. (308.84,125.54) ; \draw (174.54,26.04) -- (182.39,27.99) -- (176.12,33.11) ; \draw (139.61,32) -- (147.47,33.95) -- (141.19,39.07) ; \draw (174.54,40.52) -- (182.39,42.47) -- (176.12,47.59) ; \draw (139.61,47.34) -- (147.47,49.28) -- (141.19,54.4) ; \draw (174.54,54.15) -- (182.39,56.1) -- (176.12,61.22) ; \draw (143.02,60.97) -- (150.88,62.91) -- (144.6,68.03) ; \draw (174.54,69.48) -- (182.39,71.43) -- (176.12,76.55) ; \draw (145.15,77.99) -- (153.23,78.47) -- (147.99,84.64) ; \draw (175.39,85.67) -- (183.25,87.62) -- (176.97,92.74) ; \draw (175.13,103.16) -- (183.15,104.3) -- (177.43,110.03) ; \draw (147.65,95.15) -- (155.74,95.42) -- (150.67,101.73) ; \draw (151.68,115.6) -- (159.69,114.47) -- (155.79,121.56) ; \draw (268.07,25.18) -- (275.33,28.76) -- (268.12,32.42) ; \draw (305.31,30.62) -- (311.92,35.29) -- (304.22,37.78) ; \draw (269.45,38.35) -- (276.15,42.89) -- (268.5,45.52) ; \draw (267.75,51.98) -- (274.45,56.52) -- (266.8,59.15) ; \draw (266.9,67.31) -- (273.6,71.85) -- (265.94,74.49) ; \draw (269.45,83.49) -- (276.15,88.03) -- (268.5,90.67) ; \draw (268.29,100.17) -- (274.36,105.52) -- (266.43,107.17) ; \draw (304.9,44.81) -- (311,50.13) -- (303.08,51.82) ; \draw (301.52,58.42) -- (307.58,63.79) -- (299.65,65.42) ; \draw (299.1,74.53) -- (305,80.08) -- (297.02,81.47) ; \draw (297.86,91.35) -- (303.17,97.46) -- (295.1,98.04) ; \draw (294.79,110.82) -- (299.66,117.28) -- (291.56,117.3) ; \draw (225.93,57.74) node {$D$}; \draw (227.13,218.15) node {$D_{k}$}; \draw (162.04,23.31) node [scale=0.7] {$e'_{m-2}$}; \draw (124.56,30.98) node [scale=0.7] {$e'_{m-5}$}; \draw (162.04,38.65) node [scale=0.7] {$e'_{m-3}$}; \draw (126.26,47.17) node [scale=0.7] {$e'_{m-6}$}; \draw (129.67,61.65) node [scale=0.7] {$e'_{m-7}$}; \draw (161.19,53.13) node [scale=0.7] {$e'_{m-4}$}; 
\draw (163.74,68.46) node [scale=0.7] {$e_{n-3}$}; \draw (133.93,77.83) node [scale=0.7] {$e_{n-6}$}; \draw (138.19,94.02) node [scale=0.7] {$e_{n-7}$}; \draw (164.59,84.65) node [scale=0.7] {$e_{n-4}$}; \draw (168,100.83) node [scale=0.7] {$e_{n-5}$}; \draw (139.89,116.17) node [scale=0.7] {$e_{n-8}$}; \draw (93.89,36.23) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (99.85,53.27) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (104.96,67.75) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (110.07,83.09) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (116.04,102.68) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (125.41,125.68) node [scale=0.9,rotate=-335.11] {$\dotsc $}; \draw (291.52,23.31) node [scale=0.7] {$e'_{7}$}; \draw (325.59,30.98) node [scale=0.7] {$e'_{10}$}; \draw (291.52,38.65) node [scale=0.7] {$e'_{6}$}; \draw (324.74,48.02) node [scale=0.7] {$e'_{9}$}; \draw (288.96,52.28) node [scale=0.7] {$e'_{5}$}; \draw (322.19,61.65) node [scale=0.7] {$e'_{8}$}; \draw (288.11,68.46) node [scale=0.7] {$e_{6}$}; \draw (317.93,78.69) node [scale=0.7] {$e_{9}$}; \draw (287.26,84.65) node [scale=0.7] {$e_{5}$}; \draw (312.81,95.72) node [scale=0.7] {$e_{8}$}; \draw (284.7,101.69) node [scale=0.7] {$e_{7}$}; \draw (311.96,117.87) node [scale=0.7] {$e_{10}$}; \draw (346.89,34.53) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (343.48,51.57) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (340.93,66.05) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (334.96,83.94) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (331.56,100.12) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \draw (329,123.12) node [scale=0.9,rotate=-17.84] {$\dotsc $}; \end{tikzpicture} \caption{The directed graph $\Gamma_k$ associated to $h_kf$.} \end{figure} \begin{figure}[H]\centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (104.83,239.98) -- (520.93,239.98) ; \draw [fill={rgb, 255:red, 0; green, 0; 
blue, 0 } ,fill opacity=1 ] (191.35,239.68) .. controls (191.35,238.13) and (192.6,236.87) .. (194.15,236.87) .. controls (195.7,236.87) and (196.96,238.13) .. (196.96,239.68) .. controls (196.96,241.23) and (195.7,242.48) .. (194.15,242.48) .. controls (192.6,242.48) and (191.35,241.23) .. (191.35,239.68) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (413.3,239.68) .. controls (413.3,238.13) and (414.55,236.87) .. (416.1,236.87) .. controls (417.65,236.87) and (418.91,238.13) .. (418.91,239.68) .. controls (418.91,241.23) and (417.65,242.48) .. (416.1,242.48) .. controls (414.55,242.48) and (413.3,241.23) .. (413.3,239.68) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (194.03,146.6) .. controls (194.03,145.05) and (195.29,143.8) .. (196.84,143.8) .. controls (198.38,143.8) and (199.64,145.05) .. (199.64,146.6) .. controls (199.64,148.15) and (198.38,149.41) .. (196.84,149.41) .. controls (195.29,149.41) and (194.03,148.15) .. (194.03,146.6) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (194.03,193.14) .. controls (194.03,191.59) and (195.29,190.34) .. (196.84,190.34) .. controls (198.38,190.34) and (199.64,191.59) .. (199.64,193.14) .. controls (199.64,194.69) and (198.38,195.94) .. (196.84,195.94) .. controls (195.29,195.94) and (194.03,194.69) .. (194.03,193.14) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (405.24,146.6) .. controls (405.24,145.05) and (406.5,143.8) .. (408.05,143.8) .. controls (409.6,143.8) and (410.85,145.05) .. (410.85,146.6) .. controls (410.85,148.15) and (409.6,149.41) .. (408.05,149.41) .. controls (406.5,149.41) and (405.24,148.15) .. (405.24,146.6) -- cycle ; \draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (406.14,192.25) .. controls (406.14,190.7) and (407.39,189.44) .. (408.94,189.44) .. controls (410.49,189.44) and (411.75,190.7) .. (411.75,192.25) .. 
controls (411.75,193.79) and (410.49,195.05) .. (408.94,195.05) .. controls (407.39,195.05) and (406.14,193.79) .. (406.14,192.25) -- cycle ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (196.84,146.6) -- (408.94,192.25) ; \draw (194.15,239.68) -- (408.94,192.25) ; \draw (150.75,152.76) -- (137.36,146.76) -- (150.19,139.65) ; \draw (147.9,199.48) -- (134.67,193.13) -- (147.68,186.35) ; \draw (146.95,246.07) -- (133.77,239.62) -- (146.84,232.94) ; \draw (478.78,247.17) -- (465.81,240.3) -- (479.08,234.04) ; \draw (479.88,198.63) -- (466.7,192.19) -- (479.76,185.5) ; \draw (480.67,153.09) -- (467.59,146.43) -- (480.77,139.97) ; \draw (104.83,146.9) -- (196.84,146.6) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (196.84,146.6) -- (408.05,146.6) ; \draw (408.05,146.6) -- (520.93,146.9) ; \draw (104.83,192.54) -- (196.84,193.14) ; \draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ] (408.94,192.25) -- (196.84,193.14) ; \draw (408.94,192.25) -- (520.93,192.54) ; \draw (408.05,146.6) -- (196.84,193.14) ; \draw (408.05,146.6) -- (194.15,239.68) ; \draw (416.1,239.68) -- (196.84,193.14) ; \draw (196.84,146.6) -- (416.1,239.68) ; \draw (300.76,152.43) -- (289.57,146.66) -- (300.9,141.17) ; \draw (270.33,198.97) -- (259.14,193.2) -- (270.47,187.7) ; \draw (304.34,245.5) -- (293.15,239.74) -- (304.48,234.24) ; \draw (273.47,168.69) -- (263.76,160.68) -- (276,157.72) ; \draw (287.57,191.97) -- (280.42,181.6) -- (293,182.1) ; \draw (337.13,167.49) -- (324.72,165.37) -- (333.86,156.71) ; \draw (345.76,230.57) -- (336.29,222.27) -- (348.61,219.68) ; \draw (265.93,230.05) -- (253.86,226.47) -- (263.96,218.96) ; \draw (362.43,174.15) -- (349.9,172.9) -- (358.42,163.62) ; \draw (181.8,135.15) node {$e_{j+2}$}; \draw (180.01,182.58) node {$e_{j+1}$}; \draw (179.12,228.22) node {$e_{j}$}; \draw (419.86,136.94) node {$e_{j-1}$}; \draw (419.86,182.58) node {$e_{j-2}$}; \draw (427.92,228.22) node {$e_{j-3}$}; 
\end{tikzpicture} \caption{$D_k$: each directed edge shown represents $\leq E_k$ directed edges.} \end{figure} We can now complete the proof of Theorem \ref{main}: part (1) is given by Lemma 2, part (2) by Proposition 3, and part (3) by Lemma 4. \bibliographystyle{amsalpha} \bibliography{1} \end{document}
TITLE: Tree Labelling Conjecture QUESTION [0 upvotes]: Strong Tree Labelling Conjecture. Vertex Properties: For any vertex, there exists a label n. For any label n, there exists an integer i such that i runs from one to the count of the improper subtree's vertices, V. The label n is constructed while the graph is constructed, by a current count which is incremented AFTER the vertex is labeled. Vertex Example: The circle has 1 inside. Edge Properties: For any edge e, there exists a two-tuple (i, j) such that i equals the vertex's label towards the improper subtree's root and j the label towards the improper subtree's leaves. Remark: Any edge e may or may not have a weight w such that w is a real number. Edge Example: The integer 1 is inside the top circle. The top circle is attached to an edge, a line segment, labeled 1 towards the top circle and 2 towards the bottom circle. There is a bottom circle with 2 inside. Required Fully Connected Subtree Properties: The domain of the function si is a subtree's root vertex's label s and the label v of a vertex attached to the subtree. The co-domain of si is an integer i running from one to the count of the subtree's vertices. The function si, modulo the count of vertices in the subtree, is defined to be the subtree's root vertex's label plus the vertex's label, modulo the count of vertices in the subtree. The function sj has the same domain, has a co-domain of natural numbers from one to the count of the subtree's vertices, and is defined the same way modulo the count of vertices. For any subtree s, the vertices are constructed by the function si, and the edges by the two-tuple (si(., .), sj(., .)). The constructed subtree meets the vertex properties and the edge properties.
Conjecture (Not Sure How To Prove) For Simple Trees. Claim: For any improper subset of a simple tree t, t can be labelled to meet the required connected subtree properties. Also, t meets the vertex properties and edge properties. I am not sure how to prove this or what methods to use, so I have tried a proof for a binary search tree. Lemma: There Exists A Binary Search Tree With The Labelling. Proof By Cases. Case: Leaf Vertex Without Children. Leaf Vertex Property: Consider the definition of the function si. Since (i + 0) (mod i) = 0, the vertex property is met. Edge Properties: Since there is no edge, the property is vacuously true. Case: Left Vertex Without Right Vertex. Leaf Vertex Property: Consider the definition of the function si. Since (i + 0) (mod i) = 0, the vertex property is met. Left Child Vertex Property: Consider the definition of the function si. Since (i + 1) (mod i) = 1, the vertex property is met. Edge Properties: For the edge e, let i = 0 and j = 0. Case: Right Vertex Without Left Vertex. Leaf Vertex Property: Consider the definition of the function si. Since (i + 0) (mod i) = 0, the vertex property is met. Right Child Vertex Property: Consider the definition of the function si. Since (i + 1) (mod i) = 1, the vertex property is met. Edge Properties: For the edge e, let i = 0 and j = 0. Case: Left Vertex And Right Vertex. Leaf Vertex Property: Consider the definition of the function si. Since (i + 0) (mod i) = 0, the vertex property is met. Left Vertex Property: Consider the definition of the function si. Since the vertex is constructed first, si for the left vertex becomes (i + 0) (mod i), which equals zero. Right Child Vertex Property: Consider the definition of the function si. Since the vertex is constructed last, si becomes (i + 2) (mod i), which equals two. Edge Properties: For the edge between the left vertex and the improper subtree's root, let i = 0 and j = 1.
For the edge between the right vertex and the improper subtree's root, let i = 0 and j = 1. REPLY [0 votes]: Solution For a Complete Tree. The tree is constructed one vertex at a time in an in-order traversal. The count of vertices added is incremented AFTER the vertex is constructed. Case: Root of Improper Subtree. Vertex Property of Root: The label is constructed to be zero. Consider the definition of the function si. The function si (mod 1) becomes (0 + 0) mod 1, which is zero. Edge Property of Root: Let i = 0 in the edges. The j value is constructed after the vertex attached to the root is constructed. Case: Leaf of Improper Subtree. Vertex Property: If the tree is constructed in an in-order ordering, then the leaf's label is exactly one more than the leaf's parent's label. Since the vertices are constructed one at a time, this label is unique and meets the strong label requirements for the tree. For the subtree, consider the definition of the function si. The function si mod the count of vertices becomes (count of vertices + 0) mod count of vertices, which equals zero. Since a subtree of a single vertex meets the interval for the labels, this meets the vertex properties. Edge Property: Let i = the parent's i and j = the parent vertex's i plus one. Case: Neither Root Nor Leaf of Subtree. Vertex Property: The vertex's parent has a degree d and a label i. Consider the definition of the function si. The function si becomes (i + d) (mod count of vertices). Since there are child vertices, i + d is less than the count of vertices; in fact, before the vertex's first child is added, i + d is already less than the count of vertices. This implies the i + d portion of the function never exceeds the count of vertices, so it simplifies to i + d. Since the vertex's degree is incremented, the formula produces a unique and total ordering of the child vertices. Edge Properties: Let i be the parent's label and j the child's label.
Since the edge is constructed after the vertex, this meets the edge properties.
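The in-order construction described in the reply can be sketched in code. This is only an illustrative sketch: the `Node` class, the three-vertex example, and the zero-based labels are my own assumptions for demonstration, not part of the original question.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = -1  # -1 means "not yet labelled"

def label_inorder(root: Optional[Node]) -> int:
    """Label vertices during an in-order traversal; the running count
    is incremented AFTER each vertex is labelled, as described above."""
    count = 0
    def visit(node: Optional[Node]) -> None:
        nonlocal count
        if node is None:
            return
        visit(node.left)
        node.label = count   # label the vertex first ...
        count += 1           # ... then increment the count
        visit(node.right)
    visit(root)
    return count  # total number of labelled vertices

# A three-vertex tree: root with a left and a right child.
root = Node(left=Node(), right=Node())
total = label_inorder(root)
assert total == 3
# In-order labelling gives left child 0, root 1, right child 2,
# so each edge connects labels differing by exactly one.
assert (root.left.label, root.label, root.right.label) == (0, 1, 2)
```

Labels assigned this way are unique and form a contiguous range, which is the property the "strong label requirements" above rely on.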
TITLE: How to do optimization on an expression which includes reciprocals QUESTION [1 upvotes]: The conditions are: $w$ is a known value, and $x_{11} >0, x_{12}>0, ..., x_{nn}>0;$ \begin{equation} x_{11} \leq w \end{equation} \begin{equation}x_{12} + x_{22} \leq w\end{equation} \begin{equation}x_{13} + x_{23} + x_{33} \leq w\end{equation} \begin{equation} ...\end{equation} \begin{equation}x_{1n} + x_{2n} +... + x_{nn} \leq w\end{equation} How can I choose $x_{ij}$ to minimize $R$ while keeping these equations true? \begin{equation} \frac{1}{x_{11}} + \frac{1}{x_{12}} + \frac{1}{x_{13}} + ... \frac{1}{x_{1n}} = R;\end{equation} \begin{equation} \frac{1}{x_{22}} + \frac{1}{x_{23}} + ... \frac{1}{x_{2n}} = R;\end{equation} \begin{equation}...\end{equation} \begin{equation} \frac{1}{x_{nn}} = R;\end{equation} The question comes from a wire-width problem: a rectangle of width $w$ contains $n$ wires, of lengths 1, 2, 3, ..., $n$. I want them all to have the same resistance, and that resistance to be minimal. Each wire's width can change at each step, so I need to choose every segment's width ($x_{11}, x_{12}, \dots$). I think a numerical computation might solve the problem, but I have no idea how to set it up. Thanks REPLY [0 votes]: Let's look at the longest constraint together with its equality: $$\begin{align}x_{1n}+x_{2n}+\cdots+x_{nn}\le w\\ \frac{1}{x_{1n}}+\frac{1}{x_{2n}}+\cdots+\frac{1}{x_{nn}}=R\end{align}$$ where $x_{ij}>0$ for all $i,j$. The objective is to minimize $R$. Taking the partial derivative w.r.t. $x_{in}$, we have $$\frac{\partial R}{\partial x_{in}}=-x_{in}^{-2}.$$ This cannot equal 0, so there is no finite stationary point, and the minimum is achieved at the maximum possible value of $x_{in}$. In other words, the constraint will be active: $$x_{1n}+x_{2n}+\cdots+x_{nn}=w.$$ Assume we have some $x_{in}<x_{jn}$ in that equality.
Then $$-x_{in}^{-2}>-x_{jn}^{-2},$$ which means that increasing $x_{in}$ lowers $R$ faster per unit than decreasing $x_{jn}$ raises it, so the lowest value of $R$ is achieved when $x_{1n}=x_{2n}=\cdots=x_{nn}$. Thus $$x_{in}=\frac{w}{n}$$ and $$R=\frac{n^2}{w}.$$ You can argue similarly that $x_{ij}=x_{kj}$ for all $j$; hence all your $x$ values are fixed once you know $R$: $$x_{ij}=\frac{wj}{n^2}.$$
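As a sanity check on the closed form, the sketch below verifies with exact rational arithmetic that $x_{ij} = wj/n^2$ satisfies every width constraint (with equality at the longest step $j=n$) and that the sum of reciprocals over each constraint group equals $R = n^2/w$, which is how the derivation above pairs the terms. The function name and the test values of `n` and `w` are my own arbitrary choices.

```python
from fractions import Fraction

def candidate(n, w):
    """The answer's closed form: x_{ij} = w*j/n^2 with R = n^2/w."""
    x = {(i, j): Fraction(w * j, n * n)
         for j in range(1, n + 1) for i in range(1, j + 1)}
    return x, Fraction(n * n, w)

n, w = 4, 10
x, R = candidate(n, w)

for j in range(1, n + 1):
    group = [x[(i, j)] for i in range(1, j + 1)]
    # width constraint of step j: x_{1j} + ... + x_{jj} <= w
    assert sum(group) <= w
    # the reciprocal sum over the same group equals R exactly:
    # j terms, each equal to n^2/(w*j), summing to n^2/w
    assert sum(1 / v for v in group) == R

# the longest constraint is active, as the derivation requires
assert sum(x[(i, n)] for i in range(1, n + 1)) == w
```

Using `Fraction` avoids floating-point round-off, so the equalities can be checked exactly rather than up to a tolerance.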
Documents on website The second draft of the Moving Forward Framework has now been placed on the Moving Forward website and wiki. The latest update of the framework contains two parts: part one, the strategies of the framework, and part two, the Examples of Practice document, which will be a resource for staff and students. This document will continue to develop throughout the duration of the Moving Forward initiative. These documents can be viewed by accessing the following links: Website – Framework or Wiki – Framework. The staff consultation documents are now also published in the Project and GCU document section of the Moving Forward website and are available to view on campus only. These documents can be accessed via the following link: Website – Staff Consultation Documents
4 training courses that create internet millionaires If you want to fast-track your success at launching and growing an online business, it's best you learn from the experts. Making money online is hard. You can hack and slash away and try to figure it out on your own, or you can start making money sooner by learning the right way to make it happen. I sure had my ups and downs when I first started trying to make money online. I've tried so many things in my ten years of online business that I know how overwhelming it can be. I think the biggest challenge for most online entrepreneurs is focus. There are so many options when it comes to earning income online that it makes it hard to know what to do. I think this is why 90% of people that try to make money online fail miserably. Isaac Newton understood the power of learning from those that have come before you. His famous quote sums up my thoughts on learning from others. That's why, if you want to fast-track your success online, it makes sense to learn from the experts that have come before you. The courses I'm highlighting below are proven training programs where hundreds if not thousands of people have taken the course and actually started earning a good and even great income online by putting what they learned into practice. Let's dive in! 1) JVZoo Academy – A complete guide to success with JVZoo JVZoo is one of the leading online affiliate marketing networks out there. Multi-millionaires like Tai Lopez, Amy Porterfield, and many others all use JVZoo. JVZoo recently launched the JVZoo Academy course, led by founder Sam Bakker. This course covers both sides of the transaction. You can learn how to sell your products on JVZoo and get other affiliates to sell for you. You can also learn how to become an affiliate marketer and sell other people's products for huge commissions.
This is the first-ever JVZoo-endorsed program, and all the information you will be following and replicating comes directly from Sam Bakker, one of the biggest sellers and affiliates on the platform for the last 2 years. He knows his stuff and has the income to prove it. He's been doing this since 1999 and has made millions of dollars since. The training is broken down into 4 progressive steps that show you how to succeed. Here they are:
- Phase #1: The 'Mastery' System
- Phase #2: Building Your Growth Platform Mastery
- Phase #3: Selling Formula Mastery
- Phase #4: Evergreen Commissions Mastery
What others are saying about the JVZoo Academy: "I had NO idea about online business or marketing. I was working as a teaching assistant when Sam started teaching me the ropes. Fast forward to today, I have numerous top selling products on JVZoo" – Josh Ratta, JVZoo Academy Member "In 3 years I've generated over $5,000,000.00 launching my products on JVZoo." – Luke Maguire, JVZoo Academy Member "This information has made a huge difference and added long term value to my business on JVZoo. It increased my lead generation and sales to profits." – Julius Karan, JVZoo Academy Member "I've built a $400,000 business on JVZoo working part time alongside my day job" – Lee Pennington, JVZoo Academy Member Learn more about the JVZoo Academy. 2) Tai Lopez's Social Media Marketing Agency + Bonuses Update: Check out my full review of this program If you haven't heard of Tai Lopez, check him out. He's the real deal, has a massive social media following, and is a very successful affiliate marketer and online business entrepreneur. He went from broke & sleeping on his mom's couch to becoming a social media mogul and influencer living in Beverly Hills. Tai has 635M minutes of his content watched on YouTube, $21.7M in social media marketing testing, and one of the top 15 TedX talks of all time. Most of his success has come from affiliate marketing and from developing two amazing products.
His first was called the 67 Steps, and this course is his most recent product; it teaches how to start a social media marketing business in less than 8 weeks. This 4-month online course teaches you the same strategies and tactics he used, in an easy-to-follow system with practical, immediately usable information. Tai also uses JVZoo to manage his product launches and affiliate programs. Check him out on Instagram, Twitter, Facebook, and Snapchat. He's everywhere. He makes a lot of money online because he creates great products that provide huge value. This is definitely a fast-track program to making money online. It may cost you more than a lot of other programs out there, but it's definitely worth it. What others are saying about Tai Lopez's social media marketing course: Within a month of starting Tai's SMMA program, Jaiden closed his third client, who paid him $20k in a cashier's check in person. – Jaiden Gross Closed two contracts at $1497/month each with a chance of upgrading. – Daniel Acosta Getting paid $1000/month with her first client. This is her first income after graduating college, even though she has earned 2 degrees. – Bonnie Laska 67 Steps is an awesome program! It helped inspire me to start a business, quit my job and is helping me cure my mom's cancer. Tai essentially re-wires your brain to go after the Good Life as he calls it. I'm not one to leave reviews but absolutely love his vision and feel the need to spread the good information. – Gifted 3) Michael Cheney's The Commission Machine 2017 Michael has been making a full-time living online since the year 1999. He's made over $7 million in this time, much of it in the form of affiliate commissions from just promoting other people's stuff. The Commission Machine 2017 is a method for making money using JVZoo. It's similar to the JVZoo Academy in that you are promoting the same products, but the approach is different.
What others are saying about the Commission Machine 2017: "I went from struggling to $1500 to $2500 or more a day without…" 4) Affiliate Bootcamp Russell Brunson has created one of the best sales-funnel-building applications on the market, called ClickFunnels. They also have one of the best affiliate programs out there, paying 40% commissions on a whole bunch of recurring products, from the software to books and courses. The Affiliate Bootcamp is a comprehensive video training program that teaches you all about affiliate marketing. ClickFunnels is offering this affiliate marketing boot camp for free. This $997 training course will help train you on how to do affiliate marketing and actually make a living from it. What others are saying about the Affiliate Bootcamp: "Affiliate bootcamp is just crazy good training for affiliates, not just for ClickFunnels Affiliates – I'm getting so much out of the course and would highly recommend it to anyone struggling to wrap their head around affiliate marketing." Stephen DJ
\section{Solvability of nonlocal boundary value problems}\label{sectLSolv} In this section, we study solvability of nonlocal boundary value problems. In subsection~\ref{subsectLThetaFred}, we establish necessary and sufficient conditions for Fredholm solvability of the nonlocal boundary value problems with parameter $\theta$ in plane angles. In subsection~\ref{subsectLOneValued}, we study necessary conditions for Fredholm solvability and sufficient conditions for one--valued solvability of nonlocal boundary value problems in dihedral angles. \subsection{Fredholm solvability of nonlocal boundary value problems with parameter $\theta$.}\label{subsectLThetaFred} \begin{theorem}\label{thLThetaFred} Put $a=b+l$. Suppose the line $\Im\lambda=b+1-2m$ contains no poles of the operator--valued function $\tilde{\cal L}^{-1}(\lambda)$; then the operator $$ {\cal L}(\theta)=\{{\cal P}_j(D_y,\ \theta),\ {\cal B}_{j\sigma\mu}(D_y,\ \theta)\}: E_a^{l+2m,\,N}(K)\to E_a^{l,\,N}(K,\ \gamma) $$ is Fredholm for all $\theta\in S^{n-3}$. If there is a $\theta\in S^{n-3}$ such that the operator ${\cal L}(\theta)$ is Fredholm, then the line $\Im\,\lambda=b+1-2m$ contains no poles of the operator--valued function $\tilde{\cal L}^{-1}(\lambda).$ \end{theorem} \begin{proof} Suppose the line $\Im\lambda=b+1-2m$ contains no poles of $\tilde{\cal L}^{-1}(\lambda)$; then by Theorem~\ref{thAprE2}, the operator ${\cal L}(\theta)$ has finite dimensional kernel and closed range. Let us prove that cokernel of the operator ${\cal L}(\theta)$ is of finite dimension. First, we put $l=0.$ By Theorems~\ref{lSolvLambda1} and~\ref{lSolvLambda1'}, the operators $\tilde{\cal L}(\lambda)$ and $\tilde{\cal M}(\lambda)$ are Fredholm and have zero indices. Therefore from Green formula~(\ref{eqGrPLambda}) and Remark~\ref{rGrLambda}, it follows that $\lambda_0$ is a pole of $\tilde{\cal L}^{-1}(\lambda)$ iff $\lambda'_0=\bar\lambda_0-2i(m-1)$ is a pole of $\tilde{\cal M}^{-1}(\lambda)$. 
Hence the line $\Im\,\lambda=(-b+2m)+1-2m$ contains no poles of the operator--valued function $\tilde{\cal M}^{-1}(\lambda).$ Now by Theorem~\ref{thAprE2'}, kernel of the operator ${\cal M}(\theta)$ is of finite dimension. Finally, Lemma~\ref{lKerLThetaAdj} implies $\dim\ker({\cal L}(\theta)^*)=\dim\ker({\cal M}(\theta))<\infty$. Consider the case $l\ge1.$ Suppose $f\in E_{a}^{l,\,N}(K,\ \gamma)$. By the above, there exists a $u\in E_{a-l}^{2m,\,N}(K)$ such that ${\cal L}(\theta)u=f$ iff $(f,\ \Psi_i)_{E_{a-l}^{0,\,N}(K,\ \gamma)}=0$ for some linearly independent functions $\Psi_i\in E_{a-l}^{0,\,N}(K,\ \gamma)$ ($i=1,\ \dots,\ J$). Here $(\cdot,\ \cdot)_{E_{a-l}^{0,\,N}(K,\ \gamma)}$ is the inner product in the Hilbert space $E_{a-l}^{0,\,N}(K,\ \gamma)$. In addition, by Theorem~\ref{thAprE1}, we have $u\in E_{a}^{l+2m,\,N}(K)$. By virtue of the Schwarz inequality and boundedness of the embedding operator of $E_{a}^{l,\,N}(K,\ \gamma)$ into $E_{a-l}^{0,\,N}(K,\ \gamma)$, we have $$ \begin{array}{c} (f,\ \Psi_i)_{E_{a-l}^{0,\,N}(K,\ \gamma)}\le\|f\|_{E_{a-l}^{0,\,N}(K,\ \gamma)}\|\Psi_i\|_{E_{a-l}^{0,\,N}(K,\ \gamma)}\le \\ \\ k_1 \|f\|_{E_{a}^{l,\,N}(K,\ \gamma)}\|\Psi_i\|_{E_{a-l}^{0,\,N}(K,\ \gamma)} \end{array} $$ for all $f\in E_{a}^{l,\,N}(K,\ \gamma)$. Therefore, by virtue of the Riesz theorem on the general form of a linear functional in a Hilbert space, there exist linearly independent functions $\hat\Psi_i\in E_{a}^{l,\,N}(K,\ \gamma)$ ($i=1,\ \dots,\ J$) such that $$ (f,\ \Psi_i)_{E_{a-l}^{0,\,N}(K,\ \gamma)}=(f,\ \hat\Psi_i)_{E_{a}^{l,\,N}(K,\ \gamma)}\ \mbox{for all } f\in E_{a}^{l,\,N}(K,\ \gamma). $$ This means that the cokernel of the operator ${\cal L}(\theta)$ is of the same finite dimension $J$ for all $l\ge 0$. The second part of the Theorem follows from Theorem~\ref{thAprE2}. \end{proof} \subsection{Solvability of nonlocal boundary value problems in dihedral angles.}\label{subsectLOneValued} \begin{theorem}\label{thLSolv} Put $a=b+l$.
Suppose the line $\Im\,\lambda=b+1-2m$ contains no poles of the operator--valued function $\tilde{\cal L}^{-1}(\lambda)$. Suppose also that for $l=0$, we have $\dim\ker({\cal L}(\theta))=0$ for all $\theta\in S^{n-3},$ $\codim{\cal R}({\cal L}(\theta_0))=0$ for some $\theta_0\in S^{n-3}$; then the operator $$ {\cal L}=\{{\cal P}_j(D_y,\ D_z),\ {\cal B}_{j\sigma\mu}(D_y,\ D_z)\}: H_a^{l+2m,\,N}(\Omega)\to H_a^{l,\,N}(\Omega,\ \Gamma) $$ is an isomorphism. \end{theorem} \begin{proof} By Theorem~\ref{thAprE2}, we have $\dim\ker({\cal L}(\theta))<\infty$ and the range ${\cal R}({\cal L}(\theta))$ is closed in $E_a^{l,\,N}(K,\ \gamma)$ for all $\theta\in S^{n-3}.$ Since the operator ${\cal L}(\theta)$ is bounded and $\dim\ker({\cal L}(\theta))=0$ for $l=0$, we have \begin{equation}\label{eqLSolv1} k_1 \|{\cal L}(\theta)u\|_{E_a^{0,\,N}(K,\ \gamma)}\le \|u\|_{E_a^{2m,\,N}(K)}\le k_2\|{\cal L}(\theta)u\|_{E_a^{0,\,N}(K,\ \gamma)}, \end{equation} where $k_1,\ k_2>0$ are independent of $\theta\in S^{n-3}$ and $u$ ($k_2$ does not depend on $\theta\in S^{n-3}$, since the sphere $S^{n-3}$ is compact). By assumption, there exists a $\theta_0\in S^{n-3}$ such that the operator ${\cal L}(\theta_0)$ has a bounded inverse. Therefore, using estimates~(\ref{eqLSolv1}) and the method of continuation with respect to the parameter $\theta\in S^{n-3}$ (see the proof of theorem~7.1 \cite[Chapter 2, \S7]{Lad}), we prove that the operator ${\cal L}(\theta)$ has a bounded inverse for all $\theta\in S^{n-3}.$ We reduce problem~(\ref{eqP}), (\ref{eqB}) to problem~(\ref{eqPTheta}), (\ref{eqBTheta}) by applying the Fourier transform with respect to $z$: $U(y,\ z)\to \hat U(y,\ \eta)$, and changing variables: $y'=|\eta|\cdot y$. Now repeating the proof of lemma~7.3 \cite[\S7]{MP} and applying Theorem~\ref{thAprH} of this work, we complete the proof.
\end{proof} \begin{theorem}\label{thLNecessCond} Suppose for some $b\in{\mathbb R},$ $l_1\ge0,$ the operator $$ {\cal L}=\{{\cal P}_j(D_y,\ D_z),\ {\cal B}_{j\sigma\mu}(D_y,\ D_z)\}: H_{a_1}^{l_1+2m,\,N}(\Omega)\to H_{a_1}^{l_1,\,N}(\Omega,\ \Gamma),\ a_1=b+l_1, $$ is Fredholm; then the operator $$ {\cal L}(\theta)=\{{\cal P}_j(D_y,\ \theta),\ {\cal B}_{j\sigma\mu}(D_y,\ \theta)\}: E_a^{l+2m,\,N}(K)\to E_a^{l,\,N}(K,\ \gamma),\ a=b+l, $$ is an isomorphism for all $\theta\in S^{n-3},$ $l=0,\ 1,\ \dots$ \end{theorem} \begin{proof} 1) While proving the Theorem, we shall follow the scheme of the paper \cite[\S8]{MP}. Similarly to the proof of lemma~8.1 \cite[\S8]{MP}, one can prove that the operator ${\cal L}$ is an isomorphism for $l=l_1$, $a=a_1$. Therefore we have $$ \|U\|_{H_{a_1}^{l_1+2m,\,N}(\Omega)}\le k_1\|{\cal L}U\|_{H_{a_1}^{l_1,\,N}(\Omega,\ \Gamma)}. $$ Substituting $U^p(y,\ z)=p^{1-n/2}e^{i(\theta,\ z)}\varphi(z/p)u(y)$ ($\varphi\in C_0^\infty({\mathbb R}^{n-2})$, $u\in E_{a_1}^{l_1+2m,\,N}(K)$, $\theta\in S^{n-3}$) into the last inequality and passing to the limit as $p\to\infty$, we get \begin{equation}\label{eqLNecessCond1} \|u\|_{E_{a}^{l+2m,\,N}(K)}\le k_2\|{\cal L}(\theta)u\|_{E_{a}^{l,\,N}(K,\ \gamma)} \end{equation} for $l=l_1$, $a=a_1$. This implies that ${\cal L}(\theta)$ has trivial kernel for $l=l_1$, $a=a_1$. But by Theorem~\ref{thAprE1}, kernel of ${\cal L}(\theta)$ does not depend on $l$ and $a=b+l$; therefore the operator ${\cal L}(\theta)$ has trivial kernel for all $l$ and $a=b+l$. By Theorem~\ref{thAprE2}, estimate~(\ref{eqLNecessCond1}) implies that the line $\Im\,\lambda=b+1-2m$ contains no poles of the operator--valued function $\tilde{\cal L}^{-1}(\lambda).$ Hence, by Theorem~\ref{thLThetaFred}, the operator ${\cal L}(\theta)$ is Fredholm for all $l$ and $a=b+l$. From this and from triviality of $\ker{\cal L}(\theta)$, it follows that estimate~(\ref{eqLNecessCond1}) is valid for all $l$ and $a=b+l$. 
2) Repeating the proof of lemma~7.3 \cite[\S7]{MP}, from estimate~(\ref{eqLNecessCond1}), we get $$ \|U\|_{H_{a}^{2m,\,N}(\Omega)}\le k_3\|{\cal L}U\|_{H_{a}^{0,\,N}(\Omega,\ \Gamma)}, $$ where $l=0$, $a=b$. Therefore, the operator ${\cal L}:H_{b}^{2m,\,N}(\Omega)\to H_{b}^{0,\,N}(\Omega,\ \Gamma)$ has trivial kernel and closed range. Let us show that its range coincides with $H_{b}^{0,\,N}(\Omega,\ \Gamma)$. Indeed, since $H_{b+l_1}^{l_1+2m,\,N}(\Omega)\subset H_{b}^{2m,\,N}(\Omega)$, the range ${\cal R}({\cal L})_{b+l_1}$ of the operator ${\cal L}:H_{b+l_1}^{l_1+2m,\,N}(\Omega)\to H_{b+l_1}^{l_1,\,N}(\Omega,\ \Gamma)$ is contained in the range ${\cal R}({\cal L})_b$ of the operator ${\cal L}:H_{b}^{2m,\,N}(\Omega)\to H_{b}^{0,\,N}(\Omega,\ \Gamma)$: $$ {\cal R}({\cal L})_{b+l_1}\subset{\cal R}({\cal L})_b. $$ By what was proved in 1), ${\cal R}({\cal L})_{b+l_1}=H_{b+l_1}^{l_1,\,N}(\Omega,\ \Gamma)$, which is dense in $H_{b}^{0,\,N}(\Omega,\ \Gamma)$; hence, ${\cal R}({\cal L})_{b}$ is also dense in $H_{b}^{0,\,N}(\Omega,\ \Gamma)$. But ${\cal R}({\cal L})_{b}$ is closed; therefore, ${\cal R}({\cal L})_{b}=H_{b}^{0,\,N}(\Omega,\ \Gamma)$. So, we have proved that the operator ${\cal L}:H_{b}^{2m,\,N}(\Omega)\to H_{b}^{0,\,N}(\Omega,\ \Gamma)$ is an isomorphism. 3) Now we shall prove the estimate \begin{equation}\label{eqLNecessCond2} \|V\|_{{\cal H}_{-b+2m}^{2m,\,N}(\Omega)}\le k_4\|{\cal M}V\|_{{\cal H}_{-b+2m}^{0,\,N}(\Omega,\ \Gamma)}. \end{equation} Denote by ${\rm P}: H_{b-2m}^{0,\,N}(\Omega)\to H_b^{0,\,N}(\Omega)$ the unbounded operator corresponding to problem~(\ref{eqP}), (\ref{eqB}) with homogeneous nonlocal conditions. The operator ${\rm P}$ is given by $$ \begin{array}{c} \Dom({\rm P})=\{U\in H_b^{2m,\,N}(\Omega):\ {\cal B}_{j\sigma\mu}(D_y,\ D_z)U=0,\\ j=1,\ \dots,\ N;\ \sigma=1,\ R_j+1;\ \mu=1,\ \dots,\ m\}, \end{array} $$ $$ {\rm P}U=({\cal P}_1(D_y,\ D_z)U_1,\ \dots,\ {\cal P}_N(D_y,\ D_z)U_N),\quad U\in \Dom({\rm P}).
$$ Denote by ${\rm Q}: H_{-b}^{0,\,N}(\Omega)\to H_{-b+2m}^{0,\,N}(\Omega)$ the unbounded operator corresponding to problem~(\ref{eqQ})--(\ref{eqT}) with homogeneous boundary conditions and homogeneous nonlocal transmission conditions. The operator ${\rm Q}$ is given by $$ \begin{array}{c} \Dom({\rm Q})=\{V\in {\cal H}_{-b+2m}^{2m,\,N}(\Omega):\ {\cal C}_{j\sigma\mu}(D_y,\ D_z)V=0,\ {\cal T}_{jq\nu}(D_y,\ D_z)V=0,\\ j=1,\ \dots,\ N;\ \sigma=1,\ R_j+1;\ \mu=1,\ \dots,\ m;\\ q=2,\ \dots,\ R_j;\ \nu=1,\ \dots,\ 2m\} \end{array} $$ $$ {\rm Q}V=(W_1,\ \dots,\ W_N),\ W_j={\cal Q}_j(D_y,\ D_z)V_{jt}\ \mbox{for } x\in\Omega_{jt},\ V\in \Dom({\rm Q}). $$ It is clear that $\Dom({\rm P})$ is dense in $H_{b-2m}^{0,\,N}(\Omega)$ and $\Dom({\rm Q})$ is dense in ${\cal H}_{-b}^{0,\,N}(\Omega).$ From Theorems~\ref{thAprH} and~\ref{thAprH'}, it follows that the operators ${\rm P}$ and ${\rm Q}$ are closed. Since the operator ${\cal L}:H_{b}^{2m,\,N}(\Omega)\to H_{b}^{0,\,N}(\Omega,\ \Gamma)$ is an isomorphism, the operator ${\rm P}$ is also an isomorphism from $\Dom({\rm P})$ onto $H_b^{0,\,N}(\Omega)$. Denote by ${\rm P}^*: H_{-b}^{0,\,N}(\Omega)\to H_{-b+2m}^{0,\,N}(\Omega)$ the operator that is adjoint to ${\rm P}$ with respect to the inner product $\sum\limits_j(U_j,\ V_j)_{\Omega_j}$ in $\prod\limits_j L_2(\Omega_j).$ Since the operator ${\rm P}$ is an isomorphism from $\Dom({\rm P})$ onto $H_b^{0,\,N}(\Omega)$, the operator ${\rm P}^*$ is also an isomorphism from $\Dom({\rm P}^*)$ onto $H_{-b+2m}^{0,\,N}(\Omega)$ and its domain $\Dom({\rm P}^*)$ is dense in $H_{-b}^{0,\,N}(\Omega)$. The operator ${\rm P}^*$ is given by $$ \sum\limits_j\bigl({\rm P}_jU_j,\ V_j\bigr)_{\Omega_j}= \sum\limits_j\bigr(U_j,\ ({\rm P}^*V)_j\bigr)_{\Omega_j}\ \mbox{for any } U\in\Dom({\rm P}), V\in\Dom({\rm P}^*). 
$$ Since the closed operator ${\rm P}^*$ is an isomorphism from $\Dom({\rm P}^*)$ onto $H_{-b+2m}^{0,\,N}(\Omega)$, we have \begin{equation}\label{eqAdjP1} \|V\|_{H_{-b}^{0,\,N}(\Omega)}\le k_5\|{\rm P}^*V\|_{H_{-b+2m}^{0,\,N}(\Omega)} \end{equation} for all $V\in \Dom({\rm P}^*)$, where $k_5>0$ is independent of $V.$ From Theorem~\ref{thGrP} and Remark~\ref{rGr}, it follows that ${\rm Q}\subset{\rm P}^*$.\footnote{One can prove that ${\rm Q}={\rm P}^*$, but for our purposes, it is sufficient to prove the weaker result.} Therefore, using~(\ref{eqAdjP1}), we get $$ \|V\|_{{\cal H}_{-b}^{0,\,N}(\Omega)}\le k_5\|{\rm Q}V\|_{H_{-b+2m}^{0,\,N}(\Omega)} $$ for all $V\in \Dom({\rm Q})$. From the last inequality, Lemma~\ref{lHomog'}, and Theorem~\ref{thAprH'}, we obtain estimate~(\ref{eqLNecessCond2}). 4) Substituting $V^p(y,\ z)=p^{1-n/2}e^{i(\theta,\ z)}\varphi(z/p)v(y)$ ($\varphi\in C_0^\infty({\mathbb R}^{n-2})$, $v\in {\cal E}_{-b+2m}^{2m,\,N}(K)$, $\theta\in S^{n-3}$) into inequality~(\ref{eqLNecessCond2}) and passing to the limit as $p\to\infty$, we get $$ \|v\|_{{\cal E}_{-b+2m}^{2m,\,N}(K)}\le k_6\|{\cal M}(\theta)v\|_{{\cal E}_{-b+2m}^{0,\,N}(K,\ \gamma)}. $$ Therefore, the kernel of the operator ${\cal M}(\theta):{\cal E}_{-b+2m}^{2m,\,N}(K)\to {\cal E}_{-b+2m}^{0,\,N}(K,\ \gamma)$ is trivial. By virtue of Lemma~\ref{lKerLThetaAdj}, $\dim\ker({\cal L}(\theta)^*)=\dim\ker({\cal M}(\theta))=0$. Combining this with 1), we see that the operator ${\cal L}(\theta):E_b^{2m,\,N}(K)\to E_b^{0,\,N}(K,\ \gamma)$ is an isomorphism. Using Theorem~\ref{thAprE1'}, we prove the theorem for arbitrary $l$ and $a=b+l$. \end{proof} \begin{remark} From Theorems~\ref{thLThetaFred} and~\ref{thLNecessCond}, it follows that the operator $ {\cal L}:H_a^{l+2m,\,N}(\Omega)\to H_a^{l,\,N}(\Omega,\ \Gamma) $ is an isomorphism for all $l$ and $a=b+l$ whenever $ {\cal L}:H_{a_1}^{l_1+2m,\,N}(\Omega)\to H_{a_1}^{l_1,\,N}(\Omega,\ \Gamma) $ is Fredholm for some $l_1$ and $a_1=b+l_1$. \end{remark}
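The inequality in step 3 has a transparent finite-dimensional analogue: for an invertible matrix ${\rm P}$, the adjoint ${\rm P}^*$ is again invertible and $\|V\|\le k_5\|{\rm P}^*V\|$ with $k_5=\|({\rm P}^*)^{-1}\|$, mirroring estimate (\ref{eqAdjP1}). A minimal numerical sketch (the matrix is an arbitrary invertible example of ours, not taken from the problem):

```python
import numpy as np

# Finite-dimensional analogue of step 3 of the proof: if P is an
# isomorphism (here: an invertible matrix), then its adjoint P* is an
# isomorphism too, and ||V|| <= k5 ||P* V|| with k5 = ||(P*)^{-1}||.
# The matrix below is an arbitrary well-conditioned example.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)

P_star = P.T                                       # adjoint w.r.t. the Euclidean inner product
k5 = np.linalg.norm(np.linalg.inv(P_star), ord=2)  # operator norm of (P*)^{-1}

V = rng.standard_normal(5)
lhs = np.linalg.norm(V)
rhs = k5 * np.linalg.norm(P_star @ V)
assert lhs <= rhs + 1e-12                          # the estimate ||V|| <= k5 ||P* V||
```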
Podcast: Play in new window | Download (Duration: 1:07:12 — 62.0MB) D.J. Sherman of the SHL hockey team Titanium – Sport Corner came by the studio to talk about his travels, playing hockey, hockey life, hockey love, and hockey just in general. We also talked about the first time he visited Thailand, what that was like, and his decision to move to Thailand. He talked about finding out about the big tournament the Flying Farangs have every year, which made him want to find out where it was held and try to skate again. We also talked about what it was like for him playing in his first league game here in the SHL, and how exciting it was. Lastly, we talked about a product that he came up with to help skaters get the most out of skating. X-Factor Site On November 20th, the SHL had their very first league games, and they were amazing! This podcast also covers these events. Dom talked to the captain of Titanium – Sport Corner, Zak Garofolo, about what he felt his team needed to do to beat the Aware. Dom also talked to the Aware captain, Patrik Lundback, about what he felt his team needed to do to beat Titanium – Sport Corner. Titanium – Sport Corner and Aware had a great game, with Titanium – Sport Corner coming back from behind in the third period to tie it up, forcing a shootout, with Titanium – Sport Corner taking the win! Dom talked to the Player of the Game (PoG) after the game, Gabor Toth (G), and got his feelings about what it was like to come back and win the game in that manner. Before the next match, Dom talked to the captain of the Spitfires, Brad Wilson, and got his opinions on what the Spitfires needed to do to beat the Hooters and take the win. Then Dom talked to the Hooters captain, Justin St. Denis, about his strategy to beat the Spitfires. Next it was the Spitfires vs Hooters, and it was another nail-biter with the Spitfires falling behind 4-1 at one point, but they came back and tied it at the end, forcing another shootout, with the Spitfires as the winners!
Dom talked to the Player of the Game, Adrian Meyers, after the game and got his feelings about what it was like and what he felt he needed to do in the next game. D. J. Sherman of the Titanium – Sport Corner – DD41 Gallery Have any comments, questions or feedback? Comment in the show notes. Credits - Images are © Domnick M. Dumais & Tracy Dumais unless otherwise stated.
Source: The Jewish Daily Forward At the May 3, 2009 OU Synagogue Leadership Seminar of the OU’s West Coast Division in Los Angeles, Rabbi Saul Zucker, Director of the Department of Day School and Educational Services of the OU, presented a two-part plan to help meet the economic crisis faced by yeshiva and day school administrations and parents across the nation. Part One - Nationwide Health Insurance – The OU is working with a major insurer on a plan for yeshiva day schools. The schools would join a group of about 40,000 other participants, whose demographics would determine the premium. It is estimated that the potential savings for any given school over the course of the year is tens of thousands of dollars. - Energy Cost-Cutting Measures - A switch to solar energy has already presented one school in New Jersey with a potential savings of $80,000. Other schools would incur little or no costs in the conversion, through arrangements worked out by the OU and a provider. - Grant Writing – Using a professional grants consultant, to be made available via the OU, to identify government and private sources for additional financial support and draft the grant proposals to obtain the funds. - Revenue-Generating OU Toolbar - Having yeshiva students, parents and faculty use and get others to use a custom Internet toolbar offered through the OU for their Web browser. With each click, corporate sponsors whose ads jump to the top of searches will contribute to a fund to be maintained by the OU and disbursed to the schools. - Holding bingo fundraising events to generate income. - Setting up a kehilla, or community fund, via local Orthodox congregations to allow schools to broaden their fundraising base beyond the families of their students. Under this plan, congregational rabbis would promote donations of roughly $20-$30 per month, to be automatically deducted from the bank accounts or charged to the credit cards of synagogue members to support nearby yeshivas.
Zucker said that his six-point program emerged from a summit of Jewish education officials in New York last January, at which some 20 initiatives were discussed. The schools involved ultimately agreed on six of them. Even with the implementation of these cost-saving measures, it cannot be expected that tuition will be significantly reduced. This leads to part two of the plan, the reduced-cost yeshiva. Part Two With a growing number of Orthodox families forced to consider removing their children from private Jewish schools due to their lack of financial resources, a proposal for setting up a new kind of yeshiva day school was put forth. The reduced-tuition school will operate as follows: - Class size of 25, not 15-18; - No fixed aides in every classroom in the lower grades; - No after-school extracurricular programs unless staffed by volunteers; - A cooperative model of education in which parents sign on to giving four hours per month in providing services to the school in different ways; - Teachers’ salaries to remain competitive with the market; - A computer lab to be present but not one with “absolute star-quality cutting-edge equipment.” Such a stripped-down yeshiva could give a child a solid Jewish and secular education for some $6,500 to $7,000 per year rather than the $15,000 to $20,000 today.
Nov 02 2017 Employees at businesses of all shapes and sizes take home the keys to their companies every night. Virtual keys, that is. Smartphones and tablets are gateways into a business. The devices that parents give to their kids during dinners out to enjoy a moment of peace and quiet can also open up a company to any kind of breach or attack. If you allow employees to access email, company files, or any other type of information via a smartphone or tablet, whether it was purchased by you or whether they own it, you need to make sure you have strong security protocols and policies in place. While having access to email and files from anywhere can be incredibly convenient, that convenience comes at a price. The ability to wipe devices remotely when they are lost or stolen, controlling what information is available and accessible via a mobile operating system, and establishing protocols that individuals must follow if they want to access company information on their smartphones and tablets are just a few approaches that can help save you time and money. Lock your virtual doors now, before it’s too late. Read more about these topics and other important security ideas on our blog at.
Lahore Property Trends & Analysis See what the market trends are for buying and selling properties in the Lahore property and real estate market. Khurram said: Kindly let me know the following about Naya Nazimabad: >> Did the developer get an NOC? If yes, what is the NOC number? >> Did the developer get utility NOCs from the concerned authorities, like the Water Board, KESC, and Sui Gas? >> The developer hired 150 FC personnel for security, as they mention. So what about after the project is completed: will the security still exist? >> Is the agreement between the developer and the customer on Rs. 100 stamp paper, as per KBCA policy? Regards eProperty said: It is always suggested to contact the developer / builder / authority directly. The best answers to the above questions can only be given by the developer / builder / authority itself. Farhan Ahmed Quadri said: I would like to know more about how to buy property in Naya Nazimabad, Karachi. eProperty said: Please contact the Naya Nazimabad City Karachi office directly to learn more about Naya Nazimabad City Karachi. pervez said: AOA, I am interested in buying a 240 single-unit or 160 double-unit bungalow in Naya Nazimabad, if anyone wants to sell their ownership.
\begin{document} \maketitle \begin{abstract} In this paper we contribute to the qualitative and geometric analysis of planar piecewise smooth vector fields, which consist of two smooth vector fields separated by the straight line $y=0$ and sharing the origin as a non-degenerate equilibrium. In the sense of $\Sigma$-equivalence, we provide a sufficient condition for linearization and give phase portraits and normal forms for these linearizable vector fields. This condition can hardly be weakened, because there exist vector fields which are not linearizable when it is not satisfied. Regarding perturbations, a necessary and sufficient condition for local $\Sigma$-structural stability is established when the origin is still an equilibrium of both smooth vector fields under perturbations. In contrast to this case, we prove that for any piecewise smooth vector field studied in this paper there is a limit cycle bifurcating from the origin, and there are some piecewise smooth vector fields such that for any positive integer $m$ there is a perturbation having exactly $m$ limit cycles bifurcating from the origin. Here $m$ may also be infinity. \vskip 0.2cm {\bf 2010 MSC:} 34A36, 34C41, 37G05, 37G15. {\bf Keywords:} limit cycle bifurcation, linearization, non-smooth equilibrium, normal form, structural stability. \end{abstract} \baselineskip 15pt \parskip 10pt \thispagestyle{empty} \setcounter{page}{1} \section{Introduction and statement of the main results} \setcounter{equation}{0} \setcounter{lm}{0} \setcounter{thm}{0} \setcounter{rmk}{0} \setcounter{df}{0} \setcounter{cor}{0} Let ${\mathcal U}\subset\mathbb{R}^2$ be a bounded open set containing the origin $O$, $\mathfrak{X}$ be the set of all $\mathcal{C}^1$ vector fields defined on ${\mathcal U}$ and endowed with the $\mathcal{C}^1$-topology.
We consider the piecewise smooth vector field \begin{eqnarray} Z(x, y)=\left\{ \begin{aligned} &X(x, y)=(X_1(x, y), X_2(x, y))~~~~&& {\rm if}~~ (x, y)\in\Sigma^+,\\ &Y(x, y)=(Y_1(x, y), Y_2(x, y))~~~~&& {\rm if}~~ (x, y)\in\Sigma^-,\\ \end{aligned} \right. \label{sysp} \end{eqnarray} where $X, Y\in \mathfrak{X}$ and $$\Sigma^+=\{(x, y)\in {\mathcal U}: y>0\}\qquad \Sigma^-=\{(x, y)\in {\mathcal U}: y<0\}.$$ Define $\Omega$ as the set of all $Z(x,y)$ satisfying (\ref{sysp}) and endowed with the product topology. In the past two decades, many researchers have shifted their interest to the study of piecewise smooth vector fields, because such vector fields are ubiquitous in mechanical engineering \cite{QC, HCJL}, feedback control systems \cite{MD, FGKP}, biological systems \cite{AYX, TSY}, electrical circuits \cite{MD}, etc. Notice that the piecewise smooth vector field (\ref{sysp}) is not defined on $\Sigma=\{(x, y)\in {\mathcal U}: y=0\}$, called the {\it discontinuity line} or {\it switching line}. Denote the vector field on $\Sigma$ by $Z_\Sigma$, which is usually defined by the so-called Filippov convention \cite{AFF}; see Section 2 for a review. Here $Z_\Sigma$ is naturally defined as $X$ or $Y$ if $X(x, y)\equiv Y(x, y)$ for all $(x, y)\in\Sigma$. The vector field (\ref{sysp}), together with $Z_\Sigma$, is called a {\it Filippov vector field}. In the whole paper, speaking of the vector field $Z\in\Omega$ always means that $Z=Z_\Sigma$ on $\Sigma$. A point at which $Z\in\Omega$ vanishes is said to be an {\it equilibrium} or {\it singular point}. Hence, an equilibrium of $Z$ is an equilibrium of either $X$ in $\Sigma^+$ or $Y$ in $\Sigma^-$ or $Z_\Sigma$ in $\Sigma$. Throughout this paper, we call it a {\it smooth equilibrium} in the first two cases and a {\it non-smooth equilibrium} in the last case.
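To fix ideas, the splitting into $\Sigma^+$, $\Sigma^-$ and $\Sigma$ can be illustrated numerically: a point of $\Sigma$ is a crossing point when $X_2Y_2>0$ there, and a sliding point otherwise, following the Filippov convention reviewed in Section 2. The following minimal sketch uses two illustrative linear fields of our own choosing (not taken from the paper), chosen so that the origin is a non-degenerate equilibrium of both and there are no sliding points away from it:

```python
# Minimal sketch: evaluating a piecewise field Z = (X, Y) as in (1.1) and
# classifying points of the switching line y = 0 as crossing or sliding via
# the sign of X_2 * Y_2 (the Filippov convention reviewed in Section 2).
# The two linear fields below are an illustrative choice, not from the paper.
def X(x, y):                 # smooth field used on y > 0
    return (-y, x)           # X = (X_1, X_2); Jacobian at O has det 1

def Y(x, y):                 # smooth field used on y < 0
    return (y, 2.0 * x)      # Y = (Y_1, Y_2); Jacobian at O has det -2

def Z(x, y):
    if y > 0:
        return X(x, y)
    if y < 0:
        return Y(x, y)
    raise ValueError("on Sigma, use the Filippov convention")

def sigma_point_type(x):
    X2, Y2 = X(x, 0.0)[1], Y(x, 0.0)[1]
    return "crossing" if X2 * Y2 > 0 else "sliding"

# Here X_2(x,0) = x and Y_2(x,0) = 2x, so X_2 * Y_2 = 2 x^2 > 0 away from
# the origin: every point of Sigma except O is a crossing point.
print(sigma_point_type(0.5))   # crossing
print(sigma_point_type(0.0))   # sliding (the non-smooth equilibrium itself)
```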
Regarding the local dynamics of $Z=(X, Y)\in\Omega$ near a smooth equilibrium, the investigation can be reduced to the local dynamics of the smooth vector field $X$ or $Y$ near this equilibrium and, with the efforts of many researchers, a large number of mature theories and methods have been established (see e.g., \cite{ZZF, B-YAK, JKHale}). Therefore, we focus on the local dynamics near non-smooth equilibria, which is more difficult than the smooth case because most theories and methods for smooth vector fields are no longer valid for non-smooth ones. Despite that, in the recent twenty years some excellent results about limit cycle bifurcation, normal forms and structural stability were given in the textbooks \cite{MD, AFF} and the journal papers \cite{YAK, MG1, HZ, CGP, ZK, PGL, TCJLC, TCJ, ZZHFF}. Let $\Omega_0\subset\Omega$ be the set of all piecewise smooth vector fields satisfying \begin{eqnarray} X(0, 0)=Y(0, 0)=(0, 0),~~~~~~\det A^+\det A^-\ne0 \label{adc} \end{eqnarray} and \begin{eqnarray} X_{2x}(0, 0)Y_{2x}(0, 0)>0, \label{condi} \end{eqnarray} where $A^+$ (resp. $A^-$) is the Jacobian matrix of $X$ (resp. $Y$) at $O$ and $X_{2x}, Y_{2x}$ denote the derivatives of $X_2, Y_2$ with respect to $x$, respectively. Condition (\ref{adc}) means that the equilibrium $O$ is non-degenerate for both $X$ and $Y$, while (\ref{condi}) means that there exists a punctured neighborhood of $O$ in which there are no sliding points (see Section 2). In this paper we study the local dynamics of the vector field $Z=(X, Y)\in\Omega_0$ near $O$, which is a non-smooth equilibrium of $Z$, i.e., $Z(0, 0)=Z_\Sigma(0, 0)=(0, 0)$. Our first goal is to study the {\it local $\Sigma$-equivalence} between $Z=(X, Y)\in\Omega_0$ and its linear part \begin{eqnarray} Z_L(x, y)= \left\{ \begin{aligned} &X_L(x, y)=A^+(x, y)^\top~~~~~&& {\rm if}~~ (x, y)\in\Sigma^+,\\ &Y_L(x, y)=A^-(x, y)^\top~~~&& {\rm if}~~ (x, y)\in\Sigma^-\\ \end{aligned} \right. \label{pwl} \end{eqnarray} near $O$.
Roughly speaking, the local $\Sigma$-equivalence is just the local topological equivalence preserving the switching line $\Sigma$. A precise definition of local $\Sigma$-equivalence is stated in Section 2. A nonlocal definition of $\Sigma$-equivalence, e.g., not in a neighborhood of an equilibrium but in the whole domain of definition, was given in \cite[Definition 2.20]{MG1} and \cite[Definition 2.30]{MD}. One of the motivations for this goal comes from the work \cite{XCZ}. In \cite[Theorem 2.2]{XCZ}, 19 different types of normal forms for $Z\in\Omega$ with (\ref{adc}) were obtained by using a continuous piecewise linear change of variables. We notice that in these normal forms the linear parts are normalized but the nonlinear parts are not. So, it is unknown whether these nonlinear parts can be eliminated after normalization. Another motivation comes from smooth vector fields. A smooth vector field is locally topologically equivalent to its linear part near an equilibrium if all eigenvalues of the Jacobian matrix at this equilibrium have nonzero real part (see, e.g., \cite{PH} and \cite[Theorem 4.7]{ZZF}). Hence, it is a natural question to find conditions such that $Z\in\Omega_0$ is locally $\Sigma$-equivalent near $O$ to its linear part $Z_L$ given in (\ref{pwl}). Let $\lambda^\pm_1$ and $\lambda^\pm_2$ be the eigenvalues of $A^\pm$, and \begin{eqnarray} \Omega_1=\{Z\in\Omega_0: \lambda^+_1\ne\lambda^+_2, \lambda^-_1\ne\lambda^-_2, \ell\ne0\}, \label{subsetdefinition} \end{eqnarray} where \begin{eqnarray} \ell= \left\{ \begin{aligned} &\frac{{\rm Re}\lambda^+_1}{|{\rm Im}\lambda^+_1|}+\frac{{\rm Re}\lambda^-_1}{|{\rm Im}\lambda^-_1|}~~~ &&{\rm if}~~~~~{\rm Im}\lambda^+_1{\rm Im}\lambda^-_1\ne0,\\ &~1~~~~~~~~~~~&&{\rm if}~~~~~{\rm Im}\lambda^+_1{\rm Im}\lambda^-_1=0, \end{aligned} \right. \label{eigen2} \end{eqnarray} and ${\rm Re}$ and ${\rm Im}$ denote the real and imaginary parts of the eigenvalues, respectively. We have the first theorem as follows.
\begin{thm} Every $Z\in\Omega_1$ is locally $\Sigma$-equivalent to its corresponding piecewise linear vector field $Z_L$ of form {\rm (\ref{pwl})} near the origin. Moreover, the local phase portrait of $Z$ near the origin is one of the 11 phase portraits presented in Figure~\ref{localphaseportraits} in the sense of $\Sigma$-equivalence. \label{normalform} \end{thm} \begin{figure} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{FF-2.eps} \caption*{{\small (FF-1)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.39in]{FF-1.eps} \caption*{{\small (FF-2)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{FN-1.eps} \caption*{{\small (FN-1)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.42in]{FN-2.eps} \caption*{{\small (FN-2)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{FS.eps} \caption*{{\small (FS)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{NN-1.eps} \caption*{{\small (NN-1)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{NN-2.eps} \caption*{{\small (NN-2)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{NN-3.eps} \caption*{{\small (NN-3)}} \end{minipage} \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{NS-1.eps} \caption*{{\small (NS-1)}} \end{minipage}~~ \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.4in]{NS-2.eps} \caption*{{\small (NS-2)}} \end{minipage}~~ \begin{minipage}[t]{0.24\linewidth} \centering \includegraphics[width=1.42in]{SS.eps} \caption*{{\small (SS)}} \end{minipage} \caption{{\small Local phase portraits of $Z\in\Omega_1$ near the origin.}} \label{localphaseportraits} \end{figure} Theorem~\ref{normalform} is proved in Section 3, where we present a normal form 
for each one of these $11$ kinds of phase portraits shown in Figure~\ref{localphaseportraits}. We remark that the first part of Theorem~\ref{normalform} can be regarded as a generalisation of \cite[Theorem 4.7]{ZZF} from smooth vector fields to piecewise smooth vector fields. We clarify some differences between the requirements for the eigenvalues in these two theorems as follows. In \cite[Theorem 4.7]{ZZF} it is required that all eigenvalues of the Jacobian matrix at a smooth equilibrium have nonzero real part in order that the smooth vector field is topologically equivalent to its linear part near this equilibrium. However, in Theorem~\ref{normalform} we require that the eigenvalues of the Jacobian matrices $A^+$ and $A^-$ at $O$, namely at the non-smooth equilibrium, satisfy $$\lambda^\pm_1\lambda^\pm_2\ne0,~~~~~\lambda^+_1\ne\lambda^+_2,~~~~~~\lambda^-_1\ne\lambda^-_2,~~~~~\ell\ne0$$ by the definition of $\Omega_1$ given in (\ref{subsetdefinition}). Comparing the requirements of \cite[Theorem 4.7]{ZZF} with those of our Theorem~\ref{normalform}, we see that \cite[Theorem 4.7]{ZZF} does not allow pure imaginary eigenvalues but Theorem~\ref{normalform} does. On the other hand, by \cite[Theorem B]{CGP} or \cite[Theorem 1.2]{HZ} the condition $\ell\ne0$ in Theorem~\ref{normalform} excludes the case that $O$ is a non-smooth center of the linear part. It is not hard to give an example showing the non-equivalence when $O$ is a non-smooth center of the linear part. Another difference is that \cite[Theorem 4.7]{ZZF} allows the Jacobian matrix to have a repeated eigenvalue, but Theorem~\ref{normalform} does not allow this for either of the Jacobian matrices $A^+$ and $A^-$. In Section 3 we give an example showing the non-equivalence when the Jacobian matrix $A^+$ or $A^-$ has a repeated eigenvalue. Our second goal is to study the structural stability of $Z\in \Omega_0$ in the sense of $\Sigma$-equivalence, i.e., {\it $\Sigma$-structural stability} as defined in \cite[p.1978]{MG1}.
Usually, $Z\in\Omega_0$ is not $\Sigma$-structurally stable when the perturbation is inside $\Omega$, because $O$ can be destroyed under such a perturbation and the so-called boundary equilibrium bifurcation occurs \cite{YAK}. Thus it is only of interest to consider the $\Sigma$-structural stability of $Z\in\Omega_0$ with respect to $\Omega_0$, i.e., when the perturbation is inside $\Omega_0$. In particular, we focus on the local $\Sigma$-structural stability of $Z\in\Omega_0$ near $O$. Roughly speaking, $Z\in\Omega_0$ is said to be {\it locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$} if any vector field that lies in a sufficiently small neighborhood of $Z$ contained in $\Omega_0$ is locally $\Sigma$-equivalent to $Z$ near $O$. \begin{thm} $Z\in\Omega_0$ is locally $\Sigma$-structurally stable with respect to $\Omega_0$ near the origin if and only if $Z\in\Omega_1$, where $\Omega_0$ is defined above {\rm (\ref{adc})} and $\Omega_1$ is defined in {\rm(\ref{subsetdefinition})}. \label{stability} \end{thm} \vspace{-13pt} Theorem~\ref{stability} is proved in Section 4. The third goal of this paper is devoted to the study of limit cycle bifurcations; more precisely, to identify the existence and number of crossing limit cycles bifurcating from the non-smooth equilibrium $O$ of a piecewise smooth vector field $Z=(X, Y)\in\Omega_0$. Here a limit cycle is said to be a {\it crossing limit cycle} if it intersects the switching line $\Sigma$ only at crossing points (see Section 2). Many works about limit cycle bifurcations have been done for the case that $O$ is of focus-focus type, i.e., an equilibrium of focus type for both $X$ and $Y$. See, e.g., \cite{CGP, ZK, KM, Xingwu, XCZ, CNTT, LHHH} for the perturbations in $\Omega_0$ and \cite{HZ, YMH} for the perturbations in $\Omega$. Such a bifurcation is analogous to the Hopf bifurcation of smooth vector fields.
Then a natural question is whether limit cycles can bifurcate from $O$ in other cases, for instance when $O$ is of focus-saddle type, focus-node type, etc. Since bifurcations usually depend on the type of local phase portraits of the unperturbed systems and there exist many kinds of possibilities, as obtained in Theorem~\ref{normalform}, in this paper we do not establish the bifurcation diagrams one by one but give some universal results on the limit cycle bifurcations for all unperturbed vector fields in $\Omega_0$. \begin{thm} For $\Omega_0$ defined above {\rm(\ref{adc})} and its subset $\Omega_1$ defined in {\rm(\ref{subsetdefinition})}, the following statements hold. \vspace{-13pt} \begin{description} \setlength{\itemsep}{-0.8mm} \item{\rm(1)} For any $Z\in\Omega_0$ and any small neighborhood $\mathcal{N}\subset\Omega$ of $Z$, there exists a vector field in $\mathcal{N}$ having a crossing limit cycle bifurcating from the non-smooth equilibrium $O$ of $Z$. \item{\rm(2)} There exists a $Z_0\in\Omega_1$ {\rm(}resp. $\Omega_0\setminus\Omega_1${\rm)} such that, for any $m\in\mathbb{N}^+\cup\{\infty\}$ and any small neighborhood $\mathcal{N}\subset\Omega$ of $Z_0$, there exists a vector field in $\mathcal{N}$ having exactly $m$ hyperbolic crossing limit cycles bifurcating from the non-smooth equilibrium $O$ of $Z_0$. \end{description} \label{bifurcation} \end{thm} \vspace{-13pt} Theorem~\ref{bifurcation} is proved in Section 5. Note that even though our main motivation is to consider the case of piecewise smooth vector fields, the set $\Omega_0$ also includes the smooth vector fields with $X\equiv Y$ having $O$ as a non-degenerate equilibrium. Thus it follows from statement (1) of Theorem~\ref{bifurcation} that limit cycles can bifurcate from a rough focus, saddle or node of smooth vector fields under non-smooth perturbations. This is impossible under smooth perturbations. This paper is organized as follows.
In Section 2 we briefly recall basic notions and results on piecewise smooth vector fields. In Section 3 we give the proof of Theorem~\ref{normalform}, and an example showing that a vector field in $\Omega_0$ might not be locally $\Sigma$-equivalent to its linear part near the origin if the Jacobian matrix $A^+$ or $A^-$ has a repeated eigenvalue. The proofs of Theorems~\ref{stability} and \ref{bifurcation} are given in Sections 4 and 5, respectively. \section{Preliminaries} \setcounter{equation}{0} \setcounter{lm}{0} \setcounter{thm}{0} \setcounter{rmk}{0} \setcounter{df}{0} \setcounter{cor}{0} For the sake of completeness, in this section we briefly review some basic notions and results on piecewise smooth vector fields, especially Filippov vector fields. Section 2.1 contains the definitions of the vector field $Z_\Sigma$ on $\Sigma$ and of all kinds of singularities. Moreover, the local $\Sigma$-equivalence is also clarified in Section 2.1. In Section 2.2 we state the pseudo-Hopf bifurcation for a special class of piecewise smooth vector fields in order to prove our results conveniently. \subsection{Notions on piecewise smooth vector fields} Consider the piecewise smooth vector field $Z\in\Omega$ given in (\ref{sysp}). First we clarify the definition of the vector field $Z_\Sigma$ on $\Sigma$ by the Filippov convention \cite{AFF}. To do this, $\Sigma$ is divided into the {\it crossing set} $$\Sigma^c=\{(x, y)\in\Sigma: X_2(x, y)\cdot Y_2(x, y)>0\},$$ and the {\it sliding set} $$\Sigma^s=\{(x, y)\in\Sigma: X_2(x, y)\cdot Y_2(x, y)\le0\},$$ as in \cite{YAK, AFF}. The points in $\Sigma^c$ and $\Sigma^s$ are called {\it crossing points} and {\it sliding points}, respectively. For $(x, y)\in\Sigma^c$, $X$ and $Y$ are both transversal to $\Sigma$ and their normal components have the same sign, so that the orbit passing through $(x, y)$ crosses $\Sigma$ at $(x, y)$ and is a continuous, but non-smooth, curve.
This means that we can define $Z_\Sigma$ at $(x, y)$ as either $X$ or $Y$. For concreteness, in this paper we specify $$ Z_\Sigma(x, y)=\left\{ \begin{aligned} &Y(x, y)~~~&&{\rm if} ~~(x, y)\in\Sigma^c,~~ X_2(x, y)<0,\\ &X(x, y)~~~&&{\rm if} ~~(x, y)\in\Sigma^c,~~ X_2(x, y)>0.\\ \end{aligned} \right. $$ For $(x, y)\in\Sigma^s$, either the normal components of $X$ and $Y$ to $\Sigma$ have opposite signs or at least one of them vanishes. In this case $Z_\Sigma$ is defined such that it is tangent to $\Sigma^s$. Particularly, if $Y_2(x, y)\ne X_2(x, y)$, $$Z_\Sigma(x, y)=\left(\frac{Y_2(x, y)X_1(x, y)-X_2(x, y)Y_1(x, y)}{Y_2(x, y)-X_2(x, y)},~0\right)$$ by \cite{AFF, YAK}, while if $Y_2(x, y)=X_2(x, y)=0$, namely $(x, y)$ is a {\it singular sliding point} (see \cite{YAK}), we always assume $Z_\Sigma(x, y)=(0, 0)$ in this paper. Sometimes, $Z_\Sigma$ restricted to $\Sigma^s$, denoted by $Z^s$, is called the {\it sliding vector field} of $Z$ and the corresponding equilibria are said to be {\it pseudoequilibria}. Having the definition of $Z_\Sigma$, the flow of $Z$ can be obtained by concatenating the flows of $X, Y$ and $Z_\Sigma$, as stated in \cite{YAK}. In the switching line $\Sigma$, the boundary $\partial\Sigma^s$ of $\Sigma^s$ plays an important role in the dynamical analysis of piecewise smooth vector fields. Let $q\in\partial\Sigma^s$. If $X_2(q)=0, X(q)\ne0$ (resp. $Y_2(q)=0, Y(q)\ne0$), then $q$ is called a {\it tangency point} of $X$ (resp. $Y$), see \cite{YAK}. In addition, a tangency point $q$ of $X$ is called a {\it fold point} if $X_1(q)X_{2x}(q)\ne0$, and it is said to be {\it visible} (resp. {\it invisible}) when $X_1(q)X_{2x}(q)>0$ (resp. $X_1(q)X_{2x}(q)<0$). The above notions can be similarly defined for $Y$. If $q$ is a fold point of both $X$ and $Y$, we call it a {\it fold-fold point} of $Z$, which can be of visible-visible, invisible-invisible or visible-invisible type. If $X(q)=0$ (resp.
$Y(q)=0$), $q$ is called a {\it boundary equilibrium} of $X$ (resp. $Y$). Clearly, a boundary equilibrium must be a pseudoequilibrium. Regarding piecewise smooth vector fields, there are two types of equivalences, i.e., topological equivalence and $\Sigma$-equivalence. We adopt the latter in this paper as it was indicated in Section 1, see \cite[Definition 2.20]{MG1} and \cite[Definition 2.30]{MD} for the definition of $\Sigma$-equivalence. Since we deal with the local dynamics of $Z\in\Omega_0$ near the origin, namely the non-smooth equilibrium, we can localize the definition of the $\Sigma$-equivalence as follows. \begin{df} Consider two piecewise smooth vector fields $Z_1$ and $Z_2$ in $\Omega_0$. We say that $Z_1$ and $Z_2$ are {\rm locally $\Sigma$-equivalent} near the origin if \vspace{-13pt} \begin{description} \setlength{\itemsep}{-0.8mm} \item{\rm(1)} $Z_1$ and $Z_2$ are locally topologically equivalent near the origin, i.e., there exist two neighborhoods $U$ and $V$ of the origin, and a homeomorphism $H: U\rightarrow V$ such that $H$ maps the orbits of $Z_1$ in $U$ onto the orbits of $Z_2$ in $V$, preserving the direction of time; and \item{\rm(2)} the homeomorphism $H$ sends $\Sigma\cap U$ to $\Sigma\cap V$. \end{description} \label{signaequil} \end{df} \vspace{-13pt} As a result, the definition of local $\Sigma$-equivalence gives rise to the definition of {\it local $\Sigma$-structural stability} of $Z\in\Omega_0$ with respect to $\Omega_0$ near the origin, that is, $Z\in\Omega_0$ is said to be locally $\Sigma$-structurally stable with respect to $\Omega_0$ near the origin, if any vector field that lies in a sufficiently small neighborhood of $Z$ contained in $\Omega_0$ is locally $\Sigma$-equivalent to $Z$ near the origin. \subsection{Pseudo-Hopf bifurcation} It is well known that the Hopf bifurcation of smooth vector fields is a main tool to produce limit cycles, where limit cycles bifurcate from a weak focus as the stability of this focus changes. 
In piecewise smooth vector fields there exists a similar phenomenon, called the {\it pseudo-Hopf bifurcation} (see, e.g., \cite{HZ, RLS, MG1, CNTT, JCJLV}), where limit cycles are created from a {\it pseudo-focus} as the stability of a sliding segment changes, see Figure~\ref{pseudohopfbifurcation}. Here a point in the switching line is said to be a stable (resp. unstable) pseudo-focus if all orbits near this point turn around and tend to it as time increases (resp. decreases), as defined in \cite{CGP}. In order to prove the results of this paper conveniently, we adopt the version given in \cite[Proposition 2.3]{CNTT} by considering the special one-parameter piecewise smooth vector field \begin{eqnarray} Z_\delta(x, y)=\left\{ \begin{aligned} &X(x, y)~~~~~&&{\rm if}~~y>0,\\ &Y(x, y)+(0, \delta)^\top~~~~~~~&&{\rm if}~~y<0, \end{aligned} \right. \label{ejeeewer} \end{eqnarray} where $X=(X_1, X_2)$ and $Y=(Y_1, Y_2)$ are $\mathcal{C}^1$ vector fields defined on $\mathbb{R}^2$ and $\delta\in\mathbb{R}$ is a parameter. \begin{figure}[htp] \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1.7in]{pseudo-1.eps} \caption*{{\small $\delta>0$}} \end{minipage} \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1.7in]{pseudohopf0.eps} \caption*{{\small $\delta=0$}} \end{minipage} \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=1.7in]{pseudohopf-1.eps} \caption*{{\small $\delta<0$}} \end{minipage} \caption{{\small The pseudo-Hopf bifurcation of (\ref{ejeeewer}) satisfying $X_1(0, 0)<0<Y_1(0, 0)$, where the origin is stable.}} \label{pseudohopfbifurcation} \end{figure} \begin{prop} For $\delta=0$ we assume that the origin is a stable {\rm(}resp. unstable{\rm)} pseudo-focus formed by an invisible-invisible fold-fold point of the piecewise smooth vector field $Z_\delta$ and that $X_1(0, 0)<0<Y_1(0, 0)$.
Then the vector field $Z_\delta$ exhibits a pseudo-Hopf bifurcation at $\delta=0$. More precisely, there exists some $\delta_0>0$ such that $Z_\delta$ has a stable {\rm(}resp. unstable{\rm)} crossing limit cycle bifurcating from the origin for $-\delta_0<\delta<0$ {\rm(}resp. $0<\delta<\delta_0${\rm)} and has no crossing limit cycles for $0<\delta<\delta_0$ {\rm(}resp. $-\delta_0<\delta<0${\rm)}. \label{pseudohopf} \end{prop} The proof of Proposition~\ref{pseudohopf} follows directly from the generalized Poincar\'e-Bendixson Theorem for piecewise smooth vector fields, see \cite{CTEE}. \section{Proof of Theorem~\ref{normalform}} \setcounter{equation}{0} \setcounter{lm}{0} \setcounter{thm}{0} \setcounter{rmk}{0} \setcounter{df}{0} \setcounter{cor}{0} This section is devoted to proving Theorem~\ref{normalform}. Let $Z=(X, Y)\in\Omega_0$. We start by studying the local sliding dynamics of $Z$ near the origin $O$. \begin{lm} For $Z=(X, Y)\in\Omega_0$ there exists a neighborhood ${\mathcal U}_0\subset {\mathcal U}$ of $O$ such that $\Sigma\cap {\mathcal U}_0$ is separated into two crossing sets by $O$. In addition, if $X_{2x}(0, 0)>0$ and $Y_{2x}(0, 0)>0$, the direction of $X$ and $Y$ on the right {\rm(}resp. left{\rm)} crossing set is upward {\rm(}resp. downward{\rm)}, while if $X_{2x}(0, 0)<0$ and $Y_{2x}(0, 0)<0$, the direction of $X$ and $Y$ on the right {\rm(}resp. left{\rm)} crossing set is downward {\rm(}resp. upward{\rm)}. \label{slidy} \end{lm} \begin{proof} Expanding $X_2(x, 0)$ and $Y_2(x, 0)$ around $x=0$ as \begin{eqnarray} X_2(x, 0)=X_{2x}(0, 0)x+\mathcal{O}(x^2),~~~~~~~~Y_2(x, 0)=Y_{2x}(0, 0)x+\mathcal{O}(x^2), \label{snajfaf} \end{eqnarray} we get $X_2(x, 0)Y_2(x, 0)=X_{2x}(0, 0)Y_{2x}(0, 0)x^2+\mathcal{O}(x^3)$.
By the definition of $\Omega_0$, we have $X_{2x}(0, 0)Y_{2x}(0, 0)>0$, so there exists a neighborhood ${\mathcal U}_0\subset {\mathcal U}$ of $O$ such that $X_2(x, 0)Y_2(x, 0)=0$ for $(x, 0)=O$ and $X_2(x, 0)Y_2(x, 0)>0$ for $(x, 0)\in ({\mathcal U}_0\cap\Sigma)\setminus\{O\}$. It follows from the definition of a crossing set that $\{(x, 0)\in {\mathcal U}_0\cap\Sigma: x<0\}$ and $\{(x, 0)\in {\mathcal U}_0\cap\Sigma: x>0\}$ are two crossing sets separated by $O$, i.e., the first part of Lemma~\ref{slidy} is proved. The second part follows directly from (\ref{snajfaf}). \end{proof} Our main idea for proving Theorem~\ref{normalform} is to provide a normal form for $Z\in\Omega_1\subset\Omega_0$ such that both $Z$ and the corresponding piecewise linear vector field $Z_L$ are locally $\Sigma$-equivalent to this normal form near the origin. Then $Z$ is locally $\Sigma$-equivalent to $Z_L$ near the origin, and the local phase portrait of $Z$ is the phase portrait of this normal form in the sense of $\Sigma$-equivalence. This will conclude the proof of Theorem~\ref{normalform}. Therefore, in what follows we study the normal forms of $Z\in\Omega_1$ using the method introduced in \cite{MG1, TCJ, TCJLC}. Such a method has been successfully applied to obtain the normal forms of piecewise smooth vector fields in $\Omega$ near a codimension-zero (resp. codimension-one) singularity in \cite{MG1} (resp. \cite{TCJ, TCJLC}), and near a $\Sigma$-center in \cite{CT, LXZ}.
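The sign analysis in the proof of Lemma~\ref{slidy} is easy to verify numerically. The following sketch (our illustration, not part of the paper) uses a hypothetical piecewise smooth field with $X_2(x,0)=x+x^2$ and $Y_2(x,0)=2x+x^3$, so that $X_{2x}(0,0)=1>0$ and $Y_{2x}(0,0)=2>0$, and checks that every point of a small punctured neighborhood of the origin in $\Sigma$ is a crossing point, upward for $x>0$ and downward for $x<0$:

```python
# Numerical check of the crossing-set dichotomy of Lemma "slidy" for a
# hypothetical piecewise smooth field (our example, not from the paper):
# on y = 0 take X_2(x, 0) = x + x^2 and Y_2(x, 0) = 2x + x^3, so that
# X_{2x}(0, 0) = 1 > 0 and Y_{2x}(0, 0) = 2 > 0.

def X2(x):
    """Second component of the upper field X on the switching line y = 0."""
    return x + x**2

def Y2(x):
    """Second component of the lower field Y on the switching line y = 0."""
    return 2*x + x**3

def crossing_direction(x):
    """Classify the point (x, 0): 'up'/'down' for crossing, 'sliding' otherwise."""
    p, q = X2(x), Y2(x)
    if p*q > 0:                      # both fields cross to the same side
        return 'up' if p > 0 else 'down'
    return 'sliding'

# On a small punctured neighborhood of the origin every sampled point is a
# crossing point: upward for x > 0 and downward for x < 0, as predicted.
samples = [k/100 for k in range(-10, 11) if k != 0]
directions = {x: crossing_direction(x) for x in samples}
```

Replacing the two sample components by any pair with $X_{2x}(0,0)Y_{2x}(0,0)>0$ gives the same dichotomy.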
To this end we classify $\Omega_1$ into the following six subsets: \vspace{-13pt} \begin{description} \setlength{\itemsep}{-0.8mm} \item{}$\Omega_{ff}=\{Z\in\Omega_1: \lambda^\pm_1,\lambda^\pm_2 \in \mathbb{C}\setminus \mathbb{R}\}$, \item{}$\Omega_{fn}=\{Z\in\Omega_1: {\rm either}~\lambda^+_1,\lambda^+_2\in \mathbb{C}\setminus \mathbb{R}, \lambda^-_1, \lambda^-_2\in \mathbb{R}, \lambda^-_1 \lambda^-_2>0~ {\rm or}~ \lambda^-_1,\lambda^-_2\in \mathbb{C}\setminus \mathbb{R}, \lambda^+_1, \lambda^+_2\in \mathbb{R}, \lambda^+_1 \lambda^+_2>0\}$, \item{}$\Omega_{fs}=\{Z\in\Omega_1:{\rm either}~\lambda^+_1,\lambda^+_2\in \mathbb{C}\setminus \mathbb{R}, \lambda^-_1, \lambda^-_2\in \mathbb{R}, \lambda^-_1 \lambda^-_2<0~ {\rm or}~ \lambda^-_1,\lambda^-_2\in \mathbb{C}\setminus \mathbb{R}, \lambda^+_1, \lambda^+_2\in \mathbb{R}, \lambda^+_1 \lambda^+_2<0\}$, \item{}$\Omega_{nn}=\{Z\in\Omega_1: \lambda^\pm_1,\lambda^\pm_2\in\mathbb{R}, \lambda^+_1\lambda^+_2>0, \lambda^-_1\lambda^-_2>0\}$, \item{}$\Omega_{ns}=\{Z\in\Omega_1: \lambda^\pm_1, \lambda^\pm_2\in\mathbb{R}, {\rm either}~\lambda^+_1\lambda^+_2>0, \lambda^-_1\lambda^-_2<0 ~{\rm or}~\lambda^+_1\lambda^+_2<0, \lambda^-_1\lambda^-_2>0\}$, \item{}$\Omega_{ss}=\{Z\in\Omega_1: \lambda^\pm_1,\lambda^\pm_2\in\mathbb{R}, \lambda^+_1\lambda^+_2<0, \lambda^-_1\lambda^-_2<0\}$. \end{description} \vspace{-13pt} Clearly, $$ \Omega_1=\Omega_{ff}\cup\Omega_{fn}\cup\Omega_{fs}\cup\Omega_{nn}\cup\Omega_{ns}\cup\Omega_{ss}. $$ Now we study the normal forms for $Z=(X, Y)\in\Omega_{ff}, \Omega_{fn}, \Omega_{fs}, \Omega_{nn}, \Omega_{ns}$ and $\Omega_{ss}$, respectively. \begin{lm} If $Z=(X, Y)\in\Omega_{ff}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{ff}=(X_{ff}, Y_{ff})\in\Omega_{ff}$ near the origin, where $$X_{ff}(x, y)=(\alpha x-y, x+\alpha y),\qquad Y_{ff}(x, y)=(\alpha x-y, x+\alpha y),$$ $\alpha={\rm sign}\ell$ and $\ell\ne0$ is defined in {\rm(\ref{eigen2})}. 
\label{ff} \end{lm} \begin{proof} Because $\Omega_{ff}\subset\Omega_1\subset\Omega_0$, $Z\in\Omega_{ff}$ satisfies (\ref{condi}) by the definition of $\Omega_0$. Using the change $(x, y)\rightarrow(-x, y)$, we only need to consider the case \begin{eqnarray} X_{2x}(0, 0)>0, ~~~~~~~Y_{2x}(0, 0)>0. \label{onecase} \end{eqnarray} Hence, $\Sigma\cap {\mathcal U}_0$ is separated into two crossing sets by $O$, and the direction of $X$ and $Y$ on the right {\rm(}resp. left{\rm)} crossing set is upward {\rm(}resp. downward{\rm)}, as seen in Lemma~\ref{slidy}. Recalling \cite[Theorem B]{CGP} and \cite[Theorem 1.2]{HZ}, we obtain that $O$ is a stable pseudo-focus if $\ell<0$ and an unstable pseudo-focus if $\ell>0$ for $Z\in\Omega_{ff}$ satisfying (\ref{onecase}), see Figure~\ref{ffn}. The vector field $Z_{ff}\in\Omega_{ff}$ is linear, and $O$ is a stable focus, as shown in (FF-1) of Figure~\ref{localphaseportraits}, if $\alpha=-1$, and an unstable focus, as shown in (FF-2) of Figure~\ref{localphaseportraits}, if $\alpha=1$. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{ff-n.eps} \caption*{(a)~ {\small $\ell<0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{ff+n.eps} \caption*{(b)~ {\small $\ell>0$}} \end{minipage} \caption{{\small Local phase portraits of $Z\in\Omega_{ff}$ satisfying (\ref{onecase}) near $O$.}} \label{ffn} \end{figure} Next we prove this lemma for the case $\ell<0$ and $\alpha=-1$. The case $\ell>0$ and $\alpha=1$ can be treated similarly.
Consider two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ as shown in Figure~\ref{focusfocus}, where ${\mathcal U}_0$ is given in Lemma~\ref{slidy}, $U$ is surrounded by the closed line segment $\overline{CA}\subset\Sigma$ and the orbital arc of $Z$ from $A$ to $C$ after passing through $B$, and $V$ is surrounded by the closed line segment $\overline{C_1A_1}\subset\Sigma$ and the orbital arc of $Z_{ff}$ from $A_1$ to $C_1$ after passing through $B_1$. Here an overline denotes closure. We need to construct a homeomorphism $H$ from $U$ to $V$ establishing the $\Sigma$-equivalence between $Z$ with $\ell<0$ and $Z_{ff}$ with $\alpha=-1$. \begin{figure} \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=4.8in]{focusfocus.eps} \end{minipage} \caption{{\small The homeomorphism $H$ between $Z\in\Omega_{ff}$ with $\ell<0$ and $Z_{ff}$ with $\alpha=-1$.}} \label{focusfocus} \end{figure} For $Z\in\Omega_{ff}$ satisfying (\ref{onecase}), $O$ is an anticlockwise rotary equilibrium of focus type of $X$ and $Y$. Thus, given $P\in \overline{OA}$, there exists a first time $t_1=t_1(P)\ge0$ such that $\Phi^+(t_1, P)\in \overline{OB}$, and a first time $t_2=t_2(\Phi^+(t_1, P))\ge0$ such that $\Phi^-\left(t_2, \Phi^+(t_1, P)\right)\in \overline{OC}$, where $\Phi^+$ and $\Phi^-$ denote the flows of $X$ and $Y$ respectively. This means that we can define a Poincar\'e map $\mathcal{P}: \overline{OA}\rightarrow \overline{OC}$ by \begin{eqnarray} \mathcal{P}(P)=\Phi^-\left(t_2, \Phi^+(t_1, P)\right). \label{afnf} \end{eqnarray} In particular, $\mathcal{P}(O)=O$ and $\mathcal{P}(A)=C$, since $A$ and $C$ lie on the same orbit. Let $(x_P, 0)$ and $(\mathcal{P}_1(x_P), \mathcal{P}_2(x_P))$ be the coordinates of $P$ and $\mathcal{P}(P)$ respectively. Then $\mathcal{P}_2(x_P)=0$ and $\mathcal{P}_1(x_P)$ is given by $$\mathcal{P}_1(x_P)=e^{\ell\pi} x_P+\mathcal{O}(x^2_P)$$ from \cite[Theorem 1.1, Theorem 1.2]{HZ}.
Similarly, denoting the flows of $X_{ff}$ and $Y_{ff}$ by $\Psi^+$ and $\Psi^-$ respectively, we can define a Poincar\'e map $\mathcal{Q}: \overline{OA_1}\rightarrow \overline{OC_1}$ by \begin{eqnarray} \mathcal{Q}(P)=\Psi^-\left(s_2, \Psi^+(s_1, P)\right), \label{dasjfn} \end{eqnarray} which satisfies $\mathcal{Q}(O)=O$ and $\mathcal{Q}(A_1)=C_1$, where $s_1=s_1(P)\ge0$ is the first time such that $\Psi^+(s_1, P)\in \overline{OB_1}$, and $s_2=s_2(\Psi^+(s_1, P))\ge0$ is the first time such that $\Psi^-\left(s_2, \Psi^+(s_1, P)\right)\in\overline{OC_1}$. Let $(\mathcal{Q}_1(x_P), \mathcal{Q}_2(x_P))$ be the coordinates of $\mathcal{Q}(P)$. Then $\mathcal{Q}_2(x_P)=0$ and a straightforward calculation yields $$\mathcal{Q}_1(x_P)=e^{-2\pi}x_P.$$ Since we are considering the case $\ell<0$, according to the linearization and conjugacy theory of smooth maps \cite{PH1}, $U$ and $V$ can be chosen to ensure that there exists a homeomorphism $h: [0, x_A]\rightarrow [0, x_{A_1}]$ satisfying \begin{eqnarray} h(0)=0, \qquad h(x_A)=x_{A_1}, \qquad h(\mathcal{P}_1(x_P))=\mathcal{Q}_1(h(x_P)), \label{jaffse} \end{eqnarray} where $x_A$ and $x_{A_1}$ are the first coordinates of $A$ and $A_1$ respectively. Consequently, we define a homeomorphism $H_0: \overline{OA}\rightarrow \overline{OA_1}$ by \begin{eqnarray} H_0(P)=H_0(x_P, 0)=(h(x_P), 0) \qquad {\rm for} \quad P\in \overline{OA}. \label{ajfff} \end{eqnarray} Clearly, it follows from (\ref{jaffse}) that $H_0(O)=O$, $H_0(A)=A_1$ and $H_0(C)=C_1$. Given $P\in\overline{OB}$, there exists a first time $t_3=t_3(P)\le0$ such that $\Phi^+(t_3, P)\in\overline{OA}$, since $O$ is an anticlockwise rotary equilibrium of focus type of $X$. Then $H_0(\Phi^+(t_3, P))\in\overline{OA_1}$ and there exists a first time $s_3=s_3(H_0(\Phi^+(t_3, P)))\ge0$ such that $\Psi^+(s_3, H_0(\Phi^+(t_3, P)))\in\overline{OB_1}$ because $O$ is an anticlockwise rotary focus of $X_{ff}$.
By the arc length parametrization we can identify the orbital arc of $X$ from $\Phi^+(t_3, P)$ to $P$ with the one of $X_{ff}$ from $H_0(\Phi^+(t_3, P))$ to $\Psi^+(s_3, H_0(\Phi^+(t_3, P)))$. Therefore, in this way we can define a homeomorphism $H^+: \overline{\Sigma^+\cap U}\rightarrow\overline{\Sigma^+\cap V}$ that maps $\overline{BA}$ onto $\overline{B_1A_1}$, maps the orbits of $X$ in $\overline{\Sigma^+\cap U}$ onto the orbits of $X_{ff}$ in $\overline{\Sigma^+\cap V}$ and satisfies \begin{eqnarray} \left.H^+\right|_{\overline{OA}}=H_0. \label{dnjvdhf} \end{eqnarray} Given $P\in\overline{OC}$, there exists a first time $t_4=t_4(P)\le0$ such that $\Phi^-(t_4, P)\in\overline{OB}$. Then $H^+(\Phi^-(t_4, P))\in\overline{OB_1}$ from the definition of $H^+$, and there exists a first time $s_4=s_4(H^+(\Phi^-(t_4, P)))\ge0$ such that $\Psi^-\left(s_4, H^+(\Phi^-(t_4, P))\right)\in\overline{OC_1}$. Similarly we can identify the orbital arc of $Y$ from $\Phi^-(t_4, P)$ to $P$ with the one of $Y_{ff}$ from $H^+(\Phi^-(t_4, P))$ to $\Psi^-\left(s_4, H^+(\Phi^-(t_4, P))\right)$, and thus define a homeomorphism $H^-: \overline{\Sigma^-\cap U}\rightarrow\overline{\Sigma^-\cap V}$ that maps $\overline{BC}$ onto $\overline{B_1C_1}$, maps the orbits of $Y$ in $\overline{\Sigma^-\cap U}$ onto the orbits of $Y_{ff}$ in $\overline{\Sigma^-\cap V}$ and satisfies \begin{eqnarray} \left.H^-\right|_{\overline{OB}}=\left.H^+\right|_{\overline{OB}}. \label{dnjvdhwewf} \end{eqnarray} Moreover, for any $P\in\overline{OC}$ we have $$ \begin{aligned} H^-(P)&=\Psi^-\left(s_4, H^+(\Phi^-(t_4, P))\right)=\Psi^-\left(s_4, \Psi^+(s_3, H_0(\Phi^+(t_3, \Phi^-(t_4, P))))\right)\\ &=\mathcal{Q}(H_0(\Phi^+(t_3, \Phi^-(t_4, P))))=H_0(\mathcal{P}(\Phi^+(t_3, \Phi^-(t_4, P))))\\ &=H_0(P) \end{aligned} $$ by (\ref{afnf}), (\ref{dasjfn}), (\ref{jaffse}), (\ref{ajfff}) and the constructions of $H^\pm$. This implies that \begin{eqnarray} \left.H^-\right|_{\overline{OC}}=\left.H_0\right|_{\overline{OC}}. 
\label{dnejnfew} \end{eqnarray} Let \begin{eqnarray} H(P)=\left\{ \begin{aligned} &H^+(P) \qquad &&{\rm for}\quad P\in(\Sigma^+\cup\Sigma)\cap U,\\ &H^-(P) \qquad &&{\rm for}\quad P\in(\Sigma^-\cup\Sigma)\cap U. \end{aligned} \right. \label{asjfnjfec} \end{eqnarray} Then $H$ is a homeomorphism from $U$ to $V$ because $H^\pm$ are homeomorphisms in their domains and $\left.H^+\right|_{\overline{BC}}=\left.H^-\right|_{\overline{BC}}$ by (\ref{dnjvdhf}), (\ref{dnjvdhwewf}) and (\ref{dnejnfew}). Furthermore, the construction of $H$ ensures that $H$ maps the orbits of $Z\in\Omega_{ff}$ with $\ell<0$ in $U$ onto the orbits of $Z_{ff}$ with $\alpha=-1$ in $V$, preserving the direction of time and the switching line $\Sigma$. We eventually conclude that $Z\in\Omega_{ff}$ with $\ell<0$ and $Z_{ff}$ with $\alpha=-1$ are locally $\Sigma$-equivalent near $O$. \end{proof} \begin{lm} If $Z=(X, Y)\in\Omega_{fn}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{fn}=(X_{fn}, Y_{fn})\in\Omega_{fn}$ near the origin, where $$X_{fn}(x, y)=(-y, x), \qquad Y_{fn}(x, y)=(2\beta x+y, x+2\beta y)$$ and $$\beta= \left\{ \begin{aligned} &{\rm sign}(\lambda^-_1+\lambda^-_2) \qquad {\rm when}~ \lambda^-_1,\lambda^-_2\in\mathbb{R},\\ &{\rm sign}(\lambda^+_1+\lambda^+_2) \qquad {\rm when}~ \lambda^+_1,\lambda^+_2\in\mathbb{R}. \end{aligned} \right. $$ \label{fn} \end{lm} \begin{proof} By $(x, y)\rightarrow(x, -y)$ and $(x, y)\rightarrow(-x, y)$ we only need to consider $Z\in\Omega_{fn}$ satisfying (\ref{onecase}) and $$\lambda^+_1,\lambda^+_2\in\mathbb{C}\setminus\mathbb{R}, \qquad \lambda^-_1,\lambda^-_2\in\mathbb{R}, \qquad\lambda^-_1\lambda^-_2>0.$$ In this case, $O$ is an equilibrium of focus type of $X$ and a node of $Y$ by \cite[Theorems 4.2, 4.3, 5.1]{ZZF}. 
Thus, recalling the dynamics on $\Sigma$ given in Lemma~\ref{slidy}, we get two different types of local phase portraits of $Z$ near $O$ depending on the sign of $\lambda^-_1+\lambda^-_2$, namely the stability of $O$ when it is regarded as an equilibrium of $Y$, see Figure~\ref{fnn}. In Figure~\ref{fnn}(a), the strong unstable manifold $m^u_s$ lies on the left side of the weak unstable manifold $m^u_w$, while in Figure~\ref{fnn}(b), the strong stable manifold $m^s_s$ lies on the right side of the weak stable manifold $m^s_w$. Here we use the assumption that $\lambda^-_1\ne\lambda^-_2$ for all vector fields in $\Omega_1$. Regarding the vector field $Z_{fn}$, we easily verify that its phase portrait is either the one shown in (FN-1) of Figure~\ref{localphaseportraits} if $\beta=1$, or the one shown in (FN-2) of Figure~\ref{localphaseportraits} if $\beta=-1$. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{fn+nn.eps} \caption*{(a)~ {\small $\lambda^-_1+\lambda^-_2>0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{fn-nn.eps} \caption*{(b)~ {\small $\lambda^-_1+\lambda^-_2<0$}} \end{minipage} \caption{{\small Local phase portraits of $Z\in\Omega_{fn}$ satisfying (\ref{onecase}) and $\lambda^+_1,\lambda^+_2\in\mathbb{C}\setminus\mathbb{R}, \lambda^-_1,\lambda^-_2\in\mathbb{R}, \lambda^-_1\lambda^-_2>0$ near $O$.}} \label{fnn} \end{figure} We only consider $\lambda^-_1+\lambda^-_2>0$ and $\beta=1$ because the case $\lambda^-_1+\lambda^-_2<0$ and $\beta=-1$ is similar.
Consider two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ as shown in Figure~\ref{focusnode}, where ${\mathcal U}_0$ is given in Lemma~\ref{slidy}, $U$ is surrounded by the orbital arc $\widehat{AB}$ of $X$ from $A$ to $B$ and the arc $\widehat{BA}$, to which $Y$ is transverse, and $V$ is surrounded by the orbital arc $\widehat{A_2B_2}$ of $X_{fn}$ from $A_2$ to $B_2$ and the arc $\widehat{B_2A_2}$, to which the vector field $Y_{fn}$ is transverse. We need to construct a homeomorphism $H$ from $U$ to $V$ providing the $\Sigma$-equivalence between $Z\in\Omega_{fn}$ with $\lambda^-_1+\lambda^-_2>0$ and $Z_{fn}$ with $\beta=1$. \begin{figure} \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=5in]{focusnode.eps} \end{minipage} \caption{{\small The homeomorphism $H$ between $Z\in\Omega_{fn}$ with $\lambda^-_1+\lambda^-_2>0$ and $Z_{fn}$ with $\beta=1$.}} \label{focusnode} \end{figure} By the arc length parametrization there exists a homeomorphism $H_0: \overline{OA}\rightarrow \overline{OA_2}$ such that $H_0(O)=O$ and $H_0(A)=A_2$. Since $O$ is an anticlockwise rotary equilibrium of focus type of $X$, the forward orbit of $X$ starting from $P\in\overline{OA}$ evolves in $\overline{\Sigma^+\cap U}$ until it reaches $\overline{OB}$ at a point $Q$. Then $H_0(P)\in\overline{OA_2}$. Since $O$ is an anticlockwise rotary center of $X_{fn}$, the forward orbit of $X_{fn}$ starting from $H_0(P)$ evolves in $\overline{\Sigma^+\cap V}$ until it reaches $\overline{OB_2}$ at a point $Q_2$. By the arc length parametrization we can identify the orbital arc of $X$ from $P$ to $Q$ with the one of $X_{fn}$ from $H_0(P)$ to $Q_2$.
In this way we can define a homeomorphism $H_f: \overline{\Sigma^+\cap U}\rightarrow\overline{\Sigma^+\cap V}$ that maps $\overline{BA}$ onto $\overline{B_2A_2}$, maps the orbits of $X$ in $\overline{\Sigma^+\cap U}$ onto the orbits of $X_{fn}$ in $\overline{\Sigma^+\cap V}$ and satisfies \begin{eqnarray} \left.H_f\right|_{\overline{OA}}=H_0. \label{itoruo} \end{eqnarray} Consider the region $R_{BOC}$ surrounded by $\overline{OB}$, $\widehat{BC}$ and the strong unstable manifold $\widehat{OC}$, and the corresponding region $R_{B_2OC_2}$ surrounded by $\overline{OB_2}$, $\widehat{B_2C_2}$ and the strong unstable manifold $\widehat{OC_2}$. Given $P\in\overline{OB}$, there exists a unique point $Q\in\widehat{BC}$ such that the backward orbit of $Y$ starting from $Q$ evolves in $\overline{R_{BOC}}$ until it reaches or tends to $\overline{OB}$ at $P$, since $\widehat{OC}$ is the strong unstable manifold of the node $O$ for $Y$ and we are assuming that the vector field $Y$ on $\widehat{BA}$ is transverse to $\widehat{BA}$. Analogously, there exists a unique point $Q_2\in\widehat{B_2C_2}$ such that the backward orbit of $Y_{fn}$ starting from $Q_2$ evolves in $\overline{R_{B_2OC_2}}$ until it reaches or tends to $\overline{OB_2}$ at $H_f(P)$. Therefore, by the arc length parametrization again we can identify the orbital arc of $Y$ from $P$ to $Q$ with the one of $Y_{fn}$ from $H_f(P)$ to $Q_2$, and then define a homeomorphism $H^1_n: \overline{R_{BOC}}\rightarrow \overline{R_{B_2OC_2}}$ that maps the orbits of $Y$ in $\overline{R_{BOC}}$ onto the orbits of $Y_{fn}$ in $\overline{R_{B_2OC_2}}$ and satisfies \begin{eqnarray} \left.H^1_n\right|_{\overline{OB}}=\left.H_f\right|_{\overline{OB}}. \label{uafgkfh} \end{eqnarray} Consider the region $R_{COA}$ surrounded by $\widehat{OC}$, $\widehat{CA}$ and $\overline{OA}$, and the corresponding region $R_{C_2OA_2}$ surrounded by $\widehat{OC_2}$, $\widehat{C_2A_2}$ and $\overline{OA_2}$.
Regarding the arcs $\widehat{CA}$ and $\widehat{C_2A_2}$, we obtain a homeomorphism $H^0_n: \widehat{CA}\rightarrow\widehat{C_2A_2}$ such that $H^0_n(C)=C_2$ and $H^0_n(A)=A_2$ by the arc length parametrization. Since the choice of $U$ ensures that the vector field $Y$ on $(\widehat{CA}\cup\overline{OA})\setminus O$ is transverse to $(\widehat{CA}\cup\overline{OA})\setminus O$, the backward orbit of $Y$ starting from $P\in(\widehat{CA}\cup\overline{OA})\setminus O$ evolves in $\overline{R_{COA}}$ and finally tends to $O$. Let $P_2=H_0(P)$ if $P\in\overline{OA}$ and $P_2=H^0_n(P)$ if $P\in\widehat{CA}$. Then the backward orbit of $Y_{fn}$ starting from $P_2$ evolves in $\overline{R_{C_2OA_2}}$ and tends to $O$. Identify the orbital arc of $Y$ from $P$ to $O$ with the orbital arc of $Y_{fn}$ from $P_2$ to $O$. In this way we can define a homeomorphism $H^2_n: \overline{R_{COA}}\rightarrow \overline{R_{C_2OA_2}}$ that maps the orbits of $Y$ in $\overline{R_{COA}}$ onto the orbits of $Y_{fn}$ in $\overline{R_{C_2OA_2}}$ and satisfies \begin{eqnarray} \left.H^2_n\right|_{\widehat{OC}}=\left.H^1_n\right|_{\widehat{OC}}, \qquad \left.H^2_n\right|_{\overline{OA}}=H_0, \qquad \left.H^2_n\right|_{\widehat{CA}}=H^0_n. \label{urcnkfh} \end{eqnarray} Joining the homeomorphisms $H^1_n$ and $H^2_n$, by (\ref{itoruo}), (\ref{uafgkfh}) and (\ref{urcnkfh}) we obtain that $$ H_n(P)=\left\{ \begin{aligned} &H^1_n(P) \qquad &&{\rm for} \quad P\in \overline{R_{BOC}},\\ &H^2_n(P) \qquad &&{\rm for} \quad P\in \overline{R_{COA}}, \end{aligned} \right. $$ is a homeomorphism from $\overline{\Sigma^-\cap U}$ to $\overline{\Sigma^-\cap V}$ that maps the orbits of $Y$ in $\overline{\Sigma^-\cap U}$ onto the orbits of $Y_{fn}$ in $\overline{\Sigma^-\cap V}$ and satisfies $\left.H_n\right|_{\overline{BA}}=\left.H_f\right|_{\overline{BA}}$.
Consequently, the homeomorphisms $H_n$ and $H_f$ form a homeomorphism $H: U\rightarrow V$ that maps the orbits of $Z\in\Omega_{fn}$ with $\lambda^-_1+\lambda^-_2>0$ in $U$ onto the orbits of $Z_{fn}$ with $\beta=1$ in $V$, preserving the direction of time and the switching line $\Sigma$. This concludes the proof of Lemma~\ref{fn}. \end{proof} \begin{lm} If $Z=(X, Y)\in\Omega_{fs}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{fs}=(X_{fs}, Y_{fs})\in\Omega_{fs}$ near the origin, where $$X_{fs}(x, y)=(-y, x), \qquad Y_{fs}(x, y)=(y, x).$$ \label{fsfsfs} \end{lm} \begin{proof} Using the changes $(x, y)\rightarrow(x, -y)$ and $(x, y)\rightarrow(-x, y)$, we only need to consider $Z\in\Omega_{fs}$ satisfying (\ref{onecase}) and $$\lambda^+_1, \lambda^+_2\in\mathbb{C}\setminus\mathbb{R},\qquad \lambda^-_1, \lambda^-_2\in\mathbb{R}, \qquad \lambda^-_1\lambda^-_2<0.$$ In this case, $O$ is an equilibrium of focus type of $X$ and a saddle of $Y$ by \cite[Theorems 4.2, 4.4, 5.1]{ZZF}. Reviewing the dynamics on $\Sigma$ given in Lemma~\ref{slidy}, we depict the local phase portrait of $Z$ near $O$ as shown in Figure~\ref{fsn}. The phase portrait of the vector field $Z_{fs}$ is as shown in (FS) of Figure~\ref{localphaseportraits}. \begin{figure} \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=1.7in]{fsn.eps} \end{minipage} \caption{{\small Local phase portrait of $Z\in\Omega_{fs}$ satisfying (\ref{onecase}) and $\lambda^+_1, \lambda^+_2\in\mathbb{C}\setminus\mathbb{R}, \lambda^-_1, \lambda^-_2\in\mathbb{R}, \lambda^-_1\lambda^-_2<0$ near $O$.}} \label{fsn} \end{figure} Consider two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ as shown in Figure~\ref{focussaddle}, where $\widehat{AB}$ and $\widehat{A_3B_3}$ are the corresponding orbital arcs, and $\widehat{BA}$ (resp. $\widehat{B_3A_3}$) is the arc to which the vector field $Y$ (resp. $Y_{fs}$) is transverse.
\begin{figure} \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=4.8in]{focussaddle.eps} \end{minipage} \caption{{\small The homeomorphism $H$ between $Z\in\Omega_{fs}$ and $Z_{fs}$.}} \label{focussaddle} \end{figure} As done in the proof of Lemma~\ref{fn}, we can define a homeomorphism $H_f: \overline{\Sigma^+\cap U}\rightarrow\overline{\Sigma^+\cap V}$ that maps $\overline{BA}$ onto $\overline{B_3A_3}$, and maps the orbits of $X$ in $\overline{\Sigma^+\cap U}$ onto the orbits of $X_{fs}$ in $\overline{\Sigma^+\cap V}$. In order to complete the proof, we next construct a homeomorphism $H_s: \overline{\Sigma^-\cap U}\rightarrow\overline{\Sigma^-\cap V}$ that maps the orbits of $Y$ in $\overline{\Sigma^-\cap U}$ onto the orbits of $Y_{fs}$ in $\overline{\Sigma^-\cap V}$ and satisfies $\left.H_s\right|_{\overline{BA}}=\left.H_f\right|_{\overline{BA}}$. Let $$\widehat{OD}=\{(x, y)\in\overline{\Sigma^-\cap U}: Y_2(x, y)=0\}, \qquad\overline{OD_3}=\{(x, y)\in\overline{\Sigma^-\cap V}: x=0\},$$ where $Y_2$ is the second component of $Y$. Then there exists a homeomorphism $H^0_s: \widehat{OD}\rightarrow\overline{OD_3}$ such that $H^0_s(O)=O$ and $H^0_s(D)=D_3$ by the arc length parametrization. Consider the region $R_{BOD}$ surrounded by $\overline{OB}$, $\widehat{BD}$ and $\widehat{OD}$, and the region $R_{B_3OD_3}$ surrounded by $\overline{OB_3}$, $\widehat{B_3D_3}$ and $\overline{OD_3}$. Given $P\in\overline{OB}\cup\widehat{OD}$, there exists a unique point $Q\in\widehat{BD}$ such that the backward orbit of $Y$ starting from $Q$ evolves in $\overline{R_{BOD}}$ until it either reaches $P$ when $P\ne O$ or tends to $O$ when $P=O$, since we require that the vector field $Y$ on $\widehat{BD}$ is transverse to $\widehat{BD}$. Let $P_3=H_f(P)$ if $P\in\overline{OB}$ and $P_3=H^0_s(P)$ if $P\in\widehat{OD}$.
We obtain a unique point $Q_3\in\widehat{B_3D_3}$ such that the backward orbit of $Y_{fs}$ starting from $Q_3$ evolves in $\overline{R_{B_3OD_3}}$ until it reaches or tends to $P_3$. The arc length parametrization allows us to identify the orbital arc of $Y$ from $Q$ to $P$ with the one of $Y_{fs}$ from $Q_3$ to $P_3$. In this way we can define a homeomorphism $H^1_s: \overline{R_{BOD}}\rightarrow\overline{R_{B_3OD_3}}$ that maps the orbits of $Y$ in $\overline{R_{BOD}}$ onto the orbits of $Y_{fs}$ in $\overline{R_{B_3OD_3}}$ and satisfies \begin{eqnarray} \left.H^1_s\right|_{\overline{OB}}=\left.H_f\right|_{\overline{OB}}, \qquad \left.H^1_s\right|_{\widehat{OD}}=H^0_s. \label{anfjfcdsfm} \end{eqnarray} An argument similar to that of the last paragraph yields a homeomorphism $H^2_s: \overline{R_{DOA}}\rightarrow\overline{R_{D_3OA_3}}$ that maps the orbits of $Y$ in $\overline{R_{DOA}}$ onto the orbits of $Y_{fs}$ in $\overline{R_{D_3OA_3}}$ and satisfies \begin{eqnarray} \left.H^2_s\right|_{\overline{OA}}=\left.H_f\right|_{\overline{OA}}, \qquad \left.H^2_s\right|_{\widehat{OD}}=H^0_s. \label{anfjfcafdsfm} \end{eqnarray} Thus, joining the homeomorphisms $H^1_s$ and $H^2_s$ we construct $H_s$ as $$ H_s(P)=\left\{ \begin{aligned} &H^1_s(P) \qquad &&{\rm for} \quad P\in \overline{R_{BOD}},\\ &H^2_s(P) \qquad &&{\rm for} \quad P\in \overline{R_{DOA}}. \end{aligned} \right. $$ From (\ref{anfjfcdsfm}) and (\ref{anfjfcafdsfm}), it follows that $H_s$ is a homeomorphism from $\overline{\Sigma^-\cap U}$ to $\overline{\Sigma^-\cap V}$ that maps the orbits of $Y$ in $\overline{\Sigma^-\cap U}$ onto the orbits of $Y_{fs}$ in $\overline{\Sigma^-\cap V}$ and satisfies $\left.H_s\right|_{\overline{BA}}=\left.H_f\right|_{\overline{BA}}$. Consequently, the homeomorphisms $H_s$ and $H_f$ form a homeomorphism $H: U\rightarrow V$ that maps the orbits of $Z\in\Omega_{fs}$ in $U$ onto the orbits of $Z_{fs}$ in $V$, preserving the direction of time and the switching line $\Sigma$.
This proves Lemma~\ref{fsfsfs}. \end{proof} \begin{lm} If $Z=(X, Y)\in\Omega_{nn}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{nn}=(X_{nn}, Y_{nn})\in\Omega_{nn}$ near the origin, where $$X_{nn}(x, y)=(2\gamma x+y, x+2\gamma y), \qquad Y_{nn}(x, y)=(2\eta x+y, x+2\eta y),$$ and $$ \left\{ \begin{aligned} &\gamma=\eta={\rm sign}(\lambda^+_1+\lambda^+_2) \qquad &&{\rm when} \quad (\lambda^+_1+\lambda^+_2)(\lambda^-_1+\lambda^-_2)>0,\\ &\gamma=-\eta=1 \qquad &&{\rm when} \quad (\lambda^+_1+\lambda^+_2)(\lambda^-_1+\lambda^-_2)<0. \end{aligned} \right. $$ \label{nn} \end{lm} \begin{proof} For $Z\in\Omega_{nn}$ we know that $O$ is a node of both $X$ and $Y$, each with two distinct eigenvalues, by \cite[Theorem 4.3]{ZZF}. Moreover, using the change $(x, y)\rightarrow(-x, y)$ it is enough to consider $Z\in\Omega_{nn}$ satisfying (\ref{onecase}). In this case, according to the dynamics on $\Sigma$ given in Lemma~\ref{slidy}, we get four local phase portraits of $Z$ near $O$ as shown in Figure~\ref{nnn}, depending on the signs of $\lambda^\pm_1+\lambda^\pm_2$, namely the stability of $O$ as an equilibrium of $X$ and of $Y$. However, we notice that the phase portrait (d) of Figure~\ref{nnn} can be transformed into (b) of Figure~\ref{nnn} by the change $(x, y)\rightarrow(-x, -y)$, so that there are essentially three different types of local phase portraits of $Z$ near $O$. In addition, a simple analysis shows that the phase portrait of $Z_{nn}$ is (NN-1) (resp. (NN-2) and (NN-3)) of Figure~\ref{localphaseportraits} if $\gamma=\eta=1$ (resp. $\gamma=-\eta=1$ and $\gamma=\eta=-1$).
\begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{nn++n.eps} \caption*{(a)~ {\small $\lambda^+_1+\lambda^+_2>0,~\lambda^-_1+\lambda^-_2>0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{nn+-n.eps} \caption*{(b)~ {\small $\lambda^+_1+\lambda^+_2>0,~\lambda^-_1+\lambda^-_2<0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{nn--n.eps} \caption*{(c)~ {\small $\lambda^+_1+\lambda^+_2<0,~\lambda^-_1+\lambda^-_2<0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{nn-+n.eps} \caption*{(d)~ {\small $\lambda^+_1+\lambda^+_2<0,~\lambda^-_1+\lambda^-_2>0$}} \end{minipage} \caption{{\small Local phase portraits of $Z\in\Omega_{nn}$ satisfying (\ref{onecase}) near $O$.}} \label{nnn} \end{figure} The homeomorphism between $Z\in\Omega_{nn}$ and $Z_{nn}$ can be constructed by a method similar to that of the foregoing lemmas. In fact, consider the case $\lambda^+_1+\lambda^+_2>0, \lambda^-_1+\lambda^-_2>0$ and $\gamma=\eta=1$ as an example. We can choose two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ such that $Z$ is transverse to the boundary of $U$ and $Z_{nn}$ is transverse to the boundary of $V$. Then there is always a homeomorphism $H: \Sigma\cap U\rightarrow \Sigma\cap V$ satisfying $H(O)=O, H(\Sigma_l\cap U)=\Sigma_l\cap V$ and $H(\Sigma_r\cap U)=\Sigma_r\cap V$, where $\Sigma_l=\{(x, 0)\in {\mathcal U}: x<0\}$ and $\Sigma_r=\{(x, 0)\in {\mathcal U}: x>0\}$. As in the construction of $H_n$ in the proof of Lemma~\ref{fn}, we can extend $H$ to $\Sigma^+\cap U$ and $\Sigma^-\cap U$, respectively, and finally obtain a homeomorphism from $U$ to $V$ that provides the $\Sigma$-equivalence between $Z\in\Omega_{nn}$ with $\lambda^+_1+\lambda^+_2>0, \lambda^-_1+\lambda^-_2>0$ and $Z_{nn}$ with $\gamma=\eta=1$. That is, Lemma~\ref{nn} holds.
\end{proof} \begin{lm} If $Z=(X, Y)\in\Omega_{ns}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{ns}=(X_{ns}, Y_{ns})\in\Omega_{ns}$ near the origin, where $$X_{ns}(x, y)=(2\xi x+y, x+2\xi y),\qquad Y_{ns}(x, y)=(y, x),$$ and $$\xi= \left\{ \begin{aligned} &{\rm sign}(\lambda^-_1+\lambda^-_2) \qquad {\rm when}~ \lambda^-_1\lambda^-_2>0,\\ &{\rm sign}(\lambda^+_1+\lambda^+_2) \qquad {\rm when}~ \lambda^+_1\lambda^+_2>0. \end{aligned} \right. $$ \label{ns} \end{lm} \begin{proof} Using the changes $(x, y)\rightarrow(x, -y)$ and $(x, y)\rightarrow(-x, y)$, we only need to consider $Z\in\Omega_{ns}$ satisfying (\ref{onecase}), $\lambda^+_1\lambda^+_2>0$ and $\lambda^-_1\lambda^-_2<0$. In this case, $O$ is a node of $X$ and a saddle of $Y$ by \cite[Theorems 4.3, 4.4]{ZZF}. Combining this with the dynamics on $\Sigma$ given in Lemma~\ref{slidy}, we get two different types of local phase portraits of $Z$ near $O$ as shown in Figure~\ref{nsn}, depending on the sign of $\lambda^+_1+\lambda^+_2$. Regarding $Z_{ns}$, its phase portrait is (NS-1) (resp. (NS-2)) of Figure~\ref{localphaseportraits} if $\xi=1$ (resp. $\xi=-1$). \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.75in]{ns+n.eps} \caption*{(a)~ {\small $\lambda^+_1+\lambda^+_2>0$}} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.75in]{ns-n.eps} \caption*{(b)~ {\small $\lambda^+_1+\lambda^+_2<0$}} \end{minipage} \caption{{\small Local phase portraits of $Z\in\Omega_{ns}$ satisfying (\ref{onecase}), $\lambda^+_1\lambda^+_2>0$ and $\lambda^-_1\lambda^-_2<0$ near $O$.}} \label{nsn} \end{figure} Consider two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ such that $Z$ is transverse to the boundary of $U$ and $Z_{ns}$ is transverse to the boundary of $V$.
For each one of the above two cases, we can define a homeomorphism $H$ with $H(O)=O$ that identifies $\Sigma\cap U$ with $\Sigma\cap V$ by the arc length parametrization. Then $H$ can be extended to $\Sigma^+\cap U$ (resp. $\Sigma^-\cap U$) as in the construction of $H_n$ (resp. $H_s$) in the proof of Lemma~\ref{fn} (resp. Lemma~\ref{fsfsfs}). That is, $H$ is a homeomorphism from $U$ to $V$ that provides the $\Sigma$-equivalence, which proves Lemma~\ref{ns}. \end{proof} \begin{lm} If $Z=(X, Y)\in\Omega_{ss}$, then $Z$ is locally $\Sigma$-equivalent to $Z_{ss}=(X_{ss}, Y_{ss})\in\Omega_{ss}$ near the origin, where $$X_{ss}(x, y)=(y, x),\qquad Y_{ss}(x, y)=(y, x).$$ \label{ssss} \end{lm} \begin{proof} For $Z\in\Omega_{ss}$ we know that $O$ is a saddle of both $X$ and $Y$ by \cite[Theorem 4.4]{ZZF}. Using the change $(x, y)\rightarrow(-x, y)$ we only need to consider $Z\in\Omega_{ss}$ satisfying (\ref{onecase}). Together with the dynamics on $\Sigma$ given in Lemma~\ref{slidy}, this implies that the local phase portrait of $Z$ near $O$ is as shown in Figure~\ref{ssn}. Moreover, the phase portrait of $Z_{ss}$ is (SS) of Figure~\ref{localphaseportraits}. \begin{figure} \begin{minipage}[t]{1.0\linewidth} \centering \includegraphics[width=1.8in]{ssn.eps} \end{minipage} \caption{{\small Local phase portrait of $Z\in\Omega_{ss}$ satisfying (\ref{onecase}) near $O$.}} \label{ssn} \end{figure} Consider two sufficiently small neighborhoods $U\subset {\mathcal U}_0$ and $V\subset {\mathcal U}_0$ of $O$ such that $Z$ is transverse to the boundary of $U$ and $Z_{ss}$ is transverse to the boundary of $V$. We can define a homeomorphism $H$ with $H(O)=O$ that identifies $\Sigma\cap U$ with $\Sigma\cap V$ by the arc length parametrization.
Repeating the construction of $H_s$ in the proof of Lemma~\ref{fsfsfs}, we extend $H$ to $\Sigma^+\cap U$ and $\Sigma^-\cap U$ respectively, and finally obtain a homeomorphism from $U$ to $V$ that provides the $\Sigma$-equivalence between $Z\in\Omega_{ss}$ and $Z_{ss}$. This proves Lemma~\ref{ssss}. \end{proof} We are now in a position to prove Theorem~\ref{normalform}. \begin{proof}[{\bf Proof of Theorem~\ref{normalform}}] For $Z\in\Omega_{ff}$ (resp. $\Omega_{fn}, \Omega_{fs}, \Omega_{nn}, \Omega_{ns}, \Omega_{ss}$), the corresponding piecewise linear vector field $Z_L$ given in (\ref{pwl}) is also in $\Omega_{ff}$ (resp. $\Omega_{fn}, \Omega_{fs}, \Omega_{nn}, \Omega_{ns}, \Omega_{ss}$). Thus, by Lemmas~\ref{ff}-\ref{ssss} both $Z$ and $Z_L$ are locally $\Sigma$-equivalent to $Z_{ff}$ (resp. $Z_{fn}, Z_{fs}, Z_{nn}, Z_{ns}, Z_{ss}$) near $O$, which implies that $Z$ is locally $\Sigma$-equivalent to $Z_L$ near $O$ if $Z\in\Omega_{ff}$ (resp. $\Omega_{fn}, \Omega_{fs}, \Omega_{nn}, \Omega_{ns}, \Omega_{ss}$). Since $\Omega_1=\Omega_{ff}\cup\Omega_{fn}\cup\Omega_{fs}\cup\Omega_{nn}\cup\Omega_{ns}\cup\Omega_{ss}$, $Z$ is locally $\Sigma$-equivalent to $Z_L$ near $O$ for every $Z\in\Omega_1$. Collecting all non-equivalent phase portraits of $Z_{ff}$, $Z_{fn}, Z_{fs}, Z_{nn}, Z_{ns}$ and $Z_{ss}$ obtained in Lemmas \ref{ff}-\ref{ssss}, we get 11 local phase portraits of $Z\in\Omega_1$ near $O$ as shown in Figure~\ref{localphaseportraits}. \end{proof} From Lemmas~\ref{ff}, \ref{nn} and \ref{ssss} we find that some $Z\in\Omega_1$ are locally $\Sigma$-equivalent to smooth linear vector fields near the origin. As indicated in Section 1, Theorem~\ref{normalform} requires that neither of the Jacobian matrices $A^+$ and $A^-$ has a repeated eigenvalue in order for a vector field in $\Omega_0$ to be locally $\Sigma$-equivalent to its linear part near the origin.
The next proposition provides an example showing that a vector field in $\Omega_0$ might not be locally $\Sigma$-equivalent to its linear part near the origin if the Jacobian matrix $A^+$ or $A^-$ has a repeated eigenvalue. \begin{prop} Consider the piecewise smooth vector field $Z^*=(X^*, Y^*)$ with $$X^*(x, y)=(y, x),~~~~~~~~~~Y^*(x, y)=\left(x+\frac{1}{2}\Gamma\left(x, y\right),~x+y+\frac{1}{2}\Gamma\left(x, y\right)\right),$$ where $$\Gamma\left(x, y\right)= \left\{ \begin{aligned} &\left(x^2+y^2\right)^{1/2}\left(-\frac{1}{2}\ln\left(x^2+y^2\right)\right)^{-3/2}~~~~~&&{\rm if}~~0<x^2+y^2<1,\\ &0~~~~~&&{\rm if}~~x^2+y^2=0. \end{aligned} \right.$$ Then $Z^*\in\Omega_0$ and it is not locally $\Sigma$-equivalent to its linear part $Z^*_L=(X^*_L, Y^*_L)$ near the origin, where $X^*_L(x, y)=(y, x)$ and $Y^*_L(x, y)=(x,~x+y)$. \label{example} \end{prop} \begin{proof} We start by proving $Z^*=(X^*, Y^*)\in\Omega_0$. In fact, a straightforward calculation shows that $\Gamma(0, 0)=0$, $\Gamma_x(0, 0)=\Gamma_y(0, 0)=0$ and $\Gamma(x, y)$ is continuously differentiable near $O$. Thus $Y^*$ is a $\mathcal{C}^1$ vector field having $O$ as a non-degenerate equilibrium, i.e., $Y^*(0, 0)=(0, 0)$ and the determinant of the Jacobian matrix of $Y^*$ at $O$ is nonzero. Clearly, the vector field $X^*$ is also $\mathcal{C}^1$ and $O$ is a linear saddle of it. Accordingly, condition (\ref{adc}) holds for $Z^*$. On the other hand, we have $X^*_{2x}(0, 0)=Y^*_{2x}(0, 0)=1$, so that (\ref{condi}) also holds for $Z^*$, where $X^*_2$ and $Y^*_2$ are the second components of $X^*$ and $Y^*$ respectively. In conclusion, we get $Z^*\in\Omega_0$ from the definition of $\Omega_0$ given above (\ref{adc}), and the linear part of $Z^*$ is $Z^*_L$ from $\Gamma_x(0, 0)=\Gamma_y(0, 0)=0$. Next we determine the local phase portraits of $Z^*$ and $Z^*_L$ near $O$ in order to prove that $Z^*$ is not locally $\Sigma$-equivalent to $Z^*_L$.
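Before doing so, the stated $\mathcal{C}^1$ behaviour of the perturbation term $\Gamma$ can also be checked numerically. The following sketch (illustrative only, not part of the proof; the sample radii are arbitrary) verifies that $\Gamma(x, y)/r = (-\ln r)^{-3/2}\to 0$ as $r\to 0$, which is why both partial derivatives of $\Gamma$ vanish at the origin:

```python
import math

def Gamma(x, y):
    """The perturbation term of Y*: r * (-(1/2) ln r^2)^(-3/2) for 0 < r < 1, and 0 at the origin."""
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0
    return math.sqrt(r2) * (-0.5 * math.log(r2)) ** (-1.5)

# Gamma(x, y)/r = (-ln r)^(-3/2) -> 0 as r -> 0, so the difference
# quotient of Gamma at the origin tends to zero: Gamma is C^1 there
# with vanishing linear part, as claimed in the proof.
ratios = [Gamma(r, 0.0) / r for r in (1e-2, 1e-4, 1e-8, 1e-16)]
assert all(a > b > 0.0 for a, b in zip(ratios, ratios[1:]))  # monotonically decreasing
assert ratios[-1] < 1e-2                                     # and tending to zero
```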
Regarding $Z^*$, $O$ is a saddle of $X^*$ with the unstable manifold $\{(x, y)\in\mathbb{R}^2: y=x, y>0\}$ and the stable manifold $\{(x, y)\in\mathbb{R}^2: y=-x, y>0\}$. From \cite[Example 4.3]{ZZF} we know that all orbits of $Y^*$ near $O$ starting from the negative $x$-axis enter into $\Sigma^-$ and then reach the positive $x$-axis after a finite time. Thus the local phase portrait of $Z^*$ near $O$ is as shown in Figure~\ref{fsexam}(a). Regarding $Z^*_L$, $O$ is an unstable non-diagonalizable node of $Y^*_L$ with the characteristic direction $x=0$. Since $X^*_L=X^*$, we conclude that the phase portrait of $Z^*_L$ is as shown in Figure~\ref{fsexam}(b). Consider the orbits of $Z^*$ and $Z^*_L$ starting from the negative $x$-axis. From Figure~\ref{fsexam} we observe that these orbits of $Z^*$ intersect the positive $x$-axis, but those of $Z^*_L$ do not. Since any $\Sigma$-equivalence sends the orbits of $Z^*$ to the orbits of $Z^*_L$, preserving the switching line $\Sigma$, it also preserves the intersections between the orbits and $\Sigma$. Consequently, $Z^*$ cannot be locally $\Sigma$-equivalent to $Z^*_L$ near $O$. \begin{figure} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{examplen.eps} \caption*{(a)} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=1.7in]{examplel.eps} \caption*{(b)} \end{minipage} \caption{{\small Local phase portraits of $Z^*$ and $Z^*_L$ near $O$, where (a) is for $Z^*$ and (b) is for $Z^*_L$.}} \label{fsexam} \end{figure} \end{proof} \section{Proof of Theorem~\ref{stability}} \setcounter{equation}{0} \setcounter{lm}{0} \setcounter{thm}{0} \setcounter{rmk}{0} \setcounter{df}{0} \setcounter{cor}{0} Since $\Omega_{ff}$ is an open subset of $\Omega_1\subset\Omega_0$, any small perturbation of $Z\in\Omega_{ff}$ inside $\Omega_0$ belongs to $\Omega_{ff}$.
In particular, the value of the sign function $\alpha$ defined in Lemma~\ref{ff} is the same for $Z\in\Omega_{ff}$ and its perturbation. Thus by Lemma~\ref{ff} both $Z\in\Omega_{ff}$ and any perturbation of it inside $\Omega_0$ are locally $\Sigma$-equivalent to the same normal form $Z_{ff}$ near $O$. This means that $Z\in\Omega_{ff}$ is locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$. A similar argument applies to $Z$ belonging to $\Omega_{fn}$, $\Omega_{fs}, \Omega_{nn}, \Omega_{ns}$ and $\Omega_{ss}$ respectively. Finally, due to $\Omega_1=\Omega_{ff}\cup\Omega_{fn}\cup\Omega_{fs}\cup\Omega_{nn}\cup\Omega_{ns}\cup\Omega_{ss}$, we conclude that $Z\in\Omega_1$ is locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$, that is, the sufficiency holds. To obtain the necessity, we can equivalently prove that $Z\in\Omega_0$ is not locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$ if $Z\in\Omega_0\setminus\Omega_1$. To do this, we classify $\Omega_0\setminus\Omega_1$ into two subsets: \begin{eqnarray} \Omega_2=\{Z\in\Omega_0: {\rm Im}\lambda^+_1{\rm Im}\lambda^-_1\ne0, \ell=0\}, \qquad \Omega_3=\{Z\in\Omega_0: (\lambda^+_1-\lambda^+_2)(\lambda^-_1-\lambda^-_2)=0\}. \label{subsetdefinitions} \end{eqnarray} Clearly, $\Omega_0\setminus\Omega_1=\Omega_2\cup\Omega_3$. If $Z=(X, Y)\in\Omega_2$, then $O$ is an equilibrium of focus type for both $X$ and $Y$. In this case, $O$ is a non-smooth center or a pseudo-focus of focus-focus type of $Z$ with the first Lyapunov constant $\ell=0$, as clarified in \cite[Theorem B]{CGP}. Since $\ell$ only depends on the linear part of $Z$, we easily obtain a perturbed vector field with $\ell>0$ and another with $\ell<0$ by perturbing the linear part of $Z$ in $\Omega_0$. This means that, for any sufficiently small neighborhood of $Z$ in $\Omega_0$, there always exist two vector fields for which $O$ is a pseudo-focus with different stability.
Limit cycles can even bifurcate from $O$; see, e.g., \cite{ZK}. Then these two perturbed vector fields are not locally $\Sigma$-equivalent near $O$, so that $Z\in\Omega_2$ is not locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$. If $Z=(X, Y)\in\Omega_3$, then at least one of $\lambda^+_1=\lambda^+_2$ and $\lambda^-_1=\lambda^-_2$ holds. Without loss of generality we assume that $\lambda^+_1=\lambda^+_2$. Writing $X$ near $O$ as $$X=A^+\left(x, y\right)^\top+\Upsilon^+(x, y),$$ where $\Upsilon^+(x, y)$ denotes the higher-order terms and $$ A^+=\left( \begin{array}{cc} a_{11}^+&a_{12}^+\\ a_{21}^+&a_{22}^+ \end{array}\right), $$ we get \begin{eqnarray} \lambda^+_1=\lambda^+_2=\frac{1}{2}(a^+_{11}+a^+_{22}), \qquad (a^+_{11}-a^+_{22})^2+4a^+_{12}a^+_{21}=0, \qquad a^+_{21}\ne0. \label{dnjknfejfew} \end{eqnarray} Here $a^+_{21}\ne0$ is due to the fact that $Z\in\Omega_0$ satisfies (\ref{condi}). Consider the vector field $Z_\varepsilon=(X_\varepsilon, Y)$ with $$ X_\varepsilon=A^+_\varepsilon\left(x, y\right)^\top+\Upsilon^+(x, y) $$ and $$ A^+_\varepsilon=\left( \begin{array}{cc} a_{11}^+&\frac{a_{12}^+a_{21}^++\varepsilon/4}{a_{21}^++\varepsilon}\\ a_{21}^++\varepsilon&a_{22}^+ \end{array}\right). $$ Then for any sufficiently small neighborhood of $Z$ in $\Omega_0$, there exists $\varepsilon_0>0$ such that $Z_\varepsilon$ lies in the neighborhood for all $-\varepsilon_0<\varepsilon<\varepsilon_0$. Denote the eigenvalues of $A^+_\varepsilon$ by $\lambda^+_{\varepsilon, 1}$ and $\lambda^+_{\varepsilon, 2}$.
It follows from (\ref{dnjknfejfew}) that $$\lambda^+_{\varepsilon, 1}=\lambda^+_1+\frac{\sqrt{-\varepsilon}}{2}i, \qquad \lambda^+_{\varepsilon, 2}=\lambda^+_1-\frac{\sqrt{-\varepsilon}}{2}i$$ for $-\varepsilon_0<\varepsilon<0$, while for $\varepsilon_0>\varepsilon>0$, $$\lambda^+_{\varepsilon, 1}=\lambda^+_1+\frac{\sqrt\varepsilon}{2}, \qquad \lambda^+_{\varepsilon, 2}=\lambda^+_1-\frac{\sqrt\varepsilon}{2}.$$ In the case of $-\varepsilon_0<\varepsilon<0$, $O$ is a focus of $X_\varepsilon$ by \cite[Theorem 4.2]{ZZF}, so that all orbits of $Z_{\varepsilon}$ near $O$ starting from the positive $x$-axis enter into $\Sigma^+$ and then reach the negative $x$-axis as $t$ increases (resp. decreases) if $a^+_{21}>0$ (resp. $<0$). In the case of $\varepsilon_0>\varepsilon>0$, $O$ is a diagonalizable node of $X_\varepsilon$ by \cite[Theorem 4.3]{ZZF}, which has two characteristic directions with nonzero slopes, owing to $a^+_{21}\ne0$. Thus all orbits of $Z_{\varepsilon}$ near $O$ starting from the positive $x$-axis cannot reach the negative $x$-axis from $\Sigma^+$ as $t$ increases (resp. decreases) if $a^+_{21}>0$ (resp. $<0$). As indicated in the proof of Proposition~\ref{example}, any $\Sigma$-equivalence sends the orbits of $Z_\varepsilon$ with $-\varepsilon_0<\varepsilon<0$ to the orbits of $Z_\varepsilon$ with $\varepsilon_0>\varepsilon>0$, preserving the switching line $\Sigma$ and the intersections of $\Sigma$ and the orbits. Consequently, $Z_\varepsilon$ with $-\varepsilon_0<\varepsilon<0$ cannot be locally $\Sigma$-equivalent to $Z_\varepsilon$ with $\varepsilon_0>\varepsilon>0$ near $O$. This means that, for any sufficiently small neighborhood of $Z\in\Omega_3$ in $\Omega_0$, there are always two vector fields that are not locally $\Sigma$-equivalent near $O$. So $Z\in\Omega_3$ is not locally $\Sigma$-structurally stable with respect to $\Omega_0$ near $O$. Together with the preceding paragraph, this establishes the necessity. This ends the proof of Theorem~\ref{stability}.
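The eigenvalue splitting used above can be illustrated numerically. In the sketch below the entries $a^+_{11}=a^+_{22}=1$, $a^+_{12}=0$, $a^+_{21}=1$ are illustrative values chosen to satisfy (\ref{dnjknfejfew}); the perturbed matrix $A^+_\varepsilon$ then acquires a complex conjugate pair for $\varepsilon<0$ and a real pair for $\varepsilon>0$:

```python
import numpy as np

# An illustrative instance of the matrix A^+ with a repeated eigenvalue:
# a11 = a22 = 1, a12 = 0, a21 = 1, so (a11 - a22)^2 + 4 a12 a21 = 0 and a21 != 0.
a11, a12, a21, a22 = 1.0, 0.0, 1.0, 1.0

def A_eps(eps):
    """The perturbed matrix A^+_eps used in the necessity part of the proof."""
    return np.array([[a11, (a12 * a21 + eps / 4) / (a21 + eps)],
                     [a21 + eps, a22]])

# Unperturbed: one repeated real eigenvalue lambda = 1.
assert np.allclose(np.linalg.eigvals(A_eps(0.0)), [1.0, 1.0])

# eps < 0: complex pair lambda_1 +/- i sqrt(-eps)/2, so O is a focus of X_eps.
ev_neg = np.linalg.eigvals(A_eps(-0.04))
assert np.allclose(sorted(ev_neg.imag), [-0.1, 0.1])

# eps > 0: real pair lambda_1 +/- sqrt(eps)/2, so O is a diagonalizable node.
ev_pos = np.linalg.eigvals(A_eps(0.04))
assert np.allclose(sorted(ev_pos.real), [0.9, 1.1]) and np.allclose(ev_pos.imag, 0.0)
```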
\section{Proof of Theorem~\ref{bifurcation}} \setcounter{equation}{0} \setcounter{lm}{0} \setcounter{thm}{0} \setcounter{rmk}{0} \setcounter{df}{0} \setcounter{cor}{0} Before proving Theorem~\ref{bifurcation}, we study the limit cycle bifurcations obtained by perturbing the following piecewise linear vector field \begin{eqnarray} Z_0(x, y)=\left\{ \begin{aligned} &X_0(x, y)=(ay, x) \qquad &&{\rm if}\quad y>0,\\ &Y_0(x, y)=(by, x) \qquad &&{\rm if}\quad y<0, \end{aligned} \right. \label{ppll} \end{eqnarray} where $a, b\in\mathbb{R}$ satisfy $ab\ne0$. \begin{prop} Consider the piecewise linear vector field $Z_0$ with $ab\ne0$ given in {\rm(\ref{ppll})}. Then $Z_0\in\Omega_1$ if either $a>0$ or $b>0$, and $Z_0\in\Omega_2$ if $a<0$ and $b<0$, where $\Omega_1\subset\Omega_0$ and $\Omega_2\subset\Omega_0\setminus\Omega_1$ are defined in {\rm(\ref{subsetdefinition})} and {\rm(\ref{subsetdefinitions})} respectively. Moreover, $Z_0$ has no limit cycles. \label{zoooo} \end{prop} \begin{proof} The first part of Proposition~\ref{zoooo} follows directly from the definitions of $\Omega_1$ and $\Omega_2$. Since $O$ is a saddle or a center of each of $X_0$ and $Y_0$, it is impossible for $Z_0$ to have limit cycles totally contained in the half plane $y\ge0$ or $y\le0$. On the other hand, when $O$ is a center of both $X_0$ and $Y_0$, it is a global non-smooth center of $Z_0$, so that $Z_0$ has no limit cycles intersecting both half planes $y>0$ and $y<0$. Clearly, there also exist no limit cycles intersecting both half planes $y>0$ and $y<0$ when $O$ is a saddle of $X_0$ or $Y_0$. \end{proof} Next we state two bifurcation results obtained by perturbing the piecewise linear vector field $Z_0$ given in (\ref{ppll}).
\begin{prop} Consider the piecewise linear vector field $Z_0\in\Omega_0$ in {\rm(\ref{ppll})}, and the piecewise polynomial vector field $$ Z^f_\epsilon(x,y)=\left\{ \begin{aligned} &X^f_\epsilon(x,y)=((a-\epsilon)y-\epsilon, x) \qquad &&{\rm if}\quad y>0,\\ &Y^f_\epsilon(x,y)=\left((b-\epsilon)y+\epsilon, x+\epsilon\frac{\partial f(x, \epsilon)}{\partial x}\right) \qquad &&{\rm if}\quad y<0, \end{aligned} \right. $$ where $\epsilon\ge0$ and \begin{eqnarray} f(x, \epsilon)=x\prod^m_{i=1}\left(x^2-\left(\frac{i\epsilon}{m}\right)^2\right). \label{saasmajf} \end{eqnarray} Then $Z^f_\epsilon=Z_0$ for $\epsilon=0$. Besides, for any given $a$ and $b$ satisfying $0<|a|\le1/2$ and $0<|b|\le1/2$, there exists $0<\epsilon_0<\min\{|a|, |b|\}$ such that for $0<\epsilon<\epsilon_0$, $Z^f_\epsilon$ has exactly $m$ hyperbolic crossing limit cycles $\Gamma_i$ $(i=1, 2, \cdot\cdot\cdot, m)$ bifurcating from the non-smooth equilibrium $O$ of $Z_0$, where $\Gamma_i$ obeys the algebraic curve $$ \Gamma^+_i: \frac{1}{2}x^2-\frac{a-\epsilon}{2}y^2+\epsilon y=\frac{1}{2}\left(\frac{i\epsilon}{m}\right)^2$$ in the half plane $y\ge0$ and the algebraic curve $$\Gamma^-_i: \frac{1}{2}x^2+\epsilon f(x, \epsilon)-\frac{b-\epsilon}{2}y^2-\epsilon y=\frac{1}{2}\left(\frac{i\epsilon}{m}\right)^2$$ in the half plane $y\le0$. Moreover, $\Gamma_i$ is stable if $m-i$ is even and unstable if $m-i$ is odd. \label{ejhfjncjff} \end{prop} \begin{prop} Consider the piecewise linear vector field $Z_0\in\Omega_0$ in {\rm(\ref{ppll})}, and the piecewise $\mathcal{C}^\infty$ vector field $$ Z^g_\epsilon(x,y)=\left\{ \begin{aligned} &X^g_\epsilon(x,y)=((a-\epsilon)y-\epsilon, x) \qquad &&{\rm if}\quad y>0,\\ &Y^g_\epsilon(x,y)=\left((b-\epsilon)y+\epsilon, x+\epsilon\frac{\partial g(x, \epsilon)}{\partial x}\right) \qquad &&{\rm if}\quad y<0, \end{aligned} \right. 
$$ where $\epsilon\ge0$ and $g(x, \epsilon)$ is a $\mathcal{C}^\infty$ function given by $$g(x, \epsilon)=\left\{ \begin{aligned} &0 \qquad &&{\rm if}\quad x\le0,\\ &e^{-1/x}\sin\left(\frac{\pi\epsilon}{x}\right) \qquad &&{\rm if}\quad x>0. \end{aligned} \right. $$ Then $Z^g_\epsilon=Z_0$ for $\epsilon=0$. Besides, for any given $a$ and $b$ satisfying $0<|a|\le1/2$ and $0<|b|\le1/2$, there exists $0<\epsilon_0<\min\{|a|, |b|\}$ such that for $0<\epsilon<\epsilon_0$, $Z^g_\epsilon$ has infinitely many hyperbolic crossing limit cycles $\Theta_i$ $(i\in\mathbb{N}^+)$ bifurcating from the non-smooth equilibrium $O$ of $Z_0$, where $\Theta_i$ obeys the algebraic curve $$\Theta^+_i: \frac{1}{2}x^2-\frac{a-\epsilon}{2}y^2+\epsilon y=\frac{1}{2}\left(\frac{\epsilon}{i}\right)^2$$ in the half plane $y\ge0$ and the curve $$\Theta^-_i: \frac{1}{2}x^2+\epsilon g(x, \epsilon)-\frac{b-\epsilon}{2}y^2-\epsilon y=\frac{1}{2}\left(\frac{\epsilon}{i}\right)^2$$ in the half plane $y\le0$. Moreover, $\Theta_i$ is stable if $i$ is odd and unstable if $i$ is even. \label{ejhadafjff} \end{prop} Propositions~\ref{ejhfjncjff} and \ref{ejhadafjff} will be proved later on. If $a=b>0$ (resp. $<0$), then $Z_0=X_0=Y_0$ is a linear vector field having $O$ as a saddle (resp. center). Thus our results reveal that any finite number, and even infinitely many, limit cycles can bifurcate from certain linear saddles and centers under non-smooth perturbations. Besides, observe that $Z^f_\epsilon$ and $Z^g_\epsilon$ are both piecewise smooth Hamiltonian systems. This means that piecewise smooth Hamiltonian systems may have limit cycles, whereas, as is well known, this cannot occur in smooth Hamiltonian systems. Now we are in a position to provide the proof of Theorem~\ref{bifurcation}.
\begin{proof}[{\bf Proof of Theorem~\ref{bifurcation}}] For $Z=(X, Y)\in\Omega_0$ we consider the three-parameter perturbed vector field $Z^{\bm\epsilon}=(X^{\bm{\epsilon}}, Y^{\bm{\epsilon}})$ with $$ \begin{aligned} X^{\bm{\epsilon}}(x, y)&=\left(X_1(x, y)-X_{2x}(0, 0)\epsilon_1+X_{2x}(0, 0)\epsilon_1x, X_2(x, y)\right),\\ Y^{\bm\epsilon}(x, y)&=\left(Y_1(x, y)+Y_{2x}(0, 0)\epsilon_1+Y_{2x}(0, 0)\epsilon_1x+\epsilon_2Y_2(x, y), Y_2(x, y)+\epsilon_3\right), \end{aligned} $$ where $\bm\epsilon=(\epsilon_1, \epsilon_2, \epsilon_3)\in\mathbb{R}^3$ is a parameter vector. Clearly, $Z^{\bm{\epsilon}}=Z$ for ${\bm{\epsilon}}=(0, 0, 0)$, and $Z^{\bm{\epsilon}}\in\Omega$. We claim that for any small neighborhood of $\bm\epsilon=(0, 0, 0)$ there always exists $\bm\epsilon_0$ in the neighborhood such that $Z^{\bm\epsilon_0}$ has a crossing limit cycle bifurcating from the non-smooth equilibrium $O$ of $Z$. In fact, fixing $\epsilon_2=\epsilon_3=0$ we have \begin{equation}\label{cmkc} \begin{aligned} &X^{\bm\epsilon}_1(0, 0)=-X_{2x}(0, 0)\epsilon_1,\qquad &&X^{\bm\epsilon}_2(0, 0)=0,\qquad &&X^{\bm\epsilon}_{2x}(0, 0)=X_{2x}(0, 0),\\ &Y^{\bm\epsilon}_1(0, 0)=Y_{2x}(0, 0)\epsilon_1,\qquad &&Y^{\bm\epsilon}_2(0, 0)=0,\qquad &&Y^{\bm\epsilon}_{2x}(0, 0)=Y_{2x}(0, 0), \end{aligned} \end{equation} where $(X^{\bm\epsilon}_1, X^{\bm\epsilon}_2)$ and $(Y^{\bm\epsilon}_1, Y^{\bm\epsilon}_2)$ are the components of $X^{\bm{\epsilon}}$ and $Y^{\bm{\epsilon}}$ respectively. So $O$ is an invisible-invisible fold-fold point of $Z^{\bm\epsilon}$ for $\epsilon_1>0$ and $\epsilon_2=\epsilon_3=0$. Besides, all orbits of $Z^{\bm\epsilon}$ near $O$ turn around $O$ because $X^{\bm\epsilon}_{2x}(0, 0)Y^{\bm\epsilon}_{2x}(0, 0)=X_{2x}(0, 0)Y_{2x}(0, 0)>0$. Here $X_{2x}(0, 0)Y_{2x}(0, 0)>0$ is due to the fact that $Z\in\Omega_0$ satisfies (\ref{condi}). Thus $O$ is either a non-smooth center or a pseudo-focus of $Z^{\bm\epsilon}$ for $\epsilon_1>0$ and $\epsilon_2=\epsilon_3=0$.
By time reversal, without loss of generality we work only with the case where all orbits of $Z^{\bm\epsilon}$ near $O$ rotate counterclockwise, namely $X_{2x}(0, 0)>0$ and $Y_{2x}(0, 0)>0$. If $O$ is a stable (resp. unstable) pseudo-focus of $Z^{\bm\epsilon}$ for $\epsilon_1>0$ and $\epsilon_2=\epsilon_3=0$, a direct application of Proposition~\ref{pseudohopf} yields that for given $\epsilon_1>0$ and $\epsilon_2=0$ there exists $\hat\epsilon_3=\hat\epsilon_3(\epsilon_1)>0$ such that $Z^{\bm\epsilon}$ with $\epsilon_1>0$, $\epsilon_2=0$ and $-\hat\epsilon_3<\epsilon_3<0$ (resp. $0<\epsilon_3<\hat\epsilon_3$) admits a stable (resp. unstable) crossing limit cycle bifurcating from $O$. Thus, for any small neighborhood of $\bm\epsilon=(0, 0, 0)$ we can choose some $\bm\epsilon_0=(\epsilon_{10}, \epsilon_{20}, \epsilon_{30})$ satisfying $\epsilon_{10}>0$, $\epsilon_{20}=0$ and $0<|\epsilon_{30}|<\hat\epsilon_3(\epsilon_{10})$ such that $Z^{\bm\epsilon_0}$ has a crossing limit cycle bifurcating from $O$, that is, the claim holds in the case that $O$ is a pseudo-focus. If $O$ is a non-smooth center of $Z^{\bm\epsilon}$ for $\epsilon_1>0$ and $\epsilon_2=\epsilon_3=0$, we can obtain an upper Poincar\'e map $P_U$ near $O$ which maps a point $(x_0, 0)$ with $x_0>0$ to a point $(x_1, 0)$ with $x_1<0$, and a lower Poincar\'e map $P_L$ near $O$ which maps $(x_1, 0)$ to $(x_0, 0)$. When $\epsilon_3=0$ and $\epsilon_2$ is perturbed to be nonzero, it is easy to verify that (\ref{cmkc}) still holds, i.e., $O$ is still an invisible-invisible fold-fold point. In this case, we can also define an upper Poincar\'e map $\tilde P_U$ near $O$ which maps a point $(x_0, 0)$ with $x_0>0$ to a point $(x_1, 0)$ with $x_1<0$, and a lower Poincar\'e map $\tilde P_L$ near $O$ which maps $(x_1, 0)$ to a point $(x_2, 0)$ with $x_2>0$. Clearly, $P_U=\tilde P_U$ because $X^{\bm\epsilon}$ is independent of $\epsilon_2$. Moreover, we can prove that $x_2>x_0$ if $\epsilon_2>0$.
In fact, considering the vector field $Y^{\bm\epsilon}$ we define the following two equations \begin{equation}\label{ineqcanf} \frac{dy}{dx}=\varphi_1(x, y):=\frac{Y_2(x, y)}{Y_1(x, y)+Y_{2x}(0, 0)\epsilon_1+Y_{2x}(0, 0)\epsilon_1x} \end{equation} for $\epsilon_2=\epsilon_3=0$, and \begin{equation}\label{afineqcanf} \frac{dy}{dx}=\varphi_2(x, y):=\frac{Y_2(x, y)}{Y_1(x, y)+Y_{2x}(0, 0)\epsilon_1+Y_{2x}(0, 0)\epsilon_1x+\epsilon_2Y_2(x, y)} \end{equation} for $\epsilon_2\ne0$ and $\epsilon_3=0$. Since $Y^{\bm\epsilon}_1(0, 0)=Y_{2x}(0, 0)\epsilon_1>0$, the denominators of $\varphi_1(x, y)$ and $\varphi_2(x, y)$ are positive in a sufficiently small neighborhood of $O$. Thus $\varphi_1(x, y)\ge\varphi_2(x, y)$ for $\epsilon_2>0$, and the equality holds only for $(x, y)=(0, 0)$. Applying the theory of differential inequalities to equations (\ref{ineqcanf}) and (\ref{afineqcanf}), we obtain that the solution of equation (\ref{ineqcanf}) with the initial value $(x_1, 0)$ always lies above the solution of equation (\ref{afineqcanf}) with the initial value $(x_1, 0)$ in the half plane $y\le0$. So $x_2>x_0$ if $\epsilon_2>0$, and then $O$ is an unstable pseudo-focus of $Z^{\bm\epsilon}$ for $\epsilon_1>0$, $\epsilon_2>0$ and $\epsilon_3=0$. Repeating the analysis in the last paragraph and using Proposition~\ref{pseudohopf}, for any small neighborhood of $\bm\epsilon=(0, 0, 0)$ we can choose some $\bm\epsilon_0=(\epsilon_{10}, \epsilon_{20}, \epsilon_{30})$ satisfying $\epsilon_{10}>0$, $\epsilon_{20}>0$ and $0<\epsilon_{30}<\hat\epsilon_3(\epsilon_{10}, \epsilon_{20})$ such that $Z^{\bm\epsilon_0}$ has a crossing limit cycle bifurcating from $O$, that is, the claim also holds in the case that $O$ is a non-smooth center. This, together with the last paragraph, concludes statement (1) because $Z^{\bm\epsilon}\rightarrow Z$ as $\bm\epsilon\rightarrow0$. Let $Z_0$ be the piecewise linear vector field given in (\ref{ppll}).
Then $Z_0\in\Omega_1$ if either $a>0$ or $b>0$, and $Z_0\in\Omega_0\setminus\Omega_1$ if $a<0$ and $b<0$, as indicated in Proposition~\ref{zoooo}. Thus statement (2) is a direct conclusion of Propositions~\ref{ejhfjncjff} and \ref{ejhadafjff} because $Z^f_\epsilon\rightarrow Z_0$ and $Z^g_\epsilon\rightarrow Z_0$ as $\epsilon\rightarrow0$. \end{proof} As is well known, it is a challenging task to establish the bifurcation diagram for some bifurcations, particularly for those of higher codimension, since a higher-codimension bifurcation usually involves many lower-codimension ones. Speaking of bifurcation diagrams, we can extract an important piece of information from the proof of Theorem~\ref{bifurcation}, namely that the bifurcation diagram of any vector field in $\Omega_0$ must contain a bifurcation boundary where the codimension-one pseudo-Hopf bifurcation occurs. A complete bifurcation diagram of the vector fields in $\Omega_0$ is left as future work. Actually, this is an extremely complex task, since there exist many possible local phase portraits for the unperturbed vector fields, as seen in Theorem~\ref{normalform}, and such a bifurcation has higher codimension. Finally, we give the proofs of Propositions~\ref{ejhfjncjff} and \ref{ejhadafjff}. \begin{proof}[{\bf Proof of Proposition~\ref{ejhfjncjff}}] Clearly, $Z^f_\epsilon=Z_0$ for $\epsilon=0$. The rest of this proof is completed in the following four steps. {\it Step 1. The upper Poincar\'e map $P_U$.} Because of $a\ne0$, we can choose $\epsilon_1>0$ such that ${\rm sign}a={\rm sign}(a-\epsilon)$ for $0<\epsilon<\epsilon_1$. In this case, $X^f_\epsilon$ has a unique equilibrium $E_X:=(0, \epsilon/(a-\epsilon))$, which is a linear center if $a-\epsilon<0$ and a linear saddle if $a-\epsilon>0$. When $E_X$ is a linear center, i.e., $a-\epsilon<0$, it lies in the lower half plane $y<0$ because of $0<\epsilon<\epsilon_1$, and then it is not a real equilibrium of $Z^f_\epsilon$.
From the center dynamics and the direction of the vector field $X^f_\epsilon$ on the $x$-axis, it follows that the orbit of $X^f_\epsilon$ with $0<\epsilon<\epsilon_1$ starting from $(x_0, 0)$ with $x_0>0$ enters into $y>0$, and reaches the $x$-axis again at a point $(x_1, 0)$ with $x_1<0$ as $t$ increases. When $E_X$ is a linear saddle, i.e., $a-\epsilon>0$, it lies in the upper half plane $y>0$ because of $0<\epsilon<\epsilon_1$, and its stable and unstable manifolds lie in $$\left\{(x, y)\in\mathbb{R}^2: x\ne0, y=-\frac{x}{\sqrt{a-\epsilon}}+\frac{\epsilon}{a-\epsilon}\right\}, \qquad \left\{(x, y)\in\mathbb{R}^2: x\ne0, y=\frac{x}{\sqrt{a-\epsilon}}+\frac{\epsilon}{a-\epsilon}\right\},$$ respectively. Thus the stable manifold intersects the $x$-axis at $(\epsilon/\sqrt{a-\epsilon}, 0)$, and the unstable manifold intersects the $x$-axis at $(-\epsilon/\sqrt{a-\epsilon}, 0)$. Together with the direction of the vector field $X^f_\epsilon$ on $\{(x, 0)\in\mathbb{R}^2: -\epsilon/\sqrt{a-\epsilon}<x<\epsilon/\sqrt{a-\epsilon}\}$, we get that the orbit of $X^f_\epsilon$ with $0<\epsilon<\epsilon_1$ starting from $(x_0, 0)$ with $0<x_0<\epsilon/\sqrt{a-\epsilon}$ enters into $y>0$ and reaches the $x$-axis again at a point $(x_1, 0)$ with $-\epsilon/\sqrt{a-\epsilon}<x_1<0$ as $t$ increases. According to the last two paragraphs, we can construct an upper Poincar\'e map $P_U$ as $x_1=P_U(x_0, \epsilon)$, which is defined for $0<x_0<\varpi_u(\epsilon)$ and $0<\epsilon<\epsilon_1$, where \begin{eqnarray} \varpi_u(\epsilon)= \left\{ \begin{aligned} &+\infty \qquad &&{\rm when~} E_X~ {\rm is~ a~ center,~ i.e.,}~ a-\epsilon<0,\\ &\epsilon/\sqrt{a-\epsilon}\qquad &&{\rm when~ }E_X~ {\rm is~ a~ saddle,~ i.e.,}~ a-\epsilon>0. \end{aligned} \right.
\label{rutireuiv} \end{eqnarray} Furthermore, calculating the first integral $H^f_X$ of $X^f_\epsilon$ we get $$ H^f_X(x, y)=\frac{1}{2}x^2-\frac{a-\epsilon}{2}y^2+\epsilon y, $$ so that $P_U(x_0, \epsilon)$ satisfies $H^f_X(x_0, 0)=H^f_X(P_U(x_0, \epsilon), 0)$, i.e., \begin{eqnarray} P_U(x_0, \epsilon)=-x_0\qquad {\rm for}\quad 0<x_0<\varpi_u(\epsilon)~~{\rm and}~~0<\epsilon<\epsilon_1. \label{PUM} \end{eqnarray} {\it Step 2. The lower Poincar\'e map $P_L$.} Since $b\ne0$, there exists $\epsilon_2>0$ such that ${\rm sign}b={\rm sign}(b-\epsilon)$ for $0<\epsilon<\epsilon_2$. Throughout this step, $\epsilon_2$ can be reduced if necessary. Consider the function $$ F(x, \epsilon)=x+\epsilon\frac{\partial f(x, \epsilon)}{\partial x}. $$ Due to $F(0, 0)=0$ and $F_x(0,0)=1$, by the Implicit Function Theorem there exists a function $x(\epsilon)$ defined for $0<\epsilon<\epsilon_2$ such that $x(0)=0$ and $F(x(\epsilon), \epsilon)=0$. In addition, $x(\epsilon)$ is given by \begin{eqnarray} x(\epsilon)=(-1)^{m+1}\epsilon\prod^m_{i=1}\left(\frac{i\epsilon}{m}\right)^2+\mathcal{O}(\epsilon^{2m+2})=(-1)^{m+1}\frac{(m!)^2}{m^{2m}}\epsilon^{2m+1} +\mathcal{O}(\epsilon^{2m+2}). \label{erieief} \end{eqnarray} By the definition of invisible fold point, $(x(\epsilon), 0)$ is an invisible fold point of $Y^f_\epsilon$ for $0<\epsilon<\epsilon_2$. Combining this with the direction of $Y^f_\epsilon$ on the $x$-axis, we see that the orbit of $Y^f_\epsilon$ near $(x(\epsilon), 0)$ starting from a point $(x_1, 0)$ with $x_1<x(\epsilon)$ evolves in $y<0$ until it reaches the $x$-axis again at a point $(x_2, 0)$ with $x_2>x(\epsilon)$. In this case, we can define a lower Poincar\'e map $P_L$ as $x_2=P_L(x_1, \epsilon)$ for $x_1<x(\epsilon)$ close to $x(\epsilon)$ and $0<\epsilon<\epsilon_2$.
Since the first integral of $Y^f_\epsilon$ is $$ H^f_Y(x, y)=\frac{1}{2}x^2+\epsilon f(x, \epsilon)-\frac{b-\epsilon}{2}y^2-\epsilon y, $$ $P_L(x_1, \epsilon)$ satisfies \begin{eqnarray} \frac{1}{2}x_1^2+\epsilon f(x_1, \epsilon)=\frac{1}{2}P_L(x_1, \epsilon)^2+\epsilon f(P_L(x_1, \epsilon), \epsilon). \label{PLM} \end{eqnarray} Next we determine precisely the domain of definition of $P_L$. Notice that $E_Y:=(x(\epsilon), -\epsilon/(b-\epsilon))$ is an equilibrium of $Y^f_\epsilon$ for $0<\epsilon<\epsilon_2$. Calculating the eigenvalues of the Jacobian matrix of $Y^f_\epsilon$ at $E_Y$, we have that $E_Y$ is of focus type if $b-\epsilon<0$ from \cite[Theorem 5.1]{ZZF}, and a saddle if $b-\epsilon>0$ from \cite[Theorem 4.4]{ZZF}. When $E_Y$ is of focus type, i.e., $b-\epsilon<0$, it lies in the upper half plane $y>0$ because of $\epsilon>0$. Moreover, $O$ is a linear center of $Y^f_\epsilon$ for $\epsilon=0$ due to ${\rm sign}b={\rm sign}(b-\epsilon)$ for $0<\epsilon<\epsilon_2$. Thus $\epsilon_2>0$ can be reduced such that $P_L(x_1, \epsilon)$ is defined for $-1<x_1<x(\epsilon)$ and $0<\epsilon<\epsilon_2$. When $E_Y$ is a saddle, i.e., $b-\epsilon>0$, it lies in the lower half plane $y<0$ because of $\epsilon>0$. $E_Y$ has one stable (resp. unstable) manifold intersecting the $x$-axis. Let $(x_s, 0)$ (resp. $(x_u, 0)$) be the intersection between the stable (resp. unstable) manifold and the $x$-axis. Then $H^f_Y(x_u, 0)=H^f_Y(x_s, 0)=H^f_Y(E_Y)$, i.e., \begin{eqnarray} \begin{aligned} \frac{1}{2}(x_u)^2+\epsilon f(x_u, \epsilon)=\frac{1}{2}(x_s)^2+\epsilon f(x_s, \epsilon)&=\frac{1}{2}x(\epsilon)^2+\epsilon f(x(\epsilon), \epsilon)-\frac{b-\epsilon}{2}\left(\frac{-\epsilon}{b-\epsilon}\right)^2-\epsilon\left(\frac{-\epsilon}{b-\epsilon}\right)\\ &=\frac{\epsilon^2}{2(b-\epsilon)}+\mathcal{O}(\epsilon^3), \end{aligned} \label{ejhncjdnvsd} \end{eqnarray} where the last equality is due to (\ref{erieief}).
Solving (\ref{ejhncjdnvsd}) we get $$x_u=\frac{\epsilon}{\sqrt{b-\epsilon}}+\mathcal{O}(\epsilon^2),\qquad x_s=-\frac{\epsilon}{\sqrt{b-\epsilon}}+\mathcal{O}(\epsilon^2)$$ for $0<\epsilon<\epsilon_2$, since $x_u>x_s$. Consequently, $P_L(x_1, \epsilon)$ is defined for $x_s<x_1<x(\epsilon)$ and $0<\epsilon<\epsilon_2$. Moreover, $x(\epsilon)<P_L(x_1, \epsilon)<x_u$. In conclusion, we take the domain of definition of $P_L(x_1, \epsilon)$ as $\varpi_l(\epsilon)<x_1<x(\epsilon)$, where \begin{eqnarray} \varpi_l(\epsilon)= \left\{ \begin{aligned} &-1\qquad &&{\rm when~ }E_Y~ {\rm is~ of~ focus~type, ~ i.e.,}~ b-\epsilon<0, \\ &x_s=-\frac{\epsilon}{\sqrt{b-\epsilon}}+\mathcal{O}(\epsilon^2) \qquad &&{\rm when~} E_Y~ {\rm is~ a~ saddle,~ i.e.,}~ b-\epsilon>0. \end{aligned} \right. \label{jfeuihuncjsdh} \end{eqnarray} {\it Step 3. The full Poincar\'e map $P$.} Take $\epsilon_0=\min\{\epsilon_1, \epsilon_2\}$ and $\varpi(\epsilon)=|x(\epsilon)|$. In what follows $\epsilon_0>0$ can be reduced if necessary. Let $$I(\epsilon)=\left(\varpi(\epsilon), \min\{-\varpi_l(\epsilon), \varpi_u(\epsilon)\}\right).$$ By (\ref{erieief}) and the definitions of $\varpi_l(\epsilon)$ and $\varpi_u(\epsilon)$, the interval $I(\epsilon)$ is non-empty for $0<\epsilon<\epsilon_0$. According to the last two steps, we construct $P$ as the composition $P(x_0, \epsilon)=P_L(P_U(x_0, \epsilon), \epsilon)$ for $x_0\in I(\epsilon)$ and $0<\epsilon<\epsilon_0$. Hence, a fixed point of $P(x_0, \epsilon)$ in the interval $I(\epsilon)$ corresponds to a crossing periodic orbit of $Z^f_\epsilon$.
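Steps 1 and 2 rest on the fact that $H^f_X$ and $H^f_Y$ are first integrals of $X^f_\epsilon$ and $Y^f_\epsilon$. A direct numerical check (a sketch with illustrative parameter values $a$, $b$, $\epsilon$, $m$, not part of the proof) confirms that their gradients are orthogonal to the respective vector fields at randomly sampled points:

```python
import random

# Check that grad H^f_X . X^f_eps = 0 and grad H^f_Y . Y^f_eps = 0
# pointwise.  The parameter values below are illustrative only.
a, b, eps, m = -0.5, 0.5, 0.1, 3

def f(x):
    # f(x, eps) = x * prod_{i=1}^m (x^2 - (i*eps/m)^2)
    p = x
    for i in range(1, m + 1):
        p *= x * x - (i * eps / m) ** 2
    return p

def fp(x, h=1e-6):
    # derivative of the polynomial f, here by central differences
    return (f(x + h) - f(x - h)) / (2 * h)

random.seed(0)
for _ in range(100):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    # grad H^f_X = (x, -(a-eps)y + eps),  X^f_eps = ((a-eps)y - eps, x)
    dX = x * ((a - eps) * y - eps) + (-(a - eps) * y + eps) * x
    # grad H^f_Y = (x + eps f'(x), -(b-eps)y - eps),  Y^f_eps = ((b-eps)y + eps, x + eps f'(x));
    # the same value of f'(x) is used on both sides, so the cancellation is exact
    d = fp(x)
    dY = (x + eps * d) * ((b - eps) * y + eps) + (-(b - eps) * y - eps) * (x + eps * d)
    assert abs(dX) < 1e-12 and abs(dY) < 1e-12
```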
Furthermore, from (\ref{saasmajf}), (\ref{PUM}) and (\ref{PLM}) the map $P(x_0, \epsilon)$ satisfies $$\frac{1}{2}x_0^2+\epsilon f(-x_0, \epsilon)=\frac{1}{2}P(x_0, \epsilon)^2+\epsilon f(P(x_0, \epsilon), \epsilon),$$ i.e., \begin{eqnarray} \frac{1}{2}x_0^2-\epsilon x_0\prod^m_{i=1}\left(x_0^2-\left(\frac{i\epsilon}{m}\right)^2\right)= \frac{1}{2}P(x_0, \epsilon)^2+\epsilon P(x_0, \epsilon)\prod^m_{i=1}\left(P(x_0, \epsilon)^2-\left(\frac{i\epsilon}{m}\right)^2\right). \label{ejfheuinjef} \end{eqnarray} {\it Step 4. Crossing limit cycles.} Now we study the crossing limit cycles of $Z^f_\epsilon$ using the Poincar\'e map $P$. Since $0<|a|\le1/2$ and $0<|b|\le1/2$ as assumed in Proposition~\ref{ejhfjncjff}, we have $\min\{-\varpi_l(\epsilon), \varpi_u(\epsilon)\}>\sqrt 2\epsilon+\mathcal{O}(\epsilon^2)$ for $0<\epsilon<\epsilon_0$, so that $i\epsilon/m<\min\{-\varpi_l(\epsilon), \varpi_u(\epsilon)\}$ for all $i=1, 2, \cdot\cdot\cdot, m$ and $0<\epsilon<\epsilon_0$. On the other hand, it follows from (\ref{erieief}) that $i\epsilon/m>|x(\epsilon)|$, i.e., $i\epsilon/m>\varpi(\epsilon)$, for all $i=1, 2, \cdot\cdot\cdot, m$ and $0<\epsilon<\epsilon_0$. So $i\epsilon/m\in I(\epsilon)$ for all $i=1, 2, \cdot\cdot\cdot, m$ and $0<\epsilon<\epsilon_0$. Combining this with (\ref{ejfheuinjef}), we see that $x_0$ is a fixed point of $P$ in $I(\epsilon)$ if and only if $x_0=i\epsilon/m$, which implies that $Z^f_\epsilon$ has exactly $m$ isolated and nested crossing periodic orbits, namely crossing limit cycles. Moreover, these crossing limit cycles intersect the positive $x$-axis at $(i\epsilon/m, 0)$, $i=1, 2, \cdot\cdot\cdot, m$. Using the first integrals $H^f_X$ and $H^f_Y$, we get that the $m$ limit cycles lie on the algebraic curves $\Gamma^+_i$ and $\Gamma^-_i$ defined in Proposition~\ref{ejhfjncjff}, $i=1, 2, \cdot\cdot\cdot, m$. 
Finally, in order to determine the hyperbolicity and stability of $\Gamma_i$, $i=1, 2, \cdot\cdot\cdot, m$, we differentiate (\ref{ejfheuinjef}) with respect to $x_0$ and obtain $$ \begin{aligned} \frac{dP}{dx_0}\left(\frac{i\epsilon}{m}\right)&=\frac{\frac{i\epsilon}{m}-2\epsilon\left(\frac{i\epsilon}{m}\right)^2\prod^m_{k=1, k\ne i}\left(\left(\frac{i\epsilon}{m}\right)^2-\left(\frac{k\epsilon}{m}\right)^2\right)} {\frac{i\epsilon}{m}+2\epsilon\left(\frac{i\epsilon}{m}\right)^2\prod^m_{k=1, k\ne i}\left(\left(\frac{i\epsilon}{m}\right)^2-\left(\frac{k\epsilon}{m}\right)^2\right)}\\ &=\frac{1-2\epsilon^{2m}\left(\frac{i}{m}\right)\prod^m_{k=1, k\ne i}\left(\left(\frac{i}{m}\right)^2-\left(\frac{k}{m}\right)^2\right)} {1+2\epsilon^{2m}\left(\frac{i}{m}\right)\prod^m_{k=1, k\ne i}\left(\left(\frac{i}{m}\right)^2-\left(\frac{k}{m}\right)^2\right)}. \end{aligned} $$ Thus $0<\frac{dP}{dx_0}\left(\frac{i\epsilon}{m}\right)<1$ (resp. $>1$) if $m-i$ is even (resp. odd), that is, $\Gamma_i$ is hyperbolic and stable (resp. unstable) if $m-i$ is even (resp. odd). The proof of Proposition~\ref{ejhfjncjff} is finished. \end{proof} \begin{proof}[{\bf Proof of Proposition~\ref{ejhadafjff}}] Obviously, $Z^g_\epsilon=Z_0$ for $\epsilon=0$. The study of the bifurcated crossing limit cycles is very similar to the proof of Proposition~\ref{ejhfjncjff}, so we omit some details. In fact, comparing the vector fields $Z^f_\epsilon=(X^f_\epsilon, Y^f_\epsilon)$ and $Z^g_\epsilon=(X^g_\epsilon, Y^g_\epsilon)$, we see $X^f_\epsilon=X^g_\epsilon$, so that we get the same upper Poincar\'e map \begin{eqnarray} P_U(x_0, \epsilon)=-x_0\qquad {\rm for}\quad 0<x_0<\varpi_u(\epsilon)~~{\rm and}~~0<\epsilon<\epsilon_1, \label{PUMMMM} \end{eqnarray} as defined in (\ref{PUM}). Here $\varpi_u(\epsilon)$ is given in (\ref{rutireuiv}). Besides, $Y^f_\epsilon$ and $Y^g_\epsilon$ have the same expression except that the function $f$ is replaced by $g$. 
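As a numerical sanity check of Step 4 (illustrative only: the function names are ours, and $\epsilon$ is taken far larger than the proposition requires, purely to keep $\epsilon^{2m}$ away from floating-point underflow), one can verify that substituting $P=x_0$ into (\ref{ejfheuinjef}) forces the product term to vanish, i.e., $x_0=i\epsilon/m$, and that the derivative formula alternates around $1$:

```python
# Sanity check (illustration, not part of the proof): with P = x0 in the
# fixed-point equation, the two product terms must cancel, which forces
# x0 = i*eps/m; the derivative formula then alternates around 1.
from math import prod

m, eps = 2, 0.3   # eps exaggerated for visibility; the proof needs eps small

def residual(x0):
    # difference of the two sides of the fixed-point equation at P = x0:
    # 2 * eps * x0 * prod_i (x0^2 - (i*eps/m)^2)
    return 2 * eps * x0 * prod(x0 * x0 - (i * eps / m) ** 2
                               for i in range(1, m + 1))

def dP(i):
    # derivative of the Poincare map at the fixed point x0 = i*eps/m
    p = (i / m) * prod((i / m) ** 2 - (k / m) ** 2
                       for k in range(1, m + 1) if k != i)
    return (1 - 2 * eps ** (2 * m) * p) / (1 + 2 * eps ** (2 * m) * p)

fixed = [i * eps / m for i in range(1, m + 1)]
checks = [abs(residual(x)) < 1e-12 for x in fixed]   # residual vanishes
stable = [0 < dP(i) < 1 for i in range(1, m + 1)]    # stable iff m - i even
```

For $m=2$ the map has exactly the two fixed points $\epsilon/2$ and $\epsilon$, the first unstable and the second stable, in agreement with the parity rule above.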
With this replacement, $O$ is an invisible fold point of $Y^g_\epsilon$, and $Y^g_\epsilon$ has $(0, -\epsilon/(b-\epsilon))$ as an equilibrium, which is of focus type if $b-\epsilon<0$ and a saddle if $b-\epsilon>0$. Therefore, by an argument similar to Step 2 in the proof of Proposition~\ref{ejhfjncjff}, we can choose some $\epsilon_2>0$ and define a lower Poincar\'e map $P_L(x_1, \epsilon)$ for $\tilde\varpi_l(\epsilon)<x_1<0$ and $0<\epsilon<\epsilon_2$, where $$ \tilde\varpi_l(\epsilon)= \left\{ \begin{aligned} &-1\qquad &&{\rm when~ }(0, -\epsilon/(b-\epsilon))~ {\rm is~ of~ focus~type, ~ i.e.,}~ b-\epsilon<0, \\ &\tilde x_s=-\frac{\epsilon}{\sqrt{b-\epsilon}}+\mathcal{O}(\epsilon^2) \qquad &&{\rm when~} (0, -\epsilon/(b-\epsilon))~ {\rm is~ a~ saddle,~ i.e.,}~ b-\epsilon>0, \end{aligned} \right. $$ and $(\tilde x_s, 0)$ is the intersection between the stable manifold of $(0, -\epsilon/(b-\epsilon))$ and the negative $x$-axis. Notice that $\varpi_l(\epsilon)$ defined in (\ref{jfeuihuncjsdh}) and $\tilde\varpi_l(\epsilon)$ coincide up to higher-order terms. Since the first integral of $Y^g_\epsilon$ is $$ H^g_Y(x, y)=\frac{1}{2}x^2+\epsilon g(x, \epsilon)-\frac{b-\epsilon}{2}y^2-\epsilon y, $$ $P_L(x_1, \epsilon)$ satisfies \begin{eqnarray} \frac{1}{2}x_1^2+\epsilon g(x_1, \epsilon)=\frac{1}{2}P_L(x_1, \epsilon)^2+\epsilon g(P_L(x_1, \epsilon), \epsilon). \label{PLLM} \end{eqnarray} The above analysis allows us to define a full Poincar\'e map $P(x_0, \epsilon)=P_L(P_U(x_0, \epsilon), \epsilon)$ for $x_0\in \tilde I(\epsilon)$ and $0<\epsilon<\epsilon_0$, where $$\tilde I(\epsilon)=(0, \min\{-\tilde\varpi_l(\epsilon), \varpi_u(\epsilon)\}),\qquad \epsilon_0=\min\{\epsilon_1, \epsilon_2\}.$$ Hence, a fixed point of $P(x_0, \epsilon)$ in the interval $\tilde I(\epsilon)$ corresponds to a crossing periodic orbit of $Z^g_\epsilon$. 
Furthermore, from (\ref{PUMMMM}) and (\ref{PLLM}) it follows that $P(x_0, \epsilon)$ satisfies $$\frac{1}{2}x_0^2+\epsilon g(-x_0, \epsilon)=\frac{1}{2}P(x_0, \epsilon)^2+\epsilon g(P(x_0, \epsilon), \epsilon),$$ i.e., \begin{eqnarray} \frac{1}{2}x_0^2= \frac{1}{2}P(x_0, \epsilon)^2+\epsilon e^{-1/P(x_0, \epsilon)}\sin\left(\frac{\pi\epsilon}{P(x_0, \epsilon)}\right). \label{eafaeinjef} \end{eqnarray} Now we study the fixed points of $P(x_0, \epsilon)$ in $\tilde I(\epsilon)$. Since $0<|a|\le1/2$ and $0<|b|\le1/2$ as assumed in Proposition~\ref{ejhadafjff}, $\min\{-\tilde\varpi_l(\epsilon), \varpi_u(\epsilon)\}>\sqrt2\epsilon+\mathcal{O}(\epsilon^2)$ for $0<\epsilon<\epsilon_0$, so that $\epsilon/i\in\tilde I(\epsilon)$ for $i\in\mathbb{N}^+$ and $0<\epsilon<\epsilon_0$. Here $\epsilon_0$ can be reduced if necessary. As a consequence, by (\ref{eafaeinjef}) we get that $x_0$ is a fixed point of $P(x_0, \epsilon)$ in $\tilde I(\epsilon)$ if and only if $x_0=\epsilon/i$, $i\in\mathbb{N}^+$. This means that $Z^g_\epsilon$ has infinitely many nested crossing limit cycles. Moreover, these crossing limit cycles intersect the positive $x$-axis at $(\epsilon/i, 0)$, $i\in\mathbb{N}^+$. Using the first integrals, we get that these crossing limit cycles lie on the algebraic curves $\Theta^+_i$ and $\Theta^-_i$ defined in Proposition~\ref{ejhadafjff}, $i\in\mathbb{N}^+$. Finally, differentiating (\ref{eafaeinjef}) with respect to $x_0$, we get $$ \frac{dP}{dx_0}\left(\frac{\epsilon}{i}\right)=\frac{\epsilon/i}{\epsilon/i-\pi i^2e^{-i/\epsilon}\cos(\pi i)}. $$ So $0<\frac{dP}{dx_0}\left(\frac{\epsilon}{i}\right)<1$ (resp. $>1$) if $i$ is odd (resp. even), which implies that $\Theta_i$ is hyperbolic and stable (resp. unstable) if $i$ is odd (resp. even). This ends the proof of Proposition~\ref{ejhadafjff}. \end{proof}
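An analogous numerical check applies here (again illustrative only, with an exaggerated $\epsilon$ and names of our choosing): substituting $P=x_0$ into (\ref{eafaeinjef}) forces $e^{-1/x_0}\sin(\pi\epsilon/x_0)=0$, i.e., $x_0=\epsilon/i$, and the derivative at these fixed points alternates around $1$ with the parity of $i$:

```python
# Sanity check for Z^g_eps: at a fixed point P = x0 of the Poincare map,
# equation (eafaeinjef) reduces to e^{-1/x0} * sin(pi*eps/x0) = 0,
# whose positive solutions are x0 = eps/i.
from math import exp, sin, cos, pi

eps = 0.3   # exaggerated; the proposition requires eps small

def residual(x0):
    # right-hand side minus left-hand side of (eafaeinjef) at P = x0
    return eps * exp(-1.0 / x0) * sin(pi * eps / x0)

def dP(i):
    # derivative of the Poincare map at x0 = eps/i
    return (eps / i) / (eps / i - pi * i ** 2 * exp(-i / eps) * cos(pi * i))

fixed_ok = [abs(residual(eps / i)) < 1e-12 for i in range(1, 6)]
stable = [0 < dP(i) < 1 for i in range(1, 6)]   # stable iff i odd
```

The first five crossing limit cycles come out stable, unstable, stable, unstable, stable, matching the parity rule.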
TITLE: Let $ n \geq 3 $. By factorising $ n $ or $n + 1 $ (as appropriate), show that $ \mathbb{Z}[\sqrt{-n}] $ is not a UFD QUESTION [3 upvotes]: Let $ n \geq 3 $. By factorising $ n $ or $n + 1 $ (as appropriate), show that $ \mathbb{Z}[\sqrt{-n}] $ is not a UFD. My thoughts so far: Define $ N(a + b \sqrt{-n}) = a^2 + n b^2 $. Suppose $ n $ is odd. Then $ n + 1 $ is even, say $ n + 1 = 2k $. Now $ N(2) = 4 $, and the norm of an element in this ring can never be 2, so we have that 2 is irreducible. Now note that $ 1 + n = (1 + \sqrt{-n})(1 - \sqrt{-n}) $. Is $ 1 + \sqrt{-n} $ irreducible? Well, if $ 1 + \sqrt{-n} = z_1 z_2 $ with neither factor a unit, then $ N(z_1)N(z_2) = 1 + n $. So $ N(z_i) \leq \frac{n+1}{2} < n $. But this means both $ z_i$ must be purely real, which clearly can't be the case. Similarly, $ 1 - \sqrt{-n} $ is irreducible. Neither of these factors is equal to 2, and so 2 appears in one factorisation but not another. Hence for $ n $ odd, we don't have a UFD. What about $n$ even? How can I factorise $ n $ other than as $ 2k $ for some $k $? Thanks EDIT: I overlooked $ n = \sqrt{-n}\,(-\sqrt{-n}) $! REPLY [2 votes]: This CW answer intends to remove the question from the unanswered queue. Your thoughts are correct, and if you correct your edit according to this comment by Arturo Magidin you will find a similar argument for even $n$.
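For readers who like to double-check such norm arguments by machine, here is a small brute-force script (the helper names are mine):

```python
# Brute-force check of the norm argument: in Z[sqrt(-n)] with n >= 3,
# N(a + b*sqrt(-n)) = a^2 + n*b^2 never equals 2, so 2 is irreducible,
# and any nontrivial factor of norm < n must be a plain integer (b = 0).
def attainable_norms(n, bound=12):
    return {a * a + n * b * b
            for a in range(-bound, bound + 1)
            for b in range(-bound, bound + 1)}

results = {}
for n in range(3, 30):
    results[n] = 2 not in attainable_norms(n)
# every n from 3 to 29 passes: no element has norm 2
```

The check that 2 is never a norm is exactly what makes 2 irreducible: a proper factorisation $2 = z_1 z_2$ would force $N(z_1) = N(z_2) = 2$.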
Most people would consider having multiple “businesses” as lateral moves rather than complementary aspects. But let me tell you, when you combine things that are seemingly separate but end up being complementary, you create a true unique selling proposition. If you want to be in real estate, have skills in online marketing, and can creatively apply new and unique business models, then you are really in a league of your own. That’s what happened when AirBnB first went into business, and then again when people realized they could have a totally different business model with their own real estate. The Real Estate Method of Long-Term Investing Real estate investing has been around for decades and it has built people immense amounts of wealth. This can come from wholesaling, flipping, holding rental properties, or owning commercial properties. In each case, the real estate investor can make a sh*t ton of money from land and the properties on top of it. One of the tried-and-true methods of growing steady wealth is having rental properties. The easiest way for people to do that in most cases is through either single-family homes or multi-unit apartment complexes. With single-family homes, you can have a long-term tenant that pays you monthly, covering your mortgage and gaining a little bit of profit. This is called having cash flow. With commercial properties like apartment complexes, people start small with duplexes and then trade up to bigger complexes using the 1031 exchange. This means that you can sell a property and buy another one without immediately paying capital gains taxes. This has been done for a long time, and people have built millions in assets simply by doing this. Most real estate investors that have some years on them will tell you that these two buy-and-hold strategies are the best way to create a solid foundation of passive income. 
Combining Online Marketing Into The Mix Check this out: If you take the buy-and-hold strategy and turn that into an online marketing gig, then what do you get? AirBnB. AirBnB’s idea was to provide places that people could rent on a nightly basis that were not in hotels. They built their platform, then scraped property listings from Craigslist, where they’d seen a need forming, and attracted audiences to their site. Seemingly overnight, they built an empire of rental properties and such a large user base that they grew far more quickly than anyone would’ve imagined. This is the power of pairing technology and marketing with a business model that has already proven to work. Creating The Unique Business Model Homeowners noticed something really neat: Since they could rent their own properties for a nightly rate, they were making a lot more money than with a long-term tenant. A tenant might pay $950 a month for a particular 2-bedroom apartment, but when you rent the unit for $100 a night, you only need it booked for about a third of the month to reach the same amount of revenue. After evaluating the demand for housing in particular areas, homeowners realized they could triple the revenue they bring in from a single property. With that revelation, there’s only one way to go from there… …Purchasing more properties and replicating the model. Now real estate investors have turned to acquiring as many short-term rental properties as possible with the intent of putting them up on nightly rental sites like VRBO and AirBnB. Once that business model started trending, other real estate investors saw an opportunity. Wholesalers are able to find sellers who need to get rid of a property, and they pair them up with a buyer who is interested in it. They get a check as their “fee” without ever having to purchase the property. They are simply doing the middleman’s work. Why not do the same thing with the AirBnB short-term rental business model? 
Now we’re seeing a trend of investors finding homeowners that are willing to monetize their properties but don’t know how, or don’t want to learn how, to deal with short-term rental processes and regulations. Investors will get a homeowner to agree to the model, and then turn around and prepare the property for AirBnB-style renting. This means that they might have to put some work into the interior, which generally raises the home’s value anyway but also gives a better “cozy” feel to the space. Then they will list it on these short-term rental sites. The property will get bookings and the investor will split a portion of the profits with the homeowner, making it a win-win-win situation for all parties involved – the homeowner, investor, and tenant. This is called a Master Lease model by some, and these deals are becoming increasingly popular in tourist areas or neighborhoods that have a lot of events or venues. Check out this video on rental arbitrage to find out more. Other methods might involve simply buying properties that fit the mold. Areas such as downtown Houston always have a need for rental properties for the various events, venues, business meetings, and tourists. These areas are also subject to foreclosures, and people are struggling to keep their homes. This can result in another win-win situation where, as an investor, you’re able to save a homeowner from bankruptcy while also remodeling the home and preparing it to be a profitable business for you. Many other cities can provide homes at a discount as well; you simply have to learn where to look. Catch The Wave Check out the Master Lease model and see if it might be something you can dig into yourself or do with your own properties if you have them. These might also be called AirBnB arbitrage. It’s worth checking out the demand on short-term housing in your area and seeing if you might be able to make some additional profit on the housing that’s available there. 
Let us know what you find, and check back to learn more!
Android 6 Exchange Account Configuration The following manual has been created and tested with a Motorola Nexus 6 running Android 6.x (Marshmallow). Preconditions: - Smartphone or mobile phone based on Android 6.x with Wifi or a mobile internet connection - ZIH-Login with an Exchange account General Data: - Exchange Server: msx.tu-dresden.de - Username: ZIH-Login - Password: ZIH password - Exchange email address: ZIH-Login@msx.tu-dresden.de From the home screen, open the applications menu. Scroll to and select Settings. Select Accounts in the following menu, then Add account. Select Exchange as the account type. Enter your email address as follows: <ZIH-Login>@msx.tu-dresden.de and select Manual Setup. Select Exchange and touch Next. Enter your email account password and touch Next. Enter the Server, the Port and the Security Type as well, and touch Next. Confirm the notification with Ok. Review and/or change the email account options. When you're done, touch Next. Activate the device administrator. Edit the account name if desired and touch Next. The email account has now been added. Your email will sync to your phone.
New York Fashion Week always brings together the top designers of the fashion world and A-list celebrities, and this year was no exception. Earlier this month, both wedding gowns and white dresses from non-bridal designers were on display, which could be indicative of what will appear during October’s bridal market. Some of the designers included: Derek Lam, Victoria Beckham, Carmen Marc Valvo, BCBG Max Azria, Adam, Farah Angsa, Lela Rose, Jenny Packham, Behnaz Sarafpour, Jill Stuart, Monique L’huillier, and Carolina Herrera. As you plan your big day, you can look to these new collections for inspiration from the silhouettes, textures, patterns, and embellishments. Cocktail dresses and metallic fabrics were some of the trends seen throughout designer collections during New York Fashion Week. Another interesting concept from this year’s collections is the “Little White Dress,” a second dress for the bride to change into for the reception, typically short and sleek.
\begin{document} \begin{frontmatter} \title{Bandwidth selection in kernel empirical risk minimization via the gradient} \runtitle{Bandwidth selection via the gradient} \begin{aug} \author[A]{\fnms{Micha\"{e}l}~\snm{Chichignoud}\thanksref{T1}\ead[label=e1]{chichignoud@stat.math.ethz.ch}} \and \author[B]{\fnms{S\'{e}bastien}~\snm{Loustau}\corref{}\ead[label=e2]{loustau@math.univ-angers.fr}} \runauthor{M. Chichignoud and S. Loustau} \affiliation{ETH Z\"{u}rich and University of Angers} \address[A]{Seminar Fuer Statistics\\ ETH Z\"urich\\ R\"amistrasse 101\\ CH-8092 Z\"urich\\ Switzerland\\ \printead{e1}} \address[B]{LAREMA\\ Universit\'e d'Angers\\ 2 Bvd Lavoisier\\ 49045 Angers Cedex\\ France\\ \printead{e2}} \end{aug} \thankstext{T1}{Supported in part as member of the German--Swiss Research Group FOR916 (Statistical Regularization and Qualitative Constraints) with grant number 20PA20E-134495/1.} \received{\smonth{1} \syear{2014}} \revised{\smonth{1} \syear{2015}} \begin{abstract} In this paper, we deal with the data-driven selection of multidimensional and possibly anisotropic bandwidths in the general framework of kernel empirical risk minimization. We propose a universal selection rule, which leads to optimal adaptive results in a large variety of statistical models such as nonparametric robust regression and statistical learning with errors in variables. These results are stated in the context of smooth loss functions, where the gradient of the risk appears as a good criterion to measure the performance of our estimators. The selection rule consists of a comparison of gradient empirical risks. It can be viewed as a nontrivial improvement of the \mbox{so-called} Goldenshluger--Lepski method to nonlinear estimators. Furthermore, one main advantage of our selection rule is the nondependency on the Hessian matrix of the risk, usually involved in standard adaptive procedures. 
\end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{62G05} \kwd{62G20} \kwd[; secondary ]{62G08} \kwd{62H30} \end{keyword} \begin{keyword} \kwd{Adaptivity} \kwd{bandwidth selection} \kwd{ERM} \kwd{robust regression} \kwd{statistical learning} \kwd{errors-in-variables} \end{keyword} \end{frontmatter} \section{Introduction}\label{sintro} We consider the minimization problem of an unknown risk function $\R\dvtx \bR^m\to\bR$, where $m\geq1$ is the dimension of the statistical model. We assume the existence of a risk minimizer \begin{equation} \label{oracle} \f^\star\in\arg\min_{\f\in\bR^m}\R(\f), \end{equation} where the risk function corresponds to the expectation of an appropriate loss function w.r.t. an unknown distribution. In empirical risk minimization, this quantity is usually estimated by its empirical version from an i.i.d. sample. However, in many problems such as local $M$-estimation or errors-in-variables models, a nuisance parameter can be involved in the empirical version. This parameter most often coincides with some bandwidth related to a kernel that gives rise to ``kernel empirical risk minimization.'' One typically deals with this issue in pointwise estimation, as, for example, in Polzehl and Spokoiny \cite{PolzehlSpokoiny06} with localized likelihoods or in Chichignoud and Lederer \cite{ChichignoudLederer13} with local $M$-estimators. In learning theory, many authors have recently investigated supervised and unsupervised learning with errors in variables. As a rule, such matters require one to plug deconvolution kernels into the empirical risk, as Loustau and Marteau \cite{pinkfloyds} in noisy discriminant analysis or Hall and Lahiri \cite{HallLahiri08} in quantile and moment estimation; see also Dattner, Rei{\ss} and Trabs \cite{DattnerReissTrabs13}. In the above papers, the authors studied the theoretical properties of kernel empirical risk minimizers and proposed deterministic choices of bandwidths to deduce optimal minimax results. 
As usual, these optimal bandwidths are related to the smoothness of the target function or the underlying density and are not achievable in practice. Adaptivity is therefore one of the biggest challenges. In this respect, data-driven bandwidth selections have already been proposed in \cite{ChichignoudLederer13,ChichignoudLoustau13,DattnerReissTrabs13,PolzehlSpokoiny06}, which are all based on Lepski-type procedures. Lepski-type procedures are rather appropriate to construct data-driven bandwidths involved in kernels; for further details, see, for example, \cite{Katkovnik99,Lepski90,LepskiMammenSpokoiny97}. It is well known that, with multidimensional data, they suffer from the restriction to isotropic bandwidths, that is, to nested neighborhoods (hyper-cubes). Many improvements were made by Kerkyacharian, Lepski and Picard \cite{KerkyacharianLepskiPicard01} and more recently by Goldenshluger and Lepski \cite{GoldenshlugerLepski11} to select anisotropic bandwidths (hyper-rectangles). Nevertheless, their approach still does not provide anisotropic bandwidth selection for nonlinear estimators, which is the scope of this paper. The only work we can mention is \cite{ChichignoudLederer13} in a restrictive case, namely pointwise estimation in nonparametric regression. Therefore, the study of data-driven selection of anisotropic bandwidths is still an open issue. Moreover, this field is of great interest in practice, especially in image denoising; see, for example, \cite{CastroSalmonWillett12,KatkovnikFoiEgiazarianAstola10}. The main contribution of our paper is to bring new insights to the problem of bandwidth selection in kernel empirical risk minimization in a possibly anisotropic framework. To this end, we first introduce a new criterion called \textit{gradient excess risk}, which makes the anisotropic bandwidth selection possible. 
We then provide a novel data-driven selection based on the comparison of ``Gradient empirical risks.'' That can be viewed as an extension of the so-called Goldenshluger--Lepski method (GL method; see \cite {GoldenshlugerLepski11}) and of the empirical risk comparison method (ERC method; see \cite{ChichignoudLoustau13}). Eventually, we derive an upper bound for the gradient excess risk (called gradient inequality) and optimal results in many settings, such as pointwise and global estimation in nonparametric regression and clustering with errors in variables. Note that we consider the risk minimization over the finite dimensional set $ \bR^m$. In statistical learning or nonparametric estimation, one usually aims at estimating a functional object belonging to some Hilbert space. However, in many examples, the target function can be approximated by a finite object, thanks, for instance, to a suitable decomposition in a basis of the Hilbert space. This is typically the case in local $M$-estimation, where the target function is assumed to be locally polynomial (and even constant in many cases). Moreover, in statistical learning, one is often interested in the estimation of a finite number of parameters, as in clustering. The extension to the infinite-dimensional case is discussed in Section~\ref{sdiscussion}. The structure of this paper is as follows: the main ideas behind the gradient excess risk are introduced in the remainder of this section. An upper bound for the gradient excess risk of the data-driven procedure is presented in Section~\ref{gradientinequality}. This procedure is applied to clustering in Section~\ref{sectionkmeans} and to robust nonparametric regression in Section~\ref{sectionlocalglobal}. Additionally, a discussion of our assumptions and an outlook are given in Section~\ref{sdiscussion}, and Section~\ref{ssimu} illustrates the behavior of the method with numerical results. The proofs are finally conducted in the \hyperref[appendix]{Appendix}. 
\subsection{The gradient excess risk approach}\label{sgradient} In the literature, such as in statistical learning, the excess risk $R(\widehat\theta)-R(\theta^\star)$ is the main criterion to measure the performance of some estimator $\widehat\theta$. Originally, Vapnik and Chervonenkis \cite{vapnikold} proposed to control this quantity via empirical process theory, which gives rise to slow rates $\cO( n^{-1/2})$ for the excess risk; see also \cite{vapnik98}. In the last decade, many authors have improved such a bound by giving fast rates $\cO( n^{-1})$ using the so-called localization technique; see \cite{svm,kolt,mammen,nedelec,mendelsonkernel,tsybakov2004} and Boucheron, Bousquet and Lugosi \cite{surveylugosi} for an overview in classification. This technique consists of studying the increments of an empirical process in the neighborhood of the target $\f^\star$. In particular, it requires a variance-risk correspondence, equivalent to the well-known margin assumption. As far as we know, this complicated modus operandi is the major obstacle to anisotropic bandwidth selection. In what follows, we introduce an alternative criterion to solve this issue, namely the gradient excess risk ($G$-excess risk, for short, in the sequel). This quantity is defined as \begin{equation} \label{dexcessrisk} \bigl\llvert \D\bigl(\fn,\f^\star\bigr)\bigr\rrvert _2:=\bigl\llvert \D(\fn)-\D\bigl(\f^\star\bigr)\bigr\rrvert _2\qquad\mbox{where }\D:=\nabla\R, \end{equation} where \mbox{$ \llvert \cdot\rrvert _2 $} denotes the Euclidean norm on $ \bR^m $ and $\nabla\R\dvtx \bR^m\to\bR^m$ denotes the gradient of the risk $\R$. With a slight abuse of notation, $G$ denotes the gradient, whereas $G(\cdot,\f^\star)$ denotes the $\D$-excess risk. Under regularity assumptions on $R(\cdot)$, the $G$-excess risk is linked with the excess risk, thanks to the following lemma. 
\begin{lemma} \label{lemmadmargin} Let $\f^\star$, defined as in (\ref{oracle}), and $U$ be the Euclidean ball of $\bR^m$ centered at $\f^\star$, with radius $\delta>0$. Assume $\f\mapsto\R(\f)$ is $\cC^2(U)$, each second partial derivative of $\R$ is bounded on $U$ by a constant $\kappa_1$ and the Hessian matrix $H_{\R}(\cdot)$ is positive definite at $\f^\star$. Then, for $\delta>0$ small enough, we have \[ \sqrt{\R(\f)-\R\bigl(\f^\star\bigr)}\leq 2\frac{\sqrt{m\kappa_1}}{\lambda_{\min}}\bigl\llvert \D\bigl(\f,\f^\star\bigr)\bigr\rrvert _2\qquad \forall\f\in U, \] where $\lambda_{\min}$ is the smallest eigenvalue of $H_{\R}(\f^\star)$. \end{lemma} The proof is based on the inverse function theorem and a Taylor expansion of the function $R(\cdot)$. Let us explain how this lemma, together with standard probabilistic tools, leads to fast rates for the excess risk. In this section, $\Rn$ denotes the usual empirical risk with associated gradient $\Dn:=\nabla\Rn$ and associated ERM $\fn$ for ease of exposition. Under the assumptions of Lemma \ref{lemmadmargin}, $\D(\f ^\star)=\Dn(\fn)=(0,\dots,0)^\top$, and we have the following heuristic: \begin{eqnarray}\label{heuristichuber} \sqrt{\R(\fn)-\R\bigl(\f^\star\bigr)}&\lesssim&\bigl\llvert \D\bigl(\fn,\f^\star\bigr)\bigr\rrvert _2 = \bigl\llvert \D( \fn)-\Dn(\fn)\bigr\rrvert _2 \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq& \sup_{\f\in\bR^m}\bigl \llvert \D(\f)-\Dn(\f)\bigr\rrvert _2\lesssim n^{-1/2}, \end{eqnarray} where $\lesssim$ denotes the inequality up to some positive constant. The last bound only requires a concentration inequality applied to the empirical process $ \Dn(\cdot)-\D(\cdot) $. Therefore, this heuristic provides fast rates for the excess risk without any localization technique. Furthermore, similar bounds can be obtained for the $\ell_2$-norm $ \llvert \fn-\f^\star\rrvert _2 $ using the same path. 
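As a toy numerical illustration of the heuristic (\ref{heuristichuber}), consider the quadratic loss, where every quantity is explicit (the setup and names below are ours, not a model treated in this paper):

```python
# Toy illustration of the gradient heuristic with the quadratic loss:
# R(theta) = E(theta - Z)^2, the minimizer theta* is mu = E Z, and the
# ERM is the sample mean.  Here G(theta) - G_n(theta) = 2(mean - mu)
# does not depend on theta, so the supremum bound is attained everywhere.
import random

random.seed(0)
mu, n = 1.5, 10_000
z = [random.gauss(mu, 1.0) for _ in range(n)]
theta_hat = sum(z) / n                      # ERM for the quadratic loss

excess_risk = (theta_hat - mu) ** 2         # R(theta_hat) - R(theta*)
g_excess = abs(2 * (theta_hat - mu))        # |G(theta_hat, theta*)|
sup_dev = abs(2 * (theta_hat - mu))         # sup_theta |G(theta) - G_n(theta)|

# sqrt(excess risk) = |G-excess risk| / 2 <= sup |G - G_n| / 2 (exact here),
# so the excess risk inherits the fast rate O(1/n).
```

In this toy case the chain of inequalities in (\ref{heuristichuber}) holds with equalities, and the excess risk is of order $n^{-1}$ while the gradient deviation is of order $n^{-1/2}$.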
Indeed, under the same assumptions, the assertion of Lemma \ref{lemmadmargin} holds, replacing the square root of the excess risk by $ \llvert \fn-\f^\star\rrvert _2 $ (see the proof of Lemma \ref{lemmadmargin}), and then optimal rates are deduced. From the model selection point of view, standard penalization techniques---based on localization---suffer from the dependency on parameters involved in the margin assumption. More precisely, in the strong margin assumption framework, the construction of the penalty requires the knowledge of $\lambda_{\min}$, related to the Hessian matrix of the risk. Although many authors have recently investigated the adaptivity w.r.t. these parameters, by proposing ``margin-adaptive'' procedures (see \cite{PolzehlSpokoiny06} for the propagation method, \cite{Lecue07} for aggregation and \cite{ArlotMassart09} for the slope heuristic), the theory is not complete and remains a hard issue; see the related discussion in Section~\ref{sdiscussion}. As an alternative, our data-driven procedure does not suffer from the dependency on $\lambda_{\min}$ since we focus on a gradient inequality in Section~\ref{gradientinequality}. \subsection{Kernel empirical risk minimization}\label{sectionKERM} In this section, kernel empirical risk minimization is properly defined and illustrated with two examples: local \mbox{$M$-}estimators and deconvolution $k$-means. For some $p\in\bN^\star$, consider a \mbox{$\bR^p$-}random variable $Z$ distributed according to $P$, absolutely continuous w.r.t. the Lebesgue measure. In what follows, we observe a sample $\cZ_n:=\{Z_1,\ldots,Z_n\}$ of independent and identically distributed (i.i.d.) random\vspace*{1pt} variables according to $P$. 
Moreover, we call a kernel of order $ r\in\bN^\star$ a symmetric function $K\dvtx \bR^d\to\bR$, $d\geq1$, which satisfies the following properties: \begin{itemize} \item[$\bullet$] $\int_{\bR^d} K(x)\,dx=1$,\vspace*{1pt} \item[$\bullet$] $\int_{\bR^d} K(x)x_j^k\,dx=0$ $\forall k\leq r, \forall j\in\{1,\ldots, d\}$,\vspace*{1pt} \item[$\bullet$] $\int_{\bR^d} \llvert K(x)\rrvert \llvert x_j\rrvert ^{r}\,dx<\infty$, $\forall j\in\{1,\ldots, d\}$. \end{itemize} For any $h\in\cH\subset\bR^d_+$, the dilation $K_h$ is defined as \[ K_h(x)=\Pi_h^{-1}K(x_1/h_1, \ldots, x_d/h_d)\qquad \forall x\in\bR^d, \] where $\Pi_h:=\prod_{j=1}^dh_j$. For a given kernel $K$, we define the kernel empirical risk indexed by an anisotropic bandwidth $h\in\cH\subset(0,1]^d$ as \begin{equation} \label{defemprisk} \Rn_{h}(\f):= \frac{1}{n}\sum _{i=1}^n\ell_{K_h}(Z_i,\f), \end{equation} and an associated kernel empirical risk minimizer (kernel ERM) as \begin{equation} \label{defkerm} \fn_h\in\argmin_{\f\in\bR^m} \Rn_{h}( \f). \end{equation} The function $\ell_{K_h}\dvtx \bR^p\times\bR^m\to\bR_+$ is a loss function associated to a kernel $ K_h $ such that $ \f\mapsto\ell_{K_h}(Z,\f ) $ is twice differentiable $P$-almost surely and such that the limit of its expectation coincides with the risk, that is, \begin{equation} \label{eqlimitrisk} \lim_{h\to(0,\dots,0)}\bE\Rn_h(\f)=\R(\f)\qquad \forall\f\in\bR^m, \end{equation} where $ \bE$ denotes the expectation w.r.t. the distribution of the sample $ \cZ_n $. The agenda is the data-driven selection of the ``best'' estimator in the family $\{\fn_h,h\in\cH\}$. This issue arises in many examples, such as local fitted likelihood (Polzehl and Spokoiny \cite {PolzehlSpokoiny06}), image denoising (Astola et~al. \cite{KatkovnikFoiEgiazarianAstola10}) and robust nonparametric regression; see Chichignoud and Lederer \cite{ChichignoudLederer13}. In such a framework, we observe a sample of i.i.d. 
pairs $Z_i=(W_i,Y_i)$, $i=1,\ldots,n$, and the kernel empirical risk has the following general form: \[ \frac{1}{n}\sum_{i=1}^n \ell_{K_h}(Z_i,\theta)=\frac{1}{n}\sum _{i=1}^n\rho(Z_i,\theta){K}_h (W_i-x_0 ), \] where $\rho(\cdot,\cdot)$ is some likelihood and $x_0\in\bR^d$. Another example arises when we observe a contaminated sample $Z_i=X_i+\varepsilon_i$, $i=1, \ldots, n$, in the problem of clustering. In this case, the kernel empirical risk is defined according to \[ \frac{1}{n}\sum_{i=1}^n \ell_{K_h}(Z_i,\bc)=\frac{1}{n}\sum _{i=1}^n\int_{\bR^d}\min _{j=1,\dots,k}\llvert x-c_j\rrvert _2^2 \widetilde K_h(Z_i-x)\,dx, \] where $\widetilde K_h(\cdot)$ is a deconvolution kernel and $\bc =(c_1, \ldots, c_k)\in\bR^{dk}$ is a codebook. In the next section, we present the bandwidth selection rule in the general context of kernel empirical risk minimization. We especially deal with clustering with errors in variables and robust nonparametric regression in Sections~\ref{sectionkmeans} and \ref{sectionlocalglobal}, respectively. \section{Selection rule and gradient inequality} \label{gradientinequality} The anisotropic bandwidth selection issue has been recently investigated in Goldenshluger and Lepski \cite{GoldenshlugerLepski11} (GL~method) in density estimation; see also \cite{ComteLacour13} for deconvolution estimation and \cite{GoldenshlugerLepski08,GoldenshlugerLepski09} for the white noise model. This method, based on the comparison of estimators, requires some ``linearity'' property, which is trivially satisfied by kernel estimators. However, kernel ERMs are usually nonlinear (except for the least squares estimator), and the GL method cannot be directly applied to such estimators. To tackle this issue, we introduce a new selection rule based on the comparison of gradient empirical risks instead of estimators (i.e., kernel ERM). To that end, we first introduce some notation.
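Before turning to that notation, the first example above (local $M$-estimation) can be made concrete by a short numerical sketch of the kernel empirical risk (\ref{defemprisk}) and its minimizer (\ref{defkerm}). The Gaussian product kernel, the absolute-value likelihood $\rho$ and the grid minimization below are illustrative choices only, not prescribed by the text:

```python
import numpy as np

# Minimal sketch of a kernel empirical risk minimizer (kernel ERM) for the
# local M-estimation example; all modelling choices here are illustrative.

def gaussian_kernel(u, h):
    # product kernel K_h(u) = prod_j h_j^{-1} K(u_j / h_j), K standard Gaussian
    u = np.atleast_2d(u)
    return np.prod(np.exp(-0.5 * (u / h) ** 2) / (np.sqrt(2.0 * np.pi) * h), axis=1)

def kernel_empirical_risk(theta, W, Y, x0, h, rho=lambda r: np.abs(r)):
    # R_h(theta) = (1/n) sum_i rho(Y_i - theta) K_h(W_i - x0)
    return np.mean(rho(Y - theta) * gaussian_kernel(W - x0, h))

def kernel_erm(W, Y, x0, h, grid):
    # brute-force minimizer over a candidate grid (here the model dimension is m = 1)
    risks = [kernel_empirical_risk(t, W, Y, x0, h) for t in grid]
    return grid[int(np.argmin(risks))]

rng = np.random.default_rng(0)
n = 2000
W = rng.uniform(0.0, 1.0, size=(n, 1))                            # design on [0, 1]
Y = np.sin(2.0 * np.pi * W[:, 0]) + rng.standard_t(df=3, size=n)  # heavy-tailed noise
x0, h = np.array([0.5]), np.array([0.1])
grid = np.linspace(-2.0, 2.0, 401)
estimate = kernel_erm(W, Y, x0, h, grid)  # local median-type fit, near sin(pi) = 0
```

With the absolute-value likelihood this is a locally weighted median; swapping `rho` changes the $M$-estimator while the bandwidth selection problem over $h$ stays the same.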
For any $h\in\cH$ and any $\f\in\bR^m$, the gradient empirical risk ($\D$-empirical risk) is defined as \begin{equation} \label{defdemprisk} \Dn_h(\f):=\frac{1}{n}\sum _{i=1}^n\nabla\ell_{K_h}(Z_i, \f)= \Biggl(\frac{1}{n}\sum_{i=1}^n \frac {\partial}{\partial\f_j}\ell_{K_h} (Z_i,\f) \Biggr)_{j=1,\ldots, m}. \end{equation} Note that $ \Dn_h(\fn_h)=(0,\ldots,0)^\top$ almost surely, since $\ell_{K_h}(Z_i,\cdot)$ is twice differentiable almost surely and $\fn_h$ minimizes (\ref{defemprisk}) over the whole of $\bR^m$. According to (\ref{eqlimitrisk}), we also notice that the limit of the expectation of the $\D$-empirical risk coincides with the gradient of the risk. Following Goldenshluger and Lepski \cite{GoldenshlugerLepski11}, we introduce an auxiliary $\D$-empirical risk in the comparison. For any couple of bandwidths $(h,\eta)\in\cH^2$ and any $ \f\in\bR ^m $, the auxiliary $\D$-empirical risk is defined as \begin{equation} \label{defconvolutiondemprisk} \Dn_{h,\eta}(\f):=\frac{1}{n}\sum _{i=1}^n\nabla\ell_{K_h*K_\eta }(Z_i, \f), \end{equation} where $K_h*K_\eta(\cdot):=\int_{\bR^d}K_h(\cdot-x)K_\eta(x)\,dx$ stands for the convolution between $K_h$ and $K_\eta$. The gradient inequality stated in Theorem \ref{thmainresult} relies on the control of certain random processes, as follows. \begin{defi}[(Majorant)]\label{defmajorant} For any integer $ {l}>0 $, we call \textit{majorant} a function $ \majorant_l\dvtx \cH^2\to\bR_+ $ such that \[ \bP \Bigl(\sup_{\lambda,\eta\in\cH} \bigl\{ \llvert \Dn_ { \lambda,\eta} -\bE \Dn_{\lambda,\eta}\rrvert _{2,\infty}+\llvert \Dn_{\eta}-\bE \Dn_{\eta }\rrvert _{2,\infty}-\majorant_l(\lambda, \eta) \bigr\}_+>0 \Bigr)\leq n^{-l}, \] where $\llvert T\rrvert _{2,\infty}:=\sup_{\f\in\bR^m}\llvert T(\f)\rrvert _2$ for all $ T\dvtx \bR^m\to\bR^m $ with $\llvert \cdot\rrvert _2$ the Euclidean norm on $\bR^m$, and $\bE$ is understood coordinatewise. \end{defi} The main issue in applications is to compute majorants of the right order.
Such majorants can be derived from classical tools such as Talagrand's concentration inequalities (Talagrand \cite{talagrandinventiones}, Boucheron, Lugosi and Massart \cite{boucheronlivre}, Bousquet \cite{bousquet}; see also \cite{GoldenshlugerLepski09b}). In Sections~\ref{sectionkmeans} and~\ref{sectionlocalglobal} such majorant functions are computed in clustering and in robust nonparametric regression. We are now ready to define the selection rule as \begin{equation} \label{defrule} \widehat h\in\argmin_{h\in\cH} \widehat{\mathrm{BV}}(h), \end{equation} where $\widehat{\mathrm{BV}}(h)$ is an estimate of the bias--variance decomposition at a given bandwidth $h\in\cH$. It is explicitly defined as \begin{eqnarray}\label{defempbv} \widehat{\mathrm{BV}}(h):=\sup_{\eta\in\cH} \bigl\{ \llvert \Dn_{h,\eta }-\Dn_\eta\rrvert _{2,\infty} - \majorant_l(h,\eta) \bigr\} +\majorant^\infty_l(h)\nonumber \\ \eqntext{\displaystyle\mbox{with }\majorant_l^\infty (h):=\sup _{\lambda\in\cH} \majorant_l(\lambda,h).} \end{eqnarray} The kernel ERM $\fn_{\widehat h}$ defined in (\ref{defkerm}), with bandwidth $\widehat h$ selected in (\ref{defrule}), satisfies the following bound. \begin{theorem}[(Gradient inequality)]\label{thmainresult} For any $n\in\bN^\star$ and for any $l\in\bN^\star$, we have with probability $1-n^{-l}$, \[ \bigl\llvert \D\bigl(\fn_{\widehat h},\f^\star\bigr)\bigr\rrvert _2\leq3\inf_{h\in\cH} \bigl\{B(h)+\majorant _l^\infty(h) \bigr\}, \] where $B\dvtx \cH\to\bR_+$ is a bias function defined as \begin{equation} \label{defbias} \quad\bias(h):=\max \Bigl(\llvert \bE\Dn_{h}-\D\rrvert _{2,\infty},\sup_{\eta\in \cH}\llvert \bE\Dn_{h,\eta}-\bE \Dn_\eta\rrvert _{ 2, \infty} \Bigr)\qquad\forall h\in\cH. \end{equation} \end{theorem} Theorem \ref{thmainresult} is the main result of this paper. The $\D$-excess risk of the data-driven estimator $\fn_{\widehat h}$ is bounded with high probability. The RHS in the gradient inequality can be viewed as the minimization of a usual bias--variance trade-off.
Indeed, the bias term $ B(h) $ is deterministic and tends to $0$ as $ h\to(0,\dots,0)$. The majorant $ \majorant_l^\infty(h) $ upper bounds the stochastic part of the $ \D $-empirical risk and can be viewed as a variance term. The gradient inequality of Theorem \ref{thmainresult} is sufficient to establish adaptive fast rates in noisy clustering and adaptive minimax rates in nonparametric estimation; see Sections~\ref{sectionkmeans}~and~\ref{sectionlocalglobal}. Moreover, the construction of the selection rule (\ref{defrule}), as well as the upper bound in Theorem \ref{thmainresult}, does not suffer from the dependency on $\lambda_{\min}$ related to the smallest eigenvalue of the Hessian matrix of the risk; see Lemma \ref{lemmadmargin}. In other words, the method is robust w.r.t. this parameter, which is a major improvement in comparison with other adaptive or model selection methods of the literature cited in the \hyperref[sintro]{Introduction}. \begin{pf*}{Proof of Theorem \protect\ref{thmainresult}} For some $h\in\cH$, we start with the following decomposition: \begin{eqnarray} \label{eqdecompositionexcessrisk} \bigl\llvert \D\bigl(\fn_{\widehat h},\f^\star\bigr) \bigr\rrvert _2&=&\bigl\llvert (\Dn_{\widehat h}-\D) (\fn_{\widehat h})\bigr\rrvert _2\leq\llvert \Dn_{\widehat h}-\D \rrvert _{2,\infty} \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq&\llvert \Dn_{\widehat h}-\Dn_{\widehat h,h}\rrvert _{2,\infty}+ \llvert \Dn _{\widehat h,h}-\Dn_{h}\rrvert _{2,\infty}+\llvert \Dn_{h}-\D\rrvert _{2,\infty}. 
\end{eqnarray} By definition of $\widehat h$ in (\ref{defrule}), the first two terms in the RHS of (\ref{eqdecompositionexcessrisk}) are bounded as follows: \begin{eqnarray}\label{eqcontrolvar} && \llvert \Dn_{\widehat h}-\Dn_{\widehat h,h}\rrvert _{2,\infty}+\llvert \Dn_{\widehat h,h}-\Dn_{h}\rrvert _{2,\infty}\nonumber \\ &&\qquad = \llvert \Dn_{h,\widehat h}-\Dn_{\widehat h}\rrvert _{2,\infty}-\majorant_{\ell}(h,\widehat h)+\majorant_{\ell}(h, \widehat h) \nonumber \\ &&\quad\qquad {}+\llvert \Dn_{\widehat h,h}-\Dn_{h}\rrvert _{2,\infty}- \majorant_{\ell }(\widehat h,h)+\majorant_{\ell}(\widehat h,h) \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad \leq \sup_{\eta\in\cH} \bigl\{\llvert \Dn_{h,\eta}- \Dn_{\eta }\rrvert _{2,\infty}-\majorant_{\ell}(h, \eta) \bigr\} +\majorant_ { \ell}^\infty( h) \nonumber \\ &&\quad\qquad{}+ \sup_{\eta\in\cH} \bigl\{\llvert \Dn_{\widehat h,\eta}- \Dn_{\eta }\rrvert _{2,\infty}-\majorant_{\ell}(\widehat h, \eta) \bigr\}+\majorant_{\ell}^\infty(\widehat h) \nonumber \\ &&\qquad =\widehat{\mathrm{BV}}(h)+\widehat{\mathrm{BV}}(\widehat h)\leq2\widehat{ \mathrm{BV}}(h).\nonumber \end{eqnarray} Besides, the last term in (\ref{eqdecompositionexcessrisk}) is controlled as follows: \begin{eqnarray*} \llvert \Dn_{h}-\D\rrvert _{2,\infty}&\leq&\llvert \Dn_{h}-\bE\Dn_h\rrvert _{2,\infty}+\llvert \bE \Dn_{h}-\D\rrvert _{2,\infty} \nonumber \\ &\leq&\llvert \Dn_{h}-\bE\Dn_h\rrvert _{2,\infty}- \majorant_l(\lambda,h) +\majorant_l(\lambda,h)+\llvert \bE\Dn_{h}-\D\rrvert _{2,\infty} \nonumber \\ &\leq&\sup_{\lambda,\eta} \bigl\{\llvert \Dn_{\lambda,\eta}-\bE\Dn _{\lambda,\eta}\rrvert _{2,\infty}+\llvert \Dn_{ \eta} -\bE\Dn_\eta\rrvert _{2,\infty} -\majorant_l(\lambda, \eta) \bigr\} \nonumber \\ &&{}+\majorant_l^\infty(h)+\llvert \bE\Dn_{h}-\D \rrvert _{2,\infty} \nonumber \\ &=:&\zeta+\majorant_l^\infty(h)+\llvert \bE \Dn_{h}-\D\rrvert _{2,\infty}. 
\end{eqnarray*} Using (\ref{eqdecompositionexcessrisk}) and (\ref{eqcontrolvar}), together with the last inequality, we have for all $ h\in\cH$, \begin{equation} \label{eqbounddexcessrisk} \bigl\llvert \D\bigl(\fn_{\widehat h},\f^\star\bigr) \bigr\rrvert _2\leq2\widehat{\mathrm{BV}}(h)+\zeta+\majorant _l^\infty(h)+\llvert \bE\Dn_{h}-\D\rrvert _{2, \infty}. \end{equation} It then remains to control the term $\widehat{\mathrm{BV}}(h)$. We have \begin{eqnarray*} && \widehat{\mathrm{BV}}(h)-\majorant_l^\infty(h) \\ &&\qquad \leq \sup _{\lambda,\eta} \bigl\{\llvert \Dn_{\lambda,\eta}-\bE \Dn_{\lambda,\eta}\rrvert _{2,\infty}+\llvert \Dn_{\eta}-\bE \Dn_{\eta}\rrvert _{2, \infty} -\majorant_l(\lambda, \eta) \bigr\} \\ &&\quad\qquad{} +\sup_{\eta}\llvert \bE\Dn_{h,\eta} -\bE \Dn_\eta\rrvert _{2,\infty} \\ &&\qquad = \zeta+\sup_{\eta} \llvert \bE\Dn_{h,\eta} -\bE\Dn_\eta\rrvert _{2,\infty}. \end{eqnarray*} The gradient inequality follows directly from (\ref{eqbounddexcessrisk}), Definition \ref{defmajorant} and the definition of $\zeta$. \end{pf*} \section{Application to noisy clustering}\label{sectionkmeans} Let us consider an integer $k\geq1$ and a $\bR^d$-random variable $X$ with law $P$ with density $f$ w.r.t. the Lebesgue measure on $\bR^d$ satisfying $\bE _P\llvert X\rrvert ^2_2<\infty$, where \mbox{$\llvert \cdot\rrvert _2$} stands for the Euclidean norm in $\bR^d$. Moreover, we restrict the study to the compact set $[0,1]^d$, assuming that $X\in [0,1]^d$ almost surely. We want to construct $k$ centroids minimizing some distortion, \begin{equation} \label{distortion} \RC(\bc):=\bE_{P}w(\bc,X), \end{equation} where $\bc=(c_1,\ldots,c_k)\in\bR^{d\times k}$ is a candidate codebook of $k$ centroids. For ease of exposition, we study this quantization problem with the Euclidean distance, by choosing the standard $k$-means loss function, namely, \[ w(\bc,x)=\min_{j=1,\ldots, k}\llvert x-c_j\rrvert _2^2,\qquad x\in\bR^d. 
\] In this section, we are interested in the inverse statistical learning context (see~\cite{isl}), which corresponds to the minimization of (\ref {distortion}), based on a noisy set of observations, \[ Z_i=X_i+\varepsilon_i, \qquad i=1, \ldots, n, \] where $(\varepsilon_i)_{i=1}^n$ are i.i.d. with density $g$ w.r.t. the Lebesgue measure on $\bR^d$ and mutually independent of the original sample $(X_i)_{i=1}^n$. This topic was first considered in \cite{bl12}, where general oracle inequalities are proposed. Let us fix a kernel $K$ of order $ r\in\bN^\star$ and, for $h\in\cH$, consider the deconvolution kernel $ \widetilde K_h$ defined by $\cF[\widetilde K_h]=\cF[K_h]/\cF[g]$, where $\cF$ stands for the usual Fourier transform. As introduced in Section~\ref{sectionKERM}, we have at our disposal the family of kernel ERMs defined as \begin{equation} \label{noisykmeans} \qquad\bcn_h\in\arg\min_{\bc\in\bR^{dk}} \RCn_h(\bc)\qquad\mbox{where }\RCn_h(\bc):=\frac{1}{n} \sum_{i=1}^n w(\bc,\cdot)*\widetilde K_h(Z_i-\cdot), \end{equation} where $f*g(\cdot):=\int_{[0,1]^d}f(x)g(\cdot-x)\,dx$ stands for the convolution product (restricted to the compact $[0,1]^d$ for simplicity). From an adaptive point of view, Chichignoud and Loustau \cite{ChichignoudLoustau13} have recently investigated the problem of choosing the bandwidth in (\ref{noisykmeans}). They established fast rates of convergence---up to a logarithmic term---for a data-driven selection of $h$, based on a comparison of kernel empirical risks. However, their approach is restricted to isotropic bandwidth selection and depends on the parameters involved in the margin assumption (in particular on $\lambda_{\min}$ in Lemma \ref{lemmadmargin}). In the following, adaptive fast rates of convergence for the excess risk are obtained via the gradient approach. For this purpose, we assume that the Hessian matrix $H_{\RC}$ is positive definite.
This assumption was considered for the first time in Pollard \cite {pollard81} and is often referred to as Pollard's regularity assumptions; see also~\cite{levrard}. Under these assumptions, we can state the same kind of result as Lemma \ref{lemmadmargin} in the framework of clustering with $k$-means. \begin{lemma} \label{dclustering} Let $\bc^\star$ be a minimizer of (\ref{distortion}), and assume $f$ is continuous and $H_{\RC}(\bc^\star)$ is positive definite. Let $U$ be the Euclidean ball centered at $\bc^\star $ with radius $\delta>0$. Then, for $\delta$ sufficiently small, \[ \sqrt{\RC(\bc)-\RC\bigl(\bc^\star\bigr)}\leq C\bigl\llvert \nabla\RC( \bc)-\nabla\RC\bigl(\bc^\star\bigr)\bigr\rrvert _2\qquad \forall \bc\in U, \] where $C>0$ is a constant which depends on $H_{\RC}(\bc^\star)$, $k$ and $d$. \end{lemma} We\vspace*{1pt} have at our disposal a family of kernel ERM $\{\bcn_h,h\in\cH\}$ with associated kernel empirical risk $ \RCn_h(\cdot) $ defined in (\ref{noisykmeans}). We propose to apply the selection rule (\ref{defrule}) to choose the bandwidth $h\in\cH$. In this problem as well, we first consider the $\D$-excess risk approach to establish adaptive fast rates of convergence for the excess risk. For any $h\in\cH$, the $ \D$-empirical risk vector of $\bR^{dk}$ is given by \begin{eqnarray*} \label{defndemprisk} \DCn_h(\bc)&:=& \Biggl(\frac{1}{n}\sum _{i=1}^n\frac{\partial }{\partial c_j^u}\int_{[0,1]^d}w( \bc,x)\widetilde K_h(Z_i-x)\,dx \Biggr)_{(u,j)\in\{1, \ldots,d\}\times\{1,\ldots, k\}} \\ &=& \Biggl(-\frac{1}{n}\sum_{i=1}^n2 \int_{V_j(\bc )}\bigl(x^u-c_j^u \bigr)\widetilde K_h(Z_i-x)\,dx \Biggr)_{(u,j)\in\{1, \ldots,d\}\times\{1,\ldots, k\}}, \end{eqnarray*} where $x^u$ denotes the $u$th coordinate of $x\in\bR^d$ and $V_j(\bc)$, ${j=1,\ldots, k}$ are open Vorono\"i cells associated with $\bc$, defined as $V_j(\bc)=\{x\in[0,1]^d\dvtx \forall u\neq j, \llvert x-c_j\rrvert _2<\llvert x-c_u\rrvert _2\}$. Note that $ \DCn_h(\bcn_h)=(0, \ldots, 0)^\top$ a.s.
by smoothness. The construction of the rule follows the general case of Section~\ref{gradientinequality}, which requires the introduction of an auxiliary $\D$-empirical risk. For any couple of bandwidths $(h,\eta)\in\cH^2$, the auxiliary $\D$-empirical risk is defined as \[ \label{defauxdemprisk} \DCn_{h,\eta}(\bc):= \Biggl(-\frac{1}{n}\sum _{i=1}^n2\int_{V_j} \bigl(x^u-c_j^u\bigr)\widetilde K_{h,\eta}(Z_i-x)\,dx \Biggr)_{(u,j)\in\{1, \ldots,d\}\times\{ 1,\ldots, k\}}\in \bR^{dk}, \] where $\widetilde K_{h,\eta}=\widetilde{K_h*K_\eta}$ is the auxiliary deconvolution kernel as in Comte and Lacour \cite{ComteLacour13}. The statement of the oracle inequality is based on the computation of a majorant function. For this purpose, we need the following additional assumption on the kernel ${K}\in\bL_2(\bR^d)$. (\textbf{K1}) There exists $S=(S_1,\dots,S_d)\in\bR^d_+$ such that the kernel $K$ satisfies \[ \operatorname{supp}\mathcal{F}[K]\subset[-S,S]\quad\mbox{and}\quad\sup_{t\in\bR ^d} \bigl\llvert \mathcal{F}[K](t)\bigr\rrvert < \infty, \] where $\operatorname{supp} g=\{x\dvtx g(x)\neq0\}$ and $[-S,S]=\bigotimes_{v=1}^d [-S_v,S_v]$. This assumption is standard in deconvolution estimation and is satisfied by many standard kernels, such as the \textit{sinc} kernel. We also consider a kernel $K$ of order $r\in\bN^\star$, according to the definition of Section~\ref{sectionKERM}. Kernels of order $r$ satisfying (\textbf{K1}) can be constructed using the so-called Meyer wavelet; see \cite{Mallat09}. Additionally, we need an assumption on the noise distribution $g$: \begin{longlist} \item[\textbf{Noise assumption} \textbf{NA}$(\rho,\beta)$.] There exist a vector $ \beta=(\beta_1,\dots,\beta_d)\in (0,\infty)^d $ and a positive constant $ \rho$ such that for all $ t\in\bR^d $, \[ \bigl\llvert \cF[g](t)\bigr\rrvert \geq\rho\prod_{j=1}^d \biggl(\frac {t_j^2+1}{2} \biggr)^{-\beta_j/2}.
\] \end{longlist} \textbf{NA($\rho,\beta$)} describes a polynomial decay of the Fourier transform of the noise density $g$. Exponentially decreasing characteristic functions of $g$ are not considered in this paper for simplicity; see \cite{ComteLacour13} for such a study in multivariate deconvolution. We are now ready to compute some majorant functions in our context. For some $ s^+>0 $, let $ \cH:=[h_-,h^+]^d $ be the bandwidth set such that $0<h_-<h^+<1$, \begin{equation} \label{defhminhmax} h_{-}:= \biggl(\frac{\log^6(n)}{n} \biggr)^{1/(2\vee2\sum _{j=1}^d\beta_j)} \quad\mbox{and}\quad h^{+}:= \bigl(1/\log(n) \bigr)^{1/(2s^{+})}. \end{equation} \begin{lemma} \label{lemmamajkmeans} Assume $(\mathbf{K1})$ and $\mathbf{NA}(\rho,\beta)$ hold for some $\rho >0$ and some $\beta\in\bR^d_+$. Let $a\in(0,1)$, and consider $ \cH_a:=\{(h_{-},\dots,h_{-})\}\cup \{h\in\cH\dvtx \forall j=1,\ldots,d\ \exists m_j\in\bN\dvtx h_j=h^{+}a^{m_j} \}$ an exponential net of $ \cH=[h_-,h^+]^d $, such that $ \llvert \cH_a\rrvert \leq n $. For any integer $ {l}>0 $, let us introduce the function $ \mathcal {M}^{\mathrm{k}} _l\dvtx \cH^2\to\bR_+ $ defined as \[ \label{defmajorantkmeans} \mathcal{M}^{\mathrm{k}}_l(h,\eta):= b'_1\sqrt{kd} \biggl(\frac{\prod_{i=1}^d\eta_i^{-\beta_i}}{\sqrt {n}}+ \frac{\prod_{i=1} ^d(h_i\vee\eta_i)^{-\beta_i}}{\sqrt{n}} \biggr), \] where $b'_1:=b'_1(l)>0$ is linear in $l$ and independent of $n$; see the \hyperref[appendix]{Appendix} for details. Then, for $n$ sufficiently large, the function $\mathcal{M}^{\mathrm{k}} _l(\cdot,\cdot)$ is a majorant, that is, \begin{eqnarray*} && \bP \Bigl(\sup_{h,\eta\in\cH_a} \bigl\{ \llvert \DCn_ { h,\eta} - \bE\DCn_{h,\eta}\rrvert _{2,\infty}+\llvert \DCn_{\eta}-\bE \DCn_{\eta }\rrvert _{2,\infty}-\mathcal{M}^{\mathrm{k}}_l(h, \eta) \bigr\}_+>0 \Bigr) \\ &&\qquad \leq n^{-l}, \end{eqnarray*} where\vspace*{1pt} $ \bE$ denotes the expectation w.r.t.
the sample and $\llvert T\rrvert _{2,\infty}=\break \sup_{\bc\in[0,1]^{dk}}\llvert T(\bc)\rrvert _2$ for all $ T\dvtx \bR ^{dk}\to\bR^{dk} $ with $\llvert \cdot\rrvert _2$ the Euclidean norm on $\bR^{dk}$. \end{lemma} The proof is based on a Talagrand inequality; see the \hyperref[appendix]{Appendix}. This lemma is the cornerstone of the analysis and gives the order of the variance term in this problem. We are now ready to define the selection rule in this setting as \begin{equation} \label{defrulekmeans} \qquad\widehat h\in\argmin_{h\in\cH_a} \Bigl\{\sup _{\eta\in\cH_a} \bigl\{\llvert \DCn_{h,\eta}-\DCn_\eta \rrvert _{2,\infty} -\mathcal{M}^{\mathrm{k}}_l(h,\eta) \bigr \} +\cM^{\mathrm{k},\infty}_l(h) \Bigr\}, \end{equation} where $\cM^{\mathrm{k},\infty}_l(h):=\sup_{\lambda\in\cH_a} \mathcal{M}^{\mathrm{k}}_l(\lambda,h) $ and $\cH_a$ is defined in Lemma \ref{lemmamajkmeans}. Finally, we need an additional assumption on the regularity of the density $f$ to control the bias term in Theorem \ref{thmkmeans}. The regularity is expressed in terms of the anisotropic Nikol'skii class. \begin{defi}[(Anisotropic Nikol'skii space)]\label{defnikolskiiAnisot} Let $ s=( s_1, s_2,\ldots, s_d)\in\bR^d_+ $, $ q\in[1,\infty[ $ and $ L>0 $ be fixed. We say that $ f\dvtx [0,1]^d\rightarrow[-L,L] $ belongs to the anisotropic Nikol'skii class $\cN_{q,d}(s,L)$ if for all $ j=1,\ldots,d $, $z\in\bR$ and for all $ x\in(0,1]^d $, \begin{eqnarray*} && \biggl(\int\biggl\llvert \frac{\partial^{\lfloor s_j\rfloor}}{\partial x_j^{\lfloor s_j\rfloor}}f(x_1, \ldots,x_j+z,\ldots,x_d)-\frac {\partial^{\lfloor s_j\rfloor}}{\partial x_j^{\lfloor s_j\rfloor }}f(x_1, \ldots,x_j,\ldots,x_d)\biggr\rrvert ^q\,dx \biggr)^{ 1/q} \\ &&\qquad \leq L\llvert z\rrvert ^{ s_j-\lfloor s_j\rfloor}, \end{eqnarray*} and $\llVert \frac{\partial^{ l}}{\partial x_j^{ l}}f\rrVert _q\leq L$, for any $l=0,\ldots, \lfloor s_j\rfloor$, where $\lfloor s_j\rfloor$ is the largest integer strictly less than $s_j$.
\end{defi} Nikol'skii classes were introduced in approximation theory by Nikol'skii; see \cite{Nikolskii75}, for example. We also refer to \cite{GoldenshlugerLepski11,KerkyacharianLepskiPicard01} where the problem of adaptive estimation has been treated for the Gaussian white noise model and for density estimation, respectively. In the sequel, we assume that the multivariate density $f$ belongs to the anisotropic Nikol'skii class $\cN_{2,d}(s,L)$, for some $s\in\bR _+^d$ and some $L>0$. In other words, the density may have different regularities in different directions. A nonadaptive upper bound for the excess risk in the anisotropic case has already been established in \cite{ChichignoudLoustau13}. In the following theorem, we propose the adaptive version of the previously cited result, where the bandwidth $\widehat h$ is chosen via the selection rule (\ref{defrulekmeans}). \begin{theorem} \label{thmkmeans} Assume $ (\mathbf{K1}) $ and $\mathbf{NA}(\rho,\beta)$ hold for some $\rho >0$ and some $\beta\in\bR^d_+$. Assume the Hessian matrix of $\RC$ is positive definite for any $\bc^\star\in\cM$. Then, for any $ s\in(0,s^+]^d $, any $L>0$, we have \begin{eqnarray*} &&\limsup_{n\to\infty} n^{1/(1+\sum_{j=1}^d\beta_j/s_j)}\sup_{f\in \cN_{2,d}(s,L)} \bigl[\bE\RC(\bcn_{\widehat h})-\RC\bigl(\bc^\star \bigr) \bigr]< \infty, \end{eqnarray*} where $\widehat h$ is selected in (\ref{defrulekmeans}). \end{theorem} This theorem is a direct application of Theorem \ref{thmainresult}, Lemma \ref{dclustering} and the majorant construction. It gives adaptive fast rates of convergence for the excess risk of~$\bcn_{\widehat h}$ and significantly improves the result stated in \cite{ChichignoudLoustau13} for two reasons: first, the selection rule allows the extension to the anisotropic case; second, there is no logarithmic term in the adaptive rate.
In our opinion, the localization technique used in \cite {ChichignoudLoustau13} seems to be the major obstacle to avoiding the extra $\log n$ term. \section{Application to robust nonparametric regression}\label{sectionlocalglobal} In this section, we apply the gradient inequality to the framework of local $M$-estimation in nonparametric robust regression. This yields adaptive minimax results for nonlinear estimators for both pointwise and global estimation. Let us specify the model beforehand. For some $n\in\bN^\star$, we observe a training set $\cZ_{n}:=\{(W_i,Y_i), i=1,\ldots, n\}$ of i.i.d. pairs, distributed according to the probability measure $ P $ on $[0,1]^d\times\bR$ satisfying the set of equations \begin{equation} \label{model} Y_i=f^\star(W_i)+ \xi_i,\qquad i=1,\ldots, n. \end{equation} We aim at estimating the target function $f^\star\dvtx [0,1]^d\rightarrow[-B,B]$, $B>0$. The noise variables $(\xi_i)_{i=1}^n$ are assumed to be i.i.d. with symmetric density $g_\xi$ w.r.t. the Lebesgue measure. We also assume $g_\xi$ is continuous at $ 0 $ and $ g_\xi(0)>0 $. For simplicity, the design points $(W_i)_{i=1}^n$ are assumed to be i.i.d. according to the uniform law on $[0,1]^d$ (extension to a more general design is straightforward), and we assume that $(W_i)_{i=1}^n$ and $(\xi_i)_{i=1}^n$ are mutually independent for ease of exposition. Finally, we restrict the estimation of $f^\star$ to the closed set $ \cT\subset[0,1]^d $ to avoid a discussion of boundary effects. We will consider a point $ x_0 \in\cT$ for pointwise estimation and the $\bL_q(\cT)$-risk for global estimation, for $q\in[1,+\infty)$. Next, we introduce an estimate of $f^\star(x_0)$ at any $x_0\in\cT$ with the local constant approach (LCA).
The key idea of LCA, as described, for example, in \cite{Tsybakov08}, Chapter~1, is to approximate the target function by a constant in a neighborhood of size $ h\in(0,1)^d $ of a given point $x_0$, which corresponds to a model of dimension $ m=1 $. To deal with heavy-tailed noise, we employ the Huber loss (see \cite{Huber64}), defined as follows. For any scale $\gamma>0$ and $z\in\bR$, \[ \label{defhubercontrast} \rho_\gamma(z):=\cases{ z^2/2, &\quad if $ \llvert z\rrvert \leq\gamma$, \cr \gamma\bigl(\llvert z\rrvert -\gamma/2\bigr), & \quad otherwise.} \] The parameter $ \gamma$ tunes the level of robustness of the Huber loss, interpolating between the square loss (large values of $ \gamma$) and the absolute loss (small values of $\gamma$). Let $ \cH:=[h_-,h^+]^d $ be the bandwidth set such that $0<h_-<h^+<1$, \[ \label{defbandwidthnet} h_-:=\frac{\log^{6/d}(n)}{n^{1/d}}\quad\mbox{and}\quad h^+:= \frac {1}{\log^2(n)}. \] For any $x_0\in\cT$, the local {estimator} $ \estim_{h}(x_0) $ of $ f^\star(x_0) $ is defined as \begin{equation} \label{deflocalestimate} \estim_{h}(x_0):=\argmin_{t\in[- B, B]} \widehat{R}^{\mathrm{loc}}_{h}(t),\qquad h\in\cH, \end{equation} where $ \widehat{R}^{\mathrm{loc}}_{h}(\cdot):= \frac{1}{n}\sum_{i=1}^n\rho_\gamma(Y_i-\cdot ) K_h(W_i-x_0)$ is the local empirical risk, and $ K_h $ is a $1$-Lipschitz kernel of order $1$. We notice that the local empirical risk estimates the local risk $ \R ^{\mathrm{loc}} (\cdot):= \bE_{Y\mid W=x_0}\rho_\gamma(Y-\cdot) $, whose unique minimizer is $ f^\star(x_0) $. In nonparametric estimation, one is usually interested in pointwise or global risk instead of excess risk. Since Theorem \ref{thmainresult} controls the $ \D$-excess risk of the adaptive estimator, we present the following lemma that links the pointwise risk with the $\D$-excess risk. \begin{lemma}\label{lemlocalmargincondition} Assume that $ \sup_{h\in\cH}\llvert \estim_{h}(x_0)-f^\star(x_0)\rrvert \leq\bE\rho_\gamma ''(\xi_1)/4 $ holds.
Then, for all $ h\in\cH$, \[ \bigl\llvert \estim_{h}(x_0)-f^\star(x_0) \bigr\rrvert \leq\frac{2}{\bE\rho_\gamma''(\xi _1)}\bigl\llvert { G^{\mathrm{loc}} \bigl( \estim_{h}(x_0) \bigr)-G^{\mathrm{loc}} \bigl(f^\star(x_0) \bigr)}\bigr\rrvert, \] where $ G^{\mathrm{loc}}$ denotes the derivative of $\R^{\mathrm{loc}}$ and $ \rho_\gamma'' $ the second derivative of $ \rho_\gamma$. \end{lemma} The proof is given in the \hyperref[appendix]{Appendix}. We can also deduce the same inequality with the $ \bL_q(\cT) $-norm. The\vspace*{2pt} assumption $ \sup_{h\in\cH}\llvert \estim_{h}(x_0)-f^\star(x_0)\rrvert \leq\bE\rho_\gamma ''(\xi_1)/4$ is needed in order to apply differential calculus and can be ensured by the consistency of $ \estim_{h} $. In this respect, the definitions of $ h_- $ and $ h^+ $ above\vspace*{2pt} imply the consistency of all estimators $\estim_{h}, h\in\cH$; for further details, see below as well as~\cite{ChichignoudLederer13}, Theorem 1. \subsection{The selection rule in pointwise estimation}\label{sectionlocal} We now present the application of the selection rule for pointwise estimation. To compute the procedure, we define the $ \D$-empirical risk as \begin{equation} \label{deflocalempiricalderivative} \widehat{G}^{\mathrm{loc}}_{h}(t):= \frac{\partial\widehat {R}^{\mathrm{loc}}_{h}}{\partial t}(t) =-\frac{1}{n}\sum_{i=1}^n \rho_\gamma' (Y_i-t ) K_h(W_i-x_0). \end{equation} For two bandwidths $(h,\eta)\in\cH^2$, we introduce the auxiliary $ \D $-empirical risk as \[ \widehat{G}^{\mathrm{loc}}_{h,\eta}(t):=-\frac{1}{n}\sum _{i=1}^n\rho_\gamma' (Y_i-t ) K_{h,\eta }(W_i-x_0), \] where $ K_{h,\eta}:=K_{h}*K_{\eta}$, as before. To apply the results of Section~\ref{gradientinequality}, we need to compute optimal majorants of the associated empirical processes. The construction of such bounds for the pointwise case has already received attention in the literature; see \cite{ChichignoudLederer13}, Proposition~2.
For any integer $l\in\bN^\star$, let us introduce the function $\mathcal{M}^{\mathrm{loc}}_l\dvtx \cH^2\to\bR_+$ defined as \begin{eqnarray*} &&\mathcal{M}^{\mathrm{loc}}_l(h,\eta):=C_{0}\llVert K \rrVert _2\sqrt{\bE\bigl[\rho _\gamma'( \xi _1)\bigr]^2} \biggl(\sqrt{ \frac{l \log(n)}{n\prod_{j=1}^dh_j\vee\eta_j}}+\sqrt{\frac{l \log(n)}{n\prod_{j=1}^d\eta_j}} \biggr), \end{eqnarray*} where $C_0>0$ is an absolute constant which does not depend on the model. If we set $ \cH_a:=\{(h_{-},\dots,h_{-})\}\cup \{h\in\cH\dvtx \forall j=1,\ldots,d\ \exists m_j\in\bN\dvtx h_j=h^{+}a^{m_j} \}$, with $a\in(0,1)$, an\vspace*{1pt} exponential net of $ \cH=[h_-,h^+]^d $ such that $ \llvert \cH_a\rrvert \leq n $, then for any $ l>0 $ the function $ \mathcal{M}^{\mathrm{loc}}_l(\cdot,\cdot)$ is a majorant according to Definition \ref{defmajorant}. Finally, we introduce the data-driven bandwidth following the scheme of the selection rule of Section~\ref{gradientinequality}, \begin{equation} \label{defruleloc} \qquad\widehat h^{\mathrm{loc}}\in\argmin_{h\in\cH_a} \Bigl\{\sup _{\eta\in\cH _a} \bigl\{\bigl\llvert \widehat{G}^{\mathrm{loc}}_{h, \eta} -\widehat{G}^{\mathrm{loc}}_\eta\bigr\rrvert _{\infty} - \mathcal{M}^{\mathrm{loc}}_l(h,\eta) \bigr\} +\cM^{\mathrm{loc},\infty}_l(h) \Bigr\}, \end{equation} where $\cM^{\mathrm{loc},\infty}_l(h):=\sup_{h'\in\cH _a}\mathcal{M}^{\mathrm{loc}}_l(h',h)$. To derive minimax adaptive rates for local estimation, we start with the definition of the anisotropic H\"older class. \begin{defi}[(Anisotropic H\"{o}lder class)]\label{defholderAnisot} Let $ s=(s_1,s_2,\ldots,s_d)\in\bR_+^d $ and $ L>0 $ be fixed.
We say that $ f\dvtx [0,1]^d\rightarrow[-L,L] $ belongs to the anisotropic H\"{o}lder class $\Sigma(s,L)$ of functions if for all $ j=1,\ldots,d $ and for all $ x\in(0,1]^d $, \begin{eqnarray*} && \biggl\llvert \frac{\partial^{\lfloor s_j\rfloor}}{\partial x_j^{\lfloor s_j\rfloor}}f(x_1, \ldots, x_j+z, \ldots, x_d)-\frac{\partial ^{\lfloor s_j\rfloor}}{\partial x_j^{\lfloor s_j\rfloor}}f(x_1, \ldots, x_j,\ldots, x_d)\biggr\rrvert \\ &&\qquad \leq L \llvert z\rrvert ^{s_j-\lfloor s_j\rfloor }\qquad \forall z\in\bR, \end{eqnarray*} and \[ \sup_{x\in[0,1]^d}\biggl\llvert \frac{\partial^{l}}{\partial x_j^{l}}f(x)\biggr\rrvert \leq L\qquad \forall l=0, \ldots, \lfloor s_j\rfloor, \] where $\lfloor s_j\rfloor$ is the largest integer strictly less than $s_j$. \end{defi} \begin{theorem}\label{thholderadapation} For any $ s\in(0,1]^d $, any $ L>0 $ and any $ q\geq1 $, it holds for all $ x_0\in\cT$, \[ \limsup_{n\to\infty} \bigl({n}/\log(n) \bigr)^{q\bar s/(2\bar s+1)}\sup _{f\in\Sigma({ s},L)}\bE\bigl\llvert \estim_{\widehat h^\mathrm{loc}}(x_0)-f^\star(x_0) \bigr\rrvert ^q<\infty, \] where $ \bar s:= (\sum_{j=1}^d s_j^{-1} )^{-1} $ denotes the harmonic average. \end{theorem} The proposed estimator $ \estim_{\widehat h^{\mathrm{loc}}} $ is then adaptive minimax over anisotropic H\"{o}lder classes in pointwise estimation. The minimax optimality of this rate [with the $ \log(n) $ factor] has been established in \cite{Klutchnikoff05} in the white noise model for pointwise estimation; see also \cite{GoldenshlugerLepski08}. For simplicity, we did not study the case of locally polynomial functions [i.e., $s\in(0,\infty)^d $]. Chichignoud and Lederer \cite{ChichignoudLederer13}, Theorem~2, have shown that the variance of local $M$-estimators is of order $ \bE[\rho_\gamma'(\xi_1)]^2/(n(\bE\rho_\gamma''(\xi_1))^2) $, and therefore their Lepski-type procedure depends on this quantity.
Thanks to the gradient approach, we obtain the same result without the dependency on the parameter $ \bE\rho_\gamma''(\xi_1) $, which corresponds to $ \lambda_{\min} $ in the general setting. The selection rule is therefore robust w.r.t. the fluctuations of this parameter, in particular when $ \gamma$ is small (median estimator). \subsection{The selection rule in global estimation}\label{sectionglobal} The aim of this section is to derive adaptive minimax results for $\estim_h$ for the $\bL_q$-risk. To this end, we need to modify the selection rule (\ref{defruleloc}) to include a global ($\bL_q$-norm) comparison of $ \D$-empirical risks. For this purpose, for all $ t\in\bR$, we denote the $\D$-empirical risks at a given point $ x_0\in\cT $ as \[ \widehat{G}^{\mathrm{loc}}_{h}(t,x_0)=-\frac{1}{n} \sum_{i=1}^n\rho _\gamma' (Y_i-t ) K_h(W_i-x_0) \] and \[ \widehat{G}^{\mathrm{loc}}_{h,\eta}(t, x_0)=- \frac{1}{n}\sum_{i=1}^n \rho_\gamma' (Y_i-t ) K_{h,\eta}(W_i-x_0), \] where the dependence on $x_0$ is explicitly written. Then we define, for $ q\in[1,\infty[ $ and for any function $ \omega\dvtx \bR\times\cT\to \bR$, the $ \bL_q $-norm and $ \bL_{q,\infty} $-semi-norm \[ \bigl\llVert \omega(t,\cdot)\bigr\rrVert _q:= \biggl(\int _\cT\bigl\llvert \omega(t,x)\bigr\rrvert ^q\,dx \biggr)^{1/q}\quad\mbox{and}\quad \llVert \omega\rrVert _{q,\infty}:=\sup_{t\in[-B,B]}\bigl\llVert \omega(t,\cdot)\bigr \rrVert _q. \] The construction of majorants is based on uniform bounds for $\bL _q$-norms of empirical processes. This topic was recently investigated by Goldenshluger and Lepski \cite{GoldenshlugerLepski09b}, Theorem~2.
For any integer $l\in\bN ^\star$, let us introduce the function $\Gamma_{l,q}\dvtx \cH\to\bR_+$ defined as \begin{eqnarray*} \Gamma_{l,q}(h)&:=& C_{q}\bigl\llVert \rho_\gamma' \bigr\rrVert _\infty\sqrt{1+l} \\ &&{} \times \cases{\displaystyle 4\llVert K\rrVert _q{\Biggl(n\prod_{j=1}^dh_j \Biggr)^{-(q-1)/q}}, &\quad if $q\in[1,2[$, \cr \displaystyle\frac{30q}{\log(q)}\bigl( \llVert K\rrVert _2\vee\llVert K\rrVert _q\bigr){\Biggl(n \prod_{j=1}^dh_j \Biggr)^{-1/2}}, &\quad if $q\in[2,\infty[$,} \end{eqnarray*} where $C_q>0$ is an absolute constant which does not depend on $n$. Then, for any $ l>0 $, the function $\mathcal{M}^{\mathrm{glo}}_{l,q}(\lambda,\eta):=\Gamma_{l,q} (\lambda\vee\eta)+\Gamma_{l,q}(\eta) $ is a majorant according to Definition \ref{defmajorant}. We finally select the bandwidth according to \[ \widehat h^{\mathrm{glo}}_q\in\argmin_{h\in\cH} \Bigl\{\sup _{\eta\in\cH } \bigl\{\bigl\llVert \widehat{G}^{\mathrm{loc}}_{h,\eta} -\widehat{G}^{\mathrm{loc}}_\eta\bigr\rrVert _ { q, \infty} - \mathcal{M}^{\mathrm{glo}}_{l,q}(h,\eta) \bigr\} +2\Gamma_{l,q}(h) \Bigr\}. \] The above choice of the bandwidth leads to the estimator $ \estim _{\widehat h_q^{\mathrm{glo}}} $ with the following adaptive minimax properties for the $ \bL_q $-risk over anisotropic Nikol'skii classes; see Definition \ref{defnikolskiiAnisot}. \begin{theorem}\label{thnikolskiiadapation} For any $ s\in(0,1]^d $, any $ L>0 $ and any $ q\geq1 $, it holds that \[ \limsup_{n\to\infty}\psi_{n,q}^{-1}(s)\sup _{f\in\cN_{q,d}({ s},L)}\bE\llVert \estim_{\widehat h_q^{\mathrm{glo}}}-f\rrVert _q^q<\infty, \] where $ \bar s:= (\sum_{j=1}^d s_j^{-1} )^{-1} $ denotes the harmonic average and \[ \psi_{n,q}(s)=\cases{ (1/n )^{q (q-1)\bar s/(q\bar s+q-1)}, &\quad if $q\in [1,2[$, \cr (1/{n} )^{q\bar s/(2\bar s+1)}, &\quad if $q\geq2$.} \] \end{theorem} We refer to \cite {HasminskiiIbragimow90,HasminskiiIbragimov81} for the minimax optimality of these rates over Nikol'skii classes. 
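To make the rate concrete, here is a quick sanity check (our computation, not part of the original statement): consider the isotropic case $d=2$, $s=(1,1)$ and $q=2$. The harmonic average is $\bar s=(1/s_1+1/s_2)^{-1}=1/2$, so
\[
\psi_{n,2}(s)= (1/n )^{2\bar s/(2\bar s+1)}=n^{-1/2},
\]
and the $\bL_2$-risk itself is of order $\psi_{n,2}(s)^{1/2}=n^{-1/4}$, which is the classical rate $n^{-s/(2s+d)}$ for Lipschitz functions ($s=1$) in dimension $d=2$.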
The proposed estimator $\estim _{\widehat h^{\mathrm{glo}}_q}$ is then adaptive minimax. To the best of our knowledge, minimax adaptivity over anisotropic Nikol'skii classes has never been studied in regression with possibly heavy-tailed noise. We finally refer to the remarks after Theorem \ref{thholderadapation}. \section{Discussion} \label{sdiscussion} Our paper solves the general bandwidth selection issue in kernel ERM by using a novel selection rule, based on the minimization of an estimate of the bias--variance decomposition of the gradient excess risk. This new criterion simultaneously upper bounds the estimation error ($ \ell_2 $-norm) and the prediction error (excess risk) with optimal rates. One of the key messages we would like to highlight is the following: if we consider smooth loss functions and a family of consistent ERM, fast rates of convergence are automatically reached, provided that the Hessian matrix of the risk function is positive definite. This statement is based on the key Lemma \ref{lemmadmargin} in Section~\ref{sgradient}, where the square root of the excess risk is controlled by the $\D$-excess risk. From an adaptive point of view, one can take another look at Lemma \ref {lemmadmargin}. On the RHS of Lemma \ref{lemmadmargin}, the $\D$-excess risk is multiplied by the constant $\lambda_{\min}^{-1}$, where $\lambda_{\min}$ is the smallest eigenvalue of the Hessian matrix at $\f^\star$. This parameter is also involved in the margin assumption. As a result, our selection rule does not depend on this parameter since the margin assumption is not required to obtain slow rates for the $\D$-excess risk. This fact partially solves an issue highlighted by Massart \cite{Massart07}, Section~8.5.2, in the model selection framework: \begin{quotation} \textit{It is indeed a really hard work in this context to design margin adaptive penalties.
Of course recent works on the topic, involving local Rademacher penalties, for instance, provide at least some theoretical solution to the problem but still if one carefully looks at the penalties which are proposed in these works, they systematically involve constants which are typically unknown. In some cases, these constants are absolute constants which should nevertheless be considered as unknown just because the numerical values coming from the theory are obviously over pessimistic. In some other cases, it is even worse since they also depend on nuisance parameters related to the unknown distribution.} \end{quotation} In Section~\ref{ssimu} below, we also illustrate the robustness of the method with numerical results. An interesting and challenging open problem would be to employ the gradient approach in the model selection framework in order to propose a more robust penalization technique (i.e., one which does not depend on the parameter $ \lambda_{\min} $). The gradient approach requires two main ingredients: the first one concerns the smoothness of the loss function in terms of differentiability; the second one concerns the dimension of the statistical model at hand, which has to be parametric, that is, of finite dimension $m\in \bN^\star$. From our point of view, the smoothness of the loss function is not a restriction, since modern algorithms are usually based---in order to reduce computational complexity---on some kind of gradient descent method in practice. On the other hand, the second ingredient might be more restrictive from the model selection point of view. An interesting open problem would be to follow the same path when the dimension $m\geq1$ is possibly larger than $n$, that is, in a high-dimensional setting.
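In code, the comparison scheme underlying our Lepski-type selection rules can be summarized in a few lines. The sketch below is illustrative only (the naming is ours, bandwidths are scalars for simplicity, and the empirical criteria are assumed precomputed and stored in dictionaries); it is not the implementation used in the numerical section.

```python
def select_bandwidth(grid, G, G_aux, majorant):
    """Gradient-based Lepski-type rule: pick the bandwidth minimizing an
    empirical bias proxy (pairwise comparisons) plus a variance majorant.

    grid       : candidate bandwidths (scalars here, for simplicity)
    G[e]       : empirical gradient criterion computed with bandwidth e
    G_aux[h,e] : auxiliary criterion computed with the convolved kernel K_{h,e}
    majorant   : function (h, e) -> deterministic bound M(h, e) on the noise
    """
    def criterion(h):
        # bias proxy: worst majorant-corrected comparison against every eta
        bias_part = max(abs(G_aux[(h, e)] - G[e]) - majorant(h, e)
                        for e in grid)
        # variance proxy: largest majorant involving h as second argument
        var_part = max(majorant(e, h) for e in grid)
        return bias_part + var_part

    return min(grid, key=criterion)
```

In practice the dictionaries would be filled from the data; the rule itself then reduces to an argmin over the bandwidth grid.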
\section{Numerical results} \label{ssimu} For completeness, we illustrate the performance of our selection rule in the context of clustering with errors in variables, and compare it to the most recent bandwidth selection procedure in that framework: the ERC method, recently developed in \cite{ChichignoudLoustau13}. This method has both theoretical and computational advantages (see also \cite {KatkovnikSpokoiny08}); however, it only provides isotropic bandwidth selection. For this reason, our anisotropic selection rule may outperform the ERC method. The computation of the selection rule (\ref{defrulekmeans}) requires many optimization steps. We first compute a family of codebooks $\{ \widehat\bc_h,h\in\mathcal{H}\}$ according to (\ref{noisykmeans}), by using a noisy version of the vanilla $k$-means algorithm. This technique gives an approximation of the optimal solution (\ref {noisykmeans}) thanks to an iterative procedure based on Newton optimization. Further theoretical foundations are detailed in \cite {bl12}. Second, we use parallel execution in order to compute the comparison of gradient empirical risks. \subsection*{Experiments} We generate an i.i.d. noisy sample $\cD_n=\{Z_1,\ldots, Z_n\}$ such that for any $i=1,\ldots, n$, \begin{eqnarray} \label{modexp} Z_i=\cases{ X_i^{(1)}+ \varepsilon_i(u), &\quad if $Y_i=1$, \cr X_i^{(2)}+\varepsilon_i(u), &\quad if $Y_i=2$,} \end{eqnarray} where\vspace*{1pt} $(X_i^{(1)})_{i=1}^n$ [resp., $(X_i^{(2)})_{i=1}^n$] are i.i.d. Gaussian with density $f_{\mathcal{N} (0_2,I_2 )}$ (resp., $f_{\mathcal{N} ((5,0)^T,I_2 )}$) and $(Y_i)_{i=1}^n$ are i.i.d. such that $\bP(Y_i=1)=\bP(Y_i=2)=1/2$. Here, $(\varepsilon_i(u))_{i=1}^n$ are i.i.d. Gaussian with mean $(0,0)^T$ and covariance matrix $\Sigma(u)={1\ \ 0\choose 0\ \ u}$ for $u\in\{1,\ldots, 10\}$.
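For readers who wish to reproduce this setting, model (\ref{modexp}) can be simulated in a few lines; the snippet below is a sketch with NumPy (the function name and interface are ours).

```python
import numpy as np

def sample_noisy_mixture(n, u, seed=None):
    """Simulate model (modexp): a balanced two-component Gaussian mixture
    in R^2 with centers (0,0) and (5,0), observed through additive
    Gaussian noise eps_i(u) with covariance diag(1, u)."""
    rng = np.random.default_rng(seed)
    y = rng.integers(1, 3, size=n)                         # latent labels Y_i in {1, 2}
    means = np.where((y == 2)[:, None], (5.0, 0.0), (0.0, 0.0))
    x = means + rng.standard_normal((n, 2))                # clean signal X_i
    eps = rng.standard_normal((n, 2)) * np.sqrt([1.0, u])  # noise eps_i(u)
    return x + eps, y                                      # observed Z_i and labels
```

Increasing $u$ stretches the noise in the second coordinate only, which is exactly why an anisotropic bandwidth is expected to help here.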
In\vspace*{1pt} this setting, we compare both adaptive procedures [our selection rule (\ref{defrulekmeans}) and the ERC method] to the standard $k$-means with Lloyd's algorithm by computing the empirical clustering error according to \begin{eqnarray} \label{cerror} \mathcal{I}_n(\widehat c_1,\widehat c_2):=\min_{\widehat\bc =(\widehat c_1,\widehat c_2),(\widehat c_2,\widehat c_1)}\frac{1}{n}\sum _{i=1}^n\ind\bigl(Y_i\neq f_{\widehat \bc}(X_i)\bigr), \end{eqnarray} where $f_{\widehat\bc}(x)\in\arg\min_{j=1, 2}\llvert x-\widehat\bc _j\rrvert ^2_2$ and $Y_i\in\{1,2\}$, $i=1, \ldots, n$ correspond to the latent class labels defined in (\ref{modexp}). Like many adaptive methods, Lepski-type procedures suffer from a dependency on a tuning parameter. In particular, in the ERC method, a constant governs the variance threshold (see \cite{Katkovnik99} or \cite{ChichignoudLoustau13}), and in our selection rule as well, a constant $b_1'>0$ appears in the majorant function of Lemma \ref{lemmamajkmeans}. As discussed earlier, the choice of this constant remains a hard issue in applications. In the sequel, we illustrate the behavior of both adaptive methods w.r.t. three constants: $0.1$, $1$ and $10$. Figure~\ref{fig1}(a)--(b) illustrates the evolution of the clustering risk (\ref{cerror}) when $u\in\{1, \ldots, 10\}$ in model (\ref{modexp}) for $k$-means (red curve) versus both adaptive procedures. \begin{figure} \begin{tabular}{@{}cc@{}} \includegraphics{1318f01a.eps} & \includegraphics{1318f01b}\\ \footnotesize{(a) $k$-means vs ERC method} & \footnotesize{(b) $k$-means vs Gradient} \end{tabular} \caption{Clustering risk averaged over 100 replications with $n=200$ for $k$-means versus ERC \textup{(a)} and the gradient \textup{(b)}.}\vspace*{-1pt}\label{fig1} \end{figure} In Figure~\ref{fig1}(a), we compare the clustering risk (\ref{cerror}) of $k$-means (red curve) with the ERC method with three different constants (ERC1, ERC2 and ERC3).
The methods are comparable, and we observe that the ERC performance is sensitive to the choice of the constant. Nevertheless, a good calibration of this constant gives slightly better results than $k$-means. In Figure~\ref{fig1}(b), the gradient approach with three different constants (G1, G2 and G3) gives a clustering risk less than 5$\%$ for any $u\in\{1, \ldots, 10\}$. In comparison, standard $k$-means completely fails as $u$ increases. In conclusion, our selection rule significantly outperforms $k$-means and ERC for any constant. This highlights the practical importance of choosing a different bandwidth in each direction in this model, that is, an anisotropic bandwidth. Our selection rule is also robust to the choice of the constant, which confirms the theoretical study. \begin{appendix} \section*{Appendix}\label{appendix} \setcounter{equation}{0} \subsection{Proof of Lemma \texorpdfstring{\protect\ref{lemmadmargin}}{1}} The proof is based on standard tools from differential calculus applied to the multivariate risk function $R\in\cC^2(U)$, where $U$ is an open ball centered at $\f^\star$. The first step is to apply a first-order Taylor expansion, which gives, for all $\f\in U$, \begin{eqnarray*} && \R(\f)-\R\bigl(\f^\star\bigr) \\ &&\qquad =\bigl(\f-\f^\star \bigr)^\top\nabla\R\bigl(\f^\star \bigr) \\ &&\quad\qquad{} +\sum _{k\in\bN^m\dvtx \llvert k\rrvert =2} \frac{2(\f-\f^\star)^{k}}{k_1!\cdots k_m!}\int_0^1 (1-t) \frac{\partial^2}{\partial\f^k}R\bigl(\f^\star +t\bigl(\f-\f^\star \bigr)\bigr)\,dt, \end{eqnarray*} where $\frac{\partial^2}{\partial\f^k}R=\frac{\partial^2}{\partial\f _1^{k_1}\cdots\partial\f_m^{k_m}} R$, $\llvert k\rrvert =k_1+\cdots+ k_m$ and $ (\f-\f^\star)^{k}=\prod_{j=1}^m(\f_j-\f^\star_j)^{k_j} $.
Now, by the property $ \nabla\R(\f^\star)=0 $ and the boundedness of the second partial derivatives, we can write \begin{eqnarray*} \R(\f)-\R\bigl(\f^\star\bigr)&\leq&\kappa_1\sum _{k\in\bN^m\dvtx \llvert k\rrvert =2} \bigl\llvert \f-\f ^\star\bigr\rrvert ^{k}\leq \kappa_1\sum_{i,j=1}^m \bigl\llvert \f_i-\f^\star_i\bigr\rrvert \times \bigl\llvert \f_j-\f^\star _j\bigr\rrvert \\ &\leq& m\kappa_1 \bigl\llvert \f-\f^\star\bigr\rrvert _2^2. \end{eqnarray*} It then remains to show the inequality \begin{equation} \label{secondstep} \bigl\llvert \f-\f^\star\bigr\rrvert _2\leq 2\bigl\llvert \D\bigl(\f,\f^\star\bigr)\bigr\rrvert _2/ \lambda_{\min}, \end{equation} where $\lambda_{\min}$ is defined in the lemma. This can be done by using standard inverse function theorem and the mean value theorem for multi-dimensional functions. Indeed, since the Hessian matrix of $ \R$---also viewed as the Jacobian matrix of $ \D$---is positive definite at $\f^\star$, and since $\R \in\cC^2(U)$, the inverse function theorem shows the existence of a bijective function $\D^{-1}\in\cC^1(G(U))$ such that \[ \bigl\llvert \f-\f^\star\bigr\rrvert _2=\bigl\llvert \D^{-1}\circ\D(\f)-\D^{-1}\circ\D\bigl(\f ^\star \bigr)\bigr\rrvert _2\qquad\mbox{for any } \f\in U. \] We can then apply a vector-valued version of the mean value theorem to obtain \begin{eqnarray}\label{mvt2} \bigl\llvert \f-\f^\star\bigr\rrvert _2&\leq& \sup_{u\in[G(\f),G(\f^\star)]}\duvvvert J_{\D ^{-1}}(u)\duvvvert _2 \bigl\llvert \D\bigl(\f^\star\bigr)-\D(\f)\bigr\rrvert _2 \nonumber\\[-8pt]\\[-8pt] \eqntext{\mbox{for any } \f\in U,} \end{eqnarray} where $ [G(\f),G(\f^\star)] $ denotes the multi-dimensional bracket between $G(\f)$ and $G(\f^\star)$, and $\vvvert \cdot\vvvert _2 $ denotes the operator norm associated to the Euclidean norm $\llvert \cdot\rrvert _2$. 
Since $\llvert \f-\f^\star\rrvert _2\leq\delta$ and $G$ is continuous, we now have \[ \lim_{\delta\to0}\sup_{u\in [G(\f),G(\f^\star)]}\duvvvert J_{\D^{-1}}(u)\duvvvert _2= \duvvvert J_{\D^{-1}} \bigl(G \bigl(\f^\star\bigr)\bigr)\duvvvert _2. \] Then, for $\delta>0$ small enough, we have with (\ref{mvt2}) \begin{eqnarray*} \bigl\llvert \f-\f^\star\bigr\rrvert _2&\leq& 2\duvvvert J_{\D^{-1}} \bigl(G\bigl(\f^\star\bigr)\bigr)\duvvvert _2 \bigl\llvert \D\bigl(\f^\star\bigr)-\D(\f)\bigr\rrvert _2 \\ &=& 2\duvvvert J^{-1}_{\D}\bigl(\f^\star\bigr)\duvvvert _2 \bigl\llvert \D\bigl(\f^\star\bigr)-\D(\f)\bigr\rrvert _2 \\ &=& 2\duvvvert H_{\R}^{-1}\bigl(\f^\star\bigr)\duvvvert _2 \bigl\llvert \D\bigl(\f^\star\bigr)-\D(\f)\bigr\rrvert _2, \end{eqnarray*} where $H_{\R}$ is the Hessian matrix of $R$. (\ref{secondstep}) follows easily, and the proof is complete. \subsection{Proofs of Section~\texorpdfstring{\protect\ref{sectionkmeans}}{3}} \mbox{} \begin{pf*}{Proof of Lemma \protect\ref{dclustering}} The Hessian matrix of $\RC(\cdot)$ involves integrals over faces of the Vorono\"i diagram $(V_j(\bc))_{j=1}^k$. For $i\neq j$, let us\vspace*{1pt} denote the face (possibly empty) common to $V_i(\bc)$ and $V_j(\bc)$ as $F_{ij}$. Moreover, denote $\sigma(\cdot)$ the $(d-1)$-di\-mensional Lebesgue measure. Then, since $f$ is continuous and $X\in[0,1]^d$, uniform continuity arguments ensure that the integral $\int_{F_{ij}}\llvert x-m\rrvert _2^2f(x)\sigma(dx)$ exists and depends\vspace*{1pt} continuously on the location of the center $m$, for any $i,j$ and for any $m\in\bR ^d$. Then we can use the following lemma due to \cite{pollard82}. \begin{lemma}[(\cite{pollard82})] \label{pollard} Suppose $\mathbb{E}_P\llvert X\rrvert _2<\infty$ and $P$ has a continuous density $f$ w.r.t. Lebesgue measure. Assume\vspace*{-1pt} integral $\int_{F_{ij}}\llvert x-m\rrvert _2^2f(x)\sigma(dx)$ exists and depends continuously on the location of the centers, for any $i,j$ and for any $m\in\bR^d$. 
Then, if the centers $c_i$, $i=1, \ldots, k$, are all distinct, $\RC(\cdot )$ has a Hessian matrix $H_{\RC}(\cdot)$ made up of $d\times d$ blocks, \begin{eqnarray*} && H_{\RC}(\bc) (i,j) \\ &&\qquad = \cases{\displaystyle 2\mathbb{P}\bigl(X\in V_i(\bc) \bigr)-2\sum_{u\neq i}\delta_{iu}^{-1} \int_{F_{iu}}f(x)\llvert x-c_i\rrvert _2^2\sigma(dx), &\quad if $i=j$, \vspace*{3pt}\cr \displaystyle -2 \delta_{ij}^{-1}\int_{F_{ij}}f(x) (x-c_i) (x-c_j)^\top\sigma (dx), &\quad otherwise,} \end{eqnarray*} where $\delta_{ij}=\llvert c_i-c_j\rrvert _2$ and $\bc\in\bR^{dk}$. \end{lemma} Hence there exists $\delta>0$ such that $\RC(\cdot)\in\mathcal {C}^2(U)$, and Lemma \ref{lemmadmargin} with $R=\RC$ completes the proof. \end{pf*} \begin{pf*}{Proof of Lemma \protect\ref{lemmamajkmeans}} We start with the study of $\llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty}$. For ease of exposition, we denote by $P_n^Z$ the empirical measure with respect to $Z_i$, $i=1, \ldots, n$ and by $P^Z$ the expectation w.r.t. the law of $Z$. Then we have \begin{eqnarray} \label{eqlemma1} && \llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty}\nonumber \\ &&\qquad =\sup_{\bc\in[0,1]^{dk}}\bigl\llvert \DCn _h(\bc)-\bE\DCn_h(\bc)\bigr\rrvert _2 \\ &&\qquad \leq \sqrt{kd}\sup_{\bc,i,j}\biggl\llvert \bigl(P_n^Z-P^Z \bigr) \biggl(\int_{V_j}2\bigl(x^i-c_j^i \bigr)\widetilde K_h(Z-x)\,dx \biggr)\biggr\rrvert.\nonumber \end{eqnarray} The cornerstone of the proof is to apply a concentration inequality to this supremum of an empirical process. We use in the sequel the following Talagrand-type inequality; see, for example, \cite{ComteLacour13}. \begin{lemma} \label{lemmatal} Let $\cX_1,\ldots, \cX_n$ be i.i.d. random variables, and let $\cS$ be a countable subset of $\bR^{m}$.
Consider the random variable \[ U_n(\cS):=\sup_{\bc\in\cS}\Biggl\llvert \frac{1}{n}\sum_{l=1}^n\psi _{\bc}(\cX_l)-\bE\psi_{\bc} (\cX_l)\Biggr\rrvert , \] where $\psi_{\bc} $ is such that $ \sup_{\bc\in\cS}\llvert \psi_{\bc}\rrvert _{\infty}\leq M$, $\bE U_n(\cS)\leq E$ and $\sup_{\bc\in\cS}\bE [\psi_{\bc}(Z)^2 ]\leq v$. Then, for any $\delta>0$, we have \[ \bP \bigl(U_n(\cS)\geq (1+2\delta)E \bigr)\leq\exp \biggl(- \frac{\delta^2nE}{6v} \biggr)\vee\exp \biggl(-\frac{ (\delta\wedge1)\delta nE}{21M} \biggr). \] \end{lemma} The proof of Lemma \ref{lemmatal} is omitted; see \cite {ComteLacour13}. We hence have to compute the quantities $E,v$ and $M$ associated with the random variable \[ \widetilde \zeta_n=\sup_{\bc,i,j}\biggl\llvert \bigl(P_n^Z-P^Z \bigr) \biggl(\int _{V_j}2\bigl(x^i-c_j^i \bigr)\widetilde K_h(Z-x)\,dx \biggr)\biggr\rrvert. \] The computation of $E:=E(h)>0$ follows the same path as \cite{ChichignoudLoustau13}, Lemma~3. More precisely, we can apply a chaining argument to the function $\int_{V_j}2(x^i-u)\widetilde K_h(Z-x)\,dx$, for any $u\in(0,1)$. Then we have, together with a maximum inequality due to \cite{Massart07}, Chapter~6, \begin{eqnarray} \label{Hbound} \bE\widetilde\zeta_n\leq \frac{b_3}{2\sqrt{n}\Pi_h( \beta)}+ \frac{b_4}{2\sqrt{n}\Pi_h( \beta+1/2)}\leq\frac{b_5}{\sqrt{n} \Pi_h( \beta)}:=E(h), \end{eqnarray} where $\Pi_h( \beta):=\prod_{i=1}^d h_i^{ \beta_i}$ for $ \beta\in \bR^d_+$ provided that $\prod_{i=1}^dh_i^{-1/2}\geq b_1/b_1'$ (thanks to the definition of $ \cH_a $ and $n$ sufficiently large). The constants $b_3,b_4,b_5>0$ can be explicitly computed. This calculation is omitted for simplicity.
Besides, using \cite{ChichignoudLoustau13}, Lemma 1, with $\psi_{\bc,i,j}(Z):=\int_{V_j}2(x^i-c_j^i)\widetilde K_h(Z-x)\,dx$, we have \begin{equation} \label{vbound} \sup_{\bc,i,j}\bE \bigl[\psi_{\bc,i,j}(Z)^2 \bigr]\leq\frac {b_6}{\Pi_h(2 \beta)}:=v(h), \end{equation} whereas \cite{ChichignoudLoustau13}, Lemma 2, allows us to write \begin{equation} \label{Mbound} \sup_{\bc,i,j}\llvert \psi_{\bc,i,j}\rrvert _{\infty}\leq\frac{b_7}{\Pi_h( \beta+1/2)}:=M(h), \end{equation} where $b_6,b_7$ are absolute constants. Hence, Lemma \ref{lemmatal}, together with (\ref{eqlemma1})--(\ref{Mbound}), gives us, for all $ \delta>0$, \begin{eqnarray*} && \bP \bigl(\llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty}\geq \sqrt{kd}(1+2\delta)E(h) \bigr) \\ &&\qquad \leq\exp \biggl(- \frac{\delta ^2nE(h)}{6v(h)} \biggr)\vee\exp \biggl(-\frac{(\delta\wedge1)\delta nE(h)}{21M(h)} \biggr). \end{eqnarray*} Moreover, note that from the previous calculations, we have $nE(h)/v(h)=c\sqrt{n}/\Pi_h( \beta)$ and $nE(h)/M(h)=c'\sqrt{n}\sqrt{\Pi_h(1/2)}$, where $c,c'>0$ depend on $b_5,b_6$ and $b_5,b_7$, respectively. Provided that $\sqrt{n}(c\Pi _h( \beta)\wedge c'\sqrt{\Pi_h(1/2)})\geq(\log n)^2$ (thanks to the definition of $ \cH_a $ and $n$ sufficiently large), we come up with \begin{eqnarray*} && \bP \bigl(\llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty}\geq \sqrt{kd}(1+2\delta)E(h) \bigr) \\ &&\qquad \leq\exp \biggl\{- \biggl( \frac {\delta^2}{6}\wedge\frac{ (\delta\wedge1)\delta}{21} \biggr) (\log n)^2 \biggr \}. \end{eqnarray*} This gives us the first part of the majorant of Lemma \ref{lemmamajkmeans}. The last\vspace*{1pt} step is to show a similar bound for the auxiliary empirical process $\llvert \DCn_{h,\eta}-\bE\DCn_{h,\eta}\rrvert _{2,\infty}$. This can be easily done by using Lemma \ref{lemmatal} together with the previous results. 
Then we have for any $h,\eta\in\cH_a$, \begin{eqnarray*} && \bP \bigl(\llvert \DCn_{h,\eta}-\bE\DCn_{h,\eta}\rrvert _{2,\infty}\geq \sqrt{kd}(1+2\delta)E(h\vee\eta) \bigr) \\ &&\qquad \leq\exp \biggl\{- \biggl(\frac{\delta^2}{6}\wedge\frac{ (\delta\wedge1)\delta}{21} \biggr) (\log n)^2 \biggr\}, \end{eqnarray*} where with a slight abuse of notation, the maximum $\vee$ is understood coordinatewise. Using the union bound, the definition of $\mathcal{M}^{\mathrm {k}}_l(\cdot,\cdot)$ allows us to write \begin{eqnarray*} && \bP \Bigl(\sup_{h,\eta} \bigl\{\llvert \DCn_{h,\eta}-\bE \DCn_{h,\eta }\rrvert _{2,\infty}+\llvert \DCn_h-\bE \DCn_h\rrvert _{ 2,\infty} -\mathcal{M}^{\mathrm{k}}_l(h, \eta) \bigr\}> 0 \Bigr) \\ &&\qquad \leq (\operatorname{card}\cH_a )^2 \sup_{h,\eta} \bP \bigl(\llvert \DCn_{h,\eta}-\bE\DCn_{h,\eta }\rrvert _{2,\infty} \\ &&\hspace*{107pt}{}+\llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty} -\mathcal{M}^{\mathrm{k}}_l(h,\eta)> 0 \bigr) \\ &&\qquad \leq (\operatorname{card}\cH_a )^2 \sup_{h,\eta} \bigl\{\bP \bigl(\llvert \DCn_h-\bE\DCn_h\rrvert _{2,\infty}- \sqrt{kd}(1+2\delta)E(h)>0 \bigr) \\ &&\hspace*{98pt}{}+ \bP \bigl(\llvert \DCn_{h,\eta}-\bE \DCn_{h,\eta}\rrvert _{2,\infty} \\ &&\hspace*{185pt}{}- \sqrt{kd}(1+2\delta)E(h\vee\eta)>0 \bigr) \bigr\} \\ &&\qquad \leq2 (\operatorname{card}\cH_a )^2\exp \biggl(- \frac{\delta ^2}{6}\wedge\frac{(\delta\wedge 1)\delta}{21}(\log n)^2 \biggr)\leq n^{-l}, \end{eqnarray*} where we choose $b'_1=b_5(1+2\delta)$ with $\delta:=\delta(l)=1\vee (21(l+2)/(\log n))$. \end{pf*} \begin{pf*}{Proof of Theorem \protect\ref{thmkmeans}} The proof of Theorem \ref{thmkmeans} is a direct application of Theorem \ref{thmainresult} and Lemma \ref{lemmamajkmeans}. 
Indeed, for any $l\in\bN ^\star$, for $n$ large enough, we have with probability $1-n^{-l}$, \[ \bigl\llvert \DC\bigl(\bcn_{\widehat h},\bc^\star\bigr)\bigr\rrvert _2\leq 3\inf_{h\in\cH_a} \bigl\{\bias(h)+ \cM^{\mathrm{k},\infty }_l(h) \bigr\}, \] where $\bias(h)$ is defined as \[ \bias(h):=\max \Bigl(\llvert \bE\DCn_{h}-\DC\rrvert _{2,\infty}, \sup_{\eta }\llvert \bE\DCn_{h,\eta}-\bE \DCn_\eta\rrvert _{2, \infty} \Bigr)\qquad \forall h\in \cH_a. \] The control of the bias function is as follows: \begin{eqnarray*} && \llvert \bE\DCn_{h,\eta}-\bE\DCn_\eta\rrvert _{2,\infty}^2 \\ &&\qquad = \sup _{\bc\in[0,1]^{dk}}\sum_{i,j} \biggl\{ \int _{V_j}2\bigl(x^i-c_j^i \bigr) \bigl(\bE_{P^Z} \widetilde K_{h,\eta}(Z-x)- \bE_{P^Z}\widetilde K_\eta(Z-x) \bigr)\,dx \biggr \}^2 \\ && \qquad =\sup_{\bc\in[0,1]^{dk}}\sum _{i,j} \biggl\{\int_{V_j}2 \bigl(x^i-c_j^i\bigr) \bigl( \bE_{P^X} K_{h,\eta}(X-x)-\bE_{P^X} K_\eta(X-x) \bigr)\,dx \biggr\}^2 \\ &&\qquad \leq 4\sup_{\bc\in[0,1]^{dk}}\sum_{i,j}\int _{V_j}\bigl(x^i-c_j^i \bigr)^2\,dx\bigl\llvert K_{\eta}*(K_h*f-f)\bigr \rrvert _2^2 \\ &&\qquad \leq 4k\bigl\llvert \mathcal{F}[K]\bigr\rrvert _{\infty}\llvert f_h-f\rrvert _2^2, \end{eqnarray*} where $\llvert f_h-f\rrvert _2:=\llvert K_h*f-f\rrvert _2$ is the usual nonparametric bias term in deconvolution estimation. Besides, note that \begin{eqnarray*} && \llvert \bE\DCn_{h}-\DC\rrvert _{2,\infty}^2 \\ &&\qquad =\sup _{\bc\in[0,1]^{dk}}\sum_{i,j} \biggl\{\int _{V_j}2\bigl(x^i-c_j^i \bigr) \bigl(\bE_{P^X} K_{h}(X-x)-f(x) \bigr)\,dx \biggr \}^2 \\ &&\qquad \leq 4\sup_{\bc\in[0,1]^{dk}}\sum_{i,j}\int _{V_j}\bigl(x^i-c_j^i \bigr)^2\,dx\llvert K_{h}*f-f\rrvert _2^2. \end{eqnarray*} Then we need a control of the bias function, \[ B^{\mathrm{k}}(h):=2\sqrt{k} \bigl(1\vee\bigl\llvert \cF[K]\bigr\rrvert _{\infty} \bigr)\llvert K_h*f-f\rrvert _2\qquad \forall h\in\cH. 
\] By using Comte and Lacour \cite{ComteLacour13}, Proposition~3, we directly have for all $f\in\mathcal{N}_{2,d}(s,L)$, \begin{equation} \label{bcontrolkmeans} B^{\mathrm{k}}(h)\leq2\sqrt{k} \bigl(1\vee\bigl\llvert \cF[K] \bigr\rrvert _{\infty} \bigr)L\sum_{j=1}^dh_j^{s_j}\qquad \forall h\in\cH. \end{equation} Now, we have to use a result such as Lemma \ref{dclustering} for our family of estimators $\{\bcn_{h}, h\in\mathcal{H}_a\}$. In other words, we need to check that this family of estimators is consistent with respect to the Euclidean norm in $\mathbb{R}^{dk}$. \begin{lemma} \label{consistencykmeans} Assume $f$ is continuous, $X\in[0,1]^d$ a.s. and the Hessian matrix of $\RC$ is positive definite on $\cM$. Consider the family $\{\bcn _{h},h\in\cH_a\}$ with $\cH_a$ defined in Lemma \ref{lemmamajkmeans}. Then, for any $\delta>0$, for any $l\in\mathbb{N}^\star $, for any $h\in\cH_a$, there exists $\bc^\star\in\mathcal {M}$ such that for $n$ large enough, with probability $1-n^{-l}$, \begin{eqnarray*} \bigl\llvert \bcn_h-\bc^\star\bigr\rrvert _2 \leq\delta. \end{eqnarray*} \end{lemma} \begin{pf} Using \cite{gg}, the positive definiteness of the Hessian matrix on $\mathcal{M}$ and the continuity of $f$, we have, for any $h\in\cH_a$, for some constant $A_1>0$, $\llvert \bcn_h-\bc^\star\rrvert _2\leq A_1(\RC(\bcn_h)-\RC (\bc^\star))$, where $\bc^\star\in\arg\min_{\bc\in\cM}\llvert \bcn _h-\bc\rrvert _2$. It remains to show that by definition of $\cH_a$ in Lemma \ref{lemmamajkmeans}, with high probability, $\RC(\bcn_h)-\RC(\bc ^\star)\to0$ as $n$ tends to infinity. This can be seen easily from Chichignoud and Loustau \cite{ChichignoudLoustau13}, which gives the order of the bias term and the variance term for such a problem. At this stage, we can notice that localization is used in \cite {ChichignoudLoustau13}, and might appear to be necessary here.
However, using a global approach (i.e., applying a simple Hoeffding inequality to the family of kernel ERM), we can have, for any $l\in\mathbb{N}^\star$, with probability $1-n^{-l}$, \[ \RC(\widehat\bc_h)-\RC\bigl(\bc^\star\bigr)\lesssim \frac{\Pi_h(-\beta )}{\sqrt{n}}+\sum_{j=1}^dh_j^{s_j}\qquad \forall h\in\cH_a. \] By definition of $\cH_a$, the RHS tends to zero as $n\to\infty$, and then for $n$ large enough, this term is controlled by $\delta$. \end{pf} Then, for any $h\in\cH_a$ and $n$ large enough, Lemma \ref {dclustering} allows us to write, with probability $1-n^{-l}$, \[ \sqrt{\RC(\bcn_h)-\RC\bigl(\bc^\star\bigr)}\leq 2 \frac{\sqrt{kd}}{\lambda_{\min}}\bigl\llvert \nabla\RC\bigl(\bcn_h, \bc^\star\bigr)\bigr\rrvert _2. \] Using Theorem \ref{thmainresult} with $ l=q $, bias control (\ref {bcontrolkmeans}) and the last inequality, there exists an absolute constant $ b_8>0 $ such that \[ \sup_{f\in\mathcal{N}_{2}(s,L)}\bE \bigl[\RC(\bcn_{\widehat h})-\RC\bigl( \bc^\star\bigr) \bigr]\leq b_8 \inf_{h\in\cH_a} \Biggl\{\sum_{j=1}^dh_j^{ s_j}+ \frac{\Pi_h(-\beta )}{\sqrt{n}} \Biggr\} ^2+b_8n^{-q}. \] Let $ h^\star$ denote the oracle bandwidth, $ h^\star:=\arg\inf_{h\in\cH} \{\sum_{j=1}^dh_j^{ s_j}+\frac {\Pi_h(-\beta)}{\sqrt{n}} \}$, and define the oracle bandwidth $ h^\star_a $ on the net $\cH_a$ such that $ a h^\star_{a,j}\leq h^\star_{j}\leq h^\star_{a,j} $, for all $ j=1,\dots,d$. Finally, we have \[ \sup_{f\in\mathcal{N}_{2}(s,L)}\bE \bigl[\RC(\bcn_{\widehat h})-\RC\bigl( \bc^\star\bigr) \bigr]\leq b_8 a^{-qd/2}\inf _{h\in\cH} \Biggl\{\sum_{j=1}^dh_j^{ s_j}+ \frac{\Pi _h(-\beta)}{\sqrt{n}} \Biggr\}^2+b_8n^{-q}. \] By a standard bias--variance trade-off, we obtain the assertion of the theorem, provided that $q\geq1$.
\end{pf*} \subsection{Proofs of Section~\texorpdfstring{\protect\ref{sectionlocalglobal}}{4}} \mbox{} \begin{pf*}{Proof of Lemma \protect\ref{lemlocalmargincondition}} By definition, we first note that \[ \bigl\llvert {G^{\mathrm{loc}} \bigl(\estim_{h}(x_0) \bigr) -G^{\mathrm{loc}} \bigl(f^\star(x_0) \bigr)}\bigr \rrvert =\bigl\llvert \bE\rho_\gamma ' \bigl(\xi _1+f^\star(x_0)-\estim_{ h}(x_0) \bigr)-\bE\rho_\gamma' (\xi_1 )\bigr\rrvert. \] Using the mean value theorem and the assumption $ \sup_{h\in\cH}\llvert \estim_{h}(x_0)-f^\star(x_0)\rrvert \leq\bE\rho_\gamma ''(\xi)/4 $, there exists $ c \in[-\bE\rho_\gamma''(\xi_1)/4,\bE\rho _\gamma''(\xi_1)/4]$ such that \[ \bigl\llvert {G^{\mathrm{loc}} \bigl(\estim_{h}(x_0) \bigr) -G^{\mathrm{loc}} \bigl(f^\star(x_0) \bigr)}\bigr \rrvert ={\bE\rho_\gamma ''(\xi _1+c)}\bigl\llvert f^\star(x_0)- \estim_{h}(x_0)\bigr\rrvert. \] Since $ \bE\rho_\gamma''(\xi_1+\cdot) $ is a 2-Lipschitz function, it yields \[ \bigl\llvert {G^{\mathrm{loc}} \bigl(\estim_{h}(x_0) \bigr) -G^{\mathrm{loc}} \bigl(f^\star(x_0) \bigr)}\bigr \rrvert \geq \frac{\bE\rho_\gamma''(\xi_1)}{2}\bigl\llvert f^\star(x_0)- \estim_{h}(x_0)\bigr\rrvert. \] The proof is complete. \end{pf*} \begin{pf*}{Proof of Theorem \protect\ref{thholderadapation}} From \cite{ChichignoudLederer13}, Theorem 1, we notice that all estimators $ \{\estim_h(x_0),h\in\cH\} $ are consistent, and thus, for $n$ sufficiently large, the assumption of Lemma \ref{lemlocalmargincondition} holds for all $ x_0\in\cT$. 
Using Theorem \ref{thmainresult} with $ l>0 $ and Lemma \ref{lemlocalmargincondition}, we get \[ \bigl\llvert \estim_{\widehat h^\mathrm{loc}}(x_0)-f^\star(x_0) \bigr\rrvert \leq\frac{6}{\bE\rho_\gamma ''(\xi_1)}\inf_{ h\in\cH_a} \bigl\{ \bias(h)+2\cM_{l}^{\mathrm{loc},\infty}(h) \bigr\}, \] with $ \bias(h)=\max (\llvert \bE\widehat{G}^{\mathrm {loc}}_{h}-G^{\mathrm {loc}}\rrvert _{\infty},\sup_{\eta\in \cH}\llvert \bE\widehat{G}^{\mathrm{loc}}_{h,\eta} -\bE\widehat{G}^{\mathrm{loc}}_\eta\rrvert _ { \infty} )$. The control of $\bias(\cdot)$ over H\"{o}lder classes is based on the same scheme as in \cite{GoldenshlugerLepski08}, applied to the function $ F_t(\cdot):=\bE\rho_\gamma'(f^\star(\cdot)-t+\xi_1)$. For any $ f\in\Sigma({ s},L) $ and any $ h\in\cH$, we then want to show \begin{eqnarray}\label{eqlocalbiascontrol} \bias(h)&\leq&\sup_{t\in[-B,B]}\sup _{y\in\cT}\biggl\llvert \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\biggr\rrvert \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq& L \llvert K\rrvert _\infty\sum_{j=1}^dh_j^{ s_j}. \end{eqnarray} By definition, we see that $ \llvert \bE\widehat{G}^{\mathrm{loc}}_{h}-G^{\mathrm{loc}}\rrvert _{\infty}=\sup_{t\in[-B,B]}\llvert \bE K_{h}(W-x_0) [F_t(W)-F_t(x_0) ]\rrvert $ and by definition of $ \bE\widehat{G}^{\mathrm{loc}}_{h,\eta} $ and $F_t$, we have \begin{eqnarray*} -\bE\widehat{G}^{\mathrm{loc}}_{h,\eta}(t)&=&\int F_t(x)K_{h,\eta}(x-x_0)\,dx \\ &=&\int F_t(x) \biggl(\int K_{h}(x-y)K_{\eta}(y-x_0)\,dy \biggr)\,dx. \end{eqnarray*} Using Fubini's theorem and the equation $ \int K_{h}(x-y)\,dx=1 $ for all $ y\in\cT$, we get \begin{eqnarray*} -\bE\widehat{G}^{\mathrm{loc}}_{h,\eta}(t)&=&\int K_{\eta}(y-x_0) F_t(y)\,dy \\ &&{} +\int K_{\eta }(y-x_0) \biggl( \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx \biggr)\,dy \\ &=&\int K_{\eta}(y-x_0) F_t(y)\,dy \\ &&{}+\int K_{\eta}(y-x_0) \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\,dy.
\end{eqnarray*} Then it holds for any $ x_0\in\cT$, \begin{eqnarray*} && \bigl\llvert \bE\widehat{G}^{\mathrm{loc}}_{h,\eta}(t)-\bE\widehat {G}^{\mathrm{loc}}_{\eta}(t)\bigr\rrvert \\ &&\qquad =\biggl\llvert \int K_{\eta}(y-x_0) \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\,dy\biggr\rrvert \\ &&\qquad \leq \bigl\llVert K_{\eta}(\cdot-x_0)\bigr\rrVert _1\sup_{y\in\cT}\biggl\llvert \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx \biggr\rrvert \\ &&\qquad =\sup_{y\in\cT}\biggl\llvert \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\biggr\rrvert. \end{eqnarray*} This proves the first inequality in (\ref{eqlocalbiascontrol}). Using the smoothness of $ \rho_\gamma' $ (which is 1-Lipschitz), we have for all $ f\in\Sigma( s,L) $, \begin{eqnarray*} && \biggl\llvert \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\biggr\rrvert \\ &&\qquad =\biggl\llvert \int K_{h}(x-y)\bE \bigl[ \rho_\gamma' \bigl(f(x)-t+\xi_1 \bigr)- \rho_\gamma' \bigl(f(y)-t+\xi _1 \bigr) \bigr] \,dx\biggr\rrvert \\ &&\qquad \leq\int \bigl\llvert K_h(x-y)\bigr\rrvert \bigl\llvert f(x)-f(y)\bigr\rrvert \,dx \\ &&\qquad \leq L\llVert K\rrVert _\infty\sum_{j=1}^dh_j^{ s_j}. \end{eqnarray*} Therefore, (\ref{eqlocalbiascontrol}) holds. Then, using Theorem \ref{thmainresult} with $ l=q $, Lemma \ref{lemlocalmargincondition} and (\ref{eqlocalbiascontrol}), there exists an absolute constant $ T_1>0 $ such that \[ \sup_{f\in\Sigma({ s},L)}\bE\bigl\llvert \estim_{\widehat h}(x_0)-f(x_0) \bigr\rrvert ^q\leq T_1 \inf_{h\in\cH_a} \Biggl\{\sum_{j=1}^dh_j^{ s_j}+ \sqrt{\frac{\log (n)}{n\Pi_h}} \Biggr\} ^q+T_1n^{-q}. \] Let $ h^\star$ denote the oracle bandwidth $ h^\star:=\arg\inf_{h\in\cH} \{\sum_{j=1}^dh_j^{ s_j}+\sqrt {\frac{\log(n)}{n\Pi_h}} \}$, and define $ h^\star_a $ such that $ a h^\star_{a,j}\leq h^\star_{j}\leq h^\star_{a,j} $, for all $ j=1,\dots,d$.
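To make the final optimization explicit, the following sketch (our computation, not part of the original argument) records the bandwidth that balances the two terms in the infimum, where the effective smoothness $\bar s$ is defined by $1/\bar s=\sum_{j=1}^{d}1/s_j$:

```latex
% Sketch (our computation): the choice
%   h^\star_j = (\log(n)/n)^{\bar s/(s_j(2\bar s+1))}
% balances bias and stochastic terms, since
\[
(h^\star_j)^{s_j}=\biggl(\frac{\log(n)}{n}\biggr)^{\frac{\bar s}{2\bar s+1}},
\qquad
\Pi_{h^\star}=\prod_{j=1}^d h^\star_j
=\biggl(\frac{\log(n)}{n}\biggr)^{\frac{1}{2\bar s+1}},
\qquad
\sqrt{\frac{\log(n)}{n\,\Pi_{h^\star}}}
=\biggl(\frac{\log(n)}{n}\biggr)^{\frac{\bar s}{2\bar s+1}},
\]
% so the infimum is of order (\log(n)/n)^{\bar s/(2\bar s+1)},
% and the q-th moment of order (\log(n)/n)^{q\bar s/(2\bar s+1)}.
```

This is the usual anisotropic adaptive rate; it is only a sketch of the trade-off invoked below, with constants omitted.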
Then we get \[ \sup_{f\in\Sigma({ s},L)}\bE\bigl\llvert \estim_{\widehat h}(x_0)-f(x_0) \bigr\rrvert ^q\leq T_1 a^{-qd/2}\inf _{h\in\cH} \Biggl\{\sum_{j=1}^dh_j^{ s_j}+ \sqrt{\frac {\log(n)}{n\Pi_h}} \Biggr\} ^q+T_1n^{-q}. \] By a standard bias variance trade-off, we obtain the assertion of the theorem. \end{pf*} \begin{pf*}{Proof of Theorem \protect\ref{thnikolskiiadapation}} Here again, the assumption of Lemma \ref{lemlocalmargincondition} holds for $n$ sufficiently large for all $ x_0\in\cT$. Using Theorem \ref{thmainresult} with $ l>0 $ and taking the $ \bL _q $-norm, we have \[ \llVert \estim_{\widehat h^{\mathrm{glo}}_q}-f\rrVert _q\leq\frac{6}{\bE\rho_\gamma''(\xi _1)}\inf _{h\in\cH} \bigl\{ B(h)+2\Gamma_{l,q}^{\mathrm{glo}}(h) \bigr\}, \] where $ B(h)=\max (\llVert \bE\widehat{G}^{\mathrm{loc}}_{h}-G^{\mathrm {loc}}\rrVert _{q,\infty},\sup_{\eta\in\cH }\llVert \bE\widehat{G}^{\mathrm{loc}}_{h, \eta} -\bE\widehat{G}^{\mathrm{loc}}_\eta\rrVert _{q, \infty} )$. The control of the bias term follows the scheme of \cite{GoldenshlugerLepski11} for linear estimates. For any $ h\in\cH$, we want to show that \begin{equation} \label{eqglobalbiascontrol} B(h)\leq\sup_{t\in[-B,B]}\biggl\llVert \int K_{h}(x-\cdot) \bigl[F_t(x)-F_t(\cdot) \bigr]\,dx\biggr\rrVert _q \leq L\sum_{j=1}^dh_j^{ s_j}, \end{equation} where we recall $ F_t(x):=\bE\rho_\gamma'(f(x)-t+\xi_1)$. By definition, one has \[ \bigl\llVert \bE\widehat{G}^{\mathrm{loc}}_{h}-G^{\mathrm{loc}}\bigr \rrVert _{q,\infty }=\sup_{t\in[-B,B]}\bigl\llVert \bE K_{h}(W-\cdot) \bigl[F_t(W)-F_t(\cdot) \bigr]\bigr\rrVert _q. \] Moreover, in the proof of Theorem \ref{thholderadapation}, we have shown that for any $ x_0\in\cT$, \begin{eqnarray*} && \bE\widehat{G}^{\mathrm{loc}}_{\eta}(t,x_0)-\bE\widehat {G}^{\mathrm{loc}}_{h,\eta}(t,x_0) \\ &&\qquad =\int K_{\eta }(y-x_0) \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\,dy.
\end{eqnarray*} By Young's inequality and the definition of the kernel in Section~\ref{sectionKERM}, this yields \begin{eqnarray*} && \bigl\llVert \bE\widehat{G}^{\mathrm{loc}}_{\eta}-\bE \widehat{G}^{\mathrm {loc}}_{h,\eta}\bigr\rrVert _{q,\infty} \\ &&\qquad =\sup _{t\in [-B,B]}\biggl\llVert \int K_{\eta}(y-\cdot) \int K_{h}(x-y) \bigl[F_t(x)-F_t(y) \bigr]\,dx\,dy \biggr\rrVert _{q,\infty} \\ &&\qquad \leq\sup_{t\in[-B,B]}\biggl\llVert \int K_{h}(x-\cdot) \bigl\llvert F_t(x)-F_t(\cdot)\bigr\rrvert\, dx\biggr \rrVert _{q,\infty}. \end{eqnarray*} Using the smoothness of $ \rho_\gamma' $ (which is 1-Lipschitz), we have for any $ x,y\in \cT$ and any $ {t\in[-B,B]} $, \begin{eqnarray*} && \bigl\llvert F_t(x)-F_t(y)\bigr\rrvert =\bigl\llvert \bE \bigl[ \rho_\gamma' \bigl(f(x)-t+\xi_1 \bigr)-\rho_\gamma' \bigl(f(y)-t+\xi _1 \bigr) \bigr]\bigr\rrvert \leq\bigl\llvert f(x)-f(y)\bigr \rrvert. \end{eqnarray*} Therefore, (\ref{eqglobalbiascontrol}) holds for all $ f\in\cN _{q,d}( s,L) $. Then, using Theorem \ref{thmainresult} with $ l=q $, Lemma \ref{lemlocalmargincondition} and (\ref{eqglobalbiascontrol}), there exists an absolute constant $ T_2>0 $ such that \begin{eqnarray*} && \sup_{f\in\cN_{q,d}({ s},L)}\bE\llVert \estim_{\widehat h^{\mathrm{glo}}_q}-f\rrVert _q^q \\ &&\qquad \leq T_2\times\cases{\displaystyle \inf _{h\in\cH} \Biggl\{\sum_{j=1}^dh_j^{ s_j}+(n \Pi_h)^{-(q-1)/q} \Biggr\}^q+n^{-q}, &\quad if $q\in[1,2[$, \cr \displaystyle\inf_{h\in\cH} \Biggl\{\sum _{j=1}^dh_j^{ s_j}+(n \Pi_h)^{-1/2} \Biggr\}^q+n^{-q}, &\quad if $q\in[2,\infty[$.} \end{eqnarray*} Computing these infima, we obtain the assertion of the theorem. \end{pf*} \end{appendix}
Lyric: Strong Enough to Stand Still Earlier this month we reported that a resolution was drafted which would allow for a “Save the Lyric” fund to be set up. Tonight, we received word that the remaining section of the Lyric Theater on Park Street is likely to be salvaged. Michael Fuschi, Building Official, shared the correspondence he had with Hallisey Engineering regarding the mortar strength test results. The locations that were tested showed mortar strength to be in the range of high-strength mortar. Unless something new is discovered, it sounds like the building is not in danger of being demolished. Next week Fuschi and Glenn Geathers (Project Manager of the Department of Development Services Economic Development Division) will meet with John Hallisey at the site. Afterwards, they will meet with the ad hoc Lyric subcommittee of the Frog Hollow NRZ. The plan is to stabilize the structure from the third floor down before winter. The roof has already been patched. The project will then be “fast tracked” and put out to bid. June 25, 2010 @ 11:15 am Good to hear.
\begin{document} \title[$q$-concave operators and $q$-concave Banach lattices] {Optimal domain of $q$-concave operators and vector measure representation of $q$-concave Banach lattices} \author[O.\ Delgado]{O.\ Delgado} \address{Departamento de Matem\'atica Aplicada I, E.\ T.\ S.\ de Ingenier\'ia de Edificaci\'on, Universidad de Sevilla, Avenida de Reina Mercedes, 4 A, Sevilla 41012, Spain} \email{\textcolor[rgb]{0.00,0.00,0.84}{olvido@us.es}} \author[E.\ A.\ S\'{a}nchez P\'{e}rez]{E.\ A.\ S\'{a}nchez P\'{e}rez} \address{Instituto Universitario de Matem\'atica Pura y Aplicada, Universitat Polit\`ecnica de Val\`encia, Camino de Vera s/n, 46022 Valencia, Spain.} \email{\textcolor[rgb]{0.00,0.00,0.84}{easancpe@mat.upv.es}} \subjclass[2010]{Primary 47B38, 46G10. Secondary 46E30, 46B42.} \keywords{Banach lattices, $q$-concave operators, quasi-Banach function spaces, vector measures defined on a $\delta$-ring.} \thanks{The first author gratefully acknowledges the support of the Ministerio de Econom\'{\i}a y Competitividad (project \#MTM2012-36732-C03-03) and the Junta de Andaluc\'{\i}a (projects FQM-262 and FQM-7276), Spain.} \thanks{The second author acknowledges with thanks the support of the Ministerio de Econom\'{\i}a y Competitividad (project \#MTM2012-36740-C02-02), Spain.} \date{\today} \maketitle \begin{abstract} Given a Banach space valued $q$-concave linear operator $T$ defined on a $\sigma$-order continuous quasi-Banach function space, we provide a description of the optimal domain of $T$ preserving $q$-concavity, that is, the largest $\sigma$-order continuous quasi-Banach function space to which $T$ can be extended as a $q$-concave operator. We show in this way the existence of maximal extensions for $q$-concave operators. As an application, we show a representation theorem for $q$-concave Banach lattices through spaces of integrable functions with respect to a vector measure. 
This result culminates a series of representation theorems for Banach lattices using vector measures that have been obtained in the last twenty years. \end{abstract} \section{Introduction}\label{SEC: Introduction} Let $X(\mu)$ be a $\sigma$-order continuous quasi-Banach function space related to a positive measure $\mu$ on a measurable space $(\Omega,\Sigma)$ such that there exists $g\in X(\mu)$ with $g>0$ $\mu$-a.e.\ and let $T\colon X(\mu)\to E$ be a continuous linear operator with values in a Banach space $E$. Considering the $\delta$-ring $\Sigma_{X(\mu)}$ of all sets $A\in\Sigma$ such that $\chi_A\in X(\mu)$ and the vector measure $m_T\colon\Sigma_{X(\mu)}\to E$ given by $m_T(A)=T(\chi_A)$, it follows that the space $L^1(m_T)$ of integrable functions with respect to $m_T$ is the optimal domain of $T$ preserving continuity, that is, the largest $\sigma$-order continuous quasi-Banach function space to which $T$ can be extended as a continuous operator still with values in $E$. Moreover, the extension of $T$ to $L^1(m_T)$ is given by the integration operator $I_{m_T}$. This fact was originally proved in \cite[Corollary 3.3]{curbera-ricker1} for Banach function spaces $X(\mu)$ with $\mu$ finite and $\chi_\Omega\in X(\mu)\subset L^1(\mu)$, in which case $\Sigma_{X(\mu)}$ coincides with the $\sigma$-algebra $\Sigma$. The extension for Banach function spaces (without extra assumptions) is deduced from \cite[Proposition 4]{calabuig-delgado-sanchezperez}. The jump to quasi-Banach function spaces appears in \cite[Theorem 4.14]{okada-ricker-sanchezperez} for the case when $\mu$ is finite and $\chi_\Omega\in X(\mu)$ and in \cite{delgado-sanchezperez} for the general case. Some effort has been made in recent years to solve several versions of the following general \emph{problem:} Suppose that the operator $T$ has a property P. Is there an optimal domain for $T$ preserving P? 
that is, is there a function space $Z$ such that $T$ can be extended to $Z$ as an operator with the property P in such a way that $Z$ is the largest space for which this holds? And in this case, what is the relation between $Z$ and $m_T$? The answer to the first question is in general no. For example, in \cite{okada} it is proved that for compactness or weak compactness $T$ has an optimal domain only in the case when $I_{m_T}$ is compact or weakly compact, respectively. Along the same lines, it is shown in \cite{calabuig-jimenezfernadez-juan-sanchezperez} that $T$ has an optimal domain for AM-compactness if and only if $I_{m_T}$ is AM-compact. However, other properties have received positive answers to our problem, see \cite{calabuig-jimenezfernadez-juan-sanchezperez} for narrow operators, \cite{calabuig-delgado-sanchezperez} for order-w continuous or $Y(\eta)$-extensible operators and \cite{delgado2} for positive order continuous operators. Also in \cite{calabuig-jimenezfernadez-juan-sanchezperez} the problem is studied for Dunford-Pettis operators, but although some partial results are shown there, the question of the existence of a maximal extension is still open. In this paper we analyze this problem for the case of $q$-concave operators, obtaining a positive answer. Namely, if $T$ is $q$-concave we show how to compute explicitly the largest quasi-Banach function space to which $T$ can be extended preserving $q$-concavity (Corollary \ref{COR: qConcaveOptimalDomain}). Moreover, we prove that this optimal domain is in fact the $q$-concave core of the space $L^1(m_T)$ and the maximal extension is given by the integration operator $I_{m_T}$. These results are obtained as a particular case of the more general Theorem \ref{THM: (p,q)PowerConcave-Factorization} which gives the optimal domain for a class of operators (called $(p,q)$-power-concave) which contains the $q$-concave operators. 
As an application we obtain an improvement, in some sense, of the Maurey-Rosenthal factorization of $q$-concave operators acting in $q$-convex Banach function spaces (Corollary \ref{COR: MaureyRosenthalFactorization}). The reader can find information about this nowadays classical topic for example in \cite{defant}, \cite{defant-sanchezperez} and the references therein. In the last section we provide a new representation theorem for $q$-concave Banach lattices in terms of a vector measure. This type of representation theorem has its origin in \cite[Theorem 8]{curbera}, where it is proved that every order continuous Banach lattice $F$ with a weak unit is order isometric to a space $L^1(\nu)$ of a vector measure $\nu$ defined on a $\sigma$-algebra. Later in \cite[Proposition 2.4]{fernandez-mayoral-naranjo-saez-sanchezperez} it is shown that if moreover $F$ is $p$-convex then it is order isometric to $L^p(m)$ for another vector measure $m$. Similar results hold for $F$ without a weak unit, but in this case the vector measures used in the representations of $F$ are defined on a $\delta$-ring, see \cite[Theorem 5]{delgado-juan} and \cite[Theorem 10]{calabuig-juan-sanchezperez}. Also there are representation theorems for $F$ replacing $\sigma$-order continuity by the Fatou property, in this case through spaces of weakly integrable functions, see \cite{curbera-ricker2}, \cite{curbera-ricker3}, \cite{delgado-juan} and \cite{juan-sanchezperez}. For $p,q\in[1,\infty)$, in Theorem \ref{THM: BanachLattice-Representation} we obtain that every $q$-concave and $p$-convex Banach lattice is order isometric to a space $L^p(m)$ of a vector measure $m$ defined on a $\delta$-ring whose integration operator $I_m$ is $\frac{q}{p}$-concave. The converse is also true. In particular, every $q$-concave Banach lattice is order isometric to a space $L^1(m)$ of a vector measure $m$ having a $q$-concave integration operator. 
\section{Preliminaries}\label{SEC: Preliminaries} In this section we establish the notation and present the basic results on quasi-Banach function spaces (including the proofs of some of them, for completeness) and on vector measure integration, which will be used throughout the paper. Let $(\Omega,\Sigma)$ be a fixed measurable space. For a measure $\mu\colon\Sigma\to[0,\infty]$, we denote by $L^0(\mu)$ the space of all $\Sigma$--measurable real valued functions on $\Omega$, where functions which are equal $\mu$--a.e.\ are identified. Given two set functions $\mu,\lambda\colon\Sigma\to[0,\infty]$ we will write $\lambda\ll\mu$ if $\mu(A)=0$ implies $\lambda(A)=0$. If $\lambda\ll\mu$ and $\mu\ll\lambda$ we will say that $\mu$ and $\lambda$ are \emph{equivalent}. If $\mu,\lambda\colon\Sigma\to[0,\infty]$ are two measures with $\lambda\ll\mu$, then the map $[i]\colon L^0(\mu)\to L^0(\lambda)$ which takes a $\mu$--a.e.\ class in $L^0(\mu)$ represented by $f$ into the $\lambda$--a.e.\ class represented by the same $f$, is a well defined linear map. In order to simplify notation we will write $[i](f)=f$. Note that if $\lambda$ and $\mu$ are equivalent then $L^0(\mu)=L^0(\lambda)$ and $[i]$ is the identity map $i$. \subsection{Quasi-Banach function spaces} Let $X$ be a real vector space and $\Vert\cdot\Vert_X$ a \emph{quasi-norm} on $X$, that is, a function $\Vert\cdot\Vert_X\colon X\to [0,\infty)$ satisfying the following conditions: \begin{itemize}\setlength{\leftskip}{-2.5ex}\setlength{\itemsep}{1ex} \item[(i)] $\Vert x\Vert_X=0$ if and only if $x=0$, \item[(ii)] $\Vert \alpha x\Vert_X=\vert\alpha\vert\cdot\Vert x\Vert_X$ for all $\alpha\in\mathbb{R}$ and $x\in X$, and \item[(iii)] there is a constant $K\ge1$ such that $\Vert x+y\Vert_X\le K(\Vert x\Vert_X+\Vert y\Vert_X)$ for all $x,y\in X$. 
\end{itemize} For $0<r\le1$ such that $K=2^{\frac{1}{r}-1}$, it follows that \begin{equation}\label{EQ: r-sum} \Big\Vert\sum_{j=1}^nx_j\Big\Vert_X\le 4^{\frac{1}{r}}\Big(\sum_{j=1}^n\Vert x_j\Vert_X^r\Big)^{\frac{1}{r}} \end{equation} for every finite subset $(x_j)_{j=1}^n\subset X$, see \cite[Lemma 1.1]{kalton-peck-roberts}. The quasi-norm $\Vert\cdot\Vert_X$ induces a metrizable vector topology on $X$ where a base of neighborhoods of $0$ is given by sets of the form $\{x\in X:\, \Vert x\Vert_X\le \frac{1}{n}\}$. So, a sequence $(x_n)$ converges to $x$ in $X$ if and only if $\Vert x-x_n\Vert_X\to0$. If this topology is complete then $X$ is said to be a \emph{quasi-Banach space} (\emph{Banach space} if $K=1$). With the inequality \eqref{EQ: r-sum} in mind, standard arguments show the next result. \begin{proposition}\label{PROP: quasi-normCompleteness} The following statements are equivalent: \begin{itemize}\setlength{\leftskip}{-2.5ex}\setlength{\itemsep}{1ex} \item[(a)] $X$ is complete. \item[(b)] For every $0<r'\le r$ ($r$ as in \eqref{EQ: r-sum}) it follows that if $(x_n)\subset X$ is such that $\sum\Vert x_n\Vert_X^{r'}<\infty$ then $\sum x_n$ converges in $X$. \item[(c)] There exists $r'>0$ satisfying that if $(x_n)\subset X$ is such that $\sum\Vert x_n\Vert_X^{r'}<\infty$ then $\sum x_n$ converges in $X$. \end{itemize} \end{proposition} Note that if a series $\sum x_n$ converges in $X$ then \begin{equation}\label{EQ: norm-sum} \Big\Vert\sum x_n\Big\Vert_X\le 4^{\frac{1}{r}}K\Big(\sum\Vert x_n\Vert_X^r\Big)^{\frac{1}{r}}, \end{equation} where $r$ is as in \eqref{EQ: r-sum}. By using the map $|||\cdot|||$ given in \cite[Theorem 1.2]{kalton-peck-roberts}, it is routine to check that if $x_n\to x$ in $X$ then \begin{equation}\label{EQ: Limit-quasi-norm} 4^{-\frac{1}{r}}\limsup\Vert x_n\Vert_X\le\Vert x\Vert_X\le4^{\frac{1}{r}}\liminf\Vert x_n\Vert_X. 
\end{equation} Also note that a linear map $T\colon X\to Y$ between quasi-Banach spaces is continuous if and only if there exists a constant $M>0$ such that $\Vert Tx\Vert_Y\le M\Vert x\Vert_X$ for all $x\in X$, see \cite[p.\,8]{kalton-peck-roberts}. By a \emph{quasi-Banach function space} (briefly, quasi-B.f.s.)\ we mean a quasi-Banach space $X(\mu)\subset L^0(\mu)$ satisfying that if $f\in X(\mu)$ and $g\in L^0(\mu)$ with $|g|\le|f|$ $\mu$--a.e.\ then $g\in X(\mu)$ and $\Vert g\Vert_{X(\mu)}\le\Vert f\Vert_{X(\mu)}$. If $X(\mu)$ is a Banach space we will refer to it as a \emph{Banach function space} (briefly, B.f.s.). In particular, a quasi-B.f.s.\ is a quasi-Banach lattice for the $\mu$-a.e.\ pointwise order, in which the convergence in quasi-norm of a sequence implies the convergence $\mu$-a.e.\ for some subsequence. Let us prove this important fact. \begin{proposition}\label{PROP: mu-a.e.ConvergenceSubsequence} If $f_n\to f$ in a quasi-B.f.s.\ $X(\mu)$, then there exists a subsequence $f_{n_j}\to f$ $\mu$--a.e. \end{proposition} \begin{proof} Let $r$ be as in \eqref{EQ: r-sum}. We can take a strictly increasing sequence $(n_j)_{j\ge1}$ such that $\Vert f-f_{n_j}\Vert_{X(\mu)}\le\frac{1}{2^j}$. For every $m\ge1$, since $$ \sum_{j\ge m}\Vert f-f_{n_j}\Vert_{X(\mu)}^r\le \sum_{j\ge m}\frac{1}{2^{jr}}<\infty, $$ by Proposition \ref{PROP: quasi-normCompleteness} and \eqref{EQ: norm-sum}, it follows that $g_m=\sum_{j\ge m}|f-f_{n_j}|$ converges in $X(\mu)$ and $\Vert g_m\Vert_{X(\mu)}\le 4^{\frac{1}{r}}K(\sum_{j\ge m}\frac{1}{2^{jr}})^{\frac{1}{r}}$. Fix $N\ge 1$ and let $A_j^N=\{\omega\in\Omega: \vert f(\omega)-f_{n_j}(\omega)\vert>\frac{1}{N}\}$. Since $$ \chi_{\cap_{m\ge1}\cup_{j\ge m}A_j^N}\le\chi_{\cup_{j\ge m}A_j^N}\le\sum_{j\ge m}\chi_{A_j^N}\le N\sum_{j\ge m}\vert f-f_{n_j}\vert=Ng_m, $$ then $$ \Vert\chi_{\cap_{m\ge1}\cup_{j\ge m}A_j^N}\Vert_{X(\mu)}\le N\Vert g_m\Vert_{X(\mu)}\le4^{\frac{1}{r}}NK\Big(\sum_{j\ge m}\frac{1}{2^{jr}}\Big)^{\frac{1}{r}}. 
$$ Taking $m\to\infty$ we have that $\Vert\chi_{\cap_{m\ge1}\cup_{j\ge m}A_j^N}\Vert_{X(\mu)}=0$ and so $\mu(\cap_{m\ge1}\cup_{j\ge m}A_j^N)=0$. Then $\mu(\cup_{N\ge1}\cap_{m\ge1}\cup_{j\ge m}A_j^N)=0$, from which $f_{n_j}\to f$ $\mu$-a.e. \end{proof} A quasi-B.f.s.\ $X(\mu)$ is \emph{$\sigma$-order continuous} if for every $(f_n)\subset X(\mu)$ with $f_n\downarrow0$ $\mu$-a.e.\ it follows that $\Vert f_n\Vert_X\downarrow0$. It has the \emph{$\sigma$-Fatou property} if for every sequence $(f_n)\subset X$ such that $0\le f_n\uparrow f$ $\mu$-a.e.\ and $\sup_n\Vert f_n\Vert_X<\infty$ we have that $f\in X$ and $\Vert f_n\Vert_X\uparrow\Vert f\Vert_X$. A similar argument to that given in \cite[p.\,2]{lindenstrauss-tzafriri} for Banach lattices shows that every positive linear operator between quasi-Banach lattices is automatically continuous. In particular, all inclusions between quasi-B.f.s.\ are continuous. The intersection $X(\mu)\cap Y(\mu)$ and the sum $X(\mu)+Y(\mu)$ of two quasi-B.f.s.'\ (B.f.s.')\ $X(\mu)$ and $Y(\mu)$ are quasi-B.f.s.'\ (B.f.s.')\ endowed respectively with the quasi-norms (norms) $$ \Vert f\Vert_{X(\mu)\cap Y(\mu)}=\max\big\{\Vert f\Vert_{X(\mu)},\Vert f\Vert_{Y(\mu)}\big\} $$ and $$ \Vert f\Vert_{X(\mu)+Y(\mu)}=\inf\big(\Vert f_1\Vert_{X(\mu)}+\Vert f_2\Vert_{Y(\mu)}\big), $$ where the infimum is taken over all possible representations $f=f_1+f_2$ $\mu$-a.e.\ with $f_1\in X(\mu)$ and $f_2\in Y(\mu)$. The $\sigma$-order continuity is also preserved by these operations: if $X(\mu)$ and $Y(\mu)$ are $\sigma$-order continuous then $X(\mu)\cap Y(\mu)$ and $X(\mu)+Y(\mu)$ are $\sigma$-order continuous. Detailed proofs of these facts can be found in \cite{delgado-sanchezperez}, see also \cite[\S\,3, Theorem 1.3]{bennett-sharpley} for the standard parts. Let $p\in(0,\infty)$. The \emph{$p$-power} of a quasi-B.f.s.\ $X(\mu)$ is the quasi-B.f.s. 
$$ X(\mu)^p=\big\{f\in L^0(\mu):\, |f|^p\in X(\mu)\big\} $$ endowed with the quasi-norm $$ \Vert f\Vert_{X(\mu)^p}=\Vert\,|f|^p\,\Vert_{X(\mu)}^{\frac{1}{p}}. $$ The reader can find a complete explanation of the space $X(\mu)^p$ for instance in \cite[\S\,2.2]{okada-ricker-sanchezperez} for the case when $\mu$ is finite and $\chi_\Omega\in X(\mu)$. The proofs given there, with the natural modifications, work in our general case. However, note that the notation is different: our $p$-powers here are the $\frac{1}{p}$-th powers there. This standard space can be found in different sources; unfortunately, the notation is not exactly the same in all of them. The following remark collects some results on the space $X(\mu)^p$ which will be used in the next sections. First, recall that a quasi-B.f.s.\ $X(\mu)$ is \emph{$p$-convex} if there exists a constant $C>0$ such that $$ \Big\Vert\Big(\sum_{j=1}^n|f_j|^p\Big)^{\frac{1}{p}}\Big\Vert_{X(\mu)}\le C\,\Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)}^p\Big)^{\frac{1}{p}} $$ for every finite subset $(f_j)_{j=1}^n\subset X(\mu)$. The smallest constant satisfying the previous inequality is called the \emph{$p$-convexity constant} of $X(\mu)$ and is denoted by $M^{(p)}[X(\mu)]$. \begin{remark}\label{REM: XpResults} Let $X(\mu)$ be a quasi-B.f.s. The following statements hold: \begin{itemize}\setlength{\leftskip}{-2.5ex}\setlength{\itemsep}{1ex} \item[(a)] $X(\mu)^p$ is $\sigma$-order continuous if and only if $X(\mu)$ is $\sigma$-order continuous. \item[(b)] If $\chi_\Omega\in X(\mu)$ and $0<p\le q<\infty$ then $X(\mu)^q\subset X(\mu)^p$. \item[(c)] If $X(\mu)$ is a B.f.s.\ then $X(\mu)^p$ is $p$-convex. \item[(d)] If $X(\mu)$ is a B.f.s.\ and $p\ge1$ then $\Vert \cdot\Vert_{X(\mu)^p}$ is a norm and so $X(\mu)^p$ is a B.f.s. \item[(e)] If $X(\mu)$ is $\frac{1}{p}$-convex with $M^{(\frac{1}{p})}[X(\mu)]=1$ then $\Vert \cdot\Vert_{X(\mu)^p}$ is a norm and so $X(\mu)^p$ is a B.f.s. 
\end{itemize} \end{remark} Let $T\colon X(\mu)\to E$ be a linear operator defined on a quasi-B.f.s.\ $X(\mu)$ and with values in a quasi-Banach space $E$. For $q\in(0,\infty)$, the operator $T$ is said to be \emph{$q$-concave} if there exists a constant $C>0$ such that $$ \Big(\sum_{j=1}^n\Vert T(f_j)\Vert_E^q\Big)^{\frac{1}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{X(\mu)} $$ for every finite subset $(f_j)_{j=1}^n\subset X(\mu)$. A quasi-B.f.s.\ $X(\mu)$ is \emph{$q$-concave} if the identity map $i\colon X(\mu)\to X(\mu)$ is $q$-concave. Note that if $T$ is $q$-concave then it is $p$-concave for all $p>q$. A proof of this fact can be found in \cite[Proposition 2.54.(iv)]{okada-ricker-sanchezperez} for the case when $\mu$ is finite and $\chi_\Omega\in X(\mu)$. An adaptation of this proof to our context works. \begin{proposition}\label{PROP: q-concaveImpliesSigmaoc} If $X(\mu)$ is a $q$-concave quasi-B.f.s.\ then it is $\sigma$-order continuous. \end{proposition} \begin{proof} Since $q$-concavity implies $p$-concavity for every $q<p$, we only have to consider the case $q\ge1$. Denote by $C$ the $q$-concavity constant of $X(\mu)$ and consider $(f_n)\subset X(\mu)$ such that $f_n\downarrow0$ $\mu$-a.e. For every strictly increasing subsequence $(n_k)$ we have that \begin{eqnarray*} \Big(\sum_{k=1}^m\Vert f_{n_k}-f_{n_{k+1}}\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} & \le & C\,\Big\Vert\Big(\sum_{k=1}^m|f_{n_k}-f_{n_{k+1}}|^q\Big)^{\frac{1}{q}}\Big\Vert_{X(\mu)} \\ & \le & C\,\Big\Vert\sum_{k=1}^m|f_{n_k}-f_{n_{k+1}}|\,\Big\Vert_{X(\mu)} \\ & = & C\,\Vert f_{n_1}-f_{n_{m+1}}\Vert_{X(\mu)} \\ & \le & C\,\Vert f_{n_1}\Vert_{X(\mu)} \end{eqnarray*} for all $m\ge1$. Then, $(f_n)$ is a Cauchy sequence in $X(\mu)$, since otherwise we could find $\delta>0$ and two subsequences $(n_k)$, $(m_k)$ such that $n_k<m_k<n_{k+1}<m_{k+1}$ and $\delta<\Vert f_{n_k}-f_{m_k}\Vert_{X(\mu)}\le\Vert f_{n_k}-f_{n_{k+1}}\Vert_{X(\mu)}$ for all $k$, which is a contradiction. 
Let $h\in X(\mu)$ be such that $f_n\to h$ in $X(\mu)$. From Proposition \ref{PROP: mu-a.e.ConvergenceSubsequence}, there exists a subsequence $f_{n_j}\to h$ $\mu$--a.e.\ and so $h=0$ $\mu$-a.e. Hence, $\Vert f_n\Vert_{X(\mu)}\downarrow0$. \end{proof} \begin{lemma}\label{LEM: q-concaveOperatorOnX+Y} Let $X(\mu)$ and $Y(\mu)$ be two quasi-B.f.s.'\ and consider a linear operator $T\colon X(\mu)+Y(\mu)\to E$ with values in a quasi-Banach space $E$. The operator $T$ is $q$-concave if and only if both $T\colon X(\mu)\to E$ and $T\colon Y(\mu)\to E$ are $q$-concave. \end{lemma} \begin{proof} If $T\colon X(\mu)+Y(\mu)\to E$ is $q$-concave, since $X(\mu)\subset X(\mu)+Y(\mu)$ continuously, it follows that $T\colon X(\mu)\to E$ is $q$-concave. Similarly, $T\colon Y(\mu)\to E$ is $q$-concave. Suppose that $T\colon X(\mu)\to E$ and $T\colon Y(\mu)\to E$ are $q$-concave and denote by $C_X$ and $C_Y$ their respective $q$-concavity constants. Write $K$ for the constant satisfying the property (iii) of the quasi-norm $\Vert\cdot\Vert_E$. We will use the inequality: \begin{equation}\label{EQ: t-inequality} (a+b)^t\le \max\{1,2^{t-1}\}(a^t+b^t) \end{equation} where $0\le a,b<\infty$ and $0<t<\infty$. Let $(f_j)_{j=1}^n\subset X(\mu)+Y(\mu)$. For $h=\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}\in X(\mu)+Y(\mu)$, consider $h_1\in X(\mu)$ and $h_2\in Y(\mu)$ such that $h=h_1+h_2$ $\mu$-a.e. Taking the set $A=\big\{\omega\in\Omega:\, h(\omega)\le 2|h_1(\omega)|\big\}$, $\alpha_q=\max\{1,2^{q-1}\}$ and using \eqref{EQ: t-inequality}, we have that \begin{eqnarray*} \sum_{j=1}^n\Vert T(f_j)\Vert_E^q & \le & K^q\, \sum_{j=1}^n\Big(\Vert T(f_j\chi_A)\Vert_E+\Vert T(f_j\chi_{\Omega\backslash A})\Vert_E\Big)^q \\ & \le & K^q\,\alpha_q\,\left(\sum_{j=1}^n\Vert T(f_j\chi_A)\Vert_E^q+\sum_{j=1}^n\Vert T(f_j\chi_{\Omega\backslash A})\Vert_E^q\right). \end{eqnarray*} Note that $(f_j\chi_A)_{j=1}^n\subset X(\mu)$ as $|f_j|\chi_A\le h\chi_A\le 2|h_1|$ for all $j$. 
Then, \begin{eqnarray*} \sum_{j=1}^n\Vert T(f_j\chi_A)\Vert_E^q & \le & C_X^{\,q}\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\chi_A\Big\Vert_{X(\mu)}^q \\ & = & C_X^{\,q}\,\Vert h\chi_A\Vert_{X(\mu)}^q\le 2^qC_X^{\,q}\,\Vert h_1\Vert_{X(\mu)}^q. \end{eqnarray*} Similarly, $(f_j\chi_{\Omega\backslash A})_{j=1}^n\subset Y(\mu)$ as $|f_j|\chi_{\Omega\backslash A}\le h\chi_{\Omega\backslash A}\le 2|h_2|$ $\mu$-a.e.\ for all $j$ and so $$ \sum_{j=1}^n\Vert T(f_j\chi_{\Omega\backslash A})\Vert_E^q \le 2^qC_Y^{\,q}\,\Vert h_2\Vert_{Y(\mu)}^q. $$ Denoting $C=\max\{C_X,C_Y\}$ and using again \eqref{EQ: t-inequality}, it follows that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert T(f_j)\Vert_E^q\Big)^{\frac{1}{q}} & \le & 2KC\alpha_q^{\frac{1}{q}}\,\big(\Vert h_1\Vert_{X(\mu)}^q+\Vert h_2\Vert_{Y(\mu)}^q\big)^{\frac{1}{q}} \\ & \le & 2^{1+|1-\frac{1}{q}|}KC\,\big(\Vert h_1\Vert_{X(\mu)}+\Vert h_2\Vert_{Y(\mu)}\big). \end{eqnarray*} Taking infimum over all representations $\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}=h_1+h_2$ $\mu$-a.e.\ with $h_1\in X(\mu)$ and $h_2\in Y(\mu)$, we have that $$ \Big(\sum_{j=1}^n\Vert T(f_j)\Vert_E^q\Big)^{\frac{1}{q}}\le2^{1+|1-\frac{1}{q}|}KC\, \Big\Vert \Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{X(\mu)+Y(\mu)}. $$ \end{proof} Further information on Banach lattices and function spaces can be found for instance in \cite{bennett-sharpley,kalton-peck-roberts,lindenstrauss-tzafriri,luxemburg-zaanen,okada-ricker-sanchezperez} and \cite{zaanen}. \subsection{Integration with respect to a vector measure defined on a $\delta$-ring} Let $\mathcal{R}$ be a \emph{$\delta$--ring} of subsets of $\Omega$ (i.e.\ a ring closed under countable intersections) and let $\mathcal{R}^{loc}$ be the $\sigma$--algebra of all subsets $A$ of $\Omega$ such that $A\cap B\in\mathcal{R}$ for all $B\in\mathcal{R}$. Note that $\mathcal{R}^{loc}=\mathcal{R}$ whenever $\mathcal{R}$ is a $\sigma$-algebra. 
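Two standard illustrations of these notions may help fix ideas (added here for orientation only; they are not needed in the sequel):

```latex
% Standard examples of \delta-rings (orientation only, not used later).
For instance, the family $\mathcal{R}$ of all \emph{finite} subsets of an
infinite set $\Omega$ is a $\delta$--ring which is not a $\sigma$--algebra,
and in this case $\mathcal{R}^{loc}=2^{\Omega}$, since $A\cap B$ is finite
for every finite $B$. Similarly, for Lebesgue measure $\lambda$ on
$\mathbb{R}$, the family
$\mathcal{R}=\{A\in\mathcal{B}(\mathbb{R}):\,\lambda(A)<\infty\}$ is a
$\delta$--ring with $\mathcal{R}^{loc}=\mathcal{B}(\mathbb{R})$, as can be
seen by intersecting with the sets $[-n,n]$, $n\ge1$.
```

Both examples are classical; the second one shows how $\mathcal{R}^{loc}$ recovers the full Borel $\sigma$-algebra from the sets of finite measure.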
Write $\mathcal{S}(\mathcal{R})$ for the space of all \emph{$\mathcal{R}$--simple functions} (i.e.\ simple functions supported in $\mathcal{R}$). A Banach space valued set function $m\colon\mathcal{R}\to E$ is a \emph{vector measure} (\emph{real measure} when $E=\mathbb{R}$) if $\sum m(A_n)$ converges to $m(\cup A_n)$ in $E$ for each sequence $(A_n)\subset\mathcal{R}$ of pairwise disjoint sets with $\cup A_n\in\mathcal{R}$. The \emph{variation} of a real measure $\lambda\colon\mathcal{R}\to \mathbb{R}$ is the measure $|\lambda|\colon\mathcal{R}^{loc}\to[0,\infty]$ given by $$ |\lambda|(A)=\sup\Big\{\sum|\lambda(A_j)|:\, (A_j) \textnormal{ a finite disjoint sequence in } \mathcal{R}\cap 2^A\Big\}. $$ The variation $|\lambda|$ is finite on $\mathcal{R}$. The space $L^1(\lambda)$ of \emph{integrable functions with respect to $\lambda$} is defined as the classical space $L^1(|\lambda|)$ with the usual norm $|f|_\lambda=\int_\Omega|f|\,d|\lambda|$. The integral of an $\mathcal{R}$--simple function $\varphi=\sum_{j=1}^na_j\chi_{A_j}$ over $A\in\mathcal{R}^{loc}$ is defined in the natural way by $\int_A\varphi\,d\lambda=\sum_{j=1}^na_j\lambda(A_j\cap A)$. The space $\mathcal{S}(\mathcal{R})$ is dense in $L^1(\lambda)$. This allows us to define the integral of a function $f\in L^1(\lambda)$ over $A\in\mathcal{R}^{loc}$ as $\int_A f\,d\lambda=\lim\int_A\varphi_n\,d\lambda$ for any sequence $(\varphi_n)\subset\mathcal{S}(\mathcal{R})$ converging to $f$ in $L^1(\lambda)$. The \emph{semivariation} of a vector measure $m\colon\mathcal{R}\to E$ is the function $\Vert m\Vert\colon\mathcal{R}^{loc}\to[0,\infty]$ defined by $$ \Vert m\Vert(A)=\sup_{x^*\in B_{E^*}}|x^*m|(A), $$ where $B_{E^*}$ is the closed unit ball of the topological dual $E^*$ of $E$ and $|x^*m|$ is the variation of the real measure $x^*m$ given by the composition of $m$ with $x^*$. The semivariation $\Vert m\Vert$ is finite on $\mathcal{R}$. 
A set $A\in\mathcal{R}^{loc}$ is said to be \emph{$m$--null} if $m(B)=0$ for every $B\in\mathcal{R}\cap 2^A$. This is equivalent to $\Vert m\Vert(A)=0$. It is known that there exists a measure $\eta\colon\mathcal{R}^{loc}\to[0,\infty]$ equivalent to $\Vert m\Vert $ (see \cite[Theorem 3.2]{brooks-dinculeanu}). Denote $L^0(m)=L^0(\eta)$. The space $L_w^1(m)$ of \emph{weakly integrable} functions with respect to $m$ is defined as the space of ($m$-a.e.\ equal) functions $f\in L^0(m)$ such that $f\in L^1(x^*m)$ for every $x^*\in E^*$. The space $L^1(m)$ of \emph{integrable} functions with respect to $m$ consists of all functions $f\in L_w^1(m)$ such that for each $A\in \mathcal{R}^{loc}$ there exists $x_A \in E$, which is denoted by $\int_A f\,dm$, such that $$ x^*(x_A)=\int_A f\,dx^*m,\ \ \textnormal{ for all } x^*\in E^*. $$ The spaces $L^1(m)$ and $L_w^1(m)$ are B.f.s.'\ related to the measure space $(\Omega,\mathcal{R}^{loc},\eta)$, and the expression $$ \Vert f\Vert_m=\sup_{x^*\in B_{E^*}}\int_\Omega |f|\,d|x^*m| $$ gives a norm for both spaces. The norm of $f\in L^1(m)$ can also be computed by means of the formula \begin{equation}\label{EQ: L1m-intnorm} \Vert f\Vert_m=\sup \left\{ \left\Vert\int_\Omega f\varphi \, d m\right\Vert_E: \ \varphi \in\mathcal{S}(\mathcal{R}),\, |\varphi|\le1\right\}. \end{equation} Moreover, $L^1(m)$ is $\sigma$-order continuous and contains $\mathcal{S}(\mathcal{R})$ as a dense subset and $L_w^1(m)$ has the $\sigma$-Fatou property. For every $\mathcal{R}$-simple function $\varphi=\sum_{j=1}^n\alpha_j\chi_{A_j}$ it follows that $\int_A\varphi\,dm=\sum_{j=1}^n\alpha_jm(A_j\cap A)$ for all $A\in \mathcal{R}^{loc}$. The \emph{integration operator} $I_m\colon L^1(m)\to E$ given by $I_m(f)=\int_\Omega f\,dm$, is a continuous linear operator with $\Vert I_m(f)\Vert_E\le\Vert f\Vert_m$. If $m$ is \emph{positive}, that is $m(A)\ge0$ for all $A\in\mathcal{R}$, then $\Vert f\Vert_m=\Vert I_m(|f|)\Vert_E$ for all $f\in L^1(m)$. 
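As a sanity check (ours, added for orientation), in the scalar case these notions reduce to the classical ones:

```latex
% Sanity check (added): the scalar case E = \mathbb{R}.
If $m=\lambda\ge0$ is a positive real measure on $\mathcal{R}$, then
$B_{E^*}=[-1,1]$ and $|x^*m|=|x^*|\,\lambda$, so that
\[
\Vert f\Vert_m=\int_\Omega |f|\,d\lambda,\qquad
L^1(m)=L_w^1(m)=L^1(\lambda),\qquad
I_m(f)=\int_\Omega f\,d\lambda,
\]
and the integration operator is the ordinary integral.
```

In particular, for positive $m$ the formula $\Vert f\Vert_m=\Vert I_m(|f|)\Vert_E$ above specializes to the usual $L^1$-norm.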
For every $g\in L^1(m)$, the set function $m_g\colon\mathcal{R}^{loc}\to E$ given by $m_g(A)=I_m(g\chi_A)$ is a vector measure. Moreover, $f\in L^1(m_g)$ if and only if $fg\in L^1(m)$, and in this case $\Vert f\Vert_{L^1(m_g)}=\Vert fg\Vert_{L^1(m)}$. For definitions and general results regarding integration with respect to a vector measure defined on a $\delta$-ring we refer to \cite{calabuig-delgado-juan-sanchezperez,delgado1,lewis,masani-niemi1,masani-niemi2}. Let $p\in(0,\infty)$. We denote by $L^p(m)$ the $p$-power of $L^1(m)$, that is, $$ L^p(m)=\big\{f\in L^0(m):\, |f|^p\in L^1(m)\big\}. $$ As noted in Remark \ref{REM: XpResults}, the space $L^p(m)$ is a $\sigma$-order continuous quasi-B.f.s.\ with the quasi-norm $\Vert f\Vert_{L^p(m)}=\Vert\,|f|^p\,\Vert_{L^1(m)}^{1/p}$. Moreover, if $p\ge1$ then $\Vert\cdot\Vert_{L^p(m)}$ is a norm and so $L^p(m)$ is a B.f.s. Direct proofs of these facts and some general results on the spaces $L^p(m)$ can be found in \cite{calabuig-juan-sanchezperez}. \section{The $q$-concave core of a $\sigma$-order continuous quasi-B.f.s}\label{SEC: q-concaveCore} Let $X(\mu)$ be a $\sigma$-order continuous quasi-B.f.s.\ and $q\in(0,\infty)$. We define the space $qX(\mu)$ to be the set of functions $f\in X(\mu)$ such that $$ \Vert f\Vert_{qX(\mu)}=\sup\Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}}<\infty, $$ where the supremum is taken over all finite sets $(f_j)_{j=1}^n\subset X(\mu)$ satisfying $|f|=\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e. Note that, by taking $n=1$, $\Vert f\Vert_{X(\mu)}\le \Vert f\Vert_{qX(\mu)}$. \begin{proposition} The space $qX(\mu)$ is a quasi-B.f.s.\ with quasi-norm $\Vert \cdot\Vert_{qX(\mu)}$. \end{proposition} \begin{proof} First, let us see that if $f\in qX(\mu)$ and $g\in L^0(\mu)$ with $|g|\le|f|$ $\mu$-a.e.\ then $g\in qX(\mu)$ and $\Vert g\Vert_{qX(\mu)}\le\Vert f\Vert_{qX(\mu)}$. Note that $g\in X(\mu)$ as $f\in X(\mu)$.
Let $(g_j)_{j=1}^n\subset X(\mu)$ be such that $|g|=\big(\sum_{j=1}^n|g_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e.\ and take $h=\big|\,|f|^q-|g|^q\,\big|^{\frac{1}{q}}\in X(\mu)$. Since $|f|=\big(\sum_{j=1}^n|g_j|^q+|h|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that $$ \Big(\sum_{j=1}^n\Vert g_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}}\le\Big(\sum_{j=1}^n\Vert g_j\Vert_{X(\mu)}^q+\Vert h\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}}\le\Vert f\Vert_{qX(\mu)}. $$ Taking supremum over all $(g_j)_{j=1}^n\subset X(\mu)$ with $|g|=\big(\sum_{j=1}^n|g_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that $g\in qX(\mu)$ with $\Vert g\Vert_{qX(\mu)}\le\Vert f\Vert_{qX(\mu)}$. It is straightforward to check that $\Vert \cdot\Vert_{qX(\mu)}$ satisfies properties (i) and (ii) of a quasi-norm. Let $K$ be the constant satisfying property (iii) of a quasi-norm for $\Vert \cdot\Vert_{X(\mu)}$. Given $f,g\in qX(\mu)$ and $(h_j)_{j=1}^n\subset X(\mu)$ such that $|f+g|=\big(\sum_{j=1}^n|h_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., by taking $A=\big\{\omega\in\Omega:\,|f(\omega)+g(\omega)|\le2|f(\omega)|\big\}$, $\alpha_q=\max\{1,2^{q-1}\}$ and using \eqref{EQ: t-inequality}, we have that \begin{eqnarray*} \sum_{j=1}^n\Vert h_j\Vert_{X(\mu)}^q & \le & K^q\sum_{j=1}^n\big(\Vert h_j\chi_A\Vert_{X(\mu)}+\Vert h_j\chi_{\Omega\backslash A}\Vert_{X(\mu)}\big)^q \\ & \le & K^q\alpha_q\Big(\sum_{j=1}^n\Vert h_j\chi_A\Vert_{X(\mu)}^q+\sum_{j=1}^n\Vert h_j\chi_{\Omega\backslash A}\Vert_{X(\mu)}^q\Big). \end{eqnarray*} Note that $|f+g|\chi_A, |f+g|\chi_{\Omega\backslash A}\in qX(\mu)$ as $|f+g|\chi_A\le2|f|$ and $|f+g|\chi_{\Omega\backslash A}\le2|g|$. Then, \begin{eqnarray*} \sum_{j=1}^n\Vert h_j\Vert_{X(\mu)}^q & \le & K^q\alpha_q\Big(\big\Vert |f+g|\chi_A\big\Vert_{qX(\mu)}^q+\big\Vert |f+g|\chi_{\Omega\backslash A}\big\Vert_{qX(\mu)}^q\Big) \\ & \le & 2^qK^q\alpha_q\big(\Vert f\Vert_{qX(\mu)}^q+\Vert g\Vert_{qX(\mu)}^q\big).
\end{eqnarray*} By using \eqref{EQ: t-inequality} again, we have that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert h_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} & \le & 2K\alpha_q^{\frac{1}{q}}\big(\Vert f\Vert_{qX(\mu)}^q+\Vert g\Vert_{qX(\mu)}^q\big)^{\frac{1}{q}} \\ & \le & 2^{1+|1-\frac{1}{q}|}K\big(\Vert f\Vert_{qX(\mu)}+\Vert g\Vert_{qX(\mu)}\big). \end{eqnarray*} Taking supremum over all $(h_j)_{j=1}^n\subset X(\mu)$ with $|f+g|=\big(\sum_{j=1}^n|h_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that \begin{equation}\label{EQ: pX-Kconstant} \Vert f+g\Vert_{qX(\mu)}\le2^{1+|1-\frac{1}{q}|}K\big(\Vert f\Vert_{qX(\mu)}+\Vert g\Vert_{qX(\mu)}\big). \end{equation} Finally, let us prove that $qX(\mu)$ is complete. Denote by $r$ and $r'$ the constants satisfying \eqref{EQ: r-sum} for $X(\mu)$ and $qX(\mu)$, respectively. Note that $r'<r$ as $2^{1+|1-\frac{1}{q}|}K>K$. Let $(f_n)\subset qX(\mu)$ be such that $\sum\Vert f_n\Vert_{qX(\mu)}^{r'}<\infty$. Since $\Vert\cdot\Vert_{X(\mu)}\le\Vert\cdot\Vert_{qX(\mu)}$, from Proposition \ref{PROP: quasi-normCompleteness}, we have that $\sum_{j=1}^kf_j\to g$ and $\sum_{j=1}^k|f_j|\to \tilde{g}$ in $X(\mu)$. From Proposition \ref{PROP: mu-a.e.ConvergenceSubsequence}, it follows that $\sum_{j=1}^kf_j\to g$ and $\sum_{j=1}^k|f_j|\to \tilde{g}$ pointwise except on a $\mu$-null set $Z$. Fix any $\gamma>1$ and consider the sets $A_k=\big\{\omega\in\Omega:\,|g(\omega)|\le \gamma\sum_{j=1}^k|f_j(\omega)|\big\}$. Note that $\Omega\backslash \cup A_k\subset Z$ and so it is $\mu$-null. Indeed, if $\omega\not\in Z$ and $|g(\omega)|>\gamma\sum_{j=1}^k|f_j(\omega)|$ for all $k$ (in particular $\sum|f_n(\omega)|\not=0$), then $\gamma\sum|f_n(\omega)|\le|g(\omega)|\le\sum|f_n(\omega)|<\infty$, which is a contradiction. Also note that $g\chi_{A_k}\in qX(\mu)$ as $|g|\chi_{A_k}\le \gamma\sum_{j=1}^k|f_j|$.
Given $(h_j)_{j=1}^n\subset X(\mu)$ with $|g|=\big(\sum_{j=1}^n|h_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert h_j\chi_{A_k}\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} & \le & \Vert g\chi_{A_k}\Vert_{qX(\mu)}\le \gamma\Big\Vert \sum_{j=1}^k|f_j|\Big\Vert_{qX(\mu)} \\ & \le & 4^{\frac{1}{r'}}\gamma\Big(\sum_{j=1}^k\Vert f_j\Vert_{qX(\mu)}^{r'}\Big)^{\frac{1}{r'}} \\ & \le & 4^{\frac{1}{r'}}\gamma\Big(\sum\Vert f_n\Vert_{qX(\mu)}^{r'}\Big)^{\frac{1}{r'}}. \end{eqnarray*} On the other hand, since $X(\mu)$ is $\sigma$-order continuous and $|h_j|\chi_{A_k}\uparrow |h_j|$ $\mu$-a.e.\ as $k\to\infty$, we have that $h_j\chi_{A_k}\to h_j$ in $X(\mu)$ as $k\to\infty$. Taking the limit as $k\to\infty$ in the above inequality and applying \eqref{EQ: Limit-quasi-norm}, we obtain that $$ \Big(\sum_{j=1}^n\Vert h_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} \le 4^{\frac{1}{r}+\frac{1}{r'}}\gamma\Big(\sum\Vert f_n\Vert_{qX(\mu)}^{r'}\Big)^{\frac{1}{r'}}. $$ Now, taking supremum over all $(h_j)_{j=1}^n\subset X(\mu)$ with $|g|=\big(\sum_{j=1}^n|h_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., it follows that $g\in qX(\mu)$ with $\Vert g\Vert_{qX(\mu)}\le4^{\frac{1}{r}+\frac{1}{r'}}\gamma\big(\sum\Vert f_n\Vert_{qX(\mu)}^{r'}\big)^{\frac{1}{r'}}$. Moreover, since $\gamma>1$ is arbitrary, letting $\gamma\to 1$ we obtain $$ \Big\Vert \sum f_n\Big\Vert_{qX(\mu)}\le4^{\frac{1}{r}+\frac{1}{r'}}\Big(\sum\Vert f_n\Vert_{qX(\mu)}^{r'}\Big)^{\frac{1}{r'}}. $$ Hence, $\sum_{j=1}^nf_j\to g$ in $qX(\mu)$, since $$ \Big\Vert g-\sum_{j=1}^nf_j\Big\Vert_{qX(\mu)}=\Big\Vert \sum_{j>n}f_j\Big\Vert_{qX(\mu)}\le4^{\frac{1}{r}+\frac{1}{r'}}\Big(\sum_{j>n}\Vert f_j\Vert_{qX(\mu)}^{r'}\Big)^{\frac{1}{r'}}\to0. $$ Therefore, from Proposition \ref{PROP: quasi-normCompleteness} it follows that $qX(\mu)$ is complete. \end{proof} \begin{proposition} The space $qX(\mu)$ is $q$-concave. Consequently, it is also $\sigma$-order continuous.
\end{proposition} \begin{proof} Let $(f_j)_{j=1}^n\subset qX(\mu)$ and consider $(h_k^j)_{k=1}^{m_j}\subset X(\mu)$ with $|f_j|=\big(\sum_{k=1}^{m_j}|h_k^j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e.\ for each $j$. Since $\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}=\big(\sum_{j=1}^n\sum_{k=1}^{m_j}|h_k^j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., it follows that $$ \sum_{j=1}^n\sum_{k=1}^{m_j}\Vert h_k^j\Vert_{X(\mu)}^q\le \Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{qX(\mu)}^q. $$ Taking supremum for each $j=1,...,n$ over all $(h_k^j)_{k=1}^{m_j}\subset X(\mu)$ with $|f_j|=\big(\sum_{k=1}^{m_j}|h_k^j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that $$ \sum_{j=1}^n\Vert f_j\Vert_{qX(\mu)}^q\le \Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{qX(\mu)}^q $$ and so $qX(\mu)$ is $q$-concave. The $\sigma$-order continuity is given by Proposition \ref{PROP: q-concaveImpliesSigmaoc}. \end{proof} Moreover, the following proposition shows that $qX(\mu)$ is in fact the \emph{$q$-concave core} of $X(\mu)$, that is, the largest $q$-concave quasi-B.f.s.\ related to $\mu$ contained in $X(\mu)$. In particular, $qX(\mu)=X(\mu)$ whenever $X(\mu)$ is $q$-concave. \begin{proposition}\label{PROP: q-concaveCore} Let $Z(\xi)$ be a quasi-B.f.s.\ with $\mu\ll\xi$. The following statements are equivalent: \begin{itemize}\setlength{\leftskip}{-3ex}\setlength{\itemsep}{.5ex} \item[(a)] $[i]\colon Z(\xi)\to X(\mu)$ is well defined and $q$-concave. \item[(b)] $[i]\colon Z(\xi)\to qX(\mu)$ is well defined. \end{itemize} In particular, $qX(\mu)$ is the $q$-concave core of $X(\mu)$. \end{proposition} \begin{proof} (a) $\Rightarrow$ (b) Denote by $C$ the $q$-concavity constant of the operator $[i]\colon Z(\xi)\to X(\mu)$. Let $f\in Z(\xi)$ (so $f\in X(\mu)$) and $(f_j)_{j=1}^{n}\subset X(\mu)$ with $|f|=\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}$ except on a $\mu$-null set $N$.
Since $|f_j|\chi_{\Omega\backslash N}\le |f|$ pointwise (so $\xi$-a.e.), then $f_j\chi_{\Omega\backslash N}\in Z(\xi)$. Noting that $f_j=f_j\chi_{\Omega\backslash N}$ $\mu$-a.e., it follows that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} & = & \Big(\sum_{j=1}^n\Vert f_j\chi_{\Omega\backslash N}\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} \\ & \le & C\,\Big\Vert \Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\chi_{\Omega\backslash N}\Big\Vert_{Z(\xi)}\le C\,\Vert f\Vert_{Z(\xi)}. \end{eqnarray*} Hence $f\in qX(\mu)$ with $\Vert f\Vert_{qX(\mu)}\le C\,\Vert f\Vert_{Z(\xi)}$. (b) $\Rightarrow$ (a) Clearly $[i]\colon Z(\xi)\to X(\mu)$ is well defined as $qX(\mu)\subset X(\mu)$. Denote by $M$ the continuity constant of $[i]\colon Z(\xi)\to qX(\mu)$ (recall that every positive operator between quasi-B.f.s.'\ is continuous). For every $(f_j)_{j=1}^n\subset Z(\xi)$ we have that $\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}$ is in $qX(\mu)$ as it is in $Z(\xi)$, and so \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}} & \le &\Big\Vert \Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{qX(\mu)} \\ & \le & M\,\Big\Vert \Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{Z(\xi)}. \end{eqnarray*} Hence, $[i]\colon Z(\xi)\to X(\mu)$ is $q$-concave. In particular, if $Z(\mu)$ is a $q$-concave quasi-B.f.s.\ such that $Z(\mu)\subset X(\mu)$, we have that $i\colon Z(\mu)\to X(\mu)$ is well defined, continuous and so $q$-concave. Then, from (a) $\Rightarrow$ (b) we have that $Z(\mu)\subset qX(\mu)$. \end{proof} For $p\in(0,\infty)$, the $p$-power of $qX(\mu)$ can be described in terms of the $p$-power of $X(\mu)$. \begin{proposition}\label{PROP: pPower-qX(mu)} The equality $\big(qX(\mu)\big)^p=qpX(\mu)^p$ holds with equal norms. \end{proposition} \begin{proof} Let $f\in \big(qX(\mu)\big)^p$. Since $|f|^p\in qX(\mu)$, in particular $|f|^p\in X(\mu)$ and so $f\in X(\mu)^p$. 
Consider $(f_j)_{j=1}^{n}\subset X(\mu)^p$ satisfying that $|f|=\big(\sum_{j=1}^n|f_j|^{qp}\big)^{\frac{1}{qp}}$ $\mu$-a.e. Noting that $(|f_j|^p)_{j=1}^{n}\subset X(\mu)$ and $|f|^p=\big(\sum_{j=1}^n(|f_j|^p)^q\big)^{\frac{1}{q}}$ $\mu$-a.e., we have that $$ \Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)^p}^{qp}\Big)^{\frac{1}{qp}}= \Big(\sum_{j=1}^n\Vert\, |f_j|^p\,\Vert_{X(\mu)}^q\Big)^{\frac{1}{qp}}\le\Vert\,|f|^p\,\Vert_{qX(\mu)}^{\frac{1}{p}} =\Vert f\Vert_{(qX(\mu))^p}. $$ Then, $f\in qpX(\mu)^p$ and $\Vert f\Vert_{qpX(\mu)^p}\le\Vert f\Vert_{(qX(\mu))^p}$. Let now $f\in qpX(\mu)^p$. In particular $f\in X(\mu)^p$ and so $|f|^p\in X(\mu)$. Consider $(f_j)_{j=1}^{n}\subset X(\mu)$ satisfying that $|f|^p=\big(\sum_{j=1}^n|f_j|^q\big)^{\frac{1}{q}}$ $\mu$-a.e. Noting that $(|f_j|^{\frac{1}{p}})_{j=1}^{n}\subset X(\mu)^p$ and $|f|=\big(\sum_{j=1}^n(|f_j|^{\frac{1}{p}})^{qp}\big)^{\frac{1}{qp}}$ $\mu$-a.e., we have that $$ \Big(\sum_{j=1}^n\Vert f_j\Vert_{X(\mu)}^q\Big)^{\frac{1}{q}}= \Big(\sum_{j=1}^n\Vert\, |f_j|^{\frac{1}{p}}\,\Vert_{X(\mu)^p}^{qp}\Big)^{\frac{1}{q}}\le\Vert f\Vert_{qpX(\mu)^p}^p. $$ Then, $|f|^p\in qX(\mu)$ and $\Vert\,|f|^p\,\Vert_{qX(\mu)}\le\Vert f\Vert_{qpX(\mu)^p}^p$. Hence, $f\in \big(qX(\mu)\big)^p$ and $\Vert f\Vert_{(qX(\mu))^p}=\Vert\,|f|^p\,\Vert_{qX(\mu)}^{\frac{1}{p}}\le\Vert f\Vert_{qpX(\mu)^p}$. \end{proof} \section{Optimal domain for $(p,q)$-power-concave operators} \label{SEC: (p,q)PowerConcave-quasiBfs} Let $X(\mu)$ be a $\sigma$-order continuous quasi-B.f.s.\ satisfying what we call the \emph{$\sigma$-property}: $$ \Omega=\cup \Omega_n \textnormal{ with } \chi_{\Omega_n}\in X(\mu) \textnormal{ for all } n, $$ and let $T\colon X(\mu)\to E$ be a continuous linear operator with values in a Banach space $E$. We consider the $\delta$-ring $$ \Sigma_{X(\mu)}=\big\{A\in\Sigma:\, \chi_A\in X(\mu)\big\} $$ and the vector measure $m_T\colon\Sigma_{X(\mu)}\to E$ given by $m_T(A)=T(\chi_A)$. 
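A basic example of this construction: take a finite measure $\mu$, $X(\mu)=L^r(\mu)$ with $1\le r<\infty$, and let $T\colon L^r(\mu)\to L^1(\mu)$ be the inclusion map (which is continuous, since $\mu$ is finite). Then $\Sigma_{X(\mu)}=\Sigma$ and $m_T(A)=\chi_A\in L^1(\mu)$. For every $g\in L^\infty(\mu)=L^1(\mu)^*$ one has $|g\,m_T|(A)=\int_A|g|\,d\mu$, so that $$ \Vert f\Vert_{m_T}=\sup_{\Vert g\Vert_{L^\infty(\mu)}\le1}\int_\Omega|f||g|\,d\mu=\Vert f\Vert_{L^1(\mu)}, $$ whence $L^1(m_T)=L^1(\mu)$ with equal norms and $I_{m_T}$ is the identity map of $L^1(\mu)$.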
Note that the $\sigma$-property of $X(\mu)$ guarantees that $\Sigma_{X(\mu)}^{loc}=\Sigma$, and since $\Vert m_T\Vert\ll\mu$ we have that $[i]\colon L^0(\mu)\to L^0(m_T)$ is well defined. Also note that a quasi-B.f.s.\ has the $\sigma$-property if and only if it contains a function $g>0$ $\mu$-a.e. As an extension of \cite[\S\,3]{calabuig-delgado-sanchezperez} to quasi-B.f.s.', in \cite{delgado-sanchezperez} it is proved that $[i]\colon X(\mu)\to L^1(m_T)$ is well defined and $T=I_{m_T}\circ[i]$. Moreover, $L^1(m_T)$ is the largest $\sigma$-order continuous quasi-B.f.s.\ with this property. That is, if $Z(\xi)$ is a $\sigma$-order continuous quasi-B.f.s.\ with $\xi\ll\mu$ and $T$ factors as $$ \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[dr]_(.45){[i]} & & E\\ & Z(\xi) \ar@{.>}[ur]_(.5){S}} $$ with $S$ being a continuous linear operator, then $[i]\colon Z(\xi)\to L^1(m_T)$ is well defined and $S=I_{m_T}\circ[i]$. In other words, $L^1(m_T)$ is the optimal domain to which $T$ can be extended preserving continuity. In this section we present the main results of the paper, including a description of the optimal domain for $T$ (when $T$ is $q$-concave) preserving $q$-concavity. First, we have to provide a natural non-finite measure version of the so-called $p$-th power factorable operators, which were developed for the first time in \cite[\S\,5.1]{okada-ricker-sanchezperez} for the case of finite measures. For $p\in(0,\infty)$, we say that $T$ is a \emph{$p$-th power factorable operator with a continuous extension} if there is a continuous linear extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$, i.e.\ $T$ factors as $$ \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[dr]_(.4){i} & & E\\ & X(\mu)^{\frac{1}{p}}+X(\mu) \ar@{.>}[ur]_(.6){S}} $$ for a continuous linear operator $S$. Regarding this definition and having in mind Remark \ref{REM: XpResults}.(b), two standard cases must be considered whenever $\chi_\Omega\in X(\mu)$.
If $p>1$ we have that $X(\mu)^{\frac{1}{p}}+X(\mu)=X(\mu)^{\frac{1}{p}}$, and then the definition of $p$-th power factorable operator with a continuous extension coincides with the one given in \cite[Definition 5.1]{okada-ricker-sanchezperez}. However, if $p\le1$ we have that $X(\mu)^{\frac{1}{p}}+X(\mu)=X(\mu)$, and so $p$-th power factorable operators with continuous extensions are just continuous operators. The following result, which is proved in \cite{delgado-sanchezperez} in order to find the optimal domain for $p$-th power factorable operators, will be the starting point of our work in this section. The proof is an adaptation to our setting of the proof given in \cite[Theorem 5.7]{okada-ricker-sanchezperez} for the case when $\mu$ is finite, $\chi_\Omega\in X(\mu)$ and $p\ge1$. \begin{theorem}\label{THM: quasiBfsXsubset(LpCapL1)mT} The following statements are equivalent. \begin{itemize}\setlength{\leftskip}{-3ex}\setlength{\itemsep}{1ex} \item[(a)] $T$ is $p$-th power factorable with a continuous extension. \item[(b)] $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is well defined. \item[(c)] $[i]\colon X(\mu)\to L^p(m_T)\cap L^1(m_T)$ is well defined. \item[(d)] There exists $M>0$ such that $\Vert Tf\Vert_E\le M\Vert f\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}$ for all $f\in X(\mu)$. \end{itemize} Moreover, if (a)-(d) hold, the extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$ coincides with the integration operator $I_{m_T}\circ[i]$. \end{theorem} Briefly, that (a) implies (b), and that the extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$ is just $I_{m_T}\circ[i]$, both follow from the optimality of $L^1(m_T)$. Note that $X(\mu)^{\frac{1}{p}}+X(\mu)$ is $\sigma$-order continuous as $X(\mu)$ is so. The equivalence between (b) and (c) is a direct check. Statement (b) implies (d) since $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is continuous (as it is positive) and $T=I_{m_T}\circ[i]$.
Finally, the implication (d) $\Rightarrow$ (a) is based on a standard argument which uses the approximation of a measurable function through functions in $X(\mu)$ (possible by the $\sigma$-property) to construct an extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$. For a detailed proof of Theorem \ref{THM: quasiBfsXsubset(LpCapL1)mT} see \cite{delgado-sanchezperez}, where moreover it is proved that if $T$ is $p$-th power factorable with a continuous extension then $L^p(m_T)\cap L^1(m_T)$ is the optimal domain to which $T$ can be extended preserving this property. We now turn to the new results on optimal domains. We consider the following property, which is stronger than $p$-th power factorability, and look for the corresponding optimal domain. For $p,q\in(0,\infty)$, we say that $T$ is \emph{$(p,q)$-power-concave} if there exists a constant $C>0$ such that $$ \Big(\sum_{j=1}^n\Vert T(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)} $$ for every finite subset $(f_j)_{j=1}^n\subset X(\mu)$. If $\chi_\Omega\in X(\mu)$ and $p\ge1$ we have that $X(\mu)^{\frac{1}{p}}+X(\mu)=X(\mu)^{\frac{1}{p}}$, and then our definition of $(p,q)$-power-concave operator coincides with the one given in \cite[Definition 6.1]{okada-ricker-sanchezperez}. \begin{remark}\label{REM: (p,q)PowerConcave} The following statements hold: \begin{itemize}\setlength{\leftskip}{-2.5ex}\setlength{\itemsep}{1ex} \item[(i)] A $(1,q)$-power-concave operator is just a $q$-concave operator. \item[(ii)] If $T$ is $(p,q)$-power-concave then $T$ is $\frac{q}{p}$-concave, as $X(\mu)\subset X(\mu)^{\frac{1}{p}}+X(\mu)$ continuously. \item[(iii)] If $\chi_\Omega\in X(\mu)$ and $p<1$, since $X(\mu)^{\frac{1}{p}}+X(\mu)=X(\mu)$, we have that $(p,q)$-power-concavity coincides with $\frac{q}{p}$-concavity. \item[(iv)] If $T$ is $(p,q)$-power-concave then $T$ is $p$-th power factorable with a continuous extension.
Indeed, the $(p,q)$-power-concavity inequality applied to a single function is just item (d) of Theorem \ref{THM: quasiBfsXsubset(LpCapL1)mT}. \end{itemize} \end{remark} As we will see in the next result, $(p,q)$-power-concavity is closely related to the following property. We say that $T$ is \emph{$p$-th power factorable with a $q$-concave extension} if there exists a $q$-concave linear extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$, i.e.\ $T$ factors as $$ \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[dr]_(.45){i} & & E\\ & X(\mu)^{\frac{1}{p}}+X(\mu) \ar@{.>}[ur]_(.55){S}} $$ with $S$ being a $q$-concave linear operator. In this case, it is straightforward to check that $T$ is $q$-concave. \begin{theorem}\label{THM: Xsubset(q/pL1CapqLp)mT} The following statements are equivalent: \begin{itemize}\setlength{\leftskip}{-2.5ex}\setlength{\itemsep}{1ex} \item[(a)] $T$ is $(p,q)$-power-concave. \item[(b)] $T$ is $p$-th power factorable with a $\frac{q}{p}$-concave extension. \item[(c)] $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is well defined and $\frac{q}{p}$-concave. \item[(d)] $[i]\colon X(\mu)\to L^1(m_T)$ is well defined and $\frac{q}{p}$-concave, and $[i]\colon X(\mu)\to L^p(m_T)$ is well defined and $q$-concave. \item[(e)] $[i]\colon X(\mu)\to \frac{q}{p}L^1(m_T)\cap qL^p(m_T)$ is well defined. \end{itemize} Moreover, if (a)-(e) hold, the extension of $T$ to $X(\mu)^{\frac{1}{p}}+X(\mu)$ coincides with the integration operator $I_{m_T}\circ[i]$. \end{theorem} \begin{proof} First note that $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)$ is $\sigma$-order continuous as a consequence of Proposition \ref{PROP: q-concaveImpliesSigmaoc}. (a) $\Rightarrow$ (b) From Remark \ref{REM: (p,q)PowerConcave}.(iv) we have that $T$ is $p$-th power factorable with a continuous extension. Let $S\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to E$ be a continuous linear operator extending $T$. We are going to see that $S$ is $\frac{q}{p}$-concave.
Since $T$ is $(p,q)$-power-concave and $S=T$ on $X(\mu)$, there exists $C>0$ such that $$ \Big(\sum_{j=1}^n\Vert S(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)} $$ for every finite subset $(f_j)_{j=1}^n\subset X(\mu)$. Consider $(f_j)_{j=1}^n\subset X(\mu)^{\frac{1}{p}}+X(\mu)$ with $f_j\ge0$ $\mu$-a.e.\ for all $j$. The $\sigma$-property of $X(\mu)$ allows us to find for each $j=1,...,n$ a sequence $(h_k^j)\subset X(\mu)$ such that $0\le h_k^j\uparrow f_j$ $\mu$-a.e.\ as $k\to\infty$ (see \cite{delgado-sanchezperez} for the details). For every $k$, we have that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert S(h_k^j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}} & \le & C\,\Big\Vert\Big(\sum_{j=1}^n|h_k^j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)} \\ & \le & C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}. \end{eqnarray*} On the other hand, since $X(\mu)^{\frac{1}{p}}+X(\mu)$ is $\sigma$-order continuous, it follows that $h_k^j\to f_j$ in $X(\mu)^{\frac{1}{p}}+X(\mu)$ as $k\to\infty$, and so $S(h_k^j)\to S(f_j)$ in $E$ as $k\to\infty$. Hence, taking the limit as $k\to\infty$ in the above inequality, it follows that $$ \Big(\sum_{j=1}^n\Vert S(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}. $$ For a general $(f_j)_{j=1}^n\subset X(\mu)^{\frac{1}{p}}+X(\mu)$, write $f_j=f_j^+-f_j^-$, where $f_j^+$ and $f_j^-$ are the positive and negative parts, respectively, of each $f_j$.
By using inequality \eqref{EQ: t-inequality} and denoting $\alpha_{p,q}=\max\{1,2^{1-\frac{p}{q}}\}$, we have that \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert S(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}} & \le & \Big(\sum_{j=1}^n\big(\Vert S(f_j^+)\Vert_E+\Vert S(f_j^-)\Vert_E\big)^{\frac{q}{p}}\Big)^{\frac{p}{q}} \\ & \le & \alpha_{p,q}\Big(\sum_{j=1}^n\Vert S(f_j^+)\Vert_E^{\frac{q}{p}}+\sum_{j=1}^n\Vert S(f_j^-)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}} \\ & \le & \alpha_{p,q}\,C\,\Big\Vert\Big(\sum_{j=1}^n|f_j^+|^{\frac{q}{p}} +\sum_{j=1}^n|f_j^-|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)} \\ & = & \alpha_{p,q}\,C\, \Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)} \end{eqnarray*} (for the last equality note that $|f_j|^{\frac{q}{p}}=|f_j^+|^{\frac{q}{p}} +|f_j^-|^{\frac{q}{p}}$ as $f_j^+$ and $f_j^-$ have disjoint support). (b) $\Rightarrow$ (c) Since $T$ is $p$-th power factorable with a $\frac{q}{p}$-concave (and so continuous) extension, from Theorem \ref{THM: quasiBfsXsubset(LpCapL1)mT}, the map $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is well defined. Let $S\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to E$ be a $\frac{q}{p}$-concave linear operator extending $T$. Note that $S=I_{m_T}\circ[i]$ (Theorem \ref{THM: quasiBfsXsubset(LpCapL1)mT}). Denote by $C$ the $\frac{q}{p}$-concavity constant of $S$. Consider $(f_j)_{j=1}^n\subset X(\mu)^{\frac{1}{p}}+X(\mu)$ and fix $\varepsilon>0$. For each $j$, by \eqref{EQ: L1m-intnorm}, we can take $\varphi_j\in\mathcal{S}\big(\Sigma_{X(\mu)}\big)$ such that $|\varphi_j|\le1$ and $$ \Vert f_j\Vert_{L^1(m_T)}\le\Big(\frac{\varepsilon}{2^j}\Big)^{\frac{p}{q}}+\Vert I_{m_T}(f_j\varphi_j)\Vert_E. $$ Since $f_j\varphi_j\in X(\mu)^{\frac{1}{p}}+X(\mu)$ as $|f_j\varphi_j|\le|f_j|$, then $I_{m_T}(f_j\varphi_j)=S(f_j\varphi_j)$. 
So, by using inequality \eqref{EQ: t-inequality} and the $\frac{q}{p}$-concavity of $S$, we have that \begin{eqnarray*} \sum_{j=1}^n\Vert f_j\Vert_{L^1(m_T)}^{\frac{q}{p}} & \le & \sum_{j=1}^n\left(\Big(\frac{\varepsilon}{2^j}\Big)^{\frac{p}{q}}+\Vert S(f_j\varphi_j)\Vert_E\right)^{\frac{q}{p}} \\ & \le & \max\{1,2^{\frac{q}{p}-1}\}\left(\sum_{j=1}^n\frac{\varepsilon}{2^j}+ \sum_{j=1}^n\Vert S(f_j\varphi_j)\Vert_E^{\frac{q}{p}}\right) \\ & \le & \max\{1,2^{\frac{q}{p}-1}\} \left(\varepsilon+C^{\frac{q}{p}}\,\Big\Vert\Big(\sum_{j=1}^n |f_j\varphi_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}^{\frac{q}{p}}\right) \\ & \le & \max\{1,2^{\frac{q}{p}-1}\} \left(\varepsilon+C^{\frac{q}{p}}\,\Big\Vert\Big(\sum_{j=1}^n |f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}^{\frac{q}{p}}\right). \end{eqnarray*} Taking limit as $\varepsilon\to0$, we obtain $$ \sum_{j=1}^n\Vert f_j\Vert_{L^1(m_T)}^{\frac{q}{p}}\le C^{\frac{q}{p}}\,\max\{1,2^{\frac{q}{p}-1}\} \Big\Vert\Big(\sum_{j=1}^n |f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}^{\frac{q}{p}} $$ and so $$ \Big(\sum_{j=1}^n\Vert f_j\Vert_{L^1(m_T)}^{\frac{q}{p}}\Big)^{\frac{p}{q}}\le C\,\max\{1,2^{1-\frac{p}{q}}\} \Big\Vert\Big(\sum_{j=1}^n |f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}. $$ Hence, $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is $\frac{q}{p}$-concave. (c) $\Leftrightarrow$ (d) From Theorem \ref{THM: quasiBfsXsubset(LpCapL1)mT}, we have that $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is well defined if and only if $[i]\colon X(\mu)\to L^p(m_T)\cap L^1(m_T)$ is well defined, which is equivalent to $[i]\colon X(\mu)\to L^1(m_T)$ and $[i]\colon X(\mu)\to L^p(m_T)$ well defined. 
By Lemma \ref{LEM: q-concaveOperatorOnX+Y} we have that $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$ is $\frac{q}{p}$-concave if and only if $[i]\colon X(\mu)^{\frac{1}{p}}\to L^1(m_T)$ and $[i]\colon X(\mu)\to L^1(m_T)$ are $\frac{q}{p}$-concave. On the other hand, it is straightforward to verify that $[i]\colon X(\mu)^{\frac{1}{p}}\to L^1(m_T)$ is $\frac{q}{p}$-concave if and only if $[i]\colon X(\mu)\to L^p(m_T)$ is $q$-concave. (d) $\Leftrightarrow$ (e) follows from Proposition \ref{PROP: q-concaveCore}. (c) $\Rightarrow$ (a) Denote by $C$ the $\frac{q}{p}$-concavity constant of $[i]\colon X(\mu)^{\frac{1}{p}}+X(\mu)\to L^1(m_T)$. Consider $(f_j)_{j=1}^n\subset X(\mu)$ and note that $f_j\in L^1(m_T)$ with $I_{m_T}(f_j)=T(f_j)$ for all $j$. Then, \begin{eqnarray*} \Big(\sum_{j=1}^n\Vert T(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}} & = & \Big(\sum_{j=1}^n\Vert I_{m_T}(f_j)\Vert_E^{\frac{q}{p}}\Big)^{\frac{p}{q}} \\ & \le & \Big(\sum_{j=1}^n\Vert f_j\Vert_{L^1(m_T)}^{\frac{q}{p}}\Big)^{\frac{p}{q}} \\ & \le & C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^{\frac{q}{p}}\Big)^{\frac{p}{q}}\Big\Vert_{X(\mu)^{\frac{1}{p}}+X(\mu)}. \end{eqnarray*} \end{proof} Note that $qL^p(m_T)=\big(\frac{q}{p}L^1(m_T)\big)^p$ (see Proposition \ref{PROP: pPower-qX(mu)}). In particular, in the case when $T$ is $(p,q)$-power-concave and $\chi_\Omega\in X(\mu)$ (so $\chi_\Omega\in \frac{q}{p}L^1(m_T)$), from Remark \ref{REM: XpResults}.(b) it follows that $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)=qL^p(m_T)$ if $p\ge1$ and $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)=\frac{q}{p}L^1(m_T)$ if $p<1$. \begin{theorem}\label{THM: (p,q)PowerConcave-Factorization} Suppose that $T$ is $(p,q)$-power-concave. Then, $T$ factors as \begin{equation}\label{EQ: (p,q)PowerConcave-Factorization1} \begin{split} \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[dr]_(.4){[i]} & & E\\ & \frac{q}{p}L^1(m_T)\cap qL^p(m_T) \ar@{.>}[ur]_(.6){I_{m_T}}} \end{split} \end{equation} with $I_{m_T}$ being $(p,q)$-power-concave.
Moreover, the factorization is \emph{optimal} in the sense: $$ \left.\begin{minipage}{6.2cm} \textnormal{\it If $Z(\xi)$ is a $\sigma$-order continuous quasi-B.f.s.\ such that $\xi\ll\mu$ and} \leqnomode \begin{equation}\label{EQ: (p,q)PowerConcave-Factorization2} \begin{split} \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[dr]_(.45){[i]} & & E\\ & Z(\xi) \ar@{.>}[ur]_(.5){S}} \end{split} \end{equation} \textnormal{\it with $S$ being a $(p,q)$-power-concave linear operator} \end{minipage}\ \right\} \ \Longrightarrow \ \ \ \begin{minipage}{6cm} \textnormal{\it $[i]\colon Z(\xi)\to \frac{q}{p}L^1(m_T)\cap qL^p(m_T)$} \\ \textnormal{\it is well defined and $S=I_{m_T}\circ[i]$.} \end{minipage} $$ \end{theorem} \begin{proof} The factorization \eqref{EQ: (p,q)PowerConcave-Factorization1} follows from Theorem \ref{THM: Xsubset(q/pL1CapqLp)mT}. The space $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)$ is $\sigma$-order continuous as noted before and satisfies the $\sigma$-property as $X(\mu)$ does. Since $ {\textstyle I_{m_T}\colon \frac{q}{p}L^1(m_T)\cap qL^p(m_T)\to E} $ is continuous (as $I_{m_T}\colon L^1(m_T)\to E$ is so), we can apply Theorem \ref{THM: Xsubset(q/pL1CapqLp)mT} to see that it is $(p,q)$-power-concave. Note that $\Sigma_{X(\mu)}\subset\Sigma_{\frac{q}{p}L^1(m_T)\cap qL^p(m_T)}$ and $m_{I_{m_T}}(A)=I_{m_T}(\chi_A)=T(\chi_A)=m_T(A)$ for all $A\in\Sigma_{X(\mu)}$. That is, $m_T$ is the restriction of $m_{I_{m_T}}\colon\Sigma_{\frac{q}{p}L^1(m_T)\cap qL^p(m_T)}\to E$ to $\Sigma_{X(\mu)}$. From \cite[Lemma 3]{calabuig-delgado-sanchezperez}, it follows that $L^1(m_{I_{m_T}})=L^1(m_T)$. Then, $$ {\textstyle [i]\colon \frac{q}{p}L^1(m_T)\cap qL^p(m_T)\to \frac{q}{p}L^1(m_{I_{m_T}})\cap qL^p(m_{I_{m_T}})} $$ is well defined as $\frac{q}{p}L^1(m_{I_{m_T}})\cap qL^p(m_{I_{m_T}})=\frac{q}{p}L^1(m_T)\cap qL^p(m_T)$. Let $Z(\xi)$ satisfy \eqref{EQ: (p,q)PowerConcave-Factorization2}. In particular, $Z(\xi)$ has the $\sigma$-property. 
From Theorem \ref{THM: Xsubset(q/pL1CapqLp)mT} applied to the operator $S\colon Z(\xi)\to E$, we have that $[i]\colon Z(\xi)\to \frac{q}{p}L^1(m_S)\cap qL^p(m_S)$ is well defined and $S=I_{m_S}\circ[i]$. Since $\Sigma_{X(\mu)}\subset\Sigma_{Z(\xi)}$ and $m_S(A)=S(\chi_A)=T(\chi_A)=m_T(A)$ for all $A\in\Sigma_{X(\mu)}$ (i.e.\ $m_T$ is the restriction of $m_S\colon\Sigma_{Z(\xi)}\to E$ to $\Sigma_{X(\mu)}$), from \cite[Lemma 3]{calabuig-delgado-sanchezperez}, it follows that $L^1(m_S)=L^1(m_T)$ and $I_{m_S}=I_{m_T}$. Therefore, $$ {\textstyle[i]\colon Z(\xi)\to \frac{q}{p}L^1(m_S)\cap qL^p(m_S)=\frac{q}{p}L^1(m_T)\cap qL^p(m_T)} $$ is well defined and $S=I_{m_S}\circ[i]=I_{m_T}\circ[i]$. \end{proof} We can rewrite Theorem \ref{THM: (p,q)PowerConcave-Factorization} in terms of optimal domains. \begin{corollary} Suppose that $T$ is $(p,q)$-power-concave. Then $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)$ is the largest $\sigma$-order continuous quasi-B.f.s.\ to which $T$ can be extended as a $(p,q)$-power-concave operator still with values in $E$. Moreover, the extension of $T$ to the space $\frac{q}{p}L^1(m_T)\cap qL^p(m_T)$ is given by the integration operator $I_{m_T}$. \end{corollary} Recalling that the $(1,q)$-power-concave operators coincide with the $q$-concave operators, we obtain our main result. \begin{corollary} \label{COR: qConcaveOptimalDomain} Suppose that $T$ is $q$-concave. Then $qL^1(m_T)$ is the largest $\sigma$-order continuous quasi-B.f.s.\ to which $T$ can be extended as a $q$-concave operator still with values in $E$. Moreover, the extension of $T$ to $qL^1(m_T)$ is given by the integration operator $I_{m_T}$. \end{corollary} Let us give now a direct application related to the Maurey-Rosenthal factorization of $q$-concave operators defined on a $q$-convex quasi-B.f.s. In the case when $T$ is $q$-concave, by Corollary \ref{COR: qConcaveOptimalDomain}, the integration operator $I_{m_T}$ extends $T$ to the space $qL^1(m_T)$. 
Note that the map $[i]\colon X(\mu)\to qL^1(m_T)$ is $q$-concave as it is continuous and $qL^1(m_T)$ is $q$-concave. From a variant of the Maurey-Rosenthal theorem proved in \cite[Corollary 5]{defant}, under some extra conditions, if $X(\mu)$ is $q$-convex then $[i]\colon X(\mu)\to qL^1(m_T)$ factors through the space $L^q(\mu)$. So, we obtain the following improvement of the usual factorization of $q$-concave operators on $q$-convex quasi-B.f.s.'. \begin{corollary}\label{COR: MaureyRosenthalFactorization} Let $1\le q<\infty$. Assume that $\mu$ is $\sigma$-finite and that $X(\mu)$ is $q$-convex and has the $\sigma$-Fatou property. If $T$ is $q$-concave then it can be factored as $$ \xymatrix{ X(\mu) \ar[rr]^{T} \ar@{.>}[d]_{M_g} & & E\\ L^q(\mu) \ar@{.>}[rr]^{M_{g^{-1}}}& & qL^1(m_T) \ar@{.>}[u]_{I_{m_T}}} $$ for positive multiplication operators $M_g$ and $M_{g^{-1}}$. The converse is also true. \end{corollary} \section{Vector measure representation of $q$-concave Banach lattices} \label{SEC: qConcaveBanachLattices} In this last section we look for a characterization of the class of Banach lattices which are $p$-convex and $q$-concave in terms of spaces of integrable functions with respect to a vector measure. For $1<p$, it is known that order continuous $p$-convex Banach lattices can be order isometrically represented as spaces $L^p$ of a vector measure defined on a $\delta$-ring (see \cite[Theorem 10]{calabuig-juan-sanchezperez}). We will see that the addition of the $q$-concavity property to the represented Banach lattice translates into adding a certain concavity property of the corresponding integration map. First let us show two results, which will be needed later, concerning concavity of the integration operator of a vector measure. Let $m\colon\mathcal{R}\to E$ be a vector measure defined on a $\delta$-ring $\mathcal{R}$ of subsets of $\Omega$ and with values in a Banach space $E$.
\begin{proposition}\label{PROP: Im-qConcave} The integration operator $I_m\colon L^1(m)\to E$ is $q$-concave if and only if $L^1(m)$ is $q$-concave. \end{proposition} \begin{proof} Suppose that $I_m\colon L^1(m)\to E$ is $q$-concave and denote by $C$ its $q$-concavity constant. Take $(f_j)_{j=1}^n\subset L^1(m)$ and $(\varphi_j)_{j=1}^n\subset \mathcal{S}(\mathcal{R})$ with $|\varphi_j|\le1$ for all $j$. Since $(f_j\varphi_j)_{j=1}^n\subset L^1(m)$, as $|f_j\varphi_j|\le|f_j|$ for all $j$, we have that $$ \Big(\sum_{j=1}^n\Vert I_m(f_j\varphi_j)\Vert_E^q\Big)^{\frac{1}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j\varphi_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{L^1(m)}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{L^1(m)}. $$ Taking the supremum for each $j=1,\dots,n$ over all $\varphi_j\in \mathcal{S}(\mathcal{R})$ with $|\varphi_j|\le1$, it follows from \eqref{EQ: L1m-intnorm} that $$ \Big(\sum_{j=1}^n\Vert f_j\Vert_{L^1(m)}^q\Big)^{\frac{1}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|f_j|^q\Big)^{\frac{1}{q}}\Big\Vert_{L^1(m)}. $$ The converse is obvious as $I_m$ is continuous. \end{proof} Useful consequences can be deduced directly from the fact that the integration map $I_m\colon L^1(m) \to E$ is $q$-concave. Assume that $m$ is defined on a $\sigma$-algebra and note that $q$-concavity for $q\ge1$ always implies $(q,1)$-concavity (see the definition for instance in \cite[p.\,61]{okada-ricker-sanchezperez2}). Thus, by \cite[Proposition 7.9]{okada-ricker-sanchezperez2}, if $I_m$ is $q$-concave for $q\ge1$ then it is \emph{weakly completely continuous} (i.e.\ it maps weak Cauchy sequences into weakly convergent sequences). Moreover, this implies that $L^1(m)$ coincides with the space $L_w^1(m)$ and so it has the $\sigma$-Fatou property. In the case when $\chi_\Omega\in L^1(m)$ (for instance if $m$ is defined on a $\sigma$-algebra), we obtain a further result regarding $(p,q)$-power-concave operators.
\begin{proposition}\label{PROP: Lp(m)-qConcave} Suppose that $\chi_\Omega\in L^1(m)$ and $p\ge1$. The integration operator $I_m\colon L^p(m)\to E$ is $(p,q)$-power-concave if and only if $L^p(m)$ is $q$-concave. \end{proposition} \begin{proof} First note that under the hypothesis it follows that $L^p(m)$ has the $\sigma$-property (in fact $\chi_\Omega\in L^p(m)$) and $L^p(m)\subset L^1(m)$. So, $I_m\colon L^p(m)\to E$ is well defined and continuous. Suppose that $I_m\colon L^p(m)\to E$ is $(p,q)$-power-concave. From Theorem \ref{THM: Xsubset(q/pL1CapqLp)mT}, we have that $[i]\colon L^p(m)\to\frac{q}{p}L^1(m_{I_m})\cap qL^p(m_{I_m})$ is well defined. Note that $(\mathcal{R}^{loc})_{L^p(m)}=\mathcal{R}^{loc}$ and so $m_{I_m}$ coincides with $m_{\chi_\Omega}$ (see Preliminaries). Then, $L^1(m_{I_m})=L^1(m)$ and so $L^p(m)\subset\frac{q}{p}L^1(m)\cap qL^p(m)\subset qL^p(m)$. Hence, $L^p(m)$ is $q$-concave as $L^p(m)= qL^p(m)$. Suppose now that $L^p(m)$ is $q$-concave. Then, it is direct to check that $L^1(m)$ is $\frac{q}{p}$-concave. Since $L^p(m)\subset L^1(m)$, the integration operator $I_m\colon L^1(m)\to E$ is continuous and $\big(L^p(m)\big)^{\frac{1}{p}}+L^p(m)=L^1(m)$, it follows that $I_m\colon L^p(m)\to E$ satisfies the inequality in the definition of a $(p,q)$-power-concave operator. \end{proof} Let us now turn to the representation of $q$-concave Banach lattices as spaces of integrable functions. We begin by considering B.f.s.'. \begin{proposition}\label{PROP: Bfs-Representation} Let $p,q\in(0,\infty)$ and let $Z(\xi)$ be a $q$-concave B.f.s.\ which is also $p$-convex in the case when $p>1$. Then, $Z(\xi)$ coincides with the space $L^p(m)$ of a Banach space valued vector measure $m\colon\mathcal{R}\to E$ defined on a $\delta$-ring whose integration operator $I_m\colon L^1(m)\to E$ is $\frac{q}{p}$-concave. Moreover, if $\chi_\Omega\in Z(\xi)$, the vector measure $m$ is defined on a $\sigma$-algebra.
\end{proposition} \begin{proof} Note that if $p\le1$ then $Z(\xi)^{\frac{1}{p}}$ is a B.f.s.\ (see Remark \ref{REM: XpResults}.(d)). In the case when $p>1$, renorming $Z(\xi)$ if it is necessary, we can assume that the $p$-convexity constant of $Z(\xi)$ is equal to $1$ (see \cite[Proposition 1.d.8]{lindenstrauss-tzafriri}), and so $Z(\xi)^{\frac{1}{p}}$ is a B.f.s. (see Remark \ref{REM: XpResults}.(e)). Consider the $\delta$-ring $\Sigma_{Z(\xi)}=\big\{A\in\Sigma:\,\chi_A\in Z(\xi)\big\}$ and the finitely additive set function $m\colon\Sigma_{Z(\xi)}\to Z(\xi)^{\frac{1}{p}}$ given by $m(A)=\chi_A$. Since $Z(\xi)^{\frac{1}{p}}$ is $\sigma$-order continuous, as $Z(\xi)$ is so by Proposition \ref{PROP: q-concaveImpliesSigmaoc}, it follows that $m$ is a vector measure. Let us see that $L^1(m)=Z(\xi)^{\frac{1}{p}}$ with equal norms and so we will have that $Z(\xi)$ coincides with $L^p(m)$. For $\varphi\in\mathcal{S}(\Sigma_{Z(\xi)})$ we have that $\varphi\in Z(\xi)^{\frac{1}{p}}$ and $I_m(\varphi)=\varphi$. Moreover, since $m$ is positive, \begin{equation}\label{EQ: Bfs-Representation} \Vert\varphi\Vert_{L^1(m)}=\Vert I_m(|\varphi|)\Vert_{Z(\xi)^{\frac{1}{p}}}=\Vert \varphi\Vert_{Z(\xi)^{\frac{1}{p}}}. \end{equation} In particular, by taking $\varphi=\chi_A$, we obtain that $\Vert m\Vert$ is equivalent to $\xi$. Given $f\in L^1(m)$, since $\mathcal{S}(\Sigma_{Z(\xi)})$ is dense in $L^1(m)$, we can take $(\varphi_n)\subset\mathcal{S}(\Sigma_{Z(\xi)})$ such that $\varphi_n\to f $ in $L^1(m)$ and $m$-a.e. From \eqref{EQ: Bfs-Representation}, we have that $(\varphi_n)$ is a Cauchy sequence in $Z(\xi)^{\frac{1}{p}}$ and so there is $h\in Z(\xi)^{\frac{1}{p}}$ such that $\varphi_n\to h$ in $Z(\xi)^{\frac{1}{p}}$. Taking a subsequence $\varphi_{n_j}\to h$ $\xi$-a.e.\ we see that $f=h\in Z(\xi)^{\frac{1}{p}}$ and $$ \Vert f\Vert_{Z(\xi)^{\frac{1}{p}}}=\lim\Vert \varphi_n\Vert_{Z(\xi)^{\frac{1}{p}}}=\lim\Vert \varphi_n\Vert_{L^1(m)}=\Vert f\Vert_{L^1(m)}. 
$$ Now let $f\in Z(\xi)^{\frac{1}{p}}$ and take $(\varphi_n)\subset\mathcal{S}(\Sigma)$ such that $0\le\varphi_n\uparrow|f|$. For any $n$, writing $\varphi_n=\sum_{j=1}^m\alpha_j\chi_{A_j}$ with $(A_j)_{j=1}^m$ being pairwise disjoint and $\alpha_j>0$ for all $j$, we see that $\chi_{A_j}\le\alpha_j^{-1/p}|f|^{1/p}$ and so $\varphi_n\in\mathcal{S}(\Sigma_{Z(\xi)})$. On the other hand, since $Z(\xi)^{\frac{1}{p}}$ is $\sigma$-order continuous, we have that $\varphi_n\to |f|$ in $Z(\xi)^{\frac{1}{p}}$. From \eqref{EQ: Bfs-Representation}, we have that $(\varphi_n)$ is a Cauchy sequence in $L^1(m)$ and so there is $h\in L^1(m)$ such that $\varphi_n\to h$ in $L^1(m)$. Taking a subsequence $\varphi_{n_j}\to h$ $m$-a.e.\ we see that $|f|=h\in L^1(m)$ and so $f\in L^1(m)$. Hence, $L^1(m)=Z(\xi)^{\frac{1}{p}}$ with equal norms and, since $Z(\xi)$ is $q$-concave, it follows that $L^1(m)$ is $\frac{q}{p}$-concave. From Proposition \ref{PROP: Im-qConcave}, the integration operator $I_m\colon L^1(m)\to E$ is $\frac{q}{p}$-concave. Note that if $\chi_\Omega\in Z(\xi)$, then $\Sigma_{Z(\xi)}=\Sigma$ and so $m$ is defined on a $\sigma$-algebra. \end{proof} For the final result we need some concepts related to Banach lattices. The definitions of $p$-convexity, $q$-concavity and $\sigma$-order continuity for Banach lattices are the same as for B.f.s.'. A Banach lattice $F$ is said to be \emph{order continuous} if for every downwards directed system $(x_\tau)\subset F$ with $x_\tau\downarrow0$ it follows that $\Vert x_\tau\Vert_F\downarrow0$, and is said to be \emph{$\sigma$-complete} if every order bounded sequence in $F$ has a supremum. A Banach lattice which is $\sigma$-order continuous and $\sigma$-complete at the same time is order continuous, see \cite[Proposition 1.a.8]{lindenstrauss-tzafriri}. A \emph{weak unit} of a Banach lattice $F$ is an element $0\le e\in F$ such that $\inf\{x,e\}=0$ implies $x=0$.
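For the reader's convenience, we recall the inequalities behind these notions, in the standard form of \cite[Section 1.d]{lindenstrauss-tzafriri} (the expressions $\big(\sum|x_j|^r\big)^{1/r}$ are understood via the Krivine functional calculus): a Banach lattice $F$ is $q$-concave if there is a constant $C>0$ such that $$ \Big(\sum_{j=1}^n\Vert x_j\Vert_F^q\Big)^{\frac{1}{q}}\le C\,\Big\Vert\Big(\sum_{j=1}^n|x_j|^q\Big)^{\frac{1}{q}}\Big\Vert_F $$ for every finite family $(x_j)_{j=1}^n\subset F$, and $p$-convex if there is a constant $M>0$ such that $$ \Big\Vert\Big(\sum_{j=1}^n|x_j|^p\Big)^{\frac{1}{p}}\Big\Vert_F\le M\,\Big(\sum_{j=1}^n\Vert x_j\Vert_F^p\Big)^{\frac{1}{p}} $$ for every such family. An operator $T$ is $q$-concave when the first inequality holds with $\Vert Tx_j\Vert$ on the left-hand side; this is the form used in the proof of Proposition \ref{PROP: Im-qConcave}.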
An operator $T\colon F_1\to F_2$ between Banach lattices is said to be an \emph{order isometry} if it is linear, one to one, onto, $\Vert Tx\Vert_{F_2}=\Vert x\Vert_{F_1}$ for all $x\in F_1$ and $T(\inf\{x,y\})=\inf\{Tx,Ty\}$ for all $x,y\in F_1$. In particular, an order isometry is a positive operator. So, by using \cite[Proposition 1.d.9]{lindenstrauss-tzafriri}, it is direct to check that every order isometry preserves $p$-convexity and $q$-concavity whenever $p,q\ge1$. \begin{theorem}\label{THM: BanachLattice-Representation} Let $p,q\in[1,\infty)$ and let $F$ be a Banach lattice. The following statements are equivalent: \begin{itemize}\setlength{\leftskip}{-3ex}\setlength{\itemsep}{.5ex} \item[(a)] $F$ is $q$-concave and $p$-convex. \item[(b)] $F$ is order isometric to a space $L^p(m)$ of a Banach space valued vector measure $m\colon\mathcal{R}\to E$ defined on a $\delta$-ring whose integration operator $I_m\colon L^1(m)\to E$ is $\frac{q}{p}$-concave. \end{itemize} Moreover, (a) holds with $F$ having a weak unit if and only if (b) holds with $m$ defined on a $\sigma$-algebra. In this last case $I_m\colon L^p(m)\to E$ is $(p,q)$-power-concave. \end{theorem} \begin{proof} (a) $\Rightarrow$ (b) Since $F$ is $q$-concave, it satisfies a lower $q$-estimate (see \cite[Definition 1.f.4]{lindenstrauss-tzafriri}) and then it is $\sigma$-complete and $\sigma$-order continuous (see the proof of \cite[Proposition 1.f.5]{lindenstrauss-tzafriri}). So, $F$ is order continuous. From \cite[Theorem 5]{delgado-juan} we have that $F$ is order isometric to a space $L^1(\nu)$ of a Banach space valued vector measure $\nu$ defined on a $\delta$-ring. Then, $L^1(\nu)$ is a B.f.s.\ satisfying the conditions of Proposition \ref{PROP: Bfs-Representation} and so $L^1(\nu)=L^p(m)$ with $m\colon\mathcal{R}\to E$ being a vector measure defined on a $\delta$-ring $\mathcal{R}$ and with values in a Banach space $E$, whose integration operator $I_m\colon L^1(m)\to E$ is $\frac{q}{p}$-concave. 
(b) $\Rightarrow$ (a) Since $L^p(m)$ is $p$-convex (Remark \ref{REM: XpResults}.(c)) and $q$-concave (as $L^1(m)$ is $\frac{q}{p}$-concave by Proposition \ref{PROP: Im-qConcave}), so is $F$. Now suppose that (a) holds with $F$ having a weak unit. From \cite[Theorem 8]{curbera} we have that $F$ is order isometric to a space $L^1(\nu)$ of a Banach space valued vector measure $\nu$ defined on a $\sigma$-algebra. Since $\chi_\Omega\in L^1(\nu)$, from Proposition \ref{PROP: Bfs-Representation} we have that (b) holds with $m$ defined on a $\sigma$-algebra. Conversely, if (b) holds with $m$ defined on a $\sigma$-algebra then $\chi_\Omega\in L^p(m)$ (as $\chi_\Omega\in L^1(m)$). So, the image of $\chi_\Omega$ under the order isometry is a weak unit in $F$. Moreover, from Proposition \ref{PROP: Lp(m)-qConcave} it follows that $I_m\colon L^p(m)\to E$ is $(p,q)$-power-concave. \end{proof} In particular, from Theorem \ref{THM: BanachLattice-Representation} we obtain that a Banach lattice is $q$-concave (with $q\ge1$) if and only if it is order isometric to a space $L^1(m)$ of a vector measure $m$ with a $q$-concave integration operator.
TITLE: Linearly independent functions not solutions of ODE QUESTION [0 upvotes]: If I have a set of $N$ linearly independent functions $f_1,\dots,f_N$, which may NOT be the solutions of a differential equation, and I impose initial conditions $f(0)=K_0,\dots,D^{N-1}f(0)=K_{N-1}$, is it true that we will always find coefficients $a_1,\dots,a_N$ such that the function $$f(x)=a_1f_1(x)+\dots +a_Nf_N(x)$$ satisfies the initial conditions imposed? Or is it only true if the linearly independent functions are solutions of a differential equation? (Of course assuming these functions are differentiable up to order $N$.) My problem arises from thinking about $$\{\sin(x),\cos(x)\}.$$ In this case the statement is true because the two functions are never zero at the same value of $x$. On the other hand, $$\{x,x^2\}$$ are also linearly independent, but at $x=0$ both functions are zero, so the statement is false. I don't understand why the fact that the first set is an independent set of solutions and the second is not makes a difference. REPLY [2 votes]: Let $f_1,\ldots,f_n$ be linearly independent functions and define $$D\colon f_i\mapsto \left(f_i(0),f^\prime_i(0),\ldots,f^{(n-1)}_i(0)\right).$$ For solving your initial value problem you need $Df_1,\ldots,Df_n$ to span $\mathbb{R}^n$. This is true in the case of differential equations, but not for general systems of linearly independent functions. Since $D\colon\mathbb{R}^{[0,1]}\to\mathbb{R}^n$, it's actually fairly easy to find independent functions in $\mathbb{R}^{[0,1]}$ whose images are no longer independent, as in your example $x,x^2$. You just need some $g\in\mathrm{span}(f_1,\ldots,f_n)$ such that $Dg=0$, which can be done even for $g\neq 0$. REPLY [2 votes]: Linear independence as functions doesn't tell you much about pointwise behaviour in general.
The functions $f_i(x)$ themselves are in this case fairly irrelevant - all that matters is the set of values $f_i^{(k)}(0)$, which forms an $N\times N$ matrix $A_i\,^k$ if we assume we impose initial conditions $f(0)=b_0,\ldots,f^{(N-1)}(0)=b_{N-1}$. (Sorry about the weird mixture of 0-based and 1-based indices.) Then the question is simply this: Does there exist a vector $\mathbf a$ such that $A \mathbf a = \mathbf b$? Or, in general, is $A$ invertible? In general, regardless of whether the functions are linearly independent as functions, the rows of this matrix may or may not be independent. (For instance, the functions can all be piecewise defined as identically $0$ for $x\in[0,\frac{1}{2}]$ and then become linearly independent just by having different forms in $[1,2]$, say.) The reason why a basis of solutions to a differential equation is special is that - provided the equation is a nice, nonsingular $N$th-order one - the solutions give a prescription for evolving any initial condition $\mathbf b$ forwards in $x$. Therefore, you can choose the rows to be independent just by choosing each $f_i$ to have initial conditions such that e.g. $A$ is the identity matrix! ($f_i^{(k)}(0) = \delta_k^i$.) This is just what we mean by choosing a basis of solutions. Note that $x,x^2$ are a basis of solutions for a differential equation of the form $$x^2f''+bxf'+cf=0$$ but that this is a singular equation, since you have to divide through by $x^2$.
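To make the dichotomy concrete, here is a small numerical sanity check (not part of the original answer; standard-library Python only, and the helper names are illustrative). It builds the matrix of values and first derivatives at $0$ for each pair of functions and tests invertibility:

```python
import math

def derivative_at_zero(f, h=1e-5):
    """Central-difference approximation of f'(0)."""
    return (f(h) - f(-h)) / (2 * h)

def initial_condition_matrix(funcs):
    """Rows are (f_i(0), f_i'(0)) for the N = 2 case discussed above."""
    return [[f(0.0), derivative_at_zero(f)] for f in funcs]

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# {sin x, cos x}: the matrix is invertible, so any (K_0, K_1) is attainable.
det_trig = det2(initial_condition_matrix([math.sin, math.cos]))

# {x, x^2}: both rows start with 0 and x^2 also has zero derivative at 0,
# so the matrix is singular and e.g. f(0) = 1 cannot be matched.
det_poly = det2(initial_condition_matrix([lambda x: x, lambda x: x * x]))

print(det_trig)  # close to -1: invertible
print(det_poly)  # close to 0: singular
```

The same check generalizes to any $N$: stack the rows $(f_i(0),\ldots,f_i^{(N-1)}(0))$ and ask whether the resulting matrix has nonzero determinant.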
A page from the new Wuthering Heights graphic novel As part of a season of events to paint Haworth's literary sisters in a whole new light, a brand new adaptation of Wuthering Heights is released next month - in graphic novel form. David Barnett, Features Editor of the Bradford Telegraph and Argus, spoke to Richard Wilcocks about his article, which appeared in the paper last week and which is reproduced below. He was asked to define a "Brontë purist" and his reply was: As far as what I meant by a Brontë purist... possibly a fan of the Brontës' work who might not consider a (necessary) abridgement of the source text for adaptation purposes a positive move. Whether such people exist outside of the brains of journalists is a point for debate. The Parsonage's Arts programme is 'radical' anyway (see previous postings, for example those relating to Cornelia Parker) so it should fit in well with the Radical Brontës Festival opening in Bradford in September. Here is the article, which has appeared on the B & A's website under the headline "Heathcliff and Cathy... in graphic detail":

It's often said that if Shakespeare were alive today, he'd be writing comic books. No longer the preserve of unfeasibly muscle-bound crimefighters in tights or funny talking animals, comics - or graphic novels, to give them their grown-up name - are now considered to be a valid, adult form of storytelling. Go into Waterstone's in Bradford and there's a whole section devoted to contemporary comics, from the high-octane and often violent Japanese manga to the off-kilter monochromatic nightmares of Charles Burns to the fabulist source material of many major recent blockbuster movies such as V for Vendetta, Sin City, From Hell and Road to Perdition.
Graphic novels generally begin life as a script produced by a writer with dialogue and "stage directions" for the artist to interpret in a series of sequential panels, which is why Keith Jeffrey, who is heading the umbrella initiative Illuminate, under which Radical Brontës falls, brought West Yorkshire poet and playwright Adam Strickson on board. Adam says: "Keith knew I was a Brontë enthusiast and got in touch. My first reaction was 'I don't know anything about graphic novels', but when I started looking into it I realised it wasn't too distant from scriptwriting for the stage." Adam, who has worked as director of inter-cultural stage company Chol Theatre and had a writer's placement at Birmingham Repertory earlier this year, then had to break down Emily's original novel into a narrative suitable for the comic book treatment. He is hoping that those who are already fans of the Brontës will enjoy this fresh take on the book, while at the same time the format might draw other people who have never read the original into Emily's text. For the art duties, an industry professional who will doubtless help the book cause a stir within the comics fraternity was commissioned. Siku is a Leicester-born artist who went to Nigeria at an early age and mastered his craft there. He has worked in commercial graphic design and computer games design, but is perhaps most well known for his work on the pioneering British science fiction comic 2000AD, for which he has illustrated a variety of strips including the comic's flagship character, Judge Dredd. Siku's work has an almost dreamlike quality to it, heavily shadowed and perfect for the gothic tragedy of Heathcliff and Cathy. His art is quite distinctive in the mainstream comics world, eschewing the god-like anatomy usually associated with superheroes for a more elongated, almost otherworldly effect - sometimes to the annoyance of fans.
Before he was accepted as a contemporary master of painted comics, he received hate-mail from fans who didn't like the way he drew Judge Dredd's famous jaw! Siku says: "I suspect it was the moodiness of my work and the heavy amounts of shadow and black, which drew Keith Jeffrey to me. The gothic story actually suits what you might call my 'sci-fi' style." In keeping with the original text, which describes moody anti-hero Heathcliff as the "child of a Lascar" (Asian seaman) or a "gypsy" - a fact often ignored in movie and TV adaptations - Siku wanted to highlight what he saw as the character's exotic nature. He says: "I was always aware when working on the book that I was adapting a classic story - it's a great project to work on and I'm exceptionally proud of my work on Wuthering Heights." Whether the Brontës' father Patrick would have allowed comics or graphic novels - had they existed in those days - into the Parsonage as suitable reading matter is a moot point. A new take on a classic story has been created, and might possibly foster a mutual respect between graphic novel fans and Brontë enthusiasts. Wuthering Heights: The Graphic Novel will be launched at Waterstone's in Bradford on Saturday, September 16.
TITLE: Is $\log(z-\alpha)$ well defined? QUESTION [2 upvotes]: Could you help me with the following problem please: Show that if $U\subset{\mathbb{C}}$ is a simply connected domain and $\alpha \in \mathbb{C}\setminus U$, then $\log(z-\alpha)$ is well defined for all $z\in U$. I have no idea how to solve this problem. It arose in the course of proving the Riemann mapping theorem, but it does not occur to me how to do the proof. I have also tried to see it by taking the case in which $U$ is an open ball, but I cannot quite understand what the exercise says, since if, for example, I took the unit ball centered at the origin, there would still be values at which the logarithm would not be well defined. I hope you can help me with this, thank you. My try: We fix a point $z_0\in U$ and define: $$ \log(z-\alpha)=\displaystyle\int_{\gamma(z)}\dfrac{1}{w-\alpha}dw+\log(z_0-\alpha) $$ with $\gamma(z)$ a path in $U$ that joins $z$ with $z_0$. It would then be necessary to see that in this case the integral does not depend on the path; I think that would suffice, but I have not managed to prove it. It occurs to me that it holds because the domain is simply connected, applying Cauchy's theorem. REPLY [2 votes]: I think you are on the right track. I'd proceed by taking a further step, recalling the following facts: Cauchy's integral theorem: if $f: U\to \Bbb C$ is a holomorphic function on the simply connected domain $U$, then $$ \oint\limits_\gamma f(w)\mathrm{d}w=0 $$ for each closed curve contained in $U$. If $\alpha \in \Bbb C\setminus U$ then $$ f(w)=\frac{1}{w-\alpha}\quad \forall w\in U $$ is holomorphic on the whole $U$.
To see this, note that, by putting $w\equiv x+iy$, we have $$ \begin{split} \frac{\partial}{\partial x} f(x,y) &= \frac{\partial}{\partial x} \frac{1}{x+iy-\alpha} \\ & = -\frac{1}{(x+iy-\alpha)^2}\\ & = -i\frac{\partial}{\partial y} \frac{1}{x+iy-\alpha}= -i\frac{\partial}{\partial y} f(x,y) \end{split} $$ thus $f$ satisfies the Cauchy-Riemann equations on the whole $U$ since it is continuous with its partial derivatives for any $w\in U$ due to the hypothesis $\alpha \in \Bbb C\setminus U$. Now the definition of $\log(z-\alpha)$ as given above is seen to be path independent since, for any two continuous paths $\gamma_1[z],\gamma_2[z]: [0,1]\to U$ joining $z_0$ to $z$ in $U$, $$ \oint\limits_{\gamma_1[z]}\frac{1}{w-\alpha}\mathrm{d}w = \oint\limits_{\gamma_2[z]}\frac{1}{w-\alpha}\mathrm{d}w \iff \oint\limits_{\gamma_1\cup [-\gamma_2]}\frac{1}{w-\alpha}\mathrm{d}w=0 $$ where the closed curve $\gamma_1\cup [-\gamma_2]:[0,1]\to \Bbb C$ is defined as $$ \gamma_1\cup [-\gamma_2](t)= \begin{cases} \gamma_1[z](2t) & t\in \big[0,\frac{1}{2}\big[\\ \gamma_2[z](2- 2t) & t\in \big[\frac{1}{2},1\big] \end{cases} $$ Thus $\log(z-\alpha)$ is uniquely defined. Final notes As @Haus said in the comments, the choice of $z_0\in U$ is exactly the choice of the branch of the logarithm we are willing to use: once you choose $z_0$, the additive constant becomes $\log(z_0-\alpha)$ and is fully determined for the whole simply connected domain $U$. If we consider $\alpha=0$ and $U=\Bbb C\setminus \{0\}$ (which is not simply connected) we see that it is not so: depending on how we encircle $0$ by the path $\gamma[z]$ (clockwise or counterclockwise, starting from an arbitrary $z_0\neq 0$), we have that $$ \log z =\log |z| + i\arg z $$ where $\arg z$ is only determined modulo $2\pi$ (i.e.\ $\log z$ is known only up to integer multiples of $2\pi i$) and thus not uniquely determined. The curve $\gamma_1\cup[-\gamma_2]$ has been defined in perhaps a cumbersome way in order to show how it is closed.
Intuitively speaking, when the parameter $t$ rises from $0$ to $\frac{1}{2}$, the mapped point "moves" along $\gamma_1$ from $z_0$ to the (chosen and fixed) $z$: when it further rises beyond $\frac{1}{2}$ to $1$, the mapped point "moves" from $z$ to $z_0$ "reversing its motion". This latter behavior is the reason for the highly nonstandard and perhaps unfortunate notation $[-\gamma_2]$ I've chosen for this part of the composite path: I wanted to find something that would remind the reader of the addition of simplices in Euclidean spaces, in order to make the concept intuitively clear, but I've obviously failed. Strictly speaking, the reasoning above requires the two paths $\gamma_1[z]$ and $\gamma_2[z]$ to be non-intersecting: however, the integral calculated on them can be seen to be equal to the one calculated on an auxiliary path joining $z$ to $z_0$ and not intersecting either of them, thus the independence of the definition of the complex logarithm holds without any restriction on the path used.
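The path-independence argument can also be checked numerically (this is an illustrative addition, not part of the original answer; standard-library Python only, with sample values of $\alpha$ chosen for demonstration). We approximate $\oint dw/(w-\alpha)$ around the unit circle:

```python
import cmath
import math

def loop_integral(alpha, radius=1.0, n=20000):
    """Approximate the contour integral of 1/(w - alpha) over the
    circle |w| = radius, traversed once counterclockwise, by a
    Riemann sum over the parametrization w(t) = radius * e^{it}."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * math.pi * k / n
        w = radius * cmath.exp(1j * t)                            # point on the circle
        dw = radius * 1j * cmath.exp(1j * t) * (2 * math.pi / n)  # w'(t) dt
        total += dw / (w - alpha)
    return total

# alpha outside the disc (the situation of the problem, where the loop is
# contractible without meeting alpha): the integral vanishes, so the
# definition of log(z - alpha) is path independent.
print(loop_integral(alpha=2.0))   # approximately 0

# alpha inside the disc (the "Final notes" situation): the integral is
# 2*pi*i, which is exactly the 2*pi*i ambiguity of the logarithm.
print(loop_integral(alpha=0.0))   # approximately 2*pi*i
```

The nonzero value in the second case is the obstruction discussed above: in $\Bbb C\setminus\{0\}$ the loop cannot be contracted away from $\alpha=0$, and $\log z$ picks up a multiple of $2\pi i$.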
I love baking with honey. Different flavor. Different texture. Always a good adventure. Years ago I sent for a small cookbook called "Cookin' with Honey," put out by the Minnesota Beekeepers Association. They packed 190 recipes into this little book and lots of great tips. Although the recipe I used is straight out of this cookbook, the advice on making substitutions is a great tip to bookmark. Here are their rules.

Tips for swapping in honey for sugar:
• In a cake or cookie recipe that calls for other sweetening, the general rule is to substitute 2/3 cup honey for each cup of sugar in the recipe.
• Also reduce the amount of liquid by ¼ cup for each cup of honey used.
• When substituting honey in baked goods, add ½ tsp baking soda to the recipe for every cup of honey used.
• Bake at about 25 degrees lower than called for to prevent over-browning.

This is a small batch – about 2 dozen cookies. If you're not sure about baking with honey, this is a good one to cut your teeth on. The cookies are softer than cookies made with sugar and a bit more cakelike. And the flavor is subtle, but definitely honey.

Ingredients for Cookies
½ cup honey
½ cup butter
1 egg
1 tsp vanilla
1 ½ cups flour
½ tsp soda
¼ tsp baking powder
½ tsp salt
½ cup walnuts – I roast mine in the oven first for 8 minutes to bring out the snap of the nuts.
1 cup chocolate chips – I used Ghirardelli Dark

Directions
Preheat oven to 350 degrees. Cream the honey and butter and add the egg and vanilla. Sift dry ingredients together and add to the honey mixture. Mix just until blended, then add the nuts and chips and stir to blend. Drop by teaspoonfuls. Bake for 10-12 minutes.

Dark Ghirardelli chocolate chips, snappy walnuts and honey make this a wonderful cookie to enjoy.

These are really good. Sold my husband. We left out the choc. chips as he is allergic and added way more walnuts. I have a question….what happens if you leave out the baking powder?
Baking powder helps items rise a bit and also contains a bit of baking soda. If you leave out the powder, they may taste and look a bit flat. Let us know if you try it.

Just got done baking these cookies with Kamut flour (which is always organic) and vegan carob chips. I just changed the 1/4 tsp of baking powder to 1/2 and added a couple more tablespoons of carob chips because I had no nuts. Kamut flour can be used 1:1 in a recipe. It has a mildly sweet, buttery flavor and three times more fiber than wheat flour. I am on Dr. Robert Morse's fruit fast and it took all of my cravings for sweets away. I made these and didn't even want a spoonful of dough. LOL. Thanks so much for the recipe! Love your site, too.

I have been making these cookies for about 2 months now and love them. My naturopath had told me to try spelt flour instead of white flour (I was previously using 1 cup white and 1/2 cup whole wheat flour, and they were perfect in texture and taste). Using the spelt flour, she told me to reduce it by 1 tbsp. They are still really tasty, but they dry out really fast. What else can I add to ensure they stay moist like the regular recipe?

Amber, try adding an additional egg to keep them moist. And be sure to store them in an airtight container once baked. You might also keep the dough in little balls in the freezer so you can bake just a few at a time. Thanks for the note. Clare

Thanks for the recipe & honey tips! You are welcome! So happy you enjoyed them.

Just saw Dr. Oz with cookies made from honey instead of sugar…which led me here. So happy to find your site! Wrote down the recipe and will try this evening…have been craving cookies!!! Mmmmmmm thank you! These are very popular on my site. I hope you enjoy them as much as I do. Thanks for writing. Clare

Hi, I made these cookies yesterday and they were amazing! Thank you for sharing. I was just wondering if I could use Quaker oats next time instead of flour to make oatmeal chocolate chip cookies?
Would it work, or do I have to use the flour every time? Hi Tianda. I'm so happy you liked the cookies. Here is a great oatmeal choc chip cookie recipe that I shared some time ago. No flour needed, so I make it for those who need gluten free. And yes, I think you could easily experiment with the honey cookie recipe to eliminate the flour and swap in oats. I usually put in about 1 1/2 cups of oats for every cup of recommended flour, but this is just an estimate. Let me know how they turn out. Peace, Clare

Just made these cookies and was pleasantly surprised how good they are. Thanks for the recipe!!! You are welcome! Thanks for checking in.

Hi, I am looking forward to trying out this recipe for my kids' school lunches. We are in a nut-free class and I am wondering if I can drop the walnuts without adding anything in to replace them? Thanks. Yes, just delete the nuts and enjoy. They are only added for extra texture and shouldn't change the consistency.

Thank you so much for sharing this recipe – these cookies are absolutely amazing! Probably my favorite that I have ever baked and a big hit with family and friends! Thanks for your comment. This is what keeps me blogging. So happy you enjoyed them. Clare

I made the honey choc-chip cookies today. I bought a jar of raw local honey and realized I did not like the taste. Not wanting it to go to waste, I found your recipe and made them to see. They are delicious. My son really likes them also! Thanks!! Lisa, I'm so happy you and your son liked these! Thanks for posting.

I just made these cookies, and due to the fact I have several food sensitivities, I made them gluten-free, soy-free, and almost dairy-free (unless you are extremely sensitive to lactose, butter usually doesn't affect most lactose-intolerant people). I'm sure margarine, coconut oil, or other butter replacements could be used.
I made the following simple substitutions: I substituted the flour with 1 3/4 cups Mama's Almond Blend All Purpose Flour gluten-free flour substitute. I substituted the chocolate chips with Enjoy Life gluten-free, dairy-free, soy-free and nut-free chocolate mini-chips (I used 1/4 cup so I could use more nuts). I also used 1 cup of coarsely chopped walnuts. I added 1/4 tsp. of xanthan gum to keep the soft texture from falling apart. Hint: Non-glutenous flour sometimes doesn't brown like regular flour. I baked these for exactly 9 minutes and they turned out very light brown, almost didn't change color, so take care not to overbake.

Thanks for the careful notes Delilah. This is a very popular recipe, so I hope your notes help others with allergies.

Oh, I was so busy keeping my facts straight, I forgot to mention they turned out absolutely delicious! So happy they were good! Thanks again Delilah.

Thanks! I'm going to try this, but with raisins or something else in place of the chocolate chips since they contain sugar. Due to food intolerance issues, I don't consume cane or beet sugar (in any form), so I am eager to learn how to bake using honey (or maple syrup or date syrup). My research says that when using honey you should reduce the amount of liquid. However… most cookie and cake recipes do NOT have 'liquid' in the first place. So I'm stumped. Do you have any information on how to adapt cookie and cake recipes (using honey in place of sugar) that don't have liquid to replace? Thanks for your help!

Hi DesertRose, thanks for your comments. You are right that most cookie recipes don't contain much liquid. However, we count the oil and eggs as part of the 'liquid' in a recipe. You might use a smaller egg and reduce your butter or oil by just a bit. Cooking with honey will yield a softer product, so a bit less oil shouldn't hurt. Cakes, however, usually do call for milk or buttermilk, so you should be OK there.
May I suggest that you look for a beekeepers association and see if they have more help for you online. My tips came from a booklet I got ove 30 yrs ago from the Minnesota Beekeepers Association. Good luck with your experiments and happy baking! I just love your blog – it’s so bright and chreey – it makes me happy every time I visit! I just left you a Butterfly Award on my site – come on over and check it out.Lyla Thanks Alip! I’ll check out that award. These were great! Thank you for posting this. Many nutritionists argue that honey is a healthier alternative to processed white sugar. It also seems to have its benefits politically, as well. Great recipe! So happy you enjoyed them. I think honey has many fine qualities and I got that straight from Winnie the Pooh.
Creative writer and editor

This post was written for a field service management software company to inform an audience of fire safety professionals of the benefits of the product. For this post I conducted an interview with the company's client to create a success story for the blog.

As a company grows, it becomes more difficult for founders and managers to stay on top of performance management. That’s why it’s important to build a culture of communication, feedback, and goal setting. Lattice CEO and co-founder, Jack Altman, has some tips on how to do just that.

As a startup scales beyond product-market fit, product management becomes very important. But how can founders effectively manage the product and perform all of their other duties? Here are best practices for founders to manage the product and hire product managers.

Did you ever wonder how the Department of Defense protects sensitive data? We sat down with Alertboot founder and CEO, Tim Maliyil, to find out how he makes file encryption software for governments, banks, and law firms — all with remote developers and freelancers.

Sometimes it seems like project managers and developers just can’t get along. Here’s why, and what to do about it.

This is an SEO post written for Fullstack Academy to get aspiring developers excited about a career in IT.

I edited and optimized this article for SEO for Fullstack Academy.

This is a gated whitepaper on the benefits of cloud-hosted ERP services written for a professional services firm.

This article was an SEO-targeted post for Codementor to generate leads.

Ready to hire a software engineer, but not sure if you should hire U.S.-based or overseas developers? Here's what you need to know!

This post offers a detailed guide on creating a software engineer resume and portfolio to boost your freelance career. Examples included!

This is a hiring guide for a niche programming skillset — written for a freelance platform.

Check out these seven best practices to create a successful developer onboarding process.

Ready for programming opportunities to find you? Here’s how to get the attention of technical recruiters.

Non-disclosure agreements can be considered a necessary evil for entrepreneurs. Here is a comprehensive guide to help you navigate the legal considerations of protecting your secret sauce.

What actually is product-market fit? How do you get there? Can it be quantified? This article breaks down the prerequisites, and all of the accompanying jargon.

Overseeing a distributed team of 80 employees, Zapier is an emerging thought leader in remote team management. We learned some best practices for managing remote developers from Zapier CTO, Bryan Helmig.

Hiring a front-end developer means taking the time to understand developer salary expectations, crafting a job description to attract the right talent, designing an interview process to test technical skills, and more. Here’s a guide on everything you need to know to recruit a front-end developer.

This is a post written for Fullstack Academy intended to inform their readers about career options in IT, thereby encouraging them to pursue IT training.

This is an informative post written for Fullstack Academy and their audience of aspiring developers.

This is an SEO post written for Fullstack Academy to build their domain authority and draw prospective students to their site.

This is an SEO post written for Fullstack Academy intended to inform readers of the different types of developers and move them closer towards a purchase decision.

This is an SEO post written for Fullstack Academy intended to inform their audience of the different paths to become a professional developer.

This is an SEO post written for Fullstack Academy intended to inform their potential students.

Written for a certified Apple reseller.

This was an SEO post written for a certified Apple reseller.

This was an SEO post written for a plastics manufacturing company.
This is an SEO post for a local audio visual equipment supplier.
The second annual North America Permaculture Convergence (NAPC) will be held in partnership with the Northern California Permaculture Convergence (NCPC) at the world-renowned Solar Living Institute. Wednesday through Friday, NAPC will facilitate working group meetings alongside multiple tracks of advanced workshops. On Saturday and Sunday, NCPC activities will focus on more introductory workshops, educational programming, and cultural festivities. To take advantage of the fact that Northern California offers some of the best examples of permaculture in North America, there will be a variety of off-site events and tours held before and after the Convergence. Transition Sarasota's Executive Director, Don Hall, will present an interactive workshop on "Effective Collaboration" in the Mediterranean Garden on Sunday from 11am-noon. He will also speak as part of a panel on "Transition: Permaculture Your Town" at the Bioregional Hub on Saturday from 2-3pm.
Check out my other cards for awesome rare Pokemon! The item "Pokemon Card Charizard Holo Rare Base Set Unlimited 4/102 PSA 9 MINT 1999" has been for sale since Thursday, January 04, 2018. This item is in the category "Toys & Hobbies\Collectible Card Games\Pokémon Trading Card Game\Pokémon Individual Cards". The seller is "pokemastercards".
What is a VPN?

IT support firms in LA may characterize a Virtual Private Network, or VPN, as digitally expanding the boundaries of your network. Say you had a team of employees working in “the field.” Provided they had a secure internet connection, they could do that fieldwork from your VPN, reducing security compromise and maintaining operational efficiency.

Reasons and Times of Use

Owing to cloud computing and other decentralization considerations, differing opinions may or may not recommend VPN use for your business. Consultation is wise to help you consider variables otherwise invisible from an interior perspective. MSPs can see your operation and advise you with greater clarity, as they know the answers to questions you’ve yet to realize need asking. In any event, some of the times and reasons to use VPNs include:

For General Security on Network Devices Using Public Wi-Fi

IT support firms in LA will advise VPN use for secure network access on public Wi-Fi networks. Such public networks can have hidden malicious elements in them which may infect your network if you use those connections alone for access. Going through a VPN provides an added layer of security which will reduce instances of network compromise from such threats.

As a Means of Total Security or Anonymity, VPNs Fall Short

VPNs won’t make your devices invisible. Also, say you go to a questionable website, click on a link, and incidentally download a virus. Well, you’re not going to be protected from that if your only defense is a VPN. So, when you’re in public, it’s also wise for you to use additional means of protection. Monitoring and support make sense, as does MFA, or Multi-Factor Authentication. Decentralization as a result of the cloud promotes increasingly remote operations, so it’s also sensible to outline MDM (Mobile Device Management) protocols which extend far beyond the penumbra of a basic VPN. For some companies, a VPN is an additional step that may or may not be worthwhile. However, even small companies have found that, in 2019’s cybercrime-rich atmosphere, every possible layer of protection is necessary, and probably still isn’t enough.

Balanced VPN Utility

IT support in LA through Advanced Networks can help you determine where VPN utility will most effectively serve your business. Additionally, we can advise you on cloud computing solutions, MDM, and MFA options. There’s a lot to protect against, and a lot of options which can allow you to expand the flexibility of operations safely. Contact us for more information.
Not All Chocolate Easter Bunnies Are Created Equal

What do these confectioners all have in common besides chocolate?
- Lindt
- Sees
- Hershey’s
- Godiva
- Gertrude Hawk
- Sanders

Easter! Specifically chocolate Easter bunnies. Dark chocolate, milk chocolate, filled chocolate bunnies and chocolate-covered bunnies. Give me dark chocolate any time of day, and twice when there's a bottle of red wine open on the table.

Those Easter baskets you enjoyed as a kid have stepped up their game these days. For $5.00 my mom could fill baskets with candy, including chocolate, for four kids. Today five bucks wouldn't buy you one of the chocolate bunnies listed above. If I were stuffing baskets this year, it would be with an all-chocolate variety. Even chocolate-covered jelly beans!

So here’s an idea. Call grandma and ask her for her famous fudge recipe. You know, the one she only pulls out at Christmas time. Go find a bunny mold and let your imagination take off.
– The William C. Velasquez Institute (WCVI) recently completed a flash poll of registered voters in Texas Congressional Districts 20 and 29, and the preliminary findings imply strong support for the landmark climate change bill, the American Clean Energy and Security Act of 2009 (ACESA). WCVI, which held Latino Leadership meetings in San Antonio and Los Angeles on April 25th and in Houston on May 21st to discuss this bill, is urging community members to contact Representative Charlie Gonzalez’s and Gene Green’s offices to support the bill. As members of the House Energy and Commerce Committee, they hold important swing votes on the bill, which could be scheduled for a vote as soon as today. Further, WCVI, along with other Latino leaders, has formed Tejanos for a Better Future, a coalition of leaders and organizations in San Antonio. Its goal is to promote climate change mitigation and adaptation from a Latino/Hispanic perspective. “The climate crisis will disproportionately impact Latinos. ACESA, now being discussed in Congress, will create new economic opportunities for our community through green jobs and a new green economy,” said Antonio Gonzalez, WCVI President. Preliminary survey data shows 58% of voters support the ACESA. An overwhelming 87% of voters want to see Texas increase its production and use of renewable energy, and 95% want to see the state become more energy efficient. And finally, 55% of voters believe greenhouse gases can be reduced while creating economic opportunities and jobs at the same time. Added Gonzalez, “The work of Tejanos for a Better Future is very timely with the climate change legislation moving through the US House of Representatives this week. This bill is vital to our planet and to Latinos, and we have high expectations that Congressmen Gonzalez and Green will support a strong bill that protects the environment and our community.” WCVI plans to hold additional Climate Change briefings in Arizona, California and Texas.
For more information, call 210-922-3118 or visit.
TITLE: Is this a proof that recursive definition of functions indeed defines a function? QUESTION [0 upvotes]: Someone asked me how you prove that defining a function recursively actually defines a function, and then I tried to rigorously prove it. Is it right? Let $\mathbb{N}=\{0,1,2,\dots\}$. For any natural number $x$ and any function $g:\mathbb{N} \longrightarrow \mathbb{N}$, there is precisely one function $f:\mathbb{N} \longrightarrow \mathbb{N}$ having the following properties: (1) $f(0)=x$ and (2) $f(n+1)=g(f(n))$ for all $n \in \mathbb{N}$. Proof: Let $N(t)$ denote the initial segment $\{0,1,2,\dots,t\}$ of the natural numbers. We define functions $f^t: N(t) \longrightarrow \mathbb{N}$, for every $t \in \mathbb{N}$, such that: (1) $f^t(0)=x$ and (2) $f^t(n+1)=g(f^t(n))$ for all $n \in N(t-1)$. For $t=0$ we simply say $f^0=\{(0,x)\}$. This has property (1), and vacuously property (2). Now we define the function $f^{t+1}$, given that $f^t$ exists, by setting $f^{t+1}=f^t \cup \{(t+1, g(f^t(t)))\}$. This clearly inherits properties (1) and (2) from $f^t$, so we have that $f^t$ exists for all $t \in \mathbb{N}$, by induction. Now we let $f$ be the union over all $t \in \mathbb{N}$ of the "initial" functions $f^t$. This defines a function because the $f^t$ are all functions, each nested inside the next. It inherits properties (1) and (2) from the $f^t$. Q.E.D. REPLY [1 votes]: What you have so far proves that a function satisfying (1) and (2) exists. To prove that it is unique you have to also prove that if $h$ is any function satisfying (1) and (2) then $f(n)=h(n)$ for all $n \in \mathbb N$. This is most easily proved by induction on $n$.
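The inductive construction in the proof can be mirrored computationally. Here is a minimal sketch (the function name is my own, not from the post): build the finite approximations $f^t$ one value at a time, exactly as the induction step does, and read off $f(n)$.

```python
def recursively_defined(x, g, n):
    """Evaluate the unique f with f(0) = x and f(t+1) = g(f(t))."""
    values = [x]                      # plays the role of f^0 = {(0, x)}
    for t in range(n):                # extend f^t to f^{t+1}
        values.append(g(values[-1]))  # append (t+1, g(f^t(t)))
    return values[n]

# Example: g(k) = k + 2 and f(0) = 1 give the closed form f(n) = 1 + 2n.
assert recursively_defined(1, lambda k: k + 2, 5) == 11
```

The list `values` is exactly the graph of $f^t$ at each stage, so the code also illustrates why uniqueness holds: every entry is forced by the previous one.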
TITLE: Probability function on $\mathbb N$ - no convergence to $1$? QUESTION [2 upvotes]: Consider a box containing one red ball and one black ball. If we draw a black ball, we put it back and add another black ball. If we draw the red ball, the experiment is over. What is the probability $p_n$ that the red ball is drawn in the $n$-th drawing? Show that it's a probability function. My thoughts: In the first drawing, there are only two balls (red, black). So the probability is $$p_1 = \frac{1}{2}$$ In the second drawing, if we didn't draw the red ball yet, the probability would be $$p_2 = \frac{1}{3}\left( 1 - \frac{1}{2} \right)$$ because there are three balls now, and we multiply the probability of drawing the red ball with the counter probability $p_1^c = 1 - p_1$ from the first step. This procedure leads to: \begin{align} p_3 & = \frac{1}{4} - \frac{1}{24} \\[4pt] p_4 & = \frac{1}{5} - \frac{5}{120} \\ &\vdots \\ p_n & = \frac{1}{n+1} - \frac{1}{(n+1)!} \end{align} But this doesn't seem to be a probability function on $\mathbb N$: $$\sum_{n \in \mathbb N} p_n = \sum_{n \in \mathbb N} \frac{1}{n+1} - \frac{1}{(n+1)!} = \sum_{n \in \mathbb N} \frac{n! - 1}{(n+1)!} = \infty$$ Can you help me find the mistake? REPLY [2 votes]: \begin{align} p_n & = \left( 1 - \frac 1 2 \right)\left( 1 - \frac 1 3 \right)\left( 1 - \frac 1 4 \right)\left( 1 - \frac 1 5 \right) \cdots\left( 1 - \frac 1 n \right) \frac 1 {n+1} \\[10pt] & = \left(\frac 1 2 \cdot\frac 2 3 \cdot \frac 3 4 \cdot\frac 4 5 \cdots \frac{n-1} n\right)\cdot \frac 1 {n+1} \\[10pt] & =\frac 1 {n(n+1)} = \frac 1 n - \frac 1 {n+1} \end{align} These add up to $1$ because the sum telescopes: \begin{align} \left(1 - \frac 1 2 \right) + \left(\frac 1 2 - \frac 1 3 \right) + \left( \frac 1 3 - \frac 1 4 \right) + \cdots + \left( \frac 1 n - \frac 1 {n+1} \right) = 1 - \frac 1 {n+1} \to 1\text{ as } n\to\infty. \end{align}
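The reply's closed form can be double-checked with exact rational arithmetic. A small sketch (my own code, not from the post): compute $p_n$ as the product of the conditional probabilities, compare with $\frac{1}{n(n+1)}$, and verify that the partial sums telescope to $1-\frac{1}{N+1}$.

```python
from fractions import Fraction

def p(n):
    # Probability that the red ball first appears on draw n:
    # survive draws 1..n-1 by drawing black, then draw red.
    # On the k-th draw the box holds k+1 balls, exactly one red.
    prob = Fraction(1)
    for k in range(1, n):
        prob *= Fraction(k, k + 1)      # black on draw k
    return prob * Fraction(1, n + 1)    # red on draw n

# Matches the closed form 1/(n(n+1)) from the reply ...
assert all(p(n) == Fraction(1, n * (n + 1)) for n in range(1, 50))
# ... and the partial sums telescope to 1 - 1/(N+1).
assert sum(p(n) for n in range(1, 101)) == 1 - Fraction(1, 101)
```

Using `Fraction` avoids floating-point error, so the telescoping identity is verified exactly rather than approximately.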
ATA does snow and ice accumulation study

The American Transportation Research Institute (ATRI), the research arm of the American Trucking Assns. (ATA), has completed a study on the effects of snow and ice accumulation on the top of vehicles and has called for a multi-stage plan to educate truck operators on the possible dangers. ATA said that chunks of ice and snow are a safety issue as they may strike other vehicles and result in property damage or injury to other motorists. However, significant challenges exist for cleaning the tops of trailers, including the hazards workers face when manually clearing snow and ice, the limited availability and effectiveness of snow removal devices and the lack of available vehicle-based solutions, ATA said. According to ATA, recommendations for short-term action include a public outreach and education campaign targeting operators of all vehicle types and a feasibility study for snow removal devices; a mid-term action is to explore placing snow removal devices at public weigh stations and ports of entry; and the long-term action investigates potential vehicle-based solutions that would impede snow and ice accumulation on vehicles. © 2009 Penton Media Inc.
TITLE: Extending $f:\mathbb{S}^1\to\mathbb{R}^2\setminus\{0\}$ to the disk $\bar{\mathbb{D}}^2$ QUESTION [1 upvotes]: Let $f:\mathbb{S}^1\to\mathbb{R}^2\setminus\{0\}$ be a smooth function. Prove that there exists a continuous extension $\hat{f}:\bar{\mathbb{D}}^2\to\mathbb{R}^2\setminus\{0\} \Leftrightarrow \text{deg}(f)=0$. Furthermore, prove that the extension can be chosen smoothly. I just have a vague hunch: if there is such a $\hat{f}:\bar{\mathbb{D}}^2\to\mathbb{R}^2\setminus\{0\}$, then $\hat{f}$ is homotopic to a constant, since $\bar{\mathbb{D}}^2$ is contractible, so $\text{deg}(\hat{f})=0$, and I'd like to conclude that $\text{deg}(f)=0$. For the opposite direction, if $\text{deg}(f)=0$, that means the image of $f$ is not a closed curve, but a compact curve with both ends unconnected, which is diffeomorphic to a closed interval. So we have an induced smooth function $f:\mathbb{S}^1\to[a, b]$, so we can define $\hat{f}(x):=||x||f\left(\frac{x}{||x||}\right)$ for $x\neq 0$ and $\hat{f}(0):=0$, which is continuous. I don't know if this works, but that was the only thing I could think about. Any tips? Thanks! REPLY [2 votes]: The idea for the first direction basically looks fine. The second part does not really make sense, because the image of $f$ always is a closed curve (also, you have to construct $\hat f$ with values in $\mathbb R^2\setminus\{0\}$). As a hint for the solution, observe that for any space $Y$ a continuous extension of $f:S^1\to Y$ to $\hat f:\bar D^2\to Y$ is equivalent to a homotopy $H:S^1\times [0,1]\to Y$ between $f$ and the constant map to $\hat f(0)\in Y$. The relation simply is that $H(x,t)=\hat f((1-t)x)$.
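To spell out the hinted correspondence in the reply (my own elaboration, assuming $H(\cdot,1)$ is the constant map to some $c\in Y$): a null-homotopy of $f$ yields the extension

```latex
% Given H : S^1 \times [0,1] \to Y with H(x,0) = f(x) and H(x,1) = c,
% define the extension on the closed disk by
\hat f(x) :=
\begin{cases}
  H\!\left(\dfrac{x}{\lVert x\rVert},\; 1-\lVert x\rVert\right), & x \neq 0,\\[6pt]
  c, & x = 0.
\end{cases}
% Continuity at 0 holds because H is uniformly continuous on the
% compact set S^1 x [0,1], so H(., t) -> c uniformly as t -> 1.
% Conversely, H(x,t) := \hat f((1-t)x) is a null-homotopy of f.
```

Taking $Y=\mathbb{R}^2\setminus\{0\}$, the extension misses $0$ precisely because the homotopy does, which is where $\deg(f)=0$ enters.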
Actant 3, 2014, Gouache and casein on board, 48 x 24″ Says the Willamette Week, “Daniela Molnar’s floral studies juxtapose finely detailed realistic passages with flat tatters of color.” An apt description. The Actant series focuses on the fluid movement, as well as the precise structure, of flowers. These works are both color field paintings and botanical studies, exploring how patterns in nature are perceived, named and understood. The works question habits of naming and perception — when is a flower no longer a flower? When does it cease to be a noun and become a verb? The series is also based on an inquiry into my own attitudes, and broader cultural attitudes, concerning beauty and utility.
Livonia Ladywood | Girls Swimming Scores & Schedule
Livonia Ladywood Blazers, 14680 Newburgh Rd, Livonia, MI 48154
2014 Record - Overall - 0-0
\begin{document} \maketitle \begin{abstract} In a recent article, we showed that trigonometric shearlets are able to detect directional step discontinuities along edges of periodic characteristic functions. In this paper, we extend these results to multivariate periodic functions which have jump discontinuities in higher order directional derivatives along edges. In order to prove suitable upper and lower bounds for the shearlet coefficients, we need to generalize the results about localization- and orientation-dependent decay properties of the corresponding inner products of trigonometric shearlets and the underlying periodic functions. \end{abstract} \smallskip { \small {\textbf{Keywords.}} Detection of directional singularities, higher order directional derivatives, trigonometric shearlets, periodic wavelets} \smallskip { \small {\textbf{Mathematics Subject Classification.}} 42C15, 42C40, 65T60} \smallskip \section{Introduction} \label{sec:introduction} The automatic recognition and separation of different image parts is of great importance in many industrial or life science applications. For this reason, one needs to precisely and effectively detect edges in images. One famous approach is the Canny algorithm \cite{canny:detection} which applies two-dimensional edge filters on a smoothed version of the image followed by a non-maximum suppression called hysteresis. It is well-known that the Canny algorithm is equivalent to the task of finding local maxima of a two-dimensional wavelet transform \cite{mallat:detection}. Typically, classical multivariate wavelets are obtained by taking the tensor product of one-dimensional scaling and wavelet functions. Since the support of these functions is aligned with the coordinate axes, they are not optimal for the detection and characterization of singularities in arbitrary directions as they can occur in dimensions higher than one \cite{mallat:buch}. 
Therefore, several multivariate directional systems have been considered in order to overcome this limitation, for example brushlets \cite{meyer:brushlets}, ridgelets \cite{candes:ridgelets}, curvelets \cite{candes:curvelets} or shearlets \cite{kutyniok:book}. A widely used model for multivariate functions which contain singularities along edges is the class of so-called cartoon-like functions \cite{donoho:wedgelets,donoho:sparse}. These are functions of the form $\mathfrak{f}=f_0+f_1\,\chi_T$ where $T\subset \mathbb{R}^2$ and $f_0,f_1$ are smooth functions with compact support. This class was used for optimal sparse approximation with multivariate directional systems such as curvelets or shearlets \cite{candes:curvelets,labate:sparse} and later for the more general classes of parabolic molecules or $\alpha$-molecules \cite{grohs:parabolic_molecules,grohs:alpha, grohs:molecules}. Another application is the detection and characterization of directional discontinuities in cartoon-like functions. In a number of articles \cite{grohs:parabolic_molecules,labate:detection_continuous,kutyniok:edges_compactly} it was shown for different settings that continuous shearlets are well suited to deal with this task if the underlying cartoon-like function is piecewise constant. To get a more realistic model of images with smooth transitions of different image parts, one needs to allow for the functions $f_1$ to be smooth with vanishing values on the boundary curve up to a directional derivative of higher order. 
In \cite{labate:smooth}, the authors showed that for the continuous shearlet coefficients the estimate \begin{equation}\label{eq:main_result_cont} 0<\lim\limits_{a\rightarrow 0^+}a^{-(n/2+3/4)}\abs{\bigl\langle \mathfrak{f},\psi_{a,s_0,\mathbf{p}} \bigr\rangle}<\infty \end{equation} holds true if $\mathbf{p}\in\partial T$ and $s=s_0$ corresponds to the normal direction of $\partial T$ at $\mathbf{p}$ with $n$ denoting the number of vanishing derivatives of $f_1$ in that point. On the other hand, the shearlet coefficients exhibit rapid decay if $\mathbf{p}\notin\partial T$ or if $s = s_0$ does not correspond to the normal direction of $\partial T$ at $\mathbf{p}$. In the case of discrete shearlets, the authors in \cite{labate:detection} proved the existence of suitable upper and lower bounds for the shearlet coefficients if the corresponding cartoon-like function is piecewise constant, e.g. $f_0=0$ and $f_1=1$. In \cite{schober:detection}, a similar result was shown for trigonometric shearlets and the detection of singularities of periodic characteristic functions. Until now, there is no analogous result to \cref{eq:main_result_cont} for the detection of jumps in higher order directional derivatives in the discrete setting. In this paper, we consider the trigonometric shearlets from \cite{schober:detection} which arise from the theory of multivariate periodic wavelets \cite{bergmann:dlVP,langemann:multi_periodic_wavelets,maksimenko:multi_periodic_wavelets} and extend the results to general cartoon-like functions having jumps in higher order directional derivatives on edge curves which hence do not need to be closed as in the case of characteristic functions in \cite{labate:detection, schober:detection}. We provide upper and lower estimates for the shearlet coefficients in the case that the corresponding smooth function $f_1$ vanishes on the boundary curve up to a directional derivative of higher order. 
The structure of the paper is as follows: We introduce trigonometric shearlets in \cref{sec:trigonometric_shearlets} and show a new upper bound for the partial derivatives of these functions in polar coordinates. In \cref{sec:main_results}, we formulate the two main results of this paper given by \cref{thm:hauptresultat} and \cref{thm:hauptresultat2}. The next \cref{sec:proof_of_theorem_3_1} contains the proof of \cref{thm:hauptresultat} based on a decomposition of the underlying cartoon-like function on dyadic squares. \cref{sec:localization_lemmata} includes technical preparations for the proof of \cref{thm:hauptresultat2} in terms of localization lemmata and a new representation of the Fourier transform of polynomial cartoon-like functions in \cref{lem:fourier_transformation_gauss}. With these results in hand, we give the proof of the lower bound for the shearlet coefficients in \cref{sec:proof_of_theorem_3_2}. \section{Trigonometric shearlets} \label{sec:trigonometric_shearlets} We denote two-dimensional vectors by $\mathbf{x}=(x_1,x_2)^{\mathrm{T}}$ with the usual inner product $\mathbf{x}^{\mathrm{T}}\mathbf{y}\mathrel{\mathop:}= x_1\,y_1+x_2\,y_2$ and the induced Euclidean norm written as $\abs{\mathbf{x}}_2\mathrel{\mathop:}=\sqrt{\mathbf{x}^{\mathrm{T}}\mathbf{x}}$. Moreover, we write $\abs{\mathbf{x}}_1\mathrel{\mathop:}=\abs{x_1}+\abs{x_2}$, $\mathbf{x}^\mathbf{y}\mathrel{\mathop:}=x_1^{y_1}\,x_2^{y_2}$ and $\mathbf{x}^\beta\mathrel{\mathop:}=x_1^\beta\,x_2^\beta$ for $\beta\in \mathbb{R}$. For the representation of a vector $\boldsymbol{\xi}\in \mathbb{R}^2$ in polar coordinates, we write $\boldsymbol{\xi}=\rho\,\boldsymbol{\Theta}(\theta)$ with $\rho\mathrel{\mathop:}=\abs{\boldsymbol{\xi}}_2$ and $\boldsymbol{\Theta}(\theta)\mathrel{\mathop:}=(\cos\theta,\sin\theta)^{\mathrm{T}}$.
For $\mathbf{k}\in \mathbb{N}_0^2$ and $n\in \mathbb{N}_0$ with $\abs{\mathbf{k}}_1\leq n$ we define $\mathbf{k}!\mathrel{\mathop:}=k_1!\,k_2!$ and $\binom{n}{\mathbf{k}}\mathrel{\mathop:}=\frac{n!}{\mathbf{k}!(n-\abs{\mathbf{k}}_1)!}$. We denote by $C(\Omega)$ the space of all continuous functions on a domain $\Omega\subseteq \mathbb{R}^2$ with the norm $\norm{f}_{C(\Omega)}\mathrel{\mathop:}=\sup\limits_{\mathbf{x}\in \Omega}\abs{f(\mathbf{x})}$. For $\mathbf{r}=(r_1,r_2)^{\mathrm{T}}\in \mathbb{N}_0^2$ and a sufficiently smooth function $f$ we use the notation $\partial^{\mathbf{r}}f\mathrel{\mathop:}= \frac{\partial^{r_1+r_2}}{\partial x_1^{r_1}\partial x_2^{r_2}}f$ and the space of all $q$-times continuously differentiable compactly supported functions will be denoted by \begin{equation*} C^q_0(\Omega)\mathrel{\mathop:}=\left\lbrace f:\Omega\rightarrow \mathbb{R}:\partial^\mathbf{r}f\in C(\Omega)\;\text{for all}\;\mathbf{r}\in \mathbb{N}_0^2\; \text{with}\;\abs{\mathbf{r}}_1\leq q,\,\abs{\mathrm{supp}\,f}<\infty\right\rbrace \end{equation*} with the norm $\norm{f}_{C_0^q}\mathrel{\mathop:}=\norm{f}_{C_0^q(\Omega)}\mathrel{\mathop:}=\sup\limits_{\abs{\mathbf{r}}_1\leq q}\,\sup\limits_{\mathbf{x}\in \Omega}\abs{\partial^\mathbf{r}f(\mathbf{x})}$.\\ In this section, we define trigonometric shearlets which were already used in \cite{schober:detection}. For convenience, we briefly recap the construction and some properties of these functions. We call a nonnegative and even function $g:\mathbb{R}\rightarrow \mathbb{R}$ admissible if $\mathrm{supp}\,g=\left(-\frac{2}{3},\frac{2}{3}\right)$ and $g$ is monotonically decreasing for $x\in \left( \frac{1}{3},\frac{2}{3} \right)$ and satisfies the property $\sum\limits_{z\in \mathbb{Z}}g(x+z)=1$ for all $x\in \mathbb{R}$.\\ An admissible function $g$ can be chosen arbitrarily smooth \cite{schober:detection}.
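For concreteness, here is a simple (merely continuous, hence only $q=0$) example of an admissible function; this illustration is ours and is not taken from \cite{schober:detection}, where smooth examples are obtained by mollification:

```latex
% A concrete, piecewise linear admissible function:
g(x) \mathrel{\mathop:}= \min\bigl(1,\,\max\bigl(0,\,2-3\abs{x}\bigr)\bigr).
% Then g is nonnegative and even, supp g = (-2/3, 2/3), g = 1 on
% [-1/3, 1/3], and g is monotonically decreasing on (1/3, 2/3).
% The periodization condition holds since for x in [1/3, 2/3]
%   g(x) + g(x-1) = (2 - 3x) + (3x - 1) = 1,
% and at most two integer translates overlap at any point.
```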
We introduce functions $\widetilde{g}:\mathbb{R}\rightarrow \mathbb{R}$ which are given by $\widetilde{g}(x)\mathrel{\mathop:}=g\left( \frac{x}{2} \right)-g(x)$.\\ For $\mathfrak{i}\in\lbrace \mathfrak{h},\mathfrak{v}\rbrace$ we consider bivariate functions $\Psi^{(\mathfrak{i})}:\mathbb{R}^2\rightarrow \mathbb{R}$ defined by \begin{equation}\label{eq:window_function} \Psi^{(\mathfrak{h})}(\mathbf{x})\mathrel{\mathop:}=\widetilde{g}(x_1)\,g(x_2),\qquad\qquad \Psi^{(\mathfrak{v})}(\mathbf{x})\mathrel{\mathop:}=g(x_1)\,\widetilde{g}(x_2). \end{equation} We call them window functions and write $\Psi^{(\mathfrak{i})}\in \mathcal{W}$. We remark that for an admissible function $g\in C^q_0(\mathbb{R})$ we have $\Psi^{(\mathfrak{i})}\in C^q_0(\mathbb{R}^2)$ and denote $\Psi^{(\mathfrak{i})}\in \mathcal{W}^q$. For even $j\in \mathbb{N}_0$ and $\ell\in \mathbb{Z}$ with $\abs{\ell}\leq 2^{j/2}$ we consider the matrices \begin{equation*} \mathbf{N}_{j,\ell}^{(\mathfrak{h})}\mathrel{\mathop:}=\begin{pmatrix} 2^j & \ell\, 2^{j/2}\\ 0 & 2^{j/2} \end{pmatrix},\qquad\qquad\;\mathbf{N}_{j,\ell}^{(\mathfrak{v})}\mathrel{\mathop:}=\begin{pmatrix} 2^{j/2} & 0\\ \ell\, 2^{j/2} & 2^j \end{pmatrix} \end{equation*} and introduce the functions \begin{equation*} \Psi^{(\mathfrak{i})}_{j,\ell}(\mathbf{x})\mathrel{\mathop:}=\Psi^{(\mathfrak{i})}\left(\left(\mathbf{N}_{j,\ell}^{(\mathfrak{i})}\right)^{-\mathrm{T}}\mathbf{x}\right),\qquad \mathbf{x}\in \mathbb{R}^2. 
\end{equation*} For $\mathfrak{i}\in\lbrace \mathfrak{h},\mathfrak{v}\rbrace$, $\Psi^{(\mathfrak{i})}\in \mathcal{W}^q$ and $\mathbf{y}\in \mathcal{P}(\mathbf{N}_{j,\ell}^{(\mathfrak{i})})$ we define trigonometric shearlets by \begin{equation}\label{eq:trigonometric_shearlets} \psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}(\mathbf{x})\mathrel{\mathop:}=2^{-3j/4}\sum_{\mathbf{k}\in \mathbb{Z}^2}\Psi^{(\mathfrak{i})}_{j,\ell}(\mathbf{k})\,\mathrm{e}^{\mathrm{i}\mathbf{k}^{\mathrm{T}}(\mathbf{x}-2\pi\mathbf{y})}, \end{equation} where \begin{align*} &\mathcal{P}\left(\mathbf{N}_{j,\ell}^{(\mathfrak{h})}\right)=\Bigl\lbrace 2^{-j}\,z_1\,:\,z_1=-2^{j-1},\dots,2^{j-1}-1 \Bigr\rbrace\times\Bigl\lbrace 2^{-j/2}\,z_2\,:\,z_2=-2^{j/2-1},\dots,2^{j/2-1}-1\Bigr\rbrace,\\ &\mathcal{P}\left(\mathbf{N}_{j,\ell}^{(\mathfrak{v})}\right)=\Bigl\lbrace 2^{-j/2}\,z_1\,:\,z_1=-2^{j/2-1},\dots,2^{j/2-1}-1 \Bigr\rbrace\times\Bigl\lbrace 2^{-j}\,z_2\,:\,z_2=-2^{j-1},\dots,2^{j-1}-1 \Bigr\rbrace. \end{align*} Let $f,g:\mathbb{R}\rightarrow \mathbb{R}$ be sufficiently smooth functions. The $n$-th order derivative of the composition of $f$ and $g$ is given by Fa\`{a} di Bruno's formula \cite[Section 4.3]{porteous:faadibruno} \begin{align} \frac{\mathrm{d}^n}{\mathrm{d}x^n}f(g(x))&=\sum\limits_{\mathbf{k}}\binom{n}{\mathbf{k}}\,f^{(\abs{\mathbf{k}}_1)}(g(x))\,\prod\limits_{j=1}^n \left( \frac{g^{(j)}(x)}{j!} \right)^{k_j}\notag\\\label{eq:faa_di_bruno2} &=\sum\limits_{k=1}^{n}f^{(k)}(g(x))\,B_{n,k}\Bigl(g'(x),g''(x),\hdots,g^{(n-k+1)}(x)\Bigr), \end{align} where the sum in the first line runs over all vectors $\mathbf{k}=(k_1,\hdots,k_n)^{\mathrm{T}}\in \mathbb{N}_0^n$ with $\sum\limits_{i=1}^n i\cdot k_i=n$ and $B_{n,k}$ are the well-known Bell polynomials.
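For illustration (an added check, not part of the original argument), the instance $n=3$ of Fa\`{a} di Bruno's formula reads:

```latex
\frac{\mathrm{d}^3}{\mathrm{d}x^3}f(g(x))
  = f'(g(x))\,g'''(x) + 3\,f''(g(x))\,g'(x)\,g''(x)
    + f'''(g(x))\,\bigl(g'(x)\bigr)^3,
% corresponding to the Bell polynomials
%   B_{3,1}(g',g'',g''') = g''',
%   B_{3,2}(g',g'')      = 3 g' g'',
%   B_{3,3}(g')          = (g')^3;
% evaluating all arguments at 1 gives 1 + 3 + 1 = 5 = B_3,
% the third Bell number.
```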
It is also known that \begin{equation}\label{eq:bell_zahl} \sum_{k=0}^{n}B_{n,k}(1,1,\hdots,1)=\sum\limits_{k=0}^{n}\sum\limits_{\mathbf{m}}\binom{n}{\mathbf{m}}\prod_{j=1}^{n-k+1}(j!)^{-m_j}=B_n, \end{equation} where $B_n$ is the $n$-th Bell number and the inner sum is running over all $\mathbf{m}=(m_1,\hdots,m_{n-k+1})^{\mathrm{T}}\in \mathbb{N}_0^{n-k+1}$ fulfilling $\sum\limits_{i=1}^{n-k+1}m_i=k$ and $\sum\limits_{i=1}^n i\cdot m_i=n$. In the following lemma we need the angles \begin{equation}\label{eq:theta_jl} \theta_{j,\ell}^{(\mathfrak{h})}\mathrel{\mathop:}=\arctan\left(\ell\,2^{-j/2}\right),\qquad\qquad \theta_{j,\ell}^{(\mathfrak{v})}\mathrel{\mathop:}=\mathrm{arccot}\left(\ell\,2^{-j/2}\right). \end{equation} \begin{lemma}\label{lem:partielle_ableitung_psi:polar} For $\mathfrak{i}\in\lbrace \mathfrak{h},\mathfrak{v}\rbrace$ and $q\in \mathbb{N}_0$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^q$ from \cref{eq:window_function} be given. Then for all $r\leq q$ we have \begin{equation*} \abs{\frac{\partial^r}{\partial\rho^r}\left[\Psi_{j,\ell}^{(\mathfrak{i})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\right]}\leq C_1(r),\qquad\abs{\frac{\partial^r}{\partial\theta^r}\left[\Psi_{j,\ell}^{(\mathfrak{i})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\right]}\leq C_2(r)\,2^{jr/2}. \end{equation*} \end{lemma} \begin{proof} We only show the case $\mathfrak{i}=\mathfrak{h}$.
We use polar coordinates and obtain \begin{equation*} \Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)=\widetilde{g}_{\alpha}\bigl(\rho\cos\theta\bigr)\,g_{\alpha}\Bigl( \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right) \Bigr) \end{equation*} and use the chain rule to get \begin{equation}\label{eq:partielle_ableitung_psi:polar1} \abs{\frac{\partial^s}{\partial\rho^s}\left[\widetilde{g}_{\alpha}\bigl(\rho\cos\theta\bigr)\right]}=\abs{\cos\theta}^s\,\abs{\widetilde{g}_{\alpha}^{(s)}\bigl(\rho\cos\theta\bigr)}\leq \norm{\widetilde{g}_{\alpha}}_{C^s(\mathbb{R})}=C_1(s) \end{equation} for all $s\leq r$. From \cite[Lemma 1]{schober:detection} it follows that \begin{equation*} \mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\subset\left\lbrace(\rho,\theta)\in \mathbb{R}\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]:\frac{1}{3}<\abs{\rho}< 2,\,\theta_{j,\ell-2}^{(\mathfrak{h})}<\theta<\theta_{j,\ell+2}^{(\mathfrak{h})}\right\rbrace, \end{equation*} leading to the estimate \begin{align} \abs{\frac{\partial^s}{\partial\rho^s}\left[g_{\alpha}\Bigl( \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right) \Bigr)\right]}&=\label{eq:partielle_ableitung_psi:polar2} \abs{\cos\theta}^s\Bigl\lvert2^{j/2}\tan\theta-\ell \Bigr\rvert^s \abs{g_{\alpha}^{(s)}\Bigl( \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right) \Bigr)}\notag\\ &\leq 2^s\,\norm{g_{\alpha}}_{C^s(\mathbb{R})}=2^s\,C_2(s). \end{align} Using the Leibniz rule and the triangle inequality, we get with \cref{eq:partielle_ableitung_psi:polar1} and \cref{eq:partielle_ableitung_psi:polar2} \begin{equation*} \abs{\frac{\partial^r}{\partial\rho^r}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\right]}\leq\sum\limits_{s=0}^{r}\binom{r}{s}\,2^{r-s}\,C_1(s)\,C_2(r-s)\leq 3^r\,C_3(r)=C_4(r).
\end{equation*} For the variable $\theta$ we again use the chain rule for $s\leq r$ to obtain \begin{equation*} \abs{\frac{\partial^s}{\partial\theta^s}\left[ \rho\cos\theta\right]}\leq\abs{\rho}, \end{equation*} which, together with \cref{eq:bell_zahl} and the Fa\`{a} di Bruno formula \cref{eq:faa_di_bruno2}, leads to \begin{align} \abs{\frac{\partial^s}{\partial\theta^s}\left[ \widetilde{g}_{\alpha}\bigl(\rho\cos\theta\bigr)\right]}&\leq\sum\limits_{t=1}^{s}\abs{\widetilde{g}_{\alpha}^{(t)}\bigl(\rho\cos\theta\bigr)}\,B_{s,t}\left(\abs{\frac{\partial}{\partial\theta}\left[ \rho\cos\theta\right]},\hdots,\abs{\frac{\partial^{s-t+1}}{\partial\theta^{s-t+1}}\left[ \rho\cos\theta\right]}\right)\notag\\ &\leq\norm{\widetilde{g}_{\alpha}}_{C^s(\mathbb{R})}\sum\limits_{t=1}^{s}\sum\limits_{\mathbf{m}}\binom{s}{\mathbf{m}}\prod_{j=1}^{s-t+1}\left(\frac{\abs{\frac{\partial^j}{\partial\theta^j}\left[ \rho\cos\theta\right]}}{j!}\right)^{m_j}\notag\\ &\leq C_5(s)\sum\limits_{t=1}^{s}\abs{\rho}^{t}\sum\limits_{\mathbf{m}}\binom{s}{\mathbf{m}}\,\prod\limits_{j=1}^{s-t+1}(j!)^{-m_j}\leq C_6(s)\,B_s=C_7(s). \label{eq:partielle_ableitung_psi:polar3} \end{align} For even $s\in \mathbb{N}$ we have \begin{equation*} \abs{\frac{\partial^s}{\partial\theta^s}\left[\rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right)\right]}= \abs{\rho}\abs{\cos\theta}\Bigl\lvert2^{j/2}\tan\theta-\ell \Bigr\rvert \leq 4 \end{equation*} since $\abs{\rho}\leq 2$ and $\bigl\lvert2^{j/2}\tan\theta-\ell \bigr\rvert\leq 2$ on the support, and for odd $s\in \mathbb{N}$ we see \begin{equation*} \abs{\frac{\partial^s}{\partial\theta^s}\left[\rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right)\right]}= \abs{\rho}\abs{\cos\theta}\Bigl\lvert2^{j/2}+\ell\,\tan\theta \Bigr\rvert \leq C_8\,\abs{\rho}\,2^{j/2} \end{equation*} since $\abs{\ell}<2^{j/2}$.
Using the Fa\`{a} di Bruno formula \cref{eq:faa_di_bruno2} we obtain the estimate \begin{align} &\abs{\frac{\partial^s}{\partial\theta^s}\left[g_{\alpha}\Bigl( \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right) \Bigr)\right]}\notag\\ &\quad\leq\norm{g_{\alpha}}_{C^s(\mathbb{R})}\sum\limits_{t=1}^{s}\sum\limits_{\mathbf{m}}\binom{s}{\mathbf{m}}\prod_{j=1}^{s-t+1}\left(\frac{\abs{\frac{\partial^j}{\partial\theta^j}\left[ \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right)\right]}}{j!}\right)^{m_j}\notag\\ &\quad\leq C_9(s)\sum\limits_{t=1}^{s}\abs{\rho}^{t}\,2^{jt/2}\sum\limits_{\mathbf{m}}\binom{s}{\mathbf{m}}\,\prod\limits_{j=1}^{s-t+1}(j!)^{-m_j}\leq C_{10}(s)\,2^{js/2}\,B_s=C_{11}(s)\,2^{js/2}. \label{eq:partielle_ableitung_psi:polar4} \end{align} With the estimates \cref{eq:partielle_ableitung_psi:polar3} and \cref{eq:partielle_ableitung_psi:polar4} we finally conclude \begin{align*} \abs{\frac{\partial^r}{\partial\theta^r}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\right]}&\leq\sum\limits_{s=0}^{r}\binom{r}{s}\,\abs{\frac{\partial^s}{\partial\theta^s}\left[\widetilde{g}_{\alpha}\bigl(\rho\cos\theta\bigr)\right]}\\ &\qquad\times\abs{\frac{\partial^{r-s}}{\partial\theta^{r-s}}\left[g_{\alpha}\Bigl( \rho\cos\theta\left(2^{j/2}\tan\theta-\ell\right) \Bigr)\right]}\\ &\leq\sum\limits_{s=0}^{r}\binom{r}{s}\,C_7(s)\,B_s\,C_{11}(r-s)\,2^{j(r-s)/2}\,B_{r-s}\\ &\leq C_{12}(r)\,B_r^2\,2^{jr/2}=C_{13}(r)\,2^{jr/2}. \end{align*} \end{proof} \section{Main results} \label{sec:main_results} For the main results of this paper, we need the class of so-called cartoon-like functions \cite{candes:curvelets,donoho:wedgelets,labate:sparse}. These are functions which are smooth except for discontinuities along edges.
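Ahead of the formal definitions, a toy illustration may help: the following hedged Python sketch samples a function of the form $f_0+f_1\chi_T$ with $T$ a disk of radius $2$ (the same edge curve as in the numerical example of this section). The concrete choices of $f_0$ and $f_1$ are ours for illustration only, and the compact-support requirement imposed below on $f_0$ is ignored for brevity.

```python
import math

def cartoon(x1, x2):
    """Toy cartoon-like function f0 + f1 * chi_T on (-pi, pi)^2, with T the
    disk of radius 2.  The choices of f0 and f1 are illustrative; the
    compact-support condition on f0 is ignored for brevity."""
    f0 = math.exp(-(x1 ** 2 + x2 ** 2))      # smooth background part
    f1 = 1.0 + 0.25 * math.cos(x1)           # smooth part, nonzero on the edge
    inside = x1 ** 2 + x2 ** 2 <= 4.0        # indicator chi_T of the disk
    return f0 + (f1 if inside else 0.0)

# Smooth away from the circle |x| = 2, but jumping across it by f1:
eps = 1e-9
jump = cartoon(2.0 - eps, 0.0) - cartoon(2.0 + eps, 0.0)
assert abs(jump - (1.0 + 0.25 * math.cos(2.0))) < 1e-6
```

The size of the jump across the edge at a boundary point is exactly the value of $f_1$ there; higher-order variants (jumps only in a directional derivative) are what the main theorems below quantify.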
We call a set $T\subset \left( -\pi,\pi \right)^2$ star-shaped and write $T\in \mathrm{STAR}$ if there exists $\mathbf{x}_0\in T$, called origin, such that for every $\mathbf{x}\in T$ we have \begin{equation*} \bigl\lbrace \lambda \mathbf{x}+(1-\lambda)\mathbf{x}_0\,:\,\lambda\in[0,1] \bigr\rbrace\subset T. \end{equation*} We follow the ideas of \cite[Section 8.2]{donoho:wedgelets} and consider star-shaped sets with smooth boundaries $\partial T$ given by a parametrized curve in polar coordinates. Let $r\in C^2 \left([0,2\pi)\right)$ be a radius function with $\norm{r}_{C^2}\leq\tau$ and $T\in \mathrm{STAR}$ a star-shaped set with origin $\mathbf{x}_0$ whose boundary $\partial T$ can be expressed in polar coordinates by a parametrized curve $\boldsymbol{\gamma}:[0,2\pi)\rightarrow \partial T$ of the form \begin{equation} \label{eq:star1} \boldsymbol{\gamma}(x)=\mathbf{x}_0+r(x)\,(\cos x,\,\sin x)^{\mathrm{T}},\qquad x\in[0,2\pi). \end{equation} The set $\mathrm{STAR}^2(\tau)$ is defined as the set of all $T\in \mathrm{STAR}$ whose boundary can be described as in \cref{eq:star1}. \begin{definition}\label{def:cartoon_funktionen} For $T\in \mathrm{STAR}^2(\tau)$ and $u\in \mathbb{N}_0$ the set of cartoon-like functions is defined by \begin{equation*} \mathcal{E}^u(\tau)\mathrel{\mathop:}=\Bigl\lbrace \mathfrak{f}=f_0+f_1\chi_T\,:\,f_0,f_1\in C_0^u(\mathbb{R}^2)\;\,\text{and}\;\, \mathrm{supp}\,f_0\subset(-\pi,\pi)^2\Bigr\rbrace. \end{equation*} \end{definition} The directional derivative of a continuously differentiable function $f:\Omega\rightarrow \mathbb{R}$ in the direction $\mathbf{v}\in \mathbb{R}^2$ with $\abs{\mathbf{v}}_2=1$ in $\mathbf{x}\in \Omega$ is given by \begin{equation*} \partial_\mathbf{v}f(\mathbf{x})\mathrel{\mathop:}=\partial_\mathbf{v}[f](\mathbf{x})=\mathbf{v}^{\mathrm{T}}\,\mathrm{grad}\,f(\mathbf{x}).
\end{equation*} For $f\in C^q(\Omega)$ and $0\leq m\leq q$, the directional derivatives of $m$-th order exist in every direction $\mathbf{v}\in \mathbb{R}^2$ with $\abs{\mathbf{v}}_2=1$. They are given by $\partial^{0}_{\mathbf{v}}f(\mathbf{x})=f(\mathbf{x})$ and \begin{equation}\label{eq:m_te_richtungsableitung_allgemein} \partial^{m}_{\mathbf{v}}f(\mathbf{x})\mathrel{\mathop:}=\partial^{m}_{\mathbf{v}}[f](\mathbf{x})=\partial_{\mathbf{v}}\left[ \partial^{m-1}_{\mathbf{v}}f\right](\mathbf{x})=\sum_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\,\mathbf{v}^{\mathbf{r}}\partial^{\mathbf{r}}f(\mathbf{x}),\qquad 1\leq m\leq q, \end{equation} where the last equality can be shown by induction. An important tool for the analysis of cartoon-like functions is the decomposition on dyadic squares \cite{candes:curvelets,labate:sparse,schober:detection}. For $j\in \mathbb{N}_0$ let $\mathcal{Q}_j$ be the set of all dyadic squares $Q\subseteq[-\pi,\pi)^2$ with \begin{equation*} Q=\left[2\pi k_1\,2^{-j/2}-\pi,2\pi (k_1+1)\,2^{-j/2}-\pi\right)\times\left[2\pi k_2\,2^{-j/2}-\pi,2\pi (k_2+1)\,2^{-j/2}-\pi\right) \end{equation*} for $k_1,k_2=0,\dots ,2^{j/2}-1$. For smooth functions $\phi\in C_0^\infty\left(\mathbb{R}^2\right)$ with $\mathrm{supp}\,\phi\subset(-\pi,\pi)^2$ and $Q\in \mathcal{Q}_j$ we define \begin{equation}\label{eq:phi_Q} \phi_Q(\mathbf{x})\mathrel{\mathop:}=\phi\left(2^{j/2}(x_1+\pi)-\pi(2k_1-1),2^{j/2}(x_2+\pi)-\pi(2k_2-1)\right) \end{equation} and assume that $\phi$ defines a smooth partition of unity \begin{equation}\label{eq:zerl_der_eins} \sum_{Q\in \mathcal{Q}_j}\phi_Q(\mathbf{x})=1,\qquad \mathbf{x}\in [-\pi,\pi)^2. \end{equation} Let $T\in \mathrm{STAR}^2(\tau)$ be given.
We say $Q\in \mathcal{Q}_j^1\subset \mathcal{Q}_j$ if $\partial T\cap Q\neq\emptyset$ and for the non-intersecting squares we write $\mathcal{Q}_j^0\mathrel{\mathop:}=\mathcal{Q}_j\setminus \mathcal{Q}_j^1$.\\ For Lebesgue measurable sets $A\subseteq \mathbb{R}^2$ and functions $f:A\rightarrow \mathbb{R}$ we define \begin{equation*} \norm{f}_{A,p}\mathrel{\mathop:}=\left( \int_{A}\abs{f(\mathbf{x})}^p\,\mathrm{d}\mathbf{x} \right)^{1/p},\qquad 1\leq p<\infty, \end{equation*} and denote the collection of functions satisfying $\norm{f}_{A,p}<\infty$ by $L_p(A)$. For two-dimensional $2\pi$-periodic functions $f:\mathbb{T}^2\rightarrow \mathbb{R}$ given on the torus $\mathbb{T}^2\mathrel{\mathop:}=\mathbb{R}^2/2\pi\,\mathbb{Z}^2$ the usual inner product of the Hilbert space $L_2(\mathbb{T}^2)$ is given by \begin{equation*} \langle f,g \rangle_2 \mathrel{\mathop:}=\frac{1}{2\pi}\int_{\mathbb{T}^2}f(\mathbf{x})\overline{g(\mathbf{x})}\,\mathrm{d}\mathbf{x},\qquad\qquad f,g\in L_2(\mathbb{T}^2). \end{equation*} \begin{figure}[t]\hspace{-0.2cm} \subfloat{ {\includegraphics[width=.47\textwidth]{smooth_function.pdf}} }\hspace{0.3cm} \subfloat{ {\includegraphics[width=.47\textwidth]{smooth_function_zoom.pdf}} } \caption[]{Left: Cartoon-like function with jump discontinuities in the zeroth (red), first (blue) and second (black) order directional derivative on a circle with radius $2$. Right: Zoom into the green window in the left picture.}\label{fig:numerics1} \end{figure} For all $\mathbf{x}\in\partial T$ let $\mathbf{n}(\mathbf{x})=(\cos(\vartheta_\mathbf{x}),\sin(\vartheta_\mathbf{x}))^{\mathrm{T}},\,\vartheta_\mathbf{x}\in[0,2\pi),$ be the outer normal direction of $\partial T$ at $\mathbf{x}$. For the two main theorems we need cartoon-like functions $\mathfrak{f}\in \mathcal{E}^{u+1}(\tau)$ from \cref{def:cartoon_funktionen} with $u>4$ and their $2\pi$-periodization $\mathfrak{f}^{2\pi}$.
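The dyadic-square classification above can be made concrete with a small sketch: tile $[-\pi,\pi)^2$ by the $2^{j/2}\times 2^{j/2}$ squares of $\mathcal{Q}_j$ and mark those meeting the edge curve, here the circle of radius $2$. The corner-based intersection test is our simplification; it suffices to see that only $O(2^{j/2})$ of the $2^{j}$ squares are edge squares.

```python
import math

def dyadic_squares(j):
    """Index pairs (k1, k2) of the 2^{j/2} x 2^{j/2} squares of Q_j tiling
    [-pi, pi)^2 (j even), together with the side length h = 2*pi*2^{-j/2}."""
    m = 2 ** (j // 2)
    return [(k1, k2) for k1 in range(m) for k2 in range(m)], 2 * math.pi / m

def meets_circle(k1, k2, h, r=2.0):
    """Simplified membership test for Q_j^1 with edge curve |x| = r: the
    corner distances to the origin straddle r.  (A pure corner test can
    miss tangential intersections, which is immaterial here.)"""
    ds = [math.hypot(-math.pi + (k1 + a) * h, -math.pi + (k2 + b) * h)
          for a in (0, 1) for b in (0, 1)]
    return min(ds) <= r <= max(ds)

j = 10
squares, h = dyadic_squares(j)
edge = {q for q in squares if meets_circle(*q, h)}   # approximates Q_j^1
Q0 = [q for q in squares if q not in edge]           # approximates Q_j^0

# |Q_j| = 2^j, while the number of edge squares grows only like 2^{j/2}.
assert len(squares) == 2 ** j
assert 2 ** (j // 2) <= len(edge) <= 4 * 2 ** (j // 2)
```

This $2^{j/2}$ count of edge squares is exactly what enters the sum over $\mathcal{Q}_j^1$ in the first main theorem below.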
For a window function $\Psi^{(\mathfrak{i})}\in \mathcal{W}^{2q}$ with $2q\geq u$ and $\mathfrak{i}\in \left\lbrace \mathfrak{h},\mathfrak{v} \right\rbrace$ from \cref{eq:window_function} let $\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}$ be a trigonometric shearlet from \cref{eq:trigonometric_shearlets}. \begin{theorem} \label{thm:hauptresultat} Let $j\in \mathbb{N}$ be sufficiently large and even and $\ell\in \mathbb{Z}$ with $\abs{\ell}<2^{j/2}$ and $\mathbf{y}\in \mathcal{P}(\mathbf{N}_{j,\ell}^{(\mathfrak{i})})$ be given. For every $Q\in \mathcal{Q}_j^1$ we choose $\mathbf{x}_0\mathrel{\mathop:}=\mathbf{x}_0(Q)\in\partial T \cap Q$. Moreover, let $n\mathrel{\mathop:}=n(Q)\in \mathbb{N}_0$ with $n<u$ such that \begin{align*} \partial_{\boldsymbol{\Theta}(\vartheta)}^m[f_1](\mathbf{x})=0\quad\text{and}\quad \partial_{\boldsymbol{\Theta}(\vartheta)}^n[f_1](\mathbf{x})\neq 0, & \quad\text{if }\, 0\leq m < n,\\ f_1(\mathbf{x})\neq 0, & \quad\text{if }\, n=0, \end{align*} is fulfilled for all $\mathbf{x}\in\partial T \cap Q$ and all $\vartheta\in\left(\theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})}\right)$. Then there exists a constant $C_1>0$ such that \begin{equation*} \abs{\left\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2}\leq C_1\,2^{-3j/4}\sum_{Q\in\mathcal{Q}_j^1}\frac{\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta_{\mathbf{x}_0})}\right)^{-5/2}}{2^{jn}\Bigl(1+2^j\abs{2\pi\mathbf{y}-\mathbf{x}_0}_2^2\Bigr)^{q}}, \end{equation*} where $C_1=C_1(\mathfrak{f},\Psi^{(\mathfrak{i})},T)$ is independent of $j,\ell$ and $\mathbf{y}$. \end{theorem} For $\varepsilon>0$, $T\in \mathrm{STAR}^2(\tau)$ and $\mathbf{y}\in \mathcal{P}(\mathbf{N}_{j,\ell}^{(\mathfrak{i})})$ we define \begin{equation}\label{eq:U_epsilon} U_\varepsilon(\mathbf{y})\mathrel{\mathop:}=U_{\varepsilon,T}(\mathbf{y})\mathrel{\mathop:}=\partial T\cap B_\varepsilon(2\pi\mathbf{y}). 
\end{equation} \begin{theorem} \label{thm:hauptresultat2} Let $0<\varepsilon_0\leq 1$ and a sufficiently large and even $j\in \mathbb{N}$, $\ell\in \mathbb{Z}$ with $\abs{\ell}<2^{j/2}$ and $\mathbf{y}\in \mathcal{P}(\mathbf{N}_{j,\ell}^{(\mathfrak{i})})$ be given. Moreover, we assume the following conditions: \begin{itemize} \item [i)] For $\varepsilon=\varepsilon_0\,2^{-j/2}$ there exists $\mathbf{x}_0\in U_\varepsilon(\mathbf{y})$ with $\vartheta_{\mathbf{x}_0}\in\left(\theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})}\right)$. \item [ii)] For $n\in \mathbb{N}_0$ with $4(n+1)<u$ we have \begin{align} \label{eq:mainthm2} \partial_{\boldsymbol{\Theta}(\vartheta)}^m[f_1](\mathbf{x})=0\quad\text{and}\quad \partial_{\boldsymbol{\Theta}(\vartheta)}^n[f_1](\mathbf{x})\neq 0, & \quad\text{if }\, 0\leq m < n,\\\notag f_1(\mathbf{x})\neq 0, & \quad\text{if }\, n=0, \end{align} for all $\mathbf{x}\in U_\varepsilon(\mathbf{y})$ and all $\vartheta\in\left(\theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})}\right)$. \end{itemize} Then there exists a constant $C_2>0$ such that \begin{equation*} \abs{\left\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2}\geq C_2\,2^{-j(3/4+n)}, \end{equation*} where $C_2=C_2(\mathfrak{f},\Psi^{(\mathfrak{i})},T,\varepsilon_0)$ is independent of $j,\ell$ and $\mathbf{y}$. \end{theorem} \begin{remark} The two main results from \cite{schober:detection} can be found in the latter theorems as special cases. In \cref{thm:hauptresultat} we have the result from \cite[Theorem 1]{schober:detection} if $n=0$ and in \cref{thm:hauptresultat2} we have the result from \cite[Theorem 2]{schober:detection} if $n=0$, $\mathfrak{f}=\chi_T$ from \cref{def:cartoon_funktionen} is a characteristic function which means $f_0=0$ and $f_1=1$. 
It should be mentioned that the lower bound from \cref{thm:hauptresultat2} still holds true if the condition \cref{eq:mainthm2} is reduced to the nonzero-condition only for the $n$-th order directional derivative. \end{remark} \begin{figure}[t]\hspace{-0.2cm} \subfloat{ {\includegraphics[width=.34\textwidth]{circle.pdf}} }\hspace{0.3cm} \subfloat{ {\includegraphics[width=.61\textwidth]{lower_bound_derivative.pdf}} } \caption[]{Left: Schematic visualization of the function from \cref{fig:numerics1} with colored boundary lines where the function has directional jump discontinuities of different orders. Right: Magnitudes of $\mathcal{L}^{(\mathfrak{i}),\mathrm{max}}_{\ell}$ and $\mathcal{L}^{(\mathfrak{i}),\mathrm{min}}_{\ell}$ from \cref{eq:L_l} as functions of the orientation angles $\theta_{10,\ell}^{(\mathfrak{i})}$.}\label{fig:numerics} \end{figure} At the end of this section, we include a small numerical example to visualize the main results. We construct a cartoon-like function $\mathfrak{f}$ with directional jump discontinuities of different orders on a circle with radius $2$ (see \cref{fig:numerics1}). Using the parametrization $(2\cos\theta,\,2\sin\theta)^{\mathrm{T}}$, the parts of different smoothness on the boundary are separated at the angles $\theta\in \left\lbrace \frac{\pi}{3},\pi,\frac{5\pi}{3}\right\rbrace$ (see \cref{fig:numerics}). On the red line of the boundary, the function is discontinuous. On the blue line, the function has a jump discontinuity in the first directional derivative and on the black line in the second directional derivative in every direction except the tangent direction. In this example, we choose $j=10$ and $\varepsilon_0=\frac{1}{2}$, thus $\varepsilon=\frac{1}{2}\,2^{-5}=\frac{1}{64}$, and consider the matrix $\mathbf{M}_{10}=2^{10}\,\mathbf{I}_2$ with the two-dimensional identity matrix $\mathbf{I}_2$.
We collect in the set $\mathcal{Y}_{\ell}^{(\mathfrak{i})}$ all pattern points $\mathbf{y}\in \mathcal{P}(\mathbf{M}_{10})$ for which there exists $\mathbf{x}_0\in U_\varepsilon(\mathbf{y})$ fulfilling $\vartheta_{\mathbf{x}_0}\in\left(\theta_{10,\ell-2}^{(\mathfrak{i})},\theta_{10,\ell+2}^{(\mathfrak{i})}\right)$, where $\ell\in \mathbb{Z}$ with $\abs{\ell}<32$ and $\mathfrak{i}\in \left\lbrace \mathfrak{h},\mathfrak{v} \right\rbrace$. We compute the values \begin{equation}\label{eq:L_l} \mathcal{L}^{(\mathfrak{i}),\mathrm{max}}_{\ell}\mathrel{\mathop:}=\max\limits_{\mathbf{y}\in \mathcal{Y}_{\ell}^{(\mathfrak{i})}}\abs{\left\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2},\quad \mathcal{L}^{(\mathfrak{i}),\mathrm{min}}_{\ell}\mathrel{\mathop:}=\min\limits_{\mathbf{y}\in \mathcal{Y}_{\ell}^{(\mathfrak{i})}}\abs{\left\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2} \end{equation} and present them in the right picture of \cref{fig:numerics} as functions of the orientation angles $\theta_{10,\ell}^{(\mathfrak{i})}$. One can clearly see how the magnitude of the shearlet coefficients $\abs{\left\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2}$ depends on the number of vanishing directional derivatives of the function $\mathfrak{f}^{2\pi}$ on the boundary curve, as anticipated in the main results.
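The aggregation step \cref{eq:L_l} is easy to sketch in code. In the following hedged Python fragment the orientation angles are exactly those of \cref{eq:theta_jl} with $j=10$, while synthetic magnitudes stand in for the true coefficients $\abs{\langle \mathfrak{f}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\rangle_2}$, whose computation requires the full shearlet transform.

```python
import math

j = 10
half = 2 ** (j // 2)   # each cone covers |l| < 2^{j/2} = 32

# Orientation angles theta_{j,l} of both cones, as defined earlier.
theta_h = {l: math.atan(l * 2.0 ** (-j / 2)) for l in range(-half + 1, half)}
theta_v = {l: math.pi / 2 - math.atan(l * 2.0 ** (-j / 2))
           for l in range(-half + 1, half)}   # arccot(t) = pi/2 - arctan(t)

# Synthetic stand-ins for |<f^{2pi}, psi_{j,l,y}>| at a few pattern points y.
coeffs = {(l, y): 1e-6 + abs(math.sin(0.3 * l + y))
          for l in theta_h for y in range(5)}

# The aggregation of the experiment: max/min magnitude per orientation l.
L_max = {l: max(coeffs[(l, y)] for y in range(5)) for l in theta_h}
L_min = {l: min(coeffs[(l, y)] for y in range(5)) for l in theta_h}
```

Plotting `L_max` and `L_min` against `theta_h`/`theta_v` reproduces the layout of the right picture of \cref{fig:numerics}.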
\section{Proof of Theorem 3.1} \label{sec:proof_of_theorem_3_1} The Fourier coefficients of a function $f\in L_1(\mathbb{T}^2)$ are given by \begin{equation*} c_{\mathbf{k}}(f)\mathrel{\mathop:}=(2\pi)^{-2}\int_{\mathbb{T}^2}f(\mathbf{x})\,\mathrm{e}^{-\mathrm{i}\mathbf{k}^{\mathrm{T}}\mathbf{x}}\,\mathrm{d}\mathbf{x},\qquad\mathbf{k}\in \mathbb{Z}^2 \end{equation*} and the Fourier transform of $f\in L_1(\mathbb{R}^2)$ is defined as \begin{equation*} \mathcal{F}[f](\mathbf{x})\mathrel{\mathop:}=\mathcal{F}f(\mathbf{x})\mathrel{\mathop:}=(2\pi)^{-2}\int_{\mathbb{R}^2}f(\boldsymbol{\xi})\,\mathrm{e}^{-\mathrm{i}\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,\mathrm{d}\boldsymbol{\xi},\qquad\mathbf{x}\in \mathbb{R}^2, \end{equation*} and we have the operator \begin{equation*} \mathcal{F}^{-1}[f](\mathbf{x})\mathrel{\mathop:}=\mathcal{F}^{-1}f(\mathbf{x})\mathrel{\mathop:}=\int_{\mathbb{R}^2}f(\boldsymbol{\xi})\,\mathrm{e}^{\mathrm{i}\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,\mathrm{d}\boldsymbol{\xi},\qquad\mathbf{x}\in \mathbb{R}^2. \end{equation*} Let $q\in \mathbb{N}_0$ and $\mathbf{r}\in \mathbb{N}_0^2$ with $\abs{\mathbf{r}}_1\leq q$. If $f\in L_1(\mathbb{R}^2)$ and $(\mathrm{i}\,\circ)^q\,f\in L_1(\mathbb{R}^2)$, then $\mathcal{F}f\in C^q(\mathbb{R}^2)$ and \begin{equation}\label{eq:properties_fourier2} \partial^\mathbf{r}\mathcal{F}f(\boldsymbol{\xi})=\mathcal{F}\left[(\mathrm{i}\,\circ)^\mathbf{r}\,f(\mathbf{x})\right](\boldsymbol{\xi}). 
\end{equation} For $f\in C^q(\mathbb{R}^2)$ and $\partial^\mathbf{r}f\in L_1(\mathbb{R}^2)$ we have \begin{equation}\label{eq:properties_fourier3} \mathcal{F}\left[ \partial^\mathbf{r}f \right](\boldsymbol{\xi})=(\mathrm{i}\,\boldsymbol{\xi})^\mathbf{r}\,\mathcal{F}f(\boldsymbol{\xi}) \end{equation} and from \cref{eq:m_te_richtungsableitung_allgemein} and \cref{eq:properties_fourier3}, it follows that the Fourier transform of the $m$-th order directional derivative of a function along a normalized direction $\mathbf{v}\in \mathbb{R}^2$ can be written as \begin{equation}\label{eq:eigenschaft_fourier_richtungsableitung} \mathcal{F}\left[\partial^m_{\mathbf{v}}f\right](\boldsymbol{\xi})=\sum_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\mathbf{v}^\mathbf{r}\mathcal{F}\left[\partial^{\mathbf{r}}f\right](\boldsymbol{\xi})=\mathrm{i}^m\mathcal{F}f(\boldsymbol{\xi})\sum_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\mathbf{v}^{\mathbf{r}}\boldsymbol{\xi}^\mathbf{r}=\mathrm{i}^m (\mathbf{v}^{\mathrm{T}}\boldsymbol{\xi})^m\mathcal{F}f(\boldsymbol{\xi}). \end{equation} For the remainder of this section we fix the function $\phi\in C_0^\infty\left(\mathbb{R}^2\right)$ with $\mathrm{supp}\,\phi\subset(-\pi,\pi)^2$ and consider its scaled version $\phi_j\mathrel{\mathop:}=\phi\left(2^{j/2}\circ\right)$. Following the approach from \cite[Chapter 6.1]{candes:curvelets}, we assume that for sufficiently large $j\geq j_0$ the edge curve $\partial T$ can be parametrized on the support of $\phi_Q,\,Q\in \mathcal{Q}_j^1,\,$ either as $(x_1,E(x_1))^{\mathrm{T}}$ or $(E(x_2),x_2)^{\mathrm{T}}$. \begin{definition} For $x_2\in\left[-2^{-j/2},2^{-j/2}\right]$ let $(E(x_2),x_2)^{\mathrm{T}}$ be a parametrization of $\partial T$ with $E(0)=E'(0)=0$. For $f\in C^2(\mathbb{R}^2)$ we call \begin{equation*} \mathcal{K}_j(\mathbf{x})=f(\mathbf{x})\,\phi_j(\mathbf{x})\,\chi_{\lbrace \mathbf{x}\,:\, x_1\geq E(x_2)\rbrace}(\mathbf{x}) \end{equation*} a standard edge fragment.
\end{definition} Let $\mathcal{K}_{j,\mathbf{x}_0,\vartheta}$ be an arbitrary edge fragment, which means that the tangent in $\mathbf{x}_0\in\partial T$ is pointing in the direction $\boldsymbol{\Theta}(\vartheta)=(\cos{\vartheta},\sin{\vartheta})^{\mathrm{T}}$ for $\vartheta\in[0,2\pi)$. Then $\mathcal{K}_{j,\mathbf{0},0}=\mathcal{K}_j$ is a standard edge fragment and it was shown in \cite[Corollary 6.7]{candes:curvelets} that the corresponding Fourier transform fulfills \begin{equation}\label{eq:rotate_edge_fragment} \mathcal{F}\mathcal{K}_{j,\mathbf{x}_0,\vartheta}(\boldsymbol{\xi})=\mathrm{e}^{-\mathrm{i}\,\mathbf{x}_0^{\mathrm{T}}\boldsymbol{\xi}}\,\mathcal{F}\mathcal{K}_j(\mathbf{R}_\vartheta^{\mathrm{T}}\,\boldsymbol{\xi}), \end{equation} where $\mathbf{R}_\vartheta$ is a rotation matrix by the angle $\vartheta$. Here we show the following lemma which generalizes \cite[Lemma 6]{schober:detection}. \begin{lemma} \label{lem:norm_FT_Q1} For $\mathfrak{i}\in\lbrace \mathfrak{h},\mathfrak{v}\rbrace$ and $q\in \mathbb{N}_0$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^q$ be given. Moreover, let $\mathcal{K}_{j,\mathbf{0},\vartheta}$ with $\vartheta\in \left( \theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})} \right)$ be an arbitrary edge fragment and $f_j=f\,\phi_j$ a function with $\partial_{\boldsymbol{\Theta}(\theta_{j,\ell}^{(\mathfrak{i})})}^n f_j=\mathcal{K}_{j,\mathbf{0},\vartheta}$ for $n\in \mathbb{N}_0$. Then for $\mathbf{r}\in \mathbb{N}_0^2$ we have \begin{equation*} \norm{\partial^{\mathbf{r}}\left[\mathcal{F}\left[f_j\right]\,\Psi_{j,\ell}^{(\mathfrak{i})}\right]}^2_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}\leq C(n,\mathbf{r})\,2^{-j(3/2+2n+\abs{\mathbf{r}}_1)}\,\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta)}\right)^{-5}. 
\end{equation*} \end{lemma} \begin{proof} We use an idea from \cite[Corollary 6.6]{candes:curvelets} and define $\phi_\mathbf{r}(\mathbf{x})\mathrel{\mathop:}=\mathbf{x}^\mathbf{r}\,\phi(\mathbf{x})$. It follows that $\phi_\mathbf{r}\left(2^{j/2}\circ\right)\in C^\infty_0\left(\mathbb{R}^2\right)$ and $\abs{\mathrm{supp}\,\phi_\mathbf{r}\left(2^{j/2}\circ\right)}\leq 2^{-j}$. We obtain the representation \begin{equation*} \mathbf{x}^\mathbf{r}f_j(\mathbf{x})=2^{-j\abs{\mathbf{r}}_1/2}\,f(\mathbf{x})\,\phi_\mathbf{r}\left(2^{j/2}\,\mathbf{x}\right)=2^{-j\abs{\mathbf{r}}_1/2}f_{j,\mathbf{r}}(\mathbf{x}), \end{equation*} where $f_{j,\mathbf{r}}\mathrel{\mathop:}=f\,\phi_\mathbf{r}\left(2^{j/2}\,\circ\right)$. Note that the function $f_{j,\mathbf{r}}$ also fulfills $\partial_{\boldsymbol{\Theta}(\theta_{j,\ell}^{(\mathfrak{i})})}^n f_{j,\mathbf{r}}=\mathcal{K}_{j,\mathbf{0},\vartheta}$. Using \cref{eq:properties_fourier2} we get \begin{equation}\label{proof:estimate_f_smooth5} \partial^\mathbf{r}\mathcal{F}f_j(\boldsymbol{\xi})=\mathcal{F}\left[( \mathrm{i}\,\circ)^\mathbf{r}f_j(\circ)\right](\boldsymbol{\xi})=\mathrm{i}^\mathbf{r}\,2^{-j\abs{\mathbf{r}}_1/2}\,\mathcal{F}\left[f(\circ)\,\phi_\mathbf{r}\left(2^{j/2}\circ\right)\right](\boldsymbol{\xi}) \end{equation} and with \cref{proof:estimate_f_smooth5} and \cref{eq:eigenschaft_fourier_richtungsableitung} we have \begin{align} \int\limits_{\mathrm{supp}\Psi_{j,\ell}^{(\mathfrak{i})}}\bigl\lvert\partial^{\mathbf{r}}\left[\mathcal{F}f_j\right](\boldsymbol{\xi})\bigr\rvert^2\mathrm{d}\boldsymbol{\xi}&=2^{-j\abs{\mathbf{r}}_1}\int\limits_{\mathrm{supp}\Psi_{j,\ell}^{(\mathfrak{i})}}\bigl\lvert\mathcal{F}f_{j,\mathbf{r}}(\boldsymbol{\xi})\bigr\rvert^2 \mathrm{d}\boldsymbol{\xi}\notag\\\label{proof:int_FT_Q1_0} &=2^{-j\abs{\mathbf{r}}_1}\int\limits_{\mathrm{supp}\Psi_{j,\ell}^{(\mathfrak{i})}}\left\lvert\left( \boldsymbol{\Theta}^{\mathrm{T}}(\theta_{j,\ell}^{(\mathfrak{i})})\,\boldsymbol{\xi}
\right)^{-n}\mathcal{F}[\mathcal{K}_{j,\mathbf{0},\vartheta}](\boldsymbol{\xi})\right\rvert^2 \mathrm{d}\boldsymbol{\xi}. \end{align} In the following, we need a result from \cite[Theorem 6.1]{candes:curvelets} given by \begin{equation}\label{eq:int_FT_radius} \int\limits_{\abs{\rho}\in I_j}\abs{\mathcal{F}\mathcal{K}_j\Bigr(\rho\,\boldsymbol{\Theta}(\theta-\vartheta)\Bigl)}^2 \mathrm{d}\rho\leq C\,2^{-2j}\,\Bigl(1+2^{j/2}\abs{\sin\left(\theta-\vartheta\right)}\Bigr)^{-5}, \end{equation} where $I_j=\left[2^{j-1},2^{j+1}\right]$. In polar coordinates $\boldsymbol{\xi}=\rho\,\boldsymbol{\Theta}(\theta)$ with $\rho=\abs{\boldsymbol{\xi}}_2$ the inner product from \cref{proof:int_FT_Q1_0} fulfills \begin{equation}\label{proof:int_FT_Q1_1} \abs{\boldsymbol{\Theta}^{\mathrm{T}}(\theta_{j,\ell}^{(\mathfrak{i})})\,\boldsymbol{\xi}}=\abs{\rho\,\cos\left( \theta_{j,\ell}^{(\mathfrak{i})}-\theta\right)}\geq C_2\abs{\rho}, \end{equation} if $\theta\in \left( \theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})} \right)$. Additionally, we have \begin{equation}\label{proof:int_FT_Q1_2} \Bigl(\theta_{j,\ell+2}^{(\mathfrak{i})}-\theta_{j,\ell-2}^{(\mathfrak{i})}\Bigr)\leq C_3\,2^{-j/2}. 
\end{equation} We transform the integral from \cref{proof:int_FT_Q1_0} into polar coordinates and use \cref{eq:int_FT_radius}, \cref{proof:int_FT_Q1_1}, \cref{proof:int_FT_Q1_2} and \cite[Lemma 1]{schober:detection} to finally get \begin{align} \int\limits_{\mathrm{supp}\Psi_{j,\ell}^{(\mathfrak{i})}}\bigl\lvert\partial^{\mathbf{r}}\left[\mathcal{F}f_j\right](\boldsymbol{\xi})\bigr\rvert^2 \mathrm{d}\boldsymbol{\xi}&=2^{-j\abs{\mathbf{r}}_1}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{i})}}^{\theta_{j,\ell+2}^{(\mathfrak{i})}}\int\limits_{\frac{2^j}{3}}^{2^{j+1}}\abs{\left( \rho\,\cos\left( \theta_{j,\ell}^{(\mathfrak{i})}-\theta\right) \right)^{-n}\mathcal{F}\mathcal{K}_j\Bigr(\rho\,\boldsymbol{\Theta}(\theta-\vartheta)\Bigl)}^2\rho\, \mathrm{d}\rho\,\mathrm{d}\theta\notag\\ &\leq C_4(n,\mathbf{r})\,2^{-j(1+2n+\abs{\mathbf{r}}_1)}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{i})}}^{\theta_{j,\ell+2}^{(\mathfrak{i})}}\left(1+2^{j/2}\bigl\lvert\sin(\theta-\vartheta)\bigr\rvert\right)^{-5}\mathrm{d}\theta\notag\\\label{proof:int_FT_Q1_3} &\leq C_5(n,\mathbf{r})\,2^{-j(3/2+2n+\abs{\mathbf{r}}_1)}\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta)}\right)^{-5}. \end{align} To obtain the desired estimate, we repeat the steps from the proof of \cite[Lemma 4]{schober:detection} and apply \cref{proof:int_FT_Q1_3}. \end{proof} In the following, we consider the second-order differential operator $L\mathrel{\mathop:}=I+2^j\Delta$ used in \cite{candes:curvelets,labate:sparse,schober:detection}, where $\Delta\mathrel{\mathop:}=\partial^{(2,0)}+\partial^{(0,2)}$ is the Laplace operator. The following lemma is a generalization of \cite[Lemma 8]{schober:detection}. The proof is similar and will be omitted. \begin{lemma}\label{lem:norm_Lq} For $\mathfrak{i}\in\lbrace \mathfrak{h},\mathfrak{v}\rbrace$ and $q\in \mathbb{N}_0$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^{2q}$ be given.
We consider functions $f_{j,0}\mathrel{\mathop:}=f_0\,\phi_j$ with $f_0\in C_0^u\left(\mathbb{R}^2\right)$ for $u\in \mathbb{N}_0$ and for an arbitrary edge fragment $\mathcal{K}_{j,\mathbf{0},\vartheta}$ with $\vartheta\in \left( \theta_{j,\ell-2}^{(\mathfrak{i})},\theta_{j,\ell+2}^{(\mathfrak{i})} \right)$ let $f_{j,1}\mathrel{\mathop:}=f_1\,\phi_j$ such that $\partial_{\boldsymbol{\Theta}(\theta_{j,\ell}^{(\mathfrak{i})})}^n f_{j,1}=\mathcal{K}_{j,\mathbf{0},\vartheta}$ for $n\in \mathbb{N}_0$. Then there exist constants $C_1(u,q),C_2(n,q)>0$ such that \begin{equation*} \norm{L^q\left[ \mathcal{F}[h]\,\Psi_{j,\ell}^{(\mathfrak{i})} \right]}^2_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}\leq \begin{cases} C_1(u,q)\,2^{-j(2u+1)}, & \text{if } h=f_{j,0},\vspace{0.3cm}\\ \dfrac{C_2(n,q)\,2^{-j(3/2+2n)}}{\left(1+2^{j/2}\abs{\sin\left(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta\right)}\right)^{5}}, & \text{if } h=f_{j,1}. \end{cases} \end{equation*} \end{lemma} We are ready to prove the first main theorem of this paper. \begin{proof}[Proof of \cref{thm:hauptresultat}] Let $T\in \mathrm{STAR}^2(\tau)$ and $\mathfrak{f}=f_0+f_1\,\chi_{T}\in\mathcal{E}^{u+1}(\tau)$ be given. Using the smooth functions $\phi_Q\in C_0^{\infty}\left( \mathbb{R}^2 \right),\,Q\in \mathcal{Q}_j,$ from \cref{eq:phi_Q} which form a partition of unity in \cref{eq:zerl_der_eins}, we can decompose the function $\mathfrak{f}$ on dyadic squares as \begin{equation}\label{eq:zerl_f_Q} \mathfrak{f}=\sum_{Q\in \mathcal{Q}_j}\mathfrak{f}_Q=\sum_{Q\in \mathcal{Q}_j^0}\mathfrak{f}_Q+\sum_{Q\in \mathcal{Q}_j^1}\mathfrak{f}_Q, \end{equation} where $\mathfrak{f}_Q\mathrel{\mathop:}=\mathfrak{f}\,\phi_Q$. It was observed in \cite[Section 5.1]{candes:curvelets} that there are constants $C_1,C_2>0$ with \begin{equation}\label{eq:maechtigkeit_Q_j} \abs{\mathcal{Q}_j^0}\leq C_1\,2^j,\qquad\qquad\abs{\mathcal{Q}_j^1}\leq C_2\,2^{j/2}.
\end{equation} We denote by $\mathfrak{f}_Q^{2\pi}$ the $2\pi$-periodization of $\mathfrak{f}_Q$. Since $\mathfrak{f}_Q\in L_1(\mathbb{R}^2)$ the Fourier coefficients of $\mathfrak{f}_Q^{2\pi}$ can be written as \begin{equation*} c_{\mathbf{k}}(\mathfrak{f}_Q^{2\pi})=\mathcal{F}[\mathfrak{f}_Q](\mathbf{k}),\qquad \mathbf{k}\in \mathbb{Z}^2. \end{equation*} From \cref{eq:properties_fourier2} we get $\mathcal{F}[\mathfrak{f}_Q]\in C^{2q}(\mathbb{R}^2)$ because $\mathfrak{f}_Q$ is compactly supported. Moreover, the assumption $\Psi_{j,\ell}^{(\mathfrak{i})}\in \mathcal{W}^{2q}$ with $2q> 4$ implies $\mathcal{F}[\mathfrak{f}_Q]\,\Psi_{j,\ell}^{(\mathfrak{i})}\in C_0^{2q}(\mathbb{R}^2)$. Thus, we can use the Poisson summation formula and Parseval's identity (see \cite{schober:detection}) to obtain \begin{equation}\label{eq:beweis_der_oberen_schranke1} \left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2=2^{-3j/4}\sum_{\mathbf{k}\in \mathbb{Z}^2}\mathcal{F}[\mathfrak{f}_Q](\mathbf{k})\,\Psi^{(\mathfrak{i})}_{j,\ell}(\mathbf{k})\,\mathrm{e}^{2\pi\mathrm{i}\mathbf{k}^{\mathrm{T}}\mathbf{y}}=2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2}S_Q(\mathbf{n}), \end{equation} where \begin{equation*} S_Q(\mathbf{n})\mathrel{\mathop:}=\mathcal{F}^{-1}\left[\mathcal{F}[\mathfrak{f}_Q]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr)=\int\limits_{\mathbb{R}^2}\mathcal{F}[\mathfrak{f}_Q](\boldsymbol{\xi})\,\Psi^{(\mathfrak{i})}_{j,\ell}(\boldsymbol{\xi})\,\mathrm{e}^{2\pi\mathrm{i}\boldsymbol{\xi}^{\mathrm{T}}(\mathbf{y}+\mathbf{n})}\,\mathrm{d}\boldsymbol{\xi}. 
\end{equation*} We follow some of the steps in the proof of \cite[Theorem 3.1]{schober:detection} and use repeated integration by parts and Hölder's inequality to obtain \begin{equation}\label{eq:beweis_der_oberen_schranke3} \bigl\lvert S_Q(\mathbf{n})\bigr\rvert\leq 2^{3j/4}\Bigl(1+2^j\abs{2\pi(\mathbf{y}+\mathbf{n})}_2^2\Bigr)^{-q}\norm{L^q\left[\mathcal{F}[\mathfrak{f}_Q]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{equation} In \cite[Theorem 3.1]{schober:detection} it was also shown that \begin{equation*} \sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\Bigl(1+2^j\abs{2\pi(\mathbf{y}+\mathbf{n}) }_2^2\Bigr)^{-q}\leq C_2(q)\,2^{-jq}, \end{equation*} which leads to \begin{equation}\label{eq:beweis_der_oberen_schranke4} 2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\bigl\lvert S_Q(\mathbf{n})\bigr\rvert\leq C(q)\,2^{-jq}\norm{L^q\left[\mathcal{F}[\mathfrak{f}_Q]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{equation} In the following, we distinguish whether the boundary $\partial T$ intersects the support of $\phi_Q$ or not and consider two different cases. \begin{itemize} \item [i)] Let $Q\in \mathcal{Q}_j^0$: \\ If $Q\cap T=\emptyset$, we have $\mathfrak{f}_Q=0$ and thus \begin{equation*} \abs{\left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}=0. \end{equation*} If $Q\subset T$, we choose $\mathbf{x}_1\in[-\pi,\pi]^2$ with \begin{equation}\label{eq:x1} \abs{2\pi \mathbf{y}-\mathbf{x}_1}_2\geq C>0 \end{equation} and consider the function $\widetilde{\mathfrak{f}}_Q(\mathbf{x})\mathrel{\mathop:}=\mathfrak{f}_Q(\mathbf{x}+\mathbf{x}_1)$.
From \cref{eq:rotate_edge_fragment} it follows that $\mathcal{F}[\mathfrak{f}_Q](\boldsymbol{\xi})=\mathrm{e}^{-\mathrm{i}\,\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}_1}\,\mathcal{F}[\widetilde{\mathfrak{f}}_Q](\boldsymbol{\xi})$, which implies \begin{equation*} S_Q(\mathbf{0})=\mathcal{F}^{-1}\left[\mathcal{F}[\mathfrak{f}_Q]\Psi^{(\mathfrak{i})}_{j,\ell} \right](2\pi\mathbf{y})=\int\limits_{\mathbb{R}^2}\mathcal{F}[\widetilde{\mathfrak{f}}_Q](\boldsymbol{\xi})\,\Psi^{(\mathfrak{i})}_{j,\ell}(\boldsymbol{\xi})\,\mathrm{e}^{\mathrm{i}\,\boldsymbol{\xi}^{\mathrm{T}}(2\pi\mathbf{y}-\mathbf{x}_1)}\,\mathrm{d}\boldsymbol{\xi}. \end{equation*} Since $\mathcal{F}[\widetilde{\mathfrak{f}}_Q]\,\Psi_{j,\ell}^{(\mathfrak{i})}\in C_0^{2q}(\mathbb{R}^2)$, we can repeat the steps which led to \cref{eq:beweis_der_oberen_schranke3} and use \cref{eq:x1} to obtain \begin{equation}\label{eq:beweis_der_oberen_schranke5} \bigl\lvert S_Q(\mathbf{0}) \bigr\rvert\leq2^{-j(q-3/4)}\norm{L^q\left[\mathcal{F}[\widetilde{\mathfrak{f}}_Q]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{equation} Finally, the estimates \cref{eq:beweis_der_oberen_schranke4}, \cref{eq:beweis_der_oberen_schranke5} and the first case of \cref{lem:norm_Lq} plugged into \cref{eq:beweis_der_oberen_schranke1} lead to \begin{equation}\label{eq:beweis_der_oberen_schranke6} \abs{\left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\leq 2^{-3j/4}\Bigl( \abs{S_Q(\mathbf{0})}+\hspace{-0.3cm}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\bigl\lvert S_Q(\mathbf{n})\bigr\rvert \Bigr)\leq C_1(u,q)\,2^{-j(q+u+3/2)}.
\end{equation} \item[ii)] Let $Q\in \mathcal{Q}_j^1$:\\ Then we have \begin{equation*} S_Q(\mathbf{0})=\int\limits_{\mathbb{R}^2}\mathcal{F}[\mathfrak{f}_Q](\boldsymbol{\xi})\,\Psi^{(\mathfrak{i})}_{j,\ell}(\boldsymbol{\xi})\,\mathrm{e}^{\mathrm{i}\,\boldsymbol{\xi}^{\mathrm{T}}(2\pi\mathbf{y}-\mathbf{x}_0)}\,\mathrm{d}\boldsymbol{\xi}, \end{equation*} where $\partial_{\boldsymbol{\Theta}(\theta_{j,\ell}^{(\mathfrak{i})})}^n \mathfrak{f}_Q=\mathcal{K}_{j,\mathbf{0},\vartheta_{\mathbf{x}_0}}$ and $\mathcal{K}_{j,\mathbf{0},\vartheta_{\mathbf{x}_0}}$ is an arbitrary edge fragment. With the same arguments as before we see that \begin{equation}\label{eq:beweis_der_oberen_schranke7} \bigl\lvert S_Q(\mathbf{0}) \bigr\rvert\leq2^{3j/4}\Bigl(1+2^j\abs{2\pi\mathbf{y}-\mathbf{x}_0}_2^2\Bigr)^{-q}\norm{L^q\left[\mathcal{F}[\mathfrak{f}_Q]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{equation} From \cref{eq:beweis_der_oberen_schranke4}, \cref{eq:beweis_der_oberen_schranke7} and the second case of \cref{lem:norm_Lq} we deduce \begin{align}\label{eq:beweis_der_oberen_schranke8} \abs{\left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}&\leq 2^{-3j/4}\Bigl( \abs{S_Q(\mathbf{0})}+\hspace{-0.3cm}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\bigl\lvert S_Q(\mathbf{n})\bigr\rvert \Bigr)\notag\\ &\leq C_3(n,q)\,2^{-3j/4}\frac{\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta_{\mathbf{x}_0})}\right)^{-5/2}}{2^{jn}\Bigl(1+2^j\abs{2\pi\mathbf{y}-\mathbf{x}_0}_2^2\Bigr)^{q}}.
\end{align} With the decomposition in \cref{eq:zerl_f_Q} we can use the estimates in \cref{eq:beweis_der_oberen_schranke6} and \cref{eq:beweis_der_oberen_schranke8} to get \begin{align}\label{eq:beweis_der_oberen_schranke9} \abs{\left\langle \mathfrak{f}^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}&\leq\sum_{Q\in \mathcal{Q}_j^0}\abs{\left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}+\sum_{Q\in \mathcal{Q}_j^1} \abs{\left\langle \mathfrak{f}_Q^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\notag\\ &\leq C_4(u,n,q)\,2^{-3j/4}\sum_{Q\in\mathcal{Q}_j^1}\frac{\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta_{\mathbf{x}_0})}\right)^{-5/2}}{2^{jn}\Bigl(1+2^j\abs{2\pi\mathbf{y}-\mathbf{x}_0}_2^2\Bigr)^{q}}. \end{align} To finish the proof we consider general cartoon-like functions $\mathfrak{f}_0\in \mathcal{E}^{u+1}(\tau)$ of the form $\mathfrak{f}_0=f_0+f\,\chi_T=f_0+\mathfrak{f}$, where $\mathfrak{f}=f\,\chi_T$, $f_0,f\in C_0^{u+1}\left(\mathbb{R}^2\right)$ and $T\in \mathrm{STAR}^2(\tau)$. For the function $f_0$ we define $f_{0,Q}\mathrel{\mathop:}=f_0\,\phi_Q$ and have the representation \begin{equation*} f_0=\sum_{Q\in \mathcal{Q}_j^0}f_{0,Q}, \end{equation*} since $\mathcal{Q}_j^1=\emptyset$. With \cref{eq:maechtigkeit_Q_j} and \cref{eq:beweis_der_oberen_schranke6} we get \begin{equation}\label{eq:f_0} \abs{\left\langle f_0^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\leq\sum_{Q\in \mathcal{Q}_j^0}\abs{\left\langle f_{0,Q}^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\leq C_5(u,q)\,2^{-j(q+u+1/2)}. 
\end{equation} The estimates \cref{eq:beweis_der_oberen_schranke9} and \cref{eq:f_0} lead to \begin{align*} \abs{\left\langle \mathfrak{f}_0^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}&\leq\abs{\left\langle f_0^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}+\abs{\left\langle \mathfrak{f}^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\\ &\leq C_6(u,n,q)\,2^{-3j/4}\sum_{Q\in\mathcal{Q}_j^1}\frac{\left(1+2^{j/2}\abs{\sin(\theta_{j,\ell}^{(\mathfrak{i})}-\vartheta_{\mathbf{x}_0})}\right)^{-5/2}}{2^{jn}\Bigl(1+2^j\abs{2\pi\mathbf{y}-\mathbf{x}_0}_2^2\Bigr)^{q}}, \end{align*} since $n<u$, which finishes the proof. \end{itemize}\vspace{-0.8cm} \end{proof} \section{Localization lemmata} \label{sec:localization_lemmata} Let $T\in \mathrm{STAR^2}(\tau)$ and let $p\mathrel{\mathop:}=p_u:\mathbb{R}^2\rightarrow \mathbb{R}$ be a bivariate polynomial of order $u$. In \cite[Lemma 4.1]{labate:smooth} the authors showed that the Fourier transform of the function $P_u\mathrel{\mathop:}=p\,\chi_T$ can be written as \begin{equation}\label{fourier_transformation_gauss} \mathcal{F}P_u(\boldsymbol{\xi})=(2\pi)^{-2}\int\limits_T p(\mathbf{x})\,\mathrm{e}^{-\mathrm{i}\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,\mathrm{d}\mathbf{x}=\sum\limits_{m=0}^u\frac{C_m}{\abs{\boldsymbol{\xi}}_2^{m+2}}\int\limits_{\partial T}p_m(\mathbf{x},\boldsymbol{\xi})\,\mathrm{e}^{-\mathrm{i}\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,\boldsymbol{\xi}^{\mathrm{T}}\mathbf{n}(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{x}) \end{equation} with constants $C_0,\hdots, C_u>0$ and functions $p_0(\mathbf{x},\boldsymbol{\xi})\mathrel{\mathop:}=p(\mathbf{x})$, $p_m(\mathbf{x},\boldsymbol{\xi})\mathrel{\mathop:}=\frac{\boldsymbol{\xi}^{\mathrm{T}}}{\abs{\boldsymbol{\xi}}_2}\mathrm{grad}_\mathbf{x}[p_{m-1}](\mathbf{x},\boldsymbol{\xi})$ and the outer normal vector of the boundary $\partial T$ given by $\mathbf{n}(\mathbf{x})$.\\ In the following lemma, we derive an explicit
expression for the functions $p_m$, which gives a new representation of \cref{fourier_transformation_gauss} in polar coordinates. \begin{lemma} \label{lem:fourier_transformation_gauss} Let $T\in \mathrm{STAR^2}(\tau)$ and $p=p_u:\mathbb{R}^2\rightarrow \mathbb{R}$ be a bivariate polynomial of order $u$. Then there exist constants $C_0,\hdots, C_u>0$ such that the Fourier transform of the function $P_u=p\,\chi_T$ is of the form \begin{equation}\label{eq:T_fourier} \mathcal{F}P_u\left(\rho\,\boldsymbol{\Theta}(\theta)\right)=\sum\limits_{m=0}^{u}\frac{C_m}{\rho^{m+1}}\int\limits_{\partial T}\,\partial_{\boldsymbol{\Theta}(\theta)}^m[p](\mathbf{x})\,\mathrm{e}^{-\mathrm{i}\,\rho\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\mathbf{x}}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\mathbf{n}(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{x}). \end{equation} \end{lemma} \begin{proof} By induction on $m$ we show that \begin{equation*} p_m(\mathbf{x},\boldsymbol{\xi})=\abs{\boldsymbol{\xi}}_2^{-m}\sum\limits_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\,\boldsymbol{\xi}^{\mathbf{r}}\,\partial^{\mathbf{r}}[p](\mathbf{x}). \end{equation*} For $m=0$ we have $p_0(\mathbf{x},\boldsymbol{\xi})=p(\mathbf{x})$. Suppose that for some $m\in \mathbb{N}$ we have \begin{equation*} p_m(\mathbf{x},\boldsymbol{\xi})=\frac{\boldsymbol{\xi}^{\mathrm{T}}}{\abs{\boldsymbol{\xi}}_2}\mathrm{grad}_\mathbf{x}[p_{m-1}](\mathbf{x},\boldsymbol{\xi})=\abs{\boldsymbol{\xi}}_2^{-m}\sum\limits_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\,\boldsymbol{\xi}^{\mathbf{r}}\,\partial^{\mathbf{r}}p.
\end{equation*} It follows that \begin{align*} \abs{\boldsymbol{\xi}}_2^{m+1}p_{m+1}(\mathbf{x},\boldsymbol{\xi})&=\boldsymbol{\xi}^{\mathrm{T}}\mathrm{grad}_{\mathbf{x}}\left[ \sum\limits_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\boldsymbol{\xi}^{\mathbf{r}}\,\partial^{\mathbf{r}}p \right]\\ &=\sum\limits_{r=0}^m\binom{m}{r}\,\xi_1^{r+1}\,\xi_2^{m-r}\,\partial^{(r+1,0)}\,\partial^{(0,m-r)}p+\sum\limits_{r=0}^m\binom{m}{r}\,\xi_1^{r}\,\xi_2^{m-r+1}\,\partial^{(r,0)}\,\partial^{(0,m-r+1)}p\\ &=\sum\limits_{\abs{\mathbf{r}}_1=m+1}\binom{m+1}{\mathbf{r}}\,\boldsymbol{\xi}^{\mathbf{r}}\,\partial^{\mathbf{r}}p. \end{align*} We use polar coordinates to verify \begin{equation*} \abs{\boldsymbol{\xi}}_2^{-m}\,\boldsymbol{\xi}^{\mathbf{r}}=(\cos\theta)^{r_1}\,(\sin\theta)^{r_2}=(\boldsymbol{\Theta}(\theta))^{\mathbf{r}} \end{equation*} if $\abs{\mathbf{r}}_1=m$ and obtain with \cref{eq:m_te_richtungsableitung_allgemein} \begin{equation}\label{fourier_transformation_gauss2} p_m(\mathbf{x},\rho\,\boldsymbol{\Theta}(\theta))=\sum\limits_{\abs{\mathbf{r}}_1=m}\binom{m}{\mathbf{r}}\,(\boldsymbol{\Theta}(\theta))^{\mathbf{r}}\,\partial^{\mathbf{r}}[p](\mathbf{x})=\partial_{\boldsymbol{\Theta}(\theta)}^m[p](\mathbf{x}). \end{equation} We finish the proof by using polar coordinates for the variable $\boldsymbol{\xi}$ in \cref{fourier_transformation_gauss} and inserting \cref{fourier_transformation_gauss2}. \end{proof} Let $\boldsymbol{\gamma}:[0,2\pi)\rightarrow\partial T$ be a curve from \cref{eq:star1}. For $M\in \mathbb{N}$ let $a_0<a_1<\hdots<a_M$ be a partition of the interval $[0,2\pi)$ such that for each $x\in[a_k,a_{k+1}), k=0,\hdots,M-1,$ the curve $\boldsymbol{\gamma}$ can either be represented as a horizontal curve $(x,f(x))^{\mathrm{T}}$ or a vertical curve $(f(x),x)^{\mathrm{T}}$. If $\mathfrak{i}=\mathfrak{h}$, then $(f(x),x)^{\mathrm{T}}$ with $\abs{f'(x)}\leq 1$ is a vertical curve and $(x,f(x))^{\mathrm{T}}$ with $\abs{f'(x)}<1$ is a horizontal curve. 
Otherwise, if $\mathfrak{i}=\mathfrak{v}$, then $(f(x),x)^{\mathrm{T}}$ with $\abs{f'(x)}<1$ is a vertical curve and $(x,f(x))^{\mathrm{T}}$ with $\abs{f'(x)}\leq 1$ is a horizontal curve.\\ With the parametrization of the curve $\boldsymbol{\gamma}$ we can write the line integral \cref{eq:T_fourier} as \begin{align*} \mathcal{F}P_u\left(\rho\,\boldsymbol{\Theta}(\theta)\right)&=\sum\limits_{m=0}^{u}\frac{C_m}{\rho^{m+1}}\int\limits_{0}^{2\pi}\partial_{\boldsymbol{\Theta}(\theta)}^m[p](\mathbf{x})\,\mathrm{e}^{-\mathrm{i}\,\rho\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\gamma}(x)}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\mathbf{n}(\boldsymbol{\gamma}(x))\abs{\boldsymbol{\gamma}'(x)}_2\mathrm{d}x\\ &=\sum\limits_{m=0}^{u}\frac{C_m}{\rho^{m+1}}\sum_{k=0}^{M-1}\int\limits_{a_k}^{a_{k+1}}p_{\theta}^m(\boldsymbol{\gamma}(x))\,\mathrm{e}^{-\mathrm{i}\,\rho\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\gamma}(x)}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x, \end{align*} where $\boldsymbol{\beta}(x)\mathrel{\mathop:}=\mathbf{n}(\boldsymbol{\gamma}(x))\abs{\boldsymbol{\gamma}'(x)}_2$ and $p_{\theta}^m(\mathbf{x})\mathrel{\mathop:}=\partial_{\boldsymbol{\Theta}(\theta)}^m[p](\mathbf{x})$. 
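For orientation (this verification is not needed in the sequel), we record the simplest instance of the boundary representation above: for $u=0$ and $p\equiv 1$ it is a direct consequence of the divergence theorem; the constant in front of the boundary integral depends on the chosen Fourier normalization.

```latex
% Sketch for u = 0, p \equiv 1, up to the Fourier normalization constant:
% apply the divergence theorem to the vector field
%   V(x) = i\,\xi\,e^{-i \xi^T x}/|\xi|_2^2,
% whose divergence is exactly e^{-i \xi^T x}:
\int_T \mathrm{e}^{-\mathrm{i}\,\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,\mathrm{d}\mathbf{x}
  =\frac{\mathrm{i}}{\abs{\boldsymbol{\xi}}_2^{2}}
   \int_{\partial T}\mathrm{e}^{-\mathrm{i}\,\boldsymbol{\xi}^{\mathrm{T}}\mathbf{x}}\,
   \boldsymbol{\xi}^{\mathrm{T}}\mathbf{n}(\mathbf{x})\,\mathrm{d}\sigma(\mathbf{x}).
% In polar coordinates \xi = \rho\,\Theta(\theta) the prefactor becomes
% \rho^{-1}\,\Theta^T(\theta)\mathbf{n}(x), matching the m = 0 term of the
% boundary representation above.
```

The general case follows the same pattern: each additional power of $\abs{\boldsymbol{\xi}}_2^{-1}$ is paid for by one directional derivative of $p$.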
Passing to polar coordinates, we can write the inverse Fourier transform of $\mathcal{F}[P_u]\,\Psi^{(\mathfrak{i})}_{j,\ell}$ as \begin{align} &\mathcal{F}^{-1}\left[ \mathcal{F}[P_u]\Psi^{(\mathfrak{i})}_{j,\ell} \right](2\pi\mathbf{y})\notag\\ &\qquad=\sum_{m=0}^{u}C_m\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\int\limits_{\partial T}\Psi_{j,\ell}^{(\mathfrak{i})}\left(\rho\,\boldsymbol{\Theta}(\theta)\right)p_{\theta}^m(\mathbf{x})\rho^{-m}\mathrm{e}^{\mathrm{i} \rho \boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi\mathbf{y}-\mathbf{x})}\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\mathbf{n}(\mathbf{x})\mathrm{d}\sigma\,\mathrm{d}\theta\,\mathrm{d}\rho\notag\\\label{eq:F_inv} &\qquad=\sum_{m=0}^{u}C_m\sum_{k=0}^{M-1}\,I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m) \end{align} with \begin{equation*} I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m)\mathrel{\mathop:}=\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\int\limits_{a_k}^{a_{k+1}}\Psi_{j,\ell}^{(\mathfrak{i})}\left(\rho\,\boldsymbol{\Theta}(\theta)\right)p_{\theta}^m(\boldsymbol{\gamma}(x))\rho^{-m}\,\mathrm{e}^{\mathrm{i} \rho \boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi\mathbf{y}-\boldsymbol{\gamma}(x))}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x\,\mathrm{d}\theta\,\mathrm{d}\rho. \end{equation*} From the assumption of \cref{thm:hauptresultat2} it follows that there exists $\varepsilon>0$ such that $U_\varepsilon(\mathbf{y})=\partial T\cap B_\varepsilon(2\pi\mathbf{y})\neq\emptyset$. We choose $k^*=k^*(\mathbf{y})$ with $0\leq k^*\leq M-1$ such that for $x\in[a_{k^*},a_{k^*+1})$ the neighborhood \begin{equation*} U_\varepsilon(\mathbf{y})=\partial T\cap B_\varepsilon(2\pi\mathbf{y}) \end{equation*} from \cref{eq:U_epsilon} can be represented by the curve $\boldsymbol{\gamma}(x)$ (see \cref{fig:skizze_rand}).
\begin{figure}[t] \subfloat{ {\includegraphics[width=.49\textwidth]{rand_zoom.pdf}} }\hspace{-0.5cm} \subfloat{ {\includegraphics[width=.503\textwidth]{skizze_rand.pdf}} } \caption[]{Left: Star-like set $T\in\mathrm{STAR}^2$ (red). Right: Zoom into the small window of the left picture, showing the neighborhood $B_\varepsilon(\mathbf{y})$ around $\mathbf{y}\in \mathcal{P}(\mathbf{N}_{j,\ell}^{(\mathfrak{i})})$ and $U_\varepsilon(\mathbf{y})$ on the boundary $\partial T$ with the interval $[a_{k^*},a_{k^*+1})$.}\label{fig:skizze_rand} \end{figure} The following lemma, called the localization lemma, is important for the proof of \cref{thm:hauptresultat}. We adapt the main ideas of \cite[Lemma 4.1]{labate:detection_continuous}, where a similar statement was shown for cone-adapted continuous shearlets. \begin{lemma} \label{lem:lokalisierungslemma} For $\mathfrak{i}\in \left\lbrace \mathfrak{h},\mathfrak{v}\right\rbrace$ and $q\in \mathbb{N}$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^{2q}$ be given. Then there exists a constant $C(m,q,p,\varepsilon_0)>0$ such that for all $k\neq k^*$ we have \begin{equation*} \bigl\lvert I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m)\bigr\rvert\leq C(m,q,p,\varepsilon_0)\,2^{-j(q+m-1/2)}. \end{equation*} \end{lemma} \begin{proof} We provide the proof only for $\mathfrak{i}=\mathfrak{h}$ and use the notation $I_k\mathrel{\mathop:}=I_k^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)$.
From \cite[Lemma 1]{schober:detection} we have \begin{equation*} \mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\subset\left\lbrace(\rho,\theta)\in \mathbb{R}\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]:\frac{1}{3}<\abs{\rho}< 2,\,\theta_{j,\ell-2}^{(\mathfrak{h})}<\theta<\theta_{j,\ell+2}^{(\mathfrak{h})}\right\rbrace \end{equation*} and the substitution $\rho=2^j\,\rho'$ leads to \begin{equation*} I_k=2^{-j(m-1)}\int\limits_{\frac{1}{3}}^{2} \int\limits_{\theta_{j,\ell-2}^{(\mathfrak{h})}}^{\theta_{j,\ell+2}^{(\mathfrak{h})}}\int\limits_{a_k}^{a_{k+1}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,p_{\theta}^m(\boldsymbol{\gamma}(x))\,\rho^{-m}\,\mathrm{e}^{\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi \mathbf{y}-\boldsymbol{\gamma}(x))}\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\boldsymbol{\beta}(x)\,\mathrm{d}x\,\mathrm{d}\theta\, \mathrm{d}\rho. \end{equation*} We consider the sets \begin{equation*} M_1\mathrel{\mathop:}=M_1(K)\mathrel{\mathop:}=\left\lbrace\theta\in \left(\theta_{j,\ell-2}^{(\mathfrak{h})},\theta_{j,\ell+2}^{(\mathfrak{h})}\right):\frac{\bigl\lvert\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi\mathbf{y}-\boldsymbol{\gamma}(x))\bigr\rvert}{\abs{2\pi \mathbf{y}-\boldsymbol{\gamma}(x)}_2}\geq K\right\rbrace \end{equation*} and \begin{equation*} M_2\mathrel{\mathop:}=\left(\theta_{j,\ell-2}^{(\mathfrak{h})},\theta_{j,\ell+2}^{(\mathfrak{h})}\right)\setminus M_1, \end{equation*} where $K=K(\varepsilon_0)>0$ is chosen such that $M_1=\left(\theta_{j,\ell-2}^{(\mathfrak{h})},\theta_{j,\ell+2}^{(\mathfrak{h})}\right)$ for all $x\in[a_k,a_{k+1}]$ with $\abs{2\pi \mathbf{y}-\boldsymbol{\gamma}(x)}_2<c(\varepsilon_0)$. 
\\ We can use these sets to split the integral into $I_k=I_{k,1}+I_{k,2}$, where \begin{align*} I_{k,i}&\mathrel{\mathop:}=2^{-j(m-1)}\int\limits_{\frac{1}{3}}^{2} \int\limits_{M_i}\int\limits_{a_k}^{a_{k+1}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,p_{\theta}^m(\boldsymbol{\gamma}(x))\,\rho^{-m}\,\mathrm{e}^{\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi \mathbf{y}-\boldsymbol{\gamma}(x))}\\ &\qquad\times\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x\,\mathrm{d}\theta\, \mathrm{d}\rho \end{align*} for $i\in \left\lbrace 1,2 \right\rbrace$ and investigate these integrals separately. \begin{itemize} \item[i)] By Fubini's theorem, we can change the order of integration in $I_{k,1}$ to obtain \begin{equation*} I_{k,1}=2^{-j(m-1)} \int\limits_{M_1}\int\limits_{a_k}^{a_{k+1}}J(x,\theta)\,p_{\theta}^m(\boldsymbol{\gamma}(x))\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x\,\mathrm{d}\theta \end{equation*} with \begin{equation*} J(x,\theta)\mathrel{\mathop:}=\int\limits_{\frac{1}{3}}^{2}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-m}\,\mathrm{e}^{\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi \mathbf{y}-\boldsymbol{\gamma}(x))}\,\mathrm{d}\rho,\quad x\in[a_k,a_{k+1}),\quad\theta\in M_1. \end{equation*} For $k=0,\hdots,M-1$ with $k\neq k^*$ and $x\in[a_k,a_{k+1})$ we have $\boldsymbol{\gamma}(x)\in U_{\varepsilon}^{\mathrm{c}}(\mathbf{y})$ or equivalently \begin{equation}\label{eq:lokalisierungslemma0} \abs{2\pi \mathbf{y}-\boldsymbol{\gamma}(x)}_2\geq\varepsilon=\varepsilon_0\,2^{-j/2}. 
\end{equation} With \begin{equation*} \abs{\frac{\partial^s}{\partial\rho^s}\left[\rho^{-m}\right]}=\frac{(m+s-1)!}{(m-1)!}\abs{\rho}^{-(m+s)}, \end{equation*} the Leibniz rule and \cref{lem:partielle_ableitung_psi:polar} we obtain \begin{align} \abs{\frac{\partial^{2q}}{\partial\rho^{2q}}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-m}\right]}&\leq\sum_{s=0}^{2q}\binom{2q}{s}\abs{\frac{\partial^s}{\partial\rho^s}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\right]}\abs{\frac{\partial^{2q-s}}{\partial\rho^{2q-s}}\left[\rho^{-m}\right]}\notag\\ \label{eq:lokalisierungslemma1} &\leq C_2(q,m). \end{align} Since $\theta\in M_1$, we can use \cref{eq:lokalisierungslemma0} and \cref{eq:lokalisierungslemma1} for the integral $J(x,\theta)$ and $2q$-times integration by parts with respect to the variable $\rho$ to obtain \begin{align*} \bigl\lvert J(x,\theta)\bigr\rvert&\leq\Bigl\lvert2^j\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi \mathbf{y}-\boldsymbol{\gamma}(x))\Bigr\rvert^{-2q}\int\limits_{\frac{1}{3}}^{2}\abs{\frac{\partial^{2q}}{\partial\rho^{2q}}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-m}\right]}\,\mathrm{d}\rho\\ &\leq C_3(q,m,\varepsilon_0)\,2^{-jq}. \end{align*} The estimate \cref{proof:int_FT_Q1_2} implies $\abs{M_1}\leq C\,2^{-j/2}$ and we can bound the integral $I_{k,1}$ from above by \begin{align*} \bigl\lvert I_{k,1}\bigr\rvert&\leq2^{-j(m-1)} \int\limits_{M_1}\int\limits_{a_k}^{a_{k+1}}\bigl\lvert J(x,\theta)\bigr\rvert\abs{p_{\theta}^m(\boldsymbol{\gamma}(x))\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)}\,\mathrm{d}x\,\mathrm{d}\theta\\ &\leq C_4(q,m,p,\varepsilon_0)\,2^{-j(q+m-1/2)}. \end{align*} \item[ii)] For the integral $I_{k,2}$ we follow a similar approach, but this time with respect to the variable $\theta$.
For $x\in[a_k,a_{k+1}]$ with $\abs{2\pi \mathbf{y}-\boldsymbol{\gamma}(x)}_2<c(\varepsilon_0)$, by the choice of $K$ in $M_1$, we have that $M_2=\emptyset$ and thus $I_{k,2}=0$. If on the other hand $M_2\neq\emptyset$, we have $\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\boldsymbol{\Theta}'(\theta)=0$ for $\theta\in M_2$ and the inequality \begin{equation}\label{eq:lokalisierungslemma2} \Bigl\lvert(2\pi\mathbf{y}-\boldsymbol{\gamma}(x))^{\mathrm{T}}\boldsymbol{\Theta}'(\theta)\Bigr\rvert\geq c(\varepsilon_0) \end{equation} is fulfilled. We can write the integral $I_{k,2}$ as \begin{equation*} I_{k,2}=2^{-j(m-1)} \int\limits_{\frac{1}{3}}^{2}\int\limits_{a_k}^{a_{k+1}}K(x,\rho)\,\rho^{-m}\,\mathrm{d}x\,\mathrm{d}\rho, \end{equation*} where \begin{equation*} K(x,\rho)\mathrel{\mathop:}=\int\limits_{M_2}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,p_{\theta}^m(\boldsymbol{\gamma}(x))\,\mathrm{e}^{\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi \mathbf{y}-\boldsymbol{\gamma}(x))}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}\theta. \end{equation*} The curve \cref{eq:star1} is of the form $\boldsymbol{\gamma}(x)=\mathbf{x}_0+r(x)(\cos x,\,\sin x)^{\mathrm{T}}$. Thus, we have \begin{align*} \boldsymbol{\beta}(x)&=\mathbf{n}(\boldsymbol{\gamma}(x))\abs{\boldsymbol{\gamma}'(x)}_2\\ &=\left( r(x)\cos x+r'(x)\sin x,\,r(x)\sin x-r'(x)\cos x\right)^{\mathrm{T}}\sqrt{r(x)^2+r'(x)^2}. \end{align*} From this equality we obtain \begin{equation*} \boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)=\left( r(x)\cos\left( \theta-x \right)-r'(x)\sin\left( \theta-x \right) \right)\sqrt{r(x)^2+r'(x)^2} \end{equation*} leading to \begin{equation*} \abs{\frac{\partial^s}{\partial\theta^s}\left[ \boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x) \right]}\leq C_5.
\end{equation*} With the same ideas which led to \cref{eq:lokalisierungslemma1} we can estimate \begin{align} \abs{\frac{\partial^{2q}}{\partial\theta^{2q}}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\boldsymbol{\Theta}(\theta) \right)\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\boldsymbol{\beta}(x)\right]}&\leq\sum_{s=0}^{2q}\binom{2q}{s}C_6(s,m,p)\,2^{js/2}\notag\\\label{eq:lokalisierungslemma3} &\leq C_7(q,m,p)\,2^{jq}. \end{align} Finally, we perform $2q$-times integration by parts with respect to the variable $\theta$ and use \cref{eq:lokalisierungslemma2} and \cref{eq:lokalisierungslemma3} to obtain the estimate \begin{align*} &\bigl\lvert K(x,\rho)\bigr\rvert \\ &\qquad\leq \int\limits_{M_2}\Bigl\lvert2^j\rho\,(2\pi\mathbf{y}-\boldsymbol{\gamma}(x))^{\mathrm{T}}\boldsymbol{\Theta}'(\theta)\Bigl\lvert ^{-2q}\abs{\frac{\partial^{2q}}{\partial\theta^{2q}}\left[\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\boldsymbol{\Theta}(\theta) \right)p_{\theta}^m(\boldsymbol{\gamma}(x))\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\boldsymbol{\beta}(x)\right]}\mathrm{d}\theta\\ &\qquad\leq C_8(q,m,p,\varepsilon_0)\,2^{-j(q+1/2)}. \end{align*} Similar to the estimate for $I_{k,1}$, we get \begin{equation*} \bigl\lvert I_{k,2}\bigr\rvert\leq2^{-j(m-1)}\int\limits_{\frac{1}{3}}^{2} \int\limits_{a_k}^{a_{k+1}}\bigl\lvert K(x,\rho)\bigr\rvert \abs{\rho^{-m}}\,\mathrm{d}x\,\mathrm{d}\rho\leq C_9(q,m,p,\varepsilon_0)\,2^{-j(q+m-1/2)} \end{equation*} and the proof is finished. \end{itemize} \vspace{-0.7cm} \end{proof} The set $\mathcal{M}^{(\mathfrak{h})}\subset\lbrace 0,\hdots,M-1\rbrace$ contains all indices such that for $x\in[a_k,a_{k+1})$ with $k\in\mathcal{M}^{(\mathfrak{h})}$ the curve $\boldsymbol{\gamma}(x)$ is horizontal and $\mathcal{M}^{(\mathfrak{v})}\subset\lbrace 0,\hdots,M-1\rbrace$ includes all indices such that for $x\in[a_k,a_{k+1})$ with $k\in\mathcal{M}^{(\mathfrak{v})}$ the curve $\boldsymbol{\gamma}(x)$ is vertical.
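Both estimates in the proof above rely on the same elementary non-stationary phase argument; for the reader's convenience we record it as a sketch, for a generic amplitude $g\in C_0^{2q}(\mathbb{R})$ and a phase parameter $\lambda\neq 0$ (the symbols $g$ and $\lambda$ are placeholders, not notation used elsewhere in this paper).

```latex
% For g \in C_0^{2q}(\mathbb{R}) and \lambda \neq 0, integrating by parts
% 2q times (all boundary terms vanish by compact support) gives
\int_{\mathbb{R}} g(\rho)\,\mathrm{e}^{\mathrm{i}\lambda\rho}\,\mathrm{d}\rho
  =\frac{1}{(\mathrm{i}\lambda)^{2q}}
   \int_{\mathbb{R}} g^{(2q)}(\rho)\,\mathrm{e}^{\mathrm{i}\lambda\rho}\,\mathrm{d}\rho,
\qquad\text{hence}\qquad
\abs{\int_{\mathbb{R}} g(\rho)\,\mathrm{e}^{\mathrm{i}\lambda\rho}\,\mathrm{d}\rho}
  \leq \abs{\lambda}^{-2q}\,\bigl\lVert g^{(2q)}\bigr\rVert_{L_1(\mathbb{R})}.
% In the estimate for J(x,\theta) the role of \lambda is played by
% 2^j\,\Theta^T(\theta)(2\pi y - \gamma(x)), which is of size 2^{j/2}
% away from the point of interest, producing the 2^{-jq} decay.
```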
Obviously, we have $\mathcal{M}^{(\mathfrak{h})}\cup\mathcal{M}^{(\mathfrak{v})}=\lbrace 0,\hdots,M-1\rbrace$. A similar version of the following lemma was proven in \cite[Lemma 5.10]{schober:detection}, based on the ideas from \cite[Section 3.1]{labate:detection}. We omit the proof here since it requires only a slight adjustment of the proof of \cite[Lemma 5.10]{schober:detection}. \begin{lemma}\label{lem:orientation_lemma} For $\mathfrak{i}\in\lbrace\mathfrak{h},\mathfrak{v}\rbrace$ and $q\in \mathbb{N}$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^q$ be a window function. Then for any $N\in \mathbb{N}$ there exists a constant $C(m,N,p)>0$ such that for all $k\in \mathcal{M}^{(\mathfrak{i})}$ we have \begin{equation*} \bigl\lvert I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m)\bigr\rvert\leq C(m,N,p)\,2^{-j(N+m-1/2)}. \end{equation*} \end{lemma} In the case of continuous shearlets, the following lemma was already established in \cite[Lemma 4.3]{labate:smooth}. \begin{lemma}\label{lem:P_L} Let $\mathfrak{f}=f\,\chi_T\in \mathcal{E}^{u+1}(\tau)$ and $T_uf(\mathbf{x};\,2\pi\mathbf{y})$ be the bivariate Taylor approximation of $f$ with order $u$ around the point $2\pi\mathbf{y}$ and let $P_{u,f,\mathbf{y}}(\mathbf{x})\mathrel{\mathop:}=T_uf(\mathbf{x};\,2\pi\mathbf{y})\,\chi_T(\mathbf{x})$. Moreover, for $\mathfrak{i}\in\lbrace\mathfrak{h},\mathfrak{v}\rbrace$ and $q\in \mathbb{N}$ with $2q\geq u$ let $\Psi^{(\mathfrak{i})}\in \mathcal{W}^{2q}$ be a window function. Then there is a constant $C(\mathfrak{f},q)>0$ such that \begin{equation*} \abs{\left\langle \mathfrak{f}^{2\pi}-P_{u,f,\mathbf{y}}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})} \right\rangle_2}\leq C(\mathfrak{f},q)\,2^{-j(u-1)/4}. \end{equation*} \end{lemma} \begin{proof} Again, we present the proof only for $\mathfrak{i}=\mathfrak{h}$.
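The first estimate in the proof below uses the standard Taylor remainder bound; as a sketch (with a constant depending on the derivatives of $f$ up to order $u+1$):

```latex
% Taylor remainder for f \in C_0^{u+1}(\mathbb{R}^2): for all x with
% |x - 2\pi y|_2 \leq \delta,
\bigl\lvert f(\mathbf{x})-T_uf(\mathbf{x};\,2\pi\mathbf{y})\bigr\rvert
  \leq C(f,u)\,\abs{\mathbf{x}-2\pi\mathbf{y}}_2^{u+1}
  \leq C(f,u)\,\delta^{u+1},
% so the choice \delta = 2^{-j/4} yields the bound C\,2^{-j(u+1)/4}
% used for the near-field integral.
```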
With $\delta=2^{-j/4}$ we get \begin{align*} \abs{\left\langle \mathfrak{f}^{2\pi}-P_{u,f,\mathbf{y}}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{h})} \right\rangle_2}&\leq\int\limits_{\mathbb{T}^2}\abs{\psi^{(\mathfrak{h})}_{j,\ell,\mathbf{y}}(\mathbf{x})}\,\chi_{T}(\mathbf{x})\,\bigl\lvert f(\mathbf{x})-P_{u,f,\mathbf{y}}(\mathbf{x})\bigr\rvert\mathrm{d}\mathbf{x}\\ &=\Biggl(\;\int\limits_{B_{\delta}(2\pi \mathbf{y})}+\int\limits_{B^{\mathrm{c}}_{\delta}(2\pi \mathbf{y})}\Biggr)\abs{\psi^{(\mathfrak{h})}_{j,\ell,\mathbf{y}}(\mathbf{x})}\,\chi_{T}(\mathbf{x})\,\bigl\lvert f(\mathbf{x})-P_{u,f,\mathbf{y}}(\mathbf{x})\bigr\rvert\mathrm{d}\mathbf{x} \end{align*} and write the last line as $\mathcal{I}_1+\mathcal{I}_2$. The approximation property of Taylor polynomials of order $u$ leads to \begin{equation*} \bigl\lvert f(\mathbf{x})-P_{u,f,\mathbf{y}}(\mathbf{x})\bigr\rvert\leq C\,2^{-j(u+1)/4} \end{equation*} for $\mathbf{x}\in B_{\delta}(2\pi \mathbf{y})$. The result in \cite[Lemma 9]{schober:detection} implies $\abs{\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{h})}(\mathbf{x})}\leq C(q)\,2^{3j/4}$. Thus, we can bound $\mathcal{I}_1$ by \begin{equation*} \abs{\mathcal{I}_1}\leq C(q)\,2^{3j/4}\int\limits_{ B_{\delta}(2\pi \mathbf{y})\cap T}\bigl\lvert f(\mathbf{x})-P_{u,f,\mathbf{y}}(\mathbf{x})\bigr\rvert\mathrm{d}\mathbf{x}\leq C(q)\, 2^{-j\left((u+2)/4-3/4\right)}=C(q)\,2^{-j(u-1)/4}. \end{equation*} We again use \cite[Lemma 9]{schober:detection} but this time for the decay term to arrive at \begin{align*} \abs{\mathcal{I}_2}&\leq C(q)\,2^{-j(q-3/4)}\int_{0}^{2\pi}\int_{2^{-\frac{j}{4}}}^{\infty}\rho^{1-2q}\,\mathrm{d}\rho\,\mathrm{d}\theta\\ &\leq C_2(q)\,2^{-j(q-3/4)}\,2^{-j(1-q)/2}=C_3(q)\,2^{-j(2q-1)/4} \end{align*} and the proof is finished since $2q\geq u$. \end{proof} {\section{Proof of Theorem 3.2} \label{sec:proof_of_theorem_3_2} Let $T\in \mathrm{STAR}^2(\tau)$ and $\mathfrak{f}\mathrel{\mathop:}=f\,\chi_{T}\in \mathcal{E}^{u+1}(\tau)$ be given.
Moreover, let $p\mathrel{\mathop:}=p_u\mathrel{\mathop:}=T_uf(\mathbf{x};\,2\pi\mathbf{y})$ be the bivariate Taylor polynomial of $f$ with order $u>4$ around the point $2\pi\mathbf{y}$. We consider the function $P_u=p\,\chi_T$ and denote its $2\pi$-periodization by $P_u^{2\pi}$.\\ In the first part of the proof we show \begin{equation}\label{proof:hauptresultat2_0} \abs{\left\langle P_u^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}} \right\rangle_2}\geq C(u,n,q,\varepsilon_0,T)\,2^{-j(3/4+n)}. \end{equation} Similar to \cref{eq:beweis_der_oberen_schranke1} in the proof of \cref{thm:hauptresultat}, we use Parseval's identity and the Poisson summation formula to get \begin{equation*} \left\langle P_u^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2=2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2}\mathcal{F}^{-1}\left[\mathcal{F}[P_u]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr)=2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2}S(\mathbf{n}), \end{equation*} where \begin{equation*} S(\mathbf{n})\mathrel{\mathop:}=\mathcal{F}^{-1}\left[\mathcal{F}[P_u]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr). \end{equation*} i) First, we show \begin{equation}\label{proof:hauptresultat2_1} 2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\abs{S(\mathbf{n})}\leq C(u,n,q)\,2^{-j(q+n+1/4)}. \end{equation} We consider the decomposition of $P_u$ on dyadic squares $Q\in \mathcal{Q}_j$ and define $P_{u,Q}\mathrel{\mathop:}=P_u\,\phi_Q$ to get \begin{equation*} P_u=\sum_{Q\in \mathcal{Q}_j^0}P_{u,Q}+\sum_{Q\in \mathcal{Q}_j^1}P_{u,Q}.
\end{equation*} We repeat the steps which led to \cref{eq:beweis_der_oberen_schranke4} and obtain \begin{equation*} 2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\abs{\mathcal{F}^{-1}\left[\mathcal{F}[P_{u,Q}]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr)}\leq C(q)\, 2^{-jq}\norm{L^q\left[\mathcal{F}[P_{u,Q}]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{equation*} With the linearity of the Fourier transform and the estimate above for the absolutely convergent series we can write \begin{align*} 2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\abs{S(\mathbf{n})}&\leq2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\left( \sum_{Q\in \mathcal{Q}_j^0}+\sum_{Q\in \mathcal{Q}_j^1} \right) \abs{\mathcal{F}^{-1}\left[\mathcal{F}[P_{u,Q}]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr)}\\ &=\left( \sum_{Q\in \mathcal{Q}_j^0}+\sum_{Q\in \mathcal{Q}_j^1} \right)2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}} \abs{\mathcal{F}^{-1}\left[\mathcal{F}[P_{u,Q}]\Psi^{(\mathfrak{i})}_{j,\ell} \right]\Bigl(2\pi(\mathbf{y}+\mathbf{n})\Bigr)}\\ &\leq C(q)\, 2^{-jq}\left( \sum_{Q\in \mathcal{Q}_j^0}+\sum_{Q\in \mathcal{Q}_j^1} \right)\norm{L^q\left[\mathcal{F}[P_{u,Q}]\,\Psi^{(\mathfrak{i})}_{j,\ell}\right] }_{\mathrm{supp}\,\Psi_{j,\ell}^{(\mathfrak{i})},2}. \end{align*} Next, we use \cref{lem:norm_Lq} and the estimates from \cref{eq:maechtigkeit_Q_j} and obtain \cref{proof:hauptresultat2_1} since \begin{align*} 2^{-3j/4}\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\abs{S(\mathbf{n})}&\leq C(q)\, 2^{-jq}\left( C_1(u,q)\,2^j\,2^{-j(q+u+3/2)} + C_2(n,q)\, 2^{j/2}\,2^{-j(3/4+n)}\right)\\ &\leq C_3(u,n,q)\,2^{-j(q+n+1/4)}. \end{align*} ii) In the following, we show \begin{equation}\label{eq:lower_bound} \abs{S(\mathbf{0})}\geq C_4(n,q,\varepsilon_0,T)\,2^{-jn}.
\end{equation} Assume for the moment that the last estimate is true. Then, for sufficiently large $q\in \mathbb{N}$ with \cref{proof:hauptresultat2_1} and the inverse triangle inequality we obtain \begin{equation*} \abs{\left\langle P_u^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}} \right\rangle_2}\geq 2^{-3j/4}\left( \abs{S(\mathbf{0})}-\sum_{\mathbf{n}\in \mathbb{Z}^2\setminus\{\mathbf{0}\}}\abs{S(\mathbf{n})}\right)\geq C_5(u,n,q,\varepsilon_0,T)\,2^{-j(3/4+n)} \end{equation*} and therefore \cref{proof:hauptresultat2_0}.\\ We now start with the proof of \cref{eq:lower_bound}. To this end, we recall the representation \begin{equation*} S(\mathbf{0})=\mathcal{F}^{-1}\left[\mathcal{F}[P_u]\Psi^{(\mathfrak{i})}_{j,\ell}\right](2\pi\mathbf{y})=\sum_{m=0}^{u}C_m\sum_{k=0}^{M-1}\,I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m) \end{equation*} from \cref{eq:F_inv} in polar coordinates with \begin{align} I_k^{(\mathfrak{i})}(j,\ell,\mathbf{y},m)&=\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\int\limits_{a_k}^{a_{k+1}}\Psi_{j,\ell}^{(\mathfrak{i})}\left(\rho\,\boldsymbol{\Theta}(\theta)\right)p_{\theta}^m(\boldsymbol{\gamma}(x))\rho^{-m}\,\mathrm{e}^{\mathrm{i} \rho \boldsymbol{\Theta}^{\mathrm{T}}(\theta)(2\pi\mathbf{y}-\boldsymbol{\gamma}(x))}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x\,\mathrm{d}\theta\,\mathrm{d}\rho\notag\\\label{proof:upper_bound1} &=2^{-j(m-1)}\int\limits_{0}^{\infty}\int\limits_{0}^{2\pi}\Psi_{j,\ell}^{(\mathfrak{i})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-m}\,\mathrm{e}^{2\pi\mathrm{i}2^j\rho\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\mathbf{y}}\,G_k(\rho,\theta,m)\,\mathrm{d}\theta\, \mathrm{d}\rho \end{align} and \begin{equation}\label{eq:L_k}
G_k(\rho,\theta,m)\mathrel{\mathop:}=\int\limits_{a_k}^{a_{k+1}}p_{\theta}^m(\boldsymbol{\gamma}(x))\,\mathrm{e}^{-\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\boldsymbol{\gamma}(x)}\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(x)\,\mathrm{d}x. \end{equation} We consider only $\mathfrak{i}=\mathfrak{h}$ since the case $\mathfrak{i}=\mathfrak{v}$ is similar. First, from \cref{lem:orientation_lemma} and the inverse triangle inequality it follows that \begin{equation*} \abs{\mathcal{F}^{-1}\left[\mathcal{F}[P_u]\Psi^{(\mathfrak{h})}_{j,\ell}\right](2\pi\mathbf{y})}\geq\abs{\sum_{m=0}^{u}C_m\sum_{k\in \mathcal{M}^{(\mathfrak{v})}}\,I_k^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)}-\abs{\mathcal{M}^{(\mathfrak{h})}}C_6(m,N,p)\,2^{-j(N+m-1/2)} \end{equation*} and the last term is negligible for sufficiently large $N\in \mathbb{N}$. By assumption of the theorem, the set \cref{eq:U_epsilon} is nonempty and there exists $k^*=k^*(\mathbf{y}),\,0\leq k^*\leq M-1,$ such that $U_\varepsilon(\mathbf{y})$ can be represented by a vertical curve $\boldsymbol{\gamma}(x)=(t_{k^*}(x),x)^{\mathrm{T}}$ for $x\in[a_{k^*},a_{k^*+1})$. Thus, we can use \cref{lem:lokalisierungslemma} to bound the expression from below by \begin{align*} \abs{\sum_{m=0}^{u}C_m\sum_{k\in \mathcal{M}^{(\mathfrak{v})}}\,I_k^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)}\geq\abs{\sum\limits_{m=0}^{u}C_m\,I_{k^*}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)}-C_7(m,p,q,\varepsilon_0)\,2^{-j(q+m-1/2)}. \end{align*} Therefore, the desired estimate \cref{eq:lower_bound} is equivalent to finding a constant $C_8(n,T)$ such that \begin{equation}\label{eq:haupttheorem2_beh} \abs{I_{k^*}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)}\geq C_8(n,T)\,2^{-jn}.
\end{equation} For $k=k^*$ we split up the integral $I_{k^*}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)$ from \cref{proof:upper_bound1} into \begin{align*} I_{k^*}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)&=2^{-j(m-1)}\int\limits_{0}^{\infty} \biggl(\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}+\int\limits_{\frac{\pi}{2}}^{\frac{3\pi}{2}}\biggr)\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-m}\,\mathrm{e}^{2\pi\mathrm{i}2^j\rho\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\mathbf{y}}G_{k^*}(\rho,\theta,m)\,\mathrm{d}\theta\, \mathrm{d}\rho\\ &=\mathrel{\mathop:}I_{k^*,1}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)+I_{k^*,2}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m). \end{align*} For convenience, we write $I_{1}\mathrel{\mathop:}=I_{k^*,1}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)$ and $I_{2}\mathrel{\mathop:}=I_{k^*,2}^{(\mathfrak{h})}(j,\ell,\mathbf{y},m)$ for the rest of the proof. We follow the ideas from \cite[p. 34]{schober:detection} and use the symmetry properties of the admissible functions $\widetilde{g}$ and $g$ to obtain \begin{equation}\label{eq:I1_I2} I_{k^*}=2\,\mathrm{i}\,\mathrm{Im}(I_{1})=2\,\mathrm{i}\,\mathrm{Im}(I_{2}). \end{equation} The vertical curve $\boldsymbol{\gamma}(x)$ is parametrized by $(t_{k^*}(x),x)^{\mathrm{T}}$ for $x\in[a_{k^*},a_{k^*+1})$. For the point $\mathbf{x}_0=(t_{k^*}(x_0),x_0)^{\mathrm{T}}\in U_{\varepsilon}(\mathbf{y})$ we have that $x_0\in[a_{k^*},a_{k^*+1})$ and therefore $\abs{x-x_0}<\varepsilon=\varepsilon_0\,2^{-j/2}$. We write the function $t_{k^*}(x)$ locally as \begin{equation*} t_{k^*}(x)=t_{k^*}(x_0)+B(x-x_0)+A(x-x_0)^2+r(x-x_0), \end{equation*} where $r(x-x_0)=\mathcal{O}\left((x-x_0)^3\right)$ and in the case $\mathfrak{i}=\mathfrak{h}$ we have $B=t_{k^*}'(x_0)\in[-1,1]$. In the following, we assume $A\mathrel{\mathop:}=\frac{1}{2}\,t_{k^*}''(x_0)>0$. The proof for $A<0$ is similar and will be omitted.
We adapt the approach of \cite{labate:detection,schober:detection} and substitute $v=x-x_0$ to get $\tilde{a}_{k^*}\mathrel{\mathop:}=a_{k^*}-x_0$, and for \cref{eq:L_k} we have \begin{equation*} G_{k^*}(\rho,\theta,n)=\int\limits_{\tilde{a}_{k^*}}^{\tilde{a}_{k^*+1}}\mathrm{e}^{-\mathrm{i}2^j\rho\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\left(t_{k^*}(x_0)+Bv+Av^2+\mathcal{O}(v^3),v+x_0\right)^{\mathrm{T}}}\,p_{\theta}^n(\boldsymbol{\gamma}(v+x_0))\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(v+x_0)\,\mathrm{d}v. \end{equation*} The result of \cite[Lemma 1]{schober:detection} implies \begin{equation}\label{eq:supp_2hochj_psi} \mathrm{supp}\,\Psi^{(\mathfrak{h})}_{j,\ell}(2^j\rho\,\boldsymbol{\Theta}(\theta))\subset\left\lbrace(\rho,\theta)\in \mathbb{R}\times\left[-\frac{\pi}{2},\frac{\pi}{2}\right]:\frac{1}{3}<\abs{\rho}< 2,\,\theta_{j,\ell-2}^{(\mathfrak{h})}<\theta<\theta_{j,\ell+2}^{(\mathfrak{h})}\right\rbrace. \end{equation} By assumption, for the directional derivatives on the boundary we have \begin{equation*} p_{\theta}^m(\boldsymbol{\gamma}(x))\begin{cases} =0, &\text{if } 0\leq m<n,\\ \neq 0, &\text{if } m=n, \end{cases} \end{equation*} for $\theta\in \left( \theta_{j,\ell-2}^{(\mathfrak{h})},\theta_{j,\ell+2}^{(\mathfrak{h})} \right)$, which is why $I_{k^*}(j,\ell,\mathbf{y},m)=0$ for $0\leq m<n$ and $p_\theta^0(\boldsymbol{\gamma}(x))=p(\boldsymbol{\gamma}(x))\neq 0$ for $n=0$.
As the proof will show, we only need to consider the integral $I_{k^*}(j,\ell,\mathbf{y},n)$ since the integrals $I_{k^*}(j,\ell,\mathbf{y},m)$ for $n<m\leq u$ decay faster.\\ For the integral $I_{k^*,1}^{(\mathfrak{h})}(j,\ell,\mathbf{y},n)$ we get \begin{equation*} I_1=2^{-j(n-1)}\int\limits_{\frac{1}{3}}^{2}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{h})}}^{\theta_{j,\ell+2}^{(\mathfrak{h})}}\int\limits_{\tilde{a}_{k^*}}^{\tilde{a}_{k^*+1}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-n}\,\mathrm{e}^{\mathrm{i} 2^j\rho R(v,\theta)}\,\varphi(v,\theta)\,\mathrm{d}v\,\mathrm{d}\theta\, \mathrm{d}\rho, \end{equation*} where $\Lambda\mathrel{\mathop:}=2^j\rho$, $\varphi(v,\theta)\mathrel{\mathop:}=p_{\theta}^n(\boldsymbol{\gamma}(v+x_0))\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta)\,\boldsymbol{\beta}(v+x_0)$ and \begin{align} R(v,\theta)&\mathrel{\mathop:}=-\boldsymbol{\Theta}^{\mathrm{T}}(\theta) \bigl(Av^2+Bv+t_{k^*}(x_0)+\mathcal{O}(v^3)-2\pi y_1,v+x_0-2\pi y_2\bigr)^{\mathrm{T}}\notag\\ &=-\cos\theta\bigl( Av^2+(B+\tan\theta)v+t_{k^*}(x_0)+\mathcal{O}(v^3)-2\pi y_1+(x_0-2\pi y_2)\tan\theta\bigr)\notag\\ &=-\cos\theta\left(A\biggl(v+\frac{B+\tan\theta}{2A}\biggr)^2 +\widetilde{C}-2\pi y_1 - \frac{(B+\tan\theta)^2}{4A}\right)\label{eq:R_A0}. \end{align} In the last line $\widetilde{C}\mathrel{\mathop:}=t_{k^*}(x_0)+(x_0-2\pi y_2)\tan\theta+r(v)$ and since $\abs{v}<\varepsilon=\varepsilon_0\,2^{-j/2}$ we have $\abs{r(v)}< C_1\,\varepsilon^3=C_2(\varepsilon_0)\,2^{-3j/2}$.
It follows that \begin{equation*} \frac{\partial R}{\partial v}(v,\theta)=-2A\cos\theta\biggl(v+\frac{B+\tan\theta}{2A}\biggr)=0 \end{equation*} if $v=v_{\theta}\mathrel{\mathop:}=-\frac{B+\tan\theta}{2A}$ and we introduce $\phi(v,\theta)\mathrel{\mathop:}=R(v,\theta)-R(v_{\theta},\theta)$ which gives \begin{equation*} \phi(v_{\theta},\theta)=\frac{\partial \phi}{\partial v}(v_{\theta},\theta)=0,\qquad\qquad\frac{\partial^2 \phi}{\partial v^2}(v_{\theta},\theta)=\frac{\partial^2 R}{\partial v^2}(v_{\theta},\theta)=-2A\cos\theta\neq0, \end{equation*} since $\cos\theta>0$ for $\theta\in \left(\theta_{j,\ell-2}^{(\mathfrak{h})},\theta_{j,\ell+2}^{(\mathfrak{h})} \right)$. This allows us to write $I_1$ as \begin{equation}\label{eq:I1} I_1=2^{-j(n-1)}\int\limits_{\frac{1}{3}}^{2}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{h})}}^{\theta_{j,\ell+2}^{(\mathfrak{h})}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-n}\,\mathrm{e}^{\mathrm{i} 2^j\rho R(v_{\theta},\theta)}\left(\int\limits_{\tilde{a}_{k^*}}^{\tilde{a}_{k^*+1}}\mathrm{e}^{\mathrm{i}\, \Lambda\, \phi(v,\theta)}\, \varphi(v,\theta)\,\mathrm{d}v\right)\mathrm{d}\theta\,\mathrm{d}\rho. \end{equation} We use \cite[Lemma 13]{schober:detection}, the method of stationary phase, for the inner integral to get the estimate \begin{equation}\label{eq:I11} \int\limits_{\tilde{a}_{k^*}}^{\tilde{a}_{k^*+1}}\mathrm{e}^{\mathrm{i}\, \Lambda\, \phi(v,\theta)}\,\varphi(v,\theta)\,\mathrm{d}v=C\,\sqrt{\pi\mathrm{i}}\, (2^j\rho\,\abs{A\,\cos\theta})^{-\frac{1}{2}}\,\varphi(v_{\theta},\theta)+ r_2(j), \end{equation} where $\abs{r_2(j)}\leq C_2\,2^{-j}$. As remarked in \cite[p. 115]{labate:detection} the constant $C_2>0$ is independent of $\theta$, $\rho$, $j$, $\ell$ and $\mathbf{y}$.
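As a quick illustration of the stationary-phase estimate used here, the following numerical sketch (ours, purely illustrative: the bump amplitude and the quadratic phase $\phi(v)=-(v-v_0)^2$ are hypothetical stand-ins for $\varphi$ and $\phi$) confirms that the oscillatory integral is captured by its leading term $\varphi(v_0)\sqrt{\pi/\Lambda}\,\mathrm{e}^{-\mathrm{i}\pi/4}$ and decays like $\Lambda^{-1/2}$.

```python
import numpy as np

def oscillatory_integral(lam, v0=0.2, num=200001):
    """Evaluate I(lam) = integral of exp(i*lam*phi(v)) * amp(v) dv by the
    trapezoidal rule, with the toy phase phi(v) = -(v - v0)^2 (so phi'' = -2
    at the critical point) and a smooth compactly supported bump amplitude."""
    v = np.linspace(-0.999, 0.999, num)
    amp = np.exp(-1.0 / (1.0 - v**2))            # C^infinity bump on (-1, 1)
    f = amp * np.exp(1j * lam * (-(v - v0) ** 2))
    dv = v[1] - v[0]
    return np.sum((f[:-1] + f[1:]) / 2.0) * dv

def leading_term(lam, v0=0.2):
    """Stationary-phase prediction amp(v0) * sqrt(pi/lam) * exp(-i*pi/4),
    since phi(v0) = 0 and phi''(v0) = -2 here."""
    return np.exp(-1.0 / (1.0 - v0**2)) * np.sqrt(np.pi / lam) * np.exp(-1j * np.pi / 4)

for lam in (200.0, 800.0):
    I, L = oscillatory_integral(lam), leading_term(lam)
    assert abs(I - L) / abs(L) < 0.1             # leading term dominates
# quadrupling lam halves the magnitude: |I| ~ lam^{-1/2}
assert abs(abs(oscillatory_integral(800.0)) / abs(oscillatory_integral(200.0)) - 0.5) < 0.05
```

The remainder behaves like the $r_2(j)$ term above: it is one order smaller in the large parameter than the leading contribution.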
Using \cref{eq:I11}, we further split up the integral \cref{eq:I1} in $I_1=I_{11}+I_{12}$ with \begin{align*} &I_{11}=C\,2^{-j(n-1/2)}\sqrt{\frac{\pi\mathrm{i}}{A}}\int\limits_{\frac{1}{3}}^{2}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{h})}}^{\theta_{j,\ell+2}^{(\mathfrak{h})}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-(n+1/2)}\,\mathrm{e}^{\mathrm{i} 2^j\rho R(v_{\theta},\theta)}\,\abs{\cos\theta}^{-1/2}\,\varphi(v_{\theta},\theta)\,\mathrm{d}\theta\,\mathrm{d}\rho,\\ &I_{12}=C_2\,2^{-jn}\int\limits_{\frac{1}{3}}^{2}\int\limits_{\theta_{j,\ell-2}^{(\mathfrak{h})}}^{\theta_{j,\ell+2}^{(\mathfrak{h})}}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta) \right)\,\rho^{-n}\,\mathrm{e}^{\mathrm{i} 2^j\rho R(v_{\theta},\theta)}\,\mathrm{d}\theta\,\mathrm{d}\rho. \end{align*} With the substitution $t=2^{j/2}\tan\theta-\ell$ we have $\mathrm{d}\theta=2^{-j/2}\cos^2{\theta_t}\,\mathrm{d}t$ where $\theta_t\mathrel{\mathop:}=\arctan((\ell+t)\,2^{-j/2})=\theta_{j,\ell+t}^{(\mathfrak{h})}$. For the function $R(v_{\theta},\theta)$ from \cref{eq:R_A0} this leads to \begin{equation*} 2^jR(v_{\theta_t},t)=\cos\theta_t\left(\frac{(2^{j/2}B+\ell+t)^2}{4A}-2^j(\widetilde{C}-2\pi y_1) \right)=\cos\theta_t\left(\frac{(p+t)^2}{4A} +D \right), \end{equation*} where $p\mathrel{\mathop:}=2^{j/2}B+\ell$ and $D\mathrel{\mathop:}=2^j(2\pi y_1-\widetilde{C})$. By assumption, we have $\mathbf{x}_0\in U_{\varepsilon}(\mathbf{y})$ such that $\abs{p}\leq \frac{1}{4}$ and $\abs{D}\leq \frac{3\pi}{4}$. 
\\ From \cref{eq:supp_2hochj_psi}, it follows that $I_{11}=I_{12}=0$ for $\abs{t}>2$ and we get \begin{align} &I_{11}=C\,2^{-jn}\sqrt{\frac{\pi\mathrm{i}}{A}}\int\limits_{\frac{1}{3}}^{2}\int\limits_{-2}^{2}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta_t) \right)\,\rho^{-(n+1/2)}\,\mathrm{e}^{\mathrm{i}\rho\cos\theta_t\left(\frac{(p+t)^2}{4A} +D \right)}\,\abs{\cos{\theta_t}}^{3/2}\,\varphi(v_{\theta_t},\theta_t)\,\mathrm{d}t\,\mathrm{d}\rho,\label{I_11_temp}\\\notag &I_{12}=C_2\,2^{-j(n+1/2)}\int\limits_{\frac{1}{3}}^{2}\int\limits_{-2}^{2}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta_t) \right)\,\rho^{-n}\,\mathrm{e}^{\mathrm{i}\rho\cos\theta_t\left(\frac{(p+t)^2}{4A} +D \right)}\,\abs{\cos{\theta_t}}^{2}\,\mathrm{d}t\,\mathrm{d}\rho. \end{align} A direct estimate with the triangle inequality leads to $\abs{I_{12}}\leq C_3\,2^{-j(n+1/2)}$ and we can omit this term in the following. From the definition of $\theta_{j,\ell}^{(\mathfrak{h})}$ in \cref{eq:theta_jl} we get \begin{equation*} \cos\theta_t=\cos \left( \arctan \left( 2^{-j/2}(\ell+t)\right) \right)=\left( 1+\left( 2^{-j/2}(\ell+t) \right)^2 \right)^{-1/2}. \end{equation*} The right-hand side is a function \begin{equation*} w(x)=\left( 1+\left( 2^{-j/2}\ell+x \right)^2 \right)^{-1/2} \end{equation*} evaluated at $x=2^{-j/2}\,t$. Using the Taylor approximation of order zero, we obtain \begin{equation*} w(2^{-j/2}t)=\left( 1+\left( 2^{-j/2}\ell\right)^2 \right)^{-1/2}+w'(\xi)\,2^{-j/2}t \end{equation*} with $\abs{\xi}\leq \abs{2^{-j/2}t}\leq 2^{-j/2+1}$ since $\abs{t}\leq 2$.
This leads to \begin{equation*} \abs{2^{-j/2}t\,w'(\xi)}\leq2^{-j/2+1}\frac{\abs{2^{-j/2}+\xi}}{\left( 1+\left( 2^{-j/2}\ell+\xi\right)^2\right)^{3/2}}\leq \frac{3\cdot2^{-j+1}}{\left( 1+\left( 2^{-j/2}\ell+\xi\right)^2\right)^{3/2}}, \end{equation*} and we write $\cos\theta_t=\mu_{j,\ell}+r_3(j)$ with $r_3(j)=\mathcal{O}\left(2^{-j}\right)$, where $\mu_{j,\ell}\mathrel{\mathop:}=(1+(2^{-j/2}\ell)^2)^{-1/2}$ satisfies $2^{-1/2}\leq\abs{\mu_{j,\ell}}\leq 1$. Similar to the previous case, we write $\sin\theta_t=2^{-j/2}\,\ell\,\mu_{j,\ell}+r_4(j)$ with $r_4(j)=\mathcal{O}\left(2^{-j}\right)$. We omit the additive term with fast decay and replace $\cos\theta_t$ by $\mu_{j,\ell}$ and $\sin\theta_t$ by $2^{-j/2}\,\ell\,\mu_{j,\ell}$. The curve $\boldsymbol{\gamma}(x)$ is parametrized by $(t_{k^*}(x),x)^{\mathrm{T}}$ for $x\in[a_{k^*},a_{k^*+1})$ leading to \begin{equation*} \boldsymbol{\beta}(v_{\theta_t}+x_0)=\mathbf{n}(\boldsymbol{\gamma}(v_{\theta_t}+x_0))\abs{\boldsymbol{\gamma}'(v_{\theta_t}+x_0)}_2=\left( -1,t'(v_{\theta_t}+x_0) \right)^{\mathrm{T}}\sqrt{t'(v_{\theta_t}+x_0)^2+1} \end{equation*} and therefore \begin{equation*} \boldsymbol{\Theta}^{\mathrm{T}}(\theta_t)\,\boldsymbol{\beta}(v_{\theta_t}+x_0)=\left( \mu_{j,\ell}\left( 2^{-j/2}\,\ell\,t'(v_{\theta_t}+x_0)-1 \right) \right)\sqrt{t'(v_{\theta_t}+x_0)^2+1}. \end{equation*} Moreover, by the assumption on the directional derivative of order $n$ there is $\widetilde{q}$ such that \begin{equation*} \abs{p_{\theta_t}^n(\boldsymbol{\gamma}(v_{\theta_t}+x_0))-p_{\theta_t}^n(\boldsymbol{\gamma}(\widetilde{q}))}\leq C\,2^{-j/2} \end{equation*} and $p_{\theta_t}^n(\boldsymbol{\gamma}(\widetilde{q}))\neq 0$.
We replace $\varphi(v_{\theta_t},\theta_t)=p_{\theta_t}^n(\boldsymbol{\gamma}(v_{\theta_t}+x_0))\,\boldsymbol{\Theta}^{\mathrm{T}}(\theta_t)\,\boldsymbol{\beta}(v_{\theta_t}+x_0)$ in \cref{I_11_temp} by a constant and write \begin{equation*} I_{11}=C_3\,2^{-jn}\,\mu_{j,\ell}^{3/2}\,\sqrt{\frac{\mathrm{i}}{A}}\int\limits_{\frac{1}{3}}^{2}\int\limits_{-2}^{2}\Psi_{j,\ell}^{(\mathfrak{h})}\left( 2^j\rho\,\boldsymbol{\Theta}(\theta_t) \right)\,\rho^{-(n+1/2)}\,\mathrm{e}^{\mathrm{i}\rho\,\mu_{j,\ell}\left(\frac{(p+t)^2}{4A} +D \right)}\,\mathrm{d}t\,\mathrm{d}\rho. \end{equation*} Next, we write $\lambda=\rho\,\mu_{j,\ell}$ which gives \begin{equation*} \Psi^{(\mathfrak{h})}_{j,\ell}(2^j\rho\,\boldsymbol{\Theta}(\theta_t))=\widetilde{g}(\rho\cos\theta_t)\,g\left(\rho\cos\theta_t(2^{j/2}\tan\theta_t-\ell)\right)=\widetilde{g}(\lambda)\,g\left(t\,\lambda\right). \end{equation*} From here we can follow the steps from \cite[p. 36]{schober:detection} with the obvious changes in our case and write \begin{equation}\label{eq:H_I11} I_{11}=C_3\,2^{-jn}\,\mu_{j,\ell}^{n+1}\,\sqrt{\frac{\mathrm{i}}{A}}\int\limits_{\frac{1}{3}}^{2}\widetilde{g}(\lambda)\,\mathrm{e}^{\mathrm{i}\lambda D}\,\lambda^{-(n+1/2)}\,H(\lambda,p,A)\,\mathrm{d}\lambda, \end{equation} where \begin{equation*} H(\lambda,p,A)\mathrel{\mathop:}=\sqrt{\frac{A}{\lambda}}\Bigl(a(\lambda,p,A)+\mathrm{i}\,b(\lambda,p,A)\Bigr) \end{equation*} and \begin{align*} a(\lambda,p,A)&\mathrel{\mathop:}=\int\limits_{0}^{\infty}\left(g\left(2\sqrt{A\lambda\,v}+ p\,\lambda\right)+g\left(2\sqrt{A\lambda\,v}- p\,\lambda\right)\right)\,\frac{\cos{v}}{\sqrt{v}}\,\mathrm{d}v,\\ b(\lambda,p,A)&\mathrel{\mathop:}=\int\limits_{0}^{\infty}\left(g\left(2\sqrt{A\lambda\,v}+ p\,\lambda\right)+g\left(2\sqrt{A\lambda\,v}- p\,\lambda\right)\right)\,\frac{\sin{v}}{\sqrt{v}}\,\mathrm{d}v. 
\end{align*} With the principal value $\sqrt{\mathrm{i}}=\frac{1+\mathrm{i}}{\sqrt{2}}$ we can write \cref{eq:H_I11} as $I_{11}=\mathrm{Re}(I_{11})+\mathrm{i}\,\mathrm{Im}(I_{11})$ with \begin{align*} \mathrm{Im}(I_{11})&=C_4(n,A)\,2^{-jn}\int\limits_{\frac{1}{3}}^{\frac{4}{3}}\widetilde{g}(\lambda)\,\lambda^{-(n+1)}\,\Bigl( \Bigl[a(\lambda,p,A)+b(\lambda,p,A)\Bigr]\cos(D\lambda)\\ &\qquad+\Bigl[a(\lambda,p,A)-b(\lambda,p,A)\Bigr]\sin(D\lambda) \Bigr)\mathrm{d}\lambda. \end{align*} We use the relation $I_{2}=-\overline{I_{1}}$ from \cref{eq:I1_I2} and repeat the previous steps for the integral $I_{2}$ instead of $I_{1}$ to get $I_{21}=\mathrm{Re}(I_{21})+\mathrm{i}\,\mathrm{Im}(I_{21})$ with \begin{align*} \mathrm{Im}(I_{21})&=C_5(n,A)\,2^{-jn}\int\limits_{\frac{1}{3}}^{\frac{4}{3}}\widetilde{g}(\lambda)\,\lambda^{-(n+1)}\,\Bigl(\Bigl[a(\lambda,p,A)+b(\lambda,p,A)\Bigr]\sin(D\lambda)\\ &\qquad-\Bigl[a(\lambda,p,A)-b(\lambda,p,A)\Bigr]\cos(D\lambda) \Bigr)\mathrm{d}\lambda. \end{align*} The integrals $\mathrm{Im}(I_{11})$ and $\mathrm{Im}(I_{21})$ are up to the factor $\lambda^{-(n+1)}$ identical to $P_1(D,p,A)$ and $P_2(D,p,A)$ from \cite[Lemma 16]{schober:detection} which can similarly be shown to hold true in our case. The assumptions $\abs{p}\leq \frac{1}{2}$ and $\abs{D}\leq \frac{3\pi}{4}$ of that lemma are also fulfilled. From \cref{eq:I1_I2} we obtain \begin{equation*} \bigl\lvert I_{k^*}(j,\ell,\mathbf{y},m)\bigr\rvert =2\bigl\lvert\mathrm{Im}(I_{11}+I_{12})\bigr\rvert=2\bigl\lvert\mathrm{Im}(I_{21}+I_{22})\bigr\rvert \end{equation*} and with the inverse triangle inequality and \cite[Lemma 16]{schober:detection} we have shown the lower bound \cref{eq:haupttheorem2_beh} and thus the estimate \cref{eq:lower_bound}. We have proven \cref{proof:hauptresultat2_0} in the case $A>0$. For $A=0$ we omit the proof since the arguments from \cite{schober:detection} in that case can simply be repeated with the obvious modifications.
\\ In the last part of the proof we show the lower bound of \cref{thm:hauptresultat2} for cartoon-like functions with \cref{lem:P_L}. In order to do that, we consider functions $\mathfrak{f}_0\in \mathcal{E}^{u+1}(\tau)$ of the form $\mathfrak{f}_0=f_0+f\,\chi_T=f_0+\mathfrak{f}$ with $\mathfrak{f}=f\,\chi_T$ and $f_0,f\in C_0^{u+1}(\mathbb{R}^2)$. In \cref{eq:f_0} we show that \begin{equation*} \abs{\left\langle f_0^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\leq C_3(u,q)\,2^{-j(q+u+1/2)}. \end{equation*} We use the inverse triangle inequality, \cref{proof:hauptresultat2_0}, \cref{lem:P_L} and the assumption $u>4(n+1)$ to get \begin{align*} \abs{\left\langle \mathfrak{f}_0^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})}\right\rangle_2}&\geq\abs{\left\langle P_{u,f,\mathbf{y}}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})} \right\rangle_2}-\abs{\left\langle \mathfrak{f}^{2\pi}-P_{u,f,\mathbf{y}}^{2\pi},\psi_{j,\ell,\mathbf{y}}^{(\mathfrak{i})} \right\rangle_2}-\abs{\left\langle f_0^{2\pi},\psi^{(\mathfrak{i})}_{j,\ell,\mathbf{y}}\right\rangle_2}\\ &\geq C_1(u,n,q,\varepsilon_0,T)\,2^{-j(3/4+n)}-C_2(\mathfrak{f},q)\,2^{-j(u-1)/4}-C_3(u,q)\,2^{-j(q+u+1/2)}\\ &\geq C_4 (u,n,q,\varepsilon_0,T)\,2^{-j(3/4+n)} \end{align*} and \cref{thm:hauptresultat2} is proven. \qed
TITLE: Trace of a power of a skew-symmetric matrix QUESTION [0 upvotes]: How to express ${\rm Tr}(A^n)$ (in terms of ${\rm det}\,A$), where $A$ is a skew-symmetric $m\times m$ matrix? With references if possible. REPLY [1 votes]: For even powers we cannot determine the trace from the determinant and order alone. Let $A$ be the $3\times 3$ matrix defined as follows: $A_{1,2}=A_{2,3}=A_{3,1}=c,$ with the other entries determined by skew-symmetry. Then $\det A=0$. But $A^2$ has all its diagonal elements equal to $-2c^2$, so its trace is $-6c^2$, which varies with $c$ despite the constant determinant. The trace of an odd power of any skew-symmetric matrix is always zero. Can you see why?
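A quick numerical check of both claims (illustrative, not part of the original answer):

```python
import numpy as np

c = 2.5
# the 3x3 counterexample: A12 = A23 = A31 = c, remaining entries by skew-symmetry
A = np.array([[0.0,  c,  -c],
              [-c,  0.0,  c],
              [ c,  -c, 0.0]])

assert np.allclose(A, -A.T)                      # skew-symmetry
assert abs(np.linalg.det(A)) < 1e-12             # det A = 0 for every c
assert np.isclose(np.trace(A @ A), -6 * c**2)    # tr(A^2) depends on c
for n in (1, 3, 5, 7):                           # odd powers: trace vanishes
    assert abs(np.trace(np.linalg.matrix_power(A, n))) < 1e-6
```

The last assertion reflects $\mathrm{Tr}(A^n)=\mathrm{Tr}\bigl((A^{\mathrm T})^n\bigr)=\mathrm{Tr}\bigl((-A)^n\bigr)=(-1)^n\,\mathrm{Tr}(A^n)$, which forces the trace to be zero for odd $n$.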
\begin{document} \title{Nested Polar Codes Achieve the Shannon Rate-Distortion Function and the Shannon Capacity} \author{\IEEEauthorblockN{Aria G. Sahebi and S. Sandeep Pradhan\\ \thanks{This work was supported by NSF grants CCF-0915619 and CCF-1116021.}} \IEEEauthorblockA{Department of Electrical Engineering and Computer Science,\\ University of Michigan, Ann Arbor, MI 48109, USA.\\ Email: \tt\small ariaghs@umich.edu, pradhanv@umich.edu}} \maketitle \begin{abstract} It is shown that nested polar codes achieve the Shannon rate-distortion function of arbitrary discrete memoryless sources and the Shannon capacity of arbitrary discrete memoryless channels. \end{abstract} \section{Introduction} Polar codes were originally proposed by Arikan in \cite{arikan_polar} to achieve the symmetric capacity of binary-input discrete memoryless channels. Polar codes for lossy source coding were investigated in \cite{korada_lossy_source} where it is shown that polar codes achieve the symmetric rate-distortion function for sources with binary reconstruction alphabets. For the lossless source coding problem, the source polarization phenomenon is introduced in \cite{arikan_source_polarization} to compress a source down to its entropy. It is well known that linear codes can at most achieve the symmetric capacity of discrete memoryless channels and the symmetric rate-distortion function for discrete memoryless sources. This indicates that polar codes are optimal linear codes in terms of the achievable rate. It is also known that nested linear codes achieve the Shannon capacity of arbitrary discrete memoryless channels and the Shannon rate-distortion function for arbitrary discrete memoryless sources. In this paper, we investigate the performance of nested polar codes for the point-to-point channel and source coding problems and show that these codes achieve the Shannon capacity of arbitrary (binary or non-binary) DMCs and the Shannon rate-distortion function for arbitrary DMSs.
The results of this paper are general regarding the size of the channel and source alphabets. To generalize the results to non-binary cases, we use the approach of \cite{sahebi_multilevel_polar_ieee} in which it is shown that polar codes with their original $(u,u+v)$ kernel, achieve the symmetric capacity of arbitrary discrete memoryless channels where $+$ is the addition operation over any finite Abelian group. \section{Preliminaries} \label{prel} \subsubsection{Source and Channel Models} For the source coding problem, the source is modeled as a discrete-time random process with each sample taking values in a fixed finite set $\mathcal{X}$ with probability distribution $p_X$. The reconstruction alphabet is denoted by $\mathcal{U}$ and the quality of reconstruction is measured by a single-letter distortion function $d:\mathcal{X}\times \mathcal{U}\rightarrow \mathds{R}^{+}$. We denote the source by $(\mathcal{X},\mathcal{U},p_{X},d)$. With a slight abuse of notation, for $x^n\in\mathcal{X}^n$ and $u^n\in\mathcal{U}^n$, we define \begin{align*} d(x^n,u^n)=\frac{1}{n}\sum_{i=1}^n d(x_i,u_i) \end{align*} For the channel coding problem, we consider discrete memoryless and stationary channels used without feedback. We associate two finite sets $\mathcal{X}$ and $\mathcal{Y}$ with the channel as the channel input and output alphabets. These channels can be characterized by a conditional probability law $W(y|x)$ for $x\in \mathcal{X}$ and $y\in \mathcal{Y}$. The channel is specified by $(\mathcal{X},\mathcal{Y},W)$. 
The source of information generates messages over the set $\{1,2,\ldots,M\}$ uniformly for some positive integer $M$.\\ \subsubsection{Achievability and the Rate-Distortion Function for the Source Coding Problem} A transmission system with parameters $(n,\Theta,\Delta,\tau)$ for compressing a given source $(\mathcal{X},\mathcal{U},p_{X},d)$ consists of an encoding mapping and a decoding mapping \begin{align*} &\mbox{\small Enc}:\mathcal{X}^n\rightarrow \{1,2,\cdots,\Theta\},\\ &\mbox{\small Dec}:\{1,2,\cdots,\Theta\}\rightarrow \mathcal{U}^n \end{align*} such that the following condition is met: \begin{align*} P\left(d\left(X^n,\mbox{\small Dec}(\mbox{\small Enc}(X^n))\right)>\Delta\right)\le \tau \end{align*} where $X^n$ is the random vector of length $n$ generated by the source. In this transmission system, $n$ denotes the block length, $\log \Theta$ denotes the number of channel uses, $\Delta$ denotes the distortion level and $\tau$ denotes the probability of exceeding the distortion level $\Delta$.\\ Given a source, a pair of non-negative real numbers $(R,D)$ is said to be achievable if for every $\epsilon>0$ and for all sufficiently large $n$ there exists a transmission system with parameters $(n,\Theta,\Delta,\tau)$ for compressing the source such that \begin{align*} \frac{1}{n}\log \Theta\le R+\epsilon, \qquad \Delta\le D+\epsilon,\qquad \tau\le \epsilon \end{align*} The optimal rate distortion function $R^*(D)$ of the source is given by the infimum of the rates $R$ such that $(R,D)$ is achievable.\\ It is known that the optimal rate-distortion function is given by: \begin{align}\label{eqn:Shannon_RD} R(D)=\inf_{\substack{p_{U|X}\\\mathds{E}_{p_X p_{U|X}}\{d(X,U)\}\le D}} I(X;U) \end{align} where $p_{U|X}$ is the conditional probability of $U$ given $X$.
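As a concrete instance of the rate-distortion expression above, consider a uniform binary source with Hamming distortion, for which the rate-distortion function is known in closed form, $R(D)=1-h(D)$ for $0\le D\le \frac12$, with the infimum attained by a test channel $p_{X|U}$ that is a BSC with crossover probability $D$. The sketch below (our own illustration, not from the paper) verifies this numerically.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def mutual_information(p_xu):
    """I(X;U) in bits from a joint distribution p_xu[x, u]."""
    px, pu = p_xu.sum(axis=1), p_xu.sum(axis=0)
    prod = np.outer(px, pu)
    mask = p_xu > 0
    return float(np.sum(p_xu[mask] * np.log2(p_xu[mask] / prod[mask])))

D = 0.11
# uniform binary source; the optimizing test channel p_{X|U} is a BSC(D)
p_xu = 0.5 * np.array([[1 - D, D],
                       [D, 1 - D]])           # joint p(x, u)
hamming = np.array([[0, 1], [1, 0]])
assert np.isclose(float(np.sum(p_xu * hamming)), D)     # E d(X,U) = D
assert np.isclose(mutual_information(p_xu), 1 - h2(D))  # I(X;U) = 1 - h(D)
```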
\subsubsection{Achievability and Capacity for the Channel Coding Problem} A transmission system with parameters $(n,M,\tau)$ for reliable communication over a given channel $(\mathcal{X},\mathcal{Y},W)$ consists of an encoding mapping $\mbox{\small Enc}:\{1,2,\ldots,M\}\rightarrow \mathcal{X}^n$ and a decoding mapping $\mbox{\small Dec}:\mathcal{Y}^n\rightarrow\{1,2,\ldots,M\}$ such that \begin{align*} \frac{1}{M}\sum_{m=1}^{M} W^n\left(\mbox{\small Dec}(Y^n)\ne m|X^n=\mbox{\small Enc}(m)\right)\le \tau \end{align*} Given a channel $(\mathcal{X},\mathcal{Y},W)$, the rate $R$ is said to be achievable if for all $\epsilon>0$ and for all sufficiently large $n$, there exists a transmission system for reliable communication with parameters $(n,M,\tau)$ such that \begin{align*} \frac{1}{n}\log M \ge R-\epsilon,\qquad\qquad \tau\le \epsilon \end{align*} The channel capacity is the supremum of the set of achievable rates. It is known that the channel capacity is given by: \begin{align}\label{eqn:Shannon_C} C=\sup_{p_{X}} I(X;Y) \end{align} where $p_{X}$ is the channel input distribution.\\ \subsubsection{Groups, Rings and Fields} All groups referred to in this paper are \emph{Abelian groups}. Given a group $(\G,+)$, a subset $H$ of $\G$ is called a \emph{subgroup} of $\G$ if it is closed under the group operation. In this case, $(H,+)$ is a group in its own right. This is denoted by $H\le \G$. A \emph{coset} $C$ of a subgroup $H$ is a shift of $H$ by an arbitrary element $a\in \G$ (i.e. $C=a+H$ for some $a\in\G$). For any subgroup $H$ of $\G$, its cosets partition the group $\G$. A \emph{transversal} $T$ of a subgroup $H$ of $\G$ is a subset of $\G$ containing one and only one element from each coset (shift) of $H$. Given an element $d$ of $\G$, $\langle d\rangle$ denotes the subgroup of $\G$ generated by $d$. i.e. the smallest subgroup of $\G$ containing $d$. 
A subgroup $M$ of $\G$ is called maximal if it is a proper subgroup and there does not exist another proper subgroup of $\G$ containing $M$.\\ \subsubsection{Channel Parameters} For a channel $(\mathcal{X},\mathcal{Y},W)$, assume $\mathcal{X}$ is equipped with the structure of a group $(\G,+)$. The symmetric capacity is defined as $\bar{I}(W)=I(X;Y)$ where the channel input $X$ is uniformly distributed over $\mathcal{X}$ and $Y$ is the output of the channel. The Bhattacharyya distance between two distinct input symbols $x$ and $\tilde{x}$ is defined as \begin{align*} Z(W_{\{x,\tilde{x}\}})=\sum_{y\in\mathcal{Y}}\sqrt{W(y|x)W(y|\tilde{x})} \end{align*} and the average Bhattacharyya distance is defined as \begin{align*} Z(W)=\sum_{\substack{x,\tilde{x}\in \mathcal{X}\\x\ne\tilde{x}}}\frac{1}{q(q-1)}Z(W_{\{x,\tilde{x}\}}) \end{align*} where $q=|\mathcal{X}|$. We use the following two quantities in the paper extensively: \begin{align*} &D_d(W)=\frac{1}{2q}\sum_{u\in \mathcal{U}} \sum_{x\in\mathcal{X}} \left|W(x|u)-W(x|u+d)\right|\\ &\tilde{D}_d(W)=\frac{1}{2q}\sum_{u\in \mathcal{U}} \sum_{x\in\mathcal{X}} \left(W(x|u)-W(x|u+d)\right)^2 \end{align*} where $d$ is some element of $\G$ and $+$ is the group operation.\\ \subsubsection{Binary Polar Codes} For any $N=2^n$, a polar code of length $N$ designed for the channel $(\mathds{Z}_2,\mathcal{Y},W)$ is a linear (coset) code characterized by a generator matrix $G_N$ and a set of indices $A\subseteq \{1,\cdots,N\}$ of \emph{almost perfect channels}. The generator matrix for polar codes is defined as $G_N=B_NF^{\otimes n}$ where $B_N$ is a permutation of rows, $F=\left[ \begin{array}{cc}1 & 0\\1 & 1\end{array} \right]$ and $\otimes$ denotes the Kronecker product. The set $A$ is a function of the channel. 
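The binary generator matrix $G_N=B_NF^{\otimes n}$ can be built directly. The sketch below (our own, illustrative) constructs it and checks the known fact that $G_N$ is its own inverse over GF(2), which follows since $F^2=I \pmod 2$ and the bit-reversal permutation $B_N$ commutes with $F^{\otimes n}$.

```python
import numpy as np

def polar_generator(n):
    """G_N = B_N F^{(x)n} over GF(2): the n-fold Kronecker power of
    F = [[1,0],[1,1]] with its rows permuted by bit reversal (B_N)."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):                 # build F^{(x)n}
        G = np.kron(G, F)
    N = 1 << n
    # row i of G_N is row bitrev(i) of F^{(x)n}
    bitrev = [int(format(i, "0{}b".format(n))[::-1], 2) for i in range(N)]
    return G[bitrev, :]

G8 = polar_generator(3)
assert G8.shape == (8, 8)
# G_N is an involution mod 2: encoding twice recovers the input
assert np.array_equal(G8.dot(G8) % 2, np.eye(8, dtype=np.uint8))
```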
The decoding algorithm for polar codes is a specific form of successive cancellation \cite{arikan_polar}.\\ \subsubsection{Polar Codes Over Abelian Groups} For any discrete memoryless channel, there always exists an {Abelian group} of the same size as that of the channel input alphabet. In general, for an Abelian group, there may not exist a multiplication operation. Since polar encoders are characterized by a matrix multiplication, before using these codes for channels of arbitrary input alphabet sizes, a generator matrix for codes over Abelian groups needs to be properly defined. Polar codes over Abelian groups are introduced in \cite{sahebi_multilevel_polar_ieee}.\\ \subsubsection{Notation} We denote by $O(\epsilon)$ any function of $\epsilon$ which is right-continuous around $0$ and satisfies $O(\epsilon)\rightarrow 0$ as $\epsilon\downarrow 0$.\\ For positive integers $N$ and $r$, let $\{A_0,A_1,\cdots,A_r\}$ be a partition of the index set $\{1,2,\cdots,N\}$. Given sets $T_t$ for $t=0,\cdots,r$, the direct sum $\bigoplus_{t=0}^r T_t^{A_t}$ is defined as the set of all tuples $u_1^N=(u_1,\cdots,u_N)$ such that $u_i\in T_t$ whenever $i\in A_t$.\\ \section{The Lossy Source Coding Problem} In this section, we prove the following theorem: \begin{theorem} For an arbitrary discrete memoryless source $(\mathcal{X},\mathcal{U},p_X,d)$, nested polar codes achieve the Shannon rate-distortion function \eqref{eqn:Shannon_RD}. \end{theorem} For the source $(\mathcal{X},\mathcal{U},p_X,d)$, let $\mathcal{U}=\G$ where $\G$ is an arbitrary Abelian group and let $q=|\G|$ be the size of the group. For a pair $(R,D)\in\mathds{R}^2$, let $X$ be distributed according to $p_X$ and let $U$ be a random variable such that $\mathds{E}\{d(X,U)\}\le D$.
We prove that there exists a pair of polar codes $\mathds{C}_i\subseteq\mathds{C}_o$ such that $\mathds{C}_i$ induces a partition of $\mathds{C}_o$ through its shifts, $\mathds{C}_o$ is a good source code for $X$ and each shift of $\mathds{C}_i$ is a good channel code for the test channel $p_{X|U}$. This will be made precise in what follows.\\ Given the test channel $p_{X|U}$, define the artificial channels $(\G,\G,W_c)$ and $(\G,\mathcal{X}\times\G,W_s)$ such that for $s,z\in \G$ and $x\in \mathcal{X}$, \begin{align*} &W_c(z|s)=p_U(z-s)\\ &W_s(x,z|s)=p_{XU}(x,z-s) \end{align*} These channels have been depicted in Figures \ref{fig:Wc_RD} and \ref{fig:Ws_RD}. \begin{figure}[!h] \centering \includegraphics[scale=1]{Wc_RD.pdf} \caption{\small Test channel for the inner code (the channel coding component)} \label{fig:Wc_RD} \end{figure} \begin{figure}[!h] \centering \includegraphics[scale=1]{Ws_RD.pdf} \caption{\small Test channel for the outer code (the source coding component)} \label{fig:Ws_RD} \end{figure} Let $S$ be a random variable uniformly distributed over $\G$ which is independent of $X$ and $U$. It is straightforward to show that in this case, $Z=U+S$ is also uniformly distributed over $\G$. The symmetric capacity of the channel $W_c$ is equal to \begin{align*} \bar{I}(W_c)=I(S;Z)&=H(Z)-H(Z|S)\\ &\stackrel{(a)}{=}\log q-H(U|S)=\log q-H(U) \end{align*} where $(a)$ follows since $Z$ is uniformly distributed and $U$ is independent of $S$. For the channel $W_s$, first we show that $X$ and $Z$ are independent. For $z\in \G$ and $x\in\mathcal{X}$, \begin{align*} p_{X|Z}(x|z)&=\sum_{u\in \G} p_{U|Z}(u|z) p_{X|ZU}(x|z,u)\\ &=\sum_{u\in \G} \frac{p_{US}(u,z-u)}{p_{Z}(z)} p_{X|ZU}(x|z,u)\\ &\stackrel{(a)}{=}\sum_{u\in \G} p_U(u) p_{X|U}(x|u)\\ &=p_X(x) \end{align*} where $(a)$ follows since $S$ and $U$ are independent, $S$ and $Z$ are uniformly distributed and the Markov chain $Z\leftrightarrow U \leftrightarrow X$ holds.
The symmetric capacity of the channel $W_s$ is equal to \begin{align*} \bar{I}(W_s)=I(S;XZ)&=H(S)+H(XZ)-H(SXZ)\\ &\stackrel{(a)}{=}H(S)+H(X)+H(Z)-H(SXU)\\ &\stackrel{(b)}{=}H(X)+H(Z)-H(XU)\\ &\stackrel{(c)}{=}\log q-H(U|X)\\ \end{align*} where $(a)$ follows since $X$ and $Z$ are independent and there is a one-to-one correspondence between $(S,Z)$ and $(S,U)$. Equality $(b)$ follows since $S$ is independent of $X,U$ and hence $H(SXU)=H(S)+H(XU)$. Equality $(c)$ follows since $Z$ is uniform.\\ We employ a nested polar code in which the inner code is a good channel code for the channel $W_c$ and the outer code is a good source code for $W_s$. The rate of this code is equal to \begin{align*} R&=\bar{I}(W_s)-\bar{I}(W_c)\\ &=\log q-H(U|X)-\left(\log q-H(U)\right)=I(X;U) \end{align*} Note that the channels $W_c$ and $W_s$ are chosen so that the difference of their \emph{symmetric} capacities is equal to the \emph{Shannon} mutual information between $U$ and $X$. This enables us to use channel coding polar codes to achieve the symmetric capacity of $W_c$ (as the inner code) and source coding polar codes to achieve the symmetric capacity of the test channel $W_s$ (as the outer code). The exact proof is postponed to Section \ref{section:Binary_proof_RD} where the result is proved for the binary case and Section \ref{section:General_proof_RD} in which the general proof (for arbitrary Abelian groups) is presented. The next section is devoted to some general definitions and useful lemmas which are used in the proofs. \subsection{Definitions and Lemmas} For a channel $(\mathcal{X},\mathcal{Y},W)$, the basic channel transformations associated with polar codes are given by: \begin{align} &\label{eqn:channel_transform1} W^-(y_1,y_2|u_1)=\sum_{u_2^\prime\in \G}\frac{1}{q}W(y_1|u_1+u_2^\prime)W(y_2|u_2^\prime)\\ &\label{eqn:channel_transform2} W^+(y_1,y_2,u_1|u_2)=\frac{1}{q}W(y_1|u_1+u_2)W(y_2|u_2) \end{align} for $y_1,y_2\in\mathcal{Y}$ and $u_1,u_2\in \G$. 
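The single-step transformations \eqref{eqn:channel_transform1} and \eqref{eqn:channel_transform2} are easy to implement for small alphabets. The following sketch (our own; the ternary channel is an arbitrary example) applies one step over $\mathds{Z}_3$ and verifies the conservation of symmetric capacity, $\bar{I}(W^-)+\bar{I}(W^+)=2\bar{I}(W)$, together with the spreading $\bar{I}(W^-)\le\bar{I}(W)\le\bar{I}(W^+)$.

```python
import numpy as np

def sym_capacity(W):
    """Symmetric capacity I(X;Y) in bits with X uniform; W[x, y] = W(y|x)."""
    q = W.shape[0]
    joint = W / q
    py = joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2((W / py[None, :])[mask])))

def polarize(W, q):
    """One Arikan step over Z_q, following (1)-(2):
    W_minus[u1, (y1,y2)]    = sum_{u2} (1/q) W(y1|u1+u2) W(y2|u2)
    W_plus [u2, (y1,y2,u1)] =          (1/q) W(y1|u1+u2) W(y2|u2)"""
    ny = W.shape[1]
    Wm = np.zeros((q, ny * ny))
    Wp = np.zeros((q, ny * ny * q))
    for u1 in range(q):
        for u2 in range(q):
            for y1 in range(ny):
                for y2 in range(ny):
                    p = W[(u1 + u2) % q, y1] * W[u2, y2] / q
                    Wm[u1, y1 * ny + y2] += p
                    Wp[u2, (y1 * ny + y2) * q + u1] = p
    return Wm, Wp

q = 3
W = np.array([[0.80, 0.10, 0.10],     # an arbitrary ternary-input channel
              [0.10, 0.80, 0.10],
              [0.15, 0.15, 0.70]])
Wm, Wp = polarize(W, q)
Im, I0, Ip = sym_capacity(Wm), sym_capacity(W), sym_capacity(Wp)
assert np.isclose(Im + Ip, 2 * I0)    # chain rule: symmetric capacity is conserved
assert Im <= I0 + 1e-12 and I0 <= Ip + 1e-12   # ... and is spread apart
```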
We apply these transformations to both channels $(\G,\G,W_c)$ and $(\G,\mathcal{X}\times\G,W_s)$. Repeating these operations $n$ times recursively for $W_c$ and $W_s$, we obtain $N=2^n$ channels $W_{c,N}^{(1)},\cdots,W_{c,N}^{(N)}$ and $W_{s,N}^{(1)},\cdots,W_{s,N}^{(N)}$ respectively. For $i=1,\cdots,N$, these channels are given by: \begin{align*} W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|v_i) &=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} W_c^N(z_1^N|v_1^NG)\\ &= \sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} p_U^N(z_1^N-v_1^NG)\\ W_{s,N}^{(i)}(x_1^N,z_1^N,v_1^{i-1}|v_i) &=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} W_s^N(x_1^N,z_1^N|v_1^NG)\\ &= \sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} p_{XU}^N(x_1^N,z_1^N-v_1^NG) \end{align*} for $z_1^N,v_1^N\in \G^N$, $x_1^N\in\mathcal{X}^N$ where $G$ is the generator matrix of dimensions $N\times N$ for polar codes. For the case of binary input channels, it has been shown in \cite{arikan_polar} that as $N\rightarrow \infty$, these channels polarize in the sense that their Bhattacharyya parameter gets either close to zero (perfect channels) or close to one (useless channels). For arbitrary channels, it is shown in \cite{sahebi_multilevel_polar_ieee} that polarization happens in multiple levels so that as $N\rightarrow \infty$ channels get useless, perfect or ``partially perfect''.\\ For an integer $n$, let $J_n$ be a uniform random variable over the set $\{1,2,\cdots,N=2^n\}$ and define the random variable $I^n(W)$ as \begin{align}\label{eqn:Iprocess} I^n(W)=I(X;Y) \end{align} where $X$ and $Y$ are the input and output of $W_N^{(J_n)}$ respectively and $X$ is uniformly distributed. It has been shown in \cite{sasoglu_polar_q} that the process $I^0,I^1,I^2,\cdots$ is a martingale; hence $\mathds{E}\{I^n\}=I^0$.
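The martingale property $\mathds{E}\{I^n\}=I^0$ can be checked exactly for small $n$: synthesizing all $N=2^n$ channels and averaging their symmetric capacities recovers $\bar{I}(W)$. A sketch for the binary case (the channel $W$ below is an arbitrary illustrative choice, not a channel from this paper):

```python
import numpy as np

def polar_minus(W):
    ny = W.shape[1]
    out = np.zeros((2, ny, ny))
    for u1 in range(2):
        for u2 in range(2):
            out[u1] += 0.5 * W[(u1 + u2) % 2][:, None] * W[u2][None, :]
    return out.reshape(2, -1)

def polar_plus(W):
    ny = W.shape[1]
    out = np.zeros((2, ny, ny, 2))
    for u2 in range(2):
        for u1 in range(2):
            out[u2, :, :, u1] = 0.5 * W[(u1 + u2) % 2][:, None] * W[u2][None, :]
    return out.reshape(2, -1)

def sym_cap(W):
    pxy = W / 2
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return (pxy[m] * np.log2(pxy[m] / (px * py)[m])).sum()

rng = np.random.default_rng(2)
W = rng.random((2, 3))
W /= W.sum(axis=1, keepdims=True)

chans = [W]
for _ in range(2):                       # n = 2, so N = 4 synthesized channels
    chans = [t(c) for c in chans for t in (polar_minus, polar_plus)]
avg_I = np.mean([sym_cap(c) for c in chans])
```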
For an integer $n$, define the random variable $Z_d^n(W)=Z_d(W_N^{(J_n)})$ where for a channel $(\G,\mathcal{Y},W)$, \begin{align}\label{eqn:Zd} Z_d(W)=\frac{1}{q}\sum_{x\in \G}\sum_{y\in \mathcal{Y}}\sqrt{W(y|x)W(y|x+d)} \end{align} Other than the processes $I^n(W)$ and $Z_d^n(W)$, in the proof of polarization, we need another set of processes $Z^H(W)$ and $I^n_H(W)$ for $H\le \G$ which we define in the following. Define \begin{align*} Z^H(W)=\sum_{d\notin H} Z_d(W) \end{align*} Note that any uniform random variable defined over $\G$ can be decomposed into two uniform and independent random variables $[X]_H$ and $[X]_{T_H}$ where $[X]_H$ takes values from $H$ and $[X]_{T_H}$ takes values from a transversal $T_H$ of $H$ such that $X=[X]_H+[X]_{T_H}$. For an integer $n$, define the random variable $I^n_H(W)$ as \begin{align}\label{eqn:Iprocess2} I^n_H(W)=I(X;Y|[X]_{T_H})=I([X]_H;Y|[X]_{T_H}) \end{align} \begin{definition}\label{def:degraded} The channel $(\G,\mathcal{Y}_1,W_1)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2,W_2)$ if there exists a channel $(\mathcal{Y}_2,\mathcal{Y}_1,W)$ such that for $x\in\G$ and $y_1 \in \mathcal{Y}_1$, \begin{align*} W_1(y_1|x)=\sum_{y_2\in\mathcal{Y_2}} W_2(y_2|x) W(y_1|y_2) \end{align*} \end{definition} \begin{lemma}\label{lemma:Degraded_ZZ} If the channel $(\G,\mathcal{Y}_1,W_1)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2,W_2)$ then $Z(W_1)\ge Z(W_2)$. \end{lemma} \begin{IEEEproof} Follows by summing the corresponding inequality for the parameters $Z_d$, proved in Section \ref{section:General_proof_RD}, over the relevant values of $d$. \end{IEEEproof} \begin{lemma}\label{lemma:Degraded_WW} If the channel $(\G,\mathcal{Y}_1,W_1)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2,W_2)$ then $(\G,\mathcal{Y}_1\times \mathcal{Y}_1\times \G,W_1^+)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2\times \mathcal{Y}_2\times \G,W_2^+)$ and $(\G,\mathcal{Y}_1\times \mathcal{Y}_1,W_1^-)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2\times \mathcal{Y}_2,W_2^-)$. \end{lemma} \begin{IEEEproof} Follows from Definition \ref{def:degraded} and the transformations \eqref{eqn:channel_transform1} and \eqref{eqn:channel_transform2}: applying the degrading channel coordinate-wise to the outputs of $W_2^-$ and $W_2^+$ yields $W_1^-$ and $W_1^+$, respectively.
\end{IEEEproof} \begin{lemma}\label{lemma:Wc_deg_Ws_RD2} The channel $W_c$ is degraded with respect to the channel $W_s$ in the sense of Definition \ref{def:degraded}. \end{lemma} \begin{IEEEproof} Intuitively, it is clear that $W_c$ is a degraded version of $W_s$. The proof is as follows: Let the channel $(\mathcal{X}\times \G,\G,W)$ be such that for $z,z'\in\G$ and $x\in\mathcal{X}$, $W(z|x,z')=\mathds{1}_{\{z=z'\}}$. Then for $s,z\in\G$, \begin{align*} \sum_{\substack{z'\in\G\\x\in\mathcal{X}}} W_s(x,z'|s)\mathds{1}_{\{z=z'\}} &= \sum_{\substack{z'\in\G\\x\in\mathcal{X}}} P_{XU}(x,z'-s)\cdot \mathds{1}_{\{z=z'\}}\\ &=\sum_{x\in\mathcal{X}} P_{XU}(x,z-s)\\ &= P_{U}(z-s) = W_c(z|s) \end{align*} \end{IEEEproof} Let the random vectors $X_1^N,U_1^N$ be distributed according to $P_{XU}^N$ and let $Z_1^N$ be a random variable uniformly distributed over $\G^N$ which is independent of $X_1^N,U_1^N$. Let $S_1^N=Z_1^N-U_1^N$ and $V_1^N=S_1^N G^{-1}$ (Here, $G^{-1}$ is the inverse of the one-to-one mapping $G:\G^N\rightarrow \G^N$). In other words, the joint distribution of these random vectors is given by \begin{align*} &p_{V_1^NS_1^NU_1^NX_1^NZ_1^N}(v_1^N,s_1^N,u_1^N,x_1^N,z_1^N)\\ &\qquad\qquad=\frac{1}{q^N} p_{XU}^N(x_1^N,u_1^N) \mathds{1}_{\{s_1^N=v_1^NG,u_1^N=z_1^N-v_1^NG\}} \end{align*} This implies \begin{align*} p_{V_1^NX_1^NZ_1^N}(v_1^N,x_1^N,z_1^N)&=\frac{1}{q^N} p_{XU}^N(x_1^N,z_1^N-v_1^NG),\\ p_{V_1^NZ_1^N}(v_1^N,z_1^N)&=\frac{1}{q^N} p_{U}^N(z_1^N-v_1^NG) \end{align*} In the next subsection, we provide the proof for the binary case.
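The degradation just established can be checked numerically: marginalizing the $x$-component out of the output of $W_s$ yields $W_c$, and consequently every parameter $Z_d$ of $W_c$ dominates that of $W_s$ (this anticipates the $Z_d$ comparison lemma of Section \ref{section:General_proof_RD}). A sketch with an arbitrary illustrative $p_{XU}$:

```python
import numpy as np

q, nx = 4, 3
rng = np.random.default_rng(3)
p_xu = rng.random((nx, q))
p_xu /= p_xu.sum()
p_u = p_xu.sum(axis=0)

# W_s has output alphabet (x, z), flattened; W_c is W_s with x marginalized out
W_s = np.array([[p_xu[x, (z - s) % q] for z in range(q) for x in range(nx)]
                for s in range(q)])
W_c = np.array([[p_u[(z - s) % q] for z in range(q)] for s in range(q)])

def Z_d(W, d, q):
    """Z_d(W) = (1/q) sum_x sum_y sqrt(W(y|x) W(y|x+d))."""
    return sum(np.sqrt(W[x] * W[(x + d) % q]).sum() for x in range(q)) / q

gaps = [Z_d(W_c, d, q) - Z_d(W_s, d, q) for d in range(1, q)]
```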
\subsection{Source Coding: Sketch of the Proof for the Binary Case}\label{section:Binary_proof_RD} The standard result of channel polarization for the binary input channel $W_c$ implies \cite{arikan_polar} that for any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $A_0,A_1$ of $[1,N]$ such that for $t=0,1$ and $i\in A_t$, $\left|\bar{I}(W_{c,N}^{(i)}) -t \right|<\epsilon$ and such that for $i\in A_1$ $Z(W_{c,N}^{(i)})< 2^{-N^{\beta}}$. Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|A_t|}{N}\rightarrow p_t$ for some $p_0,p_1$ adding up to one with $p_1=\bar{I}(W_c)$.\\ Similarly, for the channel $W_s$ we have the following: For any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $B_0,B_1$ of $[1,N]$ such that for $\tau=0,1$ and $i\in B_{\tau}$, $\left|\bar{I}(W_{s,N}^{(i)}) -\tau \right|<\epsilon$ and such that for $i\in B_1$, $Z(W_{s,N}^{(i)})< 2^{-N^{\beta}}$. Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|B_\tau|}{N}\rightarrow q_\tau$ for some $q_0,q_1$ adding up to one with $q_1=\bar{I}(W_s)$. \begin{lemma}\label{lemma:Zc_Zs_RD2} For $i=1,\cdots,N$, $Z(W_{c,N}^{(i)})\ge Z(W_{s,N}^{(i)})$. \end{lemma} \begin{IEEEproof} Follows from Lemma \ref{lemma:Wc_deg_Ws_RD2}, Lemma \ref{lemma:Degraded_ZZ} and Lemma \ref{lemma:Degraded_WW}. \end{IEEEproof} To introduce the encoding and decoding rules, we need to make the following definitions: \begin{align*} &A_0=\left\{i\in[1,N]\left|Z(W_{c,N}^{(i)})>2^{-N^{\beta}}\right.\right\}\\ &B_0=\left\{i\in[1,N]\left|Z(W_{s,N}^{(i)})>1-2^{-N^{\beta}}\right.\right\} \end{align*} and $A_1=[1,N]\backslash A_0$ and $B_1=[1,N]\backslash B_0$. For $t=0,1$ and $\tau=0,1$, define $A_{t,\tau}=A_t\cap B_{\tau}$. Note that for large $N$, $2^{-N^{\beta}} < 1-2^{-N^{\beta}}$ and therefore, Lemma \ref{lemma:Zc_Zs_RD2} implies $A_{1,0}=\emptyset$. 
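For small blocklengths the index sets can be computed exactly. The following sketch (binary, with an arbitrary illustrative $p_{XU}$ and loose stand-ins for the thresholds $2^{-N^{\beta}}$ and $1-2^{-N^{\beta}}$) verifies $Z(W_{c,N}^{(i)})\ge Z(W_{s,N}^{(i)})$ index by index, which is what forces $A_{1,0}=\emptyset$:

```python
import numpy as np

def polar_minus(W):
    ny = W.shape[1]
    out = np.zeros((2, ny, ny))
    for u1 in range(2):
        for u2 in range(2):
            out[u1] += 0.5 * W[(u1 + u2) % 2][:, None] * W[u2][None, :]
    return out.reshape(2, -1)

def polar_plus(W):
    ny = W.shape[1]
    out = np.zeros((2, ny, ny, 2))
    for u2 in range(2):
        for u1 in range(2):
            out[u2, :, :, u1] = 0.5 * W[(u1 + u2) % 2][:, None] * W[u2][None, :]
    return out.reshape(2, -1)

def bhattacharyya(W):
    return np.sqrt(W[0] * W[1]).sum()

def synthesize(W, n):
    chans = [W]
    for _ in range(n):
        chans = [t(c) for c in chans for t in (polar_minus, polar_plus)]
    return chans

rng = np.random.default_rng(4)
p_xu = rng.random((3, 2))
p_xu /= p_xu.sum()
p_u = p_xu.sum(axis=0)
W_c = np.array([[p_u[(z - s) % 2] for z in range(2)] for s in range(2)])
W_s = np.array([[p_xu[x, (z - s) % 2] for z in range(2) for x in range(3)]
                for s in range(2)])

n = 2
Zc = [bhattacharyya(c) for c in synthesize(W_c, n)]
Zs = [bhattacharyya(c) for c in synthesize(W_s, n)]
thr_lo, thr_hi = 0.1, 0.9   # stand-ins for 2^{-N^beta} and 1 - 2^{-N^beta}
A1 = {i for i, z in enumerate(Zc) if z <= thr_lo}
B0 = {i for i, z in enumerate(Zs) if z > thr_hi}
```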
Note that the above polarization results imply that as $N$ increases, $\frac{|A_1|}{N}\rightarrow \bar{I}(W_c)$ and $\frac{|B_{1}|}{N}\rightarrow \bar{I}(W_s)$.\\ \subsubsection{Encoding and Decoding} Let $z_1^N\in\G^N$ be an outcome of the random variable $Z_1^N$ known to both the encoder and the decoder. Given a source sequence $x_1^N\in\mathcal{X}^N$, the encoding rule is as follows: For $i\in[1,N]$, if $i\in B_0$, then $v_i$ is uniformly distributed over $\G$ and is known to both the encoder and the decoder (and is independent of other random variables). If $i\in B_1$, $v_i=g$ for some $g\in\G$ with probability \begin{align*} P(v_i=g)=p_{V_i|X_1^NZ_1^NV_1^{i-1}}(g|x_1^N,z_1^N,v_1^{i-1}) \end{align*} Note that $[1,N]$ can be partitioned into $A_{0,0},A_{0,1}$ and $A_{1,1}$ (since $A_{1,0}$ is empty) and $B_0=A_{0,0}$, $B_1=A_{0,1}\cup A_{1,1}$. Therefore, $v_1^N$ can be decomposed as $v_1^N=v_{A_{0,0}}+v_{A_{0,1}}+v_{A_{1,1}}$ in which $v_{A_{0,0}}$ is known to the decoder. The encoder sends $v_{A_{0,1}}$ to the decoder and the decoder uses the channel code to recover $v_{A_{1,1}}$. The decoding rule is as follows: Given $z_1^N$, $v_{A_{0,0}}$ and $v_{A_{0,1}}$, let $\hat{v}_{A_{0,0}}=v_{A_{0,0}}$ and $\hat{v}_{A_{0,1}}=v_{A_{0,1}}$. For $i\in A_{1,1}$, let \begin{align*} \hat{v}_i=\argmax_{g\in \G} W_{c,N}^{(i)}(z_1^N,\hat{v}_1^{i-1}|g) \end{align*} Finally, the decoder outputs $z_1^N-\hat{v}_1^NG$. \subsubsection{Error Analysis} The analysis is a combination of the point-to-point channel coding and source coding results for polar codes. The average distortion between the encoder input and the decoder output is upper bounded by \begin{align*} D_{avg}&\!\le\!\!\!\!\!\sum_{z_1^N\in\G^N} \!\! \frac{1}{q^N} \!\!\!\!\! \sum_{x_1^N\in\mathcal{X}^N} \!\!\!\!\! p_X^N(x_1^N)\!\!\!\!\!\! \sum_{v_1^N\in\G^N} \!\! \frac{1}{q^{|B_0|}} \! \left(\prod_{i\in B_1}\!\!\!
p(v_i|x_1^N\!\!,z_1^N\!\!,\!v_1^{i-1}\!)\!\!\right)\\ &\qquad\qquad\qquad\quad \Big(d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}+d(x_1^N,z_1^N-v_1^NG)\Big) \end{align*} where we have replaced $p_{V_i|X_1^NZ_1^NV_1^{i-1}}(v_i|x_1^N,z_1^N,v_1^{i-1})$ with $p(v_i|x_1^N,z_1^N,v_1^{i-1})$ for simplicity of notation and $d_{max}$ is the maximum value of the $d(\cdot,\cdot)$ function. Let \begin{align*} &q_{V_i|X_1^NZ_1^NV_1^{i-1}}(v_i|x_1^N\!z_1^N\!v_1^{i-1})\\ &\qquad\qquad\quad =\left\{\begin{array}{ll} \!\!\!\frac{1}{2}& \mbox{ If } i\in B_0\\ \!\!\!p_{V_i|X_1^NZ_1^NV_1^{i-1}}(v_i|x_1^N\!z_1^N\!v_1^{i-1})& \mbox{ If }i\in B_1 \end{array}\right. \end{align*} and \begin{align*} q_{X_1^NZ_1^N}(x_1^N,z_1^N)=p_{X_1^NZ_1^N}(x_1^N,z_1^N) \end{align*} We have \begin{align}\label{eqn:Davg} \nonumber D_{avg}&\le\sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} q_{V_1^NX_1^NZ_1^N}(v_1^N,x_1^N,z_1^N)\\ &\nonumber \qquad\qquad\qquad \Big(d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}+d(x_1^N,z_1^N-v_1^NG)\Big)\\ &\nonumber\le \!\!\!\!\!\!\!\!\!\!\sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}}\!\!\!\!\!\!\!\!\! \Big(p(v_1^N,x_1^N,z_1^N) \!+\! \left|q(v_1^N\!\!,x_1^N\!\!,z_1^N)\!-\!p(v_1^N\!\!,x_1^N\!\!,z_1^N)\right|\Big)\\ &\qquad\qquad \Big(d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}+d(x_1^N,z_1^N-v_1^NG)\Big) \end{align} where in the last inequality, we dropped the subscripts of the probability distributions for simplicity of notation. 
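The change of measure used here rests on the elementary pointwise bound $q\le p+|q-p|$, so the $Q$-expectation of any nonnegative cost is at most its $P$-expectation plus a cost-weighted total variation term; a toy numerical check of this step (the measures and cost below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
p = rng.random(n); p /= p.sum()          # the true measure P
q_m = rng.random(n); q_m /= q_m.sum()    # the encoder-induced measure Q
cost = rng.random(n)                     # nonnegative cost, playing the role of
                                         # d_max*1{vhat != v} + d(x, z - vG)

lhs = (q_m * cost).sum()
rhs = (p * cost).sum() + (np.abs(q_m - p) * cost).sum()
```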
Therefore, \begin{align}\label{eqn:D1D2D3} D_{avg}&\le D_1+D_2+D_3 \end{align} where \begin{align} &\label{eqn:D1}D_{1} = \sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} p(v_1^N,x_1^N,z_1^N) d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}\\ &\label{eqn:D2}D_{2} = \sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} p(v_1^N,x_1^N,z_1^N) d(x_1^N,z_1^N-v_1^NG)\\ &\nonumber D_{3} = \sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} \left|q(v_1^N,x_1^N,z_1^N)-p(v_1^N,x_1^N,z_1^N)\right|\\ &\label{eqn:D3}\qquad\qquad\qquad \Big(d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}+d(x_1^N,z_1^N-v_1^NG)\Big) \end{align} Here, we only give a sketch for the rest of the proof. The proof for the general case is presented in full in Section \ref{section:General_proof_RD}. The proof proceeds as follows: It can be shown that $D_1\rightarrow 0$ as $N$ increases since the inner code is a good channel code. It is also straightforward to show that $D_2\rightarrow D$ as $N$ increases since the outer code is a good source code. Finally, it can be shown that $D_3\rightarrow 0$ as $N$ increases since the total variation distance between the $P$ and the $Q$ measures is small (again since the outer code is a good source code). \subsection{Source Coding: Proof for the General Case}\label{section:General_proof_RD} \subsubsection{Review of Polar Codes for Arbitrary Sources and Channels} The result of channel polarization for arbitrary discrete memoryless channels applied to $W_c$ implies \cite{sahebi_multilevel_polar_ieee} that for any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $\{A_H|H\le \G\}$ of $[1,N]$ such that for $H\le \G$ and $i\in A_H$, $\left|\bar{I}(W_{c,N}^{(i)}) -\log \frac{|\G|}{|H|} \right|<\epsilon$ and $Z^H(W_{c,N}^{(i)})< 2^{-N^{\beta}}$.
Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|A_H|}{N}\rightarrow p_H$ for some probabilities $p_H,H\le \G$ adding up to one with $\sum_{H\le \G} p_H \log \frac{|\G|}{|H|}=\bar{I}(W_c)$.\\ Similarly, for the channel $W_s$ we have the following: For any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $\{B_H|H\le \G\}$ of $[1,N]$ such that for $H\le \G$ and $i\in B_H$, $\left|\bar{I}(W_{s,N}^{(i)}) -\log \frac{|\G|}{|H|} \right|<\epsilon$ and $Z^H(W_{s,N}^{(i)})< 2^{-N^{\beta}}$. Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|B_H|}{N}\rightarrow q_H$ for some probabilities $q_H,H\le \G$ adding up to one with $\sum_{H\le \G} q_H \log \frac{|\G|}{|H|}=\bar{I}(W_s)$. \begin{lemma} If the channel $(\G,\mathcal{Y}_1,W_1)$ is degraded with respect to the channel $(\G,\mathcal{Y}_2,W_2)$ in the sense of Definition \ref{def:degraded}, then for any $d\in \G$, \begin{align*} Z_d(W_1)\ge Z_d(W_2) \end{align*} \end{lemma} \begin{IEEEproof} Let $(\mathcal{Y}_2,\mathcal{Y}_1,W)$ be a channel so that the condition of Definition \ref{def:degraded} is satisfied. We have \begin{align*} Z_d(W_1)&=\frac{1}{q} \sum_{x\in\G} \sum_{y_1\in\mathcal{Y}_1} \sqrt{W_1(y_1|x)W_1(y_1|x+d)}\\ &=\frac{1}{q} \sum_{x\in\G} \sum_{y_1\in\mathcal{Y}_1} \\ &\sqrt{\!\sum_{y_2\in\mathcal{Y}_2} \!\!\!W_2(y_2|x) W\!(y_1|y_2)\!\!\!\!\sum_{y_2'\in\mathcal{Y}_2} \!\!\!W_2(y_2'|x+d) W\!(y_1|y_2')}\\ &\!\ge \!\frac{1}{q}\!\! \sum_{x\in\G} \sum_{y_1\in\mathcal{Y}_1} \sum_{y_2\in\mathcal{Y}_2} \!\!\!\!\!\sqrt{W_2(y_2|x) W\!(y_1|y_2)^2 W_2(y_2|x\!+\!d) }\\ &= \frac{1}{q} \sum_{x\in\G} \sum_{y_2\in\mathcal{Y}_2} \sqrt{W_2(y_2|x) W_2(y_2|x+d) }\\ &=Z_d(W_2) \end{align*} where the inequality follows from the Cauchy--Schwarz inequality. \end{IEEEproof} \begin{lemma}\label{lemma:Zdc_Zds_RDG} For $i=1,\cdots,N$ and for $d\in \G$ and $H\le\G$, $Z_d(W_{c,N}^{(i)})\ge Z_d(W_{s,N}^{(i)})$ and $Z^H(W_{c,N}^{(i)})\ge Z^H(W_{s,N}^{(i)})$.
\end{lemma} \begin{IEEEproof} Follows from Lemma \ref{lemma:Wc_deg_Ws_RD2}, Lemma \ref{lemma:Degraded_WW} and the previous lemma. \end{IEEEproof} We define some quantities before we introduce the encoding and decoding rules. For $H\le \G$, define \begin{align*} &A_H=\Big\{i\in[1,N]\Big|Z^H(W_{c,N}^{(i)})<2^{-N^{\beta}},\\ &\qquad\qquad\qquad\qquad \nexists K\lneq H \mbox{ such that } Z^K(W_{c,N}^{(i)})<2^{-N^{\beta}}\Big\}\\ &B_{H}=\Big\{i\in[1,N]\Big|Z^{H}(W_{s,N}^{(i)})<1-2^{-N^{\beta}},\\ &\qquad\qquad\qquad \nexists K\lneq H\mbox{ such that }Z^{K}(W_{s,N}^{(i)})<1-2^{-N^{\beta}}\Big\} \end{align*} For $H\le \G$ and $K\le \G$, define $A_{H,K}=A_H\cap B_{K}$. Note that for large $N$, $2^{-N^{\beta}} < 1-2^{-N^{\beta}}$ and therefore, if for some $i\in[1,N]$, $i\in A_H$, Lemma \ref{lemma:Zdc_Zds_RDG} implies $Z^H(W_{s,N}^{(i)})<1-2^{-N^{\beta}}$ and hence $i\in \cup_{K\le H}B_K$. Therefore, for $K \nleq H$, $A_{H,K}=\emptyset$. Therefore $\{A_{H,K}|K\le H\le \G\}$ is a partition of $[1,N]$. Note that the channel polarization results imply that as $N$ increases, $\frac{|A_H|}{N}\rightarrow p_H$ and $\frac{|B_{H}|}{N}\rightarrow q_H$.\\ \subsubsection{Encoding and Decoding} Let $z_1^N\in\G^N$ be an outcome of the random variable $Z_1^N$ known to both the encoder and the decoder. Given $K\le H\le \G$, let $T_H$ be a transversal of $H$ in $\G$ and let $T_{K\le H}$ be a transversal of $K$ in $H$. Any element $g$ of $\G$ can be represented by $g=[g]_K+[g]_{T_{K\le H}}+[g]_{T_H}$ for unique $[g]_K\in K$, $[g]_{T_{K\le H}}\in T_{K\le H}$ and $[g]_{T_H}\in T_H$. Also note that $T_{K\le H}+T_H$ is a transversal $T_K$ of $K$ in $\G$ so that $g$ can be uniquely represented by $g=[g]_K+[g]_{T_K}$ for some $[g]_{T_K}\in T_K$ and $[g]_{T_K}$ can be uniquely represented by $[g]_{T_K}= [g]_{T_{K\le H}}+[g]_{T_H}$.
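For instance, in $\G=\mathds{Z}_8$ with $K=\{0,4\}$ and $H=\{0,2,4,6\}$ one may take $T_{K\le H}=\{0,2\}$ and $T_H=\{0,1\}$; the following check (an illustrative choice of chain, not from the paper) confirms that every $g\in\G$ then has a unique representation $g=[g]_K+[g]_{T_{K\le H}}+[g]_{T_H}$:

```python
# subgroup chain K <= H <= G in G = Z_8 (an illustrative choice)
K = [0, 4]            # subgroup of H = {0, 2, 4, 6}
T_KH = [0, 2]         # transversal of K in H
T_H = [0, 1]          # transversal of H in G

# map each sum k + t1 + t2 (mod 8) back to its triple of components;
# 8 triples hitting all 8 group elements means the representation is unique
reps = {(k + t1 + t2) % 8: (k, t1, t2)
        for k in K for t1 in T_KH for t2 in T_H}
```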
Given a source sequence $x_1^N\in\mathcal{X}^N$, the encoding rule is as follows: For $i\in[1,N]$, if $i\in A_{H,K}$ for some $K\le H\le \G$, $[v_i]_K$ is uniformly distributed over $K$ and is known to both the encoder and the decoder (and is independent of other random variables). The component $[v_i]_{T_K}$ is chosen randomly so that for $g\in [v_i]_K+T_K$, \begin{align*} P(v_i=g)=\frac{p_{V_i|X_1^NZ_1^NV_1^{i-1}}(g|x_1^N,z_1^N,v_1^{i-1})}{p_{V_i|X_1^NZ_1^NV_1^{i-1}}([v_i]_K+T_K|x_1^N,z_1^N,v_1^{i-1})} \end{align*} Note that $v_1^N$ can be decomposed as $v_1^N=[v_1^N]_K+[v_1^N]_{T_{K\le H}}+[v_1^N]_{T_H}$ (with a slight abuse of notation since $K$ and $H$ depend on the index $i$) in which $[v_1^N]_K$ is known to the decoder. The encoder sends $[v_1^N]_{T_{K\le H}}$ to the decoder and the decoder uses the channel code to recover $[v_1^N]_{T_H}$. The decoding rule is as follows: Given $z_1^N$, $[v_1^N]_K$ and $[v_1^N]_{T_{K\le H}}$, and for $i\in A_{H,K}$, let \begin{align*} \hat{v}_i=\argmax_{g\in [v_i]_K+[v_i]_{T_{K\le H}}+T_H} W_{c,N}^{(i)}(z_1^N,\hat{v}_1^{i-1}|g) \end{align*} Finally, the decoder outputs $z_1^N-\hat{v}_1^NG$. Note that the rate of this code is equal to \begin{align*} R &=\sum_{K\le H\le \G} \frac{|A_{H,K}|}{N} \log \frac{|H|}{|K|}\\ &= \sum_{K\le H\le \G} \frac{|A_{H,K}|}{N} \log \frac{|\G|}{|K|} - \sum_{K\le H\le \G} \frac{|A_{H,K}|}{N} \log \frac{|\G|}{|H|}\\ &= \sum_{K\le \G} \frac{|B_{K}|}{N} \log \frac{|\G|}{|K|} - \sum_{H\le \G} \frac{|A_{H}|}{N} \log \frac{|\G|}{|H|}\\ &\rightarrow \bar{I}(W_s)-\bar{I}(W_c)= I(X;U) \end{align*} \subsubsection{Error Analysis} The average distortion between the encoder input and the decoder output is upper bounded by \begin{align*} D_{avg}&\le\sum_{z_1^N\in\G^N}\frac{1}{q^N} \sum_{x_1^N\in\mathcal{X}^N}p_X^N(x_1^N) \sum_{v_1^N\in\G^N} \\ &\qquad\quad \left(\prod_{K\le \G} \prod_{i\in B_K} \!\!
\frac{p(v_i|x_1^N,z_1^N,v_1^{i-1})}{p([v_i]_K+T_K|x_1^N,z_1^N,v_1^{i-1}) \cdot |K|}\right)\\ &\qquad\qquad\qquad\qquad \left(d_{max}\cdot \mathds{1}_{\{\hat{v}\ne v\}}+d(x_1^N,z_1^N-v_1^NG)\right) \end{align*} where $p_{V_i|X_1^NZ_1^NV_1^{i-1}}(\cdot|x_1^N,z_1^N,v_1^{i-1})$ is replaced with $p(\cdot|x_1^N,z_1^N,v_1^{i-1})$ for simplicity of notation. For $i\in B_K$, let \begin{align*} &q_{V_i|X_1^NZ_1^NV_1^{i-1}}(v_i|x_1^N,z_1^N,v_1^{i-1})\\ &\qquad = \frac{p_{V_i|X_1^NZ_1^NV_1^{i-1}}(v_i|x_1^N,z_1^N,v_1^{i-1})}{p_{V_i|X_1^NZ_1^NV_1^{i-1}}([v_i]_K+T_K|x_1^N,z_1^N,v_1^{i-1})\cdot |K|} \end{align*} and \begin{align*} q_{X_1^NZ_1^N}(x_1^N,z_1^N)=p_{X_1^NZ_1^N}(x_1^N,z_1^N) \end{align*} Note that Equations \eqref{eqn:Davg} through \eqref{eqn:D3} are valid for the general case (with the new $Q$ measure). It follows from the analysis of \cite[Section III.F and Section IV]{sahebi_polar_source} that $D_2=D$. The following lemma is also proved in \cite[Section III.F and Section IV]{sahebi_polar_source}. \begin{lemma} With the above definitions, \begin{align*} \|P-Q\|_{t.v.} &= \sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} \left|q(v_1^N,x_1^N,z_1^N)-p(v_1^N,x_1^N,z_1^N)\right|\\ &\le c\,2^{-N^{\beta}} \end{align*} for some constant $c$ depending only on $q$. \end{lemma} It remains to show that $D_1$ vanishes as $N$ approaches infinity. We have \begin{align*} &\sum_{\substack{v_1^N\!\!,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} \!\!\!\!\!\!\! p_{XU}^N(x_1^N\!\!,z_1^N\!\!-v_1^NG) \!\!\!\!\!\!
\sum_{K\le H\le \G} \sum_{i\in A_{H,K}} \mathds{1}_{\Big\{W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|v_i)}\\ &\quad {\color{white}1}_{\le W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|\tilde{v}_i) \mbox{ for some }\tilde{v}_i\in [v_i]_K+[v_i]_{T_{K\le H}}+T_H,\tilde{v}_i\ne v_i\Big\}}\\ &\le \sum_{\substack{v_1^N,z_1^N\in\G^N\\x_1^N\in\mathcal{X}^N}} \!\!\!\!\!\!p_{XU}^N(x_1^N,z_1^N-v_1^NG) \!\!\!\!\sum_{K\le H\le \G} \sum_{i\in A_{H,K}} \sum_{\substack{\tilde{v}_i\in [v_i]_H+T_H\\\tilde{v}_i\ne v_i}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \sqrt{\frac{W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|\tilde{v}_i)}{W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|v_i)}}\\ &=\!\!\!\!\!\! \sum_{K\le H\le \G} \sum_{i\in A_{H,K}} \!\!\!\!\sum_{\substack{v_i\in \G\\\tilde{v}_i\in [v_i]_H+T_H\\\tilde{v}_i\ne v_i}} \!\!\sum_{v_1^{i-1},z_1^N} \!\!\!\frac{1}{q} \!\!\left(\sum_{v_{i+1}^N} \!\!\!\!\frac{1}{q^{N-1}} p_{U}^N(z_1^N\!\!\!-\!v_1^NG)\!\!\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \sqrt{\frac{W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|\tilde{v}_i)}{W_{c,N}^{(i)}(z_1^N,v_1^{i-1}|v_i)}}\\ &= \sum_{K\le H\le \G} \sum_{i\in A_{H,K}} \sum_{\substack{v_i\in \G\\\tilde{v}_i\in [v_i]_H+T_H\\\tilde{v}_i\ne v_i}} Z_{\{v_i,\tilde{v}_i\}}(W_{c,N}^{(i)}) \end{align*} Note that $\tilde{v}_i\in [v_i]_H+T_H$ and $\tilde{v}_i\ne v_i$ imply $d=\tilde{v}_i-v_i\notin H$. We have \begin{align*} Z_{\{v_i,\tilde{v}_i\}}(W_{c,N}^{(i)})\le qZ_d(W_{c,N}^{(i)})\le q Z^H(W_{c,N}^{(i)}) \end{align*} Therefore, \begin{align*} D_1 &\le \sum_{K\le H\le \G} \sum_{i\in A_{H,K}} \sum_{\substack{v_i\in \G\\\tilde{v}_i\in [v_i]_H+T_H\\\tilde{v}_i\ne v_i}} q Z^H(W_{c,N}^{(i)})\\ &\le 4^q q N 2^{-N^{\beta}} \end{align*} Therefore, $D_1\rightarrow 0$ as $N$ increases. \section{Polar Codes Achieve the Shannon Capacity of Arbitrary DMCs} In this section, we prove the following theorem: \begin{theorem} For an arbitrary discrete memoryless channel $(\mathcal{X},\mathcal{Y},W)$, nested polar codes achieve the Shannon capacity.
\end{theorem} For the channel, let $\mathcal{X}=\G$ for some Abelian group $\G$ and let $|\G|=q$. Similarly to the source coding problem, we show that there exists a nested pair of polar codes $\mathds{C}_i\subseteq \mathds{C}_o$ such that $\mathds{C}_o$ is a good channel code and each shift of $\mathds{C}_i$ is a good source code. This will be made precise in what follows.\\ Let $X$ be a random variable with the capacity achieving distribution and let $U$ be uniformly distributed over $\G$. Define the artificial channels $(\G,\G,W_s)$ and $(\G,\mathcal{Y}\times\G,W_c)$ such that for $u,z\in \G$ and $y\in \mathcal{Y}$, \begin{align*} &W_s(z|u)=p_X(z-u)\\ &W_c(y,z|u)=p_{XY}(z-u,y) \end{align*} These channels are depicted in Figures \ref{fig:Ws_C} and \ref{fig:Wc_C}. \begin{figure}[!h] \centering \includegraphics[scale=1]{Channel_s.pdf} \caption{\small Test channel for the inner code (the source coding component)} \label{fig:Ws_C} \end{figure} \begin{figure}[!h] \centering \includegraphics[scale=1]{Channel_c.pdf} \caption{\small Test channel for the outer code (the channel coding component)} \label{fig:Wc_C} \end{figure} Note that for $u,x,z\in \G$ and $y\in\mathcal{Y}$, $p_{UXYZ}(u,x,y,z)=p_U(u)p_X(x)W(y|x)\mathds{1}_{\{z=u+x\}}$. Similarly to the source coding case, one can show that the symmetric capacities of the channels are equal to \begin{align*} &\bar{I}(W_s)=\log q-H(X)\\ &\bar{I}(W_c)=\log q-H(X|Y) \end{align*} We employ a nested polar code in which the inner code is a good source code for the test channel $W_s$ and the outer code is a good channel code for $W_c$. The rate of this code is equal to \begin{align*} R&=\bar{I}(W_c)-\bar{I}(W_s)\\ &=\log q-H(X|Y)-\left(\log q-H(X)\right)=I(X;Y) \end{align*} Note that the channels $W_c$ and $W_s$ are chosen so that the difference of their \emph{symmetric} capacities is equal to the \emph{Shannon} capacity of the original channel.
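These closed forms are easy to confirm numerically. In the sketch below, the input distribution $p_X$ and the channel $W$ are arbitrary illustrative stand-ins for the capacity-achieving distribution and the actual DMC:

```python
import numpy as np

q, ny = 4, 3
rng = np.random.default_rng(6)
p_x = rng.random(q); p_x /= p_x.sum()      # stand-in for the cap.-achieving p_X
W = rng.random((q, ny)); W /= W.sum(axis=1, keepdims=True)
p_xy = p_x[:, None] * W                    # joint of (X, Y)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def sym_cap(Wch, q):
    pxy = Wch / q
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return (pxy[m] * np.log2(pxy[m] / (px * py)[m])).sum()

# W_s(z|u) = p_X(z - u);  W_c(y, z|u) = p_XY(z - u, y), output (z, y) flattened
W_s = np.array([[p_x[(z - u) % q] for z in range(q)] for u in range(q)])
W_c = np.array([[p_xy[(z - u) % q, y] for z in range(q) for y in range(ny)]
                for u in range(q)])

H_X = entropy(p_x)
H_XgY = entropy(p_xy.ravel()) - entropy(p_xy.sum(axis=0))   # H(X|Y)
```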
We postpone the proof to Section \ref{section:proof_C2} where the result is proved for the binary case and Section \ref{section:proof_CG} in which the general proof (for arbitrary Abelian groups) is presented. The rest of this section is devoted to some general definitions and lemmas which are used in the proofs. Let $n$ be a positive integer and let $N=2^n$. Similarly to the source coding case, for both channels $W_s$ and $W_c$ and for $i=1,\cdots,N$, define the synthesized channels as \begin{align*} W_{c,N}^{(i)}(y_1^N,z_1^N,v_1^{i-1}|v_i)&=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} W_c^N(y_1^N,z_1^N|v_1^NG)\\ &=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} p_{XY}^N(z_1^N-v_1^NG,y_1^N) \end{align*} and \begin{align*} W_{s,N}^{(i)}(z_1^N,v_1^{i-1}|v_i)&=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} W_s^N(z_1^N|v_1^NG)\\ &=\sum_{v_{i+1}^N\in \G^{N-i}} \frac{1}{q^{N-1}} p_X^N(z_1^N-v_1^NG) \end{align*} Let the random vector $U_1^N$ be distributed according to $p_U^N$ (uniform) and let $V_1^N=U_1^N G^{-1}$ where $G$ is the polar coding matrix of dimension $N\times N$. Note that since $G$ is a one-to-one mapping, $V_1^N$ is also uniformly distributed. Let $Y_1^N$ and $Z_1^N$ be the outputs of the channel $W_c$ when the input is $U_1^N$.
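In the binary case the polar coding matrix $G=F^{\otimes n}$ built from the kernel $F=\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$ (bit-reversal permutation omitted in this sketch) is its own inverse modulo $2$, so $u_1^N\mapsto u_1^NG$ is a bijection and uniformity of $U_1^N$ indeed carries over to $V_1^N$; a quick check:

```python
import numpy as np

def kron_power(F, n):
    """n-fold Kronecker power of F, reduced mod 2."""
    G = np.array([[1]])
    for _ in range(n):
        G = np.kron(G, F)
    return G % 2

F = np.array([[1, 0], [1, 1]])     # Arikan kernel
n = 3
N = 2 ** n
G = kron_power(F, n)

GG = (G @ G) % 2                   # should be the identity over GF(2)
# image of all 2^N inputs under u -> uG; full size means the map is a bijection
images = {tuple((np.array(u) @ G) % 2) for u in np.ndindex(*(2,) * N)}
```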
Note that for $v_1^N,u_1^N,x_1^N,z_1^N\in \G^N$ and $y_1^N\in \mathcal{Y}^N$, \begin{align*} &p_{V_1^N U_1^N X_1^N Y_1^N Z_1^N}(v_1^N,u_1^N,x_1^N,y_1^N,z_1^N)\\ &=\mathds{1}_{\{v_1^N=u_1^N G^{-1}\}} p_U^N(u_1^N) p_X^N(x_1^N) W^N(y_1^N|x_1^N) \mathds{1}_{\{z_1^N=u_1^N+x_1^N\}}\\ &=\frac{1}{q^N} p_X^N(z_1^N-v_1^N G) W^N(y_1^N|z_1^N-v_1^N G) \mathds{1}_{\{u_1^N=v_1^N G,x_1^N =z_1^N-v_1^NG\}} \end{align*} and \begin{align*} p_{V_1^N Y_1^N Z_1^N}(v_1^N,y_1^N,z_1^N) &=\frac{1}{q^N} p_X^N(z_1^N-v_1^N G) W^N(y_1^N|z_1^N-v_1^N G)\\ p_{V_1^N Z_1^N}(v_1^N,z_1^N) &=\frac{1}{q^N} p_X^N(z_1^N-v_1^N G) \end{align*} \section{Channel Coding: Sketch of the Proof for the Binary Case}\label{section:proof_C2} The following theorems state the standard channel coding and source coding polarization phenomena for the binary case. \begin{theorem}\label{theorem:polar_source2C} For any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $A_0,A_1$ of $[1,N]$ such that for $t=0,1$ and $i\in A_t$, $\left|\bar{I}(W_{s,N}^{(i)}) -t \right|<\epsilon$ and such that for $i\in A_1$, $Z(W_{s,N}^{(i)})< 2^{-N^{\beta}}$. Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|A_t|}{N}\rightarrow p_t$ for some $p_0,p_1$ adding up to one with $p_1=\bar{I}(W_s)$. \end{theorem} \begin{theorem}\label{theorem:polar_channel2C} For any $\epsilon>0$ and $0< \beta <\frac{1}{2}$, there exist a large $N=2^n$ and a partition $B_0,B_1$ of $[1,N]$ such that for $\tau=0,1$ and $i\in B_{\tau}$, $\left|\bar{I}(W_{c,N}^{(i)}) -\tau \right|<\epsilon$ and such that for $i\in B_1$, $Z(W_{c,N}^{(i)})< 2^{-N^{\beta}}$. Moreover, as $\epsilon\rightarrow 0$ (and $N\rightarrow \infty$), $\frac{|B_\tau|}{N}\rightarrow q_\tau$ for some $q_0,q_1$ adding up to one with $q_1=\bar{I}(W_c)$. \end{theorem} \begin{lemma}\label{lemma:Zc_Zs_C2} For $i=1,\cdots,N$, $Z(W_{s,N}^{(i)})\ge Z(W_{c,N}^{(i)})$. \end{lemma} \begin{IEEEproof} Follows since $W_s$ is degraded with respect to $W_c$, using Lemma \ref{lemma:Degraded_ZZ} and Lemma \ref{lemma:Degraded_WW}.
\end{IEEEproof} Define \begin{align*} &A_0=\left\{i\in[1,N]\left|Z(W_{s,N}^{(i)})>1-2^{-N^{\beta}}\right.\right\}\\ &B_0=\left\{i\in[1,N]\left|Z(W_{c,N}^{(i)})>2^{-N^{\beta}}\right.\right\} \end{align*} and $A_1=[1,N]\backslash A_0$ and $B_1=[1,N]\backslash B_0$. For $t=0,1$ and $\tau=0,1$, define $A_{t,\tau}=A_t\cap B_{\tau}$. Note that Lemma \ref{lemma:Zc_Zs_C2} implies \begin{align*} A_{1,0}=\left\{i\in[1,N]\left|2^{-N^{\beta}}<Z(W_{c,N}^{(i)})\le Z(W_{s,N}^{(i)})\le 1-2^{-N^{\beta}}\right.\right\} \end{align*} Since $Z(W_{c,N}^{(i)})$ and $Z(W_{s,N}^{(i)})$ both polarize to $0,1$, as $N$ increases $\frac{|A_{1,0}|}{N}\rightarrow 0$. Note that Theorems \ref{theorem:polar_source2C} and \ref{theorem:polar_channel2C} imply that as $N$ increases, $\frac{|A_1|}{N}\rightarrow \bar{I}(W_s)$ and $\frac{|B_{1}|}{N}\rightarrow \bar{I}(W_c)$. \subsection{Encoding and Decoding} Let $z_1^N$ be a realization of the random vector $Z_1^N$ available to both the encoder and the decoder. Given a partition $A_{0,0},A_{0,1},A_{1,0},A_{1,1}$ of $[1,N]$, a vector $v_1^N\in \G^N$ can be decomposed as $v_1^N=v_{A_{0,0}}+v_{A_{0,1}}+v_{A_{1,0}}+v_{A_{1,1}}$ and similarly, the set $\G^N$ can be decomposed as the direct product $\G^{A_{0,0}}\times\G^{A_{0,1}}\times\G^{A_{1,0}}\times\G^{A_{1,1}}$. Let $v_{A_{0,0}}\in \G^{A_{0,0}}$ be a uniformly distributed random variable available to both the encoder and the decoder which is independent of all other random variables and let $v_{A_{0,1}}$ be the message vector. The encoding is as follows: For $i \in A_{1,1}\cup A_{1,0}$, \begin{align*} v_i=\left\{\begin{array}{ll} 0& \mbox{with prob. } p_{V_i|Z_1^N V_1^{i-1}}(0|z_1^N,v_1^{i-1})\\ 1& \mbox{with prob. } p_{V_i|Z_1^N V_1^{i-1}}(1|z_1^N,v_1^{i-1}) \end{array}\right. \end{align*} The receiver has access to $y_1^N$, $z_1^N$ and $v_{A_{0,0}}$. Note that $\frac{|A_{1,0}|}{N}\rightarrow 0$ as $N$ increases. Assume for the moment that the receiver has access to $v_{A_{1,0}}$.
Then it can use the following decoding rule: For $i\in A_{0,1}\cup A_{1,1}$, \begin{align*} \hat{v}_i=\argmax_{g\in \G} W_{c,N}^{(i)} (y_1^N,z_1^N,\hat{v}_1^{i-1}|g) \end{align*} It is shown below that with these encoding and decoding rules, the probability of error goes to zero. It remains to send the component $v_{A_{1,0}}$ to the decoder which can be done using a regular polar code (which achieves the symmetric capacity of the channel). Note that since the fraction $\frac{|A_{1,0}|}{N}$ vanishes as $N$ increases, the rate loss due to the transmission of $v_{A_{1,0}}$ can be made arbitrarily small. \subsection{Error Analysis} The receiver has access to $z_1^N, v_{A_{0,0}}, v_{A_{1,0}}$ and $y_1^N$. A communication error occurs if $\hat{v}_{A_{0,1}}\ne v_{A_{0,1}}$. The error event is contained in the event: \begin{align*} \bigcup_{i\in A_{0,1}\cup A_{1,1}} \{W_{c,N}^{(i)} (y_1^N,z_1^N,\hat{v}_1^{i-1}|v_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,\hat{v}_1^{i-1}|v_i+1)\} \end{align*} Therefore, we have the following upper bound on the average probability of error: \begin{align*} \mathds{E}\{P_{err}\}&\le \sum_{z_1^N\in\G^N} \frac{1}{q^N} \sum_{v_1^N\in\G^N} \frac{1}{q^{|A_{0,0}|+|A_{0,1}|}} \\ &\left(\prod_{i\in A_{1,0}\cup A_{1,1}} p_{V_i|Z_1^NV_1^{i-1}}(v_i|z_1^N,v_1^{i-1})\right) \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG) \\ &\mathds{1}_{\{\exists i\in[1,N]:W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)\}} \end{align*} Define \begin{align*} p(v_1^N,z_1^N)&=p_{V_1^NZ_1^N}(v_1^N,z_1^N)\\ &=\frac{1}{q^N}\cdot \left(\prod_{i\in [1,N]} p_{V_i|Z_1^NV_1^{i-1}}(v_i|z_1^N,v_1^{i-1})\right) \end{align*} and \begin{align*} q(v_1^N,z_1^N)=\frac{1}{q^N}\cdot \frac{1}{q^{|A_{0,0}|+|A_{0,1}|}} \left(\prod_{i\in A_{1,0}\cup A_{1,1}} p_{V_i|Z_1^NV_1^{i-1}}(v_i|z_1^N,v_1^{i-1})\right) \end{align*} Note that \begin{align*} \mathds{E}\{P_{err}\}&\le \sum_{v_1^N} \sum_{z_1^N} q(v_1^N,z_1^N) \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG)\\ &
\mathds{1}_{\{\exists i\in[1,N]:W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)\}}\\ &\le P_1+P_2 \end{align*} where \begin{align*} P_1&= \sum_{v_1^N} \sum_{z_1^N} p(v_1^N,z_1^N) \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG)\\ &\mathds{1}_{\{\exists i\in[1,N]:W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)\}} \end{align*} and \begin{align*} P_2&= \sum_{v_1^N} \sum_{z_1^N} |q(v_1^N,z_1^N)-p(v_1^N,z_1^N)| \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG) \\ &\mathds{1}_{\{\exists i\in[1,N]:W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)\}}\\ &\le \sum_{v_1^N} \sum_{z_1^N} |q(v_1^N,z_1^N)-p(v_1^N,z_1^N)| \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG)\\ &\le \sum_{v_1^N} \sum_{z_1^N} |q(v_1^N,z_1^N)-p(v_1^N,z_1^N)| \end{align*} We use the following two lemmas from [???]. \begin{lemma} For $p(\cdot,\cdot)$ and $q(\cdot,\cdot)$ defined as above, \begin{align*} &\sum_{v_1^N}\sum_{z_1^N} \left|p(v_1^N,z_1^N)-q(v_1^N,z_1^N)\right|\le 2\sum_{i\in A_{0,0}\cup A_{0,1}}\\ &\mathds{E}\left\{\left|\frac{1}{2}-p_{V_i|V_1^{i-1}Z_1^N}(0|V_1^{i-1},Z_1^N)\right|\right\} \end{align*} \end{lemma} \begin{lemma} For $i\in [1,N]$, if $Z(W_{s,N}^{(i)})\ge 1-\delta_N^2$ then \begin{align*} \mathds{E}\left\{\left|\frac{1}{2}-p_{V_i|V_1^{i-1}Z_1^N}(0|V_1^{i-1},Z_1^N)\right|\right\}\le \sqrt{2}\delta_N \end{align*} \end{lemma} Note that for $i\in A_{0,0}\cup A_{0,1}$, $Z(W_{s,N}^{(i)})\ge 1-\delta_N^2$, where $\delta_N^2=2^{-N^{\beta}}$.
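The second lemma can be checked numerically: with uniform binary input, $\mathds{E}\left|\tfrac12-p_{V|Y}(0|Y)\right|$ equals one quarter of the $L_1$ distance between the two output distributions, which is indeed bounded by $\sqrt{2}\,\delta$ when $\delta^2=1-Z(W)$ (the tightest admissible choice; larger $\delta$ only weakens the bound). A sketch over random binary channels:

```python
import numpy as np

def bias_and_bound(W):
    """E|1/2 - P(V=0|Y)| under uniform input vs. sqrt(2)*delta, delta^2 = 1 - Z(W).

    The expectation equals (1/4) * sum_y |W(y|0) - W(y|1)|.
    """
    Z = np.sqrt(W[0] * W[1]).sum()
    bias = 0.25 * np.abs(W[0] - W[1]).sum()
    delta = np.sqrt(max(1.0 - Z, 0.0))
    return bias, np.sqrt(2.0) * delta

rng = np.random.default_rng(8)
checks = []
for _ in range(200):
    W = rng.random((2, 5))
    W /= W.sum(axis=1, keepdims=True)
    checks.append(bias_and_bound(W))
```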
Therefore, the two lemmas above imply \begin{align*} P_2\le 2\sqrt{2}N\delta_N \end{align*} We have \begin{align*} P_1&\le \sum_{v_1^N} \sum_{z_1^N} p(v_1^N,z_1^N) \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG) \sum_{i\in A_{0,1}}\\ &\mathds{1}_{\{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\le W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)\}}\\ &\le \sum_{v_1^N} \sum_{z_1^N} p(v_1^N,z_1^N) \sum_{y_1^N}W^N(y_1^N|z_1^N-v_1^NG) \sum_{i\in A_{0,1}}\\ &\sqrt{\frac{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)}{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)}}\\ &= \sum_{v_1^N} \sum_{z_1^N} \sum_{y_1^N} \frac{1}{2^N} p_X^N(z_1^N-v_1^N G) W^N(y_1^N|z_1^N-v_1^NG) \\ &\sum_{i\in A_{0,1}}\sqrt{\frac{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)}{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)}}\\ \end{align*} where the last equality follows since $p(v_1^N,z_1^N)=p_{V_1^NZ_1^N}(v_1^N,z_1^N)=\frac{1}{2^N} p_X^N(z_1^N-v_1^N G)$. Therefore, \begin{align*} P_1&\le \sum_{i\in A_{0,1}} \frac{1}{2} \sum_{v_i} \sum_{v_1^{i-1}} \sum_{y_1^N,z_1^N} \sqrt{\frac{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)}{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)}} \sum_{v_{i+1}^{N}} \frac{1}{2^{N-1}} p_X^N(z_1^N-v_1^N G) W^N(y_1^N|z_1^N-v_1^NG)\\ &=\sum_{i\in A_{0,1}} \frac{1}{2} \sum_{v_i} \sum_{v_1^{i-1}} \sum_{y_1^N,z_1^N} \sqrt{\frac{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i+1)}{W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)}} W_{c,N}^{(i)} (y_1^N,z_1^N,{v}_1^{i-1}|{v}_i)\\ &=\sum_{i\in A_{0,1}} \frac{1}{2} \sum_{v_i} Z(W_{c,N}^{(i)})\\ &\le N\delta_N^2 \end{align*} \section{Channel Coding: Proof for the General Case}\label{section:proof_CG} For $H\le \G$, define \begin{align*} &A_H=\Big\{i\in[1,N]\Big|Z^H(W_{s,N}^{(i)})<1-2^{-N^{\beta}},\\ &\qquad\qquad\qquad\qquad \nexists K\le H \mbox{ such that } Z^K(W_{s,N}^{(i)})<1-2^{-N^{\beta}}\Big\}\\ &B_{H}=\Big\{i\in[1,N]\Big|Z^{H}(W_{c,N}^{(i)})<2^{-N^{\beta}},\\ &\qquad\qquad\qquad \nexists K\le H\mbox{ such that }Z^{K}(W_{c,N}^{(i)})<2^{-N^{\beta}}\Big\}
\end{align*} For $H\le \G$ and $K\le \G$, define $A_{H,K}=A_H\cap B_{K}$. Note that for $K\le H\le \G$, $Z^H(W)\le Z^K(W)$. Also note that $Z^H(W_{c,N}^{(i)})\le Z^H(W_{s,N}^{(i)})$. Therefore, if $K\nleq H$ then \begin{align*} A_{H,K}\subseteq \Big\{i\in[1,N]\Big| 2^{-N^{\beta}}<Z^H(W_{c,N}^{(i)})\le Z^H(W_{s,N}^{(i)})<1-2^{-N^{\beta}}\Big\} \end{align*} Since $Z^H(W_{c,N}^{(i)})$ and $Z^H(W_{s,N}^{(i)})$ both polarize to $0,1$, as $N$ increases $\frac{|A_{H,K}|}{N}\rightarrow 0$ if $K\nleq H$. Note that the channel polarization results imply that as $N$ increases, $\frac{|A_H|}{N}\rightarrow p_H$ and $\frac{|B_{H}|}{N}\rightarrow q_H$.\\ \subsection{Encoding and Decoding} Let $z_1^N\in\G^N$ be an outcome of the random variable $Z_1^N$ known to both the encoder and the decoder. Given $K\le H\le \G$, let $T_H$ be a transversal of $H$ in $\G$ and let $T_{K\le H}$ be a transversal of $K$ in $H$. Any element $g$ of $\G$ can be represented by $g=[g]_K+[g]_{T_{K\le H}}+[g]_{T_H}$ for unique $[g]_K\in K$, $[g]_{T_{K\le H}}\in T_{K\le H}$ and $[g]_{T_H}\in T_H$. Also note that $T_{K\le H}+T_H$ is a transversal $T_K$ of $K$ in $\G$, so that $g$ can be uniquely represented by $g=[g]_K+[g]_{T_K}$ for some $[g]_{T_K}\in T_K$, and $[g]_{T_K}$ can be uniquely represented by $[g]_{T_K}= [g]_{T_{K\le H}}+[g]_{T_H}$. Given a source sequence $x_1^N\in\mathcal{X}^N$, the encoding rule is as follows: For $i\in[1,N]$, if $i\in A_{H,K}$ for some $K\le H\le \G$, $[v_i]_K$ is uniformly distributed over $K$ and is known to both the encoder and the decoder (and is independent of the other random variables). The component $[v_i]_{T_{K\le H}}$ is the message and is uniformly distributed but is only known to the encoder.
The component $[v_i]_{T_H}$ is chosen randomly so that for $g\in [v_i]_K+[v_i]_{T_{K\le H}}+T_H$, \begin{align*} P(v_i=g)=\frac{p_{V_i|X_1^NZ_1^NV_1^{i-1}}(g|x_1^N,z_1^N,v_1^{i-1})}{p_{V_i|X_1^NZ_1^NV_1^{i-1}}([v_i]_K+[v_i]_{T_{K\le H}}+T_H|x_1^N,z_1^N,v_1^{i-1})} \end{align*} For $i\in[1,N]$, if $i\in A_{H,K}$ for some $K\nleq H$, $[v_i]_H$ is uniformly distributed over $H$ and is known to both the encoder and the decoder, and the component $[v_i]_{T_H}$ is chosen randomly so that for $g\in [v_i]_H+T_H$, \begin{align*} P(v_i=g)=\frac{p_{V_i|X_1^NZ_1^NV_1^{i-1}}(g|x_1^N,z_1^N,v_1^{i-1})}{p_{V_i|X_1^NZ_1^NV_1^{i-1}}([v_i]_H+T_H|x_1^N,z_1^N,v_1^{i-1})} \end{align*} For the moment assume that in this case $v_i$ is known at the receiver. Note that for $i\in[1,N]$, if $i\in A_{H,K}$ for some $K\le H\le \G$, $v_i$ can be decomposed as $v_i=[v_i]_K+[v_i]_{T_{K\le H}}+[v_i]_{T_H}$, in which $[v_i]_K$ is known to the decoder. The decoding rule is as follows: Given $z_1^N$ and for $i\in A_{H,K}$ for some $K\le H\le \G$, let \begin{align*} \hat{v}_i=\argmax_{g\in [v_i]_K+[v_i]_{T_{K\le H}}+T_H} W_{c,N}^{(i)}(y_1^N,z_1^N,\hat{v}_1^{i-1}|g) \end{align*} It is shown in the next section that with these encoding and decoding rules, the probability of error goes to zero. It remains to send the $v_i$, $i\in A_{H,K}$ with $K\nleq H$, to the decoder, which can be done using a regular polar code (which achieves the symmetric capacity of the channel). Note that since the fraction $\frac{|A_{H,K}|}{N}$ vanishes as $N$ increases if $K\nleq H$, the rate loss due to this transmission can be made arbitrarily small. \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,ariabib} \end{document}
TITLE: Verification of proof involving the lcm of consecutive numbers. QUESTION [1 upvotes]: $\newcommand{\lcm}{\operatorname{lcm}}$ Just a quick one. I have to prove that the lcm of two consecutive numbers is its product. Using the identity $\gcd(a,b) \cdot \lcm(a,b) = a \cdot b$, you can find $\lcm(a,b) = \frac{a \cdot b}{\gcd(a,b)}$ I can prove separately that the gcd of consecutive numbers is 1 (coprime) -- which leaves $a \cdot b$ and thus proving the statement. I'm pretty sure this is complete but for the sake of clarity is there something I'm missing? REPLY [1 votes]: Nothing. That is correct. On the other hand, when two coprime numbers $a$ and $b$ divide a number $c$, then $ab\mid c$ too. So, $ab$ divides every common multiple of $a$ and $b$ and this also proves that $\operatorname{lcm}(a,b)=ab$, without using the fact that $\gcd(a,b)\operatorname{lcm}(a,b)=ab$.
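As a quick numerical sanity check of the claim (not part of the proof; the helper name `lcm` is mine, mirroring the identity used above):

```python
from math import gcd

def lcm(a, b):
    # the identity gcd(a, b) * lcm(a, b) = a * b, rearranged
    return a * b // gcd(a, b)

# consecutive integers are coprime, so their lcm is their product
for n in range(1, 10000):
    assert gcd(n, n + 1) == 1
    assert lcm(n, n + 1) == n * (n + 1)
```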
TITLE: How to handle quotients with little-oh notation? QUESTION [0 upvotes]: As an example, I’m trying to prove that $\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \frac{a}{b}$. Writing $\sin x = x + o(x^2)$ as $x \to 0$ (via its Taylor polynomial) we have $$\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \lim_{x \to 0} \frac{a}{b} \frac{bx + o(x^2)}{bx + o(x^2)}$$ I’m having trouble proving that the limit of the quotient on the RHS is $1$. REPLY [1 votes]: We can factor out the desired fraction $$\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \frac{a}{b} \lim_{x \to 0} \frac{bx + o(x^2)}{bx + o(x^2)}$$ Then, on the right hand side inside the limit, factor out an $x$ in the numerator and denominator to get $$\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \frac{a}{b} \lim_{x \to 0} \frac{x}{x}\frac{b + o(x)}{b + o(x)}$$ then cancel $$\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \frac{a}{b} \lim_{x \to 0} \frac{b + o(x)}{b + o(x)}$$ and you can evaluate the limit, because you no longer have a case of $\frac{0}{0}$. The limit is $1$, and you get $$\lim_{x \to 0} \frac{\sin ax}{\sin bx} = \frac{a}{b}$$
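A numerical illustration (not a proof), with the arbitrary choices $a=3$, $b=5$; the error shrinks like $x^2$, consistent with the $o(x^2)$ bookkeeping:

```python
import math

a, b = 3.0, 5.0

def ratio(x):
    # the quotient whose limit is a/b
    return math.sin(a * x) / math.sin(b * x)

# as x -> 0 the ratio approaches a/b = 0.6
for x in (1e-1, 1e-3, 1e-5):
    assert abs(ratio(x) - a / b) < 10 * x
```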
\begin{document} \title[Submanifolds into Rotational Hypersurfaces]{A Bonnet Theorem for Submanifolds into Rotational Hypersurfaces} \thanks{The first author was partially supported by CAPES-Brazil} \author{C. do Rei Filho} \author{F. Vit\'orio} \date{} \subjclass[2010]{Primary 53C20; Secondary 31C05} \maketitle \thispagestyle{empty} \begin{abstract} In this work, we prove a version of the fundamental theorem of submanifolds for target manifolds with a warped structure. \end{abstract} \section{Introduction} The classical Bonnet theorem establishes necessary and sufficient conditions for the existence of an isometric immersion of a simply connected manifold $M^k$ in a Euclidean space $\mathbb{R}^{d+l}$ with prescribed second fundamental form. There are several proofs of this beautiful achievement in Differential Geometry: one may read, for example, the original proof for $\real^3$ in \cite{bonnet}, or more general versions in \cite{spivak}, \cite{petersen} or \cite{marcos}. The underlying idea of Bonnet's proof is that the metric and the second fundamental form of the immersed manifold must obey certain compatibility equations. The recent development of the study of constant mean curvature surfaces in the three-dimensional geometries with $3$- and $4$-dimensional isometry groups exploits the resemblance with the known results in the theory of space forms. Benoit Daniel, in \cite{dan} and \cite{daniel2}, proved versions of Bonnet's theorem in $\mathbb{S}^2\times \real$, $\mathbb{H}^2\times \real$, the three-dimensional Berger spheres and the three-dimensional Heisenberg groups. His method is to add to the Gauss and Codazzi equations some conditions on the tangential and normal components of a ``vertical'' vector field and to verify directly that these conditions fit the compatibility equations needed to integrate a certain distribution.
As far as the authors know, \cite{dan} is the first paper that deals with a problem in which the classical structure equations are not sufficient to yield the immersion. In fact, the tangent and normal projections yield additional conditions for the immersion. In recent years, the works \cite{cx}, \cite{Kowal} and \cite{ltv} generalized Daniel's work in several directions. The aim of this work is to give another piece of this mosaic, solving the problem for a class of non-homogeneous target manifolds. Our approach, like that of \cite{ltv}, is to reduce the problem to the case of flat Euclidean target spaces by using the canonical isometric embedding of a revolution hypersurface into a flat Euclidean space. We point out that in \cite{cx} the authors prove a version of Bonnet's theorem for warped products provided that the base is a flat space, i.e., $\real\times_\eta\real^n$. Our main result uses a fiber bundle terminology, see Definition \ref{compatibility} and Theorem \ref{main} for the precise statement, but it can be read roughly as follows: \vspace{0.4cm} \begin{minipage}[b]{11.5cm} {\it Let $\big(M^k,g \big)$ be a simply connected Riemannian manifold of dimension $k$ and let $\bar M$ be a revolution hypersurface. There exist compatibility equations for $\bar M$ that are necessary and sufficient for the existence of an isometric immersion $ \mathbf{x}:M^k \to \bar M^{n+1}$. Furthermore, the isometric immersion induces a vector bundle isomorphism. } \end{minipage} \section{Preliminaries.}\label{prelim} Let $\mathbb{E}^n$ be $\real^n$ with the metric $(1,...,1,\epsilon),$ where $\epsilon \in \{-1,1\}.$ Let $\ga: I \to \mathbb{E}^2$ be a curve parametrized by arc length, \[ \ga(t)=\big(f(t), h(t)\big),\] where $f,h:I \to \real$ are smooth functions satisfying $f'(t)^2+\epsilon h'(t)^2=1$, for all $t\in I$. For our purposes, we will assume that $f$ is positive and $h'(t)>0$.
Let $\Phi: I \times \Sp^n \to \mathbb{E}^{n+2}$ be the rotational hypersurface given by \[ \Phi(t,\om)=\big( f(t) \om, h(t)\big), \] where $\om \in \Sp^n$ and we are considering the canonical embeddings $\Sp^n\subset \real^{n+1}= \real^{n+1}\times\{0\}\subset \mathbb{E}^{n+2}$. On the cylinder $I\times \Sp^n$ we define the Riemannian metric $ ds^2= \Phi^*(\lan\,,\ran)$, where $\lan\,,\ran$ is the standard metric of $\mathbb{E}^{n+2}$. It is simple to see that $\bar M^{n+1}=\big(I\times \Sp^n, ds^2\big)$ is a warped product manifold with warped metric $ds^2= dt^2+f(t)^2d\sigma^2$, where $d\sigma^2$ is the standard metric of $\Sp^n$. Note that $\bar M$ has a distinguished unitary vector field $\partial_t= (f'(t)\om, h'(t))$. Notice that the unitary normal field of $\Phi$ is given by $N_t=(h'(t)\omega,-\epsilon f'(t))$. Furthermore, consider the map $\ti{\Psi}:I\times \Sp^n \to \mathbb{E}^{n+2}$ defined by $$\ti{\Psi}(t,\omega)=\Phi(t,\omega)-f(t).f'(t)\partial_t - \epsilon f(t).h'(t)N_t=(0,...,0,h(t)).$$ Then $\ti \Psi$ does not depend on $\omega$ and, hence, the curve $\ti \sigma:I \to \mathbb{E}^{n+2}$ given by $t\mapsto \ti{\Psi}(t,\omega) $ is well defined. Note also that \begin{equation*} \ti \sigma'(t)=(0,...,0,h'(t))=\epsilon h'(t)\ [ h'(t)\partial_t-f'(t)N_t]. \end{equation*} Conversely, let $\bar M^{n+1}=\big(I\times \Sp^n, ds^2\big)$ be a warped product manifold with warped metric $ds^2= dt^2+f(t)^2d\sigma^2$, where $d\sigma^2$ is the standard metric of $\Sp^n.$ If $(1-f'(t)^2)\epsilon >0$ for all $t \in I$, then a function $h:I \to \real$ is well defined by the conditions $f'(t)^2+\epsilon h'(t)^2=1$ and $h'(t)>0$, for all $t\in I$. Therefore $\Phi: \bar M^{n+1} \to \mathbb{E}^{n+2}$ defined by \begin{equation}\label{isometric} \Phi(t,\om)=\big( f(t) \om, h(t)\big), \end{equation} is an isometric immersion. \\ Let $\bar M=\big(I\times \Sp^n, ds^2\big)$ be a warped product manifold with metric $ds^2= dt^2+f(t)^2d\sigma^2$.
Let $\bar{\nabla}$ be the Levi-Civita connection on $\bar M$. The covariant derivative of the vector field $\partial_t$ with respect to any tangent vector satisfies \begin{equation} \bar{\nabla}_u \partial_t = \frac{f'(t)}{f(t)}(u-\langle u,\partial_t \rangle \partial_t), \,\,\, \forall \, u \in T\bar{M}. \end{equation} Thus, in particular, the orbits of the vector field $\partial_t$ are geodesics and the vector field $V=f(t) \partial_t$ is closed conformal with conformality factor $f'(t)$, i.e., \begin{eqnarray}\label{c.c.f.} \bar\nabla_u V=f'(t)u, \,\, \forall \, u \in T\bar{M}. \end{eqnarray} It is clear that $\bar{M}$ is foliated by the spheres $\Sigma_t=\Sp^n(f(t))$, $t\in I$, and $\partial_t$ is the unit normal vector field of each leaf $\Sigma_t$, which is umbilical; moreover, $\Sigma_t$ has mean curvature $-\frac{f'(t)}{f(t)}$ and sectional curvature equal to $1/f(t)^2$. Let $\bar{R}$ be the curvature tensor on $\bar M$. We observe that \begin{eqnarray} \bar{R}(u,v)\partial_t = \frac{f''(t)}{f(t)}\big(\langle u,\partial_t \rangle v - \langle v,\partial_t \rangle u\big), \,\,\, \forall \, u,v \in T\bar{M}. \end{eqnarray} In this way, we can compute the curvature tensor $\bar{R}$ as follows: \begin{equation} \label{curv-tensor} \begin{array}{rcl} \bar{R}(u,v)w &=& \Big(\frac{1-f'(t)^2}{f(t)^2} + \frac{f''(t)}{f(t)} \Big) \Big[ \langle w,\partial_t \rangle\big( v,u ,\partial_t \big) + \langle \big(v,w,u \big),\partial_t \rangle \partial_t \Big] \\ &&+ \frac{1-f'(t)^2}{f(t)^2} \big( u,v,w\big) \end{array} \end{equation} where $\big( u,v,w\big)=\langle v,w \rangle u - \langle u,w \rangle v$, for all $u,v,w \in T\bar{M}$.
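Indeed, the closed conformal property (\ref{c.c.f.}) follows in one line from the formula for $\bar{\nabla}_u \partial_t$: \begin{equation*} \bar\nabla_u V = \bar\nabla_u \big(f(t)\partial_t\big) = u\big(f(t)\big)\partial_t + f(t)\bar\nabla_u \partial_t = f'(t)\langle u,\partial_t \rangle \partial_t + f'(t)\big(u-\langle u,\partial_t \rangle \partial_t\big) = f'(t)u, \end{equation*} since $u(f(t))=f'(t)\langle u,\partial_t \rangle$ for every $u\in T\bar M$.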
It is worthwhile to mention that the shape operator $A$ can be expressed as $$Av=-\frac{h'(t)}{f(t)}v+\Big(\frac{h'(t)}{f(t)}+\epsilon\frac{f''(t)}{h'(t)}\Big)\langle v,\partial_t \rangle \partial_t \quad \forall \,\,v\in T\bar M.$$ \section{Necessary conditions for an isometric immersion} Let $\mathbf{x}:M^k \to \bar M$ be an isometric immersion of a $k-$dimensional Riemannian manifold $M^k$ into the warped product $\bar M$. Let us denote by $T$ the canonical projection of $\partial_t$ onto $TM$, so that \begin{equation}\label{troc.xino} \partial_t=T + \varrho, \end{equation} where $\varrho $ is a section of $TM^{\perp}.$ \begin{proposition}\label{prop.princ.1}Let $\mathbf{x}:M^k \to \bar M$ be an isometric immersion of a $k-$dimensional Riemannian manifold $M^k$ into the warped product $\bar M$. For all $u \in TM,$ we have \begin{equation}\label{bola} \alpha(u,T) + \nabla^{\perp}_u \varrho = -\frac{f'(t)}{f(t)}\langle u, T \rangle \varrho \end{equation} \begin{equation}\label{lopo} \nabla_u T - A_{\varrho}u= \frac{f'(t)}{f(t)}(u-\langle u,T \rangle T), \end{equation} where $\nabla,$ $\nabla^{\perp},$ $\alpha$ and $A$ denote the Levi-Civita connection in $M,$ the induced normal connection in $M,$ the second fundamental form of the immersion $\mathbf{x} (M)$ and the Weingarten endomorphism associated to $\alpha,$ respectively. \end{proposition} \noindent{\bf Proof:} Differentiate $\partial_t=T + \varrho $ with respect to $u$ and use the Gauss and Weingarten formulae. The result follows by taking the tangent and normal components of that derivative. \begin{flushright}$\Box$ \end{flushright} The expression (\ref{curv-tensor}) of the curvature tensor allows us to write the Gauss, Codazzi and Ricci equations in the following way: \begin{proposition}\label{prop.princ.2} Let $\mathbf{x}:M^k \to \bar M$ be an isometric immersion of a $k-$dimensional Riemannian manifold $M^k$ into the warped product $\bar M$. Let us consider $u,v,w,z \in TM$ and $\xi, \eta \in TM ^\perp$.
Then the Gauss, Codazzi and Ricci equations for the isometric immersion $\mathbf{x}$ are, respectively: \begin{eqnarray} \langle R(u,v)w,z \rangle \! &=& \! \lambda(t) \big( u,v,w,z\big) \nonumber \\ & &+\,\, \mu(t) \big[ \lan w, T\ran \big(v,u,T,z\big)+\big( v,w,u,T\big) \lan T,z \ran \big]\\ & &+\,\, \langle \alpha(u,z), \alpha(v,w) \rangle - \langle \alpha(u,w),\alpha(v,z) \rangle \nonumber \end{eqnarray} \begin{eqnarray} (\nabla_u ^{\perp} \alpha)(v,w) - (\nabla_v ^{\perp} \alpha)(u,w) = \mu(t) \big( v,w,u,T\big) \varrho \end{eqnarray} \begin{equation} \langle R^\perp (u,v)\xi,\eta \rangle = \langle [A_{\xi},A_{\eta}]u,v \rangle, \end{equation} where \begin{equation*} \lambda(t)= \frac{1-f'(t)^2}{f(t)^2} \quad \textrm{and} \quad \mu(t) = \frac{1-f'(t)^2}{f(t)^2} + \frac{f''(t)}{f(t)} . \end{equation*} \end{proposition} \noindent{\bf Proof:} It is a standard computation and follows directly from (\ref{curv-tensor}). \begin{flushright}$\Box$ \end{flushright} To finish this section, we point out an intrinsic characterization of the vector field $T$, given by the following \begin{proposition}\label{gradient} Let $\mathbf{x}:M^k \to \bar M$ be an isometric immersion of a $k-$dimensional Riemannian manifold $M^k$ into the warped product $\bar M$. There exists a function $\mk h: M \to \real$ such that $T=\grad_M \mk h$. \end{proposition} \section{Establishing the sufficient conditions} Let $(M^k, \langle\,,\,\rangle)$ be a Riemannian manifold and let us denote by $\nabla$ its Levi-Civita connection. Let $E$ be a Riemannian vector fiber bundle on $M^k$ with rank $n+1-k$ and let us denote by $\nabla'$ its compatible connection. Let $\mk h \in C^{\infty}(M),\, \varrho \in \Gamma(E)$ and $\alpha '$ be a smooth function on $M$, a section of the vector fiber bundle $E$ and a symmetric section of the homomorphism fiber bundle $\textrm{Hom\,}(TM \times TM, E)$, respectively.
Let us define for each local section $\xi\in \Gamma(E)$ the map $A'_{\xi}:TM \to TM$ by \begin{equation} \langle A'_{\xi}u,v \rangle = \langle \alpha'(u,v),\xi \rangle, \end{equation} for all $u,v \in TM$. By virtue of Proposition \ref{gradient}, we can rephrase the necessary conditions obtained in Propositions \ref{prop.princ.1} and \ref{prop.princ.2} using this abstract framework of fiber bundles. \vspace{0.1cm} \begin{definition}\label{compatibility} We say that the data $(M^k,\langle,\rangle, \nabla',\alpha',\varrho,\mk h)$ satisfies the compatibility equations for $\bar{M}$ if \begin{equation}\label{eq.comp.1} {\arrowvert {T} \arrowvert}^2 + {\arrowvert \varrho \arrowvert}^2 =1, \,\,\,\,\, {T=\grad_M \mk h} \end{equation} and for all $u,v,z,w \in \chi(M)$ and $\xi, \eta \in \Gamma( E)$ the following equations hold: \begin{equation}\label{eq.comp.2} \alpha '(v,T) + \nabla'_v \varrho = -\frac{f'(\mk h)}{f(\mk h)}\langle v, T \rangle \varrho \end{equation} \begin{equation}\label{eq.comp.3} \nabla_v T - A'_{\varrho}v= \frac{f'(\mk h)}{f(\mk h)}\big( v-\langle v, T \rangle T \big) \end{equation} \begin{eqnarray}\label{eq.comp.4} \langle R(u,v)w,z \rangle \! &=& \! \lambda(\mk h) \big( u,v,w,z\big) \nonumber \\ & &+\,\, \mu(\mk h) \big[ \lan w, T\ran \big(v,u,T,z\big)+\big( v,w,u,T\big) \lan T,z \ran \big]\\ & &+\,\, \langle \alpha'(u,z), \alpha'(v,w) \rangle - \langle \alpha'(u,w),\alpha'(v,z) \rangle \nonumber \end{eqnarray} \begin{eqnarray} (\nabla'_u \alpha')(v,w) - (\nabla'_v \alpha')(u,w) = \mu(\mk h) \big( v,w,u,T\big) \varrho \end{eqnarray} \begin{equation} \langle R' (u,v)\xi,\eta \rangle = \langle [A'_{\xi},A'_{\eta}]u,v \rangle, \end{equation} where $R'$ denotes the curvature tensor of the connection $\nabla'$ and \begin{equation*} \lambda(\mk h)= \frac{1-f'(\mk h)^2}{f(\mk h)^2} \quad \textrm{and} \quad \mu(\mk h) = \frac{1-f'(\mk h)^2}{f(\mk h)^2} + \frac{f''(\mk h)}{f(\mk h)} .
\end{equation*} \end{definition} \section{Proof of the Fundamental Theorem} Let us consider the fiber bundle obtained by the Whitney sum of the tangent fiber bundle $TM$ with the fiber bundle $E$ as in Definition \ref{compatibility}, $\ti{E}= TM\oplus_w E$, endowed with the product metric and compatible connection \begin{eqnarray*} \nabla''_vu &=& \nabla_vu + \alpha'(v,u), \quad u,v\in TM ,\\ \nabla''_v\xi &=& -A'_{\xi}v + \nabla'_v\xi, \quad v \in TM \,\,\textrm{and} \,\, \xi \in \Gamma(E). \end{eqnarray*} It is easy to see that the section $X=T+\varrho \in \Gamma (\ti{E})$ satisfies $\arrowvert X \arrowvert = 1$ and \begin{equation} \nabla''_vX=\frac{f'(\mk h)}{f(\mk h)}(v-\langle v,T \rangle X), \quad \forall v\in TM. \end{equation} In particular, \begin{equation*} \nabla''_vf(\mk h)X=f'(\mk h)v, \quad \forall v\in TM. \end{equation*} Moreover, if $\langle v,T \rangle = \langle v, X \rangle = 0,$ then $v(f(\mk h))=v(h(\mk h))= v(\mk h)=0,$ since $v(\mk h)=\langle \grad_M \mk h,v\rangle = \langle T,v\rangle. $ \\ Now, assume that $(1-f'(\mk h)^2)\epsilon >0$. Thus, the function $h$ given by the expressions $f'(\mk h)^2+\epsilon h'(\mk h)^2=1$ and $h'(\mk h)>0$ is well defined, up to a constant. Let $\check{E}={E}\oplus_w\langle \zeta \rangle$ be the semi-Riemannian fiber bundle obtained by summing the semi-Riemannian line fiber bundle $\langle \zeta \rangle$ to ${E}$. On $\check{E}$, we define \begin{eqnarray*} \nk \!\!\!\! &:& \!\!\!\! TM\times \check{E} \setap \check{E} \\ \ak \!\!\!\! &:& \!\!\!\!
TM\times TM \setap \check{E} \end{eqnarray*} by putting \begin{eqnarray} \ak (u,v) &=& \alpha' (u,v) +\epsilon(- \ti \lambda (\mk h) \langle u,v \rangle + \ti \mu (\mk h) \langle u,X\rangle \langle v,X \rangle)\zeta \\ \nk_v\phi &=& \nabla'_v(\phi)_E +\epsilon \ti \mu (\mk h) \langle v,X \rangle \Big( \langle \phi,X \rangle \zeta - \langle \phi,\zeta \rangle \varrho\Big) +\epsilon v(\langle \phi,\zeta\rangle)\zeta \end{eqnarray} where $u,v \in TM$, $\phi \in \Gamma(\check{E})$, $(\phi)_E$ is the canonical projection on $E$ and $\ti \lambda, \ti \mu$ are defined by \begin{equation} \ti \lambda(\mk h)= \frac{h'(\mk h)}{f(\mk h)} \quad \textrm{and} \quad \ti \mu (\mk h)= \frac{h'(\mk h)}{f(\mk h)} + \epsilon\frac{ f''(\mk h)}{h'(\mk h)} \end{equation} \begin{remark} Note that $\ti \lambda (\mk h)^2=\epsilon\lambda(\mk h)$ and $\ti \lambda(\mk h).\ti \mu (\mk h)= \epsilon\mu(\mk h).$ \end{remark} Under the notations and definitions above, it is straightforward to conclude the following \begin{lemma} Assume that the data $(M^k,\langle,\rangle, \nabla',\alpha',\varrho,\mk h)$ satisfy the compatibility equations for $\bar{M}$. If $(1-f'(\mk h)^2)\epsilon >0$, then the data $(M,g,\nk,\ak)$ satisfy the compatibility equations for $\mathbb{E}^{n+2}$. \end{lemma} Thus, using the fundamental theorem of submanifolds, there exists an isometric immersion $g:M^k \setap \mathbb{E}^{n+2}$ and a fiber bundle isometry $\check{g}:\check{E} \setap TM^\bot$ along $g$, such that \begin{eqnarray} \ti \alpha = \check{g} \check{\alpha} \end{eqnarray} \begin{eqnarray} \ti{\nabla}^{\bot} \check{g} =\check{g} \check{\nabla} \nonumber \end{eqnarray} where $\ti{\nabla}^{\perp}$ and $\ti{\alpha}$ are the normal connection and second fundamental form of $g(M) \subset \mathbb{E}^{n+2},$ respectively.
Denoting by $D$ the covariant derivative of $\mathbb{E}^{n+2}$, a simple computation, identifying $\check{g}(X)$ with $X$ and $\check{g}(\zeta)$ with $\zeta$, shows that \begin{equation} \label{equ.deriv} \begin{array}{rcl} D_v X &=& \frac{f'(\mk h)}{f(\mk h)}(v - \langle v,T \rangle X) + \epsilon \langle v,T \rangle(\ti \mu - \ti \lambda)\zeta, \quad \forall \, v \in TM. \\ \\ D_v \zeta &=& \ti \lambda v - \ti \mu \langle v, T \rangle X, \quad \forall \, v \in TM. \nonumber \end{array} \end{equation} \begin{claim} \label{prop.frob} For all $u,v\in TM,$ with $\langle u,T \rangle = \langle v,T \rangle =0,$ we have that $\langle D_vu,X\rangle = \langle D_uv, X \rangle.$ In particular, $\langle [u,v],T \rangle =0. $ \end{claim} Now, assume further that $\mk h:M\setap \real$ is a smooth submersion, i.e., $T=\grad_M \mk h \neq 0$ at every point of $M$. First of all, note that Claim \ref{prop.frob} implies that the distribution $$p\in M \longmapsto \mathfrak{D} (p) =\{v\in T_pM; \langle v, T \rangle =0\}$$ is involutive, hence totally integrable. Therefore, $M$ admits a codimension one foliation $\mathfrak{F}(T)$ oriented by the unitary vector field $T/|T|$. Note also that the foliation $\mathfrak{F}(T)$ is determined by the submersion $\mk h:M \setap \real$ through its level sets; in other words, $\mathfrak{F}(T)=\{\mk h^{-1}(s)\subset M;s\in \mk h(M)\}$.\\ Let $\Psi:M\setap \mathbb{E}^{n+2}$ be the smooth map defined by $$\Psi(p)=g(p)-f(\mk h(p)).f'(\mk h(p))X|_p - \epsilon f(\mk h(p)).h'(\mk h(p))\zeta|_p.$$ \begin{claim} The map $\Psi$ is constant along each connected leaf of the foliation $\mathfrak{F}(T).$ \end{claim} \noindent {\bf Proof:} In order to compute a derivative of $\Psi$, note that for $v \in \mathfrak{D}$ the equations (\ref{equ.deriv}) can be rewritten as \begin{equation*} \begin{array}{rcl} D_v X &=& \frac{f'(\mk h)}{f(\mk h)}v, \\ \\ D_v \zeta &=& \ti \lambda v=\frac{h'(\mk h)}{f(\mk h)} v.
\end{array} \end{equation*} Hence, \[ D_v \Psi= v- f(\mk h(p)).f'(\mk h(p))D_v X - \epsilon f(\mk h(p)).h'(\mk h(p))D_v\zeta =v- f'^2 v - \epsilon h'^2 v=0, \] since $f'^2+\epsilon h'^2=1$. \begin{flushright}$\Box$ \end{flushright} Now, the connectedness of $M$ and the fact that $\mk h:M \setap \real$ is a smooth submersion imply that $\mk h(M)\subset \real$ is an open interval. Moreover, since the map $\Psi$ is constant along each level set of $\mk h$, there exists a unique smooth map $\sigma:\mk h(M)\subset \real \setap \mathbb{E}^{n+2}$ such that $\Psi=\sigma\circ \mk h.$ \begin{center} \hspace{0.5cm}\xymatrix{ M \ar[r]^{\Psi} \ar[d]^{\mk h} & \mathbb{E}^{n+2} \\ \mk h(M) \ar[ur]_{\sigma} } \end{center} For each $s\in \mk h(M),$ the equation $$\langle \sigma(s)-g(p),\sigma(s)-g(p)\rangle=f(s)^2,\,\, \forall \, p\in \mk h^{-1}(s),$$ shows that the image of each leaf $\mk h^{-1}(s)$ is contained in an $(n+1)-$dimensional pseudosphere of $\mathbb{E}^{n+2}$ centered at $\sigma(s)$ with radius $f(s)$, i.e., $g(\mk h^{-1}(s)) \subset \mathbb{S}^{n+1}_{f(s)}(\sigma(s))\subset \mathbb{E}^{n+2}$. We call $\sigma:\mk h(M) \setap \mathbb{E}^{n+2}$ the {\em curve of centers} of $M$. \begin{claim}\label{cenret} $\sigma(\mk h(M))\subset \mathbb{E}^{n+2}$ is a straight line. \end{claim} \noindent{\bf Proof:} In order to prove this claim, we will prove that the curvature of the curve of centers is identically zero. Since $\Psi=\sigma\circ \mk h$ and $T(\mk h)=|T|^2$, we obtain $\sigma'(\mk h)=|T|^{-2}T(\Psi).$ Thus, computing $T(\Psi),$ we have $$\sigma'(\mk h)= \epsilon h'(\mk h)\Big( h'(\mk h)X-f'(\mk h)\zeta\Big ).$$ In the same way, $\sigma''(\mk h)=|T|^{-2}T(\Psi')$, where $\Psi'=\sigma'\circ \mk h$. Thus, computing $T(\Psi')$ we have $$\sigma''(\mk h)= \epsilon h''(\mk h)\Big( h'(\mk h)X-f'(\mk h)\zeta\Big ).$$ Thus, \begin{equation} \label{zerocurvature}\sigma''(\mk h)= \frac{h''(\mk h)}{h'(\mk h)} \sigma'(\mk h).
\end{equation} Therefore, (\ref{zerocurvature}) implies that the curvature of $\sigma(\mk h)$ is zero, since the speed of the curve $\sigma(\mk h)$ is $|\sigma'(\mk h)|=h'(\mk h)$. \begin{flushright}$\Box$ \end{flushright} Note that $\langle \sigma',\sigma' \rangle = \epsilon h'^2.$ As a direct consequence of Claim \ref{cenret}, we have that $$H=\{x\in \mathbb{E}^{n+2} ; \langle x,\sigma'(s) \rangle =0\}$$ is a Riemannian hyperplane of $\mathbb{E}^{n+2}$ which does not depend on the parameter $s$. Thus the equation $$\langle \sigma'(s),\Psi(p)-g(p)\rangle = 0, \, \forall \, p\in \mk h^{-1}(s), $$ tells us that for each $s\in \mk h(M),$ $$g(\mk h^{-1}(s)) \subset \big(\mathbb{S}^{n+1}_{f(s)}(\sigma(s))\, \cap \,(\sigma(s)+ H) \big) .$$ \medskip Let us take $s_0\in \mk h(M)$ and let $\tau:\mathbb{E}^{n+2}\setap \mathbb{E}^{n+2}$ be the rigid motion of $\mathbb{E}^{n+2}$ such that $\tau(\sigma(\mk h(M)))$ is contained in the axis $Ox_{n+2}$, $(0,...,0,h(s_0))=\tau(\sigma(s_0))$ and the velocity vector $\tau(\sigma')$ points in the same orientation as the axis $Ox_{n+2}$. Note that such an isometry satisfies $$\tau(g(\mk h^{-1}(s_0))) \subset \big(\mathbb{S}^{n+1}_{f(s_0)}((0,...,0,h(s_0)))\, \cap \,((0,...,0,h(s_0))+\{x\in \mathbb{E}^{n+2};x_{n+2}=0\}) \big).$$ Thus, by construction, the curves $s\in \mk h(M)\setap (0,...,0,h(s))$ and $s\in \mk h(M)\setap \tau(\sigma(s))$ coincide at $s_0$ and their derivatives coincide at all points ($\tau(\sigma'(s))$ and $(0,...,0,h'(s))$ point in the same direction and $|\tau(\sigma'(s))|=h'(s)$); hence $(0,...,0,h(s))=\tau(\sigma(s)), \forall \, s\in \mk h(M)\subset I.$ Therefore, we conclude that $\tau(g(M))\subset \Phi(\bar M^{n+1}),$ where $\Phi$ is the isometric immersion given in (\ref{isometric}).
Now we summarize the information we have obtained, up to an isometry of $\mathbb{E}^{n+2}$: \begin{enumerate} \item[a)] There exists an isometric immersion $g:M^k \setap \mathbb{E}^{n+2}$ and a fiber bundle isometry $\check{g}:\check{E} \setap TM^\bot$ along $g$, such that $\ti \alpha = \check{g} \check{\alpha}$ and $\ti{\nabla}^{\bot} \check{g} =\check{g} \check{\nabla},$ where $\ti{\nabla}^{\perp}$ and $\ti{\alpha}$ are the normal connection and second fundamental form of $g(M) \subset \mathbb{E}^{n+2},$ respectively.\\ \item[b)] The curve $\sigma(s)=g(p)-f(s).f'(s)X|_p -\epsilon f(s).h'(s)\zeta|_p$ parametrizes an open interval of the axis $Ox_{n+2}$. More specifically, $\sigma(s)=(0,...,0,h(s)).$ We also have $\sigma'(s)=\epsilon h'(s)\Big(h'(s)X - f'(s)\zeta\Big )=(0,...,0,h'(s)).$\\ \item[c)] We have, for each $s\in \mk h(M)$, $$g(\mk h^{-1}(s)) \subset \big(\mathbb{S}^{n+1}_{f(s)}(\sigma(s))\, \cap \,(\sigma(s)+\{x\in \mathbb{E}^{n+2};x_{n+2}=0\}) \big).$$ In particular, $g(M)\subset \Phi(\bar M^{n+1}).$\\ \item[d)] Furthermore, $(0,...,0,h(s))=\Phi(s,\omega)-f(s).f'(s)\partial_t - \epsilon f(s).h'(s)N_t $ and $(0,...,0,h'(s))=\epsilon h'(s)^2\partial_t-\epsilon f'(s)h'(s)N_t.$\\ \item[e)] Items (b) and (d) provide that $X=\partial_t|_{g(M)}$ and $\zeta=N_t|_{g(M)}$. Indeed, given $p \in M$, take $\omega \in \mathbb{S}^n$ such that $g(p)=\Phi(s,\omega)$. Thus, items (b) and (d) above imply the following system: \begin{equation} \left\{ \begin{array}{l} \label{sys1} -f(s).f'(s)X - \epsilon f(s).h'(s)\zeta = -f(s).f'(s)\partial_t - \epsilon f(s).h'(s)N_t \\ \\ \epsilon h'(s)^2X - \epsilon f'(s)h'(s)\zeta = \epsilon h'(s)^2\partial_t - \epsilon f'(s)h'(s)N_t. \end{array}\right. \end{equation} Since $f(s)\neq 0$ and $h'(s)\neq 0, \,\, \forall\, s\in I$, the system of equations (\ref{sys1}) is equivalent to \begin{equation} \label{sys2} \left\{ \begin{array}{l} f'(s)(X-\partial_t) = -\epsilon h'(s)(\zeta - N_t)\\\\ h'(s)(X-\partial_t) = f'(s)(\zeta - N_t) \end{array}\right.
\end{equation} Note that the system of equations (\ref{sys2}) implies that $X=\partial_t$ and $\zeta=N_t$ at $p$. As the choice of $p\in M$ was arbitrary, the result follows. \end{enumerate} Therefore, there exists an isometric immersion $\mathbf{ x}:M^k\setap \bar M,$ defined by $g=\Phi \circ \mathbf{ x}$ and a fiber bundle isometry $\mathbf{\ti x}: E \setap TM^\bot$ along $\mathbf{ x},$ defined by $\mathbf{\ti x}=\check{g}|_E,$ such that $\alpha=\mathbf{\ti x} \alpha'$ and $\nabla^{\bot} \mathbf{\ti x}=\mathbf{\ti x}\nabla',$ where $\nabla^{\perp}$ and $\alpha$ are the normal connection and the second fundamental form of $\mathbf{ x}(M) \subset \bar M,$ respectively. Moreover, $$\partial_t=\mathbf{x}_*(T) + \mathbf{\ti x} (\varrho).$$ This allows us to conclude the following \begin{theorem}\label{main} Let $\big(M^k,\langle,\rangle \big)$ be a $k-$dimensional simply connected Riemannian manifold. Assume that the data $(M^k,\langle,\rangle, \nabla',\alpha',\varrho,\mk h)$ satisfy the compatibility equations for $\bar M$, as in Definition \ref{compatibility}, assume also that $\mk h$ is a smooth submersion and that $(1-f'(\mk h)^2)\epsilon >0$. Then, there exists an isometric immersion $\mathbf{x}:M^k \to \bar M^{n+1}$ and a fiber bundle isometry $ \mathbf{\ti x}: E \rightarrow TM^\bot$ along $\mathbf{x}$, such that $$\alpha=\mathbf{\ti x}\alpha', \quad \nabla^{\bot}\mathbf{\ti x}= \mathbf{\ti x}\nabla' \quad \textrm{and} \quad \partial_t=\mathbf{x}_*(\grad_M \mk h) + \mathbf{\ti x}(\varrho)$$ where $\nabla^{\bot}$ and $\alpha$ are the normal connection and the second fundamental form of $\mathbf{x}(M) \subset \bar M$, respectively. \end{theorem}
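The elimination step in item (e) can also be spot-checked numerically: writing $u=X-\partial_t$ and $v=\zeta-N_t$, the homogeneous system (\ref{sys2}) has coefficient matrix with determinant $-(f'(s)^2+h'(s)^2)$, which is nonzero whenever $h'(s)\neq 0$, forcing $u=v=0$. A minimal sketch in Python, with sample rational values standing in for $f'(s)$ and $h'(s)$:

```python
from fractions import Fraction

def sys2_determinant(fp, hp):
    """Determinant of the 2x2 homogeneous system (sys2) in the unknowns
       u = X - d_t and v = zeta - N_t:
           fp*u + hp*v = 0
           hp*u - fp*v = 0
    """
    return -(fp * fp + hp * hp)

# h'(s) != 0 on I, so the determinant is strictly negative; the homogeneous
# system therefore admits only the trivial solution u = v = 0, i.e.
# X = partial_t and zeta = N_t.
samples = [(Fraction(1, 2), Fraction(3)), (Fraction(0), Fraction(1)),
           (Fraction(-2), Fraction(5, 7))]
assert all(sys2_determinant(fp, hp) < 0 for fp, hp in samples)
print("only the trivial solution: X = partial_t, zeta = N_t")
```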
TITLE: Prove that the equation $x^{15} + 7x^{3} - 5 = 0$ has exactly one real solution QUESTION [5 upvotes]: Prove that the equation $x^{15} + 7x^{3} - 5 = 0$ has exactly one real solution. A hint given by the teacher is to analyze the function $f(x) = x^{15} + 7x^{3} - 5$. REPLY [3 votes]: We will use Descartes's rule of signs to determine the number of positive and negative roots of $x^{15}+7x^3-5$. (Zero is obviously not a root.) The non-zero coefficients of the polynomial itself change sign exactly once. Therefore, it has exactly one positive root. Replacing $x$ with $-x$ we get $-x^{15}-7x^3-5$, which has no sign changes at all. Therefore the original polynomial has no negative roots. In conclusion, the original polynomial has exactly one real root, and it is positive.
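The teacher's hint also yields a quick numerical check: $f'(x)=15x^{14}+21x^2\ge 0$, so $f$ is nondecreasing, and the sign change $f(0)=-5<0<3=f(1)$ brackets the unique root. A minimal sketch in plain Python:

```python
def f(x):
    return x**15 + 7 * x**3 - 5

def fprime(x):
    return 15 * x**14 + 21 * x**2  # >= 0 everywhere: f is nondecreasing

# Sign change brackets the unique root: f(0) = -5 < 0 < f(1) = 3.
assert f(0) < 0 < f(1)

# Bisection narrows it down.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

root = (lo + hi) / 2
print(f"unique real root is approximately {root:.6f}")
assert abs(f(root)) < 1e-9 and fprime(root) > 0
```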
- 2009 Incident 97/2009 Due to the overnight and morning snowfall in the area, team members and team vehicles were again called out to assist NWAS (Manchester) with various incidents arising from the wintry weather conditions. Our first incident commenced at 09:40, with the last incident finishing just before midnight. The team dealt with 16 separate incidents including assisting NWAS emergency vehicles stuck in the snow and ice, transporting a District Nurse between two moorland villages, a sledging accident in the Bradshaw area, and directly assisting the ambulance service with casualty transfers to hospital including patients with fractures and medical conditions. In total 19 team members in four team vehicles and three team members’ vehicles were involved in these incidents. During the day, hard-working team members (who had also been out on the previous days’ incidents) worked tirelessly in support of our colleagues in NWAS (Manchester), who have obviously been very busy during this winter weather period. Team members have taken advantage of food breaks whenever they can, which included 5 team members turning up at our Team Leader’s house after dealing with incident 97xii/2009, where they were treated to a meal of pea & ham soup by our Team Leader’s partner (and former team member) Ann Thompson – the pea and ham soup received a 10/10 gourmet chef’s recommendation from the team members who sampled it! Our base at Ladybridge Hall was finally closed down at 01:00 in the early morning of Wednesday 23rd December.
TITLE: Irreducibility over $\mathbb{Q}[\sqrt[n]{p}]$ QUESTION [5 upvotes]: Let $p$ be a prime and $n=p+1$. Then prove or disprove whether $x^{2p}+nx+n$ is irreducible over $\mathbb{Q}[\sqrt[n]{p}]$. This question appeared in our quiz today. But I couldn't even prove that the polynomial is irreducible over $\mathbb{Q}[x]$. Provided that I can prove that the polynomial, which I will call $f$ from now on, is irreducible over $\mathbb{Q}[x]$, we will have $$[\mathbb{Q}(\alpha,\sqrt[n]{p}):\mathbb{Q}]\le [\mathbb{Q}(\alpha):\mathbb{Q}][\mathbb{Q}(\sqrt[n]{p}):\mathbb{Q}]=2p(p+1),$$ where $\alpha$ is a root of $f$. But we also have $p+1=[\mathbb{Q}(\sqrt[n]{p}):\mathbb{Q}]\mid[\mathbb{Q}(\alpha,\sqrt[n]{p}):\mathbb{Q}]$; similarly, we also have $2p\mid [\mathbb{Q}(\alpha,\sqrt[n]{p}):\mathbb{Q}]$. Thus either $[\mathbb{Q}(\alpha,\sqrt[n]{p}):\mathbb{Q}]=p(p+1)$ or $2p(p+1)$. Maybe some further pondering will actually show us that the polynomial is indeed irreducible over the field $\mathbb{Q}[\sqrt[n]{p}]$. But I am yet unable to prove that the polynomial is irreducible over $\mathbb{Q}[x]$. I can't see any immediate way to apply Eisenstein. REPLY [3 votes]: Not really a quiz-appropriate solution, but... Irreducibility over $\mathbb{Q}$: Let $g(x)$ be a factor of $f(x) = x^{2p}+nx+n$ of degree $d$. Every root $\alpha$ of $g(x)$ satisfies $\alpha^{2p}+n\alpha+n = 0$, so $$\alpha^{2p} = n(-1-\alpha).$$ Multiplying over all roots of $g$ gives $$ g(0)^{2p} = \prod_{g(\alpha)=0} \alpha^{2p} = \prod_{g(\alpha)=0} n(-1-\alpha) = n^d g(-1).$$ Also, $g(-1)$ divides $f(-1) = (-1)^{2p}+n(-1)+n = 1$, and by the above equation $g(-1)$ is positive, so $g(-1) = 1$. So we have $$g(0)^{2p} = n^d.$$ If $p\not\mid d$, then $n$ is a perfect $p$th power. But this gives a contradiction, since $n = p+1$ is too small to be a $p$th power. If $p\mid d$ and $g(x)$ is a nontrivial factor, then $d = p$. So $g(0)^{2p} = n^p$, which means $n$ is a perfect square.
But then $p= m^2-1 = (m-1)(m+1)$ for some integer $m$. This contradicts the primality of $p$, unless $m=2$ and $p=3$. So we have proven that the polynomial is irreducible for $p \ne 3$. For $p=3$, the above reasoning gives that $d=3$, $g(-1) = 1$, and $g(0) = \pm 2$ for each factor $g(x)$, which reduces the possible factorizations to two cases: $$x^6 + 4x+4 = (x^3 + ax^2+ax + 2)\cdot(x^3 + bx^2 + bx + 2)$$ or $$x^6 + 4x+4 = (x^3 + cx^2 + (c-4)x - 2)\cdot(x^3 + dx^2 + (d-4)x - 2).$$ Each of these cases can be shown to be impossible by comparing the coefficients for the $x^5$ and $x^1$ terms on both sides. Irreducibility over $\mathbb{Q}(\sqrt[n]{p})$: Consider an algebraic field extension $L/\mathbb{Q}$, and let $\sigma$ be any automorphism of $L$. Note that if $f(x) \in \mathbb{Q}[x]$ factors into two monic irreducible factors over $L[x]$ as $$ f(x) = g_1(x)\cdot g_2(x),$$ then we can get another factorization of $f(x)$ by applying $\sigma$ coefficientwise to both sides: $$ f(x) = \sigma(f(x)) = \sigma(g_1(x))\cdot\sigma(g_2(x)).$$ Since the irreducible factorization is unique up to order, $\sigma$ permutes the $g_i(x)$. If moreover $L$ is a Galois extension of $\mathbb{Q}$, then the subgroup of automorphisms that fix $g_1(x)$ and $g_2(x)$ has index 1 or index 2 in the Galois group of $L/\mathbb{Q}$. By the fundamental theorem of Galois theory, this implies that the coefficients of $g_1(x)$ and $g_2(x)$ either all lie in $\mathbb{Q}$ or all lie in the same quadratic subfield of $L$. If $f(x)$ is irreducible over $\mathbb{Q}$, then only the second case is possible. Applying to this problem: Let $K= \mathbb{Q}(\sqrt[n]{p})$ and let $L$ be the Galois closure of $K$, i.e. $L$ is the splitting field of $x^n - p$. In your post you basically showed that $$[\mathbb{Q}(\alpha, \sqrt[n]{p}) : K] = p \quad\text{ or }\quad [\mathbb{Q}(\alpha, \sqrt[n]{p}) : K]=2p,$$ i.e. $f(x)$ splits into two factors of degree $p$ or is irreducible over $K$. 
The same basic idea applies to $L$ as well. The case $p=2$ is straightforward; for $p\ne 2$ we have $\sqrt{p} \in K$, so $x^n-p$ can be factored as $$x^n-p = (x^{n/2}-\sqrt{p})(x^{n/2}+\sqrt{p})$$ over $K$. The degree of the splitting field of each factor can only have prime factors that are at most $\frac{n}{2} = \frac{p+1}{2} < p$, so the degree of $L$ is relatively prime to $p$. Hence by the same reasoning as in your post and by the argument above, $f(x)$ splits into two factors of degree $p$ or is irreducible over $L$. Thus if $f(x)$ factors over $L$, it splits into two factors with coefficients in a quadratic subfield of $L$. If moreover $f(x)$ factors over $K$, then the quadratic field must be a subfield of $K$ as well, i.e. it must be $\mathbb{Q}(\sqrt{p})$, the only quadratic subfield of $K$. In summary, $f(x)$ is reducible over $K$ if and only if it splits into two conjugate factors of degree $p$ over $\mathbb{Q}(\sqrt{p})$. In that case we could then write $$f(x) = (a(x) + b(x)\sqrt{p})\cdot(a(x) - b(x)\sqrt{p}) = a(x)^2 - p \cdot b(x)^2.$$ This says that $f(x)$ would need to be a perfect square modulo $p$. But $f'(x) \equiv n \pmod{p}$ is relatively prime to $f(x)$, so $f(x)$ is squarefree modulo $p$. Hence $f(x)$ is not reducible over $K$.
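The squarefree-mod-$p$ step at the end is easy to check by machine: since $2p\,x^{2p-1}\equiv 0$ and $n\equiv 1 \pmod p$, we have $f'\equiv 1$, hence $\gcd(f,f')=1$ over $\mathbb{F}_p$. A minimal sketch with a hand-rolled polynomial gcd over $\mathbb{F}_p$ (the choice $p=5$ is just for illustration):

```python
def poly_mod(a, p):
    """Reduce a coefficient list (lowest degree first) mod p and trim."""
    a = [c % p for c in a]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_rem(a, b, p):
    """Remainder of a divided by b over GF(p); coefficients lowest first."""
    a = a[:]
    inv = pow(b[-1], -1, p)  # inverse of the leading coefficient of b
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        q = a[-1] * inv % p
        for i, c in enumerate(b):
            a[i + shift] = (a[i + shift] - q * c) % p
        a = poly_mod(a, p)
    return a

def poly_gcd(a, b, p):
    a, b = poly_mod(a, p), poly_mod(b, p)
    while b != [0]:
        a, b = b, poly_rem(a, b, p)
    return a

p = 5          # illustrative prime
n = p + 1
f = [n, n] + [0] * (2 * p - 2) + [1]        # x^{2p} + n*x + n
df = [n] + [0] * (2 * p - 2) + [2 * p]      # f'(x) = 2p x^{2p-1} + n
g = poly_gcd(f, df, p)
print(g)  # [1]: constant gcd, so f is squarefree mod p
```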
The $8,000 Credit Cost Some Home Buyers Much More By Jack Hough If you missed out on the $8,000 tax credit for first-time homebuyers that expired just over a year ago, you might be better off for it. Numbers released Monday suggest typical recipients have lost twice as much to falling house prices as they gained from the incentive. The Zillow Home Value Index fell to $170,000 in March, down $15,000 from a year earlier and down $20,000 from two years earlier, according to Zillow.com. The index represents the midpoint of valuation estimates for U.S. single-family homes, including co-ops and condos. The tax credit program offered up to $8,000 to first-time home buyers and, in a later expanded version, up to $6,500 for existing homeowners who bought again. It ran from January 2009 through April 2010, with the closing deadline eventually pushed to September. A 2008 predecessor program might have been an even worse deal, offering up to $7,500 to first-time buyers as a no-interest, 15-year loan. The typical home has lost $48,000 in value since March 2008, according to the Zillow index. The IRS says the 2009 and 2010 homebuyer credits cost $26 billion, including more than $500 million in fraudulent claims. See the full story at SmartMoney.com, including my math on whether the housing market has reached bottom. Also, helpful fellow that I am, I’ve drafted a quick apology note for the tax credit program to be used by those responsible.
NEW YORK (WCBS 880) – Louis Scala, the accused leader of a Staten Island drug trafficking ring that peddled over 40,000 oxycodone pills out of the back of an ice cream truck, has pleaded guilty. Scala faces up to three and a half years in prison. The drug trafficking ring allegedly peddled thousands of oxycodone pills out of an ice cream truck that was used to sell ice cream to neighborhood kids. Scala admitted to obtaining the pain killers by recruiting neighborhood volunteers to build fraudulent prescriptions. Then he would park his ice cream truck on a prearranged corner and sell ice cream to kids. But officials said after serving ice cream to children, Scala would then invite customers looking to buy oxycodone to step into his truck and complete the transaction. Prosecutors said the customers would usually wait in their cars nearby for a signal from Scala. Police say it was a $1 million a year business.
151 Woodlawn Rd., Macclenny, FL, US, 32063 - Phone: (904) 259-3000
Fleetwood Mac – The Second Time Lyrics Well it had to do With a dream come true And someone that you loved And would always love The second time around For us She never looked back Someone that you loved And would always love The second time around, for us She never looked back She could never look back
\begin{document} \begin{center} {\large \bf On the positive zeros of generalized Narayana polynomials related to the Boros-Moll polynomials} \end{center} \begin{center} James Jing Yu Zhao\\[6pt] School of Mathematics, Tianjin University, \\ Tianjin 300350, P.R. China\\[8pt] Email: {\tt jjyzhao@tju.edu.cn} \end{center} \noindent\textbf{Abstract.} The generalized Narayana polynomials $N_{n,m}(x)$ arose from the study of infinite log-concavity of the Boros-Moll polynomials. The real-rootedness of $N_{n,m}(x)$ was proved by Chen, Yang and Zhang. They also showed that when $n\geq m+2$, each of the generalized Narayana polynomials has one and only one positive zero and $m$ negative zeros, where the negative zeros of $N_{n,m}(x)$ and $N_{n+1,m+1}(x)$ have interlacing relations. In this paper, we study the properties of the positive zeros of $N_{n,m}(x)$ for $n\geq m+2$. We first obtain a new recurrence relation for the generalized Narayana polynomials. Based on this recurrence relation, we prove upper and lower bounds for the positive zeros of $N_{n,m}(x)$. Moreover, the monotonicity of the positive zeros of $N_{n,m}(x)$ is also proved by using the new recurrence relation. \noindent \emph{AMS Classification 2020:} 05A10, 11B83, 26C10 \noindent \emph{Keywords:} Generalized Narayana polynomials, positive zeros, bounds, monotonicity \section{Introduction} Let $n>k\geq 0$ be integers. The classical Narayana number $N(n,k)$, named after T.V. Narayana \cite{Narayana}, is given by $$N(n,k)=\frac{1}{n}{n\choose k}{n\choose k+1},$$ which appears as A001263 in the OEIS \cite{Sloane}. It is well known that the Narayana numbers refine the Catalan numbers $C_n=\frac{1}{n+1}{2n\choose n}$ since $$\sum_{k=0}^{n-1} N(n,k)=C_n.$$ For more information on Catalan numbers, see \cite{Catalan, stanley}. The generating polynomials of $N(n,k)$, namely $$ \sum_{k=0}^{n-1} N(n,k) x^k, $$ are called the Narayana polynomials.
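The refinement identity $\sum_{k=0}^{n-1} N(n,k)=C_n$ is easy to verify numerically; a minimal sketch in Python:

```python
from math import comb

def narayana(n, k):
    """Classical Narayana number N(n, k) = (1/n) C(n, k) C(n, k+1)."""
    return comb(n, k) * comb(n, k + 1) // n

def catalan(n):
    """Catalan number C_n = (1/(n+1)) C(2n, n)."""
    return comb(2 * n, n) // (n + 1)

# The Narayana numbers refine the Catalan numbers.
for n in range(1, 12):
    assert sum(narayana(n, k) for k in range(n)) == catalan(n)

print([narayana(4, k) for k in range(4)])  # row n = 4 of A001263: [1, 6, 6, 1]
```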
The Narayana numbers and the Narayana polynomials have attracted a lot of attention, and have been extensively studied in relation to algebraic combinatorics \cite{Williams2005, Armstrong, Avaletal, ARR2015, MMY2019}, number theory \cite{GuoJiang2017, SaganTirrell2020}, probability and statistics \cite{Sulanke02, Branden2004, Avaletal, Ful-Rol, C-Y-Z-2021}, geometry \cite{ARR2015} and especially enumerative combinatorics \cite{Sulanke04, Kamioka, Avaletal, CYZ1, CYZ2, WangYang, SaganTirrell2020}. It is well known that the Narayana polynomials have only real zeros; see \cite{LiuWang}. In 2018, Chen, Yang and Zhang \cite{CYZ2} studied a generalization of the Narayana polynomials in the following form \begin{align}\label{eq-geNaPoCYZ} N_{n,m}(x)=\sum\limits_{k=0}^n \left({n\choose k}{m\choose k}-{n\choose k+1}{m\choose k-1}\right) x^k, \end{align} where $m$ and $n$ are nonnegative integers. Clearly, when $n=m+1$, the generalized Narayana polynomials $N_{n,m}(x)$ reduce to the classical Narayana polynomials, namely, \begin{align*} N_{m+1,m}(x)=\frac{1}{m+1}\sum_{k=0}^{m}{m+1\choose k}{m+1\choose k+1}x^k=\sum_{k=0}^{m}N(m+1,k)x^k. \end{align*} It should be mentioned that the generalized Narayana polynomials $N_{n,m}(x)$ arose in the study of infinite log-concavity of the Boros-Moll polynomials, which were first introduced by Boros and Moll \cite{Boros-Moll} while studying a quartic integral. The generalized Narayana polynomials $N_{n,m}(x)$ appear to have some interesting properties on their zeros. For instance, Chen, Yang and Zhang \cite{CYZ2} showed the real-rootedness of $N_{n,m}(x)$ for nonnegative integers $m$ and $n$. Moreover, the real zeros of $N_{n,m}(x)$ have interlacing relations for $n\leq m+1$. Specifically, for $n\geq m+2$, they gave the following result on positive zeros. \begin{theorem}\cite[Theorem 3.4]{CYZ2}\label{thm-unipzo} For any $m\geq 0$ and $n\geq m$, the polynomial $N_{n,m}(x)$ has only real zeros.
If $n\geq m+2$, then $N_{n,m}(x)$ has one and only one positive zero. \end{theorem} The real-rootedness of $N_{n,m}(x)$ was proved by a criterion for determining whether two polynomials have interlaced zeros established by Liu and Wang \cite[Theorem 2.3]{LiuWang} and the theory of P\'{o}lya frequency sequences \cite{Wang-Yeh}. The existence and uniqueness of the positive zero of each $N_{n,m}(x)$ for $n\geq m+2$ can be obtained easily by using the well-known Intermediate Value Theorem together with Descartes's Rule (see \cite{Curtiss}). Chen et al. \cite[Theorem 3.4]{CYZ2} also proved that $N_{n,m}(x)$ has $m$ negative zeros for $n\geq m+2$ with $m\geq 0$, and the negative zeros of $N_{n,m}(x)$ and $N_{n+1,m+1}(x)$ have interlacing relations. Many well-known functions (or polynomials) have interlacing properties for their zeros. For instance, Cho and Chung \cite{Cho-Chung} proved that the positive zeros of $\nu$-parameter families of Bessel functions are simultaneously interlaced under certain conditions. Although the real-rootedness of $N_{n,m}(x)$ and the interlacing of their negative zeros have been studied in depth, the properties of the positive zeros of $N_{n,m}(x)$ remained unknown. This paper is mainly concerned with the analytic properties of the positive zeros of the generalized Narayana polynomials $N_{n,m}(x)$. Note that the case of $n=m+2$ is trivial since $N_{m+2,m}(1)=0$ by the Chu-Vandermonde convolution (see \cite{Gould} or \cite[\S 5.1]{GKP}). For $n\geq m+3$ with $m\geq 0$, we give upper and lower bounds for the positive zeros of $N_{n,m}(x)$. Furthermore, we also show the monotonicity of the positive zeros of $N_{n,m}(x)$. These two main results are proved by using mathematical induction together with a new three-term recurrence relation of the generalized Narayana polynomials. This paper is organized as follows. In Section \ref{S-R}, we give a new three-term recurrence relation of $N_{n,m}(x)$ and prove it by hand.
An alternative proof by the symbolic method established by Chen, Hou and Mu \cite{CHM} is also mentioned. In Section \ref{S-3}, we prove the first main result of this paper, the upper and lower bounds for the positive zeros of $N_{n,m}(x)$. The second main result of this paper, namely the monotonicity of the positive zeros of $N_{n,m}(x)$, is stated in Section \ref{S-M}. \section{Recurrence relation}\label{S-R} In this section we show a new three-term recurrence relation of the generalized Narayana polynomials $N_{n,m}(x)$ defined in \eqref{eq-geNaPoCYZ}. This recurrence relation will be used to prove the main results of this paper, the bounds and the monotonicity of the positive zeros of $N_{n,m}(x)$ for $n\geq m+3$ and $m\geq 0$. The main result of this section is as follows. \begin{theorem}\label{thm-main-rec-gNyp} For any integers $m\geq 0$ and $n\geq 1$, we have \begin{align}\label{eq-rec-gelNaya-m} c_{n,m}(x)N_{n,m+1}(x)=a_{n,m}(x) N_{n,m}(x)+b_{n,m}(x) N_{n-1,m}(x), \end{align} where \begin{equation}\label{eq-abcnmx} \left\{ \begin{aligned} &a_{n,m}(x)=(m+2-n)(m^2-n^2+4m+3)x-2n,\\[5pt] &b_{n,m}(x)=n[(m+2-n)(m+1-n)x-2](x-1),\\[5pt] &c_{n,m}(x)=(m+3)(m+2-n)(m+1-n)x. \end{aligned} \right. \end{equation} \end{theorem} \begin{proof} First, let us calculate the right-hand side of \eqref{eq-rec-gelNaya-m}. For convenience, rewrite $$ a_{n,m}(x)=Ax-2n\quad {\rm and}\quad b_{n,m}(x)=Bx^2-Cx+2n, $$ where \begin{align*} &A=(m+2-n)(m^2-n^2+4m+3),\\ &B=n(m+2-n)(m+1-n),\\ &C=B+2n.
\end{align*} Then the right-hand side of \eqref{eq-rec-gelNaya-m} can be expressed as a sum of three parts, that is, \begin{align} &\ a_{n,m}(x)N_{n,m}(x)+b_{n,m}(x)N_{n-1,m}(x)\nonumber\\ =&\ (Ax-2n) N_{n,m}(x)+(Bx^2-Cx+2n)N_{n-1,m}(x)\nonumber\\ =&\ 2n\left(N_{n-1,m}(x)-N_{n,m}(x)\right)+\left(Ax N_{n,m}(x)-Cx N_{n-1,m}(x)\right) +Bx^2 N_{n-1,m}(x).\label{eq-2.3} \end{align} Observe that the coefficients of $x^k$ in the summand of \eqref{eq-geNaPoCYZ} can be rewritten as \begin{align*} {n\choose k}{m\choose k}-{n\choose k+1}{m\choose k-1} ={n+1\choose k+1}{m+1\choose k}\frac{(m-n)k+m+1}{(n+1)(m+1)}. \end{align*} So it follows that \begin{align} 2n\left(N_{n-1,m}(x)-N_{n,m}(x)\right) &=2n\sum_{k=0}^n {m\choose k-1}{n\choose k}\frac{n-m-1}{n}x^k\nonumber\\ &=2nx\sum_{k=1}^n {m\choose k-1}{n\choose k}\frac{n-m-1}{n}x^{k-1}\nonumber\\ &=x\sum_{k=0}^{n} 2(n-m-1) {m\choose k}{n\choose k+1}x^k. \label{eq-RHS-1} \end{align} In order to calculate the second part of \eqref{eq-2.3}, set $A=C+D$, where $D=(m+1-n)(m^2-mn+5m-n+6)$. Then we have \begin{align} Ax N_{n,m}(x)-Cx N_{n-1,m}(x) &=(C+D)x N_{n,m}(x)-Cx N_{n-1,m}(x)\nonumber\\ &=Cx\left(N_{n,m}(x)-N_{n-1,m}(x)\right)+Dx N_{n,m}(x)\nonumber\\ &=x \sum_{k=0}^{n} C{m\choose k-1}{n\choose k}\frac{m+1-n}{n}x^k\nonumber\\ &\qquad +x \sum_{k=0}^{n} D{n+1\choose k+1}{m+1\choose k}\frac{(m-n)k+m+1}{(n+1)(m+1)}x^k.\label{eq-RHS-2} \end{align} For the last part of \eqref{eq-2.3}, we have \begin{align} Bx^2 N_{n-1,m}(x) &=Bx \sum_{k=0}^{n-1} {n\choose k+1}{m+1\choose k}\frac{(m+1-n)k+m+1}{n(m+1)}x^{k+1}\nonumber\\ &=x \sum_{k=0}^{n} B {n\choose k}{m+1\choose k-1}\frac{(m+1-n)k+n}{n(m+1)}x^k. \label{eq-RHS-3} \end{align} Now we have that the right-hand side of \eqref{eq-rec-gelNaya-m} is equal to a sum of three parts as given in \eqref{eq-RHS-1}, \eqref{eq-RHS-2} and \eqref{eq-RHS-3}. 
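As an aside, the target identity \eqref{eq-rec-gelNaya-m} can be spot-checked with exact rational arithmetic before the coefficient comparison is completed. A minimal sketch in Python (the helper \texttt{C} implements the convention that out-of-range binomial coefficients in \eqref{eq-geNaPoCYZ} vanish):

```python
from fractions import Fraction
from math import comb

def C(a, b):
    # Binomial coefficient, extended by 0 outside 0 <= b <= a,
    # matching the convention in the definition of N_{n,m}(x).
    return comb(a, b) if 0 <= b <= a else 0

def N(n, m, x):
    """Evaluate the generalized Narayana polynomial N_{n,m} at x."""
    return sum((C(n, k) * C(m, k) - C(n, k + 1) * C(m, k - 1)) * x**k
               for k in range(n + 1))

def a_(n, m, x): return (m + 2 - n) * (m * m - n * n + 4 * m + 3) * x - 2 * n
def b_(n, m, x): return n * ((m + 2 - n) * (m + 1 - n) * x - 2) * (x - 1)
def c_(n, m, x): return (m + 3) * (m + 2 - n) * (m + 1 - n) * x

# Check c*N_{n,m+1} = a*N_{n,m} + b*N_{n-1,m} at several rational points;
# both sides are polynomials in x, so agreement on many samples is a
# strong consistency check of the stated identity.
for n, m in [(3, 0), (5, 1), (7, 2), (9, 4)]:
    for x in (Fraction(1, 3), Fraction(5, 2), Fraction(-2), Fraction(7, 11)):
        lhs = c_(n, m, x) * N(n, m + 1, x)
        rhs = a_(n, m, x) * N(n, m, x) + b_(n, m, x) * N(n - 1, m, x)
        assert lhs == rhs
print("recurrence verified on all samples")
```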
Clearly, the left-hand side of \eqref{eq-rec-gelNaya-m} is $$ c_{n,m}(x)N_{n,m+1}(x) =x\sum_{k=0}^n (m+3)(m+2-n)(m+1-n) {n+1\choose k+1}{m+2\choose k}\frac{(m+1-n)k+m+2}{(n+1)(m+2)}x^k. $$ By comparing the coefficients of $x^k$ in the summands of \eqref{eq-RHS-1}, \eqref{eq-RHS-2}, \eqref{eq-RHS-3} and $c_{n,m}(x)N_{n,m+1}(x)$, we find that \begin{align*} &2(n-m-1){m\choose k}{n\choose k+1}+C{m\choose k-1}{n\choose k}\frac{m+1-n}{n} +D{n+1\choose k+1}{m+1\choose k}\frac{(m-n)k+m+1}{(n+1)(m+1)}\nonumber\\[5pt] &\quad +B{n\choose k}{m+1\choose k-1}\frac{(m+1-n)k+n}{n(m+1)}\nonumber\\[5pt] =&\ (m+3)(m+2-n)(m+1-n) {n+1\choose k+1}{m+2\choose k}\frac{(m+1-n)k+m+2}{(n+1)(m+2)}, \end{align*} where $B,C$ and $D$ are defined above. Therefore it follows that \eqref{eq-rec-gelNaya-m} holds for all integers $m\geq 0$ and $n\geq 1$. This completes the proof. \end{proof} Specifically, when $n\neq m+1$ and $n\neq m+2$, from \eqref{eq-rec-gelNaya-m} we have \begin{align}\label{eq-rec-gelNaya-mv} N_{n,m+1}(x) =\frac{a_{n,m}(x)}{c_{n,m}(x)}N_{n,m}(x)+\frac{b_{n,m}(x)}{c_{n,m}(x)}N_{n-1,m}(x), \end{align} where $a_{n,m}(x)$, $b_{n,m}(x)$ and $c_{n,m}(x)$ are given by \eqref{eq-abcnmx}. In the remainder of this paper, we shall use the recurrence relation \eqref{eq-rec-gelNaya-mv} to prove our main results. \begin{remark} It should be mentioned that Theorem \ref{thm-main-rec-gNyp} can also be proved by using the extended Zeilberger algorithm, a symbolic method established by Chen, Hou and Mu \cite{CHM}. See \cite{C-Y-Z-2021} for example. \end{remark} \section{The bounds}\label{S-3} The aim of this section is to prove the first main result of this paper, the lower and upper bounds of the positive zeros of $N_{n,m}(x)$. \begin{theorem}\label{Thm-zero-bounds} For $m\geq 0$ and $n\geq m+3$, denote by $r_{n,m}^{+}$ the positive zero of the generalized Narayana polynomial $N_{n,m}(x)$. 
Then we have that \begin{align}\label{eq-zero-bounds} \frac{2(n+1)}{(m+1-n)((m+2)^2-(n+1)^2-1)} <r_{n,m}^{+} \leq \frac{2}{(m-n)(m+1-n)}, \end{align} where the equality holds only for $m=0$. \end{theorem} Before proving Theorem \ref{Thm-zero-bounds}, we first show the following two lemmas, which will be used in the proofs of our main results. \begin{lemma}\label{lemma-mono-on-n} Fix $m\geq 0$ and $n\geq m+4$. Then for any $x>0$ we have \begin{align*} N_{n-1,m}(x)>N_{n,m}(x). \end{align*} \end{lemma} \begin{proof} Note that when $n\geq m+2$, the polynomial $N_{n,m}(x)$ has degree $m+1$. So by \eqref{eq-geNaPoCYZ}, we have \begin{align*} N_{n-1,m}(x)-N_{n,m}(x) =&\,\sum_{k=0}^{m+1}\left[{n-1\choose k}{m\choose k}-{n-1\choose k+1}{m\choose k-1} -\left({n\choose k}{m\choose k}-{n\choose k+1}{m\choose k-1}\right)\right]x^k\\ =&\,\sum_{k=0}^{m+1}\left(-{n-1\choose k-1}{m\choose k}+{n-1\choose k}{m\choose k-1}\right)x^k\\ =&\,\sum_{k=0}^{m+1}{n\choose k}{m\choose k-1}\frac{n-m-1}{n}x^k. \end{align*} It follows that $N_{n-1,m}(x)-N_{n,m}(x)>0$ for $m\geq0$, $n\geq m+4$ and $x>0$. This completes the proof. \end{proof} \begin{lemma}\label{lem-sign} Given integers $m\geq 0$ and $n\geq m+2$, let $N_{n,m}(x)$ be defined as in \eqref{eq-geNaPoCYZ}, and $r_{n,m}^{+}$ be the positive zero of $N_{n,m}(x)$. Then for any $x>0$, we have \begin{itemize} \item[$(i)$] $N_{n,m}(x)>0$ if and only if $x<r_{n,m}^{+}$; \item[$(ii)$] $N_{n,m}(x)<0$ if and only if $x>r_{n,m}^{+}$. \end{itemize} \end{lemma} \begin{proof} We first prove the sufficiency of $(i)$ and $(ii)$. Suppose $0<x<r_{n,m}^{+}$. It is clear that $N_{n,m}(x)\neq 0$ by Theorem \ref{thm-unipzo}, because each $N_{n,m}(x)$ has one and only one positive zero when $m\geq 0$ and $n\geq m+2$. By \eqref{eq-geNaPoCYZ}, each polynomial $N_{n,m}(x)$ is a continuous function with respect to $x$ in $(-\infty,+\infty)$.
If $N_{n,m}(x)<0$, then by the continuity of $N_{n,m}(x)$ there must exist at least one positive zero in $(0,x)$ since $N_{n,m}(0)=1>0$ by \eqref{eq-geNaPoCYZ}, which contradicts Theorem \ref{thm-unipzo}. It follows that $N_{n,m}(x)>0$. Suppose $x>r_{n,m}^{+}$. Clearly, $N_{n,m}(x)\neq 0$ by Theorem \ref{thm-unipzo}. Note that for $n\geq m+2$, the leading term of $N_{n,m}(x)$ is $-{n\choose m+2}x^{m+1}$ by \eqref{eq-geNaPoCYZ}. Hence $\lim_{x\rightarrow +\infty} N_{n,m}(x)=-\infty$ for $n\geq m+2$ and $m\geq 0$. Assume $N_{n,m}(x)>0$; then by the continuity of $N_{n,m}(x)$, there is at least one positive zero in $(x,+\infty)$, a contradiction to Theorem \ref{thm-unipzo}. Hence $N_{n,m}(x)<0$. To show the necessity of $(i)$, suppose $N_{n,m}(x)>0$ for $x>0$. Clearly, $x\neq r_{n,m}^{+}$. If $x>r_{n,m}^{+}$, then by the sufficiency of $(ii)$ proved above, we have $N_{n,m}(x)<0$, a contradiction. It follows that $x<r_{n,m}^{+}$. The necessity of $(ii)$ is obtained by a similar argument and the details are omitted. This completes the proof. \end{proof} Now we are able to give the proof of the first main result of this paper. \noindent{\it Proof of Theorem \ref{Thm-zero-bounds}.} It is clear that $N_{n,m}(x)$ are polynomials with real coefficients and leading term $-{n\choose m+2}x^{m+1}$ for $m\geq 0$ and $n\geq m+3$. Hence $N_{n,m}(x)$ are continuous functions with respect to $x$ in $(-\infty,+\infty)$. Since the degree of each $N_{n,m}(x)$ is $m+1$ for $n\geq m+3$ and $m\geq 0$, we shall prove the bounds in \eqref{eq-zero-bounds} by mathematical induction on $m$. For $m=0$, we have $N_{n,0}(x)=-{n\choose 2}x+1$ by \eqref{eq-geNaPoCYZ}. It is clear that $$ \frac{2(n+1)}{(1-n)(3-(n+1)^2)}<r_{n,0}^{+}=\frac{2}{n(n-1)} $$ for $n\geq 3$. Hence \eqref{eq-zero-bounds} holds for $m=0$ with $n\geq 3$. For $m=1$, by \eqref{eq-geNaPoCYZ} we have $N_{n,1}(x)=-{n\choose 3}x^2-\frac{n(n-3)}{2}x+1$.
Then $$ N_{n,1}\left(\frac{2}{(n-1)(n-2)}\right)=-\frac{2(n-3)}{3(n-1)(n-2)}<0,\quad n\geq 4. $$ It follows from Lemma \ref{lem-sign} that $r_{n,1}^{+}<\frac{2}{(n-1)(n-2)}$ for $n\geq 4$. Moreover, $$ N_{n,1}\left(\frac{2(n+1)}{(n-2)((n+1)^2-8)}\right) =\frac{2(n-3)[(n-1)(2n^2+n-25)+24]}{3(n-2)(n^2+2n-7)^2}>0, \quad n\geq 4. $$ By Lemma \ref{lem-sign}, $r_{n,1}^{+}>\frac{2(n+1)}{(n-2)((n+1)^2-8)}$ for $n\geq 4$. Thus we have \eqref{eq-zero-bounds} holds true for $m=1$ with $n\geq 4$. Now we have proved \eqref{eq-zero-bounds} for $m=0$ and $m=1$ with $n\geq m+3$. Next assume \eqref{eq-zero-bounds} holds for $m\geq 1$ and $n\geq m+3$. We aim to prove that \eqref{eq-zero-bounds} holds for $m+1$ and $n\geq m+4$. That is, for $n\geq m+4$, \begin{align}\label{eq-zero-bounds-m+1} x_1<r_{n,m+1}^{+}<x_2, \end{align} where $$ x_1=\frac{2(n+1)}{(m+2-n)((m+3)^2-(n+1)^2-1)} \quad {\rm and} \quad x_2=\frac{2}{(m+1-n)(m+2-n)}. $$ Clearly $0<x_1<x_2$ for $n\geq m+4$ and $m\geq 0$. In order to prove \eqref{eq-zero-bounds-m+1}, it is sufficient to prove that for $n\geq m+4$, $$ N_{n,m+1}(x_1)>0 \quad {\rm and} \quad N_{n,m+1}(x_2)<0. $$ We first prove $N_{n,m+1}(x_1)>0$. For this purpose, we use the recurrence relation \eqref{eq-rec-gelNaya-mv} to express $N_{n,m+1}(x_1)$ as \begin{align}\label{eq-rec-proofm+1} N_{n,m+1}(x_1) =\frac{a_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n,m}(x_1) +\frac{b_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n-1,m}(x_1). \end{align} For $m\geq 0$ and $n\geq m+4$, it is clear that $0<x_1<1$ and hence $c_{n,m}(x_1)>0$ by \eqref{eq-abcnmx}. By a simple calculation we get $$ a_{n,m}(x_1)=-\frac{2(n-m-3)(n-m-1)}{(n+m+4)(n-m-2)+1}<0,\quad n\geq m+4. $$ Observe that for $m\geq 0$ and $n\geq m+4$, $$ (m+2-n)(m+1-n)x_1-2=-\frac{2(m+2)(n-m-3)}{(n+m+4)(n-m-2)+1}<0. $$ So $$ b_{n,m}(x_1) =n[(m+2-n)(m+1-n)x_1-2](x_1-1)>0,\quad n\geq m+4. $$ It follows that for $m\geq 0$ and $n\ge m+4$, $$ \frac{a_{n,m}(x_1)}{c_{n,m}(x_1)}<0 \quad {\rm and} \quad \frac{b_{n,m}(x_1)}{c_{n,m}(x_1)}>0. 
$$ To determine the sign of $N_{n,m+1}(x_1)$ it remains to determine the signs of $N_{n,m}(x_1)$ and $N_{n-1,m}(x_1)$. We claim that $N_{n-1,m}(x_1)>0$ for $m\geq 0$ and $n\geq m+4$. By the induction hypothesis, we have $$ \frac{2n}{(m+2-n)((m+2)^2-n^2-1)}<r_{n-1,m}^{+}. $$ This leads to \begin{align*} &\ x_1-\frac{2n}{(m+2-n)((m+2)^2-n^2-1)}\\ =&\ -\frac{2(n-m-3)(n-m-1)}{(n-m-2)[(n+m+4)(n-m-2)+1][n^2-(m+1)(m+3)]}\\ <&\ 0 \end{align*} for $m\geq 0$ and $n\geq m+4$. So $$ 0<x_1<\frac{2n}{(m+2-n)((m+2)^2-n^2-1)}<r_{n-1,m}^{+}. $$ It follows from Lemma \ref{lem-sign} that $N_{n-1,m}(x_1)>0$. Thus for $m\geq 0$ and $n\geq m+4$, we have \begin{align}\label{eq-boprm1} \frac{b_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n-1,m}(x_1)>0. \end{align} Now let us consider the sign of $N_{n,m}(x_1)$. Notice that the sign of $N_{n,m}(x_1)$ is not fixed for all $n\geq m+4$. For example, when $m=1$, $$ N_{n,1}(x_1)=\frac{n^5-52n^4+123n^3+1018n^2-4666n+5292}{3(n-3)^2 (n^2+2n-14)^2}. $$ This yields $N_{n,1}(x_1)=-61/49$ for $n=5$, and $N_{n,1}(x_1)=48074/410346049$ for $n=50$. So we distinguish two cases. \nointerlineskip {\bf Case 1.} $N_{n,m}(x_1)\leq 0$. In this case we have $\frac{a_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n,m}(x_1)\geq 0$, and hence $N_{n,m+1}(x_1)>0$ for $m\geq 0$ and $n\geq m+4$ by \eqref{eq-rec-proofm+1} and \eqref{eq-boprm1}. \nointerlineskip {\bf Case 2.} $N_{n,m}(x_1)>0$. Since $x_1>0$, by Lemma \ref{lemma-mono-on-n} we have that $N_{n-1,m}(x_1)> N_{n,m}(x_1)$ for $m\geq 0$ and $n\geq m+4$. Then by \eqref{eq-rec-gelNaya-mv}, \begin{align*} N_{n,m+1}(x_1) &\,=\frac{a_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n,m}(x_1) +\frac{b_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n-1,m}(x_1)\\ &\,>\frac{a_{n,m}(x_1)+b_{n,m}(x_1)}{c_{n,m}(x_1)}N_{n,m}(x_1), \end{align*} where \begin{align*} a_{n,m}(x_1)+b_{n,m}(x_1) &\,=\frac{2(n+1)(n-m-3)(n-m-1)[(n-m-3)(m+1)(n+m+4)-2]}{[(n+m+4)(n-m-2)+1]^2 (n-m-2)}>0 \end{align*} for $m\geq 0$ and $n\geq m+4$. Therefore $N_{n,m+1}(x_1)>0$ for $m\geq 0$ and $n\geq m+4$.
Thus in both cases, it follows that $N_{n,m+1}(x_1)>0$ for all $m\geq 1$ and $n\geq m+4$. It remains to prove the inequality $N_{n,m+1}(x_2)<0$ for $n\geq m+4$. By the recurrence relation \eqref{eq-rec-gelNaya-mv} we have \begin{align*} N_{n,m+1}(x_2) &=\frac{a_{n,m}(x_2)}{c_{n,m}(x_2)}N_{n,m}(x_2) +\frac{b_{n,m}(x_2)}{c_{n,m}(x_2)}N_{n-1,m}(x_2). \end{align*} Observe that $(m+2-n)(m+1-n)x_2-2=0$ for $n\geq m+4$. So $b_{n,m}(x_2)=0$. Thus \begin{align}\label{eq-rec-proofm1x2} N_{n,m+1}(x_2) =\frac{a_{n,m}(x_2)}{c_{n,m}(x_2)}N_{n,m}(x_2). \end{align} Let us determine the signs of $a_{n,m}(x_2)$, $c_{n,m}(x_2)$ and $N_{n,m}(x_2)$. For $n\geq m+4$, it is clear that $0<x_2\leq 1/3$, and hence $c_{n,m}(x_2)>0$ by \eqref{eq-abcnmx}. Notice that \begin{align*} a_{n,m}(x_2)=\frac{2(m+2-n)(m^2-n^2+4m+3)}{(m+1-n)(m+2-n)}-2n =\frac{2(m+1)(n-m-3)}{n-m-1}>0 \end{align*} for $n\geq m+4$. By the induction hypothesis, it follows that $$ r_{n,m}^{+}<\frac{2}{(m-n)(m+1-n)} <\frac{2}{(m+1-n)(m+2-n)} =x_2<+\infty $$ for $n\geq m+4$. By Lemma \ref{lem-sign}, $N_{n,m}(x_2)<0$ for $n\geq m+4$. Then by \eqref{eq-rec-proofm1x2} we have $N_{n,m+1}(x_2)<0$ for $n\geq m+4$. Now we have proved \eqref{eq-zero-bounds-m+1} for $m\geq 1$ with $n\geq m+4$. By induction, it follows that the inequalities in \eqref{eq-zero-bounds} hold for $m\geq 0$ and $n\geq m+3$. From the proof, it is clear that the equality in \eqref{eq-zero-bounds} holds only for $m=0$. This completes the proof. \qed The following result is an immediate consequence of Theorem \ref{Thm-zero-bounds}. \begin{prop} Let $m\geq 0$ and $n\geq m+3$. Denote by $r_{n,m}^{+}$ the positive zero of $N_{n,m}(x)$. Then $0<r_{n,m}^{+}\leq \frac{1}{3}$. In addition, for any fixed $m\geq 0$, $$\lim_{n\rightarrow \infty} r_{n,m}^{+}=0.$$ \end{prop} \section{The monotonicity}\label{S-M} This section is devoted to the study of the monotonicity of the positive zeros of $N_{n,m}(x)$. The second main result of this paper is as follows.
\begin{theorem}\label{thm-monocity} Let $m\geq 0$ and $n\geq m+3$. Denote by $r_{n,m}^{+}$ the positive zero of $N_{n,m}(x)$. Then we have \begin{align} r_{n+1,m}^{+}<r_{n,m}^{+}, \qquad r_{n+1,m+1}^{+}<r_{n,m}^{+},\qquad n\geq m+3, \end{align} and \begin{align} r_{n,m}^{+}<r_{n,m+1}^{+}, \qquad n\geq m+4. \end{align} \end{theorem} \begin{proof} Note that the $N_{n,m}(x)$ are continuous functions of $x$ on $(-\infty,+\infty)$. By Lemma \ref{lemma-mono-on-n} we have $N_{n,m}(x)>N_{n+1,m}(x)$ for $m\geq 0$, $n\geq m+3$ and $x>0$. It follows from Theorem \ref{thm-unipzo} that \begin{align}\label{eq-mono-1} r_{n+1,m}^{+}<r_{n,m}^{+} \end{align} for $m\geq 0$ and $n\geq m+3$. Notice that $r_{n,m+1}^{+}<r_{n-1,m}^{+}$ for $n\geq m+4$ if and only if $r_{n+1,m+1}^{+}<r_{n,m}^{+}$ for $n\geq m+3$. So it remains to prove \begin{align}\label{eq-inequ-2} r_{n,m}^{+}<r_{n,m+1}^{+}<r_{n-1,m}^{+}, \qquad n\geq m+4. \end{align} Next we shall use mathematical induction on $m$ to prove \eqref{eq-inequ-2}. First for $m=0$, we aim to prove $$ r_{n,0}^{+}<r_{n,1}^{+}<r_{n-1,0}^{+},\qquad n\geq 4. $$ Clearly, $0<r_{n,0}^{+}<r_{n-1,0}^{+}$ for $n\geq 4$ by \eqref{eq-mono-1}. Thus by the continuity of $N_{n,m}(x)$ and Theorem \ref{thm-unipzo}, it suffices to show that for $n\geq 4$, \begin{align}\label{eq-s-condi-m0} N_{n,1}(r_{n,0}^{+})>0 \quad {\rm and}\quad N_{n,1}(r_{n-1,0}^{+})<0. \end{align} From \eqref{eq-geNaPoCYZ} we have $N_{n,0}(x)=-{n\choose 2}x+1$. So $r_{n,0}^{+}=2/(n(n-1))$. By the recurrence relation \eqref{eq-rec-gelNaya-mv}, \begin{align}\label{eq-rec-Nn10} N_{n,1}(x) =\frac{(2-n)(-n^2+3)x-2n}{3(2-n)(1-n)x}N_{n,0}(x) +\frac{n[(2-n)(1-n)x-2](x-1)}{3(2-n)(1-n)x}N_{n-1,0}(x). \end{align} Obviously, $N_{n,0}(r_{n,0}^{+})=0$. Then \begin{align*} N_{n,1}(r_{n,0}^{+}) =\frac{n[(2-n)(1-n)r_{n,0}^{+}-2](r_{n,0}^{+}-1)} {3(2-n)(1-n)r_{n,0}^{+}}N_{n-1,0}(r_{n,0}^{+}).
\end{align*} By Lemma \ref{lem-sign} we have $N_{n-1,0}(r_{n,0}^{+})>0$ since $0<r_{n,0}^{+}<r_{n-1,0}^{+}$. Observe that $(2-n)(1-n)r_{n,0}^{+}-2=2(n-2)/n-2<0$, for $n\geq 4$. It is clear that $r_{n,0}^{+}-1<0$ and $3(2-n)(1-n)r_{n,0}^{+}>0$ for $n\geq 4$. Thus $N_{n,1}(r_{n,0}^{+})>0$ for $n\geq 4$. In order to prove $N_{n,1}(r_{n-1,0}^{+})<0$, let us consider \eqref{eq-rec-Nn10} again. Clearly, $N_{n-1,0}(r_{n-1,0}^{+})=0$. Hence by \eqref{eq-rec-Nn10}, \begin{align*} N_{n,1}(r_{n-1,0}^{+}) =\frac{(2-n)(-n^2+3)r_{n-1,0}^{+}-2n}{3(2-n)(1-n)r_{n-1,0}^{+}} N_{n,0}(r_{n-1,0}^{+}). \end{align*} By Lemma \ref{lemma-mono-on-n} we have $N_{n,0}(r_{n-1,0}^{+})<0$ since $r_{n,0}^{+}<r_{n-1,0}^{+}$. Notice that $(2-n)(-n^2+3)r_{n-1,0}^{+}-2n=2(n^2-3)/(n-1)-2n=2(n-3)/(n-1)>0$, for $n\geq 4$. Clearly, $3(2-n)(1-n)r_{n-1,0}^{+}>0$ for $n\geq 4$. It follows that $N_{n,1}(r_{n-1,0}^{+})<0$. So we have \eqref{eq-s-condi-m0}, and hence \eqref{eq-inequ-2} holds for $m=0$. Now assume $r_{n,m-1}^{+}<r_{n,m}^{+}<r_{n-1,m-1}^{+}$ for $n\geq m+3$. We aim to show that $$ r_{n,m}^{+}<r_{n,m+1}^{+}<r_{n-1,m}^{+} $$ for $n\geq m+4$. Clearly, $r_{n,m}^{+}<r_{n-1,m}^{+}$ for $n\geq m+4$ by \eqref{eq-mono-1}. Therefore by the continuity of $N_{n,m}(x)$ and Theorem \ref{thm-unipzo}, it is sufficient to prove that for $n\geq m+4$, \begin{align}\label{eq-s-condi-m} N_{n,m+1}(r_{n,m}^{+})>0\quad {\rm and}\quad N_{n,m+1}(r_{n-1,m}^{+})<0. \end{align} For this purpose, let us recall the recurrence relation \eqref{eq-rec-gelNaya-mv} \begin{align*} N_{n,m+1}(x) =\frac{a_{n,m}(x)}{c_{n,m}(x)}N_{n,m}(x)+\frac{b_{n,m}(x)}{c_{n,m}(x)}N_{n-1,m}(x), \end{align*} where \begin{equation*} \left\{ \begin{aligned} &a_{n,m}(x)=(m+2-n)(m^2-n^2+4m+3)x-2n,\\[5pt] &b_{n,m}(x)=n[(m+2-n)(m+1-n)x-2](x-1),\\[5pt] &c_{n,m}(x)=(m+3)(m+2-n)(m+1-n)x. \end{aligned} \right. 
\end{equation*} It follows from \eqref{eq-rec-gelNaya-mv} that \begin{align}\label{eq-Nnm1rnm} N_{n,m+1}(r_{n,m}^{+}) =\frac{b_{n,m}(r_{n,m}^{+})}{c_{n,m}(r_{n,m}^{+})}N_{n-1,m}(r_{n,m}^{+}), \end{align} and \begin{align}\label{eq-Nnm1n-1m} N_{n,m+1}(r_{n-1,m}^{+}) =\frac{a_{n,m}(r_{n-1,m}^{+})}{c_{n,m}(r_{n-1,m}^{+})}N_{n,m}(r_{n-1,m}^{+}). \end{align} Let us first determine the sign of $N_{n,m+1}(r_{n,m}^{+})$. By Theorem \ref{Thm-zero-bounds}, we have $$ r_{n,m}^{+}\leq \frac{2}{(m-n)(m+1-n)}<\frac{2}{(m+2-n)(m+1-n)} $$ for $m\geq 0$ and $n\geq m+3$. So $(m+2-n)(m+1-n)r_{n,m}^{+}-2<0$. Since $r_{n,m}^{+}-1<0$, it follows that $b_{n,m}(r_{n,m}^{+})>0$. Clearly, $r_{n,m}^{+}>0$, and hence $c_{n,m}(r_{n,m}^{+})>0$ for $n\geq m+4$. By Lemma \ref{lem-sign} we get $N_{n-1,m}(r_{n,m}^{+})>0$ since $0<r_{n,m}^{+}<r_{n-1,m}^{+}$ for $n\geq m+4$. Thus by \eqref{eq-Nnm1rnm} we have $N_{n,m+1}(r_{n,m}^{+})>0$ for $n\geq m+4$. It remains to prove $N_{n,m+1}(r_{n-1,m}^{+})<0$ for $n\geq m+4$. By Theorem \ref{Thm-zero-bounds}, we have $$ r_{n-1,m}^{+}>\frac{2n}{(m+2-n)((m+2)^2-n^2-1)} $$ for $m\geq 0$ and $n\geq m+4$. Hence $a_{n,m}(r_{n-1,m}^{+})>0$ for $n\geq m+4$. It is clear that $r_{n-1,m}^{+}>0$ and hence $c_{n,m}(r_{n-1,m}^{+})>0$, for $n\geq m+4$. By Lemma \ref{lem-sign} we have $N_{n,m}(r_{n-1,m}^{+})<0$ because $r_{n,m}^{+}<r_{n-1,m}^{+}$. Then by \eqref{eq-Nnm1n-1m} it follows that $N_{n,m+1}(r_{n-1,m}^{+})<0$ for $m\geq 0$ and $n\geq m+4$. Since \eqref{eq-s-condi-m} has been proved, it follows that $$ r_{n,m}^{+}<r_{n,m+1}^{+}<r_{n-1,m}^{+} $$ for $n\geq m+4$. By induction, \eqref{eq-inequ-2} therefore holds for $n\geq m+4$. This completes the proof. \end{proof} \noindent{\bf Acknowledgements.} This work was supported by the National Natural Science Foundation of China under Grant Nos. 11771330 and 11971203.
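The base case $N_{n,0}(x)=-\binom{n}{2}x+1$ and the three-term recurrence with the coefficients $a_{n,m}$, $b_{n,m}$, $c_{n,m}$ quoted in the proofs make both theorems easy to sanity-check numerically. A minimal Python sketch (the function names and the bisection bracket are mine; it uses only the formulas displayed above):

```python
from math import comb, sqrt

def N(n, m, x):
    """N_{n,m}(x) from the base case N_{n,0}(x) = 1 - C(n,2)*x and the
    three-term recurrence with the coefficients a, b, c quoted above."""
    if m == 0:
        return 1 - comb(n, 2) * x
    mm = m - 1  # the recurrence produces index mm + 1 = m
    a = (mm + 2 - n) * (mm**2 - n**2 + 4 * mm + 3) * x - 2 * n
    b = n * ((mm + 2 - n) * (mm + 1 - n) * x - 2) * (x - 1)
    c = (mm + 3) * (mm + 2 - n) * (mm + 1 - n) * x
    return (a * N(n, mm, x) + b * N(n - 1, mm, x)) / c

def positive_zero(n, m):
    """Bisection for the unique positive zero r_{n,m}^+; the bracket
    (0, 1/3] reflects the bound 0 < r_{n,m}^+ <= 1/3 proved above."""
    lo, hi = 1e-6, 1.0 / 3.0  # N > 0 just right of 0, N(1/3) < 0 here
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if N(n, m, mid) > 0 else (lo, mid)
    return lo
```

For instance, the recurrence gives $N_{5,1}(x)=-10x^2-5x+1$ by hand, so `positive_zero(5, 1)` should return $(\sqrt{65}-5)/20\approx 0.1531$; one can also observe $r_{6,1}^{+}<r_{5,1}^{+}$ and $r_{6,0}^{+}<r_{6,1}^{+}$, as Theorems \ref{Thm-zero-bounds} and \ref{thm-monocity} predict.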
TITLE: Experience of linear regression QUESTION [0 upvotes]: As part of my work (as a programmer), I need to learn some linear regression. I have a degree in pure mathematics, but not in statistics. In fact, I have one course in statistics and two or three in probability. That theory will be useful in machine learning. Note that we program in Python. Could anyone recommend a good book, an introduction, on linear regression? Thanks in advance! REPLY [2 votes]: The best intro book there is for data science methods in general, including linear regression, in Python is probably Data Science from Scratch by Joel Grus. This covers simple linear regression, multiple regression, and logistic regression, among other traditional methods, as well as a brief tour of the theory. The only disadvantage to this is that you are literally doing everything from scratch - I have heard that this book does not cover these methods using standard Python libraries, such as scikit-learn and pandas. Another recommendation I would make is Real-World Machine Learning. My recollection is that this one covers machine learning methods using more standardized packages, rather than from scratch. This text isn't as theoretically driven as Grus' text. If you're looking for something more mathematical focusing on linear regression as its own theory ("general linear models" are what they're called - do not confuse this with generalized linear models), I would recommend a traditional intro-Ph.D.-level statistics text, such as Plane Answers to Complex Questions. I've gotten to know this text very well since I started the Master's program I'm in, but I'm also aware that Agresti released a similar text very recently, and the original Linear Models text by Searle (a classic) has been updated with R and SAS code.
After going through this material on linear models - particularly Searle's text - you will be well-prepared to tackle Elements of Statistical Learning, a take on machine learning from a statistical perspective, a.k.a. "statistical learning." This text covers penalization methods, such as LASSO and Ridge regression.
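To give a flavour of the "from scratch" approach Grus takes, simple (one-predictor) linear regression needs only the closed-form least-squares estimates. A minimal sketch with illustrative names, no libraries:

```python
def fit_line(xs, ys):
    """Ordinary least squares for the model y ~ a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of (x, y) divided by the variance of x
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x  # intercept
    return a, b

# Data lying exactly on y = 1 + 2x recovers a = 1, b = 2
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The same estimates are what `scikit-learn`'s linear regression computes for one feature; the point of writing it out is to see the formula.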
Clover Blue - Eldonna Edwards Set against the backdrop of a 1970s commune in Northern California, Clover Blue is a compelling, beautifully written story of a young boy’s search for identity. There are many things twelve-year-old Clover Blue isn’t sure of: his exact date of birth, his name before he was adopted into the Saffron Freedom Community, or who his first parents were. What he does know with certainty is that among this close-knit, nature-loving group, he is happy. Here, everyone is family, regardless of their disparate backgrounds—surfer, midwife, Grateful Dead groupie, Vietnam deserter. But despite his loyalty to the commune and its guru-like founder Goji, Blue grapples with invisible ties toward another family—the one he doesn’t remember. With the urging of his fearless and funny best friend, Harmony, Clover Blue begins to ask questions. For the first time, Goji’s answers fail to satisfy. The passing months bring upheaval to their little clan and another member arrives, a beautiful runaway teen named Rain, sparking new tensions. As secrets slowly unfurl, Blue’s beliefs—about Goji, the guidelines that govern their seemingly idyllic lives, and the nature of family itself—begin to shift. With each revelation about a heartbreaking past he never imagined, Blue faces a choice between those he’s always trusted, and an uncertain future where he must risk everything in his quest for the truth. Part coming-of-age tale, part love story, part mystery, Clover Blue tenderly explores an unconventional but no less complex family that resonates with our deep-rooted yearning for home.
TITLE: Nth number of continued fraction QUESTION [1 upvotes]: Given a real number $r$ and a non-negative integer $n$, is there a way to accurately find the $n^{th}$ coefficient of its continued fraction (with the integer part being the $0^{th}$ number in the continued fraction)? If this cannot be done for all $r$, what are some specific ones for which it can, like $\pi$ or $e$? I already know how to do this for square roots. REPLY [2 votes]: For arbitrary real numbers, there's no better method known than the 'abstract' one of simply extracting the digits through the usual recurrence relation; even for non-quadratic algebraic numbers — even for a number as simple as $\sqrt{2}+\sqrt{3}$! — nothing is known about any structure to the coefficients. By contrast, if the goal is to take a number known to some precision and churn out an appropriate number of coefficients for its continued fraction, then there are excellent algorithms for doing that - and that might be enough to find some structure in the coefficients which can then be proven in a non-algorithmic fashion (for instance, the patterns in the continued fraction coefficients of $e$, or of quadratic surds, or in the Liouville numbers). The simplest way is to take a rational approximation $\frac ab$ to your number $r$ (for instance, if you have a decimal expansion $r=d_0.d_1d_2d_3d_4\ldots d_n$ to n digits of precision, then set $a=d_0d_1d_2\ldots d_n$ and $b=10^n$) and then run the extended Euclidean algorithm for the GCD on $a$ and $b$; the 'partial quotients' found along the way are precisely the coefficients of the continued fraction. See http://en.wikipedia.org/wiki/Euclidean_algorithm#Continued_fractions for the basics of the method; if you're interested in more details, volume 2 of Knuth's Art Of Computer Programming (specifically, section 4.5.3, problem 47 within that section and the references there) is an excellent next step.
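The Euclidean-algorithm recipe in the answer is only a few lines in practice; a sketch (the function name is mine):

```python
def cf_coefficients(a, b):
    """Continued-fraction coefficients of the rational a/b (a, b > 0):
    the partial quotients produced by the Euclidean algorithm."""
    coeffs = []
    while b:
        q, r = divmod(a, b)  # one division step of the Euclidean algorithm
        coeffs.append(q)
        a, b = b, r
    return coeffs

# 415/93 has the finite expansion [4; 2, 6, 7]
print(cf_coefficients(415, 93))
# A 14-digit decimal approximation of pi recovers the familiar leading
# coefficients [3; 7, 15, 1, 292, ...]; the tail is approximation noise.
print(cf_coefficients(314159265358979, 10**14)[:5])
```

As the answer notes, only the leading coefficients are trustworthy: they agree as long as the approximation and the true number lie in the same continued-fraction cylinder.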
TITLE: Combinatorial Proof of $\binom{\binom{n}{2}}{2} = 3 \binom{n}{3}+ 3 \binom{n}{4}$ for $n \geq 4$ QUESTION [14 upvotes]: For $n \geq 4$, show that $\binom{\binom{n}{2}}{2} = 3 \binom{n}{3}+ 3 \binom{n}{4}$. LHS: So we have a set of $\binom{n}{2}$ elements, and we are choosing a $2$ element subset. RHS: We are choosing a $3$ element subset and a $4$ element subset (each from a set of $n$ elements). But we multiply by $3$ by the multiplication principle for some reason. REPLY [14 votes]: LHS: The $\binom{n}{2}$ is the number of pairs you can form from $n$ distinct elements, so the LHS counts the number of ways to choose two distinct pairs. RHS: Notice that you can choose two pairs that have a common element (but only one). If the two pairs are disjoint, then you need to choose four elements and then ask how you pair them. If the pairs have a common element, then you need to choose only three elements and then choose which is the common element.
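The double count is easy to confirm by machine, and both multiplicities turn out to be $3$: four elements split into two pairs in $3$ ways, and three elements offer $3$ choices of the shared element. A quick sketch:

```python
from math import comb

# LHS: unordered pairs of distinct pairs drawn from C(n,2) pairs.
# RHS: split by whether the two pairs are disjoint (4 chosen elements,
# paired up in 3 ways) or share one element (3 chosen, 3 choices of it).
for n in range(4, 30):
    assert comb(comb(n, 2), 2) == 3 * comb(n, 3) + 3 * comb(n, 4)
print("identity holds for n = 4, ..., 29")
```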
TITLE: Is there an explicit way to determine $\mathrm{Mat}_n(R[X_1,\dots,X_m])\simeq\mathrm{Mat}_n(R)[X_1,\dots,X_m]$? QUESTION [3 upvotes]: For a commutative ring $R$, let $\mathrm{Mat}_n(R[X_1,\dots,X_m])$ denote the matrix ring with entries from $R[X_1,\dots,X_m]$, and let $\mathrm{Mat}_n(R)[X_1,\dots,X_m]$ denote the polynomial ring with coefficients in $\mathrm{Mat}_n(R)$. Is there an easy way to see that both structures are isomorphic as rings? Even experimenting with just one indeterminate for small values of $n$, I'm having difficulty finding a suitable map to verify. What is the natural ring isomorphism here? Thanks. REPLY [6 votes]: There is an evident map $M_n(R)\to M_n(R[X_1,\dots,X_m])$, which is injective and a map of rings, so we can identify the elements of $M_n(R)$ with their images in $M_n(R[X_1,\dots,X_m])$. On the other hand, for each $i\in\{1,\dots,m\}$ let $\underline X_i$ be the element of $M_n(R[X_1,\dots,X_m])$ which is a diagonal matrix all of whose diagonal entries are $X_i$, so that $\underline X_i=X_i\cdot I_n$, with $I_n\in M_n(R[X_1,\dots,X_m])$ the identity matrix. An element $A$ of $M_n(R[X_1,\dots,X_m])$ can be written in exactly one way as a finite sum $$\sum_{i_1,\dots,i_m\geq0} a_{i_1,\dots,i_m}\underline X_1^{i_1}\cdots \underline X_m^{i_m}$$ with the $a_{i_1,\dots,i_m}$ elements of $M_n(R)$. That's where the map comes from. For all $i_1,\dots,i_m\geq0$ and all $i$, $j\in\{1,\dots,n\}$, the $(i,j)$th entry of the matrix $a_{i_1,\dots,i_m}$ is the coefficient of $X_1^{i_1}\cdots X_m^{i_m}$ in the $(i,j)$th entry of $A$. Alternatively, let us write $S=R[X_1,\dots,X_m]$. The ring $M_n(S)$ is the endomorphism ring of the free left $S$-module $S^n$ of rank $n$.
One can check that there is a canonical isomorphism $$\hom_S(S^n,S^n)\to S\otimes_R\hom_R(R^n,R^n)$$ and, since $\hom_R(R^n,R^n)\cong M_n(R)$, this tells us that $$M_n(S)\cong S\otimes_R M_n(R)$$ We are thus left with showing that $S\otimes_R M_n(R)\cong M_n(R)[X_1,\dots,X_m]$. It is in fact true that for all $R$-algebras $\Lambda$ we have an isomorphism $$R[X_1,\dots,X_m]\otimes_R\Lambda\cong\Lambda[X_1,\dots,X_m],$$ and we want this when $\Lambda=M_n(R)$. Can you do this?
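The coefficient-extraction described in the first part of the answer is easy to make concrete. A small numeric sketch for $n=2$, $m=1$ (numpy; all names are mine): entries are coefficient lists, and since a scalar $x$ is central, evaluating $\sum_k a_k x^k$ must agree with entrywise polynomial evaluation:

```python
import numpy as np

def coefficient_matrices(A):
    """Split a matrix of polynomials (entries = coefficient lists, lowest
    degree first) into scalar matrices a_k with A = sum_k a_k X^k."""
    n = len(A)
    deg = max(len(p) for row in A for p in row) - 1
    return [np.array([[A[i][j][k] if k < len(A[i][j]) else 0
                       for j in range(n)] for i in range(n)], dtype=float)
            for k in range(deg + 1)]

# A = [[1 + 2X, X], [3, 4 + 5X^2]] viewed as an element of Mat_2(R[X])
A = [[[1, 2], [0, 1]], [[3], [4, 0, 5]]]
mats = coefficient_matrices(A)

x = 3.0
# Evaluate in Mat_2(R)[X]: sum of coefficient matrices times powers of x
via_matrices = sum(a * x**k for k, a in enumerate(mats))
# Evaluate in Mat_2(R[X]): evaluate each polynomial entry at x
entrywise = np.array([[sum(c * x**k for k, c in enumerate(A[i][j]))
                       for j in range(2)] for i in range(2)])
```

Both evaluations give the same matrix, illustrating that the identification respects the ring structure (for products one uses the Cauchy/convolution product of the coefficient matrices).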
London: “Ultimately I'm a defender: an appeal may work or it may not. I will never pull out of a challenge, just as I will never intend to injure a player. It's about the team, and our team looks strong at the moment. We need to maintain this level of performance.” Meanwhile, first-half goals from James Milner and Edin Dzeko secured victory against an Arsenal side who themselves played most of the match a man down after defender Laurent Koscielny saw red for a 10th-minute foul on Dzeko. – Sapa-AFP
\begin{document} \author{Sergio D. Grillo \\ \textit{Centro At\'{o}mico Bariloche and Instituto Balseiro}\\ \textit{\ 8400-S. C. de Bariloche}\\ \textit{\ Argentina}} \title{\emph{FRT} Construction and Equipped Quantum Linear Spaces} \date{February 2003 } \maketitle \begin{abstract} We show there exists a rigid monoidal category formed out by quantum spaces with an additional structure, such that \emph{FRT }bialgebras and corresponding rectangular generalizations are its internal coEnd and coHom objects, respectively. This enables us to think of them as the coordinate rings of `quantum spaces of homomorphisms' that preserve the mentioned structure. The well known algebra epimorphisms between \emph{FRT} bialgebras and Manin quantum semigroups translate into `inclusions' of the corresponding quantum spaces, as the space of endomorphisms of a metric linear space $\mathbf{V}$ is included in $gl\left( \mathbf{V}\right) $. Our study is mainly developed for quadratic quantum spaces, and later generalized to the conic case. \end{abstract} \section{Introduction} Given a finite dimensional $\Bbbk $-vector space $\mathbf{V}$ and a linear endomorphism $\Bbb{R}$ of $\mathbf{V\otimes V}$, a universal bialgebra $ A\left( \Bbb{R}\right) $ can be constructed \cite{kas}. The assignment of $ A\left( \Bbb{R}\right) $ to each pair $\left( \mathbf{V},\Bbb{R}\right) $ is known as the \emph{FRT} \emph{construction} \cite{frt}. Every $A\left( \Bbb{R} \right) $ is a generically non commutative quadratic algebra generated by a finite dimensional coalgebra, more precisely, by the linearly independent coefficients of a multiplicative matrix $\mathbf{t}$ \cite{A}. This is why they are called \emph{quantum matrix bialgebras}.
Given a basis $\left\{ v_{i}\right\} $ of $\mathbf{V}$, coefficients of $\mathbf{t}$ are identified with the elements $t_{i}^{j}=v^{j}\otimes v_{i}$ of $\mathbf{V}^{\ast }\otimes \mathbf{V}$, and the quadratic relations they must satisfy are sometimes written \[ \Bbb{R}_{ij}^{kl}\;t_{k}^{n}\,t_{l}^{m}-t_{i}^{k}\,t_{j}^{l}\;\Bbb{R} _{kl}^{nm};\;\;\;i,j,n,m=1,...,\dim \mathbf{V}, \] where $\Bbb{R}_{ij}^{kl}\in \Bbbk $ are the coefficients of $\Bbb{R}$ in the given basis. Quantum matrix bialgebras are the `dual' version of quantum universal enveloping algebras, such as Drinfeld-Jimbo \cite{dj} quantized Lie bialgebras $U_{q}\left( \frak{g}\right) $.\footnote{ It is worth mentioning that we are not asking for $\Bbb{R}$ to be a Yang-Baxter operator. If this were the case, $\Bbb{R}$ would indicate the so called $R$ -matrix.} On the other hand, they are quotients of Manin \emph{quantum }( \emph{semi})\emph{groups} $\underline{end}\left[ \mathcal{V}\right] $ \cite {man0}, i.e. the internal coEnd objects of the monoidal category $\mathrm{QA} $ of quadratic algebras. In other words, there exists a bialgebra epimorphism $\underline{end}\left[ \mathcal{V}\right] \twoheadrightarrow A\left( \Bbb{R}\right) $ in $\mathrm{QA}$. The relationship between $\Bbb{R}$ and $\mathcal{V}$ will be discussed later. For now, let us write $\mathcal{V} \Vdash \Bbb{R}$ when they are related. Using geometric language, each object $\underline{end}\left[ \mathcal{V}\right] $ is interpreted as the coordinate ring of a non commutative algebraic variety, or \emph{quantum linear space}, living in the opposite category $\mathrm{QA}^{op}$. It represents the quantum semigroup of endomorphisms corresponding to the quantum space $ \mathcal{V}^{op}$.
Thus, the epimorphism above gives rise to a monomorphism $A\left( \Bbb{R}\right) ^{op}\hookrightarrow \underline{end}\left[ \mathcal{V}\right] ^{op}$ enabling us to regard $A\left( \Bbb{R}\right) $ as the coordinate ring of a quantum subspace of $\underline{end}\left[ \mathcal{V}\right] ^{op} $. Of course, Manin's construction also includes \emph{quantum spaces of homomorphisms} $\underline{hom}\left[ \mathcal{W},\mathcal{V}\right] $, where $\mathcal{W}$ denotes a quadratic algebra generated by a vector subspace $\mathbf{W}$. They have as \emph{FRT} analogue the \emph{ rectangular quantum matrix algebras} $A\left( \Bbb{R}:\Bbb{S}\right) $ \cite {mm}\cite{maj}, in the sense that there exist algebra epimorphisms $ \underline{hom}\left[ \mathcal{W},\mathcal{V}\right] \twoheadrightarrow A\left( \Bbb{R}:\Bbb{S}\right) $ leading us to a geometric interpretation as described before. $\Bbb{S}$ denotes a linear map $\mathbf{W}\otimes \mathbf{W }\rightarrow \mathbf{W}\otimes \mathbf{W}$ such that $\mathcal{W}\Vdash \Bbb{ S}$. Given a basis $\left\{ w_{i}\right\} $ of $\mathbf{W}$, the algebras $ A\left( \Bbb{R}:\Bbb{S}\right) $ are generated by symbols $ t_{i}^{j}=w^{j}\otimes v_{i}\in \mathbf{W}^{\ast }\otimes \mathbf{V}$ satisfying\footnote{ We are using the convention of ref. \cite{phh} to evaluate the maps $\Bbb{R}$ and $\Bbb{S}$ appearing in the rectangular quantum matrix algebras, instead of the one used in \cite{mm}.} \begin{equation} \Bbb{R}_{ij}^{kl}\;t_{k}^{n}\,t_{l}^{m}-t_{i}^{k}\,t_{j}^{l}\;\Bbb{S} _{kl}^{nm};\;\;\;i,j=1,...,\dim \mathbf{V},\;\;\;n,m=1,...,\dim \mathbf{W}. \label{rqa} \end{equation} These algebras were studied in detail in \cite{phh}, where $\Bbb{R}$ and $ \Bbb{S}$ are Yang-Baxter operators of Hecke type.
This paper was mainly motivated by the following question induced by `inclusions' $A\left( \Bbb{R}:\Bbb{S}\right) ^{op}\hookrightarrow \underline{ hom}\left[ \mathcal{W},\mathcal{V}\right] ^{op}$: Do the quadratic algebras $ A\left( \Bbb{R}:\Bbb{S}\right) $ represent homomorphisms between quantum spaces supplied with some additional structure, i.e. spaces that are not characterized just by their respective coordinate rings? In order to answer this question we encode Manin and \emph{FRT} constructions, reformulating and generalizing the latter, in the unifying language of rigid monoidal categories \cite{dm}. We show that bialgebras $ A\left( \Bbb{R}\right) $ can be seen as internal coEnd objects contained in a certain rigid monoidal category $\left( \mathrm{EQA},\boxtimes \right) $, the \emph{equipped quantum spaces}, formed out by pairs $\frak{V}=\left( \mathcal{V};\Bbb{R}\right) $ with $\mathcal{V}\in \mathrm{QA}$ and $\mathcal{ V}\Vdash \Bbb{R}$. More precisely, there exists a surjective embedding $ \mathsf{U}:\mathrm{EQA}\hookrightarrow \mathrm{QA}:\left( \mathcal{V};\Bbb{R} \right) \mapsto \mathcal{V}$, and a related opposite $\mathsf{U}^{op}: \mathrm{EQA}^{op}\hookrightarrow \mathrm{QA}^{op}$, such that for each pair $ \frak{V}$, the object $\underline{hom}\left[ \frak{V},\frak{V}\right] = \underline{end}\left[ \frak{V}\right] $ is \emph{functored} to $A\left( \Bbb{ R}\right) $. In general, coHom objects $\underline{hom}\left[ \frak{W},\frak{ V}\right] $, with $\frak{W}=\left( \mathcal{W};\Bbb{S}\right) $, are sent to $A\left( \Bbb{R}:\Bbb{S}\right) $.
Moreover, from the general formalism of rigid monoidal categories there follow the existence and associativity properties of the \emph{rectangular comultiplication} maps defined in \cite{mm}, as well as the existence of algebra epimorphisms \begin{equation} \underline{hom}_{\mathrm{QA}}\left[ \mathsf{U}\frak{W},\mathsf{U}\frak{V} \right] \twoheadrightarrow \mathsf{U}\underline{hom}_{\mathrm{EQA}}\left[ \frak{W},\frak{V}\right] ,\;\;\;\forall \frak{W},\frak{V}\in \mathrm{EQA}, \label{inc} \end{equation} from which the previously mentioned `inclusions' are deduced. We conclude that each algebra $A\left( \Bbb{R}:\Bbb{S}\right) $ is the coordinate ring of the space $\underline{hom}\left[ \frak{W},\frak{V}\right] ^{op}\in \mathrm{EQA} ^{op}$ of homomorphisms between the spaces $\frak{W}^{op}$ and $\frak{V}^{op}$. Thus, such spaces are described by their respective coordinate rings $ \mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] =A\left( \Bbb{R}: \Bbb{S}\right) $, $\mathsf{U}\frak{W}$ and $\mathsf{U}\frak{V}$ (given by quadratic algebras), and additional data. We also show that $\mathrm{EQA}$ is equivalent to a category whose objects are pairs $\left( \mathbf{V},\Bbb{R} \right) $ as described above, in such a way that we can write $\left( \mathcal{V};\Bbb{R}\right) \equiv \left( \mathbf{V},\Bbb{R}\right) $ if $ \mathcal{V}$ is generated by $\mathbf{V}$. Hence, we are assigning a bialgebra $A\left( \Bbb{R}\right) =\mathsf{U}\underline{end}\left[ \frak{V} \right] $ to each pair $\left( \mathbf{V},\Bbb{R}\right) $ in a universal way. \section{Quantum linear spaces} In what follows $\Bbbk $ denotes one of the fields $\Bbb{R}$ or $\Bbb{C}$. The usual tensor product on $\Bbbk \mathrm{-Alg}=\mathrm{Alg}$ \textrm{\ }and $\mathrm{Vct}_{\Bbbk }=\mathrm{Vct}$ (the categories of unital associative $\Bbbk $-algebras and of $\Bbbk $-vector spaces, respectively) is denoted by $\otimes $.
$\mathrm{Vct}_{f}$ denotes the full subcategory of $\mathrm{Vct}$ formed out by finite dimensional vector spaces. Originally \cite{man0}, Manin defined quantum spaces as opposite objects to quadratic algebras. The latter are pairs $\left( \mathbf{A}_{1},\mathbf{A} \right) $, with $\mathbf{A}\in \mathrm{Alg}$ generated by $\mathbf{A}_{1}$ in $\mathrm{Vct}_{f}$, such that the canonical epimorphism $\mathbf{A} _{1}^{\otimes }\twoheadrightarrow \mathbf{A}$ has as kernel a bilateral ideal algebraically generated by a subspace of $\mathbf{A}_{1}^{\otimes 2}$. As usual, $\mathbf{A}_{1}^{\otimes }=\tbigoplus_{n\in \Bbb{N}_{0}}\mathbf{A} _{1}^{\otimes n}$ denotes the tensor algebra of $\mathbf{A}_{1}$ (where $\Bbb{N}_{0}\doteq \Bbb{N}\cup \left\{ 0\right\} $). To be more explicit, for every quadratic algebra $\left( \mathbf{A}_{1},\mathbf{A} \right) $ there exists a subspace $\mathbf{R}\subset \mathbf{A}_{1}^{\otimes 2}$ such that \[ \ker \left[ \mathbf{A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}\right] =I \left[ \mathbf{R}\right] =\mathbf{A}_{1}^{\otimes }\cdot \mathbf{R}\cdot \mathbf{A}_{1}^{\otimes }. \] In general, we denote by $I\left[ \mathbf{X}\right] \subset \mathbf{A} _{1}^{\otimes }$ the bilateral ideal generated by a set $\mathbf{X}\subset \mathbf{A}_{1}^{\otimes }$. For instance, each algebra $A\left( \Bbb{R}:\Bbb{ S}\right) $ defines a quantum space \[ A\left( \Bbb{R}:\Bbb{S}\right) \equiv \left( \mathbf{W}^{\ast }\otimes \mathbf{V},A\left( \Bbb{R}:\Bbb{S}\right) \right) . \] The kernel of its related canonical epimorphism is generated by the elements given in Eq. $\left( \ref{rqa}\right) $.
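To make the quadratic relations concrete, here is a small numerical sketch (numpy; the square case $\mathbf{W}=\mathbf{V}$, $\Bbb{S}=\Bbb{R}$, with the assumed convention $\Bbb{R}(v_i\otimes v_j)=\sum\Bbb{R}^{kl}_{ij}\,v_k\otimes v_l$; the function name is mine). It computes the dimension of the span of the elements $\Bbb{R}^{kl}_{ij}t^n_kt^m_l-t^k_it^l_j\Bbb{R}^{nm}_{kl}$ inside the degree-two part of the free algebra on the $t^j_i$: for $\Bbb{R}=\mathrm{id}$ all the elements vanish, while for the flip $\tau$ they span exactly the commutators, so $A(\tau)$ is the commutative polynomial algebra on the $t^j_i$.

```python
import numpy as np
from itertools import product

def frt_relation_rank(R):
    """Dimension of the span of the FRT elements
    R^{kl}_{ij} t_k^n t_l^m - t_i^k t_j^l R^{nm}_{kl},
    expanding in the basis of degree-2 monomials t_a^b t_c^e."""
    d = R.shape[0]
    rows = []
    for i, j, n, m in product(range(d), repeat=4):
        coeff = np.zeros((d, d, d, d))  # coeff[a,b,c,e] of t_a^b t_c^e
        for k, l in product(range(d), repeat=2):
            coeff[k, n, l, m] += R[k, l, i, j]  #  R^{kl}_{ij} t_k^n t_l^m
            coeff[i, k, j, l] -= R[n, m, k, l]  # -t_i^k t_j^l R^{nm}_{kl}
        rows.append(coeff.ravel())
    return np.linalg.matrix_rank(np.array(rows))

d = 2
eye = np.eye(d)
identity = np.einsum('ki,lj->klij', eye, eye)  # R = id: no relations at all
flip = np.einsum('kj,li->klij', eye, eye)      # R = tau: commutators only
```

For $d=2$ the flip yields rank $6$, the dimension of the antisymmetric part of the $16$-dimensional space of ordered monomial pairs.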
The category $\mathrm{QA}$, as mentioned before, has the above pairs as objects, and as arrows $\left( \mathbf{A} _{1},\mathbf{A}\right) \rightarrow \left( \mathbf{B}_{1},\mathbf{B}\right) $ the algebra homomorphisms $\mathbf{A}\rightarrow \mathbf{B}$ that preserve the generating spaces, that is to say, such that $\mathbf{A}\rightarrow \mathbf{B}$ restricted to $\mathbf{A} _{1}$ defines a linear map $\mathbf{A} _{1}\rightarrow \mathbf{B}_{1}$. In \cite{man1}, Manin extended the concept to arbitrary finitely generated algebras, i.e. pairs $\left( \mathbf{A}_{1},\mathbf{A}\right) $ as above, but without restrictions on their respective canonical epimorphisms $\mathbf{ A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}$. We shall denote by $\mathrm{FGA}$ the category formed out by these pairs. Its arrows are again algebra homomorphisms preserving the generating linear spaces. Thus, $\mathrm{QA}$ is a full subcategory of $\mathrm{FGA}$. Note that arrows $\alpha :\left( \mathbf{A}_{1},\mathbf{A}\right) \rightarrow \left( \mathbf{B}_{1},\mathbf{B} \right) $ in $\mathrm{FGA}$ are characterized by linear maps $\alpha _{1}: \mathbf{A}_{1}\rightarrow \mathbf{B}_{1}$ such that \[ \alpha _{1}^{\otimes }\left( \ker \left[ \mathbf{A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}\right] \right) \subset \ker \left[ \mathbf{B} _{1}^{\otimes }\twoheadrightarrow \mathbf{B}\right] . \] In $\mathrm{QA}$, if $\mathbf{R}$ and $\mathbf{S}$ are the subspaces generating the respective kernels, the last condition reads $\alpha _{1}^{\otimes 2}\left( \mathbf{R}\right) \subset \mathbf{S}$. In \cite{gm} we study another full subcategory of $\mathrm{FGA}$, namely $\mathrm{CA}$, the conic algebras or conic quantum spaces. Its objects $\left( \mathbf{A}_{1}, \mathbf{A}\right) $ are such that $\mathbf{A}$ is a graded algebra and $ \mathbf{A}_{1}$ is its subspace of homogeneous elements of degree one, or equivalently, its related ideal (i.e. the kernel of its canonical epimorphism) is a graded subalgebra of $\mathbf{A}_{1}^{\otimes }$.
Examples of them, besides quadratic ones, are the so-called $m$-th quantum spaces, whose associated ideals are generated by a subspace of $\mathbf{A}_{1}^{\otimes m}$, for some $m\geq 2$. The latter, in turn, form a full subcategory $\mathrm{ CA}^{m}$ of $\mathrm{CA}$, leading us to the full inclusions $\mathrm{CA} ^{m}\subset \mathrm{CA}\subset \mathrm{FGA}$. Of course, $\mathrm{QA}= \mathrm{CA}^{2}$. The monoidal product we consider on these categories is the bifunctor $\circ $, given on objects by \begin{equation} \left( \mathbf{A}_{1},\mathbf{A}\right) \circ \left( \mathbf{B}_{1},\mathbf{B }\right) =\left( \mathbf{A}_{1}\otimes \mathbf{B}_{1},\mathbf{A}\circ \mathbf{B}\right) , \label{mon} \end{equation} with $\mathbf{A}\circ \mathbf{B}$ the subalgebra of $\mathbf{A}\otimes \mathbf{B}$ generated by $\mathbf{A}_{1}\otimes \mathbf{B}_{1}$. On arrows, it assigns to $\alpha $ and $\beta $, with domains $\left( \mathbf{A}_{1}, \mathbf{A}\right) $ and $\left( \mathbf{B}_{1},\mathbf{B}\right) $, respectively, the algebra morphism $\alpha \circ \beta =\left. \alpha \otimes \beta \right| _{\mathbf{A}\circ \mathbf{B}}$. The unit object is $ \mathcal{I}=\left( \Bbbk ,\Bbbk \right) $ in $\mathrm{FGA}$ and $\mathcal{K} =\left( \Bbbk ,\Bbbk ^{\otimes }\right) $ in $\mathrm{CA}$ and every $ \mathrm{CA}^{m}$. Let us mention that the forgetful functor $\mathsf{F}:\mathrm{ FGA}\rightarrow \mathrm{Alg}:\left( \mathbf{A}_{1},\mathbf{A}\right) \mapsto \mathbf{A}$ preserves the units, since $\mathsf{F}\mathcal{I}=\Bbbk $, but is not monoidal. Nevertheless, it is easy to check that the algebra inclusions $ i_{\mathcal{A},\mathcal{B}}:\mathbf{A}\circ \mathbf{B}\hookrightarrow \mathbf{A}\otimes \mathbf{B}$, related to quantum spaces $\mathcal{A}=\left( \mathbf{A}_{1},\mathbf{A}\right) $ and $\mathcal{B}=\left( \mathbf{B}_{1}, \mathbf{B}\right) $, define a natural transformation $\mathsf{F}\,\circ \,\rightarrow \otimes \,\left( \mathsf{F}\times \mathsf{F}\right) $.
(Note that $\mathsf{F}\left( \mathcal{A}\circ \mathcal{B}\right) =\mathbf{A}\circ \mathbf{B}$ and $\mathsf{F}\mathcal{A}\otimes \mathsf{F}\mathcal{B}=\mathbf{A }\otimes \mathbf{B}$.) That is to say, for any couple of arrows $\alpha ,\beta \in \mathrm{FGA}$, with $\alpha :\mathcal{A}\rightarrow \mathcal{C}$ and $\beta :\mathcal{B}\rightarrow \mathcal{D}$, the diagram \begin{equation} \begin{diagram}[s=2.5em] \QTR{bf}{A}\circ \QTR{bf}{B} & \rInto^{i_{\QTR{cal}{A},\QTR{cal}{B}}} & \QTR{bf}{A}\otimes \QTR{bf}{B} \\ \dTo^{\QTR{sf}{F}\left( \alpha \circ \beta \right) }& & \dTo_{\QTR{sf}{F}\alpha \otimes \QTR{sf}{F}\beta } \\ \QTR{bf}{C}\circ \QTR{bf}{D} & \rInto^{i_{\QTR{cal}{C},\QTR{cal}{D}}} & \QTR{bf}{C}\otimes \QTR{bf}{D} \\ \end{diagram} \label{fa} \end{equation} is commutative. When restricted to $\mathrm{CA}$ and every $\mathrm{CA}^{m}$, the above natural transformation holds, but $\mathsf{F}$ does not respect the units, because $\mathsf{F}\mathcal{K}=\Bbbk ^{\otimes }$. However, the canonical projection $\Bbbk ^{\otimes }\twoheadrightarrow \Bbbk $ defines epimorphisms $p_{\mathcal{A}}:\Bbbk ^{\otimes }\otimes \mathbf{A} \twoheadrightarrow \Bbbk \otimes \mathbf{A}$, with $\mathsf{F}\mathcal{A}= \mathbf{A}$, that make commutative the diagrams \begin{equation} \begin{diagram}[w=3.5em] \Bbbk ^{\otimes }\circ \QTR{bf}{A} & \rInto^{i_{\QTR{cal}{K},\QTR{cal}{A}}} & \Bbbk ^{\otimes }\otimes \QTR{bf}{A} & \rOnto^{p_{\QTR{cal}{A}}} & \Bbbk \otimes \QTR{bf}{A} \\ & \rdTo~{\backsimeq } & & \ruTo~{\backsimeq} & \\ & & \QTR{bf}{A} & & \\ \end{diagram} \label{fu} \end{equation} Denoting by $e$ the generator of $\Bbbk $, we have \[ \Bbbk ^{\otimes }=\Bbbk \left[ e\right] \;\;\;\;and\;\;\;p_{\mathcal{A} }\left( e^{n}\otimes a\right) = e\otimes a.
\] The isomorphisms $\mathbf{A}\backsimeq \Bbbk \otimes \mathbf{A}$ and $ \mathbf{A}\backsimeq \Bbbk ^{\otimes }\circ \mathbf{A}$ are the functorial isomorphisms related to the left unital constraint in $\mathrm{Alg}$ and $ \mathrm{CA}$, respectively. Of course, a diagram analogous to $\left( \ref {fu}\right) $ but with $\Bbbk $ on the right also holds. \bigskip There exist internal coHom objects in each one of these monoidal categories. For instance, for $\mathcal{A}=\left( \mathbf{A}_{1},\mathbf{A}\right) $ and $\mathcal{B}=\left( \mathbf{B}_{1},\mathbf{B}\right) $ in $\mathrm{CA}$ (resp. $\mathrm{CA}^{m}$), they are given by graded algebras $\underline{hom} \left[ \mathcal{B},\mathcal{A}\right] $ generated by $\mathbf{B}_{1}^{\ast }\otimes \mathbf{A}_{1}$ and constrained by homogeneous relations (resp. of $ m$-th order). For more details, see \cite{gm}. \section{The equipped quantum spaces} Consider $\mathcal{A}=\left( \mathbf{A}_{1},\mathbf{A}\right) \in \mathrm{QA} $ and a linear map $\Bbb{R}:\mathbf{A}_{1}^{\otimes 2}\rightarrow \mathbf{A} _{1}^{\otimes 2}$. \begin{definition} We say $\Bbb{R}$ is \textbf{compatible} with $\mathcal{A}$, and use the shorthand notation $\mathcal{A}\Vdash $ $\Bbb{R}$, if \[ \ker \left[ \mathbf{A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}\right] =I \left[ \func{Im}\Bbb{R}\right] . \] A pair $\left( \mathcal{A};\Bbb{R}\right) $ such that $\mathcal{A}\Vdash $ $ \Bbb{R}$ will be called an \textbf{equipped quantum space} \emph{(}or equipped quadratic algebra\emph{)} with \textbf{structure }$\Bbb{R}$. If a morphism of quantum spaces $\alpha :\mathcal{A}\rightarrow \mathcal{B}$, with $ \mathcal{A}\Vdash $ $\Bbb{R}$ and $\mathcal{B}\Vdash \Bbb{S}$, satisfies $ \alpha _{1}^{\otimes 2}\,\Bbb{R}=\Bbb{S}\,\alpha _{1}^{\otimes 2}$, we say that $\alpha $ preserves the structures $\Bbb{R}$ and $\Bbb{S}$.
The category formed by equipped quantum spaces and structure-preserving arrows will be denoted $\mathrm{EQA}$.\ \ \ $\blacksquare $ \end{definition} From now on, we reserve the name \emph{pair }only for equipped quantum spaces (in contrast to the last section, where we used it for ordinary ones). A simple characterization of equipped quantum spaces is given by the following result. \begin{proposition} The category $\mathrm{EQA}$ is equivalent to one whose objects are pairs $ \left( \mathbf{V},\Bbb{R}\right) $, with $\mathbf{V}\in \mathrm{Vct}_{f}$ and $\Bbb{R}:\mathbf{V}^{\otimes 2}\rightarrow \mathbf{V}^{\otimes 2}$ a linear map, and whose arrows $\left( \mathbf{V},\Bbb{R}\right) \rightarrow \left( \mathbf{W},\Bbb{S}\right) $ are linear homomorphisms $l:\mathbf{V} \rightarrow \mathbf{W}$ such that $l^{\otimes 2}\,\Bbb{R}=\Bbb{S} \,l^{\otimes 2}$. \end{proposition} \textbf{Proof:} The equivalence is defined by the functors \begin{equation} \mathsf{f}:\left( \mathcal{A};\Bbb{R}\right) \mapsto \left( \mathbf{A}_{1}, \Bbb{R}\right) \;\;\;and\;\;\;\mathsf{g}:\left( \mathbf{V},\Bbb{R}\right) \mapsto \left( \left( \mathbf{V},\left. \mathbf{V}^{\otimes }\right/ I\left[ \func{Im}\Bbb{R}\right] \right) ;\Bbb{R}\right) . \label{fg} \end{equation} On arrows, $\mathsf{f}\alpha =\alpha _{1}$ and $\mathsf{g}l$ is the extension of $l$ to an algebra homomorphism.
If $l$ goes from $\left( \mathbf{V},\Bbb{R}\right) $ to $\left( \mathbf{W},\Bbb{S}\right) $, since \begin{eqnarray*} l^{\otimes }\left( I\left[ \func{Im}\Bbb{R}\right] \right) &=&\mathbf{W} ^{\otimes }\cdot l^{\otimes }\left( \func{Im}\Bbb{R}\right) \cdot \mathbf{W} ^{\otimes }=\mathbf{W}^{\otimes }\cdot \left( \func{Im}l^{\otimes 2}\,\Bbb{R} \right) \cdot \mathbf{W}^{\otimes } \\ &=&\mathbf{W}^{\otimes }\cdot \left( \func{Im}\Bbb{S}\,l^{\otimes 2}\right) \cdot \mathbf{W}^{\otimes }\subset \mathbf{W}^{\otimes }\cdot \func{Im}\Bbb{S }\,\cdot \mathbf{W}^{\otimes }=I\left[ \func{Im}\Bbb{S}\right] , \end{eqnarray*} this extension is well defined. The natural equivalence $\mathsf{f}\circ \mathsf{g}\backsimeq id$ is immediate. The functorial isomorphisms for the equivalence $\mathsf{g}\circ \mathsf{f} \backsimeq id$ are given by the algebra isomorphisms $\mathbf{A}\backsimeq \left. \mathbf{A}_{1}^{\otimes }\right/ I\left[ \func{Im}\Bbb{R}\right] $, which are well defined provided $\ker \left[ \mathbf{A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}\right] =I\left[ \func{Im}\Bbb{R}\right] .\;\;\;\blacksquare $ \bigskip Because of this equivalence, we identify the objects of both categories. That is to say, we regard the pairs $\left( \mathbf{A}_{1},\Bbb{R}\right) $ and $\left( \mathcal{A};\Bbb{R}\right) $ as the same thing, denoting both categories by \textrm{EQA}.\footnote{ Note that the category $\mathcal{YB}$ defined in \cite{mm}, formed by pairs $\left( \mathbf{V},\Bbb{R}\right) $ such that $\Bbb{R}$ is a Yang-Baxter solution of $q$-Hecke type, is a full subcategory of $\mathrm{EQA }$.} Since it is often easier to deal with pairs $\left( \mathbf{V},\Bbb{R}\right) $ than with $\left( \mathcal{A};\Bbb{R}\right) $, we shall carry out our constructions mainly in terms of the former. Naturally, they also provide a more direct contact with the \emph{FRT }construction.
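As a concrete illustration of a pair $\left( \mathbf{V},\Bbb{R}\right) $ -- not taken from the text, but the standard first example -- one can encode the quantum plane as an equipped quantum space. The numerical sketch below (the value of the deformation parameter $q$ and the particular matrix chosen for $\Bbb{R}$ are illustrative assumptions) builds a structure map whose image spans the single defining relation:

```python
import numpy as np

# Illustrative sketch: the quantum plane k_q[x, y] as an equipped quantum
# space (V, R).  The value of q and the concrete matrix R are our choices.
q = 2.0

# Basis of V^{otimes 2} for V = span(x, y), ordered (x@x, x@y, y@x, y@y).
# R sends x@y to the relation x@y - q*(y@x) and kills the other basis
# vectors, so Im R = span(x@y - q*(y@x)) and V^{otimes}/I[Im R] is the
# quantum plane with relation xy = q*yx.
R = np.zeros((4, 4))
R[1, 1] = 1.0   # coefficient of x@y
R[2, 1] = -q    # coefficient of y@x

# The image is one-dimensional: a single quadratic relation.
print(np.linalg.matrix_rank(R))   # 1
```

Any other map with the same image equips the same quadratic algebra; only $\func{Im}\Bbb{R}$ enters the compatibility condition $\mathcal{A}\Vdash \Bbb{R}$.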
\subsection{Products and duals} A monoidal structure and an involution can be attached to \textrm{EQA }in the following way. Let us consider the canonical algebra isomorphism $ \varphi _{\mathbf{V},\mathbf{W}}$ between $\left[ \mathbf{V}\otimes \mathbf{W }\right] ^{\otimes }$ and \[ \mathbf{V}\circ \mathbf{W}\doteq \bigoplus\nolimits_{n\in \Bbb{N}_{0}} \mathbf{V}^{\otimes n}\otimes \mathbf{W}^{\otimes n}, \] the subalgebra of $\mathbf{V}^{\otimes }\otimes \mathbf{W}^{\otimes }=\bigoplus\nolimits_{n,m}\mathbf{V}^{\otimes n}\otimes \mathbf{W}^{\otimes m}$ generated by $\mathbf{V}\otimes \mathbf{W}$. The restriction of $\varphi _{\mathbf{V},\mathbf{W}}$ to $\left( \mathbf{V}\otimes \mathbf{W}\right) ^{\otimes 2}$, which we also denote $\varphi _{\mathbf{V},\mathbf{W}}$, is given by \[ v\otimes w\otimes v^{\prime }\otimes w^{\prime }\mapsto v\otimes v^{\prime }\otimes w\otimes w^{\prime };\;\;\forall v,v^{\prime }\in \mathbf{V} ,\;w,w^{\prime }\in \mathbf{W}. \] We define the bifunctor $\boxtimes :\mathrm{EQA}\times \mathrm{EQA} \rightarrow \mathrm{EQA}$ as \begin{equation} \left( \mathbf{V},\Bbb{R}\right) \times \left( \mathbf{W},\Bbb{S}\right) \mapsto \left( \mathbf{V}\otimes \mathbf{W},\Bbb{R}\boxtimes \Bbb{S}\right) ,\;\;\;k\times l\mapsto k\boxtimes l\doteq k\otimes l, \label{A} \end{equation} where \begin{equation} \Bbb{R}\boxtimes \Bbb{S}\doteq \varphi _{\mathbf{V},\mathbf{W}}^{-1}\,\left( \Bbb{R}\otimes \Bbb{I}+\Bbb{I}\otimes \Bbb{S}\right) \,\varphi _{\mathbf{V}, \mathbf{W}} \label{B} \end{equation} (taking into account the restriction of $\varphi _{\mathbf{V},\mathbf{W}}$). $\Bbb{I}$ denotes the identity endomorphism of the corresponding vector spaces. Identifying $\mathbf{V}\circ \mathbf{W}$ and $\left[ \mathbf{V} \otimes \mathbf{W}\right] ^{\otimes }$, we shall write $\Bbb{R}\boxtimes \Bbb{ S}\thickapprox \Bbb{R}\otimes \Bbb{I}+\Bbb{I}\otimes \Bbb{S}$.
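The formula $\left( \ref{B}\right) $ can be made concrete in low dimensions. The following sketch (the dimensions, helper names and the interleaved index conventions are our own choices, not notation from the text) realizes the middle flip $\varphi _{\mathbf{V},\mathbf{W}}$ as a permutation matrix and conjugates $\Bbb{R}\otimes \Bbb{I}+\Bbb{I}\otimes \Bbb{S}$ by it:

```python
import numpy as np

def middle_flip(dV, dW):
    """Permutation matrix for phi_{V,W} restricted to (V@W)^{otimes 2}:
    v_i @ w_j @ v_k @ w_l  |->  v_i @ v_k @ w_j @ w_l."""
    n = dV * dW
    P = np.zeros((n * n, n * n))
    for i in range(dV):
        for j in range(dW):
            for k in range(dV):
                for l in range(dW):
                    col = ((i * dW + j) * dV + k) * dW + l  # interleaved basis
                    row = ((i * dV + k) * dW + j) * dW + l  # grouped basis
                    P[row, col] = 1.0
    return P

def boxtimes(R, S, dV, dW):
    """R boxtimes S as an endomorphism of (V@W)^{otimes 2}, Eq. (B)."""
    P = middle_flip(dV, dW)
    # R acts on V^{otimes 2}, S on W^{otimes 2}, in the grouped space
    T = np.kron(R, np.eye(dW * dW)) + np.kron(np.eye(dV * dV), S)
    return P.T @ T @ P   # phi is a permutation, so phi^{-1} = P transpose

# Sanity check: I boxtimes I = 2*I, since phi^{-1}(I@I + I@I)phi = 2*Id.
B = boxtimes(np.eye(4), np.eye(4), 2, 2)
print(np.allclose(B, 2 * np.eye(16)))   # True
```

The check reflects that $\boxtimes $ is additive, not multiplicative, in the two structure maps.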
Straightforwardly, the bifunctor $\boxtimes $ defines a symmetric monoidal structure with unit object $\frak{K}=\left( \Bbbk ,\Bbb{O}\right) $, where $ \Bbb{O}$ is the null endomorphism of $\Bbbk ^{\otimes 2}$, i.e. $\func{Im}\Bbb{O }=\left\{ 0\right\} $. The functorial isomorphisms $\tau _{\frak{V},\frak{W} }:\frak{V}\boxtimes \frak{W}\backsimeq \frak{W}\boxtimes \frak{V}$ related to the symmetry, with $\frak{V}=\left( \mathbf{V},\Bbb{R}\right) $ and $\frak{W} =\left( \mathbf{W},\Bbb{S}\right) $, are given by the canonical flip maps $ \mathbf{V}\otimes \mathbf{W}\backsimeq \mathbf{W}\otimes \mathbf{V}$, $ v\otimes w\mapsto w\otimes v$. The ones related to the unit are $\ell _{\frak{V} }:v\in \mathbf{V}\mapsto e\otimes v$ and $r_{\frak{V}}:v\in \mathbf{V} \mapsto v\otimes e$, where $e$ denotes the generator of $\Bbbk $. Let us define the contravariant functor $\dagger :\mathrm{EQA}\rightarrow \mathrm{EQA}$, \begin{equation} \dagger \,:\left( \mathbf{V},\Bbb{R}\right) \mapsto \frak{V}^{\dagger }\doteq \left( \mathbf{V}^{\ast },\Bbb{R}^{\dagger }\right) \doteq \left( \mathbf{V}^{\ast },-\Bbb{R}^{\ast }\right) ,\;\;\;\;\dagger \,:l\mapsto l^{\dagger }\doteq l^{\ast }, \label{C} \end{equation} where $\mathbf{V}^{\ast }$ is the dual of $\mathbf{V}$, and $\Bbb{R}^{\ast }$ the transpose map w.r.t. the usual extension to $\mathbf{V}^{\otimes 2}$ of the pairing between $\mathbf{V}$ and $\mathbf{V}^{\ast }$. It is clear that $ \dagger ^{2}=\dagger \dagger $ is naturally equivalent to $id_{\mathrm{EQA}}$ . In particular, $\frak{V}^{\dagger \dagger }\backsimeq \frak{V}$, $\forall \frak{V}\in \mathrm{EQA}$. The relation between $\boxtimes $ and $\dagger $ can be summarized by the equations \[ \left( \frak{V}\boxtimes \frak{W}\right) ^{\dagger }\backsimeq \frak{V} ^{\dagger }\boxtimes \frak{W}^{\dagger },\;\frak{K}^{\dagger }\backsimeq \frak{K}.
\] In terms of pairs $\left( \mathcal{A};\Bbb{R}\right) $, $\boxtimes $ and $ \dagger $ are given by \[ \left( \mathcal{A};\Bbb{R}\right) \times \left( \mathcal{B};\Bbb{S}\right) \mapsto \left( \mathcal{A}\boxtimes \mathcal{B};\Bbb{R}\boxtimes \Bbb{S} \right) ,\;\;\;\alpha \times \beta \mapsto \alpha \boxtimes \beta , \] and \[ \left( \mathcal{A};\Bbb{R}\right) \mapsto \left( \mathcal{A}^{\dagger };\Bbb{ R}^{\dagger }\right) \doteq \left( \left( \mathbf{A}_{1}^{\ast },\mathbf{A} _{1}^{\ast \otimes }/I\left[ \func{Im}\Bbb{R}^{\ast }\right] \right) ;-\Bbb{R }^{\ast }\right) ,\;\;\;\;\alpha \mapsto \alpha ^{\dagger }, \] respectively, where $\mathcal{A}\boxtimes \mathcal{B}\doteq \left( \mathbf{A} _{1}\otimes \mathbf{B}_{1},\left. \left[ \mathbf{A}_{1}\otimes \mathbf{B}_{1} \right] ^{\otimes }\right/ \func{Im}\left[ \Bbb{R}\boxtimes \Bbb{S}\right] \right) $. The arrows $\alpha \boxtimes \beta $ and $\alpha ^{\dagger }$ are the extensions of $\alpha _{1}\otimes \beta _{1}$ and $\alpha _{1}^{\ast }$ to algebra maps. The unit object for $\boxtimes $ is $\left( \mathcal{K}; \Bbb{O}\right) $. \subsection{The embedding $\mathrm{EQA}\hookrightarrow \mathrm{QA}$} Now, we study the relationship between $\mathrm{EQA}$ and $\mathrm{QA}$ as monoidal categories. There exists an obvious forgetful functor between these categories. \begin{proposition} The function $\left( \mathcal{A};\Bbb{R}\right) \mapsto \mathcal{A}$ defines a surjective embedding $\mathsf{U}:\mathrm{EQA}\hookrightarrow \mathrm{QA}$. \end{proposition} \textbf{Proof}: We just need to show that the function is surjective, i.e. that given $ \mathcal{A}\in \mathrm{QA}$, there exists a compatible map $\Bbb{R}$ such that $\mathsf{U}\left( \mathcal{A};\Bbb{R}\right) =\mathcal{A}$. Let $I\left[ \mathbf{R}\right] $ be the ideal related to $\mathcal{A}$. Consider a decomposition $\mathbf{A}_{1}^{\otimes 2}=\mathbf{R}\oplus \mathbf{R}^{c}$, with the associated projection $\Bbb{P}$ such that $\func{Im}\Bbb{P}=\mathbf{R}$.
Since $I\left[ \mathbf{R}\right] =I\left[ \func{Im}\Bbb{P}\right] $, we have $ \mathcal{A}\Vdash \Bbb{P}$, and the proposition follows.\ \ \ \ $\blacksquare $ \bigskip On pairs $\left( \mathbf{V},\Bbb{R}\right) $ the embedding is given by $ \left( \mathbf{V},\Bbb{R}\right) \mapsto \left( \mathbf{V},\left. \mathbf{V} ^{\otimes }\right/ I\left[ \func{Im}\Bbb{R}\right] \right) $ (see the second part of Eq. $\left( \ref{fg}\right) $). The surjectivity is up to isomorphisms in $\mathrm{QA}$. The functor $\mathsf{U}$ obviously preserves the unit objects; in fact, \[ \mathsf{U}\frak{K}=\mathsf{U}\left( \Bbbk ,\Bbb{O}\right) =\left( \Bbbk ,\left. \Bbbk ^{\otimes }\right/ I\left[ \func{Im}\Bbb{O}\right] \right) =\left( \Bbbk ,\Bbbk ^{\otimes }\right) =\mathcal{K}, \] but it is not monoidal. Nevertheless, \begin{proposition} There exist functorial epimorphisms $\mathsf{U}\left( \frak{V}\boxtimes \frak{W}\right) \twoheadrightarrow \mathsf{U}\frak{V}\circ \mathsf{U}\frak{W} $, $\frak{V},\frak{W}\in \mathrm{EQA}$, defining a natural transformation $ \mathsf{U\,}\boxtimes \mathsf{\,}\rightarrow \circ \mathsf{\,}\left( \mathsf{ U}\times \mathsf{U}\right) $. \end{proposition} \textbf{Proof:} It is clear that, given pairs $\frak{V}$ and $\frak{W}$, we have \[ \begin{array}{l} \func{Im}\Bbb{R}\boxtimes \Bbb{S}\thickapprox \varphi _{\mathbf{V},\mathbf{W} }\left( \func{Im}\Bbb{R}\boxtimes \Bbb{S}\right) = \\ \\ =\func{Im}\left[ \Bbb{R}\otimes \Bbb{I}+\Bbb{I}\otimes \Bbb{S}\right] \subset \func{Im}\Bbb{R}\otimes \mathbf{W}^{\otimes 2}+\mathbf{V}^{\otimes 2}\otimes \func{Im}\Bbb{S}, \end{array} \] and accordingly, \[ \varphi _{\mathbf{V},\mathbf{W}}\left( I\left[ \func{Im}\Bbb{R}\boxtimes \Bbb{S}\right] \right) \subset \left( I\left[ \func{Im}\Bbb{R}\right] \otimes \mathbf{W}^{\otimes }+\mathbf{V}^{\otimes }\otimes I\left[ \func{Im} \Bbb{S}\right] \right) \cap \mathbf{V}\circ \mathbf{W}.
\] The first ideal in the above inclusion is related to the quantum space $\mathsf{U }\left( \frak{V}\boxtimes \frak{W}\right) $, and the latter to $\mathsf{U} \frak{V}\circ \mathsf{U}\frak{W}$, since it defines the algebra \[ \left( \left. \mathbf{V}^{\otimes }\right/ I\left[ \func{Im}\Bbb{R}\right] \right) \circ \left( \left. \mathbf{W}^{\otimes }\right/ I\left[ \func{Im} \Bbb{S}\right] \right) . \] Note that the corresponding algebras are quotients of $\left[ \mathbf{V}\otimes \mathbf{W}\right] ^{\otimes }$. Hence, for every couple $\frak{V},\frak{W} \in \mathrm{EQA}$, we have an epimorphism $p_{\frak{V},\frak{W}}:\mathsf{U} \left( \frak{V}\boxtimes \frak{W}\right) \twoheadrightarrow \mathsf{U}\frak{V }\circ \mathsf{U}\frak{W}$. By straightforward calculations, one checks the commutativity of the diagrams \begin{equation} \begin{diagram} \QTR{sf}{U}\left( \QTR{frak}{V}\boxtimes \QTR{frak}{W}\right) & \rOnto^{p_{\QTR{frak}{V},\QTR{frak}{W}}} & \QTR{sf}{U}\QTR{frak}{V}\circ \QTR{sf}{U}\QTR{frak}{W} \\ \dTo^{\QTR{sf}{U}\left( \alpha \boxtimes \beta \right) }& & \dTo_{\QTR{sf}{U}\alpha \circ \QTR{sf}{U}\beta } \\ \QTR{sf}{U}\left( \QTR{frak}{X}\boxtimes \QTR{frak}{Y}\right) & \rOnto^{p_{\QTR{frak}{X},\QTR{frak}{Y}}} & \QTR{sf}{U}\QTR{frak}{X}\circ \QTR{sf}{U}\QTR{frak}{Y} \\ \end{diagram} \label{ua} \end{equation} for every couple of arrows $\alpha :\frak{V}\rightarrow \frak{X}$ and $\beta :\frak{W}\rightarrow \frak{Y}$ in $\mathrm{EQA}$.\ \ \ $\blacksquare $ \bigskip This result, together with the ones relating the monoids $\circ $ and $\otimes $, will be useful in order to construct the rectangular comultiplication maps. \section{Rectangular quantum matrix algebras} We now come to the central result, which we shall prove later, in a more general context. \begin{theorem} The monoidal category $\left( \mathrm{EQA},\boxtimes ,\frak{K}\right) $ is \textbf{rigid}, and has $\dagger $ as duality functor.
For every $\frak{V} =\left( \mathbf{V},\Bbb{R}\right) $ in $\mathrm{EQA}$, the evaluation and coevaluation arrows, $ev_{\frak{V}}:\frak{V}^{\dagger }\boxtimes \frak{V} \rightarrow \frak{K}$ and $coev_{\frak{V}}:\frak{K}\rightarrow \frak{V} \boxtimes \frak{V}^{\dagger }$, respectively, are given by the corresponding maps for $\mathbf{V}$ in the rigid monoidal category $\left( \mathrm{Vct} _{f},\otimes ,\Bbbk \right) $. \ \ \ $\blacksquare $ \end{theorem} We can define the internal coHom object related to a couple $\frak{W},\frak{V }\in \mathrm{EQA}$ as $\underline{hom}\left[ \frak{W},\frak{V}\right] \doteq \frak{W}^{\dagger }\boxtimes \frak{V}$, and take \[ \delta _{\frak{V},\frak{W}}\doteq \tau _{\frak{W},\underline{hom}\left[ \frak{W},\frak{V}\right] }\,\left( coev_{\frak{W}}\boxtimes I\right) \,\ell _{\frak{V}}:\frak{V}\rightarrow \underline{hom}\left[ \frak{W},\frak{V} \right] \boxtimes \frak{W} \] as the (left) coevaluation arrow. Its well-known universality property says: given $\frak{H}\in \mathrm{EQA}$ and $\varphi :\frak{V}\rightarrow \frak{H} \boxtimes \frak{W}$, there exists a unique morphism $\alpha :\underline{hom} \left[ \frak{W},\frak{V}\right] \rightarrow \frak{H}$ making the following diagram commutative: \begin{equation} \begin{diagram} & & \QTR{frak}{V} & & \\ & \ldTo^{\delta _{\QTR{frak}{V},\QTR{frak}{W}}} & & \rdTo^{\varphi } & \\ \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{V}\right] \boxtimes \QTR{frak}{W}& & \rTo^{\alpha \boxtimes I}& &\QTR{frak}{H}\boxtimes\QTR{frak}{W} \\ \end{diagram} \label{diaun} \end{equation} From $\left( \ref{diaun}\right) $ and general properties of monoidal categories follows the existence of the comultiplications \begin{equation} \underline{hom}\left[ \frak{W},\frak{V}\right] \rightarrow \underline{hom} \left[ \frak{U},\frak{V}\right] \boxtimes \underline{hom}\left[ \frak{W}, \frak{U}\right] ,\;\forall \frak{U},\frak{V},\frak{W}\in \mathrm{EQA}, \label{com} \end{equation} given by $\Delta _{\frak{U},\frak{V},\frak{W}}=\tau
_{\underline{hom}\left[ \frak{W},\frak{U}\right] ,\underline{hom}\left[ \frak{U},\frak{V}\right] }\,\left( I\boxtimes coev_{\frak{U}}\boxtimes I\right) \,\left( I\boxtimes \ell _{\frak{V}}\right) $, and counit arrows \begin{equation} \varepsilon _{\frak{V}}=ev_{\frak{V}}:\underline{end}\left[ \frak{V}\right] \rightarrow \frak{K},\;\forall \frak{V}\in \mathrm{EQA}. \label{coun} \end{equation} Coevaluations are particular comultiplications. Indeed, since $\underline{hom }\left[ \frak{K},\frak{V}\right] =\frak{V}$, $\forall \frak{V}\in \mathrm{EQA }$, it can be seen that $\delta _{\frak{V},\frak{W}}=\Delta _{\frak{W},\frak{ V},\frak{K}}$. On the other hand, if $\frak{U}=\frak{V}=\frak{W}$, $\Delta _{ \frak{V}}=\Delta _{\frak{V},\frak{V},\frak{V}}$ and $\varepsilon _{\frak{V}}$ gives $\underline{end}\left[ \frak{V}\right] $ a coalgebra structure in $ \mathrm{EQA}$, and $\delta _{\frak{V}}=\delta _{\frak{V},\frak{V}}$ makes $ \frak{V}$ an $\underline{end}\left[ \frak{V}\right] $-corepresentation in the same category. 
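Since $\varepsilon _{\frak{V}}=ev_{\frak{V}}$, it may help to see numerically why the evaluation respects the equipped structures: by Eq. $\left( \ref{C}\right) $, $\Bbb{R}^{\dagger }=-\Bbb{R}^{\ast }$, and the transpose identity $\left\langle \Bbb{R}^{\ast }u,v\right\rangle =\left\langle u,\Bbb{R}v\right\rangle $ forces the pairing to annihilate the image of $\Bbb{R}^{\dagger }\otimes I+I\otimes \Bbb{R}$. A small random check (the dimensions and names below are illustrative choices, not notation from the text):

```python
import numpy as np

# Check that ev kills the image of (R-dagger @ I + I @ R) on V*{x2} x V{x2},
# which is the degree-2 content of ev being an arrow of equipped spaces.
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 4))   # arbitrary structure on V^{otimes 2}, dim V = 2
u = rng.standard_normal(4)        # a covector in V*^{otimes 2}
v = rng.standard_normal(4)        # a vector in V^{otimes 2}

Rdag = -R.T                       # the dagger structure, Eq. (C)
# ev applied to (Rdag @ I + I @ R)(u x v) = <Rdag u, v> + <u, R v>
lhs = np.dot(Rdag @ u, v) + np.dot(u, R @ v)
print(abs(lhs) < 1e-12)           # True: the two terms cancel exactly
```

The minus sign in $\Bbb{R}^{\dagger }=-\Bbb{R}^{\ast }$ is precisely what produces the cancellation.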
We add that the arrows $\left( \ref{com}\right) $ and $\left( \ref{coun}\right) $ satisfy the usual associativity and unit constraints, expressed by the commutativity of the following diagrams\footnote{ Of course, we have a similar commutative diagram where $\frak{K}$ is on the right.} \begin{equation} \begin{diagram} \underline{hom}\left[ \QTR{frak}{X},\QTR{frak}{V}\right] \boxtimes \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{X}\right] &\rTo& \underline{hom}\left[ \QTR{frak}{Y},\QTR{frak}{V}\right] \boxtimes \underline{hom}\left[ \QTR{frak}{X},\QTR{frak}{Y}\right] \boxtimes \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{X}\right]\\ \uTo& &\uTo \\ \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{V}\right] &\rTo& \underline{hom}\left[ \QTR{frak}{Y},\QTR{frak}{V}\right] \boxtimes \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{Y}\right]\\ \end{diagram} \label{evco} \end{equation} \medskip \begin{equation} \begin{diagram} \QTR{frak}{K}\boxtimes \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{V}\right] & &\lTo & & \underline{end}\left[ \QTR{frak}{V}\right] \boxtimes \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{V}\right] \\ & \luTo~{\backsimeq } & &\ruTo & \\ & & \underline{hom}\left[ \QTR{frak}{W},\QTR{frak}{V}\right] & &\\ \end{diagram} \label{idco} \end{equation} \bigskip Consider pairs $\frak{V}=\left( \mathbf{V},\Bbb{R}\right) $ and $\frak{W} =\left( \mathbf{W},\Bbb{S}\right) $ in $\mathrm{EQA}$, and take bases $ \left\{ v_{i}\right\} $ and $\left\{ w_{i}\right\} $ of $\mathbf{V}$ and $ \mathbf{W}$, respectively. The image under $\mathsf{U}$ of the internal coHom object \[ \underline{hom}\left[ \frak{W},\frak{V}\right] =\frak{W}^{\dagger }\boxtimes \frak{V}=\left( \mathbf{W}^{\ast }\otimes \mathbf{V},\Bbb{S}^{\dagger }\boxtimes \Bbb{R}\right) \] is a quadratic algebra generated by $\mathbf{W}^{\ast }\otimes \mathbf{V}$ and obeying the relations $I\left[ \func{Im}\Bbb{S}^{\dagger }\boxtimes \Bbb{R} \right] $.
Writing $t_{i}^{j}=w^{j}\otimes v_{i}$, a straightforward computation gives \begin{equation} \func{Im}\Bbb{S}^{\dagger }\boxtimes \Bbb{R}=span\left[ \Bbb{R} _{ij}^{kl}\;t_{k}^{n}\otimes t_{l}^{m}-t_{i}^{k}\otimes t_{j}^{l}\;\Bbb{S} _{kl}^{nm}\right] _{i,j,n,m}. \label{raa} \end{equation} Comparing $\left( \ref{rqa}\right) $ with $\left( \ref{raa}\right) $, we have $\mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] =A\left( \Bbb{ R}:\Bbb{S}\right) $. In particular, $\mathsf{U}\underline{end}\left[ \frak{V} \right] =A\left( \Bbb{R}\right) $. Thus, the algebras $A\left( \Bbb{R}:\Bbb{S }\right) $ (resp. the bialgebras $A\left( \Bbb{R}\right) $) are the coordinate rings of equipped quantum spaces with structure $\Bbb{S}^{\dagger }\boxtimes \Bbb{R}$ (resp. $\Bbb{R}^{\dagger }\boxtimes \Bbb{R}$), representing the space of homomorphisms from $\frak{W}^{op}$ to $\frak{V} ^{op}$. \subsection{Rectangular comultiplication and counit maps} Now, we are going to construct the rectangular comultiplication and counit maps defined in \cite{mm} for the algebras $A\left( \Bbb{R}:\Bbb{S}\right) $. This can be done through the steps below: \begin{enumerate} \item Apply the functor $\mathsf{U}$ to the comultiplications given in $\left( \ref{com}\right) $ to obtain the maps \[ \mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] \rightarrow \mathsf{ U}\left( \underline{hom}\left[ \frak{U},\frak{V}\right] \boxtimes \underline{ hom}\left[ \frak{W},\frak{U}\right] \right) . \] \item Compose it with the functorial epimorphism \[ \mathsf{U}\left( \underline{hom}\left[ \frak{U},\frak{V}\right] \boxtimes \underline{hom}\left[ \frak{W},\frak{U}\right] \right) \twoheadrightarrow \mathsf{U}\underline{hom}\left[ \frak{U},\frak{V}\right] \circ \mathsf{U} \underline{hom}\left[ \frak{W},\frak{U}\right] . \] \item Apply the forgetful functor $\mathsf{F}:\left( \mathbf{A}_{1},\mathbf{ A}\right) \mapsto \mathbf{A}$ to this composition.
This gives us an algebra homomorphism \[ \mathsf{FU}\underline{hom}\left[ \frak{W},\frak{V}\right] \rightarrow \mathsf{F}\left( \mathsf{U}\underline{hom}\left[ \frak{U},\frak{V}\right] \circ \mathsf{U}\underline{hom}\left[ \frak{W},\frak{U}\right] \right) . \] \item Finally, compose the latter with the functorial inclusion \[ \mathsf{F}\left( \mathsf{U}\underline{hom}\left[ \frak{U},\frak{V}\right] \circ \mathsf{U}\underline{hom}\left[ \frak{W},\frak{U}\right] \right) \hookrightarrow \mathsf{FU}\underline{hom}\left[ \frak{U},\frak{V}\right] \otimes \mathsf{FU}\underline{hom}\left[ \frak{W},\frak{U}\right] . \] \end{enumerate} The resulting maps are precisely the arrows $A\left( \Bbb{R}:\Bbb{S}\right) \rightarrow A\left( \Bbb{R}:\Bbb{T}\right) \otimes A\left( \Bbb{T}:\Bbb{S} \right) $ defined in \cite{mm}. For the counits: \begin{enumerate} \item Apply $\mathsf{FU}$ to the counit arrow $\varepsilon _{\frak{V}}: \underline{end}\left[ \frak{V}\right] \rightarrow \frak{K}$ given in $\left( \ref{coun}\right) $, to obtain the arrow $A\left( \Bbb{R}\right) \rightarrow \Bbbk ^{\otimes }$. \item Compose it with the surjection $\Bbbk ^{\otimes }\twoheadrightarrow \Bbbk $. \end{enumerate} The algebra homomorphism $A\left( \Bbb{R}\right) \rightarrow \Bbbk $ we obtain, together with the comultiplication $A\left( \Bbb{R}\right) \rightarrow A\left( \Bbb{R}\right) \otimes A\left( \Bbb{R}\right) $, gives $A\left( \Bbb{R }\right) $ a bialgebra structure. In fact, the diagrams $\left( \ref{evco} \right) $ and $\left( \ref{idco}\right) $, properly combined with $\left( \ref{fa}\right) $, $\left( \ref{fu}\right) $ and $\left( \ref{ua}\right) $, lead us to the associativity and unit constraints for the arrows \[ A\left( \Bbb{R}:\Bbb{S}\right) \rightarrow A\left( \Bbb{R}:\Bbb{T}\right) \otimes A\left( \Bbb{T}:\Bbb{S}\right) ,\;\;A\left( \Bbb{R}\right) \rightarrow \Bbbk .
\] Summing up, we have constructed square and rectangular quantum matrices as internal coHom objects in the rigid category of equipped quantum spaces, giving a generalization of the \emph{FRT} construction in the setting of Manin quantum groups. \subsection{The inclusions $\mathsf{U}\protect\underline{hom}\left[ \frak{W}, \frak{V}\right] ^{op}\hookrightarrow \protect\underline{hom}\left[ \mathsf{U} \frak{W},\mathsf{U}\frak{V}\right] ^{op}$} Let us show that there exist epimorphisms $\underline{hom}\left[ \mathsf{U}\frak{W}, \mathsf{U}\frak{V}\right] \twoheadrightarrow A\left( \Bbb{R}:\Bbb{S}\right) $, where $\underline{hom}\left[ \mathsf{U}\frak{W},\mathsf{U}\frak{V}\right] $ is the internal coHom object in $\mathrm{QA}$ related to the quantum spaces $ \mathsf{U}\frak{W}$ and $\mathsf{U}\frak{V}$. As we have claimed in the introduction, this is an indication that the quadratic algebras $A\left( \Bbb{R}: \Bbb{S}\right) $ represent homomorphisms between structurally richer spaces, w.r.t. the coHom objects in $\mathrm{QA}$. Consider the coevaluation map $\delta _{\frak{V},\frak{W}}:\frak{V}\rightarrow \underline{hom}\left[ \frak{W},\frak{V}\right] \boxtimes \frak{W}$ in $ \mathrm{EQA}$. In the previously given bases $\left\{ v_{i}\right\} $ and $ \left\{ w_{i}\right\} $ this map is defined by the assignment $v_{i}\mapsto t_{i}^{j}\otimes w_{j}$, putting again $t_{i}^{j}=w^{j}\otimes v_{i}$.
It is mapped by $\mathsf{U}$ to an arrow $\mathsf{U}\frak{V}\rightarrow \mathsf{ U}\left( \underline{hom}\left[ \frak{W},\frak{V}\right] \boxtimes \frak{W} \right) $ which, composed with the functorial epimorphism \[ \mathsf{U}\left( \underline{hom}\left[ \frak{W},\frak{V}\right] \boxtimes \frak{W}\right) \twoheadrightarrow \mathsf{U}\underline{hom}\left[ \frak{W}, \frak{V}\right] \circ \mathsf{U}\frak{W}, \] defines in $\mathrm{QA}$ another arrow $\varphi :\mathsf{U}\frak{V} \rightarrow \mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] \circ \mathsf{U}\frak{W}$, also defined by $v_{i}\mapsto t_{i}^{j}\otimes w_{j}$. From the universality of internal coHom objects in $\mathrm{QA}$ (see Equation $ \left( \ref{diaun}\right) $), there exists a unique arrow \[ \alpha :\underline{hom}\left[ \mathsf{U}\frak{W},\mathsf{U}\frak{V}\right] \rightarrow \mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] \] such that $\varphi =\left( \alpha \circ I\right) \,\delta _{\mathsf{U}\frak{V },\mathsf{U}\frak{W}}$, where $\delta _{\mathsf{U}\frak{V},\mathsf{U}\frak{W} }$ is the coevaluation associated with $\underline{hom}\left[ \mathsf{U}\frak{W}, \mathsf{U}\frak{V}\right] \in \mathrm{QA}$. Recalling that (since $\mathsf{U} \frak{W}$ and $\mathsf{U}\frak{V}$ are quadratic algebras generated by $ \mathbf{W}$ and $\mathbf{V}$, resp.) $\underline{hom}\left[ \mathsf{U}\frak{W },\mathsf{U}\frak{V}\right] $ is generated by $\mathbf{W}^{\ast }\otimes \mathbf{V}$ and $\delta _{\mathsf{U}\frak{V},\mathsf{U}\frak{W}}\left( v_{i}\right) =t_{i}^{j}\otimes w_{j}$, we see that $\alpha $ is the identity on generators. Consequently, since $\mathsf{U}\underline{hom}\left[ \frak{W}, \frak{V}\right] $ is also generated by $\mathbf{W}^{\ast }\otimes \mathbf{V}$, $\alpha $ is an algebra epimorphism.
Hence, we have shown: for every couple $\frak{W},\frak{V}$ of equipped quantum spaces, the quantum space $\underline{hom}\left[ \mathsf{U}\frak{W}, \mathsf{U}\frak{V}\right] ^{op}$ `contains' $\mathsf{U}\underline{hom}\left[ \frak{W},\frak{V}\right] ^{op}$ as a subspace. \section{The equipped conic quantum spaces} Given a finite dimensional $\Bbbk $-vector space $\mathbf{V}\in \mathrm{Vct}_{f}$, consider the degree zero homogeneous linear endomorphisms of $\mathbf{V}^{\otimes }$, i.e. \[ \Bbb{R}\in End_{\mathrm{Vct}}\left[ \mathbf{V}^{\otimes }\right] \;\;such\;that\;\;\Bbb{R}\left( \mathbf{V}^{\otimes n}\right) \subset \mathbf{V}^{\otimes n}. \] Of course, each map $\Bbb{R}$ is defined by a family $\left\{ \Bbb{R} _{n}\right\} _{n\in \Bbb{N}_{0}}$ of linear maps $\Bbb{R}_{n}:\mathbf{V} ^{\otimes n}\rightarrow \mathbf{V}^{\otimes n}$. In terms of these endomorphisms, all the above constructions can be repeated word for word in the category $\mathrm{CA}$. That is to say, we can define \emph{equipped conic quantum spaces }as pairs $\left( \mathcal{A},\Bbb{R}\right) $, $\mathcal{A} \in \mathrm{CA}$, such that $\mathcal{A}\Vdash \Bbb{R}$, i.e. $\ker \left[ \mathbf{A}_{1}^{\otimes }\twoheadrightarrow \mathbf{A}\right] =I\left[ \func{ Im}\Bbb{R}\right] $. We just have to change the defining condition for morphisms $\left( \mathcal{A},\Bbb{R}\right) \rightarrow \left( \mathcal{B}, \Bbb{S}\right) $ to $\alpha _{1}^{\otimes }\Bbb{\,R}=\Bbb{S\,}\alpha _{1}^{\otimes }$, or $\alpha _{1}^{\otimes n}\Bbb{\,R}_{n}=\Bbb{S}_{n}\Bbb{\, }\alpha _{1}^{\otimes n}$ for all $n\in \Bbb{N}_{0}$. Let us call $\mathrm{ ECA}$ the related category. It can be shown that the category of pairs $\frak{V} =\left( \mathbf{V},\Bbb{R}\right) $ such that $\func{Im}\Bbb{R}_{1}=\left\{ 0\right\} $, with morphisms given by linear maps $l$ such that $l^{\otimes } \Bbb{\,R}=\Bbb{S\,}l^{\otimes }$, is equivalent to $\mathrm{ECA}$. The functors $\boxtimes $ and $\dagger $ are defined by Eqs.
$\left( \ref{A} \right) $, $\left( \ref{B}\right) $ and $\left( \ref{C}\right) $. $\mathrm{ EQA}$ can be seen as a full subcategory of $\mathrm{ECA}$ by regarding an endomorphism $\Bbb{R}:\mathbf{V}^{\otimes 2}\rightarrow \mathbf{V}^{\otimes 2}$ as the map $\Bbb{R}_{2}$ of a homogeneous endomorphism $\Bbb{R}\in End_{ \mathrm{Vct}}\left[ \mathbf{V}^{\otimes }\right] $ such that $\Bbb{R}_{n}$ is the null map if $n\neq 2$. This enables us to define the full subcategory of equipped $m$-th quantum spaces $\mathrm{ECA}^{m}$ in terms of pairs $ \left( \mathbf{V},\Bbb{R}\right) $ with $\Bbb{R}_{n}$ the null map for all $ n\neq m$. Again, the function $\left( \mathbf{V},\Bbb{R}\right) \mapsto \left( \mathbf{V} ,\left. \mathbf{V}^{\otimes }\right/ I\left[ \func{Im}\Bbb{R}\right] \right) $ gives rise to a surjective embedding $\mathsf{U}$, now into $\mathrm{CA}$, such that the functorial epimorphisms $\mathsf{U}\left( \frak{V}\boxtimes \frak{W}\right) \twoheadrightarrow \mathsf{U}\frak{V}\circ \mathsf{U}\frak{W} $ are still valid. Now, let us prove \textbf{Theor. 4} in this more general setting. \begin{theorem} The category of equipped conic quantum spaces is \textbf{rigid }w.r.t. the monoidal structure $\boxtimes $, and has $\dagger $ as duality functor. \end{theorem} \textbf{Proof:}\ Consider an object $\frak{V}=\left( \mathbf{V},\Bbb{R} \right) $. We define the evaluation and the coevaluation morphisms, \[ ev_{\frak{V}}:\frak{V}^{\dagger }\boxtimes \frak{V}\rightarrow \frak{K} \;\;\;\;and\;\;\;\;coev_{\frak{V}}:\frak{K}\rightarrow \frak{V}\boxtimes \frak{V}^{\dagger }, \] as the usual pairing $v\otimes v^{\prime }\mapsto \left\langle v,v^{\prime }\right\rangle \,e$ and the coevaluation of $\mathbf{V}$ and $\mathbf{V}^{\ast }$, respectively, where $e$ is the generator of $\Bbbk $. We must show that they are effectively arrows in $\mathrm{ECA}$.
Note that the tensor product map $ev_{\frak{V}}^{\otimes }$ defines the algebra homomorphism $\mathbf{V}^{\ast }\circ \mathbf{V}\rightarrow \Bbbk ^{\otimes }$, \[ u\otimes v\in \mathbf{V}^{\ast \otimes n}\otimes \mathbf{V}^{\otimes n}\mapsto \left\langle u,v\right\rangle \;e^{n}\in \Bbbk ^{\otimes n}, \] where we use $\varphi _{\mathbf{V}^{\ast },\mathbf{V}}$ to identify the algebras $\mathbf{V}^{\ast }\circ \mathbf{V}$ and $\left[ \mathbf{V}^{\ast }\otimes \mathbf{V}\right] ^{\otimes }$. By direct calculations (and from the very definition of $\Bbb{R}^{\dagger }$), one sees that \[ ev_{\frak{V}}^{\otimes }\,\,\Bbb{R}^{\dagger }\boxtimes \Bbb{R}=0=\Bbb{O} \,ev_{\frak{V}}^{\otimes }. \] To show the analogous equation for the coevaluation map, let us first introduce some notation. Let $\left\{ v_{i}\right\} $ be a basis of $\mathbf{ V}$. Construct for each $n\in \Bbb{N}$ a basis $\left\{ v_{R}\right\} $ of $ \mathbf{V}^{\otimes n}$, where $R=\left( r_{1},...,r_{n}\right) $ is a multi-index with $1\leq r_{k}\leq \dim \mathbf{V}$, $\forall k=1,...,n$, in such a way that $v_{R}=v_{r_{1}}\otimes ...\otimes v_{r_{n}}.$ Consider also the basis $\left\{ v^{R}\right\} $ of $\mathbf{V}^{\ast \otimes n}$ dual to $ \left\{ v_{R}\right\} $ w.r.t. the usual pairing $\left\langle \cdot ,\cdot \right\rangle _{n}:\mathbf{V}^{\ast \otimes n}\otimes \mathbf{V}^{\otimes n}\rightarrow \Bbbk $. In these terms, the algebra homomorphism \[ e^{n}\in \Bbbk ^{\otimes }\mapsto v_{R}\otimes v^{R}\in \mathbf{V}\circ \mathbf{V}^{\ast } \] (sum over repeated (multi)indices is understood) coincides with the map $ coev_{\frak{V}}^{\otimes }$. To see that the equation \[ \Bbb{R}\boxtimes \Bbb{R}^{\dagger }\,coev_{\frak{V}}^{\otimes }=coev_{\frak{V }}^{\otimes }\,\Bbb{O} \] holds, we just have to prove that $\Bbb{R}\otimes I-I\otimes \Bbb{R}^{\ast }$ evaluated on $v_{R}\otimes v^{R}$ is equal to zero, i.e.
\[ \left[ \Bbb{R}\left( v_{R}\right) \otimes v^{R}-v_{S}\otimes \Bbb{R}^{\ast }\left( v^{S}\right) \right] =0. \] Writing $\Bbb{R}\left( v_{R}\right) =\Bbb{R}_{R}^{S}\;v_{S}$, we have for the transpose $\Bbb{R}^{\ast }\left( v^{S}\right) =v^{R}\;\Bbb{R}_{R}^{S}$, and consequently the left-hand side of the equation above vanishes identically. This proves the theorem.\ \ \ $\blacksquare $ \subsubsection*{Acknowledgments} The author thanks CNEA and Fundaci\'{o}n Antorchas, Argentina, for financial support.
108,951
Information under section 4 (1) b of RTI Act 2005 Department of Tourism & Civil Aviation, Himachal Pradesh Introduction The Department of Tourism & Civil Aviation, HP, is the nodal agency that plays a pro-active role in the promotion of tourism in the State. This is done through a wide range of literature and publicity material, participation in national and international fairs / meet, by creating / upgrading infrastructure and transport amenities in the tourist places / destinations and by creating new tourist products in the State. This is also being done through public private participation. The Department also plays regulatory role under the H.P. Tourism Development and Registration Act, 2002. Organisation, Functions & Duties The Department of Tourism is a Government department and the following functions / duties have been allocated by the Government as per the business of the Government of H.P. allocation rules-1971:- - Development and Promotion of Tourism. - State and District Tourist Advisory Committee. - Tourist Services – Supply of information, Development / Reservation of accommodation and development of civic amenities. - Hotel legislation. - Construction / Maintenance of the tourist accommodation. - Promotion of sports such as winter sports, golf, adventure sports etc. - Registration of Tourism units. - Matters relating to shooting of films in Himachal Pradesh. - Establishment, budget and accounts matters. - Civil Aviation. - Development of lakes in Himachal Pradesh. Powers and Duties of each officers/officials: A).Commissioner / Director Tourism: – Commissioner / Director Tourism is the head of the Department. Being Head of the Department, exercises general superintendence and control over all the officers / officials of the Department. He is responsible to carry out the functions of the Department. For this purpose he exercises powers as conferred / vested by the Act and various Rules including the services & Financial Rules. 
He also exercises functional powers delegated to him by the Govt. from time to time.

B) Addl. / Joint Director Administration: The following jobs are assigned to him:
- Establishment.
- Audit and PAC / CAG matters.
- Stipend, grant-in-aid to HPTDC.
- Stores and purchases.
- Civil Aviation matters including expansion of airstrips.
- Tourist complaints.
- Vehicles.
- Inspection of offices of District Tourism Development Offices.
- Implementation of H.P. Tourism Development & Registration Act / Rules.
- Home Stay Scheme / Guidelines.
- Budget and Planning; Budget Assurances; Vidhan Sabha matters; Training.
- Tourist Information Centres.

C) Additional Director: The following jobs are assigned:
- Publicity, advertisements, fairs and festivals, grant-in-aid, exhibitions, souvenirs.
- Works (State and Centrally Sponsored Schemes).
- Hospitality.
- Incentives, subsidy and hotel projects.
- Adventure tourism.
- Public-Private Partnership projects.
- Food Craft Institute, Dharamshala and Institute of Hotel Management, Kufri and Hamirpur.
- Project approval cases.
- Issuing of Essentiality Certificate.
- Statistics.

D) Publicity Officer: Entire publicity work regarding preparation of promotional publicity material, i.e. printing of brochures, folders, posters, post cards, calendars, Monal and promotional films etc., and releasing of advertisements in print media (newspapers / magazines), in outdoor media and through electronic media, as well as participation in fairs & festivals, exhibitions etc. In addition, RTI matters as Public Information Officer and Nodal Officer for the website.

E) Dy. Director / District Tourism Development Officers (DTDOs): The Department has eight posts of District Tourism Development Officer / Dy. Director, Tourism & Civil Aviation at Shimla, Solan, Mandi, Kullu, Dharamshala (Kangra), Kinnaur at Reckong Peo, Lahaul at Keylong and Spiti at Kaza.
The Department has six posts of Assistant Tourism Development Officer (ATDO), i.e. one each at the Directorate of Tourism at Shimla and at Chamba, Dharamshala (Kangra), Solan, Shimla and Kullu. Broad functions and responsibilities of the Dy. Directors / DTDOs / ATDOs are as follows:
a. Regulatory Works: To implement the H.P. Tourism Development and Registration Act, 2002, i.e. registration of hotels / guest houses / restaurants, travel agencies, tourist guides, photographers and camping sites, and inspection and compounding of the offences under the said Act / Rules.
b. Development Works: To undertake and implement the developmental works of State / Centrally Sponsored Schemes.
c. Trainings: To impart adventure sports / other trainings such as water sports / rafting, trekking, Human Resource Development, and training to unemployed youths as tourist guides, home stay owners, dhaba owners / workers etc.
d. Participation in Fairs and Festivals: The Department has been regularly participating in fairs and festivals within and outside the State for wide publicity. The Dy. Directors / DTDOs organize cultural programmes / other functions to attract more and more tourists to the State.
e) Superintendent (Gr-I and Gr-II): One post of Superintendent Gr-I (being filled up) at the Directorate of Tourism at Shimla and five posts of Superintendent Gr-II, i.e. one each in the Directorate and in the offices of DTDO Chamba, Shimla, Kullu, and Kangra at Dharamshala.
f) Private Secretary: One in the Directorate of Tourism at Shimla.
g) Inspector Hotels: The Department has 10 posts of Hotel Inspectors, out of which four posts are filled up and six are vacant. They carry out the inspection of tourism units in accordance with the powers conferred by the H.P. Tourism Development and Registration Act, 2002, and process the cases for registration of tourism units, revision of rates of units, challaning of units running without registration and filing cases before the Hon'ble Court.
h) Tourist Information Officers (TIOs): The Department has fourteen posts of TIOs, of which eleven posts are filled up (including four TIOs on outsource basis) and three posts are vacant. The Department has established Tourist Information Centres within and outside the State to facilitate the tourists visiting the State. In all there are 10 TICs, viz. Victory Tunnel Shimla, Railway Station Shimla, Bypass Shimla, Kullu, Manali, Dharamshala, Dalhousie, Reckong Peo, Solan and Chennai.
i) Sr. Assistants: The Department has eleven posts of Senior Assistants. Seven Sr. Assistants are at the Head Quarter; they have been assigned various jobs such as works, publicity, planning & development, budget, Civil Aviation, audit, PAC / CAG matters, Vidhan Sabha matters, establishment and store etc. Four Sr. Assistants are deployed in the offices of the Dy. Directors / DTDOs to deal with the subjects pertaining to those offices.
j) Asst. Research Officer: One post at headquarters: to collect and compile data relating to tourist arrivals and employment generation, including preparation of the Hotel Directory.
k) Jr. Assistant cum Clerks: The Department has twenty-nine posts of Jr. Assistants & Clerks. Eleven Jr. Assistants & Clerks are working in the head quarter, including contract-basis clerks; nine are in the offices of the Dy. Directors / DTDOs, and nine posts of clerks are vacant.
l) Jr. Scale Stenographer & Steno Typist: One post of Jr. Scale Stenographer at the Directorate of Tourism at Shimla and one post of Steno Typist at the District Tourism Office Kangra at Dharamshala.
m) Clerk cum Data Operator: One at the Directorate of Tourism at Shimla and 6 data operators in the field offices of the Department.
n) Drivers: The Department has seven posts of Drivers. Three posts of drivers are at the head quarter and four posts of drivers are in the field offices of the Department.
o) Peons: The Department has twenty-eight posts of Peons / Chowkidars / Sweepers.
Nine posts are in the Head office, including three vacant, and twelve posts are in the field offices of the Department and Tourist Information Centres.

Procedure followed in the decision making process, including channels of supervision & accountability: The Head of the Department, i.e. the Commissioner / Director, is the decision making authority as per the powers delegated by the Government. The Dealing Assistants / Jr. Assistants / Clerks deal with the subjects assigned to them and put them up to the Branch officers as per the allocation of subjects given above. Some cases, after scrutiny, are disposed of by the Addl. Director / Joint Director at their own level. Cases on which the decision of the Commissioner / Director is required are presented to him.

Norms set by it for the discharge of its functions: The functions are discharged as per the norms set by the Government in accordance with the office manual and instructions issued from time to time by the Government. The Department has assigned jobs to all the Dealing Assistants / Clerks as mentioned above.

Rules, regulations, instructions, manuals & records held by it or under its control:
- H.P. Tourism Development & Registration Act, 2002.
- H.P. Tourism Development & Registration of Tourism Trade Rules, 2012.
- H.P. Aero Sports Rules, 2004.
- H.P. River Rafting Rules, 2005.
- H.P. Miscellaneous Adventure Activities Rules, 2017.
- Rules for grant of incentives to Tourism Industry for SC / ST categories in Himachal Pradesh, 2000.
- Rules for grant of stipend to private candidates for training in hotel management & catering, 1987.
- Recruitment & Promotion Rules of different categories of the Department.
- Himachal Pradesh Home Stay Scheme, 2008.

A statement of the categories of the documents that are held by it or under its control: The Department has under its control documents which are un-classified / un-categorized, and also documents mentioned u/s 8 of the Right to Information Act.
Particulars of any arrangement that exists for consultation with, or representation by, the members of the public in relation to the formulation of its policy or administration thereof:

Tourism Development Board: The State Govt. has constituted a Tourism Development Board at the State level and Tourism Development Councils for specific areas, i.e. Tourism Councils for Kufri-Naldehra, Manali, Dalhousie-Khajjiar, and Dharamshala-Mcleodganj. There is provision for nomination of non-official members. In accordance with the H.P. Tourism Development and Registration Act, 2002, a Tourism Board under the Chairmanship of the Hon'ble Chief Minister has been constituted vide notification No. Tsm-A(3)-1/2002- dated 24.11.2003. The Board shall consist of the following members: non-official members; Director (Tourism), H.P., Ex-Officio Member Secretary. Besides, a Tourism Development Council for Manali, Distt. Kullu, H.P. has also been constituted vide notification No. Tsm.-F(3)-2/2003-I dated 7.7.2004. The Council shall consist of the following members: non-official members. Apart from the functions provided in the Act, the Council shall also look after the collection of entry fee from vehicles bearing registration numbers other than those of the State of H.P., issued vide notification of even number dated 29.5.2004.

Note: The minutes of the Board / Councils are accessible to the public.

Budget for the Financial Year 2021-22 (Rs. in Lakhs)

Budget allocated to each of its agencies, indicating the particulars of all plans, proposed expenditures and reports on disbursements made: The budget to the field agencies, i.e. District Tourism Development Offices, has been allocated as per requirement.

Particulars of recipients of concessions, permits or authorizations granted by it: The Department has no such scheme to provide concessions, authorizations etc. However, the Department issues permission for shooting of films in Himachal Pradesh.
Details in respect of the information available to or held by it, reduced in an electronic form: The Department of Tourism has its own website, and information about the Department has been published on it for the facilitation of tourists as well as the general public through electronic media. Web Site:

Particulars of facilities available to citizens for obtaining information, including the working of a library or reading room, if maintained for public use: At present the Department has not set up any library. However, the Department has appointed the Publicity Officer as Nodal Officer to provide information to visiting tourists and the general public. In addition, the Department also has DTDO offices / Information Centres within and outside the State from where departmental information can be obtained.

Such other information as may be prescribed: Other information relating to this Department may be provided as and when required by the public.
- Ensure that the document is in fact a Crystal Reports document (for example, not an MDB file renamed with an RPT extension)
- Open a backup copy of the document
- If using a CITRIX environment, reset the connection by rebooting the workstation connected to the CITRIX server
- If the report may have been overwritten by the export:
  - Rename the file and change the .rpt extension to .mdb
  - Open the file in Microsoft Access
  - If the file opens successfully, then it was overwritten by the export, and the report will need to be recreated or restored from a backup copy
- Open the Task Manager, go to the Processes tab, look for the crw32.exe process, and end all instances
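The rename-and-open test above can be scripted. The sketch below is our own, not from the original article; it assumes that Access/Jet database files carry the signature string "Standard Jet DB" (or "Standard ACE DB" for newer .accdb files) within their first bytes — verify this against your own .mdb files before relying on it.

```python
# Heuristic check: was a .rpt file overwritten by an Access database
# export? Assumption (ours): Jet .mdb files contain "Standard Jet DB"
# near the start of the file, and ACE .accdb files "Standard ACE DB".

def looks_like_access_db(path):
    """Return True if the file header resembles an Access database."""
    with open(path, "rb") as f:
        header = f.read(64)  # the signature sits near the very start
    return b"Standard Jet DB" in header or b"Standard ACE DB" in header

if __name__ == "__main__":
    import sys
    for rpt in sys.argv[1:]:
        verdict = "possibly overwritten by export" if looks_like_access_db(rpt) else "header looks normal"
        print(f"{rpt}: {verdict}")
```

If the check reports a database signature, the report itself is gone and must be recreated or restored from backup, exactly as the steps above conclude.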
\begin{document} \title{\bf Holomorphic line Bundles over a Tower of Coverings} \author{ \ \ Yuan Yuan\footnote{ Supported in part by National Science Foundation grant DMS-1412384} \ \ and \ \ Junyan Zhu} \vspace{3cm} \maketitle \begin{abstract} {\small We study a tower of normal coverings over a compact K\"ahler manifold with holomorphic line bundles. When the line bundle is sufficiently positive, we obtain an effective estimate, which implies the Bergman stability. As a consequence, we deduce the equidistribution for zero currents of random holomorphic sections. Furthermore, we obtain a variance estimate for those random zero currents, which yields the almost sure convergence under some geometric condition. } \end{abstract} \maketitle \section{Introduction} Let $(M, g)$ be a Riemannian manifold with a complete Riemannian metric $g$. Suppose that its fundamental group $\Ga=\pi_1(M)$ admits a tower of normal subgroups: $\Ga=\Ga_0\supsetneq\Ga_1\supsetneq\cdots\supsetneq\Ga_j\supsetneq\cdots$ satisfying $2\leq[\Ga_j:\Ga_{j+1}]<\infty$ for each $j\geq0$ and $\bigcap_{j=0}^\infty\Ga_j=\{1\}$. Let $\tilde{M}$ denote the universal covering of $M$. Then $\Ga$ acts on $\tilde{M}$ as a group of deck transformations, which is free and properly discontinuous. Denote $\tilde{M}/\Ga_j$ by $M_j$ and we thus obtain a tower of normal coverings: $\tilde{M}\stackrel{p_j}\longrightarrow M_j\stackrel{q_j}\longrightarrow M_0=M$, where $p_j$ and $q_j$ denote the covering maps satisfying $q_j\circ p_j=p_0$ for all $j\geq0$. Furthermore, for each $j\geq0$, the group action $\Ga\curvearrowright\tilde{M}$ induces $\Ga/\Ga_j\curvearrowright M_j$. The differential structure and the Riemannian metric on each $M_j$ and $\tilde{M}$ are determined by those on $M$ via the covering maps $q_j$ and $p_0$. It is a classical result that every Riemannian manifold whose fundamental group is isomorphic to a finitely generated subgroup of $SL(n, \mathbb{C})$ admits a tower of coverings (cf. \cite{Bo}). 
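As a minimal illustration of this structure (our example, not from the paper — the circle is Riemannian but not of the compact K\"ahler type studied below; it only exhibits the group-theoretic tower):

```latex
% Illustrative tower for M = S^1: Gamma = Z, Gamma_j = 2^j Z, so each
% index [Gamma_j : Gamma_{j+1}] = 2 and the intersection is trivial.
\[
  \Gamma=\mathbb{Z}\;\supsetneq\;2\mathbb{Z}\;\supsetneq\;4\mathbb{Z}
  \;\supsetneq\;\cdots,
  \qquad \bigcap_{j\geq0}2^j\mathbb{Z}=\{0\},
\]
\[
  \tilde{M}=\mathbb{R}\;\xrightarrow{\;p_j\;}\;M_j=\mathbb{R}/2^j\mathbb{Z}
  \;\xrightarrow{\;q_j\;}\;M=\mathbb{R}/\mathbb{Z},
\]
% with q_j a 2^j-sheeted covering, i.e. the covering index is 2^j.
```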
There have been many important works studying the asymptotic behaviors of various topological, geometrical and spectral properties for the tower of coverings of compact Riemannian manifolds (cf. \cite{CG} \cite{DW} \cite{Don1} \cite{Kaz} \cite{Ye1}, etc.). \medskip In this paper, we are interested in the random complex geometry over a tower of coverings. Our motivations come from a series of works by Shiffman, Zelditch and their coauthors (cf. \cite{BSZ} \cite{SZ1} \cite{SZ2} \cite{SZ3} \cite{SZ4}, etc.), as well as the recent paper by Lu and Zelditch \cite{LZ}. Let $(M,\om)$ be a compact \kahler\ manifold of complex dimension $n$ with volume form $dV=\frac{\om^n}{n!}$. For simplicity, we still use $\om$ and $dV$ to denote their counterparts on each level of the tower of coverings. Since the covering indices $I_j:=[\Ga:\Ga_j]$ are finite, each $q_j:M_j\to M$ is a finite-sheeted covering. Hence the $M_j$'s are all compact. \medskip Let $E$ be a holomorphic line bundle over $M$ with a smooth Hermitian metric $h_E$. By abuse of notation, we denote the pullback line bundles $(q_j^*E, q_j^*h_E)$ and $(p_0^*E, p_0^*h_E)$ still by $(E,h_E)$. Then we call $\{(M_j,E)\}$ a tower of normal coverings with line bundles. Let $\Pi_{j,E}$ be the Bergman kernel of the line bundle $(E,h_E)\to M_j$ and $\tilde{\Pi}_E$ be the $L^2$-Bergman kernel of $(E,h_E)\to\tilde{M}$. The base locus of $E\to M_j$ (respectively the $L^2$-base locus of $E\to\tilde{M}$) is denoted by $B_{j,E}$ (respectively $\tilde{B}_E$). The Bergman metric $\Om_{j,E}$ (respectively $\tilde{\Om}_E$) is a smooth positive $(1,1)$-form defined over $M_j\setminus B_{j,E}$ (respectively $\tilde{M}\setminus\tilde{B}_E$). As the Bergman kernel $\Pi_{j,E}$ (respectively $\tilde{\Pi}_E$) is invariant under the group action $\Ga/\Ga_j\curvearrowright M_j$ (respectively $\Ga\curvearrowright\tilde{M}$) (cf. 
\S 2) while $M_j/(\Ga/\Ga_j)=M$ (respectively $\tilde{M}/\Ga=M$), $\Pi_{j,E}$ (respectively $\tilde{\Pi}_E$) descends to $\underline{\Pi}_{j,E}$ (respectively $\underline{\tilde{\Pi}}_E$) over $M\times M$. Similarly we denote the descendants of base loci and Bergman metrics by $\underline{B}_{j,E}$, $\underline{\tilde{B}}_E$, $\underline{\Om}_{j,E}$ and $\underline{\tilde{\Om}}_E$. \begin{definition} A tower of normal coverings with line bundles $\{(M_j,E)\}$ is Bergman stable if the pull-back Bergman kernels $\{\Pi_{j,E}(p_j(\cdot),p_j(\cdot))\}$ converge locally uniformly to $\tilde{\Pi}_E(\cdot,\cdot)$ over $\tilde{M}\times\tilde{M}$. \end{definition} In particular, if $E=K_M$ is the canonical line bundle, the Bergman stability has been studied by many authors (cf. \cite{R} \cite{To} \cite{O} \cite{CF} \cite{Ye3}, etc.). If one assumes the Bergman stability for $E$, by the standard argument in complex analysis, one can derive the higher order convergence for the Bergman metrics (cf. Proposition \ref{BKCinfty}). Furthermore, we are interested in the equidistribution of the simultaneous zeros of random sections in $H^0(M_j,E)$. \medskip Let $d_{j,E} = \dim_{\mathbb{C}} H^0(M_j,E)$ be the complex dimension of the space of holomorphic sections. For any $1\leq l\leq n$, we may consider the Grassmannian of $l$-dimensional complex linear subspaces of $H^0(M_j,E)$, denoted by $\gcal_lH^0(M_j,E)$. Endowing $\gcal_lH^0(M_j,E)$ with the normalized Haar measure $\mu^{(l)}_{j,E}$, we obtain a probability space $\left( \gcal_lH^0(M_j,E),\mu^{(l)}_{j,E} \right)$, of which the expectation is denoted by $\mathbb{E}^{(l)}_j$. Any $\scal^l_{j,E}\in\gcal_lH^0(M_j,E)$ can be written as $\scal^l_{j,E}=\text{Span}\{s_{j_1},\dots,s_{j_l}\}$, where $s_{j_1},\dots,s_{j_l}\in H^0(M_j,E)$ are linearly independent. 
Let $Z_{\scal^l_{j,E}}\in\dcal'^{l,l}(M_j)$ denote the current of integration over the common zero set of $s_{j_1},...,s_{j_l}$ (to be more specific, it is the current of integration over the regular points of the complex analytic set $\{z\in M_j:\ s_{j_1}(z)=\cdots=s_{j_l}(z)=0\}$), which is independent of the choice of the basis $\{s_{j_1},\dots,s_{j_l}\}$. Whenever $B_{j,E}=\emptyset$, by Bertini's theorem (cf. \cite{GH} pp.137), for a generic (thus almost sure in terms of the probability measure $\mu^{(l)}_{j,E}$) choice of $\scal^l_{j,E}=\text{Span}\{s_{j_1},\dots,s_{j_l}\}$, the zero sets $\{z\in M_j:\ s_{j_k}(z)=0\}$ are smooth and intersect transversely for $k=1, \cdots, l$. Hence $\{z\in M_j:\ s_{j_1}(z)=\cdots=s_{j_l}(z)=0\}$ is a smooth submanifold of $M_j$ with codimension $l$. Therefore, we may ignore multiplicities when considering expectations. In order to work on the same level, we study the normalized zero currents \begin{align}\label{NMZ} \underline{Z}_{\scal^l_{j,E}}:=I_j^{-1}{q_j}_*Z_{\scal^l_{j,E}}\in \dcal'^{l,l}(M). \end{align} The following is the equidistribution result for a general line bundle $E$. \begin{prop}\label{EZ} If the tower of normal coverings with line bundles $\{(M_j,E)\}$ is Bergman stable and $\tilde{B}_E=\emptyset$, then there exists $J\geq0$ such that, $$\mathbb{E}^{(l)}_j\underline{Z}_{\scal^l_{j,E}}=(\pi^{-1}\underline{\Om}_{j,E})^l$$ for any $j\geq J$ and $1\leq l\leq n$, as $(l,l)$-currents on $M$. Furthermore, it satisfies $$\lim_{j\to\infty}\mathbb{E}^{(l)}_j\underline{Z}_{\scal^l_{j,E}}=(\pi^{-1}\underline{\tilde{\Om}}_E)^l$$ in the sense of currents. \end{prop} \begin{comment} We also consider the unit sphere $SH^0(M_j,E)=\{s\in H^0(M_j,E):\ \norm{s}=1\}\simeq S^{2d_{j,E}-1}$ with the normalized Haar measure $\nu_{j,E}$. This probability space is the same as the space $\langle\gcal_1H^0(M_j,E),\mu^1_{j,E}\rangle$ that we discussed above. 
Any sequence of sections ${\bf{s}}_E=\{s_j\}_{j=0}^\infty$ with $s_j\in SH^0(M_j,E)$ for each $j\geq0$ can be identified as a random element in the probability space $\langle\Pi_{j=0}^\infty SH^0(M_j,E),\nu_E\rangle$, where $\nu_E$ is the infinite product measure induced by $\nu_{j,E}$'s. For all $j\geq0$, denote $$\lfloor\underline{Z}_{{\bf{s}}_E}\rfloor_j=\underline{Z}_{s_j}\in\dcal'^{1,1}(M).$$ Then the following almost sure convergence theorem follows from the argument in \cite{BL} and the Bergman stability. \begin{prop}\label{AS} With the same assumptions as in Proposition \ref{EZ}, we have \begin{align*} \lim_{j\to\infty}\lfloor\underline{Z}_{{\bf{s}}_E}\rfloor_j=\pi^{-1}\underline{\tilde{\Om}}_E \end{align*} in the sense of currents for $\nu_E$-almost all ${\bf{s}}_E\in\Pi_{j=0}^\infty SH^0(M_j,E)$. \end{prop} \end{comment} Next we focus on a positive holomorphic line bundle $(L,h)$ over $M$. Choose the curvature form $\om_h=\frac{\sqrt{-1}}{2}\Theta_h$ as the \kahler\ form of $M$ and $dV_h=\frac{\om_h^n}{n!}$. For any $N\geq1$, the tower of normal coverings with line bundles $\{(M_j,L^N)\}$ can be similarly defined. The main theorem of the paper is the following Bergman stability result for sufficiently positive line bundles over the tower of coverings. The main ingredient in our argument is the theorem on Poincar\'{e} series in \cite{LZ}, from which we can derive the effective estimates of the difference between the Bergman kernels on the universal covering and on each level. More precisely, the difference decays exponentially in terms of a geometric quantity $\tau_j$ (cf. equation (\ref{tau0})) on the tower of coverings. 
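The mechanism behind this decay can be sketched heuristically as follows (our paraphrase of the strategy, with constants and the precise definition of $\tau_j$ suppressed; the identity actually holds at the level of the Szeg\H{o} kernels on the associated circle bundles, recalled in \S2):

```latex
% Heuristic sketch (ours): the Poincare series identity bounds the
% difference by a sum over nontrivial deck transformations, and the
% Agmon-type off-diagonal estimate makes each term exponentially small:
\[
  \bigl|\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)\bigr|_{h^N}
  \;\leq\; \sum_{\ga_j\in\Ga_j\setminus\{1\}}
           \bigl|\tilde{\Pi}_{L^N}(\ga_j z,w)\bigr|_{h^N}
  \;\lesssim\; \sum_{\ga_j\in\Ga_j\setminus\{1\}}
           e^{-\be\sqrt{N}\,\mathrm{dist}(\ga_j z,w)},
\]
% and, for z, w in fixed compact sets, dist(gamma_j z, w) grows with j
% at a rate governed by tau_j, yielding the exponential decay.
```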
\begin{theorem}\label{BK} There exist $N_1=N_1(M,L,h)>0$ and $\si=\si(M,L,h,N)>0$ for $N\geq N_1$, such that for any compact subsets $K,K'\subset\tilde{M}$, there exists $C_{K,K'}>0$ satisfying \begin{align*} \abs{\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)}_{h^N}\leq C_{K,K'}e^{-\si\tau_j}, \end{align*} for all $z\in K,w\in K', N\geq N_1$ and $j$ large enough. As a consequence, the tower of normal coverings with line bundles $\{(M_j,L^N)\}$ is Bergman stable for $N\geq N_1$. \end{theorem} The following base point freeness theorem is an application of H\"ormander type $L^2$-estimates. \begin{theorem}\label{BL} There exists some $N_2=N_2(M,L,h)>0$ such that for all $N\geq N_2$, the line bundles $L^N\to\tilde{M}$ are $L^2$-base point free, i.e. $\tilde{B}_{L^N}=\emptyset$. \end{theorem} A direct consequence is the equidistribution result of the simultaneous zeros of random sections of the positive line bundle. \begin{cor}\label{EZ(L)} Let $N^*(M,L,h)=\max\{N_1,N_2\}$. For all $N\geq N^*$ and $1\leq l\leq n$, the expectation of the normalized zero current $\underline{Z}_{\scal^l_{j,L^N}}$ satisfies $$\lim_{j\to\infty}\mathbb{E}^{(l)}_j\underline{Z}_{\scal^l_{j,L^N}}=(\pi^{-1}\underline{\tilde{\Om}}_{L^N})^l$$ in the sense of currents. \end{cor} \begin{comment} As a consequence, we can derive The equidistribution and almost sure convergence of the zero currents of random sections over positive line bundles are direct consequence of Theorem \ref{BK} and Theorem \ref{BL}. We can also obtain the following effective estimate of the variance of the zero currents in terms of $\tau_j$. \end{comment} We also consider the unit sphere $SH^0(M_j,L^N)=\{s\in H^0(M_j,L^N):\ \norm{s}_{h^N}=1\}\simeq S^{2d_{j,L^N}-1}$ with the normalized Haar measure $\nu_{j,L^N}$. This probability space is the same as the space $\langle\gcal_1H^0(M_j,L^N),\mu^1_{j,L^N}\rangle$ discussed above. Then the variance of the zero currents can also be estimated in terms of $\tau_j$. 
\begin{theorem}\label{Variance} Let $N^*>0$ be as in Corollary \ref{EZ(L)}. Then for all $N\geq N^*$ and any smooth test form $\psi\in\dcal^{n-1,n-1}(M)$, the normalized zero current $\underline{Z}_{s_j}=I_j^{-1}{q_j}_*Z_{s_j}$ satisfies \begin{align*} \begin{split} Var\left((\underline{Z}_{s_j},\psi)\right)=&\int_{SH^0(M_j,L^N)}\abs{(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2 d\nu_{j,L^N}(s_j) \\ \lesssim&[\exp\{-c\tau_{\lfloor\frac{j}{2}\rfloor}\}+2^{-\frac{j}{2}}]\ \norm{\sqrt{-1}\pa\bar{\pa}\psi}_{L^1(M)}^2, \end{split} \end{align*} for $j$ large enough, where $c=c(M,L,h,N)>0$. In addition, \begin{align*} \lim_{j\to\infty}Var\left((\underline{Z}_{s_j},\psi)\right)=0. \end{align*} \end{theorem} We point out here that the constants $\si=\si(M,L,h,N)$ in Theorem \ref{BK} and $c=c(M,L,h,N)$ in Theorem \ref{Variance} can be made arbitrarily large by taking $N$ large enough. \medskip In \S2, we collect all the preliminaries and background. In \S3, we discuss the equidistribution for a general holomorphic line bundle $E$. \S4 is devoted to showing the Bergman stability for a positive holomorphic line bundle $L$. Finally, \S5 proves the variance estimates and almost sure convergence of the normalized zero currents. \subsection*{Acknowledgement} The authors would like to thank Professor Bernard Shiffman and Professor Steve Zelditch for their helpful discussions and Professor Xiaojun Huang for his constant support. The authors would also like to thank the referee for the penetrating comments. Part of the work was done when the first author was visiting Capital Normal University in China and the second author was visiting Syracuse University. They are grateful to both departments for the warm hospitality. \section{Preliminaries and Background} For convenience, we are going to omit the $0$ index in our following notations concerning the base manifold $M=M_0$. 
\subsection{Bergman Kernel, Base Locus and Bergman Metric} The Hermitian inner product of sections of the line bundle $(E,h_E)\to M$ is defined by $$\llangle s,s'\rrangle:=\int_{M}(s,s')_{h_E}\ dV.$$ If one chooses an orthonormal basis $\{S_k\}_{k=1}^{d_E}$ of $H^0(M,E)$, then for any $z,w\in M$, the Bergman kernel is given by $$\Pi_E(z,w):=\sum_{k=1}^{d_E}S_k(z)\otimes\overline{S_k(w)}.$$ It is straightforward to check that $\Pi_E(z,w)$ does not depend on the choice of the orthonormal basis and $\Pi_E\in H^0(M\times M,E\boxtimes\bar{E})$ is the integral kernel of the orthogonal projection $L^2(M,E)\to H^0(M,E)$ satisfying the reproducing property: $$s(w)=\int_{M}(s(z),\Pi_E(z,w))_{h_E}\ dV(z),\text{\quad for all }w\in M\text{ and }s\in H^0(M,E).$$ \medskip The base locus $B_E$ is the common zero set for all holomorphic sections: \begin{align}\label{Bergman0} B_E:=\{z\in M:\ s(z)=0\text{ for all }s\in H^0(M,E)\}=\{z\in M:\ \Pi_E(z,z)=0\}. \end{align} Suppose $U,U'\subset M$ are two open sets with local frames $e_E,e'_E$ defined on them, respectively. Then there exist holomorphic functions $\{f_k\}_{k=1}^{d_E}\subset\ocal(U)$ and $\{g_k\}_{k=1}^{d_E}\subset\ocal(U')$ such that $S_k=f_ke_E$ over $U$ and $S_k=g_ke'_E$ over $U'$ for $1\leq k\leq d_E$. Hence, \begin{align}\label{Bergman1} \Pi_E(z,w)=\Phi_E(z,w)e_E(z)\otimes\overline{e'_E(w)}\text{\quad for }z\in U, w\in U', \end{align} where $$\Phi_E(z,w):=\sum_{k=1}^{d_E} f_k(z)\overline{g_k(w)}$$ is holomorphic in $z\in U$ and anti-holomorphic in $w\in U'$. Moreover, we restrict $\Pi_E$ to the diagonal and denote $$\phi_E(z)=\Phi_E(z,z)=\sum_{k=1}^{d_E}\abs{f_k(z)}^2\text{\quad for }z\in U.$$ Therefore $\phi_E\in\ccal^\infty(U,\R^+)$ and is nonvanishing on $U\setminus B_E$. The Bergman metric $\Om_E$ can be defined on $U\setminus B_E$ by \begin{align}\label{Bergman2} \Om_E:=\frac{\sqrt{-1}}{2}\pa\bar{\pa}\log{\phi_E}\geq0, \end{align} which is independent of the choice of the local frame $e_E$. 
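This frame independence admits a one-line verification (our check, spelled out for the reader's convenience): rescaling the frame by a nonvanishing holomorphic function leaves the $(1,1)$-form unchanged.

```latex
% Verification (ours): replace e_E by lambda^{-1} e_E with lambda a
% nonvanishing holomorphic function on U. Then S_k = f_k e_E =
% (lambda f_k)(lambda^{-1} e_E), so phi_E becomes |lambda|^2 phi_E, and
\[
  \frac{\sqrt{-1}}{2}\pa\bar{\pa}\log\bigl(\abs{\la}^2\phi_E\bigr)
  = \frac{\sqrt{-1}}{2}\pa\bar{\pa}\bigl(\log\la+\log\bar{\la}\bigr)
    + \frac{\sqrt{-1}}{2}\pa\bar{\pa}\log{\phi_E}
  = \frac{\sqrt{-1}}{2}\pa\bar{\pa}\log{\phi_E},
\]
% since log(lambda) is holomorphic and log(bar-lambda) anti-holomorphic,
% so both are annihilated by the mixed second derivative.
```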
So we may choose an open covering of $M$ and $\Om_E$ is thus defined on $M\setminus B_E$. \medskip The Bergman kernels $\Pi_{j,E}$ (respectively $L^2$-Bergman kernel $\tilde{\Pi}_E$), base loci $B_{j,E}$ (respectively $L^2$-base locus $\tilde{B}_E$) and Bergman metrics $\Om_{j,E}$ (respectively $\tilde{\Om}_E$) over $M_j$ (respectively $\tilde{M}$) can be defined in a similar way. Since the actions $\Ga/\Ga_j\curvearrowright M_j,\Ga\curvearrowright\tilde{M}$ are given by isometries of the manifolds that preserve the metrics $h_E$ of line bundles, they also preserve the Hermitian inner products of holomorphic sections. Therefore the Bergman kernels are invariant under these actions in the sense that \begin{align}\label{Bergman3} \Pi_{j,E}([\ga]_jz,[\ga]_jw)=\Pi_{j,E}(z,w),\text{\quad for all }z,w\in M_j\text{ and }[\ga]_j\in\Ga/\Ga_j, \end{align} and \begin{align}\label{Bergman4} \tilde{\Pi}_E(\ga z,\ga w)=\tilde{\Pi}_{E}(z,w),\text{\quad for all }z,w\in\tilde{M}\text{ and }\ga\in\Ga. \end{align} By (\ref{Bergman0}) and (\ref{Bergman2}), the base loci and Bergman metrics are both induced from the Bergman kernels and they inherit these invariance properties. Therefore, we have the descendants $\underline{\Pi}_{j,E}$, $\underline{B}_{j,E}$ and $\underline{\Om}_{j,E}$ on $M$ in the sense that $\underline{\Pi}_{j,E}(q_j(\cdot),q_j(\cdot))=\Pi_{j,E}(\cdot,\cdot)$, $q_j^{-1}(\underline{B}_{j,E})=B_{j,E}$ and $q_j^*\underline{\Om}_{j,E}=\Om_{j,E}$. On the other hand, for any $j\geq0$, as $q_j:M_j\to M$ is a proper local diffeomorphism, the direct image ${q_j}_*\Om_{j,E}$ satisfies \begin{align}\label{Bergman5} {q_j}_*\Om_{j,E}=I_j\underline{\Om}_{j,E}, \end{align} where $I_j = [\Gamma : \Gamma_j] < \infty$ is the covering index. \medskip The following theorem implies that the Bergman kernel on the universal covering of a positive line bundle concentrates on the diagonal. It is generally referred to as Agmon estimates, which serve as a powerful tool in our proofs. 
\begin{theorem}[cf. \cite{LZ} Theorem 2.1 or \cite{MM2} Theorem 0.1] Let $M$ be a complete \kahler\ manifold and $(L,h)\to M$ be a positive holomorphic line bundle. Then there exists some $\be=\be(M,L,h)>0$ such that the $L^2$-Bergman kernel $\tilde{\Pi}_{L^N}$ of $(L^N,h^N)\to\tilde{M}$ satisfies \begin{align*} \abs{\tilde{\Pi}_{L^N}(z,w)}_{h^N}\lesssim e^{-\be\sqrt{N}dist(z,w)}, \end{align*} for $z,w\in\tilde{M}$ with $dist(z,w)\geq1$. \end{theorem} \subsection{Circle Bundle and \szego\ Kernel} We now focus on a positive holomorphic line bundle $(L,h)$ over $M$, i.e. $h$ is a smooth Hermitian metric with positive curvature form $\om_h=\frac{\sqrt{-1}}{2}\Theta_h = - \frac{\sqrt{-1}}{2} \partial \bar\partial \log h$. $L^{-1}$ denotes its dual bundle with dual metric $h^{-1}$. $\rho$ is a function on $L^{-1}$ given by $\rho(\la):=\abs{\la}^2_{h^{-1}}-1$, which is a defining function for the disc bundle $D:=\{\la\in L^{-1}:\ \abs{\la}_{h^{-1}}\leq1\}$. When $L$ is a positive line bundle, $D$ is a strictly pseudoconvex domain. Therefore the principal $S^1$-bundle $\pi:X\to M$ given by $X:=\{\la\in L^{-1}:\abs{\la}_{h^{-1}}=1\}=\pa D$ is a strictly pseudoconvex $CR$ manifold. $\al:=-\sqrt{-1}\pa\rho|_X=\sqrt{-1}\bar{\pa}\rho|_X$ is a contact form on $X$ with $d\al=2\pi^*\om_h$ and $dV_X:=\frac{\al\wedge(d\al)^n}{2^{n+1}\pi n!}$ is a volume form on $X$. For any $N\geq1$, we can lift a section $s\in H^0(M,L^N)$ to $\hat{s}\in\hcal^2_N(X)$, the Hardy space of $L^2$ CR-functions on $X$ satisfying the equivariant condition $\hat{s}(r_\theta x)=e^{\sqrt{-1}N\theta}\hat{s}(x)$, where $x\in X$ and $r_{\theta}$ denotes the $S^1$-action on $X$. In fact, $$\hat{s}(x):=\langle x^N,s(\pi(x))\rangle,$$ where $\langle\cdot\rangle$ denotes the pairing of $L^N$ with $L^{-N}$. If $e_L$ is a local frame of $L$ over some open set $U$, for $z\in U$, we use $(z,\theta)$ as local coordinate for $x=e^{\sqrt{-1}\theta}\abs{e_L(z)}_he^{-1}_L(z)\in X$. 
Suppose that $s=fe^N_L$ over $U$ for some $f\in\ocal(U)$, then in terms of the local coordinates, $$\hat{s}(z,\theta)=\langle(e^{\sqrt{-1}\theta}\abs{e_L(z)}_he^{-1}_L(z))^N,f(z)e^N_L(z)\rangle=e^{\sqrt{-1}N\theta}\abs{e_L(z)}_h^Nf(z).$$ As a result, the lifting preserves the $L^2$-inner products: \begin{align}\label{Equivariant} \llangle s,s'\rrangle=(\hat{s},\hat{s}')_{L^2(dV_X)}:=\int_X\hat{s}\bar{\hat{s}}'\ dV_X. \end{align} Let $\{S_k\}_{k=1}^{d_{L^N}}$ be an orthonormal basis of $H^0(M,L^N)$. Then by (\ref{Equivariant}), $\{\hat{S}_k\}_{k=1}^{d_{L^N}}$ forms an orthonormal basis of $\hcal^2_N(X)$. In this way, the Bergman kernel $\Pi_{L^N}$ can be lifted to the \szego\ kernel of $\hcal^2_N(X)$: $$\hat{\Pi}_N(x,y)=\sum_{k=1}^{d_{L^N}}\hat{S}_k(x)\overline{\hat{S}_k(y)}\text{\quad for }x,y\in X.$$ Similarly we can define circle bundles $\pi_j:X_j\to M_j$, $\tilde{\pi}:\tilde{X}\to\tilde{M}$ and \szego\ kernels $\hat{\Pi}_{j,N}$, $\hat{\tilde{\Pi}}_N$ of $X_j$ and $\tilde X$ respectively, of which the local expressions are \begin{align}\label{Szego1} \hat{\Pi}_{j,N}(z,\theta,w,\varphi)=e^{\sqrt{-1}N(\theta-\varphi)}\abs{e_{j,L}(z)}_h^N\abs{e'_{j,L}(w)}_h^N\Phi_{j,L^N}(z,w),\text{\quad for }j\geq0, \end{align} if $\Pi_{j,L^N}(z,w)=\Phi_{j,L^N}(z,w)e^N_{j,L}(z)\otimes\overline{e'^N_{j,L}(w)}$, and \begin{align}\label{Szego2} \hat{\tilde{\Pi}}_N(z,\theta,w,\varphi)=e^{\sqrt{-1}N(\theta-\varphi)}\abs{\tilde{e}_L(z)}_h^N\abs{\tilde{e}'_L(w)}_h^N\tilde{\Phi}_{L^N}(z,w), \end{align} if $\tilde{\Pi}_{L^N}(z,w)=\tilde{\Phi}_{L^N}(z,w)\tilde{e}^N_L(z)\otimes\overline{\tilde{e}'^N_L(w)}$. Therefore, \begin{align}\label{Szego3} \abs{\hat{\tilde{\Pi}}_N(x,y)}=\abs{\tilde{\Pi}_{L^N}(\tilde{\pi}(x),\tilde{\pi}(y))}_{h^N},\text{\quad for all }x,y\in\tilde{X}. \end{align} \medskip The action $\Ga\curvearrowright\tilde{M}$ can be lifted as a group of CR holomorphic contact transformations on $\tilde{X}$ preserving the contact form $\tilde{\al}$. 
To be more specific, in terms of compatible local coordinates on $\tilde{X}$ (i.e. if we take $\tilde{e}_L$ as a local frame of $L\to \tilde{M}$ near $z\in\tilde{M}$, then we will take $\tilde{e}_L\circ\ga^{-1}$ as a local frame near $\ga z$), \begin{align}\label{Szego4} \ga(z,\theta)=(\ga z,\theta). \end{align} Hence the action $\Ga\curvearrowright\tilde{X}$ commutes with the $S^1$-action. As the Bergman kernel $\tilde{\Pi}_{L^N}$ satisfies (\ref{Bergman4}), by (\ref{Szego2}) and (\ref{Szego4}), \begin{align*} \hat{\tilde{\Pi}}_N(\ga x,\ga y)=\hat{\tilde{\Pi}}_N(x,y)\text{\quad for all }x,y\in\tilde{X}\text{ and }\ga\in\Ga. \end{align*} For each $j\geq0$, the covering map $p_j:\tilde{M}\to M_j$ induces a map $\hat{p}_j:\tilde{X}\to X_j$ such that the following diagram commutes: \begin{align*} \begin{array}{ccccc} & \tilde{M} & \stackrel{\tilde{\pi}}\longleftarrow & \tilde{X} & \\ p_j & \downarrow & & \downarrow & \hat{p}_j \\ & M_j & \stackrel{\pi_j}\longleftarrow & X_j & \end{array}. \end{align*} In fact, under compatible local coordinates (i.e. for any $z\in \tilde{M}$, if we take $e_{j,L}$ as a local frame of $L\to M_j$ near $p_j(z)\in M_j$, then we will take $\tilde{e}_L=e_{j,L}\circ p_j$ as a local frame of $L\to\tilde{M}$ near $z$), \begin{align}\label{Szego5} \hat{p}_j(z,\theta)=(p_j(z),\theta). \end{align} The following theorem proved by Z. Lu and S. Zelditch describes the relation between the \szego\ kernels over a manifold and those over the universal covering, which is the essential ingredient in our proof. \begin{theorem}[\cite{LZ} Theorem 1] There exists $N_0=N_0(M,L,h)>0$ such that if $N\geq N_0$, then for all $j\geq0$, \begin{align}\label{LZThm} \hat\Pi_{j,N}(\hat{p}_j(x),\hat{p}_j(y))=\di\sum_{\ga_j\in\Ga_j}\hat{\tilde{\Pi}}_N(\ga_j x,y),\text{\quad for any }x,y\in\tilde{X}. 
\end{align} \end{theorem} \section{Equidistribution for a General Line Bundle $E$} The following proposition asserts that Bergman stability implies higher order convergence of the Bergman kernels. It follows from the standard normal family argument (cf. Proposition 3.5 in \cite{To}). \begin{prop}\label{BKCinfty} If the tower of normal coverings with line bundles $\{(M_j,E)\}$ is Bergman stable, then the pull-back Bergman kernels $\{\Pi_{j,E}(p_j(\cdot),p_j(\cdot))\}$ converge locally uniformly in $\ccal^\infty$ topology to $\tilde{\Pi}_E(\cdot,\cdot)$ over $\tilde{M}\times\tilde{M}$. \end{prop} \begin{proof} Let $(U,V),(U',V')$ be any two pairs of bounded open sets in $\tilde{M}$ such that: \begin{enumerate}[a.] \item $V\subset\subset U$ and $V'\subset\subset U'$. \item The restrictions $p_0|_U$ and $p_0|_{U'}$ are one-to-one, which implies that $p_j|_U$ and $p_j|_{U'}$ are one-to-one for any $j\geq0$. \item$U$ is contained in the domain of a local frame $\tilde{e}_E$ of $E\to\tilde{M}$ as well as a holomorphic coordinate system $\{\xi=(\xi_1,\dots,\xi_n)\}$, while $U'$ is contained in the domain of a local frame $\tilde{e}'_E$ as well as a holomorphic coordinate system $\{\eta=(\eta_1,\dots,\eta_n)\}$. \end{enumerate} Hence for all $j\geq0$, we may define $e_{j,E}:=\tilde{e}_{E}\circ p^{-1}_j$ and $e'_{j,E}:=\tilde{e}'_{E}\circ p^{-1}_j$ as local frames of $E\to M_j$ over $p_j(U)$ and $p_j(U')$ respectively. Then as in (\ref{Bergman1}), $$\tilde{\Pi}_E(z,w)=\tilde{\Phi}_E(z,w)\tilde{e}_{E}(z)\otimes\overline{\tilde{e}_E(w)},$$ and for $j\geq0$, $$\Pi_{j,E}(p_j(z),p_j(w))=\Phi_{j,E}(p_j(z),p_j(w))e_{j,E}(p_j(z))\otimes\overline{e'_{j,E}(p_j(w))}=\Phi^*_{j,E}(z,w)\tilde{e}_{E}(z)\otimes\overline{\tilde{e}_E(w)},$$ where $\Phi^*_{j,E}(z,w):=\Phi_{j,E}(p_j(z),p_j(w))$ is also holomorphic in $z\in U$ and antiholomorphic in $w\in U'$. 
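The next step of the proof is a standard Cauchy-estimate argument; for orientation, here is the one-variable model case (a sketch: $f$ holomorphic on a neighborhood of the closed disc $\bar{D}(\xi^0,t)$):

```latex
\begin{align*}
\abs{f^{(k)}(\xi^0)}
=\Abs{\frac{k!}{2\pi\sqrt{-1}}\int_{\abs{\xi-\xi^0}=t}\frac{f(\xi)}{(\xi-\xi^0)^{k+1}}\,d\xi}
\leq\frac{k!}{2\pi}\cdot\frac{2\pi t}{t^{k+1}}\sup_{\abs{\xi-\xi^0}=t}\abs{f(\xi)}
=\frac{k!}{t^{k}}\sup_{\abs{\xi-\xi^0}=t}\abs{f(\xi)}.
\end{align*}
```

The estimates displayed below apply this bound circle by circle, in each $\xi$-variable and (conjugated) in each $\eta$-variable.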
Take $t,t'>0$ such that for any $\xi\in V$ and $\eta\in V'$, the closed coordinate polydiscs satisfy $\prod_{i=1}^n\bar{D}(\xi_i,t)\subset U$ and $\prod_{i=1}^n\bar{D}(\eta_i,t')\subset U'$. For arbitrary multi-indices $\al=(\al_1,\dots,\al_n)$ and $\be=(\be_1,\dots,\be_n)$, using Cauchy's integral formula, we have that for any $\xi^0\in V$ and $\eta^0\in V'$, \begin{align*} \begin{split} &D^{\al}_{\xi}\bar{D}^{\be}_{\eta}\Phi^*_{j,E}(\xi^0,\eta^0) \\ =&\frac{\al!\be!}{(2\pi)^{2n}}\int_{\abs{\xi_1-\xi^0_1}=t}\cdots\int_{\abs{\xi_n-\xi^0_n}=t}\int_{\abs{\eta_1-\eta^0_1}=t'}\cdots\int_{\abs{\eta_n-\eta^0_n}=t'} \frac{\Phi^*_{j,E}(\xi,\eta)}{\prod_{i=1}^n(\xi_i-\xi^0_i)^{\al_i+1}(\bar{\eta}_i-\bar{\eta}^0_i)^{\be_i+1}}\ d\xi d\bar{\eta}, \end{split} \end{align*} which implies that \begin{align*} \abs{D^{\al}_{\xi}\bar{D}^{\be}_{\eta}\Phi^*_{j,E}(\xi^0,\eta^0)}\leq\frac{\al!\be!}{t^{\abs{\al}}t'^{\abs{\be}}}\norm{\Phi^*_{j,E}}_{\ccal^0(U\times U')}. \end{align*} Hence for any $k>0$, there exists a constant $C=C(U,V,U',V',k)>0$ such that \begin{align*} \norm{\Phi^*_{j,E}}_{\ccal^k(V\times V')}\leq C\norm{\Phi^*_{j,E}}_{\ccal^0(U\times U')}. \end{align*} Moreover, the Bergman stability assumption implies that $\left\{\Phi^*_{j,E}\right\}$ converges uniformly on $\bar{U}\times\bar{U}'$ to $\tilde{\Phi}_E$. Thus, for $j$ sufficiently large, we have \begin{align}\label{BKCinfty1} \norm{\Phi^*_{j,E}}_{\ccal^k(V\times V')}\leq C(\norm{\tilde{\Phi}_E}_{\ccal^0(U\times U')}+1). \end{align} To prove the locally uniform $\ccal^1$ convergence of the Bergman kernels, it suffices to show that the derivatives of the sequence $\left\{\Phi^*_{j,E}\right\}$ converge locally uniformly to those of $\tilde{\Phi}_E$ on $U\times U'$.
If not, by taking a subsequence if necessary, we may assume that there exist some $1\leq i\leq n$, compact sets $K\subset U, K'\subset U'$ and $\ep>0$ such that \begin{align}\label{BKCinfty2} \sup_{K\times K'}\Abs{\frac{\pa\Phi^*_{j,E}}{\pa\xi_i}-\frac{\pa\tilde{\Phi}_E}{\pa\xi_i}}\geq\ep\text{ \quad for all }j\geq0. \end{align} However, since $\{\frac{\pa\Phi^*_{j,E}}{\pa\xi_i}\}$ and their derivatives are uniformly bounded on $K\times K'$ by (\ref{BKCinfty1}), applying the Arzel\`{a}--Ascoli theorem, we obtain a subsequence $\{\frac{\pa\Phi^*_{j_s,E}}{\pa\xi_i}\}$ that converges uniformly on $K\times K'$. As $\left\{\Phi^*_{j_s,E}\right\}$ converges uniformly to $\tilde{\Phi}_E$ on $K\times K'$, $\{\frac{\pa\Phi^*_{j_s,E}}{\pa\xi_i}\}$ must converge uniformly to $\frac{\pa\tilde{\Phi}_E}{\pa\xi_i}$, which contradicts (\ref{BKCinfty2}). This proves the locally uniform $\ccal^1$ convergence of the Bergman kernels. The higher order convergence follows by induction. \end{proof} \begin{prop}\label{BM} If the tower of normal coverings with line bundles $\{(M_j,E)\}$ is Bergman stable and $\tilde{B}_E=\emptyset$, then there exists $J\geq0$ such that for $j\geq J$, the base loci $B_{j,E}=\emptyset$ and the Bergman metrics $\Om_{j,E}$ can be defined all over $M_j$. Moreover, $\{\underline{\Om}_{j,E}\}_{j=J}^\infty$ converges to $\underline{\tilde{\Om}}_E$ uniformly in the $\ccal^\infty$ topology. \end{prop} \begin{proof} Let $D_0\subset\subset\tilde{M}$ be a fundamental domain corresponding to $M_0=M$, i.e., $p_0|_{D_0}:D_0\to M$ is injective and $p_0|_{\bar{D}_0}:\bar{D}_0\to M$ is surjective. Since $\{(M_j,E)\}$ is Bergman stable and $\tilde{\Pi}_E$ is nonvanishing on the diagonal, for the compact set $\bar{D}_0\subset\tilde{M}$, there exists $J\geq0$ such that for any $j\geq J$, $\Pi_{j,E}(p_j(z),p_j(z))\neq0$ for $z\in\bar{D}_0$.
Hence $B_{j,E}\cap p_j(\bar{D}_0)=\emptyset$ for $j\geq J$ and thus $\underline{B}_{j,E}\cap p_0(\bar{D}_0)=q_j(B_{j,E})\cap q_jp_j(\bar{D}_0)=\emptyset$. Since $p_0(\bar{D}_0)=M$, $\underline{B}_{j,E}=\emptyset$. Therefore $B_{j,E}=q_j^{-1}(\underline{B}_{j,E})=\emptyset$ for $j\geq J$. The remaining part of this proposition follows from the definition of Bergman metric (\ref{Bergman2}) and Proposition \ref{BKCinfty}. \end{proof} Proposition \ref{EZ} then follows from the standard arguments as in \cite{SZ1}. \begin{proof}[Proof of Proposition \ref{EZ}] The first part of this statement follows from Proposition \ref{BM} and Lemma 4.3 in \cite{SZ1}. The second part also follows from Proposition \ref{BM}. \end{proof} \begin{comment} To prove Proposition \ref{AS}, we need t The following lemma characterizes the growth of dimension $d_{j,E}$ of $H^0(M_j, E)$ in terms of the covering index $I_j$. \begin{lem}\label{dim} If the tower of coverings with line bundles $\{(M_j,E)\}$ is Bergman stable and $\tilde{B}_E=\emptyset$, then $d_{j,E}=O(I_j)$. \end{lem} \begin{proof} By the definition of Bergman stability, we have $\{\underline{\Pi}_{j,E}\}$ converges uniformly to $\underline{\tilde{\Pi}}_E$ on $M\times M$. Hence, \begin{align*} \begin{split} I_j^{-1}d_{j,E}=&I_j^{-1}\int_{M_j}\abs{\Pi_{j,E}(z_j,z_j)}_{h_E}\ dV \\ =&\int_M\abs{\underline{\Pi}_{j,E}(z,z)}_{h_E}\ dV \\ \to&\int_{M}\abs{\underline{\tilde{\Pi}}_E(z,z)}_{h_E}\ dV. \end{split} \end{align*} Since $\tilde{B}_E=\emptyset$, $\abs{\underline{\tilde{\Pi}}_E(z,z)}_{h_E}>0$ on $M$, thus $\int_{M}\abs{\underline{\tilde{\Pi}}_E(z,z)}_{h_E}\ dV>0$ and the lemma follows. \end{proof} \begin{proof}[Proof of Proposition \ref{AS}] The proof essentially follows from the argument in \cite{BL} line by line. In fact, our case is easier, as the normalized Bergman kernel converges to the smooth Bergman kernel on $\tilde M$ locally uniformly by the Bergman stability. 
Note that the normalized zero currents $\lfloor\underline{Z}_{{\bf{s}}_E}\rfloor_j$ are in the same cohomology class for all $j$ and the normalization in our case satisfies Corollary 2.7 in \cite{BL} by Lemma \ref{dim} above. \end{proof} \begin{rmk} Suppose we are given a not necessarily Gaussian probability measure and consider i.i.d. complex-valued random variables as in \cite{BL}. More precisely, a random section $s_j\in H^0(M_j,E)$ can be written as $s_j=\sum_{k=1}^{d_{j,E}}a_{j,k}S^j_k$, where $\{S^j_k\}_{k=1}^{d_{j,E}}$ is a fixed orthonormal basis of $H^0(M_j,E)$ and $\{a_{j,k}\}_{j\geq0,1\leq k\leq d_{j,E}}$ are i.i.d. complex-valued random variables with distribution $\phi(z)d_2z$, where $\phi\in\ccal^0(\C,\R^+)$ satisfies \begin{enumerate}[(i)] \item $\phi$ is bounded; \item there exists some $C>0$ such that $\int_{\abs{z}\geq R}\phi(z)d_2z\leq\frac{C}{R^2}$ holds for $R$ sufficiently large. \end{enumerate} Then the analogue conclusions in Proposition \ref{EZ} and Proposition \ref{AS} still hold by the same argument as in \cite{BL}. \end{rmk} \end{comment} \section{Positive Line Bundle $L$ over a Tower of Coverings} The following geometric quantity, which first appeared in \cite{DW}, captures the geometry of the tower of coverings. \begin{definition}\label{taudef} For any $j\geq0$, \begin{align}\label{tau0} \tau_j=\inf\left\{dist(z,\gamma_jz):z\in\tilde{M},\gamma_j\in\Gamma_j\setminus\{1\}\right\}. \end{align} \end{definition} One can check that $\tau_j\geq2R_j$, where $R_j$ denotes the injectivity radius of $M_j$: a minimizing geodesic from $z$ to $\gamma_jz$ projects to a geodesic loop on $M_j$ of the same length, and such a loop has length at least $2R_j$. It is easy to see that for all $z\in\tilde{M}$, $p_j|_{B(z,\half\tau_j)}$ is one-to-one, where $B(z,\half\tau_j)$ denotes the geodesic ball in $\tilde{M}$ centered at $z$ of radius $\half\tau_j$. For all $j\geq0$, $\Gamma_j\setminus\{1\}\supset\Gamma_{j+1}\setminus\{1\}$. Hence the sequence $\{\tau_j\}$ is nondecreasing. The following lemma, obtained in \cite{DW} and \cite{Don1}, describes the growth of $\tau_j$.
\begin{lem}\label{tau} $\di\lim_{j\to\infty}\tau_j=\infty$. \end{lem} \begin{proof} We argue by contradiction: assume that there exists $C>0$ such that for all $j\geq0$, $\tau_j\leq C$. Then for all $j\geq0$, there exist $z_j\in\tilde{M}$ and $\gamma_j\in\Gamma_j\setminus\{1\}$ with $dist(z_j,\gamma_jz_j)\leq2C$. Also we know that $p_0|_{\bar{D}_0}$ is surjective, where $D_0\subset\subset\tilde{M}$ is the fundamental domain described in the proof of Proposition \ref{BM}, and for all $j\geq0$, $p_0^{-1}(p_0(z_j))=\Gamma z_j$. Hence for all $j$, there exists $g_j\in\Gamma$ such that $g_jz_j\in\bar{D}_0$. Denote $z'_j=g_jz_j\in\bar{D}_0$ and $\gamma'_j=g_j\gamma_jg_j^{-1}$, which lies in $\Gamma_j\setminus\{1\}$ since $\Gamma_j$ is a normal subgroup of $\Gamma$; then $dist(z'_j,\gamma'_jz'_j)=dist(z_j,\gamma_jz_j)\leq2C$, as $g_j\in\Gamma$ acts as an isometry on $\tilde{M}$. By the compactness of $\bar{D}_0$, there exists a subsequence $\{j_k\}$ such that $z'_{j_k}\to z^*\in\bar{D}_0$. Since $dist(z^*,\gamma'_{j_k}z^*)\leq dist(z^*,z'_{j_k})+dist(z'_{j_k},\gamma'_{j_k}z'_{j_k})+dist(\gamma'_{j_k}z'_{j_k},\gamma'_{j_k}z^*)\leq2dist(z^*,z'_{j_k})+2C$, there exists some $K>0$ with $\{\gamma'_{j_k}z^*\}_{k=K}^{\infty}\subset\bar{B}(z^*,3C)$. Choosing a subsequence again if necessary, we may assume $\gamma'_{j_k}z^*\to w\in\bar{B}(z^*,3C)$. Thus $p_0(z^*)=p_0(\gamma'_{j_k}z^*)\to p_0(w)$, i.e. $p_0(z^*)=p_0(w)$, which implies that there exists some $h\in\Gamma$ such that $z^*=hw$. Hence $\gamma'_{j_k}hw\to w$. Since the group action $\Gamma\curvearrowright\tilde{M}$ is free and properly discontinuous, $\gamma'_{j_k}h=1$ for $k$ large enough. Therefore $h\in\Gamma_{j_k}$ for $k$ large. But we know $\bigcap_{j=0}^\infty\Ga_j=\{1\}$, so $h=1$. Thus $\gamma'_{j_k}=1$ for $k$ large, a contradiction, since $\gamma'_{j_k}\in\Gamma_{j_k}\setminus\{1\}$.
\end{proof} Hence we can assume that $\tau_0\geq2$, and we now begin the proof of Theorem \ref{BK}. \begin{proof}[Proof of Theorem \ref{BK}] As in the proof of Proposition \ref{BKCinfty}, we can take bounded open sets $U, U'\subset\tilde{M}$ satisfying conditions $(b)$ and $(c)$, local frames $\tilde{e}_L$ and $\tilde{e}'_L$ over $U$ and $U'$, $e_{j,L}=\tilde{e}_L\circ (p_j|_U)^{-1}$ and $e'_{j,L}=\tilde{e}'_L\circ (p_j|_{U'})^{-1}$ over $p_j(U)$ and $p_j(U')$. Under these local frames, we adopt similar notation and write, for any $z\in U, w\in U'$, \begin{align*} \begin{split} \Pi_{j,L^N}(p_j(z),p_j(w))=&\Phi_{j,L^N}(p_j(z),p_j(w))e^N_{j,L}(p_j(z))\otimes\overline{e'^N_{j,L}(p_j(w))} \\ =&\Phi_{j,L^N}(p_j(z),p_j(w))\tilde{e}^N_L(z)\otimes\overline{\tilde{e}'^N_L(w)}, \end{split} \end{align*} and \begin{align*} \tilde{\Pi}_{L^N}(z,w)=\tilde{\Phi}_{L^N}(z,w)\tilde{e}^N_L(z)\otimes\overline{\tilde{e}'^N_L(w)}. \end{align*} Hence, by (\ref{Szego1}) and (\ref{Szego2}), \begin{align*} \abs{\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)}_{h^N}=\abs{\hat\Pi_{j,N}(p_j(z),\theta,p_j(w),\varphi)-\hat{\tilde{\Pi}}_N(z,\theta,w,\varphi)}, \end{align*} for any $\theta,\varphi\in[0,2\pi]$. On the other hand, take any $N\geq N_0$; then (\ref{LZThm}) holds for all $j\geq0$, and the triangle inequality gives, for all $x,y\in\tilde{X}$, \begin{align*} \abs{\hat\Pi_{j,N}(\hat{p}_j(x),\hat{p}_j(y))-\hat{\tilde{\Pi}}_N(x,y)}\leq\sum_{\ga_j\in\Ga_j\setminus\{1\}}\abs{\hat{\tilde{\Pi}}_N(\ga_jx,y)}. \end{align*} Take any $z\in U,w\in U'$ and let $x=(z,\theta),y=(w,\varphi)\in\tilde{X}$ in terms of the local frames $\tilde{e}_L,\tilde{e}'_L$; then by (\ref{Szego3}), (\ref{Szego4}) and (\ref{Szego5}), the inequality above implies \begin{align*} \abs{\hat\Pi_{j,N}(p_j(z),\theta,p_j(w),\varphi)-\hat{\tilde{\Pi}}_N(z,\theta,w,\varphi)} \leq\sum_{\ga_j\in\Ga_j\setminus\{1\}}\abs{\tilde{\Pi}_{L^N}(\ga_jz,w)}_{h^N}.
\end{align*} Hence we obtain \begin{align*} \abs{\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)}_{h^N}\leq\sum_{\ga_j\in\Ga_j\setminus\{1\}}\abs{\tilde{\Pi}_{L^N}(\ga_jz,w)}_{h^N}. \end{align*} As $\{\tau_j\}$ increases to $\infty$, for any compact sets $K\subset U,K'\subset U'$, there exists $a_{K,K'}\in\N$ such that $$\half\tau_{a_{K,K'}}\geq\sup_{z\in K,w\in K'}dist(z,w).$$ Thus for any $j\geq a_{K,K'}$, $z\in K,w\in K'$ and $\ga_j\in\Ga_j\setminus\{1\}$, \begin{align*} dist(\ga_jz,w)\geq dist(\ga_jz,z)-dist(z,w)\geq\tau_j-\half\tau_{a_{K,K'}}\geq\half\tau_j\geq1. \end{align*} Then, by the Agmon estimates, there exists $\beta=\beta(M,L,h)>0$ such that \begin{align*} \begin{split} &\sum_{\ga_j\in\Ga_j\setminus\{1\}}\abs{\tilde{\Pi}_{L^N}(\ga_jz,w)}_{h^N} \\ \leq&\sum_{k=0}^{\infty}\sharp\left\{\ga_j\in\Ga_j\setminus\{1\}:(k+\half)\tau_j\leq dist(\ga_jz,w)<(k+\frac{3}{2})\tau_j\right\}e^{-\be\sqrt{N}(k+\half)\tau_j} \\ \leq&\sum_{k=0}^{\infty}\sharp\left\{\ga_j\in\Ga_j:\ga_jz\in B(w,(k+\frac{3}{2})\tau_j)\right\}e^{-\be\sqrt{N}(k+\half)\tau_j}. \end{split} \end{align*} Whenever $\ga_jz\in B(w,(k+\frac{3}{2})\tau_j)$, $$B(\ga_jz,\half\tau_j)\subset B(w,(k+2)\tau_j).$$ Also from the definition of $\tau_j$, $$B(\ga z,\half\tau_j)\cap B(\ga'z,\half\tau_j)=\emptyset\text{\quad for }\ga,\ga'\in\Ga_j,\ \ga\neq\ga'.$$ Furthermore, since $\Ga$ acts isometrically on $\tilde{M}$, $$V(B(\ga z,\half\tau_j))=V(B(z,\half\tau_j)), \text{\quad for all }\ga\in\Ga.$$ Hence for any $j\geq a_{K,K'}$, \begin{align*} \begin{split} &\sharp\left\{\ga_j\in\Ga_j:\ga_jz\in B(w,(k+\frac{3}{2})\tau_j)\right\} \\ &\leq\frac{V(B(w,(k+2)\tau_j))}{V(B(z,\half\tau_j))} \\ &\leq\frac{V(B(w,(k+2)\tau_j))}{V(B(z,\half\tau_{a_{K,K'}}))} \\ &=\frac{V(B(w,\half\tau_{a_{K,K'}}))}{V(B(z,\half\tau_{a_{K,K'}}))}\frac{V(B(w,(k+2)\tau_j))}{V(B(w,\half\tau_{a_{K,K'}}))}. \end{split} \end{align*} Since the Ricci curvature of the compact manifold $M$ is bounded, there exists $\kappa>0$ such that $Ric(M)\geq-(2n-1)\kappa$. Therefore $Ric(\tilde{M})\geq-(2n-1)\kappa$. By the Bishop--Gromov volume comparison theorem (cf. \cite{SY}, p.~11), if $V(2n,-\kappa,R)$ denotes the volume of a geodesic ball of radius $R$ in the space form $M^{2n}_{-\kappa}$ of constant sectional curvature $-\kappa$, then for all $j\geq a_{K,K'}$ and $k\geq0$, \begin{align*} \frac{V(B(w,(k+2)\tau_j))}{V(2n,-\kappa,(k+2)\tau_j)}\leq\frac{V(B(w,\half\tau_{a_{K,K'}}))}{V(2n,-\kappa,\half\tau_{a_{K,K'}})} \ \Rightarrow\ \frac{V(B(w,(k+2)\tau_j))}{V(B(w,\half\tau_{a_{K,K'}}))}\leq\frac{V(2n,-\kappa,(k+2)\tau_j)}{V(2n,-\kappa,\half\tau_{a_{K,K'}})}. \end{align*} We may rescale the metric on the space form $M_{-\kappa}^{2n}$ by $\kappa$ and get the space form $M_{-1}^{2n}$, while the volume ratio remains unchanged. Hence \begin{align*} \frac{V(B(w,(k+2)\tau_j))}{V(B(w,\half\tau_{a_{K,K'}}))}\leq\frac{V(2n,-1,(k+2)\tau_j)}{V(2n,-1,\half\tau_{a_{K,K'}})}, \end{align*} and therefore \begin{align*} \sharp\left\{\ga_j\in\Ga_j:\ga_jz\in B(w,(k+\frac{3}{2})\tau_j)\right\}\leq\frac{V(B(w,\half\tau_{a_{K,K'}}))}{V(B(z,\half\tau_{a_{K,K'}}))}\frac{V(2n,-1,(k+2)\tau_j)}{V(2n,-1,\half\tau_{a_{K,K'}})}. \end{align*} For all $R>0$, by an explicit formula (cf. \cite{SY}, p.~9), \begin{align*} \begin{split} V(2n,-1,R)&=\si_{2n-1}\int_0^R(\sinh t)^{2n-1}\ dt \\ &=\si_{2n-1}\int_0^R(\cosh^2t-1)^{n-1}\ d(\cosh t) \\ &=\si_{2n-1}\int_1^{\cosh R}(u^2-1)^{n-1}\ du \\ &\leq\si_{2n-1}\int_0^{e^R}u^{2n-2}\ du \\ &=\frac{\si_{2n-1}}{2n-1}e^{(2n-1)R}, \end{split} \end{align*} where $\si_{2n-1}$ denotes the Euclidean volume of the unit sphere $S^{2n-1}\subset\R^{2n}$. Since $K,K'\subset\tilde{M}$ are compact, there exists a constant $\tilde{C}_{K,K'}>0$ depending on $K,K'$, such that \begin{align*} \frac{V(B(w,\half\tau_{a_{K,K'}}))}{V(B(z,\half\tau_{a_{K,K'}}))}\leq\tilde{C}_{K,K'},\text{ for all }z\in K,w\in K'.
\end{align*} Take $\hat{C}_{K,K'}=\frac{\si_{2n-1}\tilde{C}_{K,K'}}{(2n-1)V(2n,-1,\half\tau_{a_{K,K'}})}$, then \begin{align*} \sharp\left\{\ga_j\in\Ga_j:\ga_jz\in B(w,(k+\frac{3}{2})\tau_j)\right\}\leq\hat{C}_{K,K'}e^{(2n-1)(k+2)\tau_j}, \end{align*} for all $z\in K,w\in K'$ and $j\geq a_{K,K'}$. Therefore, \begin{align*} \begin{split} &\abs{\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)}_{h^N} \\ \leq&\hat{C}_{K,K'}\sum_{k=0}^{\infty}e^{(2n-1)(k+2)\tau_j}e^{-\be\sqrt{N}(k+\half)\tau_j} \\ =&\hat{C}_{K,K'}e^{(4n-2-\half\be\sqrt{N})\tau_j}\sum_{k=0}^{\infty}(e^{(2n-1-\be\sqrt{N})\tau_j})^k. \end{split} \end{align*} Denote $N_1=N_1(M,L,h)=\max\{\lfloor(\frac{8n-2}{\be(M,L,h)})^2\rfloor+1,N_0(M,L,h)\}$. Hence when $N\geq N_1$, $\si=\si(M,L,h,N)=-(4n-2-\half\be\sqrt{N})\geq1$ and $2n-1-\be\sqrt{N}\leq-\half\si$. For all $z\in K,w\in K'$ and $j\geq a_{K,K'}$, \begin{align*} \abs{\Pi_{j,L^N}(p_j(z),p_j(w))-\tilde{\Pi}_{L^N}(z,w)}_{h^N}\leq\hat{C}_{K,K'}\frac{e^{-\si\tau_j}}{1-e^{-\half\si\tau_j}}\leq\frac{\hat{C}_{K,K'}}{1-e^{-\si}}e^{-\si\tau_j}, \end{align*} since $\tau_j\geq\tau_0\geq2$. Take $C_{K,K'}=\frac{\hat{C}_{K,K'}}{1-e^{-\si}}$ and we get the desired estimate. Then Bergman stability follows from Lemma \ref{tau}. \end{proof} \medskip \begin{rmk} Let $(M, \omega)$ be a compact symplectic manifold of real dimension $2n$ with $\frac{\omega}{\pi}$ an integral cohomology class, $L\rightarrow M$ be a Hermitian line bundle such that the curvature $\om_h=\frac{\sqrt{-1}}{2}\Theta_h = \omega$ and $J$ be an almost complex structure compatible with $\omega$. Suppose that $(M, \omega)$ admits a tower of normal coverings. Denote the Bergman kernel of $D_j$ (respectively $\tilde D$), i.e. 
the Schwartz kernel for the orthogonal projection from $L^2_J(M_j, L^N)$ (respectively $L^2_J(\tilde M, L^N)$) onto the ($L^2$) kernel of $D_j$ (respectively, of $\tilde D$) on $M_j$ (respectively, $\tilde M$), by $\Pi_{j,L^N}(\cdot, \cdot)$ (respectively $\tilde\Pi_{L^N}(\cdot, \cdot)$), where $D_j$ (respectively, $\tilde D$) is the pseudodifferential operator such that $D_j \Pi_{j, L^N}=0$ (respectively $\tilde D \tilde\Pi_{L^N}=0$) (cf. \cite{MM1}, \cite{SZ2}). Then, by applying Theorems 0.1 and 0.2 in \cite{MM2} and the same argument as above, the analogue of Theorem \ref{BK} in the symplectic setting also holds. \end{rmk} \begin{rmk} As pointed out by the referee, if the tensor power $N$ of the line bundle $L$ is sufficiently large, then a tower of coverings (not necessarily normal) with line bundles $(M_j, L^N)$ is Bergman stable. The proof follows from the heat kernel argument in \cite{Don2} (cf. Section 1). In Donnelly's proof, the assumption is that the smallest nonzero eigenvalue of the Hodge Laplacian is uniformly bounded below by a positive constant (independent of $j$), and this assumption always holds if $N$ is sufficiently large, by the Bochner--Kodaira identity. On the other hand, under the assumption of normal coverings, Bergman stability without the effective estimates also follows from the standard H\"ormander type $L^2$-estimates and the estimate of $\tau_j$ (cf. Lemma \ref{tau}), which we include in the Appendix. \end{rmk} \medskip Theorem \ref{BL} follows from an $L^2$-estimate argument very similar to the one in the proof of the second part of Proposition \ref{bsne} in the Appendix; hence we omit the proof here. \begin{comment} the standard H\"ormander type $L^2$ estimates.
\begin{proof}[Proof of Theorem \ref{BL}] For any $w\in\bar{D}_0$, similar as before, choosing a bounded neighbourhood $U_w$ of $w$ such that: $p_0|_{U_w}$ is one-to-one, there is a local frame $\tilde{e}_L$ over $U_w$ and a holomorphic normal coordinate system $\{\xi_1,\dots,\xi_n\}$ centered at $w$ over $U_w$. Let $\rho(z):=dist(z,w)$ be the distance between $z\in\tilde{M}$ and $w$, which is differentiable at least in $B(w,R_0)\setminus\{w\}\subset\tilde{M}$, where $R_0$ is the injectivity radius of $M$. By shrinking $U_w$ if necessarily, we may assume that $U_w=B(w,r)$ for some $0<r<R_0$. Denote $U'_w=B(w,\half r)\subset\subset U_w$. Moreover, there exists some $a>0$ such that $U_w \supset \bar{B}_{\xi}(0,a)$ where $B_{\xi}(0,a)=\{\xi\in\C^n:\abs{\xi} < a\}$ is the coordinate ball centered at $0$ of radius $a$. Since $\bar{D}_0$ is compact, the positive numbers $r$ and $a$ can be chosen uniformly for all $w\in\bar{D}_0$. Let $\chi,\de\in\ccal^{\infty}([0,\infty),\R)$ be nonincreasing cut-off functions such that $\chi(t)=1$ if $0\leq t\leq\half r$, $\chi(t)=0$ if $t\geq r$ and $\de(t)=1$ if $0\leq t\leq\frac{1}{4}a^2$, $\de(t)=0$ if $t\geq a^2$. For any $N>0$, define $T_w\in\Ga(\tilde{M},L^N)$ by $T_w(z)=\chi(\rho(z))\tilde{e}^N_L(z)$ in $U_w$ and $T_w(z)=0$ otherwise. Also we define an upper semi-continuous function $\varphi_w$ on $\tilde{M}$ as follows: over $U_w$, $\varphi_w(\xi)=\de(\abs{\xi}^2)\log{\left(\Abs{\frac{\xi}{a}}^{2n}\right)}$ in terms of holomorphic coordinates, and $\varphi_w=0$ otherwise. Locally $\varphi_w$ is the sum of a $\ccal^\infty$ function and a plurisubharmonic function. Hence $\sqrt{-1}\pa\bar{\pa}\varphi_w$ is a $(1,1)$-current of order $0$, which can be treated as a $(1,1)$-form with measure coefficients. 
Over $U_w$, \begin{align*} \begin{split} \sqrt{-1}\pa\bar{\pa}\varphi_w =&n\sqrt{-1}\pa\bar{\pa}\left[\de(\abs{\xi}^2)\log{\left( \Abs{\frac{\xi}{a}}^2 \right) }\right] \\ =&n\sqrt{-1}\left[\frac{2\de'(\abs{\xi}^2)}{\abs{\xi}^2}+\de''(\abs{\xi}^2)\log{\left(\Abs{\frac{\xi}{a}}^2\right)}\right]\bar{\xi}d\xi\wedge\xi d\bar{\xi} \\ &+n\sqrt{-1}\de'(\abs{\xi}^2)\log{\left( \Abs{\frac{\xi}{a}}^2 \right) }d\xi\wedge d\bar{\xi}+n\sqrt{-1}\de(\abs{\xi}^2)\pa\bar{\pa}\log{\abs{\xi}^2} \\ \geq&n\sqrt{-1}\left[\frac{2\de'(\abs{\xi}^2)}{\abs{\xi}^2}+\de''(\abs{\xi}^2)\log{\left( \Abs{\frac{\xi}{a}}^2\right)}\right]\bar{\xi}d\xi\wedge\xi d\bar{\xi}, \end{split} \end{align*} where $\bar{\xi}d\xi\wedge\xi d\bar{\xi}=(\sum_i\bar{\xi}_id\xi_i) \wedge (\sum_i\xi_id\bar{\xi}_i)$ and $d\xi\wedge d\bar{\xi}=\sum_id\xi_i\wedge d\bar{\xi}_i$. Applying Lebesgue's decomposition on its measure coefficients, we have $\sqrt{-1}\pa\bar{\pa}\varphi_w=(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}+(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{sing}$, where the singular part $(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{sing}$ is a positive $(1,1)$-current and the absolutely continuous part $(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}$ is a $(1,1)$-form with $L^1_{loc}$ coefficients and satisfies $(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}\geq\al_w$, where $\al_w$ is a smooth $(1,1)$-form on $\tilde{M}$ given by \begin{align*} \al_w= \begin{cases} n\sqrt{-1}\left[\frac{2\de'(\abs{\xi}^2)}{\abs{\xi}^2}+\de''(\abs{\xi}^2)\log{\Abs{\frac{\xi}{a}}^2}\right]\bar{\xi}d\xi\wedge\xi d\bar{\xi} &\text{over }U_w \\ 0 &\text{otherwise}. \end{cases} \end{align*} (cf. 
\cite{MM} Remark B.2.13) Define a $(1,1)$-form on $\tilde{M}$ \begin{align*} Ric(K^{-1}_{\tilde{M}}\otimes L^N,\varphi_w):=NRic(h)+Ric(\om_h)+\half(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}, \end{align*} where $Ric(h)=\om_h$ denotes the Ricci curvature of the line bundle $L\to\tilde{M}$, $Ric(\om_h)$ denotes the Ricci curvature of the manifold $(\tilde{M},\om_h)$, which equals the Ricci curvature of the anti-canonical line bundle $K^{-1}_{\tilde{M}}$. Since $M$ is compact and $Ric(h)>0$, there exists $N'_2=N'_2(M,L,h)>0$ such that over $M$, $NRic(h)+Ric(\om_h)\geq0$ for all $N\geq N'_2$. Thus, the same result holds over $\tilde{M}$. On the other hand, since $\al_w$ is a smooth $(1,1)$-form with compact support, there exists $N''_2=N''_2(M,L,h)>0$ such that \begin{align*} NRic(h)+\half(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}\geq NRic(h)+\half\al_w\geq Ric(h)=\om_h,\ \text{ for all }N\geq N''_2. \end{align*} Also we can choose $N''_2$ uniformly for all $w\in\bar{D}_0$. Let $N_2=N'_2+N''_2>0$, \begin{align*} NRic(h)+Ric(\om_h)+\half(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}\geq\om_h,\ \text{ for all }N\geq N_2. \end{align*} $\bar{\pa}T_w\in\Ga_{0,1}(\tilde{M},L^N)=\Ga_{n,1}(\tilde{M},K_{\tilde{M}}^{-1}\otimes L^N)$ is $\bar{\pa}$-closed and supported on $U_w\setminus\bar{U}'_w$. On its support, \begin{align*} \bar{\pa}T_w(z)=\chi'(\rho(z))\bar{\pa}\rho(z)\tilde{e}^N_L(z). \end{align*} Then \begin{align*} \abs{\bar{\pa}T_w(z)}^2_{h^N,\om_h}=\abs{\chi'(\rho(z))}^2\abs{\bar{\pa}\rho(z)}^2_{\om_h}(\abs{\tilde{e}_L(z)}_h^2)^N \lesssim h^N(z), \end{align*} and when $N\geq N_2>N''_2$, $NRic(h)+\half(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}\geq\om_h$, \begin{align*} \begin{split} \abs{\bar{\pa}T_w(z)}^2_{h^N,Ric(K^{-1}_{\tilde{M}}\otimes L^N,\varphi_w)} =&\abs{\chi'(\rho(z))}^2\abs{\bar{\pa}\rho(z)}^2_{NRic(h)+\half(\sqrt{-1}\pa\bar{\pa}\varphi_w)_{ac}}(\abs{\tilde{e}_L(z)}_h^2)^N \\ \leq&\abs{\chi'(\rho(z))}^2\abs{\bar{\pa}\rho(z)}^2_{\om_h}(\abs{\tilde{e}_L(z)}_h^2)^N \\ \lesssim&h^N(z). 
\end{split} \end{align*} Thus $\bar{\pa}T_w\in L^2_{0,1}(\tilde{M},L^N)=L^2_{n,1}(\tilde{M},K_{\tilde{M}}^{-1}\otimes L^N)$ since \begin{align*} \int_{\tilde{M}}\abs{\bar{\pa}T_w(z)}^2_{h^N,\om_h}\ dV\lesssim\int_{U_w\setminus\bar{U}'_w}h^N(z)\ \om^n_h(z)<\infty, \end{align*} and \begin{align*} \int_{\tilde{M}}\abs{\bar{\pa}T_w(z)}^2_{h^N,Ric(K^{-1}_{\tilde{M}}\otimes L^N,\varphi_w)}e^{-\varphi_w(z)}\ dV \lesssim\int_{U_w\setminus\bar{U}'_w}h^N(z)e^{-\varphi_w(z)}\ \om^n_h(z)<\infty \end{align*} as $\varphi_w$ is smooth away from $w$. By H\"ormander-Demailly's $L^2$-estimates with singular weight (see \cite{Dem} Theorem 5.1), there exists some $T'_w\in L^2_{n,0}(\tilde{M},K_{\tilde{M}}^{-1}\otimes L^N,loc)=L^2(\tilde{M},L^N,loc)$ such that \begin{align}\label{Hor1} \bar{\pa}T'_w=\bar{\pa}T_w, \end{align} and \begin{align}\label{Hor2} \int_{\tilde{M}}\abs{T'_w}_{h^N}^2e^{-\varphi_w}\ dV \leq\int_{\tilde{M}}\abs{\bar{\pa}T_w(z)}^2_{h^N,Ric(K^{-1}_{\tilde{M}}\otimes L^N,\varphi_w)}e^{-\varphi_w(z)}\ dV<\infty. \end{align} Take $S_w=T_w-T'_w$. (\ref{Hor1}) implies that $S_w\in H^0(\tilde{M},L^N)$. Since $T_w\in\Ga(\tilde{M},L^N)$, $T'_w=T_w-S_w\in\Ga(\tilde{M},L^N)$. Moreover, by the definition of $\varphi_w$, $e^{-\varphi_w(\xi)}\sim\abs{\xi}^{-2n}$ near $\om$. Then (\ref{Hor2}) implies that $T'_w(w)=0$. Therefore $S_w(w)=T_w(w)=\tilde{e}^N_L(w)\neq0$. Since $\varphi_w\leq0$, $T'_w\in L^2(\tilde{M},L^N)$ by \begin{align}\label{Hor3} \int_{\tilde{M}}\abs{T'_w}_{h^N}^2\ dV\leq\int_{\tilde{M}}\abs{T'_w}_{h^N}^2e^{-\varphi_w}\ dV<\infty. \end{align} And from the definition of $T_w$, we have \begin{align}\label{Hor4} \int_{\tilde{M}}\abs{T_w}_{h^N}^2\ dV\lesssim\int_{U_w}h^N(z)\ \om^n_h(z)<\infty. \end{align} (\ref{Hor3}) and (\ref{Hor4}) imply that $S_w\in L^2(\tilde{M},L^N)$. In conclusion, $w$ is not a base point for $L^N\to\tilde{M}$ when $N\geq N_2$. By the arbitrary choice of $w\in\bar{D}_0$, we have thus proved the theorem. 
\end{proof} \end{comment} \begin{rmk} Since $\tilde{B}_{L^N}=\emptyset$ for $N\geq N_2$, Proposition \ref{BM} implies that $B_{j,L^N}=\emptyset$ for $j$ sufficiently large. In fact, as $L\to M$ is an ample line bundle, we may choose some $N_3=N_3(M,L)>0$ so that $L^N\to M$ is very ample for all $N\geq N_3$, hence base point free. By pulling back holomorphic sections from $M$ to $M_j$, we are able to show that $B_{j,L^N}=\emptyset$ for all $j\geq0$ if $N\geq N_3$. \end{rmk} \begin{comment} By combining Proposition \ref{EZ}, Proposition \ref{AS}, Theorem \ref{BK} and Theorem \ref{BL}, we have the following corollaries. \begin{cor}\label{EZ(L)} With the same $N^*>0$ given in Theorem \ref{Variance}, for all $N\geq N^*$ and $1\leq l\leq n$, the expectation of the normalized zero current $\underline{Z}_{\scal^l_{j,L^N}}$ satisfies $$\lim_{j\to\infty}\mathbb{E}^{(l)}_j\underline{Z}_{\scal^l_{j,L^N}}=(\pi^{-1}\underline{\tilde{\Om}}_{L^N})^l$$ in the sense of currents. \end{cor} \begin{cor}\label{AS(L)} With the same $N^*>0$ given in Theorem \ref{Variance}, for all $N\geq N^*$, \begin{align*} \lim_{j\to\infty}\lfloor\underline{Z}_{{\bf{s}}_{L^N}}\rfloor_j=\pi^{-1}\underline{\tilde{\Om}}_{L^N} \end{align*} in the sense of currents for $\nu_{L^N}$-almost all ${\bf{s}}_{L^N}\in\Pi_{j=0}^\infty SH^0(M_j,L^N)$. \end{cor} \end{comment} \begin{proof}[Proof of Corollary \ref{EZ(L)}] This follows directly from Proposition \ref{EZ}, Theorem \ref{BK} and Theorem \ref{BL}. \end{proof} \section{Variance Estimate and Almost Sure Convergence} In this section, we derive the variance estimate. The essential ingredient is still the theorem on Poincar\'{e} series in \cite{LZ}. We also rely on the explicit formula for the variance in \cite{SZ4}. \begin{proof}[Proof of Theorem \ref{Variance}] We only consider those $j\geq J$, where $J$ is given by Proposition \ref{BM} with $E=L^N$.
Taking a partition of unity if necessary, we may assume that $\supp(\psi)\subset U$ for some open set $U\subset M$, which is the domain of some local frame $e_L$ of $L\to M$. Then $e_{j,L}:=e_L\circ q_j$ is a local frame of $L\to M_j$ over $q_j^{-1}(U)$. Moreover, by making $U$ even smaller, it is also possible to assume that $p^{-1}_0(U)$ is the disjoint union of $\ga \tilde{U}$'s for all $\ga\in\Ga$, where $\tilde{U}\subset\tilde{M}$ is such that $p_0|_{\tilde{U}}$ is one-to-one. Denote $p_j(\tilde{U})=U_j$. Hence $q^{-1}_j(U)$ is the union of $[\ga]_jU_j$'s for all $[\ga]_j\in\Ga/\Ga_j$ and $q_j|_{U_j}$ is one-to-one. In what follows, for $z,w\in U$, we use $z_j,w_j\in U_j$ and $\tilde{z},\tilde{w}\in\tilde{U}$ to denote their preimages. Choosing an orthonormal basis $\{S_{j_k}\}_{k=1}^{d_{j,L^N}}$ of $H^0(M_j,L^N)$, we assume that for $1\leq k\leq d_{j,L^N}$, $S_{j_k}=f_{j_k}e^N_{j,L}$ over $q_j^{-1}(U)$ for some $f_{j_k}\in\ocal(q_j^{-1}(U))$. Write $f_j=(f_{j_1},\dots,f_{j_{d_{j,L^N}}})$. Hence $\sqrt{-1}\pa\bar{\pa}\log\abs{f_j}=\Om_{j,L^N}$ over $q_j^{-1}(U)$ when $j\geq J$. For any $s_j\in SH^0(M_j,L^N)$, suppose $s_j=\sum_{k=1}^{d_{j,L^N}}a_kS_{j_k}$ for some $a=(a_1,\dots,a_{d_{j,L^N}})\in S^{2d_{j,L^N}-1}\subset\C^{d_{j,L^N}}$. Then over $q_j^{-1}(U)$, $s_j=(\sum_{k=1}^{d_{j,L^N}}a_kf_{j_k})e^N_{j,L}=\langle a,\bar{f}_j\rangle e^N_{j,L}$. By the Poincar\'{e}--Lelong formula, over $q_j^{-1}(U)$, the zero current \begin{align*} Z_{s_j}=\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{\langle a,\bar{f}_j\rangle}=\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{\langle a,u_j\rangle}+\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{f_j}=\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{\langle a,u_j\rangle}+\pi^{-1}\Om_{j,L^N}, \end{align*} where $u_j(z):=\frac{\overline{f_j(z)}}{\abs{f_j(z)}}\in S^{2d_{j,L^N}-1}$.
Then by (\ref{Bergman5}),
\begin{align*}
\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N}=I_j^{-1}{q_j}_*Z_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N}=I_j^{-1}{q_j}_*\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{\langle a,u_j\rangle}.
\end{align*}
Therefore,
\begin{align*} \begin{split}
(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)=&(I_j^{-1}{q_j}_*\frac{\sqrt{-1}}{\pi}\pa\bar{\pa}\log\abs{\langle a,u_j\rangle},\psi) \\
=&(I_j^{-1}\pi^{-1}{q_j}_*\log\abs{\langle a,u_j\rangle},\sqrt{-1}\pa\bar{\pa}\psi) \\
=&\int_{M}(\sqrt{-1}\pa\bar{\pa}\psi(z))(I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j}\pi^{-1}\log\abs{\langle a,u_j([\ga]_jz_j)\rangle}).
\end{split} \end{align*}
We denote the normalized Haar measure on the sphere $S^{2d_{j,L^N}-1}$ by $\nu_{2d_{j,L^N}-1}$. Following the proof of Theorem 3.1 in \cite{SZ4}, one can show that
\begin{align*} \begin{split}
&\int_{SH^0(M_j,L^N)}\abs{(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2\ d\nu_{j,L^N}(s_j) \\
=&\int_{M\times M}(\sqrt{-1}\pa\bar{\pa}\psi(z))(\sqrt{-1}\pa\bar{\pa}\overline{\psi(w)}) \\
&\times I_j^{-2}\sum_{[\ga]_j,[\ga']_j\in\Ga/\Ga_j}\pi^{-2}\int_{S^{2d_{j,L^N}-1}}\log\abs{\langle a,u_j([\ga]_jz_j)\rangle}\log\abs{\langle a,u_j([\ga']_jw_j)\rangle}\ d\nu_{2d_{j,L^N}-1}(a) \\
=&\int_{M\times M}(\sqrt{-1}\pa\bar{\pa}\psi(z))(\sqrt{-1}\pa\bar{\pa}\overline{\psi(w)})I_j^{-2}\sum_{[\ga]_j,[\ga']_j\in\Ga/\Ga_j}\tilde{G}(P_{j,L^N}([\ga]_jz_j,[\ga']_jw_j)) \\
= &\int_{M\times M}(\sqrt{-1}\pa\bar{\pa}\psi(z))(\sqrt{-1}\pa\bar{\pa}\overline{\psi(w)})I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j}\tilde{G}(P_{j,L^N}([\ga]_jz_j,w_j)),
\end{split} \end{align*}
where
\begin{align*}
P_{j,L^N}(z_j, w_j):=\frac{\abs{\Pi_{j,L^N}(z_j, w_j)}_{h^N}}{\sqrt{\abs{\Pi_{j,L^N}(z_j, z_j)}_{h^N}}\sqrt{\abs{\Pi_{j,L^N}(w_j, w_j)}_{h^N}}}
\end{align*}
denotes the normalized Bergman kernel of $\Pi_{j,L^N}$ for $j\geq J$ (for such $j$ the denominator never vanishes), and the last equality follows from the symmetry.
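The last equality above, reducing the double sum over $\Ga/\Ga_j\times\Ga/\Ga_j$ to a single sum, rests on the invariance of the integrand under a simultaneous group action. A toy numerical check of this reduction mechanism (with a finite cyclic group and an invariant kernel, both purely illustrative and not taken from the paper) can be sketched as:

```python
# Toy model of the symmetry reduction: if a kernel F on X x X is invariant
# under the diagonal action of a finite group G (here G = Z/n acting on Z/n
# by translation), then
#     sum_{g, g'} F(g.z, g'.w) = |G| * sum_{g} F(g.z, w).
n = 7

def F(z, w):
    # invariant kernel: depends only on (z - w) mod n
    return ((z - w) % n) ** 2

z0, w0 = 2, 5
double_sum = sum(F((z0 + g) % n, (w0 + gp) % n)
                 for g in range(n) for gp in range(n))
single_sum = n * sum(F((z0 + g) % n, w0) for g in range(n))
assert double_sum == single_sum
```

The same bookkeeping underlies the actual reduction, with the invariant kernel replaced by the $\tilde{G}\circ P_{j,L^N}$ term.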
In \cite{SZ4}, Shiffman and Zelditch introduce the function
\begin{align*}
\tilde{G}(t)=-\frac{1}{4\pi^2}\int_0^{t^2}\frac{\log(1-s)}{s}\ ds
\end{align*}
to compute the variance; moreover, they write down the explicit power series expansion
$$\tilde{G}(t)=\frac{1}{4\pi^2}\sum_{n=1}^\infty\frac{t^{2n}}{n^2},$$
which plays an essential role in our estimate. By the power series expansion we have
\begin{align}\label{Variance1}
\tilde{G}(t)\leq\frac{t^2}{24},\qquad\text{ for } 0\leq t\leq1.
\end{align}
Hence by (\ref{Variance1}), and recalling that the denominators of the $P_{j,L^N}$'s are bounded from below by a uniform positive constant for $j\geq J$, it follows that
\begin{align}\label{Variance3} \begin{split}
&\int_{SH^0(M_j,L^N)}\abs{(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2\ d\nu_{j,L^N}(s_j) \\
\lesssim&\sup_{(z_j,w_j)\in U_j\times U_j}I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j}\abs{\Pi_{j,L^N}([\ga]_jz_j,w_j)}^2_{h^N}\norm{\sqrt{-1}\pa\bar{\pa}\psi}^2_{L^1(M)}.
\end{split} \end{align}
From now on, for any fixed $(z,w)\in U\times U$ (so that the pairs $(z_j,w_j)\in U_j\times U_j$ and $(\tilde{z},\tilde{w})\in\tilde{U}\times\tilde{U}$ are determined), we always choose the representative $\ga$ of the coset $[\ga]_j\in\Ga/\Ga_j$ such that $dist(\ga\tilde{z},\tilde{w})=\inf_{g\in[\ga]_j}dist(g\tilde{z},\tilde{w})$ (in fact, $\inf_{g\in[\ga]_j}dist(g\tilde{z},\tilde{w})=\min_{g\in[\ga]_j}dist(g\tilde{z},\tilde{w})$ by the proper discontinuity of the deck transformation action). With this setup, we can proceed with the estimate in (\ref{Variance3}) as follows. First of all, since $N\geq N^*\geq N_0(M,L,h)$, (\ref{LZThm}) shows that
\begin{align*}
\abs{\Pi_{j,L^N}([\ga]_jz_j,w_j)}_{h^N}\leq\sum_{\ga_j\in\Ga_j}\abs{\tilde{\Pi}_{L^N}(\ga_j\ga\tilde{z},\tilde{w})}_{h^N}.
\end{align*}
If $dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j$, then for any $\ga_j\in\Ga_j\setminus\{1\}$,
\begin{align*}
dist(\ga_j\ga\tilde{z},\tilde{w})\geq dist(\ga_j\ga\tilde{z},\ga\tilde{z})-dist(\ga\tilde{z},\tilde{w})\geq\tau_j-\half\tau_j=\half\tau_j.
\end{align*}
If, on the other hand, $dist(\ga\tilde{z},\tilde{w})>\half\tau_j$, then $dist(\ga_j\ga\tilde{z},\tilde{w})\geq\half\tau_j$ for all $\ga_j\in\Ga_j$. Arguing as in the proof of Theorem \ref{BK}, with $J$ now playing the role of $a_{K,K'}$, there exists $C_{\tilde{U}}>0$ (playing the same role as $C_{K,K'}$) such that for all $j\geq J$,
\begin{align*}
\abs{\Pi_{j,L^N}([\ga]_jz_j,w_j)}_{h^N}\leq
\begin{cases}
\abs{\tilde{\Pi}_{L^N}(\ga\tilde{z},\tilde{w})}_{h^N}+C_{\tilde{U}}e^{-\si\tau_j}&\text{\quad if }dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j, \\
C_{\tilde{U}}e^{-\si\tau_j}&\text{\quad if }dist(\ga\tilde{z},\tilde{w})>\half\tau_j.
\end{cases}
\end{align*}
In fact, the formulas defining $C_{K,K'}$, together with the fact that the replacement for $a_{K,K'}$ is a fixed constant independent of $\tilde{U}$, show that $C_{\ga\tilde{U}}$ can be chosen to be equal to $C_{\tilde{U}}$ for any $\ga\in\Ga$. Thus,
\begin{align}\label{Variance4}
I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j}\abs{\Pi_{j,L^N}([\ga]_jz_j,w_j)}^2_{h^N}
\lesssim C^2_{\tilde{U}}e^{-2\si\tau_j}+I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j,\ dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j}\abs{\tilde{\Pi}_{L^N}(\ga\tilde{z},\tilde{w})}^2_{h^N}.
\end{align}
Let
$$A_j(\tilde{z},\tilde{w})=\sum_{[\ga]_j\in\Ga/\Ga_j,\ dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j}\abs{\tilde{\Pi}_{L^N}(\ga\tilde{z},\tilde{w})}^2_{h^N}$$
and denote the coset representatives appearing in the summation at the $j$-th step by $\{\ga^{(j)}_1,\dots,\ga^{(j)}_{\kappa_j}\}$, where $1\leq\kappa_j\leq I_j$ since the summation always contains the identity coset.
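The elementary bound (\ref{Variance1}) used above follows from $\sum_{n\geq1}t^{2n}/n^2\leq t^2\sum_{n\geq1}n^{-2}=t^2\pi^2/6$ for $0\leq t\leq1$, and $(1/4\pi^2)(\pi^2/6)=1/24$. A quick numerical sanity check of the truncated power series (an illustration only; the truncation depth is arbitrary):

```python
import math

def G_series(t, terms=200):
    # truncated Shiffman-Zelditch function
    # G(t) = (1 / 4 pi^2) * sum_{n >= 1} t^{2n} / n^2
    return sum(t ** (2 * n) / n ** 2 for n in range(1, terms + 1)) / (4 * math.pi ** 2)

# G(t) <= t^2 / 24 on [0, 1], with near-equality at t = 1,
# since sum 1/n^2 = pi^2/6 and (1/4pi^2)(pi^2/6) = 1/24.
for k in range(101):
    t = k / 100
    assert G_series(t) <= t ** 2 / 24 + 1e-15
```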
We observe that $\{\ga^{(j)}_1,\dots,\ga^{(j)}_{\kappa_j}\}\subset\{\ga^{(j+1)}_1,\dots,\ga^{(j+1)}_{\kappa_{j+1}}\}$ because, in our convention, a representative of a coset is also a representative of a smaller coset, and it satisfies the condition for the new summation whenever it satisfies the previous one. Hence $A_{j+1}(\tilde{z},\tilde{w})$ is obtained from $A_j(\tilde{z},\tilde{w})$ by adding $\De_j=\kappa_{j+1}-\kappa_j$ new terms. We have already shown that the $\ga\in\Ga$ with $dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j$ are exactly the representatives appearing in the summation defining $A_j$; thus $\half\tau_j<dist(\ga\tilde{z},\tilde{w})\leq\half\tau_{j+1}$ for $\ga\in\{\ga^{(j+1)}_1,\dots,\ga^{(j+1)}_{\kappa_{j+1}}\}\setminus\{\ga^{(j)}_1,\dots,\ga^{(j)}_{\kappa_j}\}$. Therefore, by Agmon estimates,
\begin{align*}
A_{j+1}(\tilde{z},\tilde{w})\leq\Delta_je^{-\be\sqrt{N}\tau_j}+A_j(\tilde{z},\tilde{w}).
\end{align*}
Denote $\si'=\si'(M,L,h,N)=\be\sqrt{N}>0$. Then for all $j\geq J$ and $k\geq1$,
\begin{align*} \begin{split}
A_{j+k}(\tilde{z},\tilde{w})\leq&\Delta_{j+k-1}e^{-\si'\tau_{j+k-1}}+A_{j+k-1}(\tilde{z},\tilde{w}) \\
\leq&\Delta_{j+k-1}e^{-\si'\tau_{j+k-1}}+\Delta_{j+k-2}e^{-\si'\tau_{j+k-2}}+A_{j+k-2}(\tilde{z},\tilde{w}) \\
\leq&\qquad\qquad\cdots\cdots\cdots \\
\leq&\Delta_{j+k-1}e^{-\si'\tau_{j+k-1}}+\Delta_{j+k-2}e^{-\si'\tau_{j+k-2}}+\cdots+\Delta_je^{-\si'\tau_j}+A_j(\tilde{z},\tilde{w}) \\
\leq&(\Delta_{j+k-1}+\Delta_{j+k-2}+\cdots+\Delta_j)e^{-\si'\tau_j}+A_j(\tilde{z},\tilde{w}) \\
=&(\kappa_{j+k}-\kappa_j)e^{-\si'\tau_j}+A_j(\tilde{z},\tilde{w}) \\
\leq&I_{j+k}e^{-\si'\tau_j}+A_j(\tilde{z},\tilde{w}).
\end{split} \end{align*}
Hence,
\begin{align*}
0\leq\frac{A_{j+k}(\tilde{z},\tilde{w})}{I_{j+k}}\leq e^{-\si'\tau_j}+\frac{A_j(\tilde{z},\tilde{w})}{I_{j+k}}\leq e^{-\si'\tau_j}+\sup_{\tilde{z}\in\tilde{U}}\abs{\tilde{\Pi}_{L^N}(\tilde{z},\tilde{z})}^2_{h^N}\frac{I_j}{I_{j+k}}\leq e^{-\si'\tau_j}+2^{-k}\sup_{\tilde{z}\in\tilde{U}}\abs{\tilde{\Pi}_{L^N}(\tilde{z},\tilde{z})}^2_{h^N},
\end{align*}
where the last inequality is due to the fact that $\frac{I_{j+k}}{I_j}=[\Ga_j:\Ga_{j+1}][\Ga_{j+1}:\Ga_{j+2}]\cdots[\Ga_{j+k-1}:\Ga_{j+k}]\geq2^k$. Therefore, for any $j\geq0$, we have the uniform estimate
\begin{align}\label{Variance5}
0\leq\frac{A_j(\tilde{z},\tilde{w})}{I_j}=I_j^{-1}\sum_{[\ga]_j\in\Ga/\Ga_j,\ dist(\ga\tilde{z},\tilde{w})\leq\half\tau_j}\abs{\tilde{\Pi}_{L^N}(\ga\tilde{z},\tilde{w})}^2_{h^N}\leq \exp\{-\si'\tau_{\lfloor\frac{j}{2}\rfloor}\}+2^{-\lfloor\frac{j}{2}\rfloor}\sup_{\tilde{z}\in\tilde{U}}\abs{\tilde{\Pi}_{L^N}(\tilde{z},\tilde{z})}^2_{h^N}.
\end{align}
Combining (\ref{Variance3}), (\ref{Variance4}) and (\ref{Variance5}), we get
\begin{align}\label{Variance6} \begin{split}
&\int_{SH^0(M_j,L^N)}\abs{(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2\ d\nu_{j,L^N}(s_j) \\
\lesssim&[C^2_{\tilde{U}}\exp\{-2\si\tau_j\}+\exp\{-\si'\tau_{\lfloor\frac{j}{2}\rfloor}\} +2^{-\lfloor\frac{j}{2}\rfloor}\sup_{\tilde{z}\in\tilde{U}}\abs{\tilde{\Pi}_{L^N}(\tilde{z},\tilde{z})}^2_{h^N}]\ \norm{\sqrt{-1}\pa\bar{\pa}\psi}_{L^1(M)}^2.
\end{split} \end{align}
Hence the variance estimate follows. The second statement holds since $\tau_j\to\infty$.
\end{proof}

Any sequence of sections ${\bf{s}}_{L^N}=\{s_j\}_{j=0}^\infty$ with $s_j\in SH^0(M_j,L^N)$ for each $j\geq0$ can be identified with a random element in the probability space $\langle\Pi_{j=0}^\infty SH^0(M_j,L^N),\nu_{L^N}\rangle$, where $\nu_{L^N}$ is the infinite product measure induced by the $\nu_{j,L^N}$'s.
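Both terms on the right-hand side of (\ref{Variance5}) vanish as $j\to\infty$ as soon as $\tau_j\to\infty$, which is what makes the variance estimate useful. A numerical illustration with placeholder values ($\tau_j=j$, $\si'=1$, and the $\sup$ term set to $1$; none of these are taken from the paper):

```python
import math

def rhs(j, sigma_prime=1.0, sup_term=1.0, tau=lambda m: float(m)):
    # right-hand side of the uniform estimate:
    #   exp(-sigma' * tau_{floor(j/2)}) + 2^{-floor(j/2)} * sup-term
    m = j // 2
    return math.exp(-sigma_prime * tau(m)) + 2.0 ** (-m) * sup_term

vals = [rhs(j) for j in range(0, 60, 10)]
assert all(b < a for a, b in zip(vals, vals[1:]))  # strictly decreasing
assert rhs(50) < 1e-7                              # already tiny at j = 50
```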
If we fix an orthonormal basis $\{e_1,\cdots,e_{d_{j,L^N}}\}$ of $H^0(M_j,L^N)$, the set $\mathcal{ONB}_{j,L^N}$ of orthonormal bases of $H^0(M_j,L^N)$ can be identified with $U(d_{j,L^N})$, the unitary group of rank $d_{j,L^N}$. Denoting by $\vartheta_{j,L^N}$ the unit mass Haar measure on $\mathcal{ONB}_{j,L^N}$, the pair $\langle\mathcal{ONB}_{j,L^N},\vartheta_{j,L^N}\rangle$ is a probability space. Similarly, we may consider a sequence of orthonormal bases ${\bf{S}}_{L^N}=\{(S_{j,1},\dots,S_{j,d_{j,L^N}})\}_{j=0}^\infty\in\langle\Pi_{j=0}^\infty\mathcal{ONB}_{j,L^N},\vartheta_{L^N}\rangle$, where $\vartheta_{L^N}$ is the infinite product measure induced by the ${\vartheta_{j,L^N}}$'s. For all $j\geq0$, denote
$$\lfloor\underline{Z}_{{\bf{s}}_{L^N}}\rfloor_j=\underline{Z}_{s_j}\in\dcal'^{1,1}(M).$$
Then, similarly to Theorems 1.1 and 1.2 in \cite{SZ1}, we have the following.
\begin{cor}
Assume that $\{\tau_j\}$ defined in (\ref{tau0}) satisfies
\begin{align}\label{Variance7}
\sum_je^{-s\tau_j}<\infty,
\end{align}
for some constant $s>0$. Then there exists $\hat{N}=\hat{N}(M,L,h)>0$ such that for all $N\geq\hat{N}$,
\begin{enumerate}[i)]
\item $\lfloor\underline{Z}_{{\bf{s}}_{L^N}}\rfloor_j$ converges to $\pi^{-1}\underline{\tilde{\Om}}_{L^N}$ for $\nu_{L^N}$-almost all ${\bf{s}}_{L^N}\in\Pi_{j=0}^\infty SH^0(M_j,L^N)$;
\item For $\vartheta_{L^N}$-almost all ${\bf{S}}_{L^N}=\{(S_{j,1},\dots,S_{j,d_{j,L^N}})\}_{j=0}^{\infty}\in\Pi_{j=0}^\infty\mathcal{ONB}_{j,L^N}$,
$$d_{j,L^N}^{-1}\sum_{k=1}^{d_{j,L^N}}\abs{(\underline{Z}_{S_{j,k}}-\underline{\tilde{\Om}}_{L^N},\psi)}^2\to0$$
for any $\psi\in\dcal^{n-1,n-1}(M)$. Equivalently, for each $j\geq0$ there exists a subset $\La_{j,L^N}\subset\{1,\dots,d_{j,L^N}\}$ such that $\frac{\sharp\La_{j,L^N}}{d_{j,L^N}}\to1$ and for any $k\in\La_{j,L^N}$, the sequence $\underline{Z}_{S_{j,k}}$ satisfies
\begin{align*}
\lim_{j\to\infty}\underline{Z}_{S_{j,k}}=\pi^{-1}\underline{\tilde{\Om}}_{L^N}.
\end{align*}
\end{enumerate}
\end{cor}
\begin{proof}
i) Take $\hat{N}(M,L,h)\geq N^*(M,L,h)$ such that the constant $c(M,L,h,N)$ in Theorem \ref{Variance} satisfies $c(M,L,h,\hat{N})\geq s$. Then for any $N\geq\hat{N}$ and any $\psi\in\dcal^{n-1,n-1}(M)$, Theorem \ref{Variance} implies that
\begin{align*} \begin{split}
&\int_{\Pi_{j=0}^\infty SH^0(M_j,L^N)}\sum_{j=0}^\infty\abs{(\lfloor\underline{Z}_{{\bf{s}}_{L^N}}\rfloor_j-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2d\nu_{L^N}({\bf{s}}_{L^N}) \\
=&\sum_{j=0}^\infty\int_{SH^0(M_j,L^N)}\abs{(\underline{Z}_{s_j}-\pi^{-1}\underline{\Om}_{j,L^N},\psi)}^2\ d\nu_{j,L^N}(s_j) \\
\lesssim&\sum_{j=0}^\infty[\exp\{-c\tau_{\lfloor\frac{j}{2}\rfloor}\}+2^{-\frac{j}{2}}]\ \norm{\sqrt{-1}\pa\bar{\pa}\psi}_{L^1(M)}^2 \\
\leq&\sum_{j=0}^\infty[\exp\{-s\tau_{\lfloor\frac{j}{2}\rfloor}\}+2^{-\frac{j}{2}}]\ \norm{\sqrt{-1}\pa\bar{\pa}\psi}_{L^1(M)}^2<\infty
\end{split} \end{align*}
by (\ref{Variance7}). Therefore, $\lfloor\underline{Z}_{{\bf{s}}_{L^N}}\rfloor_j-\pi^{-1}\underline{\Om}_{j,L^N}\to0$ in the sense of currents for $\nu_{L^N}$-almost all ${\bf{s}}_{L^N}=\{s_j\}\in\Pi_{j=0}^\infty SH^0(M_j,L^N)$. Then i) follows from Proposition \ref{BM}. ii) follows from the same argument as in the proof of Theorem 1.2 in \cite{SZ1}.
\end{proof}
\begin{rmk}
(1) Let $\Gamma_j= H_{j,1} \times H_{j,2} \subset \mathbb{Z}^2$ be a discrete lattice and let $M_j = \mathbb{C} / \Gamma_j$ be the corresponding real two-dimensional flat torus. If $\Ga_0\supsetneq\Ga_1\supsetneq\cdots\supsetneq\Ga_j\supsetneq\cdots$ is a tower of normal subgroups and $H_{j, l} \supsetneq H_{j+1, l}$ for all $j,l$, then $\tau_{j+1}\geq2\tau_j$. Thus, condition (\ref{Variance7}) holds for all $s>0$. \\
(2) Let $M_j$ be a sequence of compact quotients of $SU(n, 1) / S(U(1)\times U(n)) = \mathbb{B}^n$ corresponding to a tower of congruence subgroups $\Gamma(q_j)$ of $G(Q, \cal{L})$ (see Section 2.2 of \cite{Ye1} for the detailed definition of these subgroups).
Then
$$\tau_j \geq 2 \log \left\{c \left[ \vol(M_j)^{\frac{2}{n^2+2n}} \right] \right\} \geq 2 \log c + \frac{4j}{n^2+2n} \log 2 + \frac{4}{n^2+2n}\log \vol(M_0)$$
(cf.\ Lemma 2.2.1 of \cite{Ye1}). Hence condition (\ref{Variance7}) holds for any $s>0$.
\end{rmk}

\section{Appendix}
We include a proof of Bergman stability, as stated in Theorem \ref{BK}, using the standard H\"ormander--Demailly type $L^2$ estimate, in a slightly more general setup (complete noncompact base manifold with bounded geometry). The proof is well known to experts; we record it here for its independent interest.
\begin{prop}\label{bsne}
Assume that the \kahler\ manifold $(M,\om_h)$ is complete (not necessarily compact) and satisfies the following bounded geometry conditions:
\begin{enumerate}[(a)]
\item the sectional curvature of $(M, \om_h)$ is uniformly bounded;
\item the injectivity radius of $(M, \om_h)$ is uniformly bounded from below by $R>0$.
\end{enumerate}
Then there exists some $N_4=N_4(M,L,h)>0$ such that any tower of normal coverings with line bundles $\{(M_j,L^N)\}$ is Bergman stable whenever $N\geq N_4$.
\end{prop}
\begin{proof}
We essentially follow the argument of \cite{To} (see also \cite{CF,Ye3}) and break it into two parts:
\begin{enumerate}[(i)]
\item $\di\limsup_{j\to\infty}\abs{\Pi_{j,L^N}\left(p_j(z),p_j(z)\right)}_{h^N}\leq\abs{\tilde{\Pi}_{L^N}(z,z)}_{h^N}$ for any $z\in\tilde M$ and any $N\geq1$;
\item $\di\liminf_{j\to\infty}\abs{\Pi_{j,L^N}\left(p_j(z),p_j(z)\right)}_{h^N}\geq\abs{\tilde{\Pi}_{L^N}(z,z)}_{h^N}$ for any $z\in\tilde M$ and any $N\geq N_4$.
\end{enumerate}
Part (i) follows by a straightforward normal family argument (cf.\ \cite{To,CF,Ye3}), which we omit here, while part (ii) is a combination of H\"ormander's $L^2$-estimate and the Agmon estimate. For any $z\in\tilde{M}$, define $\tau_j(z)=\inf\left\{dist(z,\gamma_j z):\ \gamma_j\in\Gamma_j\setminus\{1\}\right\}$.
Then $p_j|_{B(z,\half\tau_j(z))}$ is one-to-one and $p_j|_{B(z,\half\tau_j(z))}:B(z,\half\tau_j(z))\to p_j\left(B(z,\half\tau_j(z))\right)$ is a biholomorphism. It is proved in \cite{DW} that $\tau_j(z) \rightarrow \infty$ uniformly on compact subsets of $\tilde M$ as $j \rightarrow \infty$.
\medskip
Now fix a point $x \in\tilde{M}$. We only need to consider the case $\tilde{\Pi}_{L^N}(x,x)\neq0$. Let $\rho(\cdot)=dist(\cdot, x)\in\ccal^0(\tilde{M})$ and $x_j=p_j(x) \in M_j$ for any $j \geq 0$.
\medskip
\textbf{Step 1}: Define sections $\{T_j\in\Gamma(M_j,L^N)\}$.
Consider the coherent state
$$S_{x}(y):=\frac{\tilde{\Pi}_{L^N}(y, x)}{\sqrt{\tilde{\Pi}_{L^N}(x, x)}}.$$
Then $S_{x} \in SH^0(\tilde{M},L^N)$ and $\abs{S_{x}(x)}_{h^N}^2=\abs{\tilde{\Pi}_{L^N}(x, x)}_{h^N}$. For any $j\geq0$, let $\tilde{T}_j(y)=\chi_j(\rho(y))S_{x}(y)\in\Gamma(\tilde{M},L^N)$, where the nonincreasing function $\chi_j\in\ccal_c^{\infty}([0,\infty),\R^+)$ satisfies $\chi_j(r)=1$ for $0\leq r\leq\frac{1}{4}\tau_j(x)$, $\chi_j(r)=0$ for $r\geq\frac{1}{3}\tau_j(x)$ and $\norm{\chi'_j}_{\infty}=O(\tau_j(x)^{-1})$. Since $p_j|_{B(x,\half\tau_j(x))}$ is one-to-one, we may define the sections $\{T_j\in\Gamma(M_j,L^N)\}$ as follows:
\begin{align*}
T_j(z)=
\begin{cases}
\tilde{T}_j\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right) &\text{if} ~z\in p_j(B(x,\half\tau_j(x))), \\
0 & \text{otherwise}.
\end{cases}
\end{align*}
\medskip
\textbf{Step 2}: Construct potential functions $\{\phi_j\}$ following \cite{Ye2}.
Since the injectivity radius of the base manifold $M=M_0$ is bounded from below by $R>0$ and the injectivity radius is nondecreasing along the tower of coverings, the injectivity radius of $M_j$ at $x_j$ is at least $R$. Let $\delta\in\ccal^{\infty}_c([0,\infty),\R^+)$ (fixed and independent of $j$) be a nonincreasing cut-off function satisfying $\delta(r)=1$ for $0\leq r\leq \frac{1}{2}R$ and $\delta(r)=0$ for $r\geq R$.
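Cutoff functions with the stated bounds are easy to exhibit. A sketch of a $\chi_j$-type cutoff built from the $C^1$ smoothstep profile $s(u)=3u^2-2u^3$ (an illustrative choice; the proof only needs existence): it equals $1$ on $[0,\tau/4]$, vanishes beyond $\tau/3$, and since $\max|s'|=3/2$ while the transition interval has length $\tau/12$, its derivative is bounded by $18/\tau=O(\tau^{-1})$.

```python
def chi(r, tau):
    # C^1 cutoff: 1 on [0, tau/4], 0 on [tau/3, oo),
    # interpolating with the smoothstep profile s(u) = 3u^2 - 2u^3.
    a, b = tau / 4.0, tau / 3.0
    if r <= a:
        return 1.0
    if r >= b:
        return 0.0
    u = (r - a) / (b - a)
    return 1.0 - (3.0 * u ** 2 - 2.0 * u ** 3)

def chi_prime_sup(tau, samples=10_000):
    # sup of |chi'| estimated via finite differences on the transition interval
    a, b = tau / 4.0, tau / 3.0
    h = (b - a) / samples
    return max(abs(chi(a + (i + 1) * h, tau) - chi(a + i * h, tau)) / h
               for i in range(samples))

for tau in (10.0, 100.0, 1000.0):
    # max |s'| = 3/2 on [0, 1], over an interval of length tau/12,
    # so sup |chi'| = (3/2) / (tau/12) = 18 / tau = O(1/tau)
    assert chi_prime_sup(tau) <= 18.0 / tau + 1e-9
```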
In addition, one can choose $\delta(r)$ so that $-\frac{3}{R} \leq \delta'(r) \leq 0$ and $\left|\delta''(r) \right| \leq \frac{12}{r^2}$. Since $\tau_j(x) \rightarrow \infty$ as $j \rightarrow \infty$, we may assume that $\tau_j(x) > 4R$ for $j$ sufficiently large. Define a function on $\tilde M$ by
$$\phi(y)= \log \left(\frac{4 \rho^2(y)}{R^2} \right) \times \delta(\rho(y)).$$
Then the potential function $\phi_j$ on $M_j$ is defined by
\begin{align*}
\phi_j(z)=
\begin{cases}
n \phi \left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right) &\text{if} ~z\in p_j(B(x,\half\tau_j(x))), \\
0 & \text{otherwise}.
\end{cases}
\end{align*}
As the sectional curvature of $M_j$ is uniformly bounded independently of $j$, by the Hessian comparison theorem \cite{GW}, as shown in \cite{Ye2}, one can control the complex Hessian of $\phi_j$:
$$\frac{\sqrt{-1}}{2} \partial\bar\partial \phi_j \geq -K \om_h,$$
where the positive constant $K=K(M,L,h)$ is independent of $j$ and of the base point $x \in \tilde M$.
\medskip
\textbf{Step 3}: Apply H\"ormander's theorem to solve the $\bar{\pa}$-equation $\bar{\pa}T'_j=\bar{\pa}T_j$.
There exists $N_4'=N_4'(M,L,h) >0$ such that
\begin{align}\label{Curvbd}
NRic(h)+\frac{\sqrt{-1}}{2}\pa\bar{\pa}\phi_j+Ric(\om_h) \geq \om_h ~\text{ for}~ N \geq N_4' .
\end{align}
For $N\geq N_4'$ and sufficiently large $j$, we consider the line bundle $(L^N,h^Ne^{-\phi_j})\to (M_j,dV_h)$. By (\ref{Curvbd}), we may apply H\"ormander's $L^2$-estimate for the $\bar\partial$-equation (cf.\ \cite{Dem}, Theorem 5.1): there exists $T'_j\in L^2(M_j,(L^N,h^Ne^{-\phi_j}))$ such that $\bar{\pa}T'_j=\bar{\pa}T_j$ and
\begin{align}\label{Hor}
\norm{T'_j}^2_{L^2(h^Ne^{-\phi_j})}=\int_{M_j}\abs{T'_j}^2_{h^N}e^{-\phi_j}dV_h \leq\int_{M_j}\abs{\bar{\pa}T_j}^2_{(h^N,\om_h)}e^{-\phi_j}dV_h=\norm{\bar{\pa}T_j}^2_{L^2(h^Ne^{-\phi_j})}.
\end{align}
Note that $\bar{\pa} T_j$ is supported in $p_j\left(\bar{B}(x,\frac{1}{3}\tau_j(x))\setminus B(x,\frac{1}{4}\tau_j(x))\right)=p_j(\bar{B}(x,\frac{1}{3}\tau_j(x)))\setminus p_j(B(x,\frac{1}{4}\tau_j(x)))$. For any $z\in p_j(B(x,\half\tau_j(x)))$,
\begin{align*} \begin{split}
\bar{\pa}T_j(z)&=\bar{\pa}\left[\chi_j\left(\rho\circ(p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)S_x\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)\right] \\
&=\chi'_j\left(\rho\circ(p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)\bar{\pa}\rho\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)S_{x}\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right).
\end{split} \end{align*}
The distance function $\rho$ is differentiable almost everywhere (away from the cut locus). Moreover, we have $\abs{\bar{\pa}\rho}_{\om_h}^2=\half\abs{d\rho}_{\om_h}^2=\half$ almost everywhere. Hence for almost every $z\in p_j(B(x,\half\tau_j(x)))$,
\begin{align}\label{Hor0}
\abs{\bar{\pa}T_j(z)}^2_{(h^N,\om_h)}= \half\Abs{\chi'_j\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)}^2\ \Abs{S_{x}\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)}^2_{h^N}.
\end{align}
From the definition of $\chi_j$,
\begin{align}\label{Hor1}
\left| \chi'_j\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right) \right|^2\lesssim\tau_j(x)^{-2}.
\end{align}
Applying the Agmon estimate on the support of $\bar{\pa}T_j$, when $N\geq N_0$,
\begin{align}\label{Hor2}
\Abs{S_x\left((p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)}^2_{h^N}\lesssim e^{-2\be\sqrt{N}\frac{1}{4}\tau_j(x)}=e^{-\half\be\sqrt{N}\tau_j(x)},
\end{align}
provided that $j\geq0$ is large enough to satisfy $\frac{1}{4}\tau_j(x) \geq1$. Combining (\ref{Hor0}), (\ref{Hor1}) and (\ref{Hor2}), we have that for $j$ large enough, the following holds almost everywhere in $p_j(B(x,\half\tau_j(x)))$:
\begin{align}\label{Hor4}
\abs{\bar{\pa}T_j}^2_{(h^N,\om_h)}\lesssim\tau_j(x)^{-2}e^{-\half\be\sqrt{N}\tau_j(x)}.
\end{align}
As $\bar{\pa}T_j$ is supported in $p_j(\bar{B}(x,\frac{1}{3}\tau_j(x)))\setminus p_j(B(x,\frac{1}{4}\tau_j(x)))$ and $\phi_j$ is supported in $p_j(B(x,R))$, we have $\phi_j=0$ on the support of $\bar{\pa}T_j$ for $j$ large enough. Therefore, for such $j$, by (\ref{Hor4}),
\begin{align*} \begin{split}
\norm{\bar{\pa}T_j}^2_{L^2(h^Ne^{-\phi_j})}=&\int_{M_j}\abs{\bar{\pa}T_j}^2_{(h^N,\om_h)}e^{-\phi_j}dV_h \\
\lesssim&\tau_j(x)^{-2}e^{-\half\be\sqrt{N}\tau_j(x)}\int_{p_j(B(x,\half\tau_j(x)))}\ dV_h \\
=&\tau_j(x)^{-2}e^{-\half\be\sqrt{N}\tau_j(x)}V({B(x,\half\tau_j(x))}).
\end{split} \end{align*}
Since the Ricci curvature of $\tilde{M}$ is bounded from below, the Bishop volume comparison theorem implies that $V(B(x,\half\tau_j(x)))$ grows at most exponentially in $\tau_j(x)$; that is, there exists $C=C(M,L,h)>0$ such that $V(B(x,\half\tau_j(x)))\leq e^{\frac{C}{2}\tau_j(x)}$. Hence
\begin{align*}
\norm{\bar{\pa}T_j}^2_{L^2(h^Ne^{-\phi_j})}\lesssim\tau_j(x)^{-2}e^{-\half\be\sqrt{N}\tau_j(x)}e^{\frac{C}{2}\tau_j(x)}=\tau_j(x)^{-2}e^{-\half(\be\sqrt{N}-C)\tau_j(x)}.
\end{align*}
Denote $N_4''=N_4''(M,L,h)=\max\{\lfloor\left(\frac{C+2}{\be}\right)^2\rfloor+1,N_0\}$. Then for $N\geq N_4''$ we have $\be\sqrt{N}-C>2$, so
\begin{align}\label{Hor5}
\norm{\bar{\pa}T_j}^2_{L^2(h^Ne^{-\phi_j})}\lesssim\tau_j(x)^{-2}e^{-\tau_j(x)}.
\end{align}
Take $N_4=\max\{N_4',N_4''\}$. By the $L^2$-estimate (\ref{Hor}), for $N\geq N_4$ and $j$ large enough,
\begin{align*}
\norm{T'_j}^2_{L^2(h^N e^{-\phi_j})}=\int_{M_j}\abs{T'_j}^2_{h^N}e^{-\phi_j}dV_h\lesssim\tau_j(x)^{-2}e^{-\tau_j(x)} < \infty.
\end{align*}
Since $\phi_j\leq\log4$, we have $e^{-\phi_j}\geq\frac{1}{4}$, hence
\begin{align}\label{Hor6}
\norm{T'_j}^2_{L^2(h^N)}\lesssim\norm{T'_j}^2_{L^2(h^Ne^{-\phi_j})}\lesssim\tau_j(x)^{-2}e^{-\tau_j(x)}\rightarrow 0 ~\text{ as }~j \rightarrow \infty.
\end{align}
\textbf{Step 4}: Conclusion. Let $S_j:=T_j-T'_j$.
Then $S_j$ satisfies the following properties for $N \geq N_4$ and $j$ sufficiently large:
\begin{enumerate}[(1)]
\item $\bar{\pa}S_j=\bar{\pa}T_j-\bar{\pa}T'_j=0$. This implies $S_j\in H^0(M_j,L^N)$ and thus $T'_j\in \Ga(M_j,L^N)$.
\item Since $e^{-\phi_j(z)}\sim\left(\rho\circ(p_j|_{B(x,\half\tau_j(x))})^{-1}(z)\right)^{-2n}=dist(z,x_j)^{-2n}$ near $x_j$, $\abs{T'_j}^2_{h^N}e^{-\phi_j}$ is not locally integrable unless $T'_j(x_j)=0$. Therefore $S_j(x_j)=T_j(x_j)-T'_j(x_j)=T_j(x_j)$, which implies that $\abs{S_j(x_j)}^2_{h^N}=\abs{T_j(x_j)}^2_{h^N}=\abs{S_{x}(x)}^2_{h^N}=\abs{\tilde{\Pi}_{L^N}(x, x)}_{h^N}>0$.
\item
\begin{align*} \begin{split}
0<\norm{S_j}_{L^2(h^N)}=&\norm{T_j-T'_j}_{L^2(h^N)}\leq\norm{T_j}_{L^2(h^N)}+\norm{T'_j}_{L^2(h^N)} \\
\leq&\norm{S_{x}}_{L^2(h^N)}+\norm{T'_j}_{L^2(h^N)} \\
=&1+\norm{T'_j}_{L^2(h^N)}.
\end{split} \end{align*}
\end{enumerate}
Define $F_j=\frac{S_j}{\norm{S_j}_{L^2(h^N)}}\in SH^0(M_j,L^N)$. Then, by the extremal property of the Bergman kernel,
\begin{align*}
\abs{\Pi_{j,L^N}\left(p_j(x),p_j(x)\right)}_{h^N}=\abs{\Pi_{j,L^N}(x_j, x_j)}_{h^N}\geq\abs{F_j(x_j)}^2_{h^N} =\frac{\abs{S_j(x_j)}^2_{h^N}}{\norm{S_j}^2_{L^2(h^N)}}\geq\frac{\abs{\tilde{\Pi}_{L^N}(x, x)}_{h^N}}{(1+\norm{T'_j}_{L^2(h^N)})^2}.
\end{align*}
By (\ref{Hor6}), for $N\geq N_4$,
\begin{align*}
\liminf_{j\to\infty}\abs{\Pi_{j,L^N}\left(p_j(x),p_j(x)\right)}_{h^N}\geq\abs{\tilde{\Pi}_{L^N}(x,x)}_{h^N}.
\end{align*}
Hence part (ii) is proved, as $x \in \tilde M$ is arbitrary.
\end{proof}
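The role of $N_4''$ in Step 3 is purely arithmetic: $N\geq\lfloor((C+2)/\be)^2\rfloor+1$ forces $\be\sqrt{N}-C>2$, so the exponent in (\ref{Hor5}) is at most $-\tau_j(x)$. A quick check over a grid of illustrative constants (the actual $\be$ and $C$ depend on $(M,L,h)$ and are not computed here):

```python
import math

def N4pp(beta, C):
    # the arithmetic part of N_4'': floor(((C + 2) / beta)^2) + 1
    return math.floor(((C + 2.0) / beta) ** 2) + 1

for beta in (0.5, 1.0, 2.0):
    for C in (1.0, 5.0, 20.0):
        N4 = N4pp(beta, C)
        for N in range(N4, N4 + 50):
            # beta * sqrt(N) > C + 2 whenever N > ((C + 2) / beta)^2
            assert beta * math.sqrt(N) - C > 2.0
```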
\subsection{Families of $(\varphi,\Gamma)$-modules over affinoids} We will use Tate-Sen theory to associate families of $(\varphi,\Gamma)$-modules to families of Galois representations. Let $A$ be an $E$-affinoid algebra, and let $A\widehat\otimes\B_{(\rig),K}^{\dagger,(s)}$ denote one of the rings $A\widehat\otimes \B_K^\dagger$, $A\widehat\otimes \B_K^{\dagger,s}$, $A\widehat\otimes \B_{\rig,K}^{\dagger,s}$, or $A\widehat\otimes \B_{\rig, K}^{\dagger}$. Similarly, let $A\widehat\otimes\widetilde{\B}^{\dagger,(s)}$ denote one of the rings $A\widehat\otimes\widetilde{\B}^{\dagger}$ or $A\widehat\otimes\widetilde{\B}^{\dagger,s}$. Throughout this subsection, let $s_0=(p-1)/p$ and let $s_n=p^ns_0=p^{n-1}(p-1)$. \begin{definition} A $\varphi$-module over $A\widehat\otimes\B_{(\rig),K}^{\dagger,(s)}$ is a finitely presented projective module $\D^{(s)}$ over $A\widehat\otimes\B_{(\rig),K}^{\dagger,(s)}$ together with a map $\varphi:\D^{(s)}\rightarrow \D^{(ps)}$ which is semilinear over $\varphi:\B_{(\rig),K}^{\dagger,(s)}\rightarrow\B_{(\rig),K}^{\dagger,(ps)}$, such that the linearization $\varphi':\B_{(\rig),K}^{\dagger,(ps)} {}_{\varphi}{\otimes}_{\B_{(\rig),K}^{\dagger,(s)}}\D^{(s)}\rightarrow \D^{(ps)}$ is an isomorphism. A $(\varphi,\Gamma)$-module over $A\widehat\otimes\B_{(\rig),K}^{\dagger,(s)}$ is a $\varphi$-module over $A\widehat\otimes\B_{(\rig),K}^{\dagger,(s)}$ together with a continuous $A$-linear action of $\Gamma_K$ which is semilinear over the action of $\Gamma_K$ on $\B_{(\rig),K}^{\dagger,(s)}$ and commutes with $\varphi$. \end{definition} \begin{remark} A $\varphi$-module $D$ over $A\widehat\otimes\B_{(\rig),K}^{\dagger,s}$ is in particular a finite $A\widehat\otimes\B_{(\rig),K}^{\dagger,s}$-module. It is therefore a finite module over either a Banach algebra or a Fr\'echet-Stein algebra. It follows that $D$ has a unique structure as a Fr\'echet $A\widehat\otimes\B_{(\rig),K}^{\dagger,s}$-module. Thus, we may speak unambiguously of the continuity of any action of $\Gamma_K$. 
\end{remark} \begin{remark} In~\cite{kl}, the authors define a family of $(\varphi,\Gamma)$-modules over $A\widehat\otimes \B_{\rig,K}^{\dagger,s}$, for $s\gg0$, to be a coherent locally free sheaf over the product of the half-open annulus $0<v_p(X)\leq 1/s$ with $\Sp(A)$ in the category of rigid analytic spaces. By Lemma~\ref{fibral-ranks}, this is equivalent to the definition we have given. This equivalence is also proven in~\cite[Proposition 2.2.7]{kpx}, where the authors use the $\varphi$-module structure on a family of $(\varphi,\Gamma)$-modules to prove finite generation of its global sections. \end{remark} The main source of $(\varphi,\Gamma)$-modules is Galois representations; to any family of $p$-adic Galois representations, we can functorially associate a family of $(\varphi,\Gamma)$-modules, and this functor is fully faithful, as we now explain. \begin{definition} Let $X$ be a rigid analytic space over $E$. A family of Galois representations over $X$ is a locally free $\mathscr{O}_X$-module $\mathscr{V}$ of rank $d$ together with an $\mathscr{O}_X$-linear action of $\Gal_K$ which acts continuously on $\Gamma(U,\mathscr{V})$ for every admissible affinoid open $U\subset X$. \end{definition} \begin{remark} It is enough to check continuity on a single admissible affinoid cover $\{U_i\}$ of $X$. For if $U_i=\Sp(A_i)$ is affinoid and $\Gal_K$ acts continuously on $\mathscr{V}(U_i)$, then $\Gal_K$ certainly acts continuously on $\mathscr{V}(W) = \mathscr{V}(U_i)\otimes_{A_i}\mathscr{O}_X(W)$ for any affinoid subdomain $W\subset U_i$. On the other hand, suppose that $\{U_i=\Sp(A_i)\}$ is an admissible affinoid covering of $U=\Sp(A)$, and suppose that $\Gal_K$ acts continuously on $\mathscr{V}(U_i)$. 
Since $$0\rightarrow A\rightarrow \prod_i A_i\rightarrow \prod_{i,j}A_i\widehat\otimes_AA_j$$ is exact, $\mathscr{V}(U)$ inherits its topology from its embedding in $\prod_i\mathscr{V}(U_i)$, and $\GL(\mathscr{V}(U))$ inherits its topology from its embedding in $\prod_i\GL(\mathscr{V}(U_i))$. Therefore, $\Gal_K$ acts continuously on $\mathscr{V}(U)$. \end{remark} Then we have the following theorem (and subsequent refinements, cf.~\cite{kl},~\cite{liu}): \begin{thm}[{\cite{bc}}]\label{phi-gamma-bc} Let $\mathscr{A}$ be a formal $\mathscr{O}_E$-model for $A$, and let $V$ be a free $A$-module of rank $d$ equipped with a continuous $A$-linear action of $\Gal_K$. Suppose that $V$ contains a free $\Gal_K$-stable $\mathscr{A}$-submodule $V_0$ of rank $d$. Then for $s\gg0$, there is a $\varphi$- and $\Gal_K$-stable $A\widehat\otimes\B_K^{\dagger,s}$-submodule $$\D_K^{\dagger,s}(V)\subset \left((A\widehat\otimes\widetilde\B^{\dagger,s})\otimes_AV\right)^{H_K}$$ which is a locally free $A\widehat\otimes\B_K^{\dagger,s}$-module of constant rank $d$ such that the natural map $$(A\widehat\otimes\widetilde\B^{\dagger,s})\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\D_K^{\dagger,s}(V)\rightarrow (A\widehat\otimes\widetilde\B^{\dagger,s})\otimes_AV$$ is an isomorphism. If $\Gal_K$ acts trivially on $V_0/12pV_0$, then $\D_K^{\dagger,s}(V)$ is $A\widehat\otimes\B_K^{\dagger,s}$-free of rank $d$. The formation of $\D_K^{\dagger,s}(V)$ is compatible with base change in $\mathscr{A}$ in the sense that if $A\rightarrow A'$ is a homomorphism of $E$-affinoid algebras induced by a morphism of integral models $\mathscr{A}\rightarrow\mathscr{A}'$, then (possibly at the cost of increasing $s$ so that both sides exist) the natural homomorphism $\D_K^{\dagger,s}(V)\widehat\otimes_AA'\rightarrow\D_K^{\dagger,s}(V\otimes_AA')$ is an isomorphism. \end{thm} \begin{proof} Except for the base change result, this follows directly from~\cite[Proposition 4.2.8]{bc} and \cite[Th\'eor\`eme 4.2.9]{bc}. 
The base change result follows from the construction, as we explain below. Let $L/K$ be a finite extension such that $\Gal_L$ acts trivially on $V_0/12pV_0$. Under the given hypotheses, Theorem~\ref{tate-sen-phi-gamma} implies that there is some integer $n(L,V_0)$ such that for $n\geq n(L,V_0)$, there is a $\varphi$- and $\Gal_L$-stable $\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}$-submodule $$\D_{L,n}^{\dagger,s_0}(V_0)\subset\left((\mathscr{A}\widehat\otimes\widetilde\A^{\dagger,s_0})\otimes_{\mathscr{A}}V_0\right)^{H_L}$$ which is free of rank $d$, has a basis which is almost invariant by $\Gamma_L$ (i.e., it is $c_3$-fixed), and such that the natural morphism $$(\mathscr{A}\widehat\otimes\widetilde\A^{\dagger,s_0})\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\D_{L,n}^{\dagger,s_0}(V_0)\rightarrow (\mathscr{A}\widehat\otimes\widetilde\A^{\dagger,s_0})\otimes_{\mathscr{A}}V_0$$ is an isomorphism. Further, $\D_{L,n}^{\dagger,s_0}(V_0)$ is the unique $\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}$-submodule with these properties. This is the statement of~\cite[Proposition 4.2.8]{bc}. We define $\D_{L,n}^{\dagger,s_0}(V):=\D_{L,n}^{\dagger,s_0}(V_0)[1/p]$. Then for $s\geq s_{n(L,V_0)}$ we further define $$\D_K^{\dagger,s}(V):=\left((A\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_{L}^{\dagger,s_n}}\varphi^n(\D_{L,n}^{\dagger,s_0}(V))\right)^{H_K}$$ and this does not depend on $V_0$ or $L$. By \cite[Th\'eor\`eme 4.2.9]{bc}, $\D_K^{\dagger,s}(V)$ is $A\widehat\otimes\B_K^{\dagger,s}$-locally free of rank $d$. It remains to address functoriality in $\mathscr{A}$. Choose $L/K$ such that $\Gal_L$ acts trivially on $V_0/12pV_0$, choose $n$ sufficiently large for both $V_0$ and $V_0\otimes_{\mathscr{A}}\mathscr{A}'$, and choose $s\geq s_n$. 
By the uniqueness of $\D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}}\mathscr{A}')$, the natural morphism $$\mathscr{A}'\widehat\otimes_{\mathscr{A}}\D_{L,n}^{\dagger,s_0}(V_0)\rightarrow \D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}}\mathscr{A}')$$ is surjective. Since $\mathscr{A}'\widehat\otimes_{\mathscr{A}}\D_{L,n}^{\dagger,s_0}(V_0)$ and $\D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}}\mathscr{A}')$ are locally free $\mathscr{A}'\widehat\otimes \A_{L,n}^{\dagger,s_0}$-modules of the same rank, it is an isomorphism. It follows that the natural morphism $A'\widehat\otimes_A\D_L^{\dagger,s}(V)\rightarrow \D_L^{\dagger,s}(V\otimes_AA')$ is an isomorphism, and therefore that $$\left(A'\widehat\otimes_A\D_L^{\dagger,s}(V)\right)^{H_K}\rightarrow \D_K^{\dagger,s}(V\otimes_AA')$$ is an isomorphism. We need to show that the natural map $A'\widehat\otimes_A\D_L^{\dagger,s}(V)^{H_K}\rightarrow\left(A'\widehat\otimes_A\D_L^{\dagger,s}(V)\right)^{H_K}$ is an isomorphism, and we temporarily set $D=\D_L^{\dagger,s}(V)$ to ease notation. But $H_L\subset H_K$ acts trivially on $A'\widehat\otimes_AD$ and $H_K/H_L$ is finite, so taking $H_K/H_L$-invariants is exact. In fact, taking $H_K/H_L$-invariants and extending scalars from $A'\widehat\otimes\B_K^{\dagger,s}$ to $A'\widehat\otimes\B_L^{\dagger,s}$ are inverse functors between the category of finite projective $A'\widehat\otimes \B_K^{\dagger,s}$-modules and the category of finite projective $A'\widehat\otimes \B_L^{\dagger,s}$-modules with semi-linear $H_K/H_L$-action. 
Since \begin{align*} (A'\widehat\otimes\B_L^{\dagger,s})\otimes_{A'\widehat\otimes\B_K^{\dagger,s}}(A'\widehat\otimes_AD^{H_K})&\cong (A'\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_L^{\dagger,s}}((A\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_K^{\dagger,s}}D^{H_K}) \\ &= (A'\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_L^{\dagger,s}}D \end{align*} and $$(A'\widehat\otimes\B_L^{\dagger,s})\otimes_{A'\widehat\otimes\B_K^{\dagger,s}}\left(A'\widehat\otimes_AD\right)^{H_K}\cong A'\widehat\otimes_AD$$ the result follows. \end{proof} \begin{remark} The construction of families of $(\varphi,\Gamma)$-modules given in \cite[Proposition 4.2.8]{bc} and \cite[Th\'eor\`eme 4.2.9]{bc} in fact only requires $A$ to be a Banach algebra, not an affinoid algebra. \end{remark} \begin{remark} If $V$ admits a $\Gal_K$-stable locally free $\mathscr{A}$-submodule $V_0$ of rank $d$, we may construct $\D_K^{\dagger,s}(V)$ by working on a cover $\{\Spf\mathscr{A}_i\}$ of $\Spf\mathscr{A}$ trivializing $V_0$. Since we know that the formation of $\D_K^{\dagger,s}(V)$ is functorial in maps $\mathscr{A}_i\rightarrow\mathscr{A}_i\widehat\otimes_{\mathscr{O}_E}\mathscr{A}_j$, we can glue the $\D_K^{\dagger,s}(V|_{\mathscr{A}_i[1/p]})$ to get a sheaf of $A\widehat\otimes\B_K^{\dagger,s}$-modules on $\Sp(A)$. By~\cite[Proposition 3.10]{kl}, there is a finite locally free $A\widehat\otimes\B_K^{\dagger,s}$-module $\D_K^{\dagger,s}(V)$ which induces this sheaf. \end{remark} \begin{remark} It is natural to wonder whether the construction of $\D_K^{\dagger,s}(V)$ we have given depends on the formal model $\mathscr{A}$ of $A$ we worked with. It suffices to check independence of the integral model for an admissible formal blowing up $\mathscr{X}'\rightarrow\Spf\mathscr{A}$ with center $\mathscr{I}=(f_0,\ldots f_m)$. 
More precisely, if $\mathscr{V}$ admits both a Galois-stable $\mathscr{A}$-lattice and a Galois-stable $\mathscr{A}'$-lattice, then $\Spf\mathscr{A}$ and $\Spf\mathscr{A}'$ have a common admissible blow-up $\mathscr{X}$, so it suffices to check that the construction we have given yields the same result on the generic fibers of $\mathscr{X}$ and $\Spf\mathscr{A}$. Temporarily let $\D_{K,\mathscr{X}}^{\dagger,s}(V)$ denote the construction using the integral structure $\mathscr{X}$ and $\D_{K,\mathscr{A}}^{\dagger,s}(V)$ denote the construction using the integral structure $\mathscr{A}$. Now $\mathscr{X}$ admits a covering by the formal schemes $$\mathscr{X}_i:=\Spf\mathscr{A}\left\langle \frac{f_0}{f_i},\ldots,\frac{f_m}{f_i}\right\rangle$$ and the morphism $\mathscr{X}_i\rightarrow \Spf\mathscr{A}$ is induced by $\mathscr{A}\rightarrow \mathscr{A}\langle \frac{f_0}{f_i},\ldots,\frac{f_m}{f_i}\rangle$. In other words, $$\D_{K,\mathscr{X}}^{\dagger,s}(V)|_{\Sp(A\langle \frac{f_0}{f_i},\ldots,\frac{f_m}{f_i}\rangle)} = A\left\langle \frac{f_0}{f_i},\ldots,\frac{f_m}{f_i}\right\rangle\widehat\otimes_A\D_{K,\mathscr{A}}^{\dagger,s}(V)$$ It follows that $\D_{K,\mathscr{X}}^{\dagger,s}(V)=\D_{K,\mathscr{A}}^{\dagger,s}(V)$. \end{remark} \subsection{Globalization via sheaves} It is possible to remove the hypothesis that $V$ admits a Galois-stable lattice from this result. For this, we use a lemma of Chenevier (however, we could instead use \cite[Theorem 3.11 and Definition 3.12]{kl}). \begin{lemma}[{\cite[Lemme 3.18]{chenevier}}]\label{chenevier} Let $X$ be a quasi-compact quasi-separated rigid analytic space and let $V$ be a finite locally free $\mathscr{O}_X$-module equipped with a continuous action by a compact topological group $G$. 
Then there is a formal scheme $\mathscr{X}$ over $\mathscr{O}_E$ which is topologically of finite type, and a finite locally free $\mathscr{O}_{\mathscr{X}}$-module $\mathcal{V}$ equipped with a continuous $\mathscr{O}_{\mathscr{X}}$-linear action of $G$, such that $X$ is the generic fiber of $\mathscr{X}$ and $V$ (with its $G$-action) is the generic fiber of $\mathcal{V}$. \end{lemma} Thus, we have the following corollary. \begin{cor} Let $X$ be a quasi-compact quasi-separated rigid analytic space over $E$, and let $\mathscr{V}$ be a finite locally free $\mathscr{O}_X$-module equipped with a continuous $\mathscr{O}_X$-linear action of $\Gal_K$. Then for $s\gg0$, there is a sheaf $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ such that \begin{enumerate} \item $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ is a locally free $\mathscr{B}_{X,K}^{\dagger,s}$-module of rank $d$ \item the natural map $$\widetilde{\mathscr{B}}_{X,K}^{\dagger,s}\otimes_{\mathscr{B}_{X,K}^{\dagger,s}}\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})\rightarrow \widetilde{\mathscr{B}}_{X,K}^{\dagger,s}\otimes_{\mathscr{O}_X}\mathscr{V}$$ is an isomorphism \item the formation of $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ commutes with arbitrary base change on $X$ \item there is some finite Galois extension $L/K$ such that $$\mathscr{D}_{X,L}^{\dagger,s}(\mathscr{V})=\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})\otimes_{\mathscr{B}_{X,K}^{\dagger,s}}\mathscr{B}_{X,L}^{\dagger,s}$$ is $X$-locally free. \end{enumerate} \end{cor} \begin{remark} By a ``sheaf of locally free $\mathscr{B}_{X,K}^{\dagger,s}$-modules'', we simply mean a sheaf of $\mathscr{B}_{X,K}^{\dagger,s}$-modules such that for each affinoid subdomain $U=\Sp(A)\subset X$, $\Gamma(U,\mathscr{D}_{X,K}^{\dagger,s})$ is a locally free $A\widehat\otimes\B_K^{\dagger,s}$-module. \end{remark} \begin{proof} Only the claim about base change requires justification. 
Given a morphism $f:X\rightarrow X'$ of rigid analytic spaces, there exist admissible formal $R$-models $\mathscr{X}$ and $\mathscr{X}'$ for $X$ and $X'$ and a morphism $\varphi:\mathscr{X}\rightarrow\mathscr{X}'$ inducing $f$ on the generic fiber, by Theorem~\ref{formal-models}. Furthermore, we may choose an admissible formal $R$-model $\mathscr{X}_1$ of $X$ such that the Galois representation on $\mathscr{V}$ extends to a finite locally free $\mathscr{O}_{\mathscr{X}_1}$-module $\mathscr{V}_0$. Again by Theorem~\ref{formal-models}, we can find a third formal model $\mathscr{X}_2$ of $X$ together with morphisms $\psi:\mathscr{X}_2\rightarrow\mathscr{X}$, $\psi_1:\mathscr{X}_2\rightarrow \mathscr{X}_1$ which are admissible formal blow-ups. Now $\psi_1^\ast\mathscr{V}_0$ is a finite locally free Galois representation over $\mathscr{X}_2$ whose generic fiber is $\mathscr{V}$, and $\varphi\circ\psi_1$ does not in general make sense, but $\varphi\circ\psi:\mathscr{X}_2\rightarrow \mathscr{X}'$ induces $f:X\rightarrow X'$ on the generic fiber. Thus, functoriality of $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ follows from functoriality in the integral model shown in Theorem~\ref{phi-gamma-bc}. \end{proof} \begin{remark} We do not know whether there is an intrinsic characterization of $\D_K^{\dagger,s}(V)$ as a submodule of $(A\widehat\otimes\widetilde{\B}^{\dagger,s})\otimes_AV$, or an intrinsic characterization of $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ as a subsheaf of $\widetilde{\mathscr{B}}\otimes_{\mathscr{O}_X}\mathscr{V}$, as opposed to the definition by construction we have given. Such a characterization could potentially simplify a number of proofs. \end{remark} \begin{remark} Liu~\cite[Proposition 1.1.1]{liu} has shown that the linearization of Frobenius $\varphi':\varphi^\ast\mathscr{D}_{K}^{\dagger,s}\rightarrow \mathscr{D}_K^{\dagger,ps}$ is an isomorphism. This completes the proof that $\mathscr{D}_K^{\dagger,s}$ is a $(\varphi,\Gamma)$-module. 
\end{remark} \subsection{Functorial properties of $\D_K^{\dagger,s}(V)$} We can deduce that the assignment $V\mapsto \D_K^{\dagger,s}(V)$ is functorial in $V$, as well as exact and compatible with direct sums and tensor products. This is stated in several places in the literature but not proved, to the best of our knowledge, so we now address it. \begin{prop} Let $X$ be a quasi-compact quasi-separated rigid analytic space over $E$, and let $\mathscr{V}$ and $\mathscr{V}'$ be finite locally free $\mathscr{O}_X$-modules of ranks $d$ and $d'$, respectively, equipped with continuous $\mathscr{O}_X$-linear actions of $\Gal_K$. Then for $s\gg0$ \begin{enumerate} \item $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}\oplus\mathscr{V}') =\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})\oplus \mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}')$ \item $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}\otimes_{\mathscr{O}_X}\mathscr{V}') =\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})\otimes_{\mathscr{B}_{X,K}^{\dagger,s}} \mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}')$ \item\label{d-dag-hom} $\mathscr{D}_{X,K}^{\dagger,s}(\calHom_{\mathscr{O}_X}(\mathscr{V},\mathscr{V}')) = \calHom_{\mathscr{B}_{X,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))$ \end{enumerate} \end{prop} \begin{remark} Strictly speaking, $$\calHom_{\mathscr{B}_{X,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))(U)=\Hom_{\mathscr{B}_{U,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})|_U,\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}')|_U)$$ where $U=\Sp(A)\subset X$ is an affinoid subdomain. 
But $\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})$ is a coherent sheaf of $\mathscr{B}_{X,K}^{\dagger,s}$-modules, in the sense of~\cite[Definition 3.4]{kl}, so $$\Hom_{\mathscr{B}_{U,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})|_U,\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}')|_U) = \Hom_{A\widehat\otimes\B_K^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V})(U),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}')(U))$$ \end{remark} \begin{proof} We may reduce to the case where $X=\Sp(A)$, and $V:=\Gamma(X,\mathscr{V})$ and $V':=\Gamma(X,\mathscr{V}')$ admit free $\Gal_K$-stable $\mathscr{A}$-submodules $V_0$ and $V_0'$ of ranks $d$ and $d'$, respectively, for some formal $\mathscr{O}_E$-model $\mathscr{A}$ of $A$. Let $L/K$ be a finite Galois extension such that $\Gal_L$ acts trivially on $V_0/12pV_0$ and $V_0'/12pV_0'$, and let $n$ be sufficiently large for both $V_0$ and $V_0'$. \begin{enumerate} \item We first compare $\D_{L,n}^{\dagger,s_0}(V_0\oplus V_0')$ and $\D_{L,n}^{\dagger,s_0}(V_0)\oplus\D_{L,n}^{\dagger,s_0}(V_0')$. Both are free $\Gal_K$-stable $\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}$-submodules of $$(A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_A(V_0\oplus V_0')=\left((A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_AV_0\right) \oplus \left((A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_AV_0'\right)$$ with rank $d+d'$ and trivial $H_L$-action. Moreover, if $$\{\ve{e}_1,\ldots,\ve{e}_d\}\text{ and }\{\ve{e}_1',\ldots,\ve{e}_{d'}'\}$$ are almost-invariant bases of $\D_{L,n}^{\dagger,s_0}(V_0)$ and $\D_{L,n}^{\dagger,s_0}(V_0')$, respectively, then $$\{\ve{e}_1,\ldots,\ve{e}_d,\ve{e}_1',\ldots,\ve{e}_{d'}'\}$$ is clearly an almost-invariant basis of $\D_{L,n}^{\dagger,s_0}(V_0)\oplus\D_{L,n}^{\dagger,s_0}(V_0')$. 
Therefore, by the uniqueness of $\D_{L,n}^{\dagger,s_0}(V_0\oplus V_0')$, $$\D_{L,n}^{\dagger,s_0}(V_0\oplus V_0')=\D_{L,n}^{\dagger,s_0}(V_0)\oplus\D_{L,n}^{\dagger,s_0}(V_0')$$ as submodules of $(A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_A(V_0\oplus V_0')$. Inverting $p$, it is clear that $\D_{L,n}^{\dagger,s_0}(V\oplus V')=\D_{L,n}^{\dagger,s_0}(V)\oplus\D_{L,n}^{\dagger,s_0}(V')$. Moreover, applying $\varphi^n$, extending scalars from $A\widehat\otimes\B_{L}^{\dagger,s_n}$ to $A\widehat\otimes\B_L^{\dagger,s}$, and taking $H_K$-invariants all commute with direct sums, so $\D_K^{\dagger,s}(V\oplus V')=\D_K^{\dagger,s}(V)\oplus\D_K^{\dagger,s}(V')$ for $s$ sufficiently large, as desired. \item We first compare $\D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}} V_0')$ and $\D_{L,n}^{\dagger,s_0}(V_0)\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\D_{L,n}^{\dagger,s_0}(V_0')$. Both are free $\Gal_K$-stable $\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}$-submodules of $$(A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_A(V_0\otimes_{\mathscr{A}}V_0')=\left((A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_{\mathscr{A}}V_0\right) \otimes_{A\widehat\otimes\widetilde{\A}^{\dagger,s_0}} \left((A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_{\mathscr{A}}V_0'\right)$$ with rank $d\cdot d'$ and trivial $H_L$-action. Moreover, if $$\{\ve{e}_1,\ldots,\ve{e}_d\}\text{ and }\{\ve{e}_1',\ldots,\ve{e}_{d'}'\}$$ are almost-invariant bases of $\D_{L,n}^{\dagger,s_0}(V_0)$ and $\D_{L,n}^{\dagger,s_0}(V_0')$, respectively, then $$\{\ve{e}_1\otimes\ve{e}_1',\ldots,\ve{e}_d\otimes\ve{e}_1',\ldots,\ve{e}_d\otimes\ve{e}_{d'}'\}$$ is clearly an almost-invariant basis of $\D_{L,n}^{\dagger,s_0}(V_0)\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\D_{L,n}^{\dagger,s_0}(V_0')$. 
Therefore, by the uniqueness of $\D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}} V_0')$, $$\D_{L,n}^{\dagger,s_0}(V_0\otimes_{\mathscr{A}} V_0')=\D_{L,n}^{\dagger,s_0}(V_0)\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\D_{L,n}^{\dagger,s_0}(V_0')$$ as submodules of $(A\widehat\otimes\widetilde{\A}^{\dagger,s_0})\otimes_{\mathscr{A}}(V_0\otimes_{\mathscr{A}} V_0')$. Furthermore, it is clear that inverting $p$, applying $\varphi^n$, and extending scalars from $A\widehat\otimes\B_L^{\dagger,s_n}$ to $A\widehat\otimes\B_L^{\dagger,s}$ are all compatible with tensor products. Thus, we have $\D_L^{\dagger,s}(V\otimes_AV')=\D_L^{\dagger,s}(V)\otimes_{A\widehat\otimes\B_L^{\dagger,s}}\D_L^{\dagger,s}(V')$ and it remains to show that $\D_L^{\dagger,s}(V\otimes_AV')^{H_K}=\D_L^{\dagger,s}(V)^{H_K}\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\D_L^{\dagger,s}(V')^{H_K}$. But $H_L$ acts trivially on $\D_L^{\dagger,s}(V)$ and $\D_L^{\dagger,s}(V')$, and $H_K/H_L$ is a finite group. Thus, taking $H_K/H_L$-invariants and $(\cdot)\otimes_{A\widehat\otimes\B_K^{\dagger,s}}A\widehat\otimes\B_L^{\dagger,s}$ are inverse functors between the category of finite projective $A\widehat\otimes\B_K^{\dagger,s}$-modules and the category of finite projective $A\widehat\otimes\B_L^{\dagger,s}$-modules with semi-linear $H_K/H_L$-action. Since $$(A\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\left(\D_K^{\dagger,s}(V)\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\D_K^{\dagger,s}(V')\right) \cong \D_L^{\dagger,s}(V)\otimes_{A\widehat\otimes\B_L^{\dagger,s}}\D_L^{\dagger,s}(V')$$ the result follows. \item We first compare $\D_{L,n}^{\dagger,s_0}(\Hom_{\mathscr{A}}(V_0,V'_0))$ and $\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$. 
Both are $\Gal_K$-stable $\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}$-submodules of $$\widetilde{\A}^{\dagger,s_0}\otimes_A\Hom_\mathscr{A}(V_0,V'_0) = \Hom_{\mathscr{A}\widehat\otimes\A^{\dagger,s_0}}((\mathscr{A}\widehat\otimes\A^{\dagger,s_0})\otimes_\mathscr{A}V_0,(\mathscr{A}\widehat\otimes\A^{\dagger,s_0})\otimes_{\mathscr{A}}V'_0)$$ with rank $d\cdot d'$ and trivial $H_L$-action. We choose bases $\{\ve{e}_1,\ldots,\ve{e}_d\}$ and $\{\ve{e}'_1,\ldots,\ve{e}'_{d'}\}$ of $\D_{L,n}^{\dagger,s_0}(V_0)$ and $\D_{L,n}^{\dagger,s_0}(V'_0)$, respectively, which are almost-invariant, and we let $\ve{f}_{ij}\in\Mat_{d'\times d}(\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0})$ be the matrix (with respect to the bases $\{\ve{e}_i\}$, $\{\ve{e}'_j\}$) with a $1$ in the $i$th row and $j$th column and zeroes elsewhere. Then we claim that $\{\ve{f}_{ij}\}$ is an almost-invariant basis of $\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$. The group $\Gamma_{L_n}$ acts on $\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$ via $f\mapsto \gamma\circ f\circ\gamma^{-1}$ for any $\gamma\in\Gamma_{L_n}$; we need to compute the matrix of this action with respect to $\{\ve{e}_1,\ldots,\ve{e}_d\}$ and $\{\ve{e}'_1,\ldots,\ve{e}'_{d'}\}$. Choose a topological generator $\gamma\in\Gamma_{L_n}$, and let $U_1$ and $U_2$ be the matrices representing the actions of $\gamma^{-1}$ and $\gamma$ on $\D_{L,n}^{\dagger,s_0}(V_0)$ and $\D_{L,n}^{\dagger,s_0}(V'_0)$, respectively (with respect to the bases $\{\ve{e}_1,\ldots,\ve{e}_d\}$ and $\{\ve{e}'_1,\ldots,\ve{e}'_{d'}\}$). Let $M\in\Mat_{d'\times d}(\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0})$ be a matrix representing an element of $\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$. 
Then the action of $\gamma-1$ on $M$ is given by \begin{align*} (\gamma-1)\cdot M &= U_2\gamma(MU_1)-M = (U_2-1)\gamma(MU_1)+\gamma(MU_1)-M \\ &= (U_2-1)\gamma(MU_1)+\gamma(M(U_1-1))+\gamma(M)-M \\ &= (U_2-1)\gamma(MU_1)+\gamma(M(U_1-1))+(\gamma-1)(M) \end{align*} Since $U_2-1$ and $U_1-1$ are small, and the coefficients of $(\gamma-1)(M)$ are small, almost invariance follows. By uniqueness of $\D_{L,n}^{\dagger,s_0}(\Hom_{\mathscr{A}}(V_0,V'_0))$, it follows that $$\D_{L,n}^{\dagger,s_0}(\Hom_{\mathscr{A}}(V_0,V'_0))=\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$$ as submodules of $\widetilde{\A}^{\dagger,s_0}\otimes_A\Hom_\mathscr{A}(V_0,V'_0)$. It is clear that $$\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))[1/p] = \D_{L,n}^{\dagger,s_0}(\Hom_A(V,V'))$$ since $\Hom_{\mathscr{A}}(V_0,V'_0)$ is an integral model for $\Hom_A(V,V')$. Next, we need to pass from $\D_{L,n}^{\dagger,s_0}$ to $\D_K^{\dagger,s}$, for $s\gg0$. First of all, we claim that \begin{equation*}\begin{split}(\ad\varphi)^n\left(\Hom_{A\widehat\otimes\B_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V),\D_{L,n}^{\dagger,s_0}(V'))\right)& \\ =\Hom_{A\widehat\otimes\B_L^{\dagger,s_n}}&\left(\varphi^n(\D_{L,n}^{\dagger,s_0}(V)),{\varphi'}^n(\D_{L,n}^{\dagger,s_0}(V'))\right)\end{split}\end{equation*} where $\ad\varphi$ is the Frobenius on $\Hom_{A\widehat\otimes\B_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V),\D_{L,n}^{\dagger,s_0}(V'))$. Recall that $\varphi$ is bijective on $A\widehat\otimes\widetilde{\B}^\dagger$, so that an inverse to $\varphi^n$ exists on $\varphi^n(\D_{L,n}^{\dagger,s_0}(V))$, and an inverse to ${\varphi'}^n$ exists on ${\varphi'}^n(\D_{L,n}^{\dagger,s_0}(V'))$. 
The action of $(\ad\varphi)^n$ on the left sends a homomorphism $f:\D_{L,n}^{\dagger,s_0}(V)\rightarrow\D_{L,n}^{\dagger,s_0}(V')$ to ${\varphi'}^n\circ f\circ\varphi^{-n}:\varphi^n(\D_{L,n}^{\dagger,s_0}(V))\rightarrow{\varphi'}^n(\D_{L,n}^{\dagger,s_0}(V'))$. Conversely, given a map $g:\varphi^n(\D_{L,n}^{\dagger,s_0}(V))\rightarrow{\varphi'}^n(\D_{L,n}^{\dagger,s_0}(V'))$ we may define $f:={\varphi'}^{-n}\circ g\circ \varphi^n$; then $g=(\ad\varphi)^n(f)$. Now we have $\D_L^{\dagger,s_n}(\Hom_A(V,V'))=\Hom_{A\widehat\otimes\B_L^{\dagger,s_n}}(\D_L^{\dagger,s_n}(V),\D_L^{\dagger,s_n}(V'))$ and we claim that for any $s\geq s_n$, the natural map \begin{equation*}\begin{split}(A\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_L^{\dagger,s_n}}\Hom_{A\widehat\otimes\B_L^{\dagger,s_n}}&(\D_L^{\dagger,s_n}(V),\D_L^{\dagger,s_n}(V')) \\ &\rightarrow \Hom_{A\widehat\otimes\B_L^{\dagger,s}}(\D_L^{\dagger,s}(V),\D_L^{\dagger,s}(V'))\end{split}\end{equation*} is an isomorphism. But if we choose bases of $\D_L^{\dagger,s_n}(V)$ and $\D_L^{\dagger,s_n}(V')$, we see that both sides are naturally identified with $\Mat_{d'\times d}(A\widehat\otimes\B_L^{\dagger,s})$. It remains to see that $$\Hom_{A\widehat\otimes\B_L^{\dagger,s}}(\D_L^{\dagger,s}(V),\D_L^{\dagger,s}(V'))^{H_K} = \Hom_{A\widehat\otimes\B_K^{\dagger,s}}(\D_L^{\dagger,s}(V)^{H_K},\D_L^{\dagger,s}(V')^{H_K})$$ But both sides are fixed under the action of $H_L\subset H_K$, and $H_K/H_L$ is a finite group. Moreover, $A\widehat\otimes\B_K^{\dagger,s}=(A\widehat\otimes\B_L^{\dagger,s})^{H_K/H_L}$. Therefore, taking $H_K$-invariants and extending scalars from $A\widehat\otimes\B_K^{\dagger,s}$ to $A\widehat\otimes\B_L^{\dagger,s}$ are inverse functors between the categories of finite $A\widehat\otimes\B_L^{\dagger,s}$-modules with semi-linear $H_K/H_L$-action and finite $A\widehat\otimes\B_K^{\dagger,s}$-modules. 
\end{enumerate} \end{proof} \begin{cor} Let $\mathscr{V}$, $\mathscr{V}'$ be families of $\Gal_K$-representations over $X$. Then $$\Hom_{\mathscr{O}_X[\Gal_K]}(\mathscr{V},\mathscr{V}')=\Hom_{\mathscr{B}_{X,K}^{\dagger,s}[\varphi,\Gamma_K]}(\mathscr{D}_K^{\dagger,s}(\mathscr{V}),\mathscr{D}_K^{\dagger,s}(\mathscr{V}'))$$ for $s\gg0$. In particular, $\mathscr{V}\mapsto\mathscr{D}_K^{\dagger,s}(\mathscr{V})$ is a fully faithful functor. \end{cor} \begin{proof} From Proposition~\ref{d-dag-hom}, $$\mathscr{D}_{X,K}^{\dagger,s}(\calHom_{\mathscr{O}_X}(\mathscr{V},\mathscr{V}')) = \calHom_{\mathscr{B}_{X,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))$$ There is therefore a natural isomorphism $$\widetilde{\mathscr{B}}_X^{\dagger,s}\otimes_{\mathscr{B}_{X,K}^{\dagger,s}}\calHom_{\mathscr{B}_{X,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))\xrightarrow{\sim} \widetilde{\mathscr{B}}_X^{\dagger,s}\otimes_{\mathscr{O}_X}\calHom_{\mathscr{O}_X}(\mathscr{V},\mathscr{V}')$$ Taking $H_K$-invariants of both sides, we have a natural isomorphism $$\widetilde{\mathscr{B}}_{X,K}^{\dagger,s}\otimes_{\mathscr{B}_{X,K}^{\dagger,s}}\calHom_{\mathscr{B}_{X,K}^{\dagger,s}}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))\xrightarrow{\sim} \left(\widetilde{\mathscr{B}}_X^{\dagger,s}\otimes_{\mathscr{O}_X}\calHom_{\mathscr{O}_X}(\mathscr{V},\mathscr{V}')\right)^{H_K}$$ Now we take $\varphi$- and $\Gamma_K$-invariants of both sides. On the right side, since $\varphi$ acts trivially on $\calHom_{\mathscr{O}_X}(\mathscr{V},\mathscr{V}')$, and $\widetilde{\B}^{\varphi=1}=\Q_p$, we obtain $\calHom_{\mathscr{O}_X[\Gal_K]}(\mathscr{V},\mathscr{V}')$. 
On the left side, the $\varphi$- and $\Gamma_K$-invariant submodule certainly contains the $\mathscr{O}_X$-module of $\varphi$- and $\Gamma_K$-equivariant homomorphisms $$\calHom_{\mathscr{B}_{X,K}^{\dagger,s}[\varphi,\Gamma_K]}(\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}),\mathscr{D}_{X,K}^{\dagger,s}(\mathscr{V}'))$$ We claim that this is everything. To see this, we may pass to an admissible affinoid cover of $X$, so we may assume that $X=\Sp(A)$ and $V:=\Gamma(X,\mathscr{V})$ and $V':=\Gamma(X,\mathscr{V}')$ are free $A$-modules, admitting $\Gal_K$-stable free $\mathscr{A}$-lattices for some formal $\mathscr{O}_E$-model $\mathscr{A}$ of $A$. Thus, it suffices to show that $$\left((A\widehat\otimes\widetilde{\B}_L^{\dagger,s})\otimes_{A\widehat\otimes\B_L^{\dagger,s}} \Hom_{A\widehat\otimes\B_L^{\dagger,s}}(\D_L^{\dagger,s}(V),\D_L^{\dagger,s}(V'))\right)^{\varphi,\Gamma_L}\!\! = \Hom_{A\widehat\otimes\B_L^{\dagger,s}[\varphi,\Gamma_L]}(\D_L^{\dagger,s}(V),\D_L^{\dagger,s}(V'))$$ We first consider the $\Gamma_L$-invariants of $$(\mathscr{A}\widehat\otimes\widetilde{\A}_L^{\dagger,s_0})\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$$ The $\Gamma_{L_n}$-invariant submodule certainly contains $\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}[\Gamma_L]}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$. To see that this is everything, suppose that $$\ve{x}:=\sum_{ij}x_{ij}\ve{f}_{ij}\in (\mathscr{A}\widehat\otimes\widetilde{\A}_L^{\dagger,s_0})\otimes_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$$ is fixed by $\Gamma_{L_n}$, where $\ve{f}_{ij}$ is the almost-invariant basis we used earlier. Then if $U_\gamma$ is the matrix of $\gamma\in\Gamma_{L_n}$ with respect to this basis, we have $U_\gamma\gamma(\ve{x})=\ve{x}$. 
We may then apply~\cite[Lemme 3.2.5]{bc} with $V_1=U_\gamma^{-1}$ and $V_2=1$ to conclude that $\ve{x}\in \Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))$. Now $\varphi$ and $\Gamma_L$ commute, so $$\left(\Hom_{\mathscr{A}\widehat\otimes\A_{L,n}^{\dagger,s_0}}(\D_{L,n}^{\dagger,s_0}(V_0),\D_{L,n}^{\dagger,s_0}(V'_0))\right)^{\Gamma_{L}}=\left(\Hom_{\mathscr{A}\widehat\otimes\widetilde{\A}_L^{\dagger,s_n}}(\D_L^{\dagger,s_n}(V_0),\D_L^{\dagger,s_n}(V'_0))\right)^{\Gamma_L}$$ Inverting $p$, it follows that $\Hom_{A[\Gal_L]}(V,V')=\Hom_{A\widehat\otimes\B_L^{\dagger,s_n}[\varphi,\Gamma_L]}(\D_L^{\dagger,s_n}(V),\D_L^{\dagger,s_n}(V'))$ and since $\D_L^{\dagger,s}(V) = (A\widehat\otimes\B_L^{\dagger,s})\otimes_{A\widehat\otimes\B_L^{\dagger,s_n}}\D_L^{\dagger,s_n}(V)$, we are done. \end{proof} We define \begin{align*} \D_{\rig,K}^{\dagger,s}(V)&:=(A\widehat\otimes\B_{\rig,K}^{\dagger,s})\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\D_K^{\dagger,s}(V) \\ \D_{\rig,K}^\dagger(V)&:=\varinjlim_s\D_{\rig,K}^{\dagger,s}(V) \\ \widetilde{\D}^{\dagger,s}(V)&:=(A\widehat\otimes\widetilde{\B}^{\dagger,s})\otimes_{A\widehat\otimes\B_K^{\dagger,s}}\D_K^{\dagger,s}(V) \\ \widetilde{\D}^\dagger(V)&:=\varinjlim_s\widetilde{\D}^{\dagger,s}(V) \end{align*} and we let $\mathscr{D}_{X,\rig,K}^{\dagger,(s)}(\mathscr{V})$ and $\widetilde{\mathscr{D}}_X^{\dagger,(s)}(\mathscr{V})$ denote the corresponding sheaves. If the base is clear, we suppress the subscript $X$. \subsection{$\D_{\Sen}(V)$ and $\D_{\dif}(V)$} We pause to briefly discuss the objects we have constructed. For simplicity, we temporarily assume that $A=\Q_p$. Given a Galois representation $V$ of dimension $d$, we have constructed a module over $\B_{\rig,K}^{\dagger}$ of rank $d$, equipped with a semilinear Frobenius and a semilinear action of $\Gamma_K$. 
There is some $s$ so that these structures descend to $\B_{\rig,K}^{\dagger,s}$, which is (non-canonically) the ring of analytic functions on the half-open annulus $0<v_p(X)\leq 1/(e_Ks)$; we think of $p^{-1/(e_Ks(V))}$ as the minimal inner radius of an annulus to which everything descends. Consider the analytic function $\log(1+X)\in\B_{\rig,K}^{\dagger,s}$. It has infinitely many zeroes, at the points $X=\zeta_{p^n}-1$, which accumulate towards the boundary of the unit disk. For a given $s$, we think of $n(s)$ as the minimal $n$ so that $X=\zeta_{p^n}-1$ lies in the annulus $0<v_p(X)\leq 1/(e_Ks)$. Returning to our general setup, we use $(\varphi,\Gamma)$-modules to construct modules $\D_{\Sen}(V)$ and $\D_{\dif}(V)$, which will be useful for our study of Hodge-Tate and de Rham representations. Recall that there is a family of injections $i_n:\B_K^{\dagger,s}\rightarrow K_n[\![t]\!]$ for every $n\geq n(s)$, which extend to injections $i_n:\B_{\rig,K}^{\dagger,s}\rightarrow K_n[\![t]\!]$. Each $i_n$ is defined as the composition $$\B_K^{\dagger,s_n}\subset \widetilde{\B}^{\dagger,s_n}\xrightarrow{\varphi^{-n}}\widetilde{\B}^{\dagger,s_0}\rightarrow\B_{\dR}^+$$ where the last map sends $\sum p^k[x_k]$ (viewed as an element of $\widetilde{\B}^+$) to its image in $\B_{\dR}^+$, and it factors through $K_n[\![t]\!]$. \begin{definition} Let $X$ be a quasi-compact quasi-separated rigid analytic space and let $\mathscr{V}$ be a locally free $\mathscr{O}_X$-module of rank $d$ equipped with a continuous $\mathscr{O}_X$-linear action of $\Gal_K$. Then by the preceding discussion, there is a finite extension $L/K$ such that $\mathscr{D}_{\rig,L}^{\dagger,s}(\mathscr{V})$ is $X$-locally free. \begin{enumerate} \item For any $n\geq n(s)$, we put $\mathscr{D}_{\Sen}^{L_n}(\mathscr{V}):=\mathscr{D}_{L}^{\dagger,s}(\mathscr{V})\otimes_{\mathscr{B}_{L}^{\dagger,s}}^{i_n}(\mathscr{O}_X\otimes_{\Q_p}L_n)$. 
Then $\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})$ is an $X$-locally free $\mathscr{O}_X\otimes L_n$-module of rank $d$ with a linear action of $\Gamma_{L_n}$. \item For any $n\geq n(s)$, we put $\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V}):=\mathscr{D}_{L}^{\dagger,s}(\mathscr{V})\otimes_{\mathscr{B}_{L}^{\dagger,s}}^{i_n}(\mathscr{O}_X\widehat\otimes_{\Q_p}L_n[\![t]\!])$, and we define $\mathscr{D}_{\dif}^{L_n}(\mathscr{V}):=\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})[1/t]$. Then $\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})$ is an $X$-locally free $\mathscr{O}_X\widehat\otimes_{\Q_p} L_n[\![t]\!]$-module of rank $d$ with a semi-linear action of $\Gamma_{L_n}$, where $L_n[\![t]\!]$ is equipped with its natural Fr\'echet topology. Here $\Gamma_{L_n}$ acts trivially on $L_n$, but acts on $t$ via $\gamma\cdot t=\chi(\gamma)t$. \end{enumerate} \end{definition} \begin{remark} Both $\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})$ and $\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})$ actually have semi-linear actions of all of $\Gal_K$, ultimately by $\Gal_K$-stability of $\D_{L,n}^{\dagger,s_0}(V)$ inside $(A\widehat\otimes\widetilde{\B}^{\dagger,s_0})\otimes_AV$. We define $\mathscr{D}_{\Sen}^{K_n}(\mathscr{V}):=\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})^{H_K}$ and $\mathscr{D}_{\dif}^{K_n,+}(\mathscr{V}):=\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})^{H_K}$. \end{remark} \begin{remark} If $A$ is a general $\Q_p$-Banach algebra with valuation ring $\mathscr{A}$, $V_0$ is a free $\mathscr{A}$-module of rank $d$ equipped with a continuous $\mathscr{A}$-linear action of $\Gal_K$, and $V:=V_0[1/p]$, then we may similarly define $\D_{\Sen}^{L_n}(V):=\D_{L}^{\dagger,s}(V)\otimes_{A\widehat\otimes\B_{L}^{\dagger,s}}^{i_n}(A\otimes_{\Q_p}L_n)$ and $\D_{\dif}^{L_n,+}(V):=\D_{L}^{\dagger,s}(V)\otimes_{A\widehat\otimes\B_{L}^{\dagger,s}}^{i_n}(A\widehat\otimes_{\Q_p}L_n[\![t]\!])$. 
\end{remark} \begin{remark} It is also possible to construct $\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})$ directly by means of Tate-Sen theory applied to semi-linear representations of $\Gal_K$ on finite $X$-locally free $\mathscr{O}_X\widehat\otimes\C_K$-modules. We exploit this point of view in the proof of Theorem~\ref{dsen-dht}. \end{remark} \begin{prop} \begin{enumerate} \item $\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})$ is an $X$-locally free $\mathscr{O}_X\otimes L_n$-module of rank $d$, and we have a Galois-equivariant isomorphism $$\C_K\widehat\otimes_{L_n}\mathscr{D}_{\Sen}^{L_n}(\mathscr{V})\rightarrow \C_K\widehat\otimes_{\Q_p}\mathscr{V}$$ \item $\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})$ is an $X$-locally free $\mathscr{O}_X\widehat\otimes L_n[\![t]\!]$-module of rank $d$, and we have a Galois-equivariant isomorphism $$(\mathscr{O}_X\widehat\otimes\B_{\dR}^+)\otimes_{\mathscr{O}_X\widehat\otimes L_n[\![t]\!]}\mathscr{D}_{\dif}^{L_n,+}(\mathscr{V})\rightarrow (\mathscr{O}_X\widehat\otimes\B_{\dR}^+)\otimes_{\mathscr{O}_X}\mathscr{V}$$ which respects the filtrations on each side. \end{enumerate} \end{prop} \begin{proof} For both of these, the starting point is the isomorphism $$\widetilde{\mathscr{B}}^{\dagger,s}\otimes_{\mathscr{B}_{L}^{\dagger,s}}\mathscr{D}_{L}^{\dagger,s}(\mathscr{V})\rightarrow \widetilde{\mathscr{B}}^{\dagger,s}\otimes_{\mathscr{O}_X}\mathscr{V}$$ The composition $$\B_L^{\dagger,s}\xrightarrow{i_n}L_n[\![t]\!]\rightarrow \B_{\dR}^+$$ is the same as the composition $$\B_L^{\dagger,s}\subset \widetilde{\B}^{\dagger,s}\xrightarrow{\varphi^{-n}}\widetilde{\B}^{\dagger,p^{-n}s}\rightarrow\B_{\dR}^+$$ by definition, so extending scalars on each side from $\widetilde{\mathscr{B}}^{\dagger,s}$ to $\mathscr{B}_{X,\dR}^+$ or $\mathscr{O}_X\widehat\otimes_{\Q_p}\C_K$ gives the desired result. \end{proof}
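The threshold $n(s)$ discussed earlier can be made concrete by a standard valuation computation (stated here only for orientation; the normalization of $v_p$ is assumed to be the one used above, with $v_p(p)=1$). Since $\zeta_{p^n}-1$ is a root of the cyclotomic polynomial $\Phi_{p^n}$ and $\Phi_{p^n}(1)=p$, each of its $\varphi(p^n)=p^{n-1}(p-1)$ conjugates has the same valuation, namely

```latex
v_p\bigl(\zeta_{p^n}-1\bigr)
  = \frac{1}{\varphi(p^n)}
  = \frac{1}{p^{n-1}(p-1)},
\qquad\text{so}\qquad
n(s) = \min\bigl\{\, n \;:\; p^{n-1}(p-1) \geq e_K s \,\bigr\},
```

the least $n$ for which this zero of $\log(1+X)$ lies in the annulus $0<v_p(X)\leq 1/(e_Ks)$.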
TITLE: Image of linear transformation QUESTION [0 upvotes]: I was trying to do this exercise and I don't know if what I did is okay or not. Let $T:P_2[\mathbb R] \to \mathbb R^4$ be a linear transformation and let $B=\{1 + x^2, x, 2x + 1\}$ and $$B'= \{(0,0,1,3), (0,1,1,0), (1,1,2,1), (1,0,0,2)\}$$ be bases of $P_2[\mathbb R]$ and $\mathbb R^4$, respectively. If the matrix representing $T$ is $M =\left[\matrix{1&3&2\cr 0&1&1\cr 1&2&1\cr 2&4&2\cr}\right]$: a) Find bases of $\ker (T)$ and ${\rm Im} (T)$, and check the dimension theorem. For part a), I first renamed the bases $B=\{u_0,u_1,u_2\}$ and $B'=\{v_1,v_2,v_3,v_4\}$. So: $$\eqalign{T(u_0) &= v_1 + v_3 + 2v_4\cr T(u_1) &= 3v_1 + v_2 + 2v_3 + 4v_4\cr T(u_2) &= 2v_1 + v_2 + v_3 + 2v_4\cr}$$ Then I row reduced the given matrix to get the kernel: $$\left[\matrix{1&0&-1\cr 0&1&1\cr 0&0&0\cr 0&0&0\cr}\right] \left[\matrix{a\cr b\cr c\cr}\right] = \left[\matrix{0\cr 0\cr 0\cr0\cr}\right]$$ $a - c =0$ and $b + c=0$, so $a=c$, $b=-c$, and a basis is $\left\{\pmatrix{1\cr -1\cr 1\cr}\right\}$. Then, for the image I did this: $$\eqalign{ a + 3b + 2c &= w\cr b + c &= x\cr a + 2b + c &= y\cr 2a + 4b + 2c &= z\cr}$$ After Gaussian elimination I finally reached $w=x +y$, and I don't know what to do now. REPLY [1 votes]: You can do this with just Gaussian elimination: $$\left[\begin{array}{ccc|c} 1&3&2&w\\ 0&1&1&x\\ 1&2&1&y\\ 2&4&2&z\end{array}\right] \to \left[\begin{array} {ccc|c} 1&3&2&w\\ 0&1&1&x\\ 0&-1&-1&y-w\\ 0&-2&-2&z-2w\end{array}\right] \to \left[\begin{array} {ccc|c} 1&3&2&w\\ 0&1&1&x\\ 0&0&0&y-w+x\\ 0&0&0&z-2w+2x\end{array}\right]$$ You have two rows whose coefficient part is entirely zero. In order to guarantee that there is a solution, you must have $y-w+x=0$ and $z-2w+2x=0$. Any vector that meets those conditions will be in the column space.
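As a sanity check of the computation above, here is a short sympy script (the matrix is the $M$ from the question; the variable names are mine):

```python
from sympy import Matrix

# The matrix of T with respect to the bases B and B', as given in the question.
M = Matrix([[1, 3, 2],
            [0, 1, 1],
            [1, 2, 1],
            [2, 4, 2]])

# Kernel: solutions of M * (a, b, c)^T = 0.  Row reduction gives a = c, b = -c,
# so the kernel is spanned by the coordinate vector (1, -1, 1).
kernel = M.nullspace()
assert len(kernel) == 1 and kernel[0] == Matrix([1, -1, 1])

# Image: the column space of M is 2-dimensional (spanned by the pivot columns).
image = M.columnspace()
assert M.rank() == 2

# Rank-nullity: dim P_2[R] = 3 = rank(T) + dim ker(T).
assert M.rank() + len(kernel) == 3

# The membership conditions found by Gaussian elimination: (w, x, y, z) lies in
# the image iff y - w + x = 0 and z - 2w + 2x = 0.  Check them on the basis.
for w, x, y, z in (tuple(col) for col in image):
    assert y - w + x == 0 and z - 2*w + 2*x == 0
```

Note that the coordinate vector $(1,-1,1)$ corresponds to the polynomial $u_0-u_1+u_2 = (1+x^2)-x+(2x+1) = x^2+x+2$ in the basis $B$.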
Hillsdale - Ambassador - Pub Table
Product Id: 222050
Phone Code: 0568-4C83
Product Description
Features:
- Pub Table has a rich Cherry finish
- Round shape
- Wood top
- Pedestal base
- Bar Height
- Transitional style
- Some assembly required
- Manufacturer limited 90-day warranty against manufacturing defects
Specifications:
- Overall Dimensions: 42" H x 36" W x 36" D
- Weight: 75 lbs
- Shipping Carton Dimensions 1: 6" H x 38.5" W x 38.5" D
- Shipping Carton Dimensions 2: 10" H x 10" W x 42" D
- Shipping Carton Weight 1: 54 lbs
- Shipping Carton Weight 2: 21 lbs
Recommended Care:
- Dust frequently using a clean, specially treated dusting cloth that will attract and hold dust particles
- Do not use liquid or abrasive cleaners as they may damage the finish
She started her business with just a business card and ended up with a two-year-long waiting list before her show, Long Island Medium, ever aired on TLC. Hicksville’s sassy connection to the other side, Teresa Caputo, takes the stage close to home on Saturday, Oct. 6 at the NYCB Theatre at Westbury. Gold Circle Seats include a meet and greet.
Credit Advice
Have a question? Have advice to share? The combined knowledge and experience of everyone in the Credit Karma community can help you. Enter your question or help others below to get started!
Question posted in Credit Karma
4423 People Helped
First you get copies of all three credit reports from annualcreditreport.com and you look them over very thoroughly, noting and disputing all wrong information with the credit bureau reporting it. Why pay something that is not yours? You will find information about the creditor on your credit reports and can contact them after getting that information. Logically, you should know whom you owe and can make payment arrangements with the information on your credit report.
3939 People Helped
Your credit report will list contact info for each creditor on your report. Get all 3 of your credit reports for free once a year at the website AnnualCreditReport.com. Before you pay off old debts, read up on the best ways to deal with them. Don't call collection agencies about old debts by phone unless you're ready to pay them off. You can often get old debts removed or reduced; read the articles and blogs here on CK and find out the strategies for saving money on old debts. It's worth the time so you don't make costly mistakes in dealing with the scoundrels that run collection agencies. Good
TITLE: Galois group of $(T^4-3)(T^6-3)$ QUESTION [7 upvotes]: Given the polynomial $f(T) = (T^4-3)(T^6-3)$, I would like to calculate the Galois group of $f$. What I've done is the following: setting $\alpha = 3^{1/4}$ and $\beta= 3^{1/6}$, $\zeta_k = e^{2\pi i / k}$, the splitting field of $f$ over $\mathbb Q$ is $L := \mathbb Q(\alpha,...,\alpha \zeta_4^3,\beta,...,\beta\zeta_6^5) = \mathbb Q(\alpha,\beta,\zeta_4,\zeta_6) = \mathbb Q(3^{1/3},3^{1/4},i)$, since $\zeta_6 = (1+i(3^{1/4})^2)/2$ and $3^{1/6}=\frac{1}{3^4}(3^{1/3}(3^{1/4})^2)^5$, and then: $$ \mathbb Q \subset \mathbb Q(3^{1/4}) \subset \mathbb Q(3^{1/4},3^{1/3})\subset \mathbb Q(3^{1/4},3^{1/3},i) = L $$ with $$ [L:\mathbb Q(3^{1/3},3^{1/4})] = 2,\quad [\mathbb Q(3^{1/3},3^{1/4}):\mathbb Q(3^{1/4})] = 3, \quad [\mathbb Q(3^{1/4}):\mathbb Q]=4. $$ The first is clear to me, since the larger field contains complex numbers; the second is the complicated one; and the last holds because $T^4-3$ is the minimal polynomial of $3^{1/4}$ over $\mathbb Q$. So, $[L:\mathbb Q] = 24$. But I don't know how to calculate the Galois group. Any hint? Thanks in advance. Edit: I'll try the following. If $\phi\in Gal(L:\mathbb Q)$, $\phi$ is completely determined by $\phi(i)$, $\phi(3^{1/3})$ and $\phi(3^{1/4})$. For the first, $(\phi(i))^2=\phi(i^2) = \phi(-1) = -1$, so $\phi(i) = \pm i$. For the second, $3 = \phi((3^{1/3})^3) = \phi(3^{1/3})^3$ and necessarily $\phi(3^{1/3}) = \zeta_3^{k}\, 3^{1/3}$ for some $k \in \{0,...,2\}$. This gives $Gal(L:\mathbb Q)\cong \mathbb Z_2\times \mathbb Z_4\times \mathbb Z_3$? REPLY [0 votes]: I think your answer for the Galois group is correct. Note that we can also express $L = \mathbb{Q}(\sqrt[12]{3}, \zeta_{12})$ where $\zeta_{12}$ is a primitive $12$th root of unity.
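A quick floating-point sanity check (not a proof) of the two identities used above to show that $\zeta_6$ and $3^{1/6}$ lie in $\mathbb Q(3^{1/3},3^{1/4},i)$:

```python
import cmath

# zeta_6 = (1 + i*sqrt(3))/2, where sqrt(3) = (3^(1/4))^2
zeta6 = cmath.exp(2j * cmath.pi / 6)
assert abs(zeta6 - (1 + 1j * (3 ** 0.25) ** 2) / 2) < 1e-9

# 3^(1/6) = (1/3^4) * (3^(1/3) * (3^(1/4))^2)^5
lhs = 3 ** (1 / 6)
rhs = (3 ** (1 / 3) * (3 ** 0.25) ** 2) ** 5 / 3 ** 4
assert abs(lhs - rhs) < 1e-9
```

The second identity works because $3^{1/3}\cdot 3^{1/2}=3^{5/6}$, so raising to the fifth power gives $3^{25/6}$, and dividing by $3^4=3^{24/6}$ leaves $3^{1/6}$.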
A cornucopia of free resources for the study of the New Testament awaits the visitor at BiblicalStudies.org.uk/ – examples include McNeile’s 1927 Introduction to the New Testament, about 100 articles relevant to the Gospel of Matthew, about 80 articles relevant to Mark, over 100 articles relevant to Luke, and many more articles and other materials relevant to John. Inspired by this display of generous erudition, I thought it might be helpful to provide links to free downloadable files of the Cambridge Commentary for Schools and Universities – sharing the perspectives of some major league scholars of yesteryear – and the Commentaries for Schools, which was edited by C. J. Ellicott, who was the author of (among other things) Considerations on the Revision of the English Version of the New Testament; he was instrumental in the production and promotion of the Revised Version. John Burgon interacted with some of Ellicott’s text-critical considerations in The Revision Revised.
Gospel of Matthew, by A. Carr, 1878 (1908 edition)
Gospel of Mark, by G. F. Maclear, 1879
Gospel of Mark, by A. Plummer, 1920
Gospel of Luke, by F. W. Farrar, 1882
Gospel of John, by A. Plummer, 1882
Acts 15-28, by J. R. Lumby, 1882
First Corinthians, by J. S. Lias, 1892
Second Corinthians, by A. Plummer, 1903
Philippians, by H. C. G. Moule, 1897
Colossians & Philemon, by A. L. Williams, 1907
First & Second Thessalonians, by G. G. Findley, 1904
First & Second Timothy and Titus, by A. E. Humphreys, 1895
First & Second Timothy and Titus, by J. H. Bernard, 1899
First Peter, by G. W. Blenkin, 1914
Second Peter and Jude, by E. H. Plumptre, 1893
Epistles of John, by A. Plummer, 1890
Revelation, by W. A. Simcox, 1893
Also by E. H. Plumptre:
An Introduction to the New Testament (with preface by Ellicott)
Exposition of the Epistles to the Seven Churches of Asia
Commentary on James, 1895
Commentary for Schools (edited by Charles John Ellicott) (1879)
Gospel of Matthew, by E. H. Plumptre
Gospel of Mark, by E. H. Plumptre
Gospel of Luke, by E. H. Plumptre
Gospel of John, by H. W. Watkins
First Corinthians, by T. T. Shore
Second Corinthians, by E. H. Plumptre
Galatians, Ephesians, and Philippians, by W. Sanday and A. Barry
Colossians, First & Second Thessalonians, and First and Second Timothy, by A. Barry, A. J. Mason, and H. D. M. Spence
Titus, Philemon, Hebrews, and James, by H. D. M. Spence, A. Barry, W. F. Moulton, and E. G. Punchard
First Peter, Second Peter, First John, Second John, Third John, and Jude, by A. J. Mason, A. Plummer, and W. M. Sinclair
Revelation, by W. B. Carpenter
Ellicott’s commentaries collected together in three volumes:
Some readers may find it handy to download these volumes onto a flash-drive, to ensure that they will be readily available. Others may want to use the “My Library” feature at Google Books to add them to their own customizable collection of virtual books. Readers are invited to double-check the data in this post.
1 comment: Thanks, James. I'm going to get a good flash drive and download some of these.
To briefly review: A thousand years ago in this remote, unspecial spot, an otherwise ordinary group of Indians were somehow infused with the talent, desire and determination to build mega structures like the one pictured here--containing 500 plus rooms, numerous kivas and ceremonial plazas. 12 of these grand houses were constructed in the immediate vicinity and another 15 in the outlying areas. Nowhere else in America has similar super building occurred. The obvious question is: WHAT GOT INTO THESE PEOPLE? WHY, WHY, WHY WOULD THEY DO THIS?
1. An ordinary clan of ancient Indians, following water or fleeing hostilities or seeking greener pastures, made their way to this canyon and this spot.
2. An anomaly of this spot is a super-abundance of perfectly sized building stones----1, 2 and 3 inch layers of sandstone---perfectly flat on two sides. Kids playing could pile up impressive stacks---and soon discovered that mud between the layers would harden and stabilize stacks---making even more impressive structures. Adults quickly grasped the possibility of easy-to-build walls for houses----and further, the bonus of houses sharing a wall--or two--or even three----AND the bonus of houses on top of houses---thus sharing a roof. BINGO--pueblo style building is underway.
3. Density dwelling works social magic on the tribe---intensifying idea exchange, culture elaboration, and the creation and spread of cultural memes (mental bits of culture--songs, tales etc.). (See Paolo Soleri's work on arcology.)
4. With food and water problems solved (for the moment) people turned their attention to the fun pursuits of building with these terrific stones. Human ego being what it is, bigger and ever more impressive houses were constructed.
5. Excellence in construction began to be noticed and appreciated----triggering a FEEDBACK SPIRAL of excellence. (i.e. beautiful work---when publicly appreciated---evokes even more excellence---and appreciation--and excellence.)
6. The occasional visitor---noting the quality and size of construction here---spread the word---and more visitors came to see---creating another FEEDBACK SPIRAL of fame--tourism--more fame etc. Eventually the community spiraled its way to SPECTACLE.
7. Spectacle evokes a numinous effect (awe)---like seeing Solomon's Temple, or the pyramids, or Hoover Dam. Spectacle too, feeding on awe and fame, generates ever greater spectacle.
8. Architectural authority arose to shape the growing spectacle into some order.
9. Meanwhile, nearby clans were building spectacles of their own.
10. Divine guidance: Throughout the entire process people sought guidance from the Shaman. He too spiraled toward ever more elaborate theology and ritual. Visitors and locals enjoyed a bigger and bigger "show".
11. Commerce quickly raised its head---and another feedback loop spiraled upward.
12. The various feedback loops enfold and further enhance each other. In short, Chaco Canyon became a terrific place to visit for a good time. Branson, Missouri's rise to super-tourist spot is very similar. We RVers can see how spiraling has made Quartzsite, AZ our "Mecca".
13. Tyranny or no tyranny? Well, yes and no. Slavery? No!----Social coercion--Yes! Theology always jerks people around---by fear and hope--so they spend energies working for the church. In Chaco, no doubt a theology arose with a governing priesthood, and the masses worked "voluntarily" for the deity.
14. Why did it end? It was a social empire and it ran its course---usually 300 years is about max for them---bad weather probably precipitated a downward feedback spiral, and its charm died quicker than it rose.
15. What have we learned here? That lightning can strike in unlikely places---that empires--of all kinds--political, social, religious, commercial--are giant feedback loops that rise--ascend--and decline. That tiny causes can have huge effects. An abundance of flat, perfectly sized building stones started a local building craze that swelled to a spectacle--to a monument---to a 300 year "happening". That feedback loops--once recognized--are seen to be all around and in us---and can serve for good or ill. Social manias--like statue building on Easter Island, tulip mania in Holland, pyramid mania in Egypt and house mania in the US----rarely serve humanity well---though they produce impressive stuff. That humans like to get excited and lose themselves in something bigger than themselves. It is near ecstasy to align your energy and intent with many others---and it is extremely powerful. The dilemma of mavericks and extremely independent personalities is that unless they join some sort of human surge---they miss out on all the fun. At the psychological level, Chaco suggests that tiny steps in the direction of your dreams can start an upward spiral of great satisfaction and who knows what else. My friend Boonie persuaded me to try blogging---and now I have a game to play that connects me with the world.