DMRC {CRA & Maintainer} Admit Card 2016, Delhi Metro SC/TO Hall Ticket
Delhi Metro Rail Corporation (DMRC) was established in 1995 for the development of metro rail in Delhi. DMRC CRA Recruitment 2016 and DMRC Maintainer Recruitment 2016 for vacant posts of CRA (Customer Relationship Assistant), SC, TO, and Maintainer are in process, and DMRC is going to release the Admit Card for the written test. Applied candidates are advised to stay tuned with us to get the DMRC CRA Admit Card 2016 and DMRC Maintainer Admit Card 2016.
Delhi Metro's vision is to make the commuting experience a customer delight, to connect the whole of Delhi with the Metro network, and to serve customers with passion. Delhi Metro Rail Corporation (DMRC) is one of the largest transportation operators in India. For such heavy duty and customer handling, eligible aspirants will be recruited for various vacant posts of Customer Relationship Assistant and Maintainer. Applied candidates are informed that the DMRC Admit Card 2016 will be available to download. DMRC runs 216 trains with 4, 6, and 8 coaches, and offers a roughly 2-minute service frequency to pick up and drop off customers. CRA is a very responsible post in any organisation, handling various customer issues and providing solutions. Applied candidates will have to download the DMRC SC Hall Ticket 2016, DMRC TO Hall Ticket 2016, DMRC CRA Admit Card 2016, or DMRC Maintainer Admit Card 2016.
DMRC CRA Admit Card 2016
Aspirants will have to stay tuned with the official website or desiji.in to get the Admit Card or Hall Ticket for the examination that will select candidates for the given posts. DMRC is the largest transportation operator in Delhi, connecting all corners of the city to each other. It has the Yellow Line, Red Line, Green Line, Blue Line, Purple Line, and Orange Line, which interconnect with each other, and nowadays DMRC also connects to Rapid Metro Gurgaon. DMRC is now going to shortlist eligible applicants from plenty of candidates for the CRA, Maintainer, SC, and TO posts, which handle various functions in the corporation. Get the DMRC CRA Hall Ticket 2016 and DMRC Maintainer Hall Ticket 2016 from the link given below.
- Corporation – Delhi Metro Rail Corporation
- Known As – DMRC
- Recruitment For – CRA (Customer Relationship Assistant), Maintainer, SC & TO
- Job Location – Delhi State
- Searching For – DMRC CRA Admit Card 2016, DMRC Maintainer Admit Card 2016/Hall Ticket
- Status – Available soon
- Category – Government Jobs in Delhi
- Official Portal – www.delhimetrorail.com
The Delhi Metro CRA Admit Card 2016 will be available soon. The Admit Card contains information such as the applicant's name, roll number, and photo, along with examination details like the examination centre, timing, and full schedule. Aspirants will have to stay tuned with desiji.in to get the DMRC Admit Card 2016. Delhi Metro Rail Corporation selects aspirants based on a written test and a personal interview. Aspirants who qualify in the test will be called for the personal interview, and candidates will then be selected for the posts of CRA, Maintainer, and others.
DMRC Maintainer Hall Ticket 2016
Delhi Metro Rail Corporation (DMRC) has various vacant posts of CRA and Maintainer, and the DMRC CRA Exam 2016 will be held soon. Applicants will have to stay tuned with us to download the Admit Card/Hall Ticket and to get the latest updates regarding DMRC Recruitment 2016. DMRC's network runs 216 km with 160 stations and connects Delhi, Noida, and Gurgaon. Applied candidates are informed that the Admit Card will be available soon.
To download, go to the main DMRC portal and click on the link given for the Admit Card for the upcoming exam for recruitment of eligible aspirants for CRA and Maintainer. Candidates will then have to submit details such as Application Number, Date of Birth, and whatever else is asked on screen. Proceed to get the DMRC CRA Admit Card 2016 and DMRC Maintainer Admit Card 2016 from the link given below.
Download – DMRC CRA Admit Card 2016
The Delhi State Government and DMRC board supervise the Delhi Metro's daily operations, passenger handling, staffing, latest recruitment, and more. Eligible students are informed that we have all notifications regarding recruitment in Delhi State.
Get – Government Jobs in Delhi
I need a job.
Floral Mother's Day card
2.75
A sweet and pretty card with the collaged word ‘Mum’ surrounded by flowers and leaves.
Printed onto luxury 300gsm card with a kraft envelope, then hand packaged in a recyclable cello wrap.
\begin{document}
\maketitle
\begin{center}
{\bf Abstract}
\end{center}
{\small The aim of this note is to present some new results concerning ``almost everywhere''
well-posedness and stability of continuity equations with measure initial data.
The proofs of all such results can be found in \cite{amfifrgi}, together
with some application to the semiclassical limit of the Schr\"odinger equation.
}
\begin{center}
{\bf R\'esum\'e}
\end{center}
{\small Dans cette note, nous pr\'esentons des nouveaux r\'esultats concernant
l'existence, l'unicit\'e (au sens ``presque partout'') et la stabilit\'e pour des \'equations
de continuit\'e avec donn\'ees initiales mesures.
Les preuves de tous ces r\'esultats sont donn\'ees dans \cite{amfifrgi}, avec aussi des applications
\`a la limite semiclassique pour l'\'equation de Schr\"odinger.
}
\bigskip
Starting from the seminal paper of DiPerna-Lions \cite{diplions}
(dealing mostly with the transport equation), in
\cite{ambrosio,cetraro} the well-posedness of the continuity
equation
\begin{equation}\label{contieq}
\frac{d}{dt}\mu_t+\nabla\cdot (\bb_t\mu_t)=0
\end{equation}
has been strongly related to the well-posedness of the ODE (here we use
the notation $\bb(t,x)=\bb_t(x)$)
\begin{equation}\label{ODE}
\left\{\begin{array}{ll}
\dot \XX(t,x)=\bb_t(\XX(t,x))&\text{for $\Leb{1}$-a.e. $t\in (0,T)$,}\\
\XX(0,x)=x,
\end{array}
\right.
\end{equation}
for ``almost every'' $x\in\R^d$.
More precisely, observe that the concept of solution to \eqref{ODE} is not
invariant under modification of $\bb$ in Lebesgue negligible
sets, while many applications of the theory to fluid dynamics (see
for instance \cite{lions2}, \cite{lions3}) and conservation laws
need this invariance property. This leads to the concept of
\emph{regular Lagrangian flow} (RLF in short): one may ask that,
for all $t\in [0,T]$, the image $\XX(t,\cdot)_\sharp\Leb{d}$
of the Lebesgue measure $\Leb{d}$ under the
flow map $x\mapsto\XX(t,x)$ is still controlled by $\Leb{d}$ (see
Definition~\ref{RLflow} below). Then, existence and uniqueness (up to $\Leb{d}$-negligible sets)
and stability of the RLF $\XX(t,x)$ in $\R^d$ hold true provided
the functional version of \eqref{contieq}, namely
\begin{equation}\label{contieqw}
\frac{d}{dt}w_t+\nabla\cdot (\bb_t w_t)=0,
\end{equation}
is well-posed in the set of non-negative bounded integrable functions
$L^\infty_+\bigl([0,T];L^1(\R^d)\cap L^\infty(\R^d)\bigr)$.
Now, we may view \eqref{contieq} as an infinite-dimensional ODE in
$\Probabilities{\R^d}$, the space of probability measures in
$\R^d$ and try to obtain existence and uniqueness results for \eqref{contieq}
in the same spirit of the finite-dimensional theory, starting from the
simple observation that $t\mapsto\delta_{\sxX(t,x)}$ solves \eqref{contieq}.
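To see this observation explicitly (a standard one-line computation, sketched here for the reader's convenience), fix $\phi\in C^\infty_c(\R^d)$ and use \eqref{ODE}:
$$
\frac{d}{dt}\int_{\R^d}\phi\,d\delta_{\sxX(t,x)}=\frac{d}{dt}\phi(\XX(t,x))
=\langle\nabla\phi(\XX(t,x)),\bb_t(\XX(t,x))\rangle
=\int_{\R^d}\langle\bb_t,\nabla\phi\rangle\,d\delta_{\sxX(t,x)}
$$
for $\Leb{1}$-a.e. $t\in (0,T)$, which is precisely the distributional formulation of \eqref{contieq}.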
We may expect that
if we fix a ``good'' measure $\nnu$ in the space $\Probabilities{\R^d}$
of initial data, then existence, uniqueness $\nnu$-a.e. and stability hold.
Moreover, for $\nnu$-a.e. $\mu$, the unique and stable
solution of \eqref{contieq} starting from $\mu$
should be given by
\begin{equation}\label{ovvia}
\mmu(t,\mu):=\int \delta_{\sxX(t,x)}\,d\mu(x)\qquad \forall\, t\in
[0,T],\,\,\mu\in\Probabilities{\R^d}.
\end{equation}
\section{Continuity equations and flows}
We use a standard and hopefully self-explanatory notation.
Let $\bb:[0,T]\times\R^d\to\R^d$ be a Borel vector field belonging to
$L^1_{\rm loc}\bigl([0,T]\times\R^d\bigr)$, and
set $\bb_t(\cdot):=\bb(t,\cdot)$; we \emph{shall not} work with the
Lebesgue equivalence class of $\bb$, although a posteriori the
theory is independent of the choice of the representative.
\begin{definition}[$\nu$-RLF in $\R^d$]\label{RLflow}
Let $\XX(t,x):[0,T]\times\R^d\to\R^d$ and
$\nu\in {\mathscr M}_+(\R^d)$ with $\nu\ll\Leb{d}$ and with bounded
density. We say that $\XX(t,x)$ is a $\nu$-RLF in $\R^d$ (relative
to $\bb$) if the following two conditions are fulfilled:
\begin{itemize}
\item[(i)] for $\nu$-a.e. $x$, the function $t\mapsto\XX(t,x)$
is an absolutely continuous integral solution to the ODE \eqref{ODE}
in $[0,T]$ with $\XX(0,x)=x$;
\item[(ii)] $\XX(t,\cdot)_\sharp\nu\leq C\Leb{d}$ for all $t\in
[0,T]$, for some constant $C$ independent of $t$.
\end{itemize}
\end{definition}
By a simple application of Fubini's theorem this concept is, unlike the
single condition (i), invariant in the Lebesgue equivalence class of $\bb$.
In this context, since all admissible initial measures $\nu$ are
bounded above by $C\Leb{d}$, uniqueness of the $\nu$-RLF can and
will be understood in the following stronger sense: if $f,\,g\in
L^1(\R^d)\cap L^\infty(\R^d)$ are nonnegative and $\XX$ and $\YY$
are respectively a $f\Leb{d}$-RLF and a $g\Leb{d}$-RLF, then
$\XX(\cdot,x)=\YY(\cdot,x)$ for $\Leb{d}$-a.e. $x\in\{f>0\}\cap
\{g>0\}$.
\begin{remark}{\rm We recall that the $\nu$-RLF exists
for all $\nu\leq C\Leb{d}$, and is unique, in the strong sense
described above, under the following
assumptions on $\bb$: $|\bb|$ is uniformly bounded, $\bb_t\in
BV_{\rm loc}(\R^d;\R^d)$ and $\nabla\cdot\bb_t=g_t\Leb{d}\ll\Leb{d}$
for $\Leb{1}$-a.e. $t\in (0,T)$, with
$$
\|g_t\|_{L^\infty(\R^d)}\in L^1(0,T),\qquad |D\bb_t|(B_R)\in
L^1(0,T)\quad\text{for all $R>0$,}
$$
where $|D\bb_t|$ denotes the total variation of the distributional
derivative of $\bb_t$. (See \cite{ambrosio} or
\cite{cetraro} and the paper \cite{bouchut} for Hamiltonian vector
fields.)}\end{remark}
Given a nonnegative $\sigma$-finite measure
$\nnu\in\Measuresp{\Probabilities{\R^d}}$, we denote by
$\E\nnu\in\Measuresp{\R^d}$ its expectation, namely
$$
\int_{\R^d}\phi\,d\E\nnu=\int_{{\mathscr
P}(\R^d)}\int_{\R^d}\phi\,d\mu \,d\nnu(\mu)\qquad\text{for all
$\phi$ bounded Borel.}
$$
\begin{definition}[Regular measures in
$\Measuresp{\Probabilities{\R^d}}$]\label{RegMis} Let
$\nnu\in\Measuresp{\Probabilities{\R^d}}$. We say that $\nnu$ is
\emph{regular} if $\E\nnu\leq C\Leb{d}$ for some constant $C$.
\end{definition}
\begin{example}\label{eRegMis}
{\rm (1) The first standard example of a regular measure $\nnu$ is
the law under $\rho\Leb{d}$ of the map $x\mapsto\delta_x$, with
$\rho\in L^1(\R^{d})\cap L^\infty(\R^d)$ nonnegative. Actually, one can
even consider the law under $\Leb{d}$, and in this case
$\nnu$ would be $\sigma$-finite instead of finite.
(2) If $d=2n$ and $z=(x,p)\in\R^n\times\R^n$ (this factorization
corresponds for instance to flows in a phase space), instead of
considering the law under $\rho\Leb{2n}$ of the map
$(x,p)\mapsto\delta_x\otimes\delta_p$, one may also consider the law
under $\rho\Leb{n}$ of the map $x\mapsto \delta_x\times\gamma$, with
$\rho\in L^1(\R^n_x)\cap L^\infty(\R^n_x)$ nonnegative and
$\gamma\in\Probabilities{\R^n_p}$ bounded from above by a constant
multiple of $\Leb{n}$.}
\end{example}
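For instance, the regularity of the measure in Example~\ref{eRegMis}(1) can be checked by a direct computation (sketched here, though it is not spelled out in the example): if $\nnu$ is the law of $x\mapsto\delta_x$ under $\rho\Leb{d}$, then for every bounded Borel $\phi$
$$
\int_{\R^d}\phi\,d\E\nnu=\int_{\R^d}\Bigl(\int_{\R^d}\phi\,d\delta_x\Bigr)\rho(x)\,dx
=\int_{\R^d}\phi\,\rho\,dx,
$$
so that $\E\nnu=\rho\Leb{d}\leq\|\rho\|_{L^\infty(\R^d)}\Leb{d}$.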
We observe that Definition~\ref{RLflow} has a
natural (but not perfect) transposition to flows in
$\Probabilities{\R^d}$:
\begin{definition}[Regular Lagrangian flow in
$\Probabilities{\R^d}$]\label{RLflowmis} Let
$\mmu:[0,T]\times\Probabilities{\R^d}\to\Probabilities{\R^d}$ and
$\nnu\in\Measuresp{\Probabilities{\R^d}}$. We say that $\mmu$ is a
$\nnu$-RLF in $\Probabilities{\R^d}$ (relative to $\bb$) if
\begin{itemize}
\item[(i)] for $\nnu$-a.e. $\mu$, $|\bb|\in L^1_{\rm
loc}\bigl((0,T)\times\R^d;\mu_tdt\bigr)$,
$t\mapsto\mu_t:=\mmu(t,\mu)$ is continuous from $[0,T]$ to
$\Probabilities{\R^d}$ with $\mmu(0,\mu)=\mu$ and $\mu_t$ solves
\eqref{contieq} in the sense of distributions;
\item[(ii)] $\E(\mmu(t,\cdot)_\sharp\nnu)\leq C\Leb{d}$ for all $t\in [0,T]$, for
some constant $C$ independent of $t$.
\end{itemize}
\end{definition}
Notice that condition (ii) is weaker than $\mmu(t,\cdot)_\sharp\nnu\leq C\nnu$
(which would be the analogue of (ii) in Definition~\ref{RLflow} if
we were allowed to choose $\nu=\Leb{d}$), and it is actually sufficient
and much more flexible for our purposes, since we would like to consider
measures $\nnu$ generated as in Example~\ref{eRegMis}(2).
\section{Existence, uniqueness and stability of the RLF}
In this section we recall the main existence and uniqueness
results of the $\nu$-RLF in $\R^d$, and see their extensions to
$\nnu$-RLF in $\Probabilities{\R^d}$. The following result is proved
in \cite[Theorem~19]{cetraro} for the
part concerning existence and in \cite[Theorem~16,
Remark~17]{cetraro} for the part concerning uniqueness.
\begin{theorem}[Existence and uniqueness of the $\nu$-RLF in
$\R^d$]\label{texirlfrd} Assume that \eqref{contieqw} has existence
and uniqueness in $L^\infty_+\bigl([0,T];L^1(\R^d)\cap
L^\infty(\R^d)\bigr)$. Then, for all $\nu\ll\Leb{d}$ with bounded
density the $\nu$-RLF exists and is unique.
\end{theorem}
The next result shows that uniqueness of \eqref{contieqw} in
$L^\infty_+\bigl([0,T];L^1(\R^d)\cap L^\infty(\R^d)\bigr)$ implies a
stronger property, namely uniqueness of the $\nnu$-RLF.
\begin{theorem}[Existence and uniqueness of the $\nnu$-RLF in
$\Probabilities{\R^d}$]\label{texirlfprob} Assume that
\eqref{contieqw} has uniqueness in $L^\infty_+\bigl([0,T];L^1(\R^d)\cap
L^\infty(\R^d)\bigr)$. Then, for all
$\nnu\in\Measuresp{\Probabilities{\R^d}}$ regular,
there exists at most one $\nnu$-RLF in $\Probabilities{\R^d}$.
If \eqref{contieqw} has existence in $L^\infty_+\bigl([0,T];L^1(\R^d)\cap
L^\infty(\R^d)\bigr)$, this unique flow is given by
\begin{equation}\label{realRLF}
\mmu(t,\mu):=\int_{\R^d}\delta_{\sxX(t,x)}\,d\mu(x),
\end{equation}
where $\XX(t,x)$ denotes the unique $\E\nnu$-RLF.
\end{theorem}
For the applications it is important to show that RLF's not only
exist and are unique, but also that they are stable. In the
statement of the stability result we shall consider measures
$\nnu_n\in\Probabilities{\Probabilities{\R^d}}$, $n\geq 1$, and a
limit measure $\nnu$. We shall assume that $\nnu_n=(i_n)_\sharp\P$,
where $(W,{\mathcal F},\P)$ is a probability measure space and
$i_n:W\to\Probabilities{\R^d}$ are measurable; we shall also assume
that $\nnu=i_\sharp\P$, with $i_n\to i$ $\P$-almost everywhere.
(Recall that the Skorokhod theorem (see \cite[\S8.5, Vol.
II]{bogachevII}) shows that weak convergence of $\nnu_n$ to $\nnu$
always implies this sort of representation, even with $W=[0,1]$
endowed with the standard measure structure, for suitable
$i_n,\,i$.) The following formulation of the stability result is
particularly suitable for the application to semiclassical limit of
the Schr\"odinger equation.
Henceforth, we fix an autonomous vector field $\bb:\R^d\to\R^d$
satisfying the following regularity conditions:
\begin{itemize}
\item[(a)] $d=2n$ and $\bb(x,p)=(p,\cc(x))$, $(x,p)\in\R^d$,
$\cc:\R^n\to\R^n$ Borel and locally integrable;
\item[(b)] there
exists a closed $\Leb{n}$-negligible set $S$ such that $\cc$ is
locally bounded on $\R^n\setminus S$.
\end{itemize}
\begin{theorem}[Stability of the $\nnu$-RLF in
$\Probabilities{\R^d}$]\label{tstable} Let $i_n,\,i$ be as above and
let $\mmu_n:[0,T]\times i_n(W)\to\Probabilities{\R^d}$ be satisfying
$\mmu_n(0,i_n(w))=i_n(w)$ and the following conditions:
\begin{itemize}
\item[(i)] (uniform regularity)
$$\sup_{n\geq 1}\sup_{t\in [0,T]}\int_W\int_{\R^d}\phi\,d\mmu_n(t,i_n(w))\,d\P(w)\leq
C\int_{\R^d} \phi\,dx
$$
for all $\phi\in C_c(\R^d)$ nonnegative;
\item[(ii)] (uniform decay away from $S$) for some $\beta>1$
\begin{equation}\label{ali4}
\sup_{\delta>0}\limsup_{n\to\infty}\int_W\int_0^T
\int_{B_R}\frac{1}{{\rm
dist}^\beta(x,S)+\delta}\,d\mmu_n(t,i_n(w))\,dt \,d\P(w)<\infty
\qquad\forall\, R>0;
\end{equation}
\item[(iii)] (space tightness) for all $\eps>0$,
$\P\Bigl(\bigl\{w:\ \sup\limits_{t\in
[0,T]}\mmu_n(t,i_n(w))(\R^d\setminus B_R)>\eps\bigr\}\Bigr)\to 0$
as $R\to\infty$;
\item[(iv)] (time tightness) for $\P$-a.e. $w\in
W$, for all $n\geq 1$ and $\phi\in C^\infty_c(\R^d)$,
$t\mapsto\int_{\R^d}\phi\,d\mmu_n(t,i_n(w))$ is absolutely
continuous in $[0,T]$ and
$$
\lim_{M\uparrow\infty}\P\biggl(\Bigl\{w\in W:\
\int_0^T\biggl|\biggl(\int_{\R^d}\phi\,d\mmu_n(t,i_n(w))\biggr)'\biggr|\,dt>M\Bigr\}\biggr)=0;
$$
\item[(v)] (limit continuity equation)
\begin{equation}\label{ali6}
\lim_{n\to\infty}\int_W
\biggl|\int_0^T\biggl[\varphi'(t)\int_{\R^d}\phi\,d\mmu_n(t,i_n(w))+\varphi(t)\int_{\R^d}
\langle\bb,\nabla\phi\rangle\,d\mmu_n(t,i_n(w))\biggr]\,dt\biggr|\,d\P(w)=0
\end{equation}
for all $\phi\in C^\infty_c\bigl(\R^d\setminus(S\times\R^n)\bigr)$,
$\varphi\in C^\infty_c(0,T)$.
\end{itemize}
Assume, besides (a), (b) above, that \eqref{contieqw} has
uniqueness in $L^\infty_+\bigl([0,T];L^1\cap L^\infty(\R^d)\bigr)$.
Then the $\nnu$-RLF $\mmu(t,\mu)$ relative to $\bb$ exists, is
unique (by Theorem~\ref{texirlfprob}) and
\begin{equation}\label{cetraro1}
\lim_{n\to\infty} \int_W\sup_{t\in [0,T]}d_{{\mathscr
P}}(\mmu_n(t,i_n(w)),\mmu(t,i(w)))\,d\P(w)=0
\end{equation}
where $d_{{\mathscr P}}$ is any bounded distance in $\Probabilities{\R^d}$ inducing
weak convergence of measures.
\end{theorem}
An example of application of the above stability result is the following:
let $\alpha\in (0,1)$ and let $\psi^\e_{x_0,p_0}:[0,T]\times \R^n \to \C$
be a family of solutions to the Schr\"odinger equation
\begin{equation}\label{wkb}
\left\{
\begin{array}{l} i\e \p_t \psi_{x_0,p_0}^\e(t)=-\frac{\e^2}{2}\Delta \psi_{x_0,p_0}^\e(t)+U\psi_{x_0,p_0}^\e(t)\\
\psi^\e_{x_0,p_0}(0)=\e^{-n\alpha/2}\phi_0\Bigl(\frac{x-x_0}{\e^{\alpha}}\Bigr)e^{i(x\cdot
p_0)/\e},
\end{array}
\right.
\end{equation}
with $\phi_0\in C^2_c(\R^n)$ and $\int|\phi_0|^2\,dx=1$.
When the potential $U$ is of class $C^2$, it was proven in \cite{gerard, lionspaul}
that for every $(x_0,p_0)$ the Wigner transforms
$W_\e\psi^\e_{x_0,p_0}(t)$ converge, in the natural
dual space ${\cal A}'$ \cite{lionspaul} for the Wigner transforms,
to $\delta_{\sxX(t,x_0,p_0)}$ as $\e\downarrow 0$.
Here $\XX(t,x,p)$ is the unique flow in $\R^{2n}$
associated to the Liouville equation
\begin{equation}
\label{eq:liouville}
\p_t W + p\cdot \n_x W-\n U(x) \cdot \n_p W=0.
\end{equation}
In \cite{amfifrgi}, relying also on some a priori estimates of
\cite{amfrgi} (see also \cite{figpaul}), the authors consider a
potential $U$ which can be written as the sum of a repulsive Coulomb
potential $U_s$ plus a bounded Lipschitz interaction term $U_b$ with
$\n U_b\in BV_{\rm loc}$. We observe that in this case the equation
\eqref{eq:liouville} does not even make sense for measure initial
data, as $\n U$ is not continuous. Still, they can prove \emph{full}
convergence as $\e\downarrow 0$, namely
\begin{equation}\label{limeps}
\lim_{\e\downarrow 0}
\int_{\R^d}\rho(x_0,p_0)\sup_{t\in [-T,T]}
d_{{\cal A}'}\bigl(W_\e\psi^\e_{x_0,p_0}(t),\delta_{\sxX(t,x_0,p_0)}\bigr)
dx_0dp_0=0
\qquad\forall\, T>0
\end{equation}
for all $\rho\in L^1(\R^{2n})\cap L^\infty(\R^{2n})$ nonnegative,
where $\XX(t,x,p)$ is the unique $\Leb{2n}$-RLF
associated to \eqref{eq:liouville} and $d_{{\cal A}'}$ is a bounded distance
inducing the weak$^*$ topology in the unit ball of ${\cal A'}$.
The proof of \eqref{limeps} relies on an application of
Theorem~\ref{tstable} to the Husimi transforms of
$\psi^\e_{x_0,p_0}(t)$. The scheme is sufficiently flexible to allow
more general families of initial conditions displaying partial
concentration, of position or momentum, or no concentration at all:
for instance, the limiting case $\alpha=1$ in \eqref{wkb} (related to
Example~\ref{eRegMis}(2)) leads to
$$
\lim_{\e\downarrow 0} \int_{\R^d}\rho(x_0)\sup_{t\in [-T,T]}
d_{{\cal
A}'}\bigl(W_\e\psi^\e_{x_0,p_0}(t),\mmu(t,\mu(x_0,p_0))\bigr) dx_0=0
\qquad\forall\, p_0\in\R^n,\,T>0
$$
for all $\rho\in L^1(\R^n)\cap L^\infty(\R^n)$ nonnegative, with
$\mmu(t,\mu)$ given by \eqref{ovvia} and
$\mu(x_0,p_0)=\delta_{x_0}\times|\hat\phi_0|^2(\cdot-p_0)\Leb{n}$.
TITLE: Sets of integers that turned out to be finite
QUESTION [7 upvotes]: In the spirit of the previous question "Conjectures that have been disproved with extremely large counterexamples?", and as an attempt to salvage this closed question, I'm interested in sets of natural numbers or integers such that it was historically an open question whether they contained infinitely many numbers, but have now been proven to be finite.
For example, it is still not known whether there are infinitely many perfect numbers. If it were proved that there are in fact only finitely many, that would make them a valid answer to this question.
I know that one can easily make up trivial examples ("I don't know whether $A=\{n:|n|<10^{1000}\}$ is finite. Oh wait, it is. Done!") but those would not count as historical open questions.
Note: This is not identical to the previous question on extremely large counterexamples, because one may have an extremely large counterexample while still having infinitely many larger non-counterexamples.
REPLY [1 votes]: Probably the most famous finite set of integers that is hard to prove finite is the set $B \subset \mathbb{N}$ of integers $n$ satisfying the statement of the Burnside conjecture («All groups with exponent $n$ are locally finite»). This set is finite due to the Adyan–Novikov theorem («There exists an infinite finitely generated group of exponent $n$ for any odd $n \geq 665$») and Ivanov's theorem («There exists an infinite finitely generated group of exponent $n$ for any even $n \geq 2^{48}$»). However, although $B$ is finite, its exact size remains unknown. The only numbers known for sure to belong to $B$ are 1, 2, 3, 4 and 6.
If you want something more tangible, there is Zivcovic's theorem, which states that there are finitely many numbers $n \in \mathbb{N}$ such that $\sum_{k = 0}^{n - 1} (-1)^k (n - k)!$ is prime. Such numbers form the OEIS sequence A001272.
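The quantity in question is the alternating factorial sum $n! - (n-1)! + \dots \pm 1!$. A minimal Python sketch (not part of the original answer; `af` and `is_prime` are illustrative helper names) that lists the small members of A001272:

```python
from math import factorial

def af(n):
    """Alternating factorial sum: n! - (n-1)! + ... +/- 1!."""
    return sum((-1) ** k * factorial(n - k) for k in range(n))

def is_prime(m):
    """Trial-division primality test, adequate for small n."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# The first few members of OEIS A001272:
small_members = [n for n in range(1, 11) if is_prime(af(n))]
print(small_members)  # -> [3, 4, 5, 6, 7, 8, 10]
```

Note that 9 already drops out (af(9) = 326981 is composite), which illustrates why the finiteness of the full set is a nontrivial statement.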
Another example of such nontrivially finite sets of integers can be found in this MSE question: Equation $x = \tau(2^x - 1)$. The equation here appeared to have exactly 7 solutions: 1, 2, 4, 6, 8, 16, and 32.
And of course we should mention the set of all $n \in \mathbb{N}$ such that $\exists x, y, z \in \mathbb{N}:\ x^n + y^n = z^n$. Actually, this set has exactly two elements, 1 and 2. This statement is called Fermat's Last Theorem and was one of the longest-standing open problems (formulated by Fermat in 1637 and proved by Wiles in 1995).
Typically, payday advances are funds that you can borrow from a financial institution and repay within two weeks. The amount that one may borrow generally ranges between $1,000 and $1,500. You can also term it a paycheck advance, short-term loan, bad credit loan, cash loan, or fast loan. If you’re in an urgent situation or have a poor credit history, you can access the loan seamlessly. Let’s explore the advantages and demerits of the loan in detail.
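Because the term is so short, the implied annualised cost is the number worth computing before borrowing. A minimal Python sketch (the $15-per-$100 fee is an assumption for illustration only; the post does not state a fee):

```python
def payday_apr(principal, fee, term_days=14):
    """Annualise a flat payday-loan fee into a simple APR percentage."""
    period_rate = fee / principal          # cost per borrowing period
    return period_rate * (365 / term_days) * 100

# Assumed typical fee of $15 per $100 borrowed over two weeks:
apr = payday_apr(principal=100, fee=15)
print(round(apr, 1))  # -> 391.1
```

A flat fee that looks small per period can therefore correspond to a triple-digit annual rate, which is the main demerit to weigh against the convenience.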
I like to revisit this topic every so often to allow people to post comments and add to the list. “It could be worse, you know.” “Everything happens for a reason.” “It’s all part of a larger plan.” “You’re only given what you can handle.” “All you need to do is think positive.” “Half the battle is the mindset.”
There are always eyebrow-raising things people say to those with cancer and/or their families.
Maybe not everyone would find each of the comments listed below to be offensive but they’ve been submitted by readers as ones they wish they hadn’t heard.
“At least it’s not on your face where everyone could see the scars; besides, you don’t really need your breasts anyway.” “A new-agey friend asked me if I had been really angry about anything 7 years before my diagnosis that I had repressed.”
Those days that drag on and you just wonder and hope. I was out to eat with my youngest son, now 16, and ran into an acquaintance. Her daughter, who knows I went through chemo a year earlier, made a comment that her mother must have a particularly strong constitution because she didn’t have trouble with side effects.
Logic Programming
Online Computer Science Degree Guide: Logic Programming
Computer science relies on programming to allow software to accomplish its desired functions. There are numerous programming methods for setting the guidelines by which software operates. One of the most prolific is logic programming. This method involves giving software parameters that follow logical reasoning, derived largely from simple if-then statements.
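As a rough illustration of the idea (a toy forward-chaining sketch in Python, since the page names no particular language; dedicated logic-programming systems such as Prolog express this declaratively), if-then rules can be applied repeatedly until no new facts are derived:

```python
def forward_chain(facts, rules):
    """Repeatedly fire if-then rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Rule fires when all premises are known and the
            # conclusion is not yet derived.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy knowledge base: "if it rains and you are outside, you get wet."
rules = [
    (["rain", "outside"], "wet"),
    (["wet"], "cold"),
]
derived = forward_chain(["rain", "outside"], rules)
print(sorted(derived))  # -> ['cold', 'outside', 'rain', 'wet']
```

The programmer supplies only the rules; the order of derivation is left to the inference engine, which is the essence of the declarative style.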
Experiences with Maha Periyava: Beyond boundaries…
Experiences with Maha Periyava: Beyond boundaries
Many years back (about 12 to 13), a Muslim gentleman walked into my clinic. Seeing Sri Maha Periyava’s photo, he bowed in reverence and mentioned that Sri Maha Periyava is a great sage. I was surprised and asked him how he knew about Sri Maha Periyava. The Muslim gentleman shared his experience, which I shall present in the first person as narrated to me.
“Sir, I was a linesman in electricity board about 30 years back… We, in a group of 5 or 6, went to the Kanchi Matham to attend to a fault. Sri Maha Periyava who was seated there, gestured me to come and queried whether I prayed 5 times a day. I was shocked as to how Sri Maha Periyava could make out my religion. Sir that was only the beginning of my surprise. Then Sri Maha Periyava quoted from the Holy Quran extensively. We are Tamils who read the Quran. So the Arabic that we read will have a Tamil tint.
However, Sri Maha Periyava recited the Quran just as an Arab would speak his native tongue. Sir, his purity of diction was truly amazing. And if I had thought that this was great, more was to come.
Sri Maha Periyava then quoted a particular passage and asked me how many times the Quran mandated daily prayer.
I replied 5 times as any devout Muslim would. Sri Maha Periyava then asked me to get it clarified by the local Kazi. I did so and reported back to Maha Periyava, the same news. Maha Periyava then told me to get in touch with the Chief Kazi at Madras and get back to Him. The Chief Kazi not only said the same thing but chided me of asking useless questions. I reported the same to Sri Maha Periyava.
It was then that a great feat happened. Sri Maha Periyava asked for the Quran book to be brought and effortlessly showed and quoted from the Arabic that, actually mandated daily prayer was 6 times. However since the 6th time falls at 11.30 pm many people skip it. He told me that if I prayed for the 6th time also I would really benefit. Sir, true to His word, I retired as an executive engineer of EB.
Then, when I went to the Kamakshi temple for official work a few times, Sri Maha Periyava would recognize me, bless me, and enquire about the number of times I pray! Just look at the encyclopaedic knowledge of Sri Maha Periyava, the Omnipotent, Omnipresent God Who Walked. I was really blessed to have listened first-hand to such a great experience.
Sri Maha Periyava has blessed countless souls irrespective of their religion.
Compiled by Jagadguru Sri Maha Periyava – Kanchi Paramacharya
\begin{document}
\title{Fluids, Elasticity, Geometry,\\ and the Existence of Wrinkled Solutions}
\author{Amit Acharya \and Gui-Qiang G. Chen \and Siran Li \and Marshall Slemrod \and \\ Dehua Wang}
\institute{A. Acharya \at
Civil \& Environmental Engineering, Carnegie Mellon University,
Pittsburgh, PA 15213, USA.\\
\email{acharyaamit@cmu.edu}
\and
G.-Q. Chen \at
Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK;\\
AMSS \& UCAS, Chinese Academy of Sciences, Beijing 100190, China.\\
\email{chengq@maths.ox.ac.uk}
\and
S. Li \at
Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK.\\
\email{siran.li@maths.ox.ac.uk}
\and
M. Slemrod \at
Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA.\\
\email{slemrod@math.wisc.edu}
\and
D. Wang \at
Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260, USA.\\
\email{dwang@math.pitt.edu}
}
\date{Received: May 3, 2016 / Accepted: May 15, 2017}
\maketitle
\iffalse
\keywords{Underlying connections, fluids, elasticity, geometry, incompressible Euler equations,
continuum mechanics, compressible Euler
equations, isometric embedding,
wrinkled solutions, shear solutions, elastodynamics, Gauss-Codazzi equations,
Gauss curvature, first and second
fundamental forms, geometric flow.}
\subjclass[2010]{Primary: 53C42, 53C21, 53C45, 58J32, 35L65, 35M10;
Secondary: 76F02, 35D30, 35Q31, 35Q35, 35L45, 57R40, 57R42, 76H05, 76N10.}
\date{\today}
\fi
\begin{abstract} We are concerned with underlying connections between fluids,
elasticity,
isometric embedding of Riemannian manifolds,
and the existence of wrinkled solutions of
the associated
nonlinear partial differential equations.
In this paper, we develop such connections for the case of two spatial dimensions,
and demonstrate that the continuum mechanical equations can be mapped into
a corresponding geometric framework and the
inherent direct application of the theory of isometric embeddings
and the Gauss-Codazzi equations
through examples for the Euler equations for fluids
and the Euler-Lagrange equations for elastic solids.
These results show that the geometric theory provides an avenue
for addressing the admissibility
criteria for nonlinear conservation laws in continuum mechanics.
\end{abstract}
\tableofcontents
\section{Introduction}\label{S1}
We are concerned with underlying connections between fluids, elasticity,
isometric embedding of Riemannian manifolds,
and the existence of wrinkled solutions of the associated
nonlinear partial differential equations.
One of the main purposes of this paper is to develop such connections
for the case of two spatial dimensions,
and examine whether the continuum mechanical equations can be mapped into
a corresponding geometric framework and the
inherent direct application of the theory of isometric embeddings and the Gauss-Codazzi equations.
Another motivation for such a study is to explore the possibility whether the geometric theory
can serve as an avenue for addressing the admissibility criteria
for nonlinear conservation laws in continuum mechanics.
In recent years, a theory of {\it wild} solutions to
both the incompressible and compressible
Euler equations in two and higher spatial dimensions
has been developed
by De Lellis, Sz\'{e}kelyhidi Jr., and others
in
\cite{BDIS, BDS1, CDK, CDS2012, DS2009, DS2010, DS2012, DS2013, DS2014, DS2015,SzL2011,Wied}
and the references cited therein.
The approach is based on the
analogy with the highly irregular (static) solutions of the isometric
embedding of a two-dimensional Riemannian manifold into three-dimensional
Euclidean space given by the Nash-Kuiper theorem \cite{Kuiper,Nash1954}.
Specifically, the analogy arises because of the applicability of
Gromov's h-principle and
convex integration to both problems ({\it cf}. \cite{Gromov1986}).
This suggests that perhaps the initial value problem in fluid dynamics
and the embedding problem in differential geometry would be more than just two
analogous issues, and could in fact be mapped one to the other.
A first thought
on this issue would suggest that the question is not even meaningful:
the fluid problem is dynamic and the embedding problem is static.
Thus, if the linkage is to make any sense at all,
we must think of the embedding problem as
dynamic and derive the equations of a time evolving two-dimensional
Riemannian manifold.
Of course, conceptually it is easy to visualize
this time evolving two-dimensional Riemannian manifold as a
two-dimensional surface moving in three-dimensional space
that can be seen in
such an everyday phenomenon as the vibration of the surface of a drum.
Thus, to make the link, we must derive the equations of a new type of geometric
flow and then interpret the consequences of this fluid-geometric duality. In
fact, once we have mapped a solution of the Euler equations onto the evolving
surface, it is rather easy to see that the dynamic metric $\bg$ is a short metric
in the sense of the Nash-Kuiper theorem \cite{Kuiper,Nash1954}
with respect to a metric associated with a developable surface.
Hence, an immediate consequence of our theory is that the evolving
manifold can be approximated by wrinkled manifolds
({\it i.e.}, manifolds with discontinuous second derivatives)
and thus gives some indication of the
existence of {\it wild} solutions of the dual fluid problem.
A simple mental
picture of the geometric image of the fluid problem would be the motion of
$C^{1,\alpha}$ wrinkles on a piece of paper.
Unfortunately, while appealing, this mental picture is not correct, and
the correct visualization would be the fractal images given
by Borrelli et al. \cite{Borrelli2012,Borrelli2013}.
Moreover, the {\it wild} solutions of the geometric problem are
completely time-reversible and are in
reality a sequence of the Nash-Kuiper solutions of the embedding problem
with the metric given by $\bg=\bg^*$,
where $\bg^*$ corresponds to a developable surface
that is time-independent and has non-vanishing mean curvature.
The implication of this fact is immediate:
The geometric Nash-Kuiper {\it wild} solutions are time-reversible.
This suggests a plausible answer to the question raised
in \cite{BDIS, BDS1, CDK, CDS2012, DS2009, DS2010, DS2012, DS2013, DS2014, DS2015,SzL2011,Wied}
as to which is the correct admissibility criterion to choose the relevant solution
from the infinite number of non-unique solutions to the Euler equations.
Namely, no
dynamic admissibility criterion, such as the energy inequality, the entropy
inequality, or the entropy rate criterion, can serve the purpose,
since all such inequalities
become identities for these solutions.
The only possible useful criteria
must be meaningful for time
reversible
fluid flow, such as energy
minimization, artificial viscosity \cite{Dafermos-book},
or viscosity-capillarity \cite{Slemrod2013}.
The equations for geometric flow are abundant:
The Einstein equations of general
relativity and the Ricci flow equations are two of the better known examples.
In both of
these problems,
the metric for a Riemannian manifold becomes the dynamic
unknown.
In the theory developed in this paper, the same situation
arises: A dynamic metric $\bg$ is our unknown along with the second
fundamental form for the evolving manifold.
Furthermore, just as in the case
of the Einstein equations ({\it cf.} \cite{BIsen,KlaiR}),
the initial data must be
consistent with the problem; that is,
an embedded manifold must exist initially,
and the map must also be consistent with the divergence-free
condition on the velocity in the incompressible fluid case.
For the Einstein equations, this consistency of the initial data
yields the Einstein
constraint equations, while in our case a system of constraint
equations is also required.
In this paper we show that our constraint equations have
local analytic solutions; moreover, there is a velocity $(u,v)$
that defines both a solution of the incompressible Euler equations and an
evolving two-dimensional Riemannian manifold isometrically immersed
in $\mathbb{R}^{3}$.
A similar result is given for the compressible case,
as well as for neo-Hookean elasticity.
The most important issue is the physical meaning of the
results mentioned above.
In short, we emphasize the
comments which have appeared in Sz\'{e}kelyhidi Jr. \cite{SzL2011},
Bardos-Titi-Wiedemann \cite{BTitiW}, Bardos-Titi \cite{BTiti},
and Bardos-Lopes Filho-Niu-Nussenzveig Lopes-Titi \cite{BLFN}.
The appearance of the {\it wild} solutions is due essentially
to the application of the Euler equations with vortex sheet initial data.
Hence, the Euler equations, which, in contrast to the compressible or
incompressible Navier-Stokes equations, have no small
scales built into their theory, have been used in a case
where they
should not be expected to apply.
To this, we must add the
proviso that was again mentioned in
De Lellis-Sz\'{e}kelyhidi Jr. \cite{DS2009, DS2010, DS2012, DS2013, DS2014, DS2015,SzL2011};
also see Elling \cite{Elling2006,Elling2013}:
these {\it wild} solutions could be a demonstration of fluid turbulence.
The results given in this paper show that, if this is the case, then this
{\it wild} fluid turbulent behavior is mirrored by the solutions generated by the
non-smooth moving wrinkled surfaces produced by the Nash-Kuiper theorem \cite{Kuiper,Nash1954}.
This paper consists of nine sections after this brief introduction.
In \S 2, we recall the Euler equations for an inviscid incompressible fluid and
the Gauss-Codazzi equations for the isometric embedding of a two-dimensional
Riemannian manifold into three-dimensional Euclidean space.
In \S 3, as discussed above, we derive the equations of the geometric flow,
as well as the constraint conditions on the initial data.
In \S 4, we prove the solvability of the constraint equations for the
initial data. We emphasize that the constraint conditions require the
initial fluid velocity \textit{not} to be a shear flow. This assumption pairs
nicely with a theorem in De Lellis-Sz\'{e}kelyhidi Jr. \cite{DS2012} that non-smooth
shear flow initial data for both the compressible and incompressible
Euler equations are {\it wild} data and yield non-unique solutions to the Cauchy
problem for both the compressible and incompressible Euler equations.
In fact, our computation suggests that this is the only {\it wild} initial
data. On the other hand, we note that, for the degenerate case, when the fluid
motion is a shear flow, we still have a metric that provides the desired map,
namely, metric $\bg^*$.
In \S 5, we state and prove our main result:
Evolving from the initial data, there exists a metric $\bg$
of the geometric flow equations which yields a solution of both the Euler equations
and the equations describing the evolving isometrically immersed Riemannian
manifold $({\mathcal M},\bg)$.
In \S 6, we continue the discussion of the initial
data issue and show that, for the shear flow initial data in the hypotheses of
Lemma \ref{L41}, the symbol of the underlying system of second order partial
differential equations vanishes so that the equations for $\bg$
are degenerate;
however, as we just commented, metric $\bg^*$ suffices
in this singular case.
In \S 7, we give our principal result:
The evolving manifold arising from the Euler equations can be approximated by
wrinkled $C^{1,\alpha }$ manifolds $({\mathcal M}, \bg^*)$ for some $\alpha\in (0,1)$
which are continuous in time.
Since the time-continuity follows from a rather lengthy argument,
its proof is presented in a separate appendix of this paper for completeness.
These wrinkled solutions can be arranged as a time-sequence of solutions
which render the initial value problem for the evolving manifold to have an
infinite number of constant energy solutions.
Furthermore, we provide a formal map from the geometric wrinkled solutions to
weak solutions of the incompressible Euler equations.
In \S 8, a short discussion is provided to show that many of
our earlier results for the incompressible
Euler equations carry over to the compressible case.
Based on the knowledge that has been obtained from the fluid equations,
we show in \S 9 how the case of general continuum mechanics can be placed in our mechanics-geometry
framework. As an illustrative example, we demonstrate results for elastodynamic motion
of a neo-Hookean solid. Furthermore, these results
suggest that more refined continuum mechanical theories relying on micro-structure
could play a key role in choosing admissible solutions.
Our last section, \S 10, provides a discussion of
admissibility criteria for the (incompressible and compressible) Euler
equations. It asserts that, for the multidimensional Euler equations, no dynamic admissibility condition can
eliminate {\it wild} solutions, and hence the only meaningful criterion must be one which
eliminates, or at least reduces, the number of wrinkles of our dual geometric
problem. We also show how our work suggests a minimal dynamical model for internally stressed elastic materials.
The paper concludes with an appendix in which the time-continuity of the wrinkled
solutions is proved.
\section{$\,$ Basic Equations}\label{S2}
\subsection{$\,$ Geometric equations and notations}
We start with some basic geometric equations and notations for subsequent developments.
For more details, see Han-Hong \cite{HanHong} and the references cited therein.
Let $({\mathcal M},\bg)$ be a two-dimensional Riemannian manifold with
$\y(x_{1},x_{2})\in\R^{3}$ denoting a point on the manifold,
so that $\partial_{i}\y\cdot \partial_{j}\y=g_{ij}$.
The unit normal vector $\n$ to the manifold is given by
$$
\n=\frac{\partial_{1}\y\times \partial_{2}\y}{|\partial_{1}\y\times\partial_{2}\y|},
$$
and the second fundamental form is
$$
II=L(dx_{1})^{2}+2Mdx_{1}dx_{2}+N(dx_{2})^{2},
$$
where $L=\n\cdot \partial_{11}\y, \ M=\n\cdot \partial_{12}\y$, $N=\n\cdot \partial_{22}\y$,
and $\partial_{ij}:=\partial_i\partial_j$ with $\partial_i=\partial_{x_i}$
for $i,j=1,2$.
We will use the alternative version of the second fundamental form
$$
(L^{\prime}, M^{\prime}, N^{\prime})=\frac{1}{\sqrt{\det \bg}}(L,M,N),
$$
and recall that
\begin{equation}\label{2.1aa}
R_{1212}=\frac{\kappa}{\det \bg},
\end{equation}
where $\kappa$ is the Gauss curvature, and
$R_{ijkl}$ is the Riemann curvature tensor.
Furthermore, for notational simplicity, we henceforth drop the prime superscript
in the alternative version of the second fundamental form.
Then the Codazzi equations are
\begin{equation}\label{e21}
\begin{cases}
\partial _{1}N-\partial_{2}M
=-\Gamma _{22}^{1}L+2\Gamma_{12}^{1}M-\Gamma_{11}^{1}N,\\[2mm]
\partial_{1}M-\partial_{2}L
=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N,
\end{cases}
\ee
and the Gauss equation is
\be\label{e22}
LN-M^{2}=\kappa.
\ee
The Christoffel symbols are given by the following formulas:
\begin{equation*}
\Gamma_{ij}^{k}={\frac12}g^{kl}(\partial_{j}g_{il}+\partial_{i}g_{jl}
-\partial_{l}g_{ij}),
\end{equation*}
so that
\be\label{e23}
\begin{cases}
\Gamma_{11}^{1}
=\frac1{2\det \bg}\left[g_{22}(\partial_{1}g_{11})-g_{12}(2\partial_{1}g_{12}-\partial_{2}g_{11})\right],\\[1mm]
\Gamma_{12}^{1}
=\frac1{2\det \bg}\left[g_{22}(\partial_{2}g_{11})-g_{12}(\partial_{1}g_{22})\right],\\[1mm]
\Gamma_{22}^{1}
=\frac1{2\det \bg}\left[g_{22}(2\partial_{2}g_{21}-\partial_{1}g_{22})-g_{12}(\partial_{2}g_{22})\right],\\[1mm]
\Gamma_{11}^{2}
=\frac1{2\det \bg}\left[-g_{12}(\partial_{1}g_{11})+g_{11}(2\partial_{1}g_{12}-\partial_{2}g_{11})\right],\\[1mm]
\Gamma_{12}^{2}
=\frac1{2\det \bg}\left[-g_{12}(\partial_{2}g_{11})+g_{11}(\partial_{1}g_{22})\right],\\[1mm]
\Gamma_{22}^{2}
=\frac1{2\det \bg}\left[-g_{12}(2\partial_{2}g_{21}-\partial_{1}g_{22})+g_{11}(\partial_{2}g_{22})\right],
\end{cases}
\ee
and the Riemann curvature tensor is given by
\begin{equation}\label{2.4-a}
R_{iljk}=g_{lm}\big(\partial_{k}\Gamma_{ij}^{m}
-\partial_{j}\Gamma_{ik}^{m}+\Gamma_{ij}^{n}\Gamma_{nk}^{m}
-\Gamma_{ik}^{n}\Gamma_{nj}^{m}\big),
\end{equation}
where we have used the Einstein summation convention (repeated indices
are implicitly summed), which will also be used from now on.
In particular, we have
\begin{equation}\label{2.4-b}
\begin{split}
R_{1212}
=&g_{21}\big(\partial_{2}\Gamma_{11}^{1}-\partial_{1}\Gamma_{12}^{1}
+\Gamma_{11}^{1}\Gamma_{12}^{1}+\Gamma_{11}^{2}\Gamma_{22}^{1}
-\Gamma_{12}^{1}\Gamma_{11}^{1}-\Gamma_{12}^{2}\Gamma_{21}^{1}\big)\\
&+g_{22}\big(\partial_{2}\Gamma_{11}^{2}-\partial_{1}\Gamma_{12}^{2}
+\Gamma_{11}^{1}\Gamma_{12}^{2}+\Gamma_{11}^{2}\Gamma_{22}^{2}
-\Gamma_{12}^{1}\Gamma_{11}^{2}-\Gamma_{12}^{2}\Gamma_{21}^{2}\big).
\end{split}
\end{equation}
With \eqref{2.1aa},
we have Gauss's Theorema
Egregium for the Gauss curvature.
A convenient form is given by Brioschi's
formula:
\be\label{e24}
\begin{split}
\kappa =&\frac1{8(\det \bg)^{2}}\det
\begin{bmatrix}
-\partial_{22}g_{11}+2\partial_{12}g_{12}-\partial_{11}g_{22} &
\partial_{1}g_{11}
& 2\partial_{1}g_{12}-\partial_{2}g_{11} \\[1mm]
2\partial_{2}g_{12}-\partial_{1}g_{22}
& 2g_{11} & 2g_{12} \\[1mm]
\partial_{2}g_{22} & 2g_{12} & 2g_{22}
\end{bmatrix} \\[2mm]
&-\frac1{8(\det \bg)^{2}}\det
\begin{bmatrix}
0 &
\partial_{2}g_{11} &
\partial_{1}g_{22} \\[1mm]
\partial_{2}g_{11} & 2g_{11} & 2g_{12} \\[1mm]
\partial_{1}g_{22} & 2g_{12} & 2g_{22}
\end{bmatrix}.
\end{split}
\ee
We recall that the fundamental theorem of surface theory states
that the solvability of the Gauss-Codazzi equations is a necessary and
sufficient condition for the existence of an isometric embedding,
{\it i.e.}, a
simply connected surface $\y\in\mathbb{R}^{3}$
satisfying $\partial_{i}\y\cdot \partial_{j}\y=g_{ij}$.
A convenient reference for the smooth version of the fundamental theorem
is do Carmo \cite{CarmoM}, while a non-smooth version can be found
in
Mardare \cite{Mardare1, Mardare2}.
\subsection{$\,$ Incompressible Euler equations for an inviscid fluid}
The equations for the balance of linear momentum are
\be\label{e25}
\begin{cases}
\partial_{1}(u^{2}+p)+\partial_{2}(uv)=-\partial_{t}u,\\
\partial_{1}(uv)+\partial_{2}(v^{2}+p)=-\partial_{t}v,
\end{cases}
\ee
where the density is normalized to the constant $\rho =1$.
For constant density, the incompressibility condition is then given by
the equation:
\be\label{e26}
\partial_{1}u+\partial_{2}v=0.
\ee
In addition, taking the divergence of equations \eqref{e25}
and using the incompressibility condition \eqref{e26},
we have
\be\label{e27}
\partial_{11}(u^{2})
+2\partial_{12}(uv)+\partial_{22}(v^{2})
=-\triangle p.
\ee
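In detail, applying $\partial_{1}$ to the first equation in \eqref{e25}
and $\partial_{2}$ to the second, and then summing, the right-hand side becomes
$$
-\partial_{t}(\partial_{1}u+\partial_{2}v)=0
$$
by \eqref{e26}, while the left-hand side is
$\partial_{11}(u^{2}+p)+2\partial_{12}(uv)+\partial_{22}(v^{2}+p)$,
which is exactly \eqref{e27} upon collecting the pressure terms into $\triangle p$.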
\subsection{$\,$ The geometric equations in fluid variables}
Just as in Chen-Slemrod-Wang \cite{CSW2010},
it is convenient to write the geometric
equations in fluid variables. Set
\be\label{e28}
L=v^{2}+p,\quad M=-uv, \quad N=u^{2}+p.
\ee
Then the Gauss equation becomes
$$
(v^{2}+p)(u^{2}+p)-(uv)^{2}=\kappa,
$$
that is,
$$
p^{2}+pq^{2}=\kappa,
$$
where $q^{2}=u^{2}+v^{2}$.
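Indeed, expanding the left-hand side term by term:
$$
(v^{2}+p)(u^{2}+p)-(uv)^{2}
=u^{2}v^{2}+p\big(u^{2}+v^{2}\big)+p^{2}-u^{2}v^{2}
=p^{2}+pq^{2}.
$$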
This quadratic equation then gives
\be\label{e29}
p=-\frac12q^{2}\pm \frac12\sqrt{q^{4}+4\kappa}.
\ee
This means that
$$
q^{4}+4\kappa \geq 0
$$
must be required.
As will be seen in the analysis below,
this condition is always satisfied.
We have just shown that $(L,M,N)$ can be written in fluid variables.
Now we can write the fluid variables in terms of the geometric variables
$(L,M,N)$.
To do this, simply substitute formula \eqref{e29}
into \eqref{e28}
to find
$$
L-N=v^{2}-u^{2}.
$$
Write $v=-\frac{M}{u}$ to see that
$
L-N=\big(\frac{M}{u}\big)^{2}-u^{2},
$
which yields that
$$
u^{4}+(L-N)u^{2}-M^{2}=0.
$$
Then we see
$$
u^{2}=\frac12\left(-(L-N)\pm \sqrt{(L-N)^{2}+4M^{2}}\right),
$$
and, using $L-N=v^{2}-\big(\frac{M}{v}\big)^{2}$,
$$
v^{2}=\frac12\left((L-N)\pm \sqrt{(L-N)^{2}+4M^{2}}\right).
$$
Since $u^{2}$ and $v^{2}$ must be nonnegative, while $\sqrt{(L-N)^{2}+4M^{2}}\geq |L-N|$, the ``+'' sign must be chosen in the above formulas so that
\be\label{e210}
\begin{cases}
u^{2}=\frac12\left(-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\right),\\[1.5mm]
v^{2}=\frac12\left((L-N)+\sqrt{(L-N)^{2}+4M^{2}}\right).
\end{cases}
\ee
Note that the condition: $q^{4}+4\kappa \geq 0$, with $q^{2}=\sqrt{(L-N)^{2}+4M^{2}}$, is equivalent to
$$
\big((L-N)^{2}+4M^{2}\big)+4\kappa \geq 0,
$$
that is,
$$
\big((L-N)^{2}+4(LN-\kappa)\big)+4\kappa =(L+N)^{2}\geq 0,
$$
which is always satisfied.
Thus, we have shown that $(u,v,p)$ are determined by $(L,M,N)$,
since
$$
p=\frac{1}{2}\big(-q^{2}\pm \sqrt{q^{4}+4\kappa}\big),
\quad
q^{2}=u^{2}+v^{2}=\sqrt{(L-N)^{2}+4M^{2}}.
$$
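As a consistency check of \eqref{e28} and \eqref{e210}, observe that
$$
u^{2}v^{2}=\frac14\Big(\big((L-N)^{2}+4M^{2}\big)-(L-N)^{2}\Big)=M^{2},
\qquad v^{2}-u^{2}=L-N,
$$
in agreement with $M=-uv$ and $L-N=v^{2}-u^{2}$.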
\section{$\,$ The Equations for Geometric Flow}\label{S3}
We now construct a {\it dual} solution, which simultaneously satisfies
the incompressible Euler equations and the Gauss-Codazzi equations of
isometric embeddings.
As before, the Gauss-Codazzi equations are
\be\label{e31}
\begin{cases}
\partial_{1}N-\partial_{2}M
=-\Gamma_{22}^{1}L+2\Gamma_{12}^{1}M-\Gamma_{11}^{1}N,\\[1mm]
\partial_{1}M-\partial_{2}L=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N,
\end{cases}
\ee
and
\be\label{e32}
LN-M^2=\kappa.
\ee
Hence, for any $(u,v)$
to be a solution of the Euler
equations \eqref{e25}--\eqref{e26}, we must have
\be\label{e33}
\begin{cases}
\partial_{t}u=\Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma_{11}^{1}N,\\[1mm]
\partial_{t}v=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N,
\end{cases}
\ee
and
\be\label{e34}
\partial_{1}u+\partial_{2}v=0.
\ee
Taking the divergence of \eqref{e33} and using \eqref{e34}, we have
\be\label{e35}
\partial_{1}\big(\Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma_{11}^{1}N\big)
+\partial_{2}\big(\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N\big)=0.
\ee
For convenience, define
\be\label{e36}
\begin{split}
&U(L,M,N):=\left(\frac12\left[-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\right]\right)^{{\frac12}},\\
&V(L,M,N):=\left(\frac12\left[(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\right]\right)^{{\frac12}},
\end{split}
\ee
so that
\be\label{e37}
(u, v)=(U(L,M,N),V(L,M,N)).
\ee
The other choice of
$$
(u, v)=-(U(L,M,N),V(L,M,N))
$$
can be handled similarly.
\smallskip
In summary, we have the two evolution equations \eqref{e33} and four closure
relations \eqref{e31}--\eqref{e32} and \eqref{e35}.
Since $\Gamma_{ij}^{k}$ and $\kappa$ are functions of $\bg$ through \eqref{e23}--\eqref{e24},
we obtain the six equations \eqref{e31}--\eqref{e33} and \eqref{e35} for the six unknowns $(L, M, N, \bg)$,
where $\bg$ carries the three components $(g_{11}, g_{12}, g_{22})$.
\section{$\,$ The Constraint Equations and Their Consequences}\label{S4}
In this section, we exposit the constraint equations on the initial data and
the consequences of their solvability.
Our first result is
\begin{theorem}\label{Theorem41}
Assume that the initial data $(L, M, N, \bg)$ at $t=0$ satisfy the five constraint equations:
\begin{align}
&\partial_{1}U+\partial_{2}V=0, \label{e41}\\
&\partial_{1}\big(\Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma_{11}^{1}N\big)
+\partial_{2}\big(\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N\big)=0,\label{e42}\\
&\partial_{1}N-\partial_{2}M=-\Gamma_{22}^{1}L+2\Gamma_{12}^{1}M-\Gamma_{11}^{1}N, \label{e43}\\
&\partial_{1}M-\partial_{2}L=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N,\label{e44}\\
&LN-M^{2}=\kappa, \label{e45}
\end{align}
which mean that the initial data are consistent with the
incompressibility of the fluid and that the evolving manifold is initially
indeed a Riemannian manifold.
Then, if the system of six equations \eqref{e31}--\eqref{e33} and \eqref{e35}
in the six unknowns $(L, M, N, \bg)$ is
satisfied, it produces simultaneously a solution of both the Gauss-Codazzi
equations and the incompressible Euler equations.
\end{theorem}
\proof
$\,$ Since $(u,v)=(U(L,M,N),V(L,M,N))$, we see from the
evolution equations \eqref{e33} and the closure relation \eqref{e35} that
$$
\partial_{t}(\partial_{1}U+\partial_{2}V)=0.
$$
Since the initial data satisfy the constraint equations,
we conclude that, for $t>0$,
$$
\partial_{1}U+\partial_{2}V=0.
$$
Next, since the Gauss-Codazzi equations are satisfied for all $t>0$,
the fluid representation \eqref{e28} for $(L, M, N)$
allows us
to write the Codazzi equations as
\begin{eqnarray*}
&\partial_{1}(u^{2}+p)+\partial_{2}(uv)
=-\Gamma_{22}^{1}L+2\Gamma_{12}^{1}M-\Gamma_{11}^{1}N,\\
&\partial_{1}(uv)+\partial_{2}(v^{2}+p)
=-\Gamma_{22}^{2}L+2\Gamma_{12}^{2}M-\Gamma_{11}^{2}N,
\end{eqnarray*}
and the Gauss equation as
$$
p=\frac{1}{2}\big(-q^{2}+\sqrt{q^{4}+4\kappa}\big).
$$
By \eqref{e33},
we see that the equations for the balance of linear momentum are also satisfied.
The proof is complete.
\endproof
We next examine the solvability of the initial data system.
Note that \eqref{e42}--\eqref{e44} imply
\be\label{e46}
\partial_{1}(\partial_{1}N-\partial_{2}M)
+\partial_{2}(-\partial_{1}M+\partial_{2}L)=0,
\ee
and the initial data system can be written as
\begin{align}
&\partial_{1}U+\partial_{2}V=0, \label{e41b}\\
&\partial_{11}N-2\partial_{12}M+\partial_{22}L=0,\label{e46b}\\
&\partial_{1}N-\partial_{2}M=-\Gamma_{22}^{1}L+2\Gamma_{12}^{1}M
-\Gamma_{11}^{1}N, \label{e43b}\\
&\partial_{1}M-\partial_{2}L=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M
+\Gamma_{11}^{2}N,\label{e44b}\\
&LN-M^{2}=\kappa. \label{e45b}
\end{align}
We can reverse the above computation: if \eqref{e46b}--\eqref{e44b} are satisfied,
we
take the divergence of the left-hand sides of \eqref{e43b}--\eqref{e44b}
and employ \eqref{e46b}
to obtain \eqref{e42}.
From \eqref{e23},
we find that
\eqref{e41} and \eqref{e43}--\eqref{e46} become an
underdetermined
system of five equations in the six unknowns $(L, M, N, \bg)$.
Our existence result for the initial data satisfying the constraint equations
reads as follows:
\begin{lemma}\label{L41}
Let an analytic divergence free velocity $(u, v)$ be
prescribed in a neighborhood of a point $(x_{1},x_{2})=(0,0)$
such that $uv\neq 0$ at this point.
Set $(x_{1}^{\prime}, x_{2}^{\prime})=(x_{1}+x_{2}, x_{1}-x_{2})$.
On $x_{1}^{\prime}=0$ {\rm (}respectively $x_{2}^{\prime}=0${\rm )},
prescribe the analytic initial data{\rm :}
$g_{11}=g_{22}=1,
\partial_{x_{1}^{\prime}}g_{11}=\partial_{x_{1}^{\prime}}g_{22}=0$,
and $g_{12}$ satisfying the ordinary
differential equation:
\begin{equation}\label{ode-1}
\begin{split}
\partial_{x_{2}^{\prime}}g_{12}=
&\frac1{2N}\big[g_{12} (\partial_{1}N-\partial_{2}M)+ (-\partial_{1}M+\partial_{2}L)\big]\\
&-\frac1{2L}\big[(\partial_{1}N-\partial_{2}M)+g_{12} (-\partial_{1}M+\partial_{2}L)\big],
\end{split}
\end{equation}
with initial condition $g_{12}=0$ at $x_{2}^{\prime }=0$
so that $g_{ij}=\delta _{ij}$ at $(x_{1}^{\prime}, x_{2}^{\prime})=(0,0)$
{\rm (}respectively, $\partial_{x_{2}^{\prime}}g_{11}=\partial_{x_{2}^{\prime}}g_{22}=0$
and a similar ordinary differential equation
and initial data{\rm )}.
Then the initial data system \eqref{e41}--\eqref{e45}
has a local analytic solution $(g_{11}, g_{12}, g_{22})$.
\end{lemma}
\proof $\,$ We divide the proof into eight steps.
\smallskip
{\bf 1.} In fluid variables, three of our equations are
\begin{align}
&\partial_{1}u+\partial _{2}v=0, \label{e41f} \\
&\partial_{11}(u^{2})+2\partial_{12}(uv)
+\partial_{22}(v^{2})=-\triangle p, \label{e46f}\\
&p=\frac12\big(-q^{2}+\sqrt{q^{4}+4\kappa}\big). \label{e45f}
\end{align}
Next, we prescribe the velocity to make this a determined system. If a
divergence-free velocity $(u, v)$ is prescribed,
then \eqref{e46f} is immediately
solvable for $p$ under the standard regularity assumptions
on $(u, v)$, and hence \eqref{e45f}
defines the Gauss curvature $\kappa$.
Thus, $(L, M, N)$ are known, independently of the metric $\bg$.
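Note that \eqref{e45f} is simply the Gauss relation solved for the pressure:
squaring $2p+q^{2}=\sqrt{q^{4}+4\kappa}$ gives
$$
4p^{2}+4pq^{2}+q^{4}=q^{4}+4\kappa,
\qquad\mbox{that is,}\quad \kappa=p^{2}+pq^{2},
$$
so that, once $p$ is obtained from \eqref{e46f}, the Gauss curvature $\kappa$
is determined pointwise from $(u,v,p)$.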
\smallskip
{\bf 2.} Now, determine $\bg$ by solving the three equations:
\begin{align}
&\kappa \det \bg=g_{21}\big(\partial_{2}\Gamma_{11}^{1}
-\partial_{1}\Gamma_{12}^{1}+\Gamma _{11}^{1}\Gamma_{12}^{1}
+\Gamma_{11}^{2}\Gamma_{22}^{1}-\Gamma_{12}^{1}\Gamma_{11}^{1}
-\Gamma _{12}^{2}\Gamma_{21}^{1}\big)\notag \\
&\qquad \qquad + g_{22}\big(\partial_{2}\Gamma_{11}^{2}
-\partial_{1}\Gamma_{12}^{2}+\Gamma_{11}^{1}\Gamma_{12}^{2}
+\Gamma_{11}^{2}\Gamma_{22}^{2}-\Gamma_{12}^{1}\Gamma_{11}^{2}
-\Gamma_{12}^{2}\Gamma_{21}^{2}\big), \label{e48}\\
&\partial_{1}N-\partial_{2}M=-\Gamma_{22}^{1}L+2\Gamma_{12}^{1}M
-\Gamma_{11}^{1}N, \label{e43c} \\
&-\partial_{1}M+\partial_{2}L=-\Gamma_{22}^{2}L+2\Gamma_{12}^{2}M
-\Gamma_{11}^{2}N. \label{e44c}
\end{align}
\smallskip
{\bf 3.} We use \eqref{e43c}--\eqref{e44c} to
solve for $\partial_{1}g_{12}$ and $\partial_{2}g_{21}$.
Simply use \eqref{e23}
to write these two equations \eqref{e43c}--\eqref{e44c} as
\begin{align}
&\frac1{\det \bg}
\big[
-g_{22}L (\partial_{2}g_{21})
+
g_{12}N(\partial_{1}g_{12})\big]\notag\\
&=\partial_{1}N-\partial_{2}M
+\frac1{2\det \bg}\left[
g_{22}(-\partial _{1}g_{22})-
g_{12}(\partial _{2}g_{22})\right]L\notag\\
&\quad -\frac{1}{\det \bg}\left[
g_{22}(\partial_{2}g_{11})-
g_{12}(\partial _{1}g_{22})\right]M\notag\\
&\quad +\frac1{2\det \bg} \left[
g_{22}(\partial _{1}g_{11})-
g_{12}(-\partial _{2}g_{11})\right] N, \label{e421}
\end{align}
\begin{align}
&\frac1{\det \bg}
\big[
g_{12}L(\partial_{2}g_{21})
-
g_{11}N(\partial_{1}g_{12})\big]\notag\\
&=-\partial_{1}M+\partial_{2}L
+\frac1{2\det \bg}\left[
g_{12}(-\partial_{1}g_{22})+
g_{11}(\partial _{2}g_{22})\right]L\notag\\
&\quad-\frac{1}{\det \bg}
\left[ -
g_{12}(\partial_{2}g_{11})+
g_{11}(\partial _{1}g_{22})\right] M\notag\\
&\quad+\frac1{2\det \bg}\left[ -
g_{12}(\partial _{1}g_{11})+
g_{11}(-\partial _{2}g_{11})\right]N.\label{e422}
\end{align}
In matrix form, we have
\bes
\begin{split}
\frac1{\det \bg}
\begin{bmatrix}
g_{12}N & -g_{22}L \\
-g_{11}N & g_{12}L
\end{bmatrix}
\begin{bmatrix}
\partial_{1}g_{12} \\
\partial_{2}g_{21}
\end{bmatrix}
= \begin{bmatrix}
\partial_{1}N-\partial_{2}M\\
-\partial_{1}M+\partial_{2}L
\end{bmatrix}
+\frac1{\det \bg}
\begin{bmatrix}
G_1 \\
G_2
\end{bmatrix},
\end{split}
\ees
where
\bes
\begin{split}
G_1=&\frac{1}{2}\big[g_{22}(-\partial _{1}g_{22})-g_{12}(\partial _{2}g_{22})\big]L
-\big[g_{22}(\partial_{2}g_{11})-g_{12}(\partial _{1}g_{22})\big]M\\
&+\frac{1}{2}\big[g_{22}(\partial_{1}g_{11})-g_{12}(-\partial_{2}g_{11})\big]N,\\
G_2=&\frac{1}{2}\big[g_{12}(-\partial_{1}g_{22})+g_{11}(\partial_{2}g_{22})\big] L
-\big[-g_{12}(\partial_{2}g_{11})+g_{11}(\partial_{1}g_{22})\big] M\\
&+\frac{1}{2}\big[-g_{12}(\partial_{1}g_{11})+g_{11}(-\partial_{2}g_{11})\big]N.
\end{split}
\ees
The inverse of the coefficient matrix is
$$
\begin{bmatrix}
\frac{g_{12}}{N} & \frac{g_{22}}{N} \\[1.5mm]
\frac{g_{11}}{L} & \frac{g_{12}}{L}
\end{bmatrix},
$$
which gives
\be\label{e423}
\begin{bmatrix}
\partial_{1}g_{12} \\
\partial_{2}g_{21}
\end{bmatrix}
=
\begin{bmatrix}
\frac{g_{12}}{N} & \frac{g_{22}}{N} \\[1.5mm]
\frac{g_{11}}{L} & \frac{g_{12}}{L}
\end{bmatrix}
\begin{bmatrix}
\partial_{1}N-\partial_{2}M+\frac1{\det \bg}G_1\\
-\partial_{1}M+\partial_{2}L+\frac1{\det \bg}G_2
\end{bmatrix}.
\ee
\smallskip
{\bf 4.} From \eqref{e423}, the equality of cross partials gives us the additional consistency
equation:
\be\label{e424}
\begin{split}
&\partial_{2}\left(\frac{g_{12}}{N}\Big(\partial_{1}N-\partial _{2}M+\frac1{\det \bg}G_1\Big)
+ \frac{g_{22}}{N}\Big(-\partial _{1}M+\partial _{2}L+\frac1{\det \bg}G_2\Big)\right)\\
&=\partial_{1}\left(\frac{g_{11}}{L}\Big(\partial_{1}N-\partial _{2}M+\frac1{\det \bg}G_1\Big)
+\frac{g_{12}}{L}\Big(-\partial_{1}M+\partial _{2}L+\frac1{\det \bg}G_2\Big)\right).
\end{split}
\ee
The Gauss curvature equation \eqref{e48} has the form:
\be\label{e425}
\kappa =\frac{1}{2\det \bg}\left[-\partial_{22}g_{11}+2\partial_{12}g_{12}
-\partial_{11}g_{22}\right] +l.o.t.
\ee
Substitution of \eqref{e423} into \eqref{e425} yields
the two second-order equations for $(g_{11},g_{22})$.
\smallskip
{\bf 5.} Now compute the highest order terms of the differential operators
associated with (\ref{e423}) at the origin where $g_{ij}=\delta _{ij}$:
\begin{equation}\label{e4.24-a}
\begin{split}
\begin{bmatrix}
\partial_{1}g_{12} \\[1.5mm]
\partial_{2}g_{21}
\end{bmatrix}
&=
\begin{bmatrix}
0 & \frac{1}{N} \\[1.5mm]
\frac{1}{L} & 0
\end{bmatrix}
\begin{bmatrix}
\partial_{1}N-\partial_{2}M-{\frac12}\partial_{1}g_{22}L-\partial_{2}g_{11}M+{\frac12}\partial_{1}g_{11}N \\[1.5mm]
-\partial_{1}M+\partial_{2}L+{\frac12}\partial_{2}g_{22}L-\partial_{1}g_{22}M-{\frac12}\partial_{2}g_{11}N
\end{bmatrix}\\[2mm]
&=
\begin{bmatrix}
\frac{1}{N}\big(-\partial_{1}M+\partial_{2}L+{\frac12}\partial_{2}g_{22}L-\partial_{1}g_{22}M-{\frac12}\partial_{2}g_{11}N\big) \\[1.5mm]
\frac{1}{L}\big(\partial_{1}N-\partial_{2}M-{\frac12}\partial_{1}g_{22}L-\partial_{2}g_{11}M+{\frac12}\partial_{1}g_{11}N\big)
\end{bmatrix}.
\end{split}
\end{equation}
Then \eqref{e425} becomes
\be\label{e426}
\begin{split}
\kappa =&\frac{1}{2}\left[-\partial_{22}g_{11}-
\partial_{11}g_{22}\right]
+\frac{1}{4N}\left[
\partial_{22}g_{22}L-2\partial_{12}g_{22}M-
\partial_{22}g_{11}N\right]\\
&+\frac{1}{4L}\left[-
\partial_{11}g_{22}L-2\partial_{12}g_{11}M+
\partial_{11}g_{11}N\right]+l.o.t.
\end{split}
\ee
and \eqref{e424} becomes
\begin{equation}\label{e426-b}
\begin{split}
&\frac{1}{2N}\big[
(\partial_{22}g_{22})L
-2(\partial_{12}g_{22})M-
(\partial_{22}g_{11})N\big]\\
&=\frac{1}{2L}\big[-
(\partial_{11}g_{22})L
-2(\partial_{12}g_{11})M+
(\partial_{11}g_{11})N\big]+l.o.t.
\end{split}
\end{equation}
In matrix form, \eqref{e426}--\eqref{e426-b} are written as
\be\label{e427}
\begin{bmatrix}
{\frac14}
\frac{N}{L} & -
{\frac14}
\\[1.5mm]
-
{\frac12}
\frac{N}{L} &
{\frac12}
\end{bmatrix}
\partial_{11}
\begin{bmatrix}
g_{11} \\[1.5mm]
g_{22}
\end{bmatrix}
+
\begin{bmatrix}
-\frac{M}{2L} & -\frac{M}{2N} \\[1.5mm]
\frac{M}{L} & -\frac{M}{N}
\end{bmatrix}
\partial_{12}
\begin{bmatrix}
g_{11} \\[1.5mm]
g_{22}
\end{bmatrix}
+
\begin{bmatrix}
-
{\frac14}
&
{\frac14}
\frac{L}{N} \\[1.5mm]
-
{\frac12}
&
{\frac12}
\frac{L}{N}
\end{bmatrix}
\partial_{22}
\begin{bmatrix}
g_{11} \\[1.5mm]
g_{22}
\end{bmatrix}
=l.o.t.
\ee
Make the change of independent variables:
$$
(x_{1}^{\prime}, x_{2}^{{\prime}})=(x_{1}+x_{2}, x_{1}-x_{2}),
$$
so that, for any $f\in C^2$,
\begin{align*}
&\frac{\partial^{2}f}{\partial x_{1}^{2}}
=\frac{\partial^{2}f}{\partial x_{1}^{\prime 2}}
+2\frac{\partial^{2}f}{\partial x_{1}^{\prime }\partial x_{2}^{\prime}}
+\frac{\partial^{2}f}{\partial x_{2}^{\prime 2}},\quad
\frac{\partial^{2}f}{\partial x_{2}^{2}}
=\frac{\partial^{2}f}{\partial x_{1}^{{\prime }2}}
-2\frac{\partial^{2}f}{\partial x_{1}^{\prime }\partial x_{2}^{\prime}}
+\frac{\partial^{2}f}{\partial x_{2}^{{\prime }2}},\\[2mm]
&\frac{\partial^{2}f}{\partial x_{1}\partial x_{2}}
=\frac{\partial^{2}f}{\partial x_{1}^{\prime 2}}
+\frac{\partial^{2}f}{\partial x_{2}^{\prime 2}}.
\end{align*}
Then we have
\be\label{e427-a}
A\frac{\partial^{2}}{\partial x_{1}^{\prime 2}}
\begin{bmatrix}
g_{11}\\
g_{22}
\end{bmatrix}
+
A\frac{\partial^{2}}{\partial x_{2}^{\prime 2}}
\begin{bmatrix}
g_{11}\\
g_{22}
\end{bmatrix}
+\big(\frac{N}{L}+1\big)
\begin{bmatrix}
\frac{1}{2} & -\frac{1}{2} \\[1.5mm]
-1 & 1
\end{bmatrix}
\frac{\partial^{2}}{\partial x_{1}^{\prime}\partial x_2^{\prime}}
\begin{bmatrix}
g_{11}\\
g_{22}
\end{bmatrix}
=l.o.t.
\ee
where the coefficient matrix $A$ is
\begin{align*}
A&=\begin{bmatrix}
{\frac14}
\frac{N}{L} & -
{\frac14}
\\[1.5mm]
-
{\frac12}
\frac{N}{L} &
{\frac12}
\end{bmatrix}
+
\begin{bmatrix}
-\frac{M}{2L} & -\frac{M}{2N} \\[1.5mm]
\frac{M}{L} & -\frac{M}{N}
\end{bmatrix}
+
\begin{bmatrix}
-
{\frac14}
&
{\frac14}
\frac{L}{N} \\[1.5mm]
-
{\frac12}
&
{\frac12}
\frac{L}{N}
\end{bmatrix}\\[2mm]
&=
\begin{bmatrix}
{\frac14}
\frac{N}{L}-\frac{M}{2L}-
{\frac14}
& -
{\frac14}
-\frac{M}{2N}+
{\frac14}
\frac{L}{N} \\[1.5mm]
-
{\frac12}
\frac{N}{L}+\frac{M}{L}-
{\frac12}
&
{\frac12}
-\frac{M}{N}+
{\frac12}
\frac{L}{N}
\end{bmatrix}
\end{align*}
with determinant equal to
$$
-{\frac12}\frac{M}{LN}\left(N+L-2M\right)
=-{\frac12}\frac{M}{LN}\left(N+L-2\sqrt{LN-\kappa}\right).
$$
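For completeness, this determinant can be checked directly. At the origin,
the entries of $A$ simplify to
$$
A=\begin{bmatrix}
\frac{N-2M-L}{4L} & \frac{L-2M-N}{4N}\\[1.5mm]
\frac{2M-N-L}{2L} & \frac{N-2M+L}{2N}
\end{bmatrix},
$$
so that
$$
\det A=\frac{\big((N-2M)^{2}-L^{2}\big)-(N+2M-L)(N+L-2M)}{8LN}
=\frac{-4M(N+L-2M)}{8LN}=-\frac{M}{2LN}(N+L-2M).
$$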
If $N+L-2\sqrt{LN-\kappa}=0$, then
$(N-L)^{2}=-4\kappa$.
Thus, the conditions that $\kappa >0$ and $M\neq 0$
together make this coefficient matrix
non-singular.
Since $M=-uv$ in fluid variables, we need $uv\neq 0$, with the
curvature $\kappa$ determined via $p^{2}+pq^{2}=\kappa$ from the
pressure $p$ given by the Poisson equation.
Of course, when $p>0$, both $L>0$ and $N>0$, so that $\frac{M}{LN}\neq 0$ whenever $M\neq 0$.
Clearly, if the pressure $p$ determined by the Poisson equation is positive,
then the
condition that $\kappa >0$ is automatically satisfied.
Then the line $x_1^\prime=0$ is non-characteristic for the system \eqref{e427-a}.
\smallskip
{\bf 6.} Since $\partial_{x_{1}^{\prime }}g_{12}={\frac12}(\partial _{x_{1}}g_{12}+\partial _{x_{2}}g_{12})$,
\eqref{e423} gives a partial differential equation for $g_{12}$ in the direction
normal to the initial data line $x_{1}^{\prime }=0$:
\begin{align}
\frac{\partial g_{12}}{\partial x_{1}^{\prime}}
=&\frac{1}{2}\big(\frac{g_{12}}{N}+\frac{g_{11}}{L}\big)\big(\partial_1N-\partial_2M+\frac{1}{\det \bg}G_1\big)\notag\\[1.5mm]
&+\frac{1}{2}\big(\frac{g_{22}}{N}+\frac{g_{12}}{L}\big)\big(-\partial_1M +\partial_2L+\frac{1}{\det \bg}G_2\big). \label{e427-b}
\end{align}
\smallskip
{\bf 7.} We now determine the initial data for system \eqref{e427-a}--\eqref{e427-b}
on line $x_{1}^{\prime}=0$ that is non-characteristic for the system.
Since
\begin{eqnarray*}
&&\partial_{x_{1}^{\prime }}g_{11}={\frac12}(\partial_{x_{1}}g_{11}+\partial _{x_{2}}g_{11}),
\quad \partial_{x_{1}^{\prime}}g_{22}={\frac12}(\partial_{x_{1}}g_{22}+\partial _{x_{2}}g_{22}),\\
&&\partial_{x_{2}^{\prime }}g_{11}={\frac12}(\partial_{x_{1}}g_{11}-\partial_{x_{2}}g_{11}), \quad
\partial_{x_{2}^{\prime }}g_{22}={\frac12}(\partial _{x_{1}}g_{22}-\partial _{x_{2}}g_{22}),
\end{eqnarray*}
the initial conditions:
$$
g_{ij}=\delta _{ij}, \quad
\partial _{x_{1}^{\prime }}g_{11}=\partial _{x_{1}^{\prime }}g_{22}=0
$$
also give us
$$
\partial_{x_{2}^{\prime}}g_{11}=\partial_{x_{2}^{\prime}}g_{22}=0.
$$
This yields
$$
\partial_{x_{1}}g_{11}=\partial_{x_{2}}g_{11}
=\partial_{x_{1}}g_{22}=\partial_{x_{2}}g_{22}=0\qquad \mbox{on $x_{1}^{\prime}=0$}.
$$
Then, from the formula:
$\partial_{x_{2}^{\prime}}g_{12}={\frac12}(\partial_{x_{1}}g_{12}-\partial_{x_{2}}g_{12})$,
\eqref{e423}, and
$$
\begin{bmatrix}
\partial_{1}g_{12} \\[1.5mm]
\partial_{2}g_{12}
\end{bmatrix}
=
\begin{bmatrix}
\frac{g_{12}}{N} & \frac{1}{N} \\[1.5mm]
\frac{1}{L} & \frac{g_{12}}{L}
\end{bmatrix}
\begin{bmatrix}
\partial_{1}N-\partial_{2}M \\[1.5mm]
-\partial_{1}M+\partial_{2}L
\end{bmatrix},
$$
we obtain the ordinary differential equation \eqref{ode-1} on line $x_{1}^{\prime}=0$.
This differential equation \eqref{ode-1}
and the initial condition: $g_{12}=0$ at $x_{2}^{\prime}=0$
determine the initial data for $g_{12}$ on line $x_{1}^{\prime}=0$.
Therefore, we have specified analytic initial data
on the
non-characteristic line $x_{1}^{\prime}=0$ for system \eqref{e427-a}--\eqref{e427-b}.
\smallskip
{\bf 8.} Finally, we see that the Cauchy-Kowalewski
theorem delivers a local analytic solution $(g_{11}, g_{12}, g_{22})$
of system \eqref{e427-a}--\eqref{e427-b}
with the analytic data
on the non-characteristic line $x_{1}^{\prime}=0$,
where we have used the analyticity of $p$ as the
solution to our Poisson equation and the analyticity of $\kappa$
from the
formula: $\kappa=p^{2}+pq^{2}$.
The positivity condition on $p$ is
irrelevant and can always be satisfied by simply adding to any $p$ solving the
Poisson equation a sufficiently large positive constant.
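This works because the Poisson equation determines $p$ only up to a harmonic function: for any constant $c$,
$$
\triangle (p+c)=\triangle p=-\big(\partial_{11}(u^{2})+2\partial_{12}(uv)+\partial_{22}(v^{2})\big),
$$
so that $p+c$ is again a solution, and $c$ can be chosen so large that $p+c>0$ on the local neighborhood in question.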
This completes the proof.
\endproof
Notice that the shear flow data $(u,v)=(u(x_{2}),0)$ are divergence free,
no matter how smooth or in
fact how irregular $u$ is.
In this case, $M=0$, so that Lemma \ref{L41}
does not apply to yield a metric $\bg$.
However, in this case, we have the following trivial result.
\begin{lemma}\label{L42}
If $M=0$ in any open set of $\mathbb{R}^{2}$, then there exists a developable surface
\begin{equation}\label{develop}
\y=(Ax_2, Ax_1, f(x_2))
\end{equation}
with corresponding metric $\bg^*$ given by
$$
g^*_{11}(\x)=A^2, \quad g^*_{12}(\x)=0, \quad g^*_{22}(\x)=(f'(x_2))^2+A^2,
$$
with
$$
f'(x_2)=A\tan\big(-A\int_0^{x_2}u^2(s)\, ds\big),
$$
with $A>1$ a constant.
Similarly, the corresponding developable surface with metric $\bg^*$
can be obtained for $v=v(x_1)$ as another shear flow. In particular, $g_{ij}^*(\mathbf{0})=A^2\delta_{ij}$ and $\partial_k g_{ij}^*(\mathbf{0})=0$ for $i,j,k=1,2$.
Conversely, the developable surface \eqref{develop} with $f$ and $A$ given defines a shear
flow given by \eqref{4.2-1} below and $v = 0$.
\end{lemma}
\proof $\,$ Consider the surface:
$$
y_1=Ax_2, \quad y_2=Ax_1, \quad y_3=f(x_2), \quad A=\text{const.}>1.
$$
Then
\begin{eqnarray*}
&& \partial_1 y_1=0, \quad \partial_1 y_2=A, \quad \partial_1 y_3=0,\\
&& \partial_2 y_1=A, \quad \partial_2 y_2=0,
\quad \partial_2 y_3=f'(x_2),
\end{eqnarray*}
so that
\begin{eqnarray*}
&& g_{11}=\partial_1 \y\cdot \partial_1 \y=A^2,\\
&& g_{12}=\partial_1 \y\cdot \partial_2 \y=0,\\
&& g_{22}=\partial_2 \y\cdot \partial_2 \y=\big(f'(x_2)\big)^2+A^2,
\end{eqnarray*}
and
$$
\mathbf{n}=\frac{\partial_1 \y\times \partial_2 \y}{|\partial_1 \y\times \partial_2 \y|}
=\frac{(f'(x_2), 0, -A)}{\sqrt{\big(f'(x_2)\big)^2+A^2}}.
$$
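Here the unit normal follows from the direct computation:
$$
\partial_1 \y\times \partial_2 \y
=(0,A,0)\times\big(A,0,f'(x_2)\big)
=\big(Af'(x_2),\,0,\,-A^{2}\big),
\qquad
|\partial_1 \y\times \partial_2 \y|=A\sqrt{\big(f'(x_2)\big)^2+A^2}.
$$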
Furthermore, we have
\begin{eqnarray*}
&& L=\frac{1}{\sqrt{\det{\bg}}} \partial_{11} \y\cdot \mathbf{n}=0,\\
&& M=\frac{1}{\sqrt{\det{\bg}}} \partial_{12} \y\cdot \mathbf{n}=0,\\
&& N=\frac{1}{\sqrt{\det{\bg}}} \partial_{22} \y\cdot \mathbf{n}
=\frac{1}{\sqrt{\det{\bg}}}(0,0, f''(x_2))\cdot \mathbf{n}
=-\frac{f''(x_2)}{\big(f'(x_2)\big)^2+A^2},
\end{eqnarray*}
so that
$$
\kappa=0.
$$
\iffalse
At $(x_1,x_2)=(0,0)$,
$$
g_{11}=1, \quad g_{12}=0, \quad g_{22}=\big(f'(0)\big)^2+1,
$$
Set $f'(0)=0$ so that $g_{22}(\mathbf{0})=1$.
Then, at $(x_1,x_2)=\mathbf{0}$,
$$
\partial_{22} g_{22}=2 f'(x_2)f''(x_2)|_{x_2=0}=0,
$$
which implies that
$$
\partial_{kl} g_{ij}=0 \qquad \mbox{at $(x_1,x_2)=\mathbf{0}$}.
$$
\fi
With $N$ prescribed by the solution of
\begin{equation}\label{4.2-1}
-\frac{f''(x_2)}{\big(f'(x_2)\big)^2+A^2}=u^2(x_2),
\qquad f'(0)=0,
\end{equation}
we have constructed the developable surface,
while \eqref{4.2-1} is a separable first-order equation for $w=f'$: integrating
$\frac{{\rm d}w}{w^{2}+A^{2}}=-u^{2}(x_2)\,{\rm d}x_2$ with $w(0)=0$ yields
$\frac{1}{A}\arctan\big(\frac{w}{A}\big)=-\int_0^{x_2}u^{2}(s)\,{\rm d}s$, so that, locally in $x_2$,
$$
f'(x_2)=A\tan\big(-A\int_0^{x_2}u^2(s)\, ds\big).
$$
Similarly, we can obtain the corresponding results for $v=v(x_1)$ as another shear flow.
\endproof
Notice that the Codazzi equations are automatically satisfied since $\Gamma_{11}^1=\Gamma_{11}^2=0$.
\begin{remark}\label{R41}
$\,$ Notice the very important fact that the shear flow initial data
given by Lemma {\rm \ref{L42}} require only the square integrability
of the shear functions $u(x_2)$ and $v(x_1)$.
\end{remark}
\begin{remark}\label{R42}
$\,$ The assumption of analyticity is made only for
convenience. In fact, if one follows the presentation of DeTurck-Yang
{\rm \cite{DeYang}} for a similar problem where the theory of systems
of real principal type is exploited,
it is quite evident that the analyticity could be
replaced by $C^{\infty}$.
\end{remark}
\section{$\,$ Existence Theorem for the Evolving Fluids and Manifolds}\label{S5}
Now we prove our main existence theorem.
\begin{theorem}\label{T51}
For the initial data constructed in Lemma {\rm \ref{L41}}, there
exists a solution of system \eqref{e31}--\eqref{e33} and \eqref{e35},
locally in space-time,
which satisfies both the incompressible Euler equations \eqref{e25}--\eqref{e26}
and a propagating two-dimensional Riemannian manifold isometrically immersed
in $\mathbb{R}^{3}$.
\end{theorem}
\proof $\,$ For the given fluid data, by the Cauchy-Kowalewski
theorem, there exists a local space-time solution $(u,v, p)$ of
the incompressible Euler equations \eqref{e25}--\eqref{e26}
with $uv\neq 0, p>0$, and \eqref{e27}.
Define the Gauss curvature via the relation:
$$
\kappa=p^{2}+pq^{2}, \quad q^{2}=u^{2}+v^{2},
$$
so that
$LN-M^{2}=\kappa$.
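As a consistency check, under the fluid identification $L=v^{2}+p$, $M=-uv$, $N=u^{2}+p$ used throughout, the Gauss relation indeed holds:
$$
LN-M^{2}=(v^{2}+p)(u^{2}+p)-u^{2}v^{2}=p(u^{2}+v^{2})+p^{2}=pq^{2}+p^{2}=\kappa.
$$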
Thus system \eqref{e31}--\eqref{e33} and \eqref{e35}
is satisfied if the following system can be solved:
\bes
\begin{cases}
\partial_{t}u=\Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma _{11}^{1}N,\\[1.5mm]
\partial_{t}v=\Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma _{11}^{2}N,\\[1.5mm]
\kappa \det \bg=g_{2m}(\partial_{2}\Gamma_{11}^{m}-\partial_{1}\Gamma_{12}^{m}
+\Gamma_{11}^{n}\Gamma_{n2}^{m}-\Gamma_{12}^{n}\Gamma_{n1}^{m})
\end{cases}
\ees
for the time evolving metric $\bg$, where
\bes
\begin{cases}
u^{2}=\frac12\big[-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],\\[1.5mm]
v^{2}=\frac12\big[(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],\\[1.5mm]
q^{2}=\sqrt{(L-N)^{2}+4M^{2}}.
\end{cases}
\ees
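These inversion formulas are consistent with the fluid identification $L-N=v^{2}-u^{2}$ and $M=-uv$; for instance,
$$
\tfrac12\big[-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big]
=\tfrac12\big[(u^{2}-v^{2})+\sqrt{(u^{2}-v^{2})^{2}+4u^{2}v^{2}}\big]
=u^{2},
$$
since $(u^{2}-v^{2})^{2}+4u^{2}v^{2}=(u^{2}+v^{2})^{2}$.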
This is the same system for $\bg$
that has been solved in Lemma \ref{L41}. This completes the proof.
\endproof
\begin{corollary}\label{C51}
Let $(u,v)$ be a
solution, locally analytic in space-time, of the incompressible Euler equations
\eqref{e25}--\eqref{e26},
satisfying the initial condition $uv\neq 0$ at a point $\x_{0}$.
Then there is an evolving metric $\bg$ so that
solution $(u,v)$ of the Euler equations
also defines a time
evolving two-dimensional Riemannian manifold $({\mathcal M},\bg)$
immersed in $\mathbb{R}^{3}$.
\end{corollary}
\begin{corollary}\label{C52}
Conversely, let $({\mathcal M},\bg)$ be an isometrically embeddable
manifold in $\mathbb{R}^{3}$ with
second fundamental form $(L[\bg],M[\bg],N[\bg])$ that is
a solution of the Gauss-Codazzi equations \eqref{e21}--\eqref{e22}
{\rm (}for an as yet undetermined metric $\bg${\rm )}.
Identify the fluid variables $(u,v,p)$ by our relations{\rm :}
\be\label{5.1-a}
\begin{cases}
u^{2}=\frac{1}{2}\big[-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],\\[1.5mm]
v^{2}=\frac{1}{2}\big[(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],\\[1.5mm]
p^{2}+pq^{2}=\kappa, \quad q^{2}=u^{2}+v^{2},
\end{cases}
\ee
so that $(u,v,p)=(u[\bg], v[\bg], p[\bg])$ are functionals of the unknown metric field $\bg$.
Then the equations:
\be\label{5.2-a}
\begin{cases}
\partial_{t}u = \Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma _{11}^{1}N, \\[1.5mm]
\partial_{t}v = \Gamma_{22}^{2}L-2\Gamma _{12}^{2}M +\Gamma _{11}^{2}N, \\[1.5mm]
\partial_{11}(u^{2})+2\partial_{12}(uv)+\partial_{22}(v^{2})
=-\triangle p
\end{cases}
\ee
define a system of three equations in the three unknowns $(g_{11}, g_{12}, g_{22})$.
If this implicit system for $\bg$ has a solution, then the three functions $(u,v,p)$ satisfy
the incompressible Euler equations \eqref{e25}--\eqref{e26}.
\end{corollary}
\proof $\,$ Substitute the first two equations in \eqref{5.2-a}
into the Codazzi equations \eqref{e31}.
We see that the balance of linear momentum \eqref{e25} is satisfied,
and the third equation in \eqref{5.2-a}
implies, via the balance of
linear momentum \eqref{e25}, the incompressibility condition:
$$
\mbox{div}(u,v)=0.
$$
\endproof
\begin{remark} $\,$ The difficulty in employing Corollary {\rm \ref{C52}}
would be that the equations for $\bg$ are
only known implicitly.
Notice that equations \eqref{e33}--\eqref{e34} satisfied by $\bg$ can be written as
\be\label{e51}
\begin{cases}
\partial_t u=\Gamma_{22}^{1}(v^{2}+p)+2\Gamma_{12}^{1}(uv)+\Gamma_{11}^{1}(u^{2}+p),\\[1.5mm]
\partial_t v=\Gamma_{22}^{2}(v^{2}+p)+2\Gamma_{12}^{2}(uv)+\Gamma_{11}^{2}(u^{2}+p),\\[1.5mm]
\partial_{1}\big(\Gamma_{22}^{1}(v^{2}+p)+2\Gamma _{12}^{1}(uv)+\Gamma_{11}^{1}(u^{2}+p)\big)\\[1.5mm]
\quad +\partial_{2}\big(\Gamma_{22}^{2}(v^{2}+p)+2\Gamma_{12}^{2}(uv)+\Gamma_{11}^{2}(u^{2}+p)\big)=0.
\end{cases}
\ee
Here $(u,v)$ are defined by \eqref{e32} in terms of $(L, M, N)$ which in turn depend
on $\bg$, and $p$ is given by
$$
p=\frac12\left(\sqrt{q^{4}+4\kappa}-q^2\right)
$$
from \eqref{e210}.
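This formula is simply the positive root of the quadratic relation $p^{2}+q^{2}p-\kappa=0$:
$$
p=\frac{-q^{2}+\sqrt{q^{4}+4\kappa}}{2}>0 \qquad \mbox{whenever $\kappa>0$}.
$$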
Thus, this is a system of three equations for the three components of $\bg$.
The difficulty here is that the dependence of $(u, v)$ on $\bg$ is implicit.
It would be interesting to identify some suitable ways to check their solvability.
\end{remark}
\section{$\,$ Shear Flow Initial Data}\label{S6}
As we noted in Remark \ref{R41},
the shear flow data do not satisfy the hypothesis: $uv\neq 0$ in Lemma \ref{L41}.
Hence, this is a particularly interesting case to study.
Namely, if the fluid data of $(u,v)$ are $(u(x_{2}),0)$,
the question is whether a Riemannian
manifold $({\mathcal M},\bg)$ locally immersed in $\mathbb{R}^{3}$ can be determined.
For this data, we find immediately that $\triangle p=0$
so that $p$ is harmonic.
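Indeed, with $(u,v)=(u(x_{2}),0)$, every term on the right-hand side of the Poisson equation vanishes:
$$
\triangle p=-\partial_{11}\big(u^{2}(x_{2})\big)-2\partial_{12}\big(u(x_{2})\cdot 0\big)-\partial_{22}\big(0\big)=0.
$$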
Furthermore, $L=p,M=0,N=u^2(x_2)+p,\kappa =p u^2(x_2)+p^{2}$,
and our constraint equations \eqref{e43}--\eqref{e44}
reduce to the equations:
\be\label{e61}
\begin{split}
&\partial_{1}p=-\Gamma_{22}^{1}p-\Gamma_{11}^{1}\big(u^2(x_2)+p\big),\\[1mm]
&\partial_{2}p=-\Gamma_{22}^{2}p-\Gamma_{11}^{2}\big(u^2(x_2)+p\big),
\end{split}
\ee
and
\be\label{e62}
\begin{split}
&pu^2(x_2)+p^{2} \notag\\
&=\frac1{2(\det \bg)^2}
\det
\begin{bmatrix}
-
\partial_{22}g_{11}+2\partial_{12}g_{12}-
\partial_{11}g_{22} &
\partial_{1}g_{11}
& 2\partial_{1}g_{12}-
\partial_{2}g_{11} \\[1.5mm]
2\partial_{2}g_{12}-
\partial_{1}g_{22} & 2g_{11} & 2g_{12} \\[1.5mm]
\partial_{2}g_{22} & 2g_{12} & 2g_{22}
\end{bmatrix}\\[2mm]
&\quad -\frac1{2(\det \bg)^{2}}\det
\begin{bmatrix}
0 &
\partial _{2}g_{11} &
\partial _{1}g_{22} \\[1.5mm]
\partial _{2}g_{11} & 2g_{11} & 2g_{12} \\[1.5mm]
\partial _{1}g_{22} & 2g_{12} & 2g_{22}
\end{bmatrix}.
\end{split}
\ee
As in \S \ref{S4}, \eqref{e61}--\eqref{e62}
give equation \eqref{e27} that is now
\be\label{e63}
\begin{bmatrix}
\frac{u^2(x_2)+p}{4p} & -
{\frac14}
\\[1.5mm]
-
\frac{u^2(x_2)+p}{2p} &
{\frac12}
\end{bmatrix}
\partial_{11}
\begin{bmatrix}
g_{11} \\[1.5mm]
g_{22}
\end{bmatrix}
+
\begin{bmatrix}
-
{\frac14}
&
\frac{p}{4(u^2(x_2)+p)} \\[1.5mm]
-
{\frac12}
&
\frac{p}{2(u^2(x_2)+p)}
\end{bmatrix}
\partial_{22}
\begin{bmatrix}
g_{11} \\[1.5mm]
g_{22}
\end{bmatrix}
=l.o.t.
\ee
The symbol of the above differential operator is given by
\bes
\begin{split}
&\det \left(
\begin{bmatrix}
\frac{u^2(x_{2})+p}{4p} & -
{\frac14}
\\[1.5mm]
-
\frac{u^2(x_{2})+p}{2p} &
{\frac12}
\end{bmatrix}
\xi_{1}^{2}+
\begin{bmatrix}
-
{\frac14}
&
\frac{p}{4(u^2(x_{2})+p)} \\[1.5mm]
-
{\frac12}
&
\frac{p}{2(u^2(x_{2})+p)}
\end{bmatrix}
\xi_{2}^{2}\right)\\[2mm]
&=\det
\begin{bmatrix}
\frac{u^2(x_{2})+p}{4p}\xi_{1}^{2}
-
{\frac14}
\xi_{2}^{2} & -
{\frac14}
\xi _{1}^{2}+
\frac{p}{4(u^2(x_{2})+p)}\xi_{2}^{2}\\[1.5mm]
-
\frac{u^2(x_{2})+p}{2p}\xi_{1}^{2}
-
{\frac12}
\xi_{2}^{2} &
{\frac12}
\xi_{1}^{2}+
\frac{p}{2(u^2(x_{2})+p)}
\xi_{2}^{2}
\end{bmatrix}\\[1.5mm]
&=0.
\end{split}
\ees
Thus, for the shear flow initial data, our system of partial differential equations
for $\bg$ is degenerate.
On the other hand, we know from Lemma \ref{L42} that,
if a shear flow is given in an open neighborhood of space-time,
then
metric $\bg^*$ provides the desired map.
\medskip
\section{$\,$ The Nash-Kuiper Theorem and Existence of Wild Solutions}\label{S7}
The existence of a locally analytic Riemannian manifold with metric $\bg$
has been established in Corollary \ref{C51}, with
\iffalse
we wish to know the local form of the metric.
To do this, we adapt a new local system of normal coordinates
$(x_{1}^{\prime \prime }$, $x_{2}^{\prime \prime })$ ({\it cf.}
\cite[pp.86]{CarmoM} and \cite[pp.71]{HanHong})
so that the origin of the old coordinate system remains the origin
of our new coordinate system and $\Gamma_{ij}^k(\mathbf{0})=0$.
We can easily compute $\bg$ to the leading order in the normal coordinates by
the formula:
\fi
$$
g_{ij}(x_{1}, x_{2},t)
=\delta_{ij}
+h.o.t.,
$$
and hence
$(g_{ij})<(g_{ij}^*)$ in the sense of quadratic forms locally near $(x_1,x_2)=(0,0)$ for $A>1$.
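At the origin this can be seen directly: since $f'(0)=0$ in \eqref{4.2-1}, Lemma \ref{L42} gives $g^*_{ij}(\mathbf{0})=A^{2}\delta_{ij}$, so that
$$
(g^*_{ij})-(g_{ij})=(A^{2}-1)\delta_{ij}+h.o.t.,
$$
which is positive definite near $(x_1,x_2)=(0,0)$ when $A>1$.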
\iffalse
that is,
\bes
\begin{split}
&g_{11}(x_{1}^{\prime \prime },x_{2}^{\prime \prime},t)
=1+\frac13R_{1221}x_{2}^{\prime \prime }x_{2}^{\prime \prime}+h.o.t.
=1-\frac13\frac{\kappa}{\det \bg} (x_{2}^{\prime \prime})^2+h.o.t.,\\
&g_{22}(x_{1}^{\prime \prime },x_{2}^{\prime\prime},t)
=1+\frac13R_{2112}x_{1}^{\prime\prime}x_{1}^{\prime\prime}+h.o.t.
=1-\frac13\frac{\kappa}{\det \bg} (x_{1}^{\prime \prime})^2+h.o.t.,\\
&g_{12}(x_{1}^{\prime \prime },x_{2}^{\prime\prime},t)
=\frac13R_{1212}x_{1}^{\prime \prime }x_{2}^{\prime \prime}+h.o.t.
=\frac13\frac{\kappa}{\det \bg} x_{1}^{\prime \prime }x_{2}^{\prime\prime}+h.o.t..
\end{split}
\ees
Hence, as a quadratic form,
$$
g_{ij}\eta_{i}\eta_{j}
=\eta_{i}\eta_{i}-\frac13\frac{\kappa}{\det \bg}
\big(
\eta_{1}^{2} (x_{2}^{\prime \prime})^2
+\eta_{2}^{2} (x_{1}^{\prime\prime})^2
-\eta_{1}\eta_{2} x_{1}^{\prime\prime}x_{2}^{\prime\prime}
\big)+h.o.t..
$$
However, the matrix
$$
\begin{bmatrix}
(x_{2}^{\prime\prime})^2 & -
{\frac12}
x_{1}^{\prime \prime }x_{2}^{\prime \prime } \\[1.5mm]
-
{\frac12}
x_{1}^{\prime \prime }x_{2}^{\prime \prime } & (x_{1}^{\prime \prime})^2
\end{bmatrix}
$$
has determinant
$
{\frac34}
(x_{1}^{\prime \prime})^2(x_{2}^{\prime \prime})^2$ and hence is positive
definite away from $x_{1}^{\prime \prime }=x_{2}^{\prime \prime }=0.$
Furthermore, on line $x_{1}^{\prime \prime}=0$,
the quadratic form is
$$
g_{ij}\eta _{i}\eta _{j}=\eta _{i}\eta _{i}-\frac13\frac{\kappa}{\det \bg}{\Huge (}
\eta_{1}^{2}(x_{2}^{\prime\prime})^2{\Huge )}+h.o.t. \qquad \mbox{in $x_{2}^{\prime\prime}$}.
$$
\fi
Thus, we have proved that map $\y$ locally induced by metric $\bg$
is shorter than that induced by $\bg^*$ or, in the language of the Nash-Kuiper
theorem \cite{CDS2012,Kuiper,Nash1954},
embedding ${\y}_\bg$ induced by $\bg$
is a
\textit{short embedding}. We state this in the following lemma:
\begin{lemma}\label{L71}
The metric $(g_{ij}(x_{1}, x_{2},t))$
given in Theorem {\rm \ref{T51}} and Corollary {\rm \ref{C51}}
induces locally a short embedding,
as it satisfies
$$
(g_{ij}(x_{1},x_{2},t))< (g^*_{ij})
$$
in the sense of quadratic forms.
\end{lemma}
We then recall the Nash-Kuiper theorem as given in \cite{CDS2012}.
\begin{theorem}\label{T71}
Let $({\mathcal M}^{n}, \mathfrak{g})$ be a smooth, compact
Riemannian manifold of dimension $n\geq 2$.
Assume that $\mathfrak{g}$ is in $C^{\infty}$.
Then
\smallskip
{\rm (i)} If $m\geq \frac{(n+2)(n+3)}{2}$,
any short embedding can be approximated by
isometric embeddings of
class $C^{\infty}$ {\rm (}cf. Nash {\rm \cite{Nash1954}} and Gromov {\rm \cite{Gromov1986}}{\rm )}{\rm ;}
\smallskip
{\rm (ii)} If $m\geq n+1,$ then any short embedding can be approximated in $C^0$ by isometric
embeddings of class $C^{1}$ {\rm (}cf. Nash {\rm \cite{Nash1954}}, Kuiper {\rm \cite{Kuiper}}{\rm )}.
\end{theorem}
We note that part (ii) of Theorem \ref{T71}
has been extended for an analytic metric $\mathfrak{g}$ with a manifold diffeomorphic to an $n$-dimensional ball
by Borisov \cite{Borisov1,Borisov2,Borisov3,Borisov4,Borisov5,Borisov6,BorisovY}
to obtain the greater regularity $C^{1,\alpha}, \alpha <\frac{1}{1+n+n^{2}}$,
and the condition of analyticity has been weakened by Conti-De Lellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012}.
Without the condition that the manifold is diffeomorphic to an $n$-dimensional ball,
Conti-De Lellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012} still obtain the greater
regularity $C^{1,\alpha}$ for some $\alpha >0$.
\smallskip
An immediate consequence of the results by
Nash-Kuiper \cite{Nash1954,Kuiper}, Borisov \cite{Borisov1,Borisov2,Borisov3,Borisov4,Borisov5,Borisov6,BorisovY},
and Conti-De Lellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012} is the following theorem.
\begin{theorem}\label{T72}
The short embedding $\y_{\bg}$ established by Theorem
{\rm \ref{T51}} and Corollary {\rm \ref{C51}}
can be approximated in $C^0$
by wrinkled embeddings $\y_{\rm w}$ in $C^{1,\alpha }$ for some $\alpha \in (0,1)$, locally in space.
In elementary terms, the geometric image of our locally analytic solution
to the incompressible Euler equations, i.e.,
our surface propagating in time, can be approximated in
$C^0$ by wrinkled manifolds.
\end{theorem}
\begin{corollary}\label{c7.1} In particular, $\yy_{\rm w}(t, \cdot)$ constructed in Appendix \ref{A1} belongs
to $C^0([0,T]; C^{1,\alpha}_{\rm loc})$.
\end{corollary}
It is the time-dependent wrinkled solution of Corollary \ref{c7.1} that we use in the rest of the paper.
\begin{remark} $\,$ We reinforce the fact that, while the wrinkled embedding $\y_{\rm w}(t)$ satisfies
the time-independent relation
$\partial_{i}\y_{\rm w}\cdot \partial_{j}\y_{\rm w}=g^*_{ij}$ where $g^*_{ij}$ is independent of time,
$\y_{\rm w}$ must be time-dependent, since $\y_{\rm w}$ shadows the time-dependent short embedding $\y_{\bg}$.
\iffalse
(ii) The family of time-dependent wrinkled embeddings $\y_{\rm w}(t)$ trivially satisfy the bounds
$\|\partial_i\y_{\rm w}(t)\|^2_C\le \|g_{ii}^*\|_C$, $i=1,2$, and hence for any sequence $\{t_n\}$, $t_n\to\bar{t}$,
$\{\y_{\rm w}(t_n)\}$ (by virtue of the mean value theorem) is equi-continuous in $C$ and in fact Lipschitz in $x$, uniform in $(t,x)$.
Thus possibly extracting a subsequence also denoted by $\{\y_{\rm w}(t_n)\}$ we have $\y_{\rm w}(t_n)\to \tilde{\y}(\bar{t})$
in $C$ for some embedding $\tilde{\y}(\bar{t})$. Thus $\y_{\rm w}(t)$ can always be extended as a continuous embedding.
(iii) The unresolved issue in (ii) is whether $\tilde{\y}(t)$ is an isometric embedding, i.e., $\partial_i\tilde{\y}(t)\cdot \partial_j\tilde{\y}(t)=g_{ij}^*$.
A natural proof of this result would follow from an improved inequality: $\|\y_{\rm w}(t)\|_{C^{1,\alpha}}\le \rm{constant}$, where the constant is independent of $t$. The derived equality $\partial_i\tilde{\y}(t)\cdot \partial_j\tilde{\y}(t)=g_{ij}^*$ would then follow from the convergence of
$\{\partial_i\y_{\rm w}(t_n)\}$ in $C$.
(iv) The desired inequality $\|\y_{\rm w}(t)\|_{C^{1,\alpha}}\le \rm{constant}$ with constant independent of $t$, would appear to follow from the construction of Conti-DeLellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012} since the time-dependence in their proof of $C^{1,\alpha}$ regularity would only rely on the $t$-regularity of known smooth short embedding $\y_{\rm g}(t)$. However such a discussion and possible proof is beyond the scope of this paper.
\fi
\end{remark}
\begin{theorem}\label{T74}
There are infinitely many {\rm (}non-$C^{2}${\rm )} evolving
manifolds for metric $\mathfrak{g}_{ij}=g^*_{ij}$,
all with the same
initial wrinkled data at $t=0$.
Furthermore, for all these solutions, the energy remains constant{\rm :}
$$
E(t)=\int\limits_{\Omega}(\partial_{i}\y_{\rm w} \cdot \partial_{i}\y_{\rm w})\,{\rm d}\x ={\rm const.},
$$
where the manifold is defined by
the $C^{1,\alpha}$--solution
of $\partial_{i}\y_{\rm w}\cdot \partial_{j}\y_{\rm w}=g^*_{ij}$
which trivially satisfies $\partial_{i}\y_{\rm w}\cdot \partial_{i}\y_{\rm w}=g^*_{ii}$
so that $\partial_{i}\y_{\rm w}$ is in $L^{\infty}$ in time.
\end{theorem}
\proof $\,$ On any sequence of time intervals,
just switch back and forth
from any of the infinite choices of wrinkled solutions.
Specifically, let $[0,T]$ be a time interval in which there exists a unique
analytic solution to the Cauchy
initial value problem for the Euler equations.
By Theorem \ref{T72}, this solution is the pre-image of a short embedding $\y_{\bg}$ and
can be approximated by a wrinkled embedding $\y_{\rm w}$.
Divide the interval $[0,T]$ into sub-intervals
$[T_0,T_1], [T_1, T_2], \dots, [T_{n-1}, T_n]$ with $T_0=0$ and $T_n=T$.
Take any sequence $\varepsilon_k>0, k=1,\cdots, n$.
Then, by Theorem \ref{T72}, we have a sequence of
wrinkled solutions $\{\y^k_{\rm w}\}$ such that
$$
\|\y_{\bg}-\y^k_{\rm w}\|_{C}<\varepsilon_k
\qquad \mbox{for $T_{k-1}\le t\le T_k$, $k=1, \dots, n$}.
$$
Now define the wrinkled solution $\y^*_{\rm w}$ on $[0,T]$ by
$$
\y^*_{\rm w}:=\y_{\rm w}^k
\qquad \mbox{for $T_{k-1}\le t< T_k$, $k=1, \dots, n$}.
$$
Fix $\y_{\rm w}^1$ as the Cauchy data,
but allow $\varepsilon_k$ and $T_k$ to vary for $k=2, \dots, n$.
In this way, we have produced an infinite number of wrinkled solutions satisfying the same Cauchy data.
Then the solutions are in $C^{1,\alpha}$ in space and continuous in time on
each of the sub-intervals.
To compute the
energy, use
$$
E(t)=\int\limits_{\Omega }(\partial_{i}\y_{\rm w}\cdot \partial_{i}\y_{\rm w})\,{\rm d}\x
=\int\limits_{\Omega }g^*_{ii}\,{\rm d}\x =const.
$$
\endproof
In the results given in Theorems \ref{T71}--\ref{T74},
by construction,
embedding $\y_{\rm \bg}$ is
the geometric image of a smooth solution of the Euler
equations \eqref{e25}--\eqref{e26}.
On the other hand, the Nash-Kuiper $C^{1,\alpha}$ embeddings have
not been shown to be the image of solutions in any sense of the Euler
equations \eqref{e25}--\eqref{e26}.
We now provide this link.
Let $\Omega\Subset \mathcal{M}$ be a compact domain, where $\mathcal{M}$ is a regular surface with a family
of Riemannian metrics $\{\bg(t)\}_{t\in[0,T]}$.
Consider the difference $\vv(t):=\y_{\rm w}(t)-\y_{\rm \bg}(t) \in C^{1,\alpha}(\Omega; \R^3)$ for each $t \in [0,T]$,
which is continuous in $t$.
As computed by G\"{u}nther (see \cite{gunther}), one obtains
\begin{equation}\label{A}
\partial_i (\vv\cdot\partial_j\y_{\rm \bg})+\partial_j (\vv\cdot \partial_i \y_{\rm \bg})
- 2\vv \cdot \big(\Gamma^k_{ij} \partial_k \y_{\rm \bg} + H_{ij}\mathbf{n}\big) = h_{ij},
\end{equation}
where $\Gamma^k_{ij}$ is the Christoffel symbol of $(\mathcal{M}, \bg)$, $H \in {\rm Sym}^2(T^*\mathcal{M})$
is the second fundamental form associated with the smooth isometric embeddings
$\y_{\rm \bg}(t)$, and $\n$ is the outward unit normal.
Moreover, we suppress subscript $t$ in the equations which are {\em kinematic} ({\it i.e.}, pointwise in $t$).
We introduce the symmetric quadratic form $\bh=(h_{ij})\in C^0( {\rm Sym}^2(T^*\mathcal{M}))$ by
\begin{equation}\label{h}
h_{ij} := g^*_{ij}-{g}_{ij}-\partial_i \vv \cdot \partial_j \vv.
\end{equation}
Now we {\em project} the difference $\vv$ along the direction $\partial_i \y_{\rm \bg}$: Define
\begin{equation}
\bar{v}(t)^i := \vv(t) \cdot \partial_i \y_{\rm \bg}(t) \in C^{1, \alpha}(\Omega; \R)
\qquad \text{ for each } t \in [0,T], \, i \in \{1,2\}.
\end{equation}
This is valid since $\y_{\rm \bg}$ is smooth. Noting that, for any vector field $\phi=(\phi^j)$,
one has
\begin{equation*}
\nabla_i \phi^j = \partial_i \phi^j - \Gamma^k_{ij} \phi^k,
\end{equation*}
with $\nabla$ being the covariant derivative on $(\mathcal{M}, \bg)$.
Thus, \eqref{A} can be recast into
\begin{equation}\label{B}
\nabla_i \bar{v}^j + \nabla_j \bar{v}^i - 2\vv \cdot H_{ij}\n = h_{ij}.
\end{equation}
\iffalse
To proceed we invoke a non-degeneracy condition of $\y_{\rm \bg}(t)$, i.e., $\partial_1\y_{\rm \bg}, \partial_2\y_{\rm \bg}$ be linearly independent.
It implies that the annihilators of $H_{ij}$ form a subspace of dimension $2$, at least locally near $\x^\star \in \Omega$:
\be\bgin{align}\label{annihilator}
{\rm ann}(H):= \bigg\{A=\{A^{kij}\} \in T^*\mathcal{M} ^{\otimes 3} &: \sum_{i,j=1}^2 A^{kij} H_{ij} = 0 \text{ for } k=1,2, \nonumber\\
&\text{ such that } A^{kij}=A^{kji}, A^{kij}=A^{ikj}\bigg\}.
\end{align}
\fi
Multiplying \eqref{B} by coefficients $A^{kij}$ (symmetric in their indices, as specified below) and summing over $(i,j)$,
we obtain a system of two first-order scalar PDEs:
\begin{equation}\label{key equation}
\bar{A}^1 \partial_1 \bar{\vv} + \bar{A}^2 \partial_2 \bar{\vv} + B\bar{\vv} = \bar{\bh},
\end{equation}
where
\begin{equation}\label{regularity}
\begin{cases}
\bar{\vv}=(\bar{v}^1, \bar{v}^2)^\top \in C^{1,\alpha}(\Omega; \R^2),\\
\bar{\bh}=(A^{1ij}h_{ij}, A^{2ij}h_{ij})^\top \in C^0(\Omega; \R^2),\\
B= (B_{ij})= (-A^{jlm} \Gamma^i_{lm})\in C (\mathfrak{gl}(2;\R)),
\end{cases}
\end{equation}
and the $2\times 2$ matrices
\begin{equation}
\bar{A}^k := (A^{ijk})_{1\leq i,j \leq 2} \qquad \text{ for } k \in \{1,2\},
\end{equation}
satisfy the symmetry property: $A^{kij}=A^{kji}=A^{ikj}$.
Notice that, due to this symmetry,
one has
$$
\bar{A}^1=\begin{bmatrix}
A^{111}& A^{112}\\
A^{112}&A^{122}
\end{bmatrix},\quad
\bar{A}^2=\begin{bmatrix}
A^{112}&A^{122}\\
A^{122}&A^{222}
\end{bmatrix},
$$
so that the matrices $\bar{A}^k$ have only four independent components
$(A^{111}, A^{112}, A^{122}, A^{222})$.
\iffalse
The system \eqref{key equation} of balance laws is not closed due to the appearance of the nonlinear term $\bar{h}$
which contains the terms $A^{kij}\partial_iv\cdot\partial_jv$. We close the system by relating $v\in\R^3$ to $\bar{v}\in\R^3$ as
\begin{equation}\label{vv}
v\cdot\partial_iy=\bar{v}_i, \quad i=1,2,
\end{equation}
\begin{equation}\label{vv2}
2\nabla_1\bar{v}_1-2v\cdot{\bf n}H_{11}^3=g_{11}^*-g_{11}-\partial_1v\cdot\partial_1v.
\end{equation}
Specifically one can solve \eqref{vv} for say two components of vector $v$ in terms of the third component,
then substitute this relation into \eqref{annihilator} to obtain an ordinary differential equation for this third component of $v$.
Thus when the closure condition \eqref{vv}, \eqref{vv2} are appended to balance laws \eqref{key equation} we have
closed system for $\bar{v}_1, \bar{v}_2$. Furthermore it is perhaps interesting to note that a local change of coordinates will yield
$H_{11}=\kappa, \; H_{12}=H_{21}=0,\; H_{22}=1$ ($\kappa$ is the Gauss curvature associated with $\bg$)
at a point $\x^*$ and hence we can choose
$$
\bar{A}^1=\begin{bmatrix}
0&1\\
1&0
\end{bmatrix},\quad
\bar{A}^2=\begin{bmatrix}
1&0\\
0&-\kappa
\end{bmatrix}.
$$
Thus locally near $x^*$ the linearized system obtained by neglecting the term $A^{kij}\partial_iv\cdot\partial_jv$ is elliptic when $\kappa>0$ and hyperbolic when $\kappa<0$.
\fi
Now we consider $(U_1,U_2,P)$ satisfying the incompressible Euler equations:
\begin{equation}\label{B3}
\begin{split}
&\partial_tU_1+\partial_1\left(U_1^2+P\right)+\partial_2\left(U_1U_2\right)=0,\\
&\partial_tU_2+\partial_1\left(U_1U_2\right)+\partial_2\left(U_2^2+P\right)=0,\\
&\partial_1U_1+\partial_2U_2=0.
\end{split}
\end{equation}
On the other hand, \eqref{key equation} may be written as
\begin{equation}\label{B2}
\partial_1\left(\bar{A}^1\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2\end{bmatrix}\right)
+\partial_2\left(\bar{A}^2\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2\end{bmatrix}\right)
-\partial_1\bar{A}^1\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2\end{bmatrix}
-\partial_2\bar{A}^2\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2\end{bmatrix}
=\begin{bmatrix} {h}_1 \\ {h}_2\end{bmatrix},
\end{equation}
where $\mathbf{h}=\begin{bmatrix} {h}_1 \\ {h}_2\end{bmatrix}=\bar{\mathbf{h}}-B\bar{\vv}$.
Thus, the identification:
\begin{equation}\label{e710}
\begin{split}
&U_1^2+P=A^{111}\bar{v}_1+A^{112}\bar{v}_2=:r_1,\\
&U_1U_2=A^{112}\bar{v}_1+A^{122}\bar{v}_2=:r_2,\\
&U_2^2+P=A^{122}\bar{v}_1+A^{222}\bar{v}_2=:r_3,
\end{split}
\end{equation}
will force the two spatial flux terms of the first two equations in \eqref{B3} to agree with
$$
\partial_1\left(\bar{A}^1
\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2
\end{bmatrix}\right)
+\partial_2\left(\bar{A}^2
\begin{bmatrix} \bar{v}_1 \\ \bar{v}_2
\end{bmatrix}\right).
$$
Furthermore, it is a simple matter to express $(U_1, U_2, P)$ in terms of $(r_1, r_2, r_3)$
as
\begin{equation}\label{e711}
U_1=(r_1-P) ^\frac12, \quad
U_2=(r_3-P)^\frac12,
\end{equation}
where $P$ satisfies
\begin{equation}\label{e712}
P=\frac12\left((r_1+r_3)-\left((r_1-r_3)^2+4r_2^2\right)^\frac12\right),
\end{equation}
so that $r_1-P>0$, $r_3-P>0$.
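This identification is consistent: since $r_1-P=U_1^{2}$, $r_3-P=U_2^{2}$, and $r_2=U_1U_2$, one has
$$
(r_1-r_3)^{2}+4r_2^{2}=(U_1^{2}-U_2^{2})^{2}+4U_1^{2}U_2^{2}=(U_1^{2}+U_2^{2})^{2},
$$
so that \eqref{e712} returns $P=\frac12\big((U_1^{2}+U_2^{2}+2P)-(U_1^{2}+U_2^{2})\big)=P$.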
Thus, for \eqref{key equation} to agree with the incompressible Euler equations, we see the system:
\begin{align}
&\partial_t\begin{bmatrix}U_1\\ U_2\end{bmatrix}
+\partial_1\bar{A}^1\begin{bmatrix}\bar{v}_1\\ \bar{v}_2\end{bmatrix}
+\partial_2\bar{A}^2\begin{bmatrix}\bar{v}_1\\ \bar{v}_2\end{bmatrix}
+\bh=0, \label{e713}\\[1.5mm]
&\partial_1U_1+\partial_2U_2=0, \label{e714}
\end{align}
must be satisfied, where $(U_1, U_2)$ depend smoothly on $(A^{111}, A^{112}, A^{122}, A^{222})$,
and $\bh$ depends linearly on $A^{kij}$.
This is an underdetermined system, since there are three equations in the four unknowns
$(A^{111}, A^{112}, A^{122}, A^{222})$.
A non-trivial solution of this system thus maps the Nash-Kuiper solution to a weak solution
of the incompressible Euler equations.
Let us rewrite \eqref{e714} in terms of a stream function $\Psi(\x,t)$:
\begin{align}
&U_2+\partial_1\Psi=0, \label{e715}\\
&U_1-\partial_2\Psi=0, \label{e716}
\end{align}
where, from \eqref{e711},
$$
U_i=\hat{U}_i(A^{111}, A^{112}, A^{122}, A^{222}; \bar{v}_1, \bar{v}_2), \qquad i=1,2.
$$
Define $\ff:=(A^{111}, A^{112}, A^{122}, A^{222},\Psi)$. Then \eqref{e715} is of the form:
\begin{equation}\label{e717}
\Phi_1(\x,t,\ff,\partial_1\ff)=0,
\end{equation}
while \eqref{e716} is of the form:
\begin{equation}\label{e718}
\Phi_2(\x,t,\ff,\partial_1\ff,\partial_2\ff)=0,
\end{equation}
where $\Phi_1$ and $\Phi_2$ are scalar valued functions.
Furthermore, \eqref{e713} is of the form
\begin{equation}\label{e719}
\Phi_3(\x,t,\ff,\partial_t\ff,\partial_1\ff,\partial_2\ff)=0,
\end{equation}
where $\Phi_3$ takes values in $\R^2$.
Thus we have written our system \eqref{e713}--\eqref{e714} for $\ff$
in Gromov's triangular form \eqref{e717}--\eqref{e719}.
In fact, let us quote Gromov's theorem \cite[p. 198]{Gromov1986} verbatim as follows:
{\it Let $\partial_i, i=1, \dots, k$, be continuous linearly independent vector fields on $V$.
For example, $V=\R^n$ and $\partial_i=\partial/\partial u_i$, $i=1,\dots,n$.
Let $\Phi_i$ be smooth vector valued functions such that $\Phi_i$ takes values in $\R^{s_i}$ and $\Phi_i$
has entries $\vv, \ff, \partial_1 \ff, \dots, \partial_i\ff$, where $\ff$ is the unknown map $V\to\R^q$.
In other words, $\Phi_i: V\times\R^{q(i+1)}\to\R^{s_i}$.
Consider the following {\rm (}triangular{\rm )} systems of $s=\sum_{i=1}^ks_i$ PDEs{\rm :}
\begin{equation}\label{e720}
\begin{split}
&\Phi_1(\vv,\ff,\partial_1\ff)=0,\\
&\Phi_2(\vv,\ff,\partial_1\ff,\partial_2\ff)=0,\\
&\cdots \cdots \cdots\\
&\Phi_k(\vv,\ff,\partial_1\ff,\dots,\partial_k\ff)=0.
\end{split}
\end{equation}
{\bf Local Solvability.} If $s_i\le q-1$ for all $i=1,\dots,k$, and if the functions $\Phi_i, i=1,\dots, k$, are generic,
then system \eqref{e720} admits a local $C^1$--solution $\ff: U\to\R^q$, for some open subset $U\subset V$.
Moreover, the $C^1$-solutions $U\to\R^q$ are $C^0$--dense in some open subset in the space of $C^0$-maps $U\to\R^q$.}
Notice that Gromov's theorem requires ``smoothness" and ``genericity" of $(\Phi_1, \Phi_2, \Phi_3)$.
The ``genericity" appears to be a requirement of nonlinearity on $(\Phi_1, \Phi_2, \Phi_3)$
which is satisfied because of the nonlinear relation \eqref{e711}.
However, ``smoothness" is not obvious, since $\Phi_i$ is at most $C^{0,\alpha}$ in space
due to the occurrence of derivatives of $V$.
Since $\y_{\rm w}$ is continuous in $t$, but not necessarily smooth, ``smoothness" in $t$
is also an issue.
Nevertheless, if Gromov's theorem is still valid in our case, we would obtain an infinite number of
solutions $\ff$. However, from \eqref{e715}--\eqref{e716},
$\ff$ in $C^1$ would yield $(U_1, U_2)$ at best continuous
in $\x=(x_1, x_2)$, and hence a weak but not strong solution of the incompressible Euler equations.
Hence, it seems that, for the moment, we have a formal, but not yet rigorous, map from
our Nash-Kuiper solution to the Euler equations.
\section{$\,$ The Compressible Euler Equations}\label{S8}
The arguments we have used for the incompressible case also carry over to
the compressible Euler equations.
Then the equations for the balance of linear momentum are
\be\label{e81}
\begin{cases}
\partial_{1}(\rho u^{2}+p)+\partial_{2}(\rho uv)=-\partial_{t}(\rho u),\\[1.5mm]
\partial_{1}(\rho uv)+\partial_{2}(\rho v^{2}+p)=-\partial_{t}(\rho v).
\end{cases}
\ee
The equation for the conservation of mass is
\be\label{e82}
\partial_{1}(\rho u)+\partial_{2}(\rho v)=-\partial_{t}\rho.
\ee
Set
\be\label{e83}
L=\rho v^{2}+p,\quad M=-\rho uv,\quad N=\rho u^{2}+p,
\ee
and the Gauss equation becomes
$$
(\rho v^{2}+p)(\rho u^{2}+p)-(\rho uv)^{2}=\kappa,
$$
{\it i.e.},
\be\label{e84}
p^{2}+p\rho q^{2}=\kappa,
\ee
where $q^{2}=u^{2}+v^{2}$.
For the compressible case, we take
\be\label{e85}
p=\frac{\rho^{\gamma}}{\gamma}, \qquad \gamma\ge 1.
\ee
Substitute \eqref{e85} into \eqref{e84} to obtain
\be\label{e86}
\Big(\frac{\rho^{\gamma}}{\gamma}\Big)^{2}+ q^2\frac{\rho^{\gamma +1}}{\gamma}=\kappa.
\ee
For simplicity, let us first take the isothermal case $\gamma =1$ so that
\be\label{e87}
\rho^{2}(1+ q^2)=\kappa,
\ee
that is,
\be\label{e88}
\rho =\sqrt{\kappa/(1+q^{2})}.
\ee
From \eqref{e88}, the density $\rho$ is determined explicitly
as a function of $\kappa$ and $q^{2}$.
On the other hand, $\rho$ must satisfy the mass balance equation
\eqref{e82} so that
\be\label{e89}
\partial_{1}(u\sqrt{\kappa/(1+q^{2})})+\partial_{2}(v\sqrt{\kappa/(1+q^{2})})
= -\partial_{t} (\sqrt{\kappa/(1+q^{2})}).
\ee
We see that, for given $(u, v)$,
\eqref{e89} is a scalar conservation law for the Gauss
curvature $\kappa$.
This leads us to the following elementary lemma:
\begin{lemma}\label{L81}
For a given smooth solution $(\rho, u, v)$ of the
compressible Euler equations \eqref{e81}--\eqref{e82},
the initial value problem for the Gauss
curvature $\kappa$ given by \eqref{e89} with initial data
$\kappa=\rho^{2}(1+q^{2})>0$ at $t=0$
has a global smooth solution $\kappa$ in space-time,
which satisfies \eqref{e87}.
\end{lemma}
\proof $\,$ Notice that equation \eqref{e89} is a linear transport
equation of conservation form for $\frac{1}{\sqrt{\kappa}}$.
Then the result holds simply by using the standard local
existence-uniqueness theorem for the conservation law
and the direct computation for $J=\sqrt{\frac{\kappa}{1+q^2}}-\rho$:
\begin{align*}
\partial_{t}J&=\partial_{1}(\rho u)+\partial_{2}(\rho v)
-\partial_{1}(u\sqrt{\kappa/(1+q^{2})})
-\partial_{2}(v\sqrt{\kappa/(1+q^{2})})\\[1mm]
&=-\partial _{1}(Ju)-\partial_{2}(Jv).
\end{align*}
Since $J=0$ initially, this equation has the unique
solution $J=0.$
\endproof
The general case $\gamma \geq 1$ follows from noting that
the left-hand side of \eqref{e86} is a monotone increasing function
of $\rho$ so that, for any $\kappa>0$,
there exists a function $\rho(\kappa, q)$ that solves \eqref{e86}.
The regularity of this solution follows by implicit differentiation
of \eqref{e86} with respect
to $(q, \kappa)$.
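As a numerical sanity check (outside the analysis proper), the monotone inversion can be sketched by bisection, working directly from the relation $p^{2}+p\rho q^{2}=\kappa$ of \eqref{e84} with $p=\rho^{\gamma}/\gamma$; the function names below are ours, not from the text.

```python
def density_from_curvature(kappa, q, gamma=1.0, tol=1e-12):
    """Invert p(rho)^2 + p(rho)*rho*q^2 = kappa for rho > 0, where
    p(rho) = rho^gamma / gamma; the left-hand side is strictly increasing."""
    def lhs(rho):
        p = rho**gamma / gamma
        return p * p + p * rho * q * q

    lo, hi = 0.0, 1.0
    while lhs(hi) < kappa:      # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:        # plain bisection on the monotone function
        mid = 0.5 * (lo + hi)
        if lhs(mid) < kappa:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Isothermal case: recovers the closed form rho = sqrt(kappa / (1 + q^2)).
rho_iso = density_from_curvature(kappa=2.0, q=1.0, gamma=1.0)
```

For $\gamma>1$ the same routine applies unchanged, which is the content of the monotonicity remark above.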
We can thus state the general version of Lemma \ref{L81} as
\begin{lemma}\label{L82}
$\,$For a given smooth solution $(\rho, u, v)$ of $\,$the
compressible Euler
equations \eqref{e81}--\eqref{e82},
the initial value problem for the Gauss
curvature for \eqref{e89} with initial data \eqref{e86}
at $t=0$ has a global solution in space-time
satisfying \eqref{e86}.
\end{lemma}
Again, we can also write the fluid variables in terms of the geometric
variables $(L, M, N)$.
To do this, simply substitute \eqref{e85}
with $\rho$ given by the solution to \eqref{e86}
into $L=\rho v^{2}+p, M=-\rho uv$, and $N=\rho u^{2}+p$ to
find
$$
L-N=\rho \big(v^{2}-u^{2}\big).
$$
Write $v=-\frac{M}{\rho u}$ to see
$L-N=\rho \big(\frac{M}{\rho u}\big)^{2}-\rho u^{2}$, which yields
$$
(L-N)\rho u^{2}=M^{2}-\rho ^{2}u^{4},\quad
\rho ^{2}u^{4}+(L-N)\rho u^{2}-M^{2}=0.
$$
Then
$$
u^{2}=\frac{1}{2\rho}\big[-(L-N)\pm \sqrt{(L-N)^{2}+4M^{2}}\big],
$$
and, with $L-N=\rho v^{2}-\rho\big(\frac{M}{\rho v}\big)^{2}$,
$$
v^{2}=\frac{1}{2\rho}\big[(L-N)\pm \sqrt{(L-N)^{2}+4M^{2}}\big].
$$
This tells us to choose the sign $``+ "$ in the above formulas, since $u^{2}$ and $v^{2}$ must be non-negative, so that
\be\label{e810}
\begin{cases}
u^{2}=\frac{1}{2\rho}\big[-(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],\\[1.5mm]
v^{2}=\frac{1}{2\rho}\big[(L-N)+\sqrt{(L-N)^{2}+4M^{2}}\big],
\end{cases}
\ee
which are the desired relations.
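These relations admit a quick round-trip check: build $(L, M, N)$ from a given state via \eqref{e83} and verify that the ``$+$'' branch returns $(u^{2}, v^{2})$ exactly (an illustrative sketch; the names are ours).

```python
import math

def velocities_squared(L, M, N, rho):
    """Recover (u^2, v^2) from (L, M, N) via the '+' branch of the quadratic."""
    disc = math.sqrt((L - N)**2 + 4.0 * M * M)
    return (-(L - N) + disc) / (2.0 * rho), ((L - N) + disc) / (2.0 * rho)

# Round trip: (rho, u, v, p) -> (L, M, N) -> (u^2, v^2).
rho, u, v, p = 1.2, 0.7, -0.4, 1.2   # isothermal case: p = rho for gamma = 1
L, M, N = rho * v**2 + p, -rho * u * v, rho * u**2 + p
u_sq, v_sq = velocities_squared(L, M, N, rho)
```

Note that the pressure $p$ cancels in $L-N$, so only $\rho$ is needed to invert.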
Now the compressible analogs of Theorem \ref{T51} and Corollaries \ref{C51}--\ref{C52}
follow by the same arguments we have employed in \S \ref{S5}.
\begin{theorem} \label{T81}
For locally analytic initial data $(\rho, u, v)$
satisfying \eqref{e86} and $\rho uv\neq 0$,
there exists a local solution in space-time
of system \eqref{e41b}--\eqref{e45b} that satisfies the compressible Euler
equations \eqref{e81}--\eqref{e82} and defines a
propagating two-dimensional Riemannian manifold isometrically immersed in $\mathbb{R}^{3}$.
\end{theorem}
\begin{corollary}\label{C81}
Let $(\rho, u, v)$ be a local analytic
solution in space-time of the compressible Euler equations \eqref{e81}--\eqref{e82}
satisfying the initial condition{\rm :}
$\rho uv\neq 0$ at a point $x_{0}$.
Then there is an evolving metric $\bg$ so
that the solution $(\rho, u, v)$ of the compressible Euler equations
\eqref{e81}--\eqref{e82}
also defines a time evolving two-dimensional Riemannian manifold $({\mathcal M}, \bg)$ immersed
in $\mathbb{R}^{3}$ with the Gauss curvature $\kappa$ determined by \eqref{e86}.
\end{corollary}
Theorems \ref{T72}--\ref{T74} remain unchanged, except for the fact that
the metric $\bg^*$ for the wrinkled manifold now corresponds to the
special incompressible solution of the compressible Euler equations \eqref{e81}--\eqref{e82}
with $\rho=\mathrm{const}$.
In particular, Theorem \ref{T72} now shows that the smooth solutions of the
geometric image of the compressible Euler equations \eqref{e81}--\eqref{e82}
are approximated by the wrinkled solutions that correspond to weak
shear solutions of the \emph{incompressible} Euler equations.
The construction of the wrinkled solutions in this case
is the same as done in \S \ref{S7}
and produces the solutions of the incompressible Euler equations \eqref{e25}--\eqref{e26}.
A wrinkled solution of the compressible Euler equations \eqref{e81}--\eqref{e82}
is impossible, since it would correspond to a vacuum via (\ref{e84}).
\begin{remark} $\,$ We note that the results given in \S \ref{S7} and this section on
the existence of {\it wild}
weak shear solutions have been given in terms of
the Cartesian coordinates $\x=(x_{1}, x_{2})$.
The choice of local
Cartesian coordinates
is only a convenience, and the Euler
equations written in polar coordinates would suffice.
\end{remark}
\section{$\,$ Isometric Embedding Problem and General Continuum Mechanics: Elastodynamics as an Example}\label{S10}
Motivated by our results for fluid dynamics,
we now consider solutions in two-dimensional general continuum mechanics.
Denote by $\T=(T_{11}, T_{12}, T_{22})$ the (symmetric) Cauchy stress tensor,
and assume that fields $(u,v)$, $\T$,
and $\rho$ are consistent with some specific constitutive equation
for a body and satisfy the balances of mass and linear momentum
(satisfaction of the balance of angular momentum
is automatic).
The equations for the balance of linear momentum in the spatial representation are
\be\label{e101}
\begin{cases}
\partial_{1}(\rho u^{2}-T_{11})+\partial_{2}(\rho uv-T_{12})
=-\partial_{t}(\rho u),\\[1mm]
\partial_{1}(\rho uv-T_{12})+\partial_{2}(\rho v^{2}-T_{22})
=-\partial_{t}(\rho v),
\end{cases}
\ee
and the balance of mass is
\begin{equation}\label{9.2a}
\partial_{1}(\rho u) +\partial_{2}(\rho v) = -\partial_{t}\rho,
\end{equation}
or
\begin{equation}\label{9.2b}
\rho = \rho_0 \, (\det \F)^{-1},
\end{equation}
where $\rho_0$ is the density of the body in the reference configuration,
and $\F$ is the deformation gradient of the current configuration
with respect to this reference.
\subsection{$\,$ Mapping a general continuum mechanics problem
to the non-degenerate isometric embedding problem}
Denote the geometric dependent variables as
\be\label{LMN}
L=\rho v^{2}-T_{22},\quad N=\rho u^{2}- T_{11},\quad
M= -\rho uv + T_{12},
\ee
so that the Gauss equation becomes
\begin{equation}\label{10.5-a}
(\rho v^{2}-T_{22})(\rho u^{2}-T_{11})-(\rho uv-T_{12})^{2}= \kappa.
\end{equation}
Under the assumption that
\begin{equation}\label{10.6-a}
\det \T+ 2\rho uvT_{12}-\rho v^{2}T_{11}-\rho u^{2}T_{22} > 0,
\end{equation}
at least on some initial time interval,
such a solution of continuum mechanics corresponds to a positive Gauss curvature
for the corresponding isometric embedding problem.
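The algebra connecting the Gauss equation \eqref{10.5-a} to the positivity condition \eqref{10.6-a} can be spot-checked numerically; the following is an illustrative sketch with names of our own choosing.

```python
def curvature_from_state(rho, u, v, T11, T12, T22):
    # kappa = L*N - M^2 with L, M, N as in the geometric variables above
    L = rho * v**2 - T22
    N = rho * u**2 - T11
    M = -rho * u * v + T12
    return L * N - M * M

def expanded_form(rho, u, v, T11, T12, T22):
    # det T + 2 rho u v T12 - rho v^2 T11 - rho u^2 T22
    detT = T11 * T22 - T12 * T12
    return detT + 2*rho*u*v*T12 - rho*v**2*T11 - rho*u**2*T22

state = dict(rho=1.3, u=0.5, v=-0.8, T11=0.2, T12=0.4, T22=-0.1)
```

The two expressions agree because the $\rho^{2}u^{2}v^{2}$ terms cancel identically in the expansion.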
Using $(L, M, N)$ as time-dependent data in (\ref{e31})--(\ref{e32})
and expressing these three equations in terms of the metric components $(g_{11}, g_{12}, g_{22})$
and their derivatives by using (\ref{e23})--(\ref{e24}),
we have a system of three partial differential equations for the components of the metric.
Using the analog of Corollary \ref{C81}, we have
\begin{corollary}
Let $(u,v, T_{11}, T_{12}, T_{22}, \rho)$ be a local analytic solution in space-time
of system \eqref{e101}--\eqref{9.2a}
satisfying the condition{\rm :}
$$
T_{12}-\rho uv \neq 0\qquad \mbox{at a point $\x^0$}.
$$
Then there is an evolving metric $\bg$ so that the continuum mechanical solution
defines a time evolving two-dimensional Riemannian manifold $({\mathcal M},\bg)$
isometrically immersed in $\mathbb{R}^3$.
\end{corollary}
\subsection{$\,$ Image of degenerate isometric embedding problem in continuum mechanics}
We note that the ability to solve for the evolving metric $\bg$ in the fluid case
relied on the initial data for the off-diagonal term $M$ in the second fundamental
form being non-vanishing.
In the cases of incompressible and compressible fluids,
this term was simply $M=-uv$ and $M=-\rho uv$, respectively.
However, for general continuum mechanics,
$$
M = T_{12}-\rho u v,
$$
that is, the expression for $M$ has an additional contribution.
Theorems \ref{T72}--\ref{T74} remain unchanged, except for the fact that metric $\bg^*$
for the wrinkled manifold now corresponds
to special \emph{steady} smooth solutions of the equations of two-dimensional general continuum mechanics.
We now identify these solutions.
We want to define solutions to the mechanical equations \eqref{e101}--\eqref{9.2a}
in continuum mechanics
from a smooth, degenerate isometric embedding problem for
which $\partial_i \y \cdot \partial_j \y = g^*_{ij}, i,j=1,2$, are satisfied.
Then, making the association analogous to that used in Lemma 4.2,
we have
\be\label{10.8-a}
\begin{cases}
\rho uv - T_{12} = 0,\\
\rho u^2 - T_{11} = Z(x_2,t),\\
\rho v^2 - T_{22} = 0,
\end{cases}
\ee
where the function $Z$ is defined from the embedding, and
\be
\begin{cases}
\partial_1(\rho uv - T_{12}) + \partial_2 (\rho v^2 - T_{22}) = 0, \\
\partial_1( \rho u^2 - T_{11}) + \partial_2( \rho uv - T_{12}) = 0
\end{cases}
\ee
are satisfied by the mechanical fields that are being defined.
These would form a consistent set of fields satisfying the balances
of linear momentum and mass if the following constraints
\be\nonumber
\begin{cases}
\partial_t (\rho u) = \partial_t (\rho v) = 0,\\
\partial_1 (\rho u) + \partial_2 (\rho v) = -\partial_t \, \rho
\end{cases}
\ee
are satisfied, {\it i.e.},
these geometric solutions define continuum mechanical solutions with steady momenta.
This is easily done by noting that the conditions imply $\partial_{tt} \rho = 0$ with the solution
\begin{equation}\label{rho_flat}
\rho (x_1, x_2, t) = \rho_1(x_1,x_2)\, t + \rho_2 (x_1,x_2),
\end{equation}
where $\rho_1$ and $\rho_2$ are arbitrary time-independent functions
of the spatial variables, chosen so that $\rho>0$.
We then define $(u, v)$ by integrating the pointwise ordinary differential equations:
\be\nonumber
\begin{cases}
\rho u_t = - \rho_1 u,\\
\rho v_t = - \rho_1 v.
\end{cases}
\ee
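At each fixed spatial point, this construction admits a closed form: with $\rho=\rho_1 t+\rho_2$, the equation $\rho u_t=-\rho_1 u$ integrates to $u(t)=u_0\rho_2/(\rho_1 t+\rho_2)$, so that the momentum $\rho u$ is constant in time. A quick check (names ours):

```python
def density(rho1, rho2, t):
    # rho(x, t) = rho1(x) * t + rho2(x), with the spatial point x held fixed
    return rho1 * t + rho2

def velocity(u0, rho1, rho2, t):
    # closed-form solution of rho * du/dt = -rho1 * u with u(0) = u0
    return u0 * rho2 / (rho1 * t + rho2)

# The momentum rho * u should equal the constant u0 * rho2 at all times.
momenta = [density(0.5, 2.0, t) * velocity(3.0, 0.5, 2.0, t) for t in (0.0, 1.0, 4.0)]
```

The same formula with $v_0$ in place of $u_0$ gives the second momentum component.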
With $(\rho, u, v, Z)$ in hand, we define the stress components
from \eqref{10.8-a} to obtain
a class of mechanical solutions in general continuum mechanics
from the smooth degenerate isometric embedding problem.
It is to be noted that, for such continuum mechanical solutions to be realizable
for a specific material, the mechanical fields as defined have to be shown
to be consistent with a constitutive equation for the stress for that material.
Alternatively, by seeking solutions to the system
\be\label{steady}
\begin{cases}
\partial_1(\rho uv - T_{12}) + \partial_2 (\rho v^2 - T_{22}) = 0, \\
\partial_1( \rho u^2 - T_{11}) + \partial_2( \rho uv - T_{12}) = 0,\\
\partial_1 (\rho u) + \partial_2(\rho v) = -\partial_t \,\rho,
\end{cases}
\ee
we can define solutions to the balance of mass and the \emph{steady equations of balance of linear momentum}.
\begin{remark} $\,$ The solutions to system \eqref{steady} do not necessarily constitute exact, steady solutions
of the balance of linear momentum, {\it i.e.}, $\partial_t (\rho u)$ and $\partial_t (\rho v)$ may not evaluate to $0$
from such motions, much in the spirit of quasi-static evolutions in solid mechanics and Stokes flow
in fluid mechanics.
As is understood, such solutions are typically interpreted in an asymptotic sense when the velocities
are assumed to ``equilibrate'' on a much faster time-scale than the evolution of driving boundary conditions
or forcing, {\it i.e.}, $t$ is assumed to be the ``slow'' time scale, and the right-hand sides of the first two
equations actually carry the terms $-\epsilon \, \partial_t (\rho u)$ and $-\epsilon \, \partial_t (\rho v)$,
with $0 < \epsilon \ll 1$, so that we are dealing with a singular perturbation at first order of approximation.
\end{remark}
\smallskip
We now
display the image of the isometric embedding problem
in the smooth degenerate case in nonlinear elastodynamics for a Neo-Hookean material.
We choose the constitutive equation for the Cauchy stress of the compressible material as
\[
\T = \rho \F\F^\top =: \rho \B,
\]
where $\F$ is the deformation gradient from a fixed reference configuration of the body.
The generic point on the fixed reference is denoted by $\X = (X_1, X_2)$, the motion as $\x(\X,t)$,
and $F_{ij} = \frac{\partial x_i}{\partial X_j}$.
For the sake of simplicity, we ignore a multiplicative scalar function of the invariants of $\B$
that would ensure that the stress at the reference configuration vanishes, even in this simplest
frame-indifferent elastic constitutive assumption.
\emph{We now seek special solutions to the steady equations \eqref{steady}.
After obtaining any such solution consistent with the posed steady problem,
we will further check which of them also constitute {\rm (}steady{\rm )} solutions to the equations of balance
of linear momentum}.
\smallskip
Thus, it needs to be demonstrated that the conditions:
\be\label{NH_gov_eq}
\begin{cases}
uv - B_{12} = 0,\\
v^2 - B_{22} = 0,\\
u^2 - B_{11} = \rho^{-1} Z(x_2,t)
\end{cases}
\ee
are satisfied, along with balance of mass in the form
\[
\rho = (\det \F)^{-1} \rho_0(\X),
\]
where $\rho_0$ is the mass density distribution in the reference configuration.
We assume the mass density distribution on the reference to be a constant function with value $\rho_0 > 0$, and always require that $\det \F >0$.
The components of $\B$ are given as
\[
B_{11} = F_{11}^2 + F_{12}^2, \quad B_{12} = B_{21} = F_{11}F_{21} + F_{12}F_{22},
\quad B_{22}=F_{21}^2 + F_{22}^2.
\]
Thus, the equations required to be satisfied by a motion consistent with these constraints
would be
\be\label{elastic_syst}
\begin{cases}
\frac{\partial x_1}{\partial t}\frac{\partial x_2}{\partial t}
= \frac{\partial x_1}{\partial X_1}\frac{\partial x_2}{\partial X_1}
+ \frac{\partial x_1}{\partial X_2}\frac{\partial x_2}{\partial X_2},\\[1.5mm]
\Big(\frac{\partial x_2}{\partial t}\Big)^2
= \Big(\frac{\partial x_2}{\partial X_1}\Big)^2
+ \Big(\frac{\partial x_2}{\partial X_2}\Big)^2,\\[1.5mm]
\Big(\frac{\partial x_1}{\partial t}\Big)^2
= \Big(\frac{\partial x_1}{\partial X_1}\Big)^2
+ \Big(\frac{\partial x_1}{\partial X_2}\Big)^2 + \left(\rho_0^{-1} \det \F \right) Z (x_2(X,t),t).
\end{cases}
\ee
We define $\nabla x_i := (\partial_{X_1} x_i, \partial_{X_2} x_i)$,
and
\begin{align*}
J (\nabla x_1, \nabla x_2):=& \det \F = \partial_{X_1} x_1 \partial_{X_2} x_2 - \partial_{X_2} x_1 \partial_{X_1} x_2\\[1mm]
=& \sqrt{|\nabla x_1|^2 |\nabla x_2|^2 - (\nabla x_1 \cdot \nabla x_2)^2}
\end{align*}
for subsequent use, where the second equality follows from the Lagrange identity and the requirement $\det \F > 0$.
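The expression for $J$ rests on the two-dimensional Lagrange identity $|\mathbf{a}|^{2}|\mathbf{b}|^{2}-(\mathbf{a}\cdot\mathbf{b})^{2}=(a_1b_2-a_2b_1)^{2}$, which can be checked directly; an illustrative sketch:

```python
import math

def J_as_determinant(a, b):
    # det of the 2x2 matrix with rows a = grad x1 and b = grad x2
    return a[0] * b[1] - a[1] * b[0]

def J_from_lagrange_identity(a, b):
    # the square-root form; unsigned, so it equals det only when det > 0
    norm_sq = lambda w: w[0]**2 + w[1]**2
    dot = a[0] * b[0] + a[1] * b[1]
    return math.sqrt(norm_sq(a) * norm_sq(b) - dot * dot)

grad_x1, grad_x2 = (1.0, 0.3), (0.0, 1.0)   # e.g. the shearing ansatz below
```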
If solutions exist to the above system,
then they also satisfy the following condition:
for a solution satisfying the second and third equations,
the satisfaction of the first equation of (\ref{elastic_syst}) is equivalent to
\[
\rho_0^{-1} Z |\nabla x_2|^2 + J(\nabla x_1, \nabla x_2) = 0,
\]
which implies that $|\nabla x_2| > 0$, since otherwise $J = 0$, which is not acceptable by hypothesis.
Thus, the solutions of the steady, Neo-Hookean image of the degenerate isometric embedding problem must satisfy the following system for functions $(x_1, x_2)$:
\be\label{es2}
\begin{cases}
\Big(\frac{\partial x_2}{\partial t}\Big)^2 = |\nabla x_2|^2,\\[1mm]
\Big(\frac{\partial x_1}{\partial t}\Big)^2
= |\nabla x_1|^2 - \Big(\frac{J\left(\nabla x_1, \nabla x_2 \right)}{|\nabla x_2|}\Big)^2,\\
\rho_0^{-1} Z(x_2, t) |\nabla x_2|^2 = - J(\nabla x_1, \nabla x_2),
\end{cases}
\ee
with the caveat that the solutions to \eqref{es2} satisfy
$$
\frac{\partial x_1}{\partial t} \frac{\partial x_2}{\partial t} = \nabla x_1 \cdot \nabla x_2.
$$
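Whether a candidate solution satisfies this caveat can be tested pointwise. For the shearing ansatz of Example \ref{elast_example} below, $x_1=X_1+w(X_2+\sigma t)$, $x_2=X_2+t$ with $\sigma=\pm 1$, the residual of the caveat is computed in this small sketch (names ours):

```python
import math

def caveat_residual(sigma, X2, t, w=math.sin, h=1e-6):
    """Residual of x1_t * x2_t = grad(x1) . grad(x2) for
    x1 = X1 + w(X2 + sigma*t), x2 = X2 + t (so x2_t = 1, F21 = 0, F22 = 1)."""
    wp = lambda s: (w(s + h) - w(s - h)) / (2.0 * h)   # central-difference w'
    f_t = sigma * wp(X2 + sigma * t)                   # d(x1)/dt
    f_X2 = wp(X2 + sigma * t)                          # grad(x1) . grad(x2)
    return f_t * 1.0 - f_X2

res_plus, res_minus = caveat_residual(+1, 0.2, 0.7), caveat_residual(-1, 0.2, 0.7)
```

The $\sigma=+1$ branch satisfies the caveat while $\sigma=-1$ violates it, anticipating cases (i) and (ii) of the example.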
When restricting attention to the class of solutions to the whole system that are scale-invariant
in $(x_1, x_2)$, {\it i.e.}, those solutions that remain solutions if the dependent and independent
variables are scaled by the same constant $\lambda$,
we note that the first two equations in (\ref{es2}) remain invariant under such a rescaling,
while the third remains invariant provided that $Z(\lambda x_2, \lambda t)=Z(x_2,t)$,
{\it i.e.}, $Z$ is independent of $\lambda$.
\emph{We now assume that the given function $Z(x_2,t) = - \rho_0$}.
\begin{example}\label{elast_example}
$\,$ Consider a shearing motion of the form:
\be\label{ex_ansatz}
\begin{split}
& x_1 = X_1 + f(X_2,t),\\
& x_2 = X_2 \pm t.
\end{split}
\ee
Then $F_{11} = 1$, $F_{12} = \partial_{X_2} f$, $F_{21} = 0$, and $F_{22} = 1$.
Thus, $J = 1$, $|\nabla x_1|^2 = 1 + (\partial_{X_2} f)^2$, and $|\nabla x_2|^2 = 1$.
Then the first equation of (\ref{es2}) is identically satisfied, the second requires
\[
\Big(\frac{\partial f}{\partial t}\Big)^2 = \Big(\frac{\partial f}{\partial X_2} \Big)^2,
\]
and the third is identically satisfied by our choice of $Z = -\rho_0$.
Thus, traveling waves of the form:
\[
f(X_2, t) = w(X_2 \pm t)
\]
define solutions to system \eqref{es2}
for each sign of $\pm$ in the second equation of (\ref{ex_ansatz}).
For a spatially uniform density field on the reference configuration,
it is easy to check that the equations of balance of linear momentum for the Neo-Hookean constitutive assumption
we are considering (the first Piola-Kirchhoff stress is given by $\rho_0 \F$) reduce to the linear,
second-order system:
\[
\frac{\partial^2 x_i}{\partial t^2}
= \frac{\partial^2 x_i}{\partial X_1^2} + \frac{\partial^2 x_i}{\partial X_2^2}.
\]
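That $x_1=X_1+w(X_2\pm t)$ solves this wave equation can be verified with finite differences; the $X_1$-derivatives of $x_1-X_1$ vanish identically for the ansatz, so only the $(t, X_2)$ stencils matter. A quick sketch:

```python
import math

def wave_residual(w, X2, t, h=1e-4):
    """Finite-difference residual of f_tt = f_{X2 X2} for f(X2, t) = w(X2 + t);
    the X1 second derivative contributes nothing for x1 = X1 + f."""
    f = lambda X2_, t_: w(X2_ + t_)
    f_tt = (f(X2, t + h) - 2.0 * f(X2, t) + f(X2, t - h)) / h**2
    f_22 = (f(X2 + h, t) - 2.0 * f(X2, t) + f(X2 - h, t)) / h**2
    return f_tt - f_22

res = wave_residual(math.sin, X2=0.3, t=1.1)
```

The two stencils sample $w$ at the same points $X_2+t\pm h$, so the residual is zero up to floating-point error.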
Thus, for the assumed ansatz (\ref{ex_ansatz}), the image of the degenerate isometric embedding problem
has produced special, \emph{exact} non-steady, solutions to the balance of linear momentum.
Clearly, these are not steady solutions on the reference configuration in general.
Thus, it is an interesting question to check whether any of these
are \emph{exact} steady solutions on the current configuration.
\begin{enumerate}
\item[(i)] The solutions generated in Example \ref{elast_example} correspond to
$f( {{X}_{2}},t)=w( {{X}_{2}}\pm t )$.
Let us consider each in turn.
To check steadiness, we consider an arbitrarily fixed point
$\x^0=(x_{1}^{0},x_{2}^{0})$ in space.
We have to show that the velocities of the material points that occupy it
at different times are the same,
since the density is constant, $\rho={{\rho }_{0}}$, everywhere for this example.
Let the image in the reference configuration of point $\x^{0}$
at time $t$ be
\[
(X_{1}^{0}(\x^{0},t ), X_{2}^{0}(\x^{0},t)).
\]
For $f({{X}_{2}},t)=w({{X}_{2}}+t)$,
\begin{equation}
x_{1}^{0}=X_{1}^{0}+w(X_{2}^{0}+t), \quad x_{2}^{0}=X_{2}^{0}+t,
\end{equation}
which implies
\begin{equation}\label{10.14a}
X_{1}^{0}=x_{1}^{0}-w( x_{2}^{0}).
\end{equation}
Now
\begin{equation}
u({{X}_{1}},{{X}_{2}},t)=\frac{\partial f}{\partial t}({{X}_{2}},t )=w'( {{X}_{2}}+t), \quad
v( {{X}_{1}},{{X}_{2}} )=1.
\end{equation}
Then
\begin{equation}
\begin{split}
&u( X_{1}^{0},X_{2}^{0},t )=w'( X_{2}^{0}+t )=w'( x_{2}^{0}-t+t )=w'( x_{2}^{0}),\\
&v( X_{1}^{0},X_{2}^{0},t)=1.
\end{split}
\end{equation}
Thus, we indeed have a steady spatial velocity and momentum field.
The way to understand it physically is the following:
Let $w(X_2)$ be specified.
The points on line ${{x}_{2}}=x_{2}^{0}$ have material points sitting on them at time $t$
with referential $X_2$--coordinate given by $x_{2}^{0}-t$.
However, for such referential points, the horizontal velocity $w'$ corresponds
to
$$
X_2+t=x_{2}^{0}-t+t=x_{2}^{0}.
$$
Hence, this is a very curious situation where the picture is completely steady on the current configuration;
however,
there is unsteady wave propagation with Piola-Kirchhoff shear
stress waves propagating on the reference configuration.
\smallskip
\item[(ii)]
Now we consider the other solution $f({{X}_{2}},t)=w( {{X}_{2}}-t)$. Then
\begin{equation}\nonumber
\begin{split}
& X_{2}^{0}=x_{2}^{0}-t, \\
& X_{1}^{0}=x_{1}^{0}-w( X_{2}^{0}-t)=x_{1}^{0}-w( x_{2}^{0}-2t),
\end{split}
\end{equation}
so that
\begin{equation}\nonumber
u( X_{1}^{0},X_{2}^{0} )=-w'( x_{2}^{0}-2t), \quad
v( X_{1}^{0},X_{2}^{0})=1.
\end{equation}
Then we have unsteadiness in the current configuration as well as the reference.
Note that this case corresponds to a situation that does not satisfy the caveat required for
solutions of \eqref{es2} to correspond to solutions of \eqref{elastic_syst}.
\smallskip
\item[(iii)] However, if we now choose the ansatz for the motion to be
\begin{equation*}
{{x}_{1}}={{X}_{1}}+f( {{X}_{2}},t), \quad {{x}_{2}}={{X}_{2}}-t,
\end{equation*}
then it can be checked that again $f({{X}_{2}},t )=w( {{X}_{2}}\pm t)$
are both solutions to system \eqref{es2},
but now only solution $w( {{X}_{2}}-t)$ works (for obvious reasons by following the previous argument):
\begin{equation*}
\begin{split}
X_{2}^{0}&=x_{2}^{0}+t, \\
X_{1}^{0}&=x_{1}^{0}-f(X_{2}^{0},t) = x_{1}^{0}-w( X_{2}^{0}-t)\\
&=x_{1}^{0}-w( x_{2}^{0}+t-t )=x_{1}^{0}-w( x_{2}^{0}).
\end{split}
\end{equation*}
Then again
\begin{equation*}
u(X_{1}^{0},X_{2}^{0})=-w'( X_{2}^{0}-t)=-w'( x_{2}^{0}), \quad
v( X_{1}^{0},X_{2}^{0})=v( {{X}_{1}},{{X}_{2}})=-1.
\end{equation*}
Thus, we have a steady field on the current configuration, while, on the reference,
stress/velocity/position waves are moving from the bottom to the top.
\end{enumerate}
\smallskip
While simple, it is important to appreciate that the generated exact solution
in this extremely simple example can produce smooth analogs (with continuous deformation gradient)
of the traveling wave profile shown in Fig. \ref{f1}
\begin{figure}
\centering
\includegraphics[width=5.0in, height=3.0in]{microstructure.pdf}
\caption{(a) Shear bands and phase boundaries;
(b) Dislocation with core, the terminating line (through the plane of paper)
of a shear band;
(c) Generalized disclination with core, the terminating line of a phase boundary.}
\label{f1}
\end{figure}
with arbitrarily small positive values of $a$ and $b$.
When $a$ is comparable to $b$,
this is a sequence of phase boundaries separating domains;
when $b \ll a$, then the regions spanned within width $b$ are slip zones or shear bands
separating undeformed blocks.
The greater freedom that elastodynamics provides over elastostatics in producing microstructure
can now be appreciated.
While such deformations in elastostatics can only be produced with a multi-well energy
function \cite{Ball_James, Abeyaratne_Knowles},
such microstructures (for $a,b \ll 1$) in elastodynamics occur
in this case of the simplified Neo-Hookean
material (with the energy density ${\rm tr}(\mathbf{C})$),
and persist for all times. Indeed, multiple low-energy states corresponding to large total strains
and static microstructures\footnote{Note that our example is unequivocally dynamic owing to the requirement that $|\nabla x_2|^2 > 0$.}
are observed facts, and their representation constitutes an important physical ingredient of solid mechanics\footnote{Indeed, in the traditional definition
of phase transformations in solids, {\it cf.} \cite{Abeyaratne_Knowles}, the microstructures being discussed here would not be considered as domains
separated by phase boundaries, since the strain states belong to the same well.}.
Our intent here is to simply point out the greater freedom of displaying microstructure without any length scale
in elastodynamics.
By considering the case for ever smaller $a,b > 0$, it is also now perhaps easy to intuitively see why the wrinkled
deformations of the isometric embedding problem may have a limiting status in elastodynamics,
as the parameters $a$ and/or $b$ tend to zero.
Furthermore, it is an interesting question whether our elastodynamic model (\ref{es2}) corresponding to the degenerate
isometric embedding also displays solutions that represent static and dynamic terminating lines of phase
boundaries (generalized disclinations) and shear bands (dislocations),
ingredients of nature that may help to further understand physically the nature of the wrinkled embeddings
from geometry.
Finally, it is important to recognize and accept that microstructure in solids is not about infinite refinement
and, as is well-known, such scale-free deformations, no matter how exciting from the analytical point of view,
are notoriously difficult to deal with practically in modeling endeavors (following nature's guide, in some sense).
Thus, our considerations seem to point to the need for models that can represent both static, quasi-static
({\it i.e.}, evolving microstructure in the absence of material inertia), as well as fully dynamic microstructures
with in-built length scales and accounting for the dissipation due to microstructure evolution.
\end{example}
\subsection{$\,$ Mapping between the general continuum mechanics problem and the isometric embedding problem}
Suppose that we have a smooth, time-dependent two-dimensional manifold immersed in $\mathbb{R}^3$.
Then $(g_{11}, g_{12}, g_{22})$ and $(L,M,N)$ can be defined from the manifold
as in \S \ref{S2}.
Suppose that pointwise initial data $(\rho_0, u_0, v_0)$ for $(\rho, u, v)$ are available.
Then define the following quantities $(m_u, m_v)$ through the ordinary differential equations:
\be\label{momenta}
\begin{cases}
\partial_{t}\, m_u = \Gamma_{22}^{1}L-2\Gamma_{12}^{1}M+\Gamma_{11}^{1}N,\\
\partial_{t}\, m_v = \Gamma_{22}^{2}L-2\Gamma_{12}^{2}M+\Gamma_{11}^{2}N,
\end{cases}
\ee
with initial condition on $m_u$ specified as $\rho_0 u_0$ and on $m_v$ as $\rho_0 v_0$.
With fields $(m_u, m_v)$ available, solve
\begin{equation}\label{bal_mass}
-\partial_t \, \rho = \partial_1 m_u + \partial_2 m_v
\end{equation}
with initial condition on $\rho$ to be $\rho_0$.
Then, using this $\rho$ field along with $\Gamma$ and $(L, M, N)$ as time-dependent data,
solve the pointwise ordinary differential equations for fields $(u, v)$ given by (\ref{momenta})
with $(m_u, m_v)$ replaced by $(\rho u, \rho v)$, with the initial condition on $(u, v)$ being $(u_0, v_0)$, respectively.
Clearly, we have
\[
(m_u, m_v) = (\rho u, \rho v).
\]
Then (\ref{bal_mass}) implies that the constructed fields $(\rho, u,v)$ satisfy the balance of mass.
Finally, defining the stress components from (\ref{LMN}) by using $(L, M, N)$ and the constructed fields $(\rho, u, v)$ as the data,
and noting (\ref{momenta}) and (\ref{e31}), we find that every smooth time evolving
two-dimensional manifold immersed in $\mathbb{R}^3$ defines a solution of the balance laws of two-dimensional
general continuum mechanics.
It requires the imposition of further constraints to obtain the motions within this class that are consistent
with the constitutive response of any specific material.
For a given constitutive relation, the Cauchy stress is determined from the velocity field $(u,v)$ and density $\rho$.
The case for (incompressible and compressible) inviscid fluids has been exposited,
and the case of elasticity requires the deformation gradients to be calculated from the deformation field
inferred from the velocity field $(u,v)$ via time integration.
Hence, since $(\rho,u,v)$ have been determined by our argument above,
the ability to satisfy the constitutive relations requires that metric $\bg$ be
consistent with these extra constraints.
Let $\F$ be the deformation gradient field of the mechanical body from a fixed stress-free elastic
reference configuration.
Let $\LL_v$ denote the velocity gradient field.
For simplicity, assume a local constitutive equation for the stress of the form:
\[
\T = \hat \T (\LL_v,\rho,\F,\bg).
\]
We also use the notation: $\kappa = \kappa(\bg,\nabla \bg, \nabla^2 \bg)$
and $\Gamma= \Gamma(\bg,\nabla \bg)$, for the known functions for the Gauss curvature and the Christoffel symbols.
\begin{remark} $\,$ A time-dependent set of mechanical fields $(\rho, u, v, \F, \T)$
and a time-dependent metric field $\bg$ related
by the constitutive assumption $\hat \T$ are consistent in the sense that the balance of linear momentum and
balance of mass are satisfied, and the metric is isometrically embedded in $\mathbb{R}^3$ at each instant of time,
provided that the following system of constrained partial differential equations of evolution are satisfied:
\begin{equation}\label{constrained_system}
\begin{cases}
\partial_{1}(\rho u) +\partial_{2}(\rho v) = -\partial_{t}\rho,\\[1mm]
\partial_k F_{ij} v_k - (\LL_v)_{ik}F_{kj} = -\partial_t F_{ij},\\[1mm]
\partial_{1}\big(\rho u^{2}-\hat T_{11}(\LL_v,\rho, \F, \bg)\big)
+\partial_{2}\big(\rho uv- \hat T_{12}(\LL_v,\rho, \F, \bg)\big)
= -\partial_{t}(\rho u),\\[1mm]
\partial_{1}\big(\rho uv - \hat T_{12}(\LL_v,\rho, \F, \bg)\big)
+\partial_{2}\big(\rho v^{2} - \hat T_{22}(\LL_v,\rho, \F, \bg)\big)=-\partial _{t}(\rho v),
\end{cases}
\end{equation}
and
\begin{equation}\label{constrained_system-a}
\begin{cases}
\partial_{1} N -\partial_{2}M
= -\Gamma_{22}^{1}(\bg,\nabla \bg) L +2\Gamma_{12}^{1} (\bg,\nabla \bg) M -\Gamma_{11}^{1} (\bg,\nabla \bg)N,\\[1mm]
\partial_{1} M -\partial_{2}L
= \Gamma_{22}^{2}(\bg,\nabla \bg) L -2\Gamma_{12}^{2}(\bg,\nabla \bg) M +\Gamma_{11}^{2}(\bg,\nabla \bg) N,\\[1mm]
LN - M^2 = \kappa(\bg,\nabla \bg, \nabla^2 \bg),\\[1mm]
L \mathbf{e}^1 \otimes \mathbf{e}^1
+ M \left(\mathbf{e}^1 \otimes \mathbf{e}^2 + \mathbf{e}^2 \otimes \mathbf{e}^1 \right)
+ N \mathbf{e}^2 \otimes \mathbf{e}^2\\
\qquad = \mathbf{f} \left( \rho \mathbf{u} \otimes \mathbf{u}
- \mathbf{T}\left(\mathbf{L}_v, \rho, \mathbf{F}, \mathbf{g} \right) \right),
\end{cases}
\end{equation}
where $\mathbf{f}$ is a tensor-valued function of its tensorial argument,
and $\left(\mathbf{e}^1, \mathbf{e}^2\right)$ is the dual basis corresponding
to the natural basis on the surface given
by $\left(\partial_1\mathbf{y},\partial_2 \mathbf{y} \right)$.
The last tensorial equation consists of three independent equations.
The equations in \eqref{constrained_system-a}
are to be considered as the constraints
that determine the family of pairs of first and second fundamental forms of
embedded manifolds consistent with a given mechanical state at any given time.
{\it Conversely}, the evolution of the mechanical fields following
the first four equations \eqref{constrained_system}
must be constrained to the {\it manifold} defined by
\eqref{constrained_system-a}
in the state-space of spatial $(\rho, u, v, \F, \bg)$ fields.
One may consider eliminating variables $(L, M, N)$ from the equations in \eqref{constrained_system-a}
to obtain three constraint equations for the three components of the metric field.
Abstractly, one may think of eliminating all of the equations in \eqref{constrained_system-a}
and replacing $\bg$ in the mechanical set of the first four equations \eqref{constrained_system}
as a spatially non-local term in the mechanical fields representing a solution of \eqref{constrained_system-a}.
Notice that this remark applies equally to the steady problem of continuum mechanics
(where the right-hand sides of the third and fourth equations of \eqref{constrained_system}
are assumed to be $0$), in which case the time-dependence of boundary conditions
(or of body forces, which have been assumed to vanish here for simplicity)
drives the evolution of the mechanical problem.
We note that the earlier sections of this paper dealing with the equations
of incompressible and compressible fluid dynamics,
and Neo-Hookean elastodynamics are special cases of the above system in which the constitutive equation
for the Cauchy stress is independent of metric $\bg$.
\end{remark}
\section{$\,$ Concluding Remarks}\label{S9}
We close by discussing some broad implications and possible extensions of the presented work.
\subsection{$\,$ Admissibility of weak solutions}
Since the wrinkled solutions shadowing the
incompressible or compressible Euler equations are completely reversible
Nash-Kuiper solutions corresponding to the metric for the developable surface, the usual
irreversible entropy admissibility criteria are useless as selection
criteria. This tells us that any hope of selecting a unique, physically
meaningful solution rests on criteria chosen without recourse to time evolution, {\it e.g.},
artificial viscosity or energy minimization.
For example, an energy
minimization which penalizes second derivatives of $\y$ would prefer the affine
initial data over data with folds, so that the {\it wild} initial data would be ruled out.
Similarly,
Sz\'{e}kelyhidi Jr. \cite{SzL2011}, Bardos-Titi-Wiedemann \cite{BTitiW},
Bardos-Titi \cite{BTiti}, and Bardos-Lopes Filho-Niu-Nussenzveig Lopes-Titi \cite{BLFN}
have shown that the viscosity criterion also eliminates the {\it wild} solutions for the
Euler equations.
To be more precise about the role of viscosity, we recall the incompressible
Navier-Stokes equations:
\be\label{e91}
\begin{cases}
\partial_{1}(u^{2}+p)+\partial_{2}(uv)=-\partial_{t}u+\frac{1}{\text{Re}}\Delta u,\\[1mm]
\partial_{1}(uv)+\partial _{2}(v^{2}+p)=-\partial_{t}v+\frac{1}{\text{Re}}\Delta v,
\end{cases}
\ee
where we have taken density $\rho =1$ and $\text{Re}$ denotes the
Reynolds number.
The condition of incompressibility and the Poisson equation for the pressure
remain as
\begin{align}
&\partial_{1}u+\partial_{2}v=0,\label{e92}\\[1mm]
&\partial_{11}^{2}(u^{2})+2\partial_{12}(uv)+\partial_{22}(v^{2})
=-\Delta p. \label{e93}
\end{align}
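For completeness, \eqref{e93} follows from \eqref{e91} by a one-line computation: apply $\partial_1$ to the first equation of \eqref{e91}, $\partial_2$ to the second, and add; the time-derivative and viscous terms then vanish by the incompressibility condition \eqref{e92}:
\[
\partial_{11}^{2}(u^{2})+2\partial_{12}(uv)+\partial_{22}(v^{2})+\Delta p
= -\partial_t(\partial_{1}u+\partial_{2}v)
+\frac{1}{\text{Re}}\Delta(\partial_{1}u+\partial_{2}v)
= 0.
\]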
If we review all the previous arguments made for the inviscid Euler
equations leading up to and including \S \ref{S8},
we see all the conclusions
we have made regarding the Euler equations hold true for the Navier-Stokes
equations modulo one crucial point.
For the Euler equations,
metric $\bg^*$ provides the map from the steady shear, $p=0$,
a solution of the Euler equations to the Gauss-Codazzi equations.
However, for a shear solution of the Navier-Stokes equations,
the right-hand sides of \eqref{e91}
must vanish.
Therefore, instead of the fluid pre-image being a steady shear,
the fluid pre-image must satisfy the diffusion equation:
\[
-\partial_{t}u+\frac{1}{\text{Re}}\Delta u=0,\quad v=0,\quad p=0,
\]
or
\[
-\partial _{t}v+\frac{1}{\text{Re}}\Delta v=0,\quad u=0,\quad p=0.
\]
Thus, the pre-image is smooth so that, by \eqref{e28},
we have a smooth second fundamental form and a smooth embedding $\y$.
However, this contradicts the non-$C^{2}$ property of
our Nash-Kuiper wrinkled solution.
Thus, the only possibility is that, for the Navier-Stokes equations,
there is
no fluid pre-image of the Nash-Kuiper wrinkled solutions.
One may be
tempted to discount the physical relevance of the Nash-Kuiper theorem
outlined here and in the recent work in
\cite{BDIS, BDS1, CDK, CDS2012, DS2009, DS2010, DS2012, DS2013, DS2014, DS2015,SzL2011,Wied}.
On the other hand,
as noted in Chen-Glimm \cite{CG2012}
and others ({\it cf}. \cite{R-K,Wal}),
viscous fluid turbulence arises with the imposition of an external force,
which overcomes the viscous dissipation.
Hence, it may be that the addition of an external force would
indeed bring us back to the Nash-Kuiper-Gromov turbulence scenario.
Furthermore, the dynamics of defect microstructure like dislocations,
phase and grain boundaries,
triple junctions, and point defects in crystalline solids furnish a compelling
physical argument for accepting/developing physically rigorous and practically
computable models that account for the representation of microstructure,
necessarily then not of infinite refinement and with a modicum of uniqueness
in the predicted evolution of their fields.
In analogy with a crumpled piece of paper, which neither produces infinitely fine
terminated folds nor unfolds itself back to its original flat state,
perhaps the correct notion of admissibility is to move to augmented physical models
of continuum defect dynamics involving extra kinematics representing the microscopic,
and hence smoothed, dynamics of discontinuity surfaces, their terminating lines,
and point singularities of the fields of the original macroscopic model
(like nonlinear elasticity and Navier-Stokes),
while accounting for the energetics of these defects and the dissipation produced
owing to their motion. Such partial differential equation-based augmentations
of nonlinear elasticity theory have begun to emerge, {\it e.g.},
Acharya-Fressengeas \cite{ach_fress}, along with their interesting predictions
of soliton-like dynamical behavior of nonsingular
defects (Zhang et al. \cite{zhangetal}) and their collective behavior.
\subsection{$\,$ A plausible minimal model for internally stressed elastic materials}
The constrained evolution system \eqref{constrained_system}--\eqref{constrained_system-a},
written on the deforming configuration of an elastic body being tracked in Lagrangian fashion,
appears to pose an interesting model for internally stressed elastic bodies
whose constitutive response in terms of the deformation gradient
and a metric representing a stress-free state is known:
\begin{equation}\label{elast_constrained_system}
\begin{cases}
\rho = \rho_0 (\det \F)^{-1},\\[1mm]
\partial_{1}\hat T_{11}(\F, \bg)+\partial_{2}\hat T_{12}(\F, \bg)=\rho\, d_t u,\\[1mm]
\partial_{1}\hat T_{12}(\F, \bg)+\partial_{2}\hat T_{22}(\F, \bg)= \rho \,d_t v,
\end{cases}
\end{equation}
and
\begin{equation}\label{elast_constrained_system-a}
\begin{cases}
\partial_{1} N -\partial_{2}M
= -\Gamma_{22}^{1}(\bg,\nabla \bg) L +2\Gamma_{12}^{1} (\bg,\nabla \bg) M -\Gamma_{11}^{1} (\bg,\nabla \bg)N,\\[1mm]
\partial_{1}M -\partial_{2}L = \Gamma _{22}^{2}(\bg,\nabla \bg) L -2\Gamma_{12}^{2}(\bg,\nabla \bg) M +\Gamma_{11}^{2}(\bg,\nabla \bg) N,\\[1mm]
LN - M^2 = \kappa(\bg,\nabla \bg, \nabla^2 \bg),\\[1mm]
L \mathbf{e}^1 \otimes \mathbf{e}^1
+ M \left(\mathbf{e}^1 \otimes \mathbf{e}^2 + \mathbf{e}^2 \otimes \mathbf{e}^1 \right)
+ N \mathbf{e}^2 \otimes \mathbf{e}^2
= \mathbf{\hat{f}} \left( \mathbf{T}\left(\mathbf{F}, \mathbf{g} \right) \right),
\end{cases}
\end{equation}
where $\partial_i, i=1,2$, represent the spatial derivatives on the current configuration,
$d_t$ represents the material time derivative operator,
$\mathbf{\hat{f}}$ is a tensor-valued function of its tensorial argument,
and $\left(\mathbf{e}^1, \mathbf{e}^2\right)$ is the dual basis corresponding to the natural
basis on the surface given by $\left(\partial_1\mathbf{y},\partial_2 \mathbf{y} \right)$.
The last tensorial equation consists of three independent equations.
First of all, we note that, on dimensional grounds,
the identification of the second fundamental form with mechanical objects carrying the physical
dimensions of stress implies, via the Codazzi
equations, that the metric is physically
dimensionless and the Christoffel symbols have dimensions of reciprocal length.
Thus, the metric may be interpreted as describing strain.
Of course, this identification also implies that a material parameter with physical units
of ($\mbox{stress}\cdot\mbox{length})^2$ is required to make the Gauss curvature equation
dimensionally consistent, while introducing a length-scale into the traditional elastic problem
of internal stress.
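To make this bookkeeping explicit (with $c$ a hypothetical material constant, introduced here only for illustration): if $(L,M,N)$ carry the dimensions of stress, the Codazzi equations balance $\partial_{i} N \sim \Gamma N$ only if $[\Gamma]=\mbox{length}^{-1}$, hence $[\bg]=1$ and $[\kappa]=\mbox{length}^{-2}$; the Gauss equation then takes the dimensionally consistent form
\[
LN - M^2 = c\,\kappa(\bg,\nabla \bg, \nabla^2 \bg), \qquad [c]=(\mbox{stress}\cdot\mbox{length})^{2}.
\]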
Next, we consider the equations in (\ref{elast_constrained_system-a}) for given $\F$.
Considering, for the moment, the situation when the constitutive equation is independent of $\bg$,
this becomes a question of determining the metric, given the second fundamental form,
such that an embedding exists in $\mathbb{R}^3$,
which is the opposite of the isometric embedding problem that may be interpreted as the question
of determining the second fundamental form, given a metric $\bg$.
At any rate, as some of our results show, this problem has a solution in many instances.
It is also perhaps reasonable to expect that the situation does not change drastically
even when the constitutive equation depends on $\bg$,
and is not degenerate in the sense that, for almost all $\F$,
there are many solutions for $\bg$ (this would necessitate evolution equations for $\bg$).
Based on this premise, the requirement of an embedding of $\bg$ in $\mathbb{R}^3$ assumes
a physical status replacing a separate constitutive equation for the evolution of $\bg$
that would be required for the mechanical problem otherwise.
Moreover, much like in systems displaying relaxation oscillations,
the geometry and stability of states on the constraint manifold can,
on occasion, lead to interesting dynamical behavior of the mechanical fields.
The above model appears to have the possibility of being considered as a minimal model
for the statics and dynamics of soft and biological
materials ({\it e.g.}, Efrati et al
\cite{kupferman},
Jin et al
\cite{suo}, and Ambrosi et al \cite{goriely}) depending only on the knowledge of the elastic
response of the material and the material constant needed to define the Gauss curvature equation.
\begin{appendices}
\section{Time-Continuity of the Wrinkled Solutions}\label{A1}
In this appendix, let us prove the following result:
Let $(u,v,p)$ be a solution to the two-dimensional Euler equations in the neighbourhood
of a point $\x^\star \in \mathcal{M}$, locally analytic in space, such that
\begin{equation}\label{cauchy-k condition}
u(\x^\star,0) v(\x^\star,0) \neq 0 \qquad \text{ at } t = 0.
\end{equation}
\iffalse
and
\begin{equation}\label{bddness of p}
p(x,t) > -C_0 > -\infty \qquad \text{ for all } (x,t) \in \R^2 \times [0,T].
\end{equation}
\fi
Moreover, let $\yy_\bg(t,\cdot): (\mathcal{M}, \bg) \hookrightarrow \R^3$ be the corresponding short immersion
of an analytic surface for each $t\in [0,T]$, with $(\mathcal{M}, \bg)$ being
the Riemannian manifold given by Theorem 5.1 and Corollary 5.1.
Denote by
\begin{equation*}
\{\Phi_t\}_{t\in [0,T]}: X \equiv C^\infty_{\rm loc}(\mathcal{M}; \R^3)
\longrightarrow Y \equiv C^{1,\alpha}_{\rm loc}(\mathcal{M}; \R^3)
\end{equation*}
the collection of maps sending the short immersion $\yy_\bg(t,\cdot)$ to the {\it wild} isometric
immersion $\yy_w(t,\cdot)$, indexed by time $t$,
constructed following Conti-De Lellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012}.
It is defined as follows:
\begin{align}\label{arrows}
\Phi_t: \quad X\equiv C^\infty_{\rm loc}(\mathcal{M}; \R^3)
&\xrightarrow{ \Phi_1 } C^1_{\rm loc}(\mathcal{M}; \R^3)
\xrightarrow{ \Phi_2 } C^2_{\rm loc}(\mathcal{M}; \R^3)\nonumber\\
&\xrightarrow{ \Phi_3 } C^{1,\alpha}_{\rm loc}(\mathcal{M}; \R^3) \equiv Y,
\end{align}
where $\uu:=\Phi_1(\yy_\bg(t, \cdot))$ is the $C^1_{\rm loc}$ isometric immersion
constructed by Nash-Kuiper in \cite{Kuiper, Nash1954},
$\vv:=\Phi_2(\uu)$ is the $C^2_{\rm loc}$ map ``close to being isometric'',
obtained by mollifying $\uu$, and $\yy_{\rm w}:=\Phi_3(\vv)$ is the $C^{1,\alpha}_{\rm loc}$ isometric immersion
constructed by Conti-De Lellis-Sz\'{e}kelyhidi Jr. \cite{CDS2012}.
\begin{theorem}
For each $\yy_\bg\in (X, C^1_{\rm loc})$, we have
\begin{equation*}
\Phi(\yy_\bg( \cdot))\in C^0\left([0,T], C^{1}_{\rm loc}\right).
\end{equation*}
Therefore, if $u(\x,t_k)v(\x,t_k)\neq 0$ locally in space and
$(u,v,p)(t_k)\to (u,v,p)(t)$ in $C^1_{\rm{loc}}$ as $t_k\to t$,
then
$\yy_{\rm w}(t_k)\to \yy_{\rm w}(t)$ in $C^1_{\rm{loc}}$ as $t_k\to t$.
\end{theorem}
\begin{remark} $\,$ First, analytic solutions are obtained via the Cauchy-Kowalewski theorem,
which entails condition \eqref{cauchy-k condition}.
Second, the wrinkled solutions $\yy_{\rm w}$ are constructed from the short map $\yy_\bg$
by adding ``Nash wrinkles'' or ``corrugations'',
whose first derivatives are of only H\"{o}lder regularity at best.
The current upper bound, obtained by Borisov in \cite{Borisov6} and by Conti-De Lellis-Sz\'{e}kelyhidi Jr.
in \cite{CDS2012}, is $\alpha < \frac{1}{7}$,
where $\frac{1}{7}=\frac{1}{1+2J_n}$ for $n=2$, with $J_n = \frac{n(n+1)}{2}$
known as the {\em Janet dimension}.
\end{remark}
\proof $\,$ We divide the proof into five steps.
\smallskip
{\bf 1.} {\it Reduction to a geometric problem}.
First, we reduce the problem to showing the $C^1$--continuity of
wrinkled solutions $\yy_{\rm w}$ with respect to the short maps $\yy_\bg$.
Recall from \S 2 that the Gauss curvature is defined from the fluid variables as
\begin{equation*}
\kappa:=p^2 + p(u^2+v^2).
\end{equation*}
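Observe that $\kappa$ factors as
\[
\kappa = p\,\big(p+u^{2}+v^{2}\big),
\]
so that $\kappa>0$ wherever $p>0$; the translation of $p$ performed next is designed precisely to place us in this regime.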
For $p>-C_0>-\infty$, with no loss of generality,
we can replace $p$ with $(p+C_0)$ throughout, since the Euler equations \eqref{e25}--\eqref{e26}
are invariant under translations of $p$.
Thus, the analytic surface $(\mathcal{M}, \bg)$ corresponding to $(u,v,p)$ has positive curvature
for all $t \in [0,T]$.
On the other hand, in Lemma 4.2, we have another metric $\bg^\ast$ obtained from the shear flow.
Here $\bg^\ast$ is the induced metric of the following parameterised
map $\yy_{\bg^\ast}:\mathcal{M}' \subset \mathcal{M}\rightarrow \R^3$ near $\x^\star$:
\begin{equation*}
\yy_{\bg^\ast} = (Ax_2, Ax_1, f(x_2))^\top,
\end{equation*}
where $f$ is given implicitly by
$f'(x_2) = A \arctan \big(-A\int_0^{x_2} u^2(s)\dd s\big)$ for $A > 1$.
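A direct computation makes the geometry of this map transparent: since $\partial_1 \yy_{\bg^\ast}=(0,A,0)^\top$ and $\partial_2 \yy_{\bg^\ast}=(A,0,f'(x_2))^\top$, the induced metric is
\[
(\yy_{\bg^\ast})^{\#}(\geucl)
= A^{2}\,\dd x_1\otimes\dd x_1 + \big(A^{2}+f'(x_2)^{2}\big)\,\dd x_2\otimes \dd x_2,
\]
which dominates the Euclidean metric whenever $A>1$.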
We know that $\yy_\bg$ is strictly {\em short}:
\begin{equation*}
\yy_\bg^\#(\geucl) < (\yy_{\bg^\ast})^\#(\geucl)
\end{equation*}
in the sense of quadratic forms, where $\#$ denotes the pullback operator.
Furthermore, as shown in Theorem 5.1, under condition \eqref{cauchy-k condition},
the geometric flow has an analytic solution, due to the Cauchy-Kowalewski theorem.
Thus, the previously constructed short map $\yy_\bg$ maps $[0,T]$
to $C^\infty_{\rm loc}(\mathcal{M}; \R^3)$.
\medskip
{\bf 2.} {\it Outline of the proof}.
Maps $(\Phi_1, \Phi_2, \Phi_3)$ will be explained in detail in the subsequent development;
they are given implicitly in \S 6.3 of \cite{CDS2012}, namely, in the proof of Corollary 1.2 therein.
\begin{lemma}[Corollary 1.2 in \cite{CDS2012}]
Let $n \in \mathbb{N}$ and let $\bg_0$ be a positive-definite $n \times n$ matrix.
There exists $r>0$ such that, for any smooth bounded open $\Omega \subset \R^n$
and any $\bg\in C^{0, \beta}(\overline{\Omega}; O^{+}(n))$ satisfying $\|\bg-\bg_0\|_{C^0} \leq r$,
the following holds{\rm :}
For any given $\yy_\bg \in C^1(\overline{\Omega}; \R^{n+1})$, $\e >0$,
and $\alpha \in (0, \min\{\frac{1}{1+2J_n}, \frac{\beta}{2}\})$,
there exists a map $\yy_{\rm w} \in C^{1,\alpha}(\overline{\Omega}; \R^{n+1})$ with
\begin{equation*}
\yy_{\rm w}^\# \geucl = \bg,\qquad
\|\yy_{\rm w} - \yy_\bg \|_{C^0} \leq \e.
\end{equation*}
\end{lemma}
Thus, in view of \eqref{arrows}, it is enough to prove that $(\Phi_1, \Phi_2, \Phi_3)$
are continuous in time, once all the function spaces therein
are endowed with the $C^1_{\rm loc}$ topology.
Here, as we begin with $\yy_\bg \in C^\infty$,
we are taking $n=2$, $\bg_0=\geucl$, and $\beta = 1$ in the lemma above.
Domain $\Omega$ is chosen to be a suitably small neighborhood
of $\x^\star$ in surface $\mathcal{M}$.
In the subsequent steps, we do not restrict ourselves to the case that $\mathcal{M}$
is a $2$-dimensional manifold immersed in $\R^3$.
Instead, the following arguments hold for any $n$-dimensional hypersurface $\mathcal{M}$
immersed into $(\R^{n+1}, \geucl)$, where $\geucl$ is the Euclidean metric.
\medskip
{\bf 3.} {\it Continuity of $\Phi_1$}.
In this step, we prove the continuity of $\Phi_1$, {\it i.e.}, the continuous dependence
of the Nash-Kuiper {\it wild} isometric immersions
with respect to the initial short immersion.
For simplicity of presentation, we only give the proof for Nash's construction in \cite{Nash1954},
which in fact requires at least two co-dimensions of the immersions.
Similar arguments work for Kuiper's construction in \cite{Kuiper} as well,
as long as we replace the ``Nash wrinkles'' (see \eqref{wrinkle} below)
by the ``corrugations'' in one co-dimension.
Our presentation of Nash's construction closely follows the exposition in \cite{exposition}.
Starting with the short map $\yy_\bg$, the $C^1$--isometric immersion is constructed
by adding ``Nash wrinkles'' to $\yy_\bg$ in countably many {\em stages},
and each stage involves finitely many {\em steps}:
\smallskip
\noindent
{\bf Nash's Steps.} To describe the steps, let us first recall the topological lemma concerning
the existence of a nice cover, which is proved by collecting the interiors of
the stars of (the barycentric subdivision of) a triangulation of $\mathcal{M}$:
\begin{lemma}[Lemma 2.2.1 in \cite{exposition}]\label{lemma on cover}
$\,$ Let $\mathcal{M}$ be an $n$-dimensional smooth manifold,
and $\{V_\lambda\}$ be an open cover.
Then there exists another cover $\{U_l\}$ such that
\begin{enumerate}
\item[\rm (i)]
Each $U_l$ lies in some $V_\lambda${\rm ;}
\item[\rm (ii)]
The closure of each $U_l$ is diffeomorphic to the $n$-dimensional closed ball in $\R^n${\rm ;}
\item[\rm (iii)]
Each $U_l$ intersects with at most finitely many other $U_{l'}$'s{\rm ;}
\item[\rm (iv)]
Each point $p\in\mathcal{M}$ has a neighbourhood contained in at most $(n+1)$ members of the cover{\rm ;}
\item[\rm (v)]
$\{U_l\}$ can be subdivided into $(n+1)$ classes, each consisting of pairwise disjoint $U_l$'s.
\end{enumerate}
\end{lemma}
Now fix some $l$ and set $I_l:=\{j: U_j \cap U_l \neq \emptyset\}$, which is finite by Lemma \ref{lemma on cover}(iii).
As $\yy_\bg$ is strictly short, for any $\delta>\|\bg-\yy_\bg^\# \geucl\|_{C^0}>0$,
we can choose $\delta_l>0$ such that
\begin{equation}\label{delta, step}
(1-\delta_l)\bg - \yy_\bg^\# \geucl \,\,\, \text{is positive-definite},\qquad\,\,
\|\delta_l \bg\|_{C^0(U_j)} \leq \frac{\delta}{2} \,\,\, \text{ for all } j \in I_l.
\end{equation}
Now, for some fixed $C^\infty$ partition of unity subordinate to $\{U_l\}$, we set
\begin{equation}\label{h-1}
\bh:=(1-\phi)\bg - \yy_\bg^\# \geucl,
\end{equation}
where $\phi := \sum_l \delta_l\phi_l$. Then $\bh$ is positive definite.
By Proposition 2.3.1 in \cite{exposition}, we can decompose $\bh$ into a locally finite
sum of {\em primitive metrics}:
\begin{equation}\label{decomp into primitive metrics}
\bh = \sum_j \bh_j
\end{equation}
such that each $\bh_j$ is supported in some $U_l$.
It is crucial to note that $\bh_j$ satisfies the following conditions:
For each $p\in\mathcal{M}$, there are at most $(n+1)J_n$ $\bh_j$'s
supported at $p$ and, for each $j$, ${\rm supp}(\bh_j)$ intersects
with finitely many other ${\rm supp}(\bh_{j'})$'s. Moreover,
\begin{equation}\label{primitive metric}
\bh_j = a_j^2 \dd\psi_j \otimes \dd\psi_j
\end{equation}
for some smooth functions $\psi_j$ depending only on $\mathcal{M}$
and the ``nice cover'' $\{U_l\}$ in Lemma \ref{lemma on cover}.
Now, choose two orthogonal vector fields $\boldsymbol{\nu}$
and $\boldsymbol{\xi}$ on $\overline{U_l}$,
which are of unit length and orthogonal to $TU_l$ throughout.
Then, in each {\em step} $j$ (in the sense of Nash), we add to $\yy_\bg$ a term $\ww^{\rm wrinkle}_{j}$,
which is a fast-oscillating plane wave of profile $a_j$,
frequency $\lambda\gg 1$ (to be determined), and directions $\boldsymbol{\xi}$
and $\boldsymbol{\nu}$.
More precisely, we consider
\begin{equation}\label{wrinkle}
\ww^{\rm wrinkle}_j(\x) = \frac{a_j(\x)}{\lambda}\cos(\lambda\psi_j(\x))\boldsymbol{\nu}(\x)
+ \frac{a_j(\x)}{\lambda} \sin (\lambda \psi_j (\x)) \boldsymbol{\xi}(\x).
\end{equation}
Such terms are known as ``Nash wrinkles'', or as ``spirals'' in Nash's original paper \cite{Nash1954}.
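The role of \eqref{wrinkle} is seen from a schematic computation (suppressing the derivatives of $\boldsymbol{\nu}$ and $\boldsymbol{\xi}$, which are absorbed into the error term): since $\boldsymbol{\nu}$ and $\boldsymbol{\xi}$ are orthogonal to the image of $\yy_\bg$, the cross terms are $O(\lambda^{-1})$, and
\[
\big(\yy_\bg+\ww^{\rm wrinkle}_j\big)^{\#}\geucl
= \yy_\bg^{\#}\geucl + a_j^{2}\,\dd\psi_j \otimes \dd\psi_j + O(\lambda^{-1}),
\]
so each step installs exactly one primitive metric \eqref{primitive metric}, up to an error that vanishes as $\lambda \to \infty$.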
Finally, consider the map:
\begin{equation}\label{adding wrinkles}
\mathfrak{S} (\yy_\bg) := \yy_\bg + \ww^{\rm wrinkle}_1 + \ww^{\rm wrinkle}_2 + \ww^{\rm wrinkle}_3 + \cdots.
\end{equation}
Note that, for each point $\x\in \mathcal{M}$,
at most $(n+1)J_n$ Nash wrinkles are non-zero, so the sum in \eqref{adding wrinkles} is locally finite.
On the other hand, by choosing $\lambda$ sufficiently large,
we can require the Nash wrinkles to be very small in the $C^0$--norm.
Every such map $\mathfrak{S}$ is called a {\em Nash's stage}.
In summary, {\em $\mathfrak{S}$ maps from the space of $C^1_{\rm loc}$ strictly short
immersions $\mathcal{M} \hookrightarrow \R^3$ to itself}.
We now show that the map $\mathfrak{S}$ is continuous in time.
In the sequel, $C_1, C_2, C_3,...$ denote universal constants depending only on the open cover
given by Lemma \ref{lemma on cover}.
Let us fix any $\eta > 0$ and assume that
\begin{equation}\label{x}
\|\yy_\bg(t)-\yy_\bg(s)\|_{C^1} \leq \eta \qquad \text{ for some } t,s \in [0,T].
\end{equation}
Then, in view of Eq. \eqref{h-1}, it follows that
\begin{equation*}
\|\bh_j(t)- \bh_j(s)\|_{C^1} \leq C_1 \eta^2 \qquad \text{ for all } j \in I_l.
\end{equation*}
Thanks to Eq. \eqref{primitive metric} (in which $\psi_j$ depends only on the cover), we have
\begin{equation*}
\|a_j(t)-a_j(s)\|_{C^1} \leq C_2 \eta \qquad \text{ for all } j \in I_l.
\end{equation*}
Therefore, expression \eqref{wrinkle} for the Nash wrinkles directly gives us
\begin{equation*}
\|\ww^{\rm wrinkle}_j (t) -\ww^{\rm wrinkle}_j (s)\|_{C^1} \leq \frac{2C_2}{\lambda} \eta \qquad \text{ for all } j \in I_l,
\end{equation*}
which immediately implies that
\begin{equation}\label{xx}
\|\mathfrak{S}(\yy_{\bg}(t)) - \mathfrak{S}(\yy_{\bg}(s))\|_{C^1_{\rm loc}} \leq \frac{2(n+1)J_n C_2}{\lambda} \eta + \eta.
\end{equation}
From Eqs. \eqref{x}--\eqref{xx}, we conclude that map $\mathfrak{S}$ is continuous in time,
when its domain and range are equipped with the $C^1_{\rm loc}$ topology.
The above arguments hold for all $\lambda >0$; we are going to specify $\lambda$
in Eq. \eqref{choosing lambda} below, in order to ensure the convergence of the {\em stages}.
\smallskip
\noindent
{\bf Nash's Stages.}
The purpose of each stage $\mathfrak{S}$ is to correct the error,
$\|\mathfrak{S}(\yy_\bg)^\# \geucl - \bg\|_{C^0}$,
{\it i.e.}, to lessen the deviation of the pulled-back metric from being isometric.
In view of Proposition 2.2.2 and the proof of Theorem 2.1.4 in \cite{exposition},
for any fixed $\e>0$, we can obtain the following bound for $\mathfrak{S}$ at the $q$-th stage:
\begin{equation}\label{stage estimates}
\begin{cases}
\|\mathfrak{S}^q(\yy_\bg) - \mathfrak{S}^{q-1}(\yy_\bg) \|_{C^0(U_l)} < 2^{-q-1}\min\{\e, 2^{-l}\} \qquad \text{ for every } l, \\[1mm]
\|\bg-\mathfrak{S}^{q}(\yy_\bg)^\# \geucl\|_{C^0(\mathcal{M})} < \delta \equiv 4^{-q},\\[1mm]
\|D[\mathfrak{S}^q(\yy_\bg)] - D[\mathfrak{S}^{q-1}(\yy_\bg)]\|_{C^0(\mathcal{M})} < \sqrt{2}(n+1)J_n 2^{-q+1},
\end{cases}
\end{equation}
for each $q=1,2,3,\ldots$. Since we are proving everything locally, we assume without loss of generality
that $\mathcal{M}$ is compact.
By $\mathfrak{S}^q$ we mean the $q$-fold composition $\mathfrak{S}\circ\cdots\circ\mathfrak{S}$.
As a remark, the above estimates involve the choice of $\lambda$ at the $q$-th stage
for each $q$ ({\it cf.} the proof of Eq. (2.16) in \cite{exposition}).
Hence, by the second inequality in Eq. \eqref{stage estimates}, we find that
\begin{equation*}
\Phi_1(\yy_\bg) := \lim_{q\rightarrow \infty} \mathfrak{S}^q(\yy_\bg) \in C^0(\mathcal{M}; \R^N)
\end{equation*}
is an isometric immersion.
Moreover, by the first and third equations, $\Phi_1(\yy_\bg)$ in fact lies in $C^1(\mathcal{M}; \R^N)$.
All the above constructions are {\it kinematic}, {\it i.e.}, they hold pointwise in $t \in [0,T]$.
Finally, in light of the proof for the estimates in \eqref{x}--\eqref{xx},
we have the following: If, at the $q$-th {\em stage},
we choose parameter $\lambda$ in the Nash wrinkles \eqref{wrinkle} to satisfy
\begin{equation}\label{choosing lambda}
\lambda = \lambda_q \geq 2^{q+1}(n+1)J_n C_2
\end{equation}
in addition to Eq. \eqref{stage estimates}, then we have
\begin{equation*}
\|\Phi_1(\yy_\bg)(t) - \Phi_1(\yy_\bg)(s)\|_{C^1_{\rm loc}} \leq 2\eta,
\end{equation*}
provided that $\|\yy_\bg(t)-\yy_\bg(s)\|_{C^1} \leq \eta$ for some $t,s \in [0,T]$
as in Eq. \eqref{x}.
This completes the proof of the continuity of $\Phi_1$.
\smallskip
{\bf 4.} {\it Continuity of $\Phi_2$}.
This directly follows from the properties of mollification.
Indeed, let $0\leq J \in C^\infty(\R^n)$ be the standard mollifier in $\R^n$ such
that $\int_{\R^n}J(x)\,\dd x =1$ and ${\rm supp}(J) \subset [-1,1]^n$. Then, for $\uu \in C^1({\Omega\subset \R^n; \R^N})$,
we define component-wise:
\begin{equation*}
\Phi_2(\uu)^i:= \uu^i \ast J_\e \qquad \text{in } \big\{\x\in\Omega: {\rm dist}(\x, \p\Omega) > \e\big\}\,\,
\text{ for } i\in\{1,2,\ldots,N\},
\end{equation*}
where $J_\e(\x):=\e^{-n}J(\frac{\x}{\e})$.
Then $\Phi_2(\uu)$ converges to $\uu$ in $C^1$ as $\e \rightarrow 0^{+}$ on any compact subset of $\Omega$.
In general, for $\uu \in C^1(\mathcal{M}; \R^N)$ where $\mathcal{M}$ is an $n$-dimensional manifold,
for any chart $\mathcal{M}' \subset \mathcal{M}$, we can find a $C^1$ diffeomorphism
$\ff: \mathcal{M}' \rightarrow \Omega \subset \R^n$.
Therefore, we have
\begin{equation*}
\|\Phi_2(\uu)(t) - \Phi_2(\uu)(s)\|_{C^1(\mathcal{M}')} \leq C\|\uu(t)-\uu(s)\|_{C^1(\mathcal{M})},
\end{equation*}
where $C$ depends on $\|\ff|_{\mathcal{M}'}\|_{C^1}$ and $\|(\ff|_{\mathcal{M}'})^{-1}\|_{C^1}$.
\smallskip
{\bf 5.} {\it Continuity of $\Phi_3$}. In this final step,
we prove that $\yy_{\rm w} = \Phi_3(\vv)$ is continuous.
This map is constructed by Conti-De Lellis-Sz\'{e}kelyhidi Jr. in \cite{CDS2012},
by a {\em step/stage} construction similar to Nash's.
The difference is that, in every {\em step},
before adding the corrugations (introduced in Kuiper \cite{Kuiper} in co-dimension-1 case,
as the counterpart to the Nash wrinkles),
one first {\em mollifies} immersion $\vv$ and metric $\bg$.
The estimates that control the mollification in each step are
motivated by Nash's argument for $C^\infty$ isometric embedding in \cite{nash2}.
\smallskip
\noindent
{\bf Steps in \cite{CDS2012}.}
Similar to Nash's construction, each {\em step} is achieved by adding the corrugations.
We closely follow \S 4 in \cite{CDS2012} for the presentation.
The basic building block is a corrugation function $\Gamma =\Gamma(z_1,z_2)\in C^\infty([0,\delta_\ast]\times\R;\R^2)$
which is $(2\pi)$--periodic in $z_2$ for some small $\delta_\ast >0$, and
\begin{equation}\label{estimate for Gamma}
\begin{cases}
|\p_{z_2}\Gamma(z_1,z_2) + (1,0)^\top|^2 = 1+z_1^2,\\[1mm]
|\p_{z_1}\p^k_{z_2}\Gamma_1(z_1, z_2)|+|\p_{z_2}^k\Gamma(z_1,z_2)| \leq C_k z_1 \qquad \text{ for } k \geq 0,
\end{cases}
\end{equation}
where $\Gamma=(\Gamma_1, \Gamma_2)^\top$.
As in Step 3 above, each {\em stage}, denoted by $\sss$ here, consists of $J_n$ steps.
In each step, we add a {\em corrugation}:
\begin{equation*}
\sss (\vv):=\vv_0+\yy_1^{\rm corrugation} + \yy_2^{\rm corrugation} + \ldots + \yy_{J_n}^{\rm corrugation}.
\end{equation*}
Let us index the {\em steps} (in a fixed {\em stage}) by $j\in\{1,2,\ldots, J_n\}$, and abbreviate
\begin{equation*}
\vv_j := \vv_0 + \yy_1^{\rm corrugation} + \ldots + \yy_{j}^{\rm corrugation},
\end{equation*}
where $\vv_0$ is given below.
Our goal is to describe $\vv_j$ or $\yy_j^{\rm corrugation}$ for each $j$, and investigate its dependence on time.
For this purpose, we first state the estimates achieved at the end of each {\em stage} in \cite{CDS2012}.
This will help us specify the parameters ({\it i.e.}, $\lambda_j$, $l_j$, $a_j$, $\nu_j$, etc.) involved in each {\em step}:
\begin{lemma}[Proposition 5.1 in \cite{CDS2012}]\label{lemma: one stage, Phi3}
$\,$ For any $n\in\mathbb{N}$ and any positive-definite $n\times n$ matrix $\bg_0$,
there exists $r \in (0,1)$ such that, for any open bounded smooth $\Omega \subset \R^n$ and
any $\bg \in C^\beta(\overline{\Omega}; O^{+}(n))$ with $\|\bg-\bg_0\|_{C^0}\leq r$,
there is $\delta_0 > 0$ so that, for all $K \geq 1$, whenever
\begin{eqnarray*}
\|\vv^\# \geucl - \bg\|_{C^0} \leq \delta^2 \,\,\,\mbox{for some $\delta \leq \delta_0$},\qquad\,\,
\|\vv\|_{C^2} \leq \mu \,\,\, \mbox{for some $\mu$},
\end{eqnarray*}
one can construct $\sss(\vv) \in C^2$ such that
\begin{eqnarray}\label{y1}
&&\|\sss(\vv)^\#\geucl - \bg\|_{C^0} \leq C_3 \delta^2\big(K^{-1} + \delta^{\beta-2}\mu^{-\beta}\big),\\\label{y2}
&&\|\sss(\vv)\|_{C^2} \leq C_3 \mu K^{J_n},\\ \label{y3}
&&\|\sss(\vv)-\vv\|_{C^1}\leq C_3 \delta,
\end{eqnarray}
where $C_3$ depends only on $n, \Omega, \bg_0$, and $\bg$.
\end{lemma}
It is crucial to remark that, in Lemma \ref{lemma: one stage, Phi3},
the construction of a certain {\em stage} begins with three given parameters:
$K, \delta$, and $\mu$.
In particular, they are independent of $j$, which indexes the {\em steps} within this {\em stage}.
From here, one first introduces the ``mollification parameter'' (Step 1 in \S 5.2, \cite{CDS2012}):
\begin{equation}\label{l, Phi3}
l:=\frac{\delta}{\mu},
\end{equation}
then, for the standard mollifier $0\leq J \in C^\infty_c(\R^n)$, we set
\begin{equation}\label{mollification, Phi3}
\vvv:=\vv\ast J_l, \qquad \gggg=\bg \ast J_l, \qquad J_l(\x):=l^{-n}J(\frac{\x}{l}).
\end{equation}
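The choice \eqref{l, Phi3} is tuned so that the standard mollification estimates (stated schematically here, with $C$ a generic constant; {\it cf.} \cite{CDS2012, nash2}) give
\[
\|\vvv-\vv\|_{C^1}\leq C\,l\,\|\vv\|_{C^2}\leq C\delta,\qquad
\|\vvv\|_{C^3}\leq C\,l^{-1}\|\vv\|_{C^2}\leq C\,\frac{\mu^{2}}{\delta},
\]
together with the quadratic commutator estimate
\[
\|\vvv^{\#}\geucl-\gggg\|_{C^0}
\leq \|\vv^{\#}\geucl-\bg\|_{C^0} + C\,l^{2}\,\|\vv\|_{C^2}^{2} \leq C\delta^{2}.
\]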
As $\vv$ is very close to being isometric (namely, $\|\vv^\#\geucl-\bg\|_{C^0} \leq \delta^2$),
the matrix $(1+\frac{C_4\delta^2}{r})\gggg - \vvv^\#\geucl$
is positive-definite for some large absolute constant $C_4$. Thus, it can again be decomposed into primitive metrics:
\begin{equation}\label{decomposition into primitive metrics, Phi3}
(1+\frac{C_4\delta^2}{r})\gggg - \vvv^\#\geucl = \sum_{i=1}^{J_n} \widetilde{a_i}^2\boldsymbol{\nu}_i \otimes \boldsymbol{\nu}_i.
\end{equation}
Then, we rescale
\begin{equation}\label{rescale, Phi3}
\vv_0 := \frac{1}{(1+C_4 r^{-1}\delta^2)^{1/2}}\vvv, \qquad a_i :=\frac{1}{(1+C_4 r^{-1}\delta^2)^{1/2}}\widetilde{a_i} \quad \text{ for } i \in \{1,2,\ldots, J_n\}.
\end{equation}
Now we are ready for specifying each $\yy_j^{\rm corrugation}$ recursively.
The following is adapted from \S 4.2 of \cite{CDS2012}, by working in a local orthonormal frame
$\{\mathbf{e}_1, \mathbf{e}_2, \ldots\}$ in $\R^n$.
First, define the vector fields:
\begin{equation}\label{two vector fields, Phi3}
\begin{cases}
\boldsymbol{\xi}_{j+1}:=\na \vv_{j} \cdot (\na^\top\vv_j\na\vv_j)^{-1}\cdot \boldsymbol{\nu}_j,\\
\boldsymbol{\zeta}_{j+1}:= \text{ the vector field dual to the $n$-form } \p_1 \vv_j \wedge \p_2\vv_j \wedge
\ldots \wedge \p_n \vv_j.
\end{cases}
\end{equation}
Then the ``amplitude'' is given by
\begin{equation}\label{amplitude, Phi3}
\Psi_j(\x):=\frac{\boldsymbol{\xi}_j}{|\boldsymbol{\xi}_j|^2}(\x) \otimes \mathbf{e}_1 + \frac{\boldsymbol{\zeta}_j}{|\boldsymbol{\zeta}_j||\boldsymbol{\xi}_j|}(\x)\otimes \mathbf{e}_2.
\end{equation}
Finally, using the building block $\Gamma$, the {\em $j$-th corrugation} is defined by
\begin{equation}\label{corrugation, level j, Phi3}
\yy_j^{\rm corrugation} := \frac{1}{\lambda_j}\Psi_j(\x)\Gamma(|\boldsymbol{\xi}_j|a_j, \lambda_j \x\cdot \boldsymbol{\nu}_j),
\end{equation}
where, as in Step 3, \S 5.2 of \cite{CDS2012}, one chooses
\begin{equation}\label{lambda_j, Phi3}
\lambda_j := K^{j+1}l^{-1}.
\end{equation}
Let us now discuss the dependence of $\sss(\vv)$ on time.
For this purpose, fix $\eta >0$ and assume that
\begin{equation*}
\|\vv(t) - \vv(s)\|_{C^1} \leq \eta \qquad \text{ for some } t,s \in [0,T].
\end{equation*}
Then the mollification in equation \eqref{mollification, Phi3} gives us
\begin{align}\label{mollification estimate for v}
\|\vvv(t)-\vvv(s)\|_{C^2} &= \|\na J_l \ast \big(\vv(t)-\vv(s)\big)\|_{C^1} \nonumber\\
&= l^{-1} \Big\|\int_{\R^n}\na J (\z)\Big(\vv (t, \x-l\z) - \vv (s, \x-l\z)\Big)\,\dd \z \Big\|_{C^1}\nonumber\\
&\leq C_5 l^{-1} \|\vv(t)-\vv(s)\|_{C^1} \leq C_5 l^{-1}\eta,
\end{align}
where $C_5 \equiv \|J\|_{W^{1,1}(\R^n)}$.
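The factor $l^{-1}$ in this estimate comes solely from the rescaling of the mollifier: $\|\na J_l\|_{L^1(\R^n)} = l^{-1}\|\na J\|_{L^1(\R^n)}$. A one-dimensional numerical sketch of this scaling (the bump profile and grid below are illustrative choices, not taken from \cite{CDS2012}):

```python
import numpy as np

# 1-D illustration of the scaling behind C_5 = ||J||_{W^{1,1}}:
# for J_l(x) = l^{-1} J(x/l), one has ||(J_l)'||_{L^1} = l^{-1} ||J'||_{L^1},
# which is exactly where the factor l^{-1} in the C^2 estimate comes from.

def bump(x):
    # standard compactly supported mollifier profile on (-1, 1)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def grad_l1_norm(l, n=200001):
    # || d/dx [ l^{-1} J(x/l) ] ||_{L^1}, via central differences
    x = np.linspace(-2.0, 2.0, n)
    dx = x[1] - x[0]
    dJl = np.gradient(bump(x / l) / l, dx)
    return np.sum(np.abs(dJl)) * dx

base = grad_l1_norm(1.0)
for l in [0.5, 0.25, 0.125]:
    print(l, grad_l1_norm(l) / base)   # ratios approach 1/l
```

The ratios reproduce $l^{-1}$ up to discretization error, matching the exact change-of-variables computation.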
From here,
the decomposition in equation \eqref{decomposition into primitive metrics, Phi3} yields
\begin{equation*}
\|\widetilde{a_i}(t) - \widetilde{a_i}(s)\|_{C^2} \leq C_6 \eta l^{-1},
\end{equation*}
where $C_6$ only depends on $\bg$, $\bg_0$, $n$, $\|J\|_{W^{1,1}(\R^n)}$, and the local geometry of $\Omega$.
Then \eqref{rescale, Phi3} shows that the rescaled quantities satisfy
\begin{equation}\label{v0, Phi3}
\|\vv_0(t)-\vv_0(s)\|_{C^2} + \|a_i(t)-a_i(s)\|_{C^2} \leq \frac{C_6 \eta}{l\sqrt{1+\frac{C_4\delta^2}{r}}}
\end{equation}
for each $i\in\{1,2,\ldots, J_n\}$. In addition, for the lower order derivatives, we have
\begin{equation}\label{lower order estimates for v, a. Phi3}
\|\vv_0(t)-\vv_0(s)\|_{C^1} + \|a_i(t)-a_i(s)\|_{C^1} \leq \frac{C_6 \eta}{\sqrt{1+\frac{C_4\delta^2}{r}}}.
\end{equation}
To proceed, notice that one can assume
\begin{equation}\label{assumption on nabla vj}
C_7^{-1}{\rm Id}\leq\na^\top\vv_j \na\vv_j\leq C_7{\rm Id}
\end{equation}
(see the beginning of \S 5.2 in \cite{CDS2012}); here $C_7$ may depend on $j$, but, as there are only finitely many $j$,
we can take $C_7$ to be absolute. Hence, equation \eqref{two vector fields, Phi3} implies that,
for each $j\in\{1,2,\ldots, J_n\}$,
\begin{equation}\label{estimate in xi and zeta}
\|\boldsymbol{\xi}_j(t)-\boldsymbol{\xi}_j(s)\|_{C^1} \leq \frac{C_8\eta}{l\sqrt{1+\frac{C_4\delta^2}{r}}},
\quad \|\boldsymbol{\zeta}_j(t)-\boldsymbol{\zeta}_j(s)\|_{C^1}
\leq \frac{C_8}{l}\bigg(\frac{\eta}{\sqrt{1+\frac{C_4\delta^2}{r}}}\bigg)^n,
\end{equation}
whereas the estimate in \eqref{lower order estimates for v, a. Phi3}
gives us
\begin{equation}\label{lower order estimate in xi and zeta}
\|\boldsymbol{\xi}_j(t)-\boldsymbol{\xi}_j(s)\|_{C^0} \leq \frac{C_9\eta}{\sqrt{1+\frac{C_4\delta^2}{r}}},
\quad \|\boldsymbol{\zeta}_j(t)-\boldsymbol{\zeta}_j(s)\|_{C^0} \leq C_9 \bigg(\frac{\eta}{\sqrt{1+\frac{C_4\delta^2}{r}}}\bigg)^n.
\end{equation}
From here, we obtain the estimate for $\|\Psi_j(t)-\Psi_j(s)\|_{C^1}$: Since
\begin{equation*}
\na \frac{\boldsymbol{\xi}_j}{|\boldsymbol{\xi}_j|^2}
= \frac{\na \boldsymbol{\xi}_j}{|\boldsymbol{\xi}_j|^2} - \frac{2\,\boldsymbol{\xi}_j \otimes \big(\na^\top \boldsymbol{\xi}_j\, \boldsymbol{\xi}_j\big)}{|\boldsymbol{\xi}_j|^4},
\end{equation*}
and, by \eqref{assumption on nabla vj}, we have $|\boldsymbol{\xi}_j| \geq C_7^{-3/2}$ (and similarly for $\boldsymbol{\zeta}_j$),
it follows that
\begin{align*}
\|\Psi_j(t)-\Psi_j(s)\|_{C^1}
& \leq C_{10} \bigg\{\frac{\eta}{l\sqrt{1+\frac{C_4\delta^2}{r}}}
+\frac{1}{l}\bigg(\frac{\eta}{\sqrt{1+\frac{C_4\delta^2}{r}}}\bigg)^n\bigg\} \\
& \leq C_{10} \Big(\eta l^{-1} + \eta^n l^{-1}\Big).
\end{align*}
Next, let us bound $\|\yy_j^{\rm corrugation}(t)-\yy_j^{\rm corrugation}(s)\|_{C^1}$.
In view of expression \eqref{corrugation, level j, Phi3} of the corrugations,
a simple interpolation leads to
\begin{align*}
&\big\|\yy_j^{\rm corrugation}(t)-\yy_j^{\rm corrugation}(s)\big\|_{C^1} \nonumber\\
&\leq \frac{C_{11}}{\lambda_j} \bigg\{\big\|\Psi_j(t)-\Psi_j(s)\big\|_{C^1}\|(\widetilde\Gamma(t),\widetilde\Gamma(s))\|_{C^0}\\
&\qquad \quad\,\,\, + \big\| \widetilde\Gamma(t)-\widetilde\Gamma(s)\big\|_{C^1}\|(\Psi_j(t),\Psi_j(s))\|_{C^0} \bigg\},
\end{align*}
where the following shorthand is introduced:
$\widetilde\Gamma(s) \equiv \Gamma\big(|\boldsymbol{\xi}_j(s)|a_j(s), \lambda_j \x \cdot \boldsymbol{\nu}_j\big)$,
and similarly $\widetilde\Gamma(t)\equiv \Gamma\big(|\boldsymbol{\xi}_j(t)|a_j(t), \lambda_j \x \cdot \boldsymbol{\nu}_j\big)$.
To continue the estimate, we need the following uniform-in-time bounds.
First of all, thanks to \eqref{assumption on nabla vj}, we have
\begin{equation}\label{C0 estimate for xi, a, Psi}
\|\boldsymbol{\xi}_j\|_{C^0} + \|\Psi_j\|_{C^0} \leq C_{12},
\end{equation}
as well as
\begin{align*}
\|a_j\|^2_{C^1} &\leq \Big(1+C_4 r^{-1}\delta^2\Big)^{-1/2} \|\widetilde{a_j}\|_{C^1}
\leq \|\gggg - \vvv^\#\geucl\|_{C^1} + C_4 r^{-1} \delta^2 \|\gggg\|_{C^1}\nonumber\\
&\leq C_{5} l^{-1} \|\bg-\vv^\#\geucl\|_{C^0} + C_4 C_5 r^{-1} \delta^2 \|\bg\|_{C^0},
\end{align*}
which can be proved in a similar manner to \eqref{mollification estimate for v}. It follows that
\begin{equation}\label{aj in C1 norm, Phi3}
\|a_j\|_{C^1} \leq C_{13} (1+l^{-1})\delta.
\end{equation}
Moreover, a simple computation gives
\begin{equation}\label{aj in C0 norm, Phi3}
\|a_j\|_{C^0} \leq C_{14} \delta.
\end{equation}
Finally, \eqref{assumption on nabla vj} yields that
\begin{equation*}
\|\boldsymbol{\xi}_j\|_{C^1} \leq C_{15} \Big(\|\vv_j\|_{C^2} + \|\vv_j\|_{C^1} \Big\|\na\Big[ \big(\na^\top \vv_j \na \vv_j\big)^{-1}\Big]\Big\|_{C^0}\Big).
\end{equation*}
On the other hand, we have the identity:
\begin{align*}
&\na\Big[ \Big(\na^\top \vv_j \na \vv_j\Big)^{-1}\Big] \\
&= -\Big(\na^\top \vv_j \na \vv_j\Big)^{-1} \cdot \Big(\na\na^\top \vv_j \cdot \na \vv_j
+ \na^\top \vv_j \na^2\vv_j\Big) \cdot \Big(\na^\top \vv_j \na \vv_j\Big)^{-1},
\end{align*}
so that the following bound is verified:
\begin{equation}\label{xi C1 norm}
\|\boldsymbol{\xi}_j\|_{C^1} \leq C_{16} \|\vv_j\|_{C^2} \leq C_{17} \mu.
\end{equation}
Thus, as $\Gamma \in C^{\infty}([0,\delta_{\ast}]\times[0,2\pi])$ is periodic in the second argument,
we have
\begin{align*}
&\big\|\yy_j^{\rm corrugation}(t)-\yy_j^{\rm corrugation}(s)\big\|_{C^1}\\
&\leq \frac{C_{18}}{\lambda_j} \Big\{ \big\|\Psi_j(t)-\Psi_j(s)\big\|_{C^1} + \big\|\widetilde\Gamma(t)-\widetilde\Gamma(s)\big\|_{C^1} \Big\} \nonumber\\
&\leq \frac{C_{19}}{\lambda_j} \Big\{\eta l^{-1} + \eta^nl^{-1} + \big\|\widetilde\Gamma(t)-\widetilde\Gamma(s)\big\|_{C^1} \Big\}.
\end{align*}
To continue, we estimate, by Taylor expansion,
\begin{align*}
\big\|\widetilde\Gamma(t)-\widetilde\Gamma(s)\big\|_{C^1}
\leq \Big\| \p_{z_1}\Gamma(\Theta, \lambda_j \x \cdot \boldsymbol{\nu}_j) \Big[|\boldsymbol{\xi}_j(t)|a_j(t) - |\boldsymbol{\xi}_j(s)|a_j(s)\Big] \Big\|_{C^1},
\end{align*}
where $\Theta$ lies (pointwise) between $|\boldsymbol{\xi}_j(s)||a_j(s)|$ and $|\boldsymbol{\xi}_j(t)||a_j(t)|$. Then
\begin{align*}
&\big\|\widetilde\Gamma(t)-\widetilde\Gamma(s)\big\|_{C^1}\\
&\leq C_{20}\Big\||\boldsymbol{\xi}_j(t)|a_j(t) - |\boldsymbol{\xi}_j(s)|a_j(s) \Big\|_{C^1} \nonumber\\
&\leq C_{21} \bigg\{\|\boldsymbol{\xi}_j(t)\|_{C^1} \|a_j(t)-a_j(s)\|_{C^0} + \|\boldsymbol{\xi}_j(t)\|_{C^0} \|a_j(t)-a_j(s)\|_{C^1}\nonumber\\
&\qquad\quad\,\, + \|a_j(s)\|_{C^1} \Big\| |\boldsymbol{\xi}_j(t)|-|\boldsymbol{\xi}_j(s)|\Big\|_{C^0}
+ \|a_j(s)\|_{C^0}\Big\| |\boldsymbol{\xi}_j(t)|-|\boldsymbol{\xi}_j(s)|\Big\|_{C^1} \bigg\}\nonumber\\
&\leq C_{22} \Big\{\mu \delta + \eta + (1+l^{-1})\delta\eta + \delta\eta l^{-1} \Big\}.
\end{align*}
Here, the first inequality follows from \eqref{estimate for Gamma}, \eqref{C0 estimate for xi, a, Psi},
and \eqref{aj in C0 norm, Phi3};
the second follows from interpolation; and the final one from
estimates \eqref{estimate in xi and zeta}--\eqref{xi C1 norm}.
The constants $C_{20}$, $C_{21}$, and $C_{22}$ may further depend on $\|\p_{z_1}\Gamma\|_{C^0}$.
We are now ready to conclude
\begin{equation}\label{corrugation difference, Phi3}
\big\|\yy_j^{\rm corrugation}(t)-\yy_j^{\rm corrugation}(s)\big\|_{C^1}
\leq \frac{C_{23}}{\lambda_j}\big\{ \eta l^{-1} + \mu \delta + \eta + l^{-1}\delta \eta\big\},
\end{equation}
where, without loss of generality, we have assumed that $\eta \leq 1$.
Therefore, summing the geometric series in view of equation \eqref{lambda_j, Phi3}, we have
\begin{align}
\|\sss(\vv)(t)-\sss(\vv)(s)\|_{C^1}
&\leq C_{23} \frac{1-K^{-J_n}}{K^2-K} (\eta + \mu l \delta + \eta l + \delta\eta)\nonumber\\
&\leq C_{23} (\eta + \mu l \delta + \eta l + \delta\eta),\label{one stage, Phi3}
\end{align}
since $K \geq 2$ in the regime considered below, so that $\frac{1-K^{-J_n}}{K^2-K} \leq 1$.
Here, $C_{23}$ depends on $\bg_0$, $\bg$, $r$, $n$, $\Omega$, $\beta$, $\|J\|_{W^{1,1}}$, and $\Gamma$,
but not on $\eta$, $l$, $\mu$, and $\delta$.
The last three of these four parameters are chosen differently for the distinct {\em stages} below.
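As a sanity check (not part of the proof) on the summation over $j$: with $\lambda_j = K^{j+1}l^{-1}$, the series $\sum_{j=1}^{J_n}\lambda_j^{-1}$ sums exactly to $l(1-K^{-J_n})/(K^2-K)$. A few lines of Python, with arbitrary sample values of $K$, $l$, $J_n$, confirm this:

```python
# Numerical check of the geometric summation sum_{j=1}^{J_n} 1/lambda_j
# with lambda_j = K^{j+1} l^{-1}; the sample values below are arbitrary.
K, l, Jn = 3.0, 0.01, 7

lhs = sum(l * K ** (-(j + 1)) for j in range(1, Jn + 1))
rhs = l * (1.0 - K ** (-Jn)) / (K ** 2 - K)
print(lhs, rhs)   # the two values agree up to rounding
```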
\medskip
\noindent
{\bf Stages in \cite{CDS2012}.}
Now we iterate the above one-{\em stage} construction countably many times.
As in Step 3, we index the stages by $q=1,2,3,\ldots$.
Recall from \S 6 in \cite{CDS2012} that $\Phi_3$ is given by
\begin{equation*}
\Phi_3(\vv):= \lim_{q\rightarrow \infty} \sss^q (\vv) \in C^1_{\rm loc}(\mathcal{M}; \R^{n+1}),
\end{equation*}
where one needs to suitably choose $(\mu_q, \delta_q)$ in place of $(\mu, \delta)$
in equations \eqref{y1}--\eqref{y3} in Lemma \ref{lemma: one stage, Phi3} above,
with $\sss$ replaced by $\sss^q=\underbrace{\sss\circ\ldots\circ\sss}_{q \text{ times}}$ therein.
Following the delicate arguments therein, by choosing
\begin{equation*}
a < \min \big\{\frac{1}{2}, \frac{\beta J_n}{2-\beta} \big\}, \qquad \alpha < \min \big\{\frac{\beta}{2}, \frac{1}{1+2J_n}\big\},
\end{equation*}
one can bound, via interpolation between the $C^1$ and $C^2$ estimates, as follows: for each $q$,
\begin{align}
& \|\sss^{q+1}(\vv) - \sss^q(\vv)\|_{C^{1,\alpha}} \leq C_{24} \mu_0 K^{-\big((1-\alpha)a-\alpha J_n\big)q} \quad \text{ on } [0,T],
\label{holder}\\
&\|\sss^q(\vv)^\# \geucl - \bg\|_{C^0} \leq \delta_q^2.\label{isometric for holder case}
\end{align}
Therefore, in view of equation \eqref{holder},
$\Phi_3(\vv)$ in fact lies in $C^{1,\alpha}_{\rm loc}(\mathcal{M}; \R^{n+1})$
and, by equation \eqref{isometric for holder case}, $\Phi_3(\vv)$ is indeed an isometric immersion.
It remains to discuss the dependence of $\Phi_3(\vv)$ on time.
From the definition of $\Phi_3$ and estimate \eqref{one stage, Phi3} for one stage,
we observe
\begin{equation}\label{Phi 3 bound in t}
\|\Phi_3(\vv)(t)-\Phi_3(\vv)(s)\|_{C^1} \leq \lim_{q\rightarrow \infty} C_{23} (\eta + \mu_q l_q \delta_q + \eta l_q + \eta \delta_q).
\end{equation}
In \S 6.1 of \cite{CDS2012}, parameters $(\delta_q, \mu_q)$ are chosen to satisfy
\begin{equation}\label{relation for delta, mu}
\delta_{q} \leq \delta_0 K^{-aq}, \qquad \mu_q = \mu_0 K^{q J_n},
\end{equation}
where $\mu_0 > 0$ is a fixed constant, and $K \geq 2^{1/a}$ for the $a>0$ specified above.
Since $l_q:=\delta_q \mu^{-1}_q$ as defined in equation \eqref{l, Phi3}, we have
\begin{equation*}
l_q \leq \frac{\delta_0}{\mu_0}K^{-(a+J_n)q} \qquad \text{ for each } q\in\{1,2,3,\ldots\}.
\end{equation*}
In particular, $l_q \rightarrow 0$ because $K>1$. Therefore, \eqref{Phi 3 bound in t} becomes
\begin{equation*}
\|\Phi_3(\vv)(t)-\Phi_3(\vv)(s)\|_{C^1} \leq C_{23} \eta,
\end{equation*}
where $C_{23}$ is a constant depending only on $\bg_0$, $\bg$, $n$, $r$, $\Omega$, $\delta_0$, $\mu_0$, $K$, $\alpha$, $\beta$,
$\Gamma$, $\|J\|_{W^{1,1}}$, and $a$,
provided that $\|\vv(t) - \vv(s)\|_{C^1} \leq \eta$ for some $t,s \in [0,T]$.
That is, $\Phi_3: C^2_{\rm loc}(\mathcal{M}; \R^{n+1}) \rightarrow C^{1,\alpha}_{\rm loc}(\mathcal{M}; \R^{n+1})$
is continuous in time, when the domain and range are both equipped with the $C^1_{\rm loc}$ topology.
The proof is now complete.
\endproof
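As a side remark, the parameter bookkeeping behind the limit in \eqref{Phi 3 bound in t} can be illustrated numerically. In the sketch below (the values of $\delta_0$, $\mu_0$, $a$, $J_n$ are arbitrary), both $l_q$ and $\mu_q l_q \delta_q = \delta_q^2$ decay geometrically once $K \geq 2^{1/a}$:

```python
# Illustration of the decay of the stage parameters: with
# delta_q = delta0 * K^{-a q}, mu_q = mu0 * K^{q Jn}, l_q = delta_q / mu_q,
# both l_q and mu_q * l_q * delta_q (= delta_q^2) tend to zero.
delta0, mu0, Jn, a = 0.1, 5.0, 3, 0.4
K = 2.0 ** (1.0 / a)                 # smallest admissible K

for q in [1, 5, 10]:
    delta_q = delta0 * K ** (-a * q)
    mu_q = mu0 * K ** (q * Jn)
    l_q = delta_q / mu_q
    print(q, l_q, mu_q * l_q * delta_q)   # both columns decrease to 0
```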
\end{appendices}
\bigskip
\section*{Acknowledgments}
A. Acharya acknowledges the support of the Rosi and Max Varon Visiting Professorship at
the Weizmann Institute of Science, Rehovot, Israel,
and was also supported in part by grants NSF-CMMI-1435624, NSF-DMS-1434734,
and ARO W911NF-15-1-0239.
G.-Q. Chen's research was supported in part by
the UK
Engineering and Physical Sciences Research Council Award
EP/E035027/1 and
EP/L015811/1, and the Royal Society--Wolfson Research Merit Award (UK).
S. Li's research was supported in part by the UK EPSRC Science and
Innovation award to the Oxford Centre for Nonlinear PDE (EP/E035027/1).
M. Slemrod was supported in part by Simons Collaborative Research Grant 232531.
M. Slemrod also thanks the Oxford Centre for Nonlinear PDE and the Max Planck Institute for Mathematics in the Sciences (Leipzig) for their kind hospitality.
D. Wang was supported in part by NSF grants DMS-1312800 and DMS-1613213.
We also thank L. Sz\'{e}kelyhidi Jr. for his valuable remarks and suggestions.
Finally we thank the anonymous referee for his/her valuable comments and suggestions.
\bigskip
\begin{document}
\maketitle
\abstract{The first Painlev\'e equation can be represented as the equation of isomonodromic deformation
of a Schr\"odinger equation with a cubic potential. We introduce a new algorithm for computing the direct
monodromy problem for this Schr\"odinger equation. The algorithm is based on the geometric theory of the
Schr\"odinger equation due to Nevanlinna.}
\section{Introduction}
Painlev\'e equations form the core of what may be called ``modern
special function theory''. Indeed, since the pioneering works of Ablowitz and Segur and of McCoy, Tracy, and Wu in the 1970s,
the Painlev\'e functions have been playing the same role in nonlinear mathematical physics that the classical special functions, such as the Airy and Bessel functions, play in linear physics.
For example, it has recently been conjectured by Dubrovin \cite{dubrovin08} (and proven in a particular case by Claeys and Grava \cite{tamtom}) that Painlev\'e equations also play a central role in the theory of nonlinear waves and dispersive equations.
In this context, the first Painlev\'e equation (P-I)
\begin{equation}\label{eq:PI}
y''= 6y^2 -z, \qquad z \in \mathbb{C},
\end{equation}
is of special importance.
Indeed, Dubrovin, Grava, and Klein
\cite{dubrovin} recently
discovered that a special solution of P-I, called the {\it int\'egrale tritronqu\'ee}, provides
the universal correction to the semiclassical limit of solutions to the focusing nonlinear
Schr\"odinger equation.
The key mathematical fact about the Painlev\'e equations is their Lax integrability.
This allows one to apply to the study of the Painlev\'e functions the powerful isomonodromy/Riemann--Hilbert method.
Using this method, a great number of analytic, and especially asymptotic, results have been obtained during the last
two decades: for the general theory, see the monumental book \cite{fokas}; for P-I, see
our papers \cite{piwkb,piwkb2}.
It is clear that we need efficient and reliable algorithms for computing solutions
of the Painlev\'e equations, and that these algorithms should take into account the
integrability of the Painlev\'e equations. To this aim, S. Olver has recently built an algorithm
to solve the Riemann--Hilbert problem (or inverse monodromy problem)
associated with Painlev\'e-II \cite{sheehan10}.
In the present paper we introduce a new, simple algorithm
for solving the \textit{direct monodromy problem} associated with P-I:
given the Cauchy data $y,y',z$ of a solution of P-I, we can compute
its monodromy data, or equivalently the Riemann--Hilbert problem associated with it.
Given our algorithm, it would be possible to solve the inverse monodromy problem
for Painlev\'e-I simply by using an appropriate Newton's method.
Due to the simplicity of our algorithm, this procedure would probably be more efficient than a numerical solution
of the corresponding Riemann--Hilbert problem (which, to the best of our knowledge, has not yet been given).
However, this is a matter of future research.
Below we briefly introduce the direct monodromy problem associated to P-I.
\subsection{The Perturbed Cubic Oscillator}
Consider the following Schr\"odinger equation with a cubic potential (plus a fuchsian singularity)
\begin{eqnarray}\label{eq:perturbedschr}
\frac{d^2\psi(\lambda)}{d\lambda^2} \!\! &=& \!\! Q(\lambda;y,y',z) \psi(\lambda)\; , \\ \nonumber
Q(\lambda;y,y',z) \!\! &=& \!\! 4 \lambda^3 - 2 \lambda z + 2 z y - 4 y^3 + y'^2+ \frac{y'}{\lambda - y}
+\frac{3}{4(\lambda -y)^2} \, .
\end{eqnarray}
We call this equation the \textit{perturbed (cubic) oscillator}.
We define the Stokes Sector $S_k$ as
\begin{equation}
S_k=\left\lbrace \lambda : \av{ \arg\lambda - \frac{2 \pi k}{5} } < \frac{\pi}{5} \right\rbrace
\, , k \in \bb{Z}_{5} \; .
\end{equation}
Here, and for the rest of the paper, $\bb{Z}_5$ is the group of the integers modulo five. We will often choose
as representatives of $\bb{Z}_{5}$ the numbers $-2,-1,0,1,2$.
For any Stokes sector, there is a unique (up to a multiplicative constant) solution of the perturbed oscillator
that decays exponentially inside $S_k$. We call this solution the \textit{$k$-th subdominant solution} and denote it by
$\psi_k(\lambda;y,y',z)$\footnote{Equation (\ref{eq:perturbedschr}) has a fuchsian singularity
at $\lambda=y$. Hence, as we explain in Section 2, any solution is two-valued, having a square-root singularity at $\lambda=y$.
We do not discuss this fact here: the reader can find the rigorous definition of the subdominant solution in the Appendix.}.
The asymptotic behaviour of $\psi_k$ is known explicitly in a larger sector
of the complex plane, namely $S_{k-1}\cup\overline{S_k}\cup S_{k+1}$:
\begin{equation}\label{eq:intwkb}
\lim_{\substack{ \lambda \to \infty \\ \av{\arg{\lambda} - \frac{2 \pi k}{5}} < \frac{3 \pi}{5} -\e}}
\frac{\psi_k(\lambda;y,y',z)}{\lambda^{-\frac{3}{4}} \exp\left\lbrace-\frac{4}{5} \lambda^{\frac{5}{2}} +
\frac{z}{2}\lambda^{\frac{1}{2}}\right\rbrace} \to 1 , \; \forall \e >0 \, .
\end{equation}
Here the branch of $\lambda^{\frac{1}{2}}$ is chosen such that $\psi_k$ is exponentially small in $S_k$.
Since $\psi_{k-1}$ grows exponentially in $S_k$, the solutions $\psi_{k-1}$ and $\psi_{k}$ are linearly independent.
Hence $\left\lbrace \psi_{k-1},\psi_{k} \right\rbrace$ is a basis of solutions, whose asymptotic behaviour is known in $S_{k-1}\cup S_{k}$.
For fixed $k^* \in \bb{Z}_5$, we know the asymptotic behaviour of $\left\lbrace \psi_{k^*-1},\psi_{k^*} \right\rbrace$
only in $S_{k^*-1}\cup S_{k^*}$.
If we want to know the asymptotic behaviour of this basis in the whole complex plane, it is sufficient to
know the linear transformation from the basis $\left\lbrace \psi_{k-1},\psi_{k} \right\rbrace$ to the basis
$\left\lbrace \psi_{k},\psi_{k+1} \right\rbrace$ for any $k \in \bb{Z}_5$.
From the asymptotic behaviours, it follows that these changes of basis
are triangular matrices: for any $k$, $\psi_{k-1}=\psi_{k+1}+\sigma_k\psi_k$
for some complex number $\sigma_k$, called the \textit{Stokes multiplier}.
The quintuplet of Stokes multipliers $\sigma_k, k \in \bb{Z}_5$ is called the monodromy data of the perturbed oscillator.
We can now define the direct monodromy problem.
\begin{Prob*}
Given $y,y',z$, compute the Stokes multipliers $\sigma_k(y,y',z)$
of the perturbed oscillator equation (\ref{eq:perturbedschr}).
\end{Prob*}
Our Algorithm gives a numerical solution
of this problem.
\subsection{P-I: Isomonodromy Approach}
What is the relation of the direct monodromy problem with P-I?
\textbf{Painlev\'e-I is the equation of isomonodromic deformation of the perturbed oscillator}.
Indeed the following Theorem holds.
\begin{Thm}\label{thm:isomonodromy}
Let the parameters $y=y(z), y'=\frac{dy(z)}{dz}$ of the potential $Q(\lambda;y,y',z)$
be functions of $z$; then $y(z)$
solves P-I if and only if the Stokes multipliers of the perturbed oscillator do not depend on $z$.
\begin{proof}
See for example \cite{piwkb}.
\end{proof}
\end{Thm}
Fix a solution $y(z)$ of P-I. If $z$ is a pole of $y(z)$, then equation (\ref{eq:perturbedschr}) is not well-defined. However,
the author recently showed \cite{piwkb} (see also \cite{chudnovsky94}) that
this difficulty can be overcome.
Let $a \in \bb{C}$ be a pole of $y(z)$; then
$y(z)$ has the following Laurent expansion around $a$:
$$y(z)=\frac{1}{(z-a)^2}+\frac{a(z-a)^2}{10}+ \frac{(z-a)^3}{6}+b(z-a)^4+O((z-a)^5) \; .$$
Inserting the Laurent expansion into equation (\ref{eq:perturbedschr}) and taking the limit $z \to a$,
equation (\ref{eq:perturbedschr})
becomes the following Schr\"odinger equation with a cubic potential
\begin{equation}\label{eq:schr}
\frac{d^2\psi(\lambda)}{d\lambda^2}= V(\lambda;2a,28b) \psi(\lambda)\; , \quad
V(\lambda;a,b)=4 \lambda^3 - a \lambda -b \;.
\end{equation}
Moreover the Stokes multipliers of the equation (\ref{eq:schr}) coincide with
the Stokes multipliers of equation (\ref{eq:perturbedschr}).
We call equation (\ref{eq:schr}) the cubic oscillator. For the rest of the paper
we consider it as a particular case of the perturbed oscillator (\ref{eq:perturbedschr}).
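The cancellation of the singular terms in this limit can be checked numerically. The sketch below (with arbitrarily chosen pole data $a$, $b$ and spectral parameter $\lambda$; it uses the truncated Laurent series, so the agreement is only up to higher-order corrections) compares $Q(\lambda;y,y',z)$ with $V(\lambda;2a,28b)$ as $z \to a$:

```python
# Check that Q(lam; y, y', z) -> V(lam; 2a, 28b) along the Laurent expansion
# of y at a pole z = a.  The values of a, b, lam are arbitrary; y, y' below
# are the truncated Laurent series, so small residual errors remain.
def Q(lam, y, yp, z):
    return (4 * lam**3 - 2 * lam * z + 2 * z * y - 4 * y**3 + yp**2
            + yp / (lam - y) + 3.0 / (4.0 * (lam - y) ** 2))

def V(lam, A, B):
    return 4 * lam**3 - A * lam - B

a, b, lam = 0.7, 0.2, 1.3
for eps in [1e-1, 1e-2]:
    z = a + eps
    y = eps**-2 + a * eps**2 / 10 + eps**3 / 6 + b * eps**4
    yp = -2 * eps**-3 + a * eps / 5 + eps**2 / 2 + 4 * b * eps**3
    print(eps, Q(lam, y, yp, z) - V(lam, 2 * a, 28 * b))   # difference -> 0
```

Note that the individual terms $-4y^3$ and $y'^2$ blow up like $(z-a)^{-6}$; only their combination stays bounded, which is what the printed differences confirm.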
\subsection{Stokes Multipliers and Asymptotic Values}
The algorithm is based on formula (\ref{eq:intsigmaR})
below, which we discovered in \cite{dtba}.
Consider the following Schwarzian equation
\begin{equation}\label{eq:intschw}
\left\lbrace f(\lambda) , \lambda \right\rbrace = -2 Q(\lambda;y,y',z) \, .
\end{equation}
Here $\left\lbrace f(\lambda) , \lambda \right\rbrace=\frac{f'''(\lambda)}{f'(\lambda)} -
\frac{3}{2}\left(\frac{f''(\lambda)}{f'(\lambda)}\right)^2 $ is the Schwarzian derivative.
For every solution of the Schwarzian equation (\ref{eq:intschw}) the following limit exists
\begin{equation*}
w_k(f)=\lim_{\lambda \to \infty,\ \lambda \in S_k }f(\lambda) \in \mathbb{C}
\cup \{\infty\} \, ,
\end{equation*}
provided the limit is taken along a curve non-tangential to the boundary of $S_k$.
In Section 2, we will prove that the following formula holds for any solution of the Schwarzian
equation (\ref{eq:intschw})
\begin{equation}\label{eq:intsigmaR}
\sigma_k(y,y',z)=i\left(w_{1+k}(f),w_{-2+k}(f); w_{-1+k}(f),w_{2+k}(f) \right) \;.
\end{equation}
Here $(a,b;c,d)=\frac{(a-c)(b-d)}{(a-d)(b-c)}$ is the cross ratio of four points on the sphere.
The paper is organized as follows.
In Section 2 we introduce Nevanlinna's theory of the cubic oscillator and the Schwarzian differential equation (\ref{eq:schw}).
Then we prove formula (\ref{eq:intsigmaR}) for computing the Stokes multipliers from any solution of the Schwarzian differential equation.
Section 3 is devoted to the description of the Algorithm.
In Section 4 we test our algorithm against the WKB prediction and the Deformed TBA equations.
For convenience of the reader, we explain the basic theory of cubic oscillators (Stokes sectors, Stokes multipliers,
subdominant solutions, etc.) in the Appendix.
\paragraph{Acknowledgments}
I am indebted to my advisor Prof. B. Dubrovin who constantly gave me
suggestions and advice. This work began in May 2010 during the workshop ``Numerical solution of the Painlev\'e equations'' at ICMS, Edinburgh, and was finished
in June 2010 while I was a guest of Prof. Y. Takei at RIMS, Kyoto.
I thank Prof. Takei and all the participants of the workshop for the stimulating discussions.
This work is partially supported by the Italian Ministry of University and Research
(MIUR) grant PRIN 2008 ``Geometric methods in the theory of nonlinear waves and their applications''.
\section{Schwarzian Differential Equation}
As we mentioned in the Introduction, our Algorithm is based on formula (\ref{eq:intsigmaR}), which allows one
to compute the Stokes multipliers from any solution of the Schwarzian differential equation (\ref{eq:intschw}).
This formula, proven in Theorem \ref{thm:dtba} below,
has its roots in the geometric theory of the Schr\"odinger equation,
which was developed by Nevanlinna in the 1930s \cite{nevanlinna32}.
The author learned this beautiful theory from the remarkable paper of Eremenko and Gabrielov \cite{eremenko}.
In this section we follow quite closely \cite{eremenko}, as well as the author's recent paper \cite{dtba}.
\begin{Rem*}
Equation (\ref{eq:perturbedschr}) has a fuchsian singularity at the pole $\lambda=y$ of the potential
$Q(\lambda;y,y',z)$. However this is an \textit{apparent singularity} \cite{piwkb}: the monodromy around
the singularity of any solution of (\ref{eq:perturbedschr}) is $-1$.
As a consequence, the ratio of two solutions of (\ref{eq:perturbedschr}) is a meromorphic function.
\end{Rem*}
The main geometric object of Nevanlinna's theory is the Schwarzian derivative of a
(non-constant) meromorphic function $f(\lambda)$
\begin{equation}\label{def:schwarzian}
\left\lbrace f(\lambda),\lambda \right\rbrace =\frac{f'''(\lambda)}{f'(\lambda)} -
\frac{3}{2}\left(\frac{f''(\lambda)}{f'(\lambda)}\right)^2 \; .
\end{equation}
The Schwarzian derivative is strictly related to the Schr\"odinger equation (\ref{eq:schr}). Indeed, the following Lemma is
true.
\begin{Lem}\label{lem:schsch}
The (non-constant) meromorphic function $f:\bb{C} \to \overline{\bb{C}}$ solves the Schwarzian differential equation
\begin{equation}\label{eq:schw}
\left\lbrace f(\lambda) , \lambda \right\rbrace = -2 Q(\lambda;y,y',z)
\end{equation}
if and only if $f(\lambda)=\frac{\phi(\lambda)}{\chi(\lambda)}$, where $\phi(\lambda)$ and $\chi(\lambda)$
are two linearly independent solutions of the Schr\"odinger equation (\ref{eq:perturbedschr}). Hence,
the first derivative of any (non-constant) solution of (\ref{eq:schw}) vanishes only at the pole $\lambda=y$ of the potential.
\end{Lem}
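Lemma \ref{lem:schsch} can be illustrated in the simplest possible case $Q\equiv 1$ (a toy potential, not the cubic one of this paper): $\psi''=\psi$ has solutions $e^{\lambda}$ and $e^{-\lambda}$, so $f=e^{2\lambda}$ must satisfy $\{f,\lambda\}=-2$, and the Schwarzian is unchanged under M\"obius transformations of $f$. A finite-difference Python check:

```python
import math

# Finite-difference check of the Schwarzian derivative in the toy case
# Q = 1: psi'' = psi has solutions e^x, e^{-x}, so f = e^{2x} satisfies
# {f, x} = -2Q = -2; a Moebius transform of f has the same Schwarzian.
def schwarzian(f, x, h=1e-3):
    # central-difference approximations of f', f'', f'''
    f1 = (f(x + h) - f(x - h)) / (2 * h)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    f3 = (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1) ** 2

f = lambda x: math.exp(2 * x)
g = lambda x: (2 * f(x) + 1) / (f(x) + 3)    # a Moebius transform of f

print(schwarzian(f, 0.4), schwarzian(g, 0.4))   # both ≈ -2
```

The invariance under M\"obius transformations seen here is exactly what makes the asymptotic values of different solutions of \eqref{eq:schw} interchangeable below.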
We define the Asymptotic Stokes Sector $S_k$ as
\begin{equation}
S_k=\left\lbrace \lambda : \av{ \arg\lambda - \frac{2 \pi k}{5} } < \frac{\pi}{5} \right\rbrace
\, , k \in \bb{Z}_{5} \; .
\end{equation}
Every solution of the Schwarzian equation (\ref{eq:schw}) has a limit as $\lambda \to \infty$, $\lambda \in S_k$.
More precisely, we have the following
\begin{Lem}[Nevanlinna]\label{lem:wk}
\begin{itemize}
\item[(i)]Let $f(\lambda)=\frac{\phi(\lambda)}{\chi(\lambda)}$ be a solution of (\ref{eq:schw}). Then, for
all $k \in \bb{Z}_5$, the following limit exists
\begin{equation}\label{eq:wk}
w_k(f)=\lim_{\lambda \to \infty,\ \lambda \in S_k }f(\lambda) \in \mathbb{C}
\cup \{\infty\} \, ,
\end{equation}
provided the limit is taken along a curve non-tangential to the boundary of $S_k$.
\item[(ii)]$w_{k+1}(f) \neq w_{k}(f) \, , \; \forall k \in \bb{Z}_5$.
\item[(iii)] Let $g(\lambda)=\frac{a f(\lambda) +b}{c f(\lambda)+d}=
\frac{a\phi(\lambda) + b\chi(\lambda) }{c \phi(\lambda)+d\chi(\lambda)}$,
$\left(\begin{matrix}
a & b \\
c & d
\end{matrix} \right) \in Gl(2,\bb{C})$. Then
\begin{equation}\label{eq:moebius}
w_k(g)= \frac{a \, w_k(f) + b}{c \, w_k(f)+d} \; .
\end{equation}
\item[(iv)]If the function $f$ is evaluated along a ray contained in $S_k$, the convergence to $w_k(f)$ is super-exponential.
\end{itemize}
\begin{proof}
\begin{itemize}
\item[(i-iii)]Let $\psi_k$ be the solution of equation (\ref{eq:perturbedschr}) subdominant in $S_k$ and $\psi_{k+1}$ be the one subdominant in $S_{k+1}$. Since they form a basis of solutions, then $f(\lambda)=\frac{\alpha\psi_k(\lambda) + \beta\psi_{k+1}(\lambda) }{\gamma \psi_k(\lambda)+\delta\psi_{k+1}(\lambda)}$, for some $\left(\begin{matrix}
\alpha & \beta \\
\gamma & \delta
\end{matrix} \right) \in Gl(2,\bb{C}) $. Hence $w_k(f)=\frac{\beta}{\delta}$ if $\delta \neq 0$, $w_k(f)=\infty$ if $\delta=0$.
Similarly $w_{k+1}(f)=\frac{\alpha}{\gamma}$. Since $\left(\begin{matrix}
\alpha & \beta \\
\gamma & \delta
\end{matrix} \right) \in Gl(2,\bb{C}) $ is invertible, we conclude that $w_k(f) \neq w_{k+1}(f)$.
\item[(iv)]From estimates (\ref{eq:intwkb}) we know that inside $S_k$,
$$\av{\frac{\psi_k(\lambda;y,y',z)}{\psi_{k+1}(\lambda;y,y',z)}} \sim e^{-Re\left(\frac{8}{5}\lambda^{\frac{5}{2}} -
z\lambda^{\frac{1}{2}}\right)} \;,$$ where the branch of $\lambda^{\frac{1}{2}}$ is chosen such that
the exponential is decaying.
\end{itemize}
\end{proof}
\end{Lem}
\begin{Def}\label{def:wk}
Let $f(\lambda)$ be a solution of the Schwarzian equation (\ref{eq:schw})
and $w_k(f)$ be defined as in (\ref{eq:wk}). We call $w_k(f)$ the k-th asymptotic value of $f$.
\end{Def}
We noticed in a previous paper \cite{dtba} that the Stokes multipliers of the Schr\"odinger equation
are rational functions of the asymptotic values $w_k(f)$. This relation is the basis of our Algorithm.
\begin{Thm}\label{thm:dtba} \cite{dtba}
Denote $\sigma_k$ the k-th Stokes multiplier of the Schr\"odinger equation (\ref{eq:perturbedschr})
(for its precise definition, see equation (\ref{eq:multipliers}) in the Appendix).
Let $f$ be any solution of the Schwarzian equation (\ref{eq:schw}). Then
\begin{equation}\label{eq:identity}
\sigma_k = i \left(w_{1+k}(f),w_{-2+k}(f); w_{-1+k}(f),w_{2+k}(f) \right) \;, \forall k \in \bb{Z}_5 \;,
\end{equation}
where $(a,b;c,d)=\frac{(a-c)(b-d)}{(a-d)(b-c)}$ is the cross ratio of four points on the sphere.
\begin{proof}
Due to equation (\ref{eq:moebius}), the asymptotic values
of two different solutions of (\ref{eq:schw}) are all related by the same fractional linear transformation.
As is well known, the cross ratio of four points on the sphere is invariant if all the points are transformed by the same fractional linear transformation.
Hence the right-hand side of (\ref{eq:identity}) does not depend on the choice of the solution of the Schwarzian equation.
Let $\psi_{k+1}$ be the solution of (\ref{eq:perturbedschr}) subdominant in $S_{k+1}$ and $\psi_{k+2}$ be
the one subdominant in $S_{k+2}$ (see the Appendix for the precise definition). By choosing
$f(\lambda)=\frac{\psi_{k+1}(\lambda)}{\psi_{k+2}(\lambda)}$, one verifies easily that the
identity (\ref{eq:identity}) is satisfied.
\end{proof}
\end{Thm}
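The invariance of the cross ratio used in the proof can be checked in a few lines of Python (the sample points and matrix below are arbitrary):

```python
# The cross ratio (a, b; c, d) of Theorem (identity), together with a
# numerical check of its invariance under a fractional linear transformation.
def cross_ratio(a, b, c, d):
    return (a - c) * (b - d) / ((a - d) * (b - c))

def moebius(w, A, B, C, D):
    return (A * w + B) / (C * w + D)

pts = [1.0 + 2.0j, -0.5j, 3.0 + 0.0j, -1.0 + 1.0j]   # arbitrary distinct points
A, B, C, D = 2.0, 1.0j, 1.0, 3.0                      # A*D - B*C != 0

r1 = cross_ratio(*pts)
r2 = cross_ratio(*(moebius(w, A, B, C, D) for w in pts))
print(abs(r1 - r2))   # ≈ 0 up to rounding
```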
\begin{Rem*}
The same construction presented here holds for anharmonic oscillators with polynomial potentials
of any degree. For any degree, there are formulas similar to (\ref{eq:identity}) for expressing
Stokes multipliers in terms of cross ratios of asymptotic values.
The general formula will be given in a subsequent publication.
\end{Rem*}
\subsection{Singularities}
Since the Schwarzian differential equation is linearized by the Schr\"odinger equation (see Lemma \ref{lem:schsch}), any solution is a meromorphic function and has infinitely many poles (for a proof of this fact, see \cite{nevanlinna32}
and \cite{elfving}).
The poles, however, are localized near the boundaries of the Stokes sectors $S_k, k \in \bb{Z}_5$.
Indeed, using the estimates (\ref{eq:intwkb}) one can prove the following
\begin{Lem}\label{lem:poles}
Let $f(\lambda)$ be any solution of the Schwarzian equation (\ref{eq:schw}). Fix $\e >0$ and define
$\tilde{S_k}= \left\lbrace \lambda : \av{ \arg\lambda - \frac{2 \pi k}{5} } \leq \frac{\pi}{5} -\e \right\rbrace
\, , k \in \bb{Z}_{5} \; .$ Then $f(\lambda)$ has a finite number of poles inside $\tilde{S_k}$. Hence,
there are a finite number of rays inside $\tilde{S_k}$ on which $f(\lambda)$ has a singularity.
\end{Lem}
\section{The Algorithm}
In the previous section we have proved the following remarkable facts:
\begin{itemize}
\item Along any ray contained in the Stokes sector $S_k$, any solution $f$ of the Schwarzian differential equation (\ref{eq:schw})
converges super-exponentially to the asymptotic value $w_k(f)$. See Lemma \ref{lem:wk} (iv).
\item The Stokes multipliers of the Schr\"odinger equation (\ref{eq:perturbedschr}) are cross-ratios of the
asymptotic values $w_k(f)$. See equation (\ref{eq:identity}).
\item Inside any closed subsector of $S_k$, $f$ has a finite number of poles. See Lemma \ref{lem:poles}.
\end{itemize}
Hence the Simple Algorithm for Computing Stokes Multipliers goes as follows:
\begin{enumerate}
\item Set $k=-2$.
\item Fix arbitrary Cauchy data of $f$: $f(\lambda^*),f'(\lambda^*),f''(\lambda^*)$, with the conditions $\lambda^* \neq y$, $f'(\lambda^*) \neq 0$.
\item Choose an angle $\alpha$ inside $S_k$, such that the singular point $\lambda=y$ does not belong to the
corresponding ray, i.e. $\alpha \neq \arg y$. Define $t: \bb{R}^+\cup 0 \to \bb{C}$, $t(x)=f(e^{i\alpha}x +\lambda^*)$.
The function $t$ satisfies the following Cauchy problem
\begin{eqnarray}\label{eq:cauchy}
\left\lbrace \begin{aligned}
\left\lbrace t(x),x \right\rbrace = e^{2 i\alpha} Q(e^{i\alpha}x + \lambda^*;y,y',z), \hspace{1.5cm} \\
t(0)=f(\lambda^*), t'(0)=e^{i\alpha}f'(\lambda^*), \, t''(0)=e^{2 i\alpha}f''(\lambda^*) \; .
\end{aligned}
\right.
\end{eqnarray}
\item Integrate equation (\ref{eq:cauchy}) either directly \footnote{Integrating equation (\ref{eq:cauchy}) directly, one can hit a pole $x^*$ of $t$. To continue the solution past the pole, starting from $x^*-\e$ one can integrate the function $\tilde{t}=\frac{1}{t}$, which satisfies the same Schwarzian differential equation.} or by linearization (see the Remark below), and compute $w_k(f)$ with the desired accuracy and precision.
\item If $k <2$, increment $k$ and return to step 3.
\item Compute $\sigma_l$ using formula (\ref{eq:identity}) for all $l \in \bb{Z}_5$.
\end{enumerate}
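As an illustration, the integration step can be sketched via the linearization discussed in the Remark below, for the concrete potential $Q(\lambda)=4\lambda^3-b$ of the test in the next section. This is a sketch under our own conventions: we assume the standard normalization $\{f,\lambda\}=-2Q(\lambda)$ relating the Schwarzian and Schr\"odinger equations, and the function name and the fixed-step RK4 integrator are ours, not the authors'.

```python
# Hedged sketch of steps 2-5: rather than integrating the nonlinear Cauchy
# problem directly, integrate two solutions psi_a, psi_b of psi'' = Q psi
# along the ray arg(lambda) = alpha and read off the asymptotic value as the
# limit of f = psi_a / psi_b (the limit of the ratio of the two dominant
# coefficients).

import cmath

def asymptotic_value(alpha, b=1.0, x_max=6.0, h=1e-3):
    """Approximate w_k(f) along the ray of angle alpha (illustrative only)."""
    phase = cmath.exp(1j * alpha)

    def rhs(x, u):
        # u = (psi_a, psi_a', psi_b, psi_b') along the ray; the chain rule
        # gives d^2 psi / dx^2 = e^{2 i alpha} Q(e^{i alpha} x) psi.
        lam = phase * x
        q = phase ** 2 * (4.0 * lam ** 3 - b)
        return (u[1], q * u[0], u[3], q * u[2])

    # generic Cauchy data: psi_a(0)=1, psi_a'(0)=0, psi_b(0)=0, psi_b'(0)=1
    u = (1.0 + 0j, 0.0 + 0j, 0.0 + 0j, 1.0 + 0j)
    x = 0.0
    for _ in range(round(x_max / h)):        # classical fixed-step RK4
        k1 = rhs(x, u)
        k2 = rhs(x + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k1)))
        k3 = rhs(x + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k2)))
        k4 = rhs(x + h, tuple(ui + h * ki for ui, ki in zip(u, k3)))
        u = tuple(ui + h / 6 * (p + 2 * q_ + 2 * r + s)
                  for ui, p, q_, r, s in zip(u, k1, k2, k3, k4))
        x += h
    return u[0] / u[2]                       # f = psi_a/psi_b -> w_k(f)
```

Because the convergence of $f$ to $w_k(f)$ is super-exponential, the returned value is already stationary in $x_{\max}$ well before the exponentially growing solutions overflow double precision.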
\begin{Rem*}
As was shown in Lemma \ref{lem:schsch}, any solution $f$ of the Schwarzian equation is the ratio of two solutions of
the Schr\"odinger equation. Hence, one can solve the nonlinear Cauchy problem (\ref{eq:cauchy}) by solving
two linear Cauchy problems.
Whether the linearization is more efficient than the direct integration of (\ref{eq:cauchy}) will not be investigated in the present paper.
\end{Rem*}
\section{A Test}
We have implemented our algorithm using MATHEMATICA's ODE solver NDSolve, integrating equation (\ref{eq:cauchy})
with steps of length $0.1$. We set the integrator to stop at step $n$ if
\begin{eqnarray*}
\av{t(0.1 n) - t(0.1(n-1))} &<& 10^{-13} \, \mbox{ and } \\
\av{\frac{t(0.1 n) - t(0.1(n-1))}{t(0.1 n)}} &<& 10^{-13} \; .
\end{eqnarray*}
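The stopping rule can be sketched as follows (the names are ours, and the stepping function is a stand-in for the actual ODE integrator):

```python
# Hedged sketch of the stopping rule above: advance the integrator in steps
# of length h and stop once both the absolute and the relative change of t
# fall below the tolerance.

def integrate_until_stationary(step_fn, t0, h=0.1, tol=1e-13, max_steps=10 ** 6):
    """step_fn(x, t) returns t(x + h) given t(x); purely illustrative."""
    x, t_prev = 0.0, t0
    for _ in range(max_steps):
        t_cur = step_fn(x, t_prev)
        x += h
        if (abs(t_cur - t_prev) < tol
                and t_cur != 0
                and abs((t_cur - t_prev) / t_cur) < tol):
            return x, t_cur
        t_prev = t_cur
    raise RuntimeError("no convergence within max_steps")
```

For a step map converging geometrically to $1$, e.g. `step_fn = lambda x, t: 1.0 + 0.5 * (t - 1.0)`, the loop stops after roughly 44 steps.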
To test our algorithm we computed the Stokes multiplier $\sigma_0(b)$ of the equation
\begin{equation}\label{eq:simpleschr}
\frac{d^2\psi(\lambda)}{d\lambda^2}= (4 \lambda^3 -b) \psi(\lambda) \; .
\end{equation}
According to the WKB analysis (see \cite{sibuya75}, \cite{piwkb}) the Stokes multiplier $\sigma_0(b)$ has the following asymptotics
\begin{equation}\label{eq:wkbprediction}
\sigma_0(b) \sim \left\lbrace
\begin{aligned}
-i e^{\frac{\sqrt{\frac{\pi}{3}} \Gamma(1/3)}{2^{2/3} \Gamma(11/6)} b^{\frac{5}{6}}} \hspace{2cm} , \; \mbox{ if } \, b>0 \\
-2 i e^{-\frac{\sqrt{\pi} \Gamma(1/3)}{ 2^{\frac{5}{3}} \Gamma(11/6)} (-b)^{\frac{5}{6}}}
\cos\left(\frac{\sqrt{\frac{\pi}{3}} \Gamma(1/3)}{2^{5/3} \Gamma(11/6)} (-b)^{\frac{5}{6}} \right) , \mbox{ if } b <0 \; .
\end{aligned}
\right.
\end{equation}
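For reference, the WKB prediction (\ref{eq:wkbprediction}) is straightforward to evaluate numerically; the sketch below uses only the standard library, and the helper name is ours.

```python
# Hedged helper evaluating the leading WKB asymptotics of sigma_0(b).
# C collects the recurring constant sqrt(pi/3) * Gamma(1/3) / Gamma(11/6).

import math

C = math.sqrt(math.pi / 3.0) * math.gamma(1.0 / 3.0) / math.gamma(11.0 / 6.0)

def sigma0_wkb(b):
    """Leading WKB asymptotics of the Stokes multiplier sigma_0(b)."""
    if b > 0:
        return -1j * math.exp(C / 2.0 ** (2.0 / 3.0) * b ** (5.0 / 6.0))
    s = (-b) ** (5.0 / 6.0)
    # sqrt(pi) = sqrt(3) * sqrt(pi/3), hence the sqrt(3) factor below
    damping = math.exp(-math.sqrt(3.0) * C / 2.0 ** (5.0 / 3.0) * s)
    return -2j * damping * math.cos(C / 2.0 ** (5.0 / 3.0) * s)
```

In both branches the prediction is purely imaginary, which gives a quick sanity check against a numerically computed $\sigma_0(b)$.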
Our computations (see Figures 1 and 2 below) show clearly that the WKB approximation is very accurate even for small values of the parameter $b$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{figure1.eps}
\end{center}
\caption{ \footnotesize{Thick dotted line: the rescaled Stokes multiplier $\frac{i}{2}e^{\frac{\sqrt{\pi} \Gamma(1/3)}{ 2^{\frac{5}{3}} \Gamma(11/6)} (-b)^{\frac{5}{6}}} \sigma_0(b)$ evaluated with our algorithm; thin continuous line: $\cos\left(\frac{\sqrt{\frac{\pi}{3}} \Gamma(1/3)}{2^{5/3} \Gamma(11/6)} (-b)^{\frac{5}{6}} \right)$, i.e. the WKB prediction for the rescaled Stokes multiplier.}}
\label{fig:negativeb}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10cm]{figure2.eps}
\end{center}
\caption{ \footnotesize{Thick dotted line: the rescaled Stokes multiplier $e^{-\frac{\sqrt{\frac{\pi}{3}} \Gamma(1/3)}{2^{2/3} \Gamma(11/6)} b^{\frac{5}{6}}} \sigma_0(b)$ evaluated with our algorithm; thin continuous line: 1, the WKB prediction for the rescaled Stokes multiplier.}}
\label{fig:positiveb}
\end{figure}
We also tested our results against the numerical solution (due to A. Moro and the author)
of the Deformed Thermodynamic Bethe Ansatz equations (Deformed TBA), which have been
recently introduced by the author \cite{dtba}, developing the seminal work of Dorey and Tateo \cite{doreytateo}.
The Deformed TBA equations are a set of nonlinear integral equations which describe the exact correction to the WKB asymptotics.
The numerical solution of the Deformed TBA equations enables one to set a priori the absolute error in the evaluation of the Stokes multiplier $\sigma_0(b)$
rescaled with respect to the exponential factor shown in (\ref{eq:wkbprediction}).
Hence, in the range $-20 \leq b \leq 20$ we could verify that we had computed the rescaled $\sigma_0(b)$ with an absolute error less than $10^{-8}$.
\section{Appendix}
The reader expert in the theory of anharmonic oscillators may skip this Appendix; for her, it will be enough to know
that we denote by $\sigma_k(y,y',z)$ the $k$-th Stokes multiplier of equation (\ref{eq:perturbedschr}).
Here we review briefly the standard way, i.e. by means of Stokes multipliers,
of introducing the monodromy problem
for equation (\ref{eq:perturbedschr}). All the statements of this section are proved in
Appendix A of the author's paper \cite{piwkb} and in Sibuya's book \cite{sibuya75}.
\begin{Lem}\label{lem:wkb}
Fix $k \in \mathbb{Z}_5 = \left\lbrace -2, \dots , 2 \right\rbrace$ and define a cut in the $\bb{C}$ plane
connecting $\lambda=y$ with infinity such that its points eventually do not belong to
$S_{k-1} \cup \overline{S_k} \cup S_{k+1}$. Choose the branch of $\lambda^{\frac{1}{2}}$ by requiring
$$\lim_{\substack{\lambda \to \infty \\ \arg{\lambda}=
\frac{2 \pi k}{5}}} {\rm Re}\lambda^{\frac{5}{2}} = + \infty \, ,$$
while choosing arbitrarily one of the branches of $\lambda^{\frac{1}{4}}$.
Then there exists a unique solution $\psi_k(\lambda;y,y',z)$ of equation (\ref{eq:schr})
such that
\begin{equation}\label{eq:asym}
\lim_{\substack{ \lambda \to \infty \\ \av{\arg{\lambda} - \frac{2 \pi k}{5}} < \frac{3 \pi}{5} -\e}}
\frac{\psi_k(\lambda;y,y',z)}{\lambda^{-\frac{3}{4}} e^{-\frac{4}{5} \lambda^{\frac{5}{2}} +
\frac{z}{2}\lambda^{\frac{1}{2}}}} = 1 , \; \forall \e >0 \, .
\end{equation}
\end{Lem}
\begin{Def*}
We denote by $\psi_k$ the $k$-th subdominant solution, or the solution subdominant in the $k$-th sector.
\end{Def*}
From the asymptotics (\ref{eq:asym}), it follows that $\psi_k$ and $\psi_{k+1}$
are linearly independent. If one fixes the same branch of $\lambda^{\frac{1}{4}}$
in the asymptotics (\ref{eq:asym}) of $\psi_{k-1},\psi_{k},\psi_{k+1}$ then the following equations hold true
\begin{eqnarray}\nonumber
\frac{\psi_{k-1}(\lambda;y,y',z)}{\psi_k(\lambda;y,y',z)} &=& \frac{\psi_{k+1}(\lambda;y,y',z)}{\psi_k(\lambda;y,y',z)}+\sigma_k(y,y',z) \;, \\ \label{eq:multipliers} \\ \nonumber
- i \sigma_{k+3}(y,y',z) &=& 1+\sigma_k(y,y',z)\sigma_{k+1}(y,y',z)\, , \; \forall k \in \bb{Z}_5 \; .
\end{eqnarray}
\begin{Def*}
The entire functions $\sigma_k(y,y',z)$ are called Stokes multipliers.
The quintuplet of Stokes multipliers $\sigma_k(y,y',z), k \in \bb{Z}_5$ is called the monodromy data
of equation (\ref{eq:schr}).
\end{Def*}
TITLE: Thin lens equation calculation does not match experimental value, why?
QUESTION [0 upvotes]: For an optics experiment I had to design a $2$-lens system to image a grating; the diagram looks like this:
The focal length of both lenses is $15$cm; at the end of the system there is a camera connected to a computer where the image is displayed. The system should have magnification $M=1$. Following the thin lens equation, I placed the first lens $30$cm away from the grating, an aperture $30$cm from the first lens (this is the intermediate image plane), then the second lens $30$cm from the aperture, and finally the camera $30$cm to the right of the second lens. The strange thing is that it didn't work: I had to place the camera $22.3$cm from the second lens in order to get a sharp image. I don't understand why this happens: either the aperture is doing something I'm not accounting for, or the thin lens approximation is breaking down. Does anyone have a suggestion? I don't think it's the aperture, because its radius was large compared to the wavelength of the laser, and it was only used to select different areas on the grating that corresponded to different spacings.
REPLY [1 votes]: Take out the camera, and replace it with a piece of paper. You should see the sharp image as predicted by the thin lens equation.
If you use a camera, you are adding another lens, and the image forms behind the lens on the sensor. That is fine, but you have to do more calculating to figure out where to put the lens and sensor.
Perhaps you want to take a picture of the paper instead of using the camera as part of the system?
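To make the thin-lens bookkeeping of the intended setup explicit, here is a minimal sketch (names ours): each lens images the plane 30 cm in front of it to 30 cm behind it with magnification $-1$, so the relay as a whole has magnification 1, as designed.

```python
# Hedged sketch of the 4f relay described in the question:
# f = 15 cm, object 30 cm in front of each lens.

def image_distance(f, d_obj):
    """Thin-lens equation: 1/d_img = 1/f - 1/d_obj."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

f = 15.0                         # focal length, cm
d1 = image_distance(f, 30.0)     # first lens: intermediate image plane
d2 = image_distance(f, 30.0)     # second lens: final image plane
m = (-d1 / 30.0) * (-d2 / 30.0)  # product of the two stage magnifications
print(d1, d2, m)                 # each stage images at ~30 cm, total m ~ 1
```

The $22.3$ cm result is then consistent with the camera's own lens re-imaging the beam, exactly as the answer above suggests.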
TITLE: Euler characteristic of the generic fiber of a homogeneous polynomial
QUESTION [4 upvotes]: Let $f:\mathbb C^n\to \mathbb C$ be a homogeneous polynomial of degree $>1$.
Can it happen that the Euler characteristic of $f^{-1}(x)$ for $x\neq 0$ is equal to 1? The answer seems to be "no" but I would be glad to see either a proof or a reference (or a counterexample). Same question when $\mathbb C$ is replaced by a field of characteristic $p$ with the condition that $f$ is not a $p$-th power of another polynomial.
REPLY [3 votes]: I am just posting my comment above as an answer.
There is an action of $\mu_d$, the group of $d$-th roots of unity, on $\mathbb{A}^n$ by scalar multiplication; since $f$ is homogeneous of degree $d$, this action preserves each fiber of $f$. The unique point in $\mathbb{A}^n$ with nontrivial stabilizer subgroup is the origin $0$, and $f(0)$ equals $0$. Thus, for every fiber $f^{-1}(x)$ other than the fiber of the origin, there is a free action of $\mu_d$ on $f^{-1}(x)$. So the Euler characteristic of $f^{-1}(x)$ equals $d$ times the Euler characteristic of the free quotient $f^{-1}(x)/\mu_d$.
The quotient $f^{-1}(x)/\mu_d$ can have Euler characteristic $1$, e.g., if $f(x_1,\dots,x_n) = x_1^d$ or $x_1^d + x_2x_3^{d-1}$, etc.
\begin{document}
\title{Eigenfunction Expansions and \\Transformation Theory}
\author{Manuel Gadella$^1$ and Fernando G\'omez$^2$}
\maketitle
\begin{center}
\small
$^1$Dpto. de F\'{\i}sica Te\'orica. Universidad de Valladolid.\\
Facultad de Ciencias, Prado de la Magdalena, s.n.,\\
47005 Valladolid, Spain.\\
email: {\tt gadella@fta.uva.es}\\
$^2$Dpto. de An\'alisis Matem\'atico. Universidad de Valladolid.\\
Facultad de Ciencias, Prado de la Magdalena, s.n.,\\
47005 Valladolid, Spain.\\
email: {\tt fgcubill@am.uva.es}
\end{center}
\begin{abstract}
Generalized eigenfunctions may be regarded as vectors of a basis in a particular direct integral of Hilbert spaces or as elements of the antidual space $\Phi^\times$ in a convenient Gelfand triplet $\Phi\subseteq\H\subseteq\Phi^\times$.
This work presents a treatment of the transformation formulas relating different generalized bases of eigenfunctions, suitable for computational purposes, in both frameworks: direct integrals and Gelfand triplets.
The transformation formulas look as they usually do in the Physics literature, i.e. as limits of integral functionals, but with well-defined kernels. Several approaches are feasible; here the Vitali and martingale approaches are presented.
\end{abstract}
{\small
{\it Keywords:} Eigenfunction expansion, Spectral measure, Direct integral of Hilbert spaces, Gelfand triplet, Rigged Hilbert space, Dirac transformation theory, Vitali system, Martingale.
{\it Mathematics Subject Classification (2000):} {47A70, 47N50.}
}
\section{Introduction}
Eigenfunction expansions appear in the most varied domains, for example at the basis of the Dirac formulation of Quantum Mechanics \cite{D}, where each complete set of
commuting observables (csco) $A_1,A_2,\dots,A_n$ is supposed to have a generalized basis of kets
$|\l_1,\l_2,\dots,\l_n\>$ satisfying the
following properties:
\smallskip
1.- The kets $|\l_1,\l_2,\dots,\l_n\rangle$ are
generalized eigenvectors of the observables $A_1,A_2,\dots,A_n$,
i.e.,
$$
A_j\,|\l_1,\l_2,\dots,\l_n\>=\l_j\,|\l_1,\l_2,\dots,\l_n\>\,,\quad
(j=1,2,\dots,n)\,,
$$
where $\l_j$ is one of the possible outcomes of a measurement
of the observable $A_j$, $j=1,2,\dots,n$. Let us call $\Lambda_j$
the set of these outcomes.
\smallskip
2.- For each pure state $\varphi$, one has the following integral
decomposition:
$$
\varphi=\int_\Lambda
\<\l_1,\l_2,\dots,\l_n|\varphi\>\,|\l_1,\l_2,\dots,\l_n\>\,
d\l_1d\l_2\cdots d\l_n\,,
$$
where $\Lambda=\Lambda_1\times\Lambda_2\times\dots\times\Lambda_n$
is the Cartesian product of the $\Lambda_j$ and for (almost) all
$(\l_1,\l_2,\dots,\l_n)\in\Lambda$,
$\<\l_1,\l_2,\dots,\l_n|\varphi\>$ is a
complex number that gives the coordinates of $\varphi$ in the
generalized basis $|\l_1,\l_2,\dots,\l_n\>$.
\smallskip
Following von Neumann \cite{VN}, observables in Quantum Mechanics
are represented by selfadjoint operators in a Hilbert space $\H$.
Therefore, a csco $A_1,\dots,A_n$ is given by a set of $n$
selfadjoint operators, also called $A_i$, whose respective Hilbert space spectra are
the sets $\Lambda_i$. The
classical version of the spectral theorem asso\-cia\-tes to the family
of selfadjoint operators $A_i$ a family of
commuting Borel spectral measure spaces $(\Lambda_i,\B_i,\H,P_i)$, where $i=1,\ldots,n$; see Section \ref{app1}.
Consider $\Lambda$ as a metric space with the product
topology and the co\-rres\-ponding Borel
$\sigma$-algebra $\B_\Lambda$. Since the
commuting spectral measures are Borel, there exists a unique
spectral measure space of the form $(\Lambda,\B_\l,\H,P)$ such
that
$$
P(\Lambda_1\times\ldots\times
E_i\times\ldots\times\Lambda_n)=P_i(E_i),\quad (E_i\in\B_i,\,\,i=1,\ldots,n).
$$
The spectral measure $P$ is called the {\it product} of $P_1,\ldots,P_n$ \cite{BS}.
Thus, giving a csco $\{A_1,A_2,\dots,A_n\}$ is equivalent
to giving the corresponding Borel spectral measure space $(\Lambda,\B_\Lambda,\H,P)$.
It is well known that only for the pure discrete part of the spectrum do there exist eigenvectors belonging to $\H$.
For the continuous part, generalized eigenvectors or eigenfunctions have been considered as
components of an orthonormal measurable basis in a suitable direct integral decomposition
of the Hilbert space $\H$ \cite{VN49,MA} or as elements of the space $\Phi^\t$ of a conveniently chosen rigging
\begin{equation}\label{0}
\Phi\subseteq \H\subseteq\Phi^\t,
\end{equation}
where $\Phi$ is a dense subspace of $\H$ with its own topology $\tau_\Phi$
and $\Phi^\t$ is its topological (anti)dual space, i.e. the space of
$\tau_\Phi$-continuous antilinear forms on $\Phi$. A triplet of the form (\ref{0}) is usually called
a {\it Gelfand triplet} or {\it rigged Hilbert space} (RHS).
Gelfand \cite{GK,GS,GV} was the first to give a precise meaning to generalized eigenvectors, which was later elaborated, among others, by Berezanskii \cite{BERE65,BSU}, Maurin \cite{MAUR68}, Foia\c s \cite{F}, Roberts \cite{R}, Melsheimer \cite{ME} and Antoine \cite{A,AI}. For a detailed exposition see previous works by the authors \cite{GG,GG1}.
Such mathematics has proved useful in a great variety of areas of Physics. In particular, RHS have been used in the study of Scattering Theory \cite{KK70}, resonances \cite{B,BG,AI,AGP,AP,BOIV,CBPR}, singular states in Statistical Mechanics \cite{VH,AS,CGIL}, spectral decompositions associated to chaotic maps \cite{AT,SATB}, time irreversibility \cite{PPT,AP} and the axiomatic theory of quantum fields
\cite{BLT}.
The aim of this paper is to discuss a particular
aspect of eigenfunction expansion theory: the transformation formulas relating different generalized bases.
Given two different csco
$A_1,A_2,\dots,A_n$ and $B_1,B_2,\dots,B_n$ with respective ge\-ne\-ra\-li\-zed bases $|\l_1,\l_2,\dots,\l_n\>$ and $|\xi_1,\xi_2,\dots,\xi_n\>$, Dirac \cite{D} introduced transformation formulas relating coordinates of a pure state $\varphi$ in both bases as follows:
\begin{equation}\label{1g}
\begin{array}{l}
\<\l_1,\l_2,\dots,\l_n|\varphi\>=
\\
\ds =\int \<\l_1,\l_2,\dots,\l_n|\xi_1,\xi_2,\dots,\xi_n\>\,
\<\xi_1,\xi_2,\dots,\xi_n|\varphi\>\,d\xi_1 d\xi_2\cdots d\xi_n\,,
\end{array}
\end{equation}
\begin{equation}\label{2g}
\begin{array}{l}
\<\xi_1,\xi_2,\dots,\xi_n|\varphi\>=
\\
\ds=\int \<\xi_1,\xi_2,\dots,\xi_n|\l_1,\l_2,\dots,\l_n\>\,
\<\l_1,\l_2,\dots,\l_n|\varphi\>\, d\l_1 d\l_2\cdots d\l_n \,.
\end{array}
\end{equation}
These linear formulas are straightforward ge\-ne\-ra\-li\-za\-tions of the familiar ones on finite-dimensional Hilbert spaces. In infinite-dimensional Hilbert spaces, however, the situation is not so simple, because the integral kernels
$$
\<\l_1,\l_2,\dots,\l_n|\xi_1,\xi_2,\dots,\xi_n\>\quad \text{ and }\quad
\<\xi_1,\xi_2,\dots,\xi_n|\l_1,\l_2,\dots,\l_n\>
$$
have in general no mathematical meaning.
A preliminary discussion of the
problem is given in the early literature mentioned above, in particular in the second part of Melsheimer \cite{ME}.
This work intends to give transformation theory a new mathematical treatment, one suitable for computational purposes, which can be described as follows:
For two spectral measure spaces on the same separable Hilbert space,
$(\Lambda,{\A},\H,P)$ and $(\Xi,{\B},\H,Q)$,
transformation formulas between the respective eigenfunction expansions are obtained as limits of integrals like those of Equations (\ref{1g}) and (\ref{2g}) but with well defined kernels.
These approximate integral functionals are provided by the Vitali approach to Radon-Nikodym derivatives (see Appendix), which, in particular, is sufficient to deal with the absolutely continuous part of the spectral decomposition of a csco.
This is done in both direct integral and rigged Hilbert space frameworks, where Vitali approach leads to the functionals $\tilde\Gamma^\t_{nl}$ and $\Gamma^\t_{nl}$ given in Equations (\ref{fun.ant.Gammat.deg.nc}) and (\ref{fun.ant.Gammaxx}), respectively.
In both frameworks pointwise or weak convergence of approximate functionals
is assured (Theorems \ref{p18} and \ref{p16}).
Convergence with respect to finer topologies is only possible in rigged Hilbert spaces:
uniform convergence on precompact sets is proved for barrelled tvs $\Phi$ (Corollary \ref{p5}), whereas convergence with respect to strong topology $\beta(\Phi^\t,\Phi)$ is verified on barrelled and
semi-reflexive tvs $\Phi$, in particular when $\Phi$ is a Montel
space, a Montel-Fr\'echet space or a tvs with nuclear strong dual
(Corollary \ref{p9}).
The paper is organized as follows: Section \ref{sc2} collects some notions and results
used throughout the work concerning spectral measure spaces, direct integrals of
Hilbert spaces, locally convex equipments and eigenfunction expansions.
Section \ref{sctf} contains the Vitali approach to transformation theory described above. Some comments about other approaches, using martingale theory or generalized Cauchy-Stieltjes and Poisson integrals, are included in Section \ref{sect.32}. Finally, an Appendix at the end briefly reviews Vitali systems.
\section{Preliminaries. Eigenfunction Expansions.}\label{sc2}
In this Section we introduce the terminology and preliminary results
related to the mathematical structures used throughout this paper: spectral
measure spaces, direct integrals of Hilbert spaces and locally convex equipments of
spectral measures. For the first two structures the terminology is mainly that of
Birman-Solomjak \cite{BS}. For equipments and eigenfunction expansions see previous works by the authors \cite{GG, GG1}.
\subsection{Spectral Measure Spaces}\label{app1}
Let $(\Lambda,{\A})$ be a measurable space, let $\H$ be a separable
Hilbert space with scalar product $(\cdot,\cdot)$ and norm $||\cdot||$ (we
consider separable Hilbert spaces only) and let ${\mathcal P}={\mathcal P}(\H)$ be the set
of orthogonal projections on $\H$. A {\it spectral measure} on $\H$ is a mapping
$P:{\A}\to {\mathcal P}$ satisfying the following conditions:
(1) {\it countable additivity}, i.e. if $\{E_n\}$ is a finite or countable
set of disjoint sets of ${\A}$, then $P(\cup_n E_n)=\sum_n P(E_n)$ in
strong sense;
(2) {\it completeness}, i.e. $P(\Lambda)=I$.
Then $(\Lambda,{\A},\H,P)$ is referred to as a {\it spectral measure space}.
If $\Lambda$ is a complete separable metric space and ${\A}$ is the
$\sigma$-algebra of all Borel subsets of $\Lambda$, then $P$ is called a
{\it Borel spectral measure}.
The classical version of the spectral theorem \cite{VN} associates to
each normal operator $A$ defined on $\H$ a Borel spectral measure space
$(\Lambda,{\A},\H,P)$, being $\Lambda=\sigma(A)$, the spectrum of
$A$, and $P(E)$ the orthogonal projection corresponding to $E\in\mathcal
A$.
Here we consider the more general situation introduced in \cite{BS}: measure spaces
$(\Lambda,{\A},\mu)$ such that $\mu$ is a $\sigma$-finite measure with a countable basis
(a sequence $\{E_n\}$ of measurable sets
in $\A$ such that for any $E\in\A$ and for all $\varepsilon>0$,
there exists an $E_k$, from the sequence $\{E_n\}$, such that $\mu[(E\backslash E_k)\cup
(E_k\backslash E)]<\varepsilon$).
By the {\it type} $[\mu]$ of a measure $\mu$ defined on $(\Lambda,\A)$
we understand the equivalence class of all measures $\nu$ on $(\Lambda,\A)$ such that $\mu$ and $\nu$ are mutually absolutely continuous, that is, they have the same class of null sets.
If the measure $\mu$ is absolutely continuous with
respect to the measure $\nu$, we write $[\nu]\succ [\mu]$.
Given a spectral measure space $(\Lambda,{\A},\H,P)$, every
pair $f,g\in\H$ define the complex measure:
$$
\mu_{f,g}(E):=(f,P(E)g),\quad (E\in \A).
$$
When $f=g$ we write $\mu_f:=\mu_{f,f}$ and therefore,
$\mu_f(E)=(f,P(E)f)$.
We say that a nonzero vector $g\in\H$ is of {\it maximal type} with respect to the
spectral measure $P$ if for each $f\in\H$, $[\mu_g]\succ [\mu_f]$.
In this case, $g$ is called a maximal vector. Such maximal vectors
always exist provided that $\H$ is separable.
The type $[\mu_g]$ of a maximal vector is called the {\it spectral
type} of $P$ and denoted by $[P]$.
For an element $g$ of $\H$ we denote by $\H_g$ the closed subspace
of $\H$ defined by
$$
\H_g={\rm adh\,}\{f\in\H: f=P(E)g\},
$$
where $E$ runs over $\A$ and ${\rm adh}$ denotes the closure.
A family of vectors $\{g_j\}_{j=1}^m$ in $\H$, where $m\in
\{1,2,\dots,\infty\}$,
is called a {\it generating system} of $\H$ with respect to $P$ if
$\H$ is the orthogonal sum of the spaces $\H_{g_j}$:
\begin{equation}\label{odh}
\H=\bigoplus_{j=1}^m\H_{g_j}.
\end{equation}
There exists a generating system $\{g_j\}_{j=1}^m$ such that
\begin{equation}\label{seq.ty}
[P]=[\mu_{g_1}] \succ [\mu_{g_2}] \succ\dots
\end{equation}
For each $k\in\{1,2,\ldots,m\}$ let $\Lambda(g_k)$ be the support of $\mu_{g_k}$.
The family $\{\Lambda_k\}_{k=1}^m$ of subsets of $\Lambda$ given by
$$
\begin{array}{rcl}
\Lambda_k &=&\Lambda (g_k)\backslash \Lambda (g_{k+1})\,, \hskip0.6cm
(k<m),
\\[1ex]
\Lambda_m &= &\bigcap_{k\in \{1,2,\ldots,m\}} \Lambda(g_k),
\end{array}
$$
is a partition of $\Lambda$ and then we can define the {\it multiplicity
function} $N_P$ of $P$ as
$$
N_P(\l)=k\,,\hskip0.6cm {\rm if}\hskip0.6cm \l\in\Lambda_k
$$
The sequence of types (\ref{seq.ty}) and the multiplicity function $N_P$ are the unitary invariants of $P$.
Now, let $S(\Lambda,P)$ be the set of complex-valued measurable functions on $\Lambda$
which are finite almost
everywhere with respect to the measures $\mu_g(E)=(g,P(E)g)$ for all
$g\in\H$ or, equivalently, with respect to $\mu_g$ for some (all) $g\in\H$ of maximal type (we say that
these functions are finite almost everywhere with respect to $P$ or
that are
$[P]$-a.e. finite). For each $\phi\in S(\Lambda,P)$, we can define the
operator
$$
J_\phi:= \int_\Lambda \phi\,dP
$$
with domain
${\mathcal D}_\phi=\{f\in\H : \int_\Lambda |\phi|^2\,d\mu_f<\infty\}$.
The main properties of the operators $J_\phi$ are listed in the
following \cite{BS}:
\begin{prop}\label{pA1}
If $f,g\in\H$ and $\phi\in S(\Lambda,P)$, then:
\smallskip
(i) $(f,J_\phi g)=\int_\Lambda \phi\,d\mu_{f,g}$, if $g\in {\mathcal D}_\phi$.
\smallskip
(ii) $(f,J_\phi f)=\int_\Lambda \phi\,d\mu_f$, if $f\in {\mathcal D}_\phi$.
\smallskip
(iii) $||J_\phi f||^2=\int_\Lambda |\phi|^2\,d\mu_f$, if $f\in {\mathcal
D}_\phi$.
\smallskip
(iv) $||J_\phi||= [P]\mbox{-}{\rm ess\, sup\,} |\phi|$, if $\phi\in
L^\infty(\Lambda,[P])$.
\smallskip
(v) $g\in{\mathcal D}_\phi$ if and only if $\phi\in L^2(\Lambda,\mu_g)$.
\smallskip
(vi) $\H_g$ reduces $J_\phi$, i.e.,
$J_\phi(\H_g\cap {\mathcal D}_\phi)\subset \H_g$ and
$J_\phi(\H_g^\bot\cap {\mathcal D}_\phi)\subset \H_g^\bot$.
\smallskip
(vii) The mapping
$V_g: L^2(\Lambda,\mu_g)\rightarrow \H_g :\phi \mapsto J_\phi g$
is unitary.
\smallskip
(viii)
$\H_g= \{ J_\phi g:\phi\in S(\Lambda,P),g\in{\mathcal D}_\phi\}$.
\end{prop}
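Property {\it (iii)} can be illustrated in the finite-dimensional case, where the spectral measure of a Hermitian matrix is supported on its eigenvalues; the following sketch (all names ours) checks $\|J_\phi f\|^2=\int_\Lambda |\phi|^2\,d\mu_f$ for a $2\times 2$ example.

```python
# Hedged finite-dimensional illustration of (iii): for a Hermitian matrix A,
# mu_f({lambda_k}) = (f, P_k f) with P_k the rank-one eigenprojections, and
# ||J_phi f||^2 = sum_k |phi(lambda_k)|^2 mu_f({lambda_k}).

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])            # eigenvalues 1 and 3
evals, evecs = np.linalg.eigh(A)
f = np.array([1.0, 2.0])
phi = lambda lam: lam ** 2 + 1.0                  # a sample bounded function

projections = [np.outer(v, v) for v in evecs.T]   # rank-one eigenprojections
J_phi = sum(phi(l) * P for l, P in zip(evals, projections))

lhs = np.linalg.norm(J_phi @ f) ** 2
rhs = sum(abs(phi(l)) ** 2 * (f @ P @ f) for l, P in zip(evals, projections))
print(lhs, rhs)                                   # agree up to rounding
```

Here $J_\phi=\sum_k \phi(\lambda_k)P_k$ is exactly the integral $\int_\Lambda \phi\,dP$ for the purely atomic spectral measure $P(E)=\sum_{\lambda_k\in E}P_k$.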
\subsection{Direct Integrals of Hilbert Spaces}\label{app2}
Let $(\Lambda,{\A})$ be a measurable space. We say that a family
$\{{\H}_\l\}_{\l\in \Lambda}$ of separable Hilbert spaces, with
scalar product $(\cdot,\cdot)_\l$ and norm $||\cdot||_\l$, together with
a countable set $\{u_j(\l)\}_{j=1}^\infty$ of elements of the product
$\prod_{\l\in \Lambda} {\H}_\l$ is a {\it measurable family of Hilbert
spaces on $(\Lambda,{\A})$} if for every $j,k\in{\mathbb N}$ the functions
$( u_j(\l),u_k(\l))_{\l}$ are measurable and for each $\l\in \Lambda$
the subspace
${\rm span}\{u_j(\l):1\leq j<\infty\}$ is dense in ${\H}_\l$.
In such case the {\it function of dimension}
$N(\l):={\rm dim}({\H}_\l)$ is measurable and we say that an element
$f$ of $\prod_{\l\in \Lambda} {\H}_\l$ is a {\it measurable vector
family} if the function
$( f(\l),u_j(\l))_{\l}$ is measurable for each $j$.
Given a measure $\mu$ on $(\Lambda,{\A})$, the space of measurable vector families $f$ on ${\A}$ such that
$\int_\Lambda ||f(\l)||^2_{\l}\,d\mu(\l)<\infty$ is called the {\it direct integral}
of the spaces ${\H}_\l$ with respect to $\mu$ and denoted by
$$
{\H}_{\mu,N}\quad \text{or}\quad \int_\Lambda^\oplus {\H}_\l\,d\mu(\l).
$$
If, as usual, we identify functions that coincide $\mu$-a.e., the space
${\H}_{\mu,N}$ is a Hilbert space with the inner product
$(f,g)_{\H_{\mu,N}}:=\int_\Lambda \big( f(\l),g(\l)\big)_{\l}\,d\mu(\l)$.
This structure does not depend on $\{u_j\}$.
The sets
$\Lambda_k=\{\l\in \Lambda:N(\l)=k\}$, where $k=1,2,\ldots,\infty$,
are measurable and there exist sequences $\{e_k(\l)\}_{k=1}^\infty$ of
measurable vector families with the following properties:
\smallskip
(a) for each $\l$, $\{e_k(\l)\}_1^{N(\l)}$ is an orthonormal basis for
${\H}_\l$, and $e_k(\l)=0$ for $k>N(\l)$;
\smallskip
(b) for each $k$ there is a measurable partition of $\Lambda$,
$\Lambda=\bigcup_{l=1}^\infty \Lambda_l^k$, such that on each
$\Lambda_l^k$, $e_k(\l)$ is a finite linear combination of the $u_j(\l)$
with coefficients depending measurably on $\l$.
\smallskip\noindent
Such a sequence $\{e_k\}$ is called an {\it orthonormal measurable basis}
of the direct integral ${\H}_{\mu,N}$ and its choice determines a
unitary isomorphism between ${\H}_{\mu,N}$ and
$L^2(\Lambda_\infty,\mu;l^2)\oplus\big(\oplus_1^\infty
L^2(\Lambda_m,\mu;{\mathbb C}^m)\big)$.
Any measurable set $ E \in{\A}$ generates the operator $\hat{P}(E)$
on ${\H}_{\mu,N}$, multiplication by the characteristic function
$\chi_ E $ of $ E $ ($\chi_E(\l)$ is one if
$\l\in E$ and zero otherwise),
$$
\hat{P}( E )g:=\chi_ E g,\quad (g\in{\H}_{\mu,N}).
$$
The family of projections $ \hat{P}( E )$, ($E
\in{\A}$), defines a spectral measure on ${\H}_{\mu,N}$.
The equalities $\mu(E)=0$ and $\hat{P}(E)=0$ are equivalent and,
thus, the {\it types} of $\mu$ and $\hat{P}$ coincide and also
$S(\Lambda,\mu)=S(\Lambda, \hat{P})$ and
$L^\infty(\Lambda,\mu)=L^\infty(\Lambda,\hat{P})$.
In general, every $\phi\in S(\Lambda,\mu)$ defines a
{\it multiplication or diagonal operator}
on ${\H}_{\mu,N}$, which we denote by $Q_\phi$, given by
$$
\begin{array}{l}
(Q_\phi f)(\l):=\phi(\l)f(\l),\quad \forall f\in D(Q_\phi),\\[1ex]
D(Q_\phi)=\{f\in {\H}_{\mu,N}:\int|\phi(\l)|^2||f(\l)||^2_{\l}\,
d\mu(\l)<\infty\}.
\end{array}
$$
Indeed, for each $\phi\in S(\Lambda,\mu)$, its integral $\hat J_\phi$
with respect to the spectral measure $\hat{P}$ and the multiplication
operator $Q_\phi$ coincide, that is,
$Q_\phi=\hat J_\phi=\int \phi\,d \hat{P}$,
and then we can translate every result of the theory of integration
with respect to spectral measures to the context of multiplication
operators on direct integrals.
Moreover, each measurable family of operators
$\{T(\l):\H_\l\to\H_\l\}_{\l\in \Lambda}$ defines an
operator on ${\H}_{\mu,N}$, say $T$, given by
$Tf:= \int_\Lambda^\oplus T(\l)f(\l)\,d\mu(\l)$, ($f=f(\l)\in D(T)$).
The operators of this form are called {\it
decomposable operators}.
In particular, when $T(\l)=\phi(\l)I_\l$, with $I_\l$ the identity operator on $\H_\l$, then
$T$ is just the multiplication operator $Q_\phi$.
A decomposable operator $T$ is self-adjoint
(unitary, normal, orthogonal projection) if and only if the operators
$T(\l)$ are self-adjoint (unitary, normal, orthogonal projection) on
$\H_\l$ for $\mu$-a.e. $\l\in\Lambda$.
Just as the spectral type $[P]$ and the multiplicity function $N_P$ are
the unitary invariants of a spectral measure $P$, the measure type $[\mu]$
and the function of dimension $N$ are the unitary invariants of a direct
integral $\H_{\mu,N}$, i.e., they determine the
corresponding structure uniquely up to unitary equivalence.
Now, given a spectral measure space $(\Lambda,\A,\H,P)$ and
a direct integral $\H_{\mu,N}=\int_\Lambda^\oplus \H_\l\,d\mu(\l)$
defined on $(\Lambda,\A)$,
a {\it structural isomorphism} between the two structures is a unitary
operator $V$ from $\H$ onto $\H_{\mu,N}$ verifying
\begin{equation}\label{19xx}
V\, J_\phi= Q_\phi\, V,\quad \phi\in S(\Lambda,P)=S(\Lambda,\mu).
\end{equation}
In particular, $V P ( E )= \hat{P}( E )V$, for each $ E \in{\A}$.
The conditions under which a spectral measure space and a direct
integral are structurally isomorphic and the explicit form of a
structural isomorphism are given in the following \cite[Th.7.4.1]{BS}:
\begin{thm}\label{pA2}
A spectral measure space $(\Lambda,\A,\H,P)$ and a direct
integral $\H_{\mu,N}=\int_\Lambda^\oplus \H_\l\,d\mu(\l)$ defined on
$(\Lambda,\A)$ are structurally isomorphic if and only if
$$
[P]=[\mu]\quad {\rm and}\quad N_P=N\,\,[\mu]\mbox{-}{\rm a.e.}
$$
In such case, a structural isomorphism $V:\H\to\H_{\mu,N}$ between them
is defined by
\begin{equation}\label{19}
V^{-1} h=\bigoplus_{j=1}^m \left(\int_{\Lambda}
\left(e_j(\l),\sqrt{\frac{d\mu}{d\mu_{g_1}}(\l)}\;h(\l)
\right)_\l \,dP(\l)\right)\,g_j \,,
\end{equation}
where $\{e_j(\l)\}_{j=1}^m$ is a measurable orthonormal basis in
$\H_{\mu,N}$ and $\{g_k\}_{k=1}^m$ is a generating system of $\H$ with
respect to $P$ such that:
\smallskip
i.) $[P]=[\mu_{g_1}]\succ [\mu_{g_2}]\succ\dots$;
\smallskip
ii.) if $1\le j\le k\le m$ ($m=[P]\mbox{-}{\rm ess\, sup\,} N_P$), then the
measure
$\mu_{g_j}$ coincides with $\mu_{g_k}$ on the support $\Lambda(g_k)$ of the latter,
which means that $\mu_{g_j}|_{\Lambda(g_k)}=\mu_{g_k}$.
\end{thm}
The functional version of von Neumann spectral theorem \cite{VN49}
is a direct consequence of the above facts.
Finally, recall that, given a generating system $\{g_j\}_{j=1}^m$ of $\H$ with respect to a spectral measure $P$, the space $\H$ decomposes into the orthogonal sum (\ref{odh}), so that each $h\in \H$ can be written as $h=\oplus_{j=1}^m h_j$, where $h_j\in \H_{g_j}$. This fact together with property {\it (vii)} of Proposition \ref{pA1} imply that for each $h\in\H$ there exists a unique family of functions $\tilde{h}_j\in L^2(\Lambda,\mu_{g_j})$ such that
\begin{equation}\label{erkk34}
h=\bigoplus_{j=1}^m h_j = \bigoplus_{j=1}^m J_{\tilde{h}_j}g_j=
\bigoplus_{j=1}^m \big[\int_{\Lambda} \tilde{h}_j\,dP\big]\,g_j.
\end{equation}
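The decomposition (\ref{erkk34}) can be illustrated in the simplest cyclic case $m=1$. The following sketch is a hypothetical discretized model of $\H=L^2([0,1],d\l)$ in which $P(E)$ acts as multiplication by $\chi_E$; the concrete choices of $g$ and $h$ are illustrative assumptions, not part of the original development.

```python
import numpy as np

# Hypothetical discretized model of H = L^2([0,1], dλ): vectors are samples
# on a uniform grid and P(E) acts as multiplication by the indicator of E.
n = 1000
lam = (np.arange(n) + 0.5) / n            # grid points in (0, 1)
w = 1.0 / n                               # quadrature weight

def inner(f, h):                          # (f, h) = ∫ f(λ)* h(λ) dλ
    return np.vdot(f, h) * w

g = np.exp(-lam)                          # generating vector: g(λ) ≠ 0 everywhere
h = np.sin(2 * np.pi * lam)               # an arbitrary vector

# Simple case m = 1 of (erkk34): h = J_{h~} g with h~ = h / g.
h_tilde = h / g
assert np.allclose(h_tilde * g, h)

# h~ is the Radon-Nikodym derivative dμ_{g,h}/dμ_g, where
# μ_{g,h}(E) = (g, P(E)h) and μ_g(E) = (g, P(E)g).
E = lam < 0.5                             # indicator of a measurable set
mu_gh_E = inner(g, E * h)                 # μ_{g,h}(E)
mu_density_E = np.sum(E * h_tilde * np.abs(g) ** 2) * w   # ∫_E h~ dμ_g
assert np.isclose(mu_gh_E, mu_density_E)
```

Here the generating vector is cyclic precisely because it vanishes nowhere, so every $h$ admits the quotient $\tilde h = h/g$.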
Theorem \ref{tA4} below collects some results proved in \cite{GG}. The first one describes the $\l j$-components of $h$ and $Vh$ in terms of certain Radon-Nikodym derivatives associated with the generating system. The second one shows that it is possible to obtain a spectral decomposition ``\`a la
Dirac'' by using direct integrals of Hilbert spaces.
\begin{thm}\label{tA4}
Under the conditions of Theorem \ref{pA2} we have:
\smallskip
(i) For each
$h\in\H$ and $j\in\{1,2,\ldots,m\}$, the following identities are satisfied $\mu$-a.e.:
\begin{equation}\label{28}
\begin{array}{l}
\ds
\sqrt{\frac{d\mu_{g_j}}{d\mu}(\l)}\,\tilde{h}_j(\l)= \big(e_j(\l),[Vh](\l)\big)_{\l} =
\\[3ex]
\hspace{0ex} \ds
= \sqrt{\frac{d\mu}{d\mu_{g_1}}(\l)}\,\frac{d\mu_{g_j,h}}{d\mu}(\l)
= \sqrt{\frac{d\mu_{g_1}}{d\mu}(\l)}\,\frac{d\mu_{g_j,h}}{d\mu_{g_1}}(\l)
= \sqrt{\frac{d\mu_{g_j}}{d\mu}(\l)}\,\frac{d\mu_{g_j,h}}{d\mu_{g_j}}(\l)\,.
\end{array}
\end{equation}
\smallskip
(ii) For each $f,h\in\H$ and $E\in\A$,
\begin{equation}
\big(f,P(E)h\big)_\H=\sum_{j=1}^m \int_E \big([Vf](\l), e_j(\l)\big)_\l\;
\big(e_j(\l), [Vh](\l)\big)_\l\,d\mu(\l) \label{20}
\end{equation}
\end{thm}
\subsection{Locally Convex Equipments of Spectral Measures}\label{app3}
By a {\it topological vector space} ({\bf tvs}) we mean a pair ($\Phi,\tau_\Phi$),
where $\Phi$ is a vector space over the complex field ${\mathbb C}$ and $\tau_\Phi$ a locally convex linear topology on $\Phi$. In what follows $\Phi^\t$ denotes the space of continuous {\it antilinear} mappings (functionals) from $\Phi$ into $\mathbb C$.
The space $\Phi^\t$ is also a complex vector space and we can endow it with its own topology \cite{RR}.
For the (anti)dual pair $(\Phi,\Phi^\t)$ we denote the action of $F^\t\in\Phi^\t$
on $\phi\in\Phi$ as $\<\phi|F^\t\>$. This bracket is linear to the right and antilinear to
the left, just like the scalar product of Hilbert spaces. As usual, we write $\<F^\t|\phi\>=\<\phi|F^\t\>^\ast$, where $\ast$ denotes the complex conjugate.
In order to approach the question of existence of generalized
eigenvectors for a normal operator defined on a Hilbert space
$\H$ and with continuous spectrum, one can propose to rig the
Hilbert space $\H$ with a pair
$(\Phi,\Phi^\t)$ such that
$\Phi$ is a dense subspace of $\H$ and $\Phi^\t$ contains a complete set of
generalized eigenvectors for the operator in question or, better, for the associated spectral measure.
To be precise:
\begin{defn}\rm
We say that a tvs $(\Phi,\tau_\Phi)$ {\it rigs or equips the
spectral measure space} $(\Lambda,{\A},\H,P)$ when the following conditions hold:
\smallskip
i.) There exists a one-to-one linear mapping $I:\Phi\longrightarrow \H$ with
range dense in $\H$. Identifying
each $\phi\in\Phi$ with its image $I(\phi)$, we can assume that
$\Phi\subset \H$ is a dense subspace of $\H$ and $I$ the canonical
injection from $\Phi$ into $\H$.
\smallskip
ii.) There exists a $\sigma$-finite measure $\mu$ on
$(\Lambda,\A)$, a set $\Lambda_0\subset \Lambda$ with zero $\mu$
measure and a family of vectors in $\Phi^\t$ of the form
\begin{equation}\label{36}
\Big\{{\l k}^\t\in\Phi^\t \,:\, \l\in \Lambda\backslash \Lambda_0,\,
k\in\{1,2,\ldots,m\}\Big\},
\end{equation}
where $m\in\{\infty,1,2,\ldots\}$, such that
\begin{equation}\label{37}
(\phi, P ( E )\varphi)_{\H}=\sum_{k=1}^m \int_ E \<\phi|{\l
k}^\t\>\, \<{\l k}^\t|\varphi\>\,d\mu(\l),\quad
(\phi,\varphi\in\Phi,\,\,E \in{\A}).
\end{equation}
In particular, if $E=\Lambda$, then $P(\Lambda)=I_\H$, the identity on $\H$, and
$$
(\phi,\varphi)_{\H}=\sum_{k=1}^m \int_\Lambda \<\phi|{\l
k}^\t\>\, \<{\l k}^\t|\varphi\>\,d\mu(\l),\quad
(\phi,\varphi\in\Phi).
$$
A family of the form (\ref{36}) satisfying (\ref{37})
is called {\it a complete system of generalized
eigenvectors} (also called {\it Dirac kets}) of the spectral measure space $(\Lambda,{\A},\H,P)$.
\end{defn}
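In finite dimensions the definition becomes transparent: for a Hermitian matrix with simple spectrum the kets are its eigenvectors, $\mu$ is the counting measure on the spectrum, and (\ref{37}) is the usual spectral resolution. A minimal numerical sketch (the matrix and vectors are arbitrary illustrative choices):

```python
import numpy as np

# Finite-dimensional toy model: for a Hermitian A on C^n with simple
# spectrum, P(E) sums the rank-one projections onto eigenvectors with
# eigenvalue in E, and the "Dirac kets" are the eigenvectors themselves.
rng = np.random.default_rng(0)
n = 6
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2                    # Hermitian matrix
eigvals, V = np.linalg.eigh(A)              # columns of V are the kets

phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

E = eigvals > 0                             # a measurable subset of the spectrum
P_E = V[:, E] @ V[:, E].conj().T            # spectral projection P(E)

lhs = np.vdot(phi, P_E @ psi)               # (φ, P(E)ψ)_H
# Completeness relation (37): Σ over kets in E of <φ|λ†><λ†|ψ>
rhs = sum(np.vdot(phi, V[:, k]) * np.vdot(V[:, k], psi)
          for k in range(n) if E[k])
assert np.isclose(lhs, rhs)
```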
Each direct integral $\H_{\mu,N}$ associated to
$(\Lambda,{\A},{\H}, P )$ as in Theorem \ref{pA2} along with
one of its measurable orthonormal bases $\{e_k(\l)\}_{k=1}^{N(\l)}$ or,
equivalently, each generating system $\{g_k\}_{k=1}^m$ in ${\H}$ with
respect to $P$ verifying conditions i.) and ii.) of Theorem
\ref{pA2}, provide a rigging $(\Phi,\tau_\Phi,\mu,\{\l k^\t\})$. This
rigging is characterized by the following properties \cite{GG}:
\begin{itemize}
\item[(i)]
The subspace $\Phi$ is dense in $\H$ and is given by
$$
\Phi=\Big\{\phi\in {\H}:\
\frac{d\mu_{\phi,g_k}}{d\mu_{g_k}}(\l) \mbox{ exists and is finite},\,\,\forall \l\in
\Lambda\backslash \Lambda_0,\,\,\forall k\in\{1,2,\ldots,N(\l)\}\Big\},
$$
where $\Lambda_0$ is a subset of $\Lambda$ with $\mu$ zero measure (or
equivalently, $P$ zero measure).
\item[(ii)]
The complete family of antilinear functionals on $\Phi$ fulfilling
(\ref{37}) is of the form
$$
\Big\{\l k^\t\,:\, \l\in\Lambda\backslash \Lambda_0,\,
k\in\{1,2,\ldots,N(\l)\}\Big\},
$$
where the action of $\l k^\t$ over each $\phi\in\Phi$ is given by:
\begin{equation}\label{39}
\< \phi|\l k^\t\> = \big([V\phi](\l),e_k(\l)\big)_{\l} =
\sqrt{\frac{d\mu_{g_k}}{d\mu}(\l)}\,
\frac{d\mu_{\phi,g_k}}{d\mu_{g_k}}(\l)\,.
\end{equation}
\item[(iii)]
The topological antidual space $\Phi^\t$ is the vector space spanned by the set $\{\l k^\t\}$. The topology $\tau_\Phi$ is the weak topology $\sigma(\Phi,\Phi^\t)$, i.e. the coarsest one compatible with the dual pair $(\Phi,\Phi^\t)$.
The topology $\tau_\Phi$ is generated by the family of seminorms
$$
\phi\mapsto |\<\phi|\l k^\t\>|, \quad \l\in \Lambda\backslash
\Lambda_0,\,\,k\in\{1,2,\ldots,N(\l)\}.
$$
\end{itemize}
This type of rigging is {\it minimal}
in the sense that no topology on $\Phi$ coarser than that given in (iii) --except for
the indeterminacy derived by the zero $\mu$ measure set
$\Lambda_0$-- can rig the spectral measure space $(\Lambda,{\A},{\H},
P )$.
In the opposite side we find the so-called {\it universal} equipments, those with Hilbert-Schmidt inductive and nuclear topologies which, due to the inductive and nuclear versions of the spectral theorem, rig every ``regular" spectral measure space, {\it c.f.} \cite{GG1}.
\section{Transformation Theory}\label{sctf}
Let us pass to deal with the main subject of this work: the transformation formulas between two given representations described by spectral measures $(\Lambda,{\A},\H,P)$ and
$(\Xi,{\B},\H,Q)$, here called $P$ and $Q$ for brevity. The
elements we are going to handle are the following:
\begin{itemize}
\item[({\bf A})]
The scalar measures derived of the spectral measures $P$ and $Q$,
$$
\mu_{f,g}^P(E):=(f,P(E)g)_\H
\quad {\rm and}\quad
\mu_{f,g}^Q(F):=(f,Q(F)g)_\H,
$$
where $f,g\in\H$, $E\in{\A}$ and $F\in{\B}$.
Recall that when $f=g$ we write $\mu_f^P:=\mu_{f,f}^P$ and
$\mu_f^Q:=\mu_{f,f}^Q$. (See Section \ref{app1}.)
\item[({\bf B})]
Generating systems $\{g_j\}_{j=1}^m$ and $\{g'_k\}_{k=1}^{m'}$
of $\H$ with respect to $P$ and $Q$, respectively, which satisfy the
following properties (see Section \ref{app1} and Theorem \ref{pA2}):
\begin{itemize}
\item[(a)]
$[ P ]=[\mu^P_{g_1}]\succ [\mu^P_{g_2}]\succ\cdots$ and $[ Q
]=[\mu^Q_{g'_1}]\succ [\mu^Q_{g'_2}]\succ\cdots$.
\item[(b)]
If $1\leq j\leq k\leq m$, then $\mu^P_{g_j}|_{\Lambda(g_k)}=\mu^P_{g_k}$ and, if $1\leq j\leq
k\leq m'$, then
$\mu^Q_{g'_j}|_{\Lambda(g'_k)}=\mu^Q_{g'_k}$.
\end{itemize}
\item[({\bf C})]
Direct integrals of Hilbert spaces (see Section \ref{app2})
$$
\H_{\mu,N}=\int_\Lambda^\oplus \H_\l\,d\mu(\l)\quad {\rm and} \quad
\H_{\mu',N'}=\int_\Xi^\oplus \H_\xi\,d\mu'(\xi)
$$
associated, respectively, to $P$ and $Q$ as in Theorem \ref{pA2}
through orthonormal measurable bases in both
direct integrals
$$
\{e_j(\l)\}_{j=1}^m\subset\H_{\mu,N}
\quad{\rm and }\quad
\{e'_k(\xi)\}_{k=1}^{m'}\subset\H_{\mu',N'}
$$
and corresponding structural isomorphisms
$$
{\V}:\H\to\H_{\mu,N}
\quad{\rm and }\quad
{\V}':\H\to\H_{\mu',N'}\,.
$$
We shall suppose without loss of generality that
$$
[\mu]=[P]=[\mu^P_{g_1}]
\quad{\rm and}\quad
[\mu']=[Q]=[\mu^Q_{g'_1}].
$$
\item[({\bf D})]
A locally convex topological vector space ({\bf tvs}) $[\Phi,\tau_\Phi]$ that rigs both
spectral measures and two complete systems of generalized eigenvectors
$$
\{\l j^\t\}_{(\l,j)\in\Lambda\t\{1,2,\ldots,m\}}\subset\Phi^\t \quad {\rm
and}\quad
\{\xi k^{\t}\}_{(\xi,k)\in\Xi\t\{1,2,\ldots,m'\}}\subset\Phi^\t
$$
for both spectral measures $P$ and $Q$, the decompositions being given with respect to the
scalar measures $\mu$ and $\mu'$, respectively.
(See Section \ref{app3}.)
\end{itemize}
Identifying generalized eigenvectors with the elements of the orthonormal measurable bases given in ({\bf C}), transformation formulas
(\ref{2g}) in direct integrals should be similar, for each $h\in\H$, to
\begin{equation}
\label{D.19.5444id}
\big(e'_k(\xi),[{\V}'h](\xi)\big)_{\xi}= \sum_{j=1}^m
\int_\Lambda
\< e'_k(\xi)|e_j(\l)\>\,
\big(e_j(\l),[{\V}h](\l)\big)_{\l}\,d\mu(\l)\,.
\end{equation}
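In a finite-dimensional toy model the analogue of (\ref{D.19.5444id}) is unproblematic: for two orthonormal bases of $\mathbb{C}^n$ (two ``representations'') the kernel $\< w_k|u_j\>$ is an honest matrix of overlaps, and the coordinates of any vector transform through it. A sketch, with arbitrary basis choices:

```python
import numpy as np

# Two orthonormal bases {u_j} and {w_k} of C^n; the coordinates of h
# transform via the overlap kernel:  (w_k, h) = Σ_j <w_k|u_j> (u_j, h).
rng = np.random.default_rng(1)
n = 5
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
W, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
h = rng.normal(size=n) + 1j * rng.normal(size=n)

coords_U = U.conj().T @ h                  # (u_j, h)
kernel = W.conj().T @ U                    # matrix of overlaps <w_k|u_j>
coords_W = kernel @ coords_U               # transformation formula
assert np.allclose(coords_W, W.conj().T @ h)
```

The difficulty addressed in the continuous case is precisely that no such overlap matrix exists when the two families of kets live in different fibers or outside $\Phi$.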
On the other hand, when we consider the locally convex equipment
$[\Phi,\tau_\Phi]$ given in ({\bf D}), transformation formulas
should be similar, for each $\psi\in\Phi$, to
\begin{equation}
\label{D.19.5444rhs}
\< \xi k^\t|\psi\>= \sum_{j=1}^m \int_\Lambda
\< \xi k^\t|\l j^\t\>\, \< \l j^\t|\psi\>\, d\mu(\l)\,.
\end{equation}
But the integral kernels of these formulas, written as
$$
\< e'_k(\xi)|e_j(\l)\>
\quad {\rm and}\quad
\< \xi k^\t|\l j^\t\>\,,
$$
do not, in general, make algebraic sense.
Indeed, $\< e'_k(\xi)|e_j(\l)\>$ combines elements belonging to
different spaces: $e'_k(\xi)\in\H_\xi$ and
$e_j(\l)\in\H_\l$. And, though $\xi k^\t$ and $\l j^\t$ are both in $\Phi^\t$, in general neither
$\xi k^\t$ nor $\l j^\t$ belong to $\Phi$ and therefore $\< \xi k^\t|\l
j^\t\>$ is meaningless in the dual pair $(\Phi,\Phi^\t)$.
We shall analyze equations (\ref{D.19.5444id}) and (\ref{D.19.5444rhs})
at the same time because the following relations are satisfied:
\begin{equation}\label{emm0P}
\< \phi|\l j^\t\>= \big([{\V}\phi](\l),e_j(\l)\big)_{\H_\l}=
\sqrt{\frac{d\mu^P_{g_j}}{d\mu}(\l)}\,\frac{d\mu^P_{\phi,g_j}}{d\mu^P_{g_j}}
(\l),
\quad (\phi\in\Phi),
\end{equation}
\begin{equation}\label{emm0Q}
\< \phi|\xi k^\t\>= \big([{\V'}\phi](\xi),e'_k(\xi)\big)_{\H_\xi}=
\sqrt{\frac{d\mu^Q_{g'_k}}{d\mu'}(\xi)}\,\frac{d\mu^Q_{\phi,g'_k}}{d\mu^Q_{g'_k}}
(\xi),
\quad (\phi\in\Phi).
\end{equation}
(See Equation (\ref{39}).)
Theorems \ref{p18} and \ref{p16} below provide a first approach to the problem. Both results rely on the Vitali approach to Radon-Nikodym derivatives (see the Appendix), so the following lemma is all we need to prove them.
\begin{lem}\label{l17}
Let $\H_{\mu,N}$ and $\H_{\mu',N'}$ be two direct integrals corresponding, respectively, to the spectral measure spaces $(\Lambda,{\A},\H,P)$ and $(\Xi,{\B},\H,Q)$
as in ({\bf C}). Then for all $F\in{\B}$ and each pair of elements
$f,h\in\H$ we have the following identities:
\begin{equation}\label{tle44.nc}
\begin{array}{rcl}
\ds \mu^Q_{f,h}(F)
& = & \ds
\sum_{j=1}^m \int_\Lambda
\big( [{\V}Q(F)f](\l),e_j(\l) \big)_{\l}\,
\big(e_j(\l),[{\V}h](\l)\big)_{\l}\,d\mu(\l) \\[2ex]
& = & \ds
\sum_{j=1}^m \int_\Lambda
\big( [{\V}f](\l),e_j(\l) \big)_{\l}\,
\big(e_j(\l),[{\V}Q(F)h](\l)\big)_{\l}\,d\mu(\l).
\end{array}
\end{equation}
\end{lem}
\begin{proof}
Let $f$ and $h$ be two elements of $\H$ whose
decompositions (\ref{erkk34}) are given by $f=\oplus_j f_j$ and
$h=\oplus_j h_j$, where
$f_j,h_j\in\H_{g_j}$. The elements of the family
$\{h_{j}\}_{j\in\{1,2,\ldots,m\}}$ are spectrally orthogonal with respect to
$P$ and $\sum_j||h_{j}||^2<\infty$. Thus we have
$$
\mu^P_{h}(E)=\mu^P_{\oplus_j h_{j}}( E )=\sum_j\mu^P_{h_{j}}( E ), \quad
(E\in{\A}),
$$
and the same is valid for $\{f_{j}\}_{j\in\{1,2,\ldots,m\}}$.
Therefore,
\begin{equation}\label{mcmp44}
\mu^P_{f,h}(E)=\sum_j\mu^P_{f,h_{j}}( E )=
\sum_j\mu^P_{f_j,h}( E )=\sum_j\mu^P_{f_j,h_{j}}( E ),
\quad (E\in{\A}).
\end{equation}
As in (\ref{erkk34}), for each $j\in\{1,2,\ldots,m\}$, denote by $\tilde{f}_{j}$
the function of $L^2(\Lambda,\mu^P_{g_j})$ such that
\begin{equation}\label{mcmp44x}
f_{j}=J_{\tilde{f}_{j}}g_j=\big[\int_\Lambda \tilde{f}_{j}\, dP\big]g_j.
\end{equation}
From (\ref{mcmp44}) and (\ref{mcmp44x}), since $\V$ is unitary, it follows that
\begin{equation}\label{ettdcdeg}
\begin{array}{rcl}
\mu^Q_{f,h}(F) & = & \ds \sum_{j=1}^m \big( f_j,Q(F)h \big)_\H
= \sum_{j=1}^m \big( J_{\tilde{f}_j}g_j,Q(F)h \big)_\H
\\[2ex]
& = & \ds \sum_{j=1}^m \big( g_j,J_{\tilde{f}_j^\ast}Q(F)h \big)_\H
= \sum_{j=1}^m \big( {\V}g_j,{\V}J_{\tilde{f}_j^\ast}Q(F)h
\big)_{\H_{\mu,N}}
\end{array}
\end{equation}
Now, in the light of (\ref{19}),
$\ds \V g_j=\sqrt{\frac{d\mu^P_{g_1}}{d\mu}(\l)}\,e_j(\l)$.
This fact together with (\ref{19xx}) imply
$$
\begin{array}{l}
\ds\sum_{j=1}^m \big( {\V}g_j,{\V}J_{\tilde{f}_j^\ast}Q(F)h
\big)_{\H_{\mu,N}} =
\\
\ds =\sum_{j=1}^m \int_\Lambda
\sqrt{\frac{d\mu^P_{g_1}}{d\mu}(\l)}\,
\tilde{f}_j^\ast(\l)\,
\big( e_j(\l), [\V Q(F)h](\l)\big)_{\l}\,d\mu(\l).
\end{array}
$$
Finally, from (\ref{28}),
$$
\tilde{f}_j^\ast(\l)=
\sqrt{\frac{d\mu}{d\mu^P_{g_1}}(\l)}\,\big([{\V}f_j](\l),e_j(\l)\big)_{\l}=
\sqrt{\frac{d\mu}{d\mu^P_{g_1}}(\l)}\,\big([{\V}f](\l),e_j(\l)\big)_{\l}
,\quad \mu\mbox{-}{\rm a.e.}
$$
Substituting in (\ref{ettdcdeg}) we obtain the
first identity of (\ref{tle44.nc}). The second one is obtained
interchanging the roles of $f$ and $h$ in (\ref{ettdcdeg}).
\end{proof}
\begin{thm}\label{p18}
Let $\H_{\mu,N}$ and $\H_{\mu',N'}$ be two direct integrals corresponding, respectively, to the spectral measure spaces $(\Lambda,{\A},\H,P)$ and $(\Xi,{\B},\H,Q)$
as in ({\bf C}). Suppose the measure $\mu'$ has a Vitali system and let
$\{F_n\}_{n=1}^\infty$ be a sequence of sets of ${\B}$ admitting a
contraction to a point $\xi\in\Xi$ and such that $\mu'(F_n)\neq0$ ($n\in\mathbb N$). Then, for all $h\in\H$ and
$k\in\{1,2,\ldots,m'\}$,
\begin{equation}\label{ettdc4idns.nc}
\begin{array}{l}
\ds \big([{\V}' h](\xi),e'_k(\xi)\big)_\xi =\\[1ex]
= \ds \sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\lim_{n\to\infty}
\sum_{j=1}^m \int_{\Lambda}
\big(e_j(\l),[{\V}g'_{k}](\l)\big)_{\l}\,
{\big( [{\V}Q(F_n)h](\l),e_j(\l) \big)_{\l}\over \mu'(F_n)}\,
d\mu(\l) \\[3ex]
= \ds
\sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\lim_{n\to\infty}
\sum_{j=1}^m \int_\Lambda
{\big(e_j(\l),[{\V}Q(F_n)g'_{k}](\l)\big)_{\l}\over \mu'(F_n)} \,
\big( [{\V}h](\l),e_j(\l) \big)_{\l}\,
d\mu(\l).
\end{array}
\end{equation}
\end{thm}
\begin{proof}
By Equation (\ref{39}), save for a set $\Xi_0\subset\Xi$ of zero
$\mu'$-measure, we have
\begin{equation}\label{drnsvit4.0}
\big([{\V}' h](\xi),e'_k(\xi)\big)_\xi=
\sqrt{\frac{d\mu^Q_{g'_k}}{d\mu'}(\xi)}\,
\frac{d\mu^Q_{h,g'_k}}{d\mu^Q_{g'_k}}(\xi)=
\sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\frac{d\mu^Q_{h,g'_k}}{d\mu'}(\xi).
\end{equation}
By hypothesis the measure $\mu'$ has a Vitali system.
Since for each pair $f,h\in\H$ the measure $\mu^Q_{f,h}$ is absolutely
continuous with respect to $\mu'$, every Vitali system for $\mu'$ is
also a Vitali system for
$\mu^Q_{f,h}$. Thus, the Vitali-Lebesgue theorem (Theorem \ref{tA7} of Appendix) implies that
for
$\mu'$-a.a. $\xi\in\Xi$,
if $F_1,F_2,\ldots$ is a sequence of measurable sets of ${\B}$ that
admits a contraction to $\xi$, then
\begin{equation}\label{drnsvit4}
\frac{d\mu^Q_{h,g'_k}}{d\mu'}(\xi)=\lim_{n\to\infty}
\frac{\mu^Q_{h,g'_k}(F_n)}{\mu'(F_n)}.
\end{equation}
Substituting $\mu^Q_{h,g'_k}(F_n)$ in (\ref{drnsvit4}) by the corresponding right hand sides of (\ref{tle44.nc}) and putting them into (\ref{drnsvit4.0}) we get formulae (\ref{ettdc4idns.nc}).
\end{proof}
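The mechanism behind Theorem \ref{p18} is Equation (\ref{drnsvit4}): the Radon-Nikodym derivative is recovered as a limit of ratios of measures over sets contracting to a point. A numerical sketch on $\Xi=[0,1]$, with $\mu'$ Lebesgue measure and $d\nu=\rho\,d\mu'$ for a hypothetical density $\rho$:

```python
import numpy as np

# Sketch of Eq. (drnsvit4): for intervals F_n = (ξ - r_n, ξ + r_n)
# contracting to ξ, the ratios ν(F_n)/μ'(F_n) converge to ρ(ξ) = dν/dμ'(ξ).
rho = lambda x: 1 + 0.5 * np.cos(2 * np.pi * x)   # hypothetical density

def nu(a, b, steps=10000):                # ν((a,b)) = ∫_a^b ρ(x) dx, midpoint rule
    x = a + (b - a) * (np.arange(steps) + 0.5) / steps
    return (b - a) * rho(x).mean()

xi = 0.3
ratios = [nu(xi - 10.0**-k, xi + 10.0**-k) / (2 * 10.0**-k) for k in (1, 2, 3, 4)]
# the ratios approach ρ(0.3) = 1 + 0.5 cos(0.6π) ≈ 0.8455
assert abs(ratios[-1] - rho(xi)) < 1e-4
```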
Appealing to equalities (\ref{emm0P}), (\ref{emm0Q}) and
(\ref{28}), Theorem \ref{p18} can be translated directly into the framework
of locally convex equipments. The only additional fact one needs is that the functionals involved belong to $\Phi^\t$. This happens, for example, when $\tau_\Phi$ is finer than the subspace topology induced by $\H$ on $\Phi$, so that $\Phi\subseteq\H\subseteq\Phi^\t$.
\begin{thm}\label{p16}
Consider a locally convex tvs
$[\Phi,\tau_\Phi]$ rigging both spectral measure spaces $(\Lambda,{\A},\H,P)$ and $(\Xi,{\B},\H,Q)$ as in ({\bf D}) such that $\tau_\Phi$ is finer than the subspace topology induced by $\H$ on $\Phi$. Assume the measure $\mu'$ has a Vitali system and let
$\{F_n\}_{n=1}^\infty$ be a sequence of sets in ${\B}$ admitting a
contraction to the point $\xi\in\Xi$ and such that $\mu'(F_n)\neq0$ ($n\in\mathbb N$). Then, for all $\phi\in\Phi$ and $k\in\{1,2,\ldots,m'\}$,
\begin{equation}\label{ettdc4evtns}
\ds \<\phi | \xi k^\t \> =
\sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\lim_{n\to\infty}
\sum_{j=1}^m \int_\Lambda
\sqrt{\frac{d\mu^P_{g_j}}{d\mu}(\l)}\,{[\widetilde{Q(F_n)g'_{k}}]_j(\l)\over \mu'(F_n)}
\,\< \phi | \l j^\t\>\,d\mu(\l).
\end{equation}
In particular, when $g'_{k}$ belongs to $\Phi$ and $Q(F_n)\Phi\subseteq\Phi$ for all $n\in\mathbb N$,
\begin{equation}\label{ettdc4evtns.1}
\begin{array}{rcl}
\ds \<\phi | \xi k^\t \>
& = & \ds \sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\lim_{n\to\infty} \sum_{j=1}^m \int_{\Lambda}
\< \l j^\t | g'_{k}\> \,\frac{\< Q(F_n)\phi | \l j^\t\>}{\mu'(F_n)}\,d\mu(\l) \\[3ex]
& = & \ds
\sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,
\lim_{n\to\infty} \sum_{j=1}^m \int_\Lambda
\frac{\< \l j^\t | Q(F_n)g'_{k}\>}{\mu'(F_n)}\,\< \phi | \l j^\t\>\,d\mu(\l).
\end{array}
\end{equation}
\end{thm}
\begin{rem}\rm
Under some additional assumptions the formulae simplify notably. For example,
let us take $\mu=\mu^P_{g_1}$ and $\mu'=\mu^Q_{g'_1}$, so that the normalization factors disappear, and suppose that the spectral measures $P$ and $Q$ commute.
\begin{itemize}
\item
If, moreover, $P$ is simple, the generating system with respect to $P$ has only one element $g$ and the set of operators commuting with every projection $P(E)$,
($E\in{\A}$), is just the algebra of multiplication operators with
respect to $P$. In particular, for each projection
$Q(F)$, ($F\in{\B}$), there exists a function $\gamma_{F}\in
L^\infty(\Lambda,\mu^P_g)$ such that $Q(F)=J_{\gamma_{F}}$.
Moreover, since $Q(F)$ is an orthogonal projection, $\gamma_{F}$ must be the
characteristic function of a certain set $E_{F}\in{\A}$, i.e.
$\gamma_{F}=\chi_{E_{F}}$. Therefore, in this case (\ref{ettdc4evtns}) takes the following form:
\begin{equation}\label{ettdc4ft}
\< \phi|\xi k^{\t}\>
= \lim_{n\to\infty}
\int_{E_{F_n}} \frac{\tilde{g}'_k(\l)}{\mu'(F_n)}\,\<\phi|\l^\t\>\,d\mu(\l).
\end{equation}
In particular, if $g'_k\in\Phi$,
\begin{equation}\label{ettdc4fft}
\< \phi|\xi k^{\t}\> =
\lim_{n\to\infty}
\int_{E_{F_n}} \frac{\<\l^\t|g'_k\>}{\mu'(F_n)}\,\<\phi|\l^\t\>\,d\mu(\l).
\end{equation}
Equation (\ref{ettdc4idns.nc}) simplifies in a similar way:
\begin{equation}\label{ettdc4ft.dih}
\big([{\V}' h](\xi),e'_k(\xi)\big)_\xi
= \lim_{n\to\infty}
\int_{E_{F_n}} \frac{\tilde{g}'_k(\l)}{\mu'(F_n)}\,\big( [{\V}h](\l),e(\l) \big)_{\l}\,d\mu(\l),
\end{equation}
where $e(\l)$ is the unique element of the orthonormal measurable basis in $\H_{\mu,N}$.
\item
For an arbitrary spectral measure $P$, not necessarily simple, the
operators $Q(F)$ are, in general, not multiplication operators but decomposable operators of the form
$$
Q(F)=\int_\Lambda^\oplus [Q(F)](\l)\,d\mu(\l),
$$
where $[Q(F)](\l)$ is an orthogonal projection in $\H_\l$ for $\mu$-a.e.
$\l\in\Lambda$. In this case (\ref{ettdc4idns.nc}) takes the form:
\begin{equation}\label{ettdc4idns}
\begin{array}{l}
\ds \big([{\V}' h](\xi),e'_k(\xi)\big)_\xi =\\[1ex]
= \ds \lim_{n\to\infty} \sum_{j=1}^m \int_{\Lambda}
\big(e_j(\l),[{\V}g'_{k}](\l)\big)_{\l}\,
\frac{\big( [Q(F_n)](\l)[{\V}h](\l),e_j(\l) \big)_{\l}}{\mu'(F_n)}\,d\mu(\l)
\\[2ex]
\ds = \lim_{n\to\infty} \sum_{j=1}^m \int_\Lambda
\frac{\big(e_j(\l),[Q(F_n)](\l)[{\V}g'_{k}](\l)\big)_{\l}}{\mu'(F_n)}\,
\big( [{\V}h](\l),e_j(\l) \big)_{\l}\,d\mu(\l).
\end{array}
\end{equation}
\end{itemize}
\end{rem}
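The simple-$P$ case of the preceding remark can be sketched numerically. Take $\H=L^2([0,1],d\l)$, $P$ the spectral measure of multiplication by $\l$, $Q$ that of multiplication by $q(\l)=\l^2$, and $g=g'=1$; then $E_F=q^{-1}(F)$, $\mu'(F)=\mu(E_F)$, and the limit in (\ref{ettdc4ft}) recovers $\<\phi|\xi^\t\>=\phi(\sqrt{\xi})^\ast$ (the test vector below is real, so conjugation is dropped). All concrete choices are illustrative assumptions:

```python
import numpy as np

# Commuting multiplication operators on L^2([0,1], dλ): P = multiplication
# by λ (simple), Q = multiplication by q(λ) = λ², g = g' = 1, so g~' = 1.
# The limit in (ettdc4ft) is the average of φ over E_{F_n} = q^{-1}(F_n).
phi = lambda lam: np.sin(3 * lam)          # a real, hypothetical test vector

def bracket(xi, r, steps=20000):           # ∫_{E_F} φ dμ / μ'(F), F = (ξ-r, ξ+r)
    a, b = np.sqrt(xi - r), np.sqrt(xi + r)    # E_F = q^{-1}(F)
    lam = a + (b - a) * (np.arange(steps) + 0.5) / steps
    return phi(lam).mean()                 # μ'(F) = μ(E_F) = b - a cancels

xi = 0.49
vals = [bracket(xi, 10.0**-k) for k in (2, 3, 4)]
# the values approach φ(√ξ) = φ(0.7)
assert abs(vals[-1] - phi(np.sqrt(xi))) < 1e-3
```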
When, as in Theorem \ref{p18}, one considers direct integrals
of Hilbert spaces, $\H\simeq\H_{\mu,N}\simeq\H_{\mu',N'}$ and, these being Hilbert spaces, one can identify them as usual with their topological antidual spaces $\H^\t\simeq\H^\t_{\mu,N}\simeq\H^\t_{\mu',N'}$. The
isomorphism $\H\simeq\H^\t$ is linear,
whereas the isomorphism between $\H$ and its topological dual $\H'$ is antilinear (this is the reason why antiduals instead of duals are considered).
Fix $(\xi,k)\in\Xi\t\{1,2,\ldots,m'\}$ and for each pair $(n,l)$, with $n,l\in\mathbb N$ and $1\leq l<m$, define the functional $\tilde\Gamma^\t_{nl}:\H \to {\mathbb C}$ by
\begin{equation}\label{fun.ant.Gammat.deg.nc}
\tilde\Gamma^\t_{nl}(h):=
\sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,\sum_{j=1}^l
\int_{\Lambda}
{\big(e_j(\l),[{\V}Q(F_n)g'_{k}](\l)\big)_{\l}\over\mu'(F_n)}
\,\big( [{\V}h](\l),e_j(\l)\big)_\l\,d\mu(\l)\,,
\end{equation}
($h\in\H$). Strictly speaking, $\tilde\Gamma^\t_{nl}$ also depends on $(\xi,k)$, but we omit this dependence from the notation.
Obviously $\{\tilde\Gamma^\t_{nl}\}\subset\H^\t$, and (\ref{ettdc4idns.nc}) is equivalent to
the simple or pointwise convergence on $\H$ of the sequence $\{\tilde\Gamma^\t_{nl}\}$
to the functional $\big([{\V}'\cdot](\xi),e'_k(\xi)\big)_\xi$ or, in other words, to its
convergence in $\H^\t$ with respect to the weak topology $\sigma(\H^\t,\H)$ (the usual ``weak limit'', say $\wlim$, in Hilbert spaces), i.e.
$$
\big([{\V}'\cdot](\xi),e'_k(\xi)\big)_\xi=\wlim_{(n,l)\to(\infty,m)} \tilde\Gamma^\t_{nl}.
$$
In a similar way, under the hypothesis of Theorem \ref{p16}, the functionals
$\Gamma^\t_{nl}:\Phi \to {\mathbb C}$ defined by
\begin{equation}\label{fun.ant.Gammaxx}
\Gamma^\t_{nl}(\phi):= \sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\, \sum_{j=1}^l
\int_\Lambda
\sqrt{\frac{d\mu_{g_k}}{d\mu}(\l)}\,{[\widetilde{Q(F_n)g'_{k}}](\l)\over
\mu'(F_n)}\, \< \phi|\l j^\t\>\,
d\mu(\l)\,,
\end{equation}
($\phi\in\Phi$), belong to $\Phi^\t$, for each $n\in\mathbb N$ and $1\leq l<m$, and (\ref{ettdc4evtns}) says that $\{\Gamma^\t_{nl}\}$ converges to $\xi k^\t$ on $\Phi^\t$ with respect to the weak topology $\sigma(\Phi^\t,\Phi)$, that is,
$$
\xi k^\t=\lim_{(n,l)\to(\infty,m)} \Gamma^\t_{nl}\quad
{\rm in\ \ } [\Phi^\t,\sigma(\Phi^\t,\Phi)]\,.
$$
Clearly the sequence $\{\tilde\Gamma^\t_{nl}\}$ cannot converge to $\big([{\V}'\cdot](\xi),e'_k(\xi)\big)_\xi$ in the strong sense on $\H^\t\simeq\H$. Indeed, consider the simplest case, described by
(\ref{ettdc4ft.dih}), where the index $l$ disappears and the sequence $\{\tilde\Gamma^\t_{n}\}$ may be identified with the sequence of elements of $\H_{\mu,N}$ given by
$$
\left\{{\chi_{E_{F_n}}(\l)\,\tilde{g}'_k(\l) \over \mu'(F_n)}\,e(\l)\right\}_{n\in\mathbb N},
$$
since $\ds \lim_{n\to\infty} \mu'(F_n)=\mu'(\{\xi\})=0$ and the characteristic functions $\chi_{E_{F_n}}$ take only the values $0$ and $1$, if $\{\tilde\Gamma^\t_{n}\}$ converged in the strong sense its limit would have to be the zero function, which contradicts
the fact that $e'_k(\xi)$ belongs to an orthonormal
measurable basis of $\H_{\mu',N'}$.
In what follows we establish conditions for the convergence of $\{\Gamma^\t_{nl}\}$ to $\xi k^\t$ with respect to topologies finer than the weak topology on $\Phi^\t$.
\subsection{$\bf \tau_{pc}(\Phi^\t,\Phi)$-convergence.}\label{ss.2.1.1}
Among the most important results on barrelled tvs we mention the
Banach-Steinhaus theorem \cite{RR}, which involves the topology
$\tau_\sigma$ of simple or pointwise convergence and the topology
$\tau_{pc}$ of uniform convergence on precompact sets
of $\Phi$, both defined on the space ${\mathcal L}^\t(\Phi,\Psi)$ of all
$(\tau_\Phi,\tau_\Psi)$-continuous
(anti)linear mappings from $\Phi$ into $\Psi$, where $[\Psi,\tau_\Psi]$
is any other locally convex tvs.
\begin{thm}\label{t3}
{\bf [Banach-Steinhaus]}
Let $\Phi$ and $\Psi$ be
locally convex tvs, $\Phi$ in addition barrelled. If $(T_\alpha)$ is a net
in
${\mathcal L}^\t(\Phi,\Psi)$ which is
$\tau_\sigma$-bounded\footnote{
If $\Phi$ is a barrelled
tvs then the $\sigma(\Phi^\t,\Phi)$-bounded sets and the
$\beta(\Phi^\t,\Phi)$-bounded sets coincide.
}
and which converges pointwise to some $T\in\Psi^\Phi$, then
$T\in{\mathcal L}^\t(\Phi,\Psi)$ and $(T_\alpha)$ converges to $T$ with respect to
the topology $\tau_{pc}$.
\end{thm}
In the study of the transformation theory on locally convex equipments
the relevant part of the Banach-Steinhaus theorem is the
$\tau_{pc}$-convergence of the net, because the other part of the result is
implicit in the construction of the equipment.
\begin{cor}\label{p5}
Let $[\Phi,\tau_\Phi]$ be a locally convex tvs
rigging both spectral measures as in ({\bf D}) and
such that $[\Phi,\tau_\Phi]$ is a barrelled space whose topology is finer
than that induced by $\H$ on
$\Phi$. Then, under the conditions of Theorem \ref{p16}, for all
$k\in\{1,2,\ldots,m'\}$ and for $\mu'$-a.a. $\xi\in\Xi$, we have
\begin{equation}\label{l.tpc.cd}
\xi k^{\t}
= \lim_{(n,l)\to(\infty,m)} \Gamma^\t_{nl}
\quad
{\rm in\ \ } [\Phi^\t,\tau_{pc}(\Phi^\t,\Phi)]\,.
\end{equation}
\end{cor}
\begin{proof}
Equation (\ref{ettdc4evtns}) ensures that the sequence
$\{\Gamma^\t_{nl}\}\subset \Phi^\t$ is $\sigma(\Phi^\t,\Phi)$-bounded and
converges pointwise to $\xi k^\t$.
By the Banach-Steinhaus theorem, $\{\Gamma^\t_{nl}\}$ converges
to $\xi k^\t$ in $[\Phi^\t,\tau_{pc}(\Phi^\t,\Phi)]$.
\end{proof}
\subsection{$\bf \beta(\Phi^\t,\Phi)$-convergence.}\label{ss.2.1.3}
Now we study under what conditions the
$\tau_{pc}$-convergence of the sequence $\{\Gamma^\t_{nl}\}$ can be replaced by convergence with respect to the
strong topology $\beta(\Phi^\t,\Phi)$. We focus attention on properties of barrelled
and semi-reflexive tvs.
It is well known that a locally convex tvs $[\Phi,\tau_\Phi]$ is
semi-reflexive if and only if the Mackey topology $\mu(\Phi^\t,\Phi)$
and the strong topology $\beta(\Phi^\t,\Phi)$ coincide in $\Phi^\t$
\cite{RR}. Thus, for a semi-reflexive tvs $\Phi$ we have
\begin{equation}\label{ig.top.etsr}
\mu(\Phi^\t,\Phi)=\tau_{pc}(\Phi^\t,\Phi)=\beta(\Phi^\t,\Phi).
\end{equation}
The following result is immediate from Corollary \ref{p5}
and identities (\ref{ig.top.etsr}):
\begin{cor}\label{p9}
Let $[\Phi,\tau_\Phi]$ be a barrelled and
semi-reflexive tvs\footnote{A barrelled and semi-reflexive locally convex tvs $\Phi$ admits the
following characterizations \cite{RR}:
(a) $\Phi$ is reflexive;
(b) $\Phi$ is semi-reflexive and quasi-barrelled;
(c) $\Phi$ is quasi-barrelled and weakly quasi-complete.
}
that rigs both spectral measures as in ({\bf
D}) with a topology finer than that induced by $\H$ on
$\Phi$. Then, under the conditions of Theorem \ref{p16}, for all $k\in\{1,2,\ldots,m'\}$ and for $\mu'$-a.a. $\xi\in\Xi$, we have
$$
\xi k^{\t}
= \lim_{(n,l)\to(\infty,m)} \Gamma^\t_{nl}
\quad
{\rm in\ \ } [\Phi^\t,\beta(\Phi^\t,\Phi)]\,.
$$
\end{cor}
In particular, the conclusions of Corollary \ref{p9} are satisfied when $[\Phi,\tau_\Phi]$ is a Montel space, a Fr\'echet-Montel space\footnote{Since a Fr\'echet space $\Phi$ is
a Montel space if and only if it is separable and every
$\sigma(\Phi',\Phi)$-convergent sequence in $\Phi'$ is
$\beta(\Phi',\Phi)$-convergent, and formula (\ref{ettdc4ft}) implies
the weak convergence of the sequence of functionals $\Gamma_n$,
for Fr\'echet-Montel spaces we can deduce directly the conclusions of Corollary \ref{p9} from Theorem \ref{p16}. }
or a locally convex tvs such that the strong antidual space $\Phi_\beta^\t$
is nuclear\footnote{
If $\Phi$ is a locally convex tvs such that the strong antidual space $\Phi_\beta^\t$
is nuclear, then $\Phi$ is
semi-Montel and quasi-barrelled, that is, $\Phi$ is a Montel space
\cite{PI72}.
}.
\section{Final Remarks}\label{sect.32}
Transformation theory has been developed in Section \ref{sctf} on the basis of the Vitali approach to Radon-Nikodym derivatives, in terms of which the generalized eigenfunctions are given. Any other approach to the concept of Radon-Nikodym derivative, such as martingale theory or generalized Cauchy-Stieltjes and Poisson integrals, should lead to similar results.
For example, let $(\Xi, {\B},\mu')$ be a probability space and let $\nu$ be a finite measure on $(\Xi, {\B})$. Assume there is a sequence
$\Pi_n$, $n=1,2,\dots$, of partitions of $\Xi$ into a finite number of pairwise disjoint measurable subsets of positive $\mu'$ measure, such that each partition $\Pi_{n+1}$ is finer than
$\Pi_n$ and the $\sigma$-algebra generated by $\bigcup_{n=1}^{\infty}\Pi_n$ coincides with ${\B}$.
Put
$$
X_n(\l)=\sum_{F\in\Pi_n}{\nu(F)\over \mu'(F)}\chi_F(\l)\,,
$$
and denote by ${\B}_n$ the sub-$\sigma$-algebra of ${\B}$ generated by
$\Pi_n$. Then $\{(X_n,{\B}_n): n=1,2,\ldots\}$ is a martingale that converges $\mu'$-a.e. to an integrable limit $X$, which coincides with the usual Radon-Nikodym derivative
$d\nu/d\mu'$ provided $\nu$ is absolutely continuous with respect to $\mu'$ \cite{Billin}.
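The martingale construction above is directly computable. A sketch on $\Xi=[0,1)$ with the dyadic partitions $\Pi_n$, $\mu'$ Lebesgue measure and $d\nu=\rho\,d\mu'$ for a hypothetical density $\rho$; the martingale $X_n$ converges to $\rho=d\nu/d\mu'$:

```python
import numpy as np

# Dyadic martingale on Ξ = [0,1): Π_n splits [0,1) into 2^n intervals.
# X_n(λ) = ν(F)/μ'(F) on the dyadic interval F containing λ, and X_n → ρ.
rho = lambda x: 3 * x**2                   # hypothetical density (ν([0,1)) = 1)

def X_n(points, n, steps=4000):
    out = np.empty_like(points)
    for i, lam in enumerate(points):
        k = int(lam * 2**n)                # index of the dyadic interval F ∋ λ
        a, b = k / 2**n, (k + 1) / 2**n
        x = a + (b - a) * (np.arange(steps) + 0.5) / steps
        out[i] = rho(x).mean()             # ν(F)/μ'(F): average of ρ over F
    return out

pts = np.array([0.1, 0.5, 0.9])
assert np.allclose(X_n(pts, 12), rho(pts), atol=1e-3)
```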
Clearly the functionals $^M\tilde\Gamma^\t_{nl}$ and $^M\Gamma^\t_{nl}$ given by
$$
\begin{array}{rccl}
^M\tilde\Gamma^\t_{nl}: & \H & \longrightarrow & {\mathbb C}
\\[2ex]
& h & \mapsto &
\ds \sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\,\sum_{j=1}^l
\sum_{F\in\Pi_n}{\chi_F(\xi)}\t
\\[1ex]
&&&\ds\t\int_{\Lambda}
{\big(e_j(\l),[{\V}Q(F)g'_{k}](\l)\big)_{\l}\over\mu'(F)}
\,\big( [{\V}h](\l),e_j(\l)\big)_\l\,d\mu(\l)
\end{array}
$$
and
$$
\begin{array}{rccl}
^M\Gamma^\t_{nl}: & \Phi & \longrightarrow & {\mathbb C} \\[2ex]
& \phi & \mapsto &
\ds \sqrt{\frac{d\mu'}{d\mu^Q_{g'_k}}(\xi)}\, \sum_{j=1}^l
\sum_{F\in\Pi_n}{\chi_F(\xi)}\t
\\[1ex]
&&&\ds\t\int_\Lambda
\sqrt{\frac{d\mu^P_{g_j}}{d\mu}(\l)}\,{[\widetilde{Q(F)g'_{k}}]_j(\l)\over
\mu'(F)}\, \< \phi|\l j^\t\>\,
d\mu(\l)
\end{array}
$$
have the same properties as the functionals $\tilde\Gamma^\t_{nl}$ and $\Gamma^\t_{nl}$ defined in Equations (\ref{fun.ant.Gammat.deg.nc}) and (\ref{fun.ant.Gammaxx}), respectively.
Approximation by finite $\sigma$-algebras, as in this example, can be characterized in terms of the separability of $L^1(\mu')$ \cite{MALL}.
\subsection*{Acknowledgments}
We thank Profs. I.E. Antoniou, A. Bohm, M. Castagnino, F.
L\'opez Fdez-Asenjo, M. N\'u\~nez and Z. Suchanecki for useful discussions.
This work was supported by JCyL-project VA013C05 (Castilla y Le\'on) and MEC-project FIS2005-03989 (Spain).
\appendix
\section{Vitali Systems.}
Let $(\Lambda,{\A},\mu)$ be a measure space such that
for each $\l\in\Lambda$ the set $\{\l\}$ is measurable and
$\mu(\{\l\})=0$.
\begin{defn}\rm
A {\it Vitali system} for $(\Lambda,{\A},\mu)$ is a family of measurable sets ${\mathcal V}\subseteq \A$ such that:
\begin{itemize}
\item[(i)]
Given a measurable set $E\in\A$ and $\varepsilon >0$, there exists
a countable family of Vitali sets $A_1,A_2,\dots$ such that
$$
E \subset \bigcup_{n=1}^\infty A_n\,, \hskip0.7cm \mu\left(
\bigcup_{n=1}^\infty A_n \right) <\mu(E)+\varepsilon\,.
$$
\item[(ii)]
Each $A\in{\mathcal V}$ has a {\it border}, i.e., a set $\partial A$ of zero
$\mu$-measure such that
\begin{itemize}
\item[(a)]
If $\l\in A\setminus(A\cap\partial A)$, then any Vitali set
containing $\l$ with measure sufficiently small is contained in
$A\setminus(A\cap\partial A)$.
\item[(b)]
If $\l\notin A\cup\partial A$, then any Vitali set
containing $\l$ with measure sufficiently small has no common point
with
$A\cup\partial A$.
\end{itemize}
\item[(iii)]
Let $E\subset \Lambda$ be a set admitting a covering by Vitali sets
$\B\subset {\mathcal V}$ such that for each $\l\in E$ and each $\varepsilon>0$,
there exists
a set $A_\varepsilon(\l)\in\B$ with $\mu[A_\varepsilon(\l)]<\varepsilon$ and $\l\in
A_\varepsilon(\l)$. Then, $E$ can be covered, up to a set of zero $\mu$
measure, by countably many disjoint sets $A_j \in \B$.
\end{itemize}
\end{defn}
\begin{ex}\rm
The Lebesgue measure on $({\mathbb R}^n,{\B})$, where $\B$
is the Borel $\sigma$-algebra, admits as a particular Vitali system the family of
all closed cubes.
\end{ex}
\begin{defn}\rm
A sequence of measurable sets $\{E_1,E_2,\dots\}\subset\A$ {\it admits a contraction} to a point $\l_0\in\Lambda$ when:
\begin{itemize}
\item[(i)]
For each $E_n$ in the sequence, there is a Vitali set $A_n$ such
that $\l_0\in A_n$ and $\lim_{n\to\infty} \mu(A_n)=0$.
\item[(ii)]
There exists a constant $c>0$ such that
$$
\mu(E_n)\ge c\mu(A_n)\,,\hskip0.7cm (n\in \mathbb N)
$$
\end{itemize}
\end{defn}
\begin{defn}\rm
Let $\nu$ be a countably additive function from $\A$ into ${\mathbb R}$.
The {\it de\-ri\-va\-ti\-ve of $\nu$ at the point $\l_0$ with
respect to the Vitali system $\mathcal V$} is given by
$$
D_{\mathcal V}(\l_0) =\lim_{\varepsilon\to
0}\frac{\nu[A_\varepsilon(\l_0)]}{\mu[A_\varepsilon(\l_0)]}
$$
(provided that the limit exists), where $A_\varepsilon(\l_0)$ is any Vitali
set with $\mu$ measure smaller than $\varepsilon$ containing $\l_0$.
\end{defn}
The basic result on differentiation with respect to a Vitali system is due to Vitali and Lebesgue \cite[Th.10.1]{SG}:
\begin{thm}\label{tA7}
{\bf [Vitali-Lebesgue]}
Let $(\Lambda,{\A},\mu)$ be
a measure space, $\mathcal V$ a Vitali system on $\Lambda$ and $\nu$ a
mapping from
$\A$ into $\mathbb R$ with the property of being absolutely continuous with
respect to
$\mu$. Then, the derivative of $\nu$ with respect to $\mathcal V$ exists,
except on a set of zero $\mu$-measure. It is given at each point
$\l_0$ by
$$
D_{\mathcal V}(\l_0)=\lim_{n\to \infty} \frac{\nu(E_n)}{\mu(E_n)}
$$
where $E_1,E_2,\dots$ is a sequence of measurable sets admitting a
contraction to $\l_0$. This derivative coincides with the
Radon-Nikodym derivative of $\nu$ with respect to $\mu$,
$d\nu/d\mu$.
\end{thm}
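As a simple worked illustration (ours, not part of the original text), let $\mu$ be the Lebesgue measure on $[0,1]$ with the Vitali system of closed intervals, and let $\nu(E)=\int_E 2\l\, d\l$. For the contracting intervals $A_\varepsilon(\l_0)=[\l_0-\varepsilon,\l_0+\varepsilon]$ we get
$$
\frac{\nu[A_\varepsilon(\l_0)]}{\mu[A_\varepsilon(\l_0)]}
=\frac{(\l_0+\varepsilon)^2-(\l_0-\varepsilon)^2}{2\varepsilon}
=2\l_0
=\frac{d\nu}{d\mu}(\l_0),
$$
so the Vitali derivative reproduces the Radon-Nikodym derivative at every point, in agreement with the theorem.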
Glapwell Centre and MUGA
The Glapwell Centre and MUGA are available to book online here.
Glapwell is a small Parish covering 774 acres of land. It is three miles from Bolsover and seven miles from Chesterfield on the main A617 Chesterfield to Mansfield Road, accessible from junction 29 of the M1 motorway.
The once very busy Glapwell colliery closed in the 1960s and was demolished and replaced by an industrial estate.
Glapwell Parish Council produce a regular newsletter to let the community know what is happening in the village. Read our latest newsletter here:
TITLE: Induction Proof, $n^2$ is even $\implies$ $n$ is even
QUESTION [0 upvotes]: If $n^2$ is even $\Rightarrow$ $n$ is even.
Test: $n = 2$. $2^2 = 4$ is even $\Rightarrow$ $2$ is even.
Assuming: $P(k)$ is true with $k$ even.
Test: $P(k+2)$: $k+2$ is even.
I can't finish it. I do not know what to do.
REPLY [1 votes]: You start off by assuming the inductive hypothesis. That is, assume that $k^2$ being even implies that $k$ is even. Now, we further assume that $(k+2)^2 = k^2+4k+4$ is even and we set out to prove that $k+2$ is even. Thus, for some integer, say $l$, we have that:
$$k^2+4k+4 = 2l$$
Solving this for $k^2$ gives:
$$k^2 = 2l-4k-4$$
Factoring a two out from the right hand side gives:
$$k^2 = 2(l-2k-2)$$
Thus, by definition, $k^2$ is even. So we can invoke the inductive hypothesis to say that this implies that $k$ is also even. So one can quickly see that this implies $k+2$ is even and the proof is complete.
REPLY [0 votes]: $2$ is prime, so one can apply Euclid's lemma, to get: $2|n^2\implies (2|n\lor2|n)\implies2|n$.
I don't see any need for induction.
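For readers who want a quick empirical sanity check of the statement (a brute-force scan, not a substitute for either proof above):

```python
# Brute-force check of "n^2 even implies n even" over a finite range.
# Finding no counterexample is evidence, not a proof; the arguments
# above handle the general case.

def is_even(n):
    return n % 2 == 0

violations = [n for n in range(1, 1000) if is_even(n * n) and not is_even(n)]
print(violations)
```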
(Image Credit: Keshaofficial / Instagram)
We all know (or are) those ladies who spend their days talking about their best friend and all the things they’ve done together. In the case of these women, their best friend just happens to be their cat. To prove that the most famous faces in Hollywood are more like us than we may think, I’ve compiled a list of Hollywood’s coolest cat ladies!
Kesha
Kesha’s Instagram will confirm just how much of a cat lady the pop star really is. From cat socks to cat coffee mugs, it’s clear to see that Kesha really loves her cats. Her friendly feline pet “Mr. Peep$” even got his own (verified!) Twitter account! Understandably, Kesha takes her cats more seriously than most things in life.
Nina Dobrev
One quick look at her Instagram will tell you all you need to know; this girl loves her cat. Not only does Gypsy get to go on all kinds of road trips, but she recently visited the Grand Canyon, harness and all. Dobrev’s cat goes all over the world and following her Instagram account will give you all the reason you need to be envious of a cat.
Kate Walsh
Kate Walsh is a cat lady’s dream! She’s constantly uploading pictures and videos of her multiple cats to her Instagram, and recently she visited Black Jaguar White Tiger and has shared some incredible pictures and videos with the big cats that we could only dream of getting to hold one day!
Katy Perry
To say that Katy Perry is obsessed with cats would be an understatement. Her signature fragrances are titled Purr and Meow which might give the hint that she’s a little bit more in love with cats than the average human. Appropriately named Katy Purry, her pet cat even made an appearance in her Roar video!
Taylor Swift
Last but not least, the classiest and most elegant cat lady of all, Taylor Swift. Fans around the world know all there is to know about Olivia Benson and Meredith Grey, Swift’s brilliantly named cats. Endless pictures and videos take up her Instagram feed of her cats cuddling, fighting and acting like wild meerkats. Her little furry friends go on trips, visit friends’ houses and get to go out and about in New York City in her arms. Can’t we all just be one of Taylor Swift’s cats?
Taking a look at this list it’s safe to say that being a crazy cat lady is the new normal, so embrace it ladies! Post all of the pictures and videos you want featuring you and your furry companion and feel no shame!
The New Paltz Police Department’s medical program was established in 1998 and was the first police medical program of its kind. Police personnel were initially trained as Certified First Responders and in the use of an AED.
The medical program has evolved through the years. Currently police personnel are minimally certified in ASHI Advanced First Aid, CPR and AED for Professional Rescuer, Tactical Combat Casualty Care and in the administration of Intranasal Naloxone. The department’s medical program is under the direct supervision of Dr. Eric Stutt.
The photographs above depict the equipment carried by officers; a stocked medical bag, AED and Intranasal Naloxone kit.
“Our medical program continues to evolve under the supervision of Dr. Stutt. Our officers have administered medical assistance countless times since our program’s inception; we take pride in being able to give back to the community with this service”, said New Paltz Police Chief Joseph Snyder.
Description: A systems approach to reliability management of a technical product is presented using the principles of general management theory. A reliability management diagram is provided, and its functioning is described for the stages of system design, development, fabrication, intended use and repair. Peculiarities of managing the reliability of a complex technical system (CTS) with multiple levels of operability are considered.
Keywords: Reliability Management, complex technical system, layered operational capacity
Huntington, N.Y. (PRWEB) March 05, 2014
Cheryl Metrick is an author, artist, poet, singer and a New York SAG-AFTRA actress for over 25 years in theater, television and film. Jeree Wade, M.A. is a Montclair, New Jersey-based life coach, counselor, producer, and performer. Both driven and brought together by a common passion, these two multi-talented women collaborate and release a newly published book, titled On the Wings of Inspiration: Exploring Our Inner Life through Interpretive Symbols.
The book is based on 13 of Metrick’s drawings and poems and is about her journey into the art to find her personal, true meaning of life within. Although based on her art and autobiography, with Wade’s commentary also adding a bit of her own autobiography for emphasis, topics from the metaphysical philosophies to spirituality, humanitarianism, ecology, and energy systems are explored. These leave readers with a desire to reflect upon their lives and who they truly are with a sense of encouragement to creatively become who they want to be.
The commentary and workbook sections delve into symbolism to guide readers through an analytical process to achieve greater awareness, become mindful, and set guidelines in the continued pursuit of a purposeful and fulfilling life. They include a wide range of themes – compassion, intention and focus, manifestation, balance, transformation, maturation, renewal, self-knowledge, self-sabotage, self-awareness, death, letting go, and the meaning of time – leading readers to take a personal journey to self-awareness and inner growth.
A seamless blend of autobiographical, physical, metaphysical and spiritual elements, On the Wings of Inspiration is one book that will not fail to touch readers’ hearts and minds and spur them to reawaken their creative selves. Metrick and Wade have indeed successfully set an example and created a guided process to support readers to move into a more peaceful and more centered place to understand and implement life’s lessons.
For more information on this book, interested parties may log on to.
About the Authors
Cheryl Metrick held principal roles in independent films, commercials, infomercials, voice-overs, and a Long Island Cablevision variety show. As a professional singer and recording artist, Metrick’s most recent recording is an inspirational song where she wrote both lyrics and music to “There Are Angels,” which won several songwriting contests. It was recorded live with a string orchestra to Frank Owens’ beautiful arrangement. A previously released CD, “Cheryl Metrick: Fantasy & Romance,” is a recording of one of her NYC cabaret shows. A new CD consisting entirely of songs written by her will be released soon.
Jeree Wade, M.A. is in private practice specializing in spiritually focused life coaching, counseling and healing workshops. Her interests lie in continued research in the areas of emotional intelligence, compulsive caregiving, self-esteem and empathy. She supports her clients in exploring self-empathy as well as self-responsibility. With a degree in theater arts and a master’s degree in counseling psychology, Wade created and conducted Serenity Seminars over the last 20 years. These stimulating yet serene weekend seminars combine traditional group process, cognitive therapy, music and relaxation.
On the Wings of Inspiration * by Cheryl Metrick and Jeree Wade MA
Exploring Our Inner Life through Interpretive Symbols
Publication Date: December 17, 2013
Picture Book; $47.49; 124 pages; 978-1-4836-7192-5
Picture Book Hardcover; $52.99; 124 pages; 978-1-4836-7193-2
eBook; $4.99; 978-1-4836-7194.
First I want to tell you that I love you. I really do. Then I want to tell you that I’m switching service providers for subscription delivery. It’s no big deal and if I’ve done it correctly you shouldn’t notice a difference. If you get this via email then everything is tickety-boo.
If you subscribed a long time ago but never received a confirmation email and subsequently didn’t start receiving emails, I’m sorry! I think I’ve fixed that.
If you didn’t know email subscription was even an option and you’re all ‘how do I get this?’ then put your email address in that box to the right.
If I’ve screwed up and you get this email twice then I’m writing to pre-emptively apologize.
If you have any questions or comments on email subscription please comment below or tweet me or send me an email. I want to make sure you’re all happy.
And thank you again for reading, all of you…no matter if you visit, or follow on facebook or via rss. If it wasn’t for you guys this would all be for naught. And naught isn’t cool. No it isn’t.
And now, here’s a picture of Boo who is feeling slightly jilted about the whole Dog Vs Cat thing. He says his pictures aren’t in focus and that as he’s the handsomer of the two…I need to fix that. Ahem. It might be late. I should go to bed.
Beautiful kitty!
LOL Ok, so who’d you switch to? Oh and why?
Shan, I switched to a WP plugin called Wysija because Feedburner is no longer supporting their API, and because Google (who owns Feedburner) is famous for starting and axing projects, I thought I’d just take control of that back.
\section{Extended derivations and proofs
\label{ap:theory-extended}
}
\paragraph{Path-ordered Exponential
}
Every element $g\in G$ can be written as a product $g=\prod_a \exp[t_a^i L_i]$ using the matrix exponential \citep{hall2015lie}.
This can be done using a path $\gamma$ connecting $I$ to $g$ on the manifold of $G$.
Here, $t_a$ will be segments of the path $\gamma$ which add up as vectors to connect $I$ to $g$.
This surjective map can be written as a ``path-ordered'' (or time-ordered in physics \citep{weinberg1995quantum}) exponential (POE).
In the simplest form, the POE can be defined by breaking $g=\prod_a \exp[t_a^i L_i]$ down into
infinitesimal steps $t_a$ of size $1/N$ with $N\to \infty$.
Choosing $\gamma $ to be a differentiable path, we can replace the sum over segments $\sum_a t_a$ with an integral along the path, $\sum_a t_a = \int_\gamma t(s)\, ds$, where $t(s) = d\gamma/ds$ is the tangent vector to the path $\gamma$ and $s\in [0,1]$ parametrizes $\gamma$.
The POE is then defined as the infinitesimal $t_a$ limit of $g=\prod_a \exp[t_a^i L_i]$.
This can be written as
\begin{align}
g &= P\exp\br{ \int_\gamma t^i(s) L_i ds} = \lim_{N\to \infty} \prod_{a=1}^N \pa{I+ \delta s\, {\gamma'}^i(s_a) L_i } \cr
&= \sum_{n=0}^\infty \int_0^{1}ds_n\, {\gamma'}^{i_n}(s_n) L_{i_n}
\int_0^{s_n}ds_{n-1}\, {\gamma'}^{i_{n-1}}(s_{n-1}) L_{i_{n-1}} \dots
\int_0^{s_2}ds_1\, {\gamma'}^{i_1}(s_1) L_{i_1}
\end{align}
\out{
On connected $G$ we can Taylor expand $\kappa(v)$ around identity.
\begin{align}
\kappa(v) &= \kappa\pa{ P\exp\br{\int_\gamma dt^iL_i} } = \kappa(I) + Pt^iL_i \kappa'(I) + P{(t^iL_i)^2\over 2} \kappa''(I) + \dots \cr
&=P \sum_n {(t^iL_i)^n\over n!} {d^n\kappa(g)\over dg^n} \bigg|_{g\to I}\cr
&=P \sum_{n=0}^\infty {1\over n!} \prod_{k=1}^n \int_{} dt^iL_i {d^n\kappa(g)\over dg^n} \bigg|_{g\to I}
\end{align}
\nd{
We want the expansion to slowly chip away at $\kappa$, moving it toward identity.
Starting from G-conv, we can either expand $f$ or $\kappa$.
If we expand $\kappa$ the derivatives will be on $\kappa$.
If we expand the $v$ in $f(gv)$, can we express the $\kappa(v)$ in terms of product of integrals of small $v_\eps$? Can we do that from the beginning?
What happens if we break the integral down into these smaller pieces?
Can we prove that it will remain just one integral, rather than the product of many similar integrals, which would be spurious?
If we think about it as a propagator, then the intermediate integrals will not result in extra integrals.
}
We can use $u=u_\eps v$ to expand $\kappa(v)$ as
\begin{align}
\kappa(v) &= \kappa\pa{(I+\eps^iL_i) u} \approx \kappa(u) + \eps^iL_i u {d \kappa(u)\over du} + O(\eps^2)
\label{eq:kappa-expand}
\end{align}
While $\kappa(v)$ can be any function on $G$ (with small restrictions discussed in \citet{kondor2018generalization}), we can use \eqref{eq:kappa-expand} to replace $\kappa$ with a new kernel $\kappa_1$ which has support on most of $G$, except for a small neighborhood $\eta$ when $\|\eps\| < \eta$.
In other words $\kappa(v) \approx \kappa_1(u) + \eps^iL_i u \kappa_1'(u) $.
in \eqref{eq:G-conv} and rewrite G-conv as follows
\begin{align}
[\kappa \star f](g) &\approx \int_{G/H_\eta} dv \br{\kappa(v) }
\end{align}
where $H_\eta$ is a subgroup spanned by
For continuous symmetry groups, using the Lie algebra
allows us to approximate G-conv without having to integrate over the full group manifold, thus alleviating the need to discretize or sample the group.
The following proposition establishes the connection between G-conv and Lie algebras.
\begin{proposition}
Let $G$ be a Lie group, $f:\mathcal{S}\to \mathcal{F}$ a differentiable equivariant function.
If a convolution kernel $\kappa: G \to \mathrm{Hom}(\mathcal{F}, \mathcal{F}') $ has support only on an infinitesimal neighborhood $\eta$ of identity, a G-conv layer of \eqref{eq:G-conv} can be written in terms of the Lie algebra.
\end{proposition}
\begin{proof}
Consider \eqref{eq:G-conv}
with $\pi: G \to \mathrm{GL}(\mathbb{R}[\mathcal{S}])$ a representation of $G$.
Linearization over $\eta$ yields $\pi[v] \approx I + \eps^i L_i$ with $L_i \in \mathrm{Hom}(\mathbb{R}[\mathcal{S}],\mathbb{R}[\mathcal{S}])$ being a representation for the basis of the Lie algebra of $G$.
Since $\kappa $
has support only in an $\eta$ neighborhood of identity, fixing a basis $L_i$, we can reparametrize $\tilde{\kappa}(\eps)\equiv \kappa(I+\eps^i L_i)$ as a function over the Lie algebra $\tilde{\kappa }:\mathfrak{g}\to \mathrm{Hom}(\mathcal{F},\mathcal{F}') $.
The Haar measure is also replaced by a volume element in the tangent space $d\mu(u) \to d\eps$.
\eqref{eq:G-conv} becomes
\out{
\begin{align}
&(\kappa \star f)(\vx) \approx \int_G
\tilde{\kappa}(\eps) (I+\eps\cdot L^\rho)f\pa{(I-\eps\cdot L^\pi) \vx} d \eps \cr&
\approx
\left.
\pa{W^0 I + W\cdot\left[L^\rho - L^\pi \vx\cdot \del_z\right]} f(z)\right|_{z\to \vx}
\label{eq:G-conv-expand} \\
&W^0 \equiv \int_G \tilde{\kappa}(\eps) d\eps, \quad
W^i\equiv \int_G \eps^i \tilde{\kappa}(\eps) d\eps.
\label{eq:G-conv-gbar}
\end{align}
where $L_i^\rho$ and $L_i^\pi$ are the Lie algebra basis in the $\rho$ and $\pi$ representations, respectively.
}
\begin{align}
&(\kappa \star f)(\vx) \approx \int_G
\tilde{\kappa}(\eps)
f\pa{(I-\eps\cdot L) \vx} d \eps \cr&
\approx
\left.
\pa{W^0 I - W\cdot L \vx\cdot \del_z} f(z)\right|_{z\to \vx}
\label{eq:G-conv-expand} \\
&W^0 \equiv \int_G \tilde{\kappa}(\eps) d\eps, \quad
W^i\equiv \int_G \eps^i \tilde{\kappa}(\eps) d\eps.
\label{eq:G-conv-gbar}
\end{align}
where $L_i$ are the Lie algebra basis.
\end{proof}
}
\paragraph{L-conv derivation}
Let us consider what happens if the kernel in G-conv \eqref{eq:G-conv} is localized near identity.
Let $\kappa_I (u) = c \delta_\eta (u)$, with constants $c\in \R^{m'} \otimes \R^{m}$ and kernel $\delta_\eta(u) \in \R$ which has support only on an $\eta$ neighborhood of identity, meaning $\delta_\eta(I+\eps^iL_i) \to 0 $ if $|\eps|>\eta$.
This allows us to expand G-conv in the Lie algebra of $G$ to linear order.
With $v_\eps = I+\eps^i L_i $, we have
\begin{align}
[\delta_\eta \star f](g) &= \int_G dv \delta_\eta (v) f(gv )
=\int_{\|\eps\| <\eta } dv_\eps \delta_\eta (v_\eps) f(gv_\eps )\cr
&= \int d\eps \delta_\eta (I+\eps^i L_i) f(g(I+\eps^i L_i))\cr
&= \int d\eps \delta_\eta (I+\eps^i L_i) f(g+\eps^i gL_i)\cr
&= \int d\eps \delta_\eta (I+\eps^i L_i) \br{f(g)+ \eps^i g L_i \cdot {d\over dg} f(g) + O(\eps^2) }
\cr
&= \int d\eps \delta_\eta (I+\eps^i L_i) \br{I+ \eps^i g L_i \cdot {d\over dg} } f(g) + O(\eta^2) \cr
&= W^0\br{I + \ba{\eps}^i g L_i\cdot {d\over dg} } f(g) + O(\eta^2)
\label{eq:L-conv-basic}
\end{align}
where $d\eps $ is the integration measure on the Lie algebra induced by the Haar measure $dv$ on $G$.
The $O(\eta^2)$ term arises from integrating the $O(\eps^2)$ terms.
To see this, note that for a function
$\phi(\epsilon)$ of order $p$ in $\eps$ with $|\epsilon| < \eta$, we have $|\phi(\epsilon)| < \eta^p C$ for some constant $C$.
Substituting this bound into the integral over the kernel $\delta_\eta$ we get
\begin{align}
&\left|\int d\epsilon \delta_\eta (I+\epsilon^i L_i) \phi(\epsilon)\right| \leq \int \left| d\epsilon \delta_\eta (I+\epsilon^i L_i) \phi(\epsilon)\right| \cr
&< \int \left| d\epsilon \delta_\eta (I+\epsilon^i L_i)\eta^p C\right| \leq \eta^p C \int \left| d\epsilon \delta_\eta (I+\epsilon^i L_i)\right| \leq \eta^p C
\label{eq:O-eta-p}
\end{align}
In matrix representations, $g L_i\cdot {df\over dg} = [g L_i]_\alpha^\beta {df\over dg_\alpha^\beta} = \Tr{[g L_i]^T {df\over dg} }$.
Note that in $g(I+\eps^iL_i)\vx_0$, the $gL_i\vx_0 = \hat{L}_i{(g)}\vx$ come from the pushforward $\hat{L}_i{(g)}=gL_ig^{-1} \in T_gG$.
Here
\begin{align}
W^0 &=
c \int d\eps \delta_\eta (I+\eps^iL_i) \in \R^{m'} \otimes \R^m, &
\ba{\eps}^i &= \frac{\int d\eps \delta_\eta (I+\eps^iL_i) \eps^i}{\int d\eps \delta_\eta (I+\eps^iL_i)} \in \R^{m} \otimes \R^m
\end{align}
with $\|\ba{\eps}\|<\eta $.
When $\delta_\eta$ is normalized, meaning $\int_G \delta_\eta(g)dg=1 $, we have $W^0 = c$ and
\begin{align}
\ba{\eps}^i &= \int d\eps \delta_\eta (I+\eps^iL_i) \eps^i \nonumber
\end{align}
Note that with $f(g)\in \R^m$, each $\eps^i \in\R^{m} \otimes \R^m $ is a matrix.
With indices, $f(gv_\eps) $ is given by
\begin{align}
[f(gv_\eps)]^a &= \sum_b f^b(g(\delta^a_b + [\eps^i]^a_b L_i))
\end{align}
Similarly, the integration measure $d\eps$, which is induced by the Haar measure $dv_\eps \equiv d\mu(v_\eps)$, is a product $\int d\eps = \int |J| \prod d [\eps^i]^a_b$, with $J=\ro v_\eps /\ro \eps $ being the Jacobian.
\Eqref{eq:L-conv-basic} is the core of the architecture we are proposing, the Lie algebra convolution or \textbf{L-conv}.
\textbf{L-conv Layer}
In general, we define Lie algebra convolution (L-conv) as follows
\begin{align}
Q[f](g)
&= W^0 \br{I + \ba{\eps}^i g L_i\cdot {d\over dg} }f(g) \cr
&= [W^0]_b f^a\pa{g\pa{\delta^b_a+ [\ba{\eps}^i]_a^b L_i}} +O(\ba{\eps}^2)
\label{eq:L-conv-def1}
\end{align}
\paragraph{Extended equivariance for L-conv}
From \eqref{eq:L-conv-def1} we see that $W^0$ acts on the output feature indices.
Notice that the equivariance of L-conv is due to the way $gv_\eps = g(I+\ba{\eps}^iL_i)$ appears in the argument, since for $u\in G$
\begin{align}
u\cdot Q[f](g) = W^0 f(u^{-1}gv_\eps) = W^0 [u\cdot f](gv_\eps)
\end{align}
Because of this, replacing $W^0$ with a general neural network which acts on the feature indices separately will not affect equivariance.
For instance, if we pass L-conv through a neural network to obtain a generalized L-conv $Q_\sigma$, we have
\begin{align}
Q_\sigma[f](g) &= \sigma( W f(gv_\eps) + b) \cr
u \cdot Q_\sigma[f](g) &= Q_\sigma[f](u^{-1}g)= \sigma( W f(u^{-1}gv_\eps) + b)\cr
&= \sigma( W [u\cdot f](gv_\eps) + b) = Q_\sigma[u\cdot f](g)
\end{align}
Thus, L-conv can be followed by any nonlinear neural network as long as it only acts on the feature indices (i.e. $a$ in $f^a(g)$) and not on the spatial indices $g$ in $f(g)$.
\subsection{Approximating G-conv using L-conv \label{ap:G-conv2L-conv} }
We now show that G-conv \eqref{eq:G-conv} can be approximated by composing L-conv layers.
\textbf{Universal approximation for kernels}
Using the same argument used for neural networks \citep{hornik1989multilayer,cybenko1989approximation}, we may approximate any kernel $\kappa(v)$ as the sum of a number of kernels $\kappa_k$ with support only on a small $\eta$ neighborhood of $u_k \in G $ to arbitrary accuracy.
The local kernels can be written as $\kappa_k (v) = c_k \delta_\eta ( u_k^{-1}v)$, with $\delta_\eta(u)$ as in \eqref{eq:L-conv-basic}
and constants $c_k\in \R^{m'} \otimes \R^{m}$.
Using this, G-conv \eqref{eq:G-conv} becomes
\begin{align}
[\kappa \star f](g)& = \sum_k [\kappa_k \star f](g) = \sum_k c_k \int dv \delta_\eta(u_k^{-1} v) f(gv) \cr
&= \sum_k c_k \int dv \delta_\eta(v ) f(gu_k v) = \sum_k c_k [\delta_\eta \star f] (gu_k).
\end{align}
As we showed in \eqref{eq:L-conv-basic}, $[\delta_\eta \star f](g)$ is the definition of L-conv.
Next, we need to show that $[\delta_\eta \star f] (gu_k)$ can also be approximated with $[\delta_\eta \star f] (g)$ and hence L-conv.
For this we use $u_k =v_k (I+\eps_k^iL_i) $ to find $ v_k \in G$ which are closer to $I$ than $u_k$.
Taylor expanding $F_\eta = \delta_\eta\star f$ in $\eps$ we obtain
\begin{align}
F_\eta (gu_k)& = F_\eta \pa{gv_k (I+\eps_k^iL_i) } = F_\eta (gv_k) + \eps_k^i u L_i \cdot {dF_\eta (u)\over du}\bigg|_{u\to gv_k} + O(\eps^2)
\cr
[\kappa \star f](g)& = \sum_k c_k F_\eta (gu_k) = \sum_k \br{c_k + c_k \eps_k^i u L_i \cdot {d\over du} }F_\eta (u) \bigg|_{u\to gv_k}
\cr
&=\sum_k \br{W_k^0 + W_k^i u L_i\cdot {d\over du} }F_\eta (u) \bigg|_{u\to gv_k}
=\sum_k Q_k[F_\eta ](gv_k)
\label{eq:L-conv-uk}
\end{align}
Using \eqref{eq:L-conv-uk} we can progressively remove the $u_k$ as $F_\eta(gu_k) \approx Q^n_k[\dots [Q^1_k[F_\eta]]](g)$, i.e. an $n$ layer L-conv.
Thus, we conclude that any G-conv \eqref{eq:G-conv} can be approximated by multilayer L-conv.
\subsection{Example of continuous L-conv \label{ap:examples-continuous}}
The $gL_i\cdot df/dg$ in \eqref{eq:L-conv} can be written in terms of partial derivatives $\ro_\alpha f(\vx) = \ro f/\ro \vx^\alpha$.
In general, using $\vx^\rho = g^\rho_\sigma \vx_0^\sigma$, we have
\begin{align}
{df(g\vx_0)\over dg^\alpha_\beta}
&= {d(g^\rho_\sigma \vx_0^\sigma) \over dg^\alpha_\beta}\ro_\rho f(\vx) = \vx_0^\beta \ro_\alpha f(\vx)
\label{eq:dgx0-dg}
\\
gL_i\cdot {df\over dg} &= [gL_i]^\alpha_\beta \vx_0^\beta \ro_\alpha f(\vx) = [gL_i\vx_0]\cdot \del f(\vx) = \hat{L}_if(\vx)
\label{eq:dfdg-general0}
\end{align}
Hence, for each $L_i$, the pushforward $gL_i$ generates a flow on $\mathcal{S}$ through the vector field $\hat{L}_i\equiv gL_i\cdot d/dg = [gL_i\vx_0]^\alpha \ro_\alpha$ (Fig. \ref{fig:Lie-group-S}).
Being a vector field $\hat{L}_i\in T\mathcal{S}$ (i.e. 1-tensor), $\hat{L}_i$ is basis independent, meaning
for $v \in G$, $\hat{L}_i(v\vx) = \hat{L}_i$.
Its components transform as $[\hat{L}_i(v \vx)]^\alpha =[vgL_i\vx_0]^\alpha = v^\alpha_\beta \hat{L}_i(\vx)^\beta $, while the partial transforms as $\ro / \ro[v\vx]^\alpha = [v^{-1}]^{\gamma}_\alpha \ro_\gamma$.
{
Using this relation and Taylor expanding \eqref{eq:L-conv-equiv}, we obtain a second form for the group action on L-conv.
For $w\in G$, with $\vy= w^{-1}\vx$ we have
\begin{align}
Q[f](w^{-1}g\vx_0)&= W^0\br{I+
\ba{\eps}^i[\hat{L}_i]^\alpha[w^{-1}]^\beta_\alpha {\ro \over \ro \vy^\beta}}f(\vy)\big|_{\vy\to w^{-1}\vx}
\label{eq:L-conv-hat-L-equiv}
\end{align}
}
\textbf{1D Translation:}
Let $G=T_1 = (\R,+)$.
A matrix representation for $G$ is found by encoding $x$ as a 2D vector $(x,1)$.
The lift is given by $\vx_0 =(0,1) $ as the origin and $g = \begin{pmatrix}1&x\\0&1\end{pmatrix}$.
The Lie algebra basis is $L = \begin{pmatrix}0&1\\0&0\end{pmatrix}$.
It is easy to check that $gg'\vx_0 = (x+x',1)$.
We also find $gL = L $, meaning $L$ looks the same in all $T_gG$.
Close to the identity, $v_\eps = I+\eps L$, which represents translation by $\eps$.
We have $gv_\eps \vx_0 = (g+ \eps gL)\vx_0 = (x + \eps,1)$.
Thus, $f(g(I+\eps L)\vx_0) \approx f(\vx)+ \eps df(\vx)/d \vx $.
This readily generalizes to $n$D translations $T_n$ (SI \ref{ap:example-Tn}), yielding $f(\vx) + \eps^\alpha \ro_\alpha f(\vx)$.
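The first-order accuracy of this expansion can be checked with finite $\eps$ (our sketch; the test function $f(x)=\sin x$ is an arbitrary choice):

```python
import math

# Check the first-order L-conv expansion for 1D translations:
# f(x + eps) ~ f(x) + eps * df/dx, with an O(eps^2) remainder.
# Test function (our arbitrary choice): f(x) = sin(x).

def expansion_error(x, eps):
    exact = math.sin(x + eps)
    linear = math.sin(x) + eps * math.cos(x)
    return abs(exact - linear)

x = 0.7
e1 = expansion_error(x, 1e-2)
e2 = expansion_error(x, 1e-3)
ratio = e1 / e2  # ~100: shrinking eps by 10x shrinks the error by ~100x
print(ratio)
```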
\paragraph{2D Rotation:} Let $G=SO(2)$.
The space that can be lifted to $SO(2)$ is not the full $\R^2$, but a circle of fixed radius $r=\sqrt{x^2+y^2}$.
Hence we choose $\mathcal{S}=S^1$ embedded in $\R^2$, with $x=r\cos\theta$ and $y=r\sin\theta$.
For the lift, we use the standard 2D representation.
We have $\vx_0 = (r,0)$ and (see SI \ref{ap:example-so2})
\begin{align}
L&=\mat{0&-1\\1&0},&
g&=\exp[\theta L]= {1\over r}\mat{x &-y \\ y & x},&
gL \cdot {df\over dg} &=
\pa{x \ro_y
- y \ro_x } f.
\label{eq:L-so20}
\end{align}
Physicists will recognize $\hat{L} \equiv \pa{x \ro_y - y \ro_x } = \ro_\theta$ as the angular momentum operator in quantum mechanics and field theories, which generates rotations around the $z$ axis.
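The identity $\hat{L} = x\ro_y - y\ro_x = \ro_\theta$ can be verified numerically (our sketch; $f(x,y)=x^2y$ is an arbitrary test function, and the step size $h$ is a choice of the finite-difference scheme):

```python
import math

# Verify that x*(d/dy) - y*(d/dx) equals d/dtheta in polar coordinates,
# using central finite differences on the test function f(x, y) = x^2 * y.

def f(x, y):
    return x * x * y

def lhat(x, y, h=1e-6):
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return x * dfdy - y * dfdx

def dtheta(r, theta, h=1e-6):
    def g(t):
        return f(r * math.cos(t), r * math.sin(t))
    return (g(theta + h) - g(theta - h)) / (2 * h)

r, theta = 1.3, 0.8
x, y = r * math.cos(theta), r * math.sin(theta)
diff = abs(lhat(x, y) - dtheta(r, theta))
print(diff)
```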
\paragraph{Rotation and scaling}
Let $G= SO(2)\times \R^+$, where $\R^+=(0,\infty)$ acts by scaling.
The infinitesimal generator for scaling is identity $L_2 = I$.
This group is also Abelian, meaning $[L_2,L]=0$ ($L\in so(2)$ \eqref{eq:L-so20}).
$\R^2\setminus\{0\}$ can be lifted to $G$ by choosing $\vx_0 = (1,0)$ in polar coordinates and $\vx = g\vx_0 = rL_2 \exp[\theta L]\vx_0$.
We again have $gL\cdot df/dg = \ro_\theta f $.
We also have $gL_2 = g$, so $gL_2\vx_0 = (x,y)$ and from \eqref{eq:dfdg-general0}, $gL_2 \cdot df/dg = (x\ro_x+y\ro_y)f = r\ro_r f $, which is the scaling operation.
\subsubsection{Rotation $SO(2)$ \label{ap:example-so2} }
With $\vx_0 = (r,0)$
\begin{align}
g&
=\begin{pmatrix}\cos \theta &-\sin\theta \\\sin\theta &\cos\theta \end{pmatrix}
={1\over r}\begin{pmatrix}x &-y \\ y & x \end{pmatrix}, & L&=\begin{pmatrix}0&-1\\1&0\end{pmatrix}, & gL& = {1\over r}\begin{pmatrix}- y & - x\\x & - y\end{pmatrix} \\
gL\vx_0 &= \mat{-y\\x} = r\mat{-\sin\theta \\ \cos\theta}
\label{eq:gLx0-so2}
\end{align}
To calculate $df/dg$ we note that even after the lift, the function $f$ was defined on $\mathcal{S}$.
So we must include the $\vx_0$ in $f(g\vx_0)$.
Using \eqref{eq:dfdg-general}, we have
\begin{align}
{df(g\vx_0)\over dg} &=
\mat{r\ro_xf&0\\ r\ro_yf&0}\cr
gL \cdot {df\over dg} &= \mathrm{Tr}\br{[gL]^T {df\over dg}} = \mathrm{Tr}
\begin{pmatrix}x \ro_y f - y \ro_x f & 0
\\ -x \ro_x f - y \ro_y f & 0
\end{pmatrix} =
x \ro_y f - y \ro_x f
\end{align}
\subsubsection{Translations $T_n$
\label{ap:example-Tn} }
Generalizing the $T_1$ case, we add a dummy dimension $0$ and $\vx_0=(1,0,\dots,0)$.
The generators are $[L_i]_\mu^\nu = \delta_{i\mu}\delta_0^\nu$ and $g = I+ x^i L_i$.
Again, $gL_i = L_i + x^j L_j L_i = L_i$ as $L_jL_i = 0$ for all $i,j$.
Hence, $[\hat{L}_i]^\alpha = [gL_i\vx_0]^\alpha =\delta^\alpha_i $.
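The two algebraic facts used here, $L_jL_i = 0$ and $g\vx_0=(1,x^1,\dots,x^n)$, can be checked directly (our numerical sketch, with the illustrative choice $n=3$):

```python
# Check the T_n construction: the one-hot generators
# [L_i]_mu^nu = delta_{i mu} delta_0^nu satisfy L_j L_i = 0, and
# g = I + x^i L_i maps x0 = (1, 0, ..., 0) to (1, x^1, ..., x^n).

n = 3  # illustrative dimension (our choice)

def Lgen(i):
    M = [[0.0] * (n + 1) for _ in range(n + 1)]
    M[i][0] = 1.0
    return M

def matmul(A, B):
    d = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

products_vanish = all(
    abs(v) < 1e-12
    for i in range(1, n + 1) for j in range(1, n + 1)
    for row in matmul(Lgen(j), Lgen(i)) for v in row)

x = [0.5, -1.0, 2.0]
g = [[float(i == j) for j in range(n + 1)] for i in range(n + 1)]
for i in range(1, n + 1):
    g[i][0] += x[i - 1]          # g = I + x^i L_i
x0 = [1.0] + [0.0] * n
gx0 = [sum(g[i][j] * x0[j] for j in range(n + 1)) for i in range(n + 1)]
print(products_vanish, gx0)
```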
\out{
\subsection{Haar measure}
\nd{Take out}
To calculate $\int_G dg$ we need to calculate the Haar measure $dg = d\mu(g) $.
For $G= \mathrm{GL}_d(\R)$, the measure is
\begin{align}
dg = {1\over det(g)^d} \bigwedge_{i,j} dg_{ij}.
\end{align}
which has $d^2$ different $dg_{ij}$
This can be seen in terms of the generators of $\mathrm{GL}_d(\R)$, which are all one-hot matrices $\mE_{ij}$.
For subgroups $G\subset \mathrm{GL}_d(\R)$ we can find independent components by writing
$g^{-1}dg =dt^i g^{-1}L_ig $, which is Maurer-Cartan form.
When the Lie algebra basis has $n=|\mathfrak{g}|$ elements, we have $n$ independent variables $t^i$.
\subsubsection{$SO(2)\times \R_+$ Haar measure}
When the group is Abelian, meaning $[L_i,L_j]=0$, the Haar measure computation simplifies.
For $SO(2)\times \R_+$ with $g=r\exp[\theta L_\theta]$ we have
\begin{align}
g &= r \mat{\cos\theta & -\sin \theta \\
\sin\theta & \cos\theta },\qquad g^{-1} = {1\over r} \mat{\cos\theta & \sin \theta \\
-\sin\theta & \cos\theta }\cr
g^{-1}dg &= g^{-1}\br{{1\over r}dr g+r d\theta \mat{-\sin\theta & -\cos\theta \\ \cos\theta & -\sin \theta } } = {dr\over r} I + d\theta L_\theta.
\end{align}
To build the volume form from this we can start from the Euclidean volume form $dx\wedge dy$, which is simply a different basis, and express in terms of $d\theta\wedge dr$.
The Jacobian for this change of variables is
\begin{align}
J= \mat{{\ro \theta\over \ro x} & {\ro \theta\over \ro y}\\
{\ro r\over \ro x} & {\ro r\over \ro y}} &= \mat{{-y\over r^2} & {x\over r^2}\\ {x\over r} & {y\over r}},& |\det(J)| = {1\over r}
\end{align}
And so we recover the polar integration measure $d^2 x =|\det(J)|^{-1} dr d\theta = r dr d\theta $
}
\subsection{Group invariant loss \label{ap:invariant-loss} }
Because $G$ is the symmetry group, $f$ and $g\cdot f$ should result in the same optimal parameters.
Hence, the minima of the loss function need to be \textit{group invariant}.
One way to satisfy this is for the loss itself to be group invariant, which can be constructed
by integrating over $G$ (global pooling \citep{bronstein2021geometric}).
A function $ I = \int_G dg F(g)$ is $G$-invariant as for $w\in G$
\begin{align}
w\cdot I &=\int_G w\cdot F(g)dg = \int F(w^{-1}g) dg = \int_G F(g') d(w g') = \int_G F(g') dg'
\label{eq:loss-invariance}
\end{align}
where we used the invariance of the Haar measure $d(wg') = dg'$.
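The relabeling argument in \eqref{eq:loss-invariance} is easy to verify numerically for a finite group, where the Haar integral becomes a sum over group elements. A minimal sketch (ours; the finite cyclic shift group stands in for a continuous $G$):

```python
import numpy as np

# Averaging an arbitrary scalar function F over a finite group makes it
# G-invariant: replacing f by w.f only relabels the terms of the sum.
rng = np.random.default_rng(0)
f = rng.normal(size=8)

def act(k, v):
    # g_k . v = cyclic shift by k; the 8 shifts form the group
    return np.roll(v, k)

def F(v):
    # an arbitrary, deliberately non-invariant scalar function
    return float(v[0] * v[1] ** 2 + np.sin(v[2]))

I = sum(F(act(k, f)) for k in range(8))            # group-averaged "integral"
I_w = sum(F(act(k, act(3, f))) for k in range(8))  # same, with f -> w.f
```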
We can change the integration to $\int_\mathcal{S} d^nx $ by a change of variables with Jacobian $dg/d\vx$.
Since $\mathcal{S}$ needs to be lifted to $G$, the lift $:\mathcal{S}\to G$ is injective, but the map $ G\to\mathcal{S}$ need not be.
$\mathcal{S}$ is homeomorphic to $G/H$, where $H\subset G$ is the stabilizer of the origin, i.e. $h\vx_0 = \vx_0,\forall h\in H$.
Since $F(g\vx_0) = F(gh\vx_0)$, we have
\begin{align}
I& = \int_G F(g) dg = \int_H dh \int_{G/H} dg' F(g') = V_H\int_{G/H} dg' F(g')
\end{align}
Since $G/H\sim \mathcal{S}$, the volume forms $dg'$ and $d^nx$ can be matched for a suitable parametrization.
\subsubsection{MSE Loss \label{ap:Loss-MSE}}
\out{
In \eqref{eq:loss-invariance} $F(g)$ is any scalar function of the features $f$ with a defined $G$ action.
For example $F(g)$ can be mean square loss (MSE) $F(g)= \sum_n\|Q[f_n](g)\|^2$, where $f_n$ are data samples and $Q[f]$ is L-conv or another $G$-equivariant function.
In supervised learning the input is a pair $f_n,y_n$.
The labels $y_n$ have their own $G$ action, including being $G$-invariant.
We can concatenate $\phi_n [f_n|y_n]$ to define a merged feature which has a well-defined $G$ action.
Using \eqref{eq:L-conv-def0} and \eqref{eq:dfdg-general} defining an MSE loss yields
\begin{align}
I[W] &= \sum_n \int_G dg \left\| W^0 \br{I + \ba{\eps}^i g L_i\cdot {d\over dg} }\phi_n(g) \right\|^2 \cr
&= \sum_n \int_G dg \br{
\|W^0\phi_n\|^2 + \left\|W^i g L_i\cdot {d\phi_n\over dg}\right\|^2 +2\phi_n^T W^{0T}W^i g L_i\cdot {d\over dg}\phi_n
}
\label{eq:loss-MSE0}
\end{align}
where $W^i= W^0\ba{\eps}^i$.
First we simplify the first two terms in \eqref{eq:loss-MSE0}.
The first term is $f_n^TMfn$ where $M=W^{0T}W^0$.
For the second term, let $U_i(g) = gL_i\vx_0$.
Using \eqref{eq:dfdg-general} we have
\begin{align}
I[W] &= \sum_n \int_G dg \left\| W^0 \br{I + \ba{\eps}^i U_i^\alpha \ro_\alpha }f_n(g) \right\|^2 + I_\ro\cr
&= \sum_n \int_G dg \br{ \|W^0f_n\|^2 +\|W^iU_i^\alpha \ro_\alpha f_n\|^2 } + I_\ro \cr
&= \sum_n \int_G dg \br{ f_n^TMf_n +\ro_\alpha f_n^T\mathbf{g}^{\alpha\beta} \ro_\beta f_n } + I_\ro \cr
&= \sum_n \int_G dg \br{ M_{ab}f_n^a f_n^b +\mathbf{g}^{\alpha\beta}_{ab} \ro_\beta f_n^b \ro_\alpha f_n^a} + I_\ro
\end{align}
Where $\mathbf{g}^{\alpha\beta}_{ab} = \sum_c [W^i]_{ac}U_i^\alpha [W^j]_{bc}U_j^\beta $ acts as a Riemannian metric $\mathbf{g}^{\alpha\beta}$ on $\mathcal{S}$ and a metric on the feature space $\mathcal{F}$ as $\mathbf{g}_{ab}$.
\paragraph{MSE loss}
}
The MSE is given by $I = \sum_n\int _G dg\|Q[f_n](g)\|^2$, where $f_n$ are data samples and $Q[f]$ is L-conv or another $G$-equivariant function.
In supervised learning the input is a pair $f_n,y_n$.
$G$ can also act on the labels $y_n$.
We assume that $y_n$ are either also scalar features $y_n:\mathcal{S}\to \R^{m_y}$ with a group action $g\cdot y_n(\vx)=y_n(g^{-1}\vx)$ (e.g. $f_n$ and $y_n$ are both images), or that $y_n$ are categorical.
In the latter case $g\cdot y_n = y_n$ because the only representations of a continuous $G$ on a discrete set are constant.
We can concatenate the inputs to
$\phi_n \equiv [f_n|y_n]$
with a well-defined $G$ action $g\cdot \phi_n = [g\cdot f_n| g\cdot y_n]$.
The collection of combined inputs $\Phi = (\phi_1,\dots, \phi_N)^T$ is an $(m+m_y)\times N$ matrix.
Using
equations \ref{eq:L-conv} and \ref{eq:dfdg-general}, the MSE loss with parameters $W = \{W^0,\ba{\eps}\}$ becomes
\begin{align}
I[\Phi;W] &= \int_G dg \L[\Phi;W] = \int_G dg \left\| W^0 \br{I + \ba{\eps}^i [\hat{L}_i]^\alpha \ro_\alpha}\Phi(g) \right\|^2
\cr
&= \int_G dg \br{
\|W^0\Phi\|^2 + \left\|W^i [\hat{L}_i]^\alpha \ro_\alpha \Phi \right\|^2 +2\Phi^T W^{0T}W^i [\hat{L}_i]^\alpha \ro_\alpha \Phi}
\label{eq:loss-MSE-expand}\\
&= \int_\mathcal{S} {d^nx\over\left|\ro x\over \ro g\right|} \br{\Phi^T\mathbf{m}_2\Phi + \ro_\alpha \Phi^T \mathbf{h}^{\alpha\beta} \ro_\beta \Phi +[\hat{L}_i]^\alpha \ro_\alpha \pa{\Phi^T \mathbf{v}^i \Phi} }
\label{eq:Loss-MSE0}
\end{align}
where $\left|\ro x\over \ro g\right|$ is the determinant of the Jacobian, $W^i = W^0 \ba{\eps}^i$ and
\begin{align}
\mathbf{m}_2 &=W^{0T}W^0, &
\mathbf{h}^{\alpha\beta}(\vx) &= \ba{\eps}^{iT} \mathbf{m}_2 \ba{\eps}^j [\hat{L}_i]^\alpha [\hat{L}_j]^\beta, &
\mathbf{v}^i &= \mathbf{m}_2\ba{\eps}^{i} .
\label{eq:MSE-params0}
\end{align}
From \eqref{eq:loss-MSE-expand} to \eqref{eq:Loss-MSE0} we used the fact that $W^0$ and $W^i$ do not depend on $\vx$ (or $g$) to write
\begin{align}
2\Phi^T W^{0T}W^i [\hat{L}_i]^\alpha \ro_\alpha \Phi & = [\hat{L}_i]^\alpha \ro_\alpha \pa{\Phi^T W^{0T}W^i \Phi } = [\hat{L}_i]^\alpha \ro_\alpha \pa{\Phi^T \mathbf{m}_2 \ba{\eps}^i \Phi }
\end{align}
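The expansion above rests on the quadratic identity $\|A(u + Xu)\|^2 = \|Au\|^2 + \|AXu\|^2 + 2u^TA^TAXu$, which a quick numerical sketch confirms (ours; random matrices stand in for $W^0$ and the first-order L-conv correction $\ba{\eps}^i[\hat{L}_i]^\alpha\ro_\alpha$):

```python
import numpy as np

# The algebraic identity behind the MSE expansion:
# ||W0 (I + X) p||^2 = ||W0 p||^2 + ||W0 X p||^2 + 2 p^T W0^T W0 X p.
rng = np.random.default_rng(1)
m = 5
W0 = rng.normal(size=(m, m))
X = rng.normal(size=(m, m))   # stand-in for eps^i L_i . d/dg
p = rng.normal(size=m)

lhs = np.linalg.norm(W0 @ (np.eye(m) + X) @ p) ** 2
rhs = (np.linalg.norm(W0 @ p) ** 2
       + np.linalg.norm(W0 @ X @ p) ** 2
       + 2 * p @ W0.T @ W0 @ X @ p)
```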
Note that $\mathbf{h}$ has feature space indices via $[\ba{\eps}^{iT} \mathbf{m}_2 \ba{\eps}^j]_{ab}$, with index symmetry
$\mathbf{h}^{\alpha\beta}_{ab}=\mathbf{h}^{\beta\alpha}_{ba} $.
When $\mathcal{F}= \R$ (i.e. $f$ is a 1D scalar), $\mathbf{h}^{\alpha\beta}$ becomes a Riemannian metric
for $\mathcal{S}$.
In general $\mathbf{h}$ combines a 2-tensor $\mathbf{h}_{ab}=\mathbf{h}_{ab}^{\alpha\beta}\ro_\alpha \ro_\beta \in T\mathcal{S} \otimes T\mathcal{S}$ with an inner product $h^T\mathbf{h}^{\alpha\beta}f $ on the feature space $\mathcal{F}$.
Hence $\mathbf{h}\in T\mathcal{S}\otimes T\mathcal{S}\otimes \mathcal{F}^*\otimes \mathcal{F}^*$ is a $(2,2)$-tensor, with $\mathcal{F}^*$ being the dual space of $\mathcal{F}$.
\paragraph{Loss invariant metric transformation}
The metric
$\mathbf{h}$ transforms equivariantly as a 2-tensor.
As discussed under \eqref{eq:dfdg-general},
$[\hat{L}_i(v \vx)]^\alpha = v^\alpha_\beta \hat{L}_i(\vx)^\beta $ and
\begin{align}
v\cdot \mathbf{h}^{\alpha\beta}&= \mathbf{h}^{\alpha\beta}(v^{-1}\vx) =[v^{-1}]^\alpha_\rho [v^{-1}]^\beta_\gamma
\mathbf{h}^{\rho\gamma}(\vx), &
(v&\in G).
\label{eq:h-metric-covariance}
\end{align}
Note that $v\cdot \mathbf{m}_2=\mathbf{m}_2$ since $f_n$ and $y_n$ are scalars.
For example, let $G=SO(2)$ and $R(\xi) \in SO(2)$ be rotation by angle $\xi$.
Since there is only one $L_i=L$, the metric factorizes to
\begin{align}
\mathbf{h}^{\alpha\beta}_{ab} = [\ba{\eps}^T\mathbf{m}_2\ba{\eps}]_{ab} \otimes [\hat{L}\hat{L}^T]^{\alpha\beta}
\end{align}
To find $R(\xi)\cdot \mathbf{h}$ we only need to calculate $R(\xi)^{-1} \hat{L}$.
With $g= R(\theta)$, we have $\hat{L}(\vx) = R(\theta) L \vx_0 = (-y,x) = r(-\sin\theta,\cos\theta)$ from \eqref{eq:gLx0-so2}.
Therefore, $R(\xi)^{-1}\hat{L}(\vx) = r(-\sin(\theta-\xi),\cos(\theta-\xi)) = \hat{L}(R(\xi)^{-1} \vx)$.
Using \eqref{eq:MSE-params0} in \eqref{eq:h-metric-covariance}, the transformed metric becomes
\begin{align}
R(\xi)\cdot \mathbf{h}^{\alpha\beta}(R(\theta)\vx_0) = \ba{\eps}^T\mathbf{m}_2 \ba{\eps} \otimes [R(-\xi)\hat{L}]^\alpha [R(-\xi)\hat{L}]^\beta = \mathbf{h}^{\alpha\beta}(R(\theta-\xi)\vx_0).
\end{align}
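This transformation rule can be checked numerically: since the $so(2)$ generator $L$ commutes with rotations, the field $\hat{L}(\vx)=L\vx=(-y,x)$ satisfies $R(\xi)^{-1}\hat{L}(\vx)=\hat{L}(R(\xi)^{-1}\vx)$. A small sketch (ours):

```python
import numpy as np

# Check R(xi)^{-1} Lhat(x) = Lhat(R(xi)^{-1} x) for the SO(2) field
# Lhat(x) = L x = (-y, x); it holds because L commutes with rotations.
def R(xi):
    c, s = np.cos(xi), np.sin(xi)
    return np.array([[c, -s], [s, c]])

L = np.array([[0.0, -1.0], [1.0, 0.0]])  # so(2) generator
x = np.array([0.7, -1.1])
xi = 0.9

lhs = R(xi).T @ (L @ x)   # R(xi)^{-1} = R(xi)^T for rotations
rhs = L @ (R(xi).T @ x)
```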
\subsubsection{Third term as a boundary term}
Since terms in \eqref{eq:Loss-MSE0} are scalars, they can be evaluated in any basis.
If $\mathcal{S}$ can be lifted to multiple Lie groups, either group can be used to evaluate \eqref{eq:Loss-MSE0}.
For example $\R^n/0$ can be lifted to both $T_n$ and $SO(n)\times \R_+$.
For the translation group $G=T_n$ we have $gL_i = L_i$ and $[\hat{L}_i]^\alpha
= \delta^\alpha_i$ (SI \ref{ap:example-Tn}), with $|\ro g/\ro x| = 1$ and $dg = d^n x$.
Thus, the last term in \eqref{eq:Loss-MSE0} simplifies to a complete divergence
$\int d^nx \ro_i (\Phi^T \mathbf{v}^i \Phi)$.
Using the generalized Stokes' theorem $\int_\mathcal{S} dw = \int_{\ro \mathcal{S}} w $, the last term in \eqref{eq:Loss-MSE0} becomes a boundary term.
When $\mathcal{S}$ is non-compact, the last term is
$ I_\ro= \int_{\ro \mathcal{S}} d\Sigma_i \Phi^T \mathbf{v}^i \Phi$, where $d\Sigma_i$ is the normal times the volume form of the $(n-1)$D boundary $\ro \mathcal{S}$ and is in the radial direction (e.g. for $\mathcal{S}=\R^n$ the boundary is a hyper-sphere $\ro \mathcal{S}= S^{n-1}$).
Generally we expect the features $\phi_n$ to be concentrated in a finite region of the space and to go to zero as $r\to \infty$ (if they do not, the loss term $\Phi^T\mathbf{m}_2\Phi$ diverges).
Thus, the last term in \eqref{eq:Loss-MSE0} generally becomes a vanishing boundary term and can be neglected.
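The vanishing of the boundary term can be illustrated on a grid: the discretized integral of a total divergence of a rapidly decaying field is numerically zero. A sketch (ours; the Gaussian stands in for $\Phi^T\mathbf{v}^i\Phi$):

```python
import numpy as np

# The grid integral of a total divergence of a decaying field vanishes,
# so the divergence term contributes no bulk piece for localized features.
n = 200
x = np.linspace(-6, 6, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi2 = np.exp(-(X**2 + Y**2))  # stands in for Phi^T v^i Phi

# divergence of the vector field (phi2, phi2) by finite differences
div = np.gradient(phi2, dx, axis=0) + np.gradient(phi2, dx, axis=1)
integral = div.sum() * dx * dx
```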
\subsubsection{MSE Loss for translation group $T_n$ }
As in the previous subsubsection, $gL_i = L_i$ and $[\hat{L}_i]^\alpha = \delta^\alpha_i$ (SI \ref{ap:example-Tn}), so the last term in \eqref{eq:Loss-MSE0} becomes a complete divergence
$I_\ro = \int d^nx \ro_i (\Phi^T \mathbf{v}^i \Phi)$, which by Stokes' theorem is a vanishing boundary term for features that decay at infinity.
Next, the second term in \eqref{eq:Loss-MSE0} can be worked out as
\begin{align}
\hat{L}_i^\alpha \ro_\alpha \phi^T \ba{\eps}^{iT} \mathbf{m}_2 \ba{\eps}^j\hat{L}_j^\beta \ro_\beta \phi
&= \ro_j \phi^T\ba{\eps}^{iT} \mathbf{m}_2 \ba{\eps}^j \ro_i \phi = \ro_j \phi^T \mathbf{h}^{ji} \ro_i \phi
\end{align}
where $\mathbf{h}^{ji}=\ba{\eps}^{iT} \mathbf{m}_2 \ba{\eps}^j $ is a general, space-independent metric compatible with translation symmetry.
When the weights $[W^i]^a_b \sim \mathcal{N}(0,1)$ are random Gaussian, we have $W^{jT} W^i \approx m^2 \delta^{ij}$ and we recover the Euclidean metric.
With the last term vanishing, the loss function \eqref{eq:Loss-MSE0} has a striking resemblance to a Lagrangian used in physics, as we discuss next.
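The claim that random weights yield a near-diagonal metric can be checked by sampling: for Gaussian $W^i$ the blocks $W^{iT}W^j$ concentrate on a multiple of the identity when $i=j$ and are comparatively small when $i\neq j$. A sketch (ours; dimensions are illustrative):

```python
import numpy as np

# For Gaussian weights, W^{iT} W^j / m concentrates on the identity when
# i = j and on zero when i != j, recovering a Euclidean-like metric.
rng = np.random.default_rng(2)
m = 2000
Wi = rng.normal(size=(m, 4))
Wj = rng.normal(size=(m, 4))

diag_block = Wi.T @ Wi / m   # i = j block
cross_block = Wi.T @ Wj / m  # i != j block, entries O(1/sqrt(m))
```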
\subsubsection{Boundary term with spherical symmetry}
When $\mathcal{S}\sim \R^n$ and $G=T_n$, the third term becomes a boundary term.
But we can also have $G= SO(n)\times \R_+$ (spherical symmetry and scaling).
The boundary is $\ro \mathcal{S}\sim S^{n-1}$, which has an $SO(n)$ symmetry.
The normal
$d\Sigma(\vx)$ is a vector pointing in the radial direction and $g$ is the lift for $\vx$.
Since $g\in SO(n)$, we have
\begin{align}
d\Sigma_\beta [gL_i \vx_0]^\beta &= d\Sigma^T g L_i \vx_0 = [g^T d\Sigma]^T L_i \vx_0
\end{align}
Since $g\in SO(n)$, $g^T = g^{-1}$ and $g^Td\Sigma(g\vx_0) = d\Sigma(\vx_0) =V_{n-1} \vx_0$, meaning the normal vector is rotated back toward $\vx_0$.
Here $V_{n-1}$ is the volume of the boundary $S^{n-1}$.
Hence we have
\begin{align}
d\Sigma^T g L_i \vx_0 = \vx_0^T L_i \vx_0 = 0
\end{align}
for all generators $L_i \in so(n)$ because $L_i= -L_i^T$ and hence diagonal entries like $\vx_0^T L_i \vx_0$ are zero.
Only for the scaling generator $L_0=I$ do we have $\vx_0^T L_0 \vx_0 = 1$.
This means that the last term in \eqref{eq:Loss-MSE0} can be nonzero at the boundary only if $\Phi^T\mathbf{v}^i\Phi$ is in the radial direction, meaning $\ba{\eps}^0 \neq 0$, and $\Phi$ does not vanish at the boundary.
However, a non-vanishing $\Phi$ at the boundary results in diverging loss unless the mass matrix $\mathbf{m}_2$ has eigenvalues equal to zero.
This is what happens in relativistic theories, where light rays can have nonzero $\Phi$ at infinity because they are massless.
\out{
\nd{revise!!!}
The last term can be written as $gL_i\cdot {d\over dg} (f_n^T W^{0T}W^i f_n)$.
Separating out $\ba{\eps}^i$ and defining $f'_n \equiv W^{0}f_n$, the last term reads $[\ba{\eps}^i]_{ab} gL_i\cdot {d\over dg}[{f'_n}^a{f'_n}^b]$.
Next, we note this is a complete derivative, as $\ba{\eps}^i gL_i\cdot df/dg = \delta \vx^\alpha \ro_\alpha f(\vx)$ was the first-order Taylor expansion of $f(\vx+\delta \vx ) = f(g(I+\ba{\eps}^iL_i)\vx_0)$.
\nd{Not always... if $\delta\vx$ is space dependent then it doesn't become a surface integral}
Using $\delta \vx$ and changing $\int_G dg$ to $\int_\mathcal{S} d\vx $, and Stoke's theorem ($\int_\mathcal{S} d f = \int_{\ro \mathcal{S}} f$ )
we find that the last term $ I_\ro$ is a total derivative and yields a boundary term.
\begin{align}
I_\ro =&\sum_n\int_G 2f_n^T W^{0T} W^i g L_i\cdot {df_n\over dg} dg = \sum_n\int_G dg [\ba{\eps}^i]_{ab} gL_i\cdot {d\over dg}[{f'_n}^a{f'_n}^b]\cr
=& \sum_n\int_\mathcal{S} d\vx [\delta\vx^\alpha]_{ab} \ro_\alpha [{f'_n}^a{f'_n}^b]
= \left. \Sigma_\alpha(\vx) [\delta\vx^\alpha]_{ab} \br{\sum_n {f'_n}^a{f'_n}^b} \right|_{\vx\to \ro \mathcal{S}}
\end{align}
where $\sigma_\alpha (\vx)$ is the normal vector of a hyper-surface defining the boundary $\ro\mathcal{S}$.
If $ \mathcal{S}$ is compact, its boundary $\ro \mathcal{S} = \emptyset$ so boundary terms vanish.
When $\mathcal{S}$ is non-compact, the boundary term may capture important things such as conserved quantities.
\nd{
but if it is non-zero it can be a conserved quantity.
Check its Noether's Theorem.
}
\nd{
\paragraph{Equivariance of parameters}
Derive what $w\cdot \phi$ does to $\mathbf{h}$ etc.
$w\cdot Q[f](g) = W^0f(w^{-1}g(I+\ba{\eps}^iL_i)) = $
\paragraph{Conservation laws and Noether's theorem}
Derive Noether's theorem for MSE.
The version for $\delta \phi$ should be the familiar one. Is there a version for $W$?
}
}
\subsection{Generalization error
\label{ap:generalization}
}
Equivariant neural networks are expected to be more robust than non-equivariant ones, and equivariance should improve generalization.
One way to check this is to see how the network would perform for an input $\phi'= \phi + \delta \phi$ which adds a small perturbation $\delta \phi$ to a real data point $\phi$.
Robustness to such perturbation would mean that, for optimal parameters $W^*$, the loss function would not change,
i.e. $I [\phi';W^*]=I [\phi;W^*]$.
This can be cast as a variational equation, requiring $I $ to be minimized around real data points $\phi$.
Writing $I[\phi;W]= \int d^nx \L[\phi;W]$, we have
\begin{align}
\delta I [\phi;W^*] &=\int_\mathcal{S} d^nx\br{
{\ro \L \over \ro \phi^a }\delta \phi^a + {\ro \L \over \ro (\ro_\alpha \phi^a) } \ro_\alpha (\delta \phi^a)
}
\end{align}
Doing a partial integration on the second term, we get
\begin{align}
\delta I [\phi;W^*] &=\int_\mathcal{S} d^nx\br{
{\ro \L \over \ro \phi^b } - \ro_\alpha {\ro \L \over \ro (\ro_\alpha \phi^b) }
} \delta \phi^b + \int_\mathcal{S} d^nx\ro_\alpha \br{
{\ro \L \over \ro (\ro_\alpha \phi^b) } \delta \phi^b
} \cr
&= \int_\mathcal{S} d^nx\br{
{\ro \L \over \ro \phi^b } - \ro_\alpha {\ro \L \over \ro (\ro_\alpha \phi^b) }
} \delta \phi^b + \int_{\ro \mathcal{S}} d^{n-1}\Sigma_\alpha \br{
{\ro \L \over \ro (\ro_\alpha \phi^b) } \delta \phi^b
}
\label{eq:loss-Euler-Lagrange-derivation}
\end{align}
where we used Stokes' theorem again to change the last term to a boundary integral.
Since features $\phi$ have to be finite in extent, $\phi(\vx)\to0$ as $|\vx|\to \infty $, the boundary term vanishes; we return to the boundary integral below.
The first term in \eqref{eq:loss-Euler-Lagrange-derivation} is the classic Euler-Lagrange (EL) equation.
Thus, requiring good generalization, i.e. $\delta I [\phi;W^*]/\delta \phi=0$, means that for optimal parameters $W^*$ the real data $\phi$ satisfies the EL equations
\begin{align}
\mbox{Generalization Error Minimization} \Longleftrightarrow \mbox{EL: }
\quad
{\ro \L \over \ro \phi^b } - \ro_\alpha {\ro \L \over \ro (\ro_\alpha \phi^b) } = 0
\label{eq:loss-Euler-Lagrange0}
\end{align}
Applying this to the MSE loss \eqref{eq:Loss-MSE0}, \eqref{eq:loss-Euler-Lagrange0} becomes
\begin{align}
\mathbf{m}_2 \phi - \ro_\alpha \pa{|J| \mathbf{h}^{\alpha\beta} \ro_\beta \phi } - \ro_\alpha\pa{|J|\mathbf{v}^i [\hat{L}_i]^\alpha }\phi =0
\label{eq:loss-EL-MSE0}
\end{align}
where $|J| = |\ro g/\ro x|$ is the determinant of the Jacobian.
For the translation group, \eqref{eq:loss-EL-MSE0} becomes a Helmholtz equation
\begin{align}
\mathbf{h}^{ij}\ro_i\ro_j\phi= \ba{\eps}^{iT}\mathbf{m}_2 \ba{\eps}^j \ro_i\ro_j\phi = \mathbf{m}_2 \phi
\end{align}
where $\mathbf{h}^{ij}\ro_i\ro_j = \del^2$ is the Laplace-Beltrami operator with $\mathbf{h}$ as the metric.
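As a consistency check (ours; scalar $h$ and $m^2$ stand in for the metric and mass matrix), $\phi(x)=e^{-kx}$ with $k=m/\sqrt{h}$ solves the 1D Helmholtz equation $h\,\phi''=m^2\phi$, which a finite-difference residual confirms:

```python
import numpy as np

# phi(x) = exp(-k x), k = m / sqrt(h), solves h phi'' = m^2 phi;
# check the finite-difference residual on a 1D grid.
h, m2 = 2.0, 3.0
k = np.sqrt(m2 / h)
x = np.linspace(0.0, 5.0, 1001)
dx = x[1] - x[0]
phi = np.exp(-k * x)

lap = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2  # phi'' on interior points
residual = np.max(np.abs(h * lap - m2 * phi[1:-1]))
```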
\paragraph{Conservation laws}
The equivariance condition \eqref{eq:equivairance-general} can be written for the integrand of the loss $\L[\phi,W]$.
Since $G$ is the symmetry of the system, transforming an input $\phi\to w\cdot \phi$ by $w\in G$ the integrand changes equivariantly as $\L[w\cdot \phi]= w\cdot \L[\phi] $.
Now, let $w$ be an infinitesimal $w\approx I+\eta^i L_i$.
The action $w\cdot \phi $ can be written as a Taylor expansion, similar to the one in L-conv, yielding
\begin{align}
w\cdot \phi(\vx)& = \phi(w^{-1}\vx) = \phi((I-\eta^i L_i)\vx) = \phi(\vx) - \eta^i [L_i \vx]^\alpha \ro_\alpha \phi(\vx) \cr
& = \phi(\vx) + \delta \vx^\alpha \ro_\alpha \phi(\vx) = \phi(\vx) + \delta \phi(\vx)
\label{eq:delta-phi}
\end{align}
with $\delta \vx^\alpha = - \eta^i [L_i \vx]^\alpha $ and $\delta \phi = \delta \vx^\alpha \ro_\alpha \phi$.
Similarly, we have $w\cdot \L =\L + \delta \vx^\alpha \ro_\alpha \L$.
Next, we can use the chain rule to calculate $\L[w\cdot \phi]$.
\begin{align}
\L[w\cdot \phi]& = \L[\phi(\vx) + \delta \phi(\vx) ]= \L[\phi] + {\ro \L \over \ro \phi^b } \delta \phi^b + {\ro \L \over \ro (\ro_\alpha \phi^b) } \delta \ro_\alpha \phi^b \cr
&= \L[\phi] + \br{{\ro \L \over \ro \phi^b } -\ro_\alpha {\ro \L \over \ro (\ro_\alpha \phi^b) }}\delta \phi^b + \ro_\alpha \pa{ {\ro \L \over \ro (\ro_\alpha \phi^b) }\delta \phi^b }
\label{eq:Noether-expand}
\end{align}
where we used the fact that $\delta \vx = -\eta^i L_i \vx $ can vary independently from $\vx $ (because of $\eta^i$), and so $\delta \ro_\alpha \phi^b = \ro_\alpha \delta\phi^b $.
In the same way, $\delta \vx^\alpha \ro_\alpha \L = \ro_\alpha (\delta \vx^\alpha \L)$.
Now, if $\phi$ are the real data and the parameters in $\L$ minimize generalization error, then $\L$ satisfies \eqref{eq:loss-EL-MSE0}.
This means that the first term in \eqref{eq:Noether-expand} vanishes.
Setting the second term equal to $w\cdot \L$ we get
\begin{align}
\L[w\cdot \phi] - w\cdot \L[\phi]& = \ro_\alpha \br{{\ro \L \over \ro (\ro_\alpha \phi^b) }\delta \phi^b - \delta \vx^\alpha \L } =0
\end{align}
Thus, the terms in the brackets are divergence free.
These terms are called a Noether conserved current $J^\alpha$.
In summary
\begin{align}
\mbox{Noether current: } J^\alpha &= {\ro \L \over \ro (\ro_\alpha \phi^b) } \delta \phi^b - \delta \vx^\alpha \L , & \delta I [\phi;W^*]=0 \quad \Rightarrow \ro_\alpha J^\alpha &= 0
\end{align}
$J$ captures the change of the Lagrangian $\L$ along symmetry direction $\hat{L}_i$.
Plugging $\delta \phi = \delta \vx^\alpha \ro_\alpha \phi$ from \eqref{eq:delta-phi} we find
\begin{align}
&\ro_\alpha \br{{\ro \L \over \ro (\ro_\alpha \phi^b) }\delta \phi^b - \delta \vx^\alpha \L } =\delta \vx^\beta \ro_\alpha \br{{\ro \L \over \ro (\ro_\alpha \phi^b) }\ro_\beta \phi^b - \delta^\alpha_\beta \L } = \delta \vx^\beta \ro_\alpha T^\alpha_\beta.
\end{align}
$T^\alpha_\beta$ is known as the stress-energy tensor in physics \citep{landau2013classical}.
It is the Noether current associated with space (or space-time) variations $\delta \vx$.
It appears here because $G$ acts on the space, as opposed to acting on feature dimensions.
For the MSE loss we have
\begin{align}
T^\alpha_\beta \equiv& {\ro \L \over \ro (\ro_\alpha \phi^b) }\ro_\beta \phi^b - \delta^\alpha_\beta \L = \ro_\rho \phi^T\pa{\delta^\lambda_\beta \mathbf{h}^{\alpha\rho}- \delta^\alpha_\beta \mathbf{h}^{\rho\lambda} }\ro_\lambda \phi - \delta^\alpha_\beta\, \phi^T \mathbf{m}_2 \phi
\end{align}
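A 1D toy version (our construction, not from the paper) makes the conservation law concrete: for $\L = h(\phi')^2 + m^2\phi^2$, a solution of the EL equation $h\phi''=m^2\phi$ makes $T = (\ro\L/\ro\phi')\,\phi' - \L$ constant, i.e. $\ro_x T = 0$:

```python
import numpy as np

# 1D toy Noether check: for L = h (phi')^2 + m2 phi^2, a solution of the
# EL equation h phi'' = m2 phi makes T = (dL/dphi') phi' - L constant in x.
h, m2 = 2.0, 3.0
k = np.sqrt(m2 / h)
x = np.linspace(0.0, 1.0, 1001)
phi = np.exp(k * x)   # exact EL solution
dphi = k * phi        # exact phi'

T = 2 * h * dphi * dphi - (h * dphi**2 + m2 * phi**2)  # = h phi'^2 - m2 phi^2
dT = np.gradient(T, x)
```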
It would be interesting to see if the conserved currents can be used in practice as an alternative way for identifying or discovering symmetries.
\section{Tensor notation details \label{ap:tensor-long} }
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{figs2/L-conv-S0S-vec-1.pdf}
\caption{\textbf{
Manifold vs. discretized Space}
While real systems can have a continuous manifold
$\mathcal{S}_0$ as their base space, the data collected from it is often a discrete array $\mathcal{S}$.
The discretization (coarsening) will induce some of the topology of $\mathcal{S}_0$ on $\mathcal{S}$ as a graph.
Graph neighborhoods $<\mu>$ on the discrete $\mathcal{S}$ represent tangent spaces $T_{\vx_\mu}\mathcal{S}$ and approximate $T_\vx \mathcal{S}_0$.
The lift takes $\vx\in \mathcal{S}_0$ to $g\in G$, and maps the tangent spaces $T_\vx \mathcal{S}_0\to T_gG$.
Each Lie algebra basis $L_{i}\in \mathfrak{g}=T_IG$ generates a vector field on the tangent bundle $TG$ via the pushforward as $L_i^{(g)} = gL_{i}g^{-1}$.
Due to the lift, each $L_i$ also generates a vector field $\hat{L}_i^\alpha(\vx)\ro_\alpha = [gL_i\vx_0]^\alpha \ro_\alpha$, with $\vx=g\vx_0$.
Analogously, on $\mathcal{S}$ we get
a vector field $[\hat{L}_i]_\mu= g_\mu L_i\vx_0$ on $T\mathcal{S}$, with $\vx_\mu=g_\mu\vx_0$.
Note that depicting $\mathcal{S}$ and $\mathcal{S}_0$ as 2D is only for convenience.
They may have any dimensions.
}
\label{fig:S0S}
\end{figure}
If the dataset being analyzed is in the form of $f(\vx)$ for some sample of points $\vx$, together with derivatives $\del f(\vx) $, we can use the L-conv formulation above.
However, in many datasets, such as images, $f(\vx)$ is given as a finite dimensional array or tensor, with $\vx$ taking values over a grid.
Even though the space $\mathcal{S}$ is now discrete, the group which acts on it can still be continuous (e.g. image rotations).
Let $\mathcal{S}= \{\vx_0,\dots \vx_{d-1}\}$ contain $d$ points.
Each $\vx_\mu$ represents a coordinate in a higher-dimensional grid.
For instance, on a $10\times 10$ image, $\vx_0$ is the point $(x,y)=(0,0)$ and $ \vx_{99}$ is $(x,y) =(9,9)$.
\paragraph{Feature maps}
To define features $f(\vx_\mu)\in \R^m$ for $\vx_\mu \in \mathcal{S}$, we embed $\vx_\mu \in \R^d$ and
encode them as the canonical basis (one-hot) vectors with components $[\vx_\mu]^\nu = \delta_\mu^\nu$ (Kronecker delta), e.g. $\vx_0 = (1,0,\dots, 0)$.
The feature space becomes $\mathcal{F}= \R^d \otimes \R^m $, meaning feature maps $\vf \in \mathcal{F}$ are $d\times m$ tensors, with $f(\vx_\mu) = \vx_\mu^T \vf = \vf_\mu $.
\paragraph{Group action}
Any subgroup $G\subseteq \mathrm{GL}_d(\R)$ of the general linear group (invertible $d\times d$ matrices) acts on $\R^d$ and $\mathcal{F}$.
Since $\vx_\mu \in \R^d$, $g \in G$ also acts naturally on $\vx_\mu$.
The resulting $\vy = g \vx_\mu $ is a linear combination $\vy = c^\nu \vx_\nu$ of elements of the discrete $\mathcal{S}$, not a single element.
The action of $G$ on $\vf$ and $\vx$, can be defined in multiple equivalent ways.
We define $f(g \cdot \vx_\mu) = \vx_\mu^T g^T\vf, \forall g\in G$.
For $w\in G$ we have
\begin{align}
w\cdot f(\vx_\mu)&= f(w^{-1}\cdot \vx_\mu) = \vx_\mu^T w^{-1T}\vf =
[w^{-1}\vx]^T \vf
\label{eq:G-action-tensor-T0}
\end{align}
Dropping the position $\vx_\mu$, the transformed features are matrix product $w\cdot \vf= w^{-1T}\vf$.
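One can verify numerically that $w\cdot\vf = w^{-1T}\vf$ is a left group action, i.e. $(vw)\cdot\vf = v\cdot(w\cdot\vf)$, since $(vw)^{-1T}=v^{-1T}w^{-1T}$. A sketch with random invertible matrices (ours):

```python
import numpy as np

# w . f = w^{-1 T} f is a left action: (v w) . f = v . (w . f),
# because (v w)^{-1 T} = v^{-1 T} w^{-1 T}.
rng = np.random.default_rng(3)
d, m = 6, 3
F = rng.normal(size=(d, m))
v = rng.normal(size=(d, d)) + 3 * np.eye(d)  # generically invertible
w = rng.normal(size=(d, d)) + 3 * np.eye(d)

def act(g, feat):
    return np.linalg.inv(g).T @ feat

lhs = act(v @ w, F)
rhs = act(v, act(w, F))
```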
\paragraph{G-conv and L-conv in tensor notation}
Writing G-conv \eqref{eq:G-conv} in the tensor notation we have
\begin{align}
[\kappa\star f](g\vx_0) = \int_G \kappa(v)f(gv\vx_0) dv = \vx_0^T \int_G v^{T} g^{T} \vf \kappa^T(v) dv
\equiv \vx_0^T [\vf \star \kappa](g)
\label{eq:G-conv-tensor0}
\end{align}
where we moved $\kappa^T(v) \in \R^m \otimes \R^{m'}$ to the right of $\vf$ because it acts as a matrix on the output index of $\vf$.
The equivariance of \eqref{eq:G-conv-tensor0} is readily checked with $w\in G$
\begin{align}
w\cdot [\vf \star \kappa](g) & = [\vf \star \kappa](w^{-1}g)
= \int_G v^{T} g^T w^{-1T} \vf \kappa^T(v) dv
= [(w\cdot\vf) \star \kappa](g)
\end{align}
where we used $[w^{-1}g]^{T}\vf = g^{T} w^{-1T}\vf $.
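For a finite group the integral becomes a sum, and the equivariance of \eqref{eq:G-conv-tensor0} can be tested directly. A sketch (ours) with the cyclic group of shift matrices acting on $\R^d$:

```python
import numpy as np

# Discrete G-conv over the cyclic group of shift matrices:
# [f * kappa](g) = sum_v v^T g^T F kappa(v)^T, and equivariance means
# [(w.f) * kappa](g) = [f * kappa](w^{-1} g).
d, m, mp = 5, 3, 2
rng = np.random.default_rng(4)
P = np.roll(np.eye(d), 1, axis=0)  # generator of cyclic shifts
G = [np.linalg.matrix_power(P, k) for k in range(d)]
F = rng.normal(size=(d, m))
kappa = [rng.normal(size=(mp, m)) for _ in range(d)]  # one kappa(v) per element

def gconv(feat, g):
    return sum(G[k].T @ g.T @ feat @ kappa[k].T for k in range(d))

w, g = G[2], G[3]
wF = np.linalg.inv(w).T @ F        # w . f
lhs = gconv(wF, g)                 # [(w.f) * kappa](g)
rhs = gconv(F, np.linalg.inv(w) @ g)  # [f * kappa](w^{-1} g)
```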
Similarly, we can rewrite L-conv \eqref{eq:L-conv} in the tensor notation.
Defining $v_\eps =I+ \ba{\eps}^i L_i $
\begin{align}
Q[\vf](g)
&= W^0 f\pa{g\pa{I+ \ba{\eps}^i L_i}}
= \vx_0^T \pa{I+ \ba{\eps}^i L_i}^{T} g^{T} \vf W^{0T} \cr
&= \pa{\vx_0+ \ba{\eps}^i [gL_i\vx_0]}^T \vf W^{0T}.
\label{eq:L-conv-tensor0}
\end{align}
Here, $\hat{L}_i=gL_i\vx_0$ is exactly the tensor analogue of the pushforward vector field $\hat{L}_i$ in \eqref{eq:dfdg-general}.
We will make this analogy more precise below.
The equivariance of L-conv in tensor notation is again evident from the $g^T \vf$, resulting in
\begin{align}
Q[w\cdot \vf](g) &= \vx_0^Tv_\eps^T g^T w^{-1T} \vf W^{0T}= Q[\vf](w^{-1}g) = w\cdot Q[\vf](g)
\label{eq:L-conv-equiv-tensor0}
\end{align}
Next, we will discuss how to implement \eqref{eq:L-conv-tensor0} in practice and how to learn symmetries with L-conv.
We will also discuss the relation between L-conv and other neural architectures.
\subsection{Constraints from topology on tensor L-conv \label{ap:L-conv-tensor-interpret}
}
To implement \eqref{eq:L-conv-tensor0} we need to specify the lift and the form of $L_i$.
We will now discuss the mathematical details leading to an easy-to-implement form of \eqref{eq:L-conv-tensor0}.
\paragraph{Topology}
Although the discrete space $\mathcal{S}$ is a set of points, in many cases it has a topology.
For instance, $\mathcal{S}$ can be a discretization of a manifold $\mathcal{S}_0$, or vertices on a lattice or a general graph.
We encode this topology in an undirected graph (i.e. a 1-dimensional simplicial complex) with vertex set $\mathcal{S}$ and edge set $\mathcal{E}$.
Instead of the commonly used graph adjacency matrix $\mA$, we will use the incidence matrix $\mB:\mathcal{S}\times \mathcal{E}\to \{0,1,-1\}$.
$\mB^\mu_\alpha=1$ or $-1$ if edge $\alpha$ starts or ends at node $\mu$, respectively, and $\mB^\mu_\alpha=0$ otherwise (undirected graphs have pairs of incoming and outgoing edges).
Similar to the continuous case we will denote the topological space $(\mathcal{S},\mathcal{E},\mB)$ simply by $\mathcal{S}$.
\out{
The Laplacian $\mL = \mD-\mA$ becomes $\mL = \mB\mB^T$.
A weighted graph can be encoded using a diagonal weight matrix $\mW_{ab}$ for the edges, with the weighted Laplacian $\mL = \mB \mW\mB^T$.
}
Figure \ref{fig:S0S} summarizes some of the aspects of the discretization as well as analogies between $\mathcal{S}_0$ and $\mathcal{S}$.
Technically, the group $G_0$ acting on $\mathcal{S}_0$ and $G$ acting on $\mathcal{S}$ are different.
But we can find a group $G$ which closely approximates $G_0$ (see SI \ref{ap:approx}).
For instance, \citet{rao1999learning} used the Shannon-Whittaker interpolation theorem \citep{whitaker1915functions} to define continuous 1D translation and 2D rotation groups on discrete data.
We return to this when expressing CNN as L-conv in \ref{sec:approx}.
\paragraph{Neighborhoods as discrete tangent bundle}
$\mB$ is useful for extending differential geometry to graphs \citep{schaub2020random}.
Define the neighborhood $<\mu>=\left\{ \alpha \in \mathcal{E} \big|\mB_\alpha^\mu =1 \right\}$ of $\vx_\mu$ as the set of outgoing edges.
$<\mu>$ can be identified with $T_{\vx_\mu} \mathcal{S}$ since, for $\alpha \in <\mu>$, $\mB_\alpha \vf = \vf_\mu - \vf_\nu \sim \ro_\alpha \vf $, where $\mu$ and $\nu$ are the endpoints of edge $\alpha$.
This relation becomes exact when $\mathcal{S}$ is an $n$D square lattice with infinitesimal lattice spacing.
The set of all neighborhoods is $\mB$ itself and encodes the approximate tangent bundle $T\mathcal{S}$.
For some operator $C:\mathcal{S}\otimes\mathcal{F} \to \mathcal{F} $ acting on $\vf $ we will say $C \in <\mu>$ if its action remains within vertices connected to $\mu$, meaning
\begin{align}
C \in <\mu>: \qquad C\vf = \sum_{\alpha\in <\mu>} \tilde{C}^\alpha \mB_\alpha^\nu \vf_\nu
\end{align}
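The identification of edge differences with directional derivatives can be checked numerically. The following sketch (our own illustration) discretizes $f(x)=\sin x$ on a periodic 1D lattice and verifies that the edge differences $\vf_\mu - \vf_\nu$, divided by the lattice spacing, approximate $\partial f$ to first order, as claimed for the square-lattice limit.

```python
import numpy as np

# periodic 1D lattice with d sites and spacing h = 2*pi/d
d = 200
h = 2 * np.pi / d
x = h * np.arange(d)
edges = [(i, (i + 1) % d) for i in range(d)]        # edge i -> i+1

B = np.zeros((d, d))
for a, (u, v) in enumerate(edges):
    B[u, a], B[v, a] = 1.0, -1.0

f = np.sin(x)

# edge differences f_mu - f_nu play the role of directional derivatives:
# (B^T f)_a = f(x_i) - f(x_{i+1}) ~ -h f'(x_i) for small h
diffs = B.T @ f
deriv = -diffs / h                                   # forward-difference estimate of f'
assert np.max(np.abs(deriv - np.cos(x))) < h         # first-order accuracy
```

Refining the lattice (increasing $d$) shrinks the error linearly in $h$, matching the statement that the relation becomes exact in the infinitesimal-spacing limit.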
\paragraph{Lift and group action}
Lie algebra elements $L_i$ by definition take the origin $\vx_0$ to points close to it.
Thus, for small enough $\eta$,
$(I+\eta L_i) \vx_0 \in <0> $ and so $[L_i \vx_0]^T\vf=[\hat{L}_i]_0^\rho \vf_\rho = \sum_{\alpha \in <0>}
[\hat{\ell}_i]_0^\alpha \mB^\rho_\alpha \vf_\rho $.
The coefficients $[\hat{\ell}_i]_0^\alpha \in \R$ are in fact the discrete version of $\hat{L}_i$ components from \eqref{eq:dfdg-general}.
For the pushforward $gL_ig^{-1}$, we define the lift via $\vx_\mu = g_\mu \vx_0$.
We require the $G$-action to preserve the topology of $\mathcal{S}$, meaning points which are close remain close after the $G$-action.
As a result, $<\mu>$ can be reached by pushing forward elements in $<0>$.
Thus, for each $i$, $\exists \eta \ll 1 $ such that $g_\mu (I+\eta L_i) \vx_0 \in<\mu>$, meaning for a set of coefficients $ [\hat{L}_i]_\mu^\nu \in \R$
we have
\begin{align}
[g_\mu (I+\eta L_i) \vx_0]^T\vf & = \vf_\mu+ \eta \sum_{\alpha\in <\mu>}
[\hat{\ell}_i]_\mu^\alpha \mB_\alpha ^\nu\vf_\nu
\end{align}
where $\vf_\mu = \vx_\mu^T\vf $.
Acting with $[g_\mu L_i \vx_0]^T\vx_\nu $ and
inserting $I= \sum_\rho \vx_\rho \vx_\rho^T$ we have
\begin{align}
[\hat{L}_i]_\mu^\nu &=
[\hat{\ell}_i]_\mu^\alpha \mB_\alpha^\nu
= [g_\mu L_i\vx_0]^\nu =\sum_{\rho} [g_\mu\vx_\rho \vx^T_\rho L_i \vx_0]^T\vx_\nu
\cr & = \sum_{\rho\in <0>} [L_i]_0^\rho [\vx_\nu^T g_\mu\vx_\rho]^T
= [L_i]_0^\rho [g_\mu]^\nu_\rho = [\hat{\ell}_i]_0^\alpha \mB_\alpha^\rho [g_\mu]^\nu_\rho
\label{eq:hat-L-discrete0}
\end{align}
This $\hat{L}_i \equiv \hat{\ell}_i^\alpha \mB_\alpha $ is the discrete $\mathcal{S}$ version of the vector field $\hat{L}_i(\vx) = [g L_i \vx_0]^\alpha \ro_\alpha $ in \eqref{eq:dfdg-general}.
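As a sanity check of this formula, the sketch below (our own illustration) specializes to periodic 1D translations: $g_\mu$ is the $\mu$-fold cyclic shift, $L$ is the discrete derivative generator, and the rows $[\hat{L}]_\mu^\nu = [g_\mu L \vx_0]^\nu$ reassemble the generator; since the group is abelian, the pushforward $g_\mu L g_\mu^{-1}$ equals $L$.

```python
import numpy as np

d = 8
P = np.roll(np.eye(d), 1, axis=0)        # cyclic shift: P @ e_0 = e_1
L = P - np.eye(d)                        # discrete generator of 1D translations

# lift: g_mu = P^mu takes x_0 to x_mu
g = [np.linalg.matrix_power(P, mu) for mu in range(d)]

# pushforward g_mu L g_mu^{-1} = L for the abelian translation group
# (P is a permutation, so its inverse is its transpose)
for g_mu in g:
    assert np.allclose(g_mu @ L @ g_mu.T, L)

# rows of hat(L): [hat L]_mu^nu = [g_mu L x_0]^nu
x0 = np.eye(d)[0]
hatL = np.stack([g_mu @ L @ x0 for g_mu in g])
assert np.allclose(hatL, L.T)            # the generator copied to every site mu
```

Row $\mu$ of `hatL` has $-1$ at $\nu=\mu$ and $+1$ at $\nu=\mu+1$, i.e. the same difference stencil translated to each site, which is exactly the discrete vector field described above.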
\subsection{Approximating a symmetry and discretization error \label{ap:approx}}
\paragraph{Discretization error}
While systems such as crystalline solids are discrete in nature, many other datasets such as images result from discretization of continuous data.
The discretization (or ``coarsening'' \citep{bronstein2021geometric}) will modify the groups that can act on the space.
For example, first rotating a shape by $SO(2)$ then taking a picture is different from first taking a picture then rotating the picture (i.e. group action and discretization do not commute).
Nevertheless, in most cases in physics and machine learning the symmetry group $G_0$ of the space before discretization has a small Lie algebra dimension $n$ (e.g. $SO(3)$, $SE(3)$, $SO(3,1)$, etc.).
Usually the resolution $d$ of the discretization satisfies $d \gg n$.
In this case, there always exist some $G\subseteq \mathrm{GL}_d(\R)$ which approximates $G_0$ reasonably well.
The approximation means $\forall g_0 \in G_0, \exists g\in G $ such that the error $\mathcal{L}_G=\|g_0\cdot f(\vx_\mu) -\vx_\mu^T g \vf\|^2<\eta^2 $ where $\eta$ depends on the resolution of the discretization.
Minimizing the error $\mathcal{L}_G$ can be the process of identifying the $G$ which best approximates $G_0$.
We will denote this approximate similarity as $G \simeq G_0$.
For example, \citet{rao1999learning} used the Shannon-Whittaker Interpolation theorem \citep{whitaker1915functions} to translate discrete 1D signals (features) by arbitrary, continuous amounts.
In this case the transformed features are $\vf'_\mu = g(z)_\mu^\nu \vf_\nu $, where $g(z)_\mu^\nu = {1\over d} \sum_{p=-d/2}^{d/2} \cos \pa{{2\pi p\over d}(z+\mu -\nu) } $ approximates the shift operator for continuous $z$.
The $g(z)$ form a group because $g(w)g(z)=g(w+z)$, which is a representation for periodic 1D shifts.
\citet{rao1999learning} also use a 2D version of the interpolation theorem to approximate $SO(2)$.
In practice, we can assume the true symmetry to be $G$, since we only have access to the discretized data and cannot measure $G_0$ directly.
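This construction is easy to verify numerically. The sketch below (our own check) uses an odd $d$ and sums $p$ over one full period, $p=-(d-1)/2,\dots,(d-1)/2$, a minor variant of the endpoint convention above that makes the identities exact; with it, integer shifts reproduce the cyclic shift and the group law $g(w)g(z)=g(w+z)$ holds to machine precision.

```python
import numpy as np

d = 15                                   # odd, so p ranges over one full period
mu = np.arange(d)

def g(z):
    """Interpolated shift operator g(z)_mu^nu for a continuous shift z."""
    p = np.arange(-(d - 1) // 2, (d - 1) // 2 + 1)
    k = z + mu[:, None] - mu[None, :]    # the argument z + mu - nu
    return np.cos(2 * np.pi * p[:, None, None] * k / d).sum(axis=0) / d

f = np.random.default_rng(0).standard_normal(d)

# integer shifts reproduce the exact cyclic shift f'_mu = f_{mu+z}
assert np.allclose(g(3) @ f, np.roll(f, -3))

# group law g(w) g(z) = g(w + z), also for fractional shifts
assert np.allclose(g(0.75) @ g(1.5), g(2.25))
```

Fractional $z$ gives a dense interpolation between lattice shifts, which is precisely what lets a discrete signal carry an action of the continuous translation group.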
\out{
\paragraph{Lift}
We want the lift $\vx_\mu = g_\mu \vx_0$ to preserve the topology $(\mathcal{S},\mA)$.
While it is trivial to find permutations taking $\vx_0$ to $\vx_\mu$, most of them won't preserve all neighborhoods $<\mu>$.
For instance, if $G$ is approximating 1D translations the lift must move adjacent points together.
Adjacency, captured in $\mA$, is also found by moving points in $S$ using infinitesimal group elements $v_\eps = I+\eps^iL_i$.
Since 1D shifts have a single Lie algebra basis $L$,
there's only one way to define $g_\mu$, namely shifting all points by index $\mu$.
This can be achieved using circulant matrices.
Hence, the lift for periodic 1D translations $G\simeq S^1$ is
\begin{align}
g_\mu & = \sum_{\nu = 0}^{d } \vx_{\mu + \nu } \vx_{\nu}^T \quad (\mu + \nu\ \mathrm{mod}\ d)
&
[g_\mu]^\rho_\lambda &=\sum_{\nu} \delta_{\mu+\nu, \lambda} \delta^\rho_\nu
= \delta^\rho_{\lambda-\mu}
\label{eq:gmu-circ-comp0}
\end{align}
With $ g_\mu^{-1}=g_\mu^T$.
The uniqueness of $g_\mu$ here was because there was a single $L_i$, which generates a unique flow on $\mathcal{S}$.
For groups with multiple $L_i$, using the path-ordered notation $g_\mu = P\exp[\int_\gamma dt^i L_i]$ we see that at every step along a path $\gamma $ a different $L_i$ step is taken.
For non-Abelian groups where $[L_i,L_j]\ne 0$, the order of these steps matters and
the lift is not uniquely defined.
In fact, as we see later, every choice of the lift defines a ``gauge'' on the tangent bundle $T\mathcal{S}$.
Gauges have been used in Gauge Equivariant Mesh (GEM) CNN \citet{cohen2019gauge} (see \citet{bronstein2021geometric} for a review) and
used extensively in physics
\citep{polyakov2018gauge}.
\nd{Equivariance of L-conv}
Since in connected Lie groups larger group elements can be constructed as $u=\prod_\alpha \exp[t_\alpha \cdot L]$ \citep{hall2015lie} from elements near the identity, it follows that any G-conv layer can be constructed from multiple L-conv layers.
}
\out{
\subsection{Fully-connected as G-conv and L-conv \label{ap:FC2L-conv} }
\nd{remove}
Consider the case where the topology of $\mathcal{S}$ is a complete graph.
In this case, the neighborhood of each $\vx_\mu$ includes all other vertices.
Therefore, the topology of $\mathcal{S}$ puts no constraint on the Lie algebra of $G$ and any linear transformation in $G=\mathrm{GL}_d(\R)$ preserves the topology of $\mathcal{S}$.
The Lie algebra $\mathrm{gl}_d(\R)$ has $d^2$ basis elements $L_i$, one for each matrix entry.
Thus, we also have $d^2$ weights $W^i = W^0\ba{\eps}^i$.
The L-conv output \eqref{eq:L-conv-tensor} $Q[\vf]
=\vf W^0 + \hat{L}_i\vf W^i$ has $d$ spatial dimensions $\mu$ and $d^2\times m'$ filter dimensions $i,a$.
Pooling over the spatial index $\mu$, we get $\sum_\mu (\vf_\mu^a+[\ba{\eps}^i]^a_b [\hat{L}_i]_\mu^\nu \vf^b_\nu) = V^{\nu a}_b \vf_\nu^b $.
Here $V$ is a general linear transformation on $\vf$, similar to one linear perceptron.
Multiple L-conv layers, both parallel and in series with residuals, can emulate wider fully-connected networks.
These are equivariant under $w\in \mathrm{GL}_d(\R) $, as we elaborate now.
The idea is that in FC layers $ w\cdot [V \vf] = Vw^{-1T}\vf = V'\vf$ captures their equivariance.
Recall that $[\hat{L}_i]_\mu^\nu = [g_\mu L_i \vx_0]^\nu$.
For simplicity, let L-conv act only on the index $\mu$, meaning $[\ba{\eps}_i]^a_b = \eps_i \delta^a_b $, and $W^0=I$.
The action \eqref{eq:L-conv-tensor} becomes
\begin{align}
Q[\vf]_\mu = Q[\vf](g_\mu) = \vx_0^T v_\eps^Tg_\mu^T \vf
= [(I+\ba{\eps}^i\hat{L}_i)\vf]_\mu .
\end{align}
Its equivariance under $w\in \mathrm{GL}_d(\R)$ follows \eqref{eq:L-conv-equiv-tensor} as
\begin{align}
Q[w\cdot \vf](g_\mu) & = \vx_0^T v_\eps^Tg_\mu^T w^{-1T}\vf = Q[ \vf](w^{-1} g_\mu)\cr
&= [(I+\ba{\eps}^i \hat{L}_i)w^{-1T}\vf]_\mu .
\end{align}
Combining multiple layers of L-conv, we get $v\vf =v_{\eps_n}\dots v_{\eps_1}\vf \approx \exp[t^i \hat{L}_i]\vf $, where $v\in \mathrm{GL}_d(\R)$.
When there is no restriction on the weights, meaning $G=\mathrm{GL}_d(\R)$, L-conv becomes a fully-connected layer, as shown in the following proposition.
\out{
\begin{proposition}\thlabel{prop:FC}
A fully-connected neural network layer can be written as an L-conv layer using $\mathrm{GL}_d(\R)$ generators, followed by a sum pooling and nonlinear activation.
\end{proposition}
\begin{proof}
The generators of $\mathrm{GL}_d(\R)$ are one-hot $\mE \in \R^{d\times d}$ matrices $L_i = \mE_{(\alpha, \beta)}$ which are non-zero only at index $i=(\alpha, \beta)$ \footnote{We may also label them by a single index like $i = \alpha + \beta d$, but two indices are more convenient.}
with elements written using Kronecker deltas
\begin{align}
\mathrm{GL}_d(\R)\mbox{ generators}: L_{i,\mu}^\nu = [\mE_{(\alpha, \beta)}]_\mu^\nu &= \delta_{\mu\alpha} \delta^{\nu}_\beta
\label{eq:GL-generators}
\end{align}
Now, consider the weight matrix $w\in \R^{m \times d} $ and bias $b \in \R^m$ of a fully connected layer acting on $h\in \R^d$ as $F(h) = \sigma(w\cdot h+b)$.
The matrix element can be written as
\begin{align}
w_b^{\nu} & = \sum_\mu \sum_{\alpha,\beta} w_b^{\alpha}\mathbf{1}_\beta [\mE_{(\alpha,\beta)}]^\nu_\mu \cr
&= \sum_\mu \sum_{\alpha,\beta} W_{(\alpha,\beta)}^{b,1} [\mE_{(\alpha,\beta)}]^\nu_\mu
= \sum_\mu W^{b,1} \cdot [L]^\nu_\mu
\end{align}
L-conv with weights $W_{(\alpha,\beta)}^{b,1} = w_b^\alpha \mathbf{1}_\beta$ (1 input channel, and $\mathbf{1}$ being a vector of ones)
followed by pooling over $\mu$ is the same as a fully connected layer with weights $w$.
\end{proof}
}
}
\section{Experiments \label{ap:experiments}}
\input{secs2/experiments2}
\input{secs2/experiments}
TITLE: Are gravitational waves transverse or longitudinal waves, or do they have unique/unknown properties?
QUESTION [4 upvotes]: Gravitational waves propagate through a medium of space-time. Are they transverse waves or longitudinal waves? Or do they propagate without oscillating?
REPLY [4 votes]: Gravitational waves are transverse but their possible polarizations are described not by a transverse vector but by a transverse tensor.
Electromagnetic waves moving in the $z$ direction may have two possible polarization vectors $x$ or $y$, or their (complex) linear combinations – vectors perpendicular to the $z$ axis.
Gravitational waves in 3+1 dimensions also have two polarizations that may be described by the components of a tensor $h_{xx}=-h_{yy}$ and by $h_{xy}$. One may again consider complex linear combinations of these polarizations, e.g. circular polarizations not too different from the electromagnetic case.
In the gravitational case, we need a tensor with two indices – that is why we also say that the gravitons have spin $j=2$, unlike photons' $j=1$. The tensor $h_{\mu\nu}$ in general is symmetric so it has 10 components to start with.
However, they have to obey $k^\mu h_{\mu\nu} = 0$ which reduces the number of polarizations to six. This vanishing of the "inner product" is the transverse condition which is why it is right to say that the waves are transverse even though the polarizations are tensor-like.
The polarizations of the form $h_{\mu\nu} = k_\mu\lambda_\nu+\lambda_\mu k_\nu$ are "pure gauge", resulting from diffeomorphisms, so they're unphysical. This reduces the 6 candidate polarizations to 3. Finally, there is a traceless condition $h^\mu{}_\mu = 0$ which reduces the number of independent polarizations to 2, just like for the electromagnetic waves.
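The counting $10 \to 6 \to 3 \to 2$ can be verified mechanically. The SymPy sketch below (our own check, with signature $(+,-,-,-)$ and a null $k^\mu$ along $z$) computes the dimensions by linear algebra: transversality cuts 10 components to 6, the transverse pure-gauge modes $k_\mu\lambda_\nu+\lambda_\mu k_\nu$ with $k\cdot\lambda=0$ span 3 of them (their trace $2k\cdot\lambda$ vanishes), and the trace condition removes one more.

```python
import itertools
import sympy as sp

eta = sp.diag(1, -1, -1, -1)             # Minkowski metric, signature (+,-,-,-)
k_up = sp.Matrix([1, 0, 0, 1])           # null wave vector k^mu along z
k_dn = eta * k_up                        # k_mu = (1, 0, 0, -1)

# basis of the 10-dimensional space of symmetric h_{mu nu}
basis = []
for i, j in itertools.combinations_with_replacement(range(4), 2):
    E = sp.zeros(4, 4)
    E[i, j] = E[j, i] = 1
    basis.append(E)

# transversality k^mu h_{mu nu} = 0 as a linear map R^10 -> R^4
T = sp.Matrix.hstack(*[sp.Matrix([sum(k_up[m] * b[m, n] for m in range(4))
                                  for n in range(4)]) for b in basis])
dim_transverse = len(basis) - T.rank()
assert dim_transverse == 6

# pure-gauge modes k_mu lam_nu + lam_mu k_nu with k.lam = 0 (three independent lam)
lams = [sp.Matrix([1, 0, 0, -1]), sp.Matrix([0, 1, 0, 0]), sp.Matrix([0, 0, 1, 0])]
G = sp.Matrix.hstack(*[sp.Matrix(sp.flatten(k_dn * lam.T + lam * k_dn.T))
                       for lam in lams])
dim_gauge = G.rank()
assert dim_gauge == 3

# the trace condition eta^{mu nu} h_{mu nu} = 0 removes one more dimension
dim_physical = dim_transverse - dim_gauge - 1
assert dim_physical == 2
```

The two survivors are exactly the $h_{xx}=-h_{yy}$ and $h_{xy}$ modes described above.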
Outdated and unautomated IT systems have contributed to underpayments of the state pension, according to a new report from the National Audit Office (NAO).
It has identified the issue as a key factor in the Department for Work and Pensions (DWP) underpaying 134,000 pensioners by over £1 billion before April of this year.
The errors have affected pensioners who first claimed state pension before April 2016, do not have a full national insurance record, and should have received certain increases in their basic pension.
The report says that the failures of the IT systems, combined with the complexity of state pension rules and a high degree of manual review by case workers, made some errors in processing claims almost inevitable.
DWP case workers often failed to set (and later action) manual IT system prompts on pensioners’ files to review the payments at a later date, such as when a spouse reaches pension age or an 80th birthday.
They often made errors when they did set process prompts because frontline staff found instructions difficult to use and lacked training on complex cases.
In addition, those managing and checking prompts have often been unable to access data across all digital systems to ensure the work has been correct.
Missed opportunities
Other factors have been that the DWP’s approach of measuring, identifying and tackling the largest causes of fraud and error led it to miss earlier opportunities to identify underpayments, and that it has had no means of reviewing individual complaints or errors. Quality assurance processes have focused on checking changes to case details, such as a change of address or death of a spouse, rather than the overall accuracy of payments.
There has been some effort to rectify the situation: between 11 January and 5 September this year DWP reviewed 72,780 cases and paid £60.6 million in arrears in 11% of them. But it may find it difficult to correct underpayments of pensioners who have died and as of last month had not approved a formal plan to trace their estates.
Gareth Davies, head of the NAO, said: “The impact of the underpayment of state pension on those pensioners affected is significant. It is vital that the Department of Work and Pensions corrects past underpayments and implements changes to prevent similar problems in future.”
Image from iStock, Galeanu Mihai
US Patent US3009284 for the Barbie doll.
Printed on Archival matte art paper.
This invention relates to a doll construction and more particularly to a construction by which a doll may be supported in a balanced, realistic position when not in use or when on display. This invention provides a doll having a body and limbs in articulated relationship and means for supporting the doll in an upright or standing position for display or for storage.
I’m here in Shanghai, and haven’t had much time to write. I’m sitting in the back row of a talk, typing this on my iPad. I can’t access WordPress directly in China, as most social media websites are blocked. (On my laptop, I can use VPN to get through, but don’t have it set up on my iPad, and didn’t bring my laptop today.) But I’m going to try posting by email.
The trip has been full of adventures and activity so far. Lots of running around, lots of things done. I feel like I’ve packed in a week’s worth of activities in 4 days. (I arrived in Shanghai On Friday, as did my cousin who was able to join me on the trip. We took the bullet train to Beijing on Saturday, then saw the Great Wall on Sunday, sprinted through the Forbidden City on Monday morning, and walked around Beijing a lot in the gaps between. We took the train back to Shanghai Monday afternoon. My talk was yesterday, Tuesday, the first day of the conference. It went well. I expect the tiredness to kick in today, now that I have that done.)
So, yeah, this has been a packed trip. I haven’t had time to sort through the hundreds of photos I’ve taken so far, but here’s one from Beijing I had to share.
“Knowledge likes pants
Invisible but very important.”
That is so funny. Was the translation in place or did you get it translated?
Thanks for helping me start out the day laughing.
Of course! EVERYONE likes pants!
awesome! I can’t believe you did all that! You are a super tourist! :-)
The sign actually says underwear in Chinese. And it makes perfect sense!
But does that mean knowledge also has to be clean ;-)
That’s … unspeakably fabulous.
Inscrutable…
That is fabulous! (I also guessed that “pants” was in the British sense, referring to “underpants”, but that doesn’t make it any less hilarious!)
The trip sounds great. I hope you’ll find time to post more about it when you get a chance!
Just found this shirt at a Saver’s in Warwick, Rhode Island. Yours?
\begin{document}
\title{Functors of modules associated with flat and projective modules II}
\author{Adri\'an Gordillo-Merino, Jos\'e Navarro, Pedro Sancho}
\address{Departamento de Matem\'aticas\\
Universidad de Extremadura\\
Avenida de Elvas, s/n\\
06006 Badajoz (SPAIN)}
\email{adgormer@unex.es, navarrogarmendia@unex.es, sancho@unex.es}
\thanks{All authors have been partially supported by Junta de Extremadura and FEDER funds.}
\subjclass{Primary 16D10; Secondary 18A99}
\keywords{flat, projective, Mittag-Leffler, reflexivity theorem, functors}
\begin{abstract} Let $R$ be an associative ring with unit. Given an $R$-module $M$, we can associate the following covariant functor from the category of $R$-algebras to the category of abelian groups: $S\mapsto M\otimes_R S$.
With the corresponding notion of dual functor, we prove that the natural morphism of functors $\,\mathcal M\to \mathcal M^{\vee\vee}\,$ is an isomorphism.
We prove several characterizations of the functors associated with flat modules, flat Mittag-Leffler modules and projective modules.
\end{abstract}
\maketitle
\section{Introduction}
Let $\,R\,$ be an associative ring with unit. Consider the functor from the category of $R$-algebras $\,R\text{-Alg}\,$ to the category of right $R$-modules $\,R\text{-Mod}\,$,
$$o\colon \,R\text{-Alg}\,\to \,R\text{-Mod}\,,\, o(S):=S$$
for any $R$-algebra $S$, and the functor $r\colon \,R\text{-Mod}\,\to \,R\text{-Alg}\,$, $r(N)=R\langle N\rangle$, where $R\langle N\rangle$ is the $R$-algebra generated by $N$ (see \ref{N3.4}).
It is well known that there is a functorial isomorphism
$$\Hom_R(N,o(S))=\Hom_{R-alg}(r(N),S)$$
for any right $R$-module $N$ and any $R$-algebra $S$.
Hence, it is easy to obtain a functorial isomorphism
$$\Hom_{grp}(\mathbb G\circ o,\mathbb F)=\Hom_{grp}(\mathbb G,\mathbb F\circ r)$$
for any covariant functors of abelian groups $\mathbb G\colon \,R\text{-Mod}\,\to $ $\mathbb Z${-Mod}, and $\mathbb F\colon \,R\text{-Alg }\,\to $ $\mathbb Z${-Mod}.
Let $\,\mathcal R\,$ be the covariant functor from the category of $\,R$-algebras, to the category of $R$-algebras, defined by $\,{\mathcal R}(S):=S$, for any $R$-algebra $\,S$.
\medskip
\begin{definition}
A {\sl functor of $\,\mathcal{R}$-modules} is a covariant functor $\,\mathbb M \colon R\text{-Alg} \to \mathbb Z\text{-Mod} \,$ together with a morphism of functors of sets $\,{\mathcal R}\times \mathbb M\to \mathbb M\,$ that endows
$\,\mathbb M(S)\,$ with an $\,S$-module structure, for any $\,R$-algebra $\,S$.
A {\sl morphism of $\,{\mathcal R}$-modules} $\,f\colon \mathbb M\to \mathbb M'\,$
is a morphism of functors such that the morphisms $\,f_{S}\colon \mathbb M({S})\to
\mathbb M'({S})\,$ are morphisms of $\,{S}$-modules.
\end{definition}
\medskip
If $\mathbb G\colon \,R\text{-Mod}\,\to $ $\mathbb Z${-Mod} is additive, then $\mathbb G^o:=\mathbb G\circ o$ is naturally
a functor of $\R$-modules. Let $in,h_x\colon R\langle N\rangle \to R\langle N\oplus x\cdot R\rangle $ be the morphisms of $R$-algebras induced by the morphisms of $R$-modules $N\to R\langle N\oplus x\cdot R\rangle $, $n\mapsto n,x\cdot n$. Given a functor of $\R$-modules $\mathbb F$, let $\mathbb F^r\colon \,R\text{-Mod}\,\to $ $\mathbb Z${-Mod} be defined as follows: $\mathbb F^r(N)$ is the kernel of the morphism
$$\xymatrix{\mathbb F(R\langle N\rangle) \ar[rr]^-{\mathbb F(h_x)-{x\cdot \mathbb F(in)}} & & \mathbb F(R\langle N\oplus x\cdot R\rangle)\\ n' \ar@{|->}[rr] & & \mathbb F(h_x)(n')- x\cdot \mathbb F(in)(n'),}$$
for any right $R$-module $N$ and any $n'\in \mathbb F(R\langle N\rangle)$.
We prove (\ref{T3}) the following theorem.
\begin{theorem} \label{T4} Let $\mathbb F\colon \,R\text{-Alg}\,\to $ $\mathbb Z${-Mod} be a covariant functor of $\R$-modules and $\mathbb G\colon \,R\text{-Mod}\,\to $ $\mathbb Z${-Mod} an additive covariant functor of abelian groups. Then, we have a functorial isomorphism
$$\xymatrix { \Hom_{grp}(\mathbb G,\mathbb F^r) \ar@{=}[r] &
\Hom_{\R}(\mathbb G^o,\mathbb F)}$$
\end{theorem}
In Algebraic Geometry, functors of $\R$-modules from the category of $R$-algebras to the category of abelian groups are featured more frequently than those from the category of right $R$-modules to the category of abelian groups. Nevertheless, the results
about reflexivity of modules and characterizations of flat Mittag-Leffler modules are obtained more naturally with the latter functors (see
\cite{mittagleffler}). The results below are a consequence of Theorem \ref{T4} and the results obtained in \cite{mittagleffler}.
Any $R$-module $M$ can be thought of as a functor of $\R$-modules:
Consider the following covariant functor of $\,\mathcal R$-modules $\mathcal M$, defined by
$$\mathcal M(S):=S\otimes_R M\,,$$
for any $R$-algebra $S$. We will say that $\mathcal M$ is the quasi-coherent $\R$-module associated with $M$. It is easy to prove that the category of $R$-modules is equivalent to the category of quasi-coherent $\R$-modules.
Given a functor of $\R$-modules
$\mathbb M$, we will say that the functor of right $\R$-modules $\mathbb M^\vee$ defined by
$$\mathbb M^\vee(S)=\Hom_{\R}(\mathbb M,\mathcal S),$$
for any $R$-algebra $S$, is the dual functor of $\mathbb M$.
We will say that $\mathcal M^\vee$ is the $\R$-module scheme associated with the $R$-module $M$.
We are now in a position to state the main results of this paper, grouped together herein according to the concepts involved:
\begin{theorem} Let $M$ be an $R$-module. The natural morphism of $\mathcal R$-modules $$\mathcal M \to \mathcal M^{\vee\vee}$$
is an isomorphism.
\end{theorem}
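In the simplest case the theorem can be checked directly; the following example is our own sanity check, not part of the original argument.
\begin{remark}
If $M=R^n$ is free of finite rank, then $\mathcal M(S)=S\otimes_R R^n=S^n$ and, restricting morphisms to global sections, $\mathcal M^\vee(S)=\Hom_{\R}(\mathcal M,\mathcal S)=\Hom_R(R^n,S)=S^n$, so $\mathcal M^\vee$ is the quasi-coherent right $\R$-module associated with $R^n$. Dualizing once more gives $\mathcal M^{\vee\vee}(S)=S^n$, and under these identifications the natural morphism $\mathcal M\to \mathcal M^{\vee\vee}$ is the identity, hence an isomorphism.
\end{remark}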
When $\,R\,$ is a commutative ring, this theorem has been proved for finitely generated modules
using the language of sheaves in the big Zariski topology, in \cite{Hirschowitz}, and it is
implicit in \cite[II,\textsection 1,2.5]{gabriel}. The reflexivity of these quasi-coherent $\,\mathcal{R}$-modules $\,\mathcal{M}\,$ has been used for a variety of applications in theory of linear representations of affine group schemes \cite{Amel,Pedro1,Pedro2}.
Likewise, we think that this new reflexivity theorem will be useful in the theory of comodules over non-commutative rings.
\bigskip
\begin{theorem} Let $M$ be an $R$-module such that $\mathcal M^r(N)=N\otimes_R M$, for any right $R$-module $N$. Then,
\begin{enumerate}
\item $M$ is a finitely generated projective module if and only if $\mathcal M$ is a module scheme.
\item $M$ is a flat module if and only if $\mathcal M$ is a direct limit of module schemes.
\item $M$ is a flat Mittag-Leffler module if and only if $\mathcal M$ is a direct limit of submodule schemes.
\item $M$ is a flat strict Mittag-Leffler module if and only if $\mathcal M$ is a direct limit of submodule schemes, $\mathcal N_i^\vee\subseteq \mathcal M$, and the dual morphism $\mathcal M^\vee\to\mathcal N_i$ is an epimorphism, for any $i$.
\item $M$ is a countably generated projective module if and only if there exists a chain of
module subschemes of $\mathcal M$,
$$ \mathcal N_0^\vee\subseteq \mathcal N_1^\vee\subseteq\cdots\subseteq \mathcal N^\vee_n\subseteq\cdots,$$
such that $\mathcal M=\cup_{n\in\mathbb N}\mathcal N_n^\vee$.
\item $M$ is projective if and only if there exists a chain of $\R$-sub\-mo\-du\-les of $\mathcal M$,
$$\mathbb W_0\subseteq \mathbb W_1\subseteq \cdots\subseteq \mathbb W_n\subseteq \cdots ,$$
such that $\mathcal M=\cup_{n\in\mathbb N} \mathbb W_n$, where $\mathbb W_n$ is a direct sum of module schemes and the natural morphism $\mathcal M^\vee\to \mathbb W_{n}^\vee$ is an epimorphism, for any $n\in \mathbb N$.
\end{enumerate}
\end{theorem}
Also, the theorem below establishes several statements which are equivalent to an $R$-module being a flat strict Mittag-Leffler module.
\begin{theorem} Let $M$ be an $R$-module such that $\mathcal M^r(N)=N\otimes_R M$, for any right $R$-module $N$.
Then, the following conditions are equivalent:
\begin{enumerate}
\item $M$ is a flat strict Mittag-Leffler $R$-module.
\item Let $\{M_i\}_{i\in I}$ be the set of all finitely generated submodules of $M$, and $M'_i:=\Ima[M^*\to M_i^*]$. The natural morphism
$\mathcal M\to \lim \limits_{\rightarrow} {\mathcal M_i'}^\vee$
is an isomorphism.
\item There exists a monomorphism $\mathcal M\hookrightarrow \prod_{J}\mathcal R$.
\item Every morphism of $\mathcal R$-modules $f\colon \mathcal M^\vee\to \mathcal R$ factors through the quasi-coherent module associated with $\Ima f_R$.
\end{enumerate}
\end{theorem}
\section{Preliminaries}\label{preliminar}
Let $\,R\,$ be an associative ring with unit, and let $\,\mathcal R\,$ be the covariant functor from the category of $\,R$-algebras to the category of $\, R$-algebras, defined by $\,{\mathcal R}(S):=S$, for any $R$-algebra $\,S$.
\medskip
\begin{definition}
A {\sl functor of $\,\mathcal{R}$-modules} is a covariant functor $\,\mathbb M \colon R\text{-Alg} \to \mathbb Z\text{-Mod}\,$ together with a morphism of functors of sets $\,{\mathcal R}\times \mathbb M\to \mathbb M\,$ that endows
$\,\mathbb M(S)\,$ with an $\,S$-module structure, for any $\,R$-algebra $\,S$.
A {\sl morphism of $\,{\mathcal R}$-modules} $\,f\colon \mathbb M\to \mathbb M'\,$
is a morphism of functors such that the morphisms $\,f_{S}\colon \mathbb M({S})\to
\mathbb M'({S})\,$ are morphisms of $\,{S}$-modules.
\end{definition}
\medskip
\begin{definition} If $\,\mathbb M\,$ is an $\,\mathcal R$-module, the {\sl dual} $\,\mathbb M^\vee\,$ is the following functor $\, R\text{-Alg} \to \mathbb Z\text{-Mod}\,$
$$\mathbb M^\vee (S):=
\Hom_{\mathcal R}(\mathbb M,\mathcal S),$$
which is a functor of right $\R$-modules.\end{definition}
If $\,S\,$ is an $R$-algebra, the restriction of an $\,{\mathcal R}$-module $\,\mathbb M\,$ to the category of ${S}$-algebras will be written $$\mathbb M_{\mid {S}}(S'):=\mathbb M(S'),$$ for any ${S}$-algebra $S'$.
\medskip
\begin{definition}
The {\sl functor of homomorphisms} $\,{\mathbb Hom}_{{\mathcal R}}(\mathbb M,\mathbb M')\,$ is the covariant functor $\, R\text{-Alg} \to \mathbb Z\text{-Mod}\,$ defined by $${\mathbb Hom}_{{\mathcal R}}(\mathbb M,\mathbb M')({S}):={\rm Hom}_{\mathcal {S}}(\mathbb M_{|{S}}, \mathbb M'_{|{S}}), $$ where $\,\Hom_{{\mathcal S}}(\mathbb M_{|{S}},\mathbb M'_{|{S}})$ stands for the set\footnote{In this paper, we will only consider well-defined functors $\,{\mathbb Hom}_{{\mathcal R}}(\mathbb M,\mathbb M')$, that is to say, functors such that $\,\Hom_{\mathcal {S}}(\mathbb M_{|{S}},\mathbb {M'}_{|{S}})\,$ is a set, for any $R$-algebra ${S}$.} of all morphisms of $\,{\mathcal S}$-modules from $\,\mathbb M_{|{S}}\,$ to $\,\mathbb M'_{|{S'}}$.
In the following, it will also be convenient to consider another notion of dual module:
$\,\mathbb{M}^*\,$ is the functor of right $\R$-modules defined by $$\,\mathbb M^*:=\mathbb Hom_{{\mathcal R}}(\mathbb M,\mathcal R). $$
\end{definition}
\medskip
\medskip
\begin{definition} The {\sl quasi-coherent $\mathcal{R}$-module} associated with an $R$-module $\,M\,$ is the following covariant functor
$$\,{\mathcal{M}} \colon R\text{-Alg} \to {\mathbb Z}\text{-Mod}\, , \quad \mathcal{M} (S) := S \otimes_R M. $$
\end{definition}
\medskip
Quasi-coherent modules are determined by their global sections. In particular, we will make use of the following statement, whose proof is immediate:
\medskip
\begin{proposition} \label{tercer}
Restriction to global sections $\,f\mapsto f_R\,$ defines a bijection:
$${\rm Hom}_{\mathcal R} ({\mathcal M}, \mathbb M) = {\rm Hom}_R (M, \mathbb M(R))\,, $$ for any quasi-coherent $\,\mathcal{R}$-module $\,\mathcal{M}\,$ and any $\,{\mathcal R}$-module $\,\mathbb M$.
\end{proposition}
\medskip
As a consequence, both notions of dual module introduced above coincide on quasi-coherent modules; that is, $\,\mathcal M^*=\mathcal M^\vee\,$.
In fact, if $\,S\,$ is an $R$-algebra, then
$$\,\mathcal{M}^\vee (S) = \Hom_{\mathcal R}(\mathcal{M},\mathcal S) = \Hom_{R}(M , S)\,$$
and, as $\,{\mathcal M}_{\mid {S}}\,$ is the quasi-coherent $\mathcal {S}$-module associated with $\,S\otimes_R M \,$,
$$\mathcal M^*(S)=\Hom_{\mathcal S}(\mathcal M_{|S},\mathcal S)=\Hom_S(S\otimes_R M,S)=\Hom_R(M,S) = \mathcal{M}^\vee (S) \ . $$
\bigskip
\begin{definition} We will say that a functor from the category of right $R$-modules to the category of abelian groups is a functor of abelian groups.
\end{definition}
\begin{definition} Given a functor of abelian groups $\mathbb G$, let $\mathbb G^o$ be the functor from the category of $R$-algebras to the category of abelian groups defined by
$$\mathbb G^o(S):=\mathbb G(S),$$
for any $R$-algebra $S$. Given a morphism of $R$-algebras $w\colon S\to S'$
then $\mathbb G^o(w):=\mathbb G(w)$.
\end{definition}
\begin{remark} \label{recall}
Observe that we can define
$s* g:=\mathbb G(s\cdot)(g)$, for any
$g\in \mathbb G^o(S)$ and $s\in S$ (where $s\cdot S\to S$ is defined by $s\cdot (s'):=s\cdot s'$). If $\mathbb G$ is additive, then $\mathbb G^o$ is a functor of $\R$-modules.
\end{remark}
Any morphism $\phi\colon \mathbb G\to \mathbb G'$ of functors of abelian groups
defines the morphism $\phi^o\colon \mathbb G^o\to \mathbb G'^o$, $\phi^o_S:=\phi_S$ for any $R$-algebra $S$. Obviously,
$$\phi^o_S(s* g)=\phi_S(\mathbb G(s\cdot)(g))=\mathbb G'(s\cdot )(\phi_S(g))=
s*\phi^o_S(g).$$
Finally, any definition or statement in the category of $\,\mathcal R$-modules has a corresponding definition or statement in the category of right $\,\mathcal R$-modules, that we will use without more explicit mention.
As examples, if $\,\mathbb{M}\,$ is an $\,\mathcal{R}$-module, then $\,\mathbb M^* = {\mathbb Hom}_{\mathcal R}(\mathbb M,\mathcal R)\,$ is a right $\,\mathcal R$-module. If $\,\mathbb N\,$ is a right $\,\mathcal R$-module, then the dual module defined by
$$\mathbb N^*:=\mathbb Hom_{\mathcal R}(\mathbb N,\mathcal R)$$ is an $\,\mathcal R$-module, etc.
\section{Extension of a functor on the category of algebras to a functor on the category of modules}
\label{SectExt}
\begin{notation}
If $\,M\,$ is an $\,R$-module, observe that $\,M\otimes_{\ZZ} R\,$ is an $R$-bimodule and we can
consider the tensorial $\,R$-algebra \label{N3.4}
$$R\langle M\rangle :=T^\cdot_{R} (M\otimes_{\ZZ} R)=(T^{\cdot}_{\ZZ} M)\otimes_{\ZZ} R \, . $$
\end{notation}
\medskip
\begin{remark} If $N$ is a right $R$-module, then:
$$R\langle N\rangle:=T^\cdot_R(R\otimes_{\ZZ} N) \, . $$
\end{remark}
\medskip
\begin{lemma} The following functorial map is bijective:
$$\Hom_{R-alg}(R\langle M\rangle, S)\to \Hom_{R}(M,S) \ , \quad f\mapsto f'\ ,$$
where $\,f'(m):=f(m\otimes 1)\,$ for any $\, m\in M\,$.\end{lemma}
\begin{proof} $ \Hom_{R-alg}( T^\cdot_{R} (M\otimes_{\ZZ} R),S)=\Hom_{R\otimes_{\ZZ} R} (M\otimes_{\ZZ} R,S) =\Hom_R(M,S).$
\end{proof}
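\medskip
For instance, taking $\,M=R\,$ in this lemma gives $\,\Hom_{R-alg}(R\langle R\rangle, S)=\Hom_R(R,S)=S\,$, so $\,R\langle R\rangle\,$ represents the functor $\,S\mapsto S\,$ and plays the role of a (noncommutative) polynomial algebra in one variable over $\,R$.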
\medskip
Any $\,R$-linear morphism $\,\phi\colon M\to M'\,$ uniquely extends to a morphism of $\,R$-algebras $\,\tilde \phi\colon R\langle M\rangle \to R\langle M'\rangle$, $\,m\otimes 1\mapsto \phi(m)\otimes 1$.
If we use the notation
$$M\overset {n}\cdots M\cdot R:=M\otimes_{\ZZ}\overset {n}\cdots\otimes_{\ZZ} M\otimes_{\ZZ} R\,,\,\,\, m_1\cdots m_{n}\cdot r\mapsto m_1\otimes\cdots\otimes m_n\otimes r,$$
then
$$R\langle M\rangle =\oplus_{n=0}^\infty \,M\overset {n}\cdots M\cdot R\,,$$ and the product in this algebra can be written as follows:
$$(m_1\cdots m_{n}\cdot r)\cdot (m'_1\cdots m'_{n'}\cdot r')=
m_1\cdots m_{n}\cdot (r m'_1)\cdot m'_2\cdots m'_{n'}\cdot r'.$$
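\medskip
In the lowest-degree case $\,n=n'=1\,$, this rule reads
$$(m\cdot r)\cdot (m'\cdot r')=m\cdot (rm')\cdot r'\in M\cdot M\cdot R\, ,$$
that is, the inner scalar $\,r\,$ acts on $\,m'\,$ before the tensors are formed.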
\medskip
\medskip
\begin{notation} Let us use the following notation
$$M\oplus Rx:=M\oplus R \quad , \quad (m,r\cdot x)\mapsto (m,r) \, . $$
Likewise, if $M$ is a right $R$-module, $M\oplus xR:=M\oplus R$, $(m,x\cdot r)\mapsto (m,r) \, . $
\end{notation}
\medskip
\begin{notation} Let $M$ be a right $R$-module. Consider the morphisms of $R$-algebras $$in,h_x\colon R\langle M\rangle \longrightarrow R\langle M\oplus xR\rangle $$ induced, respectively, by the morphisms of $R$-modules $M\to R\langle M\oplus xR\rangle$, $m\mapsto m$ and $m\mapsto x\cdot m$, for any $m\in M\,$.
\end{notation}
\medskip
\begin{definition} \label{D4.2} Given a functor of $\,\R$-modules $\,\mathbb F\,$, let $\,{\mathbb F}^r\,$ be the functor of abelian groups defined as follows: $\,{\mathbb F}^r(M)\,$
is the kernel of the morphism
$$\xymatrix{\mathbb F(R\langle M\rangle) \ar[rr]^-{\mathbb F(h_x)-x\cdot\mathbb F(in)} & &\mathbb F(R\langle M\oplus xR\rangle)\\ f \ar@{|->}[rr] & & \mathbb F(h_x)(f)- x\cdot \mathbb F(in)(f),}$$
for any right $R$-module $M$ and any $f\in \mathbb F(R\langle M\rangle)$.
\end{definition}
If $\,w\colon M\to M'\,$ is a morphism of $R$-modules, it induces morphisms of $R$-algebras
$$R\langle w\rangle \colon R\langle M\rangle
\to R\langle M'\rangle \ , \quad R\langle w\rangle(m)=w(m)$$ and
$R\langle w\oplus 1\rangle\colon R\langle M\oplus xR\rangle
\to R\langle M'\oplus xR\rangle$, $R\langle w\oplus 1\rangle(m)=w(m)$, $R\langle w\oplus 1\rangle(x)=x$.
Observe that $R\langle w\oplus 1\rangle \circ h_x=h_x\circ R\langle w\rangle$. Hence,
we have the morphism
$${\mathbb F}^r(w)\colon {\mathbb F}^r(M)\to {\mathbb F}^r(M')\, , \quad {\mathbb F}^r(w)(f):=\mathbb F(R\langle w\rangle)(f)$$ for any $f\in {\mathbb F}^r(M)\subset \mathbb F(R\langle M\rangle)$.
\begin{note} In a similar vein, we can define the {\sl extension} of a functor $\,\mathbb F\,$ of right $\R$-modules, which is a functor $\,{\mathbb F}^r\,$ from the category of $R$-modules to the category of abelian groups.\end{note}
\begin{notation}
Let $N$ be a right $R$-module and let $M$ be an $R$-module. Consider the sequence of morphisms of groups
$$N\otimes_RM\overset{i}\to N\otimes_R M\otimes_{\ZZ} R\dosflechasa{p_1}{p_2} N\otimes_R M\otimes_{\ZZ} R\otimes_{\ZZ} R$$
where $\,i(n\otimes m):=n\otimes m\otimes 1$, $\,p_1(n\otimes m\otimes r):=n\otimes m\otimes r\otimes 1$ and
$\,p_2(n\otimes m\otimes r):=n\otimes m\otimes 1\otimes r$.
\end{notation}
\begin{lemma} \label{reperab} Let $\,M\,$ be an $\,R$-module and $\,N\,$ a right $\,R$-module. Then,
$$\mathcal N^r(M)=\Ker[N\otimes_R M\otimes_{\ZZ} R\dosflechasab{p_1}{p_2} N\otimes_R M\otimes_{\ZZ} R\otimes_{\ZZ} R] \, . $$
\end{lemma}
\begin{proof} It is easy to prove that the kernel of the morphism
$$N\otimes_R R\langle M\rangle\to N\otimes_R R\langle M\rangle[x] , \,\,n\otimes p(m)\mapsto
n\otimes (p(m)x-p(mx))$$
is included in $N\otimes_R M\otimes_{\ZZ} R$.
Observe that the morphism of $R$-algebras $R\langle M\oplus Rx\rangle\to R\langle M\rangle[x]$, $m\mapsto m$ and $x\mapsto x$, is an epimorphism.
Then, $$\mathcal N^r(M)\subseteq N\otimes_R M\otimes_{\ZZ} R$$ and
$\mathcal N^r(M)=\Ker(p_1-p_2).$
\end{proof}
\medskip
\begin{remark} \label{repera2b} Observe that
$${\mathcal N}^r(M)=\Ker[N\otimes_R M\otimes_{\ZZ} R\overset{p_1-p_2}\longrightarrow N\otimes_R M\otimes_{\ZZ} R\otimes_{\ZZ} R]={\mathcal M}^r(N)\ . $$
\end{remark}
\medskip
\begin{proposition} \label{super} Let $\,N\,$ be a right $\,R$-module and let $\,M\,$ be an $\,R$-module. If $\,M\,$ (or $\,N\,$) is an $\,R$-bimodule or a flat module, then $$\mathcal N^r(M)=N\otimes_R M.$$
\end{proposition}
\begin{proof} By Lemma \ref{reperab}, we have to prove that $\Ker(p_1-p_2)=N\otimes_R M$.
Suppose that $\,M\,$ is a bimodule.
It is clear that $\,\Ima i\subseteq \Ker(p_1-p_2)$.
Let
$\,s\colon N\otimes_R M\otimes_{\ZZ} R\to N\otimes_R M $, $s(n\otimes m\otimes r)=n\otimes mr$
and $$\,s'\colon N\otimes_R M\otimes_{\ZZ} R \otimes_{\ZZ} R \to N\otimes_R M \otimes_{\ZZ} R\,,\,\,\, s'(n\otimes m\otimes r\otimes r')=n\otimes mr\otimes r'.$$
Observe that $\,s\circ i=\Id$, so that $i$ is injective. Also, $\,s'\circ p_2=\Id\,$ and $\,s'\circ p_1=i\circ s$. Thus, if $\,x\in \Ker(p_1-p_2)$, then $\,p_1(x)-p_2(x)=0$; hence, $\,0=s'(p_1(x))-s'(p_2(x))=i(s(x))-x\,$ and $\,x\in \Ima i$.
Then, $\Ker(p_1-p_2)=N\otimes_R M$.
In particular, taking the bimodule $\,M=R\,$, the following sequence of morphisms of groups is exact:
$$N\overset{i}\to N\otimes_{\ZZ} R\dosflechasa{p_1}{p_2} N\otimes_{\ZZ} R\otimes_{\ZZ} R \ . $$
Thus, if $\,M\,$ is flat, tensoring by $\,M\,$ it also follows that $\Ker(p_1-p_2)=N\otimes_R M$.
\end{proof}
\begin{proposition} \label{super2} If there exists a central subalgebra $\,R'\subseteq R\,$ such that $\,Q\to Q\otimes_{R'} R\,$ is injective, for any $\,R'$-module $\,Q\,$, then
$$\mathcal N^r(M)=N\otimes_R M.$$
\end{proposition}
\begin{proof}
Let us write $\,M':=M\otimes_{R'} R\,$, which is a bimodule as follows:
$$r_1\cdot (m\otimes r)\cdot r_2=r_1m\otimes rr_2.$$
The morphism of $R$-modules $i\colon M\to M'$, $i(m):= m\otimes 1$ is universally injective: Given an $R$-module $P$, put $Q:=P\otimes_R M$. Then, the morphism $P\otimes_R M=Q\to Q\otimes_{R'} R=P\otimes_R M'$ is injective.
Put $Q:=M'/M$ and $M'':=Q\otimes_{R'} R$. Let $p$ be the composite morphism
$M'\to M'/M=Q\to Q\otimes_{R'}R =M''$.
The sequence of morphisms of $R$-modules
$$0\to M\overset i\to M'\overset p\to M''$$
is universally exact. Consider the following commutative diagram
$$\xymatrix @R10pt @C10pt {0\ar[r] & N\otimes_R M \ar[r]^-{Id\otimes i} \ar[d] & N\otimes_RM' \ar[r]^-{Id\otimes p} \ar[d] & N\otimes_RM''\ar[d] \\ 0\ar[r] & N\otimes_R M \otimes_{\ZZ} R\ar[r]^-{Id\otimes i\otimes Id} \ar@<1ex>[d] \ar@<-1ex>[d] & N\otimes_R M' \otimes_{\ZZ} R\ar[r]^-{Id\otimes p\otimes Id} \ar@<1ex>[d] \ar@<-1ex>[d] & N\otimes_RM''\otimes_{\ZZ} R \ar@<1ex>[d] \ar@<-1ex>[d] \\ 0\ar[r] &
N\otimes_R M \otimes_{\ZZ} R \otimes_{\ZZ} R\ar[r]^-{i'} & N\otimes_RM'\otimes_{\ZZ} R\otimes_{\ZZ} R \ar[r]^-{p'} &N\otimes_R M''\otimes_{\ZZ} R\otimes_{\ZZ} R
}$$ (where $i'=\Id\otimes i\otimes Id\otimes Id$ and $p'=\Id\otimes p\otimes Id\otimes Id$) whose rows are exact, as well as both the second and third columns, by Proposition \ref{super}. Hence, the first column is exact too.
\end{proof}
\medskip
\begin{proposition}\label{1.30} Let $\mathbb F$ be a functor of $\mathcal R$-modules. Then, $${\mathbb F^\vee}^r(M)=\Hom_{\mathcal R}(\mathbb F,\mathcal M),$$
for any $R$-module $M$. Hence, ${\mathbb F^\vee}^{ro}=\mathbb F^\vee$.
\end{proposition}
\begin{proof} By Lemma \ref{reperab} and Proposition \ref{super}, the sequence of morphisms
$$\xymatrix @R6pt {S\otimes_R M\ar[r] & S\otimes_R R\langle M\rangle \ar@<1ex>[r]
\ar@<-1ex>[r]& S\otimes_R R\langle M\oplus Rx\rangle\\ m \ar@{|->}[r] & m,\quad p(m) \ar@{|->}[r]<1ex>
\ar@{|->}[r]<-1ex>&
p(mx),\,p(m)x}$$
is exact for any $\,R$-algebra $S$. That is, if $\mathcal M$, $\mathcal R\langle M\rangle$ and $\mathcal R\langle M\oplus Rx\rangle$ are the quasi-coherent modules associated with $M$, $R\langle M\rangle$ and $R\langle M\oplus Rx\rangle$, respectively, then
the sequence of morphisms
$$\xymatrix{\mathcal M\ar[r] & \mathcal R\langle M\rangle \ar@<1ex>[r]
\ar@<-1ex>[r]& \mathcal R\langle M\oplus Rx\rangle}$$
is exact. Hence,
${\mathbb F^\vee}^r(M)=\Hom_{\mathcal R}(\mathbb F,\mathcal M).$
\end{proof}
\medskip
\section{Adjoint functor theorem}
Given an $R$-algebra $\,S\,$, let $\,\pi_S\colon R\langle S\rangle \to S\,$ be the morphism of $\,R$-algebras $\,s\mapsto s\,$, for any $\,s\in S$.
\begin{definition} Let $\mathbb F$ be an $\R$-module. We have a natural morphism $\pi_{\mathbb F}\colon \mathbb F^{ro}\to \mathbb F$ defined as follows $$\pi_{\mathbb F,S}(m)=\mathbb F(\pi_S)(m),\,\, \text{for any } m\in \mathbb F^{ro}(S)=\mathbb F^{r}(S)\subseteq \mathbb F(R\langle S\rangle).$$
\end{definition}
\begin{proposition} For any $s\in S$ and $m\in \mathbb F^{ro}(S)=\mathbb F^{r}(S)\subseteq \mathbb F(R\langle S\rangle)$,
$$\pi_{\mathbb F,S}(s* m)=s\cdot \mathbb F(\pi_S)(m).$$
(Recall Remark \ref{recall}).
\end{proposition}
\begin{proof} Given $m\in {\mathbb F}^r(S)$, we know that $\mathbb F(h_x)(m)-x\cdot \mathbb F(in)(m)=0$, by Definition \ref{D4.2}. Let $h_s\colon R\langle S\rangle\to R\langle S\rangle$ be defined by
$h_s(s')=s\cdot s'\in S\cdot S\subseteq R\langle S\rangle$.
Consider the morphism of $R$-algebras $R\langle S\oplus Rx\rangle \overset{x=s}\to R\langle S\rangle$, $s'\mapsto s'$ and $x\mapsto s$. We have the commutative diagrams
$$\xymatrix{R\langle S\rangle \ar[rd]^-{h_s} \ar[r]^-{h_x} & R\langle S\oplus xR\rangle \ar[d]^-{x=s }\\ & R\langle S\rangle }\quad
\xymatrix{R\langle S\rangle \ar[r]^-{in} \ar[dr]^-{Id} & R\langle S\oplus xR\rangle \ar[d]^-{x=s }\\ & R\langle S\rangle }
$$
Then,
$$0 =\mathbb F(x=s)(\mathbb F(h_x)(m)-x\cdot \mathbb F(in)(m))=\mathbb F(h_s)(m)-s\cdot m.$$
Observe that $\pi_S\circ h_s=\pi_S\circ R\langle s\cdot \rangle$. Then,
$$\aligned 0 & =\mathbb F(\pi_S)(\mathbb F(h_s)(m)-s\cdot m)=\mathbb F(\pi_S)( \mathbb F(R\langle s\cdot \rangle)(m)-s\cdot m)\\ & =\mathbb F(\pi_S)( s*m-s\cdot m)=
\mathbb F(\pi_S)( s*m)-s\cdot \mathbb F(\pi_S)(m)\\ & =\pi_{\mathbb F,S}(s*m)-s\cdot \pi_{\mathbb F,S}(m).\endaligned
$$
\end{proof}
\begin{proposition} \label{PB} Let $\phi\colon \mathbb F\to \mathbb F' $ be a morphism of $\R$-modules. The diagram
$$\xymatrix{\mathbb F^{ro} \ar[r]^-{\phi^{ro}} \ar[d]^-{\pi_{\mathbb F}} & \mathbb F'^{ro} \ar[d]^-{\pi_{\mathbb F'}} \\ \mathbb F \ar[r]_-\phi & \mathbb F'}$$
is commutative.
\end{proposition}
\begin{proof} The diagram
$$\xymatrix{\mathbb F^{ro}(S) \ar[r]^-{\phi^{ro}_S} \ar@/^-2.0pc/[dd]_-{\pi_{\mathbb F,S}} \ar@{^{(}->}[d] & \mathbb F'^{ro}(S) \ar@/^2.0pc/[dd]^-{\pi_{\mathbb F',S}} \ar@{^{(}->}[d]
\\ \mathbb F (R\langle S\rangle) \ar[r]_-{\phi_{R\langle S\rangle}} \ar[d]^-{\mathbb F(\pi_S)} & \mathbb F'(R\langle S\rangle ) \ar[d]_-{\mathbb F'(\pi_S)}
\\ \mathbb F (S) \ar[r]_-{\phi_S} & \mathbb F'(S)
}$$
is commutative.
\end{proof}
Given a right $R$-module $\,N$, let $\, i_N\colon N\to R\langle N\rangle \,$ be the morphism of $\,R$-modules $\,n\mapsto n\,$, for any $\,n\in N$.
\begin{definition} Let $\mathbb G$ be a functor of abelian groups. We have a natural morphism $i_{\mathbb G}\colon \mathbb G\to \mathbb G^{or}$ defined as follows:
$$i_{\mathbb G,N}(g):=\mathbb G(i_N)(g),\text{ for any } g\in \mathbb G(N)$$
Let us check that $\mathbb G(i_N)(g)\in \mathbb G^{or}(N)\subset \mathbb G^o(R\langle N\rangle)=\mathbb G(R\langle N\rangle)$: The composite morphism $$N\overset{i_N}\to R\langle N\rangle \overset{h_x-x\cdot in}\longrightarrow R\langle N\oplus xR\rangle$$
is zero. Hence, $$\aligned 0 & =\mathbb G(h_x-x\cdot in)(\mathbb G(i_N)(g))=
(\mathbb G(h_x)- \mathbb G(x\cdot in))(\mathbb G(i_N)(g))
\\ & = (\mathbb G^o(h_x)-x\cdot \mathbb G^o(in))(\mathbb G(i_N)(g))\endaligned$$
and $\mathbb G(i_N)(g)\in \mathbb G^{or}(N)$.
\end{definition}
\begin{proposition} \label{PA} Let $\phi\colon \mathbb G\to \mathbb G'$ be a morphism of functors of groups. The diagram
$$\xymatrix{\mathbb G\ar[r]^-\phi \ar[d]^-{i_{\mathbb G}} & \mathbb G' \ar[d]^-{i_{\mathbb G'}} \\ \mathbb G^{or} \ar[r]_-{\phi^{or}} & \mathbb G'^{or}}$$
is commutative.
\end{proposition}
\begin{proof} The diagram
$$\xymatrix @R=10pt {\mathbb G(N) \ar[r]^-{\phi_N} \ar[d]^-{i_{\mathbb G,N}} & \mathbb G'(N) \ar[d]^-{i_{\mathbb G',N}} \\ \mathbb G^{or}(N) \ar@{-->}[r]_-{\phi^{or}_N} \ar@{^{(}->}[d] & \mathbb G'^{or}(N) \ar@{^{(}->}[d] \\
\mathbb G^r(R\langle N\rangle) \ar[r]^-{\phi^r_{R\langle N\rangle}} \ar@{=}[d] & \mathbb G'^r(R\langle N\rangle) \ar@{=}[d]
\\ \mathbb G(R\langle N\rangle) \ar[r]^-{\phi_{R\langle N\rangle}} & \mathbb G'(R\langle N\rangle)}
$$
is commutative.
\end{proof}
\begin{lemma} \label{T2} Let $\mathbb G$ be a functor of abelian groups.
The composite morphism
$$\mathbb G^o\overset{i_{\mathbb G}^o}\longrightarrow \mathbb G^{oro} \overset{\pi_{\mathbb G^o}}\longrightarrow \mathbb G^o$$
is the identity morphism.
\end{lemma}
\begin{proof} The diagram
$$\xymatrix{\mathbb G^o(S) \ar@{=}[d] \ar[r]^-{i^o_{\mathbb G,S}} &
\mathbb G^{oro}(S) \ar@{=}[d] \ar[r]^-{\pi_{\mathbb G^o,S}} &
\mathbb G^o(S) \ar@{=}[r] & \mathbb G(S)\\
\mathbb G(S) \ar[r]^-{i_{\mathbb G,S}} \ar@/^-2.0pc/[rrr]_-{\mathbb G(i_S)}&
\mathbb G^{or}(S) \ar@{^{(}->}[r] &
\mathbb G^o(R\langle S\rangle) \ar@{=}[r] \ar[u]^-{\mathbb G^o(\pi_S)} & \mathbb G(R\langle S\rangle) \ar[u]^-{\mathbb G(\pi_S)}
}$$
is commutative. Hence, $\pi_{\mathbb G^o,S}\circ i_{\mathbb G,S}^o=
\mathbb G(\pi_S)\circ \mathbb G(i_S)=\mathbb G(\pi_S\circ i_S)=Id$.
\end{proof}
\begin{lemma} \label{T1} Let $\mathbb F$ be an $\R$-module. The composite morphism
$$\mathbb F^r\overset{i_{\mathbb F^r}}\longrightarrow \mathbb F^{ror} \overset{\pi_{\mathbb F}^r} \longrightarrow \mathbb F^r$$
is the identity morphism.
\end{lemma}
\begin{proof} The diagram
$$\xymatrix{\mathbb F^r(N) \ar[d]_-{i_{\mathbb F^r,N}} \ar@{^{(}->}[rrr] \ar[rrd]^-{\mathbb F^r(i_N)} & & & \mathbb F(R\langle N\rangle) \ar[d]^-{\mathbb F({R\langle i_N\rangle})} \\
\mathbb F^{ror}(N) \ar@{^{(}->}[r] \ar[d]_-{\pi^r_{\mathbb F,N}} &
\mathbb F^{ro}(R\langle N\rangle) \ar@{=}[r] \ar[d]_-{\pi_{\mathbb F,R\langle N\rangle}} & \mathbb F^{r}(R\langle N\rangle)
\ar@{^{(}->}[r] & \mathbb F(R\langle R\langle N\rangle\rangle)
\ar[dll]^-{\mathbb F(\pi_{R\langle N\rangle})}\\ \mathbb F^{r}(N)
\ar@{^{(}->}[r] & \mathbb F(R\langle N\rangle)
}$$
is commutative. Then, $\pi^r_{\mathbb F,N}\circ i_{\mathbb F^r,N}=Id$
since $$\mathbb F(\pi_{R\langle N\rangle})\circ \mathbb F(i_{R\langle N\rangle})=\mathbb F(\pi_{R\langle N\rangle}\circ {R\langle i_N\rangle})=\mathbb F(Id)=Id.$$ Hence, $\pi^r_{\mathbb F}\circ i_{\mathbb F^r}=Id$.
\end{proof}
\begin{theorem} \label{T3} Let $\mathbb F$ be an $\R$-module and $\mathbb G$ an additive functor of abelian groups. Then, we have the functorial isomorphism
$$\xymatrix @R=8pt { \Hom_{grp}(\mathbb G,\mathbb F^r) \ar@{=}[r] &
\Hom_{\R}(\mathbb G^o,\mathbb F)\\ \phi \ar@{|->}[r] & \pi_{\mathbb F}\circ\phi^o\\\varphi^r\circ i_{\mathbb G} & \varphi \ar@{|->}[l]}$$
\end{theorem}
\begin{proof} It is a check:
$$\aligned & \phi \mapsto \pi_{\mathbb F}\circ \phi^o\mapsto \pi^r_{\mathbb F}\circ \phi^{or}\circ i_{\mathbb G} \overset{\text{\ref{PA}}}= \pi^r_{\mathbb F}\circ i_{\mathbb F^r}\circ \phi\overset{\text{\ref{T1}}}=\phi\\
& \varphi \mapsto \varphi^r\circ i_{\mathbb G}\mapsto \pi_{\mathbb F}\circ \varphi^{ro}\circ i^o_{\mathbb G} \overset{\text{\ref{PB}}}
=\varphi\circ \pi_{\mathbb G^o}\circ i^o_{\mathbb G}\overset{\text{\ref{T2}}}=\varphi\endaligned$$
\end{proof}
\begin{corollary} \label{L5.111} Let $\,\mathbb F\,$ be an $\,\mathcal R$-module such that
$\pi_{\mathbb F}\colon \mathbb F^{ro}\to \mathbb F$ is an isomorphism.
If $\,\mathbb F'\,$ is another $\,\mathcal{R}$-module, the natural morphism
$$\Hom_{\mathcal R}(\mathbb F,\mathbb F')\to \Hom_{grp}({\mathbb F}^r , {\mathbb F'}^r)\, , \quad f\mapsto f^r,$$
is an isomorphism.
\end{corollary}
\begin{proof} $\Hom_{\mathcal R}(\mathbb F,\mathbb F')
=
\Hom_{\mathcal R}({\mathbb F}^{ro} , {\mathbb F'})
\overset{\text{\ref{T3}}}=\Hom_{grp}({\mathbb F}^r , {\mathbb F'}^r)$.
\end{proof}
\medskip
\section{Reflexivity theorem}\label{lafinal}
Let $\,M\,$ be an $\,R$-module. The functor $\,{\mathcal M^\vee}^r\,$ is precisely the functor of (co)points of $\,M\,$ in the category of $\,R$-modules: if $\,Q\,$ is another $\,R$-module, then, by virtue of Proposition \ref{1.30}:
$${\mathcal{M}^\vee}^r(Q) = \Hom_{\mathcal{R}} (\mathcal{M} , \mathcal{Q}) = \Hom_{R}(M , Q) \ . $$
\begin{lemma} \label{Yoneda} Let $\,M\,$ be a right $\,R$-module and $\,\mathbb G\,$ an additive functor of abelian groups.
Then, $${\Hom}_{grp} (\mathcal M^{\vee r}, \mathbb G)=\mathbb G(M)\, .$$
\end{lemma}
\begin{proof} It is Yoneda's lemma.
\end{proof}
\begin{theorem}\label{prop4} Let $\,M\,$ be an $\,R$-module and $\,N\,$ be a right $\,R$-module.
Then, $${\Hom}_{\mathcal R} ({\mathcal M^\vee}, {\mathcal N})={\mathcal N}^r(M) \, .$$
\end{theorem}
\begin{proof} $\,\mathcal{M}^\vee\,$ satisfies the hypothesis of Corollary \ref{L5.111} (see Proposition \ref{1.30}), so that
$$ {\Hom}_{\mathcal R} ({\mathcal M^\vee}, {\mathcal N}) \overset{\text{\ref{L5.111}}}= {\Hom}_{grp} ({{\mathcal M^\vee}^r}, {{\mathcal N}^r})\overset{\text{\ref{Yoneda}}}= {\mathcal N}^r(M) \ .
$$
\end{proof}
\begin{theorem} \label{reflex2}
Let $\,M\,$ be an $\,R$-module. The natural morphism of $\,\mathcal R$-modules $$\mathcal M \longrightarrow \mathcal M^{\vee\vee}$$ is an isomorphism.
\end{theorem}
\begin{proof} It is a consequence of Theorem \ref{prop4} and Proposition \ref{super}:
$$\mathcal M^{\vee\vee}(S)=\Hom_{\mathcal R}(\mathcal M^\vee,\mathcal S)= \mathcal M^r(S)= S\otimes_R M=\mathcal M(S) \ .$$
\end{proof}
\medskip
\begin{theorem} \label{reflex}
Let $\,M\,$ be an $\,R$-module.
The natural morphism of $\,\mathcal R$-modules $$\mathcal M \longrightarrow \mathcal M^{**}$$ is an isomorphism.
\end{theorem}
\begin{proof} Let $S$ be an $R$-algebra. $\,\mathcal M_{|S}\,$ is the $\mathcal S$-quasi-coherent module associated with $\,S\otimes_R M\,$ and $\,{\mathcal M_{|S}}^{*}= {\mathcal M_{|S}}^{\vee}\,$. Then,
$$\aligned \mathcal M^{**}(S) & =\Hom_{\mathcal S}({\mathcal M^*}_{|S},\mathcal S)=
\Hom_{\mathcal S}({\mathcal M_{|S}}^*,\mathcal S)\overset{\text{\ref{prop4}}}={\mathcal M_{|S}}^r(S) \\ & \overset{\text{\ref{super}}}=
S\otimes_S (S\otimes_R M) =S\otimes_R M=\mathcal M(S). \endaligned $$
\end{proof}
\section{Quasi-coherent modules associated with flat modules}
Given an $R$-module $M$, let $\mathcal M_r$ be the functor of abelian groups defined by
$$\mathcal M_r(N):=N\otimes_R M,$$
for any right $R$-module $N$. Observe that
there exists a natural morphism $\mathcal M_r\to \mathcal M^r$ and
$\mathcal M_r^{\, o}=\mathcal M^{ro}=\mathcal M$, by Proposition \ref{super}.
In \cite{mittagleffler}, several characterizations of $\mathcal M_r$ are given when $M$ is a flat or projective module.
The adjoint functor theorem will give us the corresponding characterizations of $\mathcal M$, when $M$ is a flat or projective module.
We have only to add the following hypothesis.
\begin{hypothesis} \label{HP2} From now on we will assume that $\mathcal M_r=\mathcal M^r$ (if $M$ is a flat $R$-module, then $\mathcal M_r=\mathcal M^r$, by Proposition \ref{super}).
\end{hypothesis}
\begin{definition} We will say that $\mathcal N^\vee$ is the module scheme associated with $N$.
\end{definition}
\begin{theorem} $M$ is a finitely generated projective module iff $\mathcal M$ is a module scheme.\end{theorem}
\begin{proof} $\Rightarrow)$ By \cite[3.1]{mittagleffler}, there exists an isomorphism
$\mathcal M_r\simeq \mathcal N^{\vee r}$. Then,
$$\mathcal M=\mathcal M_r^{\, o}\simeq \mathcal N^{\vee ro}=\mathcal N^{\vee}.$$
$\Leftarrow)$
$\mathcal M\simeq \mathcal N^\vee$. Then, $\mathcal M_r=\mathcal M^r \simeq \mathcal N^{\vee r}$. By \cite[3.1]{mittagleffler}, $M$ is a finitely generated projective module.
\end{proof}
\begin{theorem} $M$ is a flat module iff $\mathcal M$ is a direct limit of module schemes.\end{theorem}
\begin{proof} $\Rightarrow)$ By \cite[3.2]{mittagleffler}, there exists an isomorphism
$\mathcal M_r\simeq \ilim{i\in I} \mathcal N_i^{\vee r}$. Then,
$$\mathcal M=\mathcal M_r^{\,o}\simeq (\ilim{i\in I} \mathcal N_i^{\vee r})^o= \ilim{i\in I} \mathcal N_i^{\vee ro}=\ilim{i\in I} \mathcal N_i^{\vee}.$$
$\Leftarrow)$
$\mathcal M\simeq \ilim{i\in I} \mathcal N_i^\vee$. Then, $\mathcal M_r=\mathcal M^r \simeq \ilim{i\in I} \mathcal N_i^{\vee r}$. By \cite[3.2]{mittagleffler}, $M$ is flat.
\end{proof}
\begin{lemma} \label{L6.5A} Let $N_1,N_2$ be right $R$-modules. Then, \begin{enumerate}
\item $ \Hom_{\R}(\mathcal N_1^\vee,\mathcal N_2^\vee) =\Hom_{grp}(\mathcal N_1^{\vee r},\mathcal N_2^{\vee r})$.
\item $ \Hom_{\R}(\mathcal N_1,\mathcal N_2) =\Hom_{grp}(\mathcal N_{1 r},\mathcal N_{2 r})$.
\item $\Hom_{\R}(\mathcal N_1^\vee,\mathcal M) =\Hom_{grp}(\mathcal N_1^{\vee r},\mathcal M_r)$.
\item $\Hom_{\R}(\mathcal M^\vee,\mathcal N_1) =\Hom_{grp}(\mathcal M^{\vee r},\mathcal N_{1r})$.
\end{enumerate}
\end{lemma}
\begin{proof} 1. It is Corollary \ref{L5.111}.
2. $ \Hom_{\R}(\mathcal N_1,\mathcal N_2) \overset{\text{\ref{tercer}}}=\Hom_{R}(N_1,N_2)\overset{\text{\cite[2.4]{mittagleffler}}}= \Hom_{grp}(\mathcal N_{1 r},\mathcal N_{2 r})$.
3. $\Hom_{\R}(\mathcal N_1^\vee,\mathcal M) \overset{\text{\ref{L5.111}}}=\Hom_{grp}(\mathcal N_1^{\vee r},\mathcal M^r)=\Hom_{grp}(\mathcal N_1^{\vee r},\mathcal M_r)$.
4. $\Hom_{\R}(\mathcal M^\vee,\mathcal N_1) \overset{\text{\ref{prop4}}}=\mathcal N_1^r(M)\overset{\text{\ref{repera2b}}}=\mathcal M^r(N_1)=\mathcal M_r(N_1)=N_1\otimes_RM$\newline \phantom{pp}
\hskip 3cm $\overset{\text{\ref{Yoneda}}}=\Hom_{grp}(\mathcal M^{\vee r},\mathcal N_{1r})$.
\end{proof}
\begin{lemma} \label{L6.5}
Let $f\colon \mathbb F_1\to \mathbb F_2$ be a morphism of $\R$-modules. Then, $f$ is a monomorphism
iff the morphism of functors of groups $f^r\colon \mathbb F_1^r\to \mathbb F_2^r$ is a monomorphism.
\end{lemma}
\begin{proof} If $f$ is a monomorphism, then $f^r$ is a monomorphism, since $f^r_N=f_{R\langle N\rangle}$ on $\mathbb F^r_1(N)\subseteq
\mathbb F_1(R\langle N\rangle)$. If $f^r$ is a monomorphism, then $f=f^{ro}$
is a monomorphism since $f^{ro}_{S}=f^r_S$.
\end{proof}
\begin{theorem} \label{ML} Let $M$ be an $R$-module. The following statements are equivalent
\begin{enumerate}
\item $M$ is a flat Mittag-Leffler module.
\item Every morphism of $\mathcal R$-modules $\mathcal N^\vee\to \mathcal M$ factors through an $\R$-submodule scheme of $\mathcal M$, for any right $R$-module $N$.
\item $\mathcal M$ is equal to a direct limit of $\R$-submodule schemes.
\end{enumerate}
\end{theorem}
\begin{proof} $1. \iff 2.$ It is an immediate consequence of \cite[4.5]{mittagleffler}, Lemma \ref{L6.5A} and Lemma \ref{L6.5}.
$1. \iff 3.$ It is an immediate consequence of \cite[4.5]{mittagleffler} and Lemma \ref{L6.5}, since $\mathcal M\simeq \ilim{i\in I}\mathcal N_i^\vee$ iff $\mathcal M_r=\mathcal M^r\simeq \ilim{i\in I}\mathcal N_i^{\vee r}$.
\end{proof}
\begin{lemma} \label{L6.8} A morphism of $\R$-modules $f\colon \mathcal M^{\vee} \to \prod_{i\in I} \mathcal N_i$ is an epimorphism iff the corresponding morphism of functors of groups $\mathcal M^{\vee r}\to \prod_{i\in I} \mathcal N_{ir}$ (see \ref{L6.5A} (4)) is an epimorphism.\end{lemma}
\begin{proof} $\Rightarrow)$ Write $f=(\sum_j n_{ij}\otimes m_{ij})_{i\in I}$ through the equality
$$\Hom_{\R}(\mathcal M^{\vee}, \prod_{i\in I} \mathcal N_i)=
\Hom_{grp}(\mathcal M^{\vee r}, \prod_{i\in I} \mathcal N_{i,r})\overset{\text{\ref{Yoneda}}}=
\prod_{i\in I} \mathcal N_{i,r}(M)=\prod_{i\in I} ( N_i\otimes_R M).$$
We have to prove that the morphism
$$\xymatrix{\Hom_{R}(M,N) \ar@{=}[r] & \mathcal M^{\vee r}(N)\ar[r] & \prod_{i\in I} \mathcal N_{ir}(N)\ar@{=}[r] & \prod_{i\in I} (N_{i}\otimes_R N)\\ h \ar@{|->}[rrr] & & & (\sum_j n_{ij}\otimes h(m_{ij}))_{i\in I}}$$ is an epimorphism, for any $R$-module $N$. If $N$ is an $R$-algebra, then it is an epimorphism, since $f$ is an epimorphism.
We can suppose that $N$ is a free $R$-module, since
the functor $\prod_{i\in I} \mathcal N_{ir}$ preserves epimorphisms.
In this case $N$ is naturally a bimodule. Let $\pi\colon R\langle N\rangle \to N$ be the composition of the obvious morphisms of $R$-modules $R\langle N\rangle\to N\cdot R\to N$. Obviously, $\pi$ is an epimorphism. Then, we can suppose that $N$ is an $R$-algebra. We conclude.
\end{proof}
\begin{theorem} \label{blabla} Let $\,M\,$ be an $\,R$-module. The following statements are equivalent:
\begin{enumerate}
\item $\,M\,$ is a flat strict Mittag-Leffler module.
\item Any morphism $\,f\colon \mathcal M^\vee\to \mathcal N\,$ factors through the quasi-coherent module associated with $\,\Ima f_R$, for any right $\,R$-module $\,N$.
\item Any morphism $\,f\colon \mathcal M^\vee\to \mathcal R\,$ factors through the quasi-coherent module associated with $\,\Ima f_R$.
\item Let $\,\{M_i\}_{i\in I}\,$ be the set of all finitely generated $\,R$-submodules of $\,M$, and $\,M'_i:=\Ima[M^*\to M_i^*]$. The natural morphism
$$\mathcal M\to \lim \limits_{\rightarrow} {\mathcal M_i'}^\vee$$
is an isomorphism.
\item $\,\mathcal M\,$ is a direct limit of submodule schemes, $\,\mathcal N_i^\vee\subseteq\mathcal M\,$ and the dual morphism
$\,\mathcal M^\vee\to \mathcal N_i\,$ is an epimorphism, for any $\,i$.
\item There exists a monomorphism $\,\mathcal M\hookrightarrow \prod_{I}\mathcal R$.
\end{enumerate}
\end{theorem}
\begin{proof} $ 1. \iff 2.$ It is a consequence of
\cite[4.9\,(2)]{mittagleffler} and Lemma \ref{L6.5A}\,4.
$ 1. \iff 3.$ It is a consequence of
\cite[4.10]{mittagleffler} and Lemma \ref{L6.5A}\,4.
$ 1. \iff 4.$ It is a consequence of
\cite[4.9\,(3)]{mittagleffler}.
$ 1. \iff 5.$ It is a consequence of
\cite[4.9\,(4)]{mittagleffler} and Lemma \ref{L6.8}.
$ 1. \iff 6.$ It is a consequence of
\cite[4.7\,(3)]{mittagleffler} and Lemma \ref{L6.5}.
\end{proof}
\begin{proposition} \label{CR7} An $R$-module $M$ is a projective $R$-module of countable type if and only if there exists a chain of submodule schemes of $\mathcal M$
$$\mathcal N_1^\vee\subseteq \mathcal N_2^\vee\subseteq \cdots\subseteq \mathcal N_n^\vee\subseteq \cdots$$
such that $\mathcal M=\cup_{n\in\mathbb N} \mathcal N_n^\vee$. \end{proposition}
\begin{proof} It is a consequence of
\cite[4.11,13]{mittagleffler} and Lemma \ref{L6.5}.
\end{proof}
\begin{theorem} An $R$-module $M$ is projective if and only if there exists a chain of $\R$-sub\-mo\-du\-les of $\mathcal M$
$$\mathbb W_1\subseteq \mathbb W_2\subseteq \cdots\subseteq \mathbb W_n\subseteq \cdots$$
such that $\mathcal M=\cup_{n\in\mathbb N} \mathbb W_n$, where $\mathbb W_n$ is a direct sum of module schemes and the natural morphism $\mathcal M^\vee\to \mathbb W_{n}^\vee$ is an epimorphism, for any $n\in \mathbb N$.
\end{theorem}
\begin{proof} It is a consequence of
\cite[4.14]{mittagleffler} and Lemma \ref{L6.8}.
\end{proof}
TITLE: Differentiation always easy?
QUESTION [4 upvotes]: There are many examples of real functions admitting antiderivatives (since e.g. continuous), but where computing a concrete antiderivative is a seriously hard problem even if an elementary one exists.
What about differentiation? My experience is that the basic rules of calculus, along with term-by-term differentiation of power series, make differentiation a just-do-it kind of problem for virtually all everyday kinds of functions. In fact, if we add the limit-exchange trick for uniformly convergent sequences of derivatives, I cannot think of any examples where finding a closed form for $f'$, given a closed form for $f$, is not a mechanical task.
So the question is: are there any examples of real functions $f$, such that
(1) $f$ is given in "nice" closed form $f(x)=\ldots$
(2) it is "relatively easy" to justify that $f$ is differentiable
(3) computing the derivative of $f$ is actually hard.
This isn't exactly a precise question, but there just might be a "know it when I see it" example.
REPLY [1 votes]: I think one example you might be interested in is the Cantor Ternary Function. Recall that the Cantor set is formed by taking the unit interval $[0,1]$, then throwing away the middle third, then throwing away the middle thirds of each resulting interval, and doing so infinitely many times. The resulting set is the Cantor set.
You can read the exact definition of the function in the link but loosely speaking, it's obvious that the function is constant on the complement of the Cantor ternary set: the intervals you threw away. So clearly the function has zero derivative on these intervals. Less obvious is that the function is not differentiable at any point of the Cantor ternary set. On the other hand, the Cantor ternary set has measure zero, so in a precise sense the function is differentiable almost everywhere. It's also continuous, which is not obvious! So it's an example of an increasing, continuous function whose derivative is 0 almost everywhere!
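Loosely speaking, the Cantor function can be computed digit by digit from the ternary expansion. Here is a small numerical sketch; the function name, the depth cutoff, and the use of floating point are my own choices, so treat it as an approximation rather than a canonical implementation:

```python
def cantor(x, depth=40):
    """Approximate the Cantor ternary function on [0, 1].

    Read ternary digits of x until a 1 appears; the digits 0 and 2
    seen so far become binary digits 0 and 1 of the value.
    """
    result, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            # x lies in a removed middle third, where the function
            # is constant, so we can stop here.
            return result + scale
        result += (digit // 2) * scale
        scale /= 2
    return result

print(cantor(0.25))  # the Cantor function sends 1/4 to 1/3
```

Plotting this on a grid makes the "devil's staircase" shape, and the flat steps on the removed intervals, quite visible.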
REPLY [0 votes]: Given any function built from base-case elementary functions whose derivatives are known, i.e. starting with base-case elementary functions and then making new functions through repeated application of addition, multiplication, and composition, the standard rules of differentiation let us write down an elementary formula for the derivative, provided the derivatives of the base-case functions are "known". Thus, your only hope for an example is something like an exotic function whose definition we take as given, in terms of acceptable base-case exotic functions, but whose derivative cannot be similarly expressed in terms of the collection of "accepted" base-case exotic functions. But this seems a bit pedantic.
TITLE: Can anything be proven about this complex variant of the Collatz problem, or is it just as intractable?
QUESTION [12 upvotes]: Given a Gaussian integer $z = a + bi$, where $a, b \in \mathbb{Z}$, $i = \sqrt{-1}$, iterate the function $$f(z) = \frac{z}{1 + i}$$ if $z$ has even Gaussian norm (that is, both $a$ and $b$ are odd, or they're both even), otherwise $f(z) = 3z + i$.
I conjecture that iterating this function eventually leads, if not to $1$, to one of the other Gaussian units ($-1, i, -i$).
For example, starting with $z = 14$, we get $$7 - 7i, -7i, -22i, -11 - 11i, -11, -33 + i, -16 + 17i, \ldots$$
(this is wrong, see edit below)
I have tried a few different values of $z$ with small norm, some purely real, with pencil and paper, haven't gotten far. Also I have tried it in Mathematica, but either I've made some mistakes in my programming that crash the program, or there really are a lot of values that escape to some infinity.
Surely someone else has studied this variant? If so, have they been able to determine anything (like finding a periodic orbit that doesn't include any units)?
EDIT: I made a mistake. Mathematica did save my notebook file at some point before it crashed, and I could have gotten the correct sequence from there instead of having to recalculate it anew. It should go like this: $$7 - 7i, -7i, -20i, -10 - 10i, -10, -5 + 5i, 5i, 16i, 8 + 8i, 8, 4 - 4i, -4i, -2 - 2i, -2, -1 + i, i$$
Thanks to Mr. Cortek for pointing this out.
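For anyone who wants to re-run the computation without Mathematica, here is a minimal Python sketch (my own; the names are arbitrary) that iterates the map exactly on Gaussian integers, representing $z=a+bi$ as the integer pair $(a,b)$ and using the exact division $z/(1+i)=\big((a+b)+(b-a)i\big)/2$, whose entries are integers precisely when $a$ and $b$ share parity (even Gaussian norm):

```python
def step(a, b):
    """One application of the map to z = a + b*i."""
    if (a + b) % 2 == 0:                    # even Gaussian norm
        return ((a + b) // 2, (b - a) // 2) # z / (1 + i)
    return (3 * a, 3 * b + 1)               # 3z + i

def trajectory(a, b, max_steps=1000):
    """Iterate until a Gaussian unit (1, -1, i, -i) appears, or give up."""
    units = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    seen = [(a, b)]
    for _ in range(max_steps):
        if (a, b) in units:
            return seen
        a, b = step(a, b)
        seen.append((a, b))
    return None  # no unit reached within max_steps
```

Starting from $z=14$, the exact iteration does terminate at the unit $i$.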
REPLY [4 votes]: Not a full answer, just some remarks and considerations.
In terms of the modulus, dividing $z$ by $1+i$ produces a complex number whose modulus is smaller by a factor of $\sqrt{2}$, while $z\mapsto 3z+i$ approximately triples the modulus.
Now since ${(\sqrt{2}})^3 \approx 2.8<3$, we see that for a Gaussian integer $z$ whose Collatzian sequence is to stay bounded, every application of $f(z)=3z+i$ must be balanced, on average, by more than three applications of $f(z)=\frac {z}{1+i}$.
But this seems very unlikely to happen for all starting values, because there appears to be no reason why division by $1+i$ should favour outputs of the form $O+Oi$ and $E+Ei$ that much more often than outputs of the form $E+Oi$ and $O+Ei$, where $E$ stands for an even and $O$ for an odd integer.
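This back-of-the-envelope threshold is easy to probe empirically. A small sketch (my own; names invented) counts, along a trajectory that does reach a unit, how many divisions by $1+i$ occur per application of $3z+i$:

```python
def tagged_step(a, b):
    """One step of the map on z = a + b*i, tagged by which branch fired."""
    if (a + b) % 2 == 0:                           # even Gaussian norm
        return ((a + b) // 2, (b - a) // 2), 'div' # z / (1 + i)
    return (3 * a, 3 * b + 1), 'mul'               # 3z + i

def op_counts(a, b, max_steps=1000):
    """Count divisions by 1+i vs. applications of 3z+i until a unit."""
    units = {(1, 0), (-1, 0), (0, 1), (0, -1)}
    divs = muls = 0
    for _ in range(max_steps):
        if (a, b) in units:
            return divs, muls
        (a, b), op = tagged_step(a, b)
        if op == 'div':
            divs += 1
        else:
            muls += 1
    return None  # did not reach a unit
```

On $z=14$ the count comfortably exceeds three divisions per multiplication, though a single bounded trajectory of course says nothing about all of them.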
\begin{document}
\author{David Mart\'inez Torres}
\address{PUC-Rio de Janeiro\\
Departamento de Matem\'atica \\ Rua Marqu\^es de S\~ao Vicente, 225\\
G\'avea - 22451-900, Rio de Janeiro, Brazil }
\email{dfmtorres@gmail.com}
\title{Semisimple coadjoint orbits and cotangent bundles}
\begin{abstract}
Semisimple (co)adjoint orbits through real hyperbolic elements are well-known to be symplectomorphic to cotangent bundles. We provide a new proof of this fact based on elementary results on both Lie theory
and symplectic geometry, thus shedding some new light on the symplectic geometry of semisimple (co)adjoint orbits.
\end{abstract}
\maketitle
\section{Introduction}
The coadjoint representation has Poisson nature: the Lie bracket of a Lie algebra $\gg$ canonically induces a linear Poisson
bracket on its dual $\gg^*$. The symplectic leaves of the linear Poisson structure are the coadjoint orbits.
The induced symplectic structure on a coadjoint orbit is the so-called Kostant-Kirillov-Souriau (KKS) symplectic structure.
For any Poisson structure the understanding of the
symplectic structure of any of its leaves is fundamental; for duals of Lie algebras this understanding is even more important,
for it has deep implications on Hamiltonian group actions and representation theory \cite{LG,Ki}.
When $\gg$ is (semisimple) of compact type, coadjoint orbits --which are the classical flag manifolds from complex geometry--
are compact, and therefore more tools are available for the study of their
symplectic geometry \cite{LG}.
Global aspects of the symplectic geometry of non-compact coadjoint orbits are much harder to grasp. The first result in that direction
is due to Arnold. In \cite{Ar} he proved that the regular (complex) coadjoint orbit of $\mathrm{SL}(n+1,\C)$ endowed with the
imaginary
part of the KKS holomorphic symplectic structure is symplectomorphic to the cotangent bundle
of the variety of full flags in $\C^{n+1}$ if and only if all the eigenvalues of some (and hence any) matrix in the orbit are
purely imaginary.
Later on, Azad, van den Ban and Biswas \cite{ABB} discovered that Arnold's result had a far reaching generalization for
semisimple real hyperbolic orbits, which we briefly discuss:
Let $G$ be a connected, non-compact semisimple Lie group with finite center, and let $\mathfrak{g}$ denote its Lie algebra.
The Killing form $\langle \cdot,\cdot \rangle$
intertwines the coadjoint and adjoint actions, and it is used to transfer the symplectic structure from a coadjoint
orbit to the corresponding adjoint one, so one can speak of the KKS symplectic structure of an adjoint
orbit. An element $H\in \gg$ is real hyperbolic if the operator $\mathrm{ad}(H)$ diagonalizes
with real eigenvalues; if an Iwasawa decomposition $G=KAN$ has been fixed, then real hyperbolic
elements are those conjugate to elements in the closure
$\mathrm{Cl}(\aa^+)\subset \aa$ of the fixed positive Weyl chamber of the Lie
algebra of $A$.
\begin{theorem}[\cite{ABB}]\label{thm:main} Let $G$ be a connected, non-compact semisimple Lie group with finite center
and let $G=KAN$ be any fixed Iwasawa decomposition. Then
for any real hyperbolic element $H\in \gg$, there exists a canonical symplectomorphism between the adjoint orbit
$\mathrm{Ad}(G)_H\subset \gg$
with its KKS symplectic structure,
and the cotangent bundle of the real flag manifold $\mathrm{Ad}(K)_H$ with its standard Liouville symplectic structure.
\end{theorem}
The existing proofs of Theorem \ref{thm:main} are far from elementary. They rely either on deep results on integrable systems \cite{ABB},
or on
non-trivial integrability results for Lie algebra actions \cite{GGS}.
The purpose of this note is to revisit Theorem \ref{thm:main} and provide a proof based on elementary facts on both Lie theory
and symplectic geometry, thus shedding some new light on the symplectic geometry of semisimple (co)adjoint orbits.
The key ingredients in our strategy are the full use of the \emph{canonical ruling} of the adjoint orbit and the description of new aspects
of the \emph{symplectic
geometry of the `Iwasawa projections'}.
In what follows we briefly discuss the main ideas behind our approach, and compare it with \cite{ABB,GGS}:
We assume without loss of generality that $H\in \mathrm{Cl}(\aa^+)$.
The Iwasawa decomposition defines a well-known canonical ruling on the adjoint orbit $\mathrm{Ad}(G)_H$. The ruling, together
with the Killing form, determines a diffeomorphism
\begin{equation}\label{eq:emb}i\colon T^*\mathrm{Ad}(K)_H\rightarrow \mathrm{Ad}(G)_H
\end{equation}
extending the inclusion of the real flag manifold $\mathrm{Ad}(K)_H\hookrightarrow\mathrm{Ad}(G)_H$.
Of course, the diffeomorphism (\ref{eq:emb}) appears both in \cite{ABB,GGS}, but it is not fully exploited.
\begin{itemize}
\item In \cite{ABB} non-trivial theory
of complete Lagrangian fibrations is used to construct a symplectomorphism
$\varphi\colon (T^*\mathrm{Ad}(K)_H,\w_{\mathrm{std}})\rightarrow (\mathrm{Ad}(G)_H,\w_{\mathrm{KKS}})$, with the property
of being the unique symplectomorphism
which (i) extends the identity on $\mathrm{Ad}(K)_H$ and (ii) is a morphism of fiber bundles, where $\mathrm{Ad}(G)_H$
has the bundle structure induced by $i$ in (\ref{eq:emb}).
In fact, it can be checked that $\varphi$ coincides with $i$ (\ref{eq:emb})
(the uniqueness statement is also a consequence of the absence of non-trivial symplectic automorphisms of the cotangent bundle
preserving the zero section and the fiber bundle structure).
\item In \cite{GGS} a complete Hamiltonian action of $\gg$ on
$(T^*\mathrm{Ad}(K)_H,\w_{\mathrm{std}})$ is built. The momentum map
$\mu\colon (T^*\mathrm{Ad}(K)_H,\w_{\mathrm{std}})\rightarrow (\mathrm{Ad}(G)_H,\w_{\mathrm{KKS}})$ is the desired
symplectomorphism; the authors also show that the momentum map $\mu$ matches $i$ in (\ref{eq:emb}).
\end{itemize}
Both in \cite{ABB,GGS} a global construction on a non-compact symplectic manifold is performed, something which always presents
technical difficulties. \footnote{In \cite{Ki}, Corollary 1, a proof of the isomorphism of a regular coadjoint orbit
with the cotangent bundle
is presented,
but it is not correct as the completeness issues are entirely ignored.}
Our strategy is much simpler: we shall take full advantage of the ruling structure
to prove the equality
\begin{equation}\label{eq:rul}
i_*\w_{\mathrm{std}}=\w_{\mathrm{KKS}}.
\end{equation}
In fact, this is the approach sketched by Arnold \cite{Ar}.
Basic symplectic linear algebra \cite{MS} implies that to prove (\ref{eq:rul}) at $x\in \mathrm{Ad}(G)_H$
it is enough to find $L_v,L_h\subset T_x\mathrm{Ad}(G)_H$ such that:
(i) $L_v,L_h$ are Lagrangian subspaces for both symplectic structures;
(ii) $L_v\cap L_h=\{0\}$;
(iii) $i_*\w_{\mathrm{std}}(x)(Y,Z)=\w_{\mathrm{KKS}}(x)(Y,Z),\,\, \forall\, Y\in L_v,\,Z\in L_h$.
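For the reader's convenience we recall why these three conditions suffice: decomposing $Y=Y_v+Y_h$ and $Z=Z_v+Z_h$ according to the splitting $T_x\mathrm{Ad}(G)_H=L_v\oplus L_h$, condition (i) kills the terms $\w(x)(Y_v,Z_v)$ and $\w(x)(Y_h,Z_h)$ for either of the two symplectic forms $\w$, leaving
\[\w(x)(Y,Z)=\w(x)(Y_v,Z_h)-\w(x)(Z_v,Y_h),\]
and condition (iii) makes the right hand side agree for both forms.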
As the notation suggests, $L_v$ will be the vertical tangent space coming from the fiber bundle structure,
which is trivially Lagrangian for $i_*\w_{\mathrm{std}}$ and easily
seen to be Lagrangian for $\w_{\mathrm{KKS}}$ \cite{ABB}.
Transitivity of the adjoint action implies the existence of $g\in G$ so that
\[x\in \mathrm{Ad}(g)(\mathrm{Ad}(K)_H):=\mathrm{Ad}(K)_H^g.\]
The `horizontal' subspace $L_h$ will be the tangent space to $\mathrm{Ad}(K)_H^g$ at $x$;
because the zero section $\mathrm{Ad}(K)_H$ is also Lagrangian
w.r.t. $\w_{\mathrm{KKS}}$ \cite{ABB},
$G$-invariance of $\w_{\mathrm{KKS}}$ implies that $L_h$ is
a Lagrangian subspace w.r.t. $\w_{\mathrm{KKS}}$. If $\mathrm{Ad}(K)_H^g$
is to be Lagrangian w.r.t. $\w_{\mathrm{std}}$, it should correspond to a closed 1-form on
$\mathrm{Ad}(K)_H$. In fact, it will be the graph of an exact 1-form, and the `projections' associated to the Iwasawa decomposition
will play a crucial role in determining a potential.
The `Iwasawa projection' $H\colon G\rightarrow \aa$ is defined by $x\in K\mathrm{exp}H(x)N$. A pair $H\in \aa$, $g\in G$
determines a function
\[F_{g,H}\colon K\rightarrow \R,\,\, k\mapsto \langle H, H(gk)\rangle.\] Under the assumption $H\in \mathrm{Cl}(\aa^+)$
the function descends to the real flag
manifold $\mathrm{Ad}(K)_H\cong K/Z_K(H)$, where $Z_K(H)$ is the centralizer of $H$ in $K$ \cite{DKV}. The functions $F_{g,H}$
are well studied, and they play a prominent
role in harmonic analysis and convexity theory \cite{DKV,H,B,BB}.
Our main technical result is:
\begin{proposition}\label{pro:pro}
Let $G$ be a connected, non-compact semisimple Lie group with finite center, let $G=KAN$ be any fixed Iwasawa decomposition and let
$H\in \mathrm{Cl}(\aa^+)$. Then for any $g\in G$ the submanifold \[\mathrm{Ad}(K)_H^g\subset \mathrm{Ad}(G)_H\overset{i^{-1}}{\cong}T^*\mathrm{Ad}(K)_H\]
is the graph of the exterior differential of $-F_{g,H}\in C^\infty(\mathrm{Ad}(K)_H)$.
\end{proposition}
Proposition \ref{pro:pro} completes the description of the `horizontal Lagrangians'. The equality
\[i_*\w_{\mathrm{std}}(x)(Y,Z)=\w_{\mathrm{KKS}}(x)(Y,Z),\,\, \forall\, Y\in L_v,\,Z\in L_h\]
will follow from computations analogous to those used to establish Proposition \ref{pro:pro},
thus providing a proof of Theorem \ref{thm:main} which appeals only to basic symplectic geometry and Lie theory.
\section{Proof of Theorem \ref{thm:main}}
In this section we fill in the details of the proof of Theorem \ref{thm:main} sketched in the introduction.
Let us fix a Cartan decomposition $G=KP$ associated
to an involution $\theta$, and let $\mathfrak{k},\mathfrak{p}$ denote the respective Lie algebras.
A choice of maximal abelian subalgebra $\aa\subset \pp$ and of positive Weyl chamber $\mathfrak{a}^+\subset \aa$ (or root ordering)
gives rise to an Iwasawa decomposition $G=KAN$, with $\mathfrak{n}$ the Lie algebra of the nilpotent factor.
We shall denote the adjoint action of $g$ on $X\in \mathfrak{g}$ by $X^g$.
We may pick without any loss of generality $H\in \mathrm{Cl}(\aa^+)$ and consider the corresponding adjoint orbit $\mathrm{Ad}(G)_H$.
The orbit is identified
with the homogeneous space $G/Z(H)$, where $Z(H)$ denotes the centralizer of $H$. Under this identification $\mathrm{Ad}(K)_H$
is mapped to a submanifold canonically isomorphic to $K/Z_K(H)$, where $Z_K(H)=K\cap Z(H)$ is the centralizer of $H$ in $K$. At the infinitesimal level
the tangent space at $H\in \mathrm{Ad}(K)_H$ is identified
with the quotient space $\mathfrak{k}/\mathfrak{z}_K(H)$, where $\mathfrak{z}_K(H)$ is the Lie algebra of $Z_K(H)$.
\subsection{The ruling and the identification $T^*\mathrm{Ad}(K)_H\overset{i}{\cong}\mathrm{Ad}(G)_H$.}
The contents we sketch in this subsection are rather standard. We refer the reader to \cite{ABB} for a thorough exposition.
Let $\mathfrak{n}(H)$ be the sum of root subspaces associated to positive roots not vanishing on $H$. We have
the $\theta$-orthogonal decomposition
\[\mathfrak{g}=\theta\mathfrak{n}(H)\oplus \mathfrak{z}(H)\oplus \mathfrak{n}(H),\]
where $\theta\mathfrak{n}(H)=\mathfrak{n}^-(H)$ are the root spaces corresponding to the negative roots which are non-trivial on $H$.
The affine subspace $H+\mathfrak{n}(H)$ is tangent to $\mathrm{Ad}(G)_H$ and complementary to $\mathrm{Ad}(K)_H$ at $H$.
Even more, the adjoint action on $H$ of the subgroup
$N(H)$ integrating the nilpotent Lie algebra $\mathfrak{n}(H)$ maps $N(H)$ diffeomorphically
onto $H+\mathfrak{n}(H)\subset \mathrm{Ad}(G)_H$.
This induces the well-known ruling of $\mathrm{Ad}(G)_H$.
Like any ruled manifold, $\mathrm{Ad}(G)_H$ becomes an affine bundle. Since $\mathrm{Ad}(K)_H$ is transverse to the affine fibers,
the structure can be reduced to that of a vector bundle with zero section $\mathrm{Ad}(K)_H$. As to which vector bundle this is,
a vector tangent to the fiber over $H$ belongs to
$\mathfrak{n}(H)$; the map $X\mapsto X+\theta X$ is a monomorphism from $\mathfrak{n}$ to $\mathfrak{k}$. Since
the image of $\mathfrak{n}(H)$ has
trivial intersection with $\mathfrak{z}_K(H)$, it is isomorphic to $T_H\mathrm{Ad}(K)_H$. Therefore the pairing
$\langle\cdot,\cdot\rangle\colon \mathfrak{n}(H)\times \mathfrak{k}/\mathfrak{z}_K(H)\rightarrow \mathbb{R}$ --which is well defined--
is also non-degenerate, and this provides the
canonical identification of the fiber at $H$ with $T^*_H\mathrm{Ad}(K)_H$. Since the Killing form and Lie bracket are $\mathrm{Ad}$-invariant,
for any $k\in K$ we have the analogous statement for
\[\langle\cdot,\cdot\rangle\colon \mathfrak{n}(H^k)\times \mathfrak{k}/\mathfrak{z}_K(H^k)=\mathfrak{n}(H)^k\times
\mathfrak{k}/\mathfrak{z}_K(H)^k\rightarrow \mathbb{R},\] this giving the identification
\[i\colon T^*\mathrm{Ad}(K)_H\longrightarrow \mathrm{Ad}(G)_H.\]
\subsection{The symplectic forms $\w_{\mathrm{std}}$ and $\w_{\mathrm{KKS}}$.}
From now on we shall omit the map $i$ in the notation, so we have $\w_{\mathrm{std}},\w_{\mathrm{KKS}}$
two symplectic forms on $\mathrm{Ad}(G)_H$ whose equality we want to check.
For the purpose of fixing the sign convention, we take the standard symplectic form of the cotangent bundle $\w_{\mathrm{std}}$
to be
$-d\lambda$, where $\lambda=\xi dx$ and $\xi,x$ are the momentum and position coordinates, respectively.
The tangent space at $H^g\in \mathrm{Ad}(G)_H$ is spanned by vectors of the form $[X^g,H^g]$, $X\in \gg$. The formula
\[\w_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H,[X,Y]\rangle\]
is well defined on $\mathfrak{g}/\mathfrak{z}(H)$, and gives rise to an $\mathrm{Ad}(G)$-invariant symplectic form on the orbit \cite{Ki}.
As discussed in the introduction, to prove the equality $\w_{\mathrm{std}}(H^g)=\w_{\mathrm{KKS}}(H^g)$,
we shall start by finding complementary Lagrangian subspaces for both symplectic forms.
\subsection{The vertical Lagrangian subspaces}
At $H^g$ we define $L_v(H^g)=\mathfrak{n}(H)^g$, i.e. the tangent space at $H^g$ to the ruling fiber $H^g+\mathfrak{n}(H)^g$. Of course,
this space is Lagrangian for $\w_{\mathrm{std}}$. It is also Lagrangian for $\w_{\mathrm{KKS}}$ \cite{ABB}. We include
the proof of this fact to illustrate the kind of arguments we will use in our computations:
Two vectors in $L_v(H^g)$
are of the form $[X^g,H^g],[Y^g,H^g]$, where $X,Y\in \mathfrak{n}(H)$. Therefore
\[\w_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H^g,[X^g,Y^g]\rangle=\langle H,[X,Y]\rangle=0,\]
where the vanishing follows because $[X,Y]\in \mathfrak{n}(H)$ and the subspaces $\mathfrak{a}$ and
$\mathfrak{n}(H)$ are orthogonal w.r.t. the Killing form (following from the orthogonality w.r.t. the inner product
$\langle\cdot,\theta\cdot\rangle$ used in the Iwasawa decomposition).
\subsection{The horizontal Lagrangian subspaces} We consider $\mathrm{Ad}(K)_H^g$ the conjugation by $g$ of $\mathrm{Ad}(K)_H$ and
we define $L_h(H^g)= T_{H^g}\mathrm{Ad}(K)_H^g$. We shall prove that $\mathrm{Ad}(K)_H^g$ is a Lagrangian submanifold for both symplectic
forms, so in particular $L_h(H^g)$ is a Lagrangian subspace.
The KKS symplectic form is $\mathrm{Ad}(G)$-invariant. Therefore it suffices to prove that $\mathrm{Ad}(K)_H$
is Lagrangian w.r.t $\w_{\mathrm{KKS}}$ to conclude that for all $g\in G$ the submanifold $\mathrm{Ad}(K)_H^g$ is Lagrangian w.r.t $\w_{\mathrm{KKS}}$.
At $H^k$ two vectors tangent to $\mathrm{Ad}(K)_H$
are of the form $[X^k,H^k],[Y^k,H^k]$, where $X,Y\in \mathfrak{k}$. Hence
\[\w_{KKS}(H^k)([X^k,H^k],[Y^k,H^k])=\langle H^k,[X^k,Y^k]\rangle=\langle H,[X,Y]\rangle=0,\]
where the vanishing follows from $[X,Y]\in \mathfrak{k}$ and the orthogonality of $\mathfrak{a}\subset \mathfrak{p}$ and
$\mathfrak{k}$ w.r.t. the Killing form (see also \cite{ABB}).
To describe the behavior of $\mathrm{Ad}(K)_H^g$ w.r.t. $\w_{\mathrm{std}}$, we need a formula for the projection map
$\mathrm{pr}\colon \mathrm{Ad}(G)_H\rightarrow \mathrm{Ad}(K)_H$ defined by the bundle structure. To that end we introduce
all the `Iwasawa projections'
\[K\colon G\rightarrow K, \,\,A\colon G\rightarrow A,\,\,N\colon G\rightarrow N,\]
characterized by $x\in K(x)AN$, $x\in KA(x)N$, $x\in KAN(x)$, respectively (note that the `Iwasawa projection'
cited
in the Introduction is $H=\mathrm{log}A$).
\begin{lemma}\label{lem:pro} The `Iwasawa projection' $K\colon G\rightarrow K$ descends to the bundle projection
\[\mathrm{pr}\colon \mathrm{Ad}(G)_H\cong G/Z(H)\rightarrow \mathrm{Ad}(K)_H\cong K/Z_K(H)\]
associated to the ruling.
\end{lemma}
\begin{proof}
Let us write $g=K(g)A(g)N(g)$. Since $A$ centralizes $H$ and $\mathrm{Ad}(N(g))H\in H+\mathfrak{n}(H)$, we have
$H^{A(g)N(g)}\in H+\mathfrak{n}(H)$, and therefore
\[H^g=(H^{A(g)N(g)})^{K(g)}\in (H+\mathfrak{n}(H))^{K(g)},\]
i.e. $H^g$ lies in the ruling fiber over $H^{K(g)}$, which proves our assertion.
\end{proof}
To understand the bundle projection infinitesimally we also need information on the differential of the `Iwasawa projections'.
This information can be found for $H$ or $A$ in \cite{DKV} (for higher order derivatives as well; see also \cite{BB}). The result for the three projections
is presented below; the proof is omitted since it is a straightforward application of the chain rule.
\begin{lemma}\label{lem:infpro}
For any $X\in \mathfrak{g}$ and $g\in G$ we have
\begin{equation}\label{eq:decomp2}
X^{AN(g)}=K(X,g)+A(X,g)^{A(g)}+N(X,g)^{AN(g)}
\end{equation}
written as sum of vectors in $\mathfrak{k},\mathfrak{a},\mathfrak{n}$,
where $K(X,g),A(X,g),N(X,g)$ stand for the left translation to the identity on $K,A,N$ of the vector field represented
by the curves $K(g\mathrm{exp}(tX)),A(g\mathrm{exp}(tX)),N(g\mathrm{exp}(tX))$, respectively, and $AN(g)$ denotes $A(g)N(g)$.
\end{lemma}
\begin{proof}[Proof of Proposition \ref{pro:pro}]
The submanifold $\mathrm{Ad}(K)_H^g$ is the graph of a 1-form $\alpha_{g,H}\in \Omega^1(\mathrm{Ad}(K)_H)$, which we evaluate
now: given any $k\in K$, according to Lemma \ref{lem:pro} the point $H^{gk}\in \mathrm{Ad}(K)_H^g$ projects
over $H^{K(gk)}\in \mathrm{Ad}(K)_H$. The tangent space $T_{H^{K(gk)}}\mathrm{Ad}(K)_H$ is spanned by vectors of the form
$L_{K(gk)*}K(X,gk)$, where $X\in \kk$. By definition of $\a_{g,H}$ we have:
\[
\a_{g,H}(K(gk))(L_{K(gk)*}K(X,gk))=\langle (H^{gk}-H^{K(gk)})^{K(gk)^{-1}}, K(X,gk)\rangle.
\]
Because $\mathfrak{k}$ and $\mathfrak{a}\subset \mathfrak{p}$ are orthogonal, we deduce
\[\langle (H^{gk}-H^{K(gk)})^{K(gk)^{-1}}, K(X,gk)\rangle=\langle H^{AN(gk)},K(X,gk)\rangle.
\]
By (\ref{eq:decomp2})
\[\langle H^{AN(gk)},K(X,gk)\rangle=\langle H^{AN(gk)},X^{AN(gk)}-A(X,gk)-N(X,gk)^{AN(gk)}\rangle.\]
Because $H^{AN(gk)}\in \mathfrak{a}+\mathfrak{n}$ and because $\aa$ and $\mathfrak{n}(H)$ are $\langle \cdot,\cdot\rangle$-orthogonal
\[\langle H^{AN(gk)},X^{AN(gk)}-A(X,gk)-N(X,gk)^{AN(gk)}\rangle=-\langle H^{AN(gk)},A(X,gk)\rangle,\]
and therefore
\begin{equation}\label{eq:form1}
\a_{g,H}(K(gk))(L_{K(gk)*}K(X,gk))=-\langle H,A(X,gk)\rangle.
\end{equation}
Now consider the function $F_{g,H}\colon K\rightarrow \R,\, k\mapsto \langle H,H(gk)\rangle$.
By \cite{DKV}, Proposition 5.6, it descends to a function $F_{g,H}\in C^\infty(\mathrm{Ad}(K)_H)$. According to \cite{DKV},
Corollary 5.2,
\[-D F_{g,H}(L_{K(g)*}K(X,g))=-\langle X^{AN(g)},H\rangle,\]
and by equation (\ref{eq:decomp2})
\begin{equation}\label{eq:form2}
-D F_{g,H}(L_{K(g)*}K(X,g))=-\langle H, A(X,g)\rangle.
\end{equation}
Hence by equations (\ref{eq:form1}) and (\ref{eq:form2}) we conclude
\[\a_{g,H}=-DF_{g,H},\]
as we wanted to prove.
\end{proof}
\subsection{The equality $\w_{\mathrm{KKS}}=\w_{\mathrm{std}}$.}
We just need to prove the equality at any point $H^g$ on pairs of vectors $[X^g,H^g], [Y^g,H^g]$,
where $X\in \mathfrak{k}$ and $Y\in \mathfrak{n}(H)$.
By definition of the KKS form $\w_{KKS}(H^g)([X^g,H^g],[Y^g,H^g])=\langle H,[X,Y]\rangle$.
As for the standard form
\[ \w_{\mathrm{std}}(H^g)([X^g,H^g],[Y^g,H^g])=\langle [Y^g,H^g]^{K(g)^{-1}},K(X,g)\rangle=\langle [Y^{AN(g)},H^{AN(g)}],K(X,g)\rangle.\]
By equation (\ref{eq:decomp2})
\[\langle [Y^{AN(g)},H^{AN(g)}],K(X,g)\rangle=\langle [Y,H],X\rangle -\langle [Y^{AN(g)},H^{AN(g)}], A(X,g)^{A(g)}+N(X,g)^{AN(g)}\rangle,\]
which equals $\langle [Y,H],X\rangle=\langle H,[X,Y]\rangle$ since in the second summand the first entry belongs to $\mathfrak{n}$ and the second to
$\mathfrak{a}+\mathfrak{n}$.
\end{document}
\typeout{TCILATEX Macros for Scientific Word 2.5 <22 Dec 95>.}
\typeout{NOTICE: This macro file is NOT proprietary and may be
freely copied and distributed.}
\makeatletter
\newcount\@hour\newcount\@minute\chardef\@x10\chardef\@xv60
\def\tcitime{
\def\@time{
\@minute\time\@hour\@minute\divide\@hour\@xv
\ifnum\@hour<\@x 0\fi\the\@hour:
\multiply\@hour\@xv\advance\@minute-\@hour
\ifnum\@minute<\@x 0\fi\the\@minute
}}
\@ifundefined{hyperref}{\def\hyperref#1#2#3#4{#2\ref{#4}#3}}{}
\@ifundefined{qExtProgCall}{\def\qExtProgCall#1#2#3#4#5#6{\relax}}{}
\def\FILENAME#1{#1}
\def\QCTOpt[#1]#2{
\def\QCTOptB{#1}
\def\QCTOptA{#2}
}
\def\QCTNOpt#1{
\def\QCTOptA{#1}
\let\QCTOptB\empty
}
\def\Qct{
\@ifnextchar[{
\QCTOpt}{\QCTNOpt}
}
\def\QCBOpt[#1]#2{
\def\QCBOptB{#1}
\def\QCBOptA{#2}
}
\def\QCBNOpt#1{
\def\QCBOptA{#1}
\let\QCBOptB\empty
}
\def\Qcb{
\@ifnextchar[{
\QCBOpt}{\QCBNOpt}
}
\def\PrepCapArgs{
\ifx\QCBOptA\empty
\ifx\QCTOptA\empty
{}
\else
\ifx\QCTOptB\empty
{\QCTOptA}
\else
[\QCTOptB]{\QCTOptA}
\fi
\fi
\else
\ifx\QCBOptA\empty
{}
\else
\ifx\QCBOptB\empty
{\QCBOptA}
\else
[\QCBOptB]{\QCBOptA}
\fi
\fi
\fi
}
\newcount\GRAPHICSTYPE
\GRAPHICSTYPE=\z@
\def\GRAPHICSPS#1{
\ifcase\GRAPHICSTYPE
\special{ps: #1}
\or
\special{language "PS", include "#1"}
\fi
}
\def\GRAPHICSHP#1{\special{include #1}}
\def\graffile#1#2#3#4{
\leavevmode
\raise -#4 \BOXTHEFRAME{
\hbox to #2{\raise #3\hbox to #2{\null #1\hfil}}}
}
\def\draftbox#1#2#3#4{
\leavevmode\raise -#4 \hbox{
\frame{\rlap{\protect\tiny #1}\hbox to #2
{\vrule height#3 width\z@ depth\z@\hfil}
}
}
}
\newcount\draft
\draft=\z@
\let\nographics=\draft
\newif\ifwasdraft
\wasdraftfalse
\def\GRAPHIC#1#2#3#4#5{
\ifnum\draft=\@ne\draftbox{#2}{#3}{#4}{#5}
\else\graffile{#1}{#3}{#4}{#5}
\fi
}
\def\addtoLaTeXparams#1{
\edef\LaTeXparams{\LaTeXparams #1}}
\newif\ifBoxFrame \BoxFramefalse
\newif\ifOverFrame \OverFramefalse
\newif\ifUnderFrame \UnderFramefalse
\def\BOXTHEFRAME#1{
\hbox{
\ifBoxFrame
\frame{#1}
\else
{#1}
\fi
}
}
\def\doFRAMEparams#1{\BoxFramefalse\OverFramefalse\UnderFramefalse\readFRAMEparams#1\end}
\def\readFRAMEparams#1{
\ifx#1\end
\let\next=\relax
\else
\ifx#1i\dispkind=\z@\fi
\ifx#1d\dispkind=\@ne\fi
\ifx#1f\dispkind=\tw@\fi
\ifx#1t\addtoLaTeXparams{t}\fi
\ifx#1b\addtoLaTeXparams{b}\fi
\ifx#1p\addtoLaTeXparams{p}\fi
\ifx#1h\addtoLaTeXparams{h}\fi
\ifx#1X\BoxFrametrue\fi
\ifx#1O\OverFrametrue\fi
\ifx#1U\UnderFrametrue\fi
\ifx#1w
\ifnum\draft=1\wasdrafttrue\else\wasdraftfalse\fi
\draft=\@ne
\fi
\let\next=\readFRAMEparams
\fi
\next
}
\def\IFRAME#1#2#3#4#5#6{
\bgroup
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
#6
\parindent=0pt
\leftskip=0pt
\rightskip=0pt
\setbox0 = \hbox{\QCBOptA}
\@tempdima = #1\relax
\ifOverFrame
\typeout{This is not implemented yet}
\show\HELP
\else
\ifdim\wd0>\@tempdima
\advance\@tempdima by \@tempdima
\ifdim\wd0 >\@tempdima
\textwidth=\@tempdima
\setbox1 =\vbox{
\noindent\hbox to \@tempdima{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox to \@tempdima{\parbox[b]{\@tempdima}{\QCBOptA}}
}
\wd1=\@tempdima
\else
\textwidth=\wd0
\setbox1 =\vbox{
\noindent\hbox to \wd0{\hfill\GRAPHIC{#5}{#4}{#1}{#2}{#3}\hfill}\\%
\noindent\hbox{\QCBOptA}
}
\wd1=\wd0
\fi
\else
\ifdim\wd0>0pt
\hsize=\@tempdima
\setbox1 =\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
\break
\unskip\hbox to \@tempdima{\hfill \QCBOptA\hfill}
}
\wd1=\@tempdima
\else
\hsize=\@tempdima
\setbox1 =\vbox{
\unskip\GRAPHIC{#5}{#4}{#1}{#2}{0pt}
}
\wd1=\@tempdima
\fi
\fi
\@tempdimb=\ht1
\advance\@tempdimb by \dp1
\advance\@tempdimb by -#2
\advance\@tempdimb by #3
\leavevmode
\raise -\@tempdimb \hbox{\box1}
\fi
\egroup
}
\def\DFRAME#1#2#3#4#5{
\begin{center}
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\ifOverFrame
#5\QCTOptA\par
\fi
\GRAPHIC{#4}{#3}{#1}{#2}{\z@}
\ifUnderFrame
\nobreak\par #5\QCBOptA
\fi
\end{center}
}
\def\FFRAME#1#2#3#4#5#6#7{
\begin{figure}[#1]
\let\QCTOptA\empty
\let\QCTOptB\empty
\let\QCBOptA\empty
\let\QCBOptB\empty
\ifOverFrame
#4
\ifx\QCTOptA\empty
\else
\ifx\QCTOptB\empty
\caption{\QCTOptA}
\else
\caption[\QCTOptB]{\QCTOptA}
\fi
\fi
\ifUnderFrame\else
\label{#5}
\fi
\else
\UnderFrametrue
\fi
\begin{center}\GRAPHIC{#7}{#6}{#2}{#3}{\z@}\end{center}
\ifUnderFrame
#4
\ifx\QCBOptA\empty
\caption{}
\else
\ifx\QCBOptB\empty
\caption{\QCBOptA}
\else
\caption[\QCBOptB]{\QCBOptA}
\fi
\fi
\label{#5}
\fi
\end{figure}
}
\newcount\dispkind
\def\makeactives{
\catcode`\"=\active
\catcode`\;=\active
\catcode`\:=\active
\catcode`\'=\active
\catcode`\~=\active
}
\bgroup
\makeactives
\gdef\activesoff{
\def"{\string"}
\def;{\string;}
\def:{\string:}
\def'{\string'}
\def~{\string~}
}
\egroup
\def\FRAME#1#2#3#4#5#6#7#8{
\bgroup
\@ifundefined{bbl@deactivate}{}{\activesoff}
\ifnum\draft=\@ne
\wasdrafttrue
\else
\wasdraftfalse
\fi
\def\LaTeXparams{}
\dispkind=\z@
\def\LaTeXparams{}
\doFRAMEparams{#1}
\ifnum\dispkind=\z@\IFRAME{#2}{#3}{#4}{#7}{#8}{#5}\else
\ifnum\dispkind=\@ne\DFRAME{#2}{#3}{#7}{#8}{#5}\else
\ifnum\dispkind=\tw@
\edef\@tempa{\noexpand\FFRAME{\LaTeXparams}}
\@tempa{#2}{#3}{#5}{#6}{#7}{#8}
\fi
\fi
\fi
\ifwasdraft\draft=1\else\draft=0\fi{}
\egroup
}
\def\TEXUX#1{"texux"}
\def\BF#1{{\bf {#1}}}
\def\NEG#1{\leavevmode\hbox{\rlap{\thinspace/}{$#1$}}}
\def\func#1{\mathop{\rm #1}}
\def\limfunc#1{\mathop{\rm #1}}
\long\def\QQQ#1#2{
\long\expandafter\def\csname#1\endcsname{#2}}
\@ifundefined{QTP}{\def\QTP#1{}}{}
\@ifundefined{QEXCLUDE}{\def\QEXCLUDE#1{}}{}
\@ifundefined{Qlb}{\def\Qlb#1{#1}}{}
\@ifundefined{Qlt}{\def\Qlt#1{#1}}{}
\def\QWE{}
\long\def\QQA#1#2{}
\def\QTR#1#2{{\csname#1\endcsname #2}}
\long\def\TeXButton#1#2{#2}
\long\def\QSubDoc#1#2{#2}
\def\EXPAND#1[#2]#3{}
\def\NOEXPAND#1[#2]#3{}
\def\PROTECTED{}
\def\LaTeXparent#1{}
\def\ChildStyles#1{}
\def\ChildDefaults#1{}
\def\QTagDef#1#2#3{}
\@ifundefined{StyleEditBeginDoc}{\def\StyleEditBeginDoc{\relax}}{}
\def\QQfnmark#1{\footnotemark}
\def\QQfntext#1#2{\addtocounter{footnote}{#1}\footnotetext{#2}}
\def\MAKEINDEX{\makeatletter\input gnuindex.sty\makeatother\makeindex}
\@ifundefined{INDEX}{\def\INDEX#1#2{}{}}{}
\@ifundefined{SUBINDEX}{\def\SUBINDEX#1#2#3{}{}{}}{}
\@ifundefined{initial}
{\def\initial#1{\bigbreak{\raggedright\large\bf #1}\kern 2\p@\penalty3000}}
{}
\@ifundefined{entry}{\def\entry#1#2{\item {#1}, #2}}{}
\@ifundefined{primary}{\def\primary#1{\item {#1}}}{}
\@ifundefined{secondary}{\def\secondary#1#2{\subitem {#1}, #2}}{}
\@ifundefined{ZZZ}{}{\MAKEINDEX\makeatletter}
\@ifundefined{abstract}{
\def\abstract{
\if@twocolumn
\section*{Abstract (Not appropriate in this style!)}
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}
\quotation
\fi
}
}{
}
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}
\@ifundefined{maketitle}{\def\maketitle#1{}}{}
\@ifundefined{affiliation}{\def\affiliation#1{}}{}
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}
\@ifundefined{newfield}{\def\newfield#1#2{}}{}
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }
\newcount\c@chapter}{}
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}
\@ifundefined{subsection}{\def\subsection#1
{\par(Subsection head:)#1\par }}{}
\@ifundefined{subsubsection}{\def\subsubsection#1
{\par(Subsubsection head:)#1\par }}{}
\@ifundefined{paragraph}{\def\paragraph#1
{\par(Subsubsubsection head:)#1\par }}{}
\@ifundefined{subparagraph}{\def\subparagraph#1
{\par(Subsubsubsubsection head:)#1\par }}{}
\@ifundefined{therefore}{\def\therefore{}}{}
\@ifundefined{backepsilon}{\def\backepsilon{}}{}
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}
\@ifundefined{registered}{
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\text{R}$}\hfil\crcr
\mathhexbox20D}}}}{}
\@ifundefined{Eth}{\def\Eth{}}{}
\@ifundefined{eth}{\def\eth{}}{}
\@ifundefined{Thorn}{\def\Thorn{}}{}
\@ifundefined{thorn}{\def\thorn{}}{}
\def\TEXTsymbol#1{\mbox{$#1$}}
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}
\newdimen\theight
\def\Column{
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{
\rightline{\rlap{\box\z@}}
\vss
}
}
}
\def\qed{
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}
}
\def\cents{\hbox{\rm\rlap/c}}
\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}
\def\vvert{\Vert}
\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}
\def\dB{\hbox{{}}}
\def\mB#1{\hbox{$#1$}}
\def\nB#1{\hbox{#1}}
\def\note{$^{\dag}}
\def\newfmtname{LaTeX2e}
\def\chkcompat{
\if@compatibility
\else
\usepackage{latexsym}
\fi
}
\ifx\fmtname\newfmtname
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\chkcompat
\fi
\def\alpha{{\Greekmath 010B}}
\def\beta{{\Greekmath 010C}}
\def\gamma{{\Greekmath 010D}}
\def\delta{{\Greekmath 010E}}
\def\epsilon{{\Greekmath 010F}}
\def\zeta{{\Greekmath 0110}}
\def\eta{{\Greekmath 0111}}
\def\theta{{\Greekmath 0112}}
\def\iota{{\Greekmath 0113}}
\def\kappa{{\Greekmath 0114}}
\def\lambda{{\Greekmath 0115}}
\def\mu{{\Greekmath 0116}}
\def\nu{{\Greekmath 0117}}
\def\xi{{\Greekmath 0118}}
\def\pi{{\Greekmath 0119}}
\def\rho{{\Greekmath 011A}}
\def\sigma{{\Greekmath 011B}}
\def\tau{{\Greekmath 011C}}
\def\upsilon{{\Greekmath 011D}}
\def\phi{{\Greekmath 011E}}
\def\chi{{\Greekmath 011F}}
\def\psi{{\Greekmath 0120}}
\def\omega{{\Greekmath 0121}}
\def\varepsilon{{\Greekmath 0122}}
\def\vartheta{{\Greekmath 0123}}
\def\varpi{{\Greekmath 0124}}
\def\varrho{{\Greekmath 0125}}
\def\varsigma{{\Greekmath 0126}}
\def\varphi{{\Greekmath 0127}}
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}
}
\def\Greekmath#1#2#3#4{
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}
\else
\mathchar"#1#2#3#4
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{
\newcounter{equationnumber}
\def\mathletters{
\addtocounter{equation}{1}
\edef\@currentlabel{\theequation}
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}
\edef\theequation{\@currentlabel\noexpand\alph{equation}}
}
\def\endmathletters{
\setcounter{equation}{\value{equationnumber}}
}
}{}
\@ifundefined{BibTeX}{
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}
\@ifundefined{AmS}
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}
\ifx\ds@amstex\relax
\message{amstex already loaded}\makeatother\endinput
\else
\@ifpackageloaded{amstex}
{\message{amstex already loaded}\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}
{\message{amsgen already loaded}\makeatother\endinput}
{}
\fi
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}
\def\FN@{\futurelet\next}
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}
\def\ints@{\findlimits@\ints@@}
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}
\def\intic@{
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}
\def\intdots@{\mathchoice{\plaincdots@}
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}
\glb@settings}
\def\textdef@#1#2#3{\hbox{{
\everymath{#1}
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}
\def\Sb{_\multilimits@}
\def\endSb{\crcr\egroup\egroup\egroup}
\def\Sp{^\multilimits@}
\let\endSp\endSb
\newdimen\ex@
\ex@.2326ex
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\overrightarrow{\mathpalette\overrightarrow@}
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}
\def\underrightarrow{\mathpalette\underrightarrow@}
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}
\def\binom#1#2{{#1 \choose #2}}
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}
\def\QATOP#1#2{{#1 \atop #2}}
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}
\def\QABOVE#1#2#3{{#2 \above#1 #3}}
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}
\def\tint{\mathop{\textstyle \int}}
\def\tiint{\mathop{\textstyle \iint }}
\def\tiiint{\mathop{\textstyle \iiint }}
\def\tiiiint{\mathop{\textstyle \iiiint }}
\def\tidotsint{\mathop{\textstyle \idotsint }}
\def\toint{\mathop{\textstyle \oint}}
\def\tsum{\mathop{\textstyle \sum }}
\def\tprod{\mathop{\textstyle \prod }}
\def\tbigcap{\mathop{\textstyle \bigcap }}
\def\tbigwedge{\mathop{\textstyle \bigwedge }}
\def\tbigoplus{\mathop{\textstyle \bigoplus }}
\def\tbigodot{\mathop{\textstyle \bigodot }}
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}
\def\tcoprod{\mathop{\textstyle \coprod }}
\def\tbigcup{\mathop{\textstyle \bigcup }}
\def\tbigvee{\mathop{\textstyle \bigvee }}
\def\tbigotimes{\mathop{\textstyle \bigotimes }}
\def\tbiguplus{\mathop{\textstyle \biguplus }}
\def\dint{\mathop{\displaystyle \int}}
\def\diint{\mathop{\displaystyle \iint }}
\def\diiint{\mathop{\displaystyle \iiint }}
\def\diiiint{\mathop{\displaystyle \iiiint }}
\def\didotsint{\mathop{\displaystyle \idotsint }}
\def\doint{\mathop{\displaystyle \oint}}
\def\dsum{\mathop{\displaystyle \sum }}
\def\dprod{\mathop{\displaystyle \prod }}
\def\dbigcap{\mathop{\displaystyle \bigcap }}
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}
\def\dbigodot{\mathop{\displaystyle \bigodot }}
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}
\def\dcoprod{\mathop{\displaystyle \coprod }}
\def\dbigcup{\mathop{\displaystyle \bigcup }}
\def\dbigvee{\mathop{\displaystyle \bigvee }}
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}
\def\dbiguplus{\mathop{\displaystyle \biguplus }}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\endequation{
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{
\global\tag@true
\global\def\@taggnum{#1}
}
\makeatother
\endinput
Please note: you are viewing a page from a past show; trade deals, products and competitions may no longer be valid.
BuildNZ co-locates with Registered Master Builders' CARTERS Apprentice of the Year
We are delighted to announce that BuildNZ will be hosting Registered Master Builders' National Practical on 4th November 2021.
Entries for 2021 are now closed.
To follow the competition, check out the Winners page after each regional event and follow us on social media: Facebook, Instagram.
In association with:
Life’s a Beach
During photo of bunker renovation at Spring Ford CC (Photo: Spring Ford CC)
Two bad bunkers
When Spring Ford Country Club started having problems with two of its bunkers, the club looked for a solution that would allow it a little flexibility.
By Clara Richter
Anyone who has spent any portion of their life in northern climes knows what happens to roads after years of freezing and thawing during the fall, winter and early spring. If the freeze-thaw cycle can rip a pothole in asphalt, imagine what it can do to a bunker liner.
That was the thought running through the heads of Superintendent Mark Rubbo and the greens committee of Spring Ford Country Club in Royersford, Pa., when it came time to decide on a new bunker liner for two of their most problematic bunkers. And it’s one of the main reasons they chose Blinder Bunker.
“If you start to get all kinds of moisture and freezing and thawing, we thought it was safer to withstand the freeze-thaw cycles,” Rubbo explains. “All you have to do is drive around Pennsylvania. Whatever you’re driving on is solid material, whatever it is — concrete or asphalt — you get potholes. Things are subject to the freeze-thaw.”
Spring Ford CC has 60 bunkers, but so far only two have been renovated using Blinder Bunker. According to Rubbo, these bunkers were much worse than the others because they had surface water moving in from outside, which accelerated the damage and contamination.
“We were having problems with those in 2017. It was pretty wet and the two that we ended up doing got so washed out and so destroyed, the sand was completely contaminated,” Rubbo says. “They wouldn’t drain. That’s when we got searching for liners.”
Rubbo and the greens committee began the hunt for a solution by looking at the pros and cons of various bunker liners. They liked the flexibility of Blinder Bunker, as well as the fact that it wouldn’t damage clubs.
“There’s nothing in there that can scratch a golf club if you don’t have the sand perfect. They never really want to take a chance with gouging up clubs with anything that might be more solid underneath,” Rubbo laughs.
Once the course decided to move forward with Blinder Bunker, a certified installer — George E. Ley Co., out of Glenmoore, Pa. — came in and did the installation. The renovation took place in April 2018 and took about three or four days per bunker.
Rubbo saw immediate results. The course received two and a half inches of rain the weekend after the liners were installed and, according to Rubbo, not one grain of sand moved in either bunker. “We didn’t even turn a rake over to push a little ripple back,” he says.
“Everybody loves it,” Rubbo adds confidently.
Another thing everyone can agree on: The course needs 58 more Blinder-lined bunkers. Rubbo says they have had a lot of great side-by-side comparisons. “One looked like a bathtub, and the others, you’re ready to play golf.”
The course doesn’t have any plans to renovate the rest of its bunkers in the near future, but Rubbo says they might renovate a few here and there as issues pop up.
“After” photo of the bunker renovation at Mission Viejo CC (Photo: Mission Viejo CC)
Mission: Bunkers
Why bunker renovations were the first order of business for the new superintendent at Mission Viejo Country Club
By Sarah Webb
When Doug Rudolph became the superintendent of Mission Viejo (Calif.) Country Club in March 2018, his first order of business was to oversee the renovation of the club’s 60 bunkers.
Rudolph joined the Mission Viejo, Calif.-based club after 27 years at Pauma Valley Country Club in Pauma Valley, Calif., where he gained experience in handling bunker projects.
Rudolph says he was approached by Mission Viejo CC’s board of directors and greens committee about a bunker renovation. At that point, the bunkers were about 10 years old. They mentioned that the existing bunkers had nonplayable areas on the bottoms, torn lining that would poke through the sand and poor drainage, to boot.
“The previous bunker liner that was in there just wasn’t acceptable to the membership anymore,” Rudolph says. “We had to choose something else out there.”
After viewing bunker liners from three different companies and visiting a few local courses that had recently installed Polylast-lined bunkers, Mission Viejo CC settled on Polylast as its supplier.
In addition to installing new liner, Mission Viejo CC had many of the bunkers’ floors redone, new drainage installed and new sand brought in. The venue remained open during the renovations. In fact, Rudolph says the club typically saw between 150 and 200 rounds of golf per day during that time.
The club bid the project out to a local contractor; renovation began in September 2018 and wrapped up by December 2018.
Despite the large number of bunkers and heavy rain that fell throughout the construction phase, Rudolph says the project ran smoothly.
“The experience of going through the process and listening to representatives from Polylast just made it easy,” Rudolph says.
He adds that at his previous course, he dealt with sand-based soil, and therefore with bunkers that didn’t include liners. However, with Mission Viejo’s clay-based soil, Rudolph says the bunker lining and drainage systems are necessary.
“Having the bunker liner over the top of the clay really helps because you don’t get any ‘bleeding’ (like mud) through the sand,” Rudolph explains.
With the bunkers complete, Rudolph says the club’s members are pleased with the results.
“(Our members) are ecstatic about the bunkers. I hear nothing but great comments about the bunkers almost every day,” Rudolph says.
Additionally, Mission Viejo’s crews have reaped the benefits of the improvements, according to Rudolph.
For example, in January alone, Mission Viejo received more than 7 inches of rain — a significantly higher amount than what’s typical for the area, according to Rudolph. In the past, such a high amount of rainfall would’ve caused serious damage, but Rudolph says his crews were able to clear the excess water and repair the edges of the bunkers in fewer than two hours.
“We still have very steep bunkers, but with the Polylast bunker liner, we did not have any major washouts at all,” Rudolph explains.
Satisfied with the final result, Rudolph says Mission Viejo CC doesn’t have any major bunker renovations on the horizon, but the club’s next major project will include renovating cart paths.
TITLE: QM - Particle in Potential Well - Probability of states
QUESTION [0 upvotes]: A particle is confined in a potential well such that its allowed energies are $E^n = n^2\epsilon$, where $n = 1, 2, \dots$ is an
integer and $\epsilon$ a positive constant. The corresponding energy eigenstates are $\lvert1\rangle, \lvert2\rangle, \dots , \lvert n\rangle, \dots$ At t = 0 the
particle is in the state:
$\lvert\psi(0)\rangle = 0.2\lvert1\rangle + 0.3\lvert2\rangle + 0.4\lvert3\rangle + 0.843\lvert4\rangle$.
What is the probability, if energy is measured at $t=0$, of finding a value smaller than $6\epsilon$?
Am I right in saying this would just be the sum of states $n = 1$ and $n = 2$ which is $0.5$?
Then I'm wondering how you would calculate the mean value and rms deviation of the energy of the particle in the state $\lvert\psi(0)\rangle$
How do I find the state vector $\lvert\psi\rangle$ at any time $t$? And therefore do the results calculated above remain valid for any arbitrary time?
The last thing I'm stuck on is, lets say the energy is measured and it is said to be $16\epsilon$. After this measurement, what is the state of the system, and what result would you get if you tried to measure energy again?
REPLY [1 votes]: For your first question-you actually have to square the coefficients and add them up. Notice that it is the sum of the squares of the coefficients which add up to unity. So the required probability would be $(0.2)^2+(0.3)^2=0.13$.
In general, energy superpositions are not stationary states, so your state-vector will change with time. However, the probabilities will remain unchanged, because when you evolve the state-vector with time, each term in its expansion will only vary by a phase factor of the form $e^{-ikt}$, where t is time and k is some constant. You can see that the modulus will remain the same, since $|ce^{-ikt}|^2=|c|^2.$
The mean value of energy will just be $\sum_n|c_n|^2E_n$.
To answer your last question: Energy eigenstates are stationary states. Therefore, the system will remain in the eigenstate with energy $16\epsilon$ (here $\lvert4\rangle$) after you measure it, and you will get the same result if you measure the energy again.
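As a sanity check on the numbers discussed above, the following sketch evaluates the probabilities, mean energy and rms deviation numerically, taking $\epsilon = 1$. Note that $0.843$ is a rounded value of $\sqrt{1-0.2^2-0.3^2-0.4^2}$, so the probabilities are renormalized here.

```python
# Numerical check of the quantities in this thread, with epsilon = 1 so E_n = n^2.
coeffs = [0.2, 0.3, 0.4, 0.843]          # c_1 .. c_4
energies = [n**2 for n in range(1, 5)]   # E_n = n^2 * epsilon

probs = [c**2 for c in coeffs]
norm = sum(probs)                        # ~1, up to the rounding of 0.843
probs = [p / norm for p in probs]

# Probability of measuring an energy below 6*epsilon: only E_1 = 1 and E_2 = 4 qualify.
p_low = probs[0] + probs[1]

# Mean and rms deviation of the energy.
mean_E = sum(p * E for p, E in zip(probs, energies))
mean_E2 = sum(p * E**2 for p, E in zip(probs, energies))
rms_dev = (mean_E2 - mean_E**2) ** 0.5

print(p_low, mean_E, rms_dev)
```

This reproduces the probability $0.13$ (up to rounding) and gives a mean energy of about $13.2\epsilon$.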
TITLE: Approximation of the number of partitions of n, denoted p(n)
QUESTION [1 upvotes]: I managed to prove that $p(n)\ge \max_{1\le k\le n}{{n-1\choose k-1}\over k!}$,that wasn't hard. Now I need to use this result to prove that there exists a constant c>0 for which $p(n)\ge e^{c\sqrt n}$ for any $n\in \mathbb{N}$. I tried to use Stirling's approximation here, but didn't get the result I want. Any hints?
REPLY [1 votes]: First find for which $k$ the expression
$$a(n,k) = \frac{1}{k!}\binom{n-1}{k-1} = \frac{(n-1)!}{k!(k-1)!(n-k)!}$$
reaches its maximum. Since $a(n,k) > 0$ for $1 \leqslant k \leqslant n$ and
$$\frac{a(n,k+1)}{a(n,k)} = \frac{n-k}{(k+1)k}$$
we see that
$$a(n,k+1) < a(n,k) \iff n - k < (k+1)k \iff n+1 < (k+1)^2,$$
so the maximum is attained for $k = \lfloor \sqrt{n+1}\rfloor$.
Now recall Stirling's approximation
$$\log m! = \bigl(m + \tfrac{1}{2}\bigr)\log m - m + \log \sqrt{2\pi} + O\biggl(\frac{1}{m}\biggr)$$
and the equivalent
$$\log (m-1)! = \bigl(m - \tfrac{1}{2}\bigr)\log m - m + \log \sqrt{2\pi} + O\biggl(\frac{1}{m}\biggr)\,.$$
Plugging these into
$$\log p(n) \geqslant \log (n-1)! - \log (n-k)! - \log (k-1)! - \log k!$$
yields
\begin{align}
\log p(n)
&\geqslant \bigl(n - \tfrac{1}{2}\bigr) \log n - n + \log \sqrt{2\pi} + O\biggl(\frac{1}{n}\biggr) \\
&\qquad - \bigl(n - k + \tfrac{1}{2}\bigr)\log (n-k) + n - k - \log \sqrt{2\pi} + O\biggl(\frac{1}{n-k}\biggr) \\
&\qquad - \bigl(k - \tfrac{1}{2}\bigr)\log k + k - \log \sqrt{2\pi} + O\biggl(\frac{1}{k}\biggr) \\
&\qquad - \bigl(k + \tfrac{1}{2}\bigr)\log k + k - \log \sqrt{2\pi} + O\biggl(\frac{1}{k}\biggr) \\
&= (k-1)\log n - 2k\log k - \bigl(n - k + \tfrac{1}{2}\bigr)\log \biggl(1 - \frac{k}{n}\biggr) + k - \log (2\pi) + O\biggl(\frac{1}{\sqrt{n}}\biggr)
\end{align}
for $k = \lfloor \sqrt{n+1}\rfloor = \sqrt{n} + O(1)$. With $k^2 = n + O(\sqrt{n})$ and a low-order Taylor expansion of the logarithms this leads to
$$\log p(n) \geqslant 2\sqrt{n} + O(\log n)$$
and hence
$$\liminf_{n \to \infty} \frac{\log p(n)}{\sqrt{n}} \geqslant 2\,,$$
from which
$$a := \inf \: \biggl\{ \frac{\log p(n)}{\sqrt{n}} : n \in \mathbb{N}\setminus \{0\}\biggr\} > 0$$
follows. Thus $p(n) \geqslant e^{c\sqrt{n}}$ holds e.g. for $c = a > 0$.
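The bound in the question can also be checked numerically. The sketch below computes $p(n)$ with the standard dynamic-programming recurrence and compares it against $\max_k \binom{n-1}{k-1}/k!$ (and, for reference, prints $e^{2\sqrt{n}}$, keeping in mind that the constant $2$ is only asymptotic).

```python
from math import comb, factorial, exp, sqrt

def partitions(n):
    """Number of partitions p(n) via the standard coin-style DP."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def binom_bound(n):
    """The lower bound max_k C(n-1, k-1) / k! from the question."""
    return max(comb(n - 1, k - 1) / factorial(k) for k in range(1, n + 1))

for n in (10, 50, 100):
    print(n, partitions(n), binom_bound(n), exp(2 * sqrt(n)))
```

The printed values confirm that the binomial expression is indeed a lower bound for $p(n)$ at each tested $n$.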
TITLE: How do I determine the values of $k$ for which $3x^2 + kx+12 = 0$ has no real solutions, $1$ real solution, and $2$ real solutions?
QUESTION [1 upvotes]: I know that the parabola has to pass through $(0,12)$, because the $C$ term is always the $y$-intercept.
Please just tell me how to approach the problem/give tips, so that you are not doing my homework for me.
REPLY [1 votes]: By the distributive property the equation is equivalently:
$$0=3\left(x^2+\frac{k}{3}x\right)+12$$
Subtracting $12$ from both sides of the equation.
$$-12=3\left(x^2+\frac{k}{3}x\right)$$
Dividing by $3$ on both sides of the equation.
$$-4=x^2+\frac{k}{3}x$$
Using the fact that $\left(x+\frac{y}{2}\right)^2=x^2+2\left(x\right)\left(\frac{y}{2}\right)+\frac{y^2}{4}$:
$$-4=\left(x+\frac{k}{6}\right)^2-\frac{k^2}{36}$$
$$\frac{k^2}{36}-4=\left(x+\frac{k}{6}\right)^2$$
Note now that if $u^2=c$ then $c=0$ gives one solution $0$, $c>0$ gives $2$ solutions $\pm \sqrt{c}$, and $c<0$ gives no real solutions.
With a little more thinking we can see that if we want $1$ real root we need:
$$\frac{k^2}{36}-4=0$$
If we want $2$ real solutions we need:
$$\frac{k^2}{36}-4>0$$
And finally no real solutions:
$$\frac{k^2}{36}-4<0$$
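The same classification can be phrased via the discriminant, since $\frac{k^2}{36}-4$ has the same sign as $k^2 - 4\cdot 3\cdot 12 = k^2 - 144$. A minimal sketch:

```python
def count_real_solutions(k):
    """Classify 3x^2 + kx + 12 = 0 by its discriminant k^2 - 4*3*12."""
    disc = k * k - 144   # same sign as k^2/36 - 4 from the completed square
    if disc > 0:
        return 2
    if disc == 0:
        return 1
    return 0

print(count_real_solutions(12), count_real_solutions(13), count_real_solutions(0))
```

So there is exactly one real solution when $k = \pm 12$, two when $|k| > 12$, and none when $|k| < 12$.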
TITLE: A last year Putnam question maximum $\sum{\cos 3x}$
QUESTION [6 upvotes]: Determine the greatest possible value of $$\sum_{i=1}^{10}{\cos 3x_i}$$
for real numbers $x_1,x_2....x_{10}$ satisfying
$$\sum_{i=1}^{10}{\cos x_i}=0$$
My attempt:
$$\sum \cos 3x = \sum 4\cos^3x -\sum3\cos x=4\sum \cos^3 x $$
So now we have to maximize sum of cubes of ten numbers when their sum is zero and each lie in interval $[-1,1]$.
I often use AM-GM inequalities, but here there are 10 numbers and they are not all positive. I need help with how to visualize and approach these kinds of questions.
REPLY [2 votes]: Visualising the solution
You have asked for help in visualising the solution. I think you will find it useful to have in mind the picture of $y=x^3$ for $-1\le x\le1$.
Now consider the arrangement of the 10 numbers in the maximum position. (We have a continuous function on a compact set and so the maximum is attained.)
First suppose that there is a number, $s$, smaller in magnitude than the least negative number $l$. Increasing $l$ whilst decreasing $s$ by the same amount would increase the sum of cubes and therefore cannot occur.
So, all the negative numbers are equal, to $l$ say, and all the positive numbers are greater than $|l|$.
Now suppose that a positive number was not $1$. Then increasing it to $1$ whilst reducing one of the $l$s would increase the sum of cubes and therefore cannot occur.
Hence we need only consider the case where we have $m$ $1$s and $10-m$ numbers equal to $-\frac{m}{10-m}$.
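With that reduction, the objective becomes $4\bigl(m - m^3/(10-m)^2\bigr)$ for $m$ ones and $10-m$ values equal to $-\frac{m}{10-m}$ (which requires $m \le 5$ so the negative values stay in $[-1,1]$). A short exact-arithmetic sketch of the final optimization:

```python
from fractions import Fraction

# Objective 4 * sum(cos^3 x_i) for m values equal to 1 and (10 - m) values
# equal to -m/(10 - m), so that sum(cos x_i) = 0 as required.
def f(m):
    return 4 * (m - Fraction(m, 10 - m) ** 3 * (10 - m))

best = max(range(1, 6), key=f)   # need m/(10 - m) <= 1, i.e. m <= 5
print(best, f(best))             # maximum at m = 3, value 480/49
```

The maximum is attained at $m = 3$, giving the greatest possible value $\frac{480}{49}$.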
TITLE: Is there a name for the topology of a totally disconnected space where every point is arbitrarily close to all the others?
QUESTION [0 upvotes]: Is there a name for the topology of a totally disconnected space where every point is arbitrarily close to all the others?
This may actually be the $xy$ question. This is the specific example I'm interested in:
Let $(\Bbb Q,d)$ have the standard topology induced by the one-dimensional Euclidean metric.
Now glue together parts of $\Bbb Q$ as follows: let $2^m3^nq\sim q$ for $m,n\in\Bbb Z$.
That enables us to think of the 5-rough rationals as a transversal of $X=\Bbb Q/\langle2,3\rangle$.
But because $2^m3^n$ is dense in $\Bbb Q$, every set $\{x\cdot2^m3^n:m,n\in\Bbb Z\}$ comes arbitrarily close to every other set. Or in other words $\inf \{d(y,z):y\in Y, z\in Z\}=0$ where $Y,Z\in \Bbb Q/\langle2,3\rangle$
Although we still have $\min\{d(y,z):y\in Y, z\in Z\}=0\iff Y=Z$, so we might recover some measure of distance $d'$ between the cosets.
I'm seeing $\Bbb Q/\langle2,3\rangle,d'$ as a totally disconnected space where every point is arbitrarily close to all the others.
Is this a commonly encountered space and is there a better way to assign $\Bbb Q/\langle2,3\rangle$ a topology - given that I'm interested in the topological properties of the cosets? Does the property of every elements being arbitrarily close render the topology essentially useless?
UPDATE: I think what I'm describing here is essentially the discrete topology since $d'$ assigns distance $0$ between $x,x$ and one can choose any arbitrary distance between distinct classes since $\min$ is not defined.
REPLY [2 votes]: Usually you would give quotients the quotient topology, that is, a subset $U \subseteq \mathbb Q / \sim$ is open iff its preimage $\pi^{-1}(U) \subseteq\mathbb Q$ is open. In particular the projection $\pi : \mathbb Q \to \mathbb Q / \sim$ is a continuous map. Now the open sets look like the following:
$\mathbb Q / \sim$ whose preimage is $\mathbb Q$
$\emptyset$ whose preimage is $\emptyset$
$\mathbb Q^\times / \sim$ whose preimage is $\mathbb Q^\times$
$\pm \mathbb Q^+ / \sim$ whose preimage is $\pm \mathbb Q^+$
If $U \neq \emptyset$ is open, then it contains without loss of generality $[x]$ with $x > 0$ and its preimage $x \in \pi^{-1}(U)$ is open. For all $y > 0$ by denseness of $\{2^n 3^m y \mid m, n \in \mathbb Z \} \subseteq \mathbb Q^+$, $\pi^{-1}(U)$ contains some $2^n 3^m y$ and thus $[y] \in U$. A similar argument for $x < 0$ shows that $U$ is of the form above.
In particular, your topological space is relatively boring.
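The denseness of $\{2^m 3^n\}$ that drives both the question and the answer is easy to probe numerically. The sketch below uses two arbitrary 5-rough representatives (x = 5 and y = 7 are my own illustrative choices): for each exponent $m$ it picks the integer $n$ best matching the target ratio on a log scale, and finds multiples $2^m 3^n \cdot 5$ very close to $7$.

```python
from math import log, exp

# Denseness check: powers 2**m * 3**n can bring one coset representative
# (here x = 5) arbitrarily close to another (here y = 7). For each m we pick
# the integer n that best matches the target ratio y/x on a log scale.
x, y = 5.0, 7.0
target = log(y / x)

best_gap = min(
    abs(x * exp(m * log(2) + round((target - m * log(2)) / log(3)) * log(3)) - y)
    for m in range(-50, 51)
)
print(best_gap)   # e.g. 2**10 / 3**6 = 1024/729 gives 5 * 1.40466... ~ 7.023
```

Widening the exponent range drives the gap toward $0$, which is exactly why the quotient topology collapses to the coarse structure described in the answer.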
TITLE: Is there a non-simply-connected space with trivial first homology group?
QUESTION [12 upvotes]: Is there a path connected topological space such that its fundamental group is non-trivial, but its first homology group is trivial?
Since the first homology group of a space is the abelianization of the fundamental group, we are looking for a non-trivial group whose abelianization is trivial. Is there such a group?
REPLY [9 votes]: A group whose abelianization is trivial is called perfect, and there are many such groups. In particular, any nonabelian finite simple group is perfect, so $A_5$, for example, is perfect. $A_5$ is in fact the smallest nontrivial perfect group. So any space with fundamental group $A_5$ is an example.
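Since $A_5$ has only $60$ elements, its perfectness can be checked by brute force. The following sketch is illustrative and not part of the original answer (the helper names are mine): it enumerates the even permutations of five points and verifies that the set of commutators is all of $A_5$ (in fact, classically, every element of $A_5$ is a single commutator), so the abelianization is trivial.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p . q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_even(p):
    """A permutation is even iff its inversion count is even."""
    n = len(p)
    inversions = sum(1 for i in range(n)
                     for j in range(i + 1, n) if p[i] > p[j])
    return inversions % 2 == 0

# The 60 even permutations of {0,...,4} form A5.
A5 = [p for p in permutations(range(5)) if is_even(p)]

# All commutators a b a^{-1} b^{-1}; for A5 this set is already the
# whole group, so [A5, A5] = A5 and the abelianization is trivial.
commutators = {
    compose(compose(a, b), compose(inverse(a), inverse(b)))
    for a in A5 for b in A5
}
```

That `commutators` exhausts the group is stronger than perfectness, which only requires the commutators to generate the group.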
A famous example which almost has fundamental group $A_5$ is the Poincaré dodecahedral space. This is a closed $3$-manifold which is the quotient of $S^3$ by an action of the binary icosahedral group, which is an extension of $A_5$ and, like $A_5$, is perfect. Any closed $3$-manifold with perfect fundamental group necessarily has the same homology as a sphere; this showed Poincaré that having the homology of a sphere does not suffice for a $3$-manifold to be a sphere, motivating the fundamental group condition in the statement of the Poincaré conjecture.
REPLY [9 votes]: This is exactly the type of space that Poincaré constructed to show that homology was not enough to distinguish three-manifolds from the three-sphere. He took a dodecahedron and glued opposite faces with a minimal clockwise twist. The resulting space is a homology sphere --- it has the homology groups of $\Bbb S^3$, but has nontrivial $\pi_1$.
The Museum of Santa Cruz, Toledo, Spain
- Country: Spain
- City: Toledo
- Address: Rambla de Santa Cruz, 116
One of the most popular museums in Spain is the Museum of Santa Cruz, which houses the world’s largest collection of paintings by El Greco. The building that houses the museum is very old; it was built in the 16th century.
Santa Cruz was originally a hospital, whose construction began in 1494. The construction did not take long, but the finished buildings were otherwise unremarkable. The one striking feature was the four wings at the head of the hospital, which formed a cross; from a bird’s-eye view it is very beautiful. In those days hospitals were intended not for treating patients, but as places where the sick could come to calm their souls before leaving this world. The rooms of the Hospital of Santa Cruz (now the museum) were arranged so that a prayer spoken by a minister of the church could be heard in every corner of the hospital.
Today the Museum of Santa Cruz has been renovated and looks wonderful. The courtyard outside the museum is impressive: neatly trimmed lawns and a few small green trees, beside a two-storey gallery with unusual arches. Several styles meet in this building; scholars suggest that not one but two architects worked on it. The medieval style gives way to Renaissance aesthetics.
During the Spanish Civil War the museum building was seriously damaged, but by 1958 it had been restored. The museum’s exposition covers religious art from the Middle Ages to the present day.
One peculiarity of the museum is that entry is free, yet tickets must still be collected at the box office. Photography inside the museum is prohibited, which is why the photographic coverage in this article is scant.
The museum has three main halls: archaeology, painting and sculpture, and applied arts. Many museums have donated exhibits to the Museum of Santa Cruz. Here you can find a good collection of paintings by Spanish artists, and the paintings of El Greco are even allotted a separate room.
In the Museum of Santa Cruz you can see various tapestries, ancient weapons, sarcophagi, ancient tools, the remains of a mammoth, beautiful sculptures, and more.
\begin{document}
\input xy
\xyoption{all}
\renewcommand{\mod}{\operatorname{mod}\nolimits}
\newcommand{\proj}{\operatorname{proj}\nolimits}
\newcommand{\rad}{\operatorname{rad}\nolimits}
\newcommand{\Gproj}{\operatorname{Gproj}\nolimits}
\newcommand{\Ginj}{\operatorname{Ginj}\nolimits}
\newcommand{\Gd}{\operatorname{Gd}\nolimits}
\newcommand{\soc}{\operatorname{soc}\nolimits}
\newcommand{\ind}{\operatorname{inj.dim}\nolimits}
\newcommand{\Top}{\operatorname{top}\nolimits}
\newcommand{\ann}{\operatorname{Ann}\nolimits}
\newcommand{\id}{\operatorname{id}\nolimits}
\newcommand{\Id}{\operatorname{id}\nolimits}
\newcommand{\irr}{\operatorname{irr}\nolimits}
\newcommand{\Mod}{\operatorname{Mod}\nolimits}
\newcommand{\End}{\operatorname{End}\nolimits}
\newcommand{\Ob}{\operatorname{Ob}\nolimits}
\newcommand{\Aus}{\operatorname{Aus}\nolimits}
\newcommand{\ver}{\operatorname{v}\nolimits}
\newcommand{\arr}{\operatorname{a}\nolimits}
\newcommand{\cone}{\operatorname{cone}\nolimits}
\newcommand{\rep}{\operatorname{rep}\nolimits}
\newcommand{\Ext}{\operatorname{Ext}\nolimits}
\newcommand{\Hom}{\operatorname{Hom}\nolimits}
\newcommand{\RHom}{\operatorname{RHom}\nolimits}
\renewcommand{\Im}{\operatorname{Im}\nolimits}
\newcommand{\Ker}{\operatorname{Ker}\nolimits}
\newcommand{\Coker}{\operatorname{Coker}\nolimits}
\renewcommand{\dim}{\operatorname{dim}\nolimits}
\newcommand{\Ab}{{\operatorname{Ab}\nolimits}}
\newcommand{\Coim}{{\operatorname{Coim}\nolimits}}
\newcommand{\pd}{\operatorname{proj.dim}\nolimits}
\newcommand{\Ind}{\operatorname{Ind}\nolimits}
\newcommand{\add}{\operatorname{add}\nolimits}
\newcommand{\pr}{\operatorname{pr}\nolimits}
\newcommand{\Tr}{\operatorname{Tr}\nolimits}
\newcommand{\Def}{\operatorname{Def}\nolimits}
\newcommand{\Gp}{\operatorname{Gproj}\nolimits}
\newcommand{\ca}{{\mathcal A}}
\newcommand{\cb}{{\mathcal B}}
\newcommand{\cc}{{\mathcal C}}
\newcommand{\cd}{{\mathcal D}}
\newcommand{\cg}{{\mathcal G}}
\newcommand{\cp}{{\mathcal P}}
\newcommand{\ce}{{\mathcal E}}
\newcommand{\cs}{{\mathcal S}}
\newcommand{\cm}{{\mathcal M}}
\newcommand{\cn}{{\mathcal N}}
\newcommand{\cx}{{\mathcal X}}
\newcommand{\ct}{{\mathcal T}}
\newcommand{\cu}{{\mathcal U}}
\newcommand{\co}{{\mathcal O}}
\newcommand{\cv}{{\mathcal V}}
\newcommand{\calr}{{\mathcal R}}
\newcommand{\ol}{\overline}
\newcommand{\ul}{\underline}
\newcommand{\st}{[1]}
\newcommand{\ow}{\widetilde}
\newcommand{\coh}{{\mathrm coh}}
\newcommand{\CM}{{\mathrm CM}}
\newcommand{\vect}{{\mathrm vect}}
\newcommand{\bp}{{\mathbf p}}
\newcommand{\bL}{{\mathbf L}}
\newcommand{\bS}{{\mathbf S}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{construction}[theorem]{Construction}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newtheorem*{thm}{Theorem}
\newtheorem*{thma}{Theorem A}
\newtheorem*{thmb}{Theorem B}
\newtheorem*{thmc}{Theorem C}
\newtheorem*{thm1}{Main Theorem 1}
\newtheorem*{thm2}{Main Theorem 2}
\def \A{{\Bbb A}}
\def \Z{{\Bbb Z}}
\def \X{{\Bbb X}}
\renewcommand{\L}{{\Bbb L}}
\renewcommand{\P}{{\Bbb P}}
\newcommand {\lu}[1]{\textcolor{red}{$\clubsuit$: #1}}
\title[Gorenstein properties of simple gluing algebras]{Gorenstein properties of simple gluing algebras}
\author[Lu]{Ming Lu}
\address{Department of Mathematics, Sichuan University, Chengdu 610064, P.R.China}
\email{luming@scu.edu.cn}
\subjclass[2000]{18E30, 18E35}
\keywords{Gorenstein projective module, Singularity category, Gorenstein defect category, Simple gluing algebra}
\begin{abstract}
Let $A=KQ_A/I_A$ and $B=KQ_B/I_B$ be two finite-dimensional bound quiver algebras, fix two vertices $a\in Q_A$ and $b\in Q_B$.
We define an algebra $\Lambda=KQ_\Lambda/I_\Lambda$, which is called a simple gluing algebra of $A$ and $B$, where $Q_\Lambda$ is from $Q_A$ and $Q_B$ by identifying $a$ and $b$, $I_\Lambda=\langle I_A,I_B\rangle$. We prove that $\Lambda$ is Gorenstein if and only if $A$ and $B$ are Gorenstein, and describe the Gorenstein projective modules, singularity category, Gorenstein defect category and also Cohen-Macaulay Auslander algebra of $\Lambda$ from the corresponding ones of
$A$ and $B$.
\end{abstract}
\maketitle
\section{Introduction}
In the study of B-branes on Landau-Ginzburg models in the framework of Homological Mirror Symmetry Conjecture, D. Orlov rediscovered the notion of singularity categories \cite{Or1,Or2,Or3}. The singularity category of an algebra $A$ is defined to be the Verdier quotient of the bounded derived category with respect to the thick subcategory formed by complexes isomorphic to those consisting of finitely generated projective modules, \cite{Bu}. It measures the homological singularity of an algebra in the sense that an algebra $A$ has finite global dimension if and only if its singularity category vanishes.
The singularity category captures the stable homological features of an algebra \cite{Bu}. A fundamental result of R. Buchweitz \cite{Bu} and D. Happel \cite{Ha1} states that for a Gorenstein algebra $A$, the singularity category is triangle equivalent to the stable category of Gorenstein projective (also called (maximal) Cohen-Macaulay) $A$-modules. Buchweitz's Theorem (\cite[Theorem 4.4.1]{Bu}) says that there is an exact embedding $\Phi:\underline{\Gp}A\rightarrow D_{sg}(A)$ given by $\Phi(M)=M$, where the second $M$ is the corresponding stalk complex at degree $0$, and $\Phi$ is an equivalence if and only if $A$ is Gorenstein. Recently, to provide a categorical characterization of Gorenstein algebras, P. A. Bergh, D. A. J{\o}rgensen and S. Oppermann \cite{BJO} defined the Gorenstein defect category $D_{def}(A):=D_{sg}(A)/\Im \Phi$ and proved that $A$ is Gorenstein if and only if $D_{def}(A)=0$. In general, it is difficult to describe the singularity categories and Gorenstein defect categories. Many people are trying to describe these categories for some special kinds of algebras, see e.g. \cite{Chen1,chen2,chen3,Ka,CGLu,CDZ}. In particular, for a CM-finite algebra $A$, F. Kong and P. Zhang \cite{KZ} proved that its Gorenstein defect category is equivalent to the singularity category of its Cohen-Macaulay Auslander algebra. Recently, X-W. Chen \cite{chen3} described the singularity category and Gorenstein defect category for a quadratic monomial algebra.
For a triangular matrix algebra $\Lambda=\left( \begin{array}{cc} A&M\\0&B \end{array}\right)$, the relations between these categories of $\Lambda$ and the corresponding ones of $A$ and $B$ are described clearly in some sense \cite{Chen1,KZ,XZ,Zhang2,Lu}.
Inspired by the above, we define a new algebra, called a \emph{simple gluing algebra}, from two given algebras. Explicitly, let $A=KQ_A/I_A$, $B=KQ_B/I_B$ be two finite-dimensional algebras. For any two vertices $a\in Q_A$, $b\in Q_B$, we define a new quiver $Q$ from $Q_A$ and $Q_B$ by identifying $a$ and $b$. In this way, we can view $Q_A$ and $Q_B$ as subquivers of $Q$. We call $Q$ the \emph{simple gluing quiver} of $Q_A$ and $Q_B$. Denote by $v\in Q$ the \emph{glued vertex}. Let $I$ be the ideal of $KQ$ generated by $I_A$ and $I_B$. Then $\Lambda=KQ/I$ is called a simple gluing algebra of $A$ and $B$ if $\Lambda$ is finite-dimensional.
Similar to the triangular matrix algebras, first we prove that $\Lambda$ is Gorenstein if and only if $A$ and $B$ are Gorenstein, see Proposition \ref{lemma simple gluing Nakayama algebra Gorenstein}; second we prove that $D_{sg}(\Lambda)\simeq D_{sg}(A) \coprod D_{sg}(B)$ (see Theorem \ref{theorem singularity categories}), $\underline{\Gp}(\Lambda)\simeq \underline{\Gp}(A) \coprod \underline{\Gp}(B)$ (see Theorem \ref{theorem stable category of Cm modules}) and $D_{def}(\Lambda)\simeq D_{def}(A) \coprod D_{def}(B)$ (see Corollary \ref{corollary Gorenstein defect categories}). In particular, if we know the Gorenstein projective modules over $A$ and $B$, then we can get all the Gorenstein projective modules over $\Lambda$. Finally, we
prove that the Cohen-Macaulay Auslander algebra of $\Lambda$ is a simple gluing algebra of the Cohen-Macaulay Auslander algebras of $A$ and $B$, see Theorem \ref{theorem Cohen-Macaulay Auslander algebras}. As applications, we redescribe the singularity categories of cluster-tilted algebras of type $\A$ and of endomorphism algebras of maximal rigid objects of the cluster tube $\cc_n$.
\vspace{0.2cm} \noindent{\bf Acknowledgments.}
The work was done during the stay of the author at the Department of Mathematics,
University of Bielefeld. He is deeply indebted to Professor Henning Krause for his kind
hospitality, inspiration and continuous encouragement.
The author thanks Professor Liangang Peng very much for his guidance and constant support. The author was supported by the National Natural Science Foundation of China (No. 11401401 and No. 11601441).
\section{Preliminaries}
In this paper, we always assume that $K$ is an algebraically closed field and all algebras are finite-dimensional algebras over $K$ and modules are finitely generated.
Let $A$ be a $K$-algebra. Let $\mod A$ be the category of finitely generated left $A$-modules. With $D=\Hom_K(-,K)$ we denote the standard duality with respect to the ground field. Then $D(A_A)$ is an injective cogenerator for $\mod A$. For an arbitrary $A$-module $_AX$ we denote by $\pd_AX$ (resp. $\ind_AX$) the projective dimension (resp. the injective dimension) of the module $_AX$.
A complex $$P^\bullet:\cdots\rightarrow P^{-1}\rightarrow P^0\xrightarrow{d^0}P^1\rightarrow \cdots$$ of finitely generated projective $A$-modules is said to be \emph{totally acyclic} provided it is acyclic and the Hom complex $\Hom_A(P^\bullet,A)$ is also acyclic \cite{AM}.
An $A$-module $M$ is said to be (finitely generated) \emph{Gorenstein projective} provided that there is a totally acyclic complex $P^\bullet$ of projective $A$-modules such that $M\cong \Ker d^0$ \cite{EJ}. We denote by $\Gproj A$ the full subcategory of $\mod A$ consisting of Gorenstein projective modules.
An algebra is of \emph{finite Cohen-Macaulay type}, or simply, \emph{CM-finite}, if there are only finitely many isomorphism classes of indecomposable finitely generated Gorenstein projective modules. Clearly, $A$ is CM-finite if and only if there is a finitely generated module $E$ such that $\Gproj A=\add E$. In this case, $E$ is called a \emph{Gorenstein projective generator}. If $A$ is self-injective, then $\Gproj A=\mod A$, so $A$ is CM-finite if and only if $A$ is representation-finite. An algebra is called \emph{CM-free} if $\Gproj A=\proj A$. If $A$ has finite global dimension, then $\Gproj A=\proj A$, so it is CM-free.
Let $A$ be a CM-finite algebra, and let $E_1,\dots,E_n$ be all the pairwise non-isomorphic indecomposable Gorenstein projective $A$-modules. Put $E=\oplus_{i=1}^n E_i$. Then $E$ is a Gorenstein projective generator. We call $\Aus(\Gproj A):=(\End_A E)^{op}$ the \emph{Cohen-Macaulay Auslander algebra} (also called the \emph{relative Auslander algebra}) of $A$.
Let $\cx$ be a subcategory of $\mod A$. Then $^\bot\cx:=\{M|\Ext^i(M,X)=0, \mbox{ for all } X\in\cx, i\geq1\}$. Dually, we can define $\cx^\bot$. In particular, we define $^\bot A:=\,^\bot
(\proj A)$.
The following lemma follows easily from the definition of Gorenstein projective modules.
\begin{lemma}
(i) \cite{Be}
\begin{eqnarray*}
\Gp(A)&=&\{M\in \mod A\,|\,\exists \mbox{ an exact sequence }
0\rightarrow M\rightarrow T^0\xrightarrow{d^0}T^1\xrightarrow{d^1}\cdots, \\ &&\mbox{ with }T^i\in\proj A,\ker d^i\in\,^\bot A,\forall i\geq0\}.
\end{eqnarray*}
(ii) If $M$ is Gorenstein projective, then $\Ext^i_A(M,L)=0$, $\forall i>0$, for all $L$ of finite projective dimension or of finite injective dimension.
(iii) If $P^\bullet$ is a totally acyclic complex, then all $\Im d^i$ are Gorenstein projective, and all truncations
$$\cdots\rightarrow P^i\rightarrow\Im d^i\rightarrow0,\quad 0\rightarrow\Im d^i\rightarrow P^{i+1}\rightarrow\cdots$$
and
$$0\rightarrow\Im d^i\rightarrow P^{i+1}\rightarrow\cdots\rightarrow P^j\rightarrow \Im d^j\rightarrow0,\quad i<j,$$
are $\Hom_A(-,\proj A)$-exact.
\end{lemma}
\begin{definition}[\cite{Ha1}, see also \cite{AR1,AR2}]
A finite-dimensional algebra $A$ is called a Gorenstein algebra (also called Iwanaga-Gorenstein algebra) if $A$ satisfies $\ind A_A<\infty$ and $\ind_AA<\infty$.
Given an $A$-module $X$, if $\Ext^i_A(X,A)=0$ for all $i>0$, then $X$ is called a Cohen-Macaulay module over $A$.
\end{definition}
Observe that for a Gorenstein algebra $A$, we have $\ind _AA=\ind A_A$, see \cite[Lemma 6.9]{Ha1}; the common value is denoted by $\Gd A$. If $\Gd A\leq d$, we say that $A$ is \emph{$d$-Gorenstein}. Furthermore, since $\pd _A D(A_A)=\ind A_A$, we get that $A$ is Gorenstein if and only if $\pd _A D(A_A)<\infty$ and $\ind_AA<\infty$.
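For illustration, we record two standard examples; they are well known and not needed in the sequel.

```latex
\begin{example}
(i) If $A$ has finite global dimension $d$, then $\ind_AA\leq d$ and
$\ind A_A\leq d$, so $A$ is $d$-Gorenstein; in this case
$\Gproj A=\proj A$, i.e. $A$ is CM-free.
(ii) If $A$ is self-injective, e.g. $A=K[x]/(x^n)$ with $n\geq2$, then
$\ind_AA=0=\ind A_A$, so $A$ is $0$-Gorenstein and $\Gproj A=\mod A$.
\end{example}
```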
\begin{theorem}[\cite{Bu,EJ}]
Let $A$ be a Gorenstein algebra. Then
(i) If $P^\bullet$ is an exact sequence of projective left $A$-modules, then $\Hom_A(P^\bullet,A)$ is again an exact sequence of projective right $A$-modules.
(ii) A module $G$ is Gorenstein projective if and only if there is an exact sequence $0\rightarrow G\rightarrow P^0\rightarrow P^1\rightarrow \cdots$ with each
$P^i$ projective.
(iii) $\Gproj A=\,^\bot A$.
\end{theorem}
For a module $M$, take a short exact sequence $$0\rightarrow \Omega M\rightarrow P\rightarrow M\rightarrow0$$
with $P$ projective. The module $\Omega M$ is called a \emph{syzygy module} of $M$. For each $i\geq1$, denote by $\Omega^i$ the $i$-th power of $\Omega$ and then for a module $X$, $\Omega^i X$ is the $i$-th syzygy module of $X$. For details, see \cite{ARS}.
\begin{theorem}[\cite{AM}]
Let $A$ be a finite-dimensional algebra and $d\geq0$. Then the following statements are equivalent:
(i) the algebra $A$ is $d$-Gorenstein;
(ii) $\Gproj A=\Omega^d(\mod A)$.
\noindent In this case, we have $\Gproj A=\, ^\bot A$.
\end{theorem}
So for a Gorenstein algebra, the notion of a Cohen-Macaulay module coincides with that of a Gorenstein projective module.
Recall that for an algebra $A$, the \emph{singularity category} of $A$ is the quotient category $D_{sg}(A):=D^b(A)/K^b(\proj A)$, which is defined by Buchweitz \cite{Bu}, see also \cite{Ha1,Or1}.
\begin{theorem}[Buchweitz's Theorem, see also \cite{KV} for a more general version]\label{theorem stable category of CM modules }
Let $A$ be an Artin algebra. Then $\Gproj (A)$ is a Frobenius category with the projective modules as the projective-injective objects, and there is an exact embedding $\Phi:\underline{\Gp}A\rightarrow D_{sg}(A)$ given by $\Phi(M)=M$, where the second $M$ is the corresponding stalk complex at degree $0$, and $\Phi$ is an equivalence if and only if $A$ is Gorenstein.
\end{theorem}
Let $A$ be an Artin algebra. Inspired by Buchweitz's Theorem, the \emph{Gorenstein defect category} is defined to be the Verdier quotient $D_{def}(A):=D_{sg}(A)/\Im(\Phi)$, see \cite{BJO}.
From \cite{KZ}, we know that $D_{def}(A)$ is triangle equivalent to $D^b(A)/\langle \Gp(A)\rangle$, where $\langle \Gp(A)\rangle$ denotes the triangulated subcategory of $D^b(A)$ generated by $\Gp(A)$, i.e., the smallest triangulated subcategory of $D^b(A)$ containing $\Gp(A)$.
\begin{lemma}[\cite{BJO,KZ}]
Let $A$ be an Artin algebra. Then the following are equivalent.
(i) $A$ is Gorenstein;
(ii) $\underline{\Gp}(A)$ is triangle equivalent to $D_{sg}(A)$;
(iii) $D_{def}(A)=0$;
(iv) $D^b(A)=\langle \Gp(A)\rangle$.
\end{lemma}
\section{Simple gluing algebras}
Let $Q=(\ver(Q),\arr(Q))$ be a finite quiver, where $\ver(Q)$ is the set of vertices, and $\arr(Q)$ is the set of arrows. For any arrow $\alpha$ in $Q$, we denote by $s(\alpha),t(\alpha)$ the source and the target of $\alpha$ respectively. The path algebra $KQ$ of $Q$ is an associative algebra with an identity. If $I$ is an admissible ideal of $KQ$, the pair $(Q,I)$ is said to be a bound quiver. The quotient algebra $KQ/I$ is said to be the algebra of the bound quiver $(Q,I)$, or simply, a \emph{bound quiver algebra}. Denote by $e_i$ the idempotent corresponding to the vertex $i\in Q$.
In this paper, we shall freely identify the representations of $(Q,I)$ with left modules over $KQ/I$.
Recall that a representation $M$ of $(Q,I)$ is of form $(M_i,M_\alpha)_{i\in \ver(Q),\alpha\in\arr(Q)}$, or $(M_i,M_\alpha)$. So for any representation $M$, we always use $M_i$ to denote the vector space associated to the vertex $i$.
To begin, let us recall the definition of \emph{simple gluing algebras}.
Let $A=KQ_A/I_A$, $B=KQ_B/I_B$ be two finite-dimensional algebras. For any two vertices $a\in \ver(Q_A)$, $b\in \ver(Q_B)$, we define a new quiver $Q$ from $Q_A$ and $Q_B$ by identifying $a$ and $b$. In this way, we can view $Q_A$ and $Q_B$ as subquivers of $Q$. We call $Q$ the \emph{simple gluing quiver} of $Q_A$ and $Q_B$. Denote by $v\in \ver(Q)$ the \emph{glued vertex}. Let $I$ be the ideal of $KQ$ generated by $I_A$ and $I_B$.
The following lemma follows directly from the definition of bound quiver algebras.
\begin{lemma}
Keep the notations as above. Then $\Lambda=KQ/I$ is a finite-dimensional algebra if and only if each non-trivial path in $Q_A$ from $a$ to $a$ is in $I_A$ or each non-trivial path in $Q_B$ from $b$ to $b$ is in $I_B$.
\end{lemma}
In the following, we always assume that the above condition holds, and call $\Lambda$ the \emph{simple gluing algebra} of $A$ and $B$. Inductively, from finitely many bound quiver algebras $A_1,\dots,A_n$, we can define a (finite-dimensional) simple gluing algebra of $A_1,\dots,A_n$.
Without loss of generality, we always assume that each non-trivial path in $Q_A$ from $a$ to $a$ is in $I_A$. Denote by $e_B=\sum_{i\in Q_B} e_i$ the idempotent of $\Lambda$. Obviously, $A$ and $B$ are subalgebras (and also quotient algebras) of $\Lambda$, and
$B\cong e_B \Lambda e_B$.
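The following toy example illustrates the construction; it is not used later.

```latex
\begin{example}
Let $Q_A\colon 1\xrightarrow{\alpha}2$ and $Q_B\colon 3\xrightarrow{\beta}4$
with $I_A=0=I_B$, and glue $a=2$ with $b=3$. There are no non-trivial paths
from $a$ to $a$ in $Q_A$, so the condition of the lemma above holds
vacuously, and the simple gluing algebra is $\Lambda=KQ$ with
$Q\colon 1\xrightarrow{\alpha}v\xrightarrow{\beta}4$ a quiver of type
$\A_3$. Here $e_B=e_v+e_4$ and $B\cong e_B\Lambda e_B$.
\end{example}
```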
Since $B=e_B\Lambda e_B$ is a subalgebra of $\Lambda$, there exists an exact functor $j_\mu:\mod \Lambda\rightarrow \mod B$, which maps $M$ to $e_B M$. Note that $j_\mu=e_B\Lambda\otimes_\Lambda-$. In fact, $j_\mu$ restricts the representation of $(Q,I)$ to the representation of $(Q_B,I_B)$. $j_\mu$ admits a right adjoint functor $j_\rho:=\Hom_B(_B\Lambda,-):\mod B\rightarrow \mod\Lambda$, and admits a left adjoint functor $j_\lambda:=\Lambda\otimes_B-=\Lambda e_B\otimes_B-: \mod B\rightarrow \mod\Lambda$, see e.g. \cite{CPS2}.
Similarly, since $A$ is also a subalgebra of $\Lambda$, there exists a natural exact functor $i_\mu:\mod \Lambda\rightarrow \mod A$.
$i_\mu$ admits a right adjoint functor $i_\rho:=\Hom_A(_A\Lambda,-):\mod A\rightarrow \mod \Lambda$, and admits a left adjoint functor $i_\lambda:= \Lambda\otimes_A-: \mod A\rightarrow \mod \Lambda$.
From the structure of $\Lambda$, it is clear that $\Lambda$ is projective as a left (and also as a right) module over both $A$ and $B$. So we have the following lemma immediately.
\begin{lemma}
Keep the notations as above. Then all the functors $j_\lambda,j_\mu,j_\rho$, $i_\lambda,i_\mu,i_\rho$ are exact functors. In particular, $i_\lambda$, $i_\mu$, $j_\mu$ and $j_\lambda$ preserve projectives, $i_\rho$, $i_\mu$, $j_\mu$ and $j_\rho$ preserve injectives.
\end{lemma}
For any vertex $i\in \ver(Q_A)$, we denote by $P(i)$ (resp. $_A S(i)$, $I(i)$) the projective (resp. simple, injective) $A$-module corresponding to $i$.
For any vertex $i\in \ver(Q_B)$, we denote by $Q(i)$ (resp. $_B S(i)$, $J(i)$) the projective (resp. simple, injective) $B$-module corresponding to $i$.
For any vertex $i\in \ver(Q)$, we denote by $U(i)$ (resp. $_\Lambda S(i)$, $V(i)$) the projective (resp. simple, injective) $\Lambda$-module corresponding to $i$.
Note that $i_\lambda(P(i))=U(i)$, $i_\rho(I(i))=V(i)$ for any $i\in Q_A$, and $j_\lambda(Q(i))= U(i)$ and $j_\rho(J(i))=V(i)$ for any $i\in Q_B$.
In the following, we describe the actions of all these functors on representations.
First, from the structure of indecomposable projective modules \cite{ASS}, we can assume that $Q(v)$ is locally as the left figure below shows, where $j_1,\dots,j_q\in Q_A$, $k_1,\dots,k_r\in Q_B$, and each of them is the target of some arrow starting at $v$. Then $j_\lambda(\,_B S(b))$ and $i_\lambda(\,_AS(a))$ are as the middle and right figures show, respectively.
\begin{center}\setlength{\unitlength}{0.7mm}
\begin{picture}(70,30)(0,10)
\put(-40,10){\begin{picture}(50,10)
\put(20,20){$v$}
\put(0,0){$j_1$}
\put(7,0){$\cdots$}
\put(15,0){$j_q$}
\put(25,0){$k_1$}
\put(32,0){$\cdots$}
\put(40,0){$k_r$}
\put(18,18){\vector(-1,-1){14}}
\put(20.5,18){\vector(-1,-4){3.3}}
\put(21.5,18){\vector(1,-4){3.3}}
\put(23,18){\vector(1,-1){14}}
\put(1,-8){$\vdots$}
\put(16,-8){$\vdots$}
\put(26,-8){$\vdots$}
\put(41,-8){$\vdots$}
\end{picture}}
\put(30,10){\begin{picture}(50,10)
\put(20,20){$v$}
\put(0,0){$j_1$}
\put(7,0){$\cdots$}
\put(15,0){$j_q$}
\put(18,18){\vector(-1,-1){14}}
\put(20.5,18){\vector(-1,-4){3.3}}
\put(1,-8){$\vdots$}
\put(16,-8){$\vdots$}
\end{picture}}
\put(60,10){\begin{picture}(50,10)
\put(20,20){$v$}
\put(25,0){$k_1$}
\put(32,0){$\cdots$}
\put(40,0){$k_r$}
\put(21.5,18){\vector(1,-4){3.3}}
\put(23,18){\vector(1,-1){14}}
\put(26,-8){$\vdots$}
\put(41,-8){$\vdots$}
\end{picture}}
\put(-40,-10){Figure 1. The structures of $Q(v)$, $j_\lambda(\,_BS(b))$ and $i_\lambda(\,_AS(a))$.}
\end{picture}
\vspace{1.9cm}
\end{center}
Let $M=(M_i,M_\alpha)$ be a representation of $(Q_A,I_A)$. We assume that $M$ is locally as the following left figure shows, where $i_1,\dots,i_p$ are the starting points of arrows ending at $v=a$ in $Q_A$. Then $i_\lambda(M)$ is as the following right figure shows, where the submodule in the dashed box is $i_\lambda(\,_AS(a))^{\oplus\dim M_v }$.
\begin{center}\setlength{\unitlength}{0.7mm}
\begin{picture}(80,55)(0,10)
\put(-30,10){\begin{picture}(50,10)
\put(-5,40){$M_{i_1}$}
\put(5,40){$\cdots$}
\put(15,40){$M_{i_p}$}
\put(18,20){$M_v$}
\put(-5,0){$M_{j_1}$}
\put(5,0){$\cdots$}
\put(15,0){$M_{j_q}$}
\put(3,38){\vector(1,-1){14}}
\put(17,38){\vector(1,-4){3.2}}
\put(17,18){\vector(-1,-1){14}}
\put(20.5,18){\vector(-1,-4){3.3}}
\put(-3,46){$\vdots$}
\put(17,46){$\vdots$}
\put(-3,-8){$\vdots$}
\put(17,-8){$\vdots$}
\end{picture}}
\put(50,10){\begin{picture}(50,10)
\put(-5,40){$M_{i_1}$}
\put(5,40){$\cdots$}
\put(15,40){$M_{i_p}$}
\put(18,20){$M_v$}
\put(-5,0){$M_{j_1}$}
\put(5,0){$\cdots$}
\put(15,0){$M_{j_q}$}
\put(3,38){\vector(1,-1){14}}
\put(17,38){\vector(1,-4){3.2}}
\put(17,18){\vector(-1,-1){14}}
\put(20.5,18){\vector(-1,-4){3.3}}
\put(22,18){\vector(1,-2){6.3}}
\put(23,18){\vector(2,-1){24}}
\put(44,0){$M_{k_r}$}
\put(36,0){$\cdots$}
\put(27,0){$M_{k_1}$}
\put(-3,46){$\vdots$}
\put(17,46){$\vdots$}
\put(-3,-8){$\vdots$}
\put(17,-8){$\vdots$}
\put(46,-8){$\vdots$}
\put(29,-8){$\vdots$}
\qbezier[40](17,24)(21,10)(28,-9)
\qbezier[40](28,-9)(40,-12)(60,-9)
\qbezier[40](17,24)(30,26)(50,24)
\qbezier[40](50,24)(55,15)(60,-9)
\put(35,27){$i_\lambda(\,_AS(a))^{\oplus\dim M_v }$}
\end{picture}}
\put(-40,-10){Figure 2. The local structures of $M\in \mod A$ and $i_\lambda(M)$.}
\end{picture}
\vspace{1.9cm}
\end{center}
The action of $j_\lambda$ is similar, and $i_\mu$, $j_\mu$ are the restriction functors. For $i_\rho$ and $j_\rho$, their actions are dual to those of $i_\lambda$ and $j_\lambda$.
For any $M\in\mod A$ and $N\in\mod \Lambda$, we denote by
$$\alpha_{M,N}: \Hom_\Lambda(i_\lambda(M),N)\xrightarrow{\sim} \Hom_A(M,i_\mu(N))$$
the adjoint isomorphism.
For any $M\in \mod A$, we denote by $\mu_M= \alpha_{M, i_\lambda (M)} (1_{i_\lambda (M)})$ the adjunction morphism, and for any $N\in \mod \Lambda$, we denote by
$\epsilon_N= \alpha_{i_\mu(N),N}^{-1}(1_{i_\mu (N)})$.
Similarly, for any $L\in\mod B$ and $N\in\mod \Lambda$ we denote by
$$\beta_{L,N}: \Hom_\Lambda(j_\lambda(L),N)\xrightarrow{\sim} \Hom_B(L,j_\mu(N))$$
the adjoint isomorphism.
For any $L\in \mod B$, we denote by $\nu_L= \beta_{L, j_\lambda (L)} (1_{j_\lambda (L)})$ the adjunction morphism, and for any $N\in \mod \Lambda$, we denote by
$\zeta_N= \beta_{j_\mu(N),N}^{-1}(1_{j_\mu (N)})$.
It is easy to see that for any $M\in \mod A$, $i_\mu i_\lambda M= M\oplus P(a)^{\oplus m}$ for some $m$, and $\mu_M$ is the section map.
Furthermore, for any morphism
$$f=(f_x)_{x\in \ver(Q_A)} : M=(M_x,M_\alpha)_{x\in \ver( Q_A),\alpha\in \arr(Q_A)}\rightarrow N=(N_x,N_\alpha)_{x\in \ver( Q_A),\alpha\in \arr(Q_A)}$$ in $\mod A$, we get that
$i_\mu i_\lambda (f)$ is of the form
\begin{equation}\label{equation form of morphism 1}
\left( \begin{array}{cc} f& 0\\
0& h \end{array} \right): M\oplus P(a)^{\oplus m} \rightarrow N\oplus P(a)^{\oplus n},
\end{equation}
where $h:P(a)^{\oplus m} \rightarrow P(a)^{\oplus n}$ can be represented by an $n\times m$ matrix with entries $h_{ij}:P(a)\rightarrow P(a)$ in $K$. In particular, $h$ is determined by the map $f_v$.
For any morphism
$$g=(g_y)_{y\in \ver(Q_B)} : M=(M_y,M_\beta)_{y\in \ver( Q_B),\beta\in \arr(Q_B)}\rightarrow N=(N_y,N_\beta)_{y\in \ver( Q_B),\beta\in \arr(Q_B)}$$ in $\mod B$, the form of $j_\mu j_\lambda (g)$ is obtained similarly.
Moreover, we get that $i_\mu j_\lambda (M)= P(a)^{\oplus m}$ for some $m$, and
$i_\mu j_\lambda (N)= P(a)^{\oplus n}$ for some $n$. Then $i_\mu j_\lambda(g)$ is of the form
\begin{equation}\label{equation form of morphism 2}
h: P(a)^{\oplus m} \rightarrow P(a)^{\oplus n},
\end{equation}
where $h:P(a)^{\oplus m} \rightarrow P(a)^{\oplus n}$ can be represented by an $n\times m$ matrix with entries $h_{ij}:P(a)\rightarrow P(a)$ in $K$. In particular, $h$ is determined by the map $g_v$.
Similarly, we can describe $j_\mu i_\lambda (f)$ for any $f:M\rightarrow N$ in $\mod A$.
For any $N\in\mod \Lambda$, we denote by $l(N)$ its \emph{length}.
\begin{lemma}\label{lemma existence of short exact sequene 1}
Keep the notations as above. Let $M=(M_i,M_\gamma)$ be a $\Lambda$-module. Then we have a short exact sequence
\begin{equation}\label{equation short exact sequence 1}
0\rightarrow U(v)^{\oplus\dim M_v} \rightarrow i_\lambda i_\mu M\oplus j_\lambda j_\mu M\xrightarrow{(\mu_M,\nu_M)} M\rightarrow0.
\end{equation}
\end{lemma}
\begin{proof}
First, suppose that $M=\,_\Lambda S(i)$ is a simple $\Lambda$-module. Without loss of generality, we assume that $i\in \ver(Q_A)$.
If $i\neq v$, then $i_\lambda i_\mu (_\Lambda S(i))=\,_\Lambda S(i)$ and $j_\lambda j_\mu (_\Lambda S(i))=0$, which satisfies (\ref{equation short exact sequence 1}). If $i=v$, then $i_\mu(_\Lambda S(v))=\,_A S(a)$,
and $j_\mu(_\Lambda S(v))=\,_B S(b)$.
From the structures of $Q(v)$, $j_\lambda(\,_BS(b))$ and $i_\lambda(\,_AS(a))$ in Figure 1, it is easy to see that there exists a
short exact sequence
$$0\rightarrow U(v)\rightarrow i_\lambda i_\mu S(v)\oplus j_\lambda j_\mu S(v)\rightarrow S(v)\rightarrow0,$$
which satisfies the requirement.
For general $M$, we prove it by induction on its length. If $l(M)=1$, then $M$ is a simple module, and the result follows from the above.
If $l(M)>1$, then there exists $M_1,M_2$ such that $0<l(M_i)<l(M)$ for $i=1,2$, and there is a short exact sequence
$$0\rightarrow M_1\xrightarrow{f} M\xrightarrow{g} M_2\rightarrow0.$$
Obviously, $\dim M_v=\dim (M_1)_v+\dim (M_2)_v$ for the glued vertex $v\in \ver(Q)$.
Since $i_\lambda, i_\mu,j_\lambda,j_\mu$ are exact functors, there are two short exact sequences
$$0\rightarrow i_\lambda i_\mu M_1 \xrightarrow{i_\lambda i_\mu(f) } i_\lambda i_\mu M\xrightarrow{i_\lambda i_\mu(g) } i_\lambda i_\mu M_2\rightarrow0,$$
and
$$0\rightarrow j_\lambda j_\mu M_1 \xrightarrow{j_\lambda j_\mu(f) } j_\lambda j_\mu M\xrightarrow{j_\lambda j_\mu(g) } j_\lambda j_\mu M_2\rightarrow0.$$
The inductive assumption yields the following short exact sequences:
$$0\rightarrow U(v)^{\oplus\dim (M_1)_v} \rightarrow i_\lambda i_\mu M_1\oplus j_\lambda j_\mu M_1\xrightarrow{(\mu_{M_1},\nu_{M_1})} M_1\rightarrow0,$$
and
$$0\rightarrow U(v)^{\oplus\dim (M_2)_v} \rightarrow i_\lambda i_\mu M_2\oplus j_\lambda j_\mu M_2\xrightarrow{(\mu_{M_2},\nu_{M_2})} M_2\rightarrow0.$$
From the naturality of the adjoint pairs $(i_\lambda,i_\mu)$ and $(j_\lambda,j_\mu)$, we get the following commutative diagram
\[\xymatrix{ U(v)^{\oplus\dim (M_1)_v} \ar[r] \ar@{.>}[dd] & i_\lambda i_\mu M_1\oplus j_\lambda j_\mu M_1 \ar[rr]^{\quad\quad(\mu_{M_1},\nu_{M_1})} \ar[dd]^{\tiny \left( \begin{array}{cc} i_\lambda i_\mu(f) &\\ &j_\lambda j_\mu(f) \end{array} \right)}&& M_1\ar[dd]^{f}\\
\\
N\ar[r] \ar@{.>}[dd] & i_\lambda i_\mu M\oplus j_\lambda j_\mu M \ar[rr]^{\quad\quad(\mu_{M},\nu_{M})} \ar[dd]^{ \tiny\left( \begin{array}{cc} i_\lambda i_\mu(g) &\\ &j_\lambda j_\mu(g) \end{array} \right)} &&M \ar[dd]^{g} \\
\\
U(v)^{\oplus\dim (M_2)_v} \ar[r] & i_\lambda i_\mu M_2\oplus j_\lambda j_\mu M_2 \ar[rr]^{\quad\quad(\mu_{M_2},\nu_{M_2})} &&M_2, }\]
where $N$ is the kernel of $(\mu_{M},\nu_{M})$. It is obvious that all the sequences appearing in the rows and columns of the above commutative diagram are short exact sequences. From the short exact sequence in the first column, we get that
$$N\cong U(v)^{\oplus \dim (M_1)_v+\dim (M_2)_v}= U(v)^{\oplus \dim M_v} $$
since $U(v)$ is projective, and then there is a short exact sequence
$$0\rightarrow U(v)^{\oplus\dim M_v} \rightarrow i_\lambda i_\mu M\oplus j_\lambda j_\mu M\xrightarrow{(\mu_M,\nu_M)} M\rightarrow0.$$
\end{proof}
Similar to the Gorenstein property of upper triangular matrix algebras obtained in \cite{Chen1}, we get the following result.
\begin{proposition}\label{lemma simple gluing Nakayama algebra Gorenstein}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A$ and $B$. Then $\Lambda$ is Gorenstein if and only if $A$ and $B$ are Gorenstein. In particular, $\max\{\Gd A,\Gd B\}\leq\Gd (\Lambda)\leq \max\{\Gd A,\Gd B,1\}$.
\end{proposition}
\begin{proof}
Keep the notations as above. If $\Lambda$ is Gorenstein, then for any indecomposable injective $\Lambda$-module $V(i)$, it has finite projective dimension. Let
$$0\rightarrow U_m \rightarrow \cdots \rightarrow U_1\rightarrow U_0\rightarrow V(i)\rightarrow0$$
be a projective resolution of $V(i)$. Apply $i_\mu$ to the above projective resolution. Since $i_\mu$ is exact and preserves projectives, we get that $\pd_A (i_\mu(V(i)))\leq m$.
It is easy to see that $i_\mu(V(i))= I(i)\oplus I(a)^{\oplus s}$ for some integer $s$. Then $\pd_A (I(i))\leq m$, and $\pd_A D(A)<\infty$. Similarly, we can get that $\ind_A A<\infty$ and so $A$ is Gorenstein. Furthermore, $\Gd A \leq \Gd \Lambda$.
Similarly, $B$ is Gorenstein with $\Gd B\leq \Gd\Lambda$. Then $\max\{\Gd A,\Gd B\}\leq \Gd\Lambda$.
Conversely, for any indecomposable injective $\Lambda$-module $V(i)$, $i_\mu (V(i))$ and $j_\mu (V(i))$ are injective as an $A$-module and a $B$-module, respectively.
Since $A$ and $B$ are Gorenstein, $i_\mu (V(i))$ and $j_\mu(V(i))$ have finite projective dimensions. Since $i_\lambda$ and $j_\lambda$ are exact and preserve projectives, we get that
$i_\lambda i_\mu(V(i))$ and $j_\lambda j_\mu(V(i))$ have finite projective dimensions. In particular,
\begin{eqnarray*}
&&\pd_\Lambda i_\lambda i_\mu (V(i)) \leq \pd_A i_\mu (V(i))\leq \Gd A \mbox{ and}\\
&&\pd_\Lambda j_\lambda j_\mu(V(i))\leq \pd_B j_\mu(V(i))\leq \Gd B.
\end{eqnarray*}
So $\pd _\Lambda (i_\lambda i_\mu (V(i)) \oplus j_\lambda j_\mu(V(i)))\leq \max\{\Gd A,\Gd B\}$.
From the short exact sequence (\ref{equation short exact sequence 1}) in Lemma \ref{lemma existence of short exact sequene 1}, it is easy to see that
$$\pd_\Lambda V(i)\leq \max\{\Gd A,\Gd B,1\}.$$
So $\pd_\Lambda D\Lambda\leq \max\{\Gd A,\Gd B,1\}$.
Similarly,
we can get that $$\ind_\Lambda \Lambda \leq \max\{\Gd A,\Gd B,1\}.$$
So $\Lambda$ is Gorenstein, and $\max\{\Gd A,\Gd B\}\leq\Gd (\Lambda)\leq \max\{\Gd A,\Gd B,1\}$.
\end{proof}
From the above proposition, if $A$ and $B$ are Gorenstein, then the following holds:
if $\Gd A=0=\Gd B$, then $\Gd(\Lambda)\leq 1$; otherwise, $\Gd (\Lambda)= \max\{\Gd A,\Gd B\}$.
\begin{example}
Let $A$ and $B$ be self-injective ($0$-Gorenstein) algebras, and let $\Lambda$ be the simple gluing algebra of $A$ and $B$. In general, $\Lambda$ is $1$-Gorenstein but not self-injective.
For an example, see the algebra $\Lambda$ in Example \ref{example Auslander algebra}.
\end{example}
\begin{example}\label{example 1}
For a simple gluing algebra $\Lambda= KQ/I$ of $A$ and $B$, the condition that $I$ is generated by $I_A$ and $I_B$ is necessary for the above proposition to hold.
Let $Q_\Lambda$ be the quiver $\xymatrix{1 \ar@/^/[r]^{\alpha} & 2 \ar@/^/[l]^\beta & 3 \ar[l]_\gamma }$, $Q_A$ be the full subquiver of $Q_\Lambda$ containing vertices $1,2$, and $Q_B$ be the full subquiver of $Q_\Lambda$ containing the vertices $2,3$. Then $Q_\Lambda$ is a simple gluing quiver of $Q_A$ and $Q_B$. Let $I_A=\langle \alpha\beta,\beta\alpha \rangle$, $I_B=0$. Then both $A$ and $B$ are Gorenstein. However, if we take $\Lambda=KQ_\Lambda/I_\Lambda$, where $I_\Lambda$ is generated by $\alpha\beta,\beta\alpha$ and $\beta\gamma$, then $\Lambda$ is not Gorenstein any more.
\end{example}
We recall two basic facts concerning functors between triangulated categories, which are useful to prove our main results.
\begin{lemma}[see e.g. \cite{chen2}]\label{lemma functor to derived functor}
Let $F_1:\ca\rightarrow\cb$ be an exact functor between abelian categories which has an exact right adjoint $F_2$. Then the pair $(D^b(F_1),D^b(F_2))$ is adjoint, where $D^b(F_1)$ is the induced functor from $D^b(\ca)$ to $D^b(\cb)$ ($D^b(F_2)$ is defined similarly).
\end{lemma}
Since all six functors defined above are exact, they induce six triangulated functors on derived categories, and $(D^b(i_\lambda), D^b(i_\mu),D^b(i_\rho))$ and
$(D^b(j_\lambda), D^b(j_\mu),D^b(j_\rho))$ are adjoint triples. Since $i_\lambda$, $i_\mu$, $j_\lambda$ and $j_\mu$ preserve projectives, they induce triangulated functors on singularity categories, which are denoted by $\tilde{i}_\lambda$, $\tilde{i}_\mu$, $\tilde{j}_\lambda$ and $\tilde{j}_\mu$ respectively.
The second fact, on adjoint functors, is well known.
\begin{lemma}[\cite{Or1, Chen1}]\label{lemma adjoint}
Let $\cm$ and $\cn$ be full triangulated subcategories in triangulated categories $\cc$ and $\cd$ respectively. Let $F: \cc\to\cd$ and $G: \cd\to\cc$ be an adjoint pair of exact functors such that $F(\cm)\subset \cn$ and $G(\cn)\subset\cm$. Then they induce functors
\[\ow{F}: \cc/\cm\to \cd/\cn, \quad \ow{G}: \cd/\cn\to\cc/\cm
\]which are adjoint.
\end{lemma}
\begin{lemma}\label{lemma adjoint functor singularity category}
Keep the notations as above. Then the following hold:
(i) $(\tilde{i}_\lambda,\tilde{i}_\mu)$, $(\tilde{j}_\lambda,\tilde{j}_\mu)$ are adjoint pairs;
(ii) $\tilde{i}_\lambda$ and $\tilde{j}_\lambda$ are fully faithful;
(iii) $\ker(\tilde{j}_\mu)\simeq \Im (\tilde{i}_\lambda)$ and $\ker (\tilde{i}_\mu)\simeq \Im(\tilde{j}_\lambda)$.
\end{lemma}
\begin{proof}
(i) follows from Lemma \ref{lemma adjoint} directly.
(ii) We only need to prove that $\tilde{i}_\lambda$ is fully faithful.
For any $A$-module $M$, we have that $i_\mu i_\lambda(M)=M \oplus P(a)^{\oplus s}$ for some integer $s$. So $\tilde{i}_\mu \tilde{i}_\lambda(M)\cong M$; in particular, this isomorphism is natural, which implies that $\tilde{i}_\mu \tilde{i}_\lambda \simeq \Id_{D_{sg}(A)}$ since $\mod A$ is a generator of $D_{sg}(A)$. Therefore, $\tilde{i}_\lambda$ is fully faithful.
(iii) For any $A$-module $M$, we have that $j_\mu i_\lambda (M)= Q(b)^{\oplus t}$ for some integer $t$. So $\tilde{j}_\mu \tilde{i}_\lambda(M)\cong 0$, and then $\tilde{j}_\mu \tilde{i}_\lambda=0$.
It follows that $\Im (\tilde{i}_\lambda)\subseteq\ker(\tilde{j}_\mu)$. Similarly, we can get that $\Im (\tilde{j}_\lambda)\subseteq\ker(\tilde{i}_\mu)$.
For any $M\in \ker(\tilde{j}_\mu)$, the counit of the adjunction $\tilde{i}_\lambda \tilde{i}_\mu (M)\xrightarrow{\epsilon_M} M$ extends to a triangle in $D_{sg}(\Lambda)$:
$$X\rightarrow\tilde{i}_\lambda \tilde{i}_\mu (M)\xrightarrow{\epsilon_M} M\rightarrow X[1]$$
where $[1]$ is the suspension functor. Applying $\tilde{i}_\mu$ to the above triangle, we get that $X\in \ker (\tilde{i}_\mu)$ since $\tilde{i}_\mu(\epsilon_M)$ is an isomorphism.
Also applying $\tilde{j}_\mu$ to the above triangle, we get that $X\in \ker (\tilde{j}_\mu)$ since $M\in \ker (\tilde{j}_\mu)$ and $\tilde{j}_\mu \tilde{i}_\lambda=0$.
From Lemma \ref{lemma existence of short exact sequene 1}, it is easy to see that $X\cong \tilde{i}_\lambda \tilde{i}_\mu(X) \oplus \tilde{j}_\lambda \tilde{j}_\mu (X)$ in $D_{sg}(\Lambda)$. Together with $X \in \ker (\tilde{i}_\mu) \bigcap\ker (\tilde{j}_\mu)$, we get that $X\cong 0$ in $D_{sg}(\Lambda)$ and then $M\cong \tilde{i}_\lambda \tilde{i}_\mu (M)$, so $M\in \Im(\tilde{i}_\lambda)$. Therefore, $\ker(\tilde{j}_\mu)\simeq \Im (\tilde{i}_\lambda)$ since $\tilde{i}_\lambda$ is fully faithful. Similarly, we can get that $\ker (\tilde{i}_\mu)\simeq \Im(\tilde{j}_\lambda)$.
\end{proof}
It is worth noting that $i_\lambda$ and $j_\lambda$ are faithful functors (in general, not full), which can be obtained from the proof of Lemma \ref{lemma adjoint functor singularity category}.
\begin{theorem}\label{theorem singularity categories}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A$ and $B$. Then $D_{sg}(\Lambda)\simeq D_{sg}(A) \coprod D_{sg}(B)$.
\end{theorem}
\begin{proof}
For any $L\in D_{sg}(A),N\in D_{sg}(B)$, $\Hom_{D_{sg}(\Lambda)}( \tilde{i}_\lambda (L), \tilde{j}_\lambda(N))\cong \Hom_{D_{sg}(A)}(L, \tilde{i}_\mu \tilde{j}_\lambda(N))=0$ since $\tilde{i}_\mu \tilde{j}_\lambda=0$.
Similarly, $\Hom_{D_{sg}(\Lambda)}( \tilde{j}_\lambda(N),\tilde{i}_\lambda (L))=0$.
For any $M\in D_{sg}(\Lambda)$, from Lemma \ref{lemma existence of short exact sequene 1}, we get that $M\cong \tilde{i}_\lambda \tilde{i}_\mu(M) \oplus \tilde{j}_\lambda \tilde{j}_\mu (M)$ in $D_{sg}(\Lambda)$.
Together with the fully faithfulness of $\tilde{i}_\lambda$ and $\tilde{j}_\lambda$, we get that $D_{sg}(\Lambda)\simeq \Im(\tilde{i}_\lambda) \coprod \Im(\tilde{j}_\lambda)\simeq D_{sg}(A) \coprod D_{sg}(B)$.
\end{proof}
\begin{example}\label{example 2}
For a simple gluing algebra $\Lambda= KQ_\Lambda/I_\Lambda$ of $A$ and $B$, the condition that $I_\Lambda$ is generated by $I_A$ and $I_B$ is necessary for the above theorem to hold.
Let $Q_A,Q_B$ and $Q_\Lambda$ be the quivers as the following figure shows.
Then $Q_\Lambda$ is a simple gluing quiver of $Q_A$ and $Q_B$.
Let $A=KQ_A/ \langle \varepsilon_1^2\rangle$, $B=KQ_B/\langle \varepsilon_2^2 \rangle$.
If $\Lambda=KQ_\Lambda/I_\Lambda$, where $I_\Lambda$ is generated by
$\varepsilon_1^2,\varepsilon_2^2, \varepsilon_1\alpha-\alpha \varepsilon_2$, then all these algebras are Gorenstein algebras.
It is easy to see that $D_{sg}(A)\simeq D_{sg}(B)$, and $D_{sg}(\Lambda)\simeq D^b(KQ)/ [1]$, where $Q$ is a quiver of type $\A_2$, and $[1]$ is the suspension functor, see \cite{RZ}. However, $D_{sg}(\Lambda)$ is not equivalent to $D_{sg}(A)\coprod D_{sg}(B)$.
\begin{center}\setlength{\unitlength}{0.7mm}
\begin{picture}(50,10)(0,10)
\put(5,0){\begin{picture}(50,10)
\put(0,-2){$b$}
\qbezier(-1,1)(-3,3)(-2,5.5)
\qbezier(-2,5.5)(1,9)(4,5.5)
\qbezier(4,5.5)(5,3)(3,1)
\put(3.1,1.4){\vector(-1,-1){0.3}}
\put(1,10){$\varepsilon_2$}
\end{picture}}
\put(40,0){\begin{picture}(50,10)
\put(0,-2){$1$}
\put(18,0){\vector(-1,0){14}}
\put(10,-4){$\alpha$}
\put(20,-2){$2$}
\qbezier(-1,1)(-3,3)(-2,5.5)
\qbezier(-2,5.5)(1,9)(4,5.5)
\qbezier(4,5.5)(5,3)(3,1)
\put(3.1,1.4){\vector(-1,-1){0.3}}
\qbezier(19,1)(17,3)(18,5.5)
\qbezier(18,5.5)(21,9)(24,5.5)
\qbezier(24,5.5)(25,3)(23,1)
\put(23.1,1.4){\vector(-1,-1){0.3}}
\put(1,10){$\varepsilon_1$}
\put(21,10){$\varepsilon_2$}
\end{picture}}
\put(-40,0){\begin{picture}(50,10)
\put(0,-2){$1$}
\put(18,0){\vector(-1,0){14}}
\put(10,-4){$\alpha$}
\put(20,-2){$a$}
\qbezier(-1,1)(-3,3)(-2,5.5)
\qbezier(-2,5.5)(1,9)(4,5.5)
\qbezier(4,5.5)(5,3)(3,1)
\put(3.1,1.4){\vector(-1,-1){0.3}}
\put(1,10){$\varepsilon_1$}
\end{picture}}
\put(-50,-13){Figure 3. The quivers $Q_A$, $Q_B$ and $Q_\Lambda$ in Example \ref{example 2}. }
\end{picture}
\vspace{1.9cm}
\end{center}
\end{example}
\begin{lemma}[\cite{Lu}]\label{lemma adjoint preserves Gorenstein projectives}
Let $A_1$ and $A_2$ be Artin algebras. If $F_1:\mod A_1\rightarrow \mod A_2$ has the property that it is exact, preserves projective objects, and admits a right adjoint functor $F_2$, then
(i) For any $X\in\mod A_1$ and $Y\in \mod A_2$, we have that $\Ext_{A_2}^k(F_1(X),Y) \cong \Ext_{A_1}^k (X,F_2(Y))$ for any $k\geq1$.
(ii) If $\pd F_2(Q)<\infty$ or $\ind F_2(Q)<\infty$ for any indecomposable projective $A_2$-module $Q$, then $F_1(\Gp(A_1)) \subseteq \Gp(A_2)$.
\end{lemma}
\begin{lemma}\label{lemma the upper four functors presereve Gorenstein projectives}
Keep the notations as above. Then $i_\lambda$, $i_\mu$, $j_\lambda$ and $j_\mu$ preserve Gorenstein projective modules.
\end{lemma}
\begin{proof}
We only prove the statements for $i_\lambda$ and $i_\mu$.
Since $(i_\lambda,i_\mu)$ is an adjoint pair, and both of them are exact and preserve projectives, Lemma \ref{lemma adjoint preserves Gorenstein projectives} implies that $i_\lambda$ preserves Gorenstein projective modules.
For $i_\mu$, first, we claim that $i_\mu M\in \,^\bot A$ for any Gorenstein projective $\Lambda$-module $M$. In fact, for any indecomposable Gorenstein projective $\Lambda$-module $M$, Lemma \ref{lemma existence of short exact sequene 1} shows that there is a short exact sequence
$$0\rightarrow U(v)^{\oplus\dim M_v} \rightarrow i_\lambda i_\mu M\oplus j_\lambda j_\mu M\rightarrow M\rightarrow0,$$
which is split since $M\in\,^\bot \Lambda$, and then $i_\lambda i_\mu M\oplus j_\lambda j_\mu M\cong M\oplus U(v)^{\oplus\dim M_v}$. Note that
both $i_\lambda i_\mu M$ and $j_\lambda j_\mu M$ are Gorenstein projective.
Since $M$ is indecomposable, we get that $M\in \add (i_\lambda i_\mu M)$ or $M\in \add(j_\lambda j_\mu M)$. If $M\in \add(j_\lambda j_\mu M)$, then
$i_\lambda i_\mu M$ is projective, which implies that $i_\mu i_\lambda i_\mu M\in \proj A$. On the other hand, $i_\mu i_\lambda i_\mu M =i_\mu M \oplus P(a)^{\oplus s}$ for some integer $s$, so $i_\mu M$ is also projective.
If $M\in\add (i_\lambda i_\mu M)$, then for any indecomposable projective $\Lambda$-module $U(i)$ with $i\in\ver(Q_A)$, Lemma \ref{lemma adjoint preserves Gorenstein projectives} shows that
$$\Ext_A^k(i_\mu M, i_\mu U(i))\cong \Ext_\Lambda^k(i_\lambda i_\mu M,U(i))=0$$ for any $k>0$,
since $i_\lambda i_\mu M$ is Gorenstein projective. Furthermore, $i_\mu U(i)= P(i)\oplus P(a)^{\oplus t}$ for some integer $t$, which implies that
$\Ext_A^k(i_\mu M, P(i))=0$ for any $k>0$ and any indecomposable projective $A$-module $P(i)$, and then
$i_\mu M\in \,^\bot A$.
Second, for any Gorenstein projective $\Lambda$-module $Z$, there is an exact sequence
$$0\rightarrow Z \rightarrow U_0 \xrightarrow{d_0} U_1 \xrightarrow{d_1}\cdots$$
with $U_j$ projective and $\ker(d_j) \in \Gp(\Lambda)$ for any $j\geq0$. By applying $i_\mu$ to it, we get that there is an exact sequence
$$0\rightarrow i_\mu Z\rightarrow i_\mu U_0 \xrightarrow{i_\mu d_0} i_\mu U_1\xrightarrow{i_\mu d_1}\cdots $$
with $i_\mu U_j$ projective for any $j\geq0$ since $i_\mu$ is exact and preserves projectives. Additionally, since $\ker(d_j) \in \Gp(\Lambda)$, from the above, we know that $i_\mu \ker(d_j) \in \,^\bot A$ for any $j\geq0$, and then $i_\mu Z$ is Gorenstein projective.
\end{proof}
For any additive category $\ca$, we denote by $\Ind \ca$ the set formed by all the indecomposable objects (up to isomorphisms) of $\ca$.
\begin{theorem}\label{theorem stable category of Cm modules}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A$ and $B$.
Then $\underline{\Gp}(\Lambda)\simeq \underline{\Gp}(A) \coprod \underline{\Gp}(B)$.
Furthermore, for any indecomposable $\Lambda$-module $Z$, $Z$ is Gorenstein projective if and only if there exists an indecomposable Gorenstein projective $A$-module $X$ or $B$-module $Y$ such that $Z\cong i_\lambda(X)$ or $Z\cong j_\lambda(Y)$. \end{theorem}
\begin{proof}
Lemma \ref{lemma the upper four functors presereve Gorenstein projectives} shows that $i_\lambda,i_\mu,j_\lambda,j_\mu$ induce four exact functors on the stable categories of Gorenstein projective modules, which are also denoted by $\tilde{i}_\lambda,\tilde{i}_\mu, \tilde{j}_\lambda,\tilde{j}_\mu$ respectively. Note that $\tilde{i}_\lambda$ and $\tilde{j}_\lambda$ are fully faithful.
For any Gorenstein projective $\Lambda$-module $Z$, Lemma \ref{lemma existence of short exact sequene 1} yields that
$Z\cong \tilde{i}_\lambda \tilde{i}_\mu(Z) \oplus \tilde{j}_\lambda \tilde{j}_\mu (Z)$.
Similar to the proof of Theorem \ref{theorem singularity categories}, we get that
for any $L\in \Gp(A),N\in \Gp(B)$, $\Hom_{\underline{\Gp}(\Lambda)}( \tilde{i}_\lambda (L), \tilde{j}_\lambda(N))=0=\Hom_{\underline{\Gp}(\Lambda)}( \tilde{j}_\lambda(N),\tilde{i}_\lambda (L))$. Therefore,
$$\underline{\Gp}(\Lambda)\simeq \underline{\Gp}(A) \coprod \underline{\Gp}(B).$$
Obviously, $i_\lambda,j_\lambda$ preserve indecomposable modules. So for any indecomposable Gorenstein projective $A$-module $X$ and $B$-module $Y$, we get that
$i_\lambda(X),j_\lambda(Y)$ are indecomposable Gorenstein projective $\Lambda$-modules. On the other hand, for any indecomposable Gorenstein projective $\Lambda$-module $Z$, if $Z$ is projective, then $Z=U(i)$ for some vertex $i$. If $i\in \ver(Q_A)$, then $Z=i_\lambda(P(i))$; if $i\in \ver(Q_B)$, then $Z=j_\lambda(Q(i))$.
If $Z$ is not projective, then the proof of Lemma \ref{lemma the upper four functors presereve Gorenstein projectives} implies that one and only one of $i_\mu Z$, $j_\mu Z$ is not projective. If $i_\mu Z$ is not projective, then $Z\cong i_\lambda i_\mu Z$ in $\underline{\Gp}(\Lambda)$. Set $i_\mu Z= \bigoplus_{i=1}^nZ_i$, where $Z_i$ is indecomposable for each $1\leq i\leq n$. Then $i_\lambda i_\mu Z=\bigoplus_{i=1}^n i_\lambda(Z_i)$ which is isomorphic to $Z$ in $\underline{\Gp}(\Lambda)$. So there is one and only one non-projective $Z_{i_0}$ such that $i_\lambda(Z_{i_0})\cong Z$. Since $i_\mu(Z)\in \Gp(A)$, we get that $Z_{i_0}$ is an indecomposable Gorenstein projective $A$-module such that $Z\cong i_\lambda(Z_{i_0})$.
If $j_\mu Z$ is not projective, we can prove it similarly.
\end{proof}
\begin{corollary}
Let $\Lambda$ be a simple gluing algebra of several finite-dimensional bound quiver algebras $A_1,\dots,A_n$. If $A_1,\dots,A_n$ are self-injective algebras, then
$$D_{sg}(\Lambda)\simeq \underline{\Gproj} \Lambda\simeq \coprod_{i=1}^n\underline{\mod} A_i.$$
\end{corollary}
\begin{proof}
It follows from Proposition \ref{lemma simple gluing Nakayama algebra Gorenstein} and Theorem \ref{theorem stable category of Cm modules} immediately.
\end{proof}
\begin{corollary}\label{corollary Gorenstein defect categories}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A$ and $B$.
Then $D_{def}(\Lambda)\simeq D_{def}(A) \coprod D_{def}(B)$.
\end{corollary}
\begin{proof}
From Theorem \ref{theorem singularity categories} and Theorem \ref{theorem stable category of Cm modules}, we get that
$$D_{sg}(\Lambda)\simeq D_{sg}(A)\coprod D_{sg}(B),\mbox{ and } \underline{\Gp}(\Lambda)\simeq \underline{\Gp}(A) \coprod \underline{\Gp}(B).$$
In particular, since these two equivalences are induced by the functors $i_\lambda,j_\lambda$, they are compatible, which implies the result immediately by the definition of Gorenstein defect categories.
\end{proof}
\begin{corollary}\label{corollary factors through the gluing vertex}
Keep the notations as above. Then for any $X\in\Gp(A)$, $Y\in\Gp (B)$, and any morphism $f\in\Hom_\Lambda(i_\lambda (X),j_\lambda (Y))$, we have that $f$ factors through $U(v)^{\oplus n}$ for some $n$.
\end{corollary}
\begin{proof}
From Theorem \ref{theorem stable category of Cm modules}, we get that $f$ factors through some projective $\Lambda$-module $U$ as $f=f_2f_1$, where $f_1:i_\lambda(X)\rightarrow U$, $f_2:U\rightarrow j_\lambda(Y)$.
Let $Q_Y\xrightarrow{\alpha_1} Y$ be the projective cover of $Y$. Then $f_2$ factors through $j_\lambda(Q_Y)$ as $f_2=j_\lambda(\alpha_1) f_3$ for some $f_3:U\rightarrow j_\lambda(Q_Y)$.
Since $X$ is Gorenstein projective, there is a short exact sequence
$$0\rightarrow X\xrightarrow{\beta_1} P_X\xrightarrow{\beta_2} X_1\rightarrow0$$
such that $P_X$ is projective and $X_1\in\Gp(A)$. Then
$f_1$ factors through $i_\lambda(P_X)$ as $f_1=f_4 i_\lambda(\beta_1)$ for some $f_4: i_\lambda(P_X)\rightarrow U$
since $i_\lambda$ is exact and preserves Gorenstein projective modules.
So $f=j_\lambda(\alpha_1)f_3f_4 i_\lambda(\beta_1)$.
For any indecomposable $\Lambda$-modules $U(i),U(j)$ with $i\in \ver(Q_A)$, $j\in \ver(Q_B)$, and any morphism $\gamma:U(i)\rightarrow U(j)$, it is easy to see that $\gamma$ factors through some object in $\add U(v)$.
So $f_3f_4: i_\lambda(P_X)\rightarrow j_\lambda(Q_Y)$ factors through $U(v)^{\oplus n}$ for some $n$, and then the desired result follows.
\end{proof}
Similarly, for any $X\in\Gp(A)$, $Y\in\Gp (B)$, any morphism $g\in\Hom_\Lambda(j_\lambda (Y),i_\lambda (X))$ factors through $U(v)^{\oplus n}$ for some $n$.
\begin{example}
For a simple gluing algebra $\Lambda= KQ/I$ of $A$ and $B$, the condition that $I$ is generated by $I_A$ and $I_B$ is necessary to ensure that the above corollary holds.
Keep the notations as in Example \ref{example 1}. Then $D_{def}(A)=0=D_{def}(B)$ since $A$ and $B$ are Gorenstein. However, $D_{def}(\Lambda)\neq 0$ since $\Lambda$ is not Gorenstein.
\end{example}
The following example is taken from \cite{chen3}.
\begin{example}\label{example not gentle algebra}
Let $\Lambda=KQ/I$ be the algebra where $Q$ is the quiver $\xymatrix{1 \ar@/^/[r]^{\alpha_1} & 2 \ar@/^/[l]^{\beta_1} \ar@/^/[r]^{\alpha_2} & 3\ar@/^/[l]^{\beta_2} &4,\ar[l]_\gamma }$
and $I=\langle \beta_i\alpha_i,\alpha_i\beta_i, \beta_2\gamma |i=1,2 \rangle$. Let $A=(e_1+e_2)\Lambda (e_1+e_2)$, and $B=(1-e_1)\Lambda(1-e_1)$. Then $\Lambda$ is a simple gluing algebra of $A$ and $B$. Obviously, $A$ is self-injective with the indecomposable non-projective Gorenstein projective modules $_AS(1),\,_AS(2)$; $B$ is CM-free, and then $D_{def}(B)=D_{sg}(B)$. Theorem \ref{theorem singularity categories} shows that
$D_{sg}(\Lambda)\simeq D_{sg}(A) \coprod D_{sg}(B)$.
From Theorem \ref{theorem stable category of Cm modules}, we get that $\underline{\Gp}(\Lambda)\simeq \underline{\Gp}(A)=\underline{\mod} A$. In particular, the indecomposable non-projective Gorenstein projective $\Lambda$-modules are $i_\lambda(_AS(1))=\,_\Lambda S(1)$ and $i_\lambda(_A S(2))=\rad U(1)$ (which is the string module with string $2\xrightarrow{\alpha_2}3$).
From Corollary \ref{corollary Gorenstein defect categories}, we get that $D_{def}(\Lambda)\simeq D_{sg}(B)$.
\end{example}
The following corollary follows from Theorem \ref{theorem stable category of Cm modules} directly.
\begin{corollary}\label{corollary CM-finite CM-free}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$. Then
(i) $\Lambda$ is CM-free if and only if $A$ and $B$ are CM-free;
(ii) $\Lambda$ is CM-finite if and only if $A$ and $B$ are CM-finite.
\end{corollary}
At the end of this section, we describe the singularity categories of cluster-tilted algebras of type $\A_n$ and of endomorphism algebras of maximal rigid objects of the cluster tube $\cc_n$. Note that both of them are $1$-Gorenstein algebras.
Let $H_1=K(\circ\rightarrow\circ)$ be the hereditary algebra of type $\A_2$, and let $H_2=KQ/I$ be the self-injective algebra, where $Q$ is the quiver
\[\xymatrix{&\circ\ar[dr]^{\alpha_2} &\\
\circ\ar[ur]^{\alpha_1} &&\circ\ar[ll]^{\alpha_3}&\mbox{ and }I=\langle \alpha_2\alpha_1,\alpha_3\alpha_2,\alpha_1\alpha_3 \rangle.
}\]
Let $A_1,\dots,A_n$ be algebras, each of which is either $H_1$ or $H_2$. A cluster-tilted algebra of type $\A$ is an algebra $\Lambda$ which is a simple gluing algebra of $A_1,\dots,A_n$ such that exactly two of $A_1,\dots,A_n$ are glued at each glued vertex, see \cite{BV}. From Theorem \ref{theorem stable category of Cm modules}, we get the following result immediately.
\begin{corollary}[\cite{CGLu,Ka}]
Let $KQ/I$ be a cluster-tilted algebra of type $\A$. Then
$$\underline{\Gproj}(KQ/I)\simeq \coprod_{t(Q)} \underline{\mod} H_2,$$
where $t(Q)$ is the number of the oriented cycles of length $3$ in $Q$.
\end{corollary}
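As a minimal illustration of this corollary (the verification is a routine check), suppose that $Q$ itself is a single oriented $3$-cycle, so that $KQ/I\cong H_2$ and $t(Q)=1$. Then
$$\underline{\Gproj}(KQ/I)\simeq \underline{\mod} H_2,$$
and since the indecomposable non-projective $H_2$-modules are exactly the three simple modules, $\underline{\Gproj}(KQ/I)$ has precisely three indecomposable objects.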
Let $KQ_1/I_1$ be a cluster-tilted algebra of type $\A_{n-1}$ $(n\geq2)$, and let $KQ_2/I_2$ be the algebra with $I_2=\langle \varphi^2\rangle$, where $Q_2$ consists of a single vertex with a loop $\varphi$. Note that $KQ_1/I_1$ is a simple gluing algebra of several algebras which are isomorphic to $H_1$ or $H_2$.
Choose a vertex $a\in Q_1$ which is not a glued vertex for $Q_1$. Then define $KQ/I$ to be the simple gluing algebra of $KQ_1/I_1$ and $KQ_2/I_2$ by identifying $a$ and the unique vertex of $Q_2$. From \cite{Va,Yang}, we get that $KQ/I$ is the endomorphism algebra of a maximal rigid object of the cluster tube $\cc_n$, and all endomorphism algebras of maximal rigid objects of the cluster tube $\cc_n$ arise in this way. Theorem \ref{theorem stable category of Cm modules} yields the following result immediately.
\begin{corollary}[\cite{Ka}]
Let $KQ/I$ be an endomorphism algebra of a maximal rigid object of the cluster tube $\cc_n$. Then
$$\underline{\Gproj}(KQ/I)\simeq \coprod_{t(Q)} \underline{\mod} H_2 \coprod \underline{\mod} K[X]/\langle X^2\rangle,$$
where $t(Q)$ is the number of the oriented cycles of length $3$ in $Q$.
\end{corollary}
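As a minimal illustration of this corollary, take $n=2$: then $KQ_1/I_1\cong K$ is the cluster-tilted algebra of type $\A_1$, so $KQ/I\cong K[X]/\langle X^2\rangle$ and $t(Q)=0$, and the formula above reduces to
$$\underline{\Gproj}(KQ/I)\simeq \underline{\mod} K[X]/\langle X^2\rangle,$$
whose unique indecomposable object is the simple module.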
\section{Cohen-Macaulay Auslander algebras of simple gluing algebras}
In this section, we describe the Cohen-Macaulay Auslander algebras for CM-finite simple gluing algebras.
First, let us recall some results about almost split sequences and irreducible morphisms.
\begin{lemma}[see e.g. \cite{Liu}]\label{lemma relation of almost split sequences and irreducilbe morphisms}
Let $\ca$ be a Krull-Schmidt exact $K$-category with $0\rightarrow X\xrightarrow{f} Y \xrightarrow{g} Z\rightarrow0$ an almost split sequence.
(i) Up to isomorphism, the sequence is the unique almost split sequence starting with $X$ and the unique one ending with $Z$.
(ii) Each irreducible morphism $f_1:X\rightarrow Y_1$ or $g_1:Y_1\rightarrow Z$ fits into an almost split sequence
$$0\rightarrow X\xrightarrow{\left( \begin{array}{c} f_1\\ f_2 \end{array}\right)} Y_1\oplus Y_2\xrightarrow{(g_1,g_2)}Z\rightarrow0.$$
\end{lemma}
Recall that for a CM-finite algebra $\Lambda$, $\Gproj\Lambda$ is a functorially finite subcategory of $\mod \Lambda$, which implies that $\Gproj\Lambda$ has almost split sequences, see \cite[Theorem 2.4]{AS}, and then $\underline{\Gproj}\Lambda$ has Auslander-Reiten triangles. $\Aus(\Gproj\Lambda)$ is isomorphic to the opposite algebra of $KQ^{\Aus}/I^{\Aus}$ for some ideal $I^{\Aus}$, see Chapter VII, Section 2 of \cite{ARS}.
Any irreducible morphism $f:U(i)\rightarrow U(j)$ in $\Gp \Lambda$ is also irreducible in $\proj \Lambda$, and there exists some arrow $\alpha:j\rightarrow i$ such that $f$ is induced by $\alpha$.
In the following, we always assume that the simple gluing algebra $\Lambda$ is CM-finite. Since $\Lambda$ is CM-finite, Corollary \ref{corollary CM-finite CM-free} yields that both $A$ and $B$ are CM-finite.
Let $Q^{\Aus}_A$, $Q^{\Aus}_B$ and $Q^{\Aus}_\Lambda$ be the Auslander-Reiten quivers of $\Gproj A$, $\Gp B$ and $\Gp \Lambda$ respectively. First, we prove that $Q^{\Aus}_\Lambda$ is a simple gluing quiver of $Q^{\Aus}_A$ and $Q^{\Aus}_B$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$. Before that, we give two lemmas.
\begin{lemma}\label{lemma functors preserve for irreducible morphisms between projective modules}
Let $\Lambda=KQ_\Lambda/I_\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$. Then for any vertices $i,j$ in $Q_\Lambda$ and any arrow $\alpha: j\rightarrow i$ in $Q_\Lambda$, we have
(i) if $\alpha$ is in $Q_A$, and $f: P(i)\rightarrow P(j)$ is the irreducible morphism in $\Gp A$ induced by the arrow $\alpha$, then $i_\lambda(f):U(i)\rightarrow U(j)$ is irreducible.
(ii) if $\alpha$ is in $Q_B$, and $g: P(i)\rightarrow P(j)$ is the irreducible morphism in $\Gp B$ induced by the arrow $\alpha$, then $j_\lambda(g):U(i)\rightarrow U(j)$ is irreducible.
(iii) any irreducible morphism from $U(i)$ to $U(j)$ in $\Gproj\Lambda$ is of the form $i_\lambda(f)$
for some irreducible morphism $f: P(i)\rightarrow P(j)$ in $\Gproj A$, or of the form $j_\lambda(g)$ for some irreducible morphism $g: Q(i)\rightarrow Q(j)$ in $\Gp B$.
\end{lemma}
\begin{proof}
(i) If $i_\lambda(f)$ factors through an object in $\Gp(\Lambda)$, then by Theorem \ref{theorem stable category of Cm modules}, it is of the form $i_\lambda(f)=hg$ for some $i_\lambda(P(i))\xrightarrow{g} i_\lambda(L)\oplus j_\lambda(M)\xrightarrow{h} i_\lambda(P(j))$, where $L\in \Gp(A)$ and $M\in\Gp(B)$. We only need to prove that either $g$ is a section or $h$ is a retraction.
Since $f$ is induced by an arrow $\alpha: j\rightarrow i$ in $Q_A$, it is easy to see that $i_\lambda(f): U(i)\rightarrow U(j)$ is the morphism induced by the arrow $\alpha:j\rightarrow i$ in $Q_\Lambda$, which is irreducible in $\proj \Lambda$.
We assume that $L=L_p\oplus L_g$, where $L_p$ is projective and no indecomposable direct summand of $L_g$ is projective, and similarly
$M=M_p\oplus M_g$, where $M_p$ is projective and no indecomposable direct summand of $M_g$ is projective.
So there exist the following exact sequences
$$0\rightarrow L'_g\xrightarrow{k_0} P_0\xrightarrow{k_1} L_g\rightarrow0,\mbox{ and }0\rightarrow L_g\xrightarrow{l_0} P_1\xrightarrow{l_1} L''_g\rightarrow0,$$
with $P_0,P_1\in\proj A$ and $L'_g,L''_g\in\Gproj A$.
Similarly, there are exact sequences
$$0\rightarrow M'_g\xrightarrow{s_0} Q_0\xrightarrow{s_1} M_g\rightarrow0,\mbox{ and }0\rightarrow M_g\xrightarrow{t_0} Q_1\xrightarrow{t_1} M''_g\rightarrow0,$$
with $Q_0,Q_1\in\proj B$ and $M'_g,M''_g\in\Gproj B$.
Since $i_\lambda$ and $j_\lambda$ are exact and preserve projective modules, there exist the following short exact sequences
\begin{eqnarray*}
&&0\rightarrow i_\lambda(L'_g) \oplus j_\lambda(M'_g)\xrightarrow{\tiny\left(\begin{array}{cc}0&0\\i_\lambda(k_0)&0\\
0&0\\ 0& j_\lambda(s_0)\end{array}\right)} i_\lambda(L_p)\oplus i_\lambda(P_0)\oplus j_\lambda(M_p)\oplus j_\lambda(Q_0)\\
&&\xrightarrow{\tiny\left(\begin{array}{cccc}1&&&\\&i_\lambda(k_1)&&\\
&&1&\\ &&& j_\lambda(s_1)\end{array}\right)} i_\lambda(L_p)\oplus i_\lambda(L_g) \oplus j_\lambda(M_p)\oplus j_\lambda(M_g)\rightarrow0,
\end{eqnarray*}
and
\begin{eqnarray*}
&&0\rightarrow i_\lambda(L_p)\oplus i_\lambda(L_g) \oplus j_\lambda(M_p)\oplus j_\lambda(M_g)\xrightarrow{\tiny\left(\begin{array}{cccc}1&&&\\&i_\lambda(l_0)&&\\
&&1&\\ &&& j_\lambda(t_0)\end{array}\right)} \\
&&i_\lambda(L_p)\oplus i_\lambda(P_1)\oplus j_\lambda(M_p)\oplus j_\lambda(Q_1)\xrightarrow{\tiny\left(\begin{array}{cccc}0&i_\lambda(l_1)&0&0\\0&0&0&j_\lambda(t_1)\end{array}\right)} i_\lambda(L''_g) \oplus j_\lambda(M''_g)\rightarrow0.
\end{eqnarray*}
It is easy to see that there exist morphisms $p: i_\lambda(P(i))\rightarrow i_\lambda(L_p)\oplus i_\lambda(P_0)\oplus j_\lambda(M_p)\oplus j_\lambda(Q_0)$ and
$q: i_\lambda(L_p)\oplus i_\lambda(P_1)\oplus j_\lambda(M_p)\oplus j_\lambda(Q_1)\rightarrow i_\lambda(P(i))$
such that
$$g= \left(\begin{array}{cccc}1&&&\\&i_\lambda(k_1)&&\\
&&1&\\ &&& j_\lambda(s_1)\end{array}\right)p,
\mbox{ and }
h=q\left(\begin{array}{cccc}1&&&\\&i_\lambda(l_0)&&\\
&&1&\\ &&& j_\lambda(t_0)\end{array}\right). $$
Therefore,
$$i_\lambda(f)=hg=q\left(\begin{array}{cccc}1&&&\\&i_\lambda(l_0 k_1) &&\\
&&1&\\ &&& j_\lambda(t_0s_1)\end{array}\right)p.$$
Since $i_\lambda(f)$ is irreducible in $\proj\Lambda$, we get that either $p$ is a section or $q$ is a retraction.
Suppose first that $p$ is a section. If $p$ induces $i_\lambda(P(i))$ to be a direct summand of $i_\lambda(L_p)$ or a direct summand of $j_\lambda(M_p)$, it is easy to see that $g$ is a section.
If $p$ induces $i_\lambda(P(i))$ to be a direct summand of $i_\lambda(P_0)$, then
$$\left(\begin{array}{cccc}1&&&\\&i_\lambda(l_0k_1)&&\\
&&1&\\ &&& j_\lambda(t_0s_1)\end{array}\right)p$$
is not a section, which yields that $q$ is a retraction; in particular, $q$ induces that $i_\lambda(P(j))$ is a direct summand of $i_\lambda (P_1)$. So $f$ factors through $L_g$, which implies that $f$ is not irreducible in $\Gp A$, since all indecomposable direct summands of $L_g$ are non-projective, a contradiction.
If $p$ induces $i_\lambda(P(i))$ to be a direct summand of $j_\lambda(Q_0)$, then we get that $i_\lambda(f)$ factors through $j_\lambda(t_0s_1)$. Since $t_0s_1:Q_0\rightarrow Q_1$ factors through $M_g$, it is easy to see that there exists an element $w=\sum_{i=1}^n d_i w_i\in KQ_\Lambda$ with $d_i\in K$ and $w_i$ a path containing at least one arrow in $Q_B$ for each $1\leq i\leq n$, such that $\alpha-w\in I_\Lambda$, which is impossible.
The case where $q$ is a retraction is dual to the above; we omit the proof here.
So $i_\lambda(f)$ is irreducible.
The proof of (ii) is similar to that of (i).
(iii) Any irreducible morphism $\psi$ from $U(i)$ to $U(j)$ in $\Gproj\Lambda$ is also irreducible in $\proj\Lambda$, and hence is induced by an arrow $\alpha:j\rightarrow i$. If $\alpha$ is in $Q_A$, then $\psi=i_\lambda(f)$ where $f:P(i)\rightarrow P(j)$ is the irreducible morphism in $\proj A$ induced by $\alpha$. Suppose for a contradiction that there exist morphisms $f_1$ and $f_2$ in $\Gproj A$ such that $f=f_2f_1$ with neither $f_1$ a section nor $f_2$ a retraction. Then it is easy to see that neither $i_\lambda (f_1)$ is a section nor $i_\lambda(f_2)$ a retraction, and $i_\lambda(f)=i_\lambda(f_2) i_\lambda(f_1)$, a contradiction to the fact that $i_\lambda(f)$ is irreducible in $\Gp \Lambda$.
If $\alpha$ is in $Q_B$, then we can prove it similarly.
\end{proof}
\begin{lemma}\label{lemma functors preserve for irreducible morphisms between non-projective modules}
Let $\Lambda=KQ_\Lambda/I_\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$. If $\Lambda$ is CM-finite, then
(i) for any irreducible morphism $f:X\rightarrow Y$ in $\Gp A$ with either $X$ or $Y$ not projective, we have that $i_\lambda(f)$ is irreducible in $\Gproj \Lambda$;
(ii) for any irreducible morphism $g:L\rightarrow M$ in $\Gp B$ with either $L$ or $M$ not projective, we have that $j_\lambda(g)$ is irreducible in $\Gproj \Lambda$;
(iii) any irreducible morphism in $\Gproj\Lambda$ is of form $i_\lambda(f)$
for some irreducible morphism $f: X\rightarrow Y$ in $\Gproj A$ or $j_\lambda(g)$ for some irreducible morphism $g: L\rightarrow M$ in $\Gp B$.
\end{lemma}
\begin{proof}
Since $\Lambda$ is CM-finite, Corollary \ref{corollary CM-finite CM-free} yields that both $A$ and $B$ are CM-finite. Then all the categories $\Gp A$, $\Gp B$ and $\Gp \Lambda$ have almost split sequences.
First, we prove that $i_\lambda$ preserves almost split sequences.
For any almost split sequence
$$0\rightarrow L\xrightarrow{f} M\xrightarrow{g} N\rightarrow0$$
in $\Gp(A)$, obviously, $L$ and $N$ are indecomposable, and
$$L\xrightarrow{\tilde{f}} M \xrightarrow{\tilde{g}}N\rightarrow \Sigma L$$
is an Auslander-Reiten triangle in $\underline{\Gp}(A)$, where $\Sigma$ is its suspension functor.
Obviously, $i_\lambda(L)$ and $i_\lambda(N)$ are indecomposable modules. Since $i_\lambda$ is exact, the sequence
\begin{equation}\label{equation 6}
0\rightarrow i_\lambda (L)\xrightarrow{i_\lambda(f)} i_\lambda(M)\xrightarrow{i_\lambda(g)} i_\lambda(N)\rightarrow0
\end{equation}
is exact. We claim that the sequence (\ref{equation 6}) is almost split. From Theorem \ref{theorem stable category of Cm modules}, we get that
$\tilde{i}_\lambda$ preserves Auslander-Reiten triangles, and then
\begin{equation}\label{equation 7}
i_\lambda (L)\xrightarrow{\widetilde{i_\lambda(f)}} i_\lambda(M) \xrightarrow{\widetilde{i_\lambda(g)}}i_\lambda(N)\rightarrow \Sigma i_\lambda(L)
\end{equation}
is an Auslander-Reiten triangle in $\underline{\Gp}(\Lambda)$.
Let $h:Z\rightarrow i_\lambda(N)$ be any morphism which is not a retraction; then $\tilde{h}$ is also not a retraction in $\underline{\Gp}(\Lambda)$. The Auslander-Reiten triangle (\ref{equation 7}) yields that $\tilde{h}$ factors through $\widetilde{i_\lambda(g)}$ as $\tilde{h}=\widetilde{i_\lambda(g)}\tilde{l}$ for some morphism $l: Z\rightarrow i_\lambda(M)$.
Then there exists a projective $\Lambda$-module $U$ such that $h=i_\lambda(g) l+p_2p_1$ for some morphism $p_1:Z\rightarrow U$ and $p_2:U\rightarrow i_\lambda(N)$ as the following diagram shows
\[\xymatrix{ i_\lambda(L) \ar[r]^{i_\lambda(f)} & i_\lambda(M) \ar[r]^{i_\lambda(g)}& i_\lambda(N) \\
&Z\ar[r]^{p_1} \ar[u]^l \ar[ur]^{h} &U.\ar[u]^{p_2} }\]
Since $U$ is projective, we get that $p_2$ factors through $i_\lambda(g)$ as $p_2=i_\lambda(g) p_3$ for some morphism $p_3:U\rightarrow i_\lambda(M)$. Then
$h=i_\lambda(g) l+p_2p_1=i_\lambda(g)( l+p_3p_1)$ which implies that $i_\lambda(g)$ is a right almost split morphism. Therefore, the sequence (\ref{equation 6})
is almost split in $\Gproj\Lambda$.
Similarly, we get that $j_\lambda$ preserves almost split sequences.
Theorem \ref{theorem stable category of Cm modules} also implies that any Auslander-Reiten triangle in $\underline{\Gproj}\Lambda$ is either the image of an Auslander-Reiten triangle in $\underline{\Gp}(A)$ under $\tilde{i}_\lambda$ or the image of an Auslander-Reiten triangle in $\underline{\Gp}(B)$ under $\tilde{j}_\lambda$.
Note that all the Auslander-Reiten triangles in the stable categories are induced by almost split sequences. So any almost split sequence in $\Gp(\Lambda)$ is either of form
$$0\rightarrow i_\lambda(L)\xrightarrow{i_\lambda(u)} i_\lambda(M) \xrightarrow{i_\lambda(v)} i_\lambda(N)\rightarrow0 $$
where $0\rightarrow L\xrightarrow{u} M\xrightarrow{v}N\rightarrow0$ is an almost split sequence in $\Gp(A)$;
or of form $$0\rightarrow j_\lambda(X)\xrightarrow{j_\lambda(u)} j_\lambda(Y) \xrightarrow{j_\lambda(v)} j_\lambda(Z)\rightarrow0 $$
where $0\rightarrow X\xrightarrow{u} Y\xrightarrow{v}Z\rightarrow0$ is an almost split sequence in $\Gp(B)$.
(i) For any irreducible morphism $f:X\rightarrow Y$ in $\Gproj A$, if $X$ is not projective, then by Lemma \ref{lemma relation of almost split sequences and irreducilbe morphisms} (ii) there is an almost split sequence starting with $X$, which is of form $$0\rightarrow X\xrightarrow{\left( \begin{array}{c} f\\ f_2 \end{array}\right)} Y_1\oplus Y_2\xrightarrow{(g_1,g_2)}Z\rightarrow0.$$
From above, we get an almost split sequence in $\Gproj \Lambda$
$$0\rightarrow i_\lambda X\xrightarrow{\left( \begin{array}{c} i_\lambda(f)\\ i_\lambda(f_2) \end{array}\right)} i_\lambda(Y_1)\oplus i_\lambda(Y_2)\xrightarrow{(i_\lambda(g_1),i_\lambda(g_2))}i_\lambda Z\rightarrow0.$$
From it, it is easy to see that $i_\lambda(f)$ is irreducible. If $Y$ is not projective, we can prove it similarly.
(ii) is similar to (i).
(iii) For any irreducible morphism $\psi: W\rightarrow Z$ in $\Gp \Lambda$, if $W,Z\in \proj \Lambda$, then it follows from Lemma \ref{lemma functors preserve for irreducible morphisms between projective modules} (iii) immediately. Otherwise, if $W$ is not projective, then Theorem \ref{theorem stable category of Cm modules} shows that $W$ is of form $i_\lambda (X)$ or $j_\lambda(L)$ for some non-projective indecomposable Gorenstein projective modules $X\in \Gproj A$ and $L\in \Gproj B$. We only prove the case $W=i_\lambda(X)$, since the other one is similar.
Then there is an almost split sequence starting with $i_\lambda (X)$, by the above, it is of form
$$0\rightarrow i_\lambda(X)\xrightarrow{i_\lambda(f_1)} i_\lambda(Y_1) \xrightarrow{i_\lambda(g_1)} i_\lambda(Z_1)\rightarrow0.$$
From Lemma \ref{lemma relation of almost split sequences and irreducilbe morphisms} (ii), it is easy to see that $\psi=i_\lambda (f)$ for some $i_\lambda (f):i_\lambda (X)\rightarrow i_\lambda(Y)$, where $Y$ is an indecomposable direct summand of $Y_1$. Since $i_\lambda(f)$ is irreducible, similar to the proof of Lemma \ref{lemma functors preserve for irreducible morphisms between projective modules} (iii), we get that $f$ is irreducible in $\Gp A$.
For the case when $Z$ is not projective, we can prove it similarly.
\end{proof}
Recall that for any CM-finite algebra $\Lambda$, the Auslander-Reiten quiver of $\Gproj \Lambda$ is formed by indecomposable objects and irreducible morphisms in $\Gproj\Lambda$. So we get the following result by Lemma \ref{lemma functors preserve for irreducible morphisms between projective modules} and Lemma \ref{lemma functors preserve for irreducible morphisms between non-projective modules} immediately.
\begin{proposition}\label{proposition simple gluing of AR-quivers}
Let $\Lambda=KQ_\Lambda/I_\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$.
Then $\Lambda$ is CM-finite if and only if $A$ and $B$ are CM-finite. In this case, the Auslander-Reiten quiver $Q^{\Aus}_\Lambda$ of $\Lambda$ is a simple gluing quiver of the Auslander-Reiten quiver $Q^{\Aus}_A$ of $A$ and the Auslander-Reiten quiver $Q^{\Aus}_B$ of $B$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$.
\end{proposition}
Recall that for any $M\in\mod A$ and $N\in\mod \Lambda$, we denote by
$$\alpha_{M,N}: \Hom_\Lambda(i_\lambda(M),N)\xrightarrow{\sim} \Hom_A(M,i_\mu(N))$$
the adjoint isomorphism.
For any $M\in \mod A$, we denote by $\mu_M= \alpha_{M, i_\lambda (M)} (1_{i_\lambda (M)})$ the adjunction morphism.
For any $L\in\mod B$ and $N\in\mod \Lambda$ we denote by
$$\beta_{L,N}: \Hom_\Lambda(j_\lambda(L),N)\xrightarrow{\sim} \Hom_B(L,j_\mu(N))$$
the adjoint isomorphism. For any $L\in \mod B$, we denote by $\nu_L= \beta_{L, j_\lambda (L)} (1_{j_\lambda (L)})$ the adjunction morphism.
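The computations below repeatedly use the standard description of these adjoint isomorphisms via the adjunction morphisms. Explicitly (a routine consequence of adjunction, recorded here for convenience), for any $u\in\Hom_\Lambda(i_\lambda(M),N)$ we have
$$\alpha_{M,N}(u)=i_\mu(u)\,\mu_M,$$
and similarly for $\beta$.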
\begin{lemma}\label{lemma characterization of ideal 1}
Let $\Lambda=KQ_\Lambda/I_\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$.
Let $X\in \Gp A$ and $Y\in \Gp B$ be indecomposable Gorenstein projective modules. For any morphisms $f_i:X\rightarrow P(a)$ and $g_i:Q(b)\rightarrow Y$ where $1\leq i\leq n$, if $f_1,\dots,f_n$ are linearly independent and $\sum_{i=1}^n j_\lambda(g_i)i_\lambda(f_i)=0$, then $g_i=0$ for any $1\leq i\leq n$.
\end{lemma}
\begin{proof}
Recall that $i_\mu j_\lambda(Y)= P(a)^{\oplus m}$ for some integer $m$. If $m=0$, then $i_\mu j_\lambda(Y)=0$, so $\alpha_{P(a), j_\lambda(Y)}(j_\lambda(g_i))=0$ for any $1\leq i\leq n$, which implies that $j_\lambda(g_i)=0$, and then
$g_i=0$ for any $1\leq i\leq n$.
If $m>0$, then $\alpha_{X,j_\lambda(Y)} (j_\lambda(g_i)i_\lambda(f_i))=\alpha_{P(a),j_\lambda(Y)}(j_\lambda (g_i)) f_i$ for any $1\leq i\leq n$.
Since $i_\mu j_\lambda(Y)= P(a)^{\oplus m}$, from (\ref{equation form of morphism 2}) and the section map $\mu_{P(a)}$, we get that $\alpha_{P(a),j_\lambda(Y)}(j_\lambda (g_i)) f_i= i_\mu j_\lambda (g_i)\mu_{ P(a)} f_i$ is of form
$$X\xrightarrow{f_i} P(a) \xrightarrow{\left(\begin{array}{c} k_{i1}\\ \vdots\\ k_{im} \end{array} \right) }P(a)^{\oplus m},$$
where $k_{ij}\in K$ for any $1\leq j\leq m$.
On the other hand, $\alpha_{X, j_\lambda(Y)}(\sum_{i=1}^n j_\lambda(g_i)i_\lambda(f_i))=\sum_{i=1}^n \alpha_{P(a),j_\lambda(Y)}(j_\lambda(g_i)) f_i$, which is equal to
$$ \sum_{i=1}^n\left(\begin{array}{c} k_{i1}\\ \vdots\\ k_{im} \end{array} \right) f_i=0.$$
Since $f_1,\dots,f_n$ are linearly independent, we get that $k_{ij}=0$ for all $1\leq i\leq n$, $1\leq j\leq m$. Therefore, $\alpha_{P(a),j_\lambda(Y)}(j_\lambda(g_i))=0$ for any $1\leq i\leq n$, and then $g_i=0$ for any $1\leq i\leq n$.
\end{proof}
Similarly, we get the following lemma.
\begin{lemma}\label{lemma characterization of ideal 2}
Let $\Lambda=KQ_\Lambda/I_\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$.
Let $X\in \Gp A$ and $Y\in \Gp B$ be indecomposable Gorenstein projective modules. For any morphisms $f_i:P(a)\rightarrow X$ and $g_i:Y\rightarrow Q(b)$ where $1\leq i\leq n$, if $g_1,\dots,g_n$ are linearly independent and $\sum_{i=1}^n i_\lambda(f_i)j_\lambda(g_i)=0$, then $f_i=0$ for any $1\leq i\leq n$.
\end{lemma}
\begin{remark}\label{remark form of combination of morphism}
Let $X\in \Gp A$ and $Y\in \Gp B$ be indecomposable Gorenstein projective modules. Recall that $i_\mu i_\lambda(X)= X\oplus P(a)^{\oplus s}$ and $i_\mu j_\lambda(Y)=P(a)^{\oplus t}$ for some integers $s,t$. For any morphisms $f:P(a)^{\oplus n}\rightarrow X$ and $g: Y \rightarrow Q(b)^{\oplus n}$ with $\Im g\subseteq \rad Q(b)^{\oplus n}$, we have that
$i_\mu(i_\lambda(f)j_\lambda(g)):P(a)^{\oplus t}\rightarrow X\oplus P(a)^{\oplus s}$ is of form $\left(\begin{array}{c}0\\h\end{array}\right)$, where $h$ is an $s\times t$ matrix with entries in $K$.
\end{remark}
\begin{proof}
First, $i_\mu (U(v)^{\oplus n})= P(a)^{\oplus n}\oplus P(a)^{\oplus r}$ for some integer $r$.
From (\ref{equation form of morphism 1}) and (\ref{equation form of morphism 2}), we get that
$i_\mu j_\lambda(g):i_\mu j_\lambda Y=P(a)^{\oplus t}\rightarrow i_\mu (U(v)^{\oplus n})= P(a)^{\oplus n}\oplus P(a)^{\oplus r}$ is of form
$\left(\begin{array}{c}0\\h_2\end{array}\right)$ where $h_2$ is an $r\times t$ matrix with entries in $K$, since $\Im g\subseteq \rad Q(b)^{\oplus n}$, and
$i_\mu i_\lambda (f): i_\mu (U(v)^{\oplus n})= P(a)^{\oplus n}\oplus P(a)^{\oplus r}\rightarrow X\oplus P(a)^{\oplus s}$ is of form
$\left(\begin{array}{cc}f&\\&h_1\end{array}\right)$, where $h_1$ is an $s\times r$ matrix with entries in $K$.
Then $i_\mu(i_\lambda(f)j_\lambda(g))=\left(\begin{array}{c}0\\h_1h_2\end{array}\right)$, where $h_1h_2$ is an $s\times t$ matrix with entries in $K$.
\end{proof}
Now, we get the final main result in this paper.
\begin{theorem}\label{theorem Cohen-Macaulay Auslander algebras}
Let $\Lambda$ be a simple gluing algebra of the two finite-dimensional bound quiver algebras $A=KQ_A/I_A$ and $B=KQ_B/I_B$ by identifying $a\in Q_A$ and $b\in Q_B$. Then $\Lambda$ is CM-finite if and only if $A$ and $B$ are CM-finite. In this case, the Cohen-Macaulay Auslander algebra $\Aus(\Gp(\Lambda))$ is a simple gluing algebra of $\Aus(\Gp(A))$ and $\Aus(\Gp(B))$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$.
\end{theorem}
\begin{proof}
From Proposition \ref{proposition simple gluing of AR-quivers},
the Auslander-Reiten quiver $Q^{\Aus}_\Lambda$ of $\Lambda$ is a simple gluing quiver of the Auslander-Reiten quiver $Q^{\Aus}_A$ of $A$ and the Auslander-Reiten quiver $Q^{\Aus}_B$ of $B$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$.
On the other hand, from the above, we also get that all the irreducible morphisms in $\Gp\Lambda$ are induced explicitly by the ones in $\Gp A$ and $\Gp B$. So for any irreducible morphism in $\Gp\Lambda$, it is either of form $i_\lambda(f)$ for some irreducible morphism $f$ in $\Gp A$ or of form $j_\lambda(g)$ for some irreducible morphism $g$ in $\Gp B$.
By viewing $Q^{\Aus}_A$ and $Q^{\Aus}_B$ as subquivers of $Q^{\Aus}_\Lambda$, we identify $f$ with $i_\lambda(f)$ and $g$ with $j_\lambda(g)$ for any irreducible morphisms $f$ in $\Gp A$ and $g$ in $\Gp B$. Let $\Aus(\Gproj A)$, $\Aus(\Gproj B)$ and $\Aus(\Gp \Lambda)$ be the opposite algebras of $KQ^{\Aus}_A/I^{\Aus}_A$, $KQ^{\Aus}_B/I^{\Aus}_B$ and $KQ^{\Aus}_\Lambda/I^{\Aus}_\Lambda$ respectively.
In this way, we claim that $I^{\Aus}_\Lambda= \langle I^{\Aus}_A, I^{\Aus}_B\rangle$.
First, it is easy to see that $I^{\Aus}_A \subseteq I^{\Aus}_\Lambda$, and $I^{\Aus}_B\subseteq I^{\Aus}_\Lambda$, which implies that
$\langle I^{\Aus}_A, I^{\Aus}_B\rangle\subseteq I^{\Aus}_\Lambda$.
Second, let $w\in KQ^{\Aus}_\Lambda$ be any element of $I^{\Aus}_\Lambda$; without loss of generality, we assume that $w$ starts from an indecomposable Gorenstein projective $\Lambda$-module $Z_0$ and ends at an indecomposable Gorenstein projective $\Lambda$-module $Z_1$.
It is easy to see that $w$ is a linear combination of compositions of irreducible morphisms in $\Gproj \Lambda$, by viewing arrows as irreducible morphisms. Recall that the Auslander-Reiten quiver $Q^{\Aus}_\Lambda$ is a simple gluing quiver of the Auslander-Reiten quivers $Q^{\Aus}_A$ and $Q^{\Aus}_B$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$.
The proof can be broken into the following four cases.
Case (a) $Z_0=i_\lambda(X)$, $Z_1=j_\lambda(Y)$ for some $X\in\Ind \Gp A$, $Y\in\Ind\Gp B$, and $w$ is of form
\begin{eqnarray*}
&&i_\lambda(X)\xrightarrow{i_\lambda(f_0)} U(v)^{\oplus n_0} \xrightarrow{ j_\lambda(g_0)} U(v)^{\oplus m_0} \xrightarrow{i_\lambda(f_1)} U(v)^{\oplus n_1}
\xrightarrow{j_\lambda(g_1)} U(v)^{\oplus m_1}\\
&& \xrightarrow{i_\lambda(f_2)}\cdots \xrightarrow{j_\lambda(g_{t-1})} U(v)^{\oplus m_{t-1}} \xrightarrow{ i_\lambda(f_{t})} U(v)^{\oplus n_{t}} \xrightarrow{j_\lambda(g_t)}j_\lambda(X_t)=j_\lambda(Y),
\end{eqnarray*}
with $\Im (f_i)\subseteq \rad P(a)^{\oplus n_i}$ for any $0\leq i\leq t$ and $\Im (g_j)\subseteq \rad Q(b)^{\oplus m_j}$ for any $0\leq j\leq t-1$.
We prove that $w\in \langle I_A^{\Aus},I_B^{\Aus}\rangle$ by induction on $t$.
If $t=0$, then it follows from Lemma \ref{lemma characterization of ideal 1}.
For $t>0$, let $f_0$ be of form
$$ X\xrightarrow{ \left(\begin{array}{c} f_{01}\\ \vdots\\ f_{0n_0} \end{array} \right) } P(a)^{\oplus n_0}.$$
If $f_{01}=0,\dots,f_{0n_0}=0$, then $f_0=0$, which implies that $w\in\langle I_A^{\Aus}\rangle \subseteq \langle I_A^{\Aus},I_B^{\Aus}\rangle $. Otherwise, without loss of generality, we assume that $f_{01},f_{02},\dots,f_{0n_0}$ are linearly independent.
Then $w$ is of form
$$i_\lambda(X)\xrightarrow{ \left(\begin{array}{c} i_\lambda(f_{01})\\ \vdots\\ i_\lambda(f_{0n_0}) \end{array} \right) } U(v)^{\oplus n_0}\xrightarrow{(l_1,l_2,\dots,l_{n_0}) } j_\lambda(Y),$$
where $(l_1,l_2,\dots,l_{n_0})=j_\lambda(g_{t})\cdots i_\lambda(f_1)j_\lambda(g_{0})$.
Furthermore, $$\alpha_{X, j_\lambda(Y)}( \sum_{i=1}^{n_0} l_ii_\lambda(f_{0i})) =\sum_{i=1}^{n_0}i_\mu (l_i) \mu_{P(a)^{\oplus n_0}}f_{0i}.$$
Since $\Im (g_j)\subseteq \rad Q(b)^{\oplus m_j}$ for any $0\leq j\leq t-1$, from Remark \ref{remark form of combination of morphism} and (\ref{equation form of morphism 2}),
we get that $i_\mu(l_i):P(a)^{\oplus s} \rightarrow P(a)^{\oplus m}$ (for suitable integers $s$ and $m$) is represented by a matrix with entries in $K$, which implies that this property also holds for $i_\mu(l_i) \mu_{P(a)^{\oplus n_0}}$ by the form of the section map $\mu_{P(a)^{\oplus n_0}}$.
Similar to the proof of Lemma \ref{lemma characterization of ideal 1}, we get that $i_\mu(l_i) \mu_{P(a)^{\oplus n_0}}=0$ for each $1\leq i\leq n_0$, and then $j_\lambda(g_{t})\cdots i_\lambda(f_1)j_\lambda(g_{0})=0$.
In order to get that $j_\lambda(g_{t})\cdots i_\lambda(f_1)j_\lambda(g_{0})\in\langle I_A^{\Aus}, I_B^{\Aus}\rangle$, without loss of generality, we assume $n_0=1$.
Then $g_0$ is of form
$$Q(b)\xrightarrow{\left(\begin{array}{c} g_{01}\\ \vdots\\ g_{0m_0} \end{array} \right) } Q(b)^{\oplus m_0}.$$
Similar to the above, it is enough to prove it for the case when $g_{01},\dots, g_{0m_0}$ are linearly independent.
Let $(p_1,\dots,p_{m_0}): U(v)^{\oplus m_0}\rightarrow j_\lambda(Y)$ be the morphism $j_\lambda(g_{t})\cdots i_\lambda(f_1)$.
Then
\begin{eqnarray*}
&&\beta_{Q(b),j_\lambda(Y)}(j_\lambda(g_{t})\cdots i_\lambda(f_1)j_\lambda(g_{0}) )\\
&=&j_\mu (j_\lambda(g_{t})\cdots i_\lambda(f_1))\nu_{Q(b)^{\oplus m_0}}g_0=0.
\end{eqnarray*}
Similar to the proof of Remark \ref{remark form of combination of morphism}, since $j_\mu j_\lambda(Y)= Y\oplus Q(b)^{\oplus q}$ for some integer $q$, we get that $j_\mu (j_\lambda(g_{t})\cdots i_\lambda(f_1))\nu_{Q(b)^{\oplus m_0}}: Q(b)^{\oplus m_0}\rightarrow Y\oplus Q(b)^{\oplus q}$ is of form
$\left(\begin{array}{c}0\\ h' \end{array} \right)$ where $h'$ is a $q\times m_0$ matrix with entries in $K$.
Since $g_{01},\dots, g_{0m_0}$ are linearly independent,
$$\beta_{Q(b),j_\lambda(Y)}(j_\lambda(g_{t})\cdots i_\lambda(f_1))=j_\mu (j_\lambda(g_{t})\cdots i_\lambda(f_1))\nu_{Q(b)^{\oplus m_0}}=0$$
and then
$j_\lambda(g_{t})\cdots i_\lambda(f_1)=0$.
By the induction hypothesis, we have
$$j_\lambda(g_{t})i_\lambda(f_{t})\cdots j_\lambda(g_{1}) i_\lambda(f_1)\in \langle I^{\Aus}_A,I^{\Aus}_B\rangle,$$
and then
$w\in \langle I^{\Aus}_A,I^{\Aus}_B\rangle$.
Case (b) $Z_0=j_\lambda(Y)$ and $Z_1=i_\lambda(X)$ for some $X\in\Ind \Gp A$ and $Y\in\Ind\Gp B$, and $w$ is of form
\begin{eqnarray*}
&&j_\lambda(Y)\xrightarrow{j_\lambda(f_0)} U(v)^{\oplus n_0} \xrightarrow{ i_\lambda(g_0)} U(v)^{\oplus m_0} \xrightarrow{j_\lambda(f_1)} U(v)^{\oplus n_1}
\xrightarrow{i_\lambda(g_1)} U(v)^{\oplus m_1}\\
&& \xrightarrow{j_\lambda(f_2)}\cdots \xrightarrow{i_\lambda(g_{t-1})} U(v)^{\oplus m_{t-1}} \xrightarrow{ j_\lambda(f_{t})} U(v)^{\oplus n_{t}} \xrightarrow{i_\lambda(g_t)}i_\lambda(Y_t)=i_\lambda(X),
\end{eqnarray*}
with $\Im (f_i)\subseteq \rad Q(b)^{\oplus n_i}$ for any $0\leq i\leq t$ and $\Im (g_j)\subseteq \rad P(a)^{\oplus m_j}$ for any $0\leq j\leq t-1$.
It is similar to Case (a).
Case (c) $Z_0=i_\lambda(X)$, $Z_1=i_\lambda(Y)$ for some $X,Y\in\Ind \Gp A$, and $w$ is of form
\begin{eqnarray*}
&&i_\lambda(X)\xrightarrow{i_\lambda(f_0)} U(v)^{\oplus n_0} \xrightarrow{ j_\lambda(g_0)} U(v)^{\oplus m_0} \xrightarrow{i_\lambda(f_1)} U(v)^{\oplus n_1}
\xrightarrow{j_\lambda(g_1)} U(v)^{\oplus m_1}\\
&& \xrightarrow{i_\lambda(f_2)}\cdots \xrightarrow{j_\lambda(g_{t-1})} U(v)^{\oplus m_{t-1}} \xrightarrow{ i_\lambda(f_{t})} i_\lambda(X_t)=i_\lambda(Y),
\end{eqnarray*}
with $\Im (f_i)\subseteq \rad P(a)^{\oplus n_i}$ for any $0\leq i\leq t-1$ and $\Im (g_j)\subseteq \rad Q(b)^{\oplus m_j}$ for any $0\leq j\leq t-1$. If $t=0$, then $i_\lambda(f_0)=0$ and then $f_0=0$, which implies that
$w\in I_A^{\Aus}$. For $t>0$, we can prove it similar to Case (a).
Case (d) $Z_0=j_\lambda(X)$, $Z_1=j_\lambda(Y)$ for some $X,Y\in\Ind \Gp B$, and $w$ is of form
\begin{eqnarray*}
&&j_\lambda(X)\xrightarrow{j_\lambda(f_0)} U(v)^{\oplus n_0} \xrightarrow{ i_\lambda(g_0)} U(v)^{\oplus m_0} \xrightarrow{j_\lambda(f_1)} U(v)^{\oplus n_1}
\xrightarrow{i_\lambda(g_1)} U(v)^{\oplus m_1}\\
&& \xrightarrow{j_\lambda(f_2)}\cdots \xrightarrow{i_\lambda(g_{t-1})} U(v)^{\oplus m_{t-1}} \xrightarrow{ j_\lambda(f_{t})} j_\lambda(X_t)=j_\lambda(Y),
\end{eqnarray*}
with $\Im (f_i)\subseteq \rad Q(b)^{\oplus n_i}$ for any $0\leq i\leq t-1$ and $\Im (g_j)\subseteq \rad P(a)^{\oplus m_j}$ for any $0\leq j\leq t-1$.
It is similar to Case (c).
To sum up, $KQ^{\Aus}_\Lambda/I^{\Aus}_\Lambda$ is a simple gluing algebra of $KQ^{\Aus}_A/I^{\Aus}_A$ and $KQ^{\Aus}_B/I^{\Aus}_B$ by identifying the vertices corresponding to the indecomposable projective modules $P(a)$ and $Q(b)$, and then
$\Aus(\Gp(\Lambda))$ is a simple gluing algebra of $\Aus(\Gp(A))$ and $\Aus(\Gp(B))$ by identifying the vertices corresponding to the indecomposable projective modules $P(a),Q(b)$.
\end{proof}
\begin{example}\label{example Auslander algebra}
Let $\Lambda=KQ/I$ be the algebra where $Q$ is the quiver
as the left quiver in Figure 4 shows,
and $I=\langle \alpha_{i+1}\alpha_i, \varepsilon^2 | i\in\Z/3\Z\rangle$. Let $A$ be the quotient algebra $\Lambda/\langle\varepsilon\rangle$, and $B=e_3\Lambda e_3$.
Then $\Lambda$ is a simple gluing algebra of $A$, $B$. Let $\Aus(\Gp(\Lambda))$ be the algebra corresponding to the bound quiver $(Q^{\Aus},I^{\Aus})$. Then $Q^{\Aus}$
is as the right quiver in Figure 4 shows, and $I^{\Aus}=\langle \alpha_{i+1}^+\alpha_i^-, \varepsilon^+\varepsilon^- | i\in\Z/3\Z \rangle $. It is easy to see that
$\Aus(\Gp(\Lambda))$ is the simple gluing algebra of $\Aus(\Gp(A))$ and $\Aus(\Gp(B))$ by identifying the vertices corresponding to the indecomposable projective modules $P(3)$ and $Q(3)$.
\setlength{\unitlength}{0.7mm}
\begin{center}
\begin{picture}(80,40)
\put(0,20){\begin{picture}(50,10)
\put(0,-2){$3$}
\put(2,-2){\vector(1,-1){10}}
\put(12,-14){\vector(-1,0){22}}
\put(-10,-12){\vector(1,1){10}}
\put(13,-15){$2$}
\put(-13,-15){$1$}
\put(7,-6){\tiny$\alpha_2$}
\put(-10,-6){\tiny$\alpha_1$}
\put(-2,-13){\tiny$\alpha_3$}
\qbezier(-1,1)(-3,3)(-2,5.5)
\qbezier(-2,5.5)(1,9)(4,5.5)
\qbezier(4,5.5)(5,3)(3,1)
\put(3.1,1.4){\vector(-1,-1){0.3}}
\put(1,10){$\varepsilon$}
\end{picture}}
\setlength{\unitlength}{0.8mm}
\put(40,-5){\begin{picture}(100,40)
\put(19,11){\vector(-1,1){8}}
\put(34,10){\vector(-1,0){13}}
\put(44,19){\vector(-1,-1){8}}
\put(36,29){\vector(1,-1){8}}
\put(21,30){\vector(1,0){13}}
\put(11,21){\vector(1,1){8}}
\put(8,19){\small$6$}
\put(15,16){\tiny$\alpha_3^+$}
\put(19,8){\small$2$}
\put(34,8){\small$5$}
\put(25,11.5){\tiny$\alpha_2^-$}
\put(35,15){\tiny$\alpha_2^+$}
\put(45,18){\small$3$}
\put(18.9,29){\small$1$}
\put(34,29){\small$4$}
\put(35,22.5){\tiny$\alpha_1^-$}
\put(25,26.5){\tiny$\alpha_1^+$}
\put(15,23){\tiny$\alpha_3^-$}
\put(57,18){\vector(-1,0){9}}
\put(48,21){\vector(1,0){9}}
\put(58,18){\small $7$}
\put(52,22){\tiny$\varepsilon^+$}
\put(51,15){\tiny$\varepsilon^-$}
\end{picture}}
\put(-35,-5){Figure 4. The quiver of $\Lambda$ and its Cohen-Macaulay Auslander algebra.}
\end{picture}
\vspace{0.5cm}
\end{center}
\end{example}
\begin{example}
Following Example \ref{example not gentle algebra}, let $\Lambda=KQ/I$ be the algebra where $Q$ is the quiver $\xymatrix{1 \ar@/^/[r]^{\alpha_1} & 2 \ar@/^/[l]^{\beta_1} \ar@/^/[r]^{\alpha_2} & 3\ar@/^/[l]^{\beta_2} &4,\ar[l]_\gamma }$
and $I=\langle \beta_i\alpha_i,\alpha_i\beta_i, \beta_2\gamma |i=1,2 \rangle$. Let $\Aus(\Gp(\Lambda))$ be the algebra corresponding to the bound quiver $(Q^{\Aus},I^{\Aus})$. Then $Q^{\Aus}$
is as Figure 5 shows, and $I^{\Aus}=\langle \alpha_{1}^+\beta_1^-,\beta_1^-\alpha_1^+,\alpha_2\beta_2,\beta_2\alpha_2, \beta_2\gamma\rangle $. Let $A$ and $B$ be as in Example \ref{example not gentle algebra}. It is easy to see that
$\Aus(\Gp(\Lambda))$ is the simple gluing algebra of $\Aus(\Gp(A))$ and $\Aus(\Gp(B))$ by identifying the vertices corresponding to the indecomposable projective modules $P(2)$ and $Q(2)$.
\setlength{\unitlength}{0.8mm}
\begin{center}
\begin{picture}(80,40)
\put(10,19.5){\small$1$}
\put(12,22){\vector(1,1){8}}
\put(20.5,31){\small$5$}
\put(23,29.5){\vector(1,-1){8}}
\put(32,19.5){\small$2$}
\put(31,19){\vector(-1,-1){8}}
\put(20.5,9){\small $6$}
\put(19.5,11){\vector(-1,1){8}}
\qbezier(35,22)(42.5,26)(50,22)
\put(50,22){\vector(3,-1){0.2}}
\qbezier(35,20)(42.5,16)(50,20)
\put(35,20){\vector(-3,1){0.2}}
\put(50.5,19.5){\small $3$}
\put(70,19.5){\small $4$}
\put(69,21){\vector(-1,0){16}}
\put(11.5,12){\tiny$\beta_1^-$}
\put(11.5,26){\tiny$\alpha_1^+$}
\put(26.5,12){\tiny$\beta_1^+$}
\put(26.5,26){\tiny$\alpha_1^-$}
\put(40,26){\tiny$\alpha_2$}
\put(40,14){\tiny$\beta_2$}
\put(60,22){\tiny$\gamma$}
\put(-5,-5){Figure 5. Cohen-Macaulay Auslander algebra of $\Lambda$.}
\end{picture}
\vspace{0.5cm}
\end{center}
\end{example}
The Piedmont Project: Peaceful, Sophisticated Bedrooms & Bathrooms
Are you following along with our Piedmont Project Reveal? If not, you’re missing out. Today, we’re continuing the tour and showcasing the completed primary bedroom and all the bathrooms. You may want to sit down for this.
The Piedmont Vibe
If you’re just joining the fun, here’s the recap… After Canyon Design Build completed the kitchens, bathrooms, and laundry room, we helped this Bay Area family of four furnish their spaces via our Online Design Service.
First, we worked with them to discover their unique style, unearthing their love of timeless comfort and eclectic, eye-catching art deco patterns. Then, we put together mood boards and determined the color palette for each room.
Best choice for their personal style combo? Soothing blues with pops of beige and warm, earthy orange. Blue acts as a neutral color and brings a serene, classic feeling to their home, while pops of orange add energy and interest.
From the bedrooms, main living areas, bathrooms, and office, we guided this family to furnishing their recently remodeled house into an inviting and functional, family-friendly home. Ready to take a look for yourself?
Peaceful Primary Bedroom with a Grand Fireplace
Craving a warm blanket and a good book just looking at this space? Me, too. This primary bedroom embodies a classic San Francisco aesthetic with elegant millwork details and a crisp white stone fireplace. We decided to complement the scene with lush bedding, tasteful decor, and these custom ivory drapes that frame the French doors and lead to the patio area. Perfect for enjoying early morning sun with coffee in hand.
Inspired by shades of French blue and slate, the primary bedroom is a calming escape from city life. Natural wood and gold accents warm things up while the organic pattern in this area rug adds interest.
Clean and Classic Primary Bathroom
Step out of the moody bedroom and you’re immersed in this light, bright, and soothing primary bathroom…
Palette of blue and slate? Yep. Same look as the bedroom? Nope. Although Canyon Design Build created this space, it perfectly reflects the color palette we repeated in the bedroom, but here it feels like a fresh, open retreat. Soft blues, pure whites, and brass finishes combine with the understated geometric floor (did you notice it?) and extra-long subway tiles for an eye-catching finish.
The walk-in shower features double showerheads with separate temperature controls (small luxuries) and a practical built-in inset. We used the warm brass tones as a jumping off point for other warm neutrals in the furnishings, like…
Stand-Out Jewel Box Powder Room
Step out of the bedroom and into the hallway, and you’ll find this jewel box of a powder room…
Fun wallpapers were a must-have for this family, and the powder room was the perfect place to make a bold design choice. Striking wallpaper envelops their guests with an eclectic pattern and echoes the colors used throughout the rest of the home. (See the earthy gold color that mimics the brass in the bathroom?) It’s like a little design treat waiting to be discovered.
Sophisticated Overnight Guest Bath
The overnight guest bath (located on the lower level) is sophisticated and crisp. Black finishes match the hexagon floor tiles and contrast with the stark white walls. It’s a bit lighter and brighter than the other bathrooms, which helps it feel spacious despite minimal light exposure.
There’s more where that came from…
This harmonious home reveal isn’t over. We have just a few more spaces to share. While you wait, which room caught your eye? Or are you imagining a totally different fusion of design styles that speak to your taste? Take our style quiz to discover your personal design aesthetic. It may just surprise you…
Cheers,
Melanie
Do you have to fill out the same forms repeatedly, both on and offline? Would it be nice to have auto-fill forms with data from your claims management system? Does...[read more]
Software vendor Quick Internet Software Solutions brings Internet-based software solutions to the insurance adjuster industry. The QISS claim management software system is the most complete, Web-based insurance claim handling software service for insurance adjusting firms, third party administrators, self-insured employers and state agencies.
Any adjusting business that requires claims processing for workers' compensation or other lines of insurance, that wants electronic data interchange (EDI) integrated directly into the system to eliminate tedious forms, paperwork and hassle, and that needs the backing of its software vendor, needs the QISS Claims Management System.
Save money: QISS software is licensed on a monthly basis. Our service provides the software, hardware, and ongoing technical support. Other claims management software charges upfront fees greater than $20,000 before licenses.
Administrators: CMS handles your claims from date received to final invoice. Manage your clients, adjusters and loss reports, cut checks, submit first reports... The Claims Management System supports the entire workflow in your adjuster offices across the nation.
Troy, Alabama Golf Courses
Click on any of the golf course listings below to explore golf courses in Troy, Alabama. Find course information, driving directions, scorecards and Troy, Alabama golf instructors.
PGA of America Championships
Whistling Straits
Kohler, Wis.
Hazeltine National Golf Club
Chaska, Minn.
- Close Vivaldi.
- I felt like being bold so I got the latest* ffmpeg build from here:- ... t/releases
- Unzipped it so the contained libffmpeg.so file ended up at the complete path ~/Downloads/0.25.2-linux-x64/libffmpeg.so.
- In Terminal:
Code: Select all
cd /usr/share/vivaldi-stable/lib
sudo mv libffmpeg.so libffmpeg.so.orig
sudo cp ~/Downloads/0.25.2-linux-x64/libffmpeg.so .
Now start Vivaldi again. To verify, try e.g. a "gifv" at imgur that is actually using an underlying mp4 stream like this one. If everything's OK, you should now see a behind-the-scenes look at a Game of Thrones swordfight!
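The steps above can be wrapped in a small shell sketch; the Vivaldi lib directory and the download path are just the ones assumed in this post, so adjust both to your install, and run the actual swap as root:

```shell
# Sketch: swap Vivaldi's bundled libffmpeg.so for a full-featured build,
# keeping a backup of the original so the change can be undone.
# Both default paths are assumptions from this post; verify them on your system.
swap_ffmpeg() {
    libdir="$1"    # e.g. /usr/share/vivaldi-stable/lib
    newlib="$2"    # e.g. ~/Downloads/0.25.2-linux-x64/libffmpeg.so
    [ -f "$newlib" ] || { echo "missing: $newlib" >&2; return 1; }
    [ -d "$libdir" ] || { echo "missing: $libdir" >&2; return 1; }
    # Back up the original only once, so repeated runs don't clobber it.
    [ -f "$libdir/libffmpeg.so.orig" ] || mv "$libdir/libffmpeg.so" "$libdir/libffmpeg.so.orig"
    cp "$newlib" "$libdir/libffmpeg.so"
}
```

Call it as root, e.g. `swap_ffmpeg /usr/share/vivaldi-stable/lib ~/Downloads/0.25.2-linux-x64/libffmpeg.so`, then restart Vivaldi.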
TITLE: example of quotient topology
QUESTION [1 upvotes]: If we work in $\mathbb{R}^2\setminus\{(0,0)\}$ with the euclidean topology and we set the following equivalence relation $P$ on this space: $(x,y)\mathbin{P}(x',y')$ iff there exists $a \in \mathbb{R}\setminus\{0\}$ such that $(x,y) = a(x',y')$, how do we define the open sets in the quotient space $\mathbb{R}^2\setminus\{(0,0)\}/P$?
Let $f: \mathbb{R}^2\setminus\{(0,0)\} \to \mathbb{R}^2\setminus\{(0,0)\}/P$ be the quotient map:
then the open sets in $\mathbb{R}^2\setminus\{(0,0)\}/P$ are defined like: $\{f(A) \mid A\text{ open and saturated}\}$, but I have trouble doing it for this example. Can someone help me?
REPLY [1 votes]: I’m going to assume that your $R$ is really $\Bbb R$, the set of real numbers.
HINT: Note that $\langle x,y\rangle\mathbin{P}\langle x',y'\rangle$ if and only if $\langle x,y\rangle$ and $\langle x',y'\rangle$ lie on the same straight line through the origin of $\Bbb R^2$. For $0\le\theta<\pi$ let $L_\theta'$ be the line through the origin containing the point $\langle\cos\theta,\sin\theta\rangle$, and let $L_\theta=L_\theta'\setminus\{\langle 0,0\rangle\}$. The sets $L_\theta$ for $\theta\in[0,\pi)$ are the $P$-equivalence classes and therefore also the fibres of the quotient map $f$ and the points of the quotient space. This means that you can identify the points of the quotient space with points of the interval $[0,\pi)$: each $L_\theta$ is a point of the quotient space, and it corresponds to the point $\theta$ in the set $[0,\pi)$.
To figure out the quotient topology $\tau$ on $[0,\pi)$, remember that a subset $U$ of $[0,\pi)$ is in $\tau$ if and only if $\bigcup_{\theta\in U}L_\theta$ is open in $\Bbb R^2\setminus\{\langle 0,0\rangle\}$. (It may help to imagine bending the interval $[0,\pi)$ around into a circle, to get the space obtained by identifying $0$ and $\pi$ in the closed interval $[0,\pi]$.)
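As a concrete instance of the hint (a worked example added for illustration): take $U = (\alpha,\beta)$ with $0 < \alpha < \beta < \pi$. Then
$$\bigcup_{\theta\in U}L_\theta = \{\,r\langle\cos\theta,\sin\theta\rangle : r\in\Bbb R\setminus\{0\},\ \theta\in(\alpha,\beta)\,\},$$
an open double sector (a bow-tie with the origin removed), which is open in $\Bbb R^2\setminus\{\langle 0,0\rangle\}$, so $U\in\tau$. Carrying out the identification of $0$ and $\pi$ mentioned above shows that the quotient space is homeomorphic to the circle $S^1$, for instance via $L_\theta\mapsto\langle\cos 2\theta,\sin 2\theta\rangle$.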
Exams
Exams are managed by Laurier's Examinations, Scheduling and Booking Office, which is responsible for:
- The scheduling of final examinations;
- Deferred examinations approved by faculty petition committees;
- Classroom scheduling and booking for academic courses and events; and
- Exam computer card scanning.
The examination schedule is always posted in the fifth week of each 12-week term, and in the third week of each six-week term.
Exam Centres
- Waterloo
- Brantford
75 University Ave. W.
Waterloo, ON
N2L 3C5
Directions to Laurier's Waterloo campus.
73 George St.
Brantford, ON
N3T 2Y3
Directions to Laurier's Brantford campus.
Physiotherapy Services
Physiotherapy has been proven to be effective in the treatment and management of physiological conditions such as:
- Chronic obstructive pulmonary disease, breathing disorders and other respiratory conditions, arthritis and diabetes
- Chronic heart disease and other cardiovascular disorders
- Neurological impairments such as stroke and multiple sclerosis, as well as developmental disorders
- Back pain, neck pain, scoliosis and other orthopedic conditions, as well as sports injuries
Our physiotherapists work closely with patients of all ages to help them manage conditions pre- or post-surgery, recover from illness or injury, and cope with age-related conditions. One advantage of receiving early in-home physiotherapy is that it hastens recovery and improves quality of life.
\begin{document}
\maketitle
\begin{abstract}
Cartesian differential categories were introduced to provide an abstract
axiomatization of categories of differentiable functions. The fundamental example
is the category whose objects are Euclidean spaces and whose arrows are smooth maps.
Tensor differential categories provide the framework for categorical models
of differential linear logic. The coKleisli category of any tensor differential category
is always a Cartesian differential category. Cartesian differential categories, besides arising
in this manner as coKleisli categories, occur in many different
and quite independent ways. Thus, it was not obvious how to pass from Cartesian
differential categories back to tensor differential categories.
This paper provides natural conditions under which the linear maps of a Cartesian differential
category form a tensor differential category. This is a question of some practical importance
as much of the machinery of modern differential geometry is based on models which
implicitly allow such a passage, and thus the results and tools of the area tend to freely
assume access to this structure.
The purpose of this paper is to make precise the connection between the
two types of differential categories. As a prelude to this, however, it
is convenient to have available a general theory which relates the behaviour of
``linear'' maps in Cartesian categories to the structure of Seely categories. The
latter were developed to provide the categorical semantics for (fragments of)
linear logic which use a ``storage'' modality. The general theory of storage, which
underlies the results mentioned above, is developed in the opening
sections of the paper and is then applied to the case of differential categories.
\end{abstract}
\section{Introduction}
A fundamental observation of Girard, \cite{Girard}, which led to the
development of linear logic, was that the hom-functor of stable domains
could be decomposed as $A \Rightarrow B \colon=~ !A \lollipop B$ where
the latter, $X \lollipop Y$, is the hom-functor of coherence spaces.
This suggested that one might similarly be able to decompose
differentiable maps between two spaces $A$ and $B$ as the ``linear''
maps from a constructed space $S(A)$ to $B$. Ehrhard and Regnier's work
\cite{ER04} on the differential $\lambda$-calculus was inspired by this
analogy and in \cite{E04} Ehrhard provided concrete examples which
realized this analogy. This raised the question of how general a
correspondence this was.
In \cite{diffl} the notion of a (tensor) differential category was
introduced as a minimal categorical doctrine in which differentiability
could be studied. This generalized Ehrhard and Regnier's ideas in various ways.
It dispensed with the necessity that the setting be $*$-autonomous: the
decomposition of the hom-functor was then handled through the presence
of a comonad---or modality---satisfying a minimal set of coherences. Finally
the differential was introduced satisfying a minimal set of identities. In this paper
we shall, in fact, consider a slight strengthening of this basic notion by requiring,
first, that the modality be a storage modality ({\em i.e.} the Seely isomorphisms
$1 \equiv S(0)$ and $S(A) \ox S(B) \equiv S(A \x B)$ are present) and, second,
that the differential satisfies an interchange law.
This maintained the perspective of linear logic, by providing a
decomposition of differentiable functions through a comonad---or
storage modality---on more basic ``linear'' functions. However, a
rather unsatisfactory aspect of this perspective remained. Smooth
functions ({\em i.e.} infinitely differentiable functions)---after all the
main subject of calculus---appeared only indirectly
as the maps of the coKleisli category. This left a veil between these settings and the
direct understanding of these differentiable functions in the classical sense. It seemed,
therefore, important to develop a more direct view of what a category of
differentiable functions should look like. This caused us to examine more
closely the coKleisli categories of differential categories and to seek
a direct axiomatization for them.
To advance this aim we deployed (the dual of) Fuhrmann's notion of an
abstract Kleisli category \cite{fuhrmann}. In the course of developing these ideas we
stumbled on a much deeper sense in which the situation in linear logic
can be read as mathematics. Thus a key result of this paper is a very
general structural theorem about how ``models of linear logic'' arise.
Rather than retrospectively remove our path to this result, we have
centred the paper around the results on differential categories so that
the motivation which brought us to this particular passage is not lost.
The ``models of linear logic'' we consider here are not, in fact, models
of full linear logic, as the underlying categories are monoidal and not,
in general, monoidal closed, let alone $*$-autonomous. These models were
studied in depth by Gavin Bierman \cite{Bierman}: he called them ``Seely
categories''. Given that these categories have now been the object of
quite a number of studies, it is perhaps surprising that there is more to say about
them. Our main rather general structural theorem points out that these
categories arise in a rather natural mathematical way from the ``linear
maps'' of Cartesian storage categories. As Cartesian storage categories
have a rather natural mathematical genus, this helps to explain why
``Seely categories'' are such fundamental structures.
Our examination of the coKleisli categories of differential categories
had led us to introduce the notion of a Cartesian differential category,
\cite{CartDiff}. Cartesian differential categories are categories of
``raw'' differentiable maps with all the indirection---alluded to
above---removed. Pleasingly these categories have a wide range of
models. In particular, in \cite{CartDiff}, we showed that the
coKleisli category of a (tensor) differential category---satisfying an
additional interchange axiom---{\em is\/} a Cartesian differential
category.
However, this had left open the issue of providing a converse for this
result. In general, it is certainly {\em not\/} the case that a
Cartesian differential category is the coKleisli category of a (tensor)
differential category---a basic counterexample is the category of finite dimensional
real vector spaces with smooth maps. Thus, it is natural to ask what extra
conditions are required on a Cartesian differential category to make it
a coKleisli category of a (tensor) differential category. This
paper provides a rather natural (and appealing) answer to this
question. However, lest the reader think we have completely
resolved this question, we hasten to admit that it still does not answer the question in
complete generality. Furthermore, it leaves some other rather natural
questions open: for example it is natural to wonder whether {\em every\/}
Cartesian differential category arises as a full subcategory of the
coKleisli category of a (tensor) differential category.
The motivation for developing Cartesian storage categories was that they
provided an intermediate description of the coKleisli category of a
(tensor) differential category. These coKleisli categories were already
proven to be Cartesian differential categories and, as such, had an
obvious subcategory of linear maps---namely those which were linear in
the natural differential sense. A Cartesian storage category uses the abstract
properties of ``linear maps'' as the basis for its definition and is always the coKleisli category
of its linear maps. In order for such a storage category to be the coKleisli category of
a ``Seely category'', a further ingredient was required: one must be able to represent
bilinear maps with a tensor product. The theory of these categories---which we have taken
to thinking of as the theory of storage---provides the theoretical core of the paper
and is described in Sections \ref{storage-categories} and \ref{tensor-storage-cats}.
In Section \ref{diff-storage} we apply this theory to Cartesian differential categories.
Cartesian differential categories exhibit a notable further coincidence of
structure. When one can classify the linear maps of a Cartesian
differential category one also automatically has a transformation called a ``codereliction''.
Having a codereliction in any Cartesian storage category
implies---assuming linear idempotents split---that one also has, for free,
tensorial representation. Thus, to apply the theory of storage categories to
Cartesian differential categories it suffices to know just that the linear maps are classified.
Finally, but crucially for the coherence of the story, the linear maps then, with no further
structural requirement, form a tensor differential category. This provides the main
result of the paper Theorem \ref{lin-of-cdsc}. Completing the circle then,
at the end of the paper, we are in a position to point out that a closed Cartesian
differential storage category is a model of Ehrhard and Regnier's
differential $\lambda$-calculus \cite{ER04}.
Considering that we have been discussing two types of differential
category, it seemed a good idea to qualify each with an appropriate
adjective, so as to distinguish between them in a balanced manner. So
we shall (in this paper) refer to differential categories as ``tensor
differential categories'', to contrast them with ``Cartesian
differential categories''. In parallel with this, we introduce other
terminological pairings which are natural from the perspective of this
paper, but are also structures which have received attention under
different names: For example, ``tensor storage categories'' are
essentially ``Seely categories'' and ``tensor differential storage
categories'' are essentially what previously we called differential
storage categories. Our purpose is not to precipitate any renaming of
familiar structures but rather to emphasize the structural relationships
which emerge from the story we tell here.
\section{Cartesian Storage categories}
\label{storage-categories}
In the discussion above two situations were described in which we selected from
a Cartesian category certain maps, namely the coherent maps in stable domains
and the linear maps among differentiable maps, and then used a classification of these maps
to extract a Seely category. The aim of the next two sections is to show
why this works as a general procedure and, moreover, can
produce a tensor storage (or ``Seely'') category. In fact, our aim is to prove
much more: namely that {\em all\/} ``Seely categories'' (with an exact modality)
arise in this manner.
We start by defining what is meant by a system \L\ of \linear maps. One way
to view a \linear system is as a subfibration of the simple fibration. In
a category equipped with a system of \linear maps, we shall often
suppress explicit mention of $\cal L$, referring to maps belonging to
this system as simply being {\em linear\/}. Such systems arise very
naturally in Cartesian closed categories from exponentiable systems of
maps and we make a detour to describe these connections.
Our initial aim is to characterize the categories in which there is
a decomposition of maps analogous to that in linear logic. This
eventually leads us to ``Seely categories'' as those in which all the
``first-order'' structural components of the logic are present.
Leaving definitions for later, the path will be the following:
a ``Cartesian storage category'' is a category equipped with a system of
\linear maps which are ``strongly and persistently classified''. We then
prove that this is equivalent to demanding that the category is a ``strong
abstract coKleisli category''. This, in turn, is equivalent
to being the coKleisli category of a category with a ``forceful''
comonad. Finally, to this story we add, in the next section, ``tensorial representation'', which
when present, ensures the linear maps form a tensor storage
category---this is essentially a ``Seely category''.
\subsection{Systems of \linear\ maps}\label{linear-maps}
By a {\bf system of maps} of a category is meant a subcategory on the same objects: a
{\bf Cartesian} system of maps of a Cartesian category is just such a
subcategory closed under the product structure. This means that
projections, diagonal maps, and final map must be in the system and,
furthermore, the pairing of any two maps in the system must be in the
system. For a system \L\ of maps to be \linear it must determine a
subfibration of the simple fibration.
The simple slice $\X[A]$ at $A$---that is the fiber over $A$ of the
simple fibration---of a Cartesian category $\X$, has the same objects
as $\X$ but the homsets are modified $\X[A](X,Y) = \X(A \x X,Y)$. The
composition in $\X[A]$ of two maps $g: A \x X \to Y$ and $h: A \x Y \to
Z$ is $\< \pi_0,g \> h$ with identities given by projection $\pi_1: A \x
X \to X$. The substitution functor between simple slices has the
following effect on the maps of $\X$: if $f: A \to B$ is any map of
$\X$ and $g: B \x X \to Y$ is a map of $\X[B]$, then
$\X[f](g)= (f \x 1)g: A \x X \to Y$.
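As a quick check of these conventions (added for illustration), the identity laws in $\X[A]$ reduce to product identities: for any $g: A \x X \to Y$,
$$\< \pi_0,\pi_1 \>\, g = 1_{A \x X}\, g = g \mbox{~~~~and~~~~} \< \pi_0,g \>\, \pi_1 = g,$$
so $\pi_1$ is indeed a two-sided identity for the slice composition.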
\begin{definition}
A Cartesian category $\X$ is said to have a {\bf system of \linear
maps}, ${\cal L}$, in case in each simple slice $\X[A]$ there is a system
of maps ${\cal L}[A] \subseteq \X[A]$, which we shall refer to as the
{\bf ${\cal L}[A]$-linear} maps (or, when $\cal L$ is understood, simply
as ``linear'' maps), satisfying:
\begin{enumerate}[{\bf [LS.1]}]
\item for each $X,Y,Z \in \X[A]$, all of $1_X,\pi_0: X \x Y \to
X,\pi_1: X \x Y \to Y$ are in ${\cal L}[A]$ and, furthermore, $\< f,g\>:
X \to Y \x Z \in {\cal L}[A]$ whenever $f,g \in {\cal L}[A]$;
\item in each $\X[A]$, ${\cal L}[A]$, as a system of maps, is closed under composition, furthermore,
whenever $g \in {\cal L}[A]$ is a retraction and $gh \in {\cal L}[A]$
then $h \in {\cal L}[A]$;
\item all substitution functors
$$\infer{\X[B] \to_{\X[f]} \X[A]}{A \to^f B}$$
preserve linear maps.
\end{enumerate}
\end{definition}
Notice how a system of \linear maps determines a subfibration:
$$\xymatrix{{\cal L}[\X] \ar[d]^{\partial_{\cal L}} \ar[rr]^{{\cal I}[\X]}
& & \X[\X] \ar[d]^{\partial} \\
{\cal L} \ar[rr]_{{\cal I}} & & \X}$$
where ${\cal I}$ is the obvious inclusion and ${\cal L}[\X]$ is the category:
\begin{description}
\item{{\bf [Objects:]}} $(A,X) \in \X_0 \x \X_0$
\item{{\bf [Maps:]}} $(f,g): (A,X) \to (B,Y)$ where $g: A \to B$ is any
map and $f: A \x X \to Y$ is linear in
its second argument.
\end{description}
Composition and identities are as in $\X[\X]$, where the maps are the
same but with no linear restriction. Composition is defined by $(f,g)
(f',g') = (\< \pi_0 g,f \> f',g g')$ and $(\pi_1,1)$ is the identity
map. Cartesian maps are of the form $(\pi_1,g): (A,X) \to (B,X)$.
Furthermore we observe using {\bf [LS.2]}:
\begin{lemma}
For any $\X$ with a system of \linear maps and any slice $\X[A]$:
\begin{enumerate}[(i)]
\item if $f$ is a linear map which is an isomorphism, then $f^{-1}$ is linear;
\item if $e$ is a linear idempotent which splits as a retraction $r$
and a section $i$, and if $r$ is linear, then $i$ is also linear.
\end{enumerate}
\end{lemma}
We shall need terminology which focuses on the linear argument rather
than the context of that argument. Thus to say that a map $f: X \x A
\to Y$ is linear {\em in its second argument\/} is equivalent to saying
that $f$ is ${\cal L}[X]$-linear. Similarly, a map, $f$, is linear {\em in its first argument\/} if
$c_{\x} f$ is ${\cal L}[X]$-linear.
\begin{lemma}
If ${\cal L}$ is a system of \linear maps on a Cartesian category $\X$
then if $f: X \x A \to B$ and $g: Y \x B \to C$ are linear in their
second arguments then $(1 \x f)g: Y \x X \x A \to C$ is linear in its
third argument.
\end{lemma}
\proof
As $f$ is linear in $\X[X]$, $\X[\pi_1](f): Y \x X \x A \to C$ is linear
in $\X[Y \x X]$. Similarly as $g$ is linear in $\X[Y]$, $\X[\pi_0](g)$ is
linear in $\X[Y \x X]$. Now $(1 \x f)g = \X[\pi_1](f)\X[\pi_0](g)$
which makes it a composite of linear maps and so itself linear.
\endproof
The lemma means that to determine whether a composite is linear in a
particular argument it suffices to know that each composition was at a
linear argument.
If ${\cal L}$ is a system of \linear maps then $f: A \x B \to C$ is
(${\cal L}$-){\bf bilinear} in case it is linear in each argument
individually.
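A standard concrete instance (not drawn from the abstract development, but from the motivating model of Euclidean spaces and smooth maps, where linearity is the usual algebraic notion): multiplication
$$m: \mathbb{R} \x \mathbb{R} \to \mathbb{R}; (x,y) \mapsto xy$$
is linear in each argument separately, as $m(x+x',y) = m(x,y) + m(x',y)$ and $m(ax,y) = a\,m(x,y)$ (and symmetrically in $y$), hence bilinear; but it is not linear jointly, since in general $m(x+x',y+y') \neq m(x,y) + m(x',y')$.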
Finally we make the useful if immediate observation:
\begin{lemma}
If $\X$ has a system of linear maps ${\cal L}$ then each simple slice, $\X[A]$, inherits a system of linear maps ${\cal L}[A]$.
\end{lemma}
\subsection{Closed systems of maps}
\label{closed-systems}
A system of \linear maps induces a system of maps on $\X$ itself, as $\X
\cong \X[1]$. This class, which we shall denote by ${\cal L}[]$, will
determine the whole system of linear maps when the category is Cartesian
closed. However, clearly such an ${\cal L}[]$ must satisfy some special
properties which we now outline.
\begin{definition}
A system of maps ${\cal C} \subseteq \X$ is an {\bf exponentiable system
of maps} in case:
\begin{enumerate}[{\bf [ES.1]}]
\item for each $X,Y,Z \in \X$, all of $1_X,\pi_0: X \x Y \to X,\pi_1: X
\x Y \to Y,\Delta: X \to X \x X$ are in ${\cal C}$ and, furthermore, $\<
f,g\>: X \to Y \x Z \in {\cal C}$ whenever $f,g \in {\cal C}$;
\item ${\cal C}$ as a system of maps is closed under composition, moreover, whenever $g \in {\cal
C}$ is a retraction and $gh \in {\cal C}$ then $h \in {\cal C}$;
\item whenever $f:A \to B$ is in ${\cal C}$ and $g: X \to Y$ is any map, then
$g \Rightarrow f: Y \Rightarrow A \to X \Rightarrow B$ is in ${\cal C}$;
\item furthermore
\begin{eqnarray*}
\eta[X] & = & \curry(\pi_1): A \to X \Rightarrow A \\
\mu[X] & = & \curry((\Delta \x 1) (1 \x \eval) \eval): X \Rightarrow (X
\Rightarrow A) \to X \Rightarrow A
\end{eqnarray*}
are all in ${\cal C}$.
\end{enumerate}
\end{definition}
Here we use the exponential correspondence in the following form:
$$\infer{X \to_{\curry(f)} A \Rightarrow B}{A \x X \to^f B}.$$
\begin{definition} \label{closed-system-defn}
We shall say that a system of \linear maps in a Cartesian closed
category is {\bf closed} in case
\begin{itemize}
\item each evaluation map is linear in its second (higher-order) argument.
\item the system is closed under currying, that is if $f: A \x B \x C \to
D$ is linear in its second argument ($B$) then $\curry(f): B \x
C \to A \Rightarrow D$ is linear in its first argument.
\end{itemize}
Equivalently, when $\X$ is Cartesian closed, a system of \linear maps is
closed provided $k: B \x X \to Y \in {\cal L}[B]$ if and only if
$\curry(k) \in {\cal L}[]$.
\end{definition}
\begin{proposition}
In a Cartesian closed category $\X$, exponentiable systems of
maps are in bijective correspondence with closed systems of \linear maps.
\end{proposition}
\proof
Given an exponentiable system of maps ${\cal C}$ we define a
system of linear maps $\widetilde{{\cal C}}$, by saying $f: A \x B \to C$ is
linear in $\X[A]$ if and only if $\curry(f): B \to A \Rightarrow C$ is
in ${\cal C}$. This defines a system of linear maps since:
\begin{enumerate}[{\bf [LS.1]}]
\item In $\X[A]$ the identity map is $\pi_1: A \x X \to X$ which is
linear as $\curry(\pi_1)$ is. For the remainder note that $A \x X
\to^{\pi_1} X \to^f Y$ has $\curry(\pi_1 f) = \curry(\pi_1)(A
\Rightarrow f)$ so that whenever $f$ is in ${\cal C}$ then $\curry(\pi_1
f)$ will be in ${\cal C}$. Finally note that $\curry(\< f,g \>) = \<
\curry(f),\curry(g) \> (\< A \Rightarrow \pi_0,A \Rightarrow \pi_1
\>)^{-1}$.
\item The curry of a composite in $\X[B]$ is given by
$$\curry(\< f,\pi_1 \>g) = \curry(f)(B \Rightarrow \curry(g)) \mu[B]$$
which is a composite of ${\cal C}$-maps when $f$ and $g$ are.
Suppose now that $g$ and $h$ are composable in $\X[B]$, that $\curry(g), \curry(gh) \in
{\cal C}$, and that $g$ is a retraction (with section $g^{s}$); then $(B
\Rightarrow \curry(g^{s})) \mu[B]$ is right inverse to $(B \Rightarrow
\curry(g)) \mu[B]$, where the latter is in ${\cal C}$. Also
$$(B \Rightarrow \curry(gh)) \mu[B] = ((B \Rightarrow \curry(g)) \mu[B])((B
\Rightarrow \curry(h)) \mu[B])$$
so that $(B \Rightarrow \curry(h)) \mu[B] \in {\cal C}$. But this means
$\eta[B](B \Rightarrow \curry(h)) \mu[B] = \curry(h)$ is in ${\cal C}$.
\item If $\curry(h) \in {\cal C}$ then $\curry(\X[f](h)) = \curry((f \x
1) h) = \curry(h)(f \Rightarrow Y)$ so that the substitution functors
preserve $\widetilde{{\cal C}}$-maps.
\end{enumerate}
We observe that, as $\curry(\eval) = 1$, in $\widetilde{{\cal C}}$
the evaluation map must necessarily be linear in its second
(higher-order) argument.
For the converse, given a system of linear maps ${\cal L}$ in which
evaluation maps are linear in their second argument, we now show that
${\cal L}[]$ is an exponentiable system of maps.  The only difficulties
are {\bf [ES.3]} and {\bf [ES.4]}. For the former note that $f
\Rightarrow g$ is obtained by currying: $$A \x (A' \Rightarrow B')
\to^{f \x 1} A' \x (A' \Rightarrow B') \to^{\eval} B' \to^g B$$ However
observe that this is linear in its second argument so that currying
preserves this establishing {\bf [ES.3]}. For the latter the argument
is similar as both $\eta$ and $\mu$ are obtained by currying maps which
are linear in the appropriate coordinates.
Finally we must argue that these transitions are inverse: that
$\widetilde{{\cal C}}[] = {\cal C}$ is immediate. We show that
$\widetilde{{\cal L}[]} = {\cal L}$ by showing ${\cal L} \subseteq
\widetilde{{\cal L}[]}$ and the converse. Suppose $f \in {\cal L}[A]$
then $\curry(f) \in {\cal L}[]$ so that $f \in \widetilde{{\cal L}[]}
[A]$ as required. Conversely suppose $f \in \widetilde{{\cal L}[]}[A]$
then $\curry(f) \in {\cal L}[]$ but this means $f = (1 \x \curry(f))
\eval$ is in ${\cal L}[A]$.
\endproof
We note that when ${\cal L}$ is an exponentiable system of linear maps, the
notion of being (${\cal L}$-)bilinear becomes the requirement that both
$\curry(f)$ and $\curry(c_\x f)$ are linear in $\X$.
\subsection{Storage}
A {\bf Cartesian storage category} is a Cartesian category $\X$ together with a
system of linear maps, ${\cal L}$, which is persistently and strongly
classified,\footnote{The notion of classification might (by comparison
with the notion of a subobject classifier in a topos, for example) be
called coclassification---we shall not do that. Furthermore, we also
use the term ``represented'', especially in the sense that under the
classification given by $S$ and $\varphi$, $f^{\sharp}$ represents $f$.}
in the following sense.
\begin{definition} ~
\begin{enumerate}[(i)]
\item
A system of \linear maps is {\bf classified} in
case there is a family of maps $\varphi_X: X \to S(X)$ and for each
$f: X \to Y$ a unique linear map $f^{\sharp}$ such that
$$\xymatrix{X \ar[r]^f \ar[d]_{\varphi_X} & Y \\S(X) \ar@{..>}[ru]_{f^\sharp} }$$
commutes.
\item
A system of \linear maps is {\bf strongly classified}
if there is an object function $S$ and maps $X \to^{\varphi_X} S(X)$ such
that for every $f: A \x X \to Y$ there is a unique
$f^{\sharp}: A \x S(X) \to Y$ in ${\cal L}[A]$ (that is,
$f^\sharp$ is linear in its second argument) making
$$\xymatrix{A \x X \ar[d]_{1 \x \varphi_X} \ar[rr]^{f} & & Y \\ A \x S(X)
\ar@{..>}[rru]_{f^{\sharp}}}$$
commute.
\item A strong classification is said to be {\bf persistent} in case
whenever $f: A \x B \x X \to Y$ is linear in its second argument $B$
then $f^\sharp: A \x B \x S(X) \to Y$ is also linear in its second
argument.
\end{enumerate}
\end{definition}
\begin{remark}
{\em When the \linear maps are classified this makes the inclusion of the linear
maps into a right adjoint. A strong classification makes the inclusion
of the linear maps into the simple fibration a fibred right adjoint. A
strong persistent classification allows a powerful proof technique which
we shall use extensively in what follows. To establish the equality of
$$f,g: A \x \underbrace{S(X) \x ...S(X)}_n \to Y$$
when $f$ and $g$ are maps which are linear {\em individually\/} in their
last $n$ arguments, it suffices to show $(1 \x \varphi^n)f = (1 \x
\varphi^n)g$. Note that the linearity precondition is vital! }
\end{remark}
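For instance, for $n = 1$ the technique is just the uniqueness clause of strong classification: if $f,g: A \x S(X) \to Y$ are linear in their second argument and $(1 \x \varphi)f = (1 \x \varphi)g$ then
$$f = ((1 \x \varphi)f)^\sharp = ((1 \x \varphi)g)^\sharp = g,$$
as $(\_)^\sharp$ is the unique linear map extending a given map along $1 \x \varphi$.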
Strong classification gives a morphism of fibrations defined by:
$$S[\X]: \X[\X] \to {\cal L}[\X]; (f,g) \mapsto ( (f \varphi)^\sharp, g).$$
This is a functor on the total category. Identity maps are preserved because
$$\xymatrix{A \x X \ar[d]_{1 \x \varphi}\ar[r]^{\pi_1} & X \ar[r]^{\varphi} & S(X) \\
A \x S(X) \ar[rru]_{\pi_1}}$$
commutes making $(\pi_1 \varphi)^\sharp = \pi_1$. Composition is preserved as
$$\xymatrix{A \x X \ar[d]_{1 \x \varphi}\ar[r]^{\<\pi_0 g,f\>} & B \x Y \ar[d]^{1 \x \varphi} \ar[r]^{f'}
& Z \ar[r]^{\varphi} & S(Z) \\
A \x S(X) \ar[r]_{\< \pi_0 g,(f \varphi)^\sharp \>} & B \x S(Y) \ar[rru]_{(f'\varphi)^\sharp} }$$
Finally $S[\X]$ preserves Cartesian arrows as $S[\X](\pi_1,g) =
(\pi_1,g): (A,S(X)) \to (B,S(X))$.
Therefore the classification is the couniversal property for the
inclusion of the linear maps into the simple fibration, and so
$S$ is a fibred right adjoint.
\begin{remark}{}\rm
We recall some definitions dealing with the notion of strength.
A {\bf functor} $S: \X\to\Y$ between monoidal categories is {\bf strong} if
there is a natural transformation (``strength'') $\theta^S:X\ox S(Y)\to
S(X\ox Y)$ so that the following diagrams commute:
{\small
\[
\xymatrix{\top\ox S(Y) \ar[dr]_{u^L_\ox} \ar[rr]^{\theta} & & S(\top\ox Y) \ar[dl]^{S(u^L_\ox)} \\
& S(Y) }
~~~~~~
\xymatrix{X\ox (Y\ox S(Z)) \ar[d]_{1 \ox \theta} \ar[rr]^{a_\ox} & & (X\ox Y)\ox S(Z) \ar[dd]^{\theta} \\
X \ox S(Y \ox Z) \ar[d]_{\theta} \\
S(X \ox (Y \ox Z)) \ar[rr]_{S(a_\ox)} & & S((X \ox Y) \ox Z)
}
\]
}
The identity functor is strong (with strength given by the identity),
and if $S$ and $T$ are strong, so is their composite, with strength given by
$\theta^T T(\theta^S): X \ox T(S(Y))\to T(X\ox S(Y))\to T(S(X\ox Y))$.
A {\bf natural transformation} $\psi:S \to T$ between strong functors is
{\bf strong} if the following commutes.
{\small
\[
\xymatrix{ X\ox S(Y) \ar[d]_{1\ox\psi}\ar[r]^{\theta^S} & S (X\ox Y) \ar[d]^{\psi} \\
X\ox T(Y) \ar[r]_{\theta^T} & T(X\ox Y)
}
\]
}
A {\bf monad} $(S,\eta,\mu)$ is {\bf strong} if each of $S$, $\eta$, and $\mu$
is strong.
\end{remark}
Thus we have proved:
\begin{lemma}
If $\X$ has a strongly classified system of linear maps, then $S:
\X \to \X$ is a strong functor (and thus a morphism of simple
fibrations) and determines a fibred right adjoint to the inclusion
${\cal I}: \partial_{\cal L} \to \partial$.
\end{lemma}
The strength $\theta$ is given explicitly by
$$\xymatrix{A \x X \ar[d]_{1 \x \varphi} \ar[rr]^{\varphi} & & S(A \x X) \\
A \x S(X) \ar[rru]_{\theta = \varphi^\sharp}}$$
and all the natural transformations are appropriately strong. The
strength of $\varphi$, the unit of the adjunction, is provided by the
defining diagram of the strength transformation $\theta$ above. Notice
that $\epsilon$ is {\em not\/} natural, as $\epsilon f$ is not
necessarily linear unless $f$ is; it does, however, satisfy the strength
requirement. On the other hand, $\mu = \epsilon_{S(\_)}$ is natural and also
strong.
This adjunction induces a comonad
$\check{\S} = (S,\delta,\epsilon)$ on the linear maps whose data is given by
$$\xymatrix{A \ar@{=}[rd]\ar[r]^\varphi & S(A) \ar@{..>}[d]^\epsilon \\ & A}
~~~~~~~~
\xymatrix{A \ar[d]_\varphi \ar[r]^\varphi & S(A) \ar@{..>}[d]^{\delta=S(\varphi)} \\
S(A) \ar[r]_\varphi & S(S(A))}$$
Notice that by definition both $\epsilon$ and $\delta$ are linear (in ${\cal L}[]$).
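For completeness, the comonad identities for $\check{\S}$ are a routine verification from the monad laws and the naturality of $\varphi$:
$$\delta \epsilon_{S(A)} = S(\varphi) \mu = 1, ~~~~
\delta S(\epsilon) = S(\varphi \epsilon) = S(1) = 1, ~~~~
\delta S(\delta) = S(\varphi S(\varphi)) = S(\varphi \varphi_{S(A)}) = \delta \delta_{S(A)},$$
using, respectively, a unit law of the monad, the triangle identity $\varphi \epsilon = 1$, and the naturality of $\varphi$ at $\varphi$.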
In Cartesian storage categories the classification is also persistent, which means
that making a function linear in an argument---by classifying
it at that argument---does not destroy the linearity that any of the
other arguments might already have enjoyed. This has the important consequence
that the universal property can be extended to a multi-classification
property as mentioned above. Here is the argument for the
bi-classification property:
\begin{lemma} \label{bilinear-universal}
In a Cartesian storage category, for each $f: A \x X \x Y \to Z$ there is a
unique $f^{\sharp_2}$, which is linear in both its second and third
arguments ({\em i.e.} bilinear), such that
$$\xymatrix{A \x X \x Y \ar[d]_{1 \x \varphi \x \varphi} \ar[rr]^{f} & & Z \\
A \x S(X) \x S(Y) \ar@{..>}[rru]_{f^{\sharp_2}}}$$
commutes.
\end{lemma}
\proof
To establish the existence of such a map we may extend $f$ in stages:
$$\xymatrix{A \x X \x Y \ar[d]_{1 \x \varphi \x 1} \ar[rr]^{f} & & Z \\
A \x S(X) \x Y \ar[d]_{1 \x 1 \x \varphi} \ar@{..>}[rru]_{f^{\sharp_1}}\\
A \x S(X) \x S(Y) \ar@{..>}[rruu]_{f^{\sharp_2}}}$$
the map $f^{\sharp_2}$ is then linear in its last two arguments by
persistence. Suppose, for uniqueness, that a map $g: A \x S(X) \x S(Y)
\to Z$, which is linear in its last two arguments, has $(1 \x \varphi
\x \varphi)g = f$. Then $f^{\sharp_1} = (1 \x 1 \x \varphi)g$ as the
latter is certainly linear in its middle argument. Whence $g =
f^{\sharp_2}$.
\endproof
This allows the observation that there is a candidate for monoidal
structure $m_\x:S(A) \x S(B) \to S(A \x B)$ given as the unique bilinear map lifting $\varphi$:
$$\xymatrix{A \x B \ar[d]_{\varphi \x \varphi} \ar[rr]^{\varphi} & & S(A \x B) \\ S(A) \x S(B) \ar@{..>}[urr]_{m_\x} }$$
We observe that this implies the following two identities:
\begin{lemma} \label{bilinear-identities}
In any Cartesian storage category:
\begin{enumerate}[(i)]
\item $m_\x = \theta S(\theta') \mu$;
\item $(\epsilon \x \epsilon) m_\x = m_\x S(m_\x) \epsilon$.
\end{enumerate}
\end{lemma}
\begin{proof} Both can be verified using the universal property outlined in Lemma \ref{bilinear-universal}. First one notices that
all the maps involved are bilinear; thus it suffices to show that prefixing with $\varphi \x \varphi$ gives the same maps:
\begin{enumerate}[(i)]
\item Clearly $m_\x$ is bilinear. Recall that $\mu = \epsilon_{S(\_)}$. Notice that $\theta S(\theta') \mu$ is linear in its first argument as $\theta$ is and $S(\theta')\mu$ is linear.
Moreover, using persistence, it is also linear in its second argument as:
$$\xymatrix{A \x S(B) \ar[d]_{\varphi \x 1} \ar[rd]^{\varphi} \ar[rr]^{\theta'} & & S(A \x B) \ar[dr]^{\varphi} \ar@{=}[rr] & & S(A \x B) \\
S(A) \x S(B) \ar[r]_{\theta} & S(S(A) \x B) \ar[rr]_{S(\theta')} & & S^2(A \x B) \ar[ru]_{~~~\epsilon_{S(A \x B)}= \mu} }$$
This gives:
\begin{eqnarray*}
(\varphi \x \varphi) m_\x & = & \varphi \\
(\varphi \x \varphi) \theta S(\theta') \mu & = & (\varphi \x 1)(1 \x \varphi) \theta S(\theta') \mu \\
& = & (\varphi \x 1) \varphi S(\theta') \mu \\
& = & (\varphi \x 1) \theta' \varphi \mu \\
& = & \varphi \varphi \mu = \varphi
\end{eqnarray*}
\item In this case both maps are clearly bilinear and thus the equality is given by:
\begin{eqnarray*}
(\varphi \x \varphi) (\epsilon \x \epsilon) m_\x & = & m_\x \\
(\varphi \x \varphi) m_\x S(m_\x) \epsilon & = & \varphi S(m_\x) \epsilon \\
& = & m_\x \varphi \epsilon = m_\x
\end{eqnarray*}
\end{enumerate}
\end{proof}
Then:
\begin{proposition}\label{prop:linmaps}
In a Cartesian storage category:
\begin{enumerate}[(i)]
\item $\S = (S,\varphi,\mu)$ is a commutative monad, that is $\S$ is a
monoidal monad with respect to the product with $m_\x = \theta
S(\theta') \mu: S(A) \x S(B) \to S(A \x B)$;
\item for every object $X$
$$S(S(X)) \Two^{\epsilon}_{S(\epsilon)} S(X) \to^\epsilon X$$
is an absolute coequalizer;
\item $f: A \x X \to Y$ is \linear in its second argument if and only if
$$\xymatrix{A \x S(X) \ar[d]_{1 \x \epsilon} \ar[r]^\theta &
S(A \x X)\ar[r]^{~~S(f)} & S(Y) \ar[d]^{\epsilon} \\
A \x X \ar[rr]_f & & Y}$$
commutes;
\item and so $f: A \x B \to Y$ is bilinear (\/{\em i.e.} linear in its
first two arguments) if and only if
$$\xymatrix{S(A) \x S(B) \ar[d]_{\epsilon \x \epsilon} \ar[r]^{m_{\x}} &
S(A \x B) \ar[r]^{~~S(f)} & S(Y) \ar[d]^{\epsilon} \\
A \x B \ar[rr]_f & & Y}$$
commutes.
\end{enumerate}
\end{proposition}
\avoidwidow
\proof ~
\begin{enumerate}[{\em (i)}]
\item To establish this, given that we know the monad is strong, it
suffices to show that the strength is commutative; that is, that the
following diagram commutes:
$$\xymatrix{S(A) \x S(B) \ar[d]_{\theta'} \ar[r]^\theta & S(S(A) \x B) \ar[r]^{S(\theta')} & S(S(A \x B)) \ar[dd]^\mu \\
S(A \x S(B)) \ar[d]_{S(\theta)} \\
S(S(A \x B)) \ar[rr]_\mu & & S(A \x B)}$$
The defining diagrams for the two routes round this square are:
$$\xymatrix{A \x B \ar[dr]_{\varphi} \ar[r]^{\varphi \x 1}
& S(A) \x B \ar[d]^{\theta'} \ar[dr]^{\varphi} \ar[r]^{1 \x \varphi}
& S(A) \x S(B) \ar[d]^\theta \\
& S(A \x B) \ar@{=}[ddr]\ar[dr]^{\varphi} & S(A \x S(B)) \ar[d]^{S(\theta')} \\
& & S(S(A \x B)) \ar[d]^\mu \\ & & S(A \x B)}$$
and
$$\xymatrix{A \x B \ar[dr]_{\varphi} \ar[r]^{1 \x \varphi}
& A \x S(B) \ar[d]^{\theta} \ar[dr]^{\varphi} \ar[r]^{\varphi \x 1}
& S(A) \x S(B) \ar[d]^{\theta'} \\
& S(A \x B) \ar@{=}[ddr]\ar[dr]^{\varphi} & S(S(A) \x B) \ar[d]^{S(\theta)} \\
& & S(S(A \x B)) \ar[d]^\mu \\ & & S(A \x B)}$$
Note that, because the classification is persistent, following Lemma
\ref{bilinear-identities} (i), both these constructed maps are
bilinear. Thus, using the universal property of $\varphi \x \varphi$,
they are equal; this shows that the two ways round the square agree,
so the strength is indeed commutative.
\item $\epsilon$ is split by $\varphi$, and so this is a split
coequalizer: indeed $\epsilon \varphi = \varphi S(\epsilon)$, while both
$\varphi \epsilon = 1$ and $S(\varphi) \epsilon = 1$. Thus this is an
absolute coequalizer.
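Explicitly, the splitting data is the standard one for a split coequalizer, namely $s = \varphi_X$ and $t = \varphi_{S(X)}$:
$$\varphi_X \epsilon_X = 1, ~~~~ \varphi_{S(X)} \epsilon_{S(X)} = 1, ~~~~ \varphi_{S(X)} S(\epsilon_X) = \epsilon_X \varphi_X,$$
the last equality being naturality of $\varphi$ at $\epsilon_X$. As split coequalizers are preserved by arbitrary functors, the coequalizer is absolute.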
\item
If $f$ is linear in its second argument then $(1 \x \epsilon) f$ is also
linear in its second argument (recall $\epsilon$ is linear). But this
means $f^\sharp = (1 \x \epsilon) f$. However, we also know $f^\sharp =
\theta S(f) \epsilon$. Thus, the diagram (which shows linearity)
commutes.
Conversely, suppose this diagram commutes; then notice it suffices to
show that we can factor $f^\sharp = (1 \x \epsilon) g$. Then, as $1 \x
\epsilon$ is linear and also a retraction, $g$ must be linear. Finally
$g$ must of course be $f$ as $f = (1 \x \varphi) f^\sharp = (1 \x
\varphi) (1 \x \epsilon) g = g$. Now to show we can factor $f^\sharp$ in
this manner, we can use exactness; it suffices to show that
$$A \x S^2(X) \Two^{1 \x S(\epsilon)}_{1 \x \epsilon} A \x S(X) \to^{f^\sharp} Y$$
commutes. For this we have the following simple calculation which uses
the fact that $\epsilon\epsilon = S(\epsilon) \epsilon$:
\begin{eqnarray*}
(1 \x \epsilon) f^\sharp
& = & (1 \x \epsilon) \theta S(f) \epsilon \\
& = & (1 \x \epsilon) (1 \x \epsilon) f = (1 \x \epsilon\epsilon) f \\
& = & (1 \x S(\epsilon)\epsilon) f = (1 \x S(\epsilon)) (1 \x \epsilon) f \\
& = & (1 \x S(\epsilon)) \theta S(f) \epsilon = (1 \x S(\epsilon)) f^\sharp.
\end{eqnarray*}
\item This is immediate from the preceding.
\end{enumerate}
\endproof
Notice that this means that in a Cartesian storage category, the system
$\L$ may equivalently be viewed as being induced by the monad $S$. The
second property says that the induced comonad on the subfibration of
linear maps is an {\em exact modality}. In the next section, as we study this
comonad in more detail, the significance of this property will become
clear.
We end this subsection with two observations. The first of these is that
the notion of a storage category is simply stable in the sense that each
simple slice of a storage category is itself a storage category.
This, in principle, will allow us to suppress the fibred context of a
statement about storage categories, as such statements will be true in
all simple-slice fibres.
\begin{lemma} \label{simple-slice-context}
If $\X$ is a Cartesian storage category then each simple slice,
$\X[A]$, is also a Cartesian storage category.
\end{lemma}
The second observation concerns the algebras of the monad $(S,
\varphi,\mu=\epsilon_{S})$. Recall that an algebra for this monad
is an object $A$ with a map $\nu: S(A) \to A$ such that
$$\xymatrix{ A \ar[r]^{\varphi} \ar@{=}[dr] & S(A) \ar[d]_\nu & A \ar[l]_{S(\varphi)} \ar@{=}[dl] \\ & A}
~~~~~\xymatrix{S^2(A) \ar[d]_{S(\nu)} \ar[r]^{\epsilon_{S(A)}} & S(A) \ar[d]^{\nu} \\
S(A) \ar[r]_\nu & A}
$$
Consider the algebras whose structure maps are linear. Of course, there
could be algebras whose structure map does not lie in ${\cal L}$ but
for now we consider the case $\nu \in {\cal L}$. These algebras, due to
the first triangle identity above, must be precisely of the form
$\nu=\epsilon_A$---and then the remaining identities will automatically
hold. A homomorphism of such algebras is then precisely a map $f: A \to
B$ which is $\epsilon$-natural. Thus, we have:
\begin{lemma}
The full subcategory of $S$-algebras, determined by the algebras whose
structure map is linear, is, for any Cartesian storage category $\X$,
isomorphic to the subcategory of linear maps ${\cal L}[]$.
\end{lemma}
\subsection{Abstract coKleisli categories}
Abstract coKleisli categories were introduced (in the dual) by Fuhrmann
\cite{fuhrmann} to provide a direct description of coKleisli
categories. Thus, every coKleisli category is an abstract coKleisli
category and, furthermore, from any abstract coKleisli category, $\X$,
one can construct a subcategory, $\check{\X}$, with a comonad,
$\check{\S}$, whose coKleisli category is exactly the original category
$\X$. Of particular interest is when the constructed category,
$\check{\Y}_\S$, of a coKleisli category $\Y_\S$ is precisely $\Y$:
this, it turns out, happens precisely when $\S$ is an exact modality in the sense
above.
This section views a Cartesian storage category from the perspective of its being
an abstract coKleisli category. A Cartesian storage category is clearly a rather
special abstract coKleisli category, so our aim is to determine what
extra algebraic conditions are required.
\begin{definition}
An {\bf abstract coKleisli category} is a category $\X$ equipped with a
functor $S$, a natural transformation $\varphi: 1_\X \to S$, and a family of maps
$\epsilon_A: S(A) \to A$ (which is not assumed to be natural) such that
$\epsilon_{S(\_)}$ is natural. Furthermore:
$$\varphi \epsilon = 1,~~S(\varphi) \epsilon = 1, ~~\mbox{and}~~ \epsilon\epsilon = S(\epsilon)\epsilon$$
An abstract coKleisli category is said to be {\bf Cartesian} if its underlying category, $\X$, is a
Cartesian category such that $!$, $\pi_0$ and $\pi_1$ are all
$\epsilon$-natural (in the sense that $\epsilon \pi_0 = S(\pi_0)\epsilon$, {\em etc.}).
\end{definition}
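It is worth noting that these axioms, instantiated at $S(A)$, already contain the monad laws for $\S = (S,\varphi,\epsilon_{S(\_)})$ considered below:
$$\varphi_{S(A)} \epsilon_{S(A)} = 1, ~~~~ S(\varphi_A) \epsilon_{S(A)} = 1, ~~~~ \epsilon_{S(S(A))} \epsilon_{S(A)} = S(\epsilon_{S(A)}) \epsilon_{S(A)},$$
giving the two unit laws and the associativity of $\mu = \epsilon_{S(\_)}$.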
In an abstract coKleisli category the $\epsilon$-natural maps, that is
those $f$ which satisfy $S(f)\epsilon = \epsilon f$, form a subcategory
$\X_\epsilon$ and on this subcategory $\check{\S} =
(S,S(\varphi),\epsilon)$ is a comonad. On $\X$, the larger category,
$\S = (S,\varphi,\epsilon_{S(\_)})$ is a monad. It is not hard to see
that the $\epsilon$-natural maps form a system of linear maps which are
classified:
$$\xymatrix{X \ar[d]_{\varphi} \ar[rr]^f & & Y \\ S(X) \ar[urr]_{S(f)\epsilon}}$$
where the uniqueness follows since if an $\epsilon$-natural $h: S(X) \to Y$ has $\varphi h = f$
then $S(f)\epsilon = S(\varphi h)\epsilon = S(\varphi)\epsilon h = h$.
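For the existence half, naturality of $\varphi$ shows that $S(f)\epsilon$ does classify $f$, while the stated axioms show that it is $\epsilon$-natural:
$$\varphi\, S(f) \epsilon = f \varphi \epsilon = f
~~~~\mbox{and}~~~~
S(S(f)\epsilon)\, \epsilon = S(S(f))\, S(\epsilon)\, \epsilon = S(S(f))\, \epsilon\epsilon = \epsilon\, S(f)\, \epsilon,$$
the last step using the naturality of $\epsilon_{S(\_)}$.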
\medskip
In an abstract coKleisli category, we shall always use
the $\epsilon$-natural maps as the default system $\cal L$ of maps. We now address
the question of when this system is a \linear system of maps.
\medskip
\begin{definition}
An abstract coKleisli category is said to be {\bf strong} in case it is
Cartesian and $S$ is a strong functor, $\varphi$ a strong natural
transformation, and $\epsilon$ is strong, even where it is unnatural,
meaning that the following commutes for all objects $A$ and $X$:
$$\xymatrix{A \x S(X) \ar[r]^\theta \ar[dr]_{1 \x \epsilon} & S(A \x X) \ar[d]^{\epsilon} \\ & A \x X}$$
\end{definition}
In a strong abstract coKleisli category $\X$, $\S = (S,\varphi,\epsilon_{S(\_)})$ is clearly a strong monad. We observe:
\begin{proposition}
If $(\X,S,\varphi,\epsilon)$ is a strong abstract coKleisli category then the maps
$f: A \x X \to Y$ of $\L[A]\subseteq\X[A]$ such that
$$\xymatrix{ A \x S(X) \ar[d]_{1 \x \epsilon}\ar[r]^\theta & S(A \x X)
\ar[r]^{~~~S(f)} & S(Y) \ar[d]^{\epsilon} \\
A \x X \ar[rr]_f & & Y}$$
form a system, ${\cal L}_\S[\X]$, of linear maps which are strongly classified by $(S,\varphi)$.
\end{proposition}
\proof We check the conditions for being a system of linear maps:
\begin{enumerate}[{\bf [LS.1]}]
\item The identity in $\X[A]$ is $\epsilon$-natural as
$$\xymatrix{A \x S(X) \ar[drr]_{\pi_1} \ar[dd]_{1 \x \epsilon} \ar[rr]^\theta & & S(A \x X) \ar[d]^{S(\pi_1)} \\
& & S(X) \ar[d]^\epsilon \\
A \x X \ar[rr]_{\pi_1} & & X}$$
where we use strength.
We check that the projections in $\X[A]$ are $\epsilon$-natural using:
$$\xymatrix{A \x S(X_1 \x X_2) \ar[d]_{1 \x \epsilon} \ar[r]^\theta &
S(A \x (X_1 \x X_2)) \ar[r]^{~~~S(\pi_1)}
& S(X_1 \x X_2) \ar[d]^\epsilon \ar[r]^{~~~S(\pi_i)} & S(X_i) \ar[d]^{\epsilon} \\
A \x (X_1 \x X_2) \ar[rr]_{\pi_1} & & X_1 \x X_2 \ar[r]_{~~\pi_i} & X_i}$$
Suppose $f_1: A \x X \to Y_1$ and $f_2: A \x X \to Y_2$ are $\epsilon$-natural in $\X[A]$. Then in the
following diagram the left hand square commutes if and only if the two outer squares corresponding to the
two post compositions with the projections commute.
$$\xymatrix{ A \x S(X) \ar[d]_{1 \x \epsilon} \ar[r]^\theta & S(A \x X) \ar[r]^{S(\< f_1,f_2 \>)}
& S(Y_1 \x Y_2) \ar[d]^\epsilon \ar[r]^{~~~S(\pi_i)} & S(Y_i) \ar[d]^\epsilon \\
A \x X \ar[rr]_{\< f_1,f_2 \>} & & Y_1 \x Y_2 \ar[r]_{~~\pi_i} & Y_i}$$
However the outer squares commute by assumption and so $\< f_1,f_2 \>$ is $\epsilon$-natural in $\X[A]$.
\item We must show that $\epsilon$-natural maps in $\X[A]$ compose. We shall do it explicitly:
$$\xymatrix{ & S(A \x X) \ar@/^4pc/[rrrd]^{S(\< \pi_0,f \>)} \ar[r]^{S(\Delta \x 1)}
& S(A \x A \x X) \ar[drr]^{S(1 \x f)} \\
A \x S(X) \ar[dd]_{1 \x \epsilon} \ar[ru]^\theta \ar[r]^{\!\!\Delta \x 1}
& A \x A \x S(X) \ar[r]^{1 \x \theta} \ar[dd]^{1 \x 1 \x \epsilon}
& A \x S(A \x X) \ar[u]^\theta \ar[r]^{~~1\x S(f)} & A \x S(Y) \ar[r]^\theta \ar[dd]^{1 \x \epsilon}
& S(A \x Y) \ar[d]^{S(g)} \\
& & & & S(Z) \ar[d]^\epsilon \\
A \x X \ar@/_3pc/[rrr]_{\< \pi_0,f\>} \ar[r]_{\Delta \x 1} & A \x A \x X \ar[rr]_{1 \x f}
& & A \x Y \ar[r]_g & Z}$$
Next we must show that if $gh$ is $\epsilon$-natural and $g$ is a retraction which is $\epsilon$-natural then
$h$ is $\epsilon$-natural. We shall prove it in the basic case, leaving it as an exercise for the reader to
do the calculation in a general slice.
We have
$$\xymatrix{S(A)\ar[d]^\epsilon \ar[r]^{S(g)} & S(B)\ar[d]^\epsilon \ar[r]^{S(h)} & S(C) \ar[d]^\epsilon \\
A \ar[r]^{g} & B \ar[r]^{h} & C}$$
where the left square is known to commute and the outer square commutes. This means
$S(g) S(h) \epsilon = S(g) \epsilon h$; but $S(g)$ is a retraction, as $g$ is, and so the right square commutes.
\item We must show that if $f$ is $\epsilon$-natural in $\X[A]$ then $(g \x 1) f$ is $\epsilon$-natural.
Here is the diagram:
$$\xymatrix{A \x S(X) \ar[dr]_{g \x 1} \ar[dd]_\epsilon \ar[r]^\theta
& S(A \x X) \ar[rd]_{S(g \x 1)} \ar[rr]^{S((g \x 1)f)} & & S(Y) \ar[dd]^\epsilon \\
& B\x S(X) \ar[d]_{1 \x \epsilon} \ar[r]_\theta & S(B \x X) \ar[ru]_{S(f)} \\
A \x X \ar[r]_{g \x 1} & B \x X \ar[rr]_f & & Y}$$
\end{enumerate}
This system of linear maps is always strongly classified by
$(S,\varphi)$ by defining $f^\sharp = \theta S(f) \epsilon$, which is
clearly $\epsilon$-natural in $\X[A]$, and certainly has $(1 \x
\varphi)\theta S(f) \epsilon = f$. It is unique, for if $g$ is
$\epsilon$-natural in $\X[A]$ and has $(1 \x \varphi) g = f$, then
$$\theta S(f) \epsilon = \theta S((1 \x \varphi) g)\epsilon = (1 \x S(\varphi)) \theta S(g) \epsilon
= (1 \x S(\varphi)) (1 \x \epsilon) g = g.$$
\endproof
To complete the story it remains to establish the property to which persistence corresponds:
\begin{lemma}
A strong abstract coKleisli category has a persistent strong classification of the $\epsilon$-natural maps if and only if $\S$ is a commutative
(or monoidal) monad.
\end{lemma}
\proof
Suppose that $f: A \x X \x Y \to Z$ is linear in its second argument. That is
$$\xymatrix{A \x S(X) \x Y \ar[d]_{1 \x \epsilon \x 1} \ar[r]^{\theta_2}
& S(A \x X \x Y) \ar[r]^{~~~S(f)} & S(Z) \ar[d]^\epsilon \\
A \x X \x Y \ar[rr]_f & & Z}$$
commutes. Then we must show that, after linearizing the last argument, the result is still linear in the second
argument; that is, the following diagram commutes:
$$\xymatrix{ & S(A \x X \x S(Y)) \ar[r]^{~~S(\theta_3)} & S^2(A \x X \x
Y) \ar[dr]^\epsilon \ar[r]^{~~~S^2(f)}
& S^2(Z) \ar[dr]^\epsilon \\
A \x S(X) \x S(Y) \ar[ur]^{\theta_2} \ar[d]_{1 \x \epsilon \x 1} \ar[dr]_{\theta_3} & &
& S(A \x X \x Y) \ar[r]^{~~~~S(f)} & S(Z) \ar[d]^\epsilon \\
A \x X \x S(Y) \ar[dr]_{\theta_3} & S(A \x S(X) \x Y)
\ar[d]^{S(1 \x \epsilon \x 1)} \ar[r]^{~~S(\theta_2)}
& S^2(A \x X \x Y) \ar[r]_{~~~S^2(f)} \ar[ur]_\epsilon
& S^2(Z) \ar[ur]_\epsilon \ar[d]_{S(\epsilon)}
& Z \\
& S(A \x X \x Y) \ar[rr]_{S(f)} & & S(Z) \ar[ur]_\epsilon}$$
Conversely, we have already established that a persistent strong classification gives rise to a commutative monad.
\endproof
We shall say that a strong abstract coKleisli category is {\bf
commutative} if its monad is. We have now established:
\begin{theorem}
If $\X$ is a Cartesian category, then $\X$ is a Cartesian storage category (that
is, $\X$ has a persistently strongly classified system of \linear maps)
if and only if it is a commutative strong abstract coKleisli category.
\end{theorem}
\subsection{Linear idempotents}
A very basic construction on a category is to split the idempotents, and we pause our development to consider briefly how this construction
applies to storage categories. In a storage category there is a slightly subtle issue, as it is quite possible for a linear idempotent to split into two
non-linear components. This means that even though an idempotent may split in the ``large'' category of all maps, it need not split in
the subcategory of linear maps. We shall be interested in splitting the linear idempotents linearly, for in this case we can show that the
resulting category is also a storage category which extends the original category without introducing any new linear maps.
We start with a basic observation:
\begin{lemma} \label{storage-splitting}
In a Cartesian storage category:
\begin{enumerate}[(i)]
\item When $fg$ is linear and $g$ is monic and linear, then $f$ is linear;
\item When $fg$ is linear and $f$ is linear with $S(f)$ epic, then $g$ is linear;
\item A linear idempotent $e:X \to X$ splits linearly, in the sense that there is a splitting $(r,s)$ with $rs = e$ and $sr = 1$ with both $r$ and $s$ linear,
iff there is a splitting in which at least one of $r$ and $s$ is linear.
\end{enumerate}
\end{lemma}
\begin{proof} Consider
\[
\xymatrix{ S(X) \ar[rr]^{S(fg)} \ar[dd]_{\epsilon} \ar[dr]^{S(f)} && S(Y) \ar[dd]^{\epsilon} \\
& S(Y) \ar[dd]^{\raisebox{20pt}{\scr$\epsilon$}} \ar[ur]^{S(g)} \\
X \ar[rr]|{~}^{fg~~~~~~~~~~~} \ar[dr]_{f}
&& Y \\
& Y \ar[ur]_{g} }
\]
Under the conditions of {\em (i)} the back face commutes since $fg$ is linear
({\em i.e.} $\epsilon$-natural), and the ``$g$ face'' commutes. Then $S(f)\epsilon g = S(f) S(g) \epsilon = \epsilon f g$ and, thus, as
$g$ is monic, $S(f)\epsilon = \epsilon f$ implying $f$ is linear.
Under the conditions of {\em (ii)} the back face still commutes and the ``$f$ face'' commutes. So then $S(f)\epsilon g = \epsilon f g = S(f) S(g) \epsilon$ and, thus, as
$S(f)$ is epic, $\epsilon g = S(g) \epsilon$, implying $g$ is linear.
Finally for {\em (iii)} if $e$ splits and $s$ is linear then $e=rs$ satisfies {\em (i)} while if $r$ is linear it satisfies {\em (ii)} as $S(r)$ being a retraction is certainly epic.
\end{proof}
In a storage category we shall often wish to split all the linear idempotents, that is split all idempotents $e$ such that $e \in {\cal L}$. An important observation
is that this can be done entirely formally:
\begin{proposition}
\label{linear-idempotent-splitting}
If $\X$ is a Cartesian storage category then the category obtained by splitting the linear idempotents, ${\sf Split}_{\cal L}(\X)$, is a Cartesian storage category in which linear idempotents split and the embedding
$${\cal I}: \X \to {\sf Split}_{\cal L}(\X)$$
preserves and reflects linear maps and the classification.
\end{proposition}
\proof
The objects of ${\sf Split}_{\cal L}(\X)$ are the linear idempotents in $\X$, and the maps $f: e \to e'$ are, as usual, those such that $efe'=f$. As linear
idempotents are closed under products
(that is whenever $e$ and $e'$ are linear idempotents then $e \x e'$ is a linear idempotent) it is standard that these form a Cartesian category. We shall say that $f: e \x e' \to e''$ is linear in its first argument precisely when it is so as a map in $\X$. It is then immediate that in ${\sf Split}_{\cal L}(\X)$ linear idempotents will split linearly.
The functor $S$ is now defined on ${\sf Split}_{\cal L}(\X)$ by taking an object $e$ to $S(e)$ and a map $f$ to $S(f)$. The transformations $\varphi$ and $\epsilon$ can also be extended by setting $\varphi_e := e \varphi S(e) = \varphi S(e) = e \varphi$ and $\epsilon_e:= S(e) \epsilon e = \epsilon e = S(e)\epsilon$ (where here we use the linearity of $e$).
To show that $(S,\varphi)$ on ${\sf Split}_{\cal L}(\X)$ classifies linear maps in ${\sf Split}_{\cal L}(\X)$ consider
$$\xymatrix { a \x e \ar[rr]^f \ar[d]_{a \x \varphi} & & e' \\ a \x S(e) \ar@{..>}[urr]_{f^\sharp} }$$
where $a$, $e$, $e'$ are linear idempotents, $f= (a \x e)fe'$, and $f^\sharp$ is the linear classification for $f$ in $\X$. It then suffices to show that
$f^\sharp = (a \x S(e))f^\sharp e'$ in order to conclude that this is a classification in ${\sf Split}_{\cal L}(\X)$. To show this first note that $(a \x S(e))f^\sharp e'$ is linear, so if
$(1 \x \varphi)(a \x S(e))f^\sharp e' =f$ we will have secured the equality. For this we note:
$$(1 \x \varphi)(a \x S(e))f^\sharp e' = (a \x e)(1 \x \varphi) f^\sharp e' = (a \x e) f e' = f.$$
Persistence now follows immediately from persistence in $\X$.
We also need to show that the linear maps in ${\sf Split}_{\cal L}(\X)$ satisfy {\bf [LS.1]}, {\bf [LS.2]}, and {\bf [LS.3]}. The only difficulty concerns {\bf [LS.2]} and the condition on linear
retractions. In a slice ${\sf Split}_{\cal L}(\X)[a]$ we must show that if $g$ is a linear retraction (where the section is not necessarily linear) and $gh$ is linear then $h$ is linear. This
gives the following commuting diagram of maps
$$\xymatrix{a \x e' \ar[d]_{a \x e'} \ar[rr]^{\< \pi_0,v \>} & & a \x e \ar[dll]^{\< \pi_0,g \>} \ar[d]^{\< \pi_0,g\> h} \\ a \x e' \ar[rr]_h & & e''}$$
in which the downward pointing arrows contain the known linear
components and $v$ is the section of $g$ which gives rise to the
leftmost arrow being an ``identity map'' at the idempotent $a \x e'$. We
must show that $h$ is linear in this slice, that is that $(1 \x
\epsilon) h = \theta S(h) \epsilon$. Here is the calculation:
\begin{eqnarray*}
\theta S(h) \epsilon & = & \theta S((a \x e')h) \epsilon ~~~~~~~~~~~\mbox{as $h: a \x e' \to e''$}\\
& = & \theta S(\<\pi_0,v\>\<\pi_0,g\>h) \epsilon ~~~~\mbox{as $v$ is a section of $g$}\\
& = & \< \pi_0,\theta S(v) \> \theta S(\<\pi_0,g\>h) \epsilon \\
& = & \< \pi_0,\theta S(v) \> (1 \x \epsilon) \<\pi_0,g\>h ~~~\mbox{as $gh$ is linear} \\
& = & \< \pi_0,\theta S(\< \pi_0,v \>g) \> (1 \x \epsilon) h ~~~\mbox{as $g$ is linear}\\
& = & \< \pi_0,\theta S(\pi_1 e') \>(1 \x \epsilon) h ~~~~~\mbox{as $v$ is a section of $g$}\\
& = & (1 \x \epsilon)(1 \x e')h = (1 \x \epsilon)h.
\end{eqnarray*}
\endproof
It is worth remarking that splitting linear idempotents of $\X$ does not cause linear idempotents in $\X[A]$ to split. One can, of course, split the linear
idempotents in $\X[A]$ but this in general will have more objects than ${\sf Split}_{\cal L}(\X)[A]$ as there will be more idempotents which are ``linear in one
argument'' than are just linear. More precisely, a linear idempotent in
$\X[A]$ is an $e': A \x X \to X$ linear in $X$ with $\<\pi_0,e'\>e' = e'$ and these, in general, will strictly include idempotents $\pi_1e$ where $e$ is a linear idempotent in $\X$.
\subsection{Forceful comonads}
At this stage we have a fairly good grasp of what the monad of a
Cartesian storage category looks like. However, notably absent has been
a discussion of the properties of the comonad $\check{\S}$ on the linear
maps of a Cartesian storage category. In this section we correct this
defect.
Recall that the strength in a Cartesian storage category is not linear,
in general, so that the comonad $\check{\S}$ will not have a strength.
To guarantee that $\check{\S}$ be strong,
we shall postulate the presence of a map which generates a strength
map in the coKleisli category: this we call a {\em force\/} and we
develop its properties below.
\begin{definition}
A {\bf (commutative) force} for a comonad $\check{\S} =
(S,\epsilon,\delta)$ is a natural transformation
$$S(A \x S(X)) \to^{\psi} S(A \x X)$$
which renders the following diagrams commutative:
\begin{enumerate}[{\bf [{Force}.1]}]
\item Associativity of force:
$$\xymatrix{S(A \x (B \x S(C))) \ar[d]_{\delta} \ar[rr]^{a_\x} & & S((A \x B) \x S(C)) \ar[dddd]^{\psi} \\
S^2(A \x (B \x S(C))) \ar[d]_{S(\sigma_\x)} \\
S(S(A) \x S(B \x S(C))) \ar[d]_{S(\epsilon \x \psi)}\\
S(A \x S(B \x C)) \ar[d]_{\psi} \\
S(A \x (B \x C)) \ar[rr]_{S(a_\x)} & & S((A \x B) \x C)
}$$
\item Projection and force:
$$\xymatrix{ S(A \x S(B)) \ar[d]_{S(\pi_1)} \ar[rr]^{\psi} & & S(A \x B) \ar[d]^{S(\pi_1)} \\
S^2(B) \ar[rr]_{\epsilon} & & S(B)}$$
\item Forceful naturality:
$$\xymatrix{S(A \x S(B)) \ar[rr]^{\delta} \ar[dd]_{\psi} & & S^2(A \x S(B)) \ar[d]^{S(\sigma_\x)} \\
& & S(S(A) \x S^2(B)) \ar[d]^{S(1 \x (\epsilon\delta))} \\
S(A \x B) \ar[d]_{\delta} & & S(S(A) \x S^2(B)) \ar[d]^{\psi} \\
S^2(A \x B) \ar[rr]_{S(\sigma_\x)} & & S(S(A) \x S(B)) }$$
\item The forcefulness of counit:
$$\xymatrix{S(A \x S^2(B)) \ar[d]_{\psi} \ar[rr]^{S(1 \x \epsilon)} & & S(A \x S(B)) \ar[d]^{\psi} \\
S(A \x S(B)) \ar[rr]_{\psi} & & S(A \x B) }$$
\item The forcefulness of comultiplication:
$$\xymatrix{S(A \x B) \ar[r]^{\delta} \ar@{=}[d] & S^2(A \x B)
\ar[r]^{\!\!\!\!\!S(\sigma_\x)} & S(S(A) \x S(B)) \ar[d]^{\psi} \\
S(A \x B) & & S(S(A) \x B) \ar[ll]^{S(\epsilon \x 1)}}$$
\item Commutativity of force:
$$\xymatrix{S(S(A) \x S(B)) \ar[d]_{\psi'} \ar[rr]^{\psi} & & S(S(A) \x B) \ar[d]^{\psi'} \\
S(A \x S(B)) \ar[rr]_{\psi} & & S(A \x B)}$$
where $\psi': = S(c_\x) \psi S(c_\x)$ is the symmetric dual of the force.
\end{enumerate}
\end{definition}
\noindent
We shall say that a comonad is {\bf forceful} if it comes equipped with
a force. First we observe the following.
\begin{proposition}
In any Cartesian storage category $\X$ the comonad $\check{\S}$ on the linear maps has a force.
\end{proposition}
\proof
If $\X$ is a Cartesian storage category we may define $\psi = S(\theta)\epsilon$:
by inspection this is a linear map ({\em i.e.} in ${\cal L}[]$).
We must check it satisfies all the given properties.
\begin{enumerate}[{\bf [{Force}.1]}]
\item The interaction of associativity and strength gives the equation $(1 \x \theta)\theta S(a_\x) = a_\x \theta$. We shall use this and the
fact that $S(\varphi h)\epsilon = h$ for linear $h: S(X) \to Y$ in the calculation:
\begin{eqnarray*}
\delta S(\sigma_\x) S(\epsilon \x \psi) \psi S(a_\x)
& = & S( \varphi \delta S(\sigma_\x) S(\epsilon \x \psi) \psi S(a_\x)) \epsilon \\
& = & S( \varphi \varphi S(\sigma_\x) S(\epsilon \x \psi) \psi S(a_\x)) \epsilon \\
& = & S( \varphi \sigma_\x (\epsilon \x \psi) \varphi \psi S(a_\x)) \epsilon \\
& = & S( (\varphi \x \varphi) (\epsilon \x \psi) \varphi \psi S(a_\x)) \epsilon \\
& = & S((1 \x \theta)\theta S(a_\x)) \epsilon \\
& = & S(a_\x \theta) \epsilon = S(a_\x) \psi
\end{eqnarray*}
\item The projection of force:
\begin{eqnarray*}
\psi S(\pi_1) & = & S(\theta)\epsilon S(\pi_1) \\
& = & S(\theta S(\pi_1)) \epsilon \\
& = & S(\pi_1) \epsilon
\end{eqnarray*}
\item Forceful naturality:
\\
We shall use the couniversal properties repeatedly: to show $\psi \delta S(\sigma_\x) = \delta S(\sigma_\x(1 \x \epsilon\delta))\psi$ it suffices,
as both maps are linear, to show that they are equal when prefixed with $\varphi$:
\begin{eqnarray*}
\varphi \psi \delta S(\sigma_\x) & = & \theta \delta S(\sigma_\x) \\
& = & (1 \x \delta) \theta S(\theta \sigma_\x) \\
\varphi \delta S(\sigma_\x (1 \x (\epsilon\delta))) \psi & = & \varphi \varphi S(\sigma_\x (1 \x (\epsilon\delta))) \psi \\
& = & \varphi (\sigma_\x (1 \x (\epsilon\delta))) \theta \\
& = & (\varphi \x \varphi) (1 \x (\epsilon\delta)) \theta \\
& = & (\varphi \x \delta) \theta
\end{eqnarray*}
The resulting expressions are both linear in the second argument so that they are equal provided precomposing with $1 \x \varphi$ makes them
equal:
\begin{eqnarray*}
(1 \x \varphi) (1 \x \delta) \theta S(\theta \sigma_\x)
& = & (1 \x (\varphi\varphi)) \theta S(\theta \sigma_\x) \\
& = & (1 \x \varphi) \varphi S(\theta \sigma_\x) \\
& = & (1 \x \varphi) \theta \sigma_\x \varphi \\
& = & \varphi \sigma_\x \varphi = (\varphi \x \varphi)\varphi \\
(1 \x \varphi) (\varphi \x \delta) \theta
& = & (\varphi \x \varphi)(1 \x \varphi) \theta \\
& = & (\varphi \x \varphi)\varphi
\end{eqnarray*}
This establishes the equality.
\item For the forcefulness of the counit both sides are linear and so
equal if they are equal after precomposing with $\varphi$. However,
$\varphi\psi\psi = \theta\psi$ and $\varphi S(1 \x \epsilon)\psi = (1 \x
\epsilon) \varphi\psi = (1 \x \epsilon) \theta$. These last are linear
in their second argument and, therefore, are equal if and only if
precomposing with $1 \x \varphi$ makes them equal: $$(1 \x \varphi)
\theta \psi = \varphi \psi = \theta = (1 \x \varphi) (1 \x \epsilon)
\theta$$
\item The forcefulness of the comultiplication:
\begin{eqnarray*}
\delta S(\sigma_\x) \psi S(\epsilon \x 1) & = & \delta S(\sigma_\x \theta) \epsilon S(\epsilon \x 1) \\
& = & \delta S(\sigma_\x \theta S(\epsilon \x 1)) \epsilon \\
& = & S(\varphi \sigma_\x (\epsilon \x 1) \theta) \epsilon \\
& = & S((\varphi \x \varphi) (\epsilon \x 1) \theta) \epsilon \\
& = & S((1 \x \varphi) \theta) \epsilon \\
& = & S( \varphi ) \epsilon = 1
\end{eqnarray*}
\item Commutativity of the force:
\begin{eqnarray*}
\psi \psi' & = & S(\theta) \epsilon S(\theta') \epsilon \\
& = & S(\theta S(\theta')) \epsilon \epsilon \\
& = & S(\theta'S(\theta)) \epsilon \epsilon \\
& = & S(\theta')\epsilon S(\theta) \epsilon \\
& = & \psi' \psi
\end{eqnarray*}
\end{enumerate}
\endproof
\noindent Conversely:
\begin{proposition}
\label{abstract-coKleisli-basic}
Let $\S$ be a forceful comonad on a Cartesian category $\X$; then the
coKleisli category $\X_\S$ is a Cartesian storage category.
\end{proposition}
\proof
The coKleisli category is immediately an abstract coKleisli category (see \cite{fuhrmann}): it remains only to show that
all the data ($S$, $\varphi$, and $\epsilon$) are strong, and that the induced monad is commutative.
The following is the interpretation for the data of the Cartesian storage category:
\begin{eqnarray*}
\llbracket \varphi \rrbracket & := & 1_{S(A)} \\
\llbracket \epsilon \rrbracket & := & \epsilon\epsilon \\
\llbracket S( f ) \rrbracket & := & \epsilon\delta S(\llbracket f \rrbracket) \\
\llbracket \theta \rrbracket & := & \psi \\
\llbracket f \x g \rrbracket & := & \< S(\pi_0)\llbracket f \rrbracket,S(\pi_1)\llbracket g \rrbracket \>
= \sigma_\x (\llbracket f \rrbracket \x \llbracket g \rrbracket) \\
\llbracket \Delta \rrbracket & := & \epsilon \Delta \\
\llbracket \pi_i \rrbracket & := &\epsilon \pi_i \\
\llbracket a_\x \rrbracket & := & \epsilon a_\x \\
\llbracket f g \rrbracket & := & \delta S(\llbracket f \rrbracket)\llbracket g \rrbracket.
\end{eqnarray*}
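As a quick sanity check on this interpretation (a routine calculation which we record for the reader's convenience), the storage category equation $\varphi\epsilon = 1$ does hold:
$$\llbracket \varphi\epsilon \rrbracket = \delta S(\llbracket \varphi \rrbracket) \llbracket \epsilon \rrbracket
= \delta S(1_{S(A)})\epsilon\epsilon = \delta\epsilon\epsilon = \epsilon$$
which is precisely the identity of the coKleisli category.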
First we must show that $\bm\theta$ satisfies the requirements of being a strength
for ${\bf S}$ with respect to the product. This may be expressed as the following two requirements together with naturality:
$$\xymatrix{A \x S(X) \ar[r]^{\theta} \ar[rd]_{\pi_1} & S(A \x X) \ar[d]^{ S(\pi_1)} \\
& S(X)}
~~~~~~~~
\xymatrix{A \x (B \x S(X)) \ar[d]_{a_\x} \ar[rr]^{1 \x \theta}
& & A \x S(B \x X) \ar[r]^{\theta} & S(A \x (B \x X)) \ar[d]^{S(a_\x)} \\
(A \x B) \x S(X) \ar[rrr]_{\theta} & & & S((A \x B) \x X)}$$
The two diagrams are verified by the following calculations:
\begin{eqnarray*}
\llbracket \theta \bullet S(\pi_1) \rrbracket
& = & \delta S(\psi) \epsilon \delta S(\epsilon \pi_1) \\
& = & \psi S(\pi_1) = S(\pi_1) \epsilon = \epsilon \pi_1 \\
& = & \llbracket \pi_1 \rrbracket \\
\llbracket (1 \x \theta) \bullet \theta \bullet S(a_\x) \rrbracket
& = & \delta S(\< S(\pi_0)\epsilon,S(\pi_1) \psi \>) \delta S(\psi) \epsilon \delta S(\epsilon a_\x) \\
& = & \delta S(\< S(\pi_0)\epsilon,S(\pi_1) \psi \>) \psi S(a_\x) \\
& = & S(a_\x) \psi \\
& = & \llbracket a_\x \bullet \theta \rrbracket
\end{eqnarray*}
For naturality we have:
\begin{eqnarray*}
\llbracket (f \x S(g)) \theta \rrbracket & = & \delta S(\sigma_\x(f \x (\epsilon\delta S(g)))) \psi \\
& = & \delta S(\sigma_\x(1\x (\epsilon\delta))) S(f \x S(g)) \psi \\
& = & \delta S(\sigma_\x(1\x (\epsilon\delta))) \psi S(f \x g) \\
& = & \psi \delta S(\sigma_\x) S(f \x g) \\
& = & \delta S(\psi) \epsilon\delta S(\sigma_\x(f \x g)) = \llbracket \theta S(f \x g) \rrbracket
\end{eqnarray*}
Next we must verify that both $\bm\varphi$ and $\bm\epsilon_S$
({\em i.e.} $\bm\mu$) are strong:
$$\xymatrix{A \x X \ar[drr]_{ \varphi} \ar[rr]^{ 1 \x \varphi} & & A \x S(X) \ar[d]^{ \theta} \\
& & S(A \x X) }
~~~~~~~~
\xymatrix{A \x S^2(X) \ar[d]_{ 1 \x \epsilon_S} \ar[r]^{ \theta} &
S(A \x S(X)) \ar[r]^{S(\theta)} & S^2(A \x X) \ar[d]^{ \epsilon_S} \\
A\x S(X) \ar[rr]_{\theta} & & S(A \x X) }
$$
\begin{eqnarray*}
\llbracket (1 \x \varphi) \theta \rrbracket
& = & \delta S(\sigma_\x (\epsilon \x 1)) \psi \\
& = & \delta S(\sigma_\x) \psi S(\epsilon \x 1) \\
& = & 1 = \llbracket \varphi \rrbracket \\
\llbracket (1 \x \epsilon) \theta \rrbracket
& = & \delta S(\sigma_\x (\epsilon \x \epsilon \epsilon)) \psi \\
& = & \delta S(\sigma_\x (\epsilon \x \epsilon)) S(1 \x \epsilon) \psi \\
& = & \delta S(\sigma_\x (\epsilon \x \epsilon)) \psi \psi \\
& = & \psi \psi \\
& = & \delta S(\psi\delta S(\psi)\epsilon)\epsilon \\
& = & \delta S(\psi)\delta S(\epsilon\delta S(\psi))\epsilon\epsilon \\
& = & \llbracket \theta S( \theta) \epsilon \rrbracket
\end{eqnarray*}
where we used the following equality
$\delta S(\sigma_\x (\epsilon \x \epsilon)) = 1_{S(A \x B)}$ which holds as
$$\delta S(\sigma_\x (\epsilon \x \epsilon))
= \delta S(\<S(\pi_0),S(\pi_1)\>(\epsilon \x \epsilon))
= \delta S(\<\epsilon\pi_0,\epsilon\pi_1\>)
= \delta S(\epsilon \<\pi_0,\pi_1\>)
= \delta S(\epsilon) = 1.$$
Lastly we must show that the induced monad is commutative, that is:
$$\xymatrix{ S(A) \x S(B) \ar[d]_{\theta'} \ar[rr]^{\theta} & & S(S(A) \x B) \ar[rr]^{S(\theta')} & & S^2(A \x B) \ar[d]^{\mu} \\
S(A \x S(B)) \ar[rr]_{S(\theta)} & & S^2(A \x B) \ar[rr]_{\mu} & & S(A \x B)}$$
\begin{eqnarray*}
\llbracket \theta S(\theta') \mu \rrbracket
& = & \delta S(\psi) \delta S(\epsilon\delta S(\psi')) \epsilon\epsilon \\
& = & \delta S(\psi \delta S(\psi') \epsilon) \epsilon \\
& = & \psi \psi' = \psi'\psi \\
& = & \delta S(\psi' \delta S(\psi) \epsilon) \epsilon \\
& = & \delta S(\psi') \delta S(\epsilon\delta S(\psi)) \epsilon\epsilon \\
& = & \llbracket \theta' S(\theta) \mu \rrbracket .
\end{eqnarray*}
\endproof
We shall call a Cartesian category with a forceful comonad a {\bf
Cartesian linear category}. We may summarize the
above results as follows:
\begin{theorem}
\label{exact-comonad}
A category is a Cartesian storage category if and only if it is the
coKleisli category of a Cartesian linear category. A Cartesian linear category
is the linear maps of a Cartesian storage category if and only if its comonad is exact.
\end{theorem}
Recall that a comonad is exact when the commuting diagram:
$$S(S(X)) \Two^{S(\epsilon)}_{\epsilon} S(X) \to^{\epsilon} X$$
is a coequalizer. A category with an exact comonad is always the
subcategory of $\epsilon$-natural maps of its coKleisli category. This
allows the original category to be completely recovered from the
coKleisli category.
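Unpacking this coequalizer explicitly may be helpful (the notation $h'$ below is merely ad hoc for the induced comparison map): whenever a map $h: S(X) \to Y$ satisfies $S(\epsilon) h = \epsilon h$, there is a unique $h': X \to Y$ such that
$$\epsilon\, h' = h.$$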
\medskip
Starting with a forceful comonad $\S$ on a Cartesian category $\X$ one can directly form the
simple fibration of the coKleisli category. The total category can be described by:
\begin{description}
\item{{\bf Objects:}} $(A,X)$ where $A,X \in \X$;
\item{{\bf Maps:}} $(f,g): (A,X) \to (B,Y)$ where $f: S(A) \to B$ and $g:S(A \x X) \to Y$;
\item{{\bf Identities:}} $(\epsilon,S(\pi_1)\epsilon): (A,X) \to (A,X)$;
\item{{\bf Composition:}} $(f,g)(f',g') = (\delta S(f) f',\delta S(\<S(\pi_0)f,g\>)g')$.
\end{description}
That this amounts to the simple fibration $\partial: \X_\S[\X_\S] \to \X_\S; (f,g) \mapsto f$ is easily checked.
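As a sanity check on these formulas (the calculation uses only the naturality of $\epsilon$ and the comonad identity $\delta\epsilon = 1$), the given pairs are indeed right identities for this composition:
\begin{eqnarray*}
(f,g)(\epsilon,S(\pi_1)\epsilon) & = & (\delta S(f)\epsilon, \delta S(\<S(\pi_0)f,g\>\pi_1)\epsilon) \\
& = & (\delta \epsilon f, \delta S(g)\epsilon) = (f, \delta\epsilon\, g) = (f,g).
\end{eqnarray*}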
\begin{remark}{}
\rm
One might expect there to be more interactions between the comultiplication and the force. In fact, there are,
but they are often not so obvious! Here is an example:
$$\xymatrix{S(A \x S(B)) \ar[d]_{\psi} \ar[rr]^{S(1 \x \delta)} & & S(A \x S^2(B)) \ar[d]^{\psi} \\
S(A \x B) & & S(A \x S(B)) \ar[ll]^{\psi}}$$
An easy, but perhaps rather unsatisfactory, way to check this is by
looking in the corresponding Cartesian storage category, where we have:
\begin{eqnarray*}
S(1 \x \delta) \psi \psi & = & S(1 \x S(\varphi)) S(\theta) \epsilon S(\theta) \epsilon \\
& = & S((1 \x S(\varphi))\theta S(\theta)) \epsilon \epsilon \\
& = & S(\theta S((1 \x \varphi)\theta) \epsilon) \epsilon \\
& = & S(\theta \varphi \epsilon) \epsilon = S(\theta)\epsilon = \psi
\end{eqnarray*}
However, one might reasonably want a direct proof. We shall use the following equality
$$\delta S(\sigma_\x (\epsilon \x \epsilon)) = 1_{S(A \x B)}$$
which holds since
$$\delta S(\sigma_\x (\epsilon \x \epsilon))
= \delta S(\<S(\pi_0),S(\pi_1)\>(\epsilon \x \epsilon))
= \delta S(\<\epsilon\pi_0,\epsilon\pi_1\>)
= \delta S(\epsilon \<\pi_0,\pi_1\>)
= \delta S(\epsilon) = 1.$$
Here is the calculation:
\begin{eqnarray*}
\psi & = & S(1 \x \delta S(\epsilon))\psi = S(1 \x \delta) \psi S(1 \x \epsilon) \\
& = & S(1 \x \delta) \psi \delta S(\sigma_\x) \psi S(\epsilon \x 1) S(1 \x \epsilon) \\
& = & S(1 \x \delta) \psi \delta S(\sigma_\x) \psi S(\epsilon \x \epsilon) \\
& = & S(1 \x \delta) \delta S(\sigma_\x) S(1 \x (\epsilon\delta)) \psi \psi S(\epsilon \x \epsilon) \\
& = & S(1 \x \delta) \delta S(\sigma_\x) S(1 \x (\epsilon\delta)) \psi S(\epsilon \x S(\epsilon)) \psi \\
& = & S(1 \x \delta) \delta S(\sigma_\x) S(1 \x (\epsilon\delta)) S(\epsilon \x S^2(\epsilon)) \psi \psi \\
& = & S(1 \x \delta) \delta S(\sigma_\x) S(\epsilon \x (\epsilon\delta S^2(\epsilon))) \psi \psi \\
& = & S(1 \x \delta) \delta S(\sigma_\x(\epsilon \x \epsilon)) S(1 \x (S(\epsilon) \delta)) \psi \psi \\
& = & S(1 \x \delta) S(1 \x (S(\epsilon) \delta)) \psi \psi \\
& = & S(1 \x \delta) \psi \psi
\end{eqnarray*}
\end{remark}
\bigskip
We now have the following theorem, which summarizes the main results of this section:
\begin{theorem}
For a Cartesian category $\X$, the following are equivalent:
\begin{itemize}
\item $\X$ is a Cartesian storage category;
\item $\X$ is a strong abstract commutative coKleisli category;
\item $\X$ is the coKleisli category of a Cartesian category with a forceful comonad.
\end{itemize}
\end{theorem}
\section{Tensor storage categories}
\label{tensor-storage-cats}
Our objective is now to link our development to the categorical semantics of linear
logic through the ``storage categories'' which were developed from the ideas introduced in \cite{Seely}.
The original definition of these categories required a comonad $S$ on a $*$-autonomous
category and natural isomorphisms $s_\x: S(A \x B) \to S(A) \ox S(B)$
and $s_1: S(1) \to \top$. Subsequently Gavin Bierman realized that, in order
to ensure that equivalent proofs of Multiplicative Exponential Linear Logic (MELL)
were sent to the same map, the comonad actually had to be monoidal. Bierman called these
categories, in which this new requirement was added, ``new Seely'' categories.
Bierman's examination of MELL also revealed that, even in the
absence of additives, the coKleisli category of a symmetric monoidal
category with an exponential would have products and, furthermore, when
the original category was closed, would be Cartesian closed. Thus, it
seemed that the additive structure was not really necessary for the
theory. Bierman called the categories which provided models for MELL
{\em linear categories}. In this manner, the additive structure, basic in Seely's
original definition, was relegated to a secondary and largely optional
role.
Andrea Schalk \cite{Schalk} collected the
various axiomatizations of Seely categories originating from this work,
and removed the requirement that the category be closed. She showed that
this was an orthogonal property whose main purpose was to ensure the
coKleisli category was Cartesian closed. She called the modalities {\em
linear exponential comonads}, and thus replaced linear
categories with symmetric monoidal categories with a linear exponential
comonad.
Even more recently, Paul-Andr\'e Melli\`es \cite{Mellies}, while revisiting
``categorical models of linear logic'', concentrated entirely upon the
exponential structure: notably $*$-autonomous categories hardly rated a
mention and the additives are reduced to at most products---although
closedness is assumed throughout. Of note was the emphasis that was
placed on the role of the monoidal adjunction which was induced between
the linear and Cartesian categories. Of particular interest was an
axiomatization for Seely's original ideas which showed that rather than
demand that the comonad be a monoidal comonad one could obtain the same
effect by axiomatizing the Seely isomorphism itself more carefully. Here
we shall follow this idea and, adapting it somewhat, obtain a convenient
description of the variety of Seely category in which we are interested.
These are ``new Seely categories'', but following Andrea Schalk's lead, the
requirement of closedness is dropped.
One thing that should be emphasized is that, in this exposition of Seely
categories, we are focusing on the product structure: the monoidal
structure is regarded as secondary, indeed, even generated by products.
This is absolutely the opposite to the general trend in the work cited
above where an underlying theme is to decompose the product structure
into a more fundamental tensorial structure. Mathematically, of course,
there is no tension between these approaches, as in these settings the
product and tensor are linked and it should be no surprise that the
linkage can be worked in both directions. However, there is perhaps a
philosophical message, as it challenges the precept of what should be
taken as primary. Seemingly in spite of this, we shall call the current
notion ``tensor storage categories'', as they are very similar to the
``storage categories'' we considered in \cite{diffl}, in the context of
(tensor) differential categories.
The section begins with an exposition of tensor storage categories: we start
from the definition of a storage transformation and explain why this is
the same as a coalgebra modality. A tensor storage category is then defined to be
a symmetric monoidal category with products and a storage transformation
which is an isomorphism. We then prove that this implies that the
modality is a monoidal comonad and, thus, that we do indeed obtain
(modulo the relaxing of the closed requirement) what Gavin Bierman
called a ``new Seely'' category.
Next, we return to the main theme of the paper, and formulate the notion of tensorial
representation in a Cartesian storage category. We then show that the linear
maps of a Cartesian storage category, which has persistent tensorial representation,
always form a tensor storage category. Conversely, the coKleisli category of a
tensor storage category is a Cartesian storage category with tensorial representation.
Thus, tensor storage categories, in which the comonad is exact, correspond {\em precisely\/} to
the linear maps of Cartesian storage categories which have persistent tensorial representation.
\subsection{Coalgebra modalities}
\label{coalgebra-modality}
Let \X be a symmetric monoidal category with tensor
$(\ox,a_\ox,c_\ox, u^L_\ox,u^R_\ox)$ which has
products and a comonad $(S,\delta,\epsilon)$.
\begin{definition} \label{storage-trans}
A {\bf storage transformation} is a symmetric comonoidal transformation
$s: S \to S$ from $\X$, regarded as a symmetric monoidal category with respect
to the Cartesian product, to $\X$, regarded as a symmetric monoidal category with
respect to the tensor, for which $\delta$ is a comonoidal transformation.
\end{definition}
Thus, a storage transformation is a natural transformation
$s_2:S(X \x Y) \to S(X) \ox S(Y)$ and a map $s_0: S(1) \to \top$ satisfying:
$$\xymatrix{S((X \x Y) \x Z) \ar[d]_{S(a_\x)} \ar[r]^{s_2} & S(X \x Y)
\ox S(Z) \ar[r]^{s_2 \ox 1~~~}
& (S(X) \ox S(Y)) \ox S(Z) \ar[d]^{a_\ox} \\
S(X \x (Y \x Z)) \ar[r]_{s_2} & S(X) \ox S(Y \x Z)
\ar[r]_{1\ox s_2~~~}
& S(X) \ox (S(Y) \ox S(Z)) }$$
$$\xymatrix{S(1 \x X) \ar[d]_{S(\pi_1)} \ar[r]^{s_2~~} & S(1) \ox S(X) \ar[d]^{s_0 \ox 1} \\
S(X) & \top \ox S(X) \ar[l]^{u^L_\ox}} ~~~~~~
\xymatrix{S(X \x 1) \ar[d]_{S(\pi_0)} \ar[r]^{s_2~~} & S(X) \ox S(1) \ar[d]^{1 \ox s_0} \\
S(X) & S(X) \ox \top \ar[l]^{u^R_\ox}}$$
$$\xymatrix{S(X \x Y) \ar[d]_{S(c_\x)}\ar[r]^{s_2~~} & S(X) \ox S(Y) \ar[d]^{c_\ox} \\
S(Y \x X) \ar[r]_{s_2~~} & S(Y) \ox S(X) }$$
In general, a natural transformation is a comonoidal transformation if it respects the comonoidal structure
in the sense that the following diagrams commute:
$$\xymatrix{F(X \x Y) \ar[d]_{\sigma_2^F} \ar[r]^{\alpha} & G(X \x Y) \ar[d]^{\sigma_2^G} \\
F(X) \ox F(Y) \ar[r]_{\alpha \ox \alpha} & G(X) \ox G(Y)} ~~~~~
\xymatrix{F(1)\ar[dr]_{\sigma_0^F} \ar[rr]^\alpha & & G(1) \ar[dl]^{\sigma_0^G} \\ & \top}$$
A comonad is comonoidal if all its transformations are. However, for the comonad $(S,\delta,\epsilon)$
it makes no sense to insist that $\epsilon$ is comonoidal, as the identity functor is not comonoidal (from $\X$ with products to $\X$ with tensor).
On the other hand, it does make sense to require that $\delta$ be comonoidal. Recall first that, with respect
to the product, every functor is canonically comonoidal with:
$$\sigma_2^\x = \< S(\pi_0),S(\pi_1) \>: S(X \x Y) \to S(X) \x S(Y)
~~\mbox{and}~~ \sigma_0^\x = \<\>: S(1) \to 1.$$
This allows us to express the requirement that $\delta$ be comonoidal as follows:
$$\xymatrix{S(X \x Y) \ar[dd]_{s_2} \ar[r]^{\delta} & S(S(X \x Y)) \ar[d]^{\sigma_2^\x} \\
& S(S(X) \x S(Y)) \ar[d]^{s_2} \\
S(X) \ox S(Y) \ar[r]_{\delta \ox \delta~~~~} & S(S(X)) \ox S(S(Y))}
~~~~ \xymatrix{S(1) \ar[ddr]_{s_0} \ar[r]^{\delta~} & S(S(1)) \ar[d]^{S(\sigma_0^\x)} \\
& S(1) \ar[d]^{s_0} \\
& \top}$$
\begin{definition} \label{coalg-modality}
A symmetric monoidal category \X with products has a {\bf commutative coalgebra modality} in
case there is a comonad $(S,\delta,\epsilon)$ such that each $S(X)$ is naturally a cocommutative
comonoid $(S(X),\Delta,e)$ and $\delta$ is a homomorphism of these comonoids:
$$\xymatrix{S(X) \ar[d]_{\Delta} \ar[r]^{\delta} & S(S(X)) \ar[d]^{\Delta} \\
S(X) \ox S(X) \ar[r]_{\delta \ox \delta~~~~} & S(S(X)) \ox S(S(X))}
~~~~ \xymatrix{S(X) \ar[dr]_e \ar[rr]^{\delta~} & & S(S(X)) \ar[dl]^{e} \\ & \top}$$
\end{definition}
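An example, included only as an aside to fix intuitions: in the category of sets and relations, with the tensor the product of sets (so $\top$ is a one-element set) and the product the disjoint union, taking $S(X)$ to be the finite multisets on $X$ gives the standard exponential modality of linear logic. Here $\Delta$ relates a multiset to the ways of splitting it in two, while $e$ relates the empty multiset to the point of $\top$:
$$\Delta = \{ (m,(m_1,m_2)) ~|~ m = m_1 + m_2 \} \qquad e = \{ ([~],\ast) \}.$$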
\noindent
We now observe that:
\begin{proposition}
For a symmetric monoidal category with products, having a comonad with a symmetric
storage transformation is equivalent to having a cocommutative coalgebra modality.
\end{proposition}
\proof
\begin{description}
\item[$(\Rightarrow)$]
If one has a storage transformation then one can define natural
transformations $\Delta, e$ as
$\Delta = S(\Delta_\x) s_2: S(X) \to S(X) \ox S(X)$ and $e = S(\<\>) s_0: S(X) \to \top$.
As (symmetric) comonoidal functors preserve (commutative) comonoids these do define comonoids.
Further, since $\delta$ is comonoidal as a transformation it becomes a homomorphism of the induced
comonoids. This means that we have a (cocommutative) coalgebra modality.
\item[$(\Leftarrow)$]
Conversely given a (cocommutative) coalgebra modality on a (symmetric) monoidal category we
may define $s_2 = \Delta (S(\pi_0) \ox S(\pi_1)): S(X \x Y) \to S(X) \ox S(Y)$ and
$s_0 = e: S(1) \to \top$. Clearly $s_2$ is a natural transformation and, using the fact that
$\Delta$ is coassociative, it is easily seen to satisfy the first comonoidal requirement. For the
last we have:
$$s_2(1 \ox s_0) u^R_\ox = \Delta(S(\pi_0) \ox S(\pi_1))(1 \ox e)u^R_\ox
= \Delta(1 \ox e)u^R_\ox S(\pi_0) = S(\pi_0).$$
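For completeness, the remaining unit requirement follows by an entirely symmetric calculation:
$$s_2(s_0 \ox 1) u^L_\ox = \Delta(S(\pi_0) \ox S(\pi_1))(e \ox 1)u^L_\ox
= \Delta(e \ox 1)u^L_\ox S(\pi_1) = S(\pi_1).$$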
Thus these maps do provide comonoidal structure for $S$. It remains to show that $\delta$ is
a comonoidal transformation:
\begin{eqnarray*}
\delta S(\sigma_2^\x) s_2 & = & \delta S(\sigma_2^\x) \Delta (S(\pi_0) \ox S(\pi_1))\\
& = & \delta \Delta (S(\sigma_2^\x) \ox S(\sigma_2^\x)) (S(\pi_0) \ox S(\pi_1)) \\
& = & \delta \Delta (S(\sigma_2^\x \pi_0) \ox S(\sigma_2^\x \pi_1)) \\
& = & \delta \Delta (S(S(\pi_0)) \ox S(S(\pi_1))) \\
& = & \Delta (\delta \ox \delta)(S(S(\pi_0)) \ox S(S(\pi_1))) \\
& = & \Delta (S(\pi_0) \ox S(\pi_1)) (\delta \ox \delta) \\
& = & s_2 (\delta \ox \delta)
\end{eqnarray*}
\end{description}
\endproof
This not only provides an alternative way to describe a commutative coalgebra modality but also
leads us into the definition of a tensor storage category:
\begin{definition}
A {\bf tensor storage category} is a symmetric monoidal category with a
comonad $(S,\delta,\epsilon)$ which has a storage transformation which
is an isomorphism ({\em i.e.\/} both $s_2$ and $s_0$ are isomorphisms).
\end{definition}
Thinking of tensor storage categories as ``Seely categories'', one
notices that this is not the usual definition; in \cite{Mellies}
this definition (essentially) is given as a theorem which relates this
description to the more standard definition given by Bierman and adapted
by Schalk.
First, we shall compare tensor storage categories to models of the
multiplicative exponential fragment of linear logic (which we shall {\em
not\/} assume includes being closed). We shall simply call these linear
categories as they are Bierman's linear categories without the
requirement of closure. Thus, a {\bf linear category} is a symmetric
monoidal category $\X$ with a coalgebra modality on a symmetric monoidal
comonad $(S, \epsilon,\delta,m_\ox,m_\top)$ such that $\Delta$ and $e$
are both monoidal transformations and coalgebra morphisms.
The requirements of being a linear category are worth expanding.
The requirement that $e: S(A) \to \top$ be a monoidal transformation
amounts to two coherence diagrams:
$$\xymatrix{\top \ar@{=}[dr] \ar[r]^{m_\top~} & S(\top) \ar[d]^e \\ & \top}
~~~~ \xymatrix{S(A) \ox S(B) \ar[d]_{e \ox e} \ar[r]^{~m_\ox} & S(A \ox B) \ar[d]^e \\ \top \ox \top \ar[r]_u & \top}$$
The requirement that $\Delta: S(A) \to S(A) \ox S(A)$ is a monoidal
transformation amounts to two coherence requirements:
$$\xymatrix{\top \ar[d]_u \ar[r]^{m_\top}& S(\top) \ar[d]^{\Delta} \\
\top \ox \top \ar[r]_{m_\top \ox m_\top~~~} & S(\top) \ox S(\top)}
~~~~ \xymatrix{S(A) \ox S(B) \ar[d]_{\Delta \ox \Delta} \ar[rr]^{m_\ox} & & S(A \ox B) \ar[dd]^{\Delta} \\
S(A) \ox S(A) \ox S(B) \ox S(B) \ar[d]_{\ex_\ox} \\
S(A) \ox S(B) \ox S(A) \ox S(B) \ar[rr]_{~~~m_\ox \ox m_\ox} & & S(A \ox B)\ox S(A \ox B)}$$
Requiring that $e: S(A) \to \top$ forms a coalgebra morphism amounts to
the requirement that
$$\xymatrix{S(A) \ar[r]^{\delta} \ar[d]_e & S^2(A) \ar[d]^{S(e)} \\ \top \ar[r]_{m_\top} & S(\top)}$$
commutes. Finally requiring that $\Delta$ is a coalgebra morphism amounts to:
$$\xymatrix{S(A) \ar[d]_{\Delta} \ar[rr]^{\delta} & & S^2(A) \ar[d]^{S(\Delta)} \\
S(A)\ox S(A) \ar[r]_{\delta \ox \delta~~} & S^2(A)\ox S^2(A) \ar[r]_{m_\ox~} & S(S(A) \ox S(A))}$$
In his definition, Bierman had another requirement,
namely that whenever $f: S(A) \to S(B)$ is a coalgebra morphism it must
also be a comonoid morphism. We have amalgamated this into the
definition of a coalgebra modality:
\begin{lemma} In a linear category there is the following implication of commutative diagrams
$$\infer{\xymatrix{S(A) \ar[d]_{\Delta} \ar[r]^f & S(B) \ar[d]^{\Delta} \\
S(A) \ox S(A) \ar[r]_{f \ox f} & S(B) \ox S(B)}
}{\xymatrix{S(A) \ar[d]_{\delta} \ar[r]^f & S(B) \ar[d]^{\delta} \\
S^2(A) \ar[r]_{S(f)} & S^2(B)}
}$$
\end{lemma}
\proof
We use the fact that $\delta$ is a morphism of comonoids and is a
section (with retraction $\epsilon$), so that post-composing with $\delta \ox \delta$ reflects the equality of maps.
This means the lower diagram commutes if and only if $f \Delta (\delta \ox \delta) = \Delta (f \ox f) (\delta \ox \delta)$
but for this we have:
\begin{eqnarray*}
f \Delta (\delta \ox \delta)
& = & f \delta \Delta = \delta S(f) \Delta = \delta \Delta (S(f) \ox S(f)) \\
& = & \Delta (\delta \ox \delta) (S(f) \ox S(f)) = \Delta ((\delta S(f)) \ox (\delta S(f))) = \Delta ((f \delta) \ox (f \delta)) \\
& = & \Delta (f \ox f) (\delta \ox \delta).
\end{eqnarray*}
\endproof
Note that this immediately means that if $f$ is a coalgebra morphism, it
is also then a comonoid morphism, and so Bierman's original definition
of a linear category corresponds to ours (modulo the absence of
closedness).
\begin{theorem}
A tensor storage category is a linear category and conversely a linear category with products is
a tensor storage category.
\end{theorem}
\proof
We rely on a combination of \cite{Schalk} and \cite{Mellies} to provide the
proof. In particular, the monoidal structure of the comonad is given by:
\begin{eqnarray*}
\top \to^{m_\top} S(\top) & = &\top \to^{s_0^{-1}} S(1) \to^\delta S^2(1) \to^{S(s_0)} S(\top) \\
S(A) \ox S(B) \to^{m_\ox} S(A \ox B)
& = & S(A) \ox S(B) \to^{s_2^{-1}} S(A \x B) \to^\delta S^2(A \x B) \\
& & \to^{S(s_2)} S(S(A) \ox S(B)) \to^{S(\epsilon \ox \epsilon)} S(A \ox B)
\end{eqnarray*}
The converse requires checking that a linear category with products
provides a storage isomorphism. The fact that there is a coalgebra
modality is part of the data above. What remains is to prove that the
induced storage transformation is an isomorphism and this is in the
literature above.
\endproof
\subsection{Tensor representation}
We now return to the main thread of the paper and consider the notion of tensorial representation for a Cartesian storage category. Our objective is to show
that a Cartesian storage category with tensorial representation is precisely the coKleisli category of a tensor storage category. To achieve this, however,
we must first take a detour to develop the notion of tensor representation. Recall that a basic intuition for a tensor product is that it should represent bilinear
maps: clearly this intuition can be expressed in any category with a system of linear maps:
\begin{definition}~
\begin{enumerate}[(i)]
\item
In any Cartesian category a system of linear maps has {\bf tensorial representation} in case for each $X$ and $Y$ there is an object $X \ox Y$ and
a bilinear map $\varphi_\ox: X \x Y \to X \ox Y$ such that for every bilinear map $g: X \x Y \to Z$ in $\X$
there is a unique linear map in $\X$ making the following diagram commute:
$$\xymatrix{X \x Y \ar[d]_{\varphi_\ox} \ar[rr]^g & & Z \\ X \ox Y\ar@{..>}[rru]_{g_{\ox}} }$$
\item
In any Cartesian category a system of linear maps has {\bf strong tensorial representation} in case for each $X$ and $Y$ there is an object $X \ox Y$ and
a bilinear map $\varphi_\ox: X \x Y \to X \ox Y$ such that for every bilinear map $g: X \x Y \to Z$ in $\X[A]$ there is a unique linear map in $\X[A]$ making
the above diagram commute. Note that this means in $\X$ we have the diagram:
$$\xymatrix{A \x X \x Y \ar[d]_{1 \x \varphi_\ox} \ar[rr]^g & & Z \\ A\x (X \ox Y) \ar@{..>}[rru]_{g_{\ox[A]}} }$$
\item
A system of linear maps is {\bf unit representable} in case there is a linear map $\varphi_\top: 1 \to \top$
such that in $\X$ for each point $p: 1 \to Z$ there is a unique linear point $p_\top: \top \to Z$ making
$$\xymatrix{1 \ar[d]_{\varphi_\top } \ar[rr]^{p} & & Z \\
\top \ar@{..>}[rru]_{p_{\top}}}$$
commute.
\item
A system of linear maps is {\bf strongly unit representable} in case it is representable in each simple slice. This means there
is a unique $p^{\top[A]}$ making
$$\xymatrix{A \x 1 \ar[d]_{1 \x \varphi_\top} \ar[rr]^{p} & & Y \\
A \x \top \ar@{..>}[rru]_{p^{\top[A]}}}$$
commute.
\item
A strong tensor representation is {\bf persistent} in case, in {(ii)} above, whenever $A = A_1 \x B \x A_2$ and the map $g$ is
linear in $B$, then $g_{\ox[A]}$ is linear in $B$. Similarly, for a strong tensor unit representation to be persistent requires, setting $A = A_1 \x B \x A_2$ in
{(iv)}, that whenever $p$ is linear in $B$ then $p^{\top[A]}$ is also linear in $B$.
\end{enumerate}
\end{definition}
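To fix intuitions, here is an illustrative example (an aside only): take $\X$ to be the category of commutative monoids with {\em arbitrary\/} functions as maps, and with the monoid homomorphisms as the system of linear maps. The bilinear maps are then the functions which are homomorphisms in each argument separately, and the usual tensor product of commutative monoids provides tensorial representation:
$$\varphi_\ox: X \x Y \to X \ox Y; (x,y) \mapsto x \ox y \qquad g_\ox(x \ox y) = g(x,y).$$
The unit is $\top = {\mathbb N}$ with $\varphi_\top$ picking out $1$: a point $p: 1 \to Z$ extends uniquely to the homomorphism $p_\top(n) = n \cdot p(\ast)$.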
As before, the basic form of tensorial representation assumes only that it holds in the original
category: to be a strong tensorial representation requires that the property also hold in every simple slice.
Thus, as for classification, there is a progression of
notions: tensor representation, strong tensor representation, and
persistent strong tensor representation, each of which demands more than the last. This also
applies to the unit representation; however, as we shall shortly
discover (see Lemma \ref{lem:basic_tensor_rep}), every storage category
already has a persistent strong unit representation. Thus, we shall
often talk of {\em tensorial representation} when we mean both tensor and
unit representation.
We first observe that persistence produces a simultaneous universal property:
\begin{lemma} If $f: A \x X \x Y \x Z \to W$ is linear in its last three
arguments, then there are unique maps linear in their last arguments, $f_1: A \x (X \ox (Y \ox Z)) \to W$ and
$f_2: A \x ((X \ox Y) \ox Z) \to W$, such that $(1 \x ((1 \x \varphi_\ox)\varphi_\ox)) f_1 = f = (1 \x ((\varphi_\ox \x 1)\varphi_\ox)) f_2$.
\end{lemma}
This is useful in the proof of:
\begin{proposition}
If $\X$ has a system of linear maps for which there is a persistent strong
tensorial (and unit) representation, then $\ox$ is a symmetric tensor
product with unit $\top$ on the subcategory of linear maps, ${\cal L}[]$.
\end{proposition}
\proof
When we have a persistent representation we can define the required isomorphisms for a tensor product:
\begin{enumerate}
\item The associativity isomorphism:
$$\xymatrix{A \x B\x C \ar[d]_{1 \x \varphi_\ox} \ar[rr]^{\varphi_\ox \x 1} &
& (A \ox B) \x C \ar[rr]^{\varphi_\ox} & & (A \ox B) \ox C \\
A \x (B \ox C) \ar[d]_{\varphi_\ox} \ar@{..>}[urrrr] \\
A \ox (B \ox C) \ar@{..>}[uurrrr]_{a_\ox}}$$
where the key is to observe that $(\varphi_\ox \x 1)\varphi_\ox: A \x B\x C \to (A \ox B) \ox C$ is trilinear so that
the two extensions indicated are given by the universal property.
\item The symmetry isomorphism:
$$\xymatrix{A \x B \ar[d]_{\varphi_\ox} \ar[r]^{c_\x} & B \x A \ar[r]^{\varphi_\ox} & B \ox A \\
A \ox B \ar@{..>}[rru]_{c_\ox}}$$
\item The unit isomorphisms:
$$\xymatrix{A \x 1 \ar[d]_{1 \x \varphi_\top} \ar[rr]^{\pi_0} & & A \\
A \x \top \ar[d]_{\varphi_\ox}\ar@{..>}[rru]^{(\pi_0)^{[A]}_\top}_{\!\!\!=\,\pi_0} \\
A \ox \top \ar@{..>}[rruu]_{u^R_\ox}}$$
Note that $(\pi_0)^{[A]}_\top = \pi_0$, since both fit in this diagram; moreover this map is bilinear, whence one can define $u^R_\ox$.
It is not obvious that $u^R_\ox$ is an isomorphism; however, the diagram itself suggests that its inverse is
$$(u^R_\ox)^{-1} = \pi_0^{-1}(1 \x \varphi_\top)\varphi_\ox: A \ox \top \to A$$
and it is easily checked that this works.
We can now obtain the unit elimination on the left using the symmetry map $u^L_\ox = c_\ox u^R_\ox$.
\end{enumerate}
The coherences now follow directly from the fact that the product is a symmetric tensor and the multi-universal
property of the representation.
For example, we need $(f+g) \ox h = f \ox h + g \ox h$, which follows
from the fact that $(f+g) \ox h$ is determined by $\varphi_\ox ((f+g) \ox
h)$:
\begin{eqnarray*}
\varphi_\ox ((f+g) \ox h)
&=_{\rlap{\scr[defn]}}\phantom{\mbox{\scr[$\varphi$ bilinear]}}& ((f+g) \x h) \varphi_\ox \\
&=_{\rlap{\scr[$\varphi_\ox$ bilinear]}}\phantom{\mbox{\scr[$\varphi$ bilinear]}}& (f \x h) \varphi_\ox + (g \x h) \varphi_\ox \\
&=_{\rlap{\scr[defn]}}\phantom{\mbox{\scr[$\varphi$ bilinear]}}& \varphi_\ox (f \ox h) + \varphi_\ox (g \ox h) \\
&=_{\rlap{\scr[left additive]}}\phantom{\mbox{\scr[$\varphi$ bilinear]}}& \varphi_\ox ((f \ox h) + (g \ox h))
\end{eqnarray*}
\endproof
When a Cartesian storage category has strong tensor representation, then this representation is automatically persistent.
To establish this we start by observing that a Cartesian storage category already has a fair amount of tensor representation.
\begin{lemma}\label{lem:basic_tensor_rep}
In any Cartesian storage category
\begin{enumerate}[{\em (i)}]
\item $\varphi: 1 \to S(1)$ gives persistent unit tensorial representation;
\item $m_\x: S(A) \x S(B) \to S(A \x B)$ gives persistent tensor representation for the objects $S(A)$ and $S(B)$.
\end{enumerate}
\end{lemma}
\proof
We shall focus on the binary tensorial representation: first note that $m_\x$ is bilinear,
as $m_\x = \theta S(\theta') \epsilon = \theta' S(\theta) \epsilon$, where the last two maps of each
expansion are linear, while $\theta$ is linear in its second argument and $\theta'$ is linear
in its first.
The universal property for an arbitrary bilinear $h$ is given by the following diagram, valid in any
slice $\X[X]$:
$$\xymatrix{A \x B \ar[drr]_{\varphi} \ar[rr]^{\varphi \x \varphi}
& & S(A) \x S(B) \ar[d]^{m_\x} \ar[rr]^{~~~~~~h=m_\x((\varphi \x \varphi)h)^\sharp} & & Z \\
& & S(A \x B) \ar@{..>}[rru]_{((\varphi \x \varphi)h)^\sharp} }$$
where $h = m_\x((\varphi \x \varphi)h)^\sharp$ by the simultaneous classification, so that the
righthand triangle commutes by the uniqueness of the universal property.
Furthermore this is a persistent representation as the classification is persistent.
\endproof
Notice that we have shown that the Seely isomorphisms $s_0: S(1) \to \top$ and
$s_2: S(A \x B) \to S(A) \ox S(B)$ are present. These isomorphisms, which are so central to the structure
of linear logic, play an important role in what follows.
\bigskip
The observation above means that a storage category always has (persistent and strong) representation of the tensor unit.
Thus, we now focus on understanding tensorial representation. A key observation is:
\begin{lemma} \label{bilinear-coequalizes}
In a Cartesian storage category $\X$, $f: A \x B \to C$ is
bilinear if and only if its classifying linear map $f^\sharp: S(A \x B) \to C$ coequalizes
the linear maps $(S(\epsilon)\ox S(\epsilon)) s^{-1}_2$ and $(\epsilon\ox\epsilon) s^{-1}_2$.
\end{lemma}
We shall use tensor notation even though we are not assuming that we have tensorial representation: this is justified by the fact that, for these particular objects,
we are {\em always\/} guaranteed representation by Lemma \ref{lem:basic_tensor_rep}. The translation of these maps, $S(\epsilon)\ox S(\epsilon)$ and
$\epsilon_{S(A)} \ox \epsilon_{S(B)}$, back into tensor-free notation uses the commuting diagrams below:
$$\xymatrix{S(S(A) \x S(B)) \ar[d]_{s_2} \ar[rr]^{S(\epsilon \x \epsilon)} & & S(A \x B) \ar[d]^{s_2} \\
S^2(A) \ox S^2(B) \ar[rr]_{S(\epsilon)\ox S(\epsilon)} & & S(A) \ox S(B) }$$
$$\xymatrix{S(S(A) \x S(B)) \ar[d]_{s_2} \ar[rr]^{\epsilon} & & S(A) \x S(B) \ar[rrd]_{\varphi_\ox} \ar[rr]^{m_\x} & & S(A \x B) \ar[d]^{s_2} \\
S^2(A) \ox S^2(B) \ar[rrrr]_{\epsilon\ox \epsilon} & && & S(A) \ox S(B) }$$
\proof
We shall use the characterization of bilinear maps in Proposition
\ref{prop:linmaps} to show that coequalizing these maps is equivalent to bilinearity.
Assume that $(S(\epsilon)\ox S(\epsilon)) s_2^{-1} f^\sharp = (\epsilon\ox\epsilon) s_2^{-1} f^\sharp$ so that
$$(\varphi \x \varphi) \varphi_\ox (S(\epsilon)\ox S(\epsilon)) s_2^{-1} f^\sharp = (\varphi \x \varphi) \varphi_\ox (\epsilon\ox\epsilon) s_2^{-1} f^\sharp$$
But then we have:
\begin{eqnarray*}
\lefteqn{(\varphi \x \varphi) \varphi_\ox (S(\epsilon)\ox S(\epsilon)) s_2^{-1} f^\sharp} \\
& = & (\varphi \x \varphi) (S(\epsilon)\x S(\epsilon)) \varphi_\ox s_2^{-1} f^\sharp \\
& = & (\epsilon \x \epsilon) (\varphi \x \varphi) m_\x f^\sharp \\
& = & (\epsilon \x \epsilon) \varphi f^\sharp \\
& = & (\epsilon \x \epsilon) f \\
\lefteqn{(\varphi \x \varphi) \varphi_\ox (\epsilon\ox\epsilon) s_2^{-1} f^\sharp} \\
& = & (\varphi \x \varphi) (\epsilon\x\epsilon) \varphi_\ox s_2^{-1} f^\sharp \\
& = & \varphi_\ox s_2^{-1} f^\sharp \\
& = & m_\x S(f)\epsilon
\end{eqnarray*}
So this condition implies bilinearity. Conversely, by reversing the
argument, assuming bilinearity gives---using classification and tensorial representation---the equality of these maps.
\endproof
We now have:
\begin{proposition} \label{basic-tensor-rep}
$\X$ has (basic) tensor representation at $A$ and $B$ if and only if
$$S^2(A)\ox S^2(B) \Two^{S(\epsilon)\ox S(\epsilon)}_{\epsilon\ox\epsilon} S(A)\ox S(B) \to^{\epsilon \ox \epsilon} A \ox B$$
is a coequalizer in the subcategory of linear maps.
\end{proposition}
\begin{proof}
Suppose $z$ coequalizes $S(\epsilon) \ox S(\epsilon)$ and $\epsilon \ox \epsilon$; then by the previous lemma $z' = (\varphi \x \varphi) \varphi_\ox z$ is bilinear
and this determines a unique linear map $z'_\ox$ with $z'= \varphi_\ox z'_\ox$.
$$\xymatrix{ S^2(A) \x S^2(B) \ar[d]_{\varphi_\ox} \ar@<1ex>[rr]^{S(\epsilon) \x S(\epsilon)} \ar@<-1ex>[rr]_{\epsilon \x \epsilon}
&& S(A) \x S(B) \ar[d]^{\varphi_\ox} \ar[rr]^{\epsilon \x \epsilon} && A \x B \ar[d]^{\varphi_\ox} \ar@{..>}@/^1pc/[ddrr]^{z'}\\
S^2(A) \ox S^2(B) \ar@<1ex>[rr]^{S(\epsilon) \ox S(\epsilon)} \ar@<-1ex>[rr]_{\epsilon \ox \epsilon}
&& S(A) \ox S(B) \ar[rr]_{\epsilon \ox \epsilon} \ar@/_1pc/[drrrr]_z&& A \ox B \ar@{..>}[drr]_{z'_\ox}\\
&& && && Z}$$
We claim $z'_\ox$ is the unique comparison map making the fork a coequalizer in the linear map category. To show this we need $(\epsilon \ox \epsilon) z'_\ox = z$ and
we must show that any other linear map $k$ with $(\epsilon \ox \epsilon) k = z$ has $\varphi_\ox k = z'$. For the first of these we note that $z$ is determined by
representing $(\varphi \x \varphi)\varphi_\ox z = z'$ but
$$(\varphi \x \varphi)\varphi_\ox (\epsilon \ox \epsilon) z'_\ox = (\varphi \x \varphi) (\epsilon \x \epsilon) \varphi_\ox z'_\ox = \varphi_\ox z'_\ox = z'$$
so the two maps are equal. For the second, suppose we have such a $k$ then
$$\varphi_\ox k = (\varphi \x \varphi)(\epsilon \x \epsilon) \varphi_\ox k = (\varphi \x \varphi) \varphi_\ox (\epsilon \ox \epsilon) k = (\varphi \x \varphi) \varphi_\ox z = z'.$$
For the converse assume that the linear map category has this fork a coequalizer and suppose $f:A \x B \to C$ is bilinear. Returning to the diagram above, by Lemma \ref{bilinear-coequalizes} we may set $z'=f$ and $z= s^{-1}_2f^\sharp$. Noting that the top fork is an absolute coequalizer gives the map $\varphi_\ox: A \x B \to A \ox B$ which will then represent bilinear maps.
\end{proof}
Considering what happens in an arbitrary simple slice gives:
\begin{corollary} \label{strong-tensor-rep}
A storage category has strong tensorial representation if and only if forks of the form:
$$X \x S^2(A) \ox S^2(B) \Two^{1 \x S(\epsilon) \ox S(\epsilon)}_{1 \x \epsilon \ox \epsilon} X \x S(A) \ox S(B) \to^{1 \x \epsilon \ox \epsilon} X \x A \ox B$$
are coequalizers for the linear subcategories ${\cal L}[X]$.
\end{corollary}
There are a number of different reasons why these forks might be coequalizers. It is often the case that these forks will also be coequalizers in the whole storage category. When this is so, one may decompose the presence of these coequalizers into the presence of the basic coequalizer (of Proposition \ref{basic-tensor-rep}) and the fact they must be preserved by the functor $X \x \_$. Because these coequalizers are clearly reflexive, this latter condition is delivered whenever the product functor preserves, more generally, reflexive coequalization. Of course, when the storage category is Cartesian closed, the product functor will preserve {\em all\/} colimits and so in particular these coequalizers. In fact, the case which will be of primary interest to us here, as shall be discussed below, is when these coequalizers are absolute and so are automatically preserved by the product functors and, furthermore, are present under the very mild requirement that linear idempotents split.
We have established that these coequalizers must be present in any
storage category with strong tensor representation. The final
ingredient we need is persistence: fortunately this is guaranteed once
strong tensor representation is assumed.
\begin{proposition}
In a Cartesian storage category with a strong tensor representation, the representation is necessarily persistent: that is,
given a map $h:X_0\x C\x X_1\x A\x B\to Y$ which is linear in $C$, $A$, and $B$, its linear lifting $h^\ox:X_0\x C\x X_1\x (A\ox B) \to Y$, as in
\[\xymatrix{
X_0\x C\x X_1\x A\x B \ar[r]^{~~~~~~~~~~~~~h} \ar[d]_{1\x\varphi_{\ox}} & Y \\
X_0\x C\x X_1\x (A\ox B) \ar[ur]_{h^\ox}
}\]
is linear in $C$.
\end{proposition}
\begin{proof}
By using Lemma \ref{simple-slice-context} we may simplify what needs to be proven: namely, given $f: A \x B \x C \to Y$ which is linear in all its
arguments, then $f^\ox: A \ox B \x C \to Y$ is linear in $C$. The latter requires that we show that
$$\xymatrix{A \ox B \x S(C) \ar[d]_{1 \x \epsilon} \ar[r]^{\varphi \x 1} & S(A \ox B) \x S(C) \ar[r]^{m_\x} & S(A \ox B \x C) \ar[r]^{f^\ox} & S(Y) \ar[d]^{\epsilon} \\
A \ox B \x C \ar[rrr]_{f^\ox} &&& Y}$$
commutes. Because we have strong tensor representation we know $(\epsilon \ox \epsilon) \x 1 :S(A) \ox S(B) \x C \to A \ox B \x C$ is epic. Thus, we may precompose this
square with the map $(s_2 \x 1) ((\epsilon \ox \epsilon) \x 1)$ to test commutativity. Preliminary to this calculation note that whenever $f$ is bilinear the following diagram
$$\xymatrix{S(A) \x S(B) \ar[dd]^{\epsilon \x \epsilon} \ar[r]_{m_\x} \ar[rd]_{\varphi_\ox}
& S(A \x B) \ar[d]^{s_2} \ar[r]_{S(f)} & S(Y) \ar[dd]^{\epsilon} \\
& S(A) \ox S(B) \ar@{..>}[dr] \ar[d]_{\epsilon \ox \epsilon} \\
A \x B \ar@/_2pc/[rr]_{f} \ar[r]^{\varphi_\ox} & A \ox B \ar[r]^{f^\ox} & Y}$$
commutes as the dotted arrow is the unique linear extension of the bilinear map $m_\x S(f) \epsilon = (\epsilon \x \epsilon) f$.
Interpreting the right square in the simple slice over $C$ gives the following commuting square:
$$\xymatrix{S(A \x B) \x C \ar[d]_{s_2 \x 1} \ar[r]^{(1 \x \varphi) m_\x} & S(A \x B \x C) \ar[r]^{~~~~S(f)} & S(Y) \ar[dd]^{\epsilon} \\
S(A) \ox S(B) \x C \ar[d]_{\epsilon \ox \epsilon \x 1} \\
A \ox B \x C \ar[rr]_{f^\ox} && Y}$$
We now have the calculation:
\begin{eqnarray*}
(s_2(\epsilon \ox \epsilon) \x 1)(1 \x \epsilon) f^\ox
& = & (1 \x \epsilon)(s_2(\epsilon \ox \epsilon) \x 1)f^\ox \\
& = & (1 \x \epsilon)(1 \x \varphi)m_\x S(f) \epsilon \\
& = & m_\x S(f) \epsilon \\
(s_2(\epsilon \ox \epsilon) \x 1)(\varphi \x 1)m_\x S(f^\ox) \epsilon
&= & (\varphi \x 1)m_\x S((s_2(\epsilon \ox \epsilon) \x 1)f^\ox) \epsilon \\
& = & (\varphi \x 1)m_\x S((1 \x \varphi)m_\x S(f) \epsilon)\epsilon \\
& = & (\varphi \x 1)m_\x S((1 \x \varphi)m_\x)\epsilon S(f) \epsilon \\
& = & m_\x S(f) \epsilon
\end{eqnarray*}
where the last step crucially uses the commutativity of the monad.
\end{proof}
When one has tensor representation, if the tensor preserves
the coequalizers which witness the exactness of the modality $S$, that is coequalizers of the form
$$S^2(A) \Two^{S(\epsilon)}_{\epsilon} S(A) \to^\epsilon A$$
then the coequalizer above can be further analyzed {\em via} the following parallel coequalizer diagram:
\[
\xymatrix{
S^2(A) \ox S^2(B) \ar@<1ex>@{<-}@/^1.5pc/[rr]^{\delta\ox1} \ar@<1ex>[rr]^{\epsilon \ox 1} \ar@<-1ex>[rr]_{S(\epsilon) \ox 1}
\ar@<-1ex>@{<-}@/_3pc/[dd]_{1\ox\delta} \ar@<1ex>[dd]^{1\ox\epsilon} \ar@<-1ex>[dd]_{1\ox S(\epsilon)}
\ar@<1ex>[ddrr]^{\epsilon\ox\epsilon} \ar@<-1ex>[ddrr]_{S(\epsilon)\ox S(\epsilon)}
&& S(A)\ox S^2(B) \ar[rr]^{\epsilon\ox1}
\ar@<1ex>[dd]^{1\ox\epsilon} \ar@<-1ex>[dd]_{1\ox S(\epsilon)}
&& A\ox S^2(B) \ar@<1ex>[dd]^{1\ox\epsilon} \ar@<-1ex>[dd]_{1\ox S(\epsilon)}
\\ &&&& \\
S^2(A)\ox S(B) \ar@<1ex>[rr]^{\epsilon \ox 1} \ar@<-1ex>[rr]_{S(\epsilon) \ox 1} \ar@<1ex>[dd]^{1\ox\epsilon}
&& S(A)\ox S(B) \ar@<1ex>[dd]^{1\ox\epsilon} \ar[rr]^{\epsilon\ox1} \ar[ddrr]^{\epsilon\ox\epsilon}
&& A\ox S(B) \ar@<1ex>[dd]^{1\ox\epsilon}
\\ &&&& \\
S^2(A)\ox B \ar@<1ex>[rr]^{\epsilon \ox 1} \ar@<-1ex>[rr]_{S(\epsilon) \ox 1}
&& S(A)\ox B \ar[rr]_{\epsilon\ox1}
&& A\ox B
}
\]
As proven in \cite[Lemma 1.2.11]{PTJElephant}, for example, given
horizontal and vertical reflexive coequalizers as shown above, the
diagonal is also a coequalizer. Recall that the basic coequalizers
are actually absolute coequalizers in the storage category so they
certainly are coequalizers in the linear category (use {\bf [LS.2]}) but
they will not necessarily be absolute in the linear category. Thus,
assuming that the tensor product preserves them is a far from benign
assumption.
\subsection{Codereliction and tensor representation}
An important source of tensorial representation in Cartesian storage categories arises from the presence of a {\bf codereliction}: this is an
unnatural transformation $\eta: A \to S(A)$ which {\em is\/} natural for linear maps such that each $\eta_A$ is
linear and splits $\epsilon$ in the sense that $\eta_A\epsilon_A = 1_A$.
Notice that $\varphi$, in general, will not be a codereliction: it is natural for all maps but it is not in general
linear---in fact, if it were linear then all maps would be linear and $S$ would
necessarily be equivalent to the identity functor.
As we shall shortly see, all Cartesian differential storage categories have a codereliction. This is important because of the following
observation:
\begin{proposition}
Any Cartesian storage category in which linear idempotents split
linearly, and which has a codereliction, has persistent tensorial
representation.
\end{proposition}
Recall that a linear idempotent $e$ splits linearly when there is a
splitting $(r,s)$ with $rs = e$ and $sr = 1$ such that both $s$ and $r$
are linear. It follows from Lemma \ref{storage-splitting}, that both
$r$ and $s$ are linear if either is. Furthermore, it follows from
Proposition \ref{linear-idempotent-splitting} that we may always
formally split linear idempotents.
\begin{proof}
Once we have a codereliction we have a split pair
\[
\xymatrix{
S^2(A) \ox S^2(B) \ar@<1ex>@{<-}@/^1.5pc/[rr]^{\eta \ox\eta} \ar@<1ex>[rr]^{\epsilon \ox \epsilon} \ar@<-1ex>[rr]_{S(\epsilon) \ox S(\epsilon)}
&& S(A) \ox S(B)
}
\]
whose absolute coequalizer is the splitting of $\epsilon \eta \ox
\epsilon\eta$. Thus, when linear idempotents split one has basic
tensor representation. As this coequalizer is absolute in the whole
storage category it is necessarily preserved by products and so the
tensor representation is strong, hence persistent.
\end{proof}
In particular, as we shall discover in Section \ref{diff-storage}, a Cartesian differential storage category always has a codereliction
$\eta = \< 1,0 \> D_\x[\varphi]$ which is linear (in the differential sense) and splits $\epsilon$. Thus, in these examples it
suffices for linear idempotents (in simple slices) to have linear splittings. This can always be formally arranged by splitting the linear idempotents.
\subsection{Tensor storage categories with an exact modality}
We now observe that in a Cartesian storage category, which has
tensorial representation, the linear maps form a tensor storage category:
\begin{proposition}
In any Cartesian storage category with tensorial representation,
the subcategory of linear maps is a tensor storage category.
\end{proposition}
\proof
We have already observed that there is a monoidal comonad present on the linear maps. In addition,
when both a persistent classification and representation are present there are natural isomorphisms
$s_\ox: S(X \x Y) \to S(X) \ox S(Y)$ and an isomorphism $s_\top: \top \to S(1)$ constituting an
iso-comonoidal structure for the functor $S: \X \to \X$ from $\X$ with product to $\X$ with tensor: this is
the storage isomorphism.
This follows immediately from the fact that
$$\xymatrix{X \x Y \ar[rrd]_{\varphi} \ar[r]^{\varphi \x \varphi} & S(X) \x S(Y) \ar[r]^{\varphi_\ox}
& S(X) \ox S(Y) \ar@{..>}[d]^{s_\ox^{-1}} \\
& & S(X \x Y) } ~~~~\mbox{and}~~~
\xymatrix{X \x Y \ar[rd]_{(\varphi \x \varphi)\varphi_\ox} \ar[r]^{\varphi}
& S(X \x Y) \ar@{..>}[d]^{s_\ox} \\ & S(X) \ox S(Y) }$$
have the same universal property. Similarly $\top$ and $S(1)$ have the same universal property.
That this constitutes a storage transformation is straightforward to check.
\endproof
To establish the converse of this observation we need to prove that the coalgebra modality of a
tensor storage category is a comonad with a commutative force:
\begin{proposition} \label{force-for-tensor-storage}
The coKleisli category of a tensor storage category is a Cartesian
storage category with tensorial representation.
\end{proposition}
\proof
We shall show that the comonad is forceful where the force is defined as:
\begin{eqnarray*}
\lefteqn{S(A \x S(X)) \to^\psi S(A \x X)} \\
& = & S(A \x S(X)) \to^{s_2} S(A) \ox S^2(X) \to^{1 \ox \epsilon} S(A) \ox S(X) \to^{s_2^{-1}} S(A \x X).
\end{eqnarray*}
We now have to check the six coherence diagrams of a force:
\begin{enumerate}[{\bf [Force.1]}]
\item For the associativity of force we use the fact that the Seely isomorphism is comonoidal:
\begin{eqnarray*}
\lefteqn{\delta S(\sigma_\x) S(\epsilon \x \psi) S(\psi) S(a_\x)}\\
& = & \delta S(\sigma_\x) S(\epsilon \x 1)S(1 \x (s_2 (1 \ox \epsilon)s_2^{-1})) s_2 (1 \ox \epsilon)s_2^{-1} S(a_\x) \\
& = & \delta S(\sigma_\x) s_2 (\epsilon \ox S(s_2 (1 \ox \epsilon)s_2^{-1})) (1 \ox \epsilon) s_2^{-1} S(a_\x) \\
& = & \delta S(\sigma_\x) s_2 (\epsilon \ox (\epsilon s_2 (1 \ox \epsilon))) (1 \ox s_2^{-1}) s_2^{-1} S(a_\x) \\
& = & \delta S(\sigma_\x) s_2 (\epsilon \ox (\epsilon s_2 (1 \ox \epsilon))) a_\ox (s_2^{-1} \ox 1) s_2^{-1} \\
& = & \delta S(\sigma_\x) s_2 (\epsilon \ox (\epsilon s_2)) a_\ox (s_2^{-1} \ox \epsilon) s_2^{-1} \\
& = & s_2 (\delta \ox \delta) (\epsilon \ox (\epsilon s_2)) a_\ox (s_2^{-1} \ox \epsilon) s_2^{-1} \\
& = & s_2 (1 \ox s_2) a_\ox (s_2^{-1} \ox \epsilon) s_2^{-1}
= S(a_\x) s_2 (1 \ox \epsilon)s_2^{-1}
= S(a_\x) \psi
\end{eqnarray*}
\item For projection of force:
\begin{eqnarray*}
\psi S(\pi_1) & = & s_2 ( 1 \ox \epsilon) s_2^{-1} S(\pi_1) \\
& = & s_2(1 \ox \epsilon) (e \ox 1) u^\top_L \\
& = & s_2 (e \x 1) u^\top_L \epsilon \\
& = & S(\pi_1) \epsilon
\end{eqnarray*}
\item For forceful naturality:
\begin{eqnarray*}
\delta S(\sigma_\x)S(1 \x (\epsilon\delta))\psi
& = & \delta S(\sigma_\x)S(1 \x (\epsilon\delta)) s_2 (1 \ox \epsilon) s_2^{-1} \\
& = & \delta S(\sigma_\x)s_2 (1 \ox S(\epsilon\delta)) (1 \ox \epsilon) s_2^{-1} \\
& = & s_2 (\delta \ox \delta) (1 \ox S(\epsilon\delta)) (1 \ox \epsilon) s_2^{-1} \\
& = & s_2 (1 \ox \epsilon) (\delta \ox \delta) s_2^{-1} \\
& = & s_2 (1 \ox \epsilon) s_2^{-1} \delta S(\sigma_\x) \\
& = & \psi \delta S(\sigma_\x)
\end{eqnarray*}
\item For forcefulness of counit:
\begin{eqnarray*}
\psi\psi & = & s_2 (1 \ox \epsilon) s_2^{-1}s_2(1 \ox \epsilon)s_2^{-1} \\
& = & s_2(1 \ox \epsilon\epsilon)s_2^{-1} \\
& = & s_2(1 \ox S(\epsilon)\epsilon)s_2^{-1} \\
& = & S(1 \x \epsilon) s_2 (1 \ox \epsilon) s_2^{-1} \\
& = & S(1 \x \epsilon) \psi
\end{eqnarray*}
\item For forcefulness of comultiplication:
\begin{eqnarray*}
\delta S(\sigma_\x) \psi S(\epsilon \x 1)
& = & \delta S(\sigma_\x) s_2 (1 \ox \epsilon) s_2^{-1} S(\epsilon \x 1) \\
& = & s_2 (\delta \ox \delta) (S(\epsilon) \ox \epsilon) s_2^{-1} = 1
\end{eqnarray*}
\item Commutativity of force:
\begin{eqnarray*}
\psi \psi' & = & s_2 (1 \ox \epsilon) s_2^{-1} s_2 (\epsilon \ox 1) s^{-1}_2 \\
& = & s_2 (1 \ox \epsilon)(\epsilon \ox 1) s^{-1}_2 \\
& = & s_2 (\epsilon \ox 1)(1 \ox \epsilon) s^{-1}_2 \\
& = & s_2 (\epsilon \ox 1)s_2^{-1} s_2 (1 \ox \epsilon) s^{-1}_2 \\
& = & \psi'\psi
\end{eqnarray*}
\end{enumerate}
\medskip
This shows that the coKleisli category is a Cartesian storage category;
it remains to show that it has tensorial representation. However, this
follows, since in the coKleisli category $A \x B
\to^{\varphi_\ox} A \ox B$ is given by the $\X$-map
$$S(A \x B) \to^{s_2} S(A) \ox S(B) \to^{\epsilon \ox \epsilon} A \ox B$$
That this represents the tensor is easily seen, and persistence is
automatic in a coKleisli category.
\endproof
\bigskip
The results above tell us that a tensor storage category with an exact
modality is always the category of linear maps of some Cartesian storage
category with tensorial representation. This is because its coKleisli
category is a Cartesian storage category with tensorial representation
and one can recover, as the linear maps, the original tensor storage
category.
\subsection{Closed storage categories} \label{closedstorage}
We briefly return to the issue of the closedness of these various categories. Starting with a tensor storage category (Seely category) it is well-known that if
the category is closed in the sense that there is an adjunction
$$\infer={ X \to A \lollipop B}{A \ox X \to B}$$
then the coKleisli category is Cartesian closed with
$$A \Rightarrow B := S(A) \lollipop B$$
as we have the following natural equivalences:
$$\infer={S(X) \to S(A) \lollipop B}{
\infer={S(A) \ox S(X) \to B}{
S(A \x X ) \to B}}$$
Thus, if the tensor storage category is closed then the coKleisli category, that is the Cartesian storage category, must be Cartesian closed. We wish
to work toward a converse: namely, knowing that the Cartesian storage category is Cartesian closed, can we provide some natural conditions for the linear
maps to form a monoidal closed category? The conditions that we shall consider do not, in fact, require tensorial representation and, moreover, make the linear maps
into a closed category in the original sense of Kelly and Eilenberg \cite{KE}.
We shall say that a Cartesian storage category is {\bf closed} in case
it is Cartesian closed, the linear maps form a closed system (see
Definition \ref{closed-system-defn}), the functor $S$ is enriched over
itself, and the following equalizer (which defines the object $A \lollipop B$) exists for each $A$ and $B$:
$$A \lollipop B \to^{k_{AB}} A \Rightarrow B \Two^{S_{AB} (1 \Rightarrow \epsilon)}_{\epsilon \Rightarrow 1} S(A) \Rightarrow B$$
with $k_{AB}$ linear for each $A$ and $B$. Here the map $S_{AB}: A \Rightarrow B \to S(A) \Rightarrow S(B)$ is given by the enrichment of the functor $S$. Clearly we are
intending that this equalizer should provide the object of linear maps from $A$ to $B$. That it has this property relies on the following:
\begin{proposition} \label{closed-prop} In any closed Cartesian storage category:
\begin{enumerate}[{\em (i)}]
\item
If $f: A \x X \to B$ is linear in its first argument then there is a unique map $\tilde{f}: X \to A \lollipop B$ making
$$\xymatrix{ A \x X \ar[rr]^f \ar@{..>}[d]^{1 \x \tilde{f}}& & B \\ A \x A \lollipop B \ar[r]_{1 \x k_{AB}} & A \x A \Rightarrow B \ar[ur]_{{\sf ev}} }$$
commute.
\item Any $f$ which can be expressed as $f = (1 \x \tilde{f}) (1 \x k_{AB}) {\sf ev}$ must be linear in its first argument.
\item If $f$ is bilinear then $\tilde{f}$ is linear.
\end{enumerate}
\end{proposition}
\begin{proof}~
\begin{enumerate}[{\em (i)}]
\item
Clearly $\tilde{f} k_{AB}$ must be the unique map to the hom-object $A \Rightarrow B$, $\hat{f} = \tilde{f}k_{AB}: X \to A \Rightarrow B$, and so, as $k_{AB}$ is monic, $\tilde{f}$ must be unique, if it exists. The difficulty is to show that $\hat{f}$ factors through this equalizer.
Recall that as $f$ is linear in its first argument we have
$$\xymatrix{S(A) \x X \ar[d]_{\epsilon \x 1} \ar[r]^{\theta'} & S(A \x X) \ar[r]^{S(f)} & S(B) \ar[d]^{\epsilon} \\
A \x X \ar[rr]_f & & B}$$
commutes (that is $(1 \x \epsilon) f = \theta'S(f)\epsilon$). But then we have the following two commuting diagrams displaying the curried map for these two maps:
$$\xymatrix{S(A) \x X \ar[d]_{1 \x \hat{f}} \ar[r]^{\epsilon \x 1} & A \x X \ar[d]^{1 \x \hat{f}}\ar@/^1pc/[rrd]^f \\
S(A) \x A \Rightarrow B \ar[d]_{1 \x (\epsilon \Rightarrow 1)} \ar[r]_{\epsilon \x 1} & A \x A \Rightarrow B \ar[rr]_{\sf ev} & & B \\
S(A) \x S(A) \Rightarrow B \ar@/_1pc/[rrru]_{\sf ev} }$$
This shows that ${\sf curry}((1 \x \epsilon) f) = \hat{f} (\epsilon \Rightarrow 1)$.
$$\xymatrix{S(A) \x X \ar[d]_{1 \x \hat{f}} \ar[r]^{\theta'} & S(A \x X) \ar[r]^{S(f)} & S(B) \ar[ddrr]^{\epsilon} \\
S(A) \x A \Rightarrow B \ar[d]_{1 \x S_{AB}} \\
S(A) \x S(A) \Rightarrow S(B) \ar[d]_{1 \x (1 \Rightarrow \epsilon)}\ar@/_1pc/[rruu]_{\sf ev} & & & & B \\
S(A) \x S(A) \Rightarrow B \ar@/_1pc/[rrrru]_{\sf ev} }$$
While this shows that ${\sf curry}(\theta' S(f) \epsilon) = \hat{f} S_{AB} (1 \Rightarrow \epsilon)$. Thus $\hat{f}$ factors through the equalizer as desired.
\item For the converse we have:
$$\xymatrix{S(A) \x X \ar[dr]_{1 \x \hat{f}} \ar[ddd]^{\epsilon \x 1} \ar[rr]^{\theta'} & & S(A \x X) \ar[rr]^{S(f)} & & S(B) \ar[ddd]^{\epsilon} \\
& S(A) \x A \lollipop B \ar[rrd]_{1 \x (k_{AB}(\epsilon \Rightarrow 1))} \ar[rr]^{1 \x (k_{AB}S_{AB})~~} & & S(A) \x S(A) \Rightarrow S(B) \ar[ru]_{\sf ev} \ar[d]^{1 \x (1 \Rightarrow \epsilon)} \\
& & & S(A) \x S(A) \Rightarrow B \ar[dr]^{\sf ev} \\
A \x X \ar[rrrr]_f &&&& B}$$
\item It remains to show that $\tilde{f}$ is linear when $f$ is bilinear, that is, that $\epsilon \tilde{f} = S(\tilde{f})\epsilon$, for which we have:
$$\xymatrix{A \x S(X) \ar[d]_{1 \x \epsilon\tilde{f}} \ar[r]^{1 \x \epsilon} & A \x X \ar[d]^f \ar[dl]_{1 \x \tilde{f}} \\
A \x A \lollipop B \ar[r]_{~~~~{\sf ev}_\lollipop} & B}$$
$$\xymatrix{ A \x S(X) \ar[r]^{\theta} \ar[d]_{1 \x S(\tilde{f}) }& S(A \x X) \ar[r]^{S(f)} & S(B) \ar[dd]^{\epsilon} \\
A \x S(A \lollipop B) \ar[r]_{\theta} \ar[d]_{1 \x \epsilon} & S(A \x A \lollipop B) \ar[ru]_{S({\sf ev}_\lollipop)} \\
A \x A \lollipop B \ar[rr]_{{\sf ev}_\lollipop} & & B}$$
Linearity of $f$ in its second argument then implies that $\tilde{f}$ is linear.
\end{enumerate}
\end{proof}
In particular, notice that this immediately means that ${\sf ev}_\lollipop := (1 \x k_{AB}) {\sf ev}$ is linear in its first argument, as ${\sf ev}_\lollipop$ defined in this manner certainly satisfies
Proposition \ref{closed-prop} {\em (ii)}.
However, as $k_{AB}$ is linear and ${\sf ev}$ is linear in its second argument---as we are assuming a closed linear system---it follows that ${\sf ev}_\lollipop$ is bilinear.
Thus, when one has tensorial representation this means that there is an induced evaluation map ${\sf ev}_\ox: A \ox (A \lollipop B) \to B$ with
$\varphi_\ox {\sf ev}_\ox = {\sf ev}_\lollipop$ which is linear. Furthermore, by Proposition \ref{closed-prop} (iii) the curry map for $f: A \ox X \to B$ is also linear
as it is $\widetilde{\varphi_\ox f}$. Thus we have:
\begin{corollary}
If $\X$ is a closed Cartesian storage category which has tensor representation, then the subcategory of linear maps forms a symmetric monoidal closed category.
\end{corollary}
This leaves open the converse: if one starts with a monoidal
closed tensor storage category, will its coKleisli category be a closed
Cartesian storage category? We shall now show that this is true, giving
a complete characterization for the closed case:
\begin{proposition}
If $\X$ is a tensor storage category which is monoidal closed then its coKleisli category is a closed Cartesian storage category.
\end{proposition}
\begin{proof}
We provide a sketch of the proof. We must show three things:
\begin{enumerate}
\item We need to show that the coKleisli category is a Cartesian closed category and that the functor induced by the modality
is suitably enriched. As above, the closed structure is given by $A \Rightarrow B := S(A) \lollipop B$. In a Cartesian closed category
a functor is enriched whenever it is strong, so the fact that there is a natural force $\psi: S(A \x S(B)) \to S(A \x B)$ (described in
Proposition \ref{force-for-tensor-storage}) guarantees the functor is strong.
\item We need to show that the linear maps of the coKleisli category form a closed system. We know already they form a linear system
so we need only check that they are a {\em closed\/} linear system.
This amounts to checking the following.
\begin{enumerate}
\item The evaluation map is linear in its higher-order argument. This is
immediately true by inspection. Here is the definition of the evaluation
map in the coKleisli category:
$$S(A \x (S(A) \lollipop B)) \to^{s_2} S(A) \ox S(S(A) \lollipop B) \to^{1 \ox \epsilon} S(A) \ox (S(A) \lollipop B) \to^{{\sf ev}_\lollipop} B$$
The second map in the sequence ensures it is linear in the second argument.
\item Linearity is ``persistent'' over currying. To say that a coKleisli map $f: S(A \x B \x C) \to D$ is linear in its second argument amounts to saying
that $f = s_3 (1 \ox \epsilon \ox 1) f'$. Currying this map (with respect to the first argument) clearly maintains the linearity in the second argument.
\end{enumerate}
\item This leaves only the requirement that $A \lollipop B$ occurs as an equalizer:
$$A \lollipop B \to^{k_{AB}} A \Rightarrow B \Two^{S_{AB} (1 \Rightarrow \epsilon)}_{\epsilon \Rightarrow 1} S(A) \Rightarrow B$$
Recall that we may assume the modality is exact (if it is not, simply work in the subcategory of linear maps). This equalizer (after considerable
unwinding) is exactly the image of
$$A \lollipop B \to^{\epsilon \lollipop 1} S(A) \lollipop B \Two^{S(\epsilon) \lollipop 1}_{\epsilon \lollipop 1} S^2(A) \lollipop B$$
under the inclusion into the coKleisli category. Since
$$S^2(A) \Two^{S(\epsilon)}_{\epsilon} S(A) \to^{\epsilon} A$$
under this inclusion is a coequalizer, the transpose of this is an equalizer. This shows that $A \lollipop B$ does occur as an equalizer as required.
\end{enumerate}
\end{proof}
\section{Cartesian differential storage categories} \label{diff-storage}
Tensor differential categories were introduced in \cite{diffl}: they consist of a symmetric monoidal category with a coalgebra modality
and a deriving transformation. In \cite{CartDiff} the notion of a Cartesian differential category was introduced. It was proven that an important way
in which Cartesian differential categories arise is as the coKleisli categories of tensor differential categories satisfying an additional interchange requirement
on the deriving transformation. This latter requirement was presented as a sufficient condition: the question of whether it was necessary was left open. Here we partially
answer that question for an important class of tensor differential categories: those whose modality is a storage modality---in other words those which are
``Seely categories'' in the sense discussed above---and exact: we show that this condition is indeed necessary.
Not surprisingly, the strategy of the proof is to characterize the coKleisli category of a tensor differential category with a storage modality using the development
above. The fact that the coKleisli category of a tensor differential category (satisfying the interchange condition) is necessarily a Cartesian differential category
means we should consider Cartesian storage categories which are simultaneously Cartesian differential categories. Furthermore, we expect the linear maps
in the differential sense to be the linear maps in the storage sense. We will then show that, under these assumptions, the linear maps form a tensor differential category
which satisfies the interchange requirement. This then provides a {\em characterization\/} of coKleisli categories of tensor differential categories (with a storage modality)
which form differential categories.
It is worth mentioning---not least as it caused the authors some
strife---that there are at least three different sorts of tensor
differential category which have been discussed in the literature:
\begin{enumerate}[(A)]
\item A differential given by a {\bf deriving transformation} as introduced in \cite{diffl}.
\item A differential given by a {\bf codereliction} as introduced in \cite{ER} and described in \cite{diffl}.
\item A differential given by a {\bf creation operator} as introduced in \cite{fiore}.
\end{enumerate}
Each form of differential is more specialized than its predecessor: the first merely requires a coalgebra modality, the second requires a
bialgebra modality, while the last requires a storage modality. A standard way in which a bialgebra modality arises is from the presence of
biproducts together with a storage modality: the Seely isomorphism then transfers the bialgebra structure of the biproduct onto the modality.
This means that the last two forms of differential are particularly suited to storage settings. As, in the current context, we are considering
storage categories one might think that the differential which one extracts from a Cartesian differential storage category should be of type (C) or at least (B). It is, therefore,
perhaps somewhat unexpected that the differential that emerges is one of type (A).
Cartesian differential storage categories are important in their own right. Their structure can be approached in two very different ways: from the perspective of being a Cartesian differential category with tensorial representation, or as the coKleisli category of a tensor differential category. This tension allows one to transfer arguments between the Cartesian and the tensor worlds. This ability is frequently used in differential geometry where one often wants to view differential forms as maps from tensor products.
\subsection{Cartesian differential categories}
Cartesian differential categories were introduced in \cite{CartDiff}: here we briefly review their properties.
One way to view the axiomatization of Cartesian differential categories is as an abstraction of the Jacobian of a smooth
map $f: \mathbb{R}^n \to \mathbb{R}^m$. One ordinarily thinks of the Jacobian as a smooth map
\[ J(f): \mathbb{R}^n \to \mbox{Lin}(\mathbb{R}^n, \mathbb{R}^m). \]
Uncurrying, this means the Jacobian can also be seen as a smooth map
\[ J(f): \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m \]
which is linear in its first variable. A Cartesian differential category asks for an operation of this type, satisfying the axioms described below. However, notice that to express the axioms, one needs the ability to add parallel maps. It turns out this is not an enrichment in commutative monoids as one might expect, but rather a skew enrichment \cite{street}, which makes it a Cartesian left additive category. In such a category the addition of arrows is only preserved by composition on the left, that is $f(g+h) = fg + fh$ and $f\,0 = 0$.
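To make the uncurried Jacobian concrete, here is a small numerical sketch (the map $f$ and the finite-difference helper are illustrative choices of my own, not constructions from the text): $D[f](v, p)$ is computed as a directional derivative of $f$ at $p$ along $v$, and one can observe the additivity in the first variable that the axioms below demand.

```python
import numpy as np

def f(p):
    # A smooth map f: R^2 -> R^2, chosen purely for illustration.
    x, y = p
    return np.array([x**2 * y, np.sin(x)])

def D(f):
    """Uncurried Jacobian D[f](v, p) = J_f(p) @ v, linear in v.
    Approximated here by a central finite difference."""
    def Df(v, p, h=1e-6):
        return (f(p + h * v) - f(p - h * v)) / (2 * h)
    return Df

Df = D(f)
p = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
w = np.array([2.0, 3.0])

# Additivity in the first variable: D[f](v+w, p) = D[f](v, p) + D[f](w, p)
print(np.allclose(Df(v + w, p), Df(v, p) + Df(w, p), atol=1e-4))  # True
```

The approximation error of the central difference is $O(h^2)$, so the additivity law holds to well within the tolerance for smooth $f$.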
\begin{definition}
A \textbf{Cartesian differential category} is a Cartesian left additive category with an operation
$$\infer[{D[\_]}]{X \x X \to_{D[f]} Y}{X \to^f Y}$$
(called ``differentiation'') satisfying:
\begin{enumerate}[{\bf [CD.1]}]
\item $D[f+g] = D[f]+D[g]$ and $D[0]=0$ (differentiation preserves addition);
\item $\< a+b,c\>D[f] = \< a,c\>D[f] + \< b,c\>D[f]$ and $\< 0,a\>D[f] = 0$ (a derivative is
additive in its first variable);
\item $D[1] = \pi_0$, $D[\pi_0] = \pi_0\pi_0$, and $D[\pi_1]= \pi_0\pi_1$ (identity and projections are linear);
\item $D[\<f,g\>] = \< D[f],D[g]\>$ (differentiation is compatible with pairing) ;
\item $D[fg] = \<D[f],\pi_1f \>D[g]$ (the chain rule);
\item $\<\<a,0\>,\<c,d\>\>D[D[f]] = \<a,d\>D[f]$ (the differential is linear);
\item $\< \< a,b\>,\< c,d\>\> D[D[f]] = \<\< a,c\>,\< b,d\>\> D[D[f]]$ (interchange rule);
\end{enumerate}
\end{definition}
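To connect these axioms back to the motivating example: for smooth maps, with $D[f](v,x) := J(f)(x)\cdot v$, axiom {\bf [CD.5]} unwinds to
$$D[fg](v,x) = J(g)(f(x))\cdot J(f)(x)\cdot v = D[g](D[f](v,x),f(x)) = (\<D[f],\pi_1f\>D[g])(v,x),$$
which is the classical chain rule; {\bf [CD.2]} and {\bf [CD.6]} record that $J(f)(x)\cdot v$ is additive, indeed linear, in $v$; and {\bf [CD.7]} is the classical symmetry of mixed second partial derivatives.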
We recall a number of examples of Cartesian differential categories:
\begin{example}{\em ~
\begin{enumerate}[(1)]
\item If $\X$ is an additive Cartesian category ({\em i.e.} enriched in commutative monoids) then, by defining $D[f] = \pi_0f$, it can be viewed as a Cartesian differential category in which
every map is linear.
\item Smooth functions on finite dimensional Euclidean vector spaces form a Cartesian differential category.
\item The coKleisli category of any tensor differential category (satisfying the interchange law) is a Cartesian differential category, \cite{CartDiff}. A basic example of a tensor differential category is
provided by the category of relations, ${\sf Rel}$, with respect to the ``multi-set'' (or ``bag'') comonad. The deriving transformation $d_\ox: A \ox M(A) \to M(A)$, which
adds another element to the multi--set $M(A)$, provides the (tensor) differential structure.
\item Convenient vector spaces \cite{BET} form a Cartesian differential category.
\item There is a comonad ${\sf Faa}$ on the category of Cartesian left additive categories whose coalgebras are exactly Cartesian differential categories \cite{faa}.
\end{enumerate} }
\end{example}
Cartesian differential categories already have a notion of ``linear map'': namely those $f$ such that $D[f] = \pi_0f$.
Furthermore, it is a basic result that any simple slice, $\X[A]$, of a Cartesian
differential category, $\X$, is also a Cartesian differential category. The differential in the simple slice $\X[A]$ is given by:
$$\infer[{D_A[\_]}]{(X \x X) \x A \to_{D_A[f]} Y}{X \x A \to^f Y}$$
where
\begin{eqnarray*}
\lefteqn{(X \x X) \x A \to^{D_A[f]} Y} \\
& = & (X \x X) \x A \to_{\< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\>\>} (X \x A) \x (X \x A) \to_{D[f]} Y
\end{eqnarray*}
This is precisely the familiar notion of a {\em partial derivative\/}.
In particular, this means we may isolate linear maps in this differential sense in each slice as those maps with $D_A[f] = (\pi_0 \x 1) f$. Our first observation is then:
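In the smooth setting the formula above is the expected one: for $f: \mathbb{R}^n \x \mathbb{R}^m \to \mathbb{R}^k$ one has
$$D_A[f]((v,x),a) = J_1(f)(x,a)\cdot v,$$
where $J_1$ denotes the Jacobian taken in the first variable with the parameter $a$ held fixed; a map is thus linear in the slice $\X[A]$ precisely when it is linear in its first argument for each value of the parameter.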
\begin{proposition}
In a Cartesian differential category, $\X$, the linear maps form a system of linear maps.
\end{proposition}
\begin{proof}
We must check the requirements of being a linear system:
\begin{enumerate}[{\bf [LS.1]}]
\item We require, for each $A \in \X$, that the identity maps,
projections and pairings of linear maps are linear. As explained above, in
$\X[A]$ a map $f: X \x A \to Y$ is linear in case $D_A[f] = (\pi_0 \x 1)f :
(X \x X) \x A \to Y$ where $D_A[f]$ is the
partial derivative of $f$:
$$D_A[f] := \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> D[f]$$
We shall leave all but the last requirement to the reader; for the last we
shall do a concrete calculation in $\X$. Suppose $f$ and $g$ are linear
in this sense in $\X[A]$ (that is they are in ${\cal L}[A]$), then we
must show that the pairing of $f$ and $g$ in $\X[A]$ is linear: as a map in
$\X$ this pairing is just $\<f,g\>$ so that:
\begin{eqnarray*}
D_A[\< f,g\>] & = & \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> D[\<f,g\>] \\
& = & \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> \< D[f],D[g] \> \\
& = & \< \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\>
\>D[f],\< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \>D[g] \> \\
& = & \< D_A[f],D_A[g]\> \\
& & ~~~\mbox{(Using linearity of $f$ and $g$)} \\
& = & \< (\pi_0 \x 1) f, (\pi_0 \x 1) g \> \\
& = & (\pi_0 \x 1)\<f,g\>
\end{eqnarray*}
showing that $\<f,g\>$ is linear in this sense.
\item We must show that these linear maps are closed under composition in each
slice and that, when $g$ is a linear retraction, $gh \in {\cal L}[A]$
implies $h \in {\cal L}[A]$. Leaving composition to the reader, let us
focus on the second part. If $sg = 1$ then using the non-trivial fact that
$D_A[\_]$ is a differential in $\X[A]$ we have
\begin{eqnarray*}
D_A[h] & = & D_A[sgh] = \< D_A[s],\pi_1s\>D_A[gh] = \< D_A[s],\pi_1s\>
\pi_0gh \\
& = & D_A[s]g h = D_A[sg]h = D_A[1]h = \pi_0h.\\
\end{eqnarray*}
\item For the last part we shall do a concrete calculation in $\X$: we
must show that if $h$ is linear in $\X[B]$ then
$\X[f](h)$ is linear in $\X[A]$; here is the calculation:
\begin{eqnarray*}
D_A[\X[f](h)] & = & D_A[(1\x f)h] = \<\<
\pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> D[(1 \x f)h] \\
& = & \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> \<
D[1 \x f], \pi_1 (1 \x f) \> D[h] \\
& = & \< \< \pi_0\pi_0,0\>,\<\pi_0\pi_1,\pi_1\> \> \<
\< \pi_0\pi_0,\<\pi_0\pi_1,\pi_1\pi_1\> D[f]\>, \pi_1 (1 \x f) \> D[h] \\
& = & \< \< \pi_0\pi_0,\< 0,\pi_1\>D[f] \>
,\<\pi_0\pi_1,\pi_1\>(1 \x f)\>D[h] \\
& = & \< \< \pi_0\pi_0,0 \> ,\<\pi_0\pi_1,\pi_1\>(1
\x f)\>D[h] \\
& = & (1 \x f) \<\< \pi_0\pi_0,0 \>
,\<\pi_0\pi_1,\pi_1\>\> D[h] \\
& & ~~~\mbox{(Linearity of $h$)} \\
& = & (1 \x f) (\pi_0 \x 1) h \\
& = & (\pi_0 \x 1) (1 \x f) h
\end{eqnarray*}
\end{enumerate}
\end{proof}
\subsection{Cartesian differential storage categories: the basics}
A {\bf Cartesian differential storage category}
is a Cartesian differential category whose linear maps---in the natural
differential sense---are (strongly and persistently) classified.
We first observe that every Cartesian differential category has a codereliction map defined by:
$$\eta_A := \< 1,0\> D[\varphi]: A \to S(A).$$
\begin{lemma}
$\eta$ so defined is a codereliction map: that is, it is natural for linear maps and satisfies $\eta\epsilon = 1$.
\end{lemma}
\begin{proof}~
We need to show $\eta_A \epsilon_A = 1_A$:
\begin{eqnarray*}
\eta_A \epsilon_A & = & \< 1,0\> D[\varphi] \epsilon \\
& = & \< 1,0\> D[\varphi\epsilon] ~~~~~\mbox{(as $\epsilon$ is linear)} \\
& = & \< 1,0\> D[1] = \<1,0\>\pi_0 = 1
\end{eqnarray*}
and that, if $f: A \to B$ is a linear map, then $f \eta_B = \eta_A S(f)$:
\begin{eqnarray*}
\eta_A S(f) & = & \< 1,0\> D[\varphi] S(f) \\
& = & \< 1,0\> D[\varphi S(f)] ~~~~~\mbox{(as $S(f)$ is linear)} \\
& = & \< 1,0\> D[f \varphi] \\
& = & \<1,0\> \< D[f],\pi_1f\> D[\varphi] \\
& = & \<1,0\> \< \pi_0f,\pi_1f\> D[\varphi] \\
& = & \< f,0f \> D[\varphi] = f \<1,0\>D[\varphi] = f \eta_B
\end{eqnarray*}
\end{proof}
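For intuition, it may help to record what $\eta$ amounts to in a basic example: in the multi-set model recalled earlier, the codereliction $A \to M(A)$ is (under the coKleisli correspondence) the map relating an element to the corresponding one-element multi-set, which is the usual codereliction of models of differential linear logic.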
Because Cartesian differential storage categories always have a codereliction map, up to
splitting linear idempotents, they also have tensor representation.
Thus it makes sense to ask whether the subcategory of linear maps forms
a tensor differential category. To prove this is so will be the main aim of
the current section. However, before turning to this it is worth making some
further basic observations.
Suppose $\X$ is a Cartesian storage category which is also a Cartesian
differential category. In this context, $\X$, has two notions of
``linear'' map: {\em viz} the notion which all storage categories have,
and the notion that all Cartesian differential categories have.
An interesting observation is:
\begin{proposition}
$\X$ is a Cartesian differential storage category if and only if it is both a Cartesian differential and storage category and $S(f)$ (for any $f$)
and $\epsilon$ are linear in the differential sense and the codereliction map $\eta$ is linear in the storage sense.
\end{proposition}
\begin{proof}
By assumption, $D[S(f)]=\pi_0 S(f)$, for any $f$,
$D[\epsilon]=\pi_0 \epsilon$, and $\eta$ is $\epsilon$-natural. We
must show that $f$ is $\epsilon$-natural if and only if $D[f]=\pi_0 f$.
So suppose first that $f$ is $\epsilon$-natural:
\begin{eqnarray*}
D[f] &=& (\varphi\x\varphi)(\epsilon\x\epsilon)D[f] \\
&=& (\varphi\x\varphi) D[\epsilon f] \\
&=& (\varphi\x\varphi) D[S(f)\epsilon] \\
&=& (\varphi\x\varphi) \pi_0 S(f)\epsilon \\
&=& (\varphi\x\varphi) \pi_0 \epsilon f \\
&=& \pi_0 \varphi \epsilon f = \pi_0 f
\end{eqnarray*}
Next, suppose that $D[f]=\pi_0 f$; we must show $f$ is $\epsilon$-natural, assuming that $\eta$ is. We observe
that, in this case, $f = \eta S(f) \epsilon$, from which the result is immediate, as $\eta$, $S(f)$, and $\epsilon$ are all $\epsilon$-natural.
Here is the calculation:
\begin{eqnarray*}
f &=& \<1,0\>\pi_0 f \\
&=& \<1,0\>D[f] \\
&=& \<1,0\>D[f \varphi \epsilon] \\
&=& \<1,0\>D[\varphi S(f) \epsilon] \\
&=& \<1,0\>D[\varphi] S(f) \epsilon \\
&=& \eta S(f) \epsilon .
\end{eqnarray*}
\end{proof}
Recall that in a tensor differential category it is possible to re-express the derivative in a more compact form using a {\em deriving transformation}
$d_\ox: A \ox S(A) \to S(A)$. It is natural to wonder whether the same can be done for
Cartesian differential storage categories: that is, to define the derivative in terms of analogous structure:
\begin{definition}
A {\bf Cartesian deriving transformation} on a Cartesian storage
category is a (not-necessarily natural) transformation
$d_{\x}:A \x A \to S(A)$ satisfying:
\begin{enumerate}[{\bf[cd.1]}]
\item $d_\x S(0)\epsilon=0$, $d_\x S(f+g)\epsilon =
d_\x(S(f)+S(g))\epsilon$
\item $\<h+k,v\>d_\x=\<h,v\>d_\x+\<k,v\>d_\x$, $\<0,v\>d_\x=0$
\item $d_\x\epsilon=\pi_0$
\item $d_\x S(\<f,g\>)\epsilon=d_\x\<S(f)\epsilon,S(g)\epsilon\>$ (Note that
$d_\x S(!)\epsilon=d_\x!=!$ is true since $1$ is terminal.)
\item $d_\x S(fg)\epsilon=\<d_\x S(f)\epsilon,\pi_1f\>d_\x S(g)\epsilon$
\item $\<\<g,0\>,\<h,k\>\>d_\x S(d_\x)\epsilon=\<g,k\>d_\x$
\item
$\<\<0,h\>,\<g,k\>\>d_\x S(d_\x)\epsilon=\<\<0,g\>,\<h,k\>\>d_\x S(d_\x)\epsilon$
\item $\eta=_{\sf def} \<1,0\>d_\x$ is $\epsilon$-natural (or linear).
\end{enumerate}
\end{definition}
It is now straightforward to observe:
\begin{proposition}
A Cartesian storage category with a Cartesian deriving transformation is precisely a Cartesian differential storage category.
\end{proposition}
\begin{proof}
The translation between the two structures is given by:
\[ D[f] := d_\x S(f)\epsilon \mbox{ ~~~and~~~ } d_\x := D[\varphi] \]
Note that these are inverse as
$$D[f] := d_\x S(f)\epsilon = D[\varphi]S(f)\epsilon = D[\varphi S(f)\epsilon] = D[f \varphi\epsilon] = D[f]$$
and
$$d_\x := D[\varphi] = d_\x S(\varphi) \epsilon = d_\x.$$
Most of the axioms are clearly direct translations of each other: {\bf[CD.1,2,4-7]} and
{\bf[cd.1,2,4-7]} are clearly equivalent through this translation. For {\bf[CD.3]}, note that
$D[1]=d_\x \epsilon=\pi_0$ and that $D[\pi_i]=d_\x S(\pi_i) \epsilon = d_\x \epsilon \pi_i = \pi_0\pi_i$, since in a
storage category, projections are linear (by {\bf[LS.1]}).
It remains to show that being linear in the differential sense coincides with being linear in the storage sense.
First note if $f$ is epsilon natural, that is $\epsilon f = S(f)\epsilon$, then $D[f]=d_\x S(f) \epsilon = d_\x \epsilon f = \pi_0 f$.
Conversely, suppose that $D[f]=\pi_0 f$; then $f = \eta S(f) \epsilon$ as
$$ f ~=~ \<1,0\>\pi_0f ~=~ \<1,0\>d_\x S(f)\epsilon ~=~ \eta S(f)\epsilon$$
which by assumption is linear (in the sense of being $\epsilon$-natural).
\end{proof}
\subsection{The main theorem}
From the results of Section 3, we also note that, in the presence of a codereliction map and when sufficient linear idempotents split, every Cartesian
differential storage category has strong persistent tensor representation. The main result of the paper is:
\begin{theorem} \label{lin-of-cdsc}
The linear maps of a Cartesian differential storage category, in which linear idempotents split, form a tensor storage differential category
satisfying the interchange rule.
\end{theorem}
Let us first recall what a tensor differential category is. A {\bf
tensor differential category} is a tensor category with a coalgebra
modality (see Definition \ref{coalg-modality}) equipped with a natural
transformation
$$d_\ox: A \ox S(A) \to S(A)$$
called a {\bf deriving transformation} satisfying:
\begin{enumerate}[{\bf [d.1]}]
\item $d_\ox e = 0$ (constants)
\item $d_\ox \epsilon = 1 \ox e$ (linear maps)
\item $d_\ox \Delta = (1 \ox \Delta) (d_\ox \ox 1) + (1 \ox \Delta)(c_\ox \ox 1)(1 \ox d_\ox)$ (the product rule)
\item $ d_\ox \delta = (1 \ox \Delta_\ox) a_\ox (d_\ox \ox \delta) d_\ox$ (the chain rule)
\item $(1 \ox d_\ox) d_\ox = a_\ox(c_\ox \ox 1)a_\ox^{-1} (1 \ox d_\ox) d_\ox$ (the interchange rule)
\end{enumerate}
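In the multi-set example these axioms have a direct combinatorial reading: $\Delta$ splits a bag into two complementary sub-bags and $d_\ox$ appends one fresh element, so {\bf [d.3]} says that a fresh element appended before splitting lands in exactly one of the two halves (the Leibniz rule), while {\bf [d.5]} says that the order in which two fresh elements are appended is immaterial.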
A tensor {\em storage} differential category simply means that the
storage transformations (see Definition \ref{storage-trans}) are
isomorphisms. Note also that in \cite{diffl} a differential category
was defined to be one satisfying only the first four of these
conditions: the last condition, the interchange rule, was introduced in
\cite{CartDiff} in order to ensure that the coKleisli category was a
Cartesian differential category. Here we shall simply add the
interchange law as a condition for being a tensor differential category
as it will turn out not only to be sufficient but necessary to obtain
the characterization of tensor storage differential categories (with an
exact modality) as the linear maps of a Cartesian differential storage
category.
The remainder of the section is dedicated to proving Theorem \ref{lin-of-cdsc}.
Given a Cartesian differential storage category here is the definition of the tensor deriving transformation:
$$\xymatrix{A \x S(A) \ar[d]_{\varphi_\ox} \ar[r]^{\eta \x 1} & S(A) \x S(A) \ar[r]^{m_\x} & S(A \x A) \ar[r]^{S(D[\varphi_A])} & S^2(A) \ar[d]^{\epsilon} \\
A \ox S(A) \ar@{..>}[rrr]_{d_\ox} &&& S(A)}$$
We start by observing:
\begin{lemma}~
\begin{enumerate}[(i)]
\item $d_\ox$ is natural for linear maps;
\item
The tensor deriving transformation and the differential are inter-definable with
$$D[f] = (1 \x \varphi)\varphi_\ox d_\ox S(f)\epsilon.$$
\end{enumerate}
\end{lemma}
\begin{proof}~
\begin{enumerate}[(i)]
\item Note that $d_\ox S(f) = (f \ox S(f)) d_\ox$ if and only if $\varphi_\ox d_\ox S(f) = \varphi_\ox (f \ox S(f)) d_\ox$
but $\varphi_\ox d_\ox = (\eta \x 1) m_\x S(D[\varphi])\epsilon$, which is natural for linear $f$. This provides the result immediately.
\item It suffices to prove the equality:
\begin{eqnarray*}
\lefteqn{(1 \x \varphi)\varphi_\ox d_\ox S(f)\epsilon} \\
& = & (1 \x \varphi) (\eta \x 1) m_\x S(D[\varphi]) \epsilon S(f)\epsilon \\
& = & (1 \x \varphi) (\eta \x 1) m_\x S(D[\varphi] S(f) \epsilon) \epsilon \\
& = & (\eta \x 1) (1 \x \varphi) m_\x S(D[\varphi S(f) \epsilon]) \epsilon \\
& = & (\eta \x 1) \theta S(D[f \varphi \epsilon]) \epsilon ~~~~ \mbox{(strength)}\\
& = & (\eta \x 1) (\epsilon \x 1) D[f] ~~~~ \mbox{(linearity of the differential in its first argument)}\\
& = & D[f] \\
\end{eqnarray*}
\end{enumerate}
\end{proof}
We want to show that $d_\ox$ as defined satisfies {\bf [d.1]}--{\bf [d.5]} above. So we shall simply go through the conditions:
\begin{enumerate}[{\bf [d.1]}]
\item Constants ($d_\ox e = 0$)
\begin{eqnarray*}
d_\ox e & = & d_\ox S(0)s_\top \\
& = & (0 \ox S(0))d_\ox s_\top \\
& = & 0
\end{eqnarray*}
\item Differentials of linear maps ($d_\ox \epsilon = 1 \ox e$)
\begin{eqnarray*}
\varphi_\ox d_\ox \epsilon & = & (\eta \x 1) m_\x S(D[\varphi]) \epsilon\epsilon \\
& = & (\eta \x 1) m_\x S(D[\varphi] \epsilon) \epsilon \\
& = & (\eta \x 1) m_\x S(D[\varphi \epsilon]) \epsilon \\
& = & (\eta \x 1) m_\x S(D[1]) \epsilon \\
& = & (\eta \x 1) m_\x S(\pi_0) \epsilon \\
& = & (\eta \x 1) \pi_0 \epsilon \\
& = & \pi_0 \eta \epsilon \\
& = & \pi_0 = \varphi_\ox (1 \ox e)
\end{eqnarray*}
\item The product rule ($d_\ox \Delta = (1 \ox \Delta) (d_\ox \ox 1) + (1 \ox \Delta)(c_\ox \ox 1)(1 \ox d_\ox)$)
This is more complicated. Start by noting:
\begin{eqnarray*}
d_\ox \Delta & = & d_\ox S(\<1,0\> + \< 0,1\>) s_\ox \\
& = & ((\<1,0\> + \< 0,1\>) \ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox \\
& = & (\< 1,0\>\ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox \\
& & ~~~~~ + (\< 0,1\> \ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox
\end{eqnarray*}
It suffices to prove that
$$(\< 1,0\>\ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox = (1 \ox \Delta) (d_\ox \ox 1)~~~~\mbox{ and }$$
$$(\< 0,1\> \ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox = (1 \ox \Delta)(c_\ox \ox 1)(1 \ox d_\ox)$$
We shall demonstrate the former leaving the latter to the reader.
However, we require some observations first:
\begin{lemma}~ \label{more-bilinear-identities}
\begin{enumerate}[(i)]
\item If $h: A \x B \to C$ is bilinear, then $\<\<a,b\>,\<c,e\>\> D[h] = \<a,e \> h+ \<c,b\>h$;
\item $\<\<a,b\>,\<c,e\>\> D[m_\x] = \<a,e \> m_\x + \<c,b\>m_\x$;
\item $\<\<a,b\>,\<c,e\>\> D[\varphi_{A \x B}] = \< \< a,c\>D[\varphi_A], e\varphi_B\>m_\x + \< c \varphi_A,\<b,e\>D[\varphi_B]\>m_\x$;
\item $(\<1,0\> \x 1)D[\varphi_{A \x B}] = a_\x (D[\varphi_A] \x \varphi_B)m_\x$.
\end{enumerate}
\end{lemma}
\begin{proof}~
\begin{enumerate}[(i)]
\item We have the following calculation:
\begin{eqnarray*}
\<\<a,b\>,\<c,e\>\> D[h] & = & \<\<a,0\>,\<c,e\>\> D[h] + \<\<0,b\>,\<c,e\>\> D[h] \\
& = & \< a,\<c,e\>\> D_B[h] + \< b,\<c,e\>\> D_A[h] \\
& = & \< a,e \> h + \< c,b\>h ~~~~~\mbox{($h$ is bilinear)}.
\end{eqnarray*}
\item Use the fact that $m_\x$ is bilinear (see Lemma \ref{lem:basic_tensor_rep}).
\item Using the fact that $\varphi_{A \x B} = (\varphi_A \x \varphi_B) m_\x$ (see Proposition \ref{prop:linmaps}) we have:
\begin{eqnarray*}
\lefteqn{\<\<a,b\>,\<c,e\>\> D[\varphi_{A \x B}]} \\ & = &
\<\<a,b\>,\<c,e\>\> D[(\varphi_A \x \varphi_B)m_\x] \\
& = & \<\<a,b\>,\<c,e\>\> \< D[\varphi_A \x \varphi_B], \pi_1 (\varphi_A \x \varphi_B) \> D[m_\x] \\
& = & \<\<a,b\>,\<c,e\>\> \< \< (\pi_0 \x \pi_0)D[\varphi_A], (\pi_1 \x \pi_1)D[\varphi_B] \>, \< \pi_1 \pi_0 \varphi_A ,\pi_1 \pi_1 \varphi_B \> \> D[m_\x] \\
& = & \< \< \< a,c\> D[\varphi_A], \< b,e\> D[\varphi_B] \>, \< c \varphi_A ,e \varphi_B \> \> D[m_\x] \\
& = & \< \< a,c\> D[\varphi_A],e \varphi_B \>m_\x + \< c \varphi_A, \< b,e\> D[\varphi_B]\> m_\x.
\end{eqnarray*}
\item We apply (iii):
\begin{eqnarray*}
\lefteqn{ (\<1,0\> \x 1)D[\varphi_{A \x B}]} \\
& =& \<\<\pi_0,0\>,\<\pi_1\pi_0,\pi_1\pi_1\>\> D[\varphi_{A \x B}] \\
& = & \< \< \pi_0,\pi_1\pi_0\> D[\varphi_A],\pi_1\pi_1 \varphi_B \>m_\x + \< \pi_1\pi_0 \varphi_A, \< 0,\pi_1\pi_1\> D[\varphi_B]\> m_\x \\
& = & \< \< \pi_0,\pi_1\pi_0\> D[\varphi_A],\pi_1\pi_1 \varphi_B \>m_\x + \< \pi_1\pi_0 \varphi_A, 0\> m_\x \\
& = & \< \< \pi_0,\pi_1\pi_0\> D[\varphi_A],\pi_1\pi_1 \varphi_B \>m_\x + 0 \\
& = & a_\x (D[\varphi_A] \x \varphi_B) m_\x.
\end{eqnarray*}
\end{enumerate}
\end{proof}
We are now ready to calculate. Since both maps are linear we may prefix each side with the universal map which gives tensorial representation;
the universal property tells us that the maps are equal if and only if these composites are equal. Here is the calculation:
\begin{eqnarray*}
\lefteqn{ \varphi_\ox (\< 1,0\>\ox S(\<1,0\> + \< 0,1\>)) d_\ox s_\ox}\\
& = & (\< 1,0\>\x S(\Delta_\x)) \varphi_\ox d_\ox s_\ox \\
& = & (\< 1,0\>\x S(\Delta_\x)) (\eta \x 1)m_\x S(D[\varphi_{A \x A}]) \epsilon s_\ox \\
& = & (\eta \x S(\Delta_\x))m_\x S((\< 1,0\> \x 1)D[\varphi_{A \x A}]) \epsilon s_\ox \\
& = & (\eta \x \Delta_\x)(1 \x m_\x) m_\x S(a_\x (D[\varphi_A] \x \varphi_A)m_\x) \epsilon s_\ox ~~~\mbox{(Lemma \ref{more-bilinear-identities} (iv))}\\
& = & (\eta \x \Delta_\x) a_\x (m_\x \x 1) m_\x S(D[\varphi_A] \x \varphi_A) S(m_\x) \epsilon s_\ox \\
& = & (\eta \x \Delta_\x)a_\x (m_\x \x 1) (S(D[\varphi_A]) \x S(\varphi_A)) m_\x S(m_\x) \epsilon s_\ox \\
& = & (\eta \x \Delta_\x)a_\x(m_\x \x 1) (S(D[\varphi_A]) \x S(\varphi_A)) (\epsilon \x \epsilon) m_\x s_\ox ~~~\mbox{(Lemma \ref{bilinear-identities} (ii))}\\
& = & (\eta \x \Delta_\x)a_\x( m_\x \x 1) (S(D[\varphi_A]) \x 1) (\epsilon \x 1) \varphi_\ox \\
& = & (1 \x \Delta_\x)a_\x (((\eta \x 1) m_\x S(D[\varphi_A])\epsilon) \x 1) \varphi_\ox \\
& = & (1 \x \Delta_\x)a_\x ((\varphi_\ox d_\ox) \x 1) \varphi_\ox \\
& = & (1 \x \Delta_\x)a_\x (\varphi_\ox \x 1)\varphi_\ox (d_\ox \ox 1) \\
& = & (1 \x \Delta_\ox) \varphi_\ox (d_\ox \ox 1) \\
& = & \varphi_\ox (1 \ox \Delta_\ox) (d_\ox \ox 1)
\end{eqnarray*}
\item To prove the chain rule, $ d_\ox \delta = (1 \ox \Delta_\ox) a_\ox (d_\ox \ox \delta) d_\ox$, we
precompose both sides with $(1 \x \varphi) \varphi_\ox$. As both maps are clearly linear we can then use the
universal properties of $\varphi$ and $\varphi_\ox$ to conclude the original maps are equal if and only if these composites are:
\begin{eqnarray*}
\lefteqn{(1 \x \varphi) \varphi_\ox d_\ox \delta} \\
& = & (1 \x \varphi) (\eta \x 1)m_\x S(D[\varphi])\epsilon S(\varphi) \\
& = & (1 \x \varphi) (\eta \x 1)m_\x S(D[\varphi]S(\varphi))\epsilon \\
& = & (\eta \x \varphi) m_\x S(D[\varphi S(\varphi)])\epsilon \\
& = & (\eta \x 1) \theta S(D[\varphi\varphi])\epsilon \\
& = & (\eta \x 1) (\epsilon \x 1) D[\varphi\varphi] \\
& = & D[\varphi\varphi] \\
\lefteqn{(1 \x \varphi) \varphi_\ox (1 \ox \Delta_\ox) a_\ox (d_\ox \ox \delta) d_\ox} \\
& = & (1 \x \varphi) (1 \x \Delta_\ox) \varphi_\ox a_\ox (d_\ox \ox \delta) d_\ox \\
& = & (1 \x \varphi\Delta_\x) a_\x(\varphi_\ox \x 1) \varphi_\ox (d_\ox \ox \delta) d_\ox \\
& = & (1 \x \varphi\Delta_\x) a_\x((\varphi_\ox d_\ox) \x S(\varphi)) \varphi_\ox d_\ox \\
& = & (1 \x \varphi\Delta_\x) a_\x(((\eta \x 1)m_\x S(D[\varphi])\epsilon) \x S(\varphi)) (\eta \x 1)m_\x S(D[\varphi])\epsilon \\
& = & (1 \x \Delta_\x (\varphi \!\x\! \varphi)) a_\x ((\eta \!\x\! 1)m_\x S(D[\varphi]) \epsilon \!\x\! S(\varphi)) (\eta \!\x\! 1)m_\x S(D[\varphi] ) \epsilon\\
& = & (1 \x \Delta_\x) a_\x (((\eta \x \varphi)m_\x S(D[\varphi]) \epsilon) \x \varphi S(\varphi)) (\eta \x 1)m_\x S(D[\varphi] ) \epsilon\\
& = & (1 \x \Delta_\x) a_\x (((\eta \x 1)\theta S(D[\varphi]) \epsilon) \x (\varphi\varphi)) (\eta \x 1)m_\x S(D[\varphi] ) \epsilon\\
& = & (1 \x \Delta_\x) a_\x (((\eta \x 1)(\epsilon \x 1)D[\varphi]) \x (\varphi\varphi)) (\eta \x 1)m_\x S(D[\varphi] ) \epsilon\\
& = & (1 \x \Delta_\x) a_\x (D[\varphi] \x \varphi) (\eta \x \varphi)m_\x S(D[\varphi] ) \epsilon\\
& = & (1 \x \Delta_\x) a_\x (D[\varphi] \x \varphi) (\eta \x 1)(\epsilon \x 1)D[\varphi] \\
& = &(1 \x \Delta_\x) a_\x (D[\varphi] \x \varphi) D[\varphi] \\
& = & D[\varphi\varphi]
\end{eqnarray*}
\item Interchange ($(1 \ox d_\ox) d_\ox = a_\ox(c_\ox \ox 1)a_\ox^{-1} (1 \ox d_\ox) d_\ox$)
We may precompose with $(1 \x 1 \x \varphi) (1 \x \varphi_\ox)\varphi_\ox$.
\begin{eqnarray*}
\lefteqn{\<a,b,c\>(1 \x 1 \x \varphi) (1 \x \varphi_\ox)\varphi_\ox(1 \ox d_\ox) d_\ox}\\
& = & \<a,b,c\>(1 \x 1 \x \varphi) (1 \x \varphi_\ox d_\ox)\varphi_\ox d_\ox \\
& = & \<a,b,c\>(1 \x 1 \x \varphi) (1 \x (\eta \!\x\! 1)m_\x S(D[\varphi]) \epsilon) (\eta \!\x\! 1)m_\x S(D[\varphi]) \epsilon \\
& = & \<a \eta,b \eta,c\> (1 \x \theta S(D[\varphi]) \epsilon) m_\x S(D[\varphi]) \epsilon \\
& = & \<a \eta,b \eta,c\> (1 \x (\epsilon \x 1)D[\varphi] ) m_\x S(D[\varphi]) \epsilon \\
& = & \< a \eta,\<b,c\>D[\varphi]\>m_\x S(D[\varphi]) \epsilon
\end{eqnarray*}
The identity requires us to show that $a$ and $b$ can be swapped.
Consider the double differential of $\varphi$:
\begin{eqnarray*}
\<\< 0,a\>,\<b,c\>\>D[D[\varphi]] & = &\< \< 0,a\>,\<b,c\>\>D[\varphi S(D[\varphi])\epsilon] \\
& = & \<\< 0,a\>,\<b,c\>\>D[\varphi] S(D[\varphi])\epsilon \\
& = & \<\< 0,a\>,\<b,c\>\>D[(\varphi \x \varphi)m_\x]S(D[\varphi])\epsilon \\
& = & \< \<\< 0,a\>,\<b,c\>\>D[\varphi \x \varphi], \< b \varphi, c \varphi \> \> D[m_\x] S(D[\varphi])\epsilon \\
& = & \< \< \< 0,b\> D[\varphi], \< a,c\>D[\varphi] \> ,\< b \varphi, c \varphi \> \> D[m_\x] S(D[\varphi])\epsilon \\
& = & ( \< \< 0,b\> D[\varphi],c \varphi \> m_\x + \< b \varphi, \< a,c\>D[\varphi] \> m_\x ) S(D[\varphi])\epsilon \\
& = & ( 0 + \< b \varphi, \< a,c\>D[\varphi] \> m_\x ) S(D[\varphi])\epsilon \\
& = & \< b \varphi, \< a,c\>D[\varphi] \> m_\x S(D[\varphi])\epsilon
\end{eqnarray*}
Then we have:
\begin{eqnarray*}
\lefteqn{\< a \varphi,\<b,c\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon }\\
& = & \<\<0,b\>,\<a,c\>\>D[D[\varphi]] \\
& = & \<\<0,a\>,\<b,c\>\>D[D[\varphi]] \\
& = & \< b \varphi,\<a,c\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon
\end{eqnarray*}
We are nearly there except we would like to change $\varphi$ into $\eta$, which is allowed by another lemma.
\begin{lemma}~
$$\< b \varphi,\<a,c\> D[\varphi] \>m_\x S(D[\varphi])\epsilon = \< a \eta,\<b,c\>D[\varphi]\>m_\x S(D[\varphi])\epsilon$$
\end{lemma}
To prove this lemma we partially differentiate both sides of the equation derived above in the form:
$$\< \pi_0\pi_0 \varphi,\<\pi_0\pi_1,\pi_1\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon = \< \pi_0\pi_1 \varphi,\<\pi_0\pi_0,\pi_1\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon$$
with respect to the first coordinate at position $0$. This is best done using the term logic. In the derivation below we flip between the term logic and the categorical term.
The derivation may best be read from the middle outward:
\begin{eqnarray*}
\lefteqn{\< \pi_0\pi_0 \eta,\<\pi_0\pi_1,\pi_1\>D[\varphi]\>m_\x S(D[\varphi])\epsilon} \\
& = & \left\llbracket ((a,b),c) \mapsto \epsilon(S(D[\varphi])(m_\x(\eta(a),D[\varphi](c) \cdot b))) \right\rrbracket \\
& = & \left\llbracket ((a,b),c) \mapsto \epsilon(S(D[\varphi])(m_\x(\diff{\varphi(x)}{x}{0}{a},D[\varphi](c) \cdot b))) \right\rrbracket \\
& = & \left\llbracket ((a,b),c) \mapsto \diff{\epsilon(S(D[\varphi])(m_\x(\varphi(x),D[\varphi](c).b)))}{x}{0}{a} \right\rrbracket \\
& = & \<\<\< \pi_0\pi_0,0\>,0\>,\<\<0,\pi_0\pi_1\>,\pi_1\>\> D[\< \pi_0\pi_0 \varphi,\<\pi_0\pi_1,\pi_1\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon] \\
& = & \<\<\< \pi_0\pi_0,0\>,0\>,\<\<0,\pi_0\pi_1\>,\pi_1\>\> D[\< \pi_0\pi_1 \varphi,\<\pi_0\pi_0,\pi_1\>D[\varphi] \> m_\x S(D[\varphi]) \epsilon] \\
& = & \left\llbracket ((a,b),c) \mapsto \diff{\epsilon(S(D[\varphi])(m_\x(\varphi(b),D[\varphi](c).x)))}{x}{0}{a} \right\rrbracket \\
& = & \left\llbracket ((a,b),c) \mapsto \epsilon(S(D[\varphi])(m_\x(\varphi b,\diff{D[\varphi](c)\cdot x}{x}{0}{a}))) \right\rrbracket \\
& = & \left\llbracket ((a,b),c) \mapsto \epsilon(S(D[\varphi])(m_\x(\varphi b,D[\varphi](c)\cdot a))) \right\rrbracket \\
& = & \< \pi_0\pi_1 \varphi,\<\pi_0\pi_0,\pi_1\>D[\varphi]\>m_\x S(D[\varphi])\epsilon
\end{eqnarray*}
The key step uses the fact that $\eta(a)$ is the partial derivative of $\varphi(x)$ at $0$ with linear argument $a$, that is $\eta(a) = \diff{\varphi(x)}{x}{0}{a}$. Otherwise we are using the fact that we are differentiating in linear positions so the term does not change.
\end{enumerate}
\begin{remark}{}
{\em Cartesian closed differential categories, following \cite{bem10,m12}, provide a semantics for the differential
and the resource $\lambda$-calculus. Closed differential storage categories, by Section \ref{closedstorage},
are precisely those semantic settings which arise as the coKleisli categories of monoidal closed differential tensor storage categories.}
\end{remark}
\begin{document}
\begin{frontmatter}
\title{Divergence rates of Markov order estimators and their
application to statistical estimation of stationary ergodic processes}
\runtitle{Statistical estimation of stationary ergodic processes}
\begin{aug}
\author{\fnms{Zsolt} \snm{Talata}\corref{}\ead[label=e1]{talata@math.ku.edu}}
\runauthor{Zs. Talata}
\address{Department of Mathematics, University of Kansas, 405 Snow Hall,
1460 Jayhawk Boulevard, Lawrence, KS 66045-7523, USA. \printead{e1}}
\end{aug}
\received{\smonth{4} \syear{2010}}
\revised{\smonth{6} \syear{2011}}
\begin{abstract}
Stationary ergodic processes with finite alphabets are estimated by
finite memory processes from a sample, an $n$-length realization of the
process, where the memory depth of the estimator process is also
estimated from the sample using penalized maximum likelihood (PML).
Under some assumptions on the continuity rate and the assumption of
non-nullness, a rate of convergence in $\bar{d}$-distance is obtained,
with explicit constants. The result requires an analysis of the
divergence of PML Markov order estimators for not necessarily finite
memory processes. This divergence problem is investigated in more
generality for three information criteria: the Bayesian information
criterion with generalized penalty term yielding the PML, and the
normalized maximum likelihood and the Krichevsky--Trofimov code
lengths. Lower and upper bounds on the estimated order are obtained.
The notion of consistent Markov order estimation is generalized for
infinite memory processes using the concept of oracle order estimates,
and generalized consistency of the PML Markov order estimator is
presented.
\end{abstract}
\begin{keyword}
\kwd{finite memory estimator}
\kwd{infinite memory}
\kwd{information criteria}
\kwd{Markov approximation}
\kwd{minimum description length}
\kwd{oracle inequalities}
\kwd{penalized maximum likelihood}
\kwd{rate of convergence}
\end{keyword}
\end{frontmatter}
\section{Introduction}
This paper is concerned with the problem of estimating stationary
ergodic processes with finite alphabet from a sample, an observed
length $n$ realization of the process, with the $\bar{d}$-distance
being considered between the process and the estimated one. The $\bar
{d}$-distance was introduced by Ornstein~\cite{O1} and became one of
the most widely used metrics over stationary processes. Two stationary
processes are close in $\bar{d}$-distance if there is a joint
distribution whose marginals are the distributions of the processes
such that the marginal processes are close with high probability (see
Section~\ref{secappl} for the formal definition). The class of ergodic
processes is $\bar{d}$-closed and entropy is $\bar{d}$-continuous,
properties that do not hold for the weak topology~\cite{ShB}.
Ornstein and Weiss~\cite{OW} proved that for stationary processes
isomorphic to i.i.d. processes, the empirical distribution of the
$k(n)$-length blocks is a strongly consistent estimator of the
$k(n)$-length parts of the process in $\bar{d}$-distance if and only if
$k(n)\le(\log n)/h$, where $h$ denotes the entropy of the
process.
Csisz\'ar and Talata~\cite{CsT3} estimated the $n$-length part of a
stationary ergodic process $X$ by a Markov process of order
$k_n$. The transition probabilities of this Markov
estimator process are the empirical conditional probabilities, and the
order $k_n\to+\infty$ does not depend on the sample. They obtained a
rate of convergence of the Markov estimator to the process $X$ in
$\bar{d}$-distance, which consists of two terms. The first one is the
bias due to the error of the approximation of the process by a Markov
chain. The second term is the variation due to the error of the
estimation of the parameters of the Markov chain from a
sample.
In this paper, the order $k_n$ of the Markov estimator process is
estimated from the sample. For the order estimation, penalized maximum
likelihood (PML) with general penalty term is used. The resulting Markov
estimator process finds a tradeoff between the bias and the variation,
as it uses shorter memory for faster memory decay of the process $X$.
If the process $X$ is a Markov chain, the PML order estimation recovers
its order asymptotically with a wide range of penalty terms.
Not only is an asymptotic rate of convergence result obtained, but also
an explicit bound on the probability that the $\bar{d}$-distance of the
above Markov estimator from the process $X$ is greater than
$\varepsilon$. It is assumed that the process $X$ is non-null, that is, the
conditional probabilities of the symbols given the pasts are separated
from zero, and that the continuity rate of the process $X$ is summable
and the restricted continuity rate is uniformly convergent. These
conditions are usually assumed in this area~\cite{Br,DGG,FG,Marton}.
The summability of the continuity rate implies that the process is
isomorphic to an i.i.d. process~\cite{Bb}.
The above result on statistical estimation of stationary ergodic
processes requires a non-asymptotic analysis of the Markov order
estimation for not necessarily finite memory processes. In this paper,
this problem is also investigated in more generality: under milder
conditions than would be needed for the above bound, and not only for
the PML method.
A popular approach to the Markov order estimation is the minimum
description length (MDL) principle~\cite{RissanenB,BRY}. This method
evaluates an information criterion for each candidate order based on
the sample and the estimator takes the order for which the value is
minimal. The normalized maximum likelihood (NML)~\cite{St} and the
Krichevsky--Trofimov (KT)~\cite{KT} code lengths are natural
information criteria because the former minimizes the worst case
maximum redundancy for the model class of $k$-order Markov chains,
while the latter does so, up to an additive constant, with the average
redundancy. The Bayesian information criterion (BIC)~\cite{Schw} can be
regarded as an approximation of the NML and KT code lengths. The PML is
a generalization of BIC; special settings of the penalty term yield the
BIC and other well-known information criteria, such as the Akaike
information criterion (AIC)~\cite{Akaike}. There are other methods for
Markov order estimation, see~\cite{Narayan} and references there, and
the problem can also be formulated in the setting of hypothesis testing
\cite{Ryabko}.
If a process is a Markov chain, the NML and KT Markov order estimators
are strongly consistent if the candidate orders have an upper bound
$\RMo(\log n)$~\cite{Cs}. Without such a bound, they fail to be consistent
\cite{CsSh}. The BIC Markov order estimator is strongly consistent
without any bound on the candidate orders~\cite{CsSh}. If a process has
infinite memory, the Markov order estimators are expected to tend to
infinity as $n\to+\infty$. Context trees of arbitrary stationary
ergodic processes constitute a model more complex than Markov
chains. Recent results~\cite{CsT} in that area imply that this
expectation holds true for the BIC and KT Markov order estimators but
they provide no information about the asymptotics of the
divergence.
In this paper, the divergence of the PML, NML and KT Markov order
estimators for not necessarily finite memory processes is
investigated.
Not only asymptotic rates of divergence are obtained but also explicit
bounds on the probability that the estimators are greater and less,
respectively, than some order. Instead of the usual assumption of
non-nullness, it is assumed only that the conditional probabilities of
one of the symbols given the pasts are separated from zero. This
property is called weak non-nullness and is ``noticeably weaker''
than non-nullness~\cite{CFF}.
First, the process is assumed to be weakly non-null and $\alpha
$-summable. The $\alpha$-summability~\cite{N1,N2,GL,L} is a condition
weaker than the summability of the continuity rate. Under these
conditions, a bound on the probability that the estimators are greater
than some order is obtained, that yields an $\mathcal{O}(\log n)$ upper
bound on the estimated order eventually almost surely as $n\to+\infty$.
Then, a bound on the probability that the estimators are less than some
order is obtained assuming that the process is weakly non-null and the
decay of its continuity rates is in some exponential range. This bound
implies that the estimators satisfying the conditions attain a $c\log
n$ divergence rate eventually almost surely as $n\to+\infty$, where the
coefficient $c$ depends on the range of the continuity rates. The class
of processes with exponentially decaying continuity rate is considered
in various problems~\cite{DGG,GGG}. A fast divergence rate of the
estimators is expected only for a certain range of continuity rates.
Clearly, the estimators do not have a fast divergence rate if the
memory decay of the process is too fast. On the other hand, too slow
memory decay is also unfavorable for a fast divergence rate because then
the empirical probabilities do not necessarily converge to the true
probabilities.
To provide additional insight into the asymptotics of Markov order
estimators, the notion of consistent Markov order estimation is
generalized for infinite memory processes. A Markov order estimator is
compared to its oracle version, which is calculated based on the true
distribution of the process instead of the empirical distribution. The
oracle concept is used in various problems, see, for example,
\cite{Barron,Birge,Ooracle,GaoGijbels}. If the decay of the continuity
rate of the process is faster than exponential, the ratio of the PML
Markov order estimator with sufficiently large penalty term to its
oracle version is shown to converge to $1$ in probability.
The structure of the paper is the following. In Section \ref
{secnotation}, notation and definitions are introduced for stationary
ergodic processes with finite alphabets. In Section~\ref{secic}, the
PML, NML and KT information criteria are introduced. Section \ref
{secmain} contains the results on divergence of the
information-criterion based Markov order estimators. In Section \ref
{secappl}, the problem of estimating stationary ergodic processes in
$\bar{d}$-distance is formulated and our results are presented. The
results require bounds on empirical entropies, which are stated in
Section~\ref{secmain} and are proved in Section~\ref{secent}.
Section~\ref{secproof} contains the proof of the divergence results,
and Section~\ref{secproofappl} the proof of the process estimation results.
\section{Finite and infinite memory processes}\label{secnotation}
Let $X= \{ X_i, -\infty<i<+\infty\}$ be a stationary ergodic
stochastic process with finite alphabet $A$. We write $X_i^j =
X_i,\ldots,X_j$ and $x_i^j = x_i,\ldots,x_j \in A^{j-i+1}$ for $j\ge
i$. If $j<i$, $x_i^j$~is the empty string. For two strings $x_1^i\in
A^i$ and $y_1^j\in A^j$, $x_1^i y_1^j$ denotes their concatenation
$x_1,\ldots,x_i,y_1,\ldots,y_j \in A^{i+j}$. Write
\[
P\bigl(x_i^j\bigr) = \Prob\bigl( X_i^j
= x_i^j \bigr)
\]
and, if $P(x_{-m}^{-1})>0$,
\[
P\bigl(a | x_{-m}^{-1} \bigr) = \Prob\bigl(
X_0 = a \mid X_{-m}^{-1} =
x_{-m}^{-1} \bigr).
\]
For $m=0$, $P(a | x_{-m}^{-1} ) = P ( a)$.
The process $X$ is called \textit{weakly non-null} if
\[
\alpha_0 = \sum_{a\in A}
\inf_{x_{-\infty}^{-1} \in A^{\infty}} P\bigl(a | x_{-\infty}^{-1} \bigr
) >0.
\]
Letting
\[
\alpha_k = \min_{y_{-k}^{-1} \in A^k} \sum_{a\in A}
\inf_{x_{-\infty
}^{-1} \in A^{\infty}: x_{-k}^{-1} = y_{-k}^{-1}} P\bigl(a | x_{-\infty
}^{-1} \bigr),\qquad k=1,2,
\ldots,
\]
we say that the process $X$ is \textit{$\alpha$-summable} if
\[
\alpha= \sum_{k=0}^{+\infty} (1-
\alpha_k) <+\infty.
\]
The \textit{continuity rates} of the process $X$ are
\[
\bar{\gamma}(k) = \sup_{x_{-\infty}^{-1} \in A^{\infty}} \sum_{a\in A}
\bigl\llvert P\bigl(a | x_{-k}^{-1} \bigr) - P\bigl(a |
x_{-\infty}^{-1} \bigr) \bigr\rrvert
\]
and
\[
\underbar{\gamma}(k) = \inf_{x_{-\infty}^{-1} \in A^{\infty}} \sum
_{a\in A}
\bigl\llvert P\bigl(a | x_{-k}^{-1} \bigr) - P\bigl(a |
x_{-\infty}^{-1} \bigr) \bigr\rrvert.
\]
Obviously, $\underbar{\gamma}(k) \le\bar{\gamma}(k)$. If $\sum
_{k=1}^{\infty} \bar{\gamma}(k)<+\infty$, then the process $X$ is said
to have \textit{summable continuity rate}.
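These quantities are easy to evaluate in closed form for finite-order chains. As a minimal numerical sketch (with a hypothetical binary order-1 chain of our choosing, not an example from the text): since $P(a | x_{-\infty}^{-1})$ then depends only on $x_{-1}$, we have $\bar{\gamma}(k)=0$ and $\alpha_k=1$ for all $k\ge1$, so both weak non-nullness and $\alpha$-summability are immediate.

```python
# Hypothetical binary order-1 chain: Q[x][a] = P(a | last symbol x).
Q = [[0.7, 0.3],
     [0.4, 0.6]]

# Stationary distribution of a two-state chain, in closed form:
# pi(0) = Q[1][0] / (Q[0][1] + Q[1][0]).
pi0 = Q[1][0] / (Q[0][1] + Q[1][0])
pi = [pi0, 1.0 - pi0]

# alpha_0 = sum_a inf over all pasts of P(a | past); for an order-1 chain
# the infimum is over the last symbol only.
alpha_0 = sum(min(Q[0][a], Q[1][a]) for a in range(2))

# gamma_bar(0) = sup over pasts of sum_a |P(a) - P(a | past)|.
gamma_bar_0 = max(sum(abs(pi[a] - Q[x][a]) for a in range(2))
                  for x in range(2))

# For k >= 1 the chain forgets everything beyond the last symbol, so
# gamma_bar(k) = 0 and alpha_k = 1; hence alpha = 1 - alpha_0 = 0.3.
print(alpha_0, gamma_bar_0)   # 0.7 and 12/35, up to float rounding
```

For this $Q$ the process is weakly non-null ($\alpha_0 = 0.7 > 0$) and $\alpha$-summable ($\alpha = 0.3$).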
\begin{remark}\label{remgammaeq}
Since for any $x_{-k}^{-1} \in A^k$ and $z_{-m}^{-k-1}\in A^{m-k}$,
$m\ge k$,
\[
\inf_{x_{-\infty}^{-k-1}} P\bigl(a | x_{-\infty}^{-1} \bigr) \le P
\bigl(a | z_{-m}^{-k-1} x_{-k}^{-1} \bigr)
\le\sup_{x_{-\infty}^{-k-1}} P\bigl(a | x_{-\infty}^{-1} \bigr),
\]
the above definition of continuity rate is equivalent to
\[
\bar{\gamma}(k) = \sup_{ i>k } \max_{x_{-i}^{-1} \in A^i} \sum
_{a\in A} \bigl\llvert P\bigl(a | x_{-k}^{-1}
\bigr) - P\bigl(a | x_{-i}^{-1} \bigr) \bigr\rrvert.
\]
\end{remark}
\begin{remark}\label{remgammaalpha}
The process is $\alpha$-summable if it has summable continuity rate because
\begin{eqnarray*}
1-\alpha_k
&\le&1 - \max_{y_{-k}^{-1} \in A^k} \sum_{a\in A} P
\bigl(a | y_{-k}^{-1} \bigr)\\
&&{} + \max_{y_{-k}^{-1} \in A^k} \sum
_{a\in A} \sup_{x_{-\infty}^{-1}
\in
A^{\infty}: x_{-k}^{-1} = y_{-k}^{-1}} \bigl( P\bigl(a |
y_{-k}^{-1} \bigr) - P\bigl(a | x_{-\infty}^{-1}
\bigr) \bigr)
\\
&\le& |A| \bar{\gamma}(k).
\end{eqnarray*}
\end{remark}
The $k$-order \textit{entropy} of the process $X$ is
\[
H_k = - \sum_{a_1^k\in A^k} P
\bigl(a_1^k\bigr) \log P\bigl(a_1^k
\bigr),\qquad k\ge1,
\]
and the $k$-order \textit{conditional entropy} is
\[
h_k = - \sum_{a_1^{k+1}\in A^{k+1}} P
\bigl(a_1^{k+1}\bigr) \log P\bigl(a_{k+1} |
a_1^k\bigr),\qquad k\ge0.
\]
Logarithms are to the base $2$. It is well known for stationary
processes~\cite{Cover,CSbook} that the conditional entropy $h_k$ is a
non-negative decreasing function of $k$, therefore its limit exists as
$k\to+\infty$. The \textit{entropy rate} of the process is
\[
\bar{H} = \lim_{k\to+\infty} h_k = \lim_{k\to+\infty} \frac1k
H_k.
\]
Note that $h_k - \bar{H} \ge0$ for any $k\ge0$.
The process $X$ is a \textit{Markov chain} of order $k$ if for each $n>k$
and $x_1^n\in A^n$
\begin{equation}
\label{eqMCdef} P \bigl(x_1^n\bigr) = P
\bigl(x_1^k\bigr) \prod_{i=k+1}^n
P\bigl(x_i | x_{i-k}^{i-1}\bigr),
\end{equation}
where $P (x_1^k)$ is called initial distribution and $ \{
P(a|a_1^k), a\in A, a_1^k\in A^k \}$ is called transition
probability matrix. The case $k=0$ corresponds to i.i.d. processes. The
process $X$ is of \textit{infinite memory} if it is not a Markov chain
for any order $k<+\infty$. For infinite memory processes, $h_k - \bar
{H}>0$ for any $k\ge0$.
In this paper, we consider statistical estimates based on a sample
$X_1^n$, an $n$-length part of the process. Let $N_n (a_1^k)$ denote
the number of occurrences of the string $a_1^k$ in the sample $X_1^n$
\[
N_n \bigl(a_1^k\bigr) = \bigl\llvert\bigl
\{ i: X_{i+1}^{i+k} = a_1^k, 0\le i
\le n-k \bigr\}\bigr\rrvert.
\]
For $k\ge1$, the empirical probability of the string $a_1^k$ is
\[
\hat{P} \bigl(a_1^k\bigr) = \frac{N_n (a_1^k)}{n-k+1}
\]
and the empirical conditional probability of $a_{k+1}\in A$ given
$a_1^k$ is
\[
\hat{P} \bigl(a_{k+1}| a_1^k\bigr) =
\frac{ N_n(a_1^{k+1}) }{ N_{n-1}(a_1^k) }.
\]
For $k=0$, $\hat{P} (a_{k+1}| a_1^k) = \hat{P} (a_{k+1})$.
The $k$-order \textit{empirical entropy} is
\[
\hat{H}_{k}\bigl(X_1^n\bigr) = - \sum
_{a_1^k\in A^k} \hat{P} \bigl(a_1^k
\bigr) \log\hat{P} \bigl(a_1^k\bigr),\qquad 1\le k\le n,
\]
and the $k$-order \textit{empirical conditional entropy} is
\[
\hat{h}_{k}\bigl(X_1^n\bigr) = - \sum
_{a_1^{k+1}\in A^{k+1}} \hat{P} \bigl(a_1^{k+1}
\bigr) \log\hat{P} \bigl(a_{k+1}| a_1^k\bigr),\qquad
0\le k\le n-1.
\]
The likelihood of the sample $X_1^n$ with respect to a $k$-order Markov
chain model of the process $X$ with some transition probability matrix
$ \{ Q(a_{k+1}|a_1^k), a_{k+1}\in A, a_1^k\in A^k \}$, by~(\ref{eqMCdef}), is
\[
P'\bigl( X_1^n \bigr) = P'
\bigl(X_1^k\bigr) \prod_{a_1^{k+1}\in A^{k+1}}
Q\bigl(a_{k+1}| a_1^k\bigr)^{N_n(a_1^{k+1})}.
\]
For $0\le k< n$, the \textit{maximum likelihood} is the maximum in
$Q(a_{k+1}|a_1^k)$ of the second factor above, which equals
\[
\mathrm{ML}_{k}\bigl(X_1^n\bigr) = \prod
_{a_1^{k+1}\in A^{k+1}} \hat{P} \bigl(a_{k+1}|
a_1^k\bigr)^{N_n(a_1^{k+1})}.
\]
Note that $\log\mathrm{ML}_{k}(X_1^n) = -(n-k) \hat{h}_{k}(X_1^n)$.
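These empirical quantities are straightforward to compute from a sample. The following minimal Python sketch (the function names are ours, not from the text) computes the counts $N_n(a_1^k)$ and the empirical conditional entropy $\hat{h}_k(X_1^n)$, and checks the identity $\log\mathrm{ML}_{k}(X_1^n) = -(n-k) \hat{h}_{k}(X_1^n)$ numerically:

```python
import math
from collections import Counter

def counts(x, k):
    """N(a_1^k): number of occurrences of each length-k string in x."""
    return Counter(tuple(x[i:i + k]) for i in range(len(x) - k + 1))

def h_hat(x, k):
    """k-order empirical conditional entropy (base-2 logarithms)."""
    n = len(x)
    num = counts(x, k + 1)        # N_n(a_1^{k+1})
    den = counts(x[:-1], k)       # N_{n-1}(a_1^k)
    return -sum((c / (n - k)) * math.log2(c / den[s[:k]])
                for s, c in num.items())

x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
n, k = len(x), 1

# log2 ML_k(x) = sum over (k+1)-strings of N_n log2 P_hat(a_{k+1} | a_1^k).
log_ml = sum(c * math.log2(c / counts(x[:-1], k)[s[:k]])
             for s, c in counts(x, k + 1).items())

# The identity stated above: log ML_k(x) = -(n - k) h_hat_k(x).
assert abs(log_ml + (n - k) * h_hat(x, k)) < 1e-12
print(h_hat(x, k))
```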
\section{Information criteria}\label{secic}
An information criterion assigns a score to each hypothetical model
(here, Markov chain order) based on a sample, and the estimator will be
that model whose score is minimal.
\begin{definition}\label{defIC}
For an information criterion
\[
\mathrm{IC}_{X_1^n} ( \cdot)\dvtx\mathbb{N}\to\mathbb{R}^{+},
\]
the Markov order estimator is
\[
\hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr) = \arg
\min_{0\le k < n} \mathrm{IC}_{X_1^n} (k).
\]
\end{definition}
\begin{remark}
Here, the number of candidate Markov chain orders based on a sample is
finite, therefore the minimum is attained. If the minimizer is not
unique, the smallest one will be taken as $\arg\min$.
\end{remark}
We consider the three most frequently used information criteria,
namely, the Bayesian information criterion and its generalization, the
family of penalized maximum likelihood (PML) \mbox{\cite{Schw,CsSh}}, the
normalized maximum likelihood (NML) code length~\cite{St}, and the
Krichevsky--Trofimov (KT) code length~\cite{KT}.
\begin{definition}\label{defPML}
Given a penalty function $\operatorname{pen}(n)$, a non-decreasing
function of the
sample size $n$, for a candidate order $0\le k<n$ the PML criterion is
\begin{eqnarray*}
\mathrm{PML}_{X_1^n} ( k ) &=& - \log\mathrm{ML}_{k}
\bigl(X_1^n\bigr) + \bigl(|A|-1\bigr) |A|^k
\operatorname{pen}(n)
\\
&=& (n-k) \hat{h}_{k}\bigl(X_1^n\bigr) +
\bigl(|A|-1\bigr) |A|^k \operatorname{pen}(n).
\end{eqnarray*}
\end{definition}
The $k$-order Markov chain model of the process $X$ is described by the
conditional probabilities $ \{ Q(a_{k+1}|a_1^k), a_{k+1}\in A,
a_1^k\in A^k \}$, and $(|A|-1) |A|^k$ of these are free parameters.
The second term of the PML criterion, which is proportional to the
number of free parameters of the $k$-order Markov chain model, is
increasing in $k$. The first term, for a given sample, is known to be
decreasing in $k$. Hence, minimizing the criterion yields a tradeoff
between the goodness of fit of the sample to the model and the
complexity of the model.
\begin{remark}
If $\operatorname{pen}(n) = \frac12\log n$, the PML criterion is
called \textit{Bayesian information criterion} (BIC), and if
$\operatorname{pen}(n)
= 1$, \textit{Akaike information criterion} (AIC).
\end{remark}
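As a sketch of how the criterion is evaluated in practice (a toy illustration with function names of our own, not code accompanying the paper), the estimator of Definition~\ref{defIC} simply minimizes the PML score over candidate orders; with $\operatorname{pen}(n)=\frac12\log n$ this is the BIC estimator:

```python
import math
import random
from collections import Counter

def h_hat(x, k):
    """k-order empirical conditional entropy (base-2)."""
    n = len(x)
    num = Counter(tuple(x[i:i + k + 1]) for i in range(n - k))
    den = Counter(tuple(x[i:i + k]) for i in range(n - k))
    return -sum((c / (n - k)) * math.log2(c / den[s[:k]])
                for s, c in num.items())

def pml(x, k, A, pen):
    """PML criterion: (n - k) h_hat_k(x) + (|A| - 1) |A|^k pen(n)."""
    n = len(x)
    return (n - k) * h_hat(x, k) + (len(A) - 1) * len(A) ** k * pen(n)

def k_hat_pml(x, A, pen, k_max):
    """Smallest minimizer of the PML score over 0 <= k <= k_max."""
    return min(range(k_max + 1), key=lambda k: pml(x, k, A, pen))

# Sample from a hypothetical binary order-1 chain with strong dependence.
random.seed(0)
x = [0]
for _ in range(499):
    p0 = 0.8 if x[-1] == 0 else 0.2
    x.append(0 if random.random() < p0 else 1)

bic = lambda n: 0.5 * math.log2(n)    # pen(n) = (1/2) log n gives BIC
print(k_hat_pml(x, [0, 1], bic, k_max=10))   # typically recovers order 1
```

Note the tradeoff described above: the entropy term decreases in $k$ while the penalty term grows like $|A|^k$, so the minimizer balances fit against model complexity.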
The minimum description length (MDL) principle minimizes the length of
a code of the sample tailored to the model class. Strictly speaking,
the information criterion would have an additive term, the length of a
code of the structure parameter. This additional term, the length of a
code of $k$, is omitted since it does not affect the results.
\begin{definition} \label{defNML}
For a candidate order $0\le k<n$, the NML criterion is
\[
\mathrm{NML}_{X_1^n} ( k ) = - \log P_{\mathrm{NML}, k}
\bigl(X_1^n\bigr),
\]
where
\[
P_{\mathrm{NML}, k} \bigl(X_1^n\bigr) =
\frac{ \mathrm{ML}_{k}(X_1^n) }{
\Sigma(n,k) } \qquad\mbox{with } \Sigma(n,k) = \sum
_{x_1^n\in A^n} \mathrm{ML}_{k} \bigl(x_1^n
\bigr)
\]
is the $k$-order NML-probability of $X_1^n$.
\end{definition}
\begin{remark}
Writing
\[
\mathrm{NML}_{X_1^n} ( k ) = - \log\mathrm{ML}_{k}
\bigl(X_1^n\bigr) + \log\Sigma(n,k),
\]
the NML criterion can be regarded as a PML criterion in a broader sense.
\end{remark}
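For toy alphabet sizes and sample lengths, the normalizer $\Sigma(n,k)$ can be evaluated by brute-force enumeration of $A^n$, which makes the definition concrete (a sketch with our own function names; this enumeration is infeasible beyond tiny $n$):

```python
import math
from itertools import product
from collections import Counter

def log2_ml(x, k):
    """log2 ML_k(x): maximized log-likelihood under k-order Markov models."""
    n = len(x)
    num = Counter(tuple(x[i:i + k + 1]) for i in range(n - k))
    den = Counter(tuple(x[i:i + k]) for i in range(n - k))
    return sum(c * math.log2(c / den[s[:k]]) for s, c in num.items())

def nml(x, k, A):
    """NML criterion: -log2 ML_k(x) + log2 Sigma(n,k), with Sigma(n,k)
    summed over all |A|^n sequences (toy sizes only)."""
    n = len(x)
    sigma = sum(2.0 ** log2_ml(list(y), k) for y in product(A, repeat=n))
    return -log2_ml(list(x), k) + math.log2(sigma)

x = [0, 1, 0, 0, 1, 1, 0, 1]
score = nml(x, 1, [0, 1])
# Sigma(n,k) >= ML_k(x) for every x, so the NML code length is nonnegative.
assert score >= 0.0
print(score)
```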
\begin{definition} \label{defKT}
For a candidate order $0\le k<n$, the KT criterion is
\[
\mathrm{KT}_{X_1^n} ( k ) = - \log P_{\mathrm{KT}, k}
\bigl(X_1^n\bigr),
\]
where
\[
P_{\mathrm{KT}, k} \bigl(X_1^n\bigr) =
\frac{ 1 }{ |A|^k } \!\!\mathop{\prod_{a_1^k\in A^k:}}_{N_{n-1}(a_1^k)\ge1}\!\!
\frac{
\prod_{a_{k+1}: N_n(a_1^{k+1}) \ge1} [
( N_n(a_1^{k+1}) - 1/2 ) ( N_n(a_1^{k+1}) -
3/2 )
\cdots( 1/2 ) ]
}{
( N_{n-1}(a_1^k) - 1 + {|A|}/{2} ) (
N_{n-1}(a_1^k) - 2 + {|A|}/{2} )
\cdots( {|A|}/{2} )
}
\]
is the $k$-order KT-probability of $X_1^n$. (For $k=0$, $N_{n-1}(a_1^k)=n$.)
\end{definition}
\begin{remark}
The $k$-order KT-probability of the sample is equal to a mixture of the
probabilities of the sample with respect to all $k$-order Markov chains
with uniform initial distribution, where the mixture distribution over
the transition probability matrices $ \{ Q(a_{k+1}|a_1^k),
a_{k+1}\in A, a_1^k\in A^k \}$ is independent for the rows
$Q( \cdot|a_1^k)$, $a_1^k\in A^k$, and has
Dirichlet $(
\frac12,\ldots,\frac12 )$ distribution in the rows. Hence,
the KT
Markov order estimator can be regarded as a Bayes (maximum a
posteriori) estimator.
\end{remark}
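The product formula above can equivalently be computed sequentially, adding one symbol at a time with add-$\frac12$ counts (the Dirichlet$(\frac12,\ldots,\frac12)$ posterior predictive of the preceding remark); this also makes it evident that $P_{\mathrm{KT},k}$ is a probability distribution on $A^n$. A sketch, with a function name of our own, assuming this sequential form:

```python
import math
from itertools import product
from collections import defaultdict

def log2_kt(x, k, A):
    """log2 of the k-order KT-probability of x, computed sequentially:
    the first k symbols each get probability 1/|A| (the 1/|A|^k factor),
    and each later symbol a seen in context s contributes
    (N(a|s) + 1/2) / (N(s) + |A|/2)."""
    sym = defaultdict(lambda: defaultdict(int))   # N(a | s) so far
    ctx = defaultdict(int)                        # N(s) so far
    logp = -k * math.log2(len(A))
    for t in range(k, len(x)):
        s, a = tuple(x[t - k:t]), x[t]
        logp += math.log2((sym[s][a] + 0.5) / (ctx[s] + len(A) / 2))
        sym[s][a] += 1
        ctx[s] += 1
    return logp

# Sanity check: the KT-probabilities sum to 1 over all sequences in A^n.
total = sum(2.0 ** log2_kt(y, 1, [0, 1]) for y in product([0, 1], repeat=6))
assert abs(total - 1.0) < 1e-9
```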
\begin{remark}
The $k$-order NML and KT coding distributions are nearly optimal among
the $k$-order Markov chains, in the sense that the code lengths $\lceil
-\log P_{\mathrm{NML}, k} (X_1^n) \rceil$ and $\lceil-\log
P_{\mathrm{KT}, k} (X_1^n) \rceil$ minimize the
worst case maximum and average, respectively, redundancy for this class
(up to an additive constant in the latter case).
\end{remark}
\section{Divergence of Markov order estimators}\label{secmain}
The BIC Markov order estimator is strongly consistent~\cite{CsSh}, that
is, if the process is a Markov chain of order $k$, then $\hat
{k}_{\mathrm{BIC}} (X_1^n) = k$ eventually almost surely as $n\to
+\infty
$. ``Eventually almost surely'' means that with probability $1$, there
exists a threshold $n_0$ (depending on the infinite realization
$X_{1}^{\infty}$) such that the claim holds for all $n \ge n_0$.
Increasing the penalty term, up to $cn$, where $c>0$ is a sufficiently
small constant, does not affect the strong consistency. It is not known
whether or not the strong consistency holds for smaller penalty terms
but it is known that if the candidate orders are upper bounded by
$c\log n$, where $c>0$ is a sufficiently small constant, that is, the
estimator minimizes the PML over the orders $0\le k \le c\log n$ only,
then $\operatorname{pen}(n) = C \log\log n$ still provides the strong
consistency,
where $C>0$ is a sufficiently large constant~\cite{Ramon}.
The NML and KT Markov order estimators fail to be strongly consistent
because for i.i.d. processes with uniform distribution, they converge
to infinity at a rate $\mathcal{O}(\log n)$~\cite{CsSh}. However, if
the candidate orders are upper bounded by $\RMo(\log n)$, the strong
consistency holds true~\cite{Cs}.
If the process is of infinite memory, the BIC and KT Markov order
estimators diverge to infinity~\cite{CsT}. In this section, results on
the divergence rate of the PML, NML and KT Markov order estimators are
presented. Bounds on the probability that the estimators are greater
and less, respectively, than some order are obtained, with explicit
constants. The first implies that under mild conditions, the estimators
do not exceed the $\mathcal{O}(\log n)$ rate eventually almost surely
as $n\to+\infty$. The second bound implies that the rate $\mathcal
{O}(\log n)$ is attained eventually almost surely as $n\to+\infty$ for
the processes whose continuity rates decay in some exponential range.
At the end of the section, the notion of consistent Markov order
estimation is generalized for infinite memory processes. If the
continuity rates decay faster than exponential, the PML Markov order
estimator is shown to be consistent with the oracle-type order estimate.
The proofs use bounds on the simultaneous convergence of empirical
entropies of orders in an increasing set. These bounds are obtained for
finite sample sizes $n$ with explicit constants under mild conditions
so they are of independent interest and are also presented here.
\begin{theorem}\label{thentSMP}
For any weakly non-null and $\alpha$-summable stationary ergodic
process, for any $0<\varepsilon<1/2$
\[
\Prob\biggl( \max_{1\le k\le({\varepsilon\log n})/({4\log|A|})} \bigl
\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert>
\frac{1}{n^{1/2-\varepsilon}} \biggr) \le\exp\biggl( -\frac{c_1\varepsilon^3}{\log
n} n^{\varepsilon/2}
\biggr)
\]
and
\[
\Prob\biggl( \max_{0\le k\le({\varepsilon\log n})/({4\log|A|})} \bigl
\llvert\hat{h}_{k}
\bigl(X_1^n\bigr) - h_k \bigr\rrvert>
\frac{1}{n^{
1/2-\varepsilon}} \biggr) \le\exp\biggl( -\frac{c_2\varepsilon^3}{\log
n} n^{\varepsilon/2}
\biggr),
\]
where $c_1,c_2>0$ are constants depending only on the distribution of
the process.
\end{theorem}
\begin{pf}
The proof including the explicit expression of the constants is in
Section~\ref{secent}.
\end{pf}
\begin{remark}
The convergence of $\hat{H}_{k_n}(X_1^n)$ and $\hat{h}_{k_n}(X_1^n)$,
$k_n\to\infty$, to the
entropy rate $\bar{H}$ of the process could be investigated using
Theorem~\ref{thentSMP}. However, good estimates of the entropy rate
are known from the theory of universal codes. In particular, mixtures
of the KT distributions over all possible orders provide universal
codes in the class of all stationary ergodic processes \cite
{Ryabko,Ryabko2,Ryabko3}, therefore the corresponding code length is a
suitable estimate of the entropy rate.
\end{remark}
An application of the Borel--Cantelli lemma in Theorem~\ref{thentSMP}
yields the following asymptotic result.
\begin{corollary}
For any weakly non-null and $\alpha$-summable stationary ergodic
process, for any $0<\varepsilon<1/2$
\[
\bigl\llvert\hat{H}_{k}\bigl(X_1^n\bigr)
- H_k \bigr\rrvert\le\frac{1}{n^{
1/2-\varepsilon}} \quad\mbox{and}\quad\bigl
\llvert\hat{h}_{k}\bigl(X_1^n\bigr) -
h_k \bigr\rrvert\le\frac{1}{n^{
1/2-\varepsilon}}
\]
simultaneously for all $k\le\frac{\varepsilon\log n}{4\log|A|}$,
eventually almost surely as $n\to+\infty$.
\end{corollary}
\begin{remark}
By~\cite{GGG}, under much stronger conditions on the process, the
convergence rate of $\hat{H}_{k}(X_1^n)$ and $\hat{h}_{k}(X_1^n)$ to
$\bar{H}$ is $n^{-1/2}$
for some fixed $k=\mathcal{O}(\log n)$. Hence, the rate in Theorem
\ref
{thentSMP} cannot be improved significantly.
\end{remark}
The first divergence result of the paper is the following.
\begin{theorem}\label{thpmllarge}
For any weakly non-null and $\alpha$-summable stationary ergodic
process there exist $\lambda_1, \lambda_2>0$ depending only on the
distribution of the process, such that for the Markov order estimator
$\hat{k}_{\mathrm{IC}} (X_1^n)$
\[
\Prob\bigl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr)
> k_n \bigr) \le2^{\lambda_1 + 2\log n - \lambda_2 k_n}
\]
for any sequence $k_n$, $n\in\mathbb{N}$, where IC is either the PML
with arbitrary $\operatorname{pen}(n)$ or the NML or the KT criterion.
\end{theorem}
\begin{pf}
The proof including the explicit expression of the constants is in
Section~\ref{secproof}.
\end{pf}
An application of the Borel--Cantelli lemma in Theorem \ref
{thpmllarge} yields the following asymptotic result.
\begin{corollary}\label{copmllarge}
For any weakly non-null and $\alpha$-summable stationary ergodic
process there exists a constant $C>0$ such that for the Markov order
estimator $\hat{k}_{\mathrm{IC}} (X_1^n)$
\[
\hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr) \le C \log n
\]
eventually almost surely as $n\to+\infty$, where IC is either the PML
with arbitrary $\operatorname{pen}(n)$ or the NML or the KT criterion.
\end{corollary}
The second divergence result is the following.
\begin{theorem}\label{thpmllogSMP}
For any weakly non-null stationary ergodic process with continuity
rates $\bar{\gamma}(k) \le\delta_1 2^{-\zeta_1 k}$ and $\underbar
{\gamma}(k) \ge\delta_2 2^{-\zeta_2 k}$ for some $\zeta_1, \zeta_2,
\delta_1, \delta_2 >0$ ($\zeta_2\ge\zeta_1$), if
\[
\frac{6\log|A|}{\zeta_1} \le\varepsilon< \frac12,
\]
the Markov order estimator $\hat{k}_{\mathrm{IC}} (X_1^n)$ satisfies that
\[
\Prob\biggl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n
\bigr) \le\frac{1}{2\zeta_2} \biggl( \frac12 -\varepsilon\biggr) \log
n -
c_3 \biggr) \le\exp\biggl( -\frac{c_2\varepsilon^3}{\log n}
n^{\varepsilon/2}
\biggr),
\]
if $n\ge n_0$, where IC is either the PML with $\operatorname
{pen}(n)\le\mathcal
{O}(\sqrt{n})$ or the NML or the KT criterion,
and $c_2, n_0>0$, $c_3\in\mathbb{R}$ are constants depending only on
the distribution of the process and $\operatorname{pen}(n)$.
\end{theorem}
\begin{pf}
The proof including the explicit expression of the constants is in
Section~\ref{secproof}.
\end{pf}
An application of the Borel--Cantelli lemma in Theorem \ref
{thpmllogSMP} yields the following asymptotic result.
\begin{corollary}\label{corpmllog}
For any weakly non-null stationary ergodic process with continuity
rates $\bar{\gamma}(k) \le\delta_1 2^{-\zeta_1 k}$ and $\underbar
{\gamma}(k) \ge\delta_2 2^{-\zeta_2 k}$ for some $\zeta_1, \zeta_2,
\delta_1, \delta_2 >0$ with $\zeta_2\ge\zeta_1>12\log|A|$,
the Markov order estimator $\hat{k}_{\mathrm{IC}} (X_1^n)$ satisfies that
\[
\hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr) \ge
C' \log n
\]
eventually almost surely as $n\to+\infty$, where IC is either the PML
with $\operatorname{pen}(n)\le\mathcal{O}(\sqrt{n})$ or the NML or
the KT criterion,
and $C'>0$ is a constant depending only on the distribution of the process.
\end{corollary}
The section concludes with the consistency result.
\begin{definition}\label{defoPML}
For a candidate order $0\le k<n$ the oracle PML criterion is
\[
\mathrm{PML}_{o,n} ( k ) = (n-k) h_k + \bigl(|A|-1\bigr)
|A|^k \operatorname{pen}(n),
\]
and the oracle PML Markov order estimator is
\[
k_{\mathrm{PML},n} = \arg\min_{0\le k < n} \mathrm{PML}_{o,n} ( k ).
\]
\end{definition}
\begin{remark}
For Markov chains of order $k$, $k_{\mathrm{PML},n} = k$ if $n$ is
sufficiently large, with any $\operatorname{pen}(n) = \RMo(n)$.
\end{remark}
\begin{theorem}\label{thoracle}
For any weakly non-null stationary ergodic process with
\[
\frac{\log\bar{\gamma}(k)}{k} \to-\infty,\qquad k\to\infty,
\]
the PML Markov order estimator $\hat{k}_{\mathrm{PML}} (X_1^n)$ with
$\operatorname{pen}(n)=n^{\kappa}$, $\frac12<\kappa<1$, is
consistent in the sense that
\[
\frac{ \hat{k}_{\mathrm{PML}} (X_1^n) }{ k_{\mathrm{PML},n} } \to1
\]
in probability as $n\to+\infty$.
\end{theorem}
\begin{pf}
The proof is in Section~\ref{secproof}.
\end{pf}
\section{Statistical estimation of processes}\label{secappl}
In the results of this section, the divergence rate of Markov order
estimators will play a central role. The problem of statistical
estimation of stationary ergodic processes by finite memory processes
is considered, and the following distance is used. The per-letter
Hamming distance between two strings $x_1^n$ and $y_1^n$ is
\[
d_n \bigl(x_1^n, y_1^n
\bigr) = \frac1n \sum_{i=1}^n
\mathbb{I}(x_i\neq y_i) \qquad\mbox{where } \mathbb{I}(a\neq
b) = \cases{
1, &\quad if $a\neq b$,
\cr
0, &\quad if $a=b$,}
\]
and the \textit{$\bar{d}$-distance} between two random sequences $X_1^n$
and $Y_1^n$ is defined by
\[
\bar{d} \bigl(X_1^n, Y_1^n
\bigr) = \min_{\mathbb{P}} \mathbb{E}_{\mathbb
{P}} d_n \bigl(
\tilde{X}_1^n, \tilde{Y}_1^n
\bigr),
\]
where the minimum is taken over all the joint distributions $\mathbb
{P}$ of $\tilde{X}_1^n$ and $\tilde{Y}_1^n$ whose marginals are equal
to the distributions of $X_1^n$ and $Y_1^n$.
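In general, computing $\bar{d}$ is a linear program over joint distributions. In the single-letter case $n=1$ it reduces to a classical fact: the minimum of $\Prob(X\neq Y)$ over couplings is attained by the maximal coupling and equals the total variation distance. A minimal sketch (our own function name; for i.i.d. processes, applying this coupling independently in each coordinate is optimal, so $\bar{d}$ is then independent of $n$):

```python
def dbar_single_letter(p, q):
    """bar-d between two single-letter distributions p, q on a common
    alphabet: min over couplings of P(X != Y), attained by the maximal
    coupling, which equals 1 - sum_a min(p(a), q(a))."""
    return 1.0 - sum(min(p[a], q[a]) for a in p)

p = {0: 0.7, 1: 0.3}   # hypothetical marginals of two i.i.d. processes
q = {0: 0.4, 1: 0.6}
print(dbar_single_letter(p, q))   # total variation: 0.3 up to rounding
```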
The process $X$ is estimated by a Markov chain of order $k=k_n$ from
the sample in the following way.
\begin{definition}
The \textup{empirical $k$-order Markov estimator} of a process $X$ based
on the sample $X_1^n$ is the stationary Markov chain, denoted by $\hat
{X}[k]$, of order $k$ with transition probability matrix $ \{
\hat
{P} (a_{k+1}| a_1^k), a_{k+1}\in A, a_1^k\in A^k \}$. If the
initial distribution of a stationary Markov chain with these transition
probabilities is not unique, then any of these initial distributions
can be taken.
\end{definition}
In the previous section, weakly non-nullness is assumed for the
process. In this section the process $X$ is assumed to be \textit
{non-null}, that is,
\[
\pinf= \min_{a\in A} \inf_{x_{-\infty}^{-1} \in A^{\infty}} P\bigl(a |
x_{-\infty}^{-1}
\bigr) >0.
\]
\begin{remark}\label{rempmllarge}
For any non-null stationary ergodic process, $P(a_1^k)\le(1-\pinf)^k$
for any $a_1^k\in A^k$. Hence, Theorem~\ref{thpmllarge} holds with
$\lambda_1=0$ and $\lambda_2=|{\log}(1-\pinf)|$, see the proof of the theorem.
\end{remark}
The assumption of non-nullness allows us to use the following quantity
instead of $\underbar{\gamma}(k)$.
The \textit{restricted continuity rate} of the process $X$ is
\[
\bar{\gamma}(k|m) = \max_{x_{-m}^{-1} \in A^m} \sum_{a\in A}
\bigl\llvert P\bigl(a | x_{-k}^{-1} \bigr) - P\bigl(a |
x_{-m}^{-1} \bigr) \bigr\rrvert,\qquad k<m.
\]
Similarly to Remark~\ref{remgammaeq}, note that the above definition
is equivalent to
\[
\bar{\gamma}(k|m) = \max_{ k<i\le m } \max_{x_{-i}^{-1} \in A^i} \sum
_{a\in A} \bigl\llvert P\bigl(a | x_{-k}^{-1}
\bigr) - P\bigl(a | x_{-i}^{-1} \bigr) \bigr\rrvert.
\]
Hence, $\lim_{m\to+\infty} \bar{\gamma}(k|m) = \bar{\gamma}(k)$
for any
fixed $k$. We say that the process $X$ has \textit{uniformly convergent
restricted continuity rate} with parameters $\theta_1$, $\theta_2$,
$k_{\theta}$ if
\[
\bar{\gamma}(k)^{\theta_1} \le\bar{\gamma}\bigl( k | \lceil
\theta_2 k \rceil\bigr) \qquad\mbox{if } k\ge k_{\theta}
\mbox{, for some } \theta_1\ge1,\theta_2>1.
\]
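As a sanity check on these definitions, for a toy order-$1$ Markov chain the conditional probabilities do not depend on symbols beyond the last one, so $\bar{\gamma}(k|m)=0$ for every $1\le k<m$. A brute-force sketch (the names and the toy kernel are illustrative assumptions):

```python
from itertools import product

def gamma_bar_restricted(P_cond, A, k, m):
    """Brute-force restricted continuity rate bar{gamma}(k|m): maximize over
    contexts x_{-m}^{-1} in A^m the sum over a in A of
    |P(a | x_{-k}^{-1}) - P(a | x_{-m}^{-1})|.  Requires 1 <= k < m."""
    best = 0.0
    for ctx in product(A, repeat=m):
        s = sum(abs(P_cond(a, ctx[-k:]) - P_cond(a, ctx)) for a in A)
        best = max(best, s)
    return best

# toy order-1 transition kernel: P(a | context) depends only on the last symbol
P1 = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}
P_cond = lambda a, ctx: P1[ctx[-1]][a]
```

For this kernel the maximum is attained (trivially) at every context and equals $0$.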
The order $k$ of the empirical Markov estimator $\hat{X}[k]$ is
estimated from the sample, using the PML criterion. The estimated order
needs to be bounded to guarantee an accurate assessment of the memory
decay of the process.
\begin{definition}\label{defICr}
For an information criterion IC, the Markov order estimator bounded by
$r_n<n$, $r_n\in\mathbb{N}$, is
\[
\hat{k}_{\mathrm{IC}} \bigl(X_1^n | r_n
\bigr) = \arg\min_{0\le k \le r_n} \mathrm{IC}_{X_1^n} (k).
\]
\end{definition}
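Definition~\ref{defICr} amounts to an arg-min over the candidate orders $0\le k\le r_n$. A minimal sketch (the name and the tie-breaking toward the smallest order are illustrative assumptions; the paper does not fix a tie-breaking rule):

```python
def k_hat_ic(ic_values, r_n):
    """Bounded Markov order estimator: arg min over 0 <= k <= r_n of IC(k).

    `ic_values[k]` holds IC_{X_1^n}(k); `min` over the range resolves ties
    in favor of the smallest order.
    """
    ks = range(0, min(r_n, len(ic_values) - 1) + 1)
    return min(ks, key=lambda k: ic_values[k])

# toy IC curve: decreases, then increases past the true order
ic = [5.0, 3.2, 1.1, 1.4, 0.9]
```

With the bound $r_n=3$ the estimator returns $2$; without an effective bound it would chase the smaller value at $k=4$.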
The optimal order can be smaller than the upper bound if the memory
decay of the process is sufficiently fast. Define
\[
K_n \bigl( r_n, \bar{\gamma}, f(n) \bigr) = \min\bigl\{
\lfloor r_n \rfloor, k\ge0\dvtx\bar{\gamma}(k) < f(n) \bigr\} ,
\]
where $f(n)\searrow0$ and $r_n\nearrow\infty$. Since $\bar{\gamma
}$ is
a decreasing function, $K_n$ increases in $n$ but does not exceed
$r_n$. It is less than $r_n$ if $\bar{\gamma}$ vanishes sufficiently
fast, and then the faster $\bar{\gamma}$ vanishes, the slower $K_n$ increases.
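The quantity $K_n$ can be evaluated by scanning $k=0,1,\dots$ until $\bar{\gamma}(k)$ drops below $f(n)$, capped at $\lfloor r_n \rfloor$. A sketch with a hypothetical exponentially decaying continuity rate:

```python
import math

def K_n(r_n, gamma, f_n):
    """K_n(r_n, gamma, f(n)) = min{ floor(r_n), min{ k >= 0 : gamma(k) < f(n) } }.

    `gamma` is a non-increasing function of k; the scan stops at the first k
    with gamma(k) < f_n, or at floor(r_n) if no such k is reached.
    """
    k = 0
    while k < math.floor(r_n) and gamma(k) >= f_n:
        k += 1
    return k

# hypothetical continuity rate gamma(k) = 2^{-k}
gamma = lambda k: 2.0 ** (-k)
```

For $\bar{\gamma}(k)=2^{-k}$ and $f(n)=0.1$ the scan stops at $k=4$; with a very small $f(n)$ the cap $\lfloor r_n\rfloor$ binds.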
The process estimation result of the paper is the following.
\begin{theorem}\label{thqminSMP}
For any non-null stationary ergodic process with summable continuity
rate and uniformly convergent restricted continuity rate with
parameters $\theta_1$, $\theta_2$, $k_{\theta}$, and for any $\mu_n>0$,
the empirical Markov estimator of the process with the order estimated
by the bounded PML Markov order estimator $\hat{k}_n = \hat
{k}_{\mathrm
{PML}} (X_1^n | \eta\log n)$, $\eta>0$, with $\frac12 \log n \le
\operatorname{pen}(n)\le\mathcal{O}(\sqrt{n})$ satisfies
\begin{eqnarray*}
&&\Prob\biggl( \bar{d} \bigl( X_1^n, \hat{X} [
\hat{k}_n ]_1^n \bigr) > \frac{\beta_2}{\pinf^{2}}
\max\biggl\{ \bar{\gamma} \biggl( \biggl\lfloor\frac{\eta}{\theta_2}
\log n \biggr
\rfloor\biggr), n^{-( 1-4\eta\log({|A|^4}/{\pinf})
)/({4\theta_1})
} \biggr\} + \frac{1}{n^{1/2-\mu_n}} \biggr)
\\
&&\quad\le\exp\bigl( -c_4 4^{ \mu_n\log n - |\log\pinf| (
K_n ( \eta\log n, \bar{\gamma}, {c}\operatorname
{pen}(n)/{n} ) +
{\log\log n}/{\log|A|} ) } \bigr)
\\
&&\qquad{} + \exp\biggl( -\frac{c_5 \eta^3}{\log n} n^{\eta2\log|A|}
\biggr) +
2^{-s_n \operatorname{pen}(n)},
\end{eqnarray*}
if $n\ge n_0$, where $c>0$ is an arbitrary constant, $s_n\to\infty$ and
$\beta_2, c_4, c_5, n_0>0$ are constants depending only on the
distribution of the process.
\end{theorem}
\begin{pf}
The proof including the explicit expression of the constants is in
Section~\ref{secproofappl}.
\end{pf}
\begin{remark}
If the process $X$ is a Markov chain of order $k$, then the restricted
continuity rate is uniformly convergent with parameters $\theta_1=1$,
$\theta_2>1$ arbitrary (arbitrarily close to $1$), $k_{\theta}=k+1$,
and if $n$ is sufficiently large, $K_n=k$ and
\[
\max\biggl\{ \bar{\gamma} \biggl( \biggl\lfloor\frac{\eta
}{\theta_2} \log n \biggr
\rfloor\biggr), n^{- (
1-4\eta
\log({|A|^4}/{\pinf}) )/({4\theta_1}) } \biggr\} = n^{- ( 1-4\eta
\log({|A|^4}/{\pinf})
)/({4\theta_1}) } .
\]
\end{remark}
An application of the Borel--Cantelli lemma to Theorem~\ref{thqminSMP}
yields the following asymptotic result.
\begin{corollary}\label{cormain}
For any non-null stationary ergodic process with summable continuity
rate and uniformly convergent restricted continuity rate with
parameters $\theta_1$, $\theta_2$, $k_{\theta}$,
the empirical Markov estimator of the process with the order estimated
by the bounded PML Markov order estimator $\hat{k}_n = \hat
{k}_{\mathrm
{PML}} (X_1^n | r_n)$ with $\frac12 \log n \le\operatorname
{pen}(n)\le\mathcal
{O}(\sqrt{n})$ and
\[
\frac{5\log\log n}{2\log|A|} \le r_n \le\RMo(\log n)
\]
satisfies
\begin{eqnarray*}
\bar{d} \bigl( X_1^n, \hat{X} [ \hat{k}_n
]_1^n \bigr) &\le&\frac{\beta_2}{\pinf^{2}} \max\biggl\{ \bar{
\gamma} \biggl( \biggl\lfloor\frac{r_n}{\theta_2} \biggr\rfloor\biggr),
n^{-
{1}/({4\theta_1}) } \biggr\}\\
&&{} + \frac{ (\log n)^{c_6} }{ \sqrt{n} } 2^{
|\log\pinf| K_n ( r_n,
\bar{\gamma}, {c}\operatorname{pen}(n)/{n} ) }
\end{eqnarray*}
eventually almost surely as $n\to+\infty$,
where $c>0$ is an arbitrary constant, and $\beta_2, c_6>0$ are
constants depending only on the distribution of the process.
\end{corollary}
\begin{remark}
If the memory decay of the process is slow, the first term in the bound
in Corollary~\ref{cormain}, the bias, is essentially $\bar{\gamma} (
\lfloor r_n/\theta_2 \rfloor)$, and the second term, the
variance, is maximal. If the memory decay is sufficiently fast, then
the rate of the estimated order $\hat{k}_n$ and the rate of $K_n$ are
smaller, therefore the variance term is smaller, while the bias term is
smaller as well. The result, however, shows the optimality of the PML
Markov order estimator in the sense that it selects an order which is
small enough to allow the variance to decrease but large enough to keep
the bias below a polynomial threshold.
\end{remark}
\section{Empirical entropies}\label{secent}
In this section, we consider the problem of simultaneous convergence of
the empirical entropies of all orders in an increasing range, and prove
the following theorem, which states Theorem~\ref{thentSMP} with explicit
constants.
\begin{theorem}\label{thent}
For any weakly non-null and $\alpha$-summable stationary ergodic
process, for any $0<\varepsilon<1/2$
\begin{eqnarray*}
&&
\Prob\biggl( \max_{1\le k\le({\varepsilon\log n})/({4\log|A|})} \bigl
\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert>
\frac{1}{n^{
1/2-\varepsilon}} \biggr) \\
&&\quad\le6 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha
_0\varepsilon^3}{32\RMe (\alpha
+\alpha_0)}
\frac{ n^{\varepsilon/2} }{ \log n } + \frac
{\varepsilon
}{4} \log n \biggr)
\end{eqnarray*}
and
\begin{eqnarray*}
&&
\Prob\biggl( \max_{0\le k\le({\varepsilon\log n})/({4\log|A|})} \bigl
\llvert\hat{h}_{k}
\bigl(X_1^n\bigr) - h_k \bigr\rrvert>
\frac{1}{n^{
1/2-\varepsilon}} \biggr) \\
&&\quad\le12 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha
_0\varepsilon^3}{256\RMe (\alpha+\alpha_0)}
\frac{ n^{\varepsilon/2} }{ \log n } + \frac{\varepsilon}{4} \log n
\biggr) .
\end{eqnarray*}
\end{theorem}
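The empirical entropies $\hat{H}_k$ and $\hat{h}_k$ appearing in the theorem can be computed directly from block counts. A minimal sketch, assuming base-$2$ logarithms and the empirical distribution $\hat{P}(a_1^k)=N_n(a_1^k)/(n-k+1)$ (the helper names are illustrative):

```python
import math
from collections import Counter

def H_hat(sample, k):
    """Empirical k-block entropy (in bits), computed from the empirical
    distribution of the (n - k + 1) overlapping k-blocks of the sample."""
    n = len(sample)
    cnt = Counter(tuple(sample[i:i + k]) for i in range(n - k + 1))
    total = n - k + 1
    return -sum((c / total) * math.log2(c / total) for c in cnt.values())

def h_hat(sample, k):
    """Empirical conditional entropy hat{h}_k = hat{H}_{k+1} - hat{H}_k."""
    return H_hat(sample, k + 1) - H_hat(sample, k)
```

For a balanced binary sample the $1$-block entropy is $1$ bit, and for a constant sample all empirical entropies vanish.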
First, we show the following bounds.
\begin{proposition}\label{prent}
For any weakly non-null and $\alpha$-summable stationary ergodic
process, for any $1\le m\le n$ and $u,\nu>0$,
\begin{eqnarray*}
&& \Prob\Bigl( \max_{1\le k\le m} \bigl\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert> u
\Bigr)
\\
&&\quad\le6 \RMe ^{1/\RMe } |A|^m \exp\biggl( \frac{\alpha_0}{8\RMe (\alpha
+\alpha_0)}
\frac{-(n-m+1)u^{2(1+\nu)}}{ m |A|^{2m} }
\\
&&\qquad\hspace*{60.5pt}{} \times\min{}^2 \biggl\{ \biggl( \frac{\RMe }{2(1+\nu^{-1})}
\biggr)^{1+\nu}, \frac{u^{-\nu}\log \RMe }{2m\log|A|}, \frac{u^{-\nu}}{\RMe }
\biggr\}\biggr)
\end{eqnarray*}
and
\begin{eqnarray*}
&& \Prob\Bigl( \max_{0\le k\le m-1} \bigl\llvert\hat{h}_{k}
\bigl(X_1^n\bigr) - h_k \bigr\rrvert> u
\Bigr)
\\
&&\quad\le12 \RMe ^{1/\RMe } |A|^m \exp\biggl( \frac{\alpha_0}{8\RMe (\alpha
+\alpha_0)}
\frac{-(n-m+1) (u/2)^{2(1+\nu)}}{ m |A|^{2m} }
\\
&&\qquad\hspace*{64.5pt}{} \times \min{}^2 \biggl\{ \biggl( \frac{\RMe }{2(1+\nu^{-1})}
\biggr)^{1+\nu}, \frac{(u/2)^{-\nu}\log \RMe }{2m\log|A|}, \frac{(u/2)^{-\nu
}}{\RMe } \biggr\} \biggr).
\end{eqnarray*}
\end{proposition}
\begin{pf}
Fix $1\le k\le m$. Applying Lemma~\ref{lemsch} in the \hyperref
[app]{Appendix} to the
distributions $P_k = \{ P (a_1^k), a_1^k\in A^k \}$ and
$\hat{P}_k = \{ \hat{P} (a_1^k), a_1^k\in A^k \}$,
\begin{equation}
\label{eqsch} \bigl\llvert\hat{H}_{k}\bigl(X_1^n
\bigr) - H_k \bigr\rrvert\le\frac{1}{\log \RMe } \bigl[ k\log|A| - \log
d_{\TV} ( \hat{P}_k, P_k ) \bigr]
d_{\TV} ( \hat{P}_k, P_k ),
\end{equation}
if $d_{\TV} ( \hat{P}_k, P_k ) \le1/\RMe $. For any $\nu>0$, the right-hand
side of (\ref{eqsch}) can be written as
\begin{eqnarray}
\label{eqsch2} && \frac{k \log|A|}{\log \RMe } d_{\TV} ( \hat{P}_k,
P_k ) \nonumber\\
&&\qquad{}+ \frac{1+\nu}{\nu\log \RMe } d_{\TV}^{{1}/({1+\nu})} (
\hat{P}_k , P_k ) \bigl[ - d_{\TV}^{{\nu}/({1+\nu})}
( \hat{P}_k, P_k ) \log d_{\TV}^{{\nu}/({1+\nu})}
( \hat{P}_k, P_k ) \bigr]
\\
&&\quad\le\frac{k \log|A|}{\log \RMe } d_{\TV} ( \hat{P}_k,
P_k ) + \frac{1}{\RMe } \frac{1+\nu}{\nu} d_{\TV}^{{1}/({1+\nu})}
( \hat{P}_k, P_k ),\nonumber
\end{eqnarray}
where we used the bound $- x\log x \le \RMe ^{-1} \log \RMe $, $x\ge0$.
By~\cite{GL}, for any string $a_1^k\in A^k$ and $t>0$,
\begin{equation}
\label{eqGL} \Prob\bigl( \bigl\llvert N_n\bigl(a_1^k
\bigr) - (n-k+1) P\bigl(a_1^k\bigr) \bigr\rrvert> t
\bigr) \le \RMe ^{1/\RMe } \exp\biggl( \frac{-c_{\alpha} t^2}{ k(n-k+1) }
\biggr),
\end{equation}
where
\[
c_{\alpha}= \frac{\alpha_0}{8\RMe (\alpha+\alpha_0)}
\]
is positive for any weakly non-null and $\alpha$-summable stationary
ergodic process.
Inequality (\ref{eqGL}) implies that
\begin{eqnarray}
\label{eqGL2} \Prob\bigl( d_{\TV} ( \hat{P}_k,
P_k ) > t \bigr) &\le&\Prob\biggl( \max_{a_1^k\in A^k} \bigl\llvert
\hat{P} \bigl(a_1^k\bigr) - P\bigl(a_1^k
\bigr) \bigr\rrvert> \frac{t}{|A|^k} \biggr)
\nonumber\\[-8pt]\\[-8pt]
&\le& \RMe ^{1/\RMe } |A|^k \exp\biggl( \frac{-c_{\alpha}(n-k+1)
t^2}{ k |A|^{2k} }
\biggr).\nonumber
\end{eqnarray}
Applying (\ref{eqGL2}) to (\ref{eqsch2}),
\begin{eqnarray*}
&&\Prob\bigl( \bigl\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert> u
\bigr)
\\
&&\quad\le\Prob\biggl( \frac{k \log|A|}{\log \RMe } d_{\TV} (
\hat{P}_k , P_k ) + \frac{1}{\RMe } \frac{1+\nu}{\nu}
d_{\TV}^{{1}/({1+\nu})} ( \hat{P}_k, P_k ) > u
\biggr)
\\
&&\qquad{} + \Prob\bigl( d_{\TV} ( \hat{P}_k,
P_k ) > 1/\RMe \bigr)
\\
&&\quad\le\Prob\biggl( d_{\TV} ( \hat{P}_k,
P_k ) > \frac{u\log \RMe }{2k\log|A|} \biggr) + \Prob\biggl(
d_{\TV}^{{1}/({1+\nu})} ( \hat{P}_k, P_k ) >
\frac
{\nu \RMe u}{2(1+\nu)} \biggr)
\\
&&\qquad{} + \Prob\bigl( d_{\TV} ( \hat{P}_k,
P_k ) > 1/\RMe \bigr)
\\
&&\quad\le3 \RMe ^{1/\RMe } |A|^k \exp\biggl( \frac{-c_{\alpha
}(n-k+1)u^{2(1+\nu
)}}{ k |A|^{2k} }\\
&&\qquad\hspace*{59.5pt}{}\times
\min{}^2 \biggl\{ \biggl( \frac{\RMe }{2(1+\nu^{-1})} \biggr)^{1+\nu} ,
\frac{u^{-\nu}\log \RMe }{2k\log|A|}, \frac{u^{-\nu}}{\RMe } \biggr\} \biggr).
\end{eqnarray*}
This completes the proof of the first claimed bound as
\begin{eqnarray*}
&&\Prob\Bigl( \max_{1\le k\le m} \bigl\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert> u
\Bigr)
\\
&&\quad\le\sum_{1\le k\le m} \Prob\bigl( \bigl\llvert
\hat{H}_{k}\bigl(X_1^n\bigr) -
H_k \bigr\rrvert> u \bigr)
\\
&&\quad\le3 \RMe ^{1/\RMe } \biggl( \sum_{1\le k\le m}|A|^k
\biggr)
\\
&&\qquad\hspace*{0pt}{}\times
\exp\biggl( \frac{-c_{\alpha}(n-m+1)u^{2(1+\nu)}}{ m
|A|^{2m} } \min{}^2 \biggl\{
\biggl( \frac{\RMe }{2(1+\nu^{-1})} \biggr)^{1+\nu}, \frac{u^{-\nu}\log
\RMe }{2m\log|A|},
\frac{u^{-\nu}}{\RMe } \biggr\} \biggr).
\end{eqnarray*}
The second claimed bound follows using $\hat{h}_{0}(X_1^n) - h_0 =
\hat{H}_{1}(X_1^n) - H_1$ and
\[
\bigl\llvert\hat{h}_{k}\bigl(X_1^n\bigr)
- h_k \bigr\rrvert\le\bigl\llvert\hat{H}_{k+1}
\bigl(X_1^n\bigr) - H_{k+1} \bigr\rrvert+
\bigl\llvert\hat{H}_{k}\bigl(X_1^n\bigr) -
H_k \bigr\rrvert,\qquad k\ge1,
\]
as
\begin{eqnarray*}
&& \Prob\Bigl( \max_{0\le k\le m-1} \bigl\llvert\hat{h}_{k}
\bigl(X_1^n\bigr) - h_k \bigr\rrvert> u
\Bigr)
\\
&&\quad\le\Prob\biggl( \max_{1\le k\le m} \bigl\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert>
\frac{u}{2} \biggr) + \Prob\biggl( \max_{1\le k\le m-1} \bigl\llvert
\hat{H}_{k}\bigl(X_1^n\bigr) -
H_k \bigr\rrvert> \frac{u}{2} \biggr)
\\
&&\quad\le2\Prob\biggl( \max_{1\le k\le m} \bigl\llvert\hat{H}_{k}
\bigl(X_1^n\bigr) - H_k \bigr\rrvert>
\frac{u}{2} \biggr) .
\end{eqnarray*}
\upqed\end{pf}
Now, the theorem follows from the proposition with an appropriate choice of the parameters.
\begin{pf*}{Proof of Theorem~\ref{thent}}
We use Proposition~\ref{prent} setting $u = n^{-1/2+\varepsilon}$,
$\nu= \varepsilon$, and $m = \lfloor(\varepsilon\log n)/(4\log|A|)
\rfloor$. Then, in the exponent of the first inequality of the proposition,
\begin{eqnarray*}
\frac{ u^{2(1+\nu)} }{ |A|^{2m} } &>& n^{\varepsilon/2 -1
+2\varepsilon
^2},
\\
\frac{n-m+1}{m} &>& n \frac{7}{\log n},
\\
\min\biggl\{ \biggl( \frac{\RMe }{2(1+\nu^{-1})} \biggr)^{1+\nu},
\frac
{u^{-\nu}\log \RMe }{2m\log|A|}, \frac{u^{-\nu}}{\RMe } \biggr\} &>& n^{
-\varepsilon^2 } \biggl(
\frac{2\varepsilon}{3} \biggr)^{3/2} > n^{ -\varepsilon^2 } \frac
{\varepsilon^{3/2}}{2},
\end{eqnarray*}
where we used that $0<\varepsilon<1/2$. This gives the upper bound
\[
-\frac{7\alpha_0\varepsilon^3}{32\RMe (\alpha+\alpha_0)} \frac{
n^{\varepsilon/2} }{ \log n }
\]
on the exponent and completes the proof of the first claimed bound. The
second claimed bound follows similarly from the second inequality of
the proposition with the same settings.
\end{pf*}
\section{Divergence bounds proofs}\label{secproof}
In this section, we consider the divergence of the PML, NML and KT
Markov order estimators and prove Theorems~\ref{thpmllarge}, \ref
{thpmllogSMP} and~\ref{thoracle}.
\begin{pf*}{Proof of Theorem~\ref{thpmllarge}}
By~\cite{GL}, any weakly non-null and $\alpha$-summable process is
$\phi
$-mixing with a coefficient related to $\alpha_0>0$ and $\alpha
<+\infty
$. Namely, there exists a sequence $\rho_i$, $i\in\mathbb{N}$, satisfying
\[
\sum_{i=0}^{\infty} \rho_i \le1+
\frac{2\alpha}{\alpha_0} ,
\]
such that for each $k$, $m$, $l$ and each $a_1^k\in A^k$, $b_1^m\in
A^m$, with $P(b_1^m)>0$,
\[
\bigl\llvert\Prob\bigl( X_{m+l+1}^{m+l+k} = a_1^k
\mid X_1^m = b_1^m \bigr) - P
\bigl(a_1^k\bigr) \bigr\rrvert\le\sum
_{i=l}^{l+k-1} \rho_i .
\]
This implies that for any $d\ge1$
\begin{eqnarray*}
&& \Prob\bigl( X_{m+l+1}^{m+l+k} = a_1^k
\mid X_1^m = b_1^m \bigr)
\\
&&\quad\le\Prob\bigl( X_{m+l+id} = a_{id}, 1\le i\le\lfloor
k/d \rfloor\mid X_1^m = b_1^m
\bigr)
\\
&&\quad= \prod_{i=1}^{ \lfloor k/d \rfloor} \Prob\bigl(
X_{m+l+id} = a_{id} \mid X_{m+l+jd} = a_{jd},
1\le j<i, X_1^m = b_1^m
\bigr)
\\
&&\quad\le\prod_{i=1}^{ \lfloor k/d \rfloor} \bigl(
P(a_{id}) + \rho_{d-1} \bigr)
\\
&&\quad\le\Bigl( \max_{a\in A} P(a) + \rho_{d-1}
\Bigr)^{ \lfloor k/d
\rfloor}.
\end{eqnarray*}
Since $\max_{a\in A} P(a)<1$ and $\rho_d\to0$, $\max_{a\in A} P(a) +
\rho_{d-1} <1$ for sufficiently large $d$. Then
\[
\max_{l,a_1^k,b_1^m} \Prob\bigl( X_{m+l+1}^{m+l+k} =
a_1^k \mid X_1^m =
b_1^m \bigr) \le2^{\lambda_1 - \lambda_2 k }
\]
holds with $\lambda_1 = -\log( \max_{a\in A} P(a) + \rho_{d-1}
) >0$ and $\lambda_2 = -\log( \max_{a\in A}
P(a) +
\rho_{d-1} )^{1/d} >0$. Thus, for any $k$,
\begin{eqnarray}
\label{eqexp} && \Prob\bigl( N_n\bigl(a_1^k
\bigr) \ge2 \mbox{ for some } a_1^k \bigr)
\nonumber
\\
&&\quad= \Prob\bigl( X_i^{i+k-1} = X_j^{j+k-1}
\mbox{ for some } 1\le i<j\le n-k+1 \bigr)
\nonumber
\\
&&\quad\le\sum_{1\le i<j\le n-k+1} \Prob\bigl(
X_i^{i+k-1} = X_j^{j+k-1} \bigr)
\\
&&\quad= \sum_{1\le i<j\le n-k+1} \mathbb{E} \bigl\{ \Prob\bigl(
X_j^{j+k-1} = X_i^{i+k-1} |
X_1^{j-1} \bigr) \bigr\}
\nonumber
\\
&&\quad\le n^2 2^{\lambda_1 - \lambda_2 k }.\nonumber
\end{eqnarray}
For any information criterion IC, we can write
\begin{eqnarray}
\label{eqsplit} && \bigl\{ \hat{k}_{\mathrm{IC}} \bigl(X_1^n
\bigr) > k_n \bigr\}
\nonumber
\\
&&\quad\subseteq\bigl\{ \mathrm{IC}_{X_1^n} ( m ) < \mathrm
{IC}_{X_1^n} ( k_n ) \mbox{ for some } m>k_n
\bigr\}
\nonumber\\[-8pt]\\[-8pt]
&&\quad\subseteq\bigl\{ \mathrm{IC}_{X_1^n} ( m ) < \mathrm
{IC}_{X_1^n} ( k_n ) \mbox{ for some } m>k_n
\bigr\} \cap\bigl\{ N_n\bigl(a_1^{k_n}\bigr)
\le1 \mbox{ for all } a_1^{k_n} \bigr\}
\nonumber
\\
&&\qquad{} \cup\bigl\{ N_n\bigl(a_1^{k_n}
\bigr) \ge2 \mbox{ for some } a_1^{k_n} \bigr\}.\nonumber
\end{eqnarray}
Here, $N_n(a_1^{k_n}) \le1$ for all $a_1^{k_n}\in A^{k_n}$ implies that
$N_n(a_1^m) \le1$ for all $a_1^m\in A^m$ for all $m\ge k_n$, which
further implies that for all $m>k_n$ (i) $\hat{h}_{m}(X_1^n)=0$ and therefore
$\mathrm{PML}_{X_1^n} ( m ) = (|A|-1) |A|^m \operatorname
{pen}(n)$ and $\mathrm{NML}_{X_1^n} ( m ) = \Sigma(n,m)$ and
(ii) $\mathrm{KT}_{X_1^n} ( m ) = |A|^{-n}$. Then none of
the three information criteria depends
on the sample, and each is non-decreasing in $m$. Hence, in (\ref
{eqsplit})
\[
\bigl\{ \mathrm{IC}_{X_1^n} ( m ) < \mathrm{IC}_{X_1^n} (
k_n ) \mbox{ for some } m>k_n \bigr\} \cap\bigl\{
N_n\bigl(a_1^{k_n}\bigr) \le1 \mbox{ for all }
a_1^{k_n} \bigr\}
\]
is an empty set. Thus, (\ref{eqsplit}) gives
\[
\Prob\bigl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr)
> k_n \bigr) \le\Prob\bigl( N_n\bigl(a_1^{k_n}
\bigr) \ge2 \mbox{ for some } a_1^{k_n} \bigr)
\]
and using (\ref{eqexp}) completes the proof.
\end{pf*}
To prove Theorem~\ref{thpmllogSMP}, first we show the
following bounds.
\begin{proposition}\label{thpmlh}
For any weakly non-null and $\alpha$-summable stationary ergodic
process with $h_k-\bar{H} \le\delta2^{-\zeta k}$ for some $\delta
,\zeta>0$, if
\[
\frac{4\log|A|}{\zeta} \le\varepsilon<\frac12 ,
\]
\begin{longlist}[(ii)]
\item[(i)]
the PML Markov order estimator $\hat{k}_{\mathrm{PML}} (X_1^n)$
satisfies that
\[
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n\bigr)
< k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha_0\varepsilon
^3}{256\RMe (\alpha+\alpha_0)}
\frac{ n^{\varepsilon/2} }{ \log n } + \frac{\varepsilon}{4} \log n
\biggr),
\]
if $n\ge(\delta2^{\zeta})^2$, where
\[
k_n = \min\biggl\{ k\ge0\dvtx h_k - \bar{H} <
\frac{ 4\max(\sqrt{n},(|A|-1) \operatorname{pen}(n)) }{
n^{1-\varepsilon} } \biggr\} ;
\]
\item[(ii)] the Markov order estimator $\hat{k}_{\mathrm{IC}} (X_1^n)$,
where IC is either NML or KT, satisfies~that
\[
\Prob\bigl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr)
< k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha_0\varepsilon
^3}{256\RMe (\alpha+\alpha_0)}
\frac{ n^{\varepsilon/2} }{ \log n } + \frac{\varepsilon}{4} \log n
\biggr),
\]
if $n\ge\max^2 \{ \sqrt{24}(\log^2 \RMe )(|A|-1)^2, 2C_{\mathrm{KT}},
\delta2^{\zeta} \}$, where
\[
k_n = \min\biggl\{ k\ge0\dvtx h_k - \bar{H} <
\frac{4}{n^{
1/2-\varepsilon}} \biggr\} .
\]
\end{longlist}
\end{proposition}
\begin{remark}
For Markov chains of order $k$, we have $k_n=k$ in Proposition~\ref{thpmlh}
if $n$ is sufficiently large.
\end{remark}
\begin{pf*}{Proof of Proposition~\ref{thpmlh}}
Let $0<\varepsilon<1/2$ be arbitrary and
\begin{equation}
\label{eqBndef} B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} }
\biggr) = \biggl\{
\max_{0\le k\le({\varepsilon\log n})/({4\log|A|})} \bigl\llvert\hat
{h}_{k}\bigl(X_1^n
\bigr) - h_k \bigr\rrvert\le\frac{1}{n^{
1/2-\varepsilon}} \biggr\}.
\end{equation}
For any information criterion IC, we can write for any $k_n\le\frac
{\varepsilon\log n}{4\log|A|}$
\begin{eqnarray}
\label{eqsplit2} && \bigl\{ \hat{k}_{\mathrm{IC}} \bigl(X_1^n
\bigr) < k_n \bigr\}
\nonumber
\\
&&\quad\subseteq\biggl\{ \mathrm{IC}_{X_1^n} ( m ) \le\mathrm
{IC}_{X_1^n} \biggl( \biggl\lfloor{ \frac{\varepsilon\log
n}{4\log|A|} } \biggr\rfloor
\biggr) \mbox{ for some } m<k_n \biggr\}
\nonumber\\[-8pt]\\[-8pt]
&&\quad\subseteq\biggl( \biggl\{ \mathrm{IC}_{X_1^n} ( m ) \le
\mathrm{IC}_{X_1^n} \biggl( \biggl\lfloor{ \frac
{\varepsilon\log n}{4\log|A|} } \biggr
\rfloor\biggr) \mbox{ for some } m<k_n \biggr\} \cap B_n
\biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr) \biggr)\nonumber\\
&&\qquad{} \cup
\overline{B_n
\biggl( { \frac
{\varepsilon
\log n}{4\log|A|} } \biggr)}.\nonumber
\end{eqnarray}
(i) If $\mathrm{IC} = \mathrm{PML}$, by the definition of
the PML information criterion, see Definition~\ref{defPML},
\begin{eqnarray}
\label{eqsplitpml} && \biggl\{ \mathrm{PML}_{X_1^n} ( m ) \le\mathrm
{PML}_{X_1^n} \biggl( \biggl\lfloor{ \frac{\varepsilon
\log n}{4\log|A|} } \biggr\rfloor
\biggr) \mbox{ for some } m<k_n \biggr\} \cap B_n \biggl(
{ \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\
&&\quad\subseteq\biggl\{ (n-m)\hat{h}_{m}\bigl(X_1^n
\bigr) - \biggl( n- \biggl\lfloor{ \frac{\varepsilon\log n}{4\log|A|}
} \biggr\rfloor\biggr) \hat
{h}_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}\bigl(X_1^n\bigr)
\nonumber
\\[-2pt]
&&\hspace*{17pt}\quad\le\bigl(|A|-1\bigr) \bigl( |A|^{ \lfloor{
({\varepsilon
\log n})/({4\log|A|}) } \rfloor} - |A|^m \bigr)
\operatorname{pen}(n) \mbox{ for some } m<k_n \biggr\} \cap
B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-2pt]
&&\quad\subseteq\biggl\{ \hat{h}_{m}\bigl(X_1^n
\bigr) - \hat{h}_{ \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor}\bigl(X_1^n\bigr)
\nonumber\\[-9pt]\\[-9pt]
&&\hspace*{25.5pt}\le\bigl(|A|-1\bigr) |A|^{ \lfloor{
({\varepsilon\log n})/({4\log|A|}) } \rfloor} \frac
{\operatorname{pen}(n)}{n-
\lfloor(\varepsilon\log n)/(4\log|A|) \rfloor}
\mbox{ for some } m<k_n \biggr\} \nonumber\\[-2pt]
&&\qquad{}\cap B_n \biggl( {
\frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-2pt]
&&\quad\subseteq\biggl\{ h_m - h_{ \lfloor({\varepsilon\log
n})/({4\log
|A|}) \rfloor} \le
\frac{(|A|-1) |A|^{({\varepsilon\log
n})/({4\log|A|})} \operatorname{pen}(n)}{n- (\varepsilon\log n)/(4\log
|A|) } + \frac{2}{n^{1/2-\varepsilon}}\nonumber\\[-2pt]
&&\hspace*{23.5pt}\mbox{ for some } m<k_n \biggr\}.\nonumber
\end{eqnarray}
Since for any $0<\varepsilon<1/2$
\begin{equation}
\label{eqatom} \frac{ |A|^{({\varepsilon\log n})/({4\log|A|})} }{n-
(\varepsilon
\log
n)/(4\log|A|) } < \frac{1}{ n^{1-\varepsilon} },
\end{equation}
we have
\begin{equation}
\label{eqhpmlub} \frac{(|A|-1) |A|^{({\varepsilon\log n})/({4\log|A|})}
\operatorname{pen}(n)}{n-
(\varepsilon\log n)/(4\log|A|) } + \frac{2}{n^{1/2-\varepsilon}} <
\frac{ 3\max(\sqrt{n},(|A|-1) \operatorname{pen}(n)) }{
n^{1-\varepsilon} }.
\end{equation}
Now, let $\varepsilon$ and $k_n$ be as in the claim of the proposition.
Using the conditions $h_k-\bar{H} \le\delta2^{-\zeta k}$ and
$\varepsilon\ge(4\log|A|)/\zeta$,
\begin{equation}
\label{eqdhpmlub} h_{ \lfloor({\varepsilon\log n})/({4\log|A|}) \rfloor
} - \bar{H} \le\delta\exp\biggl\{ -\zeta
\biggl( \frac{\varepsilon
\log
n}{4\log|A|} -1 \biggr) \biggr\} \le\frac{1}{\sqrt{n}} \qquad
\mbox{if } n\ge\bigl(\delta2^{\zeta}\bigr)^2.
\end{equation}
Thus, if $n\ge(\delta2^{\zeta})^2$, it follows that $k_n \le\frac
{\varepsilon\log n}{4\log|A|}$, and for any $m<k_n$
\begin{eqnarray}
\label{eqhpmllb} h_m - h_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor} &\ge&(h_{k_n-1} -
\bar{H}) - (h_{ \lfloor({\varepsilon\log
n})/({4\log|A|}) \rfloor} - \bar{H})
\nonumber\\[-9pt]\\[-9pt]
&\ge&(h_{k_n-1} - \bar{H}) - \frac{1}{\sqrt{n}} \ge\frac{ 3\max(\sqrt
{n},(|A|-1) \operatorname{pen}(n)) }{
n^{1-\varepsilon} },\quad
\nonumber
\end{eqnarray}
where we used that $h_k$ is non-increasing. Comparing (\ref
{eqhpmllb}) to (\ref{eqhpmlub}), the event on the right-hand side of (\ref
{eqsplitpml}) is empty, and (\ref{eqsplit2}) yields
\[
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n\bigr)
< k_n \bigr) \le\Prob\biggl( \overline{B_n \biggl( {
\frac
{\varepsilon
\log n}{4\log|A|} } \biggr)} \biggr) \le12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0\varepsilon^3}{256\RMe (\alpha
+\alpha_0)} \frac{ n^{\varepsilon/2} }{ \log n } + \frac
{\varepsilon
}{4} \log n \biggr),
\]
if $n\ge(\delta2^{\zeta})^2$, according to Theorem~\ref{thent}.\vadjust{\goodbreak}
(ii) If $\mathrm{IC} = \mathrm{NML}$, by the definition of
the NML information criterion, see Definition~\ref{defNML},
\begin{eqnarray}
\label{eqsplitnml} && \biggl\{ \mathrm{NML}_{X_1^n} ( m ) \le\mathrm
{NML}_{X_1^n} \biggl( \biggl\lfloor{ \frac{\varepsilon
\log n}{4\log|A|} } \biggr\rfloor
\biggr) \mbox{ for some } m<k_n \biggr\} \cap B_n \biggl(
{ \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\
&&\quad\subseteq\biggl\{ (n-m)\hat{h}_{m}\bigl(X_1^n
\bigr) - \biggl( n- \biggl\lfloor{ \frac{\varepsilon\log n}{4\log|A|}
} \biggr\rfloor\biggr) \hat
{h}_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}\bigl(X_1^n\bigr)
\nonumber
\\
&&\hspace*{16.5pt}\quad\le\log\Sigma\biggl( n, { \biggl\lfloor\frac{\varepsilon\log
n}{4\log|A|} \biggr\rfloor}
\biggr) - \log\Sigma(n,m) \mbox{ for some } m<k_n \biggr\} \cap
B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\
&&\quad\subseteq\biggl\{ \hat{h}_{m}\bigl(X_1^n
\bigr) - \hat{h}_{ \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor}\bigl(X_1^n\bigr)
<
\frac{ \log
\Sigma( n,
{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor} ) }{n- \lfloor(\varepsilon\log n)/(4\log|A|)
\rfloor}
\nonumber\\[-8pt]\\[-8pt]
&&\qquad\hspace*{4pt}\mbox{ for some } m<k_n \biggr\}
\nonumber
\\
&&\qquad{}\cap B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\
&&\quad\subseteq\biggl\{ h_m - h_{ \lfloor({\varepsilon\log
n})/({4\log
|A|}) \rfloor} <
\frac{ \log\Sigma( n,
{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor} ) }{n- (\varepsilon\log n)/(4\log|A|) } + \frac{2}{n^{
1/2-\varepsilon}} \nonumber\\
&&\qquad\hspace*{4pt}\mbox{ for some } m<k_n \biggr
\},\nonumber
\end{eqnarray}
where in the second relation we used that $\Sigma(n,m)>1$ for any
$m\ge
0$. By Lemma~\ref{lemktml} in the \hyperref[app]{Appendix},
\[
\mathrm{ML}_{k}\bigl(X_1^n\bigr) \le
P_{\mathrm{KT}, k} \bigl(X_1^n\bigr) \exp\biggl(
C_{\mathrm{KT}} |A|^k + \frac{|A|-1}{2} |A|^k \log
\frac{n}{|A|^k} \biggr)
\]
which gives the upper bound
\begin{equation}
\label{eqsigmaub} \log\Sigma(n,k) \le C_{\mathrm{KT}} |A|^k +
\frac{|A|-1}{2} |A|^k \log\frac{n}{|A|^k}.
\end{equation}
Using (\ref{eqsigmaub}) and (\ref{eqatom}),
\begin{eqnarray*}
&&
\frac{ \log\Sigma( n, { \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor} ) }{n-
(\varepsilon\log n)/(4\log|A|) } \\
&&\quad< \biggl( C_{\mathrm{KT}} + \frac
{|A|-1}{2} \log
\frac{n}{|A|^{
\lfloor{ ({\varepsilon\log n})/({4\log|A|}) } \rfloor}} \biggr) \frac
{1}{ n^{1-\varepsilon} }
\\
&&\quad< \biggl( C_{\mathrm{KT}} + \frac{|A|-1}{2} \log n \biggr)
\frac
{1}{ n^{1-\varepsilon} }.
\end{eqnarray*}
Using $\RMe ^x\ge x^2/2+x^4/4!$, $x\ge0$, it follows that $(|A|-1)\log n
\le\sqrt{n}$ if $n\ge24(\log^4 \RMe )(|A|-1)^4$, which implies that
\[
C_{\mathrm{KT}} + \frac{|A|-1}{2} \log n \le\sqrt{n} \qquad\mbox{if }
n\ge
\max\bigl\{ 24\bigl(\log^4 \RMe \bigr) \bigl(|A|-1\bigr)^4,
4C_{\mathrm
{KT}}^2 \bigr\} .
\]
Thus, the expression in (\ref{eqsplitnml}) can be bounded as
\begin{eqnarray}
\label{eqhnmlub}
&&\frac{ \log\Sigma( n, { \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor} ) }{n-
(\varepsilon
\log n)/(4\log|A|) } + \frac{2}{n^{1/2-\varepsilon}}
\nonumber\\[-8pt]\\[-8pt]
&&\quad
< \frac
{3}{n^{1/2-\varepsilon}}
\qquad\mbox{if } n\ge\max\bigl\{ 24\bigl(\log^4 \RMe \bigr)
\bigl(|A|-1\bigr)^4, 4C_{\mathrm
{KT}}^2 \bigr\}.\nonumber
\end{eqnarray}
Now, let $\varepsilon$ and $k_n$ be as in the claim of the proposition.
Then the conditions $h_k-\bar{H} \le\delta2^{-\zeta k}$ and
$\varepsilon\ge(4\log|A|)/\zeta$ imply (\ref{eqdhpmlub}), thus, if
$n\ge(\delta2^{\zeta})^2$, it follows that $k_n \le\frac
{\varepsilon
\log n}{4\log|A|}$, and for any $m<k_n$
\begin{eqnarray}
\label{eqhnmllb} h_m - h_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor} &\ge&(h_{k_n-1} -
\bar{H}) - (h_{ \lfloor({\varepsilon\log
n})/({4\log|A|}) \rfloor} - \bar{H})\nonumber\\
&\ge&(h_{k_n-1} - \bar{H}) -
\frac{1}{\sqrt{n}} \\
&\ge&\frac{3}{n^{1/2-\varepsilon}},\nonumber
\end{eqnarray}
where we used that $h_k$ is non-increasing. Comparing (\ref
{eqhnmllb}) to (\ref{eqhnmlub}), the event on the right-hand side of (\ref
{eqsplitnml}) is empty, and (\ref{eqsplit2}) yields
\[
\Prob\bigl( \hat{k}_{\mathrm{NML}} \bigl(X_1^n\bigr)
< k_n \bigr) \le\Prob\biggl( \overline{B_n \biggl( {
\frac
{\varepsilon
\log n}{4\log|A|} } \biggr)} \biggr)
\le 12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0\varepsilon^3}{256\RMe (\alpha
+\alpha_0)} \frac{ n^{\varepsilon/2} }{ \log n } + \frac
{\varepsilon
}{4} \log n \biggr),
\]
if $n\ge\max\{ 24(\log^4 \RMe )(|A|-1)^4, 4C_{\mathrm{KT}}^2, (\delta
2^{\zeta})^2 \}$, according to Theorem~\ref{thent}.
(iii) If $\mathrm{IC} = \mathrm{KT}$, by the definition of
the KT information criterion, see Definition~\ref{defKT}, and using
that $P_{\mathrm{KT}, m} (X_1^n) \le\mathrm{ML}_{m}(X_1^n)$ for any
$0\le m <n$,
\begin{eqnarray}
\label{eqsplitkt3} && \biggl\{ \mathrm{KT}_{X_1^n} ( m ) \le\mathrm
{KT}_{X_1^n} \biggl( \biggl\lfloor{ \frac{\varepsilon\log
n}{4\log|A|} } \biggr\rfloor
\biggr) \mbox{ for some } m<k_n \biggr\} \cap B_n \biggl(
{ \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-0.8pt]
&&\quad\subseteq\biggl\{ (n-m)\hat{h}_{m}\bigl(X_1^n
\bigr) - \biggl( n- \biggl\lfloor{ \frac{\varepsilon\log n}{4\log|A|}
} \biggr\rfloor\biggr) \hat
{h}_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}\bigl(X_1^n\bigr)
\nonumber
\\[-0.8pt]
&&\hspace*{16.4pt}\quad\le\log\mathrm{ML}_{ \lfloor({\varepsilon\log n})/({4\log
|A|}) \rfloor}\bigl(X_1^n
\bigr) - \log P_{\mathrm{KT}, \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor} \bigl(X_1^n\bigr)
\mbox{ for some } m<k_n \biggr\} \nonumber\\[-0.8pt]
&&\qquad{}\cap B_n \biggl( {
\frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-0.8pt]
&&\quad\subseteq\biggl\{\hat{h}_{m}\bigl(X_1^n
\bigr) - \hat{h}_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}\bigl(X_1^n\bigr)
\nonumber\\[-0.8pt]
&&\hspace*{15.6pt}\quad\le\frac{ \log\mathrm{ML}_{ \lfloor({\varepsilon\log
n})/({4\log|A|}) \rfloor}(X_1^n) - \log P_{\mathrm{KT},
\lfloor({\varepsilon\log n})/({4\log|A|}) \rfloor} (X_1^n)
}{n- \lfloor(\varepsilon\log n)/(4\log|A|)
\rfloor} \\[-0.8pt]
&&\qquad\hspace*{4.5pt}\mbox{ for some } m<k_n \biggr\} \nonumber\\[-0.8pt]
&&\qquad{}\cap
B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-0.8pt]
&&\quad\subseteq\biggl\{ h_m - h_{ \lfloor({\varepsilon\log
n})/({4\log
|A|}) \rfloor} \nonumber\\[-0.8pt]
&&\hspace*{15.6pt}\quad\le
\frac{ \log\mathrm{ML}_{ \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor}(X_1^n) - \log
P_{\mathrm{KT}, \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor} (X_1^n) }{n- (\varepsilon\log
n)/(4\log|A|) } + \frac{2}{n^{1/2-\varepsilon}}
\nonumber
\\[-0.8pt]
&&\qquad\hspace*{4.5pt} \mbox{ for some } m<k_n \biggr\}.\nonumber
\end{eqnarray}
By Lemma~\ref{lemktml} in the \hyperref[app]{Appendix},
\begin{eqnarray*}
&& \log\mathrm{ML}_{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}\bigl(X_1^n\bigr) -
\log P_{\mathrm{KT}, \lfloor
({\varepsilon\log n})/({4\log|A|}) \rfloor} \bigl(X_1^n\bigr)
\\[-0.8pt]
&&\quad\le C_{\mathrm{KT}} |A|^{({\varepsilon\log n})/({4\log|A|})} +
\frac
{|A|-1}{2}
|A|^{({\varepsilon\log n})/({4\log|A|})} \log\frac
{n}{|A|^{ \lfloor({\varepsilon\log n})/({4\log|A|})
\rfloor}},
\end{eqnarray*}
and the proof continues in the same way as in the NML case
(ii).
\end{pf*}
Now we are ready to prove Theorem~\ref{thpmllogSMP}. We prove it in the
following form, which states Theorem~\ref{thpmllogSMP}
with explicit constants.
\begin{theorem}\label{thpmllog}
For any weakly non-null stationary ergodic process with continuity
rates $\bar{\gamma}(k) \le\delta_1 2^{-\zeta_1 k}$ and $\underbar
{\gamma}(k) \ge\delta_2 2^{-\zeta_2 k}$ for some $\zeta_1, \zeta_2,
\delta_1, \delta_2 >0$ ($\zeta_2\ge\zeta_1$), if
\[
\frac{6\log|A|}{\zeta_1} \le\varepsilon< \frac12 ,
\]
\begin{longlist}
\item
the PML Markov order estimator $\hat{k}_{\mathrm{PML}} (X_1^n)$
satisfies that
\[
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n\bigr)
\le k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0\varepsilon^3}{256\RMe (\alpha+\alpha_0)} \frac{
n^{\varepsilon/2} }{ \log n } + \frac{\varepsilon}{4} \log n \biggr),
\]
if $n\ge(36 \delta_1^{4/3} 2^{(4\zeta_1)/3} \log^2 |A|)/(\log^2
\RMe )$, where
\[
k_n = \frac{1}{2\zeta_2} \biggl( 2\log\delta_2 -3 +
\biggl( \frac12 -\varepsilon\biggr) \log n - \log\max\biggl\{ 1, \bigl(|A|-1\bigr)
\frac
{\operatorname{pen}(n)}{\sqrt{n}} \biggr\} \biggr) ;
\]
\item the Markov order estimator $\hat{k}_{\mathrm{IC}} (X_1^n)$,
where IC is either NML or KT, satisfies that
\[
\Prob\bigl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n\bigr)
\le k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0\varepsilon^3}{256\RMe (\alpha+\alpha_0)} \frac{
n^{\varepsilon/2} }{ \log n } + \frac{\varepsilon}{4} \log n \biggr),
\]
if $n\ge\max^2 \{ \sqrt{24} (\log^2 \RMe ) (|A|-1)^2, 2C_{\mathrm{KT}},
(6 \delta_1^{2/3} 2^{(2\zeta_1)/3} \log|A|)/(\log \RMe ) \}$, where
\[
k_n = \frac{1}{2\zeta_2} \biggl( 2\log\delta_2 -3 +
\biggl( \frac12 -\varepsilon\biggr) \log n \biggr) .
\]
[Here, $C_{\mathrm{KT}}$ is the constant in the well-known bound of
$\log\mathrm{ML}_{k}(X_1^n) -\log P_{\mathrm{KT}, k} (X_1^n)$, see
Lemma~\textup{\ref{lemktml}} in the \hyperref[app]{Appendix}.]
\end{longlist}
\end{theorem}
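As a purely numerical illustration of the threshold in part (i) (not part of the statement; the parameter values below are arbitrary, and base-$2$ logarithms are used, matching the convention of the paper), $k_n$ can be evaluated directly:

```python
import math

def k_n_pml(n, delta2, zeta2, eps, A_size, pen):
    """Threshold k_n from part (i) of the theorem (logarithms base 2).
    All parameter values passed in are illustrative, not from the text."""
    log2 = lambda x: math.log(x, 2)
    correction = log2(max(1.0, (A_size - 1) * pen(n) / math.sqrt(n)))
    return (1.0 / (2 * zeta2)) * (
        2 * log2(delta2) - 3 + (0.5 - eps) * log2(n) - correction
    )

# Sample values: delta2 = zeta2 = 1, eps = 0.1, binary alphabet,
# pen(n) = sqrt(n), n = 2**20; then k_n = (1/2)(-3 + 0.4*20) = 2.5.
print(k_n_pml(2**20, delta2=1.0, zeta2=1.0, eps=0.1,
              A_size=2, pen=math.sqrt))
```

For $\operatorname{pen}(n)=\sqrt{n}$ the $\max$ term equals $1$ and its logarithm vanishes, so only the $(\frac12-\varepsilon)\log n$ term grows with $n$, giving the logarithmic growth of $k_n$.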
\begin{pf}
By Remark~\ref{remgammaalpha}, $\sum_{k=0}^{+\infty} \bar{\gamma}(k)
\le\sum_{k=0}^{+\infty} \delta_1 2^{-\zeta_1 k} < +\infty$
implies the
$\alpha$-summability. The deviation of the conditional entropies from
the entropy rate will also be controlled by the continuity rates of the
process, and Proposition~\ref{thpmlh} will yield the claim of the theorem.
First, for any $k\le m$,
\begin{eqnarray}
\label{eqgsplit1}
&& h_k - h_m
\nonumber
\\
&&\quad= \sum_{a\in A} \sum
_{a_{m-k+1}^m \in A^k} \biggl( -P\bigl(a_{m-k+1}^m a\bigr)
\log\frac{ P(a_{m-k+1}^m a) }{
P(a_{m-k+1}^m) } \nonumber\\
&&\qquad\hspace*{60.5pt}{}- \sum_{a_1^{m-k} \in A^{m-k}} -P
\bigl(a_1^m a\bigr) \log\frac{ P(a_1^m a) }{
P(a_1^m) } \biggr)
\\
&&\quad= \sum_{a\in A} \sum
_{a_{m-k+1}^m \in A^k} \biggl( -P\bigl(a_{m-k+1}^m \bigr)
\sum_{a_1^{m-k} \in A^{m-k}} \frac{ P(a_1^m)
}{ P(a_{m-k+1}^m) } \biggl(
\frac{ P(a_{m-k+1}^m a) }{ P(a_{m-k+1}^m) } \log\frac{ P(a_{m-k+1}^m a)
}{ P(a_{m-k+1}^m) } \biggr)
\nonumber
\\
&&\hspace*{58pt}\qquad{} - P\bigl(a_{m-k+1}^m\bigr) \sum
_{a_1^{m-k} \in A^{m-k}} -\frac{
P(a_1^m) }{ P(a_{m-k+1}^m) } \biggl( \frac{ P(a_1^m a) }{ P(a_1^m) }
\log
\frac{ P(a_1^m a) }{ P(a_1^m) } \biggr)\biggr)
\nonumber
\\
&&\quad= \sum_{a_{m-k+1}^m \in A^k} -P\bigl(a_{m-k+1}^m
\bigr) \sum_{a_1^{m-k} \in A^{m-k}} \frac{ P(a_1^m) }{
P(a_{m-k+1}^m) }
\nonumber
\\
&&\qquad\hspace*{0pt}{}\times \sum_{a\in A} \biggl( \frac{ P(a_{m-k+1}^m a) }{
P(a_{m-k+1}^m) } \log
\frac{ P(a_{m-k+1}^m a) }{ P(a_{m-k+1}^m) } - \frac{ P(a_1^m a) }{
P(a_1^m) } \log\frac{ P(a_1^m a) }{ P(a_1^m) } \biggr).\nonumber
\end{eqnarray}
On the right of (\ref{eqgsplit1}), the difference of entropies of the
conditional distributions $\{ P(a| a_{m-k+1}^m), a\in A\}$
and $\{ P(a| a_1^m), a\in A\}$ appears. By Remark~\ref{remgammaeq},
the total variation of these conditional distributions can be upper
bounded as
\[
d_{\TV} \bigl( P\bigl( \cdot| a_{m-k+1}^m\bigr), P
\bigl( \cdot| a_1^m\bigr) \bigr) = \sum
_{a\in A} \bigl\llvert P\bigl(a| a_{m-k+1}^m
\bigr) - P\bigl(a| a_1^m\bigr) \bigr\rrvert\le\bar{
\gamma}(k) .
\]
Hence, applying Lemma~\ref{lemsch} in the \hyperref[app]{Appendix}, it
follows, similarly to the bounds (\ref{eqsch}) and (\ref{eqsch2}) in
the proof of Proposition~\ref{prent}, that
\begin{eqnarray}
\label{eqgsplit11} &&\biggl\llvert\sum_{a\in A} P
\bigl(a| a_{m-k+1}^m\bigr) \log P\bigl(a| a_{m-k+1}^m
\bigr) - \sum_{a\in A} P\bigl(a| a_1^m
\bigr) \log P\bigl(a| a_1^m\bigr) \biggr\rrvert
\nonumber
\\
&&\quad\le\frac{\log|A|}{\log \RMe } d_{\TV} \bigl( P\bigl( \cdot|
a_{m-k+1}^m\bigr) , P\bigl( \cdot| a_1^m
\bigr) \bigr) + \frac{1}{\RMe } \frac{1+\nu}{\nu} d_{\TV}^{{1}/({1+\nu})}
\bigl( P\bigl( \cdot| a_{m-k+1}^m\bigr), P\bigl( \cdot|
a_1^m\bigr) \bigr)\qquad
\nonumber\\[-8pt]\\[-8pt]
&&\quad\le\frac{\log|A|}{\log \RMe } \bar{\gamma}(k) + \frac{1}{\RMe }
\frac{1+\nu}{\nu} \bar{\gamma}(k)^{
{1}/({1+\nu})}
\nonumber
\\
&&\quad\le\frac{2\log|A|}{\log \RMe } \frac{1+\nu}{\nu} \bar{\gamma
}(k)^{{1}/({1+\nu})}\nonumber
\end{eqnarray}
for any $\nu>0$, if $\bar{\gamma}(k) \le1/\RMe $. Setting $\nu=1/2$,
combining (\ref{eqgsplit11}) with (\ref{eqgsplit1}) and taking
$m\to
+\infty$ yield the bound
\begin{equation}
\label{eqklb} h_k - \bar{H} \le\frac{6\log|A|}{\log \RMe } \bar{
\gamma}(k)^{2/3},
\end{equation}
if $\bar{\gamma}(k) \le1/\RMe $. Since $h_k - \bar{H} \le h_k \le\log|A|$,
the bound (\ref{eqklb}) is trivial if $\bar{\gamma}(k) > 1/\RMe $. Hence,
using the assumption $\bar{\gamma}(k) \le\delta_1 2^{-\zeta_1 k}$ of
the theorem,
\begin{equation}
\label{eqhg} h_k - \bar{H} \le\frac{6\log|A|}{\log \RMe }
\delta_1^{2/3} 2^{-{2\zeta
_1} k/3},
\end{equation}
and the assumption $h_k-\bar{H} \le\delta2^{-\zeta k}$ of Proposition
\ref{thpmlh} is satisfied with
\[
\delta= \frac{6\log|A|}{\log \RMe } \delta_1^{2/3} \quad\mbox{and}
\quad\zeta= \frac{2\zeta_1}{3} .
\]
Thus, the constraint $\varepsilon\ge(4\log|A|)/\zeta$ in Proposition
\ref{thpmlh} becomes $\varepsilon\ge(6\log|A|)/\zeta_1$, and
$n\ge
(\delta2^{\zeta})^2$ becomes
\[
n\ge\frac{36\log^2 |A|}{\log^2 \RMe } \delta_1^{4/3} 2^{(4\zeta_1)/3} .
\]
Next, for any $k < +\infty$,
\begin{eqnarray}
\label{eqgsplit2} &&h_k - \bar{H}
\nonumber
\\
&&\quad=\sum_{x_{-k}^{-1} \in A^k} \sum
_{a\in A} -P\bigl(x_{-k}^{-1} a\bigr) \log P
\bigl(a| x_{-k}^{-1} \bigr) \nonumber\\
&&\qquad{}+ \int_{A^{\infty}}
\sum_{a\in A} P\bigl(a| x_{-\infty}^{-1}
\bigr) \log P\bigl(a| x_{-\infty}^{-1} \bigr) \mrmd P
\bigl(x_{-\infty}^{-1}\bigr)
\\
&&\quad=\int_{A^{\infty}} \sum_{a\in A} P
\bigl(a| x_{-\infty}^{-1} \bigr) \log\frac{P(a| x_{-\infty}^{-1}
)}{P(a| x_{-k}^{-1} )} \mrmd P
\bigl(x_{-\infty}^{-1}\bigr)
\nonumber
\\
&&\quad=\int_{A^{\infty}} D \bigl( P\bigl( \cdot|
x_{-\infty}^{-1} \bigr) \| P\bigl( \cdot| x_{-k}^{-1}
\bigr) \bigr) \mrmd P\bigl(x_{-\infty}^{-1}\bigr),\nonumber
\end{eqnarray}
where $D( \cdot\| \cdot)$ denotes the Kullback--Leibler
divergence. Using Pinsker's inequality~\cite{Cover,CSbook}, (\ref
{eqgsplit2}) can be lower bounded by
\begin{equation}
\label{eqgsplit22} \int_{A^{\infty}} \frac12 \biggl( \sum
_{a\in A} \bigl\llvert P\bigl(a| x_{-\infty}^{-1}
\bigr) - P\bigl(a| x_{-k}^{-1} \bigr) \bigr\rrvert
\biggr)^2 \mrmd P\bigl(x_{-\infty}^{-1}\bigr) \ge
\frac12 \underbar{\gamma}(k)^2 \ge\delta_2^2
2^{-2\zeta_2 k -1},
\end{equation}
where in the last inequality we used the assumption $\underbar{\gamma
}(k) \ge\delta_2 2^{-\zeta_2 k}$ of the theorem. Hence, in case (i)
\begin{eqnarray*}
&&\min\biggl\{ k\ge0\dvtx h_k - \bar{H} < \frac{ 4\max(\sqrt
{n},(|A|-1)\operatorname{pen}(n)) }{ n^{1-\varepsilon} }
\biggr\}
\\
&&\quad\ge\min\biggl\{ k\ge0\dvtx\delta_2^2
2^{-2\zeta_2 k-1} < \frac{
4\max
(\sqrt{n},(|A|-1)\operatorname{pen}(n)) }{ n^{1-\varepsilon} } \biggr\}
\\
&&\quad= \min\biggl\{ k\ge0\dvtx k > \frac{1}{2\zeta_2} \bigl( 2\log
\delta_2 -3 + (1-\varepsilon) \log n - \log\max\bigl(\sqrt{n},\bigl(|A|-1\bigr)
\operatorname{pen}(n)\bigr) \bigr) \biggr\}
\\
&&\quad= 1+ \biggl\lfloor\frac{1}{2\zeta_2} \biggl( 2\log\delta_2 -3
+ \biggl( { \frac12} -\varepsilon\biggr) \log n - \log\max\biggl\{ 1, \bigl(|A|-1\bigr)
\frac{\operatorname{pen}(n)}{\sqrt{n}} \biggr\} \biggr) \biggr\rfloor,
\end{eqnarray*}
while in case (ii)
\begin{eqnarray*}
&&\min\biggl\{ k\ge0\dvtx h_k - \bar{H} < \frac{4}{n^{
1/2-\varepsilon}} \biggr
\}
\\
&&\quad\ge\min\biggl\{ k\ge0\dvtx\delta_2^2
2^{-2\zeta_2 k-1} < \frac
{4}{n^{1/2-\varepsilon}} \biggr\}
\\
&&\quad= \min\biggl\{ k\ge0\dvtx k > \frac{1}{2\zeta_2} \biggl( 2\log
\delta_2 -3 + \biggl( { \frac12} -\varepsilon\biggr) \log n \biggr)
\biggr\}
\\
&&\quad= 1+ \biggl\lfloor\frac{1}{2\zeta_2} \biggl( 2\log\delta_2 -3
+ \biggl( { \frac12} -\varepsilon\biggr) \log n \biggr) \biggr\rfloor,
\end{eqnarray*}
and the proof is completed.
\end{pf}
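The key analytic tool in the lower bound (\ref{eqgsplit22}) above, and again in the proofs below, is Pinsker's inequality in the form $D(P\|Q) \ge \frac12 (\sum_a |P(a)-Q(a)|)^2$, which holds a fortiori with base-$2$ logarithms (the left side only gains a factor $\log_2 \RMe > 1$). A quick numerical sanity check, not part of the proof (the helper names are ours):

```python
import math, random

def kl_div_bits(p, q):
    """Kullback-Leibler divergence D(P||Q) in bits (base-2 logs)."""
    return sum(pi * math.log(pi / qi, 2) for pi, qi in zip(p, q) if pi > 0)

def l1_dist(p, q):
    """The distance sum_a |p(a) - q(a)| appearing in the Pinsker step."""
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

def random_dist(k, rng):
    w = [rng.random() + 1e-9 for _ in range(k)]
    s = sum(w)
    return [x / s for x in w]

rng = random.Random(0)
for _ in range(1000):
    p = random_dist(4, rng)
    q = random_dist(4, rng)
    # Pinsker: D(P||Q) >= (1/2) * (sum_a |p(a) - q(a)|)^2.
    assert kl_div_bits(p, q) >= 0.5 * l1_dist(p, q) ** 2
print("Pinsker bound holds on 1000 random pairs")
```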
Finally, we prove the following proposition that directly implies
Theorem~\ref{thoracle}.
\begin{proposition}\label{proporacle}
For any weakly non-null stationary ergodic process with continuity rate
$\bar{\gamma}(k) \le\delta2^{-\zeta k}$, $\zeta, \delta>0$, and for
any $\xi>0$, if $\varepsilon>0$ is so small and $\zeta>0$ is so
large that
\[
\frac12 + \varepsilon< \kappa< 1-\frac{\varepsilon}{4}
\]
and
\[
\frac{6\log|A|}{\zeta} \le\frac{\varepsilon}{1-\kappa} < 2\xi,
\]
the PML Markov order estimator $\hat{k}_{\mathrm{PML}} (X_1^n)$ with
$\operatorname{pen}(n)=n^{\kappa}$ satisfies that
\[
\Prob\biggl( \biggl\llvert\frac{ \hat{k}_{\mathrm{PML}} (X_1^n) }{
k_{\mathrm
{PML},n} } -1 \biggr\rrvert>\xi\biggr)
\le\exp\biggl( -\frac{c_2'\varepsilon^3}{\log n} n^{\varepsilon/2}
\biggr),
\]
if $n$ is sufficiently large, where $c_2'>0$ is a constant depending
only on the distribution of the process.
\end{proposition}
\begin{pf}
As in the proof of Theorem~\ref{thpmllog}, the summability of the
continuity rate implies the $\alpha$-summability; hence, the
conditions of Theorem~\ref{thent} are again satisfied. Moreover,
according to (\ref{eqhg2}), $\bar{\gamma}(k)
\le\delta2^{-\zeta k}$ also implies that
\begin{equation}
\label{eqhg2} h_k - \bar{H} \le\frac{6\log|A|}{\log \RMe }
\delta^{2/3} 2^{-{2\zeta}k/3}.
\end{equation}
Set $\xi$, $\varepsilon$ and $\kappa$ as in the conditions of the
proposition, and
define a sequence $k_n\in\mathbb{N}$ such that for sufficiently large
$n$
\begin{eqnarray*}
&&\mbox{\hphantom{ii}(i)\quad} h_{\lfloor(1-\xi/2) k_n \rfloor} - h_{k_n} \ge|A|
n^{-1+\kappa
+\varepsilon/4},\\
&&\mbox{\hphantom{i}(ii)\quad}h_{k_n} - \bar{H} \le\frac{1}{2} \bigl(|A|-1\bigr)^2
n^{-1+\kappa},\\
&&\mbox{(iii)\quad} k_n\le\frac{\varepsilon\log n}{4\log|A|} .
\end{eqnarray*}
Due to (\ref{eqhg2}), such a sequence exists. Since $h_k - \bar{H}$ is
non-negative and decreasing, it is sufficient to show this when
$h_k - \bar{H} = \frac{6\log|A|}{\log \RMe } \delta^{2/3} 2^{-{2\zeta}k/{3}}$.
Then, writing $k_n$ in the form $k_n=\nu\log n$,
\[
h_{\lfloor(1-\xi/2) k_n \rfloor} - h_{k_n} \ge\frac12 \frac{6\log
|A|}{\log \RMe }
\delta^{2/3} n^{-
{2\zeta
}(1-\xi/2)\nu/{3}} \quad\mbox{and}\quad h_{k_n} -
\bar{H} = \frac{6\log|A|}{\log \RMe } \delta^{2/3} n^{-{2\zeta}
\nu/{3}},
\]
if $n$ is sufficiently large, which implies (i) and (ii) if
\[
1-\kappa< \frac{2\zeta\nu}{3} < \biggl( 1-\kappa-\frac
{\varepsilon}{4} \biggr)
\frac{1}{1-\xi/2} .
\]
Such $\nu>0$ exists because it follows from the condition $\varepsilon
/(1-\kappa) < 2\xi$ that $1-\kappa< ( 1-\kappa-\varepsilon/4 ) /
(1-\xi
/2)$. Moreover, the condition $\varepsilon/(1-\kappa) \ge(6 \log
|A|)/\zeta$ implies $\nu\le\varepsilon/(4 \log|A|)$ satisfying (iii).
First, recall the definition of $B_n ( \frac{\varepsilon\log
n}{4\log|A|} )$ in (\ref{eqBndef}) in the proof of Proposition
\ref{thpmlh}. Similarly to (\ref{eqsplit2}) and (\ref{eqsplitpml}), we
can write that
\begin{eqnarray}
\label{eqolowerb} && \bigl\{ \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) < (1-\xi/2) k_n \bigr\} \cap B_n \biggl( {
\frac{\varepsilon\log n}{4\log|A|} } \biggr)
\\
&&\quad\subseteq\bigl\{ \mathrm{PML}_{X_1^n} ( m ) \le
\mathrm{PML}_{X_1^n} ( k_n ) \mbox{ for some } m<(1-\xi/2)
k_n \bigr\} \cap B_n \biggl( { \frac{\varepsilon\log n}{4\log|A|} }
\biggr)
\nonumber
\\
&&\quad= \bigl\{ \mathrm{PML}_{o,n} ( m ) + (n-m) \bigl(\hat
{h}_{m}\bigl(X_1^n\bigr) - h_m
\bigr) \le\mathrm{PML}_{o,n} ( k_n ) + (n-k_n)
\bigl(\hat{h}_{k_n}\bigl(X_1^n\bigr) -
h_{k_n}\bigr)
\nonumber
\\
&&\hspace*{25pt} \mbox{ for some } m<(1-\xi/2) k_n \bigr\} \cap B_n
\biggl( { \frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\
\label{eqolowerhalf} &&\quad\subseteq\biggl\{ \mathrm{PML}_{o,n} ( m
) - \frac
{2n}{n^{1/2-\varepsilon}} \le\mathrm{PML}_{o,n} ( k_n )
\mbox{ for some } m<(1-\xi/2) k_n \biggr\}
\\
&&\quad\subseteq\biggl\{ (n-m) h_{m} - ( n- k_n )
h_{k_n} \le\bigl(|A|-1\bigr) \bigl( |A|^{k_n} - |A|^m \bigr)
\operatorname{pen}(n) + \frac
{2n}{n^{1/2-\varepsilon}}
\nonumber
\\
&&\hspace*{25pt} \mbox{ for some } m<(1-\xi/2) k_n \biggr\}
\nonumber
\\
&&\quad\subseteq\biggl\{ h_m - h_{ k_n } \le
\frac{(|A|-1) |A|^{
({\varepsilon\log n})/({4\log|A|})} \operatorname{pen}(n)}{n-
(\varepsilon\log n)/(4\log
|A|) } + \frac{2}{n^{1/2-\varepsilon}} \mbox{ for some } m<(1-\xi/2)
k_n \biggr\}
\nonumber
\\
\label{eqolower1} &&\quad\subseteq\bigl\{ h_{\lfloor(1-\xi/2) k_n
\rfloor} - h_{ k_n }
< |A| n^{-1+\kappa+\varepsilon/4} \bigr\}
\end{eqnarray}
which is the empty set by (i), if $n$ is large enough and $k_n \le\frac
{\varepsilon\log n}{4\log|A|}$; the latter is satisfied
by~\textup{(iii)}. On the other hand,
\begin{eqnarray}
\label{eqoupperb} && \bigl\{ \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) > (1+\xi/2) k_n \bigr\} \cap B_n \biggl( {
\frac{\varepsilon\log n}{4\log|A|} } \biggr) \cap\biggl\{ \hat
{k}_{\mathrm{PML}}
\bigl(X_1^n\bigr) \le\frac{\varepsilon
\log
n}{4\log|A|} \biggr\}
\\[-0.5pt]
&&\quad\subseteq\biggl\{ \mathrm{PML}_{X_1^n} ( m ) < \mathrm
{PML}_{X_1^n} ( k_n ) \mbox{ for some } (1+\xi/2)
k_n <m\le\frac{\varepsilon\log n}{4\log|A|} \biggr\}\nonumber\\[-0.5pt]
&&\qquad{} \cap B_n \biggl( {
\frac{\varepsilon\log n}{4\log|A|} } \biggr)
\nonumber
\\[-0.5pt]
\label{eqoupperhalf} &&\quad\subseteq\biggl\{ \mathrm{PML}_{o,n} ( m
) - \frac
{2n}{n^{1/2-\varepsilon}} < \mathrm{PML}_{o,n} ( k_n ) \mbox{
for some } m>(1+\xi/2) k_n \biggr\}
\\[-0.5pt]
&&\quad\subseteq\biggl\{ \bigl(|A|-1\bigr) \bigl( |A|^m - |A|^{k_n}
\bigr) \operatorname{pen}(n) - \frac{2n}{n^{1/2-\varepsilon}} < (
n- k_n )
h_{k_n} - (n-m) h_{m}
\nonumber
\\[-0.5pt]
&&\hspace*{25pt} \mbox{ for some } m>(1+\xi/2) k_n \biggr\}
\nonumber
\\[-0.5pt]
&&\quad\subseteq\biggl\{ \bigl(|A|-1\bigr) \bigl( |A|^m - |A|^{k_n}
\bigr) \frac
{\operatorname{pen}(n)}{n} - \frac{2}{n^{1/2-\varepsilon}} <
h_{k_n} - \biggl(1-
\frac{m}{n} \biggr) h_{m}
\nonumber
\\[-0.5pt]
&&\hspace*{25pt} \mbox{ for some } m>(1+\xi/2) k_n \biggr\}
\nonumber
\\[-0.5pt]
&&\quad\subseteq\biggl\{ \bigl(|A|-1\bigr) \bigl( |A|^m - |A|^{k_n}
\bigr) \frac
{\operatorname{pen}(n)}{n} - \frac{2}{n^{1/2-\varepsilon}} - \frac
{m}{n}\bar{H} \nonumber\\[-0.5pt]
&&\hspace*{26.5pt}
< (
h_{k_n} - \bar{H} ) - \biggl(1-\frac{m}{n} \biggr) (
h_{k_n} - \bar{H} )
\nonumber
\\[-0.5pt]
&&\hspace*{25pt} \mbox{ for some } m>(1+\xi/2) k_n \biggr\}
\nonumber
\\[-0.5pt]
\label{eqoupper1} &&\quad\subseteq\biggl\{ h_{k_n} - \bar{H} >
\frac{ (|A|-1)^2 }{2} n^{-1+\kappa} \biggr\}
\end{eqnarray}
which is the empty set by (ii), if $n$ is large enough.
Observe that
\begin{equation}
\label{eqknko} \frac{1+\xi/2}{1+\xi} k_n \le k_{\mathrm{PML},n} \le
\frac{1-\xi
/2}{1-\xi} k_n,
\end{equation}
if $n$ is sufficiently large. Indeed, arguing by contradiction, the
following sequence of implications can be written:
\begin{eqnarray*}
k_{\mathrm{PML},n} < \frac{1+\xi/2}{1+\xi} k_n \quad&\Rightarrow\quad&
k_{\mathrm{PML},n} < \biggl(1-\frac{\xi/2}{1+\xi}\biggr) k_n \quad\Rightarrow\quad
k_{\mathrm{PML},n} < (1-\xi/2) k_n
\\
&\Rightarrow\quad&\mathrm{PML}_{o,n} ( m ) < \mathrm{PML}_{o,n}
( k_n ) \qquad\mbox{for some } m<(1-\xi/2) k_n
\\
&\Rightarrow\quad&\mathrm{PML}_{o,n} ( m ) - \frac
{2n}{n^{1/2-\varepsilon}} <
\mathrm{PML}_{o,n} ( k_n ) \\
&&\mbox{for some } m<(1-
\xi/2) k_n
\end{eqnarray*}
which does not hold by (\ref{eqolowerhalf}) and (\ref{eqolower1}) if
$n$ is large enough, and
\begin{eqnarray*}
k_{\mathrm{PML},n} > \frac{1-\xi/2}{1-\xi} k_n \quad&\Rightarrow&\quad
k_{\mathrm{PML},n} > \biggl(1+\frac{\xi/2}{1-\xi}\biggr) k_n \quad\Rightarrow\quad
k_{\mathrm{PML},n} > (1+\xi/2) k_n
\\
&\Rightarrow&\quad\mathrm{PML}_{o,n} ( m ) < \mathrm{PML}_{o,n}
( k_n ) \qquad\mbox{for some } m>(1+\xi/2) k_n
\\
&\Rightarrow&\quad\mathrm{PML}_{o,n} ( m ) - \frac
{2n}{n^{1/2-\varepsilon}} <
\mathrm{PML}_{o,n} ( k_n ) \\
&&\quad\mbox{for some } m>(1+
\xi/2) k_n
\end{eqnarray*}
which does not hold either, by (\ref{eqoupperhalf}) and
(\ref{eqoupper1}), if $n$ is large enough.
Finally, using (\ref{eqknko}), we get
\begin{eqnarray*}
&& \Prob\biggl( \biggl\llvert\frac{ \hat{k}_{\mathrm{PML}} (X_1^n) }{
k_{\mathrm{PML},n} } -1 \biggr\rrvert>\xi\biggr)
\\
&&\quad=\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) < (1-\xi) k_{\mathrm
{PML},n} \bigr) + \Prob\bigl( \hat{k}_{\mathrm{PML}}
\bigl(X_1^n\bigr) > (1+\xi) k_{\mathrm
{PML},n} \bigr)
\\
&&\quad\le\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) < (1-\xi/2) k_n \bigr) + \Prob\bigl( \hat{k}_{\mathrm{PML}}
\bigl(X_1^n\bigr) > (1+\xi/2) k_n \bigr)
\\
&&\quad\le\Prob\biggl( \bigl\{ \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) < (1-\xi/2) k_n \bigr\} \cap B_n \biggl( {
\frac{\varepsilon\log
n}{4\log
|A|} } \biggr) \biggr)
\\
&&\qquad{} + \Prob\biggl( \bigl\{ \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) > (1+\xi/2) k_n \bigr\} \cap B_n \biggl( {
\frac{\varepsilon\log
n}{4\log
|A|} } \biggr) \cap\biggl\{ \hat{k}_{\mathrm{PML}}
\bigl(X_1^n\bigr) \le\frac
{\varepsilon\log n}{4\log|A|} \biggr\}
\biggr)
\\
&&\qquad{} +2\Prob\biggl( \overline{ B_n \biggl( {
\frac
{\varepsilon\log n}{4\log|A|} } \biggr) } \biggr) + \Prob\biggl( \hat
{k}_{\mathrm{PML}}
\bigl(X_1^n\bigr) > \frac{\varepsilon
\log
n}{4\log|A|} \biggr),
\end{eqnarray*}
where the first two terms are zero if $n$ is large enough by (\ref
{eqolowerb})--(\ref{eqolower1}) and (\ref{eqoupperb})--(\ref
{eqoupper1}). Using Proposition~\ref{propqminupper} with $r_n=n-1$,
$k_n= \lfloor\frac{\varepsilon\log n}{4\log|A|}
\rfloor$
and $m_n= \lfloor\frac{\varepsilon\log n}{6\log|A|}
\rfloor$,
\[
\Prob\biggl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
\bigr) > \frac{\varepsilon
\log
n}{4\log|A|} \biggr) \le\exp\bigl( - \mathcal{O}
\bigl(n^{\kappa+\varepsilon/4}\bigr) \bigr),
\]
because
\[
n \bar{\gamma}(m_n) \le n \delta2^{\zeta} \exp\biggl( -\zeta
\frac{\varepsilon\log
n}{6\log|A|} \biggr) = \delta2^{\zeta} n^{1-\zeta\varepsilon/(6\log|A|)},
\]
but $1-\zeta\varepsilon/(6\log|A|) < \kappa+\varepsilon/4$
according to
the condition $\varepsilon/(1-\kappa) \ge(6 \log|A|)/\zeta$. Then the
claim of the proposition follows from Theorem~\ref{thent}.
\end{pf}
\section{Process estimation proofs}\label{secproofappl}
In this section, we consider the estimation of stationary ergodic
processes by finite memory processes. First, define
\[
\beta_1 = \frac{1}{ \prod_{j=1}^{+\infty} (1-2\bar{\gamma}(j)) }
\]
and
\[
\beta_2 = \sup_{k\ge1} 2|A|\frac{1- (1-2|A|\bar{\gamma}(k))^k}{
k \bar
{\gamma}(k) \prod_{j=1}^{+\infty} (1-2|A|\bar{\gamma}(j))^2 } .
\]
Clearly, if $\sum_{k=1}^{\infty} \bar{\gamma}(k)<+\infty$, then
$\beta_1,\beta_2<+\infty$.
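For a concrete geometric continuity rate $\bar{\gamma}(j)=\delta 2^{-\zeta j}$, the constant $\beta_1$ can be approximated numerically by truncating the infinite product (the tail factors tend to $1$). The sketch below is our own helper with illustrative parameter values, not taken from the text:

```python
import math

def beta1_geometric(delta, zeta, terms=200):
    """Approximate beta_1 = 1 / prod_{j>=1} (1 - 2*gamma(j)) for the
    geometric rate gamma(j) = delta * 2**(-zeta*j), truncating the
    infinite product after `terms` factors."""
    prod = 1.0
    for j in range(1, terms + 1):
        factor = 1.0 - 2.0 * delta * 2.0 ** (-zeta * j)
        if factor <= 0.0:  # rate too large: this truncation diverges
            return float('inf')
        prod *= factor
    return 1.0 / prod

print(beta1_geometric(0.1, 1.0))  # a moderate constant, roughly 1.23
```

Since $\sum_j 2\bar{\gamma}(j)$ is a convergent geometric series here, the truncated product stabilizes quickly and $\beta_1$ is a moderate constant.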
Now we prove the following theorem that formulates Theorem \ref
{thqminSMP} with explicit constants.
\begin{theorem}\label{thqmin}
For any non-null stationary ergodic process with summable continuity
rate and uniformly convergent restricted continuity rate with
parameters $\theta_1$, $\theta_2$, $k_{\theta}$, for any $\mu_n>0$,
the empirical Markov estimator of the process with the order estimated
by the bounded PML Markov order estimator $\hat{k}_{\mathrm{PML}}
(X_1^n | \eta\log n)$, $\eta>0$, with penalty function
$\operatorname{pen}(n)\le
\mathcal{O}(\sqrt{n})$ satisfies
\begin{eqnarray*}
&& \Prob\biggl( \bar{d} \bigl( X_1^n, \hat{X} \bigl[
\hat{k}_{\mathrm
{PML}} \bigl(X_1^n | \eta\log n\bigr)
\bigr]_1^n \bigr) > \frac{\beta_2}{\pinf^{2}} g_n +
\frac{1}{n^{1/2-\mu_n}} \biggr)
\\
&&\quad\le2 \RMe ^{1/\RMe } |A|^{K_n+h_n+2} \exp\biggl\{ -
\frac{ \pinf^2 }{ 16 \RMe |A|^3 (\alpha+ \pinf) (\beta_1+1)^2 } \frac{
(n-K_n-h_n)}{ (1+K_n+h_n) n }
\\
&&\hspace*{87.4pt}\qquad{}\times 4^{-(K_n+h_n) |\log\pinf|} \biggl[ 4^{\mu_n\log n} - \frac
{(K_n+h_n)|\log\pinf|(\beta_1+1)^2}{2} \biggr]
\biggr\}
\\
&&\qquad{}+ 12 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha_0 (\log|A|)^3
\eta^3}{4\RMe (\alpha+\alpha_0)}
\frac{ n^{2\eta\log|A|} }{ \log n } + \bigl(\eta\log|A|\bigr)\log n \biggr)
\\
&&\qquad{}+ \exp\biggl( -\bigl(|A|-1\bigr) |A|^{K_n+h_n+1} \\
&&\hspace*{32pt}\qquad{}\times\operatorname{pen}(n)
\biggl[ 1 - \frac
{1}{|A|^{1+h_n}} - \frac{1}{2\operatorname{pen}(n)} \bigl( \log n -
(K_n+h_n) \log|A| \bigr) \biggr]
\\
&&\hspace*{32pt}\qquad{} + \frac{c \operatorname{pen}(n)}{\pinf/\log \RMe } +
|A|^{K_n+h_n+1} C_{\mathrm{KT}} + \log(
\eta\log n) \biggr),
\end{eqnarray*}
if $n$ is so large that
\begin{equation}
\label{eqnbound} \min\biggl\{ \biggl\lfloor\frac{\eta}{\theta_2}\log n
\biggr
\rfloor, k\ge0\dvtx\bar{\gamma}(k) < \biggl( \frac{ 6\max( \sqrt{n},(|A|-1)
\operatorname{pen}(n))
}{ \pinf n^{1-\eta\log(|A|^4/\pinf)} }
\biggr)^{1/(2\theta_1)} \biggr\} \ge k_{\theta},
\end{equation}
where
\begin{eqnarray*}
g_n &=& \max\biggl\{ \bar{\gamma} \biggl( \biggl\lfloor
\frac{\eta
}{\theta_2}\log n \biggr\rfloor\biggr), \biggl( \frac{ 6\max( 1,
(|A|-1)
({\operatorname{pen}(n)})/{\sqrt{n}}) }{ \pinf n^{1/2-\eta
\log(|A|^4/\pinf)} }
\biggr)^{1/(2\theta_1)} \biggr\} ,
\\
K_n &=& K_n \biggl( r_n, \bar{\gamma},
\frac{c}{n}\operatorname{pen}(n) \biggr) ,
\end{eqnarray*}
and $c>0$ is an arbitrary constant and $h_n\in\mathbb{N}$ is an
arbitrary sequence.
\end{theorem}
The proof is based on the following two propositions.
\begin{proposition}\label{propqmin}
For any non-null and $\alpha$-summable stationary ergodic process with
uniformly convergent restricted continuity rate with parameters $\theta
_1$, $\theta_2$, $k_{\theta}$,
\begin{longlist}
\item
the bounded PML Markov order estimator $\hat
{k}_{\mathrm
{PML}} (X_1^n | \eta\log n)$ with penalty function $\operatorname
{pen}(n)\le\mathcal
{O}(\sqrt{n})$ satisfies that
\[
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
\eta\log n\bigr) < k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0 (\log|A|)^3 \eta^3}{4\RMe (\alpha+\alpha_0)} \frac{ n^{2\eta\log|A|} }{ \log n } + \bigl(\eta\log|A|\bigr)\log n \biggr),
\]
if $n$ is so large that $k_n\ge k_{\theta}$, where
\[
k_n = \min\biggl\{ \biggl\lfloor\frac{\eta}{\theta_2}\log n \biggr
\rfloor, k\ge0\dvtx\bar{\gamma}(k) < \biggl( \frac{ 6\max( \sqrt{n},(|A|-1)
\operatorname{pen}(n))
}{ \pinf n^{1-\eta\log(|A|^4/\pinf)} }
\biggr)^{1/(2\theta_1)} \biggr\} ;
\]
\item the bounded Markov order estimator $\hat{k}_{\mathrm{IC}} (X_1^n
| \eta\log n)$, where IC is either NML or KT, satisfies that
\[
\Prob\bigl( \hat{k}_{\mathrm{IC}} \bigl(X_1^n |
\eta\log n\bigr) < k_n \bigr) \le12 \RMe ^{1/\RMe } \exp\biggl( -
\frac{7\alpha_0 (\log|A|)^3 \eta^3}{4\RMe (\alpha+\alpha_0)} \frac{ n^{2\eta\log|A|} }{ \log n } + \bigl(\eta\log|A|\bigr)\log n \biggr),
\]
if $n$ is so large that $k_n\ge k_{\theta}$ and $n\ge\max^2 \{
\sqrt{24}(\log^2 \RMe )(|A|-1)^2, 2C_{\mathrm{KT}} \}$, where
\[
k_n = \min\biggl\{ \biggl\lfloor\frac{\eta}{\theta_2}\log n \biggr
\rfloor, k\ge0\dvtx\bar{\gamma}(k) < \biggl( \frac{ 6 }{ \pinf n^{
1/2-\eta\log
(|A|^4/\pinf)} }
\biggr)^{1/(2\theta_1)} \biggr\} .
\]
\end{longlist}
\end{proposition}
\begin{pf}
First, define $B_n( \eta\log n )$ similarly to (\ref{eqBndef}) in the
proof of Proposition~\ref{thpmlh}. Similarly to
(\ref{eqsplit2})--(\ref{eqhpmlub}), we can write for any
$k_n\le(\eta/\theta_2)\log n$ that
\begin{eqnarray}
\label{eqsplit22} &&\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) < k_n \bigr)
\nonumber
\\
&&\quad\le\Prob\biggl( h_m - h_{\lfloor\eta\log n \rfloor} <
\frac{
3\max
( \sqrt{n},(|A|-1) \operatorname{pen}(n)) }{ n^{1-4\eta\log|A|} } \mbox{ for some } m<k_n \biggr)
\\
&&\qquad{} +\Prob\bigl( \overline{B_n( \eta\log n )} \bigr).\nonumber
\end{eqnarray}
Now, the difference $h_m - h_{\lfloor\eta\log n \rfloor}$ in (\ref
{eqsplit22}) is controlled as follows. For any $m\le k$,
\begin{eqnarray}
\label{eqgsplit22a} &&h_m - h_k
\nonumber
\\
&&\quad=\sum_{a\in A} \sum
_{a_1^k \in A^k} \bigl( -P\bigl(a_1^k a\bigr)
\log P\bigl(a| a_{k-m+1}^k \bigr) + P\bigl(a_1^k
a\bigr) \log P\bigl(a| a_1^k \bigr) \bigr)
\nonumber\\[-8pt]\\[-8pt]
&&\quad=\sum_{a_1^k \in A^k} P\bigl(a_1^k
\bigr) \sum_{a\in A} P\bigl(a| a_1^k
\bigr) \log\frac{P(a| a_1^k )}{P(a|
a_{k-m+1}^k )}
\nonumber
\\
&&\quad=\sum_{a_1^k \in A^k} P\bigl(a_1^k
\bigr) D \bigl( P\bigl( \cdot| a_1^k \bigr) \| P\bigl(
\cdot| a_{k-m+1}^k \bigr) \bigr).\nonumber
\end{eqnarray}
Using Pinsker's inequality~\cite{Cover,CSbook}, (\ref{eqgsplit22a})
can be lower bounded by
\begin{eqnarray}
\label{eqgsplit222} &&\sum_{a_1^k \in A^k} P
\bigl(a_1^k\bigr) \frac12 \biggl( \sum
_{a\in A} \bigl\llvert P\bigl(a| a_1^k
\bigr) - P\bigl(a| a_{k-m+1}^k \bigr) \bigr\rrvert
\biggr)^2
\nonumber
\\
&&\quad\ge\frac12 \bar{\gamma}(m|k)^2 \min_{a_1^k\in A^k} P
\bigl(a_1^k\bigr)
\\
&&\quad\ge\frac12 \bar{\gamma}(m|k)^2 \pinf^{ k}.\nonumber
\end{eqnarray}
Using (\ref{eqgsplit222}) and the assumption $\bar{\gamma
}(k)^{\theta
_1} \le\bar{\gamma}( k | \lceil\theta_2 k \rceil)$ if
$k\ge k_{\theta}$ ($\theta_1\ge1,\theta_2>1$), it follows that
\[
h_{k} - h_{ \lceil\theta_2 k \rceil} \ge\tfrac12 \bar{\gamma}\bigl( k |
\lceil
\theta_2 k \rceil\bigr)^2 \pinf^{ \lceil\theta_2 k \rceil} \ge
\tfrac12 \bar{\gamma}(k)^{2\theta_1} \pinf^{ \theta_2 k +1} \qquad\mbox
{if } k
\ge k_{\theta} .
\]
Hence, we can write
\begin{eqnarray}
\label{eqknpml2} &&\min\biggl\{ k\ge k_{\theta}\dvtx h_{k} -
h_{ \lceil\theta_2 k
\rceil} < \frac{ 3\max( \sqrt{n},(|A|-1) \operatorname{pen}(n))
}{ n^{1-4\eta\log|A|} } \biggr\}
\nonumber
\\
&&\quad\ge\min\biggl\{ k\ge k_{\theta}\dvtx\bar{\gamma}(k) < \biggl(
\frac{
6\max( \sqrt{n},(|A|-1) \operatorname{pen}(n)) }{ n^{1-4\eta\log|A|} } 2^{-(\theta_2 k +1)\log\pinf} \biggr)^{1/(2\theta_1)} \biggr
\}\quad
\\
&&\quad\ge\min\biggl\{ k\ge k_{\theta}\dvtx\bar{\gamma}(k) < \biggl(
\frac{
6\max( \sqrt{n},(|A|-1) \operatorname{pen}(n)) }{ \pinf
n^{1-\eta\log
(|A|^4/\pinf)} } \biggr)^{1/(2\theta_1)} \biggr\}.\nonumber
\end{eqnarray}
Let $k_n$ be as in the claim of the proposition and suppose that $k_n
\ge k_{\theta}$. Then, since $h_k$ is non-increasing, for any $m<k_n\le
(\eta/\theta_2)\log n$
\begin{equation}
\label{eqhpmllb2} h_m - h_{\lfloor\eta\log n \rfloor} \ge h_{k_n-1} -
h_{ \lceil\theta_2 (k_n-1) \rceil} \ge\frac{ 3\max( \sqrt{n},(|A|-1)
\operatorname{pen}(n)) }{
n^{1-4\eta\log|A|} }.
\end{equation}
Applying (\ref{eqhpmllb2}) to (\ref{eqsplit22}), the first term
on the right in (\ref{eqsplit22}) equals zero, therefore
\begin{eqnarray*}
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n\bigr)
< k_n \bigr) &\le&\Prob\bigl( \overline{B_n( \eta\log n )}
\bigr)
\\
&\le&12 \RMe ^{1/\RMe } \exp\biggl( -\frac{7\alpha_0 (\log|A|)^3 \eta
^3}{4\RMe (\alpha+\alpha_0)} \frac{ n^{2\eta\log|A|} }{ \log n } + \bigl(
\eta\log|A|\bigr)\log n \biggr)
\end{eqnarray*}
by Theorem~\ref{thent} with $\varepsilon=4\eta\log|A|$.
In the cases $\mathrm{IC}=\mathrm{NML}$ and $\mathrm{IC}=\mathrm{KT}$, the proofs deviate from
the above in the same way as (ii) and (iii) deviate from (i) in the
proof of Proposition~\ref{thpmlh}. Now, instead of (\ref{eqhnmlub}) we have
\begin{eqnarray*}
&&\frac{ \log\Sigma( n, \lfloor\eta\log n
\rfloor
) }{n- \eta\log n } + \frac{2}{n^{1/2-4\eta\log|A|}}\\
&&\quad< \frac
{3}{n^{1/2-4\eta\log|A|}} \qquad\mbox{if }
n\ge\max\bigl\{ 24\bigl(\log^4 \RMe \bigr) \bigl(|A|-1\bigr)^4,
4C_{\mathrm
{KT}}^2 \bigr\} .
\end{eqnarray*}
\upqed\end{pf}
\begin{proposition}\label{propqminupper}
For any non-null stationary ergodic process, the bounded PML Markov
order estimator $\hat{k}_{\mathrm{PML}} (X_1^n | r_n)$ satisfies that
\begin{eqnarray*}
&&\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
r_n\bigr) > k_n \bigr)
\\
&&\quad\le\exp\biggl( \log(r_n-k_n) +
\frac{(n-m_n)\bar{\gamma
}(m_n)}{\pinf
/\log \RMe } + \bigl(|A|-1\bigr)|A|^{m_n} \operatorname{pen}(n)
\\
&&\hspace*{19pt}\qquad{} + |A|^{k_n+1} \biggl[ C_{\mathrm{KT}} + \frac{|A|-1}{2}
\log\frac
{n}{|A|^{k_n+1}} - \bigl(|A|-1\bigr) \operatorname{pen}(n) \biggr] \biggr)
\end{eqnarray*}
for any $0\le m_n \le k_n \le r_n \le n$.
\end{proposition}
\begin{pf}
For any $m\ge0$,
\begin{equation}
\label{eqpfact1} P \bigl(x_1^n\bigr) = P
\bigl(x_1^m\bigr) \prod_{i=m+1}^n
P\bigl(x_i | x_1^{i-1}\bigr) \le\Biggl( \prod
_{i=m+1}^n P\bigl(x_i |
x_{i-m}^{i-1}\bigr) \Biggr) \prod
_{i=m+1}^n \frac{ P(x_i | x_1^{i-1}) }{ P(x_i | x_{i-m}^{i-1}) }.
\end{equation}
Using $P(x_i | x_1^{i-1}) \le P(x_i | x_{i-m}^{i-1}) + \bar{\gamma}(m)$
and $P(x_i | x_{i-m}^{i-1}) \ge\pinf$, (\ref{eqpfact1}) can be upper
bounded by
\begin{equation}
\label{eqpfact2} \Biggl( \prod_{i=m+1}^n P
\bigl(x_i | x_{i-m}^{i-1}\bigr) \Biggr) \biggl( 1+
\frac{\bar{\gamma}(m)}{\pinf} \biggr)^{n-m} \le\mathrm{ML}_{m}
\bigl(x_1^n\bigr) \biggl( 1+ \frac{\bar{\gamma}(m)}{\pinf}
\biggr)^{n-m}.
\end{equation}
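Here $\mathrm{ML}_{m}(x_1^n)$ is the maximized likelihood of the order-$m$ Markov model, attained by plugging the empirical conditional probabilities into $\prod_{i=m+1}^n \hat{P}(x_i| x_{i-m}^{i-1})$. A minimal computational sketch (base-$2$ logarithms; the function name and indexing convention are our own):

```python
from collections import defaultdict
import math

def log2_ml(x, m):
    """log_2 ML_m(x_1^n): maximized likelihood of the order-m Markov
    model, the plug-in product of empirical conditional probabilities
    over positions i = m+1, ..., n."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(m, len(x)):
        counts[x[i - m:i]][x[i]] += 1  # transition counts per context
    total = 0.0
    for ctx, dist in counts.items():
        n_ctx = sum(dist.values())
        for a, n_a in dist.items():
            total += n_a * math.log(n_a / n_ctx, 2)
    return total

# A perfectly alternating sequence is deterministic at order 1, so
# every empirical conditional probability is 1 and log_2 ML_1 = 0.
print(log2_ml("0101010101", 1))
```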
Now, let $C_{n,k}= \{ \hat{k}_{\mathrm{PML}} (X_1^n | r_n) = k
\}$. By the definition of the PML information criterion, see
Definition~\ref{defPML}, for any $0\le m_n,k \le r_n$
\begin{eqnarray}
\label{eqsh1} \log\mathrm{ML}_{m_n}\bigl(X_1^n
\bigr) &\le&\log\mathrm{ML}_{k}\bigl(X_1^n
\bigr)
- \bigl(|A|-1\bigr)|A|^k \operatorname{pen}(n)\nonumber\\[-8pt]\\[-8pt]
&&{} + \bigl(|A|-1\bigr)|A|^{m_n}
\operatorname{pen}(n) \qquad\mbox{if } X_1^n \in
C_{n,k} .
\nonumber
\end{eqnarray}
By Lemma~\ref{lemktml} in the \hyperref[app]{Appendix},
\begin{equation}
\label{eqktml} \mathrm{ML}_{k}\bigl(X_1^n
\bigr) \le P_{\mathrm{KT}, k} \bigl(X_1^n\bigr) \exp
\biggl( C_{\mathrm{KT}} |A|^k + \frac{|A|-1}{2} |A|^k
\log\frac{n}{|A|^k} \biggr).
\end{equation}
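The Krichevsky--Trofimov probability $P_{\mathrm{KT},k}$ in (\ref{eqktml}) admits a simple sequential computation: each symbol is predicted by an add-$1/2$ estimate given its length-$k$ context. A minimal sketch; the uniform treatment of the first $k$ symbols is our own boundary convention, not necessarily the paper's:

```python
from collections import defaultdict
import math

def log2_kt(x, k, alphabet):
    """log_2 of a KT (add-1/2) probability of x under an order-k model.
    The first k symbols get probability 1/|A| each (a boundary
    convention assumed here, not taken from the paper)."""
    A = len(alphabet)
    counts = defaultdict(lambda: defaultdict(float))
    logp = -k * math.log(A, 2)  # uniform on the initial context
    for i in range(k, len(x)):
        ctx = x[i - k:i]
        n_ctx = sum(counts[ctx].values())
        p = (counts[ctx][x[i]] + 0.5) / (n_ctx + A / 2.0)
        logp += math.log(p, 2)
        counts[ctx][x[i]] += 1.0  # update counts after predicting
    return logp

# Order 0, binary alphabet: the KT probabilities of all length-2
# strings must sum to 1, since KT is a probability assignment.
probs = [2 ** log2_kt(s, 0, "01") for s in ("00", "01", "10", "11")]
print(sum(probs))
```

For instance, $P_{\mathrm{KT},0}(01) = \frac{1/2}{1}\cdot\frac{1/2}{2} = \frac18$ under this convention.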
Combining (\ref{eqpfact2}), (\ref{eqsh1}) and (\ref{eqktml}),
\begin{eqnarray*}
P \bigl(X_1^n\bigr) &\le& P_{\mathrm{KT}, k}
\bigl(X_1^n\bigr) \biggl( 1+ \frac{\bar
{\gamma}(m_n)}{\pinf}
\biggr)^{n-m_n} \\
&&{}\times\exp\biggl( C_{\mathrm{KT}} |A|^k +
\frac{|A|-1}{2} |A|^k \log\frac
{n}{|A|^k}
\\
&&\hspace*{33pt}{}- \bigl(|A|-1\bigr)|A|^k \operatorname{pen}(n) + \bigl(|A|-1\bigr)|A|^{m_n}
\operatorname{pen}(n) \biggr) \qquad\mbox{if } X_1^n
\in C_{n,k} ,
\end{eqnarray*}
which implies
\begin{eqnarray}
\label{eqsh2} P (C_{n,k}) &\le& \biggl( 1+ \frac{\bar{\gamma
}(m_n)}{\pinf}
\biggr)^{n-m_n} \nonumber\\
&&{}\times\exp\biggl( C_{\mathrm{KT}} |A|^k +
\frac{|A|-1}{2} |A|^k \log\frac
{n}{|A|^k}
\nonumber
\\
&&\hspace*{32.5pt}{} - \bigl(|A|-1\bigr)|A|^k \operatorname{pen}(n) + \bigl(|A|-1\bigr)|A|^{m_n}
\operatorname{pen}(n) \biggr)
\\
&\le&\exp\biggl( \frac{(n-m_n)\bar{\gamma}(m_n)}{\pinf/\log \RMe } +
\bigl(|A|-1\bigr)|A|^{m_n}
\operatorname{pen}(n)
\nonumber
\\
&&\hspace*{21pt}{} + |A|^k \biggl[ C_{\mathrm{KT}} + \frac{|A|-1}{2} \log
\frac{n}{|A|^k} - \bigl(|A|-1\bigr) \operatorname{pen}(n) \biggr]
\biggr),\nonumber
\end{eqnarray}
where in the last inequality we used $\log(1+x)\le x\log \RMe $, $x\ge0$.
In the exponent of (\ref{eqsh2}), it may be assumed that $|A|^k$ is
multiplied by a negative number; otherwise, the bound is trivial. Then,
the claim of the proposition follows from (\ref{eqsh2}) as
\[
\Prob\bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
r_n\bigr) > k_n \bigr) \le\sum
_{k=k_n+1}^{r_n} P (C_{n,k})
\le(r_n-k_n) P (C_{n,k_n+1}) .
\]
\upqed\end{pf}
Now, we are ready to prove Theorem~\ref{thqmin}.
\begin{pf*}{Proof of Theorem~\ref{thqmin}}
Letting
\[
G_n = \bigl\{ \bar{\gamma} \bigl( \hat{k}_{\mathrm{PML}}
\bigl(X_1^n | \eta\log n\bigr) \bigr) \le
g_n \bigr\}
\]
and
\[
H_n = \bigl\{ \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) \le k_n \bigr\} ,
\]
write
\begin{eqnarray}
\label{eqfinal} &&\Prob\biggl( \bar{d} \bigl( X_1^n,
\hat{X} \bigl[ \hat{k}_{\mathrm
{PML}} \bigl(X_1^n |
\eta\log n\bigr) \bigr]_1^n \bigr) > \frac{\beta_2}{\pinf^{2}}
g_n + \frac{1}{n^{1/2-\mu_n}} \biggr)
\nonumber
\\
&&\quad\le\Prob\biggl( \biggl\{ \bar{d} \bigl( X_1^n,
\hat{X} \bigl[ \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
\eta\log n\bigr) \bigr]_1^n \bigr) > \frac{\beta_2}{\pinf^{2}}
g_n + \frac{1}{n^{1/2-\mu_n}} \biggr\} \cap G_n \cap
H_n \biggr)
\nonumber
\\
&&\qquad{} + \Prob(\bar{G}_n ) + \Prob(\bar
{H}_n )
\\
&&\quad\le\Prob\biggl( \biggl\{ \bar{d} \bigl( X_1^n,
\hat{X} \bigl[ \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
\eta\log n\bigr) \bigr]_1^n \bigr) > \frac
{\beta_2}{\pinf^{2}}
\bar{\gamma} \bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) \bigr) + \frac{1}{n^{1/2-\mu_n}} \biggr\} \cap H_n
\biggr)
\nonumber
\\
&&\qquad{} + \Prob(\bar{G}_n ) + \Prob(\bar
{H}_n ).\nonumber
\end{eqnarray}
The three terms on the right of (\ref{eqfinal}) are bounded as follows.\vadjust{\goodbreak}
Since the process is non-null with summable continuity rate, Lemma \ref
{lemapprox} in the \hyperref[app]{Appendix} with $\mu=\mu_n$, $\nu\log
n=k_n$ and
$k=\hat{k}_{\mathrm{PML}} (X_1^n | \eta\log n)$ gives
\begin{eqnarray}
\label{eqf1} &&\Prob\biggl( \biggl\{ \bar{d} \bigl( X_1^n,
\hat{X} \bigl[ \hat{k}_{\mathrm{PML}} \bigl(X_1^n |
\eta\log n\bigr) \bigr]_1^n \bigr) > \frac
{\beta_2}{\pinf^{2}}
\bar{\gamma} \bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) \bigr) + \frac{1}{n^{1/2-\mu_n}} \biggr\} \cap H_n
\biggr)
\nonumber
\\
&&\quad\le2 \RMe ^{1/\RMe } |A|^{k_n+2} \exp\biggl\{ -
\frac{ \pinf^2 }{ 16 \RMe |A|^3 (\alpha+ \pinf) (\beta_1+1)^2 } \frac{
(n-k_n) 4^{-k_n |\log\pinf|} }{ (1+k_n) n }
\\
&&\hspace*{90pt}{}\times \biggl[ 4^{\mu_n\log n} - \frac{k_n|\log\pinf|(\beta_1+1)^2}{2}
\biggr] \biggr\} .\nonumber
\end{eqnarray}
By Remark~\ref{remgammaalpha}, the summability of the continuity rate
implies the $\alpha$-summability. Hence, for the non-null process with
summable continuity rate and uniformly convergent restricted continuity
rate with parameters $\theta_1$, $\theta_2$, $k_{\theta}$, Proposition
\ref{propqmin} implies that
\begin{equation}
\label{eqGbound} \Prob(\bar{G}_n ) \le12 \RMe ^{1/\RMe } \exp
\biggl( -\frac{7\alpha_0 (\log|A|)^3 \eta^3}{4\RMe (\alpha+\alpha_0)} \frac
{ n^{\eta2\log|A|} }{ \log n } + \bigl(\eta\log|A|\bigr)\log n \biggr),
\end{equation}
if (\ref{eqnbound}) holds because
\begin{eqnarray*}
&&\Prob\bigl( \bar{\gamma} \bigl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) \bigr) \ge g_n \bigr)
\\
&&\quad= \Prob\biggl( \hat{k}_{\mathrm{PML}} \bigl(X_1^n
| \eta\log n\bigr) \\
&&\qquad\hspace*{14pt}\le\min\biggl\{ \biggl\lfloor\frac{\eta}{\theta
_2}\log n \biggr
\rfloor,
k\ge0\dvtx\bar{\gamma}(k) < \biggl( \frac{ 6\max( \sqrt{n}
,(|A|-1) \operatorname{pen}(n)) }{ \pinf n^{1-\eta\log
(|A|^4/\pinf)} }
\biggr)^{1/(2\theta_1)} \biggr\} \biggr) .
\end{eqnarray*}
Applying Proposition~\ref{propqminupper} with $r_n=\eta\log n$,
\[
m_n = \min\biggl\{ \lfloor\eta\log n \rfloor, k\ge0\dvtx\bar{
\gamma}(k) < \frac{c \operatorname{pen}(n)}{ n } \biggr\}
\]
and $k_n=h_n+m_n$, it follows that
\begin{eqnarray}
\label{eqHbound}
&&\Prob(\bar{H}_n ) \le\exp\biggl( -\bigl(|A|-1\bigr)
|A|^{k_n+1} \nonumber\\
&&\hspace*{63pt}{}\times\operatorname{pen}(n) \biggl[ 1 - \frac
{1}{|A|^{1+h_n}} -
\frac{1}{2\operatorname{pen}(n)} \bigl( \log n - k_n \log|A| \bigr) \biggr]
\\
&&\hspace*{63pt}{}+ \frac{c \operatorname{pen}(n)}{\pinf/\log \RMe } + |A|^{k_n+1} C_{\mathrm
{KT}} + \log(\eta\log
n) \biggr) .
\nonumber
\end{eqnarray}
Finally, applying the bounds (\ref{eqf1}), (\ref{eqGbound}) and
(\ref
{eqHbound}) to the right of (\ref{eqfinal}), the proof is
complete.\vspace*{-4pt}
\end{pf*}
\begin{appendix}\label{app}
\section*{Appendix}
\setcounter{definition}{0}
\begin{lemma}\label{lemsch}
For two probability distributions $P_1$ and $P_2$ on $A^k$,
\[
\bigl\llvert H(P_1) - H(P_2) \bigr\rrvert\le
\frac{1}{\log \RMe } \bigl[ k\log|A| - \log d_{\TV} ( P_1,
P_2 ) \bigr] d_{\TV} ( P_1, P_2 ),
\]
if $d_{\TV} ( P_1, P_2 ) \le1/\RMe $, where
\[
H(P_i) = - \sum_{a_1^k\in A^k} P_i
\bigl(a_1^k\bigr) \log P_i
\bigl(a_1^k\bigr)
\]
is the entropy of $P_i$, $i=1,2$, and
\[
d_{\TV} ( P_1, P_2 ) = \sum
_{a_1^k\in A^k} \bigl\llvert P_1 \bigl(a_1^k
\bigr) - P_2 \bigl(a_1^k\bigr) \bigr\rrvert
\]
is the total variation distance of $P_1$ and $P_2$.
\end{lemma}
\begin{pf}
See Lemma 3.1 of~\cite{Sch}.
\end{pf}
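As a quick numerical sanity check (not part of the original argument), the sketch below evaluates both sides of the lemma for a small perturbation of the uniform distribution on $A^k$ with $|A|=2$, $k=2$. It reads the paper's $\log$ as base $2$, so that $1/\log \RMe = \ln 2$; that reading of the notation is our assumption.

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits (reading the paper's log as log base 2)."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def tv(p, q):
    """Total variation as defined in the lemma: sum of |P1 - P2| (no factor 1/2)."""
    return sum(abs(x - y) for x, y in zip(p, q))

def entropy_tv_bound(p1, p2, k, alphabet_size):
    """Right-hand side of the lemma, valid when d_TV <= 1/e."""
    d = tv(p1, p2)
    assert d <= 1 / math.e
    return (1 / math.log2(math.e)) * (k * math.log2(alphabet_size) - math.log2(d)) * d

# P1 uniform on A^2 with A = {0,1}; P2 a small perturbation with d_TV = 0.02.
p1 = [0.25, 0.25, 0.25, 0.25]
p2 = [0.255, 0.245, 0.255, 0.245]
lhs = abs(entropy_bits(p1) - entropy_bits(p2))
rhs = entropy_tv_bound(p1, p2, k=2, alphabet_size=2)
assert lhs <= rhs
```

Near the uniform distribution the entropy difference is of order $d_{\TV}^2$, while the bound is of order $d_{\TV}\log(1/d_{\TV})$, so the inequality holds with ample slack here.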
\begin{lemma}\label{lemktml}
There exists a constant $C_{\mathrm{KT}}$ depending only on $|A|$, such
that for any $0\le k<n$
\[
\log\mathrm{ML}_{k}\bigl(X_1^n\bigr) -\log
P_{\mathrm{KT}, k} \bigl(X_1^n\bigr) \le
C_{\mathrm{KT}} |A|^k + \frac{|A|-1}{2} |A|^k \log
\frac{n}{|A|^k} .
\]
\end{lemma}
\begin{pf}
The bound, see, for example, (27) in~\cite{Cs},
\begin{eqnarray*}
&&\biggl\llvert\log P_{\mathrm{KT}, k} \bigl(X_1^n
\bigr) + k\log|A| -\log\mathrm{ML}_{k}\bigl(X_1^n
\bigr) + \frac{|A|-1}{2} \mathop{\sum_{a_1^k\in
A^k:}}_{N_{n-1}(a_1^k)\ge1}
\log N_{n-1}\bigl(a_1^k\bigr) \biggr\rrvert
\\
&&\quad\le C_{\mathrm{KT}}' |A|^k,
\end{eqnarray*}
where $C_{\mathrm{KT}}'$ depends only on $|A|$, implies the claim using
\[
\mathop{\sum_{a_1^k\in A^k:}}_{N_{n-1}(a_1^k)\ge1} \log
N_{n-1}\bigl(a_1^k\bigr) \le|A|^k
\log\frac{n}{|A|^k} ,
\]
see Proof of Theorem 6 in~\cite{Cs}.
\end{pf}
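To illustrate the lemma in the simplest case $k=0$, $|A|=2$, the following sketch compares the order-$0$ maximum likelihood with the Krichevsky--Trofimov probability over all binary strings of length $8$. The explicit constant $1$ in the asserted bound $(1/2)\log_2 n + 1$ is the classical Willems--Shtarkov--Tjalkens bound for the binary KT estimator, used here in place of the unspecified $C_{\mathrm{KT}}$.

```python
import itertools
import math

def log2_ml0(x):
    """log2 of the order-0 maximum likelihood of the string x."""
    n = len(x)
    counts = {a: x.count(a) for a in set(x)}
    return sum(c * math.log2(c / n) for c in counts.values())

def log2_kt0(x):
    """log2 of the order-0 Krichevsky-Trofimov probability of x (binary alphabet)."""
    n0 = n1 = 0
    logp = 0.0
    for a in x:
        if a == 0:
            logp += math.log2((n0 + 0.5) / (n0 + n1 + 1))
            n0 += 1
        else:
            logp += math.log2((n1 + 0.5) / (n0 + n1 + 1))
            n1 += 1
    return logp

n = 8
worst = max(log2_ml0(x) - log2_kt0(x) for x in itertools.product([0, 1], repeat=n))
# Classical bound for binary KT: pointwise redundancy at most (1/2) log2 n + 1.
assert worst <= 0.5 * math.log2(n) + 1
```

The worst case is attained by constant strings, where the ML probability is $1$ while the KT probability decays like $n^{-1/2}$.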
\begin{lemma}\label{lemapprox}
Let $X$ be a non-null stationary ergodic process with summable
continuity rate. Then, for any $\mu>0$ and $k\le\nu\log n$, $\nu>0$,
the empirical $k$-order Markov estimator of the process satisfies
\begin{eqnarray*}
&&\mathrm{Pr} \biggl\{ \bar{d} \bigl( X_1^n,
\hat{X}[k]_1^n \bigr) > \beta_2
\pinf^{-2} \bar{\gamma}(k) + \frac{1}{n^{1/2-\mu}} \biggr\}
\\
&&\quad\le2 \RMe ^{1/\RMe } |A|^{2+\nu\log n}
\\
&&\qquad\hspace*{0pt}{}\times \exp\biggl\{ - \frac{ \pinf^2 }{ 16 \RMe |A|^3 (\alpha+
\pinf) (\beta_1+1)^2 } \frac{
(n-\nu\log n) n^{-2\nu|\log\pinf|} }{ (1+\nu\log n) n }
\\
&&\qquad\hspace*{30pt}{}\times \biggl[ n^{2\mu} - \frac{\nu|\log\pinf|(\beta
_1+1)^2\log n}{2} \biggr] \biggr\}
.
\end{eqnarray*}
\end{lemma}
\begin{pf}
See the proof of Theorem 2 and Lemma 3 in~\cite{CsT3}.
\end{pf}
\end{appendix}
\section*{Acknowledgements}
The author would like to thank the referees for their comments, which
helped improve the presentation of the results and generalize the
consistency concept. The research of the author was supported in part
by NSF Grant DMS-09-06929.
\begin{document}
\title{Cluster structures for 2-Calabi-Yau categories and unipotent groups}
\author[Buan]{A. B. Buan}
\address{Institutt for matematiske fag\\
Norges teknisk-naturvitenskapelige universitet\\
N-7491 Trondheim\\
Norway}
\email{aslakb@math.ntnu.no}
\author[Iyama]{O. Iyama}
\address{Graduate School of Mathematics\\
Nagoya University, Chikusa-ku\\
Nagoya 464-8602\\
Japan}
\email{iyama@math.nagoya-u.ac.jp}
\author[Reiten]{I. Reiten}
\address{Institutt for matematiske fag\\
Norges teknisk-naturvitenskapelige universitet\\
N-7491 Trondheim\\
Norway}
\email{idunr@math.ntnu.no}
\author[Scott]{J. Scott} \thanks{All authors were supported by a STORFORSK-grant 167130 from the Norwegian Research Council}
\address{Dept. of Pure Mathematics \\
University of Leeds \\
Leeds LS2 9JT \\
United Kingdom}
\email{jscott@maths.leeds.ac.uk}
\maketitle
\begin{abstract}
We investigate cluster tilting objects (and subcategories) in triangulated 2-Calabi-Yau
categories and related categories. In particular we construct a new class of such categories
related to preprojective algebras of non-Dynkin quivers
associated with elements in the Coxeter group. This class of 2-Calabi-Yau categories
contains the cluster categories and the stable categories of preprojective algebras of Dynkin
graphs as special cases. For these 2-Calabi-Yau categories we construct cluster tilting objects
associated with each reduced expression. The associated quiver is described in terms of
the reduced expression.
Motivated by the theory of cluster
algebras, we formulate the notions of (weak) cluster structure and substructure, and give
several illustrations of these concepts. We give applications to cluster algebras and
subcluster algebras related to unipotent groups, both in the Dynkin and non-Dynkin case.
\end{abstract}
\tableofcontents
\section*{Introduction}
The theory of cluster algebras, initiated by Fomin-Zelevinsky in \cite{fz1}, and further
developed in a series of papers, including \cite{fz2,fz3,fz4}, has turned out to have
interesting connections with many parts of algebra and other branches of mathematics. One of the
links is with the representation theory of algebras, where a first connection was discovered in \cite{mrz}.
A philosophy has been to model
the main ingredients in the definition of a cluster algebra in a categorical/module theoretical setting.
The cluster categories associated with finite dimensional hereditary algebras were introduced for this
purpose in \cite{bmrrt}, and shown to be triangulated in \cite{k1} (see also \cite{ccs} for the $A_n$ case),
and the module categories $\mod\Lambda$ for $\la$ a preprojective algebra of a Dynkin quiver have been used for a
similar purpose \cite{gls1}. This development has both inspired new directions of investigations on the categorical
side, as well as interesting feedback on the theory of cluster algebras, see for example
\cite{abs,bmr1,bmr2,bm,bmrt,cc,ck1,ck2,gls1,gls3,hub,i1,i2,ir,iy,it,kr1,kr2,ringel,t} for material related to this paper.
Both the cluster categories and the stable categories $\ul{\mod}\la$ of preprojective algebras
are triangulated Calabi-Yau categories of dimension 2
(2-CY for short). They both have what is called cluster tilting objects/subcategories \cite{bmrrt,kr1,iy}
(called maximal 1-orthogonal in \cite{i1}), which are important since
they are the analogs of
clusters. The investigation of cluster tilting objects/subcategories in 2-CY categories and related
categories is interesting both from the point
of view of cluster algebras and in itself. Hence it is of interest to develop methods for constructing
2-CY categories together
with the special objects/subcategories, and this is the main purpose of
the first two chapters.
The properties of cluster tilting objects in ($\Hom$-finite) 2-CY
categories which have been important for applications to cluster
algebras are (a) the unique exchange property for indecomposable
summands of cluster tilting objects, (b) the existence of associated
exchange triangles, (c) having no loops or 2-cycles (in the quiver of the
endomorphism algebra of a cluster tilting object) and (d) when passing
from the endomorphism algebra of a cluster tilting object $T$ to
the endomorphism algebra of another one $T^*$ via an exchange, the change in quivers is given by
Fomin-Zelevinsky mutation. The properties (a) and (b) are known to hold for any
2-CY triangulated category \cite{iy}, proved for cluster categories
in \cite{bmrrt} and for stable categories of preprojective algebras of
Dynkin type in \cite{gls1}. The property (c) does not always hold (see
\cite{bikr}), and hence it is of interest to establish criteria for
this to be the case, which is one of the topics in this paper. We then
show for any 2-CY categories that if (c) holds, then also (d) follows, as
previously shown by Palu for algebraic triangulated categories \cite{p}. We
construct new 2-CY categories with cluster tilting objects from old
ones via some subfactor construction, extending results from \cite{iy},
with a main focus on how condition (c) behaves under this
construction. Associated with this we introduce the notions of
cluster structures and cluster substructures.
Important examples, investigated in \cite{gls1}, are the categories
$\mod\la$ of finitely generated modules over the preprojective algebra
$\la$ of a Dynkin quiver. We deal with appropriate subcategories of
$\mod\la$. The main focus
in this paper is on the more general case of subcategories of the
category $\fl\la$ of finite length modules over the completion of the
preprojective algebra of a non-Dynkin quiver with no loops. Our main
tool is to extend the tilting theory developed for $\la$ in the
noetherian case in \cite{ir}.
This turns out to give a large class of 2-CY
categories associated with elements in the corresponding Coxeter
groups. For these categories we construct cluster tilting objects associated
with each reduced expression, and we describe the associated quiver
directly in
terms of the reduced expression. We prove that this class of 2-CY categories
contains all the cluster categories of finite dimensional
hereditary algebras and the stable categories $\ul{\mod}\la$ for a
preprojective algebra $\Lambda$ of Dynkin type. This also allows us to
get more information on the latter case.
We illustrate with applications to constructing subcluster algebras of cluster algebras, a notion
which we define here,
and which is already implicit in the literature.
For this we define, inspired by maps from \cite{cc,ck1} and \cite{gls1},
(strong) cluster maps. These maps have the property that we can pass from cluster structures
and substructures to cluster algebras and subcluster algebras.
Associated with substructures for preprojective
algebras of Dynkin type,
we discuss examples from $\SO_8(\mathbb{C})$-isotropic Grassmannians, for the $G_{2,5}$ Schubert variety
and for a unipotent
cell of the unipotent subgroup of $\SL_4(\mathbb{C})$. For preprojective algebras of extended
Dynkin type we use our results to
investigate cluster structures for affine unipotent cells, in some special cases for the loop
group $\SL_2(\mathcal{L})$.
For a (non-Dynkin) quiver $Q$ with associated Coxeter group $W$ we can
for each $w\in W$ consider the coordinate ring $\mathbb{C}[U^w]$ of
the unipotent cell associated with $w$ in the corresponding Kac-Moody
group. We conjecture that this ring has a cluster algebra structure,
and that it is modelled by our (stably) 2-CY category associated
with the same $w$. As support for this we prove the conjecture in
the case $\widehat{A_1}$ for words $w$ of length at most 4.
The first chapter is devoted to introducing and investigating the notions of
cluster structures and substructures,
and giving sufficient conditions for such structures to occur. Also the three concrete examples
mentioned above
are investigated, and used to illustrate the
connection with cluster algebras and subcluster algebras in Section \ref{c3_sec2}. In Chapter \ref{chap2}
we use tilting theory to construct categories whose stable categories are 2-CY,
along with natural cluster tilting objects in these categories. In
Section \ref{c3_sec2} we illustrate with examples for
preprojective algebras of Dynkin type,
and in Section \ref{c3_sec3} we illustrate with examples from the extended Dynkin case.
Part of this work was done independently by Geiss-Leclerc-Schr\"oer in \cite{gls3}.
For Chapter \ref{chap1}, this concerns the
development of 2-CY categories (in a different language) in the case of
subcategories of the form $\Sub P$ (or $\Fac P$) for $P$
projective, over a preprojective algebra of Dynkin type, with a somewhat
different approach. Concerning Section \ref{c3_sec2},
examples arising from $\Sub P$ were done independently in \cite{gls3}. For
this connection, the last author was inspired
by a lecture of Leclerc in 2005, where cluster algebras associated with $\Sub P$
in the $A_n$-case were discussed. There is also recent work of Geiss-Leclerc-Schr\"oer \cite{gls5}
related to Chapter \ref{chap2}, where completely different methods are used.
For general background on representation theory of algebras, we refer
to \cite{ars, ass, rin, h1, ahk}, and for Lie theory we refer to \cite{bl}.
Our modules are usually left modules and composition of maps $fg$ means
first $f$, then $g$.
The second author would like to thank William Crawley-Boevey and Christof Geiss for
answering a question on references about the 2-CY property of preprojective algebras.
He would also like to thank Bernard Leclerc for valuable comments.
\section{2-CY categories and substructures}\label{chap1}
The cluster algebras of Fomin and Zelevinsky have motivated work on
trying to model the essential ingredients in the definition of a
cluster algebra in a categorical/module-theoretical way. In particular,
this led to the theory of cluster categories and the investigation of new aspects
of the module theory of preprojective algebras of Dynkin type. In Section \ref{c1_sec1}
we give some of the main categorical requirements needed for the modelling, for the cases
with and without coefficients, leading to the notions of weak cluster structure
and cluster structure. The main examples do, like the above mentioned
examples, have 2-Calabi-Yau type properties.
We introduce substructures of (weak) cluster structures in Section \ref{c1_sec2}.
For this it is natural to deal with (weak) cluster structures with what will be called
coefficients, at least for the substructures. Of particular interest
for our applications to cluster algebras is the case of completions of
preprojective algebras $\Lambda$ of a finite connected quiver with
no loops over an algebraically closed field $k$, where the
interesting larger category is the stable category
$\underline{\fl}\Lambda$ of the finite length $\Lambda$-modules. For
Dynkin quivers this is the stable category $\underline{\mod}\Lambda$
of the finitely generated $\Lambda$-modules, and in the non-Dynkin
case $\underline{\fl}\Lambda =\fl\Lambda$. The first case is discussed in Section \ref{c1_sec3},
while Chapter \ref{chap2} is devoted to the second case.
\subsection{Cluster structures}\label{c1_sec1}
In this section we introduce the concepts of weak cluster structure and cluster
structure for extension closed subcategories of
triangulated categories or for exact categories. We illustrate with 2-CY categories
and closely related categories, and the main objects we investigate are the cluster
tilting ones. These cases are particularly nice when the quivers of the cluster
tilting subcategories
have no loops or 2-cycles. Also the closely related maximal rigid objects (see \cite{gls1}) provide interesting examples.
We start with introducing the notions of weak cluster structure and cluster structure.
Throughout this chapter all categories are Krull-Schmidt categories
over an algebraically closed field $k$, that is, each object is isomorphic to a finite
direct sum of indecomposable objects with local endomorphism ring. The categories we consider
are either exact (for example abelian) categories or extension closed subcategories of triangulated
categories. Note that an extension closed subcategory of an exact category is again exact.
We refer to \cite{k2,k3} for the definition and basic properties
of exact categories, which behave very much like abelian categories, also with respect to
derived categories and Ext-functors.
We often identify a set of indecomposable objects with the additive
subcategory consisting of all summands of direct sums of these
indecomposable objects. We also identify an object with the set of
indecomposable objects appearing in a direct sum decomposition,
and with the subcategory obtained in the above way.
Assume that we have a collection of sets $\underline{x}$
(which may be infinite) of non-isomorphic indecomposable objects,
called \emph{clusters}. The indecomposable objects occurring in clusters are called
\emph{cluster variables}. Assume also that there is a subset $\ul{p}$
(which may be infinite) of indecomposable objects which are not cluster
variables, called {\em coefficients}. We denote by $T$
the union of the indecomposable objects in $\ul{x}$ and $\ul{p}$, sometimes viewed as a category with these objects,
and call it an \emph{extended cluster}.
We say that the clusters, together with the prescribed set of coefficients $\ul{p}$, give a
\emph{weak cluster structure} on $\C$ if the following hold:
\begin{itemize}
\item[(a)]{For each extended cluster $T$ and each cluster variable $M$ which is a summand in $T$, there is a unique
indecomposable object $M^{\ast}\not\simeq
M$ such that we get a new extended cluster $T^{\ast}$ by replacing $M$ by $M^{\ast}$.
We denote this operation, called {\em exchange}, by $\mu_{M}(T)=T^{\ast}$,
and we call $(M, M^{\ast})$ an \emph{exchange pair}.}
\item[(b)]{There are triangles/short exact sequences
$M^{\ast}\xrightarrow{f}B\xrightarrow{g}M$ and
$M\xrightarrow{s}B^{'}\xrightarrow{t}M^{\ast}$, where the maps $g$ and $t$ are
minimal right $\add(T\setminus\{M\})$-approximations and $f$ and $s$
are minimal left $\add(T\setminus\{M\})$-approximations.
These are called {\em exchange triangles/sequences}.}
\end{itemize}
Denote by $Q_{T}$ the quiver of
$T$, where the vertices correspond to the indecomposable objects in $T$ and the number of arrows $T_i \to T_j$
between two indecomposable objects $T_i$ and $T_j$ is
given by the dimension of the space of irreducible maps $\rad(T_i,T_j)/\rad^2(T_i,T_j)$.
Here $\rad(\ ,\ )$ denotes the radical in $\add T$, where the objects are finite direct sums
of objects in $T$. For an algebra $\la$ (where $\la$ has a unique decomposition as a direct sum of indecomposable objects),
the quiver of $\la$ is then the opposite of the quiver of $\add \la$.
We say that a quiver $Q= (Q_0,Q_1)$ is an {\it extended quiver} with respect to a
subset of vertices $Q_0'$ if there are no
arrows between two vertices in $Q_0\backslash Q_0'$. We regard the quiver $Q_T$ of
an extended cluster as an extended quiver by neglecting all arrows
between two vertices corresponding to coefficients.
We say that $\C$, with a fixed set of clusters and coefficients, has
{\it no loops} (respectively, {\em no 2-cycles}) if in the extended
quiver of each extended cluster there are no loops (respectively, no 2-cycles).
When $\underline{x}$ is finite, this extended quiver is the opposite quiver of the factor algebra $\underline{\End}(T)$ of $\End(T)$
by the ideal of maps factoring through direct sums of objects from $\underline{p}$.
We say that we have a \emph{cluster structure} if the following additional conditions hold:
\begin{itemize}
\item[(c)]There are no loops or 2-cycles. (In other words, for a cluster variable $M$, any
non-isomorphism $u\colon M\to M$ factors through $g\colon B\to M$ and through $s\colon M\to B'$,
and any non-isomorphism $v\colon M^{\ast}\to M^{\ast}$ factors through $f\colon M^{\ast}\to B$ and
through $t\colon B'\to M^{\ast}$, and $B$ and $B'$ have no common indecomposable summand.)
\item[(d)] For an extended cluster $T$, passing from $Q_{T}$ to $Q_{T^{\ast}}$ is given by Fomin-Zelevinsky mutation
at the vertex of $Q_{T}$ given by the cluster variable $M$.
\end{itemize}
Note that (c) is needed for (d) to make sense, but it is still convenient to write two separate statements.
We recall that for an extended quiver $Q$ without loops or 2-cycles
and a vertex $i$ in $Q_0'$, the Fomin-Zelevinsky
mutation $\mu_i(Q)$ of $Q$ at $i$ is the quiver obtained from $Q$
making the following changes \cite{fz1}:
\begin{itemize}
\item[-]{Reverse all the arrows starting or ending at $i$.}
\item[-]{Let $s\neq i$ and $t\neq i$ be vertices in $Q_0$ such that at
least one vertex belongs to $Q_0'$.
If we have $n>0$ arrows from $t$ to $i$ and $m>0$ arrows from $i$
to $s$ in $Q$ and $r$ arrows from $s$ to $t$ in $Q$ (interpreted as $-r$ arrows from $t$ to $s$ if $r<0$),
then we have $nm-r$
arrows from $t$ to $s$ in the new quiver $\mu_i(Q)$ (interpreted as $r-nm$
arrows from $s$ to $t$ if $nm-r<0$).}
\end{itemize}
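In matrix form, with $b_{ij}$ recording the number of arrows $i\to j$ minus the number of arrows $j\to i$, the rule above is the usual matrix mutation of Fomin-Zelevinsky; the following is a minimal sketch (the vertex indices and function name are ours), whose last line checks the arrow count $nm-r$ on a linear quiver:

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B at index k.
    B[i][j] > 0 encodes B[i][j] arrows i -> j; entries in row/column k flip sign,
    and the remaining entries pick up the correction (|b_ik| b_kj + b_ik |b_kj|)/2,
    which is the matrix form of the arrow count n*m - r described in the text."""
    n = len(B)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                # (|a|b + a|b|)/2 is always an even integer divided by 2.
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
    return Bp

# Quiver 1 -> 2 -> 3 (indices 0, 1, 2); mutate at the middle vertex.
B = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
B1 = mutate(B, 1)
assert B1 == [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # arrows at k reversed, new arrow 1 -> 3
assert mutate(B1, 1) == B                          # mutation at k is an involution
```

The second assertion reflects the exchange property: mutating twice at the same vertex recovers the original quiver.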
The main known examples of triangulated $k$-categories with finite
dimensional homomorphism spaces (Hom-finite for short) which have a weak cluster structure, and usually a
cluster structure, are 2-CY categories. These are triangulated $k$-categories with functorial
isomorphisms $D\Ext^1(A,B)\simeq \Ext^1(B,A)$ for all $A,B$ in $\C$, where $D=\Hom_k(\ ,k)$.
A $\Hom$-finite triangulated category is 2-CY if and only if it
has almost split triangles with translation $\tau$ and $\tau \colon \C
\to \C$ is a functor isomorphic to the shift functor $[1]$ (see
\cite{rv}).
We have the following examples of 2-CY categories.
\noindent
(1) The cluster category $\C_{H}$ associated with a finite
dimensional hereditary $k$-algebra $H$ is by definition the orbit
category ${\bf D}^{\bo}(H)/\tau^{-1}[1]$, where ${\bf D}^{\bo}(H)$ is
the bounded derived category of finitely generated $H$-modules, and
$\tau$ is the AR-translation of ${\bf D}^{\bo}(H)$ \cite{bmrrt}. It is
a $\Hom$-finite triangulated category \cite{k2}, and it is 2-CY since $\tau=[1]$.
\bigskip
\noindent
(2) The stable category of maximal Cohen-Macaulay modules $\underline{\CM}(R)$ over a 3-dimensional
complete local commutative noetherian Gorenstein isolated singularity
$R$ containing the residue field $k$ \cite{a} (see \cite{yo}).
\bigskip
\noindent
(3) The preprojective algebra $\Lambda$ associated to a finite connected quiver $Q$
without loops is defined as follows:
Let $\widetilde{Q}$ be the quiver constructed from $Q$ by adding an arrow $\alpha^{\ast}\colon i\to j$
for each arrow $\alpha\colon j\to i$ in $Q$.
Then $\la=k\widetilde{Q}/I$, where $I$
is the ideal generated by the sum of commutators
$\sum_{\beta\in Q_1}[\beta,\beta^{\ast}]$.
Note that $\Lambda$ is uniquely determined up to isomorphism by the
underlying graph of $Q$.
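For instance, for $Q$ of type $A_2$ the construction reads as follows (a standard example; the signs in the vertex components of the commutator depend on the composition convention fixed above):
\[
\widetilde{Q}\colon\quad
1 \; \overset{\alpha}{\underset{\alpha^{\ast}}{\rightleftarrows}} \; 2,
\qquad
\Lambda \;=\; k\widetilde{Q}/\bigl(\alpha\alpha^{\ast},\ \alpha^{\ast}\alpha\bigr),
\]
since the single commutator relation $[\alpha,\alpha^{\ast}]=\alpha\alpha^{\ast}-\alpha^{\ast}\alpha$
decomposes into its components at the two vertices. A $k$-basis of $\Lambda$ is then
$\{e_1, e_2, \alpha, \alpha^{\ast}\}$, so $\dim_k\Lambda=4$.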
When $\Lambda$ is the preprojective algebra of a Dynkin quiver over $k$,
the stable category $\ul{\mod}\la$ is 2-CY (see \cite[3.1, 1.2]{ar2}, \cite{cb}, \cite[8.5]{k2}).
When $\Lambda$ is the completion of the preprojective algebra of a
finite connected quiver without loops which is not Dynkin,
the bounded derived category ${\bf D}^{\bo}(\fl\Lambda)$
of the category $\fl\la$ of the modules of finite length is 2-CY
(see \cite{b,cb,bbk}, \cite[Section 8]{gls2}).
\bigskip
We shall also use the terminology 2-CY in more general situations.
Note that from now on we will
usually write just ``category'' instead of ``$k$-category''.
We say that an exact Hom-finite category $\C$ is {\em derived 2-CY} if the triangulated category
${\bf D}^{\bo}(\C)$ is 2-CY, i.e. if
$D\Ext^i(A,B)\simeq\Ext^{2-i}(B,A)$ for all $A$, $B$ in ${\bf D}^{\bo}(\C)$ and all $i$.
Note that
when $\C$ is derived 2-CY, then $\C$ has no non-zero projective or injective objects.
The category $\fl\Lambda$ where $\la$ is the completion of the preprojective algebra of a non-Dynkin
connected quiver without loops is an important example of a derived 2-CY category.
We say that an exact category $\C$ is {\em stably 2-CY} if
it is Frobenius, that is, $\C$ has enough projectives and injectives, which coincide,
and the stable category $\underline{\C}$, which is triangulated \cite{h1}, is Hom-finite 2-CY.
Recall that $\C$ is said to have enough projectives if for each $X$ in $\C$ there is an
exact sequence $0 \to Y \to P \to X \to 0$ in $\C$ with $P$ projective. Having enough injectives
is defined in a dual way.
We have the following characterization of stably 2-CY categories.
\begin{proposition}\label{propI1.1}
Let $\C$ be an exact Frobenius category.
Then $\C$ is stably 2-CY if and only if $\Ext^1_{\C}(A,B)$
is finite dimensional and we have functorial isomorphisms
$D\Ext^1_{\C}(A,B)\simeq\Ext^1_{\C}(B,A)$ for all $A$, $B$ in $\C$.
\end{proposition}
\begin{proof}
Let $A$ and $B$ be in $\C$, and let $0\to A\to P\to \Omega^{-1}A\to 0$
be an exact sequence in $\C$ where $P$ is projective injective. Apply
$\Hom_{\C}(B,-)$ to get the exact sequence $0\to
\Hom_{\C}(B,A)\to \Hom_{\C}(B,P)\to \Hom_{\C}(B,\Omega^{-1}A)\to
\Ext^1_{\C}(B,A)\to 0$. Then we get $\Ext^1_{\C}(B,A)\simeq
\Hom_{\underline{\C}}(B,\Omega^{-1}A)= \Ext^1_{\underline{\C}}(B,A)$.
Assume that $\C$ is stably 2-CY, that is, the stable category $\ul{\C}$
is a Hom-finite triangulated 2-CY category. Then $\Ext^1_{\ul{\C}}(B,A)$
is finite dimensional for all $A$, $B$ in $\C$, and hence $\Ext^1_{\C}(B,A)$
is finite dimensional, and we have functorial isomorphisms $D\Ext^1_{\C}(A,B)\simeq\Ext^1_ {\C}(B,A)$.
The converse also follows directly.
\end{proof}
Examples \sloppy of exact stably 2-CY categories are categories of maximal Cohen-Macaulay modules $\CM(R)$ for a
3-dimensional complete local commutative isolated Gorenstein singularity $R$ (containing the residue field $k$)
and $\mod\Lambda$
for $\Lambda$ being the preprojective algebra of a Dynkin quiver. We shall see several further examples later.
We are especially interested in pairs of 2-CY categories
$(\C,\underline{\C})$ where $\C$ is an exact stably 2-CY category. The only difference in
indecomposable objects between $\C$ and $\underline{\C}$ is the indecomposable projective objects in
$\C$. Also note that given an exact sequence $0\to A\to B\to C\to 0$ in $\C$,
there is an associated triangle $A\to B\to C\to A[1]$ in
$\underline{\C}$. Conversely, given a triangle $A\to B\stackrel{\ul{g}}{\to} C\to A[1]$
in $\underline{\C}$, we lift $\ul{g}\in\Hom_{\ul{\C}}(B,C)$ to $g\in\Hom_{\C}(B,C)$, and obtain an
exact sequence $0\to A\to B\oplus P\to C\to 0$ in $\C$, where $P$ is projective. We then have the
following useful fact.
\begin{proposition}\label{prop1.1}
Let $\C$ be an exact stably 2-CY category with a set of clusters
$\ul{x}$ and a set of coefficients $\ul{p}$ which are the
indecomposable projective objects. For the stable 2-CY category
$\ul{\C}$ consider the same set of clusters $\ul{x}$ and with no
coefficients. Then we have the following.
\begin{itemize}
\item[(a)]The $(\ul{x},\ul{p})$ give a weak cluster structure on $\C$ if
and only if the $(\ul{x},\emptyset)$ give a weak cluster structure on
$\ul{\C}$.
\item[(b)]$\C$ has no loops if and only if $\ul{\C}$ has no loops.
\item[(c)]If $\C$ has a cluster structure, then $\ul{\C}$ has a cluster structure.
\end{itemize}
\end{proposition}
\begin{proof}
(a) Assume we have a collection of extended clusters for $\C$.
Assume also that $\mu_{M}(T)$ is defined for each indecomposable non-projective object $M$
in the cluster $T$, and that we have the required exchange exact sequences. Then the
induced clusters
for $\underline{\C}$ determine a weak cluster structure for $\underline{\C}$.
The converse also follows directly.
\noindent
(b) For an extended cluster $T$ in $\C$, the quiver $\bar{Q}_{T}$ is obtained from the quiver $Q_{T}$
by removing the vertices corresponding to the indecomposable projective objects. The claim is then obvious.
\noindent
(c) It is clear that if there is no 2-cycle for $\C$, then there is no
2-cycle for $\ul{\C}$, and the claim follows from this.
\end{proof}
In the examples of 2-CY categories with cluster structure
which have been investigated, the extended clusters have been the
subcategories $T$ where $\Ext^1(M,M)=0$ for all $M\in T$,
and whenever $X\in\C$ satisfies $\Ext^1(M,X)=0$ for all $M\in T$,
then $X\in T$. Such $T$ has been called {\it cluster tilting
subcategory} in \cite{bmrrt,kr1} if it is
in addition functorially finite in the sense of \cite{as}, which is
automatically true when $T$ is finite. Such $T$
has been called maximal 1-orthogonal subcategory in \cite{i1,i2}, and
$\Ext$-configuration in \cite{bmrrt}, without the assumption of
functorial finiteness.
We have the following nice connections between $\C$ and
$\underline{\C}$ for an exact stably 2-CY category $\C$ when using the
cluster tilting subcategories.
\begin{lemma}\label{lem1.2}
Let $\C$ be an exact stably 2-CY category, and let $T$ be a
subcategory of $\C$ containing all indecomposable projective objects.
Then $T$ is a cluster tilting subcategory in $\C$ if and only if
it is the same in
$\underline{\C}$.
\end{lemma}
\begin{proof}
We have $\Ext^1_{\C}(C,A) \simeq \Ext^1_{\ul{\C}}(C,A)$ from the proof of
Proposition \ref{prop1.1}. It is easy to see that $T$ is functorially finite in $\C$ if and only
if it is functorially finite in $\ul{\C}$ \cite{as}. Hence $T$ is cluster tilting in $\C$ if and
only if it is cluster tilting in $\underline{\C}$.
\end{proof}
\begin{lemma}\label{lem1.3}
Let $\C$ be an exact stably 2-CY category and $T$ a cluster tilting object in $\C$,
with an indecomposable non-projective summand $M$. Then there is no loop at $M$ for
$\End_{\C}(T)$ if and only if there is no loop at $M$ for
$\End_{\underline{\C}}(T)$. If $\C$ has no 2-cycles, there are none for $\ul{\C}$.
\end{lemma}
\begin{proof}
This is a direct consequence of Proposition \ref{prop1.1} and the definitions.
\end{proof}
Note that an exact stably 2-CY category $\C$, with the cluster tilting
subcategories, gives a situation where we have a natural set of coefficients,
namely the indecomposable projective objects which clearly belong to all cluster tilting
subcategories, whereas $\underline{\C}$ with the cluster tilting subcategories
gives a case where it is natural to choose no coefficients. We have
the following useful observation, which follows from Proposition
\ref{prop1.1}.
\begin{proposition}\label{prop1.4}
Let $\C$ be a $\Hom$-finite exact stably 2-CY category. Then the cluster tilting subcategories
in $\C$, with the indecomposable projectives as coefficients, determine a weak cluster structure
on $\C$ if and only if the cluster tilting subcategories in $\underline{\C}$
determine a weak cluster structure on $\underline{\C}$.
\end{proposition}
When $\C$ is Hom-finite triangulated 2-CY, then $\C$ has a weak cluster structure,
with the extended clusters being the cluster tilting subcategories and the indecomposable
projectives being the coefficients \cite{iy}. Properties
(c) and (d) hold for cluster categories and the stable category $\ul{\mod}\la$ of a
preprojective algebra of Dynkin type \cite{bmrrt,bmr2,gls1}, but (c) does not hold in
general \cite{bikr}. However, we show that when we have some cluster
tilting object in the 2-CY category $\C$, then (d)
holds under the assumption that (c) holds. This was first proved in \cite{p} when $\C$ is {\em algebraic}, that is,
by definition, the
stable category of a Frobenius category, as a special case of a more general result. Our proof
is inspired by \cite[7.1]{ir}.
\begin{theorem}\label{teoI1.6}
Let $\C$ be a $\Hom$-finite triangulated (or exact stably) 2-CY
category with some cluster tilting subcategory. If $\C$ has no loops or
2-cycles, then the cluster tilting subcategories determine a cluster structure for $\C$.
\end{theorem}
\begin{proof}
We give a proof for the triangulated 2-CY case. Using exact sequences instead of triangles, a similar argument works for
the stably 2-CY case.
Note that in the stably 2-CY case we do not have to consider arrows
between projective vertices.
Let $T=\oplus_{i=1}^nT_i$ be a cluster tilting subcategory in $\C$. Fix
a vertex $k\in\{1,\cdots ,n\}$,
and let $T^{\ast}=\oplus_{i\ne k}T_i\oplus T_k^{\ast}=\mu_k(T)$.
We have exchange triangles $T_k^{\ast}\to B_k\to T_k$ and $T_k\to B_k'\to T_k^{\ast}$, showing that
when passing from $\End(T)$ to $\End(T^{\ast})$ we reverse all arrows in the quiver of $\End(T)$
starting or ending at $k$.
We need to consider the situation where we have arrows $j \to k \to i$. Since there
are no 2-cycles, there is no arrow $i \to k$. Consider the exchange triangles
$T_i^{\ast} \to B_i \to T_i$ and $T_i \to B_i' \to T_i^{\ast}$. Then $T_k$ is not a
direct summand of $B_i'$, and we write $B_i = D_i \oplus T_k^m$ for some $m>0$, where
$T_k$ is not a direct summand of $D_i$.
Starting with the maps in the upper square and the triangles they
induce, we get by the octahedral axiom the diagram below, where the
third row is a triangle.
$$\xymatrix{
&(T_k^{\ast})^m[1] \ar@{=}[r]& (T_k^{\ast})^m[1]& \\
T_i[-1] \ar[r]\ar@{=}[d] & T_i^{\ast}\ar[r]\ar[u] & D_i\oplus T_k^m \ar[u]\ar[r]& T_i\ar@{=}[d]\\
T_i[-1]\ar[r] & X \ar[r]\ar[u] &D_i\oplus B_k^m \ar[r]\ar[u]& T_i\\
& (T_k^{\ast})^m \ar@{=}[r] \ar[u] & (T_k^{\ast})^m \ar[u] &
}$$
Using again the octahedral axiom, we get the following commutative diagram of triangles,
where the second row is an exchange triangle and the third column is the second column of the
previous diagram.
$$\xymatrix{
& (T_k^{\ast})^m[1] \ar@{=}[r] & (T_k^{\ast})^m[1] & \\
T_i\ar@{=}[d]\ar[r] & B_i' \ar[r]\ar[u] & T_i^{\ast} \ar[r]\ar[u]& T_i[1]\ar@{=}[d]\\
T_i\ar[r] & Y \ar[r]\ar[u]& X\ar[r]\ar[u]& T_i[1]\\
& (T_k^{\ast})^m\ar@{=}[r] \ar[u]& (T_k^{\ast})^m \ar[u]&
}$$
Since $T_k$ is not in $\add B_i'$, the object $B_i'$ lies in $\add T^{\ast}$, hence $(B_i', (T_k^{\ast})^m[1])=0$ since $T^{\ast}$ is rigid, and therefore $Y=B_i'\oplus(T_k^{\ast})^m$.
Consider the triangle $X\to D_i\oplus B_k^{m}\xto{a}T_i \to X[1]$.
Let $\overline{T}^{\ast} = (\oplus_{t \neq i,k} T_t) \oplus T_k^{\ast}$.
We observe that $D_i \oplus B_k^m$ is in $\add \overline{T}^{\ast}$.
Indeed, $D_i \oplus B_k^m$ is in $\add T$, since both $B_i = D_i \oplus T_k^m$ and $B_k$ are.
Since there is no loop at $i$, the object $T_i$ is not a direct summand of $D_i$,
and $T_i$ is not a direct summand of $B_k$ since there is no arrow from $i$ to $k$.
Further, $T_k$ is not a direct summand of $D_i$ by the choice of $D_i$,
and $T_k$ is not a direct summand of $B_k$ since there is no loop at $k$. Hence
we see that $D_i \oplus B_k^m$ is in $\add \overline{T}^{\ast}$.
We next want to show that $a$ is a right $\add \overline{T}^{\ast}$-approximation.
It follows from the first commutative diagram that any map $g \colon T_t \to T_i$,
where $T_t$ is an indecomposable direct summand of $ \overline{T}^{\ast}$
not isomorphic to $T_k^{\ast}$, factors through $a$. Let then $f \colon T_k^{\ast} \to T_i$
be a map, and $h \colon T_k^{\ast} \to B_k$ the minimal left $\add \overline{T}$-approximation, where
$\overline{T} = \oplus_{t \neq k} T_t$. Then
there is some $s \colon B_k \to T_i$ such that $hs=f$. Then $s$ factors through $a$ by
the above, since $B_k$ is in $\add \overline{T}^{\ast}$ (using that $T_i$ is not a direct summand of $B_k$), and
$T_k^{\ast}$ is not a direct summand of $B_k$. It follows that $a$ is
a right $\add \overline{T}^{\ast}$-approximation.
Consider now the triangle $T_i \to B_i' \oplus (T_k^{\ast})^m \xto{b} X \to T_i[1]$.
Then $ B_i' \oplus (T_k^{\ast})^m$ is clearly in $\add \overline{T}^{\ast}$, since
$T_k$ is not a direct summand of $B_i'$. Since $T_i$ is in both $T$ and $T^{\ast}$,
we have that $\Hom(\overline{T}^{\ast},T_i[1]) = 0$, and hence $b$ is a right
$\add \overline{T}^{\ast}$-approximation. Note that the approximations $a$ and $b$
need not be minimal.
Recall that we are interested in paths of length two $j \to k \to i$
passing through $k$. By the above, the number of arrows from $j$ to $i$
in the quiver $Q_{T^{\ast}}$ is
$$u = \alpha_{D_i \oplus B_k^m}(T_j) - \alpha_{B_i' \oplus (T_k^{\ast})^m} (T_j)$$
where $\alpha_X(T_j)$ denotes the multiplicity of $T_j$ in $X$.
We have $$u =\alpha_{D_i}(T_j) + m \alpha_{B_k}(T_j) - \alpha_{B_i'}(T_j) =
\alpha_{B_i}(T_j) + m \alpha_{B_k}(T_j) - \alpha_{B_i'}(T_j),$$
since $B_i = D_i \oplus T_k^m$.
The last expression says that $u$ is equal to the number of arrows from $j$ to $i$
in $Q_T$, minus the number of arrows from $i$ to $j$, plus the product of the number
of arrows from $j$ to $k$ and from $k$ to $i$. This is what is required for having the
Fomin-Zelevinsky mutation, and we are done.
\end{proof}
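In the notation of the above proof, the computation can be summarized in a single formula. Write $a_{xy}$ for the number of arrows from $x$ to $y$ in the quiver $Q_T$ (a notation used only in this remark). Then $\alpha_{B_i}(T_j)=a_{ji}$, $\alpha_{B_i'}(T_j)=a_{ij}$, $m=a_{ki}$ and $\alpha_{B_k}(T_j)=a_{jk}$, so that
$$u=a_{ji}-a_{ij}+a_{jk}a_{ki}.$$
Since $Q_T$ has no 2-cycles, at most one of $a_{ij}$ and $a_{ji}$ is nonzero, and this is exactly the Fomin-Zelevinsky rule for the number of arrows from $j$ to $i$ after mutation at the vertex $k$.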
We shall also use the terminology stably 2-CY for certain subcategories of triangulated
categories. Let $\B$ be a functorially finite extension closed subcategory of a Hom-finite triangulated
2-CY category $\C$. We say that $X\in \B$ is {\it projective} in $\B$ if $\Hom(X,\B[1])=0$.
In this setting we shall prove in \ref{teoI2.1} that the category
$\B$ modulo projectives in $\B$ has a 2-CY triangulated structure.
Note that $\B$ does not necessarily have enough projectives or injectives, for example if $\B = \C$.
We then say that $\B$ is \emph{stably 2-CY}.
We illustrate the concept with the following.
\bigskip
\noindent
\textbf{Example.}
Let $\C_Q$ be the cluster category of the path algebra $kQ$, where $Q$ is the quiver
$\stackrel{1}{\cdot}\to\stackrel{2}{\cdot}\to\stackrel{3}{\cdot}$. We have the following
AR-quiver for $\C_Q$, where $S_i$ and $P_i$ denote the simple and projective modules
associated with vertex $i$ respectively.
$$\xymatrix@C0.5cm{
&&P_1\ar[dr]&& S_3[1]\ar[dr]&&S_3\ar[dr]&&\\
&P_2\ar[ur]\ar[dr]&& P_1/S_3\ar[ur]\ar[dr]&& P_2[1]\ar[ur]\ar[dr]&& P_2\ar[dr]& \\
S_3\ar[ur]&& S_2\ar[ur]&& S_1 \ar[ur]&& P_1[1]\ar[ur] && P_1
}$$
Then $\B=\mod kQ$ is an extension closed subcategory of $\C_Q$ and it is easy to
see that $P_1$ is the only indecomposable projective object in $\B$. Then $\B/P_1$ is clearly
equivalent to the cluster category $\C_{Q'}$ where $Q'$ is a quiver of type $A_2$, which is a
triangulated 2-CY category. Hence $\B$ is stably 2-CY.
\bigskip
In addition to the cluster tilting objects, also the maximal rigid objects have played an important
role in the investigation of 2-CY categories. We now investigate the concepts of cluster structure
and weak cluster structure with respect to these objects.
Recall that a subcategory $T$ of a category $\C$ is said to
be {\em rigid} if $\Ext^1(M,M)=0$ for all $M$ in $T$,
and {\em maximal rigid} if $T$ is maximal among rigid subcategories
\cite{gls1}. It is clear that any cluster tilting subcategory
is maximal rigid, but the converse is not the case \cite{bikr}.
There always exists a maximal rigid subcategory in $\C$ if the
category $\C$ is skeletally small, while the existence of a cluster
tilting subcategory is rather restrictive. It is of interest
to have sufficient conditions for the two concepts to coincide.
For this the following is useful (see \cite{bmr1,i1,kr1} for (a) and
the argument in \cite[5.2]{gls1} for (b)).
\begin{proposition}\label{pro-extra}
Let $\C$ be a triangulated (or exact stably) 2-CY category.
\begin{itemize}
\item[(a)] Let $T$ be a cluster tilting subcategory. Then for any $X$
in $\C$, there exist triangles (or short exact sequences) $T_1 \to
T_0 \to X$ and $X \to T_0' \to T_1'$ with $T_i, T_i'$ in $T$.
\item[(b)] Let $T$ be a functorially finite maximal rigid subcategory.
Then for any $X$ in $\C$ which is rigid, the same conclusion as in (a) holds.
\end{itemize}
\end{proposition}
Then we have the following.
\begin{theorem}\label{prop2.1}
Let $\C$ be an exact stably 2-CY category, with some cluster tilting object.
\begin{itemize}
\item[(a)]Then any maximal rigid object in $\C$ (respectively, $\ul{\C}$) is a cluster tilting object.
\item[(b)]Any rigid subcategory in $\C$ (respectively, $\ul{\C}$) has an additive generator which
is a direct summand of a cluster tilting object.
\item[(c)]All cluster tilting objects in $\C$ (respectively,
$\ul{\C}$) have the same number of non-isomorphic indecomposable
summands.
\end{itemize}
\end{theorem}
\begin{proof}
(a) Let $N$ be maximal rigid in $\C$.
We only have to show that any $X\in\C$ satisfying $\Ext^1(N,X)=0$ is contained in $\add N$.
(i) Let $M$ be a cluster tilting object in $\C$. Since $N$ is maximal rigid and $M$ is rigid,
there exists an exact sequence $0\to N_1\to N_0\to M\to 0$ with $N_i \in \add N$
by Proposition \ref{pro-extra}(b). In particular, we have $\pd_{\End(N)}\Hom(N,M)\le1$.
(ii) Since $M$ is cluster tilting, there is, by Proposition
\ref{pro-extra}(a), an exact sequence $0 \to X \to M_0 \to M_1\to 0$ for $X$ as above, with
$M_i \in \add M$, obtained by taking the minimal left $\add M$-approximation $X \to M_0$.
Applying $(N,\ )$,
we have an exact sequence $0\to
(N,X) \to (N,M_0) \to (N,M_1) \to \Ext^1(N,X)=0$. By (i), we have $\pd_{\End(N)}\Hom (N,X) \le1$.
Take a projective resolution
$0 \to (N,N_1) \to (N,N_0) \to (N,X) \to 0$.
Then we have a complex
\begin{equation}\label{N approximation of X}
0\to N_1\to N_0\to X\to0
\end{equation}
in $\C$. Since $0 \to (P,N_1) \to (P,N_0) \to (P,X)
\to 0$ is exact for any projective $P$ in $\C$, it follows from the axioms
of Frobenius categories that the complex \eqref{N approximation of X} is an exact sequence in
$\C$. Since $\Ext^1(X,N)=0$, the sequence \eqref{N approximation of X} splits, so
$X\in \add N$, and hence $N$ is cluster tilting.
(b) Let $M$ be a cluster tilting object in $\C$ and $N$ a
rigid object in $\C$. By \cite[5.3.1]{i2}, $\Hom(M,N)$ is a partial
tilting $\End(M)$-module. In particular, the number of non-isomorphic
indecomposable direct summands of $N$ is not greater than that of $M$.
Consequently, any rigid object in $\C$ is a direct summand of some
maximal rigid object in $\C$, which is cluster tilting by (a).
(c) See \cite[5.3.3]{i2}.
\end{proof}
For a Hom-finite triangulated 2-CY category we also get a weak cluster structure,
and sometimes a cluster structure, determined by the maximal rigid objects, if there are any.
Note that there are cases where the maximal rigid objects are not cluster tilting \cite{bikr}.
But we suspect the following.
\begin{conjecture}
Let $\C$ be a connected $\Hom$-finite triangulated 2-CY category. Then any maximal rigid object without
loops or 2-cycles in its quiver is a cluster tilting object.
\end{conjecture}
Furthermore, we have the following.
\begin{theorem}\label{theoI1.8}
Let $\C$ be a $\Hom$-finite triangulated 2-CY category (or exact stably
2-CY category) having some functorially finite maximal rigid subcategory.
\begin{itemize}
\item[(a)]{The functorially finite maximal rigid subcategories determine a weak cluster structure on $\C$.}
\item[(b)]{If there are no loops or 2-cycles for the functorially finite maximal rigid subcategories,
then they determine a cluster structure on $\C$.}
\end{itemize}
\end{theorem}
\begin{proof}
(a) This follows from \cite[5.1,5.3]{iy}. Note that the arguments there are stated only
for cluster tilting subcategories, but work also for functorially finite maximal rigid subcategories.
\noindent
(b) The proof of Theorem \ref{teoI1.6} works also in this setting.
\end{proof}
There exist triangulated or exact categories with cluster tilting objects even when the
categories are not 2-CY or stably 2-CY (see \cite{i1,kz,eh}). But we do not necessarily
have even a weak cluster structure in this case. For let $\la$ be a Nakayama algebra with
two simple modules $S_1$ and $S_2$, with associated projective covers $P_1$ and $P_2$.
Assume first that $P_1$ and $P_2$ have length 3. Then in $\mod\la$ we have that $S_1\oplus P_1\oplus P_2$,
$\begin{smallmatrix} S_1\\ S_2 \end{smallmatrix}\oplus P_1\oplus P_2$,
$\begin{smallmatrix} S_2\\ S_1 \end{smallmatrix}\oplus P_1\oplus P_2$ are the cluster
tilting objects, so we do not have the unique exchange property.
If $P_1$ and $P_2$ have length 4, then the cluster tilting objects are
$S_1\oplus \begin{smallmatrix} S_1\\ S_2 \\S_1 \end{smallmatrix}\oplus P_1\oplus P_2$ and
$S_2\oplus \begin{smallmatrix} S_2\\ S_1\\S_2 \end{smallmatrix}\oplus P_1\oplus P_2$,
and so there is no way of exchanging $S_1$ in the first object to obtain a new cluster tilting object.
\bigskip
We end this section with some information on the endomorphism algebras of cluster tilting
objects in stably 2-CY categories. Such algebras are studied as
analogs of Auslander algebras in \cite{gls1,i1,i2,kr1}.
We denote by $\mod\C$ the category of finitely presented $\C$-modules.
If $\C$ has pseudokernels, then $\mod\C$ forms an abelian category \cite{aus1}.
\begin{proposition}\label{prop2.5}
Let ${\C}$ be an exact stably 2-CY category.
Assume that $\C$ has pseudokernels and the global dimension of ${\rm mod}\C$ is finite.
Let $\Gamma=\End(T)$ for a cluster tilting object $T$ in $\C$.
\begin{itemize}
\item[(a)]{$\Gamma$ has finite global dimension.}
\item[(b)]{If ${\C}$ is $\Hom$-finite, then the quiver of $\Gamma$ has no loops.
If moreover ${\C}$ is an extension closed subcategory of an abelian
category closed under subobjects, then the quiver of $\Gamma$ has no
2-cycles.}
\end{itemize}
\end{proposition}
\begin{proof}
(a) Let $m={\rm gl.dim} (\mod\C)$.
For any $X\in \mod \Gamma$, take a projective
presentation $(T,T_1)\to(T,T_0)\to X\to0$. By our assumptions, there
exists a complex $0\to F_m\to\cdots\to F_2\to T_1\to T_0$ in
$\C$ such that
$0\to(\ ,F_m)\to\cdots\to(\ ,F_2)\to(\ ,T_1)\to(\ ,T_0)$ is exact in
$\mod \C$. Since $T$ is cluster tilting,
we have an exact sequence $0\to T_1'\to T_0'\to F_i\to 0$, with
$T_1'$ and $T_0'$ in $\add T$ by Proposition \ref{pro-extra}. Hence we have ${\pd}_{{\Gamma}}(T,F_i)\le 1$ and
consequently ${\pd}_{\Gamma}X\le m+1$. It follows that $\Gamma$ has finite global dimension.
(b) By (a), $\Gamma$ is a finite dimensional algebra of finite global dimension.
By \cite{l,ig}, the quiver of $\Gamma$ has no loops.
We shall show the second assertion.
Our proof is based on \cite[6.4]{gls1}.
We start with showing that $\Ext^2_\Gamma(S,S)=0$ for any simple
$\Gamma$-module $S$, assumed to be the top of the projective
$\Gamma$-module $(T,M)$ for an indecomposable summand $M$ of $T$.
First, we assume that $M$ is not projective in $\C$.
Take exact exchange sequences $0\to M^{\ast} \stackrel{f}{\to}B\stackrel{g}{\to}M\to0$
and $0\to M\stackrel{s}{\to}B'\stackrel{t}{\to}M^{\ast}\to0$.
Since $\Gamma$ has no loops, we have a projective resolution
$0\to(T,M)\stackrel{\cdot s}{\to}(T,B')\stackrel{\cdot tf}{\to}(T,B)
\stackrel{\cdot g}{\to}(T,M)\to S\to0$
of the $\Gamma$-module $S$.
Since $M$ is not a summand of $B'$, we have
${\Ext}^2_\Gamma(S,S) = 0$.
Next, we assume that $M$ is projective in $\C$.
Take a minimal projective presentation $(T,B)\stackrel{\cdot g}{\to}(T,M)\to S\to0$
of the $\Gamma$-module $S$. By assumption, the image $\Im g$, taken in the ambient abelian category, belongs to $\C$.
Then $g \colon B\to{\Im} g$ is a minimal right ${\add} T$-approximation.
By Proposition \ref{pro-extra}(a), we have that $B'={\Ker} g$ belongs to ${\add} T$.
Thus we have a projective resolution
$0\to(T,B')\stackrel{}{\to}(T,B)\stackrel{\cdot g}{\to}(T,M)\to S\to0$
of the $\Gamma$-module $S$.
Since $g$ is right minimal, $B'$ does not have an injective summand; in particular, $M$ is not a direct summand of $B'$.
Thus we have ${\Ext}^2_\Gamma(S,S) = 0$.
Since in both cases $\Ext^2_\Gamma(S,S)=0$, we can not have a 2-cycle
by \cite[3.11]{gls1}.
\end{proof}
\hspace{7mm}
\subsection{Substructures}\label{c1_sec2}
${}$ \\
For extension closed subcategories of triangulated or exact categories both having a weak
cluster structure, we introduce the notion of substructure. Using heavily \cite{iy},
we give sufficient conditions for having a substructure, when starting with a Hom-finite triangulated
2-CY category or an exact stably 2-CY category, and using the cluster tilting subcategories,
with the indecomposable projectives as coefficients.
Let $\C$ be an exact or triangulated $k$-category,
and $\B$ a subcategory of $\C$
closed under extensions. Assume that both $\C$ and
$\B$ have a weak cluster structure. We say that we have
a \emph{substructure} of $\C$ induced by an extended cluster $T$ in
$\B$ if we have the following:
There is
a set $A$ of indecomposable objects in $\C$ such that
$\widetilde{T'} = T'\cup A$ is an extended cluster in $\C$
for any extended cluster $T'$ in $\B$ which is obtained
by a finite number of exchanges from $T$.
Note that for each sequence of cluster variables $M_1,\cdots, M_t$, with
$M_{i+1}$ in $\mu_{M_i}(\cdots\mu_{M_1}(T))$, we have
$\mu_{M_t}(\cdots\mu_{M_1}(T))\cup A=\tilde{\mu}_{M_t}(\cdots
\tilde{\mu}_{M_1}(\widetilde{T}))$, where $\mu$ denotes
the exchange for $\B$ and $\tilde{\mu}$ the exchange for $\C$.
We shall investigate substructures arising from certain extension closed subcategories of
triangulated 2-CY categories and of exact stably 2-CY categories. We start with the
triangulated case, and here we first recall some results from \cite{iy} specialized to the
setting of 2-CY categories.
For a triangulated category $\C$ and full subcategories $\B$ and $\B'$,
let ${\B}^\perp =\{X \in \C \mid \Hom({\B},X)=0\}$ and
${}^\perp{\B}=\{ X\in \C \mid \Hom(X,\B)=0\}$.
We denote by $\B \ast {\B}'$ the full subcategory of $\C$
consisting of all $X \in \C$ such that there exists a triangle
$B\to X\to B'\to B[1]$ with $B\in \B$ and $B'\in {\B}'$.
We get the following sufficient
conditions for constructing 2-CY categories, and hence categories with weak cluster structures.
\begin{theorem}\label{teoI2.1}
Let ${\C}$ be a $\Hom$-finite triangulated 2-CY category
and ${\B}$ a functorially finite extension closed subcategory of $\C$.
\begin{itemize}
\item[(a)]{${\B}^\perp$ and ${}^\perp{\B}$ are functorially finite
extension closed subcategories of $\C$.
Moreover, ${\B}\ast {\B}^\perp={\C}={}^\perp{\B}\ast{\B}$
and ${}^\perp({\B}^\perp)={\B}=({}^\perp{\B})^\perp$ hold.}
\item[(b)]{Let $\D= \B \cap {}^\perp{\B}[1]$.
Then $\B/\D$ is a $\Hom$-finite triangulated 2-CY
category, so that $\B$ is a stably 2-CY category.
Moreover, ${\B}\subseteq({\D}\ast{\B}[1]) \cap ({\B}[-1]\ast{\D})$ holds,
and $\D$ is a functorially finite rigid subcategory of $\C$.}
\item[(c)]{Let $\D$ be a functorially finite rigid subcategory
of $\C$ and ${\B'}={}^\perp{\D}[1]$. Then ${\B'}$ is a
functorially finite extension closed subcategory of ${\C}$
and ${\B'}/{\D}$ is a triangulated 2-CY category.
Moreover, there exists a one-one correspondence between cluster tilting
(respectively, maximal rigid, rigid) subcategories of ${\C}$ containing ${\D}$ and
cluster tilting (respectively, maximal rigid, rigid) subcategories of
${\B'}/{\D}$. It is given by $T\mapsto T/\D$.}
\end{itemize}
\end{theorem}
\begin{proof}
(a) Since $\B^\perp={}^\perp\B[2]$ holds by the 2-CY property, the
assertion follows from \cite[2.3]{iy}.
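Explicitly, the 2-CY property means that $[2]$ is a Serre functor, so there is a functorial isomorphism
$$\Hom(B,X)\simeq D\Hom(X,B[2])$$
for all $B\in\B$ and $X\in\C$, where $D$ denotes the duality over the base field $k$. Hence $\Hom(\B,X)=0$ if and only if $\Hom(X,\B[2])=0$, that is, $\B^\perp={}^\perp\B[2]$.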
(b) Clearly $\B/\D$ is $\Hom$-finite, since $\C$ is.
To show that $\B/\D$ is a triangulated 2-CY category, we only need to check
${\B}\subseteq({\D}\ast{\B}[1]) \cap ({\B}[-1]\ast{\D})$ by \cite[4.2]{iy}.
Let $Z$ be in $\B$. Since $\B$, and hence $\B[1]$, is functorially finite in
$\C$, it follows from (a) that we have a triangle $X\to Y\to Z\to X[1]$ with $Y$ in
$^{\bot}\B[1]$ and $X[1]$ in $\B[1]$. Since $\B$ is extension closed,
$Y$ is in $\B$, and consequently $Y$ is in $\B\cap{}^{\bot}\B[1]=\D$.
It follows that $Z$ is in $\D{\ast}\B[1]$, and similarly in $\B[-1] \ast \D$.
To see that $\D$ is functorially finite in $\C$,
we only have to show that $\D$ is functorially finite in $\B$.
For any $Z \in \B$, take the above triangle $X\to Y\stackrel{f}{\to}Z\to X[1]$
with $Y$ in $\D$ and $X[1]$ in $\B[1]$.
Since $(\D,X[1])=0$, we have that $f$ is a right $\D$-approximation.
Thus $\D$ is contravariantly finite in $\B$,
and similarly covariantly finite in $\B$.
(c) See \cite[4.9]{iy}.
\end{proof}
The example of the cluster category $\C$ of the path algebra $kQ$ where $Q$ is of type
$A_3$ from the previous section illustrates part of this theorem. For let $\D=\add P_1$.
Then $\B'={^{\bot}\D[1]}=\mod kQ$, and $\B'/\D=\C_{Q'}$, where $Q'$ is a quiver of type $A_2$.
The cluster tilting objects in $\C$ containing $P_1$ are $P_1\oplus S_3\oplus P_2$, $P_1
\oplus P_2\oplus S_2$, $P_1\oplus S_2\oplus P_1/S_3$, $P_1\oplus P_1/S_3\oplus S_1$,
$P_1\oplus S_1\oplus S_3$, which are in one-one correspondence with the cluster tilting objects in $\B'/\D$.
In order to get sufficient conditions for having a substructure we investigate cluster tilting
subcategories in $\B$. For this the following lemma is useful.
\begin{lemma}\label{lemI2.3}
Let $\C$ be a $\Hom$-finite triangulated 2-CY category. For any
functorially finite and thick subcategory $\C_1$ of
$\C$, there exists a functorially finite and thick
subcategory $\C_2$ of $\C$ such that
$\C=\C_1\times\C_2$.
\end{lemma}
\begin{proof}
Let $\C_2 =\C_1^\perp$.
Then we have
$\C_2=\C_1^\perp={}^\perp\C_1[2]={}^\perp\C_1$
by Serre duality, using that $\C_1$ is triangulated. We only have to show that any object in
$\C$ is a direct sum of objects in $\C_1$ and
$\C_2$. For any $X\in\C$, there exists a triangle
$A_1\to X \to A_2\stackrel{f}{\to}A_1[1]$ in $\C$ with $A_1$
in $\C_1$ and $A_2$ in $\C_2=\C_1^{\bot}$ by
Theorem \ref{teoI2.1}(a). Since $A_2$ is in ${}^\perp\C_1$ and $A_1[1]$ is in $\C_1$, we have $f=0$, and hence $X\simeq A_1\oplus A_2$.
Thus we have $\C=\C_1\times\C_2$.
\end{proof}
Using Lemma \ref{lemI2.3}, we get the following decomposition of
triangulated categories.
\begin{proposition}\label{propIadded}
Let $\C$ be a $\Hom$-finite triangulated 2-CY category and $\B$ a
functorially finite extension closed subcategory of $\C$. Let $\D= \B
\cap {}^\perp{\B}[1]$ and $\B'={}^\perp\D[1]$.
\begin{itemize}
\item[(a)]{There exists a
functorially finite and extension closed subcategory $\B''$ of $\C$
such that $\D\subseteq\B''\subseteq\B'$ and $\B'/\D=\B/\D\times\B''/\D$ as
a triangulated category.}
\item[(b)]{There exists a one-one correspondence between pairs
consisting of cluster tilting (respectively, maximal rigid,
rigid) subcategories of $\B$ and of $\B''$, and cluster tilting
(respectively, maximal rigid, rigid) subcategories of $\B'$.
It is given by $(T,T'')\mapsto T\oplus T''$.}
\end{itemize}
\end{proposition}
\begin{proof}
(a) We know by Theorem \ref{teoI2.1}(b)(c) that
$\D$ is functorially finite rigid, and that $\B/\D$
and $\B'/\D$ are both triangulated 2-CY categories. The inclusion functor
$\B/\D\to \B'/\D$ is a triangle functor by the construction
of their triangulated structures in \cite[4.2]{iy}. In particular $\B/\D$ is a thick subcategory
of $\B'/\D$, and hence we have a decomposition by Lemma \ref{lemI2.3}.
(b) This follows by Theorem \ref{teoI2.1}(c).
\end{proof}
Then we get the following.
\begin{corollary}\label{corI2.4}
Let $\C$ be a $\Hom$-finite 2-CY algebraic triangulated category with a cluster tilting object,
and $\B$ a functorially finite extension closed subcategory of $\C$. Then we have the following.
\begin{itemize}
\item[(a)]{The stably 2-CY category $\B$ also has some cluster tilting
object. Any maximal rigid object in $\B$ is a cluster tilting
object in $\B$.}
\item[(b)]{There is some rigid object $A$ in $\C$ such that $T\oplus A$ is a cluster tilting
object in $\C$ for any cluster tilting object $T$ in $\B$.}
\item[(c)]{Any cluster tilting object $T$ in $\B$ determines a substructure for the
weak cluster structures on $\B$ and $\C$ given by cluster tilting objects.}
\end{itemize}
\end{corollary}
\begin{proof}
(a) Let $\D =\B\cap{}^\perp\B[1]$ and $\B'={}^{\bot}\D[1]$.
Since $\C$ is algebraic, we have by Theorem \ref{prop2.1}
a cluster tilting object $T$ in $\C$ containing $\D$.
By Proposition \ref{propIadded}, we have decompositions
$\B'/\D=\B/\D\times\B''/\D$ for some subcategory $\B''$ of $\B'$
and $T=T_1\oplus T_2$ with a cluster
tilting object $T_1$ (respectively, $T_2$) in $\B$ (respectively, $\B''$).
Thus $\B$ has a cluster tilting object.
Now we show the second assertion.
Let $M$ be maximal rigid in $\B$. By Proposition \ref{propIadded}(b),
we have that $M\oplus T_2$ is maximal rigid in $\C$. By Theorem \ref{prop2.1},
it follows that $M\oplus T_2$ is cluster tilting in $\C$ and by Proposition
\ref{propIadded}(b), we have that $M$ is cluster tilting in $\B$.
\noindent
(b) We only have to let $A = T_2$.
\noindent
(c) This follows from (b).
\end{proof}
It is curious to note that combining Proposition \ref{propIadded} with Theorem \ref{teoI2.1} we obtain a
kind of classification of functorially finite extension closed subcategories of a
triangulated 2-CY category in terms of functorially finite rigid
subcategories, analogous to results from \cite{ar}.
\begin{theorem}\label{theoI2.5}
Let $\C$ be a 2-CY triangulated category. Then the functorially finite
extension closed subcategories $\B$ of $\C$ are all obtained as preimages under the
functor $\pi\colon \C\to \C/\D$ of the direct summands of $^{\bot}\D[1]/\D$ as a triangulated category,
for functorially finite rigid subcategories $\D$ of $\C$.
\end{theorem}
\begin{proof}
Let $\D$ be functorially finite rigid in $\C$. Then
$\B'={^{\bot}\D}[1]$ is functorially finite extension closed in $\C$ by Theorem \ref{teoI2.1}(a).
Then the preimage under $\pi\colon\C\to\C/\D$ of any direct summand of $\B'/\D$ as a triangulated category
is functorially
finite and extension closed in $\C$.
Conversely, let $\B$ be a functorially finite extension closed subcategory of
$\C$ and $\D=\B\cap{}^{\bot}\B[1]$. By Proposition \ref{propIadded}, we have that $\B/\D$ is a direct
summand of ${^{\bot}}\D[1]/\D$.
\end{proof}
We now investigate substructures also for exact categories which are stably 2-CY.
We have the following main result.
\begin{theorem}\label{teoI2.7}
Let $\C$ be an exact stably 2-CY category, and
$\B$ a functorially finite extension closed subcategory of $\C$.
Then $\B$ has enough projectives and injectives, and is a stably 2-CY category.
\end{theorem}
\begin{proof}
We know that
$\B$ is an exact category and $\D=\B\cap{}^\perp\B[1]$ is the
subcategory of projective injective objects. Since $\B\subseteq\B[-1]\ast\D$
holds by Theorem \ref{teoI2.1}(b), for any $X\in\B$ there exists
a triangle $X\to Y\to Z\to X[1]$ with $Y\in\D$
and $Z\in\B$. This is induced from an exact sequence $0\to X\to Y\to
Z\to0$ in $\C$. Thus $\B$ has enough injectives. Dually, $\B$ has
enough projectives, which coincide with the injectives. Hence $\B$ is a Frobenius category, and
consequently, $\B$ is stably 2-CY.
\end{proof}
Alternatively, we give a direct approach, where the essential information is given by the following lemma and its dual.
\begin{lemma}\label{lemI2.3new}
Let $\C$ be an exact category with enough injectives, and
$\B$ a contravariantly finite extension closed subcategory of $\C$. Then $\B$ is an exact
category with enough injectives.
\end{lemma}
\begin{proof}
It is clear that $\B$ is also an exact category. Let $X$ be in $\B$ and
take an exact sequence $0 \to X \to I \to X' \to 0$ with $I$ injective in $\C$. Then we have an exact
sequence of functors $(\ ,X') \to \Ext ^1(\ ,X)\to 0$. Since $\B$ is Krull-Schmidt and
contravariantly finite in $\C$, we can take a projective cover
$\phi:(\ ,Y)\to{\Ext}^1(\ ,X)|_{\B}\to0$ of $\B$-modules.
This is induced by an exact sequence $0\to X\to Z\stackrel{}{\to}
Y\to0$ with terms in $\B$.
We will show that $Z$ is injective. Take any exact sequence $0\to
Z\stackrel{}{\to}Z'\to Z''\to0$ with terms in $\B$. We will
show that this splits. Consider the following exact commutative diagram:
\begin{equation}\label{commutative diagram}
\begin{array}{ccccccccc}
&&&&0&&0&&\\
&&&&\downarrow&&\downarrow&&\\
0&\to&X&\to&Z&\to&Y&\to&0\\
&&\parallel&&\downarrow&&\downarrow^{a}&&\\
0&\to&X&\to&Z'&\to&Y'&\to&0\\
&&&&\downarrow&&\downarrow&&\\
&&&&Z''&=&Z''&&\\
&&&&\downarrow&&\downarrow&&\\
&&&&0&&0&&
\end{array}
\end{equation}
Then $Y'\in\B$, and we have the commutative diagram
\begin{equation}\label{commutative diagram2}
\begin{array}{ccccccccccc}
0&\to&(\ ,X)&\to&(\ ,Z)&\to&(\ ,Y)&\stackrel{\phi}{\to}&{\rm Ext}^1(\
,X)|_{\B}&\to&0\\
&&\parallel&&\downarrow^{}&&\downarrow^{\cdot a}&&\parallel&&\\
0&\to&(\ ,X)&\to&(\ ,Z')&\to&(\ ,Y')&\to&{\rm Ext}^1(\ ,X)|_{\B}&&
\end{array}
\end{equation}
of exact sequences of $\B$-modules. Since $\phi$ is a projective cover, we have that
$(\cdot a)$ is a split monomorphism. Thus $a$ is a split monomorphism.
We see that the sequence $0 \to \Ext^1(Z'',Z) \to \Ext^1(Z'',Y)$ is
exact by evaluating the upper sequence in \eqref{commutative diagram2}
at $Z''$. Since the right vertical sequence
in \eqref{commutative diagram} splits, it follows that the middle
vertical sequence in \eqref{commutative diagram} splits. Hence $Z$ is
injective, and consequently $\B$ has enough injectives.
\end{proof}
It follows from $\C$ being 2-CY that the projectives and injectives in $\B$ coincide,
and hence $\B$ is Frobenius by Lemma \ref{lemI2.3new} and its dual. It follows as before
that $\B$ is stably 2-CY, and the alternative
proof of Theorem \ref{teoI2.7} is completed.
\bigskip
We have the following interesting special case, as a consequence of Theorem \ref{teoI2.7} and Corollary \ref{corI2.4}.
For $X$ in $\C$ we denote by $\Sub X$ the subcategory of $\C$ whose objects are subobjects of finite direct sums of copies of $X$.
\begin{corollary}\label{cor2.6}
Let $\C$ be a $\Hom$-finite exact stably 2-CY category, and let $X$ be an object in
$\C$ with $\Ext^1(X,X)=0$ and $\id X\leq 1$.
\begin{itemize}
\item[(a)]{Then $\Sub X$ is a functorially finite extension closed subcategory of $\C$ and is exact stably 2-CY.}
\item[(b)]{If $\C$ has a cluster tilting object, then so does $\Sub X$, and any cluster
tilting object in $\Sub X$ determines a substructure of the cluster structure for $\C$.}
\item[(c)]{If $\C$ is abelian, then $\Sub X$ has no loops or 2-cycles.}
\end{itemize}
\end{corollary}
\begin{proof}
(a) We include the proof for the convenience of the reader.
We first want to show that $\Sub X$ is extension closed. Let $0\to A\to B\to C\to 0$ be an
exact sequence with $A$ and $C$ in $\Sub X$, and consider the diagram
$$\xymatrix{
& 0\ar[d]&& 0\ar[d] &\\
0\ar[r]& A\ar[r]^i\ar[d]^f& B\ar[r]^j& C\ar[r]\ar[d]^q& 0\\
0\ar[r]& X_0\ar[r]& X_0\oplus X_1\ar[r]& X_1\ar[r] & 0
}$$
with $X_0, X_1$ in $\add X$. Since $\id X_0\le 1$, we have the exact sequence
$\Ext^1(X_1,X_0)\to \Ext^1(C,X_0)\to 0$; since $\Ext^1(X_1,X_0)=0$ by the rigidity of $X$, this shows that $\Ext^1(C,X_0)=0$.
Then the exact sequence $(B,X_0)\to (A,X_0)\to \Ext^1(C,X_0)$ shows that there
is some $t\colon B\to X_0$ such that $it=f$. This shows that $B$ is in $\Sub X$,
which is then closed under extensions.
It is also functorially finite \cite{as}, and clearly
Krull-Schmidt. So $\Sub X$ has enough projectives and injectives,
with the projectives coinciding with the injectives. Hence $\Sub X $
is Frobenius, and as we have seen before, it follows that the stable category
$\underline{\Sub} X$ is 2-CY.\\
(b) This follows directly using Corollary \ref{corI2.4}.\\
(c) This follows from Proposition \ref{prop2.5}.
\end{proof}
In order to see when we have cluster structures we next want to give sufficient conditions
for algebraic triangulated (or stably) 2-CY categories not to have loops or 2-cycles.
\begin{proposition}\label{propI2.11}
Let $\C$ be a $\Hom$-finite algebraic triangulated (or exact stably) 2-CY
category with a cluster tilting object, and $\B$ a functorially finite
extension closed subcategory.
\begin{itemize}
\item[(a)]{If $\C$ has no 2-cycles, then also $\B$ has no 2-cycles.}
\item[(b)]{If $\C$ has no loops, then $\B$ has no loops.}
\end{itemize}
\end{proposition}
\begin{proof}
We give a proof for the algebraic triangulated 2-CY case.
A similar argument works for the stably 2-CY case.
(a) Let $\D = \B \cap{}^\perp \B[1]$ and $\B' ={}^\perp \D[1]$.
Since cluster tilting objects in $\B'$ are exactly
cluster tilting objects in $\C$ which contain $\D$,
our assumption implies that $\B'$ has no 2-cycles.
We shall show that $\B$ has no 2-cycles.
Let $T$ be a cluster tilting object in $\B$.
By Corollary \ref{corI2.4}(b), there exists $T'\in \B'$ such that $T\oplus T'$ is
a cluster tilting object in $\C$.
We already observed that $T\oplus T'$ has no 2-cycles.
If $T$ has a 2-cycle, then at least one arrow in the 2-cycle
represents a morphism $f \colon X\to Y$ which factors through an object in $T'$.
We write $f$ as a composition of $f_1 \colon X\to Z$ and $f_2 \colon Z\to Y$ with $Z\in T'$.
Since $\B/\D$ is a direct summand of $\B'/\D$ by Proposition \ref{propIadded},
any morphism between $T$ and $T'$ factors through $\D$.
Thus we can write $f_1$ (respectively $f_2$) as a composition
of $g_1 \colon X\to W_1$ and $h_1 \colon W_1\to Z$ (respectively, $g_2 \colon Z\to W_2$ and
$h_2 \colon W_2\to Y$)
with $W_1\in \D$ (respectively, $W_2\in \D$).
We have $f=f_1f_2=g_1(h_1g_2)h_2$, where $h_1g_2$ is in $\rad \B$ and
at least one of $h_2$ and $g_1$ is in $\rad \B$, since
at least one of $X$ and $Y$ is not in $\D$.
So $f$ cannot be irreducible in $\add T$, a contradiction.
\noindent
(b) This follows in a similar way.
\end{proof}
Note that the quiver $Q_T$ of a cluster tilting object $T$ may have 2-cycles between coefficients.
For example, let $\C = \mod \la$ for the preprojective algebra of a Dynkin quiver and let $\B$ be the
subcategory $\add \la$. Then there are no 2-cycles for $\C$, but there are 2-cycles for $\B$,
since $\la$ is the only cluster tilting object in $\B$. \\
\hspace{7mm}
\subsection{Preprojective algebras of Dynkin type}\label{c1_sec3}
${}$ \\
In this section we specialize our general results from Section \ref{c1_sec2} to the case of finitely generated
modules over a preprojective algebra of Dynkin type. We also illustrate with three concrete examples.
The same examples will be used in Chapter \ref{chap3} to illustrate how
to use this theory to construct subcluster algebras of cluster algebras.
The category $\C =\mod\Lambda$ for $\Lambda$ preprojective of Dynkin
type is a Hom-finite Frobenius category. By \cite{gls1}, see also Section \ref{c2_sec2}, a rigid
$\la$-module is cluster tilting if and only if the number of
non-isomorphic indecomposable summands is the number of positive
roots, so $\frac{n(n+1)}{2}$ for $A_n$, $n(n-1)$ for $D_n$, $36$ for
$E_6$, $63$ for $E_7$ and $120$ for $E_8$.
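Concretely, for the types occurring in the examples below, the required number of indecomposable summands is
\[
  A_3\colon \tfrac{3\cdot 4}{2}=6,\qquad
  A_4\colon \tfrac{4\cdot 5}{2}=10,\qquad
  D_4\colon 4\cdot 3=12.
\]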
Let $\B$ be an extension closed functorially finite
subcategory of $\C$. We know that $\B$ is stably 2-CY by Theorem \ref{teoI2.7}. It is
known that $\C$ and $\ul{\C}$ have no loops or 2-cycles for the cluster
tilting objects \cite{gls1}, and this also follows from Proposition
\ref{prop2.5}. Then it follows from Proposition \ref{propI2.11} that
there are also no loops or 2-cycles for $\B$ and the subcategory $\ul{\B}$ of $\ul{\C}$.
Note that $\ul{\B}$ is not the stable category of $\B$ since $\B$ may
have more projectives than $\C$.
We then have the following.
\begin{theorem}\label{teoI3.1}
Let $\B$ be an extension closed functorially finite subcategory of the category $\C=\mod\la$ for the
preprojective algebra $\la$ of a Dynkin quiver. Then we have the following.
\begin{itemize}
\item[(a)]The exact stably 2-CY category $\B$ has a cluster tilting object, and any
maximal rigid object in $\B$ is a cluster tilting object,
which can be extended to a cluster tilting object for $\C$, and which gives rise to a substructure.
\item[(b)]{The category $\underline{\B}$ is a stably 2-CY Frobenius
category with no loops or 2-cycles for the cluster tilting objects, and hence has a cluster structure}.
\end{itemize}
\end{theorem}
\begin{proof}
\noindent (a) This follows from Theorem \ref{prop2.1} and Corollary \ref{cor2.6}.
\noindent (b) This follows from the above comments and Theorem \ref{teoI1.6}.
\end{proof}
We now give some concrete examples of weak cluster structures and substructures.
In Chapter \ref{chap3} these examples will be revisited, and used to model
cluster algebras and subcluster algebras.
We denote by $P_i$ the indecomposable projective module associated to vertex $i$,
by $J$ the Jacobson radical of the algebra at hand, and by
$S_i$ the simple top of $P_i$.
Usually, we represent a module $M$ by its radical filtration:
the numbers in the first row give the indices of the simple composition factors of $M/J M$, and
the numbers in the $i$th row give the indices of the simple composition factors of $J^{i-1} M / J^i M$.
For example,
$$\begin{matrix} & 2 & \\ 1 & & 3 \\ & 2 &
\end{matrix}$$
represents the indecomposable projective module $P_2$ for the preprojective algebra of type
$A_3$, which has quiver
$$
\xymatrix{
1 \ar@<0.5ex>[r] & 2 \ar@<0.5ex>[l] \ar@<0.5ex>[r] & 3 \ar@<0.5ex>[l] \\
}
$$
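Read off in formulas, the filtration displayed above says that the radical series of $P_2$ is
\[
  P_2/JP_2\simeq S_2,\qquad JP_2/J^2P_2\simeq S_1\oplus S_3,\qquad
  J^2P_2\simeq S_2,\qquad J^3P_2=0.
\]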
\noindent
\textbf{Example 1.}
Let $\la$ be the preprojective algebra of the Dynkin quiver of type $A_4$.
This algebra has quiver
$$
\xymatrix{
1 \ar@<0.5ex>[r] & 2 \ar@<0.5ex>[l] \ar@<0.5ex>[r] & 3 \ar@<0.5ex>[l] \ar@<0.5ex>[r] & 4 \ar@<0.5ex>[l] \\
}
$$
We consider the modules $P_3$ and $M = J P_3$.
These modules are represented by their radical filtrations:
$$
\begin{smallmatrix} & & 3 & \\ & 2 & & 4 \\ 1 & & 3 & \\ & 2 & &
\end{smallmatrix} \text{ and }
\begin{smallmatrix} & 2 & & 4 \\ 1 & & 3 & \\ & 2 & &
\end{smallmatrix}
$$
We let $\C'=\Sub P_3$ and $\B = \{X \in \C' \mid \Ext^1(M,X) = 0 \}$.
The AR-quiver of $\C'$ is given below, where
we name the indecomposables in $\C'$ to ease notation.
The indexing will be explained in Section \ref{c3_sec2}.
$$
\xymatrix@C0.4cm@R0.5cm{
&
{\stackrel{(M_{45})}{\begin{smallmatrix} & & 3 & \\ & 2 & & 4 \\ 1 & & 3 & \\ & 2 & & \end{smallmatrix}}} \ar[dr] &
&
{\stackrel{(M_{15})}{\begin{smallmatrix} & & & 4 \\ & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] &
&
&
&
{\stackrel{(M_{23})}{\begin{smallmatrix} 1 & & & \\ & 2 & & \\ & & & \\ & & & \end{smallmatrix}}} \\
{\stackrel{(M_{35})}{\begin{smallmatrix} & 2 & & 4 \\ 1 & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{y})}{\begin{smallmatrix} & & & \\ & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{25})}{\begin{smallmatrix} & & & 4 \\ 1 & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] &
&
{\stackrel{(M_{13})}{\begin{smallmatrix} & & & \\ & 2 & & \\ & & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] & \\
&
{\stackrel{(M_{13})}{\begin{smallmatrix} & & & \\ & 2 & & \\ & & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{x})}{\begin{smallmatrix} & & & \\ 1 & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{35})}{\begin{smallmatrix} & 2 & & 4 \\ 1 & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{y})}{\begin{smallmatrix} & & & \\ & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \\
&
&
{\stackrel{(M_{23})}{\begin{smallmatrix} & & & \\ 1 & & & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[ur] &
&
{\stackrel{(M_{34})}{\begin{smallmatrix} & 2 & & \\ 1 & & 3 & \\ & 2 & & \\ & & & \end{smallmatrix}}} \ar[ur] &
&
{\stackrel{(M_{45})}{\begin{smallmatrix} & & 3 & \\ & 2 & &4 \\ 1 & & 3 & \\ & 2 & & \end{smallmatrix}}} \ar[ur] &
}
$$
From the AR-quiver we see that
the indecomposable projectives in $\C'$ are $M_{45}, M_{34}, M_{23}$ and $M_{15}$.
The indecomposables of the subcategory $\B$
are obtained from $\C'$ by deleting the indecomposable objects
$P_3 = M_{45}, M_{x}$ and $M_y$.
The category $\B$ is extension closed by definition, and the
indecomposable projectives in $\B$ are $M_{35}, M_{34}, M_{23},
M_{15}$.
Let $T = M_{34} \oplus M_{23} \oplus M_{13} \oplus M_{15} \oplus M_{35}$. Then clearly $\Ext^1(T,T) = 0$,
and the unique indecomposable
in $\B$ which is not a summand in $T$ is $M_{25}$, which has a non-zero extension with $T$.
Hence $T$ is a cluster tilting object in $\B$, and $\B$ has a cluster structure,
with coefficients $M_{35}, M_{34}, M_{23}, M_{15}$.
Now, since $M_{45}$ is projective in $\C'$, and $M_x$ and $M_y$ both have non-zero extensions with $T$,
it is clear that
$T' = T \oplus M_{45}$ is a cluster tilting object in $\C'$, and hence
$\C'$ has a cluster structure, such that we have a substructure for $\B$ induced by $T$.
We claim that the cluster tilting object $T'$ in $\C'$ can be extended to a cluster tilting
object $\widetilde{T} = T' \oplus P_1 \oplus P_2 \oplus P_4 \oplus Z$ of $\mod \la$,
where $Z$ is the $\la$-module with radical filtration
${\begin{smallmatrix} & & & \\ 1& & & \\ & 2 & & \\ & & 3 & \\ & & & \end{smallmatrix}}$.
To see that $\Ext^1(\widetilde{T},\widetilde{T}) = 0$, it is sufficient to
show that $\Ext^1(Z, X \oplus Z) = 0$ for all $X$ in $\C'$.
There is an exact sequence $0 \to S_4 \to P_1 \to Z \to 0$, and hence
for every $X$ in $\C'$, there is an exact sequence
$$\Hom(S_4,X \oplus Z) \to \Ext^1(Z,X \oplus Z) \to \Ext^1(P_1,X \oplus Z).$$
Note that $\Hom(S_4, P_3) = 0$
and hence $\Hom(S_4, X) = 0$ for all $X$ in $\C'$, and that $\Hom(S_4,Z) = 0$.
It follows that $\Ext^1(Z,X \oplus Z) = 0$.
Thus $\widetilde{T}$ is a cluster tilting object since it has the correct number
$10=\frac{4\cdot 5}{2}$ of indecomposable direct summands.
\bigskip
\noindent
\textbf{Example 2.}
For our next example, let $\la$ be the preprojective algebra
of type $A_3$. It has the quiver
$$
\xymatrix{
1 \ar@<0.5ex>[r] & 2 \ar@<0.5ex>[l] \ar@<0.5ex>[r] & 3 \ar@<0.5ex>[l] \\
}
$$
The AR-quiver of $\C=\mod \la$ is given by the following.
We name the indecomposables in $\C$ according to the following table.
The indexing will be explained in Section \ref{c3_sec2}.
$$
\xymatrix{
&
{\stackrel{(M_{34})}{\begin{smallmatrix} & 2 & \\ 1 & & 3 \\ & 2 & \end{smallmatrix}}} \ar[ddr] & & &
{\stackrel{(M_{234})}{\begin{smallmatrix} & & 3 \\ & 2 & \\ 1 & & \end{smallmatrix}}} \ar[dr] & & \\
&
{\stackrel{(M_{124})}{\begin{smallmatrix} & & \\ & 3 & \\ & & \end{smallmatrix}}} \ar[dr] &
&
{\stackrel{(M_{23})}{\begin{smallmatrix} & & \\ & 2 & \\ 1 & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{14})}{\begin{smallmatrix} & & \\ & & 3 \\ & 2 & \end{smallmatrix}}} \ar[dr] & \\
{\stackrel{(M_{x})}{\begin{smallmatrix} & & \\ 1 & & 3 \\ & 2 & \end{smallmatrix}}} \ar[uur] \ar[ur] \ar[dr] &
&
{\stackrel{(M_{24})}{\begin{smallmatrix} & & \\ & 2 & \\ 1 & & 3 \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{13})}{\begin{smallmatrix} & & \\ & 2 & \\ & & \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{x})}{\begin{smallmatrix} & & \\ 1 & & 3 \\ & 2 & \end{smallmatrix}}} \\
&
{\stackrel{(M_{y})}{\begin{smallmatrix} & & \\ 1 & & \\ & & \end{smallmatrix}}} \ar[ur] &
&
{\stackrel{(M_{134})}{\begin{smallmatrix} & & \\ & 2 & \\ & & 3 \end{smallmatrix}}} \ar[dr] \ar[ur] &
&
{\stackrel{(M_{t})}{\begin{smallmatrix} & & \\ 1 & & \\ & 2 & \end{smallmatrix}}} \ar[ur] & \\
& & & &
{\stackrel{(M_{z})}{\begin{smallmatrix} 1 & & \\ & 2 & \\ & & 3 \end{smallmatrix}}} \ar[ur] & &
}
$$
The indecomposable projectives in $\C$ are $M_{34}, M_z, M_{234}$.
Let $\B$ be the full subcategory of $\C$ generated by $P_2 \oplus P_3$.
Then $\B=\add (M_{34} \oplus M_{124} \oplus M_{24} \oplus M_{23} \oplus M_{134} \oplus M_{234}
\oplus M_{13} \oplus M_{14})$.
In addition to $M_{34}, M_{234}$, also $M_{134}$ becomes projective in $\B$.
It is straightforward to see that $M_{23} \oplus M_{13}$ is rigid,
so $T = M_{34} \oplus M_{234} \oplus M_{134} \oplus M_{23} \oplus M_{13}$ has $\Ext^1(T, T)= 0$.
Let $\widetilde{T} = T \oplus M_{z}$. Then also
$\Ext^1(\widetilde{T}, \widetilde{T})= 0$.
Since $\widetilde{T}$ has the correct number of indecomposable direct
summands $6=\frac{3\cdot 4}{2}$, it is a cluster tilting object.
Hence $T$ is a cluster tilting object in $\B$.
\bigskip
\noindent
\textbf{Example 3.}
In this example we let $\la$ be the preprojective algebra of the Dynkin quiver of type $D_4$. This algebra has quiver
$$
\xymatrix{
& & 3 \ar@<0.5ex>[dl] \\
1 \ar@<0.5ex>[r] & 2 \ar@<0.5ex>[l] \ar@<0.5ex>[ur] \ar@<0.5ex>[dr] & \\
& & 4 \ar@<0.5ex>[ul]
}
$$
We consider the subcategory $\B=\Sub P_2$.
Using Corollary \ref{cor2.6} we have that $\B$ is extension closed.
We know by Theorem \ref{teoI3.1} that $\B$ has a cluster tilting object that
can be extended to a cluster tilting object for $\C=\mod \la$.
The following gives $P_2$ as a representation of the quiver with relations
\[ \xymatrix{ & k_2 \ar[ld]_{(1)} \ar[d]^{(1)} \ar[rd]^{(1)} & \\
k_1\ar@/_1.1pc/[rdd]_{\begin{pmatrix} \p 1 \\ \p 0 \end{pmatrix}}
& k_3 \ar[dd]^{\begin{pmatrix} \p 0 \\ \p 1 \end{pmatrix}}
& k_4 \ar@/^1.1pc/[ldd]^{{\p -}\begin{pmatrix} \p 1 \\ \p 1 \end{pmatrix}} \\ \\
&k_2 \oplus k_2
\ar@/_1.1pc/[ldd]_{\begin{pmatrix} \p 0 & \p 1 \end{pmatrix}}
\ar[dd]^{\begin{pmatrix} \p 1 & \p 0 \end{pmatrix}}
\ar@/^1.1pc/[rdd]^{\begin{pmatrix} \p 1 & \p -1 \end{pmatrix}} & \\ \\
k_1\ar[rd]_{(1)} & k_3 \ar[d]^{({\s -}1)}
& k_4 \ar[ld]^{(1)} \\
&k_2 & } \]
The modules in $\B$ do not necessarily have a simple socle, and in fact
the subcategory is not of finite type. As noted earlier, it is functorially finite.
However, the indecomposable direct summands in the
cluster tilting object we will construct all have simple socle.
The indecomposable submodules of $P_2$ we will need to construct a cluster tilting object
have the following radical filtrations. The indexing will be explained in Chapter \ref{chap3}.
\begin{center}
\begin{tabular}{|rr|rr|rr|rr|}
\hline
$M_{16}$ & ${\begin{smallmatrix} & & & & \\ & & & & \\ & 3 & & 4& & \\ & & 2 & & \\ & & & & \\ & & & &
\end{smallmatrix}}$ &
$M_{24}$ & ${\begin{smallmatrix} & & & & \\ & & & & \\ & 1 & & 3 & \\ & & 2& & \\ & & & & \\ & & & &
\end{smallmatrix}}$ &
$M_{25}$ & ${\begin{smallmatrix} & & & & \\ & & & & \\ & 1 & & 4 & \\ & & 2 & & \\ & & & & \\ & & & &
\end{smallmatrix}}$
&
$M_{26}$ & ${\begin{smallmatrix} & & & & \\ & & & & \\ & 1 & 3 & 4 & \\ & & 2 & & \\ & & & & \\ & & & &
\end{smallmatrix}}$ \\
\hline
$M_{68}$ & ${\begin{smallmatrix} & & & & \\ 1 & & 3 & & 4 \\ & 2 & & 2 & \\ 1 & & 3 & & 4 \\ & & 2 & & \\ & & & &
\end{smallmatrix}}$
&
$M_{18}$ & ${\begin{smallmatrix} & & & & \\ & & 1 & & \\ & & 2 & & \\ & 3 && 4 & \\ && 2 & & \\ & & & &
\end{smallmatrix}}$
&
$M_{-}$ & ${\begin{smallmatrix} & & & & \\ & & 4 & & \\ & & 2 & & \\ & 1 & & 3 & \\ & & 2 & & \\ & & & &
\end{smallmatrix}}$
&
$M_{+}$ & ${\begin{smallmatrix} & & & & \\ & & 3 & & \\ & & 2 & & \\ & 1 & & 4 & \\ & & 2 & & \\ & & & &
\end{smallmatrix}}$ \\
\hline
\end{tabular}
\end{center}
The indecomposable projectives in $\B$ are $P_2, M_{18}, M_{+}$ and $M_{-}$.
This follows from the following.
\begin{lemma}
Let $\la$ be a finite dimensional algebra with $X$ in $\mod\la$ such that $\Sub X$ is extension
closed in $\mod\la$. Then the indecomposable projective objects in $\Sub X$ are of the form
$P/\alpha(P)$ where $P$ is an indecomposable projective $\la$-module and $\alpha(P)$ is the smallest
submodule of $P$ such that $P/\alpha(P)$ is in $\Sub X$.
\end{lemma}
\begin{proof}
For convenience of the reader, we include a proof.
Let $P$ be indecomposable projective in $\mod\la$. It is clear that there is a smallest
submodule $\alpha(P)$ of $P$ such that $P/\alpha(P)$ is in $\Sub X$. For, if $A$ and $B$ are
submodules of $P$ with $P/A$ and $P/B$ in $\Sub X$, then clearly $P/{A\cap B}\subseteq P/A\oplus P/B$
is in $\Sub X$. It is clear that the natural map $f\colon P\to P/\alpha(P)$ is a minimal left
$\Sub X$-approximation and that every module in $\Sub X$ is a factor of a direct sum of $\la$-modules
of the form $P/\alpha(P)$ for $P$ indecomposable projective. To see that each $P/\alpha(P)$ is
projective in $\Sub X$, consider the exact sequence $0\to A\xto{s} B\xto{t}P/\alpha(P)\to 0$ in $\Sub X$.
Then there is some $ u\colon P\to B$ such that $ut=f$. Since $B$ is in $\Sub X$, then $u(\alpha(P))=0$,
so the sequence splits. Clearly there are no other indecomposable
projectives in $\Sub X$ since all modules in
$\Sub X$ are factors of direct sums of those of the form $P/\alpha(P)$.
\end{proof}
In addition we need the following, where we leave the details to the reader.
\begin{lemma}
Let $M = M_{16} \oplus M_{24} \oplus M_{25} \oplus M_{26} \oplus M_{68}$.
Then we have $\Ext^1(M,M)= 0$.
\end{lemma}
Hence, the module $T = M \oplus P_2 \oplus M_{18} \oplus M_{+} \oplus M_{-}$ in $\B$ is rigid.
If we add the other projectives we obtain the module $\widetilde{T} = T \oplus P_1 \oplus P_3 \oplus P_4$,
which also satisfies $\Ext^1(\widetilde{T},\widetilde{T}) = 0$.
Since $\widetilde{T}$ has the correct number $12=4\cdot 3$ of indecomposable
summands, it is a cluster tilting object in $\C=\mod \la$.
It is also clear from this that $T$ is a cluster tilting object in
$\B=\Sub P_2$, since
we added only projectives/injectives to $T$ to obtain $\widetilde{T}$.
Note that $\B$ has a substructure of the cluster structure of $\C$.
\section{Preprojective algebras for non-Dynkin quivers}\label{chap2}
In this chapter we deal with completions of preprojective algebras of a finite
connected quiver $Q$ with no oriented cycles, and mainly those which
are not Dynkin. In this case the modules of finite length coincide
with the nilpotent modules over the preprojective algebra.
These algebras $\la$ are known to be derived 2-CY (see
\cite{b,cb,bbk,gls2}).
Tilting $\la$-modules of projective dimension at most one were
investigated in \cite{ir} when the quiver $Q$ is a (generalized)
extended Dynkin quiver.
It was shown that such tilting modules are exactly
the ideals in $\la$ which are finite products of two-sided
ideals $I_i=\la(1-e_i)\la$, where $e_1,\cdots ,e_n$ correspond to
the vertices of the quiver, and that they are in
one-one correspondence with the elements of the corresponding Weyl
group, where $w=s_{i_1}\cdots s_{i_k}$ corresponds to
$I_w=I_{i_1}\cdots I_{i_k}$. Here we generalize some of the results
from \cite{ir} beyond the
noetherian case. In particular, we show that any finite product of
ideals of the form $I_i$ is a tilting module, and show that there is a
bijection between cofinite tilting ideals and elements of the
associated Coxeter group $W$.
For any descending chain of tilting ideals of the form $\la \supseteq
I_{i_1}\supseteq I_{i_1}I_{i_2}\supseteq I_{i_1}I_{i_2}\cdots
I_{i_k}\supseteq\cdots$ we show that for
$\la_m=\la/{I_{i_1}\cdots I_{i_m}}$, the categories $\Sub\la_m$ and
$\underline{\Sub}\la_m$ are respectively stably 2-CY and 2-CY with nice cluster tilting objects.
In this way we get, for any $w\in W$, a stably 2-CY category
$\C_w=\Sub(\Lambda/I_w)$, and for any reduced expression
$w=s_{i_1}\cdots s_{i_k}$, a cluster tilting object
$\bigoplus_{j=1}^k\Lambda/I_{s_{i_1}\cdots s_{i_j}}$ in $\C_w$.
We also construct cluster tilting subcategories of the derived 2-CY category $\fl\la$.
This way we get many examples of weak cluster
structures without loops or 2-cycles which are then cluster structures by Theorem \ref{teoI1.6}.
We also get many examples of substructures. In particular, any
cluster category and the stable category $\ul{\mod}\la$ of a preprojective algebra of Dynkin
type occur amongst this class.
We give a description of the quivers of the cluster tilting
objects/subcategories in terms of the associated reduced expressions.
For example, the quiver of the preprojective component of the hereditary
algebra with additional arrows from $X$ to $\tau X$ occurs this way.
In Section \ref{c3_sec3} results in this chapter are used
to show that coordinate rings of some unipotent cells of $\SL_2(
\mathbb{C}[t,t^{-1}] )$ have a cluster algebra structure.
We refer to \cite{i3} for corresponding results for $d$-CY algebras.\\
\hspace{7mm}
\subsection{Tilting modules over 2-CY algebras}\label{c2_sec1}
${}$ \\
Let $Q$ be a finite connected quiver without oriented cycles which is
not Dynkin, $k$ an algebraically closed field and $\la$ the completion
of the associated preprojective algebra. In \cite{ir} the
tilting $\la$-modules of projective dimension at most one were investigated
in the noetherian case, that is, when $Q$ is extended Dynkin \cite{bgl} (including
the generalized extended Dynkin quivers, which may have loops). In this section we
generalize some of these results to the non-noetherian case,
concentrating on the aspects that will be needed for our
construction of new 2-CY categories with cluster tilting
objects/subcategories in the next sections.
Note that since $\Lambda$ is complete, the Krull-Schmidt theorem holds
for finitely generated projective $\Lambda$-modules.
We say that a finitely presented $\la$-module $T$ is a {\em tilting module}
if
(i) there exists an exact sequence $0\to P_n\to\cdots\to
P_0\to T\to 0$ with finitely generated projective
$\Lambda$-modules $P_i$, (ii) $\Ext_{\la}^i(T,T)=0$ for any $i>0$, (iii)
there exists an exact sequence $0\to\Lambda\to T_0\to\cdots\to T_n\to 0$ with
$T_i$ in $\add T$.
We say that $T\in{\bf D}(\Mod\la)$ is a {\em tilting complex} \cite{rick} if
(i$'$) $T$ is quasi-isomorphic to an object in the
category ${\bf K}^{\bo}(\pr\la)$ of bounded complexes of finitely generated
projective $\la$-modules $\pr\la$,
(ii$'$) $\Hom_{{\bf D}(\Mod\la)}(T,T[i])=0$ for any $i\neq0$,
(iii$'$) $T$ generates ${\bf K}^{\bo}(\pr\la)$.
A tilting module is nothing but a module which is a tilting complex
since the condition (iii) can be replaced by (iii$'$).
A {\em partial tilting complex} is a direct summand of a tilting
complex. A {\em partial tilting module} is a module which is a partial
tilting complex.
Let $1,\cdots,n$ denote the vertices in $Q$, and let $e_1,\cdots,e_n$
be the corresponding idempotents. For each $i$ we denote by $I_i$ the
ideal $\la(1-e_i)\la$. Then $S_i=\la/I_i$ is a simple $\la$-module and
$\la^{\op}$-module since by assumption there are no loops in the quiver.
We shall show that each $I_i$, and any finite
product of such ideals, is a tilting ideal in $\la$, and give
some information about how the different products are related. But first
we give several preliminary results, where new proofs are needed
compared to \cite{ir} since we do not assume $\la$ to be noetherian.
\begin{lemma}\label{lemII1.1}
Let $T$ be a partial tilting $\Lambda$-module of projective dimension at
most 1 and $S$ a simple $\Lambda^{\op}$-module. Then at least one
of the statements $S\otimes_\Lambda T=0$ and ${\Tor}^\Lambda_1(S,T)=0$ holds.
\end{lemma}
\begin{proof}
We only have to show that there is a projective resolution $0\to
P_1\to P_0\to T\to0$ such that $P_0$ and $P_1$ do not have a common
summand. This is shown as in \cite[1.2]{hu}.
\end{proof}
Recall that
for rings $\la$ and $\Gamma$, we call an object $T$ in ${\bf D}(\Mod\Lambda\otimes_{\mathbb{Z}}\Gamma^{\op})$ a {\em two-sided tilting complex} if $T$ is a
tilting complex in ${\bf D}(\Mod \Lambda)$ and $\End_{{\bf D}(\Mod \Lambda)}(T)\simeq\Gamma$ naturally.
The following result is useful (see \cite{rick} and \cite[1.7]{ye}).
\begin{lemma}\label{lemII1.3}
Let $T\in{\bf D}(\Mod \Lambda\otimes_{\mathbb{Z}}\Gamma^{\op})$
be a two-sided tilting complex.
\begin{itemize}
\item[(a)] For any tilting complex (respectively, partial tilting
complex) $U$ of $\Gamma$, we have a tilting complex (respectively,
partial tilting complex) $T\stackrel{\bf L}{\otimes}_{\Gamma}U$ of $\Lambda$
such that $\End_{{\bf D}(\Mod\Lambda)}(T\stackrel{\bf L}{\otimes}_{\Gamma}U)\simeq\End_{{\bf D}(\Mod\Gamma)}(U)$.
\item[(b)] ${\bf R}\Hom_{\Lambda}(T,\Lambda)$ and ${\bf
R}\Hom_{\Gamma^{\op}}(T,\Gamma)$ are two-sided tilting complexes and
isomorphic in ${\bf D}(\Mod \Gamma\otimes_{\mathbb{Z}}\Lambda^{\op})$.
\end{itemize}
\end{lemma}
We collect some basic information on preprojective algebras.
\begin{proposition}\label{2CY}
Let $\Lambda$ be the completion of the preprojective algebra of a finite
connected non-Dynkin diagram without loops.
\begin{itemize}
\item[(a)] Let $\Gamma$ be the completion of $\la\otimes_{kQ_0}\la^{\op}$ with respect to the ideal
$J\otimes_{kQ_0}\la^{\op}+\la\otimes_{kQ_0}J^{\op}$ where $J$ is the radical of $\la$.
Then there exists a commutative diagram
$$\begin{array}{ccccccccccc}
0&\to&P_2&\stackrel{f_2}{\to}&P_1&\stackrel{f_1}{\to}&P_0&\stackrel{}{\to}&\Lambda&\to&0\\
&&\downarrow\wr&&\downarrow\wr&&\downarrow\wr&&\downarrow\wr&&\\
0&\to&\Hom_{\Gamma}(P_0,\Gamma)&\stackrel{f_1\cdot}{\to}&\Hom_{\Gamma}(P_1,\Gamma)&\stackrel{f_2\cdot}{\to}&
\Hom_{\Gamma}(P_2,\Gamma)&\stackrel{}{\to}&\Lambda&\to&0
\end{array}$$
of exact sequences of $\Gamma$-modules such that each $P_i$ is
a finitely generated projective $\Gamma$-module
and $P_0\simeq P_2\simeq\Gamma$.
\item[(b)]There exists a functorial
isomorphism $\Hom_{{\bf D}(\Mod\la)}(X,Y[1])\simeq
D\Hom_{{\bf D}(\Mod\la)}(Y,X[1])$ for any $X\in{\bf D}^{\bo}(\fl\la)$ and $Y\in{\bf K}^{\bo}(\pr\la)$.
\item[(c)]$\fl\la$ is derived 2-CY and $\gl\la=2$. In particular, any left ideal $I$ of $\la$ satisfies $\pd{}_\la I\le1$.
\item[(d)]$\Ext^i_\la(X,\la)=0$ for $i\neq 2$ and
$\Ext^2_\la(X,\la)\simeq DX$ for any $X\in\fl\la$.
\end{itemize}
\end{proposition}
\begin{proof}
(a) See \cite[Section 8]{gls1} and \cite[Section 4.1]{bbk}.
(b) This follows from (a) and \cite[4.2]{b}.
(c)(d) Immediate from (a) and (b).\end{proof}
We are now ready to show that each $I_i$, and a finite product of such
ideals, is a tilting module.
\begin{proposition}\label{propII1.4}
$I_i$ is a tilting $\Lambda$-module of projective dimension at most
one and $\End_\Lambda(I_i)=\Lambda$.
\end{proposition}
\begin{proof}
We have $\Ext^n_\Lambda(S_i,\Lambda)\simeq
D\Ext^{2-n}_\Lambda(\Lambda,S_i)=0$ for $n=0,1$ by Proposition \ref{2CY}.
Applying $\Hom_\Lambda(\text{ },\Lambda)$ to the exact sequence $0\to I_i\to\Lambda\to
S_i \to 0$, we get $\Hom_\Lambda(I_i,\Lambda)=\Lambda$. Applying
$\Hom_\Lambda(I_i,\text{ })$, we get an exact sequence $0\to
\End_\Lambda(I_i) \to \Hom_\Lambda(I_i,\Lambda)\to
\Hom_\Lambda(I_i,S_i)$. Since $\Hom_\Lambda(I_i,S_i)=0$, we
have $\End_\Lambda(I_i)= \Hom_\Lambda(I_i,\Lambda)=\Lambda$.
Applying $-\otimes_\Lambda S_i$ to the exact sequence in Proposition
\ref{2CY}(a) we have a projective resolution
\begin{equation}\label{simple resolution}
0 \to \Lambda e_i\stackrel{g}{\to}P\stackrel{f}{\to}\Lambda e_i\to
S_i \to 0
\end{equation}
with $\Im f=I_ie_i$ and $P \in \add \Lambda(1-e_i)$.
In particular $I_i=\Im f\oplus\Lambda(1-e_i)$ is a finitely presented $\la$-module with $\pd I_i\le1$.
We have $\Ext^1_\la(I_i,I_i)\simeq\Ext^2_\la(S_i,I_i)\simeq D\Hom_\la(I_i,S_i)=0$. Using \eqref{simple resolution}, we have an exact sequence
\[0\to\la\to P\oplus\la(1-e_i)\to I_ie_i\to0\]
such that the middle and the right terms belong to $\add I_i$.
Thus $I_i$ is a tilting $\la$-module.
\end{proof}
\begin{proposition}\label{propII1.5}
Let $T$ be a tilting $\Lambda$-module of projective dimension at
most one.
\begin{itemize}
\item[(a)]{If $\Tor^\Lambda_1(S_i,T)=0$, then $I_i\stackrel{\bf
L}{\otimes}_\Lambda T = I_i \otimes_{\la} T =I_iT$ is a tilting $\Lambda$-module of
projective dimension at most one.}
\item[(b)]{$I_iT$ is always a tilting $\Lambda$-module of projective
dimension at most one, and $\End_\la(I_iT)\simeq\End_\Lambda(T)$.}
\end{itemize}
\end{proposition}
\begin{proof}
(a) Since $\Tor^\Lambda_1(I_i,T)=\Tor^\Lambda_2(S_i,T)=0$
because $\pd T\le1$, we have $I_i\stackrel{\bf L}{\otimes}_\Lambda
T=I_i\otimes_\Lambda T$.
Since we have an exact sequence
$$
0= \Tor^\Lambda_1(S_i,T)\to I_i\otimes_\Lambda
T\to\Lambda\otimes_\Lambda T\to S_i\otimes_\Lambda T\to0,
$$
we have $I_i\otimes_\Lambda T=I_iT$.
Thus $I_i\stackrel{\bf L}{\otimes}_\Lambda T=I_iT$ is a tilting $\la$-module by Lemma \ref{lemII1.3} and Proposition \ref{propII1.4}.
Since $\pd T \leq 1$ and $\pd T/I_i T \leq 2$ by Proposition \ref{2CY}, we have $\pd I_iT\le1$.
\noindent
(b) By Lemma \ref{lemII1.1}, either $S_i \otimes_\Lambda T=0$ or $\Tor^\Lambda_1(S_i,T)=0$
holds. If $S_i\otimes_\Lambda T=0$, then $I_iT=T$ holds. If $\Tor^\Lambda_1(S_i,T)=0$,
then we apply (a). For the rest we use Lemma \ref{lemII1.3}.
\end{proof}
A left ideal $I$ of $\la$ is called {\em
cofinite} if $\Lambda/I\in {\fl} \Lambda$, and called {\em tilting}
(respectively, {\em partial tilting}) if it is a tilting
(respectively, partial tilting) $\la$-module.
Similarly, a {\em cofinite} (respectively, {\em (partial) tilting})
{\em right ideal} of $\la$ is defined. An ideal $I$ of $\la$ is called
{\em cofinite tilting} if it is cofinite tilting as a left and right ideal.
We denote by $\langle I_1,...,I_n\rangle$ the ideal semigroup
generated by $I_1,...,I_n$. Then we have the following result.
\begin{theorem}\label{teoII1.6}
\begin{itemize}
\item[(a)]Any $T\in\langle I_1,...,I_n\rangle$ is a
cofinite tilting ideal and satisfies ${\rm End}_\Lambda(T)=\Lambda$.
\item[(b)]Any cofinite tilting ideal of $\Lambda$ belongs to $\langle
I_1,...,I_n\rangle$.
\item[(c)] Any cofinite partial tilting left (respectively, right)
ideal of $\Lambda$ is a cofinite tilting ideal.
\end{itemize}
\end{theorem}
\begin{proof}
(a) This is a direct consequence of Propositions \ref{propII1.4} and
\ref{propII1.5}.
(b)(c) Let $T$ be a cofinite partial tilting left ideal of $\Lambda$.
If $T\neq\Lambda$, then there exists a simple submodule $S_i$ of $\Lambda/T$.
Since $\Hom_\la(S_i,\la)=0$, we have $\Ext^1_\Lambda(S_i,T)\neq0$. Thus we have
$\Tor_1^\Lambda(S_i,T)\simeq D\Ext^1_\Lambda(T,S_i)\simeq\Ext^1_\Lambda(S_i,T)\neq0$.
By Lemma \ref{lemII1.1}, we have $S_i\otimes_\Lambda T=0$.
Put $U={\bf R}\Hom_\Lambda(I_i,T)$. By Lemma \ref{lemII1.3}, we have that $U\simeq
{\bf R}\Hom_\Lambda(I_i,\Lambda)\stackrel{{\bf L}}{\otimes}_\Lambda T$
is a partial tilting complex of $\Lambda$. Since $\pd I_i\le 1$ and
$\Ext^1_\Lambda(I_i,T)\simeq\Ext^2_\Lambda(S_i,T)\simeq
D\Hom_\Lambda(T,S_i)\simeq S_i\otimes_\Lambda
T=0$, we have $U=\Hom_\Lambda(I_i,T)$, which
is a partial tilting $\la$-module.
Since we have a commutative diagram
$$\begin{array}{ccccccccc}
0=\Hom_\la(S_i,\la)&\to&\la&\to&\Hom_\la(I_i,\la)&\to&\Ext^1_\la(S_i,\la)=0&&\\
&&\cup&&\cup&&&&\\
&&T&\to&\Hom_\la(I_i,T)&\to&\Ext^1_\la(S_i,T)&\to&0\end{array}$$
of exact sequences, $U$ is a cofinite partial
tilting left ideal of $\Lambda$
containing $T$ properly, such that $U/T$ is a direct sum of copies of $S_i$.
By $S_i\otimes_\Lambda T=0$, we have $T=I_iU$.
Thus we have $T\in\langle I_1,\cdots,I_n\rangle$ by induction on the length of $\Lambda/T$.
\end{proof}
Here we pose the following question, for which there is a positive answer
in the extended Dynkin case \cite{ir}.
\begin{question}
For any tilting $\la$-module $T$ of projective dimension at most one,
does there exist some $U$ in $\langle I_1, \cdots, I_n \rangle$ such that $\add T = \add U$?
\end{question}
We have some stronger statements on products of the ideals $I_i$, generalizing results in the noetherian case
from \cite{ir}.
\begin{proposition}\label{propII1.7}
The following equalities hold for multiplication of
ideals.
\begin{itemize}
\item[(a)] $I_i^2=I_i$,
\item[(b)] $I_iI_j=I_jI_i$ if there is no arrow between $i$ and $j$ in $Q$,
\item[(c)] $I_iI_jI_i=I_jI_iI_j$ if there is precisely one arrow between $i$ and $j$ in $Q$.
\end{itemize}
\end{proposition}
\begin{proof}
(a) is obvious.
Parts (b) and (c) are proved in \cite[6.12]{ir} for module-finite
2-CY algebras. Here we give a direct proof for an arbitrary preprojective
algebra $\la$ associated with a finite quiver $Q$ without oriented cycles.
Let $I_{i,j} = \Lambda(1-e_i-e_j)\Lambda$.
Then any product of ideals $I_i$ and $I_j$ contains $I_{i,j}$. If there is no arrow from
$i$ to $j$, then
$\Lambda/I_{i,j}$ is semisimple. Thus $I_iI_j$ and $I_jI_i$
are contained in $I_{i,j}$, and we have $I_iI_j=I_{i,j}=I_jI_i$.
If there is precisely one arrow from $i$ to $j$, then $\la/I_{i,j}$
is the preprojective algebra of type $A_2$. Hence
there are two indecomposable projective $\Lambda/I_{i,j}$-modules,
whose Loewy series are ${i\choose j}$ and ${j\choose i}$.
Thus $I_iI_jI_i$ and $I_jI_iI_j$ are contained in $I_{i,j}$,
and we have $I_iI_jI_i=I_{i,j}=I_jI_iI_j$.
\end{proof}
Now let $W$ be the Coxeter group associated to the quiver $Q$,
so $W$ has generators $s_1,...,s_n$ with relations
$s_i^2=1$, $s_is_j=s_js_i$ if there is no arrow between $i$ and $j$ in $Q$,
and $s_is_js_i=s_js_is_j$ if there is precisely one arrow between
$i$ and $j$ in $Q$.
\begin{theorem}\label{teoII1.8}
There exists a bijection $W\to\langle I_1,...,I_n\rangle$.
It is given by $w\mapsto I_w =I_{i_1}I_{i_2}...I_{i_k}$ for any reduced
expression $w=s_{i_1}s_{i_2}...s_{i_k}$.
\end{theorem}
\begin{proof}
The corresponding result was proved in \cite{ir} in the noetherian case,
using a partial order of tilting modules. Here we use instead properties of Coxeter groups.
We first show that the map is well-defined.
Take two reduced expressions
$w=s_{i_1}s_{i_2}...s_{i_k}=s_{j_1}s_{j_2}...s_{j_k}$.
By \cite[3.3.1(ii)]{bb}, the two words $s_{i_1}s_{i_2}...s_{i_k}$
and $s_{j_1}s_{j_2}...s_{j_k}$ can be connected by a sequence of
the following operations:
(i) replace $s_is_j$ by $s_js_i$ (when there is no arrow between $i$ and $j$),
(ii) replace $s_is_js_i$ by $s_js_is_j$ (when there is precisely one arrow between $i$ and $j$).
Consequently, by Proposition \ref{propII1.7}(b)(c), we have
$I_{i_1}I_{i_2}...I_{i_k}=I_{j_1}I_{j_2}...I_{j_k}$.
Thus the map is well-defined.
Next we show that the map is surjective.
For any $I\in\langle I_1,...,I_n\rangle$, take an expression
$I=I_{i_1}I_{i_2}...I_{i_k}$ with a minimal number $k$.
Let $w =s_{i_1}s_{i_2}...s_{i_k}$.
By \cite[3.3.1(i)]{bb}, a reduced expression of $w$ is obtained from
the word $s_{i_1}s_{i_2}...s_{i_k}$ by a sequence of
the operations (i) and (ii) above and
(iii) remove $s_is_i$.
By Proposition \ref{propII1.7}, the operation (iii) cannot appear since $k$ is minimal.
Thus $w=s_{i_1}s_{i_2}...s_{i_k}$ is a reduced expression, and we have $I=I_w$.
Finally we show that the map is injective in a similar way as in
\cite{ir}. Let $\E ={\bf K}^{\bo}(\text{pr } \Lambda)$. For any $i$,
we have an autoequivalence $I_i\stackrel{{\bf L}}{\otimes}_\Lambda\ $
of $\E$ and an automorphism $[I_i\stackrel{{\bf
L}}{\otimes}_\Lambda\ ]$ of the Grothendieck group $K_0(\E)$.
By \cite[proof of 6.6]{ir}, we have the action
$s_i\mapsto[I_i\stackrel{{\bf L}}{\otimes}_\Lambda\ ]$ of $W$ on
$K_0(\E)\otimes_{\mathbb{Z}}\mathbb{C}$, which is shown to be faithful \cite[4.2.7]{bb}.
For any reduced expression $w=s_{i_1}s_{i_2}...s_{i_k}$, we have
$I_w=I_{i_1}\stackrel{{\bf L}}{\otimes}_\Lambda...\stackrel{{\bf L}}{\otimes}_\Lambda I_{i_k}$,
by Proposition \ref{propII1.5}(a) and the minimality of $k$.
Thus the action of $w$ on $K_0(\E)\otimes_{\mathbb{Z}}\mathbb{C}$ coincides with
$[I_w\stackrel{{\bf L}}{\otimes}_\Lambda\ ]$.
In particular, if $w,w'\in W$ satisfy $I_w=I_{w'}$,
then the actions of $w$ and $w'$ on $K_0(\E)\otimes_{\mathbb{Z}}\mathbb{C}$ coincide,
so we have $w=w'$ by the faithfulness of the action.
\end{proof}
We denote by $l(w)$ the length of $w\in W$.
We say that an infinite expression $s_{i_1}s_{i_2}\cdots s_{i_k}\cdots$ is
{\it reduced} if the expression $s_{i_1}s_{i_2}\cdots s_{i_k}$ is
reduced for any $k$.
\begin{proposition}\label{length}
Let $w\in W$ and $i\in\{1,\cdots,n\}$.
If $l(ws_i)>l(w)$, then we have $I_wI_i=I_{ws_i}\subsetneq I_w$.
If $l(ws_i)<l(w)$, then we have $I_wI_i=I_w\subsetneq I_{ws_i}$.
\end{proposition}
\begin{proof}
Let $w=s_{i_1}\cdots s_{i_k}$ be a reduced expression.
If $l(ws_i)>l(w)$, then $ws_i=s_{i_1}\cdots s_{i_k}s_i$ is a reduced
expression, so the assertion follows from Theorem \ref{teoII1.8}.
If $l(ws_i)<l(w)$, then $u=ws_i$ satisfies $l(us_i)>l(u)$, so
$I_uI_i=I_{us_i}=I_w\subsetneq I_u$.
\end{proof}
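For example, let $Q$ be the Kronecker quiver, with two arrows between the
vertices $1$ and $2$. Then $W$ is the infinite dihedral group, so every
alternating word in $s_1$ and $s_2$ is reduced, and Theorem \ref{teoII1.8}
together with Proposition \ref{length} yields the strictly descending chain
of pairwise distinct tilting ideals
$$\Lambda\supsetneq I_1\supsetneq I_1I_2\supsetneq I_1I_2I_1\supsetneq\cdots.$$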
Let $s_{i_1}s_{i_2}\cdots s_{i_k}\cdots$ be a (finite or infinite) expression
such that $i_k\in\{1,\cdots,n\}$. Let
$$w_k=s_{i_1}s_{i_2}\cdots s_{i_k},\ \ T_k=I_{w_k}=I_{i_1}I_{i_2}\cdots I_{i_k}\ \mbox{ and }\ \la_k=\la/T_k.$$
We have a descending chain
$$\Lambda=T_0\supseteq T_1\supseteq T_2\supseteq \cdots$$
of cofinite tilting ideals of $\Lambda$, and a chain
$$\la_1\leftarrow\la_2\leftarrow\la_3\leftarrow\cdots$$
of surjective ring homomorphisms.
We have the following properties of the chain.
\begin{proposition}\label{strict}
\begin{itemize}
\item[(a)] If $T_{m-1}\neq T_m$, then $\la_m$ differs from $\la_{m-1}$ in exactly
one indecomposable summand $\Lambda_me_{i_m}$.
\item[(b)] Let $k\le m$. Then $\la_ke_{i_k}$ is a projective $\la_m$-module
if and only if $i_k\notin\{i_{k+1},i_{k+2},\cdots,i_m\}$.
\item[(c)] $T_1\supsetneq T_2\supsetneq T_3\supsetneq\cdots$ holds if and only if $s_{i_1}s_{i_2}\cdots$ is
reduced.
\end{itemize}
\end{proposition}
\begin{proof}
(a) This follows from $T_m(1-e_{i_m}) = T_{m-1} I_{i_m} (1- e_{i_m}) = T_{m-1}(1-e_{i_m})$.
(b) If $i_k\notin\{i_{k+1},\cdots,i_m\}$, then $\la_ke_{i_k}$ is a
summand of $\la_m$ by (a), so it is a projective $\la_m$-module.
Otherwise, take the smallest $k'$ with $k<k'\le m$ satisfying
$i_k=i_{k'}$. Then we have $\la_ke_{i_k}=\la_{k'-1}e_{i_k}$ and that
$\la_ke_{i_k}$ is a proper factor module of $\la_{k'}e_{i_k}$ by (a).
Hence $\la_ke_{i_k}$ is not a projective $\la_m$-module.
(c) This follows from Proposition \ref{length}.
\end{proof}
Our next goal is to show that $\Ext_{\la}^1(T_k,T_m)=0$
for $k\le m$. For this the following result will be useful.
\begin{lemma}\label{lemII1.9}
Let the notation and assumptions be as above. Then $^{\perp_{>0}}{T_{m-1}}\subseteq
{}^{\perp_{>0}}T_m$, where $^{\perp_{>0}}T=\{X\in\mod\la \mid \Ext^i_{\la}(X,T)=0 \text{ for all }i>0\}$.
\end{lemma}
\begin{proof}
We can assume $T_{m-1}\neq T_m$. Then we have that $T_{m-1}\otimes_\Lambda
S_{i_m}\neq0$. Hence $\Tor_1^\Lambda(T_{m-1},S_{i_m})=0$ by Lemma
\ref{lemII1.1}, so
$T_{m-1}\otimes_\Lambda I_{i_m}=T_{m-1}I_{i_m}=T_m$ by Proposition \ref{propII1.5}.
Let $0\to P_1\to P_0\to I_{i_m}\to 0$ be a projective resolution.
We have $\Tor^\Lambda_1(T_{m-1},I_{i_m})\simeq\Tor^\Lambda_2(T_{m-1},S_{i_m})=0$.
Applying $T_{m-1}\otimes_\Lambda\ $, we have an exact sequence
$0\to T_{m-1}\otimes_\Lambda P_1\to T_{m-1}\otimes_\Lambda P_0\to T_m\to 0$.
This immediately implies ${}^{\perp_{>0}} T_{m-1}\subseteq {}^{\perp_{>0}} T_m$.
\end{proof}
We now have the following consequence.
\begin{proposition}\label{propII1.10}
With the above notation and assumptions, we have $\Ext_{\la}^1(T_k,T_m)=0$ for $k\le m$.
\end{proposition}
\begin{proof}
By Lemma \ref{lemII1.9} we have ${}^{\perp_{>0}}T_k\subseteq
{}^{\perp_{>0}}T_{k+1}\subseteq\cdots\subseteq{}^{\perp_{>0}}T_m$. Since $T_k$ is in
${}^{\perp_{>0}}T_k$, we then have that $T_k$ is in ${}^{\perp_{>0}}T_m$, and hence
$\Ext^1_{\la}(T_k,T_m)=0$ for $k\le m$.
\end{proof}
Later we shall use the following observation.
\begin{lemma}\label{hom-calculation}
Assume that the expression $s_{i_1}s_{i_2}\cdots$ is reduced.
Let $T_{k,m}=I_{i_k}\cdots I_{i_m}$ if $k\le m$ and $T_{k,m}=\Lambda$ otherwise.
Then we have $\Hom_\la(T_k,T_m)\simeq T_{k+1,m}=\{x\in\Lambda \mid
T_kx\subseteq T_m\}$ and $\Hom_\la(\la_k,\la_m)\simeq T_{k+1,m}/T_m$.
\end{lemma}
\begin{proof}
Let $U = \{ x \in \la \mid T_kx \subseteq T_m \}\supseteq T_{k+1,m}$.
If $k\ge m$, then clearly $U=\Lambda=T_{k+1,m}$ holds, and
$\Hom_\la(T_k,T_m)\subseteq\End_\la(T_k)\simeq\la$ by Theorem \ref{teoII1.6}.
Thus $\Hom_\la(T_k,T_m)\simeq\Lambda$.
We assume $k<m$.
Since $T_m = T_k \stackrel{\bf L}{\otimes}_\Lambda T_{k+1,m}$ holds by
Proposition \ref{propII1.5}(a) and Lemma \ref{lemII1.3},
we have ${\bf R} \Hom_{\la}(T_k, T_m)=
{\bf R} \Hom_{\la}(T_k, (T_k \stackrel{\bf L}{\otimes}_\Lambda T_{k+1,m})) =
{\bf R} \Hom_{\la}(T_k,T_k) \stackrel{\bf L}{\otimes}_\Lambda T_{k+1,m}
= \la \stackrel{\bf L}{\otimes}_\Lambda T_{k+1,m} = T_{k+1,m}$.
In particular, we have $\Hom_\la(T_k,T_m)=T_{k+1,m}$.
On the other hand, we have a commutative diagram
$$\begin{array}{ccccc}
&\la&\stackrel{}{\to}&\Hom_\la(T_k,\la)&\\
&\cup&&\cup&\\
&U&\to&\Hom_\la(T_k,T_m)&\simeq T_{k+1,m},
\end{array}$$
where the horizontal maps are given by $x\mapsto(\cdot\, x)$ for $x\in\la$ and are
injective. Thus we have $U\subseteq T_{k+1,m}$, and so $U=T_{k+1,m}$.
Now we show the second equality.
For any $f\in\Hom_\la(\la_k,\la_m)$, there exists a unique
element $x\in\la_m$ such that $f(y)=yx$ for any $y\in\la$.
Since $T_kx\subseteq T_m$ holds, we have $x\in U$.
Thus we have $\Hom_\la(\la_k,\la_m)\simeq U/T_m=T_{k+1,m}/T_m$.
\end{proof}
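In particular, taking $k=m$ in Lemma \ref{hom-calculation} recovers
$\End_\la(T_m)\simeq T_{m+1,m}=\la$ and
$\End_\la(\la_m)\simeq T_{m+1,m}/T_m=\la_m$,
in agreement with Theorem \ref{teoII1.6}.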
\hspace{7mm}
\subsection{Cluster tilting objects for preprojective algebras}\label{c2_sec2}
${}$ \\
Let again $\Lambda$ be the completion of the preprojective algebra of
a finite connected non-Dynkin quiver without loops over the field
$k$. We show that for a large class of
cofinite tilting ideals $I$ in $\Lambda$ we have that $\Lambda/I$ is a finite dimensional
$k$-algebra which is Gorenstein of dimension at most one, and the categories $\Sub\Lambda/I$
and $\underline{\Sub}\Lambda/I$ are stably 2-CY and 2-CY respectively. We describe some cluster
tilting objects in these categories, using tilting ideals. We also describe cluster tilting
subcategories in the derived 2-CY abelian category $\fl\Lambda$, which have an infinite number of
nonisomorphic indecomposable objects. Hence we get examples of cluster structures with infinite
clusters (see \cite{kr2} for other examples).
We start by investigating $\Lambda/T$ for our special cofinite tilting
ideals $T$ as a module over $\Lambda$ and over the factor ring
$\Lambda/U$ for a cofinite tilting ideal $U$ contained in $T$.
\begin{lemma}\label{lemII2.1}
Let $T$ and $U=TU'$ be cofinite tilting ideals in $\Lambda$.
Then $\Ext_{\Lambda}^1(\Lambda/T,\Lambda/U)=0=\Ext_{\Lambda}^1
(\Lambda/U,\Lambda/T)$.
\end{lemma}
\begin{proof}
Consider the exact sequence $0\to U\to \Lambda\to \Lambda/U\to 0$.
Applying $\Hom_\la(\la/T,\ )$, we have an exact sequence
\[\Ext^1_\la(\la/T,\la)\to\Ext^1_\la(\la/T,\la/U)\to\Ext^2_\la(\la/T,U).\]
We have $\Ext^1_{\la}(\Lambda/T,\Lambda)=0$ by Proposition \ref{2CY}.
It follows from Proposition \ref{propII1.10} that $\Ext_{\Lambda}^1(T,U)=0$.
Since $\Ext_{\Lambda}^2(\Lambda/T,U)\simeq\Ext^1_\la(T,U)=0$, it follows that
$\Ext_{\Lambda}^1(\Lambda/T,\Lambda/U)=0$.
Since $\Lambda$ is derived 2-CY it follows that also $\Ext_{\Lambda}^1(\Lambda/U,\Lambda/T)=0$.
\end{proof}
Using this lemma we obtain more information on $\Lambda/T$.
\begin{proposition}\label{propII2.2}
\begin{itemize}
\item[(a)]For a cofinite ideal $T$ in $\Lambda$ with
$\Ext^1_\la(\la/T,\la/T)=0$, the algebra $\Lambda/T$ is Gorenstein
of dimension at most one.
\item[(b)]For a cofinite tilting ideal $T$ in $\Lambda$, the factor
algebra $\Lambda/T$ is Gorenstein of dimension at most one.
\end{itemize}
\end{proposition}
\begin{proof}
(a) Consider the exact sequence $0\to \Omega_\Lambda( D(\Lambda/T))\to P\to
{D}(\Lambda/T)\to 0$ with a projective $\la$-module $P$. Using Lemma
\ref{lemII2.1} and \cite{ce}, we have
$\Tor^{\Lambda}_1(\Lambda/T,{D}(\Lambda/T))\simeq {D}
\Ext^1_{\Lambda^{\op}}(\Lambda/T,\Lambda/T)=0$. Applying $\Lambda/T
\otimes_{\Lambda}\ $ to the above exact sequence, we get the
exact sequence
$0\to\Lambda/T\otimes_{\Lambda}\Omega_\Lambda({D}(\Lambda/T))\to
\Lambda/T\otimes_{\Lambda} P\to
\Lambda/T\otimes_{\Lambda}{D}(\Lambda/T)\to 0$. The $\Lambda/T$-module
$\Lambda/T\otimes_{\Lambda}P$ is projective. To see that also
$\Lambda/T\otimes_{\Lambda}\Omega_\Lambda({D}(\Lambda/T))$ is a projective
$\Lambda/T$-module, we show that the functor
$\Hom_{\Lambda/T}(\Lambda/T\otimes_{\Lambda}\Omega_\Lambda({D}(\Lambda/T)),\text{
})\simeq\Hom_{\Lambda}(\Omega_\Lambda({D}(\Lambda/T)),\text{ })$ is exact
on $\mod\Lambda/T$. This follows from the functorial isomorphisms
\begin{eqnarray*}
\Ext_{\Lambda}^1(\Omega_\Lambda({D}(\Lambda/T)), \text{
})&\simeq&\Ext^2_{\Lambda}({D}(\Lambda/T), \text{ })\\
&\simeq& {D}\Hom_{\Lambda}(\text{ },{D}(\Lambda/T))\\
&\simeq& {D}\Hom_{\Lambda/T}(\text{ },{D}(\Lambda/T))\simeq \id_{\mod\la/T}
\end{eqnarray*}
Hence we conclude that $\pd_{\Lambda/T}D(\Lambda/T)\leq 1$. Then it
is well known and easy to see that $\pd_{(\la/T)^{\op}}D(\la/T)\le 1$, so that by definition $\Lambda/T$ is Gorenstein of
dimension at most one.
(b) This is a direct consequence of (a) and Lemma \ref{lemII2.1}.\end{proof}
When $\Lambda/T$ is Gorenstein of dimension at most one, the category
of Cohen-Macaulay modules is the category $\Sub(\Lambda/T)$ of first
syzygy modules (see \cite{ar, h2}). It is known that $\Sub(\Lambda/T)$ is a Frobenius
category, with $\add(\Lambda/T)$ being the category of projective and
injective objects, and the stable category
$\underline{\Sub}(\Lambda/T)$ is triangulated \cite{h1}.
Moreover $\Sub(\Lambda/T)$ is an
extension closed subcategory of $\mod\Lambda/T$ by Corollary \ref{cor2.6}, since $\id_{\la/T}\la/T\le1$
and $\Ext^1_{\la/T}(\la/T,\la/T)=0$. But to deduce the
stably 2-CY property from $\fl\la$ being derived 2-CY, we need that $\Sub\Lambda/T$ is also
extension closed in $\fl\Lambda$.
\begin{proposition}\label{propII2.3}
Let $T$ be a cofinite ideal with $\Ext^1_\la(\la/T,\la/T)=0$ (for example
a cofinite tilting ideal).
\begin{itemize}
\item[(a)]
$\Ext_{\Lambda}^1(\Lambda/T, X)=0=\Ext_{\Lambda}^1(X,\Lambda/T)$
for all $X$ in $\Sub\Lambda/T$.
\item[(b)]
$\Sub\Lambda/T$ is an extension closed subcategory of $\fl\Lambda$.
\item[(c)]
$\Sub\Lambda/T$ and $\underline{\Sub}\Lambda/T$ are stably 2-CY and 2-CY respectively.
\end{itemize}
\end{proposition}
\begin{proof}
(a) For $X$ in $\Sub\la/T$ we have an exact sequence $0\to X\to P\to Y\to 0$ with $Y$ in
$\Sub\Lambda/T$ and $P$ in $\add\Lambda/T$. Applying
$\Hom_{\Lambda}(\Lambda/T,\text{
})\simeq\Hom_{\Lambda/T}(\Lambda/T,\text{ })$, the sequence does not
change. Since $\Ext_{\Lambda}^1(\Lambda/T, \Lambda/T)=0$, we conclude that
$\Ext_{\Lambda}^1(\Lambda/T,X)=0$. Hence
$\Ext_{\Lambda}^1(X,\Lambda/T)=0$ by the derived 2-CY property of
$\fl\Lambda$.
\noindent
(b) Let $0\to X\to Y\to Z\to 0$ be an exact sequence in
$\fl\Lambda$, with $X$ and $Z$ in $\Sub\la/T$. Then we have a
monomorphism $X\to P$, with $P$ in $\add\Lambda/T$. Since
$\Ext_{\Lambda}^1(Z,P)=0$ by (a), we have a commutative diagram of
exact sequences
$$\xymatrix@C0.5cm@R0.5cm{
0\ar[r]& X\ar@{^{(}->}[d]\ar[r]& Y\ar@{^{(}->}[d]\ar[r]&
Z\ar@{=}[d]\ar[r]& 0\\
0\ar[r]& P\ar[r]& P\oplus Z\ar[r]& Z\ar[r]& 0
}$$
Thus $Y$ is a submodule of $P\oplus Z\in\Sub\Lambda/T$, and we have $Y\in\Sub\Lambda/T$.
\noindent
(c) Since $\Sub\Lambda/T$ is extension closed in $\fl \la$, we have $\Ext^1_{\Sub \la/T}(X,Y)= \Ext^1_{\la}(X,Y)$.
Since $\Sub\Lambda/T$ is Frobenius, it follows from Proposition \ref{propI1.1}, that
$\Sub \la/T$ is stably 2-CY, since $\fl \la$ is derived 2-CY,
and so $\ul{\Sub}\Lambda/T$ is 2-CY.
\end{proof}
We now want to investigate the cluster tilting objects in
$\Sub\Lambda/T$ and $\underline{\Sub}\Lambda/T$ for certain tilting
ideals $T$, and later also the cluster tilting subcategories of
$\fl\Lambda$. The following observation will be useful.
\begin{lemma}\label{lemII2.4}
Let $\Delta$ be a finite dimensional algebra and $M$ a
$\Delta$-module which is a generator. Let
$\Gamma=\End_{\Delta}(M)$, and assume $\gl\Gamma\leq 3$ and
$\pd_{\Gamma}D(M)\leq 1$.
Then for any $X$ in $\mod\Delta$ there is an exact sequence $0\to M_1\to M_0\to X\to 0$,
with $M_0$ and $M_1$ in $\add M$.
\end{lemma}
\begin{proof}
Let $X$ be in $\mod\Delta$, and consider the exact sequence $0\to
X\to I_0\to I_1$ where $I_0$ and $I_1$ are injective. Apply
$\Hom_{\Delta}(M,\text{ })$ to get an exact sequence $0\to
\Hom_{\Delta}(M,X)\to \Hom_{\Delta}(M,I_0)\to
\Hom_{\Delta}(M,I_1)$. Since by assumption
$\pd_{\Gamma}\Hom_{\Delta}(M,I_i)\leq 1$ for $i=0,1$ and $\gl
\Gamma\le 3$, we obtain $\pd_{\Gamma}\Hom_{\Delta}(M,X)\le
1$. Hence we have an exact sequence $0\to P_1\to P_0\to
\Hom_{\Delta}(M,X)\to 0$ in $\mod\ga$ with $P_0$ and $P_1$
projective. This sequence is the image under the functor $\Hom_{\Delta}(M,\text{
})$ of the complex $0\to M_1\to M_0\to X\to 0$ in $\mod\Delta$, with
$M_0$ and $M_1$ in $\add M$. Since $M$ is assumed to be a
generator, this complex must be exact, and we have our desired exact
sequence.
\end{proof}
Let now $\la=T_0\supsetneq T_1\supsetneq T_2\supsetneq \cdots$ be a strict
descending chain of tilting ideals corresponding to a (finite or
infinite) reduced expression $s_{i_1}s_{i_2}s_{i_3}\cdots$.
We want to describe some
natural cluster tilting objects for the algebras $\la_m=\la/T_m$. Let
$$\la_k = \la/T_k\ \mbox{ and }\ M_m=\bigoplus_{k=1}^m\la_k,$$
and $\ga=\End_{\la_m}(M_m)$. The following will be essential.
\begin{proposition}\label{propII2.5}
With the above notation we have the following.
\begin{itemize}
\item[(a)]
For $X$ in $\mod\la_m$ there is an exact sequence $0\to N_1\to
N_0\to X\to 0$ in $\mod\la_m $, with $N_i$ in $\add M_m$ for
$i=0,1$.
\item[(b)]
$\gl\ga\le 3$.
\end{itemize}
\end{proposition}
\begin{proof}
We prove (a) and (b) by induction on $m$. Assume first that $m=1$. Then $\la_1=\la/T_1$,
which is a simple $\la_1$-module. Since $M_1 =\la/T_1$, (a) and (b) are trivially satisfied.
Assume now that $m> 1$ and that (a) and (b) have been proved for
$m-1$. Then we first prove (b) for $m$. Note that since there are no loops for $\la$,
we have $T_{m-1}J\subseteq T_m$ where $J$ is the Jacobson radical of
$\la$, so that $J\la_m$ is a $\la_{m-1}$-module ($\ast$).
For an indecomposable object $X$ in $\M_m=\add M_m$, let
$f\colon C_0\to X$ be a minimal right almost split map in
$\M_m$. We first assume that $X$ is not a projective
$\la_m$-module. Then $f$ must be surjective. An
indecomposable object which is in $\M_m$ but not
in $\M_{m-1}$ is a projective $\la_m$-module, so we can write
$C_0=C_0'\oplus P$ where $C_0'\in \M_{m-1}$ and $P$ is a
projective $\la_m$-module. Since $f$ is right minimal, we have $\Ker
f\subseteq C_0'\oplus JP$, so that $\Ker f$ is a $\la_{m-1}$-module by
($\ast$). It follows by the induction assumption that there is an
exact sequence $0\to C_2\to C_1\to \Ker f\to 0$ with $C_1$ and $C_2$
in $\M_{m-1}$. Hence we have an exact sequence $0\to C_2\to
C_1\to C_0\to X\to 0$. Applying $\Hom_{\la}(M_m, \text{ })$ gives an
exact sequence
\begin{multline*}
0\to {\Hom_{\la}(M_m,C_2)}\to {\Hom_{\la}(M_m,C_1)}\to
{\Hom_{\la}(M_m,C_0)} \to {\Hom_{\la}(M_m,X)}\to S\to 0.
\end{multline*}
Then the module $S$, which is a simple module in the top of $\Hom_{\la}(M_m,X)$ in
$\mod\ga$, has projective dimension at most 3.
Assume now that $X$ is a projective $\la_m$-module. Then by ($\ast$) we have that $JX$ is in $\mod\la_{m-1}$.
By the induction assumption there is then an exact sequence $0\to C_1\to C_0\to J X\to 0$, with $C_0$ and $C_1$ in
$\M_{m-1}$. Hence we have an exact sequence $0\to C_1\to
C_0\to X$. Applying $\Hom_{\la}(M_m,\text{ })$ gives the exact
sequence
$$0\to \Hom_{\la}(M_m,C_1)\to \Hom_{\la}(M_m,C_0)\to
\Hom_{\la}(M_m,X)\to S\to 0$$
where $S$ is the simple top of $\Hom_{\la}(M_m,X)$, and hence $\pd_{\Gamma}S\le2$. It now follows that
$\gl\ga\le 3$.
We now want to show (a) for $m$. By Proposition \ref{propII2.2} we have an
exact sequence $0\to P_1\to P_0\to {D}(\la_m)\to 0$ in $\mod\la_m$, where $P_0$ and $P_1$ are projective
$\la_m$-modules. By
Lemma \ref{lemII2.1} we have $\Ext^1_{\la}(M_m,\la_m)=0$. Applying
$\Hom_{\la}(M_m,\text{ })$ gives the exact sequence
$$0\to \Hom_{\la}(M_m,P_1)\to \Hom_{\la}(M_m,P_0)\to
\Hom_{\la}(M_m,{ D}(\la_m))\to 0$$
Since $\Hom_{\la}(M_m,{D}(\la_m))\simeq {D}(M_m)$, we have
$\pd_{\ga}{D}(M_m)\le 1$. Now our desired result follows from Lemma
\ref{lemII2.4}.
\end{proof}
We can now describe some cluster tilting objects in $\Sub\la_m$ and
$\underline{\Sub}\la_m$.
\begin{theorem}\label{teoII2.6}
With the above notation, $M_m$ is a cluster tilting object in
$\Sub\la_m$ and in $\ul{\Sub}\la_m$.
\end{theorem}
\begin{proof}
We already have that $\Ext_{\la}^1(M_m,M_m)=0$ by Lemma \ref{lemII2.1},
so $\Ext_{\la_m}^1(M_m,M_m)=0$. Note that
$\Sub\la_m=\{X\in\mod\Lambda_m \mid \Ext_{\la_m}^1(X,\la_m)=0\}$ because $\la_m$ is a cotilting module with $\id \la_m \leq 1$. Since $\la_m$ is a summand
of $M_m$, we have that $M_m$ is in $\Sub\la_m$. Assume then that
$\Ext_{\la_m}^1(X,M_m)= 0 $ for $X$ in $\mod\la_m$. By Proposition \ref{propII2.5}(a) there is
an exact sequence $0\to C_1\to C_0\to X\to 0$ with $C_1$ and $C_0$ in
$\add M_m$, which must split by our assumption. Hence $X$ is in
$\add M_m$, and it follows that $M_m$ is a cluster tilting object in
$\Sub\la_m$. It then follows as usual that it is a cluster tilting
object also in $\underline{\Sub}\la_m$.
\end{proof}
We have now obtained a large class of 2-CY categories $\Sub\la/I_w$
and $\ul{\Sub}\la/I_w$ defined via elements $w$
of the associated Coxeter group $W$, along with cluster tilting objects
associated with reduced expressions of elements in $W$. We call these
{\em standard cluster tilting objects} for $\Sub\la/I_w$ or
$\ul{\Sub}\la/I_w$. We can also describe cluster tilting
subcategories with an infinite number of nonisomorphic indecomposable
objects in the categories $\fl\la$.
\begin{theorem}\label{teoII2.7}
With the above notation, assume that each $i$ occurs an
infinite number of times in $i_1,i_2,\cdots$. Then
$\M=\add\{\la_m \mid 1\le m\}$ is a cluster tilting subcategory of $\fl\la$.
\end{theorem}
\begin{proof}
We already know that $\Ext_{\la}^1(\la_k,\la_m)= 0 $ for all $k$ and
$m$. Let now $X$ be indecomposable in $\fl\la$. Then $X$ is a
$\la/J^k$-module for some $k$. We have $J=I_1\cap\cdots\cap I_n\supseteq
I_1 \cdots I_n$, where $1,\cdots ,n$ are the vertices in the
quiver. By our assumptions we have $J^k\supseteq T_m$ for some $m$, so
that $X$ is a $\la_m$-module. Consider the exact sequence $0\to
C_1\to C_0\to X\to 0$ in $\mod\la_m$, with $C_1$ and $C_0$ in $\add
M_m$, obtained from Proposition \ref{propII2.5}. Assume that
$\Ext_{\la}^1(X,\M)= 0 $. Since also $\Ext_{\la_m}^1(X,\M_m)= 0 $, the sequence splits,
so that $X$
is in $\M_m$ and hence in $\M$.
It only remains to show that $\M$ is functorially finite. So
let $X$ be in $\fl\la$. Using the above exact sequence $0\to
C_1\to C_0\to X\to 0$, we get the exact sequence
$$0\to \Hom_\la(C,C_1)\to \Hom_\la(C,C_0)\to \Hom_\la(C,X)\to \Ext_{\la}^1(C,C_1)=0$$
for $C$ in $\M$. Hence $\M$ is contravariantly finite.
For $X$ in $\fl\Lambda$, take a left $(\Sub\M)$-approximation $X\to Y$
and choose $m$ such that $Y\in \Sub\Lambda_m$. For any $Y\in \Sub\Lambda_m$, there exists
an exact sequence $0\to Y\to C_0\to C_1\to0$ with $C_i\in\M_m$ by Proposition \ref{pro-extra}(a).
Then the composition $X\to Y\to C_0$ is a left $\M$-approximation
since $\Ext^1_\la(C_1,\M)=0$, and hence $\M$ is also covariantly finite.
\end{proof}
Summarizing our results, we have the following.
\begin{theorem}\label{CY and ct}
\begin{itemize}
\item[(a)] For any $w\in W$, we have a stably 2-CY category $\C_w=\Sub\la/I_w$.
\item[(b)] For any reduced expression $w=s_{i_1}\cdots s_{i_m}$ of
$w\in W$, we have a cluster tilting object
$\bigoplus_{k=1}^m\la/I_{s_{i_1}\cdots s_{i_k}}$ in $\C_w$.
In particular the number of non-isomorphic indecomposable summands
in any cluster tilting object is $l(w)$.
\item[(c)] For any infinite reduced expression $s_{i_1}s_{i_2}\cdots$ such that each
$i$ occurs an infinite number of times in $i_1,i_2,\cdots$, we have
a cluster tilting subcategory $\add\{\Lambda/I_{s_{i_1}\cdots s_{i_k}} \mid 1\le k\}$ in $\fl\la$.
\end{itemize}
\end{theorem}
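For example, take a reduced word $w=s_is_j$ with $i\neq j$. Since $Q$ has no
loops, $\la/I_i$ is the simple module $S_i$, and $(\la/I_iI_j)e_i\simeq S_i$
because $I_je_i=\la e_i$. Hence the cluster tilting object
$\la/I_i\oplus\la/I_iI_j$ of (b) has exactly the two non-isomorphic
indecomposable summands $S_i$ and $(\la/I_iI_j)e_j$, in accordance with
$l(w)=2$.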
We end this section by showing that the subcategories $\Sub\la/I_w$ can be
characterized using torsionfree classes.
\begin{theorem}
Let $\la$ be the completed preprojective algebra of a connected non-Dynkin
quiver without loops. Let $\C$ be a torsionfree class in $\fl\la$ with
some cluster tilting object. Then we have $\C=\Sub\la/I_w$ for some element
$w$ in the Coxeter group associated with $\Lambda$.
\end{theorem}
\begin{proof}
We first prove that if $M$ is a cluster tilting object in $\C$, then
$\C=\Sub M$. We only have to show $\C\subseteq\Sub M$.
For any $X\in\C$, take a projective presentation
$\Hom_\la(M,N)\to\Ext^1_\la(M,X)\to 0$ ($*$) of $\End_\la(M)$-modules
with $N\in\add M$.
Replacing $M$ in ($*$) by $N$, we get an exact sequence $0 \to X \to
Y \to N \to 0$ as the image of the identity $1_N\in\End_\la(N)$ in $\Ext^1_\la(N,X)$.
Since $\C$ is extension closed, we have $Y\in \C$.
We have an exact sequence
$\Hom_\la(M,N) \to \Ext^1_\la(M,X) \to \Ext^1_\la(M,Y) \to
\Ext^1_\la(M,N)=0$.
Since ($*$) is exact, we have $\Ext^1_\la(M,Y)=0$.
Thus we have $Y\in\add M$ and $X\in\Sub M$.
Let now $I$ be the annihilator $\ann{}_\la M$ of $M$ in $\la$.
Then $I$ is clearly a cofinite ideal in $\la$, and $\ann{}_{\la/I}M=0$.
Further $\Sub M$ is extension closed also in $\mod\la/I$. Hence the direct
sum $A$ of one copy of each of the non-isomorphic indecomposable
$\Ext$-injective $\la/I$-modules in $\Sub M$ is a cotilting
$\la/I$-module satisfying
$\id{}_{\la/I}A\le 1$ and $\Sub M=\Sub A$ by \cite{Sm}.
Since $\Sub M$ is extension closed in the derived 2-CY
category $\fl\la$, the $\Ext$-injective $\la/I$-modules in $\Sub M$
coincide with the $\Ext$-projective ones, which are the projective
$\la/I$-modules. Hence we have that $A$ is a progenerator of $\la/I$
and $\Sub M=\Sub\la/I$. Since $\Sub\la/I$ is extension closed in $\fl\la$,
we have $\Ext^1_\la(\la/I,\la/I)=0$.
By Theorem \ref{teoII1.6}, we only have to show that $I$ is a partial
tilting left ideal. By Bongartz completion, it suffices to show $\Ext^1_\la(I,I)=0$.
The natural surjection $\la\to\la/I$ clearly induces a surjection
$\Hom_\la(\la/I,\la/I)\to\Hom_\la(\la,\la/I)$. Since $\la$ is
derived 2-CY, we have injections
$\Ext^2_\la(\la/I,\la)\to\Ext^2_\la(\la/I,\la/I)$ and
$\Ext^1_\la(I,\la)\to\Ext^1_\la(I,\la/I)$.
Using the exact sequence $0\to I\to\la\to\la/I\to0$, we have a
commutative diagram
\[\begin{array}{cccccc}
&&\Ext^1_\la(\la/I,\la/I)=0\\
&&\uparrow\\
\Hom_\la(I,\la)&\to&\Hom_\la(I,\la/I)&\to&\Ext^1_\la(I,I)&\to
\Ext^1_\la(I,\la)\to\Ext^1_\la(I,\la/I)\\
\uparrow&&\uparrow&&\uparrow\\
\la&\to&\la/I&\to&0
\end{array}\]
of exact sequences. Thus we have $\Ext^1_\la(I,I)=0$.
\end{proof}
Note that we have proved that an extension closed subcategory of $\fl\la$
of the form $\Sub X$ for some $X$ in $\fl\la$ with some cluster tilting
object must be $\Sub\la/I_w$ for some element $w$ in the Coxeter group
associated with $\Lambda$.
We point out that there are other extension closed subcategories of $\fl\la$
with some cluster tilting object.
Let $Q$ be an extended Dynkin quiver and $Q'$ a Dynkin subquiver,
and $\la$ and $\la'$ the corresponding completed preprojective algebras.
Then clearly $\mod\la'=\fl\la'$ is an extension closed subcategory of
$\fl\la$. Hence any extension closed subcategory of $\mod\la'$ is extension
closed in $\fl\la$, so Example 1 in Section \ref{c1_sec3}
is an example of an extension closed stably 2-CY subcategory of $\fl\la$
with some cluster tilting object, but which is not closed under submodules.\\
\hspace{7mm}
\subsection[Realizations of categories]{Realization of cluster categories and stable categories for preprojective
algebras of Dynkin type}\label{c2_sec3}
${}$ \\
In this section we show that for an appropriate choice of $T$ as a
product of tilting ideals $I_j=\la(1-e_j)\la$, any cluster category
is equivalent to some $\underline{\Sub}\la/T$. In particular, any cluster
category can be realized as the stable category of a Frobenius
category with finite dimensional homomorphism spaces. We also show that the stable
categories for preprojective algebras of Dynkin type can be realized this way.
Let $Q$ be a finite connected quiver without loops, $kQ$ the associated
path algebra, and $\la $ the completion of the preprojective algebra of $Q$. Choose a complete set of orthogonal primitive
idempotents $e_1,...,e_n$ of $kQ$. We can assume that $e_i(kQ)e_j=0$
for any $i>j$. We regard $e_1,...,e_n$ as a complete set of orthogonal primitive
idempotents of $\Lambda$, and let as before $I_i =\Lambda(1-e_i)\Lambda$.
Assume first that $Q$ is not Dynkin.
We consider an exact stably 2-CY category associated to the square
$w^2$ of a Coxeter element $w=s_1s_2\cdots s_n\in W$.
Let $\la_i=\la/I_1I_2\cdots I_i$ and
$\la_{i+n}=\la/I_1I_2\cdots I_nI_1\cdots I_i$ for $1\le i\le n$.
We have seen in Section \ref{c2_sec2} that $\Sub\la_{2n}$, and also $\ul{\Sub}\la_{2n}$, has a cluster tilting object
$M=\oplus_{i=1}^{2n}\la_i$.
We shall need the following.
\begin{lemma}\label{lemII3.1}
Assume that $Q$ is not Dynkin.
Then $I_1\cdots I_nI_1\cdots I_nI_1\cdots$ gives rise to a
strict descending chain of tilting ideals.
In particular, $s_1\cdots s_ns_1\cdots s_ns_1\cdots$ is reduced.
\end{lemma}
\begin{proof}
Assume to the contrary that the descending chain of ideals is not strict. Let
$T_i=I_1 \cdots I_i$ and $U_i=I_1\cdots I_{i-1}I_{i+1}\cdots I_n$ for $i=1,\cdots, n$.
Then we have $T_n^kT_{i-1}=T_n^kT_i$ for some $i=1,\cdots,n$ and $k\ge0$, where $T_0=\la$.
Hence we obtain $T_n^{k+1}=T_n^kU_i$.
Then we get $T_n^{k+m}=T_n^kU_i^m$ for any $m>0$. Since $U_i e_i = \la e_i$ and $J\supseteq T_n$,
we have $J^{m+k}e_i \supseteq T_n^{m+k}e_i=T_n^kU^m_ie_i = T_n^ke_i$. Since $(\la/T_n^k)e_i$ has finite
length, we have $J^{m+k}e_i=T_n^ke_i$ for $m$ sufficiently large. Since $\bigcap_{m>0}J^me_i=0$,
this gives $T_n^ke_i=0$, which is a
contradiction since $\la e_i$ has infinite length.
We have the latter assertion from Proposition \ref{strict}.
\end{proof}
We have the following.
\begin{proposition}\label{teoII3.2}
Let $Q$ be a finite connected non-Dynkin quiver without oriented cycles
and with vertices $1, \cdots, n$ ordered as above.
Let $\la_{2n}=\la/{(I_1\cdots I_n)^2}$. Then $\la_n=\la/{I_1\cdots I_n}$
is a cluster tilting object in $\ul{\Sub}\la_{2n}$ with
$\End_{\ul{\Sub}\la_{2n}}(\la_n)\simeq kQ$.
\end{proposition}
\begin{proof}
Since the associated chain of ideals is strict descending by Lemma \ref{lemII3.1}, our general theory applies.
We have a cluster tilting object $\bigoplus_{i=1}^{2n}\la_i$ in $\ul{\Sub}\la_{2n}$ by Theorem \ref{teoII2.6}.
We have $\add\bigoplus_{i=1}^{2n}\la_i=\add(\la_n\oplus\la_{2n})$ in
$\fl\la$ by Proposition \ref{strict}.
Thus $\la_n$ is a cluster tilting object in $\ul{\Sub}\la_{2n}$.
Note that the path algebra $kQ$ is, in a natural way, a factor algebra of $\la$, and $kQ$ is hence a $\la$-module.
We want to show that the $\la$-modules $\la_n$ and $kQ$ are isomorphic.
Let $P_j$ be the indecomposable projective $\la$-module corresponding to the vertex $j$.
Then $I_{j+1}\cdots I_nP_j=P_j$ and $I_jP_j=JP_j$, the smallest submodule of $P_j$ such that
the corresponding factor has only composition factors $S_j$. Further, $I_{j-1}I_jP_j$
is the smallest submodule of $I_jP_j=JP_j$ such that the factor has only composition
factors $S_{j-1}$, etc. By our choice of ordering, this means that the paths starting at $j$,
with decreasing indexing on the vertices, give a basis for $P_j/I_1\cdots I_nP_j$.
In other
words, we have $P_j/I_1\cdots I_nP_j\simeq(kQ)e_j$. Hence the $\la$-modules $\la_n=\la/I_1\cdots I_n$
and $kQ$ are isomorphic, so that $\End_{\la}(\la_n)\simeq kQ$.
It remains to show $\End_{\la_{2n}}(\la_n)\simeq\End_{\ul{\Sub}\la_{2n}}(\la_n)$.
By Lemma \ref{hom-calculation}, any morphism
from $\la_{n}$ to $\la_{2n}$ is given by a right multiplication of an
element in $(I_1\cdots I_n)/(I_1\cdots I_n)^2$.
This implies
$\Hom_\la(\la_n,\la_{2n})\Hom_\la(\la_{2n},\la_n)=0$.
Thus we have the assertion.
\end{proof}
We now show that we have the same kind of result for Dynkin quivers.
\begin{proposition}\label{teoII3.3}
Let $Q'$ be a Dynkin quiver with vertices $1,\cdots,m$ contained in a finite
connected non-Dynkin quiver $Q$ without oriented cycles and with vertices $1,\cdots,n$ ordered as before.
Let $\la$ be the preprojective algebra of $Q$ and
$\la_{n+m}=\la/(I_1\cdots I_nI_1\cdots I_m)$. Then
$\la_m=\la/ I_1\cdots I_m$ is a cluster tilting object in $\ul{\Sub}\la_{n+m}$
with $\End_{\ul{\Sub}\la_{n+m}}(\la_m)\simeq kQ'$.
\end{proposition}
\begin{proof}
Since we have seen in Lemma \ref{lemII3.1} that the product $(I_1\cdots I_n)^2$ gives
rise to a strict descending chain of ideals,
it follows that the same holds for $I_1\cdots I_nI_1\cdots I_m$.
The assertions follow as in the proof of Proposition \ref{teoII3.2}.
\end{proof}
Recall from \cite{kr2} that if a connected algebraic triangulated 2-CY category $\C$ has a cluster tilting object $M$
whose quiver $Q$ has no oriented cycles, then $\C$ is triangle equivalent to the cluster
category $\C_{kQ}$. We then get the following consequence of the last two results.
\begin{theorem}\label{teoII3.4}
Let $Q'$ be a finite connected quiver without oriented cycles. Let
$Q=Q'$ if $Q'$ is not Dynkin, and $Q$ as in
Proposition \ref{teoII3.3} if $Q'$ is Dynkin.
Let $\la$ be the preprojective algebra of $Q$.
Then there is a tilting ideal $I$ in $\la$ such that $\ul{\Sub}\la/I$
is triangle equivalent to the cluster category $\C_{kQ'}$ of $Q'$.
\end{theorem}
We finally show that the categories $\ul{\mod}\la'$, where $\la'$ is the preprojective
algebra of a Dynkin quiver $Q'$, can also be realized in this way.
\begin{theorem}\label{teoII3.5}
Let $Q'$ be a Dynkin quiver contained in a finite connected non-Dynkin
quiver $Q$ without loops. We denote by $\Lambda'$ the preprojective algebra of $Q'$,
by $W'$ the subgroup of $W$ generated by $\{s_i \mid i\in Q_0'\}$,
and by $w_0$ the longest element in $W'$.
Then $\Lambda'$ is isomorphic to $\Lambda/I_{w_0}$
and $\ul{\mod}\la'=\ul{\Sub}\Lambda/I_{w_0}$.
\end{theorem}
\begin{proof}
Let $I_{Q'}:=\la(\sum_{i\in Q_0\backslash Q_0'}e_i)\la$.
Since we have $\la/I_{Q'}\simeq\la'$, we only have to show $I_{w_0}=I_{Q'}$.
We use the fact that $I_{Q'}$ is minimal amongst all two-sided ideals
$I$ of $\la$ such that any composition factor of $\la/I$ is $S_i$ for
some $i\in Q_0'$.
Since $w_0$ is a product of $s_i$ ($i\in Q_0'$),
any composition factor of $\la/I_{w_0}$ is $S_i$ for some $i\in Q_0'$.
Thus we have $I_{w_0}\supseteq I_{Q'}$.
On the other hand, since $w_0$ is the longest element of $W'$,
we have $l(s_iw_0)<l(w_0)$ for any $i\in Q_0'$.
By Proposition \ref{length}, we have $I_iI_{w_0}=I_{w_0}$ for any $i\in Q_0'$.
Hence the top of the left $\la$-module $I_{w_0}$ has no composition factor $S_i$ with
$i\in Q_0'$, so $I_{w_0}=\la(\sum_{j\in Q_0\backslash Q_0'}e_j)I_{w_0}\subseteq I_{Q'}$.
This implies $I_{w_0}=I_{Q'}$.
\end{proof}
Using Theorem \ref{teoII3.5}, we see that our theory also applies to
preprojective algebras of Dynkin type. In particular, we can specialize
Theorem \ref{CY and ct} to recover the following result from \cite{gls1}.
\begin{corollary}
For a preprojective algebra $\la'$ of a Dynkin quiver the number of
non-isomorphic indecomposable summands in a cluster tilting object
is equal to the length $l(w_0)$ of the longest element in the associated
Weyl group, which is equal to the number of positive roots.
\end{corollary}
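As a sanity check (our illustration, not part of the original text), the count can be made completely explicit in type $A_2$.

```latex
% Type A_2: the Weyl group is W = <s_1, s_2>, the longest element is
% w_0 = s_1 s_2 s_1, so l(w_0) = 3, which equals the number of
% positive roots {alpha_1, alpha_2, alpha_1 + alpha_2}.
% Correspondingly, a cluster tilting object has three indecomposable
% summands; a standard choice is
\[
T \;=\; P_1 \oplus P_2 \oplus S_1,
\]
% the two indecomposable projective Lambda'-modules together with the
% simple S_1 (exchanging S_1 for S_2 gives the other basic cluster
% tilting object).
```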
We also obtain a large class of cluster tilting objects associated with
the different reduced expressions of $w_0$.
Our results can also be viewed as giving an interpretation in terms of
tilting theory of some functors $\mathcal{E}_i$ used in \cite[5.1]{gls3}.\\
\hspace{7mm}
\subsection{Quivers of cluster tilting subcategories}
${}$ \\
In this section we show that the quivers of standard cluster tilting
subcategories associated with a reduced expression can be described
directly from the reduced expression.
Let $s_{i_1}s_{i_2}\cdots s_{i_k}\cdots$ be a (finite or infinite)
reduced expression associated with a graph $\Delta$ with vertices
$1,\cdots,n$.
We associate with this sequence a quiver $Q(i_1,i_2,\cdots)$ as follows,
where the vertices correspond to the $s_{i_k}$.
\begin{itemize}
\item[-] For each two consecutive occurrences of the same $i$ ($i\in\{1,\cdots,n\}$),
draw an arrow from the second one to the first one.
\item[-] For each edge $i\stackrel{d_{ij}}{-}j$, pick out the expression
consisting of the $i_k$ which are $i$ or $j$, so that we have
$\cdots i i \cdots i j j \cdots j i i \cdots i \cdots$.
We draw $d_{ij}$ arrows from the last $i$ in a connected set of $i$'s
to the last $j$ in the next set of $j$'s, and the same from $j$ to
$i$. (Note that since by assumption both $i$ and $j$ occur an infinite
number of times if the expression is infinite, each connected set of $i$'s
or $j$'s is finite.)
\end{itemize}
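As a warm-up before the larger examples given later, consider the smallest nontrivial case (our illustration).

```latex
% Delta of type A_2 (one edge 1 -- 2, d_{12} = 1) and the reduced
% expression s_1 s_2 s_1; write p_1, p_2, p_3 for the vertices of
% Q(1,2,1) corresponding to the three letters.
% - The two occurrences of 1 (letters p_1 and p_3) give an arrow
%   p_3 -> p_1 from the second occurrence to the first.
% - The edge rule applied to the subword 1 2 1 gives p_1 -> p_2 and
%   p_2 -> p_3, one arrow each since d_{12} = 1.
\[
\xymatrix{p_1 \ar[r] & p_2 \ar[r] & p_3 \ar@/^1.2pc/[ll]}
\]
% Removing the last occurrence of each letter (p_2 and p_3) leaves
% \ul{Q}(1,2,1): a single vertex p_1 with no arrows.
```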
Note that in the Dynkin case essentially the same quiver has been used in
\cite{fz3}.
For a finite reduced expression $s_{i_1}\cdots s_{i_k}$ we denote by
$\ul{Q}(i_1,\cdots,i_k)$ the quiver obtained from $Q(i_1,\cdots,i_k)$
by removing the last occurrence of each $i$ in $Q_0$.
We denote by $\Lambda=T_0\supsetneq T_1\supsetneq\cdots$ the associated
strictly descending chain of tilting ideals.
Then we have a cluster tilting
subcategory $\M(i_1,i_2,\cdots)=\add\{\Lambda_k \mid k>0\}$
for $\Lambda_k:=\Lambda/T_k$.
\begin{theorem}\label{quiver}
Let the notation be as above.
\begin{itemize}
\item[(a)]The quiver of the cluster tilting
subcategory $\M(i_1,i_2,\cdots)$ is $Q(i_1,i_2,\cdots)$.
\item[(b)]The quiver of $\ul{\End}_\la(\M(i_1,\cdots,i_k))$ is
$\ul{Q}(i_1,\cdots,i_k)$.
\end{itemize}
\end{theorem}
Before we give the proof, we give some examples and consequences.
It follows from the definition that we get the same quiver if we
interchange two neighbors in the expression of $w$ which are not
connected with any edge in $\Delta$. But if we take two reduced expressions in
general, we may get different quivers, as the following examples show.
Let $\Delta$ be the graph
$\xymatrix@R0.1cm@C0.5cm{&2\ar@{-}[dr]\\ 1\ar@{-}[ur]&&3\ar@{-}[ll]}$
and $w=s_1s_2s_1s_3s_2=s_2s_1s_2s_3s_2=s_2s_1s_3s_2s_3$ expressions
which are clearly reduced.
The first expression gives the quiver
$$\xymatrix@R0.1cm{1\ar[r]&2\ar[r]\ar@/_1pc/[rr]&
1\ar@/_1pc/[ll]\ar@/_1pc/[rr]\ar[r]&3\ar[r]&2\ar@/_1pc/[lll]\\
{\begin{smallmatrix}1\end{smallmatrix}}&
{\begin{smallmatrix}&2\\ 1&\end{smallmatrix}}&
{\begin{smallmatrix}1&\\ &2\end{smallmatrix}}&
{\begin{smallmatrix}&&3&&\\ &2&&1&\\ 1&&&&2\end{smallmatrix}}&
{\begin{smallmatrix}&2&&&\\ 1&&3&&\\ &2&&1&\\ &&&&2\end{smallmatrix}}}$$
the second one gives the quiver
$$\xymatrix@R0.1cm{2\ar[r]&1\ar@/_1pc/[rrr]\ar@/_1pc/[rr]&
2\ar@/_1pc/[ll]\ar[r]&3\ar[r]&2\ar@/_1pc/[ll]\\
{\begin{smallmatrix}2\end{smallmatrix}}&
{\begin{smallmatrix}1&\\ &2\end{smallmatrix}}&
{\begin{smallmatrix}&2\\ 1&\end{smallmatrix}}&
{\begin{smallmatrix}&&3&&\\ &2&&1&\\ 1&&&&2\end{smallmatrix}}&
{\begin{smallmatrix}&2&&&\\ 1&&3&&\\ &2&&1&\\ &&&&2\end{smallmatrix}}}$$
and the third one gives the quiver
$$\xymatrix@R0.1cm{2\ar[r]\ar@/_1pc/[rr]&1\ar@/_1pc/[rr]\ar@/_1pc/[rrr]&
3\ar[r]&2\ar@/_1pc/[lll]\ar[r]&3\ar@/_1pc/[ll]\\
{\begin{smallmatrix}2\end{smallmatrix}}&
{\begin{smallmatrix}1&\\ &2\end{smallmatrix}}&
{\begin{smallmatrix}&3&&\\ 2&&1&\\ &&&2\end{smallmatrix}}&
{\begin{smallmatrix}&2&&&\\ 1&&3&&\\ &2&&1&\\ &&&&2\end{smallmatrix}}&
{\begin{smallmatrix}&&3&&\\ &2&&1&\\ 1&&&&2\end{smallmatrix}}}$$
We now investigate the relationship between the cluster tilting
objects given by different reduced expressions of the same element.
\begin{lemma}\label{replace}
Let $w=s_{i_1}\cdots s_{i_m}=s_{i'_1}\cdots s_{i'_m}$ be reduced
expressions and $\Lambda=T_0\supsetneq T_1\supsetneq\cdots$ and
$\Lambda=T_0'\supsetneq T_1'\supsetneq\cdots$ corresponding tilting ideals.
\begin{itemize}
\item[(a)] Assume that for some $k$ we have $i_k=i'_{k+1}$,
$i'_k=i_{k+1}$ and $i_j=i'_j$ for any
$j\neq k,k+1$. Then the corresponding cluster tilting objects are isomorphic.
\item[(b)] Assume that for some $k$ we have $i_{k-1}=i'_{k}=i_{k+1}$,
$i'_{k-1}=i_{k}=i'_{k+1}$ and $i_j=i'_j$ for any
$j\neq k,k\pm1$. Then the corresponding cluster tilting objects are
related by exchanging the summands $T_{k-1}e_{i_{k-1}}$ and $T'_{k-1}e_{i'_{k-1}}$.
\end{itemize}
\end{lemma}
\begin{proof}
(a) Obviously we have $T_j=T'_j$ for any $j<k$. Since
$s_{i_k}s_{i_{k+1}}=s_{i'_k}s_{i'_{k+1}}$, we have $I_{i_k}\
I_{i_{k+1}}=I_{i'_k}I_{i'_{k+1}}$.
Thus we have $T_j=T_j'$ for any $j>k+1$. In particular, we have
$T_je_{i_j}=T'_je_{i'_j}$ for any $j\neq k,k+1$. Since
$I_{i_k}e_{i_k}=I_{i'_k}I_{i'_{k+1}}e_{i'_{k+1}}$, we have
$T_ke_{i_k}=T'_{k+1}e_{i'_{k+1}}$.
Similarly, we have $T_{k+1}e_{i_{k+1}}=T'_ke_{i'_k}$.
Thus the assertion follows.
(b) Since $I_{i_{k-1}}I_{i_{k}}I_{i_{k+1}}=I_{i'_{k-1}}I_{i'_{k}}I_{i'_{k+1}}$, we
have $T_je_{i_j}=T'_je_{i'_j}$ for any $j\neq k,k\pm1$.
Since $I_{i_{k-1}}I_{i_{k}}e_{i_{k}}=I_{i'_{k-1}}I_{i'_{k}}I_{i'_{k+1}}e_{i'_{k+1}}$,
we have $T_{k}e_{i_{k}}=T'_{k+1}e_{i'_{k+1}}$. Similarly we have
$T_{k+1}e_{i_{k+1}}=T'_{k}e_{i'_{k}}$. Thus we have the assertion.
\end{proof}
As an illustration, note that in the above example we obtain the
second quiver from the first by mutation at the left vertex.
Immediately we have the following conclusion.
\begin{proposition}\label{transitivity}
All cluster tilting objects in $\Sub(\Lambda/I_w)$ obtained from
reduced expressions of $w$ can be obtained from each other under
repeated exchanges.
\end{proposition}
\begin{proof}
This is immediate from Lemma \ref{replace} and \cite[3.3.1]{bb} since
we get from one reduced expression to another by applying the
operations described in Lemma \ref{replace}.
\end{proof}
Using Theorem \ref{teoII3.5} we see that for preprojective algebras of
Dynkin quivers we get the quivers of the endomorphism algebras associated
with reduced expressions of the longest element $w_0$.
For a stably 2-CY category or a triangulated 2-CY category $\C$ with
cluster tilting subcategories we have an associated {\em cluster tilting graph}
defined as follows. The vertices correspond to the non-isomorphic basic
cluster tilting objects, and two vertices are connected with an edge if
the corresponding cluster tilting objects differ in exactly one
indecomposable summand. For cluster categories this graph is known to be
connected \cite{bmrrt}, while this is an open problem in general.
For the categories $\Sub\la/I_w$ or $\ul{\Sub}\la/I_w$ it follows from
Proposition \ref{transitivity} that all standard cluster tilting objects
belong to the same component of the cluster tilting graph, and we call
this the {\em standard component}.
\medskip
We now illustrate with some classes of examples.
Let $Q$ be a connected non-Dynkin quiver without oriented cycles,
and vertices $1,\cdots,n$, where there is no arrow $i\to j$ for $j>i$.
(a) Let $w=s_1s_2\cdots s_ns_1s_2\cdots s_n$. The last $n$ vertices
correspond to projectives, so the quiver for the cluster tilting object
in the stable category $\ul{\Sub}\la/I_w$ is $Q$, which has no
oriented cycles. So we get an alternative proof of Proposition \ref{teoII3.2}.
(b) Choose $w=s_1s_2\cdots s_ns_1s_2\cdots s_n\cdots$.
Ordering the indecomposable preprojective modules as
$P_1,\cdots,P_n,\tau^{-1}P_1,\cdots,\tau^{-1}P_n,\cdots,\tau^{-i}P_1,\cdots,
\tau^{-i}P_n,\cdots$
where $P_i$ is the projective module associated with vertex $i$,
we have a bijection between the indecomposable preprojective modules
and the terms in the expression for $w$. Then the quiver of
the corresponding cluster tilting subcategory is the
preprojective component of the AR quiver of $kQ$, with an additional
arrow from $X$ to $\tau X$ for each indecomposable preprojective module
$X$. This is a direct consequence of our rule, since we know by Lemma
\ref{lemII3.1} that the expression for $w$ is reduced.
(c) Now take a part $\mathcal{P}$ of the AR quiver of the
preprojective component, closed under predecessors. Consider the
expression obtained from $s_1s_2\cdots s_ns_1s_2\cdots s_n\cdots$
by only keeping the terms corresponding to the objects in
$\mathcal{P}$ under our given bijection. We show below that this new
expression is reduced. Then it follows directly from our rule that,
by adding an arrow $X\to\tau X$ for each nonprojective $X$ in $\mathcal{P}$, we get
the quiver of the cluster tilting object given by the above reduced
expression. That this quiver is the quiver of a cluster tilting object
was also shown in \cite{gls1b} for $\mathcal{P}$ being the AR quiver of a Dynkin
quiver, and in \cite{gls5} in the general case.
\begin{lemma}
The word associated with $\mathcal{P}$ obtained in this way is reduced.
\end{lemma}
\begin{proof}
The word satisfies the following conditions.
\begin{itemize}
\item[(a)] For each pair $(i,j)$ of vertices connected by some edge,
the letters $i$ and $j$ alternate in the subword obtained by removing all other letters.
\item[(b)] $w=A_1A_2\cdots A_t$, where
each $A_s$ is a strictly increasing sequence of numbers in $\{1,\cdots,n\}$,
such that, if $j\notin A_s$, then $j\notin A_{s+1}$,
and if $i<j$ are connected with an edge and
$i\notin A_s$ (respectively, $j\notin A_s$), then $j\notin A_s$
(respectively, $i\notin A_{s+1}$).
\end{itemize}
The condition (b) is immediate from the construction.
For each pair $(i,j)$ connected with some edge, we
have in the AR quiver
$$\xymatrix@R0.1cm@C0.5cm{\cdots&&j\ar[dr]&&j\ar[dr]&&\cdots\\
&i\ar[ur]&&i\ar[ur]&&i&}$$
the part involving $i$'s and $j$'s. Hence $i$ and $j$ must alternate
in order to produce this quiver. Thus the condition (a) is satisfied.
If $A_1\neq(1,\cdots,n)$, then $w$ is a subsequence of
$(1,\cdots,n)$. So the word is clearly reduced in this case.
Thus we can assume $A_1=(1,\cdots,n)$.
We show that, for any word satisfying the conditions (a)(b) and
$A_1=(1,\cdots,n)$, the corresponding descending chain $\Lambda=T_0\supseteq
T_1\supseteq\cdots$ is strict. Then the word is reduced by Proposition \ref{strict}.
We assume $T_{k-1}=T_{k}$ for some $k$.
So $I_{i_1}\cdots I_{i_k}=I_{i_1}\cdots I_{i_{k-1}}$.
Take $s$ minimal such that $A_s\neq(1,\cdots,n)$, and take $i$ minimal
such that $i\notin A_s$. By assumption (b), all terms
appearing after the position of $i$ in $A_s$ are not connected with
$i$ by an edge in $\Delta$. In particular, the corresponding ideals
commute with $I_i$. By multiplying with $I_i$ from the right and using
commutative relations, we get an equality where $i$ is inserted in
$A_s$ after $1,\cdots,i-1$. Repeating this process, we get an equality
$$I_1\cdots I_nI_1\cdots I_n\cdots I_1\cdots I_{i_k}=I_1\cdots I_n
I_1\cdots I_n\cdots I_1\cdots I_{i_{k-1}}.$$
This contradicts Lemma \ref{lemII3.1}.
\end{proof}
\bigskip
In the rest of this section we give a proof of Theorem \ref{quiver}.
Note that (b) follows directly from (a) and Proposition \ref{strict}(b).
(a) Let $J$ be the Jacobson radical of $\Lambda$.
Let $\M=\M(i_1,i_2,\cdots)$ and $T_{l,k}=I_{i_l}\ I_{i_{l+1}}\cdots I_{i_k}$.
For $l>k$, this means $T_{l,k}=\Lambda$.
In the rest we often use the following equalities, which hold since $Q$ has no loops
(so that any path of positive length from $i$ to $i$ passes through some vertex different from $i$).
$$e_i\ J\ e_{i'}=\left\{\begin{array}{cc}\ e_i\ I_i\ e_{i'}&(i=i')\\
e_i\ \Lambda\ e_{i'}&(i\neq i'),\end{array}\right.\ \
I_i\ e_{i'}=\left\{\begin{array}{cc}J\ e_{i'}&(i=i')\\
\Lambda\ e_{i'}&(i\neq i'),\end{array}\right.\ \mbox{ and }\
e_{i'}\ I_i=\left\{\begin{array}{cc}e_{i'}\ J&(i=i')\\
e_{i'}\ \Lambda&(i\neq i').\end{array}\right.$$
We have
$$\Hom_\Lambda(\Lambda_l\ e_{i_l},\ \Lambda_k\ e_{i_k})=e_{i_l}\
(T_{l+1,k}/T_k)\ e_{i_k}$$
by Lemma \ref{hom-calculation}.
We have
$$\rad_{\M}(\Lambda_l\ e_{i_l},\ \Lambda_k\ e_{i_k})=
e_{i_l}\ (T_{(l+1-\delta_{l,k}),k}/T_k)\ e_{i_k}.$$
Moreover, we have
$$\rad^2_{\M}(\Lambda_l\ e_{i_l},\ \Lambda_k\ e_{i_k})=e_{i_l}\
((T_k+\sum_{j>0}T_{(l+1-\delta_{l,j}),j}\ e_{i_j}\
T_{(j+1-\delta_{j,k}),k})/T_k)\ e_{i_k}.$$
To get the quiver of $\M$, we have to compute
$(\rad_{\M}/\rad_{\M}^2)(\Lambda_l\ e_{i_l},\ \Lambda_k\ e_{i_k})=E_{l,k}/D_{l,k}$ for
$$E_{l,k}=e_{i_l}\ T_{(l+1-\delta_{l,k}),k}\ e_{i_k}\supseteq
D_{l,k}=e_{i_l}\ T_k\ e_{i_k}+\sum_{j>0}e_{i_l}\ T_{(l+1-\delta_{l,j}),j}\
e_{i_j}\ T_{(j+1-\delta_{j,k}),k}\ e_{i_k}.$$
We denote by $k^+$ the minimal number satisfying $k<k^+$ and
$i_k=i_{k^+}$ if it exists.
(i) We consider the case when there are no arrows in $Q$ from $l$ to $k$.
We shall show $E_{l,k}=D_{l,k}$.
If $l>k$ and $i_l=i_k$, then we have $l>k^+>k$. Thus
$$E_{l,k}=e_{i_l}\ \Lambda\ e_{i_k}=e_{i_l}\ \Lambda\ e_{i_{k^+}}\
\Lambda\ e_{i_k}\subseteq D_{l,k}.$$
In the rest we assume that either $l\le k$ or $i_l\neq i_k$ holds.
First we show
$$E_{l,k}=e_{i_l}\ T_{l+1,k-1}\ (1-e_{i_k})\ \Lambda\ e_{i_k}=\sum_{a\neq
i_k}e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\ e_{i_k}$$
by the following case by case study.
\begin{itemize}
\item[-] If $l<k$, then
$E_{l,k}=e_{i_l}\ T_{l+1,k-1}I_{i_k}\ e_{i_k}=e_{i_l}\ T_{l+1,k-1}\
(1-e_{i_k})\ \Lambda\ e_{i_k}$.
\item[-] If $l=k$, then $E_{l,k}=e_{i_l}\ I_{i_k}\ e_{i_k}=e_{i_l}\
\Lambda\ (1-e_{i_k})\ \Lambda\ e_{i_k}$.
\item[-] If $l>k$ and $i_l\neq i_k$, then
$E_{l,k}=e_{i_l}\ \Lambda\ e_{i_k}=e_{i_l}\ I_{i_k}\ e_{i_k}=e_{i_l}\
\Lambda\ (1-e_{i_k})\ \Lambda\ e_{i_k}$.
\end{itemize}
Thus we only have to show that $e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\
e_{i_k}\subseteq D_{l,k}$
for any $a\neq i_k$. We have the following three possibilities.
\begin{itemize}
\item[-] If $a\notin\{i_1,i_2,\cdots,i_{k-1}\}$, then we have
$T_{l+1,k-1}\ e_a=I_{i_{l+1}}\cdots I_{i_{k-1}}\ e_a=\Lambda
e_a=I_{i_1}\cdots I_{i_{k-1}}\ e_a=T_{k-1}e_a$. Thus
$$e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\ e_{i_k}=e_{i_l}\ T_{k-1}\ e_a\
\Lambda\ e_{i_k}\ \subseteq
e_{i_l}\ T_k\ e_{i_k}\subseteq D_{l,k}.$$
\item[-] If $a\notin\{i_{k+1},i_{k+2},\cdots,i_{k^+-1}\}$, then we have
$e_a\ \Lambda=e_a\ I_{i_k}\cdots I_{i_{k^+}}=e_a\ T_{k,k^+}$. Thus
$$e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\ e_{i_k}=
e_{i_l}\ T_{l+1,k-1}\ e_a\ T_{k,k^+}\ e_{i_k}\subseteq
e_{i_l}\ T_{l+1,k^+}\ e_{i_k}=e_{i_l}\ T_{l+1,k^+}\ e_{i_{k^+}}\
\Lambda\ e_{i_k}\subseteq D_{l,k}$$
since $l\neq k^+\neq k$.
\item[-] Otherwise, there is an arrow $j\to k$ with $i_j=a$.
Since $a\notin\{i_{j+1},i_{j+2},\cdots,i_{k-1}\}$,
we have $T_{j+1,k-1}\ e_a\ \Lambda=I_{i_{j+1}}\cdots I_{i_{k-1}}\ e_a\
\Lambda=\Lambda\ e_a\ \Lambda=\Lambda\ e_a\ I_{i_{j+1}}\cdots
I_{i_{k-1}}=\Lambda\ e_aT_{j+1,k-1}$. Thus
$$e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\ e_{i_k}=e_{i_l}\ T_{l+1,j}\
T_{j+1,k-1}\ e_a\ \Lambda\ e_{i_k}=e_{i_l}\ T_{l+1,j}\ e_{i_j}\
T_{j+1,k}\ e_{i_k}\subseteq D_{l,k}$$
since $l\neq j\neq k$.
\end{itemize}
In each case we have $e_{i_l}\ T_{l+1,k-1}\ e_a\ \Lambda\
e_{i_k}\subseteq D_{l,k}$ for any $a\neq i_k$.
(ii) We consider the case $l=k^+$.
We have $E_{k^+,k}=e_{i_{k^+}}\ \Lambda\ e_{i_k}$.
We shall show $D_{k^+,k}=e_{i_{k^+}}\ J\ e_{i_k}$.
Clearly we have $D_{k^+,k}\subseteq e_{i_{k^+}}\ J\ e_{i_k}$.
Conversely, we have
$$e_{i_{k^+}}\ J\ e_{i_k}=e_{i_{k^+}}\ I_{i_k}\ e_{i_k}=e_{i_{k^+}}\
\Lambda\ (1-e_{i_k})\ \Lambda\ e_{i_k}=\sum_{a\neq i_k}e_{i_{k^+}}\
\Lambda\ e_a\ \Lambda\ e_{i_k}.$$
Thus we only have to show $e_{i_{k^+}}\ \Lambda\ e_a\ \Lambda\
e_{i_k}\subseteq D_{k^+,k}$
for any $a\neq i_k$. We have the following two possibilities.
\begin{itemize}
\item[-] If $a\notin\{i_1,i_2,\cdots,i_{k-1}\}$, then we have
$\Lambda e_a=I_{i_1}\cdots I_{i_{k-1}}e_a=T_{k-1}\ e_a$. Thus
$$e_{i_{k^+}}\ \Lambda\ e_a\ \Lambda\ e_{i_k}=e_{i_{k^+}}\ T_{k-1}\
e_a\ I_{i_k}\ e_{i_k}\subseteq e_{i_{k^+}}\ T_k\ e_{i_k}\subseteq D_{k^+,k}.$$
\item[-] If $a\in\{i_1,i_2,\cdots,i_{k-1}\}$, then take the largest $j$
such that $i_j=a$. Then we have
$\Lambda=T_{k^++1,j}$ and $e_a\ \Lambda=e_a\ I_{i_{j+1}}\cdots
I_{i_k}=e_a\ T_{j+1,k}$. Thus
$$e_{i_{k^+}}\ \Lambda\ e_a\ \Lambda\ e_{i_k}=e_{i_{k^+}}\
T_{k^++1,j}\ e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq
D_{k^+,k}$$
since $k^+\neq j\neq k$.
\end{itemize}
In each case we have $e_{i_{k^+}}\ \Lambda\ e_a\ \Lambda\
e_{i_k}\subseteq D_{k^+,k}$ for any $a\neq i_k$.
(iii) Finally we consider the case when $l\neq k^+$ and there is an
arrow in $Q$ from $l$ to $k$. Then $l<k$.
We have $E_{l,k}=e_{i_l}\ J\ e_{i_k}$.
We shall show $D_{l,k}=e_{i_l}\ J^2\ e_{i_k}$.
First we show $D_{l,k}\subseteq e_{i_l}\ J^2\ e_{i_k}$.
We have $e_{i_l}\ T_k\ e_{i_k}\subseteq e_{i_l}\ I_{i_l}\ I_{i_k}\
e_{i_k}=e_{i_l}\ J^2\ e_{i_k}$.
We have the following three possibilities.
\begin{itemize}
\item[-] Assume $l\le j\le k$. Then
$e_{i_l}\ T_{(l+1-\delta_{l,j}),j}\ e_{i_j}\ T_{(j+1-\delta_{j,k}),k}\
e_{i_k}\subseteq
e_{i_l}\ J\ e_{i_j}\ J\ e_{i_k}\subseteq e_{i_l}\ J^2\ e_{i_k}$.
\item[-] Assume $k<j$. If $i_j\neq i_k$, then
$e_{i_l}\ T_{l+1,j}\ e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq
e_{i_l}\ I_{i_j}\ e_{i_j}\ J\ e_{i_k}\subseteq e_{i_l}\ J^2\ e_{i_k}$.
If $i_j=i_k$, then $j\ge k^+$. Since there is an arrow $l\to k$, we
have $i_l\in\{i_{k+1},i_{k+2},\cdots,i_{k^+-1}\}\subseteq\{i_{l+1},i_{l+2},\cdots,i_{j-1}\}$.
Thus $e_{i_l}\ T_{l+1,j}\ e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq
e_{i_l}\ I_{i_l}\ I_{i_j}\ e_{i_j}\ \Lambda\ e_{i_k}\subseteq e_{i_l}\
J^2\ e_{i_k}$.
\item[-] Assume $l>j$. If $i_j\neq i_l$, then
$e_{i_l}\ T_{l+1,j}\ e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq
e_{i_l}\ J\ e_{i_j}\ I_{i_k}\ e_{i_k}\subseteq e_{i_l}\ J^2\ e_{i_k}$.
If $i_j=i_l$, then $e_{i_l}\ T_{l+1,j}\ e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq
e_{i_l}\ \Lambda\ e_{i_j}\ I_{i_l}\ I_{i_k}\ e_{i_k}\subseteq e_{i_l}\
J^2\ e_{i_k}$.
\end{itemize}
Next we show $e_{i_l}\ J^2\ e_{i_k}\subseteq D_{l,k}$. We have
$$e_{i_l}\ J^2\ e_{i_k}=e_{i_l}\ J\ (1-e_{i_k})\ \Lambda\
e_{i_k}=\sum_{a\neq i_k}e_{i_l}\ J\ e_a\ \Lambda\ e_{i_k}.$$
Thus we only have to show $e_{i_l}\ J\ e_a\ \Lambda\ e_{i_k}\subseteq D_{l,k}$
for any $a\neq i_k$. We have the following two possibilities.
\begin{itemize}
\item[-] If $a\notin\{i_1,i_2,\cdots,i_{k-1}\}$, then we have
$e_{i_l}\ J\ e_a=e_{i_l}\ \Lambda\ e_a=e_{i_l}\ I_{i_1}\cdots
I_{i_{k-1}}\ e_a=e_{i_l}\ T_{k-1}\ e_a$. Thus
$$e_{i_l}\ J\ e_a\ \Lambda\ e_{i_k}=e_{i_l}\ T_{k-1}\ e_a\ I_{i_k}\
e_{i_k}\subseteq e_{i_l}\ T_k\ e_{i_k}\subseteq D_{l,k}.$$
\item[-] If $a\in\{i_{1},i_{2},\cdots,i_{k-1}\}$, then take the largest $j$
such that $i_j=a$. Then we have
$e_a\ \Lambda=e_a\ I_{i_{j+1}}\cdots I_{i_k}=e_a\ T_{j+1,k}$.
Moreover if $j=l$, then $e_{i_l}\ J\ e_a=e_{i_l}\
T_{(l+1-\delta_{l,j}),j}\ e_a$.
If $j\neq l$, then $e_{i_l}\ J\ e_a\subseteq e_{i_l}\ I_{i_{l+1}}\cdots
I_{i_j}\ e_a=e_{i_l}\ T_{l+1,j}\ e_a$. Thus
$$e_{i_l}\ J\ e_a\ \Lambda\ e_{i_k}=e_{i_l}\ T_{(l+1-\delta_{l,j}),j}\
e_{i_j}\ T_{j+1,k}\ e_{i_k}\subseteq D_{l,k}$$
since $j\neq k$.
\end{itemize}
In each case we have $e_{i_l}\ J\ e_a\ \Lambda\ e_{i_k}\subseteq
D_{l,k}$ for any $a\neq i_k$.
\qed \\
\hspace{7mm}
\subsection{Substructure}\label{c2_sec4}
${}$ \\
In this section we point out that the work in this chapter gives several illustrations of
substructures of cluster structures. We also give some concrete examples of 2-CY categories and
their cluster tilting objects, to be applied in Chapter \ref{chap3}.
Let $s_{i_1}s_{i_2}\cdots s_{i_t}\cdots$ be an infinite reduced expression
which contains each $i\in\{1,\cdots,n\}$ an infinite number of times.
Let $T_t=I_{i_1}\cdots I_{i_t}$, and $\la_t=\la/T_t$. Recall that for $t<m$, we have
$\Sub\la_t \subseteq\Sub\la_m\subseteq \fl\la$. We then have the following.
\begin{theorem}\label{teoII4.1}
Let the notation be as above.
\begin{itemize}
\item[(a)]$\Sub\la_m$, $\ul{\Sub}\la_m$ and $\fl\la$ have a cluster structure using the
cluster tilting subcategories with the indecomposable projectives as coefficients.
\item[(b)]{For $t<m$, the cluster tilting object
$\la_1\oplus\cdots\oplus\la_t$ in $\Sub \la_t$ can be extended to
a cluster tilting object
$\la_1\oplus\cdots\oplus\la_t\oplus\cdots\oplus\la_m$ in $\Sub
\la_m$, and determines a substructure of $\Sub\la_m$.}
\item[(c)]{The cluster tilting object $\la_1\oplus\cdots\oplus\la_t$
in $\Sub \la_t$ can be extended to the cluster tilting subcategory
$\{\la_i \mid i\ge 0\}$ in $\fl\la$, and determines a substructure
of $\fl\la$.}
\end{itemize}
\end{theorem}
\begin{proof}
(a) Since $\Sub\la_m$ and $\ul{\Sub}\la_m$ are stably 2-CY and 2-CY respectively, they have a weak cluster
structure. It follows from Proposition \ref{prop2.5} that we have no
loops or 2-cycles, using the cluster tilting objects. Then it follows
from Theorem \ref{teoI1.6} that we have a
cluster structure for $\Sub\la_m$ and $\ul{\Sub}\la_m$.
That $\fl\la$ also has a cluster structure follows from the fact that this holds for all the $\Sub\la_m$.
\noindent
(b) and (c) follow directly from the definition of substructure and previous results.
\end{proof}
We now consider the Kronecker quiver
$\xymatrix@R0.2cm{
1\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& 0
}$,
and let $\la$ be the associated preprojective algebra. The only strict descending chains are
$$I_0\supsetneq I_0I_1\supsetneq I_0I_1I_0\supsetneq \cdots\supsetneq(I_0I_1)^j\supsetneq(I_0I_1)^jI_0\supsetneq\cdots
\text{ \hspace{0.5cm}and }$$
$$I_1\supsetneq I_1I_0\supsetneq I_1I_0I_1\supsetneq \cdots\supsetneq(I_1I_0)^j\supsetneq(I_1I_0)^jI_1\supsetneq\cdots$$
We let $T_t$ be the product of the first $t$ ideals, and $\la_t=\la/T_{t}$. Both $I_0$ and $I_1$
occur an infinite number of times in each chain. The indecomposable projective $\la$-modules $P_0$ and $P_1$
have the following structure\\
$P_0=
\begin{smallmatrix}
&& 0 &&\\
&1&&1&\\
0&&0&&0\\
&& \cdot &&\\
&& \cdot &&\\
&& \cdot &&
\end{smallmatrix}
$\hspace{0.5cm}
$P_1=
\begin{smallmatrix}
&& 1 &&\\
&0&&0&\\
1&&1&&1\\
&& \cdot &&\\
&& \cdot &&\\
&& \cdot &&
\end{smallmatrix}
$ \hspace{0.5cm} \\\\where radical layer number $2i$ has $2i$ copies of 1 for $P_0$ and $2i$ copies of 0 for $P_1$,
and radical layer number $2i+1$ has $2i+1$ copies of 0 for $P_0$ and $2i+1$ copies of 1 for $P_1$. We write
$P_{0,t}=P_0/J^tP_0$ and $P_{1,t}=P_1/J^tP_1$. Then it is
easy to see that for the chain $I_0\supsetneq I_0I_1\supsetneq\cdots $ we have $\la_1=\la/I_0=P_{0,1}=(0)$,
$\la_2=\la/I_0I_1=P_{0,1}\oplus P_{1,2}=(0)\oplus\left(
\begin{smallmatrix}
& 1&\\
0&&0\\
\end{smallmatrix}
\right)$, $\la_3=\la/I_0I_1I_0=P_{0,3}\oplus P_{1,2}$,..., $\la_{2t}=P_{0,2t-1}
\oplus P_{1,2t}$, $\la_{2t+1}=P_{0,2t+1}\oplus P_{1,2t}$, and so on.
Note that this calculation also shows that both our infinite chains are strictly descending.
It follows from Section 4 (and is also easily seen directly) that the
quiver of the cluster tilting subcategory $\{\la_i \mid i\ge 1\}$ is
the following:
$$\xymatrix@C0.2cm{
& P_{1,2}\ar@<0.1cm>[dr]\ar@<-0.1cm>[dr]&& P_{1,4}\ar[ll]\ar@<0.1cm>[dr]\ar@<-0.1cm>[dr]&
&\cdots&& P_{1,2t+2}\ar@<0.1cm>[dr]\ar@<-0.1cm>[dr]&& \cdots\\
P_{0,1}\ar@<0.1cm>[ur]\ar@<-0.1cm>[ur]&& P_{0,3}\ar[ll]\ar@<0.1cm>[ur]\ar@<-0.1cm>[ur]&& P_{0,5}\ar[ll]&\cdots&
P_{0,2t+1}\ar@<0.1cm>[ur]\ar@<-0.1cm>[ur]&& P_{0,2t+3}\ar[ll]& \cdots
}$$
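This agrees with the rule of the previous subsection (a quick check of ours): here the word is $s_0s_1s_0s_1\cdots$ and $d_{01}=2$.

```latex
% Reading off Q(0,1,0,1,...) for the Kronecker graph with d_{01} = 2:
% - consecutive occurrences of the same letter give one arrow backwards,
%   e.g. from the second 0 (vertex P_{0,3}) to the first 0 (vertex P_{0,1});
% - since every block of 0's or 1's is a singleton, the edge rule gives
%   d_{01} = 2 arrows forward at each step, e.g.
\[
\xymatrix{P_{0,1} \ar@<0.5ex>[r] \ar@<-0.5ex>[r] & P_{1,2}
\ar@<0.5ex>[r] \ar@<-0.5ex>[r] & P_{0,3} \ar@/^1.2pc/[ll]}
\]
% which is the left-hand part of the quiver displayed above.
```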
In particular, we have the cluster tilting object $P_{0,1}\oplus P_{1,2}\oplus P_{0,3}$ for $\Sub\la_3$, where the
last two summands are projective. Hence $P_{0,1}$ is a cluster tilting object in $\ul{\Sub}\la_3$.
The quiver of the endomorphism
algebra consists of one vertex and no arrows. Hence $\ul{\Sub}\la_3$ is equivalent to the cluster
category $\C_k$,
which has exactly two indecomposable objects. The other one is $JP_{0,3}$, obtained from the exchange sequence
$0\to JP_{0,3}\to P_{0,3}\to P_{0,1}\to 0$. Note that it is also easy to see directly that there are no other
indecomposable rigid nonprojective objects in $\Sub\la_3$.
For $\la_4$, we have the cluster tilting object $P_{0,1}\oplus P_{1,2}\oplus P_{0,3}\oplus P_{1,4}$ for $\Sub\la_4$.
Again the last two $\la_4$-modules are projective, so $P_{0,1}\oplus P_{1,2}$ is a cluster
tilting object in $\ul{\Sub}\la_4$.
The quiver of the endomorphism algebra is
$\xymatrix@C0.8cm{
\cdot \ar@<0.5ex>[r]\ar@<-0.5ex>[r]&\cdot
}$
, which has no oriented cycles, and hence $\ul{\Sub}\la_4$ is triangle equivalent to the cluster category $\C_{k(
\xymatrix@C0.4cm{
\cdot\ar@<0.4ex>[r]\ar@<-0.4ex>[r]&\cdot
})}$.
In particular the cluster tilting graph is connected.
We can use this to get a description of the rigid objects in $\Sub\la_4$.
\begin{proposition}
Let $\la_4=\la/I_0I_1I_0I_1$ be the algebra defined above. Then the
indecomposable rigid $\la_4$-modules in $\Sub\la_4$ are
exactly the ones of the form $\Omega^i_{\la_4}(P_{0,1})$ and $\Omega^i_{\la_4}(P_{1,2})$
for $i\in \mathbb{Z}$.
\end{proposition}
\begin{proof}
For $\C_{k(
\xymatrix@C0.4cm{
\cdot\ar@<0.4ex>[r]\ar@<-0.4ex>[r]&\cdot
})}$ the indecomposable rigid objects are the $\tau$-orbits of the objects induced by the
indecomposable projective $k(
\xymatrix@C0.4cm{
\cdot\ar@<0.4ex>[r]\ar@<-0.4ex>[r]&\cdot
})$-modules. Here $\tau=[1]$, and for $\ul{\Sub}\la_4$, $\Omega^{-1}=[1]$. This proves the claim.
\end{proof}
The cluster tilting graphs for $\Sub\la_3$ and $\Sub\la_4$ are
$\xymatrix@C0.4cm@R0.4cm{
\cdot \ar@{-}[r] & \cdot}$ and
$\xymatrix{\cdots\text{ }\cdot \ar@{-}[r] & \cdot \ar@{-}[r] & \cdot \ar@{-}[r] & \cdot \ar@{-}[r] &\cdot \text{ }\cdots}$.
We end with the following problem.
\begin{conjecture}
For any $w\in W$ the cluster tilting graph for $\Sub\la/I_w$ is connected.
\end{conjecture}
\section{Connections to cluster algebras}\label{chap3}
While the theory of 2-CY categories is interesting in itself, one of the motivations
for investigating 2-CY categories comes from the theory of cluster algebras initiated by
Fomin and Zelevinsky \cite{fz1}. In many situations the 2-CY categories can be used to construct
new examples of cluster algebras, and also to give a new categorical model for
already known examples.
This has been done, for example, in \cite{ck1,ck2} and \cite{gls1}. In this chapter
we illustrate with some applications of the theory developed in the first two chapters. In
Section 2, we recall the definition and basic properties of
a map $\varphi$ from the finite dimensional modules over the completed preprojective algebra
of a connected quiver with no loops
to the function field $\mathbb{C}(U)$ of the associated unipotent group
$U$. This was used in \cite{gls1} to model the cluster algebra structure of
the coordinate ring $\mathbb{C}[U]$ in the Dynkin case. It is what we call a (strong) cluster map. We also make explicit the
notion of subcluster algebra, and observe that
a substructure of a (stably) 2-CY category together with a cluster map gives rise to a subcluster
algebra. This gives one way to construct new cluster algebras inside
$\mathbb{C}[U]$ in the Dynkin case, or model old ones, as we illustrate with examples in Section 2.
In Section 3 we deal with the non-Dynkin case. We conjecture that the stably 2-CY category
$\Sub \la/I_w$ discussed in Chapter \ref{chap2} gives a model for a cluster algebra structure on the
coordinate ring $\mathbb{C}[U^w]$ of the corresponding unipotent cell.
We give examples which support this conjecture.\\
\hspace{7mm}
\subsection{Cluster algebras, subcluster algebras and cluster maps}\label{c3_sec1}
${}$ \\
In this section we recall the notion of cluster algebras
\cite{fz1} and make explicit a notion of {\em subcluster algebras}.
Actually we extend the definition of cluster algebras to include the possibility of
clusters with countably many elements. The coordinate rings of unipotent groups of non-Dynkin diagrams
are candidates for containing such cluster algebras.
We also introduce certain maps, called (strong) {\em cluster maps},
defined for categories with a cluster structure.
The image of a cluster map gives rise to a cluster algebra.
Cluster substructures on the category side give rise to subcluster algebras.
We first recall the definition of a cluster algebra, allowing countable clusters.
Note that the setting used here is not the most general one.
Let $m \geq n$ be positive integers, or countably infinite.
Let $\F= \Q(u_1, \dots, u_m)$ be the field of rational functions over
$\Q$ in $m$ independent variables. A cluster algebra is a subring
of $\F$, constructed in the following way.
A {\em seed} in $\F$ is a triple $(\ul{x}, \ul{c}, \widetilde{B})$,
where $\ul{x}$ and $\ul{c}$ are disjoint sets of elements in $\F$. We let
$\widetilde{\ul{x}} = \ul{x} \cup \ul{c}$ and sometimes denote the seed
by the pair $(\widetilde{\ul{x}}, \widetilde{B})$.
Here $\widetilde{\ul{x}} = \{x_1, \dots, x_m \}$ should be a transcendence basis for $\F$
and $\widetilde{B} = (b_{ij})$ is a locally finite $m \times n$-matrix
with integer entries such that the submatrix $B$ of $\widetilde{B}$ consisting of the
first $n$ rows is skew-symmetric.
The set $\ul{x} = \{x_1, \dots, x_n \}$ is called the {\em cluster} of the seed,
and the set $\ul{c} = \{x_{n+1}, \dots, x_m \}$
is the {\em coefficient set} of the cluster algebra. The
set $\ul{\tilde{x}} = \ul{x} \cup \ul{c}$ is called an {\em extended cluster}.
For a seed $(\ul{\tilde{x}},\widetilde{B})$, with $\widetilde{B} =
(b_{ij})$, and for $k \in \{ 1, \dots, n \}$,
a {\em seed mutation} in direction $k$ produces a new seed
$(\ul{\tilde{x}}',\widetilde{B}')$. Here
$\ul{\tilde{x}}' = (\ul{\tilde{x}} \setminus \{x_k \}) \cup \{x_k' \}$,
where
$$
x_k' = x_k^{-1} (\prod_{b_{ik} > 0} x_i^{b_{ik}} + \prod_{b_{ik} < 0} x_i^{-b_{ik}} )
$$
This is called an {\em exchange relation} and $\{ x_k, x_k'\}$ is called an {\em exchange pair}.
Furthermore
$$
b_{ij}' = \begin{cases} -b_{ij} & \text{if $i=k$ or $j=k$} \\
b_{ij} + \frac{\left| b_{ik} \right| b_{kj} + b_{ik} \left| b_{kj} \right|}{2} & \text{else.}
\end{cases}
$$
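For experimentation, the mutation rules above are easy to implement. The following is a minimal Python sketch (our own illustration, not part of the formal development), covering the finite $m \times n$ case.

```python
def mutate_matrix(B, k):
    """Matrix mutation of an m x n integer matrix B in direction k
    (0-based, k < n), following the rule above: entries in row or
    column k change sign; every other entry picks up the correction
    term (|b_ik| b_kj + b_ik |b_kj|) / 2, which is always an integer."""
    m, n = len(B), len(B[0])
    Bp = [row[:] for row in B]
    for i in range(m):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp
```

Since seed mutation is an involution, mutating twice in the same direction recovers the original matrix $\widetilde{B}$.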
Fix an (initial) seed $(\ul{\tilde{x}}, \widetilde{B})$,
and consider the set $\Sc$ of all seeds obtained from $(\ul{\tilde{x}},\widetilde{B})$
by a sequence of seed mutations. The union $\X$ of the clusters of all seeds
in $\Sc$ is called the set of {\em cluster variables}, and for a fixed subset of coefficients
$\ul{c}_0 \subseteq \ul{c}$,
the {\em cluster algebra} $\A(\Sc)$ with the coefficients $\ul{c}_0$ inverted is the $\ZZ[\ul{c}, {\ul{c}_0}^{-1}]$-subalgebra
of $\F$
generated by $\X$. Note that, unlike in the original definition, we do not
necessarily invert all coefficients. This is done in order to capture examples like
the coordinate ring of a maximal unipotent group in the Dynkin case
and the homogeneous coordinate ring
of a Grassmannian.
Note that we often extend the scalars for cluster algebras to $\mathbb{C}$.
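As a small illustration of the exchange relation, take $m=n=2$ with $B=\left(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right)$ and no coefficients. Alternating the two mutations produces the recurrence $a_{l+1}=(1+a_l)/a_{l-1}$, which is $5$-periodic, so that type $A_2$ has exactly five cluster variables. A Python sketch of ours:

```python
from fractions import Fraction

def a2_orbit(x1, x2, steps=7):
    """Iterate the type A2 exchange recurrence a_{l+1} = (1 + a_l) / a_{l-1},
    obtained by alternately mutating the seed (x1, x2) in directions 1 and 2.
    Returns the list [a_1, ..., a_steps] as exact rationals."""
    seq = [Fraction(x1), Fraction(x2)]
    while len(seq) < steps:
        seq.append((1 + seq[-1]) / seq[-2])
    return seq
```

For generic starting values the orbit visits five distinct cluster variables and then repeats with period five.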
We now make explicit the notion of subcluster algebras.
Let $\A$ be a cluster algebra with cluster variables $\X$, coefficients $\ul{c}$,
and ambient field $\F = \Q(u_1, \dots , u_m)$.
A {\em subcluster algebra} $\A'$ of $\A$ is a cluster algebra such that
there exists a seed $(\ul{x},\ul{c},Q)$ for $\A$ and a seed
$(\ul{x}',\ul{c}',Q')$ for $\A'$ such that
\begin{itemize}
\item[(S1)]{$\ul{x}'\subseteq \ul{x}$ and $\ul{c}' \subseteq \ul{x} \cup \ul{c}$.}
\item[(S2)]{For each cluster variable $x_i\in\ul{x}'$, the
arrows entering and leaving $i$ in $Q$ all lie in $Q'$.}
\item[(S3)]{The invertible coefficients $\ul{c}_0' \subseteq \ul{c}'$ satisfy
$\underline{c}_0 \cap \underline{c}' \subseteq \underline{c}_0'$.}
\end{itemize}
Note that a subcluster algebra is not necessarily a subalgebra since
some coefficients may be inverted. Also note that $\A'$ is determined by
the seed $(\ul{x}',\ul{c}',Q')$ and the set $\ul{c}_0'$ of invertible
coefficients.
The definition implies that clusters in the subcluster algebra can be uniformly extended.
\begin{proposition}\label{lemIII1.1}
\begin{itemize}
\item[(a)]Seed mutation in $\A'$ is compatible with seed mutation in $\A$.
\item[(b)]There is a set $\ul{v}$ consisting of cluster variables and coefficients in $\A$, such that
for any extended cluster $\ul{x}'$ in $\A'$, $\ul{x}' \cup \ul{v}$ is
an extended cluster in $\A$.
\end{itemize}
\end{proposition}
\begin{proof}
This follows directly from the definition.
\end{proof}
Inspired by \cite{gls1,gls2} and \cite{cc, ck1,ck2} we introduce certain maps, which we call
(strong) cluster maps,
defined for a 2-CY category with a (weak) cluster structure,
and such that the image gives rise to a cluster algebra. We show that such
maps preserve substructures, as defined above and in Section \ref{c1_sec2}.
Recall that a category $\C$ is stably $2$-CY if it is either an exact Frobenius category
whose stable category $\ul{\C}$ is $2$-CY, or a functorially finite extension closed
subcategory of a triangulated $2$-CY category.
Let $\C$ be a stably 2-CY category with a
cluster structure defined by cluster tilting objects, where projectives are
coefficients.
We assume that each cluster tilting object has $n$ indecomposable summands giving cluster variables
and $c$ summands giving coefficients, where $1 \leq n \leq \infty$ and $0 \leq c \leq \infty$.
For a cluster tilting object $T$, we denote by $ B_{\End_{\C}(T)}$ the $m \times n$-matrix
obtained by removing the last $m-n$ columns of
the skew-symmetric $m \times m$ matrix corresponding
to the quiver of the endomorphism algebra $\End_{\C}(T)$, where
the columns are ordered such that those corresponding to projective summands of $T$ come last.
We can also think of this as dropping from the quiver of $\End_{\C}(T)$ the arrows
between vertices corresponding to indecomposable projective summands of $T$.
Let $\F = \Q(u_1, \dots, u_m)$.
Given a connected component $\Delta$ of the cluster tilting graph of $\C$,
a {\em cluster map} (respectively, {\em strong cluster map}) for
$\Delta$ is a map $\varphi \colon
\E=\add\{T\mid T\in\Delta\} \to \F$ (respectively, $\varphi \colon
\C \to \F$) where isomorphic objects have the same image, satisfying
the following three conditions.
\begin{itemize}
\item[(M1)]{For a cluster tilting object $T$ in $\Delta$, $\varphi(T)$
is a transcendence basis for $\F$.}
\item[(M2)]{(respectively, (M2$'$)) For all indecomposable objects $M$
and $N$ in $\E$ (respectively, $\C$) with
$\dim_k\Ext^1(M,N)=1$, we
have $\varphi(M)\varphi(N) = \varphi(V) + \varphi(V')$ where $V$
and $V'$ are the middle of the non-split triangles/short exact
sequences $N\to V\to M$ and $M\to V'\to N$.}
\item[(M3)]{(respectively, (M3$'$)) $\varphi(A \oplus A')= \varphi(A)
\varphi(A')$ for all $A,A'$ in $\E$ (respectively, $\C$).}
\end{itemize}
Note that a pair $(M,N)$ of indecomposable objects in $\E$ is an
exchange pair if and only if $\Ext^1(M,N) \simeq k$ (see \cite{bmrrt}).
Note that a map $\varphi\colon \C\to \F$ satisfying
(M2$'$) and (M3$'$) is called a {\em cluster character} in \cite{p}.
Important examples of (strong) cluster maps appear in \cite{ck1,ck2,
gls1}, and more recently in \cite{gls5,p}.
\begin{theorem}\label{propIII2.2}
With $\C$ and $\E$ as above, let $\varphi \colon \E \to \F$ be a cluster map.
Then the following hold.
\begin{itemize}
\item[(a)]{Let $\A$ be the subalgebra of $\F$ generated by
$\varphi(X)$ for $X\in\E$. Then $\A$ is a cluster algebra and
$(\varphi(T),B_{\End_{\C}(T)})$ is a seed for $\A$ for any cluster
tilting object $T$ in $\Delta$.}
\item[(b)]{Let $\B$ be a subcategory of $\C$ with a substructure, and
$\E'$ a subcategory of $\B$ defined by a connected component of
the cluster tilting graph of $\B$. Then $\varphi(X)$ for $X\in\E'$
generates a subcluster algebra of $\A$.}
\end{itemize}
\end{theorem}
\begin{proof}
(a) follows from the fact that $\C$ has a cluster structure. For (b),
let $T'$ be a cluster tilting object in the subcategory that extends to a cluster tilting object
$T$ for $\E$. Then $\varphi(T')$ gives a transcendence basis for a subfield $\F'$ of $\F$,
and $T'$ together with its matrix $B_{\End_{\C}(T')}$ gives a seed for a subcluster algebra.
\end{proof}
For any subset $\ul{c}_0$ of coefficients of $\A$, we have a cluster
algebra $\A[\ul{c}_0^{-1}]$. We say that {\em the cluster algebra
$\A[\ul{c}_0^{-1}]$ is modelled by the cluster map $\varphi:\E\to\F$}.\\
\hspace{7mm}
\subsection{The GLS $\varphi$-map with applications to the Dynkin case}\label{c3_sec2}
${}$ \\
Let $Q$ be a finite connected quiver without loops, and let $\la$ be
the associated completed preprojective algebra over an algebraically
closed field $k$, and $W$ the associated Coxeter group. For the non-Dynkin
case, we have
the derived 2-CY category $\fl \Lambda$ of finite dimensional left
$\Lambda$-modules, and for each $w$ in $W$ the stably 2-CY category
$\Sub \la/I_w$ as investigated in Chapter \ref{chap2}.
On the other hand, associated with the underlying graph of $Q$, is a Kac-Moody
group $G$ \cite{kp} with a maximal unipotent subgroup $U$.
Let $H$ be the maximal torus. Recall that the Weyl group
$\operatorname{Norm}(H)/H$ is isomorphic to the Coxeter group $W$
associated with $Q$.
For an element $w \in W=\operatorname{Norm}(H)/H$ and
any lifting $\widetilde{w}$ of $w$ in $G$, define the unipotent
cell $U^w$ \cite{BZ} to be the intersection
\[U^w = U \cap B_- \widetilde{w} B_-,\]
where $B_-$ is the opposite Borel subgroup corresponding to $U$. Then
$U^w$ is independent of the choice of lifting of $w$. It is a
quasi-affine algebraic variety of dimension
$l(w)$ and we have $U = \bigsqcup_{w \in W} U^w$.
Let $U(\mathfrak{n})$ be the enveloping algebra
of the maximal nilpotent subalgebra
of the Kac-Moody Lie algebra $\mathfrak{g}$
associated to $Q$ and
let $U(\mathfrak{n})^*$ and $U(\mathfrak{n})^*_{\text{gr}}$
be the dual and graded dual of
$U(\mathfrak{n})$ respectively.
Note that both $U(\mathfrak{n})^*$ and
$U(\mathfrak{n})^*_{\text{gr}}$ become
algebras with respect to the $\circ$-product which is dual
to the coproduct on $U(\mathfrak{n})$.
Recall that the matrix coefficient function $f^{\tau}_{v,\zeta}$
associated to a representation $\tau:
U(\mathfrak{n}) \longrightarrow \mathfrak{gl}(V)$
and vectors $v \in V$ and $\zeta \in V^*$ is the linear
form in $U(\mathfrak{n})^*$
defined by $f^{\tau}_{v,\zeta}(x) = \zeta
\Big( \tau(x) \cdot v \Big)$.
Define the restricted dual $U(\mathfrak{n})^*_{\text{res}}$ of
the enveloping algebra $U(\mathfrak{n})$
to be the span of the unit element $1 \in U(\mathfrak{n})^*$
together with all matrix coefficient functions
$f^{\tau}_{v,\zeta}$ for integrable lowest weight
representations
$\tau \colon U(\mathfrak{n}) \longrightarrow \mathfrak{gl}(V)$
with $v \in V$ and $\zeta \in V^*$.
The restricted dual $U(\mathfrak{n})^*_{\text{res}}$ is a
subalgebra of $U(\mathfrak{n})^*$ due to the fact
that $f^{\tau}_{v,\zeta} \circ f^{\tau'}_{v', \zeta'}
\ = \ f^{\tau \otimes \tau'}_{ v \otimes v', \zeta \otimes \zeta'}$.
One can check that the graded dual $U(\mathfrak{n})^*_{\text{gr}}$
is a subalgebra of the restricted dual $U(\mathfrak{n})^*_{\text{res}}$.
Let $\mathbb{C}[U]$ denote the ring of (strongly)
regular functions on the unipotent group $U$ as defined
in \cite{kp2}.
Given a representation $\varrho \colon U \longrightarrow \GL(V)$
of the unipotent group $U$ we may differentiate it to obtain
a $U(\mathfrak{n})$-representation $d \varrho \colon U(\mathfrak{n})
\longrightarrow \mathfrak{gl}(V)$ uniquely determined by the
formula
\[ d {\varrho}(x) \cdot v = \ { {\displaystyle \partial} \over
{\displaystyle \partial t} } \Big|_{t=0} \, \varrho\big(\exp(tx)\big) \cdot v \]
\bigskip
\noindent
for all $x \in \mathfrak{n}$. Accordingly a representation
$\varrho \colon U \longrightarrow \GL(V)$
will be called an integrable lowest weight representation
of the unipotent group $U$ if the differentiated
representation $d \varrho \colon U(\mathfrak{n}) \longrightarrow
\mathfrak{gl}(V)$ is integrable and lowest weight.
Theorem 1 of \cite{kp2} implies that
$\mathbb{C}[U]$ is spanned by the unit element together with
all matrix coefficient functions
$F^{\varrho}_{v,\zeta}$ for
integrable lowest weight representations $\varrho: U \longrightarrow
\GL(V)$ with $v \in V$ and $\zeta \in V^*$. The function
$F^{\varrho}_{v,\zeta}: U \longrightarrow
\mathbb{C}$ is strongly regular and is defined for $g \in U$
by $F^{\varrho}_{v,\zeta}(g) := \zeta \Big( \varrho(g) \cdot v \Big)$.
One easily checks that, with regard to the usual product of functions,
the identity $F^{\varrho}_{v,\zeta} F^{\varrho'}_{v', \zeta'}
\ = \ F^{\varrho \otimes \varrho'}_{ v \otimes v', \zeta \otimes \zeta'}$
holds.
In view of these facts it follows that the mapping $\iota$ given by
\[ f^{d\varrho}_{v, \zeta} \longmapsto F^{\varrho}_{v,\zeta} \]
defines an algebra isomorphism between the restricted dual
$U(\mathfrak{n})^*_{\text{res}}$ and the coordinate ring
$\mathbb{C}[U]$. In particular $\iota$ restricts to an
embedding of the graded dual $U(\mathfrak{n})^*_{\text{gr}}$
into $\mathbb{C}[U]$.
\bigskip
We now apply the construction of the GLS $\varphi$-map \cite{gls1} to the non-Dynkin case.
For $M\in\fl\Lambda$, Geiss-Leclerc-Schr\"oer \cite{gls5} constructed
$\delta_M\in U(\mathfrak{n})^*_{\text{gr}}$ by using Lusztig's Lagrangian
construction of $U(\mathfrak{n})$ \cite{Lu1,Lu2}.
We denote by $\varphi(M)\in\mathbb{C}[U]$ the image of $\delta_M$
under the above map $\iota:U(\mathfrak{n})^*_{\text{gr}}\to \mathbb{C}[U]$.
Then $\varphi(M)$ has the following property.
Let
$x_i(a) = \operatorname{exp}(ae_i)$
for $a \in \mathbb{C}$ and $i \in Q_0$ be
elements of $U$, where $e_i$ is a Chevalley generator.
For ${\bf i} = \big(i_1, \dots, i_k \big) \in \mathbb{Z}_{\geq 0}^k$
the symbol $\chi_{\bf i}(M)$ denotes the
Euler characteristic of the variety $\mathcal{F}_{\bf i}(M)$ of
all flags in $M=M_0 \supset M_1 \supset \cdots \supset M_k = 0$
with $M_{l-1}/M_l$ isomorphic to the simple module $S_{i_l}$.
Then for any ${\bf i} = \big(i_1, \dots, i_k \big) \in \mathbb{Z}_{\geq 0}^k$
and $a_1,\cdots,a_k\in\mathbb{C}$, we have
\begin{equation}\label{eq-gls}
\varphi(M)(x_{i_1}(a_1) \cdots x_{i_k}(a_k)) \ = \ \sum_{ {\bf j} \in \mathbb{Z}_{\geq 0}^k} \
\chi_{\rev {\bf i}^{\bf j} } (M) \ \frac{a_1^{j_1} \cdots a_k^{j_k}}{
j_1! \cdots j_k!}
\end{equation}
where $ \rev {\bf i}^{\bf j}$ is the
$(j_1 + \cdots + j_k)$-tuple which, when read from left to right, starts
with $j_k$ occurrences of $i_k$, followed by $j_{k-1}$ occurrences of $i_{k-1}$,
and ends with $j_1$ occurrences of $i_1$.
Notice that one can prove that property \eqref{eq-gls} uniquely
determines $\varphi(M)$.
Part (a) of the following result was shown in \cite{gls2, gls5}, and
part (b) in \cite{gls1,gls1a}.
\begin{theorem}
\begin{itemize}
\item[(a)] $\varphi:\fl\la\to\mathbb{C}[U]\subset\mathbb{C}(U)$
satisfies (M2$'$) and (M3$'$).
\item[(b)] If $Q$ is Dynkin, then $\mathbb{C}[U]$
(respectively, $\mathbb{C}[U^{w_0}]$ for
the longest element $w_0$) is a cluster algebra
modelled by a strong cluster map
$\varphi:\mod\la\to\mathbb{C}(U)$ for the
standard component of the cluster tilting graph of $\mod\la$ with no (respectively, all) coefficients inverted.
\end{itemize}
\end{theorem}
The image of a substructure $\B$ of $\mod \la$
gives a subcluster algebra of $\mathbb{C}[U]$ for the Dynkin case, and we
illustrate this with the examples
from \ref{c1_sec3}. We omit the calculation involved in proving
the isomorphisms between the subcluster algebras arising from $\B$ and
the coordinate rings of the varieties under consideration.
See \cite{bl} for general background on Schubert varieties and (isotropic) Grassmannians.
For a
subset $J$ of size $k$ in $[1 \dots n]$ the symbol $[J]$ will denote
the $k \times k$ matrix minor of an $n \times n$ matrix with row set
$[1 \dots k]$ and column set $J$.
In the first two examples $G$ is $\SL_n(\mathbb{C})$ and $U$ is the subgroup of all upper triangular
$n \times n$ unipotent matrices.
\bigskip
\noindent
\textbf{Example 1} ($\Gr_{2,5}$-Schubert variety)
Let $\la$ be of type $A_4$, and let $\B$ be the full additive subcategory
of $\mod \la$ from Example 1 in \ref{c1_sec3}. The associated algebraic group is
then $\SL_5(\mathbb{C})$. Consider the Grassmannian $\Gr_{2,5}$, and the Schubert
variety $X_{3,5}$ associated with the subset $\{3,5 \}$ of
$\{1,2,3,4,5\}$. Let $w_{3,5} = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 3 &4 & 1 & 5 & 2
\end{pmatrix}$ be the associated Grassmann permutation
in $S_5$, and $U^{w_{3,5}}$ the unipotent cell in $U$ associated to
$w_{3,5}$.
Note that the
Schubert variety $X_{3,5}$ is birationally isomorphic to the unipotent
cell $U^{w_{3,5}}$ \cite{BZ}.
Then $\mathbb{C}[U^{w_{3,5}}]$ is known to be a subcluster algebra
of $\mathbb{C}[U]$ \cite{fz3}.
Under the GLS-map $\varphi$ from $\mod \la$ to $\mathbb{C}[U]$ one can
check that $\varphi(M_x) = [x]$, with $M_x$ as defined in Example 1 in
\ref{c1_sec3}. Since $\B$ is a subcategory of $\mod \la$ with a cluster
substructure, we know
that the image gives rise to a subcluster algebra of $\mathbb{C}[U]$.
Then the image of $\mathcal{B}$ under the strong cluster map $\varphi$
is precisely $\mathbb{C}\big[U^{w_{3,5}}\big]$.
To see this, we mutate a seed from \cite{fz3} which generates the cluster algebra
structure for $\mathbb{C}[U]$, to get a new seed which contains $\varphi(T)$ for
the cluster tilting object $T$ in $\B$ in Example 1 in
\ref{c1_sec3}. Then one proves that the image is
$\mathbb{C}[U^{w_{3,5}}]$ after a proper choice of which coefficients to invert.
\bigskip
\noindent
\textbf{Example 2} (Unipotent Cell in
$\SL_4\big(\mathbb{C}\big)$)
Let $\la$ be the preprojective algebra of type $A_3$, and $\B$ the subcategory
of $\mod \la$ from Example 2 in \ref{c1_sec3}. The associated algebraic group is
$\SL_4(\mathbb{C})$. Let $U^w$ be the unipotent cell of the unipotent subgroup $U$,
associated with the permutation $w = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 4 & 3 & 1 & 2
\end{pmatrix}$. It is shown in \cite{fz3} that $\mathbb{C}[U^{w}]$ is a cluster algebra
of type $A_2$, and implicitly that it is a subcluster algebra of $\mathbb{C}[U]$.
In view of Section \ref{c3_sec1}, the image $\varphi(\B)$ has a
subcluster algebra structure modelled by $\B$.
As in Example 1, one begins
by mutating a seed from
\cite{fz3} which generates the cluster algebra structure for $\mathbb{C}[U]$, such
that the new seed contains $\varphi(T)$. Then one proves that
the image is $\mathbb{C}[U^{w}]$, after a proper choice of which
coefficients to invert.
\bigskip
\noindent
\textbf{Example 3} (The $\SO_{8}(\mathbb{C})$-Isotropic Grassmannians (cf. \cite[10.4.3]{gls3}))
Let $\la$ be the preprojective algebra of the Dynkin quiver $D_4$.
Let $\varrho$ be the $4 \times 4$ anti-diagonal matrix
whose $i,j$ entry is $(-1)^{i}\delta_{i,5-j}$ and let
$J$ be the $8 \times 8$ anti-diagonal matrix, written in block form as
\[ \begin{pmatrix} 0 & \ \varrho \\ \\ \varrho^T & \ 0 \end{pmatrix}. \]
The {\em even special orthogonal group} $\SO_{8}(\mathbb{C})$ is the
group of $8 \times 8$ matrices
\[ \left\{ g \in \SL_{8}(\mathbb{C}) \ \Big| \ g^T J g = J \right\}. \]
The {\em maximal unipotent subgroup} $U$ of $\SO_{8}(\mathbb{C})$ consists
of all $8 \times 8$ matrices in $\SO_{8}(\mathbb{C})$ which are upper triangular
and unipotent, i.e. having all diagonal entries equal to $1$. A more
explicit description in terms of matrices in block form is
\begin{equation}\label{formel1}
U \ = \ \left\{
\begin{pmatrix} u & u \varrho v \\ \\ 0 & \varrho^T \big(u^{-1}\big)^T \varrho
\end{pmatrix} \ \Bigg| \begin{array}{l} \text{$u$ is upper triangular
unipotent in } \SL_{4}(\mathbb{C})
\\ \text{$v$ is skew-symmetric in } {\rm M}_{4}(\mathbb{C})\end{array} \right\} \end{equation}
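The membership claim implicit in \eqref{formel1} can be checked directly: since $\varrho\varrho^T=\varrho^T\varrho=I$ and $v^T=-v$, a block computation gives $g^TJg=J$. The following Python sketch verifies this with exact integer arithmetic for one arbitrarily chosen pair $u$, $v$; it is an illustration of ours, not part of the text.

```python
def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def block(A, B, C, D):
    """Assemble an 8x8 matrix [[A, B], [C, D]] from 4x4 blocks."""
    return ([ra + rb for ra, rb in zip(A, B)]
            + [rc + rd for rc, rd in zip(C, D)])

I4 = [[int(i == j) for j in range(4)] for i in range(4)]
Z4 = [[0] * 4 for _ in range(4)]

# rho has (i, j) entry (-1)^i delta_{i, 5-j} (1-based indices in the text)
rho = [[(-1) ** (i + 1) if i + j == 3 else 0 for j in range(4)]
       for i in range(4)]
J = block(Z4, rho, transpose(rho), Z4)

# an arbitrary upper triangular unipotent u and skew-symmetric v
u = [[1, 2, 0, 1], [0, 1, 3, 0], [0, 0, 1, 5], [0, 0, 0, 1]]
v = [[0, 1, -2, 4], [-1, 0, 3, 0], [2, -3, 0, 7], [-4, 0, -7, 0]]

# inverse of u via the finite Neumann series: u = I + n with n nilpotent,
# so u^{-1} = I - n + n^2 - n^3
n = [[u[i][j] - I4[i][j] for j in range(4)] for i in range(4)]
n2 = matmul(n, n)
n3 = matmul(n2, n)
u_inv = [[I4[i][j] - n[i][j] + n2[i][j] - n3[i][j] for j in range(4)]
         for i in range(4)]

# the block-form element of U from (formel1)
g = block(u, matmul(matmul(u, rho), v), Z4,
          matmul(matmul(transpose(rho), transpose(u_inv)), rho))
```

The assembled $g$ satisfies $g^TJg=J$, so it lies in $\SO_8(\mathbb{C})$.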
The isotropic
Grassmannian $\Gr_{2,8}^{\iso}$ is the closed subvariety of the classical Grassmannian $\Gr_{2,8}$
consisting of all isotropic $2$-dimensional subspaces of $\mathbb{C}^8$.
Let $\widehat{\Gr}^{\iso}_{2,8}$ be the corresponding affine cone. Let $q \colon U \to
\widehat{\Gr}^{\iso}_{2,8}$ denote the map given by
$q(u)= u_1 \wedge
u_2$, where $u_1, u_2$ are the first two rows
of $u$ in $U$, and let
$q^{\ast} \colon \mathbb{C}\big[ \widehat{\Gr}_{2,8}^{\iso} \big]
\longrightarrow \mathbb{C}\big[U \big]$ be the associated
homomorphism of coordinate rings.
Let $\varphi \colon \mod \la \to \mathbb{C}[U]$ be the GLS $\varphi$-map. Then one can show that
\[ \begin{array}{lllllllll}
\varphi\big(M_{16}\big) &=[16] &\varphi\big(M_{24}\big) &=[24]
&\varphi\big(M_{25}\big) &=[25] &\varphi\big(M_{26}\big) &=[26] \\
\varphi\big(M_{68}\big) &=[68] &\varphi\big(M_{18}\big) &=[18]
&\varphi\big(M_-\big) &= \psi_{\s -} &\varphi\big(M_+\big) &= \psi_{\s +} \\
\varphi\big(P_1\big) &= [8] &\varphi\big(P_2\big) &= [78]
&\varphi\big(P_3\big) &= {\displaystyle [678] \over {\displaystyle \Pfaff_{[1234]}} }&\varphi\big(P_4\big)
&= \Pfaff_{[1234]} \end{array} \]
Here $\Pfaff_{[1234]}$
denotes the Pfaffian of the $4 \times 4$ skew-symmetric part $v$ of the unipotent element appearing in (\ref{formel1}),
and $\psi_{\pm} = {1 \over 2}\big( [18] - [27] + [36] \pm [45]\big)$.
The functions ${\displaystyle [678] \over {\displaystyle \Pfaff_{[1234]}}}$ and $\Pfaff_{[1234]}$
are examples of generalized minors of type $D$ \cite{fz-d}.
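Recall that for a $4\times 4$ skew-symmetric matrix $v=(v_{ij})$ the Pfaffian is $v_{12}v_{34}-v_{13}v_{24}+v_{14}v_{23}$, and it satisfies $\Pfaff^2=\det v$. A quick numerical sanity check of this classical identity (a sketch of ours; the sample matrix is arbitrary):

```python
def pfaffian4(v):
    """Pfaffian of a 4x4 skew-symmetric matrix, via the 1-based formula
    v12*v34 - v13*v24 + v14*v23."""
    return v[0][1] * v[2][3] - v[0][2] * v[1][3] + v[0][3] * v[1][2]

def det(M):
    """Determinant by Laplace expansion along the first row (exact for ints)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

v = [[0, 1, -2, 4], [-1, 0, 3, 0], [2, -3, 0, 7], [-4, 0, -7, 0]]
```

Here the Pfaffian of the sample matrix is $19$ and its square equals the determinant.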
In the notation of Example 3 in Section \ref{c1_sec3}, we have seen that
$$T=
M_{16} \oplus M_{24} \oplus M_{25} \oplus M_{26} \oplus M_{68} \oplus M_{18}
\oplus M_{-} \oplus M_{+} \oplus P_2 $$ is a cluster tilting object in
$\B = \Sub P_2$,
which can be extended to a cluster tilting object
$$\widetilde{T}=
T \oplus P_1 \oplus
P_3 \oplus P_4 $$
for $\mod \la$. One shows that the initial seed used in \cite{fz3},
which determines a cluster algebra structure for $\mathbb{C}[U]$,
is mutation equivalent to the initial seed determined by $\widetilde{T}$, which
hence generates the same cluster algebra. Since the subcategory
$\B$ of $\mod \la$ in \ref{c1_sec3}, Example 3, has a substructure, as defined in Chapter \ref{chap1},
the connected component of the cluster tilting graph of $\B$ containing $T$
determines a subcluster algebra $\A'$ of $\mathbb{C}[U]$
(where $[18], \psi_{\pm},[78]$ are taken as noninverted
coefficients). Then we can prove that $\A'$ coincides with
$\Im q^{\ast}$ (which we conjecture to be true more generally).
Notice that we obtain a cluster algebra structure for $\widehat{\Gr}^{\iso}_{2,8}$ by adjoining the coefficient $[12]$ to $\Im q^{\ast}$.
\\
\hspace{7mm}
\subsection{Cluster Structure of the loop group $\SL_2( \mathcal{L} )$}\label{c3_sec3}
${}$ \\
Let $Q$ be a finite connected non-Dynkin quiver without loops, and
let $\la$ be the associated completed preprojective algebra and
$W$ the associated Coxeter group. Let $G$ be the associated Kac-Moody
group $G$ with a maximal unipotent subgroup $U$, and let $U^w$ be the
unipotent cell associated with $w\in W$.
Using the GLS-map $\varphi:\fl\la\to\mathbb{C}[U]$ and the
restriction map $\mathbb{C}[U]\to\mathbb{C}[U^w]$, define the induced
map
\[\varphi_w:\Sub \la/I_w\subset\fl\la\stackrel{\varphi}{\to}\mathbb{C}[U]\to\mathbb{C}[U^w].\]
Using our results from Chapter \ref{chap2} we know that the transcendence degree
$l(w)$ of $\mathbb{C}(U^w)$ is equal to the number of non-isomorphic
summands of a cluster tilting object in $\Sub \la/I_w$.
It is then natural to pose the following.
\begin{conjecture}\label{first conjecture}
For any $w \in W$, the coordinate ring $\mathbb{C}[U^w]$ is a cluster
algebra modelled by a strong cluster map $\varphi_w:\Sub
\la/I_w\to\mathbb{C}[U^w]$ for the standard component of the cluster
tilting graph of $\Sub \la/I_w$ with all coefficients inverted.
\end{conjecture}
Recall that any infinite reduced expression where all generators occur
an infinite number of times gives rise to a cluster tilting
subcategory with an infinite number of non-isomorphic indecomposable
objects. Since the GLS-map $\varphi:\fl\la\to\mathbb{C}[U]$ satisfies
(M2$'$) and (M3$'$), it is natural to ask the following.
\begin{question}
Does the coordinate ring $\mathbb{C}[U]$ contain a cluster algebra
modelled by $\varphi:\fl\la\to\mathbb{C}[U]$ for any
connected component of the cluster tilting graph of $\fl\la$?
\end{question}
As support for Conjecture \ref{first conjecture}, we show that this
is the case when $Q$ is the Kronecker quiver
$\xymatrix@R0.2cm{
1\ar@<0.5ex>[r]\ar@<-0.5ex>[r]& 0
}$,
and the length of $w$ is at most 4. Without loss of generality,
we only have to consider the case $w=w_i$ for $w_1=s_0$, $w_2=s_0
s_1$, $w_3=s_0 s_1 s_0$ and $w_4 = s_0 s_1 s_0 s_1$.
The cases $w_i$ for $i=1$ or $2$ are clear since $\Lambda/I_{w_i}$ is a
cluster tilting object in $\Sub\la/I_{w_i}$ and $\mathbb{C}[U^{w_i}]$
is generated by invertible coefficients.
For the case $w_i$ for $i=3$ or $4$, we have cluster tilting objects
$T_3 = P_{0,1} \oplus P_{1,2} \oplus P_{0,3}$ in $\Sub \la/I_{w_3}$
and $T_4 = P_{0,1} \oplus P_{1,2} \oplus P_{0,3} \oplus P_{1,4}$ in
$\Sub \la/I_{w_4}$, where $P_{i,k}=P_i/J^kP_i$ for $i=0,1$ and $k>0$.
The Kac-Moody group $\widehat{\SL_2}(\mathcal{L})$ associated with the
Kronecker quiver is defined as the unique non-trivial central
extension
\[ 1 \longrightarrow \mathbb{C}^* \longrightarrow \widehat{\SL_2}(\mathcal{L})
\overset{\pi}{\longrightarrow} \SL_2(\mathcal{L}) \longrightarrow
1 \]
of the algebraic loop group $\SL_2(\mathcal{L})$ by $\mathbb{C}^*$ (see
\cite{kp,ps} for details).
The group $\SL_2(\mathcal{L})$ consists of all $\mathcal{L}$-valued $2
\times 2$ matrices $g =(g_{ij})$ with determinant 1, where
$\mathcal{L}$ is the Laurent polynomial ring $\mathbb{C}[t,t^{-1}]$.
The maximal unipotent subgroup (respectively, unipotent cells) of
$\widehat{\SL_2}(\mathcal{L})$ is mapped isomorphically onto the
maximal unipotent subgroup $$ U \ = \ \Bigg\{ g \in \begin{pmatrix}
1 + t\mathbb{C}[t] & \mathbb{C}[t] \\ \\ t\mathbb{C}[t] & 1 + t\mathbb{C}[t]
\end{pmatrix} \ \Bigg| \ \det(g) = 1 \ \Bigg\} $$
(respectively, corresponding unipotent cells) of $\SL_2(\mathcal{L})$.
Hence we deal with $\SL_2(\mathcal{L})$ instead of
$\widehat{\SL_2}(\mathcal{L})$.
The torus $H$ is the subgroup of $\SL_2(\mathcal{L})$ where the
elements are the diagonal matrices of the form $\begin{pmatrix} a & 0
\\ 0 & a^{-1} \end{pmatrix}$.
The Weyl group $W=\operatorname{Norm}(H)/H$ is generated by the two
non-commuting involutions $s_0$ and $s_1$.
Each $\varphi(P_{i,k})$ for $i= 0,1$ and $k>0$ can be shown to be a regular function
by computing explicit determinantal formulas. For $g = \begin{pmatrix}
g_{11} & g_{12} \\ g_{21} & g_{22}\end{pmatrix} \in U$, let $T_g$ be
the $\mathbb{Z}\times \mathbb{Z}$ matrix whose $(M,N)$ entry is given
by the residue formula
\[ \big(T_g\big)_{M,N} \ = \ \Res {\Di {g_{rs} \over {t^{n-m+1} } }} \]
with $N = 2n + r$ and $M = 2m + s$ and with $n, m \in \mathbb{Z}$
and $r,s \in \{1, 2\}$.
Let $\Delta^\sigma_{k;i}(g)$ be the determinant of the $k \times k$
submatrix of $T_g$ whose row and column sets are given respectively by
\begin{eqnarray*}
\text{rows} &=&
\Big\{ k-i,\ k-i-2, \ k-i-4, \ k - i - 6, \ \dots \ , \ 2 - k - i \Big\},\\
\text{columns} & = &
\Big\{k -i + 1, \ k-i, \ k-i-1, \ k-i-2, \ \dots \ , \ 2-i \Big\}.
\end{eqnarray*}
The following theorem is a special case of \cite[Theorem 2]{scott}.
\begin{theorem}
$\varphi(P_{0,k})=\Delta^\sigma_{k;1}$ and
$\varphi(P_{1,k})=\Delta^\sigma_{k;0}$ for any $k>0$.
\end{theorem}
In the present case of the Weyl group elements $w_3= s_0 s_1 s_0$ and $w_4= s_0 s_1 s_0 s_1$, the
corresponding unipotent cells $U^w$
are given by:
\begin{equation}\label{unipotent cell}
\begin{array}{ll}
U^{w_3} \ = \ \left\{ \begin{pmatrix} 1 + At & B \\
Dt + Et^2 & 1 + Ft \end{pmatrix} \ \Bigg| \
\begin{array}{ll} A+F = BD & \\ AF = BE & E \ne 0
\end{array} \right\} \\
U^{w_4} \ = \ \left\{ \begin{pmatrix} 1 + At & B + Ct \\
Dt + Et^2 & 1 + Ft + Gt^2 \end{pmatrix} \ \left| \
\begin{array}{ll} A+F = BD & \ \ AG = CE \\ AF - CD = BE - G
& \ \ G \ne 0 \end{array} \right. \right\} \end{array}
\end{equation}
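The relations appearing in \eqref{unipotent cell} are precisely the conditions that $\det g(t)=1$ identically in $t$: for the $w_4$-matrix the coefficients of $t$, $t^2$ and $t^3$ in the determinant are $A+F-BD$, $G+AF-BE-CD$ and $AG-CE$. The following sketch of ours makes this explicit for a sample solution of the relations:

```python
def poly_mul(p, q):
    """Multiply polynomials in t given as coefficient lists [c0, c1, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def det_w4(A, B, C, D, E, F, G):
    """Coefficient list (increasing degree in t) of the determinant of
    [[1 + A t, B + C t], [D t + E t^2, 1 + F t + G t^2]]."""
    p = poly_mul([1, A], [1, F, G])   # g11 * g22
    q = poly_mul([B, C], [0, D, E])   # g12 * g21
    return [a - b for a, b in zip(p, q)]
```

At the point $A=2$, $B=5$, $C=2$, $D=E=G=1$, $F=3$ all three relations hold and the determinant is identically $1$; perturbing $A$ destroys this.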
In terms of the complex parameters $A, \dots, G$ we have
\begin{equation}\label{eqn3}
\begin{array}{ll} \varphi\big(P_{0,1}\big) = \Delta^\sigma_{1;1} = D
& \varphi\big(P_{1,2}\big) = \Delta^\sigma_{2;0} = DF-E \\
\varphi\big(P_{0,3}\big) = \Delta^\sigma_{3;1} = DEF - D^2 G - E^2
& \varphi\big( P_{1,4}\big) = \Delta^\sigma_{4;0} = G(DEF - D^2 G - E^2)
\end{array}
\end{equation}
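The determinantal expressions \eqref{eqn3} can be reproduced by machine. The sketch below (ours, not from the text) builds the prescribed submatrices of $T_g$; since the residue formula leaves the pairing of matrix entries with the indices $M=2m+s$, $N=2n+r$ implicit, we fix the convention that yields $\Delta^\sigma_{1;1}=D$.

```python
def minor_delta(g, k, i):
    """Determinant Delta^sigma_{k;i} of the k x k submatrix of T_g with
    rows {k-i, k-i-2, ..., 2-k-i} and columns {k-i+1, k-i, ..., 2-i}.
    Each g[s-1][r-1] is a dict {power of t: coefficient}; the (M, N)
    entry of T_g with M = 2m+s, N = 2n+r is read as the t^(n-m)
    coefficient of that entry (the convention giving Delta^sigma_{1;1} = D)."""
    rows = [k - i - 2 * a for a in range(k)]
    cols = [k - i + 1 - a for a in range(k)]

    def entry(M, N):
        s, m = (2, (M - 2) // 2) if M % 2 == 0 else (1, (M - 1) // 2)
        r, n = (2, (N - 2) // 2) if N % 2 == 0 else (1, (N - 1) // 2)
        return g[s - 1][r - 1].get(n - m, 0)

    T = [[entry(M, N) for N in cols] for M in rows]

    def det(X):
        if len(X) == 1:
            return X[0][0]
        return sum((-1) ** j * X[0][j]
                   * det([row[:j] + row[j + 1:] for row in X[1:]])
                   for j in range(len(X)))
    return det(T)

# sample point of U^{w_4} (it satisfies the relations of (unipotent cell)):
A, B, C, D, E, F, G = 2, 5, 2, 1, 1, 3, 1
g = [[{0: 1, 1: A}, {0: B, 1: C}],
     [{1: D, 2: E}, {0: 1, 1: F, 2: G}]]
```

With this convention the four minors reproduce the closed formulas $D$, $DF-E$, $DEF-D^2G-E^2$ and $G(DEF-D^2G-E^2)$ at the sample point.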
We are now ready to prove the crucial result on transcendence bases.
\begin{proposition}\label{propIV2.4}
The collections $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1} \big\} \text{ and } \big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1}, \Delta^\sigma_{4;0} \big\}$ are respectively
transcendence bases for the rational function fields
$\mathbb{C}\big(U^{w_3}\big)$ and $\mathbb{C}\big(U^{w_4}\big)$.
\end{proposition}
\begin{proof} Consider first the case of $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1}, \Delta^\sigma_{4;0} \big\}$ within $\mathbb{C}\big(U^{w_4}\big)$.
The transcendence degree of $\mathbb{C}\big(U^{w_4}\big)$ over $\mathbb{C}$ is 4 since
$\dim_{\mathbb{C}} \, U^{w_4}$ is 4. In this case any
4 rational functions which generate the field
$\mathbb{C}\big(U^{w_4}\big)$ must constitute a transcendence basis for
$\mathbb{C}\big(U^{w_4}\big)$. The global coordinates $A,B,C,D,E,F, G^{\pm 1}$
clearly generate $\mathbb{C}\big(U^{w_4}\big)$, and we will be finished if we
can express each global coordinate as a rational function
in $\Delta^\sigma_{1;1}$, $\Delta^\sigma_{2;0}$,
$\Delta^\sigma_{3;1}$, and $\Delta^\sigma_{4;0}$. To do this we
introduce five auxiliary functions $\widetilde{\Delta}^\sigma_{1;1}$,
$\widetilde{\Delta}^\sigma_{2;0}$, $\Psi$, $\Omega$, and $\Sigma$
defined implicitly by the formulas
\begin{equation}\label{eqn4}
\begin{array}{lllllll}
& \widetilde{\Delta}^\sigma_{1;1} \ \Delta^\sigma_{1;1} &= \
\Big( \Delta^\sigma_{2;0} \Big)^2 + \ \Delta^\sigma_{3;1} &
& \widetilde{\Delta}^\sigma_{2;0} \ \Delta^\sigma_{2;0} &= \
\Big( \Delta^\sigma_{1;1} \Big)^2 \Delta^\sigma_{4;0} + \
\Big(\Delta^\sigma_{3;1} \Big)^2 \\
& \Psi \ \Delta^\sigma_{2;0} &= \ \Big(\widetilde{\Delta}^\sigma_{1;1}\Big)^2
+ \ \Delta^\sigma_{4;0} &
& \Omega \ \Delta^\sigma_{1;1} &= \ \Big(\widetilde{\Delta}^\sigma_{2;0}\Big)^2
+ \ \Big(\Delta^\sigma_{3;1} \Big)^3 \\
& \Sigma \ \widetilde{\Delta}^\sigma_{2;0} &= \ \Big(\Delta^\sigma_{3;1} \Big)^4
\Delta^\sigma_{4;0} + \ \Omega^2 \end{array} \end{equation}
\noindent
These are exactly the cluster variables obtained from the initial cluster by at most two mutations.
Evidently these five functions can be rationally expressed in terms
of $\Delta^\sigma_{1;1}$, $\Delta^\sigma_{2;0}$,
$\Delta^\sigma_{3;1}$, and $\Delta^\sigma_{4;0}$.
Moreover, after using the equations \eqref{eqn3} and carrying out the divisions arising in
solving the above system, each function
can be written as a polynomial in the
global coordinates $A, \dots, G^{\pm 1}$:
\begin{equation}\label{eqn5}
\begin{array}{lll}
&\widetilde{\Delta}^\sigma_{1;1} &= \ DF^2 - EF - DG \\
&\widetilde{\Delta}^\sigma_{2;0} &= \ E \big( DEF - D^2G -E^2 \big) \\
&\Psi &= \ \big(F^2 - G \big)\big(DF - E\big) - DFG \\
&\Omega &= \ \big(EF - DG\big)\big(DEF - D^2G - E^2\big)^2 \\
&\Sigma &= \ \big(EF^2 - DFG - EG\big)\big(DEF - D^2G - E^2\big)^3
\end{array}
\end{equation}
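Since the five auxiliary functions are only defined implicitly by \eqref{eqn4}, one may wish to confirm that the explicit polynomials \eqref{eqn5} satisfy those relations; in fact all five relations hold identically in $D,E,F,G$. A sketch of ours checking them at random integer points with exact arithmetic:

```python
import random

def check_eqn4(D, E, F, G):
    """Check that the explicit polynomials of (eqn5), together with the
    closed formulas of (eqn3), satisfy the five implicit relations of
    (eqn4) at the integer point (D, E, F, G)."""
    d1 = D                              # Delta^sigma_{1;1}
    d2 = D * F - E                      # Delta^sigma_{2;0}
    d3 = D * E * F - D * D * G - E * E  # Delta^sigma_{3;1}
    d4 = G * d3                         # Delta^sigma_{4;0}
    d1t = D * F * F - E * F - D * G     # tilde Delta^sigma_{1;1}
    d2t = E * d3                        # tilde Delta^sigma_{2;0}
    psi = (F * F - G) * (D * F - E) - D * F * G
    omega = (E * F - D * G) * d3 * d3
    sigma = (E * F * F - D * F * G - E * G) * d3 ** 3
    return (d1t * d1 == d2 ** 2 + d3
            and d2t * d2 == d1 ** 2 * d4 + d3 ** 2
            and psi * d2 == d1t ** 2 + d4
            and omega * d1 == d2t ** 2 + d3 ** 3
            and sigma * d2t == d3 ** 4 * d4 + omega ** 2)
```

Agreement at many random integer points, with exact arithmetic, strongly supports the polynomial identities.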
\noindent
The following identities can be easily checked by using the relations
\eqref{unipotent cell}.
\begin{equation}\label{eqn6}
\begin{array}{lllll}
&A =
{\Di { \Omega \widetilde{\Delta}^\sigma_{2;0} \over { \Big(\Delta^\sigma_{3;1}\Big)^{4} } }}
&B= {\Di { \Sigma \over { \Big(\Delta^\sigma_{3;1}\Big)^{4}}} }
&C ={\Di { \Omega \Delta^\sigma_{4;0} \over { \Big(\Delta^\sigma_{3;1}\Big)^{4}}} }
&D = \Delta^\sigma_{1;1}
\\
&E= {\Di { \widetilde{\Delta}^\sigma_{2;0} \over { \Delta^\sigma_{3;1} }} }
&F = { \Di {\Sigma \Delta^\sigma_{1;1} - \ \Omega \widetilde{\Delta}^\sigma_{2;0} \over
{ \Big(\Delta^\sigma_{3;1} \Big)^{4}}} }
&G^{\pm 1} =
\Bigg( {\Di { \Delta^\sigma_{4;0} \over { \Delta^\sigma_{3;1} } } } \Bigg)^{\pm 1}
\end{array}
\end{equation}
\noindent
Thus $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1}, \Delta^\sigma_{4;0} \big\}$ is a transcendence basis.
\medskip
\noindent
The case of $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1} \big\}$ is handled similarly. Over
the unipotent cell $U^{w_3}$ the global
coordinates $A,B,D,E^{\pm 1},F$ are rationally expressed as
\[ A= {\Di {\widetilde{\Delta}^\sigma_{1;1} \ \Delta^\sigma_{3;1} \over
{\Big( \Delta^\sigma_{2;0} \Big)^3 }} } \quad
B = {\Di {\Big(\widetilde{\Delta}^\sigma_{1;1} \Big)^2 \over
{\Big( \Delta^\sigma_{2;0} \Big)^3 }} } \quad
D = \Delta^\sigma_{1;1} \quad
E^{\pm 1} = \Bigg( {\Di {\Delta^\sigma_{3;1} \over
{\Delta^\sigma_{2;0} }} } \Bigg)^{\pm 1} \quad
F = {\Di { \widetilde{\Delta}^\sigma_{1;1} \over
{\Delta^\sigma_{2;0} }} } \]
\noindent
from which it follows that $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1} \big\}$ generates the rational function field
$\mathbb{C}\big(U^{w_3}\big)$. The transcendence degree of
$\mathbb{C}\big(U^{w_3}\big)$ is 3 and consequently
the collection $\big\{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0},
\Delta^\sigma_{3;1} \big\}$ is a transcendence basis.
\end{proof}
\begin{theorem}
For $w=w_3=s_0s_1s_0$ or $w=w_4=s_0s_1s_0s_1$, the coordinate ring
$\mathbb{C}[U^w]$ is a cluster algebra modelled by a strong cluster
map $\varphi_w\colon\Sub\la/I_w\to\mathbb{C}[U^w]$ for the standard
component of the cluster tilting graph of $\Sub \la/I_w$ with all
coefficients inverted.
\end{theorem}
\begin{proof}
We treat the case $w=w_4$, and leave the other easier case to the reader.
Since $\varphi_{w_4}:\Sub\la/I_{w_4}\to\mathbb{C}[U^{w_4}]$ is a strong
cluster map by Proposition \ref{propIV2.4}, we have a cluster algebra
$\A^{w_4} \subset\mathbb{C}[U^{w_4}]$ where we invert all
coefficients.
We start with proving the inclusion $\A^{w_4} \subset
\mathbb{C}\big[U^{w_4} \big]$. By construction the functions
$\Delta^\sigma_{1;1}$, $\Delta^\sigma_{2;0}$,
$\Delta^\sigma_{3;1}$, and $\Delta^\sigma_{4;0}$ are
regular. The
coefficients $\Delta^\sigma_{3;1}$ and $\Delta^\sigma_{4;0}$
are invertible, so we must verify that their
inverses are in $\mathbb{C}\big[U^{w_4} \big]$.
Put
\[\Delta_{3;1} = \ BCF - B^2G - C^2,\ \ \ \Delta_{4;0}= \ G\big(BCF -B^2G - C^2 \big).\]
Using relations \eqref{unipotent cell} one has
$\Delta^\sigma_{3;1} \ \Delta_{3;1} \ = \ \Delta^\sigma_{4;0} \ \Delta_{4;0} \ = \ G^4$.
The function $G$ is nowhere zero on $U^{w_4}$; consequently $\Delta^\sigma_{3;1}$
and $\Delta^\sigma_{4;0}$ are invertible, with inverses
given by
\[ \Big( \Delta^\sigma_{3;1} \Big)^{-1} = { \Di \Delta_{3;1} \over
{\Di G^4}} \qquad \Big( \Delta^\sigma_{4;0} \Big)^{-1} =
{\Di \Delta_{4;0} \over {\Di G^4 } }. \]
\noindent
The two exchange relations for the initial seed are precisely the
first two relations given in \eqref{eqn4}, namely
\[\begin{array}{llllll}
& \widetilde{\Delta}^\sigma_{1;1} \ \Delta^\sigma_{1;1} &= \
\Big( \Delta^\sigma_{2;0} \Big)^2 + \ \Delta^\sigma_{3;1},
&& \widetilde{\Delta}^\sigma_{2;0} \ \Delta^\sigma_{2;0} &= \
\Big( \Delta^\sigma_{1;1} \Big)^2 \Delta^\sigma_{4;0} + \
\Big(\Delta^\sigma_{3;1} \Big)^2
\end{array} \]
\noindent
and we have seen in \eqref{eqn5} that we have the following expressions for the cluster variables
$\widetilde{\Delta}^\sigma_{1;1}$ and
$\widetilde{\Delta}^{\sigma}_{2;0}$:
\begin{equation}
\begin{array}{llllll}
&\widetilde{\Delta}^\sigma_{1;1} &= \ DF^2 - EF - DG,
&&\widetilde{\Delta}^\sigma_{2;0} &= \ E \big( DEF - D^2G -E^2 \big)
\end{array}
\end{equation}
which are clearly regular functions on $U^{w_4}$.
Define $x_1 = \Delta^\sigma_{1;1}$,
$x_2 = \Delta^\sigma_{2;0}$, $x_3 = \Delta^\sigma_{3;1}$, $x_4 =
\Delta^\sigma_{4;0}$, and define also $\widetilde{x}_1 =
\widetilde{\Delta}^\sigma_{1;1}$, and $\widetilde{x}_2
= \widetilde{\Delta}^\sigma_{2;0}$. Select an arbitrary
cluster variable $x$ in $\A^{w_4}$.
By the Laurent phenomenon we know that $x$ can
be expressed as a polynomial in
$x_1^{\pm 1}$, $x_2^{\pm 1}$, $x_3$, and $x_4$.
From this it follows that
$x$ is regular on the Zariski open set $\mathcal{U}$ of
$U^{w_4}$ defined by
\[ \mathcal{U} = \ \Big\{ g \in U^{w_4} \ \Big| \ x_i(g) \ne 0
\ \text{for $i = 1, 2$ } \Big\} \]
\noindent
Again, by the Laurent phenomenon, $x$ can also be expressed, simultaneously,
as a polynomial in both $\Big\{ \widetilde{x}_1^{\pm 1}, \ x_2^{\pm 1}
, \ x_3, \ x_4 \Big\}$ and
$\Big\{ x_1^{\pm 1}, \ \widetilde{x}_2^{\pm 1}, \ x_3,
\ x_4 \Big\}$. Consequently $x$ is simultaneously regular on both of the
Zariski open subsets
\[ \begin{array}{lll}
&\mathcal{U}^{(1)} &= \ \Big\{ g \in U^{w_4} \ \Big| \ \widetilde{x}_1(g) \ne 0
\ \text{and} \ x_2(g) \ne 0 \Big\} \\
&\mathcal{U}^{(2)} &= \ \Big\{ g \in U^{w_4} \ \Big| \ x_1(g) \ne 0
\ \text{and} \ \widetilde{x}_2(g) \ne 0 \Big\}
\end{array} \]
\noindent
Taking all of this together, we conclude that $x$ is regular on the union
$\mathcal{U} \cup \ \mathcal{U}^{(1)} \cup \ \mathcal{U}^{(2)}$.
The complement $V$ of this union inside $U^{w_4}$ consists of those
points $g$ for which $x_1(g) = \widetilde{x}_1(g) = 0$
and/or $x_2(g) = \widetilde{x}_2(g) = 0$. Using the
expressions for $\widetilde{x}_1$ and $\widetilde{x}_2$ given in \eqref{eqn3}, we see that
\[ V \ = \ \left\{ \begin{pmatrix} 1 & B \\ Et^2 & 1 + Gt^2 \end{pmatrix}
\ \Bigg| \begin{array}{ll} &BE = G \\ &G \ne 0 \end{array}
\right\} \bigcup
\left\{ \begin{pmatrix} 1 & Ct \\ Dt & 1 + Gt^2 \end{pmatrix}
\ \Bigg| \begin{array}{ll} &CD = G \\ &G \ne 0 \end{array}
\right\}
\]
\noindent
The only hypothetical singularities of $x$ must lie in $V$,
and since the codimension of $V$ inside $U^{w_4}$ is clearly
2, it follows that $x$ has no singularities and is thus
regular on all of $U^{w_4}$. The cluster algebra
$\A^{w_4}$ is generated by the coefficients and
cluster variables and so we conclude that
$\A^{w_4} \subset \mathbb{C}\big[ U^{w_4}\big]$.
\medskip
\noindent
The reverse inclusion $\mathbb{C}\big[ U^{w_4}\big] \subset
\A^{w_4}$ follows from the fact that the functions $x$, $\Omega$, and $\Sigma$
defined previously are also cluster variables of $\A^{w_4}$,
together with the fact that the global coordinates
$A, \dots, G^{\pm 1}$ are generated inside $\A^{w_4}$
using the expressions in \eqref{eqn6}.
\end{proof}
\medskip
Finally, we describe the cluster graphs for $\mathbb{C}[U^{w_3}]$ and
$\mathbb{C}[U^{w_4}]$.
The algebra $\mathbb{C}[U^{w_3}]$ has a seed consisting of a cluster $\{ \Delta^\sigma_{1;1}
\}$, coefficients $\{ \Delta^\sigma_{2;0},\Delta^\sigma_{3;1} \}$ and the quiver
$$\xymatrix@C0.4cm@R0.4cm{
&\Delta^\sigma_{2;0} &\\
\Delta^\sigma_{1;1}\ar@<0.5ex>[ru]\ar@<-0.5ex>[ru]&& \Delta^\sigma_{3;1}\ar[ll]
}$$
\noindent
which is the quiver of $\End_\la(T_3)$, where we drop the arrows between coefficients. The cluster graph is
$$\xymatrix@C0.4cm@R0.4cm{
\{ \Delta^\sigma_{1;1} \} \ar@{-}[r] & \{ \widetilde{\Delta}^\sigma_{1;1} \}}
$$
\noindent
where the only other cluster variable $\widetilde{\Delta}^\sigma_{1;1}$ is determined by
$\Delta^\sigma_{1;1} \widetilde{\Delta}^\sigma_{1;1} = (\Delta^\sigma_{2;0})^2 + \Delta^\sigma_{3;1}$.
The cluster type of $\mathbb{C}[U^{w_3}]$ is $A_1$.
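As a quick numerical sanity check of this exchange relation (not part of the argument; the numerical values below are arbitrary stand-ins for the minors), one can verify that mutation in type $A_1$ with frozen coefficients is an involution: mutating twice recovers the original cluster variable.

```python
from fractions import Fraction

def mutate(x, c2, c3):
    # A_1 exchange relation with frozen coefficients c2, c3:
    #   x * x' = c2^2 + c3
    return (c2 ** 2 + c3) / x

# arbitrary nonzero test values standing in for the Delta's
x1, c2, c3 = Fraction(2), Fraction(3), Fraction(5)

x1_tilde = mutate(x1, c2, c3)          # the only other cluster variable
assert x1 * x1_tilde == c2 ** 2 + c3   # the exchange relation holds
assert mutate(x1_tilde, c2, c3) == x1  # mutation is an involution
```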
Similarly, $\mathbb{C}[U^{w_4}]$ has a seed consisting of a cluster
$\{ \Delta^\sigma_{1;1},\Delta^\sigma_{2;0}\}$, coefficients $\{ \Delta^\sigma_{3;1},\Delta^\sigma_{4;0} \}$ and the quiver
$$\xymatrix@C0.4cm@R0.4cm{
&\Delta^\sigma_{2;0}\ar@<0.5ex>[rd]\ar@<-0.5ex>[rd]& & \Delta^\sigma_{4;0}\ar[ll] \\
\Delta^\sigma_{1;1}\ar@<0.5ex>[ru]\ar@<-0.5ex>[ru]&& \Delta^\sigma_{3;1}\ar[ll] &
}$$
The cluster graph is
$$\xymatrix{\cdots\text{ }\cdot \ar@{-}[r] & \cdot \ar@{-}[r] & \{ \Delta^\sigma_{1;1}, \Delta^\sigma_{2;0} \} \ar@{-}[r] & \cdot \ar@{-}[r] &\cdot \text{ }\cdots}$$
The cluster type of $\mathbb{C}[U^{w_4}]$ is $\widehat{A_1}$.
Note that this gives an example of a substructure of a cluster
structure coming from the inclusion $\Sub\la/I_{w_3}\subset\Sub\la/I_{w_4}$,
and a cluster map such that we get a subcluster algebra of a cluster algebra,
namely $\mathbb{C}[U^{w_3}]$ as a subcluster algebra of $\mathbb{C}[U^{w_4}]$.
\begin{document}
\maketitle
\begin{abstract}
The success of large-scale models in recent years has increased the importance of statistical models with numerous parameters. Several studies have analyzed over-parameterized linear models with high-dimensional data that may not be sparse; however, existing results depend on the independent setting of samples. In this study, we analyze a linear regression model with dependent time series data under over-parameterization settings. We consider an estimator via interpolation and developed a theory for excess risk of the estimator under multiple dependence types. This theory can treat infinite-dimensional data without sparsity and handle long-memory processes in a unified manner. Moreover, we bound the risk in our theory via the integrated covariance and nondegeneracy of autocorrelation matrices. The results show that the convergence rate of risks with short-memory processes is identical to that of cases with independent data, while long-memory processes slow the convergence rate. We also present several examples of specific dependent processes that can be applied to our setting.
\end{abstract}
\section{Introduction}
In this study, we analyze the stochastic regression problem with an over-parameterized setting.
Let us suppose $n$ covariates, $x_1,...,x_n \in \bH$, with Hilbert space $\bH$ and $n$ responses, $y_1,...,y_n \in \R $, observed from the following linear model with an unknown true parameter, $\beta^* \in \bH$:
\begin{align}
y_t = \langle \beta^*, x_t \rangle + \varepsilon_t, ~ t=1,...,n, \label{def:model}
\end{align}
where $\langle \cdot, \cdot \rangle$ denotes the inner product on $\bH$.
Here, the covariates $\{x_t: t = 1,...,n\}$ and the noise $\{\varepsilon_t : t = 1,...,n\}$ are centered stationary Gaussian processes independent of each other.
It should be noted that we allow dependence within $x_t$ and $\varepsilon_t$.
We focus on the over-parameterized case, wherein the dimension of $\bH$ is significantly larger than $n$ (sometimes infinite).
In this study, we derive sufficient conditions for a risk of an estimator for a model to converge to zero as $n \to \infty$, i.e., benign overfitting, under several settings for the dependence of $\{x_t: t = 1,...,n\}$.
The statistics for high-dimensional and large-scale data analyses have received significant attention over the decades.
The most representative approach involves the utilization of sparsity via $\ell_1$-norm regularization and its variants \citep{candes2007dantzig,van2008high,buhlmann2011statistics,hastie2019statistical}, which is effective when a signal to be estimated has many zero elements.
Another study investigated the high-dimensional limit of risks of estimators \citep{dobriban2018high,belkin2019reconciling,hastie2019surprises,bartlett2020benign} and revealed the risk limit when $n$ data instances and $p$ parameters diverged infinitely, while their ratio $p/n$ converged to a positive constant value.
Interpolators are typical estimators that perfectly fit the observed data under the $p \gg n$ setting; moreover, it has been demonstrated that the risk or variance of interpolators converges to zero \citep{liang2020just,hastie2019surprises,ba2019generalization}.
It should be noted that these studies did not use the sparsity of data or signals; however, they are compatible with recent large-scale data analyses.
We focus on the risk of interpolators for high-dimensional data with temporal dependence but no sparsity.
Although various studies have been conducted on high-dimensional dependent data with sparsity \citep{wang2007regression,Basu2015Regularized,han2015direct,Wong2020Lasso}, the analysis of interpolators without sparsity is still an emerging problem \citep{daskalakis2019regression,kandiros2021statistical}.
This is because the fundamental tools for studying interpolators, obtained from the random matrix theory \citep{bai2010spectral,Vershynin2018High}, rely on the independence assumption of data.
In this study, we investigate the risk of interpolators in stochastic linear regression \eqref{def:model} with high-dimensional dependent processes.
Specifically, we develop a notion of integrated covariance and derive non-asymptotic upper and lower bounds on the prediction risk \eqref{def:risk} of interpolators \eqref{def:interpolator}.
Furthermore, we demonstrate that the upper and lower bounds converge to zero as $n \to \infty$ under tail assumptions on the eigenvalues of cross-covariance of the process.
These results remain valid when the data dimension is significantly large ($p=\infty$ is even possible), under a condition on the autocovariance.
We demonstrate the validity of the results using the following example: a stationary Gaussian process, $\{x_t\}_{t=1}^n$, is a (i) short-memory process, such as autoregressive moving-average (ARMA); (ii) long-memory process, such as autoregressive fractionally integrated moving-average (ARFIMA); and (iii) autoregressive moving-average Hilbertian (ARMAH) process.
Moreover, we establish the conditions for over-parameterized models to make consistent predictions with the general class of covariates.
The main contributions of our theory are summarized below.
(i) Our results handle both short- and long-memory processes in a unified manner and remain valid without sparsity.
Treating long-memory processes in high-dimensional time series studies is a challenging task; however, our non-asymptotic analysis facilitates it.
In particular, we find that the convergence rate of the risk remains the same as in the independent case for short-memory covariates, but slows for long-memory covariates.
(ii) We clarify the role of bias and variance in the over-parameterized setting.
The bias of risk depends on time-directional eigenvalue decay by the dependence.
For short-memory covariate processes, the dependence does not affect the error because its effect is weaker than the time-independent spectrum.
By contrast, the variance depends on the minimum eigenvalue of the autocorrelation matrix, and it diverges when the matrix is degenerate.
This implies that the regularity of autocorrelation plays a similar role in the restricted eigenvalue property in sparse dependent data \citep{bickel2009simultaneous,van2008high}.
Moreover, on the technical side, our core results depend on the following two contributions.
First, we derive the moment inequality for empirical covariance generated from possibly infinite-dimensional Gaussian processes under dependence.
This is an extension of moment bounds on covariance matrices by \citet{HL2020Moment} based on Sudakov's inequality.
The developed inequality can be used to perform systematic investigations on dependent data, while the work by \citet{bartlett2020benign} on independent data used matrix concentration inequalities based on the generic chaining technique, which is not efficient for dependent data.
Second, we derive inequalities on the eigenvalues of empirical covariance operators for heterogeneous autocorrelation cases.
Unlike the independent and homogeneous correlation cases, the empirical eigenvalues differ for each instance of data in the heterogeneous case.
We resolve the heterogeneity by deriving the inequalities for individual eigenvalues.
\subsection{Related Studies}
We discuss the following two types of related studies in this paper.
\textit{High-dimensional time series}:
Time series analysis of high-dimensional data has been performed for the stochastic regression and vector autoregression (VAR) estimation problems with several sparsity-induced regularizations.
These studies used regularization under the sparsity assumption.
\citet{wang2007regression} proposed a flexible parameter tuning method for lasso, which can be applied to VAR.
\citet{alquier2011sparsity} studied a general scheme of $\ell_1$-norm regularization for dependent noise with a wide class of loss functions.
\citet{song2011large} focused on a large-scale VAR and its sparse estimation.
\citet{Basu2015Regularized} developed a spectrum-based characterization of dependent data and utilized a restricted eigenvalue condition for the $\ell_1$-norm regularization of dependent data.
\citet{kock2015oracle} studied lasso for time series data and derived an oracle inequality and model selection consistency.
\citet{han2015direct} used the Danzig selector for high-dimensional VAR and studied its efficiency by the spectral norm of a transition matrix.
\citet{wu2016performance} developed a widely applicable error analysis method on time series lasso with various tail probabilities of data.
\citet{guo2016high} focused on lasso for VAR with banded transition matrices.
\citet{davis2016sparse} proposed a two-step algorithm for VAR with sparse coefficients.
\citet{medeiros2016L1} studied time series lasso with non-Gaussian and heteroskedastic covariates.
\citet{masini2019regularized} derived an oracle inequality for sparse VAR with fat probability tail and various dependent settings.
\citet{Wong2020Lasso} studied time series lasso with general tail probability and dependence by relaxing a restriction in \citet{Basu2015Regularized}.
These studies imposed sparsity on the data; thus, they cannot handle the non-sparse high-dimensional data considered in our setting.
\textit{Large-scale model with dependent data}:
In contrast to sparse-based studies, not many studies have analyzed dependent data in the context of large-scale models.
The learning theory literature has advanced the predictive error analysis of dependent data using uniform convergence and complexity arguments; sparsity is not used there, and the results can be applied relatively easily to large-scale models.
\citet{yu1994rates} demonstrated uniform convergence in empirical processes with mixing observations and derived its convergence rate.
\citet{mohri2008rademacher} developed the Rademacher complexity to study the predictive errors in stationary mixing processes.
\citet{berti2009rate} derived uniform convergence in empirical processes with an exchangeable condition.
\citet{mohri2010stability} proved the data-dependent Rademacher complexity for several types of mixing processes.
\citet{agarwal2012generalization} developed an upper bound for generalization errors with general loss functions and mixing processes.
\citet{kuznetsov2015learning} derived an upper bound for predictive errors in stochastic processes without stationary or mixing assumptions.
\citet{dagan2019learning} derived a generalization error bound with dependent data that satisfied the Dobrushin condition.
\cite{daskalakis2019regression} and \cite{kandiros2021statistical} applied the Ising model, studied several linear models with dependent data, and derived novel estimation error bounds.
These studies yielded general results, but they did not address situations where the number of parameters diverges to infinity, such as over-parameterization.
\subsection{Organization}
Section \ref{sec:setting} presents the stochastic regression problem and the definition of estimators with interpolation.
Section \ref{sec:result} discusses the homogeneous autocorrelation case, followed by the assumptions and main results, including the risk bounds and convergence rates.
Section \ref{sec:derivation} presents an overview of the proofs of the main result as well as the moment inequality as our technical contribution.
Section \ref{sec:example} provides several examples of the Gaussian process as covariates.
Section \ref{sec:hetero} discusses the results of heterogeneous autocorrelation and the required proof techniques.
Section \ref{sec:conclusion} states the conclusions of this study.
The appendix contains all the proofs.
\subsection{Notation}
Let $\Hilbert$ be a {separable} Hilbert space with an inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$ induced by the inner product.
$I: \Hilbert \to \Hilbert$ is an identity operator.
For a linear operator $T: \bH \to \bH$, $T^{\top}$ denotes an adjoint operator of $T$.
$\|T\| := \sup_{z \in \bH: \|z\|=1} \|T z\|$ is an operator norm.
$\mu_i(T)$ is the $i$-th largest eigenvalue of $T$, and $\trace(T) = \sum_{i} \mu_i(T)$ denotes a trace of $T$.
For a vector $b \in \R^d$, $b^{(i)}$ denotes the $i$-th element of $b$ for $i=1,...,d$.
For $z \in \Hilbert$, $z^\top$ denotes a linear functional $z^\top: \Hilbert \to \R, z' \mapsto \langle z,z' \rangle$; moreover, $z = (z^\top)^\top$ is considered an adjoint operator of $z^\top$.
For a square matrix $A \in \R^{d \times d}$, $A^{(i_1,i_2)} \in \R$ denotes an element at the $i_1$-th row and $i_2$-th column of $A$ for $i_1,i_2 = 1,...,d$.
For positive sequences $\{a_n\}_n$ and $\{b_n\}_n$, $a_n \lesssim b_n$ and $a_n = O(b_n)$ indicate that there exists $C>0$ such that $a_n \leq C b_n$ for any $n \geq \Bar{n}$ with some $\Bar{n} \in \N$.
$a_n \prec b_n$ and $a_n = o(b_n)$ denote that for any $C>0$, $a_n \leq C b_n$ holds true for any $n \geq \Bar{n}$ with some $\Bar{n} \in \N$.
$a_n \asymp b_n$ denotes that both $a_n \lesssim b_n$ and $b_n \lesssim a_n$ hold true.
For $a \in \R$, $\log^a n$ denotes $(\log n)^a$.
For an event $E$, $\mone\{E\}$ is an indicator function such that $\mone\{E\} = 1$ if $E$ is true and $\mone\{E\} = 0$ otherwise.
For arbitrary random variables $r_{1},\ldots,r_{i}$, let $\Ep_{r_{1},\ldots,r_{i}}$ denote the conditional expectation given all random variables other than $r_{1},\ldots,r_{i}$.
$\N_{0}$ indicates the union of $\N$ and $\left\{0\right\}$.
\section{Setting and Assumption} \label{sec:setting}
\subsection{Hilbert-valued Gaussian Stationary Processes} \label{sec:setting_process}
Let us consider {an} $\Hilbert$-valued Gaussian stationary process $\left\{x_{t}:t\in\Z\right\}$, whose covariance operator $\Sigma_{0}:\Hilbert\to\Hilbert$ and cross-covariance operator $\Sigma_{h}:\Hilbert\to\Hilbert$ for $h\in\Z$ can be given by:
\begin{align*}
\Sigma_{h} u:=\Ep\left[\left(x_{t+h}^{\top}u\right)x_{t}\right]
\end{align*}
for all $t\in\Z$ and $u\in\Hilbert$.
Note that $\Sigma_0$ corresponds to an autocovariance operator, i.e., $\Sigma_0 u = \Ep[ ( x_t^\top u ) x_t]$.
Let us assume that these operators have the following spectral decompositions:
\begin{align*}
\Sigma_{h}=\sum_{i=1}^{\infty}\lambda_{h,i}e_{i}e_{i}^{\top},
\end{align*}
where $\left\{\lambda_{0,i}\right\}_{i}$ is a non-increasing sequence of non-negative numbers, and $\left\{e_{i}\in\Hilbert\right\}_{i}$ is an orthonormal basis of $\Hilbert$.
If $\Hilbert$ is finite-dimensional and the decompositions require only $K < \infty$ nonzero $\lambda_{h,i}$, then we set $\lambda_{h,i} = 0$ for $i \geq K+1$.
Note that
$\Sigma_{h}$ is self-adjoint for all $h\in\Z$ under these assumptions.
Let us also consider the spectral decomposition of $x_{t}$ with $\R^{n}$-valued Gaussian vectors $z_{i}=(z_{1,i},\ldots,z_{n,i})^\top$ such that
\begin{align}
x_{t}=\sum_{i=1}^{\infty}\sqrt{\lambda_{0,i}} z_{t,i}e_{i}, \label{def:spectral_decomp}
\end{align}
where $\Ep[z_{t,i}z_{t+h,i}]=\lambda_{h,i}/\lambda_{0,i}$ for all $i\in\N$ with $\lambda_{0,i}>0$, and $z_{t_{1},i_{1}}$ and $z_{t_{2},i_{2}}$ are independent for all $t_{1},t_{2}\in\Z$ if $i_{1}\neq i_{2}$.
We introduce a basic property of such a process as follows:
\begin{lemma}\label{LemmaAssumption01}
If $\lambda_{h,i}\to0$ as $\left|h\right|\to\infty$ for all $i\in\N$ with $\lambda_{0,i}>0$ and $\trace\left(\Sigma_{0}\right)<\infty$ holds true, then for all $u\in\Hilbert$, $x_{t}^{\top}u$ is mixing.
\end{lemma}
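To make the construction concrete, the following simulation sketch (illustrative only; the eigenvalue decay $\lambda_{0,i}=i^{-2}$ and the AR(1) coordinate processes are assumed choices, not requirements of the model) draws a truncated version of the decomposition \eqref{def:spectral_decomp} and checks the implied covariance structure empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 500, 50                        # time points and truncation level
lam0 = 1.0 / np.arange(1, K + 1)**2   # assumed eigenvalue decay of Sigma_0
rho = 0.5                             # assumed AR(1) correlation, same for all i

# simulate K independent stationary AR(1) coordinate processes z_{t,i}
z = np.empty((n, K))
z[0] = rng.standard_normal(K)
for t in range(1, n):
    z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal(K)

# x_t = sum_i sqrt(lambda_{0,i}) z_{t,i} e_i  -> coordinates in the basis {e_i}
x = z * np.sqrt(lam0)

# sanity checks: E[z_{t,i} z_{t+h,i}] = rho^h, so lambda_{h,i} = lambda_{0,i} rho^h
emp_var = x.var(axis=0).sum()         # estimates trace(Sigma_0)
assert abs(emp_var - lam0.sum()) < 0.3
emp_corr = np.mean(z[:-1, 0] * z[1:, 0])
assert abs(emp_corr - rho) < 0.15
```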
\subsection{Stochastic Regression Problem}
Suppose that we have $n$ observations, $(x_1,y_1),...,(x_n,y_n) \in \bH \times \R$, which follow the stochastic linear regression model \eqref{def:model} for $t=1,...,n$ with an unknown true parameter, $\beta^* \in \bH$.
$\{\varepsilon_t : t =1,...,n\}$ is a stationary Gaussian process such that $(\varepsilon_{1},\ldots,\varepsilon_{n})^{\top}\sim N\left(\mathbf{0},\Upsilon_{n}\right)$, where $\Upsilon_{n}\in\R^{n}\otimes\R^{n}$ is an {autocovariance} matrix whose diagonal entries are $\sigma_{\varepsilon}^{2}>0$.
{ Note that $x_{t}$ and $\varepsilon_{t}$ are independent of each other.}
We prepare some matrix-type notations.
Let $X$ be a linear map:
\begin{align*}
X: \bH \to \R^n, z \mapsto ( x_1^\top z,..., x_n^\top z)^\top,
\end{align*}
which is also a bounded operator.
Similarly, we define $X^\top: \R^n \to \bH$ as an adjoint of $X$.
For $\bH = \R^p$, $X$ corresponds to an $n \times p$ matrix, whose $t$-th row is $x_t$ for $t= 1,...,n$, and $X^\top$ corresponds to a transposed matrix.
We also define $Y = (y_1,...,y_n)^\top \in \R^n$ as a design vector.
Thus, we obtain the infinite series representation of $X$:
\begin{align}
Xu=\sum_{i=1}^{\infty}\sqrt{\lambda_{0,i}} z_{i}\left(e_{i}^{\top}u\right) \in\R^{n}, \label{def:series_representation}
\end{align}
where $u\in\Hilbert$ and $z_{i}=(z_{1,i},\ldots,z_{n,i})^{\top}$.
It is almost surely (a.s.) convergent by the It\^{o}--Nisio theorem \citep{IN1968Convergence} because $\Sigma_{0}$ is of the trace class.
We define a coordinate-wise autocorrelation matrix $\Xi_{i,n} \in \R^{n \times n}$ for observation $\{x_t: t =1,...,n\}$ of process $\{x_t: t \in \Z\}$.
The $(t_1,t_2)$-th element of $\Xi_{i,n}$ can be defined as:
\begin{align*}
\Xi_{i,n}^{\left(t_{1},t_{2}\right)}=\lambda_{\left|t_{1}-t_{2}\right|,i}/\lambda_{0,i},
\end{align*}
for all $i\in\N$ with $\lambda_{0,i}>0$, $t_{1},t_{2}=1,\ldots,n$.
We set all elements of $\Xi_{i,n}$ as zero for all $i\in\N$ with $\lambda_{0,i}=0$.
By definition, $\Xi_{i,n}$ is the autocorrelation matrix of $\{(x_{t}^{\top}e_{i}):t=1,\ldots,n\}$.
\begin{lemma}[corollary of Proposition 5.1.1 of \citealp{BD1991Time}]\label{LemmaAssumption02}
If $\lambda_{h,i}\to 0$ as $h \to \infty$ for all $i\in\N$ with $\lambda_{0,i}>0$, we have invertibility of $\Xi_{i,n}$ for all $i\in\N$ with $\lambda_{0,i}>0$ and $n\in\N$.
\end{lemma}
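For illustration, if coordinate $i$ has (assumed) AR(1) correlations $\lambda_{h,i}/\lambda_{0,i}=\rho^{|h|}$, then $\Xi_{i,n}$ is the familiar Toeplitz correlation matrix; the sketch below builds it and confirms the invertibility asserted by Lemma \ref{LemmaAssumption02} by computing its smallest eigenvalue.

```python
import numpy as np

n, rho = 20, 0.7                              # assumed sample size and AR(1) correlation
t = np.arange(n)
Xi = rho ** np.abs(t[:, None] - t[None, :])   # Xi^{(t1,t2)} = rho^{|t1 - t2|}

nu_n = np.linalg.eigvalsh(Xi).min()           # smallest eigenvalue of Xi_n
assert nu_n > 0                               # Xi_n is invertible, as the lemma asserts
# Toeplitz theory: the eigenvalues are bounded below by the minimum of the
# AR(1) spectral density, which equals (1 - rho) / (1 + rho)
assert nu_n >= (1 - rho) / (1 + rho) - 1e-9
```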
In studying the excess risk, we distinguish two cases of autocorrelation structure: (i) a \textit{homo-correlated process}: there exists a matrix $\Xi_{n}$ such that $\Xi_{i,n}=\Xi_{n}$ for all $i\in\N$ with $\lambda_{0,i}>0$; and (ii) a \textit{hetero-correlated process}: no such matrix $\Xi_{n}$ exists.
\subsection{Interpolation Estimator}
We consider an interpolation estimator with the minimum norm:
\begin{align}
\hat{\beta} \in \argmin_{\beta \in \Hilbert} \|\beta\|^2 \mbox{~~s.t.~~} \textstyle\sum_{t=1}^n (y_t - \langle \beta, x_t \rangle)^2 = 0, \label{def:interpolator}
\end{align}
with norm $\|\cdot\|$ for $\bH$.
A solution to the constraint in \eqref{def:interpolator} is guaranteed to exist if $\{x_t\}_{t=1}^n$ spans an $n$-dimensional linear space; i.e., if $\bH$ has dimension larger than $n$ (for example, $\bH = \R^p$ with $p \geq n$), then multiple solutions satisfy the linear equation $Y = X\beta$.
Using the matrix-type notation, the interpolation estimator \eqref{def:interpolator} can be rewritten as:
\begin{align}
\hat{\beta} = (X^\top X)^\dagger X^\top Y = X^\top (XX^\top)^{-1} Y, \label{eq:betahat}
\end{align}
where $^\dagger$ denotes the pseudo-inverse of linear operators.
Because operator $X^\top X$ is bounded and linear, its pseudo-inverse is guaranteed to exist \citep{desoer1963note}.
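In finite dimensions ($\bH=\R^{p}$ with $p>n$), \eqref{eq:betahat} is a one-liner; the sketch below (with arbitrary Gaussian test data, purely for illustration) checks that $\hat\beta$ interpolates and that it has the smallest norm among all solutions of $X\beta=Y$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 100                        # over-parameterized: p > n
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p) / np.sqrt(p)
Y = X @ beta_star + 0.1 * rng.standard_normal(n)

# minimum-norm interpolator: beta_hat = X^T (X X^T)^{-1} Y
beta_hat = X.T @ np.linalg.solve(X @ X.T, Y)

assert np.allclose(X @ beta_hat, Y)   # interpolates the data exactly
# any other solution of X beta = Y (beta_hat plus a null-space component)
# has a larger norm, since beta_hat lies in the row space of X
null_proj = np.eye(p) - np.linalg.pinv(X) @ X
other = beta_hat + null_proj @ rng.standard_normal(p)
assert np.allclose(X @ other, Y)
assert np.linalg.norm(other) >= np.linalg.norm(beta_hat)
```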
Furthermore, we study an excess prediction risk of $\hat{\beta}$ as
\begin{align}
R(\hat{\beta}) := \Ep^* [(y^* - \langle \hat{\beta}, x^* \rangle )^2 - (y^* - \langle {\beta}^*, x^* \rangle )^2] \label{def:risk}
\end{align}
where $(x^*, y^*)$ is an independent copy of $(x_0, y_0)$ and the expectation $\Ep^*[\cdot]$ is taken with respect to $(x^*, y^*)$.
The setting of $(x^*, y^*)$ is justified by the mixing property shown in the previous section.
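Since $y^{*}=\langle\beta^{*},x^{*}\rangle+\varepsilon^{*}$ with $\varepsilon^{*}$ centered and independent of $x^{*}$, the risk \eqref{def:risk} reduces to $\langle\hat\beta-\beta^{*},\Sigma_{0}(\hat\beta-\beta^{*})\rangle$. The finite-dimensional sketch below (assumed diagonal $\Sigma_{0}$ and arbitrary constants) checks this identity by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 30, 100, 0.1
lam0 = 1.0 / np.arange(1, p + 1)**2          # assumed diagonal Sigma_0
X = rng.standard_normal((n, p)) * np.sqrt(lam0)
beta_star = rng.standard_normal(p) / np.sqrt(p)
Y = X @ beta_star + sigma * rng.standard_normal(n)
beta_hat = X.T @ np.linalg.solve(X @ X.T, Y)  # minimum-norm interpolator

# closed form: R(beta_hat) = <beta_hat - beta*, Sigma_0 (beta_hat - beta*)>
d = beta_hat - beta_star
risk_exact = np.sum(lam0 * d**2)

# Monte Carlo over fresh test points (x*, y*)
m = 20_000
Xs = rng.standard_normal((m, p)) * np.sqrt(lam0)
ys = Xs @ beta_star + sigma * rng.standard_normal(m)
risk_mc = np.mean((ys - Xs @ beta_hat)**2 - (ys - Xs @ beta_star)**2)
assert abs(risk_mc - risk_exact) < 0.05
```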
\section{Error Analysis with Homo-Correlated Case} \label{sec:result}
We present an analysis of excess risk of the interpolation estimator.
First, we introduce basic assumptions and general bounds on the excess risk.
Subsequently, we provide a sufficient condition on the covariance operators of the covariate for the excess risk to converge to zero.
In this section, we consider a homo-correlated case, wherein the autocorrelation matrices do not depend on $i$.
Rigorously, there exists $\Xi_n \in \R^{n \times n}$ that satisfies
\begin{align}
\Xi_n = \Xi_{i,n}, ~ \forall i \in \N. \label{def:auto_correlation_homo}
\end{align}
The case without homogeneity is discussed in Section \ref{sec:hetero} because it requires a different approach.
\subsection{Assumption and Notion}
\subsubsection{Basic Assumption}
We present assumptions on the covariance operators ($\Sigma_h$) of a covariate process $\{x_t : t \in \Z\}$.
We consider a setting wherein both $\Sigma_h$ and $\lambda_{h,i}$ can depend on the sample size, $n$, which allows us to analyze the high dimensionality depending on $n$ \citep{van2008high,bartlett2020benign}.
In this section, we discuss assumptions and bounds under fixed $n$.
\begin{assumption}[basic] \label{asmp:basic}
For fixed $n$, the autocovariance, $\Sigma_0$, satisfies the following:
\begin{enumerate}
\item[(i)] $\trace\left(\Sigma_0\right)<\infty$;
\item[(ii)] $\lambda_{0,n+1}>0$.
\end{enumerate}
Further, the cross-covariance operator, $\Sigma_h$, satisfies the following:
\begin{enumerate}
\item[(iii)] $\lambda_{h,i}\to 0$ as $\left|h\right|\to\infty$ for all $i\in\N$;
\end{enumerate}
\end{assumption}
Conditions (i) and (iii) validate the mixing property of $\{x_t: t \in \Z\}$ and invertibility of $\Xi_n$, as discussed in Section \ref{sec:setting}.
By condition (ii), a linear space spanned by the observed process, $\{x_t: t=1,...,n\}$, becomes $n$-dimensional almost surely; hence, the estimator \eqref{def:interpolator} is guaranteed to work.
In Section \ref{sec:example}, we provide several examples to satisfy Assumption \ref{asmp:basic}.
\subsubsection{Integrated Covariance}
We present several notions to demonstrate our result.
First, we define a positive version of a cross-covariance operator, $\Sigma_h$, as:
\begin{align*}
\tilde{\Sigma}_{h}:={\sum_{i=1}^{\infty}\left|\lambda_{h,i}\right|}e_{i}e_{i}^{\top}.
\end{align*}
Then, for $n \in \N$, we define integrated covariance:
\begin{definition}[integrated covariance] For $n \in \N$, we define
\begin{align*}
\bar{\Sigma}_n :=\Sigma_{0}+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}.
\end{align*}
\end{definition}
This operator describes a unified covariance of the entire observed process, $\{x_t: t=1,...,n\}$.
Note that the time width of $\Bar{\Sigma}_n$ depends on the number of data instances, $n$, and Assumption \ref{asmp:basic} yields $\mu_1(\Bar{\Sigma}_n) < \infty$ for fixed finite $n$.
We discuss a limit of $\bar{\Sigma}_n$ as $n \to \infty$ in Section \ref{sec:benign}.
\citet{Basu2015Regularized} considered a similar notion, i.e., the spectral density of integrated covariance with infinite time width, $\sum_{h=-\infty}^\infty \Sigma_h$.
While \citet{Basu2015Regularized} required the largest eigenvalue of $\sum_{h=-\infty}^\infty \Sigma_h$ to be bounded, our analysis uses only integrated covariance with finite time width.
\subsubsection{Degeneracy of autocorrelation Matrix}
We introduce a notation for the smallest eigenvalue of the autocorrelation matrix, $\Xi_n$, in \eqref{def:auto_correlation_homo}:
\begin{align}
\nu_n := \mu_n(\Xi_n). \label{def:non-degenerate_correlation}
\end{align}
This value describes the degeneracy of observed data, which is specific to the dependent data setting.
Lemma \ref{LemmaAssumption02} and Assumption 1 (iii) guarantee that $\nu_n > 0$ for finite $n$.
We provide examples of $\nu_n$ in Section \ref{sec:example}.
In the study by \citet{Basu2015Regularized}, a similar condition was required, i.e., the lower bound on the spectral density of the sums of cross-covariance matrices should be positive.
By contrast, we take a non-asymptotic approach and study the eigenvalues of matrices with finite time width.
\subsubsection{Effective Rank}
We employ the notion of effective ranks of linear operators:
\begin{definition}[effective rank]
For a linear operator $T: \Hilbert \to \Hilbert$ and $k \in \N$, two types of effective ranks of $T$ can be defined:
\begin{align*}
r_k(T) = \frac{\sum_{i > k} \mu_i(T)}{ \mu_{k+1}(T)}, \mbox{~and~} R_k(T) = \frac{\big(\sum_{i > k} \mu_i(T)\big)^2}{\sum_{i > k} \mu_i(T)^2}.
\end{align*}
Furthermore, we define an effective number of bases with constant $b > 0$:
\begin{align*}
k^* = \min \{k \geq 0: r_k(\Sigma_0) \geq bn \}.
\end{align*}
\end{definition}
This definition has been used in studies related to the matrix concentration inequality \citep{koltchinskii2017concentration} and benign overfitting in linear regression \citep{bartlett2020benign}.
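The following sketch evaluates both effective ranks and $k^{*}$ for an assumed polynomial spectrum $\mu_{i}=i^{-2}$ (truncated for computation); it uses the convention $R_k(T)=\big(\sum_{i>k}\mu_i\big)^2/\sum_{i>k}\mu_i^2$ from the benign-overfitting literature. Here $r_{k}$ grows roughly linearly in $k$, so $k^{*}$ scales linearly with $n$.

```python
import numpy as np

mu = 1.0 / np.arange(1, 10_001, dtype=float) ** 2   # assumed spectrum mu_i = i^{-2}

def r(k):
    # r_k(T) = (sum_{i > k} mu_i) / mu_{k+1}   (0-indexed: mu[k] is mu_{k+1})
    return mu[k:].sum() / mu[k]

def R(k):
    # R_k(T) = (sum_{i > k} mu_i)^2 / sum_{i > k} mu_i^2
    tail = mu[k:]
    return tail.sum() ** 2 / (tail ** 2).sum()

n, b = 100, 1.0
k_star = next(k for k in range(len(mu)) if r(k) >= b * n)  # effective basis count

# for mu_i = i^{-2}, r_k ~ k, so k* is of order b*n
assert 80 < k_star < 120
```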
In the homo-correlated case, the following result simplifies our analysis.
\begin{lemma} \label{lem:homo_efficient_rank}
In the homo-correlated process case,
$r_0(\Sigma_0) = r_0(\Bar{\Sigma}_n)$ holds true.
\end{lemma}
\subsection{Main Result: General Bound on Excess Risk}
We first derive general upper and lower bounds on the excess risk without specifying the covariance operators, $\Sigma_h, h \in \Z$.
\begin{theorem}[homo-correlated case] \label{thm:homo}
Consider the interpolation estimator $\hat{\beta}$ in \eqref{def:interpolator} for the stochastic regression problem \eqref{def:model}.
Suppose that Assumption \ref{asmp:basic} holds true.
Further, assume that $\lambda_{h,i_{1}}/\lambda_{0,i_{1}}=\lambda_{h,i_{2}}/\lambda_{0,i_{2}}$ for all $h\in\Z$ and $i_{1},i_{2}\in\N$ such that $\lambda_{0,i_{1}},\lambda_{0,i_{2}}>0$.
Then, there exist $b,c,c_{1}>1$ such that for all $\delta \in (0,1)$ with $\log\left(1/\delta\right)\le n/c$: (i) if $k^{*}\ge n/c$, then $\Ep R(\hat{\beta})\ge \mu_{1}^{-1}\left(\Xi_{n}\right)\mu_{n}\left(\Upsilon_{n}\right)/c$; (ii) otherwise,
\begin{align*}
R(\hat{\beta})&\le c\delta^{-1}\left\|\beta^{*}\right\|^2\left(\sqrt{\frac{\left\|\Sigma_{0}\right\|\left\|\bar{\Sigma}_{n}\right\|r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{\left\|\bar{\Sigma}_{n}\right\|r_{0}\left(\Sigma_0\right)}{n}\right)+c\frac{\mu_{1}\left(\Upsilon_{n}\right)}{\nu_n}\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right)\log(\delta^{-1})
\end{align*}
with probability at least $1-\delta$, and
\begin{align*}
\Ep R(\hat{\beta})\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{c\mu_{1}\left(\Xi_{n}\right)}\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right).
\end{align*}
\end{theorem}
The upper bound is composed of two elements: the first term on the right side bounds the bias part of the risk, and the second term represents the variance.
The bias term yields several implications.
First, its scale depends on the Hilbert norm of $\beta^*$.
This is in contrast to sparsity-based high-dimensional analysis, which depends on the $\ell_1$-norm (or the number of nonzero coordinates) of $\beta^*$.
Second, the term is affected by the largest eigenvalue of integrated covariance $\Bar{\Sigma}_n$, which provides a similar implication as the high-dimensional time series theory by \citet{Basu2015Regularized}.
The variance term depends on the smallest eigenvalue of $\Xi_{n}$, which indicates that the covariate process should not degenerate in the time direction.
This can be considered a time series version of the restricted eigenvalue condition in high-dimensional analysis \citep{bickel2009simultaneous,van2008high}.
By contrast, the lower bound has a similar form as the variance term of the upper bound.
This indicates that the order of the lower bound matches that of the upper bound up to the point where the bias and variance terms of the upper bound are balanced.
\begin{remark}[comparison with independent case]\label{remark:comparison}
Now, we compare our results with the linear regression case with independent observations by \cite{bartlett2020benign}.
Specifically, if $\Sigma_h = 0$ for any $h \in \Z \backslash \{0\}$ and $\Upsilon_n = \sigma_\varepsilon^2 I$, Theorem 4 in \citet{bartlett2020benign} bounds the excess risk with probability at least $1-\delta$:
\begin{align}
R(\hat{\beta})&\le c \left\|\beta^{*}\right\|^{2}\|\Sigma_0\| \left( \sqrt{\frac{r_0(\Sigma_0)}{n}} + \frac{r_0(\Sigma_0)}{n}+\sqrt{\frac{\log(\delta^{-1})}{n}}\right) + c\sigma_{\varepsilon}^{2}\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_0\right)}\right)\log(\delta^{-1}). \label{ineq:bartlett}
\end{align}
Comparing this bound with Theorem \ref{thm:homo}, we observe the following differences:
(i) The bound for the dependent case contains $\|\Bar{\Sigma}_n\|$ and $1/\nu_n$, whose implications have been discussed above.
(ii) The first term of the bound in Theorem \ref{thm:homo} contains $\delta^{-1}$, whereas that of \eqref{ineq:bartlett} depends on $\log(\delta^{-1})$.
This is because \citet{bartlett2020benign} used matrix concentration inequalities to bound the bias, whereas we use the moment and Markov's inequalities to handle the dependence of the data effectively.
Further details are discussed in Section \ref{sec:derivation}.
\end{remark}
\subsection{Benign Covariance: Asymptotic Analysis of Risk} \label{sec:benign}
We present sufficient conditions under which the upper bound converges to zero as $n \to \infty$.
Let us recall that both $\Sigma_h$ and its eigenvalues can depend on the sample size, $n$; that is, $\Sigma_h = \Sigma_{h,n}$ for $h \in \Z$.
First, we introduce a characterization on the autocovariance, $\Sigma_0$, developed by \citet{bartlett2020benign}.
\begin{definition}[benign autocovariance] \label{def:benign_covariance}
An autocovariance operator, $\Sigma_0$, is \textit{benign} if there exist sequences, $\{\zeta_n: n \in \N\}, \{\eta_n: n \in \N\} \subset \R_+$, such that
\begin{align*}
\zeta_n := \trace(\Sigma_0) = o(n), \mbox{~~and~~}
\eta_n := \max\left\{ \frac{k^*}{n}, \frac{n}{R_{k^*}(\Sigma_0)} \right\} = o(1), \mbox{~as~}n \to \infty.
\end{align*}
\end{definition}
\citet{bartlett2020benign} provides specific examples of benign autocovariances.
Although this characterization of the autocovariance is not directly related to the dependence of the data, we include it for completeness.
\begin{example}[Theorem 31 in \citet{bartlett2020benign}] \label{ex:benign_covariance}
An autocovariance operator, $\Sigma_0$, with eigenvalues $\{\lambda_{0,i}: i \in \N\}$ is benign, as stated in Definition \ref{def:benign_covariance}, with the corresponding $\{\eta_n: n \in \N \}$ and $\{\zeta_n: n \in \N \}$, in the following cases:
\begin{itemize}
\item[(i)] $\lambda_{0,i} = i^{-1} \log^{-\gamma}i$ for $\gamma > 0$, and $\zeta_n = O(1),\eta_n = o(1)$.
\item[(ii)] $\lambda_{0,i} = i^{-(1+\gamma_n)}$ for $\gamma_n = o(1)$, and $\zeta_n = O(1),\eta_n =O( \min\{(\gamma_n n)^{-1} + \gamma_n, 1\})$.
\item[(iii)] $\lambda_{0,i} = i^{-\gamma}\mone\{i \leq p_n\}$ for $\gamma \in (0,1)$ and $ n \prec p_n \prec n^{1/(1-\gamma)}$, and $\zeta_n = O(1),\eta_n = o(1)$.
\item[(iv)] $\lambda_{0,i} = (\gamma_i + \epsilon_n)\mone\{i \leq p_n\}$, for $\gamma_i = \Theta(\exp(-i)), n \preceq p_n$, and $ \epsilon_n = p_n^{-1}n e^{-o(n)}$, \\
and $\zeta_n = O(n e^{-o(n)}),\eta_n =O( n^{-1}( 1 + \log(n/(\epsilon_n p_n))) + n p_n^{-1})$.
\end{itemize}
\end{example}
In case (i), $\Sigma_0$ is independent of $n$; otherwise, it depends on $n$.
Cases (i) and (ii) yield infinitely many nonzero eigenvalues, while cases (iii) and (iv) yield $p_n$ nonzero eigenvalues, a finite dimensionality that increases with $n$.
\begin{remark}[convergence rate in independent case]\label{remark:comparison_rate}
\cite{bartlett2020benign} studied the convergence of risk in the case of independent data.
If $\Sigma_h = 0$ for any $h \in \Z \backslash \{0\}$ and $\Sigma_0$ is benign, as stated in Definition \ref{def:benign_covariance}, the following holds true:
\begin{align}
R(\hat{\beta}) = O\left(\zeta_n^{1/2} n^{-1/2} + \eta_n \right). \label{ineq:convergence_rate_independent}
\end{align}
\end{remark}
Next, to study the dependency of data, we characterize the eigenvalues of cross-covariance operators $\Sigma_h$ for $h \in \Z \backslash \{0\}$.
We consider the following form:
\begin{align}
|\lambda_{h,i}| \leq c |h|^{-\alpha} \lambda_{0,i},~ \forall h \in \Z \backslash \{0\}, \forall i \in \N, \label{ineq:eigen_decay}
\end{align}
with $\alpha > 0$ and some constant, $c > 0$.
For $\alpha > 1$, the covariate process is considered a \textit{short-memory process}, i.e., $\|\sum_{h=-\infty}^\infty \Sigma_h\| < \infty$ and $\|\sum_{h=-\infty}^\infty \Sigma_h\| \neq 0$ hold true by the decay of $\lambda_{h,i}$ in $h$.
For {$\alpha \le 1$}, the process is {possibly} a \textit{long-memory process}, which has $\|\sum_{h=-\infty}^\infty \Sigma_h\| = \infty$.
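To see why $\alpha = 1$ separates the two regimes, note that in the homo-correlated case the operators $\Sigma_h$ share the eigenbasis of $\Sigma_0$, so \eqref{ineq:eigen_decay} and the triangle inequality give the heuristic bound
\begin{align*}
\Bigg\|\sum_{h=-\infty}^\infty \Sigma_h\Bigg\| \leq \|\Sigma_0\| + 2c\|\Sigma_0\| \sum_{h=1}^\infty h^{-\alpha},
\end{align*}
where the series on the right side converges if and only if $\alpha > 1$.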
For both cases, we obtain the following result:
\begin{proposition}[convergence rate] \label{prop:benign_rate}
Consider the setting and assumptions for Theorem \ref{thm:homo}.
Suppose that $\Sigma_0$ is a benign covariance, as stated in Definition \ref{def:benign_covariance}, and that $\lambda_{h,i}$ satisfies \eqref{ineq:eigen_decay} with $\alpha > 0$.
Then, we have
\begin{align*}
R(\hat{\beta}) = O\left(\zeta_n^{1/2} \kappa^{(\alpha)}_n + \nu_n^{-1}\eta_n\right), \mbox{~as~}n \to \infty,
\end{align*}
where
\begin{align*}
\kappa^{(\alpha)}_n =
\begin{cases}
O( n^{-\alpha/2}) & (\alpha \in (0,1))\\
O( n^{-1/2}\log^{1/2} n) & (\alpha = 1)\\
O( n^{-1/2}) & (\alpha > 1).
\end{cases}
\end{align*}
\end{proposition}
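The three regimes of $\kappa^{(\alpha)}_n$ can be traced to the growth of $\|\bar{\Sigma}_n\|$ under \eqref{ineq:eigen_decay}; heuristically, by the triangle inequality,
\begin{align*}
\left\|\bar{\Sigma}_{n}\right\| \leq \Bigg(1 + 2c\sum_{h=1}^{n-1}h^{-\alpha}\Bigg)\left\|\Sigma_{0}\right\| =
\begin{cases}
O(n^{1-\alpha})\left\|\Sigma_{0}\right\| & (\alpha \in (0,1))\\
O(\log n)\left\|\Sigma_{0}\right\| & (\alpha = 1)\\
O(1)\left\|\Sigma_{0}\right\| & (\alpha > 1),
\end{cases}
\end{align*}
so the bias term $\sqrt{\|\Sigma_0\|\|\bar{\Sigma}_n\| r_0(\Sigma_0)/n} = \sqrt{\|\bar{\Sigma}_n\| \trace(\Sigma_0)/n}$ in Theorem \ref{thm:homo} scales as $\zeta_n^{1/2}\kappa^{(\alpha)}_n$ up to constants.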
This rate consists of two terms, $\zeta_n^{1/2} \kappa^{(\alpha)}_n$ and $\nu_n^{-1}\eta_n$, corresponding to the bias and variance in Theorem \ref{thm:homo}, respectively.
Importantly, the results based on $\kappa^{(\alpha)}_n$ can handle both long- and short-memory processes simultaneously.
Because existing studies on dependent data have been limited to short-memory processes, such as \citet{Basu2015Regularized}, where $\|\sum_{h=-\infty}^\infty \Sigma_h\| < \infty$ was required, this result is more general.
A comparison with the rate in the independent case \eqref{ineq:convergence_rate_independent} yields several implications:
(i) In the short-memory case $(\alpha > 1)$, the bias exhibits the same rate, and only the variance deteriorates, through the factor $\nu_n^{-1}$ arising from the autocorrelation matrix, $\Xi_n$.
(ii) The bias rate deteriorates only when the covariates follow a long-memory process.
(iii) For some of the benign autocovariances in Example \ref{ex:benign_covariance}, the risk is no longer guaranteed to converge to zero, owing to the rate deterioration caused by the dependence.
Figure \ref{fig:compare_rate} demonstrates the comparison.
\begin{figure}
\centering
\includegraphics[width=0.6\hsize]{rate_3.png}
\caption{Comparison of the convergence rates of the risk in the dependent and independent cases. $\alpha$ (the decay rate of the eigenvalues of the cross-covariance) and $\nu_n^{-1}$ (the inverse of the smallest eigenvalue of the autocorrelation matrix) affect the convergence rate in the dependent case.}
\label{fig:compare_rate}
\end{figure}
\section{Derivation of Bounds} \label{sec:derivation}
We present an overview of the risk bound in Theorem \ref{thm:homo} along with the moment inequality as our technical contribution toward achieving the bound.
\subsection{Proof Overview}
The excess risk can be decomposed into bias and variance.
We define an empirical noise vector in $\R^n$ as $\mE = Y - X \beta^*$.
\begin{lemma} \label{lem:risk_decomp}
For any $\delta \in (0,1)$, we obtain the following with probability at least $1-\delta$:
\begin{align*}
R(\hat{\beta}) \leq 2 \beta^{*\top} T_B \beta^* + 2 C_{\delta} \trace(T_V),
\end{align*}
where $C_{\delta} = 2\log(1/\delta) + 1$ and
\begin{align*}
&T_B := (I - X^\top (X X^\top)^\dagger X) \Sigma_0 (I - X^\top (X X^\top)^{\dagger}X) \\
&T_V := \Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{-1}X\Sigma_{0} X^{\top}\left(XX^{\top}\right)^{-1}\Upsilon_{n}^{1/2}.
\end{align*}
Furthermore, we have $\Ep R(\hat{\beta}) \geq \beta^{*\top} T_B \beta^* + \trace(\Ep[T_V])$.
\end{lemma}
This decomposition is obtained by standard calculations.
The first and second terms, $\beta^{*\top} T_B \beta^*$ and $C_{\delta} \trace({ T_{V}})$, are analogues of the bias and variance, respectively.
We investigate these two terms separately.
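To sketch the standard calculation: writing the interpolator as $\hat{\beta} = X^\top (XX^\top)^\dagger Y$ with $Y = X\beta^* + \mE$, and recalling that $R(\hat{\beta}) = (\hat{\beta}-\beta^*)^\top \Sigma_0 (\hat{\beta}-\beta^*)$, we have
\begin{align*}
\hat{\beta}-\beta^{*} = X^{\top}\left(XX^{\top}\right)^{\dagger}\mE - \left(I - X^{\top}\left(XX^{\top}\right)^{\dagger}X\right)\beta^{*}.
\end{align*}
The inequality $(a+b)^{\top}\Sigma_{0}(a+b)\leq 2a^{\top}\Sigma_{0}a+2b^{\top}\Sigma_{0}b$ then separates the two contributions: the $\beta^*$ part yields $2\beta^{*\top}T_B\beta^*$, while the Gaussian quadratic form in $\mE$ concentrates around $\trace(T_V)$, which produces the factor $C_\delta$.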
To study $C_{\delta} \trace({ T_{V}})$, we use a decorrelated series representation of $X$.
Based on the original series representation \eqref{def:series_representation} and autocorrelation matrix $\Xi_n$, we obtain the following for any $u \in \Hilbert$:
\begin{align*}
Xu= \Xi_n^{1/2} \sum_{i=1}^{\infty}\sqrt{\lambda_{0,i}} \tilde{z}_{i}\left(e_{i}^{\top}u\right),
\end{align*}
where $\Tilde{z}_i \sim N(\mathbf{0},I_n)$, $i=1,2,\ldots$, denote i.i.d.\ $n$-dimensional standard Gaussian random vectors.
This decorrelation by $\Xi_n^{1/2}$ is important to our setting with dependence.
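This representation follows by stacking each coordinate process over time: in the homo-correlated case, $z_i := (x_1^\top e_i, \ldots, x_n^\top e_i)^\top/\sqrt{\lambda_{0,i}} \sim N(\mathbf{0}, \Xi_n)$, so $z_i = \Xi_n^{1/2}\tilde{z}_i$ with $\tilde{z}_i \sim N(\mathbf{0}, I_n)$ independent across $i$, and hence
\begin{align*}
Xu = \sum_{i=1}^{\infty}\sqrt{\lambda_{0,i}}\, z_{i}\left(e_{i}^{\top}u\right) = \Xi_{n}^{1/2}\sum_{i=1}^{\infty}\sqrt{\lambda_{0,i}}\, \tilde{z}_{i}\left(e_{i}^{\top}u\right).
\end{align*}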
Because $X$ is represented by a weighted sum of i.i.d. random vectors, we can apply the extended version of techniques proposed by \citet{bartlett2020benign} for independent data and bound the term; further details are provided in Section \ref{sec:bound_tv_homo}.
Next, we study the first term $\beta^{*\top} T_B \beta^*$, which is more critical in this study.
We define an empirical autocovariance matrix:
\begin{align}
\hat{\Sigma}_0 := \frac{1}{n} X^\top X. \label{def:sigma_0_hat}
\end{align}
Because we can obtain $(I - X^\top (X X^\top)^\dagger X)X^\top = X^\top - X^\top (X X^\top)^{\dagger} (X X^\top) = 0 $, a simple calculation yields
\begin{align*}
\beta^{*\top} T_B \beta^* = \beta^{*\top} (I - X^\top (X X^\top)^\dagger X) (\Sigma_0 - \hat{\Sigma}_0) (I - X^\top (X X^\top)^\dagger X) \beta^*.
\end{align*}
Let us recall that $X^\top (X X^\top)^\dagger X$ is a projection operator, which yields $\|I - X^\top (X X^\top)^\dagger X\| \leq 1$; hence, it is sufficient to study the empirical deviation of the autocovariance operator, $\Sigma_0 - \hat{\Sigma}_0$.
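Combining the projection property with the identity above yields a bound of the form
\begin{align*}
\beta^{*\top} T_B \beta^* \leq \left\|I - X^\top (X X^\top)^\dagger X\right\|^2 \left\|\Sigma_0 - \hat{\Sigma}_0\right\| \left\|\beta^*\right\|^2 \leq \left\|\hat{\Sigma}_0 - \Sigma_0\right\| \left\|\beta^*\right\|^2.
\end{align*}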
By Markov's inequality, we have
\begin{align*}
P\left(\|\hat{\Sigma}_{0}-\Sigma_{0}\|\ge t\right)\le \frac{1}{t}\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right],
\end{align*}
for any $t > 0$.
Consequently, it is sufficient to study the moment term, $\Ep[\|\hat{\Sigma}_{0}-\Sigma_{0}\|]$.
We develop the upper bound in the next section.
\subsection{Moment Inequality on Infinite-dimensional Dependent Processes}
The moment inequality for $\Hilbert$-valued dependent processes is the most technically important part of this study.
We developed an inequality to bound the empirical deviation of covariance operators by (possibly) infinite-dimensional dependent processes.
We present the following statement:
\begin{proposition}\label{prop:moment_ineq_display}
Let us assume that $\Hilbert$ is a separable Hilbert space and $x_{t}$ is an $\Hilbert$-valued centered stationary Gaussian process whose cross-covariance operators, $\Sigma_{h}$, defined for all $u\in\Hilbert$ as
\begin{align*}
\Sigma_{h}u:=\Ep\left[\left(x_{t+h}^{\top}u\right)x_{t}\right],
\end{align*}
are of the trace class for all $h\in\Z$.
We have
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]&\le \frac{2\sqrt{2}}{n}\left(\sqrt{2}\trace\left(\bar{\Sigma}_n\right)+\sqrt{2n\left\|\Sigma_{0}\right\|\trace\left(\bar{\Sigma}_n\right)}+\sqrt{n\trace\left(\Sigma_{0}\right)\left\|\bar{\Sigma}_{n}\right\|}\right),
\end{align*}
where for all $u\in\Hilbert$,
\begin{align*}
\bar{\Sigma}_{n}u:=\Sigma_{0}u+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}u,\ \tilde{\Sigma}_{h}u:=\frac{1}{2}\sum_{i}\sigma_{i}\left(\Sigma_{h}\right)\left(\left(f_{h,i}^{\top}u\right)f_{h,i}+\left(g_{h,i}^{\top}u\right)g_{h,i}\right),
\end{align*}
$\sigma_{i}\left(\Sigma_{h}\right)$ are singular values of $\Sigma_{h}$, $\left\{f_{h,i}\right\}$ are left singular vectors and $\left\{g_{h,i}\right\}$ are right ones of $\Sigma_{h}$.
\end{proposition}
This inequality shows that the integrated covariance, $\Bar{\Sigma}_n$, characterizes a moment of empirical deviation in the dependent process.
If $\Bar{\Sigma}_\infty$ is of the trace class, the bound is guaranteed to be $O(1/\sqrt{n})$.
Moreover, if $\trace(\Bar{\Sigma}_n) = o(n)$ as $n \to \infty$, as in the case of a long-memory process, Proposition \ref{prop:moment_ineq_display} still yields a valid upper bound.
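As a sanity check, in the independent case ($\Sigma_h = 0$ for $h \in \Z \backslash \{0\}$), we have $\bar{\Sigma}_n = \Sigma_0$, and the bound in Proposition \ref{prop:moment_ineq_display} reduces to
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right] \leq \frac{2\sqrt{2}}{n}\left(\sqrt{2}\trace\left(\Sigma_{0}\right)+(\sqrt{2}+1)\sqrt{n\left\|\Sigma_{0}\right\|\trace\left(\Sigma_{0}\right)}\right) = O\left(\left\|\Sigma_{0}\right\|\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{r_{0}\left(\Sigma_{0}\right)}{n}\right)\right),
\end{align*}
which recovers the classical rate for the empirical covariance of i.i.d.\ Gaussian data.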
The upper bound is obtained by extending the moment inequality for finite-dimensional dependent processes by \citet{HL2020Moment} to the infinite-dimensional case.
\citet{HL2020Moment} used Sudakov's inequality, which exploits the Gaussianity of the processes, and this study follows that strategy.
Further details are provided in Section \ref{sec:appendix_moment_inequality}.
\section{Extension to Hetero-correlated Case} \label{sec:hetero}
We provide the excess risk bound for the hetero-correlated case with the matrices $\Xi_{i,n}$; we treat this case separately because its proof strategy and implications are different.
\begin{theorem}[hetero-correlated case]\label{thm:hetero}
Consider an interpolation estimator, $\hat{\beta}$, in \eqref{def:interpolator} for the stochastic regression problem \eqref{def:model}.
Suppose that Assumption \ref{asmp:basic} holds true.
Further, assume that for some constant $\epsilon\in\left(0,1\right)$,
\begin{align}
\epsilon\le \inf_{n\in\N}\inf_{i\in\N}\mu_{n}\left(\Xi_{i,n}\right)\le \sup_{n\in\N}\sup_{i\in\N}\mu_{1}\left(\Xi_{i,n}\right)\le \epsilon^{-1}. \label{ineq:eigenvalues_correlation}
\end{align}
Then, there exist constants $b,c,c_{1}>1$ such that for all $\delta \in (0,1)$ with $\log\left(1/\delta\right)\le n/c$: (i) if $k^{*}\ge n/c$, then $\Ep R(\hat{\beta})\ge \mu_{1}^{-1}\left(\Xi_{n}\right)\mu_{n}\left(\Upsilon_{n}\right)/c$; (ii) otherwise,
\begin{align*}
R(\hat{\beta})&\le c\delta^{-1}\left\|\beta^{*}\right\|^2\left(\sqrt{\left\|\Sigma_{0}\right\|\left\|\bar{\Sigma}_{n}\right\|}\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\sqrt{\frac{r_{0}\left(\bar{\Sigma}_{n}\right)}{n}}\right)+\frac{\left\|\bar{\Sigma}_{n}\right\|r_{0}\left(\bar{\Sigma}_{n}\right)}{n}\right)\\
& \quad +c\mu_{1}\left(\Upsilon_{n}\right)\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right)\log(\delta^{-1}),
\end{align*}
with probability at least $1-\delta$, and
\begin{align*}
\Ep R(\hat{\beta})\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{c}\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right).
\end{align*}
\end{theorem}
This result provides two implications.
First, the bias part of the upper bound is the same as that in the homo-correlated case in Theorem \ref{thm:homo}.
Second, the variance term of the upper bound does not have a coefficient related to the eigenvalues of $\Xi_{i,n}$.
However, it is not stricter than the bound in the homo-correlated case because we assume that the eigenvalues are bounded above and away from zero, as in \eqref{ineq:eigenvalues_correlation}.
\begin{remark}[proof difference]
We describe the difference between the proofs of Theorem \ref{thm:hetero} (hetero-correlated case) and Theorem \ref{thm:homo} (homo-correlated case), which appears in the second term of the upper bound.
The proof for the homo-correlated case relies on the property of Gaussian random vectors, which states that for all positive definite $M_{n}\in\R^{n\times n}$,
\begin{align*}
v\sim N\left(\mathbf{0},M_{n}\right)\Rightarrow M_{n}^{-1/2}v\sim N\left(\mathbf{0},I_{n}\right).
\end{align*}
This enables us to standardize all coordinate processes, $x_{t}^{\top}e_{i}/\sqrt{\lambda_{0,i}}$ for $i\in\N$ with $\lambda_{0,i}>0$, using the common matrix, $\Xi_{n}$.
However, the same approach is not suitable in the hetero-correlation case.
Alternatively, we bound the eigenvalues of autocorrelation matrices, $\Xi_{i,n}$, above and away from zero.
This short-memory assumption yields results closely parallel to those obtained by \citet{bartlett2020benign}.
For example, for $z_{1}:=(x_{1}^{\top}e_{1}/\sqrt{\lambda_{0,1}},\ldots,x_{n}^{\top}e_{1}/\sqrt{\lambda_{0,1}})^{\top}$ and for all $v\in\R^{n}$,
\begin{align*}
\Ep\left[\exp\left(v^{\top}z_{1}\right)\right]=\exp\left(\frac{1}{2}v^{\top}\Xi_{1,n}v\right)\le \exp\left(\frac{1}{2}\left\|v\right\|^{2}\mu_{1}\left(\Xi_{1,n}\right)\right)\le \exp\left(\frac{1}{2}\left\|v\right\|^{2}\sup_{n\in\N}\sup_{i\in\N}\mu_{1}\left(\Xi_{i,n}\right)\right),
\end{align*}
which indicates that $z_{1}$ is $\sup_{n\in\N}\sup_{i\in\N}\mu_{1}\left(\Xi_{i,n}\right)$-sub-Gaussian.
\end{remark}
\section{Example: Hilbert-valued Linear Processes} \label{sec:example}
We develop examples of Hilbert-valued Gaussian stationary processes as covariates.
In preparation, we define a general class of Hilbert-valued linear processes (ARMA, ARFIMA, and ARMAH processes) and consider their applications.
{We use notations suitable for infinite-dimensional Hilbert spaces, but the discussion for finite-dimensional Hilbert spaces is similar.}
\subsection{General Scheme: Hilbert-valued Gaussian Linear Process}
We develop a class of $\Hilbert$-valued stationary Gaussian linear processes.
Let $Q$ be a self-adjoint nonnegative definite operator (possibly not of the trace class); then, we consider a process $\left\{x_{t}:t\in\Z\right\}$ defined as:
\begin{align}
x_{t}:=\sum_{j=-\infty}^{\infty}\Phi_{j}w_{t-j}^{Q}, \label{def:general_process}
\end{align}
where $\Phi_{j}$s denote the self-adjoint operators, and $w_{t}^{Q}$s are Gaussian random variables whose covariance operator is $Q$.
This setting is similar to that of generalized Wiener processes (e.g., Section 4.1.2 in \citealp{DZ2014Stochastic}).
We restrict the case such that they have the following representations with an orthonormal basis of $\Hilbert$, denoted by $\left\{e_{k}\in\Hilbert:k\in\N\right\}$:
\begin{align}
\Phi_{j}:=\sum_{k=1}^{\infty}\phi_{j,k}e_{k}e_{k}^{\top},\
Q:=\sum_{k=1}^{\infty}\vartheta_{k}e_{k}e_{k}^{\top},\
w_{t}^{Q}:=\sum_{k=1}^{\infty}\sqrt{\vartheta_{k}}e_{k}w_{t,k},
\end{align}
where $\phi_{j,k}\in\R,\vartheta_{k}\ge 0$ for all $j\in\Z$ and $k\in \N$, and $\{w_{t,k}:t\in\Z,k\in\N\}$ is a double array of i.i.d.\ real-valued standard Gaussian random variables.
We assume that $\{\phi_{j,k}\sqrt{\vartheta_{k}}:j\in\Z,k\in\N\}\subset \R$ is a real-valued double array satisfying
$\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}^{2}\vartheta_{k}<\infty$.
Then, $\Phi_{j}w_{t-j}^{Q}$ and $x_{t}$ have well-defined representations:
\begin{align}
\Phi_{j}w_{t-j}^{Q}=\sum_{k=1}^{\infty}\phi_{j,k}\sqrt{\vartheta_{k}}e_{k}w_{t-j,k},\ x_{t}=\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}\sqrt{\vartheta_{k}}e_{k}w_{t-j,k}, \label{form:x2}
\end{align}
which are well-defined $\Hilbert$-valued Gaussian random variables owing to $L^{2}$- and the a.s.\ convergence of infinite series by the It\^{o}--Nisio theorem \citep{IN1968Convergence} and Proposition 1.16 in \citet{DaPrato2006Introduction}.
Form \eqref{form:x2} provides the spectral decomposition \eqref{def:spectral_decomp} and $\Ep[x_i] = \mathbf{0}$.
\begin{remark}
We set the assumption for well-defined $\Phi_{j}w_{t-j}^{Q}$ as a random variable rather than that of $\Phi_{j}$ or $w_{t}^{Q}$.
If $Q$ is a trace class operator and $\Phi_{j}$ is bounded, then $w_{t}^{Q}$ and $\Phi_{j}w_{t-j}^{Q}$ are $L^{2}$- and a.s.\ convergent.
Furthermore, $x_{t}$ is also $L^{2}$- and a.s.\ convergent if we additionally assume that $\sum_{j=-\infty}^{\infty}\|\Phi_{j}\|^{2}<\infty$.
\end{remark}
\begin{lemma}\label{LemmaExample01}
The process $\left\{x_{t}\right\}$ in \eqref{form:x2} with $\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}^{2}\vartheta_{k}<\infty$ has the following properties:
\begin{enumerate}
\item[(i)] The cross-covariance operator, $\Sigma_{h}$, is for all $u\in\Hilbert$,
\begin{align}
\Sigma_{h}u:=\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}\phi_{j+h,k}\vartheta_{k}\left(u^{\top}e_{k}\right) e_{k}.\label{def:general_covariance}
\end{align}
\item[(ii)] All cross-covariance operators are of the trace class.
\end{enumerate}
\end{lemma}
Let us define the quantity $\lambda_{h,k}:=\sum_{j=-\infty}^{\infty}\phi_{j,k}\phi_{j+h,k}\vartheta_{k}$ for all $h\in\Z$, which are the eigenvalues of $\Sigma_{h}$.
The aforementioned lemma leads to $\trace\left(\Sigma_{h}\right)=\sum_{k=1}^{\infty}\lambda_{h,k}<\infty$.
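As a simple illustration, consider an MA(1)-type instance of \eqref{def:general_process} with $\phi_{0,k}=1$, $\phi_{1,k}=b$ for some $b\in\R$, and $\phi_{j,k}=0$ otherwise.
Then, Lemma \ref{LemmaExample01} gives
\begin{align*}
\lambda_{0,k} = (1+b^{2})\vartheta_{k}, \quad \lambda_{\pm 1,k} = b\vartheta_{k}, \quad \lambda_{h,k} = 0 \mbox{~for~} |h| \geq 2,
\end{align*}
and hence $\trace\left(\Sigma_{h}\right)=\sum_{k=1}^{\infty}\lambda_{h,k}<\infty$ whenever $Q$ is of the trace class.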
\begin{lemma}\label{LemmaExample02}
For all $k$ such that $\lambda_{0,k}>0$, each coordinate process, $x_{t}^{\top}e_{k}/\sqrt{\lambda_{0,k}}$, has the following properties:
\begin{enumerate}
\item[(i)] Each coordinate process is a real-valued centered stationary Gaussian process with an autocovariance matrix
\begin{align*}
\Xi_{k,n}^{\left(t_{1},t_{2}\right)}:=\frac{1}{\lambda_{0,k}}\sum_{j=-\infty}^{\infty}\phi_{j,k}\phi_{j+\left|t_{1}-t_{2}\right|,k}\vartheta_{k}=\frac{1}{\sum_{j=-\infty}^{\infty}\phi_{j,k}^{2}}\sum_{j=-\infty}^{\infty}\phi_{j,k}\phi_{j+\left|t_{1}-t_{2}\right|,k},
\end{align*}
which is also the autocorrelation matrix.
\item[(ii)] $\Xi_{k,n}^{\left(t,t+h\right)}$, or equivalently $\lambda_{h,k}=\lambda_{0,k}\Xi_{k,n}^{\left(t,t+h\right)}$, converges to zero as $\left|h\right|\to\infty$ for all $k\in\N$ with $\lambda_{0,k}>0$, $n\in\N$ and $t\in\Z$.
\end{enumerate}
\end{lemma}
\begin{remark}
Property (i) of Lemma \ref{LemmaExample02} leads to spectral decomposition in equation \eqref{def:spectral_decomp} with independent Gaussian vectors, $z_{k}\sim N\left(0,\Xi_{k,n}\right)$.
Property (ii) of Lemma \ref{LemmaExample02} immediately indicates the mixing of $x_{t}^{\top}u$ for each $u\in\Hilbert$ by Lemma \ref{LemmaAssumption01} and invertibility of $\Xi_{k,n}$ by Lemma \ref{LemmaAssumption02}.
\end{remark}
We provide a convenient class of linear processes with scaling transformations in Appendix \ref{sec:scaling_transform}.
\subsection{ARMA Process}
We consider homo-correlated ARMA processes in $\Hilbert$ of order $p,q\in\N$ (ARMA($p,q$)), a simple version of ARMAH processes.
ARMA models are well known as a representative class of short-memory processes.
They are an example of the class with scaling transformations in Appendix \ref{sec:scaling_transform}.
A homo-correlated ARMA($p,q$) process, $x_{t}$, is defined as
\begin{align}
x_{t}-\sum_{i=1}^{p}\left(a_{i}I\right)x_{t-i}=w_{t}^{Q}+\sum_{i=1}^{q}\left(b_{i}I\right)w_{t-i}^{Q},\label{def:arma}
\end{align}
where $a_{i},b_{i}\in\R$ and $Q$ is a trace class operator {with $\mathrm{rank}\left(Q\right)>n$.
We set $\vartheta_{k}$ to be non-increasing in $k$ without loss of generality}.
Let us assume that the characteristic polynomial of the autoregressive (AR) part, $A\left(z\right):=1-\sum_{i=1}^{p}a_{i}z^{i}$ for $z\in\C$, has no roots on the unit circle, $\left|z\right|=1$, and no common roots in $\C$ with the characteristic polynomial of the moving-average (MA) part, $B\left(z\right):=1+\sum_{i=1}^{q}b_{i}z^{i}$.
We present an infinite-order moving-average representation (denoted by $\mathrm{MA}(\infty)$) for $x_{t}$ such that
\begin{align}
x_{t}=\sum_{j=0}^{\infty}\left(\phi_{j}I\right)w_{t-j}^{Q}\label{def:arma_ma}
\end{align}
with sequence $\{\phi_{j}\in\R\}$, which is square-summable and satisfies $\left(1+\sum_{i}b_{i}z^{i}\right)/\left(1-\sum_{i}a_{i}z^{i}\right)=\sum_{j=0}^{\infty}\phi_{j}z^{j}$ \citep[e.g., see Theorem 3.1.1 in][]{BD1991Time}.
{With this representation, we can define $\lambda_{h,k}:=\sum_{j=0}^{\infty}\phi_{j}\phi_{j+\left|h\right|}\vartheta_{k}$.
Evidently, $\lambda_{0,k}$ is non-increasing with respect to $k$, and $\lambda_{0,n+1}>0$ by $\sum_{j=0}^{\infty}\phi_{j}^{2}\ge \phi_{0}^{2}=1$.
Moreover, $\trace\left(\Sigma_{0}\right)<\infty$.}
Note that each coordinate process, $x_{t}^{\top}e_{k}/\sqrt{\lambda_{0,k}}$, for $k$ with $\lambda_{0,k}>0$, or equivalently $\vartheta_{k}>0$, has a common autocorrelation matrix, $\Xi_{n}$, with $\Xi_{n}^{\left(t,t+h\right)}=\sum_{j=0}^{\infty}\phi_{j}\phi_{j+\left|h\right|}/\sum_{j=0}^{\infty}\phi_{j}^{2}$ for all $t=1,\ldots,n$ and $h$ with $t+h\in\left\{1,\ldots,n\right\}$.
The following proposition states the consequences of classical discussions related to ARMA($p,q$) models.
\begin{proposition} \label{prop:ARMA}
The homo-correlated ARMA($p,q$) process \eqref{def:arma} has the following properties:
\begin{enumerate}
\item[(i)] $\lambda_{h,k}$ decays geometrically as $\left|h\right|\to\infty$ for all $k\in\N$ with $\lambda_{0,k}>0$.
\item[(ii)] for some constant $c>0$ independent of $n$ and $\Sigma_{0}$, $\left\|\bar{\Sigma}_{n}\right\|\le c\left\|\Sigma_{0}\right\|$ and $\trace\left(\bar{\Sigma}_{n}\right)\le c\trace\left(\Sigma_{0}\right)$.
\item[(iii)] $0<\inf_{n}\mu_{n}\left(\Xi_{n}\right)\le \sup_{n}\mu_{1}\left(\Xi_{n}\right)<\infty$.
\end{enumerate}
\end{proposition}
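For instance, for the homo-correlated AR(1) process, $x_{t} = a x_{t-1} + w_{t}^{Q}$ with $|a|<1$ (the case $p=1,q=0$ of \eqref{def:arma}), the $\mathrm{MA}(\infty)$ coefficients are $\phi_{j}=a^{j}$, so
\begin{align*}
\Xi_{n}^{\left(t,t+h\right)} = \frac{\sum_{j=0}^{\infty}a^{j}a^{j+\left|h\right|}}{\sum_{j=0}^{\infty}a^{2j}} = a^{\left|h\right|}, \qquad \lambda_{h,k} = \frac{a^{\left|h\right|}}{1-a^{2}}\vartheta_{k},
\end{align*}
which illustrates the geometric decay in property (i); moreover, by the standard spectral-density bound for such Toeplitz matrices, the eigenvalues of $\Xi_{n}$ lie in $\left[(1-\left|a\right|)/(1+\left|a\right|),(1+\left|a\right|)/(1-\left|a\right|)\right]$, consistent with property (iii).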
{Let us check that process \eqref{def:arma} satisfies Assumption 1: (i) $\trace\left(\Sigma_{0}\right)=\sum_{j=0}^{\infty}\phi_{j}^{2}\trace\left(Q\right)<\infty$; (ii) is already shown; and (iii) follows from property (i) of Proposition \ref{prop:ARMA}.}
This result yields the following convergence of the excess risk; we state it without proof:
\begin{corollary}
Consider the setting and assumptions for Theorem \ref{thm:homo} and that the covariate follows process \eqref{def:arma}.
Suppose that $\Sigma_0$ is a benign covariance, as stated in Definition \ref{def:benign_covariance}.
Then, we have
\begin{align*}
R(\hat{\beta}) = O\left( \zeta_n^{1/2} n^{-1/2} + \eta_n \right).
\end{align*}
\end{corollary}
It can be observed that the case of ARMA has the same convergence rate as that of independent data with the same condition on $\Sigma_0$.
\subsection{{ARFIMA} Process}
We also consider homo-correlated ARFIMA processes \citep{GJ1980Introduction,Hosking1981Fractional}.
Moreover, this process is an example of the class with scaling transformations (see Appendix \ref{sec:scaling_transform}).
Let $L$ denote the lag operator for stochastic processes on $\Z$, i.e., for an arbitrary process $\left\{p_{t}:t\in\Z\right\}$, $L^{j}p_{t}=p_{t-j}$ for all $t,j\in\Z$.
We also define the difference operator, $\nabla^{d}$, for $d>-1$ such that
\begin{align*}
\nabla^{d}=\left(1-L\right)^{d}=\sum_{j=0}^{\infty}\pi_{j}L^{j},
\end{align*}
where
$\pi_{j}=\frac{\Gamma\left(j-d\right)}{\Gamma\left(j+1\right)\Gamma\left(-d\right)}$
for all $j$, and $\Gamma(\cdot)$ is the gamma function.
Using the lag operator, $L$, and the characteristic polynomials of the AR part ($A\left(z\right):=1-\sum_{j=1}^{p}a_{j}z^{j}$) and the MA part ($B\left(z\right):=1+\sum_{j=1}^{q}b_{j}z^{j}$),
we consider a homo-correlated ARFIMA($p,d,q$) process, $x_{t}$, with $d\in\left(-1/2,1/2\right)$ such that
\begin{align}
A\left(L\right)\nabla^{d}x_{t}=B\left(L\right)w_{t}^{Q},\label{def:arfima}
\end{align}
{where $Q$ is the trace class with $\mathrm{rank}\left(Q\right)>n$ and non-increasing $\left\{\vartheta_{k}\right\}$ without loss of generality}.
We assume that $A\left(z\right)$ and $B\left(z\right)$ have neither common roots in $\C$ nor roots in the closed unit disk, i.e., $z$ with $\left|z\right|\le1$.
Then, {Theorem 7.2.3 in \citet{GKS2012Large}} provides an $\mathrm{MA}(\infty)$ representation with {square-summable} coefficients, $\left\{\phi_{j}\right\}$, such that
\begin{align}
x_{t}=\sum_{j=0}^{\infty}\left(\phi_{j}I\right)w_{t-j}^{Q}.\label{def:arfima_ma}
\end{align}
As homo-correlated ARMA processes, we can define the autocorrelation matrix, $\Xi_{n}$, of all coordinate processes, $x_{t}^{\top}e_{k}/\sqrt{\lambda_{0,k}}$, with $\lambda_{0,k}>0$, where $\lambda_{h,k}$ is defined in a manner identical to the ARMA case.
The following proposition is from Theorem 7.2.3 in \citet{GKS2012Large} \citep[see also Theorem 13.2.2 in][]{BD1991Time}
and the discussion of condition numbers in \citet{LH2005Complexity} \citep[see also][]{BG1998Condition,CHL2006Correlation}.
\begin{proposition} \label{prop:arfima}
The homo-correlated ARFIMA($p,d,q$) process \eqref{def:arfima} with $d\in\left(-1/2,1/2\right)\backslash\left\{0\right\}$ has the following properties:
\begin{enumerate}
\item[(i)] $\lambda_{h,k}/\lambda_{0,k}\asymp h^{2d-1}$ as $\left|h\right|\to\infty$ for all $k\in\N$ with $\lambda_{0,k}>0$.
\item[(ii)] If $d\in\left(0,1/2\right)$, $\sup_{n}\mu_{1}\left(\Xi_{n}\right)\asymp n^{2d}$ and $\inf_{n}\mu_{n}\left(\Xi_{n}\right)>0$.
\item[(iii)] If $d\in\left(-1/2,0\right)$, $\sup_{n}\mu_{1}\left(\Xi_{n}\right)<\infty$ and $\inf_{n}\mu_{n}\left(\Xi_{n}\right)\asymp n^{2d}$.
\end{enumerate}
\end{proposition}
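The simplest case is the ARFIMA($0,d,0$) process, $\nabla^{d}x_{t}=w_{t}^{Q}$, whose $\mathrm{MA}(\infty)$ coefficients are $\phi_{j}=\Gamma\left(j+d\right)/\left(\Gamma\left(j+1\right)\Gamma\left(d\right)\right)\asymp j^{d-1}$ as $j\to\infty$.
A classical computation \citep[e.g.,][]{Hosking1981Fractional} then gives
\begin{align*}
\frac{\lambda_{h,k}}{\lambda_{0,k}} = \frac{\Gamma\left(h+d\right)\Gamma\left(1-d\right)}{\Gamma\left(h-d+1\right)\Gamma\left(d\right)} \asymp h^{2d-1}, \mbox{~as~} h \to \infty,
\end{align*}
so for $d\in\left(0,1/2\right)$, the autocorrelations are not summable, which drives the eigenvalue growth in property (ii).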
{It can be observed that Assumption 1 holds true under process \eqref{def:arfima}: property (i) follows from $\trace\left(\Sigma_{0}\right)=\sum_{j=0}^{\infty}\phi_{j}^{2}\trace\left(Q\right)<\infty$, which holds by the assumption on $Q$ and Theorem 7.2.3 in \citet{GKS2012Large}; property (ii) holds true by the rank condition on $Q$ and $\sum_{j=0}^{\infty}\phi_{j}^{2}\ge \phi_{0}^{2}=1$, by \citet{BD1991Time} or \citet{GKS2012Large}; and property (iii) follows from property (i) of Proposition \ref{prop:arfima}.}
To apply the results of Proposition \ref{prop:benign_rate}, we present the condition for benign overfitting.
\begin{corollary}
Consider the setting and assumptions for Theorem \ref{thm:homo} and that the covariate follows the ARFIMA process \eqref{def:arfima}.
Suppose that $\Sigma_0$ is a benign covariance, as stated in Definition \ref{def:benign_covariance}.
Then, we have
\begin{align*}
R(\hat{\beta}) = O\left( \zeta_n^{1/2} n^{-\left( 1 -2 \max\{0, d\}\right)/2} + n^{-2\min\{0,d\}} \eta_n \right).
\end{align*}
\end{corollary}
This result shows that the convergence rate deteriorates through the effect of $d$ when the ARFIMA process has long memory, i.e., $d \in (0,1/2)$.
In this case, the error converges to zero with $\zeta_n = O(1)$, i.e., $\Sigma_0$ is a trace class.
For $d \in (-1/2,0)$, if $\eta_n = O(n^{-1/2})$ holds true, i.e., for property (ii) in Example \ref{ex:benign_covariance} with $\gamma_n = n^{-1/2}$, the convergence of the risk to zero is guaranteed.
\subsection{ARMAH Process}
ARMAH processes are an extension of real-valued ARMA models to Hilbert-valued linear processes \citep{Bosq2000Linear}.
We use them as an example of hetero-correlated processes.
Let us consider a class of causal ARMAH($p,q$) models with component-wise MA($\infty$) representation:
\begin{align}
x_{t}-\sum_{j=1}^{p}\rho_{j}x_{t-j}=w_{t}^{Q}+\sum_{j=1}^{q}\varphi_{j}w_{t-j}^{Q},\label{def:armah}
\end{align}
where operators $\rho_{j}:\Hilbert\to\Hilbert$ and $\varphi_{j}:\Hilbert\to\Hilbert$ are self-adjoint bounded linear operators with spectral decompositions with $\left\{e_{k}\right\}$ such that
\begin{align*}
\rho_{j}=\sum_{k\in\N}\rho_{j,k}e_{k}e_{k}^{\top},\
\varphi_{j}=\sum_{k\in\N}\varphi_{j,k}e_{k}e_{k}^{\top}.
\end{align*}
This leads to the property that for all $k\in\N$, $x_{t}^{\top}e_{k}$ is a real-valued ARMA process with AR parameters ($\rho_{j,k}$), MA parameters ($\varphi_{j,k}$), and white noise, whose variance is $\vartheta_{k}$; in other words,
\begin{align*}
A_{k}\left(L\right)x_{t}^{\top}e_{k}=B_{k}\left(L\right)\sqrt{\vartheta_{k}}w_{t,k},
\end{align*}
where $A_{k}\left(z\right)=1-\sum_{j=1}^{p}\rho_{j,k}z^{j}$ and $B_{k}\left(z\right)=1+\sum_{j=1}^{q}\varphi_{j,k}z^{j}$ denote the characteristic polynomials of the AR and MA parts, respectively, and $L$ is the lag operator.
Note that all coordinate processes, $x_{t}^{\top}e_{k}$, are independent of each other because they are driven by independent white noise, $\{w_{t,k}:t\in\Z\}$.
For stability and causality of coordinate ARMA processes, we formulate the following assumptions:
\begin{description}
\item[(ARMAH1)] There exists some $\epsilon\in\left(0,1\right)$ such that for all $k\in\N$,
\begin{description}
\item[(i)]
$\left|A_{k}\left(z\right)\right|$ and $\left|B_{k}\left(z\right)\right|$ are in $[\epsilon,\epsilon^{-1}]$ for all $z\in\C$ such that $\left|z\right|=1$.
\item[(ii)]
{$A_{k}\left(z\right)$ does not have its roots in $z\in\C$ with $\left|z\right|\le 1+\epsilon$.}
\end{description}
\item[(ARMAH2)] $Q$ is of the trace class and $\mathrm{rank}\left(Q\right)> n$.
\end{description}
Without loss of generality, we can set $\rho_{1,k}=\cdots=\rho_{p,k}=\varphi_{1,k}=\cdots=\varphi_{q,k}=0$ for all $k$ with $\vartheta_{k}=0$.
We assume (ARMAH1-i) for the uniqueness of the MA($\infty$) representation of a real-valued ARMA($p,q$) process with AR ($\rho_{j,k}$) and MA ($\varphi_{j,k}$) parameters. Let $\left\{\phi_{j,k}:j\in\N_{0}\right\}$ denote the sequence of coefficients of the unique MA($\infty$) representation for all $k\in\N$.
With the MA representation, we can define {$\lambda_{h,k}:=\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+\left|h\right|,k}\vartheta_{k}$ and assume that $\lambda_{0,k} $ is non-increasing with respect to $k$ without loss of generality; hence, $\lambda_{0,n+1}>0$ by (ARMAH2).}
This leads to the property that all coordinate processes as vectors, $((x_{1}^{\top}e_{k})/\sqrt{\lambda_{0,k}},\ldots,(x_{n}^{\top}e_{k})/\sqrt{\lambda_{0,k}})^\top$, {for $k$ with $\lambda_{0,k}>0$ }are $\R^{n}$-valued independent but not necessarily identically distributed Gaussian random variables.
$\Xi_{k,n}$ denotes the covariance matrices of these random vectors.
We further present an example of an ARMAH process with $(p,q) = (1,0)$ in Section \ref{sec:armah1}.
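As a numerical sanity check (ours, not part of the proofs; the function names and parameter values are hypothetical), one can build the Toeplitz autocorrelation matrix $\Xi_{k,n}$ of a coordinate ARMA(1,1) process from its MA($\infty$) coefficients and confirm that its eigenvalues stay within a positive interval, in line with property (ii) of Proposition \ref{prop:armah}:

```python
import numpy as np

def ma_inf_arma11(rho, varphi, J):
    """First J MA(infinity) coefficients of a causal ARMA(1,1):
    psi_0 = 1 and psi_j = rho**(j-1) * (rho + varphi) for j >= 1."""
    psi = np.empty(J)
    psi[0] = 1.0
    for j in range(1, J):
        psi[j] = rho ** (j - 1) * (rho + varphi)
    return psi

def xi_matrix(psi, n):
    """Toeplitz autocorrelation matrix Xi_{k,n} built from
    lambda_{h,k} = sum_j psi_j psi_{j+|h|} (vartheta_k cancels in the ratio)."""
    lam = np.array([psi[: len(psi) - h] @ psi[h:] for h in range(n)])
    idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return (lam / lam[0])[idx]

Xi = xi_matrix(ma_inf_arma11(rho=0.6, varphi=0.3, J=500), n=50)
mu = np.linalg.eigvalsh(Xi)
print(mu.min(), mu.max())  # eigenvalues bounded away from zero and infinity
```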
The following proposition verifies that the condition for $\Sigma_{0}$ to be benign is identical to that presented in \citet{bartlett2020benign} under these assumptions for ARMAH.
\begin{proposition}\label{prop:armah}
Under (ARMAH1)-(ARMAH2), the following properties hold true for \eqref{def:armah}:
\begin{enumerate}
\item[(i)] $1\le \sum_{j=0}^{\infty}\phi_{j,k}^{2}\le \epsilon^{-4}$.
\item[(ii)] $\epsilon^{8}\le \inf_{k,n}\mu_{n}\left(\Xi_{k,n}\right)\le \sup_{k,n}\mu_{1}\left(\Xi_{k,n}\right)<\epsilon^{-4}$.
\item[(iii)] For some $c>0$ dependent only on $\epsilon$, $p$, and $q$, $\left|\lambda_{h,k}/\lambda_{0,k}\right|\le c\left(1+\epsilon\right)^{-\left|h\right|/2}$.
\item[(iv)] $\left\|Q\right\|\le \left\|\Sigma_{0}\right\|\le \epsilon^{-4}\left\|Q\right\|$, $\trace\left(Q\right)\le\trace\left(\Sigma_{0}\right)\le \epsilon^{-4}\trace\left(Q\right)$.
\item[(v)] There exists $c>0$ dependent only on $\epsilon$, $p$, and $q$ such that for all $n\in\N$,
\begin{align*}
\left\|\bar{\Sigma}_{n}\right\|\le c\left\|\Sigma_{0}\right\|,\ \trace\left(\bar{\Sigma}_{n}\right)\le c\trace\left(\Sigma_{0}\right).
\end{align*}
\end{enumerate}
\end{proposition}
{
Assumption 1 is satisfied as follows: property (i), $\trace\left(\Sigma_{0}\right)<\infty$, follows from property (iv) of Proposition \ref{prop:armah}; property (ii), $\lambda_{0,n+1}>0$, is given by $\mathrm{rank}\left(Q\right)>n$; and property (iii) is given by property (iii) of Proposition \ref{prop:armah}.
Furthermore, the condition \eqref{ineq:eigenvalues_correlation} for Theorem \ref{thm:hetero} is satisfied by property (ii) of Proposition \ref{prop:armah}.
}
Therefore, Theorem \ref{thm:hetero} immediately gives the following result without proof:
\begin{corollary}
Consider the setting in Theorem \ref{thm:hetero} with covariate $\{x_t\}_{t}$ that follows the ARMAH process \eqref{def:armah}.
Under assumptions (ARMAH1)-(ARMAH2), there exist $b,c,c_{1}>1$ such that for all $\delta \in (0,1)$ with $\log\left(1/\delta\right)\le n/c$: if $k^{*}\le n/c$,
\begin{align*}
R(\hat{\beta})&\le c\delta^{-1}\left\|\beta^{*}\right\|^2\left\|\Sigma_{0}\right\|\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{r_{0}\left(\Sigma_{0}\right)}{n}\right)+c\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right)\log(\delta^{-1}).
\end{align*}
Further, if the autocovariance, $\Sigma_0$, is benign, as stated in Definition \ref{def:benign_covariance}, we have
\begin{align}
R(\hat{\beta}) = O\left(\zeta_n^{1/2} n^{-1/2} + \eta_n \right).
\end{align}
\end{corollary}
This upper bound does not contain dependence-based terms such as $\Bar{\Sigma}_n$; therefore, it is equivalent to the upper bound for the independent data case in \citet{bartlett2020benign}.
Additionally, the derived convergence rate is identical to that of the independent case, as displayed in \eqref{ineq:convergence_rate_independent}.
In other words, in the dependent ARMAH case, the same excess risk rate is achieved as in the independent case.
\section{Conclusion and Discussion} \label{sec:conclusion}
In this study, we investigated the excess risk of an estimator of an over-parameterized linear model with dependent time series data.
We constructed an estimator using an interpolator that fits the data perfectly and measured the excess risk of prediction.
In addition to the notion of the effective ranks of the autocovariance operators used for independent data, we developed a notion of integrated covariance, which indicated the sum of cross-covariance operators; subsequently, we derived the upper bound of the excess risk.
This result is valid regardless of sparsity and in long-memory processes, where an infinite sum of cross-covariance operators does not exist.
Consequently, we found that the dependence of data does not affect the convergence rate of risk in short-memory processes; however, it affects the convergence rate in long-memory processes.
We provided several examples of linear processes as covariates that satisfied the assumption.
Furthermore, we studied a high-dimensional time series problem without sparsity and showed that the over-parameterization theory using the eigenvalues of covariance can also handle short- and long-memory processes in a unified manner.
A limitation of this study is that the dependent data are assumed to be Gaussian.
While it is common to assume Gaussianity in dependent data analysis, there are several ways to relax this assumption.
In our setting, however, relaxing it is non-trivial because Sudakov's inequality, which underlies the derived moment inequality, depends heavily on Gaussianity.
Gaussianity also plays a critical role in decorrelating the data to handle the bias term.
Relaxing the Gaussianity assumption in the over-parameterized setting is an important direction for future work.
\bibliographystyle{apecon}
\bibliography{main}
\newpage
\appendix
\section*{Additional Notation}
For the operator $X$ formed by the observations, we define its Hilbert-Schmidt norm as $\left\|X\right\|_{\text{HS}}=\sqrt{\sum_{t=1}^{n}\left\|x_{t}\right\|^{2}}$.
\section{Additional Formulation of Linear Processes}
\subsection{Process with Scaling Transformation Operator} \label{sec:scaling_transform}
We consider the process satisfying the following assumptions:
\begin{description}
\item[(ST1)] There exists a real-valued square-summable sequence $\left\{\phi_{j}:j\in\Z\right\}$ such that $\phi_{j,k}=\phi_{j}$ for all $j\in\Z$ and $k\in\N$,
\item[(ST2)] $\sum_{j\in\Z}\phi_{j}^{2}>0$,
\item[(ST3)] $Q$ is of trace-class and $\mathrm{rank}\left(Q\right)>n$.
\end{description}
Under these assumptions, all the linear operators $\Phi_{j}$'s are scaling transformations as one of the simplest model structures; that is, $\phi_{j,k}=\phi_{j}$ or equivalently $\Phi_{j}=\phi_{j}I$, where $\phi_{j}\in\R$ and $I$ is the identity operator of $\Hilbert$ (the spectral decomposition holds as a resolution of the identity with $\left\{e_{k}\right\}$).
Note that the assumptions enable us to extend a real-valued linear process with the coefficients $\left\{\phi_{j}\right\}$ to an $\Hilbert$-valued process in a na\"{i}ve manner, which is useful to examine benign overfitting in toy models under dependence.
The covariates $x_{t}$'s have a concise representation
\begin{align}
x_{t}=\sum_{j=-\infty}^{\infty}\phi_{j}\sum_{k\in\N}\sqrt{\vartheta_{k}}e_{k}w_{t-j,k}.
\end{align}
We see that the coordinate processes $\{(x_{t}^{\top}e_{k})/\sqrt{\lambda_{0,k}}:t=1,\ldots,n\},\ ^{\forall}k\in\N$ are i.i.d.\ $n$-dimensional Gaussian random variables whose covariance matrix is a Toeplitz matrix determined by $\left\{\phi_{j}:j\in\Z\right\}$ (similar settings can be seen in theoretical studies of time series analysis as a toy example; e.g., see \citealp{Basu2015Regularized}).
Clearly $\left\{x_{t}:t\in\Z\right\}$ has the covariance operator $\Sigma_{0}$ and cross-covariance ones $\Sigma_{h}$ such that
\begin{align*}
\Sigma_{h}=\sum_{j=-\infty}^{\infty}\phi_{j}\phi_{j+h}Q.
\end{align*}
These facts yield that
\begin{align*}
\left\|\tilde{\Sigma}_{h}\right\|&=\left|\sum_{j=-\infty}^{\infty}\phi_{j}\phi_{j+h}\right|\left\|Q\right\|,\\
\trace\left( \bar{\Sigma}_n\right)&=\left(\sum_{j=-\infty}^{\infty}\phi_{j}^{2}+2\sum_{h=1}^{n-1}\left|\sum_{j=-\infty}^{\infty}\phi_{j}\phi_{j+h}\right|\right)\trace\left(Q\right),\\
\left\| \bar{\Sigma}_n\right\|&=\left(\sum_{j=-\infty}^{\infty}\phi_{j}^{2}+2\sum_{h=1}^{n-1}\left|\sum_{j=-\infty}^{\infty}\phi_{j}\phi_{j+h}\right|\right)\left\|Q\right\|,\\
r_{0}\left( \bar{\Sigma}_n\right)&=r_{0}\left(Q\right)=r_{0}\left(\Sigma_{0}\right).
\end{align*}
Hence the upper bound for the bias term in the decomposition of the excess risk is dependent on how fast the infinite series of autocorrelations $\sum_{h}\sum_{j} \phi_{j}\phi_{j+h}/\sum_{j}\phi_{j}^{2}$ diverges.
In addition, we need to examine the convergence rate of the smallest eigenvalue of the autocorrelation matrix as $n\to\infty$ to give the upper bound for the variance term in the decomposition of the risk, as well as the divergence rate of the largest eigenvalue of the autocorrelation to give the lower bound.
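The displayed identity for $\trace\left(\bar{\Sigma}_{n}\right)$ can be checked numerically in a finite-dimensional truncation. The sketch below is ours and uses hypothetical values: $Q$ truncated to a $4\times4$ diagonal matrix and MA(1) coefficients $(\phi_{0},\phi_{1})=(1,\theta)$, for which only the lags $h=0,1$ contribute:

```python
import numpy as np

# Toy setup: Q truncated to a diagonal matrix, MA(1) coefficients (1, theta)
theta = 0.7
phi = np.array([1.0, theta])
Q = np.diag([1.0, 0.5, 0.25, 0.125])
n = 10

def cross_cov_factor(phi, h):
    """Scalar factor sum_j phi_j phi_{j+h} in Sigma_h = (sum_j phi_j phi_{j+h}) Q."""
    J = len(phi)
    return float(phi[: J - h] @ phi[h:]) if h < J else 0.0

# trace(Sigma_bar_n) = (sum_j phi_j^2 + 2 sum_{h=1}^{n-1} |sum_j phi_j phi_{j+h}|) trace(Q)
factor = cross_cov_factor(phi, 0) + 2 * sum(
    abs(cross_cov_factor(phi, h)) for h in range(1, n)
)
lhs = factor * np.trace(Q)
rhs = (1 + theta**2 + 2 * abs(theta)) * np.trace(Q)  # MA(1): only h = 0, 1 survive
print(lhs, rhs)
```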
\subsection{Example of ARMAH Process} \label{sec:armah1}
As an illustrative example of ARMAH processes, let us consider autoregressive Hilbertian processes of order 1.
Autoregressive Hilbertian (ARH) processes, or functional autoregressive (FAR) ones of order 1 are one of the central concerns in linear processes in Hilbert spaces \citep[e.g., see][and references therein]{Bosq1991Modelization,Bosq2000Linear,BCS2000Autoregressive,Guillas2001Rates,AS2003Wavelet,Mas2007Weak,KO2008Curve,DKZ12Empirical,HK12Inference,RRF2016Plugin,ABR2017Asymptotic,RA2019Note}.
Let us consider ARH(1) processes in an infinite moving-average representation such that
\begin{align}
x_{t}:=\sum_{j=0}^{\infty}\rho_{1}^{j}w_{t-j}^{Q},\label{def:arh1}
\end{align}
where $\rho_{1}:\Hilbert\to\Hilbert$ is the autocorrelation operator with the spectral decomposition
\begin{align*}
\rho_{1}=\sum_{k\in\N}\rho_{1,k}e_{k}e_{k}^{\top}
\end{align*}
with $\left\|\rho_{1}\right\|=\sup_{k}\left|\rho_{1,k}\right|<1$, where the $\rho_{1,k}$'s are eigenvalues of $\rho_{1}$, not necessarily in non-increasing or non-decreasing order.
Note that we use the convention that $\rho_{1}^{0}=I$.
Similar settings can be seen in studies for statistical estimation and prediction of ARH(1) processes; see \cite{RRF2016Plugin} and \cite{ABR2017Asymptotic}.
In this case, we have the MA($\infty$) representation
\begin{align*}
\Phi_{j}&=\begin{cases}
\rho_{1}^{j} &\text{if }j\ge0,\\
O &\text{otherwise},
\end{cases}\\
\phi_{j,k}&=\begin{cases}
\rho_{1,k}^{j} &\text{if }j\ge0\text{ and }\vartheta_{k}>0,\\
0&\text{otherwise},
\end{cases}\\
\lambda_{0,k}&=\frac{1}{1-\rho_{1,k}^{2}}\vartheta_{k}.
\end{align*}
Here, we adopt the convention that $0^{0}=1$.
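The identity $\lambda_{0,k}=\vartheta_{k}/(1-\rho_{1,k}^{2})$ can be checked by simulating a single coordinate process; the following Python sketch (ours, with hypothetical parameter values $\rho_{1,k}=0.8$ and $\vartheta_{k}=2$) compares the empirical variance of an AR(1) path to this closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, vartheta_k = 0.8, 2.0        # hypothetical rho_{1,k} and vartheta_k

# simulate the k-th coordinate x_t^T e_k: an AR(1) driven by white noise
# of variance vartheta_k, with a burn-in to reach stationarity
T, burn = 200_000, 1_000
w = np.sqrt(vartheta_k) * rng.standard_normal(T + burn)
x = np.empty(T + burn)
x[0] = w[0]
for t in range(1, T + burn):
    x[t] = rho * x[t - 1] + w[t]

lam0_hat = x[burn:].var()
lam0 = vartheta_k / (1 - rho**2)  # lambda_{0,k} = vartheta_k / (1 - rho_{1,k}^2)
print(lam0_hat, lam0)
```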
We considered some assumptions of ARMAH processes with coordinate-wise MA($\infty$) representations above; for ARH(1) processes, however, the condition (ARMAH1) admits the simpler equivalent condition $\left\|\rho_{1}\right\|<1$.
\begin{lemma}\label{LemmaARH01}
Assume $\left\|\rho_{1}\right\|<1$ and $\trace\left(Q\right)<\infty$; then the following properties hold:
\begin{enumerate}
\item[(i)] $1\le \sum_{j=0}^{\infty}\phi_{j,k}^{2}\le \frac{1}{1-\left\|\rho_{1}\right\|^{2}}$;
\item[(ii)] $\frac{1-\left\|\rho_{1}\right\|}{1+\left\|\rho_{1}\right\|}\le \inf_{k,n}\mu_{n}\left(\Xi_{k,n}\right)\le \sup_{k,n}\mu_{1}\left(\Xi_{k,n}\right)\le\frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}$;
\item[(iii)] $\left|\lambda_{h,k}/\lambda_{0,k}\right|\le \frac{\left\|\rho_{1}\right\|^{\left|h\right|}}{1-\left\|\rho_{1}\right\|^{2}}$;
\item[(iv)] $\left\|Q\right\|\le \left\|\Sigma_{0}\right\|\le \frac{1}{1-\left\|\rho_{1}\right\|^{2}}\left\|Q\right\|$, $\trace\left(Q\right)\le\trace\left(\Sigma_{0}\right)\le \frac{1}{1-\left\|\rho_{1}\right\|^{2}}\trace\left(Q\right)$;
\item[(v)] for all $n\in\N$,
\begin{align*}
\left\|\bar{\Sigma}_{n}\right\|\le \frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}\left\|\Sigma_{0}\right\|,\ \trace\left(\bar{\Sigma}_{n}\right)\le \frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}\trace\left(\Sigma_{0}\right).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{LemmaARH01}]
(i) is shown as follows:
\begin{align*}
\sum_{j=0}^{\infty}\phi_{j,k}^{2}=\sum_{j=0}^{\infty}\rho_{1,k}^{2j}\le \sum_{j=0}^{\infty}\sup_{k}\rho_{1,k}^{2j}=\sum_{j=0}^{\infty}\left\|\rho_{1}\right\|^{2j}=\frac{1}{1-\left\|\rho_{1}\right\|^{2}},
\end{align*}
and the fact that $\phi_{0,k}=1$.
(ii) is from Proposition 4.5.3 of \citet{BD1991Time} such that for all $k$ with $\vartheta_{k}>0$,
\begin{align*}
\frac{1-\left\|\rho_{1}\right\|}{1+\left\|\rho_{1}\right\|}=\frac{1-\left\|\rho_{1}\right\|^{2}}{\left(1+\left\|\rho_{1}\right\|\right)^{2}}&\le \left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{-1}\inf_{z\in\C:\left|z\right|=1}\frac{1}{\left|1-\rho_{1,k}z\right|^{2}}\le \mu_{n}\left(\Xi_{k,n}\right)\le\mu_{1}\left(\Xi_{k,n}\right)\\
&\le \left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{-1}\sup_{z\in\C:\left|z\right|=1}\frac{1}{\left|1-\rho_{1,k}z\right|^{2}}\le \frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}
\end{align*}
because $\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{-1}$ is the variance of the white noise of each coordinate process $\left\{x_{t}^{\top}e_{k}/\sqrt{\lambda_{0,k}}\right\}$ with $\vartheta_{k}>0$.
(iii) holds because for all $h\ge 0$,
\begin{align*}
\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|=\left|\sum_{j=0}^{\infty}\rho_{1,k}^{2j+h}\right|=\frac{\left|\rho_{1,k}\right|^{h}}{1-\rho_{1,k}^{2}}\le \frac{\left\|\rho_{1}\right\|^{h}}{1-\left\|\rho_{1}\right\|^{2}},
\end{align*}
and hence
\begin{align*}
\left|\frac{\lambda_{h,k}}{\lambda_{0,k}}\right|=\left|\frac{\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}}{\sum_{j=0}^{\infty}\phi_{j,k}^{2}}\right|\le \frac{\left\|\rho_{1}\right\|^{h}}{1-\left\|\rho_{1}\right\|^{2}}
\end{align*}
by (i).
The case $h<0$ is analogous.
(iv) follows immediately from (i).
(v) is a consequence of (i) and (iii) because
\begin{align*}
\trace\left(\Sigma_{0}+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}\right)&=\sum_{k=1}^{\infty}\sum_{j=0}^{\infty}\phi_{j,k}^{2}\vartheta_{k}+2\sum_{h=1}^{n-1}\sum_{k=1}^{\infty}\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|\vartheta_{k}\\
&\le \left(\sup_{k}\sum_{j=0}^{\infty}\rho_{1,k}^{2j}+2\sum_{h=1}^{n-1}\sup_{k}\left|\sum_{j=0}^{\infty}\rho_{1,k}^{2j+h}\right|\right)\trace\left(Q\right)\\
&\le \left(\frac{1}{1-\left\|\rho_{1}\right\|^{2}}+2\sum_{h=1}^{n-1}\frac{\left\|\rho_{1}\right\|^{h}}{1-\left\|\rho_{1}\right\|^{2}}\right)\trace\left(Q\right)\\
&\le \frac{1}{1-\left\|\rho_{1}\right\|^{2}}\left(1+\frac{2\left\|\rho_{1}\right\|}{1-\left\|\rho_{1}\right\|}\right)\trace\left(Q\right)\\
&\le \frac{1}{\left(1+\left\|\rho_{1}\right\|\right)\left(1-\left\|\rho_{1}\right\|\right)}\frac{1+\left\|\rho_{1}\right\|}{1-\left\|\rho_{1}\right\|}\trace\left(Q\right)\\
&\le \frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}\trace\left(Q\right)\\
&\le \frac{1}{\left(1-\left\|\rho_{1}\right\|\right)^{2}}\trace\left(\Sigma_{0}\right).
\end{align*}
The derivation of the upper bound for $\left\|\bar{\Sigma}_{n}\right\|$ is entirely analogous.
\end{proof}
\begin{corollary}
Assume that a hetero-correlated ARH(1) model \eqref{def:arh1} satisfies $\left\|\rho_{1}\right\|<1$ and $\trace\left(Q\right)<\infty$.
Then all the assumptions for Theorem \ref{thm:hetero} are satisfied and
there exist $b,c,c_{1}>1$ dependent on $\left\|\rho_{1}\right\|$ such that for all $\delta \in (0,1)$ with $\log\left(1/\delta\right)\le n/c$: if $k^{*}\le n/c$,
\begin{align*}
R(\hat{\beta})&\le c\delta^{-1}\left\|\beta^{*}\right\|^{2}\left\|\Sigma_{0}\right\|\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{r_{0}\left(\Sigma_{0}\right)}{n}\right)+c\left(\frac{k^{*}}{n}+\frac{n}{R_{k^{*}}\left(\Sigma_{0}\right)}\right)\log(\delta^{-1}).
\end{align*}
\end{corollary}
\section{Bias-Variance Decomposition of the Excess Risk}
We follow the bias-variance decomposition of the excess risk.
The following lemma is a variation of Lemma S.1 of \citet{bartlett2020benign}.
\begin{lemma}\label{varBLLT20LS01}
The excess risk of the minimum norm estimator satisfies
\begin{align*}
R(\hat{\beta})=\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}\left(\beta^{\ast}-\hat{\beta}\right)\right)^{2}\right]\le 2\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}+2\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)^{\top}T_{V}\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)
\end{align*}
and
\begin{align*}
\Ep_{x_{\infty},\mathcal{E}}R(\hat{\beta}){=} \left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}+\trace\left(T_{V}\right)
\end{align*}
where
\begin{align*}
T_{B}&:=\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\Sigma_{0}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right),\\
T_{V}&:=\Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{-1}X\Sigma_{0} X^{\top}\left(XX^{\top}\right)^{-1}\Upsilon_{n}^{1/2}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20LS01}]
Because $y_{\infty}-x_{\infty}^{\top}\beta^{\ast}$ is zero-mean conditionally on $x_{\infty}$,
\begin{align*}
R\left(\hat{\beta}\right)&=\Ep_{x_{\infty},y_{\infty}}\left[\left(y_{\infty}-x_{\infty}^{\top}\hat{\beta}\right)^{2}\right]-\Ep\left[\left(y_{\infty}-x_{\infty}^{\top}\beta^{\ast}\right)^{2}\right]\\
&=\Ep_{x_{\infty},y_{\infty}}\left[\left(y_{\infty}-x_{\infty}^{\top}\beta^{\ast}+x_{\infty}^{\top}\left(\beta^{\ast}-\hat{\beta}\right)\right)^{2}\right]-\Ep\left[\left(y_{\infty}-x_{\infty}^{\top}\beta^{\ast}\right)^{2}\right]\\
&=\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}\left(\beta^{\ast}-\hat{\beta}\right)\right)^{2}\right].
\end{align*}
The equality \eqref{eq:betahat}, the definition of $\Sigma_{0}$ and $Y=X\beta^{\ast}+\mathcal{E}$ lead to
\begin{align*}
R\left(\hat{\beta}\right)&=\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}\left(\beta^{\ast}-X\left(XX^{\top}\right)^{-1}\left(X\beta^{\ast}+\mathcal{E}\right)\right)\right)^{2}\right]\\
&=\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}\left(I-X\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}-x_{\infty}^{\top}X\left(XX^{\top}\right)^{-1}\mathcal{E}\right)^{2}\right]\\
&\le 2\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}\left(I-X\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}\right)^{2}\right]+2\Ep_{x_{\infty}}\left[\left(x_{\infty}^{\top}X\left(XX^{\top}\right)^{-1}\mathcal{E}\right)^{2}\right]\\
&= 2\left(\beta^{\ast}\right)^{\top}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\Sigma_{0}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}\\
&\quad+2\mathcal{E}^{\top}\left(XX^{\top}\right)^{-1}X\Sigma_{0}X^{\top}\left(XX^{\top}\right)^{-1}\mathcal{E}\\
&=2\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}+2\mathcal{E}^{\top}\Upsilon_{n}^{-1/2}\Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{-1}X\Sigma_{0}X^{\top}\left(XX^{\top}\right)^{-1}\Upsilon_{n}^{1/2}\Upsilon_{n}^{-1/2}\mathcal{E}\\
&=2\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}+2\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)^{\top}T_{V}\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right).
\end{align*}
Because $\mathcal{E}$ is centered and independent of $x_{\infty}$ and {$X$},
\begin{align*}
\Ep_{x_{\infty},\mathcal{E}}R\left(\hat{\beta}\right)&=\Ep_{x_{\infty},\mathcal{E}}\left[\left(x_{\infty}^{\top}\left(I-X\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}-x_{\infty}^{\top}X\left(XX^{\top}\right)^{-1}\mathcal{E}\right)^{2}\right]\\
&=\left(\beta^{\ast}\right)^{\top}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\Sigma_{0}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}\\
&\quad+\trace\left(\left(XX^{\top}\right)^{-1}X\Sigma_{0}X^{\top}\left(XX^{\top}\right)^{-1}\Ep\left[\mathcal{E}\mathcal{E}^{\top}\right]\right)\\
&= \left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast} + \trace\left(\Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{-1}X\Sigma_{0}X^{\top}\left(XX^{\top}\right)^{-1}\Upsilon_{n}^{1/2}\right)\\
&= \left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast} + \trace\left(T_{V}\right).
\end{align*}
Hence we obtain the equality.
\end{proof}
To bound the tail probability of the term $T_V$, we give a corollary of Lemma S.2 of \citet{bartlett2020benign}.
\begin{lemma}\label{varBLLT20LS02}
For any $n\times n$-dimensional a.s.\ positive semi-definite random matrix $M$ that is measurable with respect to $X$, with probability at least $1-e^{-t}$,
\begin{align*}
\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)^{\top}M\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)\le \trace\left(M\right)+2\left\|M\right\|t+2\sqrt{\left\|M\right\|^{2}t^{2}+\trace\left(M^{2}\right)t}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20LS02}]
It follows from Lemma S.2 of \citet{bartlett2020benign} and the facts such that $\mathcal{E}$ is independent of $X$ and $\Upsilon_{n}^{-1/2}\mathcal{E}\sim N\left(0,I_{n}\right)$.
\end{proof}
As in \citet{bartlett2020benign}, we see that with probability at least $1-e^{-t}$,
\begin{align*}
\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)^{\top}T_{V}\left(\Upsilon_{n}^{-1/2}\mathcal{E}\right)\le
\left(4t+2\right)\trace\left(T_{V}\right).
\end{align*}
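For completeness, the constant $4t+2$ follows from Lemma \ref{varBLLT20LS02} applied to $M=T_{V}$, combined with the elementary bounds $\left\|T_{V}\right\|\le\trace\left(T_{V}\right)$, $\trace\left(T_{V}^{2}\right)\le\trace\left(T_{V}\right)^{2}$, and $\sqrt{t^{2}+t}\le t+1/2$:

```latex
\trace\left(T_{V}\right)+2\left\|T_{V}\right\|t+2\sqrt{\left\|T_{V}\right\|^{2}t^{2}+\trace\left(T_{V}^{2}\right)t}
\le \trace\left(T_{V}\right)\left(1+2t+2\sqrt{t^{2}+t}\right)
\le \left(4t+2\right)\trace\left(T_{V}\right).
```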
\section{Upper Bound on the Variance Term $T_{V}$ with {Homo-Correlated Process}} \label{sec:bound_tv_homo}
We consider the {homo-correlation} case with $\Xi_{k,n} = \Xi_n$.
Proofs in this section are based on \cite{bartlett2020benign} for the independent data case.
The key difference in our study is that we conduct de-correlation by the autocorrelation matrix $\Xi_n$ to deal with dependent data.
Also, we additionally handle the dependency of the noise variable.
Therefore, the results in \citet{bartlett2020benign} need to be updated.
We recall $z_{t,i} = x_{t}^{\top}e_{i}/\sqrt{\lambda_{0,i}}$ defined in Section \ref{sec:setting_process}, and also define its vector $z_{k} = (z_{1,k},\ldots,z_{n,k})^\top$.
Note that $\{z_{k}:k\in\N,\lambda_{0,k}>0\}$
is an i.i.d.\ sequence of $\R^{n}$-valued Gaussian random variables whose distributions are $N\left(\mathbf{0},\Xi_{n}\right)$.
We also define, for all $k\in\N$ with $\lambda_{0,k}>0$, an $\R^{n}$-valued Gaussian random variable $\tilde{z}_{k}$, which is a decorrelated version of $z_{k}$, as
\begin{align*}
\tilde{z}_{k}:=\Xi_{n}^{-1/2}z_{k}=\Xi_{n}^{-1/2}Xe_{k}/\sqrt{\lambda_{0,k}}.
\end{align*}
{Note that the $\tilde{z}_{k}$'s are i.i.d.\ $\R^{n}$-valued standard Gaussian random vectors, i.e., $\tilde{z}_{k}\sim N\left(\mathbf{0},I_{n}\right)$.}
We introduce an effect of the noise covariance into Lemma 3 of \citet{bartlett2020benign} and obtain the following result:
\begin{lemma}\label{varBLLT20L03Ho}
Let us assume $\lambda_{0,n+1}>0$. We have
\begin{align*}
\trace\left(T_{V}\right)=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\Xi_{n}^{-1/2}\Upsilon_{n}\Xi_{n}^{-1/2}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\tilde{z}_{i}\right],
\end{align*}
where $\tilde{z}_{i}$'s are i.i.d.\ $\R^{n}$-valued standard Gaussian random variables.
In addition,
\begin{align*}
\trace\left(T_{V}\right)&\le\frac{\mu_{1}\left(\Upsilon_{n}\right)}{\mu_{n}\left(\Xi_{n}\right)}\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\tilde{z}_{i}\right],\\
\trace\left(T_{V}\right)&\ge\frac{\mu_{n}\left(\Upsilon_{n}\right)}{\mu_{1}\left(\Xi_{n}\right)}\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\tilde{z}_{i}\right].
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20L03Ho}]
The linear map $X:\Hilbert\to\R^{n}$ has the decomposition with respect to the orthonormal basis $\left\{e_{k}\right\}$
\begin{align*}
X
=\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top}
=\Xi_{n}^{1/2}\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}\tilde{z}_{i}e_{i}^{\top},
\end{align*}
where the infinite series converges almost surely (and in $L^{p}$ for all $p\ge1$).
It leads to
\begin{align*}
XX^{\top}=\Xi_{n}^{1/2}\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}\tilde{z}_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}\tilde{z}_{i}e_{i}^{\top}\right)^{\top}\Xi_{n}^{1/2}=\Xi_{n}^{1/2}\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}\tilde{z}_{i}\tilde{z}_{i}^{\top}\right)\Xi_{n}^{1/2}
\end{align*}
and
\begin{align*}
X\Sigma X^{\top}&=\Xi_{n}^{1/2}\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}\tilde{z}_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}e_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}\tilde{z}_{i}e_{i}^{\top}\right)^{\top}\Xi_{n}^{1/2}\\
&=\Xi_{n}^{1/2}\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}^{2}\tilde{z}_{i}\tilde{z}_{i}^{\top}\right)\Xi_{n}^{1/2}.
\end{align*}
We obtain
\begin{align*}
&\trace\left(T_{V}\right)\\
&=\trace\left(\Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{\dagger}X\Sigma X^{\top}\left(XX^{\top}\right)^{\dagger}\Upsilon_{n}^{1/2}\right)\\
&=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\Xi_{n}^{1/2}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\Xi_{n}^{1/2}\tilde{z}_{j}\tilde{z}_{j}^{\top}\Xi_{n}^{1/2}\right)^{-1}\Upsilon_{n}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\Xi_{n}^{1/2}\tilde{z}_{j}\tilde{z}_{j}^{\top}\Xi_{n}^{1/2}\right)^{-1}\Xi_{n}^{1/2}\tilde{z}_{i}\right]\\
&=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\Xi_{n}^{1/2}\Xi_{n}^{-1/2}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\Xi_{n}^{-1/2}\Upsilon_{n}\Xi_{n}^{-1/2}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\Xi_{n}^{-1/2}\Xi_{n}^{1/2}\tilde{z}_{i}\right]\\
&=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\Xi_{n}^{-1/2}\Upsilon_{n}\Xi_{n}^{-1/2}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-1}\tilde{z}_{i}\right].
\end{align*}
The remaining statements immediately follow.
\end{proof}
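The trace identity in Lemma \ref{varBLLT20L03Ho} can be verified numerically in a finite-dimensional truncation. The sketch below is ours (hypothetical dimensions and randomly generated $\Xi_{n}$, $\Upsilon_{n}$); it uses a Cholesky factor $L$ with $LL^{\top}=\Xi_{n}$ in place of the symmetric square root, with $L^{-1}\Upsilon_{n}L^{-\top}$ playing the role of $\Xi_{n}^{-1/2}\Upsilon_{n}\Xi_{n}^{-1/2}$, which leaves the trace unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 5, 12                              # over-parameterized truncation: p > n

lam = 0.5 ** np.arange(p)                 # lambda_{0,i}; Sigma_0 = diag(lam)
Zt = rng.standard_normal((n, p))          # columns are the decorrelated z~_i
B = rng.standard_normal((n, n))
Xi = B @ B.T + n * np.eye(n)              # a positive definite "autocorrelation"
Ups = np.diag(rng.uniform(0.5, 1.5, n))   # noise covariance Upsilon_n

L = np.linalg.cholesky(Xi)                # factor with L L^T = Xi_n
X = L @ (Zt * np.sqrt(lam))               # X = L * sum_i sqrt(lam_i) z~_i e_i^T

# left-hand side: trace(T_V) computed directly from X
G_inv = np.linalg.inv(X @ X.T)
TV = np.sqrt(Ups) @ G_inv @ X @ np.diag(lam) @ X.T @ G_inv @ np.sqrt(Ups)
lhs = np.trace(TV)

# right-hand side: the sum representation from the lemma, with
# L^{-1} Ups L^{-T} in the role of Xi^{-1/2} Upsilon Xi^{-1/2}
A_inv = np.linalg.inv((Zt * lam) @ Zt.T)  # (sum_j lam_j z~_j z~_j^T)^{-1}
L_inv = np.linalg.inv(L)
M = L_inv @ Ups @ L_inv.T
rhs = sum(lam[i] ** 2 * Zt[:, i] @ A_inv @ M @ A_inv @ Zt[:, i] for i in range(p))
print(lhs, rhs)
```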
The following lemma is a corollary of Lemma 6 of \citet{bartlett2020benign}.
\begin{lemma}\label{varBLLT20L11}
There are constants $b,c\ge 1$ such that if $0\le k\le n/c$, $r_{k}\left(\Sigma_{0}\right)\ge bn$, and $l\le k$ then with probability at least $1-7e^{-n/c}$,
\begin{align*}
\trace\left(T_{V}\right)\le c\mu_{n}^{-1}\left(\Xi_{n}\right)\mu_{1}\left(\Upsilon_{n}\right)\left(\frac{l}{n}+n\frac{\sum_{i>l}\lambda_{0,i}^{2}}{\left(\sum_{i>k}\lambda_{0,i}\right)^{2}}\right).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20L11}]
It is obvious that by Lemma \ref{varBLLT20L03Ho}
\begin{align*}
\trace\left(T_{V}\right)\le \mu_{1}(\Xi_{n}^{-1})\mu_{1}\left(\Upsilon_{n}\right)\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-2}\tilde{z}_{i}\right],
\end{align*}
and by Lemma 6 of \cite{bartlett2020benign}, with probability at least $1-7e^{-n/c}$,
\begin{align*}
\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}\tilde{z}_{j}\tilde{z}_{j}^{\top}\right)^{-2}\tilde{z}_{i}\right]\le c\left(\frac{l}{n}+n\frac{\sum_{i>l}\lambda_{0,i}^{2}}{\left(\sum_{i>k}\lambda_{0,i}\right)^{2}}\right)
\end{align*}
because $\tilde{z}_{i}$'s are $\R^{n}$-valued i.i.d.\ standard Gaussian random variables.
\end{proof}
The following lemma is a corollary of Lemma 10 of \citet{bartlett2020benign}.
\begin{lemma}\label{varBLLT20L16}
There is a constant $c\ge1$ such that for all $k$ with $0\le k\le n/c$ and any $b>1$, with probability at least $1-10e^{-n/c}$,
\begin{enumerate}
\item If $r_{k}\left(\Sigma_{0}\right)<bn$, then $\trace\left(T_{V}\right)\ge \mu_{1}^{-1}\left(\Xi_{n}\right)\mu_{n}\left(\Upsilon_{n}\right)\frac{k+1}{cb^{2}n}$.
\item If $r_{k}\left(\Sigma_{0}\right)\ge bn$, then
\begin{align*}
\trace\left(T_{V}\right)\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{cb^{2}\mu_{1}\left(\Xi_{n}\right)}\min_{l\le k}\left(\frac{l}{n}+\frac{b^{2}n\sum_{i>l}\lambda_{0,i}^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}\right).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20L16}]
It is obvious by Lemma 10 of \cite{bartlett2020benign} and the discussion of the proof of Lemma \ref{varBLLT20L11}.
\end{proof}
\section{Upper and Lower Bounds on the Variance Term $T_{V}$ with (Possibly) Hetero-Correlated Process}
Let us consider a general case such that for all $k\in\N$ such that $\lambda_{0,k}>0$, the coordinate processes $\left\{\left(x_{t}^{\top}e_{k}\right)/\sqrt{\lambda_{0,k}}:t=1,\ldots,n\right\}$ have the autocovariance matrix (not necessarily identical to each other)
\begin{align*}
\Xi_{k,n}^{\left(t_{1},t_{2}\right)}=\frac{\lambda_{\left|t_{1}-t_{2}\right|,k}}{\lambda_{0,k}}
\end{align*}
for all $t_{1},t_{2}=1,\ldots,n$.
If for some $k_{1},k_{2}\in\N$ such that $\lambda_{0,k_{1}}>0$ and $\lambda_{0,k_{2}}>0$, $\Xi_{k_{1},n}\neq\Xi_{k_{2},n}$, we cannot utilize decorrelation used in the {homo-correlation} case.
However, we can use an alternative discussion that for all $k:\lambda_{0,k}>0$ and $v\in\R^{n}$, $v^{\top}z_{k}$ is $\left\|v\right\|^{2}\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right)$-sub-Gaussian, where
\begin{align*}
z_{k}&:=Xe_{k}/\sqrt{\lambda_{0,k}}=\left[
\begin{matrix}
z_{1,k}\\
\vdots\\
z_{n,k}
\end{matrix}\right]\sim N\left(\mathbf{0},\Xi_{k,n}\right).
\end{align*}
Note that the $z_{k}$'s with $k\in\N$ and $\lambda_{0,k}>0$ are independent but not necessarily identically distributed Gaussian random variables.
Furthermore, we define some random matrices for $i,k \in \N$ satisfying $\lambda_{0,i}>0$ and $\lambda_{0,k}>0$ such that
\begin{align*}
A=\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}z_{i}z_{i}^{\top},\
A_{-i}=\sum_{j\neq i:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top},\mbox{~and~}
A_{k}=\sum_{i>k:\lambda_{0,i}>0}\lambda_{0,i}z_{i}z_{i}^{\top}.
\end{align*}
\subsection{Supportive Result}
Using the notation above, we give another variation of Lemma 3 of \citet{bartlett2020benign} by introducing the effect of the noise covariance.
\begin{lemma}\label{varBLLT20L03He}
Let us assume $\lambda_{0,n+1}>0$. We have
\begin{align*}
\trace\left(T_{V}\right)=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}z_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-1}\Upsilon_{n}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-1}z_{i}\right],
\end{align*}
where $z_{i}\sim N\left(\mathbf{0},\Xi_{i,n}\right)$'s are independent $\R^{n}$-valued random variables.
In addition,
\begin{align*}
\trace\left(T_{V}\right)&\le\mu_{1}\left(\Upsilon_{n}\right)\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}z_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-2}z_{i}\right],\\
\trace\left(T_{V}\right)&\ge\mu_{n}\left(\Upsilon_{n}\right)\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}z_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-2}z_{i}\right],
\end{align*}
and for all $i$ with $\lambda_{0,i}>0$,
\begin{align*}
\lambda_{0,i}^{2}z_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-2}z_{i}=\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}.
\end{align*}
\end{lemma}
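The last identity can be seen as an instance of the Sherman--Morrison formula: writing $A=A_{-i}+\lambda_{0,i}z_{i}z_{i}^{\top}$, we have
\begin{align*}
A^{-1}z_{i}=A_{-i}^{-1}z_{i}-\frac{\lambda_{0,i}A_{-i}^{-1}z_{i}\left(z_{i}^{\top}A_{-i}^{-1}z_{i}\right)}{1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}}=\frac{A_{-i}^{-1}z_{i}}{1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}},
\end{align*}
and taking squared norms yields the displayed expression.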
\begin{proof}[Proof of Lemma \ref{varBLLT20L03He}]
The linear map $X:\Hilbert\to\R^{n}$ has the decomposition with respect to the orthonormal basis $\left\{e_{k}\right\}$
\begin{align*}
X
=\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top},
\end{align*}
where the infinite series converges almost surely (and in $L^{p}$ for all $p\ge1$).
It leads to
\begin{align*}
XX^{\top}=\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top}\right)^{\top}=\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}z_{i}z_{i}^{\top}\right)
\end{align*}
and
\begin{align*}
X\Sigma X^{\top}&=\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}e_{i}e_{i}^{\top}\right)\left(\sum_{i:\lambda_{0,i}>0}\sqrt{\lambda_{0,i}}z_{i}e_{i}^{\top}\right)^{\top}\\
&=\left(\sum_{i:\lambda_{0,i}>0}\lambda_{0,i}^{2}z_{i}z_{i}^{\top}\right).
\end{align*}
We obtain
\begin{align*}
\trace\left(T_{V}\right)
&=\trace\left(\Upsilon_{n}^{1/2}\left(XX^{\top}\right)^{\dagger}X\Sigma X^{\top}\left(XX^{\top}\right)^{\dagger}\Upsilon_{n}^{1/2}\right)\\
&=\sum_{i:\lambda_{0,i}>0}\left[\lambda_{0,i}^{2}z_{i}^{\top}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-1}\Upsilon_{n}\left(\sum_{j:\lambda_{0,j}>0}\lambda_{0,j}z_{j}z_{j}^{\top}\right)^{-1}z_{i}\right].
\end{align*}
The remaining statements follow immediately from Lemma S.3 of \cite{bartlett2020benign}.
\end{proof}
The following lemma is a variation of Corollary 1 of \citealp{bartlett2020benign}.
\begin{lemma}\label{varBLLT20C01}
There is a universal constant $a>0$ such that for any mean-zero Gaussian random vector $z\in\R^{n}$ with the covariance matrix $\Xi$, any random subspace $\mathscr{L}$ of $\R^{n}$ of codimension $d$ that is independent of $z$, and any $u>0$, with probability at least $1-3e^{-u}$,
\begin{align*}
\left\|z\right\|^{2}&\le \trace\left(\Xi\right)+a\mu_{1}\left(\Xi\right)\left(u+\sqrt{nu}\right),\\
\left\|\Pi_{\mathscr{L}}z\right\|^{2}&\ge \trace\left(\Xi\right)-a\mu_{1}\left(\Xi\right)\left(d+u+\sqrt{nu}\right),
\end{align*}
where $\Pi_{\mathscr{L}}$ is the orthogonal projection on $\mathscr{L}$.
\end{lemma}
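As a sanity check, in the i.i.d.\ case $\Xi=I_{n}$ we have $\trace\left(\Xi\right)=n$ and $\mu_{1}\left(\Xi\right)=1$, so the bounds reduce to
\begin{align*}
\left\|z\right\|^{2}\le n+a\left(u+\sqrt{nu}\right),\quad\left\|\Pi_{\mathscr{L}}z\right\|^{2}\ge n-a\left(d+u+\sqrt{nu}\right),
\end{align*}
which matches Corollary 1 of \citet{bartlett2020benign} up to constants.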
\begin{proof}[Proof of Lemma \ref{varBLLT20C01}]
We use the notation $\tilde{z}:=\Xi^{-1/2}z$, whose distribution is $N\left(\mathbf{0},I_{n}\right)$.
Theorem 1.1 of \cite{RV2013Hanson} verifies that there is a universal constant $c>0$ such that with probability at least $1-2e^{-u}$,
\begin{align*}
\left|\tilde{z}^{\top}\Xi\tilde{z}-\trace\left(\Xi\right)\right|\le c\left(\mu_{1}\left(\Xi\right)u+\left\|\Xi\right\|_{\mathrm{HS}}\sqrt{u}\right)\le c\mu_{1}\left(\Xi\right)\left(u+\sqrt{nu}\right),
\end{align*}
where the second inequality uses $\left\|\Xi\right\|_{\mathrm{HS}}\le\mu_{1}\left(\Xi\right)\sqrt{n}$.
We can easily obtain the first inequality of the statement, because we have
\begin{align*}
\left\|z\right\|^{2}= \tilde{z}^{\top}\Xi\tilde{z}\le \trace\left(\Xi\right)+c\mu_{1}\left(\Xi\right)\left(u+\sqrt{nu}\right).
\end{align*}
With respect to the second inequality of the statement, we have the decomposition as \cite{bartlett2020benign}:
\begin{align*}
\left\|\Pi_{\mathscr{L}}z\right\|^{2}=\left\|z\right\|^{2}-\left\|\Pi_{\mathscr{L}^{\perp}}z\right\|^{2}.
\end{align*}
Note that
\begin{align*}
\left\|z\right\|^{2}=\tilde{z}^{\top}\Xi\tilde{z}\ge \trace\left(\Xi\right)-c\mu_{1}\left(\Xi\right)\left(u+\sqrt{nu}\right).
\end{align*}
Using the transpose $\Pi_{\mathscr{L}^{\perp}}^{\top}$ of the projection $\Pi_{\mathscr{L}^{\perp}}$, we define $M:=\Pi_{\mathscr{L}^{\perp}}^{\top}\Pi_{\mathscr{L}^{\perp}}$, which satisfies $\left\|M\right\|=1$ and $\trace\left(M\right)=\trace\left(M^{2}\right)=d$.
Then, Lemma S.2 of \citet{bartlett2020benign} leads to the bound that with probability at least $1-e^{-u}$,
\begin{align*}
\left\|\Pi_{\mathscr{L}^{\perp}}z\right\|^{2}&\le \tilde{z}^{\top}\Xi^{1/2}M\Xi^{1/2}\tilde{z}\\
&\le \trace\left(\Xi M\right)+2\left\|\Xi^{1/2}M\Xi^{1/2}\right\|u+2\sqrt{\left\|\Xi^{1/2}M\Xi^{1/2}\right\|^{2}u^{2}+\trace\left(\left(\Xi M\right)^{2}\right)u}\\
&\le \mu_{1}\left(\Xi\right)\trace\left(M\right)+2\mu_{1}\left(\Xi\right)u+2\sqrt{\mu_{1}^{2}\left(\Xi\right)u^{2}+\mu_{1}^{2}\left(\Xi\right)\trace\left(M^{2}\right)u}\\
&\le \mu_{1}\left(\Xi\right)d+2\mu_{1}\left(\Xi\right)u+2\sqrt{\mu_{1}^{2}\left(\Xi\right)u^{2}+ \mu_{1}^{2}\left(\Xi\right)du}\\
&= \mu_{1}\left(\Xi\right)\left(d+2u+2\sqrt{u\left(d+u\right)}\right)\\
&\le \mu_{1}\left(\Xi\right)\left(2d+4u\right)
\end{align*}
by the elementary trace inequalities (immediate consequences of, e.g., the von Neumann trace inequality)
\begin{align*}
\trace\left(\Xi M\right)
&\le\mu_{1}\left(\Xi\right)\trace\left(M\right),\\
\trace\left(\Xi M\Xi M\right)
&\le \mu_{1}\left(\Xi\right)\trace\left(M\Xi M\right)
=\mu_{1}\left(\Xi\right)\trace\left(\Xi M^{2}\right)
\le \mu_{1}^{2}\left(\Xi\right)\trace\left(M^{2}\right).
\end{align*}
Hence
\begin{align*}
\left\|\Pi_{\mathscr{L}}z\right\|^{2}&\ge \trace\left(\Xi\right)-c\mu_{1}\left(\Xi\right)\left(u+\sqrt{nu}\right)-\mu_{1}\left(\Xi\right)\left(2d+4u\right)\\
&\ge \trace\left(\Xi\right)-\mu_{1}\left(\Xi\right)\left(c\left(u+\sqrt{nu}\right)+2d+4u\right).
\end{align*}
We obtain the second inequality.
\end{proof}
\begin{lemma}\label{lem:bound_eigen}
There is a universal constant $c>0$ such that with probability at least $1-2e^{-n/c}$,
\begin{align}
&\frac{1}{c}\inf_{k:\lambda_{0,k}>0}\mu_{n}\left(\Xi_{k,n}\right)\sum_{i}\lambda_{0,i}-c\sup_{k:\lambda_{0,k}>0}\mu_{1}^{2}\left(\Xi_{k,n}\right)\lambda_{0,1}n\notag\\
&\le \mu_{n}\left(A\right)\le\mu_{1}\left(A\right)\le c\left(\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right)\sum_{i}\lambda_{0,i}+\sup_{k:\lambda_{0,k}>0}\mu_{1}^{2}\left(\Xi_{k,n}\right)\lambda_{0,1}n\right).
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:bound_eigen}]
Let $\varkappa_{n}:=\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right)$.
For any $v\in\R^{n}$ and $k\in\N:\lambda_{0,k}>0$, $v^{\top}z_{k}$ is clearly $\left\|v\right\|^{2}\varkappa_{n}$-sub-Gaussian.
Because $\left(v^{\top}z_{k}\right)^{2}-v^{\top}\Xi_{k,n}v$ is a centered $c_{1}\varkappa_{n}$-sub-exponential random variable for a universal constant $c_{1}>0$, Proposition 2.5.2 and Lemma 2.7.6 of \citet{Vershynin2018High} yield that there exists a universal constant $c_{2}=c_{2}\left(c_{1}\right)>0$ such that for any fixed unit vector $v\in\R^{n}$, with probability at least $1-2e^{-t}$,
\begin{align*}
\left|v^{\top}Av-v^{\top}\left(\sum_{i}\lambda_{0,i}\Xi_{i,n}\right)v\right|
&\le c_{2}\varkappa_{n}\max\left(\sup_{k:\lambda_{0,k}>0}\left(v^{\top}\Xi_{k,n}v\right)\lambda_{0,k}t,\sqrt{t\sum_{i}\lambda_{0,i}\left(v^{\top}\Xi_{i,n}v\right)}\right)\\
&\le c_{2}\varkappa_{n}\max\left(\sup_{k:\lambda_{0,k}>0}\left(v^{\top}\Xi_{k,n}v\right)\lambda_{0,1}t,\sqrt{\sup_{k:\lambda_{0,k}>0}\left(v^{\top}\Xi_{k,n}v\right)t\sum_{i}\lambda_{0,i}}\right)\\
&\le c_{2}\varkappa_{n}\max\left(t\varkappa_{n}\lambda_{0,1},\sqrt{t\varkappa_{n}\sum_{i}\lambda_{0,i}}\right)\\
&\le c_{2}\varkappa_{n}^{2}\max\left(t\lambda_{0,1},\sqrt{t\sum_{i}\lambda_{0,i}}\right).
\end{align*}
\citep[see also Lemma S.9 of][]{bartlett2020benign}.
Let $\mathcal{N}$ be a $1/4$-net on $\mathcal{S}^{n-1}:=\left\{v\in\R^{n}:\left\|v\right\|=1\right\}$ such that $\left|\mathcal{N}\right|\le 9^{n}$.
A union bound then shows that, with probability at least $1-2e^{-t}$, for all $v\in\mathcal{N}$,
\begin{align*}
\left|v^{\top}Av-v^{\top}\left(\sum_{i}\lambda_{0,i}\Xi_{i,n}\right)v\right|&\le c_{2}\varkappa_{n}^{2}\max\left(\left(t+n\log9\right)\lambda_{0,1},\sqrt{\left(t+n\log9\right)\sum_{i}\lambda_{0,i}}\right).
\end{align*}
By Lemma S.8 of \cite{bartlett2020benign}, there exists a universal constant $c_{3}=c_{3}\left(c_{2}\right)>0$ such that with probability at least $1-2e^{-t}$,
\begin{align*}
\left\|A-\sum_{i}\lambda_{0,i}\Xi_{i,n}\right\|\le c_{3}\varkappa_{n}^{2}\left(\left(t+n\log9\right)\lambda_{0,1}+\sqrt{\left(t+n\log9\right)\sum_{i}\lambda_{0,i}}\right).
\end{align*}
Note that
\begin{align*}
\left\|A-\sum_{i}\lambda_{0,i}\Xi_{i,n}\right\|=\max_{v\in\mathcal{S}^{n-1}}\left|v^{\top}\left(A-\sum_{i}\lambda_{0,i}\Xi_{i,n}\right)v\right|\ge \max_{v\in\mathcal{S}^{n-1}}v^{\top}Av-\varkappa_{n}\sum_{i}\lambda_{0,i},
\end{align*}
and
\begin{align*}
\left\|A-\sum_{i}\lambda_{0,i}\Xi_{i,n}\right\|=\max_{v\in\mathcal{S}^{n-1}}\left|v^{\top}\left(\sum_{i}\lambda_{0,i}\Xi_{i,n}-A\right)v\right|\ge \inf_{k:\lambda_{0,k}>0}\mu_{n}\left(\Xi_{k,n}\right)\sum_{i}\lambda_{0,i}-\min_{v\in\mathcal{S}^{n-1}}v^{\top}Av.
\end{align*}
The remaining discussion is parallel to \cite{bartlett2020benign}.
\end{proof}
By regarding $\sup_{n\in\N}\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right)<\infty$ as playing the role of $\sigma_{x}^{2}$ in \citet{bartlett2020benign}, we obtain arguments almost identical to those of \citet{bartlett2020benign}.
In the following discussion, let us assume
\begin{align*}
\sup_{n\in\N}\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right)<\infty,\ \inf_{n\in\N}\inf_{k:\lambda_{0,k}>0}\mu_{n}\left(\Xi_{k,n}\right)>0
\end{align*}
and fix $\varkappa>0$ defined as
\begin{align*}
\varkappa:=\max\left\{\sup_{n\in\N}\sup_{k:\lambda_{0,k}>0}\mu_{1}\left(\Xi_{k,n}\right),\left(\inf_{n\in\N}\inf_{k:\lambda_{0,k}>0}\mu_{n}\left(\Xi_{k,n}\right)\right)^{-1}\right\}<\infty.
\end{align*}
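Note that $\varkappa\ge1$: each $\Xi_{k,n}$ has unit diagonal entries, so $\trace\left(\Xi_{k,n}\right)=n$ and hence $\mu_{1}\left(\Xi_{k,n}\right)\ge1\ge\mu_{n}\left(\Xi_{k,n}\right)$. This observation is used implicitly whenever lower powers of $\varkappa$ are absorbed into higher ones.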
Then the bounds become quite parallel to those of \cite{bartlett2020benign}.
The following corollary is a variation of Lemma 4 of \citealp{bartlett2020benign}.
\begin{corollary}\label{varBLLT20L04}
There is a constant $c=c\left(\varkappa\right)>0$ such that for any $k\ge 0$, with probability at least $1-2e^{-n/c}$,
\begin{align*}
\frac{1}{c}\sum_{i>k}\lambda_{0,i}-c\lambda_{0,k+1}n\le \mu_{n}\left(A_{k}\right)\le\mu_{1}\left(A_{k}\right)\le c\left(\sum_{i>k}\lambda_{0,i}+\lambda_{0,k+1}n\right).
\end{align*}
\end{corollary}
We give a variation of Lemma 5 of \citealp{bartlett2020benign}.
\begin{lemma}\label{varBLLT20L05}
There are constants $b=b\left(\varkappa\right)\ge1$ and $c=c\left(\varkappa\right)\ge 1$ such that for any $k\ge0$, with probability at least $1-2e^{-n/c}$,
\begin{enumerate}
\item for all $i\ge 1$,
\begin{align*}
\mu_{k+1}\left(A_{-i}\right)\le \mu_{k+1}\left(A\right)\le \mu_{1}\left(A_{k}\right)\le c\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right);
\end{align*}
\item for all $1\le i\le k$,
\begin{align*}
\mu_{n}\left(A\right)\ge \mu_{n}\left(A_{-i}\right)\ge \mu_{n}\left(A_{k}\right)\ge \frac{1}{c}\sum_{j>k}\lambda_{0,j}-cn\lambda_{0,k+1};
\end{align*}
\item if $r_{k}\left(\Sigma_{0}\right)\ge bn$, then
\begin{align*}
\frac{1}{c}\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\le \mu_{n}\left(A_{k}\right)\le \mu_{1}\left(A_{k}\right)\le c\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20L05}]
Let us recall Corollary \ref{varBLLT20L04}: there exists a constant $c_{1}=c_{1}\left(\varkappa\right)>0$ such that for all $k\ge 0$, with probability at least $1-2e^{-n/c_{1}}$,
\begin{align*}
\frac{1}{c_{1}}\sum_{i>k}\lambda_{0,i}-c_{1}\lambda_{0,k+1}n\le \mu_{n}\left(A_{k}\right)\le\mu_{1}\left(A_{k}\right)\le c_{1}\left(\sum_{i>k}\lambda_{0,i}+\lambda_{0,k+1}n\right).
\end{align*}
Firstly, notice that the matrix $A-A_{k}$ has its rank at most $k$.
Thus, there is a linear space $\mathscr{L}$ of dimension $n-k$ such that for all $v\in\mathscr{L}$, $v^{\top}Av=v^{\top}A_{k}v\le \mu_{1}\left(A_{k}\right)\left\|v\right\|^{2}$ and therefore $\mu_{k+1}\left(A\right)\le \mu_{1}\left(A_{k}\right)$ \citep[Lemma S.10 of][]{bartlett2020benign}.
In the second place, Lemma S.11 of \citet{bartlett2020benign} yields that for all $i$ and $j$, $\mu_{j}\left(A_{-i}\right)\le \mu_{j}\left(A\right)$.
On the other hand, for all $i\le k$, $\mu_{n}\left(A_{-i}\right)\ge\mu_{n}\left(A_{k}\right)$ by $A_{k}\preceq A_{-i}$ and Lemma S.11 of \citet{bartlett2020benign} too.
Finally, if $r_{k}\left(\Sigma_{0}\right)\ge bn$ for some $b\ge 1$,
\begin{align*}
c_{1}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)&=c_{1}\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)+n\lambda_{0,k+1}\right)\\
&\le \left(c_{1}+\frac{c_{1}}{b}\right)\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right),\\
\frac{1}{c_{1}}\sum_{j>k}\lambda_{0,j}-c_{1}n\lambda_{0,k+1}&=\frac{1}{c_{1}}\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)-c_{1}n\lambda_{0,k+1}\\
&\ge \left(\frac{1}{c_{1}}-\frac{c_{1}}{b}\right)\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right).
\end{align*}
Let $b>c_{1}^{2}$ and $c>\max\left\{c_{1}+1/c_{1},1/\left(1/c_{1}-c_{1}/b\right)\right\}$; then the third statement holds.
\end{proof}
\subsection{Derivation of Upper Bound}
We provide the following lemma for an upper bound by developing a variation of Lemma 6 of \citealp{bartlett2020benign}.
\begin{lemma} \label{lem:TV_upper}
There are constants $b=b\left(\varkappa\right)\ge 1$ and $c=c\left(\varkappa\right)\ge 1$ such that if $0\le k\le n/c$, $r_{k}\left(\Sigma_{0}\right)\ge bn$, and $l\le k$, then with probability at least $1-7e^{-n/c}$,
\begin{align*}
\trace\left(T_{V}\right)\le c\mu_{1}\left(\Upsilon_{n}\right)\left(\frac{l}{n}+n\frac{\sum_{i>l}\lambda_{0,i}^{2}}{\left(\sum_{i>k}\lambda_{0,i}\right)^{2}}\right).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:TV_upper}]
Fix $b=b\left(\varkappa\right)$ to its value in Lemma \ref{varBLLT20L05}.
By Lemma \ref{varBLLT20L03He},
\begin{align*}
\trace\left(T_{V}\right)&\le \mu_{1}\left(\Upsilon_{n}\right)\sum_{i}\lambda_{0,i}^{2}z_{i}^{\top}A^{-2}z_{i}\\
&\le \mu_{1}\left(\Upsilon_{n}\right)\left(\sum_{i=1}^{l}\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}+\sum_{i>l}\lambda_{0,i}^{2}z_{i}^{\top}A^{-2}z_{i}\right).
\end{align*}
Firstly, let us consider the sum up to $l$.
Lemma \ref{varBLLT20L05} shows that there exists a constant $c_{1}=c_{1}\left(\varkappa\right)\ge1$ such that on the event $E_{1}$ with $P\left(E_{1}\right)\ge 1-2e^{-n/c_{1}}$, if $k$ satisfies $r_{k}\left(\Sigma_{0}\right)\ge bn$, then for all $i\le k$, $\mu_{n}\left(A_{-i}\right)\ge \lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)/c_{1}$, and for all $i\ge 1$, $\mu_{k+1}\left(A_{-i}\right)\le c_{1}\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)$.
Hence on $E_{1}$, for all $z\in\R^{n}$ and $1\le i\le l$,
\begin{align*}
z^{\top} A_{-i}^{-2}z&\le \frac{c_{1}^{2}\left\|z\right\|^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}},\\
z^{\top}A_{-i}^{-1}z&\ge \left(\Pi_{\mathscr{L}_{i}}z\right)^{\top} A_{-i}^{-1}\Pi_{\mathscr{L}_{i}}z\ge \frac{\left\|\Pi_{\mathscr{L}_{i}}z\right\|^{2}}{c_{1}\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)},
\end{align*}
where $\mathscr{L}_{i}$ is the span of the $n-k$ eigenvectors of $A_{-i}$ corresponding to its smallest $n-k$ eigenvalues.
Therefore, on $E_{1}$, for all $i\le l$,
\begin{align*}
\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\le \frac{z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\le c_{1}^{4}\frac{\left\|z_{i}\right\|^{2}}{\left\|\Pi_{\mathscr{L}_{i}}z_{i}\right\|^{2}}.
\end{align*}
We apply Lemma \ref{varBLLT20C01} $l$ times together with a union bound to show that there exist constants $c_{0}=c_{0}\left(a,\varkappa\right)>0$, $c_{2}=c_{2}\left(a,c_{0},\varkappa\right)>0$, and $c_{3}=c_{3}\left(a,c_{0},\varkappa\right)>0$ such that on the event $E_{2}$ with $P\left(E_{2}\right)\ge 1-3e^{-t}$, for all $1\le i\le l$,
\begin{align*}
\left\|z_{i}\right\|^{2}&\le n+a\left(t+\log k+\sqrt{n\left(t+\log k\right)}\right)\le c_{2}n,\\
\left\|\Pi_{\mathscr{L}_{i}}z_{i}\right\|^{2}&\ge n-a\left(k+t+\log k+\sqrt{n\left(t+\log k\right)}\right)\ge n/c_{3},
\end{align*}
provided that $t<n/c_{0}$ and $c>c_{0}$ owing to $\trace\left(\Xi_{i,n}\right)=n$ and $\log k\le n/c\le n/c_{0}$.
Then, there exists a constant $c_{4}=c_{4}\left(c_{1},c_{2},c_{3}\right)>0$ such that on the event $E_{1}\cap E_{2}$ with $P\left(E_{1}\cap E_{2}\right)\ge 1-5e^{-n/c_{0}}$,
\begin{align*}
\sum_{i=1}^{l}\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\le c_{4}\frac{l}{n}.
\end{align*}
In the second place, we consider the sum $\sum_{i>l}\lambda_{0,i}^{2}z_{i}^{\top}A^{-2}z_{i}$.
Lemma \ref{varBLLT20L05} shows that on $E_{1}$, $\mu_{n}\left(A\right)\ge \lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)/c_{1}$, and thus
\begin{align*}
\sum_{i>l}\lambda_{0,i}^{2}z_{i}^{\top}A^{-2}z_{i}\le \frac{c_{1}^{2}\sum_{i>l}\lambda_{0,i}^{2}\left\|z_{i}\right\|^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}.
\end{align*}
Note that
\begin{align*}
\sum_{i>l}\lambda_{0,i}^{2}\left\|z_{i}\right\|^{2}=\sum_{i>l}\lambda_{0,i}^{2}\tilde{z}_{i}^{\top}\Xi_{i,n}\tilde{z}_{i}\le \varkappa\sum_{i>l}\lambda_{0,i}^{2}\left\|\tilde{z}_{i}\right\|^{2}=\varkappa\sum_{i>l}\lambda_{0,i}^{2}\sum_{t=1}^{n}\left(\tilde{z}_{t,i}\right)^{2},
\end{align*}
where $\tilde{z}_{i}:=\Xi_{i,n}^{-1/2}z_{i}$, and $\tilde{z}_{t,i}\sim N\left(0,1\right)$ are i.i.d.\ random variables.
Lemma 2.7.6 of \citet{Vershynin2018High} yields that there exists a constant $c_{5}=c_{5}\left(a,c_{0},\varkappa\right)>0$ such that on the event $E_{3}$ with $P\left(E_{3}\right)\ge 1-2e^{-t}$,
\begin{align*}
\sum_{i>l}\lambda_{0,i}^{2}\left\|z_{i}\right\|^{2}&\le \varkappa\sum_{i>l}\lambda_{0,i}^{2}\sum_{t=1}^{n}\left(\tilde{z}_{t,i}\right)^{2}\\
&\le \varkappa \left(n\sum_{i>l}\lambda_{0,i}^{2}+a\max\left\{\lambda_{0,l+1}^{2}t,\sqrt{tn\sum_{i>l}\lambda_{0,i}^{4}}\right\}\right)\\
&\le \varkappa \left(n\sum_{i>l}\lambda_{0,i}^{2}+a\max\left\{\sum_{i>l}\lambda_{0,i}^{2}t,\sqrt{tn}\sum_{i>l}\lambda_{0,i}^{2}\right\}\right)\\
&\le c_{5}n\sum_{i>l}\lambda_{0,i}^{2}.
\end{align*}
The last inequality holds because $t<n/c_{0}$.
Then there exists a constant $c_{6}=c_{6}\left(c_{1},c_{5}\right)>0$ such that on $E_{1}\cap E_{2}\cap E_{3}$ with $P\left(E_{1}\cap E_{2}\cap E_{3}\right)\ge1-7e^{-n/c_{0}}$,
\begin{align*}
\sum_{i>l}\lambda_{0,i}^{2}z_{i}^{\top}A^{-2}z_{i}\le c_{6}n\frac{\sum_{i>l}\lambda_{0,i}^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}.
\end{align*}
Choosing $c>\max\left\{c_{0},c_{4},c_{6}\right\}$ gives the lemma.
\end{proof}
\subsection{Derivation of Lower Bound}
We start with the following lemma as a variation of Lemma 8 of \citealp{bartlett2020benign}.
\begin{lemma}\label{varBLLT20L08}
There is a constant $c=c\left(\varkappa\right)>0$ such that for any $i\ge 1$ with $\lambda_{0,i}>0$ and any $0\le k\le n/c$, with probability at least $1-5e^{-n/c}$,
\begin{align*}
\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\ge \frac{1}{cn}\left(1+\frac{\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}}{n\lambda_{0,i}}\right)^{-2}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varBLLT20L08}]
Fix $i\ge 1$ with $\lambda_{0,i}>0$ and $k$ with $0\le k\le n/c_{0}$, where $c_{0}=c_{0}\left(a,\varkappa\right)>0$ is a sufficiently large constant such that there exists $c_{2}=c_{2}\left(a,c_{0},\varkappa\right)$ with
\begin{align*}
n-a\varkappa\left(2n/c_{0}+n/\sqrt{c_{0}}\right)\ge n/c_{2}.
\end{align*}
By Lemma \ref{varBLLT20L05}, there exists a constant $c_{1}=c_{1}\left(\varkappa\right)\ge 1$ such that on the event $E_{1}$ with $P\left(E_{1}\right)\ge1-2e^{-n/c_{1}}$,
\begin{align*}
\mu_{k+1}\left(A_{-i}\right)\le c_{1}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right),
\end{align*}
and hence
\begin{align*}
z_{i}^{\top}A_{-i}^{-1}z_{i}\ge \frac{\left\|\Pi_{\mathscr{L}_{i}}z_{i}\right\|^{2}}{c_{1}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)},
\end{align*}
where $\mathscr{L}_{i}$ is the span of the $n-k$ eigenvectors of $A_{-i}$ corresponding to its smallest $n-k$ eigenvalues.
Lemma \ref{varBLLT20C01} and the definitions of $c_{0}$ and $c_{2}$ give that on the event $E_{2}$ with $P\left(E_{2}\right)\ge 1-3e^{-t}$,
\begin{align*}
\left\|\Pi_{\mathscr{L}_{i}}z_{i}\right\|^{2}\ge n-a\varkappa \left(k+t+\sqrt{tn}\right)\ge n-a\varkappa\left(2n/c_{0}+n/\sqrt{c_{0}}\right)\ge n/c_{2},
\end{align*}
provided that $t<n/c_{0}$.
Hence, there exists a constant $c_{3}=c_{3}\left(c_{0},c_{1},c_{2}\right)>0$ such that on the event $E_{1}\cap E_{2}$ with $P\left(E_{1}\cap E_{2}\right)\ge 1-5e^{-n/c_{3}}$,
\begin{align*}
z_{i}^{\top}A_{-i}^{-1}z_{i}\ge \frac{n}{c_{3}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)},
\end{align*}
and
\begin{align*}
1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\le \left(\frac{c_{3}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)}{\lambda_{0,i}n}+1\right)\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}.
\end{align*}
Then we have
\begin{align*}
\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\ge \left(\frac{c_{3}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)}{\lambda_{0,i}n}+1\right)^{-2}\frac{z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}.
\end{align*}
The Cauchy--Schwarz inequality and Lemma \ref{varBLLT20C01} yield that there exists $c_{4}=c_{4}\left(a,c_{0},\varkappa\right)>0$ such that on $E_{2}$,
\begin{align*}
\frac{z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\ge \frac{z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left\|A_{-i}^{-1}z_{i}\right\|^{2}\left\|z_{i}\right\|^{2}}=\frac{1}{\left\|z_{i}\right\|^{2}}\ge \frac{1}{n+a\varkappa\left(t+\sqrt{nt}\right)}\ge \frac{1}{c_{4}n}.
\end{align*}
Then there exists a constant $c_{5}=c_{5}\left(c_{3},c_{4}\right)$ such that for all $i\ge 1$ with $\lambda_{0,i}>0$ and $0\le k\le n/c_{0}$, with probability at least $1-5e^{-n/c_{3}}$,
\begin{align*}
\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}\ge\frac{1}{c_{4}n}\left(\frac{c_{3}\left(\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}\right)}{\lambda_{0,i}n}+1\right)^{-2}\ge \frac{1}{c_{5}n}\left(1+\frac{\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}}{n\lambda_{0,i}}\right)^{-2}.
\end{align*}
Choosing $c>\max\left\{c_{0},c_{3},c_{5}\right\}$ gives the lemma.
\end{proof}
Using the above result, we develop a lower bound of $ \trace\left(T_{V}\right)$, by a variation of Lemma 10 of \citealp{bartlett2020benign}.
\begin{lemma} \label{lem:lower_tv}
There is a constant $c=c\left(\varkappa\right)>0$ such that for any $0\le k\le n/c$ and any $b>1$ with probability at least $1-10e^{-n/c}$,
\begin{enumerate}
\item if $r_{k}\left(\Sigma_{0}\right)<bn$, then $\trace\left(T_{V}\right)\ge \mu_{n}\left(\Upsilon_{n}\right)\left(k+1\right)/\left(cb^{2}n\right)$;
\item if $r_{k}\left(\Sigma_{0}\right)\ge bn$, then
\begin{align*}
\trace\left(T_{V}\right)\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{cb^{2}}\min_{l\le k}\left(\frac{l}{n}+\frac{b^{2}n\sum_{i>l}\lambda_{0,i}^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}\right).
\end{align*}
\end{enumerate}
In particular, if every choice of $k\le n/c$ gives $r_{k}\left(\Sigma_{0}\right)<bn$, then taking $k=\lfloor n/c\rfloor$ in the first statement implies that, with probability at least $1-10e^{-n/c}$, $\trace\left(T_{V}\right) \gtrsim 1$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:lower_tv}]
By Lemma \ref{varBLLT20L03He},
\begin{align*}
\trace\left(T_{V}\right)\ge \mu_{n}\left(\Upsilon_{n}\right)\sum_{i}\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}
\end{align*}
and then Lemma 9 of \citet{bartlett2020benign} and Lemma \ref{varBLLT20L08} yield that there exist constants $c_{1}=c_{1}\left(\varkappa\right)>0$ and $c_{2}=c_{2}\left(\varkappa\right)>0$ such that with probability at least $1-10e^{-n/c_{1}}$,
\begin{align*}
\sum_{i}\frac{\lambda_{0,i}^{2}z_{i}^{\top}A_{-i}^{-2}z_{i}}{\left(1+\lambda_{0,i}z_{i}^{\top}A_{-i}^{-1}z_{i}\right)^{2}}&\ge \frac{1}{c_{1}n}\sum_{i}\left(1+\frac{\sum_{j>k}\lambda_{0,j}+n\lambda_{0,k+1}}{\lambda_{0,i}n}\right)^{-2}\\
&= \frac{1}{c_{1}n}\sum_{i}\left(1^{-1}+\left(\frac{\lambda_{0,i}n}{\sum_{j>k}\lambda_{0,j}}\right)^{-1}+\left(\frac{\lambda_{0,i}}{\lambda_{0,k+1}}\right)^{-1}\right)^{-2}\\
&\ge \frac{1}{c_{2}n}\sum_{i}\left(\max\left\{1^{-1},\left(\frac{\lambda_{0,i}n}{\sum_{j>k}\lambda_{0,j}}\right)^{-1},\left(\frac{\lambda_{0,i}}{\lambda_{0,k+1}}\right)^{-1}\right\}\right)^{-2}\\
&= \frac{1}{c_{2}n}\sum_{i}\min\left\{1,\frac{\lambda_{0,i}^{2}n^{2}}{\left(\sum_{j>k}\lambda_{0,j}\right)^{2}},\frac{\lambda_{0,i}^{2}}{\lambda_{0,k+1}^{2}}\right\}\\
&\ge \frac{1}{c_{2}b^{2}n}\sum_{i}\min\left\{1,\left(\frac{bn}{r_{k}\left(\Sigma_{0}\right)}\right)^{2}\frac{\lambda_{0,i}^{2}}{\lambda_{0,k+1}^{2}},\frac{\lambda_{0,i}^{2}}{\lambda_{0,k+1}^{2}}\right\}.
\end{align*}
If $r_{k}\left(\Sigma_{0}\right)<bn$, then
\begin{align*}
\trace\left(T_{V}\right)\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{c_{2}b^{2}n}\sum_{i}\min\left\{1,\frac{\lambda_{0,i}^{2}}{\lambda_{0,k+1}^{2}}\right\}\ge\frac{\mu_{n}\left(\Upsilon_{n}\right)\left(k+1\right)}{c_{2}b^{2}n}
\end{align*}
because $\lambda_{0,i}\ge \lambda_{0,k+1}$ for $i\le k+1$.
If $r_{k}\left(\Sigma_{0}\right)\ge bn$, then
\begin{align*}
\trace\left(T_{V}\right)\ge \frac{\mu_{n}\left(\Upsilon_{n}\right)}{c_{2}b^{2}}\sum_{i}\min\left\{\frac{1}{n},\frac{b^{2}n\lambda_{0,i}^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}\right\}=\frac{\mu_{n}\left(\Upsilon_{n}\right)}{c_{2}b^{2}}\min_{l\le k} \left(\frac{l}{n}+\frac{b^{2}n\sum_{i>l}\lambda_{0,i}^{2}}{\left(\lambda_{0,k+1}r_{k}\left(\Sigma_{0}\right)\right)^{2}}\right)
\end{align*}
since $\left\{\lambda_{0,i}\right\}$ is non-increasing, and the minimizer $l$ is then restricted to $l\le k$.
\end{proof}
\begin{lemma}[Lemma 11 of \citealp{bartlett2020benign}]
For any $b\ge 1$ and $k^{\ast}:=\min\left\{k:r_{k}\left(\Sigma_{0}\right)\ge bn\right\}$, if $k^{\ast}<\infty$, then
\begin{align*}
\min_{l\le k^{\ast}} \left(\frac{l}{n}+\frac{b^{2}n\sum_{i>l}\lambda_{0,i}^{2}}{\left(\lambda_{0,k^{\ast}+1}r_{k^{\ast}}\left(\Sigma_{0}\right)\right)^{2}}\right)= \frac{k^{\ast}}{n}+\frac{b^{2}n\sum_{i>k^{\ast}}\lambda_{0,i}^{2}}{\left(\lambda_{0,k^{\ast}+1}r_{k^{\ast}}\left(\Sigma_{0}\right)\right)^{2}}=\frac{k^{\ast}}{n}+\frac{b^{2}n}{R_{k^{\ast}}\left(\Sigma_{0}\right)}.
\end{align*}
\end{lemma}
\section{Moment Bound for Hilbert-valued Gaussian Processes under Dependence} \label{sec:appendix_moment_inequality}
We develop a moment inequality of empirical covariance operators with (possibly) infinite-dimensional dependent processes.
This inequality will be used to bound the variance term $T_{V}$ in the proof of Proposition \ref{varBLLT20LS18}.
We extend Theorem 2.2 of \cite{HL2020Moment}, a moment bound for sample autocovariance matrices of finite-dimensional centered stationary Gaussian processes whose dependence decays sufficiently fast:
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]\le c\left\|\Sigma_{0}\right\|\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{r_{0}\left(\Sigma_{0}\right)}{n}\right),
\end{align*}
where
\begin{align*}
\hat{\Sigma}_{0}u:=\frac{1}{n}\sum_{t=1}^{n}\left(x_{t}^{\top}u\right)x_{t},\ \Sigma_{0}u=\Ep\left[\left(x_{1}^{\top}u\right)x_{1}\right]
\end{align*}
for all $u\in\Hilbert$.
Note that the bound is independent of the dimension of the state space, and hence we can expect that it is possible to generalize the result to infinite-dimensional separable Hilbert spaces.
If this result holds even in infinite-dimensional settings, then we can obtain a simple concentration inequality by Markov's inequality: for all $\delta>0$,
\begin{align*}
P\left(\|\hat{\Sigma}_{0}-\Sigma_{0}\|\ge \delta\right)\le \frac{1}{\delta}\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]\le \frac{1}{\delta}c\left\|\Sigma_{0}\right\|\left(\sqrt{\frac{r_{0}\left(\Sigma_{0}\right)}{n}}+\frac{r_{0}\left(\Sigma_{0}\right)}{n}\right).
\end{align*}
The following statement is a complete version of Proposition \ref{prop:moment_ineq_display} in the main body, which is an extension of Proposition 4.6 of \citealp{HL2020Moment}.
We recall the integrated covariance operator $\bar{\Sigma}_{n}$ such that for all $u\in\Hilbert$,
\begin{align*}
\bar{\Sigma}_{n}u=\Sigma_{0}u+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}u
\end{align*}
where
\begin{align*}
\tilde{\Sigma}_{h}u=\sum_{i=1}^{\infty}\left|\lambda_{h,i}\right|\left(e_{i}^{\top}u\right)e_{i}.
\end{align*}
\begin{proposition}\label{varHL20P46}
Assume $\Hilbert$ is a separable Hilbert space and $\left\{x_{t}\right\}$ is an $\Hilbert$-valued centered stationary Gaussian process whose cross-covariance operator $\Sigma_{h}$, defined for all $u\in\Hilbert$ as
\begin{align*}
\Sigma_{h}u:=\Ep\left[\left(x_{t+h}^{\top}u\right)x_{t}\right],
\end{align*}
is of trace class for all $h\in\Z$.
We have
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]&\le \frac{2\sqrt{2}}{n}\left(\sqrt{2}\trace\left(\bar{\Sigma}_n\right)+\sqrt{2n\left\|\Sigma_{0}\right\|\trace\left(\bar{\Sigma}_n\right)}+\sqrt{n\trace\left(\Sigma_{0}\right)\left\|\bar{\Sigma}_{n}\right\|}\right),
\end{align*}
where for all $u\in\Hilbert$,
\begin{align*}
\bar{\Sigma}_{n}u:=\Sigma_{0}u+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}u,\ \tilde{\Sigma}_{h}u:=\frac{1}{2}\sum_{i}\sigma_{i}\left(\Sigma_{h}\right)\left(\left(f_{h,i}^{\top}u\right)f_{h,i}+\left(g_{h,i}^{\top}u\right)g_{h,i}\right),
\end{align*}
$\sigma_{i}\left(\Sigma_{h}\right)$ are the singular values of $\Sigma_{h}$, and $\left\{f_{h,i}\right\}$ and $\left\{g_{h,i}\right\}$ are the corresponding left and right singular vectors of $\Sigma_{h}$, respectively.
\end{proposition}
\begin{remark}
Let us give some remarks on $\tilde{\Sigma}_{h}$.
The operator $\tilde{\Sigma}_{h}$ has the property such that for all $u\in\Hilbert$,
\begin{align*}
u^{\top}\left(\Sigma_{h}+\Sigma_{h}^{\top}\right)u=\sum_{i}2\sigma_{i}\left(\Sigma_{h}\right)\left(f_{h,i}^{\top}u\right)\left(g_{h,i}^{\top}u\right)\le \sum_{i}\sigma_{i}\left(\Sigma_{h}\right)\left(\left(f_{h,i}^{\top}u\right)^{2}+\left(g_{h,i}^{\top}u\right)^{2}\right)=2u^{\top}\tilde{\Sigma}_{h}u.
\end{align*}
When $\Sigma_{h}$ is self-adjoint, we can set, for all $u\in\Hilbert$,
\begin{align*}
\tilde{\Sigma}_{h}u=\sum_{i}\left|\mu_{i}\left(\Sigma_{h}\right)\right|\left(e_{h,i}^{\top}u\right)e_{h,i},
\end{align*}
where $\mu_{i}\left(\Sigma_{h}\right)$ and $e_{h,i}$ are the $i$-th eigenvalue and the corresponding eigenvector of $\Sigma_{h}$.
\end{remark}
The following lemma is an extension of Lemma 4.7 of \citealp{HL2020Moment}.
\begin{lemma}\label{varHL20L47}
Under the assumptions same as Proposition \ref{varHL20P46}, we have
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]
\le \frac{2\sqrt{2}}{n}\Ep\left[\left\|X\right\|_{\mathrm{HS}}\right]\left(\sqrt{\trace\left( \bar{\Sigma}_n\right)}+\sqrt{\left\| \bar{\Sigma}_n\right\|}\right).
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varHL20L47}]
Let us consider a decoupling seen in Lemma 5.2 of \citet{vanHandel2017Structured}.
Let $\left\{\tilde{x}_{t}\right\}$ denote an independent copy of $\left\{x_{t}\right\}$; then
\begin{align*}
\Ep\left[\|\hat{\Sigma}_{0}-\Sigma_{0}\|\right]&=\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}u^{\top}\left(\hat{\Sigma}_{0}-\Sigma_{0}\right)v\right]\\
&=\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\Ep\left[\left.\frac{1}{n}\sum_{t=1}^{n}u^{\top}\left(x_{t}x_{t}^{\top}-\tilde{x}_{t}\tilde{x}_{t}^{\top}\right)v\right|\left\{x_{t}\right\}\right]\right]\\
&=\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\Ep\left[\left.\frac{1}{n}\sum_{t=1}^{n}\left(\left(x_{t}^{\top}u\right)\left(x_{t}^{\top}v\right) -\left(\tilde{x}_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right)\right|\left\{x_{t}\right\}\right]\right]\\
&=\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\Ep\left[\left.\frac{1}{n}\sum_{t=1}^{n}\left(\left(x_{t}+\tilde{x}_{t}\right)^{\top}u\right)\left(\left( x_{t}-\tilde{x}_{t}\right)^{\top}v\right)\right|\left\{x_{t}\right\}\right]\right]\\
&\le\Ep\left[\Ep\left[\left.\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\frac{1}{n}\sum_{t=1}^{n}\left(\left(x_{t}+\tilde{x}_{t}\right)^{\top}u\right)\left(\left(x_{t}-\tilde{x}_{t}\right)^{\top}v\right)\right|\left\{x_{t}\right\}\right]\right]\\
&=\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\frac{1}{n}\sum_{t=1}^{n}\left(\left(x_{t}+\tilde{x}_{t}\right)^{\top}u\right)\left(\left(x_{t}-\tilde{x}_{t}\right)^{\top}v\right)\right]\\
&\le 2\Ep\left[\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}\frac{1}{n}\sum_{t=1}^{n}\left(x_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right]
\end{align*}
for $u,v\in\Hilbert$, because
\begin{align*}
\Ep\left[\left.\left(\tilde{x}_{t}^{\top}u\right)\left(x_{t}^{\top}v\right)-\left(x_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right|\left\{x_{t}\right\}\right]=\Ep\left[\left.\left(\tilde{x}_{t}^{\top}u\right)\right|\left\{x_{t}\right\}\right]\left(x_{t}^{\top}v\right)-\Ep\left[\left.\left(\tilde{x}_{t}^{\top}v\right)\right|\left\{x_{t}\right\}\right]\left(x_{t}^{\top}u\right)=0
\end{align*}
and thus
\begin{align*}
\Ep\left[\left.\left(\left(x_{t}+\tilde{x}_{t}\right)^{\top}u\right)\left(\left(x_{t}-\tilde{x}_{t}\right)^{\top}v\right)\right|\left\{x_{t}\right\}\right]&=\Ep\left[\left.\left(x_{t}^{\top}u\right)\left(x_{t}^{\top}v\right)-\left(\tilde{x}_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right|\left\{x_{t}\right\}\right]\\
&\quad+\Ep\left[\left.\left(\tilde{x}_{t}^{\top}u\right)\left(x_{t}^{\top}v\right)-\left(x_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right|\left\{x_{t}\right\}\right]\\
&=\Ep\left[\left.\left(x_{t}^{\top}u\right)\left(x_{t}^{\top}v\right)-\left(\tilde{x}_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)\right|\left\{x_{t}\right\}\right].
\end{align*}
Note that $\left(\left(\left(x_{t}+\tilde{x}_{t}\right)^{\top}u\right), \left(\left(x_{t}-\tilde{x}_{t}\right)^{\top}v\right)\right)$ has the same law as $\sqrt{2}\left(\left(x_{t}^{\top}u\right), \left(\tilde{x}_{t}^{\top}v\right)\right)$ because $\left\{\tilde{x}_{t}\right\}$ is an independent copy of $\left\{x_{t}\right\}$, which is a sequence of centered Gaussian random variables.
Let us define the following random process
for $u,v\in\Hilbert$ with $\left\|u\right\|\le 1$ and $\left\|v\right\|\le 1$ as follows:
\begin{align*}
W_{u,v}:=\sum_{t=1}^{n}\left(x_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right).
\end{align*}
Then
\begin{align*}
\left(W_{u,v}-W_{u^{\prime},v^{\prime}}\right)^{2}&=\left(\sum_{t=1}^{n}\left(x_{t}^{\top}u\right)\left(\tilde{x}_{t}^{\top}v\right)-\sum_{t=1}^{n}\left(x_{t}^{\top}u^{\prime}\right)\left(\tilde{x}_{t}^{\top}v^{\prime}\right)\right)^{2}\\
&= \left(\sum_{t=1}^{n}\left(x_{t}^{\top}\left(u-u^{\prime}\right)\right)\left(\tilde{x}_{t}^{\top}v\right)+\sum_{t=1}^{n}\left(x_{t}^{\top}u^{\prime}\right)\left(\tilde{x}_{t}^{\top}\left(v-v^{\prime}\right)\right)\right)^{2}\\
&\le 2\left(\sum_{t=1}^{n}\left(x_{t}^{\top}\left(u-u^{\prime}\right)\right)\left(\tilde{x}_{t}^{\top}v\right)\right)^{2}+2\left(\sum_{t=1}^{n}\left(x_{t}^{\top}u^{\prime}\right)\left(\tilde{x}_{t}^{\top}\left(v-v^{\prime}\right)\right)\right)^{2}\\
&=2\sum_{h=0}^{n-1}\sum_{\substack{t_{1},t_{2}=1,\ldots,n\\\left|t_{1}-t_{2}\right|=h}}\left(x_{t_{1}}^{\top}\left(u-u^{\prime}\right)\right)\left(x_{t_{2}}^{\top}\left(u-u^{\prime}\right)\right)\left(\tilde{x}_{t_{1}}^{\top}v\right)\left(\tilde{x}_{t_{2}}^{\top}v\right)\\
&\quad+2\sum_{h=0}^{n-1}\sum_{\substack{t_{1},t_{2}=1,\ldots,n\\\left|t_{1}-t_{2}\right|=h}}\left(x_{t_{1}}^{\top}u^{\prime}\right)\left(x_{t_{2}}^{\top}u^{\prime}\right)\left(\tilde{x}_{t_{1}}^{\top}\left(v-v^{\prime}\right)\right)\left(\tilde{x}_{t_{2}}^{\top}\left(v-v^{\prime}\right)\right).
\end{align*}
By the Cauchy--Schwarz inequality,
\begin{align*}
\Ep\left[\left(W_{u,v}-W_{u^{\prime},v^{\prime}}\right)^{2}|\left\{\tilde{x}_{t}\right\}\right]
&\le 2\left(u-u^{\prime}\right)^{\top}\Sigma_{0}\left(u-u^{\prime}\right)\sum_{t=1}^{n}\left(\tilde{x}_{t}^{\top}v\right)^{2}\\
&\quad+2\sum_{h=1}^{n-1}\left(u-u^{\prime}\right)^{\top}\left(\Sigma_{h}+\Sigma_{h}^{\top}\right)\left(u-u^{\prime}\right)\sum_{\substack{t_{1},t_{2}=1,\ldots,n\\t_{1}-t_{2}=h}}\left(\tilde{x}_{t_{1}}^{\top}v\right)\left(\tilde{x}_{t_{2}}^{\top}v\right)\\
&\quad+2\left(u^{\prime}\right)^{\top} \Sigma_{0}u^{\prime}\sum_{t=1}^{n} \left(\tilde{x}_{t}^{\top}\left(v-v^{\prime}\right)\right)^{2}\\
&\quad+2\sum_{h=1}^{n-1}\left(u^{\prime}\right)^{\top}\left(\Sigma_{h}+\Sigma_{h}^{\top}\right)u^{\prime} \sum_{\substack{t_{1},t_{2}=1,\ldots,n\\t_{1}-t_{2}=h}}\left(\tilde{x}_{t_{1}}^{\top}\left(v-v^{\prime}\right)\right)\left(\tilde{x}_{t_{2}}^{\top}\left(v-v^{\prime}\right)\right)\\
&\le 2\left(u-u^{\prime}\right)^{\top}\left( \bar{\Sigma}_n\right)\left(u-u^{\prime}\right)\sum_{t=1}^{n}\left\|\tilde{x}_{t}\right\|^{2}\\
&\quad+2\left\| \bar{\Sigma}_n\right\|\sum_{t=1}^{n}\left(\tilde{x}_{t}^{\top}\left(v-v^{\prime}\right)\right)^{2}.
\end{align*}
We define the following Gaussian process on $u,v\in\Hilbert$ with $\left\|u\right\|\le 1,\left\|v\right\|\le 1$:
\begin{align*}
Y_{u,v}:=\sqrt{2}\sqrt{\sum_{t=1}^{n}\left\|\tilde{x}_{t}\right\|^{2}}\left(g^{\top}u\right) +\sqrt{2}\sqrt{\left\| \bar{\Sigma}_n\right\|}\sum_{t=1}^{n}\left(\tilde{x}_{t}^{\top}v\right) g_{t}^{\prime}
\end{align*}
where $g$ is an $\Hilbert$-valued centered Gaussian variable, independent of the other random variables, with covariance operator $ \bar{\Sigma}_n$,
and $\left\{g_{t}^{\prime}:t=1,\ldots,n\right\}$ is an i.i.d.\ sequence of $\R$-valued standard Gaussian random variables independent of the other random variables.
Obviously
\begin{align*}
\Ep\left[\left(W_{u,v}-W_{u^{\prime},v^{\prime}}\right)^{2}|\left\{\tilde{x}_{t}\right\}\right]\le \Ep\left[\left(Y_{u,v}-Y_{u^{\prime},v^{\prime}}\right)^{2}|\left\{\tilde{x}_{t}\right\}\right].
\end{align*}
By the Sudakov--Fernique inequality \citep{AT2007Random},
\begin{align*}
&\Ep\left[\left.\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}W_{u,v}\right|\left\{\tilde{x}_{t}\right\}\right]\\
&\le \Ep\left[\left.\sup_{u,v:\left\|u\right\|\le1,\left\|v\right\|\le1}Y_{u,v}\right|\left\{\tilde{x}_{t}\right\}\right]\\
&\le \sqrt{2}\sqrt{\sum_{t=1}^{n}\left\|\tilde{x}_{t}\right\|^{2}}\Ep\left[\left\|g\right\|\right]+\sqrt{2}\sqrt{\left\|\bar{\Sigma}_n\right\|}\Ep\left[\left\|\sum_{t=1}^{n}\tilde{x}_{t}g_{t}^{\prime}\right\||\left\{\tilde{x}_{t}\right\}\right]\\
&\le \sqrt{2}\sqrt{\sum_{t=1}^{n}\left\|\tilde{x}_{t}\right\|^{2}}\sqrt{\trace\left(\bar{\Sigma}_n\right)}+\sqrt{2}\sqrt{\left\|\bar{\Sigma}_n\right\|}\sqrt{\sum_{t=1}^{n}\left\|\tilde{x}_{t}\right\|^{2}},
\end{align*}
and by taking the expectation with respect to $\left\{\tilde{x}_{t}\right\}$,
\begin{align*}
\Ep\left[\left\|\hat{\Sigma}_{0}-\Sigma_{0}\right\|\right]
\le \frac{2\sqrt{2}}{n}\Ep\left[\left\|X\right\|_{\mathrm{HS}}\right]\left(\sqrt{\trace\left(\bar{\Sigma}_n\right)}+\sqrt{\left\|\bar{\Sigma}_n\right\|}\right).
\end{align*}
We now see that the statement holds.
\end{proof}
The following lemma is a variation of Lemma 4.8 of \citealp{HL2020Moment}.
\begin{lemma}\label{varHL20L48}
Under the same assumptions as in Proposition \ref{varHL20P46}, we have
\begin{align*}
\Ep\left[\left\|X\right\|_{\mathrm{HS}}\right]\le \sqrt{2\trace\left(\bar{\Sigma}_n\right)}+\sqrt{2n\left\|\Sigma_{0}\right\|}.
\end{align*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{varHL20L48}]
Let us define the following Gaussian process such that for $u\in\Hilbert$ with $\left\|u\right\|\le 1$ and $v=\left[v^{\left(1\right)},\ldots,v^{\left(n\right)}\right]\in\R^{n}$ with $\left\|v\right\|\le 1$,
\begin{align*}
\tilde{W}_{u,v}:=\sum_{t=1}^{n}\left(x_{t}^{\top}u\right) v^{\left(t\right)}.
\end{align*}
Then
\begin{align*}
\Ep\left[\left(\tilde{W}_{u,v}-\tilde{W}_{u^{\prime},v^{\prime}}\right)^{2}\right]
&= \Ep\left[\left(\tilde{W}_{u,v}-\tilde{W}_{u^{\prime},v}+\tilde{W}_{u^{\prime},v}-\tilde{W}_{u^{\prime},v^{\prime}}\right)^{2}\right]\\
&\le 2\Ep\left[\left(\tilde{W}_{u,v}-\tilde{W}_{u^{\prime},v}\right)^{2}\right]+2\Ep\left[\left(\tilde{W}_{u^{\prime},v}-\tilde{W}_{u^{\prime},v^{\prime}}\right)^{2}\right]\\
&\le 2\Ep\left[\left(\sum_{t=1}^{n}\left(x_{t}^{\top}\left(u-u^{\prime}\right)\right)v^{\left(t\right)}\right)^{2}\right]+2\Ep\left[\left(\sum_{t=1}^{n}\left(x_{t}^{\top}u^{\prime}\right)\left(v-v^{\prime}\right)^{\left(t\right)}\right)^{2}\right]\\
&\le 2\sum_{t_{1}=1}^{n}\sum_{t_{2}=1}^{n}\left(u-u^{\prime}\right)^{\top} \Sigma_{t_{1}-t_{2}}\left(u-u^{\prime}\right)v^{\left(t_{1}\right)}v^{\left(t_{2}\right)}\\
&\quad+2\sum_{t_{1}=1}^{n}\sum_{t_{2}=1}^{n}\left(\left(u^{\prime}\right)^{\top} \Sigma_{t_{1}-t_{2}}u^{\prime}\right)\left(v-v^{\prime}\right)^{\left(t_{1}\right)}\left(v-v^{\prime}\right)^{\left(t_{2}\right)}.
\end{align*}
As in \citet{HL2020Moment}, we define the $n\times n$-matrix
\begin{align*}
\Sigma_{L,u}&:=\left[\begin{matrix}
u^{\top}\Sigma_{0}u & u^{\top}\Sigma_{1}u & \cdots & u^{\top}\Sigma_{n-1}u\\
u^{\top}\Sigma_{1}^{\top}u & u^{\top}\Sigma_{0}u & \cdots & u^{\top}\Sigma_{n-2}u\\
\vdots & \vdots & \ddots & \vdots \\
u^{\top}\Sigma_{n-1}^{\top}u & u^{\top}\Sigma_{n-2}^{\top}u & \cdots & u^{\top}\Sigma_{0}u
\end{matrix}\right]\\
\Sigma^{\circ}&:=\left\|\Sigma_{0}\right\|\mathbf{1}_{n}\mathbf{1}_{n}^{\top},
\end{align*}
where $\mathbf{1}_{n}\in\R^{n}$ is the vector all of whose elements are $1$.
Because for all $u\in\Hilbert$ and $v\in\R^{n}$ such that $\left\|u\right\|\le 1$ and $\left\|v\right\|\le 1$,
\begin{align*}
v^{\top}\Sigma_{L,u}v = \sum_{t_{1}=1}^{n}\sum_{t_{2}=1}^{n}\left(u^{\top}\Sigma_{t_{1}-t_{2}}u\right)v^{\left(t_{1}\right)}v^{\left(t_{2}\right)}\le \sum_{t_{1}=1}^{n}\sum_{t_{2}=1}^{n}\left\|\Sigma_{0}\right\| v^{\left(t_{1}\right)}v^{\left(t_{2}\right)}=\left\|\Sigma_{0}\right\|\left(\sum_{t=1}^{n}v^{\left(t\right)}\right)^{2}=v^{\top}\Sigma^{\circ}v,
\end{align*}
we obtain that for all $u\in\Hilbert$ such that $\left\|u\right\|\le 1$,
\begin{align*}
\Sigma_{L,u}\le \Sigma^{\circ}
\end{align*}
in the meaning of the Loewner partial order.
Hence, as in Lemma \ref{varHL20L47},
\begin{align*}
\Ep\left[\left(\tilde{W}_{u,v}-\tilde{W}_{u^{\prime},v^{\prime}}\right)^{2}\right]&\le 2\left(u-u^{\prime}\right)^{\top} \left(\bar{\Sigma}_n\right)\left(u-u^{\prime}\right)\\
&\quad+2\left\|\Sigma_{0}\right\|\left(v-v^{\prime}\right)^{\top}\mathbf{1}_{n}\mathbf{1}_{n}^{\top}\left(v-v^{\prime}\right).
\end{align*}
Let us define a Gaussian process such that
\begin{align*}
\tilde{Y}_{u,v}:=\sqrt{2}\left(g^{\top}u\right)+\sqrt{2}\left\|\Sigma_{0}\right\|^{1/2}\left(\left( g^{\prime}\right)^{\top}v\right),
\end{align*}
where $g$ is an $\Hilbert$-valued centered Gaussian random variable, independent of the other random variables, with covariance operator $\bar{\Sigma}_n$, and $g'$ is an $\R^{n}$-valued centered Gaussian random variable, independent of the other random variables, with the degenerate covariance matrix $\mathbf{1}_{n}\mathbf{1}_{n}^{\top}$.
Then we obtain
\begin{align*}
\Ep\left[\left(\tilde{W}_{u,v}-\tilde{W}_{u^{\prime},v^{\prime}}\right)^{2}\right]\le \Ep\left[\left(\tilde{Y}_{u,v}-\tilde{Y}_{u^{\prime},v^{\prime}}\right)^{2}\right].
\end{align*}
The Sudakov--Fernique inequality verifies that
\begin{align*}
\Ep\left[\sup_{u,v:\left\|u\right\|\le1, \left\|v\right\|\le 1}\tilde{W}_{u,v}\right]&\le \Ep\left[\sup_{u,v:\left\|u\right\|\le1, \left\|v\right\|\le 1}\tilde{Y}_{u,v}\right]\\
&\le \sqrt{2\trace\left(\bar{\Sigma}_n\right)}+\sqrt{2}\left\|\Sigma_{0}\right\|^{1/2}\sqrt{n}.
\end{align*}
Here we obtain the statement.
\end{proof}
\begin{proof}[Proof of Proposition \ref{varHL20P46}]
It follows immediately from Lemmas \ref{varHL20L47} and \ref{varHL20L48} and the fact that \begin{align*}
\Ep\left[\left\|X\right\|_{\mathrm{HS}}\right]\le \Ep\left[\left\|X\right\|_{\mathrm{HS}}^{2}\right]^{1/2}=\sqrt{\sum_{t=1}^{n}\Ep\left[\left\|x_{t}\right\|^{2}\right]}=\sqrt{n\trace\left(\Sigma_{0}\right)}
\end{align*}
by the Cauchy--Schwarz inequality.
\end{proof}
\section{Upper Bound on the Bias Term $T_{B}$} \label{sec:appendix_bias}
We develop the bound for the bias term by using the moment inequality derived in Section \ref{sec:appendix_moment_inequality}.
The next proposition corresponds to Lemma S.18 of \citet{bartlett2020benign}.
\begin{proposition}\label{varBLLT20LS18}
There exists a universal constant $c>0$ such that for any $\delta>0$, with probability at least $1-\delta$,
\begin{align*}
\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}\le c\delta^{-1}\left\|\beta^{\ast}\right\|^{2}\left(\left\|\bar{\Sigma}_{n}\right\|r_{0}\left(\bar{\Sigma}_{n}\right)+\sqrt{\frac{\left\|\Sigma_{0}\right\|\left\|\bar{\Sigma}_{n}\right\|}{n}}\left(\sqrt{r_{0}\left(\Sigma_{0}\right)}+\sqrt{r_{0}\left(\bar{\Sigma}_{n}\right)}\right)\right).
\end{align*}
\end{proposition}
\begin{proof}[Proof of Proposition \ref{varBLLT20LS18}]
We follow the discussion of Lemma S.18 of \citet{bartlett2020benign}.
We see
\begin{align*}
\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)X^{\top}=X^{\top}-X^{\top}\left(XX^{\top}\right)^{-1}XX^{\top}=0
\end{align*}
and for any $v$ in the orthogonal complement to the span of the columns of $X^{\top}$,
\begin{align*}
\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)v=v.
\end{align*}
Hence
\begin{align*}
\left\|I-X^{\top}\left(XX^{\top}\right)^{-1}X\right\|\le 1.
\end{align*}
Combining it with \eqref{eq:betahat}, we see
\begin{align*}
\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}&=\left(\beta^{\ast}\right)^{\top}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\Sigma_{0}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}\\
&=\left(\beta^{\ast}\right)^{\top}\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\left(\Sigma_{0}-\frac{1}{n}X^{\top}X\right)\left(I-X^{\top}\left(XX^{\top}\right)^{-1}X\right)\beta^{\ast}\\
&\le \left\|\Sigma_{0}-\frac{1}{n}X^{\top}X\right\|\left\|\beta^{\ast}\right\|^{2}.
\end{align*}
Proposition \ref{varHL20P46} and Markov's inequality yield that for all $\delta>0$, with probability at least $1-\delta$,
\begin{align*}
\left\|\Sigma_{0}-\frac{1}{n}X^{\top}X\right\|\le \delta^{-1}\frac{2\sqrt{2}}{n}\left(\sqrt{2}\trace\left(\bar{\Sigma}_n\right)+\sqrt{2n\left\|\Sigma_{0}\right\|\trace\left(\bar{\Sigma}_n\right)}+\sqrt{n\trace\left(\Sigma_{0}\right)\left\|\bar{\Sigma}_{n}\right\|}\right),
\end{align*}
and on the same event,
\begin{align*}
\left(\beta^{\ast}\right)^{\top}T_{B}\beta^{\ast}\le \delta^{-1}\left\|\beta^{\ast}\right\|^{2}\frac{2\sqrt{2}}{n}\left(\sqrt{2}\trace\left(\bar{\Sigma}_n\right)+\sqrt{2n\left\|\Sigma_{0}\right\|\trace\left(\bar{\Sigma}_n\right)}+\sqrt{n\trace\left(\Sigma_{0}\right)\left\|\bar{\Sigma}_{n}\right\|}\right).
\end{align*}
Therefore the statement holds.
\end{proof}
\section{Proof for Convergence Rate Analysis}
\begin{proof}[Proof of Proposition \ref{prop:benign_rate}]
Without loss of generality, we set $\lambda_{0,1} = 1$.
By Theorem \ref{thm:homo}, it is sufficient to study the term $n^{-1/2}\sqrt{\|\bar{\Sigma}_n\| r_0 (\Sigma_0)}$.
We obtain
\begin{align*}
\sqrt{\|\bar{\Sigma}_n\| r_0 (\Sigma_0)}& = \left( 2 \sum_{h=0}^n \lambda_{h,1} \right)^{1/2}\zeta_n^{1/2} \leq \left( 2 + 2 c \sum_{h=1}^n h^{-\alpha} \right)^{1/2} \zeta_n^{1/2}.
\end{align*}
For $\alpha > 1/2$, we have
\begin{align*}
1 + c \sum_{h=1}^n h^{-\alpha} =
\begin{cases}
O(n^{1-\alpha}) & (\alpha \in (1/2,1))\\
O(\log n) & (\alpha = 1)\\
O(1) & (\alpha > 1).
\end{cases}
\end{align*}
Hence, we obtain
\begin{align*}
\frac{ \sqrt{\|\bar{\Sigma}_n\| r_0 ({\Sigma}_0)}}{\sqrt{n}}=
\begin{cases}
O(\zeta_n^{1/2} n^{-\alpha/2}) & (\alpha \in (1/2,1))\\
O(\zeta_n^{1/2} n^{-1/2}\log^{1/2} n) & (\alpha = 1)\\
O( \zeta_n^{1/2} n^{-1/2}) & (\alpha > 1).
\end{cases}
\end{align*}
Then, we obtain the statement.
\end{proof}
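The three growth regimes used in the proof above can be checked numerically. The following sketch (plain Python, with the illustrative choice $c=1$ and truncated sums) compares the partial sums against their claimed rates; it is a sanity check, not part of the proof.

```python
import math

def partial_sum(alpha, n):
    # 1 + c * sum_{h=1}^{n} h^{-alpha}, with c = 1 for simplicity
    return 1.0 + sum(h ** (-alpha) for h in range(1, n + 1))

# alpha in (1/2, 1): growth like n^{1 - alpha}; the ratio should stabilize
r1 = partial_sum(0.75, 10**5) / (10**5) ** 0.25
r2 = partial_sum(0.75, 10**6) / (10**6) ** 0.25
# alpha = 1: growth like log n
l1 = partial_sum(1.0, 10**5) / math.log(10**5)
l2 = partial_sum(1.0, 10**6) / math.log(10**6)
# alpha > 1: bounded
b1 = partial_sum(2.0, 10**5)
b2 = partial_sum(2.0, 10**6)
```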
\section{Proofs for miscellaneous lemmas}
We give proofs for lemmas in Section \ref{sec:setting} and Section \ref{sec:result}.
\begin{proof}[Proof of Lemma \ref{LemmaAssumption01}]
Because the decay of the autocorrelation indicates mixing under Gaussianity \citep[e.g., see][]{Ito1944Ergodicity}, we only need to show
\begin{align*}
\Ep\left[\left(x_{t}^{\top}u\right) \left(x_{t+h}^{\top}u\right)\right]\to0.
\end{align*}
For all $u\in\Hilbert$ and $\epsilon>0$, there exist $M_{1}\left(\epsilon,u\right),M_{2}\left(\epsilon,u,M_{1}\right)\in\N$ such that $ \sum_{i=M_{1}+1}^{\infty}\lambda_{0,i}<\epsilon/\left(2\left\|u\right\|^{2}\right)$ and for all $h\ge M_{2}$, $|\sum_{i=1}^{M_{1}}\lambda_{h,i}|<\epsilon/\left(2\left\|u\right\|^{2}\right)$; then
\begin{align*}
\left|\Ep\left[\left(x_{t}^{\top}u\right) \left(x_{t+h}^{\top}u\right)\right]\right|&=\left|\sum_{i=1}^{\infty}\lambda_{h,i}\left(u^{\top}e_{i}\right) ^{2}\right|\le \left\|u\right\|^{2}\left(\left|\sum_{i=1}^{M_{1}}\lambda_{h,i}\right|+\sum_{i=M_{1}+1}^{\infty}\lambda_{0,i}\right)<\epsilon.
\end{align*}
Note that the first equality holds by the $L^{2}$-convergence of the infinite series representing $x_{t}^{\top}u$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:homo_efficient_rank}]
Since $\lambda_{0,i}\ge0$ for all $i$, we see that
\begin{align*}
\left\|\bar{\Sigma}_{n}\right\|&=\sup_{i\in\N}\left(\lambda_{0,i}+2\sum_{h=1}^{n-1}\left|\lambda_{h,i}\right|\right)=\sup_{i\in\N}\lambda_{0,i}\left(1+2\sum_{h=1}^{n-1}\left|\frac{\lambda_{h,i}}{\lambda_{0,i}}\right|\right)
=\lambda_{0,1}\left(1+2\sum_{h=1}^{n-1}\left|\frac{\lambda_{h,1}}{\lambda_{0,1}}\right|\right),
\end{align*}
and
\begin{align*}
\trace\left(\bar{\Sigma}_{n}\right)&=\sum_{i=1}^{\infty}\left(\lambda_{0,i}+2\sum_{h=1}^{n-1}\left|\lambda_{h,i}\right|\right)=\sum_{i=1}^{\infty}\lambda_{0,i}\left(1+2\sum_{h=1}^{n-1}\left|\frac{\lambda_{h,i}}{\lambda_{0,i}}\right|\right)
=\trace\left(\Sigma_{0}\right)\left(1+2\sum_{h=1}^{n-1}\left|\frac{\lambda_{h,1}}{\lambda_{0,1}}\right|\right).
\end{align*}
Then, by the definition of efficient ranks, we obtain the following:
\begin{align*}
r_{0}\left(\bar{\Sigma}_{n}\right)=\frac{\trace\left(\bar{\Sigma}_{n}\right)}{\left\|\bar{\Sigma}_{n}\right\|}=\frac{\trace\left(\Sigma_{0}\right)\left(1+2\sum_{h=1}^{n-1}\left|\lambda_{h,1}/\lambda_{0,1}\right|\right)}{\lambda_{0,1}\left(1+2\sum_{h=1}^{n-1}\left|\lambda_{h,1}/\lambda_{0,1}\right|\right)}=\frac{\trace\left(\Sigma_{0}\right)}{\lambda_{0,1}}=r_{0}\left(\Sigma_{0}\right).
\end{align*}
\end{proof}
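A toy numerical illustration of the lemma (assuming NumPy; the spectrum and autocorrelation ratios below are made up): when the spectrum of $\bar{\Sigma}_{n}$ is a common scalar multiple of that of $\Sigma_{0}$, as under the lemma's homogeneity condition, the effective ranks coincide.

```python
import numpy as np

# toy spectrum of Sigma_0: lambda_{0,i}, nonincreasing
eig0 = np.array([1.0, 0.5, 0.25, 0.125])
# under the lemma's homogeneity, lambda_{h,i}/lambda_{0,i} does not depend on i,
# so the spectrum of bar Sigma_n is a common multiple of that of Sigma_0
factor = 1.0 + 2.0 * np.abs([0.8, 0.5, 0.3]).sum()  # 1 + 2 sum_h |lambda_{h,1}/lambda_{0,1}|
eig_bar = factor * eig0

r0_Sigma0 = eig0.sum() / eig0.max()   # r_0(Sigma_0) = trace / operator norm
r0_bar = eig_bar.sum() / eig_bar.max()
```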
\section{Proofs for the Examples}
We give proofs for lemmas and propositions for the example of processes presented in Section \ref{sec:example}.
\begin{proof}[Proof for Lemma \ref{LemmaExample01}]
We first show (1).
Note that the infinite series in \eqref{form:x2} converge in $L^{2}$ and a.s.\ by the It\^{o}--Nisio theorem.
We see that for $u\in\Hilbert$ and $t,h\in\Z$, due to the $L^{2}$-convergence of \eqref{form:x2},
\begin{align*}
\Sigma_{h}u
&=\Ep\left[\left(x_{t+h}^{\top}u\right)x_{t}\right]\\
&=\lim_{M_{1}\to\infty}\lim_{M_{2}\to\infty}\Ep\left[\left(\sum_{j=-M_{1}}^{M_{1}}\sum_{k=1}^{M_{2}}\phi_{j,k}\sqrt{\vartheta_{k}}\left(e_{k}^{\top}u\right)w_{t+h-j,k}\right)\left(\sum_{j=-M_{1}}^{M_{1}}\sum_{k=1}^{M_{2}}\phi_{j,k}\sqrt{\vartheta_{k}}e_{k}w_{t-j,k}\right)\right]\\
&=\lim_{M_{1}\to\infty}\lim_{M_{2}\to\infty}\sum_{j=-M_{1}}^{M_{1}}\sum_{k=1}^{M_{2}}\phi_{j,k}\phi_{j+h,k}\vartheta_{k}\left(e_{k}^{\top}u\right)e_{k}\\
&=\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}\phi_{j+h,k}\vartheta_{k}\left(u^{\top}e_{k}\right) e_{k}.
\end{align*}
(2) follows immediately by the assumption $\sum_{j=-\infty}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}^{2}\vartheta_{k}<\infty$ and the Cauchy--Schwarz inequality.
\end{proof}
\begin{proof}[Proof of Lemma \ref{LemmaExample02}]
Firstly we see that (1) holds.
We can see that $Xe_{k}/\sqrt{\lambda_{0,k}}$ has the representation
\begin{align*}
Xe_{k}/\sqrt{\lambda_{0,k}}&=\left[\begin{matrix}
\left(x_{1}^{\top}e_{k}\right)/\sqrt{\lambda_{0,k}}\\
\vdots\\
\left(x_{n}^{\top}e_{k}\right)/\sqrt{\lambda_{0,k}}
\end{matrix}\right]=\frac{1}{\sqrt{\lambda_{0,k}}}\left[\begin{matrix}
\sum_{j=-\infty}^{\infty}\sum_{\ell=1}^{\infty}\phi_{j,\ell}\sqrt{\vartheta_{\ell}}\left( e_{\ell}^{\top}e_{k}\right)w_{1-j,\ell}\\
\vdots\\
\sum_{j=-\infty}^{\infty}\sum_{\ell=1}^{\infty}\phi_{j,\ell}\sqrt{\vartheta_{\ell}}\left( e_{\ell}^{\top}e_{k}\right)w_{n-j,\ell}
\end{matrix}\right]\\
&=\frac{1}{\sqrt{\lambda_{0,k}}}\left[\begin{matrix}
\sum_{j=-\infty}^{\infty}\phi_{j,k}\sqrt{\vartheta_{k}}w_{1-j,k}\\
\vdots\\
\sum_{j=-\infty}^{\infty}\phi_{j,k}\sqrt{\vartheta_{k}}w_{n-j,k}
\end{matrix}\right]=:\frac{1}{\sqrt{\lambda_{0,k}}}\left[\begin{matrix}
z_{1,k}\\
\vdots\\
z_{n,k}
\end{matrix}\right]=z_{k},
\end{align*}
where $z_{k}\sim N\left(\mathbf{0},\Xi_{k,n}\right)$ owing to the $L^{2}$-convergence of the infinite series and L\'{e}vy's equivalence theorem.
Let us check (2).
We see that for any $\epsilon>0$ there exists $M_{0}\in\N$ such that for all $M\ge M_{0}$
\begin{align*}
\max\left\{\sum_{j=M}^{\infty}\phi_{j,k}^{2},\sum_{j=-\infty}^{-M}\phi_{j,k}^{2}\right\}\le\frac{\epsilon^{2}}{9\sum_{j=-\infty}^{\infty}\phi_{j,k}^{2}};
\end{align*}
because $\sum_{j=-\infty}^{\infty}\phi_{j,k}^{2}<\infty$. Then, for all $m\in\N$ satisfying $m\ge 2M$ for a fixed $M\ge M_{0}$,
\begin{align*}
\left|\sum_{j=-\infty}^{\infty}\phi_{j,k}\phi_{j+m,k}\right|&
\le \left|\sum_{j=-M}^{M}\phi_{j,k}\phi_{j+m,k}\right|+\left|\sum_{j=-\infty}^{-M}\phi_{j,k}\phi_{j+m,k}\right|+\left|\sum_{j=M}^{\infty}\phi_{j,k}\phi_{j+m,k}\right|\\
&\le \left(\left(\sum_{j=-M+m}^{\infty}\phi_{j,k}^{2}\right)^{1/2}+\left(\sum_{j=M}^{\infty}\phi_{j,k}^{2}\right)^{1/2}+\left(\sum_{j=-\infty}^{-M}\phi_{j,k}^{2}\right)^{1/2}\right)\left(\sum_{j=-\infty}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\\
&\le \left(2\left(\sum_{j=M}^{\infty}\phi_{j,k}^{2}\right)^{1/2}+\left(\sum_{j=-\infty}^{-M}\phi_{j,k}^{2}\right)^{1/2}\right)\left(\sum_{j=-\infty}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\\
&\le \epsilon.
\end{align*}
Hence $\lambda_{h,k}\to0$, and equivalently $\Xi_{k,n}^{\left(t,t+h\right)}\to0$.
\end{proof}
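The decay $\sum_{j}\phi_{j,k}\phi_{j+m,k}\to0$ can also be observed numerically for a concrete square-summable sequence. The sketch below (plain Python, with the illustrative choice $\phi_{j}=1/(1+|j|)$, which is square-summable but not summable, and a finite truncation) is only a sanity check, not part of the proof.

```python
# truncated two-sided lag sums for the square-summable (but not summable)
# coefficients phi_j = 1/(1 + |j|); the lag sums should decay in m
J = 20000  # truncation of the two-sided sum

def phi(j):
    return 1.0 / (1.0 + abs(j))

def lag_sum(m):
    return sum(phi(j) * phi(j + m) for j in range(-J, J + 1))

vals = [lag_sum(m) for m in (0, 10, 100, 1000)]
```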
\begin{proof}[Proof of Proposition \ref{prop:ARMA}]
To see (i), we follow the proof of Theorem 3.1.1 of \citet{BD1991Time}.
$\sum_{j=0}^{\infty}\phi_{j}z^{j}=\left(1+\sum_{i}b_{i}z^{i}\right)/\left(1-\sum_{i}a_{i}z^{i}\right)$ is convergent for all $z$ with $\left|z\right|\le 1+\epsilon$ for some $\epsilon>0$ owing to the assumption $\left|1-\sum_{i=1}^{p}a_{i}z^{i}\right|>0$ for all $z$ with $\left|z\right|\le 1$.
By letting $z=1+\epsilon/2$, we see that $\sum_{j=0}^{\infty}\phi_{j}\left(1+\epsilon/2\right)^{j}<\infty$ and thus $\phi_{j}\left(1+\epsilon/2\right)^{j}\to0$ as $j\to\infty$.
It indicates that there exists $K>0$ such that $\left|\phi_{j}\right|\le K\left(1+\epsilon/2\right)^{-j}$ for all $j\in\N_{0}$.
Hence for all $h\in\N_{0}$,
\begin{align*}
\left|\sum_{j=0}^{\infty}\phi_{j}\phi_{j+h}\right|\le \sum_{j=0}^{\infty}K^{2}\left(1+\epsilon/2\right)^{-2j-h}=\frac{K^{2}}{1-\left(1+\epsilon/2\right)^{-2}}\left(1+\frac{\epsilon}{2}\right)^{-h},
\end{align*}
and $\lambda_{h,k}=\vartheta_{k}\sum_{j=0}^{\infty}\phi_{j}\phi_{j+h}$ converges to zero geometrically for all $k$.
(ii) is trivial from (i), and (iii) follows from the assumption on the characteristic polynomials and Proposition 4.5.3 of \citet{BD1991Time}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:armah}]
Let us begin with showing (i).
Because $\phi_{0,k}=1$ for all $k$ by the definition \eqref{def:armah}, the lower bound is obvious.
By considering white noises with the unit variance, (ARMAH1-i), Theorem 4.4.2 and Proposition 4.5.3 of \citet{BD1991Time} lead to
\begin{align*}
\epsilon^{4}\le \inf_{z\in\C:\left|z\right|=1}\frac{\left|1+\sum_{j=1}^{q}\varphi_{j,k}z^{j}\right|^{2}}{\left|1-\sum_{j=1}^{p}\rho_{j,k}z^{j}\right|^{2}}\le \sum_{j=0}^{\infty}\phi_{j,k}^{2}\le \sup_{z\in\C:\left|z\right|=1}\frac{\left|1+\sum_{j=1}^{q}\varphi_{j,k}z^{j}\right|^{2}}{\left|1-\sum_{j=1}^{p}\rho_{j,k}z^{j}\right|^{2}}\le \epsilon^{-4}
\end{align*}
because $\sum_{j=0}^{\infty}\phi_{j,k}^{2}$ equals each diagonal element of the corresponding autocovariance matrix.
(ii) follows from the same argument as (i) and the fact that $\sum_{j=0}^{\infty}\phi_{j,k}^{2}$ is the inverse variance of the corresponding noise terms in each coordinate process $\left\{\left(x_{t}^{\top}e_{k}\right)/\sqrt{\lambda_{0,k}}:t=1,\ldots,n\right\}$.
(iii) uses a discussion of near-epoch dependence.
It is sufficient to consider $h\ge0$; for all $h\ge 2\left(\max\left\{p,q\right\}+2\right)$,
\begin{align*}
\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|
&\le \left|\sum_{j=0}^{\left[h/2\right]}\phi_{j,k}\phi_{j+h,k}\right|+\left|\sum_{j=\left[h/2\right]+1}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|\\
&\le \left(\sum_{j=0}^{\left[h/2\right]}\phi_{j,k}^{2}\right)^{1/2}\left(\sum_{j=0}^{\left[h/2\right]}\phi_{j+h,k}^{2}\right)^{1/2}+\left(\sum_{j=\left[h/2\right]+1}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\left(\sum_{j=\left[h/2\right]+1}^{\infty}\phi_{j+h,k}^{2}\right)^{1/2}\\
&\le 2\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\left(\sum_{j=\left[h/2\right]}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\\
&\le 4\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)^{1/2}\left(1+\epsilon\right)^{-\left[h/2\right]+\max\left\{p,q\right\}+1}\\
&\le \left(4\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \right)\left(1+\epsilon\right)^{-h/2}
\end{align*}
because of (ARMAH1-ii) and the result (i), and a discussion of near-epoch dependence (\citealp[see (2.6) and Proposition 2.1 of][]{Davidson2002Establishing}, and \citealp[Propositions 1.1 and 10.1 of][]{Hamilton1994Time}).
Clearly for all $h=0,\ldots,2\left(\max\left\{p,q\right\}+2\right)$,
\begin{align*}
\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|\le \sum_{j=0}^{\infty}\phi_{j,k}^{2}\le \left(4\left(\sum_{j=0}^{\infty}\phi_{j,k}^{2}\right)\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \right)\left(1+\epsilon\right)^{-h/2}.
\end{align*}
Hence for all $h\ge0$,
\begin{align*}
\left|\frac{\lambda_{h,k}}{\lambda_{0,k}}\right|=\left|\frac{\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}}{\sum_{j=0}^{\infty}\phi_{j,k}^{2}}\right|\le \left(4\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \right)\left(1+\epsilon\right)^{-h/2}.
\end{align*}
(iv) is another consequence of (i).
(v) holds because
\begin{align*}
\trace\left(\Sigma_{0}+2\sum_{h=1}^{n-1}\tilde{\Sigma}_{h}\right)&=\sum_{j=0}^{\infty}\sum_{k=1}^{\infty}\phi_{j,k}^{2}\vartheta_{k}+2\sum_{h=1}^{n-1}\sum_{k=1}^{\infty}\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|\vartheta_{k}\\
&\le \left(\sup_{k}\sum_{j=0}^{\infty}\phi_{j,k}^{2}+2\sum_{h=1}^{n-1}\sup_{k}\left|\sum_{j=0}^{\infty}\phi_{j,k}\phi_{j+h,k}\right|\right)\trace\left(Q\right)\\
&\le \epsilon^{-4}\left(1+8\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \sum_{h=1}^{n-1}\left(1+\epsilon\right)^{-h/2}\right)\trace\left(Q\right)\\
&\le \epsilon^{-4}\left(1+8\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \frac{\left(1+\epsilon\right)^{-1/2}}{1-\left(1+\epsilon\right)^{-1/2}}\right)\trace\left(Q\right)\\
&\le \epsilon^{-4}\left(1+8\left(1+\epsilon\right)^{\max\left\{p,q\right\}+2} \frac{\left(1+\epsilon\right)^{-1/2}}{1-\left(1+\epsilon\right)^{-1/2}}\right)\trace\left(\Sigma_{0}\right)
\end{align*}
by (i) and (iii).
A similar argument holds for $\left\|\bar{\Sigma}_{n}\right\|$.
\end{proof}
\end{document}
\section{Proof of Theorem \protect\ref{th.online}}
To ease notation and to be consistent with the proof in \cite{BSS12}, we will
let $\beta=d/2$ and talk about choosing $T=\beta n$ vectors instead of $dn/2$
vectors.
Let $n$ be a power of $4$ and
let $m=n$. Suppose $H_n$ is the Hadamard matrix of size $n$, normalized so
$\|H_n\|=1$, and let $h_1,\ldots,h_n$ be its columns. During any execution of the
game, let $$A_\tau:=\sum_{t\le \tau} s_t \vt_{i(t)} (\vt_{i(t)})^T$$ denote
the matrix obtained after $\tau$ rounds, with $A_0=0$. Consider the following
adversary:
\begin{quote}
In round $\tau+1$ present the player with vectors $v^{(\tau+1)}_1:=Uh_1,\ldots,
v^{(\tau+1)}_n:=Uh_n$, where $U$ is an orthogonal matrix whose columns form an
eigenbasis of $A_\tau$.
\end{quote}
\noindent Note that the vectors $v^{(\tau+1)}_1,\ldots,v^{(\tau+1)}_n$ are
always isotropic since
$$\sum_{i=1}^n (Uh_i)(Uh_i)^T = UHH^TU^T = I.$$
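This isotropy identity is easy to verify numerically; the snippet below (assuming NumPy) builds a normalized Hadamard matrix by the Sylvester construction and uses a random orthogonal $U$ as a stand-in for the eigenbasis of $A_\tau$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
# Sylvester construction of a Hadamard matrix, normalized so ||H_n|| = 1
H = np.array([[1.0]])
base = np.array([[1.0, 1.0], [1.0, -1.0]])
while H.shape[0] < n:
    H = np.kron(H, base)
H /= np.sqrt(n)

# any orthogonal U (a stand-in for the eigenbasis of A_tau)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))

# sum_i (U h_i)(U h_i)^T should equal the identity
S = sum(np.outer(U @ H[:, i], U @ H[:, i]) for i in range(n))
err = np.abs(S - np.eye(n)).max()
```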
We will show that playing any strategy against this adversary must incur a condition
number of at least
$$\kappa_{d}-o_n(1)=
\frac{(\sqrt{\beta}+1)^2}{(\sqrt{\beta}-1)^2}=1+\frac{4}{\sqrt{\beta}}+O(1/\beta).$$
Let
$p_\tau(x):=\det(xI-A_\tau)=\prod_{j=1}^n(x-\lambda_j)$ denote the
characteristic polynomial of $A_\tau$. Observe that for any choice
$s=s_{\tau+1}$ and $v=Uh_i$ made by the player in round
$\tau+1$, we have:
\begin{align*}
p_{\tau+1}(x)&=\det(xI-A_\tau-svv^T)
\\&= \det(xI-A_\tau)\det(I-(xI-A)^{-1}(svv^T))
\\&=p_\tau(x)\left(1-s\sum_{j=1}^n \frac{\langle
v,u_j\rangle^2}{x-\lambda_j}\right)
\\&=p_\tau(x)\left(1-\frac{s}{n}\sum_{j=1}^n\frac1{x-\lambda_j}\right),
\\&\qquad\textrm{since $\langle Uh_i,u_j\rangle=\langle h_i,U^Tu_j\rangle=\langle h_i, e_j\rangle=\pm 1/\sqrt{n}$ for every $j$}
\\&= p_\tau(x)-(s/n)p'_\tau(x)
\\&= (1-(s/n)D)p_\tau(x),
\end{align*}
where $D$ denotes differentiation with respect to $x$. Thus, the characteristic
polynomial of $A_{\tau+1}$ does not depend on the choice of vector in round
$\tau+1$, but only on the scaling $s_{\tau+1}$. Applying this fact inductively
for all $T$ rounds, we have:
$$ p_T(x) = \prod_{t\le T} (1-(s_t/n)D) x^n,$$
since $p_0(x)=x^n$.
Note that since every $p_\tau(x)$ is the characteristic polynomial of a symmetric
matrix, it must be real-rooted.
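The rank-one update identity $p_{\tau+1}(x)=p_{\tau}(x)-(s/n)p_{\tau}'(x)$ can be verified numerically in small dimension. The sketch below (assuming NumPy; the matrix, scaling $s$, and column index are illustrative) also checks that $\langle v,u_j\rangle^{2}=1/n$ for every $j$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
base = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(base, base) / np.sqrt(n)   # normalized 4x4 Hadamard

B = rng.standard_normal((n, n))
A = B + B.T                            # a symmetric A_tau
_, U = np.linalg.eigh(A)               # orthogonal eigenbasis of A_tau
s = 0.7
v = U @ H[:, 2]                        # the adversary's vector U h_i

p = np.poly(A)                         # char. poly of A_tau (monic, descending coeffs)
p_pred = p.copy()
p_pred[1:] -= (s / n) * np.polyder(p)  # p_tau(x) - (s/n) p_tau'(x)
p_new = np.poly(A + s * np.outer(v, v))
err = np.abs(p_new - p_pred).max()
```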
\begin{remark} \label{rem.rr} Since the above calculation holds for all choices
of weights $s$ and matrices $A$, we have recovered the well-known fact that for
any real-rooted $p(x)$, the polynomial $(1-\alpha D)p(x)$ is also
real-rooted for real $\alpha$.\end{remark}
Let $S:=\sum_{t\le T} s_t/n$. We will show that among all assignments of the
weights $\{s_t\}$ with sum $S$, the roots of $p_T(x)$ are extremized when all of the
$s_t$ are equal, namely\footnote{To avoid confusion, we remark that in what follows $T$ is always a number and
never the transpose (we will be dealing only with polynomials, not matrices).}:
\begin{enumerate}
\item [(A)] $\lambda_{min}(p_T)\le \lambda_{min} (1-(S/T)D)^Tx^n.$
\item [(B)] $\lambda_{max}(p_T)\ge \lambda_{max} (1-(S/T)D)^Tx^n.$
\end{enumerate}
To do this, we will use some facts about majorization of roots of polynomials.
Recall that a nondecreasing sequence $b_1\le b_2\le\cdots\le b_n$ majorizes another
nondecreasing sequence $a_1\le\cdots\le a_n$ if $\sum_{j=1}^n a_j=\sum_{j=1}^n b_j$ and the partial sums satisfy:
$$ \sum_{j=1}^k a_j \ge \sum_{j=1}^k b_j$$
for $k=1,\ldots,n-1$. We will denote this by $(a_1,\ldots,a_n)\prec
(b_1,\ldots,b_n)$, and notice that this condition implies that $a_1\ge b_1$ and
$a_n\le b_n$, i.e., the extremal values of $a$ are more concentrated than those
of $b$. We will make use of the fact that for a given sum $S$, the uniform
sequence $(S/n,\ldots,S/n)$ is majorized by every other sequence with sum $S$.
We now appeal to the following theorem of Borcea and Br\"and\'en
\cite{borcea2010hyperbolicity}.
\begin{theorem}\label{th.bb} Suppose $L:\R_n[x]\rightarrow\R[x]$ is a linear
transformation on polynomials of degree $n$. If $L$ maps real-rooted polynomials to real-rooted
polynomials, then $L$ preserves majorization, i.e.
$$ \lambda(p)\prec \lambda(q) \quad\Rightarrow \lambda(L(p))\prec
\lambda(L(q)),$$
where $\lambda(p)$ is the vector of nondecreasing zeros of $p$.\end{theorem}
Let $$\phi(x):=(x-(S/T))^T$$ and let $\psi_T(x):=\prod_{t=1}^T (x-s_t/n).$ Observe that
$(S/T,\ldots,S/T)=\lambda(\phi)\prec \lambda(\psi_T)$, since the sum of the roots
of $\psi_T$ is $S$. Consider the linear transformation $L:\R_T[x]\rightarrow \R[x]$ defined by:
$$L(p) = D^np(1/D)x^n,$$
and observe that for any monic polynomial with roots $\alpha_t$:
$$L\left(\prod_{t=1}^T (x-\alpha_t)\right) = \prod_{t=1}^T (1-\alpha_t D) x^n.$$
By Remark \ref{rem.rr}, $L(p)$ is real-rooted whenever $p$ is real-rooted, so
Theorem \ref{th.bb} applies. We conclude that the roots of $L(\psi_T)=p_T(x)$ majorize
the roots of $L(\phi)=(1-(S/T)D)^T x^n,$ so items (A) and (B) follow.
To finish the proof, we observe (as in \cite{MSSICM}, Section 3.2) that
$$ (1-(S/T)D)^Tx^n = \mathcal{L}_n^{(T-n)}(n^2x/S)=:\mathcal{L}(x)$$
where the right hand side is a scaling of an {\em associated Laguerre polynomial}. The
asymptotic distribution of the roots of such polynomials is known, and converges
to the Marchenko-Pastur law from Random Matrix Theory as $n\rightarrow\infty$. In particular, Theorem
4.4 of \cite{dette1995some} tells us that
$$\lambda_{min} \mathcal{L}(x)\rightarrow
\frac{S}{n}\left(1-\sqrt{\frac{n}{T}}\right)^2$$
and
$$\lambda_{max} \mathcal{L}(x)\rightarrow
\frac{S}{n}\left(1+\sqrt{\frac{n}{T}}\right)^2,$$
as $n\rightarrow\infty$ with $T=\beta n$. Thus, the condition number of $A_T$ is
at least
$$
\frac{\lambda_{max}\mathcal{L}(x)}{\lambda_{min}\mathcal{L}(x)}=\kappa_d-o_n(1),$$
as desired.
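The asymptotics above can be probed numerically. The sketch below is an illustration only (assuming numpy): it computes the zeros of $L_n^{(T-n)}$ as eigenvalues of the standard symmetric tridiagonal Jacobi matrix for the Laguerre weight $x^{\alpha}e^{-x}$, which is numerically stable. Since the extreme-root ratio is invariant under rescaling of the variable, it can be compared with $\kappa_d = (1+\sqrt{n/T})^2/(1-\sqrt{n/T})^2$ without fixing any normalization of $\mathcal{L}(x)$.

```python
import numpy as np

# Zeros of the associated Laguerre polynomial L_n^{(alpha)} are the
# eigenvalues of the symmetric tridiagonal Jacobi matrix with
# diagonal entries 2k+alpha+1 (k=0..n-1) and off-diagonal sqrt(k(k+alpha)).
n, beta = 100, 4            # T = beta*n rounds; illustrative sizes
T = beta * n
alpha = T - n

diag = np.arange(n, dtype=float) * 2 + alpha + 1
off = np.sqrt(np.arange(1, n, dtype=float) * (np.arange(1, n) + alpha))
J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
z = np.linalg.eigvalsh(J)   # zeros of L_n^{(T-n)}, all real and positive

ratio = z.max() / z.min()   # scale-invariant extreme-root ratio
kappa = (1 + np.sqrt(n / T)) ** 2 / (1 - np.sqrt(n / T)) ** 2  # limit value
```

For finite $n$ the extreme zeros sit inside the limiting support, so `ratio` approaches `kappa` (here $9$) from below as $n$ grows with $T=\beta n$.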
TITLE: Group topology induced by group action
QUESTION [3 upvotes]: Let $G$ be a group, $X$ a topological space and $G\times X \longrightarrow X$ a group action so that $\begin{align*}
X\longrightarrow X \\
x \mapsto g\cdot x
\end{align*}$ is a homeomorphism for all $g \in G$. Is there a coarsest topology on $G$ such that:

1. $G$ is a topological group, and
2. the group action $G \times X \longrightarrow X$ is continuous?
I have shown that the topology induced by the maps $\begin{align*} G \longrightarrow X \\
g\mapsto g\cdot x\end{align*}$ and
$\begin{align*} G \longrightarrow X \\
g\mapsto g^{-1}\cdot x\end{align*}$ $\forall x\in X$ makes inversion, left action and right action continuous on $G$.
REPLY [1 votes]: If $X$ is Hausdorff and has compact and connected neighbourhoods around each point, the answer is the topology induced by the homomorphism $G\longrightarrow \text{Homeo}(X)$ defined by the group action, where $\text{Homeo}(X)$ is the homeomorphism group equipped with the compact-open topology. To prove it:
Let $\tau$ be a topology on $G$ such that $G$ is a topological group and the group action $G\times X\longrightarrow X$ is continuous. Since $X$ is Hausdorff and locally compact, the exponential object $X^X$ exists in the category $\text{Top}$. By the universal property of exponential objects, there is a unique continuous map $G \overset{\phi}{\longrightarrow} X^X$ through which the action factors via evaluation.
$X^X$ can be constructed as $C(X,X)$ equipped with the compact-open topology, and $\text{Homeo}(X)\subseteq X^X$. Since every $g \in G$ acts by a homeomorphism, $\text{Im }\phi\subseteq\text{Homeo}(X)$, and $G\overset{\phi}{\longrightarrow}\text{Homeo}(X)$ is precisely the homomorphism defined by the group action. There is a theorem stating that if $X$ is Hausdorff and has compact and connected neighbourhoods around each point, then $\text{Homeo}(X)$ equipped with the compact-open topology is a topological group: https://www.jstor.org/stable/pdf/30037630.pdf?refreqid=excelsior%3A0a25ff84e89d996eeaf57120097c8447. Define $\tau_i$ to be the topology on $G$ induced by $G\overset{\phi}{\longrightarrow}\text{Homeo}(X)$. Then $(G,\tau_i)$ is a topological group, by the universal property of the induced topology and because $\phi$ is a homomorphism. Since $(G,\tau)\overset{\phi}{\longrightarrow} \text{Homeo}(X)$ is continuous, $\tau_i\subseteq \tau$. That means $\tau_i$ is the coarsest topology for which the action is continuous and $G$ is a topological group.
\begin{document}
\title[Fields of type $\tF$ and $\Fpm$]{A generalization of Serre's condition $\tF$ with applications to the finiteness of unramified cohomology}
\author[I.~Rapinchuk]{Igor A. Rapinchuk}
\begin{abstract}
In this paper, we introduce a condition $\mathrm{(F}_m'\mathrm{)}$ on a field $K$, for a positive integer $m$,
that generalizes Serre's condition (F) and which still implies the finiteness of the Galois cohomology of finite Galois modules annihilated by $m$ and algebraic $K$-tori that split over an extension of degree dividing $m$, as well as certain groups of \'etale and unramified cohomology.
Various examples of fields satisfying $\mathrm{(F}_m'\mathrm{)}$, including those that do not satisfy (F), are given.
\end{abstract}
\address{Department of Mathematics, Michigan State University, East Lansing, MI, 48824 USA}
\email{rapinch@math.msu.edu}
\maketitle
\section{Introduction}\label{S-1}
In \cite{SerreGC}, Serre introduced the following condition on a profinite group $G$:
\vskip2mm
$\mathrm{(F)}$ \ \ \ \parbox[t]{15.5cm}{{\it For every integer $m \geq 1$, $G$ has finitely many open subgroups of index $m$.}}
\vskip2mm
\noindent (see \cite[Ch. III, \S4.2]{SerreGC}). He then defined a perfect field $K$ to be {\it of type $\tF$} if the absolute Galois group $G_K = \Ga(\overline{K}/K)$ satisfies $\tF$ --- notice that this is equivalent to the fact that for every integer $m$, the (fixed) separable closure $\overline{K}$ contains only finitely many extensions of $K$ of degree $m$. This property provides a general framework for various finiteness results involving Galois cohomology, orbits of actions on rational points, etc. In particular, Serre showed that if
$K$ is a field of type $\tF$ and $\mathcal{G}$ is a linear algebraic group defined over $K$, then the Galois cohomology set $H^1(K,\mathcal{G})$ is finite (cf. \cite[Ch. III, \S4.3, Theorem 4]{SerreGC}).
Among the examples of fields of type $\tF$ given in {\it loc. cit.} are $\R$ and $\C$, finite fields, the field $C(\!(t)\!)$ of formal Laurent series over an algebraically closed field $C$ of characteristic 0, and $p$-adic fields (i.e. finite extensions of $\Q_p$). More generally, Serre noted that if $K$ is a perfect field such that $G_K$ is topologically finitely generated, then $K$ is of type $\tF$ (see \cite[Ch. III, \S4.1, Proposition 9]{SerreGC}). Using this, one can show, for example, that if $K$ is a field of characteristic 0 and $G_K$ is finitely generated, then the field $K(\!(t_1)\!) \cdots (\!(t_r)\!)$ of iterated Laurent series over $K$ is of type $\tF$ for any $r \geq 1$ (see Proposition \ref{P-Topfg}).
On the other hand, even when $k$ is a finite field of characteristic $p > 0$, it follows from Artin-Schreier theory that the (imperfect) field $L = k(\!(t)\!)$ has infinitely many cyclic extensions of degree $p$, hence is not of type $\tF.$ In this note, we would like to propose a generalization of condition $\tF$ which does hold in this case as well as in some other situations where $\tF$ fails, and which suffices to establish some finiteness results for Galois cohomology, and particularly for unramified cohomology.
So, let $K$ be a field and $m \geq 1$ be an integer prime to ${\rm char}~K.$
We will say that $K$ is {\it of type $\Fpm$} if
\vskip2mm
$\Fpm$ \ \ \ \parbox[t]{15.5cm}{{\it For every finite separable extension $L/K$, the quotient $L^{\times}/{L^{\times}}^m$ of the multiplicative group $L^{\times}$ by the subgroup of $m$-th powers is finite.}}
\vskip2mm
\noindent We note that if $K$ is a field of type $\tF$, then it is also of type $\Fpm$ for all $m$ prime to ${\rm char}~K$ (see Lemma \ref{L-CondF}). However, condition $\Fpm$ appears to be somewhat more flexible in several respects. First, we do not require that the field $K$ be perfect. Also,
in contrast to what happens with $\tF$, one shows that if $K$ is of type $\Fpm$ for some $m$ prime to ${\rm char}~K$, then so is $K(\!(t_1)\!)\cdots (\!(t_r)\!)$ for any $r \geq 1$ (see Corollary \ref{C-FmLS}). Furthermore, in Example 2.9, we give a construction of fields of characteristic 0 that satisfy $\Fpm$ for some, but not all, integers $m$ (and hence, in particular, are not of type ${\rm (F)}$).
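For intuition, in the simplest case of a prime field $K = \mathbb{F}_p$ the quotient $K^{\times}/{K^{\times}}^m$ has order $\gcd(p-1,m)$, since the unit group is cyclic of order $p-1$. A brute-force computation confirming this (illustration only):

```python
from math import gcd

# Brute force over the unit group of F_p: the m-th power classes number
# exactly gcd(p-1, m), so finite fields satisfy the finiteness in (F'_m).
def num_power_classes(p, m):
    powers = {pow(x, m, p) for x in range(1, p)}
    assert (p - 1) % len(powers) == 0   # cosets partition the unit group
    return (p - 1) // len(powers)

for p in (5, 7, 13, 31):
    for m in (2, 3, 4, 6):
        assert num_power_classes(p, m) == gcd(p - 1, m)
```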
As in the case of condition $\tF$, the key motivation for introducing $\Fpm$ is to give a suitable context for finiteness properties in Galois cohomology. The main result is the following.
\begin{thm}\label{T-FieldF'}
Let $K$ be a field and $m$ be a positive integer prime to $\mathrm{char}~K.$ Assume that $K$ is of type $\Fpm.$ Then for any finite $G_K$-module $A$ such that $mA = 0$, the groups $H^i (K, A)$ are finite for all $i \geq 0$.
\end{thm}
The proof of this statement, which will be given in \S\ref{S-2a}, relies on the Norm Residue Isomorphism Theorem (formerly the Bloch-Kato Conjecture).
\vskip2mm
\noindent {\bf Remark 1.2.} We note that in \cite[Ch. III, \S 4]{SerreGC}, Serre only proves that condition (F) implies the finiteness of $H^1(K,A)$, where $A$ is a finite $G_K$-group (he considers both commutative and non-commutative $A$). It appears that to establish the finiteness of cohomology groups in all degrees $i \geq 1$, the use of the Bloch-Kato Conjecture cannot be avoided. We should also point out that recently, there have been attempts to approach the Bloch-Kato conjecture via an analysis of abstract properties of profinite groups in \cite{DCF}; so far, however, these efforts have shown only partial success, with no apparent bearing on Theorem \ref{T-FieldF'}.
\vskip2mm
\addtocounter{thm}{1}
In this note, our main applications of Theorem \ref{T-FieldF'} will be several finiteness results for unramified cohomology.
First, let us recall that if $K$ is a field equipped with a discrete valuation $v$, then for any positive integer $m$ prime to the characteristic of the residue field $\kappa (v)$, any $i \geq 1$, and any $j$, there exists a {\it residue map} in Galois cohomology
$
\partial_v^i \colon H^i (K, \mu_m^{\otimes j}) \to H^{i-1} (\kappa (v), \mu_m^{\otimes (j-1)})
$
(see, e.g., \cite[Ch. II, \S7]{GaMeSe} for a construction of $\partial_v^i$ and the end of this section for all unexplained notations).
A cohomology class $x \in H^i (K, \mu_m^{\otimes j})$ is said to be {\it unramified at $v$} if $x \in \ker \partial_v^i.$ Furthermore, if $V$ is a set of discrete valuations of $K$ such that the maps $\partial_v^i$ exist for all $v \in V$, one defines the {\it degree $i$ unramified cohomology of $K$ with respect to $V$} as
$$
H^i (K, \mu_m^{\otimes j})_V = \bigcap_{v \in V} \ker \partial_v^i.
$$
Now suppose that
$X$ is a smooth algebraic variety over a field $F$ with function field $F(X).$ Then each point $x \in X$ of codimension 1 defines a discrete valuation $v_x$ on $F(X)$ that is trivial on $F$. We let
$
V_0 = \{v_x \mid x \in X^{(1)} \}
$
denote the set of all such {\it geometric places} of $F(X)$ and define
$$
H^i_{\mathrm{ur}} (F(X), \mu_m^{\otimes j}) = H^i (F(X), \mu_m^{\otimes j})_{V_0}
$$
for any positive integer $m$ invertible in $F$.\footnotemark \footnotetext{Another definition of unramified cohomology that is frequently encountered is
$$
H^i_{\mathrm{nr}} (F(X), \mu_m^{\otimes j}) = H^i (F(X), \mu_m^{\otimes j})_{V_1},
$$
where $V_1$ is the set of all discrete valuations $v$ of $F(X)$ such that $F$ is contained in the valuation ring $\mathcal{O}_v$.
Clearly, there is an inclusion $H^i_{\mathrm{nr}} (F(X), \mu_m^{\otimes j}) \subset H^i_{\mathrm{ur}} (F(X), \mu_m^{\otimes j})$, which is in fact an equality if $X$ is proper (see \cite[Theorem 4.1.1]{CT-SB}). Note that by construction, $H^i_{\mathrm{nr}} (F(X), \mu_m^{\otimes j})$ is a birational invariant of $X$.} With these notations, we have the following.
\begin{thm}\label{T-MainThm}
Let $K$ be a field and $m$ be a positive integer prime to ${\rm char}~K$. Assume that $K$ is of type $\Fpm.$
\vskip2mm
\noindent $\mathrm{(a)}$ \parbox[t]{16cm}{Suppose $C$ is a smooth, geometrically integral curve over $K$. Then the
unramified cohomology groups $H^i_{\mathrm{ur}} (K(C), \mu_m^{\otimes j})$ are finite for all $i \geq 1$ and all $j$.}
\vskip3mm
\noindent $\mathrm{(b)}$ \parbox[t]{16cm}{Let $X$ be a smooth, geometrically integral algebraic variety of dimension $\geq 2$ over $K$. Then the unramified cohomology group $H^3_{\mathrm{ur}} (K(X), \mu_m^{\otimes 2})$ is finite if and only if $CH^2 (X)/m$ is finite, where $CH^2(X)$ is the Chow group of codimension 2 cycles modulo rational equivalence.}
\end{thm}
We should point out that the finiteness statement of part (a) has been used in \cite{CRR4} and \cite{CRR5} to show that if $K$ is a field of characteristic $\neq 2$ that satisfies $\mathrm{(F}_2'\mathrm{)}$, then the number of $K(C)$-isomorphism classes of spinor groups $G = \mathrm{Spin}_n(q)$ of nondegenerate quadratic forms $q$ over $K(C)$ in $n \geq 5$ variables, as well as of groups of some other types, that have good reduction at all geometric places $v \in V_0$ is finite. In fact, this result is part of a general program of studying absolutely almost simple groups having the same maximal tori over the field of definition --- we refer the reader to \cite{CRR3} for a detailed overview of these problems and several conjectures.
Our second application deals with the unramified cohomology of algebraic tori.
Suppose $X$ is a smooth,
geometrically integral variety over a field $F$. Given a torus $\mathbb{T}$ over $X$, we define
the degree 1 unramified cohomology $H^1_{{\rm ur}}(F(X), T)$ of the generic fiber $T = \mathbb{T}_{F(X)}$ by
$$
H^1_{{\rm ur}}(F(X),T) = {\rm Im}~(\he^1(X, \mathbb{T}) \stackrel{f}{\longrightarrow} H^1(F(X), T)),
$$
where $f$ is the natural map (see \S\ref{S-5} for the relation between this definition and the above definition of unramified cohomology using residue maps). We have the following statement.
\begin{thm}\label{T-MainThm2}
Let $X$ be a smooth, geometrically integral algebraic variety over a field $K$ and $\mathbb{T}$ be a torus over $X$ with generic fiber $T = \mathbb{T}_{K(X)}.$ Denote by $K(X)_T$ the minimal splitting field of $T$ inside a fixed separable closure $\overline{K(X)}$ of $K(X).$ If $K$ is of type $\Fpm$ for some positive integer $m$ that is prime to ${\rm char}~K$ and divisible by $[K(X)_T:K(X)]$,
then the unramified cohomology group $H^1_{{\rm ur}}(K(X), T)$ is finite.
\end{thm}
As an application, we will see in Corollary \ref{C-RequivFinite} a consequence for the finiteness of the group of
$R$-equivalence classes of algebraic tori.
The paper is organized as follows. In \S \ref{S-2}, we discuss some key properties and examples of fields of type $\tF$ and $\Fpm$. We then establish Theorem \ref{T-FieldF'} in \S \ref{S-2a} and deduce finiteness statements for the \'etale cohomology of smooth varieties (Corollary \ref{C-FinEt}) and algebraic tori (Corollary \ref{C-FinTori}) over fields of type $\Fpm.$
In \S \ref{S-4},
we briefly review several important points of Bloch-Ogus theory and then establish Theorem \ref{T-MainThm}. Finally, Theorem \ref{T-MainThm2} is proved in \S\ref{S-5}.
\vskip5mm
\noindent {\bf Notations and conventions.} Let $X$ be a scheme. For any positive integer
$n$ invertible on $X$, we let $\mu_n = \mu_{n, X}$ be the \'etale sheaf of $n$th roots of unity on $X$.
We follow the usual notations for the Tate twists of $\mu_n$. Namely, for $i \geq 0$, we set
$
\Z / n \Z (i) = \mu_n^{\otimes i}
$
(where $\mu_n^{\otimes i}$ is the sheaf associated to the $i$-fold tensor product of $\mu_n$), with the convention that
$
\mu_n^{\otimes 0} = \Z/n \Z.
$
If $i < 0$, we let
$$
\Z / n \Z (i) = \Hom (\mu_n^{\otimes (- i)}, \Z / n \Z).
$$
In the case that $X = \mathrm{Spec}~F$ for a field $F$, we identify $\mu_n$ with the group of $n$th roots of unity in a fixed separable closure $\overline{F}$ of $F$. We will also tacitly identify the \'etale cohomology of ${\rm Spec}~F$ with the Galois cohomology of $F$.
\section{Fields of type $\tF$ and $\Fpm$}\label{S-2}
In this section, we will discuss some key properties and examples of fields of types $\tF$ and $\Fpm.$
\subsection{Fields of type $\tF$} Let $K$ be a perfect field. As we already mentioned in \S\ref{S-1}, if the absolute Galois group $G_K = \Ga(\overline{K}/K)$ is topologically finitely generated, then $K$ is automatically of type $\tF.$ Well-known examples of fields for which $G_K$ is finitely generated include $\C$, $\R$, finite fields, and $p$-adic fields (in this case, it is known that if $[K : \Q_p] = d$, then $G_K$ can be generated by $d + 2$ elements --- see \cite[Theorem 3.1]{J1}). Some further examples of fields with finitely generated absolute Galois group
can be constructed using the next observation.
\begin{prop}\label{P-Topfg}
Let $K$ be a field of characteristic 0 such that $G_K = \Ga (\overline{K}/K)$ is topologically finitely generated. Then for the field $L = K(\!(t)\!)$ of formal Laurent series over $K$, $G_L$ is also topologically finitely generated.
\end{prop}
\begin{proof}
First, we note the following:
\vskip2mm
\noindent $(*)$ \parbox[t]{14.5cm}{\it Suppose $\mathscr{K}$ is a discretely valued henselian field with residue field $\mathbf{k}$ and group of units $\mathscr{U} \subset \mathscr{K}.$ Then for any $m \geq1$ relatively prime to $\mathrm{char}\: \mathbf{k}$, the reduction map induces an isomorphism $\mathscr{U}/\mathscr{U}^m \simeq \mathbf{k}^{\times}/{\mathbf{k}^{\times}}^m$.}
\vskip2mm
\noindent (Indeed, the henselian property implies that the polynomial $X^m - u$ with $u \in \mathscr{U}$ has a root in $\mathscr{K}$ (or $\mathscr{U}$) if and only if its reduction $X^m - \bar{u}$ has a root in $\mathbf{k}$, which leads to the required isomorphism.)
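As a finite analogue of $(*)$ one can check (illustration only; this works in the ring $\Z/p^N\Z$ rather than in a henselian field, but the unit-group structure is the same): for $\gcd(m,p)=1$, a unit is an $m$-th power exactly when its reduction mod $p$ is one.

```python
# Finite analogue of (*) (illustration only): in Z/p^N with p prime and
# gcd(m, p) = 1, a unit is an m-th power exactly when its image in
# (Z/p)^x is, mirroring the isomorphism U/U^m ~ k^x/(k^x)^m.
p, N, m = 7, 3, 3
q = p ** N
units = [u for u in range(1, q) if u % p != 0]
mth_powers_mod_q = {pow(u, m, q) for u in units}
mth_powers_mod_p = {pow(u, m, p) for u in range(1, p)}

for u in units:
    assert (u in mth_powers_mod_q) == (u % p in mth_powers_mod_p)
```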
Now let $F = L^{\text{ur}}$ be the maximal unramified extension of $L$. Then $F$ is a discretely valued henselian field having residue field $\overline{K}$ (in fact, in this case, $F$ is simply the compositum of $L$ and $\overline{K}$ inside $\overline{L}$). Since $\Ga(F/L)$ is naturally identified with $G_K$, hence is finitely generated, it is enough to show that $H = \Ga (\overline{L}/F ) \simeq \widehat{\Z}$. This will follow if we verify that for each $n \geq 1$, $F(\sqrt[n]{t})$ is the \emph{unique} degree $n$ extension of $F$. But, since ${\rm char}~K = 0$, a degree $n$ extension $F'/F$ is totally tamely ramified, and consequently is of the form $F(\sqrt[n]{\pi})$ for {\it some} uniformizer $\pi \in F$ (see, e.g., \cite[Ch. II, Proposition 3.5]{FV}). Since the residue field $\overline{K}$ of $F$ is algebraically closed, it follows from $(*)$ that every unit in $F$ is an $n$-th power, hence $F(\sqrt[n]{\pi}) = F(\sqrt[n]{t})$, as required.
\end{proof}
Iterating the statement of the proposition, we obtain, in particular, the following.
\begin{cor}\label{C-TopFg}
Suppose $K$ is $\C$, $\R$, or a $p$-adic field, and let $L = K(\!(t_1)\!) \cdots (\!(t_r)\!)$ for some $r \geq 0$. Then $G_L$ is topologically finitely generated, and therefore $L$ is of type $\tF.$
\end{cor}
\vskip3mm
\noindent {\bf Remark 2.3.} Let us point out that while the examples of fields of type $\tF$ given in \cite[Ch. III, \S4.2]{SerreGC} have (virtual) cohomological dimension $\leq 2$, the preceding proposition and corollary allow us to construct such fields of arbitrary (finite) cohomological dimension.
\addtocounter{thm}{1}
\subsection{Fields of type $\Fpm$} We would first like to elaborate on the relationship between conditions $\tF$ and $\Fpm$ that was mentioned in \S\ref{S-1}.
\begin{lemma}\label{L-CondF}
Suppose $K$ is a (perfect) field of type $\tF.$ Then $K$ satisfies $\Fpm$ for all $m$ prime to ${\rm char}~K.$
\end{lemma}
\begin{proof}
It is clear from the definitions that if $K$ is of type $\tF$, then so is any finite separable extension $L/K$. Thus, it suffices to show that for any $m$ prime to $\text{char}~K$, the group
$$
H^1 (K, \Z / m \Z (1)) = K^{\times}/ {K^{\times}}^m
$$
is finite. Let $L = K(\zeta_m)$, where $\zeta_m$ is a primitive $m$-th root of unity. Then
$$
H^1 (L, \Z / m \Z (1)) = \Hom_{\text{cont}} (G_L, \Z / m \Z),
$$
which is finite since $L$ is of type $\tF$. The finiteness of $H^1 (K, \Z / m \Z (1))$ then follows from the inflation-restriction sequence
$$
0 \to H^1 (\Ga (L/K), \Z / m \Z (1)) \to H^1 (K, \Z / m \Z (1)) \to H^1 (L, \Z / m \Z (1)).
$$
\end{proof}
\vskip2mm
\noindent {\bf Remark 2.5.} It follows immediately from the definition that if $K$ is of type $\Fpm$, then $K$ is automatically of type $\mathrm{(F'_{\ell})}$ for every $\ell \vert m.$
\addtocounter{thm}{1}
\vskip2mm
As we mentioned previously, condition $\Fpm$ provides greater flexibility than $\tF$ in certain common situations. For instance, whereas $K(\!(t)\!)$, for $K$ a finite field of characteristic $p$, fails to be of type $\tF$, Corollary \ref{C-FmLS} below implies that it \emph{does} satisfy $\Fpm$ for all $m$ relatively prime to $p$. Quite generally, we have the following statement.
\begin{prop}\label{P-FmLS}
Let $\mathscr{K}$ be a complete discretely valued field with residue field $\mathbf{k}$. If $m$ is prime to $\mathrm{char}\:
\mathbf{k}$ and $\mathbf{k}$ is of type $\Fpm$, then $\mathscr{K}$ is also of type $\Fpm.$
\end{prop}
In particular, for the field of Laurent series, we have
\begin{cor}\label{C-FmLS}
Let $K$ be a field and $m \geq 1$ be an integer prime to ${\rm char}~K.$ If $K$ is of type $\Fpm$, then so is $L = K(\!(t)\!).$
\end{cor}
Thus, if, for example, $K$ is $\C$, $\R$, a $p$-adic field, or a finite field of characteristic $p$ prime to $m$, then $L = K(\!(t_1)\!)\cdots (\!(t_r)\!)$ is of type $\Fpm$ for any $r \geq 1.$
Now, before proceeding to the proof of Proposition \ref{P-FmLS}, let us observe that while we restricted ourselves to finite {\it separable} extensions in the definition of $\Fpm$, the property described there actually holds for {\it all} finite extensions.
\begin{lemma}\label{L-Insep}
If a field $K$ is of type $\Fpm$ for $m$ prime to $\mathrm{char}\: K$, then the quotient $L^{\times}/{L^{\times}}^m$ is finite for \emph{any} (not necessarily separable) finite extension $L/K$.
\end{lemma}
\begin{proof}
We can assume that $p = \mathrm{char}\: K > 0$. Let $L/K$ be an arbitrary finite extension, and let $F$ be its maximal separable subextension. Then there exists $\alpha \geqslant 0$ such that $L^{p^{\alpha}} \subset F$. On the other hand, since $K$ is of type $\Fpm$, the quotient $F^{\times}/{F^{\times}}^m$ is finite. So, it suffices to show that the map
\begin{equation}\label{E:A-Inj}
L^{\times}/{L^{\times}}^m \longrightarrow F^{\times}/{F^{\times}}^m, \ \ x{L^{\times}}^m \mapsto x^{p^{\alpha}} {F^{\times}}^m,
\end{equation}
is injective. But the composite map
$$
L^{\times}/{L^{\times}}^m \longrightarrow F^{\times}/{F^{\times}}^m \longrightarrow L^{\times}/{L^{\times}}^m,
$$
where the second map is induced by the identity embedding $F^{\times} \hookrightarrow L^{\times}$, amounts to raising to the power $p^{\alpha}$ in $L^{\times}/{L^{\times}}^m$. Since the latter has exponent $m$, which is prime to $p^{\alpha}$, this map is injective, and the injectivity of (\ref{E:A-Inj}) follows.
\end{proof}
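The last step of the proof, that raising to a power prime to the exponent is injective on an abelian group of exponent $m$, can be sanity-checked on a small abelian group (written additively; illustration only, with a hypothetical product group $\Z/m_1 \times \Z/m_2$):

```python
from math import gcd

# Illustration only: on an abelian group of exponent m (here
# Z/m1 x Z/m2 with m = lcm(m1, m2)), the map x -> k*x is a bijection
# exactly when gcd(k, m) = 1.
def is_injective(k, m1, m2):
    group = [(a, b) for a in range(m1) for b in range(m2)]
    image = {((k * a) % m1, (k * b) % m2) for a, b in group}
    return len(image) == len(group)

m1, m2 = 6, 4              # exponent m = lcm(6, 4) = 12
for k in range(1, 13):
    assert is_injective(k, m1, m2) == (gcd(k, 12) == 1)
```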
\noindent {\it Proof of Proposition \ref{P-FmLS}.}
Any finite extension $\mathscr{L}$ of $\mathscr{K}$ is clearly henselian, and its residue field $\mathbf{l}$ is a finite extension of $\mathbf{k}$.
So, since $\mathbf{k}$ is of type $\Fpm$, Lemma \ref{L-Insep} implies that the quotient $\mathbf{l}^{\times}/{\mathbf{l}^{\times}}^m$ is finite.
Applying $(*)$ from the proof of Proposition \ref{P-Topfg} to $\mathscr{L}$, we see that if $\mathscr{U}$ denotes the group of units in $\mathscr{L}$, then the quotient
$\mathscr{U}/\mathscr{U}^m$ is finite. A choice of uniformizer $\pi \in \mathscr{L}$ then yields an isomorphism
$$
\mathscr{L}^{\times} \simeq \langle \pi \rangle \times \mathscr{U},
$$
from which we conclude that the quotient $\mathscr{L}^{\times}/{\mathscr{L}^{\times}}^m$ is also finite, as required. $\Box$
\vskip2mm
We end this section with the following example of fields of characteristic 0 that satisfy $\Fpm$ for some, but not all, values of $m.$
\vskip2mm
\noindent {\bf Example 2.9.} Let $K$ be a field of characteristic 0 such that $G_K$ is an infinitely generated pro-$2$ group (e.g., we can take $K = \overline{\Q}^{\mathscr{G}_2}$, where $\mathscr{G}_2$ is a Sylow 2-subgroup of $G_{\Q}$). Clearly, $K$ is not of type ${\rm (F_2)}$, but is of type $\Fpm$ for any odd $m$ (in particular, $K$ is not of type $\tF$). By Corollary \ref{C-FmLS}, we also see that these properties are shared by $K(\!(t_1)\!)\cdots(\!(t_r)\!)$ for any $r \geq 1.$
\section{Proof of Theorem \ref{T-FieldF'}.}\label{S-2a}
In this section, we establish a finiteness result for the cohomology of finite Galois modules over fields of type $\Fpm$ and indicate some consequences for the \'etale cohomology of smooth algebraic varieties and the Galois cohomology of algebraic tori.
One of the principal ingredients in the proof of Theorem \ref{T-FieldF'} is the Norm Residue Isomorphism Theorem (formerly the Bloch-Kato Conjecture), established in the work of Rost \cite{Rost}, Voevodsky (\cite{V1}, \cite{V2}), Weibel \cite{Weib}, and others. We briefly recall the set-up. For a field $F$ and integer
$n > 1$, the {\it $n$-th Milnor $K$-group} $K_n^M(F)$ of $F$ is defined as the quotient of the $n$-fold tensor product $F^{\times} \otimes_{\Z} \cdots \otimes_{\Z} F^{\times}$ by the subgroup generated by elements $a_1 \otimes \cdots \otimes a_n$ such that $a_i + a_j = 1$ for some $1 \leq i < j \leq n.$ For $a_1, \dots, a_n \in F^{\times}$, the image of $a_1 \otimes \cdots \otimes a_n$ in $K^M_n(F)$, denoted $\{ a_1, \dots, a_n \}$, is called a {\it symbol}; thus, $K^M_n (F)$ is generated by symbols.
By convention, one sets $K^M_0 (F) = \Z$ and $K^M_1 (F) = F^{\times}.$ Furthermore, for any integer $m$ prime to $\text{char}~F$, the Galois symbol yields a group homomorphism
$$
h^n_{F,m} \colon K_n^M (F) \to H^n (F, \Z / m \Z (n)),
$$
which obviously factors through $K_n^M(F)/m := K_n^M(F)/m \cdot K_n^M(F)$ (see \cite[\S 4.6]{GiSz} for the relevant definitions). We will write
$\{a_1, \dots, a_n\}_m$ for the image of $\{a_1, \dots, a_n\}$ in $K_n^M(F)/m.$ The main result is
\begin{thm}\label{T-BlochKato}
For any field $F$ and any integer $m$ prime to $\text{char}~F$, the Galois symbol induces an isomorphism
$$
K_n^M(F)/m \stackrel{\sim}{\longrightarrow} H^n (F, \Z / m \Z (n))
$$
for all $n \geq 0$.
\end{thm}
\vskip2mm
We now turn to
\vskip2mm
\noindent {\it Proof of Theorem \ref{T-FieldF'}.} First, we note that if $K$ is of type $\Fpm$, then the groups $K_n^M(L)/m$ are finite for all finite extensions $L/K$ and all $n \geq 0.$ Indeed, for each $n$, the group $K_n^M(L)/m$ is generated by symbols $\{ a_1, \ldots , a_n \}_m$ with $a_i \in L^{\times}$, and each such symbol is annihilated by $m$. On the other hand, since the quotient $L^{\times}/{L^{\times}}^m$ is finite by Lemma \ref{L-Insep}, there are only finitely many symbols of length $n$, and the finiteness of $K_n^M(L)/m$ follows.
Applying Theorem \ref{T-BlochKato} and Remark 2.5, we see that the cohomology groups $H^n(L , \Z/\ell \Z(n))$ are finite for all $\ell \vert m$ and all $n \geq 0$. Now, to deal with an arbitrary module $A$, we pick a finite extension $L/K$ that contains the $m$-th roots of unity and such that $G_L$ acts trivially on $A$. Then for any $\ell \vert m$ and any $n \geq 0$, the Galois $L$-module $\Z/\ell\Z(n)$ is isomorphic to the trivial module $\Z/\ell\Z$, so the above discussion implies the finiteness of the cohomology groups $H^n(L , \Z/\ell \Z)$. On the other hand, in view of the fact that $mA = 0$ and our choice of $L$, the Galois $L$-module $A$ is isomorphic to a direct sum of some $\Z/\ell\Z$'s. Thus, the cohomology groups $H^n(L , A)$ are finite for all $n \geq 0$. The finiteness of the cohomology groups $H^n(K , A)$ now follows from the Hochschild-Serre spectral sequence
$$
E_2^{i,j} = H^i(\Ga(L/K) , H^j(L , A)) \Longrightarrow H^{i+j}(K , A).
$$ \hfill $\Box$
\vskip2mm
As a first application of Theorem \ref{T-FieldF'}, let us mention the following statement for \'etale cohomology, which will be needed in our discussion of unramified cohomology.
\begin{cor}\label{C-FinEt}
Suppose $K$ is a field and $m$ is a positive integer prime to ${\rm char}~K.$ If $K$ is of type $\Fpm$, then for any smooth geometrically integral algebraic variety $X$ over $K$, the \'etale cohomology groups $\he^i(X, \Z / m \Z (j))$ are finite for all $i \geq 0$ and all $j$.
\end{cor}
\begin{proof}
Let $\bar{X} = X \times_K \overline{K}.$ It is well-known that $\he^i (\bar{X}, \Z / m \Z (j))$ are finite $m$-torsion groups for all $i \geq 0$ and all $j$ (see \cite[Expos\'e XVI, Th\'eor\`eme 5.2]{SGA4}). Consequently, the groups $H^p (K, \he^q (\bar{X}, \Z / m \Z (j)))$ are finite for all $p, q \geq 0$ and all $j$ by Theorem \ref{T-FieldF'}. Our claim now follows from the
Hochschild-Serre spectral sequence
$$
E_2^{p,q} = H^p (K, H^q_{\text{\'et}} (\bar{X}, \Z / m \Z (j))) \Rightarrow H^{p+q}_{\text{\'et}} (X, \Z / m \Z (j)).
$$
\end{proof}
Next, suppose that $T$ is an algebraic torus defined over a field $K$ and
denote by $K_T$ the minimal splitting field of $T$ inside a fixed separable closure $\overline{K}$ of $K$.
\begin{cor}\label{C-FinTori}
Let $K$ be a field of type $\Fpm$. Then for an algebraic $K$-torus $T$, we have the following:
\vskip2mm
\noindent $\mathrm{(a)}$ \parbox[t]{16cm}{The $\ell$-torsion subgroups ${}_{\ell}H^i(K, T)$ are finite for all $i \geq 0$ and all $\ell \vert m.$}
\vskip2mm
\noindent $\mathrm{(b)}$ \parbox[t]{16cm}{If $[K_T:K]$ divides $m$, then $H^1(K,T)$ is finite.}
\end{cor}
\begin{proof}
\noindent (a) First, by Remark 2.5, $K$ is of type $\mathrm{(F'_{\ell})}$ for every $\ell \vert m.$ Next, for every $i \geq 0$, the map
$$
\varphi_{\ell} \colon T \to T, \ \ \ x \mapsto x^{\ell}
$$
induces a surjective homomorphism $H^i(K, T[\ell]) \twoheadrightarrow {}_{\ell}H^i(K, T)$, where $T[\ell] = \ker \varphi_{\ell}.$ Since $T[\ell]$ is a finite $\ell$-torsion module, the groups $H^i(K, T[\ell])$ are finite by
Theorem \ref{T-FieldF'}, which then yields the finiteness of ${}_{\ell}H^i(K, T).$
\vskip2mm
\noindent (b) By Hilbert's Theorem 90, the group $H^1(K , T)$ can be identified with $H^1(K_T/K , T)$, hence is annihilated by $n = [K_T : K]$. On the other hand, since $n \vert m$, the group ${}_n H^1(K , T)$ is finite according to (a), and our assertion follows.
\end{proof}
\noindent (We should point out that by contrast with part (b) of the corollary, the groups $H^i(K , T)$, for $i \geq 2$, may be infinite --- for instance, $K = \Q_p$ is of type $\tF$, hence also of type $\Fpm$ for any $m \geq 1$, but for the one-dimensional split torus $T = \mathbb{G}_m$, the group $H^2(K , T)$ is the Brauer group $\Br(K)$, which is infinite.)
We would like to mention one situation where one has the finiteness of $H^1(K , T)$ for all maximal $K$-tori of a semi-simple $K$-group $\mathcal{G}$ simultaneously.
\begin{cor}\label{C-Semisimple}
Let $\mathcal{G}$ be a semisimple algebraic group defined over a field $K$. Assume that the order $m$ of the automorphism group of the root system of $\mathcal{G}$ is prime to $\mathrm{char}\: K$ and that $K$ is of type $\Fpm$. Then for every maximal $K$-torus $T$ of $\mathcal{G}$, the group $H^1(K , T)$ is finite.
\end{cor}
Indeed, the Galois action on the root system $\Phi = \Phi(\mathcal{G} , T)$ yields an injective group homomorphism $\mathrm{Gal}(K_T/K) \hookrightarrow \mathrm{Aut}(\Phi)$. Thus, $[K_T : K]$ divides $m$, and our assertion follows from Corollary \ref{C-FinTori}(b).
In view of Serre's finiteness results for the Galois cohomology of linear algebraic groups over fields of type $\tF$, it would be interesting to determine if the assumptions of Corollary \ref{C-Semisimple} already imply the finiteness of $H^1(K , \mathcal{G})$.
\section{Proof of Theorem \ref{T-MainThm}}\label{S-4}
In this section, we will use Bloch-Ogus theory together with Theorem \ref{T-FieldF'} to establish Theorem \ref{T-MainThm}.
\subsection{A brief review of Bloch-Ogus theory} We begin by recalling, for the reader's convenience, several key points of Bloch-Ogus \cite{BO} theory, which allows one to relate unramified cohomology to \'etale cohomology.
Let $F$ be an arbitrary field and $X$ a smooth algebraic variety over $F$. In this subsection, we will fix a positive integer $m$ prime to ${\rm char}~F.$ By considering the filtration by coniveau, Bloch and Ogus established the existence of the following cohomological first quadrant spectral sequence called the {\it coniveau spectral sequence}:
\begin{equation}\label{E-ConiveauSS}
E_1^{p,q}(X/F, \Z / m \Z (b)) = \bigoplus_{x \in X^{(p)}} H^{q-p}(\kappa(x), \Z / m \Z (b - p)) \Rightarrow \he^{p+q}(X, \Z / m \Z (b)),
\end{equation}
where $X^{(p)}$ denotes the set of points of $X$ of codimension $p$ and the groups on the left are the Galois cohomology groups of the residue fields $\kappa(x)$ (the original statement of Bloch-Ogus was actually given in terms of \'etale homology, with the above version obtained via absolute purity; for a derivation of this spectral sequence that avoids the use of \'etale homology, we refer the reader to \cite{CTHK}). This spectral sequence yields a complex
\begin{equation}\label{E-BOComplex}
E_1^{\bullet, q}(X/F, \Z / m \Z (b)),
\end{equation}
and it is well-known (see, e.g., \cite[Remark 2.5.5]{JSS}) that the differentials in (\ref{E-BOComplex}) coincide up to sign with the differentials in an analogous complex constructed by Kato \cite{Kato} using residue maps in Galois cohomology.
The fundamental result of Bloch and Ogus was the calculation of the $E_2$-term of (\ref{E-ConiveauSS}). To give the statement, we will need the following notation: let
$\mathcal{H}^q (\Z / m \Z (j))$
denote the Zariski sheaf on $X$ associated to the presheaf that assigns to an open $U \subset X$ the cohomology group $H^q_{\text{\'et}}(U, \Z / m \Z (j)).$
Bloch and Ogus showed that
$$
E_2^{p,q}(X/F, \Z/ m \Z (b)) = H^p (X, \mathcal{H}^q (\Z / m \Z (b)))
$$
(see \cite[Corollary 6.3]{BO}). The resulting (first quadrant) spectral sequence
\begin{equation}\label{E-BO}
E_2^{p,q}(X/F, \Z / m \Z (b)) = H^p (X, \mathcal{H}^q (\Z / m \Z (b))) \Rightarrow H^{p+q}_{\text{\'et}} (X, \Z/ m \Z (b))
\end{equation}
is usually referred to as the {\it Bloch-Ogus spectral sequence}. For ease of reference, we summarize several key points pertaining to (\ref{E-BO}).
\begin{prop}\label{P-BlochOgusSS}
The Bloch-Ogus spectral sequence $(\ref{E-BO})$ associated to a smooth irreducible algebraic variety $X$ over a field $F$ has the following properties:
\vskip1mm
\noindent {\rm (a)} $E_2^{p,q} = 0$ for $p > \dim X$ and all $q$;
\vskip1mm
\noindent {\rm (b)} $E_2^{p,q} = 0$ for $p > q$; and
\vskip1mm
\noindent {\rm (c)} \parbox[t]{16cm}{$E_2^{0, q} = H^0 (X, \mathcal{H}^q (\Z / m \Z (b)))$ coincides with the unramified cohomology $H^q_{\mathrm{ur}} (F(X), \Z / m \Z (b))$ with respect to the geometric places of $F(X)$.}
\end{prop}
\subsection{Finiteness of unramified cohomology} We now turn to the proof of Theorem \ref{T-MainThm}.
First suppose that
$X = C$ is a smooth, geometrically integral curve over a field $F$. Then by Proposition \ref{P-BlochOgusSS}(a), we have $E_2^{p,q} = 0$ for $p \neq 0,1$ and all $q$. So, in view of part (c), we have a surjection
\begin{equation}\label{E-EtUnram}
H^i_{\text{\'et}}(C, \Z / m \Z (j)) \twoheadrightarrow H^i_{\text{ur}} (F(C), \Z / m \Z (j))
\end{equation}
for all $i \geq 1$, all $j$, and all $m$ prime to ${\rm char}~F$. Now, if $F = K$ is a field of type $\Fpm$, then the groups $H^i_{\text{\'et}}(C, \Z / m \Z (j))$ are finite by Corollary \ref{C-FinEt}. Consequently, from (\ref{E-EtUnram}), we obtain
\begin{prop}\label{P-4.1}
Let $K$ be a field, $C$ a smooth, geometrically integral curve over $K$, and $m$ a positive integer prime to ${\rm char}~K.$ If $K$ is of type $\Fpm$, then the
unramified cohomology groups $H^i_{\mathrm{ur}} (K(C), \Z/ m \Z (j))$ are finite for all $i \geq 1$ and all $j$.
\end{prop}
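For the reader's convenience, we sketch (in our own words) why the edge map is surjective in the curve case:

```latex
% Sketch (our addition). For dim X = 1, the Bloch-Ogus spectral sequence has
% only the columns p = 0, 1, so the differential
%     d_2 : E_2^{0,q} --> E_2^{2,q-1} = 0
% vanishes and E_infinity^{0,q} = E_2^{0,q}. The edge map is then the
% canonical surjection onto the lowest filtration quotient:
H^q_{\text{\'et}}(C, \Z/m\Z(j)) \twoheadrightarrow E_\infty^{0,q}
   = E_2^{0,q} = H^q_{\mathrm{ur}}(F(C), \Z/m\Z(j)).
```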
We note that Proposition \ref{P-4.1} for $m=2$ has been used in \cite{CRR4} and \cite{CRR5} to prove the following finiteness theorem for spinor groups with good reduction: {\it Let $F = K(C)$ be the function field of a smooth geometrically integral curve $C$ over a field $K$ of characteristic $\neq 2$ that satisfies $\mathrm{(F}_2'\mathrm{)}$, and let $V$ be the set of geometric places of $F$ (corresponding to the closed points of $C$). Then the number of $F$-isomorphism classes of spinor groups $G = \mathrm{Spin}_n(q)$ of nondegenerate quadratic forms $q$ over $F$ in $n \geqslant 5$ variables that have good reduction at all $v \in V$ is finite.}
\vskip2mm
Next, for smooth algebraic varieties $X$ of dimension $\geq 2$, relatively little is known about finiteness properties of unramified cohomology in degrees $\geq 3$ in general. We should point out, however, that in the case of quadrics, several finiteness statements can be deduced from results of Kahn \cite{KahnLower} and Kahn, Rost, and Sujatha \cite{KRS}. More precisely, let $q$ be a nondegenerate quadratic form over a field $F$ of characteristic $\neq 2$ and denote by $X$ the associated projective quadric. A natural approach for analyzing unramified cohomology is to
consider the restriction map
$$
\eta_2^i \colon H^i (F, \Z/2 \Z (1)) \to H^i_{\rm ur}(F(X), \Z/ 2 \Z(1)).
$$
If $\dim X > 2$, it is shown in \cite{KahnLower} that the cokernel of $\eta_2^3$ has order at most 2, while the main results of \cite{KRS} imply that $\vert {\rm coker}~\eta_2^4 \vert \leq 4$ for $\dim X > 4.$ Consequently, if $F = K$ is a field of type $\mathrm{(F}_2'\mathrm{)}$, we see that $H^3_{{\rm ur}}(K(X), \Z/ 2 \Z (1))$ and $H^4_{{\rm ur}}(K(X), \Z / 2 \Z (1))$ are finite in the respective cases.
Returning now to the general case, the edge map in (\ref{E-EtUnram}) is typically not surjective for varieties of dimension $\geq 2.$
However, the finiteness of unramified cohomology in degree 3 in this setting turns out to be closely related to the question of the finite generation of the Chow group $CH^2 (X)$ of codimension 2 cycles modulo rational equivalence. First, we note the following (well-known) general fact about first quadrant spectral sequences satisfying condition (b) of Proposition \ref{P-BlochOgusSS}.
\begin{lemma}\label{L-SpecSeqLemma}
Let $E_2^{p,q} \Rightarrow E^{p+q}$ be a first quadrant spectral sequence. Assume that $E_2^{p,q} = 0$ for $p > q$. Then there is an exact sequence
$$
E^3 \stackrel{e}{\longrightarrow} E_2^{0,3} \stackrel{d_2}{\longrightarrow} E_2^{2,2} \to E^4,
$$
where $e$ is the usual edge map and $d_2$ is the differential of the spectral sequence.
\end{lemma}
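Since the lemma is stated without proof, we indicate the standard low-degree argument (our sketch):

```latex
% Sketch (our addition). The vanishing E_2^{p,q} = 0 for p > q gives
% E_2^{2,1} = E_2^{3,0} = 0, as well as E_3^{3,1} = E_4^{4,0} = 0, so the
% only possibly nonzero differential out of E_2^{0,3} is
%     d_2 : E_2^{0,3} --> E_2^{2,2},
% whence E_infinity^{0,3} = ker d_2, and the edge map E^3 ->> E_infinity^{0,3}
% gives exactness at E_2^{0,3}. Similarly, no nonzero differential enters or
% leaves E_3^{2,2}, so E_infinity^{2,2} = coker(d_2); since
% E_infinity^{3,1} = E_infinity^{4,0} = 0, the filtration on E^4 yields an
% inclusion E_infinity^{2,2} = F^2 E^4 \subseteq E^4, giving exactness
% at E_2^{2,2}.
```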
\vskip2mm
Applying Lemma \ref{L-SpecSeqLemma} to the Bloch-Ogus spectral sequence (with $b =2$), we thus obtain, for any smooth algebraic variety $X$ over an arbitrary field $F$ and any positive integer $m$ prime to ${\rm char}~F$, an exact sequence
\begin{equation}\label{E-BOChow1}
\he^3 (X, \Z / m \Z (2)) \to H^0 (X, \mathcal{H}^3(\Z / m \Z (2))) \to H^2 (X, \mathcal{H}^2 (\Z / m \Z (2))) \to \he^4 (X, \Z / m \Z (2)).
\end{equation}
On the other hand, a well-known consequence of Quillen's proof of Gersten's conjecture in algebraic $K$-theory is the existence of an isomorphism
$$
H^2 (X, \mathcal{H}^2 (\Z / m \Z (2))) \simeq CH^2 (X)/m
$$
(see, e.g., the proof of \cite[Theorem 7.7]{BO}). Thus, using Proposition \ref{P-BlochOgusSS}(c), we may rewrite (\ref{E-BOChow1}) as
\begin{equation}\label{E-BOCHow2}
\he^3 (X, \Z / m \Z (2)) \to H^3_{\text{ur}} (F(X), \Z / m \Z(2)) \to CH^2 (X)/m \to \he^4 (X, \Z / m \Z (2)).
\end{equation}
In view of Corollary \ref{C-FinEt}, we therefore have
\begin{prop}\label{P-4.3}
Let $K$ be a field, $X$ a smooth, geometrically integral algebraic variety over $K$, and $m$ a positive integer prime to ${\rm char}~K.$ If $K$ is of type $\Fpm$, then the
unramified cohomology group $H^3_{\mathrm{ur}} (K(X), \Z/ m \Z (2))$ is finite if and only if $CH^2 (X)/m$ is finite. In particular, $H^3_{\mathrm{ur}} (K(X), \Z/ m \Z (2))$ is finite if $CH^2(X)$ is finitely generated.
\end{prop}
\vskip2mm
\noindent The two statements of Theorem \ref{T-MainThm} are now contained in Propositions \ref{P-4.1} and \ref{P-4.3}, which concludes the proof.
\vskip5mm
\noindent {\bf Remark 4.5.} The question of the finite generation of $CH^2(X)$ (or, at least, the finiteness of $CH^2(X)/m$) is in general a wide-open problem; we mention a couple of cases where the affirmative answer is known.
\vskip2mm
\noindent (a) \parbox[t]{16cm}{(cf. \cite[\S 4.3]{CT-SB}) Let $K$ be a field of characteristic 0 of type $\Fpm$ and $X$ a smooth, geometrically integral algebraic variety of dimension $d$ over $K$. Write $\bar{X} = X \times_K \bar{K}$ and suppose there exists a dominant rational map
$$
\mathbb{A}_{\bar{K}}^{d-1} \times_{\bar{K}} C \dashrightarrow \bar{X},
$$
where $C/ \bar{K}$ is an integral curve. Then, using Corollary \ref{C-FinEt}, the argument in \cite[Theorem 4.3.7]{CT-SB} shows that $CH^2(X)/m$ is finite. One of the key points is that for any smooth algebraic variety $U$ over $K$, the group ${}_mCH^2(U)$ is finite: indeed, as observed in \cite[Corollaire 2]{CTSS}, the Merkurjev-Suslin theorem implies that ${}_mCH^2(U)$ is a subquotient of $\he^3 (U, \Z / m \Z (2)).$}
\vskip2mm
\noindent (b) \parbox[t]{16cm}{Let $X$ be a noetherian scheme. Recall that $CH_0(X)$ is defined as the cokernel of the natural map
$$
\bigoplus_{x \in X_1} K_1 (\kappa(x)) \to \bigoplus_{x \in X_0} K_0 (\kappa(x))
$$
induced by valuations (or, equivalently, coming from the localization sequence in algebraic $K$-theory). One of the key results of higher-dimensional global class field theory is that if $X$ is a regular scheme of finite type over $\Z$, then $CH_0(X)$ is a finitely generated abelian group. This was initially established in the work of Bloch \cite{BlochCFT}, Kato and Saito \cite{Kato-Saito}, and Colliot-Th\'el\`ene, Soul\'e, and Sansuc \cite{CTSS}; a more recent treatment was given by Kerz and Schmidt \cite{KerzSchmidt} based on ideas of Wiesend. In particular, if $X$ is a smooth irreducible algebraic surface over a finite field, then $CH_0(X)$ is finitely generated.}
\vskip3mm
\noindent To conclude this section, let us observe that the last example points to a connection between finiteness properties of unramified cohomology and Bass's conjecture on the finite generation of algebraic $K$-groups. We refer the reader to \cite{Geiss} for a detailed exposition and precise statements of Bass's conjecture and its motivic analogues.
\section{Unramified cohomology of tori}\label{S-5}
In this section, we will establish Theorem \ref{T-MainThm2} on the finiteness of the unramified cohomology of tori and discuss some applications to $R$-equivalence.
First, however, we would like to put in context the definition of the degree 1 unramified cohomology with coefficients in a torus given in \S \ref{S-1}. Let $K$ be a field equipped with a discrete valuation $v$, with valuation ring $\mathcal{O}_v$ and residue field $\kappa(v).$ Suppose that $m$ is a positive integer prime to ${\rm char}~\kappa(v).$ Recall that by absolute purity and the Gersten conjecture for discrete valuation rings, there is an exact sequence
$$
0 \to \he^i(\mathcal{O}_v, \Z/m \Z(j)) \stackrel{f_v}{\longrightarrow} H^i(K, \Z /m \Z(j)) \stackrel{\partial_{\mathcal{O}_v}^i}{\longrightarrow} H^{i-1}(\kappa(v), \Z /m \Z(j-1)) \to 0
$$
where $f_v$ is the natural map and $\partial_{\mathcal{O}_v}^i$ coincides up to sign with the residue map $\partial_v^i$ introduced in \S\ref{S-1} (see \cite[3.3, 3.6]{CT-SB}). In particular, $\ker \partial_v^i = \ker \partial_{\mathcal{O}_v}^i = \he^i(\mathcal{O}_v, \Z/m \Z(j)).$ It follows that if $X$ is a smooth algebraic variety over a field $F$ with function field $F(X)$, then for the unramified cohomology with respect to the geometric places, we have
\begin{equation}\label{E-DefUnr1}
H^i_{{\rm ur}}(F(X), \Z / m \Z(j)) = \bigcap_{P \in X^{(1)}} \he^i(\mathcal{O}_{X,P}, \Z / m \Z (j)),
\end{equation}
where $\mathcal{O}_{X,P}$ denotes the local ring of $X$ at $P.$
Now let $X$ be as above and $\mathbb{G}$ be a group scheme of multiplicative type over $X$. In this case,
we define the degree 1 unramified cohomology $H^1_{\rm ur}(F(X), G)$ of the generic fiber $G = \mathbb{G}_{F(X)}$ by
\begin{equation}\label{E-DefUnr2}
H^1_{{\rm ur}}(F(X),G) = {\rm Im}~(\he^1(X, \mathbb{G}) \stackrel{f}{\longrightarrow} H^1(F(X),G)),
\end{equation}
where $f$ is the natural map. To make sense of this definition, we observe that by \cite[\S 6]{CTS}, we have
$$
H^1_{{\rm ur}}(F(X),G) = \bigcap_{P \in X^{(1)}} {\rm Im}~f_P^G,
$$
where
$$
f_P^G \colon \he^1(\mathcal{O}_{X,P}, \mathbb{G}) \to H^1(F(X),G)
$$
are the natural maps, which are injective for all points $P$ of codimension 1 according to \cite{CTSCoh}. Comparing (\ref{E-DefUnr2}) with (\ref{E-DefUnr1}), we see that the definition of degree 1 unramified cohomology with coefficients in a group of multiplicative type (in particular, a torus) that we adopted in \S\ref{S-1} is consistent
with the usual definition using residue maps in the case of finite coefficients.
Returning now to the set-up of Theorem \ref{T-MainThm2}, let $X$ be a smooth, geometrically integral variety over a field $K$ and $\mathbb{T}$ be a torus over $X$. Set $T = \mathbb{T}_{K(X)}$ to be the generic fiber of $\mathbb{T}$ and denote by $K(X)_T$ the minimal splitting field of $T$ inside a fixed separable closure $\overline{K(X)}$ of $K(X).$
We assume that $K$ is of type $\Fpm$ for some positive integer $m$ that is prime to ${\rm char}~K$ and divisible by $n = [K(X)_T :K(X)]$.
\vskip2mm
\noindent {\it Proof of Theorem \ref{T-MainThm2}.} First, as in the proof of Corollary \ref{C-FinTori}(b), Hilbert's Theorem 90 implies that $H^1(K(X), T) = H^1 (K(X)_T/K(X), T)$ is a group of exponent $n.$ Consequently, the natural map $\he^1(X, \mathbb{T}) \to H^1(K(X),T)$ factors through $\he^1(X, \mathbb{T})/n \he^1(X, \tT)$, so, to complete the proof, we need to show that the latter quotient group is finite. For this, we observe that the Kummer sequence
$$
1 \to \tT[n] \to \tT \stackrel{x \mapsto x^n}{\longrightarrow} \tT \to 1
$$
yields an embedding
\begin{equation}\label{E-Embedding}
\he^1(X, \tT)/n \he^1(X, \tT) \hookrightarrow \he^2(X, \tT[n]).
\end{equation}
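Explicitly (our spelling-out), the embedding (\ref{E-Embedding}) comes from the long exact cohomology sequence associated to the Kummer sequence:

```latex
% Our spelling-out: the long exact sequence contains the segment
\he^1(X, \tT) \stackrel{\times n}{\longrightarrow} \he^1(X, \tT)
   \stackrel{\delta}{\longrightarrow} \he^2(X, \tT[n]),
% so the connecting map delta induces an injection of the cokernel of
% multiplication by n, namely of \he^1(X,\tT)/n\he^1(X,\tT),
% into \he^2(X,\tT[n]).
```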
To see that $\he^2(X, \tT[n])$ is finite, we note that since $X$ is a smooth algebraic variety, $\mathbb{T}$ is isotrivial by \cite[Expos\'e X, Th\'eor\`eme 5.16]{SGA3}, so there exists a connected principal \'etale Galois cover $Y \to X$ with (finite) Galois group $\mathfrak{G}$ that splits $\mathbb{T}.$ Then $\tT[n]_Y$ is a product of groups of the form $\mu_{n', Y}$ with $n' \vert n$, and it follows from Remark 2.5 and Corollary \ref{C-FinEt} that the groups $\he^i(Y, \tT[n]_Y)$ are finite for all $i \geq 0$. Then the groups $H^p(\mathfrak{G}, \he^q(Y, \tT[n]_Y))$ are finite, and the Hochschild-Serre spectral sequence
$$
E_2^{p,q} = H^p(\mathfrak{G}, \he^q(Y, \tT[n]_Y)) \Rightarrow \he^{p+q}(X, \tT[n])
$$
gives the finiteness of $\he^2(X, \tT[n])$, as needed. $\Box$
\vskip2mm
\noindent {\bf Remark 5.1.} Our proof actually shows that if $K$ is of type $\Fpm$ and $m H^1(K(X), T) = 0$, then $H^1_{{\rm ur}}(K(X), T)$ is finite.
\vskip2mm
As we already mentioned in \S \ref{S-1}, our analysis of the finiteness properties of unramified cohomology is motivated in part by the general program of studying absolutely almost simple groups having the same maximal tori over the field of definition. An important component of this work is centered around absolutely almost simple algebraic groups having good reduction at a specified set of discrete valuations. In this regard, we anticipate that Theorem \ref{T-MainThm2} will be useful in addressing the following conjecture (see \cite[\S 6]{CRR5}).
\vskip2mm
\noindent {\bf Conjecture 5.2.} {\it Let $K$ be a field of type $\tF$, and $C$ a smooth affine geometrically integral curve over $K$ with function field $K(C).$ Let $V_0$ be the set of geometric places of $K(C)$, associated with the closed points of $C$. Given an absolutely almost simple simply
connected algebraic $K(C)$-group $G$ for which $\mathrm{char}\: K$ is ``good," the set of $K(C)$-isomorphism classes of $K(C)$-forms of $G$ that have good reduction at all $v \in V_0$ is finite.}
\addtocounter{thm}{2}
\vskip2mm
The proof of Theorem \ref{T-MainThm2} has the following consequence for flasque tori.
First, recall that a torus $\tT$ over a normal, connected scheme $X$ is said to be {\it flasque} if there exists a connected principal \'etale Galois cover $Y \to X$ with (finite) Galois group $\mathfrak{G}$ that splits $\mathbb{T}$ and such that $H^{-1}(\mathfrak{H}, \widehat{\tT}(Y)) = 0$ for all subgroups $\mathfrak{H}$ of $\mathfrak{G}$, where $\widehat{\tT}$ denotes the group of characters of $\tT$ and we consider Tate cohomology (this definition, introduced in \cite{CTS-Flasque} in a slightly more general form, is a natural extension to schemes of the more classical notion of flasque tori over fields, which are discussed in detail in \cite{CTS-Requiv}). An important observation in the theory of flasque tori concerns the existence of {\it flasque resolutions}. Namely, if $X$ is a locally noetherian, normal, connected scheme, then for any torus $\tT$ over $X$, there exists an exact sequence of $X$-tori
$$
1 \to \mathbb{S} \to \mathbb{E} \to \tT \to 1,
$$
where $\mathbb{S}$ is flasque and $\mathbb{E}$ is quasi-trivial (see \cite[Proposition 1.3]{CTS-Flasque}).
\begin{cor}\label{C-FlasqueFinite}
Let $X$ be a smooth, geometrically integral algebraic variety over a field $K$ and $T$ a flasque torus over the function field $K(X)$. Suppose $K$ is of type $\Fpm$
for a positive integer $m$ prime to ${\rm char}~K.$ If $m H^1(K(X), T) = 0$ $($in particular, if $[K(X)_T:K(X)]$ divides $m)$, then $H^1(K(X), T)$ is finite.
\end{cor}
\begin{proof}
Since $T$ is flasque over $K(X)$, it follows from \cite[Proposition 1.5]{CTS-Flasque} that there exists a flasque torus $\tT$ over a Zariski open subset $U \subset X$ whose generic fiber is $T$. According to \cite[Theorem 2.2]{CTS-Flasque}, the fact that $\tT$ is flasque implies that
the natural map $\he^1(U, \tT) \to H^1(K(X), T)$ is surjective.
On the other hand, since $m H^1(K(X), T) = 0$,
this map factors through the quotient $\he^1(U, \tT)/m \he^1(U, \tT)$. But, as we have seen in the proof of Theorem \ref{T-MainThm2}, the latter quotient is finite, and our claim follows.
\end{proof}
We would like to conclude this section with the following application of Corollary \ref{C-FlasqueFinite} to $R$-equivalence on algebraic tori (see, e.g., \cite[\S 4]{CTS-Requiv} for the relevant definitions).
\begin{cor}\label{C-RequivFinite}
Suppose $X$ is a smooth, geometrically integral algebraic variety over a field $K$ and $T$ is a torus defined over the function field $K(X).$
Assume that $K$ is of type $\Fpm$ for some positive integer $m$ that is prime to ${\rm char}~K$ and divisible by $n = [K(X)_T :K(X)]$. Then
the group of $R$-equivalence classes $T(K(X))/R$ is finite.
\end{cor}
\begin{proof}
Let
$$
1 \to S \to E \to T \to 1
$$
be a flasque resolution of $T$ over $K(X).$ Then, according to \cite[\S 5, Th\'eor\`eme 2]{CTS-Requiv} and \cite[Theorem 3.1]{CTS-Flasque}, there is an isomorphism of abelian groups
$$
T(K(X))/R \stackrel{\sim}{\longrightarrow} H^1(K(X), S).
$$
On the other hand, it follows from \cite[Proposition 6]{CTS-Requiv} and the restriction-corestriction sequence that $n H^1(K(X), S) = 0.$ Since $S$ is flasque, Corollary \ref{C-FlasqueFinite} implies that $H^1(K(X), S)$ is finite, which completes the proof.
\end{proof}
\bibliographystyle{amsplain}
VIN Number Location.com
Landrover Defender VIN Number Locations
Produced from January 1983.
Chassis/VIN Plate
The Landrover Defender VIN plate is a riveted metal plate on top of the brake servo unit on the offside of the bulkhead. The plate is held by two silver rivets, one at the top and one at the bottom.
Visible VIN
On a riveted metal plate at the nearside base of the front windscreen, with the Landrover logo repeated through the plate.
Stamped VIN Number
The Defender VIN number is stamped, with six-point asterisk security marks, on the front offside chassis member, viewed through the top front wheel arch.
Engine Number
On 4-cylinder petrol versions, the number is on the offside of the block, on a horizontal lug next to the distributor. Turbo-diesel versions have the number on the offside rear, behind the fuel pump.
V8 versions have the number by the dipstick.
On some later models it is by the exhaust manifold, viewed from underneath.
VIN Number Example
SALLDVM67KA123456
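As a rough illustration, a 17-character VIN splits into three fields under the ISO 3779 standard: the world manufacturer identifier (WMI, characters 1-3), the vehicle descriptor section (VDS, characters 4-9) and the vehicle identifier section (VIS, characters 10-17). A minimal sketch (the field split follows the standard; the serial digits in the example above are illustrative):

```python
def split_vin(vin: str) -> dict:
    """Split a 17-character VIN into its ISO 3779 sections."""
    vin = vin.strip().upper()
    if len(vin) != 17:
        raise ValueError("a VIN must be exactly 17 characters")
    return {
        "wmi": vin[:3],    # world manufacturer identifier
        "vds": vin[3:9],   # vehicle descriptor section
        "vis": vin[9:],    # vehicle identifier section
    }

parts = split_vin("SALLDVM67KA123456")
print(parts["wmi"])  # SAL (the Land Rover WMI)
```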
Download - Raja Ravivarma Paintings 1.0
Raja Ravivarma Paintings 1.0
Rajasekhar Battu
Our application "Ravivarma Paintings" brings Ravivarma oil paintings to your device with a pretty good layout, and also allows you to share the paintings through mail.
Raja...
Download Raja Ravivarma Paintings
Letter Tag Print Pullover Hoodies
Colorful Planet Astronaut Print Sweatshirts
Astronaut Graphic Print Sweatshirt
Cartoon Astronaut Print T-Shirts
Cotton Cartoon Astronaut Print T-Shirts
Classical Oil Print Leaf Short Sleeve Shirts
Ethnic Floral Print Casual Shirts
Halloween Allover Funny Cat Print Shirts
Astronaut&Earth Graphic T-shirts
Letter Printed Light Casual T-shirts
Flowers Printed Short Sleeve Lapel Shirt
Sample Cartoon Cat Graphic T-Shirt
100% Cotton Sunrise Print T-Shirts
Astronaut Graphic Basic T-shirts
Space Bear Astronaut Print Hoodies
Cherry Blossoms Text Hoodies
Forest Landscape Graphic Print Hoodie
Helping Hand Graphic Hoodies
Cotton Cartoon Black Cat Print T-Shirts
Graphic Rose Print Hoodies
100% Cotton Cat Printed Tee
Chinese Style Printed T-shirts
Banana & Character Print Cotton T-Shirts
Vintage Floral Print Holiday Shirts
100% Cotton Moon Eclipse Tee
Astronaut Chest Print Solid Color T-Shirts
Vintage Pictorial Printed Beach Shirt
Touch Hand Graphic Print 100% Cotton Hoodie
Japanese Style Graphic Back Print Hoodies
Cotton Rose Printing Plain Sweatshirts
100% Cotton Avocado Printed Shirt
Cute Cat Pinstripe Graphic T-Shirts
Abstract Element Pattern Print Shirts
100% Cotton Hand-Script Geometry Tee
Mens Funny Graphic Hoodies
Sushi Graphic Print Ukiyoe Hoodies
Abstract Leaf Print Shirt
Funny Portrait Graphic Print Sweatshirts
Astronaut Graphic Hoodies
Landscape Graphic Print Hoodies
Graphic Text Back Print Sweatshirts
100% Cotton Touch Hands Graphic Print Hoodie
Cartoon Ghost Print T-Shirts
Letter Print Back Graphic Hoodies
Funny Kuso Mona Lisa Oil Print T-Shirts
Cartoon Bear Astronaut Print Hoodies
Cartoon Mushroom Graphic T-Shirt
Colorblock Line Print Sweatshirt
Cotton Planet Print Round Neck T-Shirts
Rose Graphic Character Print Hoodies
Cotton Cartoon Dinosaur Graphic Print T-Shirt
Planet Printing Round Neck T-Shirts
Astronaut Print Hoodie
Funny Cartoon Cat Graphic T-Shirt
Figure Graphic Chest Print Hoodies
Sun & Planet Graphic Printed T-shirts
Halloween Cartoon Animal Print Hoodies
Geometric Printed Casual Shirts
TITLE: An ambiguous statement from Halmos' Naive Set Theory
QUESTION [2 upvotes]: I'm reading the proof of Zorn's lemma from Halmos' Naive Set Theory. There is a small detail (page 63) that I can't understand. What Halmos essentially writes is the following.
Let $X$ be a set partially ordered by $\preccurlyeq$ such that every chain in $X$ has an upper bound. Let $^1$ $\mathcal S:=\{\bar s(x):x\in X\}$. Let $\mathcal X$ be the set of all (and only) the chains in $X$. It is clearly seen that for any $\mathcal C\in \mathcal X$ (i.e., $\mathcal C$ is a chain in $X$) there is an $a\in X$ such that $\mathcal C\subseteq\bar s(a)\in\mathcal S$.
Then he makes the following comment.
Since each set in $\mathcal X$ is dominated by some set in $\mathcal S$, the passage from $\mathcal S$ to $\mathcal X$ cannot introduce any new maximal elements.
Question: What is meant by this comment? Does it mean that any maximal element of $\mathcal X$ belongs to $\mathcal S$? But this is not necessarily true! So I must be misinterpreting his comment. Can you please elaborate on what Halmos means?
$^1$ For any $x\in X$, the initial segment of $x$ is defined as $\bar s(x):=\{a\in X:a\preccurlyeq x\}$.
REPLY [3 votes]: On further thought, that is a bit sloppy. If we consider the four element Boolean algebra, it has two maximal chains but only one maximal initial segment (namely the whole thing).
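This counterexample can be checked by brute force; here is a small sketch (my illustration, modeling the four-element Boolean algebra as the subsets of $\{1,2\}$ ordered by inclusion):

```python
from itertools import combinations

# Elements of the four-element Boolean algebra, ordered by inclusion.
elems = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]

def is_chain(s):
    # A chain is a set of pairwise comparable elements.
    return all(a <= b or b <= a for a, b in combinations(s, 2))

# All nonempty chains, then the maximal ones under inclusion.
chains = [set(c) for r in range(1, 5) for c in combinations(elems, r) if is_chain(c)]
maximal_chains = [c for c in chains if not any(c < d for d in chains)]

# Initial segments s(x) = {a : a <= x}, then the maximal ones.
segments = [set(a for a in elems if a <= x) for x in elems]
maximal_segments = [s for s in segments if not any(s < t for t in segments)]

print(len(maximal_chains))    # 2: {0,{1},{1,2}} and {0,{2},{1,2}}
print(len(maximal_segments))  # 1: only the whole algebra
```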
What is true is that maximal elements in $\mathcal{X}$ yield maximal elements in $\mathcal{S}$, since every chain has an upper bound: if $A$ is a maximal chain and $a$ is an upper bound of $A$ (which exists by assumption), then $\overline{s}(a)$ is an initial segment containing $A$ and hence is maximal in $\mathcal{S}$.
(Why is $\overline{s}(a)$ maximal in $\mathcal{S}$ if $A$ is maximal in $\mathcal{X}$? Suppose $\overline{s}(b)\supsetneq\overline{s}(a)$. Then $b>a$. But this means $A\cup\{b\}$ is a chain properly containing $A$, which can't happen.)
Rephrasing the above, and this is really the point, we have:
If $\mathcal{X}$ has a maximal element then $\mathcal{S}$ has a maximal element.
This is what we really want for this proof.
\begin{document}
\title{New doubly even self-dual codes having minimum weight 20}
\author{
Masaaki Harada\thanks{
Research Center for Pure and Applied Mathematics,
Graduate School of Information Sciences,
Tohoku University, Sendai 980--8579, Japan.
email: {\tt mharada@tohoku.ac.jp}.}
}
\date{}
\maketitle
\begin{abstract}
In this note, we construct new doubly even self-dual codes having minimum
weight $20$ for lengths $112$, $120$ and $128$.
This implies that there are at least three inequivalent
extremal doubly even self-dual codes of length $112$.
\end{abstract}
\section{Introduction}
Self-dual codes are an important class of linear codes for both
theoretical and practical reasons (see~\cite{RS-Handbook}).
It is a fundamental problem to determine the largest minimum weight
among self-dual codes of a given length and to construct self-dual
codes with the largest minimum weight.
Let $\FF_2$ denote the finite field of order $2$.
Codes over $\FF_2$ are called {\em binary} and
all codes in this note are binary.
The \textit{dual code} $C^{\perp}$ of a code
$C$ of length $n$ is defined as
$
C^{\perp}=
\{x \in \FF_2^n \mid x \cdot y = 0 \text{ for all } y \in C\},
$
where $x \cdot y$ is the standard inner product.
A code $C$ is called
\textit{self-dual} if $C = C^{\perp}$.
A self-dual code $C$ is {\em doubly even} if all
codewords of $C$ have weight divisible by four, and {\em
singly even} if there is at least one codeword of weight $\equiv 2
\pmod 4$.
It is known that a self-dual code of length $n$ exists
if and only if $n$ is even, and
a doubly even self-dual code of length $n$
exists if and only if $n$ is divisible by eight.
The minimum weight $d$ of a doubly even self-dual code of length $n$
is bounded by
\begin{equation}\label{eq:bound}
d \le 4 \left\lfloor{\frac {n}{24}} \right\rfloor + 4,
\end{equation}
\cite{MS73}.
A doubly even self-dual code meeting the bound is called {\em extremal}.
In this note, we study the existence of doubly even self-dual
codes having minimum weight $20$.
By~\eqref{eq:bound}, if there is a doubly even
self-dual code of length $n$ and minimum weight $20$, then
$n \ge 96$.
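As a quick numerical check (our illustration; the helper name is ours), evaluating the bound at the lengths considered in this note shows that minimum weight $20$ is extremal for $96 \le n \le 112$ but not for $n = 120, 128$:

```python
def mallows_sloane_bound(n: int) -> int:
    """Upper bound 4*floor(n/24) + 4 on the minimum weight of a
    doubly even self-dual code of length n (n divisible by 8)."""
    assert n % 8 == 0
    return 4 * (n // 24) + 4

for n in (96, 104, 112, 120, 128):
    print(n, mallows_sloane_bound(n))
# Lengths 96, 104, 112 give the bound 20, so weight-20 codes are extremal
# there; lengths 120 and 128 give the bound 24.
```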
For length $96$, it is unknown whether there is an
extremal doubly even self-dual code.
For length $104$, the extended quadratic residue code
is the only known extremal doubly even self-dual code
(see~\cite{RS-Handbook}).
For length $112$, the first extremal doubly even self-dual
code was found in~\cite{H112}.
For lengths $120$ and $128$, it is unknown whether there is an
extremal doubly even self-dual code.
The first doubly even self-dual code
of length $120$ and minimum weight $20$
was found in~\cite{GNW}.
Then $25$ more doubly even self-dual codes
of length $120$ and minimum weight $20$
were found in~\cite{YW}.
The existence of a doubly even self-dual code
of length $128$ and minimum weight $20$
is known~\cite{G}.
In this note, we construct new doubly even self-dual codes having minimum
weight $20$ for lengths $112$, $120$ and $128$.
This implies that there are at least three inequivalent
extremal doubly even self-dual codes of length $112$.
All computer calculations in this note were
done with the help of {\sc Magma}~\cite{Magma}.
\section{Preliminaries}\label{Sec:2}
Let $C$ be a code of length $n$.
The elements of $C$ are called {\em codewords} and the {\em weight}
$\wt(x)$ of a codeword $x$ is the number of non-zero coordinates.
The {\em support} of a codeword $x=(x_1,x_2,\ldots,x_n)$ is
$\{i \mid x_i=1\}$.
We denote the support of $x$ by $\supp(x)$.
The minimum non-zero weight of all codewords in $C$ is called
the {\em minimum weight} of $C$.
Let $A_i$ be the number of codewords of
weight $i$ in $C$.
The {\em weight enumerator} $W_C$ of $C$ is given by
$\sum_{i=0}^n A_i y^i$.
By Gleason's theorem~\cite{Gleason} (see also~\cite{MS73}),
the weight enumerator $W_C$ of a self-dual code of length $n$
is written as:
\begin{equation}\label{eq:W}
W_C = \sum_{j=0}^{\lfloor n/8 \rfloor} a_j(1+y^2)^{n/2-4j}(y^2(1-y^2)^2)^j,
\end{equation}
for some integers $a_j$.
In addition,
recall that a doubly even self-dual code of length $n$
exists only if $n$ is divisible by eight;
the weight enumerator $W_C$ of a doubly even self-dual code of length $n$
is written as:
\begin{equation}\label{eq:WII}
W_C = \sum_{j=0}^{\lfloor n/24 \rfloor}
a_j(1+14y^4+y^8)^{n/8-3j}(y^4(1-y^4)^4)^j,
\end{equation}
for some integers $a_j$.
Then Mallows and Sloane~\cite{MS73} established the upper bound
\eqref{eq:bound} on the minimum weights of doubly even self-dual codes.
Let $C$ be a singly even self-dual code and
let $C_0$ denote the
subcode of codewords having weight $\equiv0\pmod4$.
Then $C_0$ is a subcode of codimension $1$.
The {\em shadow} $S$ of $C$ is defined to be
$C_0^\perp \setminus C$~\cite{C-S}.
There are cosets $C_1,C_2,C_3$ of $C_0$ such that
$C_0^\perp = C_0 \cup C_1 \cup C_2 \cup C_3 $, where
$C = C_0 \cup C_2$ and $S = C_1 \cup C_3$.
If $C$ is a singly even
self-dual code of length divisible by $8$, then $C$ has two doubly
even self-dual neighbors, namely $C_0 \cup C_1$ and $C_0 \cup C_3$
(see~\cite{BP}).
Let $B_i$ be the number of vectors of
weight $i$ in $S$.
The weight enumerator $W_S$ of $S$ is given by
$\sum_{i=d(S)}^{n-d(S)} B_i y^i$,
where $d(S)$ denotes the minimum weight of $S$.
If $W_C$ is written as in \eqref{eq:W}, then $W_S$ can
be written as follows~\cite[Theorem~5]{C-S}:
\begin{equation}\label{eq:WS}
W_S = \sum_{j=0}^{\lfloor n/8 \rfloor}
(-1)^ja_j2^{n/2-6j}y^{n/2-4j}(1-y^4)^{2j}.
\end{equation}
Two self-dual codes $C$ and $C'$ of length $n$
are said to be {\em neighbors} if $\dim(C \cap C')=n/2-1$.
Two codes are {\em equivalent}
if one can be
obtained from the other by permuting the coordinates.
An {\em automorphism} of a code $C$ is a permutation of the coordinates of $C$
which preserves $C$.
The set consisting of all automorphisms of $C$ is called the
{\em automorphism group} of $C$.
An $n \times n$ circulant matrix has the following form:
\[
\left(
\begin{array}{ccccc}
r_0&r_1&r_2& \cdots &r_{n-1} \\
r_{n-1}&r_0&r_1& \cdots &r_{n-2} \\
\vdots &\vdots & \vdots && \vdots\\
r_1&r_2&r_3& \cdots&r_0
\end{array}
\right),
\]
so that each successive row is a cyclic shift of the previous one.
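A circulant matrix is determined by its first row: the $(i,j)$-entry is
$r_{(j-i) \bmod n}$. A one-line construction (an illustrative Python sketch)
is:

```python
def circulant(r):
    """n x n circulant matrix with first row r; each successive row is
    the cyclic (right) shift of the previous one."""
    n = len(r)
    return [[r[(j - i) % n] for j in range(n)] for i in range(n)]

print(circulant([0, 1, 2]))
# [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
```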
Let $A$ and $B$ be $n \times n$ circulant matrices.
Let $C$ be a code with generator matrix of the following form:
\begin{equation} \label{eq:GM}
\left(
\begin{array}{ccc@{}c}
\quad & {\Large I_{2n}} & \quad &
\begin{array}{cc}
A & B \\
B^T & A^T
\end{array}
\end{array}
\right),
\end{equation}
where $I_n$ denotes the identity matrix of order $n$
and $A^T$ denotes the transpose of a matrix $A$.
It is easy to see that $C$ is self-dual if
$AA^T+BB^T=I_n$.
The codes with generator matrices of the form~\eqref{eq:GM}
are called {\em four-circulant}~\cite{4cir}.
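The self-duality criterion $AA^T+BB^T=I_n$ over GF(2) is straightforward to
test numerically. The sketch below (plain Python; the helper names are ours)
builds the circulants from their first rows and checks the condition; applied
to the first rows $r_A,r_B$ reported later in this note (as $0/1$ lists), it
should return \texttt{True}:

```python
def circulant(r):
    n = len(r)
    return [[r[(j - i) % n] for j in range(n)] for i in range(n)]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul_gf2(M, N):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*N)]
            for row in M]

def is_four_circulant_self_dual(rA, rB):
    """Check A A^T + B B^T = I over GF(2) for circulants with first rows rA, rB."""
    n = len(rA)
    A, B = circulant(rA), circulant(rB)
    P = matmul_gf2(A, transpose(A))
    Q = matmul_gf2(B, transpose(B))
    I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    return [[(P[i][j] + Q[i][j]) % 2 for j in range(n)] for i in range(n)] == I

print(is_four_circulant_self_dual([1], [0]))       # True  (a [4,2] self-dual code)
print(is_four_circulant_self_dual([1, 1], [1, 0])) # True
print(is_four_circulant_self_dual([1], [1]))       # False
```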
In this note,
we construct a
singly even self-dual four-circulant code
of length $112$ and minimum weight $18$ and
doubly even self-dual four-circulant codes
of length $n$ and minimum weight $20$ for $n=112,120,128$,
by a non-exhaustive search.
An exhaustive search is beyond our current computer resources.
\section{New extremal doubly even self-dual codes of length 112}
\subsection{A singly even self-dual code
of length 112 and minimum weight 18}
By a non-exhaustive search, we found a
singly even self-dual four-circulant code $C_{112}$
of length $112$ and minimum weight $18$.
The first rows $r_A$ and $r_B$ of $A$ and $B$
in the generator matrix \eqref{eq:GM}
of $C_{112}$ are as follows:
\begin{align*}
r_A&=(1000010101101101111011011010),\\
r_B&=(0010001110000110001010000001),
\end{align*}
respectively.
Let $C$ be a singly even self-dual
code of length $112$ and minimum weight $18$.
Let $S$ be the shadow of $C$.
From \eqref{eq:W} and \eqref{eq:WS},
the possible weight enumerators of $C$ and $S$
are determined as follows:
\begin{align*}
W_{112}^C=&
1
+( 99176 + a)y^{18}
+( 355740 + 16 b + 2 a)y^{20}
\\&
+( 1745240 + 1024 c - 64 b - 17 a)y^{22}
\\&
+( 44404374 + 65536 d - 10240 c - 160 b - 36 a)y^{24}
\\&
+( 572977944 - 4194304 e - 1048576 d + 33792 c + 960 b + 135 a)y^{26}
\\&
+ \cdots,
\\
W_{112}^S=&
ey^4
+(- 26 e + d)y^8
+( 325 e - 24 d - c)y^{12}
\\&
+(- 2600 e + 276 d + 22 c + b)y^{16}
\\&
+( 14950 e - 2024 d - 231 c - 20 b - 4 a)y^{20}
+ \cdots,
\end{align*}
respectively, where $a,b,c,d,e$ are integers.
In order to determine the weight enumerator of $C_{112}$,
we found that
\[
A_{18}=8512,
d(S)=16,
B_{16}=728.
\]
This gives
\[
a=-90664,
b=728,
c=d=e=0.
\]
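The values of $a,b,c,d,e$ follow by back-substitution: $d(S)=16$ forces
$B_4=B_8=B_{12}=0$, which gives $e=d=c=0$ in turn, and then $B_{16}$ and
$A_{18}$ determine $b$ and $a$. A small script (ours, for illustration)
carries this out using the coefficients displayed above:

```python
# Solve for (a, b, c, d, e) from A_18 = 8512, d(S) = 16 and B_16 = 728.
A18, B16 = 8512, 728

e = 0                                    # B_4  = e                     = 0
d = 26 * e                               # B_8  = -26e + d              = 0
c = 325 * e - 24 * d                     # B_12 = 325e - 24d - c        = 0
b = B16 + 2600 * e - 276 * d - 22 * c    # B_16 = -2600e + 276d + 22c + b
a = A18 - 99176                          # A_18 = 99176 + a

print(a, b, c, d, e)   # -90664 728 0 0 0
```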
The weight distribution of $C_{112}$ is listed in Table~\ref{Tab:WD}.
Note that singly even self-dual
codes of length $112$ and minimum weight $18$ with weight
enumerators corresponding to $e=1$ were found in~\cite{H112}.
\begin{table}[thb]
\caption{Weight distribution of $C_{112}$}
\label{Tab:WD}
\begin{center}
{\footnotesize
\begin{tabular}{c|r|c|r}
\noalign{\hrule height0.8pt}
$i$ & \multicolumn{1}{c|}{$A_i$} &
$i$ & \multicolumn{1}{c}{$A_i$} \\
\hline
$ 0,112$& 1& $38, 74$ & 31676520067584 \\
$18, 94$& 8512& $40, 72$ & 109690203298312 \\
$20, 92$& 186060& $42, 70$ & 325630986391040 \\
$22, 90$& 3239936& $44, 68$ & 831288282918576 \\
$24, 88$& 47551798& $46, 66$ & 1829637194737408 \\
$26, 86$& 561437184& $48, 64$ & 3479230392288469 \\
$28, 84$& 5424089452& $50, 62$ & 5725819388994432 \\
$30, 82$& 43459872064& $52, 60$ & 8165553897114152 \\
$32, 80$& 291008417322& $54, 58$ & 10099951175046656 \\
$34, 78$& 1639219687168& $56$ & 10841051388476292 \\
$36, 76$& 7813559379696& &\\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\subsection{New extremal doubly even self-dual codes of length 112}
If $C$ is a singly even
self-dual code of length divisible by $8$, then $C$ has two doubly
even self-dual neighbors
(see Section \ref{Sec:2}).
We verified that the two doubly even self-dual neighbors of
$C_{112}$ have minimum weights $20$ and $16$.
We denote the extremal doubly even self-dual neighbor by $D_{112}$.
The code $D_{112}$ is constructed as
\[
\langle (C_{112} \cap \langle x \rangle^\perp), x \rangle,
\]
where
the support $\supp(x)$ of $x$ is
\[
\{1,2,3,9,10,12,13,16,24,61,66,69,82,89,92,96,97,108,111,112\}.
\]
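The neighbor construction $\langle (C \cap \langle x \rangle^\perp), x
\rangle$ is easy to carry out explicitly for small codes. In the sketch below
(illustrative Python; brute force, so only feasible at toy lengths) we take
the self-dual code $C=\{0000,1100,0011,1111\}$ and $x=0110$ (even weight,
$x \notin C$), obtaining the neighbor $\{0000,0110,1001,1111\}$, which meets
$C$ in a subcode of dimension $n/2-1=1$:

```python
def neighbor(C, x):
    """The neighbor <(C \\cap <x>^perp), x>: keep the codewords of C
    orthogonal to x, then adjoin x (close under addition with x)."""
    D0 = [c for c in C if sum(a * b for a, b in zip(c, x)) % 2 == 0]
    D = set(D0)
    for c in D0:
        D.add(tuple((a + b) % 2 for a, b in zip(c, x)))
    return sorted(D)

C = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]
print(neighbor(C, (0, 1, 1, 0)))
# [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1)]
```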
Moreover, by a non-exhaustive search, we found an extremal
doubly even self-dual four-circulant code $E_{112}$.
The first rows $r_A$ and $r_B$ of $A$ and $B$
in the generator matrix~\eqref{eq:GM}
of $E_{112}$ are as follows:
\begin{align*}
r_A&=(1000000010001000011111001101),\\
r_B&=(0111101000101001110101011110),
\end{align*}
respectively.
We denote the known extremal
doubly even self-dual code in~\cite{H112} by $H_{112}$.
In order to distinguish $H_{112}, D_{112}$ and $E_{112}$,
we consider the following invariant.
Let $C$ be an extremal doubly even self-dual code of length $112$.
Let $M(C)$ be the matrix with rows composed of the codewords of
weight $20$ in $C$,
where the $(0,1)$-matrix $M(C)$ is regarded as a matrix over $\ZZ$.
Let $m_{i,j}$ denote the $(i,j)$-entry of the matrix
$M(C)^T M(C)$.
Then define
\[
m(C)=\{m_{i,j} \mid i,j \in \{1,2,\ldots,112\}\}.
\]
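The invariant $m(C)$ is cheap to compute once the codewords of the relevant
weight are known. As a toy-scale illustration (our own helper; here we use
the weight-$2$ words of a length-$4$ self-dual code in place of weight-$20$
words), $M(C)^T M(C)$ and its set of entries can be formed directly:

```python
def m_invariant(codewords, w):
    """m(C): the set of entries of M^T M, where the rows of M are the
    weight-w codewords of C, regarded as a matrix over the integers."""
    M = [c for c in codewords if sum(c) == w]
    n = len(M[0])
    MtM = [[sum(M[k][i] * M[k][j] for k in range(len(M))) for j in range(n)]
           for i in range(n)]
    return {MtM[i][j] for i in range(n) for j in range(n)}

C = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]
print(sorted(m_invariant(C, 2)))   # [0, 1]
```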
In Table~\ref{Tab:mC}, we list all elements of $m(C)$ for
$C=H_{112},D_{112}$ and $E_{112}$.
Table~\ref{Tab:mC} shows that the three codes are inequivalent.
\begin{prop}
There are at least three inequivalent extremal doubly even self-dual
codes of length $112$.
\end{prop}
\begin{rem}
The code $H_{112}$ has automorphism group of order $112$~\cite{H112}.
We verified that the codes $D_{112}$ and $E_{112}$
have automorphism group of order $112$.
\end{rem}
\begin{table}[thb]
\caption{$m(H_{112}),m(D_{112})$ and $m(E_{112})$}
\label{Tab:mC}
\begin{center}
{\scriptsize
\begin{tabular}{l}
\noalign{\hrule height0.8pt}
\multicolumn{1}{c}{$m(H_{112})$} \\
\hline
10613, 10649, 10661, 10703, 10709, 10715, 10721, 10727, 10733, 10739, 10745, \\
10769, 10775, 10781, 10787, 10799, 10805, 10811, 10823, 10829, 10835, 10841, \\
10847, 10853, 10859, 10865, 10871, 10883, 10895, 10901, 10907, 10913, 10919, \\
10925, 10931, 10937, 10943, 10949, 10967, 10973, 10985, 10991, 10997, 11009, \\
11021, 11033, 11045, 11057, 11063, 11069, 11093, 11099, 11117, 63525\\
\hline\hline
\multicolumn{1}{c}{$m(D_{112})$} \\
\hline
10618, 10663, 10672, 10702, 10708, 10717, 10735, 10750, 10765, 10768, 10771, \\
10777, 10783, 10786, 10789, 10801, 10810, 10819, 10831, 10834, 10837, 10840, \\
10843, 10846, 10849, 10852, 10858, 10861, 10864, 10867, 10873, 10882, 10885, \\
10900, 10903, 10906, 10909, 10912, 10918, 10921, 10924, 10927, 10930, 10936, \\
10945, 10954, 10957, 10978, 10984, 10987, 11002, 11011, 11023, 11041, 11044, \\
11056, 11065, 11080, 11086, 11098, 11110, 63525\\
\hline\hline
\multicolumn{1}{c}{$m(E_{112})$} \\
\hline
10581, 10620, 10641, 10653, 10659, 10668, 10674, 10689, 10698, 10701, 10704, \\
10707, 10719, 10728, 10734, 10749, 10758, 10761, 10764, 10770, 10776, 10779, \\
10782, 10785, 10791, 10794, 10797, 10806, 10809, 10812, 10815, 10818, 10821, \\
10824, 10827, 10830, 10833, 10842, 10848, 10851, 10854, 10860, 10863, 10866, \\
10872, 10875, 10878, 10881, 10884, 10890, 10893, 10896, 10899, 10902, 10905, \\
10911, 10914, 10917, 10923, 10926, 10929, 10932, 10935, 10938, 10941, 10950, \\
10953, 10959, 10965, 10968, 10971, 10977, 10980, 10989, 10992, 10995, 11001, \\
11013, 11016, 11025, 11028, 11040, 11046, 11049, 11052, 11055, 11073, 11085, \\
11103, 11151, 63525\\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\section{New doubly even self-dual codes
of length 120 and minimum weight 20}
From \eqref{eq:WII},
the possible weight enumerator of a doubly even
self-dual code of length $120$ and minimum weight $20$
is determined as follows:
\begin{align*}
&
1
+ a y^{20}
+( 39703755 - 20 a )y^{24}
+( 6101289120 + 190 a )y^{28}
\\&
+( 475644139425 - 1140 a )y^{32}
+( 18824510698240 + 4845 a )y^{36}
\\&
+( 397450513031544 - 15504 a )y^{40}
+( 4630512364732800 + 38760 a )y^{44}
\\&
+( 30531599026535880 - 77520 a )y^{48}
+( 116023977311397120 + 125970 a )y^{52}
\\&
+( 257257766776517715 - 167960 a )y^{56}
\\&
+( 335200280030755776 + 184756 a )y^{60}
+ \cdots,
\end{align*}
where $a$ is the number of codewords of weight $20$.
The first doubly even self-dual code
of length $120$ and minimum weight $20$
was found in~\cite{GNW}.
Then $25$ more doubly even self-dual codes
of length $120$ and minimum weight $20$
were found in~\cite{YW}.
From~\cite[Table~1]{YW},
these codes have different weight enumerators.
By a non-exhaustive search, we found $500$
doubly even self-dual four-circulant
codes of length $120$ and minimum weight $20$.
The numbers $a$ of codewords of weight $20$ in these codes
are listed in Table~\ref{Tab:WD120}.
It follows that
these codes and the $26$ codes in~\cite{GNW} and \cite{YW}
have distinct weight enumerators.
Hence, we have the following:
\begin{prop}
There are at least $526$ inequivalent
doubly even self-dual codes of length $120$ and minimum weight $20$.
\end{prop}
We expect that the number of inequivalent
doubly even self-dual codes of length $120$ and minimum weight $20$
is considerably larger.
The first rows $r_A$ and $r_B$ of $A$ and $B$
in the generator matrices~\eqref{eq:GM}
of the $500$ codes are listed in
\url{http://www.math.is.tohoku.ac.jp/~mharada/Paper/120-d20.txt}.
As an example, we list the first rows $r_A$ and $r_B$
for ten codes in Table~\ref{Tab:120}.
\begin{table}[thbp]
\caption{Weight enumerators for length $120$}
\label{Tab:WD120}
\begin{center}
{\scriptsize
\begin{tabular}{l}
\noalign{\hrule height0.8pt}
\multicolumn{1}{c}{Numbers of codewords of weight $20$} \\
\hline
93180, 93936, 94512, 95136, 95202, 95376, 95496, 95532, 95826, 95946, 95952, 96012, 96096, 96126, \\
96156, 96216, 96240, 96312, 96336, 96360, 96366, 96372, 96486, 96540, 96576, 96666, 96690, 96720, \\
96762, 96780, 96816, 96840, 96846, 96876, 96906, 96912, 96936, 96996, 97026, 97056, 97092, 97116, \\
97176, 97230, 97260, 97266, 97272, 97296, 97326, 97356, 97422, 97446, 97452, 97476, 97566, 97572, \\
97590, 97596, 97626, 97632, 97656, 97716, 97746, 97770, 97776, 97782, 97836, 97842, 97866, 97890, \\
97896, 97926, 97950, 97962, 97986, 98016, 98040, 98076, 98130, 98136, 98166, 98196, 98220, 98226, \\
98250, 98256, 98262, 98286, 98292, 98316, 98346, 98412, 98466, 98496, 98502, 98526, 98532, 98556, \\
98562, 98580, 98586, 98610, 98616, 98622, 98640, 98646, 98670, 98676, 98682, 98700, 98706, 98712, \\
98730, 98742, 98772, 98796, 98802, 98826, 98832, 98856, 98886, 98910, 98916, 98940, 98952, 98976, \\
99000, 99036, 99066, 99090, 99096, 99120, 99126, 99156, 99162, 99180, 99186, 99210, 99216, 99222, \\
99240, 99246, 99252, 99270, 99282, 99306, 99312, 99330, 99336, 99342, 99372, 99390, 99396, 99402, \\
99432, 99450, 99456, 99486, 99516, 99540, 99546, 99576, 99612, 99666, 99672, 99690, 99696, 99702, \\
99720, 99726, 99750, 99756, 99786, 99792, 99810, 99816, 99846, 99876, 99906, 99936, 99942, 99966, \\
99972, 99996, 100026, 100032, 100062, 100086, 100110, 100116, 100122, 100140, 100146, 100170, \\
100176, 100182, 100200, 100206, 100212, 100236, 100242, 100260, 100266, 100290, 100296, 100350, \\
100356, 100380, 100446, 100452, 100476, 100482, 100500, 100506, 100512, 100536, 100542, 100560, \\
100566, 100590, 100596, 100626, 100650, 100656, 100662, 100680, 100686, 100716, 100722, 100746, \\
100752, 100770, 100776, 100782, 100800, 100806, 100842, 100860, 100872, 100896, 100902, 100920, \\
100926, 100956, 100980, 100986, 100992, 101046, 101052, 101070, 101076, 101082, 101106, 101112, \\
101130, 101136, 101142, 101160, 101166, 101196, 101202, 101226, 101232, 101250, 101256, 101280, \\
101286, 101316, 101376, 101382, 101400, 101406, 101412, 101436, 101442, 101472, 101496, 101526, \\
101532, 101550, 101556, 101586, 101616, 101622, 101640, 101646, 101652, 101670, 101676, 101700, \\
101706, 101730, 101736, 101760, 101766, 101772, 101790, 101796, 101802, 101820, 101826, 101850, \\
101856, 101862, 101880, 101892, 101910, 101916, 101940, 101946, 101952, 101970, 101976, 101982, \\
102000, 102006, 102030, 102036, 102042, 102066, 102072, 102096, 102120, 102126, 102150, 102156, \\
102180, 102186, 102210, 102216, 102240, 102246, 102252, 102270, 102312, 102336, 102342, 102360, \\
102366, 102372, 102402, 102420, 102426, 102456, 102480, 102486, 102492, 102516, 102540, 102546, \\
102570, 102576, 102582, 102606, 102636, 102660, 102666, 102672, 102690, 102696, 102702, 102726, \\
102732, 102750, 102756, 102780, 102786, 102792, 102816, 102840, 102846, 102870, 102876, 102906, \\
102930, 102936, 102942, 102966, 102972, 102996, 103002, 103020, 103026, 103032, 103050, 103056, \\
103080, 103086, 103092, 103116, 103140, 103146, 103176, 103182, 103206, 103236, 103266, 103272, \\
103296, 103320, 103326, 103332, 103356, 103380, 103386, 103410, 103416, 103422, 103452, 103500, \\
103506, 103530, 103560, 103566, 103590, 103596, 103632, 103650, 103656, 103686, 103692, 103710, \\
103716, 103722, 103740, 103746, 103752, 103770, 103776, 103800, 103806, 103830, 103836, 103860, \\
103896, 103932, 103962, 103986, 104022, 104046, 104076, 104106, 104166, 104220, 104226, 104232, \\
104256, 104286, 104316, 104346, 104436, 104442, 104496, 104502, 104532, 104556, 104580, 104592, \\
104616, 104622, 104646, 104652, 104676, 104736, 104772, 104796, 104820, 104880, 104886, 104892, \\
104910, 104916, 104970, 104982, 105066, 105096, 105156, 105336, 105396, 105426, 105456, 105510, \\
105546, 105576, 105636, 105666, 105696, 105762, 105966, 106152, 106236, 106266, 106290, 106386, \\
106626, 106662, 106812, 106836, 107220, 107406, 108486, 108600
\\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[thbp]
\caption{Doubly even self-dual codes of length $120$ and minimum weight $20$}
\label{Tab:120}
\begin{center}
{\footnotesize
\begin{tabular}{c|c}
\noalign{\hrule height0.8pt}
$r_A$ & $r_B$ \\
\hline
$(100000111110000011111010001101)$&$(010000000010010010110101001111)$ \\
$(100001100101001111000100110010)$&$(100101100010011100001110011100)$ \\
$(100000011001010111001110101001)$&$(111101110101111111001100111010)$ \\
$(100001001111010001100000000011)$&$(010110111001100010000111011101)$ \\
$(100001011011001010010110000010)$&$(001010001100000100000010001000)$ \\
$(100000110010110111100100111110)$&$(100111000011110011101010001010)$ \\
$(100000000000111000011000111101)$&$(111000101010011000111000111011)$ \\
$(100001001111110101101101110111)$&$(011001000000100100101101110100)$ \\
$(100000001100001100111011101110)$&$(111111010001001110011000000000)$ \\
$(100000101111001000010100100110)$&$(001101011010111010101111011110)$ \\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\section{New doubly even self-dual codes
of length 128 and minimum weight 20}
From \eqref{eq:WII},
the possible weight enumerator of a doubly even
self-dual code of length $128$ and minimum weight $20$
is determined as follows:
\begin{align*}
&
1
+ a y^{20}
+ (13228320 - 6 a) y^{24}
+ (2940970496 - 89 a) y^{28}
\\ &
+ (320411086380 + 1500 a) y^{32}
+ (18072021808640 - 10925 a) y^{36}
\\ &
+ (552523816524960 + 51186 a) y^{40}
+ (9491115264030720 - 173451 a) y^{44}
\\ &
+ (94116072808107840 + 449616 a) y^{48}
+ (549827773219608576 - 920550 a) y^{52}
\\ &
+ (1920594735166941760 + 1518100 a) y^{56}
\\ &
+ (4051982995220321280 - 2040714 a) y^{60}
\\ &
+ (5193576851944293670 + 2250664 a) y^{64}
+ \cdots,
\end{align*}
where $a$ is the number of codewords of weight $20$.
The existence of a doubly even self-dual code
of length $128$ and minimum weight $20$
is known~\cite{G}.
By a non-exhaustive search, we found $200$
doubly even self-dual four-circulant
codes of length $128$ and minimum weight $20$.
These codes have distinct weight enumerators, where
the numbers $a$ of codewords of weight $20$
are listed in Table~\ref{Tab:WD128}.
\begin{prop}
There are at least $200$ inequivalent
doubly even self-dual codes of length $128$ and minimum weight $20$.
\end{prop}
We expect that the number of inequivalent
doubly even self-dual codes of length $128$ and minimum weight $20$
is considerably larger.
The first rows $r_A$ and $r_B$ of $A$ and $B$
in the generator matrices~\eqref{eq:GM}
of the $200$ codes are listed in
\url{http://www.math.is.tohoku.ac.jp/~mharada/Paper/128-d20.txt}.
As an example, we list the first rows $r_A$ and $r_B$
for ten codes in Table~\ref{Tab:128}.
\begin{table}[thbp]
\caption{Weight enumerators for length $128$}
\label{Tab:WD128}
\begin{center}
{\footnotesize
\begin{tabular}{l}
\noalign{\hrule height0.8pt}
\multicolumn{1}{c}{Numbers of codewords of weight $20$} \\
\hline
21376, 21824, 22016, 22400, 22464, 22880, 22944, 23008, 23104, 23136, 23232, \\
23296, 23328, 23360, 23392, 23520, 23552, 23616, 23648, 23680, 23808, 23936, \\
24000, 24032, 24064, 24096, 24128, 24160, 24192, 24224, 24256, 24288, 24320, \\
24352, 24384, 24416, 24448, 24480, 24512, 24544, 24576, 24640, 24672, 24704, \\
24736, 24768, 24800, 24832, 24864, 24896, 24928, 24960, 24992, 25024, 25056, \\
25088, 25120, 25152, 25184, 25216, 25248, 25280, 25312, 25344, 25376, 25408, \\
25440, 25472, 25504, 25536, 25568, 25600, 25632, 25664, 25696, 25728, 25760, \\
25824, 25856, 25888, 25920, 25952, 25984, 26016, 26048, 26080, 26112, 26144, \\
26176, 26208, 26240, 26272, 26304, 26336, 26368, 26400, 26432, 26464, 26496, \\
26528, 26560, 26592, 26624, 26656, 26688, 26720, 26752, 26784, 26816, 26848, \\
26880, 26912, 26944, 26976, 27008, 27040, 27072, 27104, 27136, 27168, 27200, \\
27232, 27264, 27296, 27328, 27360, 27392, 27424, 27456, 27488, 27520, 27584, \\
27616, 27648, 27680, 27712, 27744, 27776, 27808, 27840, 27872, 27904, 27936, \\
27968, 28000, 28032, 28064, 28096, 28128, 28160, 28192, 28224, 28256, 28288, \\
28320, 28352, 28384, 28416, 28448, 28480, 28512, 28544, 28576, 28608, 28640, \\
28736, 28768, 28800, 28832, 28864, 28896, 28928, 28992, 29024, 29056, 29088, \\
29120, 29152, 29216, 29248, 29312, 29344, 29376, 29536, 29600, 29632, 29696, \\
29760, 29792, 29824, 29856, 29888, 30048, 30144, 30176, 30208, 30240, 30304, \\
30368, 31584\\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[thbp]
\caption{Doubly even self-dual codes of length $128$ and minimum weight $20$}
\label{Tab:128}
\begin{center}
{\footnotesize
\begin{tabular}{c|c}
\noalign{\hrule height0.8pt}
$r_A$ & $r_B$ \\
\hline
$(10000111111110100010000110111100)$&$(00101010111100011101101011110100)$\\
$(10000111010101110101100011100110)$&$(10001000101110011011111011100110)$\\
$(10000010011000000001101010010001)$&$(01010001111001100001001011000010)$\\
$(10000111000111101111100111100111)$&$(11010000000111100110110101111111)$\\
$(10000010001100010100011010000111)$&$(01110100011000110110100110000110)$\\
$(10000111010101110010100010001010)$&$(01011110001110011011111011100111)$\\
$(10000100011111000010100010100010)$&$(10001101000000111010111010010101)$\\
$(10000101110011001010100011011010)$&$(00111111111010001111100110111000)$\\
$(10000000110010100000011111111000)$&$(11110000010011001110110110110101)$\\
$(10000010010000110011101110110100)$&$(10101000001000010101011011100001)$\\
\noalign{\hrule height0.8pt}
\end{tabular}
}
\end{center}
\end{table}
\bigskip
\noindent
{\bf Acknowledgment.}
This work was supported by JSPS KAKENHI Grant Number 15H03633.
The author would like to thank
the anonymous referees for their useful comments on the manuscript.
In this section we detail the explicit construction of a vector
field~$\vct{B}$, using generalized Debye sources, that \emph{a priori}
analytically satisfies the Taylor state equation
\begin{equation}
\nabla \times \vct{B} = \lambda \vct{B}
\end{equation}
and leads to a uniquely invertible integral equation corresponding to
the boundary condition
\begin{equation}
\vct{B} \cdot \vct{n} = 0.
\end{equation}
For more details on this construction, in general,
see~\cite{epstein2015}, and for details in the axisymmetric case
see~\cite{O_Neil_2018_Taylor}. It should be noted that the above
boundary value problem does \emph{not} have a unique solution
if~$\lambda$ is an eigenvalue of the curl operator acting on
vectorfields with vanishing normal component along the
boundary. Furthermore, depending on the particular geometry in which
the above boundary value problem is being solved, it must be augmented
with a suitable number of extra conditions (usually provided as flux
constraints on $\vct{B}$) in order to be well-posed. For clarity, the
following is a very condensed version of what is contained in the
previously mentioned two references.
\subsubsection{Generalized Debye representation for Taylor states}
For the application at hand, namely that of computing Taylor states
for magnetically confined plasmas in general toroidal domains, we must
allow for the boundary~$\Boundary$ of the domain~$\Domain$ to be
multiply connected, and possibly have multiple components.
To this end, let~$\Boundary = \partial \Domain$ be a disjoint union
of $\Nsurf$ smooth toroidal surfaces
($\Boundary_1, \Boundary_2, \dots, \Boundary_\Nsurf$). We want to
compute vector fields $\vector{B}$ in $\Domain$ such that,
\begin{align}
\begin{aligned}
\Curl{\vector{B}} &= \BeltramiParam \vector{B},
&\qquad &\text{in } \Domain, \\
\dotprod{\vector{B}}{\Normal} &= 0, & &\text{on \Boundary}, \\
\int\limits_{\CrossSection_i} \dotprod{\vector{B}}{\VecAreaElem}
&= \Flux_i
& &\text{for } i = 1, \dots, \Nsurf,
\end{aligned}
\label{e:taylor-state}
\end{align}
where $\BeltramiParam$ is a constant real number called the Beltrami
parameter and $\Normal$ is the unit normal vector to~$\Boundary$ pointing
outward from $\Domain$. The surfaces~$\CrossSection_i$ are generally
chosen to capture all possible toroidal and poloidal fluxes of
interest inside~$\Domain$. In these flux
integrals,~$\VecAreaElem = \Normal \AreaElem$ with $\AreaElem$ being
the surface area element and $\Normal$ the oriented normal along the
cross-section $\CrossSection_i$.
In \cite{O_Neil_2018_Taylor}, we applied the generalized Debye
representation of \cite{Epstein_2012} for computing Taylor states in
axisymmetric geometries. We use the same integral formulation in the
present work for computing Taylor states in non-axisymmetric
geometries.
\begin{figure}[t!]
\centering
\includegraphics[width=0.56\textwidth]{figs/tor-pol-direction}
\includegraphics[width=0.43\textwidth]{figs/cross-section2}
\caption{\label{f:geometry} Left: A toroidal surface with the
toroidal and poloidal directions. Right: A toroidal shell domain
is the region between two nested toroidal geometries. The
cross-sections of the domain shown in blue and red are called the
toroidal and poloidal cross-sections respectively. The flux
conditions for a Taylor state are given as the prescribed magnetic
flux through these cross-sections. }
\end{figure}
As noted before, a vector field $\vct{B}$ which
satisfies the Beltrami condition~\mbox{$\Curl{\vector{B}} = \BeltramiParam
\vector{B}$} admits the pair
\mbox{$( \vector{E} = \Imag \vector{B}, \vector{H}=\vector{B} )$} which
automatically satisfies
the THME$(\lambda)$. The vector field $\vector{B}$ can therefore be
represented using the generalized Debye source
representation. Furthermore, due to the relation
$\vector{E} = \Imag \vector{H}$ and the symmetry of the generalized
Debye representation in \pr{e:debye-maxwell}, similar dependencies on
the vector potentials, scalar potentials, Debye currents, and Debye
sources can also be shown~\cite{epstein2015}:
\begin{equation}
\vct{A} = i \vct{Q}, \qquad u = \Imag v, \qquad \rho = i \sigma.
\end{equation}
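As a quick consistency check (assuming the THME$(\BeltramiParam)$ take the
standard symmetric form $\Curl{\vector{E}} = \Imag\BeltramiParam\vector{H}$
and $\Curl{\vector{H}} = -\Imag\BeltramiParam\vector{E}$), the Beltrami
condition $\Curl{\vector{B}} = \BeltramiParam\vector{B}$ immediately gives:

```latex
\[
\Curl{\vector{E}} = \Imag \, \Curl{\vector{B}}
  = \Imag \BeltramiParam \vector{B}
  = \Imag \BeltramiParam \vector{H},
\qquad
\Curl{\vector{H}} = \Curl{\vector{B}}
  = \BeltramiParam \vector{B}
  = -\Imag \BeltramiParam \, (\Imag \vector{B})
  = -\Imag \BeltramiParam \vector{E}.
\]
```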
As a result, the generalized Debye source representation for the Taylor
state~$\vector{B}$ is given as:
\begin{align}
\vector{B} = \Imag \BeltramiParam \vector{Q} - \Grad{v} + \Imag \Curl{\vector{Q}},
\label{e:debye-taylor}
\end{align}
where the vector potential~$\vector{Q}$ and the scalar potential $v$
are as defined before in~\pr{e:potentials}. In order to satisfy
$\Curl{\vector{B}} = \BeltramiParam \vector{B}$, the
consistency condition~$\SurfDiv{\vector{m}} = \Imag \BeltramiParam
\sigma$ must be met. With
an additional
constraint~$\cross{\Normal}{\vector{m}} = -\Imag~\vector{m}$, the
surface vector field $\vector{m}$ is determined uniquely up to an
additive harmonic vector field,
\begin{align}\label{e:m-field}
\begin{aligned}
\vector{m} &= \vector{m}_0(\vector{\sigma}) + \vector{m}_H ,
\end{aligned}
\end{align}
where
\begin{align}
\vector{m}_0(\vector{\sigma}) &= \Imag \BeltramiParam \left(
\SurfGrad{\InvSurfLap{\sigma}} + \Imag~ \cross{\Normal}{\SurfGrad{\InvSurfLap{\sigma}}} \right)
\label{e:m0-field}
\end{align}
and $\vector{m}_H$ is a harmonic surface vector field such
that:~${\SurfDiv{\vector{m}_H} = 0}$
and~${\cross{\Normal}{\vector{m}_H} = -\Imag~\vector{m}_H}$. As
described at the beginning of this section, there are $\Nsurf$
linearly independent harmonic vector fields
$\{ \vector{m}^{1}_H, \cdots, \vector{m}^{\Nsurf}_H \}$ on
$\Boundary$ satisfying these conditions.
Therefore, we may represent $\vector{m}_H$ as:
\begin{align}
\vector{m}_H(\vector{\alpha}) = \sum_{k=1}^{\Nsurf} \alpha_k~\vector{m}^{k}_{H}.
\label{e:mh-field}
\end{align}
The unknowns $\sigma$ and
$\vector{\alpha} = \{\alpha_1, \cdots, \alpha_{\Nsurf}\}$ must be
determined using the boundary conditions and flux constraints in
\pr{e:taylor-state}.
\subsubsection{Boundary integral formulation}
The generalized Debye representation for $\vector{B}$, as stated in
\prange{e:debye-taylor}{e:mh-field}, \emph{a priori} satisfies
$\Curl{\vector{B}} = \BeltramiParam \vector{B}$. However, it does not
automatically satisfy the boundary
condition~$\dotprod{\vector{B}}{\Normal} = 0$ on $\Boundary$, nor the
non-trivial flux constraints. For any scalar function~$\sigma$ and
coefficients vector~$\vector{\alpha}$, the
field~$\vector{B}$ can be evaluated directly on the
boundary~$\Boundary$ by taking the limit of its value from the
interior of~$\Domain$:
\begin{align}
\vector{B}(\sigma, \vector{\alpha}) &= -\frac{\sigma}{2}\Normal + \Imag \frac{\cross{\Normal}{\vector{m}}}{2} + \Imag \BeltramiParam \SL{\BeltramiParam}[\vector{m}] - \Grad{\SL{\BeltramiParam}[\sigma]} + \Imag \Curl{\SL{\BeltramiParam}[\vector{m}]} & \text{on \Boundary},
\label{e:B-bdry-integ}
\end{align}
where $\vector{m}(\sigma, \vector{\alpha})$ is as defined in
\prange{e:m-field}{e:mh-field} and $\SL{\BeltramiParam}$ is the
single-layer potential
operator,
\begin{align*}
\SL{\BeltramiParam}[f](x) = \int\limits_{\Boundary} \HelmKer{\BeltramiParam}(\vector{x} - \vector{x}') f(\vector{x}') \AreaElem(\vector{x}').
\end{align*}
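At a target $x$ off the surface the kernel is smooth, so
$\SL{\BeltramiParam}[f](x)$ can be evaluated with any smooth surface
quadrature; singular or near-singular corrections are only needed on or near
$\Boundary$. The sketch below is plain Python under two assumptions that are
ours, not the paper's: $\HelmKer{\BeltramiParam}$ is taken to be the standard
Helmholtz Green's function $e^{\Imag\BeltramiParam|r|}/(4\pi|r|)$, and the
quadrature is a crude midpoint/trapezoid rule on the unit sphere (not the
scheme used in this work). It checks the $\BeltramiParam=0$ case against the
exact exterior value $R^2/|x|$:

```python
import cmath, math

def helm_kernel(lam, r):
    """Assumed kernel: g_lam(r) = exp(i * lam * |r|) / (4 * pi * |r|)."""
    d = math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)
    return cmath.exp(1j * lam * d) / (4 * math.pi * d)

def single_layer(lam, x, nodes, weights, f):
    """Quadrature evaluation of S_lam[f](x) at an off-surface target x,
    where the kernel is smooth and no singular correction is needed."""
    return sum(w * helm_kernel(lam, tuple(a - b for a, b in zip(x, y))) * fy
               for y, w, fy in zip(nodes, weights, f))

def sphere_rule(nth, nph):
    """Midpoint-in-theta, trapezoid-in-phi rule on the unit sphere."""
    nodes, weights = [], []
    for i in range(nth):
        th = (i + 0.5) * math.pi / nth
        for j in range(nph):
            ph = 2 * math.pi * j / nph
            nodes.append((math.sin(th) * math.cos(ph),
                          math.sin(th) * math.sin(ph),
                          math.cos(th)))
            weights.append(math.sin(th) * (math.pi / nth) * (2 * math.pi / nph))
    return nodes, weights

# lam = 0 sanity check: for unit density on the unit sphere, the exact
# exterior value is R^2/|x| = 1/2 at x = (0, 0, 2).
nodes, weights = sphere_rule(40, 80)
val = single_layer(0.0, (0.0, 0.0, 2.0), nodes, weights, [1.0] * len(nodes))
print(abs(val))   # ~0.5
```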
The boundary condition $\dotprod{\vector{B}}{\Normal} = 0$ on $\Boundary$ results in a second-kind integral equation,
\begin{align}
-\frac{\sigma}{2} + \conv{K}[\sigma, \vector{\alpha}] &= 0,
\label{e:taylor-second-kind}
\end{align}
where $\conv{K}$ denotes the compact operator,
\begin{align}
\conv{K}[\sigma, \vector{\alpha}] &= \Imag \BeltramiParam \dotprod{\Normal}{\SL{\BeltramiParam}[\vector{m}]} - \partial_{\Normal}{\SL{\BeltramiParam}[\sigma]} + \Imag \dotprod{\Normal}{\Curl{\SL{\BeltramiParam}[\vector{m}]}} .
\label{e:K-op}
\end{align}
\subsubsection{Flux computation}
The flux constraints as stated in \pr{e:taylor-state} are difficult to
impose in a boundary integral formulation since we only discretize the
boundary $\Boundary$ and not all of~$\Domain$. However, using
Stokes' theorem and the fact that by construction
$\Curl{\vector{B}} = \BeltramiParam \vector{B}$, we can relate the
flux of $\vector{B}$ through a cross-section $\CrossSection$ with its
circulation on
$\partial \CrossSection = \CrossSection \Intersection \Boundary$.
Applying Stokes' theorem and using \pr{e:B-bdry-integ}, we obtain
\begin{align}
\int\limits_{\CrossSection}
\dotprod{\vector{B}(\sigma, \vector{\alpha})}{\D\vct{a}}
&= \frac{1}{\BeltramiParam} \oint\limits_{\partial \CrossSection} \dotprod{\vector{B}(\sigma, \vector{\alpha})}{\VecLengthElem} \nonumber \\
&= \oint\limits_{\partial \CrossSection} \dotprod{\Imag \SL{\BeltramiParam}[\vector{m}_0 + \vector{m}_H]}{\VecLengthElem}
+ \frac{\Imag}{\BeltramiParam} \oint\limits_{\partial \CrossSection} \dotprod{\left( \frac{\cross{\Normal}{\vector{m}_0}}{2} + \Curl{\SL{\BeltramiParam}[\vector{m}_0]} \right)}{\VecLengthElem} \nonumber \\
& \hspace{15em} + \frac{\Imag}{\BeltramiParam} \oint\limits_{\partial \CrossSection} \dotprod{\left( \frac{\cross{\Normal}{\vector{m}_H}}{2} + \Curl{\SL{\BeltramiParam}[\vector{m}_H]} \right)}{\VecLengthElem}, \label{e:flux-eq}
\end{align}
where $\vector{m}_0(\sigma)$ is as defined in \pr{e:m0-field},
$\vector{m}_H(\vector{\alpha})$ is as defined in \pr{e:mh-field} and
$\VecLengthElem$ is the oriented unit arclength differential. The first and
the second integral terms in \pr{e:flux-eq} remain bounded as
$\BeltramiParam \to 0$ since
$\vector{m}_0 \sim \bigO{\BeltramiParam}$; however, computing the
last term becomes numerically unstable due to the $1/\BeltramiParam$
factor. In order to numerically stabilize this calculation, we begin
with the following lemma.
\begin{lem} \label{lem:vacuum-field-circulation}
For a tangential vector field~$\vector{m}$ on~$\Boundary$ such
that~$\SurfDiv{\vector{m}} = 0$,
and an arbitrary cross section~$\CrossSection$ of the domain~$\Domain$,
\[
\oint\limits_{\partial \CrossSection} \dotprod{\left(
\frac{\cross{\Normal}{\vector{m}}}{2} +
\Curl{\SL{0}[\vector{m}]} \right)}{\VecLengthElem} = 0.
\]
\end{lem}
\begin{proof}
Let $\vector{V} = \Curl{\SL{0}[\vector{m}]}$. Then, at every
point in~$\Domain$, in particular for points
on~$\CrossSection \subset \Domain$, we have that
\begin{align*}
\Curl{\vector{V}} &= \Curl{\Curl{\SL{0}[\vector{m}]}} \\
&= \Grad{\left(\Div{\SL{0}[\vector{m}]}\right)}
- {\Lap{\SL{0}}}[\vector{m}]\\
&= \Grad{\SL{0}[{\SurfDiv{\vector{m}}}]} \\
&= 0,
\end{align*}
where the last two steps use the harmonicity of~$\SL{0}[\vector{m}]$ away from~$\Boundary$, the identity $\Div{\SL{0}[\vector{m}]} = \SL{0}[{\SurfDiv{\vector{m}}}]$ for tangential fields, and the assumption $\SurfDiv{\vector{m}} = 0$.
Furthermore, we have that the limiting value of~$\vct{V}$
on~$\partial\CrossSection$ is
\[
\vct{V} = \frac{\cross{\Normal}{\vector{m}}}{2} + \nabla \times
\SL{0}[\vector{m}],
\]
as in~\pr{e:B-bdry-integ}.
Therefore, by Stokes' theorem
\[
\begin{aligned}
\oint\limits_{\partial \CrossSection} \dotprod{\left(
\frac{\cross{\Normal}{\vector{m}}}{2} +
\Curl{\SL{0}[\vector{m}]} \right)}{\VecLengthElem}
&= \oint\limits_{\partial\CrossSection}
\dotprod{\vct{V}}{\VecLengthElem} \\
&= \int\limits_{\CrossSection} \nabla \times \vct{V}
\cdot \D\vct{a} \\
&= 0.
\end{aligned}
\]
\end{proof}
Returning to the previous flux calculation,
since~$\SurfDiv{\vector{m}_H}=0$ we can apply the above lemma to obtain:
\begin{equation}
{\frac{\Imag}{\BeltramiParam}\oint\limits_{\partial \CrossSection}
\dotprod{\left( \frac{\cross{\Normal}{\vector{m}_H}}{2} +
\Curl{\SL{0}[\vector{m}_H]} \right)}{\VecLengthElem} = 0}.
\end{equation}
Subtracting this identity from the last term in
\pr{e:flux-eq} gives us the following relation,
\begin{align}
\frac{1}{\BeltramiParam} \oint\limits_{\partial \CrossSection} \dotprod{\left( \frac{\vector{m}_H}{2} + \Imag \Curl{\SL{\BeltramiParam}[\vector{m}_H]} \right)}{\VecLengthElem}
=
\oint\limits_{\partial \CrossSection} \dotprod{\Imag \Curl{\left( \frac{\SL{\BeltramiParam}-\SL{0}}{\BeltramiParam} \right) [\vector{m}_H]}}{\VecLengthElem},
\label{e:flux-last-term}
\end{align}
where the operator
$\left( \frac{\SL{\BeltramiParam}-\SL{0}}{\BeltramiParam} \right)$
is equivalent to computing a convolution along the boundary
with the following bounded kernel function:
\begin{align*}
\frac{\HelmKer{\BeltramiParam}(\vector{r}) - \HelmKer{0}(\vector{r})}{\BeltramiParam} &= - \frac{ \sin\left( {\BeltramiParam |\vector{r}|}/{2} \right) \sinc\left( {\BeltramiParam |\vector{r}|}/{2} \right) }{4 \pi} + \Imag~\frac{\sinc\left( \BeltramiParam |\vector{r}| \right) }{4 \pi}.
\end{align*}
The above expression is obtained straightforwardly using
trigonometric identities, and is numerically stable and bounded in its
evaluation, even as~$\BeltramiParam|\vct{r}|\to 0$. Therefore, the
right hand side in \pr{e:flux-last-term} can be stably computed for any
value of~$\BeltramiParam$. Using \pr{e:flux-eq,e:flux-last-term},
the flux constraints can be re-written as
\begin{multline}
\oint\limits_{\partial \CrossSection_i} \dotprod{\Imag \SL{\BeltramiParam}[\vector{m}_0 + \vector{m}_H]}{\VecLengthElem}
+ \frac{1}{\BeltramiParam} \oint\limits_{\partial \CrossSection_i} \dotprod{\left( \frac{\vector{m}_0}{2} + \Imag \Curl{\SL{\BeltramiParam}[\vector{m}_0]} \right)}{\VecLengthElem}
+ \oint\limits_{\partial \CrossSection_i} \dotprod{\Imag \Curl{\left( \frac{\SL{\BeltramiParam}-\SL{0}}{\BeltramiParam} \right) [\vector{m}_H]}}{\VecLengthElem}
= \Flux_i,
\label{e:circ-constr}
\end{multline}
for $i = 1, \dots, \Nsurf$. The complete formulation for computing
Taylor states is given by the boundary integral equation
\pr{e:taylor-second-kind} and the flux conditions
\pr{e:circ-constr}. We will discuss how to discretize and solve
these equations to obtain the required numerical solution in
\pr{s:algo}.
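The stabilized kernel difference above is straightforward to verify numerically. The following Python sketch (not part of the paper; it assumes the Helmholtz kernel $\HelmKer{\BeltramiParam}(\vector{r}) = e^{\Imag \BeltramiParam |\vector{r}|}/(4\pi|\vector{r}|)$, which is consistent with the $\sinc$ expression derived above) compares the naive difference quotient against the trigonometric form:

```python
import numpy as np

def kernel_diff_naive(lam, r):
    """Naive (G_lam(r) - G_0(r)) / lam with G_lam(r) = exp(1j*lam*r)/(4*pi*r).
    Suffers catastrophic cancellation in the real part as lam*r -> 0."""
    return (np.exp(1j * lam * r) - 1.0) / (4 * np.pi * r * lam)

def kernel_diff_stable(lam, r):
    """Stabilized form: -sin(x/2) sinc(x/2)/(4 pi) + 1j sinc(x)/(4 pi),
    with x = lam*r.  Bounded and smooth, with limit 1j/(4 pi) as x -> 0.
    Note: np.sinc(t) = sin(pi t)/(pi t), so the unnormalized sinc(x)
    is np.sinc(x / pi)."""
    x = lam * r
    return (-np.sin(x / 2) * np.sinc(x / (2 * np.pi))
            + 1j * np.sinc(x / np.pi)) / (4 * np.pi)

# The two forms agree for moderate arguments...
assert abs(kernel_diff_naive(2.0, 0.7) - kernel_diff_stable(2.0, 0.7)) < 1e-12
# ...but only the stable form remains reliable as lam -> 0:
print(kernel_diff_stable(1e-14, 1.0))  # close to 0 + 1j/(4 pi)
```

At $\BeltramiParam = 0$ the stable form evaluates exactly to the limiting kernel, which is what makes the right-hand side of the flux identity computable uniformly in $\BeltramiParam$.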
\subsubsection{Vacuum fields \label{sss:vacuum-formulation}}
We now briefly discuss the special case where $\BeltramiParam =
0$. The magnetic field $\vector{B}$ then satisfies
$\nabla\times\vector{B}=\vector{0}$, i.e., it is a vacuum field. In
this case, the boundary limit of the integral representation in
\pr{e:B-bdry-integ} for $\vector{B}$ simplifies to
\begin{align}
\vector{B}(\sigma, \vector{\alpha}) &= -\frac{\sigma}{2}\Normal + \frac{\vector{m}_{H}}{2} - \Grad{\SL{0}[\sigma]} + \Imag \Curl{\SL{0}[\vector{m}_{H}]} & \text{on \Boundary},
\label{e:B-bdry-integ-vacuum}
\end{align}
where $\vector{m}_{H}(\vector{\alpha})$ is as defined in \pr{e:mh-field}.
The boundary condition $\dotprod{\vector{B}}{\Normal} = 0$ on $\Boundary$
results in the following second-kind integral equation,
\begin{equation}
-\frac{\sigma}{2} + \conv{K}[\sigma, \vector{\alpha}] = 0,
\label{e:vacuum-second-kind}
\end{equation}
where $\conv{K}$ is a compact boundary integral operator given by,
\begin{align}
\conv{K}[\sigma, \vector{\alpha}] &= -\partial_{\Normal}{\SL{0}[\sigma]}
+ \Imag \dotprod{\Normal}{\Curl{\SL{0}[\vector{m}_{H}]}} .
\label{e:K-op-vacuum}
\end{align}
The flux constraints in \pr{e:circ-constr} cannot be used directly for
vacuum fields since we must first explicitly compute the limit
$\BeltramiParam \to 0$. To avoid this tedious calculation, we propose
a more straightforward method.
We begin by defining a surface vector
field~$\vector{j} = \cross{\Normal}{\vector{B}}$ along~$\Boundary$; then,
using Green's theorem for magnetostatic
fields~\cite{chew1999}, we obtain
$\vector{B} = \Curl{\SL{0}[\vector{j}]}$ in~$\Domain$. Notice that
$\vct{j}$ is not the same as $\vct{m}_H$. This relation allows us to
use Stokes' theorem to compute the flux of $\vector{B}$ through a
cross section $\CrossSection$ as the circulation of
$\SL{0}[\vector{j}]$ on $\partial\CrossSection$. The flux constraints
can then be written as,
\begin{equation}
\int\limits_{\CrossSection_i} \dotprod{\vector{B}}{\D\vct{a}}
= \oint\limits_{\partial \CrossSection_i} \dotprod{\SL{0}[\vector{j}(\sigma, \vector{\alpha})]}{\VecLengthElem}
= \Flux_i, \quad i = 1, \dots, \Nsurf.
\label{e:vacuum-flux-constr}
We discretize and solve \pr{e:vacuum-second-kind,e:vacuum-flux-constr} for the
unknowns $\sigma$ and $\vector{\alpha}$.
This is discussed in the next section.
The magnetic field $\vector{B}$ on $\Boundary$ can then be computed using
\pr{e:B-bdry-integ-vacuum}. Notice that in this formulation we do not need to
solve a Laplace-Beltrami problem to evaluate the boundary integral
operator~$\conv{K}$, and the surface convolution operator $\SL{0}$ computes
convolutions with the single-layer Laplace kernel, which is much less expensive
to compute than the Helmholtz kernel.
\begin{document}
\title{Covering Point Patterns}
\author{\authorblockN{Amos Lapidoth}
\authorblockA{
ETH Zurich\\
8092 Zurich, Switzerland\\
\texttt{ amos.lapidoth@ethz.ch}}\and
\authorblockN{Andreas Mal\"ar}
\authorblockA{ETH Zurich\\ 8092 Zurich, Switzerland\\
\texttt{amalaer@ee.ethz.ch}}\and
\authorblockN{Ligong Wang}
\authorblockA{ETH Zurich\\ 8092 Zurich, Switzerland\\
\texttt{wang@isi.ee.ethz.ch}}}
\maketitle
\begin{abstract}
An encoder observes a point pattern---a finite number of points in
the interval $[0,T]$---which is to be described to a reconstructor
using bits. Based on these bits, the reconstructor wishes to select
a subset of $[0,T]$ that contains all the points in the pattern. It
is shown that, if the point pattern is produced by a homogeneous
Poisson process of intensity $\lambda$, and if the reconstructor is
restricted to select a subset of average Lebesgue measure not
exceeding $DT$, then, as $T$ tends to infinity, the minimum number
of bits per second needed by the encoder is $-\lambda\log D$. It is
also shown that, as $T$ tends to infinity, any point
pattern on $[0,T]$ containing no more than $\lambda T$ points can be
successfully described using $-\lambda \log D$ bits per second in
this sense. Finally, a
Wyner-Ziv version of this problem is considered where some of the
points in the pattern are known to the reconstructor.
\end{abstract}
\section{Introduction}
An encoder observes a point pattern---a finite number of points in the
interval $[0,T]$---which is to be described to a reconstructor using
bits. Based on these bits, the reconstructor wishes to produce a
covering-set---a subset of $[0,T]$ containing all the points---of
least Lebesgue measure. There is a trade-off between the number of
bits used and the Lebesgue measure of the covering-set. This trade-off
can be formulated as a continuous-time rate-distortion problem
(Section~\ref{sec:poisson}). In this paper we investigate this
trade-off in the limit where $T\to\infty$.
When the point pattern is produced by a
homogeneous Poisson process, this problem is closely related to
that of transmitting information through an ideal peak-limited
Poisson channel
\cite{kabanov78,davis80,wyner88,wyner88b}. In fact, the two problems
can be considered dual in the sense of
\cite{coverchiang02}. However, the duality results of
\cite{coverchiang02} only apply to discrete memoryless channels and
sources, so they cannot be directly used to solve our
problem. Instead, we shall use a technique that is similar to Wyner's
\cite{wyner88,wyner88b} to find the desired rate-distortion
function. We shall show that, if the point pattern is the
outcome of a homogeneous Poisson process of intensity $\lambda$, and
if the reconstructor is restricted to select covering-sets of average
measure not exceeding $DT$, then the minimum number of
bits per second needed by the encoder to describe the pattern is
$-\lambda\log D$.
Previous works \cite{rubin74,colemankiyavashsubramanian08} have
studied rate-distortion functions of the Poisson process with different
distortion measures. It is interesting to
notice that our rate-distortion function, $-\lambda\log D$, is
equal to the one in \cite{colemankiyavashsubramanian08},
where a queueing distortion measure is considered. This is no
coincidence, since the Poisson channel is closely related to the
queueing channel introduced in \cite{anantharamverdu96}.
We also show that the Poisson process is the most difficult to cover,
in the sense that any point process that, with high probability, has no
more than $\lambda T$ points in $[0,T]$ can be described with
$-\lambda \log D$ bits per second. This is true even if an adversary
selects the point pattern, provided that the pattern contains no more
than $\lambda$ points per second and that the encoder and the
reconstructor are allowed to use random codes.
Finally, we consider a Wyner-Ziv setting \cite{wynerziv76} of the
problem where some points in the pattern are known to the
reconstructor but the encoder does not know which ones they are. This
can be viewed as a dual problem to the Poisson channel with
noncausal side-information \cite{brosslapidothWLG09}. We
show that in this setting one can achieve the same minimum rate as
when the transmitter \emph{does} know the reconstructor's side-information.
The rest of this paper is arranged as follows: in
Section~\ref{sec:notations} we introduce some notation; in
Section~\ref{sec:poisson} we present the result for the Poisson process;
in Section~\ref{sec:general} we present the results for general point
processes and arbitrary point patterns; and in Section~\ref{sec:wz} we
present the results for the Wyner-Ziv setting.
\section{Notation}\label{sec:notations}
We use a lower-case letter like $x$ to denote a number, and an upper-case
letter like $X$ to denote a random variable. We use a boldface
lower-case letter like $\vect{x}$ to denote a vector, a function of
reals, or a point pattern, and it will be clear from the
context which one we mean. If $\vect{x}$ is a vector, $x_i$ denotes its
$i$th element. If $\vect{x}$ is a function, $x(t)$
denotes its value at $t\in\Reals$. If $\vect{x}$ is a point pattern,
we use $n_\vect{x}(\cdot)$ to denote its counting function, so
$n_\vect{x}(t_2)-n_\vect{x}(t_1)$ is the number of points in
$\vect{x}$ that fall in the interval $(t_1,t_2]$. We use a bold-face
upper-case letter like $\vect{X}$ to denote a random vector, a random
function, or a random point process. The random counting function
corresponding to a point process $\vect{X}$ is denoted by
$N_{\vect{X}}(\cdot)$.
We use $\textnormal{Ber}(p)$ to denote the Bernoulli distribution of
parameter $p$, namely, the distribution that has probability $p$ on
the outcome $1$ and probability $(1-p)$ on the outcome $0$.
\section{Covering a Poisson Process}\label{sec:poisson}
Consider a homogeneous Poisson process $\mathbf{X}$ of
intensity~$\lambda$ on the interval $[0,T]$. Its counting function
$N_{\vect{X}}(\cdot)$ satisfies
\begin{equation*}
\Pr\left[N_{\vect{X}}(t+\tau)-N_{\vect{X}}(t)=k\right] = \frac{e^{-\lambda
\tau}(\lambda\tau)^k}{k!}
\end{equation*}
for all $\tau\in[0,T]$, $t\in[0,T-\tau]$ and $k\in\{0,1,\ldots\}$.
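A realization of such a process is easy to simulate, and simulation confirms the slot-occupancy probability $1-e^{-\lambda\Delta}$ used later in the discretization argument. A minimal Python sketch (illustrative only, not part of the paper; inter-arrival times are drawn IID exponential):

```python
import math
import random

def sample_poisson_process(lam, T, rng):
    """Sample a homogeneous Poisson process of intensity lam on (0, T]
    via IID Exp(lam) inter-arrival times."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > T:
            return points
        points.append(t)

rng = random.Random(1)
lam, T, delta = 2.0, 1000.0, 0.1
pts = sample_poisson_process(lam, T, rng)
n_slots = round(T / delta)
# A slot ((i-1)*delta, i*delta] is occupied iff some point falls in it.
occupied = len({math.ceil(t / delta) for t in pts})
print(occupied / n_slots)  # close to 1 - exp(-lam*delta) ~ 0.181
```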
The encoder maps the realization of the Poisson process to a message
in $\{1,\ldots,2^{TR}\}$. The reconstructor then maps this message
to a $\{0,1\}$-valued, Lebesgue-measurable signal $\hat{x}(t)$, $t\in
[0,T]$. We wish to minimize the total length of the region where
$\hat{x}(t)=1$ while guaranteeing that all points in the original
Poisson process lie in this region. See
Figure~\ref{fig:problem-illustration} for an illustration.
\begin{figure}[htbp]
\centering
\setlength{\unitlength}{0.68cm}
\begin{picture}(12.5,4.5)
\put(0.4,3.7){$\mathbf{x}$}
\put(0,2.6){\vector(1,0){12}}
\put(12,2.1){$t$}
\put(1.2,2.6){\line(0,1){1}}
\put(3.7,2.6){\line(0,1){1}}
\put(5.2,2.6){\line(0,1){1}}
\put(5.7,2.6){\line(0,1){1}}
\put(8.3,2.6){\line(0,1){1}}
\put(9.6,2.6){\line(0,1){1}}
\multiput(1.2,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(3.7,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(5.2,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(5.7,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(8.3,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\multiput(9.6,2.35)(0,-0.3){7}{\line(0,1){0.1}}
\put(0,0.6){\vector(1,0){12}}
\put(12,0.1){$t$}
\linethickness{0.5mm}
\put(0,0.6){\line(1,0){0.2}}
\put(0.2,0.6){\line(0,1){1}}
\put(0.2,1.6){\line(1,0){1.3}}
\put(1.5,0.6){\line(0,1){1}}
\put(1.5,0.6){\line(1,0){2}}
\put(3.5,0.6){\line(0,1){1}}
\put(3.5,1.6){\line(1,0){2.9}}
\put(6.4,0.6){\line(0,1){1}}
\put(6.4,0.6){\line(1,0){2.7}}
\put(9.1,0.6){\line(0,1){1}}
\put(9.1,1.6){\line(1,0){0.6}}
\put(9.7,0.6){\line(0,1){1}}
\put(9.7,0.6){\line(1,0){1.3}}
\put(0.4,1.8){$\mathbf{\hat{x}}$}
\put(6.8,1.4){\small{missed!}}
\put(7.5,1.35){\vector(1,-1){0.7}}
\end{picture}
\caption{Illustration of the problem.}
\label{fig:problem-illustration}
\end{figure}
More formally, we formulate this problem as a continuous-time
rate-distortion problem, where the distortion between the point
pattern $\mathbf{x}$ and the reproduction signal
$\hat{\mathbf{x}}$ is
\begin{equation}\label{eq:distortion}
d(\mathbf{x},\hat{\mathbf{x}}) \triangleq \begin{cases}
\frac{\mu\left(\hat{x}^{-1}(1)\right)}{T}, & \textnormal{if
all points in $\mathbf{x}$ are in $\hat{x}^{-1}(1)$}\\
\infty, & \textnormal{otherwise}\end{cases}
\end{equation}
where $\mu(\cdot)$ denotes the Lebesgue measure.
We say that $(R,D)$ is an achievable rate-distortion pair for the
homogeneous Poisson process of intensity $\lambda$ if, for every
$\epsilon>0$, there exists some $T_0>0$
such that, for every $T>T_0$, there exists an encoder
$f_T(\cdot)$ and a
reconstructor $\phi_T(\cdot)$ of rate $R+\epsilon$ bits per second
which, when applied to the Poisson
process $\vect{X}$ on $[0,T]$, gives
\begin{equation*}
\E{d\bigl(\mathbf{X},\phi_T\left(f_T({\mathbf{X}})\right)\bigr)} \le
D+\epsilon.
\end{equation*}
Denote by $R(D,\lambda)$ the minimum rate $R$ such that $(R,D)$ is
achievable for the homogeneous Poisson process of intensity
$\lambda$. Define
\begin{equation}\label{eq:poisson}
R_{\textnormal{Pois}}(D,\lambda)\triangleq \begin{cases} -\lambda\log D \textnormal{ bits per
second},& D\in(0,1)\\ 0, & D\ge 1.\end{cases}
\end{equation}
\begin{thm}\label{thm:poisson}
For all $D,\lambda>0$,
\begin{equation}\label{eq:poisson1}
R(D,\lambda)=R_{\textnormal{Pois}}(D,\lambda).
\end{equation}
\end{thm}
To prove Theorem \ref{thm:poisson}, we propose a scheme to reduce the
original problem to one for a discrete memoryless source. This is
reminiscent of Wyner's
scheme for reducing the peak-limited Poisson channel to a discrete
memoryless channel \cite{wyner88}. We shall
show the optimality of this scheme in Lemma~\ref{lem:optimality}, and
we shall
then prove Theorem~\ref{thm:poisson} by computing the best rate that
is achievable using this scheme.
\emph{Scheme 1:} We divide the time-interval
$[0,T]$ into slots of length $\Delta$ seconds. The encoder first maps
the original point pattern $\vect{x}$ to a $\{0,1\}$-valued vector
$\vect{x}'$ of length $\frac{T}{\Delta}$\footnote{When $T$ is not
divisible by $\Delta$, we consider $\vect{x}$ as a
pattern on $[0,T']$ where $T'=\lceil \frac{T}{\Delta}\rceil
\Delta$. When we let $\Delta$ tend to zero, the difference between
$T$ and $T'$ also tends to zero. Henceforth we ignore this technicality
and assume $T$
is divisible by $\Delta$.} in the following way: if
$\vect{x}$ has at least one point in the time-slot $((i-1)\Delta,
i\Delta]$, choose $x_i'=1$; otherwise choose $x_i'=0$. The encoder
then maps $\vect{x}'$ to a message in $\{1,\ldots,2^{TR}\}$.
Based on the encoder's message, the reconstructor produces a
$\{0,1\}$-valued length-$\frac{T}{\Delta}$ vector $\hat{\vect{x}}'$ to
meet the distortion criterion
\begin{equation*}
\E{d'(\vect{X}',\hat{\vect{X}}')} \le D+\epsilon,
\end{equation*}
where the distortion measure $d'(\cdot,\cdot)$ is given by
\begin{IEEEeqnarray*}{rCl}
d'(0,0)& = & 0\\
d'(0,1)& = & 1\\
d'(1,0)& = & \infty\\
d'(1,1)& = & 1.
\end{IEEEeqnarray*}
It then maps $\hat{\vect{x}}'$ to a continuous-time signal
$\hat{\vect{x}}$ through
\begin{equation*}
\hat{x}(t)=\hat{x}_{\lceil \frac{t}{\Delta} \rceil}',\quad t\in[0,T].
\end{equation*}
Scheme~1 reduces the task of designing a code for
$\vect{X}$ subject to distortion $d(\cdot,\cdot)$
to the task of designing a code for the vector
$\vect{X}'$ subject to the distortion $d'(\cdot,\cdot)$. The way we
define $d'(\cdot,\cdot)$ yields the simple relation
\begin{equation}
d(\vect{x},\hat{\vect{x}})=d'(\vect{x}',\hat{\vect{x}}').
\end{equation}
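The identity above is easy to check in code. The following Python sketch (illustrative only; it assumes the points lie in $(0,T]$, that $T$ is divisible by $\Delta$, and that the vector distortion is the per-letter average of $d'$, i.e.\ $(\Delta/T)\sum_i d'(x_i',\hat{x}_i')$) implements Scheme~1's discretization and both distortion measures:

```python
import math

def discretize(points, T, delta):
    """Scheme 1: x'_i = 1 iff the pattern has a point in ((i-1)*delta, i*delta].
    Assumes points lie in (0, T] and T is divisible by delta."""
    n = round(T / delta)
    x = [0] * n
    for t in points:
        x[math.ceil(t / delta) - 1] = 1
    return x

def d_prime(x, xhat, delta, T):
    """Discrete distortion: infinite if some occupied slot is uncovered,
    otherwise the fraction of [0,T] covered by the reproduction."""
    if any(z == 1 and h == 0 for z, h in zip(x, xhat)):
        return math.inf
    return sum(xhat) * delta / T

def d_continuous(points, xhat, delta, T):
    """Continuous distortion d(x, xhat) of the step signal
    xhat(t) = xhat'_{ceil(t/delta)}: Lebesgue measure of xhat^{-1}(1) over T."""
    if any(xhat[math.ceil(t / delta) - 1] == 0 for t in points):
        return math.inf
    return sum(xhat) * delta / T

T, delta = 10.0, 0.5
pts = [0.3, 2.7, 2.9, 7.1]
xp = discretize(pts, T, delta)   # ones in slots 1, 6 and 15 (1-based)
xhat = xp[:]                     # cover exactly the occupied slots
assert d_prime(xp, xhat, delta, T) == d_continuous(pts, xhat, delta, T) == 0.15
```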
When $\vect{X}$ is the homogeneous Poisson process of intensity
$\lambda$, the components of $\vect{X}'$ are
independent and identically distributed (IID)
$\textnormal{Ber}(1-e^{-\lambda \Delta})$. Let $R_\Delta(D,\lambda)$
denote the rate-distortion function for $\vect{X}'$ and
$d'(\cdot,\cdot)$. If we combine
Scheme~1 with an optimal code for $\vect{X}'$ subject to
$\E{d'(\vect{X}', \hat{\vect{X}}')} < D+\epsilon$, we can achieve any
rate that is larger than
\begin{equation*}
\frac{R_\Delta(D,\lambda) \textnormal{ bits}}{\Delta \textnormal{
seconds}}.
\end{equation*}
The next lemma, which is reminiscent of
\cite[Theorem 2.1]{wyner88b}, shows that when we let $\Delta$ tend to
zero, there is no loss in optimality in using Scheme~1.
\begin{lem}\label{lem:optimality}
For all $D,\lambda > 0$,
\begin{equation}\label{eq:lem}
R(D,\lambda)=\lim_{\Delta\downarrow 0}
\frac{R_\Delta(D,\lambda)}{\Delta}.
\end{equation}
\end{lem}
\begin{IEEEproof} See Appendix.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{thm:poisson}]
We derive $R(D,\lambda)$ by computing the
right-hand side of \eqref{eq:lem}. To compute $R_\Delta(D,\lambda)$ we
apply Shannon's formula of the rate-distortion function for a discrete
memoryless source \cite{shannon48}:
\begin{equation}\label{eq:1}
R_\Delta(D,\lambda)=\min_{P_{\hat{Z}|Z}: \E{d'(Z,\hat{Z})}\le D}
I(Z;\hat{Z}). \footnote{Strictly speaking, since our distortion
measure is unbounded, we need to modify Shannon's proof of this
formula in order to use it for our problem. This can be done
by letting the reconstructor produce the
all-one sequence, which yields bounded distortion for any source
sequence, whenever no codeword can be found that is jointly typical
with the source sequence.}
\end{equation}
When $D\in(0,1)$, the
conditional distribution $P_{\hat{Z}|Z}$ which achieves the minimum on
the right-hand side of \eqref{eq:1} is
\begin{IEEEeqnarray*}{rCl}
P_{\hat{Z}|Z}^*(1|0) & = & D e^{\lambda\Delta}-e^{\lambda\Delta}+1,\\
P_{\hat{Z}|Z}^*(1|1) & = & 1.
\end{IEEEeqnarray*}
Computing the mutual information $I(Z;\hat{Z})$ under this
$P_{\hat{Z}|Z}^*$ yields
\begin{equation}\label{eq:2}
R_\Delta(D,\lambda)=\Hb(D)-e^{-\lambda\Delta}\Hb(D
e^{\lambda\Delta}- e^{\lambda\Delta}+1),\ \ D\in(0,1),
\end{equation}
where $\Hb(\cdot)$ denotes the binary entropy function.
When $D\ge 1$, it is optimal to choose $\hat{Z}=1$
(deterministically), yielding
\begin{equation}\label{eq:3}
R_\Delta(D,\lambda)=0,\quad D\ge 1.
\end{equation}
Combining \eqref{eq:lem}, \eqref{eq:2} and \eqref{eq:3} and computing
the limit as $\Delta$ tends to zero yields
\eqref{eq:poisson1}.
\end{IEEEproof}
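The limit at the end of the proof can also be checked numerically. A small Python sketch (illustrative only; entropies in bits):

```python
import math

def Hb(p):
    """Binary entropy function, in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def R_delta(D, lam, delta):
    """The closed form from the proof:
    Hb(D) - exp(-lam*delta) * Hb(D*e - e + 1), with e = exp(lam*delta)."""
    e = math.exp(lam * delta)
    return Hb(D) - math.exp(-lam * delta) * Hb(D * e - e + 1)

lam, D = 2.0, 0.3
target = -lam * math.log2(D)   # R_Pois(D, lam) = -lam log D bits per second
for delta in (1e-2, 1e-3, 1e-4):
    print(delta, R_delta(D, lam, delta) / delta)  # approaches target
```

For these parameters the ratio $R_\Delta/\Delta$ approaches $-\lambda\log D \approx 3.47$ bits per second as $\Delta \downarrow 0$.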
\section{Covering General Point Processes and Arbitrary Point
Patterns}\label{sec:general}
We next consider a general point process $\mathbf{Y}$.
We assume that there exists some $\lambda$ such that
\begin{equation}\label{eq:general}
\lim_{t\to\infty} \Pr \left[
\frac{N_{\vect{Y}}(t)}{t}>\lambda+\delta\right] = 0 \quad \textnormal{for
all } \delta>0.
\end{equation}
Condition \eqref{eq:general} is satisfied, for example, when
$\mathbf{Y}$ is an ergodic process whose expected number of points per
second is less than or equal to $\lambda$.
Since the Poisson process is memoryless, one naturally expects it to
be the most difficult to describe. This is indeed the case, as the
next theorem shows.
\begin{thm}\label{thm:general}
The pair $(R_{\textnormal{Pois}}(D,\lambda),D)$ is
achievable on any point process
satisfying~\eqref{eq:general}.
\end{thm}
Before proving Theorem \ref{thm:general}, we state a stronger
result. Consider a point pattern $\vect{z}$ chosen by an adversary
on the interval $[0,T]$ which contains no more than $\lambda T$
points. The corresponding counting function
$n_{\vect{z}}(\cdot)$ must then satisfy
\begin{equation}\label{eq:arbitrary}
n_{\vect{z}}(T) \le \lambda T.
\end{equation}
The encoder and the reconstructor are
allowed to use random codes. Namely, they fix a distribution on all
(deterministic) codes of a certain rate on $[0,T]$. According to this
distribution, they randomly pick a code which is not revealed to the
adversary. They then apply it to the point pattern $\vect{z}$ chosen
by the adversary. We say that
$(R,D)$ is achievable with random coding against an adversary subject
to \eqref{eq:arbitrary} if, for every $\epsilon>0$, there exists some
$T_0$ such that, for every $T>T_0$, there exists a random code
on $[0,T]$ of rate $R+\epsilon$ such that the expected
distortion between \emph{any} $\vect{z}$ satisfying
\eqref{eq:arbitrary} and its reconstruction is smaller than $D+\epsilon$.
\begin{thm}\label{thm:arbitrary}
The pair $(R_{\textnormal{Pois}}(D,\lambda),D)$ is
achievable with random coding against an adversary subject
to~\eqref{eq:arbitrary}.
\end{thm}
\begin{IEEEproof}
First note that when $D\ge 1$, the encoder does not need to describe
the pattern: the reconstructor simply produces the all-one
function, yielding distortion $1$ for any $\vect{z}$. Hence the pair
$(0,D)$ is achievable with random coding.
Next consider $D\in(0,1)$. We use Scheme~1 as in Section
\ref{sec:poisson} to reduce the original problem to one of random
coding for an arbitrary discrete-time sequence $\vect{z}'$. Here
$\vect{z}'$ is $\{0,1\}$-valued, has length $\frac{T}{\Delta}$, and
satisfies
\begin{equation}\label{eq:constraint_discrete}
\sum_{i=1}^{T/\Delta} z_i' \le \lambda T.
\end{equation}
We shall construct a random code of rate $\frac{R}{\Delta}$ which,
when applied to any $\vect{z}'$ satisfying
\eqref{eq:constraint_discrete}, yields
\begin{equation*}
\E{d'(\vect{z}',\hat{\vect{Z}}')} < D+\epsilon,
\end{equation*}
where the random vector $\hat{\vect{Z}}'$ is the result of applying
the random encoder and decoder to $\vect{z}'$. Combined with
Scheme~1 this random code will yield a random code on the
continuous-time point pattern $\vect{z}$ that achieves the
rate-distortion pair $(R,D)$.
Our discrete-time random code consists of $2^{TR}$ $\{0,1\}$-valued,
length-$\frac{T}{\Delta}$ random sequences $\hat{\vect{Z}}_m'$,
$m\in\{1,\ldots, 2^{TR}\}$. The first sequence $\hat{\vect{Z}}_1'$
is chosen deterministically to be the all-one sequence. The other
$2^{TR}-1$ sequences are drawn independently, with each sequence
drawn IID $\textnormal{Ber}(D)$.
To describe source sequence $\vect{z}'$, the encoder looks for a
codeword $\hat{\vect{z}}_m'$, $m\in\{2,\ldots,2^{TR}\}$ such that
\begin{equation}\label{eq:11}
\hat{z}_{m,i}'=1 \textnormal{ whenever }z_i'=1.
\end{equation}
If it finds one or more such codewords, it sends the index of the
first one; otherwise it sends $1$. The
reconstructor outputs the
sequence $\hat{\vect{z}}_m'$ where $m$ is the message it receives
from the encoder.
We next analyze the expected distortion of this random code for a
fixed $\vect{z}'$ satisfying \eqref{eq:constraint_discrete}. Define
\begin{equation*}
\mu\triangleq \frac{\sum_{i=1}^{T/\Delta} z_i'}{T},
\end{equation*}
and note that by \eqref{eq:constraint_discrete} $\mu\le
\lambda$. Denote by $\mathcal{E}$ the
event that the encoder cannot find $\hat{\vect{z}}_m'$,
$m\in\{2,\ldots, 2^{TR}\}$ satisfying \eqref{eq:11}. If
$\mathcal{E}$ occurs, the encoder sends $1$ and the resulting
distortion is equal to $1$.
The probability that a randomly
drawn codeword $\hat{\vect{Z}}_m'$ satisfies \eqref{eq:11} is
\begin{equation*}
D^{\mu T}\ge D^{\lambda T} = 2^{(\lambda \log D)T}.
\end{equation*}
Because the codewords $\hat{\vect{Z}}_m'$, $m\in\{2,\ldots,2^{TR}\}$
are chosen independently, if we choose $R>-\lambda \log D$, then
$\Pr [\mathcal{E}] \to 0$ as $T\to\infty$. Hence, for large
enough $T$, the contribution to the expected distortion from the
event $\mathcal{E}$ can be ignored.
We next analyze the expected distortion conditional on
$\mathcal{E}^{\textnormal{c}}$. The reproduction $\hat{\vect{Z}}'$ has the
following distribution: at positions where $\vect{z}'$ takes the value
$1$, $\hat{\vect{Z}}'$ must also be $1$; at other positions the
elements of $\hat{\vect{Z}}'$ have the IID $\textnormal{Ber}(D)$
distribution. Thus the expected value of $\sum_{i=1}^{T/\Delta}
\hat{Z}_i'$ is
$\mu T + D(\frac{T}{\Delta}-\mu T)$, and
\begin{equation*}
\E{\left.d'(\vect{z}',\hat{\vect{Z}}')\right| \mathcal{E}^\textnormal{c}}
= D+ (1-D)\mu\Delta.
\end{equation*}
When we let $\Delta$ tend to zero, this value tends to
$D$. We have thus shown that, for small enough $\Delta$, we can
achieve the pair $(R/\Delta, D)$ on $\vect{z}'$ with random coding
whenever $R>-\lambda \log D$, and therefore we can also achieve
$(R,D)$ on the continuous-time point pattern $\vect{z}$ with random
coding if $R>-\lambda\log D$.
\end{IEEEproof}
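The random code in this proof is simple enough to simulate. The toy Python sketch below (not from the paper; all parameters are illustrative) uses $T = 40$, $\Delta = 1$, $D = 0.5$, a pattern with $\mu = 0.2$ occupied slots per second, and a codebook of $2^{16}$ codewords, i.e.\ rate $R = 0.4 > -\lambda\log D = 0.2$ for $\lambda = 0.2$:

```python
import random

def run_trial(z, num_codewords, D, delta, T, rng):
    """One realization of the random covering code: codeword 1 is the
    all-one sequence; the others are drawn IID Ber(D).  The encoder sends
    the first codeword covering every 1 in z; the return value is the
    achieved normalized distortion d'(z', Zhat')."""
    n = len(z)
    for _ in range(num_codewords - 1):
        cw = [1 if rng.random() < D else 0 for _ in range(n)]
        if all(c == 1 for c, s in zip(cw, z) if s == 1):
            return sum(cw) * delta / T
    return 1.0   # event E occurred: fall back to the all-one codeword

rng = random.Random(0)
T, delta, D = 40.0, 1.0, 0.5
z = [1] * 8 + [0] * 32    # mu = 8/40 = 0.2 occupied slots per second
trials = [run_trial(z, 1 << 16, D, delta, T, rng) for _ in range(200)]
avg = sum(trials) / len(trials)
print(avg)  # close to D + (1 - D) * mu * delta = 0.6
```

With these parameters the miss event $\mathcal{E}$ has probability roughly $(1 - D^{\mu T})^{2^{TR}-1} \approx e^{-256}$, so the empirical average concentrates near the conditional expectation $D + (1-D)\mu\Delta$ computed in the proof.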
We next use Theorem~\ref{thm:arbitrary} to prove
Theorem~\ref{thm:general}.
\begin{IEEEproof}[Proof of Theorem~\ref{thm:general}]
It follows from
Theorem~\ref{thm:arbitrary} that, on any point process satisfying
\eqref{eq:general}, the pair $(R_{\textnormal{Pois}}(D,\lambda+\delta),D)$ is achievable
with \emph{random coding}. Further, since there is no adversary, the
existence of a good random code guarantees the existence of
a good deterministic code. Hence $(R_{\textnormal{Pois}}(D,\lambda+\delta),D)$ is also
achievable on this process with deterministic
coding. Theorem~\ref{thm:general} now follows when we let $\delta$
tend to zero, since $R_{\textnormal{Pois}}(D,\cdot)$ is a continuous function.
\end{IEEEproof}
\section{Some Points are Known to the Reconstructor}\label{sec:wz}
In this section we consider a Wyner-Ziv setting for our
problem. We first consider the case where $\vect{X}$ is a homogeneous
Poisson process of intensity $\lambda$. (Later we consider an
arbitrary point pattern.) Assume that each point in $\vect{X}$ is
known to the reconstructor independently with
probability $p$. Also assume that the encoder does not know which
points are known to the reconstructor. The encoder maps $\vect{X}$
to a message in $\{1,\ldots,2^{TR}\}$, and the reconstructor
produces a Lebesgue-measurable, $\{0,1\}$-valued signal $\hat{\vect{X}}$ on
$[0,T]$ based on this message and the positions of the points that he
knows. The achievability of a rate-distortion pair is defined
in the same way as in
Section~\ref{sec:poisson}. Denote the smallest rate $R$ for which
$(R,D)$ is achievable by $R_{\textnormal{WZ}}(D,\lambda,p)$.
Obviously, $R_{\textnormal{WZ}}(D,\lambda,p)$ is lower-bounded by the
smallest achievable rate when the transmitter \emph{does} know which
points are known to the reconstructor. The latter rate is given by
$R_{\textnormal{Pois}}(D,(1-p)\lambda)$, where
$R_{\textnormal{Pois}}(\cdot,\cdot)$ is given by
\eqref{eq:poisson}. Indeed, when the encoder knows which points are
known to the reconstructor, it is optimal for it to describe only the
remaining points, which themselves form a homogeneous Poisson process
of intensity $(1-p)\lambda$. The reconstructor then selects a
set based on this description to cover the points unknown to it and
adds to this set the points it knows. Thus,
\begin{equation}\label{eq:wz1}
R_{\textnormal{WZ}}(D,\lambda,p)\ge R_{\textnormal{Pois}}(D,(1-p)\lambda).
\end{equation}
The next theorem shows that \eqref{eq:wz1} holds with equality.
\begin{thm}\label{thm:wz}
Knowing the points at the reconstructor only is as good as knowing
them also at the encoder:
\begin{equation}
R_{\textnormal{WZ}}(D,\lambda,p) = R_{\textnormal{Pois}}(D,(1-p)\lambda).
\end{equation}
\end{thm}
To prove Theorem~\ref{thm:wz}, it remains to show that the pair
$(R_{\textnormal{Pois}}(D,(1-p)\lambda), D)$ is achievable. We shall
show this as a
consequence of a stronger result concerning arbitrarily varying
sources.
Consider an arbitrary point pattern $\vect{z}$ on $[0,T]$ chosen by an
adversary. The adversary is allowed to put at most $\lambda T$ points
in $\vect{z}$. Also, it must reveal all but at most $\nu T$
points to the reconstructor, without telling the encoder which points
it has revealed. The encoder and the reconstructor are
allowed to use random codes, where the encoder is a random mapping
from $\vect{z}$ to a message in $\{1,\ldots, 2^{TR}\}$, and where the
reconstructor is a random mapping from this message, together with the point
pattern that it knows, to a $\{0,1\}$-valued, Lebesgue-measurable
signal $\hat{\vect{z}}$. The distortion $d(\vect{z},\hat{\vect{z}})$
is defined as in \eqref{eq:distortion}.
\begin{thm}\label{thm:wzadversary}
Against an adversary who puts at most $\lambda T$ points on $[0,T]$
and reveals all but at most $\nu T$ points to the reconstructor, the
rate-distortion pair $(R_{\textnormal{Pois}}(D,\nu),D)$ is
achievable with random coding.
\end{thm}
\begin{IEEEproof}
The case $D\ge 1$ is trivial, so we shall only consider the
case where $D\in(0,1)$. The encoder
and the reconstructor first use Scheme~1 as in
Section~\ref{sec:poisson} to reduce the point pattern $\vect{z}$ to a
$\{0,1\}$-valued vector $\vect{z}'$ of length
$\frac{T}{\Delta}$. Define
\begin{equation*}
\mu\triangleq \frac{\sum_{i=1}^{T/\Delta} z_i'}{T},
\end{equation*}
and note that, by assumption, $\mu\le\lambda$. If $\mu\le \nu$, then
we can ignore the reconstructor's side-information and use the
random code of Theorem~\ref{thm:arbitrary}. Henceforth we assume
$\mu>\nu$.
Denote by $\vect{s}$ the point pattern known to the reconstructor
and by $\vect{s}'$ the vector obtained from $\vect{s}$ through the
discretization in time of Scheme~1. Since there are at most $\nu T$
points that are unknown to the reconstructor,
\begin{equation}\label{eq:14}
\sum_{i=1}^{T/\Delta} s_i'\ge (\mu-\nu)T.
\end{equation}
The encoder conveys the value of $\mu T$ to the receiver. Since
$\mu T$ is an integer between $0$ and $\lambda T$, it can be
described using at most $\log_2(\lambda T+1)+1$ bits, so the number
of bits per second needed for this description tends to zero as $T$
tends to infinity.
Next, the encoder and the reconstructor randomly generate
$2^{T(R+\tilde{R})}$ independent codewords $$\hat{\vect{z}}_{m,l}',\quad
m\in\{1,\ldots, 2^{TR}\},\ l\in\{1,\ldots,2^{T\tilde{R}}\},$$
where each codeword is generated IID $\textnormal{Ber}(D)$.
To describe $\vect{z}'$, the encoder looks for a codeword
$\hat{\vect{z}}_{m,l}'$ such that
\begin{equation}\label{eq:12}
\hat{z}_{m,l,i}'=1 \textnormal{ whenever } z_i'=1.
\end{equation}
If it finds one or more such codewords, it sends the index $m$ of
the first one; otherwise
it tells the reconstructor to produce the all-one sequence.
When the reconstructor receives the index $m$, it looks for an index
$\tilde{l}\in\{1,\ldots,2^{T\tilde{R}}\}$ such that
\begin{equation}\label{eq:13}
\hat{z}_{m,\tilde{l},i}'=1 \textnormal{ whenever }
s_i'=1.
\end{equation}
If there is exactly one such codeword, it outputs it as the
reconstruction; if there is more than one such codeword, it
outputs the all-one sequence.
To analyze the expected distortion for $\vect{z}'$ over this random
code, first consider the event that the encoder cannot find a
codeword satisfying \eqref{eq:12}. Note that the probability that a
randomly generated codeword satisfies \eqref{eq:12} is $D^{\mu
T}$, so the probability of this event tends to zero as
$T$ tends to infinity provided that
\begin{equation}\label{eq:15}
R+\tilde{R}>-\mu \log D.
\end{equation}
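Indeed, the $2^{T(R+\tilde{R})}$ codewords are drawn independently,
so by the inequality $(1-x)^n\le e^{-nx}$ the probability that none
of them satisfies \eqref{eq:12} is at most
\begin{align*}
\left(1-D^{\mu T}\right)^{2^{T(R+\tilde{R})}}
&\le \exp\left(-2^{T(R+\tilde{R})}\, D^{\mu T}\right)\\
&= \exp\left(-2^{T(R+\tilde{R}+\mu\log D)}\right),
\end{align*}
which vanishes as $T\to\infty$ precisely when
$R+\tilde{R}+\mu\log D>0$.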
Next consider the event that the reconstructor finds more than one
$\tilde{l}$ satisfying \eqref{eq:13}.
The probability that a randomly generated codeword satisfies
\eqref{eq:13} is $D^{\sum_{i=1}^{T/\Delta} s_i'}$. Consequently, by
\eqref{eq:14} the probability of this event tends to zero as
$T$ tends to infinity provided
\begin{equation}\label{eq:16}
\tilde{R} < -(\mu-\nu)\log D.
\end{equation}
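Indeed, by the union bound over the $2^{T\tilde{R}}-1$ codewords
$\hat{\vect{z}}_{m,l}'$ with $l\neq\tilde{l}$, and by \eqref{eq:14}
together with $D<1$, this probability is at most
\begin{equation*}
2^{T\tilde{R}}\, D^{\sum_{i=1}^{T/\Delta} s_i'}
\le 2^{T\tilde{R}}\, D^{(\mu-\nu)T}
= 2^{T(\tilde{R}+(\mu-\nu)\log D)},
\end{equation*}
which vanishes as $T\to\infty$ precisely when
$\tilde{R}+(\mu-\nu)\log D<0$.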
Finally, if the encoder finds a codeword satisfying \eqref{eq:12}
and the reconstructor finds only one codeword satisfying
\eqref{eq:13}, then the two codewords must be the same. Following the
same calculations as in the proof of Theorem~\ref{thm:arbitrary},
the expected distortion in this case tends to $D$ as $\Delta$ tends
to zero.
Combining \eqref{eq:15} and \eqref{eq:16}, we can make the expected
distortion arbitrarily close to $D$ as $T\to\infty$ if
\begin{equation*}
R>-\nu \log D.
\end{equation*}
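To make the combination explicit: for any $R>-\nu\log D$, one can
choose $\tilde{R}$ satisfying
\begin{equation*}
-\mu\log D - R < \tilde{R} < -(\mu-\nu)\log D,
\end{equation*}
which is possible because $R>-\nu\log D$ is equivalent to
$-\mu\log D - R < -(\mu-\nu)\log D$. Such a choice satisfies
\eqref{eq:15} and \eqref{eq:16} simultaneously, and the resulting
rate constraint does not depend on $\mu$.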
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{thm:wz}]
The claim follows from \eqref{eq:wz1}, Theorem
\ref{thm:wzadversary}, and the Law of Large Numbers.
\end{IEEEproof}
\begin{appendix}
In this appendix we prove Lemma~\ref{lem:optimality}. Given
any rate-distortion code with $2^{TR}$ codewords
$\hat{\vect{x}}_m$, $m\in\{1,\ldots, 2^{TR}\}$, that achieves
expected distortion $D$, we shall construct a new code that is
realizable through Scheme~1, contains $(2^{TR}+1)$ codewords, and
achieves an expected distortion that is arbitrarily close to $D$.
Denote the codewords of our new code by $\hat{\vect{w}}_m$,
$m\in\{1,\ldots,2^{TR}+1\}$. We choose the last codeword to be the
constant 1. We next describe our choices for the other codewords.
For every $\epsilon>0$ and every $\hat{\vect{x}}_m$, we can
approximate the set $\{t\colon \hat{x}_m(t)=1\}$ by a set
$\mathcal{A}_m$ that is a finite union of, say, $N_m$ open
intervals. More specifically,
\begin{equation}\label{eq:21}
\mu\left(\hat{x}_m^{-1}(1)\bigtriangleup
\mathcal{A}_m\right)\le 2^{-TR}\epsilon,
\end{equation}
where $\bigtriangleup$ denotes the
symmetric difference between two sets (see,
e.g., \cite[Chapter 3, Proposition 15]{royden88}). Define
\begin{equation*}
\set{B}\triangleq \bigcup_{m=1}^{2^{TR}}
\left(\hat{x}_m^{-1}(1)\setminus \mathcal{A}_m\right),
\end{equation*}
and note that by \eqref{eq:21}
\begin{equation}\label{eq:19}
\mu (\set{B})\le \epsilon.
\end{equation}
For each $\mathcal{A}_m$, $m\in\{1,\ldots,2^{TR}\}$, define
\begin{equation*}
\mathcal{T}_m\triangleq \left\{t\in [0,T]\colon \bigl( \left(\left\lceil
{t}/{\Delta}\right\rceil -
1\right)\Delta, \left\lceil
{t}/{\Delta}\right\rceil\Delta\bigr] \cap
\mathcal{A}_m \neq\emptyset\right\}.
\end{equation*}
We now construct $\hat{\vect{w}}_m$, $m\in\{1,\ldots,2^{TR}\}$ as
\begin{equation*}
\hat{\vect{w}}_m= \mathbf{1}_{\mathcal{T}_m},
\end{equation*}
where $\mathbf{1}_\mathcal{S}$ denotes the indicator function of the
set $\mathcal{S}$. Note that $\mathcal{A}_m \subseteq \mathcal{T}_m
= \hat{w}_m^{-1}(1)$.
See Figure~\ref{fig:discretize} for an illustration of this construction.
\begin{figure}[htbp]
\centering
\setlength{\unitlength}{0.68cm}
\begin{picture}(12,4.5)
\put(0,3){\vector(1,0){12}}
\put(12,2.5){$t$}
\multiput(0,2.9)(1,0){12}
{\line(0,1){0.2}}
\multiput(0,2.7)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,2.4)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,2.1)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.8)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.5)(1,0){12}
{\line(0,1){0.1}}
\multiput(0,1.2)(1,0){12}
{\line(0,1){0.1}}
\put(2.35,2.65){\tiny{$\Delta$}}
\put(2.3,2.75){\vector(-1,0){0.3}}
\put(2.7,2.75){\vector(1,0){0.3}}
\put(0,1){\vector(1,0){12}}
\put(12,0.5){$t$}
\multiput(0,0.9)(1,0){12}
{\line(0,1){0.2}}
\linethickness{0.5mm}
\put(0,3){\line(1,0){0.2}}
\put(0.2,3){\line(0,1){1}}
\put(0.2,4){\line(1,0){1.3}}
\put(1.5,3){\line(0,1){1}}
\put(1.5,3){\line(1,0){2}}
\put(3.5,3){\line(0,1){1}}
\put(3.5,4){\line(1,0){2.9}}
\put(6.4,3){\line(0,1){1}}
\put(6.4,3){\line(1,0){2.7}}
\put(9.1,3){\line(0,1){1}}
\put(9.1,4){\line(1,0){0.6}}
\put(9.7,3){\line(0,1){1}}
\put(9.7,3){\line(1,0){1.3}}
\put(0.2,4.2){$\mathbf{1}_{\mathcal{A}_m}$}
\put(0,1){\line(0,1){1}}
\put(0,2){\line(1,0){2}}
\put(2,1){\line(0,1){1}}
\put(2,1){\line(1,0){1}}
\put(3,1){\line(0,1){1}}
\put(3,2){\line(1,0){4}}
\put(7,1){\line(0,1){1}}
\put(7,1){\line(1,0){2}}
\put(9,1){\line(0,1){1}}
\put(9,2){\line(1,0){1}}
\put(10,1){\line(0,1){1}}
\put(10,1){\line(1,0){1}}
\put(0.2,2.2){$\mathbf{\hat{w}}_m$}
\end{picture}
\caption{Constructing $\hat{\vect{w}}_m$ from
$\mathcal{A}_m$.}
\label{fig:discretize}
\end{figure}
Let
$$ N\triangleq \max_{m\in\{1,\ldots,2^{TR}\}} N_m.$$
It can be seen that
\begin{equation}\label{eq:20}
\mu\left(\hat{w}_m^{-1}(1)\right)-\mu(\mathcal{A}_m) \le
2N\Delta, \quad m\in\{1,\ldots,2^{TR}\}.
\end{equation}
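Indeed, $\mathcal{T}_m$ is the union of all cells of the form
$\bigl((j-1)\Delta, j\Delta\bigr]$ that intersect
$\mathcal{A}_m$. Since $\mathcal{A}_m$ is a union of $N_m\le N$ open
intervals, and since each such interval meets at most two cells that
it does not fully contain (one at each endpoint), each interval
contributes an excess of at most $2\Delta$, so
\begin{equation*}
\mu\left(\hat{w}_m^{-1}(1)\right) = \mu(\mathcal{T}_m)
\le \mu(\mathcal{A}_m) + 2N_m\Delta
\le \mu(\mathcal{A}_m) + 2N\Delta.
\end{equation*}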
Our encoder works as follows: if $\vect{x}$ contains no point in
$\mathcal{B}$, it maps $\vect{x}$ to the same message as the given
encoder; otherwise it maps $\vect{x}$ to the index $(2^{TR}+1)$ of
the all-one codeword. To analyze the distortion, first consider the
case where $\vect{x}$ contains no point in $\mathcal{B}$. In this
case, all points in $\vect{x}$ must be covered by the selected codeword
$\hat{\vect{w}}_m$. By \eqref{eq:21} and \eqref{eq:20}, the
difference
$d(\vect{x},\hat{\vect{w}}_m)-d(\vect{x},\hat{\vect{x}}_m)$, if
positive, can be
made arbitrarily small by choosing small $\epsilon$ and
$\Delta$. Next consider the case where $\vect{x}$ does contain
points in $\mathcal{B}$. By \eqref{eq:19}, the probability that this
happens can be made arbitrarily small by choosing $\epsilon$ small,
therefore its contribution to the expected distortion can also be made
arbitrarily small. We conclude that our code
$\{\hat{\vect{w}}_m\}$ can achieve a distortion that is arbitrarily
close to the distortion achieved by the original code
$\{\hat{\vect{x}}_m\}$. This concludes the proof of
Lemma~\ref{lem:optimality}.
\end{appendix}
\bibliographystyle{IEEEtran}
\bibliography{/Volumes/Data/wang/Library/texmf/tex/bibtex/header_short,/Volumes/Data/wang/Library/texmf/tex/bibtex/bibliofile}
\end{document}
\begin{document}
\def\commutatif{\ar@{}[lldd]|{\circlearrowleft}}
\def\commutative{\ar@{}[rrdd]|{\circlearrowleft}}
\maketitle
\begin{abstract}
A central conjecture in inverse Galois theory, proposed by D\`ebes and Deschamps, asserts that every finite split embedding problem over an arbitrary field can be regularly solved. We give an unconditional proof of a consequence of this conjecture, namely that such embedding problems can be regularly solved if one waives the requirement that the solution fields are normal. This extends previous results of M. Fried, Takahashi, Deschamps, and the last two authors concerning the realization of finite groups as automorphism groups of field extensions.
\end{abstract}
\section{Introduction} \label{sec:intro}
Understanding the structure of the absolute Galois group ${\rm{G}}_{\Qq}$ of $\mathbb{Q}$ is one of the central objectives in algebraic number theory, and has inspired a number of very different approaches. One of them, which shall be our focus, is classical inverse Galois theory in the tradition of Hilbert and E.\ Noether. The first question here, which was studied already in the late 19th century and is still open, is the inverse Galois problem, which asks whether all finite groups occur as quotients of ${\rm{G}}_{\Qq}$, i.e.,~as Galois groups of Galois extensions of $\mathbb{Q}$. More information on ${\rm{G}}_{\Qq}$ would be obtained by knowing which finite {\em embedding problems} over $\Qq$ are solvable, or which finite groups have geometric, so-called {\em regular} realizations over $\Qq$ (see \cite{Vol96, MM99, FJ08} or below for more details).
The central conjecture in this area, which suggests answers to all of these questions, consistent with what is known and expected about each of them, and without needing to restrict to the field $\mathbb{Q}$, was formulated by D\`ebes and Deschamps (see \cite[\S2.2]{DD97b}):
\begin{conjecture} \label{conj:DD1}
Let $k$ be a field, $G$ a finite group, $L$ a finite Galois extension of the rational function field $k(T)$, and
$\alpha : G \rightarrow {\rm{Aut}}(L/k(T))$ an epimorphism that has a section\footnote{i.e., there exists an embedding $\alpha' : {\rm{Aut}}(L/k(T)) \rightarrow G$ such that $\alpha \circ \alpha' = {\rm{id}}_{{\rm{Aut}}(L/k(T))}$.}. Then there are a finite Galois extension $E/k(T)$ with $L \subseteq E$ and $E \cap \overline{k} = L \cap \overline{k}$, and an isomorphism $\beta : {\rm{Aut}}(E/k(T)) \rightarrow G$ such that $\alpha \circ \beta$ is the restriction map ${\rm{Aut}}(E/k(T)) \rightarrow {\rm{Aut}}(L/k(T))$.
\end{conjecture}
\noindent
In particular, Conjecture \ref{conj:DD1} would imply that every finite group occurs as the Galois group of a Galois extension of $\mathbb{Q}$, or, more generally, of every field that is {\em Hilbertian}, i.e., for which Hilbert's irreducibility theorem holds\footnote{For example, all global fields are Hilbertian. See, e.g., \cite{FJ08} for more on Hilbertian fields.}. So far, Conjecture \ref{conj:DD1} has been proved only if $k$ is an ample field\footnote{Recall that a field $k$ is {\it{ample}} (or {\it{large}}) if every smooth $k$-curve has either zero or infinitely many $k$-rational points. See, e.g., \cite{Jar11} and \cite{BSF13} for more about ample fields.} (see \cite[Main Theorem A]{Pop96} and \cite[Theorem 2]{HJ98b}), and no counterexample is known.
However, certain consequences of Conjecture \ref{conj:DD1}, like the inverse Galois problem, were proven in a weak form. Starting with \cite{FK78}, the easier question whether every finite group occurs as the automorphism group of a finite extension of $\mathbb{Q}$, not necessarily Galois, was studied. Clearly, a positive answer to this question is necessary for a positive solution to the inverse Galois problem, hence the interest in the question. The work \cite{LP18}, which extends previous results of M. Fried \cite{Fri80} and Takahashi \cite{Tak80} on this question, shows that indeed all finite groups occur as automorphism groups of finite extensions of any Hilbertian field. In \cite{DL18}, this was strengthened to `regular' realizations (for an arbitrary field); see th\'eor\`eme 1 of that paper for more details.
In light of these results, it is natural to ask whether in fact Conjecture \ref{conj:DD1} holds unconditionally if one waives the requirement that the extension $E/k(T)$ is normal. Once again, an affirmative answer to this question is necessary for an affirmative answer to the conjecture. In the present work, we show that this indeed is the case, thereby also generalizing the results mentioned above:
\begin{theorem} \label{thm:intro}
The statement of Conjecture \ref{conj:DD1} holds unconditionally if we do not require the extension $E/k(T)$ to be normal.
\end{theorem}
\begin{corollary} \label{coro:intro}
Let $k$ be a Hilbertian field, $L/k$ a finite Galois extension, $G$ a finite group, and $\alpha:G \rightarrow {\rm{Aut}}(L/k)$ an epimorphism that has a section. Then there exist a finite separable extension $F/L$ and an isomorphism $\beta : {\rm{Aut}}(F/k) \rightarrow G$ such that $\alpha \circ \beta$ is the restriction map ${\rm{Aut}}(F/k) \rightarrow {\rm{Aut}}(L/k)$.
\end{corollary}
\noindent
Note that Corollary \ref{coro:intro} is new already for $k=\mathbb{Q}$. An even more general, yet more technical version of Theorem \ref{thm:intro} is given in Theorem \ref{thm:main}, where the extension $L/k(T)$ is not necessarily Galois and the epimorphism $\alpha$ does not necessarily have a section, and accordingly Corollary \ref{coro:intro} holds in this greater generality (see Corollary \ref{coro:spec}).
\vspace{3mm}
{\bf{Acknowledgements.}} We wish to thank Lior Bary-Soroker for suggesting to use \cite[Proposition 16.11.1]{FJ08} in the proof of Proposition \ref{lemma:0}, and Dan Haran for his help with Proposition \ref{prop:pac}.
\section{Terminology and notation} \label{sec:basics}
The aim of this section is to state some terminology and notation on restriction maps, finite embedding problems, and specializations of function field extensions in our non-Galois context. For this section, let $k$ be an arbitrary field and $\overline{k}$ an algebraic closure of $k$.
\subsection{Restriction maps} \label{ssec:rm}
Let $L/k$ be a finite separable extension. Recall that ${\rm{Aut}}(L/k)$ is the automorphism group of $L/k$, i.e., the group of all isomorphisms of the field $L$ which fix $k$ pointwise. If $L/k$ is Galois, then this group is the Galois group of $L/k$, denoted by ${\rm{Gal}}(L/k)$.
Let $L/k$ and $F/M$ be two finite separable extensions with $k \subseteq M$ and $L \subseteq F$. Let $H$ be the subgroup of ${\rm{Aut}}(F/M)$ consisting of all elements fixing $L$ setwise. The restriction map
$$\left \{ \begin{array} {ccc}
H & \longrightarrow & {\rm{Aut}}(L/k) \\
\sigma & \longmapsto & \sigma{|_L}
\end{array} \right. $$
shall be denoted by ${\rm{res}}_{L/k}^{F/M}$. Note that the map ${\rm{res}}_{L/k}^{F/M}$ is not necessarily surjective. Moreover, if $N/K$ is a finite separable extension such that $M \subseteq K$ and $F \subseteq N$, then the composed map
$${\rm{res}}_{L/k}^{F/M} \circ {\rm{res}}^{N/K}_{F/M}$$
is not defined in general, and if it is, it is not necessarily equal to ${\rm{res}}^{N/K}_{L/k}$. However, these two properties hold if the domains of ${\rm{res}}_{L/k}^{F/M}$ and ${\rm{res}}^{N/K}_{F/M}$ are ${\rm{Aut}}(F/M)$ and ${\rm{Aut}}(N/K)$, respectively (in particular, if $L/k$ and $F/M$ are Galois).
Let $L/k$ be a finite separable extension and $\widehat{L}$ the Galois closure of $L$ over $k$. If $H$ is the normalizer of ${\rm{Gal}}(\widehat{L}/L)$ in ${\rm{Gal}}(\widehat{L}/k)$, then ${\rm{res}}_{L/k}^{\widehat{L}/k}$ has domain $H$, it is surjective, and it induces an isomorphism between the quotient group $H/{\rm{Gal}}(\widehat{L}/L)$ and the group ${\rm{Aut}}(L/k)$. Hence, if $M$ is any field containing $k$ that is linearly disjoint from $\widehat{L}$ over $k$, then the domain of ${\rm{res}}_{L/k}^{LM/M}$ is the whole automorphism group ${\rm{Aut}}(LM/M)$ and this map is an isomorphism from ${\rm{Aut}}(LM/M)$ to ${\rm{Aut}}(L/k)$ \footnote{If we only assume that the fields $L$ and $M$ are linearly disjoint over $k$, then the groups ${\rm{Aut}}(LM/M)$ and ${\rm{Aut}}(L/k)$ are not isomorphic in general. For example, $\Qq(\sqrt[3]{2})$ and $\Qq(e^{2i\pi/3})$ are linearly disjoint over $\Qq$ (as they have coprime degrees). But ${\rm{Aut}}(\Qq(\sqrt[3]{2}) / \Qq)$ is trivial while ${\rm{Gal}}(\Qq(\sqrt[3]{2},e^{2i\pi/3}) / \Qq(e^{2i\pi/3}))$ has order 3.}. In particular, if ${\bf{T}}$ is a finite tuple of algebraically independent indeterminates, then this holds for $M=k({\bf{T}})$, that is, the restriction map ${{\rm{res}}_{L/k}^{L({\bf{T}})/k({\bf{T}})}} : {\rm{Aut}}(L({\bf{T}})/k({\bf{T}})) \rightarrow {\rm{Aut}}(L/k)$ is an isomorphism.
\subsection{Finite embedding problems} \label{ssec:fep}
The terminology below extends standard terminology like in \cite[Definition 16.4.1]{FJ08}.
A {\it{finite embedding problem over $k$}} is an epimorphism $\alpha : G \rightarrow {\rm{Aut}}(L/k)$, where $G$ is a finite group and $L/k$ a finite separable extension. Say that $\alpha$ is {\it{split}} if there is an embedding $\alpha' : {\rm{Aut}}(L/k) \rightarrow G$ such that $\alpha \circ \alpha' = {\rm{id}}_{{\rm{Aut}}(L/k)}$, and that $\alpha$ is {\it{Galois}} if $L/k$ is Galois. A {\it{solution}} to $\alpha$ is an isomorphism $\beta : {\rm{Aut}}(E/k) \rightarrow G$, where $E/k$ is a finite separable extension such that $L \subseteq E$, that satisfies $\alpha \circ \beta = {\rm{res}}_{L/k}^{E/k}$ \footnote{This implies in particular that the domain of the restriction map ${\rm{res}}_{L/k}^{E/k}$ is the whole group ${\rm{Aut}}(E/k)$ and that this map is surjective.}. Refer to $E$ as the {\it{solution field}} associated with $\beta$. In the case $\alpha$ is Galois, the solution $\beta$ is {\it{Galois}} if the extension $E/k$ is Galois.
Let $\alpha: G \rightarrow {\rm{Aut}}(L/k)$ be a finite embedding problem over $k$ and let $\widehat{L}$ be the Galois closure of $L$ over $k$. If $M$ denotes any field containing $k$ which is linearly disjoint from $\widehat{L}$ over $k$, then the finite embedding problem
$${{\rm{res}}_{L/k}^{LM/M}}^{-1} \circ \alpha : G \rightarrow {\rm{Aut}}(LM/M)$$
over $M$ is denoted by $\alpha_{M}$ \footnote{As seen in \S\ref{ssec:rm}, the restriction map ${\rm{res}}_{L/k}^{LM/M} : {\rm{Aut}}(LM/M) \rightarrow {\rm{Aut}}(L/k)$ is a well-defined isomorphism.}. If $\beta$ is a solution to $\alpha_M$ with solution field denoted by $E$, then
$${\rm{res}}^{E/M}_{LM/M} = \alpha_M \circ \beta= {{\rm{res}}_{L/k}^{LM/M}}^{-1} \circ \alpha \circ \beta.$$
In particular, one has
$\alpha \circ \beta= {\rm{res}}_{L/k}^{LM/M} \circ {\rm{res}}^{E/M}_{LM/M} = {\rm{res}}^{E/M}_{L/k}.$
Given an indeterminate $T$, let $\alpha: G \rightarrow {\rm{Aut}}(L/k(T))$ be a finite embedding problem over $k(T)$. A solution to $\alpha$ is {\it{regular}} if the associated solution field $E$ satisfies $E \cap \overline{k} = L \cap \overline{k}$.
Let $\alpha : G \rightarrow {\rm{Aut}}(L/k)$ be a finite embedding problem over $k$. A {\it{geometric solution}} to $\alpha$ is a regular solution $\beta$ to $\alpha_{k(T)}$. Furthermore, in the case $L/k$ is Galois, we shall say that $\beta$ is a {\it{geometric Galois solution}} to $\alpha$ if $\beta$ is a regular Galois solution to $\alpha_{k(T)}$.
\subsection{Specializations} \label{ssec:ffe}
For more on the following, we refer to \cite[\S1.9]{Deb09} and \cite[\S2.1.4]{DL13}. Let ${\bf{T}}=(T_1, \dots, T_n)$ be an $n$-tuple of algebraically independent indeterminates ($n \geq 1$), $E/k({\bf{T}})$ a finite separable extension, and $\widehat{E}$ the Galois closure of $E$ over $k({\bf{T}})$.
Let $\widehat{B}$ be the integral closure of $k[{\bf{T}}]$ in $\widehat{E}$. For ${\bf{t}}=(t_1, \dots, t_n) \in k^n$, the residue field of $\widehat{B}$ at a maximal ideal $\mathcal{P}$ lying over the ideal $\langle {\bf{T}} - {\bf{t}} \rangle$ of $k[{\bf{T}}]$ generated by $T_1-t_1, \dots, T_n-t_n$ is denoted by $\widehat{E}_{\bf{t}}$ and the extension $\widehat{E}_{\bf{t}}/k$ is called the {\it{specialization}} of $\widehat{E}/k({\bf{T}})$ at ${\bf{t}}$. As $\widehat{E}/k({\bf{T}})$ is Galois, the field $\widehat{E}_{\bf{t}}$ does not depend on $\mathcal{P}$ and the extension $\widehat{E}_{\bf{t}}/k$ is finite and normal. Moreover, for ${\bf{t}}$ outside a Zariski-closed proper subset (depending only on $\widehat{E}/k({\bf{T}})$), the extension $\widehat{E}_{\bf{t}}/k$ is Galois and its Galois group is a subgroup of ${\rm{Gal}}(\widehat{E}/k({\bf{T}}))$, namely the decomposition group of $\widehat{E}/k({\bf{T}})$ at $\mathcal{P}$. If $P({\bf{T}},X) \in k[{\bf{T}}][X]$ is the minimal polynomial of a primitive element of $\widehat{E}$ over $k({\bf{T}})$, assumed to be integral over $k[{\bf{T}}]$, then, for ${\bf{t}} \in k^n$, the field $\widehat{E}_{\bf{t}}$ contains a root $x_{\bf{t}}$ of $P({\bf{t}},X)$. In particular, if $P({\bf{t}},X)$ is irreducible over $k$ and separable, then $\widehat{E}_{\bf{t}} = k(x_{\bf{t}})$ and $[\widehat{E}_{\bf{t}} : k]=[\widehat{E}:k({\bf{T}})]$, the extension $\widehat{E}_{\bf{t}}/k$ is Galois, and $\widehat{E}_{\bf{t}}$ is the splitting field over $k$ of $P({\bf{t}},X)$.
Let $B$ be the integral closure of $k[{\bf{T}}]$ in ${E}$. For ${\bf{t}} \in k^n$, let $\mathcal{P}_1, \dots, \mathcal{P}_s$ be the maximal ideals of $B$ lying over $\langle {\bf{T}} - {\bf{t}} \rangle$. For $i \in \{1, \dots,s\}$, the residue field of $B$ at $\mathcal{P}_i$ is denoted by $E_{{\bf{t}},i}$ and the finite extension ${E}_{{\bf{t}},i}/k$ is called a {\it{specialization}} of ${E}/k({\bf{T}})$ at ${\bf{t}}$. If $\widehat{E}_{\bf{t}}/k$ is Galois, then ${E}_{{\bf{t}},1}/k, \dots, {E}_{{\bf{t}},s}/k$ are separable. Moreover, if $[\widehat{E}_{\bf{t}} : k]=[\widehat{E}:k({\bf{T}})]$, then $s=1$, in which case the field $E_{{\bf{t}},1}$ is simply denoted by $E_{{\bf{t}}}$, one has $[{E}_{\bf{t}} : k]=[{E}:k({\bf{T}})]$, and if ${E}_{{\bf{t}}}/k$ is separable, then $\widehat{E}_{\bf{t}}$ is the Galois closure of $E_{\bf{t}}$ over $k$.
\section{Two general results about finite embedding problems} \label{sec:series}
This section is devoted to two general results about finite embedding problems (Galois or not) that will be used in \S\ref{sec:proof} to prove Theorem \ref{thm:main} and Corollary \ref{coro:spec}; see Propositions \ref{lemma:0} and \ref{prop:sssecI}. For this section, let $k$ be an arbitrary field.
\subsection{Specializing indeterminates} \label{ssec:spe_ind}
Let ${\bf{T}}=(T_1, \dots, T_n)$ be a tuple of algebraically independent indeterminates ($n \geq 1$).
Given a finite Galois embedding problem $\alpha : G \rightarrow {\rm{Gal}}(L/k)$ over $k$, it is classical that if $k$ is Hilbertian and $\alpha_{k({\bf{T}})}$ has a Galois solution whose solution field is denoted by $E$, then $\alpha$ has a Galois solution whose associated solution field is a suitable specialization of $E$. See, e.g., \cite[Lemma 16.4.2]{FJ08} for more details. We now extend this to our non-Galois context:
\begin{proposition} \label{lemma:0}
Let $\alpha:G \rightarrow {\rm{Aut}}(L/k)$ be a finite embedding problem over $k$. Suppose the finite embedding problem $\alpha_{k({\bf{T}})}$ has a solution, whose solution field is denoted by $E$.
\vspace{0.5mm}
\noindent
{\rm{(1)}} Suppose $k$ is Hilbertian. Then, for each ${\bf{t}}$ in a Zariski-dense subset of $k^n$, the embedding problem $\alpha$ has a solution whose associated solution field is $E_{{\bf{t}}}$.
\vspace{0.5mm}
\noindent
{\rm{(2)}} Suppose $k =\kappa(T)$ for some field $\kappa$ (and $T$ an indeterminate) and $E \cap \overline{\kappa} = L \cap \overline{\kappa}$. Then there exists ${\bf{t}} \in k^n$ such that $E_{\bf{t}}$ is the solution field of a regular solution to $\alpha$.
\end{proposition}
\begin{proof}
We break the proof into three parts. Let $\widehat{E}$ denote the Galois closure of $E$ over $k({\bf{T}})$.
Firstly, let ${\bf{t}} \in k^n$. We claim that if $[\widehat{E}_{\bf{t}} : k]=[\widehat{E}:k({\bf{T}})]$ and $\widehat{E}_{\bf{t}}/k$ is Galois, then the field $E_{{\bf{t}}}$, which is well-defined (see \S\ref{ssec:ffe}), is the solution field of a solution to $\alpha$. From our assumptions on ${\bf{t}}$, there exists an isomorphism
$$\psi_{\bf{t}}:{\rm{Gal}}(\widehat{E}_{\bf{t}}/k) \rightarrow {\rm{Gal}}(\widehat{E}/k({\bf{T}}))$$
such that the following two conditions hold:
\begin{equation} \label{eq1}
\psi_{\bf{t}}({\rm{Gal}}(\widehat{E}_{\bf{t}} / E_{\bf{t}}))={\rm{Gal}}(\widehat{E}/E),
\end{equation}
\begin{equation} \label{eq1.1}
\psi_{\bf{t}}(\sigma)(x) = \sigma(x) \, \, \, {\rm{for}} \, \, \, {\rm{every}} \, \, \, \sigma \in {\rm{Gal}}(\widehat{E}_{\bf{t}}/k) \, \, \, {\rm{and}} \, \, \, {\rm{every}} \, \, \, x \in L.
\end{equation}
See \cite[Lemma 16.1.1]{FJ08} and \cite[\S1.9]{Deb09} for more details. Denote the normalizer of ${\rm{Gal}}(\widehat{E}/E)$ in ${\rm{Gal}}(\widehat{E}/k({\bf{T}}))$ by $H$ and that of ${\rm{Gal}}(\widehat{E}_{\bf{t}}/E_{\bf{t}})$ in ${\rm{Gal}}(\widehat{E}_{\bf{t}}/k)$ by $H_{\bf{t}}$. From \eqref{eq1},
\begin{equation} \label{eq1.5}
\psi_{\bf{t}}(H_{\bf{t}}) = H.
\end{equation}
As the domains of the maps ${\rm{res}}_{E/k({\bf{T}})}^{\widehat{E}/k({\bf{T}})}$, ${\rm{res}}_{L({\bf{T}})/k({\bf{T}})}^{E/k({\bf{T}})}$, and ${\rm{res}}_{L/k}^{L({\bf{T}})/k({\bf{T}})}$ are $H$, ${\rm{Aut}}(E/k({\bf{T}}))$, and ${\rm{Aut}}(L({\bf{T}})/k({\bf{T}}))$, respectively, $H$ is contained in the domain of ${\rm{res}}_{L/k}^{\widehat{E}/k({\bf{T}})}$. Combine this, \eqref{eq1.1}, and \eqref{eq1.5} to get that $H_{\bf{t}}$ is contained in the domain of ${\rm{res}}_{L/k}^{\widehat{E}_{\bf{t}}/k}$. Hence, as ${\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{E_{\bf{t}}/k}$ is surjective, the domain of the map ${\rm{res}}^{E_{\bf{t}}/k}_{L/k}$, which is well-defined as $L \subseteq E_{\bf{t}}$, is the whole automorphism group ${\rm{Aut}}(E_{\bf{t}}/k)$ \footnote{We use here a special case of the following easy claim: if $L/k$, $F/M$, and $N/K$ are three finite separable extensions with $k \subseteq M \subseteq K$ and $L \subseteq F \subseteq N$, then one has ${\rm{res}}^{N/K}_{F/M}(H^{N/K}_{L/k} \cap H^{N/K}_{F/M}) \subseteq H_{L/k}^{F/M}$, where $H^{N/K}_{F/M}$, $H^{N/K}_{L/k}$, and $H_{L/k}^{F/M}$ denote the domains of the maps ${\rm{res}}^{N/K}_{F/M}$, ${\rm{res}}^{N/K}_{L/k}$, and ${\rm{res}}^{F/M}_{L/k}$, respectively.}. Moreover, the domain of the map
${\rm{res}}^{{E}/k({\bf{T}})}_{L/k}$ is ${\rm{Aut}}({E}/k({\bf{T}}))$ (since those of ${\rm{res}}^{{E}/k({\bf{T}})}_{L({\bf{T}})/k({\bf{T}})}$ and ${\rm{res}}^{L({\bf{T}})/k({\bf{T}})}_{L/k}$ are ${\rm{Aut}}(E/k({\bf{T}}))$ and ${\rm{Aut}}(L({\bf{T}})/k({\bf{T}}))$, respectively). Now, use \eqref{eq1} and the surjectivity of ${\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{E_{\bf{t}}/k}$ and ${\rm{res}}^{\widehat{E}/k({\bf{T}})}_{{E}/k({\bf{T}})}$ to get that the isomorphism ${\psi_{\bf{t}}}|_{{H_{\bf{t}}}}$ induces an isomorphism $h_{\bf{t}} : {\rm{Aut}}(E_{\bf{t}}/k) \rightarrow {\rm{Aut}}(E/k({\bf{T}}))$ that satisfies
\begin{equation} \label{equ}
h_{\bf{t}} \circ {\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{{E}_{\bf{t}}/k} = {\rm{res}}^{\widehat{E}/k({\bf{T}})}_{{E}/k({\bf{T}})} \circ {\psi_{\bf{t}}}{|_{H_{\bf{t}}}}.
\end{equation}
\vspace{-4mm}
\begin{figure}[h!]
\[ \xymatrix{
1 \ar[r] & {\rm{Gal}}(\widehat{E}_{\bf{t}}/E_{\bf{t}}) \ar[r] \ar[dddd]_{\psi_{\bf{t}}|_{{\rm{Gal}}(\widehat{E}_{\bf{t}}/E_{\bf{t}})}} & H_{\bf{t}} \ar[rrdd]^{{{{{\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{L/k}}}}{| _{H_{\bf{t}}}}} \ar[dddd]_{{\psi_{\bf{t}}}{|_{H_{\bf{t}}}}} \ar[rrrr]^{{\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{{E}_{\bf{t}}/k}} & & & & {\rm{Aut}}(E_{\bf{t}}/k) \ar[ldld]_{{\rm{res}}^{{E}_{\bf{t}}/k}_{L/k}} \ar@{.>}[dddd]^{h_{\bf{t}}} \ar[r] & 1 \\
& & & & \circlearrowleft & & & \\
& & & \circlearrowleft & {\rm{Aut}}(L/k) & & & \\
& & & & \circlearrowleft & & & \\
1 \ar[r] & {\rm{Gal}}(\widehat{E}/E) \ar[r]& H \ar[uurr]_{{{{{\rm{res}}^{\widehat{E}/k({\bf{T}})}_{L/k}}}}{|_{H}}} \ar[rrrr]_{{\rm{res}}^{\widehat{E}/k({\bf{T}})}_{{E}/k({\bf{T}})}} & & & & {\rm{Aut}}(E/k({\bf{T}})) \ar[lulu]^{{\rm{res}}^{{E}/k({\bf{T}})}_{L/k}} \ar[r] & 1
}
\]
\caption{Group homomorphisms} \label{Figu}
\end{figure}
\noindent
By \eqref{equ}, the surjectivity of ${\rm{res}}^{\widehat{E}_{\bf{t}}/k}_{E_{\bf{t}}/k}$, and the commutativity of the three triangles in Figure \ref{Figu} (denoted by $\circlearrowleft$), one has
\begin{equation} \label{eq1.6}
{\rm{res}}^{{E}/k({\bf{T}})}_{L/k} \circ h_{\bf{t}} = {\rm{res}}^{E_{\bf{t}}/k}_{L/k}.
\end{equation}
Finally, let $\beta : {\rm{Aut}}(E/k({\bf{T}})) \rightarrow G$ be a solution to $\alpha_{k({\bf{T}})}$ whose solution field is $E$. Consider the isomorphism
$\beta \circ h_{\bf{t}} : {\rm{Aut}}(E_{\bf{t}}/k) \rightarrow G.$
By \eqref{eq1.6} and as
$\alpha \circ \beta= {{\rm{res}}_{L/k}^{E/k({\bf{T}})}},$ one has
$$\alpha \circ \beta \circ h_{\bf{t}} = {\rm{res}}^{E_{\bf{t}}/k}_{L/k},$$
as needed.
Secondly, we prove (1). Let $P({\bf{T}},X) \in k[{\bf{T}}][X]$ be the minimal polynomial of a primitive element of $\widehat{E}$ over $k({\bf{T}})$, assumed to be integral over $k[{\bf{T}}]$. As $k$ has been assumed to be Hilbertian, for each ${\bf{t}}$ in a Zariski-dense subset of $k^n$, the polynomial $P({\bf{t}},X)$ is irreducible over $k$ and separable. In particular, one has $[\widehat{E}_{\bf{t}} : k]=[\widehat{E}:k({\bf{T}})]$ and the extension ${\widehat{E}}_{\bf{t}}/k$ is Galois (see \S\ref{ssec:ffe}). Then (1) follows from the first part of the proof.
Thirdly, we prove (2). By the first part of the proof, it suffices to find ${\bf{t}} \in k^n$ such that $[\widehat{E}_{\bf{t}} : k]=[\widehat{E}:k({\bf{T}})]$, the extension $\widehat{E}_{\bf{t}}/k$ is Galois, and $E_{\bf{t}} \cap \overline{\kappa} = L \cap \overline{\kappa}$. Note that the existence of ${\bf{t}}$ such that the first two conditions hold follows immediately from the second part of the proof and the fact that $\kappa(T)$ is Hilbertian. To get the extra conclusion that at least one solution is regular, we need to specialize $T_1, \dots, T_n$ suitably. Set $F= \widehat{E} \cap \overline{\kappa}$ and let $P(T, {\bf{T}}, X) \in F[T, {\bf{T}}, X]$ be the minimal polynomial of a primitive element of $\widehat{E}$ over $F(T, {\bf{T}})$, assumed to be integral over $F[T, {\bf{T}}]$. Moreover, set $F'=E \cap \overline{\kappa}$ and let $Q(T, {\bf{T}}, X) \in F'[T, {\bf{T}}, X]$ be the minimal polynomial of a primitive element of ${E}$ over $F'(T, {\bf{T}})$, assumed to be integral over $F'[T, {\bf{T}}]$. Clearly, $P(T, {\bf{T}}, X)$ and $Q(T, {\bf{T}}, X)$ are irreducible over $\overline{\kappa}(T, {\bf{T}})$. Then apply either \cite[Proposition 13.2.1]{FJ08} and an induction on $n$ if $\kappa$ is infinite or \cite[Theorem 13.4.2 and Proposition 16.11.1]{FJ08} if $\kappa$ is finite to get the existence of ${\bf{t}} = (t_1, \dots, t_n) \in k^n$ such that $P(T, {\bf{t}}, X) \in F(T)[X]$ and $Q(T, {\bf{t}}, X) \in F'(T)[X]$ are irreducible over $\overline{\kappa}(T)$ and separable. Let $M$ be the field generated over $F(T)$ by one root of $P(T, {\bf{t}}, X)$. As this polynomial is irreducible over $F(T)$, one has $[M:\kappa(T)] = [\widehat{E}:\kappa(T, {\bf{T}})]$ and, by \S\ref{ssec:ffe}, the fields $M$ and $\widehat{E}_{\bf{t}}$ coincide (in particular, $\widehat{E}_{\bf{t}}/k$ is Galois). Then, by \S\ref{ssec:ffe}, the field $E_{\bf{t}}$ is well-defined. Moreover, $E_{\bf{t}}$ contains a root $x$ of $Q(T, {\bf{t}}, X)$. 
As this polynomial is irreducible over $F'(T)$, one has $E_{\bf{t}} = F'(T, x)$. Then combine this equality and the irreducibility of $Q(T, {\bf{t}}, X)$ over $\overline{\kappa}(T)$ to get $E_{\bf{t}} \cap \overline{\kappa} = F'= E \cap \overline{\kappa}$, thus ending the proof.
\end{proof}
\subsection{On the existence of geometric solutions after base change}
The aim of the next proposition is to provide a geometric Galois solution to any given finite Galois embedding problem over any given field, up to making a regular finitely generated base change.
Recall that a field extension $k_0/k$ is {\it{regular}} if $k_0/k$ is separable (in the sense of not necessarily algebraic extensions; see, e.g., \cite[\S2.6]{FJ08}) and the equality $k_0 \cap \overline{k}=k$ holds.
\begin{proposition} \label{prop:sssecI}
Let $\alpha: G \rightarrow {\rm{Gal}}(L/k)$ be a finite Galois embedding problem over $k$. Then there exists a regular finitely generated extension $k_0/k$ such that the finite Galois embedding problem $\alpha_{k_0}$ has a geometric Galois solution\footnote{Since $k_0/k$ is regular, the fields $L$ and $k_0$ are linearly disjoint over $k$, thus making $\alpha_{k_0}$ well-defined.}.
\end{proposition}
The proof requires the next two results, which are more or less known to experts. The first one, which is Proposition \ref{prop:sssecI} for PAC fields, shows that one can take $k_0=k$ in this case.
Recall that $k$ is said to be {\it{Pseudo Algebraically Closed}} (PAC) if every non-empty geometrically irreducible $k$-variety has a Zariski-dense set of $k$-rational points. See, e.g., \cite{FJ08} for more on PAC fields.
\begin{proposition} \label{prop:pac}
Assume $k$ is PAC. Then every finite Galois embedding problem over $k$ has a geometric Galois solution.
\end{proposition}
\begin{proof}[Comments on proof]
It is a classical and deep result in field arithmetic that every finite Galois embedding problem over an arbitrary PAC Hilbertian field $k$ has a Galois solution; see \cite[Theorem A]{FV92}, \cite{Pop96}, and \cite[Theorem 5.10.3]{Jar11}. To our knowledge, the natural strengthening we consider in Proposition \ref{prop:pac} does not appear in the literature. Recall that the classical proof of the former consists in using the {\it{projectivity}} of the absolute Galois group of $k$ to reduce to solving some finite Galois split embedding problem over $k$ (see, e.g., \cite[Theorem 11.6.2 and Proposition 22.5.9]{FJ08} for more details), which can then be done by making use of \cite[Main Theorem A]{Pop96} and the Galois analogue of Proposition \ref{lemma:0}(1). We give in Appendix \ref{app} a similar and self-contained argument in our geometric context, with the necessary adjustments due to the higher generality.
\end{proof}
\begin{remark}
It is not true in general that if the field $k$ is PAC and $T$ denotes an indeterminate, then every finite Galois embedding problem over $k(T)$ (not necessarily ``constant") has a (regular) Galois solution. For example, if $k$ is of characteristic zero and not algebraically closed, then the absolute Galois group of $k(T)$ is not projective, cf. \cite[Chapter II, Proposition 11]{Ser97}\footnote{On the other hand, if the field $k$ is separably closed, then the absolute Galois group of $k(T)$ is projective; see, e.g., \cite[Proposition 9.4.6]{Jar11}. In particular, by \cite[Proposition 22.5.9]{FJ08} and as Conjecture \ref{conj:DD1} holds over separably closed fields, all finite Galois embedding problems over $k(T)$ have regular Galois solutions.}.
\end{remark}
The second statement we need shows that if one can provide a geometric Galois solution to a given finite Galois embedding problem after some linearly disjoint base change, then one can further require such a base change to be finitely generated:
\begin{lemma} \label{prop:desc}
Let $\alpha: G \rightarrow {\rm{Gal}}(L/k)$ be a finite Galois embedding problem over $k$. Suppose there exists a field extension $M/k$ such that $M$ and $L$ are linearly disjoint over $k$ and such that $\alpha_M$ has a geometric Galois solution. Then there exists a finitely generated subextension $k_0/k$ of $M/k$ such that $\alpha_{k_0}$ has a geometric Galois solution\footnote{For every intermediate field $k \subseteq k_0 \subseteq M$, the fields $k_0$ and $L$ are linearly disjoint over $k$, thus making $\alpha_{k_0}$ well-defined.}.
\end{lemma}
\begin{proof}[Comments on proof]
For split embedding problems, the lemma may be proved by following the lines of Part B of the proof of \cite[Lemma 5.9.1]{Jar11}. We offer in Appendix \ref{app} a similar and self-contained argument which applies to any finite Galois embedding problem.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:sssecI}]
Let $M$ be an arbitrary PAC field which is regular over $k$. See, e.g., \cite[Proposition 13.4.6]{FJ08} for an example of such a field. Consider the finite Galois embedding problem $\alpha_M$ (which is well-defined from the regularity condition). As $M$ is PAC, we may apply Proposition \ref{prop:pac} to get that $\alpha_M$ has a geometric Galois solution. It then remains to apply Lemma \ref{prop:desc} to finish the proof of Proposition \ref{prop:sssecI}.
\end{proof}
\begin{remark}
With the notation of Proposition \ref{prop:sssecI}, if $\alpha$ splits, then it is enough that the field $M$ we consider in the proof is ample (instead of PAC) and one may replace Proposition \ref{prop:pac} by \cite[Main Theorem A]{Pop96}. In particular, if $X$ denotes an indeterminate, taking $M$ equal to the Henselization of $k(X)$ with respect to the $X$-adic valuation yields the extra conclusion that $k_0$ can be chosen to have transcendence degree at most 1 over $k$.
\end{remark}
\section{Proof of Theorem \ref{thm:intro}} \label{sec:proof}
The aim of this section is to prove the following result, which generalizes Theorem \ref{thm:intro}:
\begin{theorem} \label{thm:main}
Let $k$ be an arbitrary field and $T$ an indeterminate. Then each finite embedding problem over $k(T)$ has a regular solution.
\end{theorem}
By combining Proposition \ref{lemma:0}(1) and Theorem \ref{thm:main}, we immediately get the following corollary, which generalizes Corollary \ref{coro:intro}:
\begin{corollary} \label{coro:spec}
Every finite embedding problem over a Hilbertian field has a solution.
\end{corollary}
We split the proof of Theorem \ref{thm:main} into two parts. First, we claim it suffices to ``regularly" solve embedding problems over arbitrary fields after adjoining finitely many indeterminates:
\begin{proposition} \label{thm:cst}
Let $k$ be a field and $\alpha : G \rightarrow {\rm{Aut}}(L/k)$ a finite embedding problem over $k$. Then there exists a finite non-empty tuple ${\bf{T}}$ of algebraically independent indeterminates such that $\alpha_{k({\bf{T}})}$ has a solution, whose associated solution field $E$ satisfies $E \cap \overline{k} = L$.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{thm:main} under Proposition \ref{thm:cst}]
Let $k$ be a field, $T$ an indeterminate, and $\alpha : G \rightarrow {\rm{Aut}}(L/k(T))$ a finite embedding problem over $k(T)$. By Proposition \ref{thm:cst}, there exist a finite non-empty tuple ${\bf{T}}$ of algebraically independent indeterminates and a solution to the finite embedding problem $\alpha_{k(T,{\bf{T}})}$, whose solution field $E$ satisfies $E \cap \overline{k(T)}=L$. In particular, one has $E \cap \overline{k} = L \cap \overline{k}$. It then remains to apply Proposition \ref{lemma:0}(2) to get Theorem \ref{thm:main}.
\end{proof}
We now proceed to prove Proposition \ref{thm:cst} and start by recalling the following result, which is \cite[Proposition 2.3]{LP18}:
\begin{proposition} \label{lemma 11}
Given a field $k$ and $y \in k$, set $P_y(T,X)=X^3 + (T-y) X + (T-y) \in k[T][X].$ Then the polynomial $P_y(T,X)$ is irreducible, separable, and of Galois group $S_3$ over $k(T)$. Moreover, if $k_y$ denotes the field generated over $k(T)$ by any given root of $P_y(T,X)$, then, given $y_1 \not=y_2$ in $k$, one has $k_{y_1} \not= k_{y_2}$.
\end{proposition}
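This sanity check is not part of the cited result \cite[Proposition 2.3]{LP18}; assuming the characteristic of $k$ is different from $2$ and $3$, the irreducibility and the Galois group in the proposition can be verified directly:

```latex
% Sanity check (not from [LP18]; assumes char(k) different from 2 and 3).
% Irreducibility: P_y(T,X) is Eisenstein at the prime (T-y) of k[T], as the
% coefficients 0, T-y, T-y are divisible by T-y and the constant term is
% not divisible by (T-y)^2. For the Galois group, the discriminant of the
% cubic X^3 + pX + q is -4p^3 - 27q^2, so with p = q = T - y,
\[
\operatorname{disc}_X P_y(T,X) \;=\; -4(T-y)^3 - 27(T-y)^2
\;=\; -(T-y)^2\,\bigl(4(T-y)+27\bigr),
\]
% which differs from -(4(T-y)+27) by the square (T-y)^2; being of odd
% degree in T, the factor -(4(T-y)+27) is not a square in k(T). A
% separable irreducible cubic whose discriminant is not a square has
% Galois group S_3, as claimed.
```

In characteristic $2$ or $3$ this shortcut breaks down, and one needs the argument of \cite{LP18} itself.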
Let $k$ be a field and $\alpha : G \rightarrow {\rm{Aut}}(L/k)$ a finite embedding problem over $k$. Set $L'=L^{{\rm{Aut}}(L/k)}$. As ${\rm{Aut}}(L/k) = {\rm{Gal}}(L/L')$, the finite embedding problem $\alpha$ over $k$ can be seen as a finite Galois embedding problem $\alpha' : G \rightarrow {\rm{Gal}}(L/L')$ over $L'$. By Proposition \ref{prop:sssecI}, there exists an extension $k_0$ of $L'$ that is regular and finitely generated, and such that the finite Galois embedding problem $\alpha'_{k_0}$ has a geometric Galois solution. That is, there exist an indeterminate $Z$, a finite Galois extension $N/k_0(Z)$ such that $Lk_0(Z) \subseteq N$ and $N \cap \overline{k_0}=Lk_0$, and an isomorphism
$\beta : {\rm{Gal}}(N/k_0(Z)) \rightarrow G$
such that $\alpha' \circ \beta = {\rm{res}}^{N/k_0(Z)}_{L/L'}$, i.e.,
\begin{equation} \label{!!}
\alpha \circ \beta = {\rm{res}}^{N/k_0(Z)}_{L/k}.
\end{equation}
Let ${\bf{Y}}$ be a separating transcendence basis of $k_0$ over $L'$. Since the extensions $k_0/L'({\bf{Y}})$ and $L'({\bf{Y}})/k({\bf{Y}})$ are finite and separable, the same is true for the extension $k_0/k({\bf{Y}})$. Hence, there exists $y \in k_0$ such that $k_0=k({\bf{Y}},y).$ Let $T$ be an extra indeterminate. Then consider
$$P_{y}(T,X) = X^3+(T-y)X + (T-y) \in k_0[T][X].$$
Let $x$ be a root of $P_{y}(T,X)$ and denote the field $N(T,x)$ by $E$. By Proposition \ref{lemma 11}, one has
$$3=[k_0(T,x) : k_0(T)]=[k_0(Z,T,x) : k_0(Z,T)].$$
\vspace{-12mm}
\begin{figure}[h!]
\[ \xymatrix{
& & & & N(T,x) \ar@{-}[d] \ar@{=}[r] & E \ar@{-}[ddd] \\
& & & N(T) \ar@{-} @/_3pc/[dd]_{G} \ar@{-}[ru] \ar@{-}[d] & Lk_0(Z,T,x) \ar@{-}[d] & \\
& & N \ar@{-} @/_3pc/[dd]_{G} \ar@{-}[ru] \ar@{-}[d] & Lk_0(Z,T) \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[ru] \ar@{-}[d] & k_0(Z,T,x) & \\
& & Lk_0(Z) \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[ru] \ar@{-}[d] & k_0(Z,T) \ar@{-}[ru]^3 & & L(Z,{\bf{Y}},T) \ar@{-}[dd] \ar@{-} @/_1pc/[dd]_{\Gamma} \\
L \ar@{-} @/_1pc/[dd]_{\Gamma} \ar@{=}[r] \ar@{-}[dd] & L \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[ru] \ar@{-}[d] & k_0(Z) \ar@{-}[ru] & & & \\
& L' \ar@{-}[ru] & & & & k(Z,{\bf{Y}},T) \\
k \ar@{-}[ru] & & & & &
}
\]
\caption{Construction of the function field $E$}\label{Fig?}
\end{figure}
Now, we need the following lemma:
\begin{lemma} \label{lem:sssecIII}
One has ${\rm{Aut}}(E/k(Z,{\bf{Y}},T)) = {\rm{Gal}}(E/k_0(Z,T,x)).$
\end{lemma}
\begin{proof}
We break the proof into two parts.
Firstly, one has ${\rm{Aut}}(E/k(Z,{\bf{Y}},T)) = {\rm{Aut}}(E/k_0(Z, T)).$
Indeed, recall that $k_0(Z,T) = k({\bf{Y}}, y, Z, T)$. Let $\sigma \in {\rm{Aut}}(E/k(Z, {\bf{Y}}, T)) \setminus {\rm{Aut}}(E/k_0(Z, T))$. Then one has $\sigma(y)\not=y$ and $\sigma(x)$ is a root of
$$P_{\sigma(y)}(T,X)=X^3 + (T-\sigma(y))X + (T-\sigma(y)) \in \widehat{N}[T][X],$$
where $\widehat{N}$ denotes the Galois closure of $N$ over $k(Z, {\bf{Y}})$. Proposition \ref{lemma 11} then gives
$\widehat{N}(T, \sigma(x)) \not=\widehat{N}(T,x).$
As $\widehat{N}$ is the compositum of the $k(Z, {\bf{Y}})$-conjugates of $N$, one has
${N}(T, \sigma(x)) \not={N}(T,x).$ Since $\sigma(E) \subseteq E$, we get that ${N}(T, \sigma(x))$ is strictly contained in ${N}(T,x)$ (which is $E$). However, by Proposition \ref{lemma 11}, one has
$$[N(T,x) : N(T)]=3= [\widehat{N}(T,\sigma(x)) : \widehat{N}(T)] \leq [{N}(T,\sigma(x)) : {N}(T)],$$
thus providing a contradiction.
Secondly, one has ${\rm{Aut}}(E/k_0(Z,T)) = {\rm{Gal}}(E/k_0(Z,T,x)).$ Indeed, one has to show that each $\sigma \in {\rm{Aut}}(E/k_0(Z,T))$ fixes $x$. Assume $\sigma$ does not. Then $\sigma(x)$ is another root of $P_{y}(T,X)$ and it is in $E$. Hence, $E$ contains all the roots of $P_{y}(T,X)$ (as this polynomial has degree 3 in $X$). By Proposition \ref{lemma 11}, $[E:N(T)] = 6$, a contradiction.
\end{proof}
Next, by Proposition \ref{lemma 11}, the polynomial $P_{y}(T,X)$ is irreducible over $N(T)$, that is, the map ${\rm{res}}^{E/k_0(Z,T,x)}_{N(T)/k_0(Z,T)}$ is an isomorphism. Moreover, the map ${\rm{res}}^{N(T)/k_0(Z,T)}_{N/k_0(Z)}$ is an isomorphism. Then apply Lemma \ref{lem:sssecIII} to get that the map
$$\left \{ \begin{array} {ccc}
{\rm{Aut}}(E/k(Z,{\bf{Y}},T)) & \longrightarrow & {\rm{Gal}}(N/k_0(Z)) \\
\sigma & \longmapsto & \sigma|_{N} \end{array} \right. $$
is a well-defined isomorphism, which we denote by ${\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{N/k_0(Z)}$ \footnote{with a slight abuse of notation as $k_0(Z) $ is not necessarily contained in $k(Z, {\bf{Y}}, T)$.}. In particular, the map
$$\beta \circ {\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{N/k_0(Z)} : {\rm{Aut}}(E/k(Z, {\bf{Y}}, T)) \rightarrow G$$
is a well-defined isomorphism. Moreover, by Lemma \ref{lem:sssecIII}, the domain of ${\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{L(Z, {\bf{Y}}, T)/k(Z, {\bf{Y}}, T)}$ is the whole automorphism group ${\rm{Aut}}(E/k(Z, {\bf{Y}}, T))$ and, by \eqref{!!}, one has
$$\alpha_{k(Z, {\bf{Y}},T)} \circ \beta \circ {\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{N/k_0(Z)} = {{\rm{res}}^{L(Z, {\bf{Y}},T)/k(Z, {\bf{Y}},T)}_{L/k}}^{-1} \circ \alpha \circ \beta \circ {\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{N/k_0(Z)} = {\rm{res}}^{E/k(Z, {\bf{Y}}, T)}_{L(Z, {\bf{Y}},T)/k(Z, {\bf{Y}},T)}.$$
Finally, $E$ is regular over $L$. Indeed, by Proposition \ref{lemma 11}, $P_y(T,X)$ is irreducible over $\overline{N}(T)$, i.e., $E$ is regular over $N$. Moreover, $N$ is regular over $Lk_0$ and $k_0$ is regular over $L'$. Then, from the latter and, e.g., \cite[Corollary 2.6.8(a)]{FJ08}, $Lk_0$ is regular over $L$. It then remains to apply, e.g., \cite[Corollary 2.6.5(a)]{FJ08} to finish the proof of Proposition \ref{thm:cst}.
\appendix
\section{Proofs of Proposition \ref{prop:pac} and Lemma \ref{prop:desc}} \label{app}
\subsection{Proof of Proposition \ref{prop:pac}} \label{app1.1}
Let $k$ be a PAC field and $\alpha : G \rightarrow {\rm{Gal}}(L/k)$ a finite Galois embedding problem over $k$. We show below that $\alpha$ has a geometric Galois solution.
Firstly, we provide a geometric Galois solution to some finite Galois split embedding problem over $k$ which ``dominates" $\alpha$. Let $k^{\rm{sep}}$ be a separable closure of $k$. As $k$ is PAC, one may apply \cite[Theorem 11.6.2]{FJ08} to get the existence of a (continuous) homomorphism $\gamma: {\rm{Gal}}(k^{\rm{sep}}/k) \rightarrow G$ that satisfies
\begin{equation} \label{D1}
\alpha \circ \gamma = {\rm{res}}^{k^{\rm{sep}}/k}_{L/k}.
\end{equation}
Let $L'$ denote the fixed field of ${\rm{ker}}(\gamma)$ in $k^{\rm{sep}}$. The extension $L'/k$ is finite and Galois, and the homomorphism $\gamma$ induces a homomorphism $\gamma' : {\rm{Gal}}(L'/k) \rightarrow G$ that satisfies
\begin{equation} \label{D2}
\gamma' \circ {\rm{res}}^{k^{\rm{sep}}/k}_{L'/k} = \gamma.
\end{equation}
By \eqref{D1} (which implies $L \subseteq L'$) and \eqref{D2}, one has
\begin{equation} \label{D3}
\alpha \circ \gamma'= {\rm{res}}^{L'/k}_{L/k}.
\end{equation}
Let
$$G'= \{(g, \sigma) \in G \times {\rm{Gal}}(L'/k) \, \, : \, \, \alpha(g) = {\rm{res}}^{L'/k}_{L/k}(\sigma)\}$$
denote the fiber product of $G$ and ${\rm Gal}(L'/k)$ over ${\rm Gal}(L/k)$ and let $\alpha' : G' \rightarrow {\rm{Gal}}(L'/k)$ be the projection on the second coordinate. Set
$$\delta: \left \{ \begin{array} {ccc}
{\rm{Gal}}(L'/k) & \longrightarrow & G' \\
\sigma & \longmapsto & (\gamma'(\sigma), \sigma).
\end{array} \right.$$
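To fix ideas, the fiber-product construction just introduced can be illustrated on a toy instance; the groups below are hypothetical and chosen purely for illustration, not taken from the proof.

```latex
% Hypothetical toy data: G = Z/2 x Z/2, Gal(L'/k) = Gal(L/k) = Z/2, with
% alpha the first projection, res the identity, and gamma'(s) = (s,0).
% Then the fiber product is
\[
G' \;=\; \bigl\{\,((a,b),s) \in (\mathbb{Z}/2 \times \mathbb{Z}/2) \times \mathbb{Z}/2 \;:\; a = s \,\bigr\}
\;\cong\; \mathbb{Z}/2 \times \mathbb{Z}/2,
\]
% the second projection alpha'((a,b),s) = s is surjective, and
\[
\delta(s) \;=\; (\gamma'(s),\,s) \;=\; ((s,0),\,s)
\]
% is a homomorphism with alpha' o delta = id, i.e., the embedding
% problem alpha' splits, exactly as in the general argument below.
```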
By \eqref{D3}, the map $\alpha'$ is surjective and the map $\delta$ is a well-defined homomorphism. Moreover, one has $\alpha' \circ \delta = {\rm{id}}_{{\rm{Gal}}(L'/k)}.$ In particular, the finite Galois embedding problem $\alpha'$ over $k$ splits. Now, since the field $k$ is PAC, it is ample. One may then apply \cite[Main Theorem A]{Pop96} to get the existence of an indeterminate $T$, a finite Galois extension $E/k(T)$ such that $L'(T) \subseteq E$ and $E \cap \overline{k}=L'$, and an isomorphism $\beta' : {\rm{Gal}}(E/k(T)) \rightarrow G'$ such that the following equality holds:
\begin{equation} \label{D4}
\alpha' \circ \beta' = {\rm{res}}^{E/k(T)}_{L'/k}.
\end{equation}
Secondly, we deduce a Galois solution to $\alpha_{k(T)}$. Let $\beta : G' \rightarrow G$ be the projection on the first coordinate. Clearly, one has $\alpha \circ \beta = {\rm{res}}^{L'/k}_{L/k} \circ \alpha'.$
Then, by \eqref{D4}, one has
\begin{equation} \label{D6}
\alpha \circ \beta \circ \beta' = {\rm{res}}^{E/k(T)}_{L/k}.
\end{equation}
Let $E'$ be the fixed field of ${\rm{ker}}(\beta \circ \beta')$ in $E$. The epimorphism $\beta \circ \beta' : {\rm{Gal}}(E/k(T)) \rightarrow G$ induces an isomorphism $\epsilon : {\rm{Gal}}(E'/k(T)) \rightarrow G$ that satisfies
\begin{equation} \label{D7}
\epsilon \circ {\rm{res}}^{E/k(T)}_{E'/k(T)} = \beta \circ \beta'.
\end{equation}
By the definition of $\beta$, one has
\begin{equation} \label{D7.5}
{\rm{ker}}(\beta) = \{1_G\} \times {\rm{Gal}}(L'/L).
\end{equation}
Then combine \eqref{D4} and \eqref{D7.5} to get $L(T) \subseteq E'$. It then remains to combine \eqref{D6}, \eqref{D7}, and the latter inclusion to get
$$\alpha_{k(T)} \circ \epsilon = {\rm{res}}^{E'/k(T)}_{L(T)/k(T)}.$$
Thirdly, we show that $E'/L$ is regular. Since $E'=E^{{\rm ker}(\beta\circ\beta')}$ and $E \cap \overline{k} = L'$, one has
$$L \subseteq E' \cap \overline{k} \subseteq E' \cap E \cap \overline{k} = E' \cap L' \subseteq L'^{{\rm{res}}^{E/k(T)}_{L'/k}({\rm ker}(\beta\circ\beta'))}.$$
Then use successively \eqref{D4} and \eqref{D7.5} to get
$${\rm{res}}^{E/k(T)}_{L'/k}({\rm ker}(\beta\circ\beta')) = \alpha'({\rm{ker}}(\beta)) = {\rm{Gal}}(L'/L).$$
Hence, one has $L \subseteq E' \cap \overline{k} \subseteq L'^{{\rm{Gal}}(L'/L)}=L$, thus ending the proof of Proposition \ref{prop:pac}.
\subsection{Proof of Lemma \ref{prop:desc}} \label{app1.2}
By our assumption, there exist an extension $M/k$ such that the fields $M$ and $L$ are linearly disjoint over $k$, an indeterminate $T$, a finite Galois extension $E/M(T)$ such that $LM(T) \subseteq E$ and $E \cap \overline{M} = LM$, and an isomorphism $\beta : {\rm{Gal}}(E/M(T)) \rightarrow G$ such that
\begin{equation} \label{!}
\alpha \circ \beta = {\rm{res}}^{E/M(T)}_{L/k}.
\end{equation}
Let $x \in E$ be such that $E=M(T,x)$. Let $P(X) \in M(T)[X]$ be the minimal polynomial of $x$ over $M(T)$. Let
$$x_2, \dots, x_{|G|}$$
be the roots of $P(X)$ that are not equal to $x$. As the extension $E/M(T)$ is Galois, for each $i \in \{2, \dots, |G|\}$, there is a polynomial $P_i(X) \in M(T)[X]$ such that $x_i=P_i(x).$ Also, pick $z \in L$ such that $L=k(z)$. Then there is $Q(X) \in M(T)[X]$ such that $z=Q(x).$ Let $k_0$ be a subfield of $M$ that is finitely generated over $k$ and such that
$$P(X), P_2(X), \dots, P_{|G|}(X), Q(X) \in k_0(T)[X].$$
Then the extension $k_0(T,x)/k_0(T)$ is Galois, one has $L \subseteq k_0(T, x)$, and the restriction map $${\rm{res}}^{E/M(T)}_{k_0(T,x)/k_0(T)}$$
is an isomorphism. Moreover, as the fields $M$ and $L$ are linearly disjoint over $k$ and $k \subseteq k_0 \subseteq M$, the fields $k_0$ and $L$ are linearly disjoint over $k$. That is, ${\rm{res}}^{Lk_0/k_0}_{L/k}$ is an isomorphism.
\begin{figure}[h!]
\[ \xymatrix{
& & E = M(T,x) \ar@{-}[d]\ar@{-}[rd] \ar@{-} @/_3pc/[dd]_{G} & & & \\
& & LM(T) \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[d] \ar@{-}[rd] & k_0(T,x) \ar@{-} @/_3pc/[dd]_{G} \ar@{-}[d] & &\\
& LM \ar@{-} @/_0.5pc/[d]_{\Gamma} \ar@{-}[ru] \ar@{-}[d] &M(T) \ar@{-}[rd] & Lk_0(T) \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[rd] \ar@{-}[d] && \\
L \ar@{-} @/_1pc/[d]_{\Gamma} \ar@{-}[d] \ar@{-}[ru] &M \ar@{-}[ru] & &k_0(T) \ar@{-}[rd] & Lk_0 \ar@{-} @/_0.5pc/[d]_{\Gamma} \ar@{-}[d] \ar@{-}[rd]&\\
k \ar@{-}[ru]& & & &k _0 \ar@{-}[rd] & L \ar@{-} @/_0.5pc/[d]_{\Gamma} \ar@{-}[d]\\
& & & & &k}
\]
\caption{Defining the field $E$ over a suitable extension $k_0$ of $k$}\label{Fig6b}
\end{figure}
Then consider the finite Galois embedding problem $\alpha_{k_0(T)}$. The map
$$\beta \circ {{\rm{res}}^{E/M(T)}_{k_0(T,x)/k_0(T)}}^{-1} : {\rm{Gal}}(k_0(T,x)/k_0(T)) \rightarrow G$$
is an isomorphism and one has $Lk_0(T) \subseteq k_0(T,x)$. Furthermore, by \eqref{!} and since
$$\alpha_{k_0(T)} = {{\rm{res}}^{Lk_0(T)/k_0(T)}_{Lk_0/k_0}}^{-1} \circ {{\rm{res}}^{Lk_0/k_0}_{L/k}}^{-1} \circ \alpha,$$
one has
$\alpha_{k_0(T)} \circ \beta \circ {{\rm{res}}^{E/M(T)}_{k_0(T,x)/k_0(T)}}^{-1}={\rm{res}}^{k_0(T,x)/k_0(T)}_{Lk_0(T)/k_0(T)}.$
Finally, $k_0(T, x) \cap \overline{k_0} = Lk_0$. Indeed, combine
$k_0(T,x) \cap \overline{k_0} \subseteq E \cap \overline{M} = LM \subseteq LM(T)$
and the linear disjointness of the fields $LM(T)$ and $k_0(T,x)$ over $Lk_0(T)$ to get
$$k_0(T,x) \cap \overline{k_0} \subseteq k_0(T,x) \cap LM(T) \cap \overline{k_0}= Lk_0(T) \cap \overline{k_0} = Lk_0,$$
thus ending the proof of Lemma \ref{prop:desc}.
TITLE: First isomorphism theorem: is there a canonical "inverse" map?
QUESTION [0 upvotes]: Let's say $N,P$ are $R$-modules, and $\psi:N\to P$ is $R$-linear and surjective.
Knowing that $N/\ker(\psi)\cong P$ (by the first iso theorem), what can we say about a map $P\to N$? Is there a canonical linear map here? I tried constructing one as follows, to no avail:
For each $p\in P$, there is some $n_p\in N$ with $\psi(n_p)=p$. So define the map
$$
G:P\to N,\quad p\mapsto n_p.
$$
This should be well-defined since for any $p\in P$, we picked exactly one corresponding $n_p\in N$. But I am not seeing how to show this is $R$-linear. I'm not even convinced it is. But is there something else we can do?
REPLY [1 votes]: It is extremely unlikely that this map will be linear. For example, consider $\mathbb Z \longrightarrow \mathbb Z/2$. The only $\mathbb Z$-linear map in the other direction is the $0$ map. The proposed "inverse" as you called it is typically referred to as a section: a map $G: P \longrightarrow N$ such that $\psi \circ G = \mathrm{id}$. The $G$ you constructed is therefore a section in the category of sets, but it is very rare that it would be in the category of modules.
However, there is an extremely important class of modules $P$ where this will always hold. These are called projective modules. By definition, we say that a module $P$ is projective precisely if every surjection $N \longrightarrow P$ has a section in the sense I defined above. These are an extremely important class of modules, and I'd strongly recommend reading up on them. Interestingly, you can prove that a module is projective if and only if it is the direct summand of a free module. So in some sense, projective modules are the next best thing to free modules.
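To make the $\mathbb Z \longrightarrow \mathbb Z/2$ example concrete, here is a small Python sketch (the names `psi`, `preimage`, and `G` are mine, chosen to mirror the notation above): the set-theoretic section exists, but it fails additivity, so it cannot be $\mathbb Z$-linear.

```python
# Toy model of psi : Z -> Z/2 (reduction mod 2) and the set-theoretic
# section G that picks one preimage n_p for each class p in Z/2.
psi = lambda n: n % 2

preimage = {0: 0, 1: 1}        # n_0 = 0, n_1 = 1 (one choice among many)
G = lambda p: preimage[p]

# G is a section in the category of sets: psi(G(p)) == p for p in Z/2.
assert all(psi(G(p)) == p for p in (0, 1))

# But G is not additive: in Z/2 one has 1 + 1 = 0, while in Z
# G(1) + G(1) = 2, which differs from G((1 + 1) % 2) = 0. So G is not
# Z-linear -- consistent with the fact that the only Z-linear map
# Z/2 -> Z is the zero map.
assert G((1 + 1) % 2) == 0
assert G(1) + G(1) == 2
```

Any other choice of preimages (say `preimage = {0: 4, 1: -3}`) gives a different set-theoretic section with the same failure.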
Design - F6002
Design.
At a glance
Course Details
Please select a specialisation for more details:
Part B: Advanced design studies
Part C: Advanced design applications
Making the application
Future students
Semester one (February)
Apply directly to Monash by end-of-day Friday 7 February using course code F6002.
Semester two (July)
Apply directly to Monash using course code F6002.
This It Girl Mastered 2017's Biggest Trend Before It Was a Thing
Margaret Zhang was mastering the ballet trend well before we saw its influence across sneakers or swimsuits. But you might have never guessed it from looking at the street style star. The Australia-born photographer and creative director made a splash when her blog Shine by Three launched in 2009, and at every Fashion Week since, as her unique layering and use of color and pattern made her fodder for every photographer standing outside the runway. But a major dance influence came into play much earlier for Zhang, informing pretty much every outfit we’ve ever seen on the 24-year-old and preceding the 2017 It trend by decades.
Zhang studied ballet from a very young age but stepped away from her rigorous training at 16, right before she started her website. However, the rehearsal mentality and discipline stuck, not to mention an appreciation for layered pieces, easy silhouettes, and fabrics that move. “Most traditional ballet training really trains you to be very feminine,” she explains, adding that her POV has shifted to be “more boyish and a little more tailored,” but it’s just as connected to her training.
Below, Zhang told us a bit more about what makes ballet such a powerful point of reference for fashion, all while demonstrating four inspiring head-to-toe ensembles anyone—even those with two left feet—can re-create.
Take a look at Zhang’s impeccable outfits below.
On Margaret Zhang: Karen Walker suit; stylists own corset; Levi’s Batwing Logo Tee ($16); Céline glasses; Dinosaur Designs Louise Olsen Medium Liquid Hoop Earrings ($212); and Givenchy Boots.
WHO WHAT WEAR: What are your very first memories of fashion?
MARGARET ZHANG: All of my fashion—I hate the word inspiration, but—inspiration comes from when I grew up training to be a ballet dancer. More so the movement than the physical garments, although the tights and the leotard situation is very on-trend right now.
WWW: Totally. The ballet trend is huge right now, but that’s not something we really associate with you. Why does a different type of silhouette appeal to you more?
MZ: I’m a practical person—I’m on set, photographing and directing, which requires me to be comfortable, so I’m not a huge advocate for dresses. Anything fluffy, I kind of steer clear of. Having said that, being comfortable doesn’t mean it has to be a shirt and pants. You can still be comfortable and layer with a coat or a blazer and a shirt and a corset—something that makes it a little more interesting. I think it’s less about ballet rehearsal, but its influence.
WWW: After all these years, why does this ballerina style POV still make sense for you?
MZ: You need to be agile. It has to be simple but interesting enough that I can go to the next thing and add a layer when I’m going somewhere else and don’t have to carry a whole new outfit with me.
WWW: Over the last seven years since you launched your site, how has your style changed?
MZ: For anybody going from age 16 to turning 24, that period of your life is such a rapidly shifting period. I was a little more feminine when I was 16. I’ve become more boyish and a little more tailored. I guess what I’m known for would be layering. I went through a period of trying to fit as many garments on my body as possible without it looking ridiculous. I’ve kind of taken a step back from that, but it’s still very much about creating intricate dimensions.
WWW: So is more always more?
MZ: I don’t really believe in shopping a whole lot, I’m not a huge fan of shopping for a new outfit every week or spending a lot on clothing if there are more important ways to spend money. Rather than buying a dress or a jumpsuit, it’s about mixing and matching and making the most of the separates that you have.
WWW: Do you have any signature layering tricks?
MZ: It’s easy layering a camisole over a men’s shirt. As my layering goes, there’s the physical layering element of contrasting masculine and feminine silhouettes: What about a large menswear coat and cinch it at the waist? What can go over a blazer that will make it more feminine without being girly? It’s more a balancing thing.
On Margaret Zhang: Zara blazer; Kate Sylvester one-piece; Citizens of Humanity jeans; Stella Luna Architect 50 Slingback ($331).
WWW: Do you often come away from a photo shoot with insights or tips that resonated with your sense of style?
MZ: I feel like I do that more with beauty than I do with clothes. I have a quite definitive approach to clothes that I really like and what works on my body, and I’m not much for shopping for things all the time. Whereas I’m very much jazzed about beauty in general. I’d ask “Oh what brushes should I use?” or "What’s your favorite mascara?”
WWW: Are there elements of your style that you specifically credit to your upbringing in Australia?
MZ: It’s more of an approach to color, texture, and shape, which I personally think is very much because of moving light [in Sydney]. The light in every city—like L.A. has amazing gold, NY has a very particular light because of the way the buildings pass shadows, Paris at night is very blue and yellow—impacts the way you see color. Certain people dress a little more experimental and experiment with color a bit more. And at the same time, Australians are very practical and easygoing with their dressing. It’s a balance, and that 100% influences me.
WWW: And finally, as someone who is quite experimental with fashion, is there anything you’d never wear?
MZ: It’s not that I want them to go away forever, I’m just not a girly-girl. Whenever the Zimmermann sisters have a show or some kind of event, I always go and support them because they’ve been supportive of me for a long time and they’re inherently Australian in the way that they approach design. We have this great working relationship … and running joke of “What’s Margaret going to do with Zimmermann this time?” Because it’s this very frilly, girly style. I want to go there, but I just can’t. Aesthetically, the way that I look and my frame and structure, the way that my body is shaped, my coloring, I just don’t feel comfortable. Zimmermann does a lot of separates that I like, but I’m running out of ways to layer all the frills.
WWW: There’s always a man’s shirt you can put underneath it.
MZ: Exactly. I think that’s so demonstrative of how you can have trends, but our styles shape them.
On Margaret Zhang: Black Messiah Hands Up Long Sleeve T-Shirt ($30); C/MEO Collective Right Now Skirt ($200); Cooper Street pants; Stella Luna shoes; Saint Laurent vintage earrings.
What’s your favorite look from Zhang’s shoot? Let us know in the comments below.
Credits: Photographer/Stylist/Model: Margaret Zhang; Producer: Samantha Bennetts; Hair and Makeup: Chloe Langford
\begin{document}
\title[Renewal thms.\ for dependent interarrival times]{Renewal theorems for a class of processes with dependent interarrival times and applications in geometry}
\author{Sabrina Kombrink}
\address{Sabrina Kombrink, Universit\"at zu L\"ubeck, Ratzeburger Allee 160, 23562 L\"ubeck, Germany}
\email{kombrink@math.uni-luebeck.de}
\thanks{Part of this work was supported by grant 03/113/08 of the \emph{Zentrale Forschungsf\"orderung, Universit\"at Bremen}.}
\begin{abstract}
Renewal theorems are developed for point processes with interarrival times $W_n=\codefun(X_{n+1}X_n\cdots)$, where $(X_n)_{n\in\mathbb Z}$ is a stochastic process with finite state space $\Sigma$ and $\codefun\colon\Sigma_A\to\mathbb R$ is a H\"older continuous function on a subset $\Sigma_A\subset\Sigma^{\mathbb N}$. The theorems developed here unify and generalise the key renewal theorem for discrete measures and Lalley's renewal theorem for counting measures in symbolic dynamics. Moreover, they capture aspects of Markov renewal theory.
The new renewal theorems allow for direct applications to problems in fractal and hyperbolic geometry; for instance, results on the Minkowski measurability of self-conformal sets are deduced.
Indeed, these geometric problems motivated the development of the renewal theorems.
\end{abstract}
\keywords{Renewal theorem, dependent interarrival times, symbolic dynamics, Key renewal theorem, Ruelle-Perron-Frobenius theory}
\subjclass[2010]{Primary: 60K05, 60K15. Secondary: 28A80, 28A75}
\maketitle
\section{Introduction and statement of main results}\label{sec:intro}
\input{intro}
\section{Ruelle-Perron-Frobenius Theory -- the equilibrium distribution}\label{sec:preliminaries}
\input{subshifts}
\section{Renewal theorems}\label{sec:results}
\input{renewal}
\section{Proofs of the renewal theorems and their corollaries}\label{sec:proofs}
In Sec.~\ref{sec:PFanalyticp} we present essentials from complex Ruelle-Perron-Frobenius theory. These are used to prove the renewal theorems in Sec.~\ref{sec:proofsRT}.
\subsection{Analytic Properties of the Ruelle-Perron-Frobenius Operator}\label{sec:PFanalyticp}
\input{PFanalytic}
\subsection{Proof of the renewal theorems}\label{sec:proofsRT}
\input{renewalproof}
\input{corproof}
\bibliographystyle{alpha}
\bibliography{Literaturvz}
\end{document}
Rich Mahan has soulful swagger to spare on the morning-after a one-night-stand song, "Favorite Shirt." The protagonist has (conveniently) left his favorite shirt at the house and is invited back over for breakfast. The object of his desire shows up wearing nothing but his favorite shirt. Great shows of affection ensue. The track comes off of the St. Louis native and Nashville transplant's Blame Bobby Bare album and is a terrific sexy track. Listen HERE.
Speaking of soulfulness, no other band in bluegrass shows as much delta blues passion as The Steeldrivers. This past week they released their third album, Hammer Down, and it's wildly terrific. I had the opportunity to interview singer/fiddler Tammy Rogers for Engine 145 this past August, and she put it best: "We’re still murdering people regularly. We’ve still got a high body count. That’s all good." The lyrics are dark and foreboding, and lead singer Gary Nichols has taken over for Chris Stapleton and amazingly hasn't missed a beat. The album highlights are "Lonesome Goodbye" and the spurned-lover-who-meets-her-cheater-with-a-.45 "When You Don't Come Home." Listen HERE.
Born in Portland, Oregon and raised in the music hotbed of Austin, Texas, Reed Turner finished off his music training with four years at Berklee College of Music in Boston, where he majored in songwriting. On his latest Ghost In The Attic album, I was blown away by the haunting fiddle and electric-guitar laden track "The Fire." I asked Turner to explain the deep and symbolic track and he said, "That song was inspired by the housing crisis/economic collapse. I was stunned and angered by the amount of greed and stupidity displayed by a group of people who were seemingly willing to help bring down their own country." Powerful stuff. Listen HERE.
Will Hoge and Wade Bowen are songwriting legends, especially around the Lone Star State of Texas. They've just recently collaborated on a new track called "Another Song Nobody Will Hear" that is Hoge's most recent release on the Texas radio scene. For anyone that's found themselves at the mercy of having to create a piece of art that's not a vehicle for wealth development, but a personal validation, it's the perfect message. Listen HERE.
Country music radio's seemingly closed door to female artists not named Carrie, Taylor or Miranda belies the fact that there is a whole slew of quality music being put out by the fairer sex right now. There's not a better example than one Katie Armiger. She's released four quality albums since 2007 and doesn't have a top forty hit to show for it. Her January release, Fall Into Me, has some bright spots on it, but none better than the slow burning "Okay Alone" that shows off her big voice and wide range. Listen HERE.
On Holly Williams' phenomenal 2009 release, Here With Me, I wrote, ." Fast-forward four years and Williams' new album The Highway is equally personal with all the same emotional nuances. It couldn't be less commercial or country music radio friendly if it tried. But it has more heart and soul than an hour of any country radio broadcast in one song. Listen HERE.
Ray Scott's humorous "Those Jeans" wasn't officially released in 2013, but it has just gotten a bit of traction on XM/Sirius Radio this last month and just might see the light of day on terrestrial radio if country music radio has any sense. Scott proves that honky-tonk humor is alive and well with this song about a cowboy getting the snot beat out of him for hitting on the wrong bar babe. Listen HERE.
Age hasn't slowed down Gary Allan one bit. His Set You Free album released this past month sets his unique voice to a tapestry of an album that's reminiscent of passing days and reflective of bright spots in years to come. It's the fact that it's not as polished vocally as much of what is being released now that makes it so real, so believable. He brings his normal passionate delivery and while it doesn't chart any new territory for a song standpoint, it's a great reminder of how Allan is one of this generation's finest and most underrated artists. Listen HERE.
John Corbett surprises. His new independent album, Leaving Nothing Behind, has several good moments on it, but the Western-themed "El Paso" steals the album. With only a nod to the famous Marty Robbins track, this new story of love gone wrong in the Wild West is terrific storytelling. Listen HERE.
Dale Watson is a Texas honky-tonk hero. If you don't believe me, check out the best honky-tonk album released in recent memory called El Rancho Azul. Drinking. Loving. Drinking. Dancing. Drinking. Bars. And Drinking. The one exception is the song that I imagine as a first-dance song with my daughter on her wedding day, appropriately called "Daughter's Wedding Song." Listen HERE.
Matthew Mayfield's "Track You Down" opens with this funky little guitar riff that's more bluesy than country and his vocal delivery reminds quickly of Dave Matthews. The fascinating production and urgency of the vocal delivery, matching lyrics and frantic fiddle make this track truly memorable. Listen HERE.
Randy Houser's new How Country Feels album can be uneven in places, particularly when he ramps the rocking tempo up on a handful of tracks. But when he slows things down, few artists in country music currently do it better. Three tracks on the album stand out in "The Power of a Song," the Katie Lee Cook duet "Wherever Love Goes" and the album stand-out, "The Singer." The last track is a highly personal and melancholy look at the high costs of following your musical dreams. Listen HERE.
Sure, he's done the beach themed thing before. Sure, he's not carving out new territory with the island-bound music video. But damn. I have to admit that Kenny Chesney's new David Lee Murphy-penned "Pirate Flag" is really cool ear candy. I dare any guy not to think of a "long legged model" and "island girl" on a beach while it's playing. Listen HERE.
Kris Kristofferson is Feeling Mortal these days and as the legends of his era are passing away around him, his songwriting has turned to themes like mortality and nostalgia. His voice is ravaged from time and use, but no one pens a song quite like the Country Music Hall of Famer. On "Castaway," the album's stand-out track, he uses the open water as an analogy to his personal journey: "Cause like a ship without a rudder/ I’m just drifting with the tide/And each day I’m drawing closer to the brink/ Just a speck upon the waters/ Of an ocean deep and wide/ I won’t even make a ripple when I sink." Listen HERE.
A year in Aspen back in 2005 is the only connection that Louise Day has to the American music scene. Born and raised in Cape Town, South Africa, this piano songstress drew on her U.S. musical influences and South African inspirations on her terrific international release of "Nothingville." Listen below:
\begin{document}
\maketitle
\begin{abstract} The {\em acyclic chromatic number} of a graph is the least number of colors needed to properly color its vertices
so that none of its cycles has only two colors. The {\em acyclic chromatic index} is the analogous graph parameter for edge colorings. We first show that the acyclic chromatic index is at most $2\Delta-1$, where $\Delta$ is the maximum degree of the graph. We then show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$), the acyclic chromatic number of the graph is at most
$\lceil(2^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1$.
Both results improve long chains of previous successive advances. Both are algorithmic, in the sense that the colorings are generated by randomized algorithms. However,
in contrast with extant approaches, where the randomized algorithms assume the availability of enough colors to guarantee properness deterministically, and use additional colors for randomization in dealing with the bichromatic cycles, our algorithms may initially generate colorings that are not necessarily proper; they only aim at avoiding cycles where all pairs of edges, or vertices, that are one edge, or vertex, apart in a traversal of the cycle are homochromatic (of the same color). When this goal is reached, they check for properness and if necessary they repeat until properness is attained.
\end{abstract}
\section{Introduction}\label{sec:intro}
Let $\chi(G)$ denote the {\em chromatic number} of a graph, i.e., the least number of colors needed to color the vertices of $G$ in a way that no adjacent vertices are homochromatic. The {\em acyclic chromatic number} of a graph $G$, a notion introduced back in 1973 by Gr\"{u}nbaum \cite{grunbaum1973acyclic} and denoted here by $\chi_{a}(G)$, is the least number of colors needed to properly color the vertices of $G$ so that no cycle of even length is {\em bichromatic} (has only two colors). Notice that in any properly colored graph, no cycle of odd length can be bichromatic.
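Equivalently, a proper coloring is acyclic exactly when the union of every two color classes induces a forest, since a bichromatic cycle lies entirely inside one such union. The Python sketch below checks a given coloring this way; the input conventions (an adjacency-list dict and a vertex-to-color dict) are our own, not taken from the paper.

```python
from itertools import combinations

def is_proper(adj, color):
    """Proper vertex coloring: no edge is monochromatic."""
    return all(color[u] != color[v] for u in adj for v in adj[u])

def induces_forest(adj, verts):
    """True iff the subgraph induced by `verts` is acyclic (DFS forest check)."""
    verts, seen = set(verts), set()
    for s in verts:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, None)]
        while stack:
            u, parent = stack.pop()
            for w in adj[u]:
                if w in verts and w != parent:
                    if w in seen:
                        return False      # non-tree edge: a cycle exists
                    seen.add(w)
                    stack.append((w, u))
    return True

def is_acyclic_coloring(adj, color):
    """Proper coloring with no bichromatic cycle (Gruenbaum's notion)."""
    if not is_proper(adj, color):
        return False
    classes = {}
    for v, c in color.items():
        classes.setdefault(c, []).append(v)
    # acyclic iff every union of two color classes induces a forest
    return all(induces_forest(adj, classes[a] + classes[b])
               for a, b in combinations(classes, 2))
```

On a 4-cycle, the alternating 2-coloring is proper but not acyclic, while any proper 3-coloring is acyclic, matching the definition above.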
The literature on the acyclic chromatic number for general graphs with arbitrary maximum degree $\Delta$ includes:
\begin{itemize}
\item Alon et al. \cite{alon1991acyclic} proved that $\chi_{a}(G) \leq \lceil 50 {\Delta}^{4/3} \rceil$. They also showed that there are graphs for which $\chi_{a}(G) = \Omega \Big( \frac{{\Delta}^{4/3}}{(\log {\Delta})^{1/3} }\Big)$.
\item Ndreca et al. \cite{ndreca2012improved} proved that $\chi_{a}(G) \leq \lceil 6.59 {\Delta}^{4/3} + 3.3 \Delta \rceil$.
\item Sereni and Volec \cite{sereni2013note} proved that $$\chi_{a}(G) \leq (9/2^{5/3}) {\Delta}^{4/3} + \Delta < 2.835 \Delta^{4/3} + \Delta. $$
\item Finally, Gon\c{c}alves et al. \cite{gonccalves2014entropy} provided the best previous bound, namely that for
$\Delta \geq 24,$ $$\chi_{a}(G) \leq (3/2) {\Delta}^{4/3} + \min\left(5\D -14, \D +\frac{8\D^{4/3}}{\D^{2/3} - 4}+1\right). $$
\end{itemize}
There is also an extensive literature on special cases of graphs.
Here we show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$),
$$\chi_{a}(G) \leq \lceil(2^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1.$$
With respect to edge coloring, the {\em chromatic index} of $G$, often denoted by $\chi'(G)$, is the least number of colors needed to properly color the edges of $G$, i.e., to color them so that no adjacent edges get the same color. It is known that the chromatic index of any graph is either $\Delta$ or $\Delta+1$ (Vizing \cite{vizing1965critical}). Nevertheless, observe that to generate a proper edge coloring by successively coloring the edges, choosing at each step an arbitrary color that does not destroy properness and never changing a color, necessitates a palette of at least $2\Delta -1$ colors, because up to $2\Delta -2$ edges may be coincident with any given edge. The {\em acyclic chromatic index} of $G$, often denoted by $\chi'_{a}(G)$, is the least number of colors needed to properly color the edges of $G$ so that no cycle of even length is bichromatic. Notice again that in any properly edge-colored graph, no cycle of odd length can be bichromatic. It has been conjectured (J. Fiam\v{c}ik \cite{fiam} and Alon et al. \cite{alon2001acyclic}) that the acyclic chromatic index of any graph with maximum degree $\Delta$ is at most $\Delta +2$.
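The greedy argument behind the $2\Delta-1$ figure can be made concrete. In the sketch below (edge-list input format is our own assumption), the least color not blocked by coincident edges always exists, because at most $2\Delta-2$ colors are ever blocked.

```python
def greedy_edge_coloring(edges, delta):
    """Color edges one by one with the least color unused on coincident
    edges; a palette of 2*delta - 1 colors always suffices, since at most
    2*delta - 2 edges are coincident with the edge being colored."""
    palette = range(2 * delta - 1)
    used_at = {}                 # vertex -> set of colors on its edges
    coloring = {}
    for (u, v) in edges:
        taken = used_at.get(u, set()) | used_at.get(v, set())
        c = next(c for c in palette if c not in taken)   # never exhausted
        coloring[(u, v)] = c
        used_at.setdefault(u, set()).add(c)
        used_at.setdefault(v, set()).add(c)
    return coloring
```

This only guarantees properness, of course; acyclicity is exactly what the rest of the paper has to work for.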
Besides the numerous publications for special cases of graphs, the literature on the acyclic chromatic index for general graphs with max degree $\Delta$ includes:
\begin{itemize}
\item Alon et al. \cite{alon1991acyclic} proved $\chi'_{\alpha}(G) \leq 64\Delta$, Molloy and Reed improved this to $\chi'_{\alpha}(G) \leq 16\Delta$, and then Ndreca et al. \cite{ndreca2012improved}
showed $\chi'_{\alpha}(G) \leq \lceil9.62(\Delta-1)\rceil$. Subsequently,
\item Esperet and Parreau \cite{DBLP:journals/ejc/EsperetP13} proved that $\chi'_{\alpha}(G) \leq 4( \Delta -1)$.
\item The latter bound was improved to $\lceil3.74(\Delta -1)\rceil +1$ by Giotis et al. \cite{giotis2017acyclic}. Also, an improvement of the $4( \Delta -1)$ bound was announced by Gutowski et al. \cite{Gutowski2018} (the specific coefficient for $\Delta$ is not given in the abstract of the announcement).
\item Finally, the best bound until now was given by Fialho et al. \cite{fialho2019new}, who proved that $\chi'_{\alpha}(G) \leq \lceil3.569(\Delta -1)\rceil +1$. \end{itemize} Here we show that $$\chi'_{\alpha}(G) \leq 2\Delta -1.$$
The most recent results from both groups above are based on the algorithmic proofs of Lov\'{a}sz Local Lemma (LLL) by Moser \cite{moser2009constructive} and Moser and Tardos \cite{moser2010constructive}, which use an approach that has been known as the {\em entropy compression method}. The main difficulty in this approach is to prove the eventual halting of a randomized algorithm that successively and randomly assigns colors to the vertices, or edges in the case of edge coloring, unassigning some colors when a violation of the desired properties arises. Towards proving the eventual halting (actually proving that the expected time of duration of the process is constant), a structure called {\em witness} forest is associated with the process, so that at every step, the history of the random choices made can be reconstructed from the current witness forest and the current coloring; the key observation is that the number of such forests (entropy) is not compatible, probabilistically, with the number of random choices made if the process lasted for too long. For nice expositions, see Tao \cite{TT} and Spencer \cite{Sp}. It should be kept in mind that as the algorithm develops, dependencies are introduced between the colors of the vertices.
Very roughly and in qualitative terms, to get our improvements on acyclic colorings, we again design a Moser-type algorithm, but one that does {\it not} care for properness and only aims to avoid badly colored cycles, i.e., cycles where all pairs of vertices (or edges, for edge coloring) that are an odd number of vertices (edges) apart in a traversal of the cycle are homochromatic. Parenthetically, observe that for properly colored graphs the notions of a cycle being badly colored and being bichromatic are equivalent, whereas if non-properness is allowed, being badly colored is the natural formulation of being bichromatic. With our approach, we avoid needing a number of colors that guarantees that choices can be made without destroying properness, plus additional colors to leave sufficient leeway for randomness. If the coloring generated by the Moser-type algorithm of our approach is not proper, we just repeat the process until properness is achieved. Of course, this rough sketch sweeps many things under the rug: for example, we need to show that the probability that the coloring generated by the Moser-type algorithm is not proper is bounded away from 1. To prove this, we approach the correctness proof of the Moser-type algorithms not via the entropy compression method, but via a direct probabilistic argument, first introduced in Giotis et al. \cite{giotisanalco}. Moreover, during its Moser-type phases our algorithm actually ignores not only properness but also a stronger property that has to do with the colorings of 4-cycles, because the Moser-type algorithm does not work well for 4-cycles. This stronger properness notion is defined differently for the cases of vertex and edge colorings. It turns out that with the number of colors we assume to have, the coloring generated during a Moser-type phase has a positive probability to have the strong properness property.
\subsection{Notation and terminology} In the sequel, we give some general notions, and introduce the notation and terminology we use.
Throughout this paper, $G$ is a simple graph with $l$ vertices and $m$ edges, and these parameters are considered {\it constant}. On the other hand, we denote by $n$ the number of steps an algorithm takes, and it is only with reference to $n$ that we make asymptotic considerations.
The \emph{maximum degree} of $G$ is denoted by $\D$ and we assume, to avoid trivialities, that it is $>1$. A (simple) $k$-path is a succession $u_1, \ldots, u_k, u_{k+1}$ of $k+1 \geq 2$ distinct vertices any two consecutive of which are connected by an edge. A $k$-cycle is a succession of $k\geq 3$ distinct vertices $u_1, \ldots, u_k$ any two consecutive of which, as well as the pair $u_1, u_k$, are connected by an edge.
A path (respectively, cycle) is a $k$-path (respectively, $k$-cycle) for some $k$. Vertices of a cycle or a path separated by an odd number of other vertices are said to have {\em equal parity}. Analogously, we define equal parity edges of a cycle.
A vertex coloring of $G$ is an assignment of colors to its vertices selected from a given palette of colors. A vertex coloring is \emph{proper} if no neighboring vertices have the same color. We define analogously edge coloring and proper edge coloring (no coincident pair of edges is homochromatic).
A path or a cycle of a properly vertex colored graph is called {\em bichromatic} if the vertices of the path or the cycle are colored by only two colors. Analogously for edge colorings. A proper coloring is {\em $k$-acyclic}, for some $k\geq 3$, if there are no bichromatic $k'$-cycles, for any $k' \geq k$. A proper coloring is called acyclic if there are no bichromatic cycles of any length. Note that for a cycle to be bichromatic in a proper coloring, its length must be even. The \emph{acyclic chromatic number } of $G$, denoted by $\x(G)$, is the least number of colors needed to produce a proper, acyclic vertex coloring of $G$. Analogously, we define the \emph{acyclic chromatic index } of $G$, denoted by $\x'(G)$, for edge colorings.
In the algorithms of this paper, colorings that are not necessarily proper are constructed by independently selecting one color from a palette of
$K$ colors, for suitable values of $K$, uniformly at random (u.a.r.). Thus, for any vertex $v\in V$, or edge $e \in E$ in the case of edge coloring, and any color
$i\in\{1,\ldots, K\}$,
\begin{equation}\label{eq:prob}\Pr[v \text{ (or } e) \text{ receives color }i]=\frac{1}{K}.\end{equation}
We call such colorings {\em random colorings} (they are not necessarily proper).
In all that follows, we assume the existence of some arbitrary (total, strict) ordering among all vertices, paths and cycles of the given graph to be denoted by $\prec$.
Among the two consecutive traversals of a cycle, we arbitrarily select one and call it {\it positive}. Given a vertex $v$ and a $2k$-cycle $C$ containing it, we define $C(v):=\{v=v_1^C,\ldots,v_{2k}^C\}$ to be the set of vertices of $C$ in the positive traversal starting from $v$.
The two disjoint and equal cardinality subsets of $C(v)$ comprised of vertices of the same parity that are at even (odd, respectively) distance from $v$ are to be denoted by $C^0(v)$ ($C^1(v)$, respectively). We define analogously $C(e):=\{e=e_1^C,\ldots,e_{2k}^C\}, C^0(e)$ and $C^1(e)$, for the case of edge colorings and an edge $e$ of $C$.
We call a cycle {\em badly colored} if each of its sets of equal parity vertices (or edges, for edge colorings) is monochromatic (has a single color). Notice that in the case of non-proper colorings, a badly colored cycle might have all its vertices (edges) of the same color. Also, bichromaticity of a cycle does not imply that its coloring is bad.
We define the {\em scope} of $C(v)$ to be the set $\{v=v_1^C,\ldots,v_{2k-2}^C\}$, i.e. all but the last two of the vertices of $C$ in a traversal starting from $v$. Analogously, we define the scope of $C(e)$ to be the set of all but the last two edges in a traversal starting from $e$. Roughly, the reason we introduce this notion is that if we recolor the scope of a badly colored cycle, all information of its being badly colored is lost, and thus the randomness of the colors chosen before discovering that the cycle was badly colored is re-established.
In the following sections, edge and vertex colorings will be investigated separately. We start with edge coloring, because we consider, perhaps quite subjectively, that the corresponding result is more interesting.
\section{Acyclic edge colorings}\label{sec:aec}
In this section, the term ``coloring" refers to edge coloring, even in the absence of the specification ``edge".
We assume that we have $K=\K$ colors at our disposal, where $\epsilon> 0$ is an arbitrarily small constant. We show that this number of colors suffices to algorithmically construct, with positive probability, a proper, acyclic edge coloring for $G$. Therefore, since for any $\D$, there exists a constant $\epsilon>0$ such that $\K\leq 2(\Delta-1)+1=2\D-1$,
it follows that $\x'(G)\leq 2\D-1$.
We now give a cornerstone result proven by Esperet and Parreau~\cite{DBLP:journals/ejc/EsperetP13}:
\begin{lemma}[Esperet and Parreau~\cite{DBLP:journals/ejc/EsperetP13}]\label{lem:sufficientcolors} At any step of any successive coloring of the edges of a graph, there are at most
$2 (\Delta-1)$ colors that should be avoided in order to produce a proper 4-acyclic coloring.
\end{lemma}
\begin{proof}[Proof Sketch]
Notice that for each edge $e$, one has to avoid the colors of all edges adjacent to $e$, and moreover for each pair of homochromatic edges $e_1, e_2$ adjacent to $e$ at different endpoints (which contribute one to the count of colors to be avoided), one has also to avoid the color of the at most one edge $e_3$ that together with $e,e_1, e_2$ define a cycle of length 4. Thus, easily, the total count of colors to be avoided does not exceed the number of adjacent edges of $e$, which is at most $2(\Delta-1)$.\end{proof}
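The count in the proof sketch can be mirrored in code. The sketch below uses our own conventions (edges keyed by `frozenset`, a partial color map, an adjacency-list dict): it gathers the colors a yet-uncolored edge $\{u,v\}$ must avoid, namely the colors of adjacent edges plus, for each homochromatic pair of adjacent edges at different endpoints, the color of the edge closing the corresponding 4-cycle. By the lemma, the returned set has size at most $2(\Delta-1)$.

```python
def forbidden_colors(color, adj, u, v):
    """Colors that edge {u,v} must avoid so the partial coloring stays
    proper and 4-acyclic.  `color` maps frozenset({a,b}) -> color for
    the edges colored so far (assumed conventions, not the paper's)."""
    forb = set()
    # colors of edges adjacent to {u,v}
    for x in adj[u]:
        if x != v and frozenset({u, x}) in color:
            forb.add(color[frozenset({u, x})])
    for y in adj[v]:
        if y != u and frozenset({v, y}) in color:
            forb.add(color[frozenset({v, y})])
    # homochromatic pair (u,x), (v,y): also avoid the color of the edge
    # {x,y} that would close a bichromatic 4-cycle with {u,v}, if colored
    for x in adj[u]:
        for y in adj[v]:
            if x == v or y == u or x == y:
                continue
            ex, ey, exy = frozenset({u, x}), frozenset({v, y}), frozenset({x, y})
            if ex in color and ey in color and color[ex] == color[ey] \
               and exy in color:
                forb.add(color[exy])
    return forb
```

A homochromatic pair blocks one color for two adjacent edges and at most one extra closure color, which is why the total never exceeds the number of adjacent edges.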
We now give the following definition:
\begin{definition}\label{def:strongp}
We call a coloring {\em strongly proper} if it is proper and 4-acyclic.
\end{definition}
So by the above Lemma, $2\Delta-1$ colors are sufficient to produce a strongly proper coloring, by choosing colors successively for all edges in any way that at each step strong properness is not destroyed.
\subsection{\EC}\label{ssec:color}
We first present below the algorithm \EC.
\begin{algorithm}[!hb]\label{alg:ec}
\caption{\EC}
\vspace{0.1cm}
\begin{algorithmic}[1]
\For{each $e\in E$}\label{ec:for}
\State Choose a color for $e$ from the palette, independently for each $e$, and u.a.r. \Statex \hspace{2.5em}(not caring for properness)\label{ec:color}
\EndFor\label{ec:endfor}
\While{there is an edge contained in a badly colored cycle of even length $\geq 6,$ \Statex \hspace{3.6em} let $e$ be the least such edge and $C$ be the least such cycle and}\label{ec:while}
\State \Recolor($e,C$)\label{ec:recolor}
\EndWhile\label{ec:endwhile}
\State \textbf{return} the current coloring
\end{algorithmic}
\begin{algorithmic}[1]
\vspace{0.1cm}
\Statex \underline{\Recolor($e,C$)}, where $C = C(e)=\{e=e_1^C,\ldots,e_{2k}^C\}$, $k\geq 3$.
\For{$i = 1,\ldots,2k-2 $}\label{recolor:for}
\State Choose a color $e_i^C$ independently and u.a.r. (not caring for properness)\label{recolor:color}
\EndFor\label{recolor:endfor}
\While{there is an edge in $\sco(C(e)) = \{e_1^C,\ldots,e_{2k-2}^C\}$ contained in a badly \Statex \hspace{3.6em} colored cycle of even length $\geq 6,$ let $e'$ be the least such edge and \Statex \hspace{3.6em} $C'$ the least such cycle and}\label{recolor:while}
\State \Recolor($e',C'$)\label{recolor:recolor}
\EndWhile\label{recolor:endwhile}
\end{algorithmic}
\end{algorithm}
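For concreteness, here is a compact Python sketch of \EC\ and \Recolor. It simplifies two things relative to the pseudocode: the even cycles of length $\geq 6$ are assumed to be supplied explicitly as ordered tuples of edges (enumerating them is left to the caller), and the least-edge/least-cycle tie-breaking is replaced by first-in-list order.

```python
import random

def badly_colored(cycle, color):
    """Both equal-parity edge classes of the cycle are monochromatic."""
    return (len({color[e] for e in cycle[0::2]}) == 1 and
            len({color[e] for e in cycle[1::2]}) == 1)

def edge_color(edges, cycles, K, rng=random.Random(0)):
    """cycles: the even cycles of length >= 6, each an ordered edge tuple."""
    color = {e: rng.randrange(K) for e in edges}     # not caring for properness

    def recolor(cycle):
        for e in cycle[:-2]:                          # recolor the scope only
            color[e] = rng.randrange(K)
        scope = set(cycle[:-2])
        while True:                                   # cycles touching the scope
            bad = next((c for c in cycles
                        if scope & set(c) and badly_colored(c, color)), None)
            if bad is None:
                return
            recolor(bad)

    while True:
        bad = next((c for c in cycles if badly_colored(c, color)), None)
        if bad is None:
            return color                              # no badly colored cycle left
        recolor(bad)
```

Like \EC\ itself, this sketch is not guaranteed to halt; the probability analysis of the following sections is what bounds its running time.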
\begin{remark}
Parenthetically, note that \EC\ introduces dependencies between the colors, since choosing the least edge and cycle means that all previous edges, with respect to the assumed ordering, do not belong to a badly colored cycle. We will deal with this problem by introducing an algorithm which, instead of choosing cycles, takes as input a succession of cycles possibly generated by \EC\ and validates that this could indeed be the sequence of cycles generated by \EC\ (see Algorithm \ValE\ below).
\end{remark}
It is convenient to think of \EC\ as being executed in the following way: we are initially given a sequence $\rho$ of color choices (irrespective of the edge those colors will be assigned to) that are selected independently and u.a.r. from the palette; then \EC\ assigns colors to the edges dictated by its execution and following successively the color choices of $\rho$. Of course we always assume that $\rho$ is long enough to carry out the execution of \EC\ until it halts. Probabilistic considerations about \EC\ are made in relation to the space of such sequences.
Notice that \EC\ may not halt, and perhaps worse,
even if it stops, it may generate a coloring that is not strongly proper. However, it is obvious, because of the {\bf while}-loops in the main part of \EC\ and
in the procedure \Recolor, that if the algorithm halts, then it outputs a coloring with no badly colored cycles of even length $\geq 6$. So in the \MAE\ that follows, we repeat \EC\ until the desired coloring is obtained.
\begin{algorithm}[ht]\label{alg:main}
\caption{\MAE}
\vspace{0.1cm}
\begin{algorithmic}[1]
\State Execute \EC\ and if it stops, let $\mathcal{C}$ be the coloring it generates.
\While{$\mathcal{C}$ is not strongly proper}
\State Execute \EC\ anew and if it halts, set $\mathcal{C}$ to be the newly \Statex \hspace{2em} generated coloring \EndWhile
\end{algorithmic}
\end{algorithm}
Obviously \MAE, {\it if and when} it stops, generates a proper acyclic coloring. The rest of the paper is devoted to compute the probability distribution of the number of steps it takes.
A call of the \Recolor\ procedure from line \ref{ec:recolor} of the algorithm \EC\ is a \emph{root call} of \Recolor, while one made from within the execution of another \Recolor \ procedure is called a \emph{recursive call}. Each iteration of \Recolor, which entails coloring all but two edges of a cycle, is called a {\em phase}. So the number of steps of a phase is at most $m-2$ (recall that $m$ denotes the number of the edges). In the sequel, we count phases rather than color assignments. Because the number $m$ of the edges of the graph is constant, this does not affect the form of the asymptotics of the number of steps.
We prove the following progression lemma, which shows that at every time a \Recolor($e,C$) procedure terminates, some progress has indeed been made, which is then preserved in subsequent phases.
\begin{lemma}\label{lem:progr}
Consider an arbitrary call of \Recolor($e,C$) and let $\E$ be the set of edges that at the beginning of the call are not contained in a cycle of even length $\geq 6$ with homochromatic edges of the same parity. Then, if and when that call terminates, no such edge in $\E\cup \{e\}$ exists. \end{lemma}
\begin{proof}
Suppose that \Recolor($e,C$) terminates and there is an edge $e'\in\E\cup\{e\}$ contained in a cycle of even length $\geq 6$ whose edges of the same parity are homochromatic. If $e'=e$, then by line \ref{recolor:while}, \Recolor($e,C$) could not have terminated. Thus, $e'\in\E$.
Since $e'\in\E$, at the beginning of the call $e'$ was not contained in such a cycle; hence, at some point during this call, some cycle, with $e'$ among its edges, turned into one whose edges of the same parity are homochromatic,
because of some call of \Recolor. Consider the last time this happened and let \Recolor($e^*,C^*$)\ be the causing call. Then, there is some cycle $C'$ of even length $\geq 6$ and with $e'\in C'$, such that the recoloring of the edges of $C^*$
resulted in $C'$ having all edges of the same parity homochromatic and staying such until the end of the \Recolor($e,C$) call.
Then there is at least one edge $e''$ contained in both $C^*$ and $C'$ that was recolored by \Recolor($e^*,C^*$). By line \ref{recolor:while} of \Recolor($e^*,C^*$), this procedure could not terminate, and thus neither could \Recolor($e,C$), a contradiction.
\end{proof}
By Lemma \ref{lem:progr}, we get:
\begin{lemma}\label{lem:root}
The {\bf while}-loop of the main part of \EC\ is repeated at most $m$ times, where $m$, the number of edges of $G$, is a constant.
\end{lemma}
However, a {\bf while}-loop of \Recolor\ or \MAE\ could last infinitely long. In the next subsection we analyze the distribution of the number of steps they take.
\subsection{Analysis of the algorithms}\label{sec:analysis}
In this subsection we will prove the following two facts:
\begin{fact} \label{factt1} The probability that \EC\ lasts at least $n$ phases is inverse exponential in $n$, i.e. there exist a constant integer $n_0$ and a constant $c\in (0,1)$ such that if $n \geq n_0$, this probability is at most $c^n$.\end{fact}
\begin{fact} \label{factt2} The probability that the {\bf while}-loop of \MAE\ is repeated at least $n$ times is inverse exponential in $n$.\end{fact}
From the above two facts, {\it yet to be proved},
and because $\epsilon$ in the number of colors $\K$ of the palette is an arbitrary positive constant, we get Theorem \ref{maintheorem} below and its corollary, Corollary \ref{maincorollary}, the main results of this section.
\begin{theorem}\label{maintheorem}
Assume that the number of edges $m$ and the number of vertices $l$ of the graph are considered constants and that there are $2\Delta-1$ colors available, where $\D$ is the maximum degree of the graph. Then the probability that \MAE\ lasts at least $n^2$ steps is inverse exponential in $n$.
\end{theorem}
\begin{proof} By Fact \ref{factt1}, the probability that one of the first $n$ repetitions of the {\bf while}-loop of \MAE\ entails an execution of \EC\ with $n$ or more phases is inverse exponential in $n$. The result now follows by Fact \ref{factt2}. \end{proof}
Therefore:
\begin{corollary}\label{maincorollary}
$2\Delta -1$ colors suffice to properly and acyclically color a graph. \end{corollary}
The possible successions of edges that are colored by \EC\ are depicted by graph structures called {\em feasible forests\/}.
These structures are described next.
\subsection{Feasible forests} \label{ssec:forests}
We will depict an execution of \EC\ organized in phases with a rooted forest, that is, an acyclic graph whose connected components (trees) all have a designated vertex as their root. We label the vertices of such forests with pairs $(e,C)$, where $e$ is an edge and $C$ a $2k$-cycle containing $e$, for some $k\geq 3$. If a vertex $u$ of a forest $\F$ is labeled by $(e,C)$, we will sometimes say that $e$ is the \emph{edge-label} and $C$ the \emph{cycle-label} of $u$. The number of nodes of a forest is denoted by $|\F|$.
\begin{definition}\label{def:forest}
A labeled rooted forest $\F$ is called \emph{feasible}, if the following two conditions hold:
\begin{itemize}
\item[i.] Let $e$ and $e'$ be the edge-labels of two distinct vertices $u$ and $v$ of $\F$. Then, if $u,v$ are both either roots of $\F$ or siblings (i.e. they have a common parent) in $\F$, then $e$ and $e'$ are distinct.
\item[ii.] If $(e,C)$ is the label of a vertex $u$ that is not a leaf, where $C$ has half-length $k\geq 3$, and $e'$ is the edge-label of a child $v$ of $u$, then $e'\in \sco(C(e)) = \{e_1^C,\ldots,e_{2k-2}^C\}$.
\end{itemize}
\end{definition}
Notice that because the edge-labels of the roots of the trees are distinct, a feasible forest has at most as many trees as the number $m$ of edges.
Given an execution of \EC\ with at least $n$ phases, we construct a feasible forest with $n$ nodes by creating one node $u$ labeled by $(e,C)$ for each phase, corresponding to a call (root or recursive) of \Recolor($e,C$). We structure these nodes according to the order their labels appear in the recursive stack implementing \EC: the children of a node $u$ labeled by $(e,C)$ correspond to the recursive calls of \Recolor \ made by line \ref{recolor:recolor} of \Recolor($e,C$), with the leftmost child corresponding to the first such call and so on. We order the roots and the siblings of the witness forest according to the order they were examined by \EC. By traversing $\F$ in a depth-first fashion, respecting the ordering of roots and siblings, we obtain the \emph{label sequence} $\Ls(\F)=(e_1,C_1),\ldots,(e_{|\F|},C_{|\F|})$ of $\F$.
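The extraction of the label sequence is a plain depth-first traversal respecting the ordering of roots and siblings; a sketch with a hypothetical nested-tuple encoding of $\F$ (each node a pair of its label and its ordered list of children):

```python
def label_sequence(forest):
    """forest: ordered list of roots; each node is (label, ordered_children),
    with label a pair (edge, cycle).  Returns Ls(F) in depth-first order."""
    seq = []
    def dfs(node):
        label, children = node
        seq.append(label)            # visit the node before its subtrees
        for child in children:       # leftmost child first
            dfs(child)
    for root in forest:
        dfs(root)
    return seq
```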
Given a finite sequence $\rho$ of independent and u.a.r. color-choices (irrespective of the edges these colors are to be assigned to), let $\F_{\rho}$ be the uniquely defined feasible forest generated by \EC\ if it follows the random choices $\rho$. Here and in the sequel we assume that the length of $\rho$ is large enough to carry out the execution of \EC\ until it halts; we make the same assumption for all algorithms that entail color choices.
Let $P_n$ be the probability that \EC\ lasts at least $n$ phases, and $Q_n$ be the probability that if \EC\ is executed for less than $n$ phases, the coloring generated is not strongly proper. In the next subsection, we will compute upper bounds for these probabilities.
\subsection{Validation Algorithm}\label{ssec:vale}
We now give the validation algorithm:
\begin{algorithm}[H]\label{alg:vale}
\caption{\ValE($\F$)}
\vspace{0.1cm}
\begin{algorithmic}[1]
\Statex \underline{Input:} $\Ls(\F)=(e_1,C_1),\ldots,(e_{|\F|},C_{|\F|}): \ C_i(e_i)=\{e_i=e_1^{C_i},\ldots,e_{2k_i}^{C_i}\}$.
\State Color the edges of $G$, independently and selecting for each a color u.a.r. from \Statex \hspace{1em} $\{1,\ldots,K\}$
\For{$i=1,\ldots,|\F|$} \label{val:for}
\If{$C_i$ is badly colored }
\State Recolor the edges in $\sco(C_i(e_i)) = \{e_1^{C_i},\ldots,e_{2k_i-2}^{C_i}\}$ by selecting colors \Statex \hspace{3.6em} independently and u.a.r.
\Else
\State \textbf{return} {\tt failure} and \textbf{exit}
\EndIf
\EndFor \label{val:endfor}
\State \textbf{return} {\tt success}
\end{algorithmic}
\end{algorithm}
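As an aside, the loop of \ValE\ can be sketched in executable form. The sketch below is illustrative and not part of the formal development: it assumes that each label-sequence entry is passed as the list $C_i(e_i)$ of the $2k_i$ edges of the cycle starting at $e_i$, that a cycle is badly colored exactly when both of its equal parity edge sets are monochromatic, and that the scope consists of the first $2k_i-2$ edges of $C_i(e_i)$.

```python
import random

# Illustrative sketch of ValE (not part of the formal development).
# Assumptions: each label-sequence entry is the list C_i(e_i) of the
# 2k_i edges of the cycle starting at e_i; "badly colored" means both
# equal parity edge sets are monochromatic; the scope is the first
# 2k_i - 2 edges of C_i(e_i).

def badly_colored(cycle_edges, color):
    evens = {color[e] for e in cycle_edges[0::2]}
    odds = {color[e] for e in cycle_edges[1::2]}
    return len(evens) == 1 and len(odds) == 1

def val_e(label_seq, edges, K, rng):
    # line 1 of ValE: a u.a.r. color for every edge of G
    color = {e: rng.randrange(K) for e in edges}
    for cycle_edges in label_seq:
        if not badly_colored(cycle_edges, color):
            return "failure", color
        for e in cycle_edges[:-2]:       # recolor the scope u.a.r.
            color[e] = rng.randrange(K)
    return "success", color
```

With a palette of a single color every cycle is badly colored, so \ValE\ trivially succeeds; that degenerate case is only meant to exercise the control flow.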
We call each iteration of the \textbf{for}-loop of lines \ref{val:for}--\ref{val:endfor} where {\tt failure} is not reported (i.e.\ recoloring takes place) a {\em phase} of \ValE($\F$).
Given a sequence $\rho$ of color choices (made independently from the palette u.a.r.) and a feasible $\F$, we say that \ValE$(\F)$, if executed following $\rho$, is {\em successful} if it goes through all cycles of $\Ls(\F)$ without reporting {\tt failure}. Let $V_{\F}$ be the event comprised of sequences of color choices $\rho$ such that \ValE$(\F)$, if executed following $\rho$, is successful.
\begin{lemma}\label{lem:distr}
The following hold
\begin{itemize}
\item For every feasible $\F$, the coloring generated at the end of a successful execution of \ValE$(\F)$ is random, i.e.\ the colors of the edges are distributed as if they were assigned for a first and single time to each edge, selecting independently for each edge a color u.a.r. from the palette. Formally, if $c$ denotes a coloring of the graph, the probability of the event
$E(c;\F)$ that an execution of \ValE$(\F)$ is successful and generates $c$ is independent of $c$, i.e.\ it is the same for all colorings $c$. Therefore, by considering sub-forests, the same is true for the coloring generated at the end of every (successful) phase of the execution of \ValE$(\F)$.
\item If $$\Ls(\F)=(e_1,C_1),\ldots,(e_{|\F|},C_{|\F|})$$ and if $C_i$ has half-length $k_i\geq 3$, $i=1,\ldots,n$, then $$\Pr[V_{\F}]=\prod_{i=1}^{|\F|}\Bigg(\frac{1}{K^{(2k_i-2)}}\Bigg).$$
\item Given any finite collection $\{\F_1, \ldots, \F_k\}$ of feasible forests, the coloring $c$ generated when \ValE$(\F)$\ is successfully executed for some $\F \in \{\F_1, \ldots, \F_k\}$ (which depends on the sequence $\rho$ of color choices) is random. Formally, the probability of the event $$E(c; \F_1 \lor \cdots \lor \F_k)$$ comprised of sequences of random choices $\rho$ such that for at least one element $\F \in \{\F_1, \ldots, \F_k\}$ (depending on $\rho$) the execution of \ValE$(\F)$ is successful and generates $c$, is the same for all $c$.\end{itemize}
\end{lemma}
\begin{proof} For the first statement, observe that if an execution of \ValE$(\F)$ is successful, all cycles (cycle-labels) of $\F$ are found to be badly colored during the execution. However, after a cycle of $\F$ is found badly colored, all colors in the scope of the cycle are replaced by random choices of colors. Therefore all final colorings of the graph have equal probability to be generated.
The second statement is an immediate corollary of the first (considered when referring to individual phases) and the fact that for a cycle of half-length $k$, the probability that it is badly colored under a random coloring is $\frac{1}{K^{(2k-2)}}$.
For the proof of the third statement assume, to avoid unnecessarily burdening the notation, that we have two feasible forests, $\F_1$ and $\F_2$. By inclusion/exclusion, it suffices to show that the probability of the event $E(c; \F_1 \land \F_2)$ that both \ValE$(\F_1)$ and \ValE$(\F_2)$ are successful and both generate $c$ is independent of $c$. This again is true because, during an execution of \ValE$(\F_i)$, $i=1,2$, all cycles of $\F_i$ are found to be badly colored, but immediately afterwards all edges in the scope of such a cycle are recolored with random colors. Therefore the probability that colorings $c_1$ and $c_2$ are generated by \ValE$(\F_1)$ and \ValE$(\F_2)$, respectively, is the same for all pairs of colorings $c_1$ and $c_2$.
\end{proof}
We now let $\hat{P}_n$ be the probability that \ValE($\F$) succeeds for at least one $\F$ with $n$ nodes,
and let $\hat{Q}_n$ be the probability of sequences of random choices $\rho$ for which there is some feasible forest $\F$ of length less than $n$ (which may depend on $\rho$) such that $\rho$ gives a successful execution of \ValE$(\F)$ and generates a coloring that is not strongly proper.
\begin{lemma}\label{lem:PQn}
We have that $P_n \leq \hat{P}_n$ and $Q_n \leq \hat{Q}_n.$
\end{lemma}
\begin{proof}
Consider an execution of \EC\ with sequence of random choices $\rho$ and let $\F_{\rho}$ be the corresponding feasible forest. Execute now \ValE($\F_{\rho}$) making the random choices in $\rho$. This execution is successful, lasts as many phases as the execution of \EC, and generates the same coloring; hence every $\rho$ counted by $P_n$ (respectively, $Q_n$) is also counted by $\hat{P}_n$ (respectively, $\hat{Q}_n$). \end{proof}
\begin{lemma}\label{lem:bounds1}
We have that $\hPn \leq \sum_{|\F|=n}\Pr[V_{\F}] $ and
$\hat{Q}_n \leq 1 - \left( 1 -\left( \frac{2}{2+\epsilon}\right) \right)^m.$
\end{lemma}
\begin{proof}
The first inequality is obvious.
For the second inequality, first observe that from the cornerstone result of Esperet and Parreau given in Lemma~\ref{lem:sufficientcolors}, we have that the probability that a random coloring is proper and does not contain a bichromatic 4-cycle is at least
$\left( 1 -\left( \frac{2}{2+\epsilon}\right)\right)^m$.
Recall now that by Lemma~\ref{lem:distr}, the distribution of the colorings generated when \ValE$(\F)$\ is successfully executed on input some $\F$ of length less than $n$ (which may depend on the random choices) is uniform. Thus, the probability of the event that \ValE\ is successfully executed on input some $\F$ of length less than $n$ and that the coloring generated is not strongly proper is at most the probability that a random coloring is not strongly proper. The latter probability is at most $1-\left( 1 -\left( \frac{2}{2+\epsilon}\right)\right)^m$. Observe that this bound on $\hat{Q}_n$ is independent of $n$.
\end{proof}
The second inequality of Lemma \ref{lem:bounds1} has as a corollary, by Lemma~\ref{lem:PQn}, that the probability that the number of repetitions of the {\bf while}-loop of \MAE\ is at least a number $n$ is inverse exponential in $n$ (Fact \ref{factt2}). So all that remains to complete the proof of Theorem \ref{maintheorem} is to prove Fact \ref{factt1}, and for the latter it suffices to show that $\sum_{|\F|=n}\Pr[V_{\F}]$ is inverse exponential in $n$. We do this in the next subsection, expressing the sum through a recurrence.
\subsection{The recurrence}\label{ssec:rec}
We will estimate $\sum_{|\F|=n}\Pr[V_{\F}]$ by purely combinatorial arguments. Towards this end, we first define the weight of a forest, denoted by $\|\F\|$, to be the number $$\prod_{i=1}^{|\F|}\Bigg(\frac{1}{K^{(2k_i-2)}}\Bigg),$$ (recall that $|\F|$ denotes the number of nodes of $\F$), and observe that by Lemma~\ref{lem:distr}
\begin{equation}\label{eq:norm}\sum_{|\F|=n}\Pr[V_{\F}] = \sum_{|\F|=n}\|\F\|.\end{equation}
From the definition of a feasible forest, such a forest is comprised of at most as many trees as the number $m$ of edges.
For $j=1, \ldots, m$, let $\T_j$ be the set of all possible feasible trees whose root has as edge-label the edge $e_j$ together with the empty tree. Assume that the weight of the empty tree is one, i.e. $\|\emptyset \| = 1$ (of course, the number of nodes of the empty tree is 0, i.e. $|\emptyset| =0$). Let also $\T$ be the collection of all $m$-ary sequences $(T_1,\ldots,T_m)$ with $T_j \in \T_j$.
Now, obviously:\begin{align}\label{eq:trees}
\sum_{|\F|=n}\|\F\|& =\sum_{\substack{(T_1,\ldots,T_m)\in\T \\ |T_1|+ \cdots +|T_m| =n}}\|T_1\|\cdots\|T_m\| \nonumber\\
& = \sum_{\substack{n_1+\cdots+n_m=n\\n_1,\ldots,n_m\geq 0}}\Bigg(\Big(\sum_{\substack{T_1\in\T_1:\\|T_1|=n_1}}\|T_1\|\Big)\cdots\Big(\sum_{\substack{T_m\in\T_m:\\|T_m|=n_m}}\|T_m\|\Big)\Bigg).\end{align}
We will now obtain a recurrence for each factor of the rhs of \eqref{eq:trees}. Let:\begin{equation}\label{eq:q} q= \frac{\Delta-1}{K}. \end{equation}
\begin{lemma}\label{lem:rrec}
Let $\T^e$ be anyone of the $\T_j$. Then:\begin{equation}\label{Te}
\sum_{\substack{T\in\T^e \\ |T| =n}}\|T\|\leq R_n,
\end{equation}
where $R_n$ is defined as follows:\begin{equation}\label{Qn}
R_n:=\sum_{k\geq 3}q^{2k-2}\Bigg(\sum_{\substack{n_1+\cdots+n_{2k-2}=n-1\\n_1,\ldots,n_{2k-2}\geq 0}}R_{n_1}\ldots R_{n_{2k-2}}\Bigg)
\end{equation}
and $R_0=1$.
\end{lemma}
\begin{proof}
Indeed, the result is obvious if $n=0$, because the only possible $T$ is the empty tree, which has weight 1. Now if $n>0$, observe that there are at most $(\Delta-1)^{2k-2}$ possible cycles with $2k$ edges, for some $k \geq 3$, that can be the cycle-label of the root of a tree $T\in \T^e$ with $|T| >0$ (the first edge is fixed, the last is forced, and each of the remaining $2k-2$ edges can be chosen in at most $\Delta - 1$ ways). Since the probability of each such cycle having homochromatic equal parity sets is $\left(\frac{1}{K}\right)^{2k-2}$, the lemma follows.
\end{proof}
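The recurrence can also be evaluated numerically. The following sketch is illustrative only: $q=0.45$ is an arbitrary sample value below the threshold $1/2$, and the infinite sum over $k$ is truncated once $q^{2k-2}$ is negligible. It computes $R_n$ directly from \eqref{Qn} and exhibits the inverse exponential decay established analytically below.

```python
# Numerical evaluation of the recurrence for R_n (illustrative only):
# q = (Delta-1)/K < 1/2 is an arbitrary sample value, and the infinite
# sum over k >= 3 is truncated once q^(2k-2) is negligible.

def conv(a, b, top):
    """Coefficients 0..top of the product of two power series."""
    out = [0.0] * (top + 1)
    for i, ai in enumerate(a[:top + 1]):
        if ai:
            for j, bj in enumerate(b[:top + 1 - i]):
                out[i + j] += ai * bj
    return out

def R_sequence(q, N, k_max=30):
    """R_0, ..., R_N, where R_n is the sum over k >= 3 of q^(2k-2)
    times the (n-1)-st coefficient of the (2k-2)-nd power of the
    series whose coefficients are R_0, R_1, ... (the inner sum over
    compositions of n-1 into 2k-2 nonnegative parts)."""
    R = [1.0]                           # R_0 = 1
    for n in range(1, N + 1):
        top = n - 1
        p2 = conv(R, R, top)            # square of the series so far
        power = conv(p2, p2, top)       # fourth power: the k = 3 term
        total = 0.0
        for k in range(3, k_max + 1):
            total += q ** (2 * k - 2) * power[top]
            power = conv(power, p2, top)
        R.append(total)
    return R
```

For instance, with $q=0.45$ the ratio $R_{n+1}/R_n$ approaches a constant below 1, in accordance with the growth rate $\rho = 2q$ computed in Subsection \ref{ssec:analysisrec}.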
\subsection{The solution of the recurrence}\label{ssec:analysisrec}
For the solution we will follow the technique presented by Flajolet and Sedgewick in \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}. Towards this, we will first find the Ordinary Generating Function (OGF) of the sequence $R_n$. For technical reasons (simpler computations), we will find the OGF $R(z)$ where, for $n=0,\ldots, \infty$, the coefficient of $z^{n+1}$ is $R_{n}$ and the constant term is 0.
Since $R_0=1$, the coefficient of $z$ in $R(z)$ is 1. Multiply both sides of \eqref{Qn} by $z^{n+1}$ and sum for $n= 1, \dots, \infty$ to get
\begin{equation}\label{Q}
R(z) -z = \sum_{k\geq 3}\Bigg(q^{2k-2} zR(z)^{2k-2}\Bigg).
\end{equation}
Letting $R := R(z)$ we get:
\begin{equation}\label{W2}
R= z\left(\sum_{k\geq 2}\left(q^{2k}R^{2k}\right)+1\right) =z\left(\frac{(qR)^4}{1-(qR)^2}+1\right).
\end{equation}
Now, set:
\begin{equation}\label{eq:phi}
\phi(x) =\frac{(qx)^4}{1-(qx)^2} +1,
\end{equation}
to get from \eqref{W2}:
\begin{equation}\label{Wphi}
R= z\phi(R).
\end{equation}
Observe now that:
\begin{itemize} \item $\phi$ is a function analytic at 0 with nonnegative Taylor coefficients (with respect to $x$), \item $\phi(0) \neq 0$, \item the radius of convergence $r$ of the series representing $\phi$ at 0 is $1/q$ and $\lim_{x \rightarrow r^-} \frac{x\phi'(x)}{\phi(x)}= +\infty$,\end{itemize} so all the hypotheses to apply \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267} are satisfied. Therefore, $[z^n]R \bowtie {\rho}^n$, i.e. $\limsup \ ([z^n]R)^{1/n} = \rho$, where
$\rho=\phi'(\tau),$ and $\tau$ is the (necessarily unique) solution of the {\em characteristic equation}:
\begin{equation}
\label{eq:char}
\frac{\tau\phi'(\tau)}{\phi(\tau)} =1
\end{equation}
within $(0,r) = (0, 1/q)$ (for the asymptotic notation ``$\bowtie$" see \cite[IV.3.2, p. 243]{Flajolet:2009:AC:1506267}).
Within the interval $(0,r) = (0, 1/q)$, the characteristic equation given in \eqref{eq:char} reduces to a quadratic one with unique solution $\tau = \frac{\sqrt{5} -1}{2q}$. The value of $\phi'(\tau)$ is easily computed to be $2q$, which, since $q<1/2$, is less than 1. Therefore, by \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}, the rate of growth of the sequence $[z^n]R$ is inverse exponential.
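The claimed values can be double-checked numerically; the snippet below is an illustrative check, with $q=0.4$ an arbitrary sample value below $1/2$, that $\tau$ satisfies the characteristic equation and that $\phi'(\tau)=2q$.

```python
import math

# Numerical double-check (illustrative): tau = (sqrt(5)-1)/(2q) solves
# tau * phi'(tau) = phi(tau) for phi(x) = (qx)^4/(1-(qx)^2) + 1,
# and phi'(tau) = 2q.  q = 0.4 is an arbitrary sample value below 1/2.

def phi(x, q):
    y = q * x
    return y ** 4 / (1 - y ** 2) + 1

def dphi(x, q):
    # derivative of phi, computed by hand from the expression above
    y = q * x
    return q * (4 * y ** 3 - 2 * y ** 5) / (1 - y ** 2) ** 2

q = 0.4
tau = (math.sqrt(5) - 1) / (2 * q)
assert abs(tau * dphi(tau, q) - phi(tau, q)) < 1e-12
assert abs(dphi(tau, q) - 2 * q) < 1e-12
```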
By the above, and since there are at most $n^m$ sequences $n_1,\ldots,n_m$ of integers that add up to $n$ and since $m$ is considered to be constant, we get by the first inequality of Lemma \ref{lem:bounds1}, then using Equations \eqref{eq:norm} and \eqref{eq:trees} and finally using Lemma~\ref{lem:rrec} that:
\begin{equation}\label{finall}
\hPn \bowtie \rho^n,
\end{equation}
where $\rho$ is a positive constant $<1$.
By
Equation \eqref{finall} and the first inequality of Lemma \ref{lem:PQn}, we get Fact \ref{factt1}, and thus
the proofs of Theorem \ref{maintheorem} and its corollary, Corollary \ref{maincorollary}, our main results of this section, are completed.
\section{Acyclic vertex coloring}\label{sec:avc}
In this section, since we only deal with vertex colorings, we often avoid the specification ``vertex" for colorings (unless the context is ambiguous). It is known that the acyclic chromatic number is $O(\Delta^{4/3})$ and that this result is optimal within a logarithmic factor (see Introduction). Intuitively, the reason we need asymptotically more colors for acyclic vertex coloring than for acyclic edge coloring is essentially the fact that given a vertex, we have a choice of $k-1$ other vertices to form a $k$-cycle, whereas given an edge we have $k-2$ choices of other edges (the last edge is uniquely determined), so the probability of selecting a vertex should be smaller to offset the larger number of choices. We will prove the following, the main result of this section:
\begin{theorem}\label{thm:mainv}
For all $\alpha >2^{-1/3}$ and for $\Delta$ large enough (depending on $\alpha$), the acyclic chromatic number of the graph is at most
$K := \lceil \alpha{\Delta}^{4/3} \rceil +\Delta+ 1.$
\end{theorem}
Again the main idea is to design a Moser-type algorithm that ignores properness (actually a stronger notion, as we will see immediately below). Since several theorems, and their proofs, are very analogous to the edge coloring case, we will often give only an outline of, or even omit, proofs, when they are fully analogous to their edge coloring counterparts. As in the case of edge coloring, the stronger properness condition entails 4-cycles. However, the stronger properness condition will not force all 4-cycles not to be bichromatic, as in edge coloring (recall Definition \ref{def:strongp}). Rather, besides properness, it will require that any two vertices $u$ and $v$ such that many 4-cycles exist having $u$ and $v$ as opposite vertices are differently colored. In the next subsection we formalize this notion.
\subsection{Special pairs}\label{ssec:special}
We define the notion of special pairs originally introduced by Alon et al. \cite{alon1991acyclic}. Gon\c{c}alves et al. \cite{gonccalves2014entropy} generalized this notion and proved results about it of which we make strong use.
The reason that the notion of special pairs is useful for us is, on the one hand, that 4-cycles through a given vertex that forms a special pair with its opposing vertex can be handled directly with respect to bichromaticity and, on the other, that although 4-cycles through a given vertex that does not form a special pair with its opposing vertex are commoner, their number (see Lemma \ref{lem:4-cycles}) allows them to be handled with respect to bichromaticity by a Moser-type algorithm.
Below, we follow the notation and terminology of Gon\c{c}alves et al. slightly adjusted to our needs. We give in detail the relevant definitions and proofs.
Given a vertex $u$, let $N(u)$ and $N^2(u)$, respectively, denote the set of vertices at distance one and two, respectively, from $u$. Among the vertices in $N^2(u)$ define a strict total order $\prec_u$ as follows: $v_1 \prec_u v_2$ if either $|N(u)\cap N(v_1)| < |N(u)\cap N(v_2)|$ or alternatively $|N(u)\cap N(v_1)| = |N(u)\cap N(v_2)|$ and $v_1$ precedes $v_2$ in the ordering $\prec$ between vertices we assumed to exist.
\begin{definition}[Gon\c{c}alves et al. \cite{gonccalves2014entropy}]\label{def:special}
A pair $(u,v)$ of vertices such that $v \in N^2(u)$ is called an $\alpha$-special pair if it belongs to the at most $\lceil\alpha \D^{4/3}\rceil$ highest, in the sense of $\prec_u$, elements of $N^2(u)$. The set of vertices $v$ for which $(u, v)$ forms an $\alpha$-special pair is denoted by $S_{\alpha}(u)$. Also, $N^2(u)\setminus S_{\alpha}(u)$ is denoted by $\overline{S_{\alpha}(u)}$. \end{definition}
It is possible that $v \in S_{\alpha}(u)$ but $u \in \overline{S_{\alpha}(v)}$. Also by definition, \begin{equation}\label{eq:S}|S_{\alpha}(u)| = \min(\lceil\alpha\Delta ^{4/3}\rceil, |N^2(u)|). \end{equation}
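For concreteness, $S_{\alpha}(u)$ can be computed as in the following sketch (illustrative; the adjacency representation, the helper name \texttt{special\_set}, and the tie-breaking by the built-in order on vertex names, standing in for the assumed global order $\prec$, are our own assumptions).

```python
import math

# Illustrative computation of S_alpha(u) (representation and names are
# our own): rank the second neighborhood N^2(u) by the number of common
# neighbors |N(u) & N(v)|, break ties by an assumed global order on
# vertices, and keep the ceil(alpha * Delta^(4/3)) highest elements.

def special_set(adj, u, alpha):
    Delta = max(len(nbrs) for nbrs in adj.values())
    Nu = adj[u]
    # vertices at distance exactly two from u
    N2 = {v for w in Nu for v in adj[w]} - Nu - {u}
    ranked = sorted(N2, key=lambda v: (len(Nu & adj[v]), v), reverse=True)
    return set(ranked[:math.ceil(alpha * Delta ** (4 / 3))])
```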
We now give the proof of the following, which is essentially the proof presented by Gon\c{c}alves et al. \cite{gonccalves2014entropy}.\begin{lemma}[Gon\c{c}alves et al. {\cite[Claim 11]{gonccalves2014entropy}}]\label{lem:4-cycles} For all vertices $u$, there are at most $\frac{\Delta ^{8/3}}{8\alpha}$ 4-cycles that contain $u$ but contain no vertex $v \in S_{\alpha}(u)$.
\end{lemma}\label{lem:basicgoncalves}
\begin{proof}Let $d$ be an integer such that
\begin{align}
\mbox{if } v \in S_{\alpha}(u) \mbox{ then } |N(u)\cap N(v)| &\geq d \mbox{ and } \label{eq:more}\\
\mbox{if }
v \in \overline{S_{\alpha}(u)} \mbox{ then } |N(u)\cap N(v)| &\leq d. \label{eq:less} \end{align}
Now, because 4-cycles that contain $u$ and a given $v \not\in S_{\alpha}(u)$ are in one-to-one correspondence with a subset of the at most $\binom{|N(u) \cap N(v)|}{2}$
pairs of distinct edges from $u$ to $N(u) \cap N(v)$, and because of Equation \eqref{eq:less},
we conclude that the 4-cycles through $u$ whose opposing vertex is not in $S_{\alpha}(u)$ are at most
$\sum_{v \in \overline{S_{\alpha}(u)} } \binom{|N(u) \cap N(v)|}{2}\leq (1/2)d \sum_{v \in \overline{S_{\alpha}(u)}} |N(u) \cap N(v)|.$
Assume now that $ \lceil\alpha \Delta ^{4/3} \rceil \leq |N^2(u)|$, and therefore by Equation \eqref{eq:S} that $|S_{\alpha}(u)| = \lceil\alpha\Delta ^{4/3}\rceil$ (otherwise all vertices in $N^2(u)$ are special and so there is nothing to prove). Observe that because there are at most $\D^2$ edges between $N(u)$ and $N^2(u)$, and because of Equation \eqref{eq:more} above, $$\sum_{v \in \overline{S_{\alpha}(u)}} |N(u) \cap N(v)| \leq \D^2 - d |S_{\alpha}(u)| \leq \D^2 - d \alpha \Delta ^{4/3}$$
and therefore the number of 4-cycles through $u$ whose opposing vertex
$v \not\in S_{\alpha}(u)$
is at most
$(1/2)d(\D^2 - d \alpha \Delta ^{4/3})$,
a quadratic in $d$ whose maximum is $\frac{\D^{8/3}}{8\alpha}.$ \end{proof}
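The final maximization step can be checked numerically; in the sketch below the values of $\alpha$ and $\Delta$ are arbitrary samples.

```python
# Numerical check of the last step (illustrative sample values): the
# quadratic f(d) = (1/2) d (Delta^2 - d * alpha * Delta^(4/3)) attains
# its maximum Delta^(8/3) / (8 alpha) at d = Delta^(2/3) / (2 alpha).
alpha, Delta = 1.5, 64.0

def f(d):
    return 0.5 * d * (Delta ** 2 - d * alpha * Delta ** (4 / 3))

d_star = Delta ** (2 / 3) / (2 * alpha)
peak = Delta ** (8 / 3) / (8 * alpha)
assert abs(f(d_star) - peak) < 1e-6 * peak
assert all(f(d_star + t) < f(d_star) for t in (-2.0, -0.5, 0.5, 2.0))
```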
We now give the following definition.
\begin{definition}\label{def:special}
We call a coloring \emph{$\alpha$-specially proper} if for any two vertices $u,v$ such that $v$ is a neighbor of $u$ or $v \in S_{\alpha}(u)$, $u$ and $v$ are differently colored.
\end{definition}
We have that:
\begin{lemma} \label{lem:mainrandom}
For any graph with maximum degree $\D$ and any positive $\alpha$, a random coloring from a palette with $ \lceil \alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors is $\alpha$-specially proper with positive probability.
\end{lemma}
\begin{proof}
Given a vertex $u$, there are at most $\lceil \alpha \D^{4/3} \rceil$ vertices forming an $\alpha$-special pair with $u$ and also at most $\Delta$ neighbors of $u$. Therefore a palette with at least
$\lceil\alpha{\Delta}^{4/3}\rceil + \Delta + 1$ colors suffices for a random coloring to have positive probability to avoid, for $u$, all the colors of its neighbors and of the vertices that form a special pair with $u$. Since vertices are assigned their colors independently, the probability that a random coloring is $\alpha$-specially proper is positive (recall at this point that all parameters except the number of steps of the algorithms are considered constant). \end{proof}
\subsection{The Moser part of the proof}\label{sec:moser}
In this section we will show that, for any $\alpha>2^{-1/3}$, $\lceil \alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors suffice to color the vertices of a graph in a way that, although it may produce a coloring that is not $\alpha$-specially proper (or not even proper), nevertheless succeeds in producing a coloring where for every vertex $u$, for all 4-cycles that contain $u$ whose vertex opposing $u$ does not form an $\alpha$-special pair with $u$ (these are the commoner 4-cycles), as well as for all cycles of length at least 6 that contain $u$, not both equal parity sets are monochromatic, i.e.\ not all equal parity pairs of vertices are homochromatic.
For this we will need to assume that the maximum degree of the graph is at least as large as an integer depending on $\alpha$ (but not depending on the graph).
In what follows, assume we have a palette of $\lceil \alpha{\Delta}^{4/3}\rceil + \D +1$ colors, where $\alpha>2^{-1/3}$. Let also $\B$ be the set comprised (i) of all $4$-cycles whose opposing vertices do not form $\alpha$-special pairs and (ii) of all $5$-paths, that is paths containing five edges and six vertices. Recall that the elements of $\B$ are ordered according to $\prec$. Given a set $B\in\B$, a {\em pivot} vertex $u$ of $B$ is any vertex in $B$ if $B$ is a 4-cycle, or any of $B$'s endpoints if $B$ is a 5-path. In the former case, let $B(u):=\{u=u_1^B,\ldots,u_4^B\}$ be the set of consecutive vertices of $B$ in its positive traversal beginning from $u$, while in the latter case let $B(u):=\{u=u_1^B,\ldots,u_6^B\}$ be the set of consecutive vertices of $B$ starting from $u$. Also let $|B| = 4 \mbox{ or } 6$ be the number of vertices of $B$. Given a pivot vertex $u$ of a set $B\in\B$, we define the \emph{scope} of $B(u)$ to be the set $\sco(B(u)):=\{u_1^B,\ldots,u_{k-2}^B\}$, where $k=4$ or $6$ is the number of vertices of $B$. In the sequel, we call {\em badly colored\/} the sets in $\B$ whose both equal parity sets are monochromatic. Consider now $\VC$, Algorithm \ref{alg:vc} defined below.
\begin{algorithm}[ht]
\caption{\VC}\label{alg:vc}
\vspace{0.1cm}
\begin{algorithmic}[1]
\For{each $u\in V$}
\State Choose a color from the palette, independently for each $u$, and u.a.r.
\EndFor
\While{there is a pivot vertex of a badly colored set in $\B$, let $u$ be the least such \Statex \hspace{3.6em} vertex and $B$ be the least such set and}\label{main:loop}
\State \hspace{1.4em}\Recolor($u,B$)\label{main:rec}
\EndWhile
\State \textbf{return} the current coloring
\end{algorithmic}
\begin{algorithmic}[1]
\vspace{0.1cm}
\Statex \underline{\Recolor($u,B$)}, $B(u)=\{u=u_1^B,\ldots,u_k^B\}$, $k=4$ or $k=6$.
\State Choose a color independently for each $v\in\sco(B(u))$, and u.a.r.
\While{there is vertex in $\sco(B(u))$ which is a pivot vertex of a badly colored \Statex \hspace{3.6em} set in $\B$, let $u'$ be the least such vertex and $B'$ be the least such \Statex \hspace{3.6em} set and}\label{rec:loop}
\State \hspace{1.4em}\Recolor($u',B'$)\label{rec:rec}
\EndWhile
\end{algorithmic}
\end{algorithm}
\begin{remark} As in edge coloring, note that \VC\ introduces dependencies between the colors, since choosing the least vertex $u$ and set $B$ means that all previous, with respect to the assumed ordering, vertices are not pivot vertices in badly colored sets of $\B$. As in the case of edge coloring, we deal with this problem introducing a validation algorithm.
\end{remark}
Because the colorings generated by \VC\ may lack $\alpha$-special properness, we introduce:
\begin{algorithm}[ht]\label{alg:main}
\caption{\MAV}
\vspace{0.1cm}
\begin{algorithmic}[1]
\State Execute \VC\ and if it stops, let $c$ be the coloring it generates.
\While{$c$ is not $\alpha$-specially proper}\label{ln:mainwhile}
\State Execute \VC\ anew and if it halts, set $c$ to be the newly \Statex \hspace{2em} generated coloring \EndWhile
\end{algorithmic}
\end{algorithm}
The following two lemmas are analogous to Lemmas \ref{lem:progr} and \ref{lem:root}, respectively. We do not prove them, neither do we give any comment, as both their proofs and their role should be clear by analogy.
\begin{lemma}\label{lem:progrr}
Let $\V$ be the set of vertices that, at the beginning of some call of \Recolor($u,B$), are not pivot vertices of any badly colored set in $\B$. Then, if and when that call terminates, no vertex in $\V\cup \{u\}$ is a pivot vertex of a badly colored set in $\B$. \end{lemma}
\begin{lemma}\label{lem:roott}
The {\bf while}-loop of line \ref{main:loop} of the main part of \VC\ is repeated at most $l$ times, where $l$ is the number of vertices of $G$, i.e.\ a constant.
\end{lemma}
Let $P_n$ be the probability that \VC\ lasts at least $n$ phases, and $Q_n$ be the probability that if \VC\ is executed for less than $n$ phases, the coloring generated is not $\alpha$-specially proper.
We will now show that for any $\alpha> 2^{-1/3}$, there is an integer $\D_{\alpha}$ such that for any graph whose maximum degree is at least $\D_{\alpha}$, the following two facts hold:
\begin{fact}\label{fact3}
The probability $\Pn$ is inverse exponential in $n$.
\end{fact}
\begin{fact}\label{fact4}The probability that the {\bf while}-loop of \MAV\ is repeated at least $n$ times is inverse exponential in $n$.
\end{fact}
From the above two facts, {\it yet to be proved}, the proof of Theorem \ref{thm:mainv} follows.
As in edge coloring, we depict the phases of an execution of $\VC$ with a labeled rooted forest, the witness structure. The labels are pairs $(u,B)$, where $u$ is a pivot vertex of $B\in\B$. We call $u$ the vertex-label and $B$ the set-label of the node of the tree.
\begin{definition}\label{def:forest}
A labeled rooted forest $\F$ is called feasible, if the following conditions hold:
\begin{itemize}
\item[i.] Let $u$ and $v$ be the vertex-labels of two distinct nodes $x$ and $y$ of $\F$. If $x$ and $y$ are both either roots of $\F$ or siblings in $\F$, then $u$ and $v$ are distinct.
\item[ii.] If $(u,B)$ is the label of an internal node $x$ of the forest, then the vertex-labels of the children of $x$ belong to the set $\sco(B(u))$.
\end{itemize}
\end{definition}
As in edge coloring, we order the vertices of a feasible forest, and we set:
$$\Ls(\F):=((u_1,B_1),\ldots,(u_{|\F|},B_{|\F|})).$$
We also associate a feasible forest with every execution of \VC\ (the forest may differ for different executions).
We now give $\ValV$, algorithm \ref{alg:vv} below.
\begin{algorithm}[!ht]
\caption{\ValV($\F$)}\label{alg:vv}
\vspace{0.1cm}
\begin{algorithmic}[1]
\Statex \underline{Input:} Feasible forest $\F$, where $\Ls(\F)=(u_1,B_1),\ldots,(u_{|\F|},B_{|\F|})$.
\State Color the vertices of $G$, independently and selecting for each a color u.a.r. \Statex \hspace{1em} from the palette.
\For{$i=1,\ldots,|\F|$}\label{vv:for}
\If{ $B_i$ is badly colored}
\State recolor the vertices in $\sco(B_i(u_i))$ independently by selecting for each a \Statex \hspace{3.6em} color u.a.r. from the palette.\label{vv:recolor}
\Else
\State \textbf{return} {\tt failure} and \textbf{exit}
\EndIf
\EndFor \label{vv:endfor}
\State \textbf{return} {\tt success}
\end{algorithmic}
\end{algorithm}
Let $V_{\F}$ be the event comprised of sequences of color choices $\rho$ such that \ValV$(\F)$ if executed following $\rho$ is successful.
The following two lemmas, Lemmas \ref{lem:random} and \ref{lem:hatbound}, are analogous to the corresponding ones for edge coloring, namely Lemmas \ref{lem:distr} and \ref{lem:PQn}, respectively, so we omit their proofs.
\begin{lemma}\label{lem:random} For every $\F$
\begin{itemize}
\item At the end of each phase of \ValV$(\F)$, the colorings generated are random, i.e.\ they are distributed as if they were assigned for a first and single time to each vertex, selecting independently for each vertex a color u.a.r. from the palette.
\item If $$\Ls(\F)=(u_1,B_1),\ldots,(u_{|\F|},B_{|\F|}),$$
then $$\Pr[V_{\F}]=\prod_{i=1}^{|\F|}\Bigg(\frac{1}{K^{(|B_i|-2)}}\Bigg).$$
\item Given any finite collection $\mathfrak{F}$ of feasible forests, the coloring $c$ generated when \ValV$(\F)$\ is successfully executed for some $\F \in \mathfrak{F}$ (which depends on the sequence $\rho$ of color choices) is random. More formally, the probability of the event $E(c; \mathfrak{F})$, comprised of sequences of random choices $\rho$ such that for at least one element $\F \in \mathfrak{F}$ (depending on $\rho$) the execution of \ValV$(\F)$ following the choices in $\rho$ is successful and generates the coloring $c$, is independent of $c$. \end{itemize}
\end{lemma}
Now in complete analogy to edge coloring, we let $\hat{P}_n$ be the probability that \ValV($\F$) succeeds for at least one $\F$ with exactly $n$ nodes,
and let $\hat{Q}_n$ be the probability of sequences of random choices $\rho$ for which there is some feasible forest $\F$ of length less than $n$ (which may depend on $\rho$) such that $\rho$ gives a successful execution of \ValV$(\F)$ and generates a coloring that is not $\alpha$-specially proper. We have:
\begin{lemma}\label{lem:hatbound}
For every $n$, it holds that $\Pn\leq\hPn$ and $Q_n\leq\hat{Q}_n$.
\end{lemma}
Observe that, by Lemma \ref{lem:random}, the coloring produced at the end of a successful execution of $\ValV(\F)$, for any $\F$, is random. Also, by Lemma \ref{lem:mainrandom}, the probability that a random coloring is $\alpha$-specially proper is positive. This means that $\hat{Q}_n$, and by Lemma \ref{lem:hatbound} $Q_n$ too, is less than $1$.
From this we get Fact \ref{fact4}.
We still have to show Fact \ref{fact3}.
For this we will bound $\hat{P}_n$.
Let: \begin{equation}\label{q}
q:= \frac{1}{\alpha{\Delta}^{4/3}} > \frac{1}{\lceil \alpha{\Delta}^{4/3}\rceil + \D +1}.
\end{equation}
We define the weight $\|\F\|$ of a feasible forest $\F$ by taking the product of weights assigned to its nodes as follows: for each node with set-label $B$, if $B$ is a $4$-cycle, assign weight $q^2$; if $B$ is a $5$-path, assign weight $q^4$.
Since we obviously have that $\hat{P}_n \leq \sum_{\F:|\F|=n}\Pr[V_{\F}]$, we get by Lemma \ref{lem:random} and Equation \eqref{q} that
\begin{equation}\label{norm}
\hPn \leq \sum_{\F:|\F|=n}\|\F\|.
\end{equation}
We will bound $\sum_{\F:|\F|=n}\|\F\|$ by purely combinatorial arguments.
Let $\T_j$ be the set of all possible feasible trees, whose root has as a vertex-label the vertex $u_j$, together with the empty tree (for the latter, we assume $|\emptyset|= 0$ and $\|\emptyset\| =1$). Let also $\T$ be the collection of all $l$-ary sequences $(T_1,\ldots,T_l)$ with $T_j \in \T_j$.
Now, obviously:\begin{align}\label{trees}
\sum_{\F: |\F|=n}\|\F\|& =\sum_{\substack{(T_1,\ldots,T_l)\in\T \\ |T_1|+ \cdots +|T_l| =n}}\|T_1\|\cdots\|T_l\| \nonumber\\
& = \sum_{\substack{n_1+\cdots+n_l=n\\n_1,\ldots,n_l\geq 0}}\Bigg(\Big(\sum_{\substack{T_1\in\T_1:\\|T_1|=n_1}}\|T_1\|\Big)\cdots\Big(\sum_{\substack{T_l\in\T_l:\\|T_l|=n_l}}\|T_l\|\Big)\Bigg).\end{align}
We now obtain a recurrence for each factor of \eqref{trees}.
\begin{lemma}\label{lem:rec}
Let $\T^u$ be anyone of the $\T_j$. Then:\begin{equation}\label{Rn}
\sum_{\substack{T\in\T^u\\|T|=n}}\|T\|\leq R_n,
\end{equation}
where $R_n$ is defined as follows:\begin{multline}\label{rec}
\mbox{ for } n \geq 1,\
\\R_n:= \frac{\Delta^{8/3}}{8\alpha} q^2\sum_{\substack{n_1+n_2=n-1\\n_1,n_2\geq 0}}\Big(R_{n_1}R_{n_2}\Big)+\Delta^5 q^4\sum_{\substack{n_1+n_2+n_3+n_4=n-1\\n_1,n_2,n_3,n_4\geq 0}}\Big(R_{n_1}R_{n_2}R_{n_3}R_{n_4}\Big)\\
= \frac{1}{8\alpha^3}\sum_{\substack{n_1+n_2=n-1\\n_1,n_2\geq 0}}\Big(R_{n_1}R_{n_2}\Big)+\frac{1}{\Delta^{1/3}\alpha^4}\sum_{\substack{n_1+n_2+n_3+n_4=n-1\\n_1,n_2,n_3,n_4\geq 0}}\Big(R_{n_1}R_{n_2}R_{n_3}R_{n_4}\Big), \end{multline}
$R_0=1$.
\end{lemma}
\begin{proof}
The result is obvious for $n=0$, because the empty tree has weight 1. For $n>0$, we have two cases for the set-label $B$ of the root of $T\in\T^u$. If $B$ is one of the, by Lemma \ref{lem:4-cycles}, at most $\frac{\Delta^{8/3}}{8\alpha}$ $4$-cycles whose vertex opposing $u$ does not form a special pair with $u$, then the root has weight $q^2$ and at most two children. Otherwise, observe that there are at most $\Delta^5$ $5$-paths beginning at $u$; in this case, the root has weight $q^4$ and at most four children.
\end{proof}
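As a sanity check, the recurrence \eqref{rec} is easy to evaluate numerically. The following Python sketch (an illustration, not part of the proof) computes $R_n$ by memoized convolutions; note that the equality of the two forms of the coefficients in \eqref{rec} amounts to taking $q=1/(\alpha\Delta^{4/3})$.

```python
from functools import lru_cache

def make_R(alpha, Delta):
    """R_0 = 1 and, for n >= 1,
    R_n = c2 * sum_{n1+n2=n-1} R_{n1}R_{n2}
        + c4 * sum_{n1+n2+n3+n4=n-1} R_{n1}R_{n2}R_{n3}R_{n4},
    with c2 = 1/(8 alpha^3) and c4 = 1/(Delta^(1/3) alpha^4)."""
    c2 = 1.0 / (8 * alpha**3)
    c4 = 1.0 / (Delta ** (1 / 3) * alpha**4)

    @lru_cache(maxsize=None)
    def R(n):
        if n == 0:
            return 1.0
        # two-fold convolution over n1 + n2 = n - 1
        s2 = sum(R(n1) * R(n - 1 - n1) for n1 in range(n))
        # four-fold convolution over n1 + n2 + n3 + n4 = n - 1
        s4 = sum(R(n1) * R(n2) * R(n3) * R(n - 1 - n1 - n2 - n3)
                 for n1 in range(n)
                 for n2 in range(n - n1)
                 for n3 in range(n - n1 - n2))
        return c2 * s2 + c4 * s4

    return R

R = make_R(alpha=1.0, Delta=1000.0)
print(R(1))  # c2 + c4, roughly 0.225
```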
To estimate the asymptotic behavior of the sequence $R_n$, we will find the Ordinary Generating Function (OGF) of $R_n$ and apply \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}. For technical reasons, we find the OGF $R(z)$ for $R_n$, with $R_n, n\geq 0$ being the coefficient of $z^{n+1}$ instead of $z^n$, and the constant term being 0.
Multiply both sides of Eq. \eqref{rec} by $z^{n+1}$ and sum over $n\geq 1$, to get:\begin{align}\label{OGF}
R(z)-zR_0= & \frac{1}{8\alpha^3} zR(z)^2+\frac{1}{\Delta^{1/3}\alpha^4}zR(z)^4\Rightarrow\nonumber\\
R(z)= & z\Bigg(\frac{1}{8\alpha^3}R(z)^2+\frac{1}{\D^{1/3}\alpha^4}R(z)^4\Bigg)+z.
\end{align}
Set $R:=R(z)$ and observe that for: \begin{equation}\label{phix}
\phi(x)=\frac{x^4}{\Delta^{1/3}\alpha^4}+\frac{x^2}{8\alpha^3}+1,
\end{equation}
we have that $R=z\phi(R)$.
Therefore following \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}, we consider the characteristic equation:\begin{equation}\label{characteristic}
x{\phi}'(x)-{\phi}(x)=0 \Leftrightarrow \frac{3x^4}{\alpha^4\Delta^{1/3}}+\frac{x^2}{8\alpha^3}-1=0,
\end{equation}
and we let $\tau$ be its unique positive solution. It only remains to find the range of $\alpha$ for which ${\phi}'(\tau) <1$.
Towards this, we consider instead the equation $\frac{x^2}{8\alpha^3}-1=0$, obtained from \eqref{characteristic} by letting $\Delta\rightarrow+\infty$; it has the unique positive solution $\sqrt{8\alpha^3}.$
We first observe that:
\begin{lemma}\label{lem:hat}
The limit of the unique positive solution of the characteristic equation \eqref{characteristic} for $\Delta\rightarrow+\infty$ is equal to $\sqrt{8\alpha^3}$.
\end{lemma}
\begin{proof}
It is easy to check that the unique positive solution of Eq. \eqref{characteristic} is:
\begin{equation}\label{withD}
\Big(\frac{\alpha\Delta^{1/6}}{48}\Big(\sqrt{\Delta^{1/3}+768\alpha^2}-\Delta^{1/6}\Big)\Big)^{1/2},
\end{equation}
which tends to $(8\alpha^3)^{1/2}$ as $\Delta$ goes to infinity.
\end{proof}
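A quick numerical check of Lemma \ref{lem:hat} (an illustrative Python sketch, not part of the proof): the closed form \eqref{withD} satisfies \eqref{characteristic} and approaches $\sqrt{8\alpha^3}$ as $\Delta$ grows.

```python
def tau(alpha, Delta):
    """Positive root of the characteristic equation, via the closed form
    tau^2 = (alpha Delta^(1/6) / 48) (sqrt(Delta^(1/3) + 768 alpha^2) - Delta^(1/6))."""
    d6 = Delta ** (1 / 6)
    return (alpha * d6 / 48 * ((d6 * d6 + 768 * alpha**2) ** 0.5 - d6)) ** 0.5

def char_lhs(x, alpha, Delta):
    """Left-hand side 3x^4/(alpha^4 Delta^(1/3)) + x^2/(8 alpha^3) - 1."""
    return 3 * x**4 / (alpha**4 * Delta ** (1 / 3)) + x**2 / (8 * alpha**3) - 1

alpha = 1.0
for Delta in (1e3, 1e9, 1e30):
    t = tau(alpha, Delta)
    # residual is ~0, and tau^2 approaches 8 alpha^3 = 8
    print(Delta, t * t, char_lhs(t, alpha, Delta))
```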
So, the range of $\alpha$ for which ${\phi}'(\tau) <1$ is computed as follows:
\begin{align}\label{final}
4\frac{{\tau}^3}{\alpha^4\Delta^{1/3}}+\frac{{\tau}}{4\alpha^3} & <1\stackrel{\Delta\rightarrow+\infty}{\Longleftrightarrow}\nonumber\\
0+\frac{\sqrt{8\alpha^3}}{4\alpha^3} & <1\Leftrightarrow\nonumber\\
\frac{1}{2^{1/2}\alpha^{3/2}} & <1\Leftrightarrow\nonumber\\
2^{-1/3} & <\alpha.
\end{align}
It follows that for every $\alpha>2^{-1/3}$ there is a $\Delta_{\alpha}$ (depending on $\alpha$) such that, if the maximum degree $\D$ of the graph is at least $\D_{\alpha}$, then with $\lceil\alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors $P_n$ is exponentially small. This completes the proof of Fact \ref{fact3}, and therefore of Theorem \ref{thm:mainv}, our main result in this section.
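The threshold in \eqref{final} is also visible numerically from the closed form for $\tau$: for large $\Delta$, $\phi'(\tau)$ drops below $1$ exactly when $\alpha$ exceeds $2^{-1/3}\approx 0.794$. A small illustrative Python check (not part of the proof):

```python
def phi_prime_at_tau(alpha, Delta):
    """phi'(tau) = 4 tau^3 / (alpha^4 Delta^(1/3)) + tau / (4 alpha^3),
    with tau the positive root of the characteristic equation (closed form)."""
    d6 = Delta ** (1 / 6)
    tau2 = alpha * d6 / 48 * ((d6 * d6 + 768 * alpha**2) ** 0.5 - d6)
    t = tau2 ** 0.5
    return 4 * t**3 / (alpha**4 * Delta ** (1 / 3)) + t / (4 * alpha**3)

Delta = 1e30                # a "large" maximum degree
threshold = 2 ** (-1 / 3)   # about 0.7937
print(phi_prime_at_tau(1.1 * threshold, Delta))  # below 1
print(phi_prime_at_tau(0.9 * threshold, Delta))  # above 1
```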
\begin{remark}
Our technique does not lead to the conclusion that for large enough maximum degree $\D$, the chromatic number is at most $\lceil2^{-1/3} {\Delta}^{4/3} \rceil +\Delta+ 1$, because we cannot exclude that $\D_{\alpha}$ approaches $+\infty$ as $\alpha$ approaches $2^{-1/3}$. \end{remark}
\section{Acknowledgements}
We are indebted to the anonymous referees who pointed out mistakes in previous versions of this work. For the same reason, we are grateful to Fotis Iliopoulos and Aldo Procacci.
\begin{document}
\begin{abstract}
We introduce a new class of finite dimensional gentle algebras, the \emph{surface algebras}, which are constructed from an unpunctured Riemann surface with boundary and marked points by introducing cuts in internal triangles of an arbitrary triangulation of the surface. We show that surface algebras are endomorphism algebras of partial cluster-tilting objects in generalized cluster categories, we compute the invariant of Avella-Alaminos and Geiss for surface algebras and we provide a geometric model for the module category of surface algebras.
\end{abstract}
\maketitle
\section{Introduction}
We introduce a new class of finite dimensional gentle algebras, the \emph{surface algebras}, which includes the hereditary, the tilted, and the cluster-tilted algebras of Dynkin type $\mathbb{A}$ and Euclidean type $\tilde{\mathbb{A}}$.
These algebras are constructed from an unpunctured Riemann surface with boundary and marked points by introducing cuts in internal triangles of an arbitrary triangulation of the surface.
To be more precise, let $T$ be a triangulation of a bordered unpunctured Riemann surface $S$ with a set of marked points $M$, and let $(Q_T,I_T)$ be the bound quiver associated to $T$ as in \cite{CCS,ABCP}. The corresponding algebra $B_T=kQ_T/I_T$, over an algebraically closed field $k$, is a finite dimensional gentle algebra \cite{ABCP}. Moreover, $B_T$ is the endomorphism algebra of the cluster-tilting object corresponding to $T$ in the generalized cluster category associated to $(S,M)$, see \cite{CCS,BMRRT,Amiot,BZ}. Each internal triangle in the triangulation $T$ corresponds to an oriented $3$-cycle in the quiver $Q_T$, and the relations for the algebra $B_T$ state precisely that the composition
of any two arrows in an oriented $3$-cycle is zero in $B_T$.
If the surface is a disc or an annulus then the corresponding cluster algebra, as defined in \cite{FST}, is acyclic, and, in this case, the algebra $B_T$ is cluster-tilted of type $\mathbb{A}$, if $S$ is a disc; and of type $\tilde{\mathbb{A}}$, if $S$ is an annulus \cite{CCS,BMR}. It has been shown in \cite{ABS} that every cluster-tilted algebra is the trivial extension of a tilted algebra $C$ by the canonical $C$-bimodule $\Ext^2_C(DC,C)$, where $D$ denotes the standard duality.
The quiver of the tilted algebra $C$ contains no oriented cycle and can be obtained from the quiver of $B_T$ by an admissible cut, that is, by deleting one arrow in every oriented $3$-cycle.
Moreover, it has been shown in \cite{BFPPT} that an algebra is iterated tilted of Dynkin type $\AA$ of global
dimension at most two if and only if it is the quotient of a cluster-tilted algebra of the same
type by an admissible cut.
It is then natural to ask what kind of algebras can be obtained from admissible cuts of the algebras $B_T$ coming from other surfaces.
This motivates the definition of a \emph{surface algebra}, which is constructed by cutting a triangulation $T$ of a surface $(S,M)$ at internal triangles. Cutting an internal triangle $\triangle$ means replacing the triangle $\triangle$ by a quadrilateral $\triangle^\dd$ with one side on the boundary of the same surface $S$ with an enlarged set of marked points $M^\dd$, see Definition \ref{def cut}. Cutting as many internal triangles as we please, we obtain a partial triangulation $T^\dd$ of a surface with marked points $(S,M^\dd)$, to which we can associate an algebra $B_{T^\dd}=kQ_{T^\dd}/I_{T^\dd}$ in a very similar way to the construction of $B_T$ from $T$, see Definition \ref{def surface algebra}. This algebra $B_{T^\dd}$ is called a \emph{surface algebra of type} $(S,M)$.
{A surface algebra is called an \emph{admissible cut} if it is obtained by cutting every internal triangle exactly once.}
Our first main results are the following.
\begin{theorem*1}
\it Every surface algebra is isomorphic to the endomorphism algebra of a partial cluster-tilting object in a generalized cluster category. More precisely, if the surface algebra $B_{T^\dd} $ is given by the
cut $(S,M^\dd,T^\dd)$ of the triangulated surface $(S,M,T)$, then \[B_{T^\dd}\cong \End_{\calc_{(S,M^\dd)}}T^\dd,\]
where, abusing notation, $T^\dd$ also denotes the object in the cluster category $\calc_{(S,M^\dd)}$ corresponding to the partial triangulation $T^\dd$.
\end{theorem*1}
\begin{theorem*2}
\it
If $(S,M^\dd,T^\dd)$ is an admissible cut of $(S,M,T)$ then
\begin{enuma}
\item $Q_{T^\dd}$ is an admissible cut of $Q_T$.
\item $B_{T^\dd}$ is of global dimension at most two.
\item The tensor algebra of $B_{T^\dd}$ with respect to the $B_{T^\dd}$-bimodule
\[\Ext^2_{B_{T^\dd}}(DB_{T^\dd},B_{T^\dd})\]
is isomorphic to the algebra $B_T$.
\end{enuma}
\end{theorem*2}
{Part (c) of Theorem 2 implies that the cluster category associated to $B_{T^\dd}$ is the same as the cluster category associated to the surface $(S,M)$. Therefore, the surface algebras $B_{T^\dd}$ which are admissible cuts are algebras of cluster type $(S,M)$ in the sense of \cite{AO}. In \cite{AO}, the authors study algebras of global dimension two whose cluster type is acyclic, which, in our setting, corresponds to admissible cuts from a disc or an annulus. }
Applying the result of \cite{BFPPT} mentioned above, we see that the admissible cut surface algebras of the disc with $n+3$ marked points are precisely the
iterated tilted algebras of type $\AA_n$ whose global dimension is at most two.
{For all surfaces other than the disc, we show that the surface algebras form several different classes under derived equivalence. For admissible cuts from the annulus, this recovers a result in \cite{AO}. }
To show that the surface algebras are not always derived equivalent, we use an invariant introduced by Avella-Alaminos and Geiss in \cite{AG}, which we call the AG-invariant for short. For each surface algebra $B_{T^\dd}$, we compute the AG-invariant in terms of the original triangulated surface $(S,M,T)$ and the number of cuts on each boundary component, see Theorem \ref{thm AG calc}. In particular, we show that already for the annulus there are different cuts coming from the same triangulation $T$ such that the corresponding surface algebras have different AG invariants and hence are not derived equivalent.
We want to point out here that Ladkani has very recently also computed the AG invariant for the algebras $B_T$ (without cuts) in the special case where each boundary component has exactly one marked point, and used it to classify the surfaces $(S,M)$ such that any two triangulations $T_1,T_2$ of $(S,M)$ give rise to derived equivalent algebras $B_{T_1}$ and $B_{T_2}$, see \cite{Lad}.
Let us also mention that the AG-invariant has been used by Bobinski and Buan in \cite{BB} to classify algebras that are derived equivalent to cluster-tilted algebras of type $\AA$ and $\tilde{\AA}$.
We then study the module categories of the surface algebras. Since surface algebras are gentle, their indecomposable modules are either string modules or band modules. In the special case where the surface is not cut, the indecomposable modules and the irreducible morphisms in the module category of the algebras $B_T$ have been described by Br\"ustle and Zhang in \cite{BZ} in terms of generalized arcs on the surface $(S,M,T)$. One of the main tools used in \cite{BZ} is the description of irreducible morphisms between string modules by Butler and Ringel \cite{BR}. We generalize the results of Br\"ustle and Zhang to the case of arbitrary surface algebras $B_{T^\dd}$, and we describe the indecomposable modules in terms of certain permissible generalized arcs in the surface
$(S,M^\dd,T^\dd)$ and the irreducible morphisms in terms of pivots of these arcs in the surface. In this way, we construct a combinatorial category $\cale^\dd$ of permissible generalized arcs in $(S,M^\dd,T^\dd)$ and define a functor \[H\colon \cale^\dd\to \cmod B_{T^\dd}.\] This construction is inspired by the construction of the cluster category of type $\AA$ as a category of diagonals in a convex polygon by Caldero, Chapoton and the second author in \cite{CCS}. We then show the following theorem.
\begin{theorem*3}\
\begin{enuma}
\item The functor $H$ is faithful and induces a dense, faithful functor from $\cale^\dd$ to
the category of string modules over $B_{T\dag}$. Moreover, $H$ maps irreducible morphisms to irreducible morphisms and commutes with Auslander-Reiten translations.
\item If the surface $S$ is a disc, then $H$ is an equivalence of categories.
\item If the algebra $B_{T^\dd}$ is of finite representation type, then $H$ is an equivalence of categories.
\end{enuma}\end{theorem*3}
As an application of our results, we provide a geometric model for the module category of any iterated tilted algebra of type $\AA$ of global dimension two in terms of permissible diagonals in a partially triangulated polygon.
The paper is organized as follows. In section \ref{sect 2}, we recall definitions and results that we need at a later stage. Section \ref{sect 3} is devoted to the definition of surface algebras and their fundamental properties. The computation of the AG-invariant is contained in section \ref{sect 4} and, in section \ref{sect 5}, we study the module categories of surface algebras in terms of arcs in the surface. The application to iterated tilted algebras of type $\AA$ is presented in section \ref{sect 6}.
\section{Preliminaries and notation}\label{sect 2}
In this section, we recall concepts that we need and fix notations.
\subsection{Gentle algebras, string modules and band modules}
Recall from \cite{AS} that a finite-dimensional algebra $B$ is {\em gentle} if it admits a
presentation $B=kQ/I$ satisfying the following {\nobreak conditions:}
\begin{itemize}
\item[(G1)] At each point of $Q$, at most two arrows start and at most two arrows stop.
\item[(G2)] The ideal $I$ is generated by paths of length 2.
\item[(G3)] For each arrow $ b$ there is at most one arrow $ a$
and at most one arrow $ c$ such that $ a b \in I$
and $ b c \in I$.
\item[(G4)] For each arrow $ b$ there is at most one arrow $ a$
and at most one arrow $ c$ such that $ a b \not\in I$
and $ b c \not\in I$.
\end{itemize}
An algebra $B=kQ/I$ where $I$ is generated by paths and $(Q,I)$ satisfies the two
conditions (G1) and (G4) is called a \df{string algebra} (see \cite{BR}), thus every
gentle algebra is a string algebra.
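Conditions (G1)--(G4) are finite checks on the bound quiver and can be verified mechanically. The following Python sketch is illustrative (the encoding is ours: arrows are source--target pairs, $I$ is a set of composable pairs of arrow names, and paths are composed from left to right):

```python
def is_gentle(arrows, relations):
    """arrows: dict arrow name -> (source, target).
    relations: set of pairs (a, b) of arrow names with t(a) = s(b),
    each representing a zero-relation ab of length two (condition (G2))."""
    if any(arrows[a][1] != arrows[b][0] for a, b in relations):
        raise ValueError("relations must be composable length-2 paths")
    starts, ends = {}, {}
    for name, (s, t) in arrows.items():
        starts.setdefault(s, []).append(name)
        ends.setdefault(t, []).append(name)
    # (G1): at most two arrows start, and at most two stop, at each point
    if any(len(v) > 2 for v in list(starts.values()) + list(ends.values())):
        return False
    for b in arrows:
        sb, tb = arrows[b]
        pred = ends.get(sb, [])     # arrows a such that ab is a path
        succ = starts.get(tb, [])   # arrows c such that bc is a path
        for flag in (True, False):  # flag=True checks (G3), flag=False checks (G4)
            if sum(((a, b) in relations) == flag for a in pred) > 1:
                return False
            if sum(((b, c) in relations) == flag for c in succ) > 1:
                return False
    return True

# A_3 quiver 1 --a--> 2 --b--> 3 with relation ab: gentle
print(is_gentle({'a': (1, 2), 'b': (2, 3)}, {('a', 'b')}))  # True
# two arrows into 3, one out, no relations: (G4) fails
print(is_gentle({'a': (1, 3), 'b': (2, 3), 'c': (3, 4)}, set()))  # False
```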
Butler and Ringel have shown in \cite{BR} that, for a finite dimensional string algebra, there are two types of indecomposable modules, the string modules and the band modules.
We recall the definitions here.
For any arrow $ b \in Q_1$, we denote by $ b^{-1}$ a \emph{formal inverse} for $ b$, with $s( b^{-1})=t( b)$, $t( b^{-1})=s( b)$, and we set $( b^{-1})^{-1} = b$.
A \emph{walk} of length $n \geq 1$ in $Q$ is a sequence $w=a_1 \cdots a_n$ where each $a_i$ is an arrow or a formal inverse of an arrow and such that $t(a_i) = s(a_{i+1})$, for any $i \in \{1, \ldots, n-1\}$. The \emph{source of the walk} $w$ is $s(w)=s(a_1)$ and the \emph{target of the walk} $w$ is $t(w) = t(a_n)$. We define a \emph{walk $e_i$ of length zero} for any point $i \in Q_0$ such that $s(e_i)=t(e_i)=i$.
If $(Q,I)$ is a bound quiver, a \emph{string} in $(Q,I)$ is either a walk of length zero or a walk $w=a_1 \cdots a_n$ of length $n \geq 1$ such that $a_i \neq a_{i+1}^{-1}$ for any $i \in \{1, \ldots, n-1\}$, and such that no subwalk of the form $a_i a_{i+1} \cdots a_t$, with $1 \leq i \leq t \leq n$, nor its inverse belongs to $I$. A \emph{band} is a string $b=a_1\cdots a_n$ such that $s(b)=t(b)$, any power of $b$ is a string, but $b$ itself is not a proper power of a string.
Any string $w$ gives rise to a string module $M(w)$ over $B$, whose underlying vector space consists of the direct sum of one copy of the field $k$ for each vertex in the string $w$, and the action of an arrow $a$ on $M(w)$ is induced by the identity morphism $1 \colon k\to k$ between the copies of $k$ at the endpoints of $a$, if $a$ or $a^{-1}$ is in $w$, and is zero otherwise. Each band $b=a_1a_2\cdots a_n$ defines a family of band modules $M(b,\ell,\phi)$, where $\ell$ is a positive integer and $\phi$ is an automorphism of $k^\ell$. The underlying vector space of $M(b,\ell,\phi)$ is the direct sum of one copy of $k^\ell$ for each vertex in $b$, and the action of an arrow $a$ is induced by the identity morphism $1\colon k^\ell\to k^\ell$ if $a$ is one of $a_1,a_2,\ldots,a_{n-1}$, by the automorphism $\phi$ if $a=a_n$, and is zero if neither $a$ nor $a^{-1}$ occurs in $b$.
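The passage from a string to its string module is purely combinatorial and easy to make concrete. The Python sketch below is an illustration on a made-up two-arrow quiver (the encoding — tokens \texttt{'a'} for an arrow and \texttt{'a-'} for its formal inverse — is ours, not the paper's); it recovers the vertex sequence of a walk, hence the dimension vector of $M(w)$, and the basis pairs on which each arrow acts as the identity.

```python
def string_module(arrows, walk):
    """arrows: dict name -> (source, target). walk: list of tokens,
    'a' for an arrow a, 'a-' for its formal inverse.
    Returns (dimension vector, action), where action[a] lists the pairs
    (i, j) on which a acts as the identity z_i -> z_j on basis vectors."""
    first = walk[0]
    # vertex sequence v_0, ..., v_n visited by the walk
    verts = [arrows[first.rstrip('-')][1 if first.endswith('-') else 0]]
    for tok in walk:
        s, t = arrows[tok.rstrip('-')]
        prev = verts[-1]
        if tok.endswith('-'):
            assert prev == t, "walk is not composable"
            verts.append(s)
        else:
            assert prev == s, "walk is not composable"
            verts.append(t)
    dim = {}
    for v in verts:
        dim[v] = dim.get(v, 0) + 1
    action = {}
    for i, tok in enumerate(walk):
        a = tok.rstrip('-')
        # a direct letter maps z_i -> z_{i+1}; an inverse letter z_{i+1} -> z_i
        pair = (i + 1, i) if tok.endswith('-') else (i, i + 1)
        action.setdefault(a, []).append(pair)
    return dim, action

# walk a c^{-1} on the quiver 1 --a--> 2 <--c-- 3
dim, action = string_module({'a': (1, 2), 'c': (3, 2)}, ['a', 'c-'])
print(dim)     # {1: 1, 2: 1, 3: 1}
print(action)  # {'a': [(0, 1)], 'c': [(2, 1)]}
```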
\subsection{The AG-invariant}
First we recall from \cite{AG} the combinatorial definition of the derived invariant of
Avella-Alaminos and Geiss. We will refer to this as the AG-invariant. Throughout let $A$ be a gentle
$k$-algebra with bound quiver $(Q,I)$, $Q=(Q_0,Q_1,s,t)$ where $s,t\colon Q_1\to Q_0$ are the source and
target functions on the arrows.
\begin{dfn}
A \df{permitted path} of $A$ is a path $C=a_1a_2\cdots a_n$ which is not in $I$. We
say a permitted path is a \df{non-trivial permitted thread} of $A$ if, for all arrows $ b\in Q_1$,
neither $ b C$ nor $C b$ is a permitted path. These are the `maximal' permitted paths of
$A$. Dual to this, we define the \df{forbidden paths} of $A$ to be a sequence
$F= a_1a_2\cdots a_n$ such that $a_i\ne a_j$ unless $i=j$, and
$a_ia_{i+1}\in I$, for $i=1,\dots,n-1$. A forbidden path $F$ is a \df{non-trivial forbidden thread}
if, for all $ b\in Q_1$, neither $ b F$ nor $F b$ is a forbidden path.
We also require \df{trivial permitted} and \df{trivial forbidden threads}. Let $v\in Q_0$ such that
there is at most one arrow starting at $v$ and at most one arrow ending at $v$.
Then
the constant path $e_v$ is a trivial permitted thread if, whenever there are arrows $ b, c\in Q_1$ such that $s( c)=v=t( b)$, we have $ b c\not\in I$. Similarly, $e_v$ is a trivial forbidden thread if, whenever there are arrows $ b, c\in Q_1$ such that $s( c)=v=t( b)$, we have $ b c\in I$.
Let $\calh$ denote the set of all permitted threads and $\calf$ denote the set of all forbidden threads.
\end{dfn}
Notice that each arrow in $Q_1$ is both a permitted and a forbidden
path. Moreover, the constant path at each sink and at each source will
simultaneously satisfy the definition for a permitted and a forbidden
thread, because there are no paths going through such a vertex.
We fix a choice of functions $\sigma,\e\colon Q_1\to \{-1,1\}$ characterized by the following conditions.
\begin{enumerate}
\item If $ b_1\neq b_2$ are arrows with $s( b_1)=s( b_2)$, then
$\sigma( b_1)=-\sigma( b_2)$.
\item If $ b_1\neq b_2$ are arrows with $t( b_1)=t( b_2)$, then $\e( b_1)=-\e( b_2)$.
\item If $ b, c$ are arrows with $s( c)=t( b)$ and $ b c\not\in I$,
then $\sigma( c)=-\e( b)$.
\end{enumerate}
Note that the functions need not be unique. Given a pair $\sigma$ and $\e$, we can define
another pair $\sigma':=-\sigma$ and $\e':=-\e$.
These functions naturally extend to paths in $Q$. Let
$C= a_1a_{2}\cdots a_{n-1}a_n$ be a path. Then
$\sigma(C) = \sigma(a_1)$ and $\e(C)=\e(a_n)$. We can also extend these functions
to trivial threads. Let $x,y$ be vertices in $Q_0$, $h_x$ the trivial permitted thread at $x$, and
$p_y$ the trivial forbidden thread at $y$. Then we set
\begin{align*}
\sigma(h_x) = -\e(h_x) &= -\sigma(a), & \IF s(a)&=x, \text{ or}\\
\sigma(h_x) = -\e(h_x) &= -\e( b), & \IF t( b)&=x
\end{align*}
and
\begin{align*}
\sigma(p_y) = \e(p_y)& = -\sigma( c), & \IF s( c) &= y, \text{ or}\\
\sigma(p_y) =\e(p_y)& = -\e(d), & \IF t(d) &= y ,
\end{align*}
where $a, b, c,d\in Q_1$. Recall that these arrows are unique if they exist.
\begin{dfn}
The AG-invariant $\AG(A)$ is defined to be a function depending on the ordered pairs generated
by the following algorithm.
\begin{enumerate}
\item \begin{enumerate}
\item Begin with a permitted thread of $A$, call it $H_0$.
\item \label{alg:HtoF} To $H_i$ we associate $F_i$, the forbidden thread which ends at
$t(H_i)$ and such that $\e(H_i)=-\e(F_i)$. Define $\varphi(H_i) := F_i$.
\item \label{alg:FtoH}To $F_i$ we associate $H_{i+1}$, the permitted thread which starts
at $s(F_i)$ and such that $\sigma(F_i)=-\sigma(H_{i+1})$. Define $\psi(F_i):= H_{i+1}$.
\item Stop when $H_n=H_0$ for some natural number $n$. Define $m=\sum_{i=1}^n \ell(F_i)$,
where $\ell(C)$ is the length (number of arrows) of a path $C$. In this way we obtain the
pair $(n,m)$.
\end{enumerate}
\item Repeat (1) until all permitted threads of $A$ have occurred.
\item For each oriented cycle in which each pair of consecutive arrows form a relation, we associate
the ordered pair $(0,n)$, where $n$ is the length of the cycle.
\end{enumerate}
We define $\AG(A)\colon \NN^2\to \NN$ where $\AG(A)(n,m)$ is the number of times the ordered pair
$(n,m)$ is formed by the above algorithm.
\end{dfn}
The algorithm defining $\AG(A)$ can be thought of as dictating a walk in the quiver $Q$, where we
move forward on permitted threads and backward on forbidden threads.
\begin{remark}\label{rmk:bijection}
Note that the steps \eqref{alg:HtoF} and \eqref{alg:FtoH} of this algorithm give two different bijections $\varphi$ and $\psi$ between the set of permitted
threads $\calh$ and the set of forbidden threads which do not start and end in
the same vertex.
We will often refer to the permitted (respectively forbidden) thread ``corresponding'' to a given
forbidden (respectively permitted) thread. This correspondence is referring to the bijection $\varphi$ (respectively $\psi$).
\end{remark}
\begin{thm}\label{thm AG}
\begin{enuma}
\item Any two derived equivalent gentle algebras have the same AG-invariant.
\item Gentle algebras which have at most one (possibly non-oriented) cycle in their quiver are derived equivalent if and only if they have the same AG-invariant.
\end{enuma}
\end{thm}
\begin{proof}
See \cite[Theorems A and C]{AG}.
\end{proof}
\subsection{Surfaces and triangulations}\label{sect surfaces}
In this section, we recall a construction of \cite{FST} in the
case of surfaces without punctures.
Let $S$ be a connected oriented unpunctured Riemann surface with boundary $\partial S$ and let $M$ be a
non-empty finite subset of the boundary $\partial S$. The elements of $M$ are called \emph{marked
points}. We will refer to the pair $(S,M)$ simply by \emph{unpunctured surface}. The orientation of
the surface will play a crucial role.
We say that two curves in $S$ \emph{do not cross} if they do not intersect
each other except that endpoints may coincide.
\begin{dfn}\label{def arc}
An \emph{arc} $\zg$ in $(S,M)$ is a curve in $S$ such that
\begin{itemize}
\item[(a)] the endpoints are in $M$,
\item[(b)] $\zg$ does not cross itself,
\item[(c)] the relative interior of $\zg$ is disjoint from $M$ and
from the boundary of $S$,
\item[(d)] $\zg$ does not cut out a monogon or a digon.
\end{itemize}
\end{dfn}
\begin{dfn}
A \emph{generalized arc} is a curve in $S$ which satisfies the conditions (a), (c) and (d)
of Definition \ref{def arc}.
\end{dfn}
Curves that connect two
marked points and lie entirely on the boundary of $S$ without passing
through a third marked point are called \emph{boundary segments}.
Hence an arc is a curve between two marked points, which does not
intersect itself nor the boundary except possibly at its endpoints and
which is not homotopic to a point or a boundary segment.
Each generalized arc is considered up to isotopy inside the class of such curves. Moreover, each
generalized arc is considered up to orientation, so if a generalized arc has endpoints $a,b\in M$ then it can
be represented by a curve that runs from $a$ to $b$, as well as by a curve that runs from $b$ to $a$.
For any two arcs $\zg,\zg'$ in $S$, let $e(\zg,\zg')$ be the minimal number of crossings of
$\zg$ and $\zg'$, that is, $e(\zg,\zg')$ is the minimum of the numbers of crossings of
curves $\za$ and $\za'$, where $\za$ is isotopic to $\zg$ and $\za'$ is isotopic to $\zg'$.
Two arcs $\zg,\zg'$ are called \emph{non-crossing} if $e(\zg,\zg')=0$. A
\emph{triangulation} is a maximal collection of non-crossing arcs.
The arcs of a triangulation cut the surface into
\emph{triangles}. Since $(S,M)$ is an unpunctured surface, the three sides of each triangle
are distinct (in contrast to the case of surfaces with punctures). A triangle in $T$ is
called an \emph{internal triangle} if none of its sides is a boundary segment. We often
refer to the triple $(S,M,T)$ as a \emph{triangulated surface}.
Any two triangulations of $(S,M)$ have
the same number of elements, namely \[n=6g+3b+|M|-6,\] where $g$ is the
genus of $S$, $b$ is the number of boundary components and $|M|$ is the
number of marked points. The number $n$ is called the \emph{rank} of $(S,M)$.
Moreover, any two triangulations of $(S,M)$ have
the same number of triangles, namely\[ n-2(g-1)-b.
\]
Note that $b> 0$ since the set $M$ is not empty. We do not allow $n$ to be negative or zero, so we
have to exclude the cases where $(S,M)$ is a disc with one, two or three marked points. Table
\ref{table 1} gives some examples of unpunctured surfaces.
\begin{table}
\begin{center}
\begin{tabular}{ c | c | c || l }
\ $b$\ \ &\ \ $ g$ \ \ & \ \ $ \vert M\vert$ \ \ &\ surface \\ \hline
$1$ & 0 & $n+3$ & \ disc \\
1 & 1 & $n-3$ & \ torus with disc removed \\
1 & 2 & $n-9 $& \ genus 2 surface with disc removed \\\hline
2 & 0 & $n$ & \ annulus\\
2 & 1 & $n-6$ & \ torus with 2 discs removed \\
2 & 2 & $n-12$ & \ genus 2 surface with 2 discs removed \\ \hline
3 & 0 & $n-3$ & \ pair of pants \\
\end{tabular}
\medskip
\end{center}
\caption{Examples of unpunctured surfaces}\label{table 1}
\end{table}
\subsection{Jacobian algebras from surfaces}
If $T=\{\tau_1,\tau_2,\ldots,\tau_n\}$ is a triangulation of an unpunctured surface $(S,M)$, we define a quiver $Q_T$ as follows.
$Q_T$ has $n$ vertices, one for each arc in $T$. We will denote the vertex corresponding to $\tau_i$
simply by $i$. The number of arrows from $i$ to $j$ is the number of triangles $\triangle$ in $T$ such
that the arcs $\tau_i,\tau_j$ form two sides of $\triangle$, with $\tau_j$ following $\tau_i$ when going
around the triangle $\triangle$ in the \orientationofthearrow orientation, see Figure \ref{fig quiver}
for an example. Note that the interior triangles in $T$ correspond to oriented 3-cycles in $Q_T$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.55]
{ []
\draw (0,0) circle[radius=5cm] ;
\filldraw[fill=black!20] (1.5,1.5) circle[radius=1cm] ;
\filldraw[fill=black!20] (-1.5,-1.5) circle[radius=1cm] ;
\foreach \x [count=\n] in {0,1,...,11}{
\node[name=p\n] at ($(1.5,1.5)+(30*\x:1cm)$) {};
\node[name=n\n] at ($(-1.5,-1.5)+(30*\x:1cm)$) {};
\node[name=o\n] at (30*\x:5cm) {};
}
\node[solid] at (90:5cm) {};
\node[solid] at (180:5cm) {};
\node[solid] at ($(-1.5,-1.5)+(60:1cm)$) {};
\node[solid] at ($(1.5,1.5)+(180:1cm)$) {};
\node[solid] at ($(1.5,1.5)+(-30:1cm)$) {};
\draw (o7.center) -- (p7.center) node[int] {$\tau_1$}
(o7.center) ..controls +(0:2cm) and +(135:1cm) .. (n3) node[int] {$\tau_2$}
(o7.center) .. controls +(-75:2cm) and +(170:1cm) .. ($(o9)!.75!(n10)$) node[int] {$\tau_3$}
($(o9)!.75!(n10)$) .. controls +(-10:.5cm) and +(200:.5cm) .. ($(n12)!.3!(o10)$)
($(n12)!.3!(o10)$) .. controls +(20:1cm) and (-70:1cm) .. (n3.center)
(p12.center) .. controls +(60:1cm) and +(-60:.5cm) .. ($(p4)!.33!(o3)$) node[int,pos=.9] {$\tau_7$}
($(p4)!.33!(o3)$) .. controls +(130:.cm) and +(-40:1cm) .. (o4.center)
(n3.center) .. controls +(0:3cm) and +(-90:1cm) .. ($(p1)!.7!(o1)$) node[int] {$\tau_4$}
($(p1)!.7!(o1)$) .. controls +(90:1cm) and +(-20:1cm) .. (o4.center)
(o4.center) -- (p7.center) node[int] {$\tau_8$}
(p7.center) -- (n3.center) node[int] {$\tau_6$}
(p12.center) .. controls +(-110:1cm) and +(30:1cm) .. (n3.center) node[int] {$\tau_5$};
}
{[xshift=7cm,scale=1.5]
\node[name=1] at (1,1) {1};
\node[name=2] at (0,0) {2};
\node[name=3] at (1,-1) {3};
\node[name=4] at (5,-1) {4};
\node[name=5] at (3.5,0) {5};
\node[name=6] at (2,0) {6};
\node[name=7] at (5,1) {7};
\node[name=8] at (3,1.5) {8};
\draw[->] (1) edge (2) edge (8)
(2) edge (6) edge (3)
(6) edge (1) edge (5)
(5) edge (4)
(4) edge (7) edge (3)
(7) edge (5) edge (8);
}
\end{tikzpicture}
\caption{A triangulation and its quiver} \label{fig quiver}
\end{figure}
In \cite{FST}, the authors associate a cluster algebra $\mathcal{A}(Q_T)$ to this quiver; the
cluster algebras obtained in this way are called cluster algebras from (unpunctured) surfaces and
have been studied in \cite{FST,FT,S3,MSW, FeShTu}, and the corresponding cluster categories in
\cite{CCS, BZ}.
Following \cite{ABCP,LF}, let $W$ be the sum of all oriented 3-cycles in $Q_T$. Then $W$ is a
potential, in the sense of \cite{DWZ}, which gives rise to a Jacobian algebra $B_T=
\textup{Jac}(Q_T,W)$, which is defined as the quotient of the path algebra of the quiver $Q_T$ by
the two-sided ideal generated by the subpaths of length two of each oriented 3-cycle in $Q_T$.
\begin{prop}\label{prop 2.7} $B_T$ is a gentle algebra. \end{prop}
\begin{proof}
This is shown in \cite{ABCP}.
\end{proof}
\subsection{Cuts and admissible cuts} Let $Q$ be a quiver. An oriented cycle $C$ in $Q$ is called \emph{chordless} if $C$ is a full subquiver of $Q$ and has the property that for every vertex $v$ in $C$, there is exactly one arrow in $C$ starting and one arrow in $C$ ending at $v$. We define a \emph{cut} of $Q$ to be a
subset of the set of arrows of $Q$ with the property that each arrow in the cut lies in an oriented
chordless cycle in $Q$. Following \cite{Fernandez}, such a cut is called \emph{admissible} if it
contains exactly one arrow of each oriented chordless cycle of $Q$.
Now let $C=kQ/I$ be a quotient of the path algebra of $Q$ by an admissible ideal $I$. Then an
algebra is said to be obtained from $C$ by a cut if it is isomorphic to a quotient $kQ/\langle I\cup
\zG \rangle$, where $\zG$ is a cut of $Q$.
\begin{prop} \label{prop 2.8} Any algebra obtained by a cut from a gentle algebra is gentle. \end{prop}
\begin{proof} If a bound quiver satisfies the conditions (G1)--(G4) of the definition of a
gentle algebra, then deleting an arrow will produce a bound quiver that still satisfies
these conditions.
\end{proof}
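When the oriented chordless cycles of $Q$ are pairwise arrow-disjoint (as is the case for the $3$-cycles coming from distinct internal triangles of a triangulation), admissible cuts are simply choices of one arrow per cycle and can be enumerated directly. A minimal illustrative Python sketch (the arrow names are hypothetical, and the cycles are assumed given):

```python
from itertools import product

def admissible_cuts(cycles):
    """All admissible cuts of a quiver whose oriented chordless cycles
    (given as tuples of arrow names) are pairwise arrow-disjoint:
    pick exactly one arrow from each cycle."""
    return [frozenset(choice) for choice in product(*cycles)]

# two arrow-disjoint oriented 3-cycles, as for two internal triangles
cycles = [('a1', 'a2', 'a3'), ('b1', 'b2', 'b3')]
cuts = admissible_cuts(cycles)
print(len(cuts))                        # 3 * 3 = 9
print(frozenset({'a1', 'b2'}) in cuts)  # True
```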
\section{Algebras from surfaces without punctures} \label{sect 3}
In this section, we introduce the surface algebras and develop their fundamental properties.
Let $(S,M)$ be a surface without punctures, $T$ a
triangulation, $Q_T$ the corresponding quiver, and $B_T$ the Jacobian algebra. Throughout this
section, we assume that, if $S$ is a disc, then $M$ has at least $5$ marked points, thus we exclude
the disc with $4$ marked points. Recall that the oriented $3$-cycles in the quiver $Q_T$ are in
bijection with the interior triangles in the triangulation $T$.
\subsection{Cuts of triangulated surfaces}
We want to define a geometric object which corresponds to cuts of the quiver $Q_T$, and, to that
purpose, we modify the set $M$ to a new set of marked points $M^\dagger$, and we modify the
triangulation $T$ to a partial triangulation $T^\dagger$ of the surface $(S,M^\dagger)$.\footnote{We use the dagger symbol $\dd$ to indicate the cut.}
We need some terminology. Each arc has two ends defined by trisecting the arc and deleting the
middle piece. If $\triangle$ is a triangle with sides $\za,\zb,\zg$, then the six ends of the three sides
can be matched into three pairs such that each pair forms one of the angles of the triangle $\triangle$.
Let $\triangle$ be an internal triangle of $T$ and let $v\in M$ be one of its vertices. Let $\zd'$ and
$\zd''$ be two curves on the boundary, both starting at $v$ but going in opposite directions, and
denote by $v'$ and $v''$ their respective endpoints. Moreover, choose $\zd',\zd''$ short enough
such that $v'$ and $v''$ are not in $M$, and no point of $M$ other than $v$ lies on the curves
$\zd',\zd''$. We can think of $v',v''$ being obtained by moving the point $v$ a small amount in
either direction along the boundary, see Figure \ref{fig cut def} for an example.
Define \[\chi_{v,\triangle} (M) =\left(M\setminus\{v\}\right)\cup\{v',v''\}.\]
\begin{figure}
\begin{tikzpicture}[scale=1.5,lbl/.style={fill=white,opacity=.66,shape=circle,inner sep=1,outer sep =2}]
{[]
\shade[shading=axis,bottom color=black!20, top color=black!5] (0,0) rectangle (3,.25);
\draw (0,0) -- (3,0) node[solid,pos=.5,scale=1,name=v] {};
\node[above,lbl] at (v) {$v$};
\draw (v.center) -- ++(-60:2cm) node[int,pos=.8] {$\beta$}
-- ++(180:2cm) node[above,pos=.66] {$\triangle$}
-- (v.center) node[int,pos=.2] {$\alpha$};
\draw[line width=1.5pt]
(v.center) -- +(-60:.75cm) node[right,pos=.4] {$\bar\beta$}
(v.center) -- +(-120:.75) node[left,pos=.4] {$\bar\alpha$}
(v.center) -- +(180:.75cm) node[above,lbl,pos=.8] {$\delta'$}
(v.center) -- +(0:.75) node[above,lbl,pos=.8] {$\delta''$};
}
\draw[->] (3,-.86) -- (4,-.86) node[above,pos=.5] {$\cut{v}{\alpha}{\beta}$};
{[xshift=4cm]
\shade[shading=axis,bottom color=black!20, top color=black!5] (0,0) rectangle (3,.25);
\draw (0,0) -- (3,0) node[solid,pos=.25,scale=1,name=v'] {} node[solid,pos=.75,scale=1,name=v''] {};
\node[above,lbl] at (v') {$v'$};
\node[above,lbl] at (v'') {$v''$};
\draw (v') -- ++(270:1.73cm) node[int] {$\alpha\dag$}
-- ++(0:1.5cm) node[above,pos=.25] {$\triangle\dag$}
-- (v'') node[int] {$\beta\dag$};
}
\end{tikzpicture}
\caption{A local cut at $v$ relative to $\alpha,\beta$. The internal triangle $\triangle$ in $T$ becomes a quasi-triangle $\triangle^\dd$ in $T^\dd$.}\label{fig cut def}
\end{figure}
Let $\bar\za$ and $\bar \zb$ be ends of two sides $\za,\zb$ of $\triangle$ such that $\bar\za,\bar\zb$
form an angle of $\triangle$ at $v$. If $\bar\zg$ is an end of an arc $\zg\in T$ such that $\bar\zg$
is incident to $v$, let $\bar\zg'$ be a curve in the interior of $S$ homotopic to
\[\begin{cases}
\text{the concatenation of $\bar\zg$ and $\zd'$,} & \text{if $\bar\za$ lies between $\bar \zg$ and $\bar\zb$, or $\bar \zg=\bar\za$;}\\
\text{the concatenation of $\bar\zg$ and $\zd''$,} & \text{if $\bar\zb$ lies between $\bar \zg$ and $\bar\za$, or $\bar \zg=\bar\zb$.}
\end{cases}\]
Then let $\chi_{v,\za,\zb}(\zg)$ be the arc obtained from $\zg$ by
replacing the end $\bar\zg$ by $\bar\zg'$. If both ends of $\zg$ are incident to $v$ then
$\chi_{v,\za,\zb}(\zg)$ is obtained from $\zg$ by replacing both ends with the appropriate new end;
this is the case in the example in Figure \ref{fig cut}.
If $\gamma\ne\alpha,\beta$ then, abusing notation, we will denote the arc $\chi_{v,\za,\zb}(\zg)$ again by $\gamma$. The arcs obtained from $\za$ and $\zb$ will be denoted by $\za^\dd=\chi_{v,\za,\zb}(\za)$ and $\zb^\dd=\chi_{v,\za,\zb}(\zb)$, see Figure \ref{fig cut def}.
Define
\[\chi_{v,\za,\zb}(T)=\left(T\setminus \{\zg\in T\mid \zg \textup{ incident to $v$}\}\right)
\cup \{\chi_{v,\za,\zb}(\zg)\mid \zg \textup{ incident to $v$}\} .\]
Finally, let $\chi_{v,\za,\zb}(S,M,T)=(S,\chi_{v,\triangle}(M),\chi_{v,\za,\zb}(T))$.
Let us point out that
$(S,\chi_{v,\triangle}(M))$ is a surface which has exactly one marked point more than $(S,M)$, and that
$\chi_{v,\za,\zb}(T)$ has the same number of arcs as $T$. Therefore $\chi_{v,\za,\zb}(T)$ is a
partial triangulation of the surface $(S,\chi_{v,\triangle}(M))$.
We denote by $\triangle^\dd$ the quadrilateral with sides $\za^\dd,\zb^\dd,\zg$ and the new boundary segment between $v'$ and $v''$.
\begin{dfn} \label{def cut}\
\begin{enumerate}
\item The partially triangulated surface $\chi_{v,\za,\zb}(S,M,T)$ is called the \emph{local cut} of
$(S,M,T)$ at $v$ relative to $\za,\zb$.
\item A \emph{cut} of the triangulated surface $(S,M,T)$ is a partially triangulated surface
$(S,M^\dagger,T^\dagger)$ obtained by applying a sequence of local cuts $\chi_{v_1,\za_1,\zb_1},
\ldots, \chi_{v_t,\za_t,\zb_t}$ to $(S,M,T)$, subject to the condition that the triangle $\triangle_i$ in
the $i$-th step is still an internal triangle after $i-1$ steps.
\end{enumerate}
\end{dfn}
Thus we are allowed to cut each internal triangle of $T$ at most once.
The quadrilaterals $\triangle^\dd_i$ in $T^\dd$ corresponding to $\triangle_i$ in $T$ are called \emph{quasi-triangles}. Note that a quasi-triangle is a quadrilateral that has exactly one side on the boundary.
\begin{dfn}
A cut of $(S,M,T)$ is called an \emph{admissible cut} if every internal triangle of $T$ is cut exactly once.
\end{dfn}
\begin{figure}
\centering
\begin{tikzpicture}[scale=.33]
{[]
\draw (0,0) circle [radius=5cm] ;
\filldraw[fill=black!20] (0,0) circle [radius=1.5cm] ;
\foreach \x [count=\n] in {0,1,2,3}{
\node[name=i\n] at (90*\x:1.5cm) {};
\node[name=o\n] at (90*\x:5cm) {};
}
\node[solid] at (90:1.5cm) {};
\node[solid,label=90:$v$] at (90:5cm) {};
\node[solid] at (-90:5cm) {};
\draw (o2.center) .. controls +(-135:2) and +(90:2) .. ($(o3)!.25!(i3)$) node[int] {$\gamma$}
($(o3)!.25!(i3)$) .. controls +(-90:2) and +(180:2) .. ($(o4)!.25!(i4)$)
($(o4)!.25!(i4)$) .. controls +(0:2) and +(-90:2) .. ($(o1)!.25!(i1)$)
($(o1)!.25!(i1)$) .. controls +(90:2) and +(-45:2) .. (o2.center)
(o2.center) -- (i2.center) node[int] {$\alpha$}
(i2.center) .. controls +(130:1) and +(90:1.7) .. ($(o3)!.80!(i3)$)
($(o3)!.80!(i3)$) .. controls +(0,-1) and +(-1.7,0) .. ($(o4)!.65!(i4)$) node[pos=0,left] {$\triangle$}
($(o4)!.65!(i4)$) .. controls +(1.7,0) and +(0,-1) .. ($(o1)!.50!(i1)$)
($(o1)!.50!(i1)$) .. controls +(0,1) and +(-45:1em) .. (o2.center) node[int,pos=.2] {$\beta$};
\node[below] at (o4) {$(S,M,T)$};
}
{[yshift=-12cm,xshift=-6cm]
\draw (0,0) circle [radius=5cm] ;
\filldraw[fill=black!20] (0,0) circle [radius=1.5cm] ;
\foreach \x [count=\n] in {0,1,2,3}{
\node[name=i\n] at (90*\x:1.5cm) {};
\node[name=o\n] at (90*\x:5cm) {};
}
\node[name=leftv,solid,label=125:$v'$] at (125:5cm) {};
\node[solid] at (90:1.5cm) {};
\node[name=rightv,solid,label=90:$v''$] at (65:5cm) {};
\node[solid] at (-90:5cm) {};
\draw (leftv.center) .. controls +(-45:1) and +(90:2) .. ($(o3)!.25!(i3)$) node[int] {$\gamma\dag$}
($(o3)!.25!(i3)$) .. controls +(-90:2) and +(180:2) .. ($(o4)!.25!(i4)$)
($(o4)!.25!(i4)$) .. controls +(0:2) and +(-90:2) .. ($(o1)!.25!(i1)$)
($(o1)!.25!(i1)$) .. controls +(90:2) and +(-45:2) .. (rightv.center)
(rightv) -- (i2.center) node[int] {$\alpha\dag$}
(i2.center) .. controls +(130:1) and +(90:1.7) .. ($(o3)!.80!(i3)$) node[pos=.5,below left=.5em] {$\triangle\dag$}
($(o3)!.80!(i3)$) .. controls +(0,-1) and +(-1.7,0) .. ($(o4)!.65!(i4)$)
($(o4)!.65!(i4)$) .. controls +(1.7,0) and +(0,-1) .. ($(o1)!.50!(i1)$)
($(o1)!.50!(i1)$) .. controls +(0,1) and +(-45:1em) .. (rightv.center) node[int,pos=.2] {$\beta$};
\node[below] at (o4) {$\cut v\gamma\alpha$};
}
{[yshift=-12cm,xshift=6cm]
\draw (0,0) circle [radius=5cm] ;
\filldraw[fill=black!20] (0,0) circle [radius=1.5cm] ;
\foreach \x [count=\n] in {0,1,2,3}{
\node[name=i\n] at (90*\x:1.5cm) {};
\node[name=o\n] at (90*\x:5cm) {};
}
\node[name=rightv,solid,label=55:$v''$] at (55:5cm) {};
\node[solid] at (90:1.5cm) {};
\node[name=leftv,solid,label=90:$v'$] at (115:5cm) {};
\node[solid] at (-90:5cm) {};
\draw (leftv.center) .. controls +(-135:2) and +(90:2) .. ($(o3)!.25!(i3)$)
($(o3)!.25!(i3)$) .. controls +(-90:2) and +(180:2) .. ($(o4)!.25!(i4)$)
($(o4)!.25!(i4)$) .. controls +(0:2) and +(-90:2) .. ($(o1)!.25!(i1)$) node[int] {$\gamma\dag$}
($(o1)!.25!(i1)$) .. controls +(90:2) and +(-90:2) .. (rightv.center)
(leftv.center) -- (i2.center) node[int] {$\alpha$}
(i2.center) .. controls +(130:1) and +(90:1.7) .. ($(o3)!.80!(i3)$) node[pos=.5,below left=.5em] {$\triangle\dag$}
($(o3)!.80!(i3)$) .. controls +(0,-1) and +(-1.7,0) .. ($(o4)!.65!(i4)$)
($(o4)!.65!(i4)$) .. controls +(1.7,0) and +(0,-1) .. ($(o1)!.50!(i1)$)
($(o1)!.50!(i1)$) .. controls +(0,1) and +(-45:1em) .. (leftv.center) node[int,pos=.2] {$\beta\dag$};
\node[below] at (o4) {$\cut v\beta\gamma$};
}
\end{tikzpicture}
\caption{All of the possible cuts at vertex $v$}\label{fig cut}
\end{figure}
\subsection{Definition of surface algebras}
Let $(S,M^\dd,T^\dd)$ be a cut of $(S,M,T)$ given by the sequence
$(\cut{v_i}{\alpha_i}{\beta_i})_{i=1,2,\dots,t}$. Note that each of the pairs $(\alpha_i,\beta_i)$
corresponds to a pair of vertices $(\alpha_i,\beta_i)$ in the quiver $Q_T$ and each triangle $\triangle_i$ to an arrow
$\alpha_i\to \beta_i$ or $\beta_i\to \alpha_i$.
The collection $T^\dd$ is a partial triangulation of $(S,M^\dd)$; at each local cut
$\cut{v_i}{\alpha_i}{\beta_i}$ the arcs $\alpha_i^\dd$, $\beta_i^\dd$, $\gamma_i$ together
with the boundary segment between $v_i'$ and $v_i''$ form a quasi-triangle $\triangle_i^\dd$
in $T^\dd$. Choose a diagonal $\e_i$ for each of these quasi-triangles, as in
Figure~\ref{fig ei}, and let $\overline{T}^\dd= T^\dd \cup \{\e_1,\e_2,\dots, \e_t\}$. Then $\overline{T}^\dd
$ is a triangulation of $(S,M^\dd)$. Let $Q_{\overline{T}^\dd}$ be the associated quiver.
Note that each quasi-triangle $\triangle_i^\dd$ gives rise to a subquiver with four vertices
$\alpha_i^\dd$, $\beta_i^\dd$, $\gamma_i$, and $\e_i$, consisting of a 3-cycle with
vertices $\e_i$, $\gamma_i$ and either $\alpha_i^\dd$ or $\beta_i^\dd$, and one arrow
connecting the fourth vertex (either $\beta_i^\dd$ or $\alpha_i^\dd$) to the vertex $\e_i$. We may
suppose without loss of generality that these subquivers have the form
\[\begin{tikzpicture}
\node[name=a] at (0,0) {$\alpha_i^\dd$};
\node[name=e] at (1,0) {$\e_i$};
\node[name=g] at (3,0) {$\gamma_i$};
\node[name=b] at (2,-1) {$\beta_i^\dd$};
\path[->] (a) edge (e) (e) edge (g) (g) edge (b) (b) edge (e);
\end{tikzpicture}\]
\begin{figure}
\centering
\begin{tikzpicture}[yscale=.5,xscale=.75]
\shade[shading angle=180] (0,4) rectangle (3,4.5);
\draw (0,0) -- (0,4) node[int] {$\alpha_i^\dd$}
-- (3,4) -- (3,0) node[int] {$\beta_i^\dd$}
-- (0,0) node[int] {$\gamma_i$}
-- (3,4) node[int] {$\e_i$};
\node[solid] at (0,0) {};
\node[solid] at (3,0) {};
\node[solid] at (0,4) {};
\node[solid] at (3,4) {};
\end{tikzpicture}
\caption{A choice for $\e_i$ in $\triangle_i^\dd$}\label{fig ei}
\end{figure}
Now we define a new quiver $Q_{T^\dd}$ corresponding to the partial triangulation $T^\dd$
by deleting the vertices $\e_i$ and replacing each of the paths of length two given by
$\alpha_i^\dd \to \e_i\to \gamma_i$ by an arrow $\alpha_i^\dd \to \gamma_i$. Thus each quasi-triangle $\triangle_i^\dd$ in
${T^\dd}$ gives rise to a subquiver of
$Q_{T^\dd}$ of the form
$\alpha_i^\dd \to \gamma_i\to \beta_i^\dd$.
Next, we define relations on the quiver $Q_{T^\dd}$. First note that the quiver
$Q_{\overline{T}^\dd}$ comes with the potential $W_{\overline{T}^\dd}$ and the Jacobian
algebra $B_{\overline{T}^\dd}= kQ_{\overline{T}^\dd}/I_{\overline{T}^\dd}$, where $I_{\overline{T}^\dd}$
is generated by the set $\calr$ consisting of all subpaths of length two of all oriented
3-cycles. In particular, for each quasi-triangle $\triangle_i^\dd$, we have the three
relations $\e_i\to \gamma_i\to \beta_i^\dd$, $\gamma_i\to \beta_i^\dd\to \e_i$,
and $\beta_i^\dd\to \e_i\to \gamma_i$. Denote by $\calr_i$ the set of these three
relations. Define $I_{T^\dd}$ to be the two-sided ideal generated by all relations in
\[\left(\calr\setminus \Big( \bigcup_{i=1}^t \calr_i\Big) \right)
\cup \left( \bigcup_{i=1}^t \{\alpha_i^\dd\to \gamma_i\to \beta_i^\dd\}\right).\]
Thus each $\triangle_i^\dd$ corresponds to a subquiver
$\alpha_i^\dd\to \gamma_i\to \beta_i^\dd$ with a zero relation.
\begin{dfn}\label{def surface algebra}
A \df{surface algebra of type} $(S,M)$ is a bound quiver algebra $B_{T^\dd}=kQ_{T^\dd}/I_{T^\dd}$ where
$(S,M^\dd,T^\dd)$ is a cut of a triangulated unpunctured surface $(S,M,T)$.
\end{dfn}
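As a small illustrative example (the triangulation chosen here is just one possibility): let $T$ be the triangulation of the disc with six marked points whose three arcs bound an internal triangle, so that $Q_T$ is the 3-cycle
\[1\longrightarrow 2\longrightarrow 3\longrightarrow 1.\]
An admissible cut deletes exactly one of the three arrows, say $3\to 1$, and the resulting surface algebra $B_{T^\dd}$ is the path algebra of the quiver $1\to 2\to 3$ bound by the zero relation given by the composition of the two remaining arrows; this is an iterated tilted algebra of type $\AA_3$ of global dimension two, in accordance with Corollary~\ref{cor 3.6}.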
\subsection{Properties of surface algebras}
\begin{lemma}\label{lem admissible cut}
If $(S,M^\dd,T^\dd)$ is obtained from $(S,M,T)$ by the sequence
of local cuts $(\cut{v_i}{\alpha_i}{\beta_i})_{i=1,\dots,t}$, then $Q_{T^\dd}$ is isomorphic to the quiver
obtained from $Q_T$ by deleting the arrows $\beta_i\to \alpha_i$ for $i=1,2,\dots,t$.
\end{lemma}
\begin{proof}
This follows immediately from the construction.
\end{proof}
\begin{thm}\label{thm adm cut}
If $(S,M^\dd,T^\dd)$ is an admissible cut of $(S,M,T)$ then
\begin{enuma}
\item $Q_{T^\dd}$ is an admissible cut of $Q_T$;
\item $B_{T^\dd}$ is of global dimension at most two, and $B_{T^\dd}$ is of global dimension
one if and only if $B_{T^\dd}$ is a hereditary algebra of type $\AA$ or $\tilde \AA$;
\item The tensor algebra of $B_{T^\dd}$ with respect to the $B_{T^\dd}$-bimodule
\[\Ext^2_{B_{T^\dd}}(DB_{T^\dd},B_{T^\dd})\]
is isomorphic to the Jacobian algebra $B_T$.
\end{enuma}
\end{thm}
\begin{proof}
Part (a). The oriented 3-cycles in $Q_T$ are precisely the chordless cycles in $Q_T$ and,
by Lemma~\ref{lem admissible cut}, $Q_{T^\dd}$ is obtained from $Q_T$ by
deleting exactly one arrow in each chordless cycle; this shows (a).
Part (b). Since $Q_{T^\dd}$ does not contain any oriented cycles, the ideal $I_{T^\dd}$ is generated by
monomial relations which do not overlap. This immediately implies $\gdim B_{T^\dd}\leq 2$.
If $B_{T^\dd}$ is of global dimension at most one, then the ideal $I_{T^\dd}$ is trivial and
hence the ideal $I_T$ is trivial too. It follows that $Q_T$ has no oriented cycles, $T^\dd=T$,
$B_{T^\dd}=B_T$, and that $T$ is a triangulation without internal triangles. The only unpunctured surfaces
that admit such a triangulation are the disc and the annulus, corresponding to the case where
$B_{T^\dd}$ is a hereditary algebra of type $\AA$ or $\tilde \AA$, respectively.
Part (c). Let $\widetilde{B_{T^\dd}}$ denote the tensor algebra. It follows from \cite{ABS} that its
quiver is obtained from $Q_{T^\dd}$ by adding an arrow $\beta_i^\dd\to \alpha_i^\dd$ for each
relation $\alpha_i^\dd \to \gamma_i\to \beta_i^\dd$; thus the quiver of $\widetilde{B_{T^\dd}}$ is
isomorphic to the quiver $Q_T$. Moreover, it follows from \cite[Theorem 6.12]{Keller-deformed}
that $\widetilde{B_{T^\dd}}$ is a Jacobian algebra with potential $\tilde W$ given by the
sum of all 3-cycles $\alpha_i^\dd\to \gamma_i\to \beta_i^\dd \to \alpha_i^\dd$; thus
$\widetilde{B_{T^\dd}}\cong B_T$.
\end{proof}
\begin{cor}\label{cor 3.6}
The admissible cut surface algebras of the disc with $n+3$ marked points are precisely the
iterated tilted algebras of type $\AA_n$ whose global dimension is at most two.
\end{cor}
\begin{proof}
In \cite{BFPPT}, the authors have shown that the quotients by an admissible cut of a cluster-tilted
algebra of type $\AA_n$ are precisely the iterated tilted algebras of type $\AA_n$ of global
dimension at most two. By \cite{CCS}, the cluster-tilted algebras of type $\AA_n$ are precisely the
algebras $B_T$, where $T$ is a triangulation of the disc with $n+3$ marked points. The result now
follows from Theorem~\ref{thm adm cut}.
\end{proof}
\begin{thm}
Every surface algebra is isomorphic to the endomorphism algebra of a partial cluster tilting
object in a generalized cluster category. More precisely, if the surface algebra $B_{T^\dd} $ is given by the
cut $(S,M^\dd,T^\dd)$ of the triangulated surface $(S,M,T)$, then \[B_{T^\dd}\cong \End_{\calc{(S,M^\dd)}}T^\dd,\]
where $T^\dd$ denotes the object in the generalized cluster category $\calc{(S,M^\dd)}$ corresponding to $T^\dd$.
\end{thm}
\begin{proof}
Let $\overline{T}^\dd$ be the completion of $T^\dd$ as in the construction of the quiver $Q_{T^\dd}$.
By \cite{FST}, the triangulation $\overline{T}^\dd$ corresponds to a cluster in the cluster algebra of $(S,M^\dd)$, hence $\overline{T}^\dd$ also corresponds to a cluster-tilting object in the generalized cluster category $\calc{(S,M^\dd)}$, see \cite{Amiot, BZ}.
Thus $T^\dd$ is a partial
cluster-tilting object in $\calc{(S,M^\dd)}$. The endomorphism algebra $\End_{\calc{(S,M^\dd)}}\overline{T}^\dd$ of
the cluster-tilting object $\overline{T}^\dd$ is isomorphic to the Jacobian algebra $B_{\overline{T}^\dd}$.
Let $\cut{v_i}{\alpha_i}{\beta_i}$ be one of the local cuts that defines the cut $(S,M^\dd,T^\dd)$.
In the quiver $Q_{\overline{T}^\dd}$, we have the corresponding subquiver
\[\begin{tikzpicture}
\node[name=a] at (0,0) {$\alpha_i^\dd$};
\node[name=e] at (1,0) {$\e_i$};
\node[name=g] at (3,0) {$\gamma_i$};
\node[name=b] at (2,-1) {$\beta_i^\dd$};
\path[->] (a) edge (e) (e) edge (g) (g) edge (b) (b) edge (e);
\end{tikzpicture}\]
and there are no other arrows in $Q_{\overline{T}^\dd}$ starting or ending at $\e_i$. Each vertex in this
subquiver corresponds to an indecomposable summand of the cluster-tilting object $\overline{T}^\dd$, and
each non-zero path corresponds to a non-zero morphism between the indecomposable summands
associated to the endpoints of the path. Thus in $\calc{(S,M^\dd)}$, there are non-zero morphisms
$f_i\colon \beta_i^\dd\to \gamma_i$, $g_i\colon \gamma_i\to \e_i$, $h_i\colon \e_i \to \beta_i^\dd$,
$u_i\colon \e_i \to \alpha_i^\dd$, and the compositions $g_if_i$, $h_ig_i$, $f_ih_i$ are zero, but the
composition $u_ig_i$ is non-zero.
Removing the summand $\e_i$ from the cluster-tilting object $\overline{T}^\dd$ and considering
$\End_{\calc{(S,M^\dd)}}(\overline{T}^\dd\setminus \e_i)$, the only non-zero morphisms between the summands
$\alpha_i^\dd,$ $\beta_i^\dd$, and $\gamma_i$ are $f_i$ and $u_ig_i$, and the composition
$(u_ig_i)f_i$ is zero, since $g_if_i$ is zero. Thus, locally, the quiver of $\End_{\calc{(S,M^\dd)}}(\overline{T}^\dd\setminus \e_i)$ is
$\alpha_i^\dd \to \gamma_i\to \beta_i^\dd$ and the composition of the two arrows is zero in
$\End_{\calc{(S,M^\dd)}}(\overline{T}^\dd\setminus \e_i)$.
The endomorphism algebra $\End_{\calc{(S,M^\dd)}}(T^\dd)$ of the partial cluster tilting object $T^\dd$ is obtained
by a finite number of such local transformations, each of which corresponds to a local cut
$\cut{v_i}{\alpha_i}{\beta_i}$. Thus $\End_{\calc{(S,M^\dd)}}(T^\dd) = kQ_{T^\dd}/I_{T^\dd} = B_{T^\dd}$.
\end{proof}
\begin{thm}
Every surface algebra is gentle.
\end{thm}
\begin{proof}
Let $B_{T^\dd}$ be the surface algebra obtained by a cut $(S,M^\dd,T^\dd)$ of a triangulated surface
$(S,M,T)$. The algebra $B_T$ is gentle by Proposition \ref{prop 2.7}, and the result then follows from Lemma \ref{lem admissible cut} and Proposition \ref{prop 2.8}.
\end{proof}
\section{Computing the AG-invariant of surface algebras}\label{sect 4}
In this section, we give an explicit formula for the AG-invariant of an arbitrary surface
algebra in terms of the surface. The key idea is to interpret the permitted threads as
complete fans and the forbidden threads as triangles or quasi-triangles in the partial
triangulation $T^\dd$.
\subsection{Permitted threads and complete fans}
Let $(S,M^\dd,T^\dd)$ be a cut of a triangulated unpunctured surface $(S,M,T)$, and let $v$ be a
point in $M^\dd$. Let $v',v''$ be two points on the boundary, but not in $M^\dd$, such that
there is a curve $\delta$ on the boundary from $v'$ to $v''$ passing through $v$, but not
passing through any other point of $M^\dd$. Let $\gamma$ be a curve from $v'$ to $v''$ which is
homotopic to $\delta$ but disjoint from the boundary except for its endpoints, such that
the number of crossings between $\gamma$ and $T^\dd$ is minimal. The sequence of arcs $\tau_1$,
$\tau_2$, \dots, $\tau_j$ in $T^\dd$ that $\gamma$ crosses in order is called the \df{complete
fan at $v$ in $T^\dd$}. The arcs $\tau_2,\dots,\tau_{j-1}$ are called the \df{interior arcs of
the fan} and $j$ is called the \df{width of the fan}; see Figure~\ref{fig fan def}.
Note that the fan is of width zero if no arc in $T^\dd$ is incident to $v$.
Fans of width zero are called \df{empty fans}, and fans of width one are called
\df{trivial fans}. Every arc in $T^\dd$ is contained in exactly two non-empty
complete fans, namely the two fans at the two endpoints of the arc.
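Counting incidences between ends of arcs and fans (each end of an arc lies in exactly one complete fan, and an arc with both endpoints at the same marked point contributes twice to the fan at that point), we obtain the elementary identity
\[\sum_{v\in M^\dd}\big(\textup{width of the complete fan at } v\big)\;=\;2\,\big|T^\dd\big|.\]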
\begin{figure}
\begin{tikzpicture}
\shade[shading=axis,shading angle=180] (0,0) rectangle (4,.25);
\draw (0,0) -- (4,0);
\node[coordinate,name=m1,label={[fill=white,shape=circle,inner sep=.5,outer sep=1.5]90:$v'$}] at (1,0) {};
\node[solid,name=m2,label={[fill=white,shape=circle,inner sep=.5,outer sep=1.5]90:$v$}] at (2,0) {};
\node[coordinate,name=m3,label={[fill=white,shape=circle,inner sep=.5,outer sep=1.5]90:$v''$}] at (3,0) {};
\draw (m1) edge[bend right] (m3);
\clip (0,.1) rectangle (4,-1.5);
\foreach \a in {35,55,...,155}
\draw (m2.center) -- +(-\a:3cm);
\node[fill=white,opacity=.8,inner sep=.8] at ($(m2)+(-35:1.5cm)$) {$\tau_j$};
\node[fill=white,opacity=.8,inner sep=.8] at ($(m2)+(-135:1.5cm)$) {$\tau_2$};
\node[fill=white,opacity=.8,inner sep=.8] at ($(m2)+(-155:1.5cm)$) {$\tau_1$};
\end{tikzpicture}
\caption{The fan at $v$}\label{fig fan def}
\end{figure}
\begin{lemma}\label{lem permitted to v}
The construction of $Q_{T^\dd}$ induces a bijection between the set $\calh$ of permitted threads in
$Q_{T^\dd}$ and the non-empty complete fans of $(S,M^\dd,T^\dd)$. Moreover, under this bijection,
the trivial permitted threads correspond to the trivial fans.
\end{lemma}
\begin{proof}
From the construction of $Q_{T^\dd}$ it follows that the non-trivial permitted threads of $Q_{T^\dd}$
correspond to the complete fans in $(S,M^\dd,T^\dd)$ of width at least 2.
Now consider a trivial permitted thread; this corresponds to a vertex $x\in Q_{T^\dd}$ with at most
one arrow starting at $x$ and at most one arrow ending at $x$. By the construction of $Q_{T^\dd}$, the vertex
$x$ corresponds to an arc in $T^\dd$, and this arc is contained in exactly two complete fans, one at each
endpoint of the arc. If both fans are non-trivial, then this configuration generates either two arrows
ending at the vertex $x$ in $Q_{T^\dd}$ or two arrows starting at the vertex $x$, which contradicts our
assumption that $x$ corresponds to a trivial permitted thread. Hence one of the two fans must be trivial.
Since we are excluding the case where $(S,M)$ is a disc with four marked points, it follows from the
construction of $T^\dd$ that it is impossible that both fans at the endpoints of an arc are trivial.
This shows that the trivial permitted threads are in bijection with the trivial complete fans.
\end{proof}
\subsection{Forbidden threads, triangles and quasi-triangles}
\begin{lemma}\label{lem forbidden to tri}
The construction of $Q_{T^\dd}$ induces a bijection between the set of forbidden threads of length
at most two of $Q_{T^\dd}$ and the set of non-internal triangles and \quasitri{s} in $T^\dd$.
Moreover, under this bijection
\begin{enuma}
\item forbidden threads of length two correspond to \quasitri{s},
\item forbidden threads of length one correspond to triangles with
exactly one side on the boundary,
\item forbidden threads of length zero correspond to triangles with
exactly two sides on the boundary.
\end{enuma}
\end{lemma}
\begin{remark}
\begin{enuma}
\item Each internal triangle in $T^\dd$ gives rise to three forbidden threads of length three.
\item There are no forbidden threads in $Q_{T^\dd}$ of length greater than three.
\item If $T^\dd=T$ then there are no forbidden threads of length two.
\end{enuma}
\end{remark}
\begin{proof}
Forbidden threads of length two can only occur at the local cuts, since all relations in
$B_T$ occur in oriented 3-cycles, and hence give forbidden threads of
length three. Thus the forbidden threads of length two are precisely the
paths $\alpha_i^\dd \to \gamma_i\to \beta_i^\dd$ in $Q_{T^\dd}$ which were obtained from the
3-cycles
$\alpha_i\to \gamma_i\to \beta_i \to \alpha_i$
by the cuts $\cut{v_i}{\alpha_i}{\beta_i}$. These paths correspond to the \quasitri\ $\triangle_i^\dd$. This shows (a).
A forbidden thread of length one in $Q_{T^\dd}$ is an arrow $x\to y$ which does not appear in any
relation, which means that the arcs corresponding to $x$ and $y$ in the triangulation $T^\dd$ bound
a triangle which is not an internal triangle and also not a \quasitri\ and which has two interior sides
(corresponding to $x$ and $y$). This shows (b).
Finally, a trivial forbidden thread in $Q_{T^\dd}$ is a vertex $x$ that is either a sink with only one arrow
ending at $x$, or a source with only one arrow starting at $x$, or the middle vertex of a zero relation of
length two and the two arrows from this zero relation are the only two arrows at $x$.
If $x$ is such a source or sink, then it follows from the construction of $Q_{T^\dd}$ that the triangle on one side of the
arc corresponding to $x$ in $T^\dd$ must not generate an arrow in $Q_{T^\dd}$. Thus $x$ is a side
of a triangle that has two sides on the boundary. On the other hand, if $x$ is such a middle vertex of a zero relation, then
the triangle on one side of the arc corresponding to $x$ in $T^\dd$ must be an internal triangle or a \quasitri.
On the other side, we must have a triangle with two sides on the boundary, since there are no further
arrows at $x$.
Conversely, given any triangle with two sides on the boundary and the third side $\gamma$ corresponding to
a vertex $x\in Q_{T^\dd}$, we can deduce that $x$ is a trivial forbidden thread, because, on the other side
of $\gamma$, we have either a triangle with one side on the boundary, or an internal triangle or
\quasitri; accordingly, $x$ is either a vertex with only one arrow incident to it or the middle vertex
of a zero relation with no further arrows incident to it.
\end{proof}
If $F\in \calf$ is a forbidden thread, we denote by $\triangle_F$ the corresponding triangle or quasi-triangle in $T^\dd$,
and if $H\in \calh$ is a permitted thread, we denote by $v_H$ the marked point in $H$ such that $H$
corresponds to the complete fan at $v_H$.
\subsection{AG-invariant}
We now want to compute the AG-invariant of $B_{T^\dd}$ in terms of the partially triangulated surface
$(S,M^\dd,T^\dd)$. To do so, we need
to describe how to go from a forbidden thread to the following permitted thread and from a
permitted thread to the following forbidden thread as in the AG-algorithm.
\begin{lemma}\label{lem varphi(F)}
Let $F\in \calf$ be a forbidden thread in $Q_{T^\dd}$, and let $x=s(F)$ be its starting vertex in
$Q_{T^\dd}$. Then there exists a unique permitted thread $\varphi(F)\in \calh$ such that $s(\varphi(F))=x$
and $\sigma(\varphi(F))=-\sigma(F)$. Moreover, the marked point $v_{\varphi(F)}$ of the complete
fan of $\varphi(F)$ is the unique vertex of $\triangle_F$ that is the starting point of the arc
corresponding to $x$ in $T^\dd$, with respect to the counterclockwise orientation of $\triangle_F$, see
Figure~\ref{fig varphi(F)}.
\end{lemma}
\begin{figure}
\begin{tikzpicture}
{[]
\shade[shading=axis,shading angle=-90] (1,-1) rectangle (-.25,1);
\filldraw[fill=white] (0,0) node[pos=0,solid] {}
-- +(1,1) node[pos=1,solid] {}
-- +(1,-1) node[pos=1,solid,label=below:$v_{\varphi(F)}$]{} node[int] {$x$}
-- (0,0);
\node[font=\small] at (.5,0) {$\triangle_F$};
}
{[xshift=2cm]
\shade[shading=axis,shading angle=-90] (0,1) rectangle (-.25,-1);
\filldraw[fill=white] (0,1)
-- ++(1,-1) node[pos=0,solid] {} node[pos=1,solid] {}
-- ++(-1,-1) node[pos=1,solid,label=below:$v_{\varphi(F)}$]{} node[int] {$x$}
-- cycle;
\node[font=\small] at (.5,0) {$\triangle_F$};
}
{[xshift=4cm]
\shade[shading=axis,shading angle=-90] (0,1) rectangle (-.25,-1);
\filldraw[fill=white] (0,1)
-- ++(1,0) node[pos=0,solid] {} node[pos=1,solid] {}
-- ++(0,-2) node[pos=1,solid] {}
-- ++(-1,0) node[pos=1,solid,label=below:$v_{\varphi(F)}$]{} node[int] {$x$}
-- cycle;
\node[font=\small] at (.5,0) {$\triangle_F$};
}
\end{tikzpicture}
\caption{The relative position of the marked point $v_{\varphi(F)}$; on the left, $F$ is a
trivial forbidden thread; in the middle, $F$ is of length one; and, on the right, $F$ is of length two.}
\label{fig varphi(F)}
\end{figure}
\begin{proof} The existence and uniqueness of $\varphi(F)$ follows already from \cite{AG}, but we
include a proof here, for convenience. If $F$ is a trivial forbidden thread, then there is at most
one arrow starting at $x$, hence there is a unique permitted thread $\varphi(F)$ starting at $x$. By
Lemma~\ref{lem permitted to v}, $\varphi(F)$ corresponds to a complete fan in which the first arc
corresponds to $x$. It follows that the vertex of this fan is the one described in the Lemma.
Now suppose that $F$ is not trivial, and let $\alpha$ be its initial arrow. Then there are two
permitted threads starting at $x$, one of which has also $\alpha$ as initial arrow. However, since
$\sigma(F)=\sigma(\alpha)$, the condition $\sigma(\varphi(F))=-\sigma(F)$ excludes the possibility
that $\varphi(F)$ starts with $\alpha$; thus $\varphi(F)$ is the other permitted thread starting at
$x$. Again using Lemma~\ref{lem permitted to v}, we see that $\varphi(F)$ corresponds to the
complete fan whose vertex is the marked point $v_{\varphi(F)}$ described in the Lemma. Note that
if $\alpha$ is the only arrow starting at $x$, then $\varphi(F)$ is trivial.
\end{proof}
\begin{lemma}\label{lem psi(H)}
Let $H\in \calh$ be a permitted thread in $Q_{T^\dd}$ and let $y=t(H)$ be its terminal vertex in
$Q_{T^\dd}$. Then there exists a unique forbidden thread $\psi(H)\in \calf$ such that $t(\psi(H))=y$
and $\e(\psi(H))=-\e(H)$. Moreover, the triangle or \quasitri\ $\triangle_{\psi(H)}$ of $\psi(H)$ is
the unique triangle or \quasitri\ which is adjacent to the arc corresponding to $y$ and incident to
$v_H$, see Figure~\ref{fig psi(H)}.
\end{lemma}
\begin{figure}
\begin{tikzpicture}
{[]
\draw (0,0) node[solid,label=135:$v_H$] {}
-- ++(-30:2.25cm)
node[int] {$y$}
node[solid,pos=1] {}
node[left=.33cm,pos=1] {$\triangle_{\psi(H)}$}
-- ++(-150:2.25cm)
node[solid,pos=1,name=b] {}
-- cycle;
\draw (0,0) -- (20:2cm)
(0,0) -- (10:2cm)
(0,0) -- (0:2cm);
\draw[loosely dotted] (-1:1.9cm).. controls (-15:2cm) .. (-29:1.9cm)
node[pos=.5,right=.25cm] {$\Fan(v_H)$};
{[on background layer] \shade[shading=axis,shading angle=-90] (-.25,0) rectangle (b); }
}
{[xshift=4.5cm]
\draw (0,0) node[solid,label=135:$v_H$] {}
-- ++(-25:2.25cm)
node[int] {$y$}
node[solid,pos=1] {}
--++(-90:.75cm)
node[solid,pos=1,name=c] {}
node[above left=.33cm,pos=1] {$\triangle_{\psi(H)}$}
(0,0) -- ++(-90:2.5cm)
node[solid,pos=1,name=b] {}
-- (c);
\draw (0,0) -- (20:2cm)
(0,0) -- (10:2cm)
(0,0) -- (0:2cm);
\draw[loosely dotted] (-1:1.9cm).. controls (-15:2cm) .. (-24:1.9cm)
node[pos=.5,right=.25cm] {$\Fan(v_H)$};
{[on background layer] \shade[shading=axis,shading angle=-90] (-.25,0) rectangle (b); }
}
\end{tikzpicture}
\caption{The triangle or \quasitri\ $\triangle_{\psi(H)}$ adjacent to the arc corresponding to $y$ and incident to $v_H$; a triangle on the left and a \quasitri\ on the right.}\label{fig psi(H)}
\end{figure}
\begin{proof}
Again, the existence and uniqueness already follows from \cite{AG}, but we include a
proof here too.
It follows from Lemma~\ref{lem permitted to v} that the side of the triangle $\triangle_{\psi(H)}$
which follows $y$ in the counterclockwise order must be a boundary segment, since the fan at $v_H$
is a complete fan and $y=t(H)$. It follows from Lemma~\ref{lem forbidden to tri} that the forbidden
thread corresponding to $\triangle_{\psi(H)}$ ends in $y$. Let $b$ be the terminal arrow in $H$.
If $\triangle_{\psi(H)}$ has two sides on the boundary, then $\psi(H)$ is a trivial forbidden thread
and $\e(\psi(H))=-\e(b)=-\e(H)$. Note that in this case, $b$ is the only arrow in $Q_{T^\dd}$
ending at $y$, and, since $\e(b)=\e(H)$, it follows that $\psi(H)$ is unique.
Suppose now that $\triangle_{\psi(H)}$ has only one side on the boundary. Since this boundary
segment follows $y$ in the counterclockwise orientation, $\triangle_{\psi(H)}$ induces an arrow
$b'$ in $Q_{T^\dd}$, with $t(b')=y$, and this arrow is the terminal arrow of $\psi(H)$.
Since $b$ and $b'$ end at the same vertex, we have $\e(\psi(H)) = \e(b') = -\e(b) = -\e(H)$.
This shows the existence of $\psi(H)$, and its uniqueness follows as above.
\end{proof}
We are now able to prove the main result of this section, the computation of the AG-invariant for
an arbitrary surface algebra.
For any triangulated surface $(S,M,T)$ and any boundary component $C$ of $S$, denote by $M_{C,T}$
the set of marked points on $C$ that are incident to at least one arc in $T$. Let $n(C,T)$ be the
cardinality of $M_{C,T}$, and let $m(C,T)$ be the number of boundary segments on $C$ that
have both endpoints in $M_{C,T}$.
\begin{thm}\label{thm AG calc}
Let $B=B_{T^\dd}$ be a surface algebra of type $(S,M,T)$ given by a cut $(S,M^\dd,T^\dd)$. Then
the AG-invariant $AG(B)$ can be computed as follows:
\begin{enuma}
\item The ordered pairs $(0,3)$ in $AG(B)$ are in bijection with the internal triangles in $T^\dd$, and
there are no ordered pairs $(0,m)$ with $m\neq 3$.
\item The ordered pairs $(n,m)$ in $AG(B)$ with $n\neq 0$ are in bijection with the boundary
components of $S$. Moreover, if $C$ is a boundary component, then the corresponding ordered
pair $(n,m)$ is given by
\[n= n(C,T) +\ell , \quad m= m(C,T) +2\ell, \]
where $\ell$ is the number of local cuts $\chi_{v,\alpha,\beta}$ in $(S,M^\dd,T^\dd)$ such that $v$
is a point on $C$.
\end{enuma}
\end{thm}
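Note that, in the special case where no local cuts are performed, so that $T^\dd=T$ and $\ell=0$ for every boundary component, the formulas reduce to
\[ n=n(C,T), \qquad m=m(C,T), \]
for each boundary component $C$, together with one pair $(0,3)$ for each internal triangle of $T$.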
\begin{proof}
Part (a) follows directly from the construction of $B_{T^\dd}$. To show (b), let
\[(H_0,F_0,H_1,F_1,\dots,H_{n-1},F_{n-1},H_n=H_0)\]
be a sequence obtained by the AG-algorithm. Thus each $H_i$ is a permitted thread in
$Q_{T^\dd}$, each $F_i$ is a forbidden thread, $F_i=\psi(H_i)$, and $H_{i+1} = \varphi(F_i)$,
where $\varphi$ and $\psi$ are the maps described in the Lemmas~\ref{lem varphi(F)}
and~\ref{lem psi(H)} respectively.
By Lemma~\ref{lem permitted to v}, each permitted thread $H_i$ corresponds to a
non-empty complete fan, thus to a marked point $v_{H_i}$ in $M^\dd$. On the other
hand, Lemma~\ref{lem forbidden to tri} implies that each forbidden thread $F_i$
corresponds to a triangle or \quasitri\ $\triangle_{F_i}$ in $T^\dd$.
Let $C$ be the boundary component containing $v_{H_0}$. Lemma~\ref{lem psi(H)} implies
that $\triangle_{F_0}$ contains a boundary segment incident to $v_{H_0}$, and it follows then from
Lemma~\ref{lem varphi(F)} that $v_{H_1}\in M^\dd$ is a marked point on the same boundary
component $C$. Note that if $F_0$ is a non-trivial forbidden thread then $v_{H_0}$ and $v_{H_1}$
are the two endpoints of the unique boundary segment in $\triangle_{F_0}$, and, if $F_0$ is a
trivial forbidden thread, then $\triangle_{F_0}$ is a triangle with two sides on the boundary and
$v_{H_0}$, $v_{H_1}$ are the two endpoints of the side of $\triangle_{F_0}$ that is not on the boundary.
Recursively, we see that each of the marked points $v_{H_i}\in M^\dd$ lies on the boundary component
$C$ and that the set of points $\{v_{H_0},v_{H_1},\dots,v_{H_{n-1}}\}$ is precisely the set of marked
points in $M^\dd$ that lie on $C$ and that are incident to at least one arc in $T^\dd$. Since each local cut
$\chi_{v,\alpha,\beta}$ with $v$ on $C$ adds exactly one such marked point to the $n(C,T)$ points coming
from $M$, we get $n=n(C,T)+\ell$.
Recall that the number $m$ in the ordered pair $(n,m)$ is equal to the total number of
arrows appearing in the forbidden threads $F_0,F_1,\dots,F_{n-1}$. For each $i$, the number of
arrows in $F_i$ is zero if $\triangle_{F_i}$ has two sides on the boundary component $C$; it is one if
$\triangle_{F_i}$ is a triangle with one side on $C$; and it is two if $\triangle_{F_i}$ is a \quasitri\
with one side on $C$. Taking the sum, we see that $m$ is the number of triangles in $T^\dd$
that have exactly one side on $C$ plus twice the number of \quasitri{s} in $T^\dd$ that have
exactly one side on $C$. Thus $m$ is equal to the number of boundary segments in $(S,M,T)$
on $C$ that have both endpoints in $M_{C,T}$ plus twice the number of local cuts $\cut{v}{\za}{\zb}$ with $v$ a marked point in $C$.
\end{proof}
\begin{remark}
The theorem holds for arbitrary cuts of $(S,M,T)$, thus, in particular, it computes the AG-invariant of the
Jacobian algebra $B_T$ corresponding to the uncut triangulated surface $(S,M,T)$.
\end{remark}
\begin{cor}
Any admissible cut surface algebra of the disc with $n+3$ marked points is derived
equivalent to the path algebra $A$ of the quiver
\[ 1\xrightarrow{\ \, a_1 \ } 2 \xrightarrow{\ \, a_2\ } 3 \xrightarrow{\,\ a_3\ } \cdots \xrightarrow{a_{n-2}} n-1 \xrightarrow{a_{n-1}} n.\]
\end{cor}
\begin{proof}
This statement already follows from Corollary \ref{cor 3.6}, since tilting induces an equivalence of derived categories. We give here an alternative proof using Theorem \ref{thm AG calc}. The algebra $A$ has AG-invariant $(n+1,n-1)$, since in the AG algorithm we get the sequence
\[\begin{array}{lll} H_0=a_1a_2\cdots a_{n-1} & \quad\quad& F_0=e_n\\ H_1=e_n && F_1=a_{n-1}\\ H_2=e_{n-1}&& F_2=a_{n-2}\\ \qquad\vdots && \qquad\vdots \\H_n=e_1 && F_{n}=e_1\\ H_{n+1}=H_0\end{array}\]
which contains $n+1$ permitted threads and $n-1$ arrows in the forbidden threads.
On the other hand, Theorem \ref{thm AG calc} implies that the AG-invariant of an admissible cut surface algebra $B_{T^\dd}$ of the disc is given by \[(n(C,T)+\ell\ , \ m(C,T)+2\ell).\] It follows from an easy induction that the number of internal triangles in $T$ is equal to $n+1-n(C,T).$ Since the cut is admissible, this is also the number of local cuts. Thus
\[ n(C,T)+\ell=n+1.\]
Moreover, $m(C,T)$ is equal to the number of boundary segments minus $2(n+3-n(C,T))$, thus $m(C,T)=2n(C,T)-n-3$, and it follows that \[m(C,T)+2\ell =n-1.\]
Alternatively, since there is only one ordered pair $(n,m)$ in the AG-invariant, the number $m$ is the number of arrows in the quiver, which is equal to $n-1$, since the quiver of an admissible cut surface algebra of the disc with $n+3$ marked points is a tree on $n$ vertices.
Thus the AG-invariant of $B_{T^\dd}$ is equal to the AG-invariant of $A$. Since the quivers of both algebras have no cycles, the result now follows from Theorem \ref{thm AG}.
\end{proof}
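As a quick sanity check of the corollary in a small case (the particular triangulation below is our own choice for illustration and does not appear in the figures of this paper), take $n=2$: the disc with five marked points and the fan triangulation at a single boundary vertex.

```latex
% Disc with n+3 = 5 marked points, n = 2; fan triangulation at vertex 1,
% with arcs 1--3 and 1--4. There are no internal triangles, so the only
% admissible cut is the trivial one and \ell = 0.
% Marked points incident to an arc: 1, 3, 4, hence n(C,T) = 3.
% The only boundary segment with both endpoints in {1,3,4} is 3--4,
% hence m(C,T) = 1. Theorem \ref{thm AG calc} then gives
\[
AG(B_T)=\{(\,n(C,T)+\ell,\ m(C,T)+2\ell\,)\}=\{(3,1)\},
\]
% which agrees with the AG-invariant (n+1, n-1) = (3,1) of the linear
% quiver 1 -> 2 computed by the AG-algorithm.
```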
\begin{remark}
It also follows from Theorem \ref{thm AG calc} that for surfaces other than the disc, the surface algebras obtained by admissible cuts of a fixed triangulation are in general not all derived equivalent.
\end{remark}
\subsection{Surface algebras of annulus type}
We make some observations for surface algebras coming from admissible cuts of a triangulated annulus.
\begin{cor}\label{cor annulus AG classes}
Let $S$ be an annulus, and let $B_1$, $B_2$ be surface algebras of type $(S,M,T)$ obtained by
two admissible cuts of the same triangulation. Then
\begin{enuma}
\item $B_1$ and $B_2$ are derived equivalent
if and only if $B_1 $ and $B_2$ have the same AG-invariant.
\item If on each boundary component the number of local cuts is the same in the
two admissible cuts then $B_1$ and $B_2$ are derived equivalent.
\end{enuma}
\end{cor}
\begin{proof}
(a) The quivers of $B_1$ and $B_2$ have exactly one cycle, since they are admissible cuts of the annulus.
Thus Theorem \ref{thm AG} implies that $\AG(B_1)=\AG(B_2)$ if and only if $B_1$ and $B_2$ are derived
equivalent.
(b) Theorem \ref{thm AG calc} shows that $\AG(B_1)=\AG(B_2)$ if the number
of local cuts on any given boundary component is the same in both admissible cuts.
\end{proof}
\begin{remark} In \cite{AO}, the authors associate a weight to each of the algebras appearing in Corollary \ref{cor annulus AG classes} and show that two such algebras are derived equivalent if and only if they have the same weight (respectively the same absolute weight if the marked points are equally distributed over the two boundary components).
\end{remark}
\begin{ex}
We end this section with an example coming from an annulus. Let $(S,M,T)$ be the triangulated surface
given in Figure~\ref{fig ex annulus} and $B$ the corresponding cluster-tilted algebra of type $\tilde\AA_4$ with the
following quiver
\[\begin{tikzpicture}[rotate=-90]
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(5) edge (3)
(3) edge (2) edge (4)
(4) edge (5)
(2) edge (1);
\end{tikzpicture}\]
\begin{figure}
\begin{tikzpicture}[scale=.45]
\draw (0,0) circle[radius=5cm];
\filldraw[fill=black!20] (0,0) circle[radius=1cm];
\node[solid,name=a,label=above:$w$] at (90:5cm) {} ;
\node[solid,name=b] at (180:5cm) {} ;
\node[solid,name=c,label=below:$v$] at (270:5cm) {} ;
\node[solid,name=1] at (90:1cm) {} ;
\node[solid,name=2] at (270:1cm) {} ;
\draw (a) .. controls +(210:5cm) and +(150:5cm) .. (c) node[int] {4}
(a) ..controls +(235:2cm) and +(90:1cm) .. (180:2.5cm) node[int] {3}
(180:2.5cm) ..controls +(270:1cm) and +(235:2cm) .. (2)
(a) ..controls +(-55:2cm) and +(90:1cm) .. (0:2.5cm) node[int] {1}
(0:2.5cm) ..controls +(270:1cm) and +(-55:2cm) .. (2)
(2) .. controls +(-10:1cm) and +(270:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(0:1cm) .. (90:2cm) node[int,pos=1] {2}
(90:2cm) .. controls +(180:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(270:1cm) and +(190:1cm) .. (2)
(2) -- (c) node[int] {5};
\node[outer sep=.1cm,fill=white,opacity=.85,below right] at (270:1cm){$z$};
\end{tikzpicture}
\caption{Triangulated annulus}\label{fig ex annulus}
\end{figure}
Applying Theorem~\ref{thm AG calc}, we see that $\AG(B)$ is given by the ordered pairs
$\{ (0,3) , (0,3), (1,0), (2,1)\}$ where $(1,0)$ is associated to the inner boundary component
and $(2,1)$ to the outer boundary component. If we consider only admissible cuts, then the
AG-invariant of the resulting algebras will be given by only two ordered pairs, one per boundary
component. In Figures \ref{fig ex (2,2)}--\ref{fig inner} we list the bound quivers given by the different possible admissible cuts grouped
by their AG-invariant. We use dashed lines to represent the zero relations induced by the cuts.
There are two internal triangles, hence there are nine distinct admissible cuts. If both
boundary components are cut, we obtain five algebras with AG-invariant $\{(2,2),(3,3)\}$.
It follows from \cite[Section 7]{AG} and Theorem \ref{thm AG} that these are iterated tilted algebras of type $\tilde \AA_4$.
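The two pairs with $n\neq 0$ above can be recovered from the formula in Theorem~\ref{thm AG calc}, with $\ell=0$ since $B$ is uncut; the counts below are read off from Figure~\ref{fig ex annulus}.

```latex
% Inner boundary: only the point z is incident to an arc of T, so
% n(C,T) = 1, and no boundary segment has both endpoints in {z}, so
% m(C,T) = 0:
\[ (n(C,T)+\ell,\ m(C,T)+2\ell)=(1,0). \]
% Outer boundary: v and w are incident to arcs of T, so n(C,T) = 2, and
% the only boundary segment with both endpoints in {v,w} is the one
% joining v and w directly, so m(C,T) = 1:
\[ (n(C,T)+\ell,\ m(C,T)+2\ell)=(2,1). \]
```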
\begin{figure}
\begin{tikzpicture}
{[rotate=-90]
\node at (1.75,-1.2) {$\cut z12 \cut v45$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(5) edge (3)
(3) edge (2) edge (4);
\path[dashed] (1) edge (2) (4) edge (5);
}
{[rotate=-90,xshift=2cm]
\node at (1.75,-1.2) {$\cut z23 \cut v45$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(5) edge (3)
(3) edge (4)
(2) edge (1);
\path[dashed] (2) edge (3) (4) edge (5);
}
{[rotate=-90,xshift=0cm, yshift=5cm]
\node at (1.75,-1.2) {$\cut z23 \cut w43$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(5) edge (3)
(2) edge (1)
(4) edge (5);
\path[dashed] (2) edge (3) (4) edge (3);
}
{[rotate=-90,xshift=2cm,yshift=5cm]
\node at (1.75,-1.2) {$\cut z12 \cut w43$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(5) edge (3)
(3) edge (2)
(4) edge (5);
\path[dashed] (2) edge (1) (4) edge (3);
}
{[rotate=-90,xshift=4cm,yshift=2.5cm]
\node at (1.75,-1.2) {$\cut w13 \cut z35$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5)
(4) edge (5)
(3) edge (2) edge (4)
(2) edge (1);
\path[dashed] (1) edge (3) (3) edge (5);
}
\end{tikzpicture}
\caption{The surface algebras of $(S,M,T)$ with AG-invariant given by $\{(2,2),(3,3)\}$}\label{fig ex (2,2)}
\end{figure}
The other four admissible cuts of $(S,M,T)$ split into two AG-classes: $\{(1,0),(4,5)\}$, corresponding to
cutting the outer boundary component twice, and $\{(3,4),(2,1)\}$, corresponding to cutting the inner boundary component
twice. See Figure~\ref{fig outer} and Figure~\ref{fig inner}, respectively. Note that from
Corollary~\ref{cor annulus AG classes}, these are also the derived equivalence classes for the surface
algebras of type $(S,M,T)$ coming from admissible cuts.
\begin{figure}
\begin{tikzpicture}
{[rotate=-90]
\node at (1.75,-1.2) {$\cut w13 \cut w34$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5)
(5) edge (3)
(3) edge (2)
(4) edge (5)
(2) edge (1);
\path[dashed] (4) edge (3) (3) edge (1);
}
{[rotate=-90,yshift=5cm]
\node at (1.75,-1.2) {$\cut w13 \cut v45$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5)
(5) edge (3)
(3) edge (2) edge (4)
(2) edge (1);
\path[dashed] (4) edge (5) (3) edge (1);
}
\end{tikzpicture}
\caption{The surface algebras of $(S,M,T)$ with AG-invariant given by $\{(1,0),(4,5)\}$}\label{fig outer}
\end{figure}
\begin{figure}
\begin{tikzpicture}
{[rotate=-90]
\node at (1.75,-1.2) {$\cut z23 \cut z35$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(3) edge (4)
(4) edge (5)
(2) edge (1);
\path[dashed] (3) edge (5) (3) edge (2);
}
{[rotate=-90,yshift=5cm]
\node at (1.75,-1.2) {$\cut z12 \cut z35$:};
\node[name=1] at (1,0) {1};
\node[name=2] at (2.5,0) {2};
\node[name=3] at (1.75,1) {3};
\node[name=4] at (2.5,2) {4};
\node[name=5] at (1,2) {5};
\path[->] (1) edge[bend left] (5) edge (3)
(3) edge (4)
(3) edge (2)
(4) edge (5);
\path[dashed] (5) edge (3) (2) edge (1);
}
\end{tikzpicture}
\caption{The surface algebras of $(S,M,T)$ with AG-invariant given by $\{(3,4),(2,1)\}$}\label{fig inner}
\end{figure}
\end{ex}
\section{A geometric description of the module categories of surface algebras}\label{sect 5}
In this section, we study the module categories of surface algebras.
Let $(S,M,T)$ be a triangulated surface without punctures and let $(S,M\dag,T\dag)$ be a cut
of $(S,M,T)$ with corresponding surface algebra $B_{T\dag}$. We will associate a category
$\cale \dag$ to $(S,M\dag,T\dag)$ whose objects are given in terms of curves on the surface
$(S,M\dag)$, and we will show that there is a functor from $\cale\dag$ to $\cmod B_{T\dag}$
is faithful and dense in the category of string modules over $B_{T^\dd}$. Consequently, the string modules over the surface algebra $B_{T\dag}$
have a geometric description in terms of certain curves on $(S,M\dag)$.
In the special case where $S$ is a disc, or more generally if $B_{T^\dd}$ is representation finite, we even have that $\cale\dag$ and $B_{T\dag}$ are equivalent categories.
We will use this fact in Section~6 to give a geometric description of tilted algebras of type $\AA$ and
their module categories.
\subsection{Permissible arcs}
\begin{dfn}\mbox{}
\begin{enuma}
\item A curve $\gamma$ in $(S,M\dag,T\dag)$ is said to \df{consecutively cross} $\tau_1,\tau_2\in T\dag$
if $\gamma$ crosses $\tau_1$ and $\tau_2$ in the points $p_1$ and $p_2$, and the segment of $\gamma$
between the points $p_1$ and $p_2$ does not cross any other arc in $T\dag$.
\item A generalized arc or closed curve in $(S,M\dag)$ is called \df{permissible} if it does not
consecutively cross two non-adjacent sides of a \quasitri\ in $T\dag$.
\item Two permissible generalized arcs $\gamma,\gamma'$ in $(S,M\dag)$ are called \df{equivalent} if
there is a side $\delta$ of a \quasitri\ $\triangle^\dd$ in $T\dag$ such that
\begin{enumi}
\item $\gamma$ is homotopic to the concatenation of $\gamma'$ and $\delta$,
\item both $\gamma$ and $\gamma'$ start at an endpoint of $\delta$ and their first
crossing with $T\dag$ is with the side of $\triangle^\dd$ that is opposite to $\delta$.
\end{enumi}
\end{enuma}
Examples of a non-permissible arc and of equivalent arcs are given in Figure~\ref{fig arcs ex}.
\end{dfn}
\begin{figure}
\begin{tikzpicture}
{[]
\draw (0,0) node[pos=0,solid] {}
-- (2,0) node[pos=1,solid] {}
-- (2,-2) node[pos=1,solid,label=135:$\triangle\dag$] {}
-- (0,-2) node[pos=1,solid] {}
-- cycle;
{[on background layer] \shade[shading=axis,shading angle=90] (2,0) rectangle (2.25,-2);}
\draw (40:2cm) -- (-80:3cm) node[int] {$\gamma$};
}
{[xshift=4cm]
\draw (0,0) node[pos=0,solid] {}
-- (2,0) node[pos=1,solid] {} node[int,below,outer sep=1] {$\delta$}
-- (2,-2) node[pos=1,solid,label=135:$\triangle\dag$] {}
-- (0,-2) node[pos=1,solid] {}
-- cycle;
\draw (0,0) ..controls +(-40:1cm) and +(40:1cm) .. (-80:3cm) node[int,pos=.35] {$\gamma$}
(2,0) ..controls +(200:1cm) and +(20:1cm) .. (-80:3cm) node[int,pos=.35] {$\gamma'$}
node[pos=1,solid] {};
}
\end{tikzpicture}
\caption{Example of a non-permissible arc $\gamma$ on the left, and of two equivalent arcs
$\gamma,\gamma'$ on the right}\label{fig arcs ex}
\end{figure}
Note that the arcs in $T\dag$ are permissible by definition. Furthermore, note that the side $\delta$
in part (c) of the definition may be, but does not have to be, a boundary segment.
\subsection{Pivots}
\begin{dfn} \label{def pivot}
Let $\gamma$ be a permissible generalized arc, and let $v$ and $w$ be its
endpoints. Let $f_v\gamma$ be the permissible generalized arc or boundary
segment obtained from $\gamma$ by fixing the endpoint $v$ and moving the
endpoint $w$ as follows:
\begin{enumi}
\item if $w$ is a vertex of a \quasitri\ $\triangle^\dd$ and $w'$ is the
counterclockwise neighbor vertex of $w$ in $\triangle^\dd$ such that $\gamma$ is
equivalent to an arc $\gamma'$ with endpoints $v,w'$ then move $w$ to the
counterclockwise neighbor of $w'$ on the boundary of $S$.
Thus $f_v\gamma$ is given by a curve homotopic to the concatenation of
$\gamma'$ and the boundary segment that connects $w'$ to its counterclockwise
neighbor.
\item otherwise, move $w$ to its counterclockwise neighbor on the
boundary. Thus $f_v\gamma$ is given by a curve homotopic to the concatenation
of $\gamma$ and the boundary segment that connects $w$ to its counterclockwise
neighbor.
\end{enumi}
We call $f_v\gamma$ the \df{pivot of $\gamma$ fixing $v$}.
\end{dfn}
\begin{remark}
In the special case of a trivial cut, that is $(S,M\dag,T\dag)=(S,M,T)$, the pivots are defined
by condition (ii) only, since there are no \quasitri{s}. In this case our pivots are the same as in \cite{BZ},
where it has been shown that these pivots correspond to irreducible morphisms in $\cmod B_T$.
\end{remark}
\begin{dfn}
Let $\gamma$ be a permissible generalized arc in $(S,M\dag,T\dag)$ and let $v$ and $w$ be
its endpoints. Let $\tau^-\gamma$ be the permissible generalized arc given by
$\tau^- \gamma = f_vf_w\gamma$. Dually, let $\tau^+\gamma$ be the permissible generalized
arc in $(S,M\dag,T\dag)$ such that $\tau^-(\tau^+ \gamma)=\gamma$.
\end{dfn}
\begin{remark}
Again, in the case where $(S,M\dag,T\dag)=(S,M,T)$, it has been shown in \cite{BZ} that
$\tau^+$ and $\tau^-$ correspond to the Auslander-Reiten translation and the inverse
Auslander-Reiten translation in $\cmod B_T$, respectively.
\end{remark}
\subsection{The categories $\cale^\dd$ and $\cale$}
\begin{dfn}
Let $\cale\dag = \cale(S,M\dag,T\dag)$ be the additive category whose indecomposable
objects are the equivalence classes of permissible arcs in $(S,M\dag,T\dag)$ that are not
in $T\dag$ and whose morphisms between indecomposable objects are generated by the pivots
$f_v\in \Hom_{\cale\dag}(\gamma,f_v\gamma)$ subject to the relations $f_vf_w = f_wf_v$.
Here we use the convention that $f_v\gamma$ denotes the zero object in $\cale\dag$
whenever $f_v\gamma$ is a boundary segment in $(S, M^\dd)$ or an arc in $T\dag$.
\end{dfn}
Our first goal is to relate the category $\cale\dag =\cale(S,M\dag,T\dag)$ to the category
$\cale = \cale(S,M,T)$ of the original triangulated surface, by constructing a functor
$G\colon \cale\dag \to \cale$. Let $\gamma$ be an indecomposable object in $\cale\dag$;
thus $\gamma$ is represented by a permissible generalized arc in $(S,M\dag)$. The
generalized arc $\gamma$ is determined by the sequence of its crossing points with $T\dag$
together with the sequence of its segments between consecutive crossing points. Each of
these segments lies entirely in a triangle or \quasitri\ of $T\dag$. Define $G(\gamma)$ to
be the unique generalized arc in $(S,M)$ determined by the same sequence of crossing
points and segments, with the difference that segments of $\gamma$ that were lying
on a \quasitri\ become segments of $G(\gamma)$ which lie in the corresponding triangle.
If $f_v\colon \gamma \to f_v\gamma$ is a morphism in $\cale\dag$ given by a pivot of type (ii)
in Definition~\ref{def pivot}, then let $G(f_v)\colon G(\gamma) \to G(f_v\gamma)$ be the pivot
in $\cale$ fixing the marked point corresponding to $v$ in $M$. If $f_v\colon \gamma \to f_v\gamma$
is given by a pivot of type (i) in Definition~\ref{def pivot}, then let
$G(f_v)\colon G(\gamma) \to G(f_v\gamma)$ be the minimal sequence of pivots in $\cale$, each pivot
fixing the marked point corresponding to $v$ in $M$, which transforms $G(\gamma)$ into
$G(f_v\gamma)$.
\begin{prop}\label{prop:G full faithful}
$G$ is a full and faithful functor.
\end{prop}
\begin{proof}
The image of a composition of pivots under $G$ is equal to the composition of the images of the pivots.
Moreover, if $v\dag,w\dag$ are marked points in $M\dag$ and $v,w$ are the corresponding points in $M$,
then $G(f_{v\dag})\circ G(f_{w\dag})(G(\gamma) )= G(f_{w\dag})\circ G(f_{v\dag})(G(\gamma))$, which shows that $G$ respects the
relations on $\cale\dag$.
To show that $G$ is full, let $\gamma,\gamma'$ be indecomposable objects in $\cale\dag$ and let $f\in
\Hom_\cale(G(\gamma),G(\gamma'))$ be a nonzero morphism in $\cale$. Then $f$ is given by a sequence of
pivots $f=f_{v_i}\circ f_{v_{i-1}} \circ \cdots\circ f_{v_1}$ with $v_1,\dots,v_i\in M$. Using the relations $f_vf_w =f_wf_v$
in $\cale$, we may suppose without loss of generality that
$v_1=v_2=\cdots=v_h\neq v_{h+1} = \cdots = v_{i-1} = v_i$, for some $h$ with $1\leq h\leq i$. If all
intermediate generalized arcs $f_{v_j}\circ f_{v_{j-1}} \circ \cdots\circ f_{v_1}(G(\gamma))$ are in the
image of $G$, then there is a corresponding sequence of pivots
$f_{v_i\dag}\circ f_{v_{i-1}\dag} \circ \cdots\circ f_{v_1\dag}$ in $\cale\dag$ such that
$G(f_{v_i\dag}\circ f_{v_{i-1}\dag} \circ \cdots\circ f_{v_1\dag} (\gamma))
= f_{v_i}\circ f_{v_{i-1}} \circ \cdots\circ f_{v_1}(G(\gamma))$.
Otherwise, let $\alpha = f_{v_j}\circ f_{v_{j-1}} \circ \cdots\circ f_{v_1}(G(\gamma))$ be the first generalized
arc in this sequence that does not lie in the image of $G$. Then $\alpha$ must cross an internal triangle
$\triangle$ in $T$ such that the corresponding (non-permissible) generalized arc $\za^\dd$ in $(S,M\dag,T\dag)$
crosses the corresponding \quasitri\ in two opposite sides consecutively. Let
$\delta=f_{v_{j-1}} \circ f_{v_{j-2}} \circ \cdots \circ f_{v_1}(G(\gamma))$ be the immediate predecessor of
$\alpha$ in the sequence, and let $\beta = f_{v_k} \circ f_{v_{k-1}} \circ \cdots \circ f_{v_1}(G(\gamma))$
with $k>j$ be the first arc after $\alpha$ in the sequence which is again in the image of $G$. We distinguish
two cases.
\begin{enum}
\item Suppose that $v_j = v_{j+1} = \cdots = v_k$. Thus $\beta$ and $\alpha$ are both incident to $v_j$,
and it follows that in $\cale\dag$, we have $f_{v_j\dag}\delta\dag = \beta\dag$, where
$\delta\dag = f_{v_{j-1}\dag}\circ f_{v_{j-2}\dag}\circ \cdots \circ f_{v_1\dag}(\gamma)$ in $\cale\dag$
is the object corresponding to $\delta$, and $\beta\dag$ the object in $\cale\dag$ corresponding
to $\beta$. Clearly,
\[G( f_{v_j\dag}\circ f_{v_{j-1}\dag}\circ \cdots \circ f_{v_1\dag}(\gamma)) = G(\beta\dag) = \beta.\]
By induction on the number of generalized arcs in the sequence
$f_{v_i}\circ f_{v_{i-1}} \circ \cdots\circ f_{v_1}(G(\gamma))$ which are not in the image of $G$, it
follows now that the morphism $f$ is in the image of $G$.
\item Suppose that $v_j\neq v_k$. Then the triangle $\triangle$ separates the two arcs $G(\gamma)$
and $\beta$, which implies that none of the arcs in $T$ can cross both $G(\gamma)$ and
$\beta$. Therefore $\Hom_\cale(G(\gamma),\beta) = 0$, hence $f=0$.
\end{enum}
This shows that $G$ is full.
To prove that $G$ is faithful, it suffices to show that $G$ is faithful on pivots $\gamma\mapsto f_v\gamma$.
For pivots of type (ii) in Definition~\ref{def pivot}, this is clear, so let $\gamma\mapsto f_v\gamma$ be
a pivot of type (i). Recall that such a pivot occurs if $\gamma$ is a permissible generalized arc in
$(S,M\dag, T\dag)$ with endpoint $v,w\in M\dag$, where $w$ is a vertex of a \quasitri\ $\triangle^\dd$ in
$T\dag$ such that $\gamma$ is equivalent to a permissible generalized arc $\gamma'$ with endpoints
$v,w'$, where $w'$ is the counterclockwise neighbor of $w$ in $\triangle^\dd$. Since $\gamma$ and
$\gamma'$ are equivalent, they define the same object of $\cale\dag$, so the morphism
$\gamma\mapsto f_v\gamma$ can be represented by the pivot $\gamma'\mapsto f_v\gamma'$ which
is of type (ii). This shows that the image of this morphism under $G$ is non-zero, hence $G$ is faithful.
\end{proof}
\subsection{Geometric description of module categories}
It has been shown in \cite{BZ} that there exists a faithful functor $F\colon \cale \to \cmod B_T$ which sends
pivots to irreducible morphisms and $\tau^-$ to the inverse Auslander-Reiten translation in $\cmod B_T$.
This functor induces a dense and faithful functor from $\cale$ to the category of $B_T$-string modules.
The composition $F\circ G$ is a faithful functor from $\cale\dag$ to $\cmod B_{T\dag}$. We will now define
another functor $\res \colon \cmod B_T\to \cmod B_{T\dag}$ and get a functor
$H=\res\circ F \circ G\colon \cale\dag \to \cmod B_{T\dag}$ completing the following commutative
diagram.
\[\begin{tikzpicture}[scale=1.75]
\node[name=lt] at (0,0) {$\cale\dag$};
\node[name=rt] at (2,0) {$\cale$};
\node[name=lb] at (0,-1) {$\cmod B_{T\dag}$};
\node[name=rb] at (2,-1) {$\cmod B_T$};
\path[->]
(lt) edge node[auto] {$H$} (lb) edge node[auto] {$G$} (rt)
(rt) edge node[auto] {$F$} (rb)
(rb) edge node[auto] {$\res$} (lb);
\end{tikzpicture}\]
Note that the inclusion map $i\colon B_{T\dag} \to B_T$ is an algebra homomorphism which sends
$1_{B_{T\dag}}$ to $1_{B_T}$. Thus $B_{T\dag}$ is a subalgebra of $B_T$, and we can define $\res$ to be the
functor given by the restriction of scalars.
On the other hand, $B_{T\dag}$ is the quotient of $B_T$ by the two-sided ideal generated by the arrows in
the cut. Denoting the projection homomorphism by $\pi\colon B_T \to B_{T\dag}$, we see that $\pi\circ i$
is the identity morphism on $B_{T\dag}$. The extension of scalars functor
$\iota = - \otimes_{B_{T\dag}} B_T\colon \cmod B_{T\dag} \to \cmod B_T$ sends a $B_{T\dag}$-module
$M$ to the $B_T$-module $\iota(M)=M$ on which the arrows in the cut act trivially; and $\iota$ is the identity
on morphisms. We have $\res\circ \iota = 1_{\cmod B_{T\dag}}$.
\begin{thm}\label{thm cats}Let $H=\res\circ F\circ G \colon \cale\dag \to \cmod B_{T\dag}$.
\begin{enuma}
\item The functor $H$ is faithful and induces a dense, faithful functor from $\cale\dag$ to
the category of string modules over $B_{T\dag}$.
\item If $f_v\colon \gamma \to f_v\gamma$ is a pivot in $\cale\dag$ then
$H(f_v)\colon H(\gamma)\to H(f_v\gamma)$ is an irreducible morphism in $\cmod B_{T\dag}$.
\item The inverse Auslander-Reiten translate of $H(\gamma)$ is $H(\tau^-\gamma)$, and the Auslander-Reiten translate of $H(\gamma)$ is $H(\tau^+\gamma)$.
\item If $\gamma\in T\dag$ then $H(\tau^- \gamma)$ is an indecomposable projective
$B_{T\dag}$-module and $H(\tau^+\gamma)$ is an indecomposable injective $B_{T\dag}$-module.
\item If the surface $S$ is a disc, then $H$ is an equivalence of categories.
\item If the algebra $B_{T\dag}$ is of finite representation type, then $H$ is an equivalence of categories.
\end{enuma}
\end{thm}
\begin{proof}
Part (a). The restriction of scalars functor is a faithful functor. The fact that $F$ is faithful has been
shown in \cite{BZ}, and $G$ is faithful by Proposition~\ref{prop:G full faithful}. Thus $H$ is faithful.
It also follows from \cite{BZ} that $B_T$-modules in the image of $F$ are string modules and that $F$ is
a dense and faithful functor from $\cale$ to the category of string $B_T$-modules. Consequently, the
$B_{T\dag}$-modules in the image of $H$ are string modules. Now let $M$ be any indecomposable string
module over $B_{T\dag}$, then $\iota(M)$ is a string module in $\cmod B_T$, and hence there exists
an indecomposable $\gamma\in \cale$ such that $F(\gamma)\cong\iota(M)$. Since the arrows in the
cut act trivially on $\iota(M)$, the generalized arc $\gamma$ in $(S,M,T)$ lifts to a permissible
generalized arc $\gamma\dag$ in $(S,M\dag,T\dag)$. Moreover
$H(\gamma\dag) = \res\circ F(\gamma) \cong M$. Thus $H$ is dense onto the category of string
$B_{T\dag}$-modules.
Part (b).
It has been shown in \cite{BR} that the irreducible morphisms starting at an indecomposable string module $M(w)$ can be described by adding hooks or deleting cohooks at the endpoints of the string $w$. If $w$ is a string and $a$ is an arrow such that $aw$ is a string, then let $v_a$ be the maximal non-zero path starting at $s(a)$ whose initial arrow is different from $a$. Then there is an irreducible morphism $M(w)\to M(w_h)$, where $w_h=v_a^{-1}aw$ is obtained from $w$ by adding the ``hook'' $v_a^{-1}a$ at the starting point of $w$.
On the other hand, if $w$ is a string such that there is no arrow $a$ such that $aw $ is a string, then there are two possibilities:
\begin{enumerate}
\item either $w$ contains no inverse arrow, in which case there is no irreducible morphism corresponding to the starting point of $w$,
\item or $w$ is of the form $w=u_a a^{-1} w_c$, where $a$ is an arrow and $u_a$ is the maximal nonzero path ending at $t(a)$. In this case, there is an irreducible morphism $M(w)\to M(w_c)$, and $w_c$ is said to be obtained by deleting the ``cohook'' $u_a a^{-1}$ at the starting point of the string $w$.
\end{enumerate}
In a similar way, there are irreducible morphisms associated to adding a hook or deleting a cohook at the terminal point of the string $w$.
The inverse Auslander-Reiten translation of the string module $M(w)$ corresponds to the string obtained from $w$ by performing the two operations of adding a hook, respectively deleting a cohook, at both endpoints of the string $w$.
Br\"ustle and Zhang have shown that for the Jacobian algebra $B_T$ of a triangulated unpunctured surface $(S,M,T)$, the irreducible morphisms between indecomposable string modules are precisely given by the pivots of the generalized arcs. Adapting their construction to cuts, we show now that the same is true for surface algebras.
Let $\gamma^\dd$ be a generalized permissible arc in $(S,M^\dd,T^\dd)$ which is not in $T^\dd$, let $v^\dd$ and $w^\dd$ be the endpoints of $\gamma^\dd$ and consider the pivot $f_{v^\dd}\gamma^\dd$.
Applying the functor $H$, we obtain a homomorphism between $B_{T^\dd}$-string modules
\[H(f_{v^\dd})\colon H(\gamma^\dd)\longrightarrow H (f_{v^\dd} \gamma^\dd),\]
and we must show that this morphism is irreducible.
Clearly, if the image under the extension of scalars functor $ \iota H (f_{v^\dd})$ is an irreducible morphism in $\cmod B_T$, then $H (f_{v^\dd} )$ is an irreducible morphism in $\cmod B_{T^\dd}$. By \cite{BZ}, this is precisely the case when $\iota H (f_{v^\dd} \gamma^\dd) = F( f_v\zg)$, where $f_v$ is the pivot in $(S,M,T)$ at the vertex $v$ corresponding to the vertex $v^\dd$, and $\zg=G(\gamma^\dd)$.
Thus we must show the result in the case where
$\iota H (f_{v^\dd} \gamma^\dd) \ne F( f_v\zg)$. In this case, $f_v\zg$ consecutively crosses two sides $\alpha, \beta$ of a triangle $\triangle$ in $T$, which corresponds to a \quasitri\ $\triangle^\dd$ in $T^\dd$ in which the sides $\alpha, \beta$ give rise to two opposite sides $\alpha^\dd,\beta^\dd$, and $w$ is a common endpoint of $\alpha$ and $\beta$, which gives rise to two points $w'$ and $w''$ in $\triangle^\dd$; see Figure \ref{fig irredmorph}.
\begin{figure}
\begin{tikzpicture}
{[]
\node[solid,name=a] at (0,0) {};
\node[solid,name=b] at (-2,-2) {};
\node[solid,name=w,label=-20:$w$] at (2,-2) {};
\shade[shading=axis,shading angle=90] (2,-2) rectangle (2.2,.5);
\draw (a) -- (b)
-- (w) node[pos=.35,above] {$\triangle$} node[int,pos=.2] {$\beta$}
-- (a) node[int,pos=.8] {$\alpha$}
(2,-2) -- ++(205:3.5cm)
-- ++(0:4cm)
(2,-2) -- +(190:3cm)
(2,-2) -- +(225:2.5cm) node[solid,name=v,label=-20:$v$] {} node[int,pos=.33] {$\gamma$}
(2,-2) -- +(90:2.5cm) node[solid,name=u,label=45:$u$] {};
{[on background layer] \draw (v) -- (u) node[int,pos=.75] {$f_v\gamma$};}
}
{[xshift=7cm,yshift=-2cm]
\shade[shading=axis,shading angle=90] (0,0) rectangle (.2,3.5);
\draw (0,0) -- (0,2)
node[name=w1,solid,label=-10:$w'$,pos=0] {}
node[name=w2,solid,label=-10:$w''$,pos=1] {}
-- (0,3.5)
node[name=u,solid,label=10:$u\dag$,pos=1] {}
(0,2) -- (-2.5,2)
node[solid,pos=1,name=a] {}
node[int,pos=.6] {$\alpha\dag$}
--(-2.5,0)
node[solid,pos=1] {}
node[pos=.6,right] {$\triangle\dag$}
-- (0,0)
node[int,pos=.25] {$\beta\dag$}
-- ++(205:3cm)
-- ++(0:4cm)
(0,0) -- +(190:2cm)
(0,0) -- +(225:2.5cm) node[solid,name=v,label=-20:$v\dag$] {} node[int,pos=.33] {$\gamma\dag$} ;
\draw (v) ..controls +(50:2cm) and +(225:1cm) .. (w2) node[left,pos=.75,fill=white,opacity=.8] {$f_{v\dag}\gamma\dag$}
(v) .. controls +(50:2cm) and +(-35:1cm) .. (a) ;
}
\draw[->] (3,-1) -- (4,-1);
\end{tikzpicture}
\caption{Proof that irreducible morphisms are given by pivots}\label{fig irredmorph}
\end{figure}
Now in this situation, $f_{v^\dd}\gamma^\dd$ is obtained from $\gamma^\dd$ by moving the endpoint $w'$ along the boundary segment of the quasi-triangle $\triangle^\dd$ to the point $w''$. The generalized arc $f_{v^\dd}\gamma^\dd$ crosses every arc in $T^\dd$ that is crossed by $\gamma^\dd$ and, in addition, $f_{v^\dd}\gamma^\dd$ also crosses every arc in $T^\dd$ that is in the complete fan $\beta_1,\beta_2,\ldots,\beta_j=\beta^\dd$ at $w''$. In particular, its last crossing is with the arc $\beta^\dd$. On the level of the string modules, this corresponds to obtaining the string module $H(f_{v^\dd}\gamma^\dd)$ by adding the hook $\bullet\ot\beta_1\to\beta_2\to \cdots\to \beta^\dd$ to the string module $H(\gamma^\dd)$. Thus the morphism $H(f_{v^\dd})$ is irreducible.
Part (c). This follows directly from (b) and from the description of the Auslander-Reiten translation for string modules in \cite{BR}.
Part (d). The statement about the projective modules follows from (c) and the fact that the indecomposable projective modules are the indecomposable string modules $M(w)$ whose Auslander-Reiten translate is zero. Similarly, the statement about the injective modules follows from (c) and the fact that the indecomposable injective modules are the indecomposable string modules $M(w)$ whose inverse Auslander-Reiten translate is zero.
Part (e).
If $S$ is a disc, then it has been shown in \cite{CCS} that the functor $F$ is an equivalence of
categories. In particular, all $B_T$-modules, and therefore all $B_{T\dag}$-modules, are string modules,
which shows that $H\colon \cale\dag\to \cmod B_T$ is dense.
It remains to show that $H$ is full. Let $\gamma,\gamma'$ be indecomposable objects in $\cale\dag$ and
let $f\in \Hom_{B_{T\dag}}(H(\gamma),H(\gamma'))$. Applying the functor $\iota$ yields
$\iota(f)\in \Hom_{B_{T}}(\iota\circ H(\gamma),\iota\circ H(\gamma'))$. Since $\gamma$ and $\gamma'$
are objects in $\cale\dag$, their images under $F\circ G$ are $B_T$-modules on which the arrows in the cut act
trivially. Thus, applying $\iota\circ\res$ to $F\circ G(\gamma)$ and $F\circ G(\gamma')$ gives back
$F\circ G(\gamma)$ and $F\circ G(\gamma')$, hence $\iota\circ H(\gamma) = F\circ G(\gamma)$ and
$\iota\circ H(\gamma')=F\circ G(\gamma')$. Since the composition $F\circ G$ is full, it follows that there
exists a morphism $f\dag\in \Hom_{\cale\dag}(\gamma,\gamma')$ such that $F\circ G(f\dag)=\iota(f)$.
Applying $\res$ now yields $H(f\dag)=f$. Thus $H$ is full.
Part (f).
If $B_{T\dag}$ is of finite representation type, then all $B_{T\dag}$-modules are
string modules. Indeed, a band module would give rise to infinitely many isoclasses of
indecomposable band modules. Thus $H$ is dense by part (a). Since $B_{T\dag}$ is of finite
representation type it follows that the Auslander-Reiten quiver of $B_{T\dag}$ has only one
connected component and that each morphism between indecomposable $B_{T\dag}$-modules is
given as a composition of finitely many irreducible morphisms. It follows from part (b)
that $H$ is full.
\end{proof}
\begin{remark}
Since $B_{T\dag}$ is a string algebra, it has two types of indecomposable modules: string
modules and band modules, see \cite{BR}. The theorem shows that the string modules are given by the equivalence
classes of permissible generalized arcs.
The band modules can be parameterized by triples
$(\gamma,\ell,\phi)$ where $\gamma$ is a permissible closed curve corresponding to a
generator of the fundamental group of $S$, $\ell$ is a positive integer, and $\phi$ is an automorphism of $k^\ell$.
The Auslander-Reiten structure for band modules is very simple: the irreducible morphisms are of the form
$M(\zg,\ell,\phi) \to M(\zg,\ell+1,\phi)$ and, if $\ell>1$, $M(\zg,\ell,\phi) \to M(\zg,\ell-1,\phi)$; whereas the Auslander-Reiten translation is the identity $\tau M(\zg,\ell,\phi) = M(\zg,\ell,\phi)$.
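Assuming the standard description of almost split sequences for band modules over string algebras from \cite{BR}, these irreducible morphisms assemble, for each band $\zg$, into a homogeneous tube whose almost split sequences have the form
\[
0\longrightarrow M(\zg,\ell,\phi)\longrightarrow M(\zg,\ell-1,\phi)\oplus M(\zg,\ell+1,\phi)\longrightarrow M(\zg,\ell,\phi)\longrightarrow 0
\qquad (\ell>1).
\]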
\end{remark}
\subsection{Finite representation type}
\begin{cor}\label{cor: finite type 1}
The surface algebra $B_{T\dag}$ is of finite representation type if and only if no simple
closed non-contractible curve in $(S,M\dag,T\dag)$ is permissible.
\end{cor}
\begin{proof} Every permissible non-contractible closed curve defines infinitely many
indecomposable band modules. Conversely, if no simple non-contractible closed curve
is permissible, then $B_{T\dag}$ has no band modules. It follows that $B_{T\dag}$ is
of finite representation type.
\end{proof}
\begin{cor}\label{cor: finite type 2}\mbox{}
\begin{enuma}
\item If $S$ is a disc, then $B_{T\dag}$ is of finite representation type.
\item If $S$ is an annulus, then $B_{T\dag}$ is of finite representation type if and only if
there is a \quasitri\ in $T\dag$ with two vertices on one boundary component and two
vertices on the other boundary component.
\end{enuma}
\end{cor}
\begin{proof}
Part (a). If $S$ is a disc, then there are no simple closed non-contractible curves in $S$, and the
result follows from Corollary~\ref{cor: finite type 1}.
Part (b). The fundamental group of $S$ is generated by a closed curve $\gamma$ that goes around
the inner boundary component exactly once. This curve $\gamma$ is permissible precisely when
it does not cross two opposite sides of a \quasitri\ in $T\dag$, and this is precisely the case if there
is no \quasitri\ with two vertices on the interior boundary component and two vertices on the
exterior boundary component.
\end{proof}
\begin{figure}
\begin{tikzpicture}
{[scale=.35]
\draw (0,0) circle[radius=5cm] ;
\filldraw[fill=black!20] (0,0) circle[radius=1cm];
\node[solid,name=a] at (0:5cm) {};
\node[solid,name=b] at (90:5cm) {};
\node[solid,name=c] at (180:5cm) {};
\node[solid,name=d] at (90:1cm) {};
\draw (a) .. controls +(115:4cm) and +(65:4cm) .. (c) node[int] {1}
(a) .. controls +(180:2cm) and +(25:2cm) .. (d) node[int] {2}
(c) .. controls +(0:2cm) and +(155:2cm) .. (d) node[int] {3}
(d) .. controls +(180:.5cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm) node[int] {4}
(270:2cm) .. controls +(0:2cm) and +(190:2cm) .. (a);
\node at (270:6cm) {$(S,M,T)$};
}
{[yshift=-3.5cm,xshift=-2cm]
\node at (.25,0) {$Q_T:$};
\node[name=1] at (1,0) {$1$};
\node[name=3] at (2,0) {$3$};
\node[name=2] at (3,0) {$2$};
\node[name=4] at (2,-1) {$4$};
\path[->] (1) edge (3)
(3) edge(2)
(2) edge[bend right=20] (1)
(4) edge (3) edge (2);
}
{[xshift=7cm,scale=.35]
\draw (0,0) circle[radius=5cm];
\filldraw[fill=black!20] (0,0) circle[radius=1cm];
\node[solid,name=a] at (0:5cm) {};
\node[solid,name=b] at (90:5cm) {};
\node[solid,name=c] at (180:5cm) {};
\node[solid,name=d] at (146:1cm) {};
\node[solid,name=d'] at (33:1cm) {};
\draw (a) .. controls +(115:4cm) and +(65:4cm) .. (c) node[int] {1}
(a) .. controls +(180:2cm) and +(25:2cm) .. (d') node[int] {2}
(c) .. controls +(0:2cm) and +(155:2cm) .. (d) node[int] {3}
(d) .. controls +(180:.5cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm) node[int] {4}
(270:2cm) .. controls +(0:2cm) and +(190:2cm) .. (a);
\node at (270:6cm) {$(S,M\dag,T\dag)$};
}
{[yshift=-3.5cm,xshift=5cm]
\node at (.25,0) {$Q_{T\dag}:$};
\node[name=1] at (1,0) {$1$};
\node[name=3] at (2,0) {$3$};
\node[name=2] at (3,0) {$2$};
\node[name=4] at (2,-1) {$4$};
\path[->] (1) edge (3)
(3) edge[dashed,-] (2)
(2) edge[bend right=20] (1)
(4) edge (3) edge (2);
}
\end{tikzpicture}
\caption{An admissible cut of the annulus whose corresponding surface algebra is of finite representation type}\label{fig:annulus finite type ex}
\end{figure}
\begin{ex}
Let $(S,M,T)$ and $(S,M\dag,T\dag)$ be as in Figure~\ref{fig:annulus finite type ex}.
By Corollary~\ref{cor: finite type 2}, $B_{T\dag}$ is of finite representation type. The
Auslander-Reiten quiver is given in Figure~\ref{fig: AR ex}, where modules
are given by their Loewy series in the upper picture and by the
corresponding generalized permissible arc on the surface in the lower picture.
\end{ex}
\begin{figure}
\begin{tikzpicture}[pin distance=.75,scale=1.4]
{[xscale=1.2]
\foreach \x/\y/\d [count=\n] in
{1/1/1,
2/2/\dimv21, 2/4/3,
3/1/2,3/3/\dimvv4{3\ 2}{\ds 1}, 3/5/\dimv13,
4/2/\dimv{4}{3\ 2}, 4/4/\dimvv{1\ 4}{\ms3 2}{\ds\ 1},
5/1/\dimv43, 5/3/\dimv{1\ 4}{\ 3\ 2}, 5/5/\dimvv421,
6/2/\dimv{1\ 4}{\ms 3}, 6/4/\dimv42,
7/3/4, 7/1/1
}
{
\node[name=\n] at (\x,\y) {$\d$};
}
\path[->] (1) edge (2)
(2) edge (4) edge (5)
(3) edge (5) edge (6)
(4) edge (7)
(5) edge (7) edge (8)
(6) edge (8)
(7) edge (9) edge (10)
(8) edge (10) edge (11)
(9) edge (12)
(10) edge (12) edge (13)
(11) edge (13)
(12) edge (14) edge (15)
(13) edge (14);
}
\end{tikzpicture} \\
\vspace{1.5cm}
\begin{tikzpicture}[scale=.8]
{[xshift=0,yshift=0,scale=.15]
\node[name=1,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(146:1cm) -- (90:5cm);
}
{[xshift=2cm,yshift=2cm,scale=.15]
\node[name=2,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(146:1cm) .. controls +(180:.5cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) -- (90:5cm);
}
{[xshift=2cm,yshift=6cm,scale=.15]
\node[name=3,outer sep=1cm] at (0,0) {};
\filldraw[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(0:5cm) .. controls +(180:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) ..controls +(0:1cm) and +(180:1cm) .. (0:5cm);
}
{[xshift=4cm,yshift=0cm,scale=.15]
\node[name=4,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(146:1cm) .. controls +(180:.5cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(0:1cm) .. (180:5cm);
}
{[xshift=4cm,yshift=4cm,scale=.15]
\node[name=5,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(0:5cm) .. controls +(180:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) -- (90:5cm);
}
{[xshift=4cm,yshift=8cm,scale=.15]
\node[name=6,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(90:5cm) ..controls +(270:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) ..controls +(270:1cm) and +(180:1cm) .. (270:2cm)
(270:2cm) ..controls +(0:1cm) and +(180:1cm) .. (0:5cm);
}
{[xshift=6cm,yshift=2cm,scale=.15]
\node[name=7,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(0:5cm) .. controls +(180:1cm) and +(0:1cm) .. (90:2.5cm)
(90:2.5cm) ..controls +(180:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(0:1cm) .. (180:5cm);
}
{[xshift=6cm,yshift=6cm,scale=.15]
\node[name=8,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(90:5cm) ..controls +(270:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) -- (90:5cm);
}
{[xshift=8cm,yshift=0cm,scale=.15]
\node[name=9,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(0:5cm) .. controls +(180:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) ..controls +(90:1cm) and +(0:1cm) .. (33:1cm);
}
{[xshift=8cm,yshift=4cm,scale=.15]
\node[name=10,outer sep=1cm] at (0,0) {};
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(90:5cm) ..controls +(270:1cm) and +(90:1cm) .. (180:2cm)
(180:2cm) .. controls +(-90:1cm) and +(-180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(0:1cm) .. (180:5cm);
}
{[xshift=8cm,yshift=8cm,scale=.15]
\node[name=11,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(180:5cm) ..controls +(0:1cm) and +(180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(270:1cm) .. (90:5cm);
}
{[xshift=10cm,yshift=2cm,scale=.15]
\node[name=12,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(33:1cm) ..controls +(45:1cm) and +(90:1cm) .. (0:2cm)
(0:2cm) .. controls +(270:1cm) and +(0:1cm) .. (270:2cm)
(270:2cm) .. controls +(180:1cm) and +(270:1cm) .. (180:2cm)
(180:2cm) .. controls +(90:1cm) and +(270:1cm) .. (90:5cm);
}
{[xshift=10cm,yshift=6cm,scale=.15]
\node[name=13,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(180:5cm) ..controls +(0:1cm) and +(180:1cm) .. (270:2cm)
(270:2cm) .. controls +(0:1cm) and +(-90:1cm) .. (0:2cm)
(0:2cm) .. controls +(90:1cm) and +(0:1cm) .. (90:2cm)
(90:2cm) ..controls +(180:1cm) and +(0:1cm) .. (180:5cm);
}
{[xshift=12cm,yshift=4cm,scale=.15]
\node[name=14,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(33:1cm) ..controls +(45:1cm) and +(90:1cm) .. (0:2cm)
(0:2cm) .. controls +(270:1cm) and +(0:1cm) .. (270:2cm)
(270:2cm) .. controls +(180:1cm) and +(0:1cm) .. (180:5cm);
}
{[xshift=12cm,yshift=0,scale=.15]
\node[name=15,outer sep=1cm] at (0,0) {};
\fill[fill=black!20] (0,0) circle[radius=1cm];
\draw (0,0) circle[radius=5cm] (0,0) circle[radius= 1cm]
node[solid] at (0:5cm) {} node[solid] at (90:5cm) {} node[solid] at (180:5cm) {}
node[solid] at (146:1cm) {} node[solid] at (33:1cm) {}
(146:1cm) -- (90:5cm);
}
\path[<-] (1) edge (2)
(2) edge (4) edge (5)
(3) edge (5) edge (6)
(4) edge (7)
(5) edge (7) edge (8)
(6) edge (8)
(7) edge (9) edge (10)
(8) edge (10) edge (11)
(9) edge (12)
(10) edge (12) edge (13)
(11) edge (13)
(12) edge (14) edge (15)
(13) edge (14);
\end{tikzpicture}
\caption{The Auslander-Reiten quiver of the surface algebra in Figure \ref {fig:annulus finite type ex}. Indecomposable modules are represented by their Loewy series in the upper diagram, and by their permissible generalized arc in the lower diagram.}\label{fig: AR ex}
\end{figure}
\section{A geometric description of iterated tilted algebras of type $\AA_n$ of global dimension at most 2}\label{sect 6}
In this section, we apply the results in section 3 to obtain a description of the admissible cut
surface algebras of type $(S,M,T)$ where $S$ is a disc.
In this case, the Jacobian algebra $B_T$ is a cluster-tilted algebra of type $\AA$ and $B_{T\dag}$ is
a quotient of $B_T$ by an admissible cut. It has been shown in \cite{BFPPT} that the algebras
$B_{T\dag}$ obtained in this way are precisely the iterated tilted algebras of type $\AA$ of
global dimension at most two.
\begin{prop}
Every iterated tilted algebra $C$ of type $\AA_n$ of global dimension at most two is
isomorphic to the endomorphism algebra of a partial cluster-tilting object in the cluster
category of type $\AA_{n+\ell}$, where $\ell$ is the number of relations in a minimal
system of relations for $C$.
\end{prop}
\begin{proof}
Let $(S,M\dag,T\dag)$ be the admissible cut corresponding to $C$. Then
$C\cong \End_{\calc{(S,M\dag)}}(T\dag)$
and $(S,M\dag)$ is a disc with $n+\ell$ marked points.
\end{proof}
\begin{ex} Let $C$ be the tilted algebra given by the bound quiver
\begin{tikzpicture}[baseline=-3]
\node[name=1] at (1,0) {1};
\node[name=2] at (2,0) {2};
\node[name=3] at (3,0) {3};
\node[name=4] at (4,0) {4};
\path[->] (1) edge (2) (2) edge (3) (3) edge (4) (1) edge[bend left=20,dashed,-] (3);
\end{tikzpicture}.
Then we have $n=4$ and $\ell=1$, so $C$ is the endomorphism algebra of a partial cluster-tilting object in the cluster category of type $\AA_5$. By \cite{CCS} this category can be seen as the category of diagonals in an octagon, and the partial cluster-tilting object corresponds to the following partial triangulation.
\[\begin{tikzpicture}[scale=.66]
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:3cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (1) -- (3) node[int] {4}
(1) -- (4) node[int] {3}
(1) -- (7) node[int] {2}
(7) -- (5) node[int] {1};
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
\end{tikzpicture}\]
Here the shaded region indicates the unique quasi-triangle. The Auslander-Reiten quiver is described in Figure~\ref{fig tilted}. Note that permissible arcs are not allowed to consecutively cross the two sides labeled $1$ and $3$ of the shaded quasi-triangle. Also note that each of the modules $\begin{array}{c} 3 \vspace{-2pt}\\ 4\end{array}$, $3$, $2$ and $1$ can be represented by two equivalent permissible arcs.
\begin{figure}
\begin{tikzpicture}[pin distance=.75,scale=1.75]
\foreach \x/\y/\d [count=\n] in
{1/1/4,
2/2/\dimv34,
3/1/3,3/3/\dimvv234,
4/2/\dimv23,
5/1/2,
6/0/\dimv12,
7/1/1
}{
\node[name=\n] at (\x,\y) {$\d$};
}
\path[->] (1) edge (2)
(2) edge (3) edge (4)
(3) edge (5)
(4) edge (5)
(5) edge (6)
(6) edge (7)
(7) edge (8);
\end{tikzpicture}
\begin{tikzpicture}[scale=.85]
{[]
\node[name=n1,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (2) -- (4);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=2cm,yshift=2cm]
\node[name=n2,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (2) -- (5);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=4cm,yshift=0cm]
\node[name=n3,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (3) -- (7);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=4cm,yshift=4cm]
\node[name=n4,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (2) -- (8);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=6cm,yshift=2cm]
\node[name=n5,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (3) -- (8);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=8cm,yshift=0cm]
\node[name=n6,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (4) -- (8);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=10cm,yshift=-2cm]
\node[name=n7,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (6) -- (8);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
{[xshift=12cm,yshift=0cm]
\node[name=n8,outer sep=1cm] at (0,0) {};
\foreach \a [count=\n] in {0,1,2,3,4,5,6,7}
\node[name=\n,solid] at (360*\a/8:.75cm) {};
\foreach \x [remember=\x as \lastx (initially 8)] in {1,...,8}
\draw (\lastx.center) -- (\x.center);
\draw (6) -- (1);
{[on background layer] \fill[black!10] (1.center) -- (4.center) --(5.center) -- (7.center) -- cycle;}
}
\path[<-] (n1) edge (n2)
(n2) edge (n3) edge (n4)
(n3) edge (n5)
(n4) edge (n5)
(n5) edge (n6)
(n6) edge (n7)
(n7) edge (n8);
\end{tikzpicture}
\caption{The Auslander-Reiten quiver of a tilted algebra of type $\AA_4$}\label{fig tilted}
\end{figure}
\end{ex}
Hungry Howies
Welcome to the Hungry Howies BestFlag shop. Here you'll find everything you need to promote your Hungry Howies branch. Already built and designed, all you have to do is order the products you want. They're made from the highest-quality materials, and they'll arrive quickly. Don't wait!
Today, a Parisian monument at the Place de la Bastille: the Genius of the Bastille and its bronze column.
This bronze column, known as the "July Column", was raised in the center of the square between 1833 and 1840. It is 52 meters high and 4 meters in diameter. It is topped by a balcony surmounted by a genius holding a broken chain in one hand and a lit torch in the other. (parisrama.fr)
The Genius represents "Liberty taking flight, breaking its chains and sowing light". (parisrama.fr)
The Place de la Bastille is also one of the important Parisian gathering points for public demonstrations, like the Place de la République (a post about it next week).
3 comments:
Remarkable photography! Absolutely beautiful.
I continue to be amazed at all the beautiful art, architecture and sculptures in France. A friend just returned from Paris and showed me her photographs. I was actually able to remember some of the sites from CDPB.
Wonderful. Must. Get. To. Paris.
TITLE: Is there a limit to the number of electrons a single hydrogen atom can have?
QUESTION [2 upvotes]: Is there a limit to the number of electrons a single hydrogen atom can have? If so, what is it, and why? Does the answer to "why" scale to helium?
REPLY [10 votes]: By definition, "hydrogen atom" refers to the neutral system with one proton and one electron, so it cannot hold any extra electrons.
However, protons can hold more than one electron, in which case the system is termed a hydrogen anion. This is a stable, bound system, and the reaction
$$
\mathrm{H}+e^- \to \mathrm{H}^-
\tag 1
$$
releases about $0.75\:\rm eV$ (an energy known as the electron affinity of the hydrogen atom), plus whatever kinetic energy the electron came in with, through the emission of a photon.
(As an aside, the hydrogen anion, and particularly the reaction $(1)$ above together with its converse in the form of photodetachment, is incredibly important ─ this is the reason why the Sun's spectrum is continuous.)
Free atoms of most elements tend to have positive electron affinities, which means that their singly-charged negative anions are stable systems, and they release energy when they capture their first extra electron. There are a few exceptions, though, starting with helium: atoms which have stable closed shells can 'reject' that extra electron, as it's forbidden from sitting in the closed valence shells and it's forced to sit at higher-energy shells that are too far uphill in energy to be stable.
If you want to up the game and go to a second extra electron, though, to get to $\rm H^{2-}$, the game runs out, and indeed it runs out for every element ─ all the second electron affinities are negative. That is, it takes work to cram a second extra electron in, and the resulting dianion will at best be in a metastable state that's ready and jumping to give that energy back out by dissociating into the single anion and a free electron. It's just too hard to try and hold two extra electrons (and their resulting mutual electrostatic repulsion) within the confines of an atomic system.
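As a quick sanity check on the numbers above, one can convert the $0.75\:\rm eV$ electron affinity into the longest photon wavelength that can still photodetach $\rm H^-$. The sketch below is only an illustration; the affinity value is the one quoted in the answer, and $hc \approx 1240\:\rm eV\cdot nm$ is the usual conversion constant.

```python
# Longest wavelength able to drive H^- + photon -> H + e^- (photodetachment).
# Photons shorter than this threshold carry more than the ~0.75 eV binding
# energy, which is why H^- absorbs throughout the visible spectrum.
HC_EV_NM = 1239.84            # h*c in eV*nm
ELECTRON_AFFINITY_EV = 0.75   # electron affinity of hydrogen (from the answer)

threshold_nm = HC_EV_NM / ELECTRON_AFFINITY_EV
print(f"photodetachment threshold ≈ {threshold_nm:.0f} nm")  # ≈ 1653 nm
```

Since visible light spans roughly $400$–$700\:\rm nm$, every visible photon is above threshold, consistent with the continuous solar absorption mentioned above.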
The Bronze U Bench is an exploration in the simplicity of the circle. Each section is rolled from a sheet of raw bronze to form a half circle. The two parts are welded together with one round side up and one round side down, forming a complete circle when viewed directly from the side.
Suitable for outdoor use.
United States
About Christopher Stuart
American, b. 1974
Solo Shows
2016
Group Shows
2017
2017
It’s Thanksgiving week! And for a great majority of us that means a shortened work schedule, 24 hours of unashamed gluttony, and 10 hours of NFL football.
Oh, and of course, a day of pristine, highly anticipated, conflict-free interaction with all those extended family members!
Say what? Your family is more Dunphy than Cleaver? A convergence of opposing personalities that science somehow insists share the same DNA structure as you? You know…
- The outspoken political junkie that leans to the opposite side of the aisle from everyone else in the family (and loves to remind you why).
- The hypochondriac that divulges every detail of awkward medical conditions between forkfuls of cranberries and stuffing.
- The one-upsman who is bound and determined to outdo any accomplishment tabled for celebration.
- Insert your favorite caricature here _______________.
While some of us can’t wait to share pumpkin pie with Aunt Frieda and Cousin Charlie, I also know (because I’ve talked to you) there is a high percentage of the population who is welcoming the Holidays with a homemade cocktail of antacids and nitro tablets. But even if your irritating, dysfunctional, or broken relationships aren’t branches on the family tree, let’s face it – we all have a few of them. Some of us quite a few.
And that’s why this Thanksgiving has given me pause.
A few days ago, we received one of those “audible gasp” text messages that a dear friend had unexpectedly passed away. The wife of my former boss who had also served as my assistant for a few years. Tonight, as my wife and I plan to host our first family Thanksgiving, her husband and daughters are planning a funeral. It’s hard to even type.
In that one grievous moment, everything snapped into perspective.
The petty issues that drive us apart, the convictions we refuse to back down on, the denominational affiliations, structures, arguments, insults, betrayals, irritations, and hurt feelings we just can’t seem to let go of, all seem to (at least momentarily) melt into a puddle of “remind me what were we so upset about again?”
Maybe I’m losing my edge, but in moments like these it just seems like life is way too short to focus on the things that separate us. That irritate us. That offend us. Maybe it’s time to lighten up. To let go of some things we’ve been hanging onto for way too long. To learn some patience. To exercise humility. I don’t know. Maybe?
“Always be humble and gentle. Be patient with each other, making allowance for each other’s faults because of your love.” -Ephesians 4:2 NLT
Wherever you are, whatever you're wrestling with, Happy Thanksgiving. To you, your crazy Uncle Walter, your irritating sister-in-law Sue, and even those squirrelly little nieces who will probably break something important to you while they're screwing around unsupervised in your basement.
Let’s do our best to love them all well this week, to let go of the less important things that keep us apart. At least for the Holidays, and maybe eventually even the every-days.
If you pray, stop right now and ask God to be with the Delp family this week. They’re saying an incredibly difficult goodbye to a beautiful wife and mother.
Excellent timely post. Praying for your friends through this very difficult time. Happy Thanksgiving!
Thanks Cory! I’m afraid the painful moments are often the most clarifying. I hate that it has to be that way. Happy Thanksgiving.
Release that Witch – Chapter 1404: Undetected Abilities
The price pointed to something else.
For example, demons.
After dealing with the technical issues, Roland brought up the Dream World problem that had been troubling him.
Thinking about Hackzord’s unique skill which it was subsequently far more complicated for larger class demons to up grade their selves, this step would help the front side queues considerably. He could then force the fault to outer components or injuries that stopped another party from turning up and lay to Valkries properly. If fortunate enough, a similar prepare could be used to lure other fantastic lords likewise.
The Translations of Beowulf
After dinner, Anna entered the office with a roll of design plans and sat at the opposite end of the mahogany desk. This was the time for their routine exchange, and the only period of the day when they both felt relaxed and content. As long as the research institute did not work through the night, Anna would stay in the office for two or three hours, and their conversations ranged from the day's work to sudden insights and ideas.
The absence of magic power meant that living creatures which relied on it could no longer exist.
During this period, Nightingale would appear and relax by the side table with snacks at hand, reading the comic books illustrated by Scroll that depicted scenes from the Dream World. She would occasionally speak up, making the mood rather placid and warm.
I will seek the Association’s help with this on my next trip to the Dream World.
However, the obstacle was the continuous mountain range to the north of Neverwinter. To reach the ridge deep in the wilderness, they had to rely on the ‘Fire of Heaven’, which also needed the ability to defend against attacks from the sky. The process from research and development to production required time, and during that period it was unknown whether there would be any changes on the front lines.
Both Nightingale and Roland were stunned. “What?”
Roland and Anna looked at each other.
“…” He decided to retract the thought, seeing that Nightingale looked placid and content.
The two were on time scales separated by more than ten thousand years… hundreds of thousands of years… or perhaps even further.
Frankly, this option best suited Roland’s style, and it was the goal he had pursued from the start. Whatever reaction the demons had, for mankind to gain the ability to traverse more than a thousand kilometers to reach their destination was undoubtedly the most reliable outcome.
Or about the tsunamis and storms that devoured most of the survivors.
“So that’s why you were sighing incessantly…” Nightingale curled her lips. “Is it really necessary to draw a connection between the two worlds? What if the images you saw in the Dream World were assembled at random? The more you think about it, the more white hairs you will get. No matter what, some things can never be understood.”
“It depicts a complete story.” Anna finished his sentence.
But looking at the bigger picture, the situation became completely different.
“—Magic power did not exist in this world before.”
The result was that gravity was no longer a force worthy of reverence, and a massive crimson cavity appeared in the universe.
“This is the price.”
And… witches.
Valkries may have realized this point and therefore decided to take the risk.
The only difference between the third option and the previous two was that its element of chance relied almost entirely on humanity’s own efforts to make up for the deficiency.
It’s not too early to enter the 2006 World’s Forage Analysis Superbowl, the annual quality contest that culminates in October at the World Dairy Expo in Madison, WI. Entries are due Sept. 7.
You may enter either of two divisions: the Dairy Division, open to growers with milk production information, or the Commercial Division, open to all other growers. The Dairy Division is divided into hay, haylage and corn silage classes, while the Commercial Division has two classes: hay and baleage.
For entry forms and contest rules, visit or. At both sites, simply click on the Superbowl logo.
USD139.99
Same as JR PROPO "NX8935+"
[Torque] 11.7(4.8V)/16.0(6.6V)/17.8(7.4V)/20.2(8.4V)kg・cm
[Speed] 0.10(4.8V)/0.07(6.6V)/0.06(7.4V)/0.05.
[Dimension] 35mm x 21mm x 40.5mm
[Length of lead harness] 300mm
[Operating Voltage] 4.5V - 8.5V
· Wide Voltage
· Double ball bearings
* Do not use this servo at a voltage other than specified
*.
SKU: 364215376135199
Price: RM573.96
The attorney for the local high school football coach accused of inappropriate contact with a female student said Friday morning that his client denies the accusations against him.
Ed Brass, who was retained late Thursday by Cottonwood coach Josh Lyman, said Lyman “is absolutely adamant that he is denying that he has had any sort of inappropriate physical contact with any student.”
A Granite School District spokesman said Thursday that Lyman had been accused by an underage female student of inappropriate contact, and that the coach, who is also a physical education teacher at the high school, was placed on paid administrative leave.
Lyman, who played receiver for the University of Utah from 1999 to 2001, is facing both administrative and criminal investigations by the Granite School District.
“There’s obviously multiple outcomes of an investigation,” spokesman Ben Horsley said Wednesday, “and we’ll take appropriate action at that time.”
Brass said Friday that he had not yet sat down with Lyman to discuss the best way to defend against the accusations.
“He wants to go back to being a football coach,” Brass said. “He wants his name cleared and he wants to go back to being a football coach.”
I was delighted when I received my new Sleek PPQ iDivine palette 'Me, Myself & Eye' (review coming up soon!) the other day. Yesterday was the first time I was properly out in two weeks, due to my operation. I was meeting up with my friends to go for a meal and the cinema; afterwards I was heading out to a club. Makeup wise, I wanted something that would be suitable for day and night, and something inspired by Autumn and Halloween.
I created a deep purple and rusty orange smokey eye, using my new Sleek palette and my Urban Decay NAKED palette. I was delighted with how it turned out, and if you guys want a video tutorial on it.. comment below!
(PPQ - Sleek palette, NP - NAKED palette)
Inner corner: NP - 'Half Baked' & PPQ 'Golden Silvers'
Middle lid: PPQ 'Supernova'
Outer corner: PPQ 'Chris De Burgundy'
Lower lashline: NP 'Half Baked' & PPQ 'Chris De Burgundy'
Prior to applying my eyeshadow, I primed my eyelids with Urban Decay's Primer Potion (original). I applied my typical pin-up eyeliner wing, using Essence's old-applicator liquid eyeliner. I applied Urban Decay's 24/7 eyeliner pencil in 'Zero' on my lower waterline and I smudged it into my outer corner lower lashes. I wore my favourite Maybelline The Falsies mascara also.
Here's also a picture of me, taken yesterday. The swelling from my operation is completely gone, and my bruises are fading.. thank god! I resembled a guinea-pig last week and my cheeks were huuuge and puffy! Considering I only had the operation last week, I think my face has gone back to normal (I hope).
Due to my operation, my blog posts have been a bit scattered. Expect more blog posts and videos up this week, and thank you for all of your best wishes during recovery.
TITLE: Coupon Collector problem - 2nd moment method
QUESTION [2 upvotes]: Recall the coupon collector problem, namely, drawing numbers (coupons) iid from $[n]$.
I want to prove that the probability that we have drawn all numbers after $k=\frac{n\ln n}{4}$ tries tends to $0$ as $n$ goes to $\infty$.
For that, define $n$ random variables $X_1,\dots, X_n$ for which $X_i =1$ iff the $i$'th coupon wasn't drawn after $k$ tries, and let $X=\sum _i X_i$, which counts the number of coupons we didn't draw after $k$ tries. I want to prove that $\Pr [X=0]\xrightarrow{n \to \infty} 0$.
Observations/facts I have used (all easy to prove/known):
$\Pr [X=0]\leq \frac{\text{Var}[X]}{\mathbb{E}^2[X]}$ .
$\Pr[X_i=1]= \mathbb{E}[X_i]=\left(1-\frac{1}{n} \right)^k$
$\mathbb{E}[X]=n\left(1-\frac{1}{n} \right)^k \ge n^{\frac{1}{2}}$
$\text{Var}[X]=\sum_i \text{Var}[X_i]+\sum_{i\ne j} \text{Cov}[X_i,X_j]\leq \mathbb{E}[X] + \sum_{i\ne j} \text{Cov}[X_i,X_j]$
$\sum_{i\ne j} \text{Cov}[X_i,X_j] \leq \sum_{i\ne j} \left(1-\frac{2}{n} \right)^k \leq n^2 \left(1-\frac{2}{n} \right)^k\leq n^{\frac{3}{2}}$.
Where I used:
$1-x \leq e^{-x}$ for all $x\in\mathbb{R}$,
$1-x \geq e^{-2x}$ for all $x\leq \frac{1}{4} $
and the definition of $k$ in 5 and 3 respectively.
We get:
$$\Pr [X=0]\leq o(1) + \frac{\sum_{i \ne j}\text{Cov}[X_i,X_j]}{\mathbb{E}^2[X]}\leq o(1) + \frac{n^{\frac{3}{2}}}{n}=o(1)+ \sqrt{n}$$ and I need a much better bound.
Appreciate any help/hints as I've been stuck on it for a couple of days.
REPLY [0 votes]: Well, the covariance is in fact non-positive, so the covariance term can be dropped entirely:
$$\text{Cov} \left( X_i , X_j \right ) = \mathbb{E} \left( X_i X_j \right )-\mathbb{E} \left( X_i\right ) \mathbb{E} \left( X_j\right )=\left( 1- \frac{2}{n} \right )^k -\left( 1- \frac{1}{n} \right )^{2k} \leq 0,$$
since $1-\frac{2}{n} \leq \left( 1-\frac{1}{n}\right)^2$ and hence $\left( 1-\frac{2}{n}\right)^k \leq \left( 1-\frac{1}{n}\right)^{2k}$.
Therefore $\text{Var}[X]\leq \mathbb{E}[X]$ by fact 4, and combining facts 1 and 3,
$$\Pr [X=0]\leq \frac{\text{Var}[X]}{\mathbb{E}^2[X]}\leq \frac{1}{\mathbb{E}[X]}\leq n^{-\frac{1}{2}}\xrightarrow{n\to\infty} 0.$$
That finishes the analysis.
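This bound is also easy to check empirically. Below is a quick Monte Carlo sketch (not part of the original argument; the function names are mine) estimating the probability that all $n$ coupons appear within $k=\lfloor n\ln n/4\rfloor$ draws:

```python
import math
import random

def all_collected(n, k):
    """Draw k coupons uniformly from {0, ..., n-1}; True iff every coupon appears."""
    return len({random.randrange(n) for _ in range(k)}) == n

def pr_collected(n, trials=2000):
    """Monte Carlo estimate of Pr[X = 0], i.e. of collecting all n coupons
    within k = floor(n * ln(n) / 4) draws."""
    k = int(n * math.log(n) / 4)
    return sum(all_collected(n, k) for _ in range(trials)) / trials
```

Already at $n=100$ the estimate comes out as $0.0$: on average about $n(1-\frac{1}{n})^k \approx 31$ coupons are still missing after $k \approx 115$ draws, so the event $X=0$ is astronomically unlikely.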
TITLE: The Jouanolou trick
QUESTION [31 upvotes]: In Une suite exacte de Mayer-Vietoris en K-théorie algébrique (1972) Jouanolou proves that for any quasi-projective variety $X$ there is an affine variety $Y$ which maps surjectively to $X$ with fibers being affine spaces. This was used e.g. by D. Arapura to (re)prove that the Leray spectral sequence of any morphism of quasi-projective varieties is equipped from the second term on with a natural mixed Hodge structure.
Here is a proof when $X$ is $\mathbf{P}^n$ over a field $k$: take $Y$ to be the affine variety formed by all $(n+1) \times (n+1)$ matrices which are idempotent and have rank 1. This is indeed affine, since it is cut out by the equations $A^2=A$ together with the condition that the characteristic polynomial of $A$ is $x^n(x-1)$. Moreover, $Y$ maps to $\mathbf{P}^n(k)$ by sending a matrix to its image. The preimage of a point of $\mathbf{P}^n(k)$ is "the set of all hyperplanes not containing a given line", which is isomorphic to an affine space.
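To make the $\mathbf{P}^n$ case concrete: any rank-1 idempotent can be written $vw^T/(w\cdot v)$ with $w\cdot v \neq 0$, the projector onto the line through $v$ along the hyperplane $w^Tx=0$. Here is a small numerical sanity check of this (not from the original post; a sketch using NumPy, with names of my choosing):

```python
import numpy as np

def rank_one_idempotent(v, w):
    """Projector onto the line spanned by v along the hyperplane w^T x = 0.

    Requires w @ v != 0; the Jouanolou map sends this matrix to the
    point [v] of projective space (its image).
    """
    v, w = np.asarray(v, dtype=float), np.asarray(w, dtype=float)
    return np.outer(v, w) / (w @ v)

# A point of Y lying over [1 : 2 : 3 : 4] in P^3
A = rank_one_idempotent([1, 2, 3, 4], [1, 0, 1, 0])

assert np.allclose(A @ A, A)                        # idempotent: A^2 = A
assert np.linalg.matrix_rank(A) == 1                # rank 1
assert np.allclose(A @ [1, 2, 3, 4], [1, 2, 3, 4])  # image is the line through v
```

The fiber over $[v]$ consists of the functionals $w$ with $w\cdot v \neq 0$ up to scaling, i.e. the hyperplanes not containing the line, matching the description above.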
The general (quasi-projective) case follows easily from the above. However, it is not clear how to generalize Jouanolou's trick for arbitrary varieties. Nor is it clear (to me) that this is impossible.
Is there an analogue of the Jouanolou lemma for arbitrary (not necessarily quasi-projective) varieties (i.e. reduced separated schemes of finite type over say an algebraically closed field)?
(weaker version of 1 over complex numbers) Is there, given a complex algebraic variety $X$, an affine variety $Y$ that maps surjectively to $X$ and such that all fibers are contractible in the complex topology? A negative answer would be especially interesting.
(the following question is a bit vague, but if it has a reasonable answer, then it would probably imply a positive answer to 2.) Is there a quasi-projective analog of the topological join of two projective spaces? I.e., if $P_1$ and $P_2$ are two complex projective spaces, is there a quasi-projective variety $X$ which "contains the disjoint union of $P_1$ and $P_2$ and is formed by all affine lines joining a point in $P_1$ with a point in $P_2$"?
Edit 1: in 1. and 2. the varieties are required to be connected (meaning that the set of closed points is connected in the Zariski topology; in 2 one could use the complex topology instead).
Edit 2: as Vanya Cheltsov explained to me, the answer to question 3 is most likely no.
REPLY [13 votes]: Jouanolou's trick has been extended to schemes with an "ample family of line bundles" by Thomason; see Weibel: Homotopy Algebraic K-theory, Proposition 4.4. This includes all smooth varieties and more generally all varieties with torsion local class groups. However, there exist (positive dimensional) proper varieties with no non-trivial line bundles on them; it seems possible that on such varieties there are no affine bundles with affine total space.
\begin{document}
\maketitle
\begin{abstract}
For a compact simply connected simple Lie group $G$ with an involution $\alpha$, we compute
the $G\rtimes \Z/2$-equivariant K-theory of $G$ where $G$ acts by conjugation and $\Z/2$
acts either by $\alpha$ or by $g\mapsto \alpha(g)^{-1}$. We also give a representation-theoretic
interpretation of those groups, as well as of $K_G(G)$.
\end{abstract}
\section{Introduction}\label{intro}
Brylinski and Zhang \cite{brylinski} computed, for (say) a simple compact Lie group $G$,
the $G$-equivariant K-theory ring $K_G(G)$ with $G$ acting on itself by conjugation,
as the ring $\Omega_{R(G)/\Z}$ of K\"{a}hler differentials of $R(G)$ over $\Z$
(see also Adem, Gomez \cite{ag} for related work).
Let $\alpha$ be an involutive automorphism of $G$. Then we can consider
actions of $G\rtimes \Z/2$ on $G$ where $G$ acts by conjugation and the
generator of $\Z/2$ acts
either by the automorphism $\alpha$ or by the map $\gamma:g\mapsto \alpha(g)^{-1}$.
The main result of the present paper is a computation of $K_{G\rtimes \Z/2}(G)$
in both cases as a module over $K_{G\rtimes \Z/2}(*)=R(G\rtimes\Z/2)$.
(Here $R(H)$ is the complex representation ring of a compact Lie group $H$.)
The involutive automorphism $\alpha$ determines a compact symmetric space $G/G^\alpha$,
and we were originally interested in these computations as a kind of topological invariant
of symmetric pairs of compact type. (Recall, in effect, that $G/G^\alpha$ is a connected
component of $G^\gamma$ via the embedding $x\mapsto \alpha(x)x^{-1}$.)
It turns out, however, that the groups $K_{G\rtimes \Z/2}(G)$ are a rather crude
invariant of symmetric pairs, since they essentially only depend on whether $\alpha$ is
an outer or inner automorphism of $G$; if $\alpha$ is an inner automorphism,
$G\rtimes \Z/2$ becomes a central product, which behaves essentially the same as
the direct product from our point of view.
Nevertheless, having a complete calculation is still interesting, as are some of the methods
involved in it. The main ingredient of the method we present here is the construction of
Brylinski-Zhang \cite{brylinski,brylinski1} of the element $dv\in\Omega_{R(G)/\Z}=K_G(G)$
for a finite-dimensional complex representation $v$ of $G$. That construction,
unfortunately, was presented incorrectly in \cite{brylinski} (in fact, the elements
written there are $0$), and so we developed an
alternate construction of those elements using induction from the normalizer of a maximal
torus. However, Brylinski \cite{brylinski1} communicated the correct construction to us.
The construction \cite{brylinski1} is completely geometric, and supersedes our previous
induction method (which, for that reason, we omit from this presentation).
In fact, the construction \cite{brylinski1} turns out to be equivariant with respect
to both the $\alpha$ and $\gamma$ actions. This allows an ``obvious guess" of what
$K_{G\rtimes\Z/2}(G)$ should be. We validate that guess following the methods
of Greenlees and May \cite{gmt}, involving Tate cohomology. (Essentially,
the main point is that under suitable finiteness hypothesis, a $\Z/2$-equivariant
map of $\Z/2$-CW complexes
which is an equivalence non-equivariantly is an equivalence equivariantly because
the Tate cohomology can be computed as an algebraic functor of the ``geometric
fixed points''.)
We realized, however, that the construction \cite{brylinski1} can be
generalized to give a representation-theoretical interpretation of the groups
$K_G(G)$, $K_{G\rtimes \Z/2}(G)$. Such an interpretation is strongly motivated
by the work of Freed, Hopkins and Teleman \cite{fht} who showed that if $\tau$
is a regular $G$-equivariant twisting of $K$-theory on $G$, then the twisted
equivariant K-theory $K_{G,\tau}(G)$ is isomorphic to the free abelian group on
irreducible projective representations
of level $\tau-h^\vee$ (where $h^\vee$ is the dual Coxeter number)
of the loop group $LG$.
This suggests that untwisted K-theory $K_G(G)$ should correspond to representations
at the critical level of the Lie algebra $L\frak{g}$. We found that this is indeed true,
but the representations one encounters are not lowest weight representations
(which occur, for example, in the geometric Langlands program). Instead, the
fixed point space of the infinite loop space
$K_G(G)_0$ turns out to be the group completion of the space of
{\em finite} representations of the loop group $LG$ with an appropriate
topology. Here by a finite representation
we mean a finite-dimensional representation which factors through a projection
$LG\r G^n$ given by evaluation at finitely many points (cf. \cite{ps}). (It is possible to conjecture
that every finite-dimensional representation of $LG$ is finite, although it may
depend on the type of loops we consider; in this paper, we restrict our attention
to continuous loops.) In fact, we also prove that this is true $\Z/2$-equivariantly
with respect to involutions, i.e. that the fixed point space of $K_{G\rtimes\Z/2}(G)_0$ is
the group completion of the space of representations of $LG\rtimes \Z/2$,
where $\Z/2$ acts on $LG$ via its action on $G$ in the case of $\Z/2$ acting
on $G$ by $\alpha$, and simultaneously on $G$ and on the loop parameter
by reversal of direction in the case when $\Z/2$ acts on $G$ by $\gamma$.
The present paper is organized as follows: In Section \ref{ssi}, we review the construction
of Brylinski-Zhang \cite{brylinski,brylinski1} and study its properties with respect to
the involution on $G$. In Section \ref{smi},
we compute the $R(G\rtimes \Z/2)$-modules $K^{*}_{G\rtimes \Z/2}(G)$.
In Section \ref{sconc}, we discuss the computation in more concrete terms,
and give some examples. In Section \ref{srep}, we give an interpretation of
$K^{*}_{G}(G)$ in terms of representations of the loop group $LG$, and
in Section \ref{srep2}, we make that interpretation $\Z/2$-equivariant,
thus extending it to $K^{*}_{G\rtimes \Z/2}(G)$.
\vspace{3mm}
\section{The Brylinski-Zhang construction}
\label{ssi}
Let $G$ be a simply connected compact Lie group, $T$ a maximal torus, $N$ its
normalizer, $W$ the Weyl group. Let $R(G)$ denote, as usual, the
complex representation ring. Recall that if $u_1,\dots, u_n$ are the fundamental
weights of $G$ ($n=rank(G)$), then the weight lattice $T^*=Hom(T,S^1)$
is freely generated by $u_1,\dots, u_n$ and we have
$$R(T)=\Z[u_1,u_{1}^{-1},\dots, u_n,u_{n}^{-1}],$$
$$R(T)\supset R(G)=\Z[\overline{u_1},\dots, \overline{u_n}]$$
where $\overline{u_i}$ is the sum of elements of the $W$-orbit of $u_i$.
Let, for a map of commutative rings $S\r R$, $\Omega_{R/S}$ denote
the ring of K\"{a}hler differentials of $R$ over $S$.
Then one easily sees by the K\"{u}nneth theorem that we have an
isomorphism:
\beg{ebz1}{\begin{array}{l}K^{*}_{T}(T)=\Omega_{R(T)/\Z}\\
=\Z[u_1,u_{1}^{-1},\dots,u_n,u_{n}^{-1}]\otimes \Lambda_\Z[du_1,\dots,du_n]
\end{array}
}
\begin{theorem}\label{tbz}
(Brylinski-Zhang \cite{brylinski})
Suppose $G$ acts on itself by conjugation. Then
there is a commutative diagram of rings
\beg{ebz2}{
\diagram
K_G(G)\dto_\cong \rrto^{res.} && K_T(T)\dto^\cong\\
\Omega_{R(G)/\Z}\dto_=\rrto && \Omega_{R(T)/\Z}\dto^=\\
\Z[\overline{u_1},\dots,\overline{u_n}]
\otimes \Lambda [d\overline{u_1},\dots,d\overline{u_n}]
\rrto^{\subset}&&\Z[u_{1}^{\pm 1},\dots,u_{n}^{\pm 1}]\otimes \Lambda[du_1,\dots,du_n].
\enddiagram
}
Moreover, the isomorphisms in \rref{ebz2} can be chosen in such a way
that the generator $dW\in K_G(G)$ for a complex (finite-dimensional)
$G$-representation $W$ is represented in $G$-equivariant $K$-theory
by a complex of $G$-bundles
\beg{ebz2a}{\diagram
G\times \R\times W\rto^\phi & G\times\R\times W
\enddiagram
}
where $\phi$ is a homomorphism which is iso outside of $G\times \{0\}\times W$,
and is given by
\beg{ebz3}{\begin{array}{ll}
\phi(g,t,w)=(g,t,tw) & \text{for $t<0$}\\
\phi(g,t,w)=(g,t,-tg(w)) & \text{for $t\geq 0$}.
\end{array}
}
\end{theorem}
\begin{proof}
The proof of the diagram \rref{ebz2} is given in \cite{brylinski}. Since there is a mistake
in the formula \rref{ebz3} in \cite{brylinski}, (corrected in \cite{brylinski1}), we
give a proof here. In view of the commutativity of diagram \rref{ebz2},
and injectivity of the diagonal arrows, it suffices to prove the statement for $T$
instead of $G$. By K\"{u}nneth's theorem, it suffices to further consider $T=S^1$.
In the case $T=S^1$, let $z$ be the tautological $1$-dimensional complex representation
of $S^1$ (considered as the unit circle in $\C$). Then the element of
$\widetilde{K}_{S^1}^{-1}(S^1)$ given by \rref{ebz3}
for $W=z^n$ is equal to the element of $\widetilde{K}^{0}_{S^1}(S^2)$
given by $H^n-1$ where $H$ is the tautological bundle on $S^2=\C P^1$
(with trivial action of $S^1$). But it is well known that
$$H=u+1$$
where $u\in \widetilde{K}^{0}_{S^1}(S^2)$ is the Bott periodicity element, and thus,
(recalling that $u^2=0$),
$$H^n-1=(u+1)^n-1=nu.$$
Thus, choosing the Bott element as $u$ gives the required isomorphism in the right hand
column of \rref{ebz2} for $T=S^1$, and thus the statement follows.
\end{proof}
\vspace{3mm}
\begin{proposition}
\label{pl2}
Let $G$ be as above, let $\alpha$ be an involutive automorphism of $G$,
and let $W$ be a finite-dimensional complex representation such that
\beg{ebz6}{\alpha^*(W)\cong W}
(where $\alpha^*(W)$ is the representation of $G$ on $W$
composed with the automorphism $\alpha$).
Then, given the choices described in Theorem \ref{tbz}, if the generator
$a$ of $\Z/2$ acts on $G$ by $\alpha$, then $dW$ is in the image of the
restriction (forgetful map)
\beg{ebz4}{K^{1}_{G\rtimes\Z/2}(G)\r K^{1}_{G}(G).
}
When the generator $a$ of $\Z/2$ acts on $G$ by $\gamma$, $dW$ is in the
image of the restriction (forgetful map)
\beg{ebz5}{K^{A}_{G\rtimes \Z/2}(G)\r K^{1}_{G}(G)
}
where $A$ is the $1$-dimensional real representation of $G\rtimes \Z/2$ given by
the sign representation of the quotient $\Z/2$.
\end{proposition}
\begin{proof}
Recall that when \rref{ebz6} holds, then a choice of the isomorphism \rref{ebz6}
can be made to give a representation of $G\rtimes\Z/2$ on $W$. Moreover, there are
precisely two such choices, differing by tensoring by the complex $1$-dimensional
sign representation of $\Z/2$.
Consider first the case when the generator $a$ of $\Z/2$ acts on $G$ by $\alpha$.
Then consider the $\Z/2$-action on the Brylinski-Zhang construction
\beg{eddiag}{\diagram
G\times \R\times W\dto \rto^\phi &
G\times \R\times W\dto \\
G\times\R\times W\rto^\phi &
G\times\R\times W
\enddiagram
}
where the generator of $\Z/2$ acts by
$$
\diagram
(g,t,w)\dto|<\stop &(g,t,w)\dto|<\stop \\
(\alpha(g),t,\alpha(w))&
(\alpha(g),t,\alpha(w)).
\enddiagram
$$
When the generator $a$ of $\Z/2$ acts on $G$ by $\gamma$,
consider the $\Z/2$-action \rref{eddiag} where the generator of $\Z/2$ acts by
$$
\diagram
(g,t,w)\dto|<\stop &(g,t,w)\dto|<\stop \\
(\alpha(g)^{-1},-t,\alpha(w)) &
(\alpha(g)^{-1},-t,\alpha(g^{-1}w))
\enddiagram
$$
(Note that $\alpha(\alpha(g)\alpha(g^{-1}w))=w$, so the action of $a$ on the
right hand side is involutive. One readily sees that it also intertwines the
action of $G$ via the automorphism $\alpha$.)
To verify that the homomorphism $\phi$ commutes with the involution in
the case of $a$ acting on $G$ via $\gamma$, since we already know the
action is involutive, it suffices to consider $t<0$. In this case, we have
$$
\diagram
(g,t,w)\rto|<\stop
\dto|<\stop &
(g,t,tw)\dto|<\stop\\
(\alpha(g)^{-1},-t,\alpha(w))\rto|<\stop &
(\alpha(g)^{-1},-t,t\alpha(g)^{-1}\alpha(w)).
\enddiagram
$$
\end{proof}
\vspace{3mm}
\section{The computation of equivariant $K$-theory}
\label{smi}
In this section, we will compute $K_{G\rtimes \Z/2}(G)$ where the generator
$a$ of $\Z/2$ acts by $\alpha$ or $\gamma$. First observe that in both cases, the
generator $a$ of $\Z/2$ acts on $K^{*}_{G}(G)\cong \Omega_{R(G)/\Z}$ by automorphisms
of rings. The action on $R(G)$ is given by a permutation representation given by the permutation
of irreducible representations by the automorphism $\alpha$. Alternately, one may think in terms
of the action of $\alpha$ on Weyl group orbits of weights. Let $u_1,\dots,u_n$ be the fundamental
weights of the simply connected group $G$ determined by the Lie algebra $\frak{g}$.
Let $\sigma$ be the involution on $\{1,\dots,n\}$ given by
$$\alpha^*\overline{u_i}=\overline{u_{\sigma(i)}}.$$
Consider now the short exact sequence
\beg{emi1}{1\r G\r G\rtimes \Z/2\r \Z/2\r 1.
}
By $\Z/2_+$, we shall mean the suspension spectrum of the $G\rtimes\Z/2$-space $\Z/2_+$ by the
action \rref{emi1}. We define $S^A$ by the cofibration sequence
$$\diagram\Z/2_+\rto^\iota & S^0\r S^A
\enddiagram
$$
where $S^0$ is the $G\rtimes \Z/2$-sphere spectrum and $\iota$ is the collapse map
(for terminology, see \cite{lms}). We have, of course,
$$K^{*}_{G\rtimes\Z/2}S^0=R(G\rtimes\Z/2)_{even},$$
$$K^{*}_{G\rtimes\Z/2}\Z/2_+=R(G)_{even}.$$
Here, the subscript $?_{even}$ means that the given $R(G\rtimes\Z/2)$-module is located in
the even dimension of the $\Z/2$-graded ring $K^*$. Furthermore, we have an exact sequence
$$
0\r K^{0}_{G\rtimes\Z/2}S^A\r R(G\rtimes \Z/2)
\r R(G)\r K^{1}_{G\rtimes \Z/2}S^A\r 0
$$
where the middle arrow is restriction. Therefore,
$K^{1}_{G\rtimes\Z/2}S^A$ is the free abelian group on irreducible $G$-representations which
do not extend to $G\rtimes \Z/2$. Recall that $\Z/2$ acts on the set of isomorphism
classes of irreducible representations of
$G$; $R(G\rtimes\Z/2)$ is the free abelian group on the regular orbits, and on two copies
of each fixed orbit. Therefore, $K^{0}_{G\rtimes\Z/2}S^A$ can be thought of as the free
abelian group on irreducible $G$-representations which do extend to $G\rtimes \Z/2$-representations.
Equivalently,
$$K^{0}_{G\rtimes\Z/2}S^A=\Z\{u\in T^*\; \text{dominant}\;|\:\alpha^*\overline{u}=\overline{u}\},$$
$$K^{1}_{G\rtimes\Z/2}S^A=\Z\{\text{regular $\alpha^*$-orbits
of dominant weights}\}.$$
Let $S^{(\epsilon)}$ for $\epsilon\in \Z$ denote $S^{A-1}=\Sigma^{-1}S^A$
resp. $S^0$ depending on whether $\epsilon$
is odd or even.
\vspace{3mm}
Let $\succ$ denote any chosen linear ordering of the set of subsets of
$\{1,\dots,n\}$.
Let $I_\sigma$ be the set of subsets
\beg{edefisigma}{\{i_1<\dots<i_k\}\subseteq \{1,\dots,n\}}
such that
$$\{\sigma(i_1),\dots,\sigma(i_k)\}\succ\{i_1,\dots,i_k\}$$
and let $J_\sigma$ be the set of subsets \rref{edefisigma} such that
$$\{\sigma(i_1),\dots,\sigma(i_k)\}=\{i_1,\dots,i_k\}.$$
Let $orb( S)$ for a $\sigma$-invariant set $S$ denote the
number of regular (=$\Z/2$-free) $\sigma$-orbits of $S$ when $a$ acts on $G$ by $\alpha$, and
of all $\sigma$-orbits of $S$ when $a$ acts on $G$ by $\gamma$.
\vspace{3mm}
\begin{theorem}\label{t1}
There exists an isomorphism of $R(G\rtimes\Z/2)$-modules
\beg{emi2}{\begin{array}{l}K^{*}_{G\rtimes\Z/2}(G)\cong\\[3ex]
K^{*}_{G\rtimes\Z/2}(
\displaystyle\bigvee_{\{ i_1<\dots<i_k\}\in I_\sigma}
\Sigma^k\Z/2_+
\vee
\displaystyle\bigvee_{\{ i_1<\dots<i_k\}\in J_\sigma}
\Sigma^k S^{(orb\{i_1,\dots,i_k\})})
\end{array}}
$G\rtimes \Z/2$
acts on the wedge summands on the right hand side of \rref{emi2} through the projection to $\Z/2$.
\end{theorem}
\begin{proof}
We first construct a $G\rtimes\Z/2$-equivariant stable map $u_S$ of each wedge summand
of \rref{emi2} into the $E_\infty$-algebra
\beg{emi3}{F(\Lambda_+,K_{G\rtimes \Z/2})
}
where $\Lambda$ is $G$ on which $g\in G\subset G\rtimes \Z/2$ acts by conjugation
and $\alpha$ acts by $\gamma$ such that the wedge of all the maps \rref{emi3} induces
an isomorphism on $G$-equivariant coefficients. Here $F(?,?)$, as usual,
denotes the (equivariant) function spectrum, see \cite{lms}.
First, recall the isomorphism \cite{brylinski}
\beg{ecomment1}{
\begin{array}{l}
\pi_*(F(\Lambda_+,K_{G\rtimes\Z/2})^G)=K_{G}^{*}(G)\\[2ex]
\cong \Omega_{R(G)/\Z}=\Z[\overline{u_1},\dots,\overline{u_n}]
\otimes \Lambda[d\overline{u_1},\dots,d\overline{u_n}]
\end{array}
}
of Theorem \ref{tbz}, induced by \rref{ebz1}.
For regular (=$\sigma$-free) orbits, the map we need
follows from $G$-equivariant considerations: Send, $G$-equivariantly,
$$S^k\r F(\Lambda_+,K_{G\rtimes\Z/2})$$
by the generator
$$d\overline{u_{i_1}}\wedge\dots\wedge d\overline{u_{i_k}}$$
of \rref{ecomment1} and then
use the fact that $(G\rtimes\Z/2)\rtimes?$ is the left adjoint to the forgetful functor from
$G\rtimes\Z/2$-spectra to $G$-spectra (cf. \cite{lms}).
Next, for $\sigma$-invariant sets $1\leq i_1<\dots<i_k\leq n$ which consist of a single
orbit, we have $k\leq 2$. If $k=1$, the map follows from Proposition \ref{pl2}. If $k=2$, we have
a $G$-equivariant map
$$u:S^1\r F(\Lambda_+,K_{G\rtimes \Z/2})$$
given as the generator $d\overline{u_{i_1}}\wedge d\overline{u_{i_2}}$ of
$\pi_*(F(\Lambda_+,K_{G\rtimes\Z/2})^G)=K_{G}^{*}(G)$.
The $G\rtimes\Z/2$-equivariant map
$$u_{\{i_1,i_2\}}:S^{1+A}\r F(\Lambda_+,K_{G\rtimes\Z/2})$$
we seek may then be defined as
$$N_{G}^{G\rtimes\Z/2}u$$
where $N$ is the multiplicative norm (see \cite{gmc,hhr}). Finally, we may define
$$u_{S_1\amalg\dots\amalg S_\ell}:=u_{S_1}\wedge\dots\wedge u_{S_\ell},$$
using Bott periodicity to identify $S^2$ with $S^{2A}$. Thus, taking a wedge
sum of these maps, we have a map
\beg{emi4}{\begin{array}{c}
X:=F(\bigvee \Sigma^k\Z/2_+\vee\bigvee \Sigma^kS^{(orb\{i_1,\dots,i_k\})},K_{G\rtimes\Z/2})\\[3ex]
\downarrow f\\[3ex]
Y:=F(\Lambda_+,K_{G\rtimes\Z/2})
\end{array}
}
of $K_{G\rtimes\Z/2}$-modules, inducing an isomorphism of $G$-equivariant coefficients (using the
Wirthm\"{u}ller isomorphism \cite{lms} and, again, Bott periodicity). This implies that \rref{emi4} induces
an equivalence on Borel cohomology:
\beg{emi5}{\diagram
F(E\Z/2_+,f^G):F(E\Z/2_+,X^G)^{\Z/2}\rto^(.6)\sim &
F(E\Z/2_+,Y^G)^{\Z/2}.
\enddiagram
}
We need to conclude that \rref{emi4} induces an equivalence on
$G\rtimes\Z/2$-fixed points, i.e. that we have an equivalence
\beg{emi6}{\diagram(f^G)^{\Z/2}:(X^G)^{\Z/2}
\rto^(.6)\sim &
(Y^G)^{\Z/2}.
\enddiagram
}
To this end, consider $X^G$, $Y^G$ as $\Z/2$-equivariant spectra.
\vspace{3mm}
\begin{lemma}
\label{lmi1}
Denote
$$R=R_G:=(R(G\rtimes\Z/2)/ind_{G}^{G\rtimes\Z/2}R(G))\otimes\Q,$$
$$\widehat{R}=\widehat{R_G}:=(R(G\rtimes\Z/2)/ind_{G}^{G\rtimes\Z/2}R(G))^{\wedge}_{2}\otimes\Q.$$
Then for the $\Z/2$-spectra $Z=X^G,Y^G$, the spectra $\Phi^{\Z/2}Z$, $\widehat{Z}$
(see \cite{lms, gmt})
are rational, and we have an isomorphism
\beg{emi7}{\widehat{Z}_*\cong(\Phi^{\Z/2}Z)_*\otimes_R \widehat{R}}
natural with respect to the map \rref{emi4}. (Here $ind_{G}^{G\rtimes \Z/2}:R(G)\r R(G\rtimes \Z/2)$
denotes the induction, and $(?)^{\wedge}_{2}$ denotes completion at $2$.)
\end{lemma}
\vspace{3mm}
\noindent
{\em Proof of \rref{emi6} using Lemma \ref{lmi1}:} Note that \rref{emi4} also implies an equivalence on
Borel homology:
\beg{emi8}{\diagram
(E\Z/2_+\wedge f^G):(E\Z/2_+\wedge X^G)^{\Z/2}\rto^(.6)\sim &
(E\Z/2_+\wedge Y^G)^{\Z/2}
\enddiagram
}
and Tate cohomology
\beg{emi9}{\diagram\widehat{f^G}:\widehat{X^G}\rto^\sim &\widehat{Y^G}.
\enddiagram
}
By \rref{emi7}, however, the map
$$\Phi^{\Z/2}f^G:\Phi^{\Z/2}(X^G)\r\Phi^{\Z/2}(Y^G)$$
is also an equivalence, and together with \rref{emi8}, this implies \rref{emi6}.
\qed
\vspace{3mm}
\noindent
{\em Proof of Lemma \ref{lmi1}:} The spectra $\Phi^{\Z/2}(M)$, $\widehat{M}$ are rational
for any cell module $M$ over the $E_\infty$-ring spectrum $K_{\Z/2}$ by
a theorem of Greenlees and May \cite{gmt} which asserts this for $M=K_{\Z/2}$.
Additionally, the methods of \cite{gmt} (or a direct calculation) readily imply that
\beg{emi10}{\begin{array}{l}
\Phi^{\Z/2}(K_{G\rtimes \Z/2}^{G})_*=R,\\
(\widehat{K_{G\rtimes \Z/2}^{G}})_*=\widehat{R}.
\end{array}
}
Now $?\otimes_R \widehat{R}$ is clearly an exact functor on $R$-modules, so by using
the long exact sequence in cohomology, it suffices to filter $X^G$, $Y^G$ both into finite
sequences of cofibrations such that the quotients $Z$ satisfy \rref{emi7}.
In the case of $X^G$, the quotients are either of the form $F(\Z/2_+,K_{G\rtimes\Z/2}^{G})$
for which the statement is trivial (both the geometric and Tate theory are $0$) or
$K_{G\rtimes\Z/2}^{G}$, which is covered by \rref{emi10}, or $\Sigma^A(K_{G\rtimes\Z/2}^{G})$,
which is a cofiber of modules of the first two types.
In the case of $Y^G$, use the decomposition of $\Lambda$ into $G$-orbits with respect
to conjugation of skeleta of the fundamental
alcove, applying also the fact that $\gamma$ acts trivially on $T$. This is, in fact, a
$G\rtimes\Z/2$-CW decomposition, where the cells are of type $H\rtimes\Z/2$ where $H$
is a compact Lie subgroup of $G$ associated to a sub-diagram of the affine Dynkin diagram.
Applying the computation of geometric and Tate $\Z/2$-fixed points of $K_H$, we are done if we can prove
\beg{emi11}{\widehat{R_H}=R_H\otimes_{R_G}\widehat{R_G}.
}
To this end, put
$$R_{H}^{0}:=R(H\rtimes\Z/2)/ind_{H}^{H\rtimes\Z/2}R(H).$$
Recall from \cite{segalrep} that $R(G\rtimes\Z/2)$ is a Noetherian ring
and $R(H\rtimes\Z/2)$ is a finite module over it. Therefore, $R^{0}_{H}$ is a finite $R^{0}_{G}$-module.
Now for any Noetherian ring $P$ and a finite $P$-module $M$, we have
\beg{emi12}{M^{\wedge}_{2}=M\otimes_P P^{\wedge}_{2}.}
(Consider the presentation
$$\bigoplus_n P \r \bigoplus_m P \r M \r 0$$
and right exactness of $(?)^{\wedge}_{2}$ in this case.)
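In more detail: since $(?)^{\wedge}_{2}$ is exact on finitely generated modules over the
Noetherian ring $P$, and both $(?)^{\wedge}_{2}$ and $?\otimes_P P^{\wedge}_{2}$ take finite sums of
copies of $P$ to finite sums of copies of $P^{\wedge}_{2}$, applying the two functors to the
presentation yields two right exact sequences
$$\bigoplus_n P^{\wedge}_{2} \r \bigoplus_m P^{\wedge}_{2} \r M^{\wedge}_{2}\r 0,$$
$$\bigoplus_n P^{\wedge}_{2} \r \bigoplus_m P^{\wedge}_{2} \r M\otimes_P P^{\wedge}_{2}\r 0,$$
exhibiting $M^{\wedge}_{2}$ and $M\otimes_P P^{\wedge}_{2}$ as cokernels of the same map.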
Rationalizing, \rref{emi12} implies \rref{emi11}.
\qed
\vspace{3mm}
\section{Concrete computations and examples}\label{sconc}
Let, again, $G$ be a simply connected simple compact Lie
group and the generator $a$ of $\Z/2$
act on the target $G$ by $\alpha$ or by $\gamma$.
To calculate $K_{G\rtimes \Z/2}(G)$ as an $R(G\rtimes \Z/2)$-module,
in view of Theorem \ref{t1}, it suffices to calculate the action of the
automorphism $\alpha$ on the Weyl group orbits of the fundamental weights
of the group $G$.
The key observation is that if $\alpha$ is an inner automorphism, then the
action is trivial simply because an inner automorphism does not change
the isomorphism class of a representation.
Outer automorphisms of simply connected simple Lie groups correspond
to automorphisms of the Dynkin diagram, and therefore are necessarily
trivial for all types except $A,D$ and $E_6$. Furthermore, the
permutation representation of $\Z/2$ on orbits of fundamental weights is isomorphic
to the permutation representation on the set of simple roots (using the bijection
between fundamental weights and simple roots: A fundamental weight is
a point of the weight lattice with which one simple root has minimal positive
inner product, and the other simple roots have inner product $0$).
Recall also that an automorphism $\alpha$ of a semisimple Lie algebra $\frak{g}$
is outer if and only if $rank(\frak{g}^\alpha)<rank(\frak{g})$
(cf., \cite{helgason}).
From the point of view of symmetric pairs $(\frak{g}, \frak{g}^\alpha)$ of compact type,
we refer to the classification of such pairs (cf., \cite{helgason}, pp. 532-534).
For types $AI$ and $AII$ (corresponding to compact simply connected symmetric
spaces $SU(n)/SO(n)$, $SU(2n)/Sp(n)$), the automorphism is outer,
so the fundamental weights $v_1,\dots,v_{n-1}$ are transformed by
$$\alpha(\overline{v_i})=\overline{v_{n-i}}.$$
For types $AIII$ and $AIV$ (corresponding to simply connected compact
symmetric spaces $U(p+q)/(U(p)\times U(q))$), the automorphism $\alpha$ is
inner, so all the fundamental weights are fixed.
For types $DI$-$DIII$ (corresponding to compact simply connected symmetric
spaces of type $SO(p+q)/SO(p)\times SO(q)$ with $p+q$ even),
the automorphism $\alpha$ is outer if $p$ (or, equivalently, $q$) is odd,
in which case $\alpha$ interchanges the two fundamental weights
corresponding to the spin representations. When $p$ (or, equivalently, $q$)
is even, the automorphism $\alpha$ is inner and thus, again, the action
is trivial.
For $E_6$, the four different compact simply connected symmetric spaces
are $EI$, $EII$, $EIII$, $EIV$. The automorphism $\alpha$ is outer for
$EI$ and $EIV$, interchanging two pairs of fundamental weights
(and leaving two fundamental weights fixed). For $EII$, $EIII$, the
automorphism is inner and all the fundamental weights are fixed.
\vspace{3mm}
\subsection{Two explicit examples} \label{ssex}
Let us work out explicitly the cases of $G=SU(2)$, $G=SU(3)$
where the involution $\alpha$ is of type $AI$, and $a$ acts on $G$ by
$\gamma$. In the case of $G=SU(2)$,
denote by $x$ the fundamental weight and by $z$ the representation with character $x+x^{-1}$,
as well as its chosen extension to $SU(2)\rtimes \Z/2$. Denote further by $q$ the complex
sign representation of $\Z/2$. Then
$$K_{SU(2)\rtimes \Z/2}^{*}(S^0)=R(SU(2)\rtimes\Z/2)=\Z[z,q]/(q^2-1)_{even}$$
and
$$K_{SU(2)\rtimes\Z/2}^{*}(S^A)=Ker(res:R(SU(2)\rtimes\Z/2)\r R(SU(2)))=\Z[z]_{even},$$
generated by $q-1\in R(SU(2)\rtimes\Z/2)$. The argument of K-theory on the right hand
side of \rref{emi2} is
$$S^0\vee S^A.$$
Thus, we have
$$K^{*}_{SU(2)\rtimes\Z/2}(SU(2))=(\Z[z,q]/(q^2-1)\oplus \Z[z])_{even}
$$
as a $\Z[z,q]/(q^2-1)$-module.
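Explicitly, the restriction $res:R(SU(2)\rtimes\Z/2)\r R(SU(2))$ sends $z\mapsto z$ and
$q\mapsto 1$, so an element $f(z)+g(z)q$ restricts to $f(z)+g(z)$; it lies in the kernel
if and only if $g=-f$, i.e. if and only if it is a multiple of $q-1$. Thus,
$$Ker(res)=(q-1)\cdot \Z[z,q]/(q^2-1)\cong \Z[z]\{q-1\},$$
recovering the description of $K_{SU(2)\rtimes\Z/2}^{*}(S^A)$ above.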
For $G=SU(3)$, we have
$$K_{SU(3)}^{*}(*)=R(SU(3))=\Z[z,t]$$
where $z,t$ are sums of orbits of the two fundamental weights, which
are formed by vertices of the two smallest equilateral triangles with center $0$
in the honeycomb lattice. We have
$$K_{SU(3)\rtimes\Z/2}^{*}(*)=R(SU(3)\rtimes\Z/2)=\Z[\sigma_1,\sigma_2,q]/(q^2-1,(q-1)\sigma_1)$$
where $\sigma_i$ are the elementary symmetric polynomials in $z,t$ and $q$ is the complex
sign representation of $\Z/2$ (more precisely, a non-canonical choice has to be made in lifting $\sigma_2$ to
a representation of $SU(3)\rtimes \Z/2$, but that is not important for our calculation). To compute
$K^{*}_{SU(3)\rtimes\Z/2}(S^A)$, consider the exact sequence
$$
\diagram
0\dto\\
\Z[\sigma_2]\dto^{\displaystyle q-1}\\
\Z[\sigma_1,\sigma_2,q]/(q^2-1,(q-1)\sigma_1)\dto^{\displaystyle q\mapsto 1}\\
\Z[z,t]\dto^{\scriptstyle\protect\begin{array}{l}\protect z\mapsto 1\\ \protect t\mapsto -1\end{array}}\\
\Z[\sigma_1,\sigma_2]\{z\}\dto\\
0.
\enddiagram
$$
The middle arrow is the restriction
$$K_{SU(3)\rtimes\Z/2}^{0}(S^0)\r K_{SU(3)}^{0}(S^0),
$$
so the kernel resp. cokernel is $K^{0}_{SU(3)\rtimes\Z/2}(S^A)$
resp. $K^{1}_{SU(3)\rtimes\Z/2}(S^A)$. The argument of K-theory on the
right hand side of \rref{emi2} is
$$S^0\vee \Sigma \Z/2_+\vee S^{1+A},$$
so we have for the maximal rank pair $(\frak{su}(3), \frak{h})$:
$$K^{0}_{SU(3)\rtimes\Z/2}(SU(3))=\Z[\sigma_1,\sigma_2,q]/(q^2-1,(q-1)\sigma_1)\oplus \Z[\sigma_1,\sigma_2],$$
$$K^{1}_{SU(3)\rtimes\Z/2}(SU(3))=\Z[z,t]\oplus \Z[\sigma_2]$$
as $R(SU(3)\rtimes\Z/2)=\Z[\sigma_1,\sigma_2,q]/(q^2-1,(q-1)\sigma_1)$-modules.
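Regarding the exact sequence above, note that modulo the relation $(q-1)\sigma_1=0$, we have
$$(q-1)f(\sigma_1,\sigma_2)=(q-1)f(0,\sigma_2)$$
for any polynomial $f$, so the kernel of the map $q\mapsto 1$ is precisely
$(q-1)\cdot\Z[\sigma_2]$, on which multiplication by $q-1$ is injective; this verifies
exactness at the second term.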
\vspace{3mm}
\section{Representation-theoretical interpretation of $K_G(G)$}
\label{srep}
Freed, Hopkins and Teleman \cite{fht} showed that for a ``regular'' twisting $\tau$,
$K^{*}_{G,\tau}(G)$ is the free abelian group on irreducible lowest weight
representations of level $\tau-h^\vee$ of the universal central extension $\widetilde{LG}$
of the loop group $LG$. Therefore, $0$ twisting (which is not regular) corresponds to the
critical level.
We found that $K^{*}_{G}(G)$ is, indeed, related to representations of $LG$, but
not lowest weight representations (in the sense that they would be quotients of
the corresponding vertex algebra - see, e.g. \cite{langlands}).
Instead, we encounter finite representations. Denote by $e_x:LG\r G$ the evaluation
at a point $x\in S^1$. Call a finite-dimensional complex representation of $LG$ {\em finite}
if it factors through a projection of the form
\beg{esrep1}{e_{x_1}\times\dots\times e_{x_n}: LG\r G^n.}
These representations are briefly mentioned in the book \cite{ps}. One may conjecture
that all finite-dimensional representations of $LG$ are finite, although this may depend on
what kind of loops we consider; in this paper, we restrict attention to {\em continuous} loops
with the compact-open topology.
Let us first define the {\em finite representation space} $Rep(\Gamma)$ of a topological group $\Gamma$.
Since we are about to do homotopy theory, let us work in the category of compactly generated spaces.
A finite-dimensional representation of $\Gamma$ is a finite-dimensional complex vector space $V$
together with a continuous homomorphism
\beg{esrep1a}{\Gamma\r GL(V).}
Continuous homomorphisms are, in particular, continuous maps.
Thus, the set of all representations \rref{esrep1a} for a fixed $V$ forms a topological space, denoted by
$Rep(\Gamma, V)$ with respect to the compact-open topology made compactly generated
(cf.\cite{may}). Consider the topological category $C(\Gamma)$ (both objects and morphisms
are compactly generated spaces, the source $S$
and the target $T$ are fibrations and $Id$ is a cofibration) with objects
\beg{esrep2}{\coprod_V Rep(\Gamma,V)
}
and morphisms
\beg{esrep3}{\coprod_{V,W} Rep(\Gamma,V)\times GL(V,W)
}
(where, say, $S$ is the projection and $T(\phi)$ is the representation on $W$ given
by conjugating the representation on $V$ by $\phi$; $GL(V,W)=\emptyset$
when $dim(V)\neq dim(W)$). Define the {\em representation space}
$Rep(\Gamma)$ as the bar construction on the category $C(\Gamma)$.
For $\Gamma=LG$, let $Rep_0(LG,V)$ denote the subspace of $Rep(LG,V)$ (with the induced
topology made compactly generated) consisting of finite representations. Let $C_0(LG)$
be the subcategory of $C(LG)$ defined by replacing $Rep(LG,V)$ with $Rep_0(LG,V)$,
and let $Rep_0(LG)$ be the bar construction on $C_0(LG)$. Our first aim is to identify a group
completion of the
weak homotopy type of $Rep_0(LG)$ with $K_G(G)$. To this end, we need a few technical
tools. First of all, let $IG$ denote the group of continuous paths
$\omega:[0,1]=I\r G$ with the
compact-open topology, and define $C_0(IG)$, $Rep_0(IG)$ analogously with the
above, replacing $LG$ with $IG$.
\begin{lemma}\label{lrep1}
The inclusion $G\r IG$ via constant maps induces a homotopy equivalence
$$\iota:Rep(G)\r Rep_0(IG).$$
\end{lemma}
\begin{proof}
Define a map
$$\kappa:Rep_0(IG)\r Rep(G)$$
by composing with $e_0$. Then we have $\kappa\iota=Id$.
Consider now the homotopy $h_t:IG\r IG$ given on $f:I\r G$ by $h_t(f)(x)=f(tx)$.
One easily checks that $h_t$ induces a homotopy on $Rep_0(IG)$ between $\iota\kappa$
and $Id$.
\end{proof}
Next, we define a simplicial symmetric monoidal category as follows: On the simplicial
$n$-level, we take the category $Rep_0(IG\times G^n)$ by which we mean the subcategory
of $Rep(IG\times G^n)$ on representations which factor through evaluation of $IG$
on finitely many points. We let degeneracies be given by projection
$$IG\times G^{n+1}\r IG\times G^n,$$
and faces by the maps
$$\diagram IG\rto^{ev_1} &G
\rto^\Delta & G\times G,
\enddiagram$$
$$\diagram G
\rto^\Delta & G\times G,
\enddiagram$$
$$\diagram IG\rto^{ev_0} &G
\rto^\Delta & G\times G,
\enddiagram$$
where $\Delta$ is the diagonal. It makes sense to denote this simplicial symmetric monoidal
category by $CH_{\C_2}(C_0(IG),C(G))$ in reference to a ``Hochschild homology complex'',
and its realization by
\beg{esrephh1}{CH_{Rep(\{e\})}(Rep_0(IG), Rep(G)).}
In fact, we will also be interested in the corresponding spaces
$$CH_{\C_2}(C(G),C(G)),$$
and their realizations
\beg{esrephh2}{CH_{Rep(\{e\})}(Rep(G),Rep(G))}
defined analogously replacing $IG$ by the subgroup of constant paths in $G$.
Note also that both spaces \rref{esrephh1}, \rref{esrephh2} are $E_\infty$ spaces,
since they are classifying spaces of symmetric monoidal categories.
Lemma
\ref{lrep1} then immediately extends to the following
\begin{lemma}\label{lrep2}
Inclusion of constant loops induces a homotopy equivalence of $E_\infty$ spaces
\beg{esrephh3}{CH_{Rep(\{e\})}(Rep_0(IG), Rep(G))\r CH_{Rep(\{e\})}(Rep(G), Rep(G)).
}
\end{lemma}
\qed
\vspace{3mm}
Next, we prove
\begin{lemma}\label{lrep3}
There is an equivalence of $E_\infty$ spaces
$$\Omega B CH_{Rep(\{e\})}(Rep(G), Rep(G))\sim K_G(G)_0$$
where the subscript denotes infinite loop space, and by $K_G(G)$ we denote the
spectrum of $G$-equivariant maps $F_G(G_+,K_G)$ (where, as before, on the
source $G$ acts by conjugation). (See \cite{lms} for the standard
notation.)
\end{lemma}
\begin{proof}
We need a symmetric monoidal functor from the simplicial realization of
the category $CH_{C(\{e\})}(C(G), C(G))$ to vector $G$-bundles on $G$.
Since the source however has topologized objects, it is convenient to
consider an equivalent model of the category of bundles where objects
can vary continuously parametrized by a space. In more detail, we consider
the category with both objects and morphisms topologized, where
the objects are formed by the disjoint union of CW-complexes $X\times\{\xi\}$ where $\xi$ is a
$G$-bundle on $G\times X$. Morphisms are disjoint unions of spaces $\Gamma_{X,\xi,Y,\eta}$
consisting of triples $x\in X$, $y\in Y$ and isomorphisms $f:\xi|G\times\{x\}\r \eta|G\times \{y\}$
topologized so that the projection $\Gamma_{X,\xi,Y,\eta}\r X\times Y$ is locally a product
(which is done canonically by local triviality of $\xi$, $\eta$).
This is a symmetric monoidal category (which we will denote by $Bun_{G}(G)^\prime$),
with the monoidal structure given by Whitney sum of pullbacks of $G$-bundles over $G\times X$ and
$G\times Y$ to a $G$-bundle over $G\times X\times Y$, and moreover a groupoid whose
skeleton is the ordinary symmetric monoidal category $Bun_G(G)$ of $G$-bundles on $G$.
Then,
if we denote by $C$ the simplicial realization of the category $CH_{C(\{e\})}(C(G), C(G))$,
it suffices to construct a symmetric monoidal functor
$C\r Bun_G(G)^\prime$, which can be constructed from a $G$-bundle on $G\times Obj(C)$ which satisfies
the appropriate additivity and functoriality properties. To this end,
it suffices to construct a functor, symmetric monoidal over $G$, of the form
\beg{erephh4}{G\times CH_{C(\{e\})}(C(G), C(G))\r C(G),
}
where in the source, we consider the ``total'' category spanned by the level
morphisms as well as the simplicial structure. Construct \rref{erephh4}
as follows: on each level, put
$$(g,V)\mapsto V.$$
All faces and degeneracies are set to the identity, except the $0$'th face on each
level. The $0$-face from level $n$ to level $n-1$
is sent, at $g\in G$, to multiplication by
$$(1,g,\underbrace{1,\dots,1}_{\text{$n-1$ times}}).$$
Applying the classifying
space functor, this gives an $E_\infty$-map from $CH_{Rep(\{e\})}(Rep(G), Rep(G))$
to the space of $G$-equivariant vector bundles on $G$. Applying an infinite loop
space machine and localization at the Bott element, we obtain a map which, up to homotopy, can be expressed as
\beg{erephh5}{CH_K(K_G,K_G)\r K_G(G)}
where $K_G$ denotes the $K$-module of $G$-fixed points of $G$-equivariant $K$-theory.
Now there is a spectral sequence (coming from the simplicial structure) converging to the
homotopy of the left hand side of \rref{erephh5} whose $E_2$-term is
\beg{erephh6}{HH_\Z(R(G),R(G)).}
Note that the homotopy groups of the source and target of \rref{erephh5}, as well as the group \rref{erephh6}, are rings;
in fact the ring \rref{erephh6} is isomorphic to
the homotopy ring of the target of \rref{erephh5}, and the generators of \rref{erephh6} are permanent cycles, which map
to the corresponding generators of $K_{G}^{*}(G)$ by Theorem \ref{tbz}.
Thus, we are done if we can prove that \rref{erephh5} is a map of ring spectra.
In fact, one can rigidify \rref{erephh5} to become
a map of $E_\infty$ ring spectra. The functor from $CH_{Rep(\{e\})}(Rep(G), Rep(G))$ to
$Bun_G(G)^\prime $ is a weak symmetric bimonoidal functor on each simplicial level,
with the simplicial structure maps weakly preserving the structure. Thus, on the ``totalized'' category
where we combine the levels and consider simplicial structure maps as morphisms, we obtain
a weak symmetric bimonoidal functor into $Bun_G(G)^\prime$. Applying the Elmendorf-Mandell machine
\cite{em} and localization at the Bott element, an $E_\infty$ model of \rref{erephh5} follows.
(The Joyal-Street construction on categories with both objects and morphisms topological \cite{kl}
is also relevant.)
\end{proof}
\vspace{3mm}
Finally, using the map
\beg{elli}{LG\r IG\times G^n}
given by $f\mapsto (f\circ\pi,f(0),\dots,f(0))$ where $\pi:I\r I/0\sim 1=S^1$ is the projection,
we obtain a map
\beg{erephh7}{p:CH_{Rep(\{e\})}(Rep_0(IG),Rep(G))\r Rep_0(LG).}
\begin{lemma}\label{lrep4}
The map $p$ is an equivalence.
\end{lemma}
\begin{proof}
First, we show that $p$ is a quasi-fibration. We use the criterion in \cite{dt}, which
is restated as Theorem 2.6 of \cite{mq}. We first observe that $Rep_0(LG)$
is a disjoint sum of connected components indexed by dimension $d$ of the
representation. We may consider one connected component at
a time. For a given $d$, let the $i$'th (increasing) filtered part be
spanned by all representations which are (up to isomorphism) of the form $V\otimes W$ where
$W$ has dimension $\geq d-i$ and factors through the projection
$$e_0: LG\r G.$$
The open neighborhoods required in Theorem 2.6 of \cite{mq} are
then spanned by representations of the form $V\otimes W$ where
$W$ is of dimension $\geq d-i$, and factors through the projection
of $LG$ to $Map(U,G)$ where $U$ is an $\epsilon$-neighborhood
of $1$ in $S^1$. The homotopies $H_t$ and $h_t$ are then
defined by contracting $U$ to $1$.
Once we know that $p$ is a quasifibration, the statement follows, as it
is easily checked that the inverse image of every point is contractible.
\end{proof}
\vspace{3mm}
Putting together Lemmas \ref{lrep1}, \ref{lrep2}, \ref{lrep3}, \ref{lrep4}, we
now obtain the following
\begin{theorem}
\label{trrep}
The group completion of the $E_\infty$ space $Rep_0(LG)$ is weakly equivalent
to the infinite loop space $K_G(G)_0$.
\end{theorem}
\qed
\vspace{3mm}
\section{Representation-theoretical interpretation of $K_{G\rtimes\Z/2}(G)$}
\label{srep2}
There is also an equivariant version of these constructions with respect
to a $\Z/2$-action where the generator of $\Z/2$ acts on $G$ either by
$\alpha$ or by $\gamma$. In these cases, we consider the topological
group $LG\rtimes\Z/2$.
$\Z/2$ acts on the loop $f:S^1\r G$ by $f\mapsto g$ where
in the case of action by $\alpha$ on the target, $g(t)=\alpha(f(t))$,
and in the case of action by $\gamma$ on the target,
$g(t)=\alpha(f(1-t))$ (again, we use the identification $S^1=I/0\sim 1$).
In both cases, {\em finite representations} are defined as finite-dimensional
representations which factor through a projection to $G^n\rtimes \Z/2$
by evaluation at finitely many points (where, in the case of the $\gamma$-action
on the target, with each evaluation point $t$, we must also include $1-t$).
Now restricting, again, to finite representations, we obtain from $C(LG\rtimes\Z/2)$
the category
$C_0(LG\rtimes \Z/2)$ and its classifying space $Rep_0(LG\rtimes\Z/2)$.
Next, we may also similarly define
\beg{eeqcc}{C_0((IG\times G^n)\rtimes \Z/2),}
and
its classifying space
\beg{eeqhh}{Rep_0((IG\times G^n)\rtimes\Z/2).}
Here the generator $a$ of $\Z/2$ acts on $f:I\r G$
by $f\mapsto g$ where $g(t)=\alpha(f(t))$,
in the case of $\alpha$-action on the target, $a$ acts on each of the $n$ copies
of $G$ separately by $\alpha$, in the case of $\gamma$ action on the target,
$a$ acts on $G^n$ by
$(g_1,\dots,g_n)\mapsto (\alpha(g_n),\dots,\alpha(g_1))$.
In the case where $a$ acts by $\alpha$ on the target $G$, \rref{eeqcc}
forms a simplicial category, and its classifying space \rref{eeqhh} forms
a simplicial space. In the case of $a$ acting on the target $G$ by $\gamma$,
the $\Z/2$-action is an automorphism over the involution of the simplicial
category reversing the order of each set $\{0,\dots,n\}$, so we can
still form a ``simplicial realization'' by letting $a$ act on each standard
simplex $\Delta_n$ by $[t_0,\dots,t_n]\mapsto[t_n,\dots,t_0]$ in
barycentric coordinates. In both cases, we denote the simplicial realizations
$$CH_{\C_2}(C_0(IG),C(G))_{\Z/2}$$
and
$$CH_{Rep(\{e\})}(Rep_0(IG),Rep(G))_{\Z/2}.$$
Again, the precise application of the method of proof of Lemma \ref{lrep1} gives
\begin{lemma}
\label{leeh1}
Restriction to constant loops induces a homotopy equivalence
$$CH_{Rep(\{e\})}(Rep_0(IG),Rep(G))_{\Z/2}\r CH_{Rep(\{e\})}(Rep(G),Rep(G))_{\Z/2}$$
(where the target is defined the same way as the source, restricting to constant
maps $I\r G$).
\end{lemma}
\qed
\vspace{3mm}
Next, the map \rref{elli} defines a map
\beg{elli1}{p_{\Z/2}:CH_{Rep(\{e\})}(Rep_0(IG), Rep(G))_{\Z/2}\r Rep_0(LG\rtimes\Z/2)
}
in both the cases of $\alpha$ and $\gamma$ action on the target. Again, we have an
equivariant analogue of Lemma \ref{lrep4}:
\begin{lemma}
\label{leeh2}
The map $p_{\Z/2}$ is an equivalence.
\end{lemma}
\begin{proof}
In both cases, the filtration and the deformations used in the proof of Lemma \ref{lrep4}
have obvious $\Z/2$-equivariant analogues. Therefore, the same argument applies.
\end{proof}
\vspace{3mm}
We shall now construct an $E_\infty$ map
\beg{eehh10}{CH_{Rep(\{e\})}(Rep(G),Rep(G))_{\Z/2}\r K_{G\rtimes\Z/2}(G)}
where by the right hand side, we mean the space of $G\rtimes\Z/2$-equivariant maps
$G\r (K_{G\rtimes\Z/2})_0$ where the generator $a$ of $\Z/2$ acts on $G$ by
either $\alpha$ or $\gamma$. To this end, again, it suffices to construct
a symmetric monoidal map of the realization of the category
$$CH_{\C_2}(C(G),C(G))_{\Z/2}$$
to the category of $G\rtimes\Z/2$-vector bundles on $G$.
In the case of $a$ acting on the target $G$ by $\alpha$, the construction is directly
analogous to Lemma \ref{lrep3}.
In the case of $a$ acting by $\gamma$, we must deal with the involutive automorphism
of the simplicial category. It is most convenient to give this in the form of a
$G\rtimes\Z/2$-equivariant bundle on
$$G\times CH_{Rep(\{e\})}(Rep(G),Rep(G))_{\Z/2}.$$
We let the bundle be defined by the same formula as in the non-equivariant
case, with $\Z/2$-action
$$\alpha(g,v,[t_0,\dots,t_n])=(\alpha(g)^{-1}, \alpha(g^nv),[t_n,\dots,t_0])$$
(see the proof of Lemma \ref{lrep3}).
\begin{theorem}
\label{teehh}
The induced map
\beg{ehhp*}{\Omega B(CH_{Rep(G)}(Rep(G),Rep(G))_{\Z/2})\r K_{G\rtimes \Z/2}(G)_0
}
is an equivalence.
\end{theorem}
\begin{proof}
It is useful to note that $Rep(G\rtimes\Z/2)$ is a $\Z/2$-equivariant symmetric monoidal category
in the sense that there is a transfer functor
\beg{ehhp1}{\bigoplus_{\Z/2}:Rep(G\rtimes\Z/2)\r Rep(G\rtimes\Z/2)}
extending the commutativity, associativity and unitality axioms in
the usual sense. The functor \rref{ehhp1} is given by
$$V\mapsto V\oplus V\otimes A$$
where $A$ is the sign representation of the quotient $\Z/2$.
We may recover $Rep(G)$ as the image of the functor \rref{ehhp1}, and
then the functor \rref{ehhp1} provides a symmetric monoidal
model of the restriction $Rep(G\rtimes \Z/2)\r Rep(G)$.
Applying this construction level-wise on the simplicial level,
$$ CH_{Rep(\{e\})}(Rep(G),Rep(G))_{\Z/2}\r K_{G\rtimes \Z/2}(G)_0$$
becomes a map of $\Z/2$-equivariant $E_\infty$-spaces, and applying
$\Z/2$-equivariant infinite loop space theory, and localizing at the Bott
element, we obtain a map of $\Z/2$-equivariant spectra (indexed over the complete universe)
\beg{ehhp**}{CH_K(K_G,K_G)_{\Z/2}\r K_{G\rtimes\Z/2}(G).}
In fact, it is a map of $K_{\Z/2}$-modules.
Forgetting the $\Z/2$-equivariant structure, we recover the map \rref{erephh5},
which is, of course, an equivalence. To show that the map \rref{ehhp**}, which is
a non-equivariant equivalence, is, in effect, a $\Z/2$-equivalence, we use a
variant of the finiteness argument of Section \ref{smi}, this time using finiteness
directly over $K_{\Z/2}$.
Recall again that
$$R(G)\cong \Z[v_1,\dots,v_r]$$
where $v_i$ are the fundamental irreducible representations of $G$. Let us call
a finite-dimensional complex representation of $G$ {\em of degree $n$}
if it is a sum of subrepresentations, each of which is isomorphic to a tensor product
of $n$ elements of $\{v_1,\dots,v_r\}$ (an element is allowed to occur more
than once). We see, in fact, that we obtain degree $d$ versions of all the
symmetric monoidal categories considered, giving a stable splitting of the form
$$K_G\simeq \bigvee_{d\geq 0} K_{G}^{d},$$
$$CH_K(K_G,K_G)\simeq \bigvee_{d\geq 0} CH_{K}^{d}(K_G,K_G).$$
Also, these splittings are $\Z/2$-equivariant (since $\Z/2$ acts on $\{v_1,\dots,v_r\}$
by permutation), and in fact the map \rref{ehhp**} decomposes into maps
$$CH_{K}^{d}(K_G,K_G)_{\Z/2}\r K_{G\rtimes\Z/2}(G).$$
On the other hand, the Brylinski-Zhang construction restricts to a map
of finite $K_{\Z/2}$-modules
\beg{ehhpi}{\begin{array}{l}
K^{d}_{G\rtimes\Z/2}\wedge(
\displaystyle\bigvee_{\begin{array}[t]{c}1\leq i_1<\dots<i_k\leq n\\
\sigma\{i_1,\dots,i_k\}\succ\{i_1,\dots,i_k\}
\end{array}} \Sigma^k\Z/2_+
\\[8ex]
\vee
\displaystyle\bigvee_{\begin{array}[t]{c}1\leq i_1<\dots<i_k\leq n\\
\sigma\{i_1,\dots,i_k\}=\{i_1,\dots,i_k\}
\end{array}} \Sigma^k S^{(orb\{i_1,\dots,i_k\})})\r CH_{K}^{d}(K_G,K_G).
\end{array}
}
Since each of these $K_{\Z/2}$-modules is finite, an equivariant map which is a non-equivariant
equivalence is an equivalence by the argument of Section \ref{smi} (in fact, made simpler
by the fact that we do not consider $G$-equivariance here). Denoting the left hand side
of \rref{ehhpi} by $\mathcal{K}^d$, we now have a diagram of $K_{\Z/2}$-modules
of the form
\beg{ehhpii}{\diagram
\bigvee_{d\geq 0} \mathcal{K}^d\rto^(.4)\sim\drto_\sim & CH_{K}(K_G,K_G)_{\Z/2}\dto\\
& K_{G\rtimes \Z/2}(G).
\enddiagram
}
The horizontal arrow is a $\Z/2$-equivalence by the fact that \rref{ehhpi} is
a $\Z/2$-equivalence and by the stable splitting, the diagonal arrow is a
$\Z/2$-equivalence by Theorem \ref{t1}. Therefore, the vertical arrow is
a $\Z/2$-equivalence.
\end{proof}
\vspace{10mm}
It can be very useful to save your Tvheadend settings and reinstall them on a system after a full update, even one that involves reformatting the hard drive. While there is no officially supported method, there have been several posts in the Tvheadend forum recently stating that people have had success in doing this by simply copying over the /home/hts/.hts/tvheadend directory from one system to another. However, it’s not quite that simple, because you want to preserve permissions and ownership of the files, and there are a few other steps. As best I can determine, here is the full procedure (yes, I realize there is some redundancy in the use of sudo here). You should probably read the entire post before actually doing anything, and make sure you understand the entire process.
On the existing system, enter the following commands:
sudo service tvheadend stop
sudo -s
cd /home/hts/.hts
sudo tar cvfp ../tvheadend.tar tvheadend
cd /home/hts/
At this point you need to move tvheadend.tar file to a different system if you plan on reformatting the drive, or to the new system if you’re creating an entirely new setup after you have installed Tvheadend as described below. I would also urge you to copy any recordings you wish to keep, and any programs/scripts you may run outside of Tvheadend (for example, scripts and data to provide guide information to Tvheadend). You can use a Linux tool such as scp for this process (for example, sudo scp tvheadend.tar root@ip:~/) or if you have a way to transfer files via nfs, samba, ftp, sftp, etc. it doesn’t matter, just so long as you can move the tvheadend.tar file and any other files you want to save to a safe place.
On the new system, install your operating system and the driver(s) for your tuner card(s)/device(s), if any. Then install Tvheadend using one of the methods shown on the Download page. After you have done that, check to make sure that Tvheadend can find your tuners (in the TV adapters tab) – if it can’t then you need to figure out why before proceeding. Next, copy back/over all the files you saved in the previous step, putting them in the exact same directories they were in on the previous system (this is especially important if you want Tvheadend to find any recordings you saved). Then enter the following commands on the new system:
sudo service tvheadend stop
sudo -s
cd /home/hts/.hts
sudo mv tvheadend tvheadend-backup
tar xvfp ~/tvheadend.tar
Now if you reboot (or restart Tvheadend) you should find that your previous configuration is restored, except that it is possible that your tuner(s) will not be linked to any network(s). So go back to the TV adapters tab and make sure that each tuner is enabled and that it is linked to the correct network. Also, if you did not copy over the recordings from your previous system (or failed to put them back in the exact same spot in the directory tree), they will show up under the “Failed Recordings” tab in the Tvheadend web interface, so you can go ahead and delete those. Also, check the “Recording System Path” and the Timeshift “Storage Path” (under the Recording tab) to make sure that they are correct and that you have actually created directories with those names, and that they have the correct permissions and ownership.
If for some reason it does not work and you want to reconfigure Tvheadend from scratch, stop the Tvheadend service, delete the /home/hts/.hts/tvheadend directory, and rename the /home/hts/.hts/tvheadend-backup directory back to just tvheadend. Then reboot, or restart Tvheadend.
EDIT (August 2017): Note that if you are moving to Tvheadend version 4.2 or later from a pre-4.2 version, you may find that everything will work fine until you try to add a new channel to an existing mux, by rescanning the mux. It seems Tvheadend may find the new service, and allow you to map it as a new channel, but when you go to watch it you will get “No input source available for subscription” errors. The problem is that in 4.2 there are some new mux settings that you can only see if you set the View level to Expert. They are the checkboxes for “Accept zero value for TSID:” and “EIT – skip TSID check:”, and by default both are unchecked. But here in North America, providers often use invalid transport stream IDs. So, you probably need to check the boxes for both of those, and again, you will not even see those settings unless you set the view level to Expert. If you don’t do this, Tvheadend may not scan in any or all of the available services on the mux, but even if it does find and add a new service, you might get errors when you attempt to view a channel that you have mapped from the service.
New checkboxes in Tvheadend 4.2
If that doesn’t resolve the problem, you may need to delete the existing mux and any associated services and channels, and then re-add that mux, and then rescan the mux and re-add your channels. But maybe you fear that if you delete your existing channels, Tvheadend won’t rescan them. So if you want to assure yourself that Tvheadend can rescan the mux before you delete the existing one, create a temporary “test” network, temporarily link it to the tuner associated with the correct satellite dish and LNB, and then create a new temporary mux under that. That will allow you to see if Tvheadend can find the services in the mux before you delete your existing mux, services, and channels. If it works, you can delete the test network and temporary mux, as well as the existing mux and associated services and channels, and then recreate the mux using the correct network. (End of edit).
Once you are sure that everything is working the way you want, you can delete the tvheadend.tar file and the tvheadend-backup directory. However, it might not be a bad idea to take a “snapshot” of the /home/hts/.hts/tvheadend directory from time to time, and save it to a different system or at least a different hard drive, so you have a way to go back to working settings if everything falls apart or you make some major blunder.
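Such a “snapshot” can be as simple as a dated tarball. The sketch below uses throwaway /tmp paths so it can be run safely; point CONF_DIR at the real /home/hts/.hts/tvheadend directory and SNAP_DIR at a different drive or system, and stop Tvheadend first so the configuration isn't written mid-copy:

```shell
# Snapshot sketch with throwaway /tmp paths; substitute real paths on a live box.
CONF_DIR="/tmp/demo-hts2/.hts/tvheadend"
SNAP_DIR="/tmp/demo-snapshots"
mkdir -p "$CONF_DIR" "$SNAP_DIR"
echo '{}' > "$CONF_DIR/config"        # stand-in for the real settings tree

tar -czf "$SNAP_DIR/tvheadend-$(date +%Y%m%d).tar.gz" \
    -C "$(dirname "$CONF_DIR")" "$(basename "$CONF_DIR")"
ls "$SNAP_DIR"                        # shows the dated tarball
```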
When creating a new system, keep in mind that if you were using ffmpeg for any purpose on your previous system, you should make sure it is properly installed on the new one, and that if it’s not in the same place in the directory tree that you adjust any links accordingly. Also make sure any programs that provide guide data to Tvheadend are working correctly (you may need to check the permissions and ownership), and if you had any scheduled cron jobs, make sure you recreate those.
Some of the information in this post was derived from posts by Mark Clarkstone in an obscure thread in the Tvheadend forum.
Posted 10 February 2019 - 07:57 PM
A weak spring perhaps.
Posted 11 February 2019 - 10:38 AM
Possible causes:
sticky or out-of-standard film, too wide
film loop(s) lost; better load automatically, if not used to do it by hand
bent spool flange(s)
grease toughened in the cold; mechanism can be winterized with dry lubricant instead of grease and oil
fader engaged or not adjusted, binds on a shutter stop
tired spring
Thank you very much, Simon! This information is all very helpful, and honestly it could be any of these issues. It's good to know, though, so I can start the process of elimination. I did forget to mention that the film I was working with was spooled down from a 400ft roll, and honestly I think I might have put too much film on the spool; not sure if that could have been a possible cause as well.
As long as it's not the spring I'm good; I can deal with a ruined roll of (expired) film. Normally my camera purrs quite nicely and runs pretty well on a full wind, although it does seem to slow down just a bit in the last second of the wind. If I'm not mistaken, that's pretty common among spring-wound Bolex H16 cameras?
Thanks for the information, it's very much appreciated.
Best,
Derick
A pair of round-toed black ballerinas secured with Velcro closure
Leather upper
Cushioned footbed
Textured and patterned outsole
Feel like a ballerina in these chic Clarks flats. Team this black pair with any casual outfit for a day out with friends and family.
Leather
Use a branded leather conditioner to clean the product
All kinds of Clarks, New Balance, Nike, Supra, The North Face, Mizuno, Superdry, Vans, and Merrell shoes available online at a big discount.
Jerry's Merry Christmas
Search all of Jerry's house and help him find the tools he needs to hang up a sprig of mistletoe for Christmas. Click on all kinds of things and find the hidden items.
Game Controls: Use the mouse to play.
Rating: 8 / 10 (5 votes)
Established in 1999, our practical education and training prepares students for successful careers, enabling them to enter the health field with confidence and professionalism. Our mission is to collaborate with students, families, and the community to develop competent medical professionals who can meet the expectations of today's employers. Our training keeps pace with the constantly changing job requirements of today's health-related industries. We keep our class sizes small to ensure individual attention, and we provide lifetime job placement assistance.
Our externship program gives you advantages over other prospective job applicants. While under the supervision of physicians, office managers, nursing staff, lab technicians and other allied professionals, you will receive on-the-job training. Financial aid is available to those who qualify.
- Medical Assistant
- Diagnostic Medical Sonographer
- Administration Billing & Coding
- MRI Technologist
Our program curriculum is developed in response to industry trends and is taught by professionals who provide students with a balance of education and technical training with relevant insight. As a result, you can take what is learned in the classroom and apply it in the work place, thereby generating immediate payback on your educational investment!
\begin{document}
{
\flushleft\Huge\bf Graphene in complex magnetic fields
}\vspace{5mm}
\begin{adjustwidth}{1in}{}
{
\flushleft\large\bf David J Fern{\'a}ndez C and Juan D Garc{\'i}a-Mu{\~n}oz\footnote{Author to whom correspondence should be addressed}
}
\vspace{2.5mm}
{
\par\noindent\small Physics Department, Cinvestav, P.O.B. 14-740, 07000 Mexico City, Mexico
}
\vspace{2.5mm}
{
\par\noindent\small Email: david@fis.cinvestav.mx and dgarcia@fis.cinvestav.mx
}
\vspace{5mm}
{
\par\noindent\bf Abstract
}
{
\newline\small Exact analytic solutions for an electron in graphene interacting with external complex magnetic fields are found. The eigenvalue problem for the non-hermitian Dirac-Weyl Hamiltonian leads to a pair of intertwined Schr{\"o}dinger equations, which are solved by means of supersymmetric quantum mechanics. Making an analogy with the non-uniform strained graphene a prospective physical interpretation for the complex magnetic field is given. The probability and currents densities are explored and some remarkable differences as compared with the real case are observed.
}
\vspace{2mm}
{
\newline\footnotesize {\bf Keywords:} graphene, complex magnetic field, supersymmetric quantum mechanics
}
\end{adjustwidth}
\section{Introduction}
At the beginning of the twenty-first century a $2D$-material known as graphene was isolated for the first time by Geim and Novoselov \cite{Novoselov2004}. Since this discovery, a great deal of work delving into its properties has been done. In particular, its electronic properties are of great interest, such as the integer quantum Hall effect, where the charge carriers behave as massless chiral quasiparticles with a linear dispersion relation. Within this wide field of study, the work of Kuru, Negro and Nieto is worth mentioning \cite{Kuru2009}, since they show how to use supersymmetric quantum mechanics (SUSY QM) in order to find exact analytic solutions for a class of Hamiltonians which describe the interaction of an electron in graphene with external magnetic fields. Other authors have explored this technique further, also finding interesting results \cite{Milpas2011,Midya2014,Erik2017,Schulze2017,Concha2018,Roy2018,Erik2019,Celeita2020,Erik2020}.
However, as far as we know, the method has not yet been generalized to complex magnetic fields.
Even though complex magnetic fields are considered to be non-physical, in recent years it has been found that their effective action on the coherence of many-body quantum systems can be detected \cite{Peng2015}. Motivated by this result, in this paper we shall study from a theoretical viewpoint the effects caused on the electron behavior by a complex magnetic field applied orthogonally to the graphene surface, and how this generalization modifies the energies, probability and current densities as compared with the real case. We shall discuss as well a possible physical interpretation of such complex magnetic fields.
The paper has been organized as follows: in section~\ref{S2} we will describe how SUSY QM works to solve the eigenvalue problem for the effective Hamiltonian of graphene in a complex magnetic field; section~\ref{S3} shows some cases where different complex magnetic profiles are taken, and for which the algorithm of the previous section can be applied. A discussion about a possible physical interpretation of complex magnetic fields, and an important effect induced by their non-null imaginary parts, is given in section~\ref{S4}; finally, in section \ref{S5} we present our conclusions.
\section{Effective Hamiltonian for graphene in complex magnetic fields} \label{S2}
A hexagonal structure of carbon atoms in a honeycomb $2D$-arrangement is called monolayer graphene, or simply graphene (see figure \ref{graphene}). In the study of this material one usually works with an effective Hamiltonian describing the hopping of an electron from one atom to any of its nearest neighbors \cite{Katnelson2011,Mermin1976,Raza2012,Saito1998}. Such Hamiltonian can be written as a $2\times 2$ matrix operator of the form
\begin{equation} \label{E-1}
H = v_{0}
\begin{pmatrix}
0 && \pi \\
\pi^{\dagger} && 0
\end{pmatrix},
\end{equation}
where $v_{0} = \sqrt{3}a\gamma_{0}/2\hbar$ is the Fermi velocity and $\pi = p_{x}-ip_{y}$. The quantity $a \approx 2.46\ \textup{\r{A}}$ is the intramolecular distance in the graphene layer \cite{McCann2013}, while $\gamma_{0} \approx 3.033\ \text{eV}$ is known as the in-plane hopping parameter, which is equal to the negative binding energy between two adjacent carbon atoms \cite{Saito1998}. On the other hand, $p_{j}$ is the momentum operator in the $j$-th direction, with $j=x,y$.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}
\foreach \n in {0,1,2,3}
{
\ifodd \n \newcommand{\xo}{3/2*\n} \newcommand{\yo}{-0.866}
\else \newcommand{\xo}{3/2*\n} \newcommand{\yo}{0}
\fi
\foreach \c in {0,1,2,3}
{
\foreach \x in {0,1,3,4}
{
\foreach \y in {0,1,-1}
{
\ifnum \x=0 \ifnum \y=0 \fill[color=black] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt];
\draw (\xo+\x/2+1/2,\yo+\y*0.866-0.866+\c*1.732)--(\xo+\x/2,\yo+\y*0.866+\c*1.732)--(\xo+\x/2+1/2,\yo+\y*0.866+0.866+\c*1.732);
\fi \fi
\ifnum \x=1 \ifnum \y=1 \draw (\xo+\x/2,\yo+\y*0.866+\c*1.732)--(\xo+\x/2+1,\yo+\y*0.866+\c*1.732); \fill[color=gray] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt];
\fi
\ifnum \y=-1 \draw (\xo+\x/2,\yo+\y*0.866+\c*1.732)--(\xo+\x/2+1,\yo+\y*0.866+\c*1.732); \fill[color=gray] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt];
\fi \fi
\ifnum \x=3 \ifnum \y=1 \draw (\xo+\x/2,\yo+\y*0.866+\c*1.732)--(\xo+\x/2+1/2,\yo+\y*0.866-0.866+\c*1.732); \fill[color=black] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt]; \fi
\ifnum \y=-1 \draw (\xo+\x/2,\yo+\y*0.866+\c*1.732)--(\xo+\x/2+1/2,\yo+\y*0.866+0.866+\c*1.732); \fill[color=black] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt]; \fi \fi
\ifnum \x=4 \ifnum \y=0 \fill[color=gray] (\xo+\x/2,\yo+\y*0.866+\c*1.732) circle [radius=3pt];
\fi \fi
}
}
}
}
\draw[arrows={|Stealth[scale=1.5]-Stealth[scale=1.5]|}] (-0.3,0)-- node[left=1pt]{\Large $a$} (-0.3,1.732);
\end{tikzpicture}
\caption{Graphene structure. The intramolecular distance $a$ is traced. }\label{graphene}
\end{center}
\end{figure}
Let us suppose that a complex magnetic field orthogonal to the graphene layer is applied, which varies only along one direction, e.g., $\mathbf{B}(x) = B(x)\mathbf{e}_{z}$, $B(x)\in \mathbb{C}$. In the Landau gauge the associated vector potential can be written as $\mathbf{A}(x) = A(x)\mathbf{e}_{y}$, where $B(x) = dA(x)/dx$. Taking into account the minimal coupling rule, the effective Hamiltonian \eqref{E-1} becomes now
\begin{equation} \label{E-2}
H = v_{0}
\begin{pmatrix}
0 && p_{x} - ip_{y} - i\frac{e}{c}A(x) \\
p_{x} + ip_{y} + i\frac{e}{c}A(x) && 0
\end{pmatrix}.
\end{equation}
Since $H$ is invariant under translations along $y$-direction, its eigenvectors can be expressed as:
\begin{equation} \label{E-3}
\Psi(x,y) = Ne^{iky}
\begin{pmatrix}
\psi^{+}(x) \\
i\psi^{-}(x)
\end{pmatrix},
\end{equation}
with $N$ being a normalization factor and $k$ the wavenumber in the $y$-direction. In the coordinate representation the momentum operator $p_{j}$ can be written as $-i\hbar \partial_{j}$, with $\partial_{j} = \partial/\partial{j}$, $j = x,y$; thus the eigenvalue equation for $H$ looks like
\begin{equation} \label{E-4}
H\Psi(x,y) = \hbar v_{0}
\begin{pmatrix}
0 && -i\partial_{x} - \partial_{y} - i\frac{e}{c\hbar}A(x) \\
-i\partial_{x} + \partial_{y} + i\frac{e}{c\hbar}A(x) && 0
\end{pmatrix} \Psi(x,y) = E\Psi(x,y).
\end{equation}
Using expression~(\ref{E-3}), the matrix equation \eqref{E-4} is reduced to a coupled system of equations:
\begin{equation} \label{E-5}
\begin{aligned}
& L^{-}\psi^{-}(x) \equiv \left[\frac{d}{dx} + k + \frac{e}{c\hbar}A(x)\right]\psi^{-}(x) = \mathcal{E}\psi^{+}(x),
\\
& L^{+}\psi^{+}(x) \equiv \left[-\frac{d}{dx} + k + \frac{e}{c\hbar}A(x)\right]\psi^{+}(x) = \mathcal{E}\psi^{-}(x),
\end{aligned}
\end{equation}
with $\mathcal{E} = E/\hbar v_{0}$. It is important to realize that $L^{+}$ is not the Hermitian conjugate of $L^{-}$, $\left(L^{-}\right)^{\dagger} = -d/dx + k + \frac{e}{c\hbar}\bar{A}(x)\neq L^{+}$, with $\bar{z}$ being the complex conjugate of $z$. In order to decouple the system~\eqref{E-5}, let us apply $L^{+}$ on the first equation and $L^{-}$ on the second, which leads to:
\begin{equation} \label{E-6}
\begin{aligned}
& L^{+}L^{-}\psi^{-}(x) = \left[-\frac{d^{2}}{dx^{2}} + \left(k + \frac{e}{c\hbar}A(x)\right)^{2} - \frac{e}{c\hbar}A'(x)\right]\psi^{-}(x) = \varepsilon \psi^{-}(x),
\\
& L^{-}L^{+}\psi^{+}(x) = \left[-\frac{d^{2}}{dx^{2}} + \left(k + \frac{e}{c\hbar}A(x)\right)^{2} + \frac{e}{c\hbar}A'(x)\right]\psi^{+}(x) = \varepsilon\psi^{+}(x),
\end{aligned}
\end{equation}
where $\varepsilon=\mathcal{E}^{2}$ and $dA(x)/dx \equiv A'(x)$. It is natural to identify now
\begin{equation}\label{E-6a}
H^- = L^{+}L^{-}, \qquad H^+ = L^{-}L^{+},
\end{equation}
with $H^\pm$ being two non-Hermitian Hamiltonians fulfilling
\begin{equation} \label{E-7}
H^+L^{-} = L^{-}H^{-}.
\end{equation}
The intertwining relation~\eqref{E-7}, together with the expressions \eqref{E-5} for $L^{\pm}$ and the factorizations in equation~\eqref{E-6} are the basis of the so-called supersymmetric quantum mechanics (SUSY QM).
In fact, it is standard to denote
\begin{equation} \label{E-9}
L^{\pm} =\mp \frac{d}{dx} + \text{w}(x),
\end{equation}
with the complex function
\begin{equation} \label{superpotential}
\text{w}(x) = k + \frac{e}{c\hbar}A(x)
\end{equation}
being called superpotential. Thus, the non-Hermitian Hamiltonians $H^\pm$ take the form
\begin{equation} \label{E-8}
H^{\pm} = -\frac{d^{2}}{dx^{2}} + V^{\pm}(x),
\end{equation}
where the complex SUSY partner potentials $V^\pm$ are written in terms of the superpotential as follows:
\begin{equation} \label{E-10}
V^{\pm} = \text{w}^{2}(x) \pm \text{w}'(x).
\end{equation}
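As an illustration (not part of the paper), equation~\eqref{E-10} can be checked symbolically with Python/SymPy for a linear superpotential $\text{w}(x) = k + \omega x/2$, which is the constant magnetic field case considered below; the expected complex-oscillator form is the one appearing in equation~\eqref{E-20}:

```python
# Symbolic sketch: build V^{+-} = w^2 +- w' from the linear superpotential
# w(x) = k + omega*x/2 and compare with the complex oscillator of eq. (E-20).
import sympy as sp

x = sp.symbols('x')
k, omega = sp.symbols('k omega', nonzero=True)

w = k + omega * x / 2                         # superpotential
V_minus = w**2 - sp.diff(w, x)                # eq. (E-10), lower sign
V_plus  = w**2 + sp.diff(w, x)                # eq. (E-10), upper sign

# Expected form from eq. (E-20); V^+ differs from V^- by 2 w' = omega
expected_minus = (omega**2 / 4) * (x + 2 * k / omega)**2 - omega / 2
print(sp.simplify(V_minus - expected_minus))  # -> 0
```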
Suppose now that $\psi^{\pm}_{n}(x)$ are eigenfunctions of $H^{\pm}$ with eigenvalues $\varepsilon^{\pm}_{n}$, the quantum number $n$ being a non-negative integer. We choose $H^-$ as the Hamiltonian having the null energy as one of its eigenvalues, i.e., $\varepsilon^{-}_{0}=0$. This automatically fixes the superpotential, since
\begin{equation}
H^- \psi^{-}_{0} = \varepsilon^{-}_{0} \psi^{-}_{0} = 0 \quad \Rightarrow \quad L^- \psi^{-}_{0} = 0 = (\psi^{-}_{0})' + \text{w}(x) \psi^{-}_{0} \quad \Rightarrow \quad \text{w}(x)= - \frac{(\psi^{-}_{0})'}{\psi^{-}_{0}},
\end{equation}
where equation~\eqref{E-6a} was used. As $\psi^{-}_{0}$ is square-integrable, the solution to $H^+ \psi^+=0$, which also satisfies $L^+ \psi^+ = 0$ $\Rightarrow$ $\psi^+=1/\psi^{-}_{0}$, is not square-integrable, thus $\varepsilon^{-}_{0}=0$ is not in the spectrum of $H^+$. However, the intertwining relation~\eqref{E-7} guarantees that any other non-null eigenvalue of $H^{-}$ ($\varepsilon^{-}_{n}, \, n=1,2,\dots$) belongs to the spectrum of $H^+$. Proceeding by analogy with the real case, we will denote $\varepsilon^{+}_{n-1}= \varepsilon^{-}_{n}$, thus the corresponding eigenstates $\psi^{\pm}_{n}(x)$ are interrelated through $L^\pm$ as follows:
\begin{equation} \label{E-12}
\psi^{+}_{n-1}(x) = \frac{L^{-}\psi^{-}_{n}(x)}{\sqrt{\varepsilon^{-}_{n}}},\quad \psi^{-}_{n}(x) = \frac{L^{+}\psi^{+}_{n-1}(x)}{\sqrt{\varepsilon^{+}_{n-1}}}, \quad n=1,2,\dots
\end{equation}
Note that, although $L^{+}$ is not the Hermitian conjugate of $L^{-}$, the second equation in \eqref{E-12} is fulfilled, since the factorizations \eqref{E-6a} imply that $H^{-}L^{+} = L^{+}H^{+}$; then $L^{+}\psi^{+}_{n-1}(x)$ is an eigenfunction of $H^{-}$ with eigenvalue $\varepsilon^{-}_{n}$.
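The intertwining relation~\eqref{E-7} follows directly from the factorizations~\eqref{E-6a}, since both sides reduce to the same operator composition. A short symbolic sketch (Python/SymPy, supplied here only as illustration) makes this explicit for an arbitrary complex superpotential acting on a generic test function:

```python
# Symbolic check of eq. (E-7), H^+ L^- = L^- H^-, for an arbitrary
# (possibly complex) superpotential w(x) and test function f(x).
import sympy as sp

x = sp.symbols('x')
w = sp.Function('w')(x)                    # arbitrary superpotential
f = sp.Function('f')(x)                    # generic test function

Lm = lambda g: sp.diff(g, x) + w * g       # L^-
Lp = lambda g: -sp.diff(g, x) + w * g      # L^+
Hm = lambda g: Lp(Lm(g))                   # H^- = L^+ L^-
Hp = lambda g: Lm(Lp(g))                   # H^+ = L^- L^+

# Both sides are L^- L^+ L^- f, so the difference vanishes identically.
print(sp.simplify(Hp(Lm(f)) - Lm(Hm(f))))  # -> 0
```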
It is important to stress that the potentials $V^\pm(x)$ are only auxiliary tools to solve the original problem, thus they do not have physical meaning. Moreover, they are typically shape-invariant SUSY partner potentials, since the factorization energy involved in equation~\eqref{E-6a} is the null energy associated to a Hamiltonian $H^{-}$ chosen as to have such a symmetry \cite{Gangopadhyaya2018}.
Let us remark that the derivative of the superpotential is directly related to the magnetic field amplitude as follows
\begin{equation} \label{E-15}
\text{w}'(x) = \frac{e}{c\hbar}B(x).
\end{equation}
Coming back to our initial problem, the eigenvectors and eigenvalues of the Hamiltonian \eqref{E-2} describing the graphene layer in the complex magnetic fields are given by:
\begin{equation} \label{E-16}
\begin{aligned}
& \Psi_{0}(x,y) = e^{iky}
\begin{pmatrix}
0 \\
i\psi^{-}_{0}(x)
\end{pmatrix},\quad E_{0} = 0,
\\
& \Psi_{n}(x,y) = \frac{e^{iky}}{\sqrt{2}}
\begin{pmatrix}
\psi^{+}_{n-1}(x) \\
i\psi^{-}_{n}(x)
\end{pmatrix},\quad E_{n} = \pm\hbar v_{0}\sqrt{\varepsilon_{n}^{-}},
\end{aligned}
\end{equation}
with $n \in \mathbb{N}$. Let us mention that the energies $E_n$ with the plus sign are associated with electrons, while those with the minus sign correspond to holes. In our examples below only the electron energies will be considered.
Before addressing some examples, let us first define two physical quantities that will help us describe the electron behavior ruled by the Hamiltonian~\eqref{E-2}. Since this Hamiltonian is one block of a $2\times 2$ diagonal-block supermatrix, whose other non-zero block is the transpose of $H$, we are in fact dealing with a Dirac-like problem \cite{Katnelson2011}. However, it is sufficient to solve $H$ in order to obtain the whole solution of the Dirac-like Hamiltonian characterizing monolayer graphene. The first physical quantity to be explored is the probability density, defined by
\begin{equation} \label{E-17}
\rho = \Psi^{\dagger}\Psi,
\end{equation}
while the second one is the probability current written as
\begin{equation} \label{E-18}
\mathbf{J} = v_{0}\Psi^{\dagger}\vec{\sigma}\Psi.
\end{equation}
The previous expression for $\mathbf{J}$ is similar to the real case given in \cite{Kuru2009} and to the free case in \cite{Ferreira2011}; nevertheless, the continuity equation turns out to be inhomogeneous, with the inhomogeneous term given by
\begin{equation} \label{E-19}
\frac{2ev_{0}}{c\hbar}\text{Im}[A(x)]\Psi^{\dagger}\sigma_{y}\Psi.
\end{equation}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{./imagenes/fig1.pdf}
\caption{Real (a) and imaginary part (b) of the complex harmonic oscillator potentials $V^\pm$ and the associated magnetic field. The parameters chosen are $|\omega|$ = $k$ = 1 and $\theta$ = $\pi/10$.} \label{F-1}
\end{center}
\end{figure}
\section{Exactly solvable cases} \label{S3}
In these examples we shall take several magnetic profiles whose amplitude is the product of a complex constant times a real function of $x$. We shall determine the corresponding superpotential, the auxiliary SUSY partner potentials and then the solutions to the original problem. It is worth noting that we shall solve first the potential $V^{-}(x)$ and then, from its eigenfunctions and eigenvalues, the corresponding solutions of $V^{+}(x)$ will be found. Moreover, any other parameter of the magnetic profile is supposed to be positive, unless otherwise specified.
\subsection{Constant magnetic field}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{./imagenes/fig2.pdf}
\caption{(a) First energy eigenvalues in the complex plane for the constant magnetic profile with three different angles $\theta$. The ground state is the same for all these $\theta$-values and it is drawn as a red circle at the origin; the other potential parameters were taken as $|\omega| = k = 1$. (b) Real (top) and imaginary (bottom) part of the first five energy eigenvalues as functions of $k$ for $|\omega| = 1$ and $\theta = \pi/10$.} \label{F-2}
\end{center}
\end{figure}
The first magnetic profile we will consider is constant, i.e., $\mathbf{B}(x) = B \mathbf{e}_{z}$, $B\in \mathbb{C}$. In the Landau gauge the vector potential is $\mathbf{A}(x) = xB \mathbf{e}_{y}$. Substituting this expression in equation~\eqref{superpotential} we get $\text{w}(x) = k + \omega\, x/2$ with $\omega = 2eB/c\hbar\in\mathbb{C}$, and the auxiliary SUSY partner potentials become
\begin{equation} \label{E-20}
\begin{aligned}
& V^{-}(x) = \frac{\omega^{2}}{4}\left(x + \frac{2k}{\omega}\right)^{2} - \frac{\omega}{2},
\\
& V^{+}(x) = \frac{\omega^{2}}{4}\left(x + \frac{2k}{\omega}\right)^{2} + \frac{\omega}{2}.
\end{aligned}
\end{equation}
These are called complex harmonic oscillators \cite{JuanCarlos2015}, whose real and imaginary parts can be observed in figure~\ref{F-1}. The corresponding eigenfunctions are given by
\begin{equation} \label{E-21}
\psi^{\pm}_{n}(x) =
\begin{cases}
c_{n}e^{-\frac{\zeta^{2}}{2}}\mathcal{H}_{n}\left[\zeta\right],\quad -\frac{\pi}{2} < \theta < \frac{\pi}{2},
\\
c_{n}e^{-\frac{\xi^{2}}{2}}\mathcal{H}_{n}\left[\xi\right],\quad \frac{\pi}{2} < \theta < \frac{3\pi}{2},
\end{cases}
\end{equation}
with $n$ being a non-negative integer, $\zeta = \sqrt{\omega/2}\left(x + 2k/\omega\right)$, $\xi = \sqrt{-\omega/2}\left(x - 2k/\omega\right)$, $\omega = |\omega|e^{i\theta}$ and $\mathcal{H}_{n}(\zeta)$ is a Hermite polynomial of degree $n$ and complex argument \cite{JuanCarlos2015}; we are denoting $\sqrt{\omega}=\sqrt{|\omega|}e^{i\theta/2}$ and $\sqrt{-\omega}=\sqrt{|\omega|}e^{i(\pi-\theta)/2}$. The eigenvalues for the potentials~\eqref{E-20} turn out to be
\begin{equation} \label{E-22}
\varepsilon^{-}_{0} = 0,\quad \varepsilon^{-}_{n} = \varepsilon^{+}_{n-1} = \pm n\omega,
\end{equation}
where $n$ is a natural number, the upper sign $+$ is taken for $-\pi/2 < \theta < \pi/2$ and the lower sign $-$ for $\pi/2 < \theta < 3\pi/2$. Thus, the electron energies~\eqref{E-16} for graphene in a constant complex magnetic field can be written as follows:
\begin{equation} \label{E-23}
E_{n} = \hbar v_{0}\sqrt{\pm n\omega},
\end{equation}
whose norms coincide with the result for the real case deduced in \cite{Kuru2009}, but now they are rotated in the complex plane by an angle $\theta/2$ with respect to the positive real line (see figure~\ref{F-2}(a)). That plot also shows concentric circumferences of radius $R \propto \sqrt{n|\omega|}$ centered at the origin, on which the energy $E_{n}$ lies regardless of the angle $\theta$. This leads us to conclude that, despite its complex nature, for a fixed angle $\theta$ the spectrum of $H$ is ordered in the standard way. Moreover, $\text{Sp}\left(H\right)$ is infinite discrete, and its energies do not depend on $k$. Figure~\ref{F-2}(b) shows the real and imaginary parts of the first energy eigenvalues as functions of $k$. The square-integrability of $\Psi_{n}(x,y)$ does not impose any constraint on the norm of $\omega$, but it does on its argument $\theta$, as shown in equation~\eqref{E-21}. Furthermore, when $\theta = \pm\pi/2$ the eigenfunctions $\psi^{\pm}_{n}(x)$ are not square-integrable, since in this case the $V^\pm(x)$ in equation~\eqref{E-20} become repulsive oscillator potentials \cite{Bermudez2013}, displaced by imaginary quantities both in the coordinate $x$ and in the energy origin, so the Hamiltonian $H$ has no bound state solutions at all. The probability and current densities are drawn in figure~\ref{F-3} for the first four bound states. Note that the ``ground state'', with $n=0$, has no associated current density, since its upper entry is zero, as seen in equation~\eqref{E-16}.
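These complex Landau levels are easy to evaluate numerically. The sketch below (an illustration, not part of the paper; it assumes units with $\hbar = v_{0} = 1$) shows that $|E_{n}| = \sqrt{n|\omega|}$ while $\arg E_{n} = \theta/2$, i.e., the spectrum is rotated rigidly in the complex plane:

```python
# Numerical sketch of eq. (E-23) in units hbar = v_0 = 1: electron energies
# E_n = sqrt(n*omega) for a constant complex field omega = |omega| e^{i theta}.
import cmath
import math

def landau_energy(n, mod_omega, theta):
    """Electron branch of eq. (E-23) for -pi/2 < theta < pi/2."""
    omega = mod_omega * cmath.exp(1j * theta)
    return cmath.sqrt(n * omega)          # principal square root

theta = math.pi / 10
for n in range(4):
    E = landau_energy(n, 1.0, theta)
    # |E_n| = sqrt(n), arg E_n = theta/2 for n > 0
    print(n, abs(E), cmath.phase(E))
```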
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.45]{./imagenes/fig3.pdf}
\caption{Probability density (top), current density in the $x$-direction (middle) and in the $y$-direction (bottom) for the constant magnetic field. The potential parameters taken are $|\omega| = k = 1$ and $\theta = \pi/10$.} \label{F-3}
\end{center}
\end{figure}
\subsection{Trigonometric singular well}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.75]{./imagenes/fig4.pdf}
\caption{Real (top) and imaginary part (bottom) of the complex trigonometric Rosen-Morse potentials and the corresponding magnetic field. The chosen potential parameters are $|D| = 4, \theta = \pi/10, k = -2, \mu = 1$.} \label{F-4}
\end{center}
\end{figure}
In this case a complex magnetic field of trigonometric form is taken, $\mathbf{B}(x) = B\csc^{2} \left(\mu x\right)\mathbf{e}_{z}$, $B\in \mathbb{C}$, $\mu\in\mathbb{R}^+$. The vector potential is given by $\mathbf{A}(x) = \left(-B/\mu\right) \cot (\mu x)\mathbf{e}_{y}$, thus it is straightforward to obtain the superpotential as $\text{w}(x) = k - D\cot (\mu x)$, with $D = eB/c\hbar\mu$. Hence, the auxiliary potentials now acquire the form
\begin{equation} \label{E-24}
\begin{aligned}
& V^{-}(x) = D(D - \mu)\csc^{2} (\mu x) - 2Dk\cot (\mu x) + k^{2} - D^{2}, \\
& V^{+}(x) = D(D + \mu)\csc^{2} (\mu x) - 2Dk\cot (\mu x) + k^{2} - D^{2}.
\end{aligned}
\end{equation}
These expressions suggest calling the previous $V^\pm(x)$ complex trigonometric Rosen-Morse potentials. Their real and imaginary parts are plotted in figure~\ref{F-4}. The corresponding eigenfunctions are given in terms of Jacobi polynomials $P_{n}^{(\alpha, \beta)}(\zeta)$ with complex argument and indexes, namely
\begin{equation} \label{E-25}
\psi^{j}_{n}(x) = c_{n} (-1)^{-\left(s_{j} + n\right)/2}(1 + \zeta^{2})^{-\left(s_{j} + n\right)/2}e^{r_{j}\text{arccot} (\zeta)}P_{n}^{\left(-s_{j} - n -ir_{j}, -s_{j} - n + ir_{j}\right)}(i\zeta),\quad j = \pm,
\end{equation}
where $s_{-} = D/\mu$, $s_{+} = s_{-} + 1$, $r_{-} = -kD/\mu(D + n\mu)$, $r_{+} = -kD/\mu(D + \mu + n\mu)$, $\zeta = \cot (\mu x)$ and $n$ is a non-negative integer. Using now the polar form $D=|D|e^{i\theta}$, it turns out that the square-integrability of these eigenfunctions is limited to the right side of the complex plane, where $-\pi/2 < \theta < \pi/2$. The spectra of the Hamiltonians $H^{\pm}$ consist of the complex eigenvalues
\begin{equation} \label{E-26}
\varepsilon^{-}_{0} = 0,\quad \varepsilon^{-}_{n} = \varepsilon_{n-1}^{+} = k^{2} - D^{2} + (D + n\mu)^{2} - \frac{k^{2}D^{2}}{(D + n\mu)^{2}}, \quad n\in \mathbb{N}.
\end{equation}
Substituting these expressions in equation~\eqref{E-16}, the electron energies for the complex trigonometric singular magnetic field turn out to be
\begin{equation} \label{E-27}
E_{n} = \hbar v_{0}\sqrt{k^{2} - D^{2} + (D + n\mu)^{2} - \frac{k^{2}D^{2}}{(D + n\mu)^{2}}}.
\end{equation}
We must mention that the norm of $E_{n}$ differs in general from the result for the real case shown in \cite{Kuru2009}, except when $\theta = 0$. In fact, the argument of $E_n$ depends in a non-trivial way on $\theta$, and it also shows a strong dependence on the potential parameters. The first electron energies in the complex plane are shown in figure~\ref{F-5}(a). Concentric ellipses centered at the origin can also be observed, with the energy $E_{n}$ belonging to the ellipse whose semi-major axis coincides with the $n$-th energy in the real case, regardless of the value of $\theta$. As in the previous case, this implies that for a fixed angle $\theta$ the spectrum $\text{Sp}\left(H\right)$ is ordered in the standard way. However, this happens only in the interval $(-k_{0},k_{0})$, where $k_{0}$ is such that $\text{Im}(E_{1}(k_{0})) = 0$. The spectrum of $H$ turns out to be infinite discrete, as can be seen in figure~\ref{F-5}(b). Lastly, plots of the probability and current densities are displayed in figure~\ref{F-6} for the first four bound states.
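A quick numerical sketch of equation~\eqref{E-26} (an illustration in units $\hbar = v_{0} = 1$, not part of the paper) shows that the $n=0$ level vanishes, that the eigenvalues are real when $\theta = 0$, and that a non-zero $\theta$ indeed produces complex eigenvalues:

```python
# Sketch of eq. (E-26): eigenvalues of the complex trigonometric singular
# well, eps_n = k^2 - D^2 + (D+n*mu)^2 - k^2 D^2/(D+n*mu)^2.
import cmath
import math

def eps(n, mod_D, theta, k, mu):
    D = mod_D * cmath.exp(1j * theta)
    s = D + n * mu
    return k**2 - D**2 + s**2 - (k * D / s)**2

print(eps(0, 4.0, math.pi / 10, -2.0, 1.0))  # ground level: 0
print(eps(1, 4.0, 0.0, -2.0, 1.0))           # theta = 0: real eigenvalue
print(eps(1, 4.0, math.pi / 10, -2.0, 1.0))  # complex field: Im != 0
```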
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.5]{./imagenes/fig5.pdf}
\caption{(a) First electron energies $E_n$ for the trigonometric singular well in the complex plane with three different angles. The ground state is marked as a red circle at the origin, and the other potential parameters are $|D| = 4, k = -2, \mu = 1$. (b) Real (top) and imaginary part (bottom) of $E_n$ as functions of $k$ for $|D| = 4, \theta = \pi/10, \mu = 1$.} \label{F-5}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.6]{./imagenes/fig6.pdf}
\caption{Probability density (top), current density in $x$-direction (middle) and $y$-direction (bottom) in the case of a trigonometric singular well. The potential parameters taken are $|D| = 4, \theta = \pi/10, k = -2$ and $\mu = 1$.} \label{F-6}
\end{center}
\end{figure}
\subsection{Exponentially decaying magnetic field}
Our last example is an exponentially decaying complex magnetic field $\mathbf{B}(x) = Be^{-\mu x}\mathbf{e}_{z}$, $B\in \mathbb{C}$, $\mu\in\mathbb{R}^+$, whose vector potential is $\mathbf{A}(x) = -(B/\mu)e^{-\mu x} \mathbf{e}_{y}$. In agreement with equation~\eqref{superpotential}, the superpotential is given by $\text{w}(x) = k - De^{-\mu x}$, $D = eB/c\hbar\mu$. Inserting this expression in equation~\eqref{E-10}, we get the auxiliary SUSY partner potentials
\begin{equation} \label{E-28}
\begin{aligned}
& V^{-} = k^{2} + D^{2}e^{-2\mu x} - 2D\left(k + \frac{\mu}{2}\right)e^{-\mu x}, \\
& V^{+} = k^{2} + D^{2}e^{-2\mu x} - 2D\left(k - \frac{\mu}{2}\right)e^{-\mu x},
\end{aligned}
\end{equation}
which are the Morse potentials but with the parameter $D$ being now complex. Their real and imaginary parts are shown in figure~\ref{F-7}. These potentials are also exactly solvable; the corresponding eigenfunctions are given by
\begin{equation} \label{E-29}
\psi^{j}_{n}(x) = c_{n}(\zeta)^{s_{j} - n}e^{-\frac{\zeta}{2}}L_{n}^{2(s_{j} - n)}(\zeta),\quad j = \pm,
\end{equation}
where $s_{-} = k/\mu$, $s_{+} = s_{-} - 1$, $\zeta = (2D/\mu)e^{-\mu x}$, $n$ is a non-negative integer and $L_{n}^{\lambda}(\zeta)$ is an associated Laguerre polynomial of complex argument. The polar form $D = |D|e^{i\theta}$ allows us to deduce the square-integrability conditions: $-\pi/2 < \theta < \pi/2$ and $k > n\mu$. The corresponding eigenvalues are
\begin{equation} \label{E-30}
\varepsilon^{-}_{0} = 0,\quad \varepsilon_{n}^{-} = \varepsilon_{n - 1}^{+} = k^{2}-(k - n\mu)^{2},
\end{equation}
with $n$ being a natural number. It is worth stressing that the spectra of $H^\pm$ are real, since unlike the previous cases now these Hamiltonians are pseudo-hermitian \cite{Mostafazadeh2002}. Hence, the energy eigenvalues for the graphene electron in the exponentially decaying complex magnetic field turn out to be
\begin{equation} \label{E-31}
E_{n} = \hbar v_{0}\sqrt{k^{2}-(k - n\mu)^{2}}.
\end{equation}
Note that, in this example, Sp($H$) coincides exactly with the spectrum for the real case addressed in \cite{Kuru2009}. Such a spectrum is finite and discrete since, once the parameters $k$ and $\mu$ are fixed, the condition $k > n\mu$ limits the number of square-integrable eigenfunctions and hence the number of allowed electron energies (see figure~\ref{F-8}). Moreover, there is an enveloping line, also shown in figure~\ref{F-8}(b), whose slope (equal to $v_{0}$) represents the average $y$-velocity. This line separates the $k$-domain into two subsets: one where there are bound states and another where there are just scattering states. Finally, the probability and current densities are plotted in figure~\ref{F-9}.
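The finiteness of the spectrum can be checked numerically. The sketch below (in natural units $\hbar v_{0} = 1$, an assumption for illustration) enumerates the allowed energies from equation~\eqref{E-31} until the square-integrability condition $k > n\mu$ fails, reproducing the six bound states for $k = 6$, $\mu = 1$:

```python
import numpy as np

def bound_state_energies(k, mu, hbar_v0=1.0):
    """Discrete spectrum E_n = hbar*v0*sqrt(k^2 - (k - n*mu)^2),
    kept only while the square-integrability condition k > n*mu holds."""
    energies = []
    n = 0
    while k > n * mu:  # this condition makes the spectrum finite
        energies.append(hbar_v0 * np.sqrt(k**2 - (k - n * mu)**2))
        n += 1
    return np.array(energies)

E = bound_state_energies(k=6.0, mu=1.0)
print(len(E))   # 6 allowed energies (n = 0, ..., 5)
print(E[0])     # ground state energy is zero
```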
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.75]{./imagenes/fig7.pdf}
\caption{Real (top) and imaginary part (bottom) of the complex Morse potentials and the corresponding magnetic field. The potential parameters taken are $|D| = 1, \theta = \pi/10, k = 6, \mu = 1$.} \label{F-7}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.75]{./imagenes/fig8.pdf}
\caption{(a) First electron energies $E_n$ for the exponentially decaying magnetic field in the complex plane for $k = 6, \mu = 1$. (b) Electron energies $E_n$ as functions of $k$ for the same $\mu$ value as in (a).} \label{F-8}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{./imagenes/fig9.pdf}
\caption{Probability density (top), current density in the $x$-direction (middle) and the $y$-direction (bottom) for the exponentially decaying magnetic field. The potential parameters taken are $|D| = 1, \theta = \pi/10, k = 6$ and $\mu = 1$.} \label{F-9}
\end{center}
\end{figure}
\section{Discussion} \label{S4}
It is interesting to observe that there are some $x$-points for which the non-trivial imaginary parts of $eB(x)/c\hbar$ and $V^+(x)$ are equal, as shown in figures~\ref{F-1}, \ref{F-4} and \ref{F-7}. If we denote one of these points by $\chi$, it turns out that $\text{Im}[\text{w}^{2}(\chi)] = 0$ in order to fulfill equation~\eqref{E-10}, and this implies that $\text{Re}[\text{w}(\chi)] = 0$. Let us recall now a classical quantity, the {\it `kinematical'} momentum along the $y$-direction, given by $\Pi_{y} = p_{y} + (e/c)A$. Since the canonical momentum $p_{y}$ is a constant of motion, it follows that $\text{Re}[\Pi_{y}(\chi)] = \hbar\text{Re}[\text{w}(\chi)] = 0$. It is worth noticing that the maximum of the ground state probability density appears at the point $\chi$, and the latter depends on the angle $\theta$ (see figures~\ref{F-3}, \ref{F-6} and \ref{F-9}).
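For the exponentially decaying field this point can be located in closed form: with $\text{w}(x) = k - |D|e^{i\theta}e^{-\mu x}$, the condition $\text{Re}[\text{w}(\chi)] = 0$ gives $e^{-\mu\chi} = k/(|D|\cos\theta)$. The sketch below (using the figure~\ref{F-9} parameters) verifies numerically that both $\text{Re}[\text{w}(\chi)]$ and $\text{Im}[\text{w}^{2}(\chi)]$ vanish at this point:

```python
import numpy as np

# Superpotential of the exponentially decaying field: w(x) = k - D*exp(-mu*x),
# with complex D = |D|*exp(i*theta); parameters as in figure 9.
absD, theta, k, mu = 1.0, np.pi / 10, 6.0, 1.0
D = absD * np.exp(1j * theta)

# Point chi solving Re[w(chi)] = 0, i.e. k = |D|*cos(theta)*exp(-mu*chi):
chi = -np.log(k / (absD * np.cos(theta))) / mu

w = k - D * np.exp(-mu * chi)
print(np.real(w))      # ~0: Re[Pi_y(chi)] = hbar*Re[w(chi)] vanishes
print(np.imag(w**2))   # ~0: consistent with Im[w^2(chi)] = 0
```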
On the other hand, since the Hamiltonian \eqref{E-2} is non-hermitian, its eigenvalues are not necessarily real. In fact, it can be written as $H/v_{0} = \sigma_{x}p_{x} + \sigma_{y}[p_{y} + (e/c)A(x)]$. Now, if we express the vector potential in polar form, it turns out that
\begin{equation} \label{E-32}
H = H_{R} + iv_{0}(e/c)\sigma_{y}|A(x)|\sin\theta,
\end{equation}
where $H_{R}= v_{0}\left[\sigma_{x}p_{x} + \sigma_{y}\left(p_{y} + (e/c)|A(x)|\cos\theta\right)\right]$ is a hermitian operator whose eigenvalues are real, similar to the Hamiltonian addressed in \cite{Kuru2009}. The second term of equation~\eqref{E-32} is an anti-hermitian operator whose eigenvalues are purely imaginary. In order to understand the nature of this term, let us remember that the Dirac-Weyl equation in graphene describes a massless {\it pseudo-spin} $1/2$ particle, where {\it pseudo-spin `up'} means that the electron is in the sublattice B and {\it `down'} in the sublattice A. In terms of the pseudo-spin ladder operators $S_{\pm} = S_{x} \pm iS_{y}$, it follows that the second term can be written as $(ev_{0}/c\hbar)\left(S_{+} - S_{-}\right)|A(x)|\sin\theta$. It induces a pseudo-spin rotation, and it is analogous to the corresponding term that appears in the Hamiltonian describing non-uniformly strained graphene (see \cite{Maurice2015}). Sticking to this analogy, in \cite{Vozmediano2013} this anti-hermitian term is associated with the layer curvature induced by strain, while in the case analyzed here the analogous term is proportional to the imaginary part of the vector potential; the issue, however, is to find a phenomenon that could be associated with it. Since the Hamiltonian \eqref{E-32} is time-independent, the time evolution of the total probability associated to any eigenstate has an exponential factor which depends on the imaginary part of its eigenvalue $E_{n}$, namely,
\begin{equation} \label{E-33}
\mathscr{P}_{T}(t) = \langle\Psi_{n}(t)|\Psi_{n}(t)\rangle = e^{2\frac{\text{Im}[E_{n}]}{\hbar}t}\langle\Psi_{n}(0)|\Psi_{n}(0)\rangle.
\end{equation}
A small probability increase (decrease) occurs when the exponent $2(\text{Im}[E_{n}]/\hbar)t\ll 1$, which happens for times approximately inversely proportional to the imaginary part of the eigenvalue. Using the polar form $E_{n} = |E_{n}|e^{i\phi_n}$ for the first bound states, it turns out that for $\phi_n\ll 1$ we obtain times long enough to guarantee probability conservation up to a small perturbation term. Thus, the anti-hermitian term in the Hamiltonian \eqref{E-32} can be seen as a perturbation describing the loss or gain of charge carriers in the graphene sublattices. Therefore, one could consider graphene in a real magnetic field orthogonal to its surface, trying to model a non-conservative system due to the interaction of the {\it pseudo-spin} electron with the magnetic field by means of the exactly solvable non-hermitian Hamiltonian~\eqref{E-2}, and by expanding the eigenvalues $E_{n}$ in powers of the argument $\phi_n$ one could find the corresponding energy corrections. Furthermore, the probability and current densities will now be modified as compared with the real case. It is worth noticing that, even though the Hamiltonian is non-hermitian, this property does not ensure that its eigenvalues are complex, as seen in the third example worked out in the previous section, where the eigenvalues were real and thus the total probability of its eigenstates was conserved.
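The size of this perturbation can be illustrated with a short numerical sketch (natural units $\hbar = 1$ and an eigenvalue with a small argument $\phi_n$ are assumptions for illustration), evaluating equation~\eqref{E-33} at an early time:

```python
import numpy as np

hbar = 1.0                      # natural units (assumption for illustration)
E_n = 3.0 * np.exp(1j * 0.01)   # sample eigenvalue with small argument phi_n

def total_probability(t, E, P0=1.0, hbar=1.0):
    """Equation (33): P_T(t) = exp(2*Im[E]*t/hbar) * P_T(0)."""
    return np.exp(2.0 * np.imag(E) * t / hbar) * P0

# For times with 2*Im[E_n]*t/hbar << 1 the total probability changes
# only by a small perturbation around its initial value:
t_small = 0.1
print(total_probability(t_small, E_n) - 1.0)  # small probability gain
```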
Finally, we must mention an interesting case where the anti-hermitian term in the Hamiltonian \eqref{E-32} is not a perturbation around $\theta=0$: in the limit $\theta\rightarrow\pi/2$, our Hamiltonian describes free graphene \cite{Ferreira2011} plus a term of interaction between the {\it pseudo-spin} electron and a purely imaginary magnetic field. All the magnetic profiles used in section \ref{S3} lead to auxiliary potentials without bound states in this limit. However, if we choose an argument $\theta\approx\pi/2$, we will get potentials with ``weak'' bound states, whose probability densities have a pronounced maximum that diverges as $\theta$ tends to $\pi/2$.
\section{Conclusions} \label{S5}
Exact analytic solutions for the non-hermitian Hamiltonian $H$ describing an electron in graphene interacting with external complex magnetic fields were found. It is worth noticing that $H$ can be expressed as the sum of a hermitian operator plus a non-hermitian term, the latter causing pseudo-spin rotations. Although we have solved the eigenvalue problem for the Hamiltonian \eqref{E-2} assuming that a complex magnetic field can in principle be produced, a possible physical interpretation could be that a real magnetic field, with the same amplitude as the complex one, is being applied to graphene, and the angle $\theta$ is a parameter allowing us to introduce into the Hamiltonian a term describing the loss or gain of charge carriers induced by the interaction between the {\it pseudo-spin} electron and the magnetic field. Due to the complex nature of the problem, there are important differences in some physical quantities with respect to the real case, such as the probability and current densities. In particular, the current density along the $x$-direction turns out to be non-null, in contrast to the result found in \cite{Kuru2009} for the real case. Furthermore, the ground state probability density now acquires its maximum at the point where the imaginary parts of $eB/\hbar c$ and $V^{+}$ are equal.
\section*{Acknowledgements}
This work was supported by CONACYT (Mexico), project FORDECYT-PRONACES/61533/2020. JDGM especially thanks CONACYT for financial support through the PhD scholarship.
\bibliography{complex_fields}
\bibliographystyle{unsrt}
\addcontentsline{toc}{section}{References}
\end{document}