\section{Introduction}
Our main motivation for this work is the
celebrated Skolem-Noether theorem. We will state it in the version given, for example, in \cite{Her}.
But first, a word on conventions. All our algebras are assumed to be unital algebras over a fixed field $F$, subalgebras are assumed to contain the same unity, and all homomorphisms send $1$ to $1$.
\begin{theorem} {\bf (Skolem-Noether)} Let $A$ be a simple artinian algebra with center $F$. If $R$ is a finite-dimensional simple $F$-subalgebra of $A$ and $\varphi$ is an $F$-algebra homomorphism from $R$ into $A$, then there exists an invertible element $c\in A$ such that $\varphi(x)=cxc^{-1}$ for all $x\in R$. (In other words, $\varphi$ can be extended to an inner automorphism of $A$.)
\end{theorem}
Recall that
an algebra is said to be {\bf central} if its center consists of
scalar multiples of unity.
As usual, we will use the term {\bf central simple algebra} for an algebra that is central, simple, and also finite-dimensional.
\begin{definition}\label{def}
An algebra $S$ is a {\bf Skolem-Noether algebra} ({\bf SN algebra} for short) if for every central simple algebra $R$ and every homomorphism $\varphi:R\to R\otimes S$ there exists an invertible element $c\in R\otimes S$ such that $\varphi(x)= cxc^{-1}$ for every $x\in R$. (Here, $R$ is identified with $R\otimes 1$).
\end{definition}
The Skolem-Noether theorem, together
with the well-known fact that the class of central simple algebras is closed under tensor products, implies that every central simple algebra $S$ is an SN algebra. A partial converse is also true: the assertion that central simple algebras are SN algebras implies
an important special case of the Skolem-Noether theorem where $A$ is a central simple algebra and $R$ is its central simple subalgebra. This is because, under these assumptions,
$A$ is isomorphic to $R\otimes S$ where $S$ is also a central simple subalgebra of $A$ \cite[Corollary 4.49]{INCA}.
SN algebras naturally arise from the problem of understanding automorphism groups of tensor products of algebras.
Unlike the case of derivations on tensor products \cite{Bre}, the general solution to this problem seems far out of reach.
For instance, while automorphisms of univariate and bivariate polynomial algebras
are well understood \cite{Jun}, already the trivariate case is wild \cite{SU}.
In another direction, functional analysts consider the question of when the flip automorphism $A\otimes A\to A\otimes A$ is (approximately) inner for operator algebras $A$; see \cite{S,ER,Izu}. In this paper we settle the following special case of the above problem: if $S$ is an SN algebra and $R$ is a central simple algebra, then automorphisms of $R\otimes S$ are just compositions of inner automorphisms and automorphisms of $S$; see Proposition \ref{prop:aut}. While
the class of SN algebras looks restrictive, our main results show that various classical and important families of algebras satisfy the SN property, for example semilocal (in particular artinian and finite-dimensional) algebras, unique factorization domains, free algebras, etc.
Some of the readers might be interested only in the case where $R=M_n(F)$, the algebra of $n\times n$ matrices with entries in $F$. Let us therefore mention that since $M_n(F)\otimes S$ can be identified with $M_n(S)$, the condition that $S$ is an SN algebra implies that every homomorphism from $M_n(F)$ into $M_n(S)$ can be extended to an inner automorphism of $M_n(S)$. Moreover, we show in Proposition \ref{p:mat} that the latter condition implies the SN property.
However, this does not lead to any simplifications of our proofs, so we persist with central simple algebras as in Definition \ref{def}.
\subsection*{Main results and guide to the paper}
The short Section \ref{s:prelim} on preliminaries
includes Proposition \ref{p:mat}: $S$ is an SN algebra if and only if all homomorphisms $M_n(F)\to M_n(S)$ extend to inner automorphisms. Section \ref{s:auto}
positions SN algebras into a wider context of automorphisms of tensor products. For instance, Proposition \ref{prop:aut} proves that given an
SN algebra $S$ and a central simple algebra
$R$, every automorphism of $R\otimes S$ is the composition of an inner automorphism and an automorphism of $S$. In particular, this applies to matrix algebras over SN algebras.
We then identify classes of algebras which satisfy the SN property. In Section \ref{s:basic} we derive Lemma \ref{l}, which is the main technical tool for proving subsequent results. Section \ref{s:semiloc} culminates in Theorem \ref{thm:semiLoc} showing that semilocal algebras are SN. Hence all artinian algebras and thus all finite-dimensional algebras are SN. Section \ref{s:findim}
refines the latter result. Namely,
every homomorphism from a central simple subalgebra $R$ of a finite-dimensional algebra $A$
into $A$ extends to an inner
automorphism of $A$ (see Theorem \ref{tfd}). In Section \ref{s:dom} we give examples of
domains which are SN algebras, such as
unique factorization domains (UFDs) and free algebras, see Corollary \ref{ufd}
and Corollary \ref{cor:free}.
Section~\ref{s:matrixpoly} uses the Quillen-Suslin theorem to prove that matrix algebras over polynomial algebras are SN.
The paper concludes with Section \ref{s:poly},
where we show that an algebra $S$ is SN if and only if the formal power series algebra $S[[\xi]]$ is SN.
\section{Preliminaries}\label{s:prelim}
The purpose of this section is to introduce the notation and terminology, and
prove a proposition that yields a characterization of SN algebras.
Let $R$ be a central simple algebra. Given $w, z \in R$, we define
the left and right multiplication operators
$L_w, R_z:R\to R$
by
$$L_w(x)=wx\quad\mbox{and}\quad R_z(x)=xz.$$
As is well-known, every linear map from $R$ into $R$ can be written as a sum of maps of the form $L_w R_z$, $w,z\in R$ \cite[Lemma 1.25]{INCA}. Accordingly, given a basis $\{r_1,\dots,r_d\}$ of $R$, there exist $w_j,z_j\in R$ such that
$h=\sum_j L_{w_j}R_{z_j}$ satisfies
$h(r_1)=1$ and $h(r_k)=0$, $k\ne 1$. That is,
\begin{equation}\label{oh}
\sum_j w_jr_1z_j = 1\quad \mbox{and}\quad \sum_j w_jr_kz_j = 0\,\,\,\mbox{if $k> 1$.}
\end{equation}
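To illustrate \eqref{oh} in the simplest split case (the choice $R=M_n(F)$ with the basis of matrix units $\{e_{kl}\}$ and $r_1=e_{11}$ is made here only for illustration), one may take $w_j=e_{j1}$ and $z_j=e_{1j}$, $j=1,\dots,n$. Indeed,
$$\sum_{j=1}^n e_{j1}e_{11}e_{1j} = \sum_{j=1}^n e_{jj}=1\quad\mbox{and}\quad \sum_{j=1}^n e_{j1}e_{kl}e_{1j} = \delta_{1k}\delta_{l1}\sum_{j=1}^n e_{jj}=0\,\,\,\mbox{if $(k,l)\ne (1,1)$.}$$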
We will be mostly concerned with tensor product algebras $R\otimes S$.
Here $R,S$ are algebras over a field $F$ and the tensor product is taken over $F$.
As usual,
we identify $R$ with $R\otimes 1$, and, accordingly, often write $r\otimes 1\in R\otimes 1$ simply as $r$.
Let us point out an elementary fact that will be used freely without further reference. If the $r_i$'s are linearly independent elements
in $R$, then for all $p_j\in R$ and $s_j, t_i\in S$,
\begin{equation}\label{nov}
\sum_i r_i\otimes t_i = \sum_j p_j\otimes s_j
\end{equation}
implies that each $t_i$ lies in the linear span of the $s_j$'s \cite[Lemma 4.9]{INCA}. Similarly, assuming that the $t_i$'s are linearly independent, it follows from \eqref{nov} that each $r_i$ lies in the linear span of the $p_j$'s.
By ${\rm rad}(S)$ we denote the {\bf Jacobson radical} of the algebra $S$. Recall that $S$ is called a {\bf semilocal algebra}
if $S/{\rm rad}(S)$ is a semisimple algebra, i.e., isomorphic to a finite direct product of simple artinian algebras.
In the special case where $S/{\rm rad}(S)$ is a division algebra, $S$ is called a {\bf local algebra}. Finally, we say that $S$ is a {\bf stably finite algebra} if for all $n\ge 1$ and all $x,y\in M_n(S)$, $xy=1$ implies $yx=1$.
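To put the last condition into context, we record two standard facts (included only as an aside): every finite-dimensional algebra is stably finite, since a one-sided inverse of a linear operator on a finite-dimensional vector space is automatically two-sided, whereas the algebra
$$F\langle x,y\rangle/(xy-1)$$
is not stably finite, as the images of $x$ and $y$ in it satisfy $xy=1$ but $yx\ne 1$.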
To conclude the section we give an alternative characterization of the SN property.
In order to show that $S$ is an SN algebra it suffices to
verify
the condition of Definition \ref{def} for $R=M_n(F)$, i.e., to check that all $F$-algebra homomorphisms $M_n(F)\to M_n(S)$ are given by conjugation.
\begin{proposition}\label{p:mat}
Let $S$ be an algebra and suppose that for every $n\in\mathbb{N}$ and every homomorphism $\varphi:M_n(F)\to M_n(S)$ there exists an invertible $c\in M_n(S)$ such that $\varphi(x)=cxc^{-1}$ for every $x\in M_n(F)$. Then $S$ is an SN algebra.
\end{proposition}
\begin{proof}
Let $R$ be a central simple algebra and $\varphi:R\to R\otimes S$ a homomorphism. Let $e\in\mathbb{N}$ be the exponent of $R$, i.e., the order of $R$ as an element of the Brauer group of $F$ \cite[Definition 4.5.12]{GS}. Let
$$\tilde{\varphi}={\rm id}^{e-1}\otimes \varphi: R^{\otimes e}\to R^{\otimes e}\otimes S.$$
Since
$$R^{\otimes e}\cong M_{(\deg R)^e}(F),$$
by assumption there exists $c\in R^{\otimes e}\otimes S$ such that
$\tilde{\varphi}(x)=cxc^{-1}$ for every $x\in R^{\otimes e}$.
If $e>1$, we can write $c$ as
$$c=\sum_{i_2,\dots,i_e,j} a_{i_2,\dots,i_e,j}\otimes r_{i_2}\otimes\cdots \otimes r_{i_e}\otimes s_j$$
for some $a_{i_2,\dots,i_e,j},r_i\in R$ and $s_j\in S$ where $\{r_i\}_i\subset R$ and $\{s_j\}_j\subset S$ are linearly independent sets. If $x=x_1\otimes 1\otimes\cdots\otimes 1$ for $x_1\in R$, then $\tilde{\varphi}(x)c-cx=0$ becomes
$$ \sum_{i_2,\dots,i_e,j}
(xa_{i_2,\dots,i_e,j}-a_{i_2,\dots,i_e,j}x)\otimes r_{i_2}\otimes\cdots\otimes r_{i_e}\otimes s_j=0.$$
Since the elements $r_{i_2}\otimes\cdots\otimes r_{i_e}\otimes s_j$ form a linearly independent set in $R^{\otimes (e-1)}\otimes S$, we conclude that $xa_{i_2,\dots,i_e,j}=a_{i_2,\dots,i_e,j}x$ for all indices $i_2,\dots,i_e,j$ and all $x\in R$. As $R$ is central we have $a_{i_2,\dots,i_e,j}\in F$ and therefore $c\in 1\otimes R^{\otimes (e-1)}\otimes S\cong R^{\otimes (e-1)}\otimes S$. Consequently the homomorphism
$$\hat{\varphi}={\rm id}^{e-2}\otimes \varphi: R^{\otimes (e-1)}\to R^{\otimes (e-1)}\otimes S$$
satisfies $\hat{\varphi}(x)=cxc^{-1}$ for every $x\in R^{\otimes (e-1)}$. Continuing by induction we conclude that $c\in R\otimes S$ and $\varphi(x)=cxc^{-1}$ for all $x\in R$, so $S$ is an SN algebra.
\end{proof}
While Proposition \ref{p:mat} seemingly facilitates demonstrating that $S$ is an SN algebra, it does not simplify our proofs in the sequel.
\section{SN algebras and automorphisms}\label{s:auto}
In this section we give a few motivating
results and prove that every automorphism of
a matrix algebra over an SN algebra $S$
is an inner automorphism composed
with an automorphism of $S$, see Corollary \ref{cor:autMn}.
We begin with a proposition which justifies the requirement in Definition \ref{def} that the algebra $R$
is central simple.
\begin{proposition}\label{rjes}
Let $R$ be a subalgebra of an algebra $S$. If the homomorphism $x\otimes 1\mapsto 1\otimes x$ from $R=R\otimes 1$
into $R\otimes S$ can be extended to an inner automorphism of $R\otimes S$, then
$R$ is a central simple algebra.
\end{proposition}
\begin{proof}
By assumption, there exists an invertible element $a\in R\otimes S$ such that
$$1\otimes x =a(x\otimes 1)a^{-1}$$ for all $x\in R$.
Let us write $a=\sum_{i=1}^m u_i\otimes v_i$ and $a^{-1}= \sum_{j=1}^n w_j\otimes z_j$.
Accordingly,
\begin{align}\label{k}
1\otimes x =& \Big(\sum_{i=1}^m u_i\otimes v_i\Big)(x\otimes 1)\Big( \sum_{j=1}^n w_j\otimes z_j\Big) \\
=&\sum_{i=1}^m \sum_{j=1}^n u_ixw_j\otimes v_iz_j.\nonumber
\end{align}
This implies that every $x\in R$ lies in the linear span of all $v_iz_j$, $i=1,\dots,m$, $j=1,\dots,n$. Thus, $R$ is finite-dimensional.
On the other hand, \eqref{k} implies that for every nonzero $x\in R$, $1$ lies in $RxR$. This means that $R$ is simple.
Finally, if $z$ lies in the center of $R$, then $1\otimes z = a(z\otimes 1)a^{-1} = z\otimes 1$, which readily implies that $z$ is a scalar multiple of $1$, as desired.
\end{proof}
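As a concrete illustration (the choice of $R$ is made only for illustration), take the non-simple algebra $R=F[t]/(t^2)$. Since $R$ is commutative, so is $R\otimes R$, and hence its only inner automorphism is the identity. As
$$t\otimes 1\ne 1\otimes t$$
in $R\otimes R$, the homomorphism $x\otimes 1\mapsto 1\otimes x$ indeed cannot be extended to an inner automorphism of $R\otimes R$, in accordance with Proposition \ref{rjes}.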
The question of when the automorphism $x\otimes y\mapsto y\otimes x$ of $R\otimes R$ is inner was initiated by Sakai \cite{S} in the C$^*$-algebra context, and investigated further by Bunce \cite{Bu}. The following corollary is an extension of \cite[Theorem 2]{Bu}.
\begin{corollary}\label{rjes2}
Let $R$ be an arbitrary algebra. The homomorphism $x\otimes 1\mapsto 1\otimes x$ from $R=R\otimes 1$ into $R\otimes R$ can be extended to an inner automorphism of $R\otimes R$ if and only if
$R$ is a central simple algebra.
\end{corollary}
\begin{proof}
If $R$ is a central simple algebra, then so is $R\otimes R$ \cite[Corollary 4.44]{INCA}, and so every homomorphism from $R$ into $R\otimes R$ can be extended to an inner automorphism by the Skolem-Noether theorem. The converse follows from Proposition \ref{rjes}.
\end{proof}
The next proposition yields another motivation for considering SN algebras.
\begin{proposition}\label{prop:aut}
Let $R$ be a central simple algebra and let $S$ be an SN algebra. Then every automorphism $\varphi$ of $R\otimes S$ is the composition of an inner automorphism and an automorphism of the form ${\rm id}_R\otimes \sigma$ where $\sigma$ is an automorphism of $S$.
\end{proposition}
\begin{proof}
By assumption, the restriction of $\varphi$ to $R$ can be extended to an inner automorphism $x\mapsto cxc^{-1}$ of $R\otimes S$. Considering the automorphism $x\mapsto
c^{-1}\varphi(x)c$ we thus see that there is no loss of generality in assuming that $\varphi$ acts as the identity on $R$. Note that the proposition will be proved by showing
that $\varphi$
maps $1\otimes S$ into itself. Pick $s\in S$. We can write $\varphi(1\otimes s)$ as
$\sum_{j} p_j\otimes s_j$
where the $s_j$'s are linearly independent.
Since $1\otimes s$ commutes with $x\otimes 1$ for every $x\in R$ it follows that so does $\varphi(1\otimes s)$. This implies that
$$\sum_j (p_jx-xp_j)\otimes s_j =0.$$
As the $s_j$'s are linearly independent it follows that $p_jx-xp_j=0$ for each $j$ and each $x\in R$.
Hence, since $R$ is central, each $p_j$ is a scalar multiple of $1$. Consequently, $\varphi(1\otimes s)\in 1\otimes S$.
\end{proof}
If $R=M_n(F)$, then
$R\otimes S$ can be identified with $M_n(S)$, and the proposition gets the following form.
\begin{corollary}\label{cor:autMn}
If $S$ is an SN algebra, then every automorphism $\varphi$ of $M_n(S)$ is of the form $$\varphi\big((s_{ij})\big) = c\big(\sigma(s_{ij})\big)c^{-1}$$ where $c$ is an invertible element in $M_n(S)$ and $\sigma$ is an automorphism of $S$.
\end{corollary}
This result is known in the case where $S$ is either an artinian algebra \cite[Theorem 3.13]{BO}, a UFD \cite[Corollary 15]{I}, or a commutative local algebra (see, e.g., \cite[p. 163]{K}). As we will see, all these algebras are SN algebras. On the other hand, \cite{I}
shows that the commutative domain $\mathbb Z[\sqrt{-5}]$ does not satisfy the conclusion of Corollary \ref{cor:autMn}.
We give an algebra with the same property in Example \ref{ex:elliptic}.
\section{Basic lemma}\label{s:basic}
All our main results will be derived from the following technical lemma. Its proof will use some ideas from the proof of (a special case of) the Skolem-Noether theorem given in \cite[pp. 13--14]{INCA}.
\begin{lemma}\label{l}
Let $R$ be a central simple algebra with basis $\{r_1,\dots,r_d\}$ and let $S$ be an arbitrary algebra. Then $\varphi:R\to R\otimes S$ is a homomorphism
if and only if there exist $c_1,\dots,c_d\in R\otimes S$ such that
\begin{enumerate}
\item[(a)] $\varphi(x)=\sum_{k=1}^d c_k x r_{k}$ for all $x\in R$,
\item[(b)] $\varphi(x)c_k=c_kx$ for all $x\in R$, and
\item[(c)] $\sum_{k=1}^d c_kr_k = 1.$
\end{enumerate}
Moreover, writing $c_k= \sum_{l=1}^{d} r_{l}\otimes s_{kl}$, we
have that for each $k$ and $l$ there exists $b_{kl}\in R\otimes S$ such that $$b_{kl}c_k = 1\otimes s_{kl}.$$ Accordingly, if $S$ is stably finite and there exist $k$ and $l$ such that $s_{kl}$ is invertible in $S$, then $c=c_k$ is invertible in $R\otimes S$ and $$\varphi(x)=cxc^{-1}$$ for all $x\in R$.
\end{lemma}
\begin{proof}
Since $R$ is finite-dimensional, there exist finitely many $s_i\in S$ and
linear maps $f_i:R\to R$
such that
$$\varphi(x)= \sum_i f_i(x)\otimes s_i
$$
for all $x\in R$. By \cite[Lemma 1.25]{INCA} there exist $w_{ij}, z_{ij}\in R$ such that
$$f_i = \sum_{j} L_{w_{ij}}R_{z_{ij}}.$$
Consequently, for every $x\in R$ we have
\begin{align*}
\varphi(x) &=\sum_i \Big( \sum_{j} w_{ij}xz_{ij}\Big)\otimes s_i\\
&=\sum_i \sum_{j} (w_{ij}\otimes s_i)x z_{ij}.
\end{align*}
Writing each $z_{ij}$ as a linear combination of $r_1,\dots,r_d$ we see that $\varphi$ is of the form described in (a).
We now use the multiplicativity of $\varphi$, i.e.,
$\varphi(xy)= \varphi(x)\varphi(y)$ for all $x,y\in R$. In view of (a) we can rewrite this as
\begin{equation} \label{nm} \sum_{k=1}^d c_kxyr_k = \sum_{k=1}^d \varphi(x) c_k yr_k. \end{equation}
Pick $w_j,z_j\in R$ such that \eqref{oh} holds.
Setting $y= w_j$ in \eqref{nm},
multiplying the resulting identity from the right by $z_j$, and then summing over all $j$, we get
$$
\sum_j \sum_{k=1}^d c_kx w_j r_k z_j = \sum_j \sum_{k=1}^d \varphi(x) c_k w_j r_k z_j,
$$
that is,
$$
\sum_{k=1}^d c_kx \Big(\sum_j w_j r_k z_j\Big) = \sum_{k=1}^d \varphi(x) c_k \Big(\sum_j w_j r_k z_j\Big).
$$
By \eqref{oh} this reduces to $c_1x = \varphi(x)c_1$. Of course, the same proof applies to every $c_k$, so (b) holds. Finally, (c) follows
from $\varphi(1)=1$.
A direct verification shows that (a), (b), and (c) imply that $\varphi$ is a homomorphism.
Let us write $c_1 = \sum_l r_{l}\otimes s_{1l}$, and let $w_j,z_j$ be as above. Using (b) we obtain
\begin{align*}
\sum_j w_j\varphi(z_j) c_1 &= \sum_j w_jc_1z_j\\
&= \sum_j\sum_l w_j ( r_{l}\otimes s_{1l})z_j\\
&=\sum_l\Big(\sum_j w_jr_{l}z_j\Big)\otimes s_{1l}\\
&= 1\otimes s_{11}.
\end{align*}
Thus, $b_{11}= \sum_j w_j\varphi(z_j)$ satisfies $b_{11}c_1 = 1\otimes s_{11}$. Similarly we find other $b_{kl}$'s.
Finally, assume that $s_{kl}$ is invertible in $S$ for some $k$ and $l$. Then
$(1\otimes s_{kl}^{-1})b_{kl}$ is a left inverse of $c_k$. If $S$ is stably finite, then the result by Montgomery \cite[Theorem 1]{Mon} implies that
this element is also a right inverse. Therefore, (b) shows that $c=c_k$ satisfies $\varphi(x)=cxc^{-1}$ for all $x\in R$.
\end{proof}
We continue with a simple application
of Lemma \ref{l}, showing that local algebras
are SN. This result will be generalized
to semilocal algebras (with a considerably more involved proof)
in the next section, see Theorem \ref{thm:semiLoc}.
\begin{corollary}\label{t1}
Every local algebra is an SN algebra.
\end{corollary}
\begin{proof}
Let $r_k,s_{kl}$ be elements from Lemma \ref{l}. From (c) it follows that $$\sum_{k,l} r_{l}r_k\otimes s_{kl} = 1\otimes 1.$$
This implies that
$1$ lies in the linear span of the $s_{kl}$'s. Consequently, at least one $s_{kl}$ does not lie in ${\rm rad}(S)$. Since $S$ is local, it follows that this $s_{kl}$ is invertible in $S$.
As $S$ is stably finite \cite[Theorem 20.13]{Lam}, the last assertion of Lemma \ref{l} shows that there exists $c\in R\otimes S$ such that
$\varphi(x)=cxc^{-1}$ for all $x\in R$.
\end{proof}
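For example (a standard instance, recorded only for illustration), the formal power series algebra $F[[\xi]]$ is local: a power series is invertible if and only if its constant term is nonzero, so
$${\rm rad}(F[[\xi]])=\xi F[[\xi]]\quad\mbox{and}\quad F[[\xi]]/{\rm rad}(F[[\xi]])\cong F.$$
Hence $F[[\xi]]$ is an SN algebra by Corollary \ref{t1}.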
\section{Semilocal algebras}\label{s:semiloc}
The main result of this section is Theorem
\ref{thm:semiLoc} showing that
semilocal algebras are SN. We begin with a simple lemma.
\begin{lemma}\label{prod}
If $S_1$ and $S_2$ are SN algebras, then so is their direct product $S_1\times S_2$.
\end{lemma}
\begin{proof}
Recall that $R\otimes (S_1\times S_2)$ can be identified with $(R\otimes S_1)\times (R\otimes S_2)$. Take a homomorphism $$\varphi:R\to (R\otimes S_1)\times (R\otimes S_2).$$
Writing $$\varphi(x)=(\varphi_1(x),\varphi_2(x))$$ it is immediate that $\varphi_i$ is a homomorphism from $R$ into $R\otimes S_i$, $i=1,2$. By assumption, there exist $c_i\in R\otimes S_i$ such that
$\varphi_i(x)= c_ixc_i^{-1}$ for all $x\in R$, $i=1,2$. Hence, $$c=(c_1,c_2)\in (R\otimes S_1)\times (R\otimes S_2)$$ satisfies $\varphi(x)=cxc^{-1}$ for all $x\in R$.
\end{proof}
As mentioned in the introduction, the Skolem-Noether theorem implies that every central simple algebra is an SN algebra.
With a little extra effort we can extend this to semisimple algebras.
\begin{lemma}\label{lsemi}
Every semisimple algebra is an SN algebra.
\end{lemma}
\begin{proof}
In view of Lemma \ref{prod} it suffices to consider the case where $S$ is simple artinian.
Let $R$ be a central simple algebra. The algebra $R\otimes S$
is then simple \cite[Theorem 4.42]{INCA}.
We claim that it is also artinian. Indeed, considering $R\otimes S$ as a left $S$-module in the natural way we see that
it is isomorphic to the left $S$-module $S^d$ where $d$ is the dimension of $R$, and that
a descending chain of left ideals of $R\otimes S$
is also a descending chain of left $S$-submodules. The desired conclusion thus follows from the fact that $S^d$ is artinian.
Let $K$ be the center of $S$.
The center of $R\otimes S$ is equal to $1\otimes K$ \cite[Corollary 4.32]{INCA}, which we identify with $K$. Consider $R\otimes K$ as an algebra over $K$ in the usual way. Clearly,
it is finite-dimensional and, again by \cite[Theorem 4.42]{INCA}, simple.
Now, given a homomorphism $\varphi:R\to R\otimes S$, we define $$\Phi:R\otimes K\to R\otimes S$$ by $$\Phi(x\otimes k)= \varphi(x)(1\otimes k).$$ Note that $\Phi$ is a $K$-algebra homomorphism. The Skolem-Noether theorem thus tells us that there exists $c\in R\otimes S$ such that
$\Phi(x\otimes k)= c(x\otimes k)c^{-1}$ for all $x\in R$, $k\in K$. Setting $k=1$ we get the desired conclusion.
\end{proof}
Our goal is to show that semilocal algebras are SN algebras by reducing the general case to the semisimple case. We will actually prove a general reduction theorem whose possible applications are not limited to semilocal algebras. To this end, we need the following lemma. From its nature one would expect that it is known, but we were unable to find a good reference.
We include a short proof for the sake of completeness.
\begin{lemma}\label{lemrad}
If $R$ is a central simple algebra, then ${\rm rad}(R\otimes S) = R\otimes {\rm rad}(S)$ for every algebra $S$.
\end{lemma}
\begin{proof}
As an ideal of $R\otimes S$, ${\rm rad}(R\otimes S)$ is necessarily of the form
$R\otimes I$ for some ideal $I$ of $S$ \cite[Theorem 4.42]{INCA}.
We will show that $I\subseteq {\rm rad}(S)$, by making use of the following characterization of rad$(A)$:
$v\in {\rm rad}(A)$ if and only if
$1- vx$ is invertible for every $x\in A$.
Take $u\in I$. Since $1\otimes u \in {\rm rad}(R\otimes S)$ it follows that
$$1\otimes (1-ux)=1\otimes 1 - 1\otimes ux = 1\otimes 1 - (1\otimes u)(1\otimes x)$$
is invertible in $R\otimes S$ for every $x\in S$. However, this is possible only if $1-ux$ is invertible, implying that $u\in {\rm rad}(S)$. Thus, $I\subseteq {\rm rad}(S)$,
and so $ {\rm rad}(R\otimes S)\subseteq R\otimes {\rm rad}(S)$.
As the lemma is well-known if $R=M_n(F)$ (see, e.g., \cite[pp. 57--58]{Lam}),
we will establish the converse inclusion by reducing the general case to this one.
Take a splitting field $K$ for $R$ which is a finite separable extension of $F$ (see, e.g., \cite[Proposition 4.5.4]{GS}). Then $K\otimes R$ may be identified with $M_n(K)$ for some $n\ge 1$, and, therefore, $K\otimes R\otimes S$ may be identified with $M_n(K\otimes S)$. Thus, by what we pointed out at the beginning of the paragraph, we have
$${\rm rad}(K\otimes R\otimes S) = M_n({\rm rad}(K\otimes S)).$$
By \cite[Theorem 5.17]{Lam}, ${\rm rad}(K\otimes S) = K\otimes {\rm rad}(S)$, so that
\begin{equation}\label{rs}{\rm rad}(K\otimes R\otimes S) = M_n(K)\otimes {\rm rad}(S).\end{equation}
According to \cite[Theorem 5.14]{Lam},
$${\rm rad}(R\otimes S) = (R\otimes S)\cap {\rm rad}(K\otimes R\otimes S),$$
and hence, by \eqref{rs},
$${\rm rad}(R\otimes S) = (R\otimes S)\cap (M_n(K)\otimes {\rm rad}(S)).$$
Since both $R\otimes S$ and $M_n(K)\otimes {\rm rad}(S)$ contain $R\otimes {\rm rad}(S)$,
it follows that $R\otimes {\rm rad}(S)\subseteq {\rm rad}(R\otimes S)$.
\end{proof}
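As a quick illustration (the choice of $R$ and $S$ is ours, made only for illustration), taking $R=M_n(F)$ and $S=F[[\xi]]$ in Lemma \ref{lemrad} gives
$${\rm rad}\big(M_n(F[[\xi]])\big) = M_n\big(\xi F[[\xi]]\big),$$
recovering the classical fact that the radical of a matrix algebra consists of the matrices with entries in the radical of the base algebra.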
We can now prove the announced reduction theorem.
\begin{theorem}\label{trad} If an algebra $S$ is stably finite and $S/{\rm rad}(S)$ is an SN algebra, then
$S$ is an SN algebra.
\end{theorem}
\begin{proof}
Let $R$ be central simple and
write $$J=R\otimes {\rm rad}(S).$$
Take a homomorphism $\varphi:R\to R\otimes S$. We define
$$\Phi:R\to (R\otimes S)/J$$ by
$$\Phi(x)=\varphi(x)+J.$$
Since $(R\otimes S)/J$ is canonically isomorphic to $R\otimes (S/{\rm rad}(S))$, and $S/{\rm rad}(S)$ is assumed to be an SN algebra, it follows that
there exists an invertible element $a\in (R\otimes S)/J$ such that \begin{equation*}\label{bg}\Phi(x)= a(x+J)a^{-1}\,\,\mbox{ for all $x\in R$.}\end{equation*}
As $J$ is, by Lemma \ref{lemrad}, the Jacobson radical of $R\otimes S$, it follows that there exists an invertible element $b\in R\otimes S$ such that $a = b +J$. Obviously, we have
$$\varphi(x)-bxb^{-1}\in J\,\,\mbox{ for all $x\in R$,}$$
that is,
$$b^{-1}\varphi(x)b -x\in J\,\,\mbox{ for all $x\in R$.}$$
Replacing $\varphi$ by the homomorphism $x\mapsto b^{-1}\varphi(x)b$ we see that without loss of generality we may assume that $b=1$. Thus,
\begin{equation}\label{kk}\varphi(x) -x\in J\,\,\mbox{ for all $x\in R$.}\end{equation}
Now apply Lemma \ref{l}. Picking a basis $\{r_1,\dots,r_d\}$ of $R$, we can thus find $s_{kl}\in S$, $k,l=1,\dots,d$, such that
\begin{equation}\label{ll}\varphi(x)=\sum_{k=1}^d \sum_{l=1}^d r_l x r_k \otimes s_{kl} \,\,\mbox{ for all $x\in R$,}\end{equation}
and our goal is to show that at least one $s_{kl}$ is invertible in $S$.
Let $\lambda_k\in F$ be such that $1=\sum_{k=1}^d {\lambda_k}r_k$. Then
$$x=1\cdot x\cdot 1 = \sum_{k=1}^d \sum_{l=1}^d (\lambda_k\lambda_l) r_l x r_k \,\,\mbox{ for all $x\in R$}.$$
Using \eqref{kk} and \eqref{ll} we thus obtain
\begin{equation}\label{ro}
\sum_{k=1}^d \sum_{l=1}^d r_l x r_k \otimes (s_{kl} - \lambda_k\lambda_l\cdot 1)\in J \,\,\mbox{ for all $x\in R$}.
\end{equation}
We may assume that $\lambda_1\ne 0$. Choose $w_j,z_j\in R$ that satisfy \eqref{oh}.
Denoting the expression in \eqref{ro} by $\rho(x)$, we have
\begin{align*}
\sum_j w_j\rho(z_j) &= \sum_j \sum_{k=1}^d \sum_{l=1}^d w_jr_l z_j r_k \otimes (s_{kl} - \lambda_k\lambda_l \cdot1)\\
&= \sum_{k=1}^d \sum_{l=1}^d \Big(\sum_j w_jr_l z_j \Big) r_k \otimes (s_{kl} - \lambda_k\lambda_l\cdot 1)\\
&= \sum_{k=1}^d r_k \otimes (s_{k1} - \lambda_k\lambda_1 \cdot1).
\end{align*}
As $\rho$ maps into $J$ it follows that
$$\sum_{k=1}^d r_k \otimes (s_{k1} - \lambda_k\lambda_1 \cdot 1)\in J =R\otimes {\rm rad}(S).$$
Since the $r_k$'s are linearly independent, we must have $s_{k1} - \lambda_k\lambda_1 \cdot 1 \in {\rm rad}(S)$
for each $k$. In particular, $s_{11} = \lambda_1^2\cdot 1 + u$ for some $u\in {\rm rad}(S)$. Since $\lambda_1\ne 0$ it follows that $s_{11}$ is invertible, as desired.
\end{proof}
We are now in a position to give our main result.
\begin{theorem}\label{thm:semiLoc}
Every semilocal algebra $S$ is an SN algebra.
\end{theorem}
\begin{proof}Since $S$ is stably finite \cite[Theorem 20.13]{Lam} and the algebra $S/{\rm rad}(S)$ is semisimple,
the theorem follows from Lemma \ref{lsemi} and Theorem \ref{trad}.
\end{proof}
\begin{corollary} \label{art}
Every artinian algebra is an SN algebra.
\end{corollary}
\section{Finite-dimensional algebras}\label{s:findim}
Corollary \ref{art} shows that every finite-dimensional algebra is an SN algebra.
The next result gives a strengthening of this property.
\begin{theorem}\label{tfd}
Let $A$ be a finite-dimensional algebra and let $R$ be its central simple subalgebra. Then every homomorphism from $R$ into $A$ can be extended to an inner automorphism of $A$.
\end{theorem}
\begin{proof}
Assume first that $R=M_n(F)$. Then $A$ contains a set of $n\times n$ matrix units and is therefore isomorphic to $M_n(S)\cong R\otimes S$ for some subalgebra $S$ of $A$ \cite[Lemma 2.52]{INCA}. Since $S$ is also finite-dimensional, the desired conclusion follows from Corollary \ref{art}.
Now let $R$ be an arbitrary central simple algebra. We may assume that the field $F$ is infinite, for otherwise $R\cong M_n(F)$ by Wedderburn's theorem on finite division rings.
Let
$\varphi$ be a homomorphism from $R$ into $A$.
Take a splitting field $K$ for $R$. Identifying $K\otimes R$ with $M_n(K)$, $n \ge 1$, it follows from the preceding paragraph that there exists
$b\in K\otimes A$ such that \begin{equation*} \label{y} ({\rm id}_K\otimes \varphi)(y) = byb^{-1}\end{equation*} for all $y\in K\otimes R$.
In particular, $$(1\otimes \varphi(x))b = b(1\otimes x)$$ for all $x\in R$.
Writing $b=\sum_{i=1}^m k_i\otimes a_i$ with the $k_i$'s linearly independent, it follows that
$$\sum_{i=1}^m k_i \otimes (\varphi(x)a_i - a_ix)=0,$$
and so
$\varphi(x)a_i = a_ix$
for all $x\in R$ and every $i$. Hence we see that it suffices to show that span$_F\{a_1,\dots,a_m\}$ contains an element which is invertible in $A$.
As a finite-dimensional algebra, $A$ can be considered as a subalgebra of $M_N(F)$ for some $N\ge 1$. Take the polynomial
$$f(\xi_1,\dots,\xi_m)=\det\Big(\sum_{i=1}^m \xi_i a_i\Big) \in F[\xi_1,\dots,\xi_m].$$
Note that $K\otimes A$ can be viewed as a subalgebra of $M_N(K)$.
Since $b$ is invertible in $K\otimes A$, we know that span$_K\{a_1,\dots,a_m\}$ contains an invertible element in $K\otimes A$. This clearly implies that $f$ is a nonzero polynomial.
As $F$ is infinite, there exist $\lambda_i\in F$ such that $f(\lambda_1,\dots,\lambda_m)\ne 0$. That is, span$_F\{a_1,\dots,a_m\}$ contains an element $c$ which is invertible in $M_N(F)$. However,
since we are in finite dimensions, $c^{-1}$ is a polynomial in $c$. Thus, $c$ is invertible in $A$.
\end{proof}
Using the standard homomorphism construction we will now see that Theorem \ref{tfd} implies that all derivations from $R$
into any $R$-bimodule $M$ are inner (in accordance with the conventions mentioned at the very beginning of the paper, we assume that our bimodules are unital).
This is, of course, a well-known result. Another way of stating it is that central simple algebras are separable.
\begin{corollary}\label{cfd}
Every derivation from a central simple algebra $R$ into an arbitrary $R$-bimodule $M$ is inner.
\end{corollary}
\begin{proof} Let $d:R\to M$ be a derivation.
As a finite-dimensional subspace of $M$, $d(R)$ generates a finite-dimensional subbimodule of $M$. Therefore, there is no loss of generality in assuming that $M$ itself is finite-dimensional.
Let $\widetilde{A}$ be the set of all matrices of the form $\left[
\begin{smallmatrix}
x & u\\
0&x
\end{smallmatrix}\right]$, where $x\in R$ and $u\in M$. Note that $\widetilde{A}$ is a (finite-dimensional!) algebra under the standard matrix operations.
Let $\widetilde{R}$ be its subalgebra consisting of all matrices of the form $\left[
\begin{smallmatrix}
x & 0\\
0&x
\end{smallmatrix}\right]$, $x\in R$. Of course, $\widetilde{R}\cong R$.
Define $\varphi:\widetilde{R}\to \widetilde{A}$ by
$$\varphi\left(\left[
\begin{matrix}
x & 0\\
0&x
\end{matrix}\right]\right)=\left[
\begin{matrix}
x & d(x)\\
0&x
\end{matrix}\right].$$
One immediately checks that $\varphi$ is a homomorphism. By Theorem \ref{tfd} there exists an invertible element $c= \left[
\begin{smallmatrix}
t & v\\
0&t
\end{smallmatrix}\right] \in \widetilde{A}$ such that $\varphi(\tilde{x})= c\tilde{x}c^{-1}$ for all $\tilde{x}\in\widetilde{R}$. Consequently, $\varphi(\tilde{x})c= c\tilde{x}$, that is,
$$\left[
\begin{matrix}
x & d(x)\\
0&x
\end{matrix}\right]\cdot \left[
\begin{matrix}
t & v\\
0&t
\end{matrix}\right] = \left[
\begin{matrix}
t & v\\
0&t
\end{matrix}\right]\cdot \left[
\begin{matrix}
x & 0\\
0&x
\end{matrix}\right]$$
for all $x\in R$. This yields $$xt = tx\,\,\,\mbox{ and}\,\,\, xv + d(x)t = vx$$ for all $x\in R$. Since $R$ is central, the first identity shows that $t\in F$. Moreover, $t\ne 0$ for $c$ is invertible. Hence $w=t^{-1}v$ satisfies $d(x) = wx - xw$ by the second identity.
\end{proof}
\section{Domains}\label{s:dom}
In this section we give classes of domains which are SN algebras. For instance,
UFDs and free algebras are SN algebras (Corollaries \ref{ufd} and \ref{cor:free}).
As the coordinate ring of an elliptic curve demonstrates (see Example \ref{ex:elliptic}),
not every commutative domain is an SN algebra.
However, the following proposition shows that
every domain $S$ embedded into a division algebra satisfies
a certain weaker condition.
\begin{proposition}\label{p}
Let $R$ be a central simple algebra, and let an algebra $S$ be a domain which can be embedded into a division algebra $D$. If $\varphi$ is a homomorphism from $R$ into
$R\otimes S$, then there exists $c\in R\otimes S$ which is invertible in $R\otimes D$, in fact $$c^{-1} = (1\otimes s^{-1})b\in R\otimes D$$ for some nonzero $s\in S$ and $b\in R\otimes S$, such that
$$\varphi(x)=cxc^{-1}$$ for all $x\in R$. Moreover, if $\{r_1,\dots,r_d\}$ is a basis of $R$, $b=\sum_{k=1}^d
r_k\otimes s_k$ for some $s_k\in S$, and
$c=\sum_{l=1}^d r_l\otimes t_l$ for some $t_l\in S$, then $$t_ls^{-1}s_k\in S$$ for all $k$ and $l$.
\end{proposition}
\begin{proof}
Not every $c_k$ from Lemma \ref{l} can be $0$ (in view of (c)), and so $s_{kl}\ne 0$ for some $k$ and $l$.
Set $c=c_k$ and $s=s_{kl}$. By the lemma we have $\varphi(x)c=cx$ for all $x\in R$ and $bc=1\otimes s$ for some
$b\in R\otimes S$. Of course, $s$ is invertible in $D$. Therefore $(1\otimes s^{-1})b$ is a left inverse of $c$ in $R\otimes D$. By \cite[Theorem 1]{Mon}, a left inverse in $R\otimes D$ is also a right inverse, so $c^{-1}=(1\otimes s^{-1})b$.
Now take a basis $\{r_1,\dots,r_d\}$ of $R$, and let us write
$b=\sum_{k=1}^d r_k\otimes s_k$ and
$c=\sum_{l=1}^d r_l\otimes t_l$. Then
$$\varphi(x)= cxc^{-1}=\sum_{k=1}^d\sum_{l=1}^d r_lxr_k\otimes t_ls^{-1}s_k$$
for all $x\in R$. Pick $w_j,z_j\in R$ satisfying \eqref{oh}.
We have
\begin{align*}
\sum_j w_j\varphi(z_j) &= \sum_j \sum_{k=1}^d \sum_{l=1}^d w_jr_l z_j r_k \otimes t_ls^{-1}s_k\\
&= \sum_{k=1}^d \sum_{l=1}^d \Big( \sum_j w_jr_l z_j \Big)r_k \otimes t_ls^{-1}s_k\\
&= \sum_{k=1}^d r_k \otimes t_1s^{-1}s_k.
\end{align*}
Since the left-hand side, i.e., $\sum_j w_j\varphi(z_j)$, lies in $R\otimes S$, so does the right-hand side. This readily yields that
$t_1s^{-1}s_k\in S$ for every $k$. Since the elements $w_j,z_j$ can be chosen analogously for any index $l$ in place of $1$, the same argument shows that $t_ls^{-1}s_k\in S$ for all $k$ and $l$.
\end{proof}
\begin{corollary}\label{ufd}
Every UFD is an SN algebra.
\end{corollary}
\begin{proof}
Let $S$ be a UFD and let $R$, $\varphi$, $b$, $c$, $s$, $s_k$, $t_l$
be as in Proposition \ref{p}. Since $S$ is a UFD, $t_1,\dots,t_d$ have a greatest common divisor $e$; replacing $c$ with $(1\otimes e^{-1})c$, we may without loss of generality assume that $t_1,\dots,t_d$ are coprime. It then suffices to prove that $s^{-1}s_k\in S$ for every $k$. Since $t_ls^{-1}s_k\in S$ for all $k,l$, we see that $s$ divides $t_ls_k$ for all $l,k$. Let $p$ be a prime and $n\ge 1$ such that $p^n$ divides $s$. Suppose that $p^n$ does not divide $s_{k_0}$ for some $k_0$. Since $p^n$ divides $t_ls_{k_0}$ for every $l$, we conclude that $p$ divides $t_l$ for every $l$, which contradicts the assumption that $t_1,\dots,t_d$ are coprime. Hence $s$ divides $s_k$ for every $k$, so $c^{-1}=(1\otimes s^{-1})b\in R\otimes S$ and $S$ is an SN algebra.
\end{proof}
We now move to the noncommutative setting. Since embeddings of noncommutative domains into division rings can be ill-behaved or nonexistent, one needs stronger assumptions than in Corollary \ref{ufd}. Let $S$ be an arbitrary ring. The {\bf inner rank} of $A\in S^{m\times n}$ is the least $r$ such that $A=BC$ for some $B\in S^{m\times r}$ and $C\in S^{r\times n}$. We write $\rho A=r$. For example, if $S$ is a division ring, then $\rho A$ is just the rank of $A$. We say that $S$ is a {\bf Sylvester domain} \cite[Section 5.5]{Coh} if for any $P\in S^{\ell\times m}$ and $Q\in S^{m\times n}$ such that $PQ=0$, it follows that $\rho P+\rho Q\le m$.
We say that an element $s\in S$ {\bf right divides} $a\in S$ if $a=a's$ for some $a'\in S$. If $S$ is a domain and $a,b\in S\setminus\{0\}$, then $s$ is a {\bf highest common right factor} (HCRF) of $a$ and $b$ if $s$ right divides $a,b$ and every $s'\in S$ that right divides $a,b$ also right divides $s$. We say that $S$ is an {\bf HCRF domain} if every pair of nonzero elements in $S$ admits an HCRF. Special examples of HCRF domains are filtered rings satisfying the 2-term weak algorithm \cite[Section 2.8]{Coh} or more generally, 2-firs with right ACC$_1$ (ascending chain condition on principal right ideals) \cite[Exercise 3.2.1]{Coh}.
\begin{theorem}\label{t:sylv}
If $S$ is an HCRF domain and a Sylvester domain, then $S$ is an SN algebra.
\end{theorem}
\begin{proof}
Since $S$ is a Sylvester domain, it admits a universal skew field of fractions $D$ and this embedding preserves the inner rank by \cite[Theorem 7.5.13]{Coh}. Let $R$ be a central simple algebra and $\varphi:R\to R\otimes S$ a homomorphism. By Proposition \ref{p} there exists $c\in R\otimes S$ invertible in $R\otimes D$ such that $\varphi(x)=cxc^{-1}$ for all $x\in R$. Furthermore, if $\{r_1,\dots,r_d\}$ is a basis of $R$ and
$$c=\sum_l r_l\otimes t_l,\qquad c^{-1} =\sum_k r_k\otimes u_k$$
for $t_l\in S$ and $u_k\in D$, then $t_lu_k=s_{lk}\in S$ for all $1\le l,k\le d$ (here $u_k=s^{-1}s_k$ from Proposition \ref{p}). Since $S$ is an HCRF domain, we can assume that $t_1,\dots,t_d$ have no non-trivial common right factors (otherwise they have a nontrivial HCRF $e$ and we can replace $c$ with $c(1\otimes e^{-1})$). Fix $k$ such that $u_k\neq 0$. Then $(u_k,-1)^{\rm t}\in D^2$ belongs to the right kernel of the matrix
$$\begin{pmatrix}
t_1 & s_{1k} \\
\vdots & \vdots \\
t_d & s_{dk}
\end{pmatrix}\in S^{d\times 2}$$
which is therefore of (inner) rank $1$ over $D$. Since the embedding $S\subseteq D$ is inner rank preserving, this matrix is also of inner rank 1 over $S$, so
$$\begin{pmatrix}
t_1 & s_{1k} \\
\vdots & \vdots \\
t_d & s_{dk}
\end{pmatrix}=
\begin{pmatrix}
v_1 \\ \vdots \\ v_d
\end{pmatrix}
\begin{pmatrix}
w_1 & w_2
\end{pmatrix}$$
for some $v_i,w_j\in S$. Since $w_1$ right divides $t_l$ for every $l$ and $t_1,\dots,t_d$ are right coprime by assumption, we conclude that $w_1$ is invertible in $S$. By taking some $t_l\neq0$ we get
$$u_k=t_l^{-1}s_{lk}=w_1^{-1}v_l^{-1}v_lw_2=w_1^{-1}w_2\in S.$$
Consequently $c^{-1}\in R\otimes S$.
\end{proof}
\begin{corollary}\label{cor:free}
Every free algebra $F\langle X\rangle$ is an SN algebra.
\end{corollary}
\begin{proof} A free algebra is a filtered ring with a weak algorithm \cite[Theorem 2.5.3]{Coh}, so it is an HCRF domain and a fir (free ideal ring) by \cite[Theorem 2.4.6]{Coh}, and hence a Sylvester domain by \cite[Proposition 5.5.1]{Coh}.
\end{proof}
Theorem \ref{t:sylv} has the following form for commutative rings.
\begin{corollary}\label{bez}
Every B\'ezout domain is an SN algebra.
\end{corollary}
\begin{proof}
Every B\'ezout domain is a GCD domain, which is just a commutative HCRF domain. Moreover, by \cite[Proposition 2.3.17]{Coh} it is also a semifir and hence a Sylvester domain by \cite[Proposition 5.5.1]{Coh}. Therefore Theorem \ref{t:sylv} applies.
\end{proof}
In the next example we present a domain that is not an SN algebra;
cf.~\cite[Theorem 15]{RZ}.
\begin{example}\label{ex:elliptic}
Let $S=F[x,y]/(y^2-x^3-x)$. Then $S$ is a domain,
$$a=\begin{pmatrix}y & x \\ x^2 & y\end{pmatrix}\in M_2(S)$$
is invertible as a matrix over the field of fractions of $S$ and
$$a^{-1}=\begin{pmatrix}\frac{y}{x} & -1 \\ -x & \frac{y}{x}\end{pmatrix}.$$
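Note that, by the relation $y^2=x^3+x$ in $S$,
$$y\cdot\frac{y}{x}=\frac{x^3+x}{x}=x^2+1\in S,\qquad x\cdot\frac{y}{x}=y\in S,\qquad x^2\cdot\frac{y}{x}=xy\in S.$$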
Since every product of an entry in $a$ and an entry in $a^{-1}$ lies in $S$, it follows that
$$\varphi\colon M_2(F)\to M_2(F)\otimes S,\qquad u\mapsto aua^{-1}$$
is a well-defined homomorphism. Suppose there exists an invertible $c\in M_2(S)$ such that $\varphi(u)=cuc^{-1}$ for all $u\in M_2(F)$. Then $\gamma=\det(c)$ is invertible in $S$ and it is easy to see that this implies $\gamma\in F\setminus\{0\}$. Since $c^{-1}a$ commutes with every $u\in M_2(F)$ by the definition of $\varphi$, we have $c^{-1}a=f I_2$ for some $f\in S$. But then
$$\gamma f^2=\gamma \det(c^{-1}a)=\det(a)= x$$
contradicts the irreducibility of $x$ in $S$.
\end{example}
\section{Polynomial matrix algebras}\label{s:matrixpoly}
In this section we prove that matrix algebras over polynomial algebras are SN.
\begin{theorem}\label{t:matrixpoly}
$M_n(F[\xi_1,\dots,\xi_s])$ is an SN algebra.
\end{theorem}
Besides the Quillen-Suslin theorem \cite{Q,Sus}, which states that every finitely generated projective module over $F[\xi_1,\dots,\xi_s]$ is free, the proof of this theorem mostly relies on the following simple factorization lemma.
\begin{lemma}
\label{l:denominators}
Let $A$ be a commutative algebra, which is a domain with field of fractions $K$, $R$ a central simple algebra and $a\in R\otimes M_n(A)$.
Suppose that $a$ is invertible in $R\otimes M_n(K)$ and that $a(x\otimes 1)a^{-1}\in R\otimes M_n(A)$ for all $x\in R$. Then for $c\in M_n(A)$ the following are equivalent:
\begin{enumerate}
\item[(i)] There exists a factorization $a=u(1\otimes c)$ for some invertible $u\in R\otimes M_n(A)$;
\item[(ii)] The left ideal
\[
I(a):=\{\, m\in M_n(A) \mid (1\otimes m)a^{-1}\in R\otimes M_n(A)\,\}\subseteq M_n(A)
\]
is generated by $c$;
\item[(iii)] The rows of $c$ form a basis of the $A$-module
\[
M(a):=\{\, r\in A^{1\times n} \mid (1\otimes r)a^{-1}\in R\otimes A^{1\times n}\,\}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
(i)$\Rightarrow$(ii): Let $u\in R\otimes M_n(A)$ be invertible such that $a=u(1\otimes c)$. Then for any $m\in M_n(A)$ we have that $(1\otimes m)a^{-1}=(1\otimes mc^{-1})u^{-1}$ lies in $R\otimes M_n(A)$ if and only if $mc^{-1}\in M_n(A)$, i.e., $m\in M_n(A)c$.
(ii)$\Rightarrow$(i):
Let $\{r_1,\dots,r_d\}$ be a basis of $R$ and for fixed $i$ take $w_k,z_k\in R$ such that $\sum_k w_k r_j z_k=\delta_{ij}$, see \eqref{oh} at the beginning of Section \ref{s:prelim}. If $a=\sum_jr_j\otimes a_j$, then
\[
(1\otimes a_i)a^{-1}=\sum_k (w_k\otimes 1)a(z_k\otimes 1)a^{-1}\in R\otimes M_n(A).
\]
Since $i$ was arbitrary, this shows that all coefficients $a_i$ of $a$ lie in $I(a)$. Assuming that $I(a)=M_n(A)c$, we can thus factor $a=u(1\otimes c)$ for some $u\in R\otimes M_n(A)$. But then $u^{-1}=(1\otimes c)a^{-1}\in R\otimes M_n(A)$ because $c\in I(a)$, i.e., $u$ is invertible in $R\otimes M_n(A)$.
(ii)$\Rightarrow$(iii): Suppose $I(a)=M_n(A)c$. Since $a$ is invertible over $K$, there exists a nonzero $e\in A$ such that $e \cdot 1\in I(a)$. Then $e\cdot 1=mc$ for some $m\in M_n(A)$, so $c$ is invertible over $K$ and the rows of $c$ are linearly independent. Given any $r\in M(a)$, we can extend $r$ by zero to form a matrix $m'\in I(a)$ which has $r$ as one of its rows. By assumption $m'\in M_n(A)c$. In particular, $r$ is a linear combination of the rows of $c$, which shows that they form a basis of $M(a)$.
(iii)$\Rightarrow$(ii): Conversely, suppose that the rows of $c$ form a basis of $M(a)$. In particular, $c\in I(a)$. Moreover, for any $m\in I(a)$ the rows of $m$ lie in $M(a)$ and are, therefore, linear combinations of the rows of $c$, which implies that $m\in M_n(A)c$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t:matrixpoly}]
Let $A:= F[\xi_1,\dots,\xi_s]$, $K$ its field of fractions, $R$ a central simple algebra and $\varphi\colon R\to R\otimes M_n(A)$ a homomorphism. Since $M_n(K)$ is SN, there exists (after clearing denominators) $a\in R\otimes M_n(A)$, invertible in $R\otimes M_n(K)$, such that $\varphi(x)=a(x\otimes 1)a^{-1}$ for all $x\in R$.
Fix any prime ideal $P$ of $A$. Then $M_n(A_P)/{\rm rad}(M_n(A_P))$ is canonically isomorphic to the simple algebra $M_n(F)\otimes A_P/PA_P$, see Lemma~\ref{lemrad}. Therefore, $M_n(A_P)$ is semilocal and by Theorem~\ref{thm:semiLoc} it is also an SN algebra.
It follows that there exists an invertible $u_P\in R\otimes M_n(A_P)$ such that $\varphi(x)=u_P(x\otimes 1)u_P^{-1}$. Then $u_P^{-1}a$ commutes with all elements of $R\otimes 1$ and thus lies in $1\otimes M_n(A_P)$. This means that $a$ can be factored as $a=u_P(1\otimes c_P)$ for some $c_P\in M_n(A_P)$. By Lemma~\ref{l:denominators}, this implies that the $A_P$-module $A_PM(a)$ is free of rank $n$. Since the prime ideal $P$ was arbitrary, this shows that $M(a)$ is locally free of rank $n$. As being projective is a local property, this implies that $M(a)$ is projective. By the Quillen-Suslin theorem $M(a)$ is free of rank $n$. We choose $c\in M_n(A)$ such that its rows form a basis of $M(a)$. Then again from Lemma~\ref{l:denominators}, we get a factorization $a=u(1\otimes c)$ where $u\in R\otimes M_n(A)$ is invertible. Now $\varphi(x)=u(x\otimes 1)u^{-1}$ for all $x\in R$.
\end{proof}
\section{Formal power series}\label{s:poly}
The aim of this section is to show that the property of being an SN algebra transfers from $S$ to the formal power series algebra $S[[\xi]]$.
\begin{theorem}\label{pow}
$S$ is an SN algebra if and only if $S[[\xi]]$ is an SN algebra.
\end{theorem}
\begin{proof}
$(\Rightarrow)$ Let $R$ be a central simple algebra and let $\varphi:R\to R\otimes S[[\xi]]$ be a homomorphism. Since $R$ is finite-dimensional, we can identify $R\otimes S[[\xi]]$ with $(R\otimes S)[[\xi]]$ and write
$$\varphi(x)= \varphi_0(x) + \varphi_1(x)\xi+ \varphi_2(x)\xi^2 + \dots$$
where $\varphi_i:R\to R\otimes S$. Note that $\varphi_0$ is an algebra homomorphism. By assumption, there exists an invertible element $a\in R\otimes S$ such that
$\varphi_0(x)=axa^{-1}$ for all $x\in R$. Considering the map $x\mapsto a^{-1}\varphi(x)a$
we see that without loss of generality we may assume that $\varphi_0(x)=x$ for all $x\in R$, so that
\begin{equation}\label{ena1}\varphi(x)= x + \varphi_1(x)\xi+ \varphi_2(x)\xi^2 + \dots\end{equation}
Now apply Lemma \ref{l}. Thus, let $\{r_1,\dots,r_d\}$ be a basis of $R$ and let $c_1,\dots,c_d\in R\otimes S[[\xi]]$ be such that \begin{equation}\label{ena2}
\sum_{k=1}^d c_kr_k =1\,\,\,\mbox{ and
}\,\,\,\varphi(x)c_k = c_kx\end{equation} for all $x\in R$ and all $k$. Writing $$c_k = \sum_{j=0}^\infty c_{kj}\xi^j,$$
where $c_{kj}\in R\otimes S$, it follows from \eqref{ena1} and \eqref{ena2} that
\begin{equation}\label{ena3}xc_{k0} = c_{k0}x\end{equation}
for all $x\in R$. Let us write $c_{k0} = \sum_j p_{kj}\otimes s_{kj}$
with the $s_{kj}$'s linearly independent. From \eqref{ena3} we infer that $$\sum_j (xp_{kj} - p_{kj}x)\otimes s_{kj} =0$$
for all $x\in R$, yielding $xp_{kj} - p_{kj}x=0$. Since $R$ is central this means that each $p_{kj}$ is a scalar multiple of $1$. Accordingly, each $c_{k0}$ is of the form
$1\otimes t_k$ for some $t_k\in S$. From the first identity in \eqref{ena2} one easily deduces that $\sum_{k=1}^d r_k \otimes t_k = 1\otimes 1$. Writing $1=\sum_{k=1}^d\lambda_k r_k$, where $\lambda_k\in F$, it follows that $t_k=\lambda_k 1$. We may assume that $\lambda_1\ne 0$. Accordingly, $c_{10}$ is a nonzero scalar multiple of unity of $R\otimes S$, implying that $c_1$ is invertible in $(R\otimes S)[[\xi]]$. Applying \eqref{ena2} we arrive at $\varphi(x)= c_1xc_1^{-1}$ for all $x\in R$.
$(\Leftarrow)$ Straightforward; more generally, the SN property is clearly preserved by retractions, and $S$ is a retract of $S[[\xi]]$ via the evaluation $\xi\mapsto 0$. Here an algebra $S'$ is a retract of $S$ if $S'\subseteq S$ and there exists a homomorphism $\pi:S\to S'$ that restricts to the identity map on $S'$.
\end{proof}
Q: Sliding Menu - How to add an activity? I am trying out the sliding menu and so far it works. Now what I want is to associate an activity with the Fragment, which I don't know how to do. This is what I have:
1) A sliding menu with a few options
2) Upon selecting one of the options, it now loads an activity (a list activity) which I have. The problem is that when it loads that activity, I am no longer able to pull out the sliding menu (I have to press back to get back to the main screen). I know this is probably because I am not loading my activity as a fragment but instead as a ListActivity.
So how do I convert my ListActivity into a fragment?
My code (in MainActivity, to select that option):
Intent intent = new Intent(MainActivity.this, MListActivity.class);
In MListActivity.class, which lists all the items:
public class MListActivity extends ListActivity {
Appreciate your help. Many thanks.
My MainActivity:
public class MainActivity extends Activity {
private DrawerLayout mDrawerLayout;
private ListView mDrawerList;
private ActionBarDrawerToggle mDrawerToggle;
// nav drawer title
private CharSequence mDrawerTitle;
// used to store app title
private CharSequence mTitle;
// slide menu items
private String[] navMenuTitles;
private TypedArray navMenuIcons;
private ArrayList<NavDrawerItem> navDrawerItems;
private NavDrawerListAdapter adapter;
private String RSSFEEDURL = "https://sites.google.com/site/fappweb/Topmovies.xml";
RSSFeed feed;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mTitle = mDrawerTitle = getTitle();
// load slide menu items
navMenuTitles = getResources().getStringArray(R.array.nav_drawer_items);
// nav drawer icons from resources
navMenuIcons = getResources()
.obtainTypedArray(R.array.nav_drawer_icons);
mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
mDrawerList = (ListView) findViewById(R.id.list_slidermenu);
navDrawerItems = new ArrayList<NavDrawerItem>();
// adding nav drawer items to array
// Home
navDrawerItems.add(new NavDrawerItem(navMenuTitles[0], navMenuIcons.getResourceId(0, -1)));
// Find People
navDrawerItems.add(new NavDrawerItem(navMenuTitles[1], navMenuIcons.getResourceId(1, -1)));
// Photos
navDrawerItems.add(new NavDrawerItem(navMenuTitles[2], navMenuIcons.getResourceId(2, -1), true, "30"));
// Communities, Will add a counter here
navDrawerItems.add(new NavDrawerItem(navMenuTitles[3], navMenuIcons.getResourceId(3, -1), true, "22"));
// Pages
navDrawerItems.add(new NavDrawerItem(navMenuTitles[4], navMenuIcons.getResourceId(4, -1)));
// What's hot, We will add a counter here
navDrawerItems.add(new NavDrawerItem(navMenuTitles[5], navMenuIcons.getResourceId(5, -1), true, "50+"));
// Recycle the typed array
navMenuIcons.recycle();
mDrawerList.setOnItemClickListener(new SlideMenuClickListener());
// setting the nav drawer list adapter
adapter = new NavDrawerListAdapter(getApplicationContext(),
navDrawerItems);
mDrawerList.setAdapter(adapter);
// enabling action bar app icon and behaving it as toggle button
getActionBar().setDisplayHomeAsUpEnabled(true);
getActionBar().setHomeButtonEnabled(true);
mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout,
R.drawable.ic_drawer, //nav menu toggle icon
R.string.app_name, // nav drawer open - description for accessibility
R.string.app_name // nav drawer close - description for accessibility
) {
public void onDrawerClosed(View view) {
getActionBar().setTitle(mTitle);
// calling onPrepareOptionsMenu() to show action bar icons
invalidateOptionsMenu();
}
public void onDrawerOpened(View drawerView) {
getActionBar().setTitle(mDrawerTitle);
// calling onPrepareOptionsMenu() to hide action bar icons
invalidateOptionsMenu();
}
};
mDrawerLayout.setDrawerListener(mDrawerToggle);
if (savedInstanceState == null) {
// on first time display view for first nav item
displayView(0);
}
}
/**
* Slide menu item click listener
* */
private class SlideMenuClickListener implements
ListView.OnItemClickListener {
@Override
public void onItemClick(AdapterView<?> parent, View view, int position,
long id) {
// display view for selected nav drawer item
displayView(position);
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// toggle nav drawer on selecting action bar app icon/title
if (mDrawerToggle.onOptionsItemSelected(item)) {
return true;
}
// Handle action bar actions click
switch (item.getItemId()) {
case R.id.action_settings:
return true;
default:
return super.onOptionsItemSelected(item);
}
}
/**
* Called when invalidateOptionsMenu() is triggered
*/
@Override
public boolean onPrepareOptionsMenu(Menu menu) {
// if nav drawer is opened, hide the action items
boolean drawerOpen = mDrawerLayout.isDrawerOpen(mDrawerList);
menu.findItem(R.id.action_settings).setVisible(!drawerOpen);
return super.onPrepareOptionsMenu(menu);
}
/**
* Displaying fragment view for selected nav drawer list item
* */
private void displayView(int position) {
// update the main content by replacing fragments
Fragment fragment = null;
switch (position) {
case 0:
fragment = new HomeFragment();
break;
case 1:
fragment = new FindPeopleFragment();
break;
case 2:
//fragment = new PhotosFragment();
Intent intent = new Intent(MainActivity.this, MListActivity.class);
startActivity(intent);
break;
case 3:
fragment = new CommunityFragment();
break;
case 4:
fragment = new PagesFragment();
break;
case 5:
fragment = new WhatsHotFragment();
break;
default:
break;
}
if (fragment != null) {
FragmentManager fragmentManager = getFragmentManager();
fragmentManager.beginTransaction()
.replace(R.id.frame_container, fragment).commit();
// update selected item and title, then close the drawer
mDrawerList.setItemChecked(position, true);
mDrawerList.setSelection(position);
setTitle(navMenuTitles[position]);
mDrawerLayout.closeDrawer(mDrawerList);
} else {
// error in creating fragment
Log.e("MainActivity", "Error in creating fragment");
}
}
@Override
public void setTitle(CharSequence title) {
mTitle = title;
getActionBar().setTitle(mTitle);
}
/**
* When using the ActionBarDrawerToggle, you must call it during
* onPostCreate() and onConfigurationChanged()...
*/
@Override
protected void onPostCreate(Bundle savedInstanceState) {
super.onPostCreate(savedInstanceState);
// Sync the toggle state after onRestoreInstanceState has occurred.
mDrawerToggle.syncState();
}
@Override
public void onConfigurationChanged(Configuration newConfig) {
super.onConfigurationChanged(newConfig);
// Pass any configuration change to the drawer toggle
mDrawerToggle.onConfigurationChanged(newConfig);
}
}
What I did was to duplicate the main activity and rename it SlideActivity. Then I made my MListActivity extend SlideActivity as follows:
public class MListActivity extends SlideActivity {
But I got the following error when I try to start my MListActivity:
05-25 18:20:20.404: E/FragmentManager(28565): No view found for id 0x7f0a0001 (com.fpbbnlsly.worldv:id/frame_container) for fragment HomeFragment{42a29d40 #0 id=0x7f0a0001}
A: The sliding menu is associated with the current activity. The new activity you are starting doesn't have a sliding menu associated with it.
What you need to do is have a MyListFragment and add that fragment to your current activity.
Fragment fragment = new MyListFragment();
FragmentManager fragmentManager = getFragmentManager();
fragmentManager.beginTransaction()
.replace(R.id.content_frame, fragment)
.commit();
// Close the drawer
mDrawerLayout.closeDrawer(mDrawerList);
Where R.id.content_frame is a container view (typically a FrameLayout) in your current activity's layout.
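For reference, a minimal sketch of such a MyListFragment (names are illustrative; move the list setup from your MListActivity into it, and note this assumes android.app.ListFragment from API 11+ — on older versions use the support library's ListFragment instead):

public class MyListFragment extends ListFragment {
    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        // Build the adapter here, just as in MListActivity's onCreate(),
        // e.g. from the items parsed out of your RSS feed.
        String[] items = { "Item 1", "Item 2", "Item 3" };
        setListAdapter(new ArrayAdapter<String>(getActivity(),
                android.R.layout.simple_list_item_1, items));
    }

    @Override
    public void onListItemClick(ListView l, View v, int position, long id) {
        // Handle item clicks here, as you did in the ListActivity.
    }
}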
A: You can go with the fragment option and keep swapping fragments in the same activity or you can attach an identical SlidingMenu to your new ListActivity
What I did was create a base class that extends Activity and sets up the SlidingMenu, then extend that class in all my activities; alternatively, you can copy/paste the sliding menu code.
public class MyActivity extends Activity {
    /*
     * (non-Javadoc)
     *
     * @see android.app.Activity#onCreate(android.os.Bundle)
     */
    @Override
    protected void onCreate(final Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // put sliding menu code here
    }
}
public class MainActivity extends MyActivity {
}
public class SecondActivity extends MyActivity {
}
NOTE: this is just pseudo code
DistroWatch Weekly, Issue 154, 5 June 2006
Welcome to this year's 23rd issue of DistroWatch Weekly! The long-awaited version 6.06 of the Ubuntu family of Linux distributions dominated the headlines of many open source news sites last week; we'll comment on the release and share our first impressions of the new product. In other news, the second Red Hat Summit, concluded last week, was characterised by the launch of several new initiatives, while the Debian release team has been busy finalising the feature set for the December release of Debian "etch". Also, don't miss our opinion piece about the changing landscape of Linux users prompted by the recent release of the binary-only Picasa photo management software for Linux. Finally, we are pleased to announce that the May 2006 DistroWatch donation has been awarded to LilyPond and Lua. Happy reading!
News: Ubuntu 6.06, Red Hat Summit, Fedora Unity respins, Debian "etch" update
First looks: Ubuntu 6.06
Opinion: The changing landscape of Linux users
Released last week: Ubuntu 6.06 LTS, Turbolinux 11 International Edition
Upcoming releases: Frenzy 1.0, Parsix GNU/Linux 0.80
Donations: LilyPond €190, Lua US$250
New additions: BinToo GNU/Linux, Dzongkha Linux
New distributions: BINKI GNU/Linux, Bluewhite 64 Linux, HOST, maliGNUz, PapugLinux
Listen to the Podcast edition of this week's DistroWatch Weekly in ogg (11.4MB) or mp3 (13.6MB) format (courtesy of Shawn Milo).
Ubuntu 6.06, Red Hat Summit, Fedora Unity respins, Debian "etch" update
If you happen to enjoy Linux, but dislike Ubuntu, you probably regretted connecting to the Internet in the second half of the past week. As expected, there just wasn't a single open source web site which didn't carry the big announcements by the various Ubuntu sub-projects, further linking to a large number of reviews, tutorials, screenshot tours, feature lists, third-party enhancements, and many other resources that the distribution's user community has created since the big release. If there were any doubts whether Ubuntu is truly one of the most popular Linux distributions today, they were dispelled once and for all. With its fourth official release and the first one with long term support benefits, it is now clear that the Ubuntu family of distributions has become one of the most influential and enthusiastically embraced open source operating systems ever created - and deservedly so.
The Ubuntu family of distributions now comes with graphical language and software management tools.
(full image size: 288kB, resolution: 1280x800 pixels)
In contrast, the second annual Red Hat Summit, which took place last week in Nashville, Tennessee, has attracted little community media coverage. This was perhaps best demonstrated by the comments section on Slashdot which linked to an article covering the event, but which generated barely over 30 reader comments. That's not to say there wasn't anything interesting happening at the Red Hat conference. Quite the opposite - the North Carolina company has unveiled Mugshot, an online social network for sharing music and other entertainment-related content, reiterated its support for the US$100 One Laptop Per Child (OLPC) project, and opened Red Hat 108, a development portal dedicated to enterprise developers and system integrators. The company has also announced the release dates for the upcoming Red Hat Enterprise Linux (RHEL) 5, the first beta of which is scheduled for release in July, the second beta in September, and the final version in December 2006. The new version of the enterprise distribution will be based on the upcoming Fedora Core 6, the first beta of which should be out in about two weeks from now.
Also largely unnoticed was the announcement by the Fedora Unity project that it has released a respin of Fedora Core 5, complete with all official security updates and bug fixes since the distribution's release in March 2006. Fedora Unity is a community project which distributes its files through the BitTorrent network. The idea is to help those who wish to perform a new Fedora 5 installation; instead of getting the original release and applying hundreds of megabytes of updates after the installation, users can simply download the updated DVD image from Fedora Unity and have an up-to-date system straight away. Compared to Fedora Core 5, the new respin contains over 550 updates; most of these are minor security and bug fixes, however, quite a few major applications were upgraded to newer upstream versions. These include the Linux kernel (upgraded to version 2.6.16), Beagle (0.2.6), Epiphany (2.14.1), Ethereal (0.99.0), Firefox (1.5.0.3), GIMP (2.2.11), GNOME (2.14.1), K3B (0.12.14), KDE (3.5.2), MySQL (5.0.21) and PHP (5.1.4), just to mention a few popular ones. The Fedora Unity respins are currently available for the i386 architecture, with the x86_64 edition in a testing stage and the PowerPC edition in a planning phase. New respins are expected to be released monthly.
Some six months before the planned final release of Debian "etch", the feature set of the new major version of the world's largest Linux distribution is about to be finalised. According to a mailing list post by Andreas Barth, some of the main "etch" release goals include: "GCC 4.1 transition, LSB 3.1 compatibility, SELinux support, pervasive IPV6 support, pervasive LFS (large files) support, new Python framework." The Python updates should make it significantly easier to migrate Python-based applications to newer versions, with "etch" expected to default to the Python 2.4 series. See the bits from the release team for more details and time line. On a related note, Debian has also announced that security support for Debian GNU/Linux 3.0 "woody", a product originally released in July 2002, will be terminated on 30 June 2006. All "woody" installations should now be migrated to Debian GNU/Linux 3.1 "sarge", the current stable release.
First Look at Ubuntu 6.06 (Dapper Drake) by Robert Storey
We've got them now.
- General George Armstrong Custer, 1876
In the great battle for supremacy on the Linux desktop, Ubuntu is a force to be reckoned with. Long occupying the number one spot on the DistroWatch hit list (which is not the same as saying this is the most popular distro), Ubuntu has a large and growing following.
Needless to say, the long-anticipated release of Ubuntu "Dapper Drake" on June 1 was a major event that dominated the news on DistroWatch and other Linux web sites around the world. The release was originally scheduled for April 10, but was delayed six weeks in order to give the developers time to exterminate bugs. Ubuntu fans grumbled about the delay, but nearly everybody agreed that a delayed but excellent release was better than an on-time bug-ridden mess.
As an Ubuntu beta-tester, I was particularly keen to see this awesome distro make its debut. And now that it's arrived, I'm sad to say that I'm just a little bit deflated. Maybe more than a little. I realize that many people are delighted with their Ubuntu installation, and I hate to be the guy who dropped a turd in the punch bowl. The reason for my melancholy is that at least two major flaws survived the bug-squashing party, and they are showstoppers for me. There are also a number of other minor annoyances.
I want to end this mini-review on a happy-happy note, so let's get the negative stuff out of the way first. The most noticeable problem was with the video driver. I currently have two (only two?) computers, a laptop and a desktop. Coincidentally, both machines use the "ati" driver, and with Dapper both machines display what looks like "snow" in the "radio buttons" (those "OK" and "Cancel" buttons) in certain applications. This is a minor annoyance, but not fatal. Unfortunately, (on the laptop only) I also experience fatal screen crashes at the login screen whenever I move the mouse cursor. Interestingly, this only occurs if I login and logout at least one time after bootup - it will not occur at the first login. I can only recover from the crash with a hard reset, which means pulling the plug and removing the laptop's battery, since this machine has no reset button. This bug did not occur in Ubuntu Breezy, or any other distro I've used to date.
I reported this bug and received a polite response from the developers. While waiting for a fix, I've found a reasonable workaround. I edited file /etc/X11/xorg.conf and replaced the "ati" driver with "vesa". The vesa driver works with just about any hardware, but it's a noticeably slow driver. I no longer experience crashes, but I can forget about playing PlanetPenguin-Racer (ppracer, formerly known as TuxRacer).
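For reference, the changed section of /etc/X11/xorg.conf looks roughly like the following (the Identifier string and the remaining options vary from machine to machine):

    Section "Device"
            Identifier  "ATI Radeon"
            Driver      "vesa"    # previously "ati"
    EndSection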
The other big bug involves printing. Whether or not this affects you may depend on the printer you have, but many others besides myself reported the same problem so I know I'm not alone. What I found (during beta-testing) is that my printer would only print in draft mode, even though I'd set it up to print high-quality. However, the problem only seemed to affect Gnome apps - the KDE apps all printed beautifully. You notice I'm talking in the past tense. That's because since the final release of Dapper, things have changed - the above-mentioned bug is no longer an issue because I can no longer print anything at all! This is a near disaster. Although I don't use it often, a printer is rather like an ATM card - when you need it, you really need it, or you're in big trouble.
Although not officially a bug (it's a feature!), I found that I could not start the X server by disabling gdm, booting into text mode, and then running startx. This is one of those "minor annoyances" I was referring to earlier.
So much for the bad news. On the positive side, Dapper's applications are (not surprisingly) very up-to-date. Performance has also improved since Breezy. The "live" CD has been renamed "desktop" and now boasts a graphical installer. The former "install" CD (which sports a text-mode installer) has been redubbed "alternate". Personally, I prefer the "alternate" to the "desktop" - the live CD would simply not boot on my laptop, though it did work OK on my desktop computer. Furthermore, the text-mode installer offers options not available on the graphical one.
A new member to the CD collection is the "server" CD. Intriguingly, if you install from the server CD and later decide that you really wanted a desktop after all, you can run one of the following commands to install the Ubuntu desktop of your choice:
• sudo apt-get install ubuntu-desktop
• sudo apt-get install kubuntu-desktop
• sudo apt-get install xubuntu-desktop
• sudo apt-get install edubuntu-desktop
There are also Dapper DVD editions in the pipeline, but these have not been released yet. Hopefully, these won't be released until the bugs have been eradicated.
For some users, an important new feature is that Dapper comes with built-in Asian-language support. Since I live in a bilingual household, this happens to be very useful to me. Even for those who don't speak Chinese, Japanese or Korean, this is an important innovation since it finally makes Ubuntu a truly international distro on a par with Fedora or Mandriva. This can only help increase its mind share.
Ubuntu uses SCIM for inputting the complex East Asian languages
Thanks to two evil legislative concepts, the Digital Millennium Copyright Act (DMCA) and software patents, almost every Linux distro on the planet is crippled at birth. For example, you can't legally watch DVDs without a license or rip your own MP3s. In theory, this is only a problem if you live in a country that has enacted this odious legislation, but as a practical matter, Linux developers can't create a separate CD for every country. Therefore, they aim for the lowest common denominator, which means that almost every distro is multi-media brain-dead on arrival.
Ubuntu is no exception, but geeks have found a clever way around the issue. Currently, there are two excellent scripts that you can download and run that will solve this problem handily. One is called Automatix and the other is EasyUbuntu. Of the two, I prefer Automatix because it seems to do more and offers more precise control over what you install. However, EasyUbuntu is slightly easier, as the name implies.
One problem I encountered with EasyUbuntu is when I gave it permission to install ATI binary drivers - this rendered my X server useless, leaving me with an unbootable system. Fortunately, I had the presence of mind to make a backup copy of my /etc/X11/xorg.conf file, and I was able to recover by booting into "recovery mode" (single-user mode) and renaming the backup. I guess this is as good a time as any for me to say that I think ATI's drivers suck. Perhaps this will change if AMD takes over ATI Technologies, as rumoured, but don't hold your breath waiting.
I was surprised to find that neither Automatix nor EasyUbuntu installed Grip, the premier MP3 ripper-encoder. Of course, it's easy enough to install by itself after you've enabled MP3 support. (Note: in Grip, select "lame" as your encoder if you want the MP3 file format).
Getting support is an important factor in choosing a distro. Happily, the Ubuntu community is a strong selling point. Check out the forum and/or #ubuntu on freenode. Don't forget to take a peek at the wiki.
Ubuntu is a slick desktop (and now server) distro with many commendable features. Speed, ease of use and brilliant Debian-style package management have rightfully put this distro at (or near) the top of the charts.
Unfortunately, despite a six-week delay from the original release date, I can't help but get the feeling that Dapper shipped too early. There are several serious bugs which will likely leave many newbies and hardcore veterans alike banging their collective heads against the wall. A live CD that can't boot, a video driver that hard crashes and the inability to print are not minor quibbles. And I really wish it were possible to boot into text-mode and run startx rather than being chained to gdm. Admittedly, your mileage may vary - many of the problems I've described are hardware dependent. However, I successfully ran Ubuntu Breezy on this same equipment for six months and didn't have such problems - it's disconcerting to see backsliding. I was hoping that I could tell my friends that Dapper is now the end-all be-all of Linux distros, but sadly, I cannot.
My main hope is that, in the next few weeks, Ubuntu developers will issue updates that eliminate all these bugs. It's worth noting that Dapper is supposed to be Ubuntu's first "enterprise class" release. This means that the server repositories will be supported for five years, and the desktop repos for three. Of course, long before then I hope to be running Ubuntu's next release, code-named Edgy Eft.
One issue that you, our readership, may wish to discuss is whether or not Ubuntu's decision to stick to a six-month release schedule is so wise. After this latest experience, I'm personally of the opinion that Debian's "release when it's ready" philosophy has much to recommend it.
The changing landscape of Linux users
The life of a Linux user was a lot less stressful back in the nineties. The difficulty of installing and maintaining a Linux distribution ensured that only those who had made an effort to learn the UNIX command line were able to use it effectively. The rest of the population was simply out of the game, leaving the mailing lists and online forums to experienced hackers engaging in highly technical discussions. Today, things are different. Those who still claim that Linux is not ready for the desktop should just visit one of the general Linux discussion forums to notice questions that clearly indicate that even moderately experienced computer users now install and use Linux as a matter of routine - perhaps not full time just yet, but certainly with an expectation to be able to dump their proprietary operating system completely in the future.
While it is always nice to see that today's Linux distributions are able to attract computer users who would not have considered installing the open source operating system just a few years ago, the increasing number of Linux users has contributed to the polarisation of the Linux user base. On one hand, we have the old-style hacker, fluent in command line tasks, Vim or Autoconf. On the other, there are a large number of new Linux fans who use their computer to get things done. By extension, there are Linux users who value the power and freedom open source software provides, but there are also increasing numbers of those who see Linux as just a freely available operating system they can use for their work or entertainment.
Nowhere was this polarisation more obvious than in the discussion about the merits of Picasa in last week's edition of DistroWatch Weekly. Although some people agreed with the negative sentiment towards Picasa as expressed in the main article, there were many who argued the opposite. "Who cares if Picasa is closed-source software or that it isn't a native Linux application? The most important thing is that it works and enables more people to migrate to Linux," they argued. "After all, isn't Linux about world domination?"
As defined by the Free Software Foundation (FSF), Linux, GNU and Free Software provide four essential freedoms. Of these, the freedom to modify the program to suit your needs and the freedom to build a community by improving the program are particularly important for this discussion. Those of you who have ever recompiled an open source application to change a certain behaviour or add a feature will understand the joy of being able to do so. Likewise, those of you who have ever helped a project by submitting a bug report, suggestion, code snippet or translation must have felt the satisfaction of helping in the development of your favourite program. Clearly, this is software freedom at its best!
Let me give you an example. Recently I was fascinated to read a web log post by Christian Perrier, a Debian developer, about his trip to Bhutan, a mountainous Asian country sandwiched between India and China. As it turns out, a group of Linux enthusiasts in Bhutan have been working on localising Debian into Dzongkha, the kingdom's principal language. After several years of work creating fonts, developing input methods, writing documentation and providing translation, the project was able to produce a full-featured desktop distribution with support for the country's language and writing system. Finally, last week, the Department of Information Technology at Bhutan's Ministry of Information and Communications launched Dzongkha Linux, a Debian-based distribution with support for the Dzongkha language and writing system. Even the Prime Minister of Bhutan is said to have attended the launch party!
If you don't think this is absolutely fantastic, then let me put it in a different way. You and I know about several huge software monopolies which make more money in one day than the entire population of Bhutan makes in a year. Despite that, they will never ever be able to create software localised into Dzongkha! That's just not possible - simply because it doesn't make financial sense to the shareholders of those companies. The above miracle was possible only because Debian is an open source operating system which encourages collaboration and welcomes contributions!
But maybe you don't care if citizens of some faraway, impoverished country can use their computers without having to learn a foreign language first. If that's the case, then let me give you another example. Just visit the Picasa web site and compare it to the web sites of some of the open source photo management applications, such as digiKam or F-Spot. On the latter two, you can quickly find links to reporting bugs, requesting features, and getting involved in the development. Most open source projects also provide an easy way to translate menus and documentation into different languages. With Picasa (and most other proprietary software), there is no easy way to submit requests for new features, exchange email with the developers, or localise menus and help files.
Don't get me wrong - I am not saying that you shouldn't use Picasa. Maybe it has a feature that you absolutely cannot live without or you just consider it the best photo management application there is. But before you download Picasa, or before you recommend it to your relatives and friends, ask yourself a simple question: in the long run, wouldn't we all be better off if we used and promoted Free Software equivalents instead? If you like a feature in Picasa and would want to see it included in digiKam, why not write to the developers of digiKam and request that feature? While you might not receive a positive answer straight away, I can virtually guarantee you that if enough people ask for the feature, the developers will eventually implement it. Contributing code, documentation, translation or a few dollars are other options that can help a Free Software project move ahead faster.
Free Software has come a long way since its early, visionary days. Please don't fall into the trap of believing that freely available proprietary software can enrich the lives of millions of people the same way as Free Software does. It cannot. Not in the long run.
F-Spot, a promising new digital photo management program, is Free Software
(full image size: 268kB, resolution: 810x629 pixels)
CentOS 4.3 Server CD
Karanbir Singh has announced the release of CentOS 4.3 Server CD, the project's single-CD variant designed for server use: "The single CD server install for CentOS 4.3 / i386 has now been released and is available from all active mirrors. Notes: this installer will only work with i686 based CPUs; the included packages are a subset of all packages available in the CentOS distribution, however yum has been pre-configured to use the entire repository; in order to ensure that drivers and other third party apps maintain compatibility, the package set used on the Server CD is from CentOS 4.3, you are strongly encouraged to run a 'yum update' immediately after installation." Read the rest of the release announcement for further information.
Scientific Linux 4.3 Live CD/DVD
Troy Dawson has released a set of updated builds of Scientific Linux live CDs and DVDs: "Scientific Linux Live CD 4.3 has been released for both i386 and x86_64. The Scientific Linux Live CD/DVD is a bootable CD/DVD that runs Linux directly from CD/DVD without installing. It is based on Scientific Linux 4. It uses Unionfs, allowing read-only filesystem to behave as a writable one and SquashFS providing on-the-fly decompression that allows storing 2GB software on a normal CD-ROM. The Live CD/DVD was built using modified scripts from www.linux-live.org." New in this release is an option to save files to a hard disk or USB storage device. See the release announcement for further details.
Turbolinux 11 (International Edition)
Turbolinux has announced the immediate availability of an international edition of its successful desktop-oriented operating system, also known as "Turbolinux 11 Fuji". Originally launched in Japan in November 2005, some of the key features of the new release include: "Expanded Control Center features; wireless LAN capacity; automatic updates for security patches; enhanced broadband features, including Windows Media Playback and Flash Player; and development tools such as JAVA and several other state-of-the-art functions for personal and business users." For more information please read the full press release and visit the product's features page. The international edition of Turbolinux 11 "Fuji" is available for purchase from Source One Network for US$39.00.
StressLinux 0.3.1
StressLinux 0.3.1 final is out: "After a long time of development the final StressLinux 0.3.1 is released. The changes between rc4 and this final are very small. The kernel was bumped to version 2.6.16.18 and the Realtek R1000 driver was added. Everton Marques suggested adding the following network testing tools, which are now included: nepim, iperf, netperf. Busybox reached version 1.1.3. With this final release the PXE packages are available, too. There seem to be still some problems with AMD64 hardware; this is the main reason why I included an extra boot menu entry (noinitrd) to boot without loading extra drivers." Visit the distribution's home page to read the release announcement.
Musix GNU/Linux 0.40
A new version of Musix GNU/Linux, a Debian-based live CD with a large collection of audio software, has been released: "Thanks to the support of the Ututo Project, FSF, Ourproject, and to the usual collaborators, the Musix project has just released Musix 0.40. Musix 0.40 can be considered the most stable and functional Musix version until now, and its use is recommended in the long term." Musix 0.40 includes a number of new audio programs, such as Rosegarden 1.2.3, Mixxx 1.4.2 (Digital Disc Jockey Interface), Cecilia 2.0.5, Csound and others, as well as many updated packages. For more details and known issues please consult the release announcement.
Nonux 3.0
Marcel J. Zwiebel has announced a new major release of Nonux, a Slackware-based distribution with Dropline GNOME designed for business use in Dutch-speaking office environments. Version 3.0 comes with the following improvements and updates: upgrade to kernel 2.6.16.17; upgrade to GNOME 2.14.1; upgrade to Evolution 2.6.1; upgrade to Firefox 1.5.0.3; new menu entries for common system administration tasks; new, more serene, desktop theme; by default the live CD now boots into a non-root account; improved power management for notebook computers; many application updates, including Mail Notification, WiFi-radar and GParted. For more details please read the full release announcement on the project's news page (in Dutch).
Kubuntu 6.06 LTS
Kubuntu, one of the distributions belonging to the Ubuntu family of Linux operating systems, is the first to publish a formal announcement about the product's new release: "Kubuntu 6.06 LTS has been released. It is available for download now or for the first time you can order free Kubuntu CDs through Shipit. This release comes with KDE 3.5.2 and includes a new installer which you can use direct from the live desktop CD. We have focused on stability and bug fixes, as our first Long Term Support release 6.06 will be supported for 3 years on the desktop and 5 years on the server."
Ubuntu 6.06 LTS
Right on schedule, Ubuntu 6.06, a distribution with long term support features, has been released: "Ubuntu, which has become one of the world's most popular Linux distributions in recent years, launched its latest version on June 1 following months of intense testing. The new release is titled Ubuntu 6.06 LTS (Long Term Support), and has a specific emphasis on the needs of large organisations with both desktop and server versions." For full details please read the formal press release and the more useful release notes.
Edubuntu 6.06 LTS
Edubuntu 6.06, an Ubuntu-based distribution designed for classroom use and inclusive of long term support options, has been released: "The Edubuntu team is proud to present Edubuntu 6.06 LTS. Included in this release are installation CDs, live CDs, and combination DVDs for i386, PPC, and AMD64 architectures. Edubuntu is the education-focused counterpart of Ubuntu, offering fast, easy installation of stand-alone systems, thin clients and servers. Like Ubuntu, Edubuntu 6.06 LTS offers long term security updates after release (3 years for desktops and 5 years for servers)." Read the rest of the release announcement for more information.
Xubuntu 6.06 LTS
Not to be left behind, Xubuntu 6.06 has also been released: "Today sees the first launch of Xubuntu 6.06 LTS. This version of Xubuntu will be supported for 3 years on the desktop, and brings you all of the latest XFce goodness." Xubuntu's new desktop features: "XFce 4.4beta1 including a more flexible panel, many panel plugins and icons on the desktop; Thunar file manager; GDM desktop manager; Gnome Office (latest AbiWord and Gnumeric); Evince document viewer; Xarchiver archive manager; Xfburn simple CD burner; Xubuntu System Tools for GUI system administration...." Read the release announcement and release notes for further details.
Puppy Linux 2.00
Puppy Linux 2.00 is out! From the release notes: "This is a major upgrade from the 1.xx series. How to summarise five months work? The graphical user interface is much the same, as most work has been on the underlying architecture. In a nutshell, the fundamental architecture and boot-up / shutdown scripts are a total rewrite, from scratch, no relationship to any other distro." Among other major changes, the 'standard' edition of Puppy Linux now supplies SeaMonkey 1.0 web suite for web browsing and email, Inkscape for vector drawing, and GParted for disk partitioning tasks, while the kernel has been upgraded to version 2.6.16.7.
Trixbox 1.0
Trixbox is the new name for what used to be called Asterisk@Home, a CentOS-based Linux distribution that enables the home user to quickly set up a VOIP Asterisk PBX. Version 1.0 is the project's first stable release under the new name: "Trixbox 1.0 released. Like Asterisk@Home, Trixbox is a complete Asterisk PBX including a Linux OS, Asterisk PBX software, a web GUI, and many other useful add-ons. Trixbox will focus on both the business and home user and will have more features including automatic upgrade capability. As with Asterisk@Home, Trixbox can be quickly and easily installed in under one hour." Here is the brief release announcement.
KNOPPIX 5.0.1
Klaus Knopper has announced the availability of the first public release of KNOPPIX 5 live CD and DVD: "At CeBIT 2006, a preview of Knoppix 5 was introduced, which has now been updated with the new kernel and udev hotplug management. What's new: Linux kernel 2.6.17 (rc); X.Org Version 7.0; detection of onboard IDE raid controllers and raid disk components; udev + hwsetup for automatic hardware detection; KDE 3.5.2, GNOME 2.12 from Debian unstable; OpenOffice.org 2.0.2; transparent write access for NTFS partitions; new Knoppix installer, now also with the possibility to update existing installations of Knoppix...." Read the full release announcement on the project's home page.
Development and unannounced releases
Damn Small Linux 3.0-rc1, the changelog
GeeXboX 1.0-rc2, the release announcement
Ark Linux 2006.1-rc2, the release announcement
T2 2.2.0-rc, the release announcement
VLOS 1.3-beta1, the release announcement
2006-06-05: Frenzy 1.0 (see announcement)
2006-06-05: Parsix 0.80 (see to-do list)
2006-06-15: Mandriva Linux 2007 0.2
2006-06-16: SUSE Linux 10.2 Alpha 1
2006-06-18: SimplyMEPIS 6.0 (see schedule)
2006-06-21: Fedora Core 6 Test1
2006-07-XX: Red Hat Enterprise Linux 5 Beta 1
2006-08-01: Freespire 1.0 BETA 1
2006-08-XX: Freespire 1.0 BETA 2
2006-08-XX: Gentoo Linux 2006.1 (see roadmap)
2006-09-01: Freespire 1.0 (see release roadmap)
2006-09-15: Mandriva Linux 2007 (see schedule estimation)
2006-09-27: Fedora Core 6 (see preliminary schedule)
2006-10-05: SUSE Linux 10.2 Alpha 5 (see roadmap)
2006-12-04: Debian GNU/Linux 'etch' (see release plan)
2006-12-XX: Red Hat Enterprise Linux 5 (see release plan)
May 2006 donations: LilyPond €190, Lua US$250
Ladies and Gentlemen, we are pleased to announce that, based on readers' requests, the DistroWatch May 2006 donation has been awarded to LilyPond (€190.00) and Lua (US$250.00). This is only the second time we had a chance to set aside US$500 for our monthly donation - many thanks to our sponsors and advertisers. As mentioned previously, the monthly donations programme is a joint initiative between DistroWatch, which allocates 10% of its advertising revenue, and two online shops selling low-cost CDs and DVDs with Linux, BSD and other open source software - LinuxISO.co.uk and LinuxCD.org, each of which contributed US$50 towards this month's donation.
According to the description on its web site, LilyPond, developed by Han-Wen Nienhuys in the Netherlands, is a music typesetter or an automated "engraving" system. It formats music beautifully and automatically, and has a friendly syntax for its input files. LilyPond is Free Software and is part of the GNU Project. See the FAQ section and take the features tour for more information about this project.
Lua, on the other hand, is a programming language: "Lua is a programming language originally designed for extending applications, but also frequently used as a general-purpose, stand-alone language. It combines simple procedural syntax (similar to Pascal) with powerful data description constructs based on associative arrays and extensible semantics. The implementation goals are simplicity, efficiency, portability, and low embedding cost. It has been used on games such as Vendetta, FarCry, Homeworld2, Painkiller, and World of Warcraft." Lua, released under a GPL-compatible licence, is developed by a group of coders at the Pontifical Catholic University of Rio de Janeiro in Brazil. These are the PayPal receipts for the donations to LilyPond and Lua:
Dear DistroWatch.com,
This email confirms that you have paid hanwen at xs4all.nl €190.00 EUR using PayPal.
Payment Details:
Transaction ID: 45S56615WB368821F
Total: €190.00 EUR
Item/Product Name: LilyPond donation
Buyer: DistroWatch.com
This email confirms that you have paid Lua.org $250.00 USD using PayPal.
Transaction ID: 69E25132GV1846438
Total: $250.00 USD
Item/Product Name: Lua
Item/Product Number: 502
Here is the list of projects that received a DistroWatch donation since the launch of the programme:
03/2004: GnuCash (US$250)
04/2004: Quanta Plus (US$200)
05/2004: PCLinuxOS (US$300)
06/2004: The GIMP (US$300)
07/2004: Vidalinux (US$200)
08/2004: Fluxbox (US$200)
09/2004: K3b (US$350)
10/2004: Arch Linux (US$300)
11/2004: Kile KDE LaTeX Editor (US$100)
12/2004: UNICEF - Tsunami Relief Operation (US$340)
01/2005: Vim (US$250)
02/2005: AbiWord (US$220)
03/2005: BitTorrent (US$300)
04/2005: NdisWrapper (US$250)
05/2005: Audacity (US$250)
06/2005: Debian GNU/Linux (US$420)
07/2005: GNOME (US$425)
08/2005: Enlightenment (US$250)
09/2005: MPlayer (US$400)
10/2005: amaroK (US$300)
11/2005: KANOTIX (US$250)
12/2005: Cacti (US$375)
01/2006: Gambas (US$250) and Krusader (US$250)
02/2006: FreeBSD Foundation (US$450)
03/2006: GParted (US$360)
04/2006: Doxygen (US$260)
05/2006: LilyPond (US$250) and Lua (US$250)
Since the launch of the DistroWatch Donations Programme in March 2004, we have donated a total of US$8,300 to various open source software projects.
New distributions added to the database
BinToo Linux. BinToo Linux is a full-featured binary distribution based on Gentoo Linux.
Dzongkha Linux. Dzongkha Linux is a Debian-based distribution developed in Bhutan by the Department of Information Technology at the Ministry of Information and Communications. Dzongkha Linux is created with the sole aim of providing complete Dzongkha computing capability, free of cost.
Dzongkha Linux, a Debian-based distribution with support for the language of Bhutan, was launched last week.
(full image size: 930kB, resolution: 1280x1024 pixels)
New distributions added to the waiting list
BINKI GNU/Linux. BINKI GNU/Linux is a distribution inspired by the Linux From Scratch project and designed for intermediate to advanced users.
Bluewhite64 Linux. Bluewhite64 Linux is an unofficial port of Slackware Linux to the AMD64 architecture.
HOST. HOST is a Fedora-based commercial distribution launched recently in Saudi Arabia. It is developed by India-based Host Technologies Ltd.
maliGNUz. maliGNUz is a tiny live CD based on Slackware Linux.
PapugLinux. PapugLinux is a minimal Linux live CD based on Gentoo Linux for the x86 computer architecture.
Number of discontinued distributions: 79
That's all for today. The next issue of DistroWatch Weekly will be published on Monday, 12 June 2006. See you then :-)
Number of Comments: 97
Karoshi is a free and open source school server operating system based on Ubuntu. Karoshi provides a simple graphical interface that allows for quick installation, setup and maintenance of a network.
Questions and answers: Tweaking X.Org drivers for better Intel graphics support
Questions and answers: OpenJDK versus Oracle Java
Questions and answers: Launching tasks when computer is idle
Tips and tricks: Working with images from the command line
Tips and tricks: Combining commands in the shell
Tips and tricks: Keep terminal programs running, using the at command, reverse OpenSSH connections
Questions and answers: Hardware that respects user freedom
Tips and tricks: Nix package manager on alternative Linux distributions
Questions and answers: Using noexec to prevent social engineering attacks
Speaker of the House of …
Documents filtered by: Author="Hamilton, Alexander" AND Recipient="Speaker of the House of Representatives" AND Period="Washington Presidency"
Results 91-98 of 98 sorted by date (ascending)
91Report on the Petition of Peter Pray Van Zandt, [27 February 1794] (Hamilton Papers)
The Secretary of the Treasury to whom was referred by an order of the House of Representatives of...
92Report on the Petition of William Wirtz, [27 February 1794] (Hamilton Papers)
The Secretary of the Treasury, to whom was referred by an order of the House of Representatives...
93Report on Several Petitions Barred by the Acts of Limitation, [27 February 1794] (Hamilton Papers)
The Secretary of the Treasury to whom were referred by the House of Representatives, the several...
94Report on a Representation from the State of Kentucky, [7 April 1794] (Hamilton Papers)
[To the Speaker of the House of Representatives] The Secretary of the Treasury to whom was...
95Report on the Contract Made with the Bank of the United States for a Loan of Two Million Dollars, [25 April 1794] (Hamilton Papers)
The Secretary of the Treasury pursuant to the Order of the House of Representatives of the 28th....
96Report on Abstract of Exports for One Year Ending September 30, 1793, 2 June 1794 (Hamilton Papers)
I have the honor to transmit herewith for the information of the House of Representatives, a...
97Report on an Account of Receipts and Expenditures of the United States for the Year 1793, 26 December 1794 (Hamilton Papers)
I have the honor to transmit a letter of this date, from the Comptroller of the Treasury,...
98Report on a Plan for the Further Support of Public Credit, [16 January 1795] (Hamilton Papers)
[To the Speaker of the House of Representatives and the President of the Senate] The Secretary of...
ISO 3166-1 is part of the ISO 3166 standard, defined by the International Organization for Standardization, which establishes codes for countries, dependent territories and areas of geographical interest. ISO 3166-1 defines three sets of country codes:
ISO 3166-1 alpha-2 — two-letter codes, the most widely used of the three, also employed for country-code top-level internet domains, with a few exceptions
ISO 3166-1 alpha-3 — three-letter codes, which allow a closer association between the codes and the country names than the alpha-2 codes
ISO 3166-1 numeric — three-digit codes, identical to those defined and maintained by the United Nations Statistics Division. These codes have the advantage of being independent of script (Latin, Cyrillic, Arabic, etc.) and are therefore especially useful for those who do not use the Latin script.
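As a quick illustration (these example entries are mine, not taken from the standard's full table), the three code sets assign the following values:

    Country          alpha-2   alpha-3   numeric
    France           FR        FRA       250
    Japan            JP        JPN       392
    United States    US        USA       840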
ISO 3166 has included alphabetic country codes since its first edition in 1974 and numeric codes since its second edition in 1981. The country codes were first published as ISO 3166-1 in 1997, in the fifth edition of ISO 3166, when ISO 3166 was split into three separate parts.
As a standard in wide international use, ISO 3166-1 is implemented in other standards and used by international organizations to facilitate the exchange of goods and information. However, ISO 3166-1 is not the only standard for country codes. Many international organizations use their own standards, which are partially or entirely incompatible with ISO 3166-1.
Officially assigned codes
The following table contains the complete list of countries for which codes have been defined, in alphabetical order of their English names.
See also
List of FIPS country codes
Notes
Country codes
Lists of countries
Q: Finding the Max of Select Query I am a new developer working with JPA and have an assignment to retrieve records from the database using JPQL. Below is my query. Basically, I am trying to find the max of field1 from table Z.
SELECT X.Id,
MAX (
NVL (
(SELECT field1
FROM table Z
WHERE X.Id = Z.id.Id),'')) field
FROM table1 X, table2 Y
WHERE X.Id = Y.Id
group by X.Id
While Executing this query I am getting the error as below
java.lang.IllegalStateException: No data type for node: org.hibernate.hql.ast.tree.AggregateNode
\-[AGGREGATE] AggregateNode: 'MAX'
\-[METHOD_CALL] MethodNode: '('
+-[METHOD_NAME] IdentNode: 'NVL' {originalText=NVL}
\-[EXPR_LIST] SqlNode: 'exprList'
Please advise. When I run the query as plain SQL, it works fine.
A: Maybe the problem is with the empty string '':
NVL (
(SELECT field1
FROM table Z
WHERE X.Id = Z.id.Id),'')
try something like this:
NVL (
(SELECT field1
FROM table Z
WHERE X.Id = Z.id.Id),0)
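A likely deeper cause is that `NVL` is an Oracle-specific function that HQL's parser does not recognize, while `COALESCE` is part of the JPQL standard and portable across databases. A minimal sketch of the same pattern, using Python's built-in `sqlite3` as a stand-in engine with invented table contents, shows the substitution:

```python
import sqlite3

# Toy stand-ins for the question's table1 and Z; names and data are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (id INTEGER);
    CREATE TABLE z (id INTEGER, field1 INTEGER);
    INSERT INTO table1 VALUES (1), (2);
    INSERT INTO z VALUES (1, 10);          -- note: no row for id = 2
""")

# COALESCE is the standard, portable spelling of Oracle's NVL:
rows = conn.execute("""
    SELECT x.id,
           MAX(COALESCE((SELECT field1 FROM z WHERE z.id = x.id), 0)) AS field
    FROM table1 x
    GROUP BY x.id
    ORDER BY x.id
""").fetchall()
print(rows)  # [(1, 10), (2, 0)]
```

The same `COALESCE(..., 0)` rewrite should apply directly in the JPQL query above.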
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,472 |
The impact of the coronavirus pandemic on global gender equality – As Equals
She had no idea that the 19-year-old had started exchanging sex for money to help pay for food for her three younger brothers and two cousins, who live together in a one-room house in a seaside slum in Mombasa, Kenya. When Bella came home with rice and other ingredients for dinner later in the day, she didn't explain how she had bought them.
"The pandemic broke the economy, especially for my region. So I had to help with expenses in one way or another," said Bella on WhatsApp. The teenager asked for her name to be changed to protect her identity.
Before the pandemic, Bella was a sophomore in a high school in the city, where she was an avid history student and enjoyed playing table tennis with friends during breaks between classes. But in March, with the spread of Covid-19, Kenya closed and so did schools.
Unable to continue her studies remotely due to a lack of electricity and internet access, and with her mother's income from selling vegetables on the street cut off, Bella started washing clothes to help supplement the family's income.
"God, that day, my mom almost killed me. My mom was so mad at me, she hit me. I don't want to talk about it. She didn't know I was having an affair with that man."
When one of her much older clients pressured her for sex, saying he would pay 1,000 Kenyan shillings ($9) or 1,500 shillings ($13) for unprotected sex – triple what he paid to have his clothes washed – she felt she couldn't say no. After he found out she was pregnant, he disappeared.
"The pandemic played a very important role in me getting pregnant now, because if the pandemic was not here, I would be at school. Washing clothes and all that stuff, meeting that man, it wouldn't have happened," said Bella, who is currently receiving social support and money transfers through ActionAid, an international campaign group. She supplements this with odd jobs and laundry work.
Now three months pregnant, Bella said she will not be able to resume her studies when schools in Kenya fully reopen in January – a friend of her mother, who was helping to pay her fees, withdrew her support.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) estimates that nearly 24 million children and adolescents, including 11 million girls and young women like Bella, may drop out of education next year due to the economic impact alone. pandemic (130 million girls have already been out of school, according to the agency). That reality not only threatens to throw back decades of progress made towards gender equality, but it also puts girls around the world at risk of child labor, teenage pregnancy, forced marriage and violence, experts say.
"It's kind of a vicious cycle," said Stefania Giannini, UNESCO's assistant director-general for education, noting that girls who became pregnant during isolation are less likely to return to school; the policies and practices of some countries specifically prohibit their participation in education. Teenage pregnancies during the pandemic threaten to block the education of one million girls in sub-Saharan Africa alone, according to a report by World Vision, a member of UNESCO's Covid-19 Global Education Coalition.
For many girls, school is not only a place of learning and a path to a better future, adds Giannini, it is also a lifeline – offering vital nutrition services, menstrual hygiene management, sexual health information and social support.
Previous crises have proved that girls are the first to be removed from the classroom and the last to return. When the Ebola outbreak led to the closure of schools in West Africa from 2014 to 2016, girls faced increasing poverty, child labor and teenage pregnancy, preventing them, in some cases, from resuming school, as reports from UNICEF, Save the Children and UNDP have shown.
In Sierra Leone, teenage pregnancies more than doubled to 14,000, according to UNICEF. And many girls in the country have never returned to the classroom, in part because of a recently repealed policy that prohibited pregnant girls from going to school, Plan International reported. Enrollment fell by 16 percentage points in the most affected communities in Sierra Leone, according to a working document published by the World Bank.
Using data on school dropout from the Ebola epidemic in Sierra Leone, the Malala Fund estimated that an additional 20 million girls of secondary school age could remain out of the classroom long after the coronavirus pandemic had passed.
"The pandemic played the biggest role in me getting this pregnancy now, because if the pandemic was not here, I would be at school. Meeting that man would never have happened."
The repercussions of the Covid-19 pandemic on girls have been felt for generations.
Earlier this year, UNFPA projected that lockdowns lasting at least six months could lead to about 7 million additional unwanted pregnancies and 31 million cases of gender-based violence, as well as 13 million child marriages and 2 million cases of female genital mutilation over the next decade.
Covid-19 will also put another 47 million women and girls into poverty, according to an analysis commissioned by UN Women and UNDP, which estimates that about 435 million women and girls will live on less than US$1.90 per day in 2021. According to the report, the number of women and girls living in extreme poverty will not return to pre-pandemic levels until 2030.
"With the impact of Covid, we are seeing a very rapid and dramatic reversal of the progress we have made on gender equality," said Julia Sánchez, ActionAid's secretary general, highlighting issues on which advocates have made progress in recent years, such as ending female genital mutilation.
"Suddenly, it is as if we have all turned our backs and started walking in the opposite direction."
In an ActionAid survey of 1,219 women aged 18 to 30 in urban areas of India, Ghana, Kenya and South Africa, only about 22% of those who were studying said they could continue their studies remotely. But the survey was limited by the fact that young women were interviewed based on their availability and willingness to respond – only about 25% were currently in any form of education.
Out of school and facing extreme economic insecurity, many of the girls interviewed said they were forced to assume a greater burden of unpaid care and domestic work, without access to life-saving sexual and reproductive health services – including birth control – – and were more vulnerable to gender-based violence.
Reported incidents of violence were particularly high in Kenya (76%), where young women interviewed repeatedly mentioned sexual abuse and early pregnancy. Echoing Bella's story, several girls and young women who were out of school told researchers that they were forced to exchange sex for money out of financial desperation, ActionAid wrote.
"There are many girls in my area who are going through the same situation. As for my situation, I am now just waiting for God to help me with this and I will get out of this for sure."
Like many other countries on the African continent, Kenya has pledged to close the gap in education exclusion by providing access for all children by 2030. But the dispersed approach to dealing with teenage pregnancy – a problem before the pandemic – was criticized by campaign groups like Human Rights Watch. In July, Kenyan President Uhuru Kenyatta ordered an investigation into the growing reports of violence against women and girls, noting that teenage pregnancies had increased during the pandemic.
Frustrated advocates say cuts in foreign aid from donor countries like the UK amid a wave of Covid-induced austerity measures will have devastating impacts on girls' education and leave them without the safety net the school offers. They warn that failing to put women and girls at the center of recovery plans comes at a high cost to economic growth, especially when faced with one of the deepest recessions since World War II.
A World Bank report, released in partnership with the Malala Fund in 2018, showed that limited educational opportunities for women and girls completing high school can cost the global economy between $ 15 trillion and $ 30 trillion.
"Governments are under pressure because aid is going to be cut, because revenues are declining because of the economic effects of Covid and also because there are greater demands in the health sector," said Lucia Fry, director of research and policy at the Malala Fund. "In some cases, not all, countries are in fact diverting funds from education at this time of great need."
Various advocacy groups are calling on governments to keep prioritizing education, while urging the international community to provide fiscal stimulus in the form of debt relief and emergency aid. In the long run, they are considering reforms to things like the international tax system so that countries can keep more of their revenue for public services.
Meanwhile, teenagers like Bella are having to change their expectations for a future at school to one at home.
"It's been so hard for me. I have no words to explain how I feel," said Bella.
"Going back to school won't be possible … and my baby is coming."
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,807 |
<?php
namespace Magento\Theme\Controller\Adminhtml\System\Design\Wysiwyg\Files;
class DeleteFolder extends \Magento\Theme\Controller\Adminhtml\System\Design\Wysiwyg\Files
{
/**
* Delete folder action
*
* @return void
*/
public function execute()
{
try {
$path = $this->storage->getCurrentPath();
$this->_getStorage()->deleteDirectory($path);
} catch (\Exception $e) {
$result = ['error' => true, 'message' => $e->getMessage()];
$this->getResponse()->representJson(
$this->_objectManager->get('Magento\Framework\Json\Helper\Data')->jsonEncode($result)
);
}
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,786 |
Séamus Sweeney
Castletownroche, Co. Cork – labyrinths, dinosaurs, and spies.
by Séamus SweeneyPosted on June 21, 2016 June 21, 2016
At the labyrinthos site article about Irish labyrinths, I came across a discussion of a labyrinth no longer visible but nevertheless preserved in a farmhouse in Castletownroche, Co Cork:
A further example of an ancient labyrinth design in Ireland comes from Bridgetown House, a large farmhouse to the south of Castletownroche in County Cork. It is formed from river worn pebbles laid as a cobblestone floor in the kitchen of the farmhouse. Only 5½ by 4½ feet (1.68 x 1.37 m) in diameter, the 'walls' of the labyrinth were created by laying larger, flattened, stones at an angle to the smaller stones that form the 'pathway.' Its design is of the widespread Classical type, with seven concentric paths surrounding the goal. The story of how this little labyrinth came to be created is quite remarkable.
The farmhouse was built in 1782 and sometime in the 1790's a family wedding party was held at the house. At the height of the festivities, the assembled folk were dancing in the kitchen when the original wooden floor collapsed, sending everyone tumbling into the cellar beneath! Apparently nobody was seriously injured, but to avoid a repeat of this unfortunate incident, it was decided that the cellar would be filled in and a local paver, Joe Knott, laid the cobblestone floor. Presumably he chose the labyrinth motif as a good luck charm, or maybe as a way of commemorating the unusual circumstances that lead to the construction of the floor (Saward, 1984).
The new floor was evidently sturdier than the original and was quite well known locally, but over the years the floor settled unevenly and during the 1960's the owners were forced to pour a new concrete floor over the original cobblestones. Fortunately they appreciated the value of the original floor and had the foresight to photograph the labyrinth and cover the cobbles with a layer of plastic sheeting and sand before the concrete was poured, so it should be possible to recover the labyrinth at some point in the future.
I visited Castletownroche, near Fermoy, a few years ago with my family – mainly to visit the Dinosaur Cafe. The blog I linked to carries a good description of the museum as do these photos. The New York Natural History Museum it ain't, but there is a certain charm and spirit to the enterprise, all quite unexpected in the context of an Irish village.
Another reason that Castletownroche is notable (I am sure there are many) is the brief career of the Nazi spy, Oskar Metzke. This article summarises what happened:
Retired schoolteacher, Richie O'Grady, now living in Fermoy has vivid memories of that long ago day. He was only twelve years of age at the time, but can still recall the hue and cry in the Village. As Richie remembers, the stranger sought lodgings at the house of a Mrs Casey, near the Church. Apart from the strange accent, this lady saw nothing untoward in the visitor and he arranged to stay for the night. He is also known to have called to the local Presbytery, where he met Rev. Fr. James Sheedy, the Parish Priest. He represented himself to Fr. Sheedy as being a Czech National on his way to seek work in Mallow Beet Factory. Fr. Sheedy gave the stranger some money and he was next seen in O'Connor's Shop in the Main Street where he bought some bread and cheese. By a strange quirk of fate the Local Garda, Jeremiah 0'Sullivan called to the shop as Oskar Metzke was being served. The stranger immediately attracted the attention of Sergeant 0'Sullivan and this is understandable as the country was in a state alert at the time.
Oskar Metzke was then escorted to the local Garda Station where he again claimed to be on his way to seek work in Mallow Beet Factory. He gave his name as Oskar Metzke and said that he had been discharged from the British Army as being medically unfit. In support of this he produced a British Army Service Book which appeared to be genuine, but Sgt. O'Sullivan insisted that he empty his pockets. What came to light sealed the fate of Oskar Metzke.
In his possession were found a map with aerial views of the North Cork Countryside, a compass, a combined torch and fountain-pen and most damning of all, the standard equipment of every German Spy, a Luger revolver. Realising that a potentially explosive situation was developing, Sgt. O'Sullivan decided to contact his Superior, Superintendent Moore in Fermoy. In the meantime Oskar Metzke was left in the care of the Barrack Orderly, Garda Francis Mannix. Garda Mannix is now dead, but his son, Billy, now living in Mallow, takes up the story
"I was a very young boy at the time, but the story was often repeated to me by my father. Oskar Metzke was sitting quietly by the fireplace, when he asked Garda Mannix if he could eat some of his bread and cheese. On receiving permission, he walked over to the table where it lay. He started to eat his frugal meal, then turned his back on the Garda. Seconds later Oskar Metzke was in convulsions, it was obvious that he had swallowed something lethal and my father, Garda Mannix, did his utmost to retrieve it from his mouth, but already the German was unconscious. Within a matter of minutes Metzke was dead, but just before he expired he received a blessing from the man who only a short time before had been so kind to him, Fr James Sheedy. Dr Jeremiah Foley arrived soon afterwards, but by this time Metzke was beyond all human aid. A post-mortem was carried out by the then state Pathologist, Dr John McGrath and at the subsequent inquest Coroner Nagle of Buttevant revealed that Oskar Metzke had taken a deadly poison, cyanide of potassium".
There is a rather touching quality to this story and one wonders how much is really known about Oskar Metzke and why he died in a small country town in North Cork – and why he was there in the first place.
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 2,515 |
@php
    // Stagger successive validation-error toasts by 150 ms each.
    $delay = 0;
    $increment = 150;
@endphp
<script>
var notyf = new Notyf({ delay: 5000 });
@if(Session::has('error'))
notyf.alert("{{ Session::get('error') }}");
@php $delay += $increment; @endphp
@elseif(Session::has('success'))
notyf.confirm("{{ Session::get('success') }}");
@php $delay += $increment; @endphp
@endif
@foreach($errors->all() as $error)
setTimeout(function() {
notyf.alert("{{ $error }}");
}, {{ $delay }});
@php $delay += $increment; @endphp
@endforeach
</script>
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,137 |
Manual of Parliamentary Practice
swer to the petition of appeal therein was filed, and a list of all causes, whether on writs of error or appeal, which shall be put at issue during the session of this court, shall in like manner be made by the clerk and added to the list; and when this court shall be ready to proceed to the hearing of causes, the same shall be called in the order in which they stand on the list.
15. When any cause put in the list as aforesaid, shall have been twice called and passed in consequence of the plaintiff in error, or appellant, not being in readiness to proceed with the argument thereof, the defendant in error shall be entitled to a judgment of non pross of the writ of error, and the respondent to a decree dismissing the appeal, as the case may be, with costs, unless this court on good cause shown shall otherwise order.
16. That the remittitur, in case of a writ of error, shall contain a copy of the judgment of this court annexed to the writ of error, and the transcript of the record of proceedings, as brought into this court, under the seal of this court, and signed by the clerk thereof; and the remittitur, in case of an appeal, shall contain a copy of the decree or order of this court annexed to the petition of appeal, and the matters thereto annexed as brought into this court, under the seal of this court, and signed by the clerk thereof.
17. That all costs awarded by this court, in causes upon writs of error or appeal, shall be taxed by the chancellor or a judge of the supreme court, and inserted in the judgment of this court, and form part of the remittitur, for which costs the supreme court shall award execution according to the course of that court; and all costs awarded by this court, in cases upon appeals, shall be taxed in like manner, and the court shall award execution for the same, or enforce payment thereof, according to the course and practice of that court.
18. That no member of this court shall, as attorney, solicitor, or counsel, be concerned in or argue any cause in this court, either upon error or appeal, unless such member was, without reference to this court, actually retained and employed in the cause in the court below, before the judgment or decree on which the writ of error or appeal is founded was rendered : provided, however,
that this rule shall not extend to causes in which any member of this court was actually retained as attorney, solicitor or counsel, previous to the adoption thereof.
19. That at the hearing of causes on appeal or writs of error, not more than one counsel shall open the argument, and no more than two counsel shall answer, and no more than one counsel shall reply or close, except in special cases on appeal, where there are distinct parties on the same side having distinct interests in question.
20. That special motions shall require a notice to the opposite party of such motion, to be duly served two days at least before the motion is to be made.
21. That in all cases on error and appeal brought inta this court, the judges in cases of writs of error, and the chancellor in cases of appeal, shall give the reasons for their judgment or decree, immediately after the reading of the record or decree, and before any counsel in the cause is heard.
22. That when an appeal from any decree of the chancellor shall be heard in this court, the chancellor may state his opinion upon every matter that shall arise on such hearing, but shall not have a voice in the decision of the court on any question whatever arising on such appeal; and that when a cause shall be brought into this court by a writ of error on the question of law in a judgment of the supreme court, the judges of such court may severally state their opinions upon every matter that may arise on such hearing, but shall not have a voice in the decision of the court on any question whatever arising in the cause so brought into this court.
23. That hereafter it shall be the duty of the appellant or plaintiff in error in this court, to deliver a copy of the opinion of the chancellor or supreme court to each member, as an appendix to his case, previous to the argument thereof.
24. That in cases not already provided for, the practice of this court shall be similar to the practice of the court of exchequer chamber in England; and that on appeals it shall be conformable to that of the house of lords in England, when sitting as a court of appeals, until further order; and that all former rules made by this court relative to its practice be vacated.
STATE OF NEW-YORK.
1. Upon the appearance of a quorum, the speaker shall take the chair, and the members shall be called to order.
2. Immediately after the speaker shall have taken the chair, the minutes of the preceding day shall be read by the clerk, to the end, that any mistakes therein may be corrected by the house.
3. The speaker shall preserve order and decorum, and shall decide questions of order, subject to an appeal to the house : he shall have the right to name any member to perform the duties of the chair, but such substitution shall not extend beyond an adjournment.
4. The speaker shall not vote in any case, unless where the vote shall be by ballot; or when the House shall be equally divided; or when his vote added to the minority, shall make an equal division ; and in case of such equal division, the question shall be lost.
5. When the house adjourns, the members shall keep their seats until the speaker leaves the chair.
6. Every member, previous to his speaking, shall rise from his seat, and address himself to the speaker.
7. When two or more members rise at once, the speaker shall name the member who is first to speak.
8. No member shall speak more than twice to the same question without leave of the house ; nor more than once, until every member, choosing to speak, shall have spoken.
9. No motion shall be debated or put, unless the same be seconded. When a motion is seconded, it shall be stated by the Speaker, before debate; and every such motion shall be reduced to writing, if the speaker, or any member, desire it.
10. After a motion is stated by the speaker, it shall be deemed to be in possession of the house ; but may be withdrawn at any time before decision or amendment.
11. When a question is under debate, no motion shall be received, unless to amend it; to lay it on the table; to commit it; to postpone it to a day certain; for the previous question; or to adjourn.
12. A motion to adjourn shall be always in order, and shall be decided without debate.
13. The previous question, until it is decided, shall preclude all amendment and debate of the main question, and shall be decided in this form-Shall the main question be now put?
14. No member shall speak more than once, without leave, upon a previous question.
15. A motion for commitment, until it is decided, shall preclude all amendment of the main question.
16. Every order, resolution, and vote, to which the concurrence of the Senate shall be necessary, shall be read to the House, and laid upon the table, on a day preceding that in which the same be moved, unless the house shall otherwise allow.
17. Petitions, memorials, and other papers, addressed to the house, shall be presented by the Speaker, or by a member in his place.
18. Every member who shall be present when a question is stated from the chair, shall vote for or against the same, unless the house shall excuse him, or unless he be immediately interested in the question; in which case he shall not vote; but no member shall be permitted to vote upon any question, unless present when his name is called upon a division in its regular order.
19. While the speaker is putting a question, no member shall walk out of, or across the house; nor when a member is speaking, shall any member entertain any private discourse, or pass between him and the chair.
20. A member called to order, shall immediately sit down, unless permitted to explain; and the house, if appealed to, shall decide on the case, but without debate; if there be no appeal, the decision of the chair shall be submitted to.
21. Every bill shall be introduced by motion for leave, or by an order of the house on the report of a committee; and one day's notice, at least, shall be given of a motion to bring in a bill, unless the house unanimously allow
the same to be brought in without such previous notice.
22. The printed copies of bills which are brought into this house, by any member or committee, and ordered to be printed, shall contain the name of the member or committee bringing in or reporting such bill.
23. Every bill shall receive three several readings, previous to its being passed ; and the second and third reading shall be on different days: and the third reading shall be on a day subsequent to that on which it has passed a committee of the whole house, unless the house unanimously direct otherwise.
24. No bill shall be committed or amended until it has been twice read.
25. In forming a committee of the whole house, the speaker shall leave the chair, and a chairman shall be appointed to preside.
26. Bills committed to a committee of the whole house, shall be first read through by the clerk, and then read and debated by clauses, leaving the preamble to be last considered : all amendments shall be entered on a separate piece of paper, and so reported to the house by the chairman, standing in his place; after the report, the bill shall be subject to debate and amendment, before the question to engross it be taken.
27. All questions, whether in committee, or in the house, shall be put in the order they were moved ; except that in filling up blanks, the largest sum and longest time shall be first put.
28. A similar mode of proceeding shall be observed with bills which have originated in, and have passed the senate, as with bills originating in the house.
29. When a bill passes the house, the speaker shall certify the same, with the date thereof at the foot of the bill.
30. Upon a division, the names of those who vote for, and those who vote against the question, shall be entered upon the minutes, if any ten members require it.
31. The order, of the day shall have the preference to any motion before the house.
32. A motion that the chairman leave the chair, shall always be in order, and shall take place of any other motion. | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,887 |
\section{Introduction}
\label{intro}
Three exoplanets have been discovered with the \textit{Kepler} mission \cite{Borucki1} that are inferred to have tails of dusty effluents trailing behind, or ahead of them in orbit about their host star. The first exoplanet with a comet-like tail, Kepler-1520b, was discovered by \cite{Rappaport1}, and was found to be a close-in exoplanet with an extremely short orbital period of $P_\mathrm{orb} = 0.65356(1)$ d. The host star that is apparently being occulted is Kepler-1520, a $V = 16.7$ mag K-dwarf with a $T_\mathrm{eff} = 4677(82)$ K. The shape of the transit is highly asymmetric (see Fig. \ref{Cele}). It shows a significant brightening just before the eclipse -- pre-transit brightening, sharp ingress followed by a short sharp egress and long smooth egress, and a weak post-transit brightening. Moreover, the planet exhibits strong variability in the transit core on the timescale of one day \cite{Rappaport1,Bochinski1}, and a variability in the egress on the timescale of about 1.3 years \cite{Budaj1}. \cite{Rappaport1} suggested that the planet's size is not larger than Mercury, and is slowly disintegrating/evaporating, creating a comet-like tail. \cite{Brogi1} and \cite{Budaj1} first validated the disintegrating-planet scenario using a model and both found that dust particles in the tail have typical radii of about 0.1 - 1 micron. Both brightenings are caused by the forward scattering on dust particles in the tail. Strong variability in the transit depth is a consequence of changes in the cloud optical depth. \cite{Perez1} proposed a model of the atmospheric escape via the thermal wind that is only effective for planets, which are less massive than Mercury. Gravity of more massive planets would provide too deep potential barrier for the wind. Later, two more exoplanets were discovered, KOI-2700b and K2-22b, whose transit shapes show evidence of a similar comet-like tail \cite{Rappaport2,Sanchis1}.
The ongoing \textit{Transiting Exoplanet Survey Satellite} (\textit{TESS}) mission \cite{Ricker1} also seems promising for increasing the number of such objects. Recently, three distinct dipping events in the light curve of $\beta$ Pictoris were identified based on the \textit{TESS} observations by \cite{Zieba1}. The dips are asymmetric in nature and are consistent with a model of an evaporating comet with an extended tail crossing the disc of the star. \textit{TESS} is studying more stars than were targeted by \textit{Kepler} and \textit{K2}; therefore, we can expect more similar discoveries in the near future. Moreover, the host stars are likely to be brighter and therefore easier to follow up from the ground.
The planned \textit{Ariel} space mission \cite{DaDeppo1,Pascale1,Tinetti1}, expected to be launched in 2029, also has the potential to detect disintegrating rocky exoplanets, or exocomets. During its 3.5 years of operations from an L2 orbit, \textit{Ariel} will continuously observe exoplanets transiting their host stars. It is designed to achieve a stability of $< 100$ ppm (with a goal of 10 ppm) over the temporal bandwidth of the transit, typically less than 10 hours. Moreover, the observations will be obtained in several channels, centered at 0.55, 0.70, 0.95, 1.25, and 1.65 microns. The optical properties of dust grains in the tail (and hence also the transit depth) may vary with wavelength. Small grains scatter light in the Rayleigh regime, where the scattering cross-section decreases steeply with wavelength. On the other hand, large particles attenuate the light in the geometrical-optics regime, which does not depend on the wavelength; see e.g., Fig. 13 in \cite{Croll1}. The optical properties of dust depend mainly on the particle size and chemical composition. Consequently, multiwavelength transit observations can put constraints on these two parameters.
Disintegrating exoplanets discovered by the \textit{Kepler} space telescope have host stars too faint to be targeted in the \textit{Ariel} core survey, but nearby analogues of such disintegrating exoplanets orbiting bright stars must exist. These bright stars would be attractive targets for the \textit{Ariel} core survey. We propose target selection based on the results of the Dispersed Matter Planet Project \cite{Haswell1}. This project was partly motivated by the aim of finding nearby disintegrating exoplanets and their progenitors. Some highly irradiated close-in exoplanets orbit stars showing anomalously low chromospheric emission. The authors of \cite{Haswell1} attribute this deficit to absorption by circumstellar material replenished by mass loss from disintegrating planets. Hence, anomalously low chromospheric emission can indicate disintegrating planets. The first possible transiting disintegrating exoplanet in the framework of this project was already reported in the $V = 7.98$ mag system DMPP-1 (HD38677) by \cite{Jones1}. Their results also suggest that disintegrating rocky exoplanets can co-exist with hot/warm giant planets.
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Cele.eps}
\includegraphics[width=60mm]{Krivka.eps}}
\caption{Phase-folded and averaged transit light curve of Kepler-1520b, smoothed using the running-window technique. The window width and step were 0.01 and 0.001, respectively (in units of phase). Red
points are \textit{Kepler} observations and the blue line represents the averaged data (left-hand panel). The same averaged data, zoomed to better visualize the asymmetry of the light curve (right-hand panel).}
\label{Cele}
\end{figure*}
One more aspect of disintegrating exoplanets is the advantage that more phase space is available to detect some of these objects, even if the planet does not transit the host star, i.e., via the forward-scattering signal alone. In the transiting case, the extinction by dust during the transit removes more light from the beam than is scattered into it: the highest-amplitude portion of the forward-scattering peak is obscured by the transit, where extinction nullifies the effect of forward scattering. Thus, the forward-scattering component of the light is best seen, with reduced amplitude, either just prior to ingress or just after egress. This raises the possibility that grazing, non-transiting exoplanets with dusty tails, where the solid body of the planet does not transit but part of the comet-like tail can, might have a distinct and detectable signature via this same forward-scattering peak, as pointed out by \cite{DeVore1}. The authors calculated that the amplitude of the scattering peak could be in the range from 0.00005 to 0.0005 (from 50 to 500 ppm), depending on the orbit inclination angle. Contrary to \cite{DeVore1}, in our case study we took the disintegrating exoplanet Kepler-1520b, changed the orbital properties of the system to obtain a grazing, non-transiting scenario, and investigated how different particle radii, species, \textit{Ariel} observational channels, and other factors affect the amplitude of the forward-scattering peak and the detectability of the scattering event.
The paper is organized as follows. We first calculated the optical properties of the dust used in our analysis; this is briefly described in Section \ref{optprop}. We then modeled the phase-folded and averaged transit light curve of Kepler-1520b using the {\tt{Shellspec}} code. This procedure is detailed in Section \ref{transitmod}. Section \ref{grazingmod} is the main part of our work: we took the orbital and planetary properties of Kepler-1520b discussed in Section \ref{transitmod}, changed only the orbital inclination of the system to obtain a grazing, non-transiting scenario, and performed a series of model calculations with the {\tt{Shellspec}} code. In Section \ref{discuss} we discuss three important factors that could affect the results described in Section \ref{grazingmod}. Finally, our findings are summarized in Section \ref{conc}.
\section{Calculation of the optical properties of the dust}
\label{optprop}
Dust can absorb the impinging radiation and convert it directly into heating of the grains. This process is called \textit{absorption}, or \textit{true absorption}, and is quantified by the \textit{absorption opacity}. Dust can also scatter radiation, without being heated, in a process called \textit{scattering}, which is quantified by the \textit{scattering opacity}. The sum of the absorption opacity and the scattering opacity is the \textit{total opacity} (or simply the \textit{opacity}). Furthermore, scattering can be highly asymmetric, a property described by means of the phase function, which depends on the scattering angle (the deflection angle from the original direction of the impinging radiation). The most prominent feature is strong forward scattering at scattering angles near zero.
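The relations above can be stated compactly. The following minimal Python sketch (not part of our modeling pipeline; the opacity values are illustrative placeholders, not measured dust properties) shows how the total opacity and the single-scattering albedo follow from the two components:

```python
def total_opacity(kappa_abs, kappa_sca):
    """Total opacity is the sum of the absorption and scattering opacities."""
    return kappa_abs + kappa_sca

def single_scattering_albedo(kappa_abs, kappa_sca):
    """Fraction of the extinction due to scattering (0 = pure absorber)."""
    return kappa_sca / (kappa_abs + kappa_sca)

# Illustrative (made-up) opacities in cm^2/g:
k_abs, k_sca = 2.0e3, 6.0e3
print(total_opacity(k_abs, k_sca))             # 8000.0
print(single_scattering_albedo(k_abs, k_sca))  # 0.75
```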
Optical properties of selected species were calculated by \cite{Budaj2}. The tables, which contain phase functions, opacities, albedos, equilibrium temperatures, and radiative accelerations of dust grains in exoplanet systems, are freely available on the web\footnote{See \url{https://www.ta3.sk/~budaj/dust/deirm}}. They cover the wavelength range from 0.2 to 500 microns and 21 particle radii from 0.01 to 100 microns for several species, assuming spherical grains, a Deirmendjian particle size distribution, and Mie theory. From these tables we selected opacities and phase functions for alumina, enstatite, forsterite, olivine with 50\% magnesium, pyroxene with 40\% magnesium, and iron, for particle radii of 0.01, 0.1, and 1 micron. We also have to take into account the fact that, from the viewpoint of the dust particles, the parent star has a non-negligible angular dimension on the sky. Dust particles of Kepler-1520b at the distance of $a = 2.77~\mathrm{R}_{\odot}$ from the parent star view it as a disk with an angular diameter of about $26^{\circ}$; the radius of the star is $R_\mathrm{s} = 0.66~\mathrm{R}_{\odot}$ \cite{Rappaport1}. To take this effect into account we have to split the stellar disk into elementary surfaces and integrate the phase function over the disk. Because of the strong forward scattering, we calculated the phase functions with a very fine step in the interval from 0 to $13^{\circ}$ and, consequently, the disk-averaged phase function with a very fine step near the edge of the stellar disk. For this purpose we applied the software {\tt{Diskaver}}\footnote{See \url{https://www.ta3.sk/~budaj/dust/deirm/diskaver}}. It assumes a quadratic limb darkening of the stellar surface:
\begin{equation}
\label{qadraticlimbdarklaw}
I_\nu = I_\nu(0)[1-u_1(1 - \cos \theta) - u_2(1 - \cos \theta)^2],
\end{equation}
\noindent{where $I_\nu(0)$ is the intensity perpendicular to the surface of the source and $\theta$ is the angle between the line of sight and the normal to the surface. The quadratic limb darkening coefficients $u_1$ and $u_2$ were linearly interpolated based on the stellar parameters $T_\mathrm{eff} = 4677~\mathrm{K}$, $\log g = 4.60$ (cgs), and $\mathrm{Fe/H}=-0.18$ \cite{Rappaport1} for the passbands $V$, $i$, $z$, $J$, and $H$, which correspond to the \textit{Ariel} observational channels at 0.55, 0.70, 0.95, 1.25, and 1.65 microns, respectively. Subsequently, we also calculated the quadratic limb darkening coefficients for the \textit{Kepler} passband. For this step we used the on-line applet {\tt{EXOFAST - Quadratic Limb Darkening}}\footnote{See \url{http://astroutils.astronomy.ohio-state.edu/exofast/limbdark.shtml}}, which is based on the IDL routine {\tt{QUADLD}} \cite{Eastman1} and interpolates the \cite{Claret1} quadratic limb darkening tables. Calculations with the software {\tt{Diskaver}} were performed at the wavelengths of 0.55, 0.60, 0.70, 0.95, 1.25, and 1.65 microns for consistency with the \textit{Ariel} and \textit{Kepler} passbands.}
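Eq. (\ref{qadraticlimbdarklaw}) is straightforward to evaluate numerically. The sketch below (the coefficients $u_1$, $u_2$ are placeholders, not the interpolated Claret values used in the paper) shows the limb-darkening profile as a function of $\mu = \cos\theta$:

```python
import numpy as np

def limb_darkening(mu, u1, u2):
    """Quadratic limb darkening law, Eq. (1): I(mu)/I(0),
    with mu = cos(theta), theta measured from the surface normal."""
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

# Placeholder coefficients (NOT the interpolated Claret values):
u1, u2 = 0.6, 0.1
mu = np.linspace(0.0, 1.0, 5)      # from the limb (mu=0) to disk centre (mu=1)
print(limb_darkening(mu, u1, u2))  # drops from 1.0 at the centre to 0.3 at the limb here
```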
\section{The transit model of Kepler-1520b}
\label{transitmod}
We first modeled the phase-folded and averaged transit light curve of Kepler-1520b (see Fig. \ref{Cele}, right-hand panel). This analysis was previously carried out by \cite{Budaj1} using the {\tt{Shellspec}} code \cite{Budaj3}, but because we used a different model of the comet-like tail, we had to repeat it. We used the same code\footnote{See \url{https://www.ta3.sk/~budaj/shellspec.html}}, in its version No. 39. This code calculates the light curves and spectra of interacting binaries or exoplanets immersed in a three-dimensional (3D) circum-stellar, or circum-planetary, environment. It solves simple radiative transfer along the line of sight, and the scattered light is taken into account under the assumption that the medium is optically thin. A number of optional objects (such as a spot, disk, stream, ring, jet, or shell) can be defined within the model, or a precalculated model can be loaded from an extra file. Synthetic light curves or trailing spectrograms can be produced by changing the viewpoint on the 3D object.
For our purpose we used an optional object in the form of a ring, which we subsequently modified: a comet-like tail can be modeled as part of a ring of non-negligible thickness around the central star. During our calculations we assumed a spherical and limb-darkened central star with a radius of $R_\mathrm{s} = 0.66~\mathrm{R}_{\odot}$, mass of $M_\mathrm{s} = 0.76~\mathrm{M}_\odot$, and effective temperature of $T_\mathrm{eff} = 4677~\mathrm{K}$ \cite{Rappaport1}, located in the geometrical center of the ring. We modeled the comet-like tail as part of a ring with a radius of $a = 2.77~\mathrm{R}_{\odot}$. Its geometrical cross-section monotonically enlarges from the planet to the end of the ring, which is located at $60^{\circ}$ behind the planet; at this point the ring is truncated. The cross-section of the ring \textit{C} and the dust density along the ring $\rho$ are allowed to change with the angle \textit{t} [rad]:
\begin{equation}
\label{densitychangeA2}
\rho(t) = \rho(0)\frac{C(0)}{C(t)}e^{(|t-t(0)|A2)/\pi},
\end{equation}
\noindent{where $\rho(0)$, $C(0)$, and $t(0)$ are the dust density, cross-section, and phase angle at the beginning of the ring, respectively, and $A2$ is the density exponent used to model the dust destruction in the tail. There is a strong degeneracy between the cross-section and the dust density at a given phase angle: if we increase the cross-section, an appropriate model of the observed light curve requires a correspondingly lower dust density, following the degeneracy relation $C\rho=$ const. Since the dimension of the dust tail of Kepler-1520b is unknown, we arbitrarily defined a geometrical cross-section of the tail that was preferable for our computation process (in terms of the grid density, grid dimension, and computing time). Therefore, in our calculations we assumed a dust tail with a cross-section of $0.05 \times 0.05~\mathrm{R}_\odot$ at its beginning and $0.09 \times 0.09~\mathrm{R}_\odot$ at its end. These values are also consistent with the escape velocity from a Mercury-sized small planet (a few km.s$^{-1}$). We note that this is the first difference in comparison with \cite{Budaj1}, who used a dust-tail model with a cross-section of $0.01 \times 0.01~\mathrm{R}_\odot$ at the beginning and $0.09 \times 0.09~\mathrm{R}_\odot$ at the end.}
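As an illustration, Eq. (\ref{densitychangeA2}) can be evaluated along the tail. The sketch below (not the {\tt{Shellspec}} implementation; the linear growth of the cross-section side is our simplifying assumption) uses the adopted geometry, $A2 = -20$, and, as an example, the best-fitting $\rho(0)$ for 0.1-micron alumina grains from Table \ref{rhovalues}:

```python
import numpy as np

def ring_density(t, rho0, C0, C_t, t0=0.0, A2=-20.0):
    """Dust density along the ring, Eq. (2).
    t, t0 in radians; C0 and C_t are the cross-sections at t0 and t;
    A2 < 0 models dust destruction along the tail."""
    return rho0 * (C0 / C_t) * np.exp(np.abs(t - t0) * A2 / np.pi)

# Cross-section side assumed to grow linearly from 0.05 to 0.09 R_sun over 60 deg:
t = np.linspace(0.0, np.radians(60.0), 7)
side = 0.05 + (0.09 - 0.05) * t / np.radians(60.0)
rho = ring_density(t, rho0=1.175e-15, C0=0.05**2, C_t=side**2)
print(rho / rho[0])  # monotonically decreasing density profile along the tail
```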
In the {\tt{Shellspec}} code the central star with the defined object is placed in a 3D grid. The code enables the user to look at the grid from different points of view and to calculate the corresponding flux, always in the observer's line of sight. The orbit inclination angle $i$ corresponds to the inclination of the intrinsic rotation axis of the model to the line of sight. At each point of view we calculated the final flux as $f = (s+r)/s$, where $s$ is the modeled flux from the parent star alone, $s+r$ is the modeled flux from the parent star with the ring, and $f$ is the final, normalized flux from the system. In this way we also eliminated fluctuations due to the grid structure of objects in our model. The synthetic light curves were subsequently convolved with a box-car of 30-min width, simulating the integration time of the \textit{Kepler} long-cadence exposure. The convolved light curves were used for comparison with the observed light curve.
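The normalization and cadence smoothing described above can be sketched as follows (a schematic re-implementation, not the actual post-processing code; uniform phase sampling is assumed, and the 30-min width is converted to phase using the orbital period from Section \ref{intro}):

```python
import numpy as np

P_ORB_DAYS = 0.65356  # orbital period of Kepler-1520b

def normalize(flux_star_plus_ring, flux_star):
    """f = (s + r)/s: system flux divided by the star-only flux.
    The division also cancels grid-discretization fluctuations
    common to both model runs."""
    return flux_star_plus_ring / flux_star

def boxcar_30min(phase, flux):
    """Convolve a phase-sampled light curve with a 30-min box-car,
    simulating the Kepler long-cadence integration time."""
    width_phase = 0.5 / 24.0 / P_ORB_DAYS  # 30 min in units of phase
    dphi = phase[1] - phase[0]             # assumes uniform sampling
    n = max(int(round(width_phase / dphi)), 1)
    kernel = np.ones(n) / n
    return np.convolve(flux, kernel, mode="same")

# Toy check: a flat light curve stays flat after smoothing.
phase = np.linspace(-0.5, 0.5, 1001)
flux = np.ones_like(phase)
print(boxcar_30min(phase, flux)[500])  # 1.0
```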
For the modeling process we used an iterative procedure, which applied the {\tt{Shellspec}} code as a subroutine and searched for the best fit, similarly to \cite{Garai1}. Only one free parameter was adjusted during the fitting procedure: the dust density at the beginning of the ring, $\rho(0)$ [g.cm$^{-3}$]. The orbit inclination angle $i$ [$^{\circ}$] and the density exponent $A2$ were fixed during the fitting procedure to the values found by \cite{Budaj1}, i.e., $i = 82^{\circ}$ and $A2 = -20$. One more free parameter -- the transit midpoint phase shift of the synthetic light curve with respect to the observed light curve ($\Delta\varphi_0$) -- was adjusted only before the modeling process and then kept fixed at its best value. This parameter reflects the unknown mid-transit time of the planet. Every synthetic light curve was shifted in phase by $\Delta\varphi_0=-0.26$. The advantage of this treatment is that it saves computing time. A formal, quantitative goodness-of-fit was measured via $\chi^2$; the best fit corresponds to the minimum value of $\chi^2$. The calculations were performed at the wavelength of 0.6 micron for consistency with the \textit{Kepler} passband. We executed 18 fitting procedures -- one for each combination of species and dust particle size. We note that this is the second difference in comparison with \cite{Budaj1}, who used only four species (pyroxene with 40\% magnesium, enstatite, forsterite, and iron) and three particle sizes (0.01, 0.1, and 1 micron). The best-fitting $\rho(0)$ values are summarized in Table \ref{rhovalues}. The observed phase-folded and averaged transit light curve of Kepler-1520b, overplotted with the best-fitting models of alumina, is depicted, as an example, in Fig. \ref{transitmodel}, top left-hand panel. For better visibility of the effects, selected flux ratios between two species are also shown in the remaining panels of the same figure.
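Schematically, the one-parameter $\chi^2$ minimization looks as follows. In the paper the model function is a call to the {\tt{Shellspec}} code; here it is replaced by a deliberately simplistic stand-in (a density-proportional transit dip, a hypothetical toy, not our physical model), so only the fitting logic is illustrated:

```python
import numpy as np

def chi_square(observed, model, sigma=1.0):
    """Formal goodness-of-fit used to select the best-fitting rho(0)."""
    return np.sum(((observed - model) / sigma) ** 2)

def fit_rho0(phases, observed, model_fn, rho0_grid):
    """One-parameter grid search: evaluate the light-curve model for each
    trial rho(0) and keep the value minimizing chi-square."""
    chi2 = [chi_square(observed, model_fn(phases, r)) for r in rho0_grid]
    return rho0_grid[int(np.argmin(chi2))]

# Stand-in 'model': transit depth simply proportional to dust density.
def toy_model(phases, rho0):
    dip = np.where(np.abs(phases) < 0.05, rho0 / 1e-15 * 0.01, 0.0)
    return 1.0 - dip

phases = np.linspace(-0.5, 0.5, 201)
observed = toy_model(phases, 1.2e-15)               # synthetic 'data'
grid = np.linspace(0.5e-15, 2.0e-15, 16)
print(fit_rho0(phases, observed, toy_model, grid))  # ~1.2e-15
```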
We note that the models composed of enstatite are very similar to the forsterite models, and the olivine models are very similar to the models composed of pyroxene. Contrary to \cite{Garai1}, in this case we did not derive uncertainties in the fitted $\rho(0)$ values, because this requires a lot of computing time and the scientific goal of this part of the analysis is not to investigate the light curve of Kepler-1520b, which was already done by several authors (see e.g., \cite{Brogi1,Budaj1}), but to prepare the next step, which is modeling Kepler-1520b as a grazing, non-transiting planet. The second simplification made during this step was that we did not include the solid body of the planet in the model. This is justified because the planet's solid body, contrary to KOI-2700b, is negligible in comparison with the comet-like tail itself.
Although our scientific goal was not to study the transit light curve of Kepler-1520b, based on the obtained transit models we can conclude the following. (1) The modified ring model of the tail fits the observations well, except at the end of the transit egress, where the models appear to over-predict the light curve. This is very probably due to the dominant particles located at the end of the tail, which can have radii smaller than 0.01 micron. Fortunately, this feature does not affect our further conclusions. (2) The applied orbit inclination angle $i$ and density exponent $A2$, derived by \cite{Budaj1}, seem to be appropriate. (3) Based on modeling of the transit we cannot determine the chemical composition of the dust ejected by Kepler-1520b. (4) Based on modeling of the forward-scattering amplitude we can conclude that the typical particle at the beginning of the tail, where the concentration of the evaporating material is largest, is the 0.1-micron grain, if the particles are composed of alumina, enstatite, forsterite, olivine with 50\% magnesium, or pyroxene with 40\% magnesium. If the evaporating material is composed of alumina, olivine, pyroxene, or iron, then 1-micron grains are also possible. Models applying 0.01-micron grains do not satisfy the observed forward-scattering amplitude. This is in agreement with the results on the typical particle size obtained by \cite{Brogi1}, \cite{Budaj1}, and \cite{Bochinski1}.
\begin{table*}
\caption{The best-fitting $\rho(0)$ values as a result of the fitting procedure, where we modeled the observed phase-folded and averaged transit light curve of Kepler-1520b. These values were derived based on a tail model with a cross-section of $0.05 \times 0.05~\mathrm{R}_\odot$ at the beginning and $0.09 \times 0.09~\mathrm{R}_\odot$ at its end. $^{1}$With 50\% magnesium. $^{2}$With 40\% magnesium.}
\label{rhovalues}
\begin{tabular}{l|llllll}
\hline\noalign{\smallskip}
Particle size & Alumina & Enstatite & Forsterite & Olivine$^1$ & Pyroxene$^2$ & Iron\\
$r$ [micron] & \multicolumn{6}{c}{$\rho(0)$ ($\times 10^{-15}$) [g.cm$^{-3}$]}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
0.01 & 14.0 & 515.0 & 410.5 & 8.1 & 17.5 & 3.35\\
0.1 & 1.175 & 1.35 & 1.14 & 0.81 & 0.925 & 1.45\\
1 & 9.70 & 11.0 & 10.4 & 10.0 & 10.25 & 26.0\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table*}
\section{Kepler-1520b as a grazing, non-transiting exoplanet}
\label{grazingmod}
The brightenings in the light curve of Kepler-1520b are caused by forward scattering on dust particles in the tail \cite{Brogi1,Budaj1}. The most dominant out-of-transit light-curve feature of Kepler-1520b is the pre-transit brightening (see Fig. \ref{Cele}, right-hand panel), which has its highest-amplitude portion just before the transit. During the transit, extinction nullifies the effect of the forward scattering, which is therefore not visible at these orbital phases. After the transit event the forward scattering becomes visible again during the less dominant post-transit brightening. We can therefore conclude that, without transits, the forward-scattering peak would be the most dominant feature of the Kepler-1520b light curve. This means that it is theoretically possible to detect grazing, non-transiting disintegrating exoplanets via this signature alone, as pointed out by \cite{DeVore1}. The question is how different particle radii, species, observational channels, and other factors affect the amplitude of the forward-scattering peak and the detectability of the scattering event.
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_060.eps}
\includegraphics[width=60mm]{Alumina_per_Enstatite_060.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina_per_Olivine_060.eps}
\includegraphics[width=60mm]{Alumina_per_Iron_060.eps}}
\caption{Model light curves calculated for 0.01-micron, 0.1-micron, and 1-micron grains of alumina, compared with the observed phase-folded and averaged transit light curve of Kepler-1520b, are depicted in the top left-hand panel. Flux ratios between alumina and enstatite/forsterite, alumina and olivine/pyroxene, and alumina and iron, calculated for the orbital phases where the pre-transit brightening is visible, are depicted in the top right-hand, bottom left-hand, and bottom right-hand panels, respectively.}
\label{transitmodel}
\end{figure*}
To study these possibilities we took the orbital and planetary properties of Kepler-1520b discussed in Section \ref{transitmod} and changed only the orbital properties of the system to obtain a grazing, non-transiting scenario, in which the solid body of the planet does not transit. Subsequently, we performed a series of model calculations with the {\tt{Shellspec}} code. In this analysis we used the same optional object in the form of a modified ring as before, and assumed a spherical and limb-darkened central star with a radius of $R_\mathrm{s} = 0.66~\mathrm{R}_{\odot}$, mass of $M_\mathrm{s} = 0.76~\mathrm{M}_\odot$, and effective temperature of $T_\mathrm{eff} = 4677~\mathrm{K}$ \cite{Rappaport1}, located in the geometrical center of the ring. We modeled the comet-like tail as part of a ring with a radius of $a = 2.77~\mathrm{R}_{\odot}$. Its geometrical cross-section monotonically enlarges from the planet to the end of the ring, which is located at $60^{\circ}$ behind the planet. We used a dust-tail cross-section of $0.05 \times 0.05~\mathrm{R}_\odot$ at the beginning and $0.09 \times 0.09~\mathrm{R}_\odot$ at the end. The density exponent $A2$ was fixed as before, i.e., we used $A2 = -20$ in our calculations. The dust density at the beginning of the ring, $\rho(0)$, was calculated previously for the given cross-section of the tail (see Table \ref{rhovalues}); in this step we used these values as fixed parameters as well. The key fixed parameter is, however, the orbit inclination angle $i$. Taking into account the above-mentioned properties of the model, we calculated that a non-transiting scenario is possible if the orbit inclination angle is $i \leq 75^{\circ}$. In our analysis we finally decided to use the value of $i = 75^{\circ}$, which means that our hypothetical planet Kepler-1520b' is just non-transiting. We refer to this situation as the ``grazing, non-transiting scenario''.
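The adopted inclination threshold can be cross-checked with a simple impact-parameter estimate: the orbit misses the stellar disk when the projected distance at conjunction, $a\cos i$, exceeds $R_\mathrm{s}$ (plus any vertical extent of the occulting material). This back-of-the-envelope sketch ignores the full tail geometry treated by {\tt{Shellspec}}, and the 0.025 $\mathrm{R}_\odot$ half-thickness below is our illustrative choice (half of the tail's initial cross-section side); it reproduces a threshold close to the adopted $i \leq 75^{\circ}$:

```python
import numpy as np

R_STAR = 0.66  # stellar radius [R_sun]
A_ORB  = 2.77  # orbital radius [R_sun]

def max_nontransit_inclination(half_thickness=0.0):
    """Largest inclination [deg] for which the body (plus an optional
    vertical half-extent, e.g. a tail cross-section) misses the stellar
    disk: requires a*cos(i) > R_s + half_thickness."""
    return np.degrees(np.arccos((R_STAR + half_thickness) / A_ORB))

print(max_nontransit_inclination())       # ~76.2 deg for a point planet
print(max_nontransit_inclination(0.025))  # ~75.7 deg with a tail half-width
```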
We note that part of the dust tail is still transiting; however, this is not its densest part, which is responsible for the observed forward scattering. The calculations were performed at the wavelengths of 0.55, 0.70, 0.95, 1.25, and 1.65 microns for consistency with the planned \textit{Ariel} observational channels. We executed 90 model calculations -- one for each combination of wavelength, species, and dust particle size. Since in this case we used only fixed parameters, we needed neither a goodness-of-fit parameter nor an uncertainty determination. The corresponding model light curves, calculated for the \textit{Ariel} 0.55-micron observational channel, are depicted in selected panels of Figs. \ref{aluminafull} and \ref{ironfull}. For better visibility of the effects, flux ratios between two \textit{Ariel} observational channels are also shown in the remaining panels of the same figures. We note that the models composed of alumina are very similar to the olivine and pyroxene models, the enstatite and forsterite models are also very similar, and the models calculated at 1.25 and 1.65 microns do not differ significantly either.
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_055.eps}
\includegraphics[width=60mm]{Alumina055per070.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina055per095.eps}
\includegraphics[width=60mm]{Alumina055per125.eps}}
\centerline{
\includegraphics[width=60mm]{Enstatite_055.eps}
\includegraphics[width=60mm]{Enstatite055per070.eps}}
\centerline{
\includegraphics[width=60mm]{Enstatite055per095.eps}
\includegraphics[width=60mm]{Enstatite055per125.eps}}
\caption{Model light curves of the disintegrating exoplanet Kepler-1520b in a grazing, non-transiting regime with forward scattering only. Predicted photometry for different dust species and particle sizes, calculated for the \textit{Ariel} 0.55-micron observational channel. Flux ratios between the \textit{Ariel} 0.55- and 0.70-micron, 0.55- and 0.95-micron, and 0.55- and 1.25-micron observational channels are also shown. The same forward-scattering signature can be applied in searching for grazing, non-transiting evaporating planets.}
\label{aluminafull}
\end{figure*}
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Iron_055.eps}
\includegraphics[width=60mm]{Iron055per070.eps}}
\centerline{
\includegraphics[width=60mm]{Iron055per095.eps}
\includegraphics[width=60mm]{Iron055per125.eps}}
\caption{As in Fig. \ref{aluminafull}, but for iron grains.}
\label{ironfull}
\end{figure*}
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{AlEnFo_amplitude.eps}
\includegraphics[width=60mm]{OlPyIr_amplitude.eps}}
\caption{Comparison of the forward-scattering amplitudes of alumina, enstatite, forsterite (left-hand panel), olivine, pyroxene, and iron grains (right-hand panel), calculated for the \textit{Ariel} observational channels.}
\label{amplitude}
\end{figure*}
Based on the obtained models we can draw several conclusions. (1) The first and most important conclusion is that there are no significant differences among the selected species. (2) More significant differences appear in the forward-scattering amplitude, as is clearly visible in the panels of Fig. \ref{amplitude}. We can see that a change in the observational channel affects mainly the scattering amplitude of 0.1-micron grains: the 0.1-micron model becomes less dominant with increasing wavelength. The most dominant is the 1-micron model; these particles generate forward-scattering peaks with the largest amplitude, which is only slightly affected by the observational channel, even though it is somewhat less dominant at the shortest wavelength. The 0.01-micron grains have similar properties, with the difference that theirs is the least dominant model. These conclusions are in agreement with Fig. 13 presented in \cite{Croll1} (dimensionless grain size). (3) A second group of conclusions concerns the duration of the forward scattering. From this viewpoint we can divide the models as follows. In the first group is the 1-micron model; this model is narrow, and the duration of the scattering event is from 0.2 to 0.4 in units of phase. In the second group are the 0.1- and 0.01-micron models; these are not as narrow as the 1-micron model, have wider wings near the continuum, and the duration of the scattering event is from 0.4 to 0.6 in units of phase.
Concerning the detectability of a grazing, non-transiting disintegrating planet, creating a comet-like tail from the evaporating material, we conclude the following. (1) The most difficult case is evaporating material composed of 0.01-micron grains, which produces the smallest forward-scattering peak at any of the \textit{Ariel} wavelengths. (2) If the evaporating material is composed of 0.1-micron grains, observations at 0.55 micron are the best way to detect such a disintegrating planet. (3) In the most ideal case the evaporating material is composed of 1-micron grains, which would be the most detectable material at any of the \textit{Ariel} wavelengths. For example, \cite{Brogi1} found that the typical particle size in the comet-like tail of Kepler-1520b (near the planet) is 0.1 micron. If Kepler-1520b were a grazing, non-transiting planet with otherwise the same properties, we could very probably detect this object with \textit{Ariel} only at 0.55 micron. On the other hand, it is clear that we have no information about the grain size of unknown disintegrating planets; therefore, multiwavelength observations are needed to increase the probability of detections.
\section{Discussion}
\label{discuss}
In the previous section we did not take into account several accompanying factors that could affect the detectability of a grazing, disintegrating planet. In this section we discuss three of them, which we consider very important: white noise, the orbit inclination angle, and the broad-band nature of the photometry.
\subsection{White noise}
White noise is a very important limiting factor in photometric observations. Since there are no significant differences among the selected species, we worked only with the models calculated for alumina, to which we added white noise. White noise has zero mean, constant variance, and is uncorrelated in time. Since the tabulated \textit{Ariel} noise parametrisation\footnote{Obtained from Gy. M. Szab\'{o} via private communication, \url{szgy@gothard.hu}.} gives the total white noise calculated for a 30-min integration, we rebinned the model data using a 30-min-wide window and tried the following 30-min noise levels: $N_\mathrm{1/2h} = 3 \times 10^{-6}$ (3 ppm), $3 \times 10^{-5}$ (30 ppm), $3 \times 10^{-4}$ (300 ppm), $3 \times 10^{-3}$ (3000 ppm), $3 \times 10^{-2}$ (30~000 ppm = 3\%), and $3 \times 10^{-1}$ (300~000 ppm = 30\%). The corresponding model light curves are depicted in the panels of Figs. \ref{noise106} and \ref{noise104}. We note that the models calculated at 1.25 and 1.65 microns do not differ significantly, and the models with white noise of $N_\mathrm{1/2h} \geq 3 \times 10^{-3}$ ($\geq 3000$ ppm) are below the detection limit. Based on these models we conclude the following. (1) If the evaporating material is composed of 1-micron grains of alumina, the 30-min integrated noise in the data should be $N_\mathrm{1/2h} \leq 10^{-4}$ ($\leq 100-900$ ppm) to detect a Kepler-1520b-like disintegrating planet in a grazing, non-transiting regime; in this case the observational wavelength does not matter. (2) If the evaporating material is composed of 0.1-micron grains of alumina, the 30-min integrated noise in the data should be $N_\mathrm{1/2h} \leq 10^{-5}$ ($\leq 10-90$ ppm) for a convincing detection; in this case the detectability is strongly affected by the observational wavelength and, as we concluded in Section \ref{grazingmod}, observations at 0.55 micron could be the most productive.
(3) Evaporating material composed of 0.01-micron grains of alumina cannot be detected even with noise of $N_\mathrm{1/2h} = 3 \times 10^{-6}$ (3 ppm).
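The rebinning and noise injection described above can be sketched as follows (a schematic reconstruction, not the code actually used; uniform phase sampling and a flat toy light curve are our assumptions):

```python
import numpy as np

P_ORB_DAYS = 0.65356  # orbital period of Kepler-1520b

def rebin_30min(phase, flux):
    """Average the model into 30-min bins, matching the cadence for which
    the Ariel noise parametrisation is tabulated."""
    width = 0.5 / 24.0 / P_ORB_DAYS  # 30 min in units of phase
    edges = np.arange(phase.min(), phase.max() + width, width)
    idx = np.digitize(phase, edges) - 1
    binned = np.array([flux[idx == i].mean() for i in range(len(edges) - 1)])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, binned

def add_white_noise(flux, n_half_hour, rng=None):
    """Add zero-mean, uncorrelated Gaussian noise of the given 30-min level."""
    rng = np.random.default_rng(rng)
    return flux + rng.normal(0.0, n_half_hour, size=flux.shape)

phase = np.linspace(-0.5, 0.5, 2001)
flux = np.ones_like(phase)                 # flat toy light curve
pc, fb = rebin_30min(phase, flux)
noisy = add_white_noise(fb, 3e-5, rng=1)   # 30 ppm noise level
print(noisy.std())                         # scatter close to the 3e-5 input level
```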
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_055_106.eps}
\includegraphics[width=60mm]{Alumina_070_106.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina_095_106.eps}
\includegraphics[width=60mm]{Alumina_125_106.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina_055_105.eps}
\includegraphics[width=60mm]{Alumina_070_105.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina_095_105.eps}
\includegraphics[width=60mm]{Alumina_125_105.eps}}
\caption{Model light curves of the disintegrating exoplanet Kepler-1520b in a grazing, non-transiting regime with forward scattering and white noise (part I). Predicted photometry for different \textit{Ariel} observational channels and particle sizes of alumina.}
\label{noise106}
\end{figure*}
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_055_104.eps}
\includegraphics[width=60mm]{Alumina_070_104.eps}}
\centerline{
\includegraphics[width=60mm]{Alumina_095_104.eps}
\includegraphics[width=60mm]{Alumina_125_104.eps}}
\caption{Model light curves of the disintegrating exoplanet Kepler-1520b in a grazing, non-transiting regime with forward scattering and white noise (part II). Predicted photometry for different \textit{Ariel} observational channels and particle sizes of alumina.}
\label{noise104}
\end{figure*}
We note that other species could have similar detectability properties. In the case of Kepler-1520b in a grazing, non-transiting scenario with a typical particle size of 0.1 micron, observed at 0.55 micron, the 30-min integrated noise in the data should be $N_\mathrm{1/2h} \leq 10^{-5}$ ($\leq 10-90$ ppm) for a convincing detection. We note, however, that the calculations do not contain red noise, which has zero mean, constant variance, and is serially correlated in time; therefore, the real precision needed for detections may be higher. On the other hand, longer photometric runs registering several orbits of such a disintegrating exoplanet could improve the observational precision via phase-folding the obtained data. Based on the tabulated \textit{Ariel} noise parametrisation\footnote{The expected noise spans from $N_\mathrm{1/2h} = 9 \times 10^{-4}$ (900 ppm) for the faintest stars ($K = 13.5$ mag) up to $N_\mathrm{1/2h} = 4 \times 10^{-5}$ (40 ppm) for the brightest stars ($K = 3.6$ mag).} we can conclude that the predicted white noise, calculated for a 30-min integration, seems to be sufficient to detect some disintegrating exoplanets in a grazing, non-transiting regime as well. The \textit{Ariel} core survey will monitor about 1000 stars, and the list of potential targets can be found in \cite{Edwards1}. If we require $N_\mathrm{1/2h} \leq 10^{-5}$ ($\leq 10-90$ ppm, $K < 10.0$ mag, $d < 100$ pc) for a convincing detection, roughly 50\% of the \textit{Ariel} core survey targets\footnote{See the top left-hand panel of Fig. 2 in \cite{Edwards1}.} might be expected to have a sufficient signal-to-noise ratio for the detection of a Kepler-1520b-like disintegrating planet in a grazing, non-transiting regime. This optimistic number will, however, be reduced by other factors as well (e.g., by the orbit inclination angle).
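The phase-folding idea mentioned above can be sketched as follows (an illustrative Python snippet with an assumed period and bin count; averaging $N$ orbits suppresses uncorrelated white noise by roughly $\sqrt{N}$, but not red noise):

```python
import numpy as np

def phase_fold(t, flux, period, n_bins=50):
    """Fold a light curve on a known orbital period and average the flux
    in phase bins; co-adding many orbits beats down white noise."""
    phase = (t % period) / period
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    centers = (np.arange(n_bins) + 0.5) / n_bins
    folded = np.array([flux[idx == k].mean() for k in range(n_bins)])
    return centers, folded
```

Folding 40 orbits of a flat light curve with 1000 ppm white noise, for instance, leaves a folded scatter of only a few hundred ppm at most.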
\subsection{The orbit inclination angle}
Planetary systems also have one more degree of freedom: the orbit inclination angle. In the previous section we investigated only the case of $i = 75^{\circ}$, but as \cite{DeVore1} pointed out, the amplitude of the forward scattering decreases with decreasing inclination angle. To illustrate the effect of a change in the orbit inclination angle on the light curves, we selected the models calculated for alumina at 0.55 micron without white noise (see Fig. \ref{aluminafull}, top left-hand panel), changed only the orbit inclination angle from $i = 75^{\circ}$ to $i = 73^{\circ}$, and recalculated these models. The recalculated model light curves are depicted in the left-hand panel of Fig. \ref{incl}. To better visualize the effect, flux ratios between the original and the recalculated light curves are also shown in the right-hand panel of the same figure.
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_055_73deg.eps}
\includegraphics[width=60mm]{Alumina_055_75per73deg.eps}}
\caption{Model light curves of the disintegrating exoplanet Kepler-1520b in a grazing, non-transiting regime with forward scattering only. Predicted photometry for different particle sizes of alumina, calculated for the orbit inclination angle of $i = 73^{\circ}$, and for the \textit{Ariel} wavelength of 0.55 micron (left-hand panel). Flux ratios between the light curves calculated for $i = 75^{\circ}$ and $i = 73^{\circ}$ are also shown (right-hand panel).}
\label{incl}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[width=60mm]{Al_incl_amplitude.eps}
\includegraphics[width=60mm]{Al_radius_amplitude.eps}}
\caption{Comparison of the forward-scattering amplitudes of alumina grains, calculated for the \textit{Ariel} wavelength of 0.55 micron, and for different orbit inclination angles (left-hand panel). The same comparison, but the orbit radius is also changing, at the same time the impact parameter remains roughly constant (right-hand panel).}
\label{inclandimpact}
\end{figure*}
Although this is only an illustrative example, we can clearly see how this small change of $\Delta i = 2^{\circ}$ in the orbit inclination angle affects the light curves. This factor changed mainly the scattering amplitude of the 1-micron model. The 0.1-micron model calculated at $i = 73^{\circ}$ is very similar to the original model with the orbit inclination angle of $i = 75^{\circ}$. This is also true for the 0.01-micron model, even though this light curve is below the detection limit. We can also see that the models calculated for 0.1- and 1-micron grains now have comparable amplitudes. We can distinguish them based on the ``sharpness'' of the light curves. In Section \ref{grazingmod} we concluded that in the most ideal case the evaporating material is composed of 1-micron grains, which could be the most detectable material at any of the \textit{Ariel} wavelengths. Here we can see (Fig. \ref{inclandimpact}, left-hand panel) that this is valid only in the case where the disintegrating planet is just non-transiting. With decreasing inclination angle, the forward-scattering amplitude of 1-micron grains also decreases rapidly. The forward-scattering amplitude of 0.1-micron grains shows a similar trend with decreasing inclination angle, but the rate of change in the flux is smaller.
Assuming a circular orbit, we can also change both the orbit inclination angle and the orbit radius such that the impact parameter, defined as the projected distance between the planet and the star centre, remains roughly constant. As depicted in the right-hand panel of Fig. \ref{inclandimpact}, a change in the orbit radius/inclination angle affects mainly the model composed of 1-micron grains: it is less dominant at smaller distances from the parent star, but its forward-scattering amplitude increases with the orbit radius. A small decrease in the forward-scattering amplitude of 0.1-micron grains is visible with increasing orbit radius/inclination angle. The scattering amplitude of 0.01-micron grains is not affected by the orbit radius/inclination angle. We can therefore conclude that, from this point of view, grazing, non-transiting disintegrating planets with longer orbital periods could be favored over ultra-short-period planets.
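The constant-impact-parameter geometry used above can be illustrated with a short sketch (assuming a circular orbit and the standard transit-geometry relation $b = (a/R_\ast)\cos i$; the numerical value of $a/R_\ast$ is purely illustrative):

```python
import math

def impact_parameter(a_over_rstar, incl_deg):
    """b = (a/R*) cos(i): projected planet-star centre distance, in
    stellar radii, at mid-transit for a circular orbit."""
    return a_over_rstar * math.cos(math.radians(incl_deg))

def a_for_constant_b(b, incl_deg):
    """Scaled orbit radius a/R* that keeps the impact parameter fixed."""
    return b / math.cos(math.radians(incl_deg))
```

With an assumed $a/R_\ast = 4.3$ and $i = 75^{\circ}$, the planet centre passes just above the stellar limb ($b > 1$), and the second function gives the orbit radius that preserves this grazing geometry at another inclination.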
\subsection{The broad-band nature of the photometry}
The last factor we investigated is the broad-band nature of the photometry. The light-curve models presented in Section \ref{grazingmod} were calculated for a single wavelength, i.e., for the central wavelength of each \textit{Ariel} passband; however, the real \textit{Ariel} observations will be taken as broad-band observations. For example, the \textit{Ariel} channel called ``VISPhot'' spans from 0.50 to 0.60 micron. To see the effect of this factor on the light curves, we again selected the models calculated for alumina at 0.55 micron and $i = 75^{\circ}$ without white noise (see Fig. \ref{aluminafull}, top left-hand panel), and recalculated these models for the above-mentioned broad-band \textit{Ariel} ``VISPhot'' channel by integrating over the wavelengths from 0.50 to 0.60 micron. The recalculated model light curves are depicted in the left-hand panel of Fig. \ref{broad}. To better visualize the effect, flux ratios between the original and the recalculated light curves are also shown in the right-hand panel of the same figure.
\begin{figure*}
\centering
\centerline{
\includegraphics[width=60mm]{Alumina_05-06.eps}
\includegraphics[width=60mm]{Alumina_055perwide.eps}}
\caption{Model light curves of the disintegrating exoplanet Kepler-1520b in a grazing, non-transiting regime with forward scattering only. Predicted photometry for different particle sizes of alumina, calculated for the broad-band \textit{Ariel} channel, called "VISPhot" (left-hand panel). Flux ratios between the light curves calculated for the idealized and real broad-band "VISPhot" channel are also shown (right-hand panel).}
\label{broad}
\end{figure*}
As we can see, the effect of this factor is similar to that of the orbit inclination angle. The most affected light curve is the model calculated for 1-micron grains, where the scattering amplitude is reduced significantly in comparison with the original model calculated for the central wavelength of the passband. In comparison with the original models, the light curves calculated for 0.1- and 0.01-micron grains are relatively unchanged. Thus, we can again conclude that although the 1-micron model is theoretically the easiest to measure with \textit{Ariel}, it is also the most sensitive to different factors. Therefore, the real detectability of a Kepler-1520b-like disintegrating planet in a grazing, non-transiting regime will strongly depend on this and similar factors not investigated here (e.g., mixing of the grain sizes in the dust tail, red noise, etc.).
\section{Conclusions}
\label{conc}
In our case study we took the disintegrating exoplanet Kepler-1520b and changed the orbital properties of the system to obtain a grazing, non-transiting orbit scenario, in which the planet is just non-transiting but part of the dust tail is still transiting, and investigated how different particle radii, species, observational channels, and other factors affect the amplitude of the forward-scattering peak and the detectability of the scattering event. We analyzed the following particle radii and species: 0.01-, 0.1-, and 1-micron grains of alumina, enstatite, forsterite, olivine with 50\% magnesium, pyroxene with 40\% magnesium, and iron. Using the {\tt{Shellspec}} code we calculated several synthetic light-curve models at the wavelengths of the planned \textit{Ariel} space observatory, i.e., at 0.55, 0.70, 0.95, 1.25, and 1.65 microns. Our most important conclusions are the following.
(1) There are no significant differences among the selected species. The models are very similar from this viewpoint.
(2) The most dominant model is the 1-micron model; these particles generate forward-scattering peaks with the largest amplitude. This model is narrow and the duration of the scattering event is from 0.2 to 0.4 in units of phase. The scattering amplitude is only slightly affected by the observational channel, therefore 1-micron grains could be the most detectable material at any of the \textit{Ariel} wavelengths. On the other hand, the 1-micron model is the most sensitive to different factors. A small change in the orbit inclination angle or the orbit radius, or the broad-band nature of the photometry, can cause a significant decrease in the scattering amplitude; therefore, the real detectability will strongly depend on these and similar factors not investigated here (e.g., mixing of the grain sizes in the dust tail, red noise, etc.).
(3) A change in the observational channel affects mainly the scattering amplitude of 0.1-micron grains. This model becomes less dominant with increasing wavelength, therefore observations at 0.55 micron are the best way to detect such a disintegrating planet. In this case the orbit inclination angle, the orbit radius, the broad-band nature of the photometry, and similar factors do not affect the scattering amplitude significantly. The duration of the scattering event is longer than in the previous case, with wider wings near the continuum.
(4) 0.01-micron grains generate a long-duration scattering event with a very small forward-scattering amplitude, which is below the detection limit.
(5) Furthermore, we investigated the impact of white noise on the detectability. In the case of Kepler-1520b in a grazing, non-transiting scenario with a typical particle size of 0.1 micron, observed at 0.55 micron, the 30-min integrated noise in the data should be $N_\mathrm{1/2h} \leq 10^{-5}$ ($\leq 10-90$ ppm) for a convincing detection. Roughly 50\% of the \textit{Ariel} core survey targets might be expected to have a sufficient signal-to-noise ratio; however, this optimistic number will be reduced by the other factors described above.
Based on our results we can assume that forward scattering generated by 0.1- and 1-micron grains, evaporating from disintegrating exoplanets and creating a comet-like tail, will be detectable by \textit{Ariel}, and that it will be possible to investigate not only transiting but also grazing, non-transiting exoplanets based on the forward scattering. For such objects the big advantage of \textit{Ariel} will be the possibility of multiwavelength observations.
\begin{acknowledgements}
I thank Prof. Gy. M. Szab\'{o} for the technical assistance, comments, and discussions. I also thank the anonymous reviewers for helpful comments and corrections. This work was supported by the Hungarian NKFI grant No. K-119517 and the GINOP grant No. 2.3.2-15-2016-00003 of the Hungarian National Research, Development and Innovation Office, by the City of Szombathely under agreement No. 67.177-21/2016, and by the VEGA grant of the Slovak Academy of Sciences No. 2/0031/18.
\end{acknowledgements}
Automotive expert Mark Boudreau demonstrates how to polish a scratch out of a car.
Mark Boudreau: Hi, my name is Mark Boudreau and I am with Spectrum Auto Painting & Collision Center in Arlington, Virginia. Today, I will be showing you how to polish scuffs and scratches out of your vehicle's finish, both by hand and with a machine buffer. From determining which scratches can and cannot be polished out of your vehicle's finish to what techniques and materials you are going to need to polish your car, I will show you everything from start to finish. Here is what you are going to need: liquid dishwashing soap, clean rags, rubbing compound, polish, wax, a machine buffer and a little elbow grease. Let's talk about some safety precautions. Make sure the vehicle you are working on is off, parked in the shade and free from traffic. Now, let me tell you a little bit about myself. My name is Mark Boudreau and I am an I-CAR trained, ASE Master Certified collision and refinish technician. I own Spectrum Auto Painting & Collision Center in Arlington, Virginia, just outside of Washington, D.C., where for 15 years we have been D.C.'s premiere Automotive Reconstructive Surgery Center. Now, let's get started fixing some scuffs and scratches in your vehicle's finish with polish.
Forum thread (Biostars, https://www.biostars.org/p/393877/#393908):

Appropriate gene IDs for enrichment analysis (with clusterProfiler)
Tags: rna-seq, R, clusterProfiler

ayatrience:
Hi, I am trying to do GO & KEGG enrichment analysis using the R package clusterProfiler. I converted gene IDs (ENSEMBL to UniProt) with the "bitr" function for the KEGG enrichment analysis. However, "bitr" sometimes returned multiple IDs for a single gene (I converted to ENTREZ IDs at the same time). I thought I should pick one ID out of the multiple IDs returned for a single gene before the enrichment analysis. So my question is: how do people select the appropriate ID from multiple returns? Do I need to do it manually, confirming each returned ID on the UniProt website (e.g., judging from the annotation score)? That is very laborious. How does everyone deal with this problem? (Or do we not need to pick just one per gene in the first place?)

Answer (Barry Digby):
Follow the detailed workflow here: https://github.com/twbattaglia/RNAseq-workflow

    # Add ENTREZ ID
    results$entrez <- mapIds(x = org.Mm.eg.db,
                             keys = row.names(results),
                             column = "ENTREZID",
                             keytype = "SYMBOL",
                             multiVals = "first")

For starters, don't bother converting ENSEMBL to UniProt. In the guide, the user has set multiVals = "first", which means: "when there are multiple matches only the 1st thing that comes back will be returned. This is the default behavior." I have seen this used quite a lot in workflows, so assumed it is OK. If you want to set it to something else, check out the multiVals argument here: https://www.rdocumentation.org/packages/AnnotationDbi/versions/1.30.1/topics/AnnotationDb-objects

(EDIT): When you get a handle on that workflow, move to this one: https://yulab-smu.github.io/clusterProfiler-book/chapter12.html

Reply (ayatrience):
It seems best to pick one. If I want to use the UniProt annotation score instead, I can write another script using a function passed to "multiVals". I will follow your script and workflow. Thank you very much!
<?php
namespace Events;
/**
* Proxy all calls and property access to an object and sends events for each actions
*
* Events:
* - __get : get
* - __set : set
* - __isset : isset
* - __unset : unset
* - __call : call
*
* (__toString() and __invoke() are also proxied but are notified as "call")
*/
class EventProxy
{
/** @var object */
protected $object;
/** @var Notifier */
protected $notifier;
/**
* @param object $object The object to proxy calls to
* @param EventDispatcher $dispatcher
*/
public function __construct($object, EventDispatcher $dispatcher)
{
$this->object = $object;
$this->notifier = new Notifier($dispatcher, $object, 'proxy.');
}
public function __get($property)
{
$this->notifier->notify('get', array('property' => $property));
return $this->object->$property;
}
public function __set($property, $value)
{
$event = $this->notifier->notify('set', array('property' => $property, 'value' => $value));
$this->object->$property = $event->getReturnValue($value);
}
public function __isset($property)
{
$this->notifier->notify('isset', array('property' => $property));
return isset($this->object->$property);
}
public function __unset($property)
{
$this->notifier->notify('unset', array('property' => $property));
unset($this->object->$property);
}
public function __toString()
{
return $this->__call('__toString', func_get_args());
}
public function __invoke()
{
return $this->__call('__invoke', func_get_args());
}
public function __call($method, $args)
{
$event = $this->notifier->notify('call', array('method' => $method, 'args' => $args));
return call_user_func_array(array($this->object, $method), $event->getReturnValue($args));
}
}
Born on 19th February 1986 in Brazil, Marta Vieira da Silva is a Brazilian soccer player who plays for FC Rosengård in the Swedish Damallsvenskan and for the national team of Brazil as a forward. After surpassing Birgit Prinz's previous record of 14 goals, against South Korea, Marta currently holds the record for the most goals scored at the FIFA Women's World Cup tournament, with 15 goals. She has earned a number of achievements throughout her soccer career.
Rebekah "Becky" Dyroen-Lancer was born on 19th February 1971. She is a synchronized swimmer and an Olympic champion representing America.
Myiodynastes hemichrysus is a species of bird in the family Tyrannidae.
Distribution
The species is found in Costa Rica and Panama.
References
Myiodynastes
Violins are tuned by turning the pegs the strings are wrapped around. Tightening the string raises the pitch; loosening the string lowers the pitch.
Many violins also have fine tuners mounted on each string near the tailpiece. With these the tension of the string can be adjusted more precisely by turning the small screw.
Violins are tuned G-D-A-E, with G being the lowest string.
The easiest way to tune a violin is with an inexpensive digital tuner. Many not only create a note to tune to but also have a display that indicates which direction the string needs to be tuned and when it's tuned correctly.
An alternative method is to use a pitch pipe, piano, or other instrument to create the pitches to tune to.
Experienced players sometimes use a tuning fork. An A-440 tuning fork creates the pitch for the A string. From that reference point a violinist tunes each of the other strings, recognizing the correct intervals between them.
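As a quick illustration, the frequencies of the four open strings can be derived from A-440 by perfect fifths — this is tuning by ear with pure (Pythagorean) fifths; an equal-tempered reference such as a piano gives slightly different values:

```python
# Open-string frequencies from A4 = 440 Hz, tuned by perfect fifths
# (frequency ratio 3:2).  Equal-tempered values differ slightly
# (G3 = 196.00 Hz, D4 = 293.66 Hz, E5 = 659.26 Hz).
A4 = 440.0
FIFTH = 3.0 / 2.0

E5 = A4 * FIFTH      # E string: a fifth above A
D4 = A4 / FIFTH      # D string: a fifth below A
G3 = D4 / FIFTH      # G string: a fifth below D

print(f"G3 = {G3:.2f} Hz, D4 = {D4:.2f} Hz, A4 = {A4:.2f} Hz, E5 = {E5:.2f} Hz")
```

This prints G3 = 195.56 Hz, D4 = 293.33 Hz, A4 = 440.00 Hz, E5 = 660.00 Hz.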
\section{Introduction}
The dynamical evolution of a perturbation in a black hole background \cite{kk, nol}
mainly consists of three stages: first, an initial wave
burst which depends on the original form of the field perturbation;
second, a damped oscillation whose frequency and damping
depend entirely on the background spacetime and not on the perturbing
field. The final stage involves a power law tail behaviour of the waves at very
late times. It is the second stage which is of most importance in
studying black hole physics. This is characterized by a discrete set
of frequencies termed quasinormal frequencies. On perturbing the black
hole background, either through external fields or via metric
perturbation, the black hole system responds by emitting damped
oscillations \cite{kk,nol}. The frequencies consist of a real part
which represents the frequency of the oscillations and an imaginary
part representing the damping. For a large class of black holes, the
equations governing the perturbations can be cast into the
Schr\"{o}dinger like wave equation. For asymptotically flat
space-times, the quasinormal modes (QNMs) are solutions of the
corresponding wave equation with complex frequencies which are purely
ingoing at the horizon and purely outgoing at spatial infinity
\cite{rg,vish}. One of the most important interests in studying
QNMs arises from the possibility of detecting
them in gravitational wave detectors \cite{sterio, fera}. Apart
from that, QNMs have been found to be a useful probe of the
underlying space-time geometry. QN frequencies carry unique
information about black hole parameters \cite{card_thes}. It has also been shown that quasinormal modes in Anti-de Sitter
space-time appear naturally in the description of the corresponding
dual conformal field theories living on the boundary \cite{danny,
danny1}. Therefore, many researchers found it interesting to study
QNMs in the background of asymptotically AdS black holes
\cite{horo, card1}. The QNMs of black holes in
asymptotically de Sitter spaces were also studied accordingly
\cite{kono3}, since there
is evidence that our universe is governed by a positive cosmological
constant \cite{perl}. Let us also mention here for the sake of
completeness that in spite of their classical origin, it has been
suggested that QNMs might provide a glimpse into the quantum nature
of the black hole \cite{hod}.
Black hole QNMs have been studied for a long time. Many
studies have been made of perturbations of black hole backgrounds by
fields of integer spin (scalar, electromagnetic and
gravitational). Compared to this, the study of Dirac QNMs in black
hole backgrounds is rather limited. The Dirac QNMs of the (2+1)-
dimensional BTZ black hole in AdS space were studied both numerically
and analytically in \cite{danny, card1}. Studying Dirac QNMs in four
dimensions started with the work of Cho \cite{cho} who first studied
massive and massless Dirac QNMs in a Schwarzschild black hole
background using the WKB approach \cite{will,will2} followed by Jing
\cite{jing}, who modified the approach using the continued fraction
\cite{lever} and Hill determinant methods \cite{hill}. For the massless
Dirac field, it was shown that the real part of the QN frequency
increases with the multipole number $l$ while the imaginary part
increases with the overtone numbers $n$. For the massive Dirac field
the real part of the QN frequency increases with the mass of the
field, while the damping decreases, which implies that it would be
possible to detect QN frequencies due to perturbations of black hole
backgrounds with massive Dirac fields, since fields with higher masses
will decay more slowly. The high overtones of Dirac perturbations of a
Schwarzschild black hole were studied in \cite{khc}. Then there were
lots of similar works on the Dirac perturbations of four dimensional
black holes both in asymptotically flat and non-flat black hole
backgrounds \cite{wu,jing3}.
All the abovementioned studies of Dirac QNMs were done in four
dimensional black hole backgrounds, while the study of Dirac
perturbations in higher dimensional black hole backgrounds is even more
limited. String theory requires the existence of extra spatial
dimensions, and for a long time it was thought that these extra spatial
dimensions must be tightly curled up, with a radius of curvature
around the string scale. However, it was later
realized that these extra spatial dimensions need not be of the order
of the string scale but can be as large as a few millimeters \cite{antoniadis1,led,antoniadis}.
An important feature of large extra dimensions is that they imply that the fundamental Planck
scale is much lower than the four dimensional Planck scale, in fact it
might be of the order of a few TeV. This provides a tool for looking into
stringy effects which might become observable at the LHC. One of the most
interesting effects will be the production of microscopic black holes
at the LHC. Since gravity will be sensitive to macroscopic extra
dimensions, these black holes produced at the colliders will
essentially be higher dimensional. The study of the QNMs of this kind of
black hole, projected onto the four dimensional brane, with
perturbations due to brane localized standard model fields, was made
in \cite{kanti1, kanti2, zhidGB}. It was shown that an increase in the number of
hidden extra spatial dimensions dampens the QN frequencies produced
via perturbations of all kinds of brane localized standard model
fields. The presence of charge $Q$ in the black hole background also
affects the QN spectrum and it is significantly different from the
behavior of charged black holes in four dimensions.
However, the study of Dirac QNMs of purely higher dimensional black
holes was absent from the literature until the work of Cho et al.
\cite{split}. In that work they studied the QNMs of Schwarzschild
black holes using the conformal properties of the spinor field. Such an
idea is perfectly general and in principle can be applied to all
higher dimensional spherically symmetric black holes. In this paper we
will use the abovementioned idea to study QNMs of
higher dimensional charged black holes arising out of the Einstein-Hilbert
action, as well as from higher derivative corrections to such actions.
In particular we will study the QNMs of higher dimensional
Reissner-Nordstr\"{o}m type black holes \cite{mype} and the charged Gauss-Bonnet black holes \cite{zwei,wheeler,wheeler1,wiltsh}. The motivation of the paper is threefold, namely:
\begin{itemize}
\item To study Dirac QNMs in charged black hole backgrounds in higher dimensions. As we have mentioned, the study of Dirac perturbations in higher dimensions is rather limited; this paper tries to fill a gap in the literature by studying Dirac QNMs in purely higher dimensional charged black hole backgrounds, both in the framework of general relativity and in its higher derivative corrected scenario.
\item To compare the results for charged black hole QNMs in two different scenarios, namely black holes arising out of the Einstein-Hilbert action and those arising in higher derivative gravities, specifically the Gauss-Bonnet black holes, under Dirac perturbations. We also study the behaviour of the QN frequency with the Gauss-Bonnet coupling $\alpha$ for the charged Gauss-Bonnet black hole.
\item To compare our results with the available results for brane localized black holes studied by Kanti et al. \cite{kanti1, kanti2} and Zhidenko \cite{zhidGB}.
\end{itemize}
The plan of the paper is as follows: in the next section we briefly discuss the Reissner-Nordstr\"{o}m type solutions and the charged Gauss-Bonnet solution in dimensions $d>4$. In section 3 we present a brief discussion of the WKB method along with a comparative study of the QNMs of Reissner-Nordstr\"{o}m and charged Gauss-Bonnet
black holes. Section 4 deals with the quasinormal modes in the large multipole number limit, and in section 5 we conclude the paper with a brief discussion of future directions. Finally, in the appendix we give a brief review of the Dirac equation in a higher dimensional curved background, following \cite{split}.
\section{Charged Black Holes in Higher Dimensions}
In this section we will discuss the charged black holes in higher dimensions. As mentioned, we will study charged black holes arising out of Einstein's theory of general relativity and those arising out of Einstein-Gauss-Bonnet theory. Let us first discuss the Reissner-Nordstr\"{o}m type solutions in higher dimensional Einstein gravity.
\subsection{Reissner-Nordstr\"{o}m type solutions}
One of the main goals of theoretical physics is to find a theory which unifies gravity with all the other fundamental forces in nature. String theory is one of the candidates for such a unified theory, and it predicts that we live in a world with more than $3+1$ dimensions. In this context, the study of black hole physics is an important area. In this section we discuss higher dimensional black holes which are static and spherically symmetric. The study of such black holes began with the work of Tangherlini \cite{tnln} and was continued later by Myers and Perry \cite{mype}.
Let us start with the static spherically symmetric metric
\begin{eqnarray}
ds^2=-f(r)dt^2+f^{-1}(r)dr^2+r^2d\bar\Omega_{d-2}^2, \label{methighd}
\end{eqnarray}
The vacuum Einstein equation then implies
\begin{eqnarray}
f=\left(1-\frac{2\mu}{r^{d-3}}\right),
\end{eqnarray}
provided that $d\ge 4$. The parameter $\mu$ is a constant of integration and is related to the mass $M$ of the black hole
\begin{eqnarray}
M=\frac{(d-2)\mathcal{A}_{d-2}}{8\pi G_{d}}\mu,
\end{eqnarray}
where $\mathcal{A}_{d-2}$ is the area of the unit $(d-2)$-sphere given by
\begin{eqnarray}
\mathcal{A}_{d-2}=\frac{2\pi^{(d-1)/2}}{\Gamma(\frac{d-1}{2})}
\end{eqnarray}
One can also find the analog of the Reissner-Nordstr\"{o}m solution in higher dimensions, where the metric function is given by \cite{mype}
\begin{eqnarray}
f=\left(1-\frac{2\mu}{r^{d-3}}+\frac{\theta^2}{r^{2(d-3)}}\right)
\end{eqnarray}
The electric charge of the black hole is given by
\begin{eqnarray}
Q^2=\frac{(d-2)(d-3)}{8\pi G_{d}}\theta^2
\end{eqnarray}
For $\theta^2<\mu^2$, there is an outermost horizon situated at
\begin{eqnarray}
r^{d-3}=\mu+(\mu^2-\theta^2)^{1/2} \label{extml}
\end{eqnarray}
This will be of importance for our analysis, since we will be interested in the potential just outside the outermost horizon of the black hole. In our analysis of the QNMs for the charged black hole we also have to keep in mind the relation (\ref{extml}), since it constrains the mass and charge of the hole. In other words, one cannot work with arbitrary mass and charge parameters in the Reissner-Nordstr\"{o}m type metric: the non-extremality condition $\theta^2<\mu^2$ must always be maintained.
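As a concrete illustration of these formulas, the horizon radii and the non-extremality check can be coded in a few lines (a sketch of our own; the function names are ours, and the paper's actual numerics were done in Mathematica):

```python
import math

def rn_horizons(mu, theta, d):
    """Outer and inner horizons of the d-dimensional Reissner-Nordstrom
    type solution, f(r) = 1 - 2 mu/r^(d-3) + theta^2/r^(2(d-3)):
    r_pm = (mu +- sqrt(mu^2 - theta^2))^(1/(d-3)), defined only in the
    non-extremal regime theta^2 < mu^2."""
    if theta**2 >= mu**2:
        raise ValueError("need theta^2 < mu^2 (non-extremality)")
    root = math.sqrt(mu**2 - theta**2)
    r_plus = (mu + root) ** (1.0 / (d - 3))
    r_minus = (mu - root) ** (1.0 / (d - 3))
    return r_plus, r_minus

def f_rn(r, mu, theta, d):
    """Metric function of the higher dimensional RN-type solution."""
    return 1.0 - 2.0 * mu / r**(d - 3) + theta**2 / r**(2 * (d - 3))
```

By construction $f$ vanishes at both radii; for example, for $d=5$, $\mu=1$, $\theta=0.5$ one finds $r_+\approx 1.366$.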
\subsection{Charged Gauss-Bonnet black hole}
We will now briefly discuss the black holes which arise in Einstein-Gauss-Bonnet gravity.
In space-time dimensions $d \geq 5$ the Einstein-Gauss-Bonnet action is given by
\begin{equation}
I=\frac{1}{16\pi G_d} \int d^dx \sqrt{-g} R + \alpha^{\prime}\int
d^dx
\sqrt{-g}(R_{\mu\nu\beta\gamma}R^{\mu\nu\beta\gamma}-4R_{\beta\gamma}R^{\beta\gamma}+R^2)
,\label{alpha}
\end{equation}
where $G_d$ is the $d$-dimensional Newton's constant and the parameter
$\alpha^{\prime}$ denotes the Gauss-Bonnet coupling. We will choose
$G_d=1$ from now on and will consider only positive $\alpha^{\prime}$
which is consistent with the string expansion \cite{deser}.
The metric for spherically symmetric asymptotically flat Gauss-Bonnet
black hole solution of mass $M$ is given by Eqn. (\ref{methighd}), where
$f(r)$ has the form \cite{deser}
\begin{equation}
f(r) = 1 + \frac{r^2}{2\alpha} -
\frac{r^2}{2\alpha}\sqrt{1+\frac{8\alpha M}{r^{d-1}}}, \label{fr}
\end{equation}
where,
\begin{equation}
\alpha=16\pi G_d (d-3)(d-4)\alpha'.
\end{equation}
For $\alpha'>0$, this black hole admits only a single horizon
\cite{deser}. The horizon $r=r_h$ is determined by the real positive
solution of the equation
\begin{equation}
r_h^{d-3}+\alpha r_h^{d-5}=2M. \label{horizon}
\end{equation}
The charged Gauss-Bonnet black hole has the following form of $f(r)$
\cite{wiltsh}:
\begin{equation}
f(r)=1+\frac{r^2}{2\alpha}-\frac{r^2}{2\alpha}\sqrt{1+\frac{8\alpha
M}{r^{d-1}}-\frac{4\alpha Q^2}{2\pi (d-2)(d-3)r^{2d-4}}},
\end{equation}
If $M>0$ and $\alpha >0$, there is a timelike singularity
which is shielded by two horizons provided $Q<Q_{ex}$. Here $Q_{ex}$ is
the extremal value of the charge, determined from \cite{wiltsh}:
\begin{equation}
r_{ex}^{2(d-3)}+\frac{d-5}{d-3}\alpha
r_{ex}^{2(d-4)}-\frac{Q_{ex}^2}{2\pi (d-2)(d-3)}=0,
\end{equation}
where,
\begin{equation}
r_{ex}^{d-3}=-\frac{1}{2}(d-5)M+\left[\frac{1}{4}(d-5)^2M^2+\frac{(d-4)Q_{ex}^2}{2\pi(d-2)(d-3)}\right]^{1/2}.
\end{equation}
Again, this equation will be of importance to us, since it constrains the values of the mass and charge of the Gauss-Bonnet black hole. For a general discussion of higher derivative corrected black holes and their perturbative stability, see Ref.~\cite{moura}.
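For given mass and coupling, $Q_{ex}$ can be obtained by substituting the closed form of $r_{ex}$ into the first condition and root-finding in $Q$. The sketch below (our own, illustrated for $d\ge 6$; names hypothetical) uses bisection:

```python
import math

def q_extremal(M, alpha, d):
    """Extremal charge of the charged Gauss-Bonnet black hole (d >= 6):
    the nonzero root of
      g(Q) = r^(2(d-3)) + (d-5)/(d-3) alpha r^(2(d-4)) - Q^2/(2 pi (d-2)(d-3)),
    with r = r_ex(Q) given in closed form.  g < 0 at small Q and g > 0 at
    large Q, so bisection brackets the root."""
    c = 1.0 / (2.0 * math.pi * (d - 2) * (d - 3))

    def r_ex(Q):
        s = -0.5 * (d - 5) * M
        return (s + math.sqrt(s * s + (d - 4) * c * Q * Q)) ** (1.0 / (d - 3))

    def g(Q):
        r = r_ex(Q)
        return (r ** (2 * (d - 3))
                + (d - 5) / (d - 3) * alpha * r ** (2 * (d - 4))
                - c * Q * Q)

    lo, hi = 1e-6, 1e6
    for _ in range(300):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

(The case $d=5$ is special, since both conditions then degenerate simultaneously.)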
\section{QNM using WKB method}
Studying QNMs in black hole backgrounds essentially requires the solution of a Schr\"{o}dinger-like wave equation with particular boundary conditions. We briefly review the derivation of the wave equation in higher dimensions for Dirac perturbations in the appendix.
However, for most spacetime geometries the wave equation governing the QNMs is
not exactly solvable. In our case too the wave equation so obtained is not exactly solvable, and we have to resort to numerical schemes to find the QN frequencies. Various numerical schemes have been used in the literature, including direct
integration of the wave equation in the frequency domain \cite{chandra}, the P\"{o}schl-Teller approximation \cite{ferrari}, the WKB method \cite{will,will2,iyer,kon2}, the phase integral method
\cite{Andersson, and1} and the continued fraction method \cite{leaver}. We will use the WKB method. It has an advantage over the other semianalytic methods in that it can be carried systematically to higher orders to improve the accuracy, and it has been found that the sixth order WKB method gives black hole QNMs almost identical to those obtained by full numerical integration. Another advantage of the WKB method is that the lowest order formula yields an analytic expression for the frequency in the large multipole number limit, which is otherwise almost impossible to obtain.
Having discussed the black hole backgrounds of our interest, we now turn to the tool that we will use in evaluating the QNMs, i.e. the WKB method. As derived in the appendix, the equations we need to solve are
\begin{eqnarray}
\left(-\frac{d^2}{dr_{\star}^2}+V_1\right)G=\omega^2G;~~~~\left(-\frac{d^2}{dr_{\star}^2}+V_2\right)F=\omega^2F,
\end{eqnarray}
where $r_{\star}$ is the tortoise coordinate and the potentials $V_{1,2}$ are given by
\begin{eqnarray}
V_{1,2}=\pm \frac{dW}{dr_{\star}}+W^2, ~~ W=\frac{\sqrt{f}}{r}\left(l+\frac{d-2}{2}\right),
\end{eqnarray}
with quasinormal boundary conditions: purely outgoing waves at spatial infinity and purely ingoing waves at the horizon, i.e. nothing comes in from asymptotic infinity to disturb the system and nothing comes out of the horizon.
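As a small numerical check of this structure (a sketch of our own; all names are hypothetical), one can evaluate the partner potentials directly from the superpotential $W$ and verify that both vanish at the horizon and at spatial infinity, which is what makes the above scattering-type boundary conditions meaningful:

```python
import math

def dirac_potentials(r, f, df, kappa):
    """Partner potentials V_{1,2} = +-dW/dr_* + W^2 for W = sqrt(f) kappa / r,
    using d/dr_* = f d/dr; f and df are the metric function and its
    r-derivative, and kappa = l + (d-2)/2."""
    W = math.sqrt(f(r)) * kappa / r
    dW_dr = kappa * (df(r) / (2.0 * math.sqrt(f(r)) * r) - math.sqrt(f(r)) / r**2)
    dW_dstar = f(r) * dW_dr            # convert d/dr to d/dr_*
    return dW_dstar + W * W, -dW_dstar + W * W
```

With $f$ the $d=5$ RN metric function for $\mu=1$, $\theta=0.5$, both potentials are numerically small just outside the horizon and far away, and positive in between.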
The formula for the QN frequencies in the third order WKB approach is given
by \cite{will,iyer}
\begin{equation}
\omega^2=[V_0+(-2V_0^{\prime\prime})^{1/2}\tilde\Lambda(n)]-i(n+\frac{1}{2})(-2V_0^{\prime\prime})^{1/2}[1+\tilde\Omega(n)],\label{freq}
\end{equation}
where $\tilde\Lambda=\Lambda/i$ and
$\tilde\Omega=\Omega/(n+\frac{1}{2})$, and $\Lambda$ and $\Omega$ are
given by
\begin{eqnarray}
\Lambda(n)&=&\frac{i}{(-2V^{\prime\prime}_0)^{1/2}}\left[\frac{1}{8}\left(\frac{V^{(4)}_0}{V^{\prime\prime}_0}\right)
\left(\frac{1}{4}+\nu^2\right)-\frac{1}{288}\left(\frac{V^{(3)}_0}{V^{\prime\prime}_0}\right)^2
(7+60\nu^2)\right],\nonumber\\
\Omega(n)&=&\frac{(n+\frac{1}{2})}{(-2V^{\prime\prime}_0)^{1/2}}\bigg [\frac{5}{6912}
\left(\frac{V^{(3)}_0}{V^{\prime\prime}_0}\right)^4
(77+188\nu^2)\nonumber\\&&-
\frac{1}{384}\left(\frac{V^{(3)^2}_0V^{(4)}_0}{V^{\prime\prime^3}_0}\right)
(51+100\nu^2)
+\frac{1}{2304}\left(\frac{V^{(4)}_0}{V^{\prime\prime}_0}\right)^2(67+68\nu^2)
\nonumber\\&&+\frac{1}{288}
\left(\frac{V^{(3)}_0V^{(5)}_0}{V^{\prime\prime^2}_0}\right)(19+28\nu^2)-\frac{1}{288}
\left(\frac{V^{(6)}_0}{V^{\prime\prime}_0}\right)(5+4\nu^2)\bigg ].\label{wl}
\end{eqnarray}
where $V^{(n)}_0=(d^nV/dx^n)_{x=x_0}$, the derivatives being taken with respect to the tortoise coordinate $x=r_{\star}$ and evaluated at the peak $x_0$ of the potential, and $\nu=n+1/2$.
It may be mentioned here that the accuracy of the WKB method depends
on the multipole number $l$ and the overtone number $n$. It has been
shown \cite{cardosoyousi} that the WKB approach works well for
$l>n$, i.e. the numerical and the WKB results are in good agreement if
$l>n$, but the approach is less accurate for $l=n$ and not at all
applicable for $l<n$.
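To make the recipe concrete, the sketch below (our own illustration; the paper's numerics were done in Mathematica) evaluates Eqs. (\ref{freq})-(\ref{wl}) given the value and the derivatives of the potential at its peak. A standard test case is the P\"{o}schl-Teller potential $V=V_0/\cosh^2 x$, whose exact quasinormal frequencies are $\omega=\sqrt{V_0-1/4}-i(n+1/2)$ and whose peak derivatives are known in closed form ($V''=-2V_0$, $V^{(4)}=16V_0$, $V^{(6)}=-272V_0$, odd derivatives vanish):

```python
import cmath

def wkb3_frequency(V0, d2, d3, d4, d5, d6, n):
    """Third order WKB quasinormal frequency: V0 is the peak value of
    the potential and d2..d6 its 2nd..6th derivatives with respect to
    the tortoise coordinate at the peak."""
    nu = n + 0.5
    s = cmath.sqrt(-2.0 * d2)                      # (-2 V0'')^(1/2)
    lam = (1.0 / s) * ((1.0 / 8.0) * (d4 / d2) * (0.25 + nu**2)
                       - (1.0 / 288.0) * (d3 / d2)**2 * (7.0 + 60.0 * nu**2))
    om = (1.0 / s) * ((5.0 / 6912.0) * (d3 / d2)**4 * (77.0 + 188.0 * nu**2)
                      - (1.0 / 384.0) * (d3**2 * d4 / d2**3) * (51.0 + 100.0 * nu**2)
                      + (1.0 / 2304.0) * (d4 / d2)**2 * (67.0 + 68.0 * nu**2)
                      + (1.0 / 288.0) * (d3 * d5 / d2**2) * (19.0 + 28.0 * nu**2)
                      - (1.0 / 288.0) * (d6 / d2) * (5.0 + 4.0 * nu**2))
    # lam = Lambda/i and om = Omega/nu, as in the text
    w2 = V0 + s * lam - 1j * nu * s * (1.0 + om)
    return cmath.sqrt(w2)
```

For $V_0=10$, $n=0$ this gives $\omega\approx 3.117-0.467i$, against the exact P\"{o}schl-Teller value $\approx 3.122-0.5i$: the real part is reproduced to a fraction of a percent, the imaginary part to within a few percent.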
Now, in our case the form of the potential for Reissner-Nordstr\"{o}m type solutions is given by
\begin{eqnarray}
V_1(r)&=&f\frac{d}{dr}\left\{\sqrt{f}\frac{\kappa}{r}\right\}+f\frac{\kappa^2}{r^2}, \label{potl22}\\
&=& \kappa \sqrt{\frac{\Delta}{r^{2d-6}}}\frac{1}{r^{2(d+1)}}\left[r^d\left\{(d-1)\mu r^3+r^d\left(\sqrt{\frac{\Delta}{r^{2d-6}}}\kappa-1\right)\right\}-(d-2)\theta^2r^6\right],\nonumber
\end{eqnarray}
where $\Delta=(r^{2d-6}-2\mu r^{d-3}+\theta^2)$. One can also write $\Delta$ as
\begin{eqnarray}
\Delta=(r^{d-3}-r_-^{d-3})(r^{d-3}-r_+^{d-3}),
\end{eqnarray}
where $r_+$ and $r_-$ are the outer and inner horizons of the Reissner-Nordstr\"{o}m black hole in $d$ dimensions. The explicit form of $r_{\pm}$ is given by
\begin{eqnarray}
r_{\pm}=(\mu\pm\sqrt{\mu^2-\theta^2})^{\frac{1}{d-3}}.
\end{eqnarray}
We will need this particular form for the discussion of large angular momentum limit in the next section.
The potentials for the Reissner-Nordstr\"{o}m type black holes and the charged Gauss-Bonnet black holes in different dimensions are plotted in figure (1) for particular values of the multipole number, charge and Gauss-Bonnet coupling. Since the QN frequencies depend on the height and width of the potential, it is clear from the picture, at least qualitatively, that the quasinormal frequencies should be different for the two space times discussed in the paper. Even for a very small value of the Gauss-Bonnet coupling ($\alpha=0.1$ in figure (1)), there are significant changes in the potential.
\begin{figure}[h]
\centering
\subfigure[Potential for Reissner-Nordstr\"{o}m black hole]{\rotatebox{0}{\epsfxsize=8cm\epsfbox{pltRN.eps}}}
\subfigure[Potential for charged Gauss-Bonnet black hole with $\alpha=0.1$]{\rotatebox{0}{\epsfxsize=8cm\epsfbox{pltGBQ.eps}}}
\caption{The potential for higher dimensional (a) Reissner-Nordstr\"{o}m and (b) charged Gauss-Bonnet black holes. The lowest plot is for $d=5$ and the topmost one for $d=10$, with charge $Q=0.05$ and $l=2$ in both cases.}
\end{figure}
By an appropriate choice of $\Delta$ one recovers the potential for the Reissner-Nordstr\"{o}m case in $d=4$, which is given in \cite{wu}. Similarly, the potential for the charged Gauss-Bonnet black hole can be found. Owing to the complicated form of the potential in the charged Gauss-Bonnet case, we do not write it down explicitly here.
Having found the explicit form of the potential, our next goal is to determine the quasinormal frequencies of Reissner-Nordstr\"{o}m type black holes and charged Gauss-Bonnet black holes in higher dimensions. For this we use the WKB formula for the QN frequencies, Eqn. (\ref{freq}). In this paper we consider the above black hole backgrounds in space time dimensions $d=5$ to $d=10$.
\begin{figure}[htp]
\centering
\subfigure[$d=5,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd5l0.eps}}}
\hspace{.2in}
\subfigure[$d=6,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd6l0.eps}}}
\vspace{.05in}
\hspace{.2in}
\subfigure[$d=7,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd7l0.eps}}}
\hspace{.2in}
\subfigure[$d=8,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd8l0.eps}}}
\hspace{.2in}
\subfigure[$d=9,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd9l0.eps}}}
\hspace{.2in}
\subfigure[$d=10,~ Re(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd10l0.eps}}}
\hspace{.1in}
\subfigure[$d=5,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd5l0Im.eps}}}
\hspace{.2in}
\subfigure[$d=6,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd6l0Im.eps}}}
\vspace{.05in}
\hspace{.2in}
\subfigure[$d=7,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd7l0Im.eps}}}
\hspace{.2in}
\subfigure[$d=8,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd8l0Im.eps}}}
\hspace{.2in}
\subfigure[$d=9,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd9l0Im.eps}}}
\hspace{.2in}
\subfigure[$d=10,~ -Im(\omega)~ vs~ Q$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{chgd10l0Im.eps}}}
\caption{The behaviour of Real (a-f) and Imaginary (g-l) part of the quasinormal frequency $\omega$ with charge $Q$ for
Reissner-Nordstr\"{o}m type black hole in $d=5$ to $10$ for $l=0$ and $n=0$.}
\label{fig2}
\end{figure}
In figure (\ref{fig2}) we show the behaviour of the real (2(a)-2(f)) and imaginary (2(g)-2(l)) parts of the QN frequency $\omega$ with charge $Q$ for Reissner-Nordstr\"{o}m type solutions in higher dimensions. The plots are made for the particular values $l=0$ of the multipole number and $n=0$ of the mode number. As can be seen from the figure, the real part of the frequency increases with the charge, i.e. the real oscillation frequency increases as the charge is increased.
The behaviour of the imaginary part of the QN frequency $\omega$ with charge $Q$ for Reissner-Nordstr\"{o}m type solutions in higher dimensions for $l=0$ and $n=0$ is shown in Figs. (2(g)-2(l)). As can be seen from the figure, the negative imaginary part of the frequency falls off as the charge increases. This implies that the damping decreases with increasing charge, i.e. a longer ringdown phase. For a more compact look at the behaviour of the frequencies with charge and multipole number, see Fig. (\ref{fig4}).
It may be mentioned here that if we set $d=4$ in the Mathematica code, the QN frequencies in four space time dimensions are recovered, and our results match those obtained in \cite{wu} and \cite{jing3}. The authors of \cite{wu, jing3} used the P\"{o}schl-Teller approximation scheme to determine the QN frequencies, whereas we have used the WKB method. We have also checked our results against those obtained by Cho et al.\ in \cite{split} by explicitly putting $Q=0$ in the metric, and found that they match the results for higher dimensional Schwarzschild black holes obtained there.
\begin{figure}[htp]
\centering
\subfigure[Variation of Re $\omega$ with $Q$ and $l$]{\rotatebox{0}{\epsfxsize=8.2cm\epsfbox{RealQLW.eps}}}
\hspace{0.2cm}
\subfigure[Variation of Im $\omega$ with $Q$ and $l$]{\rotatebox{0}{\epsfxsize=7.6cm\epsfbox{ImagQLW.eps}}}
\caption{Behaviour of the (a) Real and (b) Imaginary part of $\omega$ with $l$ and $Q$ for the higher dimensional RN black hole}
\label{fig4}
\end{figure}
\begin{figure}[htp]
\centering
\subfigure[Variation of Re $\omega$ with $d$]{\rotatebox{0}{\epsfxsize=7.8cm\epsfbox{dimREL2.eps}}}
\hspace{0.2cm}
\subfigure[Variation of Im $\omega$ with $d$]{\rotatebox{0}{\epsfxsize=7.8cm\epsfbox{dimIML2_n.eps}}}
\caption{Behaviour of the (a) Real and (b) Imaginary part of $\omega$ with space time dimension for $l=2$ and $n=0$ for the higher dimensional RN black hole}
\label{fig5a}
\vspace{0.5cm}
\subfigure[$d=5,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d5lplt.eps}}}
\hspace{.2in}
\subfigure[$d=6,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d6lplt.eps}}}
\vspace{.05in}
\hspace{.2in}
\subfigure[$d=7,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d7lplt.eps}}}
\hspace{.2in}
\subfigure[$d=8,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d8lplt.eps}}}
\hspace{.2in}
\subfigure[$d=9,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d9lplt.eps}}}
\hspace{.2in}
\subfigure[$d=10,~ Re(\omega)~ vs~ Im(\omega)$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{d10lplt.eps}}}
\caption{Re $\omega$ vs Im $\omega$ for different $l$ for the higher dimensional RN black hole. The four segments in each plot correspond to
$l=0,1,2,3$ respectively.}
\label{fig5}
\end{figure}
Let us now discuss the behaviour of the real and imaginary parts of the QN frequency with the space time dimension. Both the real and the imaginary part of the frequency increase with the dimension (see Fig. \ref{fig5a}). So the real oscillation frequencies increase with the space time dimension, but the damping also becomes larger for higher dimensional charged black holes. Thus the qualitative behaviour of the QN frequencies for the purely higher dimensional RN black hole is the same as that for higher dimensional Reissner-Nordstr\"{o}m black holes projected on the brane \cite{kanti1}, though quantitatively the frequencies are very different. Both the real and imaginary parts for the pure $d$-dimensional RN black hole are larger in magnitude than the corresponding results of \cite{kanti1}, where the QN frequencies of RN black holes projected on the brane were calculated. We have checked this for dimensions $d=5, 6$ (for which data were available in \cite{kanti1}).
The real and imaginary parts of the quasinormal frequency are plotted in figure (\ref{fig5}). The four distinct segments in each plot correspond to the multipole values $l=0$, $1$, $2$ and $3$ respectively. The points in each segment correspond to different values of the charge, starting from $Q=0$ up to the extremal value $Q_{ex}$.
Now let us take a look at the results for the charged Gauss-Bonnet black hole. Because of the huge set of data, we do not present results for all dimensions and all values of the charge, coupling and multipole number in this paper; for simplicity we consider only $d=7$ with multipole number $l=1$ and overtone number $n=0$. The evolution of a scalar field in the Gauss-Bonnet background was studied in \cite{konoplyaGB} and \cite{konabdalla}, while vector and tensor perturbations were considered in \cite{sayan} (for a recent study see \cite{konGBz}). There have been no studies of QNMs of Dirac perturbations in the charged Gauss-Bonnet background; this paper tries to fill that gap in the literature.
\begin{figure}[htp]
\centering
\subfigure[$Re(\omega)~ vs~ Q~~for ~\alpha=0.1$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa.1.eps}}}
\hspace{.2in}
\subfigure[$Re(\omega)~ vs~ Q~~for ~\alpha=0.2$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa.2.eps}}}
\vspace{.05in}
\hspace{.2in}
\subfigure[$Re(\omega)~ vs~ Q~~for ~\alpha=0.3$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa.3.eps}}}
\hspace{.2in}
\subfigure[$Re(\omega)~ vs~ Q ~for~\alpha=0.4$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa.4.eps}}}
\hspace{.2in}
\subfigure[$Re(\omega)~ vs~ Q~~for ~\alpha=0.5$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa.5.eps}}}
\hspace{.2in}
\subfigure[$Re(\omega)~ vs~ Q~for~ \alpha=1.0$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQRa1.eps}}}
\hspace{.1in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=0.1$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa.1.eps}}}
\hspace{.2in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=0.2$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa.2.eps}}}
\vspace{.05in}
\hspace{.2in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=0.3$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa.3.eps}}}
\hspace{.2in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=0.4$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa.4.eps}}}
\hspace{.2in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=0.5$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa.5.eps}}}
\hspace{.2in}
\subfigure[$-Im(\omega)~ vs~ Q~for~ \alpha=1.0$]{\rotatebox{0}{\epsfxsize=5.1cm\epsfbox{GBQIa1.eps}}}
\caption{The behaviour of Real (a-f) and Imaginary (g-l) part of the quasinormal frequency $\omega$ with charge $Q$ for charged Gauss-Bonnet black hole in $d=7$ for $l=1$ and $n=0$.}
\label{fig6}
\end{figure}
As can be seen from Figs. (\ref{fig6}(a)-\ref{fig6}(f)), the real part of the frequency for the charged Gauss-Bonnet black hole increases with the charge, whereas Figs. (\ref{fig6}(g)-\ref{fig6}(l)) show that the imaginary part decreases with the charge, implying that the real oscillation frequency increases with charge while the damping decreases with it. It is in this sense that the behaviour of the frequencies is the same in the Reissner-Nordstr\"{o}m and charged Gauss-Bonnet backgrounds. Now let us comment on the Gauss-Bonnet coupling $\alpha\to 0$ limit. It has been shown in \cite{konoplyaGB} and \cite{sayan} that in the $\alpha\to 0$ limit the quasinormal modes of scalar field perturbations and of vector and tensorial perturbations of the uncharged Gauss-Bonnet black hole reduce to the Schwarzschild QN frequencies. This is easy to understand, since in the limit $\alpha\to 0$ the Gauss-Bonnet metric function behaves as $(1-2M/r^{d-3}+4\alpha M^2/r^{2d-4}+\cdots)$. The third term is of order $\mathcal{O}(\alpha)$ and hence for very small values of $\alpha$ the quasinormal frequencies of the Gauss-Bonnet black hole go over to the Schwarzschild values. The same thing is expected here, because in the $\alpha\to 0$ limit the charged Gauss-Bonnet metric function behaves as $(1-2M/r^{d-3}+\tilde{Q}^2/r^{2d-6}+\mathcal{O}(\alpha)+\cdots)$, where $\tilde{Q}=Q/\sqrt{2\pi(d-2)(d-3)}$. We have checked for different small values of $\alpha ~(\alpha=0.0001, 0.0002,\cdots, 0.0005)$, keeping the charge fixed, that the quasinormal frequencies of Reissner-Nordstr\"{o}m black holes in different dimensions are indeed reproduced.
Having discussed the behaviour of the real and imaginary parts of the frequencies with the charge of the Gauss-Bonnet black hole, let us now concentrate on their variation with the Gauss-Bonnet coupling $\alpha$. This variation is plotted in Fig. (\ref{alphGBQ}). As can be seen from the figures, the real part of the frequency increases with the Gauss-Bonnet coupling while the negative imaginary part decreases with $\alpha$. Note that the variation of the imaginary part in higher dimensions is comparatively slower than in lower dimensions. For example, in $d=8,~9,~10$ the negative imaginary part first decreases with $\alpha$, then remains essentially unchanged over a range of $\alpha$, and then starts decreasing again.
\begin{figure}[h]
\centering
\subfigure[Plot of Real $\omega$ vs $\alpha$ for charged Gauss-Bonnet black hole]{\rotatebox{0}{\epsfxsize=5.5cm\epsfbox{alphI.eps}}}
\subfigure[Plot of Imaginary $\omega$ vs $\alpha$ for charged Gauss-Bonnet black hole]{\rotatebox{0}{\epsfxsize=5.5cm\epsfbox{alphR.eps}}}
\subfigure[Plot of Quality Factor vs $\alpha$ in $d=7$]{\rotatebox{0}{\epsfxsize=5.5cm\epsfbox{QFactor.eps}}}
\caption{The variation of the (a) real and (b) imaginary parts of the frequency and (c) the quality factor with Gauss-Bonnet coupling $\alpha$ for a fixed value of charge $Q=0.05$ and multipole number $l=1$. The space time dimension increases in (a) and (b) from bottom to top, i.e. the lowest plot in each graph corresponds to $d=6$ and the top one to $d=10$.}
\label{alphGBQ}
\end{figure}
If one defines the quality factor as $\hat{Q}\sim\vert\frac{Re(\omega)}{Im(\omega)}\vert$, its variation is plotted in Fig. (\ref{alphGBQ}c). This shows that, in spite of the behaviour of the real and imaginary parts of the frequency described above, the quality factor increases as one increases the value of $\alpha$. It is also interesting to note that although $\hat{Q}$ grows quickly for small values of $\alpha$, it more or less saturates as $\alpha$ is increased further, similar to the behaviour of the quality factor observed for the uncharged Gauss-Bonnet black hole on the brane \cite{zhidGB}.
\section{QNM in large multipole number limit}
Let us now focus on the large angular momentum (multipole number) limit, i.e. $\kappa\to\infty$, where $\kappa=l+\frac{d-2}{2}$. Our aim in this section is to simplify the otherwise complicated expression for the frequency given by Eqn. (\ref{freq}) and obtain an analytic expression for $\omega$. As one can see from the expression (\ref{potl22}) for the potential, the dominant term in the $\kappa\to\infty$ limit is
\begin{eqnarray}
V_1\vert_{\kappa\to\infty}\sim f\frac{\kappa^2}{r^2}.
\end{eqnarray}
Hence this approximation simplifies the potential considerably. Next, we keep the WKB formula only up to first order, i.e. we write Eqn. (\ref{freq}) as
\begin{eqnarray}
\omega^2\sim V_0-i\left(n+\frac{1}{2}\right)(-2V_0^{\prime\prime})^{1/2}+\cdots \label{1ord},
\end{eqnarray}
where $V_0$ is the maximum of the potential $V_1$ that we are working with. To find the maximum we solve $\frac{dV_1}{dr}=0$ and get
\begin{eqnarray}
r_m\vert_{\kappa\to\infty}\sim \left(\frac{2d-6}{d-3}\right)^{\frac{1}{d-3}}\frac{r_+r_-}{(r_+^{d-3}+r_-^{d-3})^{\frac{1}{d-3}}}
\end{eqnarray}
The potential evaluated at this maximum (suppressing the overall factor of $\kappa^2$) is
\begin{eqnarray}
V_1\vert_{\kappa\to\infty}&=&\left(2^{\frac{1}{d-3}} r_- r_+
\left(r_-^{d-3}+r_+^{d-3}\right){}^{\frac{1}{3-d}}\right){}^{
4-2 d} \nonumber\\
&&\left(\left(2^{\frac{1}{d-3}} r_- r_+
\left(r_-^{d-3}+r_+^{d-3}\right){}^{\frac{1}{3-d}}\right){}^{
d-3}-r_-^{d-3}\right)\nonumber\\
&& \left(\left(2^{\frac{1}{d-3}} r_- r_+
\left(r_-^{d-3}+r_+^{d-3}\right){}^{\frac{1}{3-d}}\right){}^{
d-3}-r_+^{d-3}\right)
\end{eqnarray}
Next one uses this to find an expression for the frequency:
\begin{eqnarray}
\omega^2&=&[2^{\frac{1}{d-3}} r_- r_+
(r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}]{}^{
4-2 d}\times [(2^{\frac{1}{d-3}} r_- r_+
(r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}){}^{
d-3}-r_-^{d-3}]\times\nonumber\\
&& [(2^{\frac{1}{d-3}} r_- r_+
(r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}){}^{
d-3}-r_+^{d-3}]\nonumber\\
&&-\frac{1}{4} i \left(n+\frac{1}{2}\right)\times\nonumber\\
&&\left(\frac{1}{r_-^2
r_+^2}\right) [-2^{\frac{4-d}{3-d}}
(r_-^{d-3}+r_+^{d-3}){}^{-\frac{2}{d-3}}
(2^{\frac{1}{d-3}} r_- r_+
(r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}){}^{
-2 d} (-(d-1) d (2^{\frac{1}{d-3}} r_- r_+\nonumber\\
&& (r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}){}^d
(r_+^3 r_-^d+r_+^d r_-^3)
(r_-^{d-3}+r_+^{d-3}){}^{\frac{3}{d-3}}+3
2^{\frac{d-6}{d-3}}\nonumber\\
&& (2^{\frac{1}{d-3}} r_- r_+
(r_-^{d-3}+r_+^{d-3}){}^{\frac{1}{3-d}}){}^{
2 d}
(r_-^{d-3}+r_+^{d-3}){}^{\frac{6}{d-3}}+2^{\frac{d
}{d-3}} (d (2 d-7)+6) r_-^{d+3} r_+^{d+3})]{}^{\frac{1}{2}}
\end{eqnarray}
This expression is indeed very complicated, even though we have simplified the potential and used only the first order WKB formula. We note from the above form that the QN frequencies depend on the mass and charge of the black hole through $r_+$ and $r_-$ (since $r_+$ and $r_-$ are determined in terms of $\mu$ and $\theta$). One can check that putting $d=4$ in the above formula reproduces the result for the four dimensional Reissner-Nordstr\"{o}m black hole.
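The large-$\kappa$ result can also be cross-checked numerically: locate the peak of $f\kappa^2/r^2$ on a grid and apply the first order formula (\ref{1ord}) directly. The sketch below (our own; the grid range and step sizes are ad hoc choices suitable for moderate charge) does this for the $d$-dimensional Reissner-Nordstr\"{o}m type metric:

```python
import math, cmath

def f_metric(r, mu, theta, d):
    """Metric function of the d-dimensional RN-type solution."""
    return 1.0 - 2.0 * mu / r**(d - 3) + theta**2 / r**(2 * (d - 3))

def eikonal_qnm(mu, theta, d, kappa, n=0):
    """First order WKB frequency for the large-multipole potential
    V = f kappa^2 / r^2: scan for the peak outside the outer horizon,
    then omega^2 ~ V0 - i (n + 1/2) sqrt(-2 V0''), derivatives taken
    with respect to the tortoise coordinate (d/dr_* = f d/dr)."""
    f = lambda r: f_metric(r, mu, theta, d)
    V = lambda r: f(r) * kappa**2 / r**2
    r_plus = (mu + math.sqrt(mu**2 - theta**2)) ** (1.0 / (d - 3))
    # grid scan between r_+ and ~20 r_+
    r0 = max((r_plus * (1.0 + 0.001 * k) for k in range(1, 20000)), key=V)
    h = 1e-4 * r0
    dV = lambda r: (V(r + h) - V(r - h)) / (2.0 * h)
    d2V = f(r0) * (f(r0 + h) * dV(r0 + h) - f(r0 - h) * dV(r0 - h)) / (2.0 * h)
    w2 = V(r0) - 1j * (n + 0.5) * cmath.sqrt(-2.0 * d2V)
    return cmath.sqrt(w2)
```

For $d=5$, $\mu=1$, $\theta=0.5$ the peak sits at $r_m^2=(4+\sqrt{13})/2$, and for $\kappa=50$ the real part of the frequency agrees with $\kappa\sqrt{f(r_m)}/r_m$ to high accuracy, as expected in the eikonal limit.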
\section{Conclusion}
In this paper we have discussed the Dirac quasinormal frequencies of charged black holes in higher dimensions. We have investigated two different backgrounds, namely the higher dimensional Reissner-Nordstr\"{o}m and the charged Gauss-Bonnet black holes. Although the study of QNMs is by now a well explored field, a comparative study of QNMs in different scenarios is still interesting.
One of the main ideas of \cite{split} was to study fermion quasinormal modes in the purely higher dimensional Schwarzschild background within the framework of split fermion models, where the quarks and leptons are forced to live on separate branes in order to ensure proton stability. The present work can likewise be interpreted as a study of split fermion quasinormal modes for higher dimensional charged black holes. Within this model we found that the real part of the QN frequencies increases with the charge in both backgrounds discussed in this paper, while the imaginary part decreases with the charge in both. The behaviour of the frequencies was also studied as the space time dimension increases: the real part of the frequency increases with the dimension, and so does the damping. A comparative study of the quasinormal modes of the higher dimensional RN and charged Gauss-Bonnet black holes suggests that the variation of the frequencies with charge is much more rapid in the Reissner-Nordstr\"{o}m background than in the charged Gauss-Bonnet background. Quantitatively, we found that the real and imaginary parts of the frequencies in the split fermion models are larger than those of their brane localized partners. One can also recover the QN frequencies of the black holes of pure Einstein gravity from those of the charged Gauss-Bonnet background in the $\alpha\to 0$ limit of the Gauss-Bonnet coupling. We have also studied the behaviour of the real and imaginary parts of the frequencies with the Gauss-Bonnet coupling $\alpha$ and found that the real part increases with $\alpha$, while the negative imaginary part decreases. In higher dimensions this variation of the imaginary part is slower: over some range of $\alpha$ the change in the imaginary part is very small, after which the negative imaginary part starts decreasing again.
It would be interesting to calculate the black hole absorption cross sections for bulk RN and Gauss-Bonnet fermions in higher dimensions, and a comparative study of the late time fall off in these backgrounds would also be interesting. Since it is known that massive fields can change the low lying modes of the QN spectrum, the study of massive Dirac perturbations in these backgrounds will also be an important aspect.
\section{Acknowledgment}
The author wishes to thank Prof. Kumar S. Gupta for numerous discussions and suggestions during this work. He also wishes to thank Prof. Palash B. Pal and Pulak Ranjan Giri for many useful discussions during the earlier stages of the work. Finally, he wishes to thank Subhajit Karmakar for much help with the Mathematica programming.
\section{Appendix A: Dirac Equation in spherically symmetric higher dimensional black
hole background}
In this section we study the Dirac equation in a static spherically symmetric black hole background in higher dimensions. This section is essentially a brief review of the work done in \cite{split}.
Let us start with a $d$ dimensional spherically symmetric metric of
the form
\begin{eqnarray}
ds^2=-f(r)dt^2+f^{-1}(r)dr^2+r^2d\bar\Omega_{d-2}^2, \label{mtrc}
\end{eqnarray}
where $d\bar\Omega_{d-2}^2$ is the metric on the $(d-2)$-sphere. Following \cite{split,gibbons,gibbons1}, let us consider a
conformal transformation under which the metric and its
determinant behave as
\begin{eqnarray}
g_{\mu\nu}\to\tilde g_{\mu\nu}&=&\Omega^2g_{\mu\nu}\nonumber \\
g^{\mu\nu}\to\tilde g^{\mu\nu}&=&\Omega^{-2}g^{\mu\nu}\nonumber\\
\mathrm{det}~|g_{\mu\nu}|&=&\Omega^{-2d}\mathrm{det}~|\tilde g_{\mu\nu}|\nonumber\\
\sqrt{-\tilde{g}}&=&\Omega^{d}\sqrt{-g}
\end{eqnarray}
where $\Omega$ is the conformal factor. Under such a conformal
transformation the spinor field behaves as
$\psi\to\tilde\psi=\Omega^n\psi$, where $n$ is determined by
demanding that the Dirac Lagrangian remain invariant under the
conformal transformation. Using the facts that
$\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}$ and
$\tilde\gamma^{\mu}=\Omega^{-1}\gamma^{\mu}$ together with the massless
Dirac Lagrangian, one gets $n=-(d-1)/2$. Thus
\begin{eqnarray}
\psi\to\tilde\psi&=&\Omega^{\frac{(1-d)}{2}}\psi\nonumber\\
\gamma^{\mu}\nabla_{\mu}\psi\to\tilde\gamma^{\mu}\tilde\nabla_{\mu}\tilde\psi&=&\Omega^{-\frac{(d+1)}{2}}\gamma^{\mu}\nabla_{\mu}\psi
\end{eqnarray}
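The value $n=-(d-1)/2$ follows from a simple power counting, which we spell out schematically (our addition): under the rescaling, each factor of the massless Dirac Lagrangian contributes
\begin{eqnarray}
\sqrt{-\tilde g}\;\bar{\tilde\psi}\,\tilde\gamma^{\mu}\tilde\nabla_{\mu}\tilde\psi
\sim\Omega^{d}\cdot\Omega^{n}\cdot\Omega^{-1}\cdot\Omega^{n}\;\sqrt{-g}\,\bar\psi\gamma^{\mu}\nabla_{\mu}\psi,
\end{eqnarray}
so invariance requires $d+2n-1=0$, i.e. $n=(1-d)/2$; the inhomogeneous terms in the spin connection cancel precisely for this weight.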
Following \cite{split}, we choose $\Omega^2=\frac{1}{r^2}$, so that
$\tilde\psi=r^{(d-1)/2}\psi$. The conformal metric $d\tilde s^2$ then
has a completely separated $t$-$r$ part and $(d-2)$-sphere
part. The massless Dirac equation can then be written as
\begin{eqnarray}
\tilde\gamma^{\mu}\tilde\nabla_{\mu}\tilde\psi=0 \label{dirac}
\end{eqnarray}
where
$\tilde\nabla_{\mu}=\tilde\partial_{\mu}-\frac{i}{4}\tilde\eta_{ac}\tilde\omega^c_{b\mu}\tilde\sigma^{ab}$,
and $\tilde\omega^c_{b\mu}=\tilde e^c_{\nu}\tilde \partial_{\mu}\tilde
e^\nu_b+\tilde e^c_{\nu}\tilde e^\sigma_{b}
\tilde\Gamma^{\nu}_{\sigma\mu}$,
$\tilde\sigma^{ab}=\frac{i}{2}[\tilde\gamma^a,\tilde\gamma^b]$.
The problem now is to find the $\tilde\gamma$ matrices in higher-dimensional spacetime. In \cite{zee}, a complete discussion of this problem is given for flat spacetime. Following their notation and using \cite{split}, we write the separated Dirac equation as
\begin{eqnarray}
\left[\left(\frac{r}{\sqrt{f}}(-i\sigma^3)\tilde\nabla_t+r\sqrt{f}\sigma^2\tilde\nabla_r\right)\otimes \mathbf{1}\right]\tilde\psi + \left[-\sigma^1\otimes(\tilde\gamma^i\tilde\nabla_i)_{S_{d-2}}\right]\tilde\psi=0
\end{eqnarray}
where $\sigma^i$ are the Pauli matrices
\begin{eqnarray}
\sigma^1 =
\left(\begin{array}{cc}
0 & 1 \\
1 & 0
\end{array}\right),~~
\sigma^2 =
\left(\begin{array}{cc}
0 & -i \\
i & 0
\end{array}\right),~~
\sigma^3 =
\left(\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}\right)
\end{eqnarray}
Following \cite{higuchi}, one can write
\begin{eqnarray}
(\tilde\gamma^i\tilde\nabla_i)_{S_{d-2}}\tilde\chi_l^{\pm}=\pm
i\left(l+\frac{d-2}{2}\right)\tilde\chi_l^{\pm}, ~~~~l=0,~1,~2,\cdots
\end{eqnarray}
where $\tilde\chi_l^{\pm}$ are the eigenspinors on the
$(d-2)$-dimensional sphere. Again following \cite{split}, using the
orthogonality of the eigenspinors one can
expand $\tilde\psi=\displaystyle\sum_{l}(\tilde\phi_l^+(r,t)\tilde\chi_l^++\tilde\phi_l^-(r,t)\tilde\chi_l^-)$.
The Dirac equation can then be written as
\begin{eqnarray}
\sigma^2 r
\sqrt{f}\left[\partial_r+\frac{r}{2\sqrt{f}}\frac{d}{dr}\left(\frac{\sqrt f}{r}\right)\right]\phi_l^+-i\sigma^1\left(l+\frac{d-2}{2}\right)\phi_l^+=i\sigma^3\left(\frac{r}{\sqrt{f}}\right)\partial_t\phi_l^+,
\end{eqnarray}
In writing the above
equation we have simplified the notation by dropping the tildes, and we
work with the positive-sign eigenspinors; one could equally well work with the negative sign. Next, we use the ansatz
\begin{eqnarray}
\phi_l^+=\left(\frac{\sqrt f}{r}\right)^{-1/2}e^{-i\omega t}\left(\begin{array}{c}
iG(r)\\
F(r)
\end{array}
\right)
\end{eqnarray}
The Dirac equation then simplifies to
\begin{eqnarray}
f\frac{dG}{dr}-\frac{\sqrt{f}}{r}\left(l+\frac{d-2}{2}\right)G=\omega F,\\
f\frac{dF}{dr}+\frac{\sqrt{f}}{r}\left(l+\frac{d-2}{2}\right)F=-\omega G
\end{eqnarray}
By defining the tortoise coordinate as $dr_{\star}=dr/f(r)$ and introducing a function
\begin{eqnarray}
W=\frac{\sqrt{f}}{r}\kappa
\end{eqnarray}
where $\kappa=(l+(d-2)/2)$, one can finally arrive at
\begin{eqnarray}
\left(-\frac{d^2}{dr_{\star}^2}+V_1\right)G=\omega^2G;~~~~\left(-\frac{d^2}{dr_{\star}^2}+V_2\right)F=\omega^2F,
\end{eqnarray}
where
\begin{eqnarray}
V_{1,2}=\pm \frac{dW}{dr_{\star}}+W^2.
\end{eqnarray}
The potentials $V_1$ and $V_2$, corresponding to Dirac particles and antiparticles respectively, are supersymmetric partners derived from the same superpotential $W$. It has also been shown \cite{jing3} that Dirac particles and antiparticles have the same QN spectra. In this paper we use only $V_1$ to study the quasinormal modes.
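As a quick numerical illustration of the shape of this potential (our own sketch, not taken from \cite{split}), one can evaluate $W$ and $V_1$ for the $d$-dimensional Schwarzschild--Tangherlini choice $f(r)=1-(r_h/r)^{d-3}$; the horizon radius $r_h=1$ and the values $d=5$, $l=0$ below are assumptions made only for this example.

```python
import math

# Illustrative sketch (not from the paper): evaluate the superpotential
# W = (sqrt(f)/r) * kappa and the potential V1 = dW/dr_* + W^2, using
# dW/dr_* = f * dW/dr (since dr_* = dr/f), for the assumed lapse
# f(r) = 1 - (r_h/r)^(d-3) with r_h = 1.

def f(r, d, r_h=1.0):
    return 1.0 - (r_h / r) ** (d - 3)

def W(r, d, l):
    kappa = l + (d - 2) / 2.0
    return math.sqrt(f(r, d)) / r * kappa

def V1(r, d, l, eps=1.0e-6):
    dW_dr = (W(r + eps, d, l) - W(r - eps, d, l)) / (2.0 * eps)
    return f(r, d) * dW_dr + W(r, d, l) ** 2

# The barrier vanishes at the horizon (where f -> 0) and decays at
# spatial infinity -- the shape probed by quasinormal modes.
for r in (1.2, 2.0, 5.0, 50.0):
    print(r, V1(r, d=5, l=0))
```

For these sample radii the potential is positive and decays toward spatial infinity, consistent with a potential barrier outside the horizon.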
\bibliographystyle{unsrt}
function experiments(topN,socialdimsnums,nameDescriptors,nr_splits,dir_predictions,dir_data,dir_socialDimensions,collectionFolder,method,C)
% Run all the features with different TopN (top-k neighbors), numbers of
% social dimensions and C (for the SVM) to tune the parameters, and
% extract evaluation metrics for every combination on each dataset split
% for each concept.
% This code should be used to extract the best parameters before fusing them.
% Input:
% *topN: numbers of top neighbors to try
% *socialdimsnums: numbers of social dimensions to try
% *nameDescriptors: the names of the descriptors
% *nr_splits: how many splits of the dataset to use
% *dir_predictions,dir_data,dir_socialDimensions,collectionFolder: the data folders
% *method: the learning method passed on to RunExperiments
% *C: the SVM parameter(s); defaults to 5 when omitted
numSocialDim = length(socialdimsnums);
numTopN = length(topN);
if nargin ==9
C = 5;
end
mAPresultsTable = [];
mIAPresultsTable = [];
numDescriptors = length(nameDescriptors);
experimentsStart =tic;
disp ('Start Running tuning Experiment!');
for current_desc=1:numDescriptors
disp(current_desc)
for current_k=1:numSocialDim
disp(['Social Dimension:',num2str(socialdimsnums(current_k))]);
for current_TopN= 1:numTopN
disp(['Top N neighbor:',num2str(topN(current_TopN))]);
for current_C= 1:length(C)
% load social dimensions (the .mat file is expected to define the
% variable 'socialdim' used below)
vName = ['V_Top_',num2str(socialdimsnums(current_k)),'_','TopN','_',num2str(topN(current_TopN))];
load([dir_socialDimensions,collectionFolder,cell2mat(nameDescriptors{current_desc}),'/', vName]);
AP = cell(1,nr_splits);
InterPrecisionRecall = cell(1,nr_splits);
%======================make dir to save experiments================
predictionsDir = [dir_predictions,collectionFolder,cell2mat(nameDescriptors{current_desc}),'/'];
if (exist(predictionsDir,'dir')==0)
mkdir (predictionsDir)
end
addpath(dir_predictions);
addpath(predictionsDir);
for current_split=1:nr_splits
disp(['Dataset:',num2str(current_split)]);
[trainLabels,testLabels, trainListID, testListID ] = setData(collectionFolder, dir_data,current_split);
[AP{current_split}, InterPrecisionRecall{current_split}] = RunExperiments(socialdim ,trainLabels,testLabels, trainListID, testListID,method);
disp(num2str(mean(cell2mat(InterPrecisionRecall(:,current_split)))));
end
avgAP = mean(cell2mat(AP),2);
avgmIAP = mean(cell2mat(InterPrecisionRecall),2);
disp(mean(avgmIAP));
class_nr = size(avgmIAP,1);
socdim_nr = socialdimsnums(current_k)*ones(class_nr,1);
topN_val = topN(current_TopN)*ones(class_nr,1);
C_val = C(current_C)*ones(class_nr,1);
class_nr_val = 0:class_nr-1;
tmpmAPresultsTable = [socdim_nr topN_val C_val class_nr_val' avgAP];
mAPresultsTable = [mAPresultsTable; tmpmAPresultsTable];
tmpmIAPresultsTable = [socdim_nr topN_val C_val class_nr_val' avgmIAP];
mIAPresultsTable = [mIAPresultsTable;tmpmIAPresultsTable];
%======================Save tuned metrics==================
if current_TopN == numTopN && current_k == numSocialDim
nameSaveExperiments = ['mIAP','_','results_Top_','all','_','TopN_','all'];
save([predictionsDir nameSaveExperiments],'mIAPresultsTable');
nameSaveExperiments = ['mAP','_','results_Top_','all','_','TopN_','all'];
save([predictionsDir nameSaveExperiments],'mAPresultsTable');
end
end
end
end
mIAPresultsTable=[];
mAPresultsTable = [];
end
experimentsEnd= toc(experimentsStart);
fprintf('Tuning of parameters computed at %d minutes and %f seconds\n',floor(experimentsEnd/60),rem(experimentsEnd,60));
end
The Matra Djet is a French sports car, developed by René Bonnet on the basis of Renault mechanicals as the René Bonnet Djet, and later taken over and further developed by Matra. During its production run from 1962 to 1967 the car bore several names: Matra Bonnet Djet, Matra Sports Djet and finally Matra Sports Jet.
René Bonnet Djet
René Bonnet originally started out as a sports car manufacturer within Deutsch et Bonnet, but after a disagreement with partner Charles Deutsch he founded Automobiles René Bonnet. He decided to develop a sports car of his own, the Djet. This name was chosen because he thought the French would not pronounce the English word Jet correctly. The car was built on a tubular chassis with a fibre-reinforced plastic (glass fibre and polyester) body, which Bonnet sourced from Matra, as well as the factory hall in Romorantin.
The mid-mounted engine and the front suspension came from the Renault 8, the gearbox from the Renault Estafette van (with modified gear ratios). The design was very modern for its time, with disc brakes and independent suspension all round. The René Bonnet Djet is 3.80 m long, 1.40 m wide and 1.15 m high, seats two (there is no rear seat, as the engine sits there) and weighs only 600 kg.
Three types of René Bonnet Djet were built:
René Bonnet Djet I: 1108 cc Renault 8 Major engine (65 hp, 165 km/h)
René Bonnet Djet II: 1108 cc Renault 8 Gordini engine (80 hp, 190 km/h)
René Bonnet Djet III / Djet IV: 998 cc engine with double overhead camshaft (100 hp). These cars were designed for the race track.
Only 197 René Bonnet Djets were built, from 1962 to 1964. Reasons cited for the lack of commercial success include the design being impractical for daily use and the unreliability of the Renault mechanicals compared with the competing Alpine, which was likewise based on Renault parts.
Matra Djet/Jet
Because Bonnet ran into financial trouble and Matra's CEO liked the idea of becoming a sports car manufacturer, Matra took over production of the Djet from Bonnet on 14 October 1964 (backdated to 1 October 1964 in official documents), after which Bonnet left the company. Management passed to the then still young manager Jean-Luc Lagardère, who later became head of the Matra group.
Production of the original Djet stopped in December 1964, after which Philippe Guèdon, a designer recruited from Simca, improved the design in many respects. These improvements made the car considerably larger: 4.22 m long, 1.50 m wide, 1.20 m high, and the weight rose to 660 kg. In April 1965 production resumed with two new versions, the Matra Bonnet Djet V (with a Renault 8 Major engine) and the Djet V S (with a Renault 8 Gordini engine). In October 1965, after the Salon de l'Auto motor show in Paris, the Roman numerals and the Bonnet name were dropped; from then on the car was called the Matra Sports Djet 5.
In 1966 a model with a larger Gordini engine came onto the market and the Djet name was replaced by the originally intended Jet. The model range now consisted of the Matra Sports Jet 5 (1108 cc Renault 8 Major engine), Jet 5 S (1108 cc Renault 8 Gordini engine) and Jet 6 (1255 cc Renault Gordini engine).
Models
In total, three types of the Matra Djet/Jet were produced from 1965 to 1967.
Matra Bonnet Djet V / Matra Sports Djet 5 / Jet 5: 1108 cc Renault 8 Major engine, 70 SAE hp, 170 km/h
Matra Bonnet Djet V S / Matra Sports Djet 5 S / Jet 5 S: 1108 cc Renault 8 Gordini engine, 90 SAE hp, 190 km/h
Matra Sports Jet 6: 1255 cc Renault 8 Gordini engine, 105 SAE hp, 210 km/h
Apart from these model designations, "De Luxe" versions were available, with a wooden dashboard, a wooden steering wheel, different placement of the instruments and handbrake, and a larger bumper. Versions with a removable roof panel were also on the market.
Prices
The 1966 price list for Belgium was as follows:
Djet 5 Standard: 152,200 Belgian francs
Djet 5 De Luxe: 164,700 Belgian francs
Djet 5 S Standard: 194,000 Belgian francs
Djet 5 S De Luxe: 207,900 Belgian francs
Famous Djets
Well-known Djet drivers include Yuri Gagarin, who was given one as a present in 1965, King Hussein of Jordan, who bought a red Djet 5 S in 1966, and the Gendarmerie Nationale, which patrolled the Autoroute du soleil in a Djet.
In 1965 a production of 3,000 Jets by 1967 was envisaged, but that number was not nearly reached. Production ended in 1967 with a total of 1,495 Matra (D)Jets produced. The model was succeeded by the Matra 530, although the last Jets (all Jet 6) were not sold until 1968.
Photo gallery
Djet
Sports car
"redpajama_set_name": "RedPajamaWikipedia"
} | 7,937 |
To make money, South Korea has opened children's beauty salons where four-year-olds get makeup, face masks and spa treatments, something even Japan has not seen.
Tokyo New Youth
No one knows more about Japan than we do about Japan
Everyone knows that beauty salons provide beauty care, skin care and spa services for grown women.
But can you imagine four- and five-year-olds, still innocent and carefree, getting facials, spa treatments, face masks and makeup? In recent years a wave of children's beauty clubs has swept South Korea. Despite constant public criticism, children's beauty parlors keep expanding.
Koreans' crazy love for beauty
Koreans' obsession with and persistence in beauty are famous the world over. In 2017, South Korea spent $45 per capita on cosmetics, against a world average of $21. South Korea's beauty industry is worth more than $10 billion.
It is said that many Korean girls give themselves a coming-of-age gift at 18: cosmetic surgery! From childhood to adulthood, their birthday wishes often include the hope of becoming beautiful, or of having plastic surgery.
In an age that judges by appearance, loving beauty is natural. Nevertheless, going after four- and five-year-olds born after 2005 for the sake of commercial interests seems a little terrifying.
What? A children's beauty club
South Korea's ShuShu & Sassy, a beauty club specializing in children's treatments, has recently been at the center of a storm. The club is almost identical to an ordinary adult beauty club, offering everything from foot baths, spa treatments, face masks and makeup to nail care. The only difference is that it targets children from 36 months to 13 years old, although most of its customers are four to seven.
The company originally only sold cosmetics for children born after 2005. Later, seeing how wide the market in Korea was and unwilling to earn only a little from cosmetics, it simply opened a children's beauty club.
All the stores are decorated in pink and at first glance look like the trendy shops popular online.
The youngest customers are only three years old; they presumably still drink milk every day and can barely speak clearly, yet they are already enjoying a full set of beauty services.
They change into special pink bathrobes and pink hairbands for the spa, and start with foot therapy.
Then a dedicated attendant massages their feet.
While enjoying the foot massage, they receive a moisturizing face mask. (One cannot help remarking: at four or five the skin is as tender as it will ever be; what kind of operation is a moisturizing mask supposed to be?)
But these are only the prelude. After basic skin care, they can choose nail care for hands and feet.
The children pick their own colors and patterns, which is presumably how a sophisticated little lady is trained from an early age.
After the nails come the skin care products and makeup: foundation, lipstick, blush. (Now it is clear why a four-year-old needs a moisturizing mask; with so many chemicals on the face every day, the skin of a four- or five-year-old probably really is in trouble!)
And that is still not the end. How could perfect makeup go without a perfect hairstyle? Finally a stylist washes, cuts and irons the hair into a delicately designed hairdo.
Even the styling that high school students are not allowed to wear can be had here.
Do you think this store is only for girls? Wrong! Mothers also send their precious sons over to enjoy exclusive services.
The boys change into blue spa robes and wear blue hairbands.
They enjoy a happy foot bath alongside the girls.
They get a maintenance mask too; the men's line is going strong.
To become a refined little gentleman, the nails of course cannot be neglected.
One sincerely worries for the girls born after 2005. Will the future boyfriends and husbands who grew up in this environment be able to change a light bulb or fix a toilet? At the sight of a cockroach, their screams will probably be the loudest.
According to the company, all of its products are made from non-toxic ingredients, for example herbal extracts and natural pigments in the cosmetics and skin care products. But Koreans have apparently never heard the saying that every medicine carries some poison: even herbal ingredients have a certain toxicity. And who can guarantee that these non-toxic cosmetics really are non-toxic, any more than the so-called all-natural green organic products on the market live up to their labels?
Even though foreign media have criticized them, such children's beauty clubs keep multiplying in Korea. The chain has added dozens of branches in two years. A visit to the beauty parlor must be booked a week in advance, and every slot is snapped up by parents.
Even without an appointment, children can go to the brand's store counters, try foundation and lipstick shades, and have a professional clerk recommend purchases suited to their skin. In short, everything at an adult cosmetics counter has been copied here.
Some media have interviewed these young Korean customers. Some said they like the club very much and that makeup makes them more delicate and beautiful. For some, putting on makeup before kindergarten every day is as essential as getting up early and brushing their teeth.
Precocity is the main trend of this era. However, excessively promoting premature maturity for the sake of commercial interests is worth pondering. Advocating neither knowledge nor character, but only the blind pursuit of outward beauty, ends with no beauty of appearance and an empty heart. Children do not need alcohol to be happy, and they do not need makeup, because an innocent age is beautiful enough without it.
Q: Worse performance after 1.9.2 update

On my local test system, 1.9.1 performed OK in the ab benchmark:
Requests per second: 2.42 [#/sec] (mean)
Time per request: 2065.882 [ms] (mean)
Time per request: 413.176 [ms] (mean, across all concurrent requests)
Transfer rate: 60.29 [Kbytes/sec] received
After upgrading to 1.9.2, results are significantly worse:
Requests per second: 1.33 [#/sec] (mean)
Time per request: 3769.256 [ms] (mean)
Time per request: 753.851 [ms] (mean, across all concurrent requests)
Transfer rate: 33.14 [Kbytes/sec] received
No errors are reported in the log. Is this normal? The 1.9.2 update was supposed to improve performance.
Or is the performance benefit visible only when the cache is enabled? (I have it disabled on the test system.)
\section{Introduction}
\label{sec:intro}
With the rapid development of information and communication technology, networks have become an indispensable ingredient of social development, and much of the transmitted data contains sensitive or even confidential information, inevitably attracting malicious attacks such as data tampering.
To defend against these growing, rampant and sophisticated cyber threats, cyber defense strategies, which focus on preventing, detecting and responding to attacks or threats in a timely manner so that infrastructure and information are not tampered with, are essential for most entities to protect sensitive information and assets, and have attracted more and more attention from both the computer science and control systems communities.
In this work, we shall investigate the cyber defense strategies at the supervisory control layer, where the system is modeled as discrete-event systems (DES) with discrete state space and event-triggered dynamics. Existing works have proposed plenty of defending strategies, including: 1) synthesis of resilient supervisors \cite{Su2018}-\cite{WLYWL21} such that there does not exist any (covert) sensor-actuator attacker that could induce the plant to the damage state via altering sensor readings and disrupting control signals on those vulnerable observation-command sequence encoded in the supervisor, 2) supervisor obfuscation \cite{Zhu2018}, which computes resilient supervisors that are control equivalent to the original insecure supervisor, that is, on one hand, the original closed-behavior of the closed-loop system is preserved, and on the other hand, the insecure supervisor is obfuscated in the sense that the plant under the new supervisor is not attackable, 3) deploying secure communication channel \cite{WP19}, which guarantees the confidentiality of data to prevent attackers from eavesdropping and intercepting secret information, 4) embedding the mitigation module to disable events that can be defended \cite{YYL20}, 5) designing transition protecting policies against an external intruder, which could delay the event firings, to guarantee that the makespan of the requirement does not drop below a given deadline \cite{HMT21}, 6) synthesizing secret protection strategies such that any event sequence from the initial state that reaches a secret state contains a number of protected events no less than a given threshold \cite{MC21,MC21Conf}, and 7) synthesizing liveness-enforcing supervisors against attacks \cite{YWS22}.
This paper continues our previous work \cite{Zhu2018} on the supervisor obfuscation problem. \cite{Zhu2018} proposes a method that first synthesizes control equivalent supervisors by reducing the problem to the Boolean satisfiability problem (SAT), and then runs a verification procedure on each supervisor to check its attackability. This approach has two shortcomings: 1) it can only compute bounded supervisors by using SAT solvers; thus, the procedure proposed in \cite{Zhu2018} is generally incomplete, and 2) enumerating every behavior-preserving supervisor and verifying its attackability massively degrades the performance of the overall algorithm.
Here we remark that the approach of synthesizing bounded resilient supervisors proposed in \cite{LZS19b,LS20BJ} can also be adopted to address the problem of supervisor obfuscation, by which there is no need to verify the attackability of each control equivalent supervisor. However, it remains a bounded synthesis approach, and thus not complete.
In this work, we propose a sound and complete method to synthesize obfuscated supervisors against covert actuator attackers. The contributions are summarized as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Different from the constraint-based synthesis approach adopted in \cite{Zhu2018}, which can only generate bounded control equivalent supervisors, we provide a new algorithm to construct a structure, named as behavior-preserving structure, such that it exactly encodes all the control equivalent supervisors.
\item Instead of verifying the attackability of the control equivalent supervisors one by one, which is adopted in \cite{Zhu2018}, we propose a new approach to directly extract from the behavior-preserving structure all the control equivalent supervisors that are resilient against any covert actuator attacker. We prove that the proposed algorithm for synthesizing obfuscated supervisors against covert actuator attackers is sound and complete. In comparison, the algorithm proposed in \cite{Zhu2018} is sound, but generally incomplete due to the constraint of bounded state size when generating control equivalent supervisors. The same remark in terms of incompleteness also holds for the bounded synthesis approach in \cite{LZS19b,LS20BJ}.
\end{enumerate}
This paper is organized as follows. In Section \ref{sec:Preliminaries}, we recall the preliminaries which are needed for understanding this paper. In Section \ref{sec:Construction of Behavior-Preserving structure}, we introduce the system architecture and propose a method for constructing a behavior-preserving structure to encode all the control equivalent supervisors. The approach of extracting resilient supervisors from the behavior-preserving structure is presented in Section \ref{sec:Synthesis of Obfuscated Supervisors Against Covert Actuator Attackers}. Conclusions are drawn in Section \ref{sec:conclusions}. A running example is given throughout the paper.
\section{Preliminaries}
\label{sec:Preliminaries}
In this section, we introduce some basic notations and terminologies that will be used in this work, mostly following~\cite{WMW10, CL99, HU79}.
Given a finite alphabet $\Sigma$, let $\Sigma^{*}$ be the free monoid over $\Sigma$ with the empty string $\varepsilon$ being the unit element.
For a string $s$, $|s|$ is defined to be the length of $s$. Given two strings $s, t \in \Sigma^{*}$, we say $s$ is a prefix substring of $t$, written as $s \leq t$, if there exists $u \in \Sigma^{*}$ such that $su = t$, where $su$ denotes the concatenation of $s$ and $u$.
A language $L \subseteq \Sigma^{*}$ is a set of strings.
The prefix closure of $L$ is defined as $\overline{L} = \{u \in \Sigma^{*} \mid (\exists v \in L) \, u\leq v\}$.
If $L = \overline{L}$, then $L$ is prefix-closed. The concatenation of two languages $L_{a}, L_{b} \subseteq \Sigma^{*}$ is defined as $L_{a}L_{b} = \{s_{a}s_{b} \in \Sigma^{*}|s_{a} \in L_{a} \wedge s_{b} \in L_{b}\}$.
$\mathcal{P}_{j}(s)$ denotes the prefix of $s$ of length $j$; in particular, $\mathcal{P}_{0}(\cdot) = \varepsilon$.
$s[i]$ denotes the $i$-th element in $s$. $s^{\downarrow}$ denotes the last event in $s$.
The event set $\Sigma$ is partitioned into $\Sigma = \Sigma_{c} \dot{\cup} \Sigma_{uc} = \Sigma_{o} \dot{\cup} \Sigma_{uo}$, where $\Sigma_{c}$ (respectively, $\Sigma_{o}$) and $\Sigma_{uc}$ (respectively, $\Sigma_{uo}$) are defined as the sets of controllable (respectively, observable) and uncontrollable (respectively, unobservable) events, respectively. As usual, $P_{o}: \Sigma^{*} \rightarrow \Sigma_{o}^{*}$ is the natural projection defined as follows: 1) $P_{o}(\varepsilon) = \varepsilon$, 2) $(\forall \sigma \in \Sigma) \, P_{o}(\sigma) = \sigma$ if $\sigma \in \Sigma_{o}$, otherwise, $P_{o}(\sigma) = \varepsilon$, 3) $(\forall s \in \Sigma^{*}, \sigma \in \Sigma) \, P_{o}(s\sigma) = P_{o}(s)P_{o}(\sigma)$.
We sometimes also write $P_o$ as $P_{\Sigma_o}$, to explicitly illustrate the co-domain $\Sigma_o^*$.
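The natural projection $P_o$ simply erases unobservable events; a minimal sketch (the event names are placeholders of our own choosing):

```python
def natural_projection(s, observable):
    """Erase the events of s (a tuple of events) that are not observable."""
    return tuple(e for e in s if e in observable)

# Hypothetical alphabet: 'a' and 'b' observable, 'u' unobservable.
print(natural_projection(('a', 'u', 'b', 'u', 'a'), {'a', 'b'}))  # ('a', 'b', 'a')
```

The catenative property $P_{o}(s\sigma) = P_{o}(s)P_{o}(\sigma)$ in the definition holds by construction, since filtering distributes over concatenation.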
A finite state automaton $G$ over $\Sigma$ is given by a 5-tuple $(Q, \Sigma, \xi, q_{0}, Q_{m})$, where $Q$ is the state set, $\xi: Q \times \Sigma \rightarrow Q$ is the (partial) transition function, $q_{0} \in Q$ is the initial state, and $Q_{m}$ is the set of marker states.
We write $\xi(q, \sigma)!$ to mean that $\xi(q, \sigma)$ is defined. We define $En_{G}(q) = \{\sigma \in \Sigma|\xi(q, \sigma)!\}$.
$\xi$ is also extended to the (partial) transition function $\xi: Q \times \Sigma^{*} \rightarrow Q$ and the transition function $\xi: 2^{Q} \times \Sigma \rightarrow 2^{Q}$ \cite{WMW10}, where the later is defined as follows: for any $Q' \subseteq Q$ and any $\sigma \in \Sigma$, $\xi(Q', \sigma) = \{q' \in Q|(\exists q \in Q')q' = \xi(q, \sigma)\}$.
Let $L(G)$ and $L_{m}(G)$ denote the closed-behavior and the marked behavior, respectively. $G$ is said to be marker-reachable if some marker state of $G$ is reachable~\cite{WMW10}, i.e., $L_m(G) \neq \varnothing$. When $Q_{m} = Q$, we shall also write $G = (Q, \Sigma, \xi, q_{0})$ for simplicity. $Ac(G)$ stands for the automaton by taking the ``accessible'' part of $G$ \cite{CL99}, i.e., deleting those states (and the associated transitions) that are not reachable from the initial state.
The ``unobservable reach'' of the state $q \in Q$ under the subset of events $\Sigma' \subseteq \Sigma$ is given by $UR_{G, \Sigma - \Sigma'}(q) := \{q' \in Q|[\exists s \in (\Sigma - \Sigma')^{*}] \, q' = \xi(q,s)\}$.
We shall abuse the notation and define $P_{\Sigma'}(G)$ to be the finite state automaton $(2^{Q} - \{\varnothing\}, \Sigma, \delta, UR_{G, \Sigma - \Sigma'}(q_{0}))$ over $\Sigma$, where $UR_{G, \Sigma - \Sigma'}(q_{0}) \in 2^Q-\{\varnothing\}$ is the initial state, and the (partial) transition function $\delta: (2^{Q} - \{\varnothing\}) \times \Sigma \rightarrow (2^{Q} - \{\varnothing\})$ is defined as follows:
\begin{enumerate}[(1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $\varnothing \neq Q' \subseteq Q$ and any $\sigma \in \Sigma'$, if $\xi(Q', \sigma) \neq \varnothing$, then $\delta(Q', \sigma) = UR_{G, \Sigma - \Sigma'}(\xi(Q', \sigma))$, where $UR_{G, \Sigma - \Sigma'}(Q'') = \bigcup\limits_{q \in Q''}UR_{G, \Sigma - \Sigma'}(q)$
for any $\varnothing \neq Q'' \subseteq Q$
\item For any $\varnothing \neq Q' \subseteq Q$ and any $\sigma \in \Sigma - \Sigma'$, if there exists $q \in Q'$ such that $\xi(q, \sigma)!$, then $\delta(Q', \sigma) = Q'$.
\end{enumerate}
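The unobservable reach used throughout this construction is a plain graph search over transitions labelled with events outside $\Sigma'$; a minimal sketch over a dictionary-encoded transition function (the three-state automaton below is a made-up example):

```python
from collections import deque

def unobservable_reach(delta, q, unobs):
    """States reachable from q using only transitions labelled by events in unobs.

    delta: dict mapping (state, event) -> state, a partial transition function.
    """
    seen, frontier = {q}, deque([q])
    while frontier:
        p = frontier.popleft()
        for (state, event), target in delta.items():
            if state == p and event in unobs and target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

# Made-up 3-state automaton: 0 -u-> 1 -a-> 2, with 'u' the only
# unobservable event; the reach of 0 keeps 0 and 1 but stops before 'a'.
delta = {(0, 'u'): 1, (1, 'a'): 2}
print(unobservable_reach(delta, 0, {'u'}))  # {0, 1}
```

The same routine, applied after each observable step, yields the transition function $\delta$ of $P_{\Sigma'}(G)$ defined above.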
As usual, for any two finite state automata $G_{1} = (Q_{1}, \Sigma_{1}, \xi_{1}, q_{1,0}, Q_{1,m})$ and $G_{2} = (Q_{2}, \Sigma_{2}, \xi_{2}, q_{2,0}, Q_{2,m})$, where $En_{G_{1}}(q) = \{\sigma \in \Sigma_1|\xi_{1}(q, \sigma)!\}$ and $En_{G_{2}}(q) = \{\sigma \in \Sigma_2|\xi_{2}(q, \sigma)!\}$, their synchronous product \cite{CL99} is defined to be $G_{1}||G_{2} := (Q_{1} \times Q_{2}, \Sigma_{1} \cup \Sigma_{2}, \zeta, (q_{1,0}, q_{2,0}), Q_{1,m} \times Q_{2,m})$, where the (partial) transition function $\zeta$ is defined as follows, for any $(q_{1}, q_{2}) \in Q_{1} \times Q_{2}$ and $\sigma \in \Sigma = \Sigma_1 \cup \Sigma_2$:
\[
\begin{aligned}
& \zeta((q_{1}, q_{2}), \sigma) := \\ & \left\{
\begin{array}{lcl}
(\xi_{1}(q_{1}, \sigma), \xi_{2}(q_{2}, \sigma)) & & {\rm if} \, {\sigma \in En_{G_{1}}(q_{1}) \cap En_{G_{2}}(q_{2}),}\\
(\xi_{1}(q_{1}, \sigma), q_{2}) & & {\rm if} \, {\sigma \in En_{G_{1}}(q_{1}) \backslash \Sigma_{2},}\\
(q_{1}, \xi_{2}(q_{2}, \sigma)) & & {\rm if} \, {\sigma \in En_{G_{2}}(q_{2}) \backslash \Sigma_{1},}\\
{\rm not \, defined} & & {\rm otherwise.}
\end{array} \right.
\end{aligned}
\]
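The case analysis defining $\zeta$ translates directly into code. Below is a minimal Python sketch (the dictionary representation and function name are our own) that explores only the reachable part of the product:

```python
from collections import deque

def sync_product(d1, sig1, q10, d2, sig2, q20):
    """Synchronous product G1 || G2 of two partial automata.

    Each automaton is given by (delta: {(q, e): q'}, alphabet, initial
    state).  Shared events move both components in lockstep; private
    events move one component and leave the other unchanged, exactly as
    in the definition of zeta above.
    """
    init = (q10, q20)
    delta, seen, todo = {}, {init}, deque([init])
    while todo:
        q1, q2 = todo.popleft()
        for e in sig1 | sig2:
            t1, t2 = d1.get((q1, e)), d2.get((q2, e))
            if e in sig1 and e in sig2:      # shared: both must be defined
                nxt = (t1, t2) if t1 is not None and t2 is not None else None
            elif e in sig1:                  # private to G1
                nxt = (t1, q2) if t1 is not None else None
            else:                            # private to G2
                nxt = (q1, t2) if t2 is not None else None
            if nxt is None:
                continue                     # "not defined otherwise"
            delta[((q1, q2), e)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return delta, init
```

Marked states of the product, when needed, are simply the pairs in $Q_{1,m} \times Q_{2,m}$.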
\textbf{Notation.} Let $\mathbb{N}$ be the set of nonnegative integers, and $\mathbb{N}^{+}$ the set of positive integers. Let $[m:n] := \{m,m+1,\cdots,n\}$ ($m \in \mathbb{N}, n \in \mathbb{N}$).
\section{Construction of Control-Equivalent Supervisors}
\label{sec:Construction of Behavior-Preserving structure}
In this section, we first introduce the system architecture under actuator attack. We then briefly explain the main idea of our solution methodology. Finally, given the plant and the supervisor, we present the procedure for constructing the behavior-preserving structure that encodes all the control-equivalent supervisors.
\subsection{Component models}
\label{subsec:Component models}
\begin{figure}[htp]
\begin{center}
\includegraphics[height=2.8cm]{System_Architecture.pdf}
\caption{Supervisory control architecture under actuator attack}
\label{fig:architecture}
\end{center}
\end{figure}
The supervisory control architecture under actuator attack is illustrated in Fig. \ref{fig:architecture}, which consists of the following components: 1) Plant $G$, 2) Supervisor under attack $BT(S)^{A}$, 3) Command execution automaton under attack $CE^{A}$, and 4) Actuator attacker $\mathcal{A}$.
Next, following \cite{LZS19}, we shall briefly explain how we can model these components as finite state automata.
\subsubsection{Plant}
\label{subsubsec:Plant}
The plant is modeled as a finite state automaton $G = (Q, \Sigma, \xi, q^{init}, Q_{d})$, where $Q_{d}$ is the set of damage states into which the actuator attacker aims to drive the plant.
\subsubsection{Supervisor}
\label{subsubsec:Supervisor}
The original supervisor is modeled as a finite state automaton $S = (Q_{s}, \Sigma, \xi_{s}, q_{s}^{init})$ satisfying two constraints:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item (controllability) For any state $q \in Q_{s}$ and any event $\sigma \in \Sigma_{uc}$, $\xi_{s}(q, \sigma)!$.
\item (observability) For any state $q \in Q_{s}$ and any event $\sigma \in \Sigma_{uo}$, if $\xi_{s}(q, \sigma)!$, then $\xi_{s}(q, \sigma) = q$.
\end{itemize}
The control command issued by $S$ at state $q \in Q_{s}$ is defined to be $\Gamma(q) = En_{S}(q) = \{\sigma \in \Sigma|\xi_{s}(q,\sigma)!\} \in \Gamma = \{\gamma \subseteq \Sigma|\Sigma_{uc} \subseteq \gamma\}$, where $\Gamma$ is the set of control commands. We assume that the supervisor $S$ immediately issues a control command in $\Gamma$ to the plant whenever an event $\sigma \in \Sigma_{o}$ is received or when the system initiates. Next, based on the supervisor $S$, we construct a bipartite structure to explicitly encode the observation reception and command sending phases. Such a structure is called a bipartite supervisor \cite{LZS19}; it is denoted by $BT(S) = (Q_{bs}, \Sigma_{bs}, \xi_{bs}, q_{bs}^{init})$, and its construction procedure is given as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bs} = Q_{s} \cup Q_{s}^{com}$, where $Q_{s}^{com}:= \{q^{com} \mid q \in Q_s\}$, and $q \in Q_{s}$ is a reaction state ready to observe any event in $\Gamma(q)$, and $q^{com} \in Q_{s}^{com}$ is a control state corresponding to $q$, which is ready to issue the control command $\Gamma(q)$.
\item $\Sigma_{bs} = \Sigma \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q^{com} \in Q_{s}^{com}) \, \xi_{bs}(q^{com}, \Gamma(q)) = q$.
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{uo}) \, \xi_{s}(q, \sigma)! \Rightarrow \xi_{bs}(q, \sigma) = \xi_{s}(q, \sigma) =q$.
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{o}) \, \xi_{s}(q, \sigma)! \Rightarrow \xi_{bs}(q, \sigma) = (\xi_{s}(q, \sigma))^{com}$.
\end{enumerate}
\item $q_{bs}^{init} = (q_{s}^{init})^{com}$
\end{enumerate}
The basic idea of constructing a bipartite supervisor is: 1) at any control state $q^{com}$, a control command $\Gamma(q)$ should be issued, which leads to a reaction state $q$ (Case 3.a), 2) at any reaction state $q$, any unobservable event, if defined in $S$, is a self-loop transition (Case 3.b), and any observable event, if defined in $S$, would lead to a control state $(\xi_{s}(q, \sigma))^{com}$ (Case 3.c).
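The four construction steps above can be sketched in Python. The pairing of each supervisor state $q$ with a control state `(q, 'com')`, and the encoding of control commands as frozensets, are our own illustrative choices:

```python
def bipartite_supervisor(delta_s, states, init, sigma_o, sigma_uo):
    """Build BT(S) from a supervisor S, following steps 1-4 above.

    The supervisor is a dict {(q, e): q'}; each state q of S is kept as
    a reaction state and paired with a fresh control state (q, 'com').
    Control commands Gamma(q) = En_S(q) are modelled as frozensets.
    """
    def gamma(q):
        return frozenset(e for (p, e) in delta_s if p == q)

    delta_bs = {}
    for q in states:
        # 3.a: at the control state, issuing Gamma(q) leads to q
        delta_bs[((q, 'com'), gamma(q))] = q
    for (q, e), q2 in delta_s.items():
        if e in sigma_uo:
            delta_bs[(q, e)] = q            # 3.b: unobservable self-loop
        else:
            delta_bs[(q, e)] = (q2, 'com')  # 3.c: observable -> control state
    return delta_bs, (init, 'com')          # 4: start at a control state
```

Controllability of $S$ guarantees every $\Gamma(q)$ contains $\Sigma_{uc}$, so each `gamma(q)` is indeed a valid control command.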
In this work, the set of events in $\Sigma$ that are observable to the actuator attacker is denoted by $\Sigma_{o,a} \subseteq \Sigma$. All the control commands in $\Gamma$ are observable to the actuator attacker. The set of actuator-attackable events is denoted by $\Sigma_{c,a} \subseteq \Sigma_{c}$, i.e., the attacker can enable or disable the execution of events in $\Sigma_{c,a}$ at the plant.
We assume that $\Sigma_{c,a} \subseteq \Sigma_{o,a}$.
\textbf{Remark III.1:} The assumptions of $\Sigma_{o,a} \subseteq \Sigma_{o}$ and $\Sigma_{c} \subseteq \Sigma_{o}$, which are imposed in \cite{Zhu2018}, are relaxed in this work. Thus, we consider a more general setup than \cite{Zhu2018}.
Next, we shall encode the attack effects into $BT(S)$ to generate the bipartite supervisor under attack, which is denoted by $BT(S)^{A} = (Q_{bs}^{a}, \Sigma_{bs}^{a}, \xi_{bs}^{a}, q_{bs}^{a,init})$, where:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bs}^{a} = Q_{bs} \cup \{q^{detect}\}$
\item $\Sigma_{bs}^{a} = \Sigma \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{bs}^{a})(\forall \sigma \in \Sigma \cup \Gamma) \xi_{bs}(q, \sigma) = q' \Rightarrow \xi_{bs}^{a}(q, \sigma) = q'$
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{c,a} \cap \Sigma_{uo}) \neg\xi_{bs}(q, \sigma)! \Rightarrow \xi_{bs}^{a}(q, \sigma) = q$
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{o}) \neg\xi_{bs}(q, \sigma)! \Rightarrow \xi_{bs}^{a}(q, \sigma) = q^{detect}$
\end{enumerate}
\item $q_{bs}^{a,init} = q_{bs}^{init}$
\end{enumerate}
Step 3.a retains all the transitions originally defined in $BT(S)$. In Step 3.b, for any reaction state $q \in Q_{s}$, the transitions labelled by unobservable and attackable events in $\Sigma_{c,a} \cap \Sigma_{uo}$, which are not originally defined at the state $q$ in $BT(S)$, are added. In Step 3.c, for any reaction state $q \in Q_{s}$, the transitions labelled by observable events, which are not originally defined at the state $q$ in $BT(S)$, would lead to the newly added state $q^{detect}$, with the interpretation that the supervisor has received some observation that should not have occurred based on the supervisor structure, i.e., the actuator attacker is detected, and then the system operation would be halted.
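Steps 3.a--3.c can be sketched as a small augmentation pass over the transition dictionary of $BT(S)$; the state name `'detect'` stands for $q^{detect}$ (representation is ours):

```python
def attacked_bipartite_supervisor(delta_bs, reaction_states,
                                  sigma_o, sigma_uo, sigma_ca):
    """Encode attack effects into BT(S), following steps 3.a-3.c above.

    `reaction_states` are the states of the original supervisor S.
    Existing transitions are kept (3.a); missing unobservable attackable
    events become self-loops (3.b); missing observable events lead to
    the fresh state 'detect' (3.c), where the attacker is discovered.
    """
    delta_a = dict(delta_bs)                       # 3.a: keep everything
    for q in reaction_states:
        for e in sigma_ca & sigma_uo:              # 3.b
            delta_a.setdefault((q, e), q)
        for e in sigma_o:                          # 3.c
            delta_a.setdefault((q, e), 'detect')
    return delta_a
```

`setdefault` only adds a transition when none was originally defined, matching the $\neg\xi_{bs}(q, \sigma)!$ guards.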
\subsubsection{Command execution automaton}
\label{subsubsec:command execution automaton}
To explicitly encode the phase from receiving a control command in $\Gamma$ to executing an event in $\Sigma$ at the plant, we construct the command execution automaton $CE = (Q_{ce}, \Sigma_{ce}, \xi_{ce}, q_{ce}^{init})$, where $Q_{ce} = \{q^{\gamma}|\gamma \in \Gamma\} \cup \{q_{ce}^{init}\}$, $\Sigma_{ce} = \Gamma \cup \Sigma$ and the (partial) transition function $\xi_{ce}: Q_{ce} \times \Sigma_{ce} \rightarrow Q_{ce}$ is defined as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall \gamma \in \Gamma) \xi_{ce}(q_{ce}^{init}, \gamma) = q^{\gamma}$
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{o}) \xi_{ce}(q^{\gamma}, \sigma) = q_{ce}^{init}$.
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{uo}) \xi_{ce}(q^{\gamma}, \sigma) = q^{\gamma}$.
\end{enumerate}
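A sketch of this construction, with $\Gamma$ enumerated explicitly (feasible only for small $\Sigma_{c}$, since $|\Gamma| = 2^{|\Sigma_{c}|}$); the dictionary encoding and the state name `'init'` for $q_{ce}^{init}$ are our own:

```python
from itertools import combinations

def command_execution(sigma, sigma_uc, sigma_o):
    """Build the command execution automaton CE (steps 1-3 above).

    Gamma is the set of subsets of `sigma` containing all uncontrollable
    events; the states are 'init' plus one state per command gamma,
    identified with gamma itself.
    """
    sigma_c = sorted(sigma - sigma_uc)
    commands = [frozenset(sigma_uc) | frozenset(extra)
                for r in range(len(sigma_c) + 1)
                for extra in combinations(sigma_c, r)]
    delta = {}
    for g in commands:
        delta[('init', g)] = g           # 1: issuing gamma enters state gamma
        for e in g:
            if e in sigma_o:
                delta[(g, e)] = 'init'   # 2: observable event resets CE
            else:
                delta[(g, e)] = g        # 3: unobservable self-loop
    return delta, 'init'
```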
Next, we encode the attack effects into $CE$ to generate the command execution automaton under attack, which is denoted by $CE^{A} = (Q_{ce}^{a}, \Sigma \cup \Gamma, \xi_{ce}^{a}, q_{ce}^{a,init})$, where $Q_{ce}^{a} = Q_{ce}$, $q_{ce}^{a,init} = q_{ce}^{init}$ and the (partial) transition function $\xi_{ce}^{a}: Q_{ce}^{a} \times (\Sigma \cup \Gamma) \rightarrow Q_{ce}^{a}$ is defined as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{ce}^{a})(\forall \sigma \in \Sigma \cup \Gamma) \xi_{ce}(q, \sigma) = q' \Rightarrow \xi_{ce}^{a}(q, \sigma) = q'$
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \Sigma_{c,a} \cap \Sigma_{o}) \neg\xi_{ce}(q^{\gamma}, \sigma)! \Rightarrow \xi_{ce}^{a}(q^{\gamma}, \sigma) = q_{ce}^{a,init}$
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \Sigma_{c,a} \cap \Sigma_{uo}) \neg\xi_{ce}(q^{\gamma}, \sigma)! \Rightarrow \xi_{ce}^{a}(q^{\gamma}, \sigma) = q^{\gamma}$
\end{enumerate}
Case 1 retains all the transitions originally defined in $CE$. In Case 2 and Case 3, the attack effects are encoded: for any state $q^{\gamma}$, the transitions labelled by attackable events, which are not originally defined at the state $q^{\gamma}$ in $CE$, are added, where the observable events would lead to the initial state $q_{ce}^{init}$ (Case 2), and the unobservable events would lead to self-loop transitions (Case 3).
\subsubsection{Actuator attacker}
\label{subsubsec:Actuator attacker}
The actuator attacker is modeled by a finite state automaton $\mathcal{A} = (Q_{a}, \Sigma_{a}, \xi_{a}, q_{a}^{init})$, where $\Sigma_{a} = \Sigma \cup \Gamma$. There are two conditions that need to be satisfied:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item ($\mathcal{A}$-controllability) For any state $q \in Q_{a}$ and any event $\sigma \in \Sigma_{a,uc} := \Sigma_{a} - \Sigma_{c,a}$, $\xi_{a}(q, \sigma)!$
\item ($\mathcal{A}$-observability) For any state $q \in Q_{a}$ and any event $\sigma \in \Sigma_{a,uo} := \Sigma_{a} - (\Sigma_{o,a} \cup \Gamma)$, if $\xi_{a}(q, \sigma)$!, then $\xi_{a}(q, \sigma) = q$.
\end{itemize}
$\mathcal{A}$-controllability states that the actuator attacker can only disable events in $\Sigma_{c,a}$. $\mathcal{A}$-observability states that the actuator attacker can only make a state change after observing an event in $\Sigma_{o,a} \cup \Gamma$. In the following text, we shall refer to $(\Sigma_{o,a}, \Sigma_{c,a})$ as the attack constraint, and $\mathscr{C}_{ac} = (\Sigma_{c,a}, \Sigma_{o,a} \cup \Gamma)$ as the attacker's control constraint.
Based on the above-constructed component models, including the plant $G$, the bipartite supervisor under attack $BT(S)^{A}$, the command execution automaton under attack $CE^{A}$ and the actuator attacker $\mathcal{A}$, the closed-loop system under attack is denoted by $CLS^{A} = G||CE^{A}||BT(S)^{A}||\mathcal{A} = (Q_{b}^{a}, \Sigma_{b}^{a}, \xi_{b}^{a}, q_{b}^{a,init}, Q_{b,m}^{a})$.
\textbf{Definition III.1 (Covertness):} Given $G$, $BT(S)^{A}$ and $CE^{A}$, an actuator attacker $\mathcal{A}$ is said to be covert against the supervisor $S$ w.r.t. the attack constraint $(\Sigma_{o,a}, \Sigma_{c,a})$ if any state in $\{(q_{g},q_{ce}^{a},q_{bs}^{a}, q_{a}) \in Q_{b}^{a}| q_{bs}^{a} = q^{detect}\}$ is not reachable in $CLS^{A}$.
\textbf{Definition III.2 (Damage-reachable):} Given $G$, $BT(S)^{A}$ and $CE^{A}$, an actuator attacker $\mathcal{A}$ is said to be damage-reachable against the supervisor $S$ w.r.t. the attack constraint $(\Sigma_{o,a}, \Sigma_{c,a})$ if $L_{m}(CLS^{A}) \neq \varnothing$.
\textbf{Definition III.3 (Resilience):} Given $G$, a supervisor $S$ is said to be resilient if there does not exist any covert and damage-reachable actuator attacker $\mathcal{A}$ against $S$ w.r.t. the attack constraint $(\Sigma_{o,a}, \Sigma_{c,a})$.
In this work, we assume that the original supervisor $S$ is not resilient for the plant $G$, i.e., there exists a covert and damage-reachable actuator attacker $\mathcal{A}$ against $S$ w.r.t. the attack constraint $(\Sigma_{o,a}, \Sigma_{c,a})$.
\textbf{Definition III.4 (Control equivalence):} Given $G$ and $S$, a supervisor $S'$ (bipartite supervisor $BT(S')$, respectively) is said to be control equivalent to $S$ ($BT(S)$, respectively) if $L(G||S) = L(G||S')$ ($P_{\Sigma}(L(G||CE||BT(S))) = P_{\Sigma}(L(G||CE||BT(S')))$, respectively)\footnote{By construction, $CE$ indeed encodes all the bipartite supervisors, thus, $L(BT(S)) \subseteq L(CE)$ and $L(BT(S')) \subseteq L(CE)$, implying that $L(G||CE||BT(S)) = L(G||BT(S))$ and $L(G||CE||BT(S')) = L(G||BT(S'))$. Hence, the control equivalence could also be defined as $P_{\Sigma}(L(G||BT(S))) = P_{\Sigma}(L(G||BT(S')))$.}.
With the above definitions, we are ready to introduce the problem to be solved in this work.
\textbf{Problem 1:} Given $G$ and $S$, find a structure that encodes the set of all the resilient supervisors that are control equivalent to $S$.
\textbf{Problem 2:} Given $G$ and $S$, compute a resilient supervisor that is control equivalent to $S$.
\textbf{Remark III.2:} If we can solve \textbf{Problem 1}, then we can also solve \textbf{Problem 2} by extracting one supervisor from the structure that encodes the set of all the resilient supervisors that are control equivalent to $S$. Thus, in this work, we mainly focus on \textbf{Problem 1}. Later, in Section \ref{subsec:Generation of Obfuscated Supervisors Against Covert Actuator Attackers}, we will show how the extraction can easily be carried out.
\textbf{Example III.1} Consider the plant $G$ and supervisor $S$ shown in Fig. \ref{fig:G_S}. $\Sigma = \{a,b,c,d,e\}$. $\Sigma_{o} = \{a,c,d\}$. $\Sigma_{uo} = \{b,e\}$. $\Sigma_{c} = \{a,d,e\}$. $\Sigma_{uc} = \{b,c\}$. $\Sigma_{o,a} = \{b,c,d,e\}$. $\Sigma_{c,a} = \{e\}$. The damage state is state 10, i.e., $Q_{d} = \{10\}$. We have $L(G||S) = \overline{\{acd,bac\}}$.
Based on the above model constructions, the bipartite supervisor $BT(S)$, the bipartite supervisor under attack $BT(S)^{A}$, the command execution automaton $CE$ and the command execution automaton under attack $CE^{A}$ are illustrated in Fig. \ref{fig:BTS_BTSA} and Fig. \ref{fig:CE_CEA}, where the differences between $BT(S)$ and $BT(S)^{A}$, and between $CE$ and $CE^{A}$, are marked in blue. It can be checked that $S$ is not resilient for $G$, as there exist covert and damage-reachable attackers. For example, an attacker could launch an enablement attack to enable the execution of event $e$ after observing that $S$ issues the initial control command $\{a,b,c\}$. Then $G$ transits to state 7. Since $e$ is unobservable, the command $\{a,b,c\}$ is reused, and event $a$ is executed and observed by $S$, which triggers the sending of command $\{b,c,d\}$. After that, event $d$ is executed and triggers the sending of command $\{b,c,d\}$, resulting in the execution of event $c$, upon which the damage state is reached.
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[height=0.6in]{G.pdf}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[height=0.3in]{S.pdf}
\end{minipage}
}
\centering
\caption{(a) Plant $G$. (b) Supervisor $S$.}
\label{fig:G_S}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[height=0.8in]{BTS.pdf}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.52\linewidth}
\centering
\includegraphics[height=0.8in]{BTSA.pdf}
\end{minipage}
}
\centering
\caption{(a) $BT(S)$. (b) $BT(S)^{A}$.}
\label{fig:BTS_BTSA}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[height=1in]{CE.pdf}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.4\linewidth}
\centering
\includegraphics[height=1in]{CEA.pdf}
\end{minipage}
}
\centering
\caption{(a) $CE$. (b) $CE^{A}$.}
\label{fig:CE_CEA}
\end{figure}
\subsection{Main idea}
\label{subsec:Main idea}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.7cm]{SolutionMethodology.pdf}
\caption{The procedure of the proposed solution methodology}
\label{fig:Solution_Methodology}
\end{center}
\end{figure}
Before we delve into the details of our proposed approach for supervisor obfuscation, we shall present the high-level idea of the solution methodology, as illustrated in Fig. \ref{fig:Solution_Methodology}. Firstly, at Step I (Section \ref{subsec:Generation of control equivalent bipartite supervisors}), based on $G$, $S$ and $CE$, we shall construct the behavior-preserving structure $BPNS$ to exactly encode all the control equivalent bipartite supervisors. Then, at Step II (Section \ref{subsec:Synthesis of covert actuator attackers for control equivalent bipartite supervisors}), based on $G$, $CE^{A}$ and $BPNS^{A}$, which is the version of $BPNS$ under attack, we shall synthesize $\hat{\mathcal{A}}$, which encodes all the damage strings that could be exploited by covert and damage-reachable actuator attackers against the control equivalent bipartite supervisors encoded in $BPNS$. Finally, at Step III (Section \ref{subsec:Generation of Obfuscated Supervisors Against Covert Actuator Attackers}), based on $G$, $CE^{A}$, $BPNS^{A}$ and $\hat{\mathcal{A}}$, we shall carry out an iterative synthesis to generate $ONS$, the solution to \textbf{Problem 1}, which exactly encodes all the resilient and control equivalent supervisors. We can then extract from $ONS$ a resilient and control equivalent supervisor $OS$, which is the solution to \textbf{Problem 2}.
\subsection{Generation of control equivalent bipartite supervisors}
\label{subsec:Generation of control equivalent bipartite supervisors}
In this part, we shall introduce the procedure to construct the behavior-preserving structure, where all the bipartite supervisors that are control equivalent to the bipartite supervisor $BT(S)$ are included. The construction procedure consists of the following three steps:
\textbf{Step 1}: Firstly, to encode all the control equivalent supervisors, we need to know the closed-behavior of the closed-loop system under the supervisor $S$ in the absence of attack; thus, we compute $G||S$. Then, since for any supervisor $S'$ we have $L(BT(S')) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ and any unobservable event defined in $BT(S')$ is a self-loop transition, we compute the subset construction $B = P_{\Sigma_{o}}(G||S) = (Q_{b}, \Sigma, \xi_{b}, q_{b}^{init})$, where $|Q_{b}| \leq 2^{|Q| \times |Q_{s}|}$. In fact, $B$ can be regarded as a structure built upon the observer of $G||S$ by adding the self-loop transitions labelled by the unobservable events that could possibly occur at each state of the observer of $G||S$. In addition, it can be checked that, for any $(q_{g}, q_{s}), (q_{g}', q_{s}') \in q \in Q_{b}$, we have $q_{s} = q_{s}'$, since all the unobservable events in $S$ are self-loop transitions. Thus, for any $q, q' \in Q_{b}$ and any $\sigma \in \Sigma_{o}$ such that $\xi_{b}(q, \sigma) = q'$, where $q_{s}$ ($q_{s}'$, respectively) is the supervisor state in the state $q$ ($q'$, respectively), we know that 1) $En_{B}(q')$ contains all the events that could happen at the plant $G$ when the supervisor $S$ issues the corresponding control command at the state $q_{s}'$, and 2) the state of the supervisor $S$ would transit from $q_{s}$ to $q_{s}'$ upon the observation of $\sigma$. Hence, we can use $En_{B}(q')$ as the criterion to determine, for each observation, all the possible control commands such that the generated bipartite supervisor $BT(S')$ is control equivalent to the original one, i.e., $BT(S)$. In the next step, we shall explain how to realize this procedure.
\textbf{Example III.2} Given $G$ and $S$ shown in Fig. \ref{fig:G_S}, the automaton $B$ is shown in Fig. \ref{fig:B}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=0.75cm]{B.pdf}
\caption{The computed automaton $B$}
\label{fig:B}
\end{center}
\end{figure}
\textbf{Step 2}: Based on $B = P_{\Sigma_{o}}(G||S)$, we shall generate a bipartite structure similar to $BT(S)$, where upon each observation in $P_{\Sigma_{o}}(G||S)$, we add all the possible control commands under which the closed-behavior of the closed-loop system $L(G||S)$ is preserved. Such a structure is called the bipartite behavior-preserving structure, denoted by $BPS = (Q_{bps}, \Sigma_{bps}, \xi_{bps}, q_{bps}^{init})$, where
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bps} = Q_{b} \cup Q_{b}^{com} \cup \{q^{dump}\}$, where $Q_{b}^{com} = \{q^{com}|q \in Q_{b}\}$
\item $\Sigma_{bps} = \Sigma \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q \in Q_{b})(\forall \gamma \in \Gamma)\mathcal{C}_{1} \wedge \mathcal{C}_{2} \Rightarrow \xi_{bps}(q^{com}, \gamma) = q$, where
\begin{enumerate}[i.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $\mathcal{C}_{1} := En_{B}(q) \subseteq \gamma$
\item $\mathcal{C}_{2} := (\forall (q_{g},q_{s}) \in q)En_{G}(q_{g}) \cap \gamma \subseteq En_{B}(q)$
\end{enumerate}
\item $(\forall q \in Q_{b})(\forall \sigma \in \Sigma_{uo})\xi_{b}(q, \sigma)! \Rightarrow \xi_{bps}(q, \sigma) = q$
\item $(\forall q \in Q_{b})(\forall \sigma \in \Sigma_{o})\xi_{b}(q, \sigma)! \Rightarrow \xi_{bps}(q, \sigma) = (\xi_{b}(q, \sigma))^{com}$
\item $(\forall q \in Q_{b})(\forall \sigma \in \Sigma_{uo})\neg\xi_{b}(q, \sigma)! \Rightarrow \xi_{bps}(q, \sigma) = q$
\item $(\forall q \in Q_{b})(\forall \sigma \in \Sigma_{o})\neg\xi_{b}(q, \sigma)! \Rightarrow \xi_{bps}(q, \sigma) = q^{dump}$
\item $(\forall \sigma \in \Sigma \cup \Gamma)\xi_{bps}(q^{dump}, \sigma) = q^{dump}$
\end{enumerate}
\item $q_{bps}^{init} = (q_{b}^{init})^{com}$
\end{enumerate}
In the state set $Q_{bps}$, any state $q^{com} \in Q_{b}^{com}$ is a control state, which is ready to issue a control command, and any state $q \in Q_{b}$ is a reaction state, which is ready to receive an observation. After a control command is issued at a control state $q^{com}$, $BPS$ transits to the reaction state $q$. The state $q^{dump}$ denotes the situation where an event $\sigma \in \Sigma_{o}$ that is not originally defined at the state $q \in Q_{b}$ in $B = P_{\Sigma_{o}}(G||S)$ occurs at the state $q$ in $BPS$. The initial state of $BPS$ is thus the initial control state, denoted by $q_{bps}^{init} = (q_{b}^{init})^{com}$. The definition of the (partial) transition function $\xi_{bps}$ is given in Step 3. Case 3.a adds the control commands that can be issued at any control state $q^{com}$, and the criterion for adding a control command $\gamma \in \Gamma$ at the state $q^{com}$ is: 1) the sending of $\gamma$ should make sure that all the events in $En_{B}(q)$ can occur at the plant $G$ once $\gamma$ is received, captured by the condition $\mathcal{C}_{1} := En_{B}(q) \subseteq \gamma$; 2) according to the construction of $B = P_{\Sigma_{o}}(G||S)$, any state $q \in Q_{b}$ such that $(\exists t \in \Sigma_{o}^{*})\xi_{b}(q_{b}^{init}, t) = q$ already contains all the possible estimated states of the plant $G$ w.r.t. the observation sequence $t$. Hence, for any possible plant state $q_{g}$ in the state $q$, the sending of $\gamma$ should make sure that any event that might be executed at the state $q_{g}$ under $\gamma$ does not go beyond $En_{B}(q)$, captured by the condition $\mathcal{C}_{2} := (\forall (q_{g},q_{s}) \in q)En_{G}(q_{g}) \cap \gamma \subseteq En_{B}(q)$. The conditions $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ together enforce that, at the control state $q^{com}$, any control command $\gamma$ satisfying these two conditions enables the plant to execute exactly the events in $En_{B}(q)$.
Case 3.b and Case 3.c retain all the transitions originally defined in $B$. Next, we shall explain why we add Case 3.d and Case 3.e. Our goal is to construct a structure that includes all the bipartite supervisors that are control equivalent to $BT(S)$, where for any supervisor $S' = (Q_{s'}, \Sigma, \xi_{s'}, q_{s'}^{init})$, at any reaction state $q \in Q_{s'}$ of its bipartite version $BT(S')$, all the events in the control command issued at the state $q^{com}$ should be defined. Since Case 3.a has already added all the possible control commands that ensure control equivalence, our basic idea is:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Firstly, for any reaction state $q \in Q_{b}$ in $BPS$, we shall carry out Case 3.d and Case 3.e to complete all the transitions labelled by events in $\Sigma$ that are not originally defined at the state $q \in Q_{b}$ in $B$. The completed unobservable events would lead to self-loop transitions and completed observable events would result in transitions to the state $q^{dump}$, where any control command is defined, since these completed observable events would not occur at all under the control commands defined at the control state $q^{com}$, and thus the control equivalence would not be violated no matter which command is issued at the state $q^{dump}$.
\item Then we use $CE$ to refine the above-constructed structure to obtain all the control equivalent bipartite supervisors, which is done in \textbf{Step 3} below.
\end{enumerate}
In Case 3.f, since $BPS$ has transited from some state $q \in Q_{b}$ to the state $q^{dump}$, i.e., some observable event that would never occur under the control of the command issued at the state $q^{com}$ happens, we can safely add the transitions labelled by all the events in $\Sigma \cup \Gamma$ at the state $q^{dump}$ as now the closed-behavior of the closed-loop system would not be affected no matter which control command the supervisor issues.
\textbf{Example III.3} Based on $B$ shown in Fig. \ref{fig:B}, the constructed $BPS$ is illustrated in Fig. \ref{fig:BPS}. We shall briefly explain the construction procedure by taking two states as examples. At the initial control state $\{(0,0),(5,0)\}^{com}$, 1) according to $\mathcal{C}_{1}$ of Case 3.a, we need $En_{B}(\{(0,0),(5,0)\}) = \{a,b\} \subseteq \gamma$, and 2) according to $\mathcal{C}_{2}$ of Case 3.a, we need $En_{G}(0) \cap \gamma = \{a,b,e\} \cap \gamma \subseteq En_{B}(\{(0,0),(5,0)\}) = \{a,b\}$, which implies that event $e$ should not be contained in any command, and $En_{G}(5) \cap \gamma = \{a\} \cap \gamma \subseteq \{a,b\}$, which always holds. Thus, the control commands satisfying $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are $\{a,b,c\}$ and $\{a,b,c,d\}$, as the uncontrollable events $b$ and $c$ are always contained in any command. Hence, there are two transitions, labelled by $\{a,b,c\}$ and $\{a,b,c,d\}$, from the initial state to the reaction state $\{(0,0),(5,0)\}$. This means that at the initial state, to ensure control equivalence, a supervisor could only issue command $\{a,b,c\}$ or $\{a,b,c,d\}$. At the state $\{(0,0),(5,0)\}$, according to Case 3.b and Case 3.c, there are two transitions: one labelled by event $b$, which is a self-loop, and one labelled by event $a$, which leads to the state $(\xi_{b}(\{(0,0),(5,0)\}, a))^{com} = \{(1,1),(6,1)\}^{com}$. According to Case 3.d, a self-loop transition labelled by the unobservable event $e$ is added at the state $\{(0,0),(5,0)\}$. According to Case 3.e, two transitions labelled by the observable events $c$ and $d$ are added at the state $\{(0,0),(5,0)\}$, which lead to the state $q^{dump}$. It can be checked that events $c$ and $d$ would not occur at all under the initial commands $\{a,b,c\}$ and $\{a,b,c,d\}$.
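The admissibility test $\mathcal{C}_{1} \wedge \mathcal{C}_{2}$ of Case 3.a can be replayed on the data of Example III.3. The function below is an illustrative sketch (names and representations are ours); $\Gamma$ is enumerated from $\Sigma_{uc} = \{b,c\}$ and $\Sigma_{c} = \{a,d,e\}$:

```python
from itertools import combinations

def admissible_commands(q, en_b, en_g, commands):
    """Commands gamma satisfying C1 and C2 of Case 3.a at observer state q.

    `q` is a set of (plant_state, sup_state) pairs, `en_b` is En_B(q),
    `en_g` maps a plant state q_g to En_G(q_g).
    """
    return [gamma for gamma in commands
            if en_b <= gamma                                   # C1
            and all(en_g[qg] & gamma <= en_b for qg, _ in q)]  # C2

# Data of Example III.3: every command contains Sigma_uc = {b, c}.
sigma_c = ['a', 'd', 'e']
commands = [frozenset({'b', 'c'}) | frozenset(x)
            for r in range(len(sigma_c) + 1)
            for x in combinations(sigma_c, r)]
q = {(0, 0), (5, 0)}
found = admissible_commands(q, {'a', 'b'},
                            {0: {'a', 'b', 'e'}, 5: {'a'}}, commands)
```

Running this sketch recovers exactly the two commands $\{a,b,c\}$ and $\{a,b,c,d\}$ found in the example.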
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.5cm]{BPS.pdf}
\caption{Bipartite behavior-preserving structure $BPS$}
\label{fig:BPS}
\end{center}
\end{figure}
\textbf{Step 3}: To obtain the final desired structure, which contains all the bipartite supervisors that are control equivalent to $BT(S)$, we refine $BPS$ by computing the synchronous product of $BPS$ and $CE$. Intuitively speaking, the reasons why we can use $CE$ are: 1) $CE$ already encodes all the bipartite supervisors\footnote{This is because: 1) at the state $q_{ce}^{init}$, any control command $\gamma \in \Gamma$ is defined and would lead to the state $q^{\gamma}$, and 2) at any state $q^{\gamma} \in Q_{ce}$, only events in $\gamma$ are defined, and any unobservable event in $\gamma$ would lead to a self-loop transition and any observable event in $\gamma$ would lead to a transition back to the initial state $q_{ce}^{init}$.}, and 2) the structure of $CE$ ensures that the automaton computed by the synchronous product still maintains a structure similar to that of a bipartite supervisor.
We shall call this structure the bipartite behavior-preserving command-nondeterministic\footnote{We note that this structure is a deterministic automaton, but it is command-nondeterministic in the sense that at each control state, two or more different control commands may be issued.} supervisor, denoted by
$BPNS = BPS || CE = (Q_{bpns}, \Sigma \cup \Gamma, \xi_{bpns}, q_{bpns}^{init})$, where $Q_{bpns} = (Q_{b} \cup Q_{b}^{com} \cup \{q^{dump}\}) \times Q_{ce} = (Q_{b} \cup Q_{b}^{com} \cup \{q^{dump}\}) \times (\{q^{\gamma}|\gamma \in \Gamma\} \cup \{q_{ce}^{init}\})$. According to the structure of $BPS$ and $CE$, we know that $Q_{bpns} = ((Q_{b} \cup \{q^{dump}\}) \times \{q^{\gamma}|\gamma \in \Gamma\}) \dot{\cup} ((Q_{b}^{com} \cup \{q^{dump}\}) \times \{q_{ce}^{init}\})$. Thus, we have $|Q_{bpns}| \leq (2^{|Q| \times |Q_{s}|} + 1) \times |\Gamma| + 2^{|Q| \times |Q_{s}|} + 1 = (2^{|Q| \times |Q_{s}|} + 1)(|\Gamma| + 1)$.
For convenience, we shall call $Q_{bpns}^{rea} := (Q_{b} \cup \{q^{dump}\}) \times \{q^{\gamma}|\gamma \in \Gamma\}$ the set of reaction states since any event, if defined at these states, belongs to $\Sigma$, and $Q_{bpns}^{com} := (Q_{b}^{com} \cup \{q^{dump}\}) \times \{q_{ce}^{init}\}$ the set of control states since any event, if defined at these states, belongs to $\Gamma$. Thus, $Q_{bpns} = Q_{bpns}^{rea} \dot{\cup} Q_{bpns}^{com}$.
\textbf{Remark III.3:} The above method starts from $P_{\Sigma_{o}}(G||S)$ by adding all the possible control commands that preserve control equivalence.
An alternative construction of $BPNS$ is to start from $P_{\Sigma_{o} \cup \Gamma}(G||CE)$, which considers all the possible control commands beforehand,
by pruning the control commands based on comparison with $P_{\Sigma_{o}}(G||S)$ to ensure control equivalence.
The details are omitted here for brevity.
\textbf{Example III.4} Based on $BPS$ shown in Fig. \ref{fig:BPS} and $CE$ shown in Fig. \ref{fig:CE_CEA}(a), the computed $BPNS$ is illustrated in Fig. \ref{fig:BPNS}. At the initial control state 0, two control commands $\{a,b,c\}$ and $\{a,b,c,d\}$ are defined, which means that a control equivalent supervisor could issue either $\{a,b,c\}$ or $\{a,b,c,d\}$ when the system initiates. If $\{a,b,c\}$ is issued, then $BPNS$ transits to state 1, where, according to the structure of a bipartite supervisor, the unobservable event $b$ in $\{a,b,c\}$ is a self-loop transition, and the observable events $a$ and $c$ in $\{a,b,c\}$ lead to two control states, state 2 and state 3, respectively. At control state 2, to ensure control equivalence, a supervisor could only issue one of $\{b,c,d\}$, $\{b,c\}$, $\{b,c,e\}$ and $\{b,c,d,e\}$, as encoded in $BPS$. At control state 3, according to the structure of $G$, we know that event $c$ would never occur under the issued initial command $\{a,b,c\}$; thus, any command could be issued without violating control equivalence.
In the rest of this subsection, we show that $BPNS$ indeed encodes all the control equivalent bipartite supervisors. We have the following results.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.5cm]{BPNS.pdf}
\caption{Bipartite behavior-preserving command-nondeterministic supervisor $BPNS$}
\label{fig:BPNS}
\end{center}
\end{figure}
\textbf{Lemma III.1:} Given $G$ and $S$, for any supervisor $S'$, we have $L(G||S) = L(G||S')$ iff $L(P_{\Sigma_{o}}(G||S)) = L(P_{\Sigma_{o}}(G||S'))$.
\emph{Proof:} See Appendix \ref{appendix: Lemma III.1}. \hfill $\blacksquare$
\textbf{Proposition III.1:} Given $G$ and $S$, for any supervisor $S' = (Q_{s'}, \Sigma, \xi_{s'}, q_{s'}^{init})$ such that $L(G||S) = L(G||S')$, we have $L(BT(S')) \subseteq L(BPNS)$.
\emph{Proof:} See Appendix \ref{appendix: Proposition III.1}. \hfill $\blacksquare$
\textbf{Proposition III.2:} Given $G$ and $S$, for any supervisor $S' = (Q_{s'}, \Sigma, \xi_{s'}, q_{s'}^{init})$ such that $L(G||S) \neq L(G||S')$, we have $L(BT(S')) \not\subseteq L(BPNS)$.
\emph{Proof:} See Appendix \ref{appendix: Proposition III.2}. \hfill $\blacksquare$
In the following text, we shall denote by $\mathscr{S}$ the set of all supervisors that satisfy controllability and observability, and by $\mathscr{S}_{e}(S) := \{S' \in \mathscr{S} |L(G||S) = L(G||S')\}$ the set of supervisors that are control equivalent to $S$.
\textbf{Theorem III.1:} $\bigcup\limits_{S' \in \mathscr{S}_{e}(S)}L(BT(S')) = L(BPNS)$.
\emph{Proof:} See Appendix \ref{appendix: Theorem III.1}. \hfill $\blacksquare$
Based on \textbf{Theorem III.1}, $BPNS$ encodes exactly all the control equivalent bipartite supervisors.
\section{Supervisor Obfuscation Against Covert Actuator Attackers}
\label{sec:Synthesis of Obfuscated Supervisors Against Covert Actuator Attackers}
In this section, based on the bipartite behavior-preserving command-nondeterministic supervisor $BPNS$ constructed in Section \ref{subsec:Generation of control equivalent bipartite supervisors}, we shall first find all the damage strings that could be exploited by covert damage-reachable actuator attackers against the control equivalent bipartite supervisors encoded in $BPNS$. Then, according to those damage strings, we shall extract from $BPNS$ those control equivalent bipartite supervisors that are resilient against any covert actuator attack.
\subsection{Damage strings encoding}
\label{subsec:Synthesis of covert actuator attackers for control equivalent bipartite supervisors}
We firstly construct the version of $BPNS$ under actuator attack, denoted by $BPNS^{A} = (Q_{bpns}^{a}, \Sigma_{bpns}^{a}, \xi_{bpns}^{a}, q_{bpns}^{a,init})$, where
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bpns}^{a} = Q_{bpns} \cup \{q_{bpns}^{detect}\} = Q_{bpns}^{rea} \cup Q_{bpns}^{com} \cup \{q_{bpns}^{detect}\}$, where $Q_{bpns}^{rea} = (Q_{b} \cup \{q^{dump}\}) \times \{q^{\gamma}|\gamma \in \Gamma\}$ and $Q_{bpns}^{com} = (Q_{b}^{com} \cup \{q^{dump}\}) \times \{q_{ce}^{init}\}$
\item $\Sigma_{bpns}^{a} = \Sigma \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{bpns}^{a})(\forall \sigma \in \Sigma \cup \Gamma) \xi_{bpns}(q, \sigma) = q' \Rightarrow \xi_{bpns}^{a}(q, \sigma) = q'$
\item $(\forall q \in Q_{bpns}^{rea})(\forall \sigma \in \Sigma_{c,a} \cap \Sigma_{uo}) \neg\xi_{bpns}(q, \sigma)! \Rightarrow \xi_{bpns}^{a}(q, \sigma) = q$
\item $(\forall q \in Q_{bpns}^{rea})(\forall \sigma \in \Sigma_{o}) \neg\xi_{bpns}(q, \sigma)! \Rightarrow \xi_{bpns}^{a}(q, \sigma) =\\ q_{bpns}^{detect}$
\end{enumerate}
\item $q_{bpns}^{a,init} = q_{bpns}^{init}$
\end{enumerate}
The construction procedure of $BPNS^{A}$ from $BPNS$ is similar to that of generating $BT(S)^{A}$ from $BT(S)$ in Section \ref{subsubsec:Supervisor}. Similarly, when $BPNS^{A}$ reaches the state $q_{bpns}^{detect}$, an observation that should not occur according to the supervisor structure has happened, i.e., the attacker is detected. We note that $(Q_{b} \cup \{q^{dump}\}) \times \{q^{\gamma}|\gamma \in \Gamma\}$ is the set of reaction states and $(Q_{b}^{com} \cup \{q^{dump}\}) \times \{q_{ce}^{init}\}$ is the set of control states in $BPNS^{A}$. We have the following.
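To make Cases 3.a--3.c concrete, the following is a minimal Python sketch of the attacked-version construction; the dict-based transition map, the function name, and the tiny test values are our own illustrative assumptions rather than part of the formal development.

```python
# Sketch of the BPNS -> BPNS^A construction (Cases 3.a-3.c).
# A transition map is a dict {(state, event): state}; names are illustrative.

DETECT = "q_detect"  # stands for the added state q_bpns^detect


def attack_version(delta, reaction_states, sigma_o, sigma_ca_uo):
    """Return the attacked transition map of a bipartite supervisor.

    Case 3.a: keep every original transition.
    Case 3.b: at a reaction state, an undefined unobservable-but-attackable
              event becomes a self-loop (the supervisor observes nothing).
    Case 3.c: at a reaction state, an undefined observable event leads to
              DETECT (the attacker is exposed).
    """
    delta_a = dict(delta)                       # Case 3.a
    for q in reaction_states:
        for e in sigma_ca_uo:
            delta_a.setdefault((q, e), q)       # Case 3.b: self-loop
        for e in sigma_o:
            delta_a.setdefault((q, e), DETECT)  # Case 3.c: detection
    return delta_a
```

For instance, with one reaction state $r_1$ at which only the observable event $a$ is defined, the attacked version keeps $a$, self-loops the attackable unobservable event $e$, and routes the undefined observable event $d$ to the detection state.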
\textbf{Proposition IV.1:} Given $G$ and $S$, for any supervisor $S' = (Q_{s'}, \Sigma, \xi_{s'}, q_{s'}^{init})$ such that $L(G||S) = L(G||S')$, we have $L(BT(S')^{A}) \subseteq L(BPNS^{A})$.
\emph{Proof:} See Appendix \ref{appendix: Proposition IV.1}. \hfill $\blacksquare$
\textbf{Theorem IV.1:} $\bigcup\limits_{S' \in \mathscr{S}_{e}(S)}L(BT(S')^{A}) = L(BPNS^{A})$.
\emph{Proof:} See Appendix \ref{appendix: Theorem IV.1}. \hfill $\blacksquare$
\textbf{Example IV.1} Based on the computed $BPNS$ shown in Fig. \ref{fig:BPNS}, the constructed $BPNS^{A}$ is illustrated in Fig. \ref{fig:BPNSA}. We shall briefly explain the construction procedure by taking state 0 and state 1 as an illustration. According to Case 3.a, 1) at the initial state 0, the control commands $\{a,b,c\}$ and $\{a,b,c,d\}$ originally defined at state 0 in $BPNS$ are retained, and 2) at state 1, the events $a$, $b$ and $c$ originally defined at state 1 are retained. According to Case 3.b, at state 1, a self-loop transition labelled by the unobservable but attackable event $e$ is added, meaning that $e$ could be enabled by the attacker but would not be observed by the supervisor. According to Case 3.c, the transition labelled by the observable event $d$, which is not originally defined at state 1 in $BPNS$, is added at state 1 and leads to the state $q_{bpns}^{detect}$, meaning that the attacker is discovered.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6.5cm]{BPNSA.pdf}
\caption{Bipartite behavior-preserving command-nondeterministic supervisor under attack $BPNS^{A}$}
\label{fig:BPNSA}
\end{center}
\end{figure}
Next, we shall synthesize a structure that contains all the damage strings that can be exploited by covert and damage-reachable actuator attackers against the control equivalent bipartite supervisors encoded in $BPNS$. The procedure is presented as follows:
\noindent \textbf{Procedure 1:}
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Compute $\mathcal{P} = G||CE^{A}||BPNS^{A} = (Q_{\mathcal{P}}, \Sigma_{\mathcal{P}}, \xi_{\mathcal{P}}, q_{\mathcal{P}}^{init}, \\Q_{\mathcal{P},m})$, where $Q_{\mathcal{P},m} = Q_{d} \times Q_{ce}^{a} \times Q_{bpns}^{a}$.
\item Generate $\mathcal{P}_{r} = (Q_{\mathcal{P}_{r}}, \Sigma_{\mathcal{P}_{r}}, \xi_{\mathcal{P}_{r}}, q_{\mathcal{P}_{r}}^{init}, Q_{\mathcal{P}_{r},m})$.
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{\mathcal{P}_{r}} = Q_{\mathcal{P}} - Q_{bad}$
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bad} = \{(q, q_{ce,a}, q_{bpns}^{a}) \in Q_{\mathcal{P}}|q_{bpns}^{a} = q_{bpns}^{detect}\}$
\end{itemize}
\item $\Sigma_{\mathcal{P}_{r}} = \Sigma_{\mathcal{P}}$
\item $(\forall q, q' \in Q_{\mathcal{P}_{r}})(\forall \sigma \in \Sigma_{\mathcal{P}_{r}}) \, \xi_{\mathcal{P}}(q, \sigma) = q' \Leftrightarrow \xi_{\mathcal{P}_{r}}(q, \sigma) = q'$
\item $q_{\mathcal{P}_{r}}^{init} = q_{\mathcal{P}}^{init}$
\item $Q_{\mathcal{P}_{r},m} = Q_{\mathcal{P},m} - Q_{bad}$
\end{itemize}
\item Synthesize the supremal supervisor ${\hat{\mathcal{A}}} = (Q_{\hat{a}}, \Sigma_{\hat{a}}, \xi_{\hat{a}}, q_{\hat{a}}^{init})$ over the (attacker's) control constraint $\mathscr{C}_{ac} = (\Sigma_{c,a}, \Sigma_{o,a} \cup \Gamma)$ by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement such that $\mathcal{P}||{\hat{\mathcal{A}}}$ is marker-reachable and safe w.r.t. $\mathcal{P}_{r}$.
\end{enumerate}
We shall briefly explain \textbf{Procedure 1}. At Step 1, we generate a new plant $\mathcal{P} = G||CE^{A}||BPNS^{A}$. At Step 2, we generate $\mathcal{P}_{r}$ from $\mathcal{P}$ by removing the states in $Q_{bad}$, at which the covertness is broken, denoted by $q_{bpns}^{a} = q_{bpns}^{detect}$. Then, at Step 3, we synthesize the supremal supervisor ${\hat{\mathcal{A}}}$ by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement, whose existence is guaranteed because the set of controllable events $\Sigma_{c,a}$ is a subset of the set of observable events $\Sigma_{o,a} \cup \Gamma$ in the (attacker's) control constraint. Here, we note that, since the goal of this work is to find supervisors that are resilient against any covert actuator attack, we shall consider the attack in the worst case, i.e., we should find the damage strings that could be used by any covert damage-reachable actuator attacker. Thus, at Step 3, we only need to ensure that $\mathcal{P}||{\hat{\mathcal{A}}}$ is marker-reachable and safe w.r.t. $\mathcal{P}_{r}$.
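The state-removal step of \textbf{Procedure 1} (Step 2) can be sketched in Python as follows; the triple-valued states, the function name, and the small test values are illustrative assumptions, and the supervisor synthesis of Step 3 is not modeled here.

```python
# Sketch of Step 2 of Procedure 1: P_r is P with every state whose
# BPNS^A-component equals q_detect deleted, together with all transitions
# incident to a deleted state. States are triples (q, q_ce, q_bpns).


def restrict(states, delta, marked, detect="q_detect"):
    """Return (Q_{P_r}, xi_{P_r}, Q_{P_r,m}) from the product automaton P."""
    bad = {q for q in states if q[2] == detect}       # Q_bad
    good = states - bad                               # Q_{P_r} = Q_P - Q_bad
    delta_r = {(q, e): q2 for (q, e), q2 in delta.items()
               if q in good and q2 in good}           # transitions among good states
    return good, delta_r, marked - bad                # Q_{P_r,m} = Q_{P,m} - Q_bad
```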
In the following text, the set of covert and damage-reachable actuator attackers against the supervisor $S'$ (or bipartite supervisor $BT(S')$) w.r.t. the attack constraint $(\Sigma_{o,a}, \Sigma_{c,a})$ is denoted as $\mathscr{A}(S')$ (or $\mathscr{A}(BT(S'))$).
\textbf{Proposition IV.2:} Given $G$ and $S$, for any supervisor $S'$ such that $L(G||S) = L(G||S')$ and any attacker $\mathcal{A} \in \mathscr{A}(S')$, we have $L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \subseteq L(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$.
\emph{Proof:} See Appendix \ref{appendix: Proposition IV.2}. \hfill $\blacksquare$
\textbf{Corollary IV.1:} Given $G$ and $S$, for any supervisor $S'$ such that $L(G||S) = L(G||S')$ and any attacker $\mathcal{A} \in \mathscr{A}(S')$, we have $L_{m}(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \subseteq L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$.
\emph{Proof:} Based on \textbf{Proposition IV.2}, it holds that LHS $= L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \cap L_{m}(G) \subseteq L(G||CE^{A}||BPNS^{A}\\||\hat{\mathcal{A}}) \cap L_{m}(G) = L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}}) =$ RHS. \hfill $\blacksquare$
\textbf{Theorem IV.2:} Given $G$ and $S$, we have
\[
\begin{aligned}
\bigcup\limits_{S' \in \mathscr{S}_{e}(S)}\bigcup\limits_{\mathcal{A} \in \mathscr{A}(S')}&L_{m}(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \\= & L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})
\end{aligned}
\]
\emph{Proof:} See Appendix \ref{appendix: Theorem IV.2}. \hfill $\blacksquare$
Based on \textbf{Theorem IV.2}, $L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$ encodes all the damage strings in all the control equivalent bipartite supervisors, which can be exploited by the attacker to cause damage infliction. Thus, we can rely on $L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$ to remove inappropriate control commands in $BPNS^{A}$, by which all the control equivalent bipartite supervisors that are resilient against any covert actuator attack can be generated.
\textbf{Example IV.2} Based on $G$, $CE^{A}$ and $BPNS^{A}$ shown in Fig. \ref{fig:G_S}(a), Fig. \ref{fig:CE_CEA}(b) and Fig. \ref{fig:BPNSA}, respectively, the synthesized $\hat{\mathcal{A}}$ by adopting \textbf{Procedure 1} is illustrated in Fig. \ref{fig:A}. It can be checked that $\hat{\mathcal{A}}$ encodes three kinds of damage strings (marked by red, green and blue) that can be exploited by the attacker:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Red part: After observing the initial control command $\{a,b,c\}$ or $\{a,b,c,d\}$, the attacker can carry out the enablement attack to enable the execution of the unobservable event $e$, which results in the reuse of the initial control command, and then event $a$ is executed. After that, if the supervisor issues a control command containing event $d$, that is, command $\{b,c,d\}$ or $\{b,c,d,e\}$, then event $d$ would be executed, triggering the sending of a new command by the supervisor, and finally event $c$ is executed, causing the damage infliction.
\item Green part: At first, the attacker would not implement any attack, and the system runs the string $\{a,b,c\}/\{a,b,c,d\} \rightarrow b \rightarrow a \rightarrow \{b,c\}/\{b,c,d\}/\{b,c,e\}/\{b,c,d,e\} \rightarrow c$. Afterwards, if the supervisor issues a control command containing event $a$, that is, command $\{a,b,c,d\}$, then the attacker enables the execution of unobservable event $e$, which results in the reuse of command $\{a,b,c,d\}$ and then event $a$ is executed, causing the damage infliction.
\item Blue part: The idea of this attack strategy is similar to the green part. At first, the attacker would not implement any attack, and the system runs the string $\{a,b,c\}/\{a,b,c,d\} \rightarrow a \rightarrow \{b,c\}/\{b,c,d\}/\{b,c,e\}/\{b,c,d,e\} \rightarrow c \rightarrow \{b,c,d\}/\{a,b,c,d\} \rightarrow d$. Afterwards, if the supervisor issues a control command containing event $a$, that is, command $\{a,b,c\}$ or $\{a,b,c,d\}$, then the attacker enables the execution of unobservable event $e$, which results in the reuse of command $\{a,b,c\}$ or $\{a,b,c,d\}$ and then event $a$ is executed, causing the damage infliction.
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.5cm]{A.pdf}
\caption{The synthesized $\hat{\mathcal{A}}$}
\label{fig:A}
\end{center}
\end{figure}
\subsection{Generation of obfuscated supervisors against covert actuator attackers}
\label{subsec:Generation of Obfuscated Supervisors Against Covert Actuator Attackers}
In this subsection, we shall propose an approach to compute all the resilient supervisors that are control equivalent to $S$. Since $BPNS^{A}$ exactly encodes all the control equivalent bipartite supervisors under attack based on \textbf{Theorem IV.1}, we know that, with the guidance of $L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$, the resilient and control equivalent bipartite supervisors under attack can be obtained from $BPNS^{A}$ by pruning inappropriate transitions labelled by commands in $\Gamma$, which are controllable by the supervisor. Thus, the intuitive idea of our methodology is to extract the resilient and control equivalent bipartite supervisors under attack by treating $BPNS^{A}$ as a plant and then performing the synthesis.
The detailed methodology is presented as follows.
\noindent \textbf{Procedure 2:}
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Compute $\mathcal{P} = G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}} = (Q_{\mathcal{P}}, \Sigma \cup \Gamma, \xi_{\mathcal{P}}, q_{\mathcal{P}}^{init}, Q_{\mathcal{P},m})$.
\item Construct $\mathcal{P}_{r} = (Q_{\mathcal{P}_{r}}, \Sigma \cup \Gamma, \xi_{\mathcal{P}_{r}}, q_{\mathcal{P}_{r}}^{init})$ based on $\mathcal{P}$, where
\begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{\mathcal{P}_{r}} = (Q_{\mathcal{P}} - Q_{\mathcal{P},m}) \cup \{q^{dump}\}$
\item \begin{enumerate}[i.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{\mathcal{P}} - Q_{\mathcal{P},m})(\forall \sigma \in \Sigma \cup \Gamma)\xi_{\mathcal{P}}(q, \sigma) = q' \Rightarrow \xi_{\mathcal{P}_{r}}(q, \sigma) = q'$
\item $(\forall q \in Q_{\mathcal{P}} - Q_{\mathcal{P},m})(\forall \sigma \in \Sigma \cup \Gamma)\neg\xi_{\mathcal{P}}(q, \sigma)! \Rightarrow \xi_{\mathcal{P}_{r}}(q, \sigma) = q^{dump}$
\item $(\forall \sigma \in \Sigma \cup \Gamma)\xi_{\mathcal{P}_{r}}(q^{dump}, \sigma) = q^{dump}$
\end{enumerate}
\item $q_{\mathcal{P}_{r}}^{init} = q_{\mathcal{P}}^{init}$
\end{enumerate}
\item Synthesize the supremal supervisor $S_{0}^{A} = (Q_{S_{0}^{A}}, \Sigma \cup \Gamma, \xi_{S_{0}^{A}}, q_{S_{0}^{A}}^{init})$ over the control constraint $(\Gamma, \Sigma_{o} \cup \Gamma)$ by treating $BPNS^{A}$ as the plant and $\mathcal{P}_{r}$ as the requirement such that $BPNS^{A}||S_{0}^{A}$ is safe w.r.t. $\mathcal{P}_{r}$.
\end{enumerate}
In this procedure, we shall prune control commands in $BPNS^{A}$ to obtain all the resilient and control equivalent bipartite supervisors under attack encoded in $BPNS^{A}$. At Step 1, we compute $\mathcal{P} = G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}}$, whose marked behavior encodes all the damage strings that could lead to damage infliction for control equivalent supervisors. Next, to find all the resilient and control equivalent bipartite supervisors under attack encoded in $BPNS^{A}$, we shall ``design a supervisor'' to ``control'' $BPNS^{A}$, which encodes all the control equivalent bipartite supervisors under attack, by disabling control commands such that the damage strings would not occur in the plant $BPNS^{A}$. Thus, at Step 2, we construct a requirement automaton $\mathcal{P}_{r}$ based on $\mathcal{P}$, where 1) the marker states of $\mathcal{P}$ are removed in Step 2.a, 2) all the transitions originally defined in $\mathcal{P}$ between the remaining states are retained, as shown in Step 2.b.i, 3) for the remaining states, we complete the transitions that are not originally defined in $\mathcal{P}$, which lead to the newly added state $q^{dump}$, as shown in Step 2.b.ii, and 4) all the transitions in $\Sigma \cup \Gamma$ are defined at the state $q^{dump}$ in Step 2.b.iii.
The purpose of completing transitions in Step 2.b.ii and Step 2.b.iii is the following: the requirement in the synthesis is only supposed to forbid the execution of the strings that might result in damage infliction. Thus, we carry out Step 2.b.ii and Step 2.b.iii so that $\mathcal{P}_{r}$ specifies the set of all the possible legal strings.
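Step 2 of \textbf{Procedure 2} can be sketched in Python as follows; the dict-based encoding and the test values are illustrative assumptions. Note that a transition into a removed marked state is neither retained nor completed, so the corresponding damage string is simply illegal in $\mathcal{P}_{r}$.

```python
# Sketch of Step 2 of Procedure 2: remove the marked states of P, then
# complete every undefined transition of a surviving state into a fresh
# dump state that self-loops on all events.

DUMP = "q_dump"  # stands for the added state q^dump


def make_requirement(states, delta, marked, alphabet):
    """Return (Q_{P_r}, xi_{P_r}) from P; alphabet plays Sigma u Gamma."""
    keep = states - marked                        # Step 2.a
    delta_r = {}
    for q in keep:
        for e in alphabet:
            q2 = delta.get((q, e))
            if q2 is not None and q2 in keep:
                delta_r[(q, e)] = q2              # Step 2.b.i: retain
            elif q2 is None:
                delta_r[(q, e)] = DUMP            # Step 2.b.ii: complete
            # a transition into a marked (damage) state is dropped
    for e in alphabet:
        delta_r[(DUMP, e)] = DUMP                 # Step 2.b.iii: self-loops
    return keep | {DUMP}, delta_r
```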
At Step 3, by treating $BPNS^{A}$ as the plant and $\mathcal{P}_{r}$ as the requirement, we could synthesize a safe supervisor $S_{0}^{A}$. Since all the damage strings have been removed from the requirement automaton, $S_{0}^{A}$ indeed encodes the attacked versions of the resilient and control equivalent bipartite supervisors in $BPNS$.
\textbf{Remark IV.1:} Since $S_{0}^{A}$ is synthesized by treating $BPNS^{A}$ as the plant at Step 3 of \textbf{Procedure 2}, we consider the case where the synthesized supervisor $S_{0}^{A}$ satisfies the requirement that $L(S_{0}^{A}) \subseteq L(BPNS^{A})$, following the standard notion of controllability and observability \cite{WMW10} over the control constraint $(\Gamma, \Sigma_{o} \cup \Gamma)$ w.r.t. the plant $BPNS^{A}$. Without loss of generality, any event in $\Sigma_{uo}$, if defined, is a self-loop transition in $S_{0}^{A}$. Thus, $S_{0}^{A}$ is a bipartite structure\footnote{If we follow our definition of a supervisor and synthesize $S_{0}^{A}$, we could always update $S_{0}^{A} := BPNS^{A}||S_{0}^{A}$ to generate a bipartite supervisor $S_{0}^{A}$ with $L(S_{0}^{A}) \subseteq L(BPNS^{A})$.} similar to $BPNS^{A}$.
\textbf{Example IV.3} Based on $G$, $CE^{A}$, $BPNS^{A}$ and $\hat{\mathcal{A}}$ shown in Fig. \ref{fig:G_S}(a), Fig. \ref{fig:CE_CEA}(b), Fig. \ref{fig:BPNSA} and Fig. \ref{fig:A}, respectively, the synthesized $S_{0}^{A}$ by adopting \textbf{Procedure 2} is illustrated in Fig. \ref{fig:S0A}. Compared with $BPNS^{A}$ shown in Fig. \ref{fig:BPNSA}, to ensure that the damage infliction would not be caused, several control commands are removed at some control states in $S_{0}^{A}$:
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.5cm]{S0A.pdf}
\caption{The synthesized $S_{0}^{A}$}
\label{fig:S0A}
\end{center}
\end{figure}
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item After $S_{0}^{A}$ observes the sequence $\{a,b,c\}/\{a,b,c,d\} \rightarrow a \rightarrow \{b,c,d\}/\{b,c,d,e\} \rightarrow d$, $S_{0}^{A}$ transits to state 4, where now the plant might execute the string $ead$ due to the enablement attack of unobservable event $e$. Thus, $S_{0}^{A}$ cannot issue any control command at state 4 as the string $eadc$ would cause damage infliction and uncontrollable event $c$ is contained in any control command. This case corresponds to the defense strategy against the damage strings marked by red explained in \textbf{Example IV.2}.
\item After $S_{0}^{A}$ observes the sequence $s = \{a,b,c\}/\{a,b,c,d\} \rightarrow a \rightarrow \{b,c\}/\{b,c,d\}/\{b,c,e\}/\{b,c,d,e\} \rightarrow c$, $S_{0}^{A}$ transits to state 5, where now the plant might execute the string $bac$ as $b$ is unobservable to the supervisor. Since the string $bacea$ would cause damage infliction and unobservable event $e$ could be enabled by the attacker, $S_{0}^{A}$ cannot issue any control command containing event $a$ at state 5, that is, the command $\{a,b,c,d\}$, which follows the sequence $s$ in $BPNS^{A}$, cannot be defined at state 5 and only command $\{b,c,d\}$ is retained. This case corresponds to the defense strategy against the damage strings marked by green explained in \textbf{Example IV.2}.
\item After $S_{0}^{A}$ observes the sequence $s = \{a,b,c\}/\{a,b,c,d\} \rightarrow a \rightarrow \{b,c\}/\{b,c,d\}/\{b,c,e\}/\{b,c,d,e\} \rightarrow c \rightarrow \{b,c,d\} \rightarrow d$, $S_{0}^{A}$ transits to state 6, where now the plant executes the string $acd$. Since the string $acdea$ would cause damage infliction and unobservable event $e$ could be enabled by the attacker, $S_{0}^{A}$ cannot issue any control command containing event $a$ at state 6, that is, commands $\{a,b,c\}$ and $\{a,b,c,d\}$, which follow the sequence $s$ in $BPNS^{A}$, cannot be defined at state 6 and only commands $\{b,c\}$ and $\{b,c,d\}$ are retained. This case corresponds to the defense strategy against the damage strings marked by blue explained in \textbf{Example IV.2}.
\end{itemize}
Next, we shall transform $S_{0}^{A}$ to the version in the absence of attack. The generated automaton is denoted as $S_{0} = (Q_{S_{0}}, \Sigma \cup \Gamma, \xi_{S_{0}}, q_{S_{0}}^{init})$, where
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{S_{0}} = Q_{S_{0}^{A}}$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{S_{0}})(\forall \gamma \in \Gamma)\xi_{S_{0}^{A}}(q, \gamma) = q' \Rightarrow \xi_{S_{0}}(q, \gamma) = q'$
\item $(\forall q, q' \in Q_{S_{0}})(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{uo})\xi_{S_{0}^{A}}(q, \gamma) = q' \Rightarrow \xi_{S_{0}}(q', \sigma) = q'$
\item $(\forall q, q', q'' \in Q_{S_{0}})(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{o})\xi_{S_{0}^{A}}(q, \gamma) \\= q' \wedge \xi_{S_{0}^{A}}(q', \sigma) = q'' \Rightarrow \xi_{S_{0}}(q', \sigma) = q''$
\end{enumerate}
\item $q_{S_{0}}^{init} = q_{S_{0}^{A}}^{init}$
\end{enumerate}
Briefly speaking, 1) we retain all the transitions labelled by events in $\Gamma$ that are originally defined in $S_{0}^{A}$, as shown in Step 2.a, 2) for any state $q'$ such that there exists a transition $\xi_{S_{0}^{A}}(q, \gamma) = q'$, we retain the transition labelled by any event in $\gamma \cap \Sigma_{uo}$ ($\gamma \cap \Sigma_{o}$, respectively), which is a self-loop (leads to a new state $q''$, respectively), as shown in Step 2.b (Step 2.c, respectively).
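The transformation from $S_{0}^{A}$ to $S_{0}$ can be sketched in Python as follows; here a control command $\gamma$ is represented as a frozenset of events so that it serves both as a transition label and as a set of events, an encoding we assume purely for illustration.

```python
# Sketch of the S_0^A -> S_0 transformation (Steps 2.a-2.c): command
# transitions survive, and at a reaction state q' reached by command gamma
# only the events inside gamma survive -- unobservable ones as self-loops,
# observable ones as defined in S_0^A. Attack-induced transitions
# (events outside gamma, detection moves) are dropped.


def strip_attack(delta_a, Gamma, sigma_uo, sigma_o):
    delta = {}
    for (q, e), q2 in delta_a.items():
        if e in Gamma:                        # Step 2.a: keep command moves
            delta[(q, e)] = q2
            for s in e & sigma_uo:            # Step 2.b: self-loops at q'
                delta[(q2, s)] = q2
            for s in e & sigma_o:             # Step 2.c: observable moves
                if (q2, s) in delta_a:
                    delta[(q2, s)] = delta_a[(q2, s)]
    return delta
```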
Then we generate the automaton $Ac(S_{0})$. For convenience, in the rest of this paper, we shall refer to $Ac(S_{0})$ whenever we talk about $S_{0}$.
By Remark IV.1, $S_{0}$ is a bipartite structure and the state set of $S_{0}$ could be divided into two disjoint sets $Q_{S_{0}} = Q_{S_{0}}^{rea} \dot{\cup} Q_{S_{0}}^{com}$, where $Q_{S_{0}}^{rea}$ is the set of reaction states and $Q_{S_{0}}^{com}$ is the set of control states, and
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item At any state of $Q_{S_{0}}^{rea}$, any event in $\Gamma$ is not defined.
\item At any state of $Q_{S_{0}}^{rea}$, any event in $\Sigma_{uo}$, if defined, leads to a self-loop, and any event in $\Sigma_{o}$, if defined, would lead to a transition to a control state.
\item At any state of $Q_{S_{0}}^{com}$, any event in $\Sigma$ is not defined.
\item At any state of $Q_{S_{0}}^{com}$, any event in $\Gamma$, if defined, would lead to a transition to a reaction state.
\end{enumerate}
We shall briefly explain why these 4 facts hold. We know that: 1) since $L(BPNS^{A}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, we have $L(S_{0}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, 2) any transition labelled by an unobservable event in $\Sigma_{uo}$ would be a self-loop in $S_{0}$ and any transition labelled by an event in $\Sigma_{o} \cup \Gamma$ would enable $S_{0}$ to make a state transition. Thus, we could always divide the state set of $S_{0}$ into two disjoint parts: 1) the set of control states $Q_{S_{0}}^{com}$, where any event in $\Sigma$ is not defined (fact 3) and any event defined at such a state belongs to $\Gamma$, 2) the set of reaction states $Q_{S_{0}}^{rea}$, where any event in $\Gamma$ is not defined (fact 1) and any event defined at such a state belongs to $\Sigma$. In addition, based on the form of closed-behavior of $BPNS^{A}$, fact 2 and fact 4 naturally hold.
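Given facts 1 and 3, the partition $Q_{S_{0}} = Q_{S_{0}}^{rea} \dot{\cup} Q_{S_{0}}^{com}$ can be recovered from the defined events alone, as the following illustrative Python sketch shows; placing a deadlocked state in the control part is our own tie-breaking convention for the sketch.

```python
# Sketch of the reaction/control partition of a bipartite structure:
# a state where only commands in Gamma are defined is a control state,
# a state where only events in Sigma are defined is a reaction state.


def partition(states, delta, Gamma, Sigma):
    ctrl, rea = set(), set()
    for q in states:
        labels = {e for (p, e) in delta if p == q}
        if labels <= Gamma:          # facts 3-4: only commands defined
            ctrl.add(q)              # (a deadlocked state lands here)
        elif labels <= Sigma:        # facts 1-2: only events defined
            rea.add(q)
        else:
            raise ValueError(f"state {q} mixes commands and events")
    return ctrl, rea
```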
\textbf{Example IV.4} Based on $S_{0}^{A}$ shown in Fig. \ref{fig:S0A}, the transformed $S_{0}$ is illustrated in Fig. \ref{fig:S0}. By taking several states as an illustration, we shall briefly explain how to obtain $S_{0}$ based on $S_{0}^{A}$. At the initial state 0 of $S_{0}$, according to Case 2.a of the construction of $S_{0}$, we retain the transitions labelled by commands $\{a,b,c\}$ and $\{a,b,c,d\}$ originally defined in $S_{0}^{A}$. After the command $\{a,b,c\}$ is issued, $S_{0}$ would transit to state 1, where we shall only retain those transitions in the absence of attack. According to Case 2.b, the transition labelled by event $b \in \{a,b,c\} \cap \Sigma_{uo}$ is retained, which is a self-loop. According to Case 2.c, the transitions labelled by events $a,c \in \{a,b,c\} \cap \Sigma_{o}$ are retained, which would lead to state 2 and state 3, respectively, as defined in $S_{0}^{A}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.8cm]{S0.pdf}
\caption{The transformed $S_{0}$}
\label{fig:S0}
\end{center}
\end{figure}
Although inappropriate control commands that would result in damage infliction have been removed in $S_{0}$, we cannot ensure that $S_{0}$ exactly encodes all the resilient and control equivalent bipartite supervisors. This is because it is possible that at some control state of $S_{0}$, where an observation has just been received, no control command is defined as a result of the synthesis. Such a phenomenon violates the structure of a bipartite supervisor, where a control command must be defined at any control state according to the construction of a bipartite supervisor presented in Section \ref{subsubsec:Supervisor}. Thus, by treating $S_{0}$ as a plant, we shall carry out the following procedure to iteratively remove these newly created bad states until the generated structure satisfies the condition that any observation is followed by at least one control command.
\noindent \textbf{Procedure 3:}
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Let $k := 0$.
\item Compute $Q_{k,del} := \{q \in Q_{S_{k}}^{com}|En_{S_{k}}(q) = \varnothing\}$.
\item If $Q_{k,del} = \varnothing$, then output $S_{k}$ and end the procedure; otherwise, i.e., $Q_{k,del} \neq \varnothing$, then proceed to Step 4.
\item Construct $S_{k,r} = (Q_{S_{k,r}}, \Sigma \cup \Gamma, \xi_{S_{k,r}}, q_{S_{k,r}}^{init})$ based on $S_{k}$, where
\begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{S_{k,r}} = Q_{S_{k}} - Q_{k,del}$
\item $(\forall q, q' \in Q_{S_{k,r}})(\forall \sigma \in \Sigma \cup \Gamma)\xi_{S_{k}}(q, \sigma) = q' \Rightarrow \xi_{S_{k,r}}(q, \sigma) = q'$
\item $q_{S_{k,r}}^{init} = q_{S_{k}}^{init}$
\end{enumerate}
\item Synthesize the supremal supervisor $S_{k+1} = (Q_{S_{k+1}}, \Sigma \cup\\ \Gamma, \xi_{S_{k+1}}, q_{S_{k+1}}^{init})$ over the control constraint $(\Gamma, \Sigma_{o} \cup \Gamma)$ by treating $S_{k}$ as the plant and $S_{k,r}$ as the requirement such that $S_{k}||S_{k+1}$ is safe w.r.t. $S_{k,r}$. We denote
$Q_{S_{k+1}} = Q_{S_{k+1}}^{rea} \dot{\cup} Q_{S_{k+1}}^{com}$, where $Q_{S_{k+1}}^{rea}$ is the set of reaction states and $Q_{S_{k+1}}^{com}$ is the set of control states\footnote{The division rule is the same as that of $Q_{S_{0}} = Q_{S_{0}}^{rea} \dot{\cup} Q_{S_{0}}^{com}$.}.
\item Let $k \leftarrow k+1$ and proceed to Step 2.
\end{enumerate}
At Step 1, we set the counter $k$ to be 0. Then, at Step 2, taking the $k$-th iteration as an illustration, we shall compute the set of control states in $S_{k}$, denoted by $Q_{k,del}$, where any control state $q \in Q_{k,del}$ satisfies that there are no control commands defined at state $q$, denoted by $En_{S_{k}}(q) = \varnothing$. According to the structure of a bipartite supervisor, any state in $Q_{k,del}$ should not exist in $S_{k}$. At Step 3, if $Q_{k,del} = \varnothing$, then $S_{k}$ is the desired structure that exactly encodes all the resilient and control equivalent bipartite supervisors; otherwise, we shall remove the set of states $Q_{k,del}$ in $S_{k}$ to construct a requirement automaton $S_{k,r}$ for the plant $S_{k}$ at Step 4. At Step 5, by treating $S_{k}$ as the plant and $S_{k,r}$ as the requirement, we shall synthesize a safe supervisor $S_{k+1}$.
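The iteration of \textbf{Procedure 3} can be viewed as a fixpoint computation. The Python sketch below abstracts the synthesis of Step 5 to the removal of dead control states together with the reaction states forced out by controllability (an uncontrollable observable event leading to a deleted control state cannot be disabled); this abstraction and all names are our own illustrative assumptions.

```python
# Sketch of the fixpoint behind Procedure 3: repeatedly delete control
# states with no outgoing command (Q_{k,del}) and the reaction states
# whose observable transitions lead into a deleted control state; dropping
# a reaction state also drops the (controllable) command into it, which
# may create new dead control states in the next round.


def prune(control_states, reaction_states, delta, Gamma, sigma_o):
    ctrl, rea = set(control_states), set(reaction_states)
    while True:
        dead = {q for q in ctrl
                if not any((q, g) in delta for g in Gamma)}   # Q_{k,del}
        if not dead:
            return ctrl, rea, delta
        ctrl -= dead
        rea -= {q for q in rea
                if any(delta.get((q, s)) in dead for s in sigma_o)}
        alive = ctrl | rea
        delta = {(q, e): q2 for (q, e), q2 in delta.items()
                 if q in alive and q2 in alive}
```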
\textbf{Remark IV.2:} Similar to Remark IV.1, we consider the case where the synthesized supervisor $S_{k+1}$ satisfies the requirement that $L(S_{k+1}) \subseteq L(S_{k})$, following the standard notion of controllability and observability \cite{WMW10} over the control constraint $(\Gamma, \Sigma_{o} \cup \Gamma)$. Without loss of generality, any event in $\Sigma_{uo}$, if defined, is a self-loop transition in $S_{k+1}$. Thus, $S_{k+1}$ is a bipartite structure.
We shall name the output of \textbf{Procedure 3} the obfuscated command-nondeterministic supervisor, denoted by $ONS = (Q_{ons}, \Sigma \cup \Gamma, \xi_{ons}, q_{ons}^{init})$. For convenience, we denote $Q_{ons} = Q_{ons}^{rea} \dot{\cup} Q_{ons}^{com}$, where $Q_{ons}^{rea}$ is the set of reaction states and $Q_{ons}^{com}$ is the set of control states\footnote{The division rule is the same as that of $Q_{S_{0}} = Q_{S_{0}}^{rea} \dot{\cup} Q_{S_{0}}^{com}$.}.
\textbf{Proposition IV.3:} $L(ONS) \subseteq L(BPNS)$.
\emph{Proof:} See Appendix \ref{appendix: Proposition IV.3}. \hfill $\blacksquare$
\textbf{Theorem IV.3:} $\bigcup\limits_{S' \in \mathscr{S}_{e}^{r}(S)}L(BT(S')) = L(ONS)$, where $\mathscr{S}_{e}^{r}(S)$ denotes the set of resilient supervisors that are control equivalent to $S$.
\emph{Proof:} See Appendix \ref{appendix: Theorem IV.3}. \hfill $\blacksquare$
\textbf{Theorem IV.4:} \textbf{Problem 1} is decidable.
\emph{Proof:} To prove this result, based on \textbf{Theorem IV.3}, we only need to additionally check whether \textbf{Procedure 2} and \textbf{Procedure 3} could terminate within finite steps. Clearly, \textbf{Procedure 2} terminates within finite steps. For \textbf{Procedure 3}, in each iteration, since the requirement $S_{k,r}$ is generated by removing at least one control state $q$ from the plant $S_{k}$ and any unobservable event in $\Sigma_{uo}$, if defined, is a self-loop in $S_{k}$, we know that $S_{k+1}$ is a substructure of $S_{k}$, and to satisfy the controllability w.r.t. the control constraint $(\Gamma, \Sigma_{o} \cup \Gamma)$, at least two states of $S_{k}$ are removed to get $S_{k+1}$, including the removed control state $q$ and the reaction state $q'$ where there exists $\sigma \in \Sigma_{o}$ such that $\xi_{S_{k}}(q', \sigma) = q$. Thus, \textbf{Procedure 3} would iterate Steps 2-6 for at most $\lfloor \frac{|Q_{S_{0}}|}{2} \rfloor$ times\footnote{$\lfloor \cdot \rfloor$ is the floor function that takes as input a real number, and gives as output the greatest integer less than or equal to this real number.}. This completes the proof. \hfill $\blacksquare$
Next, we shall analyze the computational complexity of the proposed algorithm to synthesize all the obfuscated supervisors, which depends on the complexity of three synthesis steps (\textbf{Procedure 1}, \textbf{Procedure 2} and \textbf{Procedure 3}), and the construction of $S_{0}$ from $S_{0}^{A}$. By using the synthesis approach in \cite{WMW10,WLLW18}, the complexity of \textbf{Procedure 1} is $O((|\Sigma| + |\Gamma|)2^{|Q| \times |Q_{ce}^{a}| \times |Q_{bpns}^{a}|})$, the complexity of \textbf{Procedure 2} is $O((|\Sigma| + |\Gamma|)|Q_{S_{0}^{A}}|)$,
and the complexity of \textbf{Procedure 3} is no more than $O((|\Sigma| + |\Gamma|)|Q_{S_{0}}| + (|\Sigma| + |\Gamma|)(|Q_{S_{0}}|-2) + \dots + (|\Sigma| + |\Gamma|) \times 3)= O((|\Sigma| + |\Gamma|)|Q_{S_{0}}|^{2})$ when $|Q_{S_{0}}|$ is odd
and no more than $O((|\Sigma| + |\Gamma|)|Q_{S_{0}}| + (|\Sigma| + |\Gamma|)(|Q_{S_{0}}|-2) + \dots + (|\Sigma| + |\Gamma|) \times 2) = O((|\Sigma| + |\Gamma|)|Q_{S_{0}}|^{2})$ when $|Q_{S_{0}}|$ is even. The complexity of constructing $S_{0}$ from $S_{0}^{A}$ is $O((|\Sigma| + |\Gamma|)|Q_{S_{0}^{A}}|)$. Thus, the overall complexity is $O((|\Sigma| + |\Gamma|)|Q_{S_{0}}|^{2})$, where
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $|Q_{ce}^{a}| = |\Gamma| + 1$
\item $|Q_{bpns}^{a}| \leq (2^{|Q| \times |Q_{s}|} + 1)(|\Gamma| + 1) + 1$
\item $|Q_{S_{0}}| \leq |Q_{S_{0}^{A}}| - 1$
\item $|Q_{S_{0}^{A}}| \leq 2^{|Q| \times |Q_{ce}^{a}| \times |Q_{bpns}^{a}|^{2} \times |Q_{\hat{a}}|}$
\item $|Q_{\hat{a}}| \leq 2^{|Q| \times |Q_{ce}^{a}| \times |Q_{bpns}^{a}|}$
\end{itemize}
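As a quick numeric sanity check of the arithmetic-series bound above (an illustration of ours, not part of the algorithm), the following snippet sums the per-iteration costs $(|\Sigma|+|\Gamma|)|Q_{S_{0}}|, (|\Sigma|+|\Gamma|)(|Q_{S_{0}}|-2), \dots$ down to the final term ($3$ when $|Q_{S_{0}}|$ is odd, $2$ when it is even) and confirms that the series is dominated by $(|\Sigma|+|\Gamma|)|Q_{S_{0}}|^{2}$.

```python
def procedure3_cost(n_states, alphabet_size):
    """Sum of the per-iteration cost bounds of Procedure 3 as listed above:
    alphabet_size stands for |Sigma| + |Gamma|, n_states for |Q_{S_0}|."""
    last = 3 if n_states % 2 == 1 else 2  # final term per the analysis above
    return sum(alphabet_size * k for k in range(n_states, last - 1, -2))

# the series is dominated by (|Sigma| + |Gamma|) * |Q_{S_0}|^2
for n in range(2, 60):
    assert procedure3_cost(n, 7) <= 7 * n * n
```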
\textbf{Remark IV.3}: In this work, we focus on addressing the decidability of synthesizing obfuscated supervisors against covert actuator attacks. It remains open whether the above complexity upper bound is tight; this question is left for future work.
\textbf{Example IV.5} We shall continue with $S_{0}$ shown in Fig. \ref{fig:S0}. It can be checked that there exists a control state, marked by a red cross in $S_{0}$, at which no control command is defined; this violates the structure of a bipartite supervisor. Thus, according to Step 2 of \textbf{Procedure 3}, this state is included in $Q_{0,del}$, and according to Step 4 of \textbf{Procedure 3}, we remove this state to generate $S_{0,r}$. Then, by treating $S_{0}$ as the plant and $S_{0,r}$ as the requirement, we synthesize $S_{1}$, which is illustrated in Fig. \ref{fig:S1}. It can be checked that at least one control command is defined at every control state of $S_{1}$, which means that $Q_{1,del} = \varnothing$. Thus, the procedure terminates after the first iteration and outputs $ONS := S_{1}$. Compared with $S_{0}$ shown in Fig. \ref{fig:S0}, the control commands $\{b,c,d\}$ and $\{b,c,d,e\}$ are removed at state 2. Intuitively, this is because: 1) at state 2, the supervisor has issued the initial control command $\{a,b,c\}$ or $\{a,b,c,d\}$, and the plant might execute the string $ea$ due to the enablement of the unobservable event $e$ by the attacker; 2) if the supervisor issues any command containing the event $d$ at state 2, then $d$ might be executed at $G$, and once the event $d$ is observed by the supervisor, no matter which command the supervisor issues, $G$ would execute the event $c$ and reach the damage state, since $c$ is uncontrollable and always contained in any control command.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.2cm]{S1.pdf}
\caption{The synthesized $S_{1}$ ($ONS$)}
\label{fig:S1}
\end{center}
\end{figure}
Based on \textbf{Theorem IV.3}, $ONS$ exactly encodes all the control equivalent and resilient bipartite supervisors. Next, we show how to extract one control equivalent and resilient bipartite supervisor from $ONS$. We construct the following structure, denoted by $OS = (Q_{os}, \Sigma \cup \Gamma, \xi_{os}, q_{os}^{init})$, where
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{os} = Q_{ons}$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{os})(\forall \sigma \in \Sigma)\xi_{ons}(q, \sigma) = q' \Rightarrow \xi_{os}(q, \sigma) = q'$
\item For any control state $q \in Q_{ons}^{com}$, we randomly pick a control command $\gamma \in En_{ONS}(q)$ and define that: for any reaction state $q' \in Q_{ons}^{rea}$, if $\xi_{ons}(q, \gamma) = q'$, then $\xi_{os}(q, \gamma) = q'$ and for any control command $\gamma' \in En_{ONS}(q) - \{\gamma\}$, we have $\neg\xi_{os}(q, \gamma')!$.
\end{enumerate}
\item $q_{os}^{init} = q_{ons}^{init}$
\end{enumerate}
Then we generate the automaton $Ac(OS)$. For convenience, we shall still denote $Ac(OS)$ as $OS$. The basic idea for the construction of $OS$ from $ONS$ is: at any control state $q$ of $OS$, we retain exactly one transition, labelled by some control command $\gamma \in En_{ONS}(q)$ originally defined at the state $q$ in $ONS$; any other control command $\gamma' \in En_{ONS}(q) - \{\gamma\}$ is left undefined at the state $q$ in $OS$, as shown in Step 2.b.
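The extraction of $OS$ from $ONS$ described above can be sketched in Python as follows (our own illustration with a hypothetical dictionary encoding of the transition function; it is not tied to the paper's data structures): keep every transition labelled by an event in $\Sigma$, keep exactly one control command per control state, and then take the accessible part $Ac(OS)$.

```python
import random

def extract_os(transitions, control_states, init, seed=0):
    """transitions: dict (state, event) -> state encoding ONS;
    control_states: states whose outgoing events are control commands;
    returns (transition dict of Ac(OS), reachable state set)."""
    rng = random.Random(seed)
    os_trans = {}
    for (q, ev), q2 in transitions.items():
        if q not in control_states:
            os_trans[(q, ev)] = q2             # Step 2.a: keep all Sigma moves
    for q in control_states:
        cmds = sorted(ev for (s, ev) in transitions if s == q)
        if cmds:
            gamma = rng.choice(cmds)           # Step 2.b: keep one command only
            os_trans[(q, gamma)] = transitions[(q, gamma)]
    # Ac(OS): restrict to states reachable from the initial state
    reachable, stack = {init}, [init]
    while stack:
        q = stack.pop()
        for (s, ev), q2 in os_trans.items():
            if s == q and q2 not in reachable:
                reachable.add(q2)
                stack.append(q2)
    return {k: v for k, v in os_trans.items() if k[0] in reachable}, reachable
```

Different seeds yield different extracted supervisors, mirroring the random choice of $\gamma$ in Step 2.b; by \textbf{Proposition IV.4}, any such choice is control equivalent and resilient.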
\textbf{Proposition IV.4:} Given $G$ and $S$, we have $OS \in \mathscr{S}_{e}^{r}(S)$.
\emph{Proof:} See Appendix \ref{appendix: Proposition IV.4}. \hfill $\blacksquare$
\textbf{Theorem IV.5:} \textbf{Problem 2} is decidable.
\emph{Proof:} Based on \textbf{Theorem IV.4} and \textbf{Proposition IV.4}, this result follows directly. \hfill $\blacksquare$
\textbf{Example IV.6} Based on $ONS$ shown in Fig. \ref{fig:S1}, by choosing the control command marked by a green check mark at each control state, a control equivalent and resilient supervisor $OS$ is extracted, which is illustrated in Fig. \ref{fig:OS}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.5cm]{OS.pdf}
\caption{A control equivalent and resilient supervisor $OS$ extracted from $ONS$}
\label{fig:OS}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this work, we investigate the problem of obfuscating supervisors against covert actuator attackers. By constructing the behavior-preserving structure to exactly encode all the control equivalent supervisors, we propose a sound and complete algorithm to generate all the obfuscated supervisors and show that the studied problem is decidable. In future work, we shall continue to investigate supervisor obfuscation against more powerful attacks, e.g., covert sensor-actuator attacks, in more challenging scenarios, e.g., networked systems.
\begin{appendices}
\section{Proof of Lemma III.1}
\label{appendix: Lemma III.1}
(If) Firstly, it can be checked that $L(P_{\Sigma_{o}}(G||S)) \subseteq L(P_{\Sigma_{o}}(G)||P_{\Sigma_{o}}(S)) = L(P_{\Sigma_{o}}(G)||S)$. Next, we prove that $L(G||S) \subseteq L(G||S')$. That is, we need to show that for any $t \in L(G||S)$, we have $t \in L(G||S')$. Since $t \in L(G||S)$, we have $t \in L(G)$ and $t \in L(S)$. Thus, to prove $t \in L(G||S') = L(G) \cap L(S')$, we only need to show $t \in L(S')$. Since $t \in L(G||S) \subseteq L(P_{\Sigma_{o}}(G||S))$, we have $t \in L(P_{\Sigma_{o}}(G||S)) = L(P_{\Sigma_{o}}(G||S')) \subseteq L(P_{\Sigma_{o}}(G)||S') = L(P_{\Sigma_{o}}(G)) \cap L(S')$, which implies that $t \in L(S')$. Thus, $L(G||S) \subseteq L(G||S')$. In the same way, we can prove that $L(G||S') \subseteq L(G||S)$. Hence, $L(G||S) = L(G||S')$.
(Only if) The necessity is straightforward. \hfill $\blacksquare$
\section{Proof of Proposition III.1}
\label{appendix: Proposition III.1}
Since $L(BT(S')) \subseteq L(CE)$ and $L(BPNS) = L(BPS) \cap L(CE)$, to prove $L(BT(S')) \subseteq L(BPNS)$, we only need to show that $L(BT(S')) \subseteq L(BPS)$. Thus, we need to prove that for any $t \in L(BT(S'))$, we have $t \in L(BPS)$. We prove this result by mathematical induction. The base case is: $t = \varepsilon$. Clearly, $\varepsilon \in L(BT(S'))$ and $\varepsilon \in L(BPS)$. Thus, the base case holds. Next, the induction hypothesis is: for any $t \in L(BT(S'))$ with $|t| = k$, we have $t \in L(BPS)$. Then we shall show that for any $t\sigma \in L(BT(S'))$ with $|t| = k$, we have $t\sigma \in L(BPS)$. For convenience, we denote $BT(S') = (Q_{bs'}, \Sigma \cup \Gamma, \xi_{bs'}, q_{bs'}^{init})$. According to the construction procedure of $BT(S')$, we have $L(BT(S')) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, and the verification can be divided into the following two cases:
1. $\sigma \in \Gamma$. For convenience, we shall denote $\sigma = \gamma \in \Gamma$. Based on the structure of $BT(S')$, we have $t \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$.
Then we have the following two subcases:
\begin{enumerate}[1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $\xi_{bps}(q_{bps}^{init}, t) = q^{dump}$. Based on Step 3.f in the construction of $BPS$, we have $En_{BPS}(\xi_{bps}(q_{bps}^{init}, t)) = \Gamma \cup \Sigma$. Thus, it holds that $t\gamma \in L(BPS)$.
\item $\xi_{bps}(q_{bps}^{init}, t) \neq q^{dump}$. Since $t \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$, i.e., $t$ ends with an event in $\Sigma_{o}$, based on the construction procedure of $BPS$ from $B$, we know that $\xi_{bps}(q_{bps}^{init}, t) \in Q_{b}^{com} \cup \{q^{dump}\}$. In addition, since $\xi_{bps}(q_{bps}^{init}, t) \neq q^{dump}$, we have $\xi_{bps}(q_{bps}^{init}, t) \in Q_{b}^{com}$. By construction of $BPS$, we have $P(t) \in L(B)$ and $(\xi_{b}(q_{b}^{init}, P(t)))^{com} = \xi_{bps}(q_{bps}^{init}, t)$, where $P: (\Sigma \cup \Gamma)^{*} \rightarrow \Sigma_{o}^{*}$.
Then we show that at the state $\xi_{bps}(q_{bps}^{init}, t)$, the event $\gamma$ satisfies the conditions $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ presented in Step 3.a. For $\mathcal{C}_{1}$, it requires that $En_{B}(\xi_{b}(q_{b}^{init}, P(t))) \subseteq \gamma$. Since $L(G||S) = L(G||S')$, we have $L(B) = L(P_{\Sigma_{o}}(G||S)) = L(P_{\Sigma_{o}}(G||S')) \subseteq L(S')$. Thus, we have $En_{B}(\xi_{b}(q_{b}^{init}, P(t))) \subseteq En_{S'}(\xi_{s'}(q_{s'}^{init}, P(t)))$. Since $t\gamma \in L(BT(S'))$, we have $En_{S'}(\xi_{s'}(q_{s'}^{init}, P(t))) = \gamma$. Thus, $En_{B}(\xi_{b}(q_{b}^{init}, P(t))) \subseteq \gamma$. For $\mathcal{C}_{2}$, it requires that $(\forall (q_{g},q_{s}) \in \xi_{b}(q_{b}^{init}, P(t)))En_{G}(q_{g}) \cap En_{S'}(\xi_{s'}(q_{s'}^{init}, P(t))) \subseteq En_{B}(\xi_{b}(q_{b}^{init}, P(t)))$, which clearly holds; otherwise, there would exist $(q_{g},q_{s}) \in \xi_{b}(q_{b}^{init}, P(t))$ such that $En_{G}(q_{g}) \cap En_{S'}(\xi_{s'}(q_{s'}^{init}, P(t))) \not\subseteq En_{B}(\xi_{b}(q_{b}^{init}, P(t)))$, and then $L(P_{\Sigma_o}(G||S)) \neq L(P_{\Sigma_o}(G||S'))$, implying that $L(G||S) \neq L(G||S')$ based on \textbf{Lemma III.1}, which causes the contradiction.
\end{enumerate}
2. $\sigma \in \Sigma$. Based on the structure of $BT(S')$, there exists $t_{1} \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$, $\gamma \in \Gamma$ and $t_{2} \in (\gamma \cap \Sigma_{uo})^{*}$ such that $t = t_{1}\gamma t_{2} \in L(BPS)$ and $\sigma \in \gamma$. Then we have the following two subcases:
\begin{enumerate}[1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $\xi_{bps}(q_{bps}^{init}, t) = q^{dump}$. Based on Step 3.f in the construction of $BPS$, we have $En_{BPS}(\xi_{bps}(q_{bps}^{init}, t)) = \Gamma \cup \Sigma$. Thus, it holds that $t\sigma \in L(BPS)$.
\item $\xi_{bps}(q_{bps}^{init}, t) \neq q^{dump}$. Since any event in $\Sigma \cup \Gamma$ labels a self-loop transition at the state $q^{dump}$, we know that $\xi_{bps}(q_{bps}^{init}, t_{1}) \neq q^{dump}$. Thus, $\xi_{bps}(q_{bps}^{init}, t_{1}\gamma)$ is a reaction state. According to Step 3.b and Step 3.d in the construction of $BPS$, we have $\xi_{bps}(q_{bps}^{init}, t_{1}\gamma) = \xi_{bps}(q_{bps}^{init}, t_{1}\gamma t_{2})$, which is still a reaction state. In addition, according to Step 3.b - Step 3.e in the construction of $BPS$, any event in $\Sigma$ is defined at any reaction state; thus, we have $t\sigma = t_{1}\gamma t_{2} \sigma \in L(BPS)$.
\end{enumerate}
Based on the above analysis, in any case, we have $t\sigma \in L(BPS)$, which completes the proof. \hfill $\blacksquare$
\section{Proof of Proposition III.2}
\label{appendix: Proposition III.2}
Since $L(G||S) \neq L(G||S')$, based on \textbf{Lemma III.1}, we have $L(B) = L(P_{\Sigma_{o}}(G||S)) \neq L(P_{\Sigma_{o}}(G||S')) = L(B')$, where $B' = P_{\Sigma_{o}}(G||S') = (Q_{b'}, \Sigma, \xi_{b'}, q_{b'}^{init})$. Then we know that there exists $t \in \Sigma_{o}^{*} \cap L(B) \cap L(B')$ such that for any $i \in [0: |t|-1]$, the following conditions are satisfied:
\begin{enumerate}[1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $En_{B}(\xi_{b}(q_{b}^{init}, \mathcal{P}_{i}(t))) = En_{B'}(\xi_{b'}(q_{b'}^{init}, \mathcal{P}_{i}(t)))$
\item $En_{B}(\xi_{b}(q_{b}^{init}, t)) \neq En_{B'}(\xi_{b'}(q_{b'}^{init}, t))$
\end{enumerate}
According to the way of constructing $B$ and $B'$, we have for any $i \in [0: |t|-1]$, the following conditions are satisfied:
\begin{enumerate}[C1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $En_{B}(\xi_{b}(q_{b}^{init}, \mathcal{P}_{i}(t))) \subseteq En_{S'}(\xi_{s'}(q_{s'}^{init}, \mathcal{P}_{i}(t)))$
\item $(\forall (q_{g}, q_{s}) \in \xi_{b}(q_{b}^{init}, \mathcal{P}_{i}(t)))En_{G}(q_{g}) \cap En_{S'}(\xi_{s'}(q_{s'}^{init}, \\ \mathcal{P}_{i}(t))) \subseteq En_{B}(\xi_{b}(q_{b}^{init}, \mathcal{P}_{i}(t)))$
\item $En_{B}(\xi_{b}(q_{b}^{init}, t)) \not\subseteq En_{S'}(\xi_{s'}(q_{s'}^{init}, t)) \vee \\ (\exists (q_{g}, q_{s}) \in \xi_{b}(q_{b}^{init}, t))En_{G}(q_{g}) \cap En_{S'}(\xi_{s'}(q_{s'}^{init}, t)) \\ \not\subseteq En_{B}(\xi_{b}(q_{b}^{init}, t))$
\end{enumerate}
Next, we consider two strings $u = \gamma_{0}t[1]\gamma_{1}\dots t[|t|-1]\gamma_{|t|-1}t[|t|]$ ($u = \varepsilon$ if $t = \varepsilon$) and $u\gamma_{|t|}$, where for any $i \in [0: |t|]$, we have $\gamma_{i} = En_{S'}(\xi_{s'}(q_{s'}^{init}, \mathcal{P}_{i}(t)))$. Since $t \in \Sigma_{o}^{*} \cap L(B) \cap L(B')$, we know that $t \in L(S')$. Thus, for any $j \in [1: |t|]$, it holds that $t[j] \in \gamma_{j-1}$. In addition, according to the construction procedure of $BT(S')$, we know that $u \in L(BT(S'))$. Next, we prove that $u \in L(BPS)$ by mathematical induction. For convenience, we denote $u = c_{1}\dots c_{|t|}$, where $c_{i} = \gamma_{i-1}t[i]$. The base case is to prove $c_{1} = \gamma_{0}t[1] \in L(BPS)$. If $t = \varepsilon$, then $u = \varepsilon$, which means that $u = \varepsilon \in L(BPS)$. Next, we only consider $t \neq \varepsilon$. Since $t \in L(P_{\Sigma_{o}}(G||S))$, we have $t[1] \in L(P_{\Sigma_{o}}(G||S))$. In addition, since conditions C1) and C2) hold, we know that for Step 3.a in the construction procedure of $BPS$, the conditions $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ are satisfied for $\gamma_{0}$ at the state $(q_{b}^{init})^{com}$ in $BPS$. Thus, $\gamma_{0}t[1] \in L(BPS)$ and the base case holds. The induction hypothesis is $c_{1}\dots c_{k} = \gamma_{0}t[1]\gamma_{1}\dots \gamma_{k-1}t[k] \in L(BPS)$, where $k \leq |t|-2$, and we need to prove $c_{1}\dots c_{k+1} = \gamma_{0}t[1]\gamma_{1}\dots \gamma_{k-1}t[k]\gamma_{k}t[k+1] \in L(BPS)$. It can be checked that $BPS$ would transit to the state $(\xi_{b}(q_{b}^{init}, t[1]\dots t[k]))^{com}$ via the string $c_{1}\dots c_{k}$.
Thus, we need to check whether the conditions $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ in Step 3.a of the construction procedure of $BPS$ are satisfied for $\gamma_{k}$ at the state $(\xi_{b}(q_{b}^{init}, t[1]\dots t[k]))^{com}$, that is, whether $En_{B}(\xi_{b}(q_{b}^{init}, t[1]\dots t[k])) \subseteq \gamma_{k}$ and $(\forall (q_{g},q_{s}) \in \xi_{b}(q_{b}^{init}, t[1]\dots t[k]))En_{G}(q_{g}) \cap \gamma_{k} \subseteq En_{B}(\xi_{b}(q_{b}^{init}, t[1]\dots t[k]))$ hold. Clearly, these two conditions hold, as they are a special case of C1) and C2) with $i = k$. Thus, $u \in L(BPS)$. Since $BPNS = BPS||CE$ and $t[j] \in \gamma_{j-1}$ ($j \in [1: |t|]$), we know that $u \in L(BPNS)$.
Finally, we prove that $u\gamma_{|t|} \in L(BT(S'))$ and $u\gamma_{|t|} \notin L(BPS)$. According to the way of generating $BT(S')$, we have $u\gamma_{|t|} \in L(BT(S'))$ because $\gamma_{|t|} = En_{S'}(\xi_{s'}(q_{s'}^{init}, t))$. Since the condition C3) holds, we know that for $BPS$, at the state $q^{com} = \xi_{bps}(q_{bps}^{init}, u)$, where $q = \xi_{b}(q_{b}^{init}, t)$, it holds that either $En_{B}(q) \not\subseteq En_{S'}(\xi_{s'}(q_{s'}^{init}, t)) = \gamma_{|t|}$ or $(\exists (q_{g}, q_{s}) \in q)En_{G}(q_{g}) \cap En_{S'}(\xi_{s'}(q_{s'}^{init}, t)) = En_{G}(q_{g}) \cap \gamma_{|t|} \not\subseteq En_{B}(q)$, i.e., the conditions $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$ in Step 3.a of the construction procedure of $BPS$ are not satisfied at the state $q^{com} = \xi_{bps}(q_{bps}^{init}, u)$, so that $u\gamma_{|t|} \notin L(BPS)$ and thus $u\gamma_{|t|} \notin L(BPNS)$, which completes the proof. \hfill $\blacksquare$
\section{Proof of Theorem III.1}
\label{appendix: Theorem III.1}
Based on \textbf{Proposition III.1}, we have LHS $\subseteq$ RHS. Next, we prove RHS $\subseteq$ LHS. Thus, we need to show that for any $t \in L(BPNS)$, we have $t \in$ LHS. We proceed by contradiction and assume that $t \notin$ LHS. Since $t \in L(BPNS) = L(BPS) \cap L(CE)$, we have $t \in L(BPS)$ and $t \in L(CE)$. In addition, since $CE$ encodes all the bipartite supervisors and $t \notin$ LHS, we know that there exists a supervisor $\hat{S}$ such that $L(G||S) \neq L(G||\hat{S})$ and $t \in L(BT(\hat{S})) -$ LHS. Then, $t$ must contain some control command that would result in the violation of control equivalence. Without loss of generality, we know that there exists $u \leq t$ such that $u = \gamma_{0}t_{1}\gamma_{1}\dots t_{m}\gamma_{m}$, where $m \in \mathbb{N}$ ($u = \gamma_{0}$ when $m = 0$) and the following conditions are satisfied:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall i \in [1:m])t_{i} \in (\gamma_{i-1} \cap \Sigma_{uo})^{*}(\gamma_{i-1} \cap \Sigma_{o})$ for $m \geq 1$. For convenience, we denote $t^{obs} = t_{1}^{\downarrow}\dots t_{m}^{\downarrow}$ for $m \geq 1$, and $t^{obs} = \varepsilon$ for $m = 0$.
\item $(\forall i \in [1:m])En_{B}(\xi_{b}(q_{b}^{init}, \mathcal{P}_{i-1}(t^{obs}))) \subseteq \gamma_{i-1}$ for $m \geq 1$.
\item $(\forall i \in [1:m])(\forall (q_{g}, q_{s}) \in \xi_{b}(q_{b}^{init}, \mathcal{P}_{i-1}(t^{obs})))En_{G}(q_{g})\\ \cap \gamma_{i-1} \subseteq En_{B}(\xi_{b}(q_{b}^{init}, \mathcal{P}_{i-1}(t^{obs})))$ for $m \geq 1$.
\item $En_{B}(\xi_{b}(q_{b}^{init}, t^{obs})) \not\subseteq \gamma_{m} \vee (\exists (q_{g}, q_{s}) \in \xi_{b}(q_{b}^{init}, t^{obs}))\\En_{G}(q_{g}) \cap \gamma_{m} \not\subseteq En_{B}(\xi_{b}(q_{b}^{init}, t^{obs}))$
\end{enumerate}
Based on the above item 4, we know that $\mathcal{C}_{1} \wedge \mathcal{C}_{2}$ is not satisfied for the control command $\gamma_{m}$ at the state $(\xi_{b}(q_{b}^{init}, t^{obs}))^{com}$ in Step 3.a of the construction of $BPS$. Thus, $u \notin L(BPS)$ and $t \notin L(BPS)$, which causes the contradiction. Hence, the assumption does not hold and $t \in$ LHS, which completes the proof. \hfill $\blacksquare$
\section{Proof of Proposition IV.1}
\label{appendix: Proposition IV.1}
We denote $BT(S') = (Q_{bs'}, \Sigma \cup \Gamma, \xi_{bs'}, q_{bs'}^{init})$ and $BT(S')^{A} = (Q_{bs'}^{a}, \Sigma \cup \Gamma, \xi_{bs'}^{a}, q_{bs'}^{a,init})$. To prove $L(BT(S')^{A}) \subseteq L(BPNS^{A})$, we only need to demonstrate that $BT(S')^{A}$ is simulated by $BPNS^{A}$. Let $R \subseteq Q_{bs'}^{a} \times Q_{bpns}^{a} = (Q_{bs'} \cup \{q^{detect}\}) \times (Q_{bpns} \cup \{q_{bpns}^{detect}\})$ be a relation defined such that
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $q_{1} \in Q_{bs'} \subseteq Q_{bs'}^{a}$, any $q_{2} \in Q_{bpns} \subseteq Q_{bpns}^{a}$ and any $t \in L(BT(S')) \subseteq L(BPNS)$ such that $\xi_{bs'}(q_{bs'}^{init}, t) = q_{1}$ and $\xi_{bpns}(q_{bpns}^{init}, t) = q_{2}$, $(q_{1}, q_{2}) \in R$
\item $(q^{detect}, q_{bpns}^{detect}) \in R$
\end{enumerate}
We observe that, by construction, $(q_{bs'}^{a,init}, q_{bpns}^{a,init}) \in R$. Next, without loss of generality, we consider two states $q_{1} \in Q_{bs'}$ and $q_{2} \in Q_{bpns}$ such that $(q_{1}, q_{2}) \in R$. According to the definition of $R$, we know that there exists $t \in L(BT(S')) \subseteq L(BPNS)$ such that $\xi_{bs'}(q_{bs'}^{init}, t) = q_{1}$ and $\xi_{bpns}(q_{bpns}^{init}, t) = q_{2}$. Since $L(BT(S')) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, there are three cases:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $t \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}\Gamma$. We know that $q_{1}$ and $q_{2}$ are reaction states and only events in $\Sigma$ are defined at $q_{1}$ and $q_{2}$. Then, for any $\sigma \in \Sigma$ such that $\xi_{bs'}^{a}(q_{1}, \sigma) = \hat{q}_{1}$, we have the following analysis: Firstly, by construction, it can be checked that $En_{BT(S')}(q_{1}) = En_{BPNS}(q_{2})$. Secondly, as presented in Step 3.b-Step 3.c of the construction of $BT(S')^{A}$ and Step 3.b-Step 3.c of the construction of $BPNS^{A}$, we know that 1) the new transitions in $BT(S')^{A}$ ($BPNS^{A}$, respectively) would only be added at the reaction states of $BT(S')$ ($BPNS$, respectively), and 2) at any reaction state in $BT(S')$ ($BPNS$, respectively), we shall complete the undefined events in $(\Sigma_{c,a} \cap \Sigma_{uo}) \cup \Sigma_{o}$, where the unobservable events are self-loop transitions and the observable events lead to the state $q^{detect}$ ($q_{bpns}^{detect}$, respectively). Thus, $\xi_{bpns}^{a}(q_{2}, \sigma)!$ and we denote $\xi_{bpns}^{a}(q_{2}, \sigma) = \hat{q}_{2}$. If $\sigma \in En_{BT(S')}(q_{1})$, then we have $\xi_{bs'}(q_{bs'}^{init}, t\sigma) = \hat{q}_{1}$ and $\xi_{bpns}(q_{bpns}^{init}, t\sigma) = \hat{q}_{2}$, i.e., $(\hat{q}_{1}, \hat{q}_{2}) \in R$. If $\sigma \notin En_{BT(S')}(q_{1})$, then we have the following two subcases: 1) $\sigma \in \Sigma_{uo}$. Since unobservable events are self-loop transitions, we know that $\hat{q}_{1} = q_{1}$ and $\hat{q}_{2} = q_{2}$, i.e., $(\hat{q}_{1}, \hat{q}_{2}) \in R$. 2) $\sigma \in \Sigma_{o}$. Then we know that $\hat{q}_{1} = q^{detect}$ and $\hat{q}_{2} = q_{bpns}^{detect}$. In addition, since $(q^{detect}, q_{bpns}^{detect}) \in R$, we still have $(\hat{q}_{1}, \hat{q}_{2}) \in R$.
\item $t \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}\Gamma\Sigma_{uo}^{*}$. Since unobservable events are self-loop transitions at any reaction state of $BT(S')^{A}$ and $BPNS^{A}$, this case can be reduced to Case 1.
\item $t \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$. We know that $q_{1}$ and $q_{2}$ are control states and only events in $\Gamma$ are defined at $q_{1}$ and $q_{2}$. Then, for any $\gamma \in \Gamma$ such that $\xi_{bs'}^{a}(q_{1}, \gamma) = \hat{q}_{1}$, firstly, we know that $t\gamma \in L(BT(S'))$. Based on \textbf{Proposition III.1}, we have $t\gamma \in L(BPNS)$, and thus, by construction, $t\gamma \in L(BPNS^{A})$, i.e., $\xi_{bpns}^{a}(q_{2}, \gamma)!$ and we denote $\xi_{bpns}^{a}(q_{2}, \gamma) = \hat{q}_{2}$. Clearly, $(\hat{q}_{1}, \hat{q}_{2}) \in R$ as $\xi_{bs'}(q_{bs'}^{init}, t\gamma) = \hat{q}_{1}$ and $\xi_{bpns}(q_{bpns}^{init}, t\gamma) = \hat{q}_{2}$.
\end{enumerate}
Thus, for any $\sigma \in \Sigma \cup \Gamma$ such that $\xi_{bs'}^{a}(q_{1}, \sigma) = \hat{q}_{1}$, we have $\xi_{bpns}^{a}(q_{2}, \sigma) = \hat{q}_{2}$ and $(\hat{q}_{1}, \hat{q}_{2}) \in R$, which completes the proof. \hfill $\blacksquare$
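Since $BT(S')^{A}$ and $BPNS^{A}$ are deterministic, the simulation argument above reduces to a synchronous reachability check: every transition enabled in the simulated structure must be matched from the paired state of the simulating one. The following Python sketch (ours; the dictionary encoding and all names are hypothetical, not the paper's data structures) illustrates this check.

```python
def is_simulated(t1, init1, t2, init2):
    """t1, t2: dict (state, event) -> state of two deterministic systems;
    returns True iff every string executable from init1 in t1 is also
    executable from init2 in t2 (t2 simulates t1)."""
    visited, stack = {(init1, init2)}, [(init1, init2)]
    while stack:
        q1, q2 = stack.pop()
        for (s, ev), q1n in t1.items():
            if s != q1:
                continue
            if (q2, ev) not in t2:
                return False            # the simulator cannot match this move
            pair = (q1n, t2[(q2, ev)])
            if pair not in visited:     # explore each state pair once
                visited.add(pair)
                stack.append(pair)
    return True
```

Applied to $BT(S')^{A}$ (as `t1`) and $BPNS^{A}$ (as `t2`), the visited pairs correspond exactly to the relation $R$ built in the proof.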
\section{Proof of Theorem IV.1}
\label{appendix: Theorem IV.1}
Based on \textbf{Proposition IV.1}, we have LHS $\subseteq$ RHS. Next, we prove RHS $\subseteq$ LHS, that is, for any $t \in$ RHS, we need to show $t \in$ LHS. Then there are two cases:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $t \in L(BPNS)$. Based on \textbf{Theorem III.1}, we have $t \in \bigcup\limits_{S' \in \mathscr{S}_{e}(S)}L(BT(S'))$. Since the construction of $BT(S')^{A}$ does not remove any transition originally defined in $BT(S')$, we have $t \in$ LHS.
\item $t \notin L(BPNS)$ but $t \in L(BPNS^{A})$. Then we need to prove $t \in$ LHS, i.e., for any $n \in [0: |t|]$, we have $\mathcal{P}_{n}(t) \in$ LHS.
We proceed by mathematical induction. The base case clearly holds, as $\mathcal{P}_{0}(t) = \varepsilon \in$ LHS. The induction hypothesis is $\mathcal{P}_{k}(t) \in$ LHS, where $k \leq |t|-2$, and we need to prove $\mathcal{P}_{k+1}(t) := \mathcal{P}_{k}(t)\sigma \in$ LHS. Then there are two subcases:
\begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $\mathcal{P}_{k}(t) = t_{1}\gamma t_{2}$, where $t_{1} \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$, $\gamma \in \Gamma$, $t_{2} \in (\gamma \cap \Sigma_{uo})^{*}$. Since $\mathcal{P}_{k}(t) \in$ LHS, there exists a supervisor $S' \in \mathscr{S}_{e}(S)$ such that $\mathcal{P}_{k}(t) \in L(BT(S')^{A})$ and $L(BT(S')^{A})$ would reach a reaction state $q$ via the string $\mathcal{P}_{k}(t)$.
We denote $BT(S')^{A} = (Q_{bs'}^{a}, \Sigma \cup \Gamma, \xi_{bs'}^{a}, q_{bs'}^{a,init})$. Since the construction of $BT(S')^{A}$ from $BT(S')$ follows the same operation as the construction of $BPNS^{A}$ from $BPNS$, we have $En_{BPNS^{A}}(\xi_{bpns}^{a}(q_{bpns}^{a,init}, \mathcal{P}_{k}(t))) = En_{BT(S')^{A}}(\xi_{bs'}^{a}(q_{bs'}^{a,init}, \mathcal{P}_{k}(t)))$. Thus, $\mathcal{P}_{k+1}(t) = \mathcal{P}_{k}(t)\sigma \in L(BT(S')^{A}) \subseteq$ LHS.
\item $\mathcal{P}_{k}(t) \in (\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}$. It can be checked that $BPNS^{A}$ does not reach the state $q_{bpns}^{detect}$ via $\mathcal{P}_{k}(t)$; otherwise, since there are no events in $\Sigma \cup \Gamma$ defined at the state $q_{bpns}^{detect}$, the string would halt at $\mathcal{P}_{k}(t)$, which contradicts the fact that $\mathcal{P}_{k+1}(t)= \mathcal{P}_{k}(t)\sigma$. In addition, we know that $\sigma \in \Gamma$ because $L(BPNS^{A}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$. Since the construction of $BPNS^{A}$ does not remove from $BPNS$ any transition that is labelled by an event in $\Gamma$ and $\mathcal{P}_{k}(t) \in$ LHS, based on \textbf{Theorem III.1}, there exists a supervisor $S' \in \mathscr{S}_{e}(S)$ such that $\mathcal{P}_{k+1}(t) = \mathcal{P}_{k}(t)\sigma \in L(BT(S')^{A}) \subseteq$ LHS.
\end{enumerate}
\end{enumerate}
Based on the above analysis, we have $t \in$ LHS, which completes the proof. \hfill $\blacksquare$
\section{Proof of Proposition IV.2}
\label{appendix: Proposition IV.2}
We proceed by contradiction and assume that $L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \not\subseteq L(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$.
Since $L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ and $L(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, it must be the case that there exists $t \in \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ such that $t \in L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) = L(G||CE^{A}||BT(S')^{A}) \cap L(\mathcal{A})$ and $t \notin L(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}}) = L(G||CE^{A}||BPNS^{A}) \cap L(\hat{\mathcal{A}})$. Thus, $t \in L(G||CE^{A}||BT(S')^{A})$. In addition, based on \textbf{Proposition IV.1}, we have $L(BT(S')^{A}) \subseteq L(BPNS^{A})$. Since the alphabets of $BT(S')^{A}$ and $BPNS^{A}$ are the same, we have $L(G||CE^{A}||BT(S')^{A}) \subseteq L(G||CE^{A}||BPNS^{A})$. Thus, $t \in L(G||CE^{A}||BPNS^{A})$, based on which we have $t \notin L(\hat{\mathcal{A}})$. Since $\hat{\mathcal{A}}$ is synthesized by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement, we have the following two cases:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $t \notin L(\mathcal{P}_{r})$. Since $t \in L(\mathcal{P})$, according to the construction of $\mathcal{P}_{r}$, we know that via the string $t$, $G||CE^{A}||BT(S')^{A}||\mathcal{A}$ would reach the state $(q, q_{ce}^{a}, q_{bs'}^{a}, q_{a})$, where $q_{bs'}^{a} = q^{detect}$, i.e., $\mathcal{A}$ is not a covert actuator attacker against the supervisor $S'$, which causes the contradiction.
\item $t \in L(\mathcal{P}_{r})$. According to Step 3 of \textbf{Procedure 1}, we know that $L(\hat{\mathcal{A}})$ is the supremal controllable and normal sublanguage of $L(G||CE^{A}||BPNS^{A})$ w.r.t. $L(\mathcal{P}_{r})$. In addition, based on the construction of $BPNS^{A}$ and $CE^{A}$, we know that any transition that would lead to a state in $Q_{bad}$ of $\mathcal{P}$ must be labelled by
an event in $\Sigma_{c,a} \cap \Sigma_{o}$, which is controllable by the actuator attacker. Since $t \in L(\mathcal{P}_{r})$ and $t \notin L(\hat{\mathcal{A}})$, we know that there exists $\hat{t} = \hat{t}'\gamma\sigma_{1}\sigma_{2}\dots\sigma_{n}\sigma_{o} \in L(\mathcal{P})$ such that the following conditions are satisfied:
\begin{enumerate}[1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\exists u \leq t)P_{\Sigma_{o,a} \cup \Gamma}(\hat{t}) = P_{\Sigma_{o,a} \cup \Gamma}(u)$, where $P_{\Sigma_{o,a} \cup \Gamma}: (\Sigma \cup \Gamma)^{*} \rightarrow (\Sigma_{o,a} \cup \Gamma)^{*}$
\item $\xi_{\mathcal{P}}(q_{\mathcal{P}}^{init}, \hat{t}) \in Q_{bad}$
\item $\hat{t}^{\downarrow} = \sigma_{o} \in (\Sigma_{c,a} - \gamma) \cap \Sigma_{o}$
\item $(\forall i \in [1:n]) \sigma_{i} \in (\gamma \cup \Sigma_{c,a}) \cap \Sigma_{uo}$
\end{enumerate}
Since $P_{\Sigma_{o,a} \cup \Gamma}(\hat{t}) = P_{\Sigma_{o,a} \cup \Gamma}(u)$, we know that $u = t'\gamma\sigma_{1}'\sigma_{2}'\dots\sigma_{m}'\sigma_{o}$ such that the following conditions are satisfied:
\begin{enumerate}[1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $P_{\Sigma_{o,a} \cup \Gamma}(t') = P_{\Sigma_{o,a} \cup \Gamma}(\hat{t}')$
\item $(\forall i \in [1:m]) \sigma_{i}' \in (\gamma \cup \Sigma_{c,a}) \cap \Sigma_{uo}$
\end{enumerate}
Since $\sigma_{o} \in (\Sigma_{c,a} - \gamma) \cap \Sigma_{o}$, we know that $BPNS^{A}$ would reach the state $q_{bpns}^{detect}$ via the string $u$, implying that $t = u$. Thus, $\xi_{\mathcal{P}}(q_{\mathcal{P}}^{init}, t) \in Q_{bad}$, which causes the contradiction with the fact that $t \in L(\mathcal{P}_{r})$.
\end{enumerate}
Based on the above analysis, the assumption $L(G||CE^{A}||BT(S')^{A}||\mathcal{A}) \not\subseteq L(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$ does not hold, which completes the proof.
\hfill $\blacksquare$
\section{Proof of Theorem IV.2}
\label{appendix: Theorem IV.2}
Based on \textbf{Corollary IV.1}, we have LHS $\subseteq$ RHS. Then we only need to show RHS $\subseteq$ LHS. We proceed by contradiction and assume that RHS $\not\subseteq$ LHS. Then we know that there exists $\varepsilon \neq t \in \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ such that $t \in$ RHS and $t \notin$ LHS. Hence, $t$ can be executed in $G$, $CE^{A}$, $BPNS^{A}$ and $\hat{\mathcal{A}}$, after we lift their alphabets to $\Sigma \cup \Gamma$, and $G$ would reach a state in $Q_{d}$ via $t$ after the alphabet lift. Then, based on \textbf{Theorem IV.1}, we know that there exists $S' \in \mathscr{S}_{e}(S)$ such that $t \in L(BT(S')^{A})$. Thus, $t \in L_{m}(G||CE^{A}||BT(S')^{A}||\hat{\mathcal{A}})$, i.e., $\hat{\mathcal{A}}$ is damage-reachable against $S'$. In addition, it can be checked that $\hat{\mathcal{A}}$ is covert against $S'$; otherwise, there exists a string $t' \in L(G||CE^{A}||BT(S')^{A}||\hat{\mathcal{A}})$ such that $BT(S')^{A}$ reaches the state $q^{detect}$ via the string $t'$, which results in $G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}}$ reaching some state in $Q_{bad}$ via the string $t'$, and the contradiction is caused. Thus, $\hat{\mathcal{A}}$ is covert and damage-reachable against $S'$ and we have $\hat{\mathcal{A}} \in \mathscr{A}(S')$, which means that $t \in$ LHS and this causes the contradiction. Hence, the assumption that RHS $\not\subseteq$ LHS does not hold, which completes the proof. \hfill $\blacksquare$
\section{Proof of Proposition IV.3}
\label{appendix: Proposition IV.3}
Firstly, we prove $L(S_{0}) \subseteq L(BPNS)$. We proceed by contradiction and assume that $L(S_{0}) \not\subseteq L(BPNS)$. Since $L(S_{0}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ and $L(BPNS) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, we have the following two cases:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item There exists
$t \in L(S_{0}) \cap L(BPNS)$ and $\gamma \in \Gamma$ such that $t\gamma \in L(S_{0})$ and $t\gamma \notin L(BPNS)$. Thus, $t\gamma \in L(S_{0}) \subseteq L(S_{0}^{A}) \subseteq L(BPNS^{A})$. Since the construction of $BPNS^{A}$ from $BPNS$ does not add any transition labelled by a control command, we have $t\gamma \in L(BPNS)$, which causes the contradiction.
\item There exists $t \in (\Sigma \cup \Gamma)^*$, $\gamma \in \Gamma$, $t' \in (\gamma \cap \Sigma_{uo})^{*}$ and $\sigma \in \Sigma$ such that $t\gamma t' \in L(S_{0}) \cap L(BPNS)$, $t\gamma t'\sigma \in L(S_{0})$ and $t\gamma t'\sigma \notin L(BPNS)$. Thus, $t\gamma t'\sigma \in L(S_{0}) \subseteq L(S_{0}^{A}) \subseteq L(BPNS^{A})$. Based on the construction of $BPNS^{A}$ from $BPNS$, we have $\sigma \in \Sigma - \gamma$. However, this would violate the structure of $S_{0}$, which causes the contradiction.
\end{enumerate}
Thus, the assumption does not hold. It follows that $L(S_{0}) \subseteq L(BPNS)$. Hence, $L(ONS) \subseteq L(S_{0}) \subseteq L(BPNS)$. \hfill $\blacksquare$
\section{Proof of Theorem IV.3}
\label{appendix: Theorem IV.3}
Firstly, we prove LHS $\subseteq$ RHS. Thus, we shall show that for any $S' \in \mathscr{S}_{e}^{r}(S)$, we have $L(BT(S')) \subseteq L(ONS)$. Based on \textbf{Proposition IV.1}, we have $L(BT(S')^{A}) \subseteq L(BPNS^{A})$. Next, we prove $L(BT(S')^{A}) \subseteq L(S_{0}^{A})$. We proceed by contradiction and assume that $L(BT(S')^{A}) \not\subseteq L(S_{0}^{A})$. Firstly, since $S'$ is a resilient supervisor, based on \textbf{Theorem IV.2}, we have $(\forall t \in L(BT(S')^{A}))\, t \notin L_{m}(\mathcal{P}) = L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$; otherwise, there exists $t \in L(BT(S')^{A})$ such that $t \in L_{m}(\mathcal{P})$, i.e., there exists a covert and damage-reachable actuator attacker against $S'$, which contradicts the fact that $S'$ is resilient. Then, due to the assumption $L(BT(S')^{A}) \not\subseteq L(S_{0}^{A})$ and Step 3 of \textbf{Procedure 2} to synthesize $S_{0}^{A}$, we know that there exist $u \in L(BPNS^{A})$, $\gamma \in \Gamma$ and $v \in (\gamma \cup \Sigma_{c,a})^{*} - \{\varepsilon\}$ such that
\[
\begin{aligned}
u\gamma v \in L_{m}(\mathcal{P}) \wedge [(&\exists t \in L(BT(S')^{A} ))t\gamma \in L(BT(S')^{A}) \wedge \\ &t\gamma \notin L(S_{0}^{A}) \wedge P_{\Sigma_{o} \cup \Gamma}(t) = P_{\Sigma_{o} \cup \Gamma}(u)]
\end{aligned}
\]
Since 1) $L(BPNS^{A}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$ and $L(BT(S')^{A}) \subseteq \overline{(\Gamma\Sigma_{uo}^{*}\Sigma_{o})^{*}}$, where any event in $\Sigma_{uo}$, if defined, leads to a self-loop transition and any event in $\Sigma_{o} \cup \Gamma$, if defined, leads to an outgoing transition, and 2) $P_{\Sigma_{o} \cup \Gamma}(t) = P_{\Sigma_{o} \cup \Gamma}(u)$, we know that $u\gamma \in L(BT(S')^{A})$. Then, according to the construction procedure of $BT(S')^{A}$, we have $u\gamma v \in L(BT(S')^{A})$, which causes the contradiction because $u\gamma v \in L_{m}(\mathcal{P})$. Thus, the assumption does not hold and $L(BT(S')^{A}) \subseteq L(S_{0}^{A})$. Then, we have $L(BT(S')) \subseteq L(S_{0})$; otherwise, there exists a string $w \in L(BT(S')) \subseteq L(BT(S')^{A}) \subseteq L(S_{0}^{A})$ such that $w \notin L(S_{0})$, thus, based on the construction of $S_{0}^{A}$ from $S_{0}$, there exists $w', w'' \in \overline{\{w\}}$, $\gamma' \in \Gamma$ and $\sigma \in \Sigma - \gamma'$ such that $w' = w''\gamma'\sigma$, which violates the structure of $BT(S')$.
Next, we prove $L(BT(S')) \subseteq L(ONS)$. We proceed by contradiction and assume that $L(BT(S')) \not\subseteq L(ONS)$. Then, according to \textbf{Procedure 3}, without loss of generality, we know that there exists $k \geq 0$ such that $L(BT(S')) \subseteq L(S_{k})$ and $L(BT(S')) \not\subseteq L(S_{k+1})$. Then we know that there exist $u' \in L(S_{k})$, $\gamma'' \in \Gamma$ and $\sigma' \in \gamma'' \cap \Sigma_{o}$ such that
\[
\begin{aligned}
&En_{S_{k}}(\xi_{S_{k}}(q_{S_{k}}^{init}, u'\gamma'' \sigma')) = \varnothing \wedge
\\& [(\exists t' \in L(BT(S'))) t'\gamma'' \in L(BT(S')) \wedge t'\gamma'' \notin L(S_{k+1}) \wedge \\& P_{\Sigma_{o} \cup \Gamma}(u') = P_{\Sigma_{o} \cup \Gamma}(t')]
\end{aligned}
\]
Similarly, we have $u'\gamma'' \in L(BT(S'))$, and thus $u'\gamma'' \sigma' \in L(BT(S'))$. Since $L(BT(S')) \subseteq L(S_{k})$ and there is always a control command in $\Gamma$ defined at any control state of $BT(S')$, we have $En_{S_{k}}(\xi_{S_{k}}(q_{S_{k}}^{init}, u'\gamma'' \sigma')) \neq \varnothing$, which causes the contradiction. Thus, the assumption does not hold and $L(BT(S')) \subseteq L(ONS)$.
Secondly, we prove RHS $\subseteq$ LHS. Thus, we need to show that for any $t \in$ RHS, we have $t \in$ LHS. Next, we shall construct a bipartite supervisor whose closed behavior contains the string $t$, and then we prove it is resilient and control equivalent to $BT(S)$. Firstly, we generate an automaton $T$ that recognizes $t$, i.e., $L_{m}(T) = \{t\}$. Then we compute its subset construction $P_{\Sigma_{o} \cup \Gamma}(T) = (Q_{t}, \Sigma \cup \Gamma, \xi_{t}, q_{t}^{init})$. By construction, we can denote $Q_{t} = Q_{t}^{rea} \dot{\cup} Q_{t}^{com}$, where $Q_{t}^{rea}$ is the set of reaction states and $Q_{t}^{com}$ is the set of control states. Then we shall complete some transitions in $P_{\Sigma_{o} \cup \Gamma}(T)$ and generate a new automaton $NC = (Q_{nc}, \Sigma \cup \Gamma, \xi_{nc}, q_{nc}^{init})$, which contains the necessary control command sequence encoded in $P_{\Sigma_{o} \cup \Gamma}(T)$. The construction procedure of $NC$ is given as follows.
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{nc} = Q_{t} \cup \{q^{obs}\} \cup \{q^{\gamma}|\gamma \in \Gamma\}$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{t})(\forall \sigma \in \Sigma \cup \Gamma)\xi_{t}(q, \sigma) = q' \Rightarrow \xi_{nc}(q, \sigma) = q'$
\item $(\forall q \in Q_{t}^{rea})(\forall \sigma \in \Sigma_{uo})\neg \xi_{t}(q, \sigma)! \Rightarrow \xi_{nc}(q, \sigma) = q$
\item $(\forall q \in Q_{t}^{rea})(\forall \sigma \in \Sigma_{o})\neg \xi_{t}(q, \sigma)! \Rightarrow \xi_{nc}(q, \sigma) = q^{obs}$
\item $(\forall q \in Q_{t}^{com})En_{P_{\Sigma_{o} \cup \Gamma}(T)}(q) = \varnothing \Rightarrow (\forall \gamma \in \Gamma)\xi_{nc}(q, \gamma) = q^{\gamma}$
\item $(\forall \gamma \in \Gamma)\xi_{nc}(q^{obs}, \gamma) = q^{\gamma}$
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{o}) \xi_{nc}(q^{\gamma}, \sigma) = q^{obs}$.
\item $(\forall \gamma \in \Gamma)(\forall \sigma \in \gamma \cap \Sigma_{uo}) \xi_{nc}(q^{\gamma}, \sigma) = q^{\gamma}$.
\end{enumerate}
\item $q_{nc}^{init} = q_{t}^{init}$
\end{enumerate}
Briefly speaking, all the transitions in $P_{\Sigma_{o} \cup \Gamma}(T)$ are retained, denoted by Step 2.a. At any reaction state $q \in Q_{t}^{rea}$, all the undefined unobservable (observable, respectively) events in $P_{\Sigma_{o} \cup \Gamma}(T)$ are completed, which would lead to a self-loop transition (lead to the newly added state $q^{obs}$, respectively), denoted by Step 2.b (Step 2.c, respectively). At any control state $q \in Q_{t}^{com}$, if there are no control commands defined in $P_{\Sigma_{o} \cup \Gamma}(T)$, then we shall complete the transition labelled as any control command $\gamma$, which would lead to the state $q^{\gamma}$, denoted by Step 2.d. Finally, for the states $q^{obs}$ and $q^{\gamma}$ ($\gamma \in \Gamma$), we shall follow the same construction procedure as that of $CE$ to encode the command-event execution phase, denoted by Steps 2.e - 2.g. Then we compute $NCS = NC||ONS = (Q_{ncs}, \Sigma \cup \Gamma, \xi_{ncs}, q_{ncs}^{init})$. By construction, we denote $Q_{ncs} = Q_{ncs}^{rea} \dot{\cup} Q_{ncs}^{com}$, where $Q_{ncs}^{rea}$ is the set of reaction states and $Q_{ncs}^{com}$ is the set of control states. Based on $NCS$, we shall generate a bipartite supervisor, denoted as $BT = (Q_{bt}, \Sigma \cup \Gamma, \xi_{bt}, q_{bt}^{init})$, where
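The completion steps 2.b and 2.c above are a purely mechanical totalization of the partial transition function at a reaction state. As a loose illustration only (Python, with an ad-hoc dictionary encoding of $\xi_{t}$ and state names that are ours, not the paper's):

```python
# Complete a partial transition map at one reaction state, mimicking
# Steps 2.b-2.c: undefined unobservable events become self-loops,
# undefined observable events are redirected to the fresh state "q_obs".
def complete_reaction_state(delta, q, unobs, obs):
    for e in unobs:
        delta.setdefault((q, e), q)        # Step 2.b: self-loop
    for e in obs:
        delta.setdefault((q, e), "q_obs")  # Step 2.c: go to q_obs
    return delta

# 'a' is observable and already defined; 'u' is unobservable; 'b' is
# observable but undefined, so it gets redirected.
delta = {("q0", "a"): "q1"}
complete_reaction_state(delta, "q0", unobs={"u"}, obs={"a", "b"})
```

Existing transitions are never overwritten (`setdefault`), which matches the requirement that Step 2.a retains all transitions of $P_{\Sigma_{o} \cup \Gamma}(T)$.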
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bt} = Q_{ncs} = Q_{ncs}^{rea} \dot{\cup} Q_{ncs}^{com}$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{bt})(\forall \sigma \in \Sigma)\xi_{ncs}(q, \sigma) = q' \Rightarrow \xi_{bt}(q, \sigma) = q'$
\item For any control state $q \in Q_{ncs}^{com}$, we randomly pick a control command $\gamma \in En_{NCS}(q)$ and define that: for any reaction state $q' \in Q_{ncs}^{rea}$, if $\xi_{ncs}(q, \gamma) = q'$, then $\xi_{bt}(q, \gamma) = q'$ and for any control command $\gamma' \in En_{NCS}(q) - \{\gamma\}$, we have $\neg\xi_{bt}(q, \gamma')!$.
\end{enumerate}
\item $q_{bt}^{init} = q_{ncs}^{init}$
\end{enumerate}
Finally, we generate the automaton $Ac(BT)$. For convenience, we shall still denote $Ac(BT)$ as $BT$. Next, we prove that $BT$ is a control equivalent and resilient bipartite supervisor, and $t \in L(BT)$. We have the following two facts for $ONS$: 1) at any control state, there is at least one control command defined, which would lead to a reaction state, and 2) at any reaction state which is reached from a control state by a transition labelled as a control command $\gamma$, all the events in $\gamma$ are defined and any unobservable event would lead to a self-loop and any observable event would lead to a control state. Based on the construction of $NC$, it can be checked that the facts 1) and 2) also hold for $NC$. Since $NCS = NC||ONS$, the facts 1) and 2) also hold for $NCS$. According to the construction of $BT$, where we only define one control command at each control state, we know that the fact 2) holds for $BT$, and there is only one control command defined at any control state of $BT$. Thus, $BT$ is consistent with a bipartite supervisor structure.
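Step 2.b of the construction of $BT$ simply keeps a single enabled control command per control state. A rough sketch (again Python with our own dictionary encoding; for determinism we keep the first command in sorted order rather than a random pick, which is one valid choice among those the construction allows):

```python
# Keep exactly one control command per control state; all transitions
# labelled by plant events (not in `commands`) are retained unchanged.
def prune_commands(delta, control_states, commands):
    pruned, chosen = {}, {}
    for (q, e), r in sorted(delta.items()):
        if q in control_states and e in commands:
            if chosen.setdefault(q, e) != e:
                continue  # a command was already kept at state q
        pruned[(q, e)] = r
    return pruned

# Control state "c0" enables two commands; only one survives.
delta = {("c0", "g1"): "r1", ("c0", "g2"): "r2", ("r1", "a"): "c1"}
bt = prune_commands(delta, control_states={"c0"}, commands={"g1", "g2"})
```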
Next, we first prove that $BT$ is control equivalent to $S$. We proceed by contradiction and assume that $BT$ is not control equivalent to $S$. Based on \textbf{Proposition III.2}, we have $L(BT) \not\subseteq L(BPNS)$. Since $L(BT) \subseteq L(NCS)$ and $NCS = NC||ONS$, we have $L(BT) \subseteq L(ONS)$. Based on \textbf{Proposition IV.3}, we have $L(BT) \subseteq L(ONS) \subseteq L(BPNS)$, which causes the contradiction. Hence, the assumption does not hold and $BT$ is control equivalent to $S$.
Secondly, we prove $BT$ is resilient. We proceed by contradiction and assume that $BT$ is not resilient. We denote the version of $BT$ under attack as $BT^{A}$, whose construction procedure is given in Section \ref{subsubsec:Supervisor}. Clearly, we have $L(BT^{A}) \subseteq L(S_{0}^{A})$ as $L(BT) \subseteq L(ONS) \subseteq L(S_{0})$. Since $BT$ is not resilient, we know that there exists an attacker $\mathcal{A} \in \mathscr{A}(BT)$ and a string $t \in L(BT^{A}) \subseteq L(S_{0}^{A})$ such that $t \in L_{m}(G||CE^{A}||BT^{A}||\mathcal{A})$. Based on \textbf{Proposition IV.2}, we have $t \in L_{m}(\mathcal{P}) = L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$. Due to the synthesis procedure in Step 3 of \textbf{Procedure 2}, we know that $t \notin L(S_{0}^{A})$, which causes the contradiction. Thus, the assumption does not hold and $BT$ is resilient.
Finally, we show that $t \in L(BT)$. By construction, we know that $t \in L(NC)$. Since $t \in$ RHS $= L(ONS)$ and $NCS = NC||ONS$, we have $t \in L(NCS)$. We proceed by contradiction and assume that $t \notin L(BT)$. In the construction of $BT$ from $NCS$, we only remove control commands at those control states of $NCS$ where more than one control command is defined. Thus, we know that there exist $t' \leq t$ and $\gamma \in \Gamma$ such that 1) $t'\gamma \leq t$, 2) $|En_{NCS}(\xi_{ncs}(q_{ncs}^{init}, t'))| \geq 2$, and 3) we do not pick the control command $\gamma$ at the control state $\xi_{ncs}(q_{ncs}^{init}, t')$ when we construct $BT$. However, due to the construction of $NC$ and $NCS$, there is only one control command defined at the state $\xi_{ncs}(q_{ncs}^{init}, t')$, which means that we are supposed to retain the control command $\gamma$ at that state, and this causes the contradiction. Thus, the assumption does not hold and $t \in L(BT)$.
Based on the above analysis, $BT \in \mathscr{S}_{e}^{r}(S)$, and $t \in L(BT) \subseteq$ LHS. Thus, RHS $\subseteq$ LHS, which completes the proof. \hfill $\blacksquare$
\section{Proof of Proposition IV.4}
\label{appendix: Proposition IV.4}
Firstly, we have the following two facts: 1) at any reachable control state, only one control command in $\Gamma$ is defined, and such a transition would lead to a reaction state, and 2) at any reachable reaction state, which is reached from a control state via a transition labelled by $\gamma \in \Gamma$, all the events in $\gamma$ are defined, and any event in $\gamma \cap \Sigma_{uo}$ is a self-loop transition and any event in $\gamma \cap \Sigma_{o}$ would lead to a control state. Thus, $OS$ is consistent with a bipartite supervisor structure. Secondly, we prove it is control equivalent to $S$. We proceed by contradiction and assume that $OS$ is not control equivalent to $S$. Based on \textbf{Proposition IV.3}, we have $L(OS) \subseteq L(ONS) \subseteq L(BPNS)$. Based on \textbf{Proposition III.2}, we have $L(OS) \not\subseteq L(BPNS)$, which causes the contradiction. Thus, the assumption does not hold and $OS$ is control equivalent to $S$. Thirdly, we prove that $OS$ is resilient. We proceed by contradiction and assume that $OS$ is not resilient. Then we know that there exists a covert and damage-reachable actuator attacker $\mathcal{A}$ such that $L_{m}(G||CE^{A}||OS^{A}||\mathcal{A}) \neq \varnothing$, where $OS^{A}$ is the attacked version of $OS$, whose construction procedure is presented in Section \ref{subsubsec:Supervisor}. Without loss of generality, we assume that $t \in L_{m}(G||CE^{A}||OS^{A}||\mathcal{A})$, which implies that $t \in L(OS^{A})$. Based on \textbf{Theorem IV.2}, we have $t \in L_{m}(\mathcal{P}) = L_{m}(G||CE^{A}||BPNS^{A}||\hat{\mathcal{A}})$. In addition, we have $L(OS^{A}) \subseteq L(S_{0}^{A})$ as $L(OS) \subseteq L(S_{0})$. Thus, $t \in L(OS^{A}) \subseteq L(S_{0}^{A})$. However, due to Step 3 of \textbf{Procedure 2}, we have $L_{m}(\mathcal{P}) \cap L(S_{0}^{A}) = \varnothing$, implying that $t \notin L(S_{0}^{A})$, which causes the contradiction.
Hence, the assumption does not hold and $OS$ is resilient, which completes the proof. \hfill $\blacksquare$
\end{appendices}
Home Real Housewives of New Jersey News PHOTOS – Meet RHONJ Star Teresa Giudice's Rumored Boyfriend Shane Wierks! Plus He Speaks Out
PHOTOS – Meet RHONJ Star Teresa Giudice's Rumored Boyfriend Shane Wierks! Plus He Speaks Out
by Sola Delano February 15, 2018
For months now, the rumors and whispers have gotten louder when it comes to the allegations that Teresa Giudice has a boyfriend on the side, while her husband Joe Giudice remains incarcerated.
We had previously decided not to publish the identity of Shane Wierks, Teresa's rumored beau, until today, because Shane is now speaking out on the record, addressing the rumors of an alleged romance with the Real Housewives of New Jersey star.
But first, sources continue to insist that Teresa, 45, and Shane, 32, are more than just friends in the latest print issue of Life and Style magazine (currently out on newsstands).
"Teresa publicly claims she and Shane are just friendly, but no one close to her believes her," a source tells the mag. "She'd been talking about finding a man, and now it seem she's found one."
For starters, just who is Shane? Well, the single dad and successful businessman happens to be a very close friend of both Joe and Melissa Gorga. And for that reason, Teresa is able to openly hang around Shane quite a bit, since he's a family friend. For example, above is a photo of Shane supporting Teresa at a book signing last October, and below is a photo of Shane hanging out with Teresa and the Gorgas.
Shane Wierks hanging out with the Gorgas and Teresa in 2017.
In case you're wondering, Shane is the same guy Kim DePaola was referring to when she accused Teresa of having an affair on the latest season of the RHONJ. Kim claimed back then that both Joe [Gorga] and Melissa had knowledge of the affair and were helping Teresa cover it up.
It was also reported last month that Teresa even brought Shane on her recent vacation to Cancun… as a friend of the family. Indeed when you look at both Shane and Teresa's Instagram pages, you can see they were both in Cancun to ring in the New Year. And many don't believe that was simply a coincidence following rumors for about a year now that they are secretly seeing each other.
"Teresa felt she could trust Shane," reveals the source who doesn't believe Teresa's denials about the affair adding, "because he knows her family."
The affair allegations came back to light after Teresa and Shane vacationed in Mexico at the same time.
So what is Shane saying about these allegations? Well he tells Life & Style there is nothing going on between him and Teresa, adding that she "has always been a friend and nothing more."
Sources however state that Joe, who is due to be released in March 2019, will blow a fuse "when he hears about Shane."
Below are more photos of Shane and Teresa.
Teresa and Shane pose with Melissa and Joe in December 2017
Shane and Teresa hang out with the Gorgas in July 2017
Photos Credit: Instagram
TELL US – THOUGHTS ON SHANE'S DENIAL? DO YOU BELIEVE THEY ARE JUST FRIENDS?
Q: Bitmap gets mangled when converted from a System.Drawing.Bitmap object to cv::Mat I have a WPF application that takes a screenshot of the running Handbrake executable using a class called ScreenCapture that I copied from Stack Overflow.
public class ScreenCapture
{
[DllImport("user32.dll")]
static extern int GetWindowRgn(IntPtr hWnd, IntPtr hRgn);
//Region Flags - The return value specifies the type of the region that the function obtains. It can be one of the following values.
const int ERROR = 0;
const int NULLREGION = 1;
const int SIMPLEREGION = 2;
const int COMPLEXREGION = 3;
[DllImport("user32.dll")]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool GetWindowRect(HandleRef hWnd, out RECT lpRect);
[DllImport("gdi32.dll")]
static extern IntPtr CreateRectRgn(int nLeftRect, int nTopRect, int nRightRect, int nBottomRect);
[DllImport("user32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool PrintWindow(IntPtr hwnd, IntPtr hDC, uint nFlags);
[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
public int Left, Top, Right, Bottom;
public RECT(int left, int top, int right, int bottom)
{
Left = left;
Top = top;
Right = right;
Bottom = bottom;
}
public RECT(System.Drawing.Rectangle r) : this(r.Left, r.Top, r.Right, r.Bottom) { }
public int X
{
get { return Left; }
set { Right -= (Left - value); Left = value; }
}
public int Y
{
get { return Top; }
set { Bottom -= (Top - value); Top = value; }
}
public int Height
{
get { return Bottom - Top; }
set { Bottom = value + Top; }
}
public int Width
{
get { return Right - Left; }
set { Right = value + Left; }
}
public System.Drawing.Point Location
{
get { return new System.Drawing.Point(Left, Top); }
set { X = value.X; Y = value.Y; }
}
public System.Drawing.Size Size
{
get { return new System.Drawing.Size(Width, Height); }
set { Width = value.Width; Height = value.Height; }
}
public static implicit operator System.Drawing.Rectangle(RECT r)
{
return new System.Drawing.Rectangle(r.Left, r.Top, r.Width, r.Height);
}
public static implicit operator RECT(System.Drawing.Rectangle r)
{
return new RECT(r);
}
public static bool operator ==(RECT r1, RECT r2)
{
return r1.Equals(r2);
}
public static bool operator !=(RECT r1, RECT r2)
{
return !r1.Equals(r2);
}
public bool Equals(RECT r)
{
return r.Left == Left && r.Top == Top && r.Right == Right && r.Bottom == Bottom;
}
public override bool Equals(object obj)
{
if (obj is RECT)
return Equals((RECT)obj);
else if (obj is System.Drawing.Rectangle)
return Equals(new RECT((System.Drawing.Rectangle)obj));
return false;
}
public override int GetHashCode()
{
return ((System.Drawing.Rectangle)this).GetHashCode();
}
public override string ToString()
{
return string.Format(System.Globalization.CultureInfo.CurrentCulture, "{{Left={0},Top={1},Right={2},Bottom={3}}}", Left, Top, Right, Bottom);
}
}
public Bitmap GetScreenshot(IntPtr ihandle)
{
IntPtr hwnd = ihandle;//handle here
RECT rc;
GetWindowRect(new HandleRef(null, hwnd), out rc);
Bitmap bmp = new Bitmap(rc.Right - rc.Left, rc.Bottom - rc.Top, PixelFormat.Format32bppArgb);
Graphics gfxBmp = Graphics.FromImage(bmp);
IntPtr hdcBitmap;
try
{
hdcBitmap = gfxBmp.GetHdc();
}
catch
{
return null;
}
bool succeeded = PrintWindow(hwnd, hdcBitmap, 0);
gfxBmp.ReleaseHdc(hdcBitmap);
if (!succeeded)
{
gfxBmp.FillRectangle(new SolidBrush(Color.Gray), new Rectangle(Point.Empty, bmp.Size));
}
IntPtr hRgn = CreateRectRgn(0, 0, 0, 0);
GetWindowRgn(hwnd, hRgn);
Region region = Region.FromHrgn(hRgn);//err here once
if (!region.IsEmpty(gfxBmp))
{
gfxBmp.ExcludeClip(region);
gfxBmp.Clear(Color.Transparent);
}
gfxBmp.Dispose();
return bmp;
}
public void WriteBitmapToFile(string filename, Bitmap bitmap)
{
bitmap.Save(filename, ImageFormat.Bmp);
}
}
So when the button click handler below is called, a screenshot of the Handbrake window is taken.
I write it to the hard drive to make sure it's OK:
handbrake screen shot.
I create an instance of a CLR class library ClassLibrary1::Class1 and call the method "DoSomething" passing it the System.Drawing.Bitmap object.
private void button4_Click(object sender, RoutedEventArgs e)
{
string wName = "HandBrake";
IntPtr hWnd = IntPtr.Zero;
foreach (Process pList in Process.GetProcesses())
{
if (pList.MainWindowTitle.Contains(wName))
{
hWnd = pList.MainWindowHandle;
var sc = new ScreenCapture();
SetForegroundWindow(hWnd);
var bitmap = sc.GetScreenshot(hWnd);
sc.WriteBitmapToFile("handbrake.bmp", bitmap);
Bitmap image1 = (Bitmap)System.Drawing.Image.FromFile("handbrake.bmp", true);
ClassLibrary1.Class1 opencv = new ClassLibrary1.Class1();
opencv.DoSomething(image1);
}
}
}
Inside DoSomething I attempt to convert the System.Drawing.Bitmap to an OpenCV cv::Mat. I call cv::imwrite to make sure the bitmap is still OK; unfortunately something's gone wrong: mangled handbrake screenshot
void Class1::DoSomething(Bitmap ^mybitmap)
{
cv::Mat *imgOriginal;
// Lock the bitmap's bits.
Rectangle rect = Rectangle(0, 0, mybitmap->Width, mybitmap->Height);
Imaging::BitmapData^ bmpData = mybitmap->LockBits(rect, Imaging::ImageLockMode::ReadWrite, mybitmap->PixelFormat);
try
{
// Get the address of the first line.
IntPtr ptr = bmpData->Scan0;
// Declare an array to hold the bytes of the bitmap.
// This code is specific to a bitmap with 24 bits per pixels.
int bytes = Math::Abs(bmpData->Stride) * mybitmap->Height;
array<Byte>^rgbValues = gcnew array<Byte>(bytes);
// Copy the RGB values into the array.
System::Runtime::InteropServices::Marshal::Copy(ptr, rgbValues, 0, bytes);
imgOriginal = new cv::Mat(mybitmap->Height, mybitmap->Width, CV_8UC3, (void *)ptr, std::abs(bmpData->Stride));
}
finally { mybitmap->UnlockBits(bmpData); }//Remember to unlock!!!
cv::imwrite("from_mat.bmp", *imgOriginal);
}
Can anybody spot my error?
A: Since your image is stretched horizontally, I'm betting that you have the wrong pixel format selected. (It's not stretched vertically, nor skewed diagonally, so the stride is correct.) CV_8UC3 specifies 24 bits per pixel, but I think that your BMP file is using 32 bits per pixel.
Switch your pixel format to CV_8UC4, or better yet, read the number of bits per pixel from the image and select the correct CV format based on that.
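The answer's suggestion (derive the OpenCV type from the image's bits per pixel rather than hard-coding it) amounts to a small lookup. This is an illustrative Python sketch, not C++/CLI, and the function name is invented:

```python
# Map a bitmap's bits-per-pixel to the matching 8-bit OpenCV matrix type:
# 24 bpp -> 3 channels (CV_8UC3), 32 bpp -> 4 channels (CV_8UC4).
def cv_type_for(bits_per_pixel):
    channels = {8: 1, 24: 3, 32: 4}.get(bits_per_pixel)
    if channels is None:
        raise ValueError(f"unsupported format: {bits_per_pixel} bpp")
    return f"CV_8UC{channels}"

print(cv_type_for(32))  # a 32bppArgb screenshot needs CV_8UC4
```

In the question's code the screenshot is created with `PixelFormat.Format32bppArgb` (32 bpp), which is why `CV_8UC3` reads each row with the wrong element size and produces the horizontal stretching.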
Side note: Since you're doing sc.WriteBitmapToFile() followed by opencv.DoSomething(Image.FromFile(), the entire bit about how you're capturing the screenshot is irrelevant. You're reading the bitmap from a file; that's all that matters.
---
layout: gallery_page
title: people
front_image_url: "https://drive.google.com/file/d/0B1V5a-ykS8okSUZkQmxYaFFxeEE/view?usp=sharing"
drive_link: "https://drive.google.com/folderview?id=0B1V5a-ykS8oka0FHREJsODlJZGs&usp=sharing"
---
When it comes to a multi-tasking tile it's hard to beat good-looking, practical and budget-friendly porcelain and ceramic tiles. Our Tileworks collection is already home to a wide range of products that are suitable for kitchens, bathrooms, hallways and even outside, whether you crave a simple aesthetic, handy wood effect tiles or textured and decorative effects. To complement the existing range, we have just launched a selection of new products that we're sure you will love.
One of the most popular additions so far has been the 'Mont Blanc' collection for walls. A versatile selection of glossy, ceramic tiles in Blue, White, Anthracite or Pearl, these tiles feature subtle variation which makes for a really interesting finish. Available in two large sizes as well as in a patchwork and mock-mosaic effect, the range has all you need to create a show stopping bathroom or kitchen scheme.
You may have noticed a move towards including more adventurous textures in the home. We've seized on this with the inclusion of our 'Form' textured, glazed ceramic tiles. Using textured tiles in bright white can be a contemporary way of adding depth without sacrificing a pale colour scheme. What's more - the patterns on the face of the tile change depending how they are lit, so it's an opportunity to get creative.
Also joining the ranks of existing Tileworks products are our 'Arbo' wood effect porcelain tiles (shown top.) Available in both Carvalo natural and textured slip resistant formats, these can be used inside and out as well as in areas such as pool sides. These realistic planks can be laid in the usual way, or in a herringbone effect for a chic finish.
Jana Burčeska, born 6 July 1993 in Skopje, is a Macedonian singer. She represented Macedonia in the Eurovision Song Contest 2017 in Kiev with the song "Dance Alone". Burčeska has also taken part in the Macedonian version of Idol, where she finished in fifth place.
Women
Born 1993
People from Skopje
Living people
Macedonian singers
Participants in the Eurovision Song Contest 2017
Artists who have represented Macedonia in the Eurovision Song Contest
Release today | Are there parrots on Java?
The last few days, I was sick, so I couldn't work on the drone. This postponed the release from Wednesday to Saturday.
But the good news is that the new version is already complete. It will be released later this evening.
This entry was posted on Saturday, May 21st, 2011 at 12:14 pm and is filed under Uncategorized.
Q: Create subcategory select box onChange I am creating a category system where users can select a category from the DB; after they select one, another select box is created with the subcategories of that category.
So, my question is how can I do it the best way?
BTW, I am using the Laravel framework, and the first category select is simple:
<select>
@foreach(Category::all() as $k)
<option value="{{ $k['id'] }}">{{ $k['name'] }}</option>
@endforeach
</select>
But what should I do after they pick a category? Is it better to do an AJAX call that sends the ID of the picked category and returns the subcategories, or what?
I need the best and most professional way to do this.
In my Database I have
ID, name, parent
A: Populate a dropdown on selecting an option from another dropdown Laravel
This might surely help you. Otherwise ask if you do not understand
A: Use ajax, after selecting the category send the ajax request and to do this you need to use change event on your select, for example:
// Assumed category is id of the select
$('#category').on('change', function(){
var id = $(this).val();
$.getJSON("subcategory/" + id , function(data){
// Assumed subcategory is id of another select
var subcat = $('#subcategory').empty();
$.each(data, function(k, v){
var option = $('<option/>', {value: k, text: v});
subcat.append(option);
});
});
});
On the server side, create a route like this (You may use a controller and Eloquent):
Route::get('subcategory/{id}', function($id){
// Get the data from database according to the id
// Build an array as: id => value and then return
return Response::json($subcat);
});
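Whatever the framework, the endpoint's job reduces to producing an id-to-name map of the rows whose parent column matches the selected category, serialized as JSON. A framework-neutral sketch (Python here, purely for illustration, using the question's ID/name/parent schema):

```python
# Rows follow the question's schema: (id, name, parent).
ROWS = [
    (1, "Electronics", None),  # top-level category
    (2, "Phones", 1),          # subcategory of Electronics
    (3, "Laptops", 1),
]

def subcategories(rows, parent_id):
    """Return the {id: name} map the AJAX handler should serialize as JSON."""
    return {rid: name for rid, name, parent in rows if parent == parent_id}

print(subcategories(ROWS, 1))  # {2: 'Phones', 3: 'Laptops'}
```

The jQuery success handler then just iterates over that map to build the `<option>` elements of the second select.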
A: 1. select_cat.php
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript">
$(document).ready(function()
{
$(".category").change(function()
{
var id=$(this).val();
var dataString = 'id='+ id;
$.ajax
({
type: "POST",
url: "select_subcat.php",
data: dataString,
cache: false,
success: function(html)
{
$(".subcat").html(html);
}
});
});
});
</script>
Category :
<select name="category" class="category">
<option selected="selected">--Select Category--</option>
<?php
include('databasefile');
mysql_connect($server,$username,$password)or die(mysql_error());
mysql_select_db($database)or die(mysql_error());
$sql=mysql_query("select cat_name from category order by cat_name");
while($row=mysql_fetch_array($sql))
{
$cname=$row['cat_name'];
echo '<option value="'.$cname.'">'.$cname.'</option>';
} ?>
</select> <br/><br/>
SubCategory :
<select name="subcat" class="subcat">
<option selected="selected">--Select SubCat--</option>
</select>
2. select_subcat.php
<?php
include('databasefile');
mysql_connect($server,$username,$password)or die(mysql_error());
mysql_select_db($database)or die(mysql_error());
if($_POST['id'])
{
$id=$_POST['id'];
$sql=mysql_query("select s_name from subcat_l1 where cat_name='$id'");
while($row=mysql_fetch_array($sql))
{
$sname=$row['s_name'];
echo '<option value="'.$sname.'">'.$sname.'</option>';
}
}
?>
SubCategory :
<select name="subcat" class="subcat">
<option selected="selected">--Select SubCat--</option>
</select>
Most 2020 Mini Cooper models get $1,500 price bump, some get huge power bump
For the diminutive Mini brand, the wild child's name is always the same: John Cooper Works. Regardless of model, John Cooper Works editions mean more power, stiffer handling, more fun.
For 2020, the Mini Cooper John Cooper Works Clubman and Countryman All4 models get a power bump from their turbo-4 engines. Power is boosted beyond 300 horsepower—301 hp actually, up from 228 hp last year—to push the small wagon or tall crossover to 60 mph in about 5 seconds.
On Monday, Mini outlined the changes to its Cooper Clubman, Countryman, Hardtop 2-Door and 4-Door, and Convertible models for the 2020 model year. Aside from modest equipment changes and a price increase of $1,500 across most models, the updates include a 7- or 8-speed automatic that will replace last year's 6-speed automatic (the 8-speed automatic is standard on all-wheel-drive models), standard automatic emergency braking across all cars, and some package shuffling.
Mini's all-electric Cooper is due to arrive next year, but this year's plug-in hybrid Countryman S E All4 gets a bigger battery for 16 miles of electric range, up from 12 miles last year.
The 2020 Mini Cooper Clubman received a light update, mostly new front and rear bumpers. Last year's base version is gone, making the Cooper S Clubman the entry wagon at $31,750, including destination charges. All-wheel drive is a $2,000 option.
The 2020 Mini Cooper Hardtop 2-Door is still the most affordable Mini and will cost $24,250, an increase of $1,500 over last year. Mini upgraded the automatic transmission to a 7-speed gearbox, up from a 6-speed last year. Mini said it will still offer a 6-speed manual transmission as standard equipment on Cooper 2-Door, 4-Door, and Convertible models, but that those versions will be late to arrive in the U.S. A spokesman for Mini didn't immediately comment on when those cars would be available in the U.S.
2020 Mini Cooper Countryman S E All4
Mini's crossover, the Cooper Countryman will start at $29,250 for base models, an increase of $1,500 over last year. All Countryman models will use an automatic transmission; a 7-speed is standard on front-drive models while an 8-speed automatic is standard on all-wheel-drive crossovers.
Those wild John Cooper Works versions of the Countryman and Clubman models also get a suspension lowered by 0.4 inches, bigger brakes, and larger exhaust pipes to handle their increased power. John Cooper Works Clubman All4 models cost $40,250—an increase of $3,500 over last year—and John Cooper Works Countryman All4 models cost $42,250—an increase of $3,600 over last year.
package org.jaxen.test;
import org.jaxen.Navigator;
import org.jaxen.xom.DocumentNavigator;
import nu.xom.Builder;
public class XOMNavigatorTest extends XPathTestBase
{
private Builder builder = new Builder();
public XOMNavigatorTest(String name)
{
super( name );
}
public Navigator getNavigator()
{
return new DocumentNavigator();
}
public Object getDocument(String url) throws Exception
{
return this.builder.build( url );
}
}
\section{Introduction}
In the seminal papers of Refs.\cite{drukier,freese} it was
pointed out that the Earth's
motion around the Sun
can produce a sizeable
annual modulation of the signal in experiments of direct search for heavy
relic particles.
Actually, the analysis of a new set of data, recently collected
by the DAMA/NaI Collaboration (in the period denoted by the Collaboration
as running period \# 2) \cite{dama2} supports the possible presence of an
annual
modulation effect in the counting rate for WIMPs: the hypothesis of
presence of modulation against the hypothesis of absence of modulation
is statistically favoured at 98.5\% C.L.
Remarkable features of this measurement, obtained with an exposure of 14,962
kg $\times$ day, are:
i) An analysis of the experimental data, based on a maximum likelihood method,
pins down, at a 2--$\sigma$ C.L., a well
delimited region in the plane
$\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ -- $m_\chi$,
where $m_\chi$ is the WIMP mass,
$\sigma^{(\rm nucleon)}_{\rm scalar}$ is the WIMP--nucleon scalar elastic
cross section and $\xi = \rho_\chi / \rho_l$
is the fractional amount of local
WIMP density $\rho_\chi$ with respect to the total local
dark matter density $\rho_l$. This
$\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ -- $m_\chi$ modulation
region is shown in Fig. 1,
which is reproduced here from Fig. 6 of Ref.\cite{dama2} (the values of
$\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ plotted in Fig. 1 are normalized
to the value $\rho_l=0.3$ GeV cm$^{-3}$).
The ensuing 1--$\sigma$
ranges for the two quantities are: $m_{\chi} = 59_{- 14}^{+ 22}$ GeV and
$\xi \sigma^{(\rm nucleon)}_{\rm scalar} = 7.0_{-1.7}^{+0.4} \times 10^{-9}$ nb
\cite{dama2}.
ii) The new data confirm a previous indication of an annual modulation (at the
90\% C.L.) found by the same Collaboration, by using a smaller sample of data,
collected in the running period \# 1, with an exposure of 4,549 kg $\times$ day
\cite{dama1}. Most remarkably
the 2--$\sigma$ C.L. region from data of
Ref. \cite{dama2} is
entirely contained inside the 90\% C.L. region derived from data of Ref.
\cite{dama1}, also shown in Fig. 1
(the open solid curve
denotes the 90\% C.L. upper bound derived in Ref.\cite{damapsa}, by
using pulse shape analysis).
iii) Because of the property at point ii),
when the data of the two running periods (with a total exposure of
19,511 kg $\times$ day) are combined together, one obtains
a more delimited 2--$\sigma$ C.L. region in the plane
$\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ -- $m_\chi$,
which is fully embedded in the previous regions.
Consequently, the determination of
$m_\chi$ and $\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ remains very stable:
$m_{\chi} = 59_{- 14}^{+ 17}$ GeV and
$\xi \sigma^{(\rm nucleon)}_{\rm scalar} =
7.0_{-1.2}^{+0.4} \times 10^{-9}$ nb
(if $\rho_l$ is normalized to the value $\rho_l=0.3$ GeV cm$^{-3}$).
By combining the two sets of data, the hypothesis of presence of
modulation is favoured at the 99.6\% C.L.
It is noticeable that the two sets of data have been taken under different
operating conditions, since the experimental set--up was
dismounted and reassembled between the two running periods.
In extracting the contour lines of Fig. 1
from the experimental data, the values of some
astrophysical parameters
(the root mean square velocity
$v_{\rm rms}$ of the WIMP
Maxwellian velocity distribution in the halo,
the WIMP escape velocity $v_{\rm esc}$ in the halo,
the velocity $v_\odot$ of the Sun around the galactic centre),
relevant
for the event rates at the detector, had to be chosen.
The values adopted in Fig. 1
refer to the median values of these parameters in their
experimentally allowed ranges (reported, for instance, in \cite{limiti}),
namely:
$v_{\rm rms}$ = 270 km s$^{-1}$, $v_{\rm esc} = 650$ km s$^{-1}$,
$v_\odot = 232$ km s$^{-1}$.
In Refs. \cite{our1,our2} we derived the theoretical implications of
the experimental data of \cite{dama1}, assuming that the indication of the
possible annual modulation reported there was due to relic
neutralinos. We selected the relevant supersymmetric
configurations and discussed how these may be investigated by
indirect searches for relic WIMPs and at accelerators.
In the present paper we apply a
similar analysis to the new, much more significant set of data of Ref.
\cite{dama2} and we show that these data are fully compatible with an
interpretation in terms of a relic neutralino as the major component of dark
matter in the Universe. We pin down the regions of the supersymmetric
parameter space relevant for this neutralino and derive the implications for
search at accelerators.
A word of caution is in order here. As also remarked in Ref.
\cite{dama2}, although the new DAMA data appear to bring more evidence for a
possible annual modulation effect, first singled out in Ref.\cite{dama1},
this effect awaits further confirmation by additional experimental
investigation in WIMP direct detection \cite{cm}.
Actually, the DAMA/NaI Collaboration has already collected new data over the
past year; moreover, the experiment still keeps running under good stability
conditions \cite{dama2}
and is expected to provide increasingly significant statistics in
the future. Furthermore, it is remarkable that, as subsequently discussed in
the present paper, the supersymmetric configurations singled out by the
annual modulation effect are also explorable at accelerators and in terms
of indirect signals of relic neutralinos (i.e., in terms of antiprotons
in space and of up--going muons at neutrino telescopes).
\section{Supersymmetric model}
In this paper we consider the neutralino as a WIMP candidate, able to induce
annual modulation effects in direct particle dark matter searches.
This supersymmetric particle is defined
as the lowest--mass linear superposition of photino ($\tilde \gamma$),
zino ($\tilde Z$) and the two higgsino states
($\tilde H_1^{\circ}$, $\tilde H_2^{\circ}$) \cite{susy}
\begin{equation}
\chi \equiv a_1 \tilde \gamma + a_2 \tilde Z + a_3 \tilde H_1^{\circ}
+ a_4 \tilde H_2^{\circ}.
\label{eq:neu}
\end{equation}
We also define a parameter $P \equiv a_1^2 + a_2^2$ in terms of which we
classify neutralinos as: {\it gaugino--like} when $P > 0.9$,
{\it mixed} when $0.1 \leq P \leq 0.9$ and {\it higgsino--like} when $P < 0.1$.
As a theoretical framework we use the Minimal Supersymmetric extension of the
Standard Model (MSSM)\cite{susy}, which conveniently describes the
supersymmetric phenomenology at the electroweak scale, without too strong
theoretical assumptions. This model has been extensively used by a number of
authors for evaluations of the neutralino relic abundance and detection rates
(a list of references may be found, for instance, in
\cite{our1}).
The MSSM is based on the same gauge group as the Standard Model,
contains the supersymmetric extension of its particle content and
two Higgs doublets $H_1$ and $H_2$.
As a consequence, the MSSM contains three neutral Higgs fields: two of them
($h$, $H$)
are scalar and one ($A)$ is pseudoscalar.
At the tree level the Higgs sector is specified by two independent parameters:
the mass of one of the physical Higgs fields, which we choose to
be the mass $m_A$ of the neutral pseudoscalar boson, and the ratio of the
two vacuum expectation values, defined as $\tan\beta\equiv \langle H_2
\rangle/\langle H_1\rangle$.
Once radiative corrections are introduced, the Higgs sector depends
also on the squark masses through loop diagrams. The radiative corrections
to the neutral and charged Higgs bosons, employed in the present paper, are
taken from Refs. \cite{carena,haber}.
The other parameters of the model are defined in the superpotential,
which contains all the Yukawa interactions
and the Higgs--mixing term
$\mu H_1 H_2$, and in the soft--breaking
Lagrangian, which contains the trilinear and bilinear breaking
parameters and the soft gaugino and scalar mass terms.
The MSSM contains a large number of free parameters.
To cast it into a form adequate for phenomenology,
it is necessary to introduce a number of
restrictive assumptions at the electroweak scale.
The usual conditions, which are also employed here, are the following:
i) all trilinear parameters are set to zero except those of the third family,
which are unified to a common value $A$;
ii) all squarks and sleptons soft--mass parameters are taken as
degenerate: $m_{\tilde l_i} = m_{\tilde q_i} \equiv m_0$,
iii) the gaugino masses are assumed to unify at $M_{GUT}$, and this implies that
the $U(1)$ and $SU(2)$ gaugino masses are related at the electroweak scale by
$M_1= (5/3) \tan^2 \theta_W M_2$.
After these conditions are applied, the supersymmetric parameter space
consists of six independent parameters. We choose them to be:
$M_2, \mu, \tan\beta, m_A, m_0, A$ and vary these parameters in
the following ranges: $10\;\mbox{GeV} \leq M_2 \leq 500\;\mbox{GeV},\;
10\;\mbox{GeV} \leq |\mu| \leq 500\;\mbox{\rm GeV},\;
75\;\mbox{GeV} \leq m_A \leq 1\;\mbox{TeV},\;
100\;\mbox{GeV} \leq m_0 \leq 1\;\mbox{TeV},\;
-3 \leq A \leq +3,\;
1 \leq \tan \beta \leq 50$.
We remark that the values taken here as upper limits of the ranges for
the dimensional parameters, $M_2, \mu, m_0, m_A$, are inspired by the upper
bounds which may be
derived for these quantities in SUGRA theories, when one requires that the
electroweak symmetry breaking, radiatively induced by the soft supersymmetry
breaking, does not occur with excessive fine tuning
(see Ref. \cite{bbefms1} and references quoted therein).
Our supersymmetric parameter space is further constrained by
all the experimental limits obtained from accelerators on
supersymmetric and Higgs searches. Thus, the latest data from
LEP2 on Higgs, neutralino, chargino and sfermion masses are
used \cite{lep2,ichep}.
Moreover, the constraints
due to the $b \rightarrow s + \gamma$ process
(see, for instance,
Refs.\cite{bertolini,bg,garisto,borzumati,wu,barger})
have to be taken into
account. In our analysis, the inclusive decay rate
BR($B \rightarrow X_s \gamma$) is calculated with corrections up to
leading order. Next--to--leading order corrections
\cite{chetyrkin,ciuchini1,cza,ciuchini2}
are included only when
they can be applied in a consistent way, i.e. both to standard--model
and to susy diagrams. This criterion limits the use of
next--to--leading order corrections to peculiar regions of the supersymmetric
parameter space, where the assumptions, under which the next--to--leading order
susy corrections have been obtained, apply \cite{ciuchini2}.
We require that our theoretical evaluation for BR($B \rightarrow X_s \gamma$)
is within the range:
1.96 $\times 10^{-4} \leq$ BR($B \rightarrow X_s \gamma$) $\leq$ 4.32
$\times 10^{-4}$. This range is obtained by combining the experimental
data of Refs. \cite{glenn,barate} at 95\% C.L. and by adding a
theoretical uncertainty of 25\%, whenever the still incomplete
next--to--leading order susy corrections cannot be applied.
Since we are exploring here the neutralino as a stable dark matter candidate,
we have to further constrain the
parameter space by requiring that the neutralino is the Lightest
Supersymmetric Particle (LSP), i.e. we have to exclude regions where the
gluino or squarks or
sleptons are lighter than the neutralino. We also have to disregard
those regions of the parameter space where
the neutralino relic abundance exceeds the cosmological bound, derivable from
measurements of the age of the Universe \cite{age} and
of the Hubble constant \cite{hubble}.
Conservatively, for this cosmological bound we take
$\Omega_{\chi}h^2 \leq 0.7$
($h$ is the usual Hubble parameter, defined in terms of the present--day
value $H_0$ of the Hubble constant as
$h \equiv H_0/(100~$ km$~$s$^{-1}~$Mpc$^{-1})$).
The neutralino relic abundance is calculated here as illustrated in
Ref.\cite{ouromega}.
Inclusion of
coannihilation effects \cite{coannih}
in the calculation of $\Omega_{\chi} h^2$ is not necessary here,
since the instances under which these effects might be sizeable are
marginal in our supersymmetric parameter space.
\section{Selection of supersymmetric configurations by the annual modulation
effect}
We discuss now which region in the susy parameter space is selected by the
new DAMA modulation data \cite{dama2}.
Let us start by converting the region delimited by the 2--$\sigma$
C.L. dashed contour line
in the plane $\xi \sigma^{(\rm nucleon)}_{\rm scalar}$ -- $m_\chi$ of Fig. 1
into an enlarged one, which accounts for the uncertainty in the value of
$\rho_l$. If a possible flattening of the dark matter
halo \cite{turner1} and a possibly sizeable baryonic contribution to the
galactic dark matter \cite{turner} are taken into account, the following range
for $\rho_l$ has conservatively to be taken:
0.1 GeV cm$^{-3} \leq \rho_l \leq $ 0.7 GeV cm$^{-3}$. One then obtains from the
2--$\sigma$ C.L. region of Fig. 1, where the total dark matter density is
normalized to
the value $\rho_l=0.3$ GeV cm$^{-3}$, the relevant 2--$\sigma$ C.L.
region of Fig. 2 (hereafter denoted as region $R$).
Now we have to find which supersymmetric configurations, out of those in the
parameter space outlined in Sect. II, are selected by the requirement that
($m_{\chi}, \xi \sigma_{\rm scalar}^{(\rm nucleon)}) \in R$.
To this purpose we evaluate
$m_{\chi}$, $\sigma_{\rm scalar}^{(\rm nucleon)}$ and $\xi$ in the MSSM scheme
previously defined.
The neutralino mass is evaluated as usual by taking the lowest mass
eigenstate of the neutralino mass matrix \cite{susy}.
The neutralino--nucleon scalar cross--section is calculated with the formula
\beq
\sigma_{\rm scalar}^{(\rm nucleon)} = \frac {8 G_F^2} {\pi} M_Z^2 m_{\rm red}^2
\left[\frac{F_h I_h}{m_h^2}+\frac{F_H I_H}{m_H^2}+
\frac{M_Z}{2} \sum_q <N|\bar{q}q|N>
\sum_i P_{\tilde{q}_i} ( A_{\tilde{q}_i}^2- B_{\tilde{q}_i}^2) \right]^2,
\label{eq:sigma}
\eeq
\noindent
where the two first terms inside the brackets refer to the diagrams with
$h$-- and $H$--exchanges in the t--channel
(the $A$--exchange diagram is strongly kinematically suppressed and then omitted
here) \cite{barbieri} and the third term refers
to the graphs with squark--exchanges in the s-- and u--channels \cite{griest}.
The mass $m_{\rm red}$ is the neutralino--nucleon reduced mass and
\beqarr
F_h &=& (-a_1 \sin \theta_W+a_2 \cos \theta_W)
(a_3 \sin \alpha + a_4 \cos \alpha)
\nonumber \\
F_H &=& (-a_1 \sin \theta_W+a_2 \cos \theta_W)
(a_3 \cos \alpha - a_4 \sin \alpha) \nonumber \\
I_{h,H}&=&\sum_q k_q^{h,H} m_q \langle N|\bar{q} q |N \rangle.
\label{effe}
\eeqarr
The angle $\alpha$ rotates $H_1^{(0)}$ and $H_2^{(0)}$ into $h$ and $H$, and
the coefficients $k_q^{h,H}$ are given by
$k_{u{\rm -type}}^h = \cos\alpha/\sin\beta$ and
$k_{u{\rm -type}}^H = -\sin\alpha/\sin\beta$ for the up--type quarks,
and by $k_{d{\rm -type}}^h = -\sin\alpha/\cos\beta$ and
$k_{d{\rm -type}}^H = -\cos\alpha/\cos\beta$ for the down--type quarks.
The matrix elements $<N|\bar q q|N>$ are taken between nucleon states.
By using the heavy quark expansion \cite{svz}, one may rewrite the quantity
$I_{h,H}$ as follows
\beq
I_{h,H} = k_{u{\rm -type}}^{h,H} g_u + k_{d{\rm -type}}^{h,H} g_d,
\label{eq:i}
\eeq
\noindent
where
\beq
g_u = \frac {4} {27} (m_N + \frac {19}{8} \sigma_{\pi N}
- a \sigma_{\pi N}),~~~~~g_d = \frac{2}{27} (m_N + \frac{23}{4} \sigma_{\pi N}
+ \frac{25}{2} a \sigma_{\pi N}).
\eeq
\noindent
Here $\sigma_{\pi N}$ is the so--called pion--nucleon sigma term,
$\sigma_{\pi N} = \frac{1}{2} (m_u + m_d) <N|\bar uu + \bar dd|N>$, and the
parameter $a$ is related to the strange--quark content of the nucleon $y$ by
\beq
a = y \frac{m_s}{m_u + m_d},~~~~y=2 \frac{<N|\bar ss|N>}
{<N|\bar uu+ \bar dd|N>}.
\eeq
\noindent
For these parameters we use the following values:
$\sigma_{\pi N} = 45$ MeV \cite{gasser}, $y = 0.33 \pm 0.09$ \cite{liu} and
$2 m_s/(m_u + m_d) = 29$ \cite{bj}; thus, using the central value of $y$,
we obtain $g_u = 123$ MeV and $g_d = 288$ MeV.
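As a quick numerical cross-check of the quoted coefficients (a sketch, not part of the original analysis; all quantities are in MeV, and the nucleon mass $m_N \simeq 939$ MeV is an input not stated explicitly above):

```python
# Cross-check of g_u and g_d from the heavy-quark expansion (values in MeV).
m_N = 939.0        # nucleon mass (assumed average value, not quoted in the text)
sigma_piN = 45.0   # pion-nucleon sigma term
y = 0.33           # central value of the strange-quark content of the nucleon
a = y * 29.0 / 2.0 # a = y * m_s/(m_u+m_d), using 2 m_s/(m_u+m_d) = 29

g_u = (4.0 / 27.0) * (m_N + (19.0 / 8.0) * sigma_piN - a * sigma_piN)
g_d = (2.0 / 27.0) * (m_N + (23.0 / 4.0) * sigma_piN + (25.0 / 2.0) * a * sigma_piN)
print(round(g_u), round(g_d))  # 123 288
```

This reproduces the quoted values $g_u = 123$ MeV and $g_d = 288$ MeV to the precision given.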
In the squark--exchange terms of Eq.(\ref{eq:sigma}) $\sum_i$ denotes a sum
over the mass eigenstates, $P_{\tilde q_i}$ stands for the squark propagators
\beq
P_{\tilde{q}_i}=\frac{1}{2}\left(
\frac{1}{ m_{\tilde{q}_i}^2-(m_{\chi}-m_q)^2}+
\frac{1}{m_{\tilde{q}_i}^2-(m_{\chi}+m_q)^2}
\right),
\label{eq:prop}
\eeq
\noindent
and the $A_{\tilde q_i}$ and $B_{\tilde q_i}$ coefficients are given by
\beqarr
A_{\tilde{q_1}}&=&\cos \theta_q (X_q+Z_q)+\sin \theta_q (Y_q+Z_q) \nonumber \\
B_{\tilde{q_1}}&=&\cos \theta_q (X_q-Z_q)+\sin \theta_q (Z_q-Y_q) \nonumber \\
X_q&=&-\left( \cos \theta_W T_{3q}a_2+\sin \theta_W \frac{Y_{qL}}{2}a_1 \right)
\;\;\;\; ; \;\;\;\; Y_q\;=\;\sin \theta_W \frac{Y_{qR}}{2}a_1 \nonumber \\
Z_{u{\rm -type}}&=&-\frac{m_{u{\rm -type}} a_4}{2 \sin\beta M_Z}
\;\;\;\; ; \;\;\;\;
Z_{d{\rm -type}}=-\frac{m_{d{\rm -type}} a_3}{2 \cos\beta M_Z},
\eeqarr
where $T_{3q}$, $Y_{qL}$, $Y_{qR}$ refer to the isospin and to the
hypercharge quantum
numbers of $\tilde{q}_{L,R}$,
respectively.
The couplings $A_{\tilde{q_2}}$ and $B_{\tilde{q_2}}$ may be obtained with
the substitution
$\sin \theta_q$ $\rightarrow$ $\cos \theta_q$ and
$\cos \theta_q$ $\rightarrow$ $-\sin \theta_q$.
In our numerical applications the squark propagators in Eq.(\ref{eq:prop})
have been regularized by inserting appropriate widths in the denominators.
In general, it turns out that the Higgs--exchange
amplitudes are largely dominant over the squark--exchange ones, the
latter competing with the former almost exclusively when an
enhancement
in their size originates from a mass fine--tuning in the squark--propagator
denominators.
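A minimal sketch of how the regularization of the squark propagators mentioned above can be implemented (the Breit--Wigner-style width $\Gamma$ below is an illustrative choice, not a value fixed in the text; masses in GeV):

```python
# Regularized squark propagator P of Eq. (propagator): a finite width gamma is
# inserted in each denominator, m^2 - delta^2 + i*m*gamma, and the real part
# is kept. Far from the poles this reduces to the unregularized expression.
def squark_propagator(m_sq, m_chi, m_q, gamma=1.0):
    def term(delta):
        d = complex(m_sq**2 - delta**2, m_sq * gamma)
        return (1.0 / d).real
    return 0.5 * (term(m_chi - m_q) + term(m_chi + m_q))

# Far from the pole the width is irrelevant:
p = squark_propagator(500.0, 60.0, 5.0, gamma=1e-6)
```

Near $m_{\tilde q_i}^2 \simeq (m_\chi \pm m_q)^2$ the width keeps the amplitude finite instead of divergent, which is the point of the regularization.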
As for the values to be assigned to the quantity $\xi = \rho_{\chi}/ \rho_l$
we adopt the standard rescaling recipe \cite{gaisser}.
For each point of the parameter
space, we take into account the relevant value of the cosmological neutralino
relic density. When $\Omega_\chi h^2$ is larger than a minimal value
$(\Omega h^2)_{\rm min}$, compatible with observational data and with
large--scale
structure calculations, we simply put $\xi=1$.
When $\Omega_\chi h^2$ turns out to be less than $(\Omega h^2)_{\rm min}$,
and then the neutralino may only provide a fractional contribution
${\Omega_\chi h^2 / (\Omega h^2)_{\rm min}}$
to $\Omega h^2$, we take $\xi = {\Omega_\chi h^2 / (\Omega h^2)_{\rm min}}$.
The value to be assigned to $(\Omega h^2)_{\rm min}$ is
somewhat arbitrary, in the range
$0.01 \lsim (\Omega h^2)_{\rm min} \lsim 0.3$. We use here the value
$(\Omega h^2)_{\rm min} = 0.01$, which is conservatively derived from the
estimate $\Omega_{\rm galactic} \sim 0.03$.
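The rescaling recipe just described amounts to the following (a trivial sketch, with $(\Omega h^2)_{\rm min} = 0.01$ as in the text):

```python
# Standard rescaling of the fractional local neutralino density:
# xi = rho_chi / rho_l = min(1, Omega_chi h^2 / (Omega h^2)_min).
def rescaling_factor(omega_chi_h2, omega_h2_min=0.01):
    return min(1.0, omega_chi_h2 / omega_h2_min)

print(rescaling_factor(0.05))   # 1.0: neutralino saturates the local density
print(rescaling_factor(0.002))  # 0.2: only a fractional contribution
```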
Using the previous formulae we find that a large portion of the
modulation region $R$ is indeed covered by supersymmetric configurations,
compatible with all present physical constraints. This set of susy states,
which will hereafter be denoted as set $S$, is displayed in Fig. 2 with
different symbols, depending on the neutralino composition.
In Fig. 2(a) we notice that a quite sizeable portion of region $R$ is
populated by supersymmetric configurations
with neutralino relic abundance inside the cosmologically interesting range
$0.01 \lsim \Omega_{\chi} h^2 \lsim 0.7$.
Thus we obtain the first main result of our analysis, i.e.
{\it the annual
modulation region, singled out by the DAMA/NaI experiment, is largely
compatible with a relic neutralino as the major component of dark matter}.
This is certainly the most remarkable possibility. However, we also keep under
consideration neutralino configurations with a small contribution to
$\Omega_{\chi} h^2$ (see Fig. 2(b)), since also the detection of relic
particles with these features would provide in itself a very noticeable
information.
The neutralino relic abundance $\Omega_{\chi} h^2$ is plotted versus the
quantity
$\xi \sigma_{\rm scalar}^{(\rm nucleon)}$ in terms of the neutralino
composition in
Fig. 3. Here we remark some anticorrelation between the two plotted quantities.
This feature is expected on general grounds, as discussed for instance in
Ref. \cite{bbefms}. In fact, it is due to the combination of two properties:
(i) the direct detection rate is proportional to
$\sigma_{\rm scalar}^{(\rm nucleon)}$,
and $\Omega_{\chi} h^2 \propto \sigma_{\rm ann}^{-1}$,
where $\sigma_{\rm ann}$ is the neutralino--neutralino
annihilation cross--section, (ii) usually $\sigma_{\rm ann}$
and $\sigma_{\rm scalar}^{(\rm nucleon)}$, as functions of the supersymmetric
model parameters, are
either both increasing or both decreasing. Therefore, neutralinos with lower
values for the relic abundance have higher couplings with matter (this feature
is attenuated, when rescaling in $\rho_{\chi}$ is operative; this
occurs here
for $\Omega_{\chi} h^2 < 0.01$).
In view of the discussed anticorrelation between
$\sigma_{\rm scalar}^{(\rm nucleon)}$ and $\Omega_{\chi} h^2$,
it is remarkable that the relatively large neutralino--matter cross--sections,
implied by the DAMA modulation effect, agree with a relic
neutralino making up a major contribution to dark matter, i.e.
with a neutralino whose relic abundance falls into the cosmologically
interesting range $0.01 \lsim \Omega_{\chi} h^2 \lsim 0.7$.
Most of the neutralino configurations falling in this range of $\Omega_{\chi}
h^2$ turn out to be gaugino--like.
We further notice that recent observations and
analyses \cite{omegamatter} point to values of
$\Omega_{\rm matter}$ somewhat smaller than those considered in the past:
$0.1 \lsim \Omega_{\rm matter} \lsim 0.4$.
If we combine this range with the one for $h$:
$0.55 \lsim h \lsim 0.80$ \cite{hubble} and require that a
cold dark matter candidate (such as the neutralino) supplies
$\sim (80$--$90)\%$ of $\Omega_{\rm matter}$, we obtain:
$0.02 \lsim \Omega_{\rm CDM} h^2 \lsim 0.2$. This turns out to be the most
appealing interval for relic neutralinos. It is remarkable that this range for
$\Omega_{\chi} h^2$ is densely populated by configurations of set $S$ (see
Fig. 3).
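The quoted interval for $\Omega_{\rm CDM} h^2$ follows from simply combining the extremes of the three ranges above (a sketch of the arithmetic):

```python
# Combine Omega_matter in (0.1, 0.4), h in (0.55, 0.80) and a CDM fraction
# of (80-90)% of Omega_matter to obtain the range of Omega_CDM h^2.
omega_m_lo, omega_m_hi = 0.1, 0.4
h_lo, h_hi = 0.55, 0.80
f_lo, f_hi = 0.80, 0.90

cdm_lo = f_lo * omega_m_lo * h_lo**2
cdm_hi = f_hi * omega_m_hi * h_hi**2
print(round(cdm_lo, 3), round(cdm_hi, 3))  # 0.024 0.23, quoted as 0.02-0.2
```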
\section{Further properties of the configurations singled out by the
annual modulation effect}
Let us proceed now to an analysis of other
main properties of the configurations of set $S$, related to a possible
investigation of these supersymmetric states at accelerators.
As is already clear from Fig. 2, the set $S$ contains neutralino
compositions of various nature, from higgsino--like to gaugino--like ones.
This property is further displayed in
Fig. 4, where we show the location of the configurations of set $S$ in the plane
$\mu$--$M_2$, for two representative values of $\tan \beta$.
The properties of our set $S$ relevant to searches of neutral Higgses at
accelerators are displayed in Fig. 5. Section (a) of this figure
shows a scatter plot of set $S$ in term of $m_h$ and $\tan \beta$,
section (b) provides essentially the same information, but in terms of
$m_h$ and the quantity $\sin^2 (\alpha - \beta)$, which is the relevant
coupling for the
channels of possible neutral Higgs production at LEP. In the plot of section
(a) a correlation between $\tan \beta$ and $m_h$ is apparent.
This is due to the fact that the rather
large values of the neutralino--nucleon scalar cross--section,
$\sigma_{\rm scalar}^{(\rm nucleon)} \sim (10^{-9} - 10^{-8}$) nb, as required
by the annual modulation data, impose that either the couplings are large
(then large $\tan \beta$) and/or the
process goes through the exchange of a light particle.
Thus, Higgs--exchange dominance and
$\sigma_{\rm scalar}^{(\rm nucleon)} \sim (10^{-9} - 10^{-8})$ nb require a
very light $h$ at small $\tan \beta$,
and even put {\it a lower bound on tan $\beta$: $\tan \beta \gsim 2.5$}.
At larger values of $\tan \beta$, the mass $m_h$ is less constrained, also
because, at large $\tan \beta$, the squark--exchange diagrams
may occasionally compete with the Higgs--exchange ones in keeping
$\xi \sigma_{\rm scalar}^{(\rm nucleon)}$ at a sizeable value.
{From} Fig. 5(a) we notice that a good deal of susy
configurations are explorable at LEP2, while others will require experimental
investigation at a high luminosity Fermilab Tevatron, which
should be capable of exploring Higgs masses up to $m_h \sim$ 130 GeV
\cite{tev,baer}.
In Fig. 6 the configurations of set $S$ are shown in the plane
$m_{\tilde t_1} - \tan \beta$ ($t_{\tilde t_1}$ denotes the lightest
top--squark).
This scatter plot reveals an interesting correlation:
at small $\tan \beta$ only light $\tilde t_1$'s are allowed.
In the Appendix it is shown that this feature occurs as a joint effect due to
the $b \rightarrow s + \gamma$ constraint and to the annual modulation data
\cite{bsg}.
{From} the previous results, it then turns out that annual modulation data and
$b \rightarrow s + \gamma$ constraint complement each other in providing
stringent bounds on both $m_h$ and $m_{\tilde t_1}$, at small $\tan \beta$. For
instance, for $\tan \beta \lsim 5$ one has $m_h \lsim $ 105 GeV and
$m_{\tilde t_1} \lsim$ 350 GeV.
Finally, in Fig. 7 we display the scatter plot of set $S$ in the plane
$m_{\chi}$ -- $\tan \beta$. Since the reach of LEP2 extends only up to
the dashed vertical line, at $m_\chi \simeq 50$ GeV, the exploration of the
whole
interesting region will require Tevatron upgrades or LHC. Under favorable
hypotheses, TeV33 could provide exploration up to the vertical solid line.
Apart from exploration at accelerators, configurations of set $S$ may be
investigated by means of indirect measurements of relic neutralinos,
such as cosmic--ray antiprotons \cite{pierre} and
neutrino fluxes from Earth and Sun (\cite{bbefms} and references quoted
therein).
On the basis of a preliminary analysis, we found that configurations of set
$S$ provide quite significant signals in both instances. In the case of
antiprotons, a large
fraction of configurations of set $S$ provide $\bar p$ fluxes at the
level of the measurements by the balloon--borne BESS experiment
\cite{bess}.
These configurations will be further investigated with the data
collected during the Shuttle flight by the AMS experiment \cite{ams}.
A similar situation occurs for
the neutrino fluxes induced by configurations of set $S$, which turn out
to be within the reach of MACRO \cite{macro} and Baksan \cite {baksan}
neutrino telescopes.
Details of our analysis on the indirect detection searches are
presented in Ref.\cite{indirectnew}.
We end this section by some more general theoretical considerations.
We have discussed here the physical
implications of the annual modulation data in the framework of a
MSSM at the electroweak scale, since this scheme provides the
simplest and least--constrained model for discussing susy phenomenology.
However, we have also performed an
analysis of the modulation data in the framework of Supergravity
(SUGRA) theories. The results of this study are presented in
Ref. \cite{companion}. We simply report here that we have
ascertained that a fraction of configurations of set $S$ are indeed
compatible with SUGRA schemes, even more so when the unification
conditions, which are usually imposed at GUT scale, are
somewhat relaxed,
for instance by allowing deviations from a strict
unification assumption in the Higgs masses at the GUT scale
\cite{bbefms1}. It is remarkable that these configurations fall into the
region of susy parameter space where electroweak symmetry breaking
occurs without excessive fine tuning between competing terms. A simple
case of this feature occurs for the neutralino mass, whose range for the
annual modulation configurations is well within the
no--fine--tuning upper bound $m_{\chi} \lsim $ O(100 GeV) \cite{bbefms1}.
\section{Conclusions}
The new data of the
DAMA/NaI experiment \cite{dama2}, which support a possible annual
modulation effect in
the counting rates for relic WIMPs, previously reported by the same
Collaboration \cite{dama1}, have been analysed here in terms of relic
neutralinos. {\it We have proved that the annual modulation data are largely
compatible with a relic
neutralino making up the major part of dark matter in the Universe}.
We have also investigated the possibility of exploring the supersymmetric
states, selected by the annual modulation data, at accelerators.
We have demonstrated that an analysis of the main features of these
susy configurations
is within the reach of present or planned experimental set--ups.
In particular, we have found the following results:
a) The sizeable neutralino--nucleon elastic cross--sections, implied by the
annual modulation data, entail a rather stringent upper bound for
$m_h$ in terms of $\tan \beta$. In particular, this property implies that no
susy configuration would be allowed for $\tan \beta \lsim 2.5$.
A large portion of the region covered by the scatter plot in the plane
$m_h$ -- $\tan \beta$ is explorable at LEP2, the remaining one will be at
TeV33.
b) The annual modulation data and the $b \rightarrow s + \gamma$
constraint complement each other in providing a correlation
between $\tan \beta$ and the mass of the lightest top--squark.
As remarked in the introduction, a solid confirmation of the
annual modulation effect as singled out by the DAMA/NaI Collaboration
will require further accumulation of an increasingly significant
statistics with very stable set--ups over a few years. However,
it is worth noticing that the detection of this effect,
if confirmed by further experimental evidence, would
turn out to be a major breakthrough in establishing the existence of
particle dark matter in the Universe. It is very rewarding that the
features of this dark matter particle are widely compatible with those
expected for the neutralino, both in MSSM and
in SUGRA schemes, and that several of its
properties can be explored
in the near future at accelerators and by indirect searches for
relic neutralinos.
\vspace{1cm}
\section { Acknowledgements}
We wish to thank Prof. R. Bernabei and Dr. P. Belli for very
interesting discussions about experimental aspects of the DAMA/NaI
experiment and about the analysis of their data. We also thank Dr. P. Gambino
for informative discussions on the next--to--leading order corrections to
$b \rightarrow s + \gamma$.
\vspace{1cm}
\section {Appendix}
Here we discuss the origin of the correlation between $m_{\tilde t_1}$ and
$\tan \beta$ which is apparent in the plot of Fig. 6 at small $\tan \beta$.
Let us start by considering how the $b \rightarrow s + \gamma$
constraint
\cite{bertolini,bg,garisto,borzumati,wu,barger}
correlates the three parameters $\tan \beta, m_h$ and
$m_{\tilde t_1}$. Thus, leaving momentarily aside the annual modulation data,
let us consider in the plane
$m_{\tilde t_1}$--$\tan \beta$ the regions of our parameter space
which satisfy all accelerator constraints (including $b \rightarrow s +
\gamma$) and the further requirement that
$m_h$ is below some arbitrarily fixed value $m_h^*$.
In Fig. A.1 these regions are represented by the domains on the left
of the various lines, which are denoted by the following values of
$m_h^*$: $m_h^* = 80, 90, 100, 110$ GeV.
It is possible to show that the $b \rightarrow s + \gamma$ constraint is
instrumental in establishing the peculiar shape of the various contour lines
at fixed $m_h$.
If we now combine the plot of Fig. A.1 with the one of Fig. 5(a) we obtain
the situation displayed in Fig. A.2, the allowed region being the
one on the left of the various curves, depending on the values of
$m_h^*$. From this figure we see how the
$\tan \beta$--$m_{\tilde t_1}$ correlation, occurring in Fig. 6,
is due to the joint effect of
$b \rightarrow s + \gamma$ and annual modulation data.
\vfill
\eject
\begin{center}
{\large FIGURE CAPTIONS}
\end{center}
\vspace{1cm}
{\bf Figure 1} --
Annual modulation regions singled out by the DAMA/NaI experiments in the plane
$m_{\chi}$--$\xi \sigma_{\rm scalar}^{(\rm nucleon)}$. The dotted contour line
denotes the 90\% C.L. region deduced from the data of the running period
\# 1 \cite{dama1},
the solid contour line delimits the 2--$\sigma$ C.L. region deduced from
the data of the running period \# 2 \cite{dama2}, and the dashed contour line
delimits the 2--$\sigma$ C.L. region, obtained by combining together the data of
the two running periods. The solid open curve denotes the 90\% C.L. upper
bound, obtained in Ref.\cite{damapsa},
where a pulse shape analysis of the events was used.
This figure is reproduced from Fig. 6 of Ref.\cite{dama2}.
{\bf Figure 2} -- Scatter plot of set $S$ in the plane
$m_{\chi}$--$\xi \sigma_{\rm scalar}^{(\rm nucleon)}$.
The dashed contour line
delimits the 2--$\sigma$ C.L. region, obtained by the DAMA/NaI Collaboration,
by combining together the data of
the two running periods of the annual modulation experiment \cite{dama2}.
The solid contour line is obtained from the dashed line, which refers to the
value $\rho_l = 0.3$ GeV cm$^{-3}$, by accounting for
the uncertainty range of $\rho_l$, as explained in Sect. III (the region
delimited by the solid line is denoted as region $R$ in the text).
Displayed in this figure are only the representative points of the susy
parameter space, defined in Sect. II, which fall inside the region $R$.
Dots, crosses and circles denote neutralino compositions
according to the classification given in Sect. II.
Sections (a) and (b) refer to configurations with
$0.01 \leq \Omega_{\chi} h^2 \leq 0.7$ and with $\Omega_{\chi} h^2 < 0.01$,
respectively.
{\bf Figure 3} -- Scatter plot of set $S$ in the plane
$\Omega_{\chi} h^2$ -- $\xi \sigma_{\rm scalar}^{(\rm nucleon)}$.
Dots, crosses and circles denote neutralino compositions
according to the classification given in Sect. II. The two vertical solid lines
delimit the $\Omega_{\chi} h^2$--range of cosmological interest. The two
dashed lines delimit the most appealing interval for
$\Omega_{\chi} h^2$, as suggested by the most recent observational data.
{\bf Figure 4} -- Scatter plot of set $S$ in the plane
$\mu$--$M_2$. Sections (a) and (b)
refer to two representative values of $\tan \beta$:
$\tan \beta = 8$ and $\tan \beta = 30$, respectively.
The solid curves denote the iso--mass curves which delimit the annual
modulation region $R$, i.e. the iso--mass curves for $m_{\chi}$ = 34 GeV and
$m_{\chi}$ = 107 GeV. The dashed curves denote the neutralino composition,
and correspond to $P$ = 0.1, 0.5, 0.9. The hatched region is excluded by LEP at
$\sqrt s$ = 183 GeV.
{\bf Figure 5} -- Section (a) --
Scatter plot for set $S$ in the plane $m_h$ -- $\tan \beta$.
The hatched region on the right is excluded by theory.
The hatched region on the left is
excluded by present LEP data at $\sqrt s$ = 183 GeV. The dotted and the dashed
curves denote the reach of LEP2 at energies $\sqrt s$ = 192 GeV and
$\sqrt s$ = 200 GeV, respectively. The solid line represents the
95\% C.L. bound reachable at LEP2, in case of non--discovery of a neutral
Higgs boson. \\
Section (b) --
Scatter plot for set $S$ in the plane $m_h$ -- $\sin^2 (\alpha - \beta)$.
The hatched region on the left is
excluded by LEP data at $\sqrt s$ = 183 GeV.
{\bf Figure 6} -- Scatter plot for set $S$ in the plane
$m_{\tilde t_1}$ -- $\tan \beta$.
The hatched region is excluded by LEP data (without any restriction on other
masses).
{\bf Figure 7} -- Scatter plot for set $S$ in the plane
$m_{\chi}$ -- $\tan \beta$.
The hatched region on the left is
excluded by present LEP data. The dashed and the
solid vertical lines denote the reach of LEP2 and TeV33, respectively.
{\bf Figure A.1} -- Regions of the parameter space defined in Sect. II,
which satisfy all accelerator constraints (including
$b \rightarrow s + \gamma$) and the further requirement that
$m_h$ is below some arbitrarily fixed value $m_h^*$.
The various lines denote the following representative values of
$m_h^*$: $m_h^* = 80, 90, 100, 110$ GeV.
The allowed regions are given by
the domains on the left of the various curves for each value
of $m_h^*$.
{\bf Figure A.2} -- Allowed region in the plane
$m_{\tilde t_1}$ -- $\tan \beta$ when
the plot of Fig. A.1 is combined with the one of Fig. 5(a).
The hatched region on the left is excluded by LEP data
(without any restriction on other masses).
\vfill\eject
Applications that want to use the API need to be registered in the partner panel on BookingSync.com.
Click on the New Application button.
Fill in the required details, redirect_uri being your app's OAuth callback URL and admin_url pointing to your admin section.
Choose whether the application should open in a new window (standalone) or remain embedded in the BookingSync frame.
Choose Private application, at least during your development and testing process.
Note: traditional BookingSync users have to input a private access code in their Apps section to start using your private application. That code is automatically generated for you and is visible in your application's manage section.
Embedded applications are rendered within an iframe on BookingSync.com. Your application needs to allow embedding in an iframe.
Standalone applications work outside of the BookingSync website: they are opened in a new window outside of the BookingSync app.
While it's not the recommended approach, some applications can benefit from this.
All published applications are available by default for all accounts.
Note: When first accessing the application, you will be asked to authorize the permissions.
First obtain the private access code from the application owner.
Input the code in your Apps section.
Note: When first accessing the application, you will be asked to authorize the permissions. After the first authorization, you will keep access to the user's account until they manually request to revoke your access.
Q: Adding buildpath of existing project to a new project I have two projects in Eclipse's workspace Project A & Project B.
I want to add Project B to A's buildpath but no matter what I do it doesn't work. I've looked it up and none of the answers work for me. Although exporting B as a jar and adding the jar to the buildpath works, I will need to keep updating B's code and I do not want to have to export it as a jar every time.
I've tried adding B as a class folder but it fails to work. With the jar, import bproj.BMain; works, but as a class folder it does not.
How can I add B to A's buildpath?
A: On Eclipse Mars, open any file from Project B and on the menu, go to Project > Properties > Java Build Path > Projects > Add
From there, you:
can add B to A's buildpath
by selecting Project A from the list.
Nice username, by the way :-)
The 1981–82 United Counties League season was the 75th in the history of the United Counties League, a football competition in England.
Premier Division
The Premier Division featured 17 clubs which competed in the division last season, along with two new clubs, promoted from Division One:
British Timken Duston
Stevenage Borough
League table
Division One
Division One featured 14 clubs which competed in the division last season, along with one new club, relegated from the Premier Division:
Northampton Spencer
League table
References
External links
United Counties League
1981–82 in English football leagues
United Counties League seasons
Q: HTML select dropdown does not populate options in mobile Chrome : AngularJS In our application we are using both jQuery and AngularJS in some modules.
I'm facing a weird issue: the HTML select dropdown is not populating its options in
mobile browsers, especially mobile Chrome, because of angular-aria.
Please check this snippet in a mobile Chrome browser:
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<select>
<option value='1'>One</option>
</select>
Any idea how to enable/disable the ng-aria module on page load, or how else to handle this problem?
# How do you find the square root of $x^2$?

$\sqrt{{x}^{2}} = \left\mid x \right\mid$

${x}^{2}$ is nonnegative for any real-valued $x$. When $x$ is negative, the square root returns the positive value $\left\mid x \right\mid = -x$.
\section{Introduction}
\label{sec:intro}
It was demonstrated in 1998 that neutron star r-mode oscillations of any harmonic, frequency and amplitude belong to the group of non-axisymmetric modes that
can be driven unstable by gravitational radiation \citep{GWHYNS, GRINSTHYNS, RMODEINST}. This theory suggested that (assuming the r-mode oscillation amplitude
grows sufficiently large) r-mode gravitational radiation (primarily in the $m = 2$ harmonic) could carry away most of the angular momentum of a rapidly rotating
newborn neutron star.
In a previous paper \citep{rmodespaper} we presented a sensitivity study of a seedless clustering detection algorithm based on r-mode waveforms predicted by the
Owen et. al. 1998 model
\begin{equation}
\label{eq.1a}
f(t) = \frac{1}{ \left ( f_o^{-6} + \mu t \right )^{\frac{1}{6}} }
\end{equation}
\begin{equation}
\label{eq.2a}
\mu= 1.1 \times 10^{-20} |\alpha|^2 \frac{\text{s}^{-1}}{\text{Hz}^6}
\end{equation}
\noindent
and the gravitational radiation power given by
\begin{equation}
\label{eq.3aa}
\dot{E} \approx 3.5 \times 10^{19} f^8 | \alpha |^2 \,\ \mbox{W}.
\end{equation}
\noindent
This model depends on two parameters: the (dimensionless) saturation amplitude, $\alpha$, of the r-mode oscillations and the initial gravitational
wave spindown frequency $f_o$. The theoretical predictions for the values of these parameters were discussed extensively in our previous paper.
The values we considered for $\alpha$ lie in the range of $10^{-3}- 10^{-1}$ while the values we considered for $f_o$ lie in the interval of
$\unit[600-1600]{Hz}$. Due to the wide range within which the values of these parameters lie, we cannot effectively use a matched filtering algorithm. Instead,
we have to develop techniques that could detect all possible distinct waveforms.
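For concreteness, the model of Eqs. \eqref{eq.1a}--\eqref{eq.3aa} can be evaluated with a few lines of code. This is an illustrative sketch only, not the injection code used in the analysis:

```python
import numpy as np

def spindown_frequency(t, f0, alpha):
    """Eqs. (1)-(2): f(t) = (f0**-6 + mu*t)**(-1/6),
    with mu = 1.1e-20 * |alpha|**2 in s^-1 Hz^-6."""
    mu = 1.1e-20 * alpha**2
    return (f0**-6.0 + mu * t) ** (-1.0 / 6.0)

def radiated_power(f, alpha):
    """Eq. (3): Edot ~ 3.5e19 * f**8 * |alpha|**2 (W)."""
    return 3.5e19 * f**8 * alpha**2

# The (f0 = 1500 Hz, alpha = 0.1) waveform of Fig. 1: after 2500 s the
# frequency has spun down and the radiated power has dropped accordingly.
f_end = spindown_frequency(2500.0, 1500.0, 0.1)
power_ratio = radiated_power(f_end, 0.1) / radiated_power(1500.0, 0.1)
```

For the $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ waveform this reproduces the $\sim 17\%$ residual power over $\unit[2500]{s}$ quoted in the caption of Fig.~\ref{Fig:ef2}.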
In our previous paper, a seedless clustering algorithm was used. This algorithm is based on the statistical significance of the signal-to-noise ratios
(SNR) of clusters of above-threshold pixels. This method does not depend on any knowledge of the signal and it can be applied generically
to any long-lived gravitational-wave transients. In particular, it is unable to discriminate between r-modes and other possible gravitational wave
sources. Knowledge of the r-mode signal can be used to make minor modifications in the clustering algorithm, however, there was not much hope for a
dramatic improvement in the efficiency. Nevertheless, we were able to recover signals of magnitude 5 times weaker than the noise.
In the sensitivity study performed for the clustering algorithm, we used 9 distinct waveforms. These were chosen by taking ($\alpha$, $f_o$) pairs using
3 values ($10^{-1}$, $10^{-2}$, $10^{-3}$) for $\alpha$ and 3 values ($\unit[700]{Hz}$, $\unit[1100]{Hz}$, $\unit[1500]{Hz}$) for $f_o$.
In this sensitivity study for the MLAs, for comparison purposes, we used 2 of these waveforms: $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ and $(f_o=\unit[1100]{Hz}, \alpha=0.01)$.
These waveforms as well as their corresponding power decays are shown in Fig.\ref{Fig:ef1} and Fig.\ref{Fig:ef2} respectively.
MLAs are especially well suited for cases where the signal is known only crudely rather than precisely. This paper is based on three specific MLAs:
ANN \citep{haykin2004comprehensive}, SVM \citep{vapnik1998statistical} and CSC \citep{CSC_paper}. All three methods are considered novel applications
in the area of long transient gravitational wave searches.
\begin{figure}
\includegraphics[width=1.05 \linewidth]{Owen_MLA_a-eps-converted-to.pdf}
\caption{The $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ waveform is the most powerful waveform considered in our sensitivity studies both for the clustering and the MLAs.
The second waveform we chose has an amplitude 25 times smaller than the first one. This waveform has parameters $(f_o=\unit[1100]{Hz}, \alpha=0.01)$ and is
approximately monochromatic for the durations our sensitivity studies were designed for. The clustering algorithm could detect the weaker signal at distances
no further than a few kpc.} \label{Fig:ef1}
\end{figure}
\begin{figure}
\includegraphics[width=1.1 \linewidth]{Owen_MLA_b-eps-converted-to.pdf}
\caption{These power evolutions correspond to the signals in Fig.\ref{Fig:ef1}. We see that the blue plot corresponds to a rapidly decaying signal: within $\unit[2500]{s}$ the
radiation power drops to $17\%$ of its original value. The red plot corresponds to a power that decays only to $99.9\%$ of its initial value. The power of this (red) signal is about
3 orders of magnitude lower than that of the signal plotted in blue. We chose this weak signal so that we can examine how the MLAs compare to the clustering algorithm both for powerful
and weak signals.} \label{Fig:ef2}
\end{figure}
This paper is designed as follows: The next section describes the preparation of the data used for the training of the MLAs. The discussion extends to the
resolution reduction of the data maps and also to the training efficiencies as a function of resolution reduction for all three MLAs. In section 3 we present
a brief introduction into ANN methods, in section 4 we do the same for SVM methods and in section 5 we present a similar introduction for CSCs. In section 6
we present and discuss our results and compare the MLA efficiencies to the conventional (clustering) algorithms efficiencies. Finally, in section 7, we summarize
our conclusions including topics for future work.
\section{Data Preparation}
\label{Dataprep}
\subsection{Choice of the $f_o$ and $\alpha$ parameter values}
From equations \eqref{eq.1a} and \eqref{eq.2a} we have the two model parameters $\alpha$ and $f_o$ that determine the shape of the waveform. Apart from the shape,
the injections that were produced for the training of the MLAs were also dependent on the pixel brightness or pixel signal-to-noise ratio (SNR). For a single pixel
in the frequency-time maps (ft-maps) the SNR satisfies \citep{STAMPPAPER}
\begin{equation}
\label{eq.4_1_13}
\text{SNR}(t,f, \hat{\Omega}) \propto Re \left [ \hat{Q}_{ij}(t,f, \hat{\Omega}) C_{ij}(t,f) \right]
\end{equation}
\noindent
where $i=1$ and $j=2$ are indices corresponding to the two advanced LIGO (aLIGO) detectors \citep{abbott2009ligo}, $\hat{Q}_{ij}(t,f, \hat{\Omega})$ is a filter function
that depends on the source direction, $\hat{\Omega}$, \citep{abbott2005first} and $C_{ij} \equiv 2 \tilde{h}_i^*(t,f) \tilde{h}_j(t,f)$ is the cross spectral density,
$\tilde{h}$ being the Fourier transform of the gravitational wave strain amplitude $h$. The latter is expressed in \citep{HOLAI} as a function of the distance $d$ to
the source, the gravitational-wave frequency $f$ and the r-mode oscillation amplitude $\alpha$, by
\begin{equation}
\label{eq.3a}
h \approx 1.5 \times 10^{-23} \left( \frac{1\text{Mpc}}{d} \right) \left( \frac{f}{1\text{kHz}} \right )^3 | \alpha | .
\end{equation}
\noindent
For the construction of the injection maps we chose the 3 parameter values $f_o$, $\alpha$ and $h^2$ to be uniformly distributed within the corresponding predetermined ranges.
The values of the distance $d$ are constrained accordingly such that the above conditions are satisfied.
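To make Eq. \eqref{eq.4_1_13} concrete, the cross spectral density for a single time segment (one column of the ft-map) can be sketched as follows. The windowing, normalization, and the sky-direction-dependent filter $\hat{Q}_{ij}$ applied by STAMP are omitted here:

```python
import numpy as np

def cross_spectral_density(h1_seg, h2_seg):
    """C_ij(f) = 2 * conj(H_i(f)) * H_j(f) for one time segment.
    Windowing, normalization and the filter Q_ij used by the real
    pipeline are deliberately omitted in this sketch."""
    H1 = np.fft.rfft(h1_seg)
    H2 = np.fft.rfft(h2_seg)
    return 2.0 * np.conj(H1) * H2

# A common sinusoidal 'signal' in both detectors shows up as a real,
# positive C_ij at the signal frequency bin (bin 10 here).
t = np.arange(256) / 256.0
s = np.sin(2 * np.pi * 10 * t)
C = cross_spectral_density(s, s)
```

A signal coherent between the two detectors thus contributes a positive real part to the SNR of the corresponding pixel, while uncorrelated noise averages towards zero.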
Each injection set that was produced and used for the MLA training was limited to 11350 injection maps and 11350 noise maps. This was due to the
finite amount of data that was produced during the S5 LIGO run, the computational resources available as well as the time needed to produce the
22700 maps. For each injection the waveform was randomly chosen in such a way that the $\alpha$ value was randomly chosen from a uniform distribution
of 11350 $\alpha$ values in the range of $10^{-3}- 10^{-1}$, the $f_o$ value was randomly chosen from a uniform distribution of 11350 $f_o$
values in the range of $\unit[600-1600]{Hz}$, and for the $h^2$ values we picked 3 ranges (for 3 separate MLA trainings), whose choice is discussed
in the next paragraph.
\subsection{Choice of the $h^2$ parameter values}
The results of the sensitivity study for the clustering algorithm showed that for a signal of $f=\unit[1500]{Hz}$ and $\alpha=0.1$ the detection distance was
up to $\unit[1.2]{Mpc}$. Using equation \eqref{eq.3a} we see that the conventional algorithms can detect gravitational wave strains of value $h \approx 4 \times 10^{-24}$.
Values of the same order are obtained if we substitute the results for the other 8 waveforms. For example from table 1 in \citep{rmodespaper} we see that for
$f=\unit[700]{Hz}$ and $\alpha=0.01$ we get a detection distance of $\unit[0.043]{Mpc}$. Substituting in equation \eqref{eq.3a} we get $h=1.2 \times 10^{-24}$.
Therefore, the value of $h \approx 10^{-24}$ will become a reference point because this is the value of gravitational wave strain the MLAs will have to detect
if they are shown to be at least as efficient as the conventional clustering algorithms \citep{stochtrack}.
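The reference strains quoted above follow directly from Eq. \eqref{eq.3a}; a quick numerical check of the two cases discussed in this paragraph:

```python
def strain(d_mpc, f_khz, alpha):
    """Eq. (5): h ~ 1.5e-23 * (1 Mpc / d) * (f / 1 kHz)**3 * |alpha|."""
    return 1.5e-23 * (1.0 / d_mpc) * f_khz**3 * abs(alpha)

# Reference points quoted in the text:
h_a = strain(1.2, 1.5, 0.1)     # f = 1500 Hz, alpha = 0.1, d = 1.2 Mpc
h_b = strain(0.043, 0.7, 0.01)  # f = 700 Hz, alpha = 0.01, d = 0.043 Mpc
```

Both evaluate to strains of order $10^{-24}$, consistent with the $h \approx 4 \times 10^{-24}$ and $h = 1.2 \times 10^{-24}$ values quoted above.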
If we consider supernova events at distances in the range from $\unit[1]{Kpc}$ to $\unit[1]{Mpc}$ then the corresponding range for the gravitational wave strain
values is $h \approx 10^{-24}$ to $10^{-21}$. Therefore, there are several approaches in determining the range of $h^2$ values for the injection maps produced for
the training of the MLAs:
\noindent
(i) produce one set of data with injections at distances distributed in such a way that the $h^2$ values are uniformly distributed in the range from $10^{-48}$ to
$10^{-42}$ (i.e. $10^{-24} \le h \le 10^{-21} $ ).
\noindent
(ii) produce three sets of data such that the $h^2$ values are uniformly distributed in the following ranges: \\
(a) from $10^{-46.4}$ to $10^{-45.4}$ \,\,\,\,\ (i.e. $10^{-23.2} \le h \le 10^{-22.7}$ ) \\
(b) from $10^{-47.4}$ to $10^{-46.4}$ \,\,\,\,\ (i.e. $10^{-23.7} \le h \le 10^{-23.2}$ ) \\
(c) from $10^{-48.0}$ to $10^{-47.4}$ \,\,\,\,\ (i.e. $10^{-24.0} \le h \le 10^{-23.7}$ ) \\
\noindent
The last choice of $10^{-24}$ is such that the waveform with $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ may be detectable up to a distance of $\unit[5]{Mpc}$,
depending on the MLA detection efficiencies. Note that at those distances (in the neighborhood of Milky Way) the supernova event rate is $1$ every 1-2 years.
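The sampling just described can be sketched as follows (an illustration only; the actual injections were produced within the STAMP pipeline). Range (b) of the $h^2$ intervals is used as an example, and the distances are obtained by inverting Eq. \eqref{eq.3a}:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inj = 11350

alpha = rng.uniform(1e-3, 1e-1, n_inj)         # r-mode saturation amplitude
f0 = rng.uniform(600.0, 1600.0, n_inj)         # initial GW frequency (Hz)
h2 = rng.uniform(10**-47.4, 10**-46.4, n_inj)  # strain^2, range (b)

# Distances constrained so that Eq. (5) reproduces the drawn h values:
d_mpc = 1.5e-23 * (f0 / 1e3) ** 3 * alpha / np.sqrt(h2)
```

Each draw fixes one injected waveform together with the source distance at which it would produce the drawn strain.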
\subsection{Production of the data matrix for the MLA training}
We start with the data maps in the frequency-time domain (ft-maps) produced by the stochastic transient analysis multi-detector pipeline (STAMP) \citep{STAMPPAPER}: 11350
noise maps and 11350 (r-mode) injection maps. These ft-maps are produced using time-shifted S5 data recolored with the aLIGO sensitivity noise curve. Each map has a
size of $1001 \times 4999$ pixels with each pixel along the vertical axis corresponding to $\unit[1]{Hz}$ and each pixel along the horizontal axis corresponding to $\unit[0.5]{s}$,
hence the length of the map is $\unit[2499.5]{s}$. This ft-map is reshaped to a $1 \times 5003999$ row vector that occupies a disc space of 37.4MB. We reshaped all 22700
ft-maps (each one of size $1001 \times 4999$) and produced 22700 row vectors $x_i$ with $i \in \{1,2,...,22700 \}$. The rows with $i \in \{1,2,...,11350 \}$ correspond to
the noise ft-maps while the rows with $i \in \{11351,11352,...,22700 \}$ correspond to the injection ft-maps.
We then used the rows $x_i$ with $i \in \{1,2,...,11350 \}$ to produce a $11350 \times 5003999$ noise data matrix, $X_1$, given by
\begin{equation}
\label{X1}
X_1 = \left( \begin{array}{c}
x_1 \\
x_2 \\
. \\
. \\
. \\
x_{11350}
\end{array} \right)
\end{equation}
\noindent
and we also used the rows with $x_i$ with $i \in \{11351,11352,...,22700 \}$ to produce a $11350 \times 5003999$ injection data matrix, $X_2$, given by
\begin{equation}
\label{X2}
X_2 = \left( \begin{array}{c}
x_{11351} \\
x_{11352} \\
. \\
. \\
. \\
x_{22700}
\end{array} \right) .
\end{equation}
\noindent
The MLAs would take as an input the $22700 \times 5003999$ data matrix given by
\begin{equation}
\label{X}
X = \left( \begin{array}{c}
X_1\\
X_2
\end{array} \right) .
\end{equation}
\noindent
Each row $x_i$ with $i \in \{1,2,..,22700 \}$ of the data matrix $X$ corresponds to a single ft-map. The total number of rows is equal to the number of data points, $n=22700$,
while the total number of columns (i.e. the total number of features) is equal to $D=5003999$, where $D$ is the dimensionality of the feature space in which each single ft-maps
lives.
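The reshape-and-stack construction of Eqs. \eqref{X1}--\eqref{X} amounts to the following sketch, shown here on toy-sized `maps':

```python
import numpy as np

def build_data_matrix(ft_maps):
    """Reshape each ft-map (1001 x 4999 in the real analysis) into a row
    vector and stack the rows, reproducing the layout of the data matrix."""
    return np.vstack([m.reshape(1, -1) for m in ft_maps])

# Toy example with two small 'maps':
maps = [np.arange(6.0).reshape(2, 3), np.ones((2, 3))]
X = build_data_matrix(maps)  # shape (2, 6), one map per row
```

With the full-size maps this yields the $22700 \times 5003999$ matrix $X$.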
For any matrix, row rank equals column rank; therefore, the number of linearly independent columns of $X$ is at most 22700, a bound set by the limited
number ($n=22700$) of ft-maps we could produce. This means that even though each single ft-map lives in a $5003999$-dimensional space, we can only approximate these ft-maps
as vectors living in a $22700$-dimensional space (a subspace of the $5003999$-dimensional space). The best approximation of this subspace would be the one in which the most
'dominant' 22700 features (out of the total number of 5003999) constitute a basis of the subspace. A well known method of choosing the 22700 most dominant features is described
by the principal component analysis (PCA) \citep{jolliffe2002principal} or see section \ref{PCA}. However, the data matrix $X$ is of size $\unit[848.8]{GB}$ and this makes the
(RAM) memory required to perform PCA on $X$ beyond $1$TB, making it practically impossible to perform PCA on $X$ with realistically available computing resources.
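For reference, PCA itself is a short computation; it is the $\unit[848.8]{GB}$ size of $X$, not the algorithm, that is prohibitive here. A toy sketch via the singular value decomposition:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components.
    Toy illustration; infeasible at the 848.8 GB scale of the full X."""
    Xc = X - X.mean(axis=0)              # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # scores in the top-k directions

rng = np.random.default_rng(1)
Z = pca_reduce(rng.normal(size=(10, 5)), 2)  # 10 points, 2 retained features
```

The rows of $Vt$ play the role of the `dominant' feature directions discussed above.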
A reliable approach to solve the problem of the high dimensionality of the features ($D \gg n$) is to seek MLAs that will naturally select d-many features (with $d \ll D$)
such that $d \leq n$ \citep{johnstone2009statistical}. Three classes of MLAs that can achieve this are the ANN, SVM and CSC methods. However, the data matrix is too large to
attempt to perform any MLAs on it. Therefore, the only way out of these restrictions the data matrix size imposes, is to perform resolution reduction for each $1001 \times 4999$
ft-map (before reshaping each one of them to a row vector). After the resolution reduction, performing further feature selection would still benefit the training of the algorithms
in terms of speed. The right choice of features can significantly decrease the training time without noticeably affecting the training efficiencies.
A resolution reduction on the ft-maps would result in 22700 $1 \times d$ row vectors such that $d \ll D$. The desired effect of the resolution reduction would be to get
$d \leq n$. The first guess for such a reduction would be to choose a factor of $D/n \sim 220$. That would be equivalent to a reduction by a factor of $\sim 15$ along each
axis (frequency and time) of the ft-map. However, it turned out that this is not the optimal resolution (per axis) reduction factor. The following two sub-sections describe
the experimentation on the reduction factor.
\subsection{Resolution reduction: bicubic interpolation}
To perform the resolution reduction, we used the MATLAB function imresize. The original ft-map of $1001 \times 4999$ pixels consists of a $1002 \times 5000$ point grid.
Imresize will first decrease the number of points in the point grid according to the chosen resolution reduction factor, $r$. When $r=10^{-2}$, the imresize function gives a new ft-map
of dimensionality $11 \times 50$, the new point grid will be $12 \times 51$. Interpolation is then used to calculate the surface within each pixel in the new point grid.
We used the bicubic interpolation option of the imresize function. According to this, the surface within each pixel can be expressed by
\begin{equation}
S(t,f)= \sum^3_{i=0} \sum^3_{j=0} a_{ij} t^i f^j
\end{equation}
\noindent
The bicubic interpolation problem is to calculate the $16$ $a_{ij}$ coefficients. The $16$ equations used for these calculations consist of the following conditions at the 4
corners of each pixel: \\
(a) the values of $S(t,f)$ \\
(b) the derivatives of $S(t,f)$ with respect to $t$ \\
(c) the derivatives of $S(t,f)$ with respect to $f$ and \\
(d) the cross derivatives of $S(t,f)$ with respect to $t$ and $f$ \\
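The study used MATLAB's imresize with the bicubic option just described. As a rough, dependency-free stand-in (not an implementation of bicubic interpolation, which fits the smooth surface above and antialiases when downsampling), each axis can instead be reduced by averaging non-overlapping blocks:

```python
import numpy as np

def block_average(ft_map, factor):
    """Reduce each axis by `factor` by averaging non-overlapping
    factor x factor blocks. A crude stand-in for imresize's
    antialiased bicubic downsampling, not an implementation of it."""
    h, w = ft_map.shape
    h2, w2 = h // factor, w // factor
    trimmed = ft_map[:h2 * factor, :w2 * factor]  # drop ragged edges
    return trimmed.reshape(h2, factor, w2, factor).mean(axis=(1, 3))
```

Note that the integer division drops ragged edge pixels, so a $1001 \times 4999$ map reduced by a factor of 100 per axis comes out $10 \times 49$ rather than the $11 \times 50$ produced by imresize.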
Determining the resolution reduction factor that would yield the best training efficiencies for the MLAs was not a very straightforward task. To do so we performed a series
of tests using the set of $11350$ noise ft-maps and the set of $11350$ injection ft-maps. The injected signal SNR values lay in a range such that $10^{-23.7} \le h \le 10^{-23.2}$.
\subsection{Resolution reduction versus training efficiency}
We tested 5 different resolution reduction factors ($r=10^{-1}$, $r=10^{-1.5}$, $r=10^{-2}$, $r=10^{-2.5}$ and $r=10^{-3}$) where the value of $r$ corresponds to the factor
by which each axis resolution is reduced. The resulting data matrices had dimensions $22700 \times 50500$, $22700 \times 5155$, $22700 \times 550$, $22700 \times 64$ and $22700 \times 10$
respectively. Subsequently each of the three MLAs were trained and the training efficiencies were plotted against the resolution reduction factors. The results are shown in Fig.\ref{Fig:ef3}.
From the plots we see that the training efficiencies first improve as we lower the resolution. For too low or too high resolution reductions the training efficiencies decrease.
This behavior was consistent on all three MLAs. At a reduction factor of 100 per axis we have the maximum training efficiency. Resolution reduction offers two advantages:
(a) it increases the MLA training efficiency and (b) it reduces the training time. Using the results from Fig.\ref{Fig:ef3} we determined that the best resolution reduction
would be the factor of $r=10^{-2}$. This results in a data matrix with dimensions of $22700 \times 550$ (disc space of 84MB).
\begin{figure}
\includegraphics[width=1.05 \linewidth]{imresize-eps-converted-to.pdf}
\caption{Training efficiencies of ANN (blue), SVM (green) and CSC (red) versus the resolution reduction (per axis). For SVM and CSC there is a clear peak
at a resolution reduction factor of $10^{-2}$. The ANN peak seems to be a little off but for uniformity we used $10^{-2}$ for all 3 MLAs.
The training of all three MLAs was performed using the ($\alpha=0.1$, $f_o=\unit[1500]{Hz}$) waveform. No tests have been performed to verify the validity of these plots for
other waveforms or other $h$ value ranges.} \label{Fig:ef3}
\end{figure}
After dimensionality reduction, the matrices $X_1$ and $X_2$ as given by equations \eqref{X1} and \eqref{X2} respectively become $X'_1$ and $X'_2$ each one of a reduced dimensionality $11350 \times 550$.
We define these matrices as follows
\begin{equation}
\label{X'1}
X'_1 = \left( \begin{array}{c}
x_1 \\
x_2 \\
. \\
. \\
. \\
x_{11350}
\end{array} \right)
\end{equation}
\noindent
with row vectors $x_i \in \mathbb{R}^{550}$ where $i \in \{1,2,...,11350 \}$ and
\begin{equation}
\label{X'2}
X'_2 = \left( \begin{array}{c}
x_{11351} \\
x_{11352} \\
. \\
. \\
. \\
x_{22700}
\end{array} \right)
\end{equation}
\noindent
with row vectors $x_i \in \mathbb{R}^{550}$ where $i \in \{11351,...,22700 \}$. Similarly we define the dimensionally reduced $22700 \times 550$ data matrix
\begin{equation}
\label{X'}
X' = \left( \begin{array}{c}
X'_1\\
X'_2
\end{array} \right).
\end{equation}
\noindent
The number of rows, $n=22700$, is the number of data points (ft-maps) and the number of columns, $d=550$, is the number of features of
each point or the dimensionality of the space in which each ft-map lives (after the resolution reduction).
\section{Artificial Neural Networks}
The $22700 \times 550$ data matrix, $X'$, will be presented as input into a feed-forward neural network with an input layer of dimensionality equal to the number of columns of the data
matrix (i.e. $d=550$). For the training of the ANN we randomly picked $90\%$ of the first 11350 (injection data) rows and also $90\%$ of the second 11350 (noise data)
rows. The other $10\%$ of the (injection and noise) rows was used to determine the training efficiency of the trained algorithm. The ANN had one hidden layer with 50
nodes (`neurons') and an output layer with two `neurons' that would `fire' for `signal' or `no signal', respectively. The `hidden' layer used
`neurons' with the logistic sigmoid function \citep{murphy2012machine}
\begin{equation}
\label{eq.4a}
\sigma(a_j) =\frac{1}{1+\exp(-a_j)}
\end{equation}
\noindent
where $a_j$ ($j=1,2,...,50$) is the total weighted input presented to the $j$-th `neuron' in the hidden layer. The purpose of the hidden layer is to allow for non-linear combinations of
the input values to be forwarded to the output layer. These combinations in the hidden layer carry forward `features' from the input to the output layer
that could not be extracted from any individual neuron in the input layer, enabling non-linear classification. The number of hidden layers
and hidden neurons was chosen, as is typically done, after experimentation with various ANN architectures, aiming to enhance the accuracy, the robustness
and the generalization ability of the ANN, along with the training efficiency and feasibility.
Starting from the first ft-map in the data matrix $X'$ i.e. starting from the row vector $x_1$ where
\begin{equation}
\label{eq.x1}
x_1= \{ \ x_{ij} | \ i=1 \,\ \mbox{and} \,\ j=1,2,...,550 \}
\end{equation}
\noindent
we have $550$ values that are fed into the input layer of the neural network.
These values are then non-linearly combined in each hidden `neuron' to get $50$ output values forwarded to the output layer, given by
\begin{equation}
\label{eq.5a}
x'_{1k} = \sigma \Big( \sum^{550}_{j=1} w_{kj}^{(1)}x_{1j} + w_{k0}^{(1)} \Big)
\end{equation}
\noindent
where $k=1,2,...,50$ is the index corresponding to each `neuron' in the hidden layer and the superscript (1) represents the hidden layer. The parameters
$w_{kj}$ are called the weights while the parameters $w_{k0}$ are called the biases of the neural network.
The `output' layer used `neurons' with the soft-max activation function which is typically used in classification problems to achieve a 1-to-n output encoding \citep{Bishop06a}.
In particular, the soft-max function rescales the outputs so that they all lie within the range $[0,1]$ and sum up to 1. This is done by
normalizing the exponential of the input $b_k$ to each output neuron over the exponential of the inputs of all neurons in the output layer:
\begin{equation}
\label{eq.4softmax}
\text{soft-max}(b_k) =\frac{\exp(b_k)}{\sum_{k'}\exp(b_{k'})} .
\end{equation}
\noindent
When the values from equation \eqref{eq.5a} are presented in the output layer we get the result
\begin{equation}
\label{eq.6a}
x''_{1l} = \mbox{soft-max} \Big( \sum^{50}_{k=1} w_{lk}^{(2)} x'_{1k} + w_{l0}^{(2)} \Big)
\end{equation}
\noindent
(where $l=1,2$) as the output values of the two neurons in the output layer. Equation \eqref{eq.6a} represents the `forward propagation' of information in
the neural network since the inputs are `propagated forward' to produce the outputs of the ANN, according to the particular `weights' and `biases'.
Equation \eqref{eq.6a} also shows that a neural network is a non-linear function, $\mathcal{F}$, from a set of input variables $\{x_{i}\}$, $i \in \{1,2,...,22700\}$ (the row
vectors of the matrix $X'$, cf. equation \eqref{eq.x1}), to a set of output variables $\{x''_l \}$, $l \in \{1,2 \}$, since the output layer has 2 neurons (one fires for noise
and the other fires for injection). To merge the weights $w^{(1)}_{kj}$ and
biases $w^{(1)}_{k0}$ into a single matrix (and similarly the weights $w^{(2)}_{lk}$ and biases $w^{(2)}_{l0}$) we need to redefine $x_1$ as
given by equation \eqref{eq.x1} to
\begin{equation}
\label{x1}
x_1= \{ \ x_{ij} | \ i=1 \,\ \mbox{and} \,\ j=0,1,2,...,550 \,\ \mbox{and} \,\ x_{10}=1 \}
\end{equation}
\noindent
and similarly redefine all row vectors of $X'$ as well as all the output row vectors from the hidden layer. Then the non-linear function $\mathcal{F}$ is controlled
by a $(50+1) \times (550+1)$ matrix $\mathbf{w^{(1)}}$ and a $2 \times(50+1)$ matrix $\mathbf{w^{(2)}}$ of adjustable parameters. Training a neural network corresponds
to calculating these parameters.
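A minimal NumPy sketch of this forward propagation, using the augmented-vector convention $x_{i0}=1$ to absorb the biases into the weight matrices (the weights below are random placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)

d, h, out = 550, 50, 2                 # input features, hidden neurons, output neurons
W1 = rng.normal(0, 0.1, (h, d + 1))    # hidden-layer weights; column 0 holds the biases
W2 = rng.normal(0, 0.1, (out, h + 1))  # output-layer weights; column 0 holds the biases

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(b):
    e = np.exp(b - b.max())            # shift for numerical stability
    return e / e.sum()

def forward(x):
    """Propagate one ft-map (length-d row vector) through the network."""
    x_aug = np.concatenate(([1.0], x))         # x_{i0} = 1 absorbs the bias
    hidden = sigmoid(W1 @ x_aug)               # cf. eq. (5a)
    hidden_aug = np.concatenate(([1.0], hidden))
    return softmax(W2 @ hidden_aug)            # cf. eq. (6a)

y = forward(rng.normal(size=d))
```

The two outputs are non-negative and sum to 1, as the soft-max activation guarantees.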
Numerous algorithms for training ANNs exist \citep{murphy2012machine} and in general they can be classified as either sequential or batch training methods: \\
(i) sequential (or `online') training: A `training item' consists of a single row (one ft-map) of the data matrix. In each iteration a single row is passed through
the network. The weight and bias values are adjusted for every `training item' based on the difference between computed outputs and the training data target outputs. \\
(ii) batch training: A `training item' consists of the matrix $X'$ (all 22700 rows of the data matrix). In each iteration all rows of $X'$ are successively passed through
the network. The weight and bias values are adjusted only after all rows of $X'$ have passed through the network.
In general, batch methods perform a more accurate estimate of the error (i.e. the difference between the outputs and the training data target outputs) and hence (with
sufficiently small learning rate \citep{wilson2003general}) they lead to a faster convergence. As such, we used a batch version of gradient descent as the optimization algorithm. This
form of algorithm is also known as `back-propagation' because the calculation of the first (hidden) layer errors is done by passing the second (output) layer
errors back through the $w^{(2)}$ matrix. The `back-propagation' gradient descent for ANNs in batch training is summarized as follows:
\begin{algorithm} [H]
\caption{Gradient Descent for ANN}
\label{gradient_descent}
\begin{algorithmic}
\begin{small}
\STATE 1. Initialize $w$ (and biases) randomly.
\WHILE{error on the validation set satisfies certain criteria}
\FOR{i=1:22700}
\STATE 2. Feed-forward computation of the input vector $x_i$.
\STATE 3. Calculate the error at the output layer.
\STATE 4. Calculate the error at hidden layer.
\ENDFOR
\STATE 5. Calculate the mean error.
\STATE 6. Update $w$ of the output layer.
\STATE 7. Update $w$ of the hidden layer.
\ENDWHILE
\end{small}
\end{algorithmic}
\end{algorithm}
Out of the $90\%$ of the data that was (randomly) chosen for the training, $10\%$ of that was used as a validation set. The latter is used in the
`early stopping' technique that is used to avoid over-fitting and maintain the ability of the network to `generalize'. Generalization is the ability
of a trained ANN to identify not only the points that were used for the training but also points in between the points of the training set. For each
iteration the detection efficiency of the ANN is tested on the validation set. When the error on the validation set drops by less than $10^{-3}$ for
two consecutive iterations then we do the `early stopping' and the training is stopped.
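The early-stopping rule just described (stop once the validation error has improved by less than $10^{-3}$ for two consecutive iterations) can be sketched as follows; the validation-error history below is an illustrative placeholder, not a measured sequence:

```python
TOL = 1e-3

def should_stop(errors):
    """Return True once the validation error has dropped by less than
    TOL for two consecutive iterations (the early-stopping criterion)."""
    if len(errors) < 3:
        return False
    d1 = errors[-3] - errors[-2]
    d2 = errors[-2] - errors[-1]
    return d1 < TOL and d2 < TOL

# Illustrative validation-error history: large drops, then a plateau.
history = [0.50, 0.30, 0.20, 0.1995, 0.1992]
stopped = should_stop(history)
```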
The learning rate of the gradient-descent algorithm determines the rate at which the training of the network is moving towards the optimal parameters.
It should be small enough not to skip the optimal solution but large enough so that the convergence is not too slow. A crucial challenge for the algorithm
is not to converge to local minima. This can be avoided by adding a fraction of a weight update to the next one. This method is called `momentum' of the
training of the network. Adding `momentum' to the training implies that for a gradient of constant direction the size of the optimization steps will increase.
As such, the momentum should be used with a relatively small learning rate in order not to skip the optimal solution. After experimenting with various parameter
combinations we used a learning rate of $0.02$ and a momentum of $0.9$.
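The momentum update, and the fact that for a gradient of constant direction the step size grows toward $\eta/(1-\mu)$, can be sketched as follows (a one-parameter toy, using the paper's values $\eta=0.02$, $\mu=0.9$):

```python
LEARNING_RATE = 0.02   # eta
MOMENTUM = 0.9         # mu

def momentum_step(w, grad, velocity):
    """One gradient-descent update with momentum: a fraction of the
    previous weight update is added to the current one."""
    velocity = MOMENTUM * velocity - LEARNING_RATE * grad
    return w + velocity, velocity

# For a constant gradient the per-iteration step approaches
# -LEARNING_RATE / (1 - MOMENTUM) = -0.2.
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, grad=1.0, velocity=v)
```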
\section{Support Vector Machine}
The second MLA we trained is a support vector machine (SVM). This method gained popularity over the ANNs because it is based on well formulated and
mathematically sound theory \citep{Bishop06a}. In the following paragraphs we give a brief introduction to the SVM mathematical formulation.
We start from the data matrix $X'$ as given by equation \eqref{X'}. In the SVM formulation we treat the noise ft-maps, rows of $X'_1$ given by equation \eqref{X'1},
as well as the ft-maps with r-mode injections, rows of $X'_2$ given by equation \eqref{X'2}, as points in a $550$-dimensional space. The idea behind the formulation
of the SVM optimization problem is to find the optimal hypersurface that would separate (and hence classify) the noise points from the injection points.
For this discussion we will need the following definitions: \\
\noindent
\textbf{Definition 1:} The distance of a point $x_i$ to a flat hypersurface $\mathcal{H} = \{ x | \langle w,x \rangle +b = 0 \}$ is given by
\begin{equation}
\label{eq.7a}
d_{x_i}(w,b) = z_i \times ( \langle w,x_i \rangle +b )
\end{equation}
\noindent
where $w$ is a unit vector perpendicular to the flat hypersurface, $b$ is a constant, and $z_i =+1$ for $\langle w,x_i \rangle +b >0$
and $z_i=-1$ for $\langle w,x_i \rangle +b <0$. The index $i$ (in $x_i$) takes values from the set $\{1,2,3,..., 22700 \}$. In the following discussion each point $x_i$
that lies above the hypersurface pairs with a value $z_i=1$ and each point $x_i$ that lies below the hypersurface pairs with a value of $z_i=-1$. \\
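Definition 1 can be illustrated directly with toy two-dimensional points ($w$ must be a unit vector, as the definition requires):

```python
import math

def signed_distance(w, b, x):
    """Distance of x to the hyperplane <w, x> + b = 0 (Definition 1).
    z_i = +1 above the hyperplane, -1 below, so the result is non-negative."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = 1 if s > 0 else -1
    return z * s

# Unit normal w and offset b for the line x + y = 1 in the plane.
w = [1 / math.sqrt(2), 1 / math.sqrt(2)]
b = -1 / math.sqrt(2)
d_above = signed_distance(w, b, [1.0, 1.0])   # point above the line
d_below = signed_distance(w, b, [0.0, 0.0])   # point below the line
```

Both points are at distance $1/\sqrt{2}$ from the line, each with the appropriate sign $z_i$ absorbed so the returned distance is positive.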
\noindent
\textbf{Definition 2:} The `margin', $\gamma_{\mathcal{S}}(w,b)$, of any set, $\mathcal{S}$, of vectors is defined as the minimum of the set of all
distances $\mathcal{D} = \{d_{x_i}(w,b) | x_i \in \mathcal{S} \}$ from the hypersurface $\mathcal{H}$. For the purpose of our discussion the set $\mathcal{S}$ is the union of the
set of all noise points and the set of all injection points.\\
\noindent
For definition 3 we assume that a training set consists of points $x_i$ with each one belonging to one of two distinct data classes denoted by $y_i=1$ (for one class)
and $y_i=-1$ (for the other class). We may further assume that the set of all noise points belongs to the class represented by $y_i=-1$ while the set of all injection points
belongs to the class represented by $y_i=+1$.\\
\noindent
\textbf{Definition 3:} A training set $\{(x_1,y_1),...,(x_n,y_n) | x_i \in \mathbb{R}^d, y_i \in\{-1,+1 \} \}$ is called `separable'
by a hypersurface $\mathcal{H} = \{ x | \langle w,x \rangle +b =0 \}$ if both a unit vector $w$ $(\| w \|=1)$ and a constant $b$ exist so that the following inequalities hold:
\begin{align}
\langle w,x_i \rangle +b & \ge \gamma_{\mathcal{S}} \,\,\,\,\,\ & \mbox{if} \,\,\,\,\,\ y_i=+1 \label{eq.8a} \\
\langle w,x_i \rangle +b & \le -\gamma_{\mathcal{S}} \,\,\,\,\,\ & \mbox{if} \,\,\,\,\,\ y_i=-1 \label{eq.8b}
\end{align}
\noindent
where $\mathcal{S}=\{x_i | i=1,2,...,n \}$ and $\gamma_{\mathcal{S}}$ is given by definition 2. \\
For the purpose of our discussion $d=550$ is the dimensionality of the points
$x_i$ (this dimensionality corresponds to the number of pixels in each ft-map) and $n=22700$ is the number of our (ft-maps) data points. Using the fact
that the hypersurface is defined up to a scaling factor $c$, i.e. $\mathcal{H} = \{ x | \langle cw,x \rangle +cb =0 \}$, we can take $c$ such that $c \gamma_{\mathcal{S}}=1$
and hence we can rewrite equations \eqref{eq.8a} and \eqref{eq.8b} as
\begin{equation}
\label{eq.9a}
y_i \times ( \langle cw,x_i \rangle +cb ) \ge 1 \,\,\,\,\ \mbox{for all i=1,2,...,n}.
\end{equation}
\noindent
Defining $w'=cw$ i.e. $\|w'\|=c$ and dividing equation \eqref{eq.9a} by $c$ we get
\begin{equation}
y_i \times ( \langle \frac{w'}{\|w'\|},x_i \rangle + b ) \ge \frac{1}{\|w'\|} \,\,\,\,\ \mbox{for all i=1,2,...,n}.
\end{equation}
\noindent
\textbf{Formulation of the SVM optimization problem:} Given a training set, that is, a data matrix
$X' = \left( \begin{array}{c}
X'_1\\
X'_2
\end{array} \right) $, $X'_1$ being the noise points and $X'_2$ being the injection points, we want to find the `optimal separating hypersurface' (OSH), that separates
the row-vectors of $X'_1$ from the row-vectors of $X'_2$. According to definition 3, this translates to maximizing the `margin' $\gamma_{\mathcal{S}}$. In other words,
we want to find a vector $w'$ and a constant $b'$ that maximize $\frac{1}{\|w'\|}$, i.e. minimize $\|w'\|$. Therefore, the SVM optimization problem can be expressed as follows
\begin{align}
\min_{w,b} \,\,\ & \frac{1}{2} \|w'\|^2 \,\,\,\,\,\,\ \mbox{subject to} \label{eq.10a} \\
1-y_i \times ( \langle w',x_i \rangle & + b') \le 0 \,\,\,\,\ \mbox{for all i=1,2,...,n} \label{eq.10b}
\end{align}
\noindent
where $b'=cb$. This is a quadratic (convex) optimization problem with linear constraints and can be solved by seeking a solution to the
Lagrangian problem dual to equations \eqref{eq.10a} and \eqref{eq.10b}.
Before formulating the Lagrangian dual we introduce the `slack variables', $\xi_i$ ($i=1,2,...,n$), that are used to relax the conditions in equation \eqref{eq.9a} and account for
outliers or `errors'. Instead of solving equation \eqref{eq.10a} we seek a solution to
\begin{equation}
\label{eq.11a}
\begin{split}
\min_{w,b} \,\,\ & \frac{1}{2} \|w'\|^2 + C \sum_{i=1}^n \xi_i \,\,\,\,\,\,\ \mbox{subject to} \\
\xi_i \ge 0 \,\,\ \mbox{and} \,\,\ 1-y_i \times ( & \langle w',x_i \rangle + b')-\xi_i \le 0 \,\,\ \mbox{for all i=1,..,n}.
\end{split}
\end{equation}
\noindent
The slack variables $\xi_i$ measure the distance of a point that lies on the wrong side of its `margin hypersurface'.
Using the Lagrange multipliers
\begin{equation}
\label{eq.mult}
\alpha_i \ge 0 \,\,\,\ \text{and} \,\,\,\ \beta_i \ge 0
\end{equation}
\noindent
the Lagrangian dual of equation \eqref{eq.11a} is obtained by minimizing the following Lagrangian over the primal variables $w^\prime$, $b$, $\xi_i$ and maximizing the result over the multipliers
\begin{equation}
\label{eq.13a}
\begin{split}
\mathcal{L}(w^\prime,b,\xi_i,\alpha,\beta)= & \frac{1}{2} \|w^\prime \|^2 + C \sum_{i=1}^n \xi_i - \sum_{i=1}^n \beta_i \xi_i + \\
& +\sum^n_{i=1} \alpha_i(1-y_i \times ( \langle w^\prime,x_i \rangle +b) -\xi_i).
\end{split}
\end{equation}
\noindent
Using the stationary first order conditions for $w^\prime$, $b$ and $\xi_i$
\begin{subequations} \label{eq:Derivatives}
\begin{align}
\frac{\partial \mathcal{L}}{\partial w_j^\prime} &= {w_j^\prime} - \sum_{i=1}^n \alpha_i y_i x_{ij} = 0, \,\,\,\,\ \ \forall j=1,2, \dots d, & \label{eq.14b_1} \\
\frac{\partial \mathcal{L}}{\partial b} &= \sum_{i=1}^n \alpha_i y_i = 0, & \\ \
\frac{\partial \mathcal{L}}{\partial \xi_i} &= C - \alpha_i - \beta_i = 0, \,\,\,\,\ \ \forall i=1,2, \dots n \label{eq.14b_3}.
\end{align}
\end{subequations}
\noindent
(where $x_{ij}$ is the $j^{th}$ entry of the $x_i$ data point) the Lagrangian dual as given in expression \eqref{eq.13a} can be re-expressed only in terms of
the $\alpha_i$ Lagrange multipliers, as follows
\begin{equation}
\label{Lag_dual}
\mathcal{L}(\alpha_i) = \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i,j=1}^n \alpha_i \alpha_j y_i y_j \langle x_j, x_i \rangle
\end{equation}
\noindent
and hence we can evaluate the $\alpha_i$ Lagrange multipliers by solving the following optimization problem
\begin{subequations}
\begin{align}
\label{eqa:dual1}
& \max_{\alpha_i} \mathcal{L}(\alpha_i) \,\,\,\ \text{subject to} \,\,\,\,\ \sum_{i=1}^n \alpha_i y_i = 0, \\
& 0 \leq \alpha_i \leq C \,\,\ \mbox{,} \,\,\ \forall i=1,2,.., n. \label{eqa:dual2}
\end{align}
\end{subequations}
Defining $G_{ij}=y_i y_j x_j^\intercal x_i$ problem \eqref{eqa:dual1}-\eqref{eqa:dual2} is equivalently expressed as
\begin{subequations}
\label{eq.14b}
\begin{align}
& \min_{\alpha \in \mathbb{R}^n} \,\,\,\ \frac{1}{2} \alpha^\intercal G \alpha -e^\intercal \alpha \label{eq.14b1}\\
& \mbox{subject to} \,\,\ y^\intercal \alpha = 0 \\
& \mbox{and} \,\,\ 0 \le \alpha_i \le C \,\,\ \mbox{,} \,\,\ \mbox{$i=1,2,...,n$} \label{eq.14b3}
\end{align}
\end{subequations}
\noindent
where $e^\intercal$ is an $n$-dimensional row vector equal to $e^\intercal=(1,1,...,1)$ and \eqref{eq.14b3} is derived from \eqref{eq.14b_3} together with \eqref{eq.mult}.
Since the objective function in equation \eqref{eq.14b} is quadratic and all the constraints are affine, the problem defined by these equations is a quadratic
optimization problem. By construction $G$ is the Gram matrix of the vectors $y_i x_i$, so that $\alpha^\intercal G \alpha = \| \sum_{i=1}^n \alpha_i y_i x_i \|^2 \ge 0$ for any $\alpha$; hence
$G$ is positive semidefinite, which implies that the problem is convex. Convex problems offer the advantage of global optimality; that is, any local minimum is also the
global one. Several methods have been proposed for solving such problems including primal, dual and parametric algorithms \citep{goldfarb1983numerically}.
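The Gram-matrix structure of $G$, and therefore its positive semidefiniteness, is easy to verify numerically on toy data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 5
X = rng.normal(size=(n, d))           # toy data points (rows)
y = rng.choice([-1.0, 1.0], size=n)   # toy class labels

# G_ij = y_i y_j <x_i, x_j> is the Gram matrix of the vectors z_i = y_i x_i.
Z = y[:, None] * X
G = Z @ Z.T

# All eigenvalues of a positive-semidefinite matrix are >= 0
# (up to floating-point round-off).
eigvals = np.linalg.eigvalsh(G)
```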
After solving the optimization problem defined by expressions \eqref{eq.14b1}-\eqref{eq.14b3}, i.e. after evaluating all the $\alpha_i$ ($i=1,2,...,n$), we can find the vector
$w$ using \eqref{eq.14b_1}. The constant $b$ can be found by using the Karush-Kuhn-Tucker (KKT) complementarity conditions \citep{fletcher2013practical},
\begin{subequations} \label{eq.KKT}
\begin{align}
& \alpha_i\{-1+y_i \times ( \langle w',x_i \rangle + b') + \xi_i \}= 0 \label{eq.KKT1} \\
& \beta_i \xi_i = 0 \label{eq.KKT2}
\end{align}
\end{subequations}
\noindent
along with equation \eqref{eq.14b_3}. For any $\alpha_i$ satisfying $0 < \alpha_i < C $, equation \eqref{eq.14b_3} implies that $\beta_i >0$ and hence \eqref{eq.KKT2} implies that $\xi_i = 0$.
Consequently, we can use the $x_i$ corresponding to the aforementioned $\alpha_i$ to solve equation \eqref{eq.KKT1} for $b^\prime$.
Having calculated the vector $w^\prime$ and the constant $b^\prime$ is equivalent to knowing the hypersurface defined by $\langle w',x_i \rangle + b'=0$.
During the testing phase a new data point, $x_i$, is classified according to
\begin{equation}
\label{eq.testing}
\text{class}(x_i) = \text{sgn}( \langle w',x_i \rangle + b').
\end{equation}
\noindent
For $\text{class}(x_i)=-1$ we classify the $x_i$ point as noise and for $\text{class}(x_i)=+1$ we classify the $x_i$ point as injection.
We choose to solve the convex quadratic problem as defined in equation \eqref{eq.14b} with sequential minimal optimization
(SMO) \citep{platt1998sequential}. SMO modifies only a subset of the dual variables $\alpha_i$ at each iteration, and thus only some columns of $G$ are used at
any one time. A smaller optimization subproblem is then solved, using the chosen subset of $\alpha_i$. In particular at each iteration only two Lagrange multipliers
that can be optimized are computed. If a set of such multipliers cannot be found then the quadratic problem of size two is solved analytically. This process is
repeated until convergence. The integrated software for support vector classification (LIBSVM) \citep{chang2011libsvm} is a state-of-the-art SMO-type solver for the quadratic
problem found in the SVM formulation. SMO outperforms most of the existing methods for solving quadratic problems \citep{platt1999fast}. Hence we choose to use it for
training the SVM, using the LIBSVM routine `svmtrain'.
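The pairwise-update idea behind SMO can be sketched in a few lines on a toy separable set; this is an illustrative miniature (analytic Newton step per pair, clipped to the box constraints), not the LIBSVM implementation:

```python
import numpy as np

# Toy linearly separable set: two points per class in 2-D.
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-2.5, -1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
n = len(y)
C = 1e4                                     # soft-margin parameter
G = (y[:, None] * X) @ (y[:, None] * X).T   # G_ij = y_i y_j <x_i, x_j>

alpha = np.zeros(n)
# SMO-style sweeps: optimize one pair (alpha_i, alpha_j) analytically at a
# time, preserving sum_k alpha_k y_k = 0 and the box 0 <= alpha_k <= C.
for _ in range(200):
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            grad = 1.0 - G @ alpha          # gradient of the dual objective
            s = y[i] * y[j]
            denom = G[i, i] + G[j, j] - 2.0 * s * G[i, j]
            if denom <= 1e-12:
                continue
            step = (grad[i] - s * grad[j]) / denom       # analytic step
            lo = max(-alpha[i], alpha[j] - C if s > 0 else -alpha[j])
            hi = min(C - alpha[i], alpha[j] if s > 0 else C - alpha[j])
            step = min(max(step, lo), hi)                # clip to the box
            alpha[i] += step
            alpha[j] -= s * step

w = ((alpha * y)[:, None] * X).sum(axis=0)  # w' recovered from the alphas
k = int(np.argmax(alpha))                   # an active support vector
b = y[k] - X[k] @ w
predictions = np.sign(X @ w + b)
```

After convergence the recovered hyperplane separates the training points, and the equality constraint $\sum_i \alpha_i y_i = 0$ holds.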
\textbf{Non-linear SVM:} The soft margins $\xi_i$ can only help when the data are `reasonably' linearly separable. However, in most real-world problems, the data are not linearly separable.
To deal with this issue we transform the data into a `feature' (Hilbert) space, $\mathcal{H}$, (a vector space equipped with a norm and an inner product), where a linear separation
might be possible due to the choice of the dimensionality of $\mathcal{H}$, $\text{dim}(\mathcal{H}) \ge \text{dim}(\mathbb{R}^d)$. The transformation is represented by
\begin{equation}
\label{Phi}
\begin{split}
\Phi: & \mathbb{R}^d \rightarrow \mathcal{H} \\
\mbox{such that} \,\ & \Phi(x_i) \in \mathcal{H}.
\end{split}
\end{equation}
\noindent
From equations \eqref{Lag_dual} and \eqref{Phi} we see that the non-linear SVM formulation depends on the data only through the dot products $\Phi(x_i) \cdot \Phi(x_j)$ in $\mathcal{H}$.
These dot products are generated by a real-valued `comparison function' (called the `Kernel' function) $k: \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ that generates all the
pairwise comparisons $K_{ij}=k(x_i,x_j) =\Phi(x_i) \cdot \Phi(x_j)$. We represent the set of these pairwise similarities as entries in a $n \times n$ matrix, $K$. The use of a kernel function
implies that neither the feature transformation $\Phi$ nor the dimensionality of $\mathcal{H}$ are required to be explicitly known. \\
\noindent
\textbf{Definition 4:} A function $k: \mathcal{L} \times \mathcal{L} \rightarrow \mathbb{R}$ is called a positive semi-definite kernel if and only if it is:
(i) symmetric, that is $k(x_i,x_j)=k(x_j,x_i)$ for any $ x_i, x_j \in \mathcal{L}$ and (ii) positive semi-definite, that is
\begin{equation}
\label{eq.17a}
c^\intercal Kc= \sum_{i=1}^n \sum_{j=1}^n c_i c_j k(x_i,x_j) \ge 0
\end{equation}
\noindent
for any $x_i, x_j \in \mathcal{L}$ where $i,j \in \{1,2,...,n\}$ and any $c \in \mathbb{R}^n$ i.e. $c_i, c_j \in \mathbb{R} \,\,\ (i=1,2,...,n)$ and the $n \times n$ matrix $K$ has elements $K_{ij}=k(x_i,x_j)$. \\
The nature of the data we are using strongly suggests that our data points are not linearly separable in the original feature space. Therefore we choose to solve the dual
formulation as given by equation \eqref{eq.14b} where $G$ is now defined by $G_{ij}=y_i y_j k(x_i,x_j)$ so that we can use the `Kernel Trick'. Solving the dual problem has the additional advantage
of obtaining a sparse solution; most of the $\alpha_i$ will be zero (those that satisfy $0< \alpha_i \le C$ are the support vectors that define the hypersurface). For the purpose of our study
we used the Radial Basis Function (RBF) kernel defined by
\begin{equation}
\label{eq.18b}
k(x_i,x_j)= \exp \Bigg ( - \gamma \frac{\|x_i-x_j \|^2}{\sigma^2} \Bigg )
\end{equation}
\noindent
where $\gamma$ is a free parameter and $\sigma$ is the standard deviation of the $x_i$. Typically free parameters are calculated by using the cross validation method on the data set, meaning that
we split the data set into several subsets and the optimization problem is solved on each subset by using a kernel with a different parameter value $\gamma$. We then choose the parameter value that
gives the lowest minimum value of the objective function. It has been seen in many previous applications that the value of $\gamma$ giving optimal results was equal to $\gamma = 1/d = 1/550$. To determine
the value of the parameter $C$, we plotted training efficiencies against several values of $C$. We determined that $C$ should be in the range of $10^4 - 10^5$. All experiments with SVM are conducted
with 90/10 split on data, where 90\% of the data is randomly selected for training and the remaining 10\% is used for testing.
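The RBF kernel of equation \eqref{eq.18b}, with the $\gamma = 1/d$ convention, can be computed for a small toy set as follows ($\sigma$ is taken here as the pooled standard deviation of the points, an assumption made for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 10, 550
X = rng.normal(size=(n, d))   # toy ft-map points
gamma = 1.0 / d               # the gamma = 1/d convention
sigma = X.std()               # illustrative choice of sigma

# K_ij = exp(-gamma * ||x_i - x_j||^2 / sigma^2), cf. eq. (18b)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists / sigma ** 2)
```

The resulting matrix is symmetric, has unit diagonal, and all entries in $(0,1]$, as expected for an RBF kernel.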
Using the `Kernel trick', we substitute $x_i$ with $\Phi(x_i)$ in equations \eqref{eq.11a}-\eqref{eq.testing}. Then equation \eqref{Lag_dual} is re-expressed as
\begin{equation}
\mathcal{L}(\alpha_i) = \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i,j=1}^n \alpha_i \alpha_j y_i y_j \langle \Phi(x_j), \Phi(x_i) \rangle.
\end{equation}
\noindent
After solving \eqref{eq.14b}, the $\alpha_i$ ($i=1,2,...,n$) are substituted in \eqref{eq.14b_1}, which we solve for $w_j^\prime$ to get
\begin{equation}
\label{w_prime}
w_j^\prime = \sum_{i=1}^n \alpha_i y_i \Phi_j(x_i) \,\,\,\,\ \ \forall j=1,2, \dots d
\end{equation}
\noindent
where $\Phi_j(x_i)$ is the $j^{th}$ entry of the $\Phi(x_i)$ transformed data point. Since the transformation $\Phi$ is not obtained directly we never calculate the $w^\prime$ vector explicitly.
Nevertheless, we can substitute expression \eqref{w_prime} in \eqref{eq.KKT1} and solve the latter for $b^\prime$ (when $\xi_k=0$ and $\alpha_k \ne 0$, so that $y_k \times ( \langle w^\prime,x_k \rangle +b^\prime)=1$ and hence, since $y_k = \pm 1$,) as follows
\begin{equation}
\label{b_prime}
b^\prime = y_k - \sum_{i=1}^n \alpha_i y_i \langle \Phi(x_i) , \Phi(x_k) \rangle
\end{equation}
\noindent
where this result should be independent of which $k$ we use. Having the expression \eqref{w_prime} for the vector $w^\prime$ and the expression \eqref{b_prime} for the constant $b^\prime$ we can
classify a new data point during the testing phase according to
\begin{equation}
\label{eq.testing2}
\text{class}(x_i) = \text{sgn}( \langle w',\Phi(x_i) \rangle + b').
\end{equation}
\noindent
From \eqref{eq.testing2} we see that we are able to calculate the new (flat) hypersurface in the new feature (Hilbert) space simply through inner products of
$\langle \Phi(x_i), \Phi(x_j) \rangle $.
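Because $w'$ is never formed explicitly, the classification of equation \eqref{eq.testing2} is evaluated through kernel sums over the support vectors. A sketch with placeholder multipliers and toy support vectors (all names below are illustrative, not trained quantities):

```python
import math

def rbf(xi, xj, gamma=1.0):
    """Toy RBF kernel k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(xi, xj)))

def classify(x, support, alphas, labels, b, kernel=rbf):
    """sign( sum_i alpha_i y_i k(x_i, x) + b ), using support vectors only."""
    s = sum(a * y * kernel(xi, x) for a, y, xi in zip(alphas, labels, support))
    return 1 if s + b >= 0 else -1

# Toy support vectors: one per class, symmetric about the origin.
support = [[1.0, 1.0], [-1.0, -1.0]]
alphas = [0.5, 0.5]
labels = [1, -1]
b = 0.0
cls = classify([0.9, 1.2], support, alphas, labels, b)
```

A point near the positive support vector is classified as $+1$; its mirror image is classified as $-1$.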
\section{Constrained Subspace Classifier}
The idea in the constrained subspace classifier (CSC) method is similar to the idea used in SVM. In the latter the target was to separate the noise points
(or noise vectors) from the injection points (or injection vectors) using a hypersurface. In the CSC method the idea is to project the noise vectors,
rows of $X'_1$ ( equation \eqref{X'1}), onto a $d_1$-dimensional subspace $S_1$, (of dimensionality $d_1 < d$) of the $d$-dimensional space ($d=550$) and also project
the injection vectors, rows of $X'_2$ (equation \eqref{X'2}), onto a subspace $S_2$ (of dimensionality $d_2 < d$). That is, we seek two (optimal) subspaces such
that we can classify data (ft-map) points according to their distance from each subspace: points closer to the subspace $S_1$ are classified as `noise points'
and points closer to the subspace $S_2$ are classified as `injection points'.
The optimality of the choice of each subspace depends on the chosen basis vectors, the chosen dimensionalities, $d_1$ and $d_2$, of each subspace as well as the
relative orientation between the two subspaces. Each choice corresponds to a given variance of the projected data: the closer the variance of the projected points
is to the variance of the original data set the more optimal the subspaces are considered. Of course there is a trade-off between optimality and speed so we picked
dimensionalities $d_1=d_2=100$ for some cases (most powerful injections) and $d_1=d_2=200$ for some others (weakest injections).
\subsection{The projection operator}
Let $S$ be a data space of dimension equal to the number of features, $d$, of the selected dataset (for our study $d$ is the dimensionality of the ft-maps
after the resolution reduction i.e. $d=550$).
We can always find an orthonormal basis for $S$ (using the Gram-Schmidt process) given by
\begin{equation}
\label{eq.A1}
U_d=\{u_1, u_2, \dots, u_d \} \,\,\ \mbox{with}
\,\,\ u_i \in \mathbb{R}^d \,\,\ \forall i=1,2,...,d
\end{equation}
\noindent
i.e. $U_d \in \mathbb{R}^{d \times d}$. We seek to find a subspace of $S$ of dimension $d_1 < d$. Since reducing the dimensionality brings the
data points closer to each other, thus reducing the variance, we try to reduce the number of features from $d$
to $d_1$ while trying to maintain the variance of the data distribution as high as possible.
To achieve the dimensionality reduction we seek to find a projection operator that projects the data points from $\mathbb{R}^{d}$
to a (dimensionally reduced) subspace $\mathbb{R}^{d_1}$ of orthonormal basis given by
\begin{equation}
\label{eq.A2}
U_{d_1}=\{ u_1, u_2, \dots, u_{d_1} \} \,\,\ \mbox{with}
\,\,\ u_i \in \mathbb{R}^d \,\,\ \forall i=1,2,...,d_1
\end{equation}
\noindent
i.e. $U_{d_1}\in \mathbb{R}^{d \times d_1}$. By definition the projection operator is given by
\begin{equation}
\label{eq.A3}
P=Q { ( Q^\intercal Q) }^{-1} Q^\intercal
\end{equation}
\noindent
and projects a vector onto the space spanned by the columns of $Q$. Therefore, we may take the columns of $Q$
to be the orthonormal vectors given in \eqref{eq.A2}, that is $Q=U_{d_1}$. In that case, equation \eqref{eq.A3} becomes
\begin{equation}
\label{eq.8}
P=U_{d_1} { (U_{d_1}^\intercal U_{d_1} ) }^{-1} U_{d_1}^\intercal
\end{equation}
\noindent
which is the projection operator onto the space spanned by the column vectors of $U_{d_1}$.
Since equation \eqref{eq.A1} is an orthonormal basis for $\mathbb{R}^d$ then $U_{d_1}^\intercal U_{d_1} = I_{d_1}$. Therefore, the expression of the projection
operator that can project the (data) vectors in $\mathbb{R}^d$ onto its subspace $\mathbb{R}^{d_1}$ is given by
\begin{equation}
\label{P}
P=U_{d_1} U_{d_1}^\intercal.
\end{equation}
\noindent
In case $d_1=d$ then $P=U_{d} U_{d}^\intercal$. Since $U_d$ is a square matrix whose columns are orthonormal, its rows are also orthonormal. Orthonormality of the columns of $U_d$ implies
$U_{d}^\intercal U_{d} = I_{d}$ (i.e. $U_{d}^\intercal$ is the left inverse of $U_d$) and orthonormality of the rows of $U_d$ implies $ U_d U_{d}^\intercal = I_{d}$
(i.e. $U_{d}^\intercal$ is the right inverse of $U_d$). Therefore, for the special case that $d_1=d$ we have that $U_{d}^\intercal$ is the inverse of $U_d$ or
\begin{equation}
\label{eq.inv}
U_{d}^\intercal = U_{d}^{-1}.
\end{equation}
\subsection{Principal component analysis (PCA)}
\label{PCA}
To introduce PCA we will use the definition of the data matrix $X'_1$ as given by equation (\ref{X'1}). Using the projection operator as given by expression \eqref{P} we want to
project the ft-maps of $X'_1$ onto a subspace $\mathbb{R}^{d_1}$ of $\mathbb{R}^d$ ($d_1 < d$). Let $x_i$ be the original $1 \times d$ row vector in $\mathbb{R}^d$. We project the
column vector $x_i^\intercal$ onto $\mathbb{R}^{d_1}$ thus defining $\tilde{x_i}^\intercal = U_{d_1} U_{d_1}^\intercal x_i^\intercal$. Then the norm of the difference between the
original and the projected (column) vectors can be expressed as
\begin{equation}
\| x_i^\intercal - \tilde{x_i}^\intercal \| = \| x_i^\intercal - U_{d_1} U_{d_1}^\intercal x_i^\intercal\|
\end{equation}
\noindent
where $U_{d_1} \in \mathbb{R}^{d \times d_1}$. In PCA we want to find the subspace $\mathbb{R}^{d_1}$ such that
\begin{equation}
\begin{split}
\label{eq.12}
& \sum_{i=1}^{n} \| x_i^\intercal - U_{d_1} U_{d_1}^\intercal x_i^\intercal \|^2 \,\,\,\ \mbox{is minimized} \\
& \text{subject to} \,\,\,\,\ U_{d_1}^\intercal U_{d_1} = \mathcal{I}_{d_1}. \quad
\end{split}
\end{equation}
\noindent
This subspace $\mathbb{R}^{d_1}$ is defined as the $d_1$-dimensional hypersurface that is spanned by the (reduced) orthonormal basis
$ \{ u_1, u_2, u_3, \dots, u_{d_1} \}$; that is, finding such a basis is equivalent to defining the subspace $\mathbb{R}^{d_1}$.
\noindent
Using the definition of the Frobenius norm for a $m \times n$ matrix $A$,
\begin{equation}
\label{eq.Frob}
\| A \|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n | a_{ij} |^2 } = \sqrt{ \mbox{trace} (A^* A)}
\end{equation}
\noindent
where $A^*$ is the conjugate transpose of $A$, we get
\begin{equation}
\label{eq.10}
\sum_{i=1}^{n} \| x_i^\intercal - U_{d_1} U_{d_1}^ \intercal x_i^\intercal \|^2_{F} = \operatorname{tr} \big\{ {X'_1}^\intercal X'_1 ( \mathcal{I}-U_{d_1} U_{d_1}^\intercal ) \big\}
\end{equation}
\noindent
where $X'_1 \in \mathbb{R}^{n \times d}$ (with $n=11350$ and $d=550$ as shown in equation \eqref{X'1}). Thus the optimization problem in equation \eqref{eq.12} reduces to \citep{vidal2005generalized}
\begin{equation}
\label{eq.11}
\begin{split}
&\min_{U_{d_1}} \operatorname{tr} \big \{ {X'_1}^\intercal X'_1 ( \mathcal{I} - U_{d_1} U_{d_1}^\intercal ) \big \} \\
& \text{subject to} \,\,\ U_{d_1}^\intercal U_{d_1} = \mathcal{I}_{d_1}. \quad
\end{split}
\end{equation}
\noindent
Since $ \operatorname{tr}\big\{ {X'_1}^{\intercal} X'_1 \big\} $ is a constant, the optimization problem can be re-written as
\begin{equation}
\label{eq.16}
\begin{split}
&\max_{U_{d_1}} \operatorname{tr}\{ U_{d_1}^{\intercal} {X'_1}^\intercal X'_1 U_{d_1} \} \\
& \text{subject to} \,\ U_{d_1}^\intercal U_{d_1} = \mathcal{I}_{d_1}. \quad \\
\end{split}
\end{equation}
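The trace-maximization problem of equation \eqref{eq.16} is solved by taking the columns of $U_{d_1}$ to be the top-$d_1$ eigenvectors of the symmetric matrix ${X'_1}^\intercal X'_1$; a NumPy sketch on toy data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, d1 = 100, 6, 2   # toy sample count, feature count, subspace dimension
X = rng.normal(size=(n, d))

# Eigendecomposition of the symmetric PSD matrix X^T X.
eigvals, eigvecs = np.linalg.eigh(X.T @ X)   # eigenvalues in ascending order
U = eigvecs[:, -d1:]                         # top-d1 eigenvectors solve eq. (16)

captured = np.trace(U.T @ (X.T @ X) @ U)     # the maximized objective
top_sum = eigvals[-d1:].sum()                # equals the sum of top eigenvalues
```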
To solve equation \eqref{eq.16} we define the Lagrangian dual problem by
\begin{equation}
\begin{split}
\label{eq.17}
\mathcal{L} (U_{d_1}, \lambda_{ij} ) = & \operatorname{tr}( U_{d_1}^\intercal {X'_1}^\intercal X'_1 U_{d_1}) - \\
- & \sum_{i=1}^{d_1} \sum_{j=1}^{d_1} \lambda_{ij} ( \sum_{k=1}^d U_{jk}^\intercal U_{ki} - \delta_{ji} ) \\
& \text{where} \,\ \delta_{ij}= \left \{\begin{array}{ll}
1 & \text{for } i = j \\
0 & \text{for } i \neq j .\\
\end{array}
\right.
\end{split}
\end{equation}
\noindent
Since $U_{d_1}^\intercal U_{d_1}$ is a symmetric $d_1 \times d_1$ matrix, the orthonormality condition in equation \eqref{eq.16} represents a total of
$d_1 \times (d_1 +1)/2$ independent conditions. Therefore, for the Lagrangian dual problem (as shown in equation \eqref{eq.17}) we need to introduce $d_1 \times (d_1 +1)/2$
Lagrange multipliers $\lambda_{ij}$, and we require that $\lambda_{ij}$ is a symmetric matrix. Moreover, since each term in \eqref{eq.17} involves symmetric matrices,
the following first order optimality conditions
\begin{equation}
\label{eq.18}
\frac{\partial \mathcal{L}}{\partial \lambda_{pq}} = 0 \,\,\,\,\,\,\,\ \mbox{and} \,\,\,\,\,\,\,\ \frac{\partial \mathcal{L}}{\partial U_{lm}} = 0 .
\end{equation}
\noindent
can be solved for $\lambda_{ij}$ only if the latter is symmetric. Using equations \eqref{eq.17} and \eqref{eq.18} we get
\begin{equation}
\label{eq.19}
\begin{aligned}
\frac{\partial}{\partial \lambda_{pq} } \bigg[ & \sum_{i=1}^{d_1} \sum_{j=1}^{d} \sum_{k=1}^{d} U_{ij}^\intercal ({X'_1}^\intercal X'_1)_{jk} U_{ki} \\
& - \sum_{i=1}^{d_1} \sum_{j=1}^{d_1} \lambda_{ij} \Big( \sum_{k=1}^d U_{jk}^\intercal U_{ki} - \delta_{ji} \Big) \bigg] = 0
\end{aligned}
\end{equation}
\noindent
and
\begin{equation}
\label{eq.21}
\begin{split}
\frac{\partial}{\partial U_{lm}} \bigg[ & \sum_{i=1}^{d_1} \sum_{j=1}^{d} \sum_{k=1}^{d} U_{ij}^\intercal ({X'_1}^\intercal X'_1)_{jk} U_{ki} \\
& - \sum_{i=1}^{d_1} \sum_{j=1}^{d_1} \lambda_{ij} \Big( \sum_{k=1}^d U_{jk}^\intercal U_{ki} - \delta_{ji} \Big) \bigg] = 0.
\end{split}
\end{equation}
\noindent
Equation \eqref{eq.19} implies the $d_1 \times (d_1 +1)/2$ equations
\begin{equation}
\label{on}
\sum_{k=1}^d U_{qk}^\intercal U_{kp} = \delta_{qp}
\end{equation}
\noindent
while equation \eqref{eq.21} implies the $d \times d_1$ equations
\begin{equation}
\label{eq.22}
\begin{split}
\sum_{j=1}^d U_{mj}^\intercal ({X'_1}^\intercal X'_1)_{jl} + \sum_{k=1}^d ({X'_1}^\intercal X'_1)_{lk} U_{km} \\
- \sum_{j=1}^{d_1} \lambda_{mj} U_{jl}^\intercal - \sum_{i=1}^{d_1} \lambda_{im} U_{li} = 0.
\end{split}
\end{equation}
\noindent
Using the fact that ${X'_1}^\intercal X'_1$ is symmetric, the first two terms of equation \eqref{eq.22} can be combined into a single term, and similarly
(using the symmetry of $\lambda_{ij}$) the last two terms of equation \eqref{eq.22} can be combined into a single term; cancelling the common factor of 2 gives
\begin{equation}
\label{eq.22a}
\sum_{j=1}^d U_{mj}^\intercal ({X'_1}^\intercal X'_1)_{jl} - \sum_{i=1}^{d_1} \lambda_{mi} U_{il}^\intercal = 0.
\end{equation}
\noindent
Equations \eqref{eq.22a} and \eqref{on} are sufficient to solve for $\lambda_{ij}$ and $U_{kl}$. Right-multiplying equation \eqref{eq.22a} by $U_{ln}$ and summing over $1 \le l \le d$ we get
\begin{equation}
\label{eq.23b}
\sum_{l=1}^d \sum_{j=1}^d {U_{mj}}^\intercal ({X'_1}^\intercal X'_1)_{jl} U_{ln} - \sum_{i=1}^{d_1} \lambda_{mi} \sum_{l=1}^d U_{il}^\intercal U_{ln} = 0.
\end{equation}
\noindent
Using equation \eqref{on}, equation \eqref{eq.23b} becomes
\begin{equation}
\label{eq.24}
\sum_{l=1}^d \sum_{j=1}^d {U_{mj}}^\intercal ({X'_1}^\intercal X'_1)_{jl} U_{ln} = \lambda_{mn}.
\end{equation}
\noindent
Equations \eqref{eq.24} and \eqref{eq.22a} represent a set of $d_1 \times (d_1 +1)/2$ and $d_1 \times d$ equations respectively. These can be solved to obtain the
$d_1 \times (d_1 +1)/2$ degrees of freedom of $\lambda_{ij}$ and the $d_1 \times d$ degrees of freedom of $U_{d_1}$.
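Equation \eqref{eq.24} can be verified directly: when the columns of $U_{d_1}$ are eigenvectors of ${X'_1}^\intercal X'_1$, the multiplier matrix $\lambda$ is diagonal and the stationarity condition \eqref{eq.22a} holds. A small NumPy illustration (random data, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, d1 = 40, 8, 3
X = rng.standard_normal((n, d))
S = X.T @ X                        # the symmetric matrix X'^T X'

evals, evecs = np.linalg.eigh(S)   # eigh returns eigenvalues in ascending order
U = evecs[:, -d1:]                 # eigenvectors of the d1 largest eigenvalues

lam = U.T @ S @ U                  # equation (eq.24): lambda = U^T S U

# At the optimum, lambda is diagonal with the chosen eigenvalues on the diagonal
assert np.allclose(lam, np.diag(evals[-d1:]), atol=1e-8)

# and the stationarity condition (eq.22a), U^T S = lambda U^T, is satisfied
assert np.allclose(U.T @ S, lam @ U.T, atol=1e-8)
```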
The left hand side (LHS) of equation \eqref{eq.24} defines the elements $a_{mn}$ of a $d_1 \times d_1$ matrix and similarly the right hand side (RHS) of \eqref{eq.24}
gives the elements $\lambda_{mn}$ of another $d_1 \times d_1$ matrix; equation \eqref{eq.24} is thus an entry-by-entry equality ($a_{mn}=\lambda_{mn}$) between the two matrices.
Choosing $m=n$ and summing equation \eqref{eq.24} over $1 \le m \le d_1$ implies that the sum along the diagonal of the matrix on the LHS is equal to the sum
along the diagonal of the matrix on the RHS or equivalently
\begin{equation}
\label{eq.27}
\sum_{m=1}^{d_1} \sum_{l=1}^d \sum_{j=1}^d {U_{mj}}^\intercal ({X'_1}^\intercal X'_1)_{jl} U_{lm} = \sum_{m=1}^{d_1} \lambda_{mm}.
\end{equation}
\noindent
Noting that the LHS of \eqref{eq.27} is the trace of the LHS of \eqref{eq.24} we can re-write \eqref{eq.27} as
\begin{equation}
\label{eq.27a}
\operatorname{tr} (U_{d_1}^\intercal {X'_1}^\intercal X'_1 U_{d_1}) = \sum_{m=1}^{d_1} \lambda_{mm}.
\end{equation}
\noindent
To interpret the $\lambda_{mm}$ we use the fact that the trace of a matrix is equal to the sum of its eigenvalues. Therefore, we can identify
the $\lambda_{mm}$ for $1 \le m \le d_1$ as the eigenvalues of the symmetric matrix $(X'_1 U_{d_1})^\intercal (X'_1 U_{d_1})$. These $d_1$ eigenvalues
are $d_1$ out of the total $d$ eigenvalues of ${X'_1}^\intercal X'_1$, which can be shown using the invariance of the trace under similarity transformations
(in this case under conjugacy). Using equation \eqref{eq.inv} we can re-write equation \eqref{eq.27a} for $d_1=d$ as
\begin{equation}
\operatorname{tr} (U_{d}^{-1} {X'_1}^\intercal X'_1 U_{d}) = \operatorname{tr} ( {X'_1}^\intercal X'_1 ) = \sum_{m=1}^{d} \lambda_{mm}.
\end{equation}
\noindent
Therefore, the maximum of the objective function $F=\operatorname{tr} (U_{d_1}^\intercal {X'_1}^\intercal X'_1 U_{d_1})$ in expression \eqref{eq.16} is equal to the sum of the $d_1$
largest eigenvalues of ${X'_1}^\intercal X'_1$. The orthonormal basis for the lower-dimensional subspace is therefore given by the set of the eigenvectors corresponding
to the $d_1$ largest eigenvalues of the symmetric matrix ${X'_1}^\intercal X'_1$.
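The conclusion above — that the maximal value of the objective is the sum of the $d_1$ largest eigenvalues, attained by the corresponding eigenvectors — can be illustrated with a short NumPy sketch (random data; the comparison against random orthonormal bases is our own sanity check, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, d1 = 60, 10, 4
X = rng.standard_normal((n, d))
S = X.T @ X

evals, evecs = np.linalg.eigh(S)             # ascending eigenvalues
U_opt = evecs[:, -d1:]                       # top-d1 eigenvectors

f_opt = np.trace(U_opt.T @ S @ U_opt)
assert np.isclose(f_opt, evals[-d1:].sum())  # optimum = sum of d1 largest eigenvalues

# Any other orthonormal basis does no better (Ky Fan maximum principle)
for _ in range(20):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d1)))
    assert np.trace(Q.T @ S @ Q) <= f_opt + 1e-9
```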
\subsection{Formulation of CSC}
Consider the binary classification problem with data matrices $X'_1 \in \mathbb{R}^{n \times d}$ and $X'_2 \in \mathbb{R}^{n \times d}$
corresponding to the two data classes $\mathcal{C}_1$ (noise points) and $\mathcal{C}_2$ (injection points) respectively. The number of data samples
in $\mathcal{C}_1$ is the same as the number of data samples in $\mathcal{C}_2$ and is equal to $n/2$. Both classes have $d$ features
(in our case $n/2=11350$ and $d=550$).
We attempt to find two linear subspaces $\mathcal{S}_{1} \subseteq \mathcal{C}_1$ and $\mathcal{S}_{2} \subseteq \mathcal{C}_2$
that best approximate the data classes. Without loss of generality we assume the dimensionality of these subspaces to be the same and equal to $d_1$. Let
\begin{equation}
\label{eq.28}
U = [u_1, u_2, \dots, u_{d_1}] \in \mathbb{R}^{d \times d_1}
\end{equation}
\noindent
and
\begin{equation}
\label{eq.29}
V = [v_1, v_2, \dots, v_{d_1}] \in \mathbb{R}^{d \times d_1}
\end{equation}
\noindent
represent matrices whose columns are orthonormal bases of the subspaces $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ respectively. If we attempted to find
$\mathcal{S}_{1}$ independently from $\mathcal{S}_{2}$ then we would have to capture the maximal variance of the data projected onto $\mathcal{S}_1$
separately from the maximal variance of the data projected onto $\mathcal{S}_2$. That would be equivalent to solving the following two optimization problems \citep{laaksonen1997local}
\begin{equation}
\label{eq.30}
\begin{split}
& \underset{U \in \mathbb{R}^{d \times d_1} } {\text{max}} \text{tr}( U^\intercal {X'_1}^\intercal X'_1 U) \\
& \text{subject to} \,\,\ U^\intercal U = I_{d_1}
\end{split}
\end{equation}
\noindent
and
\begin{equation}
\label{eq.31}
\begin{split}
& \underset{V \in \mathbb{R}^{d \times d_1} } {\text{max}} \text{tr}( V^\intercal {X'_2}^\intercal X'_2 V) \\
& \text{subject to} \,\,\ V^\intercal V = I_{d_1}.
\end{split}
\end{equation}
The solution to the optimization problem as shown in expression \eqref{eq.30} is given by the eigenvectors (the columns of the orthonormal basis $U$ of $\mathcal{S}_1$)
corresponding to the $d_1$ largest eigenvalues of the matrix ${X'_1}^\intercal X'_1$. Similarly, the solution to the optimization problem as shown in expression \eqref{eq.31}
is given by the eigenvectors (the columns of the orthonormal basis $V$ of $\mathcal{S}_2$) corresponding to the $d_1$ largest eigenvalues of the matrix ${X'_2}^\intercal X'_2$.
Though the subspaces $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are good approximations to the two classes $\mathcal{C}_1$ and $\mathcal{C}_2$ respectively, these projections
may not be the ideal ones for classification purposes as each one of them is obtained without the knowledge of the other.
In the constrained subspace classifier (CSC) the two subspaces are found simultaneously by considering their relative orientation. This way CSC allows for a trade-off between
maximizing the variance of the projected data on the two subspaces and the relative orientation between the two subspaces. The relative orientation between the two subspaces is generally defined
in terms of the principal angles. The optimization problem in CSC is formulated as follows
\begin{equation}
\label{CSC}
\begin{split}
\underset{ U, V \in \mathbb{R}^{d \times d_1}} {\text{max}} \text{tr}(U^\intercal {X'_1}^\intercal X'_1 U) & + \text{tr}( V^\intercal {X'_2}^\intercal X'_2 V)+ C \text{tr} (U^\intercal V V^\intercal U) \\
\text{subject to} \quad & U^\intercal U = I_{d_1} ,\quad V^\intercal V = I_{d_1}.
\end{split}
\end{equation}
\noindent
The last term of the objective function $G=\text{tr}(U^\intercal {X'_1}^\intercal X'_1 U) + \text{tr}( V^\intercal {X'_2}^\intercal X'_2 V)+ C \text{tr} (U^\intercal V V^\intercal U)$
is a measure of the relative orientation between the two subspaces as defined in \citep{CSC_paper}. The parameter $C$
controls the trade-off between the relative orientation of the subspaces and the cumulative variance of the data as projected onto the two subspaces. For large positive
values of $C$, the relative orientation between the subspaces decreases (the two subspaces become more `parallel'), while for large negative values of $C$, the relative
orientation increases (the two subspaces become more `perpendicular' to each other).
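The orientation term $\text{tr}(U^\intercal V V^\intercal U)$ equals the sum of the squared cosines of the principal angles between the two subspaces (the singular values of $U^\intercal V$); a quick NumPy check on random orthonormal bases:

```python
import numpy as np

rng = np.random.default_rng(3)
d, d1 = 10, 3
U, _ = np.linalg.qr(rng.standard_normal((d, d1)))
V, _ = np.linalg.qr(rng.standard_normal((d, d1)))

# Singular values of U^T V are the cosines of the principal angles
cosines = np.linalg.svd(U.T @ V, compute_uv=False)

# The orientation term of the CSC objective equals the sum of squared cosines
orientation = np.trace(U.T @ V @ V.T @ U)
assert np.isclose(orientation, np.sum(cosines ** 2))
```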
This problem is solved using an alternating optimization algorithm described in \citep{CSC_paper}. For a fixed $V$, expression \eqref{CSC} reduces to
\begin{equation}
\label{CSC1}
\begin{aligned}
& \underset{ U \in \mathbb{R}^{d \times d_1}}{\text{max}} \text{tr}( U^\intercal ({X'_1}^\intercal X'_1 + C V V^\intercal) U) \\
& \text{subject to} \,\,\ U^\intercal U = I_{d_1}.
\end{aligned}
\end{equation}
\noindent
The solution to the optimization problem \eqref{CSC1} is obtained by choosing the eigenvectors corresponding to the $d_1$ largest eigenvalues of the symmetric matrix
${X'_1}^\intercal X'_1 + C V V^\intercal$. Similarly, for a fixed $U$, expression \eqref{CSC} reduces to
\begin{equation}
\label{CSC2}
\begin{split}
& \underset{ V \in \mathbb{R}^{d \times d_1}} {\text{max}} \text{tr} ( V^\intercal ({X'_2}^\intercal X'_2 + C U U^\intercal) V) \\
& \text{subject to} \,\,\ V^\intercal V = I_{d_1}
\end{split}
\end{equation}
\noindent
where the solution to the optimization problem \eqref{CSC2} is again obtained by choosing the eigenvectors corresponding to the $d_1$ largest eigenvalues of the symmetric
matrix ${X'_2}^\intercal X'_2 + C U U^\intercal$.
The algorithm for CSC can be summarized as follows:
\begin{algorithm} [H]
\caption{CSC ($X'_1, X'_2$, $d_1$, $C$)}
\label{alg1}
\begin{algorithmic}
\begin{small}
\STATE 1. Initialize $U$ and $V$ such that $U^\intercal U = I_{d_1}$, $V^\intercal V = I_{d_1}$.
\STATE 2. Find eigenvectors corresponding to the $d_1$ largest eigenvalues of the symmetric matrix $ {X'_1}^\intercal X'_1 + C V V^\intercal$.
\STATE 3. Find eigenvectors corresponding to the $d_1$ largest eigenvalues of the symmetric matrix $ {X'_2}^\intercal X'_2 + C U U^\intercal$.
\STATE 4. Alternate between 2 and 3 until one of the termination rules below is satisfied.
\end{small}
\end{algorithmic}
\end{algorithm}
\noindent
We define the following three termination rules:
\begin{itemize}
\item Maximum limit $Z$ on the number of iterations,
\item Relative change in $U$ and $V$ at iteration $m$ and $m+1$,
\begin{equation}
\label{eq.tol}
\begin{split}
\text{tol}_U^m = & \frac{\| U^{(m+1)} - U^{(m)} \|_{F}}{\sqrt{N}}, \\
\text{tol}_V^m = & \frac{\| V^{(m+1)} - V^{(m)} \|_{F}}{\sqrt{N}}
\end{split}
\end{equation}
\noindent
where $N$ = $d\times d_1$ and the subscript $F$ denotes the Frobenius norm.
\item Relative change in the value of the objective function $G$ as shown in expression \eqref{CSC} at iterations $m$ and $m+1$,
\begin{equation}
\text{tol}_{f}^{m} = \frac{G^{(m+1)} - G^{(m)}}{|G^{(m)}| + 1}.
\end{equation}
\end{itemize}
\noindent
The value of $Z$ was set to $2000$, while the thresholds on $\text{tol}_{f}^{m}$, $\text{tol}_U^m$ and $\text{tol}_V^m$ were all set to the same value of $10^{-6}$. From equation \eqref{eq.Frob}
we see that the factor of $1/\sqrt{N}$ in \eqref{eq.tol} averages the squares of all the entries of the matrix $(U^{(m+1)} - U^{(m)})$ or $(V^{(m+1)} - V^{(m)})$.
This normalization keeps the tolerance values independent of the data set. \\
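Algorithm \ref{alg1} with the relative-change termination rule can be sketched in a few lines of NumPy; the function name \texttt{csc\_fit} and the random initialization below are our own illustrative choices, not taken from the original implementation:

```python
import numpy as np

def top_eigvecs(S, d1):
    """Eigenvectors of the d1 largest eigenvalues of the symmetric matrix S."""
    _, vecs = np.linalg.eigh(S)       # eigh returns eigenvalues in ascending order
    return vecs[:, -d1:]

def csc_fit(X1, X2, d1, C, Z=2000, tol=1e-6):
    """Alternating optimization for the CSC subspaces (illustrative sketch)."""
    d = X1.shape[1]
    N = d * d1                         # normalization used in the tolerances
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((d, d1)))   # step 1: orthonormal init
    V, _ = np.linalg.qr(rng.standard_normal((d, d1)))
    S1, S2 = X1.T @ X1, X2.T @ X2
    for _ in range(Z):                 # step 4: alternate until termination
        U_new = top_eigvecs(S1 + C * V @ V.T, d1)           # step 2
        V_new = top_eigvecs(S2 + C * U_new @ U_new.T, d1)   # step 3
        tol_U = np.linalg.norm(U_new - U) / np.sqrt(N)      # relative-change rule
        tol_V = np.linalg.norm(V_new - V) / np.sqrt(N)
        U, V = U_new, V_new
        if max(tol_U, tol_V) < tol:
            break
    return U, V
```

The returned $U$ and $V$ are orthonormal by construction, since each update takes eigenvectors of a symmetric matrix.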
After solving the optimization problem \eqref{CSC} (by utilizing algorithm \ref{alg1}) a new point $x$ is classified by computing its (squared) distances from the two
subspaces $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, defined by
\begin{equation}
\text{dist}( x,\mathcal{S}_1) = \| x^\intercal - U U^\intercal x^\intercal \|^2_F = x x^\intercal - \text{tr}( U^\intercal x^\intercal x U)
\end{equation}
\noindent
and
\begin{equation}
\label{eq.32}
\text{dist}( x,\mathcal{S}_2) = \| x^\intercal - V V^\intercal x^\intercal \|^2_F = x x^\intercal - \text{tr}( V^\intercal x^\intercal x V).
\end{equation}
\noindent
The class of $x$ is defined by
\begin{equation}
\label{eq.33}
\text{class}(x) = \underset{ i \in \{1,2 \} }{\operatorname{arg\,min}} \,\ \text{dist}(x,\mathcal{S}_{i}).
\end{equation}
\noindent
In our case, if $x$ is closer to $\mathcal{S}_1$ then $x$ is classified as noise (or `no signal') and if $x$ is closer to $\mathcal{S}_2$ then $x$ is classified as
an r-mode injection (or `presence of signal').
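The classification rule can be implemented by comparing the residual (reconstruction-error) distances of a point to the two subspaces; a minimal sketch, treating $x$ as a length-$d$ vector:

```python
import numpy as np

def classify(x, U, V):
    """Assign x to class 1 (noise) or class 2 (injection) by nearest subspace."""
    dist1 = np.linalg.norm(x - U @ (U.T @ x)) ** 2   # squared distance to S1
    dist2 = np.linalg.norm(x - V @ (V.T @ x)) ** 2   # squared distance to S2
    return 1 if dist1 <= dist2 else 2
```

For example, a point lying exactly in the span of the columns of $U$ has zero residual with respect to $\mathcal{S}_1$ and is assigned to class 1.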
\section{Results and discussion}
\label{comparison}
When using the conventional clustering algorithm in \citep{rmodespaper} the false alarm rate (FAR), or rate of false positives, is easily controlled by adjusting the SNR threshold above which a ft-map is considered to
include an r-mode signal. This is not the case for the MLAs we used, where the FAR is obtained only after the training is performed, as part of the training output. For this reason, to draw fair
comparisons, we adjusted the FAR of the conventional algorithm to match the output FAR of the MLAs. In each of the tables II-VI, the results of the conventional algorithm are
presented as 4-tuples: the first entry corresponds to a sensitivity result with a fixed FAR equal to $\unit{0.1}\%$ and the second, third and fourth entries are results corresponding
to the same FAR as that of the ANN, SVM and CSC respectively. The results presented in tables III, IV and VI are also plotted in Fig.~\ref{Fig:ef4}, Fig.~\ref{Fig:ef5}
and Fig.~\ref{Fig:ef6} respectively.
In table I we present the results of the conventional algorithm and the three MLAs on the $(\alpha=0.1, f_o=\unit[1500]{Hz})$ waveform. The MLAs were trained with data produced with $h$
taking values over the range of $10^{-24} \le h \le 10^{-21} $. The MLAs did not outperform the conventional algorithm. The amount of training data was limited
by the finite amount of data collected during the S5 LIGO run, and was too small for the MLAs to achieve generalization over such a wide range of $h$;
hence the training efficiencies are too low. To avoid this, the next steps involved training on the same number of data
over smaller ranges of values of $h$.
In table II we present the detection efficiency results for the conventional algorithm and the three MLAs on the $(\alpha=0.1, f_o=\unit[1500]{Hz})$ waveform.
The MLAs were trained with data produced with $h$ taking values over the range of $10^{-23.2} \le h \le 10^{-22.7}$. The MLAs were not expected to outperform the
conventional algorithm because they were trained on a data set whose injection distances were lower than the distance at which the
conventional algorithm had a 50\% detection efficiency.
In table III we present the detection efficiency results for the conventional algorithm and the three MLAs on the $(\alpha=0.1, f_o=\unit[1500]{Hz})$ waveform.
The MLAs were trained with data produced with $h$ taking values over the range of $10^{-23.7} \le h \le 10^{-23.2}$. The training of the MLAs on this training
set resulted in false alarm rates of 4\%, 5\% and 10\% for the ANN, SVM and CSC respectively.
At the 50\% false dismissal rate (FDR), the ANN shows an increase of $\sim$ 20\% in the detection distance - from \unit{1.5}Mpc (of the conventional algorithm
dash-dot blue line) to \unit{1.8}Mpc. The SVM shows an increase of $\sim$ 16\% - from \unit{1.55}Mpc (of the conventional algorithm dash-dot green line) to
\unit{1.8}Mpc. The CSC shows an increase of $\sim$ 10\% - from \unit{1.6}Mpc (of the conventional algorithm dash-dot red line) to \unit{1.75}Mpc.
In table IV we present the detection efficiency results for the conventional algorithm and the three MLAs on the $(\alpha=0.1, f_o=\unit[1500]{Hz})$ waveform.
The latter were trained with data produced with $h$ taking values over the range of $ 10^{-24.0} \le h \le 10^{-23.7}$. The training of the MLAs on this training
set resulted in high false alarm rates of 18\%, 22\% and 36\% for the ANN, SVM and CSC respectively.
At the 50\% FDR, both the ANN and the SVM algorithms show an increase of $\sim$ 75\% in the detection distance - from \unit{1.6}Mpc (of the
conventional algorithm dash-dot blue and green lines) to \unit{2.8}Mpc. The CSC shows no increase - both dash-dot and solid red lines stay at 50\% up to distances of
$\sim$ \unit{2.9}Mpc. The distance range covered in this set has a practical significance because it covers: (a) the distance of $\unit[3.5]{Mpc}$ at which the January
2014 supernova occurred in M82 and (b) the distance of $\unit[5]{Mpc}$ within which the supernova event rate in the Milky Way neighborhood is about 1 every 1-2 years.
In table V we present the detection efficiency results for the conventional algorithm and the three MLAs on the $(\alpha=0.01, f_o=\unit[1100]{Hz})$ waveform.
This is a much weaker signal than the $(\alpha=0.1, f_o=\unit[1500]{Hz})$ waveform, as can be seen from \eqref{eq.3aa}.
The MLAs were trained with data produced with $h$ taking values over the range of $10^{-23.7} \le h \le 10^{-23.2}$. The MLAs did not outperform the conventional algorithm.
This was expected because they were trained on a data set whose injection distances were lower than the distance at which the conventional algorithm had a 50\% detection efficiency.
In table VI we present the detection efficiency results for the conventional algorithm and the three MLAs on the $(\alpha=0.01, f_o=\unit[1100]{Hz})$ waveform. The MLAs were trained
with data produced with $h$ taking values over the range of $10^{-24} \le h \le 10^{-23.7}$. The training of the MLAs on this training set resulted in false alarm rates of 18\%, 22\% and 36\%
for the ANN, SVM and CSC respectively. At the 50\% FDR, the ANN shows an increase of $\sim$ 20\% in the detection distance - from \unit{175}Kpc (of the conventional algorithm
dash-dot blue line) to \unit{210}Kpc. The SVM shows no increase while the CSC shows a small increase.
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.1$ and $f_o=\unit[1500]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $10^{-24} \le h \le 10^{-21}$. }
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[1170]{Kpc}$) \footnote{$\unit[1170]{Kpc}$ is the distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.1$ and $f_0=1500$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{Results are based on detection efficiencies on full resolution maps.}
& ANN (\%) \footnote{Highest training efficiency (99\%) with parameter values: momentum=0.9, learning rate=0.02. True positive 98\%. False positive: $< 0.01$\%}
& SVM (\%) \footnote{Highest training efficiency (98\%) with parameter values: C$=10^3$. True positive: 98 \%. False positive: $< 0.01$\%.}
& CSC (\%) \footnote{Highest training efficiency (90\%) with parameter values: $d_1$=100, C$=1$. True positive: 80 \%. False positive: $< 0.01$\%. } \\ \hline
0.1 & $4.33 \times10^{-23}$ & 100 & 100 & 100 & 0 \\
0.2 & $2.16 \times10^{-23}$ & 100 & 100 & 100 & 0 \\
0.3 & $1.44 \times10^{-23}$ & 100 & 100 & 96 & 0 \\
0.4 & $1.08 \times10^{-23}$ & 100 & 83 & 47 & 0 \\
0.5 & $8.65 \times10^{-24}$ & 100 & 42 & 13 & 0 \\
0.6 & $7.21 \times10^{-24}$ & 100 & 17 & 0 & 0 \\
0.7 & $6.18 \times10^{-24}$ & 100 & 3 & 0 & 0 \\
0.8 & $5.41 \times10^{-24}$ & 98 & 1 & 0 & 0 \\
0.9 & $4.81 \times10^{-24}$ & 76 & 0 & 0 & 0 \\
\textcolor{red}{1.0} & \textcolor{red}{$4.33 \times10^{-24}$} & \textcolor{red}{50} & \textcolor{red}{0} & \textcolor{red}{0} & \textcolor{red}{0} \\
1.1 & $3.93 \times10^{-24}$ & 29 & 0 & 0 & 0 \\
1.2 & $3.61 \times10^{-24}$ & 13 & 0 & 0 & 0 \\
1.3 & $3.33 \times10^{-24}$ & 4 & 0 & 0 & 0 \\
1.4 & $3.09 \times10^{-24}$ & 4 & 0 & 0 & 0 \\
1.5 & $2.88 \times10^{-24}$ & 1 & 0 & 0 & 0 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.1$ and $f_o=\unit[1500]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $6.31 \times10^{-24} = 10^{-23.2} \le h \le 10^{-22.7} = 2.00 \times10^{-23}$. The signal amplitudes that lie within this range are
in blue text while the distance at which the conventional algorithm detects $50\%$ with FAR = $\unit{0.1}\%$ is in red text.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[1170]{Kpc}$) \footnote{$\unit[1170]{Kpc}$ is the distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.1$ and $f_0=1500$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{These 4-tuples are detection efficiencies with FAR= (0.1\%, 0.7\%, 0.2\%, 1\%) that were obtained on full resolution maps. The second, third
and fourth entries are to be compared with the ANN, SVM and CSC results respectively.}
& ANN (\%) \footnote{Highest training efficiency (98\%) with parameter values: momentum=0.9, learning rate=0.02. True positive 97\%. False positive: 0.7\%}
& SVM (\%) \footnote{Highest training efficiency (98\%) with parameter values: C$=10^3$. True positive: 96\%. False positive: 0.2\%.}
& CSC (\%) \footnote{Highest training efficiency (95\%) with parameter values: $d_1$=100, C$=10^3$. True positive: 92\%. False positive: 1\%.} \\ \hline
0.1 & $4.33 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
\textcolor{blue}{0.2} & \textcolor{blue}{$2.16 \times10^{-23}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{100} & \textcolor{blue}{100} & \textcolor{blue}{100} \\
\textcolor{blue}{0.3} & \textcolor{blue}{$1.44 \times10^{-23}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{100} & \textcolor{blue}{100} & \textcolor{blue}{100} \\
\textcolor{blue}{0.4} & \textcolor{blue}{$1.08 \times10^{-23}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{100} & \textcolor{blue}{100} & \textcolor{blue}{100} \\
\textcolor{blue}{0.5} & \textcolor{blue}{$8.65 \times10^{-24}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{100} & \textcolor{blue}{100} & \textcolor{blue}{96} \\
\textcolor{blue}{0.6} & \textcolor{blue}{$7.21 \times10^{-24}$} & \textcolor{blue}{(100, 98, 98, 98)} & \textcolor{blue}{98} & \textcolor{blue}{97} & \textcolor{blue}{89} \\
\textcolor{blue}{0.7} & \textcolor{blue}{$6.18 \times10^{-24}$} & \textcolor{blue}{(100, 97, 97, 97)} & \textcolor{blue}{89} & \textcolor{blue}{81} & \textcolor{blue}{64} \\
0.8 & $5.41 \times10^{-24}$ & (98, 97, 97, 98) & 76 & 53 & 45 \\
0.9 & $4.81 \times10^{-24}$ & (76, 96, 87, 96) & 54 & 12 & 20 \\
\textcolor{red}{1.0} & \textcolor{red}{$4.33 \times10^{-24}$} & \textcolor{red}{(50, 90, 73, 93)} & \textcolor{red}{44} & \textcolor{red}{9} & \textcolor{red}{19} \\
1.1 & $3.93 \times10^{-24}$ & (29, 67, 45, 69) & 32 & 2 & 9 \\
1.2 & $3.61 \times10^{-24}$ & (13, 39, 21, 41) & 33 & 2 & 7 \\
1.3 & $3.33 \times10^{-24}$ & (4, 15, 9, 19) & 22 & 0 & 10 \\
1.4 & $3.09 \times10^{-24}$ & (4, 8, 4, 11) & 13 & 0 & 6 \\
1.5 & $2.88 \times10^{-24}$ & (1, 6, 2, 7) & 8 & 0 & 10
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{widetext}
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.1$ and $f_o=\unit[1500]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $2.00 \times 10^{-24} = 10^{-23.7} \le h \le 10^{-23.2} = 6.31 \times 10^{-24}$. The signal amplitudes that lie within this range are
in blue text while the distance at which the conventional algorithm detects $50\%$ with FAR = $\unit{0.1}\%$ is in red text.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[1170]{Kpc}$) \footnote{$\unit[1170]{Kpc}$ is the distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.1$ and $f_0=1500$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{These 4-tuples are detection efficiencies with FAR= (0.1\%, 4\%, 5\%, 10\%) that were obtained on full resolution maps. The second, third
and fourth entries are to be compared with the ANN, SVM and CSC results respectively. }
& ANN (\%) \footnote{Highest training efficiency (88\%) with parameter values: momentum=0.9, learning rate=0.02. True positive: 91\%. False positive: 4\%.}
& SVM (\%) \footnote{Highest training efficiency (89\%) with parameter values: C=$10^5$. True positive: 88\%. False positive: 5\%.}
& CSC (\%) \footnote{Highest training efficiency (83\%) with parameter values: $d_1$=100, C$=10^4$. True positive: 77\%. False positive: 10\%.} \\ \hline
0.1 & $4.33 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.2 & $2.16 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.3 & $1.44 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.4 & $1.08 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.5 & $8.65 \times10^{-24}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.6 & $7.21 \times10^{-24}$ & (100, 99, 100, 100) & 100 & 100 & 100 \\
\textcolor{blue}{0.7} & \textcolor{blue}{$6.18 \times10^{-24}$} & \textcolor{blue}{(100, 98, 98, 98)} & \textcolor{blue}{100}& \textcolor{blue}{99} & \textcolor{blue}{99} \\
\textcolor{blue}{0.8} & \textcolor{blue}{$5.41 \times10^{-24}$} & \textcolor{blue}{(98, 98, 98, 98)} & \textcolor{blue}{99} & \textcolor{blue}{97} & \textcolor{blue}{98} \\
\textcolor{blue}{0.9} & \textcolor{blue}{$4.81 \times10^{-24}$} & \textcolor{blue}{(76, 98, 98, 98)} & \textcolor{blue}{97} & \textcolor{blue}{92} & \textcolor{blue}{94} \\
\textcolor{red}{1.0} & \textcolor{red}{$4.33 \times10^{-24}$} & \textcolor{red}{(50, 97, 98, 98)} & \textcolor{red}{90} & \textcolor{red}{90} & \textcolor{red}{88} \\
\textcolor{blue}{1.1} & \textcolor{blue}{$3.93 \times10^{-24}$} & \textcolor{blue}{(29, 85, 87, 94)} & \textcolor{blue}{88} & \textcolor{blue}{90} & \textcolor{blue}{77} \\
\textcolor{blue}{1.2} & \textcolor{blue}{$3.61 \times10^{-24}$} & \textcolor{blue}{(13, 63, 67, 79)} & \textcolor{blue}{92} & \textcolor{blue}{93} & \textcolor{blue}{70} \\
\textcolor{blue}{1.3} & \textcolor{blue}{$3.33 \times10^{-24}$} & \textcolor{blue}{(4, 41, 55, 64)} & \textcolor{blue}{77} & \textcolor{blue}{79} & \textcolor{blue}{68} \\
\textcolor{blue}{1.4} & \textcolor{blue}{$3.09 \times10^{-24}$} & \textcolor{blue}{(4, 21, 25, 38)} & \textcolor{blue}{74} & \textcolor{blue}{74} & \textcolor{blue}{67} \\
\textcolor{blue}{1.5} & \textcolor{blue}{$2.88 \times10^{-24}$} & \textcolor{blue}{(1, 9, 13, 24)} & \textcolor{blue}{69} & \textcolor{blue}{70} & \textcolor{blue}{55} \\
\textcolor{blue}{1.6} & \textcolor{blue}{$2.70 \times10^{-24}$} & \textcolor{blue}{(0, 19, 23, 36)} & \textcolor{blue}{45} & \textcolor{blue}{44} & \textcolor{blue}{37} \\
\textcolor{blue}{1.7} & \textcolor{blue}{$2.55 \times10^{-24}$} & \textcolor{blue}{(0, 19, 23, 30)} & \textcolor{blue}{40} & \textcolor{blue}{42} & \textcolor{blue}{30} \\
\textcolor{blue}{1.8} & \textcolor{blue}{$2.40 \times10^{-24}$} & \textcolor{blue}{(0, 16, 16, 19)} & \textcolor{blue}{45} & \textcolor{blue}{49} & \textcolor{blue}{35} \\
\textcolor{blue}{1.9} & \textcolor{blue}{$2.28 \times10^{-24}$} & \textcolor{blue}{(0, 27, 31, 37)} & \textcolor{blue}{35} & \textcolor{blue}{29} & \textcolor{blue}{30} \\
\textcolor{blue}{2.0} & \textcolor{blue}{$2.16 \times10^{-24}$} & \textcolor{blue}{(0, 17, 21, 30)} & \textcolor{blue}{26} & \textcolor{blue}{23} & \textcolor{blue}{25} \\
\textcolor{blue}{2.1} & \textcolor{blue}{$2.06 \times10^{-24}$} & \textcolor{blue}{(0, 13, 15, 22)} & \textcolor{blue}{33} & \textcolor{blue}{27} & \textcolor{blue}{23} \\
2.2 & $1.97 \times10^{-24}$ & (0, 20, 24, 29) & 30 & 23 & 27 \\
2.3 & $1.88 \times10^{-24}$ & (0, 29, 33, 41) & 30 & 29 & 26 \\
2.4 & $1.80 \times10^{-24}$ & (0, 16, 22, 33) & 18 & 16 & 16 \\
2.5 & $1.73 \times10^{-24}$ & (0, 20, 26, 33) & 30 & 19 & 20 \\
2.6 & $1.66 \times10^{-24}$ & (0, 3, 4, 11) & 8 & 6 & 8 \\
2.7 & $1.60 \times10^{-24}$ & (0, 1, 2, 9) & 16 & 11 & 19 \\
2.8 & $1.55 \times10^{-24}$ & (0, 8,10,16) & 15 & 12 & 17 \\
2.9 & $1.49 \times10^{-24}$ & (0, 4, 5, 8) & 26 & 18 & 20 \\
3.0 \footnote{This is the 3.5 Mpc distance at which the M82 supernova exploded in January 2014.}
& $1.44 \times10^{-24}$ & (0, 1, 3, 7) & 15 & 7 & 22
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{sidewaysfigure}[htbp!]
\centering
\includegraphics[width=1.05 \linewidth]{table_III-eps-converted-to.pdf}
\caption{These are the detection efficiencies for the $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ waveform. This waveform produces the most powerful signal that the 1998 model predicts.
This plot demonstrates that (when compared for the same FAR) the MLAs performance is at least as good as that of the conventional algorithm. At the 50\% false dismissal
rate (FDR), the ANN shows an increase of $\sim$ 20\% in the detection distance - from \unit{1.5}Mpc (of the conventional algorithm dash-dot blue line) to \unit{1.8}Mpc.
The SVM shows an increase of $\sim$ 16\% - from \unit{1.55}Mpc (of the conventional algorithm dash-dot green line) to \unit{1.8}Mpc. The CSC shows an increase of $\sim$ 10\%
- from \unit{1.6}Mpc (of the conventional algorithm dash-dot red line) to \unit{1.75}Mpc.} \label{Fig:ef4}
\end{sidewaysfigure}
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.1$ and $f_o=\unit[1500]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $ 10^{-24.0} \le h \le 10^{-23.7} = 2.00 \times 10^{-24}$. The signal amplitudes that lie within this range are
in blue text while the distance at which the conventional algorithm detects $50\%$ with FAR = $\unit{0.1}\%$ is in red text.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[1170]{Kpc}$) \footnote{$\unit[1170]{Kpc}$ is the distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.1$ and $f_0=1500$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{These 4-tuples are detection efficiencies with FAR= (0.1\%, 18\%, 22\%, 36\%) that were obtained on full resolution maps. The second, third and fourth
entries are to be compared with the ANN, SVM and CSC results respectively.}
& ANN (\%) \footnote{Highest training efficiency (68\%) with parameter values: momentum=0.9, learning rate=0.02. True positive: 72\%. False positive: 18\%.}
& SVM (\%) \footnote{Highest training efficiency (64\%) with parameter values: C=$10^5$. True positive: 61\%. False positive: 22\%.}
& CSC (\%) \footnote{Highest training efficiency (60\%) with parameter values: $d_1$=200, C=$10^5$. True positive: 62\%. False positive: 36\%.} \\ \hline
0.4 & $1.08 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.5 & $8.65 \times10^{-24}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.6 & $7.21 \times10^{-24}$ & (100, 98, 98, 98) & 100 & 100 & 100 \\
0.7 & $6.18 \times10^{-24}$ & (100, 97, 97, 97) & 100 & 100 & 98 \\
0.8 & $5.41 \times10^{-24}$ & (98, 98, 98, 98) & 97 & 98 & 95 \\
0.9 & $4.81 \times10^{-24}$ & (76, 98, 98, 98) & 98 & 100 & 93 \\
\textcolor{red}{1.0} & \textcolor{red}{$4.33 \times10^{-24}$} & \textcolor{red}{(50, 98, 98, 98)} & \textcolor{red}{93} & \textcolor{red}{98} & \textcolor{red}{91} \\
1.1 & $3.93 \times10^{-24}$ & (29, 96, 96, 98) & 93 & 98 & 83 \\
1.2 & $3.61 \times10^{-24}$ & (13, 92, 92, 96) & 96 & 99 & 92 \\
1.3 & $3.33 \times10^{-24}$ & (4, 65, 65, 73) & 91 & 93 & 82 \\
1.4 & $3.09 \times10^{-24}$ & (4, 50, 50, 64) & 94 & 92 & 81 \\
1.5 & $2.88 \times10^{-24}$ & (1, 37, 37, 49) & 92 & 90 & 78 \\
1.6 & $2.70 \times10^{-24}$ & (0, 46, 46, 52) & 68 & 72 & 61 \\
1.7 & $2.55 \times10^{-24}$ & (0, 44, 44, 54) & 70 & 70 & 61 \\
1.8 & $2.40 \times10^{-24}$ & (0, 35, 35, 51) & 77 & 72 & 63 \\
1.9 & $2.28 \times10^{-24}$ & (0, 49, 49, 57) & 55 & 61 & 51 \\
2.0 & $2.16 \times10^{-24}$ & (0, 43, 43, 55) & 55 & 59 & 56 \\
2.1 & $2.06 \times10^{-24}$ & (0, 41, 41, 52) & 62 & 61 & 61 \\
\textcolor{blue}{2.2} & \textcolor{blue}{$1.97 \times10^{-24}$} & \textcolor{blue}{(0, 42, 42, 50)} & \textcolor{blue}{61} & \textcolor{blue}{64} & \textcolor{blue}{59} \\
\textcolor{blue}{2.3} & \textcolor{blue}{$1.88 \times10^{-24}$} & \textcolor{blue}{(0, 50 , 50 , 57)} & \textcolor{blue}{61} & \textcolor{blue}{61} & \textcolor{blue}{59} \\
\textcolor{blue}{2.4} & \textcolor{blue}{$1.80 \times10^{-24}$} & \textcolor{blue}{(0, 45, 45, 53)} & \textcolor{blue}{52} & \textcolor{blue}{54} & \textcolor{blue}{50} \\
\textcolor{blue}{2.5} & \textcolor{blue}{$1.73 \times10^{-24}$} & \textcolor{blue}{(0, 42, 42, 48)} & \textcolor{blue}{50} & \textcolor{blue}{49} & \textcolor{blue}{52} \\
\textcolor{blue}{2.6} & \textcolor{blue}{$1.66 \times10^{-24}$} & \textcolor{blue}{(0, 28, 28, 35)} & \textcolor{blue}{55} & \textcolor{blue}{37} & \textcolor{blue}{44} \\
\textcolor{blue}{2.7} & \textcolor{blue}{$1.60 \times10^{-24}$} & \textcolor{blue}{(0, 21, 21, 40)} & \textcolor{blue}{76} & \textcolor{blue}{46} & \textcolor{blue}{58} \\
\textcolor{blue}{2.8} & \textcolor{blue}{$1.55 \times10^{-24}$} & \textcolor{blue}{(0, 24, 24, 33)} & \textcolor{blue}{67} & \textcolor{blue}{42} & \textcolor{blue}{50} \\
\textcolor{blue}{2.9} & \textcolor{blue}{$1.49 \times10^{-24}$} & \textcolor{blue}{(0, 22, 22, 37)} & \textcolor{blue}{61} & \textcolor{blue}{48} & \textcolor{blue}{59} \\
\textcolor{blue}{3.0} \footnote{This is the 3.5 Mpc distance at which the M82 supernova exploded in January 2014.}
& \textcolor{blue}{$1.44 \times10^{-24}$} & \textcolor{blue}{(0, 14, 14, 26)} & \textcolor{blue}{64} & \textcolor{blue}{42} & \textcolor{blue}{51} \\
\textcolor{blue}{3.1} & \textcolor{blue}{$1.40 \times10^{-24}$} & \textcolor{blue}{(0, 36, 36, 40)} & \textcolor{blue}{47} & \textcolor{blue}{43} & \textcolor{blue}{41} \\
\textcolor{blue}{3.2} & \textcolor{blue}{$1.35 \times10^{-24}$} & \textcolor{blue}{(0, 34, 34, 44)} & \textcolor{blue}{41} & \textcolor{blue}{39} & \textcolor{blue}{41} \\
\textcolor{blue}{3.3} & \textcolor{blue}{$1.31 \times10^{-24}$} & \textcolor{blue}{(0, 34, 34, 44)} & \textcolor{blue}{51} & \textcolor{blue}{42} & \textcolor{blue}{43} \\
\textcolor{blue}{3.4} & \textcolor{blue}{$1.27 \times10^{-24}$} & \textcolor{blue}{(0, 54, 54, 57)} & \textcolor{blue}{40} & \textcolor{blue}{41} & \textcolor{blue}{40} \\
\textcolor{blue}{3.5} & \textcolor{blue}{$1.24 \times10^{-24}$} & \textcolor{blue}{(0, 42, 42, 57)} & \textcolor{blue}{41} & \textcolor{blue}{36} & \textcolor{blue}{46} \\
\textcolor{blue}{3.6} & \textcolor{blue}{$1.20 \times10^{-24}$} & \textcolor{blue}{(0, 41, 41, 47)} & \textcolor{blue}{43} & \textcolor{blue}{46} & \textcolor{blue}{49} \\
\textcolor{blue}{3.7} & \textcolor{blue}{$1.17 \times10^{-24}$} & \textcolor{blue}{(0, 44, 44, 48)} & \textcolor{blue}{43} & \textcolor{blue}{44} & \textcolor{blue}{46} \\
\textcolor{blue}{3.8} & \textcolor{blue}{$1.14 \times10^{-24}$} & \textcolor{blue}{(0, 54, 54, 63)} & \textcolor{blue}{48} & \textcolor{blue}{53} & \textcolor{blue}{55} \\
\textcolor{blue}{3.9} & \textcolor{blue}{$1.11 \times10^{-24}$} & \textcolor{blue}{(0, 38, 38, 48)} & \textcolor{blue}{43} & \textcolor{blue}{43} & \textcolor{blue}{43} \\
\textcolor{blue}{4.0} & \textcolor{blue}{$1.08 \times10^{-24}$} & \textcolor{blue}{(0, 42, 42, 48)} & \textcolor{blue}{35} & \textcolor{blue}{36} & \textcolor{blue}{44} \\
\textcolor{blue}{4.1} & \textcolor{blue}{$1.05 \times10^{-24}$} & \textcolor{blue}{(0, 32, 32, 39)} & \textcolor{blue}{30} & \textcolor{blue}{26} & \textcolor{blue}{40} \\
\textcolor{blue}{4.2} & \textcolor{blue}{$1.03 \times10^{-24}$} & \textcolor{blue}{(0, 21, 21, 40)} & \textcolor{blue}{37} & \textcolor{blue}{30} & \textcolor{blue}{44} \\
\textcolor{blue}{4.3} \footnote{This is the 5 Mpc distance for which the supernova event rate (in the Milky Way neighborhood) is 1 every 1-2 years.}
& \textcolor{blue}{$1.00 \times10^{-24}$} & \textcolor{blue}{(0, 23, 23, 32)} & \textcolor{blue}{34} & \textcolor{blue}{33} & \textcolor{blue}{40} \\
4.4 & $9.83 \times10^{-25}$ & (0, 24, 24, 38) & 34 & 41 & 52 \\
4.5 & $9.61 \times10^{-25}$ & (0, 19, 19, 31) & 41 & 33 & 48
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{sidewaysfigure}[htbp!]
\centering
\includegraphics[width=1.05 \linewidth]{table_IV-eps-converted-to.pdf}
\caption{These are the detection efficiencies for the $(f_o=\unit[1500]{Hz}, \alpha=0.1)$ waveform. This waveform produces the most powerful signal that the 1998 model predicts.
At the 50\% FDR, both the ANN and the SVM algorithms show an increase of $\sim$75\% in the detection distance - from \unit[1.6]{Mpc} (of the
conventional algorithm, dash-dot blue and green lines) to \unit[2.8]{Mpc}. The CSC shows no increase - both the dash-dot and solid red lines stay at 50\% up to distances of
$\sim$\unit[2.9]{Mpc}. The distance range covered in this set has practical significance because it covers: (a) the distance of $\unit[3.5]{Mpc}$ at which the January
2014 supernova occurred in M82 and (b) the distance of $\unit[5]{Mpc}$ at which the supernova event rate in the Milky Way neighborhood is about 1 every 1-2 years.} \label{Fig:ef5}
\end{sidewaysfigure}
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.01$ and $f_o=\unit[1100]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $2.00 \times 10^{-24} = 10^{-23.7} \le h \le 10^{-23.2} = 6.31 \times 10^{-24}$. The signal amplitudes that lie within this range are
in blue text while the distance at which the conventional algorithm detects $50\%$ with FAR = $\unit{0.1}\%$ is in red text.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[133]{Kpc}$) \footnote{$\unit[133]{Kpc}$ is the distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.01$ and $f_0=1100$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{These 4-tuples are detection efficiencies with FAR= (0.1\%, 4\%, 5\%, 10\%) that were obtained on full resolution maps. The second, third and fourth
entries are to be compared with the ANN, SVM and CSC results respectively.}
& ANN (\%) \footnote{Highest training efficiency (88\%) with parameter values: momentum=0.9, learning rate=0.02. True positive: 91\%. False positive 4\%.}
& SVM (\%) \footnote{Highest training efficiency (89\%) with parameter values: C=$10^5$. True positive: 88\%. False positive 5\%.}
& CSC (\%) \footnote{Highest training efficiency (83\%) with parameter values: $d_1$=100, C=$10^4$. True positive: 77\%. False positive 10\%.} \\ \hline
0.1 & $1.50 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.2 & $7.51 \times10^{-24}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
\textcolor{blue}{0.3} & \textcolor{blue}{$5.00 \times10^{-24}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{99} & \textcolor{blue}{100} & \textcolor{blue}{100} \\
\textcolor{blue}{0.4} & \textcolor{blue}{$3.75 \times10^{-24}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{86} & \textcolor{blue}{93} & \textcolor{blue}{80} \\
\textcolor{blue}{0.5} & \textcolor{blue}{$3.00 \times10^{-24}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{69} & \textcolor{blue}{71} & \textcolor{blue}{65} \\
\textcolor{blue}{0.6} & \textcolor{blue}{$2.50 \times10^{-24}$} & \textcolor{blue}{(100, 100, 100, 100)} & \textcolor{blue}{58} & \textcolor{blue}{63} & \textcolor{blue}{57} \\
\textcolor{blue}{0.7} & \textcolor{blue}{$2.14 \times10^{-24}$} & \textcolor{blue}{(98, 100, 100, 100)} & \textcolor{blue}{43} & \textcolor{blue}{47} & \textcolor{blue}{43} \\
0.8 & $1.88 \times10^{-24}$ & (92, 100, 100, 100) & 38 & 50 & 37 \\
0.9 & $1.67 \times10^{-24}$ & (79, 100, 100, 100) & 24 & 28 & 22 \\
\textcolor{red}{1.0} & \textcolor{red}{$1.50 \times10^{-24}$} & \textcolor{red}{(50, 91, 92, 93)} & \textcolor{red}{20} & \textcolor{red}{27} & \textcolor{red}{21} \\
1.1 & $1.36 \times10^{-24}$ & (23, 69, 70, 75) & 24 & 7 & 14 \\
1.2 & $1.25 \times10^{-24}$ & (4, 41, 45, 55) & 27 & 11 & 18 \\
1.3 & $1.15 \times10^{-24}$ & (1, 32, 35, 44) & 14 & 12 & 16 \\
1.4 & $1.07 \times10^{-24}$ & (1, 19, 21, 25) & 15 & 18 & 20 \\
1.5 & $1.00 \times10^{-24}$ & (0, 10, 13, 18) & 14 & 7 & 21
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[htbp!]
\caption{\label{tab:table1} Detection efficiencies for injections with waveform parameters $\alpha=0.01$ and $f_o=\unit[1100]{Hz}$. The training was performed
with 11350 noise maps and 11350 injection maps. The latter were produced with $10^{-3} \le \alpha \le 10^{-1}$, $600 \le f_o \le 1600$ and $h$ values
distributed in the range of $ 10^{-24.0} \le h \le 10^{-23.7} = 2.00 \times 10^{-24}$. The signal amplitudes that lie within this range are
in blue text while the distance at which the conventional algorithm detects $50\%$ with FAR=$\unit{0.1}\%$ is in red text.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
Distance ($\times \unit[133]{Kpc}$) \footnote{Distance at which the conventional algorithm detects $50 \%$ of the signals with FAR=$\unit{0.1}\%$.}
& Signal amplitude ($h$) \footnote{Calculated using \eqref{eq.3a} and substituting the parameter values $\alpha=0.01$ and $f_0=1100$Hz and a distance given by the first column.}
& Conventional (\%) \footnote{These 4-tuples are detection efficiencies with FAR= (0.1\%, 18\%, 22\%, 36\%) that were obtained on full resolution maps. The second, third and fourth
entries are to be compared with the ANN, SVM and CSC results respectively.}
& ANN (\%) \footnote{Highest training efficiency (68\%) with parameter values: momentum=0.9, learning rate=0.02. True positive: 72\%. False positive: 18\%.}
& SVM (\%) \footnote{Highest training efficiency (64\%) with parameter values: C=$10^5$. True positive: 61\%. False positive: 22\%.}
& CSC (\%) \footnote{Highest training efficiency (60\%) with parameter values: $d_1$=200, C=$10^5$. True positive: 62\%. False positive: 36\%.} \\ \hline
0.1 & $1.50 \times10^{-23}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.2 & $7.51 \times10^{-24}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.3 & $5.00 \times10^{-24}$ & (100, 100, 100, 100) & 100 & 100 & 100 \\
0.4 & $3.75 \times10^{-24}$ & (100, 100, 100, 100) & 91 & 97 & 85 \\
0.5 & $3.00 \times10^{-24}$ & (100, 100, 100, 100) & 90 & 92 & 75 \\
0.6 & $2.50 \times10^{-24}$ & (100, 100, 100, 100) & 85 & 89 & 79 \\
0.7 & $2.14 \times10^{-24}$ & (98, 100, 100, 100) & 76 & 81 & 68 \\
\textcolor{blue}{0.8} & \textcolor{blue}{$1.88 \times10^{-24}$} & \textcolor{blue}{(92, 100, 100, 100)} & \textcolor{blue}{75} & \textcolor{blue}{77} & \textcolor{blue}{66} \\
\textcolor{blue}{0.9} & \textcolor{blue}{$1.67 \times10^{-24}$} & \textcolor{blue}{(79, 100, 100, 100)} & \textcolor{blue}{63} & \textcolor{blue}{62} & \textcolor{blue}{57} \\
\textcolor{red}{1.0} & \textcolor{red}{$1.50 \times10^{-24}$} & \textcolor{red}{(50, 95, 95, 95)} & \textcolor{red}{58} & \textcolor{red}{59} & \textcolor{red}{56} \\
\textcolor{blue}{1.1} & \textcolor{blue}{$1.36 \times10^{-24}$} & \textcolor{blue}{(23, 83, 83, 86)} & \textcolor{blue}{58} & \textcolor{blue}{44} & \textcolor{blue}{47} \\
\textcolor{blue}{1.2} & \textcolor{blue}{$1.25 \times10^{-24}$} & \textcolor{blue}{(4, 74, 74, 80)} & \textcolor{blue}{77} & \textcolor{blue}{49} & \textcolor{blue}{59} \\
\textcolor{blue}{1.3} & \textcolor{blue}{$1.15 \times10^{-24}$} & \textcolor{blue}{(1, 52, 52, 62)} & \textcolor{blue}{69} & \textcolor{blue}{41} & \textcolor{blue}{49} \\
\textcolor{blue}{1.4} & \textcolor{blue}{$1.07 \times10^{-24}$} & \textcolor{blue}{(1, 45, 45, 54)} & \textcolor{blue}{61} & \textcolor{blue}{46} & \textcolor{blue}{58} \\
\textcolor{blue}{1.5} & \textcolor{blue}{$1.00 \times10^{-24}$} & \textcolor{blue}{(0, 30, 30, 43)} & \textcolor{blue}{60} & \textcolor{blue}{39} & \textcolor{blue}{50} \\
1.6 & $9.38 \times10^{-25}$ & (0, 30, 30, 40) & 52 & 27 & 37 \\
1.7 & $8.83 \times10^{-25}$ & (0, 21, 21, 30) & 40 & 25 & 39 \\
1.8 & $8.34 \times10^{-25}$ & (0, 33, 33, 43) & 43 & 39 & 45 \\
1.9 & $7.90 \times10^{-25}$ & (0, 32, 32, 43) & 40 & 25 & 38 \\
2.0 & $7.51 \times10^{-25}$ & (0, 24, 24, 33) & 42 & 33 & 43
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{sidewaysfigure}[htbp!]
\centering
\includegraphics[width=1.05 \linewidth]{table_VI-eps-converted-to.pdf}
\caption{These are the detection efficiencies for the $(f_o=\unit[1100]{Hz}, \alpha=0.01)$ waveform. This signal was shown (in our previous study) to be detectable only at distances
that cover the Milky Way. The signal is approximately monochromatic over the durations for which our sensitivity studies were designed. At the 50\% false dismissal rate (FDR), the
ANN shows an increase of $\sim$20\% in the detection distance - from \unit[175]{Kpc} (conventional algorithm, dash-dot blue line) to \unit[210]{Kpc}. The SVM shows no increase,
while the CSC shows a very small increase.} \label{Fig:ef6}
\end{sidewaysfigure}
\end{widetext}
\begin{figure}[htbp!]
\centering
\begin{tabular}{@{}p{1.0\linewidth}@{\quad}p{1.0\linewidth}@{}}
\subfigimg[width=1.0 \linewidth]{}{noise_orig-eps-converted-to.pdf} \caption{This is one of the noise ft-maps with the original resolution of $1000 \times 5000$ pixels. The
pixels along the vertical axis correspond to $1$Hz each. The pixels along the horizontal axis
correspond to $0.5$s each, hence the total duration of the map is 2500s. The frequency cuts are well-known
seismic frequency bands and suspension vibration modes.
} &
\subfigimg[width=1.0 \linewidth]{}{noise_im_0_01-eps-converted-to.pdf} \caption{The highest training efficiency for the MLAs was achieved with resolution reduction by a factor of 100 per axis,
(Fig.3). This reduced $10 \times 50$ resolution ft-map corresponds to the full resolution noise map in Fig.4.
For the resolution reduction we used bicubic interpolation as provided by the MATLAB \texttt{imresize} function. The
frequency cuts were substituted with zeros before reducing the resolution.
} \\
\subfigimg[width=1.0 \linewidth]{}{inj_orig-eps-converted-to.pdf}\caption{This is an injection added to the noise ft-map shown in Fig.4. The waveform has parameters $\alpha=0.1$ and $f_0=1500$ Hz.
The duration of the injection is $2500$s and corresponds to a distance to the source of $\unit[117]{Kpc}$. Injections at longer
distances are harder to see by eye in the original resolution maps. The contrast between signal pixels and noise pixels
is higher in the reduced resolution maps, as shown in Fig.5. This makes it easier to see the injections in the reduced
resolution maps than in the full resolution ft-maps.
} &
\subfigimg[width=1.0 \linewidth]{}{inj_im_0_01-eps-converted-to.pdf}\caption{This reduced $10 \times 50$ resolution ft-map corresponds to the full resolution map in Fig.4. Despite the 10000 times
reduced resolution as compared to the ft-map of Fig.6, the r-mode injection is still visible. It turns out that the reduced
resolution ft-maps increase the training efficiency for the MLAs, according to Fig.3. However, for the parameter estimation
algorithms we still use the full resolution ft-maps.
}
\end{tabular}
\end{figure}
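The resolution reduction described in the captions above (MATLAB imresize with bicubic interpolation, applied after zeroing the frequency cuts) can be approximated in Python. This is a sketch under assumptions: SciPy's order-3 spline zoom stands in for MATLAB's bicubic kernel, and the masked rows are illustrative, not the actual notched bands.

```python
import numpy as np
from scipy.ndimage import zoom

def reduce_ftmap(ftmap, factor=0.01, cut_rows=None):
    """Shrink an ft-map by `factor` per axis with cubic-spline
    interpolation, after zeroing the notched frequency bands."""
    m = np.asarray(ftmap, dtype=float).copy()
    if cut_rows is not None:
        m[list(cut_rows), :] = 0.0  # substitute frequency cuts with zeros
    return zoom(m, factor, order=3)  # order=3: cubic spline, akin to bicubic

# a full-resolution 1000 x 5000 map becomes a 10 x 50 map
full = np.random.rand(1000, 5000)
small = reduce_ftmap(full, factor=0.01, cut_rows=range(100, 110))
print(small.shape)  # (10, 50)
```

The factor of $10^{-2}$ per axis matches the reduction that gave the highest training efficiency in Fig.3.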
\section{Conclusions}
\label{sec:conclusion}
\begin{description}
\item[$\bullet$ Pipeline suitability] ANN, SVM and CSC (and very likely other machine learning algorithms not yet tested) are a suitable class of decision-making algorithms not only in
the search for r-mode gravitational waves but in searches for long transient gravitational waves in general. The results in this paper demonstrate that
the stochastic pipeline would benefit from utilizing machine learning algorithms to determine whether a signal is present.
\item[$\bullet$ Computational efficiency] The most computationally expensive part of this study was the production of the one set of 11350 noise ft-maps and the 3 sets of 11350
injection ft-maps (each set requires up to 10 GB of memory and up to 1 week on a 50-node cluster). The 3 sets of injections examined 3 different ranges of values
for $h$ (corresponding to 3 different ranges of distances). In practice, we will know the distance to the source, so we will have to produce only one
set of injections, determined according to that distance.
\item[$\bullet$ Training/testing speeds] With the method presented in this paper in place, training the CSC method requires 10 minutes, training the SVM
method about 30 minutes, and training the ANN method about 8 hours. Once training is done, deciding whether a signal is present takes about 2 seconds for
100 ft-maps; the MLAs are thus much faster at decision making than the conventional algorithm, which takes up to 5 minutes for a single ft-map.
\item[$\bullet$ Detection performance] Comparing table II to table III and table V to table VI, we see that when the training is performed with injections at distances (marked
in blue) shorter than the distance $d_{red}$ (marked in red) at which the conventional algorithm has a $50\%$ success rate, the MLAs are not efficient enough and do not outperform
the conventional algorithm. This is because the MLA training sets (for tables II and V) did not include injections corresponding to distances as long as $d_{red}$ or beyond. Such injections were included in the training sets
for the MLAs whose results are shown in tables III and VI; these show that when the MLAs are trained with signals injected at distances from a little shorter than $d_{red}$ up to 1.5-2 times $d_{red}$,
their performance is at least as good as that of the conventional algorithm. Training the MLAs with injections at distances shorter than $d_{red}$ ensured that they
can detect signals injected at distances $0.7$-$0.8$ times $d_{red}$, while training with injections at distances longer than $d_{red}$ was done in order to push the
limits of the MLAs and see how much (if at all) they can outperform the conventional algorithm.
\item[$\bullet$ Low detection efficiency] for the $(0.01, \unit[1100]{Hz})$ waveform. We suspect that the low detection efficiencies for the second (weakest) waveform we tested
are due to the resolution reduction factor of $10^{-2}$ we used. The plot in Fig.\ref{Fig:ef3} was obtained in a study with the strongest signal $(0.1,\unit[1500]{Hz})$; we have not tested
whether the weaker signals reach their maximum training efficiencies at a resolution reduction different from the one used in this study. This needs further investigation.
\item[$\bullet$ False alarm rates] In our study, FARs of 4-10\% (for tables III and V) and 18-36\% (for tables IV and VI) are considered very high; however, a more carefully chosen training
set may result in lower FARs. The first suggestion would be to train the MLAs with a larger number of noise and injection ft-maps. If that is not possible (due to data availability),
we may train the MLAs with injections at distances spanning a smaller range of ($h^2$) values than those in the current training sets; similarly, we can use smaller ranges of
the parameter values for $\alpha$ and $f_o$. We can also try to increase the ratio of noise maps to injection maps in the training set so that the MLAs recognize the noise maps
more reliably. Specifically for the ANN, one way to reduce the FAR is to explore different topologies of the neural network architecture; for SVM and CSC we may
introduce a cost function to suppress the FAR to acceptable values.
\item[$\bullet$ Search optimization] There are many ways to further optimize the MLAs specifically for the search for r-mode gravitational radiation. One way is to customize
the ft-map resolution reduction: instead of bicubic interpolation, we may use a resolution reduction algorithm specifically designed for r-mode signals, so that the averaging
is done along the r-mode signal curves. Since the r-mode search is a targeted search (using a supernova electromagnetic or neutrino trigger), the distance to the source can be estimated
with an accuracy of $10-15\%$ \citep{TYPE1DIST,TYPEIIDIST}. This distance range can then be used to produce the injection ft-maps with which the MLAs are trained; in this way the training
can be optimized for the distance from the detectors to the external trigger.
\item[$\bullet$ Search constraints] Our current method is specifically designed for r-mode gravitational wave searches. A different signal (e.g. gravitational waves sourced by other neutron
star oscillation modes) would require its own training set, produced over the specific model parameter values. This is quite a different approach from that of the conventional
algorithm, which is generically designed to detect any type of signal. Our current method involves the production of at least 10000 (possibly overlapping) ft-maps; any
amount of data insufficient for the production of this many ft-maps will limit the sensitivity of the search. At the same time, the more ft-maps are used
for training, the more the training efficiencies of the MLAs may increase.
\item[$\bullet$ Future developments] Future developments include optimization of the current methods as well as the use of other supervised machine learning algorithms such as random
forests~\citep{breiman2001random}. Random forests can deal with the high dimensionality of our data by revealing features that contribute very little information to our analysis,
which can then be discarded prior to classification. With respect to the ANN, we plan to train a deep convolutional neural network~\cite{krizhevsky2012imagenet}, which appears
to be very promising for image classification.
\end{description}
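To make the FAR bookkeeping used throughout the tables concrete, the decision threshold of a detection statistic can be tuned on noise-only maps so that a chosen fraction of them is falsely flagged. The Gaussian scores below are synthetic stand-ins for illustration, not output of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic detection statistics for 10000 noise-only ft-maps
noise_scores = rng.normal(0.0, 1.0, 10000)

# threshold at the 99.9th percentile of the noise-only scores,
# so that roughly 0.1% of pure-noise maps are flagged as signal
threshold = np.quantile(noise_scores, 0.999)

far = np.mean(noise_scores > threshold)  # empirical false alarm rate
print(far)  # 0.001
```

The same construction, applied at FAR values of 4-36\%, yields the lower thresholds against which the MLA efficiencies in the tables are compared.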
\section{Introduction}
\label{sec:intro}
With the development of computer vision techniques, more and more people have begun to focus on understanding the behavior, as well as other context, of objects via visual information. Tracking targets in video sequences, one of the core topics with wide applications in video surveillance, has surged with the rise of tracking-by-detection (TBD) methods \cite{smeulders2014visual}. TBD methods reconstruct the states of targets from detection responses by assigning an identity to each detection and optimizing the trajectories \cite{Milan2012,Milan2014}. The prosperity of TBD in recent years has raised interest in a more challenging topic - multi-object tracking (MOT) with unknown target numbers. MOT remains difficult due to the complex settings of sequences, \eg, intricate trajectories of targets, varying illumination, movements of cameras, \etc.
The MOT problem can be handled in an online fashion, which can be adopted in time-critical applications. However, traditional online methods are susceptible to outliers brought about by occlusions and noise, \eg, false positives, false negatives, duplicate detections of a single target, \etc. These outliers can cause ambiguities in data association. Some tackle the problem using sparse appearance models \cite{sparserepresentation2013,LSSsparse2013}, and others via prediction \cite{Bae2014} of states in future frames; but the dynamics and appearances of the targets are unpredictable in some cases. Batch tracking methods handle outliers more easily than online methods through global optimization of the association and the trajectories. Terms that penalize mutual exclusions and the number of tracklets \cite{Milan2014,choi2015near} were added to the energy function to regularize the trajectories.
Apart from the advantages of batch methods, one major problem is that the global optimization involves frames across the whole sequence \cite{Bae2014TIP}, which is not suitable for real-time applications. Some batch methods also require initial solutions, \eg \cite{Milan2014}. Therefore, we propose our method, aiming to combine the advantages of online and batch methods while avoiding their disadvantages. We derive an iterative Approximation-Shrink Scheme (AS Scheme) from the Maximum-a-Posteriori (MAP) formulation using sequential approximation. We show that the state space can be effectively shrunk, but conflicts may exist in the sequential optimization and the results may vary with different optimization orders. To avoid these problems, an Ambiguity-Clearness Graph (A-C Graph) is formulated to efficiently represent the tracklet fragments and the ambiguities in the association. A set of rules and procedures is defined for changes of nodes and edges in the graph, \eg, connections, disconnections, transforms, merges, \etc. A sliding Window-of-Ambiguity (WOA) is defined on the A-C Graph for sequential optimization of the layers in the graph. Based on the A-C Graph and the sliding-WOA optimization, MOT is conducted in a window-wise manner, which disambiguates the association and accelerates the optimization process. We also show that the traditional online and batch approaches can be embraced by this framework with different window sizes.
Our main contributions can be summarized as: (1) an approximation-shrink scheme that iteratively approximates the global optimization, (2) a window-wise optimization framework based on the novel A-C Graph which embraces the traditional online and batch methods, and (3) a unified analysis of window-wise approaches with different window sizes using a search tree.
\section{Related Works}
\label{sec:related}
Different from past tracking methods~\cite{Mutihypothesistracking1979,JPDA1983}, TBD reconstructs the trajectories of targets by associating detections provided by object detectors. Most researchers exploit the TBD framework to design their MOT algorithms, which can be categorized as online and batch approaches.
As for batch tracking~\cite{Milan2014,Milan2013,Milan2012,Butt2013,Segal2013,Dicle2013,MonteCarlo3Dtrackingstrategy2011} approaches, conditional random fields (CRFs) are often used to learn and model affinities such as appearance and motion to discriminate among different trajectories~\cite{Yang2012,Yang2014}. A global and pairwise model is learned online in~\cite{Yang2014} to form an energy function, which is minimized offline via heuristic search. Despite the popularity of the CRF model, extensive training is needed. A continuous energy model was introduced by a series of works~\cite{Milan2014,Milan2013,Milan2012}. Milan \etal \cite{Milan2014} built a comprehensive continuous energy function by linearly combining terms regarding appearance, motion, mutual exclusion, trajectory persistence, \etc. Continuous energy functions are easier to optimize than discrete ones, but they possess many parameters and are hard to tune. Network flow was first applied to tracking by Zhang \etal~\cite{Zhang2008}: a graph is formed with the states of targets as nodes and the associations as edges, and the likelihoods of the states are represented as edge capacities. Butt \etal~\cite{Butt2013} improved the network structure by defining each node as a candidate pair of matching observations between consecutive frames. For a better model of occlusions, \cite{Segal2013} designed a latent data association framework: instead of assigning each detection to a corresponding track, they assume each detection is its own track and assign a latent variable to each node to represent the association. In addition to the general modeling of targets, some works track targets with specific characteristics; \eg, Dicle \etal~\cite{Dicle2013} focus on targets with similar appearance but different motion patterns.
Online tracking~\cite{Bae2014,Bae2014TIP,Shitrit2014,Breitenstein2011,FCimformationtheoretic2015,Motioncontext2015,sparserepresentation2013} has become more and more popular these days. Network flow has also been adopted in online tracking. \cite{Shitrit2014} formulate multi-object tracking into a multi-commodity network flow problem. They use sparse appearance to reduce computational complexity. Lu \etal~\cite{sparserepresentation2013} constructed a dictionary using already tracked objects and assigned the new detections by minimizing the L1 regularized function. Wang \etal~\cite{Leastabsoluted2014} finds that the representation residuals follow the Laplacian distribution, by which they improved the sparse representation method on tracking. Hungarian algorithm is firstly introduced into tracking problems by Joo \etal~\cite{Joo2007} to solve the bipartite graph model they proposed. The frame-by-frame scheme of online tracking takes great advantages of hungarian algorithm. Bae \etal~\cite{Bae2014} designed tracklet confidence by considering the length, occlusion and affinity. Different strategies are applied to tracklets with high and low confidence. Hungarian algorithm is employed in the association for local and global association respectively. Hungarian algorithm greedily associates detections in consecutive frames which could possibly misses the global optimal and cause identity switches. Besides the popularity of Hungarian algorithm in association algorithms, Bayesian framework is also one of the most popular model for target modeling. Bae \etal~\cite{Bae2014TIP} improved their previous work~\cite{Bae2014} by perform data association with a track existence probability, the provided detections are associated to the existed tracks and the corresponding track existence probabilities will be updated. 
Yoon \etal~\cite{Motioncontext2015} constructed a Relative Motion Network (RMN) to factor out camera motion by considering motion context from multiple objects, and incorporated the RMN into a Bayesian framework.
\section{Approximate-Shrink Scheme}
\label{sec:form}
Given observations $\mathbb{Z}_{1:t}=\{Z_{(\tau,i)}|\tau=1,2,\dots,t, i = 1,2,\dots,n_{\tau}\}$ of a real-time video sequence, where $n_{\tau}$ denotes the number of observations in frame $\tau$, we assume: (1) each observation $Z_{(\tau,i)}$ corresponds to a state $X_{(\tau,i)}$ \cite{Segal2013}, (2) states in the same frame are independent, (3) some of the states are already clear given the observations. The maximum a posteriori (MAP) formulation of MOT is
\begin{equation}\label{eqn:MAP}
\hat{\mathbb{X}}_{1:t}= \argmax_{\mathbb{X}_{1:t}}P(\mathbb{X}_{1:t}|\mathbb{Z}_{1:t}).
\end{equation}
Based on Assumption (2), we factorize Equation \ref{eqn:MAP} as
\begin{equation}\label{eqn:MAP1}
\hat{\mathbb{X}}_{1:t}= \argmax_{\mathbb{X}_{1:t}}\prod_{\tau=1}^{t}\prod_{i=1}^{n_{\tau}}{P(X_{(\tau,i)}|\mathbb{Z}_{1:t},\mathbb{X}_{1:\tau-1})}.
\end{equation}
Assumption (3) suggests that there exist states $\mathbb{X}^C_{1:t}=\{X_{(\tau',j)}|P(X_{(\tau',j)}|\mathbb{Z}_{1:t},\mathbb{X}_{1:\tau'-1}) \approx P(X_{(\tau',j)}|\mathbb{Z}_{1:t})\}$. Denote $\mathbb{X}^A_{1:t}= \mathbb{X}_{1:t}\setminus\mathbb{X}^C_{1:t}$. We name $\mathbb{X}^C_{1:t}$ the Clear states (C states) and $\mathbb{X}^A_{1:t}$ the Ambiguous states (A states). The global optimization in Equation \ref{eqn:MAP1} can then be relaxed to
\begin{equation}\label{eqn:MAP2}
\hat{\mathbb{X}}^A_{1:t} = \argmax_{\mathbb{X}^A_{1:t}}\prod_{\forall X_{(\tau,i)}\in \mathbb{X}^A_{1:t} }{P(X_{(\tau,i)}|\mathbb{Z}_{1:t},\mathbb{X}_{1:\tau-1})},
\end{equation}
and
\begin{equation}\label{eqn:MAP3}
\hat{X}_{(\tau',j)} = \argmax_{X_{(\tau',j)}}P(X_{(\tau',j)}|\mathbb{Z}_{1:t}), \forall X_{(\tau',j)}\in\mathbb{X}^C_{1:t}.
\end{equation}
Solving these two optimizations separately is an approximation to Equation \ref{eqn:MAP1}. First, we sequentially optimize every state $X_{(\tau',j)}$ in $\mathbb{X}^C_{1:t}$ via Equation \ref{eqn:MAP3} (the approximation step). Then we fix $\mathbb{X}^C_{1:t}$ as evidence for $\mathbb{X}^A_{1:t}$ and reduce Equation \ref{eqn:MAP2} to
\begin{multline}\label{eqn:MAP5}
\hat{\mathbb{X}}^A_{1:t} = \argmax_{\mathbb{X}^A_{1:t}}\prod_{\forall X_{(\tau,i)}\in \mathbb{X}^A_{1:t} }{P(X_{(\tau,i)}|\mathbb{Z}_{1:t},\mathbb{X}^A_{1:\tau-1},\mathbb{X}^C_{1:t})}
\end{multline}
(the shrink step). We then iteratively find $\mathbb{X}^{C'}_{1:t} \subseteq \mathbb{X}^A_{1:t}$, $\mathbb{X}^{C'}_{1:t} = \{X_{(\tau',j)}|P(X_{(\tau',j)}|\mathbb{Z}_{1:t},\mathbb{X}^A_{1:\tau'-1},\mathbb{X}^C_{1:t}) \approx P(X_{(\tau',j)} | \mathbb{Z}_{1:t}, \mathbb{X}^C_{1:t}) \}$, set $\mathbb{X}^{C}_{1:t} = \mathbb{X}^{C}_{1:t}\cup\mathbb{X}^{C'}_{1:t}$, and repeat the above steps to shrink the search space.
This Approximate-Shrink Scheme (A-S Scheme) iteratively searches and narrows down the state space. The states in $\mathbb{X}^C_{1:t}$ serve as nuclei of trajectories that attract other states to associate with them, and some nuclei merge to form longer tracklets during the iteration. However, the space is still too large, and convergence is not guaranteed. More approximations are needed to accelerate the scheme and ensure its convergence. Moreover, a data structure is needed to avoid conflicting associations among the states in $\mathbb{X}^C_{1:t}$ and to keep the optimization result independent of the order in which states are processed. Therefore, we propose a self-organizing A-C Graph and a window-wise optimization framework to meet these demands.
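To make the scheme concrete, the following Python sketch mirrors the iteration just described. It is a toy illustration under our own assumptions, not the paper's implementation: the `confidence` callback stands in for the posterior comparison that defines the clear set, and all names are hypothetical.

```python
# Toy sketch of the Approximate-Shrink (A-S) Scheme. The scalar
# "confidence" stands in for the posterior terms of Eqs. (2)-(4);
# it is an illustrative assumption, not the paper's implementation.

def as_scheme(states, confidence, c_thre=0.9, max_iter=10):
    """Iteratively split states into Clear (C) and Ambiguous (A) sets.

    confidence(s, clear) returns a score in [0, 1] approximating how
    well state s is determined given the current Clear set.
    """
    clear = {s for s in states if confidence(s, frozenset()) >= c_thre}
    for _ in range(max_iter):
        ambiguous = set(states) - clear
        # Approximation step: re-score A states with C states as fixed evidence.
        newly_clear = {s for s in ambiguous
                       if confidence(s, frozenset(clear)) >= c_thre}
        if not newly_clear:       # no change: the scheme has converged
            break
        clear |= newly_clear      # shrink the remaining search space
    return clear, set(states) - clear
```

The key property illustrated here is that each pass can only move states from the ambiguous set to the clear set, so the search space shrinks monotonically.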
\section{Window-wise Optimization for Tracking}
\label{sec:opt}
\begin{figure*}
\centering
\subfigure[Frame 11 to 17]{
\label{fig:flow1}\hspace{-0.12in}
\includegraphics[width=1.82in]{Fig/TUD-Stadtmitte_12-18.eps}}\hspace{-0.19in}
\subfigure[Frame 12 to 18]{
\label{fig:flow2}
\includegraphics[width=1.82in]{Fig/TUD-Stadtmitte_13-19.eps}}\hspace{-0.19in}
\subfigure[Frame 13 to 19]{
\label{fig:flow3}
\includegraphics[width=1.82in]{Fig/TUD-Stadtmitte_14-20.eps}}\hspace{-0.19in}
\subfigure[Frame 14 to 20]{
\label{fig:flow4}
\includegraphics[width=1.82in]{Fig/TUD-Stadtmitte_15-21.eps}}
\caption{Visualization of the Window of Ambiguity (WOA) from frame 12 to frame 21 in the TUD-Stadtmitte dataset. Each association is directed from parent to child, and the A-C Graph is directed and acyclic. From (a) to (b), state $X_{(16,7)}$ was connected to state $X_{(14,2)}$ as a C state child and was merged with $X_{(16,2)}$; meanwhile, the associations from $X_{(14,9)}$ and $X_{(14,7)}$ to $X_{(16,7)}$ were removed. From (c) to (d), state $X_{(15,9)}$ was inserted into the tracklet of state $X_{(14,6)}$ and state $X_{(17,4)}$. Best viewed in color.}
\label{fig:flow}
\end{figure*}
\subsection{Ambiguous-Clearness Graph}
{
\label{sec:def}
Given states $\mathbb{X}=\{X_{(\tau,i)}|\tau=1,2,\dots,t, i = 1,2,\dots,n_{\tau}\}$ and observations $\mathbb{Z}=\{Z_{(\tau,i)}|\tau=1,2,\dots,t, i = 1,2,\dots,n_{\tau}\}$ (the detections serve as observations in TBD multi-object tracking) in a real-time video sequence, and predefined thresholds $C_{\mathit{thre}}$ and $A_{\mathit{thre}}$ (their values are given in Section \ref{sec:exp}), we define state $X_{(\tau',j)}$ to be the \emph{parent} of state $X_{(\tau,i)}$ if $\tau'<\tau$ and there exists an association between $X_{(\tau',j)}$ and $X_{(\tau,i)}$; conversely, $X_{(\tau,i)}$ is a \emph{child} of $X_{(\tau',j)}$. ($X_{(\tau,i)}$ and $X_{(\tau',j)}$ are used only as examples for clarity of illustration; they do not indicate particular states.) The \emph{determined parent} of a state is its only parent, where the affinity score of the association is greater than $C_{\mathit{thre}}$. We now formally define the C states and A states. If a state $X_{(\tau,i)}$ has a determined parent or has no parent, $X_{(\tau,i)}$ is a \emph{Clear state} (C state), denoted $X^{C}_{(\tau,i)}$. On the contrary, if $X_{(\tau,i)}$ has parents but no determined parent, it is an \emph{Ambiguous state} (A state), denoted $X^{A}_{(\tau,i)}$. Note that a C state can only have zero or one parent. All the parents of a state $X_{(\tau,i)}$ form its \emph{active set}. We require that a state have at most one C state as its child, and that the frame numbers of its A state children be smaller than that of its C state child. The observations corresponding to $X^{C}_{(\tau,i)}$ and $X^{A}_{(\tau,i)}$ are denoted $Z^{C}_{(\tau,i)}$ and $Z^{A}_{(\tau,i)}$. A \emph{clear association} is the association between a C state and its parent, and a \emph{tracklet} is a group of states connected by clear associations. The tracklet including $X_{(\tau,i)}$ is denoted $\mathit{Trk}(X_{(\tau,i)})$.
The C states in $\mathit{Trk}(X_{(\tau,i)})$ after $X_{(\tau,i)}$ are defined as the \emph{descendants} of $X_{(\tau,i)}$. Taking states and associations as the vertices and edges, we form the A-C Graph of the MOT problem. In this paper, we speak of states and associations instead of vertices and edges when discussing the A-C Graph. The A-C Graph of the TUD-Stadtmitte dataset is visualized in Figure \ref{fig:flow}, where clear associations are shown as solid lines and states belonging to the same tracklet are shown in the same color.
As associations are directed from parent to child, the A-C Graph is a directed acyclic graph. In an A-C Graph, we define as the Window-of-Ambiguity (WOA) a time period $\tau'$ to $\tau$ ($1 \le \tau' < \tau \le t$) such that frames $1$ to $\tau'-1$ and $\tau+1$ to $t$ contain only clear associations. Tracklets outside the WOA are determined and fixed; changes to states and associations can only take place within the WOA. One can thus restrict the size of the state space by setting the length of the WOA.
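The definitions of this subsection can be condensed into a small data-structure sketch. The Python below is our illustrative assumption (field names, example threshold), not the paper's code; it encodes only the rule that a state is clear iff it has no parent or a single parent whose affinity exceeds $C_{\mathit{thre}}$.

```python
# Minimal sketch of an A-C Graph node. Field names and the threshold
# value are illustrative assumptions.

C_THRE = 0.5  # example value; the experiments use C_thre = 0.5

class State:
    def __init__(self, frame, idx):
        self.frame, self.idx = frame, idx
        self.parents = {}    # parent State -> affinity score of the association
        self.children = []

    def add_parent(self, parent, affinity):
        self.parents[parent] = affinity
        parent.children.append(self)

    @property
    def determined_parent(self):
        # The determined parent is the only parent, and the affinity
        # of that association exceeds the threshold.
        if len(self.parents) == 1:
            parent, affinity = next(iter(self.parents.items()))
            if affinity > C_THRE:
                return parent
        return None

    @property
    def is_clear(self):
        # C state: no parent at all, or a determined parent exists;
        # A state: has parents but none of them is determined.
        return not self.parents or self.determined_parent is not None
```

The further constraints (at most one C state child, A state children preceding the C state child) are enforced by the actions of the next subsection rather than by the node itself.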
}
\subsection{Actions}
{
As mentioned in Section \ref{sec:form}, actions on the A-C Graph should help avoid conflicts, \eg, multiple parents for a C state, multiple C state children, or clear associations forming a cycle. Meanwhile, the actions should be symmetric so that the result does not depend on chronological order. The basic actions on the A-C Graph are initializations, disconnections, connections and merges between two states. Table \ref{tab:notation} shows the functions and symbols used in defining these actions.
\begin{table}
\footnotesize
\centering
\begin{tabular}{|c|c|}
\hline
Functions and Symbols & Description \\
\hline
\hline
$\mathit{isempty}(\mathit{State Set})$ & Check whether the $\mathit{State Set}$ is empty. \\
\hline
$\mathit{find}(\mathit{State Set} = \mathit{C State})$ & Find the C States in the $\mathit{State Set}$.\\
\hline
$X_{(\tau,i)}.\mathit{pStates}$ & Find all the parents of $X_{(\tau,i)}$.\\
\hline
$X_{(\tau,i)}.\mathit{cldStates}$ & Find all the children of $X_{(\tau,i)}$.\\
\hline
$X_{(\tau,i)}.\mathit{frameN}$ & Find the frame number of $X_{(\tau,i)}$.\\
\hline
$X_{(\tau,i)}.\mathit{isClear}$ & Judge whether $X_{(\tau,i)}$ is a clear state.\\
\hline
\multirow{2}{*}{$X_{(\tau,i)}.\mathit{conf}$} & The affinity scores between all the\\
& parents of $X_{(\tau,i)}$ and $X_{(\tau,i)}$. \\
\hline
\end{tabular}
\normalsize
\caption{The functions and symbols used in this paper.}\label{tab:notation}
\end{table}
For a newly-entered state $X_{(\tau,i)}$, we first initialize the active set by enumerating all the potential parents. As specified in Section \ref{sec:def}, $X_{(\tau,i)}$ can connect with states in previous frames that have no C state child or whose C state child is after $X_{(\tau,i)}$. Procedure \ref{alg:active} shows the pseudocode for initializing the active set.
We disconnect two states $X_{(\tau,i)}$ and $X_{(\tau',j)}$ by removing the association between them, and update these two states.
As shown in Procedure \ref{alg:connectA}, we assign $X_{(\tau,i)}$ to $X_{(\tau',j)}$ as an A state child. The procedure terminates if $X_{(\tau,i)}$ is already a C state. If not, we check the descendants of $X_{(\tau',j)}$. If $X_{(\tau',j)}$ has no descendants, we directly add an association between $X_{(\tau,i)}$ and $X_{(\tau',j)}$; otherwise, we find the nearest C state descendant $X^p$ in the tracklet of $X_{(\tau',j)}$ not after $X_{(\tau,i)}$. If $X^p$ is in frame $\tau$, the procedure terminates; if $X^p$ is before $X_{(\tau,i)}$, we add the association between $X_{(\tau,i)}$ and $X^p$.
Procedure \ref{alg:connectC} illustrates the action of connecting $X_{(\tau,i)}$ to $X_{(\tau',j)}$ as a C state child. If $X_{(\tau,i)}$ is currently not a C state, the existing parents of $X_{(\tau,i)}$ are removed. If $X_{(\tau',j)}$ has no C state children, we directly add a connection between $X_{(\tau,i)}$ and $X_{(\tau',j)}$; otherwise, we find $X_{(\tau',j)}$'s latest C state descendant $X^p$ not after $X_{(\tau,i)}$. If $X^p$ is in frame $\tau$, $X^p$ and $X_{(\tau,i)}$ are merged via Procedure \ref{alg:merge}. If $X^p$ is before $X_{(\tau,i)}$, an association is added between $X_{(\tau,i)}$ and $X^p$, and all the A and C state children of $X^p$ after $X_{(\tau,i)}$ are removed from $X^p$ and reconnected to $X_{(\tau,i)}$ following Procedures \ref{alg:connectA} and \ref{alg:connectC} respectively. If $X_{(\tau,i)}$ is currently a C state and $X_{(\tau',j)}$ is not, $X_{(\tau',j)}$ is inserted into $X_{(\tau,i)}$'s tracklet, using Procedure \ref{alg:connectC} if there is no state in frame $\tau'$ in $X_{(\tau,i)}$'s tracklet and Procedure \ref{alg:merge} if such a state $X^{p1}$ exists. If $X_{(\tau,i)}$ and $X_{(\tau',j)}$ are both C states, their two tracklets are grouped into one by recursively calling Procedures \ref{alg:connectC} and \ref{alg:merge}, as shown in Procedure \ref{alg:connectC}. If one of the two states is in a tracklet, the other state is inserted into that tracklet.
Procedure \ref{alg:merge} describes the process of merging $X_{(\tau,j)}$ into $X_{(\tau,i)}$ in the same frame. As we cannot make changes to states and tracklets outside the WOA, we ensure that $X_{(\tau,j)}$ and $X_{(\tau,i)}$ are not both C states, which prevents merging states outside the WOA. The descendants of $X_{(\tau,i)}$ and $X_{(\tau,j)}$ are recursively merged into one tracklet by Procedure \ref{alg:connectC}. For each A state child $X^{\mathit{cld}}$ of $X_{(\tau,j)}$, we simply remove the association between $X^{\mathit{cld}}$ and $X_{(\tau,j)}$ and connect $X^{\mathit{cld}}$ to $X_{(\tau,i)}$ via Procedure \ref{alg:connectA}.
\begin{algorithm}\caption{Initialize the active set for the state $X_{(\tau,i)}$.}
\label{alg:active}
\begin{algorithmic}
\footnotesize
\Require state $X_{(\tau,i)}$, latest frame number $t$, size of Window of Ambiguity (WOA) $l$
\Ensure the active set containing all the potential parents of $X_{(\tau,i)}$
\For {all the states $X_{(\tau',j)}$ in frame $t-l+1$ to $\tau-1$}
\If {$\mathit{isempty}(\mathit{find}(X_{(\tau',j)}.\mathit{cldStates} = \mathit{CState}))$ \textbf{or} $\mathit{find}(X_{(\tau',j)}.\mathit{cldStates} = \mathit{CState}).\mathit{frameN} \le \tau$}
\State Add $X_{(\tau',j)}$ to the active set
\EndIf
\EndFor
\normalsize
\end{algorithmic}
\end{algorithm}
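A plain-Python rendering of the active-set initialization may clarify the pseudocode. The helper `states_in_frame` (frame number $\to$ list of states) and the state fields used here are our illustrative assumptions; following the textual eligibility rule, a candidate is kept if it has no C state child or its C state child comes after the new state.

```python
# Sketch of the active-set initialization (Procedure 1): collect the
# potential parents of a new state at frame x_frame inside the WOA.
# states_in_frame and the state fields (children, is_clear, frame)
# are illustrative assumptions.

def init_active_set(x_frame, states_in_frame, t, l):
    active = []
    # Candidate parents live in frames t-l+1 .. x_frame-1 (inside the WOA).
    for frame in range(max(1, t - l + 1), x_frame):
        for cand in states_in_frame.get(frame, []):
            c_children = [c for c in cand.children if c.is_clear]
            # Eligible: no C state child, or its C state child lies
            # after the new state.
            if not c_children or c_children[0].frame > x_frame:
                active.append(cand)
    return active
```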
\begin{algorithm}\caption{Connect state $X_{(\tau,i)}$ to state $X_{(\tau',j)}$ as A state child.}
\label{alg:connectA}
\begin{algorithmic}
\footnotesize
\Require child state $X_{(\tau,i)}$, parent state $X_{(\tau',j)}$
\Ensure the updated network
\If {$X_{(\tau,i)}.\mathit{isClear}=\mathit{false}$}
\State $X^p=X_{(\tau',j)}$
\While {\textbf{not} $\mathit{isempty}(\mathit{find}(X^p.\mathit{cldStates} = \mathit{CState}))$ \textbf{and} $X^p.\mathit{frameN}\le \tau$}
\State $X^p=$ the C state child of $X^p$
\EndWhile
\If {$X^p.\mathit{frameN}<\tau$}
\State Add $X_{(\tau,i)}$ to $X^p.\mathit{cldStates}$
\State Add $X^p$ to $X_{(\tau,i)}.\mathit{pStates}$
\State Update the features of $X_{(\tau,i)}$ and $X^p$
\EndIf
\EndIf
\normalsize
\end{algorithmic}
\end{algorithm}
\begin{algorithm}\caption{Connect state $X_{(\tau,i)}$ to state $X_{(\tau',j)}$ as C state child, where $X_{(\tau,i)}$ is currently an A state.}
\label{alg:connectC1}
\begin{algorithmic}
\footnotesize
\Require child state $X_{(\tau,i)}$, parent state $X_{(\tau',j)}$, latest frame number $t$, size of Window of Ambiguity (WOA) $l$
\Ensure the updated network
\State $X^p=X_{(\tau',j)}$
\While {\textbf{not} $\mathit{isempty}(\mathit{find}(X^p.\mathit{cldStates} = \mathit{CState}))$ \textbf{and} $X^p.\mathit{frameN}\le \tau$}
\State $X^p=$ the C state child of $X^p$
\EndWhile
\If {$X^p.\mathit{frameN}=\tau$}
\State Do Procedure \ref{alg:merge} with $(X_{(\tau,i)}$,$X^p$,$t$,$l)$ as input
\Else
\State Remove all parents of $X_{(\tau,i)}$
\State Remove all children of $X^p$ in the same frame with $X_{(\tau,i)}$
\For {all children $X^{\mathit{cld}}$ of $X^p$ in the frames after $X_{(\tau,i)}$}
\State Remove the association between $X^{\mathit{cld}}$ and $X^p$
\If {$X^{\mathit{cld}}.\mathit{isClear}=\mathit{true}$}
\State Do Procedure \ref{alg:connectC} with $(X^{\mathit{cld}}$,$X_{(\tau,i)}$,$t$,$l)$ as input
\Else
\State Do Procedure \ref{alg:connectA} with $(X^{\mathit{cld}}$,$X_{(\tau,i)})$ as input
\EndIf
\EndFor
\EndIf
\normalsize
\end{algorithmic}
\end{algorithm}
\begin{algorithm}\caption{Connect state $X_{(\tau,i)}$ to state $X_{(\tau',j)}$ as C state child, where $X_{(\tau,i)}$ is currently a C state.}
\label{alg:connectC2}
\begin{algorithmic}
\footnotesize
\Require child state $X_{(\tau,i)}$, parent state $X_{(\tau',j)}$, latest frame number $t$, size of Window of Ambiguity (WOA) $l$
\Ensure the updated network
\State $X^{p1}=X_{(\tau,i)}$
\While {$(X^{p1}.\mathit{isClear} = \mathit{true})$ \textbf{and} $X^{p1}.\mathit{frameN} > t-l$}
\State $X^{p1}=$ the determined parent of $X^{p1}$
\EndWhile
\State $X^{p2}=X_{(\tau',j)}$
\While {$(X^{p2}.\mathit{isClear} = \mathit{true})$ \textbf{and} $X^{p2}.\mathit{frameN} > t-l$}
\State $X^{p2}=$ the determined parent of $X^{p2}$
\EndWhile
\If {$X^{p1}.\mathit{frameN} > X^{p2}.\mathit{frameN}$}
\State Do Procedure \ref{alg:connectC} with $(X^{p2}$,$X^{p1}$,$t$,$l)$ as input
\ElsIf {$X^{p2}.\mathit{frameN} > X^{p1}.\mathit{frameN}$}
\State Do Procedure \ref{alg:connectC} with $(X^{p1}$,$X^{p2}$,$t$,$l)$ as input
\Else
\If {$X^{p1}.\mathit{isClear}=\mathit{true}$ \textbf{and} $X^{p2}.\mathit{isClear}=\mathit{true}$}
\If {$X^{p1}.\mathit{conf}\ge X^{p2}.\mathit{conf}$}
\State Remove the parent of $X^{p2}$, $X^{p2} = \mathit{A State}$
\State Do Procedure \ref{alg:merge} with $(X^{p1}$,$X^{p2}$,$t$,$l)$ as input
\Else
\State Remove the parent of $X^{p1}$, $X^{p1} = \mathit{A State}$
\State Do Procedure \ref{alg:merge} with $(X^{p2}$,$X^{p1}$,$t$,$l)$ as input
\EndIf
\ElsIf {$X^{p1}.\mathit{isClear}=\mathit{true}$}
\State Do Procedure \ref{alg:merge} with $(X^{p1}$,$X^{p2}$,$t$,$l)$ as input
\Else
\State Do Procedure \ref{alg:merge} with $(X^{p2}$,$X^{p1}$,$t$,$l)$ as input
\EndIf
\EndIf
\normalsize
\end{algorithmic}
\end{algorithm}
\begin{algorithm}\caption{Connect state $X_{(\tau,i)}$ to state $X_{(\tau',j)}$ as C state child.}
\label{alg:connectC}
\begin{algorithmic}
\footnotesize
\Require child state $X_{(\tau,i)}$, parent state $X_{(\tau',j)}$, latest frame number $t$, size of Window of Ambiguity (WOA) $l$
\Ensure the updated network
\If {$X_{(\tau,i)}.\mathit{isClear}=\mathit{false}$}
\State Do Procedure \ref{alg:connectC1} with $(X_{(\tau,i)}$,$X_{(\tau',j)}$,$t$,$l)$
\Else
\State Do Procedure \ref{alg:connectC2} with $(X_{(\tau,i)}$,$X_{(\tau',j)}$,$t$,$l)$
\EndIf
\normalsize
\end{algorithmic}
\end{algorithm}
\begin{algorithm}\caption{Merge state $X_{(\tau,j)}$ with state $X_{(\tau,i)}$.}
\label{alg:merge}
\begin{algorithmic}
\footnotesize
\Require state $X_{(\tau,i)}$, state $X_{(\tau,j)}$, latest frame number $t$, size of Window of Ambiguity (WOA) $l$
\Ensure the updated network
\If {$X_{(\tau,j)}.\mathit{isClear}$}
\State Remove $X_{(\tau,j)}$ from its parent $X^p$, if any
\State Do Procedure \ref{alg:connectC} with $(X_{(\tau,i)}$,$X^p$,$t$,$l)$ as input
\Else
\State Remove $X_{(\tau,j)}$ from its parents $X^p$
\State For all $X^p$, do Procedure \ref{alg:connectA} with $(X_{(\tau,i)}$,$X^p)$ as input
\EndIf
\For {all the children $X^{\mathit{cld}}$ of $X_{(\tau,j)}$}
\If {$X^{\mathit{cld}}.\mathit{isClear}=\mathit{true}$}
\State Remove $X^{\mathit{cld}}$ from $X_{(\tau,j)}$
\State Do Procedure \ref{alg:connectC} with $(X^{\mathit{cld}}$,$X_{(\tau,i)}$,$t$,$l)$ as input
\Else
\State Remove $X^{\mathit{cld}}$ from $X_{(\tau,j)}$
\State Do Procedure \ref{alg:connectA} with $(X^{\mathit{cld}}$,$X_{(\tau,i)})$ as input
\EndIf
\EndFor
\normalsize
\end{algorithmic}
\end{algorithm}
Although the actions are recursive, it can easily be shown that the recursion in Procedures \ref{alg:connectA}, \ref{alg:connectC} and \ref{alg:merge} cannot form an endless loop, and that the order in which actions are carried out on a set of states does not affect the structure of the A-C Graph. A visualization of these actions on the TUD-Stadtmitte dataset can be found in Figure \ref{fig:flow}. In Figure \ref{fig:flow2}, newly-entered states $X_{(18,1)}$ to $X_{(18,8)}$ are connected to their initial active sets via Procedures \ref{alg:active}, \ref{alg:connectA} and \ref{alg:connectC}. From Figure \ref{fig:flow1} to \ref{fig:flow2}, $X_{(16,7)}$ was connected to $X_{(14,2)}$ as a C state child by Procedure \ref{alg:connectC} and merged with $X_{(16,2)}$ by Procedure \ref{alg:merge}.
}
\subsection{Sliding Window Optimization}
{
\label{sec:swo}
For a real-time sequence, the A-C Graph continuously receives new states from the latest frame $t$. The WOA must slide forward to keep its size bounded and to resolve ambiguities into tracks, so we set an upper bound $l$ on the size of the WOA.
The sliding window optimization consists of three steps. First, for each newly-entered state $X_{(t,i)}$ in frame $t$, $i = 1,\dots,n_t$, we build the active set via Procedure \ref{alg:active} and compute the affinity score $a(X_{(t,i)},X^p)$ between $X_{(t,i)}$ and each state $X^p$ in the active set. If $a(X_{(t,i)},X^p) \ge C_{\mathit{thre}}$, we run Procedure \ref{alg:connectC} with $(X_{(t,i)},X^p,t,l)$ as input; if $A_{\mathit{thre}} < a(X_{(t,i)},X^p) < C_{\mathit{thre}}$, we run Procedure \ref{alg:connectA} with $(X_{(t,i)},X^p)$ as input. Second, from frame $t-l$ to $t$, we sequentially recompute the affinity scores of the states in each frame with their parents and reconnect them according to the new affinities. Third, the Hungarian algorithm \cite{ahuja1988network} is applied to the states in frame $t-l$ and their parent states to obtain the best association and clear all ambiguity in frame $t-l$. All states in frame $t-l$ thus become C states and the WOA shifts forward. If $t$ has not reached the end of the sequence, we set $t = t + 1$ and return to the first step; otherwise, we set $l = l - 1$ and repeat the third step. The outline of the optimization process is shown in Procedure \ref{alg:sliding}.
\begin{algorithm}\caption{Conduct sliding window optimization for MOT.}
\label{alg:sliding}
\begin{algorithmic}
\footnotesize
\Require size of Window-of-Ambiguity (WOA) $l$
\Ensure the final A-C Graph and the association result
\State 1. Associate the newly-entered states in the latest frame $t$ to their initial active sets.
\State 2. Sequentially shrink the active set of each A state in WOA.
\State 3. Determine the association of states in frame $t-l$ using Hungarian Algorithm \cite{ahuja1988network}.
\If {$t$ has not reached the end}
\State $t=t+1$, return to 1
\Else
\State $l=l-1$, return to 3
\EndIf
\normalsize
\end{algorithmic}
\end{algorithm}
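Step three resolves all ambiguity in frame $t-l$ through a one-to-one assignment between the A states of that frame and their candidate parents. As an illustration, the sketch below finds the same optimum by brute force over a small square affinity matrix; the paper uses the Hungarian algorithm \cite{ahuja1988network}, which computes this optimum in polynomial time. Inputs and names are our assumptions.

```python
# Brute-force stand-in for the assignment of step three:
# affinity[i][j] is the (assumed) score of child i with parent j,
# with equally many children and candidate parents. The Hungarian
# algorithm would return the same maximizing assignment efficiently.

from itertools import permutations

def resolve_frame(affinity):
    """Return the assignment (tuple of parent indices, one per child)
    that maximizes the total affinity, together with its score."""
    n = len(affinity)
    best, best_score = None, float("-inf")
    for perm in permutations(range(n)):
        score = sum(affinity[i][perm[i]] for i in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score
```

In practice the search is $O(n!)$ and only usable for tiny examples, which is exactly why the polynomial-time Hungarian algorithm is used in the actual optimization.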
The sliding window optimization conducts the A-S Scheme in a window-wise manner: Procedures \ref{alg:connectC} and \ref{alg:merge} in steps one and two serve as the approximation step, and updating the affinity scores in step two realizes the shrink step. Step three forces the states in frame $t-l$ to determine their connections, which guarantees convergence.
}
\subsection{Online, Delayed and Batch Methods}
{
\label{sec:ODB}
Based on the definition of the A-C Graph and the sliding window optimization, our window-wise framework subsumes online ($l = 1$), delayed ($1<l<t$) and batch ($l = t$) methods. Figure \ref{fig:tree} demonstrates the formation of a trajectory starting from $X^s$ in the A-C Graph under these three methods. The window-wise optimization explores a relatively small search tree $T_1,\dots,T_{t-l+1}$, determined by $l$, at each iteration. For an online method (Figure \ref{fig:tree1}), $l = 1$ and the search is greedy. For a delayed method (Figure \ref{fig:tree2}), heuristic search is conducted in $T_1,\dots,T_{t-l+1}$. For a batch method (Figure \ref{fig:tree3}), the search space remains unchanged, so local search methods, \eg, hill climbing or simulated annealing, are often exploited to move toward a local optimum iteratively. An experimental analysis of the relation between $l$ and the optimization results is provided in Section \ref{sec:AWOA}.
}
\begin{figure*}
\centering
\subfigure[Search Space]{
\label{fig:tree0}
\includegraphics[width=1.49in]{Fig/origin.pdf}}
\subfigure[Online $l=1$]{
\label{fig:tree1}
\includegraphics[width=1.53in]{Fig/online.pdf}}
\subfigure[Delayed $1<l<t$]{
\label{fig:tree2}
\includegraphics[width=1.5in]{Fig/delayed.pdf}}
\subfigure[Batch $l=t$]{
\label{fig:tree3}
\includegraphics[width=1.5in]{Fig/batch.pdf}}
\caption{Formation of a trajectory with different $l$. (a) illustrates the original search space; (b), (c) and (d) show the search process with local search trees. $T_{\tau}$ indicates the sliding window optimization in frame $\tau$, $\tau = 1,\dots,t-l+1$. The red lines represent the associations. For online and delayed approaches, the trajectory is formed from top to bottom, while for batch approaches, the trajectory is formed and optimized iteratively.}
\label{fig:tree}
\end{figure*}
\section{Experimental Evaluation}
\label{sec:exp}
\subsection{Implementation}
{
\label{sec:implem}
\textbf{Affinity model}: We implemented a basic affinity model following \cite{Bae2014}, which includes an appearance model $\mathit{App}(X_{(\tau,i)},X_{(\tau',j)})$, a motion model $\mathit{Mot}(X_{(\tau,i)},X_{(\tau',j)})$ and a shape model $\mathit{Shp}(X_{(\tau,i)},X_{(\tau',j)})$. The appearance model measures the Bhattacharyya distance between the histograms of $X_{(\tau,i)}$ and $X_{(\tau',j)}$. If $X_{(\tau,i)}$ is in a tracklet $\mathit{Trk}(X_{(\tau,i)})$, instead of the Incremental Linear Discriminant Analysis (ILDA) used in \cite{Bae2014}, we simply average the appearance histograms of all states in $\mathit{Trk}(X_{(\tau,i)})$ with an exponential discount factor. A first-order Kalman filter is applied to smooth and predict the positions of the targets and the shapes of the bounding boxes. We compute the normalized distances of target positions and bounding box shapes and map them through a Gaussian distribution $N(0,\mathit{Var})$ to obtain the affinity scores. The overall affinity is
\begin{equation}\label{eqn:aff1}
\mathit{Aff}(X_{(\tau,i)},X_{(\tau',j)}) = \mathit{App}\times\mathit{Mot}\times\mathit{Shp}.
\end{equation}
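A minimal sketch of Equation \ref{eqn:aff1}, under our own simplifying assumptions: appearance similarity is taken directly as the Bhattacharyya coefficient of normalized histograms (the actual model maps a Bhattacharyya distance), and the motion and shape terms are Gaussian weights with the variances quoted in the parameter settings.

```python
# Sketch of the overall affinity Aff = App * Mot * Shp. The use of the
# Bhattacharyya coefficient as the appearance term and the exact form
# of the Gaussian weighting are illustrative assumptions.

import math

def appearance(h1, h2):
    # Bhattacharyya coefficient of two normalized histograms;
    # equals 1.0 for identical histograms.
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def gaussian(dist, var):
    # Gaussian weight of a normalized distance, centered at 0.
    return math.exp(-dist ** 2 / (2.0 * var))

def affinity(h1, h2, pos_dist, shape_dist, pos_var=20**2, shp_var=50**2):
    return (appearance(h1, h2)
            * gaussian(pos_dist, pos_var)
            * gaussian(shape_dist, shp_var))
```

Since every factor lies in $[0,1]$, a low score in any single cue (appearance, motion or shape) suppresses the overall affinity, which is the intended behavior of the product form.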
\textbf{Dataset description}: We use the MOT Benchmark \cite{leal2015motchallenge} for training and evaluation, which contains $11$ training sequences and $11$ testing sequences. In total, there are $11,286$ frames, $5,503$ in the training set and $5,783$ in the testing set. The sequences have different frame rates and resolutions, and only pedestrians are tracked.
\textbf{Parameter settings}: In our experiments, $C_{\mathit{thre}} = 0.5$ and $A_{\mathit{thre}} = 0.1$. We estimated the length of every occlusion (number of consecutive frames with overlap $>0.4$) in the training set of the MOT Benchmark and studied the distribution of occlusion lengths. As shown in Figure \ref{fig:overlap}, about $99\%$ of the overlaps are within $5s$, and $84\%$ are within $1s$. Therefore, the delay time is set to $1s$ and the length of the WOA to $l = $ frame rate $\times$ delay time. The variance of the Gaussian distribution in the motion and shape models is $\mathit{Var} = [20^2, 50^2]$. The other parameters of the affinity model are the same as in \cite{Bae2014}.
\begin{figure}
\centering
\includegraphics[width=3.2in]{Fig/Overlap.eps}\\
\caption{Distribution of the lengths of bounding box overlaps in the ground truth sequences of the MOT Benchmark \cite{leal2015motchallenge}. $99\%$ of the overlaps are within $5s$ and $84\%$ are within $1s$.}\label{fig:overlap}
\end{figure}
}
\subsection{Analysis of Window-of-Ambiguity}
{
\label{sec:AWOA}
To analyze the relation between the WOA size and the quality of the window-wise optimization, we define the energy of an A-C Graph as
\begin{equation}\label{eqn:energy}
E(t) = -\sum_{\tau=1}^{t}\sum_{i=1}^{n_{\tau}}\max_{X^p \in X_{(\tau,i)}.\mathit{pStates}}\mathit{Aff}(X_{(\tau,i)},X^p).
\end{equation}
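Equation \ref{eqn:energy} translates directly into code; the graph representation below (a mapping from each state to the affinity scores of its parents) is an assumption for illustration.

```python
# Direct sketch of the energy in Eq. (6): the negated sum, over every
# state that has parents, of its best affinity to any parent. States
# without parents (tracklet roots) contribute nothing.

def graph_energy(graph):
    """graph: mapping state -> {parent: affinity score}."""
    return -sum(max(parents.values())
                for parents in graph.values() if parents)
```

Lower (more negative) energy means that, on average, states are attached to parents with higher affinity, which is why the optimization seeks to minimize it.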
Figure \ref{fig:energy} presents the final energy with varying WOA size on TUD-Stadtmitte ($179$ frames), TUD-Campus ($71$ frames) and PETS-S2L2 ($436$ frames) from the MOT Benchmark. The X-axis is in logarithmic scale. Interestingly, the final energy of these sequences drops significantly as the window size $l$ grows from $1$ to $5$, while the decrease becomes much slower for $l > 5$. The settings of these sequences, \eg, target density and viewpoint, are different, but the patterns of energy change remain almost identical; it is likely that the trend of the final energy depends only on the WOA size $l$. Moreover, the tracking results improve considerably with even a small WOA compared to the online method, which experimentally illustrates that delayed methods outperform online ones in the window-wise optimization framework. The final energy does not decrease much once $l$ grows beyond $5$, indicating that the sliding window approximation has only a minor effect on the final performance; enlarging the WOA further becomes a trade-off between speed and accuracy.
\begin{figure}
\centering
\subfigure[TUD-Stadtmitte ($179$ frames)]{
\label{fig:energy1}
\includegraphics[width=2.7in]{Fig/TUD-Stadtmitte.png}}
\subfigure[TUD-Campus ($71$ frames)]{
\label{fig:energy2}
\includegraphics[width=2.7in]{Fig/TUD-Campus.png}}
\subfigure[PETS-S2L2 ($436$ frames)]{
\label{fig:energy3}
\includegraphics[width=2.7in]{Fig/PETS-S2L2.png}}
\caption{The final energy with varying size of the Window-of-Ambiguity (WOA) on different sequences. The X-axis is in logarithmic scale. The energy decreases rapidly as $l$ grows from $1$ to $5$; for $l>5$, the decrease becomes slower.}
\label{fig:energy}
\end{figure}
}
\subsection{Performance Evaluation}
{
\label{sec:per}
\textbf{Evaluation metrics:} We adopt the CLEAR MOT metrics \cite{keni2008evaluating} and the metrics of \cite{Yang2012,kuopersonidentity2011} to evaluate our results. The multiple object tracking accuracy (MOTA) combines the numbers of false positives (FP), identity switches (IDS) and missed targets (FN). The multiple object tracking precision (MOTP) measures the overlap between the bounding boxes of the ground truth and those given by the trackers. MT and ML indicate the numbers of mostly tracked and mostly lost targets, and FG represents the number of track fragmentations.
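For reference, the MOTA values reported below follow the standard CLEAR MOT definition:

```python
# Standard CLEAR MOT accuracy: MOTA = 1 - (FP + FN + IDS) / GT,
# where GT is the total number of ground-truth objects summed over
# all frames.

def mota(fp, fn, ids, num_gt):
    return 1.0 - (fp + fn + ids) / num_gt
```

A perfect tracker has MOTA $= 1$; the score can be negative when the total number of errors exceeds the number of ground-truth objects.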
\textbf{Evaluation:} As shown in Table \ref{tab:result}, our method clearly outperforms the TC\_ODAL method using the same affinity model, and not only in MOTA. On some sequences, shown in Table \ref{tab:dataset}, our method with the basic affinity model even matches the performance of methods using state-of-the-art affinity models.
\begin{table*}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Method & Type & MOTA$\uparrow$ & MOTP$\uparrow$ & MT$\uparrow$ & ML$\downarrow$ & FP$\downarrow$ & FN$\downarrow$ & IDS$\downarrow$ & FG$\downarrow$ \\
\hline
\hline
AC-MOT (Proposed $+$ affinity of \cite{Bae2014}$\dagger$) & Delayed & $\mathbf{18.1\pm17.9}$ & $70.4$ & $5.8\%$ & $58.3\%$ & $13,492$ & $36,295$ & $\mathbf{509}$ & $\mathbf{1,092}$ \\
\hline
TBD\cite{geiger20143d} & Batch & $15.9\pm17.6$ & $\mathbf{70.9}$ & $\mathbf{6.4\%}$ & $47.9\%$ & $14,943$ & $\mathbf{34,777}$ & $1,939$ & $1,963$ \\
\hline
TC\_ODAL \cite{Bae2014}$\dagger$ & Online & $15.1\pm15.0$ & $70.5$ & $3.2\%$ & $55.8\%$ & $\mathbf{12,970}$ & $38,538$ & $637$ & $1,716$ \\
\hline
DP\_NMS \cite{pirsiavash2011globally} & Batch & $14.5\pm13.9$ & $70.8$ & $2.3\%$ & $\mathbf{40.8\%}$ & $13,171$ & $34,814$ & $4,537$ & $3,090$ \\
\hline
\end{tabular}
\normalsize
\caption{Performance evaluation. Results can be found at \url{http://motchallenge.net/results/2D_MOT_2015/}. The best outcomes are marked in bold. $\uparrow$ means higher is better; $\downarrow$ means lower is better. Methods evaluated with the same affinity descriptor are marked with the same symbol.}\label{tab:result}
\end{table*}
\begin{table}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}*{Method} & \multirow{2}*{AC-MOT} & CEM & MotiCon & SegTrack \\
& & \cite{Milan2014} & \cite{leal2014learning} & \cite{milanjoint} \\
\hline
Type & Delayed & Batch & Batch & Batch \\
\hline
\hline
TUD-Crossing & $62.3$ & $61.6$ & $58.2$ & $53.9$ \\
\hline
ETH-Linthescher & $18.2$ & $18.4$ & $18.3$ & $11.1$ \\
\hline
ETH-Crossing & $23.4$ & $18.2$ & $22.8$ & $23.4$ \\
\hline
KITTI-16 & $38.1$ & $31.6$ & $38.8$ & $40.2$ \\
\hline
\end{tabular}
\normalsize
\caption{MOTA on selected sequences of the MOT Benchmark. We compare our method, using the affinity model of \cite{Bae2014}, with methods using state-of-the-art affinity models.}\label{tab:dataset}
\end{table}
}
\section{Conclusion}
\label{sec:con}
This paper proposed an A-S Scheme for sequential approximation and a window-wise optimization framework based on the A-C Graph. The core idea is to cluster states subject to several constraints, \eg, that states in the same frame cannot be clustered into one group. The A-C Graph, together with the sliding window optimization, transforms the global clustering into sequential local clustering that self-organizes the structure in a relatively small state space, which can be done efficiently with little harm to occlusion handling. We showed experimentally that the characteristics of the window-wise optimization framework rarely change with the settings of the sequence. As the affinity model serves as the distance metric in clustering, it influences the clustering results; a comparison of optimization models is therefore only fair if similar affinity models are adopted. The experimental results show that, even with a basic affinity model, our method achieves competitive performance in such an unfair comparison. Our future work is to integrate more state-of-the-art affinity models into the window-wise optimization framework. We also plan to design a unified interface that makes it much easier to embed affinity models into different optimization models.
{\small
\bibliographystyle{ieee}
}
\section{INTRODUCTION}
As a fundamental but challenging task in computer vision, semantic segmentation can be broadly applied to fields such as path planning, autonomous driving, and video surveillance \cite{kang2011multiband} \cite{li2017traffic} \cite{chen2018importance}. Most existing deep learning-based semantic segmentation networks \cite{garcia2017review} \cite{romera2017erfnet} mainly deal with RGB images. However, RGB images provide limited information for model training and can produce inaccurate predictions in scenes with similar textures, complex backgrounds under dim light, or total darkness. With the popularity of thermal imaging cameras, some
\begin{figure}[h]
\centering
\includegraphics[width=3.4in]{fig/fig1.jpg}
\vspace{-5mm}
\caption{Qualitative comparison with two recent networks in daytime and nighttime urban street scenes. The color cones (the objects marked in the red frame) are too small to detect and segment. RTFNet either produces unsharp color cone boundaries or fails to segment the cones correctly (e.g., (d), (e), (j), (k)), whereas our FEANet gives more ideal segmentation results (e.g., (f), (l)).}
\label{fig:fig1}
\vspace{-5mm}
\end{figure}
researchers found that thermal information is robust and effective for reducing the ambiguity in challenging lighting conditions \cite{ha2017mfnet}, such as in urban street scenes. Therefore, the thermal images created by the thermal imaging cameras can be added as important supplements to improve the performance of RGB-T semantic segmentation.
Recently, RGB-T semantic segmentation has received increasing research attention. Various RGB-T models \cite{sun2019rtfnet} \cite{shivakumar2020pst900} \cite{sun2020fuseseg} have been proposed to improve the segmentation performance by combining RGB and thermal information. However, the performance of existing RGB-T models tends to drastically decrease when faced with certain complex scenarios (e.g., cluttered backgrounds, varying illuminations). Therefore, the existing RGB-T methods still need to solve the following challenges in order to make further progress.
The first challenge is to effectively extract multi-level features from RGB-T fused data. Generally, high-level features contain rich semantic information that can be used for object localization, while low-level features provide plentiful micro details that are useful for reducing glitch noise and refining segmentation boundaries. Current RGB-T semantic segmentation methods (e.g., MFNet \cite{ha2017mfnet}, RTFNet \cite{sun2019rtfnet}) leverage multi-level features with either a direct feature-extraction strategy or a progressive multi-data fusion process. However, because these strategies extract and merge multi-level features directly, without considering the differences between levels, they suffer from incomplete extraction of noisy low-level features. As shown in Fig.~\ref{fig:fig1}, the segmented object boundaries in the RTFNet prediction maps are not sharp (e.g., (d), (e), (j), (k)).
The second challenge is to excavate informative features from the thermal modality. Thermal images are of low quality, which leads to unpredictable noise during the data fusion process. Previous RGB-T models usually either treat the extra thermal image as a fourth input channel without modifying the three-channel RGB encoder stream, or fuse RGB and thermal features by simple summation and multiplication. These methods treat thermal and RGB information from the same perspective and ignore the fact that RGB images contain color and texture, whereas thermal maps capture the spatial relations among objects. Due to this modality difference, the above-mentioned simple combination methods \cite{sun2019rtfnet} \cite{sun2020fuseseg} are not effective. As shown in Fig.~\ref{fig:fig1} (d), (e), (j), (k), RTFNet fails to detect and segment small target objects (e.g., color cones).
To address the above issues, we propose a two-stage FEANet for better RGB-T semantic segmentation performance. As shown in Fig.~\ref{fig:fig2}, our FEANet contains a two-stage process for feature extraction and fusion. In stage 1, we introduce a FEAM module, which exploits inter-channel and spatial relations for detail information. The proposed FEAM exploits multi-level features in a progressive refinement way to suppress distractors in the encoder stream. This strategy is based on the observation that low-level features provide discriminative semantic information with micro details, which may contribute significantly to eliminating background distractors. In stage 2, to improve the compatibility of RGB and thermal features, the corresponding RGB and thermal feature maps are aggregated into the RGB encoder stream through elementwise summation. Our main contributions are summarized as follows:
\begin{itemize}
\item We design a two-stage FEANet to deal with the object boundaries and the small target object for RGB-T semantic segmentation in urban scenes.
\item We introduce a FEAM module to enhance multi-level features and fuse RGB and thermal information in a complementary way.
\end{itemize}
The remainder of this letter is structured as follows. Section II reviews related work. Section III describes our network in detail. Section IV presents the experimental results and discussions. Conclusions and future work are drawn in the last section.
\section{RELATED WORKS}
\subsection{Semantic Segmentation}
Over the past few years, semantic segmentation has been a great challenge for detecting and locating target objects in computer vision. Convolutional Neural Networks (CNNs) \cite{krizhevsky2012imagenet} have been applied to improve accuracy in image classification and semantic segmentation tasks since 2012. In 2015, Fully Convolutional Networks (FCN) \cite{long2015fully} were proposed for semantic segmentation; they have an end-to-end network architecture and outperform traditional methods that rely on hand-crafted feature extraction.
Similar to the FCN, SegNet \cite{badrinarayanan2017segnet} adopted an Encoder-Decoder network architecture as the backbone for semantic segmentation tasks. SegNet used a pre-trained VGG16 architecture as its encoder and fed the encoder output into the up-sampling decoder. SegNet achieved state-of-the-art accuracy, but at a low inference speed. In subsequent years, the Encoder-Decoder structure became widely used in semantic segmentation methods. UNet \cite{ronneberger2015u} and DeepLabv3 \cite{chen2018encoder} have large Encoder-Decoder architectures; in particular, their decoders can restore high-resolution feature maps from the low layers through short-cut connections.
Real-time semantic segmentation methods aim to generate high-quality segmentation results in real time. ENet \cite{paszke2016enet} also followed the Encoder-Decoder architecture but was optimized for fast inference and high accuracy. Due to its efficiency, ENet can effectively process images (480×640 RGB) in situations requiring high-speed inference. However, ENet failed to perform as well as SegNet on spectral image datasets such as SUN RGB-D \cite{song2015sun}.
To improve both the accuracy and speed of semantic segmentation, BiSeNet \cite{yu2018bisenet} introduced a spatial path that preserves spatial information and a context path that obtains a sufficient receptive field. Based on these two paths, a new Feature Fusion Module was developed to combine the features efficiently. However, BiSeNet only captures information from the lower layers to sharpen boundaries, and its inference speed is slow.
\subsection{RGB-T Semantic Segmentation}
Some CNN-based methods were designed for RGB-D datasets, which contain images acquired by multispectral cameras; several ideas in these works proved useful for designing our method. Hazirbas et al. \cite{hazirbas2016fusenet} proposed a CNN named FuseNet, which contains an Encoder-Decoder structure that simultaneously extracts features from RGB and depth images. In \cite{wang2018depth}, RGB and spectral feature maps are processed separately not only in the encoder stream but also in the decoder stream.
The existing urban-scene image segmentation datasets are based on visible spectral images (RGB images), such as Cityscapes \cite{cordts2016cityscapes} and the Daimler Urban dataset \cite{scharwachter2013efficient}. Naturally, semantic segmentation methods built on these datasets can only process RGB images. Furthermore, most of these methods focus only on improving segmentation accuracy while neglecting inference speed. For RGB-T semantic segmentation of urban scenes, MFNet, RTFNet, and FuseSeg-161 were proposed to fuse RGB and thermal data in a novel Encoder–Decoder structure: two identical encoders extract features from the RGB and thermal data, respectively, and one decoder gradually restores the resolution. In addition, other recent RGB-T fusion methods \cite{stone2021deepfusenet} \cite{jayasuriya2020active} utilize the combination of omnidirectional (O-D) infrared sensors and an O-D visual RGB sensor for semantic segmentation in autonomous robotic systems.
\begin{figure*}[ht]
\centering
\includegraphics [width=6.2in]{fig/fig2}
\caption{The overall architecture of the proposed FEANet. From left to right are Thermal Stream, RGB Stream, and Output Stream. The encoder in Thermal Stream and RGB Stream contains two extracting stages. In stage 1, Thermal Stream and RGB Stream use ResNet \cite{he2016deep} as the feature extractor layer. The output part of each layer is weighted through the FEAM. In stage 2, the output map of Thermal Stream is fused into the RGB Stream. The decoder in Output Stream is composed of Transposed blocks A and B.}
\label{fig:fig2}
\vspace{-3mm}
\end{figure*}
\section{PROPOSED METHOD}
In this section, we first introduce the overall architecture of our FEANet, which contains two extracting encoder streams and an output decoder stream. As the Encoder-Decoder structure has been confirmed as an effective architecture in many semantic segmentation networks, our FEANet also adopts this structure. Then, to fully excavate informative cues from both the RGB and thermal feature maps, we present the FEAM to enhance the multi-level features for superior segmentation performance.
\subsection{Overall Architecture}
As shown in Fig.~\ref{fig:fig2}, our FEANet contains two main steps: feature extraction and resolution restoration. Two encoder streams and one decoder stream are designed for feature extraction and recovery, respectively. In the feature extraction process, the two encoders extract multi-level features from the three-channel RGB and one-channel thermal images, respectively. With increasing encoder depth, the high-level features (e.g., L3, L4 in Fig.~\ref{fig:fig2}) become more useful for capturing global context, but they lose object details. When we up-sample the high-level feature maps, the output prediction becomes blurred and object boundaries become unclear. Instead, the proposed FEAM can distinguish object regions that are too small to detect. In the resolution restoration process, the decoder produces dense output predictions. At the end of FEANet, a final softmax layer is adopted to obtain the prediction map for the RGB-T semantic segmentation results.
\begin{figure}[t]
\centering
\includegraphics[width=3.4in]{fig/fig3}
\vspace{-5mm}
\caption{Architecture of the Feature-Enhanced Attention Module (FEAM)}
\label{fig:fig3}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.2in]{fig/fig4}
\caption{ Architecture of the Transposed block. Conv, TransConv and BN refer to the convolutional layers, transposed convolutional layers and the batch normalization layers, respectively. The detailed channel numbers for Conv and TransConv layers are listed in Tab.~\ref{tab:tab1}.}
\label{fig:fig4}
\vspace{-5mm}
\end{figure}
The proposed FEANet explores a two-stage cross-modal fusion method. In the first stage, we extract both the RGB and thermal feature maps with ResNet blocks and then refine the detail features through the FEAM module. In the second stage, the corresponding RGB and thermal feature maps are aggregated into the RGB encoder stream through elementwise summation. At the end of the two encoders, the final refined feature maps are transmitted to the decoder. With this two-stage feature-extraction strategy, the rich semantic information lost during intensive feature extraction can be recovered.
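The per-layer flow described above can be sketched as follows. This is only a shape-tracing sketch: \texttt{resnet\_block} and \texttt{feam} are hypothetical stand-ins (a downsampling slice and an identity, respectively) for the real ResNet stage and attention module, and the tensors are random.

```python
import numpy as np

def resnet_block(x):
    # stand-in for a real ResNet stage: halve spatial size, keep channels
    return x[:, :, ::2, ::2]

def feam(x):
    # stand-in for the FEAM attention module (identity here);
    # the real module reweights channels and spatial positions
    return x

def two_stage_encoder(rgb, thermal, num_layers=5):
    """Stage 1: extract and FEAM-refine each stream per layer.
    Stage 2: fuse thermal into the RGB stream by elementwise summation."""
    for _ in range(num_layers):
        rgb = feam(resnet_block(rgb))
        thermal = feam(resnet_block(thermal))
        rgb = rgb + thermal  # elementwise summation fusion
    return rgb

rgb = np.random.rand(1, 64, 32, 32)      # toy RGB feature map
thermal = np.random.rand(1, 64, 32, 32)  # toy thermal feature map
fused = two_stage_encoder(rgb, thermal, num_layers=2)
print(fused.shape)  # (1, 64, 8, 8)
```

Note that the fused result always stays in the RGB stream, so the thermal stream acts purely as a per-layer supplement, matching Fig.~\ref{fig:fig2}.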
\subsection{Encoder-Feature Extracting}
In the proposed FEANet, the RGB and thermal features are extracted in two encoder streams. Specifically, both the RGB and thermal encoder streams employ five convolutional blocks from ResNet \cite{he2016deep} as the standard backbone and attach an additional FEAM after every convolutional block. The existing ResNet is designed for extracting features from three-channel RGB images and is not directly suitable for single-channel images, so we change the input channel number of the first convolutional layer to one to extend it to thermal images.
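The paper only states that the first layer's channel number is changed; how pretrained weights are reused is not specified. One common trick (shown here purely as an assumption, not as the paper's method) is to sum the pretrained kernels over the RGB input-channel axis so that a single-channel input produces responses on a similar scale:

```python
import numpy as np

# pretrained first-conv weights of a ResNet: (out_ch, in_ch, kH, kW)
w_rgb = np.random.rand(64, 3, 7, 7)

# assumed adaptation: collapse the three RGB kernels into one
# thermal kernel by summing over the input-channel axis
w_thermal = w_rgb.sum(axis=1, keepdims=True)

print(w_thermal.shape)  # (64, 1, 7, 7)
```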
\begin{table}[htbp]
\centering
\caption{Configuration for the convolution (Conv) and transposed convolution (TransConv) layers in the individual module of the decoder.}
\begin{tabular}{rrrcccccccc}
\toprule
& & \multicolumn{2}{c}{Name} & \multicolumn{2}{c}{Kernel Size} & \multicolumn{2}{c}{Stride} & \multicolumn{2}{c}{Padding} \\
\midrule
\multicolumn{2}{c}{\multirow{2}*{Block A}} & \multicolumn{2}{c}{Conv 1} & \multicolumn{2}{c}{3x3} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{1} \\
\cmidrule{3-10}
& & \multicolumn{2}{c}{Conv 2} & \multicolumn{2}{c}{3x3} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{1} \\
\midrule
\multicolumn{2}{c}{\multirow{4}*{Block B}} & \multicolumn{2}{c}{Conv 1} & \multicolumn{2}{c}{3x3} & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{1} \\
\cmidrule{3-10}
& & \multicolumn{2}{c}{TransConv 1} & \multicolumn{2}{c}{2x2} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{0} \\
\cmidrule{3-10}
& & \multicolumn{2}{c}{TransConv 2} & \multicolumn{2}{c}{2x2} & \multicolumn{2}{c}{2} & \multicolumn{2}{c}{0} \\
\bottomrule
\end{tabular}%
\label{tab:tab1}%
\end{table}%
Effectively extracting features from both RGB and thermal images is the focus of this paper. At nighttime, some colorful objects that are invisible in RGB maps can be clearly seen in thermal maps. Considering this modality difference, the RGB and thermal features need to be enhanced. Inspired by \cite{woo2018cbam}, we design a FEAM module that uses an attention component to learn features from the fused data and then refine the prediction. In Fig.~\ref{fig:fig2}, a FEAM is added after each convolutional layer in the two encoder streams, which enhances the compatibility of the features. This extraction process improves the representation of image features and preserves the multi-level information. To better understand the working mechanism of the FEAM module, the channel-wise feature maps from FEAM are visualized at different levels.
As illustrated in Fig.~\ref{fig:fig3}, the FEAM contains a sequential channel attention operation and a spatial attention operation. The channel attention operation shifts attention to the features extracted from the convolutional layer and explores foreground cues. Complementarily, the spatial attention operation focuses on the global area to explore informative cues, looking for possible small target objects within it. To the best of our knowledge, we are the first to introduce the attention mechanism to excavate informative cues from both RGB and thermal multi-level features. Our experiments in Tab.~\ref{tab:tab2} and Fig.~\ref{fig:fig5} demonstrate the effectiveness of our approach in improving RGB-T semantic segmentation performance.
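Since FEAM is inspired by CBAM \cite{woo2018cbam}, a simplified channel-then-spatial attention pass can be sketched as below. This is an assumption-heavy simplification: the real modules use a shared MLP on the pooled vectors and a convolution over the pooled maps, which are omitted here in favor of plain avg/max pooling plus a sigmoid.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Channel attention on a (C, H, W) map: pool spatially, reweight channels.
    (The shared MLP of CBAM is omitted for brevity.)"""
    avg = x.mean(axis=(1, 2))   # (C,) global average pooling
    mx = x.max(axis=(1, 2))     # (C,) global max pooling
    w = sigmoid(avg + mx)       # (C,) channel weights in (0, 1)
    return x * w[:, None, None]

def spatial_attention(x):
    """Spatial attention: pool over channels, reweight spatial positions."""
    avg = x.mean(axis=0)        # (H, W)
    mx = x.max(axis=0)          # (H, W)
    w = sigmoid(avg + mx)       # (H, W) spatial weights in (0, 1)
    return x * w[None, :, :]

def feam(x):
    # FEAM applies channel attention first, then spatial attention
    return spatial_attention(channel_attention(x))

x = np.random.rand(8, 16, 16)
y = feam(x)
print(y.shape)  # (8, 16, 16)
```

Because both weight maps lie in $(0,1)$, the module can only suppress (never amplify) activations in this simplified form; the learned MLP/conv in the full module makes the reweighting far more expressive.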
\subsection{Decoder-Resolution Restoring}
The decoder takes the multi-level features computed by the two encoder streams, i.e., the final maps of the RGB and thermal features, and is mainly designed to efficiently leverage the multi-level information for detail pixel refinement. Our decoder architecture is refined from the RTFNet decoder and restores the feature map to the original image resolution. Different from RTFNet, we delete the two sequential 1 × 1 convolutions of the original block, which avoids a complicated up-sampling process in the decoder. As illustrated in Fig.~\ref{fig:fig4}, the decoder consists of two blocks (Transposed blocks A and B). Specifically, Transposed block B contains an additional branch to enlarge the receptive field and a residual connection to preserve information.
Detailed configurations of the neural network layers in the Transposed blocks are displayed in Tab.~\ref{tab:tab1}. In block A, each convolutional layer is followed by a batch normalization (BN) layer \cite{ioffe2015batch} and a ReLU activation layer \cite{nair2010rectified}, and a shortcut element-wisely adds the block input to the output of the final BN layer. Block B consists of Conv 1 and two TransConv layers; each residual-based transposed block contains a 3×3 convolution and a residual-based transposed convolution. Conv 1 keeps the resolution of the map unchanged but decreases the number of feature channels by a factor of 2. TransConv 1 keeps the number of channels unchanged and increases the resolution by a factor of 2. Different from TransConv 1, TransConv 2 both increases the resolution and decreases the number of feature channels. Finally, the decoder recovers more details to generate the final predicted map in a progressive up-sampling way.
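The kernel/stride/padding choices in Tab.~\ref{tab:tab1} imply the resolution behavior described above; a quick check with the standard output-size formulas for convolution and transposed convolution:

```python
def conv_out(n, k, s, p):
    # standard convolution output size
    return (n + 2 * p - k) // s + 1

def transconv_out(n, k, s, p):
    # transposed convolution output size (no output padding)
    return (n - 1) * s - 2 * p + k

h = 30
# Tab. I: Conv layers are 3x3, stride 1, padding 1 -> resolution preserved
assert conv_out(h, 3, 1, 1) == h
# Tab. I: TransConv layers are 2x2, stride 2, padding 0 -> resolution doubled
assert transconv_out(h, 2, 2, 0) == 2 * h
print(conv_out(h, 3, 1, 1), transconv_out(h, 2, 2, 0))  # 30 60
```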
\begin{table*}[htbp]
\caption{Comparison result on the test set (\%). 3c and 4c represent that the networks are tested with the three-channel RGB data and four-channel RGB-Thermal data, respectively. Note that mAcc and mIoU are calculated with the unlabeled classes, but the results for the unlabeled classes are not displayed. The bold font highlights the best result in each column.}
\setlength{\tabcolsep}{1.5mm}
\begin{tabular}{cccccccccccccccccccccccc}
\toprule
\multicolumn{4}{c}{\multirow{2}*{Methods}} & \multicolumn{2}{c}{Car } & \multicolumn{2}{c}{Person } & \multicolumn{2}{c}{Bike } & \multicolumn{2}{c}{Curve } & \multicolumn{2}{c}{Car Stop } & \multicolumn{2}{c}{Guardrail } & \multicolumn{2}{c}{Color Cone } & \multicolumn{2}{c}{Bump} & \multirow{2}*{mAcc} & \multirow{2}*{mIoU} \\
\cmidrule{5-20} \multicolumn{4}{c}{} & Acc & IoU & Acc & IoU & Acc & IoU & Acc & IoU & Acc & IoU & Acc & IoU & Acc & IoU & Acc & IoU & \\
\midrule
\multicolumn{4}{c}{FRRN(4c)} & 81.9 & 74.7 & 66.2 & 60.8 & 62.8 & 50.3 & 41.2 & 35.0 & 12.5 & 11.5 & 0.0 & 0.0 & 37.2 & 34.0 & 35.2 & 34.6 & 48.5 & 44.2 \\
\multicolumn{4}{c}{FRRN(3c)} & 80.0 & 71.2 & 53.0 & 46.1 & 65.1 & 53.0 & 34.0 & 27.1 & 21.6 & 19.1 & 0.0 & 0.0 & 34.7 & 32.5 & 36.2 & 30.5 & 47.1 & 41.8 \\
\midrule
\multicolumn{4}{c}{BiSeNet(4c)} & 89.7 & 84.1 & 72.0 & 63.2 & 74.1 & 60.1 & 45.1 & 36.7 & 34.2 & 25.3 & 18.2 & 5.0 & 47.4 & 42.2 & 39.8 & 35.9 & 57.7 & 50.0 \\
\multicolumn{4}{c}{BiSeNet(3c)} & 90.0 & 84.5 & 65.0 & 54.3 & 75.0 & 61.4 & 32.1 & 25.7 & 32.3 & 26.2 & 3.2 & 0.9 & 49.6 & 43.3 & 48.1 & 40.5 & 54.9 & 48.2 \\
\midrule
\multicolumn{4}{c}{DFN(4c)} & 90.0 & 84.4 & 73.2 & 65.0 & 75.5 & 60.9 & 54.0 & 40.4 & 38.9 & 25.7 & 10.2 & 2.7 & 48.3 & 42.5 & 55.8 & 47.4 & 60.5 & 52.0 \\
\multicolumn{4}{c}{DFN(3c)} & 90.7 & 81.4 & 67.7 & 52.8 & 71.5 & 57.5 & 49.2 & 34.9 & 35.1 & 23.8 & 4.1 & 1.4 & 44.2 & 31.0 & 54.6 & 47.5 & 57.3 & 47.5 \\
\midrule
\multicolumn{4}{c}{SegHRNet(4c)} & 92.8 & 87.6 & 79.3 & 71.0 & 78.3 & 63.4 & 59.8 & 42.5 & 25.7 & 19.1 & 18.8 & 0.0 & 56.5 & 49.8 & 63.5 & 44.5 & 63.7 & 53.2 \\
\multicolumn{4}{c}{SegHRNet(3c)} & 92.2 & 86.6 & 73.1 & 59.8 & 74.9 & 61.3 & 47.0 & 33.2 & 23.8 & 28.7 & 7.3 & 0.0 & 54.6 & 47.2 & 61.5 & 46.2 & 60.9 & 51.3 \\
\midrule
\multicolumn{4}{c}{MFNet} & 77.2 & 65.9 & 67.0 & 58.9 & 53.9 & 42.9 & 36.2 & 29.9 & 19.1 & 9.9 & 0.1 & 8.5 & 30.3 & 25.2 & 30.0 & 27.7 & 45.1 & 39.7 \\
\midrule
\multicolumn{4}{c}{FuseNet} & 81.0 & 75.6 & 75.2 & 66.3 & 64.5 & 51.9 & 51.0 & 37.8 & 28.7 & 15.0 & 0.0 & 0.0 & 31.1 & 21.4 & 51.9 & 45.0 & 52.4 & 45.6 \\
\midrule
\multicolumn{4}{c}{DepthAwareCNN} & 85.2 & 77.0 & 61.7 & 53.4 & 76.0 & 56.5 & 40.2 & 30.9 & 9.9 & 29.3 & 22.8 & 6.4 & 32.9 & 30.1 & 36.5 & 32.3 & 55.1 & 46.1 \\
\midrule
\multicolumn{4}{c}{RTFNet-50} & 91.3 & 86.3 & 78.2 & 67.8 & 71.5 & 58.2 & 69.8 & 43.7 & 32.1 & 24.3 & 13.4 & 3.6 & 40.4 & 26.0 & 73.5 & \textbf{57.2} & 62.2 & 51.7 \\
\midrule
\multicolumn{4}{c}{RTFNet-152} & 93.0 & 87.4 & 79.3 & 70.3 & 76.8 & 62.7 & 60.7 & 45.3 & \textbf{38.5 } & \textbf{29.8 } & 0.0 & 0.0 & 45.5 & 29.1 & 74.7 & 55.7 & 63.1 & 53.2 \\
\midrule
\multicolumn{4}{c}{FuseSeg-161} & 93.1 & \textbf{87.9} & 81.4 & \textbf{71.7} & \textbf{78.5} & \textbf{64.6} & \textbf{68.4} & 44.8 & 29.1 & 22.7 & 63.7 & 6.4 & 55.8 & 46.9 & 66.4 & 47.9 & 70.6 & 54.5 \\
\midrule
\multicolumn{4}{c}{FEANet(Ours)} & \textbf{93.3} & 87.8 & \textbf{82.7} & 71.1 & 76.7 & 61.1 & 65.5 & \textbf{46.5} & 26.6 & 22.1 & \textbf{70.8} & \textbf{6.6} & \textbf{66.6} & \textbf{55.3} & \textbf{77.3} & 48.9 & \textbf{73.2} & \textbf{55.3} \\
\bottomrule
\end{tabular}%
\label{tab:tab2}%
\end{table*}%
\begin{figure*}[h]
\includegraphics[width=6.95in]{fig/fig5}
\vspace{-1mm}
\caption{Qualitative demonstrations for the fusion networks in daytime or nighttime. We can see that our FEANet can provide acceptable results in various lighting conditions. The comparative results demonstrate our superiority.}
\label{fig:fig5}
\vspace{-5mm}
\end{figure*}
\section{EXPERIMENTS AND RESULTS}
\subsection{The RGB-T Dataset}
We use the public dataset released with MFNet \cite{ha2017mfnet}. It was recorded in urban street scenes and contains eight hand-labeled object classes and one unlabeled background class. The dataset contains 1569 pairs of RGB and thermal images, of which 820 were taken in the daytime and 749 at nighttime. We follow the dataset splitting scheme proposed in \cite{ha2017mfnet}: the training set consists of 784 pairs of images, the validation set consists of 392 pairs, and the remaining images are used for testing.
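A quick arithmetic check of the split sizes quoted above (the test-set size is implied rather than stated):

```python
total, day, night = 1569, 820, 749  # pairs in the MFNet dataset
train, val = 784, 392               # split from ha2017mfnet
test = total - train - val          # remaining pairs used for testing

assert day + night == total
print(test)  # 393
```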
\subsection{Training Details}
We use the Stochastic Gradient Descent (SGD) optimization solver for training. The momentum and weight decay are set to 0.9 and 0.0005, respectively. The initial learning rate is set to 0.03, and we adopt CosineAnnealingWarmRestarts \cite{loshchilov2017decoupled} to gradually decrease it. The loss function combines the DiceLoss \cite{milletari2016v} and the SoftCrossEntropy \cite{yi2019probabilistic}: we give each a weight of 0.5 and add them:
$$
\textup{DiceLoss}=1-\frac{2 \sum_{i}^{N} p_{i} g_{i}}{\sum_{i}^{N} p_{i}^{2}+\sum_{i}^{N} g_{i}^{2}} \eqno{(1)}
$$
where the sums run over the $N$ voxels of the predicted binary segmentation volume $p_{i} \in P$ and of the ground-truth binary volume $g_{i} \in G$. This formulation of the DiceLoss can be differentiated, yielding the gradient.
$$
\textup{SoftCrossEntropyloss }=-\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{c} \hat{y}_{i j} \log (y_{i j}^{d})\eqno{(2)}
$$
where $n$ is the batch size (in this work $n = 5$), $c$ is the number of classes, $\hat{y}_{i j}$ is a binary indicator of whether class label $j$ is the correct classification for pixel $i$, and $y_{i j}^{d}$ is the corresponding predicted probability, normalized to a probability distribution.
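A minimal NumPy sketch of the combined loss in Eqs. (1)–(2), with the 0.5/0.5 weighting stated above. The small epsilon is my addition (not in the paper) and only guards against division by zero and $\log 0$:

```python
import numpy as np

def dice_loss(p, g, eps=1e-8):
    # Eq. (1): 1 - 2*sum(p*g) / (sum(p^2) + sum(g^2))
    num = 2.0 * np.sum(p * g)
    den = np.sum(p ** 2) + np.sum(g ** 2) + eps
    return 1.0 - num / den

def soft_cross_entropy(y_hat, y_pred, eps=1e-8):
    # Eq. (2): -(1/n) * sum_i sum_j y_hat_ij * log(y_pred_ij)
    n = y_hat.shape[0]
    return -np.sum(y_hat * np.log(y_pred + eps)) / n

def total_loss(p, g, y_hat, y_pred):
    # 0.5 * DiceLoss + 0.5 * SoftCrossEntropy, as in the text
    return 0.5 * dice_loss(p, g) + 0.5 * soft_cross_entropy(y_hat, y_pred)

p = np.array([1.0, 1.0, 0.0, 0.0])  # predicted binary volume
g = np.array([1.0, 0.0, 0.0, 0.0])  # ground-truth binary volume
print(round(dice_loss(p, g), 4))    # 0.3333
```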
\subsection{Evaluation Metrics}
The Accuracy (Acc) and the Intersection-over-Union (IoU) are used as the evaluation indicators of our model. The Acc defines the overall accuracy as the probability of correspondence between a positive decision and the true condition. The IoU calculates the intersection over the union of the true label and the predicted result separately for each class. mAcc and mIoU are the averages of Acc and IoU, respectively, across all classes.
$$
\textup{{m}{A}{c}{c}} =\frac{1}{k+1}{\sum_{i=0}^{k}}\frac{p_{i i}}{\sum_{j=0}^{k} p_{i j}} \eqno{(3)}
$$
$$
\textup{{m}{I}{o}{U}}=\frac{1}{k+1} \sum_{i=0}^{k} \frac{p_{i i}}{\sum_{j=0}^{k} p_{i j}+\sum_{j=0}^{k} p_{j i}-p_{i i}}\eqno{(4)}
$$
where $k$ is the number of hand-labeled object classes (in this work, $k = 8$), $p_{i i}$ is the number of pixels of class $i$ that are correctly classified as class $i$, $p_{i j}$ is the number of pixels of class $i$ that are wrongly classified as class $j$, and $p_{j i}$ is the number of pixels of class $j$ that are wrongly classified as class $i$.
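Eqs. (3)–(4) can be computed directly from a confusion matrix; the sketch below uses a toy 3-class matrix rather than the dataset's $k+1 = 9$ classes:

```python
import numpy as np

def macc_miou(conf):
    """Eqs. (3)-(4). conf[i, j] = number of pixels of class i
    predicted as class j (rows: ground truth, cols: prediction)."""
    diag = np.diag(conf).astype(float)     # p_ii
    row = conf.sum(axis=1).astype(float)   # sum_j p_ij
    col = conf.sum(axis=0).astype(float)   # sum_j p_ji
    acc = diag / row                       # per-class accuracy
    iou = diag / (row + col - diag)        # per-class IoU
    return acc.mean(), iou.mean()

# toy 3-class confusion matrix
conf = np.array([[8, 1, 1],
                 [0, 9, 1],
                 [2, 0, 8]])
macc, miou = macc_miou(conf)
print(round(macc, 4), round(miou, 4))  # 0.8333 0.7172
```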
\subsection{Results And Analysis}
The complete quantitative evaluation results of the networks are listed in Tab.~\ref{tab:tab2}. Our FEANet achieves remarkable advantages over the comparison methods in terms of both the mAcc and mIoU indicators. Compared to the SOTA RGB-T semantic segmentation methods, our proposed FEANet shows a great improvement in small target object detection and segmentation, especially for the Guardrail class. Compared to the SOTA FuseSeg-161, the Guardrail class improves by +7.1\% Acc and +0.2\% IoU, and the Color Cone class improves by +5.4\% Acc and +8.4\% IoU. The other objects also achieve good segmentation performance. This indicates that our FEANet detects and segments small target objects more effectively. To further demonstrate the effectiveness of our FEANet, we visualize the prediction maps of our FEANet and the other top-2 methods in Fig.~\ref{fig:fig5}. Experiments show that our FEANet effectively utilizes the RGB and thermal information to produce sharp object boundaries, while the others are disturbed by the background.
\begin{table}[htbp]
\centering
\caption{Inference speed of SOTA networks. ms and FPS represent the time of milliseconds and the speed of Frames Per Second, respectively.}
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}lllllllllllll@{}}
\toprule
\multicolumn{3}{c}{\multirow{2}*{Methods}} & \multicolumn{3}{c}{RTX 2080 Ti} \\
\cmidrule{4-6} & & & \multicolumn{1}{c}{ms} & & \multicolumn{1}{c}{FPS} \\
\midrule
\multicolumn{3}{c}{RTFNet-50} & 11.25 & & 88.87 \\
\midrule
\multicolumn{3}{c}{RTFNet-152} & 30.47 & & 32.81 \\
\midrule
\multicolumn{3}{c}{FuseSeg-161} & 33.32 & & 30.01 \\
\midrule
\multicolumn{3}{c}{FEANet(ours)} & 28.52 & & 35.06 \\
\bottomrule
\end{tabular*}%
\label{tab:tab3}%
\end{table}%
Although our FEANet obtains better results on the tiny object classes, our network still has some limitations. According to Tab.~\ref{tab:tab2}, FuseSeg-161 gets the best results for the Person and Bike classes. This proves the effectiveness of the DenseNet backbone, which can keep the feature-map resolution unchanged in the encoder stream. Compared with RTFNet, our results are better for most object classes; however, RTFNet-152 gets the best indicators for the Car Stop class, which suggests that a deeper network can obtain denser features to improve segmentation performance. As indicated in Tab.~\ref{tab:tab3}, our FEANet achieves real-time inference speed (approximately 35 images/s) on a single NVIDIA GeForce RTX 2080 Ti GPU.
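The latency and frame-rate columns in Tab.~\ref{tab:tab3} are related by $\mathrm{FPS} = 1000/\mathrm{ms}$; e.g., for the FEANet row:

```python
# FPS = 1000 / latency_in_ms; check the FEANet row of Tab. III
latency_ms = 28.52
fps = 1000.0 / latency_ms
print(round(fps, 2))  # 35.06
```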
\subsection{Ablation Study}
\begin{table}[htbp]
\centering
\caption{The comparison of mAcc ($\%$) and mIoU ($\%$) on the test-sets for the NFRTS, NFRS, NFTS and FRTS(ours) variants. The bold font highlights the better results in each scenario.}
\begin{tabular*}{\hsize}{@{}@{\extracolsep{\fill}}lllllllllllll@{}}
\toprule
\multicolumn{3}{c}{\multirow{2}*{Variants}} & \multicolumn{3}{c}{Test-set} \\
\cmidrule{4-6} & & & \multicolumn{1}{c}{mAcc} & & \multicolumn{1}{c}{mIoU} \\
\midrule
\multicolumn{3}{c}{NFRTS} & 63.9 & & 50.0 \\
\midrule
\multicolumn{3}{c}{NFRS} & 69.5 & & 54.5 \\
\midrule
\multicolumn{3}{c}{NFTS} & 65.3 & & 50.6 \\
\midrule
\multicolumn{3}{c}{FRTS(ours)} & \textbf{73.2} & & \textbf{55.3} \\
\bottomrule
\end{tabular*}%
\label{tab:tab4}%
\end{table}%
To verify that our FEAM module is effective at each feature level, we removed the FEAM module from the RGB stream and the thermal stream, respectively, and measured the performance without the attention module. We denote no FEAM in the thermal stream as NFTS, no FEAM in the RGB stream as NFRS, and no FEAM in either stream as NFRTS; FRTS means that FEAM is used in both the RGB and thermal streams. Tab.~\ref{tab:tab4} shows the quantitative comparison results. Comparing NFRTS, NFRS, NFTS, and FRTS, we find that FRTS provides better performance than the other variants on the RGB-T semantic segmentation task. The performance of FRTS also shows that FEAM enhances the fusion of RGB and thermal image information. In this experiment, applying FEAM at every layer brings a universal improvement in detection performance. In addition, we find that the FEAM applied in the thermal stream contributes more to the results.
\section{CONCLUSIONS}
We proposed a novel two-stage FEANet to excavate informative cues from both RGB and thermal images for the semantic segmentation of urban scenes. Specifically, we introduced a FEAM to excavate and enhance informative features from both the channel and spatial views. The experimental results demonstrate that FEANet performs better on small-target-object segmentation and produces sharp object boundaries. The proposed FEANet runs at real-time speed on a single GPU, making it a potential solution for autonomous driving applications. In the future, we would like to fuse more modalities of information (e.g., depth, audio) into the network for further segmentation improvement.
\addtolength{\textheight}{-12cm}
\bibliographystyle{IEEEtran}
Q: Yahoo Messenger vs. UAC

Lately I've needed to limit user rights on some machines, with UAC turned on. We use Vista, and we also use Yahoo Messenger. When I use Yahoo Messenger as an Administrator, everything works fine.
However, when I use Yahoo Messenger under a standard user account, nothing happens. YahooMessenger.exe shows as starting as my user, but no window for me to login or the like.
I've tried to bypass this by having the Task Scheduler start the program as my administrative user at login, but it starts in Session 0 (and thus is not interactive with my desktop).
Any suggestions for allowing the people that use this computer to simply use Yahoo Messenger without giving them administrator access to everything else, as well?
Thanks!
Update: I used cacls on the Yahoo Messenger folder to give the standard user full control of that folder. This had no effect.
A: I'd start with giving the limited users read/write/modify rights to the Yahoo Messenger directory on the disk. If that doesn't fix it, track down Yahoo Messenger in the registry (probably under HKLM --> Software) and give the limited users read/write/modify rights for any keys there.
If none of that works, consider switching to something like Pidgin. I know from personal experience that this will work in a limited environment.
How to make a series circular-shift invariant

I have a series of numbers. They may be integer, floating-point, or binary, and the length of the series is constant. I need a criterion that is invariant under circular shifts. For illustration, consider the series S1 through S5:

S1 = {A, B, C, D, E, F}
S2 = {F, A, B, C, D, E}
S3 = {D, E, F, A, B, C}
S4 = {A, C, D, F, E, B}
S5 = {A, B, C, D, E, G}

where A, B, C, D, E, F, and G are numbers. S2 and S3 result from circular shifts of S1, so the criterion should return the same value for all three series. S4 contains the same numbers as S1, S2, and S3, but in a different order. S5 and S1 are similar, so I want the criterion values for S1 and S5 to be similar as well.

I want to use these as descriptors in image processing; in other words, I want to make a feature (like LBP or HOG) rotation invariant. For example, mean and variance can be used, but I need more. Are there any suggestions?

• So what is your problem/question? You are describing the situation and mentioning something about criteria, but it is unclear what you want to achieve and why you are not able to do this. – Irreducible Jun 6 '18 at 10:47
• I need criteria for comparing these series, as I said. These criteria should be circular-shift invariant. – Babak.Abad Jun 6 '18 at 11:18
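One classical criterion with exactly this property is the magnitude of the discrete Fourier transform: it is unchanged by circular shifts of the input but still sensitive to general reorderings. A quick NumPy check (illustrative only, using concrete numbers in place of A..G):

```python
import numpy as np

def shift_invariant_descriptor(series):
    """Magnitude of the DFT: identical for all circular shifts of the input."""
    return np.abs(np.fft.fft(np.asarray(series, dtype=float)))

s1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
s2 = np.roll(s1, 1)                   # a circular shift of s1
s4 = [1.0, 3.0, 4.0, 6.0, 5.0, 2.0]   # same numbers, different order

d1 = shift_invariant_descriptor(s1)
d2 = shift_invariant_descriptor(s2)
d4 = shift_invariant_descriptor(s4)

assert np.allclose(d1, d2)       # invariant under circular shifts
assert not np.allclose(d1, d4)   # still sensitive to reordering
```

Note that small changes to one element (as in S5) change the spectrum only slightly, so nearby series keep nearby descriptors.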
\section{Conclusion}
\vspace{-0.5em}
We propose a self-supervised method for point cloud completion via inpainting and random region removal that can be trained using only LiDAR-based partial point clouds. Our method produces significantly more accurate point cloud completions and outperforms the previous unsupervised methods on ShapeNet and Semantic KITTI. Through exhaustive ablation, we show the importance of each component of our method and the robustness to alignment errors. While the current method uses intersecting half-spaces defined by coordinate planes, other methods for point cloud partitioning can be explored in future work. We hope that our method will improve real-world 3D object understanding.
\vspace{-1em}
\section{Acknowledgements}
\vspace{-0.5em}
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1849154, and the CMU Argo AI Center for Autonomous Vehicle Research.
\section{Method}
\vspace{-0.5em}
The point cloud completion problem can be defined as follows: given an incomplete set of sparse 3D points $X$, sampled from a partial view of an underlying dense object geometry $G$, the goal is to predict a new set of points $Y$, which mimics a uniform sampling of $G$.
\input{images/model-diagram}
\vspace{-1em}
\subsection{Self-supervised Inpainting}
\label{sec:inpainting}
\vspace{-0.5em}
Our self-supervised, inpainting-based approach learns to complete full point clouds using only partial point clouds. We randomly remove regions of points from a given partial point cloud and train the network to inpaint these synthetically removed regions. The original partial point cloud is then used as a pseudo-ground truth to supervise the completion. Since we do not have the complete ground-truth point cloud, supervision is only applied to the regions of the original point cloud that contain points (i.e., unoccluded regions).
The network leverages information from the available regions across samples and embeds each region separately, which allows it to generalize across partially occluded samples with different missing regions. Further, due to the stochastic nature of region removal, the network cannot easily differentiate between the synthetic and the original occlusions of the input partial point cloud, which pushes it to learn to complete the point cloud everywhere. Thus, the combination of inpainting, random region removal, and region-specific embeddings enables the model to generate all the regions and create a complete point cloud.
\vspace{-1em}
\subsection{Network Architecture}
\vspace{-0.5em}
Figure~\ref{fig:model_diagram} depicts the architecture and training flow of our network, Point Cloud Partition-and-Completion Network (PointPnCNet). We use a multi-level encoder-decoder architecture to allow the network to focus on different parts of an object. We present the evaluations of various alternate designs of our method in the appendix.
\vspace{-0.5em}
\subsubsection{Multi-Level Encoder}
\label{sec:multi-scale-encoder}
\vspace{-0.5em}
Our encoder consists of multiple, parallel encoder streams that encode the input partial point cloud at global and local levels. The global-level encoder operates on the full scope of the object, while a local-level encoder focuses on a particular region of the object. Since a local encoder only sees points in a given local region and is invariant to other parts of the shape which may be missing, local encoders make the network robust to occlusions by focusing on individual object parts separately. The global encoder further enhances shape consistency by attending to all regions jointly.
Given a partial point cloud, we estimate its canonical frame using a learned method~(Sec.~\ref{sec:experiments}) and transform it to obtain a canonicalized partial point cloud $X$. We show that our method is robust to errors in this canonicalization~(Sec.~\ref{sec:robustness}).
We then partition the canonicalized partial point cloud using intersecting half-spaces that are produced by the coordinate planes after canonicalization. This effectively separates the space into eight 3D octants as shown in Figure~\ref{fig:model_diagram}. While other types of partitioning could be used, we found this subdivision to be simple and reasonably effective. Rather than a strict partitioning, we allow a small overlap between neighboring regions such that points in the overlap are present in both regions. This helps to avoid seams at boundaries.
Let $X_{i}$ consist of the points in the region $i$. After partitioning, we remove points of any particular region with a probability $p$ to simulate a synthetic occlusion. We use this synthetically occluded point cloud $\hat{X}$ as input to our inpainting network.
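The partitioning and random removal described above can be sketched as follows (a minimal NumPy illustration, not our actual implementation; function names and the exact boundary handling are illustrative):

```python
import numpy as np

def partition_octants(points, overlap=0.02):
    """Split an (N, 3) canonicalized cloud into the 8 octant regions
    defined by the coordinate planes. Points within `overlap` of a
    plane fall into both neighboring regions to avoid seams."""
    regions = []
    for sx in (-1, 1):
        for sy in (-1, 1):
            for sz in (-1, 1):
                mask = ((sx * points[:, 0] >= -overlap) &
                        (sy * points[:, 1] >= -overlap) &
                        (sz * points[:, 2] >= -overlap))
                regions.append(points[mask])
    return regions

def remove_regions(regions, p=0.2, rng=None):
    """Synthetic occlusion: drop each region independently with
    probability p. The boolean array `keep` plays the role of the
    indicator function 1_i used later in the losses."""
    rng = rng or np.random.default_rng()
    keep = rng.random(len(regions)) >= p
    return [r for r, k in zip(regions, keep) if k], keep
```

The surviving regions are then concatenated to form the synthetically occluded input $\hat{X}$.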
The points in the remaining regions are aggregated together and passed as input to the global encoder, $E_{g}$, to give a global embedding $e_g$~(Figure~\ref{fig:model_diagram}).
In parallel, each remaining region $X_{i}$ is separately encoded by a local encoder, $E_{\ell}$ to obtain a local embedding for that region, $e^{i}_{\ell}$. To aggregate the local feature embeddings, each embedding $e^{i}_{\ell}$ is fed as input into an attention module, consisting of an MLP layer, that generates a set of weights $w_{i} = \phi (e^{i}_\ell)$. These weights are used to weigh each of the embeddings $e^{i}_{\ell}$ in a linear combination to form the aggregate embedding $P_\ell = \sum_{i} \mathbbm{1}_i w_{i} \, e^{i}_\ell$, where $\mathbbm{1}_i$ is an indicator function which equals 1 if region $i$ is present~(i.e. present in the original partial point cloud $X$ and not randomly removed) and 0 otherwise.
We then perform a channel-wise max-pooling across the global encoding $e_g$ and the attention-weighted local encoding $P_\ell$, as $P = \max (e_g, P_\ell)$.
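The attention-weighted aggregation and max-fusion above can be written compactly as follows (a NumPy sketch under our notation; `phi` stands in for the MLP attention module and is passed in as a generic callable):

```python
import numpy as np

def fuse_embeddings(e_global, e_local, present, phi):
    """Fuse global and per-region embeddings.

    e_global: (D,) global embedding e_g
    e_local:  (R, D) per-region local embeddings e_i
    present:  (R,) 0/1 indicator 1_i for regions that survive occlusion
    phi:      callable mapping a (D,) embedding to a scalar weight w_i
    """
    weights = np.array([phi(e) for e in e_local])             # w_i = phi(e_i)
    # P_l = sum_i 1_i * w_i * e_i  (attention-weighted linear combination)
    p_local = (present[:, None] * weights[:, None] * e_local).sum(axis=0)
    return np.maximum(e_global, p_local)                      # channel-wise max
```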
\vspace{-0.5em}
\subsubsection{Multi-Level Decoder}
\vspace{-0.5em}
Our decoder consists of multiple decoder streams that work in parallel to decode the fused embedding $P$ (Figure~\ref{fig:model_diagram}). The global decoder $D_{g}$ takes the embedding $P$ as input and attempts to generate an entire completed point cloud $Y_g$.
In parallel, we use a local decoder $D_{\ell}$ to decode the points in each region of the input space. The embedding $P$ is concatenated with a one-hot vector indicating each region's location to create a region-specific embedding. Through this one-hot encoding, the decoder specializes in completing each region and learns region-specific representations. The decoder takes these region-specific embeddings as input and generates a subset of the output point cloud localized to the respective region, $Y_\ell^i$. The generated local regions are combined to obtain the full point cloud $Y_\ell$.
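The region conditioning amounts to appending a one-hot region code to the fused embedding before decoding; a minimal sketch (array sizes are illustrative):

```python
import numpy as np

def region_specific_embeddings(P, num_regions=8):
    """Concatenate the fused embedding P with a one-hot region code so
    the shared local decoder D_l can specialize per region."""
    eye = np.eye(num_regions)
    return [np.concatenate([P, eye[i]]) for i in range(num_regions)]
```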
The multi-level output generated by the network captures the details of the object at global and local levels. The outputs of the multi-level decoder streams, $D_{\ell}$ and $D_{g}$, are concatenated to form the final prediction of our network as $Y$.
\vspace{-1em}
\subsection{Point Cloud Completion Losses}
\label{sec:losses}
\vspace{-0.5em}
The standard loss used for comparing two point clouds is the Chamfer Distance (CD). It is a bi-directional, permutation-invariant loss that measures, for each point, the distance to its nearest neighbor in the other cloud. In our method,
we use an asymmetric Weighted Chamfer Distance loss, $\mathcal{L}_{wcd}$, defined as,
\begin{equation}
\footnotesize
\mathcal{L}_{wcd} (X, Y) = \frac{(1 - \beta)}{|X|}\sum_{x \in X} \min_{y \in Y}\|x - y\|_2 +
\frac{\beta}{|Y|}\sum_{y \in Y} \min_{x \in X} \|y - x\|_2
\label{equ:cd_global_loss}
\end{equation}
\noindent where $X$ is the original partial point cloud used here as pseudo-ground truth and $Y$ is the output.
Importantly, we only compute the loss for the regions that are present in $X$. A weight of $(1-\beta)$ is applied to the first term in Eqn.~\ref{equ:cd_global_loss} which penalizes the distance from each point in $X$ to its nearest neighbor in $Y$. This term enforces that the output point cloud $Y$ should contain points that are close to those in $X$. Note that the input to the network is $\hat{X}$, which has synthetic occlusions, not $X$, which is the original partial point cloud. A weight of $\beta$ is applied to the second term in Eqn.~\ref{equ:cd_global_loss} to penalize the distance from each point in $Y$ to its nearest neighbor in $X$.
We do not expect this term to reach 0 for a well-trained network since $X$ only contains a partial point cloud, while output $Y$ contains the entire point cloud; we still find it a helpful regularization.
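For reference, Eqn.~\ref{equ:cd_global_loss} can be implemented directly; the following is a brute-force NumPy sketch for small clouds, not an optimized training implementation:

```python
import numpy as np

def weighted_chamfer(X, Y, beta=0.5):
    """Asymmetric Weighted Chamfer Distance.

    X: (N, 3) pseudo-ground-truth partial cloud
    Y: (M, 3) predicted cloud
    """
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    term_xy = d.min(axis=1).mean()   # each x to its nearest y
    term_yx = d.min(axis=0).mean()   # each y to its nearest x
    return (1.0 - beta) * term_xy + beta * term_yx
```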
We impose the following variants of $\mathcal{L}_{wcd}$ on the model:
\textbf{Inpainting-Global Loss:} This loss acts as a \textit{global shape loss}, focusing on the overall shape of an object. We impose it as the Weighted Chamfer Distance (Eqn.~\ref{equ:cd_global_loss}) between original partial point cloud $X$ and output of the global decoder $Y_{g}$ and define it as $\mathcal{L}_{wcd} (X, Y_{g})$.
\textbf{Inpainting-Local Loss:} We impose Inpainting-Local loss as the Weighted Chamfer distance between each region output $Y_\ell^i$ from local decoder $D_{\ell}$ and corresponding partitioned region $X_i$ in the original partial point cloud $X$ where $i$ indexes over regions. While Inpainting-Global loss considers the entire $X$ to find the nearest neighbor, Inpainting-Local loss differs in that it only considers the partitioned region $X_i$ to find the nearest neighbor. Thus, it acts as a \textit{local shape loss} that enables the network to learn region-specific shapes and embeddings and focus on the finer details of an object. We do not penalize regions that are missing in $X$ where a region is considered missing if the number of points in that region is below a certain threshold. The Inpainting-Local Loss is therefore defined as, $\sum_{i}\mathbbm{1}_i \cdot \mathcal{L}_{wcd} (X_{i}, Y_\ell^i)$ where the indicator function $\mathbbm{1}_i$ equals one if region $i$ is present in $X$ and zero otherwise.
\textbf{Multi-View Consistency:} Similar to Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}, our method uses multi-view consistency as an auxiliary loss. Their method explicitly performs pose estimation. Similarly, we perform an estimated pose canonicalization (weakly supervised, Sec.~\ref{sec:experimental_setup}). We also show later~(Sec.~\ref{sec:robustness}) that our method is robust to canonicalization errors. During training, we sample a view $k$ from $V$ partial views of an object $X$ given as $X^{1}, \ldots, X^{V}$. Since all the views of an object correspond to the same object, for input partial point cloud $X^{k}$, the loss is computed with all views $X^{1}, \ldots, X^{V}$. We define a global inpainting multi-view consistency loss as $\sum_{j=1}^{V}\mathcal{L}_{wcd} (X^{j}, Y_{g}^{k})$ where $X^{j}$ is the $j^{th}$ view of $X$ and $Y_{g}^{k}$ is the global output from decoder $D_{g}$. We also define local inpainting multi-view consistency loss as $\sum_{i}\sum_{j=1}^{V}\mathbbm{1}^j_i \cdot \mathcal{L}_{wcd} (X^{j}_{i}, Y_\ell^{i,k})$ where $i$ indexes over regions, $X^{j}_{i}$ is the $i^{th}$ region of view $X^j$,
$Y_\ell^{i,k}$ is the region output from local decoder $D_{\ell}$ for input $X^{k}_{i}$, and $\mathbbm{1}^j_i$ is $1$ if region $i$ is present in $X^{j}$ and $0$ otherwise.
During training, we sum the losses as $\sum_{j=1}^{V}\mathcal{L}_{wcd} (X^{j}, Y_{g}^{k}) + \sum_{i}\sum_{j=1}^{V}\mathbbm{1}^j_i \cdot \mathcal{L}_{wcd} (X^{j}_{i}, Y_\ell^{i,k})$. This multi-view information is only available at training time.
\section{Introduction}
\vspace{-0.5em}
Autonomous vehicles often understand the world around them using depth sensors such as LiDAR. However, the LiDAR point clouds are often incomplete even when recorded from multiple viewpoints over time.
To accurately track objects and plan routes to avoid collisions, it is important for autonomous vehicles to understand the complete shape of surrounding objects.
Previous methods~\cite{yuan2018pcn, tchapmi2019topnet, wang2020cascaded, xie2020grnet, wen2020point} have learned to complete partial point clouds, but they strongly rely on the availability of ground-truth complete shapes as supervision. Since complete point clouds are costly to obtain in real-world scenarios, these methods typically train only on simulated data where ground-truth completions are available. This limitation motivates our approach, which learns to complete shapes from partial point clouds alone, without ever observing ground-truth completed point clouds during training.
To this end, our method leverages self-supervision via an inpainting-based approach where we randomly remove regions from the partial point clouds and train the network to complete the entire point cloud. Across multiple training examples, different regions will be occluded and varying regions will be synthetically removed. Because the network does not know which regions were artificially removed and which were naturally occluded in each original partial point cloud, the network learns to attempt to complete the entire point cloud.
In contrast to images, where a mask can specify the region to inpaint~\cite{liu2020rethinking, pathakCVPR16context, yu2018generative, yeh2017semantic, hong2019deep}, the unstructured nature of point clouds makes it challenging to define which regions the network needs to inpaint. We solve this using a region-aware loss which penalizes only the regions where the original point cloud was present. Additionally, we partition the point cloud into local regions using intersecting half-spaces and encode/decode them separately. This allows the network to learn data-driven embeddings separately for each local region that specialize in individual region-level object parts. We also encode/decode at the global point cloud level to further allow the network to focus on all regions jointly. Previous works \cite{insafutdinov2018unsupervised, gu2020weakly} depend on aligning multiple viewpoints of an object during training, which can be sensitive to pose alignment errors. While we also incorporate a multi-viewpoint loss, we show that our use of inpainting allows our method to be robust to alignment errors.
The key contributions of this paper are as follows: 1) We present a novel inpainting-based self-supervised algorithm that learns to complete missing local regions in an incomplete point cloud without the need for ground-truth point cloud completions; 2) our multi-level encoder-decoder based architecture, PointPnCNet, partitions the point clouds to learn local and global embeddings to obtain improved completion performance; 3) our approach outperforms existing methods for unsupervised point cloud completion~\cite{insafutdinov2018unsupervised, gu2020weakly} when evaluated on the standard completion benchmarks of ShapeNet~\cite{chang2015shapenet} and SemanticKITTI~\cite{behley2019semantickitti}.
\section{Related Work}
\vspace{-0.5em}
\textbf{Supervised Point Cloud Completion} Most of the existing 3D shape completion methods~\cite{yuan2018pcn, tchapmi2019topnet, wang2020cascaded, xie2020grnet, wen2020point, wang2020softpoolnet, huang2020pf} make use of complete ground-truth shape labels. A common approach for point cloud completion is to use an encoder-decoder style architecture~\cite{yuan2018pcn,liu2020morphing,xie2020grnet,wang2020point}. On the other hand, Tchapmi \emph{et al}\bmvaOneDot~\cite{tchapmi2019topnet} proposed to generate a point cloud using a hierarchical rooted tree structure. Our architecture builds on the typical encoder-decoder style of previous work~\cite{yuan2018pcn}. In contrast to the above supervised methods, our proposed approach does not require ground truth annotations. This allows our method to be trained using LiDAR data in the wild, as opposed to the previous methods which are trained only with simulated data.
\textbf{Weakly-Supervised Methods}
Recently, Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} proposed a weakly-supervised approach for point cloud completion where the pose of the input partial point cloud and 3D canonical shape are jointly optimized. Their method is weakly-supervised via multi-view consistency among the multiple partial observations of the same instance. Our method also uses partial point clouds, however, using our inpainting-based approach, our method is able to learn a more accurate completion and is robust to view alignment errors. Other methods also learn 3D shape reconstruction using weak supervision~\cite{yan2016perspective, zhu2017rethinking, tulsiani2018multi}. Among these, Differentiable Point Clouds~(DPC)~\cite{insafutdinov2018unsupervised} jointly predicts camera poses and a 3D shape representation given two image views of the same instance. The geometric consistency between the estimated 3D shape and the input images is enforced using an end-to-end differentiable point cloud projection. We show in the results that we significantly outperform this method.
\textbf{Image Inpainting}
In the area of image inpainting~\cite{liu2020rethinking, pathakCVPR16context, yu2018generative, yeh2017semantic, hong2019deep, zhan2020self}, Zhan \emph{et al}\bmvaOneDot~\cite{zhan2020self} proposed self-supervised partial completion networks (PCNets) to complete an occluded object's mask and content in the input image. Our method takes inspiration from Zhan \emph{et al}\bmvaOneDot~\cite{zhan2020self} for shape completion in 3D point clouds.
However, due to the structured nature of images, often a mask can be used to specify the region to inpaint. In 3D point clouds, where the data is unstructured and sparse in nature, it is difficult to specify a ``mask'' for the regions to inpaint. There is ambiguity between regions that have been ``masked out'' and regions that are naturally occluded, making the task of inpainting challenging for point cloud data.
\textbf{Point Cloud Inpainting} Some of the previous methods~\cite{fu2018point, yu2020point, fu20203d, zhao2020pui, chen2020point, hu2019local} have also explored inpainting in the point cloud domain. However, these methods either use ground truth during training~\cite{yu2020point, zhao2020pui}, rely on template-matching within a data sample~\cite{fu2018point, fu20203d, hu2019local}, or project a point cloud into 2D structured representation~\cite{chen2020point}. Our method is novel in the sense that it uses inpainting directly on the point clouds without any ground truth information while leveraging large datasets to learn domain-specific priors.
\section{Appendix}
\subsection{Architecture Details}
For all results and ablations, we keep the output size of our network as 8192 points, where the global decoder $D_{g}$ generates 4096 points and the local decoder $D_{\ell}$ generates 512 points for each region, for an overall size of 4096 points across all local regions. Similarly, the input size is kept consistent for all the ablations; that is, the input size is 3096 points for the global encoder $E_{g}$ and 387 points per region for the local encoder $E_{\ell}$. Each region in $X$ is dropped with a probability of removal of 20\%, and the resulting synthetically occluded point cloud $\hat{X}$ is passed to the global encoder $E_{g}$. In parallel, the input partial point cloud is subdivided into 8 regions along the axial planes of the canonical frame. Each region not artificially removed or marked as missing is then independently encoded using the local encoder, $E_{\ell}$. When encoding each region of the input cloud, regions that are marked as missing based on the threshold number of points are replaced with zeros equal to the threshold. In our method, we set this threshold to 4 points. We allow a small overlap of 0.02 cm between neighboring regions for the ShapeNet dataset and 0.02 m for the KITTI dataset. The architectures of the global encoder $E_{g}$ and global decoder $D_{g}$ are similar to PCN~\cite{yuan2018pcn}. For the local encoder $E_{\ell}$ and local decoder $D_{\ell}$, we use the architecture of PCN~\cite{yuan2018pcn}, but reduce the number of hidden units to 1/8th of the original number. We use the Adam optimizer with a learning rate of $1 \times 10^{-4}$ and train our network for 400K iterations.
\subsection{Data preparation}
\textbf{ShapeNet}: We obtain a point cloud from the RGB-D data by backprojecting 2.5D depth images to 3D, similar to Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}. In contrast to DPC~\cite{insafutdinov2018unsupervised}, we do not use the color information. The centers of the oriented clouds are then shifted to the origin before passing them to our shape completion network. Specifically, we use the 3D partial shape classification branch of IT-Net pre-trained on ModelNet40 to generate the pose transformations, as it does not require ground-truth pose annotations for training. Since our method does not require perfect pose alignment, using IT-Net pretrained on ModelNet40 instead of ShapeNet is sufficient for our purpose, as it represents an off-the-shelf canonical frame estimator for our model classes. We refer the reader to IT-Net~\cite{yuan2018iterative} for details on this pose canonicalization method.
Originally, the ShapeNet~\cite{chang2015shapenet} dataset has 5 views. When training on $N$ views, we only consider a fixed set of $N$ random views, which is chosen at the beginning of training; the network is only trained on these $N$ views and the other views of an object are discarded.
\textbf{Semantic KITTI:} At training time, we subdivide the observations of a single instance into groups of 20 sequential observations and randomly sample a set of four views for multi-view training. When evaluating accuracy on this dataset, all 20 frames are combined using ground truth odometry to form the ground truth shape of each instance. This merged cloud is only used for evaluation and is not present during training. At inference time, only a single view is used.
\subsection{Ablation Studies}
In this section, we present a more exhaustive ablation study covering the number of training views, architecture changes, and the number of input points used for training, and we detail the ablation on densification of input point clouds for the KITTI dataset.
\subsubsection{Number of views}
We evaluate the sensitivity of our method to the number of views available at training time in Supplementary Figure~\ref{fig:views_plot}. We show the results both with and without inpainting in \textcolor{green}{green} and \textcolor{red}{red} lines respectively. It can be observed that our model is able to outperform the baseline with 2 views and 3 views, even though the baseline Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} is trained with 4 views. This demonstrates that our method is able to take advantage of a reduced number of views, due to our use of inpainting.
We also show the qualitative results with varying numbers of training views in the Supplementary Figure~\ref{fig:views_results}; as can be seen, the results of 2 views and 3 views are qualitatively very similar to the results with 4 views.
\subsubsection{Architecture Changes}
\paragraph*{Global and Local Encoders and Decoders}
We analyze whether to use both global and local encoders and decoders in our network. The results can be found in Supplementary Table~\ref{tbl:shapenet_ablation}. It can be observed that a combination of global and local encoders and decoders gives the best performance among all the possible combinations.
\paragraph*{Number of levels} In addition to the two levels in our parallel model (global and local), we experiment with adding another branch where the partial point cloud is partitioned into $3\times 3\times 3$ regions. For this branch, we use an independent local encoder and decoder. The input size of a region to the encoder is taken as 115 points (to maintain a total input size of 3096) and the size of the predicted point cloud is 152 points for each region (to maintain a total output size of 4096 for the local decoder). For computing the loss, we divide the original input (before dropping points) into regions and subsample the points to have at most 304 points in each region. The results are in Supplementary Table~\ref{tbl:more_ablations}. We notice that further partitioning of the partial point cloud and the additional branch do not give a significant improvement in the performance.
\subsubsection{Number of input points}
We evaluate the effect of the number of points in the point cloud on the performance of our method. To test this, we create new versions of the test set with varying numbers of points; for each object, we resample the point cloud (without replacement) from the input point cloud with a varying number of sampled points.
We evaluate the Chamfer Distance metric as a function of the number of points in the input point cloud on the ShapeNet and KITTI~\cite{behley2019semantickitti} dataset during testing. We evaluate our method on the number of points ranging from 100 to 4000 and present the results in Figure~\ref{fig:numpoints_chamfer}. As expected, performance degrades as we reduce the number of available points.
\subsubsection{Densification of KITTI point clouds}
\label{sec:Densification of KITTI point clouds}
To evaluate the quantitative effects of simply densifying the input point cloud without completing occluded regions, we design a simple densification method. For each point in the input partial point cloud, we find its 10 nearest neighbors and estimate the eigenvalues of this local neighborhood. An ellipsoid is formed using these values and points are uniformly sampled within this volume. This approximates the local surface. From Table 2 of the main paper, the improvement of our method over the results of this densification method demonstrates that our model is completing the partial point clouds rather than simply densifying the partial input cloud.
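This densification baseline can be sketched as follows (a NumPy illustration; details such as the ellipsoid scaling and the number of samples per point are illustrative choices, not prescribed by the procedure above):

```python
import numpy as np

def densify(points, k=10, samples_per_point=5, rng=None):
    """Baseline densification: fit a local ellipsoid to each point's
    k-NN neighborhood via the covariance eigendecomposition and sample
    uniformly inside it, approximating the local surface."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    new_pts = []
    for i, p in enumerate(points):
        nbrs = points[np.argsort(d[i])[:k]]                # k nearest neighbors
        cov = np.cov((nbrs - nbrs.mean(0)).T)              # 3x3 local covariance
        vals, vecs = np.linalg.eigh(cov)
        radii = np.sqrt(np.maximum(vals, 1e-12))           # ellipsoid semi-axes
        # uniform samples in the unit ball, stretched into the ellipsoid
        u = rng.normal(size=(samples_per_point, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        u *= rng.random((samples_per_point, 1)) ** (1.0 / 3.0)
        new_pts.append(p + (u * radii) @ vecs.T)
    return np.concatenate([points] + new_pts)
```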
\subsubsection{Performance Analysis with respect to Occlusions}
We conduct an experiment to assess the impact of occlusions in the input partial point clouds on the ability of the model to complete the given shape. To do so, we introduce artificial occlusions by removing a certain number of regions from the input during testing (we have divided the input into 8 total regions). Given that the original input is already naturally occluded, we artificially remove at most three regions because beyond that, the input is barely visible. The results are shown in Table~\ref{tbl:num_region}; we can observe that as the number of artificial occlusions in the input increases, there is a slight drop in performance for all categories. However, the model remains fairly robust to the additional occlusions.
\subsection{Metrics}
In this section, we report different metrics for further analysis of our method.
\subsubsection{Precision and Coverage of observed and unobserved regions}
For a detailed analysis, we compute the precision and coverage of the observed and unobserved regions of the input point cloud. To categorize points as observed or unobserved, we compute the distance between each point in the predicted point cloud and its nearest neighbor in the input point cloud. We compute the mean and standard deviation of these distances for each point cloud and use the mean plus one standard deviation as a threshold. Points with a nearest-neighbor distance greater than this threshold are considered unobserved, while all other points are considered observed. The precision and coverage are computed separately for each of these types of points and we report the results in the Supplementary Table~\ref{tbl:obs_unobs}. As expected, we find that the precision and coverage of the observed regions are slightly better than that of unobserved regions in the input partial point cloud; however, the results are relatively similar for the observed and unobserved regions, which provides further evidence that we are completing (and not just densifying) the input (see also Section~\ref{sec:Densification of KITTI point clouds}).
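The observed/unobserved split described above reduces to a few lines (a NumPy sketch for small clouds):

```python
import numpy as np

def split_observed(pred, inp):
    """Label predicted points as observed (True) or unobserved (False)
    by thresholding the nearest-neighbor distance to the input at
    mean + 1 standard deviation."""
    d = np.linalg.norm(pred[:, None] - inp[None, :], axis=-1).min(axis=1)
    thresh = d.mean() + d.std()
    return d <= thresh
```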
\subsubsection{F1-Score}
Following Xie~\emph{et al}\bmvaOneDot~\cite{xie2020grnet}, we evaluate the F1-score@1\%, which is the harmonic mean between precision and recall, on the ShapeNet dataset. In this context, ``precision'' is the percentage of the points in the predicted point cloud that are within a specified distance threshold of the ground truth. ``Recall'' is the percentage of the points in the ground-truth point cloud that are within the distance threshold of the predicted point cloud. Precision measures the accuracy of the prediction, and recall measures its coverage. For this metric, we use ${d = 1\%}$ of the side length of the predicted point cloud. It can be observed from Supplementary Table~\ref{tbl:shapenet_f1score} that our method is able to outperform the baseline DPC~\cite{insafutdinov2018unsupervised} when evaluated on this metric. We do not report results for Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} since their code is not open-source.
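This metric can be computed directly from the pairwise distances (a brute-force NumPy sketch for small clouds):

```python
import numpy as np

def f1_score(pred, gt, threshold):
    """F1-score at a distance threshold: harmonic mean of precision
    (fraction of predicted points near the ground truth) and recall
    (fraction of ground-truth points near the prediction)."""
    d = np.linalg.norm(pred[:, None] - gt[None, :], axis=-1)
    precision = (d.min(axis=1) < threshold).mean()
    recall = (d.min(axis=0) < threshold).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```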
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{images/supp_views_line_new.pdf}
\caption{Quantitative results on the number of views (1, 2, and 3) used during network training (with inpainting in \textcolor{green}{green} and without inpainting in \textcolor{red}{red}). Our original method trains on 4 views. All reported values are the average Chamfer Distance over the ShapeNet~(Airplane, Car, Chair) and KITTI datasets. We are able to outperform the baseline using a limited number of views due to our use of inpainting.}
\label{fig:views_plot}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\columnwidth]{images/multi_view_supp_v2.pdf}
\caption{Qualitative results on varying the number of views given as input to PointPnCNet. The first, second, and third rows show results on the ShapeNet test sets of car, chair, and plane, respectively; the fourth row shows results on the Semantic KITTI~\cite{behley2019semantickitti} dataset. As can be seen, the results with 2 and 3 views are qualitatively very similar to the results with 4 views. This demonstrates that our method is able to take advantage of a reduced number of views, due to our use of inpainting.}
\label{fig:views_results}
\end{figure*}
\subsubsection{Uniformity Metric}
We also evaluate the uniformity metric following Xie~\emph{et al}\bmvaOneDot~\cite{xie2020grnet} on the ShapeNet and KITTI datasets. In the Supplementary Table~\ref{tbl:shapenet_uniformity}, we compare our method with the baseline DPC on the ShapeNet dataset. Our method performs similarly to the baseline on this metric, revealing that both methods have similar uniformity of predicted points.
For the KITTI dataset, we compare our method with the ablation of our method without inpainting, as DPC does not train and evaluate on KITTI and Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} do not have open-source code. We report the results on KITTI in Supplementary Table~\ref{tbl:kitti_uniformity} and show the improvement in the performance of our model when using inpainting.
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0 0 0 0, clip,width=0.6\textwidth]{images/subplots_numpoints_vs_chamfer.pdf}
\caption{Quantitative Results of the Chamfer Distance metric with respect to the number of points in the input point cloud during testing.}
\label{fig:numpoints_chamfer}
\end{figure}
\subsection{Qualitative Results}
We present additional visualizations of the complete predicted point cloud generated by our network, PointPnCNet.
\textbf{Cars:} As can be observed from the Supplementary Figure~\ref{fig:shapenet_car_success}, our model is able to complete the finer details of a car, such as the headlights, and generates a more defined outer boundary in comparison to DPC~\cite{insafutdinov2018unsupervised}. We also show that our network is able to complete not only the shapes of typical cars but also the shape of a truck, as shown in the third row of Supplementary Figure~\ref{fig:shapenet_car_success}. We show a few failure cases as well on the car category in the Supplementary Figure~\ref{fig:shapenet_car_fail}. Our method is unable to create the detailed shapes of various sports cars. Further, for the truck in the second row, our method fails to create a gap between the front and back of the truck.
\textbf{Chairs:} Supplementary Figure~\ref{fig:shapenet_chair_success} shows that our method generates finer completion results than DPC~\cite{insafutdinov2018unsupervised} on different types of chairs, such as sofas and desk chairs. It is able to complete the front, back, and arms of the chair. There are also a few failure cases where the network generates noisy results, especially near the legs of a chair, as seen in Supplementary Figure~\ref{fig:shapenet_chair_fail}.
\textbf{Airplanes:}
From Supplementary Figure~\ref{fig:shapenet_plane_success}, we observe that the network is able to complete the front, back, and wings of the planes. Supplementary Figure~\ref{fig:shapenet_plane_fail} shows some failure cases in which it also generates some noisy points near the wings of the planes.
\textbf{KITTI:} We show visualizations where our network is able to complete the partial point clouds of cars from the LiDAR scans of the Semantic KITTI dataset in the first and second rows of Supplementary Figure~\ref{fig:kitti_supp}. Additionally, there are a few failure cases where the network is unable to generate fine details, such as the tire of a car, as seen in the third and fourth rows of Supplementary Figure~\ref{fig:kitti_supp}. We also show the completion results of the partial point clouds in a scene in the Supplementary Figure~\ref{fig:kitti_scene}.
\textbf{ShapeNet Categories:} We present qualitative results on 5 other categories of the ShapeNet dataset (Cabinet, Lamp, Sofa, Table, and Vessel) in Figure~\ref{fig:shapenet_categories}. We compare the results of our method with our ablation without inpainting. It can be observed that our method is able to complete the shape of the incomplete point clouds, whereas our method without inpainting outputs noisy points.
\subsection{Comparison with supervised method}
To analyze the performance gap between self-supervised and fully supervised methods, we compare our method with a fully supervised method, PCN~\cite{yuan2018pcn}, on 8 categories of the ShapeNet dataset and present the results in Table~\ref{tbl:pcn_sup_self_sup}. Since our method builds on the architecture of PCN, we compare against fully supervised PCN; the choice of architecture is somewhat orthogonal to our proposed method of inpainting. We observe that fully supervised PCN outperforms our self-supervised method, as expected. However, our results indicate that our method has reduced the gap between self-supervised and fully supervised approaches. In Table~\ref{tbl:pcn_sup_self_sup}, we also compare our method to the ``no inpainting'' ablation across the 8 object categories of ShapeNet and show a consistent improvement in performance.
\input{tables/shapenet_ablation}
\input{tables/num_occlusions}
\input{tables/more_ablations}
\input{tables/obs_unobs}
\input{tables/shapenet_f1score}
\input{tables/shapenet_uniformity}
\input{tables/kitti_uniformity}
\input{tables/pcn_sup_self_sup}
\newpage
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/sucess_car_supp.pdf}
\caption{Success cases on the Car category of the ShapeNet dataset. It can be observed that our model is able to complete the finer details of a car such as the headlight of a car and generates a detailed outer boundary in comparison to DPC~\cite{insafutdinov2018unsupervised} in all the rows. It is also able to generate the shape of a truck as can be seen in the third row.}
\label{fig:shapenet_car_success}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_car_supp.pdf}
\caption{Failure cases on the Car category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. Our method fails to create the detailed shapes of various sports cars. Further, for the truck in the second row, our method fails to create a gap between the front and back of a truck.}
\label{fig:shapenet_car_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/sucess_chair_supp.pdf}
\caption{Success cases on the Chair category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. Our method produces finer completion results than DPC on different types of chairs, such as sofas and desk chairs. It is able to complete the front, back, and arms of the chair.}
\label{fig:shapenet_chair_success}
\end{figure*}
\vspace{2 cm}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_chair_supp.pdf}
\caption{Failure cases on the Chair category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. It can be observed in these cases that the network generates noisy results especially near the legs of a chair.}
\label{fig:shapenet_chair_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/success_plane_supp_v2.pdf}
\caption{Success cases on the Plane category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. It can be observed that the network is able to complete the front, back and wings of the planes.}
\label{fig:shapenet_plane_success}
\end{figure*}
\vspace{2 cm}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_plane_supp.pdf}
\caption{Failure cases on the Plane category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. The network generates some noisy points near the wings of the planes.}
\label{fig:shapenet_plane_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/semantic_kitti_supp.pdf}
\caption{Completion of partial point cloud cars from the LiDAR scans of the Semantic KITTI dataset (first and second row). The third and fourth row show some failure cases of the network where the network is unable to generate the smaller details such as a tire of a car.}
\label{fig:kitti_supp}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/kitti_sup.pdf}
\caption{Completion of partial point cloud of cars in a LiDAR scan of the Semantic KITTI dataset.}
\label{fig:kitti_scene}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/categories1.pdf}
\includegraphics[trim=0 0 0 50, clip,width=0.99\textwidth]{images/categories2.pdf}
\caption{Qualitative results on five categories of ShapeNet, compared to our ablation without inpainting.}
\label{fig:shapenet_categories}
\end{figure*}
\section{Experiments}
\vspace{-0.5em}
\label{sec:experiments}
\subsection{Implementation Details}
\vspace{-0.5em}
During test-time, we use a single view of an object. The multiple views are only available during training. To get the final completed point cloud, we concatenate the output from multi-level decoders $D_g$ and $D_{\ell}$. We do not remove regions at inference time. Otherwise, the network during inference is the same as described above.
For consistency with prior work~\cite{gu2020weakly}, we resample each partial point cloud $X$ to have a total of 3096 points.
PointPnCNet uses the architecture from PCN~\cite{yuan2018pcn} for its encoder and decoder blocks. The model is trained from scratch for 400K iterations with a batch size of 32, a learning rate of 5e-4 decayed by 0.5 after every 100K iterations, and $\beta = 0.25$ in $\mathcal{L}_{wcd}$. Please refer to the appendix for more details.
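The step-decay learning-rate schedule above amounts to the following (a one-line sketch; the function name is ours):

```python
def learning_rate(iteration, base_lr=5e-4, decay=0.5, every=100_000):
    """Step decay: the learning rate starts at 5e-4 and is halved
    after every 100K iterations, as described in the text."""
    return base_lr * decay ** (iteration // every)
```

For example, iterations 0, 100K, and 300K yield rates of 5e-4, 2.5e-4, and 6.25e-5, respectively.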
\vspace{-1em}
\subsection{Experimental Setup}
\vspace{-0.5em}
\label{sec:experimental_setup}
Following the evaluation protocol of Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}, we test our approach on ShapeNet~\cite{chang2015shapenet} and Semantic KITTI~\cite{behley2019semantickitti}. ShapeNet has ground truth annotations for each object class, which allows us to evaluate how well our method generates completed shapes. On the other hand, Semantic KITTI allows us to evaluate the robustness of our method on real LiDAR data.
The observations are transformed to a canonical frame (a shared reference frame which aligns all instances of a class) using canonical frame predictions generated via IT-Net~\cite{yuan2018iterative} for ShapeNet and predicted bounding boxes obtained from OpenPCDet~\cite{openpcdet2020} for Semantic KITTI. We use IT-Net for ShapeNet as it is trained in a weakly-supervised manner from only classification labels and learns to align the instances of each class, without any pose supervision. In general, any pose estimator can be used here. We evaluate the robustness of our method to this canonical frame estimation in Sec.~\ref{sec:robustness}.
\textbf{ShapeNet:} ShapeNet~\cite{chang2015shapenet} is a synthetic dataset with 3D CAD models. We report our results on three categories, airplanes, cars, and chairs, that are commonly used in the related works~\cite{insafutdinov2018unsupervised, gu2020weakly}. We use the same data split provided by DPC~\cite{insafutdinov2018unsupervised}, where RGB-D data is generated for random camera views with fixed translation, similar to Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}. For evaluation, we use ground truth point clouds provided by DPC~\cite{insafutdinov2018unsupervised} which are densely sampled from ShapeNet meshes and downsampled to 8192 points.
\textbf{Semantic KITTI:} We evaluate our method for a real-world scenario using KITTI~\cite{behley2019semantickitti}. Previous methods~\cite{xie2020grnet, gu2020weakly, wen2020point, yuan2018pcn} have a standard protocol of evaluation on real-world data by testing on the cars of KITTI only. We adopt the same protocol in our work. Following Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}, we train over the parked car instances~(which have multiple views captured when a LiDAR sensor moves through the scene and scans a parked car from different locations) with sequences 00 to 10 (excluding 08) as train set and sequence 08 as test set. The train set consists of 507 parked car instances and 46152 observations, while the test set has 229 parked car instances and 16296 observations.
Although the lack of complete ground truth information in KITTI creates some limitations for evaluation, testing on this dataset shows the ability of our method to handle real-world LiDAR data. By combining evaluations on a real-world dataset~(KITTI) and a synthetic dataset~(ShapeNet), which has ground truth annotations, we are able to present a more thorough evaluation. This is the standard evaluation procedure following Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}.
\textbf{Metrics:}
Our primary metric for quantitatively evaluating shape completion is the \textit{Chamfer Distance (CD)}, as is used in previous works~\cite{gu2020weakly,yuan2018pcn,wang2020cascaded}. We define this metric in its weighted form in Equation \ref{equ:cd_global_loss}. For evaluation, to compare with the ground-truth completed point cloud, we equally weight each component with a $\beta$ of $0.5$.
Additionally, we follow Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly} and report each component of the Chamfer distance independently: the mean distance from each predicted point to its nearest true point described as \textit{Precision}, and the mean distance from each true point to its nearest predicted point described as \textit{Coverage}. \textit{Precision} describes how well the predicted points match the local shape, while \textit{Coverage} is related to how much of the shape is completed. We also evaluate the Earth Mover's Distance~(EMD)~\cite{yuan2018pcn}, which finds a bijection between the predicted point cloud and the ground truth point cloud that minimizes the average distance between corresponding points. Like previous work, we also evaluate the F-score@1\%~\cite{xie2020grnet}.
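The Chamfer Distance components above can be sketched as follows (a minimal NumPy version; since the weighted equation is not reproduced in this section, the placement of $\beta$ is our reading of the text, under which $\beta = 0.5$ weights both components equally for evaluation):

```python
import numpy as np

def chamfer_components(pred, gt):
    """Precision: mean distance from each predicted point to its nearest
    ground-truth point. Coverage: mean distance from each ground-truth
    point to its nearest predicted point."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = d.min(axis=1).mean()  # predicted -> ground truth
    coverage = d.min(axis=0).mean()   # ground truth -> predicted
    return precision, coverage

def chamfer_distance(pred, gt, beta=0.5):
    """Weighted Chamfer Distance; beta = 0.5 recovers the symmetric
    form used for evaluation (our assumed weighting)."""
    precision, coverage = chamfer_components(pred, gt)
    return beta * precision + (1 - beta) * coverage
```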
\vspace{-1em}
\subsection{Point Cloud Completion Results}
\vspace{-0.5em}
We compare with the current state-of-the-art unsupervised methods, DPC~\cite{insafutdinov2018unsupervised} and Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}.
Table~\ref{table:shapenet_main_results} shows our method outperforming the baseline methods on the synthetic ShapeNet dataset, producing lower Chamfer distances across all shape categories. \textit{Precision} and \textit{Coverage} metrics also improve, showing that our method produces more accurate points and better covers the full object shape.
Our method also outperforms DPC~\cite{insafutdinov2018unsupervised} on the Earth Mover's Distance (EMD) metric (Table~\ref{table:kitti_agg_emd}b).
Since the code for \cite{gu2020weakly} is not publicly available, we cannot evaluate the EMD metric for that method.
We further show in Table~\ref{table:agg_ablation_sem_kitti}a that our method outperforms the previous state-of-the-art~\cite{gu2020weakly} on the Semantic KITTI dataset, generating outputs that are significantly more accurate than~\cite{gu2020weakly}.
The KITTI dataset is more realistic than ShapeNet. With samples spanning a range of sparsity (since real-world LiDAR becomes sparser with distance), it represents the data available in self-driving scenarios. We also show improvement over a simple densification baseline (Densified Input in Table~\ref{table:agg_ablation_sem_kitti}a), which suggests that our method is indeed completing the partial point clouds rather than simply densifying them. This densification baseline uniformly samples points within the volume of a local surface approximated as an ellipsoid, formed using the eigenvalues of the 10 nearest neighbors of each point in the input partial point cloud. We also conduct a uniformity analysis whose results we report in the appendix.
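The densification baseline can be sketched roughly as follows (our reconstruction from the one-sentence description above; the ellipsoid scaling, sample count, and function name are assumptions, not the paper's implementation):

```python
import numpy as np

def densify(points, samples_per_point=4, k=10, seed=0):
    """Densification baseline sketch: approximate the local surface around
    each point as an ellipsoid (from the eigen-decomposition of its
    k-nearest-neighbor covariance) and sample points uniformly inside it."""
    rng = np.random.default_rng(seed)
    out = [points]
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nn = points[np.argsort(d)[:k]]            # k nearest neighbors
        evals, evecs = np.linalg.eigh(np.cov(nn.T))
        axes = np.sqrt(np.maximum(evals, 1e-12))  # ellipsoid semi-axes
        # Uniform samples in the unit ball, then scale and rotate.
        u = rng.normal(size=(samples_per_point, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        ball = u * rng.random(samples_per_point)[:, None] ** (1 / 3)
        out.append(p + (ball * axes) @ evecs.T)
    return np.concatenate(out)

dense = densify(np.random.default_rng(0).random((20, 3)))
```

Such a baseline adds points near the existing surface but cannot hallucinate occluded regions, which is why it underperforms true completion.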
\input{tables/shapenet_main}
\input{tables/table2_3}
\input{tables/cd_emd_fig_table}
\vspace{-1.125em}
\subsection{Qualitative Results}
\vspace{-0.5em}
We present the qualitative results of our method for each category of ShapeNet in Figure~\ref{fig:qualtitative_results} and KITTI in Figure~\ref{fig:ablation_results}. In comparison to the baseline DPC~\cite{insafutdinov2018unsupervised}, we observe that our method is able to better cover the target object with a more uniform distribution over the target surface and accurately reconstructs the fine-grained object details. For example, our method is able to complete the back of the car and the side mirror whereas the baseline outputs noisy points. For chairs, our method generates more uniformly distributed points whereas the baseline outputs patches/clusters of points in that location. This highlights the fact that our method is better able to generalize and complete the unseen regions of incomplete shapes.
\input{images/ablation_vis}
\vspace{-1.125em}
\subsection{Ablation Study}
\vspace{-0.5em}
\noindent \textbf{Inpainting Loss:} Region removal to create synthetic occlusions and the task of inpainting are removed. \textbf{Multi-View Loss:} The multi-view loss is removed from our training method; each partial point cloud is used to supervise its own, synthetically occluded completion. \textbf{Global Level:} We remove the Inpainting-Global loss, global encoder $\mathbf{E_g}$, and global decoder $\mathbf{D_g}$ from our completion pipeline. \textbf{Local Level:} The Inpainting-Local loss, local encoder $\mathbf{E_{\ell}}$, and local decoder $\mathbf{D_{\ell}}$ are removed from our method. The number of output points for the \textit{global level} and \textit{local level} ablations is kept consistent with our full method.
We report the ablation results in Table~\ref{table:agg_ablation_sem_kitti}b on ShapeNet, as an average over all categories, and on Semantic KITTI. We find that all components of our system are crucial for optimal performance across both datasets. We further report the F-score@1\%~\cite{xie2020grnet} and the EMD metric on the Semantic KITTI dataset with the ablation of removing inpainting in Table \ref{table:kitti_agg_emd}c. We find that inpainting greatly improves our results across both of these metrics.
The qualitative effects of our ablation study can be seen in Figure~\ref{fig:ablation_results}. We observe that inpainting generates an object-specific, less noisy output when comparing ``Ours'' and ``Without Inpainting''. Our method without the local loss fails to complete local details of an object, such as the back of a car or the wings of a plane, and without the global loss predicts a generic, noisy shape of an object. Since each local encoder and decoder only observe the points within their region and not the points in the other, potentially occluded regions, they allow the network to focus on individual parts of an object and be robust to different occlusion patterns. The local loss helps create a more uniform completion, since it completes its associated region, while the global loss reasons about the entire shape of the object. Finally, without the multi-view loss, the output point cloud is noisy and incomplete, as can be seen in all shapes.
\vspace{-1.125em}
\subsection{Robustness to Canonical Frame Estimation}
\vspace{-0.5em}
\label{sec:robustness}
Previous work~\cite{gu2020weakly} depends on multiple views, which makes it sensitive to pose alignment errors. While we also use a multi-view loss, our inpainting losses make the model robust to noisy alignment, allowing it to learn from poorly aligned data. The mean rotation/translation difference, after using IT-Net for pose canonicalization, between the multiple partially observed shapes during both training and inference is $5.46^{\circ}/~0.008$, $12.33^{\circ}/~0.013$ and $7.12^{\circ}/~0.010$ for Car, Chair, and Plane, respectively~(the unit of translation is the object diameter). This shows that even the canonicalized poses are not perfectly aligned and that, due to inpainting, our method is still able to learn from this poorly aligned data (particularly with respect to rotation). To further highlight the contribution of inpainting to this robustness, we add noise to the predicted IT-Net poses, with rotations and translations sampled uniformly with a maximum displacement of 5\textdegree / 0.01, 10\textdegree / 0.05, and 15\textdegree / 0.10. The figure in Table~\ref{table:kitti_agg_emd}a shows that without inpainting (in \textcolor{red}{red}), our method is extremely sensitive to alignment noise, but with inpainting (in \textcolor{green}{green}), our method only degrades slightly with higher noise and remains more accurate at all levels of noise than the baseline~\cite{gu2020weakly} with no noise added.
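The noise-injection step can be sketched as follows (a minimal version; restricting the rotation to the vertical axis and the function name are our simplifying assumptions):

```python
import numpy as np

def perturb_pose(points, max_rot_deg, max_trans, seed=0):
    """Apply a random rotation (about the z-axis here) and translation,
    each sampled uniformly up to the given maxima, to a canonicalized
    point cloud. Translation is in units of the object diameter."""
    rng = np.random.default_rng(seed)
    angle = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    t = rng.uniform(-max_trans, max_trans, size=3)
    return points @ R.T + t

pts = np.random.default_rng(1).random((5, 3))
noisy = perturb_pose(pts, max_rot_deg=15, max_trans=0.10)
```

Because the transform is rigid, pairwise distances within the cloud are preserved; only its alignment to the canonical frame is corrupted.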
\vspace{-1.125em}
\subsection{Impact of $\beta$ in the Weighted Chamfer Distance loss}
\vspace{-0.5em}
\input{tables/beta_table_only}
We present an analysis in Table~\ref{table:beta_table} in which we train PointPnCNet with different values of $\beta$ to understand the contribution of the second term in the asymmetric Weighted Chamfer Distance loss (Eqn.~\ref{equ:cd_global_loss}), $\mathcal{L}_{wcd}$.
From Table~\ref{table:beta_table}, we can observe that
the optimal performance occurs at $\beta = 0.25$ across both the ShapeNet and KITTI datasets.
Our intuition is that a larger value of $\beta$ imposes a penalty for generating points in $Y$ in the regions that were occluded in the input; this contradicts our goal of completing those missing regions. Nonetheless, setting $\beta = 0$ also leads to worse performance because the second term in Eqn.~\ref{equ:cd_global_loss} is needed to (minimally) penalize the network for predicting points in $Y$ that are far from the original partial point cloud $X$. Setting $\beta = 0.25$ provides the appropriate balance between these competing objectives. As explained in Section~\ref{sec:losses}, this tradeoff only occurs for the global loss; the local loss uses a regional indicator that only applies the loss to regions for which we have ground truth information.
\section{Conclusion}
\vspace{-0.5em}
We propose a self-supervised method for point cloud completion via inpainting and random region removal that can be trained using only LiDAR-based partial point clouds. Our method produces significantly more accurate point cloud completions and outperforms the previous unsupervised methods on ShapeNet and Semantic KITTI. Through exhaustive ablation, we show the importance of each component of our method and the robustness to alignment errors. While the current method uses intersecting half-spaces defined by coordinate planes, other methods for point cloud partitioning can be explored in future work. We hope that our method will improve real-world 3D object understanding.
\vspace{-1em}
\section{Acknowledgements}
\vspace{-0.5em}
This material is based upon work supported by the National Science Foundation under Grant No. IIS-1849154, and the CMU Argo AI Center for Autonomous Vehicle Research.
\section{Method}
\vspace{-0.5em}
The point cloud completion problem can be defined as follows: given an incomplete set of sparse 3D points $X$, sampled from a partial view of an underlying dense object geometry $G$, the goal is to predict a new set of points $Y$, which mimics a uniform sampling of $G$.
\input{images/model-diagram}
\vspace{-1em}
\subsection{Self-supervised Inpainting}
\label{sec:inpainting}
\vspace{-0.5em}
Our self-supervised, inpainting-based approach learns to complete full point clouds using only partial point clouds: we randomly remove regions of points from a given partial point cloud and train the network to inpaint these synthetically removed regions. The original partial point cloud then serves as pseudo-ground truth to supervise the completion. Since the complete ground-truth point cloud is unavailable, supervision is applied only to the regions of the original point cloud that contain points (i.e., unoccluded regions).
The network leverages information from the available regions across training samples and embeds each region separately, so that the learned region embeddings generalize across partially occluded samples with different missing regions. Further, due to the stochastic nature of region removal, the network cannot easily distinguish the synthetic occlusions from the original occlusions in the input partial point cloud, so it learns to complete both. Thus, the combination of inpainting, random region removal, and region-specific embeddings enables the model to generate all regions and produce a complete point cloud.
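The region-removal step described above can be sketched in NumPy. This is a minimal illustration rather than the paper's training code: the octant-based partitioning and the removal probability follow the method description, while the function name and return values are our own choices.

```python
import numpy as np

def remove_random_regions(points, p=0.2, rng=None):
    """Partition a canonicalized point cloud into the eight octants
    defined by the coordinate planes, then drop each octant with
    probability p to simulate a synthetic occlusion.

    points: (N, 3) array in the canonical frame.
    Returns the occluded cloud and a boolean mask over the 8 octants
    (True where the octant survives).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Octant index in {0..7}: one sign bit per coordinate.
    octant = ((points > 0) * np.array([1, 2, 4])).sum(axis=1)
    keep = rng.random(8) >= p          # drop each octant w.p. p
    return points[keep[octant]], keep
```

The original cloud $X$ is kept aside as pseudo-ground truth; only the occluded cloud $\hat{X}$ is fed to the network.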
\vspace{-1em}
\subsection{Network Architecture}
\vspace{-0.5em}
Figure~\ref{fig:model_diagram} depicts the architecture and training flow of our network, Point Cloud Partition-and-Completion Network (PointPnCNet). We use a multi-level encoder-decoder architecture to allow the network to focus on different parts of an object. We present the evaluations of various alternate designs of our method in the appendix.
\vspace{-0.5em}
\subsubsection{Multi-Level Encoder}
\label{sec:multi-scale-encoder}
\vspace{-0.5em}
Our encoder consists of multiple parallel encoder streams that encode the input partial point cloud at global and local levels. The global-level encoder operates on the full scope of the object, while each local-level encoder focuses on a particular region of the object. Since a local encoder sees only the points in its region and is invariant to other, possibly missing parts of the shape, the local encoders make the network robust to occlusions by treating individual object parts separately. The global encoder further enforces shape consistency by considering all regions jointly.
Given a partial point cloud, we estimate its canonical frame using a learned method~(Sec.~\ref{sec:experiments}) and transform it to obtain a canonicalized partial point cloud $X$. We show that our method is robust to errors in this canonicalization~(Sec.~\ref{sec:robustness}).
We then partition the canonicalized partial point cloud using intersecting half-spaces that are produced by the coordinate planes after canonicalization. This effectively separates the space into eight 3D octants as shown in Figure~\ref{fig:model_diagram}. While other types of partitioning could be used, we found this subdivision to be simple and reasonably effective. Rather than a strict partitioning, we allow a small overlap between neighboring regions such that points in the overlap are present in both regions. This helps to avoid seams at boundaries.
Let $X_{i}$ consist of the points in the region $i$. After partitioning, we remove points of any particular region with a probability $p$ to simulate a synthetic occlusion. We use this synthetically occluded point cloud $\hat{X}$ as input to our inpainting network.
The points in the remaining regions are aggregated together and passed as input to the global encoder, $E_{g}$, to give a global embedding $e_g$~(Figure~\ref{fig:model_diagram}).
In parallel, each remaining region $X_{i}$ is separately encoded by a local encoder, $E_{\ell}$ to obtain a local embedding for that region, $e^{i}_{\ell}$. To aggregate the local feature embeddings, each embedding $e^{i}_{\ell}$ is fed as input into an attention module, consisting of an MLP layer, that generates a set of weights $w_{i} = \phi (e^{i}_\ell)$. These weights are used to weigh each of the embeddings $e^{i}_{\ell}$ in a linear combination to form the aggregate embedding $P_\ell = \sum_{i} \mathbbm{1}_i w_{i} \, e^{i}_\ell$, where $\mathbbm{1}_i$ is an indicator function which equals 1 if region $i$ is present~(i.e. present in the original partial point cloud $X$ and not randomly removed) and 0 otherwise.
We then perform a channel-wise max-pooling across the global encoding $e_g$ and the attention-weighted local encoding $P_\ell$, as $P = \max (e_g, P_\ell)$.
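The attention-weighted aggregation and max fusion above can be written compactly. This is a sketch of the fusion step only, assuming the attention MLP $\phi$ has already produced scalar weights; the encoders themselves are omitted.

```python
import numpy as np

def fuse_embeddings(e_g, e_loc, weights, present):
    """Fuse global and local embeddings:
       P_l = sum_i 1_i * w_i * e_i   (attention-weighted sum)
       P   = max(e_g, P_l)           (channel-wise max pooling)

    e_g: (C,) global embedding; e_loc: (R, C) local embeddings;
    weights: (R,) attention weights; present: (R,) bool indicator 1_i.
    """
    w = weights * present                    # zero out missing regions
    P_l = (w[:, None] * e_loc).sum(axis=0)   # weighted sum over regions
    return np.maximum(e_g, P_l)              # channel-wise max fusion
```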
\vspace{-0.5em}
\subsubsection{Multi-Level Decoder}
\vspace{-0.5em}
Our decoder consists of multiple decoder streams that work in parallel to decode the fused embedding $P$ (Figure~\ref{fig:model_diagram}). The global decoder $D_{g}$ takes the embedding $P$ as input and attempts to generate an entire completed point cloud $Y_g$.
In parallel, we use a local decoder $D_{\ell}$ to decode the points in each region of the input space. The embedding $P$ is concatenated with a one-hot vector indicating each region's location to create a region-specific embedding. Through this one-hot encoding, the decoder specializes in completing each region and learns a region-specific embedding. The decoder takes these region-specific embeddings as input and generates a subset of the output point cloud localized to the respective region, $Y_\ell^i$. The generated local regions are combined to obtain the full point cloud $Y_\ell$.
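Constructing the region-specific decoder inputs amounts to tiling $P$ and appending a one-hot region indicator; a small sketch (the eight-region count matches our octant partitioning, and the function name is illustrative):

```python
import numpy as np

def region_embeddings(P, num_regions=8):
    """Build one decoder input per region by concatenating the fused
    embedding P (shape (C,)) with a one-hot region code.
    Returns an array of shape (num_regions, C + num_regions)."""
    one_hot = np.eye(num_regions)
    return np.concatenate([np.tile(P, (num_regions, 1)), one_hot], axis=1)
```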
The multi-level output generated by the network captures the details of the object at global and local levels. The outputs of the multi-level decoder streams, $D_{\ell}$ and $D_{g}$, are concatenated to form the final prediction of our network as $Y$.
\vspace{-1em}
\subsection{Point Cloud Completion Losses}
\label{sec:losses}
\vspace{-0.5em}
The standard loss for comparing two point clouds is the Chamfer Distance (CD): a bi-directional, permutation-invariant loss that averages, for each point in one cloud, the distance to its nearest neighbor in the other cloud. In our method,
we use an asymmetric Weighted Chamfer Distance loss, $\mathcal{L}_{wcd}$, defined as,
\begin{equation}
\footnotesize
\mathcal{L}_{wcd} (X, Y) = \frac{(1 - \beta)}{|X|}\sum_{x \in X} \min_{y \in Y}\|x - y\|_2 +
\frac{\beta}{|Y|}\sum_{y \in Y} \min_{x \in X} \|y - x\|_2
\label{equ:cd_global_loss}
\end{equation}
\noindent where $X$ is the original partial point cloud used here as pseudo-ground truth and $Y$ is the output.
Importantly, we only compute the loss for the regions that are present in $X$. A weight of $(1-\beta)$ is applied to the first term in Eqn.~\ref{equ:cd_global_loss} which penalizes the distance from each point in $X$ to its nearest neighbor in $Y$. This term enforces that the output point cloud $Y$ should contain points that are close to those in $X$. Note that the input to the network is $\hat{X}$, which has synthetic occlusions, not $X$, which is the original partial point cloud. A weight of $\beta$ is applied to the second term in Eqn.~\ref{equ:cd_global_loss} to penalize the distance from each point in $Y$ to its nearest neighbor in $X$.
We do not expect this term to reach 0 for a well-trained network since $X$ only contains a partial point cloud, while output $Y$ contains the entire point cloud; we still find it a helpful regularization.
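For reference, Eqn.~\ref{equ:cd_global_loss} transcribes directly into NumPy. Actual training uses a differentiable framework and restricts the loss to regions present in $X$; this sketch omits both.

```python
import numpy as np

def weighted_chamfer(X, Y, beta=0.5):
    """Asymmetric Weighted Chamfer Distance:
       (1-beta)/|X| * sum_x min_y ||x-y||_2
       +  beta /|Y| * sum_y min_x ||y-x||_2

    X: (N, 3) pseudo-ground-truth partial cloud; Y: (M, 3) prediction.
    """
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # (N, M)
    term_xy = d.min(axis=1).mean()   # each x to its nearest y
    term_yx = d.min(axis=0).mean()   # each y to its nearest x
    return (1 - beta) * term_xy + beta * term_yx
```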
We impose the following variants of $\mathcal{L}_{wcd}$ on the model,
\textbf{Inpainting-Global Loss:} This loss acts as a \textit{global shape loss}, focusing on the overall shape of an object. We impose it as the Weighted Chamfer Distance (Eqn.~\ref{equ:cd_global_loss}) between original partial point cloud $X$ and output of the global decoder $Y_{g}$ and define it as $\mathcal{L}_{wcd} (X, Y_{g})$.
\textbf{Inpainting-Local Loss:} We impose Inpainting-Local loss as the Weighted Chamfer distance between each region output $Y_\ell^i$ from local decoder $D_{\ell}$ and corresponding partitioned region $X_i$ in the original partial point cloud $X$ where $i$ indexes over regions. While Inpainting-Global loss considers the entire $X$ to find the nearest neighbor, Inpainting-Local loss differs in that it only considers the partitioned region $X_i$ to find the nearest neighbor. Thus, it acts as a \textit{local shape loss} that enables the network to learn region-specific shapes and embeddings and focus on the finer details of an object. We do not penalize regions that are missing in $X$ where a region is considered missing if the number of points in that region is below a certain threshold. The Inpainting-Local Loss is therefore defined as, $\sum_{i}\mathbbm{1}_i \cdot \mathcal{L}_{wcd} (X_{i}, Y_\ell^i)$ where the indicator function $\mathbbm{1}_i$ equals one if region $i$ is present in $X$ and zero otherwise.
\textbf{Multi-View Consistency:} Similar to Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}, our method uses multi-view consistency as an auxiliary loss. Their method explicitly performs pose estimation. Similarly, we perform an estimated pose canonicalization (weakly supervised, Sec.~\ref{sec:experimental_setup}). We also show later~(Sec.~\ref{sec:robustness}) that our method is robust to canonicalization errors. During training, we sample a view $k$ from $V$ partial views of an object $X$ given as $X^{1}, \ldots, X^{V}$. Since all the views of an object correspond to the same object, for input partial point cloud $X^{k}$, the loss is computed with all views $X^{1}, \ldots, X^{V}$. We define a global inpainting multi-view consistency loss as $\sum_{j=1}^{V}\mathcal{L}_{wcd} (X^{j}, Y_{g}^{k})$ where $X^{j}$ is the $j^{th}$ view of $X$ and $Y_{g}^{k}$ is the global output from decoder $D_{g}$. We also define local inpainting multi-view consistency loss as $\sum_{i}\sum_{j=1}^{V}\mathbbm{1}^j_i \cdot \mathcal{L}_{wcd} (X^{j}_{i}, Y_\ell^{i,k})$ where $i$ indexes over regions, $X^{j}_{i}$ is the $i^{th}$ region of view $X^j$,
$Y_\ell^{i,k}$ is the region output from local decoder $D_{\ell}$ for input $X^{k}_{i}$, and $\mathbbm{1}^j_i$ is $1$ if region $i$ is present in $X^{j}$ and $0$ otherwise.
During training, we sum the losses as, $\sum_{j=1}^{V}\mathcal{L}_{wcd} (X^{j}, Y_{g}^{k}) + \sum_{i}\sum_{j=1}^{V}\mathbbm{1}^j_i \cdot \mathcal{L}_{wcd} (X^{j}_{i}, Y_\ell^{i,k})$. This multi-view information is only available at training time.
\section{Introduction}
\vspace{-0.5em}
Autonomous vehicles often understand the world around them using depth sensors such as LiDAR. However, LiDAR point clouds are often incomplete, even when recorded from multiple viewpoints over time.
To accurately track objects and plan routes to avoid collisions, it is important for autonomous vehicles to understand the complete shape of surrounding objects.
Previous methods~\cite{yuan2018pcn, tchapmi2019topnet, wang2020cascaded, xie2020grnet, wen2020point} have learned to complete partial point clouds but they strongly rely on the availability of ground truth complete shapes as supervision. Since complete point clouds are costly to obtain for real-world scenarios, these methods typically train only from simulated data where ground-truth completions are available. This limitation motivates our approach to learn only from partial point clouds to complete shapes without ever observing the ground-truth completed point clouds during training.
To this end, our method leverages self-supervision via an inpainting-based approach where we randomly remove regions from the partial point clouds and train the network to complete the entire point cloud. Across multiple training examples, different regions will be occluded and varying regions will be synthetically removed. Because the network does not know which regions were artificially removed and which were naturally occluded in each original partial point cloud, the network learns to attempt to complete the entire point cloud.
In contrast to images, where a mask can specify the region to inpaint~\cite{liu2020rethinking, pathakCVPR16context, yu2018generative, yeh2017semantic, hong2019deep}, the unstructured nature of point clouds makes it challenging to define which regions the network needs to inpaint. We solve this using a region-aware loss which penalizes only the regions where the original point cloud was present. Additionally, we partition the point cloud into local regions using intersecting half-spaces and encode/decode them separately. This allows the network to learn data-driven embeddings separately for each local region that specialize in individual region-level object parts. We also encode/decode at the global point cloud level to allow the network to consider all regions jointly. Previous works \cite{insafutdinov2018unsupervised, gu2020weakly} depend on aligning multiple viewpoints of an object during training, which can be sensitive to pose alignment errors. While we also incorporate a multi-viewpoint loss, we show that our use of inpainting allows our method to be robust to alignment errors.
The key contributions of this paper are as follows: (1)~we present a novel inpainting-based self-supervised algorithm that learns to complete missing local regions in an incomplete point cloud without the need for ground-truth point cloud completions; (2)~our multi-level encoder-decoder architecture, PointPnCNet, partitions the point cloud to learn local and global embeddings for improved completion performance; (3)~our approach outperforms existing methods for unsupervised point cloud completion~\cite{insafutdinov2018unsupervised, gu2020weakly} on the standard completion benchmarks ShapeNet~\cite{chang2015shapenet} and SemanticKITTI~\cite{behley2019semantickitti}.
\section{Related Work}
\vspace{-0.5em}
\textbf{Supervised Point Cloud Completion} Most of the existing 3D shape completion methods~\cite{yuan2018pcn, tchapmi2019topnet, wang2020cascaded, xie2020grnet, wen2020point, wang2020softpoolnet, huang2020pf} make use of complete ground-truth shape labels. A common approach for point cloud completion is to use an encoder-decoder style architecture~\cite{yuan2018pcn,liu2020morphing,xie2020grnet,wang2020point}. On the other hand, Tchapmi \emph{et al}\bmvaOneDot~\cite{tchapmi2019topnet} proposed to generate a point cloud using a hierarchical rooted tree structure. Our architecture builds on the typical encoder-decoder style of previous work~\cite{yuan2018pcn}. In contrast to the above supervised methods, our proposed approach does not require ground truth annotations. This allows our method to be trained using LiDAR data in the wild, as opposed to the previous methods which are trained only with simulated data.
\textbf{Weakly-Supervised Methods}
Recently, Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} proposed a weakly-supervised approach for point cloud completion in which the pose of the input partial point cloud and the 3D canonical shape are jointly optimized. Their method is weakly supervised via multi-view consistency among the multiple partial observations of the same instance. Our method also uses only partial point clouds; however, through our inpainting-based approach, it learns a more accurate completion and is robust to view-alignment errors. Other methods also learn 3D shape reconstruction using weak supervision~\cite{yan2016perspective, zhu2017rethinking, tulsiani2018multi}. Among these, Differentiable Point Clouds~(DPC)~\cite{insafutdinov2018unsupervised} jointly predicts camera poses and a 3D shape representation given two image views of the same instance. Geometric consistency between the estimated 3D shape and the input images is enforced via an end-to-end differentiable point cloud projection. We show in the results that we significantly outperform this method.
\textbf{Image Inpainting}
In the area of image inpainting~\cite{liu2020rethinking, pathakCVPR16context, yu2018generative, yeh2017semantic, hong2019deep, zhan2020self}, Zhan \emph{et al}\bmvaOneDot~\cite{zhan2020self} proposed self-supervised partial completion networks (PCNets) to complete an occluded object's mask and content in the input image. Our method takes inspiration from Zhan \emph{et al}\bmvaOneDot~\cite{zhan2020self} for shape completion in 3D point clouds.
However, due to the structured nature of images, a mask can often be used to specify the region to inpaint. In 3D point clouds, where the data is unstructured and sparse, it is difficult to specify a ``mask'' for the regions to inpaint. There is ambiguity between regions that have been ``masked out'' and regions that are naturally occluded, making the task of inpainting challenging for point cloud data.
\textbf{Point Cloud Inpainting} Some of the previous methods~\cite{fu2018point, yu2020point, fu20203d, zhao2020pui, chen2020point, hu2019local} have also explored inpainting in the point cloud domain. However, these methods either use ground truth during training~\cite{yu2020point, zhao2020pui}, rely on template-matching within a data sample~\cite{fu2018point, fu20203d, hu2019local}, or project a point cloud into 2D structured representation~\cite{chen2020point}. Our method is novel in the sense that it uses inpainting directly on the point clouds without any ground truth information while leveraging large datasets to learn domain-specific priors.
\section{Appendix}
\subsection{Architecture Details}
For all results and ablations, we keep the output size of our network at 8192 points: the global decoder $D_{g}$ generates 4096 points, and the local decoder $D_{\ell}$ generates 512 points per region, for an overall size of 4096 points across all local regions. Similarly, the input size is kept consistent across all ablations; that is, the input size is 3096 points for the global encoder $E_{g}$ and 387 points for the local encoder $E_{\ell}$. Each region in $X$ is dropped with a removal probability of 20\%, and the resulting synthetically occluded point cloud $\hat{X}$ is passed to the global encoder $E_{g}$. In parallel, the input partial point cloud is subdivided into 8 regions along the axial planes of the canonical frame. Each region not artificially removed or marked as missing is then independently encoded by the local encoder $E_{\ell}$. When encoding each region of the input cloud, regions that are marked as missing based on the threshold number of points are replaced with a number of zeros equal to the threshold. In our method, we set this threshold to 4. We allow a small overlap of 0.02 cm between neighboring regions for the ShapeNet dataset and 0.02 m for the KITTI dataset. The architectures of the global encoder $E_{g}$ and global decoder $D_{g}$ are similar to that of PCN~\cite{yuan2018pcn}. For the local encoder $E_{\ell}$ and local decoder $D_{\ell}$, we use the PCN~\cite{yuan2018pcn} architecture with the number of hidden units reduced to 1/8th of the original. We use the Adam optimizer with a learning rate of $1 \times 10^{-4}$ and train our network for 400K iterations.
\subsection{Data preparation}
\textbf{ShapeNet}: We obtain a point cloud from the RGB-D data by backprojecting 2.5D depth images to 3D, similar to Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}. In contrast to DPC~\cite{insafutdinov2018unsupervised}, we do not use the color information. The centers of the oriented clouds are then shifted to the origin before passing them to our shape completion network. Specifically, we use the 3D partial shape classification branch of IT-Net pre-trained on ModelNet40 to generate the pose transformations, as it does not require ground-truth pose annotations for training. Since our method does not require perfect pose alignment, using IT-Net pretrained on ModelNet40 instead of ShapeNet is sufficient for our purpose, as it represents an off-the-shelf canonical frame estimator for our model classes. We refer the reader to IT-Net~\cite{yuan2018iterative} for details on this pose canonicalization method.
Originally, the ShapeNet~\cite{chang2015shapenet} dataset has 5 views. When training on $N$ views, we only consider a fixed set of $N$ random views, which is chosen at the beginning of training; the network is only trained on these $N$ views and the other views of an object are discarded.
\textbf{Semantic KITTI:} At training time, we subdivide the observations of a single instance into groups of 20 sequential observations and randomly sample a set of four views for multi-view training. When evaluating accuracy on this dataset, all 20 frames are combined using ground truth odometry to form the ground truth shape of each instance. This merged cloud is only used for evaluation and is not present during training. At inference time, only a single view is used.
\subsection{Ablation Studies}
In this section, we present a more exhaustive ablation study covering the number of training views, architecture changes, and the number of input points, and we detail the ablation on densification of input point clouds for the KITTI dataset.
\subsubsection{Number of views}
We evaluate the sensitivity of our method to the number of views available at training time in Supplementary Figure~\ref{fig:views_plot}. We show the results both with and without inpainting in \textcolor{green}{green} and \textcolor{red}{red} lines respectively. It can be observed that our model is able to outperform the baseline with 2 views and 3 views, even though the baseline Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} is trained with 4 views. This demonstrates that our method is able to take advantage of a reduced number of views, due to our use of inpainting.
We also show the qualitative results with varying numbers of training views in the Supplementary Figure~\ref{fig:views_results}; as can be seen, the results of 2 views and 3 views are qualitatively very similar to the results with 4 views.
\subsubsection{Architecture Changes}
\paragraph*{\textbf{Global and Local Encoders and Decoders}}
We analyze whether to use both global and local encoders and decoders in our network. The results can be found in Supplementary Table~\ref{tbl:shapenet_ablation}. It can be observed that a combination of global and local encoders and decoders gives the best performance among all the possible combinations.
\paragraph*{Number of levels} In addition to the two levels in our parallel model (global and local), we experiment with adding another branch where the partial point cloud is partitioned into $3\times 3\times 3$ regions. For this branch, we use an independent local encoder and decoder. The input size of a region to the encoder is taken as 115 points (to maintain a total input size of 3096) and the size of the predicted point cloud is 152 points for each region (to maintain a total output size of 4096 for the local decoder). For computing the loss, we divide the original input (before dropping points) into regions and subsample the points to have at most 304 points in each region. The results are in Supplementary Table~\ref{tbl:more_ablations}. We notice that further partitioning of the partial point cloud and the additional branch do not give a significant improvement in the performance.
\subsubsection{Number of input points}
We evaluate the effect of the number of points in the input point cloud on the performance of our method. To test this, we create new versions of the test set with varying numbers of points: for each object, we subsample the input point cloud (without replacement) to a varying number of points.
We evaluate the Chamfer Distance metric as a function of the number of points in the input point cloud on the ShapeNet and KITTI~\cite{behley2019semantickitti} dataset during testing. We evaluate our method on the number of points ranging from 100 to 4000 and present the results in Figure~\ref{fig:numpoints_chamfer}. As expected, performance degrades as we reduce the number of available points.
\subsubsection{Densification of KITTI point clouds}
\label{sec:Densification of KITTI point clouds}
To evaluate the quantitative effects of simply densifying the input point cloud without completing occluded regions, we design a simple densification method. For each point in the input partial point cloud, we find its 10 nearest neighbors and estimate the eigenvalues of this local neighborhood. An ellipsoid is formed using these values and points are uniformly sampled within this volume. This approximates the local surface. From Table 2 of the main paper, the improvement of our method over the results of this densification method demonstrates that our model is completing the partial point clouds rather than simply densifying the partial input cloud.
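The densification baseline can be sketched as follows. This is an illustrative implementation: the eigendecomposition of the 10-NN neighborhood and uniform sampling inside the resulting ellipsoid follow the description above, while the number of samples per point and the ellipsoid scaling (one standard deviation along each principal axis) are our assumptions.

```python
import numpy as np

def densify(points, k=10, samples_per_point=5, rng=None):
    """For each point, fit an ellipsoid to its k nearest neighbours
    (eigendecomposition of their covariance) and sample points
    uniformly inside that ellipsoid to approximate the local surface."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours
    out = [points]
    for idx in nbrs:
        nb = points[idx]
        vals, vecs = np.linalg.eigh(np.cov(nb.T))
        radii = np.sqrt(np.maximum(vals, 1e-12))   # ellipsoid semi-axes
        # Uniform samples in the unit ball, stretched to the ellipsoid.
        u = rng.normal(size=(samples_per_point, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        ball = u * (rng.random(samples_per_point) ** (1 / 3))[:, None]
        out.append(nb.mean(axis=0) + (ball * radii) @ vecs.T)
    return np.vstack(out)
```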
\subsubsection{Performance Analysis with respect to Occlusions}
We conduct an experiment to assess the impact of occlusions in the input partial point clouds on the model's ability to complete the given shape. To do so, we introduce artificial occlusions by removing a certain number of regions from the input during testing (the input is divided into 8 regions in total). Given that the original input is already naturally occluded, we artificially remove at most three regions; beyond that, the input is barely visible. The results are shown in Table~\ref{tbl:num_region}: as the number of artificial occlusions in the input increases, there is a slight drop in performance for all categories. However, the model remains fairly robust to the additional occlusions.
\subsection{Metrics}
In this section, we report different metrics for further analysis of our method.
\subsubsection{Precision and Coverage of observed and unobserved regions}
For a detailed analysis, we compute the precision and coverage of the observed and unobserved regions of the input point cloud. To categorize points as observed or unobserved, we compute the distance between each point in the predicted point cloud and its nearest neighbor in the input point cloud. We compute the mean and standard deviation of these distances for each point cloud and use one standard deviation above the mean as a threshold. Points whose nearest-neighbor distance exceeds this threshold are considered unobserved; all other points are considered observed. Precision and coverage are computed separately for each type of point, and we report the results in the Supplementary Table~\ref{tbl:obs_unobs}. As expected, we find that the precision and coverage of the observed regions are slightly better than those of the unobserved regions; however, the results are relatively similar for the two, which provides further evidence that we are completing (and not just densifying) the input (see also Section~\ref{sec:Densification of KITTI point clouds}).
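The observed/unobserved split can be sketched as below; thresholding at the mean plus one standard deviation follows the analysis protocol described above, and the function name is illustrative.

```python
import numpy as np

def split_observed(pred, inp):
    """Label each predicted point observed or unobserved: compute its
    nearest-neighbour distance to the input partial cloud and threshold
    at mean + 1 std of those distances (per point cloud).
    Returns a boolean array, True where the point counts as observed."""
    d = np.linalg.norm(pred[:, None] - inp[None], axis=-1).min(axis=1)
    return d <= d.mean() + d.std()
```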
\subsubsection{F1-Score}
Following Xie~\emph{et al}\bmvaOneDot~\cite{xie2020grnet}, we evaluate the F1-score@1\%, the harmonic mean of precision and recall, on the ShapeNet dataset. In this context, ``precision'' is the percentage of points in the predicted point cloud that are within a specified distance threshold of the ground truth; ``recall'' is the percentage of points in the ground-truth point cloud that are within the distance threshold of the predicted point cloud. Precision measures the accuracy of the prediction and recall measures its coverage. For this metric, we use ${d = 1\%}$ of the side length of the predicted point cloud. As the Supplementary Table~\ref{tbl:shapenet_f1score} shows, our method outperforms the baseline DPC~\cite{insafutdinov2018unsupervised} on this metric. We do not report results for Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} since their code is not open-source.
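The F1-score at a distance threshold can be sketched as below; in our evaluation the threshold would be set to 1\% of the side length of the predicted point cloud.

```python
import numpy as np

def f1_score(pred, gt, threshold):
    """F1 at a distance threshold: precision = fraction of predicted
    points within `threshold` of the ground truth; recall = fraction
    of ground-truth points within `threshold` of the prediction."""
    d = np.linalg.norm(pred[:, None] - gt[None], axis=-1)
    precision = (d.min(axis=1) <= threshold).mean()
    recall = (d.min(axis=0) <= threshold).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```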
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{images/supp_views_line_new.pdf}
\caption{Quantitative Results on the number of views (1, 2, and 3) (with \textcolor{green}{green} and without inpainting \textcolor{red}{red}) used during network training. Our original method trains on 4 views. All the values reported are average Chamfer Distance metric over the ShapeNet~(Airplane, Car, Chair) and KITTI dataset. We are able to outperform the baseline using a limited number of views due to our use of inpainting.}
\label{fig:views_plot}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\columnwidth]{images/multi_view_supp_v2.pdf}
\caption{Qualitative results on varying the number of views given as input to the PointPnCNet. The first, second, third, and fourth row shows the results on the ShapeNet test set of car, chair, plane, and Semantic KITTI~\cite{behley2019semantickitti} dataset respectively. As can be seen, the results of 2 views and 3 views are qualitatively very similar to the results with 4 views. This demonstrates that our method is able to take advantage of a reduced number of views, due to our use of inpainting.}
\label{fig:views_results}
\end{figure*}
\subsubsection{Uniformity Metric}
We also evaluate the uniformity metric following Xie~\emph{et al}\bmvaOneDot~\cite{xie2020grnet} on the ShapeNet and KITTI datasets. In the Supplementary Table~\ref{tbl:shapenet_uniformity}, we compare our method with the baseline DPC on the ShapeNet dataset. Our method performs similarly to the baseline on this metric, revealing that both methods have similar uniformity of predicted points.
For the KITTI dataset, we compare our method with the ablation of our method without inpainting, as DPC does not train and evaluate on KITTI and Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly} do not have open-source code. We report the results on KITTI in Supplementary Table~\ref{tbl:kitti_uniformity} and show the improvement in the performance of our model when using inpainting.
\begin{figure}[hbt!]
\centering
\includegraphics[trim=0 0 0 0, clip,width=0.6\textwidth]{images/subplots_numpoints_vs_chamfer.pdf}
\caption{Quantitative Results of the Chamfer Distance metric with respect to the number of points in the input point cloud during testing.}
\label{fig:numpoints_chamfer}
\end{figure}
\subsection{Qualitative Results}
We present additional visualizations of the complete point clouds predicted by our network, PointPnCNet.
\textbf{Cars:} As can be observed from the Supplementary Figure~\ref{fig:shapenet_car_success}, our model is able to complete the finer details of a car such as the headlight of a car and generates a more defined outer boundary in comparison to DPC~\cite{insafutdinov2018unsupervised}. We also show that our network has the ability to not only complete the shapes of general cars, but also the shape of a truck as shown in the third row of Supplementary Figure~\ref{fig:shapenet_car_success}. We show a few failure cases as well on the car category in the Supplementary Figure~\ref{fig:shapenet_car_fail}. Our method is unable to create detailed shapes of various sports cars. Further, for the truck in the second row, our method fails to create a gap between the front and back of a truck.
\textbf{Chairs:} Supplementary Figure~\ref{fig:shapenet_chair_success} shows that our method generates finer completion results than DPC~\cite{insafutdinov2018unsupervised} on different types of chairs, such as a sofa and a desk chair. It is able to complete the front, back, and arms of the chair. There are also a few failure cases where the network generates noisy results, especially near the legs of a chair, as seen in Supplementary Figure~\ref{fig:shapenet_chair_fail}.
\textbf{Airplanes:}
From Supplementary Figure~\ref{fig:shapenet_plane_success}, we observe that the network is able to complete the front, back, and wings of the planes. Supplementary Figure~\ref{fig:shapenet_plane_fail} shows some failure cases in which it also generates some noisy points near the wings of the planes.
\textbf{KITTI:} We show the visualizations where our network is able to complete the partial point cloud cars from the LiDAR scans of the Semantic KITTI dataset in the first and second row of Supplementary Figure~\ref{fig:kitti_supp}. Additionally, there are a few failure cases where the network is unable to generate the details in a fine manner such as the tire of a car as seen in the third and fourth row of Supplementary Figure~\ref{fig:kitti_supp}. We also show the completion results of the partial point clouds in a scene in the Supplementary Figure~\ref{fig:kitti_scene}.
\textbf{ShapeNet Categories:} We present qualitative results on the five other categories of the ShapeNet dataset (Cabinet, Lamp, Sofa, Table, and Vessel) in Figure~\ref{fig:shapenet_categories}. We compare the results of our method with our ablation without inpainting. It can be observed that our method is able to complete the shape of the incomplete point clouds, whereas our method without inpainting outputs noisy points.
\subsection{Comparison with supervised method}
To analyze the performance gap between self-supervised and supervised methods, we compare our method with a fully supervised method, PCN~\cite{yuan2018pcn}, on 8 categories of the ShapeNet dataset and present the results in Table~\ref{tbl:pcn_sup_self_sup}. Since our method builds on the architecture of PCN, we compare it to fully supervised PCN; the choice of architecture is somewhat orthogonal to our proposed method of inpainting. We observe that the fully supervised PCN outperforms our self-supervised method, as expected. However, our results indicate that our method has reduced the gap between self-supervised and fully supervised approaches. In Table~\ref{tbl:pcn_sup_self_sup}, we also compare our method to the ``no inpainting'' ablation across the 8 object categories of ShapeNet and show a consistent improvement in performance.
\input{tables/shapenet_ablation}
\input{tables/num_occlusions}
\input{tables/more_ablations}
\input{tables/obs_unobs}
\input{tables/shapenet_f1score}
\input{tables/shapenet_uniformity}
\input{tables/kitti_uniformity}
\input{tables/pcn_sup_self_sup}
\newpage
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/sucess_car_supp.pdf}
\caption{Success cases on the Car category of the ShapeNet dataset. It can be observed that our model is able to complete the finer details of a car such as the headlight of a car and generates a detailed outer boundary in comparison to DPC~\cite{insafutdinov2018unsupervised} in all the rows. It is also able to generate the shape of a truck as can be seen in the third row.}
\label{fig:shapenet_car_success}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_car_supp.pdf}
\caption{Failure cases on the Car category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. Our method fails to create the detailed shapes of various sports cars. Further, for the truck in the second row, our method fails to create a gap between the front and back of a truck.}
\label{fig:shapenet_car_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/sucess_chair_supp.pdf}
\caption{Success cases on the Chair category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. Our method produces finer completion results than DPC on different types of chairs, such as a sofa and a desk chair. It is able to complete the front, back, and arms of the chair.}
\label{fig:shapenet_chair_success}
\end{figure*}
\vspace{2 cm}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_chair_supp.pdf}
\caption{Failure cases on the Chair category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. It can be observed in these cases that the network generates noisy results especially near the legs of a chair.}
\label{fig:shapenet_chair_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/success_plane_supp_v2.pdf}
\caption{Success cases on the Plane category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. It can be observed that the network is able to complete the front, back and wings of the planes.}
\label{fig:shapenet_plane_success}
\end{figure*}
\vspace{2 cm}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/fail_plane_supp.pdf}
\caption{Failure cases on the Plane category of the ShapeNet dataset. We compare our method with the results of DPC~\cite{insafutdinov2018unsupervised}. The network generates some noisy points near the wings of the planes.}
\label{fig:shapenet_plane_fail}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/semantic_kitti_supp.pdf}
\caption{Completion of partial point cloud cars from the LiDAR scans of the Semantic KITTI dataset (first and second row). The third and fourth row show some failure cases of the network where the network is unable to generate the smaller details such as a tire of a car.}
\label{fig:kitti_supp}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/kitti_sup.pdf}
\caption{Completion of partial point cloud of cars in a LiDAR scan of the Semantic KITTI dataset.}
\label{fig:kitti_scene}
\end{figure*}
\begin{figure*}[hbt!]
\centering
\includegraphics[width=0.99\textwidth]{images/categories1.pdf}
\includegraphics[trim=0 0 0 50, clip,width=0.99\textwidth]{images/categories2.pdf}
\caption{Qualitative results on five categories of ShapeNet compared to our ablation without inpainting.}
\label{fig:shapenet_categories}
\end{figure*}
\section{Experiments}
\vspace{-0.5em}
\label{sec:experiments}
\subsection{Implementation Details}
\vspace{-0.5em}
During test-time, we use a single view of an object. The multiple views are only available during training. To get the final completed point cloud, we concatenate the output from multi-level decoders $D_g$ and $D_{\ell}$. We do not remove regions at inference time. Otherwise, the network during inference is the same as described above.
For consistency with prior work~\cite{gu2020weakly}, we resample each partial point cloud $X$ to have a total of 3096 points.
PointPnCNet uses the encoder and decoder architecture from PCN~\cite{yuan2018pcn}. The model is trained from scratch for 400K iterations with a batch size of 32, a learning rate of $5\times10^{-4}$ decayed by 0.5 after every 100K iterations, and $\beta = 0.25$ in $\mathcal{L}_{wcd}$. Please refer to the appendix for more details.
\vspace{-1em}
\subsection{Experimental Setup}
\vspace{-0.5em}
\label{sec:experimental_setup}
Following the evaluation protocol of Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}, we test our approach on ShapeNet~\cite{chang2015shapenet} and Semantic KITTI~\cite{behley2019semantickitti}. ShapeNet has ground truth annotations for each object class which allows us to evaluate how well our method generates completed shape. On the other hand, Semantic KITTI allows us to evaluate the robustness of our method on real LiDAR data.
The observations are transformed to a canonical frame (a shared reference frame which aligns all instances of a class) using canonical frame predictions generated via IT-Net~\cite{yuan2018iterative} for ShapeNet and predicted bounding boxes obtained from OpenPCDet~\cite{openpcdet2020} for Semantic KITTI. We use IT-Net for ShapeNet as it is trained in a weakly-supervised manner from only classification labels and learns to align the instances of each class, without any pose supervision. In general, any pose estimator can be used here. We evaluate the robustness of our method to this canonical frame estimation in Sec.~\ref{sec:robustness}.
\textbf{ShapeNet:} ShapeNet~\cite{chang2015shapenet} is a synthetic dataset with 3D CAD models. We report our results on three categories, airplanes, cars, and chairs, that are commonly used in the related works~\cite{insafutdinov2018unsupervised, gu2020weakly}. We use the same data split provided by DPC~\cite{insafutdinov2018unsupervised}, where RGB-D data is generated for random camera views with fixed translation, similar to Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}. For evaluation, we use ground truth point clouds provided by DPC~\cite{insafutdinov2018unsupervised} which are densely sampled from ShapeNet meshes and downsampled to 8192 points.
\textbf{Semantic KITTI:} We evaluate our method for a real-world scenario using KITTI~\cite{behley2019semantickitti}. Previous methods~\cite{xie2020grnet, gu2020weakly, wen2020point, yuan2018pcn} have a standard protocol of evaluation on real-world data by testing on the cars of KITTI only. We adopt the same protocol in our work. Following Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}, we train over the parked car instances~(which have multiple views captured when a LiDAR sensor moves through the scene and scans a parked car from different locations) with sequences 00 to 10 (excluding 08) as train set and sequence 08 as test set. The train set consists of 507 parked car instances and 46152 observations, while the test set has 229 parked car instances and 16296 observations.
Although having no complete ground truth information in KITTI creates some limitations in its evaluation, testing on this dataset shows the ability of our method to handle real-world LiDAR data. By combining the evaluations on a real-world dataset~(KITTI) and a synthetic dataset~(ShapeNet), which has ground truth annotations, we are able to present a more thorough evaluation. This is the standard evaluation procedure following Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly}.
\textbf{Metrics:}
Our primary metric for quantitatively evaluating shape completion is the \textit{Chamfer Distance (CD)}, as used in previous works~\cite{gu2020weakly,yuan2018pcn,wang2020cascaded}. We define this metric in its weighted form in Equation~\ref{equ:cd_global_loss}. For evaluation, to compare with the ground-truth completed point cloud, we weight each component equally with $\beta = 0.5$.
Additionally, we follow Gu \emph{et al}\bmvaOneDot~\cite{gu2020weakly} and report each component of the Chamfer distance independently: the mean distance from each predicted point to its nearest true point described as \textit{Precision}, and the mean distance from each true point to its nearest predicted point described as \textit{Coverage}. \textit{Precision} describes how well the predicted points match the local shape, while \textit{Coverage} is related to how much of the shape is completed. We also evaluate the Earth Mover's Distance~(EMD)~\cite{yuan2018pcn}, which finds a bijection between the predicted point cloud and the ground truth point cloud that minimizes the average distance between corresponding points. Like previous work, we also evaluate the F-score@1\%~\cite{xie2020grnet}.
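In code, the directed components and the symmetric evaluation metric can be sketched as below. This is a brute-force NumPy illustration of our own (the function name is ours, not from any released implementation):

```python
import numpy as np

def chamfer_components(pred, gt):
    """Directed Chamfer components between point clouds (N, 3) and (M, 3).

    Returns (precision, coverage, cd): `precision` is the mean distance
    from each predicted point to its nearest ground-truth point,
    `coverage` the mean distance from each ground-truth point to its
    nearest predicted point, and `cd` their equally weighted average
    (the beta = 0.5 evaluation setting).
    """
    dists = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = dists.min(axis=1).mean()   # accuracy of the predicted points
    coverage = dists.min(axis=0).mean()    # how much of the shape is covered
    return precision, coverage, 0.5 * (precision + coverage)
```

Note that, unlike these nearest-neighbor averages, the EMD enforces a one-to-one matching between the two clouds, so it additionally penalizes non-uniform point density.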
\vspace{-1em}
\subsection{Point Cloud Completion Results}
\vspace{-0.5em}
We compare with the current state-of-the-art unsupervised methods, DPC~\cite{insafutdinov2018unsupervised} and Gu~\emph{et al}\bmvaOneDot~\cite{gu2020weakly}.
Table~\ref{table:shapenet_main_results} shows our method outperforming the baseline methods on the synthetic ShapeNet dataset, producing lower Chamfer distances across all shape categories. \textit{Precision} and \textit{Coverage} metrics also improve, showing that our method produces more accurate points and better covers the full object shape.
Our method also outperforms DPC~\cite{insafutdinov2018unsupervised} on the Earth Mover's Distance (EMD) metric (Table~\ref{table:kitti_agg_emd}b).
Since the code for \cite{gu2020weakly} is not publicly available, the EMD metric on that method cannot be evaluated.
We further show in Table~\ref{table:agg_ablation_sem_kitti}a that our method outperforms the previous state-of-the-art~\cite{gu2020weakly} on the Semantic KITTI dataset, generating outputs that are significantly more accurate than~\cite{gu2020weakly}.
The KITTI dataset is more realistic than ShapeNet. With samples spanning a range of sparsity (since real-world LiDAR becomes sparser with distance), it represents the data available in self-driving scenarios. We also show improvement compared to a simple densification baseline (Densified Input in Table~\ref{table:agg_ablation_sem_kitti}a), which suggests that our method is indeed completing the partial point clouds rather than simply densifying them. This densification baseline uniformly samples points within the volume of a local surface that is approximated as an ellipsoid, formed using the eigenvalues for the 10 nearest neighbors of each point in the input partial point cloud. We also conduct a uniformity analysis whose results we report in the appendix.
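The densification baseline can be sketched roughly as follows. This is our own illustrative NumPy reconstruction, not the code used for the reported numbers; the function name, the number of samples per point, and the uniform-in-ellipsoid sampling details are assumptions:

```python
import numpy as np

def densify(points, k=10, samples_per_point=4, rng=None):
    """Hypothetical sketch of an ellipsoid densification baseline.

    For each input point, fit a local ellipsoid from the covariance of
    its k nearest neighbours (eigenvalues give the squared semi-axis
    scales) and sample extra points uniformly inside it.
    """
    rng = np.random.default_rng(rng)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    new_pts = []
    for i, p in enumerate(points):
        nn = points[np.argsort(d[i])[:k]]      # k nearest (incl. the point)
        evals, evecs = np.linalg.eigh(np.cov(nn.T))
        # Sample uniformly in the unit ball, then stretch by the axes.
        u = rng.normal(size=(samples_per_point, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        r = rng.random((samples_per_point, 1)) ** (1.0 / 3.0)
        new_pts.append(p + (u * r * np.sqrt(np.maximum(evals, 0))) @ evecs.T)
    return np.concatenate([points] + new_pts, axis=0)
```

Such a baseline adds points only near the observed surface, which is why it cannot fill occluded regions the way a completion network can.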
\input{tables/shapenet_main}
\input{tables/table2_3}
\input{tables/cd_emd_fig_table}
\vspace{-1.125em}
\subsection{Qualitative Results}
\vspace{-0.5em}
We present the qualitative results of our method for each category of ShapeNet in Figure~\ref{fig:qualtitative_results} and KITTI in Figure~\ref{fig:ablation_results}. In comparison to the baseline DPC~\cite{insafutdinov2018unsupervised}, we observe that our method is able to better cover the target object with a more uniform distribution over the target surface and accurately reconstructs the fine-grained object details. For example, our method is able to complete the back of the car and the side mirror whereas the baseline outputs noisy points. For chairs, our method generates more uniformly distributed points whereas the baseline outputs patches/clusters of points in that location. This highlights the fact that our method is better able to generalize and complete the unseen regions of incomplete shapes.
\input{images/ablation_vis}
\vspace{-1.125em}
\subsection{Ablation Study}
\vspace{-0.5em}
\noindent \textbf{Inpainting Loss:} Region removal to create synthetic occlusions and the task of inpainting are removed. \textbf{Multi-View Loss:} The multi-view loss is removed from our training method; each partial point cloud is used to supervise its own synthetically occluded completion. \textbf{Global Level:} We remove the Inpainting-Global loss, the global encoder $\mathbf{E_g}$, and the global decoder $\mathbf{D_g}$ from our completion pipeline. \textbf{Local Level:} The Inpainting-Local loss, the local encoder $\mathbf{E_{\ell}}$, and the local decoder $\mathbf{D_{\ell}}$ are removed from our method. The number of output points for the \textit{global level} and \textit{local level} ablations is kept consistent with our full method.
We report the ablation results in Table~\ref{table:agg_ablation_sem_kitti}b on ShapeNet, as an average over all categories, and on Semantic KITTI. We find that all components of our system are crucial for optimal performance across both datasets. We further report the F-score@1\%~\cite{xie2020grnet} and the EMD metric on the Semantic KITTI dataset with the ablation of removing inpainting in Table \ref{table:kitti_agg_emd}c. We find that inpainting greatly improves our results across both of these metrics.
The qualitative effects of our ablation study can be seen in Figure~\ref{fig:ablation_results}. Comparing ``Ours'' and ``Without Inpainting'', we observe that inpainting generates an object-specific, less noisy output. Without the local loss, our method fails to complete local details of an object, such as the back of a car or the wings of a plane; without the global loss, it predicts a generic, noisy shape. Since each local encoder and decoder only observe the points within their region, and not the points in other potentially occluded regions, they allow the network to focus on individual parts of an object and be robust to different occlusion patterns. The local loss helps create a more uniform completion, since it completes its associated region, while the global loss reasons about the entire shape of the object. Finally, without the multi-view loss, the output point cloud is noisy and incomplete, as can be seen in all shapes.
\vspace{-1.125em}
\subsection{Robustness to Canonical Frame Estimation}
\vspace{-0.5em}
\label{sec:robustness}
Previous work~\cite{gu2020weakly} depends on multiple views, which can make it sensitive to pose alignment errors. While we also use a multi-view loss, our inpainting losses make the model robust to noisy alignment, allowing it to learn from poorly aligned data. The mean rotation/translation difference between the multiple partially observed shapes, after using IT-Net for pose canonicalization, during both training and inference is $5.46^{\circ}/0.008$, $12.33^{\circ}/0.013$, and $7.12^{\circ}/0.010$ for Car, Chair, and Plane, respectively~(the unit of translation is the object diameter). This shows that even the canonicalized poses are not perfectly aligned and that, due to inpainting, our method is still able to learn from this poorly aligned data (particularly with respect to rotation). To further highlight the contribution of inpainting to this robustness, we add noise to the predicted IT-Net poses, with rotations and translations sampled uniformly with maximum displacements of 5\textdegree / 0.01, 10\textdegree / 0.05, and 15\textdegree / 0.10. The figure in Table~\ref{table:kitti_agg_emd}a shows that without inpainting (in \textcolor{red}{red}) our method is extremely sensitive to alignment noise, but with inpainting (in \textcolor{green}{green}) it degrades only slightly with higher noise and remains more accurate at all noise levels than the baseline~\cite{gu2020weakly} with no noise added.
\vspace{-1.125em}
\subsection{Impact of $\beta$ in the Weighted Chamfer Distance loss}
\vspace{-0.5em}
\input{tables/beta_table_only}
We present an analysis in Table~\ref{table:beta_table} in which we train PointPnCNet with different values of $\beta$ to understand the contribution of the second term in the asymmetric Weighted Chamfer Distance loss (Eqn.~\ref{equ:cd_global_loss}), $\mathcal{L}_{wcd}$.
From Table~\ref{table:beta_table}, we can observe that
the optimal performance occurs at $\beta = 0.25$ across both the ShapeNet and KITTI datasets.
Our intuition is that a larger value of $\beta$ imposes a penalty for generating points in $Y$ in the regions that were occluded in the input; this contradicts our goal of completing those missing regions. Nonetheless, setting $\beta = 0$ also leads to worse performance because the second term in Eqn.~\ref{equ:cd_global_loss} is needed to (minimally) penalize the network for predicting points in $Y$ that are far from the original partial point cloud $X$. Setting $\beta = 0.25$ provides the appropriate balance between these competing objectives. As explained in Section~\ref{sec:losses}, this tradeoff only occurs for the global loss; the local loss uses a regional indicator that only applies the loss to regions for which we have ground truth information.
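Under the assumption, which is our reading of the discussion above rather than the verbatim equation (the authoritative form is Eqn.~\ref{equ:cd_global_loss}), that $\mathcal{L}_{wcd}$ weights the input-to-prediction term by $(1-\beta)$ and the prediction-to-input term by $\beta$, the loss can be sketched as:

```python
import numpy as np

def weighted_chamfer(partial, pred, beta=0.25):
    """Sketch of an asymmetric weighted Chamfer loss (assumed form).

    (1 - beta) * mean over x in `partial` of min distance to `pred`
      + beta   * mean over y in `pred` of min distance to `partial`.
    A small beta only mildly penalises predicted points far from the
    partial input, leaving the network free to fill occluded regions.
    """
    dists = np.linalg.norm(partial[:, None, :] - pred[None, :, :], axis=-1)
    to_pred = dists.min(axis=1).mean()     # every input point is explained
    to_partial = dists.min(axis=0).mean()  # predicted points stay plausible
    return (1.0 - beta) * to_pred + beta * to_partial
```

With $\beta = 0$ the second term vanishes entirely, which matches the observation that some weight on it is needed to keep predictions near the input.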
Borg Antiquarian
Angling, Fishing & Water Sports (1)
Anthology & Collection (2)
Anthropology & Ethnology (10)
Anthropology & Paleontology (4)
Antiquities & Artifacts (2)
Art & Illustrations (72)
Audio Recording (1)
Autographs & Manuscripts (4)
Baedekers & Travel Guides (4)
Behavior & Ethology (1)
Bibles & Early Printing (3)
Bibliography & Textual Study (8)
Biography & History (1)
Black & Ethnic Literature (3)
Civil War (American) (3)
Comparative Religion (3)
Continuation (1)
Cook Books & Gastronomy (4)
Crime Thriller (2)
Dance & Choreography (2)
Demographics (1)
Detective Fiction (2)
Drama & Acting (7)
Ecology & Environment (13)
Economics & Finance (9)
Economics & Technology (1)
Encyclopedias (5)
Engraving (hand-colored) (4)
Erotica & Sexuality (3)
Eroticism (1)
Essays & Opinions (21)
Evolution, Art (1)
Exotica & Curiosa (1)
Exploration & Travel (3)
Fairy Tales & Fantasy (5)
Fantasy & Fantastic Adventure (12)
Fishing, Angling, and Sport / Poetry (1)
Folklore & Magic (4)
Food, Cooking, & Cuisine (6)
Games & Play (5)
Gardening, Flowers, & Horticulture (11)
Graphic story (2)
Hand-colored print (1)
Historical Fiction: Mystery (1)
Historical Poetry (1)
Human-Animal Relationships (3)
Illustrated gift volume (22)
Illustrated volume (1)
Illustrations & Cartoons (4)
Language, Lexicons, & Grammar (3)
Law & Jurisprudence (4)
Legal Thriller (36)
Literary History / Bibliography (1)
Literature & Fiction (303)
Literature / Comic Art (1)
Literature / Private Press (1)
Literature (1)
Lithograph (colored) (2)
Magic, Card & Coin Tricks (1)
Mathematics / Business (1)
Medicine & Therapeutics (12)
Memoirs, Correspondence, & Letters (15)
Military & War (13)
Military fiction & War stories (2)
Monograph & Technical Study (6)
Movies & Film-making (6)
Music (1)
Music & Musicians (6)
Myths & Legends (8)
Native Americans & Ethnic Groups (2)
Natural Philosophy (1)
Natural Science (9)
Nature & Environment (9)
Nobel Laureate & Prize (1)
Painting and Watercolors (5)
Philately, stamps, & 1st day covers (1)
Philosophy & Religion (1)
Photography & Related Processes (6)
Physics & Cosmology (1)
Plays & Theater (12)
Poetry & Poetics (91)
Political Thriller (3)
Printing & Typography (1)
Private Press (6)
Psychology & Psychoanalysis (13)
Religion, Myths, & Legends (16)
Religious Texts (2)
Religious Texts & Studies (61)
Romance & Suspense (1)
Science & Invention (6)
Short stories & novellas (17)
Signer Declaration of Independence (1)
Signers (1)
Sketches & caricatures (1)
Sketches, Cartoons, & Caricatures (12)
Spies & Intrigues (8)
Suspense, Thriller, Psychological Fiction (1)
Technology & Change (1)
Thrillers & Horror Stories (41)
Translation from the French (2)
Travel & Exploration (47)
Warfare & weapons (13)
Western History & LDS (20)
Women & Feminism (1)
Young Adult / Fantasy Adventure (1)
Young Adult / Fantasy Fiction / Epic Fiction (1)
Young Adult: Thrillers & Horror Stories (1)
Any Price (2) Under $100 (1) $5,000-19,999 (1)
Antiquities & Artifacts
Apothecary Scales 18th century
[Bunker Hill & Revolutionary War] Townsend, Dr. David & male physician-descendants in his family
Original with family inscriptions. Wooden Case. Original wooden case for a brass set of scales used by Dr. David Townsend during the battle of Bunker Hill and other events requiring medical attention during and after the American Revolutionary War.
SKYSTONE AND SILVER; The Collector's Book of Southwest Indian Jewelry
Rosnick, Carl and Joseph Stacey
Englewood Cliffs, New Jersey: Prentice-Hall, 1976. Photographs. First Edition, First Printing. Cloth. Quarto (12 1/4" x 9 3/8" x 1 3/8"), turquoise cloth with silver lettering on spine and blind-stamped Southwest Native American design on front cover, archival mylar-protected photographic dust jacket (unclipped) depicting Indian silver and turquoise jewelry, profusely...
825 S. Waukegan Road A8
Lake Forest, IL 60045-4129
© 2021 Borg Antiquarian. All rights reserved.
Possibility of measuring the thermal Casimir interaction between a plate and a cylinder attached to a micromachined oscillator
R. S. Decca, E. Fischbach, G. L. Klimchitskaya, D. E. Krause, D. López, V. M. Mostepanenko
We investigate the possibility of measuring the thermal Casimir force and its gradient in the configuration of a plate and a microfabricated cylinder attached to a micromachined oscillator. The Lifshitz-type formulas in this configuration are derived using the proximity force approximation. The accuracy of the obtained expressions is determined from a comparison with exact results available in ideal metal case. Computations of the thermal correction to both the Casimir force and its gradient are performed in the framework of different theoretical approaches proposed in the literature. The correction to the Casimir force and its gradient due to lack of parallelism of the plate and cylinder is determined using the nonmultiplicative approach. The error introduced in the theory due to the finite length of the cylinder is estimated. We propose that both static and dynamic experiments measuring the thermal Casimir interaction between a cylinder and a plate using a micromachined oscillator can shed additional light on the thermal Casimir force problem. Specifically, it is shown that the static experiment is better adapted for the measurement of thermal effects.
Physical Review A - Atomic, Molecular, and Optical Physics
DOI: 10.1103/PhysRevA.82.052515
Decca, R. S., Fischbach, E., Klimchitskaya, G. L., Krause, D. E., López, D., & Mostepanenko, V. M. (2010). Possibility of measuring the thermal Casimir interaction between a plate and a cylinder attached to a micromachined oscillator. Physical Review A - Atomic, Molecular, and Optical Physics, 82(5), 052515. https://doi.org/10.1103/PhysRevA.82.052515
\section{Introduction}
Since the work of Schwinger pointing to the possible vacuum instability due to
the pair production in strong external electriclike fields (the Schwinger
effect) \cite{Schwinger51}, this effect has always attracted the attention of
physicists. At present we know that astrophysical objects such as black holes
and hot strange stars can generate huge electromagnetic fields in their
vicinity (dozens of times higher than Schwinger's critical field, $E_{c}=m^{2}/e$); see, e.g., Refs. \cite{Usov97,Alf+etal01,Ruf+etal11,Web+etal14}. It
can also be seen that in some situations in graphene and similar
nanostructures the vacuum instability effects caused by strong (with respect to massless fermions) electric fields are of significant interest; see, e.g.,
Refs.
\cite{GelTan16,allor08,GavGitY12,VafVish14,KaneLM15,Olad+etal17,Akal+etal19}
and references therein. Recent progress in laser physics allows one to hope that the particle creation effect will be experimentally observed in laboratory conditions in the near future, as the strong laser experimental community, for example, Center for Relativistic Laser Science (CoReLS), Extreme Light Infrastructure (ELI), and Exawatt Center for Extreme Light Studies (XCELS), is slowly approaching the critical field strengths for observable pair production (see Ref. \cite{LaserRev} for a review).
Thus, there exists a clear physical motivation for
theoretical studies of the vacuum instability. First, it is necessary to
mention theoretical works devoted to various nonperturbative (with
respect to the external field) calculation methods. Some of these methods are
formulated for time-dependent external fields that vanish as
$|t|\rightarrow\infty$ (for $t$-electric potential steps in what follows) and
are based on possible exact solutions of the Dirac equation; see, e.g.,
\cite{FGS,Nikis79,General1,Ruf10,GelTan16}. Some of the methods are based on the
analysis of the Schwinger effective action (see \cite{Dunn04} for a
review). The so-called derivative expansion approximation method,
being applied to the Schwinger effective action, allows one to treat
effectively arbitrary slowly varying in time strong fields
\cite{DunnH98,GusSh99}. We note that the locally constant field approximation
(LCFA), which is to limit oneself to leading contributions of the derivative
expansion of the effective action, allows for reliable results for
electromagnetic fields of arbitrary strength; see, for example, Refs.~\cite{GalN83,GiesK17}. An alternative approach to treat slowly varying
$t$-electric potential steps, which does not depend on the existence of the
corresponding exact solutions, was formulated in Ref. \cite{GavGit17}.
When achieving extreme field strengths, the inhomogeneity of realistic
external fields becomes important. At the same time in astrophysics and in the
physics of graphene, electric fields can be considered as time independent;
see, e.g., references cited above. Thus, it is important to study the
effect of pair creation in strong constant inhomogeneous fields and to
develop corresponding nonperturbative methods. Here, it is natural to start
with considering the vacuum instability caused by time-independent
inhomogeneous electric fields of a constant direction that depend on only one
coordinate $x$ and are concentrated in restricted space areas, which means
that the fields vanish as $|x|\rightarrow\infty$. The latter fields represent
a kind of so-called $x$-electric potential steps for charged particles.
Nonperturbative methods for treating quantum effects in $t$-electric potential
steps with the help of exact solutions of the Dirac equation are not directly
applicable to the $x$-electric potential steps. One of the main differences
between these approaches is that unlike the case of $t$-electric potential steps, the
magnitude of the corresponding $x$-potential step, $\Delta U$, is crucial for
pair creation from a vacuum regardless of the field intensity. Only critical steps,
$\Delta U>2m$ ($m$ is electron mass), produce electron-positron pairs and this
production occurs only in a finite range of quantum numbers that is called the
Klein zone. Depending on the localization of the constant field, a critical
point (critical surface) exists in the space of inhomogeneous electric field
configurations where the pair production probability vanishes. This
critical surface separates the Klein zone from the adjacent ranges where the pair
production is impossible. Note that the absence of the critical surface in the
case of the $t$-electric potential step arises as a consequence of neglecting
the fact that a realistic electric field occupies a finite space region.
An adequate nonperturbative technique for treating the vacuum instability in
the $x$-electric potential steps was elaborated on in Ref. \cite{x-case}. Similar to
the case of $t$-electric potential steps, special sets of exact solutions of
the relativistic wave equations with corresponding external fields are crucial
in this formulation. This technique was effectively used to describe the particle
creation effect in the Sauter field of the form $E(x)=E\cosh^{-2}\left(
x/L_{\mathrm{S}}\right) $, in a constant electric field between two capacitor
plates, and in exponential time-independent electric steps, where the
corresponding exact solutions were available; see Refs.
\cite{x-case,L-field,x-exp}. However, there are only a few cases when such exact
solutions are known.
It was recently shown that near criticality, pair production exhibits
universal properties similar to those of continuous phase transitions
\cite{cr-regime1,cr-regime2}. In our terminology, this corresponds to the
situation when the Klein zone is relatively small. In the present article, we
show that there also exists a completely different type of universality,
believing that the Klein zone is quite extensive, so that the total number of
created pairs itself can be considered as a large parameter. Here, we develop a
nonperturbative approach that allows one to treat the vacuum instability
effects for arbitrary weakly inhomogeneous $x$-electric potential steps in the
absence of the corresponding exact solutions. The Schwinger effective action
method shows that when the probability of the pair creation is exponentially
small, there are certain relations between the probability and such total
physical quantities as a mean number density, a current density, and an
energy-momentum tensor of created pairs. However, for strong electric fields
such relations were established only for $t$-electric potential steps slowly
varying with time; see Ref. \cite{GavGit17}, and in a few existing exactly
solvable cases; see the review \cite{AdoGavGit17}. The approach developed by us in the
present article allows us to establish similar relations for arbitrary
weakly inhomogeneous $x$-electric potential steps.
The article is organized as follows. In Sec. \ref{S2}, we give a definition of
weakly inhomogeneous $x$-electric potential steps and present an overview of
the vacuum instability due to such backgrounds for existing exactly solvable
cases. We stress some universal features of the vacuum instability
in all these examples. In Sec. \ref{S3}, we present the approximation of a
weakly inhomogeneous field and derive universal forms for the flux density of
created particles and the probability for the vacuum to remain a vacuum. We
show that using these results, it is possible to calculate the physical
quantities for any slightly inhomogeneous but otherwise arbitrary constant
electric field. In this way, we reproduce the results obtained with the help of
existing exact solutions. Then we succeed in describing the vacuum instability
in electric fields where no exact solutions of the corresponding Dirac equation
are known, namely, in the Gaussian peak $E(x)=E_{0}\exp\left[ -\left(
x/L_{\mathrm{G}}\right) ^{2}\right] $ and a special inverse square
field of the form $E(x)=E_{0}\left[ 1+\left( 2x/L_{\mathrm{w}}\right)
^{2}\right] ^{-1}$. Finally, in Sec.~\ref{S4}, we derive a
general expression for the current density vector of particles created from
a vacuum and relate it to the flux density of created particles obtained in the
approximation of a weakly inhomogeneous field.
\section{Weakly inhomogeneous potential steps: exactly solvable cases
\label{S2}}
Let us consider QED with a time-independent pure electric field\footnote{We recall that our system is placed in the ($d=D+1$)-dimensional Minkowski spacetime parametrized by the coordinates $X=\left(
X^{\mu},\ \mu=0,1,\ldots,D\right) =\left( t,\mathbf{r}\right) $, $X^{0}=t$,
$\ \ \mathbf{r}=\left( X^{1},\ldots,X^{D}\right) $, $x=X^{1}$. It consists
of a Dirac field $\psi\left( X\right) $ interacting with an external
electromagnetic field $A^{\mu}(X)$ in the form of an $x$-electric potential
step.} $\mathbf{E}\left( X\right) =\mathbf{E}\left( x\right) =\left(
E\left( x\right) ,0,...,0\right)$.
The inhomogeneous electric field $E\left( x\right)$ has the form,\footnote{We use the system of units, where
$c=\hbar=1$.}
\begin{align}
& E\left( x\right) =E=\mathrm{const}>0,\ x\in S_{\mathrm{int}}=\left(
x_{\mathrm{L}},x_{\mathrm{R}}\right) ;\ \nonumber\\
& E\left( x\right) =0,\ x\in S_{\mathrm{L}}=\left( -\infty,x_{\mathrm{L}}\right] ,\ x\in S_{\mathrm{R}}=\left[ x_{\mathrm{R}},\infty\right) .
\label{sc0}
\end{align}
We assume that the basic Dirac particle is an electron with the mass $m$ and
the charge $-e$, $e>0$, and the positron is its antiparticle. The electric
field under consideration accelerates the electrons along the $x$ axis in the
negative direction and the positrons along the $x$ axis in the positive
direction. Potentials of the corresponding electromagnetic field $A^{\mu
}\left( X\right) $ can be chosen as
\begin{equation}
A^{\mu}\left( X\right) =\left( A^{0}\left( x\right) ,A^{j}=0,\ j=1,2,\ldots,D\right) , \label{2.3}
\end{equation}
so that $E\left( x\right) =-\partial_{x}A_{0}\left( x\right) $.
We call the electric field $E(x)$ a weakly inhomogeneous electric field on a
spatial interval $\Delta l$ if the following condition holds true:
\begin{equation}
\left\vert \frac{\overline{\partial_{x}E(x)}\Delta l}{\overline{E(x)}}\right\vert \ll1,\ \ \Delta l/\Delta l_{\mathrm{st}}^{\mathrm{m}}\gg1,
\label{sc.1}
\end{equation}
where $\overline{E(x)}$ and $\overline{\partial_{x}E(x)}$ are the mean values of
$E(x)$ and $\partial_{x}E(x)$ on the spatial interval $\Delta l$,
respectively, and $\Delta l$ is significantly larger than the length scale
$\Delta l_{\mathrm{st}}^{\mathrm{m}}$, which is
\begin{equation}
\Delta l_{\mathrm{st}}^{\mathrm{m}}=\Delta l_{\mathrm{st}}\max\left\{
1,m^{2}/e\overline{E(x)}\right\} ,\ \ \Delta l_{\mathrm{st}}=\left[
e\overline{E(x)}\right] ^{-1/2}. \label{sc.2}
\end{equation}
Note that the length scale $\Delta l_{\mathrm{st}}^{\mathrm{m}}$ appears in
Eq.~(\ref{sc.1}) as the length scale when the perturbation theory with respect
to the electric field breaks down and the Schwinger (nonperturbative)
mechanism is primarily responsible for the pair creation. In what follows, we
show that this condition is sufficient. We are primarily interested
in strong electric fields, $m^{2}/e\overline{E(x)}\lesssim1$. In this case,
the second inequality in Eq.~(\ref{sc.1}) is simplified to the form $\Delta
l/\Delta l_{\mathrm{st}}\gg1$, in which the mass $m$ is absent. In such cases,
the potential of the corresponding electric step hardly differs from the
potential of a uniform electric field
\begin{equation}
U(x)=-eA_{0}(x)\approx U_{\mathrm{const}}(x)=e\overline{E(x)}x+U_{0},
\label{sc.3}
\end{equation}
on the interval $\Delta l$, where $U_{0}$ is a given constant. We see this
behavior for the fields of known exact solvable cases for $x$-electric
potential steps, namely, the Sauter field, the $L$-constant electric field,
and the exponential peak field.
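As a quick numerical illustration (not part of the original derivation), one can evaluate the left-hand side of condition (\ref{sc.1}) for a Sauter-like profile $E(x)=E_{0}\cosh^{-2}(x/L_{\mathrm{S}})$. The Python sketch below uses arbitrary illustrative parameter values in units with $c=\hbar=1$:

```python
import math

# Illustrative check of the weak-inhomogeneity condition (sc.1)-(sc.2)
# for a Sauter field E(x) = E0 / cosh^2(x/LS). All numbers are assumptions.
eE0, LS, m = 1.0, 300.0, 1.0      # strong field: m^2/eE0 = 1
x0 = LS / 2.0                     # probe the field away from the symmetry point

def eE(x):                        # e E(x)
    return eE0 / math.cosh(x / LS)**2

def deE(x):                       # e dE/dx
    return -2.0 * eE0 * math.tanh(x / LS) / (LS * math.cosh(x / LS)**2)

def mean(f, dl, n=2000):          # mean value of f over [x0 - dl/2, x0 + dl/2]
    h = dl / n
    return sum(f(x0 - dl/2 + (i + 0.5) * h) for i in range(n)) * h / dl

dl_st = mean(eE, 1.0) ** -0.5                      # Delta l_st, Eq. (sc.2)
dl_st_m = dl_st * max(1.0, m**2 / mean(eE, 1.0))   # Delta l_st^m
dl = 10.0 * dl_st_m                                # Delta l >> Delta l_st^m
ratio = abs(mean(deE, dl) * dl / mean(eE, dl))     # left side of Eq. (sc.1)
print(ratio, dl / dl_st)
```

For $eE_{0}L_{\mathrm{S}}^{2}\gg1$ the ratio comes out small while $\Delta l$ spans many stabilization lengths, as condition (\ref{sc.1}) requires.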
The magnitude of the corresponding $x$-potential step is
\begin{equation}
\Delta U=U_{\mathrm{R}}-U_{\mathrm{L}}>0,\ \ U_{\mathrm{R}}=-eA_{0}(+\infty),\ \ U_{\mathrm{L}}=-eA_{0}(-\infty). \label{f.7}
\end{equation}
We are interested in electron-positron pair creation that exists for the critical
steps, $\Delta U>2m$; see Ref. \cite{x-case} for details. The Dirac equation
with an $x$-electric potential step has the form,
\begin{align}
& i\partial_{0}\psi(X)=\hat{H}\psi(X),\ \ \hat{H}=\gamma^{0}\left(
-i\gamma^{j}\partial_{j}+m\right) +U(x),\ \ j=1,\ldots D,\nonumber\\
& \psi(X)=\exp\left( -ip_{0}t+i\mathbf{p}_{\perp}\mathbf{r}_{\perp}\right)
\psi_{n}(x),\ \ \mathbf{p}_{\perp}=\left( p^{2},\ldots,p^{D}\right)
,\nonumber\\
& \psi_{n}(x)=\left\{ \gamma^{0}\left[ p_{0}-U(x)\right] -\gamma^{1}\hat{p}_{x}-\mathbf{\gamma}_{\perp}\mathbf{p}_{\perp}+m\right\} \varphi
_{n}^{(\chi)}(x)\upsilon_{\chi},\ \nonumber\\
& \gamma^{0}\gamma^{1}\upsilon_{\chi}=\chi\upsilon_{\chi},\ \ \chi
=\pm1,\ \ \upsilon_{\chi^{\prime},\sigma^{\prime}}^{\dag}\upsilon_{\chi
,\sigma}=\delta_{\chi^{\prime}\chi}\delta_{\sigma^{\prime}\sigma}.
\label{ap.2}
\end{align}
Here $\psi(X)$ is a $2^{[d/2]}$-component spinor, $[d/2]$ stands for the
integer part of $d/2$, $m\neq0$ is the electron mass, and $\gamma^{\mu}$ are the
$\gamma$ matrices in $d$ dimensions. The complete set of solutions of such a
Dirac equation is determined by the functions $_{\zeta}\varphi_{n}(x)$ and
$^{\zeta}\varphi_{n}(x)$ with special right and left asymptotics ($\zeta=\pm$)
at $x\in S_{\mathrm{L}}$ and $x\in S_{\mathrm{R}}$, respectively. These
solutions are parametrized by the set of quantum numbers $n=(p_{0},\mathbf{p}_{\bot},\sigma)$ where $p_{0}$ stands for total energy,
$\mathbf{p}_{\bot}$ is transversal momentum (the index $\perp$ stands for
components of momentum that are perpendicular to the electric field), and
$\sigma$ is spin polarization. The solutions $\varphi_{n}^{(\chi)}(x)$, which differ only by the value of $\chi$, are linearly dependent. It therefore suffices to work with solutions corresponding to one of the two possible values of $\chi$; here and in what follows we omit the subscript $\chi$, implying that this spin quantum number is fixed in a certain way.
A critical step produces electron-positron pairs in the Klein zone $\Omega_{3}$,
defined by the double inequality,
\begin{equation}
\Omega_{3}:\ \ U_{\mathrm{L}}+\pi_{\perp}\leq p_{0}\leq U_{\mathrm{R}}-\pi_{\perp},\ \ 2\pi_{\perp}\leq\Delta U, \label{Klein}
\end{equation}
where $\pi_{\bot}=\sqrt{\mathbf{p}_{\bot}^{2}+m^{2}}$. In this range, initial
states are determined by functions $_{-}\varphi_{n}$ for a positron
and$\ ^{-}\varphi_{n}$ for an electron while final states are determined by
functions $_{+}\varphi_{n}$ for a positron and$\ ^{+}\varphi_{n}$ for an electron
(see Ref. \cite{x-case} for details). The latter functions satisfy the
following asymptotic conditions:
\begin{align}
& _{\;\zeta}\varphi_{n}\left( x\right) =\ _{\zeta}\mathcal{N}\exp\left[
ip^{\mathrm{L}}\left( x-x_{\mathrm{L}}\right) \right] ,\ \ x\in
S_{\mathrm{L}},\nonumber\\
& ^{\;\zeta}\varphi_{n}\left( x\right) =\ ^{\zeta}\mathcal{N}\exp\left[
ip^{\mathrm{R}}\left( x-x_{\mathrm{R}}\right) \right] ,\ \ x\in
S_{\mathrm{R}},\nonumber\\
& p^{\mathrm{L}}=\zeta\sqrt{\left[ \pi_{0}\left( \mathrm{L}\right)
\right] ^{2}-\pi_{\bot}^{2}},\ \ p^{\mathrm{R}}=\zeta\sqrt{\left[ \pi
_{0}\left( \mathrm{R}\right) \right] ^{2}-\pi_{\bot}^{2}},\ \zeta
=\pm\ ,\nonumber\\
& \pi_{0}\left( \mathrm{L/R}\right) =p_{0}-U_{\mathrm{L/R}}. \label{L3}
\end{align}
The constants $^{\zeta}\mathcal{N}$ and $_{\zeta}\mathcal{N}$ are
normalization factors with respect to the inner product on the $x$-constant
hyperplane \cite{x-case},
\begin{align}
& \left( \ _{\zeta}\psi_{n},\ _{\zeta^{\prime}}\psi_{n^{\prime}}\right)
_{x}=\zeta\eta_{\mathrm{L}}\delta_{\zeta,\zeta^{\prime}}\delta_{n,n^{\prime}},\ \ \eta_{\mathrm{L}}=\mathrm{sgn\ }\pi_{0}\left( \mathrm{L}\right)
,\nonumber\\
& \left( \ ^{\zeta}\psi_{n},\ ^{\zeta^{\prime}}\psi_{n^{\prime}}\right)
_{x}=\zeta\eta_{\mathrm{R}}\delta_{\zeta,\zeta^{\prime}}\delta_{n,n^{\prime}},\ \ \eta_{\mathrm{R}}=\mathrm{sgn\ }\pi_{0}\left( \mathrm{R}\right)
,\nonumber\\
& \left( \psi,\psi^{\prime}\right) _{x}=\int\psi^{\dag}\left( X\right)
\gamma^{0}\gamma^{1}\psi^{\prime}\left( X\right) dtd\mathbf{r}_{\bot}\ ,
\label{c3}
\end{align}
having the form
\begin{align}
& ^{\zeta}\mathcal{N}=\ ^{\zeta}CY,\ _{\zeta}\mathcal{N}=\ _{\zeta
}CY,\ Y=(V_{\perp}T)^{-1/2},\nonumber\\
& ^{\zeta}C=\left[ 2\left\vert p^{\mathrm{R}}\right\vert \left\vert \pi
_{0}(\mathrm{R})-\chi p^{\mathrm{R}}\right\vert \right] ^{-1/2},\ _{\zeta
}C=\left[ 2\left\vert p^{\mathrm{L}}\right\vert \left\vert \pi_{0}(\mathrm{L})-\chi p^{\mathrm{L}}\right\vert \right] ^{-1/2}, \label{L4}
\end{align}
where $V_{\bot}$ is the spatial volume of the $(d-1)$-dimensional hypersurface
orthogonal to the electric field direction $x$, and $T$ is the time duration
of the electric field. Both $V_{\bot}$ and $T$ are macroscopically large. The
functions $^{\;\zeta}\varphi_{n}\left( x\right) $ and $_{\;\zeta}\varphi
_{n}\left( x\right) $ are connected by the decomposition
\begin{align}
^{\;\zeta}\varphi_{n}\left( x\right) =\ _{\;+}\varphi_{n}\left( x\right)
g\left( _{+}\left\vert ^{\zeta}\right. \right) -\ _{\;-}\varphi_{n}\left(
x\right) g\left( _{-}\left\vert ^{\zeta}\right. \right), \ \ \left( \ _{\pm}\psi_{n},\ ^{\zeta}\psi_{n^{\prime}}\right)_{x}=
\delta_{n,n^{\prime}}g\left( _{\pm}\left\vert ^{\zeta}\right. \right)
\label{dec}
\end{align}
in the Klein zone.
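The Klein-zone condition (\ref{Klein}) and the asymptotic momenta of Eq. (\ref{L3}) are simple enough to be packaged into a small helper; the sketch below (with illustrative step parameters, not taken from the text) decides whether given quantum numbers admit pair creation:

```python
import math

# Sketch: Klein-zone membership, Eq. (Klein), and asymptotic momenta
# |p^L|, |p^R| of Eq. (L3) for an x-electric step. Numbers are illustrative.
m = 1.0
UL, UR = -5.0, 5.0                # Delta U = 10 > 2m: a critical step

def in_klein_zone(p0, p_perp):
    pi_perp = math.hypot(p_perp, m)
    return (UL + pi_perp <= p0 <= UR - pi_perp) and (2.0 * pi_perp <= UR - UL)

def asymptotic_momenta(p0, p_perp):
    pi_perp2 = p_perp * p_perp + m * m
    pL = math.sqrt((p0 - UL)**2 - pi_perp2)   # |p^L|, pi_0(L) = p0 - U_L
    pR = math.sqrt((p0 - UR)**2 - pi_perp2)   # |p^R|, pi_0(R) = p0 - U_R
    return pL, pR

print(in_klein_zone(0.0, 1.0))        # inside the Klein zone
print(in_klein_zone(4.9, 1.0))        # too close to the step edge
print(asymptotic_momenta(0.0, 0.0))   # |p^L| = |p^R| at p0 = 0
```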
Partial vacuum states for a given $n\notin\Omega_{3}$ are stable. The
differential mean numbers of electrons and positrons from the electron-positron
pairs created are nonzero for $n\in\Omega_{3}$ only and are defined as the average values,
\begin{align}
& N_{n}^{a}\left( \mathrm{out}\right) =\left\langle 0,\mathrm{in}\left\vert
\ ^{+}a_{n}^{\dagger}(\mathrm{out})\ ^{+}a_{n}(\mathrm{out})\right\vert
0,\mathrm{in}\right\rangle ,\nonumber\\
& N_{n}^{b}\left( \mathrm{out}\right) =\left\langle 0,\mathrm{in}\left\vert
\ _{+}b_{n}^{\dagger}(\mathrm{out})\ _{+}b_{n}(\mathrm{out})\right\vert
0,\mathrm{in}\right\rangle ,\ \label{Nab}
\end{align}
where\ $^{+}a_{n}^{\dagger}(\mathrm{out})$ and $\ ^{+}a_{n}(\mathrm{out})$ are
the creation and annihilation operators of final electrons, while $_{+}b_{n}^{\dagger}(\mathrm{out})$ and $_{+}b_{n}(\mathrm{out})$ are the creation
and annihilation operators of final positrons, respectively. We define two
vacuum vectors $\left\vert 0,\mathrm{in}\right\rangle $ and $\left\vert
0,\mathrm{out}\right\rangle $; the first of which is the\ vacuum vector for
all annihilation operators of initial particles, and the other is the vacuum
vector for all annihilation operators of final particles. The numbers
$N_{n}^{a}\left( \mathrm{out}\right) $ and $N_{n}^{b}\left( \mathrm{out}\right) $ are equal and represent the number of pairs created, $N_{n}^{\mathrm{cr}}$, which can be expressed as
\begin{equation}
N_{n}^{b}\left( \mathrm{out}\right) =N_{n}^{a}\left( \mathrm{out}\right)
=N_{n}^{\mathrm{cr}}=|g\left( _{+}|^{-}\right) |^{-2},\ \ n\in\Omega_{3}.
\label{meanN}
\end{equation}
The exact solutions of the Dirac equation are known for the following inhomogeneous electric fields.
\textrm{(i)} The Sauter electric field and its vector potential have the form
\begin{equation}
E(x)=E_{0}\cosh^{-2}(x/L_{\mathrm{S}}),\ \ A_{0}(x)=-L_{\mathrm{S}}E_{0}\tanh(x/L_{\mathrm{S}}),\ \ L_{\mathrm{S}}>0, \label{sc.5}
\end{equation}
where the parameter $L_{\mathrm{S}}$ sets the scale. The corresponding
solutions of the Dirac equation $_{\zeta}\varphi_{n}(x)$ and $^{\zeta}\varphi_{n}(x)$ and the number of particles created $N_{n}^{\mathrm{cr}}$ for
this potential were found in Ref. \cite{x-case}. This field is considered a
weakly inhomogeneous one if the following condition holds true:
\begin{equation}
eE_{0}L_{\mathrm{S}}^{2}\gg\max\left\{ 1,m^{2}/eE_{0}\right\} . \label{sc.6}
\end{equation}
The leading contribution to the total number of pairs created is formed in the
inner part of the Klein zone $\Omega_{3}$,
\begin{equation}
\left\vert p_{0}\right\vert <eE_{0}L_{\mathrm{S}}-K/L_{\mathrm{S}},\ \ \pi_{\bot}<K_{\bot}/L_{\mathrm{S}},\ \ \pi_{\bot}^{2}=m^{2}+\mathbf{p}_{\bot}^{2}, \label{sc.7}
\end{equation}
where $K$ is
\begin{equation}
K=L_{\mathrm{S}}\sqrt{(km)^{2}+\pi_{\bot}^{2}},\ \label{sc.8}
\end{equation}
while $K_{\bot}$ and $k\gtrsim1$ are given arbitrary numbers obeying the
inequalities
\begin{equation}
km\ll eE_{0}L_{\mathrm{S}},\ \ mL_{\mathrm{S}}\ll K_{\bot}\ll eE_{0}L_{\mathrm{S}}^{2}. \label{sc.8a}
\end{equation}
The differential mean number of particles created in this range is
\cite{x-case}
\begin{equation}
N_{n}^{\mathrm{cr}}\approx N_{n}^{\mathrm{as}}=e^{-\pi\tau},\ \ \tau
=L_{\mathrm{S}}\left( 2eE_{0}L_{\mathrm{S}}-\left\vert
p^{\mathrm{R}}\right\vert -\left\vert p^{\mathrm{L}}\right\vert \right)
, \label{sc.9}
where left and right asymptotic momenta $\left\vert p^{\mathrm{R}}\right\vert
$ and $\left\vert p^{\mathrm{L}}\right\vert $ are defined by Eq. (\ref{L3}).
This distribution has its maximum at zero energy, $p_{0}=0$, where it coincides
with the number of particles created by a uniform electric field,
\begin{equation}
N_{n}^{0}=\left. N_{n}^{\mathrm{cr}}\right\vert _{p_{0}=0}=\exp\left[
-\pi\lambda\right] ,\ \ \lambda=\pi_{\bot}^{2}/eE_{0}. \label{sc.10}
\end{equation}
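As a numerical check (not part of the original text), the Sauter distribution $N_{n}^{\mathrm{as}}=e^{-\pi\tau}$ with $\tau=L_{\mathrm{S}}\left(2eE_{0}L_{\mathrm{S}}-|p^{\mathrm{R}}|-|p^{\mathrm{L}}|\right)$, Eq. (\ref{sc.9}), can be compared with the uniform-field value (\ref{sc.10}) at $p_{0}=0$; the parameters below are illustrative:

```python
import math

# Sketch: Sauter-field distribution (sc.9) vs uniform-field result (sc.10)
# at p0 = 0, for an illustrative weakly inhomogeneous case eE0*LS^2 >> 1.
eE0, LS, m = 1.0, 20.0, 1.0

def N_sauter(p0, p_perp):
    pi_perp2 = p_perp * p_perp + m * m
    pL = math.sqrt((p0 + eE0 * LS)**2 - pi_perp2)   # |p^L|, Eq. (L3)
    pR = math.sqrt((p0 - eE0 * LS)**2 - pi_perp2)   # |p^R|
    tau = LS * (2.0 * eE0 * LS - pL - pR)
    return math.exp(-math.pi * tau)

def N_uniform(p_perp):                              # Eq. (sc.10)
    return math.exp(-math.pi * (p_perp * p_perp + m * m) / eE0)

print(N_sauter(0.0, 0.0), N_uniform(0.0))
```

The two values nearly coincide, illustrating the stabilization of the particle creation effect.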
\textrm{(ii)} The so-called $L$-constant field does not change within
the spatial region $L$ and is zero outside of it:
\begin{equation}
E(x)=\left\{
\begin{array}
[c]{l}
0,\ \ x\in S_{\mathrm{L}}\\
E_{0}>0,\ \ x\in S_{\mathrm{int}}\\
0,\ \ x\in S_{\mathrm{R}}
\end{array}
\right. \Longrightarrow A_{0}(x)=\left\{
\begin{array}
[c]{l}
-E_{0}x_{\mathrm{L}},\ \ x\in S_{\mathrm{L}}\\
-E_{0}x,\ \ x\in S_{\mathrm{int}}\\
-E_{0}x_{\mathrm{R}},\ \ x\in S_{\mathrm{R}}
\end{array}
\right. , \label{sc.11}
\end{equation}
where the regions are $S_{\mathrm{L}}=\left( -\infty,x_{\mathrm{L}}\right]
$,$\ S_{\mathrm{R}}=\left[ x_{\mathrm{R}},+\infty\right) $, $S_{\mathrm{int}}=\left( x_{\mathrm{L}},x_{\mathrm{R}}\right) ,$ and we chose
$x_{\mathrm{L}}=-L/2$,$\ x_{\mathrm{R}}=L/2$. The corresponding solutions of
the Dirac equation\ $_{\zeta}\varphi_{n}(x)$ and $^{\zeta}\varphi_{n}(x)$ were
found in Ref. \cite{L-field}. The $L$-constant field can be considered a
weakly inhomogeneous field if
\begin{equation}
\sqrt{eE_{0}}L\gg\max\left( 1,m^{2}/eE_{0}\right) . \label{sc.12}
\end{equation}
The leading contribution to the number of particles created is formed in the inner part $D$ of the Klein zone $\Omega_{3}$,
\begin{align}
& \sqrt{\lambda}<K_{\bot},\ \ \left\vert p_{0}\right\vert /\sqrt
{eE_{0}}\leq\sqrt{eE_{0}}L/2-K,\nonumber\\
& \sqrt{eE_{0}}L/2\gg K\gg K_{\bot}^{2}\gg\max\left\{ 1,m^{2}/eE_{0}\right\} . \label{sc.12a}
\end{align}
The leading contribution to $N_{n}^{\mathrm{cr}}$ has the form (\ref{sc.10}).
\textrm{(iii)} An exponential peak electric field has the following structure.
Its first part is increasing exponentially on the spatial interval $I=\left(
-\infty,0\right] $ and reaches its maximum value $E_{0}$ at $x=0$. The other
part decreases exponentially from the same value $E_{0}$ on the spatial
interval $II=\left( 0,+\infty\right) $. The potential $A_{0}(x)$ and the
electric field $E(x)$ have the form
\begin{equation}
E(x)=E_{0}\left\{
\begin{array}
[c]{l}
e^{k_{1}x},\ \ x\in I\\
e^{-k_{2}x},\ \ x\in II
\end{array}
\right. \Longrightarrow A_{0}(x)=E_{0}\left\{
\begin{array}
[c]{l}
k_{1}^{-1}\left( -e^{k_{1}x}+1\right) ,\ \ x\in I\\
k_{2}^{-1}\left( e^{-k_{2}x}-1\right) ,\ \ x\in II
\end{array}
\right. , \label{sc.13}
\end{equation}
where $k_{1}$ and $k_{2}$ are some positive constants. The corresponding
solutions of the Dirac equation $_{\zeta}\varphi_{n}(x)$ and $^{\zeta}\varphi_{n}(x)$ were found in Ref. \cite{x-exp}.
The case of a weakly inhomogeneous exponential peak corresponds to small
values of $k_{1}$ and $k_{2}$, and is characterized by the condition,
\begin{equation}
\min(h_{1},h_{2})\gg\max\left( 1,m^{2}/eE_{0}\right) ,\ \ h_{1,2}=2eE_{0}/k_{1,2}^{2}. \label{sc.16}
\end{equation}
The main contributions to the number of particles created $N_{n}^{\mathrm{cr}}$ are formed in the ranges of quantum numbers $\pi_{\perp}<\pi_{0}(\mathrm{L})\leq eE_{0}k_{1}^{-1}$ and $-eE_{0}k_{2}^{-1}\geq\pi_{0}(\mathrm{R})>-\pi_{\perp}$, and have the following forms:
\begin{subequations}
\begin{align}
& N_{n}^{\mathrm{cr}}\approx\exp\left[ -\frac{2\pi}{k_{1}}\left( \pi
_{0}(\mathrm{L})-\left\vert p^{\mathrm{L}}\right\vert \right) \right]
,\ \ \pi_{\perp}<\pi_{0}(\mathrm{L})\leq eE_{0}k_{1}^{-1},\label{sc.17a}\\
& N_{n}^{\mathrm{cr}}\approx\exp\left[ -\frac{2\pi}{k_{2}}\left( \left\vert
\pi_{0}(\mathrm{R})\right\vert -\left\vert p^{\mathrm{R}}\right\vert \right)
\right] ,\ \ -eE_{0}k_{2}^{-1}\geq\pi_{0}(\mathrm{R})>-\pi_{\perp}.
\label{sc.17}
\end{align}
\end{subequations}
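Deep inside the range $\pi_{\perp}<\pi_{0}(\mathrm{L})\leq eE_{0}k_{1}^{-1}$ one has $\pi_{0}(\mathrm{L})-|p^{\mathrm{L}}|\approx\pi_{\perp}^{2}k_{1}/(2eE_{0})$, so that Eq. (\ref{sc.17a}) approaches the uniform-field value (\ref{sc.10}). A minimal numerical sketch (illustrative parameters, not from the text):

```python
import math

# Sketch: peak-field distribution (sc.17a) for a weakly inhomogeneous peak
# (small k1) approaches exp(-pi*lambda), Eq. (sc.10). Numbers illustrative.
eE0, m, k1 = 1.0, 1.0, 0.05       # h1 = 2*eE0/k1**2 = 800 >> 1

def N_peak_left(pi0L, p_perp):    # Eq. (sc.17a)
    pL = math.sqrt(pi0L * pi0L - (p_perp * p_perp + m * m))
    return math.exp(-(2.0 * math.pi / k1) * (pi0L - pL))

N_edge = N_peak_left(eE0 / k1, 0.0)      # deepest point of the range
N0 = math.exp(-math.pi * m * m / eE0)    # uniform-field value, Eq. (sc.10)
print(N_edge, N0)
```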
In the examples under discussion, the intervals of growth and decay are
described by nearly the same functional form; that is, increasing and
decreasing components of the fields are almost symmetric. One can consider a
strongly asymmetric configuration of the peak field, when one of the
parameters $k$, for example, $k_{1}$, is sufficiently large, so that
\begin{equation}
eE_{0}k_{1}^{-2}\ll1,\ \ \left\vert p^{\mathrm{L}}\right\vert /k_{1}\ll1,
\label{sc.18}
\end{equation}
while the second one, $k_{2}>0$, is arbitrary. This field is a weakly
inhomogeneous one if
\begin{equation}
h_{2}\gg\max\left( 1,m^{2}/eE_{0}\right) . \label{sc.19}
\end{equation}
The leading contribution to the number of particles created is formed in the
range of quantum numbers $-eE_{0}k_{2}^{-1}\geq\pi_{0}(\mathrm{R})>-\pi
_{\perp},$ and coincides with Eq. (\ref{sc.17}). Note that
this situation can be easily transformed to the case with a large $k_{2}$ and
arbitrary $k_{1}$ by a simultaneous change $k_{1}\leftrightarrows k_{2}$ and
$\pi_{0}(\mathrm{L})\leftrightarrows-\pi_{0}(\mathrm{R})$.
For weakly inhomogeneous electric fields, the differential mean numbers of
electron-positron pairs created from the vacuum are almost constant over the
wide range of energies $p_{0}$ for any given transversal momenta
$\mathbf{p}_{\bot}$, even if these distributions are different for different
fields. Furthermore, for all exactly solvable cases, there are wide subranges
where the distributions $N_{n}^{\mathrm{cr}}$ coincide with the corresponding
distributions $N_{n}^{0}$ in a constant uniform electric field, given by Eq.
(\ref{sc.10}). We call this phenomenon the stabilization of the particle
creation effect. In these subranges of quantum numbers, $N_{n}^{\mathrm{cr}}$
hardly depend on the details of how the field grows and decays. We note that
a similar effect takes place for slowly varying electric fields; see Ref.
\cite{GavGit17} for the details.
The total number of pairs $N^{\mathrm{cr}}$ created from vacuum by an
$x$-electric potential step can be calculated by the summation over all possible
quantum numbers in the Klein zone $\Omega_{3}$
\begin{equation}
N^{\mathrm{cr}}=\sum_{n\in\Omega_{3}}N_{n}^{\mathrm{cr}}. \label{Ncr}
\end{equation}
It is proportional to the so-called transversal space-time volume
$V_{\perp}T$, $N^{\mathrm{cr}}=V_{\perp}Tn^{\mathrm{cr}}$, where $d$ labels
the space-time dimensions, and the corresponding densities $n^{\mathrm{cr}}$
have the form
\begin{equation}
n^{\mathrm{cr}}=\frac{J_{(d)}}{(2\pi)^{d-1}}\int_{\Omega_{3}}dp_{0}d\mathbf{p}_{\bot}N_{n}^{\mathrm{cr}}. \label{sc.20}
\end{equation}
In fact, $n^{\mathrm{cr}}$ is the total flux density of created particles. In
the latter expression the summations over the energies and transversal momenta
were transformed into integrals, and the summation over spin projections was
fulfilled, $J_{(d)}=2^{\left[ d/2\right] -1}$ (the square brackets mean the
integer part of $d/2$). In weakly inhomogeneous fields, the magnitude of a
potential step\emph{ }$\Delta U$ is large and can be used as a large
parameter. The integral on the right-hand side can be approximated by an
integral over a subrange $D$ that gives the dominant contribution to the flux
density of created particles,
\begin{equation}
n^{\mathrm{cr}}\approx\tilde{n}^{\mathrm{cr}}=\frac{J_{(d)}}{(2\pi)^{d-1}}\int_{D}dp_{0}d\mathbf{p}_{\bot}N_{n}^{\mathrm{cr}}.
\label{sc.21}
\end{equation}
The exact form of the subrange $D$ for each particular field must be
determined separately. The dominant contributions $\tilde{n}^{\mathrm{cr}}$
are proportional to the magnitude of potential steps (and then the maximum
increments of a particle energy), which, in general, differs for different
fields and, for example, has the following forms in the exactly solvable
cases $\mathrm{(i)}$, $\mathrm{(ii)}$, and $\mathrm{(iii)}$:
\begin{align}
& \mathrm{(i)}\text{\ }\Delta U_{S}=2eE_{0}L_{\mathrm{S}}\text{ \textrm{for
the Sauter field}},\nonumber\\
& \mathrm{(ii)}\text{\ }\Delta U_{L}=eE_{0}L\text{ }\mathrm{for\ the
L\mathrm{-constant\ field},\nonumber\\
& \mathrm{(iii)}\text{\ }\Delta U_{P}=eE_{0}\left( k_{1}^{-1}+k_{2}^{-1}\right) \ \mathrm{for\ the\ peak\ field}. \label{sc.22}
\end{align}
We note that $\Delta U_{P}$ corresponds to the case of a strongly asymmetric
exponential field configuration at $k_{1}^{-1}\rightarrow0$\textrm{ }(or at
$k_{2}^{-1}\rightarrow0$).
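Because $\partial_{x}U(x)=eE(x)$, the magnitudes (\ref{sc.22}) are simply $e\int E(x)\,dx$ over the whole axis; the following sketch (illustrative parameter values, not from the text) verifies this by direct numerical integration:

```python
import math

# Sketch: Delta U of Eq. (sc.22) as e * integral of E(x) for the three fields.
eE0, LS, L, k1, k2 = 1.0, 10.0, 10.0, 0.2, 0.1   # illustrative values

def integrate(f, a, b, n=100000):                # midpoint rule
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

dU_sauter = integrate(lambda x: eE0 / math.cosh(x / LS)**2, -400.0, 400.0)
dU_const = integrate(lambda x: eE0, -L / 2, L / 2)
dU_peak = (integrate(lambda x: eE0 * math.exp(k1 * x), -400.0, 0.0)
           + integrate(lambda x: eE0 * math.exp(-k2 * x), 0.0, 400.0))
print(dU_sauter, dU_const, dU_peak)
# compare with 2*eE0*LS = 20, eE0*L = 10, eE0*(1/k1 + 1/k2) = 15, Eq. (sc.22)
```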
In terms of the introduced quantities (\ref{sc.22}), the densities $\tilde
{n}^{\mathrm{cr}}$ in the exactly solvable cases under consideration have the forms \cite{x-case,L-field,x-exp},
\begin{align}
& \mathrm{(i)}\text{\ }\tilde{n}^{\mathrm{cr}}=r^{\mathrm{cr}}\frac{\Delta
U_{S}}{2eE_{0}}\delta\ \text{\textrm{for the Sauter field}},\nonumber\\
& \mathrm{(ii)}\text{\ }\tilde{n}^{\mathrm{cr}}=r^{\mathrm{cr}}\frac{\Delta
U_{L}}{eE_{0}}\ \mathrm{for\ the\ }L\mathrm{-constant\ field},\nonumber\\
& \mathrm{(iii)}\ \tilde{n}^{\mathrm{cr}}=r^{\mathrm{cr}}\frac{\Delta U_{P}}{eE_{0}}G\left( \frac{d}{2},\pi\frac{m^{2}}{eE_{0}}\right)
\ \mathrm{for\ the\ peak\ field}, \label{sc.23}
\end{align}
where
\begin{align}
& r^{\mathrm{cr}}=\frac{J_{(d)}\left( eE_{0}\right) ^{d/2}}{(2\pi)^{d-1}}\exp\left\{ -\pi\frac{m^{2}}{eE_{0}}\right\} ,\ G\left( \alpha,x\right)
=\int_{1}^{\infty}\frac{ds}{s^{\alpha+1}}e^{-x(s-1)}=e^{x}x^{\alpha}\Gamma(-\alpha,x),\nonumber\\
& \delta=\int_{0}^{\infty}dt\ t^{-1/2}\left( t+1\right) ^{-(d+1)/2}\exp\left( -t\pi\frac{m^{2}}{eE_{0}}\right) =\sqrt{\pi}\Psi\left( \frac{1}{2},\frac{2-d}{2};\pi\frac{m^{2}}{eE_{0}}\right) . \label{sc.24}
\end{align}
Here, $\Gamma(-\alpha,x)$ is the incomplete gamma function and $\Psi\left(
a,b;x\right) $ is the confluent hypergeometric function\footnote{In Ref.
\cite{x-case} the result for a Sauter field was obtained under an unnecessary
assumption that $\lambda>1$. However, the final form of $\delta=\sqrt{\pi}\Psi\left( \frac{1}{2},\frac{2-d}{2};\pi\frac{m^{2}}{eE}\right) $ is given
correctly for an arbitrary $m^{2}/eE$ in Ref. \cite{x-case}. We have to correct
its derivation as follows.\ Note that\textrm{ }for any $\lambda$ one can see
that $N_{n}^{\mathrm{as}}$, given by Eq.~(\ref{sc.9}), is exponentially small
if $km\sim\pi_{\bot}$, where $k$ is any given number satisfying inequality
$k\ll\pi mL_{\mathrm{S}}/2$. Therefore, the range $\pi_{\bot}\ll km$ is of
interest. In this range, the approximation $\tau\approx
eEL_{\mathrm{S}}^{2}\pi_{\perp}^{2}\left[ (eEL_{\mathrm{S}})^{2}-p_{0}^{2}\right] ^{-1}$ holds true. Taking into account the relation between $\tau$ and
$p_{0}$, one can find a correct form of $\delta$ that is presented in
Eq.~(\ref{sc.24}).}.
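As a sanity check on Eq. (\ref{sc.24}), the integral definitions of $G(\alpha,x)$ and $\delta$ can be compared numerically with the quoted closed forms. The following Python sketch is our own illustration (it assumes SciPy; the function names are ad hoc) and uses the identity $e^{x}x^{\alpha}\Gamma(-\alpha,x)=e^{x}E_{\alpha+1}(x)$ for integer $\alpha$, where $E_{n}$ is the exponential integral:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expn, hyperu

def G(alpha, x):
    # G(alpha, x) = int_1^inf ds s^{-(alpha+1)} e^{-x(s-1)}  [Eq. (sc.24)]
    val, _ = quad(lambda s: s ** (-(alpha + 1.0)) * np.exp(-x * (s - 1.0)),
                  1.0, np.inf)
    return val

def delta_sauter(z, d):
    # delta = int_0^inf dt t^{-1/2} (1+t)^{-(d+1)/2} e^{-z t},  z = pi m^2/(e E_0)
    val, _ = quad(lambda t: t ** -0.5 * (1.0 + t) ** (-(d + 1.0) / 2.0)
                  * np.exp(-z * t), 0.0, np.inf)
    return val

x, z, d = 1.7, 0.9, 4
# Closed forms: G(alpha, x) = e^x x^alpha Gamma(-alpha, x) = e^x E_{alpha+1}(x)
# for integer alpha; delta = sqrt(pi) Psi(1/2, (2-d)/2; z), Psi = scipy's hyperu.
print(G(2.0, x), np.exp(x) * expn(3, x))
print(delta_sauter(z, d), np.sqrt(np.pi) * hyperu(0.5, (2.0 - d) / 2.0, z))
```

Both pairs of numbers should agree to quadrature accuracy; this only verifies the stated special-function representations, not the physics leading to them.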
Equating the densities $\tilde{n}^{\mathrm{cr}}$ for a Sauter-like field
$\mathrm{(i)}$ and for the peak field $\mathrm{(iii)}$ to the density
$n^{\mathrm{cr}}$ for the $L$-constant field, we find effective lengths
$L_{\mathrm{eff}}$ of the interval of the field action for both cases,
\begin{align}
& \mathrm{(i)}\text{\ }L_{\mathrm{eff}}=L_{\mathrm{S}}\delta,\nonumber\\
& \mathrm{(iii)}\text{ }L_{\mathrm{eff}}=\left( k_{1}^{-1}+k_{2}^{-1}\right) G\left( \frac{d}{2},\pi\frac{m^{2}}{eE_{0}}\right) .
\label{sc.25}
\end{align}
Note that the effective length $L_{\mathrm{eff}}$ for a strongly asymmetric
exponential field configuration is given by the second line in Eq.
(\ref{sc.25}) as $k_{1}^{-1}\rightarrow0$ (or $k_{2}^{-1}\rightarrow0$). It is
obvious that $L_{\mathrm{eff}}=L$ for the $L$-constant field. One can say that
the Sauter field, the peak electric field, and the asymmetric exponential
field with the same $L_{\mathrm{eff}}$ are equivalent to the $L$-constant
field with respect to the pair production. Note that the factors $G$ and
$\delta$ in Eq. (\ref{sc.24}) for weak $\left( m^{2}/eE_{0}\gg1\right) $ and
strong $\left( m^{2}/eE_{0}\ll1\right) $ electric fields can be approximated
as
\begin{align}
G\left( \frac{d}{2},\pi\frac{m^{2}}{eE_{0}}\right) & \approx\frac{eE_{0}}{\pi m^{2}},\ \delta\approx\frac{\sqrt{eE_{0}}}{m},\text{ }\frac{m^{2}}{eE_{0}}\gg1;\nonumber\\
G\left( \frac{d}{2},\pi\frac{m^{2}}{eE_{0}}\right) & \approx\frac{2}{d},\ \delta\approx\frac{\sqrt{\pi}\Gamma(d/2)}{\Gamma(d/2+1/2)},\ \frac{m^{2}}{eE_{0}}\ll1. \label{sc.26}
\end{align}
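The strong-field limits quoted in Eq. (\ref{sc.26}) follow from elementary integrals: $G(d/2,x)\rightarrow 2/d$ as $x\rightarrow0$, and $\delta\rightarrow B(1/2,d/2)$, Euler's beta function. A minimal numerical check (our own sketch, assuming SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def G_strong(alpha):
    # x -> 0 limit of G(alpha, x): int_1^inf s^{-(alpha+1)} ds = 1/alpha
    return quad(lambda s: s ** (-(alpha + 1.0)), 1.0, np.inf)[0]

def delta_strong(d):
    # z -> 0 limit of delta: Euler's beta function B(1/2, d/2)
    return quad(lambda t: t ** -0.5 * (1.0 + t) ** (-(d + 1.0) / 2.0),
                0.0, np.inf)[0]

d = 4
print(G_strong(d / 2.0), 2.0 / d)  # strong-field limit of G in Eq. (sc.26)
print(delta_strong(d), np.sqrt(np.pi) * gamma(d / 2.0) / gamma(d / 2.0 + 0.5))
```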
One can compare the scale lengths for the cases \textrm{(i)}, \textrm{(ii)},
and \textrm{(iii)}, given by Eqs. (\ref{sc.6}), (\ref{sc.12}), and
(\ref{sc.18}), for the same $E_{0}$ and energy increments $\Delta
U_{S}=\Delta U_{L}=\Delta U_{P}$ (in this case, $2L_{\mathrm{S}}=L=k_{1}
^{-1}+k_{2}^{-1}$). The condition Eq. (\ref{sc.12}) is stronger than Eqs.
(\ref{sc.6}) and (\ref{sc.18}) if the fields are weak, whereas they are equivalent
if the fields are strong. For this reason, defining the scale $\Delta
l_{\mathrm{st}}^{\mathrm{m}}$ in general terms, we choose the form (\ref{sc.2}).
Initial $\left\vert 0,\mathrm{in}\right\rangle $ and final $\left\vert
0,\mathrm{out}\right\rangle $ vacua do not coincide. In the general case, the
probability for a vacuum to remain a vacuum $P_{\mathrm{v}}=\left\vert
\langle0,\mathrm{out}|0,\mathrm{in}\rangle\right\vert ^{2}$ can be expressed
via the distribution $N_{n}^{\mathrm{cr}}$ as
\begin{equation}
P_{\mathrm{v}}=\prod_{n\in\Omega_{3}}\left( 1-N_{n}^{\mathrm{cr}}\right) ,
\label{Pv}
\end{equation}
whereas, for weakly inhomogeneous electric fields, it can be written
\begin{equation}
P_{\mathrm{v}}\approx\prod_{n\in D}\left( 1-N_{n}^{\mathrm{cr}}\right) ,
\label{Papp}
\end{equation}
where $D$ is the same subrange that gives the dominant contribution to
the flux density of created particles (\ref{sc.21}). The probability
$P_{\mathrm{v}}$ is given by similar forms for the Sauter field $\mathrm{(i)}$, the $L$-constant field $\mathrm{(ii)}$, and the peak field $\mathrm{(iii)}$, respectively, with the corresponding $N^{\mathrm{cr}}$,
\begin{align}
& P_{\mathrm{v}}=\exp\left( -\mu V_{\perp}T\tilde{n}^{\mathrm{cr}}\right)
,\ \mu=\sum_{l=0}^{\infty}\frac{\epsilon_{l+1}}{\left( l+1\right) ^{d/2}
}\exp\left( -l\pi\frac{m^{2}}{eE_{0}}\right) ,\nonumber\\
& \mathrm{(i)}\ \ \epsilon_{l}=\epsilon_{l}^{S}=\delta^{-1}\sqrt{\pi}
\Psi\left( \frac{1}{2},\frac{2-d}{2};l\pi\frac{m^{2}}{eE_{0}}\right)
,\nonumber\\
& \mathrm{(ii)}\ \ \epsilon_{l}=\epsilon_{l}^{L}=1,\nonumber\\
& \mathrm{(iii)}\ \ \epsilon_{l}=\epsilon_{l}^{P}=G\left( \frac{d}{2}
,l\pi\frac{m^{2}}{eE_{0}}\right) \left[ G\left( \frac{d}{2},\pi\frac{m^{2}}
{eE_{0}}\right) \right] ^{-1}. \label{sc.27}
\end{align}
In the case of a weak field ($m^{2}/eE_{0}\gg1$), $\epsilon_{l}^{S}\approx
l^{-1/2}$ for the Sauter field, $\epsilon_{l}^{P}\approx l^{-1}$ for the peak
field, and $\exp(-\pi m^{2}/eE_{0})\ll1$. Then $\mu\approx1$ for all the cases
in Eq. (\ref{sc.27}) and we have a universal relation $N^{\mathrm{cr}}
\approx\ln P_{\mathrm{v}}^{-1}$. In the case of a strong field ($m^{2}
/eE_{0}\ll1$), all the terms with the different $\epsilon_{l}^{S}$ and
$\epsilon_{l}^{P}$ contribute significantly to the sum in Eq. (\ref{sc.27}), if
$l\pi m^{2}/eE_{0}\lesssim1$, and the quantities $\mu$ for the Sauter and peak
fields differ essentially from the case of the $L$-constant field. Consequently,
in this situation, one cannot derive a universal relation between
$P_{\mathrm{v}}$ and $\tilde{n}^{\mathrm{cr}}$ from particular cases given by
Eq. (\ref{sc.27}). In addition, it should be noted that in the case of a
strong field, when known semiclassical approaches are not applicable, the
probability $P_{\mathrm{v}}$ (unlike the total number $N^{\mathrm{cr}}$) no
longer has a direct relation to the vacuum mean values of the physical quantities
discussed above. Therefore, to study a universal behavior of the vacuum
instability in weakly inhomogeneous fields one should derive first a universal
form for the total flux density $\tilde{n}^{\mathrm{cr}}$.
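To illustrate the statements above, the sums defining $\mu$ in Eq. (\ref{sc.27}) can be evaluated numerically. The sketch below (our own illustration, assuming SciPy and $d=4$) shows that $\mu\approx1$ in the weak-field regime for both the $L$-constant and Sauter cases:

```python
import numpy as np
from scipy.special import hyperu

d = 4

def mu_sum(eps, z, lmax=200):
    # mu = sum_{l>=0} eps_{l+1} (l+1)^{-d/2} exp(-l z), z = pi m^2/(e E_0)  [Eq. (sc.27)]
    l = np.arange(lmax, dtype=float)
    return float(np.sum(eps(l + 1.0) * (l + 1.0) ** (-d / 2.0) * np.exp(-l * z)))

def eps_L(l):
    # L-constant field: eps_l = 1
    return np.ones_like(l)

def eps_S(l, z):
    # Sauter field: eps_l = delta^{-1} sqrt(pi) Psi(1/2, (2-d)/2; l z)
    delta = np.sqrt(np.pi) * hyperu(0.5, (2.0 - d) / 2.0, z)
    return np.sqrt(np.pi) * hyperu(0.5, (2.0 - d) / 2.0, l * z) / delta

z_weak = 10.0  # weak field, m^2/(e E_0) >> 1
print(mu_sum(eps_L, z_weak))                       # close to 1
print(mu_sum(lambda l: eps_S(l, z_weak), z_weak))  # close to 1
```

For small $z$ the terms with $lz\lesssim1$ all contribute, and the two cases visibly diverge from each other, in line with the discussion above.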
\section{Universal behavior of the flux density of created pairs in a strong
electric field \label{S3}}
Unlike the case of uniform time-dependent electric fields, in constant
inhomogeneous electric fields, there is a critical surface in space of
particle momenta, which separates the Klein zone $\Omega_{3}$, defined by
inequality (\ref{Klein}), from the adjacent ranges $\Omega_{2}$ and $\Omega_{4}$
(we use notation defined in Ref. \cite{x-case}). In the ranges $\Omega_{2}$ and
$\Omega_{4}$, the work of an electric field is sufficient to ensure the total
reflection for electrons and positrons, respectively, but is not sufficient to
produce pairs from vacuum. Accordingly, it is expected that for any
nonpathological field configuration, the pair creation vanishes close to this
critical surface, so that $N_{n}^{\mathrm{cr}}\rightarrow0$ if $n$ tends to
the boundary with either the range $\Omega_{2}$ ($\left\vert p^{\mathrm{R}
}\right\vert \rightarrow0$) or the range $\Omega_{4}$ ($\left\vert
p^{\mathrm{L}}\right\vert \rightarrow0$)
\begin{equation}
N_{n}^{\mathrm{cr}}\sim\left\vert p^{\mathrm{R}}\right\vert \rightarrow
0,\ \ N_{n}^{\mathrm{cr}}\sim\left\vert p^{\mathrm{L}}\right\vert
\rightarrow0,\ \ \forall\pi_{\bot}\neq0. \label{Nb}
\end{equation}
This is exactly the behavior we see for all field configurations that are
known as exactly solvable models \cite{x-case,L-field,x-exp}.
Absolute values of the asymptotic momenta $\left\vert p^{\mathrm{L}}\right\vert $ and $\left\vert p^{\mathrm{R}}\right\vert $ are determined by
the quantum numbers $p_{0}$ and $p_{\bot}$; see Eq. (\ref{L3}). This fact
imposes a certain relation between both quantities. In particular, one can see
that $d\left\vert p^{\mathrm{L}}\right\vert /d\left\vert p^{\mathrm{R}
}\right\vert <0,$ and at any given $p_{\bot}$, these quantities are restricted
inside the range $\Omega_{3}$
\begin{equation}
0\leq\left\vert p^{\mathrm{R/L}}\right\vert \leq p^{\mathrm{\max}
},\ \ p^{\mathrm{\max}}=\sqrt{\Delta U\left( \Delta U-2\pi_{\bot}\right) }.
\label{d8}
\end{equation}
It implies that
\begin{equation}
0\leq\left\vert \left\vert p^{\mathrm{L}}\right\vert -\left\vert
p^{\mathrm{R}}\right\vert \right\vert \leq p^{\mathrm{\max}}. \label{g8}
\end{equation}
Then for all $p_{0}$ and $p_{\bot}$ of the range $\Omega_{3}$, the numbers
$N_{n}^{\mathrm{cr}}$ are small and tend to zero if the Klein zone shrinks to
zero
\begin{equation}
N_{n}^{\mathrm{cr}}\sim\left\vert p^{\mathrm{R}}p^{\mathrm{L}}\right\vert
\rightarrow0\ \ \mathrm{if}\ \ p^{\mathrm{\max}}\rightarrow0. \label{tiny}
\end{equation}
In this case, the probability of a pair creation with quantum numbers $n$,
$P(+-|0)_{n,n}$, can be approximated by the mean number $N_{n}^{\mathrm{cr}}$
as
\begin{equation}
P(+-|0)_{n,n}=\frac{N_{n}^{\mathrm{cr}}}{1-N_{n}^{\mathrm{cr}}}P_{\mathrm{v}}
\approx N_{n}^{\mathrm{cr}}. \label{pr}
\end{equation}
The total number of created particles, $N^{\mathrm{cr}}$, given by the sum
(\ref{Ncr}) over such a tiny Klein zone, is small too. It tends to zero if the
magnitude of the potential step tends to a critical point
\begin{equation}
N^{\mathrm{cr}}\rightarrow0\ \ \mathrm{if}\ \ \Delta U\rightarrow2m.
\label{cr_p}
\end{equation}
We see that $N^{\mathrm{cr}}\ll1$ near the critical point, so the probability of a
vacuum to remain a vacuum $P_{\mathrm{v}}$, given by a general form (\ref{Pv}),
can be approximated by the total number of created particles as $P_{\mathrm{v}}\approx1-N^{\mathrm{cr}}$. On the other hand, this probability can be
represented via the imaginary part of a one-loop effective action $S$ by the
seminal Schwinger formula
\begin{equation}
P_{\mathrm{v}}=\exp\left( -2\mathrm{Im}S\right) . \label{np1a}
\end{equation}
Taking into account that $P_{\mathrm{v}}\approx1-2\mathrm{Im}S$, one finds a
relation
\begin{equation}
2\mathrm{Im}S\approx N^{\mathrm{cr}}\ \ \mathrm{if\ \ }N^{\mathrm{cr}}
\ll1\text{.} \label{np1b}
\end{equation}
Given this relation, we can confirm the behavior (\ref{cr_p}) by the
results \cite{cr-regime1,cr-regime2} recently obtained for $\mathrm{Im}S$ in
inhomogeneous $x$-potential electric steps of an arbitrary configuration that
decay asymptotically with a power law $\sim E_{0}\left( kx\right) ^{-a}$,
$a\geq2$ or vanish at a finite point $\sim E_{0}\left( k\left\vert
x-x_{0}\right\vert \right) ^{b}$, $b>0$ (by taking the limit $a\rightarrow
\infty$ or $b\rightarrow\infty$, an exponentially decaying field is recovered),
where $E_{0}$ is a characteristic field strength scale and $k$ is a
characteristic length scale of the inhomogeneous field. It was shown by using
semiclassical worldline instanton methods \cite{instantons} in the weak-field
($m^{2}/eE_{0}>1$) critical regime \cite{cr-regime1} and by an analysis of
solutions of the Klein-Gordon and Dirac equations in the immediate vicinity of
the critical point for an arbitrary peak field strength \cite{cr-regime2} that
near criticality pair production vanishes, exhibiting universal properties
similar to those of continuous phase transitions.
In what follows, we consider a completely different type of universality,
believing that the Klein zone is quite extensive, so that the total number of
created pairs itself can be considered a large parameter. We face such
situations in astrophysical and condensed matter problems where the electric
field is strong. In these scenarios the numbers of the pairs created can reach
their limiting values, $N_{n}^{\mathrm{cr}}\rightarrow1$, and the total number
of pairs created, $N^{\mathrm{cr}}$, is not a small value anymore. For the
weakly inhomogeneous fields, this number is proportional to the large parameter
$L_{\mathrm{eff}}/\Delta l_{\mathrm{st}}$. For an arbitrary weakly inhomogeneous
strong electric field, one can derive in the leading-term approximation a
universal form for the total density of created pairs.
As explained above, the contributions to the total number of created
particles from the part of the Klein zone in the vicinity of the critical
point are small and can be neglected in the following approximation. The main
contribution to $N^{\mathrm{cr}}$ is formed in some inner subrange
$D(x)$ of the Klein zone where transversal momentum $\pi_{\perp}$ and energy
$p_{0}$ are small enough. This inner subrange $D(x)$ can be described as
\begin{equation}
D(x):\ \left\vert \pi_{0}(x)\right\vert \gg\pi_{\perp},\ \ \pi_{0}
(x)=p_{0}-U(x). \label{np.2}
\end{equation}
In $D(x)$, the effective particle energy is primarily determined by an
increment of energy $U(x)-U_{\mathrm{L}}$ or $U_{\mathrm{R}}-U(x)$ on the
spatial intervals $\Delta x=x-x_{\mathrm{L}}$ or $\Delta x=x_{\mathrm{R}}-x$,
respectively. It should be noted that $D(x)\subset D(x^{\prime})$ if
$x^{\prime}>x$.
Suppose that the electric field neither grows nor decays abruptly at the edges
of some finite interval, that is, the field slowly weakens at $x\rightarrow
\pm\infty$, and one of the points $x_{\mathrm{L}}$ or $x_{\mathrm{R}}$ or both
are infinitely distant from the origin, $x_{\mathrm{L}}\rightarrow-\infty$ and
$x_{\mathrm{R}}\rightarrow\infty$. In this case, the contributions to the flux
density $\tilde{n}^{\mathrm{cr}}$, given by Eq. (\ref{sc.21}), from the
regions $\left( x_{\mathrm{L}},x_{\mathrm{eff}}^{\text{\textrm{in}}}\right]
$ and $\left( x_{\mathrm{eff}}^{\text{\textrm{out}}},x_{\mathrm{R}}\right) $
are exponentially small and can be disregarded, since the electric field in
these regions is very weak in comparison with the maximum value of the peak
field $E_{0}$, $E(x_{\mathrm{eff}}^{\text{\textrm{in}}}),E(x_{\mathrm{eff}
}^{\text{\textrm{out}}})\ll E_{0}$. Therefore, in general, it is sufficient to
consider only the finite interval $\left( x_{\mathrm{eff}}^{\text{\textrm{in}
}},x_{\mathrm{eff}}^{\text{\textrm{out}}}\right] $. We can divide this
interval into $M$ intervals,
\begin{align}
& \ \Delta l_{i}=x_{i+1}-x_{i},\ \ i=1,\ldots,M,\ \nonumber\\
& \ \sum_{i=1}^{M}\Delta l_{i}=x_{\mathrm{eff}}^{\text{\textrm{out}}}-x_{\mathrm{eff}}^{\text{\textrm{in}}},\ \ x_{1}=x_{\mathrm{eff}}^{\text{\textrm{in}}},\ x_{M+1}=x_{\mathrm{eff}}^{\text{\textrm{out}}}\text{,} \label{np.3}
\end{align}
in such a way that Eqs.~(\ref{sc.1}) and (\ref{sc.2}) hold true for each of
these intervals. Let us show that this allows us to treat the electric field as approximately
uniform in each interval $\Delta l_{i}$, $\overline{E(x)}\approx \overline{E}
(x_{i})$\ for $x\in \left( x_{i},x_{i+1}\right] $\ despite the fact that at
the beginning of each interval $\Delta l_{i}$\ the electric field $E(x)$\
changes abruptly. Note that it is possible to use, for example, sharp
exponential steps for the regularization of rectangular steps (see the details in
Ref. \cite{x-exp}) if the length of the interval where this change occurs is
significantly smaller than the length of each corresponding interval $\Delta
l_{i}$. However, as we can see, it is not necessary.
In the case of the strong $L$-constant field, $m^{2}\lesssim eE_{0}$, and
the large parameter $\sqrt{eE_{0}}L$, a rough estimation of the
next-to-leading term for the flux density of created pairs shows (see
details in Ref. \cite{L-field}) that it produces a small factor of
the order of $\left( \sqrt{eE_{0}}L\right) ^{-1}$, i.e.,
\begin{equation}
n^{\mathrm{cr}}=\tilde{n}^{\mathrm{cr}}\left[ 1+\frac{O(K)}{\sqrt{eE_{0}}L}\right] , \label{next}
\end{equation}
where $\tilde{n}^{\mathrm{cr}}$\ is given by expression (ii) in Eq. (\ref{sc.23}). It
is clear that the abrupt change of the $L$-constant field at $x_{\mathrm{L}
}=-L/2$ and $x_{\mathrm{R}}=L/2$\ entails considerable oscillations in the
distributions. Comparing the case of the $L$-constant field with other
examples of the exactly solvable cases \cite{x-case,x-exp}, we see that it
presents the roughest estimate of the neglected contributions for weakly
inhomogeneous potential steps. In particular, considering dominant
contributions to the flux density of pairs created by a very asymmetric
exponential peak (a field that grows from zero to its maximum value very rapidly and then experiences a smooth decay with the large effective length $L_{\mathrm{
eff}}$) one can see that it does not depend on the details of the field
growth for the case of a strong field \cite{x-exp}. Then we can conclude that
this abrupt change cannot significantly influence the total value of $\tilde{
n}^{\mathrm{cr}}$ as $N_{n}^{\mathrm{cr}}\leq 1$ for fermions.
In each interval $\Delta l_{i}$, $\overline{E(x)}\approx \overline{E}(x_{i})$
for $x\in \left( x_{i},x_{i+1}\right] $, assuming that $L=\Delta l_{i}$,
we can approximate the partial contribution to the flux density due to this
interval, $\Delta n_{i}^{\mathrm{cr}}$, as $\Delta n_{i}^{\mathrm{cr}}=\Delta \tilde{n}_{i}^{\mathrm{cr}}+O(K_{i})$, where $K_{i}$ is any given
number satisfying the condition
\begin{equation*}
\sqrt{e\overline{E}(x_{i})}\Delta l_{i}\gg K_{i}\gg \max \left\{ 1,m^{2}/e
\overline{E}(x_{i})\right\} .
\end{equation*}
Then, using Eqs. (\ref{sc.22}) and (\ref{sc.23}) for the $L$-constant
field,\ we have for $\tilde{n}^{\mathrm{cr}}$\ that
\begin{align}
& \tilde{n}^{\mathrm{cr}}=\sum_{i=1}^{M}\Delta \tilde{n}_{i}^{\mathrm{cr}},\
\ \Delta \tilde{n}_{i}^{\mathrm{cr}}\approx \frac{J_{(d)}}{(2\pi )^{d-1}}
\int_{ex_{i}\overline{E}(x_{i})}^{e\left( x_{i}+\Delta
l_{i}\right)\overline{E}(x_{i}) }dp_{0}\int_{\sqrt{\lambda _{i}}<K_{\bot }^{(i)}}d\mathbf{p}
_{\bot }N_{n}^{(i)}, \notag \\
& N_{n}^{(i)}=e^{-\pi \lambda _{i}},\ \ \lambda _{i}=\pi _{\bot }^{2}/e
\overline{E}(x_{i}), \label{np.4}
\end{align}
where $K_{\bot }^{(i)}$ are any given numbers satisfying the condition
\begin{equation*}
K_{i}\gg \left[ K_{\bot }^{(i)}\right] ^{2}\gg \max \left\{ 1,m^{2}/e\overline{E}(x_{i})\right\} .
\end{equation*}
We can formally represent the variable $p_{0}$ in the latter expression as
\begin{equation}
p_{0}=U(x),\ U(x)=\int_{x_{\mathrm{L}}}^{x}dx^{\prime}\ eE(x^{\prime
})+U_{\mathrm{L}},\ \ dp_{0}=eE(x)dx. \label{np.5}
\end{equation}
Then neglecting small contributions to the integral (\ref{np.4}), we find the
following universal form for the flux density of created pairs in the
leading-term approximation for a weakly inhomogeneous, but otherwise arbitrary
strong electric field
\begin{equation}
\tilde{n}^{\mathrm{cr}}\approx\frac{J_{(d)}}{(2\pi)^{d-1}}\int_{x_{\mathrm{L}}}^{x_{\mathrm{R}}}dx\ eE(x)\int d\mathbf{p}_{\bot}N_{n}^{\mathrm{uni}},\ \ N_{n}^{\mathrm{uni}}=\exp\left[ -\pi\frac{\pi_{\bot}^{2}}{eE(x)}\right] . \label{np.6}
\end{equation}
The quantity $N_{n}^{\mathrm{uni}}$ has a universal form which can be used to
calculate any total characteristic of the pair creation effect. One can
integrate the latter expression over $d\mathbf{p}_{\bot}$ to obtain the final
form,
\begin{equation}
\tilde{n}^{\mathrm{cr}}\approx\frac{J_{(d)}}{(2\pi)^{d-1}}\int_{x_{\mathrm{L}}}^{x_{\mathrm{R}}}dx\ \left[ eE(x)\right] ^{d/2}\exp\left[ -\pi\frac{m^{2}}{eE(x)}\right] . \label{np.7}
\end{equation}
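As a consistency check (our own numerical sketch, assuming SciPy and units $eE_{0}=1$; the function names are ad hoc), the universal form (\ref{np.7}) applied to the Sauter profile $E(x)=E_{0}\cosh^{-2}(x/L_{\mathrm{S}})$ reproduces the exact result (i) of Eq. (\ref{sc.23}), $\tilde{n}^{\mathrm{cr}}=r^{\mathrm{cr}}L_{\mathrm{S}}\delta$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyperu

# Units with eE0 = 1; J = 2 fermion spin degrees of freedom at d = 4.
eE0, m, LS, d, J = 1.0, 0.7, 25.0, 4, 2
z = np.pi * m ** 2 / eE0

def flux_density(profile, xlim):
    # Eq. (np.7): n~ = J/(2 pi)^{d-1} * int dx [eE(x)]^{d/2} exp(-pi m^2/eE(x)),
    # with eE(x) = eE0 * profile(x), 0 < profile <= 1.
    f = lambda x: (eE0 * profile(x)) ** (d / 2.0) \
        * np.exp(-np.pi * m ** 2 / (eE0 * profile(x)))
    return J * quad(f, -xlim, xlim, limit=200)[0] / (2.0 * np.pi) ** (d - 1)

sauter = lambda x: np.cosh(x / LS) ** -2.0
r_cr = J * eE0 ** (d / 2.0) / (2.0 * np.pi) ** (d - 1) * np.exp(-z)
delta = np.sqrt(np.pi) * hyperu(0.5, (2.0 - d) / 2.0, z)
print(flux_density(sauter, 100.0), r_cr * LS * delta)  # both sides of (sc.23)(i)
```

The agreement here is exact (the substitution $t=\sinh^{2}(x/L_{\mathrm{S}})$ maps the $x$ integral onto the integral representation of $\delta$), so the check is of the numerics only.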
These universal forms can be derived for bosons as well, if we restrict the
class of external electric fields, namely, to fields that have no abrupt
variations of $E(x)$ that could produce a significant growth of $N_{n}^{\mathrm{cr}}$ on a finite spatial interval; i.e., we have to include in the
range $D$ only the subranges where $N_{n}^{\mathrm{cr}}\leq1$. In this case,
the universal forms for bosons are the same, Eqs. (\ref{np.6}) and
(\ref{np.7}), with $J_{(d)}=1$ for scalar particles and $J_{(d)}=3$ for vector ones.
Using the identity $-\ln\left( 1-N_{n}^{\mathrm{cr}}\right) =N_{n}^{\mathrm{cr}}+\left( N_{n}^{\mathrm{cr}}\right) ^{2}/2+\ldots$, in the same
manner we can derive a universal form of the probability of a vacuum to remain a
vacuum $P_{\mathrm{v}}$ defined for fermions by Eq. (\ref{Pv}). First, we get
\begin{equation}
P_{\mathrm{v}}\approx\exp\left\{ -\frac{V_{\bot}TJ_{(d)}}{(2\pi)^{d-1}}
\sum_{l=1}^{\infty}\int_{x_{\mathrm{L}}}^{x_{\mathrm{R}}}dx\ eE(x)\int
d\mathbf{p}_{\bot}\left( N_{n}^{\mathrm{uni}}\right) ^{l}\right\} .
\label{np.8}
\end{equation}
After integration over $\mathbf{p}_{\bot}$, we finally obtain
\begin{equation}
P_{\mathrm{v}}\approx\exp\left\{ -\frac{V_{\bot}TJ_{(d)}}{(2\pi)^{d-1}}
\sum_{l=1}^{\infty}\int_{x_{\mathrm{L}}}^{x_{\mathrm{R}}}dx\frac{\left[
eE(x)\right] ^{d/2}}{l^{d/2}}\exp\left[ -\pi\frac{lm^{2}}{eE(x)}\right]
\right\} . \label{np.9}
\end{equation}
For bosons, we know that the vacuum-to-vacuum transition probability has the form,
\begin{equation}
P_{\mathrm{v}}^{(\mathrm{boson})}=\exp\left[ -\sum_{n}\ln\left(
1+N_{n}^{\mathrm{cr}}\right) \right] , \label{np.10}
\end{equation}
so the universal form of the vacuum-to-vacuum transition probability for the
Bose case is
\begin{equation}
P_{\mathrm{v}}^{(\mathrm{boson})}\approx\exp\left\{ -\frac{V_{\bot}TJ_{(d)}}{(2\pi)^{d-1}}\sum_{l=1}^{\infty}\int_{x_{\mathrm{L}}}^{x_{\mathrm{R}}}dx\left( -1\right) ^{l-1}\frac{\left[ eE(x)\right] ^{d/2}}{l^{d/2}}\exp\left[ -\pi\frac{lm^{2}}{eE(x)}\right] \right\} , \label{np.11}
\end{equation}
where $J_{(d)}$ is the number of boson spin degrees of freedom.
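The way the $l^{-d/2}$ sums in Eqs. (\ref{np.9}) and (\ref{np.11}) arise from the transverse-momentum integration of $\mp\ln\left(1\mp N_{n}^{\mathrm{uni}}\right)$ can be verified directly: for $d=4$ the Gaussian integrals give $\int d^{2}p_{\bot}\left(N_{n}^{\mathrm{uni}}\right)^{l}=(eE/l)e^{-l\pi m^{2}/eE}$, and the factor $1/l$ from the logarithm series combines into $l^{-2}$. The sketch below (our own illustration, assuming SciPy) checks the resulting identities numerically:

```python
import numpy as np
from scipy.integrate import quad

# d = 4: transverse momentum space is two-dimensional.
eE, m = 1.0, 0.6
a = np.pi * m ** 2 / eE

def lhs_fermion():
    # int d^2 p_perp [-ln(1 - N)] with N = exp(-pi (p^2 + m^2)/eE)
    f = lambda p: -2.0 * np.pi * p * np.log(
        1.0 - np.exp(-np.pi * (p ** 2 + m ** 2) / eE))
    return quad(f, 0.0, np.inf)[0]

def lhs_boson():
    # int d^2 p_perp [+ln(1 + N)]
    f = lambda p: 2.0 * np.pi * p * np.log(
        1.0 + np.exp(-np.pi * (p ** 2 + m ** 2) / eE))
    return quad(f, 0.0, np.inf)[0]

l = np.arange(1, 400, dtype=float)
rhs_fermion = eE * np.sum(np.exp(-l * a) / l ** 2)                    # Eq. (np.9), d = 4
rhs_boson = eE * np.sum((-1.0) ** (l - 1) * np.exp(-l * a) / l ** 2)  # Eq. (np.11)
print(lhs_fermion(), rhs_fermion)
print(lhs_boson(), rhs_boson)
```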
Using Eqs. (\ref{np.7}) and (\ref{np.9}), one can precisely reproduce
expressions (\ref{sc.23}) and (\ref{sc.27}) that are found for the total
densities and the vacuum-to-vacuum transition probabilities when directly
adopting the weakly inhomogeneous field approximation to the exactly solvable
cases. Comparing Eqs. (\ref{np.7}) and (\ref{np.11}) with the exact results
obtained for bosons \cite{x-case,L-field,x-exp}, one finds precise agreement
too. Thus, we have a confirmation of the universal forms obtained above.
The representations (\ref{np.9}) and (\ref{np.11}) coincide with the
vacuum-to-vacuum transition probabilities obtained from the imaginary part of
a locally constant field approximation (LCFA) for the one-loop effective
action in $d=4$ dimensions \cite{GiesK17,Karb17}. In this approximation, the
effective action $S$ is expanded about the constant field case, in terms of
derivatives of the background field strength $F_{\mu\nu}$
\begin{equation}
S=S^{\left( 0\right) }[F_{\mu\nu}]+S^{\left( 2\right) }[F_{\mu\nu
},\partial_{\mu}F_{\nu\rho}]+\ldots \label{uni7b}
\end{equation}
where $S^{\left( 0\right) }$ involves no derivatives of the background field
strength $F_{\mu\nu}$ (that is, $S^{\left( 0\right) }$ is a locally constant
field approximation for $S$ that has a form of the Heisenberg-Euler action),
while the first correction $S^{\left( 2\right) }$ involves two derivatives
of the field strength, and so on; see Ref.~\cite{Dunn04} for a review. Using
the representation (\ref{np1a}), one finds the LCFA for the probability
$P_{\mathrm{v}}$ as
\begin{equation}
P_{\mathrm{v}}\approx\exp\left( -2\mathrm{Im}S^{\left( 0\right) }\right) .
\label{LCFA}
\end{equation}
However, it should be stressed that unlike the representations obtained in
Refs.~\cite{GiesK17,Karb17}, we derive Eqs.~(\ref{np.9}) and (\ref{np.11}) in
the framework of the general formulation of strong-field QED in the presence
of $x$-electric potential steps \cite{x-case}, where $P_{\mathrm{v}}$ are
defined by Eqs.~(\ref{Pv}) and (\ref{np.10}), respectively. Therefore, we
obtain Eqs.~(\ref{np.9}) and (\ref{np.11}) independently from the derivative
expansion approach, and the obtained result holds true for any strong field
under consideration. It is known that for a general background field, it is
extremely difficult to estimate and compare the magnitude of various terms in
the derivative expansion. Only under the assumption $m^{2}/eE_{0}>1$ can one
demonstrate that the derivative expansion is completely consistent with the
semiclassical WKB analysis of the imaginary part of the effective action
\cite{DunnH98}. Thus, the representations (\ref{np.9}) and (\ref{np.11}) are
proof that the imaginary part of the LCFA for the Heisenberg-Euler action is
correct for an arbitrary weakly inhomogeneous electric field of constant
direction. The universal forms (\ref{np.6}) and (\ref{np.7}) for the flux
density of created pairs are completely new and present an LCFA for this
physical quantity. It is a new kind of LCFA, obtained without any reference to
the Heisenberg-Euler action.
One can see that the obtained universal forms become especially simple in two
limiting cases: for a weak electric field ($m^{2}/eE_{0}\gg1$), when the
term $\left[ eE(x)\right] ^{d/2}$ can be approximated by its maximum value,
$\left[ eE_{0}\right] ^{d/2}$, and for a strong electric field ($m^{2}/eE_{0}\ll
1$), when there exist spatial intervals where $m^{2}/eE(x)\ll1$ and
approximations of the type
\begin{equation}
\exp\left[ -\frac{\pi lm^{2}}{eE(x)}\right] =1-\frac{\pi lm^{2}}
{eE(x)}+\ldots\label{np.12}
\end{equation}
are available. For example, one can consider the case of a strong Gauss peak
\begin{equation}
E(x)=E_{0}\exp\left[ -\left( x/L_{\mathrm{G}}\right) ^{2}\right] ,
\label{np.13}
\end{equation}
with a large parameter $L_{\mathrm{G}}\rightarrow\infty$. In this case, we do
not have an exact solution of the Dirac equation, and known semiclassical
approximations (valid for a weak field) are not applicable. However, using
approximation (\ref{np.12}), we find from Eqs. (\ref{np.7}) and (\ref{np.9}),
the leading term as
\begin{equation}
\tilde{n}^{\mathrm{cr}}\approx\frac{J_{(d)}\left( eE_{0}\right)
^{d/2}L_{\mathrm{G}}}{d(2\pi)^{d-2}},\ \ P_{\mathrm{v}}\approx\exp\left(
-V_{\bot}T\tilde{n}^{\mathrm{cr}}\sum_{l=1}^{\infty}l^{-d/2}\right) .
\label{np.14}
\end{equation}
As another example, we consider an inverse square electric field,
\begin{equation}
E(x)=E_{0}\left[ 1+\left( \frac{2x}{L_{\mathrm{w}}}\right) ^{2}\right]
^{-1},\ \ A_{0}(x)=-\frac{L_{\mathrm{w}}}{2}E_{0}\arctan\frac
{2x}{L_{\mathrm{w}}}. \label{np.15}
\end{equation}
It is a particular case of an inhomogeneous field that was used to study the
LCFA for the probability $P_{\mathrm{v}}$ in Ref.~\cite{Karb17}. Using
Eqs. (\ref{np.7}) and (\ref{np.9}), we find in the leading order that
\begin{align}
& \tilde{n}^{\mathrm{cr}}\approx\frac{L_{\mathrm{w}}}{2}r^{\mathrm{cr}}\delta_{\mathrm{w}},\label{np.16a}\\
& P_{\mathrm{v}}\approx\exp\left\{ -V_{\bot}T\tilde{n}^{\mathrm{cr}}\sum_{l=0}^{\infty}\frac{\epsilon_{l+1}}{\left( l+1\right) ^{d/2}}\exp\left[ -\frac{\pi lm^{2}}{eE_{0}}\right] \right\} , \label{np.16b}
\end{align}
where
\begin{equation}
\epsilon_{l}=\epsilon_{l}^{\mathrm{w}}=\sqrt{\pi}\Psi\left( \frac{1}{2},\frac{3-d}{2};\frac{\pi lm^{2}}{eE_{0}}\right) \delta_{\mathrm{w}}^{-1},\ \ \delta_{\mathrm{w}}=\sqrt{\pi}\Psi\left( \frac{1}{2},\frac{3-d}{2};\frac{\pi m^{2}}{eE_{0}}\right) . \label{np.17}
\end{equation}
Our result for the probability $P_{\mathrm{v}}$ (\ref{np.16b}) coincides with the
one obtained in Ref.~\cite{Karb17} for the particular case of $d=4$.
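For the inverse square field, the agreement between the universal form (\ref{np.7}) and the leading-order result (\ref{np.16a}) can also be checked numerically (our own sketch, assuming SciPy and units $eE_{0}=1$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyperu

eE0, m, Lw, d, J = 1.0, 0.8, 40.0, 4, 2
z = np.pi * m ** 2 / eE0

def n_inverse_square():
    # Eq. (np.7) with E(x) = E0 [1 + (2x/Lw)^2]^{-1}
    prof = lambda x: 1.0 / (1.0 + (2.0 * x / Lw) ** 2)
    f = lambda x: (eE0 * prof(x)) ** (d / 2.0) * np.exp(-z / prof(x))
    return J * quad(f, -np.inf, np.inf)[0] / (2.0 * np.pi) ** (d - 1)

r_cr = J * eE0 ** (d / 2.0) / (2.0 * np.pi) ** (d - 1) * np.exp(-z)
delta_w = np.sqrt(np.pi) * hyperu(0.5, (3.0 - d) / 2.0, z)
print(n_inverse_square(), (Lw / 2.0) * r_cr * delta_w)  # Eq. (np.16a)
```

Here the substitution $t=(2x/L_{\mathrm{w}})^{2}$ maps the $x$ integral onto the integral representation of $\delta_{\mathrm{w}}$, so the two numbers should coincide to quadrature accuracy.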
For the case when the external background is given by an electric field
slowly varying in time, the expressions for the total number of created particles and
for the probability of vacuum-to-vacuum transition similar to Eqs.
(\ref{np.7}) and (\ref{np.9}) were obtained in Ref. \cite{GavGit17}. It is
clear, however, that time-dependent fields and nonuniform fields describe
physically distinct situations, and in the general case the results have different forms.
\section{Mean current \label{S4}}
It is well known that in QFT, measurable values and their mean values,
generally speaking, are defined globally. This fact implies that those values
are defined in some macroscopic volume at some fixed moment of time. Of
course, the measurement procedure takes some macroscopic time itself. This is
usually ignored under the assumption that the measured physical value does not
change substantially during the measurement. If an external field does not
violate vacuum stability, then it just causes vacuum polarization inside the
area occupied by the field. This effect is a quasilocal one. In the case that
is most interesting to us, i.e., when an external field is capable of violating
vacuum stability, there is a global effect of the field due to an
electron-positron pair creation. These pairs do not disappear when the field
is turned off. The electrons and positrons created leave the area
$S_{\mathrm{int}}$ occupied by the field, creating a constant flow of
particles moving away from the field. For symmetry reasons, it is clear that
this flow is aligned along the field direction (which is the axis $x$ in our
case) and creates a longitudinal electric current.
Created electrons and positrons leaving the field region $S_{\mathrm{int}}$
fly out into the regions $S_{\mathrm{L}}$ and $S_{\mathrm{R}}$,
respectively, and move away from the electric field at constant
longitudinal velocities $-v^{\mathrm{L}}$ and $v^{\mathrm{R}}$,
where $v^{\mathrm{L}}=\left\vert p^{\mathrm{L}}/\pi_{0}\left( \mathrm{L}
\right) \right\vert $ and $v^{\mathrm{R}}=\left\vert p^{\mathrm{R}}/\pi
_{0}\left( \mathrm{R}\right) \right\vert $. This longitudinal current is
coordinate independent and, therefore, can be determined by its value anywhere
in $S_{\mathrm{L}}$ or $S_{\mathrm{R}}$.
Using a general theory \cite{x-case}, we can derive the forms for the vacuum
mean current. The charge operator $\hat{Q}$ is defined in Ref. \cite{x-case}
as a commutator,
\begin{equation}
\hat{Q}=-\frac{e}{2}\int \left[ \hat{\Psi}^{\dagger }\left( X\right) ,\hat{\Psi}\left( X\right) \right] _{-}d\mathbf{r\ }. \label{Charge}
\end{equation}
The expression (\ref{Charge}) for the charge operator suggests the
definition for the current operator,
\begin{equation}
\hat{J}^{\mu }=-\frac{e}{2}\left[ \hat{\Psi}^{\dag }(X)\gamma ^{0}\gamma ^{\mu },\
\hat{\Psi}(X)\right]_{-} \label{Current}.
\end{equation}
These values can be expressed via the vacuum mean values of the
electric current density of the Dirac field through the surface
$x=\mathrm{const}$, which are defined as follows
\begin{align}
& \left\langle J^{\mu}(x)\right\rangle _{\mathrm{in}}=\left\langle
0,\mathrm{in}\right\vert \hat{J}^{\mu}\left\vert 0,\mathrm{in}\right\rangle
,\ \ \left\langle J^{\mu}(x)\right\rangle _{\mathrm{out}}=\left\langle
0,\mathrm{out}\right\vert \hat{J}^{\mu}\left\vert 0,\mathrm{out}\right\rangle
,\label{m1}
\end{align}
where the Dirac Heisenberg operator $\hat{\Psi}\left( X\right) $ is assigned
to the Dirac field $\psi\left( X\right) $. We stress the $x$ coordinate
dependence of mean values (\ref{m1}), which does exist due to the coordinate
dependence of the external field. It should be noted that these densities
depend on a vacuum definition and the structure of the electric field in the
direction $x$. The renormalized vacuum mean value $\left\langle
J^{\mu}(x)\right\rangle _{\mathrm{in}}$ is a source in the equations of motion for a
mean electromagnetic field. The quantity $\left\langle J^{\mu}(x)\right\rangle
_{\mathrm{out}}$ is needed to present the operator $\hat{J}^{\mu}$ in a normal ordering
form with respect to the creation and annihilation operators of the final
particles
\begin{equation}
\hat{N}_{\mathrm{out}}\left( J^{\mu}\right) =\hat{J}^{\mu}-\left\langle J^{\mu
}(x)\right\rangle _{\mathrm{out}}. \label{m2}
\end{equation}
Mean values and probability amplitudes are described by Feynman diagrams
with two different kinds of charged-particle propagators in the external field
under consideration. The probability amplitudes are calculated using
the causal (Feynman) propagator $S^{c}(X,X^{\prime})$ while mean values are
found using the so-called in-in propagator $S_{\text{\textrm{in}}}^{c}(X,X^{\prime})$ and out-out propagator $S_{\text{\textrm{out}}}^{c}(X,X^{\prime})$,
\begin{align}
& S^{c}(X,X^{\prime})=i\left\langle 0,\mathrm{out}\right\vert \hat{T}\hat{\Psi}(X)\hat{\Psi}^{\dag}(X^{\prime})\gamma^{0}\left\vert 0,\mathrm{in}
\right\rangle c_{\mathrm{v}}^{-1},\nonumber\\
& S_{\text{\textrm{in}}}^{c}(X,X^{\prime})=i\left\langle 0,\mathrm{in}
\right\vert \hat{T}\hat{\Psi}(X)\hat{\Psi}^{\dag}(X^{\prime})\gamma
^{0}\left\vert 0,\mathrm{in}\right\rangle ,\nonumber\\
& S_{\text{\textrm{out}}}^{c}(X,X^{\prime})=i\left\langle 0,\mathrm{out}
\right\vert \hat{T}\hat{\Psi}(X)\hat{\Psi}^{\dag}(X^{\prime})\gamma
^{0}\left\vert 0,\mathrm{out}\right\rangle , \label{m5.1}
\end{align}
where $\hat{T}$ denotes the chronological ordering operation and
$c_{\mathrm{v}}$ is the vacuum-to-vacuum transition amplitude,
$c_{\mathrm{v}}=\left\langle 0,\mathrm{out}\right\vert \left. 0,\mathrm{in}\right\rangle $, $\left\vert c_{\mathrm{v}}\right\vert ^{2}=P_{\mathrm{v}}$.
As usual, these propagators can be expressed via the following singular functions
\begin{align}
& S^{c}(X,X^{\prime})=\theta(t-t^{\prime})\,S^{-}\left( x,x^{\prime}\right)
-\theta(t^{\prime}-t)\,S^{+}\left( x,x^{\prime}\right) ,\nonumber\\
& S_{\mathrm{in/out}}^{c}(X,X^{\prime})=\theta(t-t^{\prime})S_{\mathrm{in/out}}^{-}(X,X^{\prime})-\theta(t^{\prime}-t)S_{\mathrm{in/out}}^{+}(X,X^{\prime})\,. \label{m5.2}
\end{align}
The vacuum mean values (\ref{m1}) can be expressed via the propagators
$S_{\text{\textrm{in}}}^{c}$ and $S_{\text{\textrm{out}}}^{c}$ while the
causal propagator $S^{c}$ determines the vacuum polarization contribution to the current as
\begin{align}
& \left\langle J^{\mu}(x)\right\rangle ^{c}=\left\langle 0,\mathrm{out}\right\vert \hat{J}^{\mu}\left\vert 0,\mathrm{in}\right\rangle c_{\mathrm{v}}^{-1}=-ie\mathrm{tr}\left[ \gamma^{\mu}S^{c}(X,X^{\prime})\right] |_{X=X^{\prime}},\nonumber\\
& \left\langle J^{\mu}(x)\right\rangle _{\mathrm{in/out}}=-ie\mathrm{tr}\left[ \gamma^{\mu}S_{\text{\textrm{in/out}}}^{c}(X,X^{\prime})\right] |_{X=X^{\prime}}. \label{m5.3}
\end{align}
Using the explicit forms of these singular functions given in Ref.~\cite{x-case}, we see that the transverse components of these currents are equal to zero,
\begin{equation}
\left\langle J^{k}(x)\right\rangle _{\mathrm{in}}=\left\langle J^{k}(x)\right\rangle _{\mathrm{out}}=\left\langle J^{k}(x)\right\rangle ^{c}=0\ \ \mathrm{if}\ \ k\neq1,\ \label{m6}
\end{equation}
due to the cylindrical symmetry of the problem.
In terms of the singular function
\begin{equation}
S^{p}(X,X^{\prime})=S_{\text{\textrm{in}}}^{c}(X,X^{\prime})-S^{c}(X,X^{\prime}), \label{Sp}
\end{equation}
it is convenient to introduce the current
\begin{equation}
\left\langle J^{\mu}(x)\right\rangle ^{p}=-ie\mathrm{tr}\left[ \gamma^{\mu
}S^{p}(X,X^{\prime})\right] |_{X=X^{\prime}}\ . \label{m13.1}
\end{equation}
The explicit form of $S^{p}(X,X^{\prime})$ is \cite{x-case}
\begin{align}
& S^{p}(X,X^{\prime})=i\sum_{n\in\Omega_{3}}\mathcal{M}_{n}^{-1}\left[ g\left( _{-}\left\vert ^{-}\right. \right) ^{\ast}\right] ^{-1}\ _{-}\psi_{n}\left( X\right) \ ^{-}\bar{\psi}_{n}\left( X^{\prime}\right) ,\nonumber\\
& \mathcal{M}_{n}=2\frac{\tau^{\left( \mathrm{R}\right) }}{T}\left\vert g\left( _{+}\left\vert ^{-}\right. \right) \right\vert ^{2}=2\frac{\tau^{\left( \mathrm{L}\right) }}{T}\left\vert g\left( _{+}\left\vert ^{-}\right. \right) \right\vert ^{2}. \label{m13.2}
\end{align}
Note that $S^{p}$ is formed in the range $\Omega_{3}$ only and vanishes if
there is no pair creation. Here, $\tau^{\left( \mathrm{L}\right) }$ and $\tau^{\left( \mathrm{R}\right) }$ are the (equal) macroscopic times of motion of the created particles in the regions $S_{\mathrm{L}}$ and $S_{\mathrm{R}}$,
respectively. It is assumed that the regions $S_{\mathrm{L}}$ and
$S_{\mathrm{R}}$ are substantially wider than the region $S_{\mathrm{int}}$,
and it is possible to neglect the time period when the created particles are
moving from the region $S_{\mathrm{int}}$ into regions $S_{\mathrm{L}}$ and
$S_{\mathrm{R}}$. We suppose that all the measurements are performed during a
macroscopic time $T$ when the external field can be considered as constant. In
particular, a charge transport through planes $x=x_{\mathrm{L}}$ and
$x=x_{\mathrm{R}}$ occurs during the time period $T$. In this case, the times
$\tau^{\left( \mathrm{L}\right) }$ and $\tau^{\left( \mathrm{R}\right) }$
coincide with $T$, $\tau^{\left( \mathrm{L}\right) }=\tau^{\left(
\mathrm{R}\right) }=T$, and we obtain that
\[
\mathcal{M}_{n}^{-1}=\frac{1}{2}N_{n}^{\mathrm{cr}},
\]
where $N_{n}^{\mathrm{cr}}$ is given by Eq. (\ref{meanN}).
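Spelling this step out: with $\tau^{\left( \mathrm{L}\right) }=\tau^{\left( \mathrm{R}\right) }=T$, Eq. (\ref{m13.2}) gives
\[
\mathcal{M}_{n}=2\left\vert g\left( _{+}\left\vert ^{-}\right. \right) \right\vert ^{2}\ \Longrightarrow\ \mathcal{M}_{n}^{-1}=\frac{1}{2}\left\vert g\left( _{+}\left\vert ^{-}\right. \right) \right\vert ^{-2}=\frac{1}{2}N_{n}^{\mathrm{cr}},
\]
where the last equality assumes the fermionic identification $N_{n}^{\mathrm{cr}}=\left\vert g\left( _{+}\left\vert ^{-}\right. \right) \right\vert ^{-2}$ implied by Eq. (\ref{meanN}).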
Thus, we have that
\begin{equation}
\left\langle J^{\mu}(x)\right\rangle _{\mathrm{in}}=\left\langle J^{\mu
}(x)\right\rangle ^{c}+\left\langle J^{\mu}(x)\right\rangle ^{p}.
\label{m13.3}
\end{equation}
It is clear from Eq.~(\ref{m6}) that only the components $\left\langle
J^{0}(x)\right\rangle ^{p}$ and $\left\langle J^{1}(x)\right\rangle ^{p}$ are
nonzero. Using representations (\ref{m13.1}) and (\ref{m13.2}) and
decomposition (\ref{dec}), we find that
\begin{align}
& \left\langle J^{0}(x)\right\rangle ^{p}=\left\{
\begin{array}
[c]{l}
-\bar{J}^{0}\left( \mathrm{L}\right) \ \ \mathrm{if}\ \ x\in S_{\mathrm{L}}\\
\bar{J}^{0}\left( \mathrm{R}\right) \ \ \mathrm{if}\ \ x\in S_{\mathrm{R}}
\end{array}
\right. ,\ \ \nonumber\\
& \bar{J}^{0}\left( \mathrm{L/R}\right) =\frac{e}{2}\sum_{n\in\Omega_{3}}j_{n}^{0}\left( \mathrm{L}/\mathrm{R}\right) ,\ \ j_{n}^{0}\left( \mathrm{L}/\mathrm{R}\right) =j_{n}^{1}/v^{\mathrm{L/R}};\label{m17a}\\
& \left\langle J^{1}(x)\right\rangle ^{p}=\frac{e}{2}\sum_{n\in\Omega_{3}}j_{n}^{1},\ \ j_{n}^{1}=N_{n}^{\mathrm{cr}}(TV_{\perp})^{-1}. \label{m17b}
Using Eqs. (\ref{L3}), we can present the singular functions $S_{\text{\textrm{in}}}^{c}$ and $S_{\text{\textrm{out}}}^{c}$, given in Ref.~\cite{x-case}, in an explicit form in the regions $S_{\mathrm{L}}$ and $S_{\mathrm{R}}$, respectively. We find that
\begin{equation}
\left\langle J^{1}(x)\right\rangle _{\mathrm{in}}=-\left\langle J^{1}(x)\right\rangle _{\mathrm{out}}=\left\langle J^{1}(x)\right\rangle ^{p}\ \ \mathrm{if}\ \ x\in S_{\mathrm{L}}\ \ \mathrm{or}\ \ S_{\mathrm{R}}.
\label{m7}
\end{equation}
Note that the values of electric current densities (\ref{m7}) [including each
of the components of $j_{n}^{1}$, given by Eq. (\ref{m17b})] are conserved
along the axis $x$. These expressions are defined for regions where an
electric field is absent and do not contain contributions independent of an
electric field. For example, if an electric field in the region
$S_{\mathrm{int}}$ is turned off, $E\rightarrow0$, then the number of pairs created by the field vanishes, $N_{n}^{\mathrm{cr}}\rightarrow0$. For this reason, the densities (\ref{m7}) are characteristics of
real particles and cannot change after an electric field is turned off.
Taking into account the relations (\ref{m2}) and (\ref{m7}), we find that the longitudinal current of created pairs is
\begin{equation}
J_{\mathrm{cr}}^{1}=\left\langle \hat{N}_{\mathrm{out}}\left( J^{1}\right) \right\rangle _{\mathrm{in}}=2\left\langle J^{1}(x)\right\rangle ^{p}=e\sum_{n\in\Omega_{3}}j_{n}^{1}. \label{m10}
\end{equation}
Here, $j_{n}^{1}$ is the flux density of particles created with a given $n$, and
\begin{equation}
\sum_{n\in\Omega_{3}}j_{n}^{1}=n^{\mathrm{cr}} \label{m11}
\end{equation}
is the total flux density of created particles, given by Eq. (\ref{sc.20}).
This allows us to interpret the density $2\left\langle J^{0}(x)\right\rangle ^{p}$ as the charge density of the created particles,
\begin{equation}
J_{\mathrm{cr}}^{0}(x)=2\left\langle J^{0}(x)\right\rangle ^{p}=\left\{
\begin{array}
[c]{l}
-e\sum_{n\in\Omega_{3}}j_{n}^{0}\left( \mathrm{L}\right) \ \ \mathrm{if}\ \ x\in S_{\mathrm{L}}\\
e\sum_{n\in\Omega_{3}}j_{n}^{0}\left( \mathrm{R}\right) \ \ \mathrm{if}\ \ x\in S_{\mathrm{R}}
\end{array}
\right. .\ \label{m19}
\end{equation}
This interpretation also works for partial components of density
(\ref{m19}). We see that created electrons with a given $n$ move with a velocity
$v^{\mathrm{L}}$ in a direction opposite to the direction of the axis $x$,
i.e., in the direction opposite to the direction of the current density
$ej_{n}^{1}$. During the time $T$ in which these electrons pass through the plane $x=x_{\mathrm{L}}$, the amount of charge transported per $V_{\bot}$ is equal to $ej_{n}^{1}T$.
Taking into account that this charge is distributed uniformly over the
cylindrical volume with the length $v^{\mathrm{L}}T$, we obtain that the charge
density of created electrons with a given $n$ is equal to $ej_{n}^{1}/\left(
-v^{\mathrm{L}}\right) =-ej_{n}^{0}\left( \mathrm{L}\right) $, where
$j_{n}^{0}\left( \mathrm{L}\right) $ is given by Eq. (\ref{m17a}). Created
positrons with a given $n$ move at a velocity $v^{\mathrm{R}}$ along axis $x$;
i.e., the direction of their movement coincides with the direction of the
current density $ej_{n}^{1}$. During the time $T$ positrons transport the same
charge amount per $V_{\bot}$ as electrons, $ej_{n}^{1}T$, through the plane
$x=x_{\mathrm{R}}$ (in this case it is uniformly distributed over a
cylindrical volume with the length $v^{\mathrm{R}}T$). We find that the charge
density of created positrons with a given $n$ is $ej_{n}^{1}/v^{\mathrm{R}}=ej_{n}^{0}\left( \mathrm{R}\right) $, where $j_{n}^{0}\left(
\mathrm{R}\right) $ is given by Eq. (\ref{m17a}). We see that every pair
$ej_{n}^{1}$ and $-ej_{n}^{0}\left( \mathrm{L}\right) $ in $S_{\mathrm{L}}$ and $ej_{n}^{1}$ and $ej_{n}^{0}\left( \mathrm{R}\right) $ in $S_{\mathrm{R}}$, respectively, can be connected by a Lorentz boost and represents
(nonzero) components of the same Lorentz vector. The number densities in both
regions $x\in S_{\mathrm{L}}$ and $x\in S_{\mathrm{R}}$ are equal,
\[
\sum_{n\in\Omega_{3}}j_{n}^{0}\left( \mathrm{L}\right) =\sum_{n\in\Omega
_{3}}j_{n}^{0}\left( \mathrm{R}\right) .
\]
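The charge-density bookkeeping above can be checked with a few lines of arithmetic. The sketch below is purely illustrative: the numerical values of $e$, $T$, $j_{n}^{1}$ and the drift speeds are hypothetical, and it simply verifies that a charge $ej_{n}^{1}T$ spread over a cylinder of length $vT$ reproduces the densities $\mp ej_{n}^{0}\left( \mathrm{L/R}\right) $ of Eq. (\ref{m17a}).

```python
# Illustrative check of the charge-density bookkeeping: during time T a charge
# e*j1*T (per transverse area V_perp) crosses the plane x = x_L or x = x_R and
# is spread uniformly over a cylinder of length v*T.  All numbers are
# hypothetical; units are chosen so that c = 1.
e = 1.0       # charge unit
T = 100.0     # macroscopic observation time
j1 = 0.25     # longitudinal flux density j_n^1 for a given quantum number n
v_L = 0.8     # speed of created electrons moving into S_L
v_R = 0.9     # speed of created positrons moving into S_R

charge_per_area = e * j1 * T                 # charge crossing the plane in time T

rho_L = -charge_per_area / (v_L * T)         # electron charge density in S_L
rho_R = +charge_per_area / (v_R * T)         # positron charge density in S_R

# Number densities j_n^0(L/R) = j_n^1 / v^{L/R}, as in Eq. (m17a):
j0_L = j1 / v_L
j0_R = j1 / v_R

assert abs(rho_L + e * j0_L) < 1e-12         # rho_L = -e * j_n^0(L)
assert abs(rho_R - e * j0_R) < 1e-12         # rho_R = +e * j_n^0(R)
```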
Thus, the charge densities of created electrons in $x\in S_{\mathrm{L}}$ and
positrons in $x\in S_{\mathrm{R}}$ have the same value, but opposite sign. We see
that the total charge of created particles is zero, and an electric field
produces a charge polarization along axis $x$, just as one would expect.
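As a consistency check of the Lorentz-vector statement (in units with $c=1$, and using $j_{n}^{0}=j_{n}^{1}/v$), each such pair of densities can be written as
\[
j_{n}^{\mu}=j_{n}^{1}\left( 1/v,\,1\right) ,\qquad j_{n}^{\mu}j_{n\mu}=\left( j_{n}^{1}\right) ^{2}\frac{1-v^{2}}{v^{2}}\geq0,
\]
so a boost with velocity $v$ along the $x$ axis reduces it to $\left( j_{n}^{1}\sqrt{1-v^{2}}/v,\,0\right) $, i.e., to the rest frame of the corresponding created particles.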
The main contribution to the longitudinal current and to the flux density of created pairs is formed in the inner subrange $D(x)$ of the Klein zone, given by the
inequality (\ref{np.2}). In this subrange, $v^{\mathrm{L}}\simeq v^{\mathrm{R}}\simeq1$, and we have that $j_{n}^{0}\left( \mathrm{L/R}\right) \simeq j_{n}^{1}$. Then we can write that
\begin{equation}
J_{\mathrm{cr}}^{1}=e\tilde{n}^{\mathrm{cr}},\ \ J_{\mathrm{cr}}^{0}(x)=\left\{
\begin{array}
[c]{l}
-e\tilde{n}^{\mathrm{cr}}\ \ \mathrm{if}\ \ x\in S_{\mathrm{L}}\\
e\tilde{n}^{\mathrm{cr}}\ \ \mathrm{if}\ \ x\in S_{\mathrm{R}}
\end{array}
\right. ,\ \label{m20}
\end{equation}
where $\tilde{n}^{\mathrm{cr}}$ is given by the universal forms (\ref{np.6})
and (\ref{np.7}).
\section{Concluding remarks}
In the present article, we have presented the approximation that allows one to
treat nonper\-tur\-ba\-ti\-vely the vacuum instability effects for arbitrary weakly
inhomogeneous $x$-electric potential steps in the absence of the corresponding
exact solutions. First, we have reviewed vacuum instability effects in three
exactly solvable cases in QED with $x$-electric potential steps that have a
real physical importance. These are the Sauter electric field, the so-called
$L$-constant electric field, and exponentially growing and decaying strong, weakly inhomogeneous electric fields. Defining the conditions of a field being
weakly inhomogeneous in general terms, we observed some universal features of
vacuum effects caused by the strong electric fields. These universal features
appear when the length of the external field is sufficiently large in
comparison to the scale, $\Delta l_{\mathrm{st}}=\left[ e\overline
{E(x)}\right] ^{-1/2}$. In this case, the scale of the variation for an
external field and leading contributions to vacuum mean values are
macroscopic. We found universal approximate representations for the flux
density of created pairs (bosons and fermions) and the probability of the
vacuum to remain a vacuum in the leading-term approximation for a weakly
inhomogeneous, but otherwise arbitrary strong electric field. These
representations do not require a knowledge of corresponding solutions of the
Dirac equation; they have a form of simple functionals of a given weakly
inhomogeneous electric field. The universal forms for the flux density of
created pairs are completely new and present an LCFA for this physical quantity. It is a new kind of LCFA, obtained without any relation to the
Heisenberg-Euler action. We established relations of these representations
with leading term approximations of derivative expansion results. We have
tested the obtained representations for cases of exactly solvable $x$-electric
potential steps (based on using exact solutions). We have also considered two
examples of $x$-electric potential steps where the exact solutions of the
corresponding Dirac equation are not known, a Gauss peak and an inverse square
electric field. We found the longitudinal current density and charge density
of created electrons and positrons and related these densities to the
corresponding flux density. In the regions $S_{\mathrm{L}}$ and $S_{\mathrm{R
}$, where the electric field is absent (or negligible), leading vacuum
characteristics are formed due to the real pair production. Thus, we have
isolated global contributions that depend on the total history of an electric
field from local contributions formed in the region $S_{\mathrm{int}}$.

The nonperturbative (with respect to the external field) technique elaborated on in Ref.~\cite{x-case} allows one to calculate all the characteristics of zero-order processes and Feynman diagrams that describe all characteristics of processes with an interaction between charged particles and photons. These diagrams formally have the usual form but contain special propagators. Using expressions for these propagators in terms of in- and out-solutions, presented in Ref.~\cite{x-case}, our approximation method can be easily adapted to calculate one-loop and higher order contributions. The first step in developing the corresponding nonperturbative technique was recently done in Ref.~\cite{BrGavGitIv}, where a relation between the electron propagator in a constant electric field confined between two capacitor plates and the well-known Fock-Schwinger proper-time integral representation was established.
\section{Acknowledgement}
S.P.G. and D.M.G. acknowledge support from Tomsk State University
Competitiveness Improvement Program and the partial support from the Russian
Foundation for Basic Research (RFBR), under Project No. 18-02-00149;
D.M.G. is also supported by the Grant No. 2016/03319-6, Funda\c{c}\~{a}o de Amparo \`{a}
Pesquisa do Estado de S\~{a}o Paulo (FAPESP), and permanently by Conselho Nacional
de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq). The work of A.A.S. was
supported by Grant No. 2017/05734-3 of FAPESP.
### About the Book
Best known as the singer of the pop hit 'Walk on the Wild Side', Lou Reed was in fact one of the most intelligent and innovative songwriters of his era − a sardonic, world-weary chronicler of underground culture and the dark side of human nature.
Starting with the Velvet Underground in the 1960s, Reed combined poetry with rock 'n' roll to create almost fifty years of powerful and challenging music that influenced generations of artists. In this authoritative new book, biographer Howard Sounes shows how Reed's seminal work with the Velvets shaped his whole career.
Sounes reveals a complicated man, and an erratic recording artist, who was frequently quarrelsome and could be downright nasty. He was a bisexual who married three times, an alcoholic and a drug user who portrayed himself as a tough guy, but struggled with mental health issues in private. He was also an astute and sensitive songwriter whose subject matter ranged from drugs and transgressive sex to such tender love ballads as 'Perfect Day' and 'Pale Blue Eyes'.
Through meticulous research, including interviews with over 140 people who played a part in Reed's life, many speaking in public for the first time, _Notes from the Velvet Underground_ is a vivid portrait of a talented but tortured artist.
Contents
Cover
About the Book
Title Page
I. Coney Island Baby, 1942–59
II. On to the Darkened Sea, 1960–4
III. Honeybun, Black Jack, Sterl and Moesy, 1964–5
IV. The Exploding Plastic Inevitable, 1966
V. Light and Dark, 1967–8
VI. A New VU, 1968–70
VII. Solo in the Seventies, 1970–3
VIII. Self-parody, 1973–4
IX. Howling like the Devil, 1975–6
X. The Arista Years, 1976–80
XI. Second Marriage, 1980–7
XII. New Inspiration, 1987–92
XIII. Return to the Velvet Underground, 1992–6
XIV. Love, Lou, 1996–2008
XV. Nevermore, 2008–13
Picture Section
Source Notes
Bibliography
Author's Note and Acknowledgements
Picture Acknowledgements
Index
About the Author
Also by Howard Sounes
Copyright
# Notes from the Velvet Underground
## The Life of Lou Reed
## Howard Sounes
## I
## Coney Island Baby
### 1942–59
LISTEN. LOU REED has fallen silent. That black-clad curmudgeon, the rock 'n' roll poet they called the King of New York, co-founder of the Velvet Underground band, master of the wry, observational lyric, author of 'Walk on the Wild Side', 'Heroin' and 'Sweet Jane', among hundreds of extraordinary and some quite ordinary songs, is dead. Lou lived so fast, he drank so much and he took so many drugs that few expected him to live to seventy-one. But now he has sung his last song. What was he really like?
One quick story serves as a paradigm. In the autumn of 1963, when Lou was twenty-one, he drove to St Lawrence University in upstate New York with his college band to perform at a fraternity weekend. The terms of the engagement were that LA and the Eldorados would play at a student dance on Friday night, then for an hour on a pleasure boat on the St Lawrence River on Saturday afternoon, before performing at a fraternity party in the evening. 'I'm not playing on the boat,' Lou announced when they arrived, having decided that this was beneath his dignity. The others looked at their band mate in exasperation. Lou was a slim young man of 140lbs, five foot ten, with simian features, short, stubby fingers, bushy brown hair and clever brown eyes. The left eye was lazy, giving him a sly appearance. He spoke in a whiny, camp voice and gave the general impression of being trouble. 'He was just a prick,' says band mate Richard Mishkin, using a word that many friends chose to describe Lou over the years. Yet he was the heart and soul of LA and the Eldorados, their lead singer and guitarist, and they couldn't go on without him.
'You don't have a choice,' Richard told him. 'You are going to play on the boat and be happy... They are paying us a lot of money, and you have to do it.'
That was not the way to speak to Lou, ever. 'Mishkin, fuck you!' he retorted, thrusting his right hand through a glass door. Lou laughed as he looked at the injury he had done to himself, blood streaming down his arm as he held his hand up.
'Because he doesn't have to play now,' explains Richard, who took Lou to hospital for stitches. 'He's won!'
As he would show time and again during his long career, Lou would rather harm himself than be coerced into doing anything he didn't believe in. Such integrity is a mark of a true artist. It also helps explain why Lou never achieved as much success as he craved, or deserved.
This talented, difficult man was born Lewis Alan Reed at Beth-El Hospital in Brooklyn, New York, on Monday, 2 March 1942. A story later emerged that his real name was Louis Firbank, and this name is still cited in books and articles; it is completely erroneous. Some friends called him by his given name of Lewis, which he didn't object to, though he disliked his middle name, but to most people he would always be Lou. 'Lewis was his name, [but] it wasn't him,' says his brother-in-law, Harold Weiner. 'He was Lou.'
The temperature fell below freezing the night Lou was born, bringing snow to the five boroughs of New York City, of which Brooklyn is the second largest and most populous: eighty-one square miles of tightly packed houses, tenements, shops and factories, bisected by pot-holed highways, tram lines and rust-brown elevated train tracks; the teeming borough separated from its more glamorous neighbour, Manhattan, by the East River, spanned by the majestic Brooklyn Bridge. It was wartime and the newspapers were full of stories of America's struggle with 'the Japs', as well as the wider world war against the Hitler Axis. The mood at home was patriotic but jittery, with fears of attacks on the mainland. The day after Lou was born, to everyone's dismay, the _Brooklyn Eagle_ reported the sinking of a US destroyer by a German U-boat off the coast of New Jersey. The paper also warned its readers that the lights would shine as usual at Brooklyn's fun fair, Coney Island, that summer, 'but at the sound of an alert signal the entire amusement area will be blacked out'. In a time that has passed into history, men and women hurrying home from work in the snow were formally dressed, nearly everybody wore a hat and an overcoat and most adults smoked cigarettes. New releases at the cinema included _Woman of the Year_ , with Spencer Tracy and Katharine Hepburn; while Eddie Cantor promised to 'keep you laughing' in black face in a Broadway show called _Banjo Eyes_ , a production that would now be considered outrageously racist. Lou was capable of casual racism himself in later life.
There was a substantial African-American community in Brooklyn, part of the borough's heterogeneous mix of races and creeds, enlarged and diversified by waves of immigrants. Lou's family was part of that classic American story. His father, Sidney Joseph Rabinowitz, was born in New York in 1913, the son of Russian Jews who had emigrated to America to escape persecution. Sid's father, Mendel, was a printer who went bankrupt during the Depression, so Sid had known hard times. At the outbreak of the Second World War, the family was living in a tenement in Borough Park.
'I know what it is like to be on the outside. I know what it's like to have an unhappy childhood,' Lou said in 1992, going to the root of his formative experiences. His principal problem was with his father. Lou's sister concedes that Dad had his faults. Sid was 'controlling and rigid' and, like his son, '[he] could be a verbal bully.' At the end of his life Lou told his friend the artist Julian Schnabel a story about how he had once reached out to his father, only for Sid to hit him. 'He put his hand near his father, and his father kind of smacked him. He never got over that,' reports Schnabel. 'He felt the cruelty of that.' Father and son may have been too alike. Sid had a quick wit, he enjoyed music and had ambitions to become an author. His mother persuaded him to study to become an accountant instead, a steady job that enabled him to join the middle class, but not work in which he could express himself. He was, perhaps, a frustrated man.
Radical politics thrived in blue-collar Brooklyn in the forties. Sid's cousin Shulamit 'Shirley' Rabinowitz caused such a stir in the garment industry, agitating for better pay, that the local newspapers nicknamed her Red Shirley. Sid was also active in the labour unions, 'which was considered leftist and the FBI was investigating, so he changed [his name] to protect his father, ostensibly,' explains Lou's sister. So Sid Rabinowitz became Sid Reed. Soon after adopting his new name, Sid met Toby Futterman, who would become Lou's mother, when they were working together at a court service company. Toby's background was almost identical to Sid's in that her parents were European Jewish immigrants. She was born in 1920, making her seven years his junior. Although her father died when she was young, putting a financial strain on the family, the Futtermans were better off than the Rabinowitzes, owning their house in a more salubrious part of Brooklyn. Judith Futterman, one of Lou's cousins on his mother's side, became a professional singer who performed with the Metropolitan Opera. Toby was vivacious and attractive, crowned Queen of the Stenographers in a 1939 office beauty contest, but she lacked confidence. 'An anxious individual throughout her life, she took a traditional role with my father, always subservient to him,' notes their daughter.
1620 Avenue V, Brooklyn.
Toby married Sid not long after they met, and Lou was their first child. He was always closer to his mother than his father, taking after Toby in looks, as well as inheriting her chronic anxiety, something he learned to hide in public. Their first family home was a small house on Avenue V in the Sheepshead Bay area of Brooklyn. Shortly after the end of the war (Sid was registered for service, but not called up), they moved to a larger house in leafy Midwood. The Reeds were living there in 1947 when Lou's sister was born. There has been some confusion over the years regarding the size and make-up of Lou's family, with one author writing since Lou's death that he had a younger brother. He did not. His sister, his only sibling, was named Margaret Ellen, but soon acquired the endearing nickname of Bunny. In adult life she became a psychotherapist and assumed the name Merrill, thinking Bunny too silly for work, but Lou always called her Bunny. The siblings were alike: dark, good-looking, sharp-tongued, with the same strong New York accent, both somewhat controlling; and they were close. One of Bunny's earliest memories of Lou was waiting at the window for him to come home from school. 'And when he came home I was so happy, and that never changed. I always adored him.'
These were the innocent early years of childhood when Lou discovered the pleasure of drinking egg cream – milk shake made with soda – in the milk bars on Kings Highway, which he sang about nostalgically in his 1996 song 'Egg Cream'. In the humid New York summer Lou would have been taken to the lido at Brownsville and to Coney Island. He was part of an extended Jewish immigrant family with numerous relatives who spoke with East European accents. He began his education at a local redbrick elementary public school, where he played stick ball and stoop ball in the yard, as he describes in 'My Old Man' on the album _Growing Up in Public_ , a song which refers to a bullying father who beat his wife. Lou said he wrote the song for his father, but denied that Sid hit Toby. Throughout his career he was at pains to explain that he saw himself as a writer who created characters, in the way a novelist does, a position that allowed him to write from different points of view, to invent things, and to distance himself from his material. 'I see myself as a writer. Whether I'm a nice guy, whether I'm a liar, whether I'm immoral, should have nothing to do with it.'
The family left Brooklyn in 1951 for suburban Nassau County, Long Island; 'the armpit of the world,' Lou called it. Topographically, Long Island is the flat, narrow strip of land that extends east from New York City. Long Island Sound laps the ritzy _Great Gatsby_ communities of the North Shore. The utilitarian South Shore faces the Atlantic Ocean. Long Island becomes less populated and more expensive the further east one travels on Route 27, until the dwindling highway reaches the millionaire homes of the Hamptons. The Reeds bought a new ranch-style house at 35 Oakfield Avenue in Freeport, an expanding middle-class community on the South Shore. The house was a three-bedroom, single-storey brick building on a street of identical homes, separated by driveways and lawns: the classic 1950s American abode. These were the unremarkable environs of Lou's early life. It was evidently too dull a backstory for Lou, who sometimes gave the impression that he was from a wealthy Long Island family. 'My parents were self-made millionaires. On paper they were very rich,' he told _Melody Maker_ in 1976. This was fiction. Sid Reed was a Certified Public Accountant (CPA) who worked as the treasurer of a Long Island packaging company. Lou was no more the son of a millionaire than he was a child of the streets, though he also adopted that image. '[Later] he decided to portray himself as a guy who had grown up on the mean streets of New York City, when in fact he grew up in middle-class Freeport,' notes Allan Hyman, who lived round the corner and went through school and college with Lou. Even before Freeport, the Reeds had lived in nice areas of Brooklyn. Still, Lou struggled at first in the city's public-school system. 'The reason they moved out of Brooklyn to Long Island is that Lou was getting beat up on the way to school,' says his first wife, Bettye Kronstad.
35 Oakfield Avenue, Freeport.
Lou started school in Freeport in fifth grade, aged nine. He walked to school each morning, past the elevated train track that carried commuters into Manhattan, past the water tower and the United Methodist Church, all of which were landmarks of a suburb with the atmosphere of a small town. He was an honour roll student in middle school, and was assessed as having a relatively high IQ of 132. Although his family was not particularly religious, Lou celebrated his bar mitzvah and took time off for Jewish holy days. 'We were Jewish, but my father was rather opposed to organized religion,' says Bunny. 'He was a cultural Jew, I think you might say.' The children were indulged. Sid acquired a piano so they could take music lessons. Home movies show Lou as an apparently happy, toothy little kid frolicking in the yard with his sister and playing at the beach. One summer he worked as an attendant at Jones Beach, which entailed wearing a sailor's suit and cap – an early echo of the lyrics of 'Heroin' – as he collected litter with a stick pin, an episode he alludes to on the album _Take No Prisoners_.
There were problems, though. Lou was bullied again at Junior High in Freeport, according to Bunny, who says he was 'beaten up routinely after school', and he began to suffer panic attacks. 'It was obvious that Lou was becoming increasingly anxious and avoidant and resistant to most socializing unless it was on his terms,' she states, in the jargon of psychotherapy. 'He was possessed with a fragile temperament. His hyper-focus on things he liked led him to music, and it was there that he found himself. Self-taught, he began playing the guitar, absorbing every musical influence he could.'
Music came to Lou initially through AM radio, and he developed catholic tastes, enjoying doo-wop, rock 'n' roll, modern jazz and gospel. A keen reader, he valued words just as highly. Ian Fleming's James Bond stories were among his favourite books as a child. As a teenager, he was impressed by Jack Kerouac's _On the Road_. 'He was a very big fan of Jack Kerouac. I remember when he read _On the Road_. That was a big deal, and he wanted to talk to me about it, and I couldn't get that whole existential concept,' says Allan Hyman. While other boys sniggered over _Playboy_, precocious Lou read the erotic French novel _The Story of O_.
Yet in many respects he remained a conventional middle-class teen. He was smartly turned out, remaining a neat, organized person throughout his life. He enjoyed basketball and tennis, and joined the track and field team at Freeport High. In his song 'Coney Island Baby' Lou sang about a boy who yearned to play football for his school coach. Was this autobiographical? Lou had a friend on the team, running back Jerome Jackson, one of the few African-American students at Freeport High. He doesn't recall Lou expressing any interest in trying out for Bill Ashley, who coached the Freeport High Red Devils, and says that Lou was one of the quieter kids in school. As with many writers, he was someone who lived much of his life in his imagination. He wasn't a jock, nor was he much of a mixer. 'Lou wasn't that popular. He was very friendly, but you had to be in his circle for him to warm up to you.' Their friendship was based on music. Jerome sang with the Valets, one of several high-school bands Lou was involved in. Another band, the oddly named Tretoads, performed 'Mickey Mouse Rock' at a school variety show, while the most significant of Lou's school bands was the Shades.
Composed of Lou and two older schoolboys, Phil Harris and Al Walters, the Shades performed at the 1958 Freeport High variety show in sunglasses, hence the name. Lou and Phil wrote a couple of songs together, 'So Blue' and 'Leave Her for Me'. 'I wrote all of "So Blue" and helped Lou out with "Leave Her for Me",' Harris recalled. 'Lou played all of the music, but we both sort of kicked around some chords during the writing phase.' Remarkably, they recorded these songs in New York City with a professional sax player, King Curtis, Phil singing lead vocals while Lou played guitar and added backing vocals. The music was generic, the lyrics formulaic, but the recording was good enough to be released as a single on the Time label in 1958 under the band name of the Jades (it turned out that there was another group called the Shades). Although the disc made no impression on the national charts, in a year of novelty hits like 'The Purple People Eater' and 'The Chipmunk Song', many kids in Freeport bought Lou's first record.
Adventures of this kind brought Lou into conflict with his father, who fretted about his son getting involved in the music business. This was the start of their problems. Lou complained about Sid to friends, telling Allan that his dad was one of the biggest assholes in the world. Allan was surprised. He liked Lou's parents. 'For whatever reason, Lou had a whole number going with [Sid] – always – and I don't know why,' he says. 'I remember that he was always upset with him for one reason or another.' There was an increasingly bad atmosphere at home. 'The stage was set. Anxious, controlling parents, limited communication, a society that valued secrecy, underlying mental-health issues,' comments Bunny. 'Verbal fights between Lou and my parents erupted – about going into the city, about the dangers he might confront. My parents were frightened, angry, bewildered.' There was a lot of yelling. After a period of crisis, however, things calmed down again. 'Remarkably, during Lou's senior year in high school there were moments of normalcy at home,' adds Bunny. 'Family dinners could be enjoyable. Lou and my father were both extremely witty, with erudite, dry senses of humour and remarkable literary sensibilities. I enjoyed their verbal jousting, as did they.'
By the time he was seventeen, Lou had started to write in earnest: song lyrics, poems and stories in which he began to express aspects of his sexuality. Allan Hyman was shocked by what he read. 'He would write poems and he would write short stories that had a gay theme about them.' There was violence in these stories, with characters punished for their sexuality. Sex and violence would be themes in Lou's adult writing, often linked. 'In some of the stories the hero would end up beating up the [other] person, catching somebody having sex in a public bathroom and having the hero of the story beat them up.' Allan further claims Lou told him that a man paid him to watch him masturbate in a local park. He thought this 'kind of odd', but didn't draw conclusions. 'The idea of being gay was not something I had even heard of. I knew a couple of kids in school who appeared to be, you know, gay, but nobody called them gay. [We] just thought they were sissies.' He didn't think of Lou as a sissy. Rather, his stories seemed to be part of his emerging rebelliousness, and who knew how much of what he said and wrote was true? Lou's sexuality remained enigmatic throughout his life, a conundrum to many friends and, one suspects, himself. Was he gay or straight? As the facts emerge, we see that his sexuality was mutable and not easily defined. If a label must be applied, Lou was bisexual, though he never used that word himself and didn't seem to welcome categorization. Nevertheless, importantly, his sexuality set him apart from society from an early age.
Even by the standards of teenage boys, Lou's behaviour with girls at high school was gauche and crude, probably because he was unsure of himself. When a girl in Gene's candy store asked Lou one day if he'd had his hair cut, for example, he snapped, 'Oh, fuck you, Carol!' at a time when middle-class American boys simply didn't talk to girls that way. This was the wholesome era of Pat Boone, _I Love Lucy_ and the hula-hoop craze. His friends were shocked. Then there was dating. 'Lou never had girlfriends,' says schoolfriend Richard Sigal, meaning that Lou didn't have a steady high-school girlfriend like Richard and Allan did. 'Lou had one-night stands. Lou had dirty girls.' Before the contraceptive pill, fumbling around in the back seats at the Freeport Theater, getting a hand job in a parked car, or 'dry humping' (rubbing against your sweetheart with your clothes on) were the early sexual experiences for their generation, but Lou was quick to boast that he had gone 'all the way'. He'd met an older girl named Bonnie who, he said, allowed him special favours. She had also introduced Lou, an avid cigarette smoker since fourteen, to marijuana, which was seemingly his entrée into the world of stimulants. Bonnie was evidently quite a character, and she must have left a strong impression. One wonders if Lou had her in mind when he wrote 'Sweet Bonnie Brown' for the Velvet Underground. Drug use of any kind was highly unusual in fifties Freeport. '[When the rest of us] were drinking beer, Lou was rolling joints, which was unheard of,' says Richard, while Lou's sister confirms that he got into drugs as a schoolboy. 'I do not know how much of the drug use was apparent to my parents.'
Allan and Richard sometimes tried to fix Lou up with a girl so they could double date, now that they were old enough to borrow their parents' cars, but double-dating with Lou rarely turned out well. One weekend Allan persuaded Lou to accompany him to New Jersey, where he knew a girl named Susan who had offered to invite a friend for Lou. Susan came to the door to greet the boys, her eight-year-old sister simultaneously sliding down the bannister to welcome them. Lou asked the child: 'Do you always masturbate on the bannister?'
'Get out!' Susan shouted. It was the end of the date.
There was an equally disastrous double-date with Richard Sigal. One night when Richard's parents were away, he invited Lou over for an evening with himself and his girlfriend, who had brought a friend for Lou. He turned up late, apparently drunk or high, and asked for a Scotch, which Richard gave him, in one of his father's crystal tumblers. Lou then nonchalantly mentioned that he had another girl outside, passed out in his parents' car. He went to check on her. 'She's OK,' he said, when he came back. 'But I dropped the glass.' Richard looked outside to see his father's expensive tumbler smashed on the drive. Then he looked at the poor girl who had come to the house to meet Lou. He felt like hitting him. 'To this day I can't tell you why I didn't – he deserved it.' The evening was ruined.
Lou and the CHDs, Freeport High, 1959. Richard Sigal on left.
When Lou performed at the annual variety show at Freeport High in his senior year, he did so as part of a number of student groups, including a foursome made up of himself, Johnny Dekam, Richard Sigal and Judy Titus, one of the most popular girls in school. Lou had them perform under the name CHDs, later explaining to Judy that the letters were a rearrangement of DHC, for Dry-Hump Club. 'They totally embarrassed me,' says Judy. The senior prom was a couple of months later. Allan, who chauffeured his girlfriend and Lou and his date around town for the evening, recalls that Lou made a move in the back seat while he and his girlfriend were in front. 'He's in the back seat banging this girl, and my date is getting angrier and angrier. It's like they don't care that we were in the front seat.' Allan's girlfriend complained that what the others were doing was disgusting.
'Fuck you,' replied Lou. 'Don't look if you don't like it.'
This was typical of his behaviour, and part of his emerging persona as a _provocateur._ His friends found it entertaining initially but later came to think of Lou as a selfish person. 'Lou felt there was no one else in the world other than him. And he was nasty. He was a person who could [not] care less how you felt,' says Allan.
Lou graduated high school.
Despite such juvenile and boorish behaviour, despite the dope smoking and conflicts with his father, Lou was far from a high-school drop-out. When he graduated in June 1959, he was ranked eighty-fourth out of the 312 students in his year, and appeared to be as clean-cut as any of his contemporaries. In light of the fact that he went on to become associated with a life of dissipation, the caption under his year-book photo seems comically guileless: 'Tall, dark-haired Lou likes basketball, music and, naturally, girls. He was a valuable participant on the track team. He is one of Freeport's great contributors to the recording world. As for the immediate future, Lou has no plans, but will take life as it comes.'
In fact, the plan was for Lou to pursue his education at Syracuse University in upstate New York, almost three hundred miles north-west of Freeport, where Allan was also bound. The boys took a trip to look at the campus together. Then Lou changed his mind, deciding he would rather go to New York University in Manhattan, where he had also been offered a place. He felt that Syracuse was too provincial and middle class and he wanted to be in the city. 'I feel it will be better for my writing,' he told his friend.
He moved into a dormitory on the University Heights campus of New York University, in the Bronx, in September 1959. Living alone in Manhattan, away from his family and friends for the first time, Lou became anxious and depressed, to the extent that he suffered a breakdown. Bunny recalls the dramatic events that took place that autumn. 'Sometime during his freshman year at NYU, when I was twelve, my parents went into the city and returned with Lou, limp and unresponsive. I was terrified and uncomprehending. They said he had a "nervous breakdown". The family secret was tightly kept and the entire matter was concealed, from relatives, from friends. It was our private and unspoken burden...
'My parents finally sought professional help for Lou. I heard only the superficial pieces of what was going on. My mother came into my room and told me that they thought he might have schizophrenia. She said that the doctors told her it was because she had not picked him up enough as an infant but had let him cry in his room.' Toby sobbed as she explained herself to her daughter: 'The paediatrician told me to do that! He said that's how you teach a baby to go to sleep.' A psychiatrist wrote, in a letter which Lou later had framed, that he suffered from delusions and hallucinations and reported seeing spiders crawling on the walls. 'Lou was not able to function at that time. He was depressed, anxious and socially avoidant. If people came into our home, he hid in his room.' He also hid under a desk. 'He might sit with us, but he looked dead-eyed, non-communicative. I remember one evening when all of us were sitting in our den, watching television together. Out of nowhere, Lou began laughing maniacally. We all sat frozen in place. My parents did nothing, said nothing, and ignored it, as if it was not taking place.'
All this happened in the short space of time between Lou enrolling at NYU in September 1959 and the Thanksgiving holiday in November. When Allan visited his friend at home during the break, Lou told him the story. 'I think I'm having a nervous breakdown,' he said.
'Why do you think that?'
'I'm very depressed and I'm having a lot of problems, and I'm taking some [medication]. I feel like crap.' Looking back, Allan says Lou often seemed to be depressed. 'He was always looking on the dark side of things, rather than having a positive view.'
When Lou didn't improve, a psychiatrist referred him to Creedmoor Psychiatric Center near Freeport, a forbidding mental hospital, in the plain language of the day, with the grim appearance and high security of a prison. It was here that he received electro-convulsive therapy (ECT), a controversial form of shock treatment used to treat severe depression. 'I don't know which psychiatrist recommended the Electro Shock Therapy [ _sic_ ],' comments Bunny. 'I assume that Lou could not have been in any shape to really understand the treatment or the side effects. It may well be that he was fearful that he would be committed to a psychiatric hospital and not allowed home if he did not agree to the treatment... Was he suicidal? Impaired by drugs? Schizophrenic? Or a victim of psychiatric incompetence and misdiagnosis?' These are questions his sister poses but cannot answer. The precise nature and cause of his breakdown was seemingly a mystery to the family in 1959, and it remained a mystery because they never discussed it. Something shameful had happened which they were unable to confront.
ECT was given as a course of treatment: twenty-four shocks over several weeks, in Lou's case. He was required to fast before being strapped to a gurney. Muscle-relaxant drugs and an anaesthetic were administered. A gag was placed in his mouth to stop him biting his tongue. Electrodes were then attached to the sides of his head, and an alternating current passed briefly through his brain. This was the radical, frightening part of the treatment. Patients lost consciousness instantly. The body went into seizure. Although insensible, Lou grimaced, looking as if he was in pain, his fists clenching and his legs trying to kick, his whole body trembling briefly before going limp. The treatment had first come into use in Europe in the late thirties after it was observed that schizophrenics felt better after a fit. The electric shock is meant to induce a fit artificially. It is not clearly understood why this should be beneficial, but some patients say they feel less depressed after ECT, and the treatment is still used. There are, however, side effects.
'Our family was wrenched apart the day they began those wretched treatments,' laments Bunny. 'I watched my brother as my parents assisted him coming back into our home afterwards, unable to walk, stupor-like. It damaged his short-term memory horribly and throughout his life he struggled with memory retention, probably directly as a result of those treatments.' She further claims that Lou's breakdown, and the therapy, 'set in motion the dissolution of my family'. Lou came to blame his parents for subjecting him to ECT, mostly Sid, against whom he developed what Bunny terms 'incredible rage'. The closest he came to expressing this was in his song 'Kill Your Sons', singing that he was told he would be able to live at home if he accepted the treatment, rather than being confined to a hospital, but his memory was so bad after the shocks that he couldn't even concentrate to read a book.
Years later Lou gave an interview in which he suggested that ECT was administered because he had homosexual inclinations. 'That's what was recommended in Rockland County then to discourage homosexual feelings,' he told _Creem_ magazine in 1979. Victor Bockris suggested in his 1994 biography of Reed that ECT was administered because his parents wanted to 'cure' his homosexuality. While Lou was given ECT at a time when homosexuality was classified as a mental illness in the United States,* and ECT was sometimes used as a 'treatment' for homosexuality, his sister rejects the theory 'that ECT was undertaken because Lou had confessed bisexual urges'. She believes her parents simply wanted to help Lou at a time when he was suffering a full-blown breakdown (not merely being wayward and showing mood swings, as Bockris writes), and they followed the advice they were given. 'My father, controlling and rigid, was attempting to solve a situation that was beyond him. My mother was terrified and certain of her own implicit guilt since they had told her this was due to poor mothering,' comments Bunny. 'It has been suggested by some authors that ECT was undertaken because Lou had confessed to bisexual urges. How simplistic and unrealistic. He was depressed, weird, anxious, avoidant. My parents were many things – anxious, controlling – but they were blazing liberals. Homophobic they were not. They were caught in a bewildering web, of guilt, fear and poor psychiatric care. Did they make a mistake in not challenging the doctor's recommendation for ECT? Absolutely. I have no doubt that they regretted it every day until the day they died.'
It is an unhappy story. Bunny believes that the accusations Lou later made against his father, or implied in songs, stem from this period. 'His accusations, of violence, a lack of love, seemed rooted in that time. The stories he related – of being hit, of being treated like an inanimate object – seemed total fantasy to me. I must say I never saw my father raise a hand to anyone, certainly not us. Nor did I see a lack of love for his son during our childhood.' Yet the anger and bitterness Lou developed were real, and vital to his art. Bunny poses the essential rhetorical question: 'Would Lou have become the artist he became without the furious anger that the treatments engendered?' Anger was one of the motifs of his songwriting, the anger of an outsider who specialized in writing about damaged, neurotic people like himself.
* The American Psychiatric Association classified homosexuality as a mental illness until 1973.
## II
## On to the Darkened Sea
### 1960–4
LOU WAS UNABLE to resume his studies at New York University. His name was removed from the register in May 1960, by which time he was eighteen years old and deeply unhappy. 'I was miserable when I was young,' he later said. 'I wouldn't want to be eighteen again.'
Following treatments at Creedmoor hospital and the Payne Whitney Psychiatric Clinic in New York, he recuperated at home in Freeport, looked after by his mother. He later spoke tenderly of Toby Reed nursing him back to health. By contrast, he remained furious with his father, though Sid probably felt awful about what had happened. The Reeds tiptoed around their son, frightened to do or say anything that might upset him, and Lou learned to exploit their guilt; this became their unhealthy relationship. And he remained emotionally fragile. Bunny reveals that even after he recovered from his breakdown her brother 'suffered from anxiety and panic attacks throughout his life'.
It was several months before he felt well enough to make another attempt at college. In September 1960 he enrolled at Syracuse University, where he had originally intended to go. Allan Hyman was now beginning his sophomore year, and he was having a wonderful time. He tried to take Lou under his wing when he arrived on campus, introducing him to his fraternity brothers with a view to Lou becoming a fraternity member, which was, he says, 'a huge mistake'. Lou was more rebellious than ever, projecting the image of a tortured intellectual musician-writer, part beatnik, part folk singer, part James Dean from _Rebel without a Cause_ , a film that had a major impact on more than one budding rock star.
Although Allan asked Lou to dress smartly for the fraternity party, he arrived in his scruffiest clothes. 'How could you wear that?' Allan asked him. While they were talking, a senior member of the fraternity came over. 'Mike, I'd like you to meet Lou Reed,' said Allan, making the introduction.
The senior looked the newcomer up and down. 'There's no way you will become a member of this fraternity,' he told him.
'You fucking asshole,' replied Lou. 'Do you think I would join this fraternity as long as you are a member? You are the biggest asshole I have ever met in my entire life. You should simply kill yourself.'
Despite this inauspicious beginning, Lou and Allan remained friends, and they formed a college band, the aforementioned LA and the Eldorados, the 'L' and the 'A' standing for Lou and Allan (who played drums). A changing cast of musicians included Richard Mishkin on piano and bass, and Nelson Slater, a Canadian art student, who sang and played guitar.
Sometimes they used the name Pasha and the Prophets, while Lou also played with another college band called the All-night Workers. The Eldorados was a dance band, essentially, with a regular Friday-night gig at a local golf club, as well as performing at college dances and fraternity parties all over the state. They wore matching velvet jackets on stage and drove to gigs in Richard's Chrysler, which had the band name painted across the trunk. Although Lou was a limited singer and a rudimentary guitarist, he was their front man and leader, singing most of the songs and choosing the set list, which typically included covers of Ray Charles and Jimmy Reed tunes, and pop hits of the day like 'Angel Baby'. Moody and unpredictable, he would refuse to play at all if he got out of bed the wrong side, and he liked to shock audiences by performing a composition of his own called 'Fuck Around Blues', which sometimes got the Eldorados fired. 'He was definitely crazy, like, "Man, he's nuts!", that kind of crazy,' says Richard Mishkin, adding that Lou told the band about the shock treatment he had received. He was angry about it, but he also used it as part of his image. As Richard got to know Lou better he came to a more precise diagnosis of his personality. 'In some ways he was kind of a borderline-personality type, in terms of narcissism [and] compulsive behaviour. He had visions of who [he was] and what he wanted to do that didn't necessarily coincide with anything else. Lou just didn't give a shit about people, in general. If he stepped on you, it didn't matter... It was about advancing himself.'
There was music in the air at Syracuse in the early 1960s. Several contemporaries went on to have professional careers in music, including Lou's friend Garland Jeffreys, a singer-songwriter who supported him in concert in the 1970s. His college band mate Nelson Slater later made an album for RCA, though neither he nor Garland rivalled Lou in terms of success. Nelson says that Lou learned guitar phrases from him, and claims to hear these guitar licks in 'Vicious' and 'Walk on the Wild Side'. 'I showed him the chords and changes that I thought were great, and Lou was very good at picking up things and utilizing them.' Nelson was impressed by Lou, whom he recalls as one of the big characters at Syracuse, but not everybody saw him that way. 'I remember a very quiet, introverted type of person who would hang around the arty-type places in town,' says another contemporary Felix Cavaliere, who went on to have a notable career in music as a member of the Young Rascals. 'Unless you really got to know him you couldn't penetrate that [façade].'
Lou performed at Syracuse with LA and the Eldorados. Richard Mishkin is seen playing bass (left).
One of Lou's most significant college friends was Lincoln Swados, an awkward, gawky boy who shared his interests in music and literature and had a similar history of mental-health issues. After Lincoln started complaining of hearing voices, his parents sent him to a psychiatric hospital, where, like Lou, he underwent ECT. They roomed together for a while, sharing a cluttered, noisome student pit with a piece of pizza stuck to the ceiling. 'There wasn't even a space to sit down,' says Richard Sigal, recalling a visit he made to see Lou. Richard couldn't find his friend at first in the gloomy room when he let himself in. 'I was looking at the bed, and in the dark I saw a hand and wrist sticking out from the sheets, so I kicked the bed and the sheet comes off and it's Lou – rubbing sleep out of his eyes.'
'What are you doing? It's five o'clock in the afternoon.'
Lou, a lifelong insomniac, explained that he was in the habit of staying up all night writing. 'The sheet was grey from use,' says Richard. 'Then I looked closer and there were scribblings all over the sheets, so I guess when he had a thought in his head he'd write it on the sheet.'
Although he kept irregular hours, Lou was a busy and active student who took a wide range of classes at Syracuse, including journalism, theology, drama, creative writing and botany. In Film Appreciation, he directed a short silent film featuring fellow student Peter Maloney, later a stage and screen actor, whom he cast as a clown, '[just] me, a clown, putting make-up on... It was fun.' He also directed at least one play and was an amateur deejay with his own campus radio show, _Excursion on a Wobbly Rail_ , named after a piece of cool modern jazz by the Cecil Taylor Quartet. As well as jazz, Lou played folk music, the blues, doo-wop and rock 'n' roll on air, and he was always looking for new discs to spin. Late one night he heard music coming from the floor below him in the dormitory and traced the sound to the room of student Jim Tucker, who grew up not far from Lou on Long Island. Jim's high-school friend Holmes Sterling Morrison was staying over. Known as Sterling or Sterl, Morrison was a loquacious, lanky youth of six foot three who liked to drink beer, chase girls and play guitar. He was also from Long Island, born the same year as Lou in nearby East Meadow. His widow Martha shades in the background: 'Blue collar. His mother was a cocktail waitress. His stepfather was a cop.' Having begun his college education at the University of Illinois, studying physics, Sterling was thinking of transferring to Syracuse. He never actually enrolled; he just hung around the dorm for a while. He and Jim were playing Lightnin' Hopkins records when there was a knock at the door. 'We thought it was the person in charge of the dormitory coming down to complain,' Sterling recalled. 'Instead it was the guy upstairs, who turned out to be Lou, and he needed records because he had a campus radio show and was running out of blues and that sort of thing.'
One day soon after this encounter a brass band struck up outside the dormitory as the Reserve Officers' Training Corps (ROTC, a form of military training) began band practice in the yard. Lou claimed to have been dismissed from ROTC in his freshman year at Syracuse after he pointed a gun at his commanding officer, a story he made part of his official biography when he became a solo artist. It sounds like something he either invented or exaggerated. None of his contemporaries remember it happening, though they remember Lou saying it did. Sterling was listening to the ROTC band when, as if in opposition to the martial music, an electric guitar rang out from Lou's room.
'Then I knew that Lou was a guitar player, too. And we just went from there.' Sterling and Lou jammed together at Syracuse, forming the basis of a friendship that developed significantly when they met up again in New York City in 1965.
In September 1961 freshman student Shelley Albin was getting a lift across campus when the guy who was driving her pointed to a student walking by the side of the road. 'This is Lou Reed,' he said, slowing to offer him a ride. 'He's really evil.'
Shelley was intrigued by the young man who got into the car. He didn't look evil. In his beatnik/folkie phase, Lou was dressed in black corduroy trousers, a shirt and jacket, his hair just long enough to touch his collar. 'He was cute,' says Shelley. 'His mouth would curl up. He had good skin. Slightly built. Five ten maybe, skinny... a small, light-weight guy, and not at all muscly... He had the build of a twelve-year-old, [and] a lot of curly hair.' She watched Lou in the rear-view mirror and saw that he was looking at her. She turned to meet his gaze. He was trying to make an impression. 'When we looked at each other in the eye in the car it was kind of funny: I know what you are up to here, it's an act. He was only in the car two minutes, [but] when he got out I knew I was going to hear from him, and I did within an hour.'
This was the start of Lou's first significant love affair, one that inspired several important songs, including 'Pale Blue Eyes', in which Lou quoted Shelley telling him off. 'I [told him], "Down for you is up". You like to be depressed... Just don't work it [out] on me.' She says that another Velvet Underground classic, 'I'll Be Your Mirror', was also based on a discussion they had. 'Those are words that I said to him... but it's always attributed to Nico.' Girlfriends of songwriters often believe that they know the 'real' story behind famous songs, and songwriters have been known to flatter girls by leading them to think that they inspired certain songs, whereas inspiration often comes from sundry sources. Although Shelley was certainly an important muse, she is unlikely to have been Lou's only source of inspiration. His literary influences should never be overlooked. 'Pale Blue Eyes' is a phrase in William Burroughs's _Junky_ : 'He looked at me with his pale blue eyes that seemed to have no depth at all.' He may likewise have borrowed inspiration for 'I'll Be Your Mirror' from another favourite book, John Rechy's _City of Night_ , in which the masochist Neil says: 'Youll [ _sic_ ] exist in My Eyes! I'll be a mirror!'
More specifically, Lou mentioned Shelley by name in the original version of 'I Can't Stand It', while other songs, including 'New Age', may include references to their relationship. Shelley was the kind of attractive, intelligent, friendly girl who could inspire a boy to write songs. She was Jewish, from Illinois, majoring in art history. When she arrived on campus she dressed conventionally in long skirts and cardigans, but like Lou she had a bohemian streak and soon began to wear more informal clothes. She was intrigued by Lou from the start. 'I knew from the second I met him this wasn't an ordinary person.' One of the first things he told her was the story of his ECT. 'That was like his introductory bit. "This horrible thing happened to me... I'm tortured, and I don't have any memory, and I'm this and I'm that, and I'm a little weird for today's college kid, and just a little dangerous."'
Shelley Albin.
'You just don't scare me,' she said.
They started to date. 'Look,' Lou told a friend proudly, 'you couldn't do better.'
The fact that Lou now had a regular girlfriend was significant. It wasn't until the late 1970s that he identified himself in public as gay, having previously danced around the subject. In one interview, he reflected on his youth and suggested that he had been acting straight with girls at college when he felt attracted to male students. He had a crush on one boy in particular, but never did anything about it, and spoke of the torment he went through. 'I remember that time in college, sitting around, trying to get myself together so I could "do it right" – and I resent it. It was a very big drag,' he said, suggesting plausibly that this embittered him. Bitterness was one of his characteristics. 'If the forbidden thing is love, then you spend most of your time playing with hate.' He added that his lack of interest in women attracted girls, perversely. 'Being gay, I have found that so many women – deluded creatures that they are – are attracted to you because you're not interested in them... it came across as the ultimate cool.'
While this rings true, it isn't the whole story. Lou had an active heterosexual life in college and afterwards, so much so that girlfriends like Shelley struggle to see him as gay. 'I never thought of him as being gay at all, or even bisexual.' Rather, it seemed to Shelley, and women who came after her, that Lou flirted with homosexuality to create an image and get a reaction. 'He always walked in a very effeminate way, but that was a very studied thing,' she says. He admired handsome men, but she says that this was like a joke. 'There was a guy that we both liked very much [who] was in a lot of my classes... Lou said, "I really like him, I'm going to get him." [I] said, "No, no, he's for me." Neither one of us did anything about it. Maybe he did... I doubt it.'
Shelley gained a deeper understanding of her boyfriend at Christmas when he invited her home. Lou had recently acquired a dog named Seymour, the first of several dogs he owned over the years, but he seemed incapable of looking after Seymour at college. Part of the reason for the trip home was to give the dog to his mother. Shelley hadn't been at Oakfield Avenue long before she concluded that Lou was spoiled by Toby Reed. 'Lou was, in his way, very dependent on his mother and very close to his mother, very much a mother's boy [and] a Jewish Prince... Lou could manipulate his parents very easily at home. I saw him do that.' He seemed to delight in annoying his father. Shelley noticed Lou exaggerating his effeminate side in front of Sid. 'I could see that [Sid] was frustrated with Lou being effeminate, and [Lou] would act super-effeminate in his presence. He would flip his hands and wiggle his ass.' Despite what Lou told her about his father, Shelley liked Sid. She was learning not to believe everything Lou said. 'When you looked at this man he had a twinkle in his eye and a ready, warm smile.' Many of Lou's friends agree that Lou misrepresented his father. 'He gave off to me an aura of being absolutely warm, absolutely one hundred per cent concerned about Lou as a parent [and] mystified by Lou's behaviour.'
Sid was evidently pleased that Lou had brought Shelley home. He increased his allowance so Lou could take her out, and offered the couple the use of the family car. Lou drove Shelley to the Hayloft, a nightclub in nearby Baldwin. 'I don't remember if he told me ahead of time we were going to a gay bar.' Although Lou's straight Long Island friends avoided the Hayloft assiduously, Lou was a regular at the club, where he also worked briefly. When Richard Sigal questioned Lou about his part-time job, Lou said that the male customers sometimes touched him. 'What do you do about it?'
'I just laugh.'
'Lou was an experimenter,' concludes Richard, a sociologist in adult life who lectured on what he calls 'deviant behaviour'. 'I think he probably experimented with everything at one time or another. I've always looked at his sexual adventures that way, because as a kid I had a feeling that Lou was attracted to boys, [yet] he ended up being married three times. So he obviously liked women.'
It was at the Hayloft that Lou probably first encountered transvestites and transsexuals, for whom he developed a fascination. Candy Darling, whom Lou wrote about in 'Candy Says' and 'Walk on the Wild Side', was the alter ego of James Slattery from nearby Massapequa, Long Island, and Shelley believes that Lou may have known him initially at the Hayloft, getting to know him better later in New York City. Shelley took her visit to the gay club in her stride, enjoying a dance with a woman. 'He thought it was wonderful that I was dancing with a woman... and he was trying to get me to go home, or out in the parking lot with the girl I was dancing with, which was not at all what I was interested in, but he was always setting up a scenario.'
During the Christmas break Lou also drove Shelley into Manhattan – to buy drugs in Harlem. Surprisingly, the thirty-mile ride into the city was the scariest part of the expedition. With his poor eyesight and limited concentration, Lou was a bad driver. 'I remember thinking, Oh my God, this is horrible, he's just the worst driver in the world! I was much more frightened of his driving than going to Harlem.' Then, as now, drugs were available at the corner of Lexington Avenue and 125th Street, the junction Lou invoked in the first verse of 'I'm Waiting for the Man'; also from connections in the nearby brownstone apartments. 'My memory is that [he said], "We have to pick up some drugs," and that it was heroin.' They went into a brownstone, where they met a guy Lou seemed to know, sat around talking for ten minutes, then left to hear some jazz in Greenwich Village. 'I got the impression he liked to shock me by taking me to Harlem. He knew we shouldn't be there and now he's taking this – as he used to call me – "little flower from the Midwest", into the evil [city]. He liked that, [and] he liked to scare himself.'
Although Shelley can't be sure that Lou bought heroin on this visit to Harlem, it is clear that he first used heroin during his college years. Drugs had been part of his life since high school, illicit drugs and the prescription medication doctors gave him (pills he inevitably misused). Drugs were also part of his imaginative life. Lou read about junkies before he became one, and he associated drugs with the underground culture he felt drawn to. People he admired took drugs: the jazz musicians he listened to smoked grass and used smack; William Burroughs was a junkie; so was the comic Lenny Bruce, whose work Lou adored. As noted, Lou started smoking pot at high school, and he continued to smoke at college. Indeed, he became a dealer. One night he went to see fellow student James Gorney with some marijuana he needed to hide. 'Clearly, he was drug dealing. He was drug dealing in a substantial quantity,' says James, who took care of the drugs in return for as much as he could smoke. 'If he had been caught, that would not have gone well with him.' James's girlfriend Paula Swarzman, later his wife, corroborates the story, but thinks the amount involved was smaller than the 'large suitcase' James talks about. 'It was buy a little stuff for my friends,' she qualifies. 'It was dealing, but no capital "D".'
Shelley confirms that Lou sold marijuana on campus, and says he dealt other drugs, too, up to and including heroin. 'He was selling it [heroin] to another fraternity group,' she says. Lou would give Shelley bags of marijuana to store in her dorm. 'I was stupid. I said OK. I didn't even think about it. And he'd call up, "I need so much of it, give it to somebody and get the money"... What he was really doing was keeping himself out of trouble at the expense of this girl.' When he bought peyote from a supplier in Arizona, he had the drugs posted to Shelley's address. 'That's not nice. You don't do that to your girlfriend... He could be a prick in that way. He took care of himself first, even with someone he loved madly. And he did, and I was crazy about him.'
Drugs got Lou into trouble, initially in an unexpected way. Karl Stoecker was an art-student friend who lived off campus with his young wife. 'We both liked Karl,' says Shelley. 'He was cute.' Lou invited Karl home to Freeport a couple of times. During one of these trips, he drove into a toll booth on the parkway. 'Rather than go through the aisle where you are supposed to pay the toll on the highway, he ran right through the middle of the thing,' laughs Karl. The accident resulted in Lou losing his licence. He didn't drive again for many years. His sister remembers the accident and says that he was probably 'high on drugs'.
Karl was one of a number of friends who helped Lou publish a literary magazine on campus in 1962. _Lonely Woman Quarterly_ was named after a tune by the saxophonist and free-jazz pioneer Ornette Coleman, who abandoned the harmonic conventions of mainstream jazz in the 1950s, creating strange, semi-abstract soundscapes. Some people hated this radical new music; others, like Lou, were enthralled by what was true outsider art. Free jazz became a passion for a man who was always attracted to the difficult and the outré, and would remain so long after he became a rock 'n' roll star. Indeed, Lou's musical career can be interpreted as a clash of experimentation and conventional melody, the tide ebbing and flowing from project to project. Jazz ideas can be heard on almost all his records. 'I liked Ornette Coleman a lot, and [his trumpet player] Don Cherry a _whole_ lot. I used to always go to see 'em at clubs [in New York], the original Ornette Coleman Quartet.'
Shelley and Karl created the illustrations for _Lonely Woman Quarterly_ , while the writers included Lou, Peter Maloney and Jim Tucker. They assembled the first issue at the Savoy Restaurant on Marshall Street, a student hang-out where the owner let them use his mimeograph machine. The magazine – neatly typed by Lou – contained the first substantial examples of his writing, and while much of what he wrote was verbose and jejune, a recognizable voice emerged. The first issue featured a short story about incest, which, together with other articles, showed Lou to have been reading Freud more than anything more sinister. Curiously, he wrote these pieces under the name Luis [ _sic_ ] Reed.
In the second issue he published a character assassination of a fellow student named Michael Kogan, the leader of the right-wing Young Americans for Freedom organization on campus, and an aspiring rabbi. Lou and Michael were in the same theology class, taught by Professor Gabriel Vahanian. They attended a party at the professor's house one night, memorable for the fact that Lou sang an outrageous song casting aspersions on the professor's sexuality. 'Vahanian threw us all out,' says Kogan, who doesn't recall speaking to Lou directly before he saw himself ridiculed in the _Lonely Woman Quarterly_. He was shocked by the spiteful tone of the article, which he showed to his lawyer father, who complained to the chancellor of the university, who told Lou to apologize, which he did. 'He didn't mean it – he hated me,' says Kogan. 'It was a forced apology.' Looking back, he believes that he presented an easy target. 'He was a Jew who was at war with his Judaism and I was very active in the Jewish student groups, so it was as religious as it was political. I represented to him everything he hated, from wearing a tie and jacket to school to being a conservative Republican [and] an active member of the Jewish community.' The antipathy was mutual. 'I detested Lou Reed. I found him a loathsome person.'
Lou reserved his best writing for the third and final issue of _Lonely Woman Quarterly._ In his short story 'And What, Little Boy, Will You Trade for Your Horse?' he wrote about a young man who tours the pornographic bookshops, diners, pinball arcades and sex cinemas around Times Square in New York, watching the street hustlers and 'mincing and giggling' boys, before going to a bar, where he is picked up by a man in drag, 'a regular queen'. Here was the landscape and subject matter of many Lou Reed songs, and the story rang true, as if drawn from experience.
Yet he was still with Shelley. When summer came and she went home to Illinois for the vacation, Lou missed her. He wrote to her regularly, telling her about his trips to the Hayloft, and sending her a macabre short story, reminiscent of the work of Edgar Allan Poe, about a boy named Waldo who mailed himself to his girlfriend, Marsha, only to perish when she plunged a blade into the box to open it. The Velvet Underground later set this darkly comic story to music, creating 'The Gift'. 'He used to call himself Waldo and he used to call me Marsha,' explains Shelley, who was delighted to receive the story in the mail. Lou then elaborated on the joke by sending her a large toy animal in a box. 'It was the story come to life... He was fun when he wanted to be.'
Later that summer Lou flew to Illinois to see Shelley. Typically, he attempted to create discord in the Albin household by espousing right-wing political ideas to annoy Shelley's liberal parents. Nevertheless, the Albins lent him their car. 'He crashes the car into a ditch,' sighs Shelley, who compares Lou to a wayward kindergarten child. 'If he was a kid, you'd say, This is one you've got to keep your eye on, or he'll go and put someone else's hand under the paper cutter, and get the other kid to feel responsible. He required watching.' The Albins were glad to see the back of Lou when he went home, and told Shelley she could only return to Syracuse if she promised never to see him again. Shelley agreed, but continued the relationship. '[My mother], to this day, if you mention Lou Reed, [will say] "What a horrible, disgusting person!"'
Delmore Schwartz arrived at Syracuse for the academic year starting in September 1962, Lou's junior year. Schwartz was hailed in his youth as a brilliant new writer, the author of an outstanding short story, 'In Dreams Begin Responsibilities'. But his work didn't sell, and he suffered with mental illness. Manic and paranoid, he fell out with friends and colleagues, drank to excess and took drugs, to the extent that he was repeatedly admitted to hospital. By the time he came to Syracuse to teach, aged forty-eight, he was a physical and emotional wreck, but nevertheless he was an inspiring figure. Saul Bellow used Schwartz as the model for the character Von Humboldt Fleisher in his novel _Humboldt's Gift_ , 'a wonderful talker, a hectic non-stop monologist' who elevated his students to 'a state of great exultation about literature'. Lou was such a student.
'By the time we encountered him, Schwartz was [taking] a lethal combination of speed to get him going, and barbiturates to slow him down, probably mixed with alcohol,' says James Gorney, who was also in Schwartz's creative-writing class. 'He was dishevelled, looked like he had rolled out of bed. Often the socks did not match, the shoes did not match. Looked kind of crazed. But none of that mattered.' Schwartz lectured his students on the work of T. S. Eliot, James Joyce and W. B. Yeats, teaching them to appreciate the music in the language, which surely helped shape the writer Lou became. The lectures on _Ulysses_ were the highlight of the course. 'It was really a course in teaching you how to read _Ulysses_ ,' explains James. 'One time he said, "When you first read a paragraph, and you don't understand it, read it aloud and listen to the music and you'll probably understand it."' If they still didn't get it, he said that they should have a few drinks, because Joyce wrote much of the book under the influence. Lou took this advice to heart. A lot of his own writing was done when he was drunk or stoned.
One day Schwartz brought his copy of _Finnegans Wake_ into class. The book was in ruins, the binding broken, the pages loose, the volume strapped together with shoelaces. He beckoned his students to gather around as he opened the book, explaining how he had been studying the text for years. He had filled virtually every blank space with notes, in various coloured inks. He said that Joyce had referred to a particular edition of the _Encyclopaedia Britannica_ while writing the book, so he had read every volume of that edition in an attempt to decode it. The magnitude of this task was almost beyond the comprehension of the students. The classroom windows were open as Schwartz told his mad story, and a gust of wind caught the pages and blew them about the room. Lou and others dashed to save them. The scene left a deep impression. 'I remember Lou and I talking about this a lot,' says James. 'Very poignant. It touched us both very much.'
The teacher bonded with his students, meeting them for drinks in the Orange bar (orange being the Syracuse colour), where he told stories about famous writers he had known, and attempted to conduct informal lessons. Lou made a particular connection with his tutor: a fellow Jew from Brooklyn who was old enough to be his father but seemed to understand him better than his own father, who wanted him to take up a conventional career after college. Lou didn't want to become a fucking accountant like Sid. He wanted to write and make music. Schwartz urged him to follow his instinct, warning him never to compromise his talent. 'Once, drunk in a Syracuse bar, he said, "If you sell out, Lou, I'm gonna get ya."' Delmore Schwartz became Lou's first mentor, a beloved and heroic figure in his eyes.
As Lou began his intellectual love affair with Delmore, his temporal affair with Shelley came to an end. Things hadn't been going well for a while by the time they attended a fraternity party at Syracuse in January 1963. Lou took another girl into one of the rooms at the party. Shelley was invited to watch. She walked out in disgust. Lincoln Swados gallantly escorted her home through the snow. A fourteen-month romance was over. Shelley loved Lou, but he was manipulative and sometimes nasty. 'He did some shitty things to me. He wasn't happy until he got somebody to call him a prick.' That word again. 'That's what he was after.'
Yet Lou remained infatuated with Shelley, never seeming to understand why she left him. As late as the 1980s, when he was middle-aged and famous, he was still calling her, trying to win her back. The couple did get together again briefly, at Syracuse and afterwards, but she chose not to take things further. 'He couldn't understand why we couldn't just pick up again – for the next twenty years. He was really pissed at me for twenty years.' She had decided that Lou was not the man she wanted to spend her life with, marry, or have children with. Lou never had children, and Shelley thinks that was wise. The issue had come up. 'He liked the idea of [having children],' she says, but she decided that someone who couldn't even look after a dog was not the man for her. 'I had an innate sense, _This is not someone you want to have kids with_.' Yet Lou could easily have become a father at this stage in life. Prior to the widespread use of the contraceptive pill, it was common for college girls in the early 1960s to become pregnant, and many had illegal abortions. Shelley reveals that Lou got girls pregnant on no fewer than three occasions at Syracuse, but says that all three pregnancies were aborted. 'There were people he made pregnant, through his own lack of responsibility. They didn't... choose to have his children.'
There were new romances and new influences for Lou in his final college year, starting in September 1963. At the start of the first term, he pinned a poster to his bedroom wall, a black-and-white image of a solitary man standing under the lights of Times Square. This was the cover of John Rechy's groundbreaking gay novel _City of Night_ , which documented the life of an itinerant hustler in bold, poetic language that foreshadowed Lou's mature songwriting.
Despite his evident fascination with gay life, Lou's relationships were still primarily, possibly exclusively, with girls. Soon after he broke up with Shelley he started to see her friend Erin Clermont, a likeable, gamine girl with an infectious sense of fun who was also in Schwartz's class. They were to have an unusually long relationship, lasting until the 1990s. 'From the day I met him I accepted him as this complicated, different guy [who] had something to offer. It wasn't music at that point. It was his head, and what he was into, and we shared these interests,' says Erin, who told Shelley as soon as she had slept with Lou, so there wouldn't be bad blood between them. The girls remained friends, often discussing Lou, who fascinated them both. Although Shelley knew him first, Erin maintained the longest relationship. 'I lasted all those years because I was eternally interested in him, not in love with him, although we did love each other. We said that.'
Although Erin and Lou shared what she calls an 'intense interest in sex', theirs was more a friendship, allowing her to make a cool study of Lou over the years. Like Shelley, she considered him to be predominantly heterosexual but, unlike Shelley, she ultimately concluded that he was bisexual. She says he was highly sexed in his youth. He slept around, and was pursued by at least one jealous boyfriend at Syracuse as a result. Never a particularly brave man, Lou ran and hid. Erin describes Lou as a 'coward', adding that 'he was always horny'. Lou boasted to male friends of all sorts of sexual adventures, including having sex with a girl in a cemetery and taking part in a threesome with two women in New York. Some of this may well have been fantasy. Such a level of sexual activity certainly wasn't a constant in his life. There were long periods later when he gave the impression of having little or no interest in sex.
He possessed an insistent, nagging personality that could be irritating to some people but which Erin found amusing. 'He was so probing. He wouldn't let go of things. He would just go on and on and on until he got his way, talked you into it. There was a great deal of neediness, too... He was extraordinary in that sense: how he could get you in his mindset, convince you to come out at three o'clock in the morning – "No! I have to go to work tomorrow!" – and just keep at it [until you gave in]. "All right! All right!"' He had characteristic poses, mannerisms and phrases. Erin noted his camp walk and sardonic tone of voice. He was captious by nature, reluctant to give praise, and would laugh mirthlessly, sarcastically, if somebody made a joke that didn't quite work. It was a challenge to get him to laugh properly, and a pleasure when he did so. 'If I said something really funny he would be suspicious,' she recalls. '"Where did you get that?"' It was indicative of his grudging disposition that his favourite phrases included 'Don't make fun' and 'Lucky you,' said in a laid-back tone of voice, 'flat, very flat – but amusing in its own way if you knew him... "Be nice" was in the same category as "Don't make fun," [also] said in his flat, sardonic style.'
Like many musicians of his generation, Lou was influenced by the folk music revival of the early 1960s, so much so that Nelson Slater says that when Lou first came to Syracuse he was 'presenting himself as a folk singer'. The biggest star of the folk movement by 1963 was Bob Dylan, whom Lou saw perform at the Regent Theater in Syracuse in November. Then twenty-two, Dylan was only ten months older than Lou, but he was already established as a major international figure. His second album, _The Freewheelin' Bob Dylan_ , released that year to great acclaim, contained sophisticated songs that articulated the mood of the young and inspired a generation of songwriters, including Lennon and McCartney, to be more ambitious in their work. Lou was one of many budding songwriters who fell under Dylan's spell, learning his songs and performing them in his style, to the extent that he took briefly to playing a rack harmonica. 'We would sit and listen to Bob Dylan [records] and we learned the chords to everything he played,' says Richard Mishkin, who says that Lou was 'blown away' by Dylan. '[But he] would never admit that he was impressed as he was, and would never admit then that that's who he wanted to be.' Nevertheless, Dylan's influence helped Lou make a leap forward in his songwriting.
Drugs were also a factor in his development as an artist. Although they were no longer an item, Lou and Shelley remained in touch, and she says that he 'really got heavy into heroin' in his last year at college, around the spring of 1964. 'I can handle heroin,' he told her. 'I'm not gonna [get addicted]. I just do it to get the experience. But I'm in control.' Lou needed experiences to write about something beyond the lightweight pop he had so far created. Six months before he met Delmore Schwartz, before he had heard _The Freewheelin' Bob Dylan_ , he recorded two of his compositions for the same New York producer who had recorded his high-school band, the Jades. This time Lou also sang lead vocals. The songs, 'Your Love' and 'Merry Go Round', were trite. Delmore Schwartz had little regard for this sort of pop music. There was one magical evening at the Orange bar in early 1964 when Lou and James Gorney were drinking with their teacher and the Beatles' number one 'I Want to Hold Your Hand' came on the jukebox. Delmore, James and Lou played the song repeatedly, as they got drunker and drunker, and left the bar arm in arm singing the lyric. This was, however, a rare example of Schwartz enjoying pop. He expected Lou to do more serious work. 'I remember [Schwartz] lecturing me about how it was my job to make sure that Lou went to a proper graduate school, like Harvard or Princeton, and become "a real writer", as he would put it, not a crappy rock 'n' roll writer,' says Shelley. It is surely no coincidence that Lou started to write more ambitious songs after falling under the twin influences of Schwartz and Dylan, and that his creative breakthrough came with a song about a subject he understood.
In his final months at Syracuse, Lou wrote 'Heroin', describing what it feels like to inject and get high on heroin in language that is convincing, thrilling and scary. He later gave a colourful, maybe embellished, account of how he came to write this remarkable song: 'I had recently been introduced to drugs at this time by a mashed-in-faced Negro whose features were in two sections (like a split-level house) named Jaw.' It is astonishing that he wrote such an accomplished song so soon after 'Your Love', but Lou insisted that this was the case, and Richard Mishkin remembers accompanying Lou while he worked up the tune. 'I used to sit around with him. We'd smoke pot and I'd play the bass, and he'd play the guitar, and he wrote the words [while] I fudged around with the bass line.'
Part of the reason Reed is an important artist is the fact that he became one of the first singer-songwriters of the 1960s to combine poetic, literary language with rock 'n' roll, also addressing subjects that hadn't been tackled before in popular song. He was much more literary than most of his contemporaries, someone for whom the work of writers including Raymond Chandler, Dostoyevsky, Edgar Allan Poe, Hubert Selby Jr and Delmore Schwartz was as big an influence as any musician. One can compare him best in this sense to Leonard Cohen. Throughout his career, Lou wrote poetry, short stories and non-fiction essays as well as songs, and he long aspired to write a novel. Some of his prose and poetry was published, and he won a poetry prize in the 1970s. Although he never managed to finish his literary novel, Lou often said that he hoped his songs would be seen in aggregate as the equivalent of the Great American Novel, and he began that ambitious work with 'Heroin'. Some of the lyrics put one in mind of what Dylan was already doing. The references to the 'clipper ship' sailing on 'darkened seas' in the third verse, and 'the politicians making crazy sounds' in the fourth have a Dylanesque quality, but the subject matter was Lou's own. Nobody else working in popular song in the early 1960s, Dylan included, was writing explicitly and realistically about using hard drugs. Rather, Lou took his lead from authors like William Burroughs, who had covered this ground in prose. Eschewing euphemism, he used authentic jargon to describe shooting up, and he didn't spare his listeners the unpleasant details, describing for example a junky method of attaching an eyedropper to a hypodermic needle. When the needle pierces the vein, a little blood 'shoots up the dropper's neck'. He didn't moralize but presented drug use frankly, as the way his character dealt with his problems 'in the big city'. For Lou, life was always more intense in the city.
I have made a big decision
I'm gonna try to nullify my life...
In shooting up, the character flirted with death but didn't care about the consequences. One minute he wanted to die, the next he was in a state of bliss. He brushed away well-meaning people who tried to help him, telling them to 'go take a walk'. This sneering wise-guy line was classic Lou, the cocky nihilist character he created and sold to the public over almost fifty years, attractive in the way that a book like _Junky_ is appealing to people of a certain disposition, especially so perhaps to those for whom addiction is merely an exotic spectacle. The song also acknowledged that drugs might not be the answer; Lou's character seemed bewildered about what he was doing to himself.
'Heroin' was a major breakthrough, though it may be that Lou worked on the song over several months before he perfected the lyric. In any event, his college career was over. He received his BA in Liberal Arts in June 1964, during the first summer of the so-called British invasion of the American pop charts, headed by the Beatles with 'Love Me Do' and 'A Hard Day's Night'. Lou was as big a fan as anyone. In recent months, he had become increasingly unpopular with the local police, who disliked his disrespectful manner and knew that he was involved with drugs. They threatened to rough him up unless he left town as soon as possible, 'because of various clandestine operations I was alleged to have been involved in', as Lou later wrote, making the matter as mysterious as possible. 'They knew who he was and they didn't like him,' says Shelley, more directly. Nevertheless, Lou hung around campus for a couple more weeks, staying with Shelley. They'd recently had one of their reunions and she wasn't feeling well.
When Lou returned to Freeport he, too, fell sick. A yellow tinge appeared in the whites of his eyes, an early sign of jaundice that is itself the result of liver damage. Lou's doctor diagnosed viral hepatitis, which Lou blamed on his heroin connection and a dirty needle. 'Jaw gave me hepatitis... which is pathetic and laughable at once, considering I wrote a famous amplified version of the experience in a song ['Heroin'].' He telephoned Shelley, who was home in Illinois, advising her to see a doctor, because hepatitis can be transmitted through intercourse. 'That was the first time he had hepatitis,' she says. It was the start of a lifetime of lifestyle-related health problems for Lou, who had cast off on to the darkened sea.
## III
## Honeybun, Black Jack, Sterl and Moesy
### 1964–5
TOWARDS THE END of his time at Syracuse, Lou was approached after a gig by Terry Philips, a representative of Pickwick City Records of New York, who explained that he was looking to sign songwriting talent and had driven up to Syracuse to hear Lou on a recommendation. 'I heard him play and I really liked him,' recalls Terry, who says that one of the songs he sang that night was 'Cycle Annie', a faux-surf song about a Californian girl who bicycled in the nude, a very different proposition to 'Heroin'. In fact, Lou would always have the capacity to create comic songs. 'He couldn't sing, and he couldn't play, but he had a sound and he had a point of view, and beyond some of the poppy songs he wrote he was very smart. He wanted to be a writer.' Terry was a songwriter himself. He had been a staff writer for Leiber and Stoller, which is how he came to know Phil Spector. Terry and Phil wrote together before Spector went on to bigger things. Terry was proud of these connections, and Lou must have been impressed by his CV because, when Terry offered him a songwriting contract, he signed.
Sid Reed was concerned to hear that his son had blithely put his name to a Tin Pan Alley deal, and contacted Philips to discuss it. Terry agreed to come over to Freeport on a day when Lou was out of the house so they could talk privately. It was at this stage that he discovered how worried the Reeds were about their son, beyond his choice of career. '[Sid] also wanted to make sure, as I later found out, that his son, who had a pill problem, was not getting in with somebody who would get him further into trouble.' Although Lou had been using heroin and other substances at college, Terry insists that he was now 'popping pills like crazy'. This probably refers to his misuse of Placidyl, which had been prescribed as a sedative, though he may have been abusing other pills as well. His parents were aware of the situation, and they were worried. 'The mother used to sit there and cry,' says Terry, who spoke to Sid and Toby several times over the following months, developing sympathy for the couple. 'The father was a good guy. Lou beat the shit out of him [emotionally]. He cursed him, and he did all kinds of stuff, but he was a good father.'
Lou started work at Pickwick after he got over his hepatitis, catching the Long Island Railroad each morning from Freeport to Long Island City in Queens. At the end of a street of industrial buildings leading down to the East River was a warehouse where Terry worked in a corner office with two other songwriters, Jimmie Sims and Jerry Vance, writing pastiches of the pop hits of the day, in various genres, which they recorded under fictitious band names on the office reel-to-reel tape machine for the cheap exploitation albums Pickwick purveyed. The warehouse was full of boxes of records awaiting shipment. Lou later dismissed his Pickwick job as 'hack shit' but conceded that it was a useful experience. 'I had this horrifying job writing songs on command, like a songwriter machine. But that job taught me about the studio, and how to write really quickly. They'd just say, "Give me ten surf songs."' Blessed with natural writing ability, and a love of rock 'n' roll, Lou found that he enjoyed the challenge of writing to order. He wrote songs for films in later years. In 1981, he even wrote for the rock band Kiss, a stretch for a serious artist.
Lou usually turned up for work on time, but not necessarily in a fit state to work. 'There were days that he could hardly stand up,' says Terry. Lou was so stoned that he bumped into furniture and once passed out in the office. 'We had to call an ambulance... we thought he'd died.' He was also a handful when he was compos mentis. He groused about his job and gave the impression that he looked down on Terry, which didn't endear him to his boss, who considered himself to be a sophisticated, switched-on music business insider. 'I didn't have to have long hair in order to be a cool person,' says Terry. 'I liked his talent, but I didn't like him. He was obnoxious, he was pedantic. He was a punk. A guy with a big mouth who would always be a wise guy, be obnoxious, and would have me threatening to bust him one in the chops if he didn't shut up... I would grab him and say, "Stop this crap!" Or he would come in so whacked out I would get him coffee and I would spend half the time lecturing him... He kept bullshitting that he wanted to write and be a writer. I said, "So be a writer. I'm not telling you not to come in and work on a book idea, if you have it. I'm just saying, see if the idea leads to songs. That's why we are here. These people are giving us the money, and we are giving you the money."'
During his time at Pickwick, Lou wrote a string of songs, all pastiches of chart hits of the early 1960s, including 'You're Driving Me Insane' (by the Roughnecks) and 'Put a Tiger in Your Tank' (by the Intimates). The lyrics of his most notable composition, 'The Ostrich', instructed kids how to perform an impossible new dance that required stepping on their own heads. It is amazing that the author of 'Heroin' wrote such nonsense. One explanation is that Lou was stoned. Terry says he was high on pills when he wrote 'The Ostrich', as he was when they recorded the song on the office Ampex. Lou shrieked the lyric like a maniac while Terry and others (it is unclear who played all the instruments) accompanied him and yelled like banshees in the background. 'It made no sense. It was stupid. He was on pills, and I had a bottle of Old Bushmills, and we were feeling good.' None of this would be of much import save for the fact that Terry decided to release 'The Ostrich' as a single. 'I had to fight [with the company] to get them to release this record... Even Lou thought I was nuts.'
In order to promote 'The Ostrich' Terry needed personable young musicians to pretend to be the band that cut the record. He was at a party in New York when he was introduced to a graduate mathematician named Tony Conrad, who played fiddle in his spare time, and his friend John Cale, a tall, slim Welshman with a solemn face and floppy, dark hair. Terry invited them over to the office. They brought a friend, an artist named Walter De Maria, who played drums. Lou was in the music room when the visitors arrived. Cale said, 'My first impressions of Lou were of a highly strung, intelligent, fragile college kid in a polo-neck sweater, rumpled jeans and loafers,' describing the meeting in his memoir, _What's Welsh for Zen_. He added that Lou struck him as being 'bruised, trembling, quiet and insecure'. Lou played them 'The Ostrich'. Then he and John sat down to talk over coffee.
They discovered that they were the same age, born one week apart in 1942, making them twenty-two. John was from South Wales, where his father was a coal miner. He had studied piano from the age of seven, taking up viola at grammar school, at which point he also started to compose music. Like Lou, he was an intelligent, gifted, but fragile person who suffered a nervous breakdown in his teens. At eighteen, he won a scholarship to Goldsmiths College in London, where he began to mix with the leading figures in avant-garde music. An interest in _musique concrète_ led to a correspondence with the American modernists John Cage and Aaron Copland. John had come to the United States in 1963 to study at the Berkshire Music Center at Tanglewood, Massachusetts, on a Leonard Bernstein scholarship. One of his teachers was another leading modernist composer, Iannis Xenakis. It was a stellar start to a career.
Although prodigiously gifted, John suffered a lack of self-confidence that made it possible for others to dominate him. He also admits to 'a crackpot side' that made him a natural outsider. John loved to shock. He achieved this at Tanglewood in the summer of 1963 by taking an axe to a table during a conceptual performance. In September he moved to New York City, where he became involved in the downtown music scene, taking part in a marathon recital of Erik Satie's _Vexations_, organized by John Cage, that lasted nineteen hours. He also met La Monte Young, a philosopher-musician who led recitals at his loft apartment under the title of the Theater of Eternal Music, performing strange compositions that featured sustained notes sounding like the buzz of a machine or the drone of an insect. The 'drone' was one of the big ideas of the minimalist movement, of which Young was a pioneer. Musicians who collaborated with Young and Cale in these experiments included Tony Conrad and Tony's buddy Henry Flynt, another Harvard-educated mathematician who played violin; also Billy Linich, aka Billy Name, a photographer friend of the pop artist Andy Warhol, who sang; and Angus MacLise, a proto-hippie who played hand drums. 'We created a kind of music that nobody else in the world was making and that nobody had ever heard before,' Cale wrote. One of his achievements was to bring these new ideas to a wider audience.
Radical politics and drug use were part of this scene. Angus MacLise would sometimes be found unconscious on the street. Cale admits to dealing drugs at the time to supplement his income. He was arrested during a deal in 1964 and spent a night in jail. 'The problem was that I had a nickel bag on me. But when they analysed it, it was nothing, so they let me out the next day.' After this incident he moved in with Tony Conrad, sharing a fifth-floor walk-up at 56 Ludlow Street on the Lower East Side, then largely populated by poor immigrant families and therefore affordable for bohemian musicians. Angus lived next door. This was Cale's domestic set-up when he met Lou.
Like the Ancient Mariner, Lou felt compelled at this stage in his life to tell everybody he met about his past, and so he told Cale almost immediately about the ECT. 'I think I'm crazy,' he said. John didn't think Lou was insane. On the contrary, he was impressed to discover that he had detuned his Gretsch guitar to one note to create the drone sound on 'The Ostrich'. It was the same technique Cale used with La Monte Young. He was even more impressed when Lou played him his serious songs. Lou responded warmly, as he usually did with people who liked his work, no doubt recognizing someone who could be useful to him. Lou was not as accomplished a musician as John Cale. 'Lou, to say he was just an average player, is giving him more credit than he deserved. He was a crappy player,' says Terry Philips, a little harshly. Apart from having a technical knowledge of music theory and composition, John could play a variety of instruments to a high standard, and he sang beautifully. By contrast, Lou merely strummed guitar, and he struggled to sing in tune. He would be able to make much more interesting music with his new British friend than he could alone.
Meanwhile, Terry had work for them to do. He put Lou together with Cale, Tony Conrad and Walter De Maria to form a band called the Primitives. Featuring two future members of the Velvet Underground, a Harvard mathematician and, in De Maria, someone who became a leading conceptual artist, the Primitives was one of the more unusual acts in the history of pop music. Terry drove them around the New York area doing promotion for 'The Ostrich'. They did radio spots, a high-school gig, a supermarket opening and a show at the Riverside Plaza Hotel on 3 December 1964, around the date of the single's release. Lou soon started acting up, to the extent that Terry lost his patience one day in the car. 'The woman who was going to be my wife was in the car and Lou started his [crap]. He was a little high. He was cursing and stuff, and I smacked him in the face... Everybody was afraid of Lou, because he could get so nasty that most people didn't want to confront him. For me it was a joke. He was no fighter.' Unsurprisingly, 'The Ostrich' didn't do any business, which was the end of the Primitives. But Lou and John continued to work together on their own.
Although John was impressed by the words to Lou's serious songs, he wasn't keen on the way he was performing them, in the folk style, at a time when Joan Baez, the Byrds and Peter, Paul and Mary were enjoying great success with pop versions of folk songs and Dylan was at the peak of his early fame. Folk music was very trendy in New York in the early 1960s, but John didn't think highly of the genre. As they jammed together at the Ludlow Street apartment, he helped Lou develop more original and appropriate arrangements informed by his knowledge of minimalism. This was the start of one of the great songwriting partnerships of the 1960s, comparable to Lennon and McCartney in that Cale and Reed created a catalogue of peerless work in a short space of time, becoming as close as brothers in the process, though, unlike Lennon and McCartney, one partner, Lou, wrote all the words. However, he did allow John to edit him, which was vital. 'When I first met Lou, we were interested in the same things. He had a certain expertise in songwriting that I thought was absolutely amazing. We both needed a vehicle. Lou needed one to carry out his lyrical ideas, and I needed one to carry out my musical ideas... I was going off into Never Never Land with classical notions of music [at the time],' Cale said of the collaboration. 'Lou was exorcizing a lot of devils back then, and maybe I was using him to exorcize some of mine. So when we first started working together it was on the basis that we were both interested in the same things. That's why the Velvet Underground was put together.'
Lou continued to live at home for the time being, and to work for Pickwick, in a period of transition. He took his Selective Service medical on 4 February 1965 to see if he was fit enough to be drafted into the armed forces, when the Vietnam War was escalating alarmingly. That February, President Johnson ordered an intensive bombing campaign against North Vietnam, and the deployment of US troops increased substantially during the year. Lou's mental-health record, and the fact he was on medication, was to his advantage. He was classified 1-Y, meaning he was to be drafted only in an emergency. With this problem solved, he had more freedom to plan his future. When Tony Conrad moved out of the Ludlow Street apartment, Lou moved in to share with John.
It was exciting to be living in the heart of the city after years in the boondocks, and Lou soon developed a profound love of Manhattan. It became his place, as if he had grown up there, defining him as a man and an artist. Apart from his job at Pickwick, Lou gave guitar lessons to help keep himself afloat in the city. 'I would say he was a very good teacher,' says Henry Flynt, who paid Lou ten dollars an hour for lessons. 'The thing that was striking about his personality was that he was a very tightly organized person in a very middle-class way on some level, and very disciplined.' When Henry asked Lou for his contact details, he gave his parents' address and phone number in Freeport, and when he mailed a demo tape to himself on 11 May 1965 to establish copyright of his new songs, he posted the package to Freeport, showing that he still remained tied to and to some extent dependent on his family. It may well be that Sid and Toby also helped support him financially in the city.
Drugs were part of the scene on the Lower East Side. 'Do you take heroin?' John asked Henry one day when he was visiting the apartment.
'No.'
'What are you, chicken?'
Prior to meeting Lou, John had used a variety of drugs. Now he and Lou started doing heroin together. John had never injected before and he was squeamish about needles. 'Lou took care of that by shooting me up for the first time. It was an intimate experience, not least because my first reaction was to vomit.' When they formed a band they initially called themselves the Falling Spikes, 'spike' being drug slang for a hypodermic needle, a term Lou used in 'Heroin'. Now he drew on these experiences to write another important song.
While 'Heroin' was a song about what it felt like to use smack, 'I'm Waiting for the Man' concerned the business of buying drugs. The first verse was one of the best Lou ever wrote, grabbing the listener's attention and setting the scene with a few carefully chosen words that conveyed a wealth of information.
I'm waiting for my man
Twenty-six dollars in my hand
Up to Lexington 1-2-5
Feeling sick and dirty more dead than alive...
One of Lou's songwriting maxims was 'to be terrific, be specific', and the opening verse of 'I'm Waiting for the Man' demonstrates the wisdom of this phrase. By citing a real place where drugs were scored (the junction of Lexington Avenue and 125th Street in Harlem) and specifying how much the buyer took with him to make the deal ($26), Lou created an immediate sense of realism. The words also conveyed tension. Buying drugs on the street was risky. What would happen? Lou proceeded to introduce the dealer, a creep in a floppy hat and 'PR' (Puerto Rican fence-climber) shoes, who inevitably showed up late. The frustration of waiting for the connection was at the heart of the song, and this too was realistic. The deal went down in a nearby brownstone apartment where the buyer borrowed the dealer's needle so he could get high at once. Then bliss.
These were among the most succinct and evocative lyrics Lou ever wrote, but not everything he was writing was in this spare, almost journalistic style. 'Black Angel's Death Song' was a surrealistic poem which John helped Lou work up. Instead of describing something real in the manner of a reporter, Lou presented a series of vivid images.
Cut mouth bleeding razors forget in the pain
Antiseptic remains coo goodbye
So you fly
To the cozy brown snow of the east
The reference to razors cutting the face is disturbing, reminiscent of the infamous scene in Luis Buñuel's 1928 film _Un Chien Andalou_, in which a woman's eye appears to be slashed with a razor; a picture Lou may well have seen screened at this time. The following line emphasized the feeling of being trapped in a nightmare, while 'the cozy brown snow of the east' is a wonderful phrase, evoking New York winters when snow is swept into dirty brown heaps on the sidewalk, or imported brown heroin powder, or both.
Not long after the last of that winter's snow had melted, Lou and John went busking in the city, performing 'I'm Waiting for the Man' _in situ_ at 125th Street. During one of these uptown expeditions they met a girl named Elektrah Lobel, who briefly joined the Falling Spikes. They played coffee houses as a trio. Then they met another girl, Daryl, with whom Lou and John both had an affair. That didn't stop Lou making passes at John, who realized that his new friend was bisexual and declined his overtures. One night Lou picked up a gay black preacher and went to bed with him in a hotel in Harlem, while John pretended to sleep in the same room. 'I wasn't going to go all the way downtown in the freezing cold.'
Another important song from this period was 'Venus in Furs', inspired by the nineteenth-century novel by Leopold von Sacher-Masoch, after whom the term 'masochism' is named. The novel documented the relationship between a nobleman named Severin and a dominatrix, Wanda, who mocked, cuckolded and whip-lashed her lover, often when she was dressed in fur, which was exquisite stimulation to them both. Lou borrowed the characters for a song of the same name, creating a work that has itself become synonymous with S&M. Oddly, he and John played 'Venus in Furs' initially in the style of an English folk song reminiscent of 'Scarborough Fair'. It would change radically.
We might imagine Lou and John in the spring of 1965 as two pale, shabby, half-starved young men trudging the streets of New York, like the down-and-outs in _Midnight Cowboy_. They filled their bellies with porridge, which was the cheapest and simplest meal they could cook at home. To earn a few bucks they sold their blood at donation centres and posed for pictures in tabloid magazines, which depicted them as criminals in lurid 'true crime' stories. 'And when my picture came out it said I was a sex-maniac killer and that I had killed fourteen children,' Lou recalled. 'And when John's picture came out in the paper it said he had killed his lover because his lover was going to marry his sister.' They were on the subway one day during this lean period when they ran into Sterling Morrison, the guitarist Lou had met at Syracuse. Sterling, a perennial student, was living with his girlfriend's brother while he studied at the City College of New York. They invited him back to Ludlow Street to jam. It was so cold in the apartment that they broke up crates to make a fire, huddling around the blaze with blankets over their shoulders while they played. Angus MacLise wandered in from next door and joined them, on hand drums. This was the start of the Velvet Underground, though at this stage it was only friends making music together in an apartment.
Their other drummer friend, Walter De Maria, was still on the scene, later talking of how he 'joined the Velvet Underground' around this time. 'It was a great band,' De Maria said after he became established as an artist. 'But then I said, Do I want to go to rehearsal every day and every night, you know, take all these drugs? Do I really just keep playing these rhythms, is that going to be enough? That was a really painful decision. I said, "No, put it down."' In truth, neither De Maria nor MacLise was committed to the group. De Maria disliked the discipline of band practice and, at twenty-nine, he already felt too old to become a rock 'n' roll drummer. 'You see, being a musician is something like being an athlete to some extent; you really have to be young and strong to do it.' The same was probably true of MacLise, who turned twenty-seven in 1965, and who had other interests. MacLise kept his thoughts on the subject to himself, dying in obscurity in India in 1979.
As well as changes of personnel, changes of name are often part of the evolution of a group. The Falling Spikes became the Warlocks for a while, appearing under that name with Angus at the Film-Makers' Cinémathèque, an event organized by the underground film-maker Piero Heliczer. They played behind a screen on to which films were projected, which was the start of the band being drawn into New York's visual arts scene. Lou didn't give himself over to the bohemian life just yet. He maintained his involvement with Pickwick Records, which brought in some income. 'We worked there and were songwriters on occasion,' said Sterling. 'At one time they called us the Beachnuts.' Under this name Lou recorded his absurd tale of the nude bicyclist, 'Cycle Annie', breaking up with laughter halfway through the session. The song appeared on a Pickwick album entitled _Soundsville!_ Lou sang on one other track and probably wrote, or co-wrote, most of the tunes.
The Warlocks evolved into the Velvet Underground after their friend Tony Conrad brought a book of that name to the apartment, a pseudo-academic study of 'the sexual corruption of our age' by Michael Leigh, published in paperback with an S&M-theme cover. 'It will shock and amaze you,' promised the blurb. It was, in fact, a trashy work, its only strength the title, which evoked an urban netherworld of illicit and deviant behaviour. 'The Velvet Underground' also had a double meaning for the musicians at the time. 'We thought it was a good name because it had "underground" in it, and we were playing for underground films,' explained Sterling. 'We considered ourselves part of the underground film community.'
Lou, John and Sterling recorded their practice sessions at Ludlow Street. One surviving tape from July 1965 captured the boys rehearsing six songs, including 'Heroin' and 'I'm Waiting for the Man', without a drummer, demonstrating the peripheral status of De Maria and MacLise, though the latter was on hand for band pictures that summer. Lou had perfected his lyrics, but the arrangements were still fluid. Sung by John to acoustic guitar, 'Venus in Furs' still sounded like 'Scarborough Fair'. 'Prominent Men', co-written by Lou and John, was a rip-off of a Bob Dylan protest song that would remain an obscurity. 'Heroin', however, was already the song the public would come to know, the dramatic, escalating arrangement complemented by John's manic viola. 'All Tomorrow's Parties' also sounded as it would when it was released, while 'I'm Waiting for the Man' had a curious hillbilly arrangement. They made several attempts at the songs on the tape. Lou and John shared lead vocals while Sterling played guitar, sang backing vocals and kept time by tapping a sarinda. 'Too fast,' complained Lou, as Sterling started another song, 'Wrap Your Troubles in Dreams'.
'Well, that's what...' his friend defended himself, the rest of his argument inaudible on the tape, which captured the sound of traffic on the street outside.
'Well, that doesn't prove it's right!' Lou scoffed. 'What the fuck's wrong with you?'
'OK.'
Lou started laughing. 'All right. Jesus Christ!'
It wasn't a serious disagreement, but it is early evidence of Lou's sharp tongue and lack of patience.
Towards the end of the year the Velvets came to the attention of Al Aronowitz, a journalist for the _Saturday Evening Post_ who had the distinction of having introduced Bob Dylan to the Beatles while also being friends with the Rolling Stones. Aronowitz harboured ambitions to move into music management and had one band under contract. His friend the film-maker Barbara Rubin told him about the Velvet Underground, whom she had encountered on the underground film scene, telling him that they were looking for representation. 'Barbara gets me a tape. I listen to the tape. Awful! A piece of shit!' Aronowitz later wrote, possibly referring to the tape described above. Nevertheless, he made the effort to meet them. Lou boasted to him of being an ace guitarist, and seemed to think the Velvets were destined to become the new Beatles. 'I can't promise that,' said Al. 'I can get you exposure.' He offered them a gig opening for his band, the Myddle Class, at a high-school dance in New Jersey. The job paid $75, split between four – Lou, John, Sterling and a drummer. But Angus refused to play, explaining that he didn't want to turn professional. Lou was furious. They needed a replacement fast. Sterling mentioned that Jim Tucker's sister Maureen played drums. Sterling's girlfriend Martha, later his wife (and hereafter referred to as Martha Morrison), drove Lou out to Long Island to hear her. 'I used to do a lot of the driving with Lou,' Martha explains, 'because he lost his licence somewhere along the way [when] he drove into a toll booth.' This was the aforementioned incident when Lou was driving home from Syracuse, with a college friend, stoned.
Maureen Tucker, known as Moe, was two years younger than Lou, John and Sterling, having been born in 1944. Dad was an alcoholic house painter who died when she was nineteen, leaving the Tuckers in straitened circumstances. She was a small, homely, church-going Roman Catholic who became interested in drumming at high school, enthused by a school visit from the great Nigerian drummer Babatunde Olatunji ('Oh my God, it was stunning!') and by listening to the Beatles, Bo Diddley and the Rolling Stones. Moe was one of millions of American kids who were transfixed by the Beatles' first appearance on the _Ed Sullivan Show_ in 1964. 'I started not to be satisfied just to listen. I wanted to participate somehow. So I bought a snare drum.' She began to play along to pop records with her drum in her bedroom. 'I wasn't thinking, I'll be a drummer. I was just having fun.' Then her mother gave her a cheap drum kit. 'It had an old beat-up cymbal, like a truck ran over it.' Teaching herself to play, she developed a weird style of her own which included playing the bass face down on the floor like an African tribal drum. 'So Lou came to my house to see if I could actually keep a beat, I guess.' She passed the audition. 'It was supposed to be just that one show, the New Jersey show.' They rehearsed together at Ludlow Street, 'just to show me the songs so I'd know when to stop!' Moe was shocked to see where Lou and John were living. 'This was a real bad place, and a scary area.' And she was surprised by their songs. 'When they started to play "Heroin", I thought, This is really different, and good-different. I liked the music right away. I was impressed, like, _Holy mackerel! What is this?_ '
There were three acts on the bill at Summit High School on 11 December 1965, with the Myddle Class headlining. 'When the curtain went up, nobody could believe their eyes!' student Rob Norris recalled. 'There stood the Velvet Underground – all tall and dressed mostly in black; two of them wearing sunglasses. One of the guys [John] with the shades had _very_ long hair and was wearing silver jewellery. He was holding a large violin [actually a viola]. The drummer had a Beatle haircut and was standing at a small, oddly arranged drum kit. Was it a boy or a girl? Before we could take it all in, everyone was hit by a screeching surge of sound, with a pounding beat louder than anything we had ever heard.' The song was 'Heroin'. Fascinated, Rob edged forward. Others turned away. The band played two more songs, 'Venus in Furs' and 'I'm Waiting for the Man'. Moe's kit almost fell apart during the short set. By the end, many of the students had left the auditorium. 'I was sitting at the back feeling sorry for the band,' says Martha Morrison. 'They really wanted people to like them.' In fact, the Velvets were elated. They had played their first gig, and it had been fun. They went back to Al Aronowitz's house for a celebratory spaghetti dinner, asking their manager where they were going to play next.
Al booked them into the Café Bizarre, a Greenwich Village club catering to the tourists who streamed into the Village in the evening to see young comics and musicians. There was a debate about whether they needed Moe. The experience of having Elektrah and Daryl in the Falling Spikes, and the complications caused by Lou and John getting romantically involved with the girls, had put John off female band members. 'No chicks,' he told Lou and Sterling. Nevertheless, Moe was with them when they began their two-week residency in December. The classic line-up of the Velvet Underground band was thereby established.
The original four Velvets each had a distinct and different character. Lou was a writer first and foremost, fascinated with the outré but also in love with good-time rock 'n' roll. He was highly egocentric, with an aggressive drive to succeed, dominating his friend and songwriting partner John Cale, who was different in the sense that he was British but also because he came from a musically academic background and was steeped in avant-garde composition. The fact that he played amplified viola on stage helped give the band a unique look and sound, and he didn't bring any pop-music clichés to the mix, as Sterling Morrison observed astutely. By contrast, Sterling was a sociable, jovial journeyman guitarist without ambition. Yet he was also intelligent, bookish and witty. Moe was perhaps the most unusual band member. The fact of her gender was remarkable. Moe knew that it was highly unusual for a girl to play rock 'n' roll drums; as far as she knew she was the only girl in America doing so in 1965. She also had a unique style. She played drums like a kid hitting pots and pans. She didn't know how to play properly. 'I can't play a roll. A drummer, you can't play a roll? But if I could have, it would have been Ginger Baker playing "Heroin". Can you imagine what that would have been like? It would have sucked. Not because he sucks... Anyone is a better drummer than me. But I just thought about it differently, or something.'
Moe and Sterling didn't take the band as seriously as Lou and John. They enjoyed making music but didn't see it as their career and held down day jobs for a long time. Moe worked in data entry. Sterling did various jobs, including working as a cook in a fish restaurant and delivering air-conditioning units. Moe took little interest in business decisions. She avoided disputes and didn't get involved in relationships with her band mates. 'That wasn't my style.' Paradoxically, for a band associated with drug culture and deviant sex, their drummer was also a rather unworldly person, with conservative, almost puritanical, views about certain behaviour, including bad language. She enjoyed a beer but didn't touch drugs and turned a blind eye to what the guys did in that respect. What really upset her was their swearing. 'The mouths on them! Oh my God!' Moe was one of the few people Lou never fell out with and always listened to. They roomed together on the road and were friends until the end of his life, whereas there were long periods when Lou, John and Sterling didn't speak to each other. It is indicative of the warmth between Lou and Moe that she called him Honeybun, which gives a glimpse of Lou's softer side, and he called her Moesy. Sterling was Sterl, while John was usually referred to within the band by his surname, or as Black Jack, because he liked to wear black.
As it turned out, the manager of the Café Bizarre didn't want Moe playing drums, because the patrons wouldn't be able to hear themselves talk in the small club, so she was merely required to shake a tambourine during the shows. The band performed short sets throughout the evening, while customers sat at tables a few feet away, drinking and talking. The audience more or less ignored them, save one young fellow who got up and danced – with a whip. 'I grabbed some girl that I didn't know sitting down at a table. She was pretty and I asked her to dance,' he recalled. 'No one was dancing to the Velvets' music, it almost seemed like it was undanceable music. So I started dancing, and I had a whip at the time. I actually bought the whip at an umbrella shop in the West Forties as a mere decoration [and] I started using the whip as a prop.' As there was no stage, this strange couple were dancing directly in front of the group.
'Who is this lunatic?' Sterling asked himself.
The dancer was Gerard Malanga. He had come to the Café Bizarre at the behest of his friend, the ubiquitous Barbara Rubin, who continued to advocate for the Velvets. Having got Al Aronowitz to manage them, she wanted Gerard to film them, knowing that photography and film-making were among his interests. More importantly, Gerard worked for Andy Warhol, one of the city's newest and brightest celebrities, who had the uncanny ability of making his friends famous, too. Warhol had achieved fame in 1962 with a gallery show of radical new pictures of everyday objects, including soup cans and Coke bottles, as well as silkscreen portraits of celebrities like Elvis and Marilyn Monroe. From that point on he was constantly in the papers, associated with a vogue for manufactured, commercial art that seemed to many people to be a put-on, as well as being a familiar face on the party scene. He was also ambitious to branch out into the film business, and music.
Next day at work Gerard told Andy and his business manager, Paul Morrissey, about the strange band he had seen at the Café Bizarre, suggesting that Paul come back to the club to check them out. Paul was looking for a band to present under Andy's name at a new discotheque on Long Island, one of several business propositions they were considering. They needed a band loud enough to fill a large venue and, just as importantly, a band with a distinctive look. So Paul went with Gerard to the Café Bizarre. 'I thought they were peculiar and unusual,' he recalls his first impression, intrigued by the fact that Cale played viola and the percussionist was a girl. 'Do you have a manager?' he asked them. 'No,' they said, apparently forgetting Al Aronowitz, who exits the story at this stage, feeling ill used by the Velvets, despite the fact that he never had any empathy for their music.
It was decided that Paul would bring Andy to the club. Moe recalls how thrilled they were about this. 'That was exciting – to see a famous person.' Warhol arrived the following night with Paul, Gerard and other friends who trooped about Manhattan with the artist in the evening, looking just like he did in the newspapers: a slightly built man of thirty-seven, with fine, silvery hair and pale, spotty skin. He wore shades and spoke softly like a child, yet commanded attention and respect. It was said that Warhol had 'a whim of iron'. After the band played a set he came over and told them in his fey voice that they were great, his expression both innocent and mischievous. Never much of a talker, he habitually said everything was great. 'We liked the idea that their drummer was a girl, that was unusual,' he enlarged slightly in his autobiography, _POPism_. 'And Lou looked good and pubescent then – Paul thought the kids out on [Long] Island would identify with that.'
It is frequently debated whether talent or ambition is most important in a successful career, but this is to forget that luck, or timing, usually plays a part. The Velvets were fortunate to run into Andy at the right time. They were fired from the Café Bizarre almost immediately after they met, for playing the disturbing 'Black Angel's Death Song' once too often, so were out of work when they came to his studio loft for a second meeting around Christmas 1965. The artist worked in an industrial building on East 47th Street, near the United Nations. A freight elevator took them up to the fourth floor, where they entered a large silver room. Andy's friend Billy Name, who lived in the studio, had sprayed the whole place with silver paint at Andy's request, including the toilet and the telephone, and had wrapped the water pipes and pillars in foil. As a result, the studio was known as the Silver Factory. It shone in the sunlight. Andy lived uptown with his mother, arriving each morning at around eleven o'clock to work in front of the windows, where the light was best. Gerard came in at about two in the afternoon to assist him. Artwork, including silkscreened Brillo-pad boxes, portraits of Jackie Kennedy and Liz Taylor, and life-size silver silkscreens of Elvis, were created on a production-line basis, then stacked against the walls for collection by Andy's dealer. The pictures were already valuable. This was only part of what went on at the factory. Film cameras were arranged around a couch where Warhol shot his 'screen tests' and strange, unsettling art films, many with a sexual theme. The red couch was 'as dirty as the gutter', recalls Factory habituée Mary Woronov, a mannish girl with cat eyes and a filthy laugh. It was not uncommon to find actors walking about in the nude between scenes. Opera would often be playing on the hi-fi.
The Velvets agreed to be managed by Andy and Paul Morrissey, on the basis that the duo would help support the band until they started to earn money. The initial idea was to present the group at the Long Island disco they were thinking of calling Andy Warhol's Up. Beyond this, Andy and Paul were ambitious to get into the music business, as they also wanted to become seriously involved in the film business; they were highly ambitious and businesslike in their own eccentric way. There was an initial problem with Lou when he refused to sign a contract, perhaps in light of the fact that he had signed too readily with Pickwick City, a contract which Terry Philips says he chose not to enforce. '[It] was eight or nine months, or more, before they signed a contract. Lou Reed was not going to do anything for anybody,' complains Paul Morrissey, who came to loathe Lou. 'Do you think people want to read about the life of Lou Reed?' he asked waspishly, in an interview for this book. 'You need a good title like _The Hateful Bitch_ [or] _The Worst Person Who Ever Lived_. Something that says this isn't a biography of a great human being, because he was not... He was a stupid, disgusting, awful human being.'
The Velvets started to hang out at the Silver Factory, becoming part of an extraordinary creative and social scene. They discovered that the factory ran on drugs, principally various forms of amphetamine known generically as speed. Andy took Obetrol, an amphetamine which gave him 'that wild happy, go-go-go feeling' that made him 'want to work-work-work'. Billy Name used methamphetamine daily. Many of their friends were also A-men (amphetamine users): flamboyant, highly strung young men known by enigmatic nicknames such as Ondine, Rotten Rita, Won-Ton and the Turtle. Most were gay. They liked speed partly because it kept them thin. Some of their behaviour was alarming. Ondine was known to inject his eyes. When Factory A-man and dancer Freddy Herko went off his head in 1964, he danced out of a window to his death. Lou felt a kinship with these extreme characters. Methamphetamine became his drug of choice, not heroin, which he only dabbled with. Meth made him feel like Superman. He told Nelson Slater that he was going to take meth every day for the rest of his life. For years, he did.
Friends gathered at the Silver Factory each evening prior to going out on the town with Andy as his entourage, filling the studio with their cigarette smoke, excited, speed-fuelled chatter and exaggerated laughter. The Factory crowd was a decadent, narcissistic bunch of clever, attractive people who vied for Andy's attention and were quick to put each other down. There were women in the group as well as men, some of them beautiful, some from wealthy or socially prominent families, all unusual. The queen of the scene when Lou arrived was the model and actress Edie Sedgwick, a dissipated rich kid with a history of mental illness who appeared in Warhol films with stark one-word titles like _Bitch_ , _Restaurant_ and _Vinyl._ Edie had become almost as famous as Andy himself by 1965, having her hair cut short and dyed to look like his twin. Barbara Rubin was another regular. 'She [was] indecent,' recalls Moe, shocked to hear a woman use words like 'cunt'. Then there was Brigid Berlin, the obese and wayward daughter of the chairman of the Hearst publishing empire, who walked around the factory topless, occasionally applying paint to her breasts to make 'tit paintings'. She was another speed freak, giving rise to her nickname Brigid Polk, a laboured pun on the fact that she was forever poking needles in her bum.
Celebrating New Year's Eve with this hedonistic bunch was fun. The Velvets had recently taken part in an underground film shot by their pal Piero Heliczer. They had performed in a crowded apartment building, Lou, John and Sterling stripped to the waist and decorated with body paint, with Angus MacLise and Moe Tucker both playing percussion, Moe in a veil. CBS Television sent a crew to cover the happening, as an example of the crazy things the downtown kids were doing. The item was to be broadcast as part of the evening news on 31 December 1965. That night, the band went out with their new friends from the Factory, including Andy, Edie and Gerard, going uptown in Edie's limousine to see James Brown at the Apollo Theater. High on speed, they then raced back downtown to watch themselves on flickering black-and-white TV, before partying into the small hours. So began 1966, the seminal year of Lou's career.
Being filmed by Piero Heliczer, late 1965. Lou and Sterling Morrison on guitars, John Cale on viola.
## IV
## The Exploding Plastic Inevitable
### 1966
IN THE MIDDLE of the roast-beef course at the annual New York Society for Clinical Psychiatrists' dinner at the Hotel Delmonico, on 13 January 1966, the Velvet Underground began to play 'Heroin' as loudly as possible. As Lou sang about mainlining drugs, Barbara Rubin and her film-maker colleague Jonas Mekas ran into the ballroom and fired outrageous questions at the shrinks and their wives, all of whom were in evening dress. 'What does her vagina feel like?' Barbara asked the women on camera.
'Is his penis big enough?'
'Do you eat her out? Why are you getting embarrassed? You're a psychiatrist; you're not supposed to get embarrassed.'
Andy Warhol simultaneously projected a film of a man being tortured. Diners stared at the shocking images, and at the outlandish musicians. Still dressing like a hobo, Lou wore his brown winter jacket and favourite flat-sole cowboy boots. Cale was dressed in black, with a diamanté torque around his neck. He was playing his viola as though he meant to break it. Edie Sedgwick, pretty in a red skirt and black top, was dancing with Gerard Malanga, who was wielding his whip. They were grinning at each other, evidently enjoying their audience's surprise. A second woman, handsome and solemn, stood at a microphone. When Lou finished 'Heroin', she intoned another strange song in a German accent.
Although the Velvet Underground were a surprise to most of the 350 guests at the dinner, Andy and his troupe had been invited to the event, and the artist had warned the organizers that he was going to stage a 'happening'. While many of the guests took this in good spirit, indulging the young people who were evidently having so much fun, others felt that their dignity had been offended. 'I suppose you could call this gathering a spontaneous eruption of the id,' Dr Alfred Lilienthal huffed. Dr Marcel Helman said he felt like being sick. 'Why are they exposing us to these nuts?' a third shrink asked a reporter from _The New York Times_.
The stunt was reported prominently in the _Times_ and the _Herald Tribune_ the following day, 14 January 1966, as a good joke at the expense of the head doctors. It was light relief from the grim daily news from Vietnam. This was the first significant press coverage the Velvets had received, and it established a tone. While Andy Warhol was expert at garnering publicity for himself and his friends, the mainstream press treated the Velvet Underground as a mere sideshow to his pranks. Their music was mocked as a 'din', comparable to a fire alarm going off, in the opinion of the _Herald_. There was no mention of Lou Reed, let alone any serious appraisal of his songs.
Nico, the solemn blonde who sang with the Velvets at the Hotel Delmonico, had recently been imposed on the band by Andy's right-hand man Paul Morrissey, to Lou's fury. 'Boy, did he hate that!' Paul didn't think Lou had sufficient charisma to hold an audience's attention. While Nico couldn't carry a tune, partly because she was deaf in one ear, she had the benefit of being glamorous. So he decided that she was going to be the band's chanteuse. 'John Cale was all for that. Lou wasn't. He wanted to sing all the songs. Oh God! He had no voice at all. He dominated John Cale. He tried to dominate me, [but] he couldn't.' Nico became a kind of guest star, who wandered on stage to sing a couple of songs with the Velvets, then wandered off again as if she had another engagement to go to. One could almost hear Lou's teeth grinding.
The tension eased briefly after they slept together. 'I fell in love with him. He was so beauuu-tu-full, and very tuff... tuff like a stat-tuu,' was how Nico explained her feelings for Lou. Fluent in four languages, she spoke English ponderously, with a heavy accent, elongating her vowels. The relationship fascinated their friends. 'I thought Lou was in love with her,' says Richard Mishkin. Maybe he was for a short time; Nico was a beauty. 'Lou was just completely stunned [by] her, and could never quite figure it out – what was going on,' John Cale says, adding that Lou and Nico had 'very much a love–hate relationship.' Mary Woronov, who joined the Factory scene around this time, suggests that Lou slept with Nico to keep control of the act. 'I seriously think he fucked her because she was part of the band and it was a way of controlling her, or finding out about her, at least. He used her as an instrument, so why not fiddle with it?' For her part, Nico may well have thought that by sleeping with Lou she was neutralizing her enemy.
Nico was an extraordinary person. Born illegitimately as Christa Päffgen in Cologne in 1938, she grew up as an only child after her father died in the Second World War. She claimed to have been raped by a soldier when she was thirteen, which may or may not have been true; Nico was a fabulist. By the time she left school, she was a tall, big-boned girl with a gloomy demeanour, a deep voice and long ash-blonde hair. She adopted the name Nico as a model, working in Paris and New York. Small film roles followed. She played herself in Fellini's _La Dolce Vita_ and had an affair with the actor Alain Delon, the first of a series of famous lovers. Nico subsequently gave birth to a son, Christian, known as Ari, who most people believed to be Delon's, though he denied paternity. By 1965 she was in London, where she was romantically linked with both Brian Jones and Bob Dylan, inspiring Dylan to write 'I'll Keep It with Mine', which she sang as if it were her own. She came to the Silver Factory in November 1965, shortly after which Paul Morrissey put her together with the Velvets.
Once Lou got over the shock of having to work with a chanteuse, he gave Nico three of his best songs to sing. Despite what Lou's college girlfriend Shelley Albin says, Nico always maintained that _Loouu_ , as she called him in her deep, doomy voice, wrote the limpid 'I'll Be Your Mirror' for her after she told him precisely that, in the sense of being his muse. As further evidence of how complex such matters are, another girlfriend has yet another explanation. 'I know for a fact that "I'll Be Your Mirror" is about Pisces,' asserts Barbara Hodes, who met Lou a little later in 1966. She says that Lou, who was a Pisces and took an interest in astrology, told her so. 'It's not about any one person, it's about the astrological sign Pisces.' Edie Sedgwick was almost certainly the inspiration for 'Femme Fatale', which Nico also sang. The third song Lou gave her was 'All Tomorrow's Parties', which evoked the hedonistic scene at the Silver Factory. Nico sang these three songs in a way that was much mocked within their circle at the time. 'We used to imitate her a lot, like "vat a cloooun",' giggles Martha Morrison, imitating her foreign pronunciation. 'We always made fun of Nico, and she was probably a poor lost soul who could have used a good girlfriend... poor Nico.'
Despite the fact that Nico was sleeping with Lou and had a child by a former lover, a rumour swept the factory that she was a lesbian, which spooked Martha and Moe. 'It's so stupid, but we were very young and unsophisticated,' says Martha. 'Nico was too much for us.' There was indeed plenty to shock a conventional Long Islander at the Silver Factory. The drug use, nudity and profanity was startling. 'They were degenerates, that's all, of all kinds,' says Martha, still shocked. Lou had no difficulty fitting in, though, quickly becoming one of Andy's favourites. He had found his second mentor, after Delmore Schwartz.
'Lou was sort of like Andy's Mickey Mouse,' says Billy Name, who knew both men well. 'He was his animated puppet.' The artist made little films of Lou, and whispered to him about his career. Lou listened to what he said, learning from Andy, as he had from Schwartz. The artist taught him all sorts of things, including giving tips on how to deal with the media. He said, for example, that it wasn't necessary to tell journalists the truth. Most importantly, perhaps, he urged Lou to work hard. Despite all the crazy things that happened at the Factory, Andy worked assiduously at his art and the business of art, and he encouraged Lou to be equally industrious, often asking him how many songs he had written that day. Although Lou was writing prolifically, whatever number he cited wasn't enough for Andy, who said he should have written more. It was a lesson Lou never forgot, and often referred to. 'Andy works very hard,' he said in 1971. 'One of the things you can learn from being in the Factory is if you want to do whatever you do, then you should work very hard, very hard all the time, and if you don't work hard all the time, well then, nothing will happen.' He took his advice to heart. Lou worked hard for the rest of his life, writing hundreds of songs, recording dozens of albums, performing countless shows and getting involved in numerous side projects. He was ultimately as busy as his mentor.
The happening at the psychiatrists' dinner was the prototype for a mixed-media show, called _Andy Warhol, Up-tight_ , first staged at the Cinémathèque in New York in February. Once again, the Velvets performed with Gerard and Edie dancing, while Andy and his helpers projected still images and Factory films on to them, and screens behind them. The band started to wear black on stage so they stood out against the projections, while they adopted sunglasses partly in order to protect their eyes from the light. Lou, who up until now had dressed in the scruffy style of a folk singer, made black leather and dark glasses his image for most of the rest of his career, with the result that he is sometimes credited for originating the look. Of course, he didn't. Roy Orbison beat him to it, as did Bob Dylan, to name only two. Lou and Nico bickered at the Cinémathèque about how many songs she could sing with the group. 'Lou wanted to sing everything,' she complained. She wanted to sing 'I'm Waiting for the Man', but he wouldn't let her, jealous of one of his very best songs. 'We quarrelled a lot,' she told her biographer Richard Witts. 'But he could be nice to me.'
Andy then took the show on the road, starting with a gig at Rutgers University in New Jersey on 9 March 1966. Approximately a dozen people made up the touring version of _Up-tight._ In addition to Andy and his lieutenant, Paul Morrissey, there were the four Velvets and Nico; roadie Dave Faison and lighting man Danny Williams (who later vanished, believed to have taken his own life); Gerard Malanga, whose whip dance was integral to the show; film-maker Barbara Rubin; photographer Nat Finkelstein; Factory A-man Ondine; and Ingrid Superstar, a sexy blonde who had taken Edie Sedgwick's place in the entourage after Edie upset Andy by telling him that people were laughing at their films, and drifted away from the scene. (The rest of her life was not happy. She met a drug-related death at the age of twenty-eight in 1971.) There were so many people in the party that Paul had to hire a minibus. When they arrived at Rutgers, Andy led everybody into the university cafeteria, where they deliberately created a scene. Ingrid flirted with the male students, eating off their plates, while Barbara filmed them and asked provocative questions, as she had at the psychiatrists' dinner, and Nat Finkelstein took pictures. When a staff member objected to the intrusion, there was a scuffle and the police were called. Although less than 20 per cent of the tickets had been sold in advance of the two evening shows, this incident ensured that the auditorium was full when the first show began at eight o'clock.
It started with a screening of Andy's film _Lupe_ , which featured Edie puking into a toilet. 'Down with Andy Warhol, up with art!' shouted an excited student. Then the Velvets came onstage, dressed in white on this occasion, so that when Andy projected his films, including films of Lou on to Lou and Nico on to Nico, they appeared to dissolve. The projections became increasingly complicated as the show progressed, image layered upon image, the auditorium strafed with light. Students and members of the revue danced, ran and played in the light beams like children. It was joyous. Not everyone was satisfied, of course. Nico was heckled.
'Speak English, if you can,' shouted a xenophobe.
The next day, they drove to Ann Arbor to perform at the University of Michigan Film Festival. Nico was behind the wheel. She liked to drive and was one of the few people on board who had a licence, though she was a truly terrible driver. As Nico careered across the country, clipping kerbs and ignoring stop signs, the Velvets attempted to rehearse in the back of the vehicle. When Lou lay down to rest, Andy fondled his crotch. Lou didn't object. Nat Finkelstein took their picture.
There was a party in Ann Arbor. A local teenager named Jim Osterberg, later better known as Iggy Pop, was among the guests. Friends flew in from New York to join the fun, including Danny Fields, editor of the teen magazine _Datebook_ and an early devotee of the Velvet Underground. 'I put a picture of the Velvets and Nico in _Datebook_ magazine, which would have been the first picture of the Velvet Underground ever in a national magazine, because I had a column where I would throw in everything I liked,' says Danny, who had a major crush on Lou. They became good friends. 'He was the cutest guy in town, he was the sexiest boy, and everyone was in love with Lou. He was so hot... I became a groupie of theirs, following them around.'
The bus broke down on the way home. They rolled to a stop at an isolated gas station in Ohio. Paul Morrissey went for help as the others peered hopelessly under the hood. Andy remarked that his next superstar would have to be a mechanic. The unusual appearance and behaviour of the troupe soon attracted attention in this isolated spot. 'And before we knew it there were three cop cars, with six or eight cops surrounding the bus, because the person in the garage saw Paul, who looked like trouble, and they called the police, and they wanted to search the bus. "Everybody out!"' recalls Moe. 'And then we were told we needed to be across the state line by noon the next day... That was totally on account of our looks.'
For Lou and the Velvets, April 1966 was the best month they spent with Andy Warhol. In many ways, 1966 was their greatest year. When plans for the Long Island disco fell through, Andy and Paul Morrissey became concerned about the amount of money the Velvet Underground was costing them and decided to find a venue in New York where they could put on a show quickly to make some cash. They settled on a scruffy Polish club on St Mark's Place, on the Lower East Side, called Polski Dom Narodowy, better known as the Dom. Paul hired the main room for the month of April. Trying to think of an exciting name for the event, he came up with the Erupting Plastic Inevitable, which appeared in an advertisement in the _Village Voice_ the day before opening night, Friday 1 April. They couldn't get access to the Dom before 3 p.m. on the Friday, at which point Gerard started to whitewash the back wall to create a projection screen, refreshing the fusty club with the clean smell of emulsion, and Andy got busy installing his projectors and lights on the balcony. He also hung a vintage mirror ball over the dance floor. They opened at eight o'clock, having decided at the last minute to rename the show the Exploding Plastic Inevitable: a month of live music, dancing and film projections, for an entrance fee of two dollars. A hand-printed banner was draped from the fire escape: ANDY WARHOL. LIVE. THE VELVET UNDERGROUND. LIVE DANCING. FILMS. PARTY EVENT _NOW_.
Andy's name was enough to draw a crowd of over seven hundred people. 'It was packed,' says Paul. 'It was an enormous success from its very first night.'
To enter the Dom in April 1966 was to step into a world of wonder. The Velvets had been involved in mixed-media shows before, with Andy, and before they met him, while rock concerts were being enhanced with psychedelic effects in San Francisco, where bands like the Grateful Dead were becoming established, their use of music and lights linked to the taking of LSD – but there had never been anything quite like the Exploding Plastic Inevitable. Directed by the pre-eminent pop artist of the day, the show was an authentic and innovative art happening and, more importantly, it was fun. Thousands of people flocked to the Dom over the following four weeks – ordinary New Yorkers, tourists, fashionistas and celebrities – to dance to the music of the Velvet Underground and Nico under a barrage of strobe lights, coloured gel projections and black-and-white movies. The mirror ball spun overhead, its facets casting myriad dots of light on their happy faces.
The scene outside the Dom, April 1966.
Andy stayed up in the balcony most of the evening, operating his cameras, although almost anyone was welcome to lend a hand with the equipment. There was a bar downstairs selling beer, Coke and sandwiches to the public, and little tables covered with gingham cloths where people sat in semi-darkness waiting for the show to begin. When the Velvets sauntered onstage and started to play, casually, as if they didn't care who was watching, sometimes with their backs to the room, the lights came on and everybody got up and danced. 'It was exciting for all of us, but it was really weird in that we had the place mostly all dark, and then when the band would start we would flash images on projectors and we would be showing films on the walls, films of the Velvet Underground that we had taken, and we were now showing on them while they were performing,' says Billy Name. 'It wasn't an announced thing. It wasn't a presented, with literature, thing. It was just a _happening_ thing, and it was happening and happening and happening. And people came in and started dancing. It was a great time.'
Among the films that were shown was footage of a transvestite named Mario Montez, his head enlarged in close-up to gigantic size, eyes blinking massively, mouth opening like a cavern. 'Dwarfed by Mario Montez's lipstick-stained teeth and looking like insects that had just crawled out of his mouth, the Velvet Underground played in their wrap-around shades,' wrote Mary Woronov in her Factory memoir _Swimming Underground_. Mary was Gerard's new dance partner. 'He bought me a whip, and then he bought me leather pants, and a leather bracelet and everything. I was perfect.' They devised dance routines for Velvet Underground songs. Speed gave Mary energy, so she lifted weights during 'I'm Waiting for the Man'. Gerard mimed mainlining with a prop syringe during 'Heroin', and they did a whip dance for 'Venus in Furs'. 'Gerard was the victim always, and I was the dominatrix,' she says. It was an act, of course. Mary didn't even like sex. 'The S&M was a pose, definitely.'
Ingrid Superstar tried to dance with the couple, who were in many ways the stars of the show, but Mary didn't want a threesome. 'Ingrid, you can't dance up here, you're fucking everything up,' she told her rival, whom she thought common and stupid.
'Yes, I can,' protested Ingrid. 'Andy said it was okay. And I'm sure it's cool with Gerard.'
'No, it's not okay. I'm dancing with Gerard, you skull fucker, not you, me.'
'Let go of my arm, you're hurting me...'
Others were having more innocent fun out on the dance floor. Grooving under the twinkly mirror ball to the music of her boyfriend's band, Martha Morrison told herself that she was having the time of her life. Absolutely the best time ever. Stamina was an issue. Some of the band's songs lasted fifteen minutes or more. High on speed, Lou could play his guitar almost indefinitely, which became tiring to dance to. One night Martha's friend Ellie became so exhausted she fell asleep with her head on a speaker.
Nico took an excessive amount of time to get ready for her brief guest spot, her elaborate backstage ritual involving lighting candles. 'Lou had very little time for women and their accoutrements, and this ritual would really irritate him,' recalls John Cale. 'The comic thing was that she'd do all this to help her performance, and then she'd start off singing on the wrong beat!' When Nico glanced around at the band as if _they_ were at fault, Lou would say cuttingly, 'We know what _we're_ doing, Nico.' Once she had sung her three songs, she stood aside and banged a tambourine for a while, out of time, then walked off.
Between sets, Gerard went around the hall with a hand-held Bell & Howell film projector, which he shone on the audience. A deejay named Norman Dolph played records to keep people dancing until the Velvets came back on. Norman was a twenty-seven-year-old sales executive for CBS who collected art and ran a mobile disco in his spare time. One night Andy told him that he wanted to make a recording of the band to try to get them a record deal. Norman offered to make the arrangements at his own expense in return for a Warhol painting. '[Andy] said, "Just do it." I said, "OK, I'll do it for a painting." The conversation took no longer than [that].' Norman hired a studio at Broadway and West 54th Street from a small company called Scepter Records. The Velvets arrived on 18 April to begin recording what became their debut album, _The Velvet Underground & Nico_, also known as the Banana Album because of the banana artwork on the cover.
The band would spend two days in the studio making their demonstration record, working office hours so they could still perform at the Dom in the evening. Another couple of days were set aside for mixing the sound. Andy came with them on the first morning, and gave Lou some advice. 'Whatever you do,' he told him, 'keep all the dirty words.' There were no curse words in Lou's lyrics, but his use of drug jargon and fetishistic language about mainlining smack, whiplash sex and such was risqué. Andy knew that this gave the songs authenticity, and that had to be maintained. There was limited time and tape was expensive so they worked fast in the small, beige studio, a piano in one corner and a clock overhead to remind them of the time. Although Andy was the nominal producer, he soon left them to their own devices. Norman Dolph oversaw the sessions, with the help of engineer John Licata.
'There was not two minutes wasted,' says Norman, who recalls that Lou and John were intensely focused during the sessions, and Lou wasn't friendly. 'Lou projected a sort of hostility, whereas John Cale projected intensity.' This was Lou's chance to make a professional recording of the best of the songs he had written since Syracuse, a body of work developed with John, Sterling and Moe and practised nightly at the Dom. As a result, the band was tight. The songs were lyrically sophisticated, dealing with unusual and difficult subject matter, such as drug use, as well as the complexities of the heart and mind. 'Heroin' and 'I'm Waiting for the Man' were the jewels in the collection, and there was an extra intensity when Lou sang them. 'You did have the feeling it was being sung from the vein, as it were,' says Norman. 'That's what made it so freezing to listen to. Is this true? And if it is true, how can a person with the talent to make a song out of it not have succumbed to it? You didn't have the feeling Lou Reed had fantasized any of this. You had the feeling he was speaking first hand.'
Arrangements had developed since Lou met Cale. 'Basically, Lou would write these poppy little songs and my job was to slow them down, make them "slow 'n' sexy",' said Cale, explaining how he helped create the stern and sometimes chilling music. 'Everything was deeper, too. A song written in E would be played in D. Maureen didn't use cymbals. I had a viola, and Lou had this big drone guitar we called an "Ostrich" guitar [used on the Pickwick song "The Ostrich"]. It made a horrendous noise, and that's the sound on "All Tomorrow's Parties", for instance. In addition, Lou and Nico both had deep voices. All of this made the record entirely unique.' The sounds and ideas of the avant-garde were heard, including the use of drone, but never to the extent that the music became academic. 'European Son' was a guitar jam that sped up after the sound of a chair being scraped and a glass broken, sounds John introduced as _musique concrète_ to punctuate the composition. It worked. A radical new arrangement of 'Venus in Furs' turned what had sounded like a folk song into a sexy, sinister dirge. Drug references and influences were pervasive. Speed energy propelled 'Run Run Run', in the course of which Lou referred to his characters scoring drugs on Union Square.
Lou's love of pop music was also part of the recipe, lightening the tone at times. He borrowed a riff from Marvin Gaye's 'Hitchhike' to create 'There She Goes Again', a catchy little song with a misogynistic lyric that opens up an issue with Lou's songwriting. It was the first of many songs he wrote over the years in which women were physically abused. Lou argued that such songs were not an expression of his personal point of view. Rather, he was putting himself in the shoes of characters who were violent and misogynistic. 'It's about a guy whose girlfriend is giving him a bad time. She's split and he thinks she's making it with all his friends,' he said of 'There She Goes Again'. 'The line I always liked is "Well, you'd better hit her." I thought no one would ever notice. But they did... for me it's just a song, attitude. It's got nothing to do with me. Look, I write songs I don't agree with 'cause I don't have a real viewpoint. It has to do with a movie I saw, or a character who was a certain way. Or somebody I read about in the papers or met at a party. I put him in a song and act him out.' Yet many people who knew Lou describe him as a misogynist, and there are examples of him hitting women. One will serve for now. One evening, Lou and a date went to see his school friend Allan Hyman and his young wife. During the evening Lou repeatedly slapped his girlfriend. 'She would say something. He'd get pissed off at what she said and smash her around the back of the head,' says Allan. '[My wife said,] "Lou, if you continue to hit her, you have to leave." And then he smacks her in the back of the head [again]. So she said, "Get out!"' As we shall see, this was not an isolated case.
Lou's relationships rarely ran smoothly. By the time the Velvets came to record the Banana Album, he and Nico were finished. She dumped him, announcing the split to the band with the words: 'I cannot make love to Jews any more.' As a result, Lou was not nice to her at Scepter Records, and the other musicians took their lead from Lou. 'We reduced her to tears in the studio, because we wanted her to sing in a soft voice rather than in a hard Germanic voice,' said Sterling. Yet Nico's voice was integral to the character of the album, while her name alone would feature on the sleeve as part of the title.
At the end of the sessions, Norman Dolph made an acetate disc of nine songs, which he sent to colleagues at Columbia Records, hoping that they would sign the band. He received a swift reply saying that they weren't in the least bit interested. Andy made good on his promise, nevertheless, rewarding Norman with a silk-screen picture, which he later sold for $17,000. Andy and Paul Morrissey were not fazed by CBS's rejection. They were taking several thousand dollars a week at the Dom, out of which they paid the Velvets only five dollars a day each, so they had easily covered their investment in the band, and another record company was showing interest in the group. The record division of Metro-Goldwyn-Mayer (MGM) wanted to make a deal. A trip to Los Angeles was arranged to sign the contract and present some West Coast shows.
Andy, Paul, the Velvets and Nico, together with key members of the Exploding Plastic Inevitable, arrived in Los Angeles on 1 May 1966. Andy, the Velvets (including Lou), Nico and Mary Woronov stayed at the Castle, a mansion in the Hollywood Hills owned by the actor John Phillip Law, while the rest of the party checked into the Tropicana Motel.
The following day, the Velvets signed with MGM for a $3,000 advance against royalties. Although this was a break, Lou was unhappy about the fact he was also required to sign a management agreement with Andy's company, Warvel, whereby MGM payments would be made to Warvel, which would deduct twenty-five per cent commission, before paying the net to the group. Lou had so far refused to sign anything. 'He wouldn't sign a contract after Andy supported him for months and months and months,' groans Paul, who lost his patience. 'I said, "Well, then there's no record."' When he said that, Lou gave in. 'Finally he did [sign], with hate and hate and hate... Lou [always] wanted more money. He wanted this. He wanted that. He was in agony.' Lou still wasn't satisfied. Two months later the band sent MGM a letter, signed by all four members plus Nico, to the effect that payments should be made directly to them in future. Moreover, money should be paid in the first instance to Lou.
The day after they signed with MGM, the Exploding Plastic Inevitable opened at the Trip, a trendy nightclub in West Hollywood where they were booked for a two-week residency supported by Frank Zappa's Mothers of Invention, another way-out MGM band. The two groups had the same in-house producer, Tom Wilson. The Velvets took an instant dislike to the Mothers, having formed a prejudice against West Coast bands in general, and seeing Zappa, a talented and articulate musician, as a rival. Opening night, Tuesday 3 May, began inauspiciously, when Nico drove them down from the Castle for the show. 'Before we even got out of the driveway she had scraped the car on the gates and pulled the chrome off the rental. Oh my God!' recalls Moe, who feared for her life as Nico negotiated the winding road down from the hills. 'And when we got to the place she hit a parking sign.' Celebrity guests at the opening included the actors Dennis Hopper and Ryan O'Neal, and Sonny and Cher, who left early – telling journalists how unimpressed they were. 'We were a flop,' wails Mary Woronov. 'Cher looked in and then said the Velvets should go back underground.' Once again, journalists evinced more interest in Andy and his 'superstars' than in the Velvet Underground, none of whom was mentioned by name in the reviews. Rather, the musicians were mocked for their androgynous appearance, with one local journalist writing that, 'Their drummer is a girl who looks like a boy.' 'They were very depressed that they weren't received in a better way,' says Patti Elam, who was also staying at the Castle with her husband the singer Barry McGuire, who had just had his huge hit, 'Eve of Destruction'. 'They were all very down about it.' If so, Mary Woronov maintains that the depression didn't last. 'It's hard to feel down when you've got so much fucking amphetamine.'
Two days later, the Trip closed, due to an unrelated legal dispute, leaving them in limbo in Los Angeles. With time on their hands, the Velvets went into TTG Studios, a big old building off Sunset Boulevard, to re-record some songs for Tom Wilson. He wanted new cuts of 'Heroin', 'I'm Waiting for the Man' and 'Venus in Furs'. On these LA recordings, the versions heard on _The Velvet Underground & Nico_, Lou didn't sing so much as speak the lyrics, enunciating his words slowly and carefully, which was an effective approach for an artist with a limited voice. Indeed, he never sounded more authoritative than when he intoned the opening verse of 'I'm Waiting for the Man'. The listener was transfixed.
There was still time to kill before the band's next engagement in San Francisco, and they were bored at the Castle. Although the views from the house were spectacular, most of the band members couldn't drive, and they felt stranded in the hills without a car. Andy paid for Moe to take a taxi to church on Sunday, which briefly got her out of the house. To entertain the others, a trip was organized to Venice Beach. But when they arrived they sat in the car scowling at the people sunbathing on the sand. 'A tan? Uh! We were horrified,' snorts Mary, who concedes that this studied anti-Californianism was a pose. It bolstered their self-image as sophisticated New Yorkers to look down on the frivolous Californians, with their suntans and happy pop music. 'Monday, Monday' by the Mamas & the Papas was playing almost constantly on the radio in LA at the time. They professed to loathe it.
This East–West culture clash became marked when the show transferred to Bill Graham's Fillmore Auditorium in San Francisco, the epicentre of the West Coast music scene and hippie culture, at the end of the month. Graham tried to relate to Andy and Paul Morrissey when they met at the venue as he would to the managers of hip Bay Area bands, bullshitting them that he couldn't pay them much bread, man, but he believed in the same things they did, and the show was going to be far out. The New Yorkers – who disdained hippie patois – could barely conceal their contempt. As they talked to the promoter, Paul ate a tangerine, carelessly dropping the peelings on the floor of Bill's theatre, which annoyed him. Then the conversation turned to drugs. Paul posited the theory that musicians performed better on heroin than on LSD, suggesting outrageously that heroin might be healthy. If he meant to antagonize Graham, he succeeded. 'You disgusting germs from New York!' the promoter exploded. 'Here we are, trying to clean up everything, and you come here with your disgusting minds... and whips!'
As the Velvets went on stage that night, he hissed: 'I hope you fuckers bomb.'
They did. The San Francisco audience didn't like the show at all, and reviews were scathing. Ralph Gleason mocked the Velvets in the _San Francisco Chronicle_ as the Velvet Underpants, writing that, 'It was all very campy and very Greenwich Village sick.' Gleason went on to co-found _Rolling Stone_ , which quickly became the pre-eminent journal of popular music. The magazine ignored the Velvet Underground at first, and seemed suspicious of Lou for many years. Part of the problem was that the Velvets were out of step with the prevailing love-and-peace culture, of which the magazine was part. 'Let's say we were a little bit sarcastic about the love thing, which we were right about, because look what happened,' Lou said. 'They thought acid was going to solve everything. You take acid and you'll solve the problems of the universe. And we just said, "Bullshit, you people are fucked. That's not the way it is and you're kidding yourselves." And they hated us.'
At the end of what had been a fairly disastrous West Coast expedition, Lou fell ill with hepatitis after injecting speed. He was admitted to hospital when he returned to New York. He was still there on 14 July, when he read in _The New York Times_ that Delmore Schwartz had died of a heart attack in a hotel near Times Square aged fifty-two. His body lay in the morgue for two days before it was identified. Lou had tried to see his teacher a short time before, but Schwartz had turned him away. He had become even crazier, believing that his students were conspiring against him. 'He had a theory that Nelson Rockefeller was paying everyone at Syracuse to spy on him,' says Erin Clermont, who was one of the last of the Syracuse gang to see the writer. Lou left hospital to attend the funeral. He mourned for Delmore, and never forgot him. 'O Delmore how I miss you,' he wrote towards the end of his own life. 'You inspired me to write. You were the greatest man I ever met.' After the funeral he went home to Freeport to recuperate. 'He came home when he had hepatitis, and my parents cared for him,' says his sister, Bunny. 'They shipped me out of the house on a teen tour. Yet another family secret.' Then Lou found himself a new apartment in the city.
Lou maintained a home in Manhattan for almost fifty years. He moved frequently during this time, living at more than twenty addresses, mostly within a few favourite areas. In the early years, he lived predominantly on the cheap Lower East Side and in bohemian Greenwich Village, at a time when it was still possible to find an apartment in the city for little money. Even Lou, with the modest income he made with the Velvets, could afford a place in 1966, especially if he shared the rent. After Ludlow Street, he shared with John Cale for a while at 450 Grand Street, a decrepit Lower East Side walk-up. While the band were playing at the Dom one night a thief got in and stole Lou's record collection, leaving sneaker prints on his bed. This became known as the Great Sneaker Robbery. In the summer of 1966 Lou moved to the Village for the first time, living initially at 86 West 3rd Street, one block from Washington Square. The apartment consisted of one large room lined with mirrors. There was an elaborate furnace in the form of a golden dragon and, strangely, a coffin. Andy, Ondine, Rotten Rita and Ingrid Superstar were among friends who partied here with Lou. Another friend, Stanley Amos, lived next door. Stanley was an art critic renowned for his parties. He gave his guests bags of glitter, which they would toss in the air when they were high on LSD, so the glitter fell on them in showers while they tripped. But Lou's main drug interest was speed. Obtaining, trading and using methamphetamine occupied an increasing amount of his time. He had become an amateur pharmacologist, reading up on drugs, learning their various names and properties, applying the same attention to detail to his habit as he did to his music. 'Lou and Ondine would have furious fights over trading Desoxyn for Obetrols,' Warhol recalled in his autobiography, Desoxyn being a pharmaceutical form of methamphetamine and Lou's favourite pill. 
'And Rotten Rita used to come in with his home-made speed that everyone knew was the worst in the world.'
'That's why he's rotten,' said Lou, who called another drug connection the Turtle because he was always late.
Meth is an aphrodisiac, and A-men took a carnal interest in Lou as a handsome young man of ambivalent sexuality, but despite his flirtatious manner he didn't seem interested in having sex with any of them. 'He wanted to be wanted, and to be desirable, [but] it was mysterious. Who was he sleeping with?' asks Danny Fields. Lou's fling with Nico aside, he seemed to be into transvestites at this stage in his life, people like Candy Darling, who inspired one of his most tender songs, 'Candy Says', a sympathetic ballad written during this period about a young man who had come to hate his body, as Lou's friend James Slattery (aka Candy) had. He called his penis his flaw. Lou was also close to a transvestite known as Brandy Alexander. 'There were a bunch of them,' says Danny. 'When we [first] knew him his girlfriends were boys – drag queens... We assumed they were sexual relationships. [Guys said] "Forget it, you'll never get Lou unless you are a drag queen," because although he was said to be gay we never knew any guy in our world he had sex with, except drag queens [supposedly].'
Lou had allowed himself to be more effeminate since he joined the camp circus of the Silver Factory. Cale recalls that Lou was 'very full of himself and faggy in those days', which is how he acquired the nickname Lulu. 'He wanted to be the queen bitch and spit out the sharpest rebukes of anyone around.' It wasn't just a pose. In the evening he toured the gay and after-hours bars of the city with Billy Name, looking for action. 'I loved after-hours bars. It's where I first saw someone beaten to death,' Lou once said. The bars were a good place to get material for songs, and meet lovers. 'We were never lovers,' says Billy. 'We played together, though. We used to go to the after-hours bars, after your regular bars closed at 4 a.m.... I remember we went into one, and Lou and I separated, and I looked over and on the stage area René Ricard was giving Lou a blow-job. It sort of surprised me.' René was a camp, witty aesthete, a poet and art critic, who appeared in several Warhol movies, including his then most recent picture, _The Chelsea Girls_. 'So that was Lou with René. But then Lou would be with Nico. He was bisexual.'
While he was evidently attracted to men, Lou maintained concurrent heterosexual relationships. He still saw Erin Clermont, who had moved to the city after leaving Syracuse. He also kept in touch with Shelley, becoming quite upset when she got married in December 1965, as if he still had a claim on her. Lou tried to persuade her to leave her husband, saying he was lonely. Shelley felt sorry for Lou, and comforted him. 'I did feel bad for him, because Lou was so lonely.' But she stayed with her husband. After an attempt to win Shelley back, he wrote 'Pale Blue Eyes', one of his two greatest love songs, the other being 'Perfect Day'. Confusingly, Shelley's eyes are hazel, but she and Lou agreed that 'Pale Blue Eyes' was about her. Lou started the song by singing that sometimes he was happy, sometimes sad, while there was someone who made him mad. 'I'm making him mad, because I won't leave my husband, [and] I'm the "mountain top",' explains Shelley, referring to the next verse. It was the final verse, in which Lou sang about his beloved being married, making their love a sin, that caused her embarrassment when the song was recorded. 'Thanks a lot, Lou.' She assiduously avoided listening to it.
Although Lou complained of being lonely, there was yet another girlfriend in his life. One day, Andy and René came by the apartment on West 3rd Street with a young woman who'd been living there previously. She had left some belongings behind, including her bed, which Lou was using. 'I want my bed back,' she told him. It was the start of a relationship. The girl's name was Barbara Hodes. She was still only a teenager, having just left school in upstate New York to attend college in the city. A fashionable, middle-class, well-connected young woman who became a clothes designer in adult life, she mixed in the same circles as Lou. She attended the opening of Betsey Johnson's boutique, Paraphernalia, in March, for example, an event at which the Velvets performed. Lou had been at Syracuse with Betsey, who later married John Cale. Barbara also went to the Dom when the Velvets were performing. She had taken a shine to Lou after watching him at a distance, and now they began to date. 'Lou was my first boyfriend,' she says. 'We were together, on and off, for over [ten] years.'
When Barbara moved to West 15th Street in Chelsea that summer, Lou moved in with her. The relationship was serious enough for Lou to telephone her parents to tell them that he was seeing their daughter, asking them not to be alarmed, considering the fact that she was under age. Lou's solicitude impressed Barbara, or Babs, as he called her. Lou was a rebel, but a middle-class rebel with a conventional side. He was protective of her, wrote her soppy letters and called to say he missed her when they were apart. But she also knew a darker side to Lou and, like Shelley, never entertained the idea that they would marry. 'There were so many different sides to Lou. He could be really sweet – with his sister he was very sweet – then he could turn around and be a complete prick.' Lou seemed to relish that reputation. He was funny, clever, talented, quirky, all those things, 'sometimes very nice, sometimes maudlin,' she says, 'and sometimes he was a prick'.
While he was at the heart of the art-music scene in New York, Barbara recalls that Lou had surprisingly conventional tastes in many respects. He subsisted on junk food, being particularly fond of hot dogs, though he also went through phases of eating macrobiotic and other faddy food. He was prone to fads. He loved Top Forty pop music. For a long time, his favourite record was 'You've Lost That Loving Feeling' by the Righteous Brothers. He liked to watch TV, and became an early fan of _Star Trek_. Lou and Barbara would go to a friend's apartment to follow Captain Kirk's adventures. Less attractively, Lou was tight with money, leaving Barbara to pay the rent and rarely buying her gifts. He used drugs, and he was vain. 'Lou had glasses and a lazy eye and he would never wear his glasses. He would never talk about it.' He was also highly unpredictable, loving and caring one minute and spiteful the next. 'He was just nasty. There was a part of Lou that was this nurturing, loving, extremely tender person. I have love notes that [are] mushy. There was a whole mushy side to him... Then six hours later a wall would come down,' says Barbara. 'He would do these outrageous [things]. He was self-destructive.'
Lou's sexuality was as enigmatic to girlfriends as it was to his gay male friends. 'Did Lou have sex with men? Yeah,' says Barbara, but she never knew the details. 'What kind was it? I don't know. I didn't ask. He didn't offer.' On this basis, they were an item, on and off, into the mid-1970s. Still, they didn't stay together at West 15th Street for long. Within a few months Lou had moved back to the Lower East Side, renting a walk-up on East 10th Street. Lou lived in a succession of small Manhattan apartments of this type, which was all he could afford for many years. He didn't seem to give much thought to where he lived. Although neat and tidy, he was remarkably careless in some aspects of life. Barbara recalls that Andy Warhol gave Lou a gift of his cow wallpaper, which was already collectible and valuable. Most people framed it as art. Lou actually stuck the paper on the wall of his rental, leaving it for the next tenant when he moved out a few months later.
Andy's movie _The Chelsea Girls_ was a surprise commercial success when it was released in September 1966, at which point Warhol became more focused on films than on the Velvet Underground. The fact that Lou had been difficult about contracts had also put strain on the relationship. Yet they continued to work together for the time being.
The plan had been to bring the Exploding Plastic Inevitable back to the Dom after California, but when they returned to New York they discovered that the main room had been hired out to rival promoters and was now operating as a club called the Balloon Farm. Unable to secure the lease, Andy's troupe reluctantly agreed to stage their show at the Balloon Farm two nights a week as guest artists, but it wasn't the same and reviews were lukewarm at best. The _Village Voice_ likened the evening to 'zombie night at a Polish casino'. Like anything trendy, the Exploding Plastic Inevitable had quickly become passé. When Cale fell sick, Richard Mishkin and Henry Flynt took turns sitting in with the Velvets. Lou and Richard had stayed in touch since college, despite the disapproval of Richard's mother. 'When I was living at home my mother found a hypodermic needle in my drawer. "What's this?"... I said, "It's Lewis's." "Lewis the Creep!" That's what she called him.' The stand-ins found the songs easy to play. Lou told them the chords, how many minutes each song should last, 'then we would just start jamming,' recalls Henry, who notes that Lou had enough spare energy to get up and dance between sets. No doubt this was speed energy. 'I think he was the greatest disco dancer I ever saw.'
The show ran until October at the Balloon Farm, at which point Nico decided that she wanted to continue performing as a solo act downstairs in the basement bar. She had a child to support, after all. She needed somebody to accompany her on guitar. Paul Morrissey asked Lou if he would do it. He refused, and discouraged John and Sterling from playing with Nico, arguing that it would be bad for the band's image. Morrissey thought this was very cruel. Finally, Lou and John made a tape for Nico, which she tried to sing along to on stage, becoming so befuddled that she burst into tears. It was a pathetic scene and perhaps deliberately humiliating. If Lou had ever loved Nico, he seemed to hate her now. 'So she photographs great!' he yelled one day. 'I'm not playing with her any more.'
Their producer Tom Wilson had other ideas. He wanted Lou to write a fourth song for Nico for the album, a record which would, after all, bear her name. Paul Morrissey goes so far as to claim that MGM signed the band primarily because of Nico. 'Then Tom said, "Listen, the only thing I don't like about the record is there's not enough Nico. You've got to get another song for Nico. And there's nothing here we can use on the radio, so why don't you get Nico to sing another song that would be right for the radio?"' So Lou wrote 'Sunday Morning'. Inspiration came when he was returning from a party with Sterling and John in the early hours of a Sunday morning. They completed the song at the apartment of Lou's college friend Rosalind Stevenson, who had the presence of mind to film them in the act. 'When I did that footage it was in the spirit of someone who was a film-maker, i.e. me, just for fun, turning my camera on my friends,' she explains. This short piece of silent film is often seen in documentaries about the band, with the finished song dubbed on to it. Although one of the prettiest Velvet Underground tunes, the lyric conveyed the drug-fuelled paranoia of the Factory scene, as expressed in the line 'Watch out the world's behind you.' 'Sunday Morning' was written for Nico, but when the band went into the studio to record it Lou insisted on singing it himself. Paul Morrissey was outraged. 'He sang it! The little creep. He said, "I wanna sing it cause it's gonna be the single."'
Relations with Nico deteriorated further when the Exploding Plastic Inevitable toured Massachusetts, Michigan, Ohio and West Virginia that autumn and winter, with an excursion across the border to Hamilton, Ontario (Canada being the only foreign country the Velvets visited prior to their 1993 reunion). The troupe travelled by station wagon, the mirror ball strapped to the roof. Andy didn't attend every show. He was less interested in them now, and less keen on covering their expenses. 'I'd chase Andy around if I didn't have money for gas to get home,' recalls Moe. Andy would pretend he had no cash, fluttering away.
'Come back here, damn it, and give me some money!' she would yell, running after him.
Lou refused to let Nico sing with them in Columbus. Gerard Malanga noted in his diary that Lou and John were both 'drug sick' at the time, the yellow in Lou's eyes indicating that he had hepatitis again. Andy joined them in Detroit for the World's First Mod Wedding at the Michigan State Fairgrounds, during which a couple actually got married. After the usual mixed-media show, spiced up on this occasion by a cast member attacking a car with a sledgehammer, Andy gave the bride, Randy Rossi, away to her fiancé, Gary Norris. The wedding ceremony was followed by a 'Carnaby Street Fun Festival'. This tacky event fell well short of the artistic heights of the Exploding Plastic Inevitable, reinforcing the image of the Velvets as publicity-seeking charlatans rather than major artists. Hardly anybody outside their circle appreciated that the opposite was true. The hope was that things would change when they released some records.
There had already been an attempt to put out a single. 'All Tomorrow's Parties' was released on MGM's Verve label in July with 'I'll Be Your Mirror' on the B side. For reasons that remain obscure, the record was not widely distributed, and it made no impression on the charts. A second single followed in December. 'Sunday Morning' was the A side, sung by Lou, with 'Femme Fatale' on the reverse, sung by Nico. The fact that Nico featured on three of the four sides released indicated that MGM saw her as the main attraction. But once again little effort was put into promoting the single, which suffered the same fate as the first. It looked as if MGM had lost confidence in the band. The company had signed them for little money on the strength of Warhol's celebrity and Nico's film-star looks. Their trip to Los Angeles, where MGM executives would have seen them play live, had gone badly, as had their visit to San Francisco. The press didn't take them seriously, their music was not on-trend in terms of 'love and peace', and the public didn't seem at all interested. There may have been a feeling within MGM that the company had made a mistake and there was little point putting much effort into promoting the band. There was still no release date for their album.
The Velvets finished the year with two shows in Philadelphia. The full mixed-media show was staged at the YMHA (Young Men's Hebrew Association) in the city in December 1966, with lights, films, dancers, and the Velvets playing. Andy was present. A large crowd came for 'Philadelphia's first happening', up to two thousand people on opening night. 'They came in droves,' noted Judy Altman in the _Philadelphia Daily News._ She added that many walked out during the show and that objects were thrown at the band. 'This is a great town. People curse at you and throw things. Great town,' the reporter quoted an unhappy band member. A year that had begun with such promise, a year of enormous creativity on stage and in the studio, ended with a sense of anti-climax for Lou. He had written and recorded some of the most radical songs of the 1960s, yet he remained almost completely unknown.
## V
## Light and Dark
### 1967–8
IT WAS A thrilling moment for Lou and his band mates when their debut album finally reached the shops in the spring of 1967. Moe was so excited to see the LP in her local record store on Long Island that she bought a copy. 'That was cool.' The album was cool; few albums in the history of rock 'n' roll have been cooler than _The Velvet Underground & Nico_. Above and beyond the extraordinary songs, the cover was a significant work of pop art, designed by one of the foremost artists of the second half of the twentieth century. The LP was issued in a white gatefold sleeve adorned with a stick-on picture of a banana, with Andy Warhol's name and the instruction ' _Peel slowly and see_.' The yellow sticker came away to reveal the meat of the fruit, suggestively tinted pink. Neither the band name nor the album title appeared on the front; that information was printed on the spine and back. 'We thought it would seem better with his name on it,' explained Lou with uncharacteristic modesty (his own name was listed first on the credits inside). 'Produced by Andy Warhol. It was like being on a soup can.'
The back cover featured a colour photo of the band on stage during the Exploding Plastic Inevitable, with Gerard dancing under a barrage of projections. One of these images was of a dancer named Eric Emerson who had featured in a couple of Andy's films. An extrovert of unusual habits – he wore hot pants and taught yodelling – Emerson created a major problem for the band immediately by demanding $500,000 compensation for unauthorized use of his picture. MGM withdrew the LP while it altered the artwork to remove his image, which damaged sales at the critical moment. 'It had started to go up [the charts] and then MGM pulled it, said, "No, until this lawsuit is settled we're pulling it off the shelves," and that busted the rise of the album,' laments Billy Name, who took most of the photographs inside the gatefold. 'When somebody does something like that from within your troupe it's really a pisser.'
Although MGM spared no expense on the production of the album sleeve, sanctioning a complex printing process to create the peelable banana, the publicity campaign was crude. It also traded heavily on Warhol's name. 'What happens when the daddy of Pop Art goes Pop Music?' asked print ads. 'The most underground album of all!' Even worse, there was an implied apology. 'Sorry, no home movies. But the album does feature Andy's Velvet Underground (they play funny instruments). Plus this year's Pop Girl, Nico (she sings groovy).' The mainstream press ignored an album that was presented almost as a joke. The few reviews it received, mostly in small magazines and in the underground press, were mixed. 'The Velvets are an important group, and this album has some major work,' Richard Goldstein wrote cautiously in New York's _Village Voice_ , one of their most positive reviews, but while he praised 'Heroin' he dismissed 'Black Angel's Death Song' and 'European Son' as pretentious.
The album crept up to 171st place on the _Billboard_ chart of America's Top 200 albums, only to slip into oblivion when it was briefly withdrawn from sale due to the Emerson problem. This was as high as the Velvets charted in the USA in the 1960s. In sales terms, they were a total failure. 'There was no audience for them,' crows Paul Morrissey, who had little regard for the music. 'Nobody bought their albums.' This was not literally true. From the beginning, the band had a small, discerning audience who not only bought their music but were profoundly affected by it. Brian Eno later observed that while only a few thousand people bought _The Velvet Underground & Nico_, it seemed as if every one of them started a band, and it is true that a remarkable number of people who became significant in the art-rock scene in the 1970s were influenced by the Banana Album.
Not least among these was nineteen-year-old David Bowie, who heard an advance copy of the record in suburban London, where he was beginning his career. It was a revelation. 'This music was so savagely indifferent to my feelings. It didn't care if I liked it or not. It could [not] give a fuck. It was completely preoccupied with a world unseen by my suburban eyes,' was how he described the experience of hearing _The Velvet Underground & Nico_ for the first time. 'One after another, tracks squirmed and slid their tentacles around my mind. Evil and sexual, the violin [ _sic_ ] of "Venus in Furs", like some pre-Christian pagan-revival music. The distant, icy, "Fuck me if you want, I really don't give a damn" voice of Nico's "Femme Fatale". What an extraordinary one-two knockout punch this affair was. By the time "European Son" was done, I was so excited I couldn't move. It was late in the evening and I couldn't think of anyone to call, so I played it again and again and again.' Bowie soon began to perform 'I'm Waiting for the Man' as part of his set.
There were several such acolytes in the USA. The Velvets were appearing at a venue known as the Gymnasium in New York in the spring of 1967 when their support act cancelled. A teenager named Chris Stein filled in at short notice with his band. 'So we got on the subway with our guitars and went up to [the] Gymnasium. It actually was an old gymnasium... The place was big and echoey, and kind of dark. The Velvets used the ambience of the room to their advantage.' Chris and his buddies were 'totally taken aback' by the band's music, at the Gymnasium gig and on record. 'My friends and I were really amused by the Velvets' record when it came out among all the love and peace.' By way of context, the Beatles were just about to release _Sgt Pepper's Lonely Hearts Club Band_. 'The darkness of it in the midst of that was quite a contrast... To this day, the first record sounds so modern in its weird fuzziness.' Stein later found fame with his girlfriend Debbie Harry as Blondie, one of several bands to emerge in the 1970s that were inspired by the Velvets. Others include Roxy Music (featuring Brian Eno), Talking Heads and U2.
The fact that the Velvets had caught the attention of a select number of smart young people who would build on their ideas, often with far greater commercial success, was unknowable in 1967, and probably of little comfort to Lou had he been able to see into the future. Having had their record released at last, the band was immediately frustrated not to hear their songs on the radio, and dismayed to discover that the album, and their subsequent releases, were not well distributed. 'We were disappointed,' admits Moe. 'Verve didn't know what to do with us. We'd go play in St Louis or something and people [would tell us] "We can't find your record."' Lou began to wonder if their association with Warhol was holding them back.
Andy had been an important mentor, showing more interest in him than most of the kids who hung around the Silver Factory. He took the trouble to give Lou advice, in a way he rarely did, and the younger man heeded what he said, which was just as unusual. 'He and Lou would talk. Lou would ask him, "Do you like this, or that?" That's the only time I heard him say things like that,' says Mary Woronov. Now Lou began to resent Andy's proprietorial position. 'Lou didn't like the fact of Andy owning the Velvets, and Lou thought Andy was selfish,' says mutual friend Brigid Berlin, who notes that it was around this time that Lou began to call Andy by the nickname Drella, a portmanteau of Dracula and Cinderella that some friends thought insulting. Andy himself didn't like the name. 'He was the one person that called Andy "Drella",' adds Brigid. 'He didn't like the fact it was Andy Warhol's Velvet Underground. He wanted to be on his own.'
Artist and pupil had a frank talk about the future. 'Do you want to keep just playing museums from now on and the art festivals?' Andy asked, referring to a show they had just done at the Chrysler Art Museum in Massachusetts. 'Or do you want to start moving into other areas?' Andy generally encouraged his protégés to move on and do their own thing, but he surely didn't expect Lou to fire him, which is what Lou claimed happened. The day he fired Andy as their manager, and Andy was so shocked that he called him a rat ('It was the worst thing he could think of') became one of Lou's favourite stories, one he told many times in interviews and wove into the lyrics of 'Work' on _Songs for Drella_. Andy said that it wasn't as dramatic as Lou made out, while Paul Morrissey maintains that _he_ let the Velvets go. 'I said, "The management is over, do what you want." It was done legally. He always wanted to be in charge with John Cale [under his thumb]. He was, I think, the worst person I ever got involved with... he was not nice. It was difficult dealing with him.' The band continued to appear under Andy's name for the time being, but henceforth they were increasingly independent. And they were actively looking for new management.
In May 1967 they played two nights at the Boston Tea Party, a hip new club on Berkeley Street in Boston. 'It was just a big open thing, no seats,' says Moe of a venue where they played more frequently than anywhere over the next few years. Nico showed up for the second night with Andy, direct from the Cannes Film Festival, where they had been promoting _The Chelsea Girls_. She had recently recorded four of Lou's songs for her solo album _Chelsea Girl_ , including the track 'Chelsea Girls', in which Lou namechecked Factory friends, including Brigid, Ondine, Ingrid Superstar and Mary Woronov. He also played guitar on the album. But when Nico asked to join the Velvets on stage in Boston, Lou refused. Nico was no longer their chanteuse, and wouldn't appear on their next record. To make it official, the original four members signed a new agreement with MGM, 'the 1967 agreement', cutting her out of the deal.
The Boston gig was arranged by a businessman named Steve Sesnick, who had an interest in the club and ambitions to manage the band. Sesnick was only a few months older than Lou, but he had the manner and appearance of a more mature person. He was thickset, dressed conservatively and liked to smoke cigars, all of which lent him what Moe describes as an air of 'false pomposity'. She liked him nonetheless. 'He was fun... he had grand plans [for us], which was cool.' Cale was less keen. Indeed, he came to blame Sesnick for creating a fatal rift between Lou and himself. 'Suddenly Lou was calling us his band while Sesnick was trying to get him to go solo.'
In June, the Velvets performed at a garden party at the country estate of the architect Philip Johnson in New Canaan, Connecticut. They played in front of Johnson's Glass House. Andy was among the guests at the party, which was a benefit for the choreographer Merce Cunningham. It was a sophisticated, enjoyable evening. Afterwards, the band was chauffeur-driven back to New York at Johnson's expense. During the long drive they discussed their future and made the fateful decision to hire Sesnick as their manager. It was what Lou wanted at the time, though he would come to regret it bitterly.
That summer, Lou moved West again. His new home was a loft apartment in the fur district of New York, at Seventh Avenue and West 28th Street. The smell of hides was pungent, while tufts of fur got everywhere, including the cracks in the floor. But when the furriers went home at night, he could play his guitar as loud as he pleased. 'You have to hear this,' he told his girlfriend Barbara Hodes, when she came over to hang handmade curtains in the loft to make the place more homely for Lou. 'It was all feedback,' she says of the loud, discordant music. These were songs for the second Velvet Underground album, _White Light/White Heat_ , which the band recorded at Mayfair Sound on Times Square in September 1967.
It could be argued that Lou had two principal songwriting styles. In the first place he liked to tell stories with named characters like a novelist. Such songs were often inspired by things he had seen, done or heard about. Sometimes he used the names of real people he knew, as in 'Candy Says'. His second style was more impressionistic. Songs didn't tell a story but relied upon the artful juxtaposition of phrases to create images to evoke feelings. 'White Light/White Heat', the title track on the second album, was such a song, open to a range of interpretation. Like most songwriters, Lou frequently repeated words. In this case 'white', alternated with 'light' and 'heat', created two phrases, repeated over a two-chord guitar progression (G5/D5). Played loudly in a style that would become known as heavy metal, the song achieved considerable binary power. One of the simplest tunes Lou ever wrote, 'White Light/White Heat' was also one of his strongest and most enduring compositions. Amphetamine use was an obvious influence. 'White heat' was then a slang term for a speed rush, and Lou made reference to speed freaks in his hyper vocal.
Song ideas were precious. Lou noted them down and hoarded them if he didn't have an immediate use in mind, sometimes returning to an idea years later. For the Velvets' second album, he reached back to college for the story he wrote for Shelley when they were exchanging letters in their summer holidays, about the boy who mailed himself to his girlfriend. Lou now gave the story of Marsha and Waldo to John Cale to narrate. He did so over a grinding backbeat with squeals of feedback guitar, creating 'The Gift', an unusual and funny highlight of the album. 'Lady Godiva's Operation' was another short story set to music, partly inspired by Lou's ECT experience, but less successful than 'The Gift'. The story – told by Cale, with help from Lou – was unclear, and the music meandered tediously to its conclusion. In contrast to these long, wordy recordings, 'Here She Comes Now' was brief, insistent and sexy. Another short song, 'I Heard Her Call My Name', was truly manic, Moe maintaining a frantic beat as Lou delivered a speed rap ending with a mind-splitting guitar solo.
The magnum opus was 'Sister Ray', a droning jam named after a drag queen of Lou's acquaintance that had become a staple of their act. The lyric introduced a cast of weird characters, including Duck, Sally and Miss Rayon (Lou was good at names), as well as an anonymous sailor. The lyric blended his two basic songwriting styles: a semi-abstract story with use of repetition and drug slang, also playing with the sounds of words, stuttering and jamming words together. Some of it sounded like nonsense, but a murky story of drug use and murder emerged, reminiscent of Tralala's tale in Hubert Selby's _Last Exit to Brooklyn_ and the Mardi Gras chapter in _City of Night_ , favourite books whose influence ran throughout Lou's songwriting. Guitars created a dense musical backdrop to the words, through which the bright notes of Cale's electric organ broke after several minutes. Sterling and Lou turned up their amps to try to drown John out in what became an epic battle of musical power, while Moe struggled to keep the beat. '"Sister Ray" could have turned into just noise. It's very easy to make noise,' she says. 'Lou goes flying off, and Cale is going crazy, but there is a beat going on [and] when Lou has stopped fooling around, here is something to come back to. Not only to come back to, but while the audience is listening to this cacophony they are [still] hearing a beat. If music doesn't have a beat it's not music – it's noise.'
Lou introduced a new character in the third verse, Cecil, who shoots the sailor. Lou objected to the crime in a whiny voice, saying Cecil shouldn't have done that and asking for a dollar, which led to a rare explicit sexual reference, 'sucking on my ding-dong'. Lou had a talent for inventing slang terms like 'ding-dong' that sounded right and had the benefit of not dating. After a quarter of an hour of churning music, the band let the volume drop and Lou reprised the lyric in the lull. Moe picked up the beat, and all four thrashed their way to the end. The whole song lasted seventeen minutes. They recorded it once. Mortified by what he heard, the studio engineer walked out.
'Sister Ray' was one of the most extreme and powerful tracks Lou ever created, but in the mixing stage he discovered that the recording of this, and other songs on the album, was distorted because the band had played too loudly in the studio. It was often hard to make out the words. 'I wrote the lyric – I think – while we were riding to and from a gig,' Lou said of 'Sister Ray'. 'It has such an attitude and feel to it, even if you don't understand a word of it. It sounds sleazy... It's just a parade of New York night denizens. But of course it's hard to understand a word of it. Which is a shame...' With limited studio time at their disposal, they were unable to fix the problem. Lou remained unhappy with the result, but the distorted nature of the recording became part of the essential character of what is one of his most iconic and important records. It is also true to say that 'Sister Ray' worked just as well in concert.
'White Light/White Heat' was another muddy recording. Released as a most unlikely single in November 1967, at a time when the Monkees were topping the charts with 'Daydream Believer', it flopped. The eponymous album followed in January 1968. To illustrate the record, Lou chose the image of a skull tattoo from Andy Warhol's film _Bike Boy_ , which Billy Name enlarged for the cover. 'It was black and white, and we did it black on black,' says Billy. 'That was the original black tattoo on the black album cover.' This was a dark, difficult album in every sense, the antithesis of a commercial record, and it managed 199th place on the _Billboard_ Top 200, an even lower position than their debut. There was virtually no airplay, and reviews were sparse. Many publications, including the new _Rolling Stone_ , ignored it. Other publications slated it, while a precious few critics recognized that the Velvets were doing unusual and interesting work. 'Probably the most blatant injustice perpetrated by the media on the contemporary music scene has been the virtual black-out coverage of the Velvet Underground,' Wayne McGuire wrote in _Crawdaddy_ , praising _White Light/White Heat_ as genuinely original and subversive, unlike the music of more popular posturers, such as the Doors, predicting that the Velvets would be vindicated in time. 'Put simply, the Velvet Underground is the most vital and significant group in the world today.' McGuire was proved right.
In the long term, the first two Velvet Underground albums were recognized as being among the most innovative rock records of the 1960s, and although initial sales were poor they have been selling for decades. _The Velvet Underground & Nico_ had sold nearly 3 million copies worldwide by the time of Lou's death in 2013, making it the most successful Velvet Underground album. _White Light/White Heat_ , a more challenging listen, sold over half a million copies in that time, putting it in second place in the league table of VU sales.* And as Chris Stein of Blondie notes above, the music, like all great art, has retained an extraordinarily modern quality through the years.
While Lou and John enjoyed their foray into extreme experimentation, the music of _White Light/White Heat_ was John's forte more than Lou's, and now the band turned away from sonic excess. When they returned to the studio in February 1968, they recorded more melodic songs, though the lyrics were just as sophisticated. These numbers included 'Stephanie Says', a pretty song about a girl suffering with depression. Lou used the 'she says' conceit several more times over the years, also creating 'Lisa Says' and 'Caroline Says'. Cale recalls that 'there was heroin involved' in the session for 'Stephanie Says', which has an anaesthetized sound. In contrast, 'Temptation Inside Your Heart' was a lively, happy sort of song, punctuated with laughter and doo-wop harmonies. It would be almost twenty years before these recordings were officially released, but this was the way forward for the band.
Under Steve Sesnick's management, the Velvets were making an effort to be more accessible and thereby, hopefully, more popular. As well as recording less confrontational music, they began to play more shows, though not in New York. They focused instead on cities like Boston, Philadelphia and San Francisco, where they began to build a student following. There was also a change of image. Eschewing austere black clothing and shades, Lou started to dress in the foppish fashions of 1968, wearing paisley shirts with bell-bottom trousers and wide belts. He let his hair grow, looking like a member of any mainstream band of the day. John remained a more radical dresser, appearing as unorthodox as he sounded in publicity photos for the second album, for which he wore a white shirt with a huge collar and elaborate cuffs that ended in bows. This was partly the influence of fashion designer Betsey Johnson, whom he married in April 1968.
Lou seemed to resent the fact that Betsey was ambitious for her new husband, pushing John forward in the band. A few days after the wedding the men almost came to blows onstage. Both had started to drink heavily. Lou liked Scotch whisky, preferably Cutty Sark or Johnny Walker. On top of speed, the booze made him aggressive. He and John were struggling for control of the group. 'They played brilliantly together, and they worked brilliantly together, but in a management-type situation Lou and John were always head to head, wanting to manage the situation. They both wanted to be the lead,' comments Billy Name. 'They couldn't agree.' Brigid Berlin's sympathies were with John. 'You know, Lou was a strange person. I had a lot of fun with him, but he had a side to him. He was very cranky, and he could be mean,' she says. 'John was a much nicer person than Lou was. Lou was a troublemaker. Lou had an _act_ going all the time.'
Lou had stayed in touch with Lincoln Swados since they had shared a room together at Syracuse. While he was busy with his music, Lincoln's life had become increasingly troubled. Around the time the Banana Album was released he attempted suicide by stepping in front of a subway train in New York. He survived, with the calamitous loss of his right arm and right leg, after which he took to living in an old storefront on the Lower East Side and singing on street corners. He also had a job in the box office at La MaMa, a theatre on East 9th Street where his sister was beginning a successful career as a composer. Elizabeth Swados had a difficult relationship with her brother. 'I began to wince at the strange little songs he sang in the box office and the sometimes nasty manner in which he treated the foreign directors and actors,' she wrote in a book about their family. 'He didn't wash and I was embarrassed by his body odour.'
Lincoln was back in hospital in the spring of 1968, on suicide watch at Bellevue, when Lou paid him a visit, bumping into a friend of Lincoln's at the lift. Bettye Kronstad was nineteen years old, pale and slim with wavy brown hair, full lips and the look of a girl who might be about to cry. Bettye always felt emotional after seeing Lincoln, whom she knew from La MaMa. She was a sensitive person in any case, affected by a difficult and disrupted childhood. After her parents broke up acrimoniously, and fought over her, she was raised with the help of her grandparents in Pennsylvania, coming to New York in 1967 when she was eighteen. Good looks got her modelling assignments and an audition to be a Bunny girl, but she was currently doing secretarial work at Columbia University, where she was also studying English. Lou may have timed his visit to see Lincoln in the hope of meeting her. 'I'm a friend of Lincoln Swados. My name's Lou Reed,' he introduced himself at the hospital lift, as if his name should mean something to her. Bettye nodded, stepped into the lift and pushed the down button. Lou held the doors open. 'So you'll tell Lincoln you met me?' When Bettye agreed, he let her go. She made a mental note of his appearance as the doors closed. 'I actually didn't really like him at first, because he's not my type. He had the whole rock 'n' roll star thing going on, with the blue-white snap pearl buttons, shirt open almost to his navel, couple of chains, hair coiffed, blue bell-bottoms, puddling out just perfectly.'
The next time she visited Lincoln, he asked, 'So you met my room mate – what do you think?'
'He's the rock star, right? The Velvet Underground?'
'Yeah.'
'Oh, come on, Lincoln, you know he's not my type.'
'He's actually really not like that at all,' said Lincoln, evidently aware of the fact that Lou could make a bad impression. 'He isn't the arrogant person he's coming off as. He is actually a nice guy... He's a good writer.' An interest in writing was something all three of them had in common. Lincoln and Lou had evidently discussed Bettye. 'He wants your phone number. May I give him your phone number?' he asked, like a teenager enquiring on behalf of a shy friend (Lou was twenty-six at the time). Bettye was reluctant. 'Oh come on, Bettye, give the guy a shot. He likes you and he'd like to see you. Would you at least [let him] give you a call?'
Bettye was living in student accommodation near Columbia. Lou telephoned her several times over the next few days. He was at his parents' house in Freeport. She got the impression that he was living on Long Island. He asked if he could come into the city and take her out. They agreed to meet at the West End, a bar near the university. 'I met him there and all he basically did was rave about John Cale, [and] he drank a lot. He was talking about how the album [hadn't] gone right, and nobody was listening to him. And it was just about Cale, Cale, Cale.' Lou said John was a madman who never listened to him. They fought constantly and John was trying to take the band away from him. 'I'm the leader!' he exclaimed, as he drank Scotch. 'They are not listening to me in the studio.' It became repetitive. 'John is doing his thing, and we don't want [him] droning on and on and on...'
It was a typically disastrous Lou date. By the end of the evening Bettye was bored and Lou was drunk. He insisted on walking her home. 'I think I walked _him_ home... He opened the door to me at my old apartment building, which was at the corner of Riverside Drive and 116th Street, down the hill. I seriously wondered how he was going to get back up the hill to the subway station at 116th and Broadway. But he was very polite. He thanked me.' He said they should meet again.
Columbia shut early that year because of student demonstrations on campus, culminating in a sit-in. The whole country seemed to be in uproar, over civil rights and the war in Vietnam. In recent months there had been a major race riot in Detroit and a stand-off between peace protestors and police at the Pentagon. Then Dr Martin Luther King Jr was assassinated in Memphis on 4 April 1968, causing national outrage. The young Laurie Anderson, studying art history at Barnard College in New York, was among the students who occupied the campus of Columbia University that spring, protesting about race relations and other matters. Unusually for someone of his generation and education, Lou remained uninterested in such issues, showing little or no engagement with politics until much later in life. 'I worked in the anti-war movement. I worked for mobilization against the war in Vietnam,' says his friend Erin Clermont. 'I never told Lou I was working there. It seemed completely irrelevant to his life. I never heard him say _a word_ about Vietnam.'
With Columbia shutting early, Bettye decided to go to Europe for the summer with friends. Lou wanted to see her again before she left, pestering her with calls up until her last night in New York. She told him that it was impossible to meet in the circumstances, and thought his behaviour odd. As she would discover, neediness was a trait in Lou's personality. 'I didn't mean to blow him off, but there were other people I wanted to see the night before I left the country than a person I had only met [twice].' Lou asked her to let Lincoln know when she got back. It was several weeks before Bettye had cause to think of Lou again. She was in Paris, at a café on the Left Bank, when she opened a US newspaper and saw an article about the Velvet Underground. 'I read the newspaper article and put it down. So that's Lou. Hmm. So that actually interested me. That's how I began to think of him pretty seriously.'
Even though they weren't in business together any more, Lou continued to socialize with Andy Warhol, who had recently moved his studio downtown to 33 Union Square West. A short walk across the square was Max's Kansas City, a restaurant that became a club house for the artist and his friends.
Max's Kansas City (named for a cut of steak) was long and thin, with a bar and booths in front and a mezzanine where bands played. The Warhol crowd congregated in the downstairs back room, around a circular table within the glow of a light sculpture. 'We went to Max's every night [and] sat at a big round table at the back with a Dan Flavin big red light installation – everyone looked really good in that lighting. Red is really good for the face,' recalls David Croland, a model who had recently joined the Factory crowd. Mary Woronov observes more tartly that the Flavin made everyone look like they were broiling in an oven. 'There was always a spillover table,' adds Croland. 'Sometimes it was like twenty of us, on one side of the room, and the whole rest of the room was staring at us.' The gang could be as outrageous as they liked in the back room. 'My younger brother, for his twelfth birthday, I took him to the city to see [a movie]. Then we went to Max's. Brigid came and sat down, and all of a sudden she pulled out her needle and pulls down her pants' – Moe Tucker cites an example of the open drug use – '"Brigid, my brother! He's twelve years old!"' There was exhibitionism of all kinds. Factory loon Andrea 'Whips' Feldman regularly did a striptease on the table while singing 'Everything's Coming Up Roses'. Andy kept an open tab, which he settled by giving artwork to the owner, Mickey Ruskin, so all his friends ate for free, Lou included. Lou spent a lot of time at Max's over the next couple of years. One night he bumped into Erin and a girlfriend at the restaurant and tried to talk them into a threesome. 'Didn't happen, [but] it was sort of a cool night. Max's was incredible,' giggles Erin. 'Anything could happen there, it was just crazy.'
Andy's new studio, on the sixth floor of the nearby Decker Building, was split into two parts. There was a dark back area painted black for developing film, where Billy Name resided, and a white front office for business. The new studio was not as informal as the Silver Factory, but crazy people still floated in and out. Few were nuttier than Valerie Solanas, founder of the Society for Cutting Up Men (SCUM). That summer she presented Andy with a film script, _Up Your Ass_. It was so obscene that he suspected she might be an undercover cop trying to entrap him. The artist was on the telephone at the office on the afternoon of Monday, 3 June 1968, a warm day, when he heard a bang, turned and saw Solanas pointing a gun at him. 'Valerie, don't do it!' Andy cried out, feeling like a character in a B-movie speaking movie dialogue. She fired nonetheless. 'I felt a horrible, horrible pain, like a cherry bomb exploding in me.'
Billy opened the dark-room door to see his friend lying in a pool of blood in the white office. He picked him up, asking what had happened.
'Don't, Billy, don't make me laugh, it hurts too much.'
Far from trying to make Andy laugh, Billy was in tears, 'wondering what was going on... then I looked around and there was [art critic] Mario Amaya, who was also shot in the back, and other people just standing around frozen. Everybody was traumatized. And then somebody told me Valerie Solanas had come and shot Andy and then she'd run out. I was there holding him in my arms, crying.'
The news flashed around the world. ANDY WARHOL FIGHTS FOR LIFE. The pop artist had been shot repeatedly, the bullets piercing his stomach, liver, spleen and lungs, and was in a critical condition in a New York hospital.
Lou was with the band on the West Coast, staying at the Beverly Wilshire. He was coming down in the lift the next morning with Steve Sesnick when he saw the headlines. 'In that particular hotel, they put the morning papers on the floor of the elevator,' Sesnick recalled. 'We were both extremely shocked and startled when we looked down and saw the headlines.'
Andy narrowly survived the shooting, spending the next seven weeks in hospital. He was left heavily scarred, and his health was permanently impaired. Lou was slow to call, let alone visit. 'Why didn't you visit me? Where were you?' the artist asked when he finally got in touch. Instead of hurrying to the bedside, Lou acted as if _he_ were at the centre of the drama. Shelley Albin agreed to meet Lou at Max's when he got back from the coast. 'He was scared,' she says. Lou explained that he had recently told Valerie Solanas that women were inferior to men. 'Lou, if he had the right audience, would say women are inferior. They're not as smart, they're not as capable. He would say whatever he had to [in order] to piss off who he was talking to.' Now he was paranoid that Solanas wanted to shoot him, too. He had little real cause for concern. Solanas was arrested within hours of shooting Andy and held in a psychiatric hospital. Hearts quickened at Christmas, when she was unexpectedly granted bail, but she was soon returned to custody, ultimately receiving a three-year prison sentence. That Lou had twisted Andy's shooting into a drama in which _his_ life was at risk revealed his egocentricity. The two men nevertheless remained friends.
Andy returned to work in September, the same month that Lou decided to settle his issues with John Cale. He invited Sterling and Moe to meet him at the Riviera Café in Greenwich Village, where he staged a _coup d'état_. He told them that Cale was out of the band. 'You mean out for today, or for this week?' asked Sterling. When Lou made it clear that John was out for good, Sterling became angry.
'You don't go for it?' asked Lou. 'All right, the band is dissolved.'
To his lasting regret, Sterling decided that the continuation of the band was more important than loyalty to John, and acquiesced. Moe also went along with it. It was the second example, after the dismissal of Andy as their manager, of what became a pattern in Lou's career. At some stage he turned against almost everybody who worked with him, even if this was to his detriment. He had to be in control.
Steve Sesnick hired Cale's replacement. 'I really don't know that Sesnick consulted anyone before calling me down to New York City. The offer was made over the phone and it was without any qualifications or conditions,' says Doug Yule, who was at home in Boston when Sesnick called to offer him a job as bass guitarist in the Velvet Underground. He was needed immediately. Doug accepted and asked a friend to drive him to the city. Although it was a journey of over two hundred miles, he met Lou and Steve at Max's that evening. As they sat down to talk in the back room, Doug looked around at the other strange people in the glow of the Flavin light sculpture: Factory freaks and sundry eccentrics, including the obese transvestite Divine. He had entered an almost infernal scene.
Sesnick explained that the band had a gig in Cleveland on Friday, 4 October. 'Think you can learn all the songs in two days?' Doug said he thought he could. He was a bright young man of twenty-one, born and raised on Long Island. He could have passed for Lou's younger brother: they were both slim with curly brown hair. Doug was the more able musician. He could read and write music and play a variety of instruments, including guitar and keyboards. He sang, too, in a high voice. He had been in a couple of small bands in Boston, including the Glass Menagerie, which played the Boston Tea Party, and had seen the Velvets perform. Although he had never played bass, he didn't doubt that he could turn his hand to the instrument.
'You can stay at my loft,' said Lou. 'It'll give us time to practise.'
Sterling wandered over to say 'Hi,' a beer in hand. There was a hint of suspicion, perhaps, in the way he addressed the new boy archly as Douglas. Oddly, there was no explanation as to why Doug in particular had been chosen to replace Cale. 'Sterling [must] have been told ahead of time as well as Lou, but I don't know how Sesnick presented it to them,' he reflects. 'Maureen was living on [Long] Island, so she didn't come. I don't know much more about the process. I always assumed I was the first choice, driven by the astrological fit (Pisces/Virgo/Pisces/Virgo) and time constraints (a gig in two days).' By the 'astrological fit', he means he was a Pisces, like John and Lou. 'Sesnick and Lou seemed to think, at the time, that it was important.'
It was late. Sesnick hailed a cab, dropping Lou and Doug at Lou's loft in the fur district, where Lou spent the next forty-eight hours teaching Doug the repertoire. 'The next two days were a blur of music and agreement with whatever Lou said. He had a very forceful personality and it was clear to me that he wanted what he said agreed with, so that's what I did.' Apart from being a talented, available musician known to Sesnick, Doug's biddable nature was undoubtedly part of the reason he was hired.
It was only at this late stage that John Cale discovered that he was out of the band. 'We were supposed to be going to Cleveland for a gig and Sterling showed up at my apartment and effectively told me that I was no longer in the band,' he says. 'Lou always got other people to do his dirty work for him. I don't think I'm blameless about what happened, but Lou never confronted me, saying, "I don't want you around any more." It was all done by sleight of hand.' Lou had destroyed the original, classic line-up of the Velvet Underground. But in doing so he discovered a melodious new sound for his songs.
* Excluding compilations. Worldwide sales of the third Velvet Underground studio album are roughly equal with _White Light/White Heat_.
## VI
## A New VU
### 1968–70
WHEN THE VELVET Underground appeared at La Cave in Cleveland, a tiny club where the stage was only inches off the floor, on 4 October 1968, it was the first time that Doug Yule had played with the whole group. 'The sound was pretty good for no rehearsal, but then the music was straightforward, without any serious complications,' he recalled. 'I made the usual mistakes that plague a newcomer trying to fit into an established group... I missed a lot of cues, got caught hanging over endings and had to keep asking, "What key?" But the audience seemed to love it...' Indeed, the new Velvet Underground swung. By replacing one member, they became a more conventional, accessible band purveying the kind of melodic rock 'n' roll Lou had always loved. 'It made a huge difference,' confirms Moe.
Doug enjoyed flirting with girls backstage after the show, though he noted that Lou didn't join in. Lou was unusual as a touring rock musician in that he rarely got involved with people on the road, of either sex, though he occasionally disappeared with someone. He contracted a venereal disease in California in this way. Still, such behaviour was sufficiently unusual to be noteworthy. For long periods he didn't seem interested in casual sex, or any kind of sex. He was a voyeur. 'Lou watched it all across his psychological moat, remaining somewhat detached,' Doug observed. He was also intensely focused on the music. Later that night Lou called Doug into his motel room to discuss the show. He was still talking when the sun rose.
The following month the band flew to Los Angeles to make their third album, simply titled _The Velvet Underground_ , as if they had returned to Year Zero following the Riviera Café coup. They were recording again at TTG, staying nearby at the Chateau Marmont, and doing a few gigs at the Whisky a Go Go to help pay the bills while they were in town. The main studio at TTG was large enough to accommodate an orchestra. The Velvets huddled together in a corner of the big room as they created their most intimate and subtle album.
On the first track, 'Candy Says', Lou showed considerable empathy for his friend James Slattery, aka Candy Darling, a man so confused about his sexuality that he asked women friends for tampons. Lou asked Doug to sing it, knowing that his high voice, with a vulnerable quality, would suit the tender lyric better than his own. This wistful recording wouldn't be bettered until Antony Hegarty performed 'Candy Says' on tour with Lou in the twenty-first century.
The next track in sequence, 'What Goes On', was, by contrast, a simple rocker that had become a mainstay of the band's act. Then came Lou's most erotic song, 'Some Kinda Love', a seductive conversation between characters named Tom and Marguerita. Lou delivered the sexy lyric in a breathy vocal, over a simple, metronomic beat, making it plain that the couple had something other than the missionary position in mind, singing, 'Let us do what you fear most.' His ability to twist a subject to imply something sinister was one of his characteristics as a songwriter.
This intriguing song was followed by a true masterpiece. 'Pale Blue Eyes' had been part of the band's repertoire since the early days. Now they recorded it andante, to the stately beat of Moe's tambourine, Lou's voice quavering (struggling to hold a note but giving the impression that he was feeling emotional) as he sang about his first love, Shelley, who remained a major figure in his imagination. Sterling's ravishing guitar solo may be his single most important contribution to the Velvet Underground canon. Bettye Kronstad, who started to date Lou properly after returning from Europe, believes that the rapturous language of 'Pale Blue Eyes', describing Shelley as the best thing that ever happened to Lou, the peak of his existence, contained some make-believe. The fact that Shelley was effectively out of his life enabled him to indulge in the fantasy of a perfect love, whereas in reality he drove her away with his behaviour; and the fact that she was married to someone else meant he could idolize her without having to deal with her. 'I think there was a certain type of girl that I might have fallen in with, that were the unobtainables, and then when he got them he kind of didn't know what to do with them anyway,' suggests Bettye, another very attractive girl whom Lou pursued ardently, only to neglect her when he caught her. There was a sense that, while he set his cap at such women, perhaps to prove his virility, his heart wasn't in it. '[We were] his perception of a trophy girlfriend.' Lou set Bettye on a pedestal, calling her his Princess, but she doesn't believe that he even liked women. 'He was a misogynist,' she asserts. 'I was his Princess in a certain sense, [but] as much as you idolize someone you might resent [them, too], because they are not real to you.'
Lou's insistence that he wasn't necessarily writing about himself in his songs but from the perspective of his characters allowed him to adopt some unlikely positions. It enabled him to sing a song like 'Jesus' on the new album, a secular hymn written by a Jew who showed no interest in religion until he turned to Tibetan Buddhism late in life. That didn't detract from the power of 'Jesus', which was reaffirmed years later when he performed it with the Blind Boys of Alabama on David Letterman's _Late Show_.
The theme of revelation continued with 'Beginning to See the Light', one of the strongest songs on the album, and 'I'm Set Free'; while a gnomic remark by Billy Name about his own life and 'the difference between wrong and right' provided the title and lyric for 'That's the Story of My Life'. 'I would have said something like that... He paid attention to things I said, because I was sort of a prophet. I had a way of saying things in those days that was very Zen-like,' explains Billy, who is one of a handful of Lou's friends to be name-checked in more than one song, the others being 'Slip Away' and 'A Dream' on _Songs for Drella_ (1990). An unusual man in many ways, Billy withdrew to live as a hermit in the dark room of Andy Warhol's studio at this time, hiding himself in the room for two whole years. Lou was one of the few people admitted to talk to him. He reported back to Andy that Billy had been reading Alice Bailey's books about astrology and white magic, which Lou had introduced him to; it seemed the literature had turned his head. One day, Andy came to work and found the door to the dark room open. Billy had gone. He wasn't seen again for seven years.
'The Murder Mystery' was the most experimental track on the album, an audio cut-up in a style pioneered in prose by William Burroughs and Brion Gysin, whereby the band recited scripts which were overdubbed in the hope of creating interesting verbal juxtapositions. Unfortunately, the result sounded like babble. 'It was supposed to be fun with words, fun with rhymes and sounds,' Lou commented, disappointed to find his ideas compromised by a lack of technical knowledge in the studio. It was frustrations like this that made him obsessive about sound recording in later years.
The last song was, by contrast, a work of perfect lucidity. Lou persuaded Moe to sing 'After Hours', thinking that her childlike personality ('Moe was an undeveloped person,' observes Mary Woronov) would suit the lyric about a timid person watching others having fun and wishing they could join in. 'I couldn't sing that song. Maureen could sing it and believe it, and feel much more. Because it's about loneliness,' Lou remarked. Despite her shyness, Moe proved the ideal choice for 'After Hours', which became a minor Velvet Underground classic. 'I was scared to death to get up and sing it.'
Presented in this sequence, the ten songs that made up the third Velvet Underground album formed a story in Lou's mind, though this wasn't immediately apparent to most listeners. 'It's not just arbitrary. They're all supposed to complement the preceding song,' he explained. '"Candy Says" had this person asking all these questions, and then "What Goes On" kind of asked like one specific one, and then "Some Kinda Love" and "Pale Blue Eyes" explicated some of it. It just went on and on...' More importantly, six of the ten songs were among the best he ever wrote, with 'Pale Blue Eyes' becoming a staple of his show for the rest of his career.
In suggesting that Moe was better suited to singing about loneliness than himself, Lou was being less than honest. He often complained about loneliness. He had recently moved to an apartment at East 60th Street, a slightly better apartment than his band mates could afford on the modest income they earned from record advances and live shows, reflecting his status as their leader, but nevertheless a bare, cheerless place. 'Why don't you come shopping with me and help me buy some furniture?' he asked Shelley, who found her ex in a pathetic state in his new bachelor quarters. 'He was depressed from being lonely... I did feel bad for him.'
With Shelley ultimately unavailable, Lou turned increasingly to Bettye for support. Indeed, he asked her to marry him. 'He actually asked me to marry him about a year after I met him... I think he wanted me around him as much as possible, because he trusted me, he trusted me like no one else.' Bettye didn't accept Lou's initial proposal. She had reservations about her boyfriend, including concerns about his use of alcohol and drugs, which may be why he attempted to curb some of his bad habits at this time. There is evidence that he was relatively clean in 1969–70. 'I never saw him take anything except alcohol and the occasional joint in all the years I knew him,' says Doug Yule. When fan Rob Norris went backstage at the Boston Tea Party, Lou surprised him with 'a brief lecture on the evils of drugs'. However, as with many users, what Lou said and what he did were often two different things.
Before the release of _The Velvet Underground_ , Lou went back into the studio and remixed the recordings, bringing the vocals up at the expense of the instrumentation. Sterling described Lou's version of the album as the 'closet mix', because it sounded like it had been recorded in a wardrobe: more intimate, less balanced. Lou's unilateral decision to make these changes is sometimes cited as an example of his high-handedness, though Doug and Moe also sang lead vocals on the record and their voices were enhanced in his closet mix. In any event, it was this version of the record that was released in the USA in March 1969, on the MGM label rather than Verve, with a cover photo of the band sitting on the couch at Andy's studio. The more conventional studio mix, by engineer Val Valentin, was released in Europe. Few listeners were aware of the difference at the time.
The LP was greeted with the first really good reviews the Velvets had enjoyed, including a laudatory notice in _Rolling Stone_ by Lester Bangs, who emerged as one of Lou's most attentive and perceptive critics. Bangs observed that the musical journey between 'Heroin' and 'Jesus' demonstrated that the Velvets 'have one of the broadest ranges of any group extant' and found brilliance in the album. Although he failed to appreciate 'Pale Blue Eyes', his review was good enough to be quoted in adverts for the LP, printed together with a photograph of the Velvets looking like any other fashionable young band of the late 1960s, with scarves draped feyly around their necks. There were also excellent reviews in _Crawdaddy_ , _Creem_ and _Planet_. The Velvets were finally being recognized for the quality of their musicianship and Lou's songs, rather than their association with Warhol. Yet _The Velvet Underground_ sold even fewer copies than its predecessors, failing to dent the _Billboard_ Top 200. At this point, MGM seems to have abandoned all hope in the band.
The Velvets went into the Record Plant in New York in May 1969 to record songs for a fourth album, which they owed MGM, to fulfil their contract, though it was becoming clear that they would soon be leaving the label, and the company had no intention of releasing another album. 'We did that to shut MGM up at the time,' reveals Moe. 'We had to finish the contract. I don't mean that we didn't do our best. It was _leave us alone_... We didn't expect those [songs to come out].' In the circumstances, it is remarkable that the songs recorded at the Record Plant were so strong, including 'Andy's Chest' (inspired by Andy's shooting), 'Foggy Notion', 'Ocean', 'Sad Song', 'We're Gonna Have a Real Good Time Together' and 'I Can't Stand It', in which Lou implored Shelley by name to come back to him. ('That hurt. That was sad,' says Shelley, who was shocked when she heard it many years later.) Lou also persuaded Moe to sing once again on 'I'm Sticking with You', which had the charming quality of a nursery rhyme.
It was years before these songs enjoyed official release. Some started to appear in re-recorded, often inferior versions on Lou's solo albums in the 1970s, while others had to wait until the Velvets' back catalogue was reissued with archive material in the 1980s, by which time the importance of the Velvet Underground was more widely appreciated. What is most impressive is how prolific Lou was in the late 1960s. Like many top-flight songwriters, he was able to tune into an almost continuous stream of original ideas. 'I have a radio in my head that's playing unrecorded things for me constantly.' He just had to note down what he heard. This wasn't invariably true. There would be lean years when Radio Lou wasn't broadcasting. But, for now, he had more ideas than he could use. He was writing a lot of songs, good and varied songs, from the whimsical 'I'm Sticking with You' to the existential angst of 'Ocean'. Range is a hallmark quality of the best songwriters, as true of Dylan and Lennon & McCartney as it was of Reed.
In many ways, 1969 was a golden year. Doug fitted in well, as their live tapes and studio recordings show, helping to create a new version of the band that was arguably as strong as that of the Cale era (though different), and Lou seemed to be enjoying his work, despite the loneliness and anxiety he suffered in his private life. He sounded positively joyous recording 'Foggy Notion' at the Record Plant. Nevertheless, their audience remained stubbornly small. The Velvets mostly played clubs and dance halls in the north-eastern states and in California, places like the Boston Tea Party, the Second Fret in Philadelphia, La Cave in Cleveland and the Whisky a Go Go in LA. It was rare that they drew more than a couple of hundred people, and often their audiences were even smaller. In the week of the Woodstock Festival, in August 1969, when 400,000 people gathered at Max Yasgur's farm in upstate New York to listen to the most notable rock acts of the day, the Velvet Underground were playing the obscure Woodrose Ballroom in Deerfield, Massachusetts. Sometimes they performed in coffee houses, like a band starting out. For context, the Doors, who released their debut album the same month as the Velvets released the Banana Album in 1967, were playing Madison Square Garden by this time, while the Velvets could only gaze in awe at the phenomenal worldwide success of the Beatles and the Rolling Stones. Moe recalls being as starstruck as a fan when Keith Richards visited Andy's studio. 'Wow, it's Keith!' The Stones weren't too big to borrow from the Velvets, though. 'I mean, even we've been influenced by the Velvet Underground,' Mick Jagger admitted. 'I'll tell you exactly what we pinched from him [Lou]. Y'know, "Stray Cat Blues"? The whole sound and the way it's paced, we pinched from the very first Velvet Underground album. Y'know, the sound on "Heroin". Honest to God, we did!'
While the Velvets were becoming slightly better known abroad, they didn't tour outside North America, so they failed to exploit a growing cult following in the United Kingdom and elsewhere, which was a mistake. It later became apparent that their principal audience was in Europe. And while they played a lot of shows in the USA in 1969–70, they only ever performed in sixteen US states, ignoring most of the country.
The band visited Texas for the first time in October 1969 to play six club shows. 'Good evening. We're the Velvet Underground... glad you could all make it,' Lou greeted the audience at the End of Cole Avenue in Dallas on 19 October. 'This is our last night here. Glad to see that you all showed up. Um, do you people have a curfew, or anything like that?'
'No,' replied a young man in the audience.
'Does it matter what time you go home tonight? Do you have school tomorrow?'
'No!'
'Nobody here has school tomorrow?'
'Yeah,' said a girl.
'Yeah. See. Because we could do either one long set, or we could do two sets, whichever made it easier for you.'
'One long one.'
'One long one? OK, then this is going to go on for a while. So we should get used to each other. Settle back. Pull up your cushions. Whatever else you have with you... that makes life bearable in Texas [laughter]... This is a song called "I'm Waiting for My [ _sic_ ] Man".' Lou's words were recorded and later used as the introduction to the live album, _1969 Velvet Underground Live with Lou Reed_ , released five years later in 1974 on the back of his subsequent solo career. Aside from being a terrific album full of interesting cuts, _1969_ demonstrates what a tiny audience the Velvets catered to. A mere handful of students gathered at the End of Cole Avenue to hear the band, the light smattering of applause on the recording revealing how few they were, as well as the striking fact that Lou consulted them personally about what time the gig should end.
Four songs were recorded in Dallas for the _1969_ album, the other thirteen tracks being taken from shows at the Matrix in San Francisco, a quintessential hippie venue, where they also played to a minuscule audience. 'There were a few nights when they started the first set with only four or five people in the club!' recalled Robert Quine, a twenty-seven-year-old law student who had begun to follow and record the Velvets for his own pleasure. An intense and depressive person, Quine later became Lou's lead guitarist, but their association started with him turning up at gigs with a tape recorder. 'They didn't have a lot of fans in San Francisco and when they saw me there every night they became friendly, got me into the club for free, bought me drinks, and let me hang around backstage.' Even on a good night, they never played to more than a hundred people at the Matrix. 'I'm sure [the Velvets] were disappointed when they saw the size of the club,' said co-owner Peter Abram. 'The band was probably playing for $100 a night.' The music, however, was exceptional. Lou was still writing prolifically and trying out new songs live, songs like 'Sweet Bonnie Brown' and 'New Age', which Doug believes he wrote about Shelley. (Shelley struggles to see herself in the lyric. 'This doesn't ring any bells for me, other than lines like "I'll come running...", versions of which he used to say regularly.') The most important new song was 'Sweet Jane'. As performed at the Matrix and heard on _1969_ , recorded the very day Lou wrote it, the prototype was a simple, regretful record of a love affair. 'I can remember it starting out as a soft song,' says Doug, adding, 'It was very common to perform songs with little rehearsal that had been written very recently.' It would change radically, becoming Lou's power-chord signature tune.
The beginning of the end for Lou as a member of the Velvet Underground can be charted to the spring of 1970, when the band left MGM and signed with Atlantic Records. At the same time, Steve Sesnick began to promote Doug as their new front man. 'Sesnick was in it for the money – he never said that to me, but that is my thinking now, and I don't think any less of him for it,' says Moe, blaming Doug for falling for Sesnick's flattery. 'I think he filled Doug's head a little too much, because Doug was handsome, cute, young – great smile. His head got filled with maybe Sesnick saying things like, "If Lou leaves, you could do it." [So] he started to get a real swell head and I didn't like him for a while.' Doug maintains that he was a team player. 'I loved being part of the band when Lou was there because the thing I enjoy most in music is singing harmony and being part of an ensemble, as opposed to being a leader and front man.' Lou was a hard man to work with, however. Ever since they met he had been telling Doug what to do, dominating him to the extent that Doug daren't smoke a joint if Lou didn't partake as well. Though apparently modest, Doug wasn't immune to flattery, and he started to see a bigger role for himself in the group. It was unfortunate that Moe, a moderating influence, stepped back from the band at this stage to have her first child.
Doug Yule (left) began to rival Lou for leadership of the band. Sterling Morrison is seen third from left, Moe Tucker on the end.
The Velvets returned to New York to record their debut album for Atlantic, at the company's studio near Columbus Circle. With Moe too heavily pregnant to play drums, four people took turns with the sticks: studio engineer Adrian Barber, session musician Tommy Castanaro, Doug Yule and his teenage brother Billy. Their playing served to highlight what a vital contribution Moe had made. Without her unsophisticated but unique percussion, the Velvets sounded disappointingly ordinary on what would prove to be their most conventional album. Sterling also felt isolated, because Lou and Doug were making most of the musical decisions. 'Things were not happy,' comments Martha Morrison. 'I remember sitting outside Atlantic [in Central Park with Sterling] when they were making that album. He was grousing and grumpy. It was hard to listen to.' Martha encouraged Sterling to think ahead to life after the band, so he enrolled at City College to study English literature, paving the way for his own exit.
Unlike in the past, when the Velvets recorded everything live in a few days, they took months over the new album, which they titled _Loaded_ because Lou believed it was loaded with hits, as well as having the connotation of inebriation. Starting work in April 1970, they also recorded a lot of songs, one of the first being 'Satellite of Love', which didn't see the light of day until Lou resurrected it in 1972 for his solo album _Transformer_. Other strong songs that were tried but not used at the time included new versions of 'Ocean' and 'I'm Sticking with You'. Many were ruled out because Lou and Doug were searching for a Top 40 hit. 'Who Loves the Sun', eventually chosen as the opening track and a single, was typical of the new material, in that it was a relatively straightforward, upbeat pop song, good in its way but lacking the otherness that had characterized the Velvet Underground. It failed to become a hit. 'Cool It Down', 'Head Held High', 'Lonesome Cowboy Bill', 'I Found a Reason' and 'Oh! Sweet Nothin'' were likewise all mildly diverting songs but bland by the band's standards. Three tracks stood out as being of greater interest: 'New Age', 'Rock 'n' Roll' (which celebrated the excitement of discovering pop music on the radio) and a swaggering new version of 'Sweet Jane'.
Like most artists, Lou was primarily invested in whatever he was doing in the present, rather than music he had made in the past, and he was enthusiastic about _Loaded_ while the band were making the album. 'It's just fantastic, because we've never really made a record before. All our records we made in one or two days, you know, and we kept the tape running, sort of cut them live,' he told _Third Ear_ magazine at the time. 'You know, all our [previous] albums sound like basement tapes.* So it's a new experience and it's fantastic, it's really fantastic... This is the closest that the reality ever came to matching the concept, because what we're hearing in our heads is finally coming out on record...' Lou distanced himself from the Cale era, saying that he didn't like 'esoteric stuff', and predicted that the group would finally enjoy a commercial breakthrough with shorter, simpler songs. Lou may have had commercial concerns in mind when he chose to downplay the importance of drugs in their past. 'We [have a reputation] for supposedly being a drug group. We're not representing drug culture; we're just a rock 'n' roll group.' Asked if there was a time when he had to get high to perform, he replied, 'No, when you're wrecked you play bad.' He conceded that some people came to see them because of their druggy reputation, shouting out for 'Heroin', which he was reluctant to perform at this stage. 'I'd hate to think that was really totally where I was at,' he said. 'Same thing with "Sister Ray"; if that's the way we were, there wouldn't have been a third album.'
The _Third Ear_ interview was also notable for critical remarks Lou made about more famous contemporary artists. He sneered at Jefferson Airplane and the Grateful Dead for making music on drugs for people on drugs, and said 'Dylan gets on my nerves.' This was mild compared to what he had to say about Frank Zappa, who had enjoyed more success than the Velvets since they started out together on MGM. Lou rubbished Zappa as a 'low life... two-bit, pretentious, academic, and he can't play...' He failed to see the humour in _We're Only in It for the Money_ , Zappa's parody of _Sgt Pepper_. 'Like, you know, make fun of the Beatles – try to do a song, try to do a really pretty song they did... He can't do that, 'cause he's a loser. And that's why he dresses up funny. He's not happy with himself and I think he's right.' Although Lou burst into laughter at this point in the interview, his remarks were crass and unnecessary. The impression given in this and many subsequent interviews was of a prickly man who was jealous of his contemporaries.
Towards the end of the Atlantic sessions, the Velvets began a residency upstairs at Max's Kansas City, their first significant shows in New York for three years. One explanation of why they had avoided the city for so long is that Sesnick wanted to build their reputation around the country before returning to the metropolis. Doug doesn't believe this to be true. 'I've heard that strategy put forward, but it always sounded like hindsight. My sense at the time was that Sesnick was having difficulty getting gigs and took whatever he could find,' he says. 'The Max's gig certainly wasn't a triumphant return of the prodigal band. It felt more like a fill-in...'
They opened at Max's on 24 June 1970, playing two shows a night on the mezzanine, a small space that was less than ideal for gigs. The mezzanine was lined with tiles, which made the acoustics poor, the sight lines were obstructed, and there was constant background noise: people coming and going on the stairs, chatting to friends and using the toilet. Lou found it a depressing experience to be playing such a small room four years on from their sensational run at the Dom with the Exploding Plastic Inevitable.
'We once did an album with a pop painter,' he reminded the audience on opening night.
'You're doing better without him!' somebody retorted. But it wasn't true.
With Moe out of action, seventeen-year-old Billy Yule played drums with the band at Max's. He did so with an amateurish enthusiasm that made them sound even more like a bar band. Lou's depression deepened. 'I hated it,' he later said. 'I couldn't do the songs I wanted to do, and I was under a lot of pressure to do things I didn't want to do. It made me sick.' This was despite the fact that he had friends in the audience most nights, old friends, including Brigid Berlin and Danny Fields, and new friends like the young Patti Smith, who got up to dance. 'The critic Donald Lyons was shocked that I had never seen them, and escorted me upstairs for the second set of their first night,' she reminisced. 'I loved to dance, and you could dance for hours to the music of the Velvet Underground.' Bettye also looked in. Lou performed a new song at Max's on 26 June, 'Wild Child', about an actress named Betty [ _sic_ ] who suffered with nerves. Bettye, who was doing some acting at the time, says it was about her, 'Because I didn't like the audition process.' She was part of a theatre troupe who performed at a Manhattan restaurant where the deal was that they would also wait tables at lunch. It was this work that later caused Lou to refer to her as a 'cocktail waitress', which his friends repeated as a put-down. 'It's just a nasty, stupid story and it's totally untrue,' she says, making a distinction between being a cocktail waitress and the kind of waitressing she briefly did.
When Moe came to see the show after the birth of her daughter, on 27 June, she found Lou in low spirits. 'Three or four weeks after Kerry was born I went to see them at Max's... Lou said, "Come on," and we sat on the stairs, and he told me that he was going to be leaving the band. Even though we were like brother and sister, I didn't say, "What's the matter, why are you leaving?"' Moe had never been one for personal questions, but she saw that Lou was upset. 'He was not happy that it had come to that. He needed to get out.' It is a measure of how bad relations had become between Lou, Doug and Sterling that the others had no idea he was thinking of quitting. Sterling's focus was on his English course at City College. He brought his books into Max's each evening so he could study between sets. Although he was barely speaking to Lou – 'I was mad at him for something' – he shared his misgivings about the residency. 'Lou may have felt that we'd done too much to wind up just sort of playing a club in Manhattan, which I felt too... I was against it,' he told Ignacio Julià, author of _Feedback_ , adding that playing at Max's at night tired Lou's voice, which affected his vocals in the studio, where they were still working on _Loaded_ during the day. 'I'm sure I said something like, "This is stupid," and probably got the reply from Sesnick, something like, "Don't be negative, just go and read your books and mind your own business."'
Lou decided that Sunday, 23 August 1970 would be his last show with the band. Doug remained oblivious. 'That particular night was no different than most of them, since we had no idea that it was Lou's last.' To witness his final performance Lou invited some special guests. Sterling was sitting in a booth eating a cheeseburger and reading Thackeray's _Vanity Fair_ when Lou came over and introduced a smartly dressed, middle-aged couple. 'Sterling, I'd like you to meet my parents,' he said. Sterling was astounded. 'Lou always had an extremely troubled relationship with his parents... So I was thinking, "What in the world can this portend?"'
Lou chatted with the audience between songs that night, sounding relaxed. Indeed, he seemed to be having a good time with his parents and girlfriend in the room. 'It's really fun to be able to play these for you,' he said during the introduction to 'Sunday Morning'. After the second and last show of the evening, Lou and Sterling had a fierce argument, according to Brigid Berlin, who was sitting a few feet away with the writer Jim Carroll, a mutual friend who later wrote _The Basketball Diaries_. 'Sterling Morrison had a big fight with Lou, and that seemed to be the end of it.' This must have been when Lou told Sterling that he was leaving.
The news went around Max's in a flash. Danny Fields went over to Brigid, who had a tape recorder on her table; it was a time when Factory people were habitually taping everything. He told her that Lou had quit the band and asked her if she had recorded his last show. 'I said, "Did you get that?" "Let's see." She [wound the tape] back. "Yeah".' The next morning they took the tape to the Velvets' record company, where Danny happened to work. 'I knew they owed them another album. That was perfect for everybody. I think we got $10,000 on the spot, turned over the [tape] and it became the album [ _Live at Max's Kansas City_ ].' The record was eventually released in 1972.
Doug Yule was the last person to know what was happening and came to blame Steve Sesnick for pushing Lou out of the band. 'The circumstances around his departure were between him and Sesnick. I only heard about it the following week from Sesnick, who was spinning it his own way, as if Lou had quit unexpectedly rather than being forced out by Sesnick, as I heard later.'
The Velvets completed their residency without Lou, and staggered on under Doug's leadership until 1973, though the band was a shadow of itself. Donald Lyons summed up the situation with a _bon mot_. When Danny Fields told him that Lou had left the band, he quipped: 'I guess we have to call them the Velveteen Underground now.'
* Referring to Bob Dylan's _Basement Tapes_ , low-fi home recordings that were bootlegged at this time but later given official release.
## VII
## Solo in the Seventies
### 1970–3
LEAVING THE VELVET Underground had a profound effect on Lou's mental health, triggering a second nervous breakdown, as can now be revealed. 'Yes, that was another breakdown, when he returned home after the break-up of the Velvet Underground,' confirms his sister. As with his first breakdown, Lou withdrew to his parents' house in Freeport, giving up his apartment in New York, which he could no longer afford, storing his precious guitars and his collection of Velvet Underground concert posters in his bedroom. It was little enough to show for five years as the guiding light of one of the most innovative and ultimately influential bands of the 1960s.
When he felt able to work again, Lou was employed as a typist in his father's office. The duties were light, allowing him time to write creatively. He wrote poetry for _Fusion_ magazine, the editor of which also commissioned him to contribute to a book, _No One Waved Good-bye_ , on the topic of the casualties of rock 'n' roll culture, several young stars having died drug-related deaths in recent months. Lou took the opportunity to reflect on his experiences as a performer, revealing some ambivalence about his career. 'At the age when identity is a problem some people join rock 'n' roll bands and perform for other people who share the same difficulties. The age difference between performer and beholder is not large. But, unfortunately, those in the fourth tier assume those on stage know something they do not. Which is not true,' he wrote in his essay, 'Fallen Knights and Fallen Angels', adding revealingly: 'The singer has a soul but feels he isn't loved off stage.' He referred to the drug use endemic in the music industry, as if this disappointed him, as did the fickleness of the public: 'As my analyst put it, don't depend on anyone...'
At the weekend he took his dog, Seymour, for long, thoughtful walks along the South Shore. Meanwhile, the Velvet Underground continued without him. Lou contacted Sterling to ask if he would work with him in a new band, but Sterling declined. 'I thought he had gone insane; gone insane in a very dull way,' he explained. 'He suddenly went home to Freeport and decided to become reconciled with his parents... Lou was unstable in such a tedious way. It wasn't that he was running around crazy in the streets; at times he was incommunicative and remote and content to stay with his parents.'
When _Loaded_ was released in September 1970, Lou was dismayed to see his songs attributed to the band, while the photo on the back cover showed Doug alone in the studio, as if he were the mastermind of the project. Lou responded by registering the songs in his own name, ultimately going to law to establish copyright, which he did successfully. This was a battle between Lou and Sesnick, primarily. He was doubly disappointed by the fact that the tracks weren't sequenced as he wished, while 'New Age' and 'Sweet Jane' were truncated, the latter missing some words. 'They took a song that I'd worked on for a year and ruined it.' Despite the amount of time spent in the studio, the sound was also poor. Yet _Loaded_ became the favourite Velvet Underground album for many people, a record of catchy, commercial songs that fellow artists, including Mitch Ryder and Mott the Hoople, would cover, and which influenced many more. 'It changed my life as a songwriter,' says Elliott Murphy, whose own recording career Lou later mentored. 'When he had a strong guitar player working with him, like he did with Doug Yule, I think he [flourished]. By the time of _Loaded_ , Lou's songwriting had just flowered.'
Reviews were excellent. Writing in _Creem_ , Lester Bangs praised the album to the sky but deplored the way that Lou's contribution was downplayed by the record company. 'Doug Yule is probably a very nice guy... But honest to God, this is the most outrageous misrepresentation and spit in the face of a great artist that I've ever seen on an album jacket. They don't come right out and _say_ that this is a Doug Yule Production, they know they can't get away with that, but the implication is clear.' Lester called Lou to ask what was going on, having heard rumours that he had 'flipped out'. Lou told him: 'I just walked out because we didn't have any money, I didn't want to tour again – I can't get any writing done on tour, and the grind is terrible – and like some other members of the band I've wondered for a long time if we were _ever_ going to be accepted on a scale large enough to make us a "success."'
Barbara Hodes kept in touch by phone during this difficult period, but she didn't see much of Lou. 'He told me he had a nervous breakdown. I think he was completely overwhelmed with how the Velvets had ended... he needed to get out of Dodge, so to speak.' The woman he saw most of was Bettye Kronstad, who got into the habit of coming out to Freeport for the weekend, sleeping on the sofa in the den. She got on well with Lou's family. 'They were very happy with me, I guess, because I was a girl,' she laughs, noting that Sid and Toby did everything they could to keep their sensitive son in a good mood. 'I think they were worried that he was peculiar, that he was different, that he was nervous, that he did have a nervous breakdown. It was [like] china plates around Lou. Let's just make sure that nothing happens...' Lou sometimes took Bettye to the Hayloft, but their weekends on Long Island were mostly spent in conventional middle-class pursuits. They went swimming at Sid's country club, where Lou tried to teach Bettye to play tennis. In return she took him horse-riding. Unfortunately, he failed to persuade staff at the stables that he was competent to take a horse out. 'That didn't make him very happy.' Nevertheless, Bettye recalls this as a good time in their relationship, the period when she fell in love with Lou. Away from New York, she found him to be a nicer, gentler person. 'The man I fell in love with wasn't a tough guy, he was a teddy bear. He was a sweet, sensitive writer... a gentleman and a scholar. He drank too much, but you know...'
Lou couldn't stay away from the city for long. In March 1971 he took part in a poetry reading at St Mark's-in-the-Bowery with Allen Ginsberg and Jim Carroll. It was his first public appearance since leaving the Velvets. He recited some of his lyrics, together with poems about Bettye and poems with a gay theme that elicited a cheer from Danny Fields. Lou would give further readings over the years; they reaffirmed his belief in himself as a writer, which is how Bettye liked to see him, though she worried about Lou in New York. 'The people he had been hanging out with – the Warhol crowd – they didn't understand what he was going through. He was no longer the crazy Lou. He was quiet and reflective, contemplative. He wasn't that character any more.' Perhaps it was she who didn't understand Lou, who hadn't changed as much as she liked to believe, while many of his friends didn't understand her. 'We made fun of her,' chuckles Danny. 'We just called her the cocktail waitress. We couldn't believe he was [with her]. Where did she come from?' Bettye sensed their condescension and was irritated. She felt she was treated like she was no one of importance. Lou tried to integrate her into his circle by introducing her to hip friends like Brigid Berlin. 'I think that what he wanted Brigid to do was Max's Kansas City-me or Warhol-me... Let's Warholize Bettye. So I went over there several times and we had evenings of talking. My impression was that at the end of our sessions Brigid probably went back to Lou and said, "I don't see anything wrong with her, I think she's fine the way she is."' Others remained unconvinced. 'I go away and I come back and Bettye's with him!' exclaims Barbara Hodes, who was also struck by the physical change in Lou since his second breakdown. 'When he came back he was fat and he was drinking a bottle of Scotch a day.'
He started to spend alternate weekends in New York with Bettye while he plotted his return to the music business with the help of friends. His chief advisers were Danny Fields, who was working at Atlantic Records, and Richard and Lisa Robinson. Lisa was a music writer, while Richard was an A&R (Artists and Repertoire) man for RCA. They all held Lou in high esteem. The Robinsons hosted soirées for him at their Upper West Side apartment, to introduce him to people who might be helpful in launching a solo career. 'Lisa would invite me and Lou over to her house and all these people, the intelligentsia or whatever, rock 'n' roll people, and people in the music industry, would hang out and all sit around in a circle and listen to Lou pontificate,' says Bettye. 'I never had anything to say. We just all sat around and gazed adoringly at Lou, which was kind of boring for me.'
There was a reunion with Nico at the Robinsons' apartment. Since leaving the Velvets three and a half years earlier, Nico had dyed her hair black, developed a heroin habit and put on weight. She scratched a living performing an eccentric solo act in clubs, singing gloomy songs, accompanying herself on harmonium. Enough time had passed for Lou and Nico to forget their problems and contemplate doing a show together again. 'The Factory was then a "thing of the past" (although, of course, it could never be that actually) as far as their careers/futures were concerned, and so they could regard each other differently at that point,' explains Danny, who was present on 29 April 1971 when Lou and Nico rehearsed at the Robinsons'. Although Nico struggled to remember the words to their songs, Lou remained patient, encouraging her, and they conversed with the ease of old friends – moreover, old lovers – making each other laugh. She called him Lewis, teasingly. He confessed that he was hung over. 'I'm so tired. I got drunk last night. Ah, I've got a headache!' Lisa hovered in the background, offering refreshments. At the end of the evening, Lou walked Nico to the subway. It had been a pleasant reunion, but no show resulted in the short term.
As Lou was still under contract to Atlantic, Danny tried to persuade his bosses to let his talented friend make a solo album for the company. 'I was trying a lot to get them to think of him as an important songwriter, rather than a member of a band [for] which they'd never cared very much in the first place.' Atlantic wasn't interested. So Danny became Lou's manager, trying to find him another record deal. It proved an onerous job. 'I was his manager for two weeks. I couldn't take it. I could not have a personal relationship and a professional relationship with him simultaneously,' he says. 'Once you are working for him, and there are things that he wants, and it's about his career, he was relentless. "Did you hear from them yet? What do you think?" Pinning me down, pinning me down. "No, Lou, I told you... Not just yet... We're waiting... Yes, I did... No, not yet... But we need you to..." It was enveloping. It was _horrible_ – it was losing a friend in order to get a client.'
It became apparent that Lou's best hope lay with RCA. Known as the Washing-machine Company, because the parent company manufactured household appliances, RCA Records had one huge artist in Elvis Presley and a mainstream star in John Denver, who had the first of his big hits in 1971 with 'Take Me Home, Country Roads', but little presence in the burgeoning rock market. There was a new vice-president of contemporary music, however, an ambitious thirty-year-old lawyer named Dennis Katz, elder brother of Blood, Sweat and Tears guitarist Steve Katz, who wanted to sign rock talent to RCA. When Richard Robinson suggested bringing Lou to the label, Katz was cautiously receptive. Lou was a first-class songwriter, but he was difficult to deal with and the Velvet Underground had never been commercially successful. He was also still under contract to Atlantic, which complicated matters. Luckily for Lou, he had another persuasive champion.
By the age of twenty-four, David Bowie had released three albums in his native United Kingdom, none of which had done spectacularly well, though he scored a hit single with 'Space Oddity' in 1969 and showed a genius for self-publicity. Despite being married with a young child, Bowie had created an androgynous persona for himself that caught the attention of the press and public in Britain, and now he and his manager, Tony Defries, wanted to break America. They had struck a deal with RCA and were coming to New York in September 1971 to sign their contract. The Washing-machine Company got Bowie cheap, paying a modest $37,500 advance for his new album, _Hunky Dory_ , which he had already recorded in London. The songs were clever and catchy, but nobody knew whether the album would take off. 'David wasn't that famous. He wasn't famous _at all_ ,' says his friend Tony Zanetta, who had recently starred in the Warhol stage show _Pork_. Bowie was fascinated with the Warhol scene, so much so that he had recorded a song about Andy on his album. He had also recorded a tribute to the Velvet Underground, the song 'Queen Bitch', showing how much inspiration he had taken from their music, and Lou's attitude. His trip to New York presented an opportunity to meet his hero.
Lou met David at a dinner RCA arranged at the Ginger Man restaurant in Manhattan to welcome the Englishman to America. Lou was flattered to meet an up-and-coming artist who held him in high regard, while Bowie went out of his way to charm Lou. 'David always went after the people he admired, that he thought he could learn something from,' says his then wife Angie Bowie. 'They were whispering, kind of conspiratorial,' adds Tony Zanetta, who also attended the dinner. 'This was a thrill for him to meet Lou.' There were two further meetings that week, at the Warwick Hotel, where the Bowies were staying, and at the Robinsons' apartment, where Lou and David locked themselves in a room to talk privately while Angie banged on the door to be let in. David was a handsome young man and there was a hint of sexual attraction between him and Lou. 'I think there might have been something going on. I don't know. Lou probably went after him; I believe that,' says Bettye who, like Angie, felt left out. 'Lou did kind of fall in love with him a little bit. He would fall in love with anybody that was really crazy about him.'
The respect David showed Lou helped persuade Dennis Katz to sign him to RCA. The deal was done directly after Bowie's visit to New York, on 1 October. 'I did sign him when nobody else was interested in signing him, and I got the release from Atlantic Records. They let him go without any over-rides, without any payments or anything,' boasts Katz, who subsequently became an important person in Lou's career. 'Nobody else was interested in him. He was working for his father on Long Island when I signed him to RCA. There wouldn't be any Lou Reed, and you wouldn't be writing this book, if it weren't for me.' If RCA got Bowie cheap, Lou was for peanuts. The company advanced $6,600 for his debut solo album, approximately $38,000 in today's money, adjusted for inflation, the price of a new car. 'He didn't have any history in terms of being able to sell records. His claim to fame was being in the Velvets, that was it,' says Bob Ringe, a member of the A&R team at RCA who worked with both Reed and Bowie. 'At least Bowie had sold some records previously.' It was nevertheless a break. On the strength of the deal, Lou moved back to Manhattan, renting a small apartment at East 78th Street. 'It was a studio, pull-out [bed]. His grandmother's rocking chair was about the only chair in there,' says Bettye, with whom he split the rent.
Lou was an unusual artist in that he had two distinct careers: first as a member of the arty Velvet Underground between 1965 and 1970, a period when he had to subsume his personality to some degree to work in a team; then, from 1972, as a much more commercial solo artist. Most members of the public were largely unaware of his first career, because the Velvets achieved so little success, so Lou seemed to emerge almost from nowhere in the 1970s. In fact, he was a battle-scarred veteran of the music business. The experience of leading the Velvets to so little avail for five years had left him cynical, even bitter. And he was no longer in the first flush of youth, being almost thirty years old. This was the man who flew to London a few days after Christmas 1971 with Richard and Lisa Robinson to record his debut album for RCA. He was desperate to make it work, and willing to make whatever compromises were necessary.
The Americans checked into the Inn on the Park at Hyde Park Corner. Lou soon missed Bettye, so she flew over to join him. It was in London that she started to understand what it meant to be a rock musician's girlfriend at the start of the 1970s, and she didn't like it. 'We were not treated with any kind of real respect,' she says. 'You could feel that the only reason anybody was talking to you was because you were Lou Reed's girlfriend, or Lou Reed's fiancée, or Lou Reed's wife. You didn't actually have any identity. You were an appendage.'
Lou came to London to record his album, because London still enjoyed a cachet as the creative heart of popular music, thanks to the enormous success of bands like the Beatles. London was also known for the quality of its recording studios, and the expertise of its engineers. Lou recorded at Morgan Studios in north London, where Paul McCartney had recently finished his first solo album, with Richard Robinson producing. A couple of weeks into the project Lou met music journalists at his hotel to discuss the record. Unlike most members of the general public, these journalists knew all about his career with the Velvet Underground and were eager to ask him about it. Lou made it clear that he wanted to start afresh, distancing himself from his past. 'It's been a process of elimination from the start. First no more Andy, then no more Nico, then no more John, then no more Velvet Underground. Suddenly, I'm Lou Reed,' he said, rather pompously. 'This [album] is the closest realization to what I hear in my head that I've ever done. It's a real rock 'n' roll album... I think that the general audience will find it more accessible.' He claimed that there would be no 'Velvet remnants' on the album, but the opposite was true. Seven of the ten songs were Velvet Underground leftovers, including 'I Can't Stand It', 'Walk and Talk It', 'Lisa Says' and 'Ocean'. Of the rest, the only song of note was 'Berlin', a vignette of life in the German city (which Lou had never visited), between the two world wars. It seemed to be inspired by the popular new movie _Cabaret_.
To accompany Lou, Richard Robinson hired British session musicians, including Steve Howe and Rick Wakeman of the progressive rock band Yes. 'A message came through to the office from Lou's camp asking if myself and Steve Howe would play on his new album,' recalls Wakeman, who was much in demand for session work at the time, and had played on Bowie's _Hunky Dory_. He received little guidance as to what Lou wanted. 'Hardly got to speak to him, to be honest. There was a quick hello. Then he played the tracks, as they were at the time. I added my piano. He said thank you, and that was it really!' In the circumstances, Wakeman's cabaret club intro to 'Berlin' worked surprisingly well.
The problem with this hands-off method was highlighted by the recording of 'Ocean', a majestic number when the Velvets played it live, as can be heard on the _1969_ album, with Moe using her cymbals to create a plangent percussive sound that echoes the emotion of the song, about a man struggling with insanity. To record 'Ocean' in London, Lou used Clem Cattini, a veteran of British beat bands and countless studio sessions, and, despite Cattini being an excellent musician, the result was awkward, even crude. 'There was a number I played timpani on, a thing called "The Sea" [ _sic_ ]. We finished it, and he said, "That was great. I could see that you were getting into it, thinking of, like, the ocean..."' says Cattini. 'To be honest, I wasn't really.'
Considering Lou's tactless comments to the press about his former colleagues, it was surprising that he travelled to Paris to perform with Cale and Nico on 29 January 1972. A thousand people came to Le Bataclan, a club in the 11th arrondissement, to hear the trio play new and old songs, a fantastic show, subtle, focused and powerful, which was filmed and recorded, eventually emerging as the CD _Lou Reed, John Cale & Nico_, a far better album than the one Lou was making in London. The gig went so well that he suggested they work together further, but Cale wasn't interested.
There was more disappointment in May, when his solo album, simply titled _Lou Reed_ , was released. The American critic Robert Christgau, who marked albums in the _Village Voice_ as if they were school essays, awarded it a B+, but Nick Kent of the _New Musical Express_ spoke for many when he observed that _Lou Reed_ was 'one of the more disappointing releases of 1972'. It wasn't so much that the songs were bad as that they were badly produced, sounding like they were recorded in a shed, despite the use of a top-flight London studio, with Lou's voice sounding weak and uncertain. He took the reviews to heart. 'Anything negative that anybody said about Lou he took very seriously, even if he didn't act like it,' says Bettye. 'He read it all.' And he was doubly crushed by very poor sales.
Despite this critical and commercial failure, Dennis Katz gave Lou the go-ahead to make a second LP for RCA. The initial plan was for Richard to produce again, and by the terms of Lou's contract he would receive an enhanced advance of $15,000. It was enough for Lou to move to a nicer apartment at East 74th Street, and to ask Bettye to marry him. It seemed that the time was right. Lou was thirty. His ex, Shelley Albin, had just had a baby with her husband, which made it clear that she was never going to come back to him, and Lou desperately wanted _someone_ to look after him. 'He depended upon [women],' says Bettye. 'They took care of him. A mother figure. A nurse. A cop. That's what they were.' Nevertheless, they became engaged. To make it official, he bought Bettye a ring from Tiffany's, and a silver locket, in which she kept a picture of Lou and herself. He also gave her $500 to buy furniture for the apartment, directing her to a budget store in the neighbourhood. 'He was always very careful about money. _Very_ careful.'
The following months were relatively harmonious. Lou was writing songs for what became the _Transformer_ album. He acquired a new manager in Fred Heller, on the recommendation of Dennis Katz. Bettye went horseriding in Central Park when the weather was fine. One day after her ride she met Lou in the park for lunch. They drank sangria in a café and then went to a movie. Lou was inspired to write 'Perfect Day', his most famous love song. 'The important lines about that, of course, are "You [just] keep me hanging on". That's what the song is about,' says Bettye. 'That's what he is saying to me: you keep me hanging on.' She dismisses the urban legend that the song was actually about a couple of junkies. 'Yeah, I read that. It's pure bullshit.'
Other songs Lou wrote at this time included 'Vicious', after Andy Warhol, whom he continued to see socially, suggested he write a song around the word.
'What kind of vicious?' asked Lou.
'Oh, vicious, you hit me with a flower,' Andy replied camply.
Hitherto, Lou had addressed homosexuality in his songs in a matter-of-fact way, like his writer heroes William Burroughs, John Rechy and Hubert Selby Jr. By 1972, however, Bowie had created a new vogue for camp in pop music, whereby male stars dressed ostentatiously and wore make-up. Bowie went further than most. 'I'm gay, and always have been,' he told _Melody Maker_ in 1972 as he promoted _Hunky Dory_ , a statement so startling at the time that it put him on the cover of the music weekly. At this stage in his life, Lou was an unremarkable-looking man. Slightly chubby, with bushy hair, he mooched about in jeans and T-shirts. Seeing the success Bowie had achieved with his theatrical, androgynous image, Lou decided to copy him. This can be seen as an act of desperation. If he didn't have a hit with _Transformer_ , his career would be over.
He and David had stayed in touch by phone since their initial meeting in New York. When David recorded _The Rise and Fall of Ziggy Stardust and the Spiders from Mars_ , he sent Lou an advance copy of the record with a bunch of roses. 'I remember a great night at one of Lou's apartments when he played me the _Ziggy Stardust_ album and told me about Bowie,' says Erin Clermont, who continued to hang out with Lou, despite his impending marriage. 'Bowie had sent Lou a couple dozen yellow roses, along with a can of red spray paint in case he wanted them another colour.' Shelley also recalls Lou raving about Bowie. 'He would come to my studio – I used to work in a pottery studio – and he would give me what I call the David Bowie philosophy: you've got to do something really outrageous [to make it], you've got to figure out something to do [that's] never been done before, so people will talk about it, and it will be ahead of its time. Paint your fingernails black, dye your hair, be gay even if you're not gay, and wear sparkly stuff like Ziggy Stardust. All this stuff. And then you will make it.'
The transformation began when he and Bettye returned to London in the summer of 1972 to record _Transformer_. Angie found the couple a place to stay, a townhouse at Cedar Court in Wimbledon, an odd choice of digs, being way out in the suburbs, and introduced Lou to Freddie Burretti, the tailor who made David's stage costumes. Lou bought a paisley jumpsuit, accessorized with a pink scarf and gold shoes with hearts embossed on the toes. He also experimented with pancake make-up, eyeliner and black nail varnish. 'He and David put that together. Sometimes I put his eye make-up on,' says Bettye. Unfortunately, Lou didn't carry this look with the same aplomb as Bowie, lacking the younger man's fashion-model physique.
David was on the road with the Spiders from Mars band, featuring guitarist Mick Ronson, performing songs from _Ziggy Stardust_. It was the hottest ticket in pop. He brought the show to the Royal Festival Hall in London on 8 July, inviting Lou to join him on stage. For the first time in his career, Lou appeared before an audience in glam gear: a black velvet suit decorated with sequins, worn with his new gold shoes. He and David performed three Velvet Underground classics, to the delight of the crowd. 'On Saturday the magic was boosted by an unadvertised appearance of Lou Reed,' wrote Ray Coleman in the _Melody Maker_ , adding that 'an electrifying heat came across that stage as David and Lou roared into "White Light", "I'm Waiting for My [ _sic_ ] Man" and "Sweet Jane". Their obvious admiration for each other's style was great to watch.' Elated, Lou got wasted at the after-show party. 'There were a lot of parties,' sighs Bettye, 'drinking ad nauseam'.
The following night he appeared with an American pick-up band called the Tots at the King's Cross Cinema in north London, a grubby little venue also known as the Scala. 'That was the first time I had seen Lou perform,' says Tony Zanetta, recalling that his two sets on the night were very different. 'One he was absolutely brilliant, and the [other] he was absolutely awful. That was Lou – very erratic.' Photographer Mick Rock took pictures of him on stage. One shot, a flattering image of Lou looking wistfully into the middle distance over his guitar, became the iconic image of the artist when it was chosen for the cover of _Transformer_. It was one of the few pictures taken that night in which he didn't look fat. In truth, the picture looked very little like him.
While Richard Robinson was packing his bag to join Lou in London to produce the new album, he was told that he wasn't required. It had been arranged with RCA that Bowie was going to produce _Transformer_. 'We learned this news literally the day before Richard was set to fly to London,' notes Lisa Robinson, who was 'royally pissed off' on her husband's behalf. As he was to prove time and again, Lou was not a loyal friend.
The fact that Lou and David were on the same record label made the production deal possible, while Bowie knew that Lou's debut had been a failure and wanted to help his friend, no doubt thinking that it would benefit his reputation to be associated with a successful Reed record in America. As Angie says, 'they did use each other pretty effectively'. The album had to be made fast, however, in a break in Bowie's schedule that August, and he wanted to do it in London at Trident, the Soho studio where he and Ronson had made _Hunky Dory_ and _Ziggy Stardust_. They had a good relationship with the resident engineer, Ken Scott, who had also worked for the Beatles. This was an experienced, successful team, but Bowie was nervous. 'I was petrified that he said, yes, he would like to work with me in the producer capacity, because [I] felt so intimidated by my knowledge of the work he had already done. Even though there was only [a small amount of] time between us* it seemed like Lou had this great legacy of work, which indeed he did have... I really wanted it to work for him, and be a memorable album that people wouldn't forget.'
_Transformer_ turned out to be a dream of a record. In the first place, Lou had some great songs, including three Velvet Underground leftovers: 'Andy's Chest', 'Satellite of Love' and 'Goodnight Ladies', all of which had interesting, witty lyrics. Unlike his debut solo album, the songs were skilfully arranged, performed and recorded. Lou didn't try to sing but rather spoke the words, in the manner of a world-weary _flâneur_. British session musicians were once again employed to create the music, but Bowie and Ronson gave them direction. 'We went into the control room to hear what David Bowie wanted. I don't remember Lou Reed even coming out from behind the screens... He just sort of sat there in the studio all the time,' says John Halsey, one of three drummers on the album. 'It was very much David Bowie driving the whole thing.' Engineer Ken Scott says Lou wasn't even compos mentis. 'He was stoned the entire time, so that creates problems.' This gave Bowie freedom. 'I think it probably made it a hell of a lot easier, because that allowed David to take more control...'
The eight new songs on the album included 'Vicious', 'Perfect Day', and 'Make Up', the lyrics of which were borrowed from the new Gay Liberation movement. When Lou drawled '... we're coming out/Out of our closets' he was repeating a slogan used on marches and protests in New York and elsewhere, though he did so without any sense of political passion or personal commitment, and didn't identify himself explicitly as gay, though many listeners would assume he was.
The most important song was of course 'Walk on the Wild Side'. Lou had written 'Wild Side', as he liked to call his most famous song, while recovering from his second breakdown, having been approached by producers who wanted to adapt Nelson Algren's 1956 novel _A Walk on the Wild Side_ , about the adventures of a drifter in the Depression, for Broadway. Lou was asked to write songs for the show, which fell through. 'When they dropped the project I took my song and changed the book's characters into people I knew from Warhol's Factory. I don't like to waste things.'
'Walk on the Wild Side' was an unusual song in many ways, including the fact that each verse dealt with a different character, and these were all real people. Three of the verses concerned a trio of New York transvestites – Jackie Curtis, Candy Darling and Holly Woodlawn – who hung out on the fringe of the Warhol crowd. Holly, the subject of the first verse, was an exotic creature of complicated background, a Puerto Rican who grew up in Miami with a Polish stepfather, to whom he owed his legal name (Harold Ajzenberg). He came to New York as a teenager in 1962, slept rough ('in the gutter') and got mixed up in drugs and prostitution before he took to dressing as a woman. Holly then came to the attention of Paul Morrissey, who cast him in _Trash_. When the movie came out, he was in jail, serving time for impersonating a French diplomat's wife in order to fraudulently cash a cheque at the United Nations bank, a stunt he pulled off with an atrocious French accent (' _Bonjour_ , everybody!'). He had never met Lou, but Lou had heard enough about Holly's wild life to describe how he came to New York in the song. 'It is accurate,' says Holly, 'for the most part.'
When he sang in the second verse about Candy Darling giving head in public toilets, Lou was writing about an old friend, maybe from experience. He also knew Jackie Curtis, a speed freak with a James Dean fixation. Inspiration for Little Joe, the character in the third verse, was Joe Dallesandro, an actor in several Paul Morrissey films.
The song was transformed in the studio. 'If you had heard the song when he wrote it, the way he wrote it, it was a little ditty,' says Bettye. 'And it was turned into this incredible production that Bowie made. Just incredible! That's why he and I were so incredibly surprised, because we both knew it was just a little ditty. But by the time Bowie got done with it, Holy cow!' In his more honest moments, Lou gave Bowie and Ronson their due. 'Odds are that if Bowie and Ronson hadn't produced it, it wouldn't have been a hit!' he conceded in 1996. ' _My way_ , there wouldn't have been a string part, especially since it's a string part I didn't write.' Credit must also go to multi-instrumentalist Herbie Flowers, who played double bass on the track, overdubbing his electric bass ten notes higher to create the song's jazzy gait. Flowers also had a big hand in creating the Dixieland band arrangement of 'Goodnight Ladies', playing tuba on the recording. Notwithstanding Lou's love of jazz, it should be noted that 'Walk on the Wild Side' sounds nothing like anything else he ever recorded. The lush strings, the groovy bass part and the famous sax solo at the end created a seductively slick, lounge lizard pop song that was entirely untypical of his work. Even Lou's vocal sounded as if somebody else was singing.
It was Lou's idea to add a 'doo da doo' refrain, in the style of African-American girl groups. 'Oh my God, who arranged for us to do this session in the _morning_?' he said, coming into Trident. 'I can't possibly work until I've had some coffee.' While he drank his coffee Lou was introduced to the vocal group Thunder Thighs: a young Irishwoman named Casey Synge (pronounced sing) and her three American girlfriends: Karen Friedman, Jackie Hardin and Dari Lallou. 'We are all white,' clarifies Casey. 'Everybody thinks we are black.'
'I like the colour of your nails,' Lou told Karen, comparing nail varnish while he finished his coffee. 'What do you think of mine?'
That made the girls laugh. 'So we go into the booth to do "Walk on the Wild Side" first,' says Casey. Lou had already recorded his vocal. They misheard the fifth verse as 'Jackie comes from LA,' and laughed some more, because Jackie Hardin was from Los Angeles. 'He was happy that we were amused by his song,' says Casey. Lou then told them what to sing. When it was done, they moved on to other tracks. 'All the rest we did was improvised. The only thing he knew he wanted was the doo da doos,' says Casey, who had the idea of singing 'spoke spoke' on another track, 'Wagon Wheel', which brought some humour to a song about kicking a girl in the head. 'Then the three hours were up and that was the end of the session. He was very nice. He said it was a pleasure working with us "real girls".'
Thunder Thighs left the studio in high spirits. It had been fun working with Lou, and 'Walk on the Wild Side' was a terrific song. The reference to oral sex in the lyrics meant that it would never be heard on the radio, of course. 'I remember being in the cab with Karen and Casey and discussing it,' says Dari Lallou. '"What a drag, it's such a great song, and it's never going to get airplay."'
Mick Ronson was in the producer's chair when Thunder Thighs sang their parts; there was no sign of Bowie, who had at least one row with Lou during the making of _Transformer_. Angie recalls Lou storming out of the studio, before coming back and making up with David. 'It was something foolish. You know, [Lou] was tired. I don't think he wanted to sing [the song] again and then afterwards they were, you know, kissing each other, "Oh yes, yes, yes, of course I would have done it again..."' When Barbara Hodes visited Lou in London he told her that he'd had a major falling-out with the Englishman. 'Lou told me the story about David either sabotaging or losing the master of "Walk on the Wild Side" and they had [to do it again].'
The fact that Bowie was a bigger star than Lou, and getting bigger all the time, created tension. In September 1972 David returned to New York to play Carnegie Hall. He performed 'I'm Waiting for the Man' and 'White Light/White Heat' as part of his set, and received a standing ovation. Meanwhile, the author of these songs was trundling around the UK with the Tots playing student unions. The £450 he received for his trouble at Glasgow University on 7 October was typical of the money he was earning for small gigs at this time. It was irritating to be acknowledged as a major songwriter by one's peers and yet enjoy so little public recognition.
Returning to London to oversee the creation of the cover art for _Transformer_ , Lou was reunited with his college friend Karl Stoecker, who was working as a commercial photographer in the capital. Mick Rock's stage photo of Lou was used on the front of _Transformer_ , but Karl was commissioned to take the studio photographs for the back: a picture of Lou's friend and occasional road manager Ernie Thormahlen tipping his cap at an androgynous model. The conspicuous bulge in Ernie's jeans was created by a plastic banana. A rumour went about that the girl was a man in drag – a story perpetuated by Lou, who sometimes claimed that he dressed in drag for the photo. The girl was actually fashion model Gala Mitchell. 'I guess it was the transformer idea – he was a she, she as a he,' explains Karl. 'The guy was supposed to be looking in the mirror, and the girl was in the mirror.'
The album's gay overtones put off some American critics when _Transformer_ was released in November 1972. Writing in _Rolling Stone_ , Nick Tosches expressed disdain for 'Make Up' in particular. 'It isn't decadent – it isn't perverse, it isn't rock 'n' roll, it's just a stereotypical image of the faggot-as-sissy traipsing around and lisping about effeminacy,' he sniped, describing 'Perfect Day' as the worst song on the album before concluding that Lou 'should forget this artsy-fartsy kind of homo stuff...' _The New York Times_ was similarly scathing. 'The public has never discovered him and unfortunately _Transformer_ will not help his cause... a flaccid piece of work.' In retrospect, these reviews seem misjudged, even a tad homophobic. _Transformer_ was a strong album of clever, catchy songs, faultlessly produced by Bowie and Ronson, that showcased Lou as a distinctive and original singer-songwriter. RCA described him in press releases as 'The Phantom of Rock', an inept attempt to sell an artist they didn't understand to a public who knew little or nothing about him. This was a false image he accepted and hid behind, to his regret. 'I allowed it, and it was kind of a convenient thing to duck behind and use as a shield against just about everything,' he later said. 'The trouble was, I ducked behind the image for so long that after a while there was a real danger of it just becoming a parody thing, where even if I was trying to be serious you didn't know whether to take it seriously or not.' The charm and wit of Lou's songs caught the ear of the public, nevertheless, and the record sold strongly. _Transformer_ made Lou's career, though he struggled for years to match the quality of both the songs and the production.
Despite their engagement, his relationship with Bettye remained volatile. One night after what she calls 'a blow-out fight', she left the apartment. Lou immediately called Erin Clermont to tell her that they had split up. 'And then he used the line that melted me: "I'm lonely."' Erin said she would be right over. 'We went to bed, and a few hours later someone entered the apartment. It was Bettye... who just sat in a chair and stared at us. She didn't move, didn't speak. Have to say she did that well. I skulked out of bed, got my clothes and left.' Ungallantly, Lou told Bettye that Erin was just a girl he'd picked up in a bar. Remarkably enough, both women forgave him.
Bettye knew that Lou was attracted to men as well as women, though he didn't tend to talk about this side of his life with his girlfriends. 'There are some people that are bi or gay because they kind of really are. And then there are others who go to a certain lifestyle because it's safer for them than a lifestyle that is a little bit more threatening. And I believe that Lou falls in the latter category,' she says, arguing that Lou found women more congenial as long-term partners. 'I am just giving you my experiences as a girlfriend... That is how I felt.' On this shaky basis, they decided to go ahead and marry. Bettye's parents were aghast. Her mother told her that Lou was using her, while her father couldn't understand why she wanted to marry a man who wore make-up. 'When I asked him to come to the wedding, the only thing he knew about Lou was the cover of that album, and he didn't come.' More surprisingly, considering that Lou had been living at home recently, and he and Bettye had just attended Bunny's wedding, Lou didn't invite his parents or his sister. Bunny was upset. 'He told me about the wedding but did not invite me to attend. I cried for two weeks.'
The ceremony was conducted at Lou's apartment on East 74th Street on 9 January 1973. Bettye was a Presbyterian. Lou was Jewish. Perversely, they were married by a Catholic priest. '[Lou] thought it was funny.' The only guests were Lou's manager, Fred Heller, and his wife. Lou and Bettye wore white. Afterwards, they attended a dinner hosted by RCA. Lou was drunk by the end. 'He was always drunk, but he was dead drunk. What does Lou do when he is drunk? He goes home, goes to bed, and falls asleep.' This was not the wedding night of a girl's dreams, but it was often the way their evenings ended. 'That's an interesting [situation] for a twenty-two-year-old pretty woman to find herself in... And he's coming across as the great sex figure of all time. Huh!'
They went through the motions of married life. Bettye took Lou's surname, and they went on honeymoon to Jamaica. They discussed children. 'I think he probably would have wanted to have them with me.' But Bettye didn't have children with Lou. She declines to say whether she got pregnant and had a termination, as girlfriends had in the past, but says that she doesn't think that Lou would have made a good father. In truth, there wasn't much opportunity to get pregnant. 'He wasn't terribly interested in sex.' Lou's attitude to his wife was changeable and ambiguous. He talked to friends about her in romantic terms, only to trash her in the next breath. 'He said she was a "princess", and he was in love with a "fairytale princess". You know, he would get drunk and he would get sort of maudlin when he spoke about her. Then the next thing you know he would be saying that "some of the people who are dearest to me are the scum of the earth,"' recalls Ed McCormack, a journalist who was sent to interview Lou for _Rolling Stone_ around this time and struck up a friendship with him. 'It was a schizo kind of [relationship].'
Bettye was at Lou's side when he appeared onstage in New York on 27 January 1973 for an important concert. _Transformer_ was selling. RCA were running print adverts to maintain momentum. A US tour was lined up. The idea was to get Lou off to a good start with two prestige shows at Alice Tully Hall, a venue within Lincoln Center associated with classical music. RCA top brass were present, together with Lou's manager and friends. His parents, sister and brother-in-law also attended, though Lou hadn't seen fit to invite them to his wedding. And the press came. 'There was an enormous amount of pressure for that show. That was his New York solo debut. That was going to push _Transformer_. It was his comeback. The album _Lou Reed_ wasn't received well. But this has Bowie behind it and it was brilliant, so it was now or never. This was the make or break, and Alice Tully was the debut. It was really important,' says Bettye.
The first show went badly. 'I think the reason why the first show didn't go well was because he was drunk,' says Bettye. 'In fact, I know that's why it didn't go well. He would drink almost all the time anyway. But he would especially drink when he was nervous and upset.' Backstage, Lou sank into a funk of depression. Thankfully, there was enough time to get him sober for the second show. When he went on again at 11 p.m. he did better. It was this late show that John Rockwell reviewed for _The New York Times_. 'His voice is limited and insecure; his manner tense and shy. But his music in its basic, repetitive way, makes a powerful impact, and his songs have a reality to them that transcends easy moralism,' wrote Rockwell, who proved a sympathetic and astute observer of Lou's solo career. 'His voice was so personal. Not only was it limited, it was a little shaky in pitch. But, boy, could he sell one of his songs. It was just wonderful.'
There was a party afterwards at the Sherry-Netherland. Lincoln Swados hobbled into the hotel lobby with Lou and Bettye, eager to join the fun. It was appropriate that Lincoln was there. He had known Lou longer than most people and had brought Lou and Bettye together. 'Lincoln was with us, and we were walking up to the hotel room and Lincoln was down the hall, and it took him a little bit longer to walk than you and me, he had the cane, and I said, "Let's wait for Lincoln,"' says Bettye. Lou scowled and walked ahead. 'He wasn't good-looking enough any more, he was just weird-looking now, so he didn't want him in his circle. He cut him out of his life.'
Lou embarked on a US college tour after Alice Tully Hall. As he travelled the country, 'Walk on the Wild Side' started to get airplay on both sides of the Atlantic. Executives at RCA were as delighted as they were surprised by the success of the song. A&R man Bob Ringe notes that few among them had expressed much faith in the single in advance: 'Everybody said radio would never play [it]. People thought we were out of our fucking minds... Either people didn't catch [the oral sex reference], or they didn't know what it meant, or what he was saying, but it got on the fucking radio! I remember kids walking down the street doing, "doo da doo..."'
Holly Woodlawn was home in New York when he first heard 'Walk on the Wild Side' on the radio, having received no warning that Lou had written about him. 'A friend of mine calls me up and said, "Turn on the radio." I turned on the radio and heard "Holly came from Miami, FLA." When I finally met [Lou] I asked him, "How did you know I came from Miami?" He said, "Honey, you have told the world already. I didn't have to do much research."' Holly considered the song useful publicity for his career. Joe Dallesandro, on the other hand, was not pleased to hear Lou telling everybody he was a hustler. 'He never knew me, didn't know me, never met me. He went, under Paul Morrissey's direction, to look at some of our movies, and wrote the song about the character in the movies,' he grumbles. 'He never knew the person.' But Joe liked the tune. Everybody did.
The album followed the single up the charts. Considering the fact that RCA had only paid Lou an advance of $15,000, the company was already turning a profit. His college tour was enlarged to a tour of concert halls. Then he fired Fred Heller. Lou later claimed in court that he was 'afraid' of his manager, 'afraid he'd become physically abusive', an intriguing statement which he didn't enlarge upon. He further complained that he hadn't received proper accounting for his first UK tour, adding, 'I just didn't get along with him [Fred],' a surprising remark in light of the fact that the Hellers had been the only guests at his recent wedding. Lou also said that he was told by his advisers that there would be 'no problem' if he wanted to fire his manager, but firing Heller was a serious blunder – he sued Lou for breach of contract.
After Heller was dismissed, Dennis Katz, who had recently left RCA, became Lou's new manager and lawyer, which proved to be a poisoned chalice. Ever since he started working with Paul Morrissey and Andy Warhol, Lou had had difficult relationships with managers, usually ending in mutual recrimination. He soon fell out with Morrissey and Warhol, and their replacement, Steve Sesnick. Danny Fields resigned after two weeks. Fred Heller lasted eight months. Dennis would serve two and a half years before he and Lou parted on poisonous terms. The pattern would be repeated into the future. 'Lou always seemed to be falling out with somebody that mattered to him,' says former RCA executive Bruce Somerfield. 'Lou was not an easily managed person.'
For the time being, however, the future looked bright. With _Transformer_ and 'Walk on the Wild Side' selling fast, Dennis was able to negotiate lucrative deals for his client: obtaining a $130,000 advance from RCA, and a $200,000 advance on a publishing deal between Lou's new company, Oakfield Avenue Music (surprisingly, he named his company after his parents' home address) and Dunbar Music, a subsidiary of RCA. Lou would get $50,000 up front, plus $12,500 a month until 1976. Suddenly, big money was flowing in.
_Transformer_ entered the _Billboard_ Top Forty on 24 March 1973, ultimately reaching number twenty-nine. Despite its reputation, the single 'Walk on the Wild Side' wasn't a number one, or anything like it. It peaked at sixteen in the US. It did however spend two months in the US Top Forty, and it sold around the world, reaching number ten in the UK, where it spent nine weeks in the charts. While this wasn't a monster hit by the standards of most artists, in a year when 'Tie a Yellow Ribbon round the Ole Oak Tree' was the dominant international number one, it was the biggest record of Lou's career by far, enough to make him an international star – at last. Was he happy? No. 'He didn't want to be known for a pop song. He wanted to be known for serious music,' says Bettye. 'He got sick of it on tour. All they wanted to hear was _Transformer_. All they wanted to hear was "Walk on the Wild Side".'
* Bowie was almost five years younger than Lou.
## VIII
## Self-parody
### 1973–4
SO FAT THAT his belly hung over his leather trousers, his pancake makeup slick with sweat, Lou lurched around the stage, stooping now and again to swig from a bottle of Scotch which he kept stashed behind a monitor. Commercial success had been a long time coming, but when it arrived in the summer of 1973 he was unable to cope.
Bettye was with her husband as he toured the United States to promote _Transformer._ She was his lighting and stage director now, having asked Lou to give her a job to do, rather than just being his 'appendage'. She still had to look after him, helping him on stage, drunk, and putting him to bed after the show – dead drunk. Lou was drinking every day, starting around 3 p.m., mostly Scotch whisky, as well as ingesting a variety of drugs, and he treated Bettye roughly when he was under the influence. 'We were on the road, and he was really drunk, and he would, like, pin you up against a wall and tussle you, like rough you up a little. Shove you around. Throw you up against a wall. Tussle you. Hit you... shake you,' she says, becoming upset as she recalls. 'And then one time he actually gave me a black eye, and that was when I said, "All right, this is it. I'm not taking this any more."' So she hit him back. 'I'm not going to let anybody do that to me, and it was pretty clear to me that the only way he would ever stop doing that was if I did it to him, so he'd have to walk on stage with a black eye.' Yet Lou continued to shove his wife around.
Bettye found herself caught between Lou's demons and the men who had a financial interest in his career. They expected Bettye to keep him sober so he could work. 'I was trying to physically stop him from taking all the god-damn drugs, and taking all the god-damn booze, and I would get between him and a bottle. And then he'd come after me physically. That's what was going on... That's what you had to do in order to stop him, to stop him from drinking. You put yourself between him and the bottle. "No! You can't have any more... No! No, you can't!"' The drink was making Lou heavy, which was another concern. Rock stars aren't supposed to be fat. 'Everybody had a word with me about Lou's weight,' sighs Bettye. 'He got into really bad shape with all the drinking, and he didn't care... It was pretty gross.'
This unhappy tour wound up in Miami on 1 June 1973. The local sheriff's department was notoriously conservative, being the people who arrested Jim Morrison for allegedly exposing himself during a Doors concert in 1969, and they were waiting for Lou. When he sang about oral sex, the cops stopped the show, escorted him off stage and put him in handcuffs. 'Dennis [Katz] and I and a couple of other people from the entourage followed the car to the jail. Dennis was a lawyer, so he got him out,' says Bettye. 'It was actually really frightening, but Lou thought it was funny because he was getting more publicity.'
Apart from the problems that came with touring, Lou was under pressure to record a new album for RCA to build on the success of _Transformer_. Although he had been a prolific songwriter in the past, he was struggling to write new material with so much going on in his life, and the drinking didn't help. It was at this stage that Dennis introduced him to Bob Ezrin, a twenty-four-year-old Canadian producer who had been making a series of hugely successful records with Alice Cooper, including _School's Out_ , one of the biggest hits of 1972. They met in Ezrin's home town of Toronto, where they decided to re-use 'Berlin' as the start of a song cycle of the same name, permeated with the sleazy atmosphere of inter-war Germany, part Christopher Isherwood/part _Cabaret_ /part _Threepenny Opera_. 'His writing was so evocative – I could see, smell and feel the record. It reminded me of Brecht and Weill,' explained Ezrin, who also saw the project in cinematic terms. _Berlin_ would be promoted grandiosely as 'a film for the ear'. 'We came up with a concept and he went and wrote it.' In fact, Lou wrote very little for _Berlin_. Apart from the title song, which was old, he adapted four Velvet Underground tunes: 'Oh, Jim', 'Men of Good Fortune', 'Sad Song' and 'Stephanie Says', which became 'Caroline Says' parts I and II. Still, he didn't have enough material.
Then Bettye received a call to inform her that her mother had cancer. The news prompted her to tell Lou the full story of her unhappy childhood, including the custody battle her parents fought over her, during which her father accused her mother of being promiscuous, alleging that she picked up servicemen in bars and was therefore unfit to look after a child. When Bettye went to bed that night, Lou sat up, drinking and writing. The next morning he told his wife that he had written the songs to complete _Berlin_. He sang them to Bettye, who was surprised to hear the story she had told Lou regurgitated, particularly in 'The Kids', which described how a mother lost custody of her daughter after being accused of sleeping around with, among others, a serviceman. Bettye recognized specific details as statements she had made to Lou about her mother, mixed up with things he'd invented. 'There is the whole thing with my mother and my father and me, "They're taking her children away," that stuff.' Lou had also based aspects of the character of Caroline on his wife. 'Yes, there's poetry on the shelf. That's mine,' she says referring to a line in the song 'The Bed'. 'Did I try to kill myself? [another line]. No. And that's how writers write. And he was desperate. He needed the material.' She didn't like the way Lou had used her life for his work, but didn't feel able to stop him. 'They were waiting for [an] album. He finally got an album. What am I going to do about that, say you can't use that album? I can't say that. Of course I was hurt. I was devastated. But I wasn't going to let him see that.'
The Reeds returned to London to make the new record, staying once again at the Inn on the Park, while Lou worked with Ezrin and a cast of superstar musicians, including Jack Bruce and Steve Winwood, at Morgan Studios. _Berlin_ would be the most expensive and highly produced album of Lou's career, with choirs, strings, horns and multiple overdubs, much of which he had nothing to do with. Ezrin wrote the arrangements. It was as much his record as Lou's, and Lou often seemed to be in the way. 'Bob gave me directions: "Keep him out of the studio,"' says Dinky Dawson, who was working with Lou at the time. 'So me and Bettye would keep him out of the studio until it was his turn to come in.'
The _Berlin_ songs delineated a disastrous relationship between the characters of Caroline and Jim. Caroline accused Jim of being sexually inadequate ('Caroline Says I'). He accused her of being a 'slut' who slept around and abused drugs ('The Kids'), and beat her. No less than three of the songs described domestic violence, including 'Oh, Jim', in which Lou sang, 'Beat her black and blue and get it straight.' Caroline was also beaten in 'Caroline Says II', while the last line of the last song, 'Sad Song', had Lou singing that anyone else would have broken her arms. Did any of this reflect his own feelings or behaviour?
As we have seen, Lou's friend Allan Hyman recalls him hitting a girlfriend, while Bettye is one of several sources to describe Lou as a misogynist. She also says that Lou hit her, which, indeed, he admitted to. 'I needed a sycophant who I could bounce around and she fit the bill... but she called it love, ha!' he said of his relationship with his first wife in 1978, looking back on the marriage. In another interview he went so far as to suggest that women liked being _bounced_ around. 'I'm a chauvinist down to my toes,' he told _Creem_ in 1979. 'I think women admire force all the more for not having it – nobody admires strength more than a weak person. It's axiomatic that a woman is all the more impressed that you could kill her. A straight guy might have something to learn from his gay friends, in that a woman can get turned off if you're appreciative of her when what she really wants is to be smacked across the mouth. I know this is a terrible, chauvinist point of view – this will be very unfavourable towards me...' Such comments were clearly meant to be outrageous, and project an image. But the evidence suggests that he may have meant it.
The sordid nature of _Berlin_ affected those involved. 'It was an emotionally hyper-charged atmosphere in that studio at that time,' says Ezrin, who reveals that there was a lot of drug abuse in the making of the album. 'I can honestly say that all of us were messing around with things we shouldn't have been messing around with.' Lou was partying at the time with David Bowie, whom Ezrin considered to be a distraction. The stars were photographed together at the Café Royal in July, appearing to kiss gingerly for the cameras. The smirk on Lou's face indicated that it was probably a publicity stunt. Bowie, a better actor, managed to keep a straight face. The same night Lou was photographed kissing his wife full on the mouth, though that picture didn't make the newspapers.
Bettye became very unhappy in London. Lou later claimed that she became so depressed that she attempted suicide in their hotel suite. 'Like, during the recording session, my old lady – who was an asshole, but I needed to have an asshole around to bolster me up,' he said in 1978, looking back on the marriage, '... anyway, my old lady, during a recording session, tried to commit suicide in the bathtub in the hotel... Cut her wrists...' Bettye denies this. 'I did not cut my wrists... At that point in our relationship he probably would have liked me to have committed suicide – an easier way to get rid of me.' It was nevertheless clear that their marriage was not going to last, and the end came soon enough.
The Reeds were at a party when Bettye caught Lou in a bathroom with another woman, shooting up. 'It was heroin,' she says. 'The next day I said, "I want a divorce." He thought I was kidding, and I wasn't.' Bettye told Dennis that she didn't want any money, over and above her wages for her lighting work, she just wanted to be free. Papers were drawn up for her to sign. Then she took a flight to the Dominican Republic, where she was granted a quickie divorce after just seven months of marriage. Strangely, she went back to living with Lou in New York as a divorcee within a fortnight.
Bettye has a vague memory of Lou being hospitalized around this time, late summer 1973, possibly once again with hepatitis, shortly after which he had to return to Europe for a tour. Even though they were now divorced, she agreed to go with him to keep him out of trouble. 'Now I was just the nursemaid...'
To capitalize on the success of _Transformer_ , and to promote _Berlin_ , a new band of top-notch musicians was hired for the tour, led by Dick Wagner, a guitar ace from Detroit, partnered by another virtuoso guitarist named Steve Hunter, both of whom had played on _Berlin_. 'They're great players, no question about it. Steve came from Mitch Ryder's band. Musically, you don't get any better than that,' says Ray Colcord, who played keyboards in the band. 'My job was quite clear. I was to provide a solid background so the guitar players could go nuts.' Completing the band were Pentti 'Whitey' Glan on drums and Peter Walsh on bass. During rehearsals at the Music Inn in Massachusetts, Lou was told that he wouldn't be required to play guitar in the show. Although Lou rated himself highly as a guitarist, few professional musicians agreed with him. 'He thought he was like Jimi Hendrix. He used to brag about it – I never saw any evidence of that. In truth, the only thing I ever saw him do was strum a guitar,' says Dick Wagner. 'He played enough guitar to write the songs he wrote, and they are the most important part of the whole thing. He wrote some very simple, but really brilliant songs... I took what we had recorded with _Berlin_ , and the other songs, and tried to make them into coliseum-size songs – the songs are mostly my arrangements – so we could be on the road and be majestic. Take the brilliance of his songs and take them to a new level.'
They flew to London in September 1973, checking into Blake's Hotel in South Kensington, which Lou patronized for the rest of his career. He stayed initially in a small basement room, forming a friendship with the night porter. Bowie sent flowers to welcome him back to town. The band then flew to Germany for a festival on the 9th, for which Lou received $15,000, three times as much as he had been earning for a show during his recent US tour. This was the start of a new phase in his career, during which he played much bigger gigs, for which the new band was designed. The show began with a guitar duel between Hunter and Wagner, an instrumental they'd worked up in rehearsals as an introduction to 'Vicious', later adapting it as the introduction to 'Sweet Jane'. When they segued into the familiar power chords Lou strode on to an ovation. 'What a great way to bring on the show!' says Peter Walsh. 'When it finally got to the intro to "Sweet Jane" the crowd would get to their feet and go nuts because they would recognize "Sweet Jane" right away.' Without his guitar to occupy him, Lou was forced to be more theatrical on stage, making hand gestures and tossing his microphone around like Roger Daltrey. 'The best thing to do with Lou was stand him there, let him do little things with his arms and his microphone, and let the expression on his face – that was why it was painted with white face – come through,' says Dinky Dawson, 'except he did get excited now and again, and swing the mic stand on to monitors, and trash [equipment] for the heck of it.'
While the show proved popular, critics reserved most of their praise for the band. 'Lou Reed, looking like a panda in ill-fitting leathers, lurched on in something akin to a swagger and a stagger minus that old Gerry & the Pacemakers guitar previously designed to hide his paunch, and grabbed the mic stand,' Nick Kent wrote in the _New Musical Express_ , reviewing an outdoor show at London's Crystal Palace on 15 September. 'From then on we were treated to Reed's clumsy but earnest attempts to carry himself off as a lead singer... but the band itself bristles with potential, showcasing as it does blisteringly fine guitarists in Steve Hunter and Dick Wagner.' Lou was enraged by such coverage, accusing his guitarists of upstaging him. 'I'm fed up with them,' he told Dawson. 'If they don't calm down, send them home.' He insisted on travelling separately from the band, and ignored his musicians when they were off stage. 'He wouldn't talk to me at all,' Hunter complained. 'In Europe, all the newspaper reviews talked about the band and kind of belittled him as being seemingly out of it, and he was. He was doing a lot of speed at that time. He was pretty drugged up,' adds Wagner. 'The band had most of the reviews, and I know Lou didn't like that.'
Lou spent most of his time sequestered in his hotel suite with Bettye, his bodyguard, Bernie Gelb, and stage manager, Jim Jacobs, who didn't form a good opinion of his boss. 'Lou wasn't a nice person,' says Jacobs. 'I don't say that with any malice, we had a perfectly good relationship, but he wasn't a nice person. No one could ever accuse Lou of being a nice guy, whatever his [widow] says... It was "Walk on the Wild Side" that made Lou, and that's David, but Lou didn't give a shit what anybody did for him. He didn't care. He was a dick... It is [also] important to understand that Lou was a stone junkie. That had a lot to do with his personality. He loved to shoot up.'
After a short period when he had sworn off drugs, Lou was dabbling again. 'He was just mixing everything up, drinking and speed,' says Pentti Glan. 'I know he did dabble with heroin, too. If he was using [on tour] it didn't [become obvious], but cocaine and amphetamine and drinking, that was primarily what [he was using].' Lou's minders Bernie Gelb and Jim Jacobs tried their best to keep him sober. 'When we started rehearsals for Europe I had an understanding with Lou that he wasn't going to drink on the tour, and he was good about it [initially]. He did other recreational drugs, but he was in control,' says Bernie. 'I would literally tuck him into bed every night. My room, or Jim's room, would be the room next to his. Once I got him into bed I was reasonably sure he [wouldn't stray]. If he needed something he would call me on the phone.'
When they got to Paris, Lou and Bettye checked into the luxurious Hotel Bristol, a sign of how much money Lou was now making. Bettye tried to persuade Lou to take her sightseeing, but he wasn't interested in doing anything of the kind. 'Lou did treat Bettye very poorly,' says Steve Katz, Dennis's younger brother, who was travelling with the entourage. Lou and Bettye had a row in their hotel before Lou's show on 17 September. 'We were sitting down, and he was incredibly obnoxious. It was the obnoxious Lou,' Bettye recalls. 'I got up from the table and said, "I'm not taking this any more." He got up and shoved me. So I picked up a glass of milk – I was drinking milk – and threw it in his face, and said, "I'm leaving you."' This was somewhat after the event, in that they had already been divorced for a month. Nevertheless, Bettye ran out of the hotel in tears, and walked the streets alone while she faced up to life without Lou. Meanwhile, Lou did his gig at L'Olympia, which took an unexpected turn when a well-known Parisian transvestite climbed on stage. Lou's minders threw the transvestite off, but Lou called for the guy to be brought back, having apparently taken a fancy to him. The next day Bettye flew home to New York. They never spoke again. She saw him one more time from a distance, when she sat in the audience at one of his shows. 'He looked really angry with me. You don't leave Lou. He would just be angry with you for that – particularly me – it was an incredible betrayal as far as he was concerned.'
With Bettye gone, Lou really started to misbehave. 'We were in Amsterdam, and he hooked up with an old boyfriend, and this guy gave him speed after the show,' says Bernie Gelb. 'So when we picked him up in the morning he was already a little jangly. What I didn't know was that he still had some speed left over, which he did just before he went on stage in Brussels the next night, and had a huge health issue – a huge problem on stage.' Lou was partway into his set when he bent over and split his leather pants. Bernie and Bob Ringe, now his European booking agent, ran on and wrapped Lou in gaffer tape to prevent him exposing himself. Lou continued to sing for a few minutes, then collapsed.
Bernie carried Lou to his dressing room. 'What did you do?' he asked.
'I took some speed.' Bernie was so angry that he felt like punching him. 'Bernie, I can't go back on stage. Don't let anybody in here.'
As the curtain came down, signalling that the show was over, the audience began to tear up the seats in anger. 'Dennis comes running up with the promoter and says, "You've got to get Lou back out there,"' recalls Gelb.
'Dennis, it's not happening.'
'Let me in there to speak to him.'
'Dennis, I can't... Lou tells me he can't see anybody now, including you.' They had a row outside the dressing room. 'He was really angry with me. The cops came and quelled the riot. I went back to Lou and stayed with him for about an hour until his heart rate came within a somewhat normal range, and I said, "Lou, you can't do this to me again. Let's stick with our programme. No more speed." And he was good for the rest of the tour.'
When Lou brought his band back to the United Kingdom, his dates were reported in the _New Musical Express_ under the headline 'RETURN OF THE PRINCE OF PONCE'. Such irreverence didn't endear him to the British press, which tended to mock and applaud him in equal measure. More dismaying were the reviews of _Berlin_ , which were starting to appear on both sides of the Atlantic. In truth, the album received mixed reviews, including praise from John Rockwell in _The New York Times_ , who thought _Berlin_ 'one of the strongest and most original rock records in years'. The balance of opinion, though, was that _Berlin_ was an over-produced record of mawkish songs that were depressing to listen to. Ezrin's decision to overdub a home recording of his son crying on to 'The Kids' was the cherry on a sickly confection of misery. Writing in _Rolling Stone_ , Stephen Davis slated _Berlin_ as a 'disaster ... [Lou's] last shot at a once-promising career'. Lou focused on the bad reviews, devastated by criticism of an album that, in his drunken, drug-addled hubris, he considered a masterpiece. Sales were respectable; Lou claimed 110,000 copies sold over the next two years. But this didn't meet the expectations of RCA, who had hoped to build on the success of _Transformer_ , and _Berlin_ henceforth carried the taint of failure. 'The way that album was overlooked was probably the biggest disappointment I ever faced,' Lou said in 1977, explaining that his distrust of journalists stemmed from this experience. It could be argued that he went into a career sulk after _Berlin_ from which he never emerged. 'I pulled the blinds _shut_ at that point. And they've remained closed... I don't care what people write about me any more. I have no respect whatsoever for their opinions.'
Although he didn't enjoy commercial success with the Velvet Underground, Lou gained the respect of his peers, the critics and a select audience for the intelligence of his writing and the versatility of his music. He was one of the few songwriters of the 1960s who could legitimately be compared to the great Bob Dylan. His subsequent solo career started with a disappointment, then received a boost from the success of _Transformer_. After the perceived failure of _Berlin_ he entered a much less impressive period when he squandered his reputation. There would be good work in the years ahead, but there would also be a lot of sub-standard tosh from an artist whose judgement was often clouded by alcohol and drugs and who relied increasingly on image. This did long-term damage to his career.
After his break-up with Bettye, Lou returned to Barbara Hodes, who had never understood his marriage. He also stepped up his amphetamine use, becoming a patient of an Upper East Side quack who dispensed legal shots of pharmaceutical amphetamine laced with vitamin supplements. Celebrities queued up for these legal highs. 'Lou, this is going to kill you,' Steve Katz warned Lou when he started to frequent the surgery.
'That's all right,' he replied. 'I'm enjoying myself.'
That winter he went back on the road with a new bass guitarist in the band, Prakash John, whom Lou called 'the Christian' because of his religious beliefs. The Christian had previously played with George Clinton's Parliament, and gave the band a funkier sound. They toured the north-eastern United States during snowstorms that meant many venues were half empty. Lou's escalating use of amphetamine affected the pacing of the show. 'One night he would want everything fast. Another night he would want everything slow,' says Ray Colcord. He didn't always come in on time either. 'Not now, Lou!' the musicians would yell when he started to sing in the wrong place, an echo of Lou's problems with Nico in the Velvet Underground. The tour culminated on 21 December 1973 with two celebrated shows at the Academy of Music in New York. The mood backstage on the night – a freezing cold evening – was tense. Lou had a tiff with Andy Warhol, who showed up unexpectedly, while Bernie Gelb argued money with the impresario Howard Stein, threatening to cancel the second show right up to the last minute unless Stein handed over more cash. 'Howard Stein only wanted to pay for one show,' says Gelb, who threatened to pull the plug on his artist unless he got another six thousand dollars. 'I said, "In two minutes Lou is going to walk out and grab the microphone, and no one is going to hear him, and it's going to be your fault. You are the promoter." He finally broke down, counted out the cash. I gave Dinky [Dawson] the signal just as Lou was walking on. I told the band to stretch the intro as long as they could, so I could get the money... Literally, as Lou's about to walk on stage, we got the mic [switched] on. So that's the story behind the [long] intro.' 
The concerts were recorded for a live album, _Rock 'n' Roll Animal_ , produced by Steve Katz who, along with his brother, was trying to move Lou towards a mainstream audience by presenting classic Velvet Underground material alongside the best songs from _Berlin_ and _Transformer_ in an accessible rock setting. Lou was ambivalent about this. 'I think they were trying to get him a little more mainstream, in terms of the American audience, and he was hesitant about going more mainstream,' says his drummer Pentti Glan. 'He wanted to be truer to himself.'
Lou invited friends back to his apartment after the shows for what one guest, musician Alan Freedman, describes as some 'serious and dangerous partying ... He was messing around with some very dangerous drugs.' The binge continued for three days. On Christmas Eve, Lou was arrested while trying to buy more drugs on Long Island. 'I got busted in Riverhead for trying to cash someone else's [illegal prescription]. I spent Christmas in the dangerous tier when it was discovered that someone with my name who was wanted for murder had escaped from jail in upstate New York,' Lou later exaggerated the tale. Barbara Fulk, who worked at his new company, Transformer Enterprises, says that she bailed Lou out of jail the same day, 'in a vintage white Cadillac limo... paid $500', meaning that he only spent a few hours in custody. Nevertheless, it was a foolish thing to do, especially as he could obtain as much speed as he liked legally from his Dr Feelgood in New York. Lou probably simply ran out of drugs over the holidays. His stage manager, Jim Jacobs, was furious. 'I was screaming at Lou, "What the fuck are you doing? You are a star now, you can't do this shit." He didn't care.' Jacobs had had enough. 'I was already on my way out at that point. I had a lot of commitments, and I didn't have time for Lou's shit any more.'
The album _Rock 'n' Roll Animal_ consisted of just five songs, four Velvet Underground standards plus 'Lady Day' from _Berlin_ , the live arrangements of which were stretched out to a generous length. Chrissie Hynde, reviewing the album for the _New Musical Express_ before her career as the leader of the Pretenders, wrote that Lou seemed to be putting on a cynical show for unhip people who'd missed the Velvets. She probably would have been even more disappointed had she known that the ecstatic audience applause on the record was overdubbed from a John Denver concert to add atmosphere. The irony was that this untypical Lou Reed album, with its swaggering arena-rock sound, proved a major success in what was the heyday of the live rock album (released in 1974, the same year as Dylan's _Before the Flood_ and a year before _Frampton Comes Alive!_ ), selling much better than _Berlin_ , reaching number forty-five in the US charts, twenty-six in the UK, and remaining to this day a favourite with many people who wouldn't normally buy a Reed record, let alone anything by the Velvet Underground. Considering it cost RCA next to nothing to make, this was a terrific result for the company. The record also expanded Lou's audience. But he hated it.
Lou and Barbara Hodes, at her apartment.
He was living with Barbara Hodes again, at her swanky apartment on Fifth Avenue. Despite the fact that this was the most successful period of his career, he let her pay the rent for all but one of the months he stayed with her. He was a tight-fisted boyfriend, and not particularly romantic. 'One Valentine's Day [1974] he brought me a present – you tell me whether this is romantic – he went out and bought me a snake, a garter snake called Edgar Allan Snake,' Barbara says with a laugh. The couple kept Edgar in a terrarium and fed him on live fish. Lou wrote the songs for his next album in Barbara's bedroom. 'I had a wonderful bedroom with the windows facing the Presbyterian church on Fifth Avenue, and you could see all the way to New Jersey. Lou sat on the bed improvising "Sally Can't Dance".' This was the title song of a studio album of the same name, the first Lou Reed solo album of entirely new material. The sweet melody belied a grim lyric, based on a real-life case of a girl who frequented the clubs of New York and was raped and murdered by men who stashed her body in the trunk of their car while they continued their night out. Lou sang about Sally's demise with amused detachment, noting obscenely how she got raped 'real good' before she was killed. 'And that's why Sally can't dance. They found her in the trunk of a car,' he explained. Here was another example of a morbid preoccupation.
The most significant new song was 'Kill Your Sons', in which he addressed the topic of his electro-convulsive therapy. Despite the implied criticism of Sid and Toby Reed, Lou remained in contact with his parents. Sid even acted as a consultant in his son's business affairs in 1973 when he looked over Lou's books and advised him to settle the Fred Heller lawsuit: good advice, which his son ignored. Lou introduced Barbara to his parents. Unlike most of his friends, she took a dislike to Sid. 'If you had met his father you would understand why he was fucked up. The father was cold, nasty, withdrawn, grumpy. I felt sorry for his mother... I came away from that just thinking it was a miracle that Lou didn't end up worse.'
_Sally Can't Dance_ was recorded at Electric Lady, the lavish studio Jimi Hendrix built in Greenwich Village, in the spring of 1974, with Steve Katz producing. 'We had a hit album with _Rock 'n' Roll Animal_... and we wanted to back it up with something that was just as commercial,' Steve explains the thinking behind the record, for which he made many of the key decisions. 'Lou had to have certain aesthetic decisions made for him because of the drugs, actually. Although I loved him, it was very difficult working with him. And I don't think he had the ability to go in and do his own album alone.' Dick Wagner and Steve Hunter had left the band, but Prakash and Pentti played on the record, together with guitarist Danny Weis and keyboardist Michael Fonfara, a moustachioed hedonist whom Lou called the Turk. Surprisingly, Doug Yule joined the band at this stage, following the expiration of the Velveteen Underground. Doug was a good musician, and enough time had passed for Lou to tolerate his presence again. Their former colleagues had left show business. Sterling was living with his wife in Austin, where he was a teaching assistant at the University of Texas, while Moe was living quietly with her family in California. Doug, Sterling, Moe and John Cale earned virtually nothing from their old VU recordings at this time. Lou did, however. In the spring of 1974, he received $40,000 from MGM.
In contrast to the cohesive sound and musical integrity of the Velvet Underground albums, _Sally Can't Dance_ was a hotchpotch of fashionable musical styles. 'I wanted to take Lou's songs, his newer songs, and sort of put them in a context of each song getting its own specific feel,' says Steve Katz, explaining what he tried to achieve. In this way, 'Kill Your Sons' was recorded as hard rock, sounding not unlike an Alice Cooper track; 'Ennui' was a piano-driven ballad that wouldn't have been out of place on an Elton John album; while the predominant musical flavour was R&B. This was Lou's choice but reflected the ongoing influence of David Bowie, who was about to make his own white-soul album, _Young Americans_ , which turned out to be a much better record, and advised Lou on his album. Barbara recalls Lou and David socializing together at this time, including one evening at her apartment when Bowie was so stoned that he was 'crawling around on his hands and knees'.
Lou called his sister, who was living on Long Island with her husband, Harold, to warn her about the album. 'Bunny, I have to tell you something.'
'What did you do now?'
'This song's coming out.' Lou recited the lyrics of 'Kill Your Sons', which, apart from touching on his ECT and making unflattering implications about their parents, described a sister who'd married a fat guy on Long Island who took the train to work and didn't have a brain.
'Are you serious?' asked Bunny. 'You wipe out my lifestyle and my husband in four phrases?'
'Ah, I needed something to rhyme with train. So I had to take poetic licence.'
'Thank you very much, that's very sweet.'
Bunny was always tolerant with Lou. 'It was indeed written about Harold,' she says now, sitting with her husband. 'Harold found it enormously funny.' Harold concurs with a chuckle: 'He had one of the great senses of humour... A lot of people didn't get it.' This was perhaps the best way to deal with a brother-in-law like Lou Reed: laugh at his rudeness.
One side-effect of the amount of speed Lou was using was that he lost a lot of weight, going from podgy to skeletal over the course of 1973–4. Barbara took a remarkable photo of Lou at her Fifth Avenue apartment in which he looks emaciated. His appearance became even more startling when he had a crew cut with Maltese crosses dyed in the sides in preparation for a spring tour of Europe. Then he had all his hair dyed blond.
Once again, Lou stayed at Blake's in London, where Prakash John witnessed a scrap with David Bowie. 'I remember coming out of Blake's [and] seeing the limousine doors open and Lou and David Bowie falling out of the car in a cat fight... rolling around on the sidewalk outside Blake's Hotel. Who knows what they were fighting about. It looked to me like two women fighting over a guy... I thought, "This is weird."' Another night, Lou urinated on the floor of the hotel bar. 'That grossed me out – that is unacceptable.'
Barbara Hodes photographed Lou looking emaciated.
Lou found himself singing to tens of thousands of people as one of the main support acts for the Who at Charlton Athletic football ground in London on 18 May 1974, his biggest show yet. Other acts on the bill included Humble Pie and Bad Company. This huge open-air event was a far cry from Max's Kansas City, demonstrating how far Lou had come in four years. 'The performance went well, except everybody was pretty drunk,' says Michael Fonfara, who became Lou's new band leader at this time. He shared his boss's fondness for booze. It was one of the factors that drew them together over the next six years.
Back in New York, Lou moved out of Barbara's flat and rented a one-bedroom apartment on the ground floor of the Yorkgate building at 405 East 63rd Street. Considering that this was the high point of his career in terms of record sales, the Yorkgate was an unprepossessing address. Barbara – who had a key to the apartment – could never understand why Lou chose to live in such dreary places. He certainly lived less ostentatiously than many of his rock-star contemporaries, in an era of conspicuous, often grotesque over-consumption. In contrast to the likes of Elton John and Rod Stewart, who lived like movie stars, acquiring mansion homes and limousines, Lou didn't own property, or even a car. After a brief flirtation with glam fashion, he also eschewed expensive bespoke clothing, dressing in jeans, T-shirts and leather jackets. He had lost or given away valuable artwork by Andy Warhol, as well as his Velvet Underground posters. So long as his rent was paid and he had a guitar, an amp and a tape recorder, a bottle of Scotch, a carton of cigarettes, a pint of coffee ice cream in the fridge, and enough drugs to keep him high while he wrote his songs, Lou seemed content living in low-rent accommodation. He remained a true bohemian.
One day when she was visiting Lou's new apartment, Barbara noticed a pair of false eyelashes in the bathroom. 'I asked Lou, "What are these false eyelashes doing here?" And he said, "Oh, I met this poor person at this club and I offered her a place to sleep."' So Barbara met Rachel, a young transvestite (about ten years younger than Lou) with an angelic face, long dark hair and plucked eyebrows. He was sleeping on the couch.
Rachel's real name was Richard, and he answered to Ricky. Lou's friends believe that his surname was Humphreys, though the spelling, like many of the facts of his background, remains uncertain and somewhat mysterious. The consensus of opinion is that he was from Philadelphia, where he once worked as a hairdresser. His childhood had been tough. Rachel was streetwise in a way that Lou only pretended to be, and this was evidently part of the attraction. Rachel was rough trade. They met one night at Club 82, an after-hours bar in the East Village, when Lou was speeding. 'I'd been up for days, as usual, and everything was at that super-real, glowing stage. I walked in, and there was this amazing person, this incredible head kind of vibrating out of it all. I kept watching for ages. Rachel was wearing this amazing make-up and dress and was obviously in a different world to anyone else in the place. Eventually I spoke and she came home with me,' he explained. 'At the time I was living with a girl, a crazy blonde lady, and I kind of wanted us all three to live together.' Barbara, who had until recently been living with Lou, confirms that he suggested a _ménage à trois_. 'I wasn't having any of that.' Although Lou assured Barbara that Rachel liked her, Barbara was scared of Rachel, who, she says, carried a knife. 'I didn't want my face slashed.' So she left them to it, though she stayed friends with Lou. 'Rachel was street trash, and I think that was the attraction to her.'
Rachel's alternating persona was confusing. As a man who dressed as a woman but hadn't had a sex change, Rachel was a 'he', though the feminine pronoun often seemed more appropriate, and many people used 'she', or a mixture of the two. Journalists had fun with the dichotomy. Lester Bangs solved the problem in print by referring to Rachel as Thing, which was a little cruel, while journalist Ben Fong-Torres described Lou's new friend more elegantly as 'a boyfriend named Rachel'. It really came down to who Rachel wanted to be on any given day. If he wanted to be Rachel, he dressed in a feminine way, though rarely in full drag. 'He wore one-pieces, very clingy, and literally just tucked himself under, and he didn't have boobs or anything, but he had long hair and a beautiful face and he wore make-up,' recalls Liz Gilmore, who got to know the couple as the girlfriend of one of Lou's European promoters. 'We were told, "If he's dressed like a man, he's called Richard. If he's dressed like a woman, he's called Rachel."' Lou and Rachel evidently enjoyed the confusion, and further muddied the water by wearing each other's clothes. 'I would go there on different days and one thing Rachel wore the last time, Lou would be wearing [the next],' says Elliott Murphy, who was hanging out with Lou at his apartment, having written the liner notes for the _1969_ album, which had only just been released, on the back of Lou's new celebrity. 'They were always wearing Fiorucci jeans and things like that.'
Rachel spoke quietly, with a lisp, and was subservient to Lou, though prone to melodrama, which made for two drama queens. Languid and effeminate, he could be ferocious in a fight – kicking, punching, wielding knives, even broken bottles. Lou claimed Rachel nearly blinded a guy in a fight in LA. While most of Lou's friends learned to rub along with Rachel, he was too strange for some. Prakash 'the Christian' John says that his dog trembled when Rachel came into the studio. 'I sensed the dog was terrified of Rachel... When she walked out, I noticed the dog had defecated on the studio floor.' But Lou found Rachel exciting, and they were together for a long time. 'Rachel knows how to do it for me,' he said, three years into the relationship. 'No one else before ever did.' Michael Fonfara, who worked with Lou throughout this period, has no doubts that theirs was a sexual relationship. 'He was Lou's live-in lover.' It wasn't, however, an exclusive relationship.
When it came time to design the cover for _Sally Can't Dance_ , Lou met artist David Byrd at the Ninth Circle, a gay bar in Greenwich Village, to discuss the project. 'Lou carried with him a shoebox full of SX-70 Polaroids (the camera of the day for cool art dudes). He told me the Polaroids were of his many "girlfriends", though being gay myself I could tell they were drag queens,' recalls Byrd, who found Lou 'quite magnetic and handsome in a kind of damaged way'. Lou picked out Polaroids of a favourite 'girlfriend', an androgynous creature with long hair, plucked eyebrows and a faint moustache who he wanted to represent Sally on the cover. Byrd used the picture as the basis of a portrait on the back of the sleeve. 'I decided to do a blow-up of "Sally" reflected in his mirrored glasses, smoking a fag and looking quite toasted.' This was Rachel (credited in the sleeve notes under the absurd moniker René de la Bush), demonstrating how important he had already become to Lou. Byrd adds intriguingly that one of the Polaroids Lou showed him 'was of René completely bound up in Saran Wrap [cling film].' We might imagine Lou and Rachel cocooned in plastic in their private moments.
_Sally Can't Dance_ sold surprisingly strongly, reaching number ten in the US in the summer of 1974, the summer Richard Nixon resigned the presidency, making it the highest-charting album of Lou's entire career. He was bemused by this unexpected and undeserved success, considering that the album wasn't very good. 'This is fantastic – the worse I am, the more it sells,' he told Danny Fields. He said many other disobliging things about _Sally Can't Dance_ over the years, to the annoyance of the musicians who played on the record. 'I once said to him, "Why don't you return your royalties, or share it with the rest of the band, if you hate it so much?" But I never got a good answer,' says Prakash John. 'You can't disparage your work when it has been successful for you. He did try to have commercial success. He pretends like he didn't.'
When Prakash and Pentti Glan discovered that RCA were going to release a second album from the Academy of Music tapes – _Lou Reed Live_ – without paying them any more money, they decided they'd had enough of Lou and quit the band. He carried on with replacement musicians, touring Australia that summer. Upon arrival in Sydney he gave a memorable press conference that can be seen as a homage to Andy Warhol's style of dealing with journalists, as well as pandering to his image as deviant drug fiend. 'Lou, you sing a lot about transvestites and sadomasochism. How would you describe yourself in light of these songs?' asked one bluff Aussie reporter at the airport.
'What does that have to do with me?' replied Lou, feyly.
'Could I put it bluntly, and pardon the question, are you a transvestite or a homosexual?'
'Sometimes.'
'Which one?'
'I don't know. What's the difference?'
'Where do you spend your money?'
'On drugs.'
It was what they wanted to hear.
He then went a step further and began to mime shooting up on stage during 'Heroin'. Gerard Malanga used the same routine during the Exploding Plastic Inevitable in 1966, only Gerard used a large prop syringe as a joke. Lou made it look real. First he wrapped his microphone cord around his left arm, which he extended towards the audience – palm up. Then he drew a real syringe from his jeans and appeared to mainline while the crowd howled with a mixture of disgust and excitement like the mob around the guillotine. Afterwards he passed his 'used' syringe to the people in the front row. 'What a tacky gesture,' sniffed his former friend Lisa Robinson in a review of his 9 October 1974 show at the Felt Forum in New York. Of course, he intended to create shock and disgust. 'Much of what Lou did was in bad taste. And that's one of the reasons people liked him,' points out former RCA executive Bruce Somerfield. 'He'd shoot his finger at good taste: I'm Lou Reed and fuck you! That was part of Lou's persona.' It hadn't been part of his persona in the Velvet Underground, when he was a serious artist, but in order to reach and hold the attention of a broader rock audience in the mid-'70s he had become a parody of himself, against his ultimate better judgement. 'Lou was being thrust into a commercial world, when really he wanted to be a poet,' says Steve Katz, enlarging on the point. 'Going commercial was like [becoming] a caricature of himself and, unfortunately, that's what he wanted to do.'
Looking back on his career, Lou conceded that this was a bad period in his life, so bad that he felt like killing himself at times. 'Back then, I thought I'd lost it and I did a bunch of things I was really unhappy with. And I did it all in public and on record and there it is,' he confessed in 1989. 'I just kind of blew it apart because my problem – amongst all the other problems that were going on, which you'd have to be deaf, dumb and blind to miss (i.e. the drugs) – was that I thought my ability had gone. I thought, "It's gone," and I got really upset about it. I thought it had deserted me, and it made me really crazy and suicidal, and I just did what a lot of people do about things like that, which was _more_ drugs and I got more fucked up, to say the least.' In fact, he had only begun his descent into drug addiction and chaos.
## IX
## Howling like the Devil
### 1975–6
IN THE FIRST week of 1975 Lou went back to Electric Lady to start work on the _Coney Island Baby_ album, recording 'Crazy Feeling', 'She's My Best Friend', 'Downtown Dirt' and the title song. Although these were all good songs, with hooky tunes and interesting lyrics, notably about Lou's love for Rachel and his childhood on Long Island, the vocals were sloppy and slurred, and his attitude to work was bad. 'He was really sort of crazed at the time,' says Steve Katz, who had recently caught Lou shooting up in the toilet. 'He was impossible to work with. He was unreliable... He just wasn't being a cooperative artist.'
Lou had come to dislike the two records he had made with Steve, despite the fact that _Rock 'n' Roll Animal_ and _Sally Can't Dance_ had been so successful, and he now turned on his producer. 'I give up! If you are gonna play these games, I _know_ you're gonna outwit me,' Steve told him after two miserable days in the studio. 'I acknowledge that you're much smarter than I am.' Lou claimed Steve walked out on the sessions at this stage; Steve says Lou left. Either way, work stopped. 'I called Bruce Somerfield at RCA and I said, "We have to call these sessions [off]. I don't have an artist here."' Bruce came down to investigate. 'I do not know [what] caused the massive blow-up between Katz and Reed... All I know is that [they] had a terrific falling-out at the January sessions,' he later testified in a court case between artist and producer, whose relationship never recovered. '[Steve] told me that Reed had been abusive towards him and that he felt that he never again could have the same type of personal affection for Reed...'
In recent months Lou had also succeeded in alienating his band, most of whom had left to work for Alice Cooper. Only Michael Fonfara and Doug Yule remained. Lou belatedly realized that he needed a full band for a European tour starting in February. He rang around for suggestions and was referred to the Everyman Band, an obscure jazz-fusion group in upstate New York, with whom he would enjoy a surprisingly long association, the core members staying with him until 1980. 'My name's Lou Reed, you probably haven't heard of me,' he introduced himself over the phone to Larry Packer, who played violin and guitar with the group. The other members were saxophonist Marty Fogel, bass guitarist Bruce Yaw and Michael Suchorsky on drums. Lou invited them into the city for a jam session with himself, Fonfara and Yule. The Everyman Band were country boys with long hair and beards, who dressed in jeans, plaid shirts and work boots. Lou wore a transparent plastic suit the day they met in Manhattan, which was the first indication that they were not dealing with 'a normal person', as Bruce Yaw observes. Still, they played well together. After the briefest of rehearsals, the musicians agreed to accompany Lou to Italy. It was to be a truly farcical tour, worthy of _Spinal Tap_ , as if directed by Fellini.
They arrived at a chaotic Leonardo da Vinci airport during a baggage handlers' strike, part of a wider political malaise gripping the country, where the far left was pitted against the far right in a series of protests and riots that lent an element of anarchy to the tour. Lou was driven into Rome with Rachel and his booking agent Bob Ringe. 'Let me see that,' he said during the journey, taking Bob's tape recorder. Lou played with the machine, decided he didn't like it and tossed it out of the car window. 'I'm looking at him like, "What the fuck!" [But] I'm keeping my [cool],' says Bob. They were still in transit fifteen minutes later when Lou complained of a headache and asked to borrow his agent's sunglasses. 'These suck,' he said after trying on the shades. He threw them out the window, too. Now Bob was angry.
When they arrived at the Ambasciatori Palace Hotel, Lou wanted lunch. The kitchen had closed for the afternoon, but the manager arranged for a simple meal to be served in the dining room. 'They bring out pasta and salad and bread,' says Bob. 'They put down this bowl of pasta. [Lou] takes one bite. He says, "This fucking sucks." He picks up the plate. He stands up and throws it against the fucking wall. And it splatters.'
Bob lost it. 'You motherfucker,' he said, grabbing Lou. 'I'm going to kill you!'
'Oh Bob, I'm sorry,' Lou apologized. When it came down to it, he was not a fighter. Bob told Rachel to take Lou upstairs while he said sorry to the manager.
Lou then phoned down to the front desk to say that he needed a doctor. He told the doctor who came to his room that he was depressed and needed amphetamines. The doctor prescribed pills, but not the pills Lou wanted. Lou pulled out a pharmacological reference book, which he now carried with him on tour, and pointed to the precise drug he required – a strong amphetamine. When the doctor demurred, Lou became stroppy and asked to see another doctor. This became his routine on tour in Europe. He would demand drugs from local doctors, whom he contacted via his hotel, bullying them until they gave him what he wanted. A prescription was important, because it allowed him to take his drugs across borders. Lou used the prescription medication to get high before going on stage. 'It would throw him into a depression when he didn't have it, and it would make him a little bit over-stimulated when he did. For the most part, it maintained him, except for the extremes that went both ways,' explains Michael Fonfara, who became closer to Lou than anyone else in the band. 'I didn't see it as being detrimental to his performances or his writing, which I thought was fairly brilliant... He didn't seem to suffer.' Others disagree, saying that Lou's behaviour was obviously affected by his drug use.
Strike action in Rome meant that the first show of the Italian tour had to be postponed. So Lou went north to play Turin and Milan. The latter concert was cut short when the audience rioted, while the effect of prescribed medication on Lou was clear to at least one band member. Backstage before the gigs, Lou was like a zombie, says Larry Packer. 'He would be on the side of the stage, and his jaw muscles would be involuntarily contracting. His jaw clenched. And he'd have his fists clenched. They would unroll his fingers and get the microphone in his hand, and let his fingers clench around the microphone... then they would more or less pick him up bodily and place him where the light would hit him, and the show would start.' As the drugs kicked in, Lou became animated.
While they were in Turin, they were invited to a party hosted by the Agnelli family, the owners of Fiat. It was an extravagant affair in a mansion, reminiscent of the castle scene in _La Dolce Vita_ featuring Lou's old friend Nico. During the evening he picked a fight with his fiddle player. 'Lou Reed, all of a sudden, pulled a knife on me – a switchblade. This was outside the [Agnelli] house,' says Larry. 'He challenged me to this knife fight. I was unarmed. I looked at him and said, "You fucking asshole!" and turned my back on him and walked away.' This wasn't an isolated incident. During his relationship with Rachel, Lou became increasingly confrontational, threatening various people with knives, even guns. As a result he found himself in some dangerous situations. Larry claims that Lou drove one member of the tour entourage so crazy in Europe that he threatened to shoot the star. 'Bruce [Yaw] and I talked him out of it.'
After these difficult northern shows the tour party returned to Rome to play the Palazzo dello Sport, an indoor arena holding seven thousand people. There was a press conference beforehand, at which a reporter asked Lou why he had come to the eternal city. 'I came to Rome because I want to fuck the Pope,' he replied, an outrageous statement that heightened tension around the gig, which turned into another full-blown riot. When Lou arrived at the Palazzo on 15 February 1975, a policeman emerged in body armour to welcome him. ' _Buona sera!_ ' said the officer politely, having removed his helmet. 'The crowd has already rushed to the stage and overturned the equipment of the support act.' By the time Lou came on, the arena was in uproar. Gatecrashers had got in, claiming the show should be free, and they had commandeered the bar. Aside from Lou's remarks about the Pope, there was a political dimension to the riot. Many of the gatecrashers identified themselves as communists, and they were opposed to the capitalist promoters, whom they characterized as fascists. The fact that Lou dressed in black, and was evidently in Italy to make money out of them, was enough for these radicals to condemn him as a fascist, too. He and the band were pelted with missiles as they began the opening number, 'Sweet Jane'. Larry was hit by a water bomb. 'I looked around and there were 150 guys wearing bandanas throwing things at us.' They fled the stage as the police let off tear gas, Bruce Yaw using his bass guitar as a club to clear a path to the dressing rooms. Rachel came into his own in the melee. 'Rachel was a street guy. He was a really good street fighter. So he was a great guy to have around,' says Bruce, who saw Rachel defend Lou with kicks and punches as they fought their way out of the hall.
Oddly, this chaotic show helped make Lou a big star in Italy. It created invaluable publicity, bringing his music to the attention of young people who might not otherwise have heard of him. Many were excited by his irreverent attitude to life, including religion. His louche, hedonistic image had enduring appeal in the Catholic nations of southern Europe, including Spain and Portugal, but particularly in Italy, where he remained a major draw for the rest of his career. 'That concert that went really wrong gave him a lot of publicity and [attracted] people like me who didn't want to be classified as the reds or the blacks,' explains Charlie Rapino, an Italian fan who later got to know Lou in the record business. 'I think in Italy they like people who talk about sin, and they sin as well, being a Catholic country. And I think that appealed to us a lot [growing] up with a Catholic education.'
Lou crashed from his amphetamine high after the riot. '[He] was doing so much amphetamine that he would go for days and days speeding, and then he would consume massive amounts of Valium, bottles of Valium, and alcohol, to come down,' explains Bruce Yaw. Barbara Fulk, Lou's new road manager, confirms that she gave him Valium to calm him down.
Strike action and crowd trouble were making the Italian tour a nightmare, and a scheduled concert in Bologna was cancelled. Instead, the party flew to Zurich, where Lou was due to perform on 20 February. He checked into the local Novapark Hotel, complaining about the way the tour was being managed. Dennis Katz had not accompanied him to Europe. Deciding that he might need a new manager, Lou put in a call to New York to talk to Tony Defries, who until recently had managed David Bowie. Defries wanted to manage Lou. He was so excited to receive Lou's call that he caught the first plane to Switzerland, hoping to sign him. Barbara Fulk called Dennis to warn him that his rival was on his way, so Dennis also got on a plane.
Defries arrived in Zurich first, but Lou refused to meet him. 'I had to go down to the lobby and tell him Lou decided he did not want to see him [after all],' says Barbara. Defries turned around and went back to America, washing his hands of Lou. Dennis arrived on the next plane from New York. He was admitted to Lou's hotel suite and emerged with his signature (typically loosely drawn with an egocentrically large 'L' and 'R') on a new management contract. Though Lou clearly thought he had been smart in playing Katz off against Defries, he would rue the day he signed this document, which extended his association with Katz for a further three years, during which time they fell out spectacularly.
That afternoon before his show, Lou asked to see a doctor. When the first doctor didn't give him the drugs he required, he asked to see another. Dr Rudolf Breitenmoser came to the suite a couple of hours before Lou was due on stage. 'Upon my arrival I was led to a darkened room (a luxury suite). I found four persons there,' the doctor explained. Lou was sitting on the sofa, drumming his fingers nervously. 'I approached him carefully and found the patient to be excited and stimulated. I immediately noted that the patient in question was someone whom I suspected to be addicted to amphetamines... I found out that another physician had already been called before me, but that he had left the scene immediately when he heard he was expected to prescribe amphetamine. As for myself, the question arose before me whether I would further help or harm the patient by prescribing amphetamines.' The doctor took Lou's blood pressure and asked him how he was feeling. 'The patient explained to me that he wanted to have me prescribe Dexedrine. To attain a maximum performance level, he was taking amphetamines before starting a concert.' Despite concluding that Lou was addicted to amphetamines, and behaving irrationally, as well as being underweight, the doctor prescribed the drug Lou wanted, advising him to use as little as possible. This incident was later picked over in a court battle between Lou and Dennis Katz, during which Lou's lawyers argued that he had been 'under extreme emotional stress' on tour, and thereby not in a fit state to understand what he was signing when he extended his contract with Dennis. They asked Dr Breitenmoser to give a witness statement to help prove that he was in bad shape, admitting that, 'Reed's judgement was at that time... impaired by the habitual use of alcohol and certain medicinal drugs.'
There was further drama as the tour continued through Germany, France, Denmark and Sweden. Lou fired Larry Packer after a show in Lund and faced a band revolt in Paris when the other musicians discovered that their salaries weren't being paid. There were also issues with Lou and Rachel. 'I used to get calls from hotel managers that they would tear up these rooms,' says Bob Ringe. 'They were so fucking blitzed they would rip the carpet off the floor, the wallpaper off the walls. These rooms were trashed... They were shooting speed, and the crash off of that is nasty. You didn't want to be around when he was coming off drugs – it got insane.' The tour finished in London, where a waxwork of Lou had recently been erected at Madame Tussaud's, showing how famous he still was in Britain at this time.
Back home, on 1 June 1975, a judge at the Supreme Court of New York ruled in favour of Lou's former manager, Fred Heller, ordering Lou to pay him $174,140 commission on his earnings for breach of contract. This was a major blow, putting Lou in what his lawyers described as 'substantial financial jeopardy'. He could have settled the case earlier, for as little as $30,000, and had been urged to do so by several advisers, including his accountant father, but he'd ignored their advice and now he faced a huge bill. Despite his success over the past few years, Lou simply didn't have the cash to pay Heller. It was probably for this reason that he promptly decamped to Toronto. Studio time was booked in the city, with the hope that Lou would resume work on _Coney Island Baby_ , but he spent most of his stay hiding out with Rachel in the Continental Hyatt House. When he returned to New York later that month, Lou quietly moved apartments. A private detective named Arthur Moss called at the Yorkgate building on East 63rd Street on 16 June to serve Lou with a subpoena and restraining order in the Heller case. The building superintendent told him that Reed had moved to the building next door, the Royal York, adding that he had been seen coming and going 'dressed in various wigs and in women's clothing'. The superintendent may well have mistaken Rachel for Lou, but the possibility remains that he was wearing drag to avoid his creditor.
Now Lou's problems began to mount. At the same time that he was trying to deal with the Heller judgment, RCA were demanding a new album. Under the terms of his contract, Lou was obliged to deliver two albums a year, but he wasn't writing enough songs. Work on _Coney Island Baby_ had stalled. RCA had just put out a live album, _Lou Reed Live_ , but they still wanted a second LP for 1975. So Lou created one of the most extraordinary records ever made by a mainstream rock artist. Firstly, he asked Michael Fonfara to help him carry amplifiers and guitars up to his new, ninth-floor apartment. 'I helped him carry the amplifiers up to that room, and made sure that they were all turned up to eleven [ _sic_ ] and were feeding back – it was a pile of amplifiers all in one room that were strung up together, and one guitar to activate them with feedback. It was howling like the Devil. We had to leave the room and let the recording do the rest,' says Fonfara. 'RCA was invoking the contract, which stated they had to have a new record, and he wasn't ready, because he hadn't written enough songs yet. So he argued with them, and they were having a lot of fights. They said, "We are going to demand by the terms of your contract that you produce a[nother] record for us this year." I remember talking to him one night and he was saying, "Fuck them. If they want to play hardball, I'll show them." So he made an album of just nothing but feedback... RCA were aghast...They didn't know what to do...' Fonfara was also bemused. 'It was almost impossible for me to listen to. It was just noise, really.'
Devoid of vocals, beat or melody, _Metal Machine Music_ was four sides of squealing, howling feedback that created a feeling akin to a migraine (though some people claimed to like it). Lou insisted that each vinyl side should be precisely sixteen minutes long, except side four, which repeated until the needle was lifted out of the groove, a torturous detail he was especially proud of. Over the years, he spoke about this infamous work in conflicting terms, defending _Metal Machine Music_ as a conceptual piece in the tradition of La Monte Young; other times stating that it was a joke; also admitting that drugs were a factor. 'I was serious about it,' he once said. 'I was also really stoned.' His manager didn't take it seriously. 'I thought it was a joke, and so did Lou when he made it,' scoffs Dennis Katz. 'He liked playing mind games.'
_Metal Machine Music_ is perhaps best described as an artistic tantrum. Lou was prone to such behaviour. 'One of the things he would do, he would just blow up his career for a while,' observes musician friend Scott Kempner. 'How he had the courage to do that, I couldn't tell you.' Surprisingly, RCA agreed to release this tantrum, showing a willingness to accommodate a difficult artist who had made money for them in the past. Lou wrote in the liner notes, 'No one I know has listened to it all the way through, including myself... Most of you won't like this, and I don't blame you at all. It is not meant for you... I love and adore it...' It was, he argued, an antidote to _Sally Can't Dance_ and _Rock 'n' Roll Animal_ , the success of which, ironically, made such self-indulgence possible. 'This is not meant for the market,' he stated haughtily, signing off with towering speed freak arrogance: 'My week beats your year.'
He was on tour when the record hit the shops in July 1975. Reviews ranged from bafflement to qualified praise. Writing in _The New York Times_ , a sympathetic John Rockwell gave Lou credit for trying to do something different but wondered if he had 'finally tripped over the line between outrageousness and sheer self-destructive indulgence'. Lester Bangs was more forthright in _Creem_ , writing that, 'what we are witnessing here is commercial suicide'. Initial sales were modest – 25,000 copies sold compared to 163,000 for _Lou Reed Live_ – but as _Metal Machine Music_ cost next to nothing to make, this wasn't a calamity. 'He made it in his bedroom and it probably cost nothing much to press a few copies and ship them,' explains Bruce Somerfield. 'It didn't make money, but I can't imagine it lost money.' It is more difficult to evaluate the harm Lou did to his career in terms of alienating RCA and his audience. Members of the public who bought _Metal Machine Music_ and found it unlistenable (many returned it to the store) were less likely to buy another Reed album, and Lou would not survive much longer as an RCA artist.
For the time being everything continued as normal. When he arrived in Tokyo on tour that summer he was invited to the local RCA office for a party in celebration of the album, which was squealing in the background while the champagne was served. The Japanese executives bowed to Lou and presented him with little electronic gifts, including the new pocket calculator, which pleased him, telling him how honoured they were to meet him, and how brilliant _Metal Machine Music_ was. 'The head of RCA Japan was there, who really just knew exactly what Lou was trying to do in terms of manipulating [them] and getting toys and being a bad boy. Just handled Lou really, really well,' recalls Bruce Yaw. 'It was a great scene. This absurd [music] is going on and people are pretending, "This is marvellous, fantastic art, the latest in electronic music..." sycophantic genuflecting.' Lou was even persuaded that he might have a Japanese hit on his hands, which meant adapting his show. 'Lou said, "We're going to have to do it in the set,"' recalls Michael Fonfara. 'So we set up a whole bunch of amplifiers and at one point he hit a guitar, and pounded it on the ground until it was just screaming, and set it down, and we'd all walk off stage while fifteen minutes of this went on. The audience were [rocking] their heads back and forth. I don't know what they liked in it.'
The tour moved on to Australasia, where Lou discovered that he was broke. The first sign of trouble came when a cheque written on behalf of Transformer Enterprises bounced. Lou claimed that this is when he found out that he had lost the Heller case and was in contempt of court over his failure to pay Heller compensation. As we have seen, it seems that he had in fact known about the court judgment for some time, but it was in New Zealand that he was forced to confront his insolvency. Characteristically, he blamed his advisers for mismanaging his affairs rather than taking, or at least sharing, responsibility for the mess he found himself in. 'I found out, in Australia [ _sic_ ], on a tour, that every single royalty I'd ever gotten had been stolen!' he later said, with a good deal of hyperbole. 'And that I hadn't had taxes reported for the past five years, that I was in contempt of court, there was a warrant out, I had no money in the bank, no apartment, and I had been taken for a ride by these people! And had about fifteen dollars in my pocket!'
Lou used one of the new pocket calculators he picked up in Japan to work out that, if he owed Heller $174,140 in commission for the period June 1972 to October 1974, as the court had ruled, he must have grossed over a million dollars in that time. 'Lou was astounded,' says Bruce Yaw, who helped him do his sums. 'Where did the money go? Because he didn't have it.' The tour, which had been scheduled to continue in Europe, was cancelled. Lou parted company with Barbara Fulk, in whom he lost confidence as an employee of Dennis Katz, and scraped together just enough cash to fly himself and his band home.
As soon as he got back to New York, he sacked Dennis. As he looked into his affairs more closely, Lou discovered that, in addition to the money he owed Heller, he owed $128,000 in taxes, bank charges and debts to sundry creditors, including hotels, credit-card companies and travel agents. American Express and the William Morris Agency were suing him, and his subscription to the Musicians' Union hadn't been paid. Lou identified what seemed to be a discrepancy between what he had earned over the past few years and what he had received in income, leading him to allege in court that Dennis and other advisers had misappropriated at least $500,000. Dennis, who denied the allegations, felt equally aggrieved. 'Dennis Katz felt betrayed by Lou,' explains Elliott Murphy, who had become a client of Dennis's after Lou introduced them, at a time when Lou couldn't say enough good things about his manager. 'He felt any damage to Lou's career had been done by himself, and Dennis felt he was the guy who got Lou back on his feet.' So bitter did feelings run that Dennis rid himself of anything that reminded him of his former client, giving Elliott a proof copy of Andy Warhol's book _The Philosophy of Andy Warhol (From A to B and Back Again)_ , with a handwritten inscription by the artist. 'Dennis said, "Do you want this? I don't want anything to do with Lou." And he gave it to me.'*
Lou was so broke that he had to leave his Upper East Side apartment and move into the Gramercy Park Hotel, a funky old joint on Lexington Avenue. He was in his room on 30 September 1975 when he was served with a court summons. Like Heller before him, Dennis Katz was suing him for breaking their management agreement – the one Lou had only recently extended – and depriving him of commission. He wanted $200,000, increasing Lou's liabilities to half a million. Faced with financial ruin, Lou struck an emergency deal with RCA that enabled him to keep working. He mortgaged his song catalogue (essentially getting a loan from RCA on the security of the copyright of his songs) to raise cash to pay his most pressing debts, including settling with Heller (but holding out against Dennis, whom he would fight in court). RCA further agreed to cover his bill at the Gramercy Park while he completed _Coney Island Baby_.
Although he agreed to resume work on the album, Lou refused to have anything further to do with Steve Katz, whom he officially fired as his producer on 13 October, thereby breaking yet another contract, and decided to produce the album himself. Steve responded by suing Lou, meaning that he was now in litigation with both Katz brothers. Steve ripped into Lou in a sworn affidavit, claiming credit for rescuing his career after the 'disaster' of _Berlin_. 'Despite the success I brought him... Reed is now following a course destined to destroy his career completely,' he said, describing _Metal Machine Music_ as 'a double album of the most obnoxious, unpleasant sounds imaginable'. He addressed the issue of Lou's drug use with stunning frankness. 'Lou Reed's recordings are an expression of his confusion, self-destruction and immersion in the drug culture. His whole being revolves around "speed"... to which he is addicted...' He also characterized the artist as feckless with money. 'Lou Reed is financially irresponsible. He spends his income quickly and recklessly, mostly on costly drugs. Notwithstanding the sums he has earned in the past, and whatever he might earn in the future, I know of no assets of value anywhere...'
In his defence, Lou said he had only agreed to work with Steve as a favour to his brother. He hated his production on _Sally Can't Dance_ , and spoke up for _Metal Machine Music._ 'Of course, as an electronic album, it would appeal to a limited audience. This was understood by both RCA and myself before its release. It is in the context of an experimental electronic music album that it must be evaluated. Obviously, it is not a rock 'n' roll album.' He responded to allegations about his lifestyle by asking rhetorically: 'If RCA felt I was as incompetent, drug crazed and irresponsible as Mr Katz claims, one wonders why RCA would allow me to produce myself and to even get into the recording studio.' This was a reference to his renewed work on _Coney Island Baby_ , though he was in fact receiving professional help with the record, while the management of RCA were rapidly losing patience with Lou.
Record engineer Godfrey Diamond was a type of person Lou gravitated to in his maturity: a young, good-looking man of talent but limited experience whom he thought he could control. The first phase of working with such a protégé – of which there were several – was bonding. When Lou first met Godfrey, a twenty-two-year-old staff engineer at Media Sound in New York, they hung out together, becoming friends. Lou visited Godfrey's family home and invited him over to Room 605 at the Gramercy Park Hotel, ostensibly to discuss his album, which is when Godfrey stepped through the looking glass into Louland.
'I went to the Gramercy. This was a trip, man. He had these people hanging around, and they were so strange to me, and I knew some weird people. These people were this beautiful array of homeless, degenerate-looking, brilliant, weird as shit [characters]. I walk in and he's got a camera on me right away, doing an Andy thing. He said, "I'm recording everybody who comes in." It's like a party. And I think I'm coming to [work]. They are drinking and doing blow. I start talking to these people. One guy is a taxi driver. He's known Lou for ten years, they are old friends and he writes books. One after another – characters. You had to be there to see these people – one step away from homeless on the street, half of them.' Rachel was there, too. 'Rachel was gorgeous, for a guy. Jesus! I guess he was a transvestite. God, he was just gorgeous looking... like a hot-looking chick. One of those guy-girl voices, very feminine, very sweet.' The party went on all night. When Godfrey looked at his watch it was 6 a.m. He had to be at work at nine. He was exhausted. Lou offered him a line of crystal meth as a pick-me-up. 'That was kind of a Lou thing. That was why he would go into the bathroom and disappear for long periods of time.'
Lou and Godfrey also went clubbing together, frequenting hip joints like CBGB in the Bowery, where the up-and-coming acts of the American new wave were playing, bands like the Ramones (managed by Danny Fields) and Talking Heads. One night Lou introduced his engineer to Holly Woodlawn, with whom he had become friendly since 'Walk on the Wild Side' made them both famous. Holly was singing Lou's songs in cabaret. When the bar closed he gave Holly and Godfrey a lift home by taxi. Godfrey had met a girl during the evening whom he wanted to bring along, but Lou became jealous. 'I guess he had a possessive streak for the people that were close to him,' says Godfrey. 'I said, "Lou, can I give her a ride?" It's like 4 a.m., we are all coming out of the club to go uptown... I didn't know what they were doing, but I knew where I was going. I was going to bring this chick home. She was really cute. So we are heading uptown. He drops Holly off and then he goes, "She can get out here, too."' Godfrey squeaked in protest, 'Lou, that's not cool! I'm hanging with her.' But Lou was adamant. They had to find another cab.
Godfrey and Lou recorded _Coney Island Baby_ at Media Sound, starting all over again after the aborted Steve Katz sessions. Lou used his road band, swearing the musicians to secrecy for fear that Steve's lawyers would try for an injunction. Godfrey told Lou that he loved his new songs, and the way he played them as demos, strumming an electric guitar that wasn't plugged in. He wanted him to play guitar on the album. 'Nobody else ever lets me play guitar on my records,' said Lou in surprise. Throughout his solo career to date he'd relied on hired hands to play guitar, having been told that he wasn't a good enough musician.
'I have to have your guitar on the record. You are the spirit of these songs.'
This was a turning point for Lou, who began to play much more guitar, both in the studio and on stage, becoming obsessive about it. Nevertheless, a more skilful guitarist, Bob Kulick, was brought in to put the finishing touches to _Coney Island Baby_. 'He would explain to me what the song was about and then just send me [into the studio to do an overdub],' says Kulick, who saw that Lou's relationship with Rachel was the mainspring of the project. 'I couldn't figure out what the motivation was until I heard a rough of "Coney Island Baby" [where he sings] "Man, I'd give it all up for you" – the end of the song, the plaintive line. That was for his mate – Ricky/Rachel, whatever it was on any given day... a guy who looked like a girl... I remember how he would look me in the face and very seriously tell me about these songs [and] I finally figured out that this was all about this he/she person. That's what's going on here.'
Lou was concerned that the dedication to Rachel sounded too sentimental, but Godfrey encouraged him to keep it. Other songs on the album can be interpreted as relating to Rachel, including 'Charley's Girl' (sung to a 'queen'), though Lou's friend Nelson Slater has another reading for at least part of the song. He says that the line in which Lou mentioned a girl named Sharon, saying he'd punch her in the face if he saw her again, referred to his brief relationship with Nelson's stepsister, Sharon, whose father was named Charley, and who 'ran [Lou] around the block a few times'. In any event, the lyric was another example of violence towards women. 'Kicks' was yet another song about violence, from the point of view of a character who is turned on by watching a murder. The overdubbed voices were recorded at a studio party at Media Sound, Lou and his agent Bob Ringe being among those who can be heard chatting at the beginning and end of the track.
These were powerful songs sung with care, in comparison to the Katz sessions. Lou knew that his career was on the line, so he decided to behave. 'Everybody told me, "Watch out, he's going to drive you crazy in the studio. He's a nut. He's moody. He's going to not show up sometimes,"' says Godfrey. 'Every day he was there on time and ready to go. I thought he was a dream.' That wasn't to say that Lou behaved normally. During the mixing sessions Godfrey watched Lou draw tiny circles with a draftsman's pen in his notebook – classic speed-freak behaviour. 'And once in a while he'd write a little lyric. And then more circles – tiny, tiny circles.'
_Coney Island Baby_ divided opinion when it was released in January 1976. Dave Marsh of _Rolling Stone_ described it as Lou's best solo album, which was too generous. It was less well received in the UK, where Lou had started to fall out of fashion with hip music writers, who were much more interested in new wave artists, and the nascent punk rock scene. In his _New Musical Express_ review, Charles Shaar Murray charted a downward trajectory from the Velvet Underground, when Lou 'produced his finest work', through the felicitous collaboration with David Bowie on _Transformer_ , to the 'useless and non-functional' _Coney Island Baby._ 'The songs sound like sophomoric Reed pastiches.' He concluded that the artist was all washed up. 'Lou Reed's revolutionary days are long gone, and the years of his farthood lie heavy upon him. He's a walking antique. He's got nothing to offer but the remnants of a discarded attitude and the crumbs of what once seemed to be a major talent.' As Marsh seemed overly kind, Murray was too harsh. The truth was in between. The public seemed to quite like the record, in as much as they went out and bought it in reasonable numbers. Lou telephoned Godfrey excitedly to tell him when they had entered the charts. 'He called [and said], "It's entering at sixty-five with a bullet!" He was so happy.' The album peaked at forty-one in the US.
Encouraged by this modest success, Lou checked out of his hotel and rented a little apartment on East 52nd Street, which he shared with Rachel, as he began to think about his next move. He was writing poetry again, some of which was published. The American Literary Council of Little Magazines deemed his poem 'The Slide', about homophobia, good enough for an award. He also began to show an interest in mentoring other artists. One such project was an album for Nelson Slater, who had signed with RCA. Lou asked Godfrey to help him record Nelson's debut, _Wild Angel_. 'This is where it gets a little sad,' says Godfrey. They had almost finished the LP when Godfrey mentioned that he had a vacation booked, so he couldn't stay to the end. 'I could tell that he was pissed that I had to leave. He didn't want me to go with my girlfriend on vacation.' This was the third and final stage of a typical Lou friendship with a young collaborator: when he discovered that his protégé had a mind of his own, he turned nasty. Lou remixed _Wild Angel_ by himself, taking Godfrey's name off the record. 'It didn't even have my name on it, and I did the entire record up to mixing half of the songs!' says Godfrey, who was so angry when he saw the result that he tossed his copy out of a window. 'It was "Fuck you!"' Nelson was no less disappointed, claiming that Lou ruined his album. In this way, Lou alienated two friends.
His friendship with singer-songwriter Elliott Murphy followed a similar trajectory. After Elliott wrote the liner notes for _1969_ , Lou helped him move from Polydor to RCA and introduced him to his manager at the time, Dennis Katz. 'Lou was going to produce my first album on RCA. That was the plan. But a couple of things happened that stopped that,' explains Elliott. 'I think he decided I was going to be his protégé. I think he wanted to do for me what David Bowie had done for him. He was so supportive, singing my praises to everyone. Without his support, I doubt I would have been signed to RCA, or Dennis Katz would have become involved. But he was into speed and he had a very hyper personality. I'll never forget I was at his house one night working to four or five in the morning on my songs for this album he was going to produce [ _Lost Generation_ ]. I said, "Lou, I've got to sleep a little bit." I went home. Three hours later the phone rang. He said, "Are you ready to start again?" I was not. And he didn't like that.' Lou didn't produce _Lost Generation_ , and the friendship cooled. They bumped into each other years later at the Rock 'n' Roll Hall of Fame annual dinner. 'I was sitting there with Bruce [Springsteen] and his entourage. I noticed Lou was at another table. I went over to say hello and he was very cold.'
Lou with Elliott Murphy, Lou's apartment on E 63rd St.
By the summer of 1976 a new generation of artists was emerging on the New York club scene who looked up to Lou because of what he had achieved in the Velvet Underground, artists such as Talking Heads and Patti Smith, whom Lou was delighted to see performing his old VU song 'We're Gonna Have a Real Good Time Together' at CBGB. A further connection was formed when John Cale produced Smith's album, _Horses_ , and Lou appeared on stage with Cale, Smith and David Byrne at the Ocean Club. Lou was so enthused about Byrne's song 'Psycho Killer', a song of alienation with a Reedian attitude, that he took a demo into RCA, offering to produce his band. RCA passed, and Talking Heads thought better of accepting Lou's patronage. 'David Byrne calls me later,' says Jonny Podell, who became Lou's new manager that year. 'He says, "Listen, this is awkward. It's great to meet you [and Lou, but] we don't want Lou to produce us. We don't know how to tell him."'
Addicted to drugs, Lou had become a self-destructive artist who alienated friends and colleagues, including executives at RCA, where his contract was due to expire in 1976. _Metal Machine Music_ had put a strain on the relationship between artist and label, while Lou was never the easiest person to work with. His record sales were in decline and he had little rapport with management. 'I might have been the most highly ranked individual Lou would talk to at the label, and I wasn't very high up on the food chain,' says Bruce Somerfield. 'After Dennis left, the kind of people they brought in were not people that [understood Lou].' His erratic behaviour didn't help him win new friends at the office. 'When he was on his game, he had a good sense of humour. At other times, he was very bitter and difficult to get along with.' The drugs had also taken a toll. Lou didn't look well. 'If I would have picked up a newspaper at any point during the time I knew him and read that Lou Reed was found dead somewhere it wouldn't have been a surprise.'
Luckily, another record company wanted him. Clive Davis was ten years older than Lou, a lawyer by background. He had a similar biography in other respects, being a fellow Jew from Brooklyn, and bisexual, though he didn't come to this realization about himself until late in life. He was also one of the biggest names in the American record business. By the mid-1960s, Clive was head of Columbia Records, where he gained a reputation for having 'golden ears' – which is to say that he knew a hit when he heard it. A successful career was derailed in 1973 when he was fired by CBS for allegedly fiddling his expenses, the details of which he disputed, though he was found guilty on a related tax charge and had his licence to practise law suspended. The scandal didn't finish him, however. He launched a new label, Arista Records, to which he signed singer-songwriters including Patti Smith and Ray Davies of the Kinks, and he began to court Lou as a prospective Arista artist, inviting him and Rachel to his apartment to watch the Macy's Thanksgiving Day Parade, and going on a bar crawl with the singer. 'I accompanied him after midnight to three or four clubs... eye-opening.'
Although Lou and RCA were no longer on the best of terms, Lou was still contracted to the company and its publishing subsidiary, Dunbar Music. Executives were willing to let him go and release him from the mortgage he had taken out on his song catalogue, on the condition that Dunbar retain all income from his RCA songs in North America, publishing and royalties, plus 20 per cent of foreign income, until his advances were earned out. The terms of this 'settlement agreement' show that RCA were as keen to get rid of Lou as he was to leave them. It was an amicable divorce on reasonable terms, though Lou had to relinquish income from his most famous songs, including 'Walk on the Wild Side', to win his freedom. He signed the papers on 19 August 1976. Four days later he signed a new five-album deal with Arista.
Clive Davis got Lou cheap. 'There was nothing about that deal that was expensive, other than the recording [costs]. This was not a major money situation,' explains the mogul, who says that he didn't sign Lou because he expected him to make big money for Arista, though of course he hoped he would, but because he was admired by up-and-coming artists whom he wanted to attract to the company. 'He brought prestige to the label.' So began a new chapter in a tumultuous career.
* Murphy later returned the book to Lou.
## X
## The Arista Years
### 1976–80
THE FOUR YEARS that Lou spent as an Arista recording artist were subtly different to his previous years on RCA. He was an older man, thirty-four at the start of his new contract, and even less inclined to compromise. As a result his Arista albums were more personal. Yet there was a diminution. Past his commercial peak, he found himself on a smaller label with limited budgets, playing to a declining audience. His use of drugs and alcohol peaked at this time, with detrimental consequences for his work, which often sounded scrappy. His critics were harsh. And even when he pulled himself together to make better work, he found that few people were still listening.
Within days of signing the new deal, he took his band into the Record Plant to make _Rock 'n' Roll Heart_ , his first Arista album and the first of three records he made with studio engineer Corky Stasiak, an affable Californian with a reputation for being patient with difficult artists. Few were more difficult than Lou. 'He was hard to work with. A lot of artists are. You have to understand when you are working with an artist that this is their career on the line. What they do with you is what they present to the public, and it's very stressful,' explains Corky, who watched Lou pass through three distinct personal phases over the next six years. 'The first one was manic and drug-addicted.'
Lou's chain-smoking, and the fact that his hands shook involuntarily, were outward signs of his debauched state of health by 1976. He was also more neurotic than ever, and extremely moody. 'He put us through the twist – all of his sessions were emotional. He would be upset about something, and then if we got a great take he would be so happy [as if he was] bipolar,' says Corky. Lou was told shortly after this that he did indeed have bipolar disorder, which would explain a lot of his behaviour. 'If one little thing went wrong it was, "Uh-oh!"'
Lou needed a good album to relaunch his recording career, but he was struggling to write songs. So he reached back into the barrel of Velvet Underground leftovers. Unfortunately, the barrel was almost empty. He found two old songs for _Rock 'n' Roll Heart_ , neither particularly strong: the mildly comical 'A Sheltered Life' and 'Follow the Leader', the new recording of which had the jittery quality of an amphetamine jag, punctuated by Marty Fogel's honking saxophone. Lou let his musicians follow their musical instincts on _Rock 'n' Roll Heart_ , and their jazz roots showed on what was, ironically, one of his least rock 'n' roll albums. Most of the songs were lyrically slight. 'I Believe in Love' and 'Banging on My Drum' featured particularly banal words. One track, 'Chooser and the Chosen One', was an instrumental, clearly to fill space. The whole album sounded careless and insubstantial, belying the fact that Lou was anxious about getting it right, phoning his engineer up to ten times a day with questions and suggestions. 'I had to shut my phone off because he would call at eight o'clock in the morning and wake me up after we'd been [in the studio] to two or four in the morning. He had been up all night listening to the rough mixes.' As soon as Corky reconnected his phone, it would ring.
'Corky, Lou.'
'Hey, Lewis, how you doing?' replied his engineer as brightly as possible, copying Rachel's habit of addressing Lou by his given name (which Rachel pronounced with a lisp: 'Oh, Lewith!').
'I was listening to the rough mixes, and it could use a little more bass.'
So another difficult day began.
When it came time for playback, Lou disappeared into the bathroom to take speed. He emerged with eyes as big as saucers and yelled, 'OK – play!' The songs sounded great to Lou, stoned, but _Rock 'n' Roll Heart_ would be regarded as one of his weakest albums. The only song of substance was 'Temporary Thing', while Clive Davis thought the title song had hit potential if Lou would develop it, but he rejected his advice. 'I told Lou, "You know, you could have a radio record here." But he really felt the work was complete, and he was entitled to feel that.' It was the start of a frustrating relationship.
To help launch the album, Lou hired Jonny Podell as his new manager. A hip guy who spoke and dressed like a rock star, Jonny represented Alice Cooper and the Allman Brothers, among other high-profile and highly profitable acts. He wasn't particularly into Lou's music, and he didn't buy into his image. 'I don't think Lou was all real. He was [an] accountant's son, Jewish middle class from [Freeport] who decided to be a rock-star shock-style: I'm gay, I'm a junkie, I'm everything a rock star should be, and now I've got a boyfriend/girlfriend to keep everybody off balance at all times. I saw through the disguise.' He was also aware that Lou had fallen out with his previous managers, some of whom he hated, which didn't bode well. 'Lou could do hate. Drugs do that – drugs take resentment into hate.'
It was drugs that they had in common. 'Lou, I guess, saw him in me; I, me in him: a drug addict.' Jonny was a coke head at the time; Lou was a speed freak. 'He would go into the bathroom and come out a _totally_ different person, obviously on speed,' says Jonny, who warned his client: 'Lou, with all due respect, I love you, but you're going to kill yourself.' Lou retorted that his drug was healthy. In his own twisted mind, he seemed to truly believe that amphetamine and methamphetamine really were healthy highs; he argued the point constantly. Jonny believes that drugs warped Lou's personality. 'Once you get into drugs heavily, it does become you. Then you become the beast, as I did, and he did.' While Lou had always been a contrarian, he became almost impossible to work with during the height of his drug use in the late 1970s, and was often unpleasant. Jonny found his client to be 'generally unlikeable... not a charmer', citing two examples of his bad behaviour. One evening, Jonny and his wife arranged to have dinner with Lou and Rachel. 'Monica hated Lou, because Lou was the ultimate misogynist... She resented the disrespect.' Likewise, Lou loathed Monica. Before the dinner Jonny warned Lou that he had to be pleasant to his wife, or else. 'We used terms like "I'll stab you, and I'll throw you off a moving train."' They met at Willy's Bar. 'We are there fifteen minutes. Lou makes a crack that Monica has no tits. Monica goes into reactor mode.' She told Lou exactly what she thought of him, got up and left. 'I go to Lou, "You did it. Go catch her and apologize. I'll kill you!" So there's Monica [crossing the road], Lou running after Monica, Rachel running after Lou, and I'm running after everybody. I wish I had that on film!' Another time, Jonny threw a party at his apartment, inviting Lou and the Allman Brothers. Lou promptly insulted the band, and a fight broke out. 'Within fifteen minutes fists started swinging – a fistfight in the house!'
The music of the Allman Brothers may not have been to Lou's taste, but they sold millions of records in the 1970s. Lou's career had always been precarious, and his currency was falling as the decade wore on. 'For me, it was a step down,' says Jonny, whose first management decision was to get his client back on the road promoting _Rock 'n' Roll Heart_ , having persuaded Arista to support a tour. It was Lou's idea to decorate the stage with a bank of televisions on which he would play home movies, recorded on his latest toy, a Betamax video camera. The budget was limited, so Lou trawled the junk shops of Harlem for second-hand TVs, several of which failed to work in concert. 'I thought he was nuts.'
Guitarist Jeffrey Ross helped Lou carry the TVs home. Aged twenty-one, he was Lou's latest protégé. They wrote songs together, including 'Such a Pretty Face', which became 'Wait' on the _Street Hassle_ album. Jeffrey was the inspiration. Flattering though this was, being Lou's golden boy was a mixed blessing. 'In a very strange way, Lou adopted me,' he says. 'I was getting life lessons in _Oberführer_ tones, or treated with casual disdain because I was clearly too young to have a grasp on anything... I think he felt he was mentoring me.' Jeffrey found Rachel more simpatico. 'Rachel was much more straightforward. Lou would say, "I want this," and Rachel would set about making it happen. Rachel was also the voice of reason. Rachel was compassionate.'
Jeffrey hung out with the couple at their new apartment. It was a mark of Lou's drug-fuelled paranoia that he'd moved to a building with a view of Jonny Podell's office, so he could spy on his manager. 'It was a one-bedroom apartment – your typical Upper East Side professional apartment: parquet floor, store-bought furniture,' says Jeffrey. 'He was very fond of Tensor lamps with the really hot bulbs... He would have a bottle of speed pills under the lamp melting down. He would go back into his room and spend a lot of time shooting speed. It's what he did. He would be creative for a while, then he would crash for hours, and I would sit at that apartment with Rachel, sit and play the guitar. Lou would eventually come out, and we would play together, or we wouldn't. That was the same in hotel rooms when we travelled. He did hide out a lot, like anyone with a drug issue.' Lou and Rachel shared the apartment with two dachshunds, Duke and Baron. 'He would sit there with those dogs and kiss them on the mouth,' says Erin Clermont. 'I thought, "He is more affectionate with those dogs than he is with people."' Ross quips, 'Lou kept us all as pets, man.'
When Lou resumed touring in October 1976, after a fourteen-month lay-off, Rachel was his new road manager. 'On this tour, Rachel looked after the money, and kept me in shape, and watched over the road-crew, and it's been great. There's someone hustling around for me that I can totally trust,' he said, explaining Rachel's role. He rewarded his boyfriend with jewellery, including two diamond rings, which, given what happened next, might be seen as engagement rings.
Lou and Rachel, about to cut their cake.
When the tour reached London in April 1977 the couple celebrated the fact that they had been together for three years with a remarkable party at the end of Lou's shows at the New Victoria Theatre. The party was held at Maunkberry's, a gay club in St James's. Rachel commissioned a three-tier cake for the occasion, a wedding cake by any other name, topped with a heart inscribed with the initials 'L' and 'R' and the words 'One layer for each year. Hoping for many more.' Wearing a skirt and high heels, Rachel towered over Lou as they cut the cake, the pair looking like a newly married couple. Then they kissed. 'Basically, it was Lou and Rachel getting married,' says Jeffrey Ross. 'It's an affirmation of their relationship. Whether or not they called it a marriage, I don't know. We weren't thinking of gay marriage at the time [but] in every respect they behaved like a married couple.' Lou's band leader Michael Fonfara concurs. 'She was his wife, for sure.'
The relationship was tempestuous. It was noticed that Rachel's right wrist was bandaged on the evening of the party at Maunkberry's. 'I've got a feeling that they used to have fights,' says photographer Jill Furmanovsky, who took pictures of Lou and Rachel with the cake. 'I don't know if she had cut herself, [or] whether she had got injured.' Both enjoyed drama. 'They adored each other, but they also had these terrible fallings-out,' says Jill's friend Liz Gilmore, who travelled with the couple in Europe. 'They argued, and they fell out, and they would have separate rooms, and then they would get back together again.' Rachel was taken to hospital in Amsterdam. 'I went to hospital with him,' says Liz, 'and he sat up and winked at me like the whole thing had been faked. There was that kind of thing going on.' Meanwhile, Lou was as capricious as a child, refusing to perform until the promoter bought him a leather jacket he wanted, and he was obviously stoned much of the time. He had discovered an ingenious new way of doping himself on tour – he had his drugs prescribed as eye drops. Excessive use of these narcotic drops made him irascible, even violent.
There are numerous examples of Lou threatening people during this period, often with knives, showing that violence was not just a theme in his songwriting. Jeffrey Ross says that Lou pulled a knife on an Arista representative in London. 'He was playing with switchblades all the time. He had this fascination with knives. He pulled out a knife and held it to the neck of this guy.' In addition, he threatened to stab at least two journalists. 'If I wanted to get you, I'd go behind your back and stab you,' he told writer Josh Alan Friedman in New York. Somewhat absurdly, he threatened another journalist with a butter knife during a breakfast interview in the Netherlands. This was the behaviour of an angry and unstable person. 'Lou was in a constant breakdown,' opines Jeffrey Ross. 'I think Lou had severe emotional stuff going on, and mood swings. Some of that was amphetamines. Some of that was maybe living with a transvestite, and not being sure where his sexuality [was].'
Lou had no compunction about stabbing people in the back, metaphorically. Shortly after Jonny Podell introduced him to a lawyer named Eric Kronfeld, Lou hired Kronfeld as his new manager, leaving Jonny out in the cold. 'He totally did the scumbag move.' Kronfeld worked with Lou into the 1980s, during which time he helped stabilize his finances. On his advice, Lou settled the lawsuit with Steve Katz, who was paid compensation. The court battle with Steve's brother Dennis intensified, however, with Lou counter-suing his former manager, together with the lawyer and accountant who had advised him at the time, ultimately asking for punitive damages of $2 million. Both sides filed allegations, and Lou spent several days giving evidence in the case at the Supreme Court of New York. He was a rational and courteous witness but, like many performers, he showed a limited knowledge of his financial affairs.
Lou was much more interested in technology. Throughout his life, he delighted in gadgetry and gear, being an early adopter of everything from Betamax to CompuServe. He wasn't necessarily skilful at using technology, but he loved it. 'He was an enthusiast,' says Tom Sarig, who managed Lou at the end of his career. 'When he got into something, he went full bore. The instruction booklet with print _this_ small, and a thousand pages, he's read the whole thing.' In this spirit, Lou became enthused with Binaural Sound in the late 1970s, an eccentric new recording system developed by a German engineer named Manfred Schunke, who put microphones in polystyrene heads (like hat-shop dummies) that were positioned around a recording studio, or concert venue, to record music in the round, as human ears would hear it. Lou started to record his German concerts using this peculiar system, creating tapes which he used as the basis of his next, and arguably his best, Arista album, _Street Hassle._
There were two guitarists in the band on tour in Europe in 1977, Stuart Heinrich and Jeffrey Ross. By the time Lou brought the Binaural concert tapes back to the United States, he had fallen out with Ross. 'He took my name off the record,' complains Jeffrey. 'Off two songs he had promised me co-writer credit.' Then bass player Bruce Yaw fell foul of the boss. Lou wanted Bruce to come into Manhattan to redub his parts on the live tapes but refused to pay for a hotel, suggesting that Bruce stay with his mother. 'I said, "I'm not going to do that. I'm a professional and I would be glad to play with you, but you have to pay me." That was not something he wanted to hear from me. So he re-dubbed the bass parts himself... That was the ending of my relationship with him.'
Another contributor to _Street Hassle_ whose name didn't appear on the finished album was guitarist Ritchie Fliegler, who has the distinction of being one of only a handful of musicians outside the Velvet Underground to have worked with both Lou and John Cale, enabling him to make a comparison of their strengths and weaknesses. 'John is probably the greatest musician I ever met,' he concludes. '[Lou] was a serviceable guitar player, but it was the tool he used for expression. It wasn't music for music's sake... John was in service to the music. Lou was in service to the message of the story. I truly believe that if it weren't for the two of them, you wouldn't have heard from either of them.' Like others, Ritchie found working with Lou on _Street Hassle_ to be a tiresome experience. 'Lou was an easy person to despise. He was the biggest prick I ever met, or ever worked for, but he sure wrote some great songs.'
Hiring and firing musicians with such rapidity, erasing the contributions of those who fell out of favour and overdubbing new parts to cover the gaps, gave _Street Hassle_ a muddy sound, while some of the songs – 'Wait' and 'Leave Me Alone' – were weak. Nevertheless, _Street Hassle_ had a vitality that lifted it above Lou's other Arista albums, while sharing similarities with the punk-rock movement. Lou found himself described as the 'godfather of punk' at this time, a title he didn't care for, and indeed he was generally disobliging about punk musicians. As provocative and irreverent outsiders, however, they shared a common attitude, and he played up to this on his new album. In the song 'Dirt', for example, an update of 'Downtown Dirt', he sang about someone who would eat shit and say he liked the taste if there was money to be made. 'I was specifically referring to my manager-lawyer,' he explained, apparently referring back to Dennis Katz. Like many of Lou's songs, 'Dirt' had been gestating for some time, and his antagonism to Dennis was stronger than ever as they fought each other in court.
Another track, 'I Wanna Be Black', was an outrageous parody of the preconceptions some white people have about African-Americans. Lou sang that he wanted to have a stable of whores (as if that's what black men typically did) and would welcome being assassinated like Martin Luther King Jr, or Malcolm X, if only he could be black. Christine Wiltshire, an African-American singer who sang backing vocals on 'I Wanna Be Black', says that she wasn't offended. 'I didn't take it as a personal opinion of his.' At times, though, Lou tried so hard to be controversial that he sounded like a bigot. 'I don't like niggers like Donna Summer,' he said in one interview that year, referring in another interview to 'nigger music' (by which he meant disco, Summer's 'I Feel Love' being one of the biggest hits of 1977). He evidently thought this sort of language was amusing, making him seem edgy and streetwise. If he had said such things thirty years later it might have ended his career. Like many people of his generation, he was guilty of a good deal of casual racism. He even indulged in anti-Semitic rhetoric. 'He hated Dylan. I once said to Lou, "You know, Dylan's the only genius in your field,"' recalls his journalist friend Ed McCormack. 'He said, "You mean you actually like that pretentious kike?" He's Jewish himself. It was a typical Lou thing to say.'
The centrepiece of _Street Hassle_ was a three-part, eleven-minute song cycle of the same name, set to a simple phrase in A and E, arranged for strings and guitar, in which Lou described a murder and sang a lament for a lost love, with a preface voiced by Bruce Springsteen, who happened to be making his fourth album, _Darkness on the Edge of Town_ , in the next studio. Lou stressed in interview that the characters in 'Street Hassle' were gay. 'They're not heterosexual concerns running through that song,' he told _Rolling Stone_ , identifying himself as gay in passing, a subject we shall return to in a moment. 'I don't make a big deal of it, but when I mention a pronoun, its gender is all-important. It's just that my gay people don't lisp. They're not any more affected than the straight world. They just _are_. That's important to me. I'm one of them, and I'm right there, just like anybody else.' He was particularly proud of the lyrics to the middle section: a disinterested observation of the violence in everyday urban life, concluding that murder, at a time when the murder rate in New York was extremely high, was often simply a matter of 'bad luck'.
_Time_ described _Street Hassle_ as 'one of his very best, bitterest and most adventurous records' when it was released in February 1978, comparing Lou's writing to the scabrous novels of Céline. Others were less impressed. Nick Kent pointed out in the _New Musical Express_ that 'at least half the record was shoddy', which was true, while Robert Christgau was troubled by some of the language. 'I don't think the racism of "I Wanna Be Black" is mitigated by "irony",' he wrote, giving the record a B in the _Village Voice_. The public showed little interest. After a period of relative popularity, Lou's sales were in long-term decline. Although many new-wave artists honoured his name as one of the originators of what might be termed outsider rock, few young record buyers cared. They had their own heroes. The success of the Ramones, Blondie, Talking Heads, U2 and the Pretenders would eclipse Lou over the next few years. _Street Hassle_ charted low in the USA, and not at all in the UK. ' _Street Hassle_ got great reviews [but] you learn very few albums sell off of reviews. There are examples, but very few,' says Clive Davis, who blamed the lack of a hit single. 'I think it did nicely for an album that did not have a hit single in it, but it certainly did not do sales commensurate with the reviews.' Lou felt crushed. Despite its flaws, he knew that the record represented his best work for some time.
As in the past, he reacted to failure by becoming surly and confrontational, which was his demeanour on tour that spring, a tour during which his relationship with Rachel began to fray. In April 1978 he invited Erin Clermont to accompany him to Philadelphia, where he gave a show in front of invited journalists to promote _Street Hassle_. They spent the night together. During the evening Erin asked Lou about Rachel, who didn't seem to be around. Erin had long wondered what the attraction was. Lou told her bluntly, 'She's more beautiful than any fucking woman,' which was hardly a compliment to Erin. The next morning, to Erin's great surprise, Rachel joined them for the limousine ride back to New York. He had evidently been in Philadelphia all along. 'It was so weird,' says Erin. 'We were in the car together and I had spent the whole night in the room with [Lou], in this hotel. I didn't even know where she was, and then she turns up in the car.' Theirs had evidently become an open relationship.
The following month, Lou played the Bottom Line, an intimate new club in Greenwich Village which held just 450 people, a tenth of the audience he could draw any night in Europe even at this stage in his career. 'Lou was accepted in Europe a lot better, and he made more money there. He always did. For some reason, Europeans are more open to anything a little different,' notes the singer Genya Ravan, who guested on _Street Hassle_ and appeared on stage with Lou at the Bottom Line. While he sometimes grumbled about his relative lack of popularity in his native land, Lou enjoyed the intimate atmosphere of the New York club, which he played many times over the next few years, while friends often dropped in to catch his act. 'I was _proud_ of him. For once, finally, he's himself, he's not copying anybody. Finally, he's got his own style,' Andy Warhol noted in his diary after seeing Lou at the Bottom Line in March. 'Because when John Cale and Lou were the Velvets, they really had a style, but when Lou went solo he got bad and was copying people...' This was well observed.
Another residency at the Bottom Line two months later was recorded in Binaural Sound for a remarkable live album, _Lou Reed Live, Take No Prisoners_ , the most extraordinary aspect of which was that Lou talked to his audience as much as he sang. Like a camp Lenny Bruce, he repeatedly interrupted the songs to tell rambling anecdotes about his life, or anything else that was on his mind, his comments sprinkled with so many expletives that the album was issued with a sticker warning that the content was 'offensive'. He also took the opportunity to insult his critics, picking on John Rockwell and Robert Christgau. 'You know how heavy it is if you get reviewed by Rockwell in _The New York Times_ , and he says you are intelligent?' he asked the audience. 'Fuck you! I don't need you to tell me that I'm good.' Rockwell took the jibe in good humour. 'I thought it was kind of fun that Bob [Christgau] and I, who are friends, would be singled out by him. The key line was that he didn't need critics to tell him how good he was, but the subtext to that is that both Bob and I had told him how good he was. It wasn't like we were enemies.' Christgau was less amused to hear Lou mock his grading system and call him a moron. 'I wasn't offended. Was I pleased? No, I wasn't pleased either. The guy's a jerk,' he says tetchily. 'He hated critics, hated them more than most musicians.'
Some of Lou's stage patter was funny, though hardly original ('If you write as good as you talk, nobody reads ya!'). His reference to 'niggers' was offensive, while his monologues often dribbled away into mumbled non sequiturs. He sounded drunk a lot of the time, and often was. 'I do remember the Bottom Line performances being completely drunken and drug-ridden,' says Michael Fonfara, who continued to play keyboards in the band. Backing singer Chrissy Faith didn't enjoy the shows at all, finding Lou hard to work with. 'I was fairly intimidated, because he was not really approachable, and as a woman when you are dealing with somebody who is not very centred in their sexuality there is another element there. So [I'm] trying to learn how to relate to this guy. And there were a lot of drugs, so you never knew what you were going to get.' She was amazed that Arista proposed to issue a live double album of the shows. It was Lou's idea, and Clive Davis let him do what he wanted. This was fortunate, because, despite its flaws, _Take No Prisoners_ is the most entertaining of the six live albums Lou released as a solo artist, giving a visceral sense of what it was like to hear him in a club. And when he let his band play, on 'Berlin', 'Pale Blue Eyes' and 'Satellite of Love', the music was powerful and nuanced.
Lou was back living in Greenwich Village, not far from the Bottom Line, in a dingy apartment above the old Stonewall Inn on Christopher Street. This was the scene of the eponymous 1969 riot that started the modern gay rights movement in America, which had gathered considerable momentum by 1978. The fact that Lou chose to live in the heart of the Village's gay scene, above what had been an iconic gay bar (a deli in 1978), made a statement, at a time when he was also in a long-term relationship with a man. Indeed, it was at this stage that Lou chose to identify himself in public as gay.
Having skirted around the subject of his sexuality for years, presenting a camp image and insinuating at times that he was gay or bisexual but never defining his sexuality, Lou made his position clear in two interviews he gave to promote _Take No Prisoners_. 'I have such a heavy resentment thing because of all the prejudices against me being gay,' he told Stephen Demorest of _Creem_ magazine over lunch at the Russian Tea Room. He spoke about witnessing a recent demonstration about laws that discriminated against homosexuals in New York. As a result, for the first time in his life, he felt politically engaged. '[A] girl got up and talked about seeing a rabbi on TV who said "Homosexuality is an abomination." And I realized this guy was calling me – Lou Reed – an abomination, too.' It was in this same _Creem_ interview, published in March 1979, that Lou suggested he was given ECT as a teenager because of his 'homosexual feelings', and that he had forced himself to have heterosexual relationships in college, only accepting his homosexuality in his twenties. 'I just wouldn't want listeners to be under a false impression,' he said. 'I want them to know, if they're liking a man, that it's a gay one – from top to bottom.' This statement, together with the aforementioned comments he made about the characters in 'Street Hassle' to _Rolling Stone_ , which published its interview in the same month, appeared to show Lou coming out. But he seems to have soon regretted discussing his sexuality in this way. Perhaps he concluded that it was bad for his image to be classified as gay. Perhaps he thought it was less than the whole story. He had female lovers, too, after all. Erin was still part of his life, while Barbara Hodes had also continued to see Lou until recently. Maybe he thought his sex life was nobody's business but his own. In any event, he never described himself as gay in public again.
Lou furnished the Christopher Street apartment thriftily, acquiring a second-hand leather sofa from a classified ad in the _Village Voice_. The seller was surprised when Lou turned up in person to collect the couch. He kept the apartment for several years, improving it over time. He had skylights put in to brighten the dark rooms, had bookshelves built and purchased a big TV and a huge stainless-steel bath. Unlike his previous apartments, Christopher Street became a real home, but he didn't own it. As his finances improved, Lou decided the time was right to buy a place.
His choice of property was unexpected: an old hunting lodge next to the Kittatinny Mountains, outside the village of Blairstown, New Jersey. The house was only fifty-eight miles from Manhattan by limousine, but Lou felt himself in the middle of nowhere when he arrived. There were no neighbours in sight, and he could walk for hours in the woods without seeing a soul. He found the place restful after the hubbub of Manhattan. 'Even if you wanted to do something, there's nothing to do there. It's appalling how much sleep I get. You know, Andy used to say you can't see the stars in New York City because they're all on the ground [a phrase Lou later quoted in _Songs for Drella_ ]. Well, out there the stars are in the sky.' He came to own 138 acres in the country, including a two-acre pond next to the lodge – a small stone building with an oversized porch made of boulders. The horror movie _Friday the 13th_ was filmed at the neighbouring summer camp soon after he bought the place.
Lou's country home, New Jersey.
He brought friends out to Blairstown. 'The house was about half a mile off the road. And we're driving in this big black limousine through potholes... It looks like a Gingerbread house from the Brothers Grimm. We parked the car. You had to go over a little bridge to the house [built] from stone and logs. It was like a log cabin, very rustic,' recalls Ellard Boles, Lou's new bass guitarist. Nicknamed Moose for his size, the musician also became his room mate at Christopher Street, which Lou continued to rent and live in part of the time, and where they played endless games of Mastermind. 'I would kick his butt on this game. It would drive him crazy. I mean, _crazy_.' At night, Lou slept with Baron the dachshund in the master bedroom, while Moose slept with Duke in the spare room. What about Rachel? 'Rachel was kind of in and out,' says Moose. 'Rachel wasn't there all the time.'
Erin Clermont.
Erin lived nearby in the Village and, when Lou wanted female company, he would ring and ask if he could 'pop by', no matter how late. If Erin was out when he called, he would call back repeatedly until she answered. She once came home to find that Lou had called and hung up twenty-one times. 'He had this need to talk to people. He needed to have this contact... In his drug years it was this neurotic neediness.' Lou and Erin went through a phase of visiting sex clubs together: peep shows on 42nd Street, the new swingers' club, Plato's Retreat, and the Eulenspiegel Society, a bondage and S&M group. 'It was on a voyeuristic basis. He had an intense interest in sex. I did, too. Also, exploring the variations of sex.' Lou was more interested in watching than participating. 'I knew I was safe with him because he didn't want to get involved in anything dangerous – no way!' Lou did, however, have adventures on the road during this time, and they were primarily gay.
'He did have the odd tryst on the road, but never anyone that got to hang around long. [They] were kicked out before morning,' says Michael Fonfara. 'Young men... he did prefer pretty boys.' Moose recalls Lou sleeping with one guy in Buffalo. The next day on the bus, as the band members were discussing what a good time they'd had in town, Lou, who was eating an ice cream, gave his lolly a lascivious lick and said he'd had a wonderful time, too.
One night, Michael Fonfara spotted an attractive woman in the audience. 'She was making eyes at me... She was very demure in the way that she dressed. But I loved her exotic face. That's why I told the roadie to bring her backstage. I thought she had a good-looking figure. Not a really tall girl – about five [foot] five, maybe. Medium build, semi-voluptuous.' They went back to the hotel. 'We were going up in the elevator, and she looked over at me and said, "Michael, I like you a lot, but I really, really just wanted to meet Lou. That's why I'm here with you."' Michael called Lou to tell him about the girl, not expecting him to be interested. 'He didn't want any girls back to his room – ever.' Michael spoke up for her, however. 'I'll babysit with her to make sure nothing happens and take her away again.'
'OK, you got five minutes.'
He took her to Lou's suite, where they made an instant connection. 'After about two minutes they were sitting there with their heads together, and he looked up at me and said, "Get lost!"'
The girl, Sylvia Morales, was born in England in 1956 when her US serviceman father, Pedro Morales, was stationed at an airforce base in Cambridgeshire. She was currently living in New York City, where she was studying and taking part in the downtown arts scene. Like Lou, she sometimes hung out at CBGB. According to biographer Victor Bockris, she also attended the Eulenspiegel Society and performed in a burlesque show. She already knew John Cale, who claims to have had a fling with her before she met Lou. 'I picked Sylvia up one day in a club, went over to her apartment and that was it – a one-night stand.' It all pointed to an intense interest in Lou Reed's world. Lou, fourteen years her senior, was intrigued by Sylvia, who quickly established herself as his new companion. Sylvia and Rachel were both on the scene for a while, which was problematic. 'I know there was some effort to keep the two of them separate,' says band member Chrissy Faith, '"Oh my God, Sylvia _and_ Rachel are here!"'
Michael Fonfara was surprised to see Lou in a relationship with what he would have called 'a real woman'. 'The entire time I knew him until he met Sylvia he was gay,' he says. Indeed, Lou had only just come out in the press, which was awkward timing. Moose Boles saw Lou more as bisexual, and says that Sylvia accepted this. 'She was at peace with who he was, and she loved him for who he was. That was the thing between them. She really took care of Lou on the road.'
Rachel didn't disappear overnight, but he was seen less frequently in Lou's company, until one day at rehearsals in New York in early 1979. Lou was on stage with the band, Sylvia watching them play, when Rachel appeared in the room carrying a leather bag. When Lou finished playing he beckoned Rachel to come forward and took the bag from him. He was evidently returning his property. Then he left, disappearing back into the twilight world from which he emerged in 1974. 'No one ever knew her real name, her full name, or what happened to her,' says Erin. 'He never mentioned her again.'
Unusually, Lou co-wrote much of his next album, _The Bells._ He co-wrote the song 'Families' with Moose during the Thanksgiving vacation of 1978, which the men spent together at Christopher Street. The lyric addressed what it felt like to be a disappointment to one's parents, with obvious autobiographical connotations. Lou co-wrote the title song, informed by the Edgar Allan Poe poem of the same name, with his long-serving saxophonist Marty Fogel. The lyric was created extemporaneously in the studio, and Lou considered it one of his best. Other songs were written with guitarist Nils Lofgren and the jazz musician Don Cherry, who played trumpet on the album, which was recorded in Germany in Binaural Sound. The music was a strange hybrid of frenetic disco (a type of music Lou had previously professed to hate but which was in fashion) and jazz rock, with some intriguing lyrics. As ever, the critics were divided as to whether it was any good. Lester Bangs raved about it in _Rolling Stone_ , while the _Los Angeles Times_ dismissed it as a 'dismal, turgid effort'. It didn't chime with British critics. ' _The Bells_ is a delusion disguised as an album,' wrote Charles Shaar Murray. 'I'm not entirely sure whether Reed is attempting to delude his listeners or whether he is himself deluded.' The public showed virtually no interest in the record, which Lou came to think of as one of his most underrated albums, and indeed he may have been right. 'I think it sold two copies,' he reflected, 'and probably both to me.'
When he went on tour to support the album, Lou had a new whizz-kid guitarist in his band. Chuck Hammer admired Lou's songs so much that he wrote and told him so. 'I just said I loved his music, and I know _Berlin_ is a masterpiece, and I'd love to work with him, and I'm the best unknown guitarist in America.' Lou invited Chuck to an audition at his apartment. 'These are the ground rules,' he said, as controlling as ever. The guitarist was to come to Christopher Street promptly at 5 p.m. 'I'll set you up in a room to play. I'm not going to stay in the room. I'll go into a different room. You play for twenty minutes. If at the end of twenty minutes I don't like what I hear, you just pack up and leave.'
Chuck knocked at Lou's front door as arranged. Lou opened the door himself, unshaven and clearly disoriented. 'What time is it?' he asked.
'Five o'clock.'
'Day or night?'
After he passed the audition, Chuck was given a Roland Guitar Synthesizer to learn to play, a difficult instrument he likens to 'wrestling with a gorilla'. After a few days Lou invited him out to Blairstown to hear how he was getting on. Chuck travelled out to the country by Greyhound bus, arranging to meet Lou in town. 'Get off the bus, Lou is there in his Jeep with the top down.' After years when he hadn't driven at all, Lou started to drive again in the country, acquiring a variety of vehicles, including the Jeep, a Mercedes 450SL sports car, a snowmobile and several motorcycles. 'Lou takes the guitar, flings it in the back of the Jeep, says, "Get in." Drives thirty feet. Slams on the brakes... There was a little bar... Lou takes me into the bar. He orders four Scotches for him, and four Scotches for me, and a beer each. I don't drink. [Four Scotches] lined up. He downs his and I have, like, one. He finished mine as well. Lou is now driving his Jeep, drunk out of his freaking mind, I'm in the passenger seat, and we are driving towards his house from town.'
Lou treated Chuck as he had former protégés, bonding with him while also seeking to control him, and signalling that the good times wouldn't last. 'Never cross me,' he warned. He was trying to wean himself off methamphetamine by drinking more, and travelled on tour with a case of Johnny Walker Black Label. 'He was drinking two bottles every three days on his own. Straight up,' says Chuck, who counted the empties. Lou also liked to drink in bars on tour, routinely sending drinks back, saying they weren't mixed strong enough. 'He _constantly_ did that, and constantly abused waiters and waitresses.' Tour days often started with a champagne breakfast at the hotel. 'There would be jeroboams of champagne, and guys squeezing oranges, and we'd start before we even did the sound check for the gig. Everybody would be pie-faced, just flattened. And Lou would say, "Tell them to put the rest of that bottle in the car, we're taking it to the rehearsal,"' says Michael Fonfara. 'It was just a constant flow of booze. I guess that's rock 'n' roll.' Lou started to get heavy again with so much drinking, so he and Michael went to the gym. Afterwards, Lou would sometimes treat himself to a shot of meth. 'We used to go to the gym together in New York and work out, and then we'd get back to his place and he'd say, "OK, you tie me off [apply a tourniquet to his arm so he could inject speed]." Oh Jesus!'
They were in the middle of a gig at the Stadthalle in Offenbach, Germany, on 6 April 1979, playing to an audience largely composed of American servicemen, when a heckler interrupted Lou's concentration. He stopped the show and ordered the lights to be turned on the audience to identify the culprit. 'People started yelling,' says Marty Fogel, 'and Lou really got agitated and started yelling back.' A woman climbed on stage. When she came towards Lou, he sidestepped, she stumbled and he grabbed her. 'Lou proceeds to drag her off the stage by her hair, and pushes her off the stage. She fell fifteen feet – at least,' says Chuck Hammer, 'at which stage a full-blown riot breaks out – chairs start to fly – an incredible riot ensues.' Lou was cowering backstage when the German police came to arrest him. Lou handed Sylvia his leather bag – evidently, something precious was in it – before they took him away. He spent the night in custody, where blood and urine samples were taken for drug tests. The fact that he passed is probably because he was now primarily drinking on tour. As soon as Lou was released, they all flew to Switzerland. 'We had to leave the country,' says Chuck, who recalls that a further German show was cancelled.
A couple of days later, Lou arrived in London, where he asked to meet Charles Levison, Managing Director of Arista UK. Lou was increasingly unhappy about his low record sales, which he blamed on a lack of label promotion. Levison emerged shaken from their meeting, as Arista publicist Howard Harding recalls: 'Charles, looking very agitated, said to me, "You didn't tell me he was going to pull a gun on me!" "What?" Apparently, Lou was so incensed he actually took out a firearm and threatened Charles Levison with it.' Lou was fortunate that Levison didn't call the police.
That same week, Lou was on stage at the Hammersmith Odeon when he looked over and saw David Bowie sitting cross-legged on a flight case in the wings watching the show. Lou was so excited that he turned to his band and yelled, 'Play! Play! Play! Play!' He was eager to impress the Englishman, who had become one of the great figures of rock 'n' roll in the few years since they had worked together on _Transformer_ , releasing a series of varied, distinctive and ultimately classic albums that arguably placed him above Lou in the pantheon of rock. He was also commercially far more successful. When he came off stage, Lou hugged and kissed Bowie, who accompanied Lou and his band to the Chelsea Rendezvous for a reunion meal. 'Isn't David great? Don't you just love David?' Lou kept saying. The stars sat together at the head of a table. Dom Pérignon was served. After the meal, Lou switched to Irish coffee. He became drunk and loud, while Bowie remained subdued. Mindful of the success they had enjoyed in the past, badly needing another hit and emboldened with alcohol, Lou asked David if he would produce him again.
'Yes, if you clean up your act.'
Shocked by the reply, Lou slapped Bowie. 'Don't you ever say that to me!' he shouted. He slapped him again. 'Don't ever fucking say that to _me_.'
The musicians intervened, persuading Lou to move to another table, where he sat glowering at David for a few minutes. Then they made up. 'They start talking again. It's like nothing's happened,' says Marty Fogel. 'But then it happens _again_. He stands up and starts slapping [David] around.' Lou pulled David out of his chair. 'I told you never to say that!' he screamed, slapping his face. This time his band took him back to his hotel. They went to his room, where he normally liked to sit up late with the musicians, listening to a tape of the show they had just played.
There was a knock at the door.
'Who is it?'
'David Bowie.'
Bowie had come to the hotel to fight Lou. 'Come out and fight like a man!' Bowie challenged, marching up and down the hotel corridor while Lou snored. His band weren't sure whether he was sleeping, or pretending.
There was further unpleasantness when Clive Davis came to see Lou at the Bottom Line in New York on 4 June 1979. Lou, who was now bloated with drink, his belly big and his face puffy, berated his label boss from the stage, asking, 'Where is the money, Clive?' and 'How come I don't hear my album on the radio?' Davis was not amused to be shown up in public by an artist who didn't even make Arista much money. 'I had always done my best, but I realized he was frustrated [with sales],' he says. 'It was upsetting.' The following day, Lou was obliged to issue a public apology. 'Like most artists, Lou was a blamer,' comments Jonny Podell. 'It was Clive's fault, my fault... That's intrinsic in the business.'
Feeling the need for somebody at his side whom he could trust, Lou asked Sylvia Morales to marry him, and she accepted. He announced their engagement on stage at the Bottom Line in December, also singing a sappy new song, 'Love Is Here to Stay'. But first he had to make another album for Arista.
At the suggestion of Corky Stasiak, Lou recorded _Growing Up in Public_ at George Martin's new AIR Studios on the Caribbean island of Montserrat, arriving there on 5 January 1980. As a Beatles fan, Lou was thrilled to meet Martin, and he found the set-up at AIR congenial. Dire Straits, Paul McCartney and the Police would all record hit albums at the facility over the next few years. Lou, his band and their partners stayed in villas within the studio compound. Each evening there was a communal, catered meal, washed down with copious amounts of alcohol. One evening, Lou and Michael Fonfara wrote a comical paean to inebriation they called 'The Power of Positive Drinking'. They were, as Michael says, 'drunk as skunks'.
_Growing Up in Public_ marked a departure from the jazz-rock sound Lou had hitherto pursued with the Everyman Band, which had of course morphed into Lou's band over the past five years as musicians left and were replaced. Only two originals remained, saxophonist Marty Fogel and drummer Michael Suchorsky. Moose Boles, Michael Fonfara, Chuck Hammer and Stuart Heinrich had joined the band separately, but they had likewise shown the boss considerable loyalty. The music on the new album, largely written by Fonfara, who also produced the record, was basic bar-room boogie. Lou sang, rather than narrating his songs. His voice, which had deepened with years of cigarette smoking (he was trying to quit), sounded strained, while the lyrics were unusually wordy and autobiographical. 'So Alone', 'Love Is Here to Stay' and 'Think It Over' all appeared to be about his new fiancée, Sylvia, while 'Keep Away' may have expressed aspects of his break-up with Rachel. Most interesting was 'My Old Man'. Uniquely, Lou used his own name in this song, towards the end of which the bullying father tells him to 'act like a man'. Lou claimed that it wasn't autobiography, though it sounded like that to most people, including his band. 'That [song] I think was pretty autobiographical,' says guitarist Stuart Heinrich. 'It did seem like a retrospective of his own life, and his surroundings, as opposed to mirroring the lives of others.' Michael Fonfara has no doubt that Lou was writing about himself, and sees the whole album as autobiography. 'He was explaining how he was brought up, and what are some of the reasons for [the] behaviour he has, and what he thinks about the world, and what he wants everyone else to know... how things happen like this. He talks about his mother being "a harridan mother". Some other things in there [like] "Standing On Ceremony", all these things are based on his family. It's almost like telling a therapist, only he is doing it in song.'
Recording on Montserrat went smoothly. 'The buzz around the band was "Sylvia's really good for him, she's a sweetheart, she's really taking care of him. He's not doing as much drugs. He is drinking,"' recalls Corky, who had last worked with Lou on _Rock 'n' Roll Heart_ in 1976, when he was out of his mind on drugs. Four years on drink was the problem.
At the end of January 1980 everybody returned to New York, where Michael Fonfara and Corky Stasiak mixed the album at Electric Lady under Lou's supervision. He drank so much in the studio while they worked that he passed out. 'There were times he would nod out and we would have to wheel him in the chair outside the studio so we could finish the overdubs and stuff,' says Corky. Lou's drinking was not merely excessive; it had become dangerous. He confided to Michael and Corky that his doctor had told him that his liver was damaged. 'His doctor told him that if he did any more, his liver would explode and he would die,' says Michael, who believes that this is why Lou chose to review his life in _Growing Up in Public_. 'He was feeling his mortality.' If he wanted to live into his forties, he would have to change his lifestyle. Perhaps Sylvia could save him from himself.
## XI
## Second Marriage
### 1980–7
LOU MARRIED SYLVIA at his apartment in New York on St Valentine's Day, 1980. Many people were invited, in contrast to his first wedding, including his new twenty-three-year-old road manager, Daryl Bornstein, who had recent experience of his boss's gay side, despite the fact that he was about to embark upon matrimony with a woman once again. A few days before the wedding, Lou took Daryl for a drink. 'Lou wanted to go out, so we went out and stopped at some club – it was a gay bar – and we walked in and he made some comment like, "I bet you've never been to one of these." And then he grabbed my ass. I said, "Yes, Lou, I have. And don't grab my ass."'
Wedding guests were surprised to see Lou's parents at the ceremony. In his new, as yet unreleased album _Growing Up in Public_ , Lou insinuated in 'How Do You Speak to an Angel' that his mother was a 'harridan' and, in this and other songs, that his dad was worse. He described a 'simpering' ('How Do You Speak to an Angel'), 'bullying' man who 'beat my mother' ('My Old Man'), while denying in interviews that this was literally true, which might be seen as having his cake and eating it. Nevertheless, Sid and Toby were at Christopher Street to watch their son, dressed conventionally in jacket and tie, link his lot with Sylvia, who wore an off-white silk dress for their big day. He was thirty-eight; she was twenty-three. As the couple posed for photographs with their parents, Sylvia burst into tears. The new Mrs Reed had a tough side, however, as she would show over the course of the marriage, during which she took an increasingly prominent role in Lou's career, helping to design his album covers and organize his tours, ultimately becoming his personal manager. 'She seemed to be the adult in the relationship,' comments Daryl Bornstein. 'Sylvia was certainly more practical.'
The wedding party adjourned to a local restaurant, after which Lou took everybody to the Broadway Arcade, at Broadway and 52nd Street, where guests were given a bucket of quarters to play pinball. Lou had started to frequent the arcade while shooting scenes in the neighbourhood for Paul Simon's movie _One-Trick Pony_ (1980), in which he played a producer. The picture flopped, but Lou developed a passion for pinball. 'I would call him a Pinhead. He would spend an hour, two hours, playing in a night,' says arcade owner Steve Epstein, who struck up an unlikely friendship with the star. The men ate together at Little Charley's Clam House in the Bowery (one of Lou's favourite restaurants), played golf together (one of his less well-known recreations) and went to see the Mets play at Shea Stadium, catching the subway to the venue and sitting in the bleachers like everybody else. Steve proved to be a good listener. 'If he was bitching and moaning, I was more than happy to listen.' He didn't care for Sylvia, however, 'a sour type of person', and adds to the evidence that Lou hadn't entirely forsaken his gay life. Boyfriends came by the Broadway Arcade looking for Lou, and Steve passed on their messages. 'I realized he definitely was a bisexual man.'
Polite conversation at the wedding was that the new album sounded great, and Clive Davis was pleased, but _Growing Up in Public_ was a disappointment upon release in May 1980. Mikal Gilmore, writing in _Rolling Stone_ , expressed lukewarm praise ('a polished package of bombastic rock 'n' roll'); the UK music paper _Sounds_ rubbished the album, the British music press having written Lou off in the post-punk era as being hopelessly old hat. His diminishing number of anglophone fans mostly ignored the record. He had released too many patchy albums over the past few years, and had lost the respect and attention of all but his most loyal listeners. His biggest and most receptive audience continued to be in the Latin nations of southern Europe, where his iconoclastic image went down particularly well, and where he toured to promote the album in the summer of 1980 and, once again, ran into trouble. A concert in Madrid degenerated into a riot, during which fans torched the arena and a frustrated Lou slammed his fist into the air-conditioning unit on his tour bus. He was drinking heavily, so much so that he told his road manager he was too sick to perform in Oporto. At least he wasn't using drugs. 'It must have been in Portugal when Lou had a doctor come [to his room, because he was so hung over], and the doctor wanted to give Lou a shot, and Lou said, "No shots." He didn't want to have anything to do with needles,' says Daryl Bornstein.
The failure of _Growing Up in Public_ brought his relationship with Arista to an end. 'I always felt bad that the years with Arista were not more fertile. I would have loved to have enjoyed more commercial success with Lou,' says Clive Davis, who had hoped that Lou's status as the 'godfather of punk' would result in a career lift in the late 1970s. When this didn't materialize, he made no attempt to extend the contract. For Lou, who was also facing a health problem, it was time to step back and think about what he wanted to do with his career. So he told his band that he wouldn't be working for a while, citing his health as the principal reason. 'He had a medical condition at that point. They thought he was going to die of sclerosis, or something liver-orientated,' says Michael Fonfara, adding that Lou was devastated to have to stop drinking. 'He was almost in tears!' He didn't, however, stop completely.
Keeping the Christopher Street apartment as his city base, Lou retreated to his country home, where he worked to get fit and sober. He swam in the pond next to his cabin and hiked through the woods. He ate healthily and took up tai chi, the start of an interest in the martial art that lasted for the rest of his life. The quintessential city dweller found country living surprisingly congenial, though he made urbanite mistakes, like feeding the Canada Geese that alighted on his pond, with the result that he was inundated with birds, who covered the foreshore in guano. He also fed the bears, until one climbed on to his porch looking for food. The man in black also took fright when he found a snake in one of his outbuildings. 'I had to go up and help get it away,' says neighbour Rita Teel. 'I just went and kind of pushed it away with a stick.' Lou and Sylvia became friendly with Rita and her husband, Bob, who did odd jobs for the couple and kept an eye on their isolated property when they were away. 'Lou was a little introverted,' says Rita. 'Other than that, I think he was one of the nicest guys we ever had as a neighbour.' Others were less sure about the rock star who had moved into their quiet, conservative area. 'Blairstown has changed a bit over the years,' notes Sylvia. 'But the town was and still is primarily fairly Republican-conservative, with quite a few shotguns and pick-up types who dominated the area for a while.' Some saw Lou as a freak. Neighbour Judy Cook recalls watching Lou walk down the country road which ran past their homes with a woman and a stick. 'He had a stick, and he used to make her stay at a distance,' she laughs. 'He [was] so weird sometimes.'
Lou became a keen motorcyclist in New Jersey. 'Tramontin motorcycles, which is nearby Blairstown, was a haven for him during the years he was a Harley-Davidson enthusiast,' says Sylvia. Lou acquired a series of powerful Harleys, including a Super Glide and a Fat Boy. 'He liked to hang out in the service department and watch the technicians work; he was fascinated with anything mechanical,' says dealer Bob Tramontin. Lou on his bike, dressed from head to toe in black leather, even in midsummer, became a familiar figure around town. When he pulled into Dominick's pizzeria on Route 94 for a snack, local kids would cheek him by chanting 'doo da doo' as he walked by. His apparent dislike of being recognized caused him to keep his helmet on, even when he visited the Blairstown Inn for a drink. Although he gave the impression that he became clean and sober around 1981 and remained so, there is abundant evidence that this wasn't strictly true. He stopped using hard drugs, but he struggled with sobriety for most of the rest of his life. 'I often would not serve him when he came with the motorcycle, because I felt he was under the influence already when he arrived,' says Kellie Peterson, proprietor of the Blairstown Inn, who was struck by the oddity of Lou wearing his helmet in her bar. 'If he wasn't supposed to be drinking, and he was off the wagon, he might not have wanted to be seen... I never had a real argument with him. It was simply, "No, I think you've had enough for today, and I don't think it would be a good idea if I gave you a drink."' Lou wasn't pleased to be refused. 'But he did not make a scene.'
There is further evidence that he continued to drink even after being warned by his doctors that he had damaged his liver. 'I can remember bringing a bottle of wine out there when I was there with my wife and my kids and they invited us for dinner, and I'm pretty sure he drank some,' recalls school friend Richard Sigal, who resumed his acquaintance with Lou at this time, often visiting him in Blairstown. 'I think he was also smoking some dope.' Lou's attempts to get sober were analogous to his half-hearted efforts to stop smoking cigarettes. 'I consider I have slipped,' he once said, looking at the Marlboro in his hand. 'But I have not stopped trying to quit. As opposed to saying that I've failed and there's an end to it.' Here was the doublespeak of therapy and the sophistry of self-help books, of which he was an avid reader.
In his struggle to conquer his bad habits, he became anxious and depressed and was diagnosed as suffering with bipolar disorder. On 3 June 1980 Lou visited Erin Clermont to tell her about the diagnosis, adding that he had already found a cure for his problem. 'He came over, exhilarated. He'd found the answer – lithium. I remember him saying, "It all boils down to taking this salt, a natural product!"' Lithium salts have been used since the nineteenth century as a treatment for depression and manic behaviour, but overuse can result in nausea, lethargy and more serious side effects, and it seems that Lou was taking too much. From what Erin could see, lithium 'completely fucked him up.'
Work is often the best medicine, and it wasn't long before he started writing for a comeback album. His manager, Eric Kronfeld, had negotiated a new deal with RCA, surprisingly, considering their history. It was important that the first album was good. The songs he was writing addressed a variety of topics. In 'My House' he listed the three principal blessings of his new country life as his writing, his motorcycle and his wife, in that order, which put Sylvia in her place. Casting his mind back to student days, he wrote about the day President Kennedy died, and his reflections on the bravery of the President's widow ('The Heroine'). At a time when Lou kept a handgun at home in the country, he wrote 'The Gun', in which he imagined himself in the shoes of a psychopath who murders a woman. '"The Gun" is none of me,' he clarified. 'The guy in "The Gun" is a vicious, stupid, mean so-and-so. But I know people like that, and I wanted to act it out...' He wrote realistically about alcoholism in 'Underneath the Bottle', in contrast to the mendacious 'Power of Positive Drinking' from _Growing Up in Public_ , and expressed his mental turmoil in 'Waves of Fear' and 'The Blue Mask', for which his comeback album would be named. There was more thought, sincerity and art in these songs than Lou had marshalled for years.
His relationship with his old band ended before he went into the studio to make the new album. During his career hiatus, his musicians had taken on other projects, some choosing to work with the jazz star Don Cherry. When Lou wanted them to come back and work for him, they were unavailable, which caused him to wash his hands of the group. He had also fallen out with several individual members over the years. 'Because he was an ex-junkie, he had ex-junkie behaviour patterns. Loyalty was paramount and the tiniest infraction would make an end to a relationship, because he would feel he couldn't trust you,' notes Daryl Bornstein. 'With the guys in the band it would be one funny look and that would be it, which was crazy because prior to that they would have been best friends. And Lou didn't have a lot of close friends.' Lou fell out with his guitarist, Chuck Hammer, for instance, over a solo album Hammer was making for RCA, his profile having been raised by his recent work on Bowie's 1980 hit 'Ashes to Ashes', which showed Bowie still to be a vital artist with a popular, contemporary sound. Lou announced that he wanted to produce Chuck's album, but only if he fired everybody else he had been working with to date. Chuck refused. This resulted in the end of their relationship. 'I was so pissed at Lou for putting me in that position... If I saw him in a grocery store, I walked out.'
So Lou assembled a new band. He began by hiring a sophisticated, experienced drummer named Doane Perry, who had played one gig with him in 1979. Lou mailed demo tapes and hand-typed lyric sheets to Doane in Los Angeles, spending hours discussing arrangements over the phone with him before they went into the studio to cut the new album. 'I made my charts over his words in terms of bar lines or accents, cos there was nothing notated... He wasn't that sort of [musician]... For instance, on the song "The Blue Mask", I remember pointing out to him, "Lou, this is in 9/4 [time]." He said, "It is?" Because Lou was very much a [simple] 4/4, 3/4, 6/8 kind of guy. And he was quite surprised, and a little bit delighted... It was just that he didn't think about that [sort of thing].'
The next recruit was Fernando Saunders, a virtuoso musician from Detroit who played fretless bass in a subtle, lyrical style. 'He said, "I want someone with fresh ears." Not copying the past,' says Fernando, who had previously worked with Jeff Beck and John McLaughlin. 'We had good chemistry from the first note.' Fernando would work with Lou, on and off, for most of the rest of his career. His longevity was partly due to his placid temperament. 'I never saw Lou trying to rile up Fernando, because Fernando wouldn't have risen to that,' explains Doane. 'There was a very Zen-like presence to Fernando that was imperturbable.'
The third and most important member of the band was Robert Quine, who had first met Lou as a fan in the 1960s, taping Velvet Underground gigs. He had subsequently achieved renown as a guitarist with Richard Hell and the Voidoids, one of the most influential new-wave bands of the mid- to late 1970s. Robert was nine months younger than Lou, but appeared older because of his bald head and formal manner of dress, usually wearing a starched shirt and sports jacket on stage. He also favoured dark glasses, which made a serious person seem more austere. 'He was very taciturn at times, never rude, very quiet and inward,' says Doane, who recalls that Quine even wore his shades in the studio. The man did not invite conversation. 'Occasionally, he would make a remark that was more often than not cryptic.'
This lean four-piece convened at RCA's Studio A in Manhattan in October 1981, a vast room where Elvis and Sinatra had recorded. 'All the lights were on. It was like working in a great big cafeteria,' recalls Doane. 'We were set up tight like a band on stage.' Lou stood to the right of Doane's drums as they recorded live; Quine on his right, Fernando to his left. They tackled the songs with intensity, even ferocity. During the maelstrom of 'The Blue Mask' Lou turned and urged them all to play harder. Doane and Fernando held the tempo, resisting Lou's inclination to speed up. They knew that tension was to be gained if the rhythm section held back slightly, which is part of the distinctive sound of the album. 'At the end of some songs he would almost be shaking with the intensity of it,' says Doane. Others noticed Lou's hands trembling. It may have been nerves, delirium tremens or a side-effect of the lithium.
Lou told his band that he was clean and sober. 'He was coming out of that at that period, doing that album,' says Fernando. 'He said, "Fernando, I went to healers, I went to prayer places, I went everywhere." He realized that no healer who would touch your arm, no acupuncture, or prayer is going to help you to stop drinking. You just have to stop. And I guess it got to the point where – he would tell me things – it got to the point where the doctor said, "If you keep drinking like this you will have no liver." So he had pretty much quit [by] himself.' Lou wouldn't tolerate people around him using drugs. He lost his temper with Quine early in the sessions when the mournful guitarist (who committed suicide by drug overdose in 2004) asked for some cash to buy lunch. Fernando believes that Lou may have suspected he wanted money for drugs. 'He didn't trust people with his money. People had taken advantage of him [in the past].'
Despite this altercation, the sessions went well. In contrast to the Everyman Band, the sound was skeletal hard rock. They could play softly when required. The arrangement of 'The Gun' was a subtle, jazzy sketch, an unlikely musical setting for such a menacing story. But the band's signature sound was closer to the second Velvet Underground album. Quine filled the role John Cale had once played. He was a similarly serious, demanding musician who didn't like to compromise. Every guitar lick he played was original, every take different, constantly challenging Lou to do better. There was a healthy creative tension between the men, though jealousy intruded. One version of 'Average Guy', a song in which Lou poked fun at himself, was so catchy that his engineer and co-producer, Sean Fullan, thought it could be a single. He was disappointed when Lou rejected the take in favour of a version in which Quine's guitar part wasn't as strong. 'Looking back, I think maybe Quine was just too fucking good [on that cut]. You know, guitar players get competitive with each other,' explains Fullan. 'Solo artists are always very much in control. It is their name, their project. They are narcissists, and they have to be that way to survive, unfortunately.'
As with his last album, Lou revealed himself in these new songs. His image, the eponymous and metaphorical blue mask, was ripped aside, showing him to be a highly neurotic, sometimes angry man. Emotions were still raw when he went to see Erin at her apartment in the Village after recording was finished. 'He came over late, scared, panicked. I'd never seen him like this. He feared he was losing his career, his life. He seemed on the verge of a total breakdown. He had started [taking] lithium a while before this, and now it wasn't working, and he was falling apart. He was ashamed that I was seeing him this way. Apologized. Which of course he did not have to do with me. We talked until 5 a.m.,' she says, referring to her diary of 6 December 1981. She wondered if Sylvia knew where Lou was in the middle of the night. 'He also said, "I've been self-medicating myself for years." Meaning both drugs and alcohol.' When Erin next saw him, on 20 December, he seemed better. 'Two weeks later, Lou returned, vastly improved. I questioned the bipolar diagnosis, the prescribed treatment, and the bad after-effects. I believed that the drugs had done this to him. He was intrigued, really listened to me – which was often not the case!'
_The Blue Mask_ has come to be regarded as one of Lou's best solo albums, but while it was better than what had come directly before it wasn't an unalloyed success. The words to a couple of songs, including 'Women', were weak. 'I love women/We all love women' was a trite lyric that sounded bogus coming from a man who evidently loved men as well, and was also known for his misogyny. Perversely, he chose this as the single to launch the album and, in the new MTV age, he made a film to promote the song. 'During the shoot of the video his hands were shaking so badly he went to restring a guitar and he couldn't. His hands were shaking incredibly,' says Fred Maher, who replaced Doane Perry on drums at this stage. 'He was in pretty bad shape.'
When Lou met the press to discuss the album he showed a reluctance to discuss his gay past. If journalists tried to tackle him on this subject, reasonably enough considering the comments he had made about his sexuality as recently as 1979 and the public nature of his relationship with Rachel, he became sullen. 'My past is _my_ past, and it's my business,' he told _The New York Times_. He had drawn a veil over that part of his life, a veil that would remain closed. He didn't refer to himself as gay any more (it would have been difficult to do so now that he was married), and he never, ever mentioned Rachel, though he may have had his former boyfriend in mind when he said that Sylvia had helped '[get] rid of certain things that were bad for me, certain people'.
As he talked up _The Blue Mask_ , he attempted to make further adjustments to his image. 'Some people like to think I'm just this black-leather-clad person in sunglasses. And there's certainly that side to me; I wouldn't want to deny my heritage. But while I have my share of street smarts, I'm not a rat from the streets by any means. I always wanted to be a writer, and I went to college to prepare myself for it.' He spoke of his interest in philosophy and literature, and his ambition to write lyrics of the highest literary merit. 'I don't hear anybody trying to do a _Lear_ , or a _Hamlet_ soliloquy, in rock 'n' roll. Who says you can't do that? People say rock 'n' roll is constricting, but you can do anything you want, any way you want. And my goal has been to make an album that would speak to people the way Shakespeare speaks to me, the way Joyce speaks to me. Something with that kind of power: something with _bite_ to it.' He also referred to wanting to write to the same standard as Dostoyevsky, another literary hero. While it is healthy for a writer to have ambitions, and there is no reason why rock 'n' roll lyrics shouldn't have artistic value, such statements were easily misconstrued as pretentious. As his sister says, 'Only my brother could compare himself to Shakespeare.'
_The Blue Mask_ came out ten years after Lou's first RCA album, in March 1982. He was forty now, looking back, as a reformed toper and born-again heterosexual, on an erratic solo career that had spanned the 1970s and left him exposed at the start of the 1980s to a new world with new sounds and new fashions. In Ronald Reagan's America, the public were listening to the catchy songs of Hall and Oates (who had supported Lou in concert only a few years earlier, but were now a much more popular act than he was) and British bands like the Human League. The packaging of the new album, reusing Mick Rock's iconic photograph from _Transformer_ , printed in blue monochrome, emphasized the impression of mature review and self-examination, while reminding record buyers of his greatest commercial success. The British music press remained unimpressed, _Sounds_ being particularly dismissive of 'Women' and the 'JFK adulation schtick'. There were better reviews in the USA, notably in _The New York Times_ and _Rolling Stone_ , which described _The Blue Mask_ as 'the least ironic album Reed's ever made'. A touch of irony may have improved 'Heavenly Arms', during which Lou declared his love for Sylvia by singing histrionically, 'Only a woman can love a man.' Nevertheless, on balance, this was a strong album. It didn't sell. When Sean Fullan tried to recover his producer's royalties some years later, he was informed that the record still hadn't earned out. 'Eric Kronfeld's pat answer was "The album never recouped its costs."' Part of the problem was that Lou's music was no longer fashionable. Apart from a brief period in the 1970s, between _Transformer_ in 1972 and _Coney Island Baby_ in 1976, he had in fact always operated outside of fashion, appealing to a discerning minority of listeners of an artistic, literary bent. 
But his uneven output in recent years alienated many of those fans, as it turned off critics, while younger audiences were diverted by a wide variety of fresh new acts. Also, he didn't tour to promote his record. 'Touring is just too hard on my body and my spirit,' he said at the time. 'I don't want to subject myself to that now.' He had been drunk or stoned for virtually every tour he had done in his solo career. So he stayed home in the country, playing his Sharpshooter pinball machine and riding his motorbikes.
Eight months later he returned to the RCA studio to make the _Legendary Hearts_ album, which would be released in a sleeve adorned with pictures of himself in motorcycle gear. Fernando Saunders was back for duty. 'I became the guy that he would look to for advice, and final decisions. I guess you could call it band leader, but it was a little bit more than that... My thing was keeping the band and Lou in peace.' Peacekeeping on _Legendary Hearts_ mostly concerned intervening between Lou and Robert Quine, who clashed over the guitar sound on two songs in particular, 'Rooftop Garden' and 'Betrayed', culminating in an argument in the studio on 29–30 November 1982. Afterwards, Quine was sidelined. 'Quine is not part of the mix now,' engineer Corky Stasiak noted in his diary on 8 December. Lou had his guitar erased from 'Betrayed'. When Quine received a tape of the album he was so angry that he smashed it up with a hammer.
One of the better new songs was 'The Last Shot', in which Lou addressed the issue of quitting drugs, giving a graphic description of a junkie's life: shooting up at the kitchen sink, splashing blood over the dishes. These were vivid images that certainly didn't glamorize drug use. Opinions vary as to whether Lou had sworn off the booze as well. Fernando Saunders says that Lou had 'pretty much' stopped drinking by 1982, but most reformed drinkers agree that total abstinence is the only solution. He seemed out of sorts in the studio as he struggled with the vocal on another song, 'Home of the Brave', in which he referred to Lincoln Swados's suicide attempt. Corky wondered what the problem was. '"I ask him if he is nervous. [He] takes offence and calls _me_ obsessive." Oh boy!' says Corky, referring again to his studio diary. Despite such problems, the engineer reports that Lou was relatively stable during the making of _Legendary Hearts_ , in comparison to the drug fiend he'd encountered on _Rock 'n' Roll Heart_ and the drunkard who fell asleep during _Growing Up in Public_. But the result was lacklustre. 'I didn't think it was a strong album.'
He returned tentatively to live work to promote _Legendary Hearts_ , playing the Bottom Line during a snowstorm in March 1983 and doing a gig at Studio 54. In the summer he undertook a short Italian tour at the invitation of RCA Italy, who wanted a live album ( _Live in Italy_ ). 'He had set up what I would call a luxury tour. It was basically an Italian vacation for him and Sylvia,' says Fred Maher, who played drums. A highlight was playing to fifteen thousand people at the Roman arena in Verona, the size of the venues demonstrating how popular Lou remained on the Continent. The set list included Velvet Underground songs such as 'Sister Ray' and 'Some Kinda Love' that he hadn't performed in years. The arrangements were faithful and muscular. 'When Lou asked Quine to be in his band, Quine had a couple of stipulations, one of which was that Lou had to start playing guitar again, because Quine loved Lou's playing in the Velvets,' explains Fred. 'And I think the other thing Bob insisted on was that when we did live shows we did Velvet songs.'
Although Fred Maher was close friends with Quine, he concedes that the guitarist had a difficult personality that didn't sit well with Lou at a time when he was trying to conquer his bad habits. 'Lou couldn't handle Bob's dark personality, because that was a [time when] Lou was trying to be positive and sober and forward looking. I think Quine just didn't want to know about that. It wasn't that Quine wanted him to have a drink, or get high – nothing like that – I think it was just a personality clash. Lou was pissed at him because he was always so negative. Bob was just negative, negative, negative, negative... You either learn to love it, or it pisses you off.' A problem arose in the studio when Quine came up with a guitar riff that caught Lou's attention. 'Lou says, "Wow, what's that? That's great!"' recalls Fernando. 'Quine shows Lou the guitar part. The next thing, we are in the studio and, in two seconds, Lou [is singing] "I Love You, Suzanne."' Unfortunately, Lou failed to give Quine a co-writing credit, which became an issue when the song was released as a single which enjoyed relative success. 'I don't think Lou would on purpose take something... but to him the lyric is the song. He's not really thinking about the contribution of the musicians... I think honestly Lou thought he wrote "Suzanne". But Robert Quine wrote that guitar part,' says Fernando, who identifies this as the principal reason for their growing estrangement. 'Quine was talking about this all the time. "He took my riff!"'
'I Love You, Suzanne' became the first track on the 1984 album, _New Sensations_ , without any credit for Quine, who wasn't invited to play on the album, unlike his buddy and bandmate Fred Maher, who says, 'I felt very weird about that, and spoke to Bob about it, and he said, "Go ahead, do it, I don't care. He's an asshole."' _New Sensations_ had the mainstream rock sound of the mid-1980s, embellished with synthesizers and horns. The title song was the highlight, with an engaging lyric in which Lou referred to his Christmas Eve arrest for buying speed (in 1973, not two years ago as he sang) as well as his new passion for motorbikes, describing a ride from Blairstown to the Delaware Water Gap on his GPZ motorcycle. 'I love that GPZ so much,' he sang, 'I could kiss her.' It was the best line on the album, reminding listeners that he still had a nice sense of humour.
There were other autobiographical references. He wrote about his love of playing pinball and video games at Steve Epstein's Broadway Arcade in 'Down at the Arcade', while his friendships with Martin Scorsese and actor-playwright Sam Shepard inspired 'Doin' The Things That We Want To'. Like most of the songs on _New Sensations_ , the sentiment was upbeat, though Shepard, who had known Lou since the 1960s, says that his friend was dissatisfied with his work. 'I sensed he was frustrated about not being able to nail what he [was working on]. He didn't have super confidence about what he was doing.' Lou was right to be doubtful. Despite its strengths, _New Sensations_ , like most of the albums he had been making since _Transformer_ , fell well short of perfection. Some of the writing was flat. His voice didn't sound strong, and the arrangements were less than scintillating. Nevertheless, _New Sensations_ reached fifty-six in the US charts, also charting in the UK. This was his best result since _Coney Island Baby._
He toured extensively to promote the album. Robert Quine rejoined the band for the shows, but their relationship deteriorated on the road. Part of the problem was that Lou was playing more guitar now, leaving less for Quine to do, while the guitarist was brooding on the injustice of 'I Love You, Suzanne', which he had to play live. He was also irritated by the ban on drink and drugs on tour (Lou had decreed that there would be no alcohol, even in the hotel minibars), mockingly referring to the _No_ Sensations tour. By the time they reached Australia, in December 1984, Quine had had enough. 'By the second concert in Australia I called my wife and said, "I'm certainly never playing with him again, ever."' Yet another successful musical partnership was thereby broken.
Lou's pecuniary situation improved in the mid-1980s. _New Sensations_ had been a commercial success, and the world tour was lucrative. The eight-year lawsuit with Dennis Katz was finally resolved in 1984, after numerous claims and counter-claims. Around the same time, he started to receive North American royalties on his back catalogue, including 'Walk on the Wild Side', after several years when this income went directly to Dunbar Music in accordance with the 'termination agreement' he made with RCA in 1976. 'For many years he didn't make money from "Wild Side",' says Fernando Saunders. '[Then] he finally got this huge amount of money.'
In addition, the Velvet Underground's business affairs were reorganized to the advantage of all the former members after John Cale told his lawyer that they weren't receiving any royalties. Part of the reason was that the first three Velvet Underground albums were no longer in print in the USA, amazingly, and what money had accrued from international sales – from Verve, MGM and Atlantic Records – was not flowing through to the band members. The Velvet Underground Partnership was formed to make it easier for Atlantic and Polygram (which had absorbed Verve and MGM) to distribute royalties on an equitable percentage to Lou, John, Sterling and Moe, as well as Nico and Doug Yule. 'It's based on the albums they participated [in], and the sales at that time of those albums,' explains Cale's lawyer Chris Whent, who set up the partnership and thereby came to represent the whole band. 'It's relatively complicated. The principal beneficiaries were in fact Lou, Moe, Sterling and John, although John's percentage is a little less because he didn't participate in the latter albums.'
In order to form the partnership, John and Lou had to talk to each other again, which was awkward. 'Nobody actually came to blows, but the relationship between John and Lou was always a fraught one,' admits Whent. 'I was often a sort of buffer between them.' When the agreement was finalized, Polygram released the money it had been holding and decided to reissue the first three Velvet Underground albums, plus an album of unreleased material. This new record, _VU_ , including terrific archive songs such as 'Foggy Notion' and 'Temptation Inside Your Heart' which the band had recorded towards the end of their MGM contract, was a surprise success, charting on both sides of the Atlantic in 1985. A second album of unreleased material, the less essential _Another View_ , followed a year later. Thus began a revival of interest in the band, and better days for its former members.
Unlike the others, Lou had been making money from the VU songs for years. 'He had been collecting on publishing, and of course he re-recorded a number of these songs,' explains Chris Whent. Now all the Velvets began to receive regular, six-monthly royalty cheques. While this was not particularly significant to Lou, it was to John, Moe and Sterling. Since leaving the band, John had pursued a solo career to critical acclaim, and had enjoyed success as a producer, but he had not found a mainstream audience for his esoteric music. Like Lou, he had also struggled with drink and drug issues. At least he was still in the music business. After working for years as a teaching assistant at the University of Texas, in Austin, Sterling had become a tug-boat skipper in Houston. 'Sterling almost right up to his death had to work on that tug boat to keep afloat... He was really concerned about money,' says Martha Morrison, confirming that the Velvet Underground income that started to flow through was modest, but welcome. 'Nothing to change your life.' Moe was now a single mother bringing up five children in the remote country town of Douglas, Georgia. She struggled to find work in rural Georgia initially, despite replying to almost every ad in the local newspaper. She drew the line at one vacancy. 'I said, "What the hell's a chicken catcher?" You know those big chicken houses? When it's time to take [the chickens] to the slaughter house, they have to catch them – that's a chicken catcher.' She eventually found work in administration at a Walmart distribution centre, but the pay was dreadful. One year her Christmas bonus was five dollars. In such circumstances, Velvet Underground royalties were a lifeline.
After years when he had relatively little wealth, Lou was now quite well off, his earnings boosted by endorsement deals for American Express and Honda. When his landlord increased his rent at Christopher Street, Lou decided to move his city base. He bought an apartment in a high-rise on the Upper West Side, only the second property he had ever bought, after the house in Blairstown, which he still owned. There was some jealousy among his bohemian friends that he had finally attained a bourgeois level of comfort. 'The apartment in New York, which seemed grand to me at the time, was actually just a two-bedroom on the Upper West Side, in a kind of newish building – beige wall-to-wall carpeting, not at all interesting, furniture you could get at Jensen-Lewis,' says Tama Janowitz, author of the book _Slaves of New York_ , who was dating Lou's friend from Factory days, Ronnie Cutrone, and was in contact with other Warholians. 'He was getting enough to live better than the "rest of us" – Ondine and René Ricard, people of his generation... There was quite a lot of hostility towards him, because Lou was [now rich].'
One weekend, Tama and Ronnie went to stay with the Reeds in the country. Part of the reason for the invitation was that Lou and Ronnie were fellow members of Alcoholics Anonymous. They were in a group called Completely Sober which catered to show-business types in New York. AA was fashionable in the 1980s. 'In those days in New York everybody ended up in some meeting or another, whether AA or NA [Narcotics Anonymous],' says Tama. 'It was as much social as anything else. "Oh, I've been sober for three days... Oops, I slipped!" "Oh, you are going to do fine, call your sponsor. Go out for coffee." Believe me, nobody was that bothered whether they were sober for three days, or three years.' Tama says that, during the weekend in the country, Sylvia mentioned that she wanted to start a family with Lou, but he wasn't interested. This became an issue in the marriage. Tama didn't warm to Sylvia personally. 'She seemed obsessed with him... [It felt like] she just never shut up about him... like a groupie. "My husband, my husband..." Nobody is responding. "My husband, Lou Reed, my husband _Lou Reed_." Come on, lady, the rest of us exist, too! She had lovely, lovely qualities, but I don't give a damn that you are married to Lou Reed.'
Another friend who received an invitation to Blairstown was the South American singer Rubén Blades. Lou and Rubén got to know each other when they lent their voices to the anti-apartheid song 'Sun City' in 1985, protesting against the situation in South Africa, one of a series of 1980s projects whereby rock stars worked together on an issue. It was a rare example of Lou showing any interest in politics. 'I think that Lou was always disgusted at the idea of becoming a tool of political positions. I don't think he was ever relaxed with that. So he would support something from his position, but not become involved,' says Rubén, who was, by contrast, highly political, to the extent that he ultimately served in the Panamanian government. He believes that Lou saw apartheid as a special case. 'It can be applied to people who are gay, or of a different religion. It was everything Lou despised.' The men consolidated their friendship during an all-star 1986 charity tour to promote the causes of Amnesty International. Lou admired Rubén's voice ('He used to say that I had a radio voice'), they made each other laugh, and they discovered that they could work together.
Rubén added vocals to Lou's 1986 album, _Mistrial_ , the sound of which was partly determined by Fernando Saunders, who co-produced the record and used drum machines to give the songs a contemporary setting in the age of Madonna and Prince. 'The _Mistrial_ album was just to have fun. It was not to impress Lou Reed fans, or critics, or people like that. Sometimes you've got to do a detour to get to the next level,' says Fernando, defending an album that, in the estimation of many people, was a contender for Lou's worst record. 'It definitely was a more poppy, well-recorded, produced record. Lou Reed fans and critics wouldn't give it recognition, [but] it is the record that got him back in big theatres and the big tours... It made new Lou Reed fans.'
Although the sound was poppy, the lyrics addressed dark subjects which Lou had returned to repeatedly throughout his career, including street life and the abuse of women. 'Don't Hurt a Woman' was a disturbing new take on this subject, a song in which Lou apologized in character for hitting his partner, explaining that he sometimes lost his temper, but would 'try to remember/Don't hurt a woman.' This sounded like an apology for domestic violence. The contemporary production of two other songs from _Mistrial_ , released as singles, earned Lou regular airplay in the latter half of 1986, with the result that he played Radio City Music Hall in October. With a capacity audience of six thousand, this was the biggest show he had ever done in New York City. He was so proud that he invited his sister, parents and friends. Later that month he performed on _Saturday Night Live_. Bowie was at the party afterwards, which may well have been the first time they had been in the same room since Lou slapped him in 1979. 'I had no idea they had a hostile relationship,' says friend Steve Epstein, who was present and recalls a bad atmosphere between the stars. 'They weren't talking to each other. They were looking at each other... I have never seen Lou that tense. It was very strange.'
This brief spell of commercial success was capped when Lou recorded a cover of the Sam and Dave classic, 'Soul Man', with Sam Moore, for a movie soundtrack. The contrast between Moore's tuneful tenor and Lou's laconic vocals, with a hot band behind them, and a jokey MTV video to promote the song, made 'Soul Man' a minor UK hit in January 1987.
The previous December, Lou and Sylvia had flown to Los Angeles to attend Rubén Blades's wedding. In the new year, Rubén and his wife, Lisa, accepted an invitation to stay with the Reeds in New Jersey. Snow was on the ground when they arrived in Blairstown. Lou took his friend out on his snowmobile to show him the property. In the evening they worked on songs in his music room. 'I wanted to write a song about family, and we started to talk about family and we both started getting upset,' says Rubén of one of these evening sessions. 'I think we were touching areas that were sensitive, or so private, that they made us angry, and the conversation was getting very confrontational. This was in the night. We had been drinking also [further evidence that Lou had not quit alcohol]. When I saw him getting into this place where his eyes went somewhere else, his whole attitude was different, I could sense it and I thought, "I'm gonna leave it alone." I simply became quiet, and he would be quiet, and we both knew we are not going to talk about it any more tonight. And I went to my bedroom, and left him there in that room with all his guitars and amplifiers.'
Later in the night Rubén was woken by loud guitar music. 'All of a sudden, at like three o'clock in the morning, I heard [Lou playing] this distorted, cranked-up-to-hell beautiful melody and I got up – I was lying in bed with my wife – I got up, grabbed my notebook, came and sat [on the steps] outside the [music] room. I didn't go in. And he played the same thing over and over, and there was an incredible intense anger, but also an intense love in the [music]. An electric guitar distorted to hell... And it moved me so much that I sat down there and wrote the lyrics to "The Calm Before the Storm" [co-written with Lou for Blades's 1988 album, _Nothing but the Truth_ ], about his mother and his father, and my mother and my father... I am moved when I remember that night... it was anger and love at the same time.'
Tama Janowitz had a similarly intense conversation with Lou about family during her visit to Blairstown. 'He would talk to me about going through the electro-shock stuff. [Soft voice]: "My mom sent me in for electro-shock and then took care of me afterwards." That was pretty weird... It was obviously something seminal and crucial in his life.' Here was the vulnerable, damaged Lewis Alan Reed, still struggling to make sense of his life as he moved into middle age.
## XII
## New Inspiration
### 1987–92
ANDY WARHOL'S DEATH was as sudden as it was unexpected. The artist was admitted to New York Hospital on Friday, 20 February 1987 to have his gall bladder removed. The operation was carried out successfully on Saturday, but he died of heart failure on Sunday, aged fifty-eight. The estate sued the hospital, claiming negligence (the case was settled out of court), while Lou was among those friends who expressed their surprise and anger in private. 'He was livid that his friend could die in hospital [that way],' says Steve Epstein. 'He was very upset for weeks.'
The strength of Lou's reaction was surely due in part to the guilt he felt about his recent estrangement from his mentor. They had managed to stay friends, more or less, until Lou attempted to get clean and sober in the early 1980s. Although Andy never encouraged people to use drugs, Lou distanced himself at that time from everybody he associated with his old life, including Andy, whom he didn't invite to his wedding in 1980. Shortly afterwards, he and Sylvia were in a cab with the artist when Lou asked the driver to slow down. Andy remarked that he would never have said that in the old days, a harmless enough comment that offended his hyper-sensitive friend. 'He was being evil, so I never spoke to him again,' said Lou, who snubbed Andy at the MTV Awards in 1984. 'Lou sat in my row but never even looked over. I don't understand Lou,' Andy told Pat Hackett, who wrote up his diary entries. His office was trying to win commissions to produce pop videos at the time, and he felt that Lou should be giving them his business. 'I hate Lou Reed more and more, I really do, because he's not giving us any video work,' he complained to Pat on 20 September 1986, which was the last time Lou was mentioned in the diary. When Andy said he 'hated' somebody he was not to be taken literally – Lou said Andy spoke like a child at such times – but the rift was real. 'There were some things that, for personal kinds of reasons, I kept him at a discreet arm's length,' he said after the artist's death, at which point he finally felt able to express his debt to Warhol.
Two thousand people attended the artist's memorial at St Patrick's Cathedral on April Fool's Day, 1987. There were limousines around the block, and a crowd of photographers to capture the arrival of the likes of Richard Gere, Debbie Harry, David Hockney, Philip Johnson, Liza Minnelli, Yoko Ono and Tom Wolfe, plus survivors of the thrilling nightmare that had been the Silver Factory. Brigid Berlin, Gerard Malanga and Paul Morrissey were present, as were more peripheral figures such as Holly Woodlawn. As Lou looked around at the faces, he began to think of an elegiac new song, 'Dime Store Mystery'. It would form part of the first of a series of three albums recorded in the wake of Andy's death, all of which touched on loss.
After the service, mourners gathered at the Century Plaza Hotel, in a room specially decorated with silver walls and covers from Warhol's _Interview_ magazine. Lou was talking with Billy Name, who was back in circulation after leaving the scene for several years, during which time he travelled across the country to live in San Francisco, cutting himself off from his old friends at the Factory, when John Cale walked past. 'I was standing there talking with Lou and I could see John coming by, and he was just going to walk past us and I grabbed him,' recalls Billy.
Lou and Sylvia at the Century Plaza Hotel, following Andy Warhol's memorial.
'John, look, I have Lou here,' he said. 'You've got to say hello.'
John stopped to shake hands. 'And then they started talking together, and then they started talking about Andy.' Julian Schnabel joined the discussion, encouraging John and Lou to collaborate on a requiem for Warhol. Others agreed that this would be a fine idea. A few days later, they met to discuss what evolved, over the next two years, into _Songs for Drella_. Some people were surprised at this change in Lou's attitude to his old mentor. Tama Janowitz notes that she had tried to get Lou to have dinner with Andy in recent years, only to be rebuffed. 'Then Andy dies and Lou is out there with _Songs for Drella_ , capitalizing on his friendship with his beloved Andy. "F—you!"' she exclaims. 'When Andy was alive, you wouldn't even come to dinner with him... I don't know why Lou hated Andy, but he hated Andy. He was nasty to him, [and] Drella was a completely derogatory name... Nobody would ever say that in front of him. It was cruel.' Nevertheless, _Songs for Drella_ would be the best work John and Lou had done, together or apart, since the Velvet Underground.
Meanwhile, Lou pursued his solo career, adding new members to his band, the composition of which changed frequently during the latter part of his career. One of the most notable and long-serving new recruits was Mike Rathke, a twenty-four-year-old music student whose surname, and possibly his rodent-like dentition, earned him the nickname Rat. He met Lou in 1984 when he was dating Sylvia's sister, Julie Morales, whom Mike married and divorced in the 1990s, making him briefly Lou's brother-in-law. He became his lead guitarist and right-hand man in the spring of 1987, a position he held for twenty-two years. Like other musicians who worked for Lou for a long time, Mike had a patient, pliant nature. 'I've been accused of being a yes-man, and this and that, but that is hardly true,' he says in his defence. 'We had our share of rubs, but it always turned the corner.' Their first gig was a club show in Chicago in May 1987. Fernando Saunders had temporarily left Lou's employ following _Mistrial_ , which Lou now considered a mistake, so a new bass player joined them in Chicago: a young Israeli named Yossi Fine whose playing was singled out for praise in a local review of the show. 'That's not good, Yossi,' Lou growled at breakfast the next day, showing him the newspaper. He didn't want anybody upstaging him.
The Chicago gig was a warm-up for a series of major European events, including stadium concerts supporting U2, now one of the biggest bands in the world. Playing Wembley Stadium in London in front of 72,000 people was a challenge for the less experienced band members. Mike Rathke was so nervous that he almost threw up. 'None of us had played for such a big audience,' says Yossi, who recalls that they were pelted with plastic bottles. Bono, who was a fan, told Lou that the fact people were throwing things at him meant that they actually _liked_ him, insisting that he go back and perform 'Walk on the Wild Side' as an encore. Bono had sung a snatch of the song as part of U2's Live Aid set at Wembley two years earlier, and it had gone down well with the audience, who sang along with the doo da doos. Lou took his advice, also agreeing to his suggestion that he simplify some of his songs for audiences who didn't know his catalogue well. 'I'll give you an example,' he explained. 'Bono loves "Street Hassle". And after we did it one night he told me, "You know, if you sang "sha-la-la-la" more, the audience would sing along with it. That's the fun part of the song; I love it when you sing that. Stay with it – they'll love it." Next time out I did that – changed the words around and did the "sha-la-la-la" more. Sure enough, they really liked it. Funnily enough, I liked it, too.'
Bono's encouraging words aside, many of the people Lou was playing to on the U2 tour had little idea who he was. 'The people who went to see U2 shows, very few came to see Lou Reed,' notes Yossi. 'He was not an MTV star. He was not that famous... it was all about U2 [and] Lou felt he was passé at that point.' He certainly wasn't selling many records. When his second contract with RCA expired, it wasn't renewed. Then Lou fell out with his manager, Eric Kronfeld. 'He didn't like Eric,' says Yossi. 'He called him "super pig".' Friends say the feeling was mutual. 'Nobody could really put up with him for very long. Eric Kronfeld _hated_ him,' says Lou's old friend Allan Hyman, who had become a successful attorney and happened to know Kronfeld in business. 'He was never satisfied with anybody who represented him. He was always sure that everybody was stealing money from him.' After they parted company, Lou began to manage himself, with the help of his wife, who ran the family business from home initially. Unlike a normal manager, Sylvia Reed had the great advantage of always being available to him.
Luckily, he still had admirers in the industry, people like Bill Bentley, who ran publicity at Seymour Stein's Sire label, which now offered him a contract. 'My recollection is that Bill Bentley had talked Seymour into it. Bill really wanted him badly, and Seymour went along with it. Now Seymour may also have been excited, but Bill was really behind it,' says Howie Klein, then general manager of Sire, a boutique label that operated under the Warner Brothers umbrella. Lou wasn't an expensive signing for the company, which had an impressive roster of artists, including Madonna and Talking Heads. He was paid modest advances in accordance with his sales. 'It costs something else,' chuckles Howie, who says that Lou was considered 'a very difficult artist... he was never difficult with me all the time I knew him. He was a nice sweet, wonderful guy, and there was not so much as a ripple, but everyone I knew, journalists and people in the industry, said, "He's difficult." That was the word people used about Lou Reed, and I never experienced it – ever.' This was because Howie left Lou to his own devices, starting with his first Sire album.
As with _The Blue Mask_ , Lou spent a lot of time preparing for _New York_ , writing and rewriting the lyrics on his latest toy, a personal computer (and flying into a panic one day when he thought he'd accidentally erased everything). Despite the album title, and the predominantly metropolitan concerns of the songs, much of the preparation was done at his country home. Mike and Yossi came out to Blairstown to work on the arrangements, which proved to be a grind. 'He would play the same two chords from ten o'clock in the morning till eight o'clock in the evening, E and A, E and A,' sighs Yossi. 'I did not have that type of patience after a while.' When Yossi encouraged Lou to try something new, suggesting that he play bass on the record in the style of the hip heavy-metal band Metallica, Lou laid down the law. 'Yossi, you are going to play with whatever bass I tell you to play, and whatever amp I tell you to work on,' Lou lectured. 'You think rock 'n' roll is Metallica, but let me tell you something – Metallica is shit.' Yossi was reminded of this when Lou recorded with Metallica in 2011.
In the short time he had been in the band, Yossi had come to see that Lou had a pattern of behaviour with employees. After a honeymoon period, during which he was super friendly, he became increasingly controlling, before taking offence and turning nasty. A precious few, such as Rat and Fernando, were able to ride his moods for a time, but Lou fell out with almost everybody in the end. 'The thing about Lou was that at [a certain] point Lou did not like one person in particular. Every time it was somebody else's turn,' says Yossi. 'One time it was his [guitar] tech, another time it was the sound guy. He would pick someone. He always had an enemy in mind.' When Yossi started to feel that he was next on Lou's shit list he became too depressed to work. 'He called me, "Why aren't you at rehearsal?" I just couldn't tell him [so I said], "Look, man, I'm sick, I can't." It never happened to me [before] or since. That's completely out of my character. But my body told me, this is not the right thing for you to do. I never went back. He was like, "Well, you're going to regret it." Whatever. I listened to my body. He could drain your energy.'
So devalued was Lou's currency by 1988, as an unfashionable and erratic artist who was difficult to work with, that he struggled to find a producer for _New York_. Lou eventually asked his drummer Fred Maher for advice. 'I started saying the usual names, the big names of that time... and absolutely nobody was interested in working with him,' says Fred, who suggested that he might produce the record himself.
'What the fuck do you know about recording guitars? All you know is synth-pop crap,' Lou said rudely, referring to the fact that Fred had recently enjoyed transatlantic success as a member of Scritti Politti.
Fred persuaded Lou to let him record one song as a test. They chose 'Romeo Had Juliette', which, like several of his new songs, highlighted social problems in New York. Lou was particularly pleased with the opening lines:
Caught between the twisted stars the plotted lines the faulty map
That brought Columbus to New York...
While there was poetry in these images, there was also poetic licence; Columbus never went near what is now New York. Nevertheless, the words served as a fitting introduction to an album about the city. Taking inspiration from Leonard Cohen's then recent album _I'm Your Man_ , Fred recorded the song with Lou's vocal to the fore and persuaded him to speak the lyric. 'That's the key for Lou to be Lou. Lou doesn't really sing. I had been through two recording sessions with him – _Legendary Hearts_ and _New Sensations_ – when the producers and everybody was [saying], "Lou, you've got to sing, man. Sing that song." On those records he wasn't being Lou.' In his youth, Lou had more voice. 'There are Velvets tracks where he is singing, but some vocalists completely lose that voice as they get older, as I believe Lou did. Lou lost it. He could not sing like he could in the Velvets. That voice had gone... So when we did "Romeo Had Juliette" he went to the speaking voice.' Lou was delighted with the result. 'I sound like Lou Reed for the first time in years,' he said, giving Fred the go-ahead to produce the whole album.
'Romeo Had Juliette' came easily. 'He pumped that out like he was taking dictation,' says Mike Rathke. In contrast, he laboured over other tracks, including 'Dirty Boulevard', though the result was ultimately just as successful. The lyric told the story of an immigrant to New York in mordant language, referring to the 'Statue of Bigotry', a neat pun, and an exploitative landlord who laughed until he wet himself at the rents he was able to charge. The authentic Lou Reed was heard in these epigrammatic lines. It had been a long time since he had sounded so good. 'He was so happy to be in the studio. And his wit was sharp – a Lou that I had previously not known,' says Fred, noting that Lou was also relaxed enough to joke about his alcoholism, exclaiming, 'Hmmm, vodka!' as he glugged from a bottle of mineral water. When he came to record 'Dirty Boulevard', Lou invited another famous New Yorker to add backing vocals. Dion DiMucci had been a star when Lou was a student, with hits like 'The Wanderer'. Both were reformed drug users, and survivors in a fickle industry. But it was the Big Apple that they had in common above all else. 'New York is different,' says Dion, who had known Lou since they met backstage at the Bottom Line in the 1970s. 'By osmosis, you download it into your spirit. That was his experience [too]. He wasn't talking about roses and lilies. He was talking about the streets and garbage cans and rooftops and what was happening in the Village. The subculture.'
There was one bucolic song, 'Last Great American Whale', an attack on thoughtless people who dumped refuse on Lou's property in New Jersey. Written as the Reagan presidency passed to his vice-president, George H. W. Bush, it was also one of a number of songs with an overt political dimension, composed from a liberal, anti-Republican point of view, which was a departure for Lou. Some of these songs worked better than others. Topical references to the controversial Austrian President Kurt Waldheim ('Good Evening Mr Waldheim') and other prominent public figures on the right dated as quickly as their names fell out of the headlines. 'Hold On', inspired by civil disturbances in New York in the summer of 1988, also soon lost its relevance. Far better was 'Halloween Parade', in which Lou observed the annual Halloween Parade in Greenwich Village and reflected on friends who had disappeared from the city's gay scene, some of whom had died of AIDS. Here was a timely and meaningful song about a topic of lasting significance:
In the back of my mind I was afraid it might be true
In the back of my mind I was afraid that they meant you
Lou probably had several individuals in mind when he sang these tender words. Was Rachel among them? One day around this time, Jeffrey Ross, who'd played guitar in the band in the 1970s, heard someone call his name and turned around. 'It's Rachel, who looks pretty much the same except very gaunt... a very gaunt transvestite, and embarrassingly she told me she was [living] under the [West Side] Highway and homeless at the time. We chatted for a minute. We had a hug. I headed off, being very sad.' Others who also saw Rachel in a poor state on the streets believe that he died shortly thereafter. He would have been in his early forties, at most. Like so much about his life, it is impossible to be sure.
Nico was another former lover who came to a bad end at this time. Lou said nothing about her passing in public, as he said nothing about Rachel, though he undoubtedly knew about their fates. For years, Nico had been a heroin addict, playing clubs to support her habit, a shadow of her once-glamorous self, and beyond the pale as far as fashionable rock society was concerned (being as snobbish as any sort of high society). Latterly, she had tried to conquer her habit with methadone. Nico was on vacation in Ibiza in July 1988 when, on a blazing-hot day, she decided to cycle into town to buy marijuana. She was found by a passing taxi driver slumped by the side of the road, dying the following day in a local hospital of a cerebral haemorrhage, aged forty-nine.
It was the death of Andy Warhol, above all, that remained the biggest loss for Lou. Andy had been Lou's second and most influential mentor, a towering figure in his career whose reputation overshadowed his own and grew posthumously. As touched upon, he was the inspiration for 'Dime Store Mystery', the last song on _New York_ , in which Lou used alliteration and assonance to excellent effect. This was a carefully crafted work of poetry, showing Lou not only to have found new inspiration but striving to express something meaningful, rather than just being out to shock, as he had been for too long.
I was sitting drumming, thinking, thumping, pondering
The mysteries of life
Outside the city shrieking screaming whispering
The mysteries of life.
There's a funeral tomorrow at St Patrick's
The bells will ring for you...
Moreover, Andy's death seemed to inspire Lou to work harder, after years of uneven and often disappointing releases, as the artist had always urged him to. The _New York_ album was the beginning of a sustained resurgence in his songwriting. Critics noted the difference and greeted _New York_ with warm reviews in January 1989. Writing in _The Times_ , Bryan Appleyard described the record as 'a complete return to form'. There was some criticism. 'It reads worse than it sounds,' wrote Tom Carson in the _Village Voice_ , highlighting the weaker lyrics. While not perfect, _New York_ was a great deal better than one had come to expect from Lou, who was rewarded with his first _Rolling Stone_ cover. He admitted in the interview to making a mess of much of his solo career. 'Most of the major mistakes were in public, and I put them on record to boot.' He also acknowledged that he was a temperamental artist, who had been 'really difficult in the past', describing himself as a cult figure who didn't sell many records. But this time he got lucky. _New York_ sold better than any of his albums since _Sally Can't Dance_ , reaching number fourteen in the UK charts, forty in the US. Lou's champion at Sire, Bill Bentley, got a gold disc to hang on his wall.
One of the many interesting aspects of the record was that Moe Tucker, who had recently started to record and tour on a small scale in her own right, played percussion on two tracks. John Cale was also invited to participate. 'I don't know what he was going to play – probably viola,' says Fred Maher, who, in addition to producing the album, played drums on all the songs save those that Moe was on. 'So Lou had reached out to Cale, and Cale had said yes. Cale called the studio one day, asked for me, I said, "Hi, John, it's nice to meet you on the phone... We are thinking about this date, and we've got Moe coming in on so and so date." He's like, "What? Is this some kind of fucking Velvet Underground reunion?" "I... er... no. I don't know." Apparently, that pissed him off. So he never came in, and he never did play on the record, which was unfortunate. But I guess Lou never told him Moe was playing on the record.' Here was an insight into how touchy Lou and John still were around each other, even at a time when they were collaborating on _Songs for Drella_. 'They didn't see eye to eye all the time, but God, when they started playing together it was amazing,' says Mike Rathke, who assisted Lou during _Drella_ (and, like many friends interviewed for this book, continued to talk about him in the present tense in the months after he died). 'It's funny that they don't get along, because they understand each other on this [musical] level. They understand, they can anticipate, they can react in ways most musicians can't... You'd think they'd be really good friends. But there's the other side of music – the business.' And Lou had to be the boss. 'He wants to be in control. For sure. Period. Last word.'
Lou and John first performed their requiem as a duo at the Brooklyn Academy of Music (BAM) over four nights in the winter of 1989, with images from Andy's career projected on to a screen behind them. A youthful-looking John sat at a grand piano. Wearing hexagonal spectacles, which lent him the appearance of 'an intimidatingly intelligent monkey', as the music writer Mat Snow observed nicely, Lou sat opposite at a music stand, reading his lyrics as he played electric guitar, stumbling over his words at first, which made John frown. They began with 'Small Town', the first of fourteen songs that told Andy's story in sequence, from his childhood in Pittsburgh, from the artist's point of view. The result was a sublime biography in words and music. If _New York_ was Lou's finest album since _Transformer_ , _Songs for Drella_ was his best work since the Velvet Underground, and it is surely no coincidence that he had to collaborate with Cale to reach that standard again. He was usually at his best when he worked with a gifted partner. As ever, Lou wrote the lyrics, but he allowed Cale to be part of the editing process, which evidently made a big difference. 'For him to let me be a part of going through all the lyrics and correcting was very difficult,' Cale said. 'We sat there and bandied them around. It's difficult for him to collaborate on that level. It's difficult for him to collaborate – period – and he admits it.'
The highlight of the requiem was 'A Dream', an eerie prose elegy written by Lou and narrated by John that evoked Andy's dream thoughts towards the end of his life. It was a melancholy piece, giving insight into what had been a lonely private life. Billy Name, who was mentioned in the song, thought it 'a masterpiece, just terrific'. There were also critical references to John and Lou, based on Andy's recently published and much publicized diaries. To Lou's credit, he included Andy's complaints about how he had snubbed him, and his final comment about why he hated him. It was as if Warhol himself were speaking.
Moe Tucker joined John and Lou on stage at BAM on their last night for an encore of 'Pale Blue Eyes', capping one of the triumphs of Lou's career. Yet he and John were still not reconciled. 'John loved Lou, even in the worst of times, because of what they went through together, and he respected his talent, of course. And I'm sure that Lou had great feeling for John. They just could not get along,' says Moe. '"Let's do _Drella_!" The next thing you know they wanted to kill each other.'
Sire's recording of _Songs for Drella_ was a masterwork, one of the half-dozen or so essential albums of Lou's career. Unfortunately, that didn't translate into sales. The album sold respectably in the United Kingdom but didn't chart in the USA. It was perhaps too esoteric for the mainstream American public. 'It wasn't a big album,' says Howie Klein. 'It didn't matter to me. It mattered because I wanted Lou to be happy and successful. That album was a masterpiece. I loved [it] so much. On a personal level, I loved listening to it. As a record company executive, I loved the fact that my label put out this really wonderful piece of art. It wasn't a failure. It didn't lose money, but it didn't sell the numbers in the United States that we needed for it to be a big runaway hit. So there were people who looked at it as a waste of time, but I didn't care. I loved it.'
Each year, Lou spent part of the summer working in Europe, where momentous changes were taking place as the Soviet Union crumbled and its client states overturned their tyrannical leaders. Poland was first to break free, in 1989. Then the Berlin Wall fell in November 1989, and the Romanian dictator Ceauşescu was toppled in December. In what was termed the Velvet Revolution, because it was peaceful (not after the Velvet Underground, as is sometimes claimed by fans of the band), totalitarian rule was also overthrown in Czechoslovakia, where the playwright and activist Václav Havel, a founder member of Charter 77, became president. Havel was an admirer of Lou's music. On a visit to New York in the 1960s, he bought _White Light/White Heat_ and smuggled it back into Czechoslovakia, where Western pop music was copied and distributed illicitly. In April 1990, as the democratically elected president of the free Czech and Slovak Federal Republic, Havel agreed to meet Lou. It was the start of a remarkable new friendship.
The initial meeting at Prague Castle was brief. The President was a busy man, but he wanted to tell Lou how important his music had been to himself and his dissident friends in the Soviet era. He explained that when a Czech band, the Plastic People of the Universe, played music in the style of the Velvet Underground in the 1970s, the musicians were arrested. Havel helped rally international support to free them. 'And it seemed to us that this community that originated in this way shouldn't just dissolve after this but should go on in some more stable form, and that's how the Charter 77 human rights movement originated,' he explained. He asked Lou if he would play for his friends that evening in a local club. Lou was initially reluctant, saying he was 'a very private person' and that playing at short notice would make him nervous. But he couldn't refuse a head of state, moreover a man of such charm and integrity.
That evening he got on stage with veterans of the Plastic People of the Universe to perform Velvet Underground songs. 'Any song I called they knew. It was as if Moe, John and Sterl were right there behind me, and it was a glorious feeling.' 'Did you enjoy yourself?' the President asked afterwards. Lou said he had. '[Havel] then introduced me to an astonishing array of people, all dissidents, all of whom had been jailed. Some had been jailed for playing my music,' Lou reported in an article he wrote about his Czech adventure, beginning to realize that the songs he had created in semi-obscurity in the 1960s had affected people in far-flung places he didn't know, and in circumstances he had never imagined. Finally, Havel gave him a gift. 'These are your lyrics, hand-printed and translated into Czechoslovakian,' he explained, handing him a book inscribed with tiny handwriting. 'There are only two hundred of them. They were very dangerous to have. People went to jail, and now you have one.' It was a humbling experience.
Two months later, Lou was back in Europe for the Andy Warhol Exposition, a celebration of the artist's life and work staged by the Cartier Foundation at Jouy-en-Josas, near Paris. Cartier had flown some of Andy's closest associates over from the USA for the event, including Lou and John Cale, who had agreed to perform together. Moe and Sterling were also present. All four former Velvets attended a luncheon on 15 June 1990, with family and friends, everybody dressed up for what was a rare and special occasion. It was an enjoyable lunch, with issues. Sterling hadn't been on speaking terms with Lou for years. 'Sterling had this bug up his ass that Lou was a crook. He kept saying, "I don't want to see Lou, he's a crook. He took my money from the records and he never gave me anything,"' says Billy Name, who acted as mediator between the men. Still, tensions remained. When the Velvets posed for a group photo, Moe sat between Lou and Sterling to keep the peace. 'It was uncomfortable,' she says. The band's lawyer Chris Whent, also present, identifies another raw issue, dating back to 1968: 'Sterling and Moe always bore with them the [guilt] that when Lou said, "Either John goes or I go," they went with Lou. There was always a sense of betrayal there.'
That afternoon, John and Lou performed a short set of _Drella_ songs on an open-air stage, Lou wearing black leather and shades. He had recently let his hair grow long at the back in the unfortunate mullet style. After playing five songs to a small but appreciative audience, he and John exchanged a glance, and John went backstage to ask Moe if she had her drum mallets ready, and Sterling to strap on a guitar. That was when the others knew for sure that they were all going to play together again. Lou smoked a cigarette while he waited, telling the audience laconically that they had 'a little surprise'.
Playing together for the first time in twenty-two years, the classic line-up of the band gave a rusty rendition of 'Heroin' in Jouy-en-Josas. Moe and Sterling were evidently nervous, but they all enjoyed themselves. 'It was so nice to look over and see them,' says Moe. Martha was astonished to behold her husband onstage with Lou again. 'I had to sit down, I was so surprised.' Afterwards, Sterling seemed to forget his animosity towards Lou, and put an arm around his old band mate. Suddenly everybody relaxed. The Velvets spent the next few days together in Paris, seeing the sights and going out for dinner. 'Sylvia said to me, "I've never seen Lou in such a great mood." I said, "I have to say the same [about Sterling],"' says Martha. 'Our men had made up and dropped that negativ[ity], so everybody was relieved.'
The Cartier event seemed to bode well for further collaborations, and indeed John and Lou performed _Songs for Drella_ in Tokyo that summer. Then Lou refused to tour with the show. 'The two of us could have done a world tour and made a decent amount of money, but no,' John complained. 'Because of Lou, we never performed _Drella_ again.'
Lou had his own career to consider, and a new project. In recent years, he had become friendly with Doc Pomus, who wrote the lyrics for many classic songs, including 'Save the Last Dance for Me' and 'Lonely Avenue', as well as twenty numbers recorded by Elvis Presley. Crippled by polio, using crutches and later a wheelchair to get around, Doc was a popular figure on the New York music scene until his death from lung cancer in March 1991. Lou was a fan. 'I grew up listening to so many songs written by Doc Pomus, so it was a pleasure to know him. He was a great spirit,' he explained in a 1998 television documentary, during which he became uncharacteristically emotional. 'Doc died from cancer. I was interested in some kind of magic, some kind of transcendence, that would take one away from everything. And I wrote [the album] _Magic and Loss_ because there had been loss to temper any magic, and... there didn't seem to be any...' Lou struggled to express and compose himself '... anything that I knew of that helped you with this particular subject matter.'
The first song written for the album was 'What's Good'. As he progressed with his meditation on death he drew inspiration not only from the loss of Doc but other friends, too. _Magic and Loss_ would be dedicated 'to Doc and especially to Rita'. Lou didn't specify who Rita was, though it was assumed that he was referring to his old Factory buddy Kenneth Rapp, known as Rotten Rita, who had died shortly before. He may also have been thinking of his New Jersey neighbours Bob and Rita Teel. Bob died of cancer in 1987, and Lou remained friendly with his widow, who recalls him saying that he had written a song about her. 'He [Lou] was a very soft-hearted man, but because of the image he had people didn't know it,' says Rita, noting that Lou offered her children his snowmobile after Bob died. 'Lou was always good to us.'
Lou had another friend in mind when he wrote 'Harry's Circumcision', in which the character attempts suicide. This was inspired by the death of his college room mate Lincoln Swados, whose emaciated body was found in the Lower East Side trash nest he called home in 1989. The autopsy revealed that he had multiple health issues, including lung cancer. In the years before he died, Lincoln had sung his heart out on street corners, rolling himself about on a skateboard, becoming one of New York's familiar, if slightly alarming, street people. The spirits of Andy, Nico and Rachel also seemed to flit between the tombstone tracks.
_Magic and Loss_ was the third of what can be seen as a trio of albums recorded after Warhol's death, united by the fact that the writing was of higher quality than Lou had achieved in years, and the fact that loss was a running theme. While death has always been one of the principal subjects of classical music, it is seldom addressed with much seriousness in popular music, which has traditionally been the domain of the young. By middle age, however, the end of life starts to be as compelling as the beginning, and Lou, forty-nine years old when he made _Magic and Loss_ , went deep into the subject, even describing a cremation in 'Gassed and Stoked'. 'He was an amazing lyricist [about] subjects no one else wanted to touch,' observes Rob Wasserman, who played upright bass on _Magic and Loss_ and _New York_ and toured with Lou extensively in latter years. As with _New York_ and _Songs for Drella_ , this was an ambitious work, realized with a clarity of thought that would have been beyond Lou when he was abusing drink and drugs. 'Giving up those things has made it possible to do works of larger scope. Because my concentration now lasts for more than a minute,' he said. 'Also, I used to have to rely on my handwriting and I couldn't read what I wrote.' He now referred to the old days as 'when I was a lunatic'.
On tour, he performed _Magic and Loss_ in its entirety in the first part of his show: fourteen consecutive songs about death, none familiar to the audience. He had likewise insisted in the liner notes to _New York_ that the album should be listened to straight through in one sitting, 'as though it were a book or a movie'. This asked a lot of audiences with only a casual interest in his music, and put some people off. '[ _Magic and Loss_ ] is probably the best concept album, detailing reactions to the suffering and death of friends afflicted by terminal illness, released so far this year,' Ben Thompson wrote in the _Independent on Sunday_ , reviewing a show in Birmingham in March 1992. 'But live entertainment it is not.' Lou glared at audience members who called out requests for more familiar songs in concerts, telling them to get a refund if they didn't like it, though he rewarded those who stayed with some of his classics in the second half of the evening.
Lou was never the most engaging or gracious performer, and he got even grumpier as he got older, yet he retained a loyal following. 'He had his fans who would love anything he did. If he farted into a paper bag they would go, "It's the most brilliant thing ever!" He didn't particularly like that kind of person,' says Struan Oglanby, a Canadian guitar technician who started to work for Lou on the _Magic and Loss_ tour and got to know him well. 'It was an odd thing. He wanted attention, but he didn't want the kind of people who gave him that attention.' Struan bonded with his boss over a mutual interest in technology. 'At the heart of everything, Lou loves his gear,' says Struan, referring to what had become a full-blown obsession with guitars, microphones and recording techniques. One of Lou's frustrations was that audiences didn't appreciate the nuances of sound that he heard. 'Usually, what he wanted to think about was his gear, and his guitar sound. We would spend hours upon hours doing the most minute tests of microphones and amps and things... he would play like just an A chord [until six in the morning], so it would be a test. There was a lot of pseudo-science going on with these tests. And so he would play the same A chord and we would have the microphone three inches from the [guitar], and have a protractor out to measure the distance and angle, and we would keep charts on these things... There's a fine line between fascination and obsession and a disorder.'
Sylvia had a lot to put up with, living with such a demanding man, and twelve years into the marriage Lou was no longer as lovey-dovey as he had been with his second wife, who now had the additional burden of managing his career. 'There was always bickering, lots and lots of bickering,' comments Struan. 'She was playing a role which she wasn't necessarily either suited for, or trained for, and he would put expectations on her that somebody much more experienced [would be used to], and she dealt with him the best she could. I don't know if she wanted to be in that position. I know she enjoyed the perks of that position. I know that she loved being associated with him. I think she did it for that...There was some shouting that went on, but it was more her having enough of it, and being rubbed raw. And there were pleasant times, too. She did a lot of sitting back, letting him be him.'
Was Lou done with his gay life in this second marriage (or third, if one counts Rachel)? Not according to the elderly jazz singer Little Jimmy Scott, whose womanly contralto and diminutive size were caused by a hormone deficiency. Scott enjoyed a successful recording career in the 1940s and '50s before falling into obscurity, only to have his career revived in old age partly thanks to Lou, who heard him sing at Doc Pomus's funeral and invited him to add backing vocals to _Magic and Loss_ , before joining him on tour as a guest artist. All went well until March 1992, when Lou and Jimmy had an altercation backstage in Europe.
Jimmy claimed Lou tried to kiss him. 'Jimmy got pissed off and put him in a headlock,' says his wife. 'I don't know if Lou thought Jimmy was bisexual like himself, but he was not. Jimmy never was. He was just small in build, and whatever. Jimmy got angry. "I don't go that way – back off."' As Jeanie Scott related the story in 2014, Jimmy added corroboration from his death bed (he died a few weeks later). He may have misconstrued Lou's attempt to be friendly. Either that, or one has to believe that Lou really did fancy this little old man.
It may have been this odd incident that caused Lou to hire a bodyguard, his first since the early 1970s. He hadn't really been successful enough, or famous enough, in the intervening years to warrant one. Joe Doyle, a genial former New York City cop, joined the _Magic and Loss_ tour in Denver in May 1992. Apart from preventing the likes of Jimmy Scott from strangling Lou, part of the job was saving Lou from himself. 'I wasn't hired for that. I made it part of my job.' Such a situation occurred during that summer's festival season.
In June, Lou played Glastonbury in England. The following month, he appeared at the Leysin festival in Switzerland. There was a party one night in the Swiss hotel bar with members of his road crew performing his songs for fun. 'We did some Lou covers [including] "Vicious",' says Struan Oglanby, who was part of the bar band. Lou was watching, evidently enjoying himself. 'Then I noticed that he had a drink in front of him. Oh shit! Who gave him that?' The bodyguard Joe Doyle also saw the bottle, and took it away. 'He didn't need people taking a picture with that in his hand, with everybody knowing that he was trying to stay clean.' Struan says Lou drank enough to wake up the next day with a hangover. After this lapse, he says that Lou 'didn't drink much at all', but there were 'times he had a few drinks and was tipsy'. Doyle's reaction indicates that such lapses were covered up as much as possible, preserving, for the sake of his pride as much as anything, the image of Lou as a reformed boozer who had declared himself clean and sober, rather than somebody who still struggled with sobriety.
A couple of drinks actually seemed to improve his mood. 'It never brought out his meanness; it was the opposite. He seemed to be a jolly [drinker],' says Struan, which was also the impression of Sire executive Howie Klein. 'I remember one night we were in New York, there was snow on the ground, it was freezing, way after midnight, we were walking around, and he was high on something – I don't know if he was drunk or what it was,' says Howie. 'I found that it was great, because it totally loosened [Lou] up, to the point that he went into some of his deepest, innermost thoughts that he had never discussed with me before... That is the only time I recall him being [high].'
Lou was among the stars who appeared at a concert at Madison Square Garden in New York in October 1992 to celebrate Bob Dylan's first thirty years in the music business. He performed an obscure song from the end of Dylan's born-again phase, 'Foot of Pride', which proved a highlight. His very presence at the event was surprising, considering the fact that he often made jealous, disparaging comments about Dylan in private. 'What the fuck has Bob Dylan ever done that I haven't done?' he asked Struan one day. '[I] knew him well enough [by then] to let him carry on, rather than say, "Here's a long list of things that Bob Dylan has done that you haven't done." That would have been the wrong approach. He couldn't stand [Dylan]. Dylan's best album wasn't a Bowie album.' It was impolitic at this stage to mention that Bowie had played any part in Lou's career.
Back in Europe the following month, Lou met the most significant person in the last quarter of his life. Born into a religious, wealthy family in Chicago in June 1947, making her five years Lou's junior, Laurie Anderson was an androgynous little woman with short, spiky hair and the mischievous smile of a naughty schoolboy. Interested in art and music from a young age, she played violin well enough to perform with the Chicago Youth Symphony Orchestra and, in 1965, travelled to Europe with Talented Teens USA. The following year, she moved to New York, where she became part of the downtown art scene, forming friendships with some of the most notable composers of the day, before emerging in her own right as one of the foremost performance artists of the 1970s, combining observational writing, minimalist music (she played keyboards and electric violin) and cutting-edge lighting effects. Her speciality was the semi-autobiographical shaggy-dog story, usually surreal and unsettling, and often very funny, delivered in a confidential, wry tone of voice. Her sense of humour saved her from seeming pretentious, unlike some of her contemporaries, all of whom she eclipsed in terms of fame when her recording 'O Superman (For Massenet)' became a surprise European hit in 1981. The robotic 'ha-ha-ha-ha' backing track had a catchy quality not unlike the 'doo da doos' in 'Walk on the Wild Side.' 'You know, people come up to me and go "doo da doo, doo da doo",' Lou once remarked, 'and they come up to her and go "ha-ha-ha-ha."'
'I met Lou in Munich,' Laurie recalled in an article she wrote for _Rolling Stone_ after his death. 'It was 1992, and we were both playing in John Zorn's Kristallnacht festival commemorating the Night of Broken Glass in 1938, which marked the beginning of the Holocaust.' Lou asked her to perform with his band. He complimented her warmly afterwards, and she decided that she liked him. The feeling was mutual. Lou suggested they meet up again when they got back to New York, where they both lived and worked, Laurie in a magnificent downtown loft. They had a lot in common. As well as a love for the city, they were both outsiders with a strong intellect and a leaning towards the avant-garde, serious artists who'd had one flukey hit for which they were best known, both cigarette smokers and 'gear heads' with a fascination for equipment, while Laurie's boyish looks appealed to bisexual Lou. This didn't bode well for his marriage.
## XIII
## Return to the Velvet Underground
### 1992–6
THE FOUR ORIGINAL members of the Velvet Underground had a band meeting at the Paramount Hotel in New York at the end of 1992 to discuss a box set of their music, _Peel Slowly and See_ , which was eventually released in 1995. During lunch, they talked about working together again. The Cartier event had been enjoyable, so why not play more shows on a bigger scale and make some money? For once, everybody seemed to think that this was a good idea. In the run-up to Christmas, they jammed together at Big Mike's rehearsal space, and Lou joined John and Sterling onstage at New York University to see if they could still tolerate each other in concert. Having decided that they could, the decision was made to re-form for a tour in the summer of 1993, choosing Europe rather than the United States because 80 per cent of their record sales were made there.
John Cale appeared with Sterling on the _Tonight Show_ in January, telling Jay Leno that the Velvets were getting back together for the money, though he later downplayed the pecuniary incentive, emphasizing instead their artistic aims. John said that he wanted the Velvets to make new music, '[to] demonstrate that whatever the four of us did before was not immured in 1969'. Time was set aside for him and Lou to write together with a view to recording a new Velvet Underground studio album, as well as performing new songs in concert. Not everybody shared Cale's enthusiasm for new material. Moe simply wanted to play the old songs for their fans, as authentically as possible, and she was honest enough to admit that the money was important. It meant that she might be able to buy a new car, or even her first home, after a lifetime of renting.
The money increased when U2 – whose members held the Velvets in high regard – invited the band to support them on their summer tour of Europe. While the money meant less to Lou than the others, he was not so rich that it didn't matter. _New York_ had been a hit, but his last two albums sold poorly, and a successful reunion tour might boost his career. He also wanted to help Moe, who had always been his best friend in the band. Lou was intrigued by the quiet life she led in Georgia. 'So does everybody in town know who you are?' he asked when they met in New York.
'Are you kidding me? They never heard of the Velvet Underground.'
Lou was only prepared to work with the others again on his terms. Importantly, Sylvia Reed was put in charge of the tour, ensuring that 'my artist', as she called her husband during meetings, received prima donna treatment. Sylvia became 'the buffer' between Lou and John and Sterling, says Moe, which was important 'because he could piss people off'. Little thanks Sylvia got. 'If the slightest thing went wrong, he would be yelling at her.' To make sure that Lou had everything his way, his guitar technician Struan Oglanby was appointed stage manager, while his brother-in-law Mike Rathke was tasked with mixing and producing recordings of the shows for a live album which would be released by Lou's record company. 'I was Lou's guy, so I didn't really develop any closeness with anybody else,' admits Sire boss Howie Klein.
Lou conducted band rehearsals in New York like an impatient teacher dealing with slow children. He was playing custom-built headless guitars these days – instruments lacking a traditional headstock. Having heard that the ideal material for a guitar neck was Hawaiian koa wood, he'd persuaded a New York guitar maker to handcraft his guitar necks in this obscure material. 'He had a way – even though he wasn't a significant volume purchaser, and wasn't really selling that many records – of convincing people that he was their prime client and they had to pay lots and lots of attention to him, and field all his calls, no matter what time of night, and how preposterous the idea was,' notes Struan. 'So he made poor Ned [Steinberger] find a way to bond koa wood to this carbon composite [guitar neck], which required extensive testing of glues; it was a nightmare to go through. He used these things, and then he'd ditch them a couple of months later because he didn't like them, and go on to the next thing. That was very much what he did with people... he had all these guitars and he dropped them and never used them again.'
Lou played his headless guitars through his Big Rig, a special-effects box as tall as a refrigerator and so heavy it took six roadies to move it. The Big Rig had been custom-made at a cost of £10,649 by an English engineer named Pete Cornish, whom Lou had initially asked to cure a hum in his stage equipment. When Cornish eliminated the hum, he became beloved and trusted by Lou, as an animal dotes upon a person who extracts a thorn from its paw. The Big Rig became a symbol of his dominance in the re-formed band: he had the biggest, loudest and most expensive equipment, _ergo_ he was in charge.
The fun the Velvets had in France drained away. 'It was like being with a group of grumpy old people – bitching about things,' Struan says of rehearsals. The musicians weren't that old. The men turned fifty-one during the reunion year, while Moe was forty-eight. But they were old enough to be set in their ways, and Moe and Sterling were out of practice. 'Sterling finally figured out his amp, "What does this thing do?" Sterling was a tug-boat captain at that point [so] he was rusty. Moe was definitely rusty. And then John and Lou went right back to being John and Lou... That whole reunion was bitchy... I had to be everybody's guy, but Lou wanted to be sure the deck was stacked in his favour. He said, "Always be sure my stuff is taken care of first." He always wanted to make sure Moe was taken care of, too, because he loved her. She had no equipment, so we got her equipment, and made her special kick-drum pedals and things. Rehearsals were a laborious process of remembering chords and settings on guitars, and then Lou trying to tell everybody else how to set up their equipment, because he was the expert on these things. So that led to tension.'
Lou would say, 'We've got to do this,' and 'John, I think you should have this.'
'Lou!' Cale barked back. 'Don't tell me what to do.'
The original plan of writing enough songs for a new album was scaled down to creating just a few new songs, most of which sounded like sketchy ideas rather than fully realized works, and the most significant of which, 'Coyote', sounded more than anything like an out-take from one of Lou's recent albums. The band nevertheless decided to incorporate 'Coyote' in the show. Then it was time to fly to Europe.
Lou travelled first class in the bubble of the 747 that took them all to London, the others sitting in cheaper seats. Their own glum faces greeted them from the news stands at Heathrow airport, glowering back from the covers of the music monthlies. '_SO_ HAPPY TO BE BACK' was the splendidly sarcastic headline on the cover of _Vox_ , under a picture of four scowling middle-aged musicians, the Velvets having invited the press to their rehearsals in New York to try to drum up interest in the tour. Ticket sales were slow in places. There were further rehearsals in London, where Lou took time out to hit golf balls with Joe Doyle in Hyde Park. The fact that he was the only band member to have a bodyguard was something else that set him apart from the others.
The Velvets reunited, 1993.
The first show, on 1 June 1993, was at the Edinburgh Playhouse in Scotland, a medium-sized venue holding just over three thousand people, where they were playing two nights. For Europeans who had been listening to the Velvet Underground since their teens, never having a chance to enjoy them live, because the classic line-up never toured outside North America, a ticket to see the band in 1993 was almost as unexpected as seeing the resurrected Elvis, and just as exciting for a certain sort of person. As Pat Kane observed in his _Guardian_ review, Velvet Underground aficionados were of a type. 'I have no problem defining the exact nature of the Edinburgh audience... "I've never seen so many books in so many bags," said the mumsy bag-search security lady. "Where do you find time to read them all?"... Where do they find time? They're _middle class_ , missus!'
The Velvets had always appealed to arty middle-class intellectuals, many of whom discovered their records as students. These fans presented themselves in Edinburgh as affluent professional people in early middle age. So large did the Velvet Underground loom in their imagination that it was almost enough to _see_ Lou, John, Moe and Sterling together. But what did they sound like?
Opening with 'We're Gonna Have a Real Good Time Together', the band played a long and well-constructed set, including classics like 'Heroin' and 'Venus in Furs', with Cale's frenzied viola greeted with cheers like an old friend, as well as less familiar material such as 'Hey Mr Rain' and 'The Gift'. There were differences from the recordings. Performed live in 1993 with state-of-the-art equipment, the songs sounded brighter and cleaner than on the original albums, which had an appropriately grungy, underground sound. More worryingly, Lou chose to sing rather than speak his lyrics, which was a stretch for his voice, and he forced the tempos. 'Everything was too fast – everything. Every night I was [grinding my teeth],' laments Moe. 'The best example is maybe "Waiting for the Man". It's a whole different thing if it's too fast. The tempo it was played at originally was perfect.'
'Hello, we haven't seen you in a while, but it seems like yesterday,' Lou greeted the audience, whereupon someone yelled, 'Nico!'
'That will take some doing.'
Almost as unlikely as Nico's appearance was the moment when Moe walked to the front to sing 'After Hours', with an endearing modesty that contrasted with Lou's rock-star posturing. Cale's voice was also heard on several songs, including 'I'm Waiting for the Man', which was a welcome surprise. Sterling seemed least comfortable, playing his Fender in a timorous and not always timely way. Lou glared when he fluffed his break in 'Rock 'n' Roll'. At the end, all four stood awkwardly together, accepting the applause, which was, of course, meant to convey respect for what they had achieved in the past as much as appreciation for the show. For those who had always loved the band, love was reaffirmed. '"What were they like?" people wanted to know the next day,' Allan Jones wrote dreamily in the _Melody Maker_ , 'Like nothing else at all apart from themselves.'
The tour moved on to London, where the Velvets played a small show at the Forum in Kentish Town and a big concert at Wembley Arena. 'That was the most thrilled I ever was to be with the band, to be at [Wembley] and to have people screaming for Sterling,' says Martha Morrison. But her husband wasn't happy. He struggled to master his parts and seemed overwhelmed by the experience of playing to large audiences. Lou showed little sympathy, berating his colleague backstage in a way the others found embarrassing. Even Moe is moved to criticism. 'Lou made it rough on everybody. He really did. I love him to death, and have no hard feelings, but, as John said, he need[ed] to join the band. We [were] not Lou's back-up band, this [was] the Velvets. We are all [equal]. He was treating Sterl pretty [badly]. I remember after one show I was thinking, "Sterl, punch him!" I don't know what Sterl had done, or not done, but it was no big deal whatever the hell it was, and certainly nothing to be reprimanded [about]. And by who – Lou? We are equals in this. Wake up! This isn't your Lou tour. I walked by, I didn't stop and listen, but as I walked by I could hear a sentence or two and I thought, "Sterl, punch him in the mouth!"' Instead, Sterling drowned his sorrows in the hotel bar, telling everybody what an asshole Lou was.
Despite the historic nature of the tour, Wembley wasn't a sell-out. The band travelled to the Netherlands next, where they played a club show and another arena which was 63 per cent full. Ticket sales for the rest of the tour were around 90 per cent. Nevertheless, Lou insisted that they stay in deluxe hotels. Thrifty Moe was shocked at the money they spent on tour. '[Lou] lived well and it didn't worry him to go and have a $70 dinner, and maybe that doesn't sound like much, but I'll be damned if I ever do that.' She thought money was wasted on expenses. She did, however, get an advance so she could fly her mother and children over to see the band in Prague, after which they all met Václav Havel. 'My mother was so thrilled to be meeting an actual history person,' says Moe, who felt she was paying Mom back for her first drum kit.
By the time they reached Paris, Lou and Sterling were barely speaking. 'The whole vibe between Lou and Sterling was completely foul at that point,' says Struan Oglanby. Because the shows at L'Olympia were to be filmed and recorded, Lou insisted on an exhaustive sound check. 'Seven hours!' exclaims Moe. 'That didn't go over well with Cale. They were all ridiculous, but this was the topper. God!' He was vindicated, however, by the quality of the shows. 'All Tomorrow's Parties' and a pounding version of 'Some Kinda Love' were particularly effective, as were Moe's two songs, 'After Hours' and 'I'm Sticking with You'. The audience cheered and clapped along to this diffident little woman who cupped a hand over her right ear as she sang, better to hear herself. 'She is very shy and modest, and getting up there is a big deal for her,' says her daughter Kate Mikulka, who together with her siblings held up a banner with Moe's name on it. 'I had tears in my eyes.'
After Paris they played Berlin. Then they joined U2's mammoth Zooropa tour as a support act. They played three stadium shows with the band in France and Switzerland, to audiences of sixty thousand a night. They also did two huge festivals, including Glastonbury. The Velvets played to far more people during these few shows than they had in total during the entire 1960s, and the U2 dates helped keep the reunion in the black, but the gigs weren't enjoyable for the band. Moe decries the lack of interaction with their audience at events where most people could only see them on screens. Sterling was depressed, and Lou and John were increasingly tetchy with each other, so much so that Lou chose to be driven to one gig rather than fly with the rest of the band. They finished up with a few smaller shows in Italy, where Lou remained popular, but by this stage his relationship with John was dreadful. 'One night in Italy, I think it was in Bologna, I was doing "Waiting for the Man" with a huge orchestral introduction, and I was trying to give them the tempo from the piano, but I was too far off,' Cale recalled. 'Lou went and told my tech to turn the piano off. At that point I was ready to knock his teeth down his throat.' The last show was in Naples, on 9 July. Then they flew home to the USA for a break, and began to ponder what, if anything, they wanted to do together next.
There would be a live album and video of the European tour. That much was decided. In addition, they had offers to do a North American tour, as well as dates in Japan and Australia. Having got this far, it made economic sense to play more shows. 'You know, they didn't make that much money [in Europe],' says tour manager Mike 'Coach' Sexton. 'They made money, but they didn't make a lot of money... It didn't make them all rich.' The original idea of recording a new studio album had fallen by the wayside, but the band was keen on an offer to make a live acoustic album in the popular MTV Unplugged series, which had been a shot in the arm for Eric Clapton's career the previous year. 'We were going to do MTV Unplugged. They wanted us to do that, and we all liked the idea. We were thinking about percussion things I could [use]. It got to that point,' confirms Moe. 'I was thinking we could play "Sister Ray" acoustically. This could be something! "Heroin" acoustic. A lot of the songs would transpose fine.' If they made the Unplugged record, they would tour the US to support it, but who would produce the album?
John Cale was spending the summer on Long Island when Lou came out to see him to discuss the issue. 'I'm the only one who can produce the VU,' Lou said, a ridiculous statement to make, considering the fact that it was Cale who had experience and success as a record producer. He suggested that they use an independent producer whom they both trusted, but Lou wouldn't hear of it. 'I must produce,' he insisted.
'Absolutely not.'
The argument continued by fax, in what was the era of the fax machine. Lou had recently established an office for his company, Sister Ray Enterprises, on the sixth floor of a building at 584 Broadway in SoHo, which was run by Sylvia. He kept his archive of concert tapes and career memorabilia there, together with a fancy sound system. Journalists came to the office to interview Lou, many leaving in frustration after he had lectured, insulted and dismissed them. There was also a good deal of sitting around wasting time. 'A lot of time was spent listening to his $10,000 speakers, and waiting for the phone to ring,' says Struan, who ran the office briefly. 'That [involved] taking care of things he needed. We had an assistant, Josh. Lou would call me at the office and say, "Tell Josh to bring me a hard-boiled egg"... So Josh would leave the [office] on Broadway, go purchase a hard-boiled egg for Lou, and deliver it to [his] apartment. Then come back to the office.'
One day that summer, Lou told Struan to send Cale a fax. 'He wrote down on a piece of paper, "I am going to be the only producer on this album, or there is going to be no album, and if you don't like it you can go and fuck yourself." That was the gist of the message. He said, "Send that to John now." I put the [paper in the] fax [machine]. I had my finger on the button.' Before he pressed the button, Struan asked Lou: 'Are you sure you want me to send this?'
'Yes.'
'Are you really sure? Because there is only going to be one result.'
'Yes.'
The fax was sent and, of course, it ended the reunion. There was no MTV Unplugged, no more shows. 'I think it was more important to him to have things done his way than anything else. That was an absolute in his life,' says Struan, adding that Lou didn't seem to care that he had wrecked the reunion. 'There was no discussion after that. The Velvet Underground was dropped, the same way guitars and people were dropped... I think it was something that was way in the past for him [anyway], on the other side of a lot of haze.'
Sire issued a tour album recorded in Paris, _Live MCMXCIII_ , together with a complementary concert video. It was good to see and hear the classic line-up together on stage, well recorded, with Cale playing and singing as he had in their heyday. Nevertheless, _MCMXCIII_ was a souvenir of an event out of time, lacking the intrinsic interest of their original live albums, of which the second, _1969_ , was one of the best records they ever made. And _MCMXCIII_ didn't sell well.
The reunion had one happy outcome, however. Moe was able to buy a house.
That October, two middle-aged people could be seen wandering between the stands at the Audio Engineering Society convention in New York. Eleven months had passed since Lou had met Laurie Anderson in Germany. During that time they had spoken on the telephone, arranging to meet at the microphones stand. 'I had no idea this was meant to be a date,' she later wrote, 'but when we went for coffee after that, he said, "Would you like to see a movie?" "Sure." "And then after that, dinner?" "OK." "And then we can take a walk?" "Um..." From then on, we were never really apart.'
As Lou and Laurie became better acquainted, Lou and Sylvia separated. The fact that she wanted a family and he didn't, a subject touched upon in the song 'Beginning of a Great Adventure', had become a problem. 'He told me she wanted children and he never wanted children,' says Erin Clermont. 'He is not a fatherly type.' The days when Lou sang of his love for Sylvia were over. He now spoke to her like she worked for him, which she did. 'One thing that was very hard to take was the way he talked to Sylvia. I did see a brutal side to him. He was just cruel to her. I thought she was so sweet. She was a very sweet woman, but he was done with her, and he was not nice to her,' says singer Victoria Williams, who appeared with Lou on an MTV special the week after the AES convention in October 1993. 'I heard him talk to her roughly, and he didn't use loving terms.'
Sylvia moved out of their apartment on West End Avenue but continued to manage her husband's career from the Sister Ray office. Effectively single again, Lou returned to familiar habits. He played golf with Steve Epstein at Twin Brooks Country Club in New Jersey, showing up with a young man, whom Steve took to be his latest boyfriend. He also had a romantic reunion with Erin. 'I was walking home with this pack on my back, and we ran into each other and had a nice chat. He had this dog with him. Then he wrote me a really nice email [Lou and Erin were early adopters of the internet] that said how much he missed me, and we got together again.'
Shortly afterwards, Lou called Erin. 'I have to tell you something,' he said. 'I'm involved with someone.'
'Oh, who?'
'Laurie Anderson.'
Erin was surprised, partly because she had assumed that Laurie was gay. Then she began to wonder if the dog Lou was walking the day they met was Laurie's rat terrier, Lolabelle, and became cross. 'I thought he was treating me rather shabbily, actually. I never got to say that to him. I thought he treated me shabbily that he did not let me know [earlier that] Laurie was in the picture.'
Lou and Laurie began to appear together in public, at events such as the November 1993 premiere of _The Black Rider_ , a stage work by Robert Wilson. They were also together in Toronto in February 1994. Sylvia filed for divorce on 17 March. One of the first things Lou did when he received the papers was tell Struan to change the locks at Sister Ray. 'It was that sort of panic.' Although the divorce had an inevitability about it, friends say that both parties were upset, though Sylvia had suffered the most. She only filed for divorce after he left her publicly for another woman, who happened to be famous, which made the situation even more humiliating. 'She [was] inordinately angry that Lou left her. He dumped her for Laurie – Lou traded up – that's tough,' comments Daryl Bornstein. Lou still found reason to complain. He bitched about Sylvia to Steve Epstein, always a sympathetic listener, yet when Steve went through a crisis in his own life soon afterwards and tried to talk to Lou about it, Lou said he couldn't deal with his 'negative energy'. Steve was left wondering what sort of friend Lou was. 'A true friendship would have been, "Let me help you through this." That wasn't what he was about.'
Many people were surprised by Lou's relationship with Laurie, 'the next one in his list of astonishing women', as Danny Fields remarks. It soon became clear that this wasn't a fling; it was love. 'When Laurie came along, that was the first time in those years that there was any sort of sign of him spending time with anybody, and it was very much a romantic thing,' says Struan, who had hitherto considered Lou to be asexual. 'There was obviously love there, and it was a romantic relationship from the beginning.' Lou seemed changed. 'I did see a _twinge_ of happiness when he got together with Laurie,' remarks Sam Shepard. 'I was glad of that.' Others noted a dramatic improvement. 'Lou changed in a very big way, and I attributed that to his relationship with Laurie,' says Howie Klein, who found Lou to be 'a gentler, happier, more easygoing person'. The couple held hands in the studio as they worked on Laurie's 1994 album, _Bright Red._ Lou let his hair grow long and curly, like it was in his youth, apparently to please her. He so cherished the sound of her voice – a warm, attractive voice – that he kept all the messages she left on his answer machine. Photographer Bob Gruen took a picture of the couple at a function and sent prints to Lou as a gift, because he looked so happy. 'He sent me back a very sweet letter saying he gave the pictures to his mother, and his mother put them on the wall, and that made his mother happy, and that makes him happy. It was such a shock, because Lou was not known as a sweet guy,' says Gruen. 'I thought, "If she can tolerate him they'll be great."'
The divorce was finalized on 26 April 1995. Lou lost the New Jersey property in the settlement, together with the vehicles he kept in the country, but he seemed most upset about the fact that Sylvia kept their latest dog, Champion Mr Sox. 'He was devastated by the loss of Mr Sox,' says Struan, who briefly took over the running of Sister Ray. In doing so, he came to appreciate what Sylvia had had to put up with. 'It was a constant barrage. And the idea that if you weren't fulfilling that list of needs you were somehow disloyal, or against him in some way; everything from "I need a pack of gum" to "We need a better record contract." It was anything that came into his head.'
Despite the cost of the divorce, Lou was in good financial shape. Indeed, he started to live really well at this point. He moved to a grand apartment building at 45 Christopher Street, a few doors from where he had lived in the 1970s, but a much better address. The old place had been poky. His new penthouse apartment had a terrace with superb views across Manhattan, and room for a rooftop studio, which he had insulated with lead curtains so he could play his guitar as loudly as he wanted. He had the Big Rig broken up into sections so it could be hauled up to this eyrie, and he hired interior decorators to make the place look beautiful. 'He was buying like art furniture... He started paying people to do all sorts of things for him,' says Erin, who had never known Lou to spend so much cash. 'He really pampered himself. He had a masseuse who came by, and all sorts of creature comforts at a high level.'
Lou asked Struan to install an Apple PowerBook in each room in the penthouse, all linked and connected with the internet. Late one night, not long after the job was done, Lou rang his technician at home. 'What's up?' asked Struan, looking at the clock. It was 3 a.m.
'Let me talk to your wife.'
Struan was surprised. 'He had never expressed any interest whatsoever in meeting my wife, talking to my wife, never enquired about my home life. He wasn't interested in anyone's life, past what they were doing with him at that moment.' He asked Lou why he wanted to talk to his wife. 'She's sleeping.' There was a pause, during which Lou could be heard breathing hard down the phone like Darth Vader, a favourite telephone technique when he was pissed off. Then he yelled: 'These computers you put in are total fucking shit!' After letting him rant and rave, Struan tried to divine the problem. 'You had to be really careful with him. You couldn't say, "Calm down." It was always a circuitous route to get to where you wanted to be.'
'Everything seemed to work fine – is the network going down?'
'No.'
'So what is it?'
'It's total shit.'
'Let's get specific, what is it technically that's not working?'
'The dictionary sucks.'
Struan had installed a digital version of the _Oxford English Dictionary_. 'What does my wife have to do with the dictionary?'
'I'm trying to look up something, and it's not in there.'
'What are you trying to look up?' asked Struan, careful not to imply that he might know a word Lou didn't know. 'You couldn't insult his intelligence – that was the worst possible thing you could probably do.'
'The word "simmer" is not in here.'
_Simmer?_
'I'm pretty sure that is in there. Why are you trying to look that up?'
'I'm trying to make a can of soup, and it says "to simmer". What exactly does that mean?'
Struan was dumbfounded. 'So he wanted to talk to my wife about cooking – some good old-fashioned sexism.' This absurd exchange was not untypical of Lou in the latter part of his life, when, despite the humanizing influence of Laurie Anderson, his obsessive nature reached almost autistic levels. Age, wealth and fame meant he had little reason to check his bad behaviour.
Like Mr Toad, Lou had always hurtled from one enthusiasm to another, soon tiring of his toys. Headless guitars became as passé as Binaural Sound, his koa-wood axes shut away in his storage locker with Manfred Schunke's white plastic heads and other curios. He was also done with pinball, and motorbikes. Lou's new vehicular enthusiasm was for bicycles. 'We went down to this bicycle shop in Greenwich Village and we went through twelve to fifteen different bicycles, and he rode each one around the block,' recalls Struan. 'Then he had me ride them around the block... And then we'd discuss the merits of these bicycles. We spent hours at this shop. He wanted to make sure he got the best thing. He was this way with everything. We finally picked the bicycle and the accessories for it. "Is this the best pump, the best saddle bag?" Then he wobbled off in traffic back home.'
Laurie rode a bike, and her influence was strong. The couple were becoming united in everything, from bicycling to Buddhism. Most significantly, Lou started to make art music again, like he had in the 1960s, and like his girlfriend. 'He looked up to her work in a big way, and he wanted his work to be that good,' says Howie Klein. Lou's collaboration with Robert Wilson on _Time Rocker_ , an avant-garde rock opera inspired by H. G. Wells's _The Time Machine_ , was a case in point. Born in Texas in 1941, Wilson had established himself as one of the most innovative figures in modern theatre by the 1990s, combining words, music and artwork in his shows. 'I was preparing an exhibition for London celebrating the hundredth anniversary of _The Time Machine_ by H. G. Wells and had thought to also make a stage work based on the same material... I had asked Lou if he would be interested in working with me, and he said yes,' explains Wilson, who, like Lou, was better known in Europe than in his native United States. 'I had just directed _The Black Rider_ with Tom Waits at the Thalia Theatre in Hamburg, and the producers there were asking for me to create another work. _Time Rocker_ was discussed, and we all agreed it would be a perfect complement to follow the Waits work. We started [with] a workshop in Hamburg in which I made drawings and staged the work visually without text. After a while I started to sketch in existing music of Lou's to get a sense of the structure and pace of the work. Lou was very excited and said it reminded him a little bit of Warhol, and it was easy for him to relate to it because it was so visual... This was the time where I really started to appreciate Lou, and especially the loudness of his work.'
Lou wrote fifteen songs for _Time Rocker_ , which he worked on with Wilson throughout 1995 and the first half of 1996, the show opening in Germany in June. It was a work of high art – far removed from mainstream rock – that relatively few people saw, but this was the direction in which his career was headed. As ever, Lou found writing to order easy and enjoyable, and writing for a professional cast was particularly liberating. 'I'm not writing for me, and I'm using melodies and writing for voices that aren't my own. And also, they can sing melodies I can't sing, which is really great.'
He began to work with other, similar people in the art world, in music, theatre and cinema. During his marriage to Sylvia, Lou had recorded music for mainstream Hollywood films, including _Get Crazy_ (1983), _Perfect_ (1985) and _Permanent Record_ (1988). Such work was easy, lucrative and enjoyable for a songwriter who liked the challenge of writing to order. He also did a little acting, playing a rock star in _Get Crazy_. He was not much of an actor, though, and these films were among the lamest examples of mainstream American cinema. Lou described the comedy _Get Crazy_ as 'the worst movie I've ever, ever seen in my life'. During his relationship with Laurie, he showed more interest in working with art-house film-makers on smaller pictures, with better results.
He spent one blisteringly hot day in the summer of 1994 being filmed leaning over the counter of a cigar store set in Garrison, New York, talking extemporaneously about his relationship with New York City for an experimental movie called _Blue in the Face_. Filmmakers Paul Auster and Wayne Wang used the footage to link scenes in the picture. Nobody thought the filming had gone particularly well at the time, but Lou stole the movie. His lethargic, apparently autobiographical and unscripted monologue, touching on his childhood in Brooklyn ('I couldn't have been unhappier'), his teenage years on Long Island ('infinitely worse... at least in Brooklyn you could walk around') and smoking ('while I am smoking cigarettes, I am not downing a bottle of Scotch in fifteen minutes. So, looked at from that point of view, it's a health tool') was funny and revealing. The film did well on an art-house level, and Lou and Paul Auster became friends. 'I remember we spent some very pleasant afternoons together smoking cigars on the roof of his building on Christopher Street,' says Auster, who is best known as one of America's foremost literary novelists. Lou told Paul that he wanted to write a crime novel. He often quoted from Raymond Chandler's novels as exemplars of good style. 'He told me, "I want to write a novel. I'm really going to do it." And I said, "Well, good luck." And then about six months later he said, "You know it's so hard, it's so beyond what I'm capable of that I have to give up... I admire you for being able to do it." He said he certainly couldn't. I told him, "Well, I can't write songs..." Everyone has his talent, and his gifts. It was touching to see him admit failure.'
Acting in _Blue in the Face._
Paul Auster and his wife, the novelist Siri Hustvedt, became part of a circle of intellectual friends with whom Lou and Laurie Anderson increasingly socialized. Philip Glass, Salman Rushdie, Julian Schnabel, Wim Wenders, Hal Willner and Robert Wilson were also part of this clique, people with whom Lou and Laurie ate dinner in the smartest Manhattan restaurants, partied with, collaborated with and supported by attending each other's events. It was a civilized, privileged milieu, removed from everyday life, and it became Lou's world in the latter part of his life. He kept in touch with relatively few people from his bohemian past.
When John, Sterling and Moe agreed to perform together at the new Andy Warhol Museum in Pittsburgh in November 1994, Lou decided not to join them. Moe and John were in the lobby of their Pittsburgh hotel before the event, when Sterling arrived from Houston, where he was still working on a tug boat. The recent reunion tour hadn't changed his finances that much. He was changed in another way, however. 'When Sterl walked in, we were both shocked at how sick he looked,' says Moe. 'He just looked awful.' Sterling thought he'd pulled a muscle, but it became obvious that something much more serious was wrong. 'We were there five or six days, and each day it became more evident that he was really sick.' At the end of the week, Martha Morrison, who was living with the children in upstate New York, begged her husband not to go back to work in Texas. 'I said, "Please come home with me." He said no... And then he came home in a wheelchair.'
Sterling had non-Hodgkin's lymphoma. Despite a bone-marrow transplant, it became apparent that he was dying. He spent his last days in bed at Martha's house in Poughkeepsie, where Lou, John and Moe came to say goodbye to their band mate. Lou took the train up from New York on a day when Moe was also visiting. He went upstairs to see Sterling with Moe and Martha. Then the women left the men alone. Lou found his friend emaciated, and bald. 'Sterl lay in bed, seeming to drift off, and I wondered if I should leave,' Lou later wrote in _The New York Times_. But Sterling roused himself and asked Lou to help him sit up. They sat together for a while, with Lou holding his hand. In these quiet moments Lou felt that they resolved their differences. It was a powerful and moving experience. 'I missed the train back to New York and sat on the cement pavement waiting for another. I very badly wanted a cigarette and a drink. My God, I thought, We'll never play guitar together again. No more Nico. No more Andy. No more Sterl.'
Sterling died on 30 August 1995, the day after his fifty-third birthday. The following January, Lou, John, Moe and Martha stood together on stage at the Waldorf Astoria in New York as the band was inducted into the Rock 'n' Roll Hall of Fame. Lou would be posthumously inducted in his own right in 2015. The important contribution of Nico and Doug Yule to their history was officially ignored and would have gone unremarked save for John mentioning Nico's name in his speech. Finally, Lou, John and Moe performed a eulogy for Sterling, 'Last Night I Said Goodbye to My Friend'. It was the last performance by the Velvet Underground.
## XIV
## Love, Lou
### 1996–2008
LOU EXPRESSED HIS love for Laurie Anderson in songs including 'Adventurer', which mentioned her recent trekking expedition to the Himalayas, where she got altitude sickness and almost died. Laurie was prone to weird mishaps; she also came to grief when she stepped out of a cab in New York and fell down a manhole. In 'Trade In', he sang about wanting to marry again, having met a special woman, while 'Hookywooky' was an unusually frisky song in which he described a rooftop party at Laurie's building on Canal Street. Many friends were there, including ex-boyfriends with whom she still got along, in contrast to Lou, who noted that none of his exes spoke to him. He felt like shoving the men off the roof. Then he wanted to 'hookywooky' with his beloved, the meaning of which was clear.
He recorded these songs for the album _Set the Twilight Reeling_ in his home studio on the roof of his new penthouse. One afternoon while he was working with his band in the studio, there was a big storm. 'We had done a song called "Riptide" on that album. It had so much energy that when we finished the downbeat of the track there was an immediate thunderstorm and lightning outside in New York, and it started blowing things around,' says Tony 'Thunder' Smith, who became Lou's drummer at this time. It seemed almost like the music triggered the storm. The musicians ran out on to the terrace to rescue Lou's plants, as he barked orders like Captain Ahab on the deck of the _Pequod_.
Lou and Laurie Anderson, 1996.
Smith had been hired after Lou's previous drummer, Danny Frankel, took time off to be with his dying father. Lou wasn't particularly sympathetic. 'Lou said he couldn't wait. He had to get the record going,' says Frankel. 'It's funny that musicians treat other musicians like they complain record companies mistreat them.'
Lou's recording career was once again in jeopardy. Following a corporate upheaval at Warner Brothers, he was one of several Sire artists who were shunted across to the main Warner label. As a result, he lost the protection of the executives who had looked after him at Sire and became exposed to the harsh realities of the modern record industry. As he went out on tour to promote _Set the Twilight Reeling_ in 1996, he became dissatisfied with the low level of promotion the company was giving his record and was generally grumpy. 'He was getting nasty,' notes his technician Struan Oglanby. 'The shows weren't selling out. The album was tanking. As I would joke, the album went wood.' Lou had developed a quasi-paternal relationship with Struan over the past few years, as he tended to do with young employees. 'He had no kids, and my dad fucked off when I was nine. There was definitely a bond there that way with us. It was never expressed that way, but there was a familial thing.' But Lou's affections were inconstant, and now he seemed to want to pick a fight with his favourite. If any little thing went wrong, if his hairdryer didn't work backstage, or his amp setting was off by the slightest degree, Lou flew into a rage. It became too much for Struan on 14 March 1996. 'You know, I don't have to put up with this shit,' he told Lou after he snapped at him during sound check at the Bronco Bowl in Dallas. Lou screamed at Struan to get back to his work station. 'I was on a plane the next day, and went to work for the Smashing Pumpkins... Never heard from him again.'
When _Set the Twilight Reeling_ failed to chart in the USA (it did better in Europe), Lou had a crisis meeting with the chairman of Warner Brothers, during which he complained about promotion for his album and other matters. '[He] wasn't happy he wasn't selling more, and this and that, but he really wanted to make sure I cared about his [girlfriend],' recalls Danny Goldberg. Laurie was also a Warner Brothers artist, and Lou spent a good part of the meeting talking about her, as if he thought the label didn't appreciate her either. 'I think he was maybe just frustrated that his last few albums had been taken for granted, that they were treated as prestige items rather than albums that were supposed to be marketed, and he wanted more enthusiastic, focused marketing.' Goldberg had bigger problems. He was trying to restructure Warners and re-sign major artists like REM, at a time when new technology was threatening their business model. 'So in that context Lou Reed was a relatively minor responsibility.' In fact, the company decided to drop him; Lou was too much trouble. Howie Klein intervened before this became public. 'I was always very concerned about his career and tried to be as supportive as I could without interfering with the other company,' says Howie, who'd recently been appointed head of the Reprise label within the Warner group, 'but as it turned out Lou didn't get along with one of the top executives there. They had one of those classic Lou things. It wasn't working out, and they decided to drop him. I don't know if I'm talking out of school here, and if anyone ever knew this, because we made it very, very smooth. Instead of Lou being in any way embarrassed, or humiliated, that he was being dropped, instead it was a celebration of me being able to say to Lou, "I've got great news, I've talked Warner Brothers into letting you be on Reprise."' So Lou moved to his sixth and last record label.
As he entered his final years, Lou found himself working in a contracting record industry run by young people who had little or no emotional attachment to older niche artists like himself who didn't make hits. At the same time, Lou was drifting further away from the mainstream. That summer he was in Hamburg for the premiere of his collaboration with Robert Wilson, the stage show _Time Rocker_ , in which the Wellsian time machine was represented by a skeletal fish. The show, a highbrow rock opera with stylized sets and costumes, appealed to a discerning European theatre audience who relished Wilson's original and vivid work. There was more scepticism when the show transferred to New York. The _Village Voice_ condemned it as a 'schlocky musical', while _The New York Times_ concluded that Wilson's visuals were better than Lou's music.
After _Time Rocker_ , Lou resumed his usual summer touring schedule, which typically included a string of European dates. That year he kept a tour diary for the _New Yorker_ as he criss-crossed the Continent, creating a fascinating record of a middle-aged rock star on the road at the end of the twentieth century, flitting from one country to another, staying in deluxe hotels and meeting up with famous friends along the way, yet not immune from the discomforts of international travel.
He found himself in Austria on 7 July 1996, playing in a medieval castle near Linz. Lou observed that audiences were much the same wherever he went in Europe. 'The audience here is the same as the audience in Budapest – or Udine, for that matter. People have mastered the jutting neck and sliding head-and-shoulder movements associated with rock-moves,' he noted, grumbling that he couldn't do these moves himself, as a creaky middle-aged man, because he'd put his back out exercising with a StairMaster. After Linz, he travelled to Rome to play the EUR complex. Then he flew to Barcelona for a festival. David Bowie, who was also on the bill, watched Lou's helicopter land at their luxury hotel, 'to see if we crashed, I suppose', Lou noted lugubriously. Bowie was with his second wife, Iman. Seeing them together reminded Lou of Laurie, who was touring the United States. He felt lonely and hoped the hotel's internet connection was working so they could hook up later online.
Iggy Pop was another old friend on the festival bill in Spain, still performing stripped to the waist despite the fact he was about to turn fifty, and displaying a remarkable physique for a man of his age. 'How does he stay in such great shape?' Lou wondered, reflecting on his own struggle to stay fit. 'I was doing crunches every day, but that's how I threw my back out.'
He noted that the hotel rooms he was given on tour were often bigger than his New York apartment, ruing his failure to play the Manhattan property market to his advantage over the years. 'I still rent, which is pretty much how I started out.' He vowed to do something about it, and indeed he moved home not long after this, buying a new apartment in Greenwich Village: a large duplex at the corner of West 11th Street and the West Side Highway. This was the biggest, fanciest apartment he had ever had, and it was his last home in the city. To make the place comfortable he had bookshelves built and a new home studio installed, decorating the walls of the apartment with original artwork by famous friends, including Julian Schnabel, who lived in a huge faux-palazzo on the other side of West 11th Street. Also on display was an antique door from Tibet and ceremonial swords Lou used for tai chi practice. The view from the terrace was across the Hudson River to the New Jersey shore. As friends observed, looking across the water to Jersey, 'America starts over there,' meaning the continental hump of workaday America, which chic Manhattanites like themselves had little in common with. Traffic roared night and day on the broad highway beneath Lou's windows, but between the highway and the river was a pleasant landscaped strip of land where he and Laurie walked their dog, Lolabelle, who was like a child to them. It was a two-mile walk south along the river to Laurie's home studio on Canal Street, which she maintained as an independent base throughout their relationship.
Back on tour in Europe, Sunday 14 July 1996 found Lou in a fancy hotel in Antibes, watching the boats on the water. On Wednesday he was in Prague, hanging out with his pal Václav Havel. 'We drink and smoke...' Lou wrote in his diary, a casual admission that he was back on the sauce. Then he suffered a problem any traveller may face. 'Shampoo exploded in my suitcase and freeze-dried breakfast mineral powder leaked across everything,' he noted on 18 July. He tried to mop up the mess with a damp cloth, only to create a lather. Two days later, en route to Belgium – 'Travel time: six and a half hours. Playing time: one hour' – his luggage was lost. This was the grumpy rock star seen shuffling through airports, ignoring fans who asked for autographs, his back aching, his carry-on luggage full of suds, worrying about his weight and missing his girlfriend and his dog. 'An interviewer asks me why don't I smile much.'
As his diary revealed, Lou was on speaking terms with David Bowie again. Indeed, when Bowie celebrated his fiftieth birthday with a concert at Madison Square Garden in January 1997, Lou was among the guest artists. Introduced to the audience by the English superstar as 'the King of New York', he performed four songs with Bowie, including a compelling 'I'm Waiting for the Man'. It was the first time they had worked together since _Transformer_ , and while Lou didn't look overjoyed to be on stage with his former producer, he didn't hit him.
The consensus was that Lou was mellowing, thanks partly to Laurie's influence and also to the fact he wasn't drinking manically any more. 'Everybody says I am really nice now,' he told the _Sunday Telegraph_ in advance of the 1997 Meltdown Festival in London, which Laurie was curating and at which he was performing. He had become enthused about playing acoustic guitar recently, having found an electronic gizmo that eliminated feedback, lending his instrument what he called 'the sound of diamonds'. The Meltdown show was recorded for release as the CD _Perfect Night_. 'I myself don't notice a difference at all,' he said of his supposed good mood, 'but I have been told.' Few journalists would agree.
'What do you regard as the lowest depth of misery?' Lou was once asked.
'Being interviewed by an English journalist.'
There were several reasons for his notorious loathing of the British press. It is generally true that the press in the United Kingdom is more irreverent than in the USA, and while Lou had always commanded attention and respect in the UK, he was also sharply criticized and sometimes mocked. Although he didn't admit that this was the principal cause of his problem, he brooded on his bad reviews. His insistence on only talking about his current work in interviews, without reference to the past, or his private life, frustrated journalists who weren't content just to report what he wanted to tell them in order to sell his latest record. When they probed for more interesting material, he became tetchy. Over time, he became so defensive that it was almost impossible to talk to him about anything other than his new record, or recording technology, a deadly dull subject which he found fascinating. 'His paranoia sucks the life out of you,' groaned a writer for _The Times_ after a typically frustrating encounter in 2012. More generally, Lou had no patience for journalists who were inadequately briefed, or asked him questions he felt he had been asked too often before, though most celebrities learn to cope with this as part of their job. His crustiness was, to some extent, the carapace of an insecure, emotionally fragile man who was seldom at ease with journalists, distrusting their motives. 'I get nervous about interviews,' he once admitted. He was never a great rock 'n' roll interviewee like John Lennon or Bob Dylan, whose best interviews were highly entertaining as well as intellectually stimulating. A file of Lou's late interviews, in particular, makes for tedious reading, as a succession of journalists tried to engage him in conversation, only to be rebuffed by a suspicious, surly old man. This was not in fact an issue peculiar to his dealings with the British. 
Lou had run-ins with journalists of all nations over the years, as he had a habit of falling out with people generally. As we have seen, his biography is littered with quarrels. To some extent, he was simply disagreeable.
His particular problem with British journalists, a feud he played up in his last years, as if it amused him, was compounded and made ridiculous by the fact that he had to engage with them every time he released a record. Despite being the so-called King of New York, Lou's main market was Europe, where he still sold albums in reasonable numbers, where he played his biggest shows, and where he had admirers in high places. A prime example of his enduring status in the UK in particular came in 1997, when the BBC collaborated with him in a remix of 'Perfect Day', featuring guest artists including Bono, David Bowie, Lesley Garrett and Elton John, all filmed for a sumptuous complementary video to promote the corporation. When the recording was subsequently issued in aid of charity, it went to number one.
Home after his travels, Lou was mooching around the Gagosian Gallery in Chelsea one weekend when he recognized a colleague. 'We are there looking at art and somebody comes up behind me and picks me up and holds me tight and says, "Guess who?" "I don't know – put me down!" It's Lou,' recalls Godfrey Diamond, Lou's producer on _Coney Island Baby_ , one of many working relationships that had gone bad. Enough time had passed for Lou to forget the details. He invited Godfrey back to his apartment to listen to some new songs. 'He played me a couple of songs, and I'm listening. There was some good stuff there. I said, "Lou, all I want you to do is give me another 'Sweet Jane'. You're the master of writing songs about people. I don't know anybody else who can write about a person the way you can." He looks at me and goes, "Godfrey, I try to write 'Sweet Jane' every day," in this deep, awful, mean, aggravated, upset voice. Clearly, that wasn't the thing to say.'
Godfrey's comment about wanting Lou to record another 'Sweet Jane' embodied an existential problem facing 'legacy' artists whose audience was more interested in old songs than new work. Lou's interests were precisely opposite, and he made some of his boldest, most interesting music at the end of his career, though few people were paying attention. His late album _Ecstasy_ , songs from which he played for Godfrey, was rich in powerful, hooky songs like 'The Rock Minuet' and 'Baton Rouge', related in his best world-weary speaking voice. 'I remember thinking, "Ah, this is the Lou Reed that I remember!" Him telling a story... Also that thing he had of bringing in a sense of humour,' notes Jane Scarpantoni, who, having listened to Lou's music for years, was thrilled to play cello on _Ecstasy_. She went on to work with Lou extensively at the tail end of his career. 'To me, if you listen to "Baton Rouge" that's a song about divorce if ever I've heard one... I don't know if it's about Sylvia.' Other new songs seemed to relate to Lou's life. Infidelity was the subject of 'Mad': a man cheated on his partner when she was out of town. She threw a coffee cup at him and called him scum, and dumb, 'dumb as my thumb', a simple but telling rhyme, articulated with such feeling that it was tempting to think that this was something that had happened. Ultimately, it didn't matter, as Jane observes. 'Whether that truth is him, or someone else, or a totally fabricated [story], you know that the emotion is for real and he's not fooling around.'
In comparison to the upbeat lovers of _Set the Twilight Reeling_ , the characters in _Ecstasy_ were in agony. Lou drew on the Oedipus story to create 'The Rock Minuet', described by Paul Zollo, editor of _Performing Songwriter_ , as 'maybe the most graphically violent song ever written in waltz-time'. In this remarkable work he presented a series of violent vignettes, including scenes of drug abuse, rough and transgressive sex, torture and murder. He delivered his lines in a matter-of-fact tone that made the imagery more disturbing, pushing boundaries again. Even Lou hadn't written so explicitly before about the links between abuse, violence and eroticism, describing scenes where two men tied up a victim and sewed up his eyes for kicks; another where the protagonist picked up a guy by the waterfront 'and thought of his father as he cut his windpipe'.
A package arrived from London while Lou was recording. 'I made this [guitar] pedal that sounded to me like an amplifier when it's just about to explode, or it's very ill,' explains engineer Pete Cornish. 'I made this pedal and sent it to Lou, saying, "This simulates imminent amp death." He immediately called it the Death Pedal.' Lou used the Death Pedal on another terrific new song, 'Like a Possum', becoming so excited by the crunching tone the guitar pedal gave his instrument that he kept playing, repeating the lyrics of the song until they became a semi-abstract collage on top of the churning music, creating a track that was reminiscent of 'Sister Ray'. Here again were themes of drug taking and casual sex, all to fill an inner emptiness. This had been a theme of his writing throughout his career, as it was an issue in his life. The recording was very long, at over eighteen minutes. 'The question came up – why edit it?' asks drummer Tony Smith. 'Lou hadn't done something like this in many years – a never-ending piece that grows and builds... Lou said, "I want it exactly like it is."'
Like most of his songs, 'Like a Possum' was surely a mixture of experience, observation and imagination, but it was nonetheless the authentic expression of a man who knew what it felt like to indulge himself, as he sang, until it hurt. He told _Newsday_ that the fact listeners reacted strongly to the material was a sign of quality. 'It's like watching a movie, a really good one. You know it isn't real. But at a certain point, if it's really done well, you're there. That's what I try to do on the record.' Still, the American public showed little appetite for an album about 'violent death and violent sex', as _Newsday_ wrote in 2000. Lou's record company relied on foreign sales to earn back the modest amounts they advanced him to make his albums. 'So even if a record was a little iffy in the United States, which it usually was, he would do really well in France, for sure; in Scandinavia he was very big; Italy; Germany. So even though those weren't gigantic numbers, they added up,' explains Howie Klein. By the start of the new century, however, Lou's sales were in decline all over the world, as were the sales of most artists in the internet age. The fact that he was making such challenging new music didn't help broaden his audience, and he followed _Ecstasy_ with a work that most people found completely indigestible. Nevertheless, _The Raven_ was another fascinating album.
Lou had always felt drawn to the work of Edgar Allan Poe, a man described by one contemporary as 'intelligent, wayward and wilful', which also described Lou. They had a lot in common. Both were American outsiders who wrote sensational stories of the outré and morbid. They dressed in black, dosed themselves with alcohol and opium and lived in Greenwich Village. As he walked the dog, Lou traversed the same streets Poe walked in the nineteenth century. Robert Wilson first suggested that they collaborate on a theatre piece inspired by Poe's work, as a result of which _POEtry_ , another stylized rock opera based on literary source material, debuted at the Thalia Theatre in Hamburg in 2000. Lou then decided to independently record a studio album based on Poe's writing, a grandiose concept album that would make _Berlin_ seem modest. Over the course of three years, _The Raven_ grew into a sprawling double album featuring numerous guest artists of distinction, including his jazz hero Ornette Coleman, the vocal group the Blind Boys of Alabama and David Bowie, presumably as payback for Lou appearing at his fiftieth birthday show. Lou also mentored the British singer Antony Hegarty, a large, sorrowful young man who added his tremulous falsetto to the project. Prose passages were narrated by actors Steve Buscemi, Willem Dafoe and Amanda Plummer.
_The Raven_ included theatrical set pieces, sound effects, old songs ('Perfect Day' and 'The Bed') mixed up with new songs based on Poe's work. Lou felt free to rewrite anything that took his fancy, including Poe's epic poem 'The Raven', which gave the record its name, as well as short stories and essays such as 'The Imp of the Perverse', which addressed a dilemma which spoke to him personally. 'Why am I drawn to do what I should not? I have wrestled with this thought innumerable times: the impulse of destructive desire – the desire for self-mortification,' Lou explained. 'Why do we do what we should not? Why do we love what we cannot have? Why do we have a passion for exactly the wrong thing? What do we mean by "wrong"?' He worked on this ambitious album in New York through September 2001, when the twin towers of the World Trade Center, which he could see from his apartment, were hit by passenger planes, caught fire and fell. 'Fire Music' on _The Raven_ was his response to the disaster.
Reprise hated the record. 'It was a bad time,' recalls Fernando Saunders. 'I spoke to Lou. He said, "The record company doesn't want to put it out."' _The Raven_ was eventually granted a delayed, low-key release in 2003, when it received muted reviews. _Rolling Stone_ found much to admire but predicted that the unconventional aspects would 'bewilder the rock & roll animals among Reed's following'. Yet he was often at his best when he eschewed the mainstream. For those that liked him to be bold, as he had been with the Velvet Underground, here was a record to stimulate the imagination. 'I think so. The public didn't judge it that way,' says Howie Klein, whose policy had always been to leave Lou alone as much as possible. 'My job was to be supportive of him, and to be protective of him, give him an environment where he felt he had everything he needed to be creative.' But the accountants had had enough.
Time was catching up with Lou. He turned sixty in 2002. He used an autocue on stage to help him remember his lyrics. Years of hard living and cigarette smoking had given his face the texture of an ancient, deflated leather football, with an underlying redness that indicated health issues. He was diagnosed as diabetic, as a result of which he became increasingly finicky about food. He learned how to tell waiters 'no butter... no sugar' in almost every European language, which was important, because he spent a lot of time on the road in Europe in his last decade. 'Lou got discouraged about making albums,' explains Fernando. 'Lou said, "Why make the records? Nobody buys them."' So he concentrated on his live show, mixing up the set to keep himself interested and pushing his musicians to do their very best. 'When you were on stage you had to give it everything you got,' notes Rob Wasserman, who, along with Fernando, played bass for Lou in later years, sometimes together on stage, unusually.
Lou enjoyed experimenting with unorthodox band formations. In 2003 he invited Antony Hegarty, cellist Jane Scarpantoni and his tai chi teacher, Master Ren Guang-Yi, a Chinese-born man who had recently emigrated to the US, to tour with him, along with Fernando and Mike Rathke. Lou and Ren met up a couple of times a week when he was in New York to practise tai chi on the roof of his apartment, to the amusement of his neighbours, moving indoors if the weather was inclement. Lou's interest in tai chi went back to the 1980s, but he became evangelical about the health benefits of the martial art in his latter years, and invited Ren to tour with him to spread the message. 'He asked me 2003 to go together on tour. We did 150 shows together – Europe, Asia, Japan, USA – we go a lot of place,' explains Master Ren in stilted English. 'He wanted more people to know tai chi... A lot of people do tai chi [now] from [seeing] Lou and our show. This is amazing.' Ren appeared on stage with the band dressed in what looked like silk pyjamas, making graceful and occasionally dramatic tai chi moves under Lou's fond gaze as the band played rock 'n' roll.
This was the weird but good show Lou brought to the Wiltern Theater in Los Angeles in June 2003. When it was discovered that he still owed Reprise an album, his few remaining friends at Warners persuaded management to release enough money to make one last record. The resulting live album, _Animal Serenade_ , is valuable partly as a record of Antony Hegarty's sublime vocals on 'Candy Says'. Lou, who had got into the habit of referring to record-company executives contemptuously as 'music-industry baboons', put a photo of a baboon on the cover of the CD, which was released in 2004. Sales were poor, and Reprise dropped him. Despite having written some of the most original music of the rock era, Lou ended his career without a mainstream record deal, though he would make one more significant album as a guest artist.
It was the time of endings. Sid Reed died in January 2005, never having responded publicly to what his son said about him, or insinuated in songs like 'Kill Your Sons' and 'My Old Man'. One would never guess he was such a brute, judging by his obituary in _The New York Times_. 'Devoted father of Bunny and Lou,' it read, 'a man of integrity and dignity for ninety-one years.'
There was a paternal aspect to Lou's relationship with his last manager, Tom Sarig, who started to work with him at this time. 'He was kind of like my dad and my son at the same time,' says Sarig, who was born in 1966, making him thirty-nine when he started working with Lou. Like a lot of rock stars, Lou was immature in many ways – wilful, demanding, petulant and egocentric. He required a lot of looking after, but paradoxically he also insisted on being in charge. They got into the habit of meeting once a week for breakfast in a restaurant near Lou's apartment to talk over his issues, and ideas for new projects. One topic of discussion was his recording career. Lou had signed with an independent label, Sanctuary, after being dropped by Warners, but the company had run into financial trouble before they'd made a record with him. One of the first things Sarig did was to get Lou out of this deal. Lou had plenty of ideas about new projects, but his experience with _The Raven_ had put him off songwriting. 'He wasn't feeling like writing a lot of new material, because of that. "I just did the best thing I could and fucking Warners didn't do anything with it," you know. So we started looking for other things to do, other ideas he had.'
Photography had become a favourite hobby, and a first volume of Lou's pictures was published in 2003 under the title _Emotion in Action_. The way the images were sequenced was intended 'to tell a story of sorts, a dream', as he explained in his introduction. With that in mind, the selected images – of ice, sky, water; numerous views of New York, often from the terraces of his various apartments; also pictures taken abroad, including images of wild animals on safari – succeeded in mimicking the sensations of a nightmare, though, individually, the images were less interesting. What was most remarkable about this and two subsequent books of photos, _Lou Reed's New York_ and _Romanticism_ , was the lack of human beings in the pictures, or anything personal. Laurie was glimpsed just twice over the course of three books, once apparently by accident. The impression was of the photographer, solitary and cold-hearted as a raptor, watching the world through a predatory lens-eye. In this sense, the books were a true autobiography.
Another project was an album of electronic meditation music, without lyrics or tunes, a beige version of _Metal Machine Music_. Although Lou had lost his enthusiasm for writing new songs, he recorded this ambient meditation music in his home studio, and he flattered himself that other people might like to hear it. 'Lou Reed does meditation music. It was odd, but Lou was one of a kind,' says Sarig of _Hudson River Meditations_ , which was released on an independent label in 2007. Lou took the cover photo, of the Hudson River, from his terrace. 'He was thrilled that I found a buyer to put it out.'
While Lou employed Sarig to develop his career, he also had a succession of personal assistants running errands for him, young people such as Zeljko McMullen, who found himself working at Sister Ray Enterprises after replying to a job advertisement on the website Craigslist. '[It said], "New York-based musician photographer seeks office intern." It didn't say anything about who it was. It was $12 an hour.' Initially, there was also a female assistant. 'She was so stressed out by him that she was not having her period...' Zeljko took over, cataloguing Lou's books, photos and tapes, manning the phone and taking care of whatever the boss needed. 'He wanted someone to order him car services, make appointments for massages, all of that frou-frou stuff...' When Lou and Laurie went on tour, Zeljko stayed at Lou's apartment to look after the dog and to learn to use his home studio. He also worked briefly with the star on stage. Once again, there was an intense period of bonding. 'It almost turned into a weird father–son relationship. He never had kids. My father died when I was very young... He would tell people he was trying to adopt me. It was a joke.' Then Lou overloaded his assistant with responsibilities, demands and complaints until he snapped. When he texted to complain about the seat he'd been assigned on a flight to Colorado in September 2006, grumbling, 'You know I like window seats,' Zeljko quit.
Significantly, Lou was asked to revive _Berlin_ on stage at St Anne's Warehouse in Brooklyn that year. 'Our agent didn't want us to do it because there was no money in it, and it was one of these artsy-fartsy not-for-profit projects,' says Tom Sarig. Lou was also sceptical at first, partly because he wasn't inclined to look back. 'On that basis, he was tentative about _Berlin_ , but with the people involved it became an exciting thing, and when he actually did it he said it was the best thing he ever did.' Lou worked with Bob Ezrin again on the show, employing his regular band plus additional sidemen and guest vocalists like Antony Hegarty; also a choir, horn and string section. 'Lou was like the king with thirty people on stage,' says Sarig. Steve Hunter came back to play with Lou after more than thirty years, even though Lou ignored him on tour in 1973. 'He [Steve] played on the original record and a lot of stuff back in the day, and had gone, to all intents and purposes, blind. He was teaching guitar in a blind school. And they pulled him back to do this tour, and completely rejuvenated his career,' explains Rupert Christie, who played keyboards in the show, though he didn't rate the original album. 'It's all slightly awkward and the subject matter is ridiculous.'
Julian Schnabel created film sequences for the show, made the backdrop and filmed the opening night on 14 December 2006. Lou's sister, Bunny, and his frail, widowed mother, Toby, were in the audience. 'We were flying by the seat of our pants,' says Christie, noting that there hadn't been time to rehearse properly. _Berlin_ was performed in its entirety. As it was a relatively short work, Lou played some of his better-known songs at the end, rewarding the audience with what the conductor Sir Thomas Beecham would have referred to as a lollipop. _The New York Times_ noted that _Berlin_ live in 2006 was an improvement on the original LP. 'In its time, _Berlin_ carried Mr Reed's music to an ornate extreme, but now its trappings are secondary. What comes through is the way it feels.'
The following February, he took the show to Australia for the Sydney Festival. The more gigs they played, the better _Berlin_ sounded. By the time Lou toured Europe in the summer of 2007, the band was tight and the sound excellent (something he cared about intensely). 'We were rocking it by then,' says Christie. Audiences responded enthusiastically. Jonathan Ross and David Walliams were among the celebrity fans who came backstage after the show in London to pay their respects, not that Lou was grateful. They interrupted him just as he was getting undressed for his massage. 'He had done an interview once for Ross, who thought they were best mates. It was all just about to kick off and Lou was angry. He hated anybody going into his dressing room after a gig because he would get a massage,' explains Rupert Christie, who says that Lou had no idea who Ross and Walliams were. 'I explained who everybody was: "This is David Walliams, a very big comedian in the UK. This is Jonathan, he is a radio deejay and chat-show host." He said, "These are your fucking friends?"... He wasn't too pleased.' Songs that had been reviled were acclaimed and, unwelcome celebrity visitors aside, Lou had seldom looked happier. 'He was in great shape,' says Tom Sarig. 'He was in great voice. He had perfected the _Berlin_ character. He never got tired of it.'
Lou started to think about songwriting again, and maybe making another album. During the London run of _Berlin_ he met with an executive from Decca to discuss a possible deal. 'Lou at the beginning could be a cautious, distant person,' says Charlie Rapino, vice-president of A&R at Decca, who admits that Lou seemed only half interested in his proposition. 'Lou knew he was probably making more money performing _Berlin_ live [at that point]. He was a highly intelligent man. A record at that point wouldn't really matter. He wouldn't make money on it.' However, he had started to write songs again, including 'Power of the Heart', which he wanted people to hear. As they discussed the options over dinner in an Italian restaurant in Hammersmith, Lou asked for some white wine and a glass filled with ice. He poured the wine over the ice, to dilute it. A bottle of soave was consumed. 'We drank a whole bottle. It was me, him and Hal Willner,' recalls Rapino, a modest amount for three with food, but more than most recovering alcoholics would consider wise. No record deal resulted.
Lou wound up touring _Berlin_ in Europe in 2007 and 2008, happy tours, during which he seemed at peace with himself. 'Lou was very intense,' says band member Rob Wasserman. 'I didn't see him as a grumpy guy at all. He was a very humorous person... he was moody, but he was a fun person to be around for me. I think he really enjoyed playing live.' Musicians like Rob became friends, and Lou's late tours were social and convivial. To keep costs down, he flew business class, but he insisted on deluxe hotel accommodation for everybody, and he liked his band to come out for dinner with him in the evening, usually to a gourmet restaurant. After decades of touring Europe, he knew the best places to go in every major city, and his manager kept lists of favourite restaurants, annotated and updated with the names of dishes Lou liked. He had become an extremely picky eater, partly because of his health, also as an extension of his general cussedness. 'He was going through a stage of eating particular colour salads,' recalls Rupert Christie. '"Tonight I'll have a red salad." "What's in that?" "Tomatoes, peppers, can't be anything green in it!" The next night would be a green salad. If there was anything red in it, he would get really angry.' He drank diluted white wine with his meal. 'He didn't drink a lot, but he would always have a spritzer.'
When his tour coincided with Laurie's dates, the couple met up for a few days' rest and relaxation. 'He was always looking forward to seeing her, and when they were together they were like children, they were so happy to be together,' says Jane Scarpantoni, who spent time with the couple during a tour break in Sardinia. 'I thought, "Ah! He is head over heels for her. And vice versa."'
When they were apart, they emailed and spoke by phone. Laurie was performing in Los Angeles in April 2008, talking to Lou on her mobile phone, when she began to list the things she hadn't done in life and might not do now that she was almost sixty-one. 'Like what?' asked Lou, who was sixty-six.
'You know,' she said, in her sing-song voice. 'I never learned German, I never studied physics, I never got married...'
'Why don't we get married?' He'd had this in mind for some time. Lou was a man who liked to be married, despite his bisexuality; he liked to have a woman to look after him. He suggested they meet the next day in Boulder, Colorado, the next stop on Laurie's tour.
'Don't you think that's too soon?'
'No, I don't.'
Lou flew to Boulder. They wrote their vows and were married at a friend's house in the city on Saturday 12 April. That evening Laurie did her show as scheduled at the Boulder Theater. Lou understood. It was third time lucky for a man who found himself well matched in his final marriage. A relationship that had already lasted nearly fifteen years deepened and achieved a new level of tenderness, quietening and enriching the last years of what had been a turbulent life. But there were more storms to come.
## XV
## Nevermore
### 2008–13
DURING THE LAST five years of his life Lou kept himself busy with a wide range of projects, including books of his photographs, films, speaking engagements, stage shows and endorsement deals. 'Between the branding, the photography stuff, music, film, theatre, he was working on, like, ten to twenty different projects at a time,' says his manager Tom Sarig, adding that Lou's interest in a subject was more important than the fee. 'Lou wouldn't do _anything_ ; you couldn't get him to do anything that he wasn't into.' One of these late projects was _Lou Reed's New York Shuffle_ , a radio show which debuted on the SIRIUS network in the USA in May 2008. Lou co-presented with his producer friend Hal Willner, who acted as his music archivist and foil. The music played was an eclectic mixture spanning jazz, electronica and doo-wop, the tone of the conversation that of two old men kibitzing. The homemade nature of the show was enhanced by the fact that Lou recorded it at his apartment when he was in town. 'It was something he enjoyed,' says Sarig. 'We would [also] take it on the road when Lou was on tour and do instalments from the road.'
Aside from his regular stage show, Lou toured with the musicians Ulrich Krieger and Sarth Calhorn in 2008–10 as Metal Machine Trio, performing what they called Deep Noise, electronica inspired by _Metal Machine Music_ , which had developed a cult following over the years. 'We did several shows in New York that were successful, and we booked two tours of Europe, and put out [two] live records of Metal Machine Trio, and that was a joy for him,' says Sarig. 'There was no pressure on him to sing. He would still [sing] a couple of things, so the fans didn't go crazy. But he just enjoyed this chaotic noise music.'
Lou used some of this Deep Noise music as the soundtrack for _Red Shirley_ , a short documentary film he made in 2009 about his ninety-nine-year-old aunt, Shulamit 'Shirley' Rabinowitz. Lou was fond of his aunt, and he was kindly towards her during their onscreen interview, yet couldn't quite eradicate his habitual grudging tone. 'You're joking?' he asked her, as she explained how she had left Poland for the New World at nineteen with two suitcases and no English, as if she, like the rest of mankind, was trying to deceive him. 'Aw, come on... You can't be serious. You're joking, right?... You're kidding?'
To his credit, Lou helped support Shirley in her old age. 'Lou really took care of her for a long time, and paid for aid, and paid for her apartment,' says his sister. Here was another side to the man, a kinder person who could be generous. There were other examples of his charity. Back in the old days at Max's Kansas City, Lou got to know a professional dancer named Mike Quashie, who, dressed in a loincloth and wielding a spear, danced the shango, the watusi and – his show-stopper – the limbo on the New York stage, acquiring a degree of celebrity as the Limbo King. They remained friends. Lou invited Mike to a party to celebrate his 2008 marriage to Laurie Anderson. When Mike subsequently had a nervous breakdown and fell behind with the rent on his Greenwich Village apartment, Lou came to his aid. 'When I was sick, he paid my rent for about six months,' says Mike, who later moved into sheltered accommodation in the Bronx, where Lou visited him. 'He was very kind to me... He was a great friend.'
At the same time, Lou was capable of turning his back on people who _thought_ he was their friend. When Little Jimmy Scott broke his hip in 2007, his wife Jeanie asked Lou to take part in a fundraising concert for the singer. '[Lou] would say to me, "Whatever Jimmy needs at all, just call me." [And] that was the only time we ever asked him for a favour, to be on the show, to sing one song, and he got really nasty about it. He didn't want to be bothered... He says one thing and behaves the opposite way.' So we see two sides to the man, apparently contradictory, but nevertheless part of his character.
It had long been an article of faith for Lou that Long Island was an irredeemable shit-hole. He left the suburbs for New York City in his twenties, and seemed determined never to return. 'This is funny, this is ironic. He always told me, "Dion, I have one fear and one fear only, and that's the suburbs,"' chuckles Dion DiMucci. 'And he ended up living on Long Island at the end of his life. That is totally ironic.' In 2009 Lou bought a summer home between the villages of Amagansett and Springs on the fashionable eastern extremity of Long Island, beyond the Hamptons. The area is very different to the lower-middle-class South Shore where Lou grew up. It is where the rich and famous vacation, including fellow entertainers. Sir Paul McCartney owns a holiday home nearby. But it is still Long Island.
For $1.5 million, Lou purchased a shingle-sided cottage on an acre and a half of land, with a sun deck and a lap pool, the house shaded by pine trees, between which he slung a hammock. It was a place where he and Laurie could relax as they moved into old age. They had friends locally, many of whom were fellow Manhattanites who also worked in showbusiness, people like Jenni Muldaur, who sang in the _Berlin_ show. Jenni was a neighbour both in Greenwich Village and on Long Island and, despite the age gap between them (she was born in 1965, the daughter of singer Maria Muldaur), they became close during these last years. 'I would say he was one of my best friends. I saw him almost every day... The guy was a very powerful force of nature, and for all the hardness on the outside there was just the most soft, beautiful inside,' she says. 'He was sort of like a father [and a] brother... It had a lot of layers. And we were just pals. We did stuff together.' Lou liked to walk and swim when he was in the country. 'We did things like go to the beach with the dog, so all these mundane things.' In New York, he enjoyed going to movies with Jenni. 'We went to a ton of old movies at the Film Forum.' He also liked to kick back at home in front of the TV to watch a boxing match, or _Mad Men_.
Lou's drinking had crept up again, and he made yet another attempt to stop. In May 2010 he was one of the guest stars at a Peter Gabriel concert in New York, where he ran into his former manager, Jonny Podell, who'd also struggled with addiction. 'He hugs me, whispers in my ear, "Eighteen months, JP." He was in AA. He was sober eighteen months.' The following month, Lou and Laurie appeared in fancy dress as King Neptune and Queen Mermaid at the Coney Island Mermaid Parade. Lou looked glum as they rode through the crowd in a pedal car. Laurie smiled and waved, trying to jolly her husband along. Onlookers remarked on how old he looked, and somewhat strange. When he cracked a smile he revealed an extraordinary new set of white metal teeth, a radical look for a sexagenarian, more suitable for a rapper. 'He thought it looked cool,' says Tom Sarig, adding that Lou saw himself as more of a man of the streets than an intellectual at the end of his life. 'I remember talking to him about it a few times specifically and he said, "What is an intellectual exactly? It's a way a person thinks." He said he didn't think that way, that he was a guy from the streets.'
That summer, Lou toured with Damon Albarn's band Gorillaz, performing 'Some Kind of Nature' as one of the headline acts at the Glastonbury Festival, where his movements were noticeably stiff and slow. This was followed by a collaboration with another hugely popular band, with whom he made his very last record, _Lulu_. Suitably for a transgressive artist who revelled in subverting expectations, it turned out to be one of the most controversial works of his whole career.
The story originated as two _fin-de-siècle_ plays by the German dramatist Frank Wedekind, _Erdgeist_ (_Earth Spirit_) and _Die Büchse der Pandora_ (_Pandora's Box_), which followed the adventures of a libidinous good-time girl named Lulu (coincidentally, Lou's nickname at the Silver Factory) who took a string of lovers, most of whom met with disaster. She shot one man dead during a jealous argument, thus ending the first play. The second play tracked her descent into prostitution in London where she was ultimately murdered by Jack the Ripper. Bizarre and explicit, the plays were originally meant as a satire on the German bourgeoisie of the late nineteenth century, but the story resonated beyond its era. It was first adapted for the screen in 1929, and made into an opera by Alban Berg in the 1930s. When Robert Wilson decided to create a new stage version with the Berliner Ensemble for the twenty-first century, he turned once again to Lou. 'For me, Lou was never someone who was very concerned about the commercial aspect of the rock industry. He was an inventor,' Wilson explains. 'It was with this music in mind that I asked him to work on Wedekind's _Lulu_.'
As Lou started work on this project he began to suffer liver problems again. Then his dog got sick. Lou and Laurie did everything they could to help Lolabelle, employing a form of musical therapy that involved the animal resting her paws on a keyboard, giving the impression that she was playing piano in return for treats (as can be seen on YouTube). Lou was almost as distraught as a bereaved parent when the dog died. 'Lou was going through a very difficult time,' says Wilson. 'First, there was the issue of his health, and second, and even more problematic, was the long, slow death of his dog, Lolabelle. When we met, he could only speak about his love for his dog.' When Lou delivered the music for _Lulu_ , his first set of new songs since _The Raven_ , a lament for a dead dog was mixed in.
This production of _Lulu_ was initially staged in Germany in 2011, then in Venice and Paris. 'It did quite well on an artsy European level,' says Sarig. Events then took an unexpected turn. Despite disparaging Metallica in private in the past, Lou had recently performed with the heavy-metal band at Madison Square Garden to celebrate the twenty-fifth anniversary of the Rock 'n' Roll Hall of Fame. Lou and Metallica were an odd combination, and not everybody was impressed with their performance. 'Those guys aren't at his level,' sniffs Fernando Saunders. 'Those guys can't hardly even play "Sweet Jane" with him.' But Lou enjoyed himself and started to talk about recording an album of his old songs with the band at their studio in Marin County, California. Then he changed his mind and said he would prefer to make an album of his new _Lulu_ songs with Metallica, which they agreed to.
There were warning voices from the start. 'I remember advising Lou against it,' says Sarig, who sensed that an adaptation of an obscure European theatre work wouldn't appeal to Metallica's conservative fans. It was going to be hard enough to get them to accept Lou singing with 'their band'. 'I thought the commercial viability was much less than if we did Lou's greatest hits with Metallica. I remember Lou getting angry at me for that – quite angry... At one of our breakfasts, as we were on the launch pad to go to Marin County, I said, "I don't know if this is a great idea. We should stick to the first idea. That was a great idea – the greatest hits." And he got so pissed off at me.'
Metallica were used to working slowly on their records, and expected to spend a good amount of time with Lou in the studio before they attempted to record anything. They were taken aback when he took charge of the sessions and forced the pace to the extent that they recorded all ten basic tracks in as many days, during which time he virtually told them what to play. Lou became so tyrannical that guitarist Kirk Hammett had to negotiate to get a couple of solos on the album, while Lou challenged the band's leader to a fight. 'One time, I had to point something out to him about how things were functioning in the outside world and he got hot and bothered,' says Lars Ulrich. 'He challenged me to a street fight...' This was a late display of machismo by a sick old man. Suddenly, Lou looked terrible, haggard and jaundiced, and no doubt his behaviour was affected by the fact that he wasn't feeling well.
Eight of the ten songs he recorded with Metallica were directly inspired by Wedekind's plays. The story had been shocking in its day, and Lou made it even more explicit. From the point of view of the eponymous heroine, he croaked in his ruined voice about being penetrated with knives, fists and cocks, pleading for the ultimate violation and gloating over the suicides of Lulu's lovers. To hear Lou playing the part of a depraved woman was itself disconcerting; he also sang from the point of view of the doctor who married her. The words came in a stream of consciousness, without verses or choruses. Some of the rhymes were clunky, while he took his preoccupation with transgressive sex and violence towards women to new extremes. 'I'm a woman who likes men,' he growled in 'Mistress Dread', one of several songs on the album that touched on bondage. 'I wish you'd tie me up and beat me... I beg you to degrade me... Please spit into my mouth.' This was hard to listen to, while the underlying story was difficult to discern. As with _Berlin_, Lou was less interested in narrative than in emotion. Also like _Berlin_, he shoehorned in songs that were unrelated to the story. Unless one saw 'Little Dog' as a metaphor for the heroine, it was hard to see what it had to do with anything other than his grief for Lolabelle, while the closing song, 'Junior Dad', had been written years earlier, in collaboration with Rob Wasserman. Tom Sarig believes the lyric related to Lou's relationship with his father. If so, the bogeyman of his life became the subject of the last song on his last album, ending a lifelong obsession.
This grisly melodrama was set to Metallica's thrash-metal music, as loud and repetitive as a great machine spinning out of control, alleviated with snatches of electronica. The band's front man, James Hetfield, essentially sang backing vocals to Lou, who pronounced himself delighted with the result. He told the press bombastically that _Lulu_ was a record for a sophisticated adult audience who hadn't outgrown rock, adding that this was the future of serious music. 'If you say, "I want more rock, I want rock an adult can listen to, I want to still have the pleasure of rock, I don't want to have it dumbed down for me..." If you want to be able to continue to get that thrill that only rock can give you, then that's what this record is,' he raved. 'You know, this is the end of old European classical music. It's dead and buried once and for all. But this is the new breed, the new generation of classics – rock classic... Power.' Dismissing classical music in this way was absurd, but Lou had always been prone to such statements, especially when stoned. Use of medication in his last years may account for his increased excitability.
Metallica's followers had expressed their misgivings about _Lulu_ ever since news of the project emerged. One outraged fan adapted the subtitles of Oliver Hirschbiegel's film _Downfall_ , so Hitler was seen to fly into a rage after being told that Metallica were working with Reed. Lou, to his credit, saw the funny side of the satire and posted a link to it on his new website. Despite the signs that fans weren't going to like what they heard, Metallica's record company, Vertigo, shipped _Lulu_ in large quantities in the autumn of 2011, evidently anticipating success, while Lou and the band embarked on a promotional tour of TV studios.
The double CD that landed on reviewers' desks was a nasty object, illustrated with images of a dismembered female mannequin, the lettering scrawled in what looked like blood. Then there were the songs, a Blitzkrieg of head-banging rock combined with the most outrageous lyrics Lou had ever written, some of the words verging on the obscene, though one had to read the lyric sheet to make complete sense of what Lou said, in a voice that had become thin and quavery with illness. Many critics were revolted; others were baffled. A few, like Brad Nelson in the _Village Voice_ , rated it highly. Offering the faintest praise, _Rolling Stone_ reported that _Lulu_ was 'less ridiculous than you might expect'. Print reviews meant little to Metallica's core audience, who spoke to each other online, where the consensus was that the album was 'garbage', this being one of the more popular words used to denounce it on Amazon, where fans also complained that it was 'the worst metal album of all time', awarding it the lowest rating. Lou's own, much smaller audience was used to his eccentric projects. It was reassuring, in a way, that his last record turned out to be a wild one. As Mike Rathke notes, Lou never mellowed. 'He never slowed down, and he never calmed down.'
Despite ill health, Lou wanted to perform the _Lulu_ songs live on tour with Metallica, but the band was spooked by the negative reaction of their fans and plans for a full-blown _Lulu_ tour were scrapped. 'Metallica's management got a little scared of pushing the envelope too far with this record,' says Tom Sarig. 'They've got a huge business. I can understand it.' The CD was a flop by the band's standards, selling 32,000 copies in the USA over two years, less than a tenth of what they had hoped for, though this wasn't bad for a Lou record. In any event, it was the bitter end of a forty-four-year recording career.
By the time Lou turned seventy in March 2012, he was in poor health. 'Lou was sick for the last couple of years [of his life],' Laurie Anderson noted after his death. To some extent, he had brought his problems on himself. He had abused drink and drugs since his teens. As a result, he had suffered bouts of hepatitis since his twenties, and was advised in his thirties that he had seriously damaged his liver. Although he curbed his drug taking at that time, he didn't stop drinking completely, and his liver problems became progressively more complicated in his final years. He was now obliged to submit to a course of interferon injections to treat his hepatitis C. The treatment made him feel lousy.
Things were serious enough for him to make his last will and testament in April. Laurie would be the principal beneficiary, but he set aside $500,000 for Bunny to help look after their mother, Toby, who now had dementia and was living in a care home on Long Island. He bequeathed his property to Laurie, together with Sister Ray Enterprises, his personal belongings and 75 per cent of his 'residuary estate', which included cash and investments and income from royalties and song publishing, as well as any posthumous business done in his name. The remaining quarter share would go to Bunny. His estate would be managed by trustees, who would make regular payments to Laurie and Bunny out of revenue.
Having settled his affairs, Lou went back to work. He continued to tour as long as possible and appeared to enjoy himself on the road. 'Lou would do some amazing things. I remember on this last tour, 2012, he did "Junior Dad". And when he was singing that song there was a moment he thrilled everybody. He dropped down to his knees with his guitar behind his back [and] clutched the mic,' recalls his drummer Tony 'Thunder' Smith. 'And you are looking at him and, Oh my God, it's like the divine light came through him. That energy. That gift as a singer, as a leader, as a front man, to be able to hold the audience... You either have it or you don't, and Lou, God rest his soul, definitely had it.'
This was the end of his career as a performer. He had been under the care of doctors for some time. Now he was diagnosed with liver cancer. Shows booked for 2013, starting with the Coachella Festival in California in April, were cancelled. The band was told ten days beforehand that he wasn't going to play, but not the reason why.
Lou was at the Cleveland Clinic in Ohio. Unless he received a liver transplant, he would die. 'When Lou first came, he was very sick, and before the transplant Lou had to come to Cleveland off and on to get certain therapies,' explains Charles Miller, Director of Liver Transplantation at the Cleveland Clinic, and Lou's surgeon. 'He was getting sicker and sicker, and a little crabbier as time rolled on. And one time he wanted to get back to New York really badly. He called me, he said, "Charlie, let me out of here, I want to go home." I said, "Lou, I'll be there just as soon as I can. I'll walk over and I'll say goodbye." On the way over I got a phone call. The liver had materialized, almost out of seemingly thin air, for Lou.'
Dr Miller walked into Lou's room. 'Hi, Lou!' he said. 'I know you want to go home. But I've got another idea. I think I have a liver for you.'
'Are you shitting me?'
'No, I think I have a liver for you. As soon as it gets here, I'll have a look at it...'
'When can we do it? When can we do it?'
'A couple of hours.'
Considering the reckless way he had abused his liver over the years, Lou might be considered very lucky indeed to have the opportunity of a transplant. Essentially, he was in a position to buy himself a second chance.
As the liver donor was taken off life support and died, Lou received new life. Dr Miller sewed the donor liver into his body while listening to 'Walk on the Wild Side'. The operation went well. 'They put it in immediately, and it started to work immediately. Every week it gets better,' Laurie explained in a June 2013 interview with _The Times_, which broke the story. 'I don't think he'll ever totally recover from this, but he'll certainly be back to doing things in a few months.' Apparently displeased with his wife's prognosis, Lou posted a contrary message online: 'I am a triumph of modern medicine, physics and chemistry. I am bigger and stronger than ever.' A few days later he was photographed walking with a cane near his New York apartment. On 14 June he posted a picture of himself doing a tai chi kick in his living room.
His first public appearance after the transplant was as a guest speaker at a festival in France on 20 June. His voice was shaky as he read from his adaptation of Poe's 'The Raven', in which the eponymous bird can be interpreted as the personification of Death. 'Quoth the Raven "Nevermore."' Lou spoke to reporters briefly, irascible and thoughtful by turn. 'How could time go that quickly?' he asked rhetorically. 'It never ceases to amaze me. The other day I was nineteen. I could fall down and get back up. Now if I fall down, you are talking about nine months of physical therapy.'
He spent the rest of the summer with Laurie and their new dog, Willy, at their home on Long Island. He walked on the beach, swam in his pool and ate more than usual, relishing dessert in particular. Everything suddenly felt precious. 'He would say, especially towards the end, "I'm so lucky,"' says Jenni Muldaur. 'He would say, "Do you know how lucky we are?"' Lou received get-well messages from friends and colleagues and replied in more kindly terms than in the past. 'Lou very much became a changed person,' says Velvet Underground lawyer Chris Whent. 'He was much easier to deal with, much mellower, more ready to share affection, and prouder than I can tell you of what the Velvets did.' Lou took a close interest in the re-release of the band's MGM albums in deluxe new editions, and spoke by phone to John Cale in Los Angeles, where Cale was now based, about the possibility of working together on a new arrangement of _Songs for Drella._ One day, Moe Tucker received an unexpected gift of candy at home in Georgia with a note, 'To Moesy with all love and respect, Lou.' 'I called him and said, "Thank you, you made me feel special." And he said, "You are." But he didn't say, "I'm sick." I think this package from nowhere had something to do with starting to say goodbye, because it was really unusual.'
He went to London in September to help Mick Rock promote a book of photographs he had taken of Lou over the years. They met the press at Trident Studios in Soho, where Lou had recorded _Transformer_ , after which the book was named. He also attended an awards ceremony at the Royal Opera House, where he spoke briefly about his debt to Warhol. Back in New York, he did an interview to promote a range of headphones. During the conversation, he spoke about his first guitar. 'Your father gave you a guitar?' the interviewer asked. Lou snarled in reply: 'My father didn't give me shit.' Fernando Saunders was struck by the vehemence of this remark, believing that it went to the root of Lou's psychology. 'Lou had a lot of bitterness,' he says. 'That interview when he said his father didn't do shit for him, I think that's the answer right there... I think that's where all that came from.'
Lou's failing health was still more evident when he did a book signing with Mick Rock at the former premises of CBGB in New York in October. His energy level was down. He spoke softly, as if it took a lot of effort, and he looked awful. When people at the back of the room continued to talk over him, he lost his temper and yelled at them: 'Hey! Shut up.' At the end, photographer Bob Gruen stepped forward to say hello and shake Lou's hand, shocked by how he looked. 'We all knew he was ill. He was yellow that day. It was obvious.'
The yellow tinge to his skin indicated that his new liver wasn't working. 'We all agreed that we did everything we could,' says Charles Miller, who met Lou at the Cleveland Clinic in October. Laurie took him home to New York, where he spent time with friends, including his neighbour Julian Schnabel. They watched Schnabel's film of the first night of their 2006 _Berlin_ show. '[He] said, "Does anybody know?" He never felt like people really got it. He always felt, in a way, unappreciated...' Then Lou left the city for the last time, returning the way he had come in life, past the old neighbourhoods in Brooklyn where he played stick ball and stoop ball as a kid, past Freeport, and out into rural Long Island. It was fall and the trees were resplendent in autumn colours as the car turned off the Montauk Highway, bumped over the railway track and rolled down the lane to the little grey house in the woods.
On Friday, 25 October, Lou was visited at home by a local doctor and his friends Jenni Muldaur and Hal Willner. They sat with him while Laurie made a quick trip into Manhattan. 'He was in a lot of pain,' says Jenni. Lou was most comfortable lying on the floor, so they lay down with him to watch David Cronenberg's _Crash_ and to listen to a music compilation Hal had made. 'We had the most peaceful night, and it was quite beautiful,' says Jenni. 'And then Laurie came back. She needed to do something [in] the city. She came back late at night.'
Lou and Laurie stayed up through Saturday night, talking and practising breathing exercises, Lou repeating his Buddhist mantra ' _om ah hung_ '. On the morning of Sunday, 27 October 2013, he asked his wife to take him into the light. She says that these were his last words. Buddhists seek the Clear Light of the Void at the time of death; it is analogous to a Christian seeing the light. More prosaically, Lou wanted to sit in the sunshine. When he was settled on the sun deck, he practised a tai chi exercise with his hands. 'I have never seen an expression as full of wonder as Lou's as he died,' Laurie later wrote in a tender and moving obituary for _Rolling Stone_. 'His hands were doing the water-flowing 21-form of tai chi. His eyes were wide open. I was holding in my arms the person I loved most in the world, and talking to him as he died. His heart stopped. He wasn't afraid. I had gotten to walk with him to the end of the world.' It was 12.30 p.m. Cause of death was cardiopulmonary arrest, as a consequence of the cancer which had started in his liver and then spread. He was seventy-one. 'It's good that he could die with her holding him,' observes Lou's old playmate Billy Name. 'He was able to die a graceful, elegant death, instead of being alone, and flopping out or something. She made his life beautiful.'
Friends gathered at the house to sit and pray with Lou's body during Sunday night. The body was transported to the nearby town of Center Moriches on Monday, where it was cremated. The next morning, Bunny told their mother. She had to speak loudly to get through to her, and didn't know if Mom would fully understand, but Lou's death registered. Toby sat up and gabbled excitedly. 'She said something like "ton",' says Bunny's husband. 'She was trying to say "son".' Toby died nine days later.
Despite the fact that Lou had been a niche artist with relatively modest record sales, his death received considerable media attention. The CNN screen on Times Square flashed his picture with the inevitable cliché, used in countless headlines, 'He walked on the wild side.' Celebrity friends paid tribute on Twitter. 'He was a master,' wrote David Bowie, with little discernible warmth. Newspapers ran substantial obituaries. Lou's cerebral, literary brand of rock had always appealed to writers and editors, and the press coverage of his death reflected this fact more than his popularity with the general public. While the tone of most obituaries was laudatory, Lou's antagonism towards journalists coloured some articles. Writing in the _New Statesman_ , Kate Mossman noted his 'studied charmlessness', adding that he 'could be one of the coldest, most humourless and – worse – boring characters rock 'n' roll has ever seen'.
Posthumous publicity boosted sales of his music, increasing the value of his estate to over $30 million – a surprisingly large sum for a man who had been in a financial muddle for much of his career. Laurie and Bunny were rich women. Bunny considers it remarkable that Lou lived as long as he did, bearing in mind 'the emotional issues that pursued him throughout his life'. They had remained close, which was an achievement when one considers how tricky he was. 'In his heart, my brother was a profoundly good, moral person,' she concludes.
Although Lou had been raised in the Jewish faith, he and Laurie had developed an interest in Tibetan Buddhism, which teaches that there is an intermediate state between death and reincarnation as another human being, a period symbolically taken to last forty-nine days, known as the Bardo. Every Sunday for the seven weeks of the Bardo after he died, Laurie met with friends to talk about aspects of his life as he made his spiritual journey. In the final stage the deceased is believed to be judged. Good karma accumulated in life can send the deceased to Buddhist Heaven, before returning to Earth to start all over again, while bad karma can result in a period of torment and pain not unlike being in Hell before reincarnation; it is only the fortunate few who achieve Nirvana and escape the circle. The Bardo process is marked by prayers, ending, in Lou's case, on 15 December 2013. The next day, friends and colleagues gathered at the Apollo Theater in Harlem for a memorial concert.
Taking an overview of his career, Lou's biggest achievement was as a member of the Velvet Underground, and the two surviving members of the original band were among the artists invited to the memorial. Laurie asked Moe if she would sing one of Lou's songs. John Cale then contacted Moe to suggest that he accompany her on stage, but Moe was concerned that there wasn't enough time to rehearse, and she thought she might cry if she tried to sing. 'I said, "No, I don't think so." And then [Cale] backed out... I think I gave him the excuse, I think that gave him [an] out, because he was so emotional about Lou dying.'
John did not attend the memorial. Instead, Moe read a letter he had written for the occasion. 'We got each other... he got me and I got him,' it read in part. 'Much will always be made of the band we formed together in my dingy little living-room apartment – I'll leave those remarks for others – I prefer to consider how much I gained from my friendship with Lou – the part the rest of the world is not privy to: the dreams, ideas and plans we shared and, to a degree, achieved.' Privately, John, a reformed drinker, was disappointed that Lou had returned to the bottle in recent years. 'It came as a shock,' he later said of his death, 'even though I was resigned to the fact that he was doing himself in. "What the hell are you doing? It's all about the work, not drinking a bottle of wine." I don't understand. He went out in blazing colours, I suppose.'
Although many famous artists performed at the memorial, including Debbie Harry, Paul Simon and Patti Smith, the words spoken by Laurie were the most poignant part of the evening. 'From the moment we met, Lou and I started to talk, and we talked non-stop about everything _conceivable_ for twenty-one years,' she told the audience, referring back to their 1992 meeting. She then offered a series of insights into the private life of the man she had known. She referred to his image as a black-clad tough guy, saying: 'He had learned how not to be Lou Reed many years ago. And he could put Lou Reed on and take him off like one of his jackets.' She noted his extraordinary facility for songwriting, how he would sometimes wake in the night and write ideas down, and how songs often came to him fully formed. 'He never changed a word. First thought best thought.' She described an emotional man, who often cried, though she noted that outsiders experienced his anger and frustration. 'But in the last few years, each time he was angry it was followed by an apology until the anger and the apology got closer and closer, until they were almost on top of each other, and finally almost the same thing.'
She spoke about their everyday life together as a couple in New York, where they often went out at night to see a show or attend an event, the home they built together on Long Island, and their travels. She described a mutual love affair that had lasted until his death. 'I never had a single doubt that we loved each other beyond anything else from the time we first met until the moment he died. Almost every day we said, "And you, you are the love of my life."... And even if I was angry and frustrated, I was never for one second bored.' On a lighter note, she recalled her husband's 'over-the-top _insane_ laugh', mimicking his cackle, described his habit of showing how the hairs on his arms bristled when he heard music he liked, and how he often used to tell her when they went for pizza, 'Like you always say, "You can't lose money with bread and cheese".' The audience laughed. This was like one of the funny stories in Anderson's shows, freighted with a deeper meaning about the way people misunderstand each other. 'I don't remember ever saying that. Or actually anything about bread and cheese, but it had become something Lou loved to quote... I had said a lot of other things that I _hoped_ would be memorable, maybe even quotable, but it was this one that he seemed to really have by heart...'
As the laughter subsided, she concluded: 'Lou showed me so many things. And I got to show some things to him, too. During the last few months of his life Lou was so _dazzled_ by nature, by the beauty of water and trees, and he often said, "You always told me the trees were dancing, and now I see that they are. They're dancing."'
This was a moving and insightful speech, while the memorial as a whole was a fitting send-off for a major artist. At his best, Lou Reed captured aspects of urban life in songs which had an economy of language, a distinct, often witty point of view and a literary quality uncommon in rock 'n' roll. While he specialized in the underbelly of life, he didn't just write about drug users and transvestites. He had range. Lou was a limited performer and a variable recording artist with a kamikaze streak, but he stands as one of the most distinctive American artists of the rock era. 'In the pantheon sense, he's up there. Whether he's the greatest American rock performer, I'd give Dylan that, obviously, just thinking of the white guys,' concludes the critic John Rockwell. 'I'd put [Lou] up there, but one notch down, bearing in mind there are a lot of notches below.'
For all those gathered at the Apollo Theater, many friends were absent. This was a memorial led by his third wife, attended by those who'd been in favour during the past few years, a time when Lou had mellowed and developed an elite new social circle. He had cut his ties with his rapscallion past, and people from that past. Absent friends had their own memories: more mixed, perhaps more realistic, no less profound. Erin Clermont was not among those invited to the memorial, despite being Lou's friend and lover over four decades, during which time he frequently called her late at night, when he was speeding, asking to pop round. Many nights he climbed the five flights of marble stairs to her small apartment in Greenwich Village. There is a cigarette burn on the side table in her lounge to remind her of his visits. 'It's hard to think that the phone is not going to ring at three in the morning [and he's going to say], "How are you doing?"' she says, impersonating his gruff voice. His death was the end of part of her life, too. 'What a guy,' she sighs. 'What a guy!'
The first pictures of the Velvet Underground were taken during the summer of 1965, while the group was rehearsing on the Lower East Side of New York. Clockwise from top left: Angus MacLise (their first drummer, wearing a cap), Sterling Morrison, Lou Reed (in Arabic headgear) and John Cale (praying).
The Velvets found a friend and mentor in Andy Warhol who is seen with members of the Exploding Plastic Inevitable at the Silver Factory in 1966. Clockwise from top: Warhol holding Nico's son, Ari; Lou; Nico; John Cale (with moustache); Moe Tucker in polka dots; Mary Woronov; Sterling Morrison and Gerard Malanga with his whip.
The band recorded the bulk of their first and greatest album, _The Velvet Underground & Nico_, in just two days at Scepter Records in New York in April 1966. Nico is seen singing in the studio with John and Lou.
Nico toured with the band as part of the Exploding Plastic Inevitable in 1966–7.
She and John were then forced out of the group, John replaced by Doug Yule, who is seen in profile on tour with the Velvets in Massachusetts in 1969.
Lou is seen ( _bottom left of group picture_ ) with his road band the Tots, looking somewhat ridiculous in glam rock make-up, around the time of the release of _Transformer_.
The hit single from the album, 'Walk on the Wild Side', made Lou an international star, but he struggled to cope with success. By the time he played Amsterdam on 20 September 1973, he was drinking heavily and using speed. He collapsed on stage the following night in Brussels.
By 1974, Lou's career had degenerated into self-parody. He resorted to pretending to inject drugs on stage to excite audiences.
Lou's relationship with the transvestite known as Rachel was one of the most remarkable episodes in his personal life. When he wasn't dressed as a woman, Rachel answered to Richard or Ricky. He is seen as a man, with Lou at CBGB in New York, in 1976. See here for a picture of Rachel as a woman.
At the height of his drug mania in 1978, Lou was photographed looking potbellied and dishevelled backstage in Austin, Texas.
The extraordinary _Lou Reed Live, Take No Prisoners_ was recorded at the Bottom Line in New York. Lou is seen performing a song from _Berlin_ at the club in 1979 with guitarist Chuck Hammer ( _left_ ), Ellard 'Moose' Boles on bass ( _middle_ ) and Michael Suchorsky on drums. He was drinking heavily again, and putting on weight.
Lou married his second wife Sylvia Morales on St Valentine's Day, 1980. The newlyweds are seen with their parents at Lou's apartment on Christopher St in Greenwich Village. His much-maligned mother and father, Toby and Sid Reed, are on the left of the group picture.
Guitarist Robert Quine challenged Lou to make more effort with his music during the early 1980s, but their personalities clashed. They are seen on stage at the Beacon Theater, New York, in October 1984, promoting the _New Sensations_ album.
Working with John Cale again on _Songs for Drella_ , Lou did his best work since the Velvet Underground.
Lou found love with his third wife, Laurie Anderson.
Laurie tried to jolly her grumpy husband along as they took part in the 2010 Coney Island Mermaid Parade dressed as Queen Mermaid and King Neptune. Their dog Lolabelle sits between them.
Following his liver transplant, Lou was photographed walking with the aid of a stick near his Greenwich Village apartment in June 2013. He died four months later.
## Source Notes
Lou Reed is abbreviated in these notes to LR. Full details of court cases are given in the first instance, then abbreviated. I also abbreviate the titles of some newspapers, as will become apparent. For chart positions, I relied upon _Billboard_ and the _Guinness Book of British Hit Singles_ (ed. Roberts), in combination with _The Essential Rock Discography_ (Strong). See the bibliography for full publication details.
### **I Coney Island Baby, 1942–59**
LA and the Eldorados at St Lawrence University, based on author's interviews with Richard Mishkin (quoted dialogue) and Nelson Slater.
Physical description of LR, his draft record and author's interviews.
LR birth and ancestry, vital records and US census.
Louis Firbank: the otherwise excellent _Cambridge Biographical Encyclopaedia_ (Cambridge, 1994), for example, has LR born Louis Firbank in 1944, which is wrong in every respect.
Harold Weiner quoted from author's interview.
Contemporaneous events with birth, _Brooklyn Eagle_.
LR: 'I know what...': _Q_ , Feb. 1992.
For Sid Reed's character and much else in this chapter I am grateful to Merrill 'Bunny' Weiner for giving me (in 2014) a detailed written account of her family history, including her brother's first breakdown. I quote Mrs Weiner from this document throughout this chapter, unless otherwise specified. I also refer to a later version of the document posted online at Cuepoint in 2015.
Julian Schnabel: 'He put his...': _Rolling Stone_ , 21/11/13.
'Shirley' Rabinowitz background, LR's 2010 film _Red Shirley_.
Surname changes, census records and Merrill Weiner's family history.
Toby Reed's background, census records, author's interview and correspondence with Merrill 'Bunny' Weiner, plus her family history.
Mysterious brother: Mick Wall writes in _Lou Reed: The Life_ , 'Lewis was the eldest children [ _sic_ ] of three, with a younger sister, Elizabeth [ _sic_ ], whom he was close to, and a younger brother...' Completely wrong.
LR wrote in _Between Thought and Expression_ that he wrote 'My Old Man' for his father, but denied in a Sept. 1980 interview with _Creem_ that Sid hit his mother.
LR: 'I see myself...': _GQ_ , Sept. 1986.
LR: 'the armpit...': _Rolling Stone_ , 6/3/03.
LR: 'My parents were...': _Melody Maker_ , 18/12/76.
Allan Hyman quoted throughout from author's interview.
First wife (Bettye Kronstad) quoted from author's interview.
Merrill 'Bunny' Weiner: 'We were Jewish...': correspondence with author.
LR's education, thanks to Rose Luna at Freeport High School and Consuelo Velez at Caroline G. Atkinson Intermediate School in Freeport for school records; also Regina G. Feeney and Cynthia J. Krieg at Freeport Memorial Library.
Honour roll student, Freeport _Leader_ , 26/6/52.
LR's IQ/days off school, school records.
Summer job at Jones Beach, author's interview with Richard Sigal (quoted throughout).
Bullied at Junior High, 2015 version of Merrill 'Bunny' Weiner's family history.
Jerome Jackson quoted from author's interview.
Phil Harris quoted from an interview with Olivier Landemaine, olivier.landemaine.free.fr.
Dating stories, author's interviews with Hyman and Sigal.
Judy Titus quoted from author's interview.
LR's year-book entry, 1959 Freeport High _Voyageur_.
LR's first breakdown and treatment, sister's 2014 written history (quoted with reference to 2015 version) and author's interview with Hyman.
Toby Reed: 'The paediatrician...': recalled by daughter in 2015 version of the family history.
ECT background, _New Encyclopaedia Britannica_.
LR receives twenty-four shocks, _Between Thought and Expression_ (Reed).
Law on homosexuality, _How We Got Here_ (Frum).
Bockris writes that ECT was intended to cure LR of homosexuality and 'mood swings', _Transformer: The Complete Lou Reed Story_.
LR: 'That's what was...': _Creem_ , March 1979.
### **II On to the Darkened Sea, 1960–4**
Removed from New York University, NYU registrar.
LR: 'I was miserable...': _Melody Maker_ , 21/4/79.
Merrill 'Bunny' Weiner quoted from her written history, unless otherwise stated.
Fraternity party, author's interview with Allan Hyman (quoted throughout from author's interview).
LA and the Eldorados, author's interviews with Hyman, Richard Mishkin and Nelson Slater, all quoted from author's interviews.
Cavaliere quoted from author's interview.
Lincoln Swados background, _The Four of Us_ (Swados), and author's interviews.
Sigal quoted throughout from author's interview.
Insomnia, LR mentions to Paul Zollo, _Performing Songwriter_ , 2000.
Maloney quoted from author's interview.
Jim Tucker background, author's interview with Moe Tucker.
Sterling Morrison meets LR, quoted from _Feed-back_ (Julià). Morrison background, author's interview with his widow, Martha Morrison (quoted).
Dismissed from ROTC, LR interview in _People_ , 30/3/81.
Shelley Albin quoted throughout from author's interviews. Also dialogue with LR.
Quotes from _Junky_ (Burroughs, p. 49) and _City of Night_ (Rechy, p. 321).
LR on Shelley: 'Look, you couldn't...': author's interview with Erin Clermont.
LR: 'I remember that...': _Creem_ , March 1979.
James and Paula Gorney quoted from author's interviews.
Stoecker quoted from author's interview.
Car accident, author's interviews with Stoecker, Barbara Hodes and correspondence with Merrill Weiner (quoted).
LR: 'I liked Ornette...': _Creem_ , Sept. 1980.
_Lonely Woman Quarterly_ , Syracuse University archives.
Kogan quarrel, author's interview with Professor Kogan (quoted).
Schwartz background, _Delmore Schwartz_ (Atlas), _Humboldt's Gift_ (Bellow), and author's interviews with former students, including Clermont, Gorney and Rosalind Stevenson.
LR: 'Once, drunk in...': _Sounds_ , 6/5/78.
Reads _City of Night_ /poster on wall, author's interviews with Clermont, quoted throughout from author's interviews.
Sex boasts, James Gorney interview.
LR: 'I had recently...': his essay _Fallen Knights and Fallen Angels_ , published in _No One Waved Good-bye_ (Somma). Also _Between Thought and Expression_ (Reed).
Ambition to write Great American Novel, LR in _Rolling Stone_ , 5/11/87, for example.
Lyrics quoted from 'Heroin', Oakfield Avenue Music Ltd.
LR graduates, Syracuse University registrar service.
LR: 'because of various...' and 'Jaw gave me...': _Fallen Knights and Fallen Angels_ (Reed).
### **III Honeybun, Black Jack, Sterl and Moesy, 1964–5**
Terry Philips quoted throughout from author's interview.
LR: 'hack shit': _Independent_ , 26/4/03.
LR on his 'horrifying job', _Q_ , May 2000.
LR wrote for Kiss, _Music from the Elder_ (1981).
Cale on meeting LR, etc., quoted from _What's Welsh for Zen_ (unless otherwise specified).
La Monte Young background/downtown music scene, author's interviews and correspondence with Henry Flynt and Billy Name. Also _The Rest is Noise_ (Ross).
MacLise background, _NY Times_ , 5/5/11.
Cale arrested/conversation with LR, _What's Welsh for Zen_ (Cale).
Background on the Primitives, _White Light/White Heat_ (Unterberger).
Cale: 'When I first...': _Option_ , July 1990.
LR medical, Selective Service records.
Flynt conversation with Cale, author's interview with Flynt.
Lyrics quoted from 'I'm Waiting for the Man', Oakfield Avenue Music Ltd.
LR maxim, recalled by former road manager Daryl Bornstein in interview with author.
Lyrics quoted from 'Black Angel's Death Song' by Reed and Cale, John Cale Music, Inc./Oakfield Avenue Music Ltd.
Cale: 'I wasn't going...': _What's Welsh for Zen_ (Cale).
Giving blood, posing for photos, etc., LR quoted ('And when my...') by Bockris in _Lou Reed_.
LR meets Morrison in NYC, recalled by Morrison in the _Austin Sun_ , Oct. 1975.
De Maria quoted from Smithsonian oral-history interview, 4/10/72.
The Warlocks at the Cinémathèque, _The Velvet Underground_ (Kugelberg).
Morrison on working at Pickwick, _Austin Sun_ , Oct. 1975.
Morrison on band name, _Feed-back_ (Julià).
1965 demo tape/dialogue, _Peel Slowly and See_ box set (Polydor, 1995).
Aronowitz background, author's interview for _Down the Highway._
Aronowitz on meeting the Velvets, his blog _The Blacklisted Journalist_.
Martha Morrison quoted from author's interview.
Moe Tucker quoted throughout from author's interview.
Summit High gig, author's interviews with Martha Morrison and Moe Tucker; _Disc and Music Echo_ (29/1/72); Norris quoted from _The Velvet Underground_ (Kugelberg).
Cale: 'No chicks': author's interview with Tucker.
Cale didn't bring pop clichés to the band, Morrison in _Lou Reed_ (Bockris).
Malanga on meeting the Velvets, _Up-tight_ (Bockris) and _Feed-back_ (Julià).
Morrison: 'Who is this lunatic?': _Austin Sun_ , Oct. 1975.
Paul Morrissey quoted throughout from author's interview, unless otherwise stated.
Warhol quoted from _POPism_ (Warhol).
Inside the Silver Factory, author's interviews with Brigid Berlin, Danny Fields, Billy Name, Paul Morrissey and Mary Woronov. Background reading includes _POPism_ (Warhol).
Ondine injected himself in eye, interview with Woronov.
Herko suicide, _POPism_ (Warhol) and _Famous for Fifteen Minutes_ (Ultra Violet).
Told Slater he intended to use meth for the rest of his life, author's interview.
Edie Sedgwick background, _Edie_ (Stein).
Heliczer film, _White Light/White Heat_ (Unterberger).
New Year's Eve revelry, author's interviews and _POPism_ (Warhol).
### **IV The Exploding Plastic Inevitable, 1966**
Psychiatrists' dinner, author's interview with Moe Tucker, Jonas Mekas's _Scenes from the Life of Andy Warhol_ , coverage in the _NY Times_ and _NY Herald Tribune_ , 14/1/66 (quoted diners).
Questions to diners, _POPism_ (Warhol).
Morrissey quoted throughout from author's interview, unless otherwise indicated.
Nico on LR, documentary _Nico: Icon_ (ZDF, 1995).
Mishkin quoted throughout from author's interview.
Cale on LR and Nico, _Nico: Icon_ (ZDF, 1995).
Woronov quoted throughout from author's interviews, unless otherwise indicated.
Nico background, _Nico_ (Witts); _Nico_ (Young); and _Nico: Icon_ (ZDF, 1995).
Albin and Hodes on 'I'll Be Your Mirror', author's interviews.
Martha Morrison quoted throughout from author's interview.
Billy Name quoted throughout from author's interview.
LR on Warhol's work ethic, _The Autobiography and Sex Life of Andy Warhol_ (Wilcock).
Nico: 'Lou wanted to...': _Nico_ (Witts).
Edie Sedgwick's death, _Edie_ (Stein).
Road trip, author's interviews, _POPism_ (Warhol), _Up-tight_ (Bockris) and _The Autobiography and Sex Life of Andy Warhol_ (Wilcock), from which the hecklers are quoted.
Warhol fondled LR, photograph published in _Andy Warhol: The Factory Years_ (Finkelstein).
Fields quoted throughout from author's interviews.
Moe Tucker quoted throughout from author's interview.
Morrissey: 'It was packed...': _Up-tight_ (Bockris).
Woronov quoted from author's interviews, except 'Dwarfed by Mario Montez's...' and dialogue with Ingrid Superstar (her book _Swimming Underground_ ).
Dancing at the Dom, author's interviews with Ellie Jacobs and Martha Morrison.
Cale: 'Lou had very...' and LR to Nico, 'We know what _we're_ doing, Nico': _Nico_ (Witts).
Norman Dolph records the Velvets, author's interview with Dolph (quoted throughout).
Warhol's advice to LR in the studio, _Peel Slowly and See_ booklet.
Scepter Records sessions, author's interview with Dolph, Martha Morrison and Moe Tucker. Background, _The Velvet Underground & Nico_ (Harvard).
Cale: 'Basically, Lou...': _Nico_ (Witts).
LR: 'It's about a...': _Penthouse_ interview, 1977 (vol. 12, no. 2). Lyric quoted from 'There She Goes Again', Oakfield Avenue Music Ltd.
Hyman quoted from author's interview.
Nico: 'I cannot make love...': recalled by Cale in _Nico_ (Witts).
Morrison on Nico crying, _Nico: Icon_ (ZDF, 1995).
Paid $5 a day each, Morrison in _Q_ , July 1993.
MGM deal, contracts including 19/7/66 letter of agreement; also _Up-tight_ (Bockris).
At the Castle, thanks to Lisa Law and Patti Elam (quoted).
At the Trip, _LA Free Press_ , 13/5/66 ('their drummer is...').
The Trip closes, _Variety_ , 17/5/66.
At Venice Beach, author's interview with Woronov (quoted).
Warhol and Morrissey meet Bill Graham, _POPism_ (Warhol), including dialogue.
Graham: 'I hope...': recalled by Moe Tucker.
Gleason review, _San Francisco Chronicle_ , 30/5/66.
LR: 'Let's say we...': _Modern Hi-fi and Music_ , 1975.
Hospitalized with hepatitis, _Between Thought and Expression_ (Reed); author's interview with Clermont.
Schwartz dies, _Delmore Schwartz_ (Atlas). Clermont quoted from author's interview.
LR: 'O Delmore...': quoted from his preface to the 2012 New Directions edition of _In Dreams Begin Responsibilities and Other Stories_ (Schwartz).
Merrill 'Bunny' Weiner quoted from correspondence with the author.
Grand Street apartment, author's interviews with Ellie Jacobs, Martha Morrison and Moe Tucker.
West 3rd Street apartment and Stanley Amos, _POPism_ (Warhol; Warhol and LR quoted).
Friendship with transvestites, Fields interview.
Cale on LR: 'very full of...': _Nico_ (Witts).
LR on after-hours bars, _Between Thought and Expression_ (Reed).
Billy Name on bar-crawling with LR, author's interview.
Shelley Albin quoted from author's interview. NB: LR indicated that 'Pale Blue Eyes' was about Shelley in his book _Between Thought and Expression_.
Barbara Hodes quoted throughout from author's interviews.
_Village Voice_ review: 'zombie night...': 29/9/66.
Mishkin and Flynt quoted from author's interview.
LR: 'So she photographs great!': _Up-tight_ (Bockris).
Morrissey: 'Then Tom said...' and 'He sang it!...': _Nico_ (Witts).
Stevenson quoted from author's interview.
Malanga notes yellow eyes, _Up-tight_ (Bockris).
World's First Mod Wedding, author's interviews and _Detroit News_ , 21/11/66.
Single releases, _White Light/White Heat_ (Unterberger).
Philadelphia shows, _Philadelphia Daily News_ , 12/11/66.
### **V Light and Dark, 1967–8**
Moe Tucker quoted throughout from author's interviews.
LR: 'We thought...': _Penthouse_ , 1977 (vol. 12, no. 2).
Emerson claim, Billy Name (quoted throughout from author's interview); _White Light/White Heat_ (Unterberger). Emerson background, _NY Post_ obituary, 4/6/75.
_Village Voice_ review of the Banana Album, 13/4/67.
Morrissey quoted throughout from author's interview.
Bowie quoted from _New York_ magazine, 29/9/03.
Chris Stein quoted from author's interview.
Woronov quoted throughout from author's interviews.
Brigid Berlin quoted throughout from author's interviews.
LR discussion with Warhol: 'It was the worst...': recalled by LR in _Rolling Stone_ , 4/5/89.
Cale: 'Suddenly Lou...': _What's Welsh for Zen_ (Cale).
Philip Johnson party, _POPism_ (Warhol) and Moe Tucker interview.
Moves to fur district, thanks to Barbara Hodes (quoted from author's interview).
'Sister Ray' named for a drag queen, _Rolling Stone_ , 21/11/13.
Phrase quoted from 'Sister Ray', John Cale Music, Inc./Oakfield Avenue Music Ltd.
LR: 'I wrote the...': _Peel Slowly and See_ booklet.
McGuire review, _Crawdaddy_ 17.
Cumulative sales, thanks to Chris Whent.
Cale: 'there was heroin involved': _What's Welsh for Zen_ (Cale).
LR and Cale almost came to blows, _Lou Reed_ (Bockris).
Swados's suicide attempt, _The Four of Us_ (Swados).
Bettye Kronstad quoted throughout from author's interview, including dialogue with LR.
Laurie Anderson involved in Columbia University demonstration, _Laurie Anderson_ (Goldberg).
Clermont quoted throughout from author's interview.
Croland quoted from author's interview.
Woronov on Max's Kansas City/Andrea Feldman's behaviour, _Swimming Underground_ (Woronov).
Warhol shooting, _POPism_ (Warhol), author's interview with Billy Name (quoted) and press coverage including _NY Post_ , 4/6/68 (headline).
Sesnick quoted from _Lou Reed_ (Wrenn).
Warhol: 'Why didn't you visit...': recalled by LR in _Musician_ , 1/4/89.
Shelley Albin quoted from author's interview.
Solanas sentenced, _The Times_ , 11/6/69.
Riviera Café coup, recalled by Sterling Morrison in _Up-tight_ (Bockris).
Doug Yule quoted from correspondence with the author, except dialogue with Sesnick and LR and 'The next two days...', which are from Yule's essay _My First Days with the Velvet Underground_ , published online at olivier.landemaine.free.fr.
Cale on being fired, _Lou Reed_ (Bockris).
### **VI A New VU, 1968–70**
Yule: 'The sound was...', etc.: _My First Days with the Velvet Underground_ (Yule).
Moe Tucker quoted throughout from author's interview.
Contracts VD, author's interview with Barbara Hodes.
Candy Darling and tampons, _Swimming Underground_ (Woronov).
Lyric quoted from 'Some Kinda Love', Oakfield Avenue Music Ltd.
Kronstad quoted throughout from author's interview.
Lyric quoted from 'That's the Story of My Life', Oakfield Avenue Music Ltd.
Billy Name quoted from author's interview.
LR visits Billy, _POPism_ (Warhol).
LR: 'It was supposed...': _Peel Slowly and See_ booklet.
Woronov quoted from author's interview.
LR: 'I couldn't sing...': _Peel Slowly and See_ booklet.
LR: 'It's not just...': Nov. 1969 radio interview, reported in _White Light/White Heat_ (Unterberger).
Shelley Albin quoted throughout from author's interview.
Yule: 'I never saw...': correspondence with author.
Norris quoted from an interview with _Kicks_ , reported in _White Light/White Heat_ (Unterberger).
Closet mix, _Peel Slowly and See_.
Bangs review, _Rolling Stone_ , 17/5/69.
LR: 'I have a...': _Guardian_ , 16/2/96.
Jagger: 'I mean, even...': _NME_ , 15/10/77.
LR at End of Cole Avenue, _1969 Velvet Underground Live with Lou Reed_ (Mercury, 1974).
Quine quoted from his liner notes to _The Velvet Underground, Bootleg Series Vol. 1_ , save 'They didn't have...', which is from an interview with _ZigZag_ , April 1985.
Abram quoted from the _San Francisco Chronicle_ , Nov. 2001.
The 'prototype' of 'Sweet Jane', _Pass Thru Fire_ (Reed); LR to _NME_ , 24/1/76.
Yule: 'I can remember...' etc., correspondence with author.
Yule scared to smoke a joint, _White Light/White Heat_ (Unterberger).
Martha Morrison quoted from author's interview.
LR interview with _Third Ear_ , reprinted in _The Velvet Underground_ (Kugelberg). NB: Kugelberg identifies the interview as taking place in 1968, but references in the interview date it to 1970.
Yule: 'I've heard that...': correspondence with the author.
LR: 'We once did...' etc.: _Village Voice_ , 2/7/70.
LR: 'I hated it...': _White Light/White Heat_ (Unterberger).
Patti Smith quoted from the _New Yorker_ , 11/11/13.
Morrison: 'I was mad...' etc.: _Feed-back_ (Julià).
Yule: 'That particular night...' etc.: correspondence with author.
LR introduces his parents to Morrison (quoted), _Up-tight_ (Bockris).
Berlin quoted from author's interview.
Fields quoted from author's interview.
### **VII Solo in the Seventies, 1970–3**
Merrill 'Bunny' Weiner quoted from correspondence with the author.
LR on showbusiness, _No One Waved Good-bye_ (Somma).
Sterling Morrison quoted from the _Austin Sun_ , Oct. 1975.
LR: 'They took a song...': _Melody Maker_ , 22/1/72.
Elliott Murphy quoted from author's interview.
Lester Bangs's interview with LR, _Creem_ , May 1971.
Hodes quoted throughout from author's interviews.
Kronstad quoted throughout from author's interview.
Fields quoted throughout from author's interviews.
LR's 29/4/71 meeting with Nico, tape recording of same (courtesy of Danny Fields).
RCA background, thanks to Dennis Katz, Bob Ringe and Bruce Somerfield (all quoted from author's interviews).
Zanetta quoted throughout from author's interview.
Angie Bowie quoted throughout from author's interview.
Dennis Katz quoted from author's interview.
RCA deal, contract of 1/10/72.
LR: 'It's been a...': _Disc and Music Echo_ , 29/1/72.
Rick Wakeman quoted from correspondence with the author.
Clem Cattini quoted from interview with the author.
Cale declines to work further with LR, _Lou Reed_ (Bockris).
Christgau rating, www.robertchristgau.com.
Kent: 'one of the more...': _NME_ , 9/6/72.
Advance for _Transformer_ : 1/10/72 contract.
Lyric quoted from 'Perfect Day', Oakfield Avenue Music Ltd.
LR and Warhol discuss 'Vicious', LR to _NME_ , 28/4/73.
Bowie: 'I'm gay...': _Melody Maker_ , 22/1/72.
Clermont quoted throughout from correspondence with author.
Shelley Albin quoted from interview with author.
LR's address in Wimbledon, thanks to Barbara Hodes.
Coleman review, _Melody Maker_ , 15/7/72.
Background on King's Cross show, _NME_ , 22/7/72.
Richard Robinson replaced, _There Goes Gravity_ (Lisa Robinson; quoted).
Bowie on producing _Transformer_ , _Lou Reed: Transformer_ , Classic Albums documentary (Sister Ray Enterprises, 2001).
Halsey quoted from author's interview.
Scott quoted from author's interview.
Lyric quoted from 'Make Up', Oakfield Avenue Music Ltd.
LR on the genesis of 'Walk on the Wild Side', _Between Thought and Expression_ (Reed).
Holly Woodlawn background and all quotes, author's interviews.
LR: 'Odds are that...': _Mojo_ , March 1996.
Thunder Thighs, author's interviews with Lallou and Synge (both quoted).
Phrase quoted from 'Wagon Wheel', Oakfield Avenue Music Ltd.
Paid £450 in Glasgow, financial records in Dennis Katz _vs_ Lou Reed, Transformer Enterprises Ltd and Oakfield Avenue Music Ltd, Supreme Court of the State of New York, case 19748/75.
Stoecker quoted from author's interview.
Tosches review, _Rolling Stone_ , 4/1/73.
_NY Times_ review, 17/12/72.
LR: 'I allowed it...': _Mojo_ , March 1996.
McCormack quoted from author's interview.
Backstage at Alice Tully Hall, author's interviews with Kronstad (quoted), _Rolling Stone_ , 1/3/73.
Rockwell review, _NY Times_ , 29/1/73, save for 'His voice was...' (interview with author).
Quotation from 'Walk on the Wild Side', Oakfield Avenue Music Ltd.
Dallesandro quoted from author's interview.
LR sacks Heller, Lou Reed _vs_ Fred Heller, Supreme Court of the State of New York, case 4692/1973.
Deals done by Katz, Dennis Katz _vs_ Lou Reed.
### **VIII Self-parody, 1973–4**
Kronstad quoted throughout from author's interview.
Ezrin: 'His writing was...': _The Times_ , 25/5/07. Additional background: Ezrin's 18/8/73 interview with _NME_ ; and _Circus_ , Dec. 1973.
Lyric quoted from 'The Kids', Oakfield Avenue Music Ltd.
Ezrin wrote arrangements, LR to _American Songwriter_ , 2/1/09.
Dawson quoted throughout from author's interview.
Lyric quoted from 'Oh, Jim', Oakfield Avenue Music Ltd.
LR: 'I needed a...': _Melody Maker_ , 13/5/78.
LR: 'I'm a chauvinist...': _Creem_ , March 1979.
Ezrin: 'It was an...': _Lou Reed: Rock 'n' Roll Heart_ documentary (American Masters, 1998); also _The Times_ , 25/5/07.
LR kissing Bowie and wife at the Café Royal, as photographed by Mick Rock.
LR: 'Like, during the...': _Melody Maker_ , 13/5/78.
Colcord and Wagner quoted throughout from author's interviews.
Thanks to Susie Curtois for Blake's Hotel background.
Walsh quoted from author's interview.
Kent review, _NME_ , 29/9/73.
LR: 'I'm fed up...': recalled by Dawson to author.
Hunter quoted from _NME_ , 15/3/75.
Jacobs quoted throughout from author's interview.
Glan quoted throughout from author's interview.
Gelb quoted throughout from author's interview. Also dialogue with LR.
Steve Katz quoted throughout from author's interview.
Thanks to Bob Ringe for background on Brussels show.
'RETURN OF THE PRINCE OF PONCE', _NME_ , 22/9/73.
Rockwell review of _Berlin_ , _NY Times_ , 9/12/73.
_Rolling Stone_ review, 20/12/73.
Sales of _Berlin_ , LR statement in Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed and RCA, Supreme Court of the State of New York, case 18712/75.
LR: 'The way that...': _Melody Maker_ , 16/4/77.
Returns to live with Barbara Hodes, author's interviews. Hodes quoted throughout from author's interviews.
Prakash John joins the band, author's interview. John quoted throughout from author's interview.
Academy of Music fee, payment schedule in Dennis Katz _vs_ Lou Reed.
Freedman quoted from author's interview.
LR: 'I got busted...': _Between Thought and Expression_ (Reed).
Fulk quoted from correspondence with the author.
Hynde review, _NME_ , 2/3/14.
'Sally Can't Dance' lyric, Oakfield Avenue Music Ltd.
LR: 'And that's why...': _NYC Man_ liner notes (BMG, 2003).
Sid Reed advises LR, Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed and RCA.
Sterling Morrison teaching, author's interviews with Martha Morrison.
Moe Tucker's life after 1973, author's interview.
Velvet Underground income, Lou Reed _vs_ Fred Heller.
Merrill 'Bunny' Weiner and Harold Weiner quoted from author's interview.
LR emaciated photo, thanks to Barbara Hodes.
Michael Fonfara quoted throughout from author's interview.
Rachel background, thanks to various interviewees for their recollections, including Michael Fonfara, Liz Gilmore, Barbara Hodes, Prakash John and Jeffrey Ross.
LR on Rachel: 'I'd been up...': _Penthouse_ interview, 1977 (vol. 12, no. 2).
Bangs on Rachel, _Creem_ , Feb. 1976; Fong-Torres, _GQ_ , Sept. 1986.
Gilmore quoted from author's interview.
Murphy quoted from author's interview.
Rachel and the police, LR to _Penthouse_ (1977). Also, 'Rachel knows how...'
Byrd quoted from correspondence with the author.
LR: 'This is fantastic...': _Gig_ 1974, quoted in the liner notes for the CD of _Sally Can't Dance_.
Sydney press conference, 14/8/74, YouTube.
Robinson review, _NME_ , 19/10/74.
Somerfield quoted from author's interview.
LR: 'Back then, I...': _Q_ , Feb. 1989.
### **IX Howling like the Devil, 1975–6**
Steve Katz quoted throughout from author's interview, except 'I give up...': _Lou Reed & The Velvet Underground_ (Clapton).
Conflicting claims over who walked out on the sessions, and Somerfield quote – 'I do not know...' – court papers in Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed.
Everyman Band background, thanks to Marty Fogel, Larry Packer and Bruce Yaw (the last two quoted throughout from author's interviews).
Drive into Rome, author's interviews with Bob Ringe (quoted throughout).
LR demands to see a doctor, thanks to Packer.
Fonfara quoted throughout from author's interview.
LR greeted by police, thanks to Packer. Background on Rome riot, _Chicago Tribune_ , 17/2/75.
Rapino quoted from author's interview.
Fulk gives Valium/Defries flies in, author's correspondence with Fulk.
Dr Breitenmoser's story, his 25/1/82 witness statement in Dennis Katz _vs_ Lou Reed, Transformer Enterprises Ltd.
LR 'under extreme emotional distress', defence statement in Dennis Katz _vs_ Lou Reed.
Waxwork, _Record Mirror_ , 8/3/75.
Loses Heller case, Lou Reed _vs_ Fred Heller.
Toronto, and Arthur Moss's story (affidavit of 28/7/75), Lou Reed _vs_ Fred Heller.
LR: 'I was serious...': _Lou Reed: Rock 'n' Roll Heart_ documentary (American Masters, 1998).
Dennis Katz quoted from conversation with author.
Kempner quoted from author's interview.
LR: 'No one I...': liner notes to _Metal Machine Music_ (RCA, 1975).
Rockwell review of _Metal Machine Music_ ( _MMM)_ , _NY Times_ , 20/6/75.
Bangs review, _Creem_ , Sept. 1975.
Sales of _MMM_ , quoted by LR and Steve Katz in Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed. Copies returned, LR to _NME_ , 24/1/76.
Somerfield quoted throughout from author's interview.
Financial crisis, 1/6/75 judgment in Lou Reed _vs_ Fred Heller, and related documents in this case and Dennis Katz _vs_ Lou Reed.
LR: 'I found out...': _Dazed and Confused_ , April 1996.
Fulk leaves, correspondence with author.
Additional debts of $128,000/LR accuses Dennis Katz and others of misappropriating funds, Dennis Katz _vs_ Lou Reed.
Murphy quoted throughout from author's interview.
Summons served, Dennis Katz _vs_ Lou Reed.
LR mortgages song catalogue, legal documents in Arista Music et al. _vs_ Oakfield Avenue Music, Inc., Supreme Court of the State of New York, case 93024/83; Reed _vs_ Fred Heller, and Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed.
Steve Katz sues, Steven Katz and Anxiety Productions Ltd _vs_ Lou Reed.
Godfrey Diamond quoted from author's interview, including dialogue with LR.
Kulick quoted from author's interview.
Nelson Slater quoted from author's interview.
Marsh reviews _Coney Island Baby_ , _Rolling Stone_ , Jan. 1976; Murray review, _NME_ 24/1/76.
Poetry award, _Between Thought and Expression_ (Reed).
Slater disappointed with _Wild Angel_ , author's interview.
LR sees Smith perform 'We're Gonna Have a Real Good Time Together', _Lou Reed_ (Bockris).
Podell quoted from author's interview.
Clive Davis quoted from author's interview. Background on his career, _The Soundtrack of My Life_ (Davis).
1976 'settlement agreement', copy of and related documents in Arista Music et al. _vs_ Oakfield Avenue Music, Inc.
### **X The Arista Years, 1976–80**
Corky Stasiak quoted throughout from author's interview, including dialogue with LR.
Davis quoted throughout from author's interview.
Podell quoted throughout from author's interview.
Ross quoted throughout from author's interview.
Clermont quoted throughout from author's interview.
LR: 'On this tour...': _Penthouse_ interview, 1977 (vol. 12, no. 2).
Anniversary party, author's interviews with Michael Fonfara (quoted throughout from author's interviews), Jeffrey Ross and Liz Furmanovsky, who took pictures of the event, published in the _NME_ , 7/5/77.
Gilmore quoted from author's interview.
Eye drops, author's interview with Ritchie Fliegler.
Threatens Friedman, Friedman's article in _Soho Weekly News_ , March 1978, and correspondence with author.
Threatens journalist with butter knife, author's interview with Gilmore.
Hires Kronfeld, author's interview with Podell.
Settles with Steve Katz, court papers.
Details of LR _vs_ Dennis Katz, court papers (Supreme Court of the State of New York, case 19748/75).
Sarig quoted from author's interview.
Falls out with Bruce Yaw, author's interview with Yaw.
Fliegler quoted from author's interview.
LR: 'I was specifically...': _Between Thought and Expression_ (Reed).
Wiltshire quoted from author's interview.
LR: 'I don't like niggers...': _Soho Weekly News_ , March 1978.
LR: 'nigger music': _Creem_ , Dec. 1976.
McCormack quoted from author's interview.
LR: 'They're not heterosexual...': _Rolling Stone_ , 22/3/79.
Phrase quoted from 'Street Hassle', Metal Machine Music, Inc.
_Street Hassle_ reviews: _Time_ , 24/4/78; _NME_ , 25/11/78; and _Village Voice_ , 29/5/79.
LR: 'She's more beautiful...': recalled by Erin Clermont.
Ravan quoted from author's interview.
Warhol quoted from his _Diaries_.
Rockwell and Christgau quoted from author's interviews.
Faith quoted throughout from author's interview.
LR: 'I have such...': _Creem_ , March 1979.
Blairstown property, author's local enquiries, real-estate records and field notes. Thanks to Sylvia Ramos (Reed), Ravi Romano and Rita Teel.
LR: 'Even if you...': _Creem_ , March 1979.
Ellard 'Moose' Boles quoted throughout from author's interview.
Sylvia Morales's background, her UK birth certificate, author's interviews and _Lou Reed_ (Bockris). Cale quoted from his book _What's Welsh for Zen_.
LR creates the lyric for 'The Bells' extemporaneously, _Lou Reed: Walk on the Wild Side_ (Roberts).
Reviews of _The Bells_ : _Rolling Stone_ , 14/6/79; _LA Times_ , 24/1/79; and Murray in _NME_ , 28/4/79.
LR: 'I think it...': _Mojo_ , March 1996.
Hammer quoted throughout from author's interview.
Arrested in Germany, author's interviews with band members. Also LR's interview with William Burroughs, reported in _Transformer: The Complete Lou Reed Story_ (Bockris).
LR threatened Levison, author's interview with Howard Harding (quoted).
LR: 'Play! Play!...': recalled by Hammer.
LR: 'Isn't David great?' etc.: recalled by Marty Fogel.
Fight with Bowie, author's interviews with Fogel and Hammer. Also _Melody Maker_ , 21/4/79.
LR: 'Where is the money, Clive?': _The Soundtrack of My Life_ (Davis).
Recording _Growing Up in Public_ , author's interviews with Boles, Fogel, Fonfara, Hammer, Heinrich and Stasiak.
Phrase quoted from 'My Old Man', Metal Machine Music, Inc.
### **XI Second Marriage, 1980–7**
Bornstein quoted throughout from author's interviews.
Lyrics quoted from 'How Do You Speak to An Angel' and 'My Old Man', Metal Machine Music, Inc.
Wedding, thanks in particular to Corky Stasiak for his diary notes and photographs.
Epstein quoted throughout from author's interview.
_Growing Up in Public_ reviews, _Rolling Stone_ (10/7/80); and _Sounds_ (3/5/80).
Davis quoted from author's interview.
Fonfara quoted from author's interview.
Country life, author's field notes and interviews. Thanks to Rita Teel (quoted from author's interview), Sylvia Ramos (formerly Reed; quoted from correspondence with the author), Judy Cook (quoted from author's interview), Bob Tramontin (quoted from author's correspondence), John Passalacqua at Dominick's pizzeria and Kellie Peterson (quoted from author's interview). Also thanks to Ken Lally and Ravi Romano.
Sigal renews friendship, author's interview (quoted).
LR: 'I consider I...': _Times_ magazine, 25/3/2000. LR mentioned self-help books in interviews, including Allen Carr's _Easy Way to Stop Smoking_ ( _Times_ , 21/7/07).
Clermont quoted throughout from author's interviews and correspondence, with thanks for consulting her diary.
Lithium side effects, _Companion to Psychiatric Studies_ (Johnstone).
LR: '"The Gun" is none...': _NME_ , 6/3/82.
Hammer quoted from author's interview.
Perry quoted from author's interview.
Saunders quoted throughout from author's interview.
Quine background, author's interviews with Sean Fullan, Fred Maher, Doane Perry and Fernando Saunders; also Quine's _Daily Telegraph_ obituary, 14/6/04.
Fullan quoted from author's interview.
Lyric quoted from 'Women', Metal Machine Music, Inc.
Maher quoted throughout from author's interview.
LR: 'My past is...': _NY Times_ , 10/3/82.
Spoke of admiration for Dostoyevsky, to _NME_ , for example, 21/4/79.
Merrill 'Bunny' Weiner quoted from author's interview.
Reviews of _The Blue Mask_ , _Sounds_ (13/3/82), _NY Times_ (10/3/82) and _Rolling Stone_ (15/4/82).
Lyric quoted from 'Heavenly Arms', Metal Machine Music, Inc.
LR: 'Touring is just...': _NY Times_ , 10/3/82.
Stasiak quoted from author's interview.
Quine smashes tape, _Lou Reed_ (Bockris).
Background on _New Sensations_ , thanks to horn players Randy Brecker and Tom 'Bones' Malone.
Lyric quoted from 'New Sensations', Metal Machine Music, Inc.
Shepard quoted from author's interview.
Quine on the 'No Sensations' tour, _Lou Reed_ (Bockris).
Katz case resolved, Dennis Katz _vs_ Lou Reed, Transformer Enterprises Ltd and Oakfield Avenue Music Ltd; and Supreme Court of New York records office.
Velvet Underground Partnership, thanks to Chris Whent (quoted from author's interview).
Cale's drug and alcohol issues, _What's Welsh for Zen_ (Cale) and _Week in Week Out_ (BBC Wales, 2009).
Martha Morrison quoted from author's interview.
Moe Tucker quoted from author's interview.
LR buys apartment on West End Avenue, thanks to Saunders.
Janowitz quoted throughout from author's interview.
LR attends Completely Sober, thanks to Tony Zanetta.
Lyric quoted from 'Don't Hurt a Woman', Metal Machine Music, Inc.
Blades quoted from author's interview.
### **XII New Inspiration, 1987–92**
Death of Warhol, _Holy Terror_ (Colacello) and _The Andy Warhol Diaries_ (Warhol). Lawsuit, _NY Times_ , 5 and 23/12/91. Epstein quoted from author's interview.
LR: 'He was being...': _Between Thought and Expression_ (Reed).
Warhol quoted from his _Diaries_. Thanks also to Pat Hackett.
LR on Warhol's use of language, _Lou Reed_ (Bockris).
LR: 'There were some...': _NME_ , 10/6/89.
Warhol's memorial service, author's interviews, _Holy Terror_ (Colacello) and _NY Times_ , 2/4/87.
Inspired to write 'Dime Store Mystery', _Times_ , 31/1/89.
Meets Cale at wake, author's interview with Billy Name (quoted). Also _What's Welsh for Zen_ (Cale).
Janowitz quoted from author's interview.
Rathke joins the band, author's interview (quoted throughout).
Fine quoted throughout from author's interview (including dialogue with LR).
LR: 'I'll give you...': _Rolling Stone_ , 5/11/87.
Hyman quoted from author's interview.
Klein quoted throughout from author's interview.
Maher quoted throughout from author's interview.
Lyric quoted from 'Romeo Had Juliette', Metal Machine Music, Inc.
Lyric quoted from 'Dirty Boulevard', Metal Machine Music, Inc.
Dion DiMucci quoted from author's interview.
Lyric quoted from 'Halloween Parade', Metal Machine Music, Inc.
Ross on Rachel's fate, quoted from author's interview. Thanks also to Bob Gruen.
Nico's demise, _Nico_ (Witts).
Lyric quoted from 'Dime Store Mystery', Metal Machine Music, Inc.
_New York_ reviews: _The Times_ , 21/1/89 and _Village Voice_ , Jan. 1989.
_Rolling Stone_ cover interview (quoted), 4/5/89.
BAM performance, film of same and author's interviews with Brigid Berlin, Billy Name (quoted) and Moe Tucker (quoted throughout from author's interview).
Snow quoted from _Q_ , July 1993.
Cale: 'For him to...': _Option_ , July 1990.
Meeting Havel, _Between Thought and Expression_ (Reed).
Cartier event, film of the performance and author's interviews with Martha Morrison, Billy Name, Moe Tucker and Chris Whent (all quoted).
Cale: 'The two of...': _What's Welsh for Zen_ (Cale).
LR on Pomus, _Rock 'n' Roll Heart_ documentary (American Masters, 1998).
Rita Teel quoted from author's interview.
Swados's demise, his sister Elizabeth's book, _The Four of Us._ Inspires 'Harry's Circumcision', LR quoted in _Lou Reed_ (Bockris).
Wasserman quoted from author's interview.
LR: 'Giving up those...': _Q_ , Feb. 1992.
LR: 'as though it...': liner notes to _New York_.
Thompson review, _Independent on Sunday_ , 22/3/92.
Oglanby quoted throughout from author's interview.
Scott altercation, thanks to Jimmy and Jeanie Scott (quoted).
Doyle quoted from author's interview.
Anderson background, _Laurie Anderson_ (Goldberg); _Guardian_ , 13/5/95; author's research.
LR: 'You know, people...': _Q_ , May 2000.
Anderson quoted on meeting LR from the obituary she wrote for her husband for _Rolling Stone_ , 21/11/13.
### **XIII Return to the Velvet Underground, 1992–6**
Eighty per cent of sales in Europe, author's interview with Moe Tucker.
Cale on the _Tonight Show_ , _What's Welsh for Zen_ (Cale).
Cale: '[to] demonstrate that...': _NME_ , 5/6/93.
Tucker's thoughts, conversation and quotes, author's interview.
Sylvia: 'my artist': recalled by Martha Morrison.
Klein quoted throughout from author's interview.
Oglanby quoted throughout from author's interview.
Big Rig, author's interview with Pete Cornish.
Rehearsal dialogue, recalled by Struan Oglanby.
Travel arrangements, Martha Morrison.
'SO HAPPY TO BE BACK', _Vox_ , July 1993.
Golfing in London, author's interview with Joe Doyle.
Kane quoted from the _Guardian_ , 4/6/93.
Audience repartee, _Musician_ , 1/8/93.
Jones review, _Melody Maker_ , 12/6/93.
Martha Morrison quoted throughout from author's interview.
Ticket sales, thanks to Mike 'Coach' Sexton (quoted from author's interview).
Mikulka quoted from author's interview.
LR driven to show/argues with Cale, _What's Welsh for Zen_ (Cale quoted).
LR and Laurie Anderson at AES convention, her article in _Rolling Stone_ , 21/11/13 (quoted).
Clermont quoted throughout from author's interview.
Williams quoted from author's interview.
LR and Steve Epstein, author's interview with Epstein (quoted).
Sylvia files for divorce, Supreme Court of New York records.
Bornstein quoted from author's interview.
Fields quoted from author's interview.
Shepard quoted from author's interview.
Holding hands in studio, author's interview with Fernando Saunders.
Gruen quoted from author's interview.
LR moves to 45 Christopher Street, thanks to Erin Clermont.
Dialogue with Oglanby, recalled by Oglanby to the author.
Robert Wilson quoted from correspondence with the author.
LR: 'I'm not writing...': _Observer_ , 16/8/96.
LR: 'the worst movie...': _Creem_ , Nov. 1984.
LR film dialogue, _Blue in the Face_ (Auster).
Auster quoted from interview with the author.
Morrison's terminal illness, thanks to Martha Morrison and Moe Tucker.
LR: 'Sterl lay in bed...': his obituary article in the _NY Times_ , 31/12/95.
### **XIV Love, Lou, 1996–2008**
Anderson's mishaps, _Laurie Anderson_ (Goldberg).
Recording _Set the Twilight Reeling_ , thanks to Struan Oglanby, Fernando Saunders and Tony Smith (all quoted throughout from author's interviews).
Frankel quoted from author's interview.
Background on Warner Brothers shake-up, _Billboard_ , 29/4/95.
Goldberg quoted from author's interview.
Klein quoted throughout from author's interview.
Reviews of _Time Rocker_ in the US, _Village Voice_ (25/11/97) and _NY Times_ (23/11/97).
Tour diary, _New Yorker_ , 26/8/96.
West 11th Street apartment, thanks to Godfrey Diamond, Zeljko McMullen and Jane Scarpantoni.
America starts 'over there', remark made to the author by LR's friend and neighbour Danny Fields.
Bowie concert, videotape of Jan. 1997 show.
LR: 'Everybody says...': _Sunday Telegraph_ , 29/1/97.
LR: 'the sound of diamonds': _Perfect Night_ liner notes.
LR on 'the lowest depth of misery', _Vanity Fair_ , Feb. 1996.
'His paranoia...': Will Hodgkinson in _The Times_ , 28/7/12.
LR: 'I get nervous...': _Pulse!_ , Feb. 1992.
Diamond quoted from author's interview.
Scarpantoni quoted throughout from author's interview.
Lyric quoted from 'Mad', Lou Reed Music.
Zollo quoted from _Performing Songwriter_ , 2000.
Lyric quoted from 'The Rock Minuet', Lou Reed Music.
Cornish quoted from author's interview.
LR: 'It's like watching...': _Newsday_ , 29/3/2000.
Poe 'intelligent, wayward and wilful': _Poe: A Life Cut Short_ (Ackroyd).
LR: 'Why am I...': liner notes to _The Raven_ (Reprise, 2003).
The making of 'Fire Music', LR in _Independent on Sunday_ , 4/1/04.
Diagnosed diabetic, thanks to Fernando Saunders; 'no butter...', recalled by Jane Scarpantoni.
Wasserman quoted throughout from author's interview.
Master Ren quoted from author's interview.
LR: 'music-industry baboons': author's interview with Tom Sarig.
Sid Reed dies, _NY Times_ obit., 18/1/05.
Sarig quoted throughout from author's interview.
LR: 'to tell a...': _Emotion in Action_ (Reed).
McMullen quoted from author's interview.
Christie quoted throughout from author's interview.
_NY Times_ review of _Berlin_ , 16/12/06.
Rapino quoted from author's interview.
LR and Anderson decide to marry/dialogue, recalled by Anderson in _The Times_ , 1/6/13 and _Rolling Stone_ , 21/11/13.
### **XV Nevermore, 2008–13**
Sarig quoted throughout from author's interview.
LR quoted from _Red Shirley_ , Sister Ray, 2010.
Merrill 'Bunny' Weiner quoted throughout from her correspondence and interview with the author, unless otherwise stated.
Quashie quoted from author's interview.
Jeanie Scott quoted from author's interview.
Dion quoted from author's interview.
LR buys home on Long Island, real-estate records and local enquiries.
Muldaur quoted throughout from author's interview.
Podell quoted from author's interview.
Coney Island Mermaid Parade, video footage.
Wilson quoted from correspondence with the author.
Saunders quoted throughout from author's interview.
Hammett and Ulrich on working with LR, 31/5/14 article on www.blabbermouth.net.
Lyric quoted from 'Mistress Dread', Lou Reed Music.
Writing 'Junior Dad', thanks to Rob Wasserman.
LR on _Lulu_ , _Interview_ , Nov. 2011.
_Lulu_ reviews, _Village Voice_ (20/1/12), _Rolling Stone_ (1/11/11) and postings on www.amazon.com.
Rathke, quoted from author's interview.
_Lulu_ sales, _Billboard_ , 27/10/13.
Anderson: 'Lou was sick...', given a course of interferon: her article for _Rolling Stone_ , 21/11/13.
LR's will, Surrogate Court of New York.
Smith quoted from author's interview.
Cancelled shows, thanks to Smith and Wasserman.
Charles Miller quoted from his eulogy at LR's memorial, 16/12/13. Also dialogue with LR and details of the operation.
Anderson: 'I don't think...': _The Times_ , 1/6/13.
LR: 'I am a triumph...': www.loureed.com.
LR: 'How could time...': the _Guardian_ , 20/6/13.
Whent quoted from author's interview. Also LR on the phone with Cale.
Moe Tucker quoted throughout from author's interview.
LR: 'My father didn't...': Parrot Zik promotion, 21/9/13.
NY book signing, videotape of 3/10/13 event at John Varvatos. Bob Gruen quoted from author's interview.
Dr Miller on Oct. 2013 meeting with LR, _NY Times_ , 27/10/13.
Schnabel quoted from _Rolling Stone_ , 21/11/13.
LR's last moments, Anderson's article in _Rolling Stone_ , 21/11/13. Also her 31/10/13 letter to _East Hampton Star_ and 16/12/13 eulogy.
Cause of death/cremation, death certificate.
Billy Name quoted from author's interview.
Telling Toby, thanks to Merrill (Bunny) and Harold Weiner (quoted).
Mossman obituary, _New Statesman_ , 1/11/13.
Posthumous sales, _Billboard_ , 9/11/13.
Value of estate, _NY Post_ , 30/8/14.
Merrill Weiner – 'the emotional issues...' – written statement to author; 'In his heart...', interview with the author.
Cale's letter, read at the memorial.
Cale: 'It came as...': _Financial Times_ , 23/8/14.
Anderson's eulogy, Apollo Theater, 16/12/13.
Rockwell quoted from author's interview.
Clermont quoted from author's interview.
## Bibliography
I found Richie Unterberger's _White Light/White Heat_ to be the best history of the Velvet Underground, and _POPism_ (ghostwritten for Andy Warhol by Pat Hackett) and Mary Woronov's _Swimming Underground_ the most evocative books about the Silver Factory. Reed was an admirer of Woronov's memoir. '[My] friend Mary Woronov, who was at the Factory, has written a book called _Swimming Underground_ that I think has the most accurate portrait of Andy Warhol that you will ever run into,' he said. 'It's hilarious and it'll give you a couple of chills.' I second that. He was less keen on Victor Bockris's books about himself and his colleagues, but they are nonetheless all worthwhile.
Ackroyd, Peter, _Poe: A Life Cut Short_ , London: Chatto and Windus, 2008
Algren, Nelson, _A Walk on the Wild Side_ , Edinburgh: Canongate, 2006 (1st edn, 1956)
Atlas, James, _Delmore Schwartz: The Life of an American Poet_ , New York: Farrar, Straus and Giroux, 1977
Auster, Paul, _Smoke and Blue in the Face_ , London: Faber and Faber, 1995
Barnes, Peter, _Lulu: A Sex Tragedy_ , London: Methuen, 1971
Bellow, Saul, _Humboldt's Gift_ , Harmondsworth: Penguin, 1976
Bockris, Victor, _Lou Reed: The Biography_ , London: Hutchinson, 1994 (updated as _Transformer: The Complete Lou Reed Story_ , London: HarperCollins, 2014)
Bockris, Victor, with Gerard Malanga, _Up-tight: The Velvet Underground Story_ , London: Bloomsbury, 2002
Burroughs, William S., _Junky_ , Harmondsworth: Penguin, 1977
Cale, John, with Victor Bockris, _What's Welsh for Zen_ , London: Bloomsbury, 1999
Clapton, Diana, _Lou Reed & The Velvet Underground_, London: Bobcat Books, 1987
Colacello, Bob, _Holy Terror: Andy Warhol Close Up_ , New York: HarperCollins, 1990
Davis, Clive, with Anthony DeCurtis, _The Soundtrack of My Life_ , New York: Simon and Schuster, 2013
Doggett, Peter, _Lou Reed: Growing Up in Public_ , London: Omnibus, 1991
Evans-Wentz, W. Y., _The Tibetan Book of the Dead_ , Oxford University Press, 1960
Finkelstein, Nat, _Andy Warhol: The Factory Years_ , Edinburgh: Canongate, 1999
Frum, David, _How We Got Here: The 70s: The Decade that Brought You Modern Life (For Better or Worse)_ , New York: Basic Books, 2000
Goldberg, RoseLee, _Laurie Anderson_ , London: Thames and Hudson, 2000
Harvard, Joe, _The Velvet Underground & Nico_, London: Bloomsbury, 2013
Heylin, Clinton (ed.), _All Yesterday's Parties: The Velvet Underground in Print 1966–1971_ , Cambridge, MA: Da Capo, 2005
Johnstone, Eve C. et al. (eds.), _Companion to Psychiatric Studies_ , Edinburgh: Churchill Livingstone, 2010
Julià, Ignacio, _Feed-back, The Velvet Underground: Legend, Truth_ , privately published in Catalonia, 2008
Koch, Stephen, _Stargazer: The Life, World and Films of Andy Warhol_ , New York: Marion Boyars, 2002
Kugelberg, Johan, _The Velvet Underground_ , New York: Rizzoli, 2009
Larkin, Colin (ed.), _The Encyclopaedia of Popular Music_ , London: Omnibus Press, 2007
Leigh, Michael, _The Velvet Underground_ , Wet Angel Books, 2011 (1st edn, 1963)
Manbeck, John B. (consulting ed.), _The Neighborhoods of Brooklyn_ , New Haven: Yale University Press, 1998
_The New Encyclopaedia Britannica_ , Chicago: Encyclopaedia Britannica Inc., 2005
Newsday (eds.), _Long Island, Our Story_ , Melville, NY: Newsday, 1998
Newsday (eds.), _Home Town Long Island_ , Melville, NY: Newsday, 1999
Poe, Edgar Allan, _Poems and Prose_ , New York: Everyman, 1995
Rechy, John, _City of Night_ , New York: Grove, 2013 (1st edn, 1963)
Reed, Jeremy, _The Life and Music of Lou Reed, Waiting for the Man_ , London: Omnibus, 2014
Reed, Lou, _Between Thought and Expression: Selected Lyrics of Lou Reed_ , New York: Hyperion, 1991
—, _Emotion in Action_ , Göttingen: Steidl, 2003
—, _Lou Reed's New York_ , Göttingen: Steidl, 2006
—, _Pass Thru Fire: The Collected Lyrics_ , Cambridge, MA: Da Capo Press, 2008
—, _Romanticism_ , Göttingen: Steidl, 2009
—, _Words and Music_ , Secaucus, NJ: Warner Bros. Publications, 1991
Roberts, Chris, _Lou Reed: Walk on the Wild Side: The Stories behind the Songs_ , London: Carlton Books, 2004
Roberts, David (ed.), _The Guinness Book of British Hit Singles_ , London: Guinness, 2002
Robinson, Lisa, _There Goes Gravity: A Life in Rock 'n' Roll_ , New York: Riverhead Books, 2013
Ross, Alex, _The Rest is Noise_ , London: Fourth Estate, 2008
Schwartz, Delmore, _In Dreams Begin Responsibilities and Other Stories_ , New York: New Directions, 2012
Selby Jr, Hubert, _Last Exit to Brooklyn_ , London: Bloomsbury, 2000 (1st edn, 1964)
Somma, Robert (ed.), _No One Waved Good-bye: A Casualty Report on Rock 'n' Roll_ , New York: Outerbridge and Dienstfrey, 1971
Sounes, Howard, _Down the Highway: The Life of Bob Dylan_ , London: Doubleday, 2001
—, _Seventies: The Sights, Sounds and Ideas of a Brilliant Decade_ , London: Simon and Schuster, 2006
Stein, Jean, and George Plimpton, _Edie_ , New York: Knopf, 1982
Strong, Martin C., _The Essential Rock Discography_ , Edinburgh: Canongate, 2006
Swados, Elizabeth, _The Four of Us: A Family Memoir_ , New York: Farrar, Straus and Giroux, 1991
Ultra Violet, _Famous for Fifteen Minutes: My Years with Andy Warhol_ , Orlando: Harcourt Brace Jovanovich, 1988
Unterberger, Richie, _White Light/White Heat: The Velvet Underground Day-by-Day_ , London: Jawbone Press, 2009
Von Sacher-Masoch, Leopold, _Venus in Furs_ , New York: Blast Books, 1989 (1st edn, 1870)
Warhol, Andy, and Pat Hackett, _POPism_ , Orlando: Harcourt, 1980
Warhol, Andy, edited by Pat Hackett, _The Andy Warhol Diaries_ , London: Simon and Schuster, 1989
Whitburn, Joel, _Billboard Book of Top 40 Hits_ , New York: Billboard, 1996
Wilcock, John, and Christopher Trela, _The Autobiography and Sex Life of Andy Warhol_ , New York: Trela Media, 2010
Witts, Richard, _Nico: The Life and Lies of an Icon_ , London: Virgin, 1993
Woronov, Mary, _Swimming Underground: My Years in the Warhol Factory_ , London: Serpent's Tail, 2000
Wrenn, Michael, _Lou Reed: Between the Lines_ , London: Plexus, 1993
Young, James, _Nico: Songs They Never Play on the Radio_ , London: Bloomsbury, 1999
## Author's Note and Acknowledgements
Unlike too many celebrities, Lou Reed didn't write an autobiography. 'Why would I? Write about myself? Set the record straight? There's not a record to set straight. I am what I am. It is what it is, and fuck you,' he told _Mojo_ with characteristic acerbity in 2013. An artist's work is the most important part of his life, but for those of us who were fascinated by Reed's songs it is natural to want to know more about the man who wrote and sang them.
To paraphrase Frank Zappa, it is difficult to write about what it means to listen to music, beyond giving a basic description of the work, a suggestion of what the author gets from it, and what critics have said. The effect remains ethereal, and what makes for a good or bad song is very much a matter of taste. In writing about a songwriter, it is easier to discuss the ideas and images conveyed in lyrics, which is partly why so much attention is paid to lyrics in books of this kind. Also, writers are interested in the words of other writers; it is a subject they understand. Above all, however, this book is a portrait of a human being, of his personality and actions, and the trajectory of his career, which is to say, it is a biography.
As with any significant artist with a long career, there have been several biographies of Lou Reed, and there will be more. Of those published in his lifetime, Victor Bockris's book, first published in 1994 as _Lou Reed: The Biography_ and recently updated as _Transformer: The Complete Lou Reed Story_ , was the best. It has its faults, as all books do. Bockris doesn't always get his facts right, and he has an unfortunate habit of quoting anonymous sources. But it is of interest. Two less impressive biographies were published quickly after Reed's death: _Lou Reed: The Life_ by Mick Wall and _The Life and Music of Lou Reed_ by Jeremy Reed, the latter an updated edition of an earlier work. I hope that _Notes from the Velvet Underground_ surpasses all these books in all vital aspects. A good biography should be entertaining, enlightening and convincing. It should be well constructed and clearly written with a non-judgemental tone, conveying a sense of the subject's character and personality, his work, his relationships, and the times in which he lived. That is what I have always aimed for. It is of course for the reader to decide what they think of the result.
To explain a little more about my approach, _Notes from the Velvet Underground_ covers Reed's whole life with a slight emphasis on his work with the Velvet Underground, upon which his reputation largely rests. The songs he wrote for the band in the 1960s were the mainstay of his act for the rest of his career, and he was always closely associated with underground culture: the underbelly of urban life and the excitement of the illicit. Reed was one of the most literary rock musicians, making his notes from underground and reporting back to us in song, hence this book's title, which also invokes Dostoyevsky's underground man, another damaged, hyperconscious outsider. Reed aspired to write as well as the Russian.
As a biographer, it is helpful to have a pre-existing interest in and sympathy for the subject, and I have always felt drawn to outsiders. I developed my enthusiasm for Reed's songs at a young age, when music makes a strong and lasting impression, hooked by his second solo album, _Transformer_ , then by the Velvet Underground. Buying his subsequent albums could be a frustrating business, and sometimes felt like a waste of money. He was an inconsistent artist who released some shoddy work, and I found him to be a stiff and awkward performer in concert. Nevertheless, I always liked the way he expressed himself, I was intrigued by his subject matter, and I enjoyed his intelligence and sense of humour. So I kept listening. As some readers may agree, his work started to get more interesting again in recent years, if not more popular, and I have made an effort to reflect that in this book, which I started directly after his death in October 2013.
This is my tenth biographical book, written over the course of twenty years. My debut was a book about the murderers Fred and Rosemary West, published in 1995. I have written one other true-crime book (about a misfit gang of robbers), two books about the American author Charles Bukowski (another outsider), a book about the leading figures in professional golf, a history of the arts in the 1970s, an investigation into the so-called 27 Club of rock musicians who died at that young age, and three biographies of popular musicians: Bob Dylan, Paul McCartney and now Lou Reed. These might seem like eclectic subjects, but all these books are essentially about remarkable and intriguing people. It is the psychology of the individual that attracts me.
All biographies are built on a foundation of knowledge laid down over the years by previous writers. The first job is to read and assimilate everything that is in the public domain – books, newspaper articles, magazine interviews – as well as listening to and watching everything pertinent. I then do extensive research of my own. I visit the places where the subject lived and worked; I locate as many documents as I can find, including court records and what Americans call vital records (births, deaths and marriages), allowing me to construct the factual skeleton of the story; and I communicate with everybody I am able to reach who played a part in the life, including friends, family members, lovers, spouses and colleagues, in this case interviewing approximately one hundred and forty people. The job is then to assemble all this information in a way that tells the story better than it has been told before, and moves the story on, correcting mistakes, adding detail, introducing new themes and nuances, creating a compelling and entertainingly fresh portrait.
The first person I contacted about this book was Reed's widow, Laurie Anderson. She chose not to participate. This is not unusual, and it isn't a calamity. One doesn't require anyone's permission to write biography, and the people closest to the subject often have their own agenda. There is, however, a prejudice that biographies are only valid if they are 'authorized', a frequently misused term implying that there is an authority who ordains what is legitimate. There is not. In fact, with exceptions, so-called authorized biographies all too easily become vanity projects to promote an image of a public figure while burying or misrepresenting difficult truths. There are bad unauthorized biographies, of course. Writing a good book of any kind is hard. A particular challenge for the independent biographer is getting people to speak. Some won't cooperate, because they feel they aren't allowed to, whether or not there is interference. Others are brave enough to make up their own mind. If the biographer is sincere and persistent, they will get people to talk, while retaining the advantage of having the freedom to write a book that is not censored or influenced by interested parties.
Lives are messy, and Reed's was messier than most. He was a complex, difficult man. The whole truth must come out in a book of this kind, as much as one is able to find out about the truth. Most lives remain mysterious to some degree. Readers sometimes want to know if the author likes or dislikes his subject. I try to be neutral, though the shape and tone of the book is subjective. The aim is to assemble the best available evidence in a way that seems true and fair. I didn't know Reed personally, so I learned about him from people who did, and formulated a story partly based on their testimony, following the themes that emerged, while editing out what seemed to be bogus or irrelevant. Ultimately, it is up to readers to decide what they think about the subject, if they have to form a conclusion. To my mind, it isn't a question of liking or disliking someone like Lou Reed; it is enough that he is significant and interesting.
In writing biographies of popular musicians, my experience has been that some readers come to these books with a bias. They may have been listening to the artist's music for many years, during which time the songs have become part of their lives. They identify with the artist, have strong ideas about which records are best, and often believe they know the truth about them, though they haven't made a close study of the subject. If they then read what they take as criticism of their idol, though it may simply be somebody's opinion, they can become offended. It is better to keep an open mind, accepting that opinions differ, as interviewees often have contradictory memories of events (it is rare for two witnesses to remember an incident exactly the same way), that what you believe is not necessarily true, and that there is good and bad in most people.
I am grateful to everyone who assisted me during my research including, in alphabetical order: Shelley Albin, Paul Auster, Cornelius and Pat Bass, Brigid Berlin, Rubén Blades, Ellard 'Moose' Boles, Daryl Bornstein, Randy Brecker, David Byrd, Felix Cavaliere, Robert Christgau, Rupert Christie, Erin Clermont, Ray Colcord, Pete Cornish, David Croland, Joe Dallesandro, Thomas Dargan, Clive Davis, Stuart 'Dinky' Dawson, Godfrey Diamond, Dion DiMucci, Norman Dolph, Joe Doyle, Allen Edwards, Patti Elam, Steve Epstein, Chrissy Faith, Danny Fields, Yossi Fine, Ritchie Fliegler, Henry Flynt, Marty Fogel, Michael Fonfara, Danny Frankel, Alan Freedman, Vincent Fremont, Josh Alan Friedman, Barbara Fulk, Sean Fullan, Jill Furmanovsky, Bernie Gelb, Liz Gilmore, Pentti 'Whitey' Glan, Danny Goldberg, Dr James Gorney, Paula Gorney (née Swarzman), Bob Gruen, Pat Hackett, John Halsey, Chuck Hammer, Howard Harding, Stuart Heinrich, Catherine Hesketh (née Guinness), Barbara Hodes, Allan Hyman, Jerome Jackson, Elyse 'Ellie' Jacobs, Jim Jacobs, Tama Janowitz, Prakash John, Dennis Katz, Steve Katz, Scott Kempner, Howie Klein, Professor Michael S. Kogan, Bettye Kronstad, Bob Kulick, Steve Labar, Dari Lallou, Gloria Lauden, Lisa Law, Jeffrey Lichtman, Fred Maher, Tom 'Bones' Malone, Peter Maloney, Ed McCormack, Zeljko McMullen, Kate Mikulka, Richard Mishkin, Paul Morrissey, Martha Morrison, Jenni Muldaur, Elliott Murphy, Billy Name, Mandy Newall, Judith November (née Titus), Richard Nusser, Struan Oglanby, Larry Packer, John Passalacqua, Doane Perry, Kellie Petersen, Terry Philips, Jonny Podell, Mike Quashie (the Limbo King), Charlie Rapino, Mike Rathke, Genya Ravan, Master Ren Guang-Yi, Bob Ringe, John Rockwell, Jeffrey Ross, Ed Sanders, Tom Sarig, Fernando Saunders, Jane Scarpantoni, Fred Schmidt, Sam Shepard, Jon Sholle, Jeanie and the late Jimmy Scott, Mike 'Coach' Sexton, Richard Sigal, Tony 'Thunder' Smith, Bruce Somerfeld, Pete Stampfel, Corky Stasiak, Chris Stein, Rosalind Stevenson, Karl Stoecker, Casey Synge, Rita Teel, Robert Tramontin, Moe Tucker, the late Dick Wagner, Rick Wakeman, Peter Walsh, Rob Wasserman, Merrill 'Bunny' Weiner (née Reed) and her husband Harold, Chris Whent, Victoria Williams, Robert Wilson, Christine Wiltshire, Holly Woodlawn, Mary Woronov, Bruce Yaw, Doug Yule and Tony Zanetta.
I drew on a handful of interviews conducted for some of my previous books. These include my interviews with the late Al Aronowitz for _Down the Highway_ ; with Clem Cattini and Herbie Flowers for _Fab_ ; and Angie Bowie and Ken Scott for _Seventies_.
I am grateful to staff at the Andy Warhol Museum, the British Library and New York University Registrar's Office; Bill Bentley in Los Angeles; Ken Lally and Ravi Romano in Blairstown; June Koffi and Ivy K. Marvel at Brooklyn Public Library; Rose Luna at Freeport High School; Consuelo Velez at Caroline G. Atkinson School in Freeport; Regina G. Feeney and Cynthia J. Krieg at Freeport Memorial Library; Corey E. Stewart at the National Personnel Records Center in St Louis; Diana Rahmaan at PS 92 in Brooklyn; Mary M. O'Brien at Syracuse University Archives; Edda Tasiemka at the Hans Tasiemka Archives in London; and Richie Unterberger in San Francisco.
Finally, thank you to Andrea Henry, Sheila Lee and Michelle Signore at Transworld; to copyeditor Sarah Day; to lawyer Lucy Moorman; and to my agent, Gordon Wise, at Curtis Brown.
## Picture Acknowledgements
Every effort has been made to trace copyright holders; any who have been overlooked are invited to get in touch with the publishers.
### **Photos in the text**
Photos taken by the author; _'59 Voyageur_, Freeport, New York; courtesy Syracuse University Archives, Syracuse University Libraries; courtesy Shelley Corwin; Getty Images (Adam Ritchie/Redferns; Fred W. McDarrah; NY Daily News Archive; Michael Ochs Archives; Roberta Bayley/Redferns; Ron Galella Ltd/WireImage; Ebet Roberts/Redferns); courtesy Bettye Kronstad; courtesy Barbara Hodes; Elliott Murphy personal collection; _New Musical Express_, May 1977, © Jill Furmanovsky/rockarchive.com; courtesy Erin Clermont; Alamy (© AF archive/Alamy); courtesy of Fred Schmidt.
### **Picture section**
The original Velvet Underground, 1965: © Donald Greenhaus/Shabobba ® International, LLC.
Andy Warhol with Gerard Malanga, Mary Woronov and the Velvet Underground, 1966: photo © Billy Name.
Velvet Underground, _c_. 1967: © Pictorial Press Ltd/Alamy.
Velvet Underground, Cambridge, Massachusetts, 1969: © Jeff Albertson/Corbis.
Velvet Underground in the studio, Scepter Records, April 1966: © Steve Schapiro/Corbis.
LR and his band in the Netherlands, _c_. 1972: © Retna/Photoshot.
LR onstage, 1974: photo © Michael Zagaris/Handtint © Kristin Sundbom.
LR and Rachel, CBGB, New York, August 1976: © Bob Gruen/www.bobgruen.com.
LR on stage, Amsterdam, 20 September 1973: Gijsbert Hanekroot/Redferns/Getty Images.
LR backstage at the Austin Opry House, 9 April 1978: copyright 1978 Scott Newton.
LR marries Sylvia Morales, February 1980: photos all courtesy Corky Stasiak.
LR and Robert Quine performing at the Beacon Theater, New York, 18 October 1984: Ebet Roberts/Redferns/Getty Images.
LR and his band onstage, the Bottom Line, New York, 1979: Chuck Hammer personal archive.
John Cale and LR, Montana Studios, New York, December 1988: Marilyn K. Yee/New York Times Co/Getty Images.
LR and Laurie Anderson, Turin, 10 July 2002: © Guido Harari/Contrasto/eyevine.
LR, West Village, New York, 6 June 2013: © Tom Meinelt/Splash News/Corbis.
Laurie Anderson and LR at the Mermaid Parade, Coney Island, New York, 19 June 2010: © Wenn Ltd/Alamy.
## Index
The page references in this index correspond to the printed edition from which this ebook was created. To find a specific word or phrase from the index, please use the search feature of your ebook reader.
Abram, Peter 125
Academy of Music (New York) 174
'Adventurer' 305
'After Hours' 120, 290, 292
Agnelli family 190
AIR Studios (Montserrat) 234–5
Albarn, Damon 330
Albin, Shelley 25–30, _27_ , 31, 33–4, 36–7, 39–40, 43, 88, 112–13, 118, 122, 146
Alcoholics Anonymous (AA) 256
Algren, Nelson
_A Walk on the Wild Side_ 151
Alice Tully Hall (New York) 158–9
'All Tomorrow's Parties' 56, 70, 79, 93, 291
All-night Workers 21
Allman Brothers 212, 213
Amaya, Mario 111
Ambasciatori Hotel (Rome) 189
American Express 199, 255
American Literary Council of Little Magazines 205
Amnesty International 257
Amos, Stanley 86
'And What, Little Boy, Will You Trade for Your Horse?' (short story) 33
Anderson, Laurie 109, _306_ , 307–8
background 282–3
_Bright Red_ album 297
and death of Reed 340
expression of Reed's love for in songs 305
influence on Reed 300, 311
meeting of Reed 283
'O Superman' 283
and Reed's memorial service 343–4
relationship with and marriage to Reed 283, 295–7, 305, 324, 325
trekking expedition to Himalayas 305
Andy Warhol Exposition 275
Andy Warhol Museum (Pittsburgh) 303
_Andy Warhol, Up-tight_ 71–3
'Andy's Chest' 122, 150
'Angel Baby' 22
_Animal Serenade_ (album) 319
_Another View_ (album) 254
Apollo Theater (NY) 66, 342
Appleyard, Bryan 271
Arista Records 208–9, 210, 214, 217, 218, 219, 223, 232, 234, 240
Aronowitz, Al 57, 59, 61, 62
Ashley, Bill 9
Atlantic Records 126–7, 139, 140, 142, 254
Audio Engineering Society 295
Auster, Paul 302–3
'Average Guy' 246
Bad Company 181
Baez, Joan 50
Bailey, Alice 119
Balloon Farm 91
Banana Album ( _The Velvet Underground & Nico_) 78–81, 83, 95–7, 98, 104
'Banging on My Drum' 211
Bangs, Lester 121–2, 137, 183, 197, 230
_Banjo Eyes_ (Eddie Cantor) 3
Barber, Adrian 127
Baron (pet) 215, 226
Bataclan, Le (Paris) 145
'Baton Rouge' 314
BBC 313
Beachnuts 56
Beatles 40, 42, 57, 58, 129
'I Want to Hold Your Hand' 40
'Love Me Do' 42
_Sgt Pepper's Lonely Hearts Club Band_ 98
Beck, Jeff 244
'Bed, The' 165
'Beginning of a Great Adventure' 295
'Beginning to See the Light' 119
Bellevue Hospital (NY) 106–7
Bellow, Saul
_Humboldt's Gift_ 34
_Bells, The_ (album) 229–30
'Bells, The' (song) 229–30
Bentley, Bill 265, 271
Berg, Alban 330
Berkshire Music Center (Tanglewood) 48
_Berlin_ (album) 164–6, 167, 172–3, 230, 322
'Berlin' (song) 144, 145, 164, 223
_Berlin_ (stage show) 321–3, 324, 339
Berlin, Brigid (aka Polk) 65, 99, 110, 132, 139
Bernstein, Leonard 48
Beth-El Hospital (Brooklyn) 2
'Betrayed' 250
Big Rig 286
_Billboard_ 96–7, 104, 122, 161
Binaural Sound 218, 222, 230, 300
'Black Angel's Death Song' 53, 63, 96
Blades, Lisa 258
Blades, Rubén 257, 258–9
_Nothing But the Truth_ 259
Blairstown 225–6, _226_ , 231, 241, 252, 258
Blairstown Inn 241–2
Blake's Hotel (London) 168, 180
Blind Boys of Alabama 119, 316
Blondie 98
Blood, Sweat and Tears 141
_Blue in the Face_ (film) 301–2
_Blue Mask, The_ (album) 242–9
'Blue Mask, The' (song) 243, 244, 245
Bockris, Victor 18, 228
Boles, Ellard (Moose) 226, 228, 229, 235
Bono 264
Bornstein, Daryl 238, 240, 243–4, 296
Boston Tea Party 99–100, 114, 123
Bottom Line (NY) 222, 234, 251
Bowie, Angie 142, 148, 149
Bowie, David 97, 141–3, 147, 166, 233, 258, 309, 316, 341
'Ashes to Ashes' 244
fights with Reed 180–1, 233–4
_Hunky Dory_ 141, 144, 147
performs with Reed at Royal Festival Hall 148
performs with Reed at fiftieth birthday concert (1997) 311
plays Carnegie Hall (1972) 153–4
produces _Transformer_ 149–55
'Queen Bitch' 141
relationship and rift with Reed 142, 147, 153, 179, 180–1, 233–4, 258, 311
_The Rise and Fall of Ziggy Stardust and the Spiders from Mars_ 147
'Space Oddity' 141
success of 233
and Warhol scene 141
_Young Americans_ 179
Breitenmoser, Dr Rudolf 193
Broadway Arcade (NY) 238, 239, 252
Bronco Bowl (Dallas) 307
Brooklyn 3
Brooklyn Academy of Music (BAM) 272
_Brooklyn Eagle_ 3
Brown, James 66
Bruce, Jack 165
Bruce, Lenny 30
_Büchse der Pandora, Die_ ( _Pandora's Box_ ) (Wedekind) 330
Burretti, Freddie 148
Burroughs, William 30, 41, 119, 147
_Junky_ 26
Buscemi, Steve 316
Bush, President George H.W. 269
Byrd, David 184
Byrds, the 50
Byrne, David
'Psycho Killer' 207
_Cabaret_ 144
Café Bizarre (Greenwich Village) 59, 61
Cage, John 48
Cale, John 47–9, 50, 89, 145, 207, 218, 271
background and character 47–8, 60
collaboration with Reed on _Songs for Drella_ 263, 272–3
and death of Reed 342–3
dress style 105
drug taking 52, 255
fling with Sylvia Morales 228–9
formation of Falling Spikes with Reed 52, 53–4
marriage to Betsey Johnson 105
musical talent 49
and reformed Velvet Underground European tour (1993) 284–5, 287, 290
relationship and rifts with Reed 100, 106, 108, 254, 271–2, 273, 287, 292
and reunion of Velvets at Cartier event 276
sacking of from the Velvets 113, 115, 276
solo career 255
songwriting partnership with Reed 50–1
and Velvet Underground 60, 67–8
and Velvets' debut album 79
_What's Welsh for Zen_ 47
and _White Light/White Heat_ album 102–3, 105
Calhoun, Sarth 327
'Calm Before the Storm, The' 259
'Candy Says' 29, 87, 101, 117, 120, 319
Carnegie Hall (NY) 153–4
'Caroline Says' 105, 164, 165
Carroll, Jim 138
_The Basketball Diaries_ 132
Carson, Tom 271
Cartier Foundation event 275–6, 284
Castanaro, Tommy 127
Castle, the (LA) 81
Cattini, Clem 145
Cavaliere, Felix 23
CBGB (NY) 202, 207, 338
CBS Television 66
Céline, Louis-Ferdinand 220
Champion Mr Sox (pet) 297–8
Chandler, Raymond 41, 302
Charles, Ray 22
'Charley's Girl' 203–4
Charter 77 human rights movement 274
Chateau Marmont (Hollywood) 117
CHDs _13_ , 14
Chelsea Rendezvous (London) 233
Cher 82–3
Cherry, Don 32, 230, 243
_Chien Andalou, Un_ (film) 53
'Chipmunk Song, The' (Chipmunks) 10
'Chooser and the Chosen One' 211
Christgau, Robert 145, 221, 222–3
Christie, Rupert 322, 323, 324
Chrysler Art Museum 99
Clapton, Eric 293
Clermont, Erin 37–9, 85, 88, 109, 147, 155, 221, 224, 227, _227_ , 247, 295–6, 345
Cleveland Clinic 336, 339
Clinton, George 173
Coachella Festival 336
Cohen, Leonard 41
_I'm Your Man_ 267
Colcord, Ray 168, 174
Coleman, Ornette 32, 316
Coleman, Ray 148
Columbia Records 81, 208
Columbia University 108–9
Completely Sober 256
_Coney Island Baby_ (album) 187, 195, 200, 201–5
'Coney Island Baby' (song) 9, 203
Coney Island Mermaid Parade 329
Conrad, Tony 47, 48, 49, 51, 56
Continental Hyatt House (Toronto) 195
Cook, Judy 241
'Cool It Down' 128
Cooper, Alice 164, 188
_School's Out_ 164
Copland, Aaron 48
Cornish, Pete 286, 315
'Coyote' 287
_Crash_ (Cronenberg) 339
_Crawdaddy_ 104, 122
'Crazy Feeling' 187
Creedmoor Psychiatric Center 16, 20
_Creem_ 122, 137, 197
interview with Reed (1979) 18, 166, 224
Croland, David 110
Cunningham, Merce 100
Curtis, Jackie 151
Cutrone, Ronnie 256
'Cycle Annie' 44, 56
Czechoslovakia 274
Dafoe, Willem 316
Dallesandro, Joe 151, 160
Daltry, Roger 169
Darling, Candy (James Slattery) 29, 87, 117, 151
Daryl 54
_Datebook_ (magazine) 73–4
Davies, Ray 208
Davis, Clive 208, 209, 212, 221, 223, 234, 240
Davis, Stephen 172
Dawson, Dinky 165, 169, 174
De Maria, Walter 47, 49, 55
Decca 323
Defries, Tony 141, 192–3
Dekam, Johnny 14
Delon, Alain 70
Demorest, Stephen 224
Denver, John 140–1, 175
Diamond, Godfrey 201–2, 203, 205, 313
Diddley, Bo 58
'Dime Store Mystery' 261, 270
DiMucci, Dion 268, 328
Dire Straits 235
'Dirt' 219
'Dirty Boulevard' 268
Divine 114
'Doin' The Things That We Want To' 252
_Dolce Vita, La_ (Fellini) 70, 190
Dolph, Norman 78, 79, 81
Dom club 74–6, 91
'Don't Hurt a Woman' 258
Doors, the 104, 123–4, 163
Dostoyevsky, Fyodor 41, 248
'Down at the Arcade' 252
_Downfall_ (Hirschbiegel) 333
'Downtown Dirt' 187, 219
Doyle, Joe 281, 289
'Dream, A' 119, 273
Duke (pet) 215, 226
Dunbar Music 161, 209, 253
Dylan, Bob 39, 41, 50, 56, 57, 70, 72, 123, 129, 173, 220, 312, 345
_Basement Tapes_ 128
_Before the Flood_ 176
celebration concert (1992) 282
'Foot of Pride' 282
_The Freewheelin' Bob Dylan_ 39, 40
'I'll Keep It with Mine' 70
Reed's comments about 129, 220, 282
_Ecstasy_ (album) 314–15
_Ed Sullivan Show_ 58
Edgar Allan Snake (pet) 177
Edinburgh Playhouse 289
'Egg Cream' 6
Elam, Patti 83
Electric Lady studio (Greenwich Village) 178, 187, 236
Eliot, T.S. 35
Emerson, Eric 96
_Emotion in Action_ (volume of pictures) 320
End of Cole Avenue (Dallas) 124–5
'Ennui' 178
Eno, Brian 97, 98
Epstein, Steve 239, 252, 258, 260, 295–7
_Erdgeist_ (Earth Spirit) 330
Eulenspiegel Society 227, 228
'European Son' 80, 96
Everyman Band 188, 235, 246
_Excursion on a Wobbly Rail_ 24
Exploding Plastic Inevitable 74–8, 81, 82–3, 90–1, 92, 93, 130, 185
Ezrin, Bob 164, 165, 166, 172, 322
Faison, Dave 72
Faith, Chrissy 223, 229
'Fallen Knights and Fallen Angels' (essay) 136
Falling Spikes 52, 53–4, 55
'Families' 229
Feldman, Andrea 'Whips' 110
Fellini, Federico 70, 188
Felt Forum (NY) 186
'Femme Fatale' 70, 93, 97
Fields, Danny 73–4, 87, 130, 132, 134, 138, 139, 140, 160, 184, 202, 297
Fillmore Auditorium (San Francisco) 84
Film-Maker's Cinémathèque (NY) 55
Fine, Yossi 264, 265, 266–7
Finkelstein, Nat 72, 73
'Fire Music' 317
Flavin, Dan 110, 114
Fleming, Ian 9
Fliegler, Ritchie 218–19
Flowers, Herbie 152
Flynt, Henry 48, 51, 91
Fogel, Marty 188, 211, 230, 232, 233, 235
'Foggy Notion' 122, 123, 254
folk music revival 24, 25, 39, 50
'Follow the Leader' 211
Fonfara, Michael 178, 181, 184, 188, 190, 195–6, 198, 216, 223, 228, 229, 232, 235, 236, 240
Fong-Torres, Ben 183
Forum (London) 290
_Frampton Comes Alive!_ 176
Frankel, Danny 306
Freedman, Alan 175
Freeport High School 9, 14
_Friday the 13th_ 225
Friedman, Josh Alan 217
Friedman, Karen 152
Fulk, Barbara 175, 192, 193, 199
Fullan, Sean 246, 249
Furmanovsky, Jill 216
_Fusion_ magazine 135
Futterman, Judith 5
Gabriel, Peter 329
Garrett, Lesley 313
'Gassed and Stoked' 278
Gelb, Bernie 169, 170, 171, 174
Gere, Richard 261
_Get Crazy_ 301
'Gift, The' 34, 102, 289
Gilmore, Liz 183, 216
Gilmore, Mikal 239
Ginger Man (NY) 142
Ginsberg, Allen 138
Glan, Pentti 'Whitey' 168, 170, 174, 178, 185
Glasgow University 154
Glass, Philip 303
Glastonbury Festival
(1992) 281
(1993) 292
(2010) 330
Gleason, Ralph 84
Goldberg, Danny 307–8
Goldstein, Richard 96
'Good Evening Mr Waldheim' 269
'Goodnight Ladies' 150, 152
Gorillaz 330
Gorney, James 31, 35, 41
Gorney, Paula (née Swarzman) 31
Graham, Bill 84
Gramercy Park Hotel (New York) 199–200, 201
Grateful Dead 75, 129
Great Sneaker Robbery 86
Greenwich Village (NY) 59, 86, 223–4, 269, 310
_Growing Up in Public_ (album) 6, 234–6, 238, 239, 240, 251
Gruen, Bob 297, 339
_Guardian_ 289
'Gun, The' 243, 246
Gymnasium (New York) 97–8
Gysin, Brion 119
Hackett, Pat 261
Hall and Oates 249
'Halloween Parade' 269
Halsey, John 150
Hammer, Chuck 230–1, 232, 235, 244
Hammersmith Odeon 233
Hammett, Kirk 332
Hardin, Jackie 152, 153
Harding, Howard 233
Harris, Phil 9–10
Harry, Debbie 98, 343
'Harry's Circumcision' 277–8
Havel, Václav 274–5, 291, 310
Hayloft 29, 33, 138
'Head Held High' 128
'Heavenly Arms' 249
Hegarty, Antony 117, 316, 318, 319, 322
Heinrich, Stuart 218, 235
Heliczer, Piero 55, 66, _66_
Hell, Richard 245
Heller, Fred 146, 156, 160, 177, 194–5, 198, 199, 200
Hendrix, Jimi 168, 178
_Herald Tribune_ 68
'Here She Comes Now' 102
Herko, Freddy 65
'Heroin' 1, 8, 40–2, 43, 44, 46, 52, 56, 58, 59, 60, 67, 68, 77, 79, 83, 96, 121, 124, 129, 185, 276, 289, 293
'Heroine, The' 243
Hetfield, James 333
'Hey Mr Rain' 289
Hockney, David 261
Hodes, Barbara 70, 101, 137, 139, 153, _176_ , 177–8, 179, 180, 181, 225
background 88–9
relationship with Reed 88–90, 173, 177, 182, 225
'Hold On' 269
'Home of the Brave' 250
Honda 255
'Hookywooky' 305
Hopkins, Lightnin' 25
Hopper, Dennis 82
Hotel Bristol (Paris) 170
Hotel Delmonico (NY) 67, 68
'How Do You Speak to An Angel' 238
Howe, Steve 144
_Hudson River Meditations_ 320–1
Human League 249
Humble Pie 181
Hunter, Steve 167, 169, 178, 322
Hustvedt, Siri 303
Hyman, Allan 7–8, 9, 11, 12, 21, 80, 166, 265
Hynde, Chrissie 175
'I Believe in Love' 211
'I Can't Stand It' 26, 122, 144
'I Found a Reason' 128
'I Heard Her Call My Name' 102
'I Love You, Suzanne' 252, 253
'I Wanna Be Black' 219, 221
'I'll Be Your Mirror' 26, 70, 93
'I'm Set Free' 119
'I'm Sticking with You' 122, 123, 128, 292
'I'm Waiting for the Man' 30, 52–3, 56, 59, 72, 77, 79, 83, 97, 125, 148, 153, 290, 292, 311
_Independent on Sunday_ 279
Inn on the Park (London) 143, 165
Isherwood, Christopher 164
Italy
Reed's tour (1975) 188–91
Jackson, Jerome 9
Jacobs, Jim 169–70, 170, 175
Jades, the 10, 40
Jagger, Mick 124
Janowitz, Tama 256, 259, 263
Jefferson Airplane 129
Jeffreys, Garland 22
'Jesus' 119
John, Elton 178, 181, 313
John, Prakash 173–4, 178, 180, 183–4, 185
Johnson, Betsey 88, 89, 105
Johnson, Philip 100
Johnson, President Lyndon 51
Jones, Allan 290
Jones, Brian 70
Joyce, James 248
_Finnegans Wake_ 35
_Ulysses_ 35
Julià, Ignacio 131
_Feed-back_ 131
'Junior Dad' 333, 335–6
Kane, Pat 289
Katz, Dennis 141, 142, 146, 160, 161, 163, 164, 167, 170, 171–2, 192, 193, 196, 206, 219
accused by Reed of misappropriation of funds 199
law suit against Reed and court battle 194, 200, 217, 253
sacking of by Reed 199
Katz, Steve 141, 170, 173, 174, 178, 186, 187–8, 200–1, 202–3, 217
'Keep Away' 235
Kempner, Scott 196
Kennedy, President John F. 243, 249
Kent, Nick 145, 169, 221
Kerouac, Jack
_On the Road_ 9
'Kicks' 204
'Kids, The' 164, 165, 172
'Kill Your Sons' 18, 177, 178, 179, 319
King Curtis 10
King Jr, Martin Luther 108–9, 219
Kinks, the 208
Kiss 46
Klein, Howie 265, 266, 273, 282, 286, 297, 300, 308, 316, 317
Kogan, Michael 32, 33
Krieger, Ulrich 327
Kronfeld, Eric 217, 242–3, 249, 265
Kronstad, Bettye 107–9, 142, 143, 148, 152, 159, 161, 165, 173
acting career 131
attempt to stop Reed drinking 163
childhood and background 107, 164–5
condescension of by Reed's circle 138–9
ending of marriage and divorce 167–71
engagement to Reed 146
hitting of by Reed 163, 166
relationship with and marriage to Reed 118, 121, 137–8, 146, 155–8, _157_ , 162–3, 166, 167, 170–1
Kulick, Bob 203
La Cave (Cleveland) 116, 123
LA and the Eldorados 1–2, 21–2, _23_
La MaMa (NY) 106, 107
'Lady Day' 175
'Lady Godiva's Operation' 102
Lallou, Dari 152, 153
'Last Great American Whale' 269
'Last Night I Said Goodbye to My Friend' 304
'Last Shot, The' 250
_Late Show_ (David Letterman's) 119
Law, John Phillip 81
'Leave Her for Me' 10
'Leave Me Alone' 219
_Legendary Hearts_ (album) 250–1
Leiber and Stoller 44
Leigh, Michael
'The Velvet Underground' 56
Lennon, John 50, 123, 312
Leno, Jay 284
Letterman, David 119
Levison, Charles 232–3
Leysin festival (Switzerland) (1992) 281
Licata, John 79
'Like a Possum' 315
Lilienthal, Dr Alfred 68
Linich, Billy _see_ Name, Billy
'Lisa Says' 105, 144
Little Charley's Clam House (NY) 239
'Little Dog' 333
Live Aid 264
_Live at Max's Kansas City_ (album) 133
_Live in Italy_ 251
_Live MCMXCIII_ album 294
_Loaded_ (album) 127–9, 136–7
Lobel, Elektrah 53
Lofgren, Nils 230
Lolabelle (pet) 296, 310, 330–1, 333
London 144
'Lonely Avenue' 277
_Lonely Woman Quarterly_ 32–3
'Lonesome Cowboy Bill' 128
Long Island 6, 328
_Los Angeles Times_ 230
_Lou Reed_ (album) 143–6
_Lou Reed, John Cale & Nico_ (CD) 145
_Lou Reed Live_ (album) 185, 195
_Lou Reed Live, Take No Prisoners_ (album) 8, 222, 223
_Lou Reed's New York_ (book of photos) 320
_Lou Reed's New York Shuffle_ (radio show) 326
'Love Is Here to Stay' 234, 235
_Lulu_ (album) 331–5
_Lulu_ (stage show) 330, 331
Lyons, Donald 130, 134
McCartney, Paul 50, 123, 144, 235, 328
McCormack, Ed 158, 220
McGuire, Barry 83
'Eve of Destruction' 83
McGuire, Wayne 104
McLaughlin, John 244
MacLise, Angus 48, 48–9, 55, 56, 66
McMullen, Zeljko 321
'Mad' 314
Madame Tussaud's 194
Madison Square Garden 123–4, 282, 311, 331
Madonna 257, 265
_Magic and Loss_ (album) 277–9, 280
_Magic and Loss_ tour 278–9, 281
Maher, Fred 247, 251, 252, 267, 268, 271
'Make Up' 150, 154
Malanga, Gerard 61–2, 63, 68, 72, 77, 93, 185
Maloney, Peter 24
Mamas & the Papas
'Monday, Monday' 84
Marsh, Dave 204
Martin, George 234–5
Matrix (San Francisco) 125
Maunkberry's (London) 215–16
Max's Kansas City 109–10, 130, 132–3, _133_ , 181, 327
Mayfair Sound Studios 101
Media Sound 201, 202, 204
Mekas, Jonas 67
_Melody Maker_ 7, 147, 148, 290
Meltdown Festival (1997) (London) 311
'Men of Good Fortune' 164
'Merry Go Round' 40
_Metal Machine Music_ (album) 195–7, 200, 201, 207, 327
Metal Machine Trio 327
Metallica 266, 331–5
MGM Records 81–2, 92, 93, 94, 96, 100, 121, 122, 126, 129, 178, 254
Mikulka, Kate 292
Miller, Dr Charles 336, 337, 339
Minnelli, Liza 261
Mishkin, Richard 2, 21, 22, _23_ , 39, 41, 69, 91
'Mistress Dread' 332
_Mistrial_ (album) 257–8
Mitchell, Gala 154
Montez, Mario 76
Moore, Sam 258
Morales, Julie 263
Morales, Sylvia _see_ Reed, Sylvia
Morgan Studios (London) 144, 165
Morrison, Jim 163
Morrison, Martha (née Dargan) 58, 59, 70, 71, 77, 127–8, 255, 276, 290, 303–4
Morrison, Sterling 24–5, 54, 57, 114, 118, _126_ , 136, 255
character 60
death 304
jobs 60
non-Hodgkin's lymphoma 303–4
and reformed Velvets European tour 284, 287, 290–1, 292
relationship with Reed 131–2, 136, 275–6, 285, 291
and reunion of Velvets at Cartier event 276
and sacking of Cale from the band 113
studies English literature 128, 131
teaching assistant 178, 255
tug-boat skipper 255, 287
and Velvet Underground 56, 60, 66, 79, 81, 91, 102, 116, 121, 127–8, 132, 254
Morrissey, Paul 62, 64–5, 68–9, 72, 74, 81, 84, 91, 92, 97, 99, 261
_Trash_ 151
Moss, Arthur 195
Mossman, Kate 341
Mothers of Invention 82
Mott the Hoople 137
MTV 247, 258, 261, 265
MTV Unplugged 293, 294
Muldaur, Jenni 329
'Murder Mystery, The' 119–20
Murphy, Elliott 137, 183, 199, 205–7, _206_
_Lost Generation_ 207
Murray, Charles Shaar 204–5, 230
'My House' 243
'My Old Man' 6, 235, 238
Myddle Class, the 57, 59
Name, Billy (Billy Linich) 48, 63, 64, 71, 76, 87, 96, 104, 106, 110, 111, 119, 261–3, 273, 275
Nelson, Brad 334
'New Age' 26, 126, 128, 136
_New Musical Express_ 145, 169, 172, 175, 204, 221
_New Sensations_ (album) 252–3
'New Sensations' (song) 252
_New Statesman_ 341
_New York_ (album) 266–72, 285
New York Hospital 260
New York Society for Clinical Psychiatrists 67
_New York Times_ 154, 158–9, 172, 197, 248, 309, 322
New York University 15, 20
_New Yorker_ 309
_Newsday_ 315
Nico (Christa Päffgen) 26, 68–71, 69–70, 145, 254, 270, 278, 304
background 69–70
_Chelsea Girl_ 100
cut out from the Velvets 100
death 270
heroin addict 270
joins Velvet Underground and sings with 68–9, 70, 72, 73, 77–8
life after leaving the Velvets and reunion with Reed 139–40, 145
lovers 70
relationship with Reed 69, 70, 72, 81, 88
split with Reed 81
'Sunday Morning' written for 92
voice 81
_1969 Velvet Underground Live with Lou Reed_ (album) 125, 126, 145, 183, 206, 294
Ninth Circle (NY) 184
Nixon, President Richard 184
_No One Waved Good-bye_ (book) 135–6
Norris, Gary 93
Norris, Rob 59
Novapark Hotel (Zurich) 192
Oakfield Avenue Music 161
'Ocean' 122, 123, 128, 144, 145
Ocean Club (NY) 207
Oglanby, Struan 279, 280, 281, 282, 285, 286, 287, 291, 294, 296, 297, 298–9, 300, 307
'Oh, Jim' 164, 165
'Oh, Sweet Nothin' 128
Olatunji, Babatunde 58
Olympia, L' (Paris) 171, 291
Ondine (Robert Olivo) 65, 72, 86, 100, 256
_One-trick Pony_ (film) 238–9
O'Neal, Ryan 82
Ono, Yoko 261
Orange (bar) 36
'Ostrich, The' 46–7, 49, 50, 79
Packer, Larry 188, 190–1, 194
Päffgen, Christa _see_ Nico
Palazzo dello Sport (Rome) 191
'Pale Blue Eyes' 26, 88, 118, 120, 122, 223, 273
Parliament (George Clinton's) 173
Pasha and the Prophets 21
Payne Whitney clinic 20
_Peel Slowly and See_ (box set) 284
_Perfect_ (film) 301
'Perfect Day' 146–7, 150, 154, 313
_Perfect Night_ (CD) 311
_Performing Songwriter_ 314
_Permanent Record_ (film) 301
Perry, Doane 244, 245, 247
Peter, Paul and Mary 50
Peterson, Kellie 241–2
_Philadelphia Daily News_ 94
Philips, Terry 44–5, 46, 47, 49, 50, 64
Pickwick City Records 44–6, 51, 55, 56, 64
_Planet_ 122
Plastic People of the Universe 274, 275
Plato's Retreat (NY) 227
Plummer, Amanda 316
Podell, Jonny 207, 212–14, 217, 234, 329
Podell, Monica 213
Poe, Edgar Allan 33, 41, 229, 316–17
'The Raven' 316, 337
'Imp of the Perverse, The' 317
_POEtry_ 316
Police, the 235
Polydor 206
Polygram 254
Pomus, Doc 277
Pop, Iggy (Jim Osterberg) 73, 309–10
'Power of the Heart' 323
'Power of Positive Drinking, The' 235, 243
Presley, Elvis 140, 245, 277
Primitives 49–50
Prince 257
'Prominent Men' 56
'Purple People Eater, The' (Wooley) 10
'Put a Tiger in Your Tank' (Intimates) 46
Quashie, Mike 327–8
Quine, Robert 125, 245, 246, 250, 251–2, 253
Rabinowitz, Mendel 4
Rabinowitz, Shulamit 'Shirley' 4, 327
Rabinowitz, Sidney _see_ Reed, Sidney
Rachel (Richard/Ricky) 182–4, 191, 192, 202, 203, 208, 212, 213, 214, 226, 248
deterioration and ending of relationship with Reed 221–2, 226, 229
homelessness 269
as Reed's tour manager 215
relationship with Reed 182–4, 187, 189, 191, 194, 195, 203, 205, 215–16, 235
Radio City Music Hall (NYC) 258
Ramones 202
Rapino, Charlie 192, 323
Rapp, Kenneth (Rotten Rita) 65, 86, 277
Rathke, Mike 263–4, 266, 268, 272, 285–6, 318, 334
Ravan, Genya 222
_Raven, The_ (album) 316–17
RCA Records 140–1, 142–3, 146, 149, 155, 156, 158, 159, 160, 161, 185, 195, 196, 197, 200, 201, 205, 207, 208–9, 243, 245, 250, 251, 253, 265
_Rebel without a Cause_ 21
Rechy, John 147
_City of Night_ 26, 37
Record Plant (NY) 122, 123, 210
_Red Shirley_ (documentary film) 327
Reed, Jimmy 22
Reed, Lou
**Early Life**
BA in Liberal Arts awarded 42
birth and birth name 2–3
bullying of at school 8
campus radio show 24–5
character 9
childhood 4, 6, 8
dating and sexual experiences 12–13, 14
education and school days 6, 8, 9, 14
emerging rebelliousness 11, 14, 21
first record 10
forms the LA and the Eldorados 1–2, 21–2, _23_
graduation (high school) 14–15, _14_
high-school bands involved with 9–10
home life 6–7, 10–11
interest in music and early musical tastes 8–9
first nervous breakdown and ECT treatment 15–19, 20, 22, 27, 49, 224, 259
at New York University 15
reading 9
story and poetry writing 11, 33–4
at Syracuse University 21–4, 32, 35, 36, 37, 38, 42
writes for _Lonely Woman Quarterly_ campus magazine 32–3
**Music Career**
abuse of women in songs written by 80, 153, 177, 204, 258, 332
and British press 172, 239, 311–13
criticism of famous contemporary artists 129
departure from the Velvet Underground 131–2, 134, 135, 137
forming of band Falling Spikes with Cale 52, 53–4, 55
forms Oakfield Avenue Music 161
guitar lessons given by 51
guitar playing and guitars owned 49, 168, 203, 218, 253, 286
hatred of critics 222–3
Heller's breach of contract court case against 194–5, 198
homosexuality in songs 147
influences 32, 39
joins the Primitives 49–50
literary influences 26, 41
mentoring other artists 137, 205–7, 214, 244
obsessive about sound recording 120, 279
passion for free jazz 32
persona 186
personal assistants 321
posthumously inducted into Rock 'n' Roll Hall of Fame (2015) 304
and reforming of Velvet Underground for European tour (1993) 284–95
relationship with RCA 207–8
songwriting 40–1, 50–1, 52, 101–2, 123
songwriting contract with Pickwick City Records 45–7, 49–50, 51, 55–6
songwriting styles and themes 101, 102, 118, 315
takes legal action to establish copyright over _Loaded_ songs 136
_Third Ear_ interview 128–9
and Velvet Underground _see_ Velvet Underground
view of punk 219
voice 79, 83, 158–9, 235, 267–8
**Personal Life**
acting 301–2, _302_
alienation of friends and colleagues 72, 205–6, 207, 212, 217, 218, 219, 243–4, 253, 266, 307
anxiety and panic attacks 5, 8, 20, 123
appearance 2, 25–6, 147, 179–80, _180_ , 318, 329
arrest of for trying to buy drugs 175, 252
attempts to stop drinking 240, 242, 250, 251, 253, 281, 329
and bicycles 300
bipolar disorder 211, 242, 247
Blairstown property 225–6, _226_ , 240–1, 258
and Buddhism 119, 342
buys property on Long Island 328–9
character traits and personality 2, 22–3, 27, 38–9, 46, 51, 60, 89, 89–90, 106, 109, 115, 217, 319
death and memorial service 340, 342–5
and death of pet dog (Lolabelle) 330–1
and death of Schwartz 85
and death of Warhol 260–1, 270–1
and diabetes 318
difficulty in working with 99, 141, 178, 208, 210–11, 213, 219, 223, 265–6, 267, 271
dislike of being recognized 241
dress style and image 72, 105, 148, 181
drinking 106, 121, 139, 156, 158, 162, 163, 164, 170, 210, 231–2, 236, 239–40, 241–2, 246
driving and vehicles acquired 30, 31, 34, 231
drug taking and impact on 12, 30–1, 39–40, 43, 45, 46, 52, 65, 86–7, 121, 162, 167, 169, 170, 171, 173, 175, 185, 186, 187, 189–90, 193, 200, 210, 213, 214, 216, 232
ending of marriage to Bettye and divorce 167
endorsement deals for American Express and Honda 255
family history 3–5
finances 198–200, 253, 255, 298
friendship with Havel 274–5
glam image 148
health problems 211, 318, 330, 332, 335, 338–9
hepatitis 43, 85, 93, 167, 335
heroin use 30, 40, 52, 167, 170
hiring of bodyguard 169, 281
insomnia 24
intellectual friends socialized with 303
interest in technology 217–18
interviews 224, 312
IQ 8
and Jewish roots 8, 33
lithium 242, 245, 247
liver problems and transplant 236, 242, 246, 335, 336, 336–9
lives at the Gramercy Hotel 199–200, 201–2
living in New York and love of Manhattan 51
loneliness felt 88, 120, 123, 309
loses driving licence 31, 58
Madame Tussaud's wax model of 194
mellowness in later years 17, 311, 338, 345
memory retention problems 17, 18
misogyny and hitting of women 80–1, 118, 163, 166, 213, 247, 258
motorcycling and acquisition of Harley-Davidsons 241, 252
nastiness and confrontational behaviour 14, 36, 50, 90, 190–1, 205, 221, 259, 266, 307
need to be in control 106, 113, 231, 266, 272
New York apartments rented and purchased 85–6, 90, 101, 120, 143, 146, 181, 205, 214, 223–4, 225, 255–6, 298, 310
non drafting of in Vietnam War due to mental record 51
not wanting children 37, 295
obituaries 340–1
obsession with music technology 279–80
pet dogs 28, 215, 310, 337
and photography 320
and pinball 238–9, 252
poetry writing and readings 11, 33–4, 41, 135, 138, 205
and politics 109, 224, 257, 269
and racism 220, 221
reaction to Warhol's shooting 112–13
relationship with Barbara Hodes 88–90, 137, 139, 173, 177, 182
relationship with Erin Clermont 37–8, 88, 155, 221–2, 227, 295–6
relationship with father 4, 10–11, 18, 19, 20, 28–9, 45, 177, 338
relationship with and marriage to Bettye Kronstad 118, 121, 137–8, 146, 155–8, _157_ , 162–3, 166, 170–1
relationship with and marriage to Laurie Anderson 283, 295–7, 305, 324, 325
relationship with and marriage to Sylvia Morales 228–9, 234, 236, 237–9, _237_ , 243, 247, 248, 249, 256, 280
relationship with Morrison 131–2, 136, 275–6, 285, 291
relationship with mother 5, 20, 28–9, 177, 238
relationship with Nico 69, 70, 72, 81, 88, 91–3
relationship with Rachel 182–4, 187, 189, 191, 194, 195, 203, 205, 215–16, 221–2, 229, 235
relationship and rift with Bowie 142, 147, 153, 179, 180–1, 233–4, 258, 311
relationship and rifts with Cale 100, 106, 108, 254, 271–2, 273, 287, 292
relationship and rift with Warhol 71, 73, 90, 98–9, 109–10, 113, 260–1, 263
relationship with Shelley Albin 26–30, 33–4, 36–7, 88, 118, 122
relationship with sister (Bunny) 6
Schwartz as mentor to 36, 40
second breakdown 135, 137, 139
separation and divorce from Sylvia 295, 296, 297
sexuality and sexual adventures 11, 18, 27–8, 29, 38, 54, 87–8, 90, 155–6, 185, 224–5, 229, 239, 248, 277–8, 280
short stories written 41
smoking 12, 211, 242
stealing of record collection 86
stops using hard drugs 241
and tai chi 240, 318
tastes 89
transvestites/transsexuals fascination 29, 87, 182, 184
violence and threatening people with knives 216–17
and Warhol's Silver Factory scene 64–6, 71, 87, 110, 138–9
weight 2, 26, 163, 179–80, 234
will 335
work ethic 71
**Solo Career**
album sales 172, 197, 232–3, 249, 271, 316
Alice Tully Hall shows (1973) 158–9
_Animal Serenade_ album 319
appearance with Bowie at Royal Festival Hall 148
Australian tour (1974) 185–6
_The Bells_ album 229–30
_Berlin_ album 164–6, 167, 172–3
_The Blue Mask_ album 242–9
Bottom Line gigs 222–3
collaboration with Cale in _Songs for Drella_ 263, 272–3
collaboration with Wilson on _Time Rocker_ 300–1, 308–9
_Coney Island Baby_ album 187, 195, 200, 201–5
contract with Arista Records 208–9, 210
debut album ( _Lou Reed_ ) 143–6
departure of Kronfeld as manager 265
diary for _New Yorker_ 309, 310–11
dropping of by Reprise 319
dropping of by Warner Brothers and moves to Reprise 308
and Dylan's thirty years in music celebration (1992) 282
earnings 154, 168
_Ecstasy_ album 314–15
effect of drugs on performance 190
ending of Arista contract 240
ending of relationship with Everyman Band and assembling of new band 244–6
estrangement between Quine and 250, 251–2, 253
European tours 167–72, 180–1, 188–94, 218, 251
Everyman Band and members 167–8, 173–4, 178, 188, 218, 230–1, 235, 243, 246
fans 279, 311
Fields becomes manager 140
and film music 301
_Growing Up in Public_ album 6, 234–6, 238, 239
Heller as manager and firing of 146, 160
Japanese tour (1975) 197–8
Katz as manager and court battle with 160, 188, 193, 194, 199, 200, 217, 219, 253
Kronfeld as manager 217
_Legendary Hearts_ album 250–1
and _Lou Reed's New York Shuffle_ radio show 326
_Lulu_ album and collaboration with Metallica 331–5
_Magic and Loss_ album and tour 277–9, 280, 281
meditation music 320–1
_Metal Machine Music_ album 195–7, 200, 201, 207, 327
_Mistrial_ album 257–8
_New Sensations_ album 252–3
_New York_ album 266–72, 285
performance of _Songs for Drella_ at Brooklyn Academy of Music (1989) 272–3
performs on _Saturday Night Live_ 258
plays in Czechoslovakia 274–5
Podell as manager 207, 212–3
popularity in Europe 192, 222, 239, 251, 313
Radio City Music Hall show (1986) 258
_The Raven_ album 316–17
RCA contracts 142, 243
reviews of albums and shows 145–6, 154–5, 158–9, 169, 172, 175, 197, 204–5, 220–1, 230, 239, 249, 271, 279, 317, 334
revival of _Berlin_ on stage 321–3
riot at Germany concert and arrest (1979) 232
riot at Madrid concert (1980) 239
riot at Rome concert (1975) 188–91
_Rock 'n' Roll Animal_ album 174, 175–6, 178, 187
_Rock 'n' Roll Heart_ album 210–12
_Rolling Stone_ cover 271
sacking of Steve Katz as producer and suing of Reed by 200–1
_Sally Can't Dance_ album 177–9, 184–5, 187, 201
Sarig as manager 319
_Set the Twilight Reeling_ album 305–7
signs with RCA 142–3
signs with Sanctuary 319–20
signs with Sire 265–6
songs co-written 229–30
_Songs for Drella_ album 99, 119, 263, 272–3, 276
stadium concerts supporting U2 (1987) 264–5
stage act 169, 214, 279, 336
_Street Hassle_ album 218–21
support act to the Who (1974) 181
touring and live shows 173–4, 215, 221, 249–50, 251, 252, 253, 278–9, 309, 309–10, 318, 324, 335–6
tours with Albarn's Gorillaz 330
_Transformer_ album and tour 146, 149–55, 162–3
US college tour 159
Reed, Margaret 'Bunny' (sister) _see_ Weiner, Margaret
Reed, Sidney Joseph (formerly Rabinowitz) (father) 3–4, 6, 7, 10, 18, 28–9, 45, 137, 177–8, 319
Reed, Sylvia (née Morales) 228–9, _262_
background 228–9
in charge of reformed Velvet Underground European tour 254, 285
managing of Reed's career 238, 265, 280, 295
relationship with and marriage to Reed 228–9, 234, 237–9, _237_ , 256, 280
separation and divorce from Reed 295, 296, 297
Reed, Toby (née Futterman) (mother) 4–5, 15–16, 18, 28, 137, 335, 340
Ren Guang-Yi, Master 318
Reprise 308, 317, 319
Ricard, René 87–8
Richards, Keith 124
Righteous Brothers 89
Ringe, Bob 143, 159, 189, 194
'Riptide' 306
Riviera Café (NY) 113, 117
Robinson, Lisa 139, 149, 186
Robinson, Richard 139, 144, 149
Rock, Mick 149, 338
'Rock Minuet, The' 314–15
'Rock 'n' Roll' 128, 290
_Rock 'n' Roll Animal_ (album) 174, 175–6, 178, 187
Rock 'n' Roll Hall of Fame 207, 304, 331
_Rock 'n' Roll Heart_ (album) 210–12
Rockwell, John 158–9, 172, 197, 222–3, 345
_Rolling Stone_ 84–5, 104
cover of Reed 271
interviews with Reed 158, 220, 271
reviews of albums 121–2, 154, 172, 204, 230, 239, 317, 334
Rolling Stones 124
_Romanticism_ (book of photos) 320
'Romeo Had Juliette' 168, 267
Ronson, Mick 148, 149, 150, 152, 153, 155
'Rooftop Garden' 250
Ross, Jeffrey 214–15, 216–17, 218, 269
Ross, Jonathan 322
Rossi, Randy 93
Rotten Rita _see_ Rapp, Kenneth
Roxy Music 98
Royal Festival Hall (London) 148
Rubin, Barbara 57, 61–2, 65, 67, 72
'Run Run Run' 80
Rushdie, Salman 303
Ruskin, Mickey 110
Rutgers University 72
Ryder, Mitch 137, 167
Sacher-Masoch, Leopold von 54
_Venus in Furs_ 54
'Sad Song' 122, 164, 165–6
St Anne's Warehouse 321
St Lawrence University (NY) 1
St Mark's in-the-Bowery (NY) 138
St Patrick's Cathedral (NY) 261
_Sally Can't Dance_ (album) 177–9, 184–5, 187, 201
'Sally Can't Dance' (song) 177
Sam and Dave 258
_San Francisco Chronicle_ 84
Sanctuary 319–20
Sarig, Tom 217–18, 319, 320, 321, 322, 326, 329–30, 331, 334
'Satellite of Love' 128, 150, 223
_Saturday Evening Post_ 57
_Saturday Night Live_ 258
Saunders, Fernando 244, 246, 250, 257, 264, 266, 317, 318, 338
'Save the Last Dance for Me' 277
'Scarborough Fair' 54, 56
Scarpantoni, Jane 314, 318, 324
Scepter Records 78
Schnabel, Julian 4, 263, 303, 310, 322, 339
Schunke, Manfred 218, 300
Schwartz, Delmore 34–6, 40, 85
'In Dreams Begin Responsibilities' 34
Scorsese, Martin 252
Scott, Jeanie 281, 328
Scott, Ken 149, 150
Scott, Little Jimmy 280–1, 328
Scritti Politti 267
Second Fret (Philadelphia) 123
Second World War 3
Sedgwick, Edie 65, 68, 70, 72
Selby Jr, Hubert 41, 102, 147
_Last Exit to Brooklyn_ 102
September 11th (2001) 317
Sesnick, Steve 100, 105, 111, 114–15, 127, 130, 133, 136
_Set the Twilight Reeling_ (album) 305–7
Sexton, Mike 'Coach' 293
Seymour (pet) 28, 136
Shades, the 9–10
Shakespeare, William 248
'Sheltered Life, A' 211
Shepard, Sam 252–3, 297
Sherry-Netherland (NY) 159
'She's My Best Friend' 187
Sigal, Richard 12–13, 14, 23–4, 29, 242
Silver Factory 63–5, 70–1, 87, 110, 138–9, 261
Simon, Paul 238, 343
Sims, Jimmie 45
Sire 265, 273, 307
'Sister Ray' 102–3, 251, 293
Sister Ray Enterprises 293, 321
Slater, Nelson 21, 22–3, 39, 65, 203, 205
_Wild Angel_ album 205
Slattery, James _see_ Darling, Candy
'Slide, The' (poem) 205
'Slip Away' 119
'Small Town' 272
Smashing Pumpkins 307
Smith, Patti 130–1, 207, 208, 343
_Horses_ album 207
Smith, Tony 'Thunder' 306, 315, 336
Snow, Mat 272
'So Alone' 235
'So Blue' 10
Society for Cutting Up Men (SCUM) 111
Solanas, Valerie 111, _112_ , 113
_Up Your Ass_ 111
'Some Kinda Love' 117–18, 251
Somerfield, Bruce 160, 186, 188, 197, 208
_Songs for Drella_ (album) 99, 119, 263, 272–3, 276
Sonny and Cher 82
'Soul Man' 258
_Sounds_ 239, 249
South Africa 257
Spector, Phil 44–5
Springsteen, Bruce 207, 220
_Darkness on the Edge of Town_ 220
Stadthalle (Offenbach, Germany) 232
'Standing On Ceremony' 236
_Star Trek_ 89
Stasiak, Corky 210–12, 234, 236, 250
Stein, Chris 98
Stein, Howard 174
Stein, Seymour 265
Steinberger, Ned 286
'Stephanie Says' 105, 164
Stevenson, Rosalind 92
Stewart, Rod 181
Stoecker, Karl 31–2, 154
Stonewall Inn (NY) 223–4
_Story of O, The_ (Réage) 9
'Stray Cat Blues' 124
_Street Hassle_ (album) 214, 218–21
'Street Hassle' (song) 220, 224, 264
Studio 54 251
'Such a Pretty Face' 214
Suchorsky, Michael 188, 235
Summer, Donna 220
Summit High (NJ) 59
'Sun City' 257
'Sunday Morning' 92, 93
_Sunday Telegraph_ 311
Superstar, Ingrid 72, 77
Supreme Court of New York 194, 217
Swados, Elizabeth 106
Swados, Lincoln 23, 36, 106, 107, 159, 250, 278
'Sweet Bonnie Brown' 12, 125–6
'Sweet Jane' 126, 128, 136, 148, 168–9, 191, 313
Sydney Festival 322
Synge, Casey 152, 152–3
Syracuse University 15, 21, 22, 24–5, 34, 36, 39, 40, 44, 85
'Take Me Home, Country Roads' (Denver) 141
Talking Heads 98, 202, 207
'Psycho Killer' 207
Teel, Bob 241, 277
Teel, Rita 240–1, 277
'Temporary Thing' 212
'Temptation Inside Your Heart' 105, 254
Thalia Theatre (Hamburg) 300, 316
'That's the Story of My Life' 119
Theater of Eternal Music 48
'There She Goes Again' 80
'Think It Over' 235
_Third Ear_ 128–9
Thompson, Ben 279
Thormahlen, Ernie 154
_Threepenny Opera_ (Brecht and Weill) 164
Thunder Thighs 152–3
'Tie a Yellow Ribbon round the Ole Oak Tree' (Dawn) 161
_Time Machine, The_ (Wells) 300
_Time Rocker_ 300–1, 308–9
_Times, The_ 271, 337
Titus, Judy 14
_Tonight Show_ 284
Tosches, Nick 154
Tots 149
'Trade In' 305
Tramontin, Bob 241
_Transformer_ (album) 128, 146, 147, 148, 149–55, 158, 160, 161, 162, 167, 172, 173, 174
Transformer Enterprises 175
_Transformer_ tour 162–3
Tretoads, the 9
Trident (studio) 149, 152, 338
Trip nightclub (Hollywood) 82–3
TTG 83, 117
Tucker, Jim 24
Tucker, Maureen 'Moe' 98, _126_ , 127, 178
'After Hours' 120
background 58
character 60–1, 65, 83
and death of Reed 342–3
drum playing 58, 60, 102, 118
joins Velvet Underground 58–9, 60
life after Velvet Underground 255
plays percussion on _New York_ album 271–2
pregnancy and birth of daughter 127, 131
and reformed Velvets European tour 287, 290, 290–1, 292
relationship with Reed 61, 131, 285
and reunion of Velvets 276, 290–1, 295
Turtle, the 65, 86
U2 98, 221, 264–5, 285
Zooropa tour 292
Ulrich, Lars 332
'Underneath the Bottle' 243
University of Michigan 73
University of Texas 178, 255
Vahanian, Professor Gabriel 32–3
Valentin, Val 121
Valets, the 9
Vance, Jerry 45
_Vanity Fair_ (Thackeray) 132
Velvet Revolution 274
Velvet Underground
acolytes of 97–8
album sales 97, 104, 122
albums of unreleased material 254
and _Andy Warhol, Up-tight_ show 71–2, 73
Aronowitz as manager 57, 59
Boston Tea Party gigs 99–100
Café Bizarre residency 59, 61–3
change in image 105, 116, 122
clubs played at 123–5
cutting out of Nico 100
debut album (Banana Album) 78–81, 83, 95–7, 98, 104
deepening disenchantment of 130
dress style 72
End of Cole Avenue (Dallas) gig 124–5
ending of reformed 294
Exploding Plastic Inevitable involvement 75–7, 81, 91, 92–3
failed West Coast expedition 82–5
firing of Warhol by Reed 99
first gig at Summit High School (1965) 57, 59
groups and singers influenced by 97–8, 124, 137
Gymnasium gig 98–9
Herliczer's film of 66, _66_
inducted into Rock 'n' Roll Hall of Fame (1995) 304
joining of by Maureen Tucker 58–9, 60
joining of by Nico 68–9
La Cave gig 116
last performance of 304
_Live MCMXCIII_ album 294
_Loaded_ album 127–9, 132, 136–7
managed by Warhol and Morrissey 62–3, 64–5, 68–9, 74, 78, 82, 90–1, 99
Matrix (San Francisco) gig 125–6
Max's Kansas City residency 130–1
MGM contracts 82, 100, 122
name 56
New York Society for Clinical Psychiatrists' dinner gig (1966) 67–8
original line-up 59–60
origins 51, 55, 56
_Peel Slowly and See_ box set 284
practice session tapes 56–7
and press 68, 94, 96
Reed's departure from and final performance with 131–2, 135, 137
reforming of for European tour (1993) 284–95, _288_
reissuing of back catalogue 122–3, 254
reunion at the Cartier event 275–6, 284
reviews of albums 96, 104, 121–2, 137
revival of interest in 254
and _Rolling Stone_ magazine 84–5
sacking of Cale and replacement of by Yule 113–15
scathing reviews 84–5
second album ( _White Light/White Heat_ ) 101–5, 274
Sesnick as manager 100, 105, 113, 114, 127, 130, 133
signs contract with Atlantic Records 126
signs contract with MGM 81–2
singles released 93, 104
small audiences and lack of initial success 123–4, 125
third album ( _The Velvet Underground_ ) 117–22
Trip club (Hollywood) gig 82–3
_Velvet Underground & Nico, The_ (Banana Album) 78–81, 83, 95–7, 98, 104
Velvet Underground Partnership 254
_Velvet Underground, The_ (album) 117–22
'Venus in Furs' 54, 56, 59, 77, 80, 97
Verve 93, 98, 121, 254
_Vexations_ (Satie) 48
'Vicious' 22, 147, 150, 168
Vietnam War 51, 68, 108
_Village Voice_ 74, 91, 96, 145, 221, 225, 271, 309, 334
_Vox_ 288
_VU_ (album) 254
Wagner, Dick 167, 168, 169, 178
'Wagon Wheel' 153
'Wait' 214, 219
Waits, Tom 300
Wakeman, Rick 144–5
Waldheim, Kurt 269
'Walk and Talk It' 144
'Walk on the Wild Side' 2–9, 22, 29, 150–3, 159–60, 161, 170, 209, 253, 264, 283, 337
Walliams, David 322
Walsh, Peter 168
Walters, Al 9
'Wanderer, The' 268
Wang, Wayne 302
Warhol, Andy 62–3, 68, 147, 174, 222
_Bike Boy_ 104
_Bitch_ 65
_The Chelsea Girls_ 88, 90, 100
death of 260–1, 270–1
declining interest in Velvets 92–3
drug taking 64
and Exploding Plastic Inevitable 74–6, 81
firing of by Reed 99
_Lupe_ 73
manages Velvet Underground 62–3, 64–5, 68–9, 74, 78, 82, 90–1, 99
memorial service 261–3
new studio on Union Square West 109, 110–11
_The Philosophy of Andy Warhol_ 199
_POPism_ 63
relationship and rift with Reed 71, 73, 90, 98–9, 109–10, 113, 260–1
shooting of by Solanas 111–14, _112_
Silver Factory 63–5, 110
and Velvets' debut album 95
_Vinyl_ 65
Warlocks (was Falling Spikes) 55, 56
Warner Brothers 265, 307, 307–8
Warvel 82
Wasserman, Rob 278, 324, 333
'Waves of Fear' 243
Wedekind, Frank 330, 332
Weiner, Bunny (Margaret Ellen) 5–6, 8, 10–11, 15, 16–18, 20, 85, 156, 179, 322, 335, 340, 341–2
Weiner, Harold 3, 179
Weis, Danny 178
Wembley Arena (London) 290, 291
Wembley Stadium (London) 264
Wenders, Wim 303
'We're Gonna Have a Real Good Time Together' 122, 289
'What Goes On' 117, 120
'What's Good' 277
Whent, Chris 254, 255, 276, 338
Whisky a Go Go (LA) 117, 123
_White Light/White Heat_ (album) 101–5, 274
'White Light/White Heat' (song) 101, 103–4
'Who Loves the Sun' 128
Who, the 181
'Wild Child' 131
Williams, Danny 72
Williams, Victoria 295
Willner, Hal 303, 323, 326, 339
Wilson, Robert 300–1, 303, 308, 309, 316, 330, 331
_The Black Rider_ 296
Wilson, Tom 82, 83, 92
Wiltern Theater (Los Angeles) 319
Wiltshire, Christine 219
Winwood, Steve 165
Witts, Richard 72
Wolfe, Tom 261
_Woman of the Year_ 3
'Women' 247, 249
Won-Ton 65
Woodlawn, Holly (Harold Ajzenberg) 151, 159–60, 202
Woodrose Ballroom 123
Woodstock Festival (1969) 123
'Work' 99
World's First Mod Wedding 93
Woronov, Mary 64, 69, 76–7, 81, 82, 83, 99, 100, 110, 120
_Swimming Underground_ 77
'Wrap Your Troubles in Dreams' 57
Xenakis, Iannis 48
Yaw, Bruce 188, 191, 192, 198, 199, 218
Yes 144
Young, La Monte 48, 49, 196
Young Rascals, The 23
'Your Love' 40
'You're Driving Me Insane' 46
'You've Lost that Loving Feeling' (Righteous Brothers) 89
Yule, Billy 127, 130
Yule, Doug 123, 126, _126_ , 130, 131, 137, 178, 304
background 114
and departure of Reed from Velvets 133
first gig played with the Velvets 116
joins Velvet Underground 113–15
plays in Reed's band 178, 188
promotion of as front man of Velvets 127, 134, 136
relationship with Reed 121, 127, 132, 133
voice 117
Zanetta, Tony 141, 142, 149
Zappa, Frank 82, 129
_We're Only in It for the Money_ 129
Zollo, Paul 314
Zorn, John 283
### About the Author
Howard Sounes is known for writing detailed and revelatory biographies of the musicians Bob Dylan ( _Down the Highway_ ) and Paul McCartney ( _Fab_ ) as well as other extraordinary personalities: the murderers Fred and Rosemary West ( _Fred & Rose_) and the American poet Charles Bukowski ( _Locked in the Arms of a Crazy Life_ ). Each book is based on extensive original research.
For more information visit www.howardsounes.com.
### _Also by Howard Sounes_
Amy, 27
Fab: An Intimate Life of Paul McCartney
Heist
Seventies
The Wicked Game
Down the Highway: The Life of Bob Dylan
Bukowski in Pictures
Charles Bukowski: Locked in the Arms of a Crazy Life
Fred & Rose
For more information about Howard Sounes and his books, visit his website at www.howardsounes.com
TRANSWORLD PUBLISHERS
61–63 Uxbridge Road, London W5 5SA
www.transworldbooks.co.uk
Transworld is part of the Penguin Random House group of companies whose addresses can be found at global.penguinrandomhouse.com
First published in Great Britain in 2015 by Doubleday
an imprint of Transworld Publishers
Copyright © Howard Sounes 2015
Howard Sounes has asserted his right under the Copyright, Designs and Patents Act 1988 to be identified as the author of this work.
Every effort has been made to obtain the necessary permissions with reference to copyright material, both illustrative and quoted. We apologize for any omissions in this respect and will be pleased to make the appropriate acknowledgements in any future edition.
A CIP catalogue record for this book is available from the British Library.
Version 1.0 Epub ISBN 9781473508958
ISBNs 9780857522665 (hb)
9780857522672 (tpb)
This ebook is copyright material and must not be copied, reproduced, transferred, distributed, leased, licensed or publicly performed or used in any way except as specifically permitted in writing by the publishers, as allowed under the terms and conditions under which it was purchased or as strictly permitted by applicable copyright law. Any unauthorized distribution or use of this text may be a direct infringement of the author's and publisher's rights and those responsible may be liable in law accordingly.
1 3 5 7 9 10 8 6 4 2
### Contents
1. Cover
2. About the Book
3. Contents
4. Title Page
5. I Coney Island Baby, 1942-59
6. II On to the Darkened Sea, 1960-4
7. III Honeybun, Black Jack, Sterl and Moesy, 1964-5
8. IV The Exploding Plastic Inevitable, 1966
9. V Light and Dark, 1967-8
10. VI A New VU, 1968-70
11. VII Solo in the Seventies, 1970-3
12. VIII Self-parody, 1973-4
13. IX Howling like the Devil, 1975-6
14. X The Arista Years, 1976-80
15. XI Second Marriage, 1980-7
16. XII New Inspiration, 1987-92
17. XIII Return to the Velvet Underground, 1992-6
18. XIV Love, Lou, 1996-2008
19. XV Nevermore, 2008-13
20. Picture Section
21. Source Notes
22. Bibliography
23. Author's Note and Acknowledgements
24. Picture Acknowledgements
25. Index
26. About the Author
27. Also by Howard Sounes
28. Copyright
Q: I want to use cin or cout after activating the buffer. What should I do?

I am trying to implement double buffering with the Windows console, but I haven't got it working yet.

Can I take input after activating the first buffer? If I read input after activating the first buffer, I get an error (0xc0000142).

I want to use cout or cin after calling the SetConsoleActiveScreenBuffer function. How do I do this?
#include <windows.h>
#include <cstring>
#include <iostream>
using namespace std;
int main()
{
HANDLE mhBuffer[2];
int mCurrentBufferIndex = 0;
COORD mSize;
mSize.X = 120;
mSize.Y = 30;
CONSOLE_CURSOR_INFO cci;
SMALL_RECT rect;
rect.Left = 0;
rect.Right = mSize.X - 1;
rect.Top = 0;
rect.Bottom = mSize.Y - 1;
//CreateBuffer
mhBuffer[0] = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE, 0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL);
SetConsoleScreenBufferSize(mhBuffer[0], mSize);
SetConsoleWindowInfo(mhBuffer[0], TRUE, &rect);
mhBuffer[1] = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE, 0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL);
SetConsoleScreenBufferSize(mhBuffer[1], mSize);
SetConsoleWindowInfo(mhBuffer[1], TRUE, &rect);
cci.dwSize = 1;
cci.bVisible = FALSE;
SetConsoleCursorInfo(mhBuffer[0], &cci);
SetConsoleCursorInfo(mhBuffer[1], &cci);
//Write to Buffer
DWORD dw;
COORD CursorPosition = { 0,0 };
SetConsoleCursorPosition(mhBuffer[mCurrentBufferIndex], CursorPosition);
char str[] = "buffer";
WriteFile(mhBuffer[mCurrentBufferIndex], str, strlen(str), &dw, NULL);
Sleep(33);
SetConsoleActiveScreenBuffer(mhBuffer[mCurrentBufferIndex]);
int b = 0;
cin >> b;
}
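The swap-based double-buffering pattern the code above is attempting can be sketched independently of the WinAPI. This is an illustrative model only (the `DoubleBuffer` class and its names are invented for the example), not the Windows console API itself:

```python
# A minimal, platform-independent sketch of double buffering: render into a
# hidden "back" buffer, then make it visible in a single swap step.

class DoubleBuffer:
    def __init__(self, width, height):
        # two text buffers; index 0 starts as the visible (front) buffer
        self.buffers = [[" " * width for _ in range(height)],
                        [" " * width for _ in range(height)]]
        self.front = 0  # index of the currently visible buffer

    def back(self):
        # the hidden buffer that drawing goes into
        return self.buffers[1 - self.front]

    def draw(self, row, text):
        # write into the back buffer only; the front buffer stays untouched
        buf = self.back()
        buf[row] = (text + " " * len(buf[row]))[:len(buf[row])]

    def swap(self):
        # make the back buffer visible; in the WinAPI version this is the
        # moment you would call SetConsoleActiveScreenBuffer on the handle
        # you just drew to
        self.front = 1 - self.front

db = DoubleBuffer(10, 2)
db.draw(0, "buffer")
assert db.buffers[db.front][0] == " " * 10            # nothing visible yet
db.swap()
assert db.buffers[db.front][0].startswith("buffer")   # now visible
```

After each swap, the next frame is drawn into the other buffer, so the user never sees a half-drawn frame.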
#import "PBJVideoView.h"

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@implementation PBJVideoView
+ (Class)layerClass
{
return [AVPlayerLayer class];
}
#pragma mark - getters/setters
- (void)setPlayer:(AVPlayer *)player
{
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
- (AVPlayer *)player
{
return [(AVPlayerLayer *)[self layer] player];
}
- (AVPlayerLayer *)playerLayer
{
return (AVPlayerLayer *)self.layer;
}
- (void)setVideoFillMode:(NSString *)videoFillMode
{
[self playerLayer].videoGravity = videoFillMode;
}
- (NSString *)videoFillMode
{
return [self playerLayer].videoGravity;
}
#pragma mark - init
- (instancetype)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
self.playerLayer.backgroundColor = [[UIColor blackColor] CGColor];
}
return self;
}
@end
<html>
<head>
<title>
Arrogance of American empire
</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<?php include "../../legacy-includes/Script.htmlf" ?>
</head>
<body bgcolor="#FFFFCC" text="000000" link="990000" vlink="660000" alink="003366" leftmargin="0" topmargin="0">
<table width="744" cellspacing="0" cellpadding="0" border="0">
<tr><td width="474"><a name="Top"></a><?php include "../../legacy-includes/TopLogo.htmlf" ?></td>
<td width="270"><?php include "../../legacy-includes/TopAd.htmlf" ?>
</td></tr></table>
<table width="744" cellspacing="0" cellpadding="0" border="0">
<tr><td width="18" bgcolor="FFCC66"></td>
<td width="108" bgcolor="FFCC66" valign=top><?php include "../../legacy-includes/LeftButtons.htmlf" ?></td>
<td width="18"></td>
<td width="480" valign="top">
<?php include "../../legacy-includes/BodyInsert.htmlf" ?>
<P><font face="Arial, Helvetica, sans-serif" size="2"><b>WHAT WE THINK</b></font><br>
<font face="Times New Roman, Times, serif" size="5"><b>Arrogance of American empire</b></font></P>
<p><font face="Arial, Helvetica, sans-serif" size="2">June 6, 2003 | Page 3</font></p>
<P><font face="Times New Roman, Times, serif" size="3">WHEN JULIUS Caesar conquered Gaul, he parlayed his victory over what is now modern-day France into becoming leader of the Roman Empire. Today's would-be emperor, George W. Bush, also plans to succeed at France's expense--not only because France opposed the U.S. invasion of Iraq, but to keep Europe divided and Washington on top of the world. "Punish France, ignore Germany, forgive Russia: that was the succinct reaction by Condoleezza Rice, America's National Security Adviser, to those countries' opposition to the second Gulf war," observed London's <I>Daily Telegraph.</I> </P>
<P>In its coverage of the Group of Eight (G8) summit in Evian, France, the U.S. media analyzed the warmth of Bush's handshakes with world leaders like society gossip columnists reporting on who's "in" or "out." After meeting with French President Jacques Chirac, Bush snubbed his host by leaving Evian early to attend a Middle East summit in Egypt.</P>
<P>Coming from the same White House image-makers who came up with Bush's prime-time TV landing on an aircraft carrier last month, the message couldn't be clearer: We run the world, and you'd better know your place. As Rice said at a Washington press conference before the Evian trip, "It isn't the power of the United States that needs to 'be checked.' It's the power of the United States that needs to work cooperatively with others who share the same values to achieve common goals."</P>
<P>Or as the <I>Financial Times </I>put it: "U.S. vision requires 'old Europe' to toe the line." By playing up "new Europe" countries like Poland at the expense of old allies Germany and France, Washington wants to ensure that the European Union doesn't coalesce into a political and military rival or an economic competitor.</P>
<P>Meanwhile, the world economy remains mired in stagnation and slump, and the leaders of G8--created in the 1970s to forge common economic policies among the world's most powerful countries--didn't have the least answer to offer. But if the other G8 players rankle at U.S. domination, they share Washington's goal of dominating the world's poor and developing nations.</P>
<P>While Bush made much of the U.S. pledge of $15 billion to fight AIDS at the global level--an amount that France's Chirac promised to match--the real impact remains to be seen. After all, last year's G8 meeting included a promise of $100 billion in debt relief--a number greatly inflated by money that was already committed. </P>
<P>This year, G8 leaders again gave lip service to the problem of Third World debt--but will continue to tighten the screws on poor countries through the International Monetary Fund. The G8 leaders' real views on global justice were reflected in the vicious police crackdown on protesters, who were forced to gather across the border in Switzerland after being banned in France.</P>
<P>The rivalries among G8 leaders are about whether Washington can rule the world alone--not whether the world should be ruled by a handful of the most powerful governments. The real opposition to empire won't be found in presidential palaces or at summit meetings, but in protests and struggle from below.</P>
<?php include "../../legacy-includes/BottomNavLinks.htmlf" ?>
<td width="12"></td>
<td width="108" valign="top">
<?php include "../../legacy-includes/RightAdFolder.htmlf" ?>
</td>
</tr>
</table>
</body>
</html>
It's scary to think how good this team could be in five years' time.
This outstandingly good XI only comprises players aged 21 and under, but would no doubt still be a match for virtually any club side.
Thibaut Courtois starts in goal after two brilliant seasons on loan at Atletico Madrid, although he will surely return to Chelsea to take Petr Cech's goalkeeping jersey at some point.
Phil Jones starts at right-back, after a £16m move to Manchester United in 2011. Sir Alex Ferguson said at the end of last season that he could go on to become one of United's greatest ever players. High praise indeed!
Bayern Munich's David Alaba takes the other fullback berth, after starring in the German champion's brilliant treble-winning side last term.
Manchester City's Matija Nastasic gets into the side after a string of superb performances helped him displace Joleon Lescott in City's defence, although Roma's Marquinhos is knocking loudly on the door. Real Madrid's Raphael Varane also starts, after displaying his immense potential with a string of composed Champions League performances last season.
Jack Wilshere and Marco Verratti offer fantastic touch, technical ability and a passing range that suits modern football. These two are sure to dominate European midfields for the next decade.
Further up the field, Neymar walks into the side, as he probably would into a regular World XI. Wonderful young attackers Isco and Mario Gotze play alongside him, just behind Romelu Lukaku, who scored 17 goals in the Premier League last season at just 19 years of age.
– This starting XI has already moved for a combined fee of £161m in their careers, which is even more remarkable when you consider Varane, Alaba, Courtois, Wilshere and Veratti have yet to move for huge money.
– Encouragingly from an English perspective, we have two representatives, as do Belgium, whereas Brazil, Germany, Spain, France, Italy, Austria and Serbia all have one each.
– The side features five Premier League players (although Courtois has been on loan in Spain for two seasons), two Bundesliga players, three La Liga players and Marco Verratti from Ligue 1.
– Bayern Munich own two of these players, as do Chelsea and Real Madrid, whereas Man United, Man City, Arsenal, PSG and Barcelona all own one each.
– No Serie A players have been selected, although Juventus midfielder Paul Pogba, and AC Milan forward Stephan El Shaarawy both have legitimate claims for inclusion following brilliant seasons.
– Other players very unlucky to miss out include £30m plus men James Rodriguez and Lucas Moura.
Q: SQL Group By Question

I have a table which tracks views of products.
TrackId ProductId CreatedOn
1 1 01/01/2011
2 4 01/01/2011
3 4 01/01/2011
4 10 01/01/2011
What I want to do is return a dataset which doesn't have the same ProductId in two adjacent rows, i.e. from the above data set I would want to return:
TrackId ProductId CreatedOn
1 1 01/01/2011
2 4 01/01/2011
4 10 01/01/2011
I can't use DISTINCT as far as I am aware, as that operates on whole rows?
Help appreciated.
A: Generate a row number sequence per ProductID, take the first
;WITH cte AS
(
SELECT
*,
ROW_NUMBER() OVER (PARTITION BY ProductID ORDER BY TrackID) AS rn
FROM
MyProductTable
)
SELECT
    TrackId, ProductId, CreatedOn
FROM
cte
WHERE
rn = 1
Edit:
If you want to use an aggregate, you need a separate subquery first to ensure consistent results. A straight MIN won't work.
This is based on my comment to the question
"not having productid in two adjacent rows. Adjacent is defined by next/previous Trackid"
SELECT
M.*
FROM
myProductTable M
JOIN
( --gets the lowest TrackID for a ProductID
SELECT ProductID, MIN(TrackID) AS MinTrackID
FROM myProductTable
GROUP BY ProductID
) M2 ON M.ProductID = M2.ProductID AND M.TrackID = M2.MinTrackID
A: You can GroupBy on the TrackID and ProductID and do a Min of the CreatedOn if the date is not important.
SELECT TrackID ,ProductID ,MIN(CreatedOn)
FROM [table]
GROUP BY TrackID ,ProductID
If the date is the same you can group by all three
SELECT TrackID ,ProductID ,CreatedOn
FROM [table]
GROUP BY TrackID ,ProductID ,CreatedOn
A: select min(TrackId), ProductId, CreatedOn
from YourTable
group by ProductId, CreatedOn;
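For a quick sanity check, the MIN(TrackId)-per-ProductId approach can be reproduced with Python's built-in sqlite3 module (a sketch; the table and column names follow the question):

```python
import sqlite3

# Build an in-memory copy of the table from the question.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE MyProductTable (TrackId INTEGER, ProductId INTEGER, CreatedOn TEXT)")
rows = [(1, 1, '01/01/2011'), (2, 4, '01/01/2011'),
        (3, 4, '01/01/2011'), (4, 10, '01/01/2011')]
conn.executemany("INSERT INTO MyProductTable VALUES (?, ?, ?)", rows)

# Lowest TrackId per ProductId, joined back to recover the full row.
result = conn.execute("""
    SELECT M.TrackId, M.ProductId, M.CreatedOn
    FROM MyProductTable M
    JOIN (SELECT ProductId, MIN(TrackId) AS MinTrackId
          FROM MyProductTable
          GROUP BY ProductId) M2
      ON M.ProductId = M2.ProductId AND M.TrackId = M2.MinTrackId
    ORDER BY M.TrackId
""").fetchall()
print(result)  # [(1, 1, '01/01/2011'), (2, 4, '01/01/2011'), (4, 10, '01/01/2011')]
```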
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,528 |
{"url":"https:\/\/csharp-book.softuni.org\/Content\/Chapter-5-1-loops\/what-we-learned\/what-we-learned.html","text":"## What We Learned in This Chapter?\n\nWe can repeat a code block using a for loop:\n\nWe can read a sequence of n numbers from the console:","date":"2019-01-17 12:06:04","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5574701428413391, \"perplexity\": 1723.7965007743092}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-04\/segments\/1547583658928.22\/warc\/CC-MAIN-20190117102635-20190117124635-00058.warc.gz\"}"} | null | null |
{"url":"http:\/\/mathhelpforum.com\/differential-geometry\/145016-logarithm-print.html","text":"# logarithm\n\n\u2022 May 16th 2010, 11:15 AM\nsandy\nlogarithm\ni really need help in this , i have no idea at all , help!\n\nUse logarithms to find all solutions of the following equations\n(a)\ne^z= e\n\n(b) e^z= e^(-z)\n\u2022 May 16th 2010, 11:46 AM\nchisigma\na) $e^{z} = e \\rightarrow z= 1 + 2 k \\pi i$\n\nb) $e^{z} = e^{-z} \\rightarrow z = k \\pi i$\n\n... where in both cases k is an integer...\n\nKind regards\n\n$\\chi$ $\\sigma$","date":"2015-06-03 09:02:50","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 4, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9450161457061768, \"perplexity\": 3596.6799543226443}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-22\/segments\/1433195036641.11\/warc\/CC-MAIN-20150601214356-00094-ip-10-180-206-219.ec2.internal.warc.gz\"}"} | null | null |
Lan Jü (; † 22 March 1393) was a Chinese military commander, one of the leading generals of Chung-wu, founder and first emperor of the Ming dynasty. His military talent and the patronage of his relative, general Čchang Jü-čchun, secured him a high position in the Ming army. During the 1380s he rose among the foremost generals of the empire, but in 1393 he fell from favor, was accused of conspiracy and an attempted coup, and was executed. His family and a wide circle of relatives and subordinates were executed as well; the purge claimed several thousand victims.
Life
Lan Jü came from Ting-jüan (in today's province of An-chuej). His elder sister married Čchang Jü-čchun, in the 1360s the second most important general of Ču Jüan-čang, who during the Red Turban rebellion against the Jüan dynasty was building a state of his own. Lan Jü served as an officer in Čchang Jü-čchun's army and advanced quickly during the 1360s.
In 1371 he was a general in the army of Fu Jou-te, which, on the orders of Ču Jüan-čang, from 1368 emperor of the Ming dynasty, took part in the conquest of S'-čchuan. The following year he was transferred to the army of general Sü Ta, which marched north from Šan-si against the Mongol commander Kökö Temür. In April 1372, at the head of an independent unit, he defeated Kökö Temür on the Tula River. He then continued to serve in the north. In 1374 he led an expedition against the Mongols gathered north of Kalgan. In 1375 he took part in the defense of Jen-an against the Mongols.
In November 1378 the emperor appointed him deputy to Mu Jing in the campaign against the Tibetans in Kan-su. In October 1379 the enemy was defeated, the generals returned to the capital, and in December 1379 the emperor rewarded twelve of them with noble titles (most were later executed as members of Lan Jü's conspiracy). He became marquis of Jung-čchang (, Jung-čchang chou) with an income of 2,000 tan (about 119 tons) of grain a year.
From September 1381, as deputy to general Fu Jou-te, he took part in the conquest of Jün-nan. After the Ming forces defeated the Jüan army in the first phase of the campaign in January 1382, Lan Jü marched at the head of an independent detachment on Ta-li, captured it, and gained control over northwestern Jün-nan. As a reward the emperor raised his income to 2,500 tan and consented to the marriage of his daughter to the emperor's eleventh son Ču Čchun, Prince of Šu.
In September 1385 Feng Šeng, with two deputies, Lan Jü and Fu Jou-te, took over the army in Peking. After extensive preparations, in January 1387 they received orders to suppress the Mongol forces in southern Manchuria. Lan Jü led the vanguard and defeated part of the Mongols; in July the main Ming forces defeated the enemy army and captured its commander Nagaču. In September 1387 Feng Šeng was recalled for unsatisfactory conduct during the campaign and Lan Jü took over the army, placing his headquarters east of Peking. In November 1387 he was ordered to attack the main Mongol forces led by khan Togus Temür. By mid-May 1388, 150,000 Ming soldiers had marched across the Gobi to northeastern Mongolia. At lake Bujr núr, 500 miles north of Peking, they surprised the Mongols. Togus Temür fled the battle, but Lan Jü's soldiers took tens of thousands of prisoners and on the way back defeated the Mongol general Qarajang. After the return to Nanking in September 1388, Lan and his subordinates were richly rewarded; on 19 January 1389 he became duke of Liang () with an income of 3,000 tan. He was only the third duke created after 1370. Lan represented the generation of generals who earned their merit only after the civil wars of the 1360s.
In March he was sent to S'-čchuan; the next year he suppressed revolts in southwestern Chu-kuang. After his return to the capital in September 1390 his income was raised to 3,500 tan. With two other dukes and several marquises he was sent in April 1391 to Šen-si to command the frontier army.
In February 1392 the emperor removed a number of influential generals from command, among them Lan Jü, Li Ťing-lung and Čchang Šeng. The emperor had by then grown distrustful of the military elite as a whole, but Lan Jü, despite occasional tactless behavior, still retained his favor. In March he became commander in Lan-čou, where he fought the Mongols. Meanwhile in S'-čchuan the commander of one regiment, Jelü Temür, won the support of the local non-Chinese tribes and rebelled. Loyal troops were unable to stop him, so Lan Jü was sent against him with his army. The rebels were defeated before his arrival (in July 1392) and their commander was captured at the end of the year. As a precaution against a repetition of the rebellion, Lan Jü proposed resettling a larger number of military colonists in S'-čchuan. But at that time the government was struggling, without success, against the abuse of soldiers and military colonists by their commanders. The angered emperor therefore recalled Lan Jü to Nanking in December 1391.
In August 1392 a purge began among the military commanders. Čou Te-sing, marquis of Ťiang-sia, was accused of association with Chu Wej-jung (a politician executed in 1380) and executed, and in September the same fate befell Jie Šeng, marquis of Ťing-ning and a relative of Lan Jü. At the same time several dukes and marquises were removed from command and sent to their estates.
At the end of 1392 Lan Jü was sent to the northwest with the task of crushing the Mongol uprising of Orlug Temür; he defeated the rebels in December 1392. After his return to Nanking he demanded levies of local peasants for a further campaign in the west. The emperor refused and relieved him of command. Lan Jü took the dismissal hard, and when in January 1393, as partial compensation, he received an honorary title in the household of the heir apparent, he complained loudly that his colleagues Feng Šeng and Fu Jou-te held titles one grade higher, which only deepened the emperor's aversion.
The emperor's distrust of his often arrogant generals was also fed by Ču Ti, the emperor's fourth son, who was making his name as a capable commander. Ču Ti had long been on bad terms with Lan Jü, regarding him as dangerous (to himself) because of his family connection with the crown prince. Seeking to strengthen his own position, he directed the emperor's suspicion at the generals. According to the historian Wang Š'-čen (1526–1590), Ču Ti bore the main responsibility for Lan Jü's arrest and execution. He was also linked with the unexplained deaths of Feng Šeng and Fu Jou-te at the turn of 1394 and 1395.
In the first months of 1393 the secret police arrested several of Lan Jü's former subordinates and extracted testimony against him under torture. In February 1393 four of the emperor's sons were dispatched to the northern frontier, three of them in such haste that their residences were not yet built. In March Lan Jü was arrested together with a large number of subordinates and allies, accused of conspiracy and preparing a revolt, and executed on 22 March 1393.
In the subsequent purges some 20,000 people were executed, including one duke and fourteen marquises. In the purges of the first half of the 1390s the emperor destroyed the military nobility. The resulting power vacuum was filled by people personally tied to the emperor, above all his sons. It is possible that the removal of many influential and distinguished men was meant to ease a smooth transfer of power to the heir apparent.
Links
Notes
References
Ming warriors
Ming aristocrats
Chinese generals
Chinese dukes
Executed people
Births in An-chuej
14th-century births
Deaths in 1393
Men | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,811 |
Q: Silverlight DataForm control with RIA ( i.e. Display(Description=....)] I want to change the description of the labels on my Silverlight DataForm, which currently show my table field names (dbEmailAddress).
<StackPanel Grid.Row="0" Grid.Column="1">
<dataFormToolkit:DataForm x:Name="dataForm1"
CurrentItem="{Binding SelectedItem, ElementName=dgLeagues}"
Header="Product Details"
>
</dataFormToolkit:DataForm>
I know I can get around this by adding DataFields programmatically in the xaml, but is there a way to add the attributes in the RIA class (in the web application) so it filters through, something like this in the RIA domain metadata file.
[Display(Name = "Email Address:",
Description="We do not sell your information!")]
public string EmailAddress { get; set; }
-would this work?
Also, if it would, then would this approach be useless, as a recompile of the RIA domain service metadata file means I would lose any changes since it's generated?
Thanks,
jason
A: What you have done is actually fine. To avoid redoing it every time you recompile, you can add it to your metadata.
[Display(Name = "Email Address:",
Description="We do not sell your information!")]
public string EmailAddress = null;
When you create your DomainService you can add a related metadata class - if you don't have one you can easily create it manually. The metadata class is designed to hold exactly the information you describe.
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer
{
private static class CustomerMetadata
{
[Required]
[Display(Name = "Email Address:",
Description = "We do not sell your information!")]
public string EmailAddress = null;
}
}
Remember to name the metadata file customer.metadata.cs, or whatever your class is called; it's important to postfix it with *.metadata.cs. It's a good idea to put your metadata file in the same folder as your DomainService.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 304 |
'''Library to support sampling points, creating routes between them and pivots
along the path and near the goal.'''
from typing import Sequence, Optional, Dict, Any
from absl import logging
import geopandas as gpd
from geopandas import GeoDataFrame, GeoSeries
import inflect
import multiprocessing
from multiprocessing import Semaphore
import numpy as np
import networkx as nx
import os
import osmnx as ox
import pandas as pd
import random
from shapely.ops import nearest_points
from shapely.geometry.point import Point
from shapely.geometry import LineString
import sys
from cabby.geo import util
from cabby.geo.map_processing import map_structure
from cabby.geo import geo_item
from cabby.geo import osm
SMALL_POI = 4 # Less than 4 S2Cellids.
SEED = 1
MAX_SEED = 2**32 - 1
SAVE_ENTITIES_EVERY = 100
MAX_BATCH_GEN = 100
MAX_BATCH_GEN = MAX_BATCH_GEN if MAX_BATCH_GEN<SAVE_ENTITIES_EVERY else SAVE_ENTITIES_EVERY
MAX_PATH_DIST = 2000
MIN_PATH_DIST = 200
NEAR_PIVOT_DIST = 80
ON_PIVOT_DIST = 10
# The max number of failed tries to generate a single path entities.
MAX_NUM_GEN_FAILED = 10
PIVOT_ALONG_ROUTE_MAX_DIST = 0.0007
MAX_NUM_BEYOND_TRY = 50
N_AROUND_PIVOTS = 10
N_MAIN_PIVOTS = 15
main_pivots = [f"main_pivot_{n}" for n in range(2, N_MAIN_PIVOTS+1)]
around_pivots = [f"around_goal_pivot_{n}" for n in range(1, N_AROUND_PIVOTS+1)]
LANDMARK_TYPES = [
"end_point", "start_point", "main_pivot"] + main_pivots + [
"near_pivot", "beyond_pivot"] + around_pivots + ['main_near_pivot']
FEATURES_TYPES = ["cardinal_direction",
"spatial_rel_goal",
"spatial_rel_pivot",
"intersections",
"goal_position",
"spatial_rel_main_near"]
inflect_engine = inflect.engine()
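As a quick illustration (not part of the library), the list comprehensions above expand to names like `main_pivot_2` through `main_pivot_15` and `around_goal_pivot_1` through `around_goal_pivot_10`, so `LANDMARK_TYPES` ends up with 30 entries:

```python
# Sketch: reproduce the pivot-name lists defined above and check their sizes.
N_AROUND_PIVOTS = 10
N_MAIN_PIVOTS = 15
main_pivots = [f"main_pivot_{n}" for n in range(2, N_MAIN_PIVOTS + 1)]
around_pivots = [f"around_goal_pivot_{n}" for n in range(1, N_AROUND_PIVOTS + 1)]
landmark_types = (["end_point", "start_point", "main_pivot"] + main_pivots
                  + ["near_pivot", "beyond_pivot"] + around_pivots
                  + ["main_near_pivot"])
print(len(main_pivots), len(around_pivots), len(landmark_types))  # 14 10 30
```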
class Walker:
def __init__(self, map: map_structure.Map, rand_sample: bool = True):
#whether to sample randomly.
self.rand_sample = rand_sample
self.map = map
def compute_route_from_nodes(self,
origin_id: str,
goal_id: str,
graph: nx.MultiDiGraph,
nodes: GeoDataFrame) -> Optional[GeoDataFrame]:
'''Returns the shortest path between a starting and end point.
Arguments:
origin_id: The node id of the origin point.
goal_id: The node id of the destination point.
graph(nx.MultiDiGraph): The directed graph class that stores multiedges.
nodes(GeoDataFrame): The GeoDataFrame of graph nodes.
Returns:
A sequence of Points which construct the geometry of the path.
'''
# Get shortest route.
try:
route = nx.shortest_path(graph, origin_id, goal_id, 'length')
except nx.exception.NetworkXNoPath:
logging.info("No route found for the start and end points.")
return None
route_nodes = nodes[nodes['osmid'].isin(route)]
# Create the dictionary that defines the order for sorting according to
# route order.
sorterIndex = dict(zip(route, range(len(route))))
# Generate a rank column that will be used to sort
# the dataframe numerically
sorted_nodes = route_nodes['osmid'].map(sorterIndex)
route_nodes = route_nodes.assign(sort=sorted_nodes)
route_nodes = route_nodes.sort_values(['sort'])
return route_nodes
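The route-ordering trick used above (map each osmid to its rank in `route`, then sort by that rank) can be sketched with plain Python lists; the ids below are hypothetical:

```python
# Sketch of the route-ordering trick: restore travel order for nodes that
# were fetched in arbitrary order.
route = [42, 7, 99, 13]   # osmids in travel order
nodes = [13, 42, 7, 99]   # nodes fetched in arbitrary order

# Dictionary defining the sort order: osmid -> position along the route.
sorter_index = dict(zip(route, range(len(route))))

# Sorting the fetched nodes by that rank restores route order.
ordered = sorted(nodes, key=sorter_index.get)
print(ordered)  # [42, 7, 99, 13]
```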
def compute_route_from_points(self,
start_point: Point,
end_point: Point,
graph: nx.MultiDiGraph,
nodes: GeoDataFrame) -> Optional[GeoDataFrame]:
'''Returns the shortest path between a starting and end point.
Arguments:
start_point(Point): The lat-lng point of the origin point.
end_point(Point): The lat-lng point of the destination point.
graph(nx.MultiDiGraph): The directed graph class that stores multiedges.
nodes(GeoDataFrame): The GeoDataFrame of graph nodes.
Returns:
A sequence of Points which construct the geometry of the path.
'''
# Get closest nodes to points.
orig = ox.get_nearest_node(graph, util.tuple_from_point(start_point))
dest = ox.get_nearest_node(graph, util.tuple_from_point(end_point))
# Get shortest route.
try:
route = nx.shortest_path(graph, orig, dest, 'length')
except nx.exception.NetworkXNoPath:
logging.info("No route found for the start and end points.")
return None
route_nodes = nodes[nodes['osmid'].isin(route)]
# Create the dictionary that defines the order for sorting according to
# route order.
sorterIndex = dict(zip(route, range(len(route))))
# Generate a rank column that will be used to sort
# the dataframe numerically
route_nodes['sort'] = route_nodes['osmid'].map(sorterIndex)
route_nodes = route_nodes.sort_values(['sort'])
return route_nodes
def get_generic_tag(self, poi: pd.Series) -> Optional[str]:
'''Selects a non-specific tag (e.g., museum instead of "Austin Museum of
Popular Culture") instead of a POI.
Arguments:
poi: The POI to select a non-specific tag for.
Returns:
A non-specific tag.
'''
for tag, addition in osm.NON_SPECIFIC_TAGS.items():
if tag not in poi or not isinstance(poi[tag], str):
continue
if addition == True:
return tag
tag_value = poi[tag]
tag_value_clean = tag_value.replace("_", " ")
if tag_value in osm.CORRECTIONS:
tag_value_clean = osm.CORRECTIONS[tag_value]
if tag_value_clean in ['yes', 'no']:
continue
if addition == 'after':
new_tag = tag_value_clean + " " + tag
elif addition == "before":
new_tag = tag + " " + tag_value_clean
elif addition == False:
new_tag = tag_value_clean
elif tag_value not in addition:
continue
else:
new_tag = tag_value_clean
if new_tag in osm.CORRECTIONS:
new_tag = osm.CORRECTIONS[new_tag]
if new_tag in osm.BLOCK_LIST:
continue
return new_tag
return None
def select_generic_unique_pois(
self, pois: pd.DataFrame,
is_unique: bool = False,
end_point: pd.DataFrame = None,
avoid_pivots: Sequence[str] = []):
'''Returns a non-specific POIs with main tag being the non-specific tag.
Arguments:
pois: all pois to select from.
is_unique: if to filter unique tags.
end_point: end point of the path.
avoid_pivots: pivots to avoid picking.
Returns:
A number of non-specific POIs which are unique.
'''
# Assign main tag.
main_tags = pois.apply(self.get_generic_tag, axis=1)
new_pois = pois.assign(main_tag = main_tags)
new_pois.dropna(subset=['main_tag'], inplace=True)
new_pois = new_pois[~new_pois['osmid'].isin(avoid_pivots)]
if end_point is not None:
new_pois = new_pois[new_pois['osmid']!=end_point['osmid']]
# Get Unique main tags.
if is_unique:
# Randomly select whether the near by pivot would be
# a single pivot (e.g., `a toy shop` or
# a group of unique landmark (e.g, `3 toy shops`)
is_group = self.randomize_boolean(probabilty = 30)
if is_group:
uniqueness = new_pois.duplicated(subset=['main_tag'], keep=False)==True
new_pois_uniq = new_pois[uniqueness]
if new_pois_uniq.shape[0]==0:
return self.get_generic_unique_pois_single(new_pois)
count_by_tag = new_pois_uniq.main_tag.value_counts().to_dict()
tag_list = list((count_by_tag.keys()))
random.shuffle(tag_list)
for chosen_tag in tag_list:
chosen_count = count_by_tag[chosen_tag]
if chosen_count<=1:
continue
new_pois_uniq_group = new_pois_uniq[new_pois_uniq['main_tag']==chosen_tag]
if new_pois_uniq_group.shape[0]==0:
continue
single_new_pois_uniq = new_pois_uniq_group.sample()
anchor = single_new_pois_uniq.iloc[0]['centroid']
entities_geo_group = new_pois_uniq_group[new_pois_uniq_group.apply(
lambda x: (
util.get_distance_between_geometries(
x.geometry, anchor) <= ON_PIVOT_DIST and util.get_distance_between_geometries(
x.geometry, anchor) > 0), axis=1)]
chosen_count = entities_geo_group.shape[0]
if chosen_count<=1:
continue
by_word = self.randomize_boolean()
if by_word:
chosen_count = inflect_engine.number_to_words(chosen_count)
single_new_pois_uniq['main_tag'] = str(chosen_count) + \
" " + inflect_engine.plural(chosen_tag)
single_new_pois_uniq.drop(
single_new_pois_uniq.columns.difference(
[
'main_tag', 'centroid', 'geometry', 'osmid'] + \
osm.PROMINENT_TAGS_ORDERED+list(osm.NON_SPECIFIC_TAGS.keys())),
axis=1, inplace=True)
single_new_pois_uniq['name'] = single_new_pois_uniq['main_tag']
single_new_pois_uniq['grouped'] = True
return single_new_pois_uniq
return self.get_generic_unique_pois_single(new_pois)
return new_pois
def get_generic_unique_pois_single(self, pois: pd.DataFrame):
uniqueness = pois.duplicated(subset=['main_tag'], keep=False)==False
new_pois_uniq = pois[uniqueness]
return new_pois_uniq
def select_generic_poi(self, pois: pd.DataFrame):
'''Returns a non-specific POI with main tag being the non-specific tag.
Arguments:
pois: all pois to select from.
Returns:
A single sample of a POI with main tag being the non-specific tag.
'''
pois_generic = self.select_generic_unique_pois(pois)
if pois_generic.shape[0]==0:
return None
# Sample POI.
poi = self.sample_point(pois_generic)
poi['geometry'] = poi.centroid
return poi
def get_end_poi(self) -> Optional[GeoSeries]:
'''Returns a random POI.
Returns:
A single POI.
'''
# Filter large POI.
small_poi = self.map.poi[self.map.poi['s2cellids'].str.len() <= SMALL_POI]
if small_poi.shape[0]==0:
return None
# Filter non-specific tags.
return self.select_generic_poi(small_poi)
def randomize_boolean(self, probabilty: int = 50) -> bool:
'''Returns a random or deterministic boolean value.
Arguments:
probabilty: probability (0-100) that the value will be True.
Returns:
A random boolean if `rand_sample` is set, otherwise True.
'''
if self.rand_sample:
rand_int = random.randint(1, 100)
return rand_int <= probabilty
return True
def sample_point(self,
df: gpd.GeoDataFrame
) -> GeoSeries:
'''Returns a random or deterministic single sample of a POI.
Arguments:
df: data to sample.
Returns:
A single sample of a POI.
'''
if self.rand_sample:
return df.sample(1, random_state = random.randint(0, MAX_SEED)).iloc[0]
return df.sample(1, random_state=SEED).iloc[0]
def get_start_poi(self,
end_point: Dict
) -> Optional[GeoSeries]:
'''Returns a random POI within distance of a given POI.
Arguments:
end_point: The POI to which the picked POI should be within distance
range.
Returns:
A single POI.
'''
# Get closest nodes to points.
dest_osmid = end_point['osmid']
try:
# Find nodes within 2000 meter path distance.
outer_circle_graph = ox.truncate.truncate_graph_dist(
self.map.nx_graph, dest_osmid,
max_dist=MAX_PATH_DIST, weight='true_length')
outer_circle_graph_osmid = list(outer_circle_graph.nodes.keys())
except nx.exception.NetworkXPointlessConcept: # GeoDataFrame returned empty
return None
try:
# Get graph that is too close (less than 200 meter path distance)
inner_circle_graph = ox.truncate.truncate_graph_dist(
self.map.nx_graph, dest_osmid,
max_dist=MIN_PATH_DIST, weight='true_length')
inner_circle_graph_osmid = list(inner_circle_graph.nodes.keys())
except nx.exception.NetworkXPointlessConcept: # GeoDataFrame returned empty
inner_circle_graph_osmid = []
osmid_in_range = [
osmid for osmid in outer_circle_graph_osmid if osmid not in
inner_circle_graph_osmid]
poi_in_ring = self.map.poi[self.map.poi['osmid'].isin(osmid_in_range)]
# Filter large POI.
small_poi = poi_in_ring[poi_in_ring['s2cellids'].str.len() <= SMALL_POI]
# Filter by distance
small_poi = small_poi[
small_poi.apply(
lambda x: util.get_distance_between_geometries(
x.geometry,
end_point['centroid']) > MIN_PATH_DIST, axis=1)]
# Filter non-specific tags.
return self.select_generic_poi(small_poi)
def get_landmark_if_tag_exists(self,
gdf: GeoDataFrame,
tag: str,
pick_generic_name: bool = False
) -> GeoSeries:
'''Check if tag exists, set main tag name and choose pivot.
Arguments:
gdf: The set of landmarks.
tag: tag to check if exists.
pick_generic_name: whether to restrict the chosen main tag to non-specific values.
Returns:
A single landmark.
'''
candidate_landmarks = gdf.columns
secondary_pivots = []
if tag in candidate_landmarks:
pivots = gdf[gdf[tag].notnull()]
if pivots.shape[0]:
if pick_generic_name:
tags_keys = osm.NON_SPECIFIC_TAGS.keys()
else:
tags_keys = osm.SPECIFIC_TAGS
for tag_k in tags_keys:
if tag_k not in pivots:
continue
pivots_tag = pivots[pivots[tag_k].notnull()]
if pick_generic_name and isinstance(
osm.NON_SPECIFIC_TAGS[tag_k], list):
pivots_tag = pivots_tag[pivots_tag[tag_k].isin(osm.NON_SPECIFIC_TAGS[tag_k])]
if pivots_tag.shape[0]:
if 'main_tag' not in pivots:
pivots_tag = pivots_tag.assign(main_tag=pivots_tag[tag_k])
pivots_prominent = pivots_tag[
~pivots_tag['amenity'].isin(osm.NEGLIGIBLE_AMENITY)]
if pivots_prominent.shape[0]==0:
pivot = self.sample_point(pivots_tag)
secondary_pivots.append(pivot)
continue
pivot = self.sample_point(pivots_prominent)
return pivot
if len(secondary_pivots)>0:
return secondary_pivots.pop()
return None
def pick_prominent_pivot(self,
df_pivots: GeoDataFrame,
end_point: Dict[str, Any],
path_geom: LineString,
pick_generic_name: bool = False
) -> Optional[GeoSeries]:
'''Select a landmark from a set of landmarks by priority.
Arguments:
df_pivots: The set of landmarks.
end_point: The goal location.
path_geom: The geometry of the path.
Returns:
A single landmark.
'''
# Remove goal location.
try:
df_pivots = df_pivots[df_pivots['osmid']!=end_point['osmid']]
except:
pass
if df_pivots.shape[0]==0:
return None
pivot = None
for main_tag in osm.PROMINENT_TAGS_ORDERED:
pivot = self.get_landmark_if_tag_exists(df_pivots,
main_tag,
pick_generic_name
)
if pivot is not None:
if not isinstance(pivot['geometry'], Point):
pivot['geometry'] = nearest_points(pivot['geometry'], path_geom)[0]
return pivot
return pivot
def get_pivot_near_goal(self,
end_point: GeoSeries,
start_point: GeoSeries,
path_geom: LineString,
max_distance_from_goal: int,
min_distance_from_goal: int,
avoid_pivots: Sequence[str] = []
) -> Optional[GeoSeries]:
'''Return a picked landmark near the end_point.
Arguments:
end_point: The goal location.
start_point: The start location.
path_geom: The geometry of the path selected.
max_distance_from_goal: The max distance from goal.
min_distance_from_goal: The min distance from goal.
avoid_pivots: pivots to avoid picking.
Returns:
A single landmark near the goal location.
'''
near_poi_con = self.map.poi.apply(
lambda x: util.get_distance_between_geometries(
x.geometry,
end_point['centroid']) < max_distance_from_goal and util.get_distance_between_geometries(
x.geometry,
end_point['centroid']) > min_distance_from_goal and util.get_distance_between_geometries(
x.geometry,
start_point['centroid']) > min_distance_from_goal, axis=1)
poi = self.map.poi[near_poi_con]
columns_empty = self.map.nodes.columns.tolist() + ['main_tag']
if poi.shape[0]==0:
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
# Remove streets and roads.
if 'highway' in poi.columns:
poi = poi[poi['highway'].isnull()]
# Remove the endpoint.
nearby_poi = poi[poi['osmid'] != end_point['osmid']]
# Filter non-specific tags.
unique_poi = self.select_generic_unique_pois(
nearby_poi, is_unique=True, end_point=end_point, avoid_pivots=avoid_pivots)
if unique_poi.shape[0]==0:
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
prominent_poi = self.pick_prominent_pivot(
unique_poi, end_point, path_geom, pick_generic_name=True)
if prominent_poi is None:
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
return prominent_poi
def get_pivot_along_route(self,
route: GeoDataFrame,
end_point: Dict,
start_point: Dict,
) -> Optional[GeoSeries]:
'''Return a picked landmark on a given route.
Arguments:
route: The route along which a landmark will be chosen.
end_point: The goal location.
start_point: The start location.
Returns:
A single landmark. '''
# Get POI along the route.
start_point_copy = start_point.copy()
points_route = route['geometry'].tolist()
poly = LineString(points_route).buffer(PIVOT_ALONG_ROUTE_MAX_DIST)
df_pivots = self.map.poi[self.map.poi.apply(
lambda x: poly.intersects(x['geometry']), axis=1)]
columns_empty = self.map.nodes.columns.tolist() + ['main_tag']
if df_pivots.shape[0]==0:
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
# Remove streets.
if 'highway' in df_pivots.columns:
df_pivots = df_pivots[(df_pivots['highway'].isnull())]
# Remove POI near goal and start position.
far_poi_con = df_pivots.apply(
lambda x: util.get_distance_between_geometries(
nearest_points(x.geometry, end_point['geometry'])[0],
nearest_points(x.geometry, end_point['geometry'])[1]
) > NEAR_PIVOT_DIST and util.get_distance_between_geometries(
nearest_points(x.geometry, start_point['geometry'])[0],
nearest_points(x.geometry, start_point['geometry'])[1]
) > ON_PIVOT_DIST, axis=1)
far_poi = df_pivots[far_poi_con]
path_geom = LineString(points_route)
main_pivot = self.pick_prominent_pivot(far_poi, end_point, path_geom, False)
if main_pivot is None:
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
return main_pivot
def get_pivot_beyond_goal(self,
end_point: GeoSeries,
route: GeoDataFrame,
) -> Optional[GeoSeries]:
'''Return a picked landmark beyond the goal location.
Arguments:
end_point: The goal location.
route: The route along which a landmark will be chosen.
Returns:
A single landmark. '''
columns_empty = self.map.nodes.columns.tolist() + ['main_tag']
if route.shape[0] < 3:
# Return Empty.
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
final_node_in_route = route.iloc[-1]
last_node_in_route = route.iloc[-2]
before_last_node_in_route = route.iloc[-3]
street_beyond_route = self.map.edges[
(self.map.edges['u'] == last_node_in_route['osmid'])
& (self.map.edges['v'] == before_last_node_in_route['osmid'])
]
if street_beyond_route.shape[0] == 0:
# Return Empty.
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
street_beyond_osmid = street_beyond_route['osmid'].iloc[0]
condition_street_id = self.map.edges['osmid'].apply(
lambda x: x == street_beyond_osmid)
street_nodes = self.map.edges[condition_street_id]['u'].unique()
street_nodes = np.random.choice(street_nodes, MAX_NUM_BEYOND_TRY)
for i in range(MAX_NUM_BEYOND_TRY):
length = nx.shortest_path_length(
self.map.nx_graph,
source=street_nodes[i],
target=last_node_in_route['osmid'])
# The beyond pivot should not be too close but also not too far away.
if not (length>3 and length<10):
continue
# Check the path between the POI and the last node in the route taken.
# If the path calculated does not pass through the route taken then it is
# beyond the route.
path = nx.shortest_path(self.map.nx_graph,
source=street_nodes[i],
target=last_node_in_route['osmid'])
# Remove the nodes in the route taken from the path calculated so that it
# will not be choosen as the pivot beyond.
path.remove(last_node_in_route['osmid'])
if final_node_in_route['osmid'] in path:
path.remove(final_node_in_route['osmid'])
intersections = set(route['osmid']).intersection(path)
# Check if the path calculated overlaps the route taken,
# if not then pick a POI to be the pivot beyond.
if len(intersections)<1 and len(path)>2:
beyond = self.select_pivot_from_path(route, end_point, path)
if beyond is not None:
return beyond
return GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
def select_pivot_from_path(self,
route: GeoDataFrame,
end_point: Dict,
path: Sequence):
path_nodes = self.map.nodes[self.map.nodes['osmid'].isin(path)]
points_route = path_nodes['geometry'].tolist()
path_beyond = LineString(points_route).buffer(PIVOT_ALONG_ROUTE_MAX_DIST)
route_shape = LineString(
route['geometry'].tolist()).buffer(PIVOT_ALONG_ROUTE_MAX_DIST)
df_pivots = self.map.poi[self.map.poi.apply(
lambda x: (path_beyond.intersects(x['geometry'])) &
(route_shape.intersects(x['geometry'])==False), axis=1)]
if df_pivots.shape[0] == 0:
return None
# Remove streets.
if 'highway' in df_pivots.columns:
df_pivots = df_pivots[(df_pivots['highway'].isnull())]
# Remove invalid geometry.
df_pivots = df_pivots[(df_pivots['geometry'].is_valid)]
if df_pivots.shape[0] == 0:
# Return Empty.
return None
path_geom = LineString(points_route)
beyond_pivot = self.pick_prominent_pivot(
df_pivots, end_point, path_geom, pick_generic_name=True)
return beyond_pivot
def get_position_goal(self,
end_point: GeoSeries,
route: GeoDataFrame
) -> Optional[str]:
'''Return the position of the goal in the last block:
middle of the block / near the closest intersection / near the farthest intersection
Arguments:
end_point: The goal location.
route: The route along which a landmark will be chosen.
Returns:
The position of the goal in last block. '''
street = self.map.edges[self.map.edges['u'] == end_point['osmid']].iloc[0]['osmid']
nodes_u = self.map.edges[self.map.edges['osmid']==street]['u']
condition_intersection = self.map.edges['osmid'] != street
condition_not_poi = self.map.edges['name'] != 'poi'
streets_intersection = self.map.edges[
condition_not_poi & condition_intersection & self.map.edges['u'].isin(
nodes_u)]
intersections_nodes_osmid = streets_intersection['u']
intersections_nodes = self.map.nodes[self.map.nodes['osmid'].isin(intersections_nodes_osmid)]
if intersections_nodes.shape[0]==0:
return None
distances = intersections_nodes.apply(
lambda x: util.get_distance_between_geometries(x.geometry, end_point.centroid), axis=1)
intersections_nodes.insert(0, "distances", distances, True)
bearing = intersections_nodes.apply(
lambda x: util.get_bearing(x.geometry.centroid, end_point.centroid), axis=1)
intersections_nodes.insert(0, "bearing", bearing, True)
min_distance_idx = intersections_nodes['distances'].idxmin()
bearing = intersections_nodes['bearing'].loc[min_distance_idx]
distance_closest = intersections_nodes['distances'].loc[min_distance_idx]
point_closest = intersections_nodes.loc[min_distance_idx]
# Get second bearing in opposite direction.
opposite_bearing = (bearing+180)%360
intersection_opposite = intersections_nodes[(intersections_nodes['bearing']-opposite_bearing)%360<30]
if intersection_opposite.shape[0]==0:
return None
intersection_opposite_idx = intersection_opposite['distances'].idxmin()
intersection_opposite_distance = intersection_opposite.loc[intersection_opposite_idx]['distances']
point_far = intersection_opposite.loc[intersection_opposite_idx]
cardinal = self.get_cardinal_direction(point_far, point_closest)
# Check the proportions.
total_distance = intersection_opposite_distance + distance_closest
closest_proportion = distance_closest/total_distance
if closest_proportion>0.4:
return "middle_block"
if closest_proportion>0.3:
return None
# Check to which intersection it is closer.
closest_inter_node_osmid = intersections_nodes.loc[min_distance_idx]['osmid']
if closest_inter_node_osmid in route['osmid'].tolist():
return "first_intersection;" + cardinal
return "second_intersection;" + cardinal
def get_pivots(self,
route: GeoDataFrame,
end_point: Dict,
start_point: Dict,
) -> Optional[Sequence[GeoSeries]]:
'''Return a picked landmark on a given route.
Arguments:
route: The route along which a landmark will be chosen.
end_point: The goal location.
start_point: The start location.
Returns:
A single landmark.
'''
# Get pivot along the goal location.
main_pivot = self.get_pivot_along_route(
route, end_point, start_point)
if main_pivot['geometry'] is None:
return None
list_main_pivots = [main_pivot]
# Get a second and third pivots along the goal location.
for _ in range(1, N_MAIN_PIVOTS):
main_pivot_x = self.get_pivot_along_route(
route, end_point, start_point)
dist = util.get_distance_between_geometries(
main_pivot_x.geometry.centroid,
end_point['geometry'].centroid
)
if dist < NEAR_PIVOT_DIST:
main_near_pivot = main_pivot_x
else:
columns_empty = self.map.nodes.columns.tolist() + ['main_tag']
main_near_pivot = GeoDataFrame(index=[0], columns=columns_empty).iloc[0]
list_main_pivots.append(main_pivot_x)
path_geom = LineString(route['geometry'].tolist())
# Get pivot near the goal location.
near_pivot = self.get_pivot_near_goal(
end_point, start_point, path_geom, NEAR_PIVOT_DIST, 2)
if near_pivot['geometry'] is None:
return None
list_around_goal_pivots_osmid = [near_pivot['osmid']]
list_around_goal_pivots = []
# Get a second and third pivots along the goal location.
for _ in range(0, N_AROUND_PIVOTS):
around_goal_pivot_x = self.get_pivot_near_goal(
end_point,
start_point,
path_geom,
2*NEAR_PIVOT_DIST,
NEAR_PIVOT_DIST,
list_around_goal_pivots_osmid)
list_around_goal_pivots_osmid.append(around_goal_pivot_x['osmid'])
list_around_goal_pivots.append(around_goal_pivot_x)
# Get pivot located past the goal location and beyond the route.
beyond_pivot = self.get_pivot_beyond_goal(end_point, route)
list_pivots = list_main_pivots + [
near_pivot] + list_around_goal_pivots + [beyond_pivot] + [main_near_pivot]
return list_pivots
def get_egocentric_spatial_relation_pivot(self,
ref_point: Point,
route: GeoDataFrame
) -> str:
line = LineString(route['geometry'].tolist())
dist_projected = line.project(ref_point)
cut_geometry = util.cut(line, dist_projected)
first_segment = cut_geometry[0]
coords = list(first_segment.coords)
return self.calc_spatial_relation_for_line(
ref_point, Point(coords[-1]), Point(coords[-2]))
def calc_spatial_relation_for_line(self,
ref_point: Point,
line_point_last_part: Point,
line_point_second_from_last: Point,
) -> str:
# Calculate the angle of the last segment of the line_point_last_part.
azim_route = util.get_bearing(
line_point_second_from_last, line_point_last_part)
# Calculate the angle between the last segment of the route and the goal.
azim_ref_point = util.get_bearing(
line_point_last_part, ref_point)
diff_azim = (azim_ref_point-azim_route) % 360
if diff_azim < 180:
return "right"
return "left"
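The left/right rule above compares the route's heading with the heading toward the reference point, modulo 360. A self-contained sketch of the same test follows; the `bearing` helper is our own planar stand-in for `util.get_bearing` (whose real implementation, presumably on lon/lat coordinates, is not shown here), and the function names are illustrative only.

```python
import math

def bearing(p_from, p_to):
    # Planar approximation (an assumption): angle measured clockwise
    # from north, in degrees, for (x, y) = (east, north) tuples.
    dx = p_to[0] - p_from[0]
    dy = p_to[1] - p_from[1]
    return math.degrees(math.atan2(dx, dy)) % 360

def side_of_route(ref, seg_end, seg_prev):
    # Same rule as above: a clockwise deviation (< 180 degrees) from the
    # route's last-segment heading means the point lies to the "right".
    diff = (bearing(seg_end, ref) - bearing(seg_prev, seg_end)) % 360
    return 'right' if diff < 180 else 'left'
```

For a route heading due north, a point east of its endpoint comes out as 'right' and a point west as 'left'.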
def get_egocentric_spatial_relation_goal(self,
ref_point: Point,
route: GeoDataFrame
) -> str:
final_node_in_route = route.iloc[-2]['geometry'].centroid
last_node_in_route = route.iloc[-3]['geometry'].centroid
return self.calc_spatial_relation_for_line(
ref_point, final_node_in_route, last_node_in_route)
def get_cardinal_direction(self, start_point: Point, end_point: Point
) -> str:
'''Calculate the cardinal direction between start and and points.
Arguments:
start_point: The starting point.
end_point: The end point.
Returns:
A cardinal direction.
'''
azim = util.get_bearing(start_point, end_point)
if azim < 10 or azim > 350:
cardinal = 'North'
elif azim < 80:
cardinal = 'North-East'
elif azim > 280:
cardinal = 'North-West'
elif azim < 100:
cardinal = 'East'
elif azim < 170:
cardinal = 'South-East'
elif azim < 190:
cardinal = 'South'
elif azim < 260:
cardinal = 'South-West'
else: # azim < 280:
cardinal = 'West'
return cardinal
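The threshold chain in `get_cardinal_direction` maps a bearing in degrees to one of eight compass names. The same mapping can be sketched more compactly by rotating the bearing half a sector and indexing into a list. This is a hedged alternative, not the project's code: the name `bearing_to_cardinal` is ours, and it uses equal 45-degree sectors, whereas the chain above uses hand-tuned thresholds (e.g. a narrower 'North' window).

```python
def bearing_to_cardinal(azim: float) -> str:
    """Map a bearing (0 = North, clockwise, in degrees) to an 8-way name."""
    names = ['North', 'North-East', 'East', 'South-East',
             'South', 'South-West', 'West', 'North-West']
    # Shift by half a sector (22.5 degrees) so each name is centered on
    # its axis, then bucket into 45-degree sectors.
    sector = int(((azim % 360) + 22.5) // 45) % 8
    return names[sector]
```

Under this equal-sector split, a bearing of 90 maps to 'East' and 359 wraps back to 'North'.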
def get_number_intersections_past(self,
main_pivot: GeoSeries,
route: GeoDataFrame,
end_point: Point
) -> Optional[int]:
'''Return the number of intersections between the main_pivot and goal.
Arguments:
main_pivot: The pivot along the route.
route: The route along which a landmark will be chosen.
end_point: The goal location.
Returns:
The number of intersections between the main_pivot and goal.
If the main_pivot and goal are on different streets return None.
'''
pivot_goal_route = self.compute_route_from_nodes(
main_pivot['osmid'],
end_point['osmid'],
self.map.nx_graph,
self.map.nodes)
edges_in_pivot_goal_route = pivot_goal_route['osmid'].apply(
lambda x: set(self.map.edges[self.map.edges['u'] == x]['osmid'].tolist()))
pivot_streets = edges_in_pivot_goal_route.iloc[0]
goal_streets = edges_in_pivot_goal_route.iloc[-1]
common_streets = pivot_streets & goal_streets
if not common_streets:
return None
number_intersection = edges_in_pivot_goal_route.apply(
lambda x: len(x - common_streets) > 0).sum()
if number_intersection <= 0:
return 0
return number_intersection
def get_sample(self) -> Optional[geo_item.GeoEntity]:
'''Sample start and end point, a pivot landmark and route.
Returns:
A start and end point, a pivot landmark and route.
'''
geo_landmarks = {}
# Select end point.
geo_landmarks['end_point'] = self.get_end_poi()
if geo_landmarks['end_point'] is None:
return None
# Select start point.
geo_landmarks['start_point'] = self.get_start_poi(
geo_landmarks['end_point'])
if geo_landmarks['start_point'] is None:
return None
# Compute route between start and end points.
route = self.compute_route_from_nodes(
geo_landmarks['start_point']['osmid'],
geo_landmarks['end_point']['osmid'],
self.map.nx_graph,
self.map.nodes)
if route is None:
return None
# Select pivots.
result = self.get_pivots(
route, geo_landmarks['end_point'], geo_landmarks['start_point'])
if result is None:
return None
geo_landmarks['main_pivot'] = result[0]
geo_landmarks['near_pivot'] = result[N_MAIN_PIVOTS]
geo_landmarks['beyond_pivot'] = result[-2]
geo_landmarks['main_near_pivot'] = result[-1]
for i in range(1, N_MAIN_PIVOTS+1):
geo_landmarks[f'main_pivot_{i}'] = result[i]
for i in range(1, N_AROUND_PIVOTS+1):
geo_landmarks[f'around_goal_pivot_{i}'] = result[N_MAIN_PIVOTS+i]
geo_features = {}
# Get cardinal direction.
geo_features['cardinal_direction'] = self.get_cardinal_direction(
geo_landmarks['start_point']['geometry'], geo_landmarks['end_point']['geometry'])
# Get Egocentric spatial relation from goal.
geo_features['spatial_rel_goal'] = self.get_egocentric_spatial_relation_goal(
geo_landmarks['end_point']['geometry'].centroid, route)
# Get Egocentric spatial relation from main pivot.
geo_features['spatial_rel_pivot'] = self.get_egocentric_spatial_relation_pivot(
geo_landmarks['main_pivot']['geometry'].centroid, route)
# Get Egocentric spatial relation from pivot near and along the way.
if geo_landmarks['main_near_pivot']['geometry']:
geo_features['spatial_rel_main_near'] = self.get_egocentric_spatial_relation_pivot(
geo_landmarks['main_near_pivot']['geometry'].centroid, route)
else:
geo_features['spatial_rel_main_near'] = None
# Get number of intersections between main pivot and goal location.
geo_features['intersections'] = self.get_number_intersections_past(
geo_landmarks['main_pivot'], route, geo_landmarks['end_point'])
geo_features['goal_position'] = self.get_position_goal(
geo_landmarks['end_point'], route)
rvs_path_entity = geo_item.GeoEntity.add_entity(
route=route,
geo_features=geo_features,
geo_landmarks=geo_landmarks)
return rvs_path_entity
def get_single_sample(
self,
index: int,
sema: Any,
n_samples: int,
return_dict: Dict[int, geo_item.GeoEntity]):
'''Sample exactly one RVS path sample.
Arguments:
index: index of sample.
sema: Semaphore Object.
n_samples: the total number of samples to generate.
return_dict: The dictionary of samples generated.
'''
sema.acquire()
entity = None
attempt = 0
while entity is None:
if attempt >= MAX_NUM_GEN_FAILED:
sys.exit(f"Reached max number of failed attempts for sample {index}.")
entity = self.get_sample()
attempt += 1
logging.info(f"Created sample {index}/{n_samples}.")
return_dict[index]=entity
sema.release()
def generate_and_save_rvs_routes(self,
path_rvs_path: str,
n_samples: int,
n_cpu: int = multiprocessing.cpu_count()-1
):
'''Sample start and end point, a pivot landmark and route and save to file.
Arguments:
path_rvs_path: The path to which the data will be appended.
n_samples: the max number of samples to generate.
n_cpu: The number of worker processes to use.
'''
manager = multiprocessing.Manager()
sema = Semaphore(n_cpu)
new_entities = []
lst = list(range(n_samples))
batches = [
lst[i:i + MAX_BATCH_GEN] for i in range(0, len(lst), MAX_BATCH_GEN)]
for batch in batches:
return_dict = manager.dict()
jobs = []
for i in batch:
p = multiprocessing.Process(
target=self.get_single_sample,
args=(i+1, sema, n_samples,
return_dict))
jobs.append(p)
p.start()
for proc in jobs:
proc.join()
new_entities += [entity for idx_entity, entity in return_dict.items()]
if len(new_entities)>=SAVE_ENTITIES_EVERY:
geo_item.save(new_entities, path_rvs_path)
new_entities = []
if len(new_entities)>0:
geo_item.save(new_entities, path_rvs_path)
def load_entities(path: str) -> Sequence[geo_item.GeoEntity]:
if not os.path.exists(path):
return []
geo_types_all = {}
for landmark_type in LANDMARK_TYPES:
geo_types_all[landmark_type] = gpd.read_file(path, layer=landmark_type)
geo_types_all['route'] = gpd.read_file(path, layer='path_features')['geometry']
geo_types_all['path_features'] = gpd.read_file(path, layer='path_features')
geo_entities = []
for row_idx in range(geo_types_all[LANDMARK_TYPES[0]].shape[0]):
landmarks = {}
for landmark_type in LANDMARK_TYPES:
landmarks[landmark_type] = geo_types_all[landmark_type].iloc[row_idx]
features = geo_types_all['path_features'].iloc[row_idx].to_dict()
del features['geometry']
route = geo_types_all['route'].iloc[row_idx]
geo_item_cur = geo_item.GeoEntity.add_entity(
geo_landmarks=landmarks,
geo_features=features,
route=LineString(route.exterior.coords[:-1])
)
geo_entities.append(geo_item_cur)
logging.info(f"Loaded entities {len(geo_entities)} from <= {path}")
return geo_entities | {
"redpajama_set_name": "RedPajamaGithub"
} | 2,487 |
Thomas L. Olson: Book Review of Sabine N. Meyer, "We Are What We Drink: The Temperance Battle in Minnesota" (2018).
Judge Daniel Fish: "Lincoln Literature: A Bibliographical Account of Books and Pamphlets Relating to Abraham Lincoln" (1900).
In 1900 Judge Daniel Fish's bibliography of books and pamphlets on Abraham Lincoln was published. It is posted here.
Judge Daniel Fish: "Lincoln Bibliography: A List of Books and Pamphlets Relating to Abraham Lincoln" (1906).
In 1906 Judge Daniel Fish's bibliography of books and pamphlets on Abraham Lincoln was published. It is a "revision and enlargement" of his "Lincoln Literature," a catalogue of Lincolniana published in 1900. It is posted here.
Judge Daniel Fish: "Lincoln Collections and Lincoln Bibliography" (1908).
In 1908 Judge Daniel Fish's article on "Lincoln Collections and Lincoln Bibliography" was published in Volume 3, Proceedings and Papers of the Bibliographical Society of America. It is posted here. | {
"redpajama_set_name": "RedPajamaC4"
} | 315 |
This month we wanted to highlight a recent job that demonstrates how Mr. Handyman's approach to a project differentiates them from other local home service providers.
Recently, a homeowner noticed wet carpet along a wall that had the master bathroom shower on the other side. He did a little investigative work and found two loose tiles on the wall in the shower, with soft drywall behind those. An exploratory hole cut in the drywall of the adjoining room clearly identified water at work in this wall, but it was not clear where the water was coming from. Multiple service providers priced the repair, each assuming a bad shower pan and full tear out of pan and tiled walls. Mr. Handyman was called and refused to price the whole job until the source of the water and extent of the damage was understood. The customer agreed to a careful demolition to assess scope of damage and source of water. Soon thereafter it was found that the shower diverter was leaking back in the wall and the damage was isolated to that wall. The leak must have recently developed as no rotten wood was found and the overall scope of the repair was significantly smaller, and less expensive for the homeowner, than the full tear out suggested by others.
This is a common approach for Mr. Handyman and is a key differentiator of their service. While they can and do tear out and replace full bathtub and shower tile surrounds when required (or desired by the homeowner as a remodeling effort), they often recommend an exploratory approach to damage behind tile, wet drywall, rotten siding and the like. With the full extent of the damage exposed, and the cause known, the project can then be properly scoped. Any service provider that comes into these situations and immediately recommends a full tear-out and replacement before understanding the scope and source of the damage is simply looking out for their own financial well-being and not that of the customer.
Quality – All staff are employees, have been background and drug screened, and have a minimum of 15 years of experience. They are a BBB Accredited Business with an A+ Rating.
Service – When you call Mr. Handyman you get a person, not a machine. They are fully licensed and insured for your protection.
Convenience – They offer scheduled appointment times.
Guaranteed – All work is guaranteed.
This simple formula has helped Mr. Handyman grow over the years even as the economy and home services industries, in general, have suffered through the great recession. Call the office today at 203-373-7717 to discuss your specific needs and schedule one of our service technicians for a visit! | {
"redpajama_set_name": "RedPajamaC4"
} | 3,860 |
Thursday, February 21, 2013
Conversations With CoWorkers
There is a reason I don't take signs like this to work:
Consider the actual conversation I had with a coworker today.
Coworker: Becky, these headers are supposed to have nuts in them.
Me: You'll have to take them back to tubing [department].
Coworker: Huh?
Me: You'll have to take them back to tubing so they can drill a hole in them for the nut.
Coworker: What?
Me: You'll have to take them back to tubing so they can put the nut in them.
Coworker: What??
Me: Go show them to [group leader] Fernando.
So, yeah, if I'd taken that sign in to work, I'm pretty sure I'd have had to spend all day explaining what that little house was.
1 comment:
StephieKnits said...
I have had those days. I don't get them as often now. I don't really miss them either. I guess they won't get pie on March 14th either. | null | null
Global coatings supplier Hempel (Kongens Lyngby, Denmark) announced Monday (Oct. 1) that the 65 percent acquisition of German paint manufacturer J.W. Ostendorf is now complete, marking a move that is part of a larger strategy for growth.
The JWO acquisition strengthens Hempel's presence in the decorative paints market; in general, Hempel has historically been better known for its protective and marine offerings. JWO has about 750 employees, according to Hempel, and is based in two locations, in Germany and France. The company makes mainly water-based paints for the retail market, with a focus on environmental friendliness. JWO joins the U.K.-based Crown Paints as part of Hempel's stable of consumer brands.
With the closing of the deal, Hempel will create within its organizational structure a new region known as Decorative Europe. Crown Paints is joining forces with JWO in this new venture. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,610 |
You Are Here: Home » Courts, Government Documents, Health Care, Legal Research » FDA Allows Women to Get Abortion Pill by Mail
FDA Allows Women to Get Abortion Pill by Mail
by Sabrina I. Pacifici on Dec 16, 2021
USAToday: "The Food and Drug Administration said Thursday that it would permanently remove a key restriction on medication used to terminate pregnancies, allowing so-called "abortion pills" to be available by mail and prescribed through telehealth medical consultations. The FDA had temporarily allowed the medication to be available in such methods after a federal judge ordered it due to the COVID-19 pandemic and concerns about virus exposure in health care settings. Now, the agency says it will leave those new policies in place – a key move by the Biden administration as the Supreme Court considers monumental cases that could limit abortion rights across the country. The FDA, which first approved medical abortion in 2000, had always required that the pills could only be prescribed after an in-person visit with a doctor. Women who are up to 10 weeks pregnant could receive the medication. FDA officials said a scientific review supported widening access, including no longer limiting dispensing to a small number of specialty clinics and doctor's offices. But prescribers will still need to undergo certification and training. Additionally, the agency said dispensing pharmacies will have to be certified. There are two pills approved by the FDA to end a pregnancy, mifepristone and misoprostol, as a safe alternative to a medical procedure. When a person has been prescribed a medication abortion, they will first take the mifepristone, which blocks the body's ability to absorb its own progesterone, a necessary hormone for a pregnancy to grow. Then the person will take the misoprostol up to 48 hours later, which will cause cramping and bleeding emptying the uterus. Medical providers have said the process is similar to a miscarriage.
While the use of the medication has steadily increased since it was first approved in 2000, surgical abortions have remained more popular in part because medication abortions are typically more expensive and can only be performed during the first few months of pregnancy. But an onslaught of new regulations, shuttered abortion clinics and rulings by the Supreme Court could leave people looking to terminate a pregnancy with no other option…"
Subjects: Courts, Government Documents, Health Care, Legal Research
← 21 photos that defined 2021 on Capitol Hill
Is a mask mandate in the US the same as a law? → | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 457 |
Giacomo Grosso (23 May 1860 in Cambiano – 14 January 1938 in Turin) was an Italian painter.
Biography
After spending his childhood at the Giaveno seminary, Giacomo Grosso enrolled at the Accademia Albertina in Turin in 1873, thanks to a scholarship he was awarded by Cambiano Town Council. He became a pupil of Andrea Gastaldi and made his debut in 1882 at the 24th Esposizione della Società di Incoraggiamento alle Belle Arti di Torino, completing his studies the following year. In 1884, he participated in the Esposizione Generale Italiana in Turin with a painting inspired by La storia di una capinera by Giovanni Verga. After coming into contact with the Paris art scene through his many stays in the French capital, he continued to exhibit assiduously in the Turin Promotrici, the Venice Biennale from the first edition in 1895 (with a one-man show in 1912), and in other international shows (Paris, 1896; Munich, 1899; San Francisco, 1915), where he became acclaimed as a portraitist. From 1901, when he made his first journey to South America, he began to receive commissions from Argentina, and in 1910, for the celebration of the Argentinean Centennial in Buenos Aires, he executed a large commemorative canvas, The panorama of the Battle of Maipú (lost in the fire in 1923), an episode in the War of Independence. From 1906 he held the chair of painting at the Accademia Albertina in Turin, and in 1929 he was appointed senator of the Kingdom of Italy. His solo exhibition of over fifty works was curated by Leonardo Bistolfi at the Galleria Pesaro, Milan, in 1926.
Among his pupils are Arturo Conterno, Maurizio Pellegrini, Eso Pelluzi, and Giovanni Rava.
Gallery
References
Laura Casone, Giacomo Grosso, online catalogue Artgate by Fondazione Cariplo, 2010, CC BY-SA (source for the first revision of this article).
19th-century Italian painters
19th-century Italian male artists
Italian male painters
20th-century Italian painters
Accademia Albertina alumni
Academic staff of Accademia Albertina
1860 births
1938 deaths
20th-century Italian male artists | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 2,119 |
Magnificent apartment with a terrace and a view of Sainte Victoire, near the Town Hall and Les Cardeurs (1') and the Cours Mirabeau (5').
Pedestrian street and absolute silence, with the Bellegarde car park a 30-second walk away.
You will appreciate the large terrace with a view of Sainte Victoire, the brightness from two large windows with electric shutters, and the apartment's comfort and design!
The accommodation is fully equipped. All charges are included in the rent (water, electricity, wifi...). A flat fee of €30 per month is set for electricity consumption in order to avoid any risk of over-consumption (a regularisation will take place at the end of the rental period).
"redpajama_set_name": "RedPajamaC4"
} | 5,331 |
Sam Vickers just turned 90 years old, but that number hasn't slowed him down a bit.
The Mansfield resident still builds wooden rocking horses for his grandkids, takes regular trips to historic sites and museums to relive his days working on submarines for General Dynamics and plays cards with his friends once a week. Most importantly, he starts every weekday with a stop at the Mansfield Activities Center to participate in the Senior Lifestyles program.
'Ever since' for Vickers is almost three decades, one of the longest tenures in the Senior Center, which is celebrating its 40th anniversary this year. At the time it was created, the community of less than 8,000 didn't have many parks and no recreation center, but city leaders understood the importance of providing a safe place for its older residents to gather. An old house off East Broad Street was dedicated, and programs like bingo, cards, parties and more began almost immediately.
The original location of the Mansfield Senior Center, Broad Street.
The Senior Center, which now operates as Senior Lifestyles out of the Mansfield Activities Center, has grown considerably since the early days, with regular daily attendance averaging around 75-100 visitors, and more than 300 different programs offered each year, from volleyball to arts and crafts. A notable improvement over the years includes two accessible buses that can transport visitors to and from home, or on any number of day trips the center sponsors, which have included air shows and shopping to museums and nature centers.
The MAC gymnasium gives space for special events like Senior Citizen's Day and weekly Walk & Talk exercise classes. Another important addition was the partnership with Tarrant County's Senior Citizens Program, Sixty and Better, which helps provide low to no-cost meals for members. In the last year, more than 6,000 hot meals were served to seniors in Mansfield.
Equipment, facilities and programs have improved, but the community spirit that motivated the program in the first place is still alive and thriving. Lenora Fike, 82, has been a regular since 1986 and remembers the early days fondly.
In good times, and in bad, the fellowship of the Senior Center has been there.
Suzanne Newman has been the city's Senior Lifestyles program coordinator for the last 17 years and says the success of the program has been a team effort.
Mansfield City Manager Clayton Chandler has been serving since 1984, almost since the center opened, and affirms the city's longtime support of the programs.
Services that will, by all accounts, only continue to grow and thrive over the next 40 years.
"I hope the senior center will still be here in 40 more years, and continue to have the support of the city leaders and community that we enjoy now," she said. "It's so much more than classes or games. It helps your mind stay active, and you can't feel sorry for yourself when you're surrounded by people who are concerned about you and want only to make your day brighter. Being active in the senior center helps me see the bigger picture; it makes me want to do more, just as so many others here have done for me."
Seniors dance at the 2016 New Year's Eve Party at the Mansfield Activities Center.
For more information, contact program coordinator Suzanne Newman at the Mansfield Activities Center, 817-728-3680 or visit them online https://www.mansfieldtexas.gov/senior-lifestyles. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,521 |
We accept payment via BACS, Debit/Credit Cards and PayPal. If you would like to open a credit account, then you will need to contact our customer services team on 01933 313 252.
Please note that we will require the first order to be on a Proforma basis regardless of whether a credit account is applied for.
PayPal is designed from the ground up to be a safer way to send money online. PayPal doesn't expose or sell your financial information to merchants.
Your sensitive financial information is securely stored on their servers.
Information is automatically sent with a high level of data encryption.
100% protection for unauthorized payments sent from your account. As a fraud-prevention measure, PayPal sends an email confirmation for every online PayPal payment that you make. If you receive an email confirmation for a transaction that you didn't approve, contact PayPal and they'll work to quickly resolve the issue — you won't be responsible for any unauthorized charges.
Buyer Complaint Policy when you shop outside eBay. Regardless of where you shop online, PayPal's Buyer Complaint Policy lets you submit a dispute for an item you don't receive or an item significantly not as described. By doing so, you have the opportunity to resolve your dispute directly with the seller. See eligibility.
The PayPal Resolution Center — your first step for any issue. Whether you encounter a problem with an item you purchased on or off eBay, the first step is to go to PayPal's Resolution Center to begin the dispute process.
Visit the Resolution Center and open a dispute within 45 days from the date you sent payment. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,808 |
Q: Matplotlib Fill_Between Two Lines Good morning, I'm trying to use Matplotlib to fill_between two lines, and I'm doing it like this:
plt.figure(figsize = (16,8));
plt.plot(estudo_df["Sales"], label='Original Data');
plt.plot(forecast_2018["yhat"], color='red', label='Predictions');
plt.plot(forecast_2018["yhat_lower"], color='green', label='Predictions Lower');
plt.plot(forecast_2018["yhat_upper"], color='Blue', label='Predictions Blue');
plt.fill_between(forecast_2018["yhat_lower"],forecast_2018["yhat_upper"] , color='grey', alpha='0.5')
plt.legend();
plt.title('Forecast Sales Prophet - '+sh);
plt.xlabel('Date');
plt.ylabel('Sales');
But it is not working...
I use two DS that have the same amount and "index ID", which is a DATE, estudo_df and forecast_2018
Does anyone knows where the error is?
A: Change your code in 2 places:
*
*pass forecast_2018.index as the first parameter of fill_between,
*change alpha parameter to float (you passed a string).
I ran your code with one line commented out:
plt.figure(figsize = (8, 6));
#plt.plot(estudo_df["Sales"], label='Original Data');
plt.plot(forecast_2018["yhat"], color='red', label='Predictions');
plt.plot(forecast_2018["yhat_lower"], color='green', label='Predictions Lower');
plt.plot(forecast_2018["yhat_upper"], color='blue', label='Predictions Blue');
plt.fill_between(forecast_2018.index, forecast_2018["yhat_lower"],
forecast_2018["yhat_upper"], color='#DDDDDD', alpha=0.5)
plt.legend();
sh = 'xx'
plt.title('Forecast Sales Prophet - ' + sh)
plt.xlabel('Date')
plt.ylabel('Sales');
and got:
The index of my DataFrame contains dates but is of string type.
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,382 |
Bisceglie is a terminal station of the Milan Metro, on line M1. It is located on via Bisceglie in Milan, beyond the Inganni station. It was opened in 1992.
External links
Milan Metro
"redpajama_set_name": "RedPajamaWikipedia"
} | 6,686 |
Thad Starner
School of Interactive Computing, College of Computing
Ph.D. Media Arts and Sciences, Media Laboratory (1999), MIT
M.S. Media Arts and Sciences, Media Laboratory (1995), MIT
B.S. Brain and Cognitive Science (1991), MIT
B.S. Computer Science (1991), MIT
Thad Starner is a Professor at the Georgia Institute of Technology's School of Interactive Computing. Thad was perhaps the first to integrate a wearable computer into his everyday life as an intelligent personal assistant. Starner's work as a PhD student would help found the field of Wearable Computing. His group's prototypes and patents on mobile MP3 players, mobile instant messaging and e-mail, gesture-based interfaces, and mobile context-based search foreshadowed now commonplace devices and services. Thad has authored over 100 scientific publications with over 100 co-authors on mobile Human Computer Interaction (HCI), pattern discovery, human power generation for mobile devices, and gesture recognition, and he is a founder and current co-chair of the IEEE Technical Committee on Wearable Information Systems. His work is discussed in public forums such as CNN, NPR, the BBC, CBS's 60 Minutes, The New York Times, Nikkei Science, The London Independent, The Bangkok Post, and The Wall Street Journal.
Ubiquitous & Wearable Computing
UBICOMP
Conducting an internship at the Alliance (and VACCHO) opened my eyes to the Indigenous health and policy sector. I was exposed to the multitude of amazing things that VACCHO do throughout Victoria for Aboriginal people, and had the privilege to meet so many talented and passionate people here. My work for the Alliance taught me about the importance of advocacy in policy and program development, especially in the Aboriginal out-of-home-care sphere. The funding enables Indigenous individuals who may come from low-socioeconomic / under-resourced backgrounds to gain experiences with these organisations, which they may not have the chance to without the support of Aurora.
Makayla Jennings, VACCHO
Over my short four weeks, I have strengthened my knowledge of the criminal justice system and learned how ACT policy differs from Victoria. It has definitely reinforced my aspiration to choose a career pathway in this field – which is one of the key things I wanted to gain out of the placement.
Hope Kuchel, ACTCS
My time with AIDA has gone but it has been the catalyst for many changes in my life direction and the most formative experience for me personally, since making the decision to go back to study.
Christine Torcetti, AIDA
For Aurora and ACTCS the expectation was very high and therefore I was completing tasks that were engaging and required critical thinking and effort. Their work often goes unnoticed so it was great to see how much effort goes into providing the service they do and being able to appreciate it.
This experience has set up a pathway into my future that I was always keen on pursuing, but it has given me clarity and a realistic vision I can strive to pursue. I honestly cannot thank Aurora enough for this opportunity and experience and generosity.
Jay Lee Snowden, Just Reinvest/Maranguka
This experience has really highlighted the inequities and systemic problems that underline the current system and continue to further the disproportionate over-representation of Indigenous children in the child protection area. Following my internship, I actually have a greater sense of pride in my cultural identity and feel more empowered to advocate in this area.
Sophie Heath, SNAICC
Overall I feel very lucky to be involved in the Aurora Internship Program and am glad that I have been able to make a contribution to Food Ladder.
Faith Considine, Food Ladder
My placement has made me feel like I have really contributed to the work they are and will continue to be doing when I leave, while valuing my opinion for future resources to benefit Aboriginal and or Torres Strait Islander children in child protection. I have learnt so much not only about policy research but about what it means to have a full time 9-5 working job which has been a great learning experience.
Aliya Chalmers, SNAICC
QSNTS has a wonderful working environment and all Aurora Interns who have the privilege of working there will feel supported, valued and have an all around great experience!
Laura MacColl, QSNTS
I found my experience extremely rewarding and insightful. I once was an Aboriginal woman who knew little about the child protection space, which is concerning as it affects so many of our mob. I am now proud to say that I have been a part of an organisation that advocates to keep our Aboriginal children and young people safe and protected.
Lakkari Pitt, AbSec
Selected astro-ph abstracts for Tuesday March 21
Detecting shock waves in cosmological smoothed particle hydrodynamics simulations
Christoph Pfrommer (1,2), Volker Springel (1), Torsten A. Ensslin (1), Martin Jubelgas (1) ((1) MPA, (2) CITA) Comments: 20 pages, 7 figures, just appeared in MNRAS, full resolution version available at http://www.cita.utoronto.ca/~pfrommer/Publications/MNRAS.367.113.pdf Journal-ref: MNRAS, 2006, 367, 113
Willman 1 - A Galactic Satellite at 40 kpc With Multiple Stellar Tails
Beth Willman, Morad Masjedi, David W. Hogg, Julianne J. Dalcanton, David Martinez-Delgado, Michael Blanton, Andrew A. West, Aaron Dotter, Brian Chaboyer Comments: 10 pages, 4 figures. Submitted to AJ
Growing Live Disks Within Cosmologically Assembling Asymmetric Halos: Washing Out the Halo Prolateness
Ingo Berentzen (U. of Kentucky) Isaac Shlosman (U. of Kentucky) Comments: 12 pp, 10 figures, 2 animations. Submitted to the Astrophysical Journal. Animations available at http://www.pa.uky.edu/~shlosman/research/galdyn/movies.html
The Spitzer Space Telescope Extra-Galactic First Look Survey: 24 micron data reduction, catalog, and source identification
Dario Fadda, F. R. Marleau, L. J. Storrie-Lombardi, D. Makovoz, D. T. Frayer, P. N. Appleton, L. Armus, S. C. Chapman (1), P. I. Choi, F. Fang, I. Heinrichsen, G. Helou, M. Im (2), M. Lacy, D. L. Shupe, B. T. Soifer, G. K. Squires, J. Surace, H. I. Teplitz, G. Wilson, L. Yan, (Spitzer Science Center - Caltech except (1) Dep. of Astronomy - Caltech, (2) Seoul National University.) Comments: 25 pages, 27 figures. 7 figures are given as png to reduce their size. Paper accepted for publication by AJ (June 2006, vol. 131). Full-resolution version of the paper and machine-readable catalogs are available at http://spider.ipac.caltech.edu/staff/fadda/inpress.html
Local Group Dwarf Galaxies and the Fundamental Manifold of Spheroids
Dennis Zaritsky (U. Arizona), Anthony H. Gonzalez (U. Florida), and Ann I. Zabludoff (U. Arizona) Comments: accepted for publication in ApJ Letters (5 pages)
High-Resolution Absorption Spectroscopy of Multi-phase, High-Metallicity Gas Associated with the Luminous Quasar HE 0226-4110
Rajib Ganguly, Kenneth R. Sembach, Todd M.Tripp, Blair D. Savage, Bart P. Wakker Comments: 25 pages, including 14 figures; uses emulateapj.cls document class; accepted for publication in The Astrophysical Journal
Supernova 2006aj and the associated X-Ray Flash 060218
J. Sollerman (1,2), A. O. Jaunsen (3), J. P. U. Fynbo (1), J. Hjorth (1), P. Jakobsson (1), M. Stritzinger (1), C. Feron (1), P. Laursen (1), J.-E. Ovaldsen (3), J. Selj (3), C. C. Thone (1), D. Xu (1), T. Davis (1), J. Gorosabel (4), D. Watson (1), R. Duro (3), N. Lysfjord (3), I. Ilyin (5), M. Wold (3), B. L. Jensen (1), S. Walch (7) ((1) Dark Cosmology Centre, Copenhagen (2) Department of Astronomy, Stockholm (3) Institute of theoretical astrophysics, Oslo (4) Instituto de Astrofisica de Andalucia, Granada (5) Astrophysikalisches Institut, Potsdam (6) ESO, Garching (7) University Observatory Munich, Munich) Comments: This draft was submitted to A&A main journal on February 17, 6 figures, comments welcome
The Dissipative Merger Progenitors of Elliptical Galaxies
Avishai Dekel (HU Jerusalem), Thomas, J. Cox (CfA Harvard) Comments: 9 pages, 4 figures
Magnetic fields in our Galaxy: How much do we know? III. Progress in the last decade
J.L. Han (NAOC) Comments: 8 pages, 4 figures. Invited Talk at "2005 Hanas Pulsar Symposium". To be published in Chinese Journal of Astronomy and Astrophysics
Evidence for a population of beamed radio intermediate quasars
Tinggui Wang (USTC), Hongyan Zhou (USTC), Junxian Wang (USTC), Youjun Lu (UCB), Yu Lu (USTC) Comments: 15 pages, 4 figures, 1 table, Accepted to the Astrophysical Journal
The black hole fundamental plane from a uniform sample of radio and X-ray emitting broad line AGNs
Ran Wang (PKU), Xue-Bing Wu (PKU) and Min-Zhi Kong (NAOC) Comments: 23 pages, 7 figures, ApJ accepted
Models for the Type Ic Hypernova SN 2003lw associated with GRB 031203
Paolo A. Mazzali, Jinsong Deng, Elena Pian, Daniele Malesani, Nozomu Tominaga, Keiichi Maeda, Ken'ichi Nomoto, Guido Chincarini, Stefano Covino, Massimo Della Valle, Dino Fugazza, Gianpiero Tagliaferri, Avishay Gal-Yam Comments: 19 pages, 8 figures, accepted for publication in ApJ
The GRB060218/SN 2006aj link to Supernova-GRBs blazing and re-brightening by precessing showering Jets
D.Fargion, M.Grossi Comments: 9 pages, 8 figures, Submitted for publication
Deep CFHT Photometric Survey of the Entire M33 Galaxy I: Catalogue of 36000 Variable Point Sources
J.D.Hartman, D.Bersier, K.Z.Stanek, J.-P.Beaulieu, J.Kaluzny, J.-B.Marquette, P.B.Stetson Comments: Submitted to MNRAS. 20 pages, 16 figures. Catalogue and light curves are available at http://www.astro.livjm.ac.uk/~dfb/M33/ animations associated with this paper are available at http://www.cfa.harvard.edu/~jhartman/M33_Movie.html a version of the paper with full-resolution images is available at http://www.astro.livjm.ac.uk/~dfb/M33/M33_fullres.ps.gz
Discovery of VHE gamma-ray emission from 1ES1218+30.4
MAGIC collaboration: J. Albert, E. Aliu, H. Anderhub, P. Antoranz, A. Armada, M. Asensio, C. Baixeras, J. A. Barrio, M. Bartelt, H. Bartko, D. Bastieri, S. R. Bavikadi, W. Bednarek, K. Berger, C. Bigongiari, A. Biland, E. Bisesi, R. K. Bock, T. Bretz, I. Britvitch, M. Camara, A. Chilingarian, S. Ciprini, J. A. Coarasa, S. Commichau, J. L. Contreras, J. Cortina, V. Curtef, V. Danielyan, F. Dazzi, A. De Angelis, R. de los Reyes, B. De Lotto, E. Domingo-Santamaria, D. Dorner, M. Doro, M. Errando, M. Fagiolini, D. Ferenc, E. Fernandez, R. Firpo, J. Flix, M. V. Fonseca, L. Font, N. Galante, M. Garczarczyk, M. Gaug, M. Giller, F. Goebel, D. Hakobyan, M. Hayashida, T. Hengstebeck, D. Hoehne, J. Hose, P. Jacon, O. Kalekin, D. Kranich, A. Laille, T. Lenisa, P. Liebing, E. Lindfors, et al Comments: 5 pages, 4 figures, submitted to ApJ
Gamma-Ray Burst associated Supernovae: Outliers become Mainstream
E. Pian (1,2), P.A. Mazzali (1,2,3), N. Masetti (4), P. Ferrero (5), et al. ((1) INAF-OATs, Italy, (2) KITP-UCSB, CA, (3) MPA, Germany, (4) INAF-IASF, Bologna, Italy, (5) TLS, Germany) Comments: Submitted to Nature
The differing locations of massive stellar explosions
A. S. Fruchter, A. J. Levan, L. Strolger, P. M. Vreeswijk, S. E. Thorsett, D. Bersier, I. Burud, J. M. Castro Cerón, A. Castro-Tirado, C. Conselice, T. Dahlen, H. C. Ferguson, J. P. U. Fynbo, P. M. Garnavich, R. A. Gibbons, J. Gorosabel, T. R. Gull, J. Hjorth, S. T. Holland, C. Kouveliotou, Z. Levay, M. Livio, M. R. Metzger, P. E. Nugent, L. Petro, E. Pian, J. E. Rhoads, A. G. Riess, K. C. Sahu, A. Smette, N. R. Tanvir, R. A. M. J. Wijers, S. E. Woosley Comments: 27 pages, 4 figures, submitted to Nature on 22 August 2005, revised 9 February 2006. Supplementary material referred to in the text can be found at http://www.stsci.edu/~fruchter/GRB/locations/supplement.pdf Report-no: STScI Eprint No. 1718
Challenges in Detecting Gamma-Rays From Dark Matter Annihilations in the Galactic Center
Gabrijela Zaharijas and Dan Hooper Comments: 8 pages, 10 figures Report-no: FERMILAB-PUB-06-048-A
Constraining the Evolution of the Ionizing Background and the Epoch of Reionization with z ~ 6 Quasars II: A Sample of 19 Quasars
Xiaohui Fan, Michael A. Strauss, Robert H. Becker, Richard L. White, James E. Gunn, Gillian R. Knapp, Gordon T. Richards, Donald P. Schneider, J. Brinkmann and Masataka Fukugita Comments: AJ, in press, 58 pages, 15 figures; revisions in section on quasar HII regions
\section{Introduction}
\label{sec:intro}
Melody extraction is the task of estimating the fundamental frequency (F0) of the dominant melody in a music recording.
Automatic melody extraction has been an active topic of research in the literature, since it has many important downstream applications in music analysis and retrieval \cite{salamon14spm,kroher16taslp,bittner2017pitch,beveridge18pm}.
Lately, many deep neural network architectures have been proposed for melody extraction \cite{rigaud2016singing,kum2016melody,bittner17ismir,su2018vocal,lu18ismir}.
The basic idea of such neural network based methods is to use the neural nets to learn the mapping between a matrix that represents the input audio and another matrix that represents the melody line. For the input, it is usually a time-frequency representation such as the spectrogram,
which can be viewed as an $F \times T$ real-valued matrix, where $F$ and $T$ denote the number of frequency bins and time frames, respectively. For the output, it is another $F \times T$ matrix but this time it is a binary matrix indicating the F0 of the melody line for each frame. We only consider music with a single melody line in the music, so at most one frequency bin would be active per frame. It is also possible that there is no melody for some frames. From the training data, we have a number of such input and output pairs. We can use the difference between the target output and the predicted one to train the neural net in a supervised way.
Existing work has shown that using the neural nets to learn the nonlinear mapping between audio and melody leads to promising results. However, there are two issues that require further research. First, as it is easier for a neural net to deal with continuous values, the output of most existing models (if not all) is actually an $F \times T$ real-valued matrix, not a binary one. This is fine for the training stage, since we can still use cost functions such as cross entropy to measure the difference between a real-valued matrix (the estimated one) and a binary matrix (the groundtruth). However, for the testing stage, we still need to \emph{binarize} the output of the neural net. This binarization cannot be easily achieved simply by picking the frequency bin with the maximal activation per frame, because this would lead to false positives for frames that do not have melody. Therefore, most existing methods have to use a threshold whose value is empirically determined in a rather ad-hoc way for binarization \cite{bittner17ismir,lu18ismir}.
The second issue is that existing models that lead to state-of-the-art result in melody extraction benchmark datasets may be overly complicated.
For example, the model presented by Lu and Su \cite{lu18ismir} uses in total 45 convolution or up-convolution layers, using residual blocks for the convolution modules and a sophisticated spatial pyramid pooling layer.
The goal of this paper is to propose a streamlined network architecture that has much simpler structure, and that does not need additional post-processing to binarize the model output. With a simple structure, we can better interpret the function of each layer of the network in generating the final result. We hope that the network can have accuracy that is comparable with, if not superior to, the state-of-the-art models.
We make two technical contributions to realize this. First, following Lu and Su \cite{lu18ismir}, we use an encoder/decoder architecture
to learn the audio-to-melody mapping. But, while they use the skip connections to pass the output of the convolution layers of the encoder to the up-convolution layers of the decoder, we propose to add links between the pooling layers of the encoder and the un-pooling layers of the decoder, and pass along the ``pooling indices''\cite{segnet}. While the skip connections they use provide short paths for gradient propagation, there are no trainable weights in the pooling and un-pooling layers. We argue from a functional point of view that our method makes it easier for the model to localize the melody.
Second, we propose to use the bottleneck layer of the network to estimate the existence of melody per frame, and design a way such that we can simply use argmax to binarize the output.
The final model has in total only 7 convolution or up-convolution layers.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{compare_v4.png}
\caption{Comparison of the network architecture of SF-NMF-CRNN \cite{basaran2018CRNN}, DSM \cite{BittnerDeepSalience17}, Lu \& Su's model \cite{lu18ismir}, and the proposed model. (Notation---`E': encoder, `D': decoder.)}
\label{fig:comparison}
\end{figure}
\section{Related work}
\label{sec:related}
We show in Fig. \ref{fig:comparison} the network architectures of three previous methods that are proposed lately.
The first one is the deep salience model (DSM) proposed by Bittner \emph{et al.} \cite{bittner17ismir}. It uses a convolutional neural network (CNN) that takes a time-frequency representation of music as the input, and generates a salience map as output for estimating the melody.
Finally, they apply a threshold to the salience map to get the binary melody estimate.
The second one is the SF-NMF-CRNN model proposed by Basaran \emph{et al.} \cite{basaran2018CRNN}.
Instead of thresholding, it learns recurrent and dense layers to binarize the frequency map.
Another model presented by Lu and Su \cite{lu18ismir}, which is based on the DeepLabV3+ model \cite{Chen2018DeepLabV3+}, shows that better result for vocal melody extraction can be obtained by an encoder/decoder architecture with skip connections.
This model also uses thresholding to binarize the result.
Thresholding is used in many music-related tasks.
It can be done with a fixed threshold, an adaptive threshold \cite{lu18ismir}, or
other advanced methods \cite{dong18ismir,southall18ismir}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{model_v3_1.png}
\caption{Details of the proposed model. We
detect non-melody activity as a sub-target at the bottleneck layer and concatenate it with the output of the decoder, the salience frequency map.}
\label{fig:model}
\end{figure}
\section{Proposed Model}
\label{sec:Method}
The system overview is given in Fig. \ref{fig:model}. It has a simple encoder/decoder architecture. For
the encoder, we use three convolution layers and three max pooling layers. The output of the encoder is taken as the input by two separate branches of layers. The first branch is simply the decoder that uses three up-convolution layers and three un-pooling layers to estimate the
salience frequency map.
The second branch uses one convolution layer to estimate
the existence of melody per frame, leading to the ``non-melody'' estimate in the bottom.
Finally, the salience map and the non-melody estimate are then concatenated (along the frequency axis),
after which we get a binary-valued estimate of the melody line with a simple softmax layer.
We give more details of the network below.
\subsection{Model Input}
While the model can take any audio representation as the input, we choose to use the Combined Frequency and Periodicity (CFP) representation \cite{su2015combining}.
It contains three parts: the power-scaled spectrogram, generalized cepstrum (GC) \cite{kobayashi1984spectral,tokuda1994mel}
and generalized cepstrum of spectrum (GCoS) \cite{su2017HSP_DNN}.
The latter two are \emph{periodicity} representations that have been shown useful to multi-pitch estimation (MPE) \cite{peeters2006music}.
Given $\mathbf{X}$, the magnitude of the short-time Fourier transform (STFT) of an input signal, GC and GCoS can be computed as:
\begin{align}
\mathbf{Z}_{\text{S}}[k,n]&:=\sigma_{0}\left(\mathbf{W}_f\mathbf{X}\right)\,, \label{eq: specs}\\ \mathbf{Z}_{\text{GC}}[q,n]&:=\sigma_{1}\left(\mathbf{W}_t\mathbf{F}^{-1}\mathbf{Z}_{\text{S}}\right)\,, \label{eq: ceps} \\
\mathbf{Z}_{\text{GCoS}}[k,n]&:=\sigma_{2}\left(\mathbf{W}_f\mathbf{F}\mathbf{Z}_{\text{GC}}\right)\,, \label{eq: gcos}
\end{align}
where $\mathbf{W}_f$ and $\mathbf{W}_t$ are high-pass filters for removing the DC terms, $\mathbf{F}$ is a DFT matrix, and the $\sigma_i$ are activation functions \cite{su2015combining}.
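A rough NumPy sketch of this pipeline is given below; the power-scaling exponents and the high-pass cutoffs are illustrative guesses, not the exact values used in our system:

```python
import numpy as np

def relu_pow(z, gamma):
    # rectify the real part, then apply power scaling: sigma_i(x) = max(x, 0)^gamma
    return np.maximum(z.real, 0.0) ** gamma

def cfp(X, gammas=(0.24, 0.6, 1.0), k_f=5, k_t=3):
    """X: magnitude STFT, shape (F, T). Returns (Z_S, Z_GC, Z_GCoS).
    The exponents `gammas` and cutoffs `k_f`, `k_t` are illustrative only."""
    Z_s = relu_pow(X, gammas[0])
    Z_s[:k_f, :] = 0.0                              # W_f: drop DC / lowest bins
    Z_gc = relu_pow(np.fft.ifft(Z_s, axis=0), gammas[1])
    Z_gc[:k_t, :] = 0.0                             # W_t: drop low quefrencies
    Z_gcos = relu_pow(np.fft.fft(Z_gc, axis=0), gammas[2])
    Z_gcos[:k_f, :] = 0.0
    return Z_s, Z_gc, Z_gcos
```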
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{demo_v4_1.png}
\caption{An illustration of our model in action.
For time frames with melody notes, their values would be close to 1 (bright yellow) in the output of the last encoding layer (\textcircled{\raisebox{-0.9pt}{3}}), but close to 0 (dark blue) in the output of the non-melody detector (\textcircled{\raisebox{-0.9pt}{4}}). We ``reverse the bits" here because we can then concatenate \textcircled{\raisebox{-0.9pt}{4}} with \textcircled{\raisebox{-0.9pt}{6}} so that a simple argmax on \textcircled{\raisebox{-0.9pt}{7}} can tell us whether there is a melody note and where it is. }
\label{fig:demo}
\end{figure}
\subsection{Encoder and Decoder}
The design of the encoder and decoder represents the first technical contribution of this work.
As depicted in Fig. \ref{fig:model}, we use simple convolution/up-convolution and pooling/un-pooling layers in our model. Moreover, we
pass the pooling indices between the pooling and un-pooling layers.
The design is motivated by SegNet \cite{segnet}, a state-of-the-art model for semantic pixel-wise segmentation of images. We found that melody extraction is similar to image segmentation in that both tasks require learning the mapping between a real-valued, dense matrix
and a binary-valued, relatively sparser matrix.
For melody extraction, the target output is indeed sparse---we have at most one active entry per column (i.e., per time frame). Therefore, we adopt the idea of SegNet and use pooling indices to inform the un-pooling layers of the exact entries picked by the pooling layers in the encoding process.
This makes it easier for the decoder to localize the melody in frequency.
This is illustrated in Fig. \ref{fig:demo}.
In each convolution block, we use only one convolution layer with batch normalization and scaled exponential linear units (SELU) \cite{Klambauer17arxiv} as the activation function. The convolution kernel size is (5,5) with padding size (2,2) and stride size (1,1).
For the max-pooling layer, we use kernel size (4,1) and pool only along the frequency dimension. The feature map at the bottleneck of the network is a $128 \times T$ matrix.
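To make the pooling-index mechanism concrete, here is a small NumPy sketch (an illustration of the bookkeeping only, not our PyTorch implementation) that pools along frequency with kernel size 4 and later scatters the pooled values back to the exact bins the encoder selected:

```python
import numpy as np

def max_pool_freq(x, k=4):
    # pool along frequency (rows) with stride k, remembering argmax indices
    F, T = x.shape
    xr = x[: F - F % k].reshape(F // k, k, T)
    idx = xr.argmax(axis=1)            # which of the k bins won, per group/frame
    return xr.max(axis=1), idx

def max_unpool_freq(p, idx, k=4):
    # scatter pooled values back to the exact bins chosen during encoding
    Fp, T = p.shape
    out = np.zeros((Fp * k, T))
    rows = idx + np.arange(Fp)[:, None] * k
    out[rows, np.arange(T)[None, :]] = p
    return out
```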
\subsection{Non-melody Detector and ArgMax Layer}
The design of the non-melody detector represents the second technical contribution of this work.
As depicted in Fig. \ref{fig:model}, we learn one additional convolution layer that converts the $128 \times T$ matrix into a $1 \times T$ vector.
This vector is then concatenated with the salience map to make an $(F+1) \times T$ matrix, where the last row corresponds to this vector (see Fig. \ref{fig:demo} for an illustration).
We then use the argmax function to pick the entries with the maximal value per time frame and return the melody line with the following rule---\emph{if the argmax is the $F+1$ entry for a frame, we consider that there is no melody for that frame}.
In this way, the output of the model is an $F \times T$ binary matrix with only one or no active entry per frame.
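The decoding rule can be sketched in NumPy as follows (here $-1$ is merely an illustrative marker for frames estimated to have no melody):

```python
import numpy as np

def decode(salience, nonmelody_row):
    """salience: (F, T) decoder output; nonmelody_row: (T,) detector output.
    Returns, per frame, the winning frequency bin, or -1 for 'no melody'."""
    stacked = np.vstack([salience, nonmelody_row[None, :]])  # (F+1, T)
    best = stacked.argmax(axis=0)
    F = salience.shape[0]
    return np.where(best == F, -1, best)
```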
In the model training process, the model output would be compared with the groundtruth output to calculate the loss and to update the network parameters. Therefore, according to our design, the convolution layer we just mentioned would be ``forced'' to learn whether there is a melody for each frame. Moreover, the frames without melody would tend to have high activation (close to `1'), whereas those with melody would have low activation (close to `0'), as shown in Fig. \ref{fig:demo}. This is why we call this branch the non-melody detector.
We can view the non-melody detector as a singing voice detector \cite{lehner14icassp, schlueter15ismir,Danieljointdetect} when the task is to detect the vocal melody.
But, our design is for general melody extraction, not only for vocal melody extraction.
The argmax layer is significant in that we do not need a separate, postprocessing step to discern melody/non-melody frames and to binarize the model output. The non-melody detection and binarization are built-in and trained together with the rest of the network to optimize the accuracy of melody extraction.
To the best of our knowledge (see also Section \ref{sec:related}), there is no such model in the literature.
The argmax layer is not a general solution for all music-related tasks that require binarization. For example, in MPE \cite{su2015combining,peeters2006music} there are usually multiple active entries per frame.
\subsection{Model Update}
While Lu and Su \cite{lu18ismir} use the
focal loss \cite{Lin2017focal_loss} to deal with the sparsity of melody entries, we find our model works well with a simple loss function---the binary cross entropy between the estimated melody and the groundtruth one. Model update is done with mini-batch stochastic gradient descent (SGD) and the Adam optimizer.
The model is implemented using PyTorch.
For reproducibility, we share the source code at
\url{https://github.com/bill317996/Melody-extraction-with-melodic-segnet}.
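For reference, the binary cross entropy we minimize can be sketched as follows (in practice one would use the framework's built-in, numerically stable implementation):

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    # element-wise binary cross entropy, averaged over the output map
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())
```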
\section{Experiment}
\label{sec:exp_setup}
\subsection{Experimental Setup}
We evaluate the proposed method on general melody extraction for one dataset, and on vocal melody extraction for three datasets.
For \textbf{general melody extraction}, we use the MedleyDB dataset \cite{bittner14ismir}. Specifically, we use the ``melody2'' annotation, which is the F0 contours of the melody line drawn from multiple sound sources. Following \cite{basaran2018CRNN}, among the 108 annotated songs in the dataset, we use 67 songs for training, 14 songs for validation and 27 songs for testing.
For \textbf{vocal melody extraction}, we use the MIR-1K dataset
\footnote{https://sites.google.com/site/unvoicedsoundseparation/mir-1k}
and a subset of MedleyDB for training. The former contains 1,000 Chinese karaoke clips, whereas the latter contains 48 songs where the vocal track represents the melody.
The testing data are from three datasets: 12 clips from ADC2004, 9 clips from MIREX05,\footnote{https://labrosa.ee.columbia.edu/projects/melody/} and 12 songs
from MedleyDB.
We set the training and testing splits of MedleyDB according to \cite{lu18ismir}.
There is no overlap between the two splits.
We compare the performance of our model with the three state-of-the-art deep learning based methods \cite{BittnerDeepSalience17,basaran2018CRNN,lu18ismir} described in Section \ref{sec:related}. Moreover, to validate the effectiveness of the non-melody detector branch, we implement an \emph{ablated} version of our model that removes the non-melody detector. For binarization of this method, we run a grid search to find the optimal threshold value using the validation set.
Following the convention in the literature, we use the following metrics for performance evaluation: overall accuracy
(OA), raw pitch accuracy (RPA), raw chroma accuracy
(RCA), voicing recall (VR) and voicing false alarm (VFA).
These metrics are computed by the \texttt{mir\_eval} \cite{mireval} library with the default setting---e.g., a pitch estimate is considered correct if it is within 50 cents of the groundtruth one.
Among the metrics, OA is often considered more important.
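As an illustration, RPA can be restated in a few lines of NumPy (a simplified sketch; mir_eval remains the canonical implementation, e.g., for its handling of voicing and negative-frequency conventions):

```python
import numpy as np

def raw_pitch_accuracy(ref_hz, est_hz, tol_cents=50.0):
    """Fraction of voiced reference frames whose estimate is within
    tol_cents of the reference F0 (0 Hz marks an unvoiced frame)."""
    ref = np.asarray(ref_hz, float)
    est = np.asarray(est_hz, float)
    v = ref > 0
    if not v.any():
        return 0.0
    e = np.where(est[v] > 0, est[v], np.nan)       # unvoiced estimate: a miss
    cents = 1200.0 * np.abs(np.log2(e / ref[v]))   # NaN compares as False below
    return float((cents <= tol_cents).sum()) / v.sum()
```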
To adapt to different pitch ranges required in vocal and general melody extraction, we use different hyperparameters in computing the CFP for our model. For vocal melody extraction, the number of frequency bins is set to 320, with 60 bins per octave, and the frequency range is from 31 Hz (\texttt{B0}) to 1250 Hz (\texttt{D\#6}). For general melody extraction, the number of frequency bins is set to 400, with 60 bins per octave, and the frequency range is from 20 Hz (\texttt{E0}) to 2048 Hz (\texttt{C7}).
Moreover, since we use more frequency bins for general melody extraction, we increase the filter size of the third pooling layer of the encoder from (4,1) to (5,1) for this task.
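These bin counts follow directly from the stated frequency ranges and resolution; a quick sanity check:

```python
import numpy as np

def n_bins(fmin, fmax, bins_per_octave=60):
    # number of log-spaced frequency bins spanning [fmin, fmax)
    return int(np.floor(bins_per_octave * np.log2(fmax / fmin)))

print(n_bins(31.0, 1250.0))   # 320 -- the vocal-melody setting
print(n_bins(20.0, 2048.0))   # 400 -- the general-melody setting
```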
We use 44,100 Hz sampling rate,
2,048-sample window size, and 256-sample hop size for computing the STFT.
Moreover, to facilitate training the model with mini-batches, we divide the training clips into fixed-length segments of $T=256$ frames, which is nearly 1.5 seconds.
According to our implementation, the model training can converge within 20 minutes with a single GTX1080ti GPU.
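The segmentation step can be sketched as follows (dropping the incomplete tail segment is one simple choice here; zero-padding would also work):

```python
import numpy as np

def to_segments(X, T=256):
    # chop an (F, N) feature map into training segments of T frames each
    F, N = X.shape
    n = N // T
    return X[:, :n * T].reshape(F, n, T).transpose(1, 0, 2)  # (n, F, T)
```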
\vspace{-2mm}
\subsection{Result}
Table \ref{tab:mdb main} first lists the performance of vocal melody extraction for three datasets.
We see that the proposed model compares favorably with DSM \cite{BittnerDeepSalience17} and Lu \& Su's model \cite{lu18ismir}, leading to the highest OA for the ADC 2004 and MedleyDB datasets.
In particular, the proposed model outperforms the two prior arts greatly for
MedleyDB, the most challenging dataset among the three.
We also see that the proposed method outperforms DSM in VFA consistently across the three datasets, meaning that our model leads to fewer false alarms. This may be attributed to the built-in non-vocal detector.
The bottom of Table \ref{tab:mdb main} shows the result of general melody extraction. The proposed method outperforms DSM \cite{BittnerDeepSalience17} and compares favorably with CRNN \cite{basaran2018CRNN}.
In general, this suggests that our simple model is effective for both vocal melody and general melody extraction.
A closer examination of the results reveals that, compared to existing methods, our model is relatively weaker in the two pitch-related metrics, RPA and RCA, especially for MedleyDB.
For example, our model suffers from high-frequency noise and sporadically makes wrong predictions.
Detailed error analysis of our model can be found in our GitHub repo.
Table \ref{tab:mdb main} also shows that our model outperforms its ablated version almost consistently across the five metrics and the four datasets, validating the effectiveness of the non-melody detector.
Although not shown in the table, we have implemented another ablated version of our model that replaces CFP with the constant-Q transform (CQT). This would decrease the OA by about 10\% for vocal melody extraction.
\begin{table}
\centering
\begin{tabular}{|l|rrrrr|} \hline
\multicolumn{6}{|l|}{
\textbf{ADC2004 (vocal melody)}}\\
Method & VR$\uparrow$ & VFA$\downarrow$ & RPA$\uparrow$ & RCA$\uparrow$ & OA$\uparrow$\\
\hline
DSM \cite{BittnerDeepSalience17} & \textbf{92.9} & 50.5 & 77.1 & 78.8 & 70.8 \\
Lu \& Su's \cite{lu18ismir} & 73.8 & \textbf{3.0} & 71.7 & 74.8 & 74.9\\
ours & 91.1 & 19.2 & \textbf{84.7} & \textbf{86.2} & \textbf{83.7} \\
ours (ablated)&74.3 & 6.1 & 72.0 & 75.6 & 75.1\\
\hline
\hline
\multicolumn{6}{|l|}{\textbf{MIREX05 (vocal melody)}}\\
DSM \cite{BittnerDeepSalience17} &\textbf{ 93.6} & 42.8 & 76.3& 77.3 & 69.6 \\
Lu \& Su's \cite{lu18ismir} & 87.3 & \textbf{7.9} & \textbf{82.2} & \textbf{82.9} & \textbf{85.8} \\
ours&84.9& 13.3& 75.4&76.6& 79.5\\
ours (ablated)&71.9& 12.6& 66.3& 67.8& 73.8\\
\hline
\hline
\multicolumn{6}{|l|}{\textbf{MedleyDB (vocal melody)}}\\
DSM \cite{BittnerDeepSalience17} & \textbf{88.4} & 48.7 &\textbf{ 72.0} & \textbf{74.8} & 66.2 \\
Lu \& Su's \cite{lu18ismir}& 77.9 & 22.4 & 68.3 & 70.0 & 70.0 \\
ours&73.7& \textbf{13.3}& 65.5& 68.9& \textbf{79.7}\\
ours (ablated)&62.1& 14.1& 53.1& 58.8& 68.4\\
\hline
\hline
\multicolumn{6}{|l|}{\textbf{MedleyDB (general melody)}} \\
DSM \cite{BittnerDeepSalience17} & 60.9 & \textbf{24.3}& \textbf{75.1} & 69.2 & 61.7\\
CRNN \cite{basaran2018CRNN} & 69.8 & 31.0 & 71.4 & \textbf{76.5} &\textbf{ 64.3}\\
ours&\textbf{70.9}& 26.2& 57.2& 62.5& \textbf{64.3}\\
ours (ablated)& 66.5& 27.1& 53.3& 58.6& 59.8\\
\hline
\end{tabular}
\caption{Experiment results on several datasets. The ablated version of our model does not use the non-melody detector. The arrow next to each of the five performance metrics indicates whether the result is the higher or the lower the better. Please visit our github repo for the standard deviation values.}
\label{tab:mdb main}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
We have introduced a streamlined encoder/decoder architecture designed for melody extraction.
It employs only seven convolution or up-convolution layers.
Thanks to the built-in non-melody detector, no further post-processing of the result is needed.
The code is publicly available, and we hope it will contribute to other music tasks as well.
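For readers who want to reproduce the numbers in the table, the five scores are standard frame-wise melody-extraction metrics. The sketch below is a hypothetical helper, not the authors' evaluation code (mir\_eval is the reference implementation), and it omits RCA, which additionally forgives octave errors:

```python
import numpy as np

def melody_metrics(ref_hz, est_hz, tol_cents=50.0):
    """Frame-wise melody-extraction scores (illustrative helper only).
    ref_hz, est_hz: per-frame F0 in Hz, with 0 meaning 'no melody'."""
    ref_v = ref_hz > 0                       # reference voicing
    est_v = est_hz > 0                       # estimated voicing
    vr = np.sum(ref_v & est_v) / max(np.sum(ref_v), 1)     # Voicing Recall
    vfa = np.sum(~ref_v & est_v) / max(np.sum(~ref_v), 1)  # Voicing False Alarm
    both = ref_v & est_v                     # frames where pitch is comparable
    cents = 1200 * np.abs(np.log2(est_hz[both] / ref_hz[both]))
    correct = cents < tol_cents              # within half a semitone
    rpa = np.sum(correct) / max(np.sum(ref_v), 1)          # Raw Pitch Accuracy
    # Overall Accuracy: correct pitch on voiced frames plus correctly
    # detected unvoiced frames, over all frames.
    oa = (np.sum(correct) + np.sum(~ref_v & ~est_v)) / len(ref_hz)
    return vr, vfa, rpa, oa
```

Because OA rewards correctly rejected non-melody frames, the built-in non-melody detector directly improves OA, which is consistent with the ablation rows in the table.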
# KMS weights on groupoid C*-algebras, with an emphasis on graph C*-algebras

By Johannes Christensen
PhD Dissertations, September 2018
https://math.au.dk/en/research/publications/publication-series/publication/publid/1127/

Abstract:

In this thesis we study KMS weights on groupoid $C^{*}$-algebras. Our first objective is to study the structure of KMS weights on étale groupoid $C^{*}$-algebras. We extend a classical theorem by Neshveyev to KMS weights, allowing us to divide the description of KMS weights into the description of certain quasi-invariant measures and measurable fields of tracial states. We then investigate the properties of these measures and measurable fields of tracial states.

Afterwards we use the insight developed in this general description of KMS weights to consider the case where the groupoid $C^{*}$-algebra is the $C^{*}$-algebra of a higher-rank graph or a directed graph. This leads to a description of the KMS states for the gauge actions on the Toeplitz and Cuntz-Krieger algebras of finite higher-rank graphs. Jointly with Klaus Thomsen, we also describe the KMS states for generalised gauge actions on finite digraphs, give a partial description of KMS weights for generalised gauge actions on Cayley graphs, and establish a connection between diagonal weights and diagonal actions.
\section{INTRODUCTION}
A semimetal-insulator transition in graphite, discovered in the early 1980s (Ref.~[\onlinecite{Tanuma1981}]),
has recently regained much attention~\cite{PhysRevLett.110.266601,JPhysSocJpn.84.054709,PhysRevLett.103.116802,PhysRevLett.119.136601,SciRep.7.1733,CamargoEscoffier2018}.
This transition is induced by high magnetic fields of $B \simeq 30$ T along the $c$-axis (out-of-plane direction) at low temperatures.
Reflecting a low carrier density,
only four quasi-one-dimensional Landau subbands [$(n=0,\uparrow), (n=0,\downarrow), (n=-1,\uparrow)$, and $(n=-1, \downarrow)$] remain at the Fermi level under high magnetic fields~\cite{PhysRev.109.272,PhysRev.119.606,JPhysSocJpn.17.808} [the so-called quasiquantum limit; see red lines in Fig.~\ref{f1}(b)],
which should be responsible for this electronic phase transition.
Taking the quasi-one-dimensionality and the electron-electron interaction into account,
Yoshioka and Fukuyama proposed the exotic density-wave state, \textit{valley-density wave} (VDW) state~\cite{JPhysSocJpn.50.725}, as illustrated in the following.
Graphite has two energetically equivalent band dispersions (so-called valleys) along H-K-H and H$^{\prime}$-K$^{\prime}$-H$^{\prime}$ lines in the reciprocal lattice ($k$) space,
which form an electron Fermi pocket around the K (K$^{\prime}$) point, and a hole Fermi pocket around H (H$^{\prime}$) point,
as can be seen in Fig.~\ref{f1}(a).
If we focus on one valley (e.g., H-K-H),
it forms a $2k_F$-type charge-density wave (CDW) along the $c$ axis direction under high magnetic fields along the $c$-axis.
In the counterpart valley (e.g., H$^{\prime}$-K$^{\prime}$-H$^{\prime}$),
a CDW also forms, but in antiphase to that of the first valley.
This means that, in total, the VDW has no spatial modulation of the carrier density, which cancels out the direct Coulomb repulsion;
the state is analogous to the spin-density wave (SDW) if we read the spin degrees of freedom as valley ones.
Although there are some differences in detail,
all subsequent theories support the formation of the density-wave state~\cite{PhysicaB.201.384,PhysRevB.29.6722,JPhysCondensMatter.10.11315}.
On the other hand,
experimental verification of the density-wave state was a challenging problem.
First, if the ordered state is VDW,
it is impossible to directly observe it by utilizing, for example, x rays, since the spatial charge modulation should be absent or negligibly small.
Another common way of investigating the density-wave state is to detect non-Ohmic transport.
The nonlinearity was actually found in the in-plane~\cite{PhysRevLett.54.1182} and out-of-plane transport~\cite{JPhysSocJpn.68.181},
but its broad transition from a low conducting state to a high conducting state was ambiguous evidence for the sliding motion of the density wave.
In addition, it is not clear how to understand the in-plane transport results in the scenario of the density-wave standing along the out-of-plane direction.
It is noteworthy that swift neutron irradiation in graphite crystal was successful in controlling the phase boundary~\cite{JPhysSocJpn.68.1300,PhysicaB.298.546,JPhysConfSer.150.022099,JPhysCondensMattter.21.344207}.
In those experiments,
the transition line around $B\simeq 30$ T in the phase diagram shifted to higher magnetic fields almost in a parallel manner with the introduction of disorders.
This trend is basically understood by applying the theory of the ``pair-breaking effect,'' which is well-known in superconductivity~\cite{SolidStateCommun.52.975}.
This agreement manifests that some kind of pairing state is involved in the transition,
whereas it is difficult to provide a comprehensive interpretation of the formation of the density-wave state owing to concomitant carrier doping.
Recent discovery of a new electronic phase above $B > 53$ T offers a more confusing problem~\cite{PhysRevLett.110.266601}.
According to the Slonczewski-Weiss-McClure model,
which is known to accurately reproduce the band structure deduced from the quantum oscillations~\cite{PhysRevLett.102.166403},
the ($n=0, \uparrow$) subband escapes from the Fermi level at $B=53$ T (Refs.~[\onlinecite{PhysicaB.256.621,PhysicaB.201.384,JPhysCondensMattter.21.344207}]).
Therefore, it was believed that the anomalous electronic state will exist only between $B \simeq 30$ T and 53 T,
as the $(n=0,\uparrow)$ Landau subband is believed to be responsible for the density-wave formation in Ref.~[\onlinecite{JPhysSocJpn.50.725}].
In fact, the behavior of the in-plane resistivity seems to reenter the normal metallic state at 53 T (Ref.~[\onlinecite{PhysicaB.256.621}]).
However, according to Fauqu\'{e} \textit{et al.},
another high-resistivity state was found above 53 T by a longitudinal transport measurement ($R_{zz}$),
and the reentry of the conducting state needs to be as large as $B = 75$ T (Ref.~[\onlinecite{PhysRevLett.110.266601}]).
The authors proposed a new sequence of the Landau subband detachment to qualitatively explain the phenomena,
but there is no clear consensus as to that scenario so far~\cite{JPhysSocJpn.84.054709,PhysRevLett.119.136601}.
Even if other subbands are responsible for the phase transition,
a reasonable explanation for the anisotropic conducting state in the new phase,
and the reason for the location of the endpoint at $B=75$ T, are absent.
To unveil the true evolution of the electronic state under a high magnetic field,
it is significant to prove the electronic state as the density-wave state between $B \simeq 30$ T and 53 T.
In this study, we investigate the thickness dependence of the semimetal-insulator transition at $B \simeq 30$ T.
According to basic solid-state physics,
the interval of $k$ points along the $z$-direction $k_z$ ($z||c$), $\Delta k_z$, is written as $\Delta k_z = 2\pi/d$, where $d$ is the thickness of the system.
If $d$ is sufficiently large compared with the lattice constant $c$,
the dispersion can be regarded as continuous [red lines in Fig.~\ref{f1}(b)].
With a reduction of thickness $d$,
the dispersion is no longer continuous owing to the quantum size effect,
as shown by blue markers in Fig.~\ref{f1}(b).
If some nesting vector of $q_z$ (the vector connecting $k_z$ points) is responsible for the phase transition in bulk graphite,
the formation of the density-wave state tends to be inhibited in the thin-enough sample owing to the sparseness of the $k_z$ points.
In fact, neither mono- nor bilayer system (graphene) shows an insulator transition in high magnetic fields.
Suppose that a Landau-subband bandwidth of a few tens of millielectronvolts is divided into a hundred points;
the resulting energy spacing is a few kelvin,
which is comparable to the phase-transition temperature.
Therefore, this level-spacing effect is expected to appear in systems on the order of a hundred unit cells thick (roughly 70 nm).
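The level-spacing estimate above can be checked with a quick back-of-envelope calculation. The numbers below are assumed round values, not measured ones: a subband bandwidth $W \sim 40$ meV and the graphite $c$-axis lattice constant $c = 0.67$ nm:

```python
# Back-of-envelope check of the level-spacing argument (assumed round numbers).
W_meV = 40.0            # assumed Landau-subband bandwidth
c_nm = 0.67             # graphite c-axis lattice constant
k_B_meV_per_K = 0.0862  # Boltzmann constant in meV/K

def level_spacing_K(thickness_nm):
    """Typical energy spacing (in kelvin) when a subband of width W is
    sampled at N = d/c discrete k_z points."""
    n_points = thickness_nm / c_nm
    return (W_meV / n_points) / k_B_meV_per_K

# A ~70 nm film (about a hundred unit cells) gives a spacing of a few kelvin,
# comparable to the transition temperature, as argued in the text.
```

Halving the thickness doubles the spacing, so the quantum size effect sets in rapidly below roughly a hundred unit cells.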
We note that, in contrast to the neutron irradiation experiment,
this method is expected not to introduce additional disorders or carriers,
which is an advantage of the simple interpretation.
In fact, we confirmed it in our 80-nm-thick film through the evaluation of the residual resistivity ratio (RRR) and Dingle temperature ($T_D$).
The higher RRR and the smaller $T_D$ indicate high purity of samples.
The observed values were RRR $\simeq 6$ and $T_D = 3-7$ K, respectively.
These values are reasonably in good agreement with those in bulk samples (RRR $>10$ and $T_D = 0.5-4$ K, respectively~\cite{Carbon.36.1671,PhysRev.134.A453}).
Although some cracks may be introduced during the mechanical exfoliation process,
these results indicate that the quality of our samples remains reasonable even after exfoliation.
In this study, by comparing the critical magnetic field $B_c$ for different thickness samples,
we found that the magnetic-field-induced phase becomes unstable for thin-film samples.
\begin{figure}
\includegraphics[width=8cm]{Fig1.eps}
\caption{(a) Schematic view of Brillouin zone and Fermi surfaces at $B=0$ in graphite. Electron pockets and hole pockets are formed around K (K$^{\prime}$) and H (H$^{\prime}$) points, respectively. Valleys along H-K-H and H$^{\prime}$-K$^{\prime}$-H$^{\prime}$ lines are energetically degenerated. The size of Fermi pockets is exaggerated for clarity. (b) Calculated Landau subbands in graphite under magnetic field $B = 30$ T along the $c$-axis. Only four Landau subbands are at the Fermi level $\varepsilon_F$. Calculation is based on the Slonczewski-Weiss-McClure model with $\gamma_3 = 0$ (Refs.~\onlinecite{PhysRev.109.272,PhysRev.119.606,JPhysSocJpn.17.808}). Red lines indicate the dispersions for the bulk (thick enough) system, and blue markers are for the thin-film system. Width of horizontal light blue and light red bars indicate $k_B T$ at low and high temperatures, respectively (see main text). The density-wave state characterized by $q_z=2k_F$ is expected to appear in the bulk system, while it is expected to be unstable in the thin-film system owing to the sparse $k_z$ states.}
\label{f1}
\end{figure}
\section{EXPERIMENTAL METHODS}
Thin-film graphite samples were obtained by mechanical exfoliation from Kish graphite crystals and were transferred onto the silicon substrate, in the same manner as the original graphene preparation~\cite{Science.306.666,Nature.438.197}.
Here, insulating silicon substrates were utilized in order to avoid heating by the eddy current under the pulsed magnetic field.
The thickness of each microcrystal on the substrate was identified by atomic-force microscopy,
in which we selected the flat surface samples.
The typical dimensions of the microcrystal were $50\times 50\times 0.1 \;\mu$m${^3}$.
The electrical contacts for in-plane resistance measurements were formed by standard electron-beam lithography and vacuum evaporation of gold.
High magnetic fields were generated by our portable nondestructive pulse magnet system,
which consists of home-wound coil cooled by liquid nitrogen,
a capacitor bank with a maximum charge energy of 20 kJ,
and a helium cryostat with a lowest temperature of 1.6 K.
The highest magnetic field reached $B \simeq 40$ T at a duration of 10 ms.
Because the time dependence of the magnetic field was very steep and noisy in the ascending branch,
we only show the descending branch (see Appendix~\ref{AppendixNumLockin} for details).
The in-plane resistance ($R$) was measured by applying a small ac electric current ($I_{\textrm{ac}}$) at a frequency of 25 kHz under magnetic fields ($B$) along the $c$-axis (perpendicular to the plate) at low temperatures ($T$).
The resistance was determined by the numerical lock-in technique (see Appendix~\ref{AppendixNumLockin} for details).
\section{RESULTS}
Figures~\ref{f2}(a) and \ref{f2}(b) show the magnetic-field dependence of the in-plane resistance at several temperatures for samples with $d = 173$ nm and 80 nm, respectively.
Both samples show trends similar to those of the bulk sample.
Namely,
a large magnetoresistance appears up to 10 T, concomitant with clear Shubnikov de-Haas oscillations,
followed by a negative magnetoresistance between 10 T and 30 T.
A sharp transition to the insulating state can be observed at around 30 T.
These results indicate that both samples can be viewed as three-dimensional systems.
In fact, mono- and bilayer graphenes show different sequences of the Shubnikov de-Haas oscillations, and the semimetal-insulator transition is absent at around 30 T (Ref.~[\onlinecite{NewJPhys.12.083006}]).
With decreasing temperatures,
the transition rapidly shifts to lower magnetic fields.
This temperature dependence of the transition is qualitatively the same as that of the bulk result~\cite{PhysicaB.256.621}.
On the other hand, we can see some differences between the two samples.
In the thinner sample, (i) the value of the critical magnetic field $B_c$ shifts higher, and (ii) the temperature dependence of $B_c$ becomes small.
These characteristics are clearly visualized in the $B-T$ phase diagram,
as shown in Fig.~\ref{f3}(a).
For comparison,
that of the bulk system is also shown,
and is taken from Ref.~[\onlinecite{JPhysSocJpn.68.1300}].
When the thickness is reduced,
the phase boundary line between the semimetal and insulating states (i) shifts to higher magnetic fields,
and (ii) the slope of it becomes steeper.
The second trend is in stark contrast to the phase diagram found in neutron irradiated graphite,
where the boundary almost shifts to higher fields in a parallel manner.
The difference probably comes from an introduction of disorders and charge carriers.
We note that a previous report for a 130-nm sample~\cite{CurrApplPhys.7.338} does not contradict our results,
although the applied magnetic fields are not sufficient to determine the transition in that measurement.
Recently, the transition was observed at $B = 38$ T in the highly-oriented pyrolytic graphite (HOPG) sample with $d = 35$ nm at $T = 4.2$ K.
This result is consistent with our phase diagram.
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig2.eps}
\caption{In-plane resistance as a function of magnetic field along the $c$ axis in (a) 173-nm-thick and (b) 80-nm-thick graphite at $T = 1.6, 2.0, 3.0, 4.2$, and 5.4 K. Several dip structures up to 10 T are Shubnikov-de Haas oscillations. In both samples, the critical magnetic field of the semimetal-insulator transition increases with elevating temperatures. Critical fields in thinner samples are higher, and show small temperature dependence.}
\label{f2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{Fig3a.eps}
\includegraphics[width=8cm]{Fig3b.eps}
\caption{(a) Phase boundary of the semimetal-insulator transition in $B-T$ plane for each graphite sample, obtained from Fig.~\ref{f2}. The bulk line is taken from Ref.~[\onlinecite{JPhysSocJpn.68.1300}]. By reducing sample thickness, the phase boundary shifts to higher magnetic fields, and the slope becomes steep. (b) Simulated phase boundaries for several thicknesses. Two characteristics of the thinner system are qualitatively reproduced.}
\label{f3}
\end{figure}
\section{DISCUSSION}
The presence of thickness dependence implies that an ordered state along the out-of-plane direction evolves.
To examine whether this ordered state is attributable to the formation of the density-wave state,
we calculated the thickness dependence of $B_c$ in the simple density-wave state model~\cite{RevModPhys.60.1129}.
In the case of a quasi-quantum limit [only four spin-split Landau subbands of ($n=0,\uparrow \downarrow$) and ($n=-1,\uparrow \downarrow$) are at the Fermi level],
the density-response function $\chi (\textbf{q}) = \rho (\textbf{q})/V (\textbf{q})$ can be evaluated by
\begin{eqnarray}
\lefteqn{ \chi^{(0)}(q_x=0, q_y=0, q_z)} \nonumber \\
& = & \frac{1}{2\pi l^2} \sum_{k_z}
\frac{f(E_{0\uparrow} (k_z+q_z))-f(E_{0\uparrow} (k_z))}{E_{0\uparrow} (k_z) - E_{0\uparrow} (k_z+q_z)}.
\label{eq:chi0}
\end{eqnarray}
Here,
$\rho (\textbf{q})$ and $V (\textbf{q})$ are the Fourier components of the carrier density and perturbation potential, respectively,
$E_{0\uparrow}$ denotes the ($n = 0,\uparrow$) Landau subband energy dispersion,
$f(E)$ is the Fermi-Dirac distribution function,
$l = \sqrt{\hbar/eB}$ is the magnetic length,
$\hbar=h/2\pi$ is Planck's constant divided by $2\pi$,
and $e$ is the elementary charge.
According to Ref.~[\onlinecite{JPhysSocJpn.50.725}],
the ($n = 0, \uparrow$) Landau subband is relevant for the density-wave transition.
Therefore,
we focus on the ($n = 0, \uparrow$) Landau subband,
and for simplicity, the Fermi energy is fixed to the value of the bulk system,
regardless of the thickness.
The energy dispersion of the subband $E_{0\uparrow} (k_z)$ is calculated by the Slonczewski-Weiss-McClure model~\cite{PhysRev.109.272,PhysRev.119.606} with $\gamma_3=0$ in Ref.~[\onlinecite{JPhysSocJpn.17.808}],
as this term is not effective in a high magnetic field.
The condition for the density-wave transition is that $\max\left[\chi^{(0)}\right] $ reaches a critical value $1/\tilde{u}$ (Ref.~[\onlinecite{JPhysSocJpn.50.725}]),
where $\tilde{u}$ is an effective exchange interaction.
In a bulk system, i.e., in the limit of continuous $k_z$,
we can easily evaluate Eq.~(\ref{eq:chi0}) by replacing the summation with an integral.
As a result,
we obtain the so-called ``$2k_F$ instability,''
namely, $\chi^{(0)}$ divergently increases at $q_z = 2k_F$,
and the peak rapidly decreases with elevating temperatures.
On the other hand, in a thin-film system,
as $k_z$ is discrete with a spacing of $\Delta k_z = 2\pi /d$,
we directly sum all possible $k_z$ and $k_z+q_z$ pairs at each $q_z$ in Eq. (\ref{eq:chi0}).
We note that only an integer multiple of $\Delta k_z$ is allowed for $q_z$,
so $q_z$ is also discrete.
In the $N = 300$ unit cell (u.c.)-thick system ($N=k_z/\Delta k_z=d/c$, where $c$ is the lattice constant along the $c$-axis),
the results are quite similar to those in the bulk, as it is still thick enough.
However, in thinner cases such as $N = 30, 20$, and 10-u.c. systems,
the values at the peak become smaller.
In addition, the temperature dependence of the peak height becomes progressively smaller by reducing the thickness (see Appendix~\ref{Appendixchi0calc}).
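The behavior of the discrete sum can be illustrated with a toy model. The cosine band below is an assumed stand-in for the actual Slonczewski-Weiss-McClure subband (this is not the paper's calculation), so only the qualitative trends carry over: the $2k_F$ peak is lower and less temperature dependent at small $N$:

```python
import numpy as np

def chi0_1d(N, T, kF_frac=0.25, W=1.0):
    """Discrete analog of the density response in Eq. (1) for a toy cosine
    band. N: number of k_z points; T: temperature in units of the bandwidth.
    Returns the maximum of chi0 over the allowed discrete q_z."""
    kz = 2 * np.pi * np.arange(N) / N            # k_z grid, spacing 2*pi/d
    E = -0.5 * W * np.cos(kz)                    # toy dispersion
    mu = -0.5 * W * np.cos(2 * np.pi * kF_frac)  # pin the Fermi level at k_F
    f = 1.0 / (np.exp((E - mu) / T) + 1.0)       # Fermi-Dirac occupation
    best = 0.0
    for shift in range(1, N):                    # q_z = shift * (2*pi/d)
        dE = E - np.roll(E, -shift)              # E(k) - E(k+q)
        df = np.roll(f, -shift) - f              # f(E(k+q)) - f(E(k))
        mask = np.abs(dE) > 1e-12                # drop exactly degenerate pairs
        best = max(best, float(np.sum(df[mask] / dE[mask])))
    return best / N
```

For a dense grid the maximum sits at the perfect-nesting vector $q_z = 2k_F$ and grows as $T$ is lowered, while for a sparse grid both the peak height and its temperature dependence are suppressed, mirroring features (i) and (ii) in the text.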
These two features mean that if we assume the critical condition $\max\left[\chi^{(0)}(T,B)\right] = 1/\tilde{u}$ is thickness independent,
the density-wave transition should occur at lower temperatures in thinner systems in some fixed magnetic fields.
By determining some adequate value of $1/\tilde{u}$,
the simulated phase diagram is depicted as Fig.~\ref{f3}(b).
Figure \ref{f3}(b)
qualitatively reproduces the trend of the phase boundary pointed out in the experimental phase diagram of Fig.~\ref{f3}(a) [see (i) and (ii) above].
Taking into account the agreement of the characteristics of the phase boundary,
we strongly suggest that the insulating state that appeared above $B \simeq 30$ T in graphite is the density-wave state.
Note that we do not confirm whether it is the \textit{valley}-density-wave state.
Although our simulation is based on the band dispersion of the Slonczewski-Weiss-McClure model~\cite{PhysRev.109.272,PhysRev.119.606},
the conclusion is not affected by the details of the band structure, as shown in the following discussion.
To look into the quantum size effect on the density-wave state,
we discuss the mechanism for the shift of $B_c$ and the small temperature dependence.
The first feature, (i) the shift of $B_c$, is attributable to the boundary condition for the density-wave state.
If we compare thick and thin samples at some fixed magnetic field around 30 T,
the thick sample has a pair of $k_z$ and $k_z + q_z$ just at the Fermi level,
while in the thin sample, such pairs cannot be found in some cases owing to the sparse $k_z$ [see Fig.~\ref{f1}(b)].
This means that the thin-film system needs to tune the magnetic field to find a pair of $k_z$ and $k_z + q_z$.
Because the value of $\max\left[\chi^{(0)}(T,B)\right]$ monotonically increases with $B$, as indicated by Eq.~(\ref{eq:chi0}),
the thin-film system needs a higher magnetic field to find a pair to achieve $\max\left[\chi^{(0)}(T,B)\right] \geq 1/\tilde{u}$.
In real space,
this feature corresponds to the formation of the density wave with a fixed-end boundary condition.
In the thick system,
where the boundary condition is not relevant,
the formation of the density wave is not restricted by the characteristic length of the density wave $\sim \pi/k_F$.
On the other hand,
as the node position of the density wave is expected to come at the boundary,
a mismatch of $d$ and $\sim \pi/k_F$ will make it difficult for the system to enter the density-wave state.
In fact,
the simulated $B_c(T)$ behaves nonmonotonically in the fixed-$\varepsilon_F$ calculation [not shown in Fig.~\ref{f3}(b)],
although in reality the Fermi energy will shift up and down as the magnetic field increases.
The second feature, (ii) the small temperature dependence of $B_c(T)$, originates from the sparseness of the states along the energy direction in the Landau subbands, rather than along the $k_z$ direction.
Because the energy spectra are no longer continuous by the quantum size effect,
the distribution function becomes irrelevant to the system.
More specifically,
if we draw horizontal bars with two different widths of $k_B T$, as indicated in Fig.~\ref{f1}(b) by light blue (lower temperature) and light red (higher temperature),
the number of points on $E(k_z)$ overlaid by these bars are almost the same in the thin-film system,
while remarkably different in the thick-enough system
owing to the different energy spacing.
Hence, the condition satisfying the density-wave transition is not affected by the temperature,
resulting in the steep phase boundary in Fig.~\ref{f3}(b).
Finally, we discuss the threshold thickness below which the quantum size effect becomes relevant.
Surprisingly,
the relatively thick system of $d=173$ nm already deviates from the bulk phase boundary in our experimental results,
but this value is of the same order as the rough estimate of 70 nm mentioned above.
Hence, we can safely attribute the phase-boundary shift to the quantum size effect,
although a quantitative difference of a factor remains.
In fact, our density-response-function calculation shows that
a 300-u.c. system, which corresponds to $d\simeq 200$ nm, gives the same result as the bulk one.
Resolving this quantitative discrepancy will require a modification of the nesting vector or of the subband selection,
and further investigation is expected to reach quantitative agreement.
\section{CONCLUSION}
In conclusion,
we observed a thickness-dependent electronic phase transition at $B\simeq 30$ T in graphite.
The transition in thin-film graphite on silicon substrate was detected by the in-plane transport under a pulsed magnetic field.
The thickness dependence of the transition indicates that the ordered state along the out-of-plane direction evolves.
In contrast to bulk graphite,
the critical magnetic field in thin-film graphite shifts higher with reduced temperature dependence.
These features are understood by the quantum size effect on the density-wave transition,
and the phase diagram is in reasonably good agreement with the simulated one based on the density-wave state.
As a result, we strongly suggest that the insulating state appearing at $B\simeq 30$ T is the density-wave state.
This thinning approach, which controls the phase transition through level spacing without introducing defects or carriers, will help us understand the entire phase diagram of graphite.
\begin{acknowledgments}
The authors are grateful to Professor Y. Takada, Professor M. Tokunaga, Professor Y. Iye, Professor H. Yaguchi, and Professor R. Shindo for valuable discussions.
This work was partially supported by JSPS KAKENHI Grant No. JP25107003.
\end{acknowledgments}
# Applying a function to a ciphertext

https://concrete.zama.ai/advanced-operations/applying-a-function-to-a-ciphertext

Programmable bootstrapping is a powerful technique that enables simultaneously bootstrapping a ciphertext and homomorphically evaluating a function on it. Without programmable bootstrapping, evaluating complex non-linear functions would require evaluating deep arithmetic or boolean circuits, with as many bootstraps as there is noise accumulation. Here, the same function can be evaluated for the cost of a single bootstrap.

Operation: $f(E[m]_{noisy}) \rightarrow E[f(m)]_{clean}$. Type: bootstrapped. Side effects: reduces noise; modifies padding; potentially modifies the encryption key; potentially modifies the security parameters.

## Discretizing the function to be evaluated

A simple way to think of programmable bootstrapping is as a homomorphic table lookup, where the table represents a discretization of the function $f$ that needs to be evaluated on the ciphertext.

In TFHE, and thus Zama, we can bootstrap using polynomials modulo $X^N+1$, and get as an intermediary step an encryption of a polynomial whose constant term is the input plaintext. Programming the bootstrapping operation then amounts to simply replacing this constant term by a table representing the discretized function being programmed. This table has to be provided with entries of the form $\bigl(\operatorname{encode}_{\mathrm{in}}(m), \operatorname{encode}_{\mathrm{out}}(f(m))\bigr)$, where $\operatorname{encode}_{\mathrm{in}}$ denotes the encoding function of the input and $\operatorname{encode}_{\mathrm{out}}$ that of the output.

Just as with plain bootstrapping, choosing the right parameters is paramount to get the right tradeoff between performance and precision, and programmable bootstrapping requires at least one free bit of padding.

## Applying a function on the ciphertext

To apply a function on a ciphertext, use the `bootstrap_with_function` method, which takes as arguments:

- a bootstrapping key;
- the function to be evaluated, as a closure `Fn(f64) -> f64`, which can be any univariate function as long as it does not have side effects;
- an output encoder that represents the range and precision of the resulting ciphertext, after the function has been applied to it.

Here is a code example to evaluate the square function:

```rust
use concrete::*;

fn main() -> Result<(), CryptoAPIError> {
    // encoders
    let encoder_input = Encoder::new(-10., 10., 6, 1)?;
    let encoder_output = Encoder::new(0., 100., 6, 0)?;

    // secret keys
    let sk_rlwe = RLWESecretKey::new(&RLWE128_1024_1);
    let sk_in = LWESecretKey::new(&LWE128_630);
    let sk_out = sk_rlwe.to_lwe_secret_key();

    // bootstrapping key
    let bsk = LWEBSK::new(&sk_in, &sk_rlwe, 5, 3);

    // message
    let message: f64 = -5.;

    // encode and encrypt
    let c1 = LWE::encode_encrypt(&sk_in, message, &encoder_input)?;

    // bootstrap
    let c2 = c1.bootstrap_with_function(&bsk, |x| x * x, &encoder_output)?;

    // decrypt
    let output = c2.decrypt_decode(&sk_out)?;

    println!("before bootstrap: {}, after bootstrap: {}", message, output);

    Ok(())
}
```
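The table-lookup view of programmable bootstrapping can be sketched in a few lines. This is an illustration of the discretization idea only; the function name and the integer encoding are assumptions, and the real encoders live inside the Concrete library:

```python
def discretize(f, in_min, in_max, bits):
    """Build the lookup table that 'programs' a bootstrap: one entry
    (encoded input, f(midpoint)) per representable plaintext bucket.
    Purely illustrative -- not the Concrete crate's actual encoder."""
    n = 2 ** bits
    step = (in_max - in_min) / n
    table = []
    for i in range(n):
        m = in_min + (i + 0.5) * step  # midpoint of the i-th input bucket
        table.append((i, f(m)))        # value the output encoder should decode to
    return table
```

With `bits` plaintext bits, the table has $2^{\text{bits}}$ entries, which is why the input encoder's precision directly bounds the accuracy of the evaluated function.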
## Installation

by Peter Staab (Number of replies: 1)
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=546

I just upgraded to the latest WW. I have an odd error.

If I browse a list of problems and click "Try it", then everything seems to be working as it was before. However, if instead I click "Edit it", then on the new page select "View using seed...", I get the following error, which is independent of the problem that I select.

### Error messages

Can't call method "assignment_type" on an undefined value at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Instructor/PGProblemEditor.pm line 1206.

### Call stack

The information below can help locate the source of the problem.

- in WeBWorK::ContentGenerator::Instructor::PGProblemEditor::view_handler called at line 290 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Instructor/PGProblemEditor.pm
- in WeBWorK::ContentGenerator::Instructor::PGProblemEditor::pre_header_initialize called at line 175 of /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator.pm
- in WeBWorK::ContentGenerator::go called at line 353 of /opt/webwork/webwork2/lib/WeBWorK.pm

cvs -q update -dP -r rel-2-4-patches
require 'rubygems'
require 'thrift'
require 'thrift/transport/tsocket'
require 'thrift/protocol/tbinaryprotocol'
require 'dynomite/node'

# Make the generated Thrift bindings loadable before requiring them.
$:.unshift(File.join(File.dirname(__FILE__), "..", "thrift"))
require 'Dynomite.rb'

module Dynomite
  class Client < DynomiteInternal::Client
    attr_reader :config
    protected :config

    def initialize(params = {})
      @config = {
        :host      => 'localhost',
        :port      => 9200,
        :transport => Thrift::TBufferedTransport,
        :protocol  => Thrift::TBinaryProtocol
      }.merge(params)
      @socket   = Thrift::TSocket.new(config[:host], config[:port])
      @protocol = config[:protocol].new(config[:transport].new(@socket))
      super(@protocol)
    end

    # Open the underlying TCP socket and return self for chaining.
    def connect
      @tcp_socket = @socket.open
      self
    end

    def disconnect
      @tcp_socket.close unless @tcp_socket.nil? || @tcp_socket.closed?
    end
  end
end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,886 |
\section{Introduction and main results}\label{sec:firstpart}
The immersed boundary method, as formulated by Peskin in \cite{PeskinThesis1972,PESKIN1972252}, has become a useful and effective method to computationally solve fluid-structure interaction (FSI) problems \cite{MR2009378}. This method has found numerous applications across different fields of science \cite{MR2115343,MR1156495}, and the scientific computing of FSI problems remains an active area of research \cite{MR2242805,MR2009378,TRYGGVASON2001708,richter2017fluid,Bazilevs2013}.
The {\em Peskin problem}, considered in this paper, describes the time evolution of an elastic simple closed string immersed in a 2D incompressible Stokes flow. The string exerts a singular force which generates the flow, and then the configuration of the string evolves over time according to the local fluid velocity. This model is among the simplest FSI problems, and it has been used extensively as a test problem in the development of numerical algorithms, in addition to being used in physical modeling. We assume that the string $\Gamma(t)$ splits $\mathbb R^2$ into two simply connected domains $\Omega(t)$ (interior) and $\mathbb R^2 \backslash \Omega(t)$ (exterior). We shall consider the problem when the viscosities $\mu_i$ of the two fluids are equal, and for simplicity we set $\mu_1 = \mu_2 =1$. Then there are several formulations of this problem, all of which are equivalent assuming we have a sufficiently smooth solution.
The first formulation is at the level of the fluid; for each fixed time $t>0$, both the fluid velocity $\bm{u}$ and pressure $p$ solve the equations
\begin{equation}\label{formulation.first}
\left\{\begin{array}{ll}
\Delta \bm{u} + \nabla p = 0, & x\in \mathbb R^2\backslash \Gamma(t), \\
\nabla_x \cdot \bm{u} = 0, & x \in \mathbb R^2\backslash \Gamma(t), \\
\bm{u},p\to 0, & \text{ as }x\to \infty \end{array} \right.
\end{equation}
We are left to describe the time evolution of $\Gamma(t)$ as well as the appropriate boundary conditions for $\bm{u}$ and $p$ at $\Gamma(t)$. Parametrize $\Gamma(t)$
by the Lagrangian coordinate $\theta \in\mathbb T=\mathbb{R}/(2\pi\mathbb{Z})=[-\pi, \pi]$, and let $\bm{X}(t,\theta):\mathbb T \to \mathbb R^2$
denote the coordinate position of $\Gamma$ at time $t$. Here $\bm{X}=(X_1, X_2)^T$ and $|\bm{X}|^2 \eqdef X_1^2+X^2_2$.
Then the evolution of $\bm{X}$ is given by
\begin{equation}\label{eqn.u.formula}
\partial_t \bm{X}(t,\theta) = \bm{u}(t,\bm{X}(t,\theta)).
\end{equation}
Define $\jump{w}=\jump{w}(\bm{X}(\theta))$ as the jump across the filament $\Gamma$:
\begin{equation}\notag
\jump{w}(\bm{X}(\theta)) = \lim\limits_{\Omega \ni x\to \bm{X}(\theta)} w(x) - \lim\limits_{\mathbb R^2 \backslash\Omega \ni x\to \bm{X}(\theta)} w(x).
\end{equation}
Then the final boundary conditions for $\bm{u}$ and $p$ are given by
\begin{equation}\label{jump.first}
\left\{\begin{array}{l}
\jump{\bm{u}} =0, \\
\jump{\paren{ \paren{\nabla \bm{u}+(\nabla \bm{u})^{\rm T}}-p\mathcal{I}}\bm{n}}= \bm{F}_{\rm el}\abs{\partial_\theta \bm{X}}^{-1}. \end{array} \right.
\end{equation}
Above $\mathcal{I}$ is the $2\times 2$ identity matrix and $\bm{n}$ is the outward pointing unit normal vector on $\Gamma$:
\begin{equation*}
\bm{n}=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\widehat{\bm{X}^\prime}, \; \widehat{\bm{X}^\prime} = \frac{\bm{X}^\prime}{\abs{\bm{X}'}}, \; \bm{X}^\prime = \partial_\theta \bm{X}=\PD{\bm{X}}{\theta}.
\end{equation*}
Since we will frequently be working with the parameterization $\bm{X}$ at fixed times, we will often omit the time variable and denote derivatives in $\theta$ of $\bm{X}$ as $\bm{X}^\prime$.
Lastly, we denote by
$\bm{F}_{\text{el}}$
the elastic force exerted by the string $\Gamma$. In the case that the elastic string obeys Hooke's law, we have a simple tension given by:
\begin{equation}\label{linearF}
\bm{F}_{\rm el}=k_0\partial_\theta^2 \bm{X}, \quad k_0>0,
\end{equation}
where $k_0$ is the elasticity constant of the string $\Gamma(t)$. The general tension force law is given by
\begin{equation}\label{nonlinearF}
\bm{F}_{\rm el}=\partial_\theta\paren{\mc{T}(\abs{\partial_\theta \bm{X}})\frac{\partial_\theta\bm{X}}{\abs{\partial_\theta \bm{X}}}}.
\end{equation}
This is also called the fully nonlinear force law in \cite{rodenberg_thesis}. Here $\mathcal{T}(s)$ is a coefficient modeling the elastic tension in the filament that satisfies the structure conditions $\mathcal{T}>0$ and $d\mathcal{T}/ds>0$.
Note that \eqref{nonlinearF} reduces to \eqref{linearF} if we take $\mathcal{T}(s)=k_0s$, in which case $k_0=\mathcal{T}(1)=d\mathcal{T}/ds$.
The set of equations \eqref{formulation.first}-\eqref{eqn.u.formula}-\eqref{jump.first} above was first proposed as a simplified model to study blood flow through heart valves \cite{PeskinThesis1972,PESKIN1972252}. A second equivalent formulation of \eqref{formulation.first}-\eqref{eqn.u.formula}-\eqref{jump.first} is the following immersed boundary formulation
\begin{equation}\label{immersed.formulation}
\Delta \bm{u} +\nabla p = \int_{\mathbb T} \partial_\theta \left(\mathcal{T}(|\partial_\theta \bm{X}|) \frac{\partial_\theta \bm{X}}{|\partial_\theta \bm{X}|}\right) \delta(x - \bm{X}(\theta)) d\theta, \qquad \nabla\cdot \bm{u} = 0,
\end{equation}
which is very useful for numerical analysis. Then \eqref{immersed.formulation} combined with \eqref{eqn.u.formula} allows us to discretize the fluid domain in $x$ and the elastic string in $\theta$ independently of each other, with all communication between the two domains coming from the singular forcing of the fluid in \eqref{immersed.formulation}, and the time evolution of the string in \eqref{eqn.u.formula}. This became the basis for the immersed boundary method, which has been applied to numerous problems and is of great use in applications \cite{MR1156495}.
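In practice the singular forcing in \eqref{immersed.formulation} is regularized by a discrete delta function. The following is a minimal illustrative sketch, in one space dimension, using the standard four-point kernel from the immersed boundary method literature (not anything specific to this paper), of spreading Lagrangian point forces onto an Eulerian grid:

```python
import numpy as np

def phi4(r):
    """Peskin's standard 4-point regularized delta (1D factor, grid units).

    Supported on |r| < 2 and built so that sum_j phi4(r - j) = 1 for all r."""
    r = np.abs(np.asarray(r, dtype=float))
    inner = (3 - 2*r + np.sqrt(np.maximum(1 + 4*r - 4*r**2, 0.0))) / 8
    outer = (5 - 2*r - np.sqrt(np.maximum(-7 + 12*r - 4*r**2, 0.0))) / 8
    return np.where(r <= 1.0, inner, np.where(r < 2.0, outer, 0.0))

def spread(points, forces, grid, h):
    """Spread Lagrangian point forces onto a 1D Eulerian grid of spacing h."""
    f_grid = np.zeros_like(grid)
    for X, F in zip(points, forces):
        f_grid += F * phi4((grid - X) / h) / h
    return f_grid
```

The partition-of-unity property of the kernel guarantees that the total spread force equals the total Lagrangian force, independent of where the points sit relative to the grid.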
The third formulation, which we will primarily use, is the following boundary integral formulation for the general force law \eqref{nonlinearF}:
\begin{equation}\label{e:boundaryintegral}
\partial_t \bm{X}(\theta) = \int_{\mathbb T} G(\delta_\alpha \bm{X}(\theta) )\partial_\alpha \left(\mathcal{T}(|\bm{X}'(\theta+\alpha) |) \frac{\bm{X}'(\theta+\alpha)}{|\bm{X}'(\theta+\alpha)|}\right) d\alpha.
\end{equation}
Here, for a generic function $f:\mathbb T \to \mathbb R^2$, we define the standard partial difference operator by
\begin{equation}\label{delta.notation}
\delta_\alpha f(\theta) \eqdef f(\theta+\alpha)- f(\theta).
\end{equation}
For $z\in \mathbb R^2\setminus\{0\}$, $G(z)$ is the Stokeslet given by
\begin{equation}\label{stokeslet.def}
G(z) = G_1(z) + G_2(z),
\quad G_1(z) \eqdef - \frac{1}{4\pi}\log(|z|) \mathcal{I},
\quad G_2(z) \eqdef \frac{1}{4\pi}\frac{z\otimes z}{|z|^2}.
\end{equation}
Notice that in the simple tension case \eqref{linearF} the equation \eqref{e:boundaryintegral} takes the form
\begin{equation}\notag
\partial_t \bm{X}(\theta) = k_0 \int_{\mathbb T} G(\delta_\alpha \bm{X}(\theta) ) \partial_\alpha^2 \bm{X}(\theta+\alpha) d\alpha,
\end{equation}
which contains the second order derivative $\partial_\alpha^2 \bm{X}$ inside the equation.
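Since uniformly parametrized circles are steady states (see the discussion of equilibria below), the velocity given by this integral must vanish on the unit circle. This can be checked with a crude midpoint-rule sketch, which simply avoids, rather than treats, the logarithmic singularity at $\alpha=0$, so it is only a sanity check and not a usable quadrature:

```python
import numpy as np

def stokeslet(z):
    """G(z) = -(1/4pi) log|z| I + (1/4pi) z (x) z / |z|^2, as in (stokeslet.def)."""
    r2 = z @ z
    return (-0.5 * np.log(r2) * np.eye(2) + np.outer(z, z) / r2) / (4 * np.pi)

def circle_velocity(theta=0.0, k0=1.0, N=2000):
    """Midpoint-rule evaluation of k0 * int G(X(t+a) - X(t)) X''(t+a) da
    for the unit circle X(t) = (cos t, sin t); nodes avoid a = 0 symmetrically."""
    X = lambda t: np.array([np.cos(t), np.sin(t)])
    h = 2 * np.pi / N
    u = np.zeros(2)
    for j in range(N):
        a = -np.pi + (j + 0.5) * h
        u += h * k0 * stokeslet(X(theta + a) - X(theta)) @ (-X(theta + a))  # X'' = -X
    return u
```

The tangential component cancels exactly by the $\alpha\mapsto-\alpha$ symmetry of the nodes, while the normal component converges to zero as $N$ grows despite the log-singular integrand.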
We also define
\begin{equation}\label{distance.X.notation}
\DAL \BX (\theta) \eqdef \frac{\delta_\alpha \bm{X} (\theta)}{ \alpha}.
\end{equation}
Then we introduce the arc-chord number
\begin{equation}\label{arc.cord.number}
|\bm{X}|_* \eqdef \inf\limits_{\theta, \alpha\in \mathbb T,\alpha \ne 0} |\DAL \BX (\theta)|.
\end{equation}
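For a curve sampled at $N$ uniform points, the infimum in \eqref{arc.cord.number} can be approximated by a minimum over grid shifts; a small illustrative sketch (for the unit circle the discrete minimum equals the true value $2/\pi$, attained at $|\alpha|=\pi$):

```python
import numpy as np

def arc_chord(X):
    """Discrete arc-chord number: min over grid pairs of |delta_alpha X / alpha|
    for X sampled at theta_i = 2*pi*i/N, with alpha wrapped to (-pi, pi]."""
    N = X.shape[0]
    best = np.inf
    for j in range(1, N):
        alpha = 2 * np.pi * j / N
        if alpha > np.pi:
            alpha -= 2 * np.pi   # representative of the shift in (-pi, pi]
        chords = np.linalg.norm(np.roll(X, -j, axis=0) - X, axis=1)
        best = min(best, chords.min() / abs(alpha))
    return best
```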
The evolution equation \eqref{e:boundaryintegral} is then well-defined for a sufficiently regular function $\bm{X}(t,\theta)$ that satisfies $|\bm{X}(t)|_*>0$. If the parametrization $\bm{X}(t,\theta)$ is sufficiently regular, it has been proven that all three formulations \eqref{formulation.first}-\eqref{eqn.u.formula}-\eqref{jump.first}, \eqref{eqn.u.formula}-\eqref{immersed.formulation}, and \eqref{e:boundaryintegral} are equivalent \cite{MR1808257}. Given the importance of the Peskin problem in applications, establishing the existence of smooth solutions is vital in order to guarantee that the various numerical methods based on the different formulations of
the problem all approximate the same solution.
The Peskin problem has several known similarities with the Muskat problem. The Muskat problem is also a free boundary problem that can be written in a boundary integral formulation \cite{MR2472040}, and both systems satisfy an energy balance law \cite{MR3935476,CCGS13,CCGRS16}. Furthermore, both equations have the invariant scaling $g_\lambda(t,\theta) = \lambda^{-1}g(\lambda t,\lambda \theta)$ (see also \secref{sec:scaling}). Lastly, both systems of equations can be written in the form
\begin{equation}\notag
\partial_t g + (-\Delta)^{\frac12}g = \mathfrak{R},
\end{equation}
with a ``remainder'' term $\mathfrak{R}$. For the Peskin problem $g=\bm{X}(t,\theta)$ and the remainder is $\mathfrak{R} = \mathcal{R}(t,\theta)$ as in \eqref{e:peskinapprox} below. Recently there has been a large amount of research studying the local- and global-in-time well-posedness for the Muskat problem \cite{CCG11,FlynnNguyen2020,NguyenST2019,NguyenPausader2019,2103.14535,AlaNgu2021lipsh,2009.08442,2010.06915,CCGS13,CCGRS16,Cam17,Cam20,CGS16,MR3639321} and its breakdown \cite{CCFGL12}. This work was motivated by recent results on scaling critical local-in-time well-posedness for the Muskat problem in \cite{AlaNgu2021lipsh,2009.08442,2010.06915}, as well as recent analytical work on the Peskin problem in \cite{MR3935476,MR3882225}.
Analytical study of the Peskin problem began very recently, with all but one paper focusing on the case of simple tension $\mathcal{T}(r) = k_0 r$ in \eqref{linearF}. Lin and Tong were able to prove local well-posedness for the boundary integral formulation \eqref{e:boundaryintegral} with initial data $\bm{X}_0\in H^{5/2}(\mathbb T; \mathbb R^2)$ using energy methods and the Schauder fixed point theorem \cite{MR3882225}. At the same time, Mori, Rodenberg, and Spirn proved local well-posedness for initial data $\bm{X}_0\in C^{1,\gamma}(\mathbb T;\mathbb R^2)$ for any $0<\gamma<1$ using semigroup theory \cite{MR3935476}. In particular, the result of \cite{MR3935476} is barely subcritical, but the semigroup approach used in the proof makes a scaling critical result difficult. The only equilibrium states are uniformly parametrized circles \cite{MR3935476}, and both groups were able to prove global well-posedness and exponential convergence to equilibrium for initial data sufficiently close to a circle \cite{MR3882225,MR3935476}. Additionally, \cite{MR3935476} was able to prove that solutions to the Peskin problem immediately become $C^\infty$ for positive time, and that if $\bm{X}(t)$ blows up in finite time, either a chord-arc condition fails or the $C^{1,\gamma}$ norm must blow up for any small $\gamma>0$. Tong \cite{1904.09528} then further established the global well-posedness of the regularized Peskin problem and proved convergence as the regularization parameter tends to zero.
Regarding scaling critical initial data for \eqref{e:boundaryintegral}, recently Garc\'ia-J\'uarez, Mori, and Strain \cite{2009.03360} were able to prove global well-posedness if the initial data is sufficiently close to a uniformly parametrized circle in the Wiener algebra $\dot{\mathcal{F}}^{1,1} \eqdef \{f:\mathbb T\to \mathbb R^2 : \sum_{k\in \mathbb Z} |k| |\hat{f}(k)| <\infty\}$. This result uses the spectral decomposition of the linearized operator, and it holds even in the case that the interior and exterior fluids have different viscosities; it is the first analytical result in that case. Recently, Gancedo, Granero-Belinch\'{o}n, and Scrobogna \cite{2011.02294} studied a toy model of the Peskin problem and proved global existence and uniqueness in the critical Lipschitz space. More recently, Chen and Nguyen were able to prove local well-posedness for \eqref{e:boundaryintegral} whenever $\bm{X}_0'$ is in VMO using estimates on the fundamental solution of $(-\Delta)^{\frac12}$ and interpolation results, and they further prove global existence for initial data close to equilibrium with $\bm{X}_0'$ in BMO \cite{2107.13854}.
The previously mentioned results in a sense rely on rewriting \eqref{e:boundaryintegral} with \eqref{linearF} as
\begin{equation}\label{e:peskinapprox}
\partial_t \bm{X}(t,\theta) + (-\Delta)^{\frac12}\bm{X}(t,\theta) = \mathcal{R}(t,\theta),
\end{equation}
for some remainder $\mathcal{R}$. Controlling this remainder further requires controlling the derivative $\bm{X}'$. These results then make use of properties that are particular to the fractional heat equation, such as the fundamental solution, the semigroup property, and the spectral decomposition. Once we consider a general tension $\mathcal{T}$ as in \eqref{nonlinearF}, however, we lose access to the full power of these properties, and major alterations to the approach are needed. The only previous work that has dealt with a general tension is Rodenberg's thesis \cite{rodenberg_thesis}. By localizing around the initial data, Rodenberg was able to again apply the semigroup method from \cite{MR3935476} and prove local existence when $\bm{X}_0, \mathcal{T}\in C^{1,\gamma}$. However, the result is weakened because the approach to localizing the initial data and thereby patching the semigroup method in \cite{rodenberg_thesis} did not also yield the smoothing effects, only guaranteeing that the solution $\bm{X}(t)$ remains in $C^{1,\gamma}$ even if the initial data and tension are $C^\infty$.
In order to further develop the fully nonlinear case \eqref{nonlinearF}, it is vital to understand exactly how the addition of a nonlinear tension $\mathcal{T}$ changes the problem. In particular, it is important to understand how this affects the evolution of the derivative $\bm{X}'$, since the regularity and behavior of the remainder $\mathcal{R}$ in \eqref{e:peskinapprox} have been controlled through it. In this article, we propose a new representation of the boundary integral equation for the problem. We write the equation \eqref{e:boundaryintegral} in the following equivalent formulation, in which the terms featuring a second derivative $\partial_\alpha^2 \bm{X} = \bm{X}''$ cancel out. In \eqref{e:boundaryintegral} we integrate by parts against $G_1(z)$ while leaving $G_2(z)$ alone to obtain
\begin{equation}\notag
\begin{split}
\partial_t \bm{X}(\theta) =& \int_{\mathbb T} \partial_\alpha \left( \frac{\mathcal{T}(|\bm{X}'|)}{|\bm{X}'|} \partial_\alpha(G_1(\delta_\alpha \bm{X}))\right) \delta_\alpha \bm{X}(\theta) d\alpha
\\ &+ \int_{\mathbb T} G_2(\delta_\alpha \bm{X}) \partial_\alpha \left(\mathcal{T}(|\bm{X}'(\theta+\alpha) |) \frac{\bm{X}'(\theta+\alpha)}{|\bm{X}'(\theta+\alpha)|}\right)d\alpha
\\
=& \frac{1}{4\pi}\int_{\mathbb T} \frac{2 \left(\bm{X}'(\theta+\alpha)\cdot \frac{\delta_\alpha \bm{X}}{|\delta_\alpha \bm{X}|} \right)^2 - |\bm{X}'(\theta+\alpha)|^2}{ |\delta_\alpha \bm{X}|^2} \frac{\mathcal{T}(|\bm{X}'(\theta+\alpha)|)}{|\bm{X}'(\theta+\alpha)|} \delta_\alpha \bm{X}(\theta) d\alpha.
\end{split}
\end{equation}
The calculation is performed in full detail in \secref{sec:GenTenDerivation}.
This property of the cancellation of the highest order derivatives is also satisfied by the equation for $\bm{X}'(t,\theta)$. Let $\bm{X}(t,\theta)$ be the solution of the Peskin problem with initial data $\bm{X}_0$ and tension $\mathcal{T}$. Then $\bm{X}'(t,\theta)$ solves the following equation
\begin{equation}\label{peskin.general.tension}
\partial_t \bm{X}'(\theta) = \int_{\mathbb T} \frac{d\alpha}{\alpha^2}~ \mathcal{K}[\bm{X}](\theta, \alpha)\delta_\alpha \mathbf{T}(\bm{X}'(\theta)),
\end{equation}
where $\mathbf{T}:\mathbb R^2\to \mathbb R^2$ is the tension map
\begin{equation}\label{tension.map.def}
\mathbf{T}(z) \eqdef \mathcal{T}(|z|)\hat{z}, \quad z \in \mathbb R^2.
\end{equation}
Here the kernel $\mathcal{K}(\theta, \alpha)=\mathcal{K}[\bX](\theta, \alpha)$ is given by
\begin{multline}\label{kernel.peskin.noExpand}
\mathcal{K}[\bX](\theta, \alpha) \eqdef \frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot \mathcal{P}(\DAL \BX (\theta))\bm{X}'(\theta)}{|\DAL \BX (\theta)|^2} \mathcal{I}
\\
-\frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot \mathcal{R}(\DAL \BX (\theta))\bm{X}'(\theta)}{|\DAL \BX (\theta)|^2} \mathcal{R}(\DAL \BX (\theta))
\\
+\frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot (\mathcal{P}(\DAL \BX (\theta))-\mathcal{I})\bm{X}'(\theta)}{|\DAL \BX (\theta)|^2} \mathcal{P}(\DAL \BX (\theta)).
\end{multline}
Again $\mathcal{I}$ is the identity matrix on $\mathbb R^2$. Also the reflection matrices $\mathcal{R}$ and $\mathcal{P}$ are defined $\forall z \in \mathbb R^2$ by
\begin{equation}\label{matrix.operators}
\mathcal{R}(z) \eqdef \hat{z}\otimes \hat{z}^\perp + \hat{z}^\perp\otimes \hat{z}, \quad \mathcal{P}(z) \eqdef \hat{z}\otimes \hat{z} - \hat{z}^\perp \otimes \hat{z}^\perp ,
\end{equation}
where $\hat{z}^\perp\in \mathbb R^2 $ is the unit vector perpendicular to $\hat{z}$. We remark that the three matrices $\mathcal{I}$, $\mathcal{R}(z)$, $\mathcal{P}(z)$ are mutually orthogonal in $\mathbb R^4$ and form a basis for the $2\times 2$ symmetric matrices for any fixed value of $z\in \mathbb R^2\setminus \{0\}$. This representation of the equation \eqref{peskin.general.tension} for the evolution of $\bm{X}'(t,\theta)$ is fundamental to the analysis in the remainder of this article. Equation \eqref{peskin.general.tension} is derived in \secref{sec:GenTenDerivation}.
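The claimed orthogonality is elementary to verify by hand, and also numerically. The following sketch (choosing one of the two possible unit perpendiculars $\hat z^\perp$, which does not affect the statements) checks the mutual Frobenius orthogonality, the common squared norm $2$, and the basis property for symmetric $2\times 2$ matrices:

```python
import numpy as np

def frames(z):
    """Identity together with R(z), P(z) from (matrix.operators); zp is one
    choice of the unit perpendicular of zh (its sign does not matter here)."""
    zh = z / np.linalg.norm(z)
    zp = np.array([-zh[1], zh[0]])
    R = np.outer(zh, zp) + np.outer(zp, zh)
    P = np.outer(zh, zh) - np.outer(zp, zp)
    return np.eye(2), R, P
```

Since each matrix has Frobenius norm $\sqrt 2$, any symmetric $S$ decomposes as $S=\tfrac12\sum_M \langle S, M\rangle M$ over $M\in\{\mathcal I,\mathcal R(z),\mathcal P(z)\}$.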
Now, recalling \eqref{distance.X.notation}, to further expand out the additional cancellation in the kernel $\mathcal{K}[\bX](\theta, \alpha)$ we introduce the notation
\begin{equation}\label{delta.pm.notation}
\delta_\alpha^+ \bm{X}'(\theta) \eqdef \bm{X}'(\theta+\alpha) - \DAL \BX (\theta), \quad
\delta_\alpha^- \bm{X}'(\theta) \eqdef \bm{X}'(\theta) - \DAL \BX (\theta).
\end{equation}
Then it is an important observation that the kernel $\mathcal{K}[\bX](\theta, \alpha)$ from \eqref{kernel.peskin.noExpand} can be expressed as the following matrix valued function
\begin{equation}\label{kerbel.eqn.deriv}
\mathcal{K}[\bm{X}](\theta, \alpha) = \frac{1}{4\pi}\mathcal{I} + \mathcal{A}[\bm{X}](\theta, \alpha),
\end{equation}
where
\begin{multline} \label{kerbel.A.eqn.deriv}
4\pi\mathcal{A}[\bm{X}](\theta, \alpha) \eqdef \frac{\delta_\alpha^+ \bm{X}'(\theta) \cdot \mathcal{P}(\DAL \BX (\theta)) \delta_\alpha^- \bm{X}'(\theta)}{|\DAL \BX (\theta)|^2}\mathcal{I}
\\
+ \frac{(\delta_\alpha^+ \bm{X}'(\theta)+\delta_\alpha^- \bm{X}'(\theta)) \cdot \mathcal{P}(\DAL \BX (\theta)) \DAL \BX (\theta)}{|\DAL \BX (\theta)|^2} \mathcal{I}
\\
-\frac{\delta_\alpha^+ \bm{X}'(\theta) \cdot \mathcal{R}(\DAL \BX (\theta)) \delta_\alpha^- \bm{X}'(\theta)}{|\DAL \BX (\theta)|^2}
\mathcal{R}(\DAL \BX (\theta))
\\
- \frac{(\delta_\alpha^+ \bm{X}'(\theta)+\delta_\alpha^- \bm{X}'(\theta)) \cdot \mathcal{R}(\DAL \BX (\theta)) \DAL \BX (\theta)}{|\DAL \BX (\theta)|^2} \mathcal{R}(\DAL \BX (\theta))
\\
+ \frac{\delta_\alpha^+ \bm{X}'(\theta) \cdot (\mathcal{P}(\DAL \BX (\theta)) - \mathcal{I}) \delta_\alpha^- \bm{X}'(\theta)}{|\DAL \BX (\theta)|^2} \mathcal{P}(\DAL \BX (\theta)).
\end{multline}
This expression follows by direct expansion of \eqref{kernel.peskin.noExpand}, taking into account the orthogonality in \eqref{matrix.operators}.
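The decomposition \eqref{kerbel.eqn.deriv}-\eqref{kerbel.A.eqn.deriv} is a pointwise algebraic identity in the three vectors $\bm{X}'(\theta+\alpha)$, $\bm{X}'(\theta)$ and $\DAL \BX (\theta)$, so it can also be confirmed on random data; a minimal sketch:

```python
import numpy as np

def RP(d):
    """R(d) and P(d) from (matrix.operators)."""
    dh = d / np.linalg.norm(d)
    dp = np.array([-dh[1], dh[0]])
    return np.outer(dh, dp) + np.outer(dp, dh), np.outer(dh, dh) - np.outer(dp, dp)

def K(a, b, d):
    """Kernel (kernel.peskin.noExpand): a = X'(theta+alpha), b = X'(theta),
    d = delta_alpha X / alpha."""
    R, P = RP(d)
    I, d2 = np.eye(2), d @ d
    return ((a @ P @ b) * I - (a @ R @ b) * R + (a @ (P - I) @ b) * P) / (4 * np.pi * d2)

def A(a, b, d):
    """Remainder (kerbel.A.eqn.deriv) in terms of dp = a - d and dm = b - d."""
    R, P = RP(d)
    I, d2 = np.eye(2), d @ d
    dp, dm = a - d, b - d
    return ((dp @ P @ dm + (dp + dm) @ P @ d) * I
            - (dp @ R @ dm + (dp + dm) @ R @ d) * R
            + (dp @ (P - I) @ dm) * P) / (4 * np.pi * d2)
```

The check uses, implicitly, that $\mathcal P(d)d=d$ and that $\mathcal R(d)d$ is perpendicular to $d$.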
Then \eqref{peskin.general.tension} can be written as
\begin{equation}\label{peskin.expand.tension}
\partial_t \bm{X}'(\theta) -
\frac{1}{4\pi}\int_{\mathbb T} \frac{d\alpha}{\alpha^2}~ \delta_\alpha \mathbf{T}(\bm{X}'(\theta))= \int_{\mathbb T} \frac{d\alpha}{\alpha^2}~ \mathcal{A}[\bm{X}](\theta, \alpha)\delta_\alpha \mathbf{T}(\bm{X}'(\theta)).
\end{equation}
The expression $\frac{1}{4\pi}\int_{\mathbb T} \frac{d\alpha}{\alpha^2}~ \delta_\alpha \mathbf{T}(\bm{X}'(\theta))$ motivates our definition of $\widetilde{\Lambda}$ in \eqref{tildeLambda:eq}.
Then, due to the higher order cancellation of $\mathcal{A}[\bm{X}](\theta, \alpha)$ as in \eqref{kerbel.A.eqn.deriv}, for small $\alpha$ the integrand for the equation \eqref{peskin.general.tension} using \eqref{kerbel.eqn.deriv} is approximately
\begin{equation}\notag
\frac{\mathcal{K}[\bm{X}](\theta, \alpha)}{\alpha^2} \delta_\alpha \mathbf{T}(\bm{X}'(\theta))\approx \frac{\delta_\alpha \mathbf{T}(\bm{X}')}{4\pi\alpha^2}.
\end{equation}
Thus a basic model equation for the general tension equation \eqref{peskin.general.tension} would be a vector version of the fractional porous medium equation:
\begin{equation}\notag
\partial_t \bm{U} = -(-\Delta)^{\frac12} \mathbf{T}(\bm{U}).
\end{equation}
To the best of our knowledge, this equation has not been studied before, though both the scalar fractional version \cite{FractionalPorousMedium1, FractionalPorousMedium2, MR3656476} and the local vector-valued analogue \cite{VectorPorous1, VectorPorous2, VectorPorous3} have been studied. The positivity and monotonicity assumptions that we will make on the tension $\mathcal{T}$ are physically motivated, and they are also the same assumptions that typically appear for the porous medium equation in order to ensure ``ellipticity'' of the problem, as in \cite{MR3656476}.
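As a concrete illustration of this model equation, here is a minimal spectral sketch (all names are ours): $\Lambda=(-\Delta)^{1/2}$ is implemented as the Fourier multiplier $|k|$, and one forward-Euler step of $\partial_t\bm U=-\Lambda\mathbf T(\bm U)$ is taken. With $\mathbf T=\mathrm{id}$ this reduces to the fractional heat equation, whose exact one-step Euler factor on the $k$-th mode is $1-\Delta t\,|k|$:

```python
import numpy as np

def Lambda(u):
    """(-Delta)^(1/2) on a uniform 2*pi-periodic grid, via the multiplier |k|."""
    N = u.shape[-1]
    k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(u, axis=-1), axis=-1))

def euler_step(U, T, dt):
    """One explicit step of d/dt U = -Lambda(T(U)); U has shape (2, N) and the
    tension map T acts pointwise on the columns of U."""
    return U - dt * Lambda(T(U))
```

For a genuine tension one would pass `T(U) = \mathcal{T}(|U|)\hat U` applied column by column; the identity map suffices to exhibit the dissipative mechanism.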
\subsection{Scaling}\label{sec:scaling} For the Peskin problem \eqref{peskin.general.tension}, for any $\lambda>0$ the rescaling $\bm{X}_{\lambda}(t, \theta) = \lambda^{-1} \bm{X}(\lambda t, \lambda \theta)$ leaves the equation invariant for an arbitrary tension $\mathcal{T}$ in \eqref{tension.map.def}. If the tension takes the form of a power law $\mathcal{T}(r) = r^{1+\gamma}$ for some $\gamma \ge 0$, then the Peskin problem has the additional rescaling $\bm{X}^r(t,\theta) = r \bm{X}(r^\gamma t, \theta)$. In the case of a simple tension $\mathcal{T}(r) = k_0 r$, there is a two-parameter family of rescalings $\bm{X}_{\tau, \lambda}(t,\theta) = \tau \bm{X}(\lambda t, \lambda \theta)$, where $\tau\in \mathbb R$ and $\lambda>0$ are independent of each other. To ensure that the arc-chord condition \eqref{arc.cord.number} also remains invariant, we are limited to the rescaling $\bm{X}_{\lambda}(t, \theta) = \lambda^{-1} \bm{X}(\lambda t, \lambda \theta)$.
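For the reader's convenience, here is the short formal verification of the first invariance (carried out ignoring the adjustment of the periodicity cell under $\theta\mapsto\lambda\theta$):

```latex
Writing $\bm{X}_\lambda(t,\theta)=\lambda^{-1}\bm{X}(\lambda t,\lambda\theta)$, we have
$\bm{X}_\lambda'(t,\theta)=\bm{X}'(\lambda t,\lambda\theta)$ and
$\alpha^{-1}\delta_\alpha \bm{X}_\lambda(t,\theta)
=(\lambda\alpha)^{-1}\delta_{\lambda\alpha}\bm{X}(\lambda t,\lambda\theta)$,
so that
$\mathcal{K}[\bm{X}_\lambda(t)](\theta,\alpha)=\mathcal{K}[\bm{X}(\lambda t)](\lambda\theta,\lambda\alpha)$
and
$\delta_\alpha \mathbf{T}(\bm{X}_\lambda'(t,\theta))
=\delta_{\lambda\alpha}\mathbf{T}(\bm{X}'(\lambda t,\lambda\theta))$.
Substituting $\alpha'=\lambda\alpha$, so that $d\alpha/\alpha^2=\lambda\, d\alpha'/(\alpha')^2$,
\begin{equation*}
\int \frac{d\alpha}{\alpha^2}\, \mathcal{K}[\bm{X}_\lambda(t)](\theta,\alpha)\,
\delta_\alpha \mathbf{T}(\bm{X}_\lambda'(t,\theta))
=\lambda \int \frac{d\alpha'}{(\alpha')^2}\, \mathcal{K}[\bm{X}(\lambda t)](\lambda\theta,\alpha')\,
\delta_{\alpha'}\mathbf{T}(\bm{X}'(\lambda t,\lambda\theta))
=\lambda\, \partial_t \bm{X}'(\lambda t,\lambda\theta),
\end{equation*}
which is exactly $\partial_t \bm{X}_\lambda'(t,\theta)$ by the chain rule.
```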
Here we give a list of some scaling critical spaces for the Peskin problem \eqref{peskin.general.tension} under the rescaling $\bm{X}_{\lambda}(t, \theta) = \lambda^{-1} \bm{X}(\lambda t, \lambda \theta)$: the Lipschitz space $\dot{W}^{1,\infty}$, the Wiener algebra $\mathcal{A}^1$, $BMO^1$, and the homogeneous Besov spaces $\dot{B}^{1+\frac{1}{p}}_{p, r}$ for all $p, r \in [1,\infty]$. In particular we emphasize the spaces $\dot{B}^{\frac32}_{2,r}$ for $1 \le r \le \infty$ and $\dot{H}^{\frac{3}{2}}$ due to their $L^2$ structure.
In this paper we utilize the scaling critical Banach space $\dot{B}^{\frac32}_{2,1}$ since it has a clearly defined $L^2$ based structure, which we hope will also be useful in the further development and study of numerical methods.
\subsection{Notation}\label{sec:normNew}
We use $C>0$ to denote some inessential constant whose value may change from line to line. We will write $A \lesssim B$ if $A \le C B$. We also write $A \approx B$ if both $A \lesssim B$ and $B \lesssim A$ hold. We will use $f:\mathbb T \to \mathbb R^2$ or $\mathbb C$ to denote a generic smooth function throughout this paper, where $f=(f_1, f_2)$ and $|f|^2 \eqdef f_1^2+f^2_2$. We also define the translation operator $\tau_\beta$ applied to the $\theta \in \mathbb T$ variable by
\begin{equation}\label{def.translation}
\tau_\beta f(\theta) \eqdef f(\theta+\beta).
\end{equation}
We define $\mathds{1}_{A}$ as the standard indicator function of the set $A$. We use the notation $\delta_\beta$ for the difference operator \eqref{delta.notation} frequently.
We will use the standard notation for the $L^p(\mathbb T)$ spaces as
\begin{equation}\notag
|| f||_{L^p(\mathbb T)}=|| f||_{L^p_\theta} \eqdef \left( \int_{\mathbb T} |f(\theta)|^p d\theta\right)^{1/p}, \quad 1 \le p < \infty.
\end{equation}
In this function space, and in all the functional spaces below, we use the standard generalization to $p=\infty$ as
\begin{equation}\notag
|| f||_{L^{\infty}(\mathbb T)}\eqdef \esssup_{\theta \in \mathbb T} |f(\theta)|.
\end{equation}
We will also use the temporal spaces
\begin{equation}\notag
|| f||_{L^p([0,T])}= || f||_{L^p_T} \eqdef \left( \int_{0}^{T} |f(t)|^p dt\right)^{1/p}, \quad 1 \le p < \infty.
\end{equation}
We define the $L^q_TL^p_\theta$ mixed Lebesgue space norms for $1\le p,q \leq \infty$ as follows:
\begin{equation*}
||f||_{L^q_TL^p_\theta}
=
||f||_{L^q_T(L^p_\theta)}
\eqdef
\big|\big| || f(\cdot, \cdot)||_{L^p(\mathbb T)} \big|\big| _{L^q([0,T])}.
\end{equation*}
Next we introduce the Besov spaces as follows
\begin{equation}\label{Besov.Space}
||f||_{\dot{B}^s_{p,r}} \eqdef \left(\int_{\mathbb T} \frac{d\beta}{|\beta|} \left(\frac{||\delta_\beta f||_{L^p(\mathbb T)}}{|\beta|^{s}} \right)^{r} \right)^{1/r}.
\end{equation}
Unless otherwise stated, all indices in the rest of this section satisfy $0<s<1$ and $p,q,r\in [1,\infty]$. When $r=\infty$ we use
\begin{equation}\notag
||f||_{\dot{B}^s_{p,\infty}} \eqdef \esssup_{\beta \in \mathbb T}\left(\frac{||\delta_\beta f||_{L^p(\mathbb T)}}{|\beta|^{s}} \right).
\end{equation}
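On a uniform grid both integrals in \eqref{Besov.Space} can be replaced by Riemann sums. The following rough sketch of the $\dot B^{1/2}_{2,1}$ seminorm (for scalar samples, with no claim of quadrature accuracy as $\beta\to 0$) at least reproduces exact homogeneity and vanishes on constants:

```python
import numpy as np

def besov_half_21(f):
    """Discrete surrogate of the B^{1/2}_{2,1} seminorm for f sampled on
    theta_i = 2*pi*i/N; beta runs over nonzero grid shifts wrapped to (-pi, pi]."""
    N = f.shape[0]
    h = 2 * np.pi / N
    total = 0.0
    for j in range(1, N):
        beta = 2 * np.pi * j / N
        if beta > np.pi:
            beta -= 2 * np.pi              # representative in (-pi, pi]
        df = np.roll(f, -j, axis=0) - f    # delta_beta f on the grid
        l2 = np.sqrt(h * np.sum(df * df))  # approximate L^2(T) norm
        total += h / abs(beta) * l2 / np.sqrt(abs(beta))
    return total
```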
In the rest of this paper for simplicity when we write $\sup_{\theta \in \mathbb T}$ or $\sup_{0 \le t \le T}$ we mean it to be the standard essential supremum.
We will then also use the standard Sobolev spaces that can be defined as
\begin{equation}\notag
||f||_{\dot{H}^{s}} \eqdef
||f||_{\dot{B}^s_{2,2}}, \quad \forall s \in \mathbb R.
\end{equation}
Technically, to define $\dot{B}^s_{2,2}$ for all $s \in \mathbb R$ we use the definition in Remark \ref{rem:besov.define}.
We will also use the Chemin-Lerner \cite{CL1995} mixed regularity spaces as described for example in \cite[Definition 2.67 on page 98]{BCD} that are defined as
\begin{equation}\label{Besov.CL.Space}
|| f||_{\widetilde{L}^{q}_T(\dot{B}_{p, r}^{s})}
\eqdef
\left( \int_{\mathbb T} \frac{d\beta}{|\beta|} \frac{|| \delta_\beta f||_{L^{q}_T(L^p_\theta)}^r}{|\beta|^{sr}}
\right)^{1/r}.
\end{equation}
Next, motivated by \cite{AlaNgu2021lipsh,2009.08442,2010.06915}, we introduce periodic Besov spaces with additional regularity on the logarithmic scale for $0<s<1$ and $p,r\in [1,\infty]$ as
\begin{equation}\label{Besov.mu.Space}
||f||_{\dot{B}^{s,\mu}_{p,r}} \eqdef \left(\int_{\mathbb T} \frac{d\beta}{|\beta|} \left(\mu(|\beta|^{-1})\frac{||\delta_\beta f||_{L^p(\mathbb T)}}{|\beta|^{s}} \right)^{r} \right)^{1/r}.
\end{equation}
Here the log scale derivative $\mu$ is defined as follows:
\begin{definition}\label{subw.definition}
We consider functions $\mu\colon[0,\infty) \to [1,\infty)$ which satisfy the following three assumptions:
\begin{itemize}
\item $\mu(r)$ is increasing and $\lim_{r\to\infty}\mu(r)=\infty$.
\item There is a $c_0>0$ such that $\mu(2r)\leq c_0\mu(r)$ for any $r\geq 0$.
\item The function $r\mapsto \mu(r)/\log(4+r)$ is decreasing on $[0,\infty)$.
\end{itemize}
\end{definition}
Then we similarly define
\begin{equation}\label{Besov.mu.CL.Space}
|| f||_{\widetilde{L}^{q}_T(\dot{B}_{p, r}^{s,\mu})}
\eqdef
\left( \int_{\mathbb T} \frac{d\beta}{|\beta|} \left(\mu(|\beta|^{-1})\frac{||\delta_\beta f||_{L^{q}_T(L^p_\theta)}}{|\beta|^{s}} \right)^{r}
\right)^{1/r}.
\end{equation}
We introduce streamlined notation for the main norms used in the paper
\begin{equation}\label{C.space.temporal}
|| f||_{\BS_T}
\eqdef
|| f||_{\widetilde{L}^{\infty}_T(\dot{B}_{2, 1}^{\frac12,\mu})}
=
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} \mu(|\beta|^{-1}) || \delta_\beta f||_{L^{\infty}_T(L^2_\theta)},
\end{equation}
and
\begin{equation}\label{D.space.temporal}
|| f||_{\mathcal{D}_T^\subw}
\eqdef
|| \widetilde{\Lambda}^{\frac12} f||_{\widetilde{L}^2_T(\dot{B}^{\frac12,\mu}_{2,1})}
=
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} \mu(|\beta|^{-1})
||\widetilde{\Lambda}^{\frac12} \delta_\beta f||_{L^2_T(L^2_\theta)}.
\end{equation}
Above the operator $\widetilde{\Lambda}$ is a constant multiple of $\Lambda \eqdef (-\Delta)^{\frac12}$ and is defined precisely in \eqref{tildeLambda:eq} in \secref{sec:para}.
Further from \eqref{tildeLambda:eq} we have
\begin{equation}\notag
|| \widetilde{\Lambda}^{\frac12} f||_{L^2_\theta}^2
=
\int_{\mathbb T} d\theta ~ f(\theta) \cdot \widetilde{\Lambda} f(\theta)
=
\frac{1}{8\pi}\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^{2}}
~
| \delta_\alpha f(\theta)|^2.
\end{equation}
This can be taken as the definition of $|| \widetilde{\Lambda}^{\frac12} f||_{L^2_\theta}$ in \eqref{D.space.temporal}.
For the initial data we will use the following norm:
\begin{equation}\label{initial.B.space}
|| f||_{\mathcal{B}^\subw}
\eqdef
|| f||_{\dot{B}_{2, 1}^{\frac12,\mu}}
=
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} \mu(|\beta|^{-1}) || \delta_\beta f||_{L^2_\theta}.
\end{equation}
Lastly we have
$||f||_{L^\infty_T(\dot{B}^{1/2, \mu}_{2,1})}
\leq
|| f||_{\BS_T},$ and this
inequality shows that the norm $|| f||_{\BS_T}$ is stronger than $||f||_{L^\infty_T(\dot{B}^{1/2, \mu}_{2,1})}$. We will also use the standard definitions of the H{\"o}lder spaces $C^{k,\gamma}$.
\subsection{Main results}\label{sec:mainResults} Without loss of generality we can suppose initially that $\bm{X}_0'$ has mean zero since the equation \eqref{e:boundaryintegral} and the equation \eqref{peskin.general.tension} both annihilate constants. Therefore, this mean zero property will be preserved by the solution. Next we give definitions of our notions of solution.
\begin{definition}\label{def:solution} (Weak solution) Let $\bm{X}_0'\in \dot{B}^{\frac{1}{2}}_{2,1}(\mathbb T ; \mathbb R^2)$ with $|\bm{X}_0|_*>0$. We say that $\bm{X}: [0,T]\times \mathbb T\to \mathbb R^2$ is a weak solution of the Peskin problem \eqref{peskin.general.tension} with tension $\mathcal{T}$ and initial data $\bm{X}_0$ if $\bm{X}', \mathbf{T}(\bm{X}')\in L^2_T (L^\infty_\theta \cap \dot{H}^{\frac12}_\theta)$ with $\inf_{0 \le t \le T} |\bm{X}(t)|_*>0$, and for any function $\bm{Y}: [0,T]\times \mathbb T\to \mathbb R^2$ with $\bm{Y}'\in L^2_T (L^\infty_\theta \cap \dot{H}^{\frac12}_\theta)$ and $\partial_t \bm{Y}'\in L^2_T (L^\infty_\theta\cap \dot{H}^{\frac12}_\theta)^*$, we have
\begin{multline}\notag
\int_\mathbb T d\theta ~ \bm{Y}'(T,\theta)\cdot \bm{X}'(T,\theta) - \int_\mathbb T d\theta~ \bm{Y}_0'(\theta)\cdot \bm{X}'_0(\theta) =
\int_0^T dt \int_\mathbb T d\theta~ \partial_t \bm{Y}'(t,\theta)\cdot \bm{X}'(t,\theta)
\\
- \frac12 \int_0^T dt \int_\mathbb T d\theta\int_\mathbb T \frac{d\alpha}{\alpha^2} ~ \delta_\alpha \bm{Y}'(t) \cdot \mathcal{K}[\bm{X}(t)](\theta, \alpha)\delta_\alpha \mathbf{T}(\bm{X}'(t)).
\end{multline}
\end{definition}
\begin{remark}
Our definition of a weak solution can be accurately paraphrased as the weakest notion of distributional solution such that $\bm{X}'$ is a valid test function for itself. This is chosen in order to justify the calculations of our main a priori estimate in \secref{sec:BesovSpace}.
\end{remark}
\begin{definition}\label{def:StrongSolution} (Strong solution) Let $\bm{X}_0'\in \dot{B}^{\frac{1}{2}}_{2,1}(\mathbb T ; \mathbb R^2)$ with $|\bm{X}_0|_*>0$.
We say that $\bm{X}: [0,T]\times \mathbb T\to \mathbb R^2$ is a strong solution if $\bm{X}\in C^2((0,T]\times \mathbb T;\mathbb R^2)$ solves the equation \eqref{peskin.general.tension} pointwise with $\inf_{0 \le t \le T} |\bm{X}(t)|_*>0$ and
\begin{equation}\notag
\lim\limits_{t\to 0} ||\bm{X}'(t)-\bm{X}'_0||_{L^\infty} = 0.
\end{equation}
\end{definition}
\begin{theorem}\label{thm:main}
Let $\bm{X}_0: \mathbb T\to \mathbb R^2$ with $\bm{X}'_0\in \dot{B}^{\frac{1}{2}}_{2,1}$ and $|\bm{X}_0|_*>0$.
Let the scalar tension $\mathcal{T}:(0,\infty)\to (0,\infty)$ be such that $\mathcal{T}\in C^{1,1}_{loc}(0,\infty)$ with $\mathcal{T}'(r)>0$ for all $0<r<\infty$.
Then there is a time $T>0$ such that there exists a unique weak solution $\bm{X}:[0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem in the sense of Definition \ref{def:solution}, which is also a strong solution to the Peskin problem \eqref{peskin.general.tension} as in Definition \ref{def:StrongSolution}.
Furthermore for any $0<\beta<1$, $\bm{X}\in C^{2,\beta}_{loc}((0, T]\times \mathbb T;\mathbb R^2)$.
Additionally, if $\mathcal{T}\in C^{k,\gamma}_{loc}(0,\infty)$ for some $k\geq 2$ and $0<\gamma<1$ then we have that $\bm{X}\in C_{loc}^{k+1,\gamma}((0, T]\times \mathbb T;\mathbb R^2)$.
\end{theorem}
Note that due to the structure of equation \eqref{peskin.general.tension}, $\bm{X}\in C^{k+1,\gamma}_{loc}$ is the optimal regularity for $\mathcal{T}\in C^{k, \gamma}_{loc}$. We prove Theorem \ref{thm:main} by first establishing a quantitative version under more restrictive assumptions on the tension.
\begin{theorem}\label{thm:mainquant}(Quantitative existence)
Consider initial data $\bm{X}_0: \mathbb T\to \mathbb R^2$ such that $||\bm{X}'_0||_{\dot{B}^{\frac{1}{2}, \mu}_{2,1}}\leq M$ for some $M>0$ and some $\mu$ satisfying Definition \ref{subw.definition}, and such that $|\bm{X}_0|_*>0$. Let the tension map $\mathbf{T}:\mathbb R^2\to \mathbb R^2$ from \eqref{tension.map.def} be such that $D\mathbf{T}\in W^{1,\infty}(\mathbb R^2; \mathbb R^{2\times 2})$, satisfying the ellipticity condition $D\mathbf{T}(z)\geq \lambda \mathcal{I}>0.$
Then there exists a time $T>0$ depending only on $M$, $\mu$, $|\bm{X}_0|_*$, $\lambda$ and $||D\mathbf{T}||_{W^{1,\infty}}$ such that there exists a strong solution, in the sense of Definition \ref{def:StrongSolution}, $\bm{X}: [0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem \eqref{peskin.general.tension} with tension $\mathbf{T}$ and initial data $\bm{X}_0$. This solution satisfies for some universal constant $c>0$ that
\begin{multline}\label{e:primaryestimate}
\int_{\mathbb T}\frac{d\beta}{|\beta|^{\frac32}}\mu(|\beta|^{-1})\left( ||\delta_\beta \bm{X}'||_{L^\infty_T L^2_\theta} + c\sqrt{\lambda} ||\delta_\beta (-\Delta)^{\frac14} \bm{X}'||_{L^2_T L^2_\theta}\right)
\\
\leq 4 \int_{\mathbb T}\frac{d\beta}{|\beta|^{\frac32}} \mu(|\beta|^{-1}) ||\delta_\beta \bm{X}'_0||_{L^2_\theta}.
\end{multline}
Further for any small time $\tau>0$ and any $0<\beta<1$, $\bm{X}\in C^{2,\beta}([\tau, T]\times \mathbb T;\mathbb R^2)$, with its norm depending only on $\tau, \beta, $ and the previously mentioned constants.
If we additionally have that $\mathbf{T}\in C^{k,\gamma}(\mathbb R^2;\mathbb R^2)$ for some $k\geq 2$ and $0<\gamma<1$, then for any small time $\tau>0$, $\bm{X}\in C^{k+1,\gamma}([\tau, T]\times \mathbb T;\mathbb R^2)$ with the $C^{k+1,\gamma}$ norm controlled by $M$, $\mu$, $|\bm{X}_0|_*$, $\lambda$, $\gamma$, $||\mathbf{T}||_{C^{k,\gamma}}$, and $\tau$.
\end{theorem}
\begin{remark}\label{pointwise.rk} Since in Theorem \ref{thm:main} and Theorem \ref{thm:mainquant} we have that
$\bm{X}\in C^{2,\beta}([\tau, T]\times \mathbb T;\mathbb R^2)$ for any $\tau>0$ and any $0 < \beta < 1$, the calculation in \secref{sec:GenTenDerivation} can be reversed, and we have that $\bm{X}(t,\theta)$ solves both \eqref{peskin.general.tension} and \eqref{e:boundaryintegral} pointwise for any $t>0$.
\end{remark}
\begin{theorem}\label{first:unique}
(Uniqueness)
Consider $\bm{X}_0$ and $\bm{Y}_0$ such that $\bm{X}'_0, \bm{Y}'_0\in \dot{B}^{\frac{1}{2}, \mu}_{2,1}(\mathbb T; \mathbb R^2)$ with $|\bm{X}_0|_*>0$ and $|\bm{Y}_0|_*>0$. Let the tension map $\mathbf{T}:\mathbb R^2\to \mathbb R^2$ satisfy the same conditions as in Theorem \ref{thm:mainquant} and consider the corresponding solutions $\bm{X}, \bm{Y}: [0,T]\times \mathbb T\to \mathbb R^2$.
Choose any $\omega(r)$ satisfying Definition \ref{subw.definition} such that there exists $r_* \ge 1$ so that $\frac{\omega(r)}{\mu(r)}$ is decreasing for $r \ge r_*$ and
$\displaystyle\lim\limits_{r\to \infty}\frac{\omega(r)}{\mu(r)} = 0$.
For any $\varepsilon>0,$ there exists $\delta_*>0$ such that if $0<\delta\le \delta_*$ and $||\bm{X}'_0-\bm{Y}'_0||_{L^2_\theta}<\delta$, then
\begin{equation}\label{stability.est.nu}
\int_{\mathbb T}\frac{d\beta}{|\beta|^{\frac32}}\omega(|\beta|^{-1}) ||\delta_\beta (\bm{X}'-\bm{Y}')||_{L^\infty_T L^2_\theta} <\varepsilon.
\end{equation}
In particular if $||\bm{X}'_0-\bm{Y}'_0||_{L^2_\theta}=0$ then the solution is unique in $\mathcal{B}^{\omega}_T$.
\end{theorem}
\begin{remark}
In \eqref{stability.est.nu} we can take for example $\omega(r) = \mu(r)^\gamma$ for any $0<\gamma<1$.
\end{remark}
\begin{remark}
Note that if $\bm{X}_0', \bm{Y}_0' \in \dot{B}^{\frac12}_{2,1}$, then there exists some function $\mu$ satisfying Definition \ref{subw.definition} such that $\bm{X}_0',\bm{Y}_0'\in \dot{B}^{\frac12, \mu}_{2,1}$. To see this, note that by Lemma \ref{lem:VallePoisson} there exist functions $\mu_X, \mu_Y$ such that $\bm{X}_0'\in \dot{B}^{\frac12, \mu_X}_{2,1}$ and $\bm{Y}_0'\in \dot{B}^{\frac12, \mu_Y}_{2,1}$. Then taking $\mu(r) = \min\{\mu_X(r), \mu_Y(r)\}$ is sufficient.
\end{remark}
\begin{theorem}\label{main:unique}
(Strong continuity)
Consider two strong solutions $\bm{X}, \bm{Y}: [0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem \eqref{peskin.general.tension} with initial data $\bm{X}_0$, $\bm{Y}_0$ as in Theorem \ref{thm:mainquant}. Suppose the tension map $\mathbf{T}$ as in \eqref{tension.map.def} satisfies $D\mathbf{T}\in W^{2,\infty}(\mathbb R^2; \mathbb R^{2\times 2})$ and the ellipticity condition $D\mathbf{T}(z)\geq \lambda \mathcal{I}>0$.
Then there exists a time $T_M>0$ depending only on $M$, $|\bm{X}_0|_*$, $|\bm{Y}_0|_*$, $\mu$, $\lambda$, and $||D\mathbf{T}||_{W^{2,\infty}}$ such that for any $0<T\leq T_M,$
we have the following strong continuity estimate
\begin{equation*}
||\bm{X}' - \bm{Y}'||_{\BN_T}
+
2\lambda^{\frac12} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
8 || \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}.
\end{equation*}
Above $\nu$, which is defined precisely in \eqref{nu.definition}, also satisfies Definition \ref{subw.definition} and defines norms of $\BN_T$ and $\mathcal{D}_T^\nu$ that are equivalent to $\BS_T$ and $\mathcal{D}_T^\subw$ respectively, as seen in \eqref{equivalent.nu.norm}.
\end{theorem}
\begin{corollary}\label{cor:lipshitz}(Locally Lipschitz)
Suppose the tension map $\mathbf{T}$ as in \eqref{tension.map.def} satisfies $D\mathbf{T}\in W^{2,\infty}(\mathbb R^2; \mathbb R^{2\times 2})$ and the ellipticity condition $D\mathbf{T}(z)\geq \lambda \mathcal{I}>0$, and let $\mu$ satisfy the assumptions of Definition \ref{subw.definition}. Then for any $M, \rho\in (0,\infty)$, there exists a time $T>0$ such that for all $0<t\leq T$, the map
\begin{equation}\notag
\bm{X}_0\longmapsto \bm{X}(t),
\end{equation}
is Lipschitz continuous from the bounded set $\{\bm{Z}: ||\bm{Z}'||_{\dot{B}^{1/2, \mu}_{2,1}}\leq M, |\bm{Z}|_*\geq \rho\}$ to $\dot{B}^{1/2,\mu}_{2,1}$, with Lipschitz constant depending on $M, \rho, \mu, \lambda, $ and $||D\mathbf{T}||_{W^{2,\infty}}$.
\end{corollary}
Corollary \ref{cor:lipshitz} follows directly from Theorem \ref{main:unique}.
\subsection{Discussion of the assumptions on the tension}\label{sec:TensionAssumptions}
In this subsection we will discuss our assumptions on the scalar tension $\mathcal{T}(r)$ and on the tension map $\mathbf{T}(z) = \mathcal{T}(|z|)\hat{z}$ in \eqref{tension.map.def}. We separate our assumptions on the tension into two groups: the assumptions needed for the qualitative Theorem \ref{thm:main} versus the assumptions used to prove the quantitative bounds in Theorems \ref{thm:mainquant}, \ref{first:unique}, and \ref{main:unique}.
Our qualitative assumptions in Theorem \ref{thm:main} are very weak, only requiring
\begin{equation}\label{e:QualitativeScalarTension}
\left\{\begin{array}{l}
\mathcal{T} \in C^{1,1}_{loc}((0,\infty); (0,\infty)),
\\ \mathcal{T}'(r)>0, \ 0<r<\infty.
\end{array}\right.
\end{equation}
By $\mathcal{T} \in C^{k,\gamma}_{loc}$ or $C^{k,\gamma}_{loc}(0,\infty)$ for an integer $k\geq 0$ and $0\leq \gamma \leq 1$, we mean for any $0<a<b<\infty$ that $\mathcal{T}\in C^{k,\gamma}([a,b]; (0,\infty))$. For qualitative higher regularity, we also assume $\mathcal{T}\in C^{k, \gamma}_{loc}(0,\infty).$ Thus singularities or degeneracy at $r=0$ or as $r\to \infty$ are allowable, and in particular any positive power law $\mathcal{T}(r) = C r^p$ for $p>0$ and $C>0$ satisfies \eqref{e:QualitativeScalarTension}. Note that there is no requirement that $\lim\limits_{r\to \infty}\mathcal{T}(r) = \infty$, so a bounded function such as $\mathcal{T}(r)=\arctan(r)$ would also satisfy \eqref{e:QualitativeScalarTension}.
For our quantitative estimates, we work with tensions that have the following global bounds
\begin{equation}\label{e:QuantitativeScalarTension}
\left\{\begin{array}{l}
\mathcal{T}'\in W^{1,\infty}([0,\infty); [0,\infty)),
\\ \inf\limits_{0<r<\infty}\mathcal{T}'(r)\geq \lambda>0,
\\ \mathcal{T}(0) = 0.
\end{array}\right.
\end{equation}
For quantitative higher regularity and the strong continuity estimate, we also need to assume $\mathcal{T}\in C^{k, \gamma}_r[0,\infty)$ with $\mathcal{T}^{(k)}(0)=0$ for $k\geq 2$. This would be implied, for example, if $\mathcal{T}(r) = cr$ on $0\leq r\leq \epsilon$ for some $c>0$ and any small $\epsilon>0$. Note that the estimates we prove will depend on bounds for the tension map $\mathbf{T}(z)$, rather than on the scalar tension itself. The assumption that the higher order derivatives of $\mathcal{T}$ vanish at $0$ guarantees that $\mathcal{T}\in C^{k,\gamma}([0,\infty);[0,\infty))$ implies $\mathbf{T}\in C^{k,\gamma}(\mathbb R^2; \mathbb R^2),$ with $||\mathbf{T}||_{C^{k,\gamma}}$ controlled in terms of $||\mathcal{T}||_{C^{k,\gamma}}$.
Also note that the global lower bound $\inf \mathcal{T}'(r)\geq \lambda>0$ and $\mathcal{T}(0)=0$ give us a lower bound on the derivative of the tension map $\mathbf{T}$ as
\begin{equation}\label{e:DTdefn}
D\mathbf{T}(z) = \mathcal{T}'(|z|) \hat{z}\otimes \hat{z} + \frac{\mathcal{T}(|z|)}{|z|} \hat{z}^\perp\otimes \hat{z}^\perp \geq \lambda \mathcal{I},
\quad \lambda>0.
\end{equation}
Of course in the case of simple tension \eqref{linearF} where $\mathcal{T}(r) = k_0r$, it follows that $D\mathbf{T}(z) = k_0 \mathcal{I}$.
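For the reader's convenience, we sketch the computation behind \eqref{e:DTdefn} and the lower bound. Writing $\mathbf{T}(z) = \mathcal{T}(|z|)\hat{z}$ and using $\nabla_z |z| = \hat{z}$ together with $D_z \hat{z} = |z|^{-1}(\mathcal{I} - \hat{z}\otimes\hat{z}) = |z|^{-1}\hat{z}^\perp\otimes\hat{z}^\perp$ in two dimensions, the product rule gives
\begin{equation*}
D\mathbf{T}(z) = \mathcal{T}'(|z|)\, \hat{z}\otimes\hat{z} + \frac{\mathcal{T}(|z|)}{|z|}\, \hat{z}^\perp\otimes\hat{z}^\perp.
\end{equation*}
Moreover, since $\mathcal{T}(0)=0$, the mean value theorem yields $\frac{\mathcal{T}(|z|)}{|z|} = \mathcal{T}'(\xi) \geq \lambda$ for some $0<\xi<|z|$, so both eigenvalues of $D\mathbf{T}(z)$ are bounded below by $\lambda$, as claimed in \eqref{e:DTdefn}.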
For our quantitative estimates, we will typically state our assumptions for the tension map $\mathbf{T}(z)$ in \eqref{tension.map.def} by assuming $\forall z \in \mathbb R^2$ that \eqref{e:DTdefn} holds and further that
\begin{equation}\label{e:QuantitativeTensionMap}
\left\{\begin{array}{l} |D\mathbf{T}(z)|\leq \mathcal{C}_{1\TE},
\\ |D^2 \mathbf{T}(z)| \leq \mathcal{C}_{2\TE}.
\end{array}\right.
\end{equation}
Here $\mathcal{C}_{1\TE}$ and $\mathcal{C}_{2\TE}$ are any fixed positive finite constants that are allowed to be large. For our strong continuity estimate in Theorem \ref{main:unique}, for a fixed positive finite constant $\mathcal{C}_{3\TE}$, we additionally assume $\forall z \in \mathbb R^2$ that
\begin{equation}\label{tension.derivatives.continuity}
|D^3\mathbf{T}(z)|\leq \mathcal{C}_{3\TE}.
\end{equation}
Our quantitative estimates on higher regularity additionally depend on $||\mathbf{T}||_{C^{k,\gamma}}$.
Lastly, we note the apparent mismatch between our qualitative \eqref{e:QualitativeScalarTension} and quantitative \eqref{e:QuantitativeScalarTension} assumptions. That is, not every scalar tension $\mathcal{T}$ satisfying the qualitative assumptions will also satisfy the quantitative version. In particular, all positive power laws satisfy the former, but only the linear case satisfies the latter.
We are able to deal with these different assumptions for the following reason. Suppose that we have a tension $\mathcal{T}_1$ satisfying the quantitative assumptions \eqref{e:QuantitativeScalarTension}, and we use our quantitative estimates to construct a solution $\bm{X}:[0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem with tension $\mathcal{T}_1$. Then for any time $t$ and any $\theta\in \mathbb T$ we have $0<|\bm{X}(t)|_*\leq |\bm{X}'(t,\theta)|\leq ||\bm{X}'(t)||_{L^\infty_\theta}$. Taking $a = \inf_{0\leq t\leq T} |\bm{X}(t)|_*$ and $b = ||\bm{X}'||_{L^\infty_T (L^\infty_\theta)}$, we then have that $\bm{X}(t)$ is also a solution to the Peskin problem \eqref{peskin.general.tension} for any tension $\mathcal{T}_2$ such that $\mathcal{T}_2\big|_{[a,b]} = \mathcal{T}_1\big|_{[a,b]} $.
Now suppose that our tension $\mathcal{T}$ only satisfies the qualitative assumptions \eqref{e:QualitativeScalarTension}. These are still enough to guarantee that for any $0<a<b<\infty$, there exists a tension $\tilde{\mathcal{T}}$ such that $\tilde{\mathcal{T}}\big|_{[a,b]} = \mathcal{T}\big|_{[a,b]},$ and $\tilde{\mathcal{T}} $ satisfies the quantitative assumptions \eqref{e:QuantitativeScalarTension}. Thus, for any fixed initial data $\bm{X}_0$ with $\bm{X}'_0\in \dot{B}^{\frac12}_{2,1}\subseteq L^\infty$ and $|\bm{X}_0|_*>0$, we take some interval $(a,b)$ which compactly contains $\{|\bm{X}'_0(\theta)|: \theta\in \mathbb T\}$, and then we construct a solution $\bm{X}:[0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem with tension $\tilde{\mathcal{T}}$. Taking $T>0$ small enough that $\{|\bm{X}'_0(t,\theta)|: (t,\theta)\in [0,T]\times \mathbb T\}\subset (a,b)$, we then have that $\bm{X}(t)$ is also a solution to the Peskin problem with our original tension $\mathcal{T}$. We go through this argument again in more detail in the proof of our main theorem in \secref{sec:mainThmProof}.
\begin{remark}\label{remark:trick}
We note that the trick explained above and in the proof of our main theorem in \secref{sec:mainThmProof} always works for the kinds of solutions we consider with Definitions \ref{def:solution} and \ref{def:StrongSolution}. For the assumptions needed in order to apply this trick (to replace one tension with another) to fail, the solution would have to satisfy one of two conditions. Either (1) the solution $\bm{X}(t)$ violates the arc-chord condition \eqref{arc.cord.number} after an infinitesimal amount of time, $\liminf\limits_{t\to 0+} |\bm{X}(t)|_* = 0$, or (2) the $L^\infty$ norm of the solution $\bm{X}'(t)$ blows up after an infinitesimal amount of time: $\limsup\limits_{t\to 0+} ||\bm{X}'(t)||_{L^\infty} = \infty.$ It is not clear whether a notion of solution which obeys either of these two conditions starting from initial data with $\bm{X}_0'\in \dot{B}^{1/2}_{2,1}\subseteq L^\infty_\theta$ and $|\bm{X}_0|_*>0$ would represent a physical solution.
\end{remark}
\begin{remark}
At the same time, we remark that Theorems \ref{thm:mainquant}, \ref{first:unique} and \ref{main:unique} also hold if
we instead replace \eqref{e:QuantitativeTensionMap} and \eqref{tension.derivatives.continuity} with
\begin{equation}\notag
\bigg| D^{(k)}\mathbf{T}(z)\big|_{z=\bm{X}'}\bigg|\leq \mathcal{C}_{k\TE}(|| \bm{X}' ||_{L^\infty_\theta}, |\bm{X}|_*^{-1}),
\end{equation}
where for $k\in\{1,2,3\}$, $\mathcal{C}_{k\TE}=\mathcal{C}_{k\TE}(|| \bm{X}' ||_{L^\infty_\theta}, |\bm{X}|_*^{-1})$ is any increasing function of both variables. Under these conditions, the proofs of those theorems in this paper continue to hold without any essential modifications. Further, the solutions constructed under the assumptions in this remark would prevent the occurrence of (1) or (2) in the previous Remark \ref{remark:trick}.
\end{remark}
\subsection{A de la Vallée Poussin lemma}
Motivated by the work in \cite{AlaNgu2021lipsh,2009.08442,2010.06915}, we now prove the following de la Vallée Poussin type lemma.
\begin{lemma}\label{lem:VallePoisson} Fix any $p\in [1,\infty]$, $r\in [1,\infty)$, and $s\in (0,1)$. Given any function $f$ satisfying $|| f||_{\dot{B}^s_{p,r}(\mathbb T)} <\infty$, there exists a function $\mu$, depending upon $f$, satisfying the assumptions of Definition \ref{subw.definition} such that $|| f||_{\dot{B}^{s,\mu}_{p,r}(\mathbb T)} <\infty$.
\end{lemma}
The proof builds upon the related lemma from \cite[Lemma 3.8 on page 35]{AlaNgu2021lipsh}.
\begin{proof} Since $|| f||_{\dot{B}^s_{p,r}(\mathbb T)} <\infty$, after a simple change of variables we have that
\begin{equation}\notag
|| f||_{\dot{B}^s_{p,r}(\mathbb T)}^r
=
\int_0^\pi \frac{d\beta}{|\beta|^{1+sr}} \left( ||\delta_\beta f||_{L^p}^r +||\delta_{-\beta} f||_{L^p}^r\right) < \infty.
\end{equation}
We now define
\begin{equation*}
h_{p,r}(\beta) \eqdef ||\delta_\beta f||_{L^p}^r+||\delta_{-\beta} f||_{L^p}^r, \quad \omega(\alpha) \eqdef \pi^{-sr}\alpha^{sr-1} h_{p,r}(\pi \alpha^{-1}).
\end{equation*}
Then we will use the change of variables $\alpha = \pi \beta^{-1}$ to obtain
\begin{equation}\notag
\int_1^\infty d\alpha ~\omega(\alpha)
=
\int_0^\pi \frac{d\beta}{|\beta|^{1+sr}} \left( ||\delta_\beta f||_{L^p}^r +||\delta_{-\beta} f||_{L^p}^r\right) < \infty.
\end{equation}
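In more detail, with $\alpha = \pi \beta^{-1}$ we have $d\alpha = -\pi \beta^{-2}\, d\beta$ and $\alpha^{sr-1} = \pi^{sr-1}\beta^{1-sr}$, so that
\begin{equation*}
\omega(\alpha)\, d\alpha
= \pi^{-sr}\, \pi^{sr-1}\beta^{1-sr}\, h_{p,r}(\beta)\, \pi\beta^{-2}\, d\beta
= \frac{h_{p,r}(\beta)}{|\beta|^{1+sr}}\, d\beta,
\end{equation*}
where the reversal of the limits of integration (from $\alpha\in(1,\infty)$ to $\beta\in(0,\pi)$) absorbs the minus sign.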
By \cite[Lemma 3.8 on page 35]{AlaNgu2021lipsh} there then exists some function $\nu:[1,\infty)\to [1,\infty) \ $ satisfying the conditions of Definition \ref{subw.definition} such that
\begin{equation}\notag
\int_1^\infty d\alpha ~\omega(\alpha)~\nu(\alpha)
=
\int_0^\pi \frac{d\beta}{|\beta|^{1+sr}} \nu(\pi |\beta|^{-1}) \left( ||\delta_\beta f||_{L^p}^r +||\delta_{-\beta} f||_{L^p}^r\right) < \infty.
\end{equation}
Taking $\mu(|\beta|^{-1}) = \nu(\pi |\beta|^{-1})^{1/r}$, we have that $\mu$ satisfies the conditions of Definition \ref{subw.definition} as well, and further $|| f||_{\dot{B}^{s,\mu}_{p,r}(\mathbb T)} <\infty$.\end{proof}
We point out that, using Lemma \ref{lem:VallePoisson}, Theorem \ref{thm:main} follows immediately from Theorems \ref{thm:mainquant} and \ref{first:unique}.
\subsection{The $\Lambda$ operator}\label{sec:para}
For a function $f:\mathbb R\to \mathbb C$ the operator $\Lambda^s=(-\Delta)^{\frac{s}{2}}$ is commonly defined for any $s\in (0,2)$ as
\begin{equation*}
-\Lambda^s f(x) \eqdef C_{s} \text{pv}\hspace{-0.1cm}\int_{\mathbb R} \frac{(\delta_y+\delta_{-y})f(x)}{|y|^{1+s}} dy, \quad C_s \eqdef \frac{2^s \bm{\Gamma}(\frac12 (1+s))}{2\pi^{\frac12}|\bm{\Gamma}(-\frac{s}{2})|}.
\end{equation*}
Here we use the principal value integral when it is needed, and $\bm{\Gamma}$ is the standard Gamma function.
Then for $f:\mathbb T \to \mathbb C$, identified as a periodic function on $\mathbb R$, this can readily be reduced to
\begin{equation}\label{lambda.s.def}
-\Lambda^s f(\theta) = C_{s}\int_{\mathbb T} (\delta_\alpha+\delta_{-\alpha})f(\theta) \sum_{k \in \mathbb Z}\frac{1}{|\alpha + 2\pi k|^{1+s}} d\alpha.
\end{equation}
The above can be taken as the definition of $\Lambda^s$ on $\mathbb T$. Now for simplicity we define the notation $\mathcal{S}(\alpha)$ as
\begin{equation}\label{distance.alpha}
\mathcal{S}(\alpha) \eqdef 2 \sin(\alpha/2).
\end{equation}
Then we have the following known expansion formula
\begin{equation}\notag
\frac{1}{\mathcal{S}(\alpha)^2}
=
\sum_{n=-\infty}^{+\infty} \frac{1}{(\alpha+2\pi n)^2}, \quad 0 < |\alpha| \leq \pi .
\end{equation}
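As a quick numerical sanity check of this classical expansion, one can truncate the lattice sum and compare with the closed form; the sketch below (the function names are ours, purely illustrative) verifies the identity on $0<|\alpha|\leq \pi$ up to the $O(1/n_{\max})$ truncation error.

```python
import math

def s_sq_inverse(alpha: float) -> float:
    """Left-hand side 1 / S(alpha)^2 with S(alpha) = 2 sin(alpha/2)."""
    return 1.0 / (2.0 * math.sin(alpha / 2.0)) ** 2

def lattice_sum(alpha: float, n_max: int = 200_000) -> float:
    """Truncation of sum over n of 1/(alpha + 2 pi n)^2; tail is O(1/n_max)."""
    total = 1.0 / alpha ** 2  # n = 0 term
    for n in range(1, n_max + 1):
        total += 1.0 / (alpha + 2.0 * math.pi * n) ** 2
        total += 1.0 / (alpha - 2.0 * math.pi * n) ** 2
    return total

# Agreement to within the truncation error on 0 < |alpha| <= pi.
for alpha in (0.3, 1.0, 2.5, -3.0):
    assert abs(s_sq_inverse(alpha) - lattice_sum(alpha)) < 1e-5
```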
Thus for $s=1$ the $\Lambda$ operator on $\mathbb T$ has the following succinct formula
\begin{equation}\label{lambda.definition}
- \Lambda f(\theta) = \frac{1}{\pi} \int_{\mathbb T} \frac{ f(\alpha) -f(\theta) }{\mathcal{S}(\theta - \alpha)^2}d\alpha
= \frac{1}{\pi} \int_{\mathbb T} \frac{\delta_\alpha f(\theta)}{\mathcal{S}(\alpha)^2} d\alpha.
\end{equation}
Notice further that we have
\begin{equation}\label{sine.bound}
\frac{2}{\pi} \leq \frac{ \mathcal{S}(\alpha)}{\alpha} \leq 1, \quad \forall \alpha \in {\mathbb T}.
\end{equation}
In particular, in the $L^p(\mathbb T)$ sense, \eqref{lambda.definition} is equivalent to the operator with $\alpha^2$ in the denominator instead of $\mathcal{S}(\alpha)^2$. This discussion motivates the following simplified notation that we will use in the rest of this article
\begin{equation}\label{tildeLambda:eq}
- \widetilde{\Lambda} f(\theta) \eqdef \frac{1}{4\pi} \int_{\mathbb T} \frac{\delta_\alpha f(\theta)}{\alpha^{2}} d\alpha.
\end{equation}
By \eqref{sine.bound} the operator $\widetilde{\Lambda}$ is equivalent to $\Lambda$ from \eqref{lambda.definition} in the $L^p(\mathbb T)$ norms.
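As a concrete illustration of \eqref{lambda.definition}: since $\Lambda e^{ik\theta} = |k| e^{ik\theta}$, the function $\cos\theta$ is an eigenfunction of $\Lambda$ with eigenvalue $1$. The sketch below (our own illustrative code, not from the paper) evaluates \eqref{lambda.definition} with a midpoint rule symmetric about $\alpha = 0$; pairing the contributions at $\pm\alpha$ realizes the principal value by cancellation.

```python
import math

def lambda_op_cos(theta: float, n: int = 2048) -> float:
    """Approximate (Lambda cos)(theta) using the torus formula
    -Lambda f(theta) = (1/pi) pv int_T delta_alpha f(theta) / S(alpha)^2 dalpha,
    with S(alpha) = 2 sin(alpha/2) and a midpoint rule symmetric about 0."""
    h = math.pi / n  # 2n midpoint nodes on (-pi, pi)
    total = 0.0
    for j in range(n):
        a = (j + 0.5) * h
        s2 = (2.0 * math.sin(a / 2.0)) ** 2
        # Pairing +a and -a cancels the odd (principal value) singularity.
        pair = (math.cos(theta + a) - math.cos(theta)) / s2 \
             + (math.cos(theta - a) - math.cos(theta)) / s2
        total += pair * h
    return -total / math.pi

# Eigenfunction check: Lambda cos = cos.
for theta in (0.0, 0.7, 2.0, -1.3):
    assert abs(lambda_op_cos(theta) - math.cos(theta)) < 1e-9
```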
More generally, for $s\in (0,2)$ we write the previous sum as
\begin{equation*}
\sum_{k \in \mathbb Z}\frac{1}{|\alpha + 2\pi k|^{1+s}}
=
\frac{1}{|\alpha|^{1+s}}\left(1
+
\sum_{k \neq 0}\frac{|\alpha|^{1+s}}{|\alpha + 2\pi k|^{1+s}}\right)
=\frac{1}{|\alpha|^{1+s}} \left(1+ \mathfrak{U}(\alpha)\right).
\end{equation*}
Notice that for $\alpha \in \mathbb T$ the series $\mathfrak{U}(\alpha)$ converges uniformly. Also $\mathfrak{U}(\alpha)$ is non-negative and is uniformly bounded for $\alpha \in \mathbb T$. We conclude that
\begin{equation*}
\sum_{k \in \mathbb Z}\frac{1}{|\alpha + 2\pi k|^{1+s}}
\approx
\frac{1}{|\alpha|^{1+s}},
\quad \forall \alpha \in \mathbb T.
\end{equation*}
Thus, again in the $L^p(\mathbb T)$ sense, $\Lambda^s$ is equivalent for any $s\in(0,2)$ to the operator with $\frac{C_{s}}{|\alpha|^{1+s}}$ in place of $C_{s}\sum_{k \in \mathbb Z} \frac{1}{|\alpha + 2\pi k|^{1+s}}$ in \eqref{lambda.s.def}.
\subsection{Overview of the proof}\label{subsec:proofOverview} One very important point in the proof is the derivation of the equation \eqref{peskin.general.tension} with the kernel \eqref{kerbel.eqn.deriv}. It is crucial that the equation \eqref{peskin.general.tension} cancels the second order derivatives that are present in \eqref{e:boundaryintegral}. Let $\nabla_{\bm{X}'}$ denote the directional derivative in the direction $\bm{X}'$ as in \eqref{directional.def}; then, with \eqref{stokeslet.def}, the main idea can be seen in the identity
\begin{equation}\notag
[\nabla_{\bm{X}'(\theta+\alpha)}G_1](\delta_\alpha \bm{X})\delta_\alpha \bm{X} + G_2(\delta_\alpha \bm{X}) \bm{X}'(\theta+\alpha)=0.
\end{equation}
Fortunately this type of cancellation is preserved when we take higher order derivatives of the equation \eqref{e:boundaryintegral}. This more general cancellation structure is observed via a sequence of integrations by parts performed in \secref{sec:GenTenDerivation}.
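This pointwise cancellation can be checked directly once the forms of $G_1$ and $G_2$ are fixed. The sketch below assumes the standard 2D Stokeslet split, $G_1(z) = -\frac{1}{4\pi}\log|z|\,\mathcal{I}$ and $G_2(z) = \frac{1}{4\pi}\hat{z}\otimes\hat{z}$ (our assumption here; \eqref{stokeslet.def} appears elsewhere in the paper and may differ by a common constant, which does not affect the cancellation). The function names are ours.

```python
import math

# Assumed standard 2D Stokeslet split (up to a harmless common factor):
#   G1(z) = -(1/4pi) log|z| I,   G2(z) = (1/4pi) zhat (tensor) zhat.
def grad_w_G1_times_z(z, w):
    """[nabla_w G1](z) z = -(1/4pi) (z . w)/|z|^2  z."""
    r2 = z[0] ** 2 + z[1] ** 2
    c = -(z[0] * w[0] + z[1] * w[1]) / (4.0 * math.pi * r2)
    return (c * z[0], c * z[1])

def G2_times_w(z, w):
    """G2(z) w = (1/4pi) (zhat . w) zhat = (1/4pi) (z . w)/|z|^2  z."""
    r2 = z[0] ** 2 + z[1] ** 2
    c = (z[0] * w[0] + z[1] * w[1]) / (4.0 * math.pi * r2)
    return (c * z[0], c * z[1])

# The two contributions cancel exactly for arbitrary z (= delta_alpha X)
# and w (= X'(theta + alpha)).
for z, w in [((1.0, 0.2), (0.3, -0.7)), ((-2.0, 1.5), (0.0, 1.0))]:
    a, b = grad_w_G1_times_z(z, w), G2_times_w(z, w)
    assert abs(a[0] + b[0]) < 1e-15 and abs(a[1] + b[1]) < 1e-15
```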
Then the heart of our argument is the initial a priori estimate \eqref{e:primaryestimate}. In order to prove this, we make use of our new formulation of the equation for $\partial_t \bm{X}'$ in \eqref{peskin.general.tension}. Because $\mathcal{K}(\theta, \alpha)$ is symmetric in $\theta, \theta+\alpha$, our equation \eqref{peskin.general.tension} has divergence form symmetry, making $L^2$-based energy estimates a useful choice. By making use of Besov spaces, we are then interested in keeping careful track of the time evolution of differences $||\delta_\beta \bm{X}'||_{L_\theta^2}(t)$ where $\beta\in \mathbb T$ is arbitrary. Taking into account \eqref{kerbel.eqn.deriv} and \eqref{peskin.expand.tension} with \eqref{tildeLambda:eq}, we have that $\delta_\beta \bm{X}'$ solves the equation
\begin{equation}\notag
\partial_t \delta_\beta \bm{X}' + \widetilde{\Lambda} \delta_\beta \mathbf{T}(\bm{X}') = \int_\mathbb T \frac{d\alpha}{\alpha^2} \delta_\beta \left(\mathcal{A}[\bm{X}](\theta, \alpha)\delta_\alpha \mathbf{T}(\bm{X}')\right).
\end{equation}
When we calculate $\frac{d}{dt} ||\delta_\beta \bm{X}'||_{L^2_\theta}^2$, we then get one good diffusive term $-\lambda ||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2$ from the term $\widetilde{\Lambda} \delta_\beta \mathbf{T}(\bm{X}')$ (along with additional error terms if our tension is not simple). We treat the remaining terms as error, and then we are left to bound integrals (for $q = 1, 2$) of the form
\begin{equation}\notag
\int_\mathbb T d\theta\int_\mathbb T \frac{d\alpha}{\alpha^2}~ |\delta_\beta \delta_\alpha \bm{X}'(\theta)|^2 ~ |\delta_\alpha \bm{X}'(\theta)|^q,
\end{equation}
and
\begin{equation}\notag
\int_\mathbb T d\theta\int_\mathbb T \frac{d\alpha}{\alpha^2}~ |\delta_\beta \delta_\alpha \bm{X}'(\theta)| \ |\delta_\beta \bm{X}'(\theta)| \ |\delta_\alpha \bm{X}'(\theta)|^{q+1}.
\end{equation}
If we were to bound the first term naively, we would get
\begin{equation}\notag
C ||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2 ||\bm{X}'||_{L^\infty_\theta}^q,
\end{equation}
which would make it impossible to close the estimate, as this is of the same order as our good diffusive term but with a possibly large coefficient in front for large data. However, the norm for $\dot{B}^{\frac12, \mu}_{2,1}$ both controls the size of the norm $\dot{B}^{\frac12}_{2,1}$ and the rate of decay for
\begin{equation}\label{rate.decay}
r \to \int_{|\alpha|<r}d\alpha \frac{||\delta_\alpha f||_{L^2_\theta}}{|\alpha|^{3/2}} \lesssim \frac{||f||_{\dot{B}^{1/2, \mu}_{2,1}}}{\mu(r^{-1})}.
\end{equation}
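To see \eqref{rate.decay}: assuming, as in Definition \ref{subw.definition}, that $\mu$ is nondecreasing, for $|\alpha|<r$ we have $\mu(|\alpha|^{-1})\geq \mu(r^{-1})$, and hence
\begin{equation*}
\int_{|\alpha|<r} d\alpha\, \frac{||\delta_\alpha f||_{L^2_\theta}}{|\alpha|^{3/2}}
\leq
\frac{1}{\mu(r^{-1})} \int_{|\alpha|<r} d\alpha\, \mu(|\alpha|^{-1})\, \frac{||\delta_\alpha f||_{L^2_\theta}}{|\alpha|^{3/2}}
\leq
\frac{||f||_{\dot{B}^{1/2, \mu}_{2,1}}}{\mu(r^{-1})}.
\end{equation*}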
Thus splitting the integral in our error term between $|\alpha|<\eta$ and $|\alpha|>\eta$ for some $\eta$ sufficiently small depending on $\mu$, $||\bm{X}'_0||_{\dot{B}^{1/2, \mu}_{2,1}}$, and other relevant constants, we are able to bound this error term for any small $\epsilon>0$ as
\begin{equation}\notag
\epsilon ||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2 + C_{\epsilon} ||\delta_\beta \bm{X}'||_{L^2_\theta}^2,
\end{equation}
which we can handle. For the second type of error term, the story is similar except that we are forced to bound the $|\delta_\beta \bm{X}'|$ in $L^\infty$, as it has no decay as $\alpha\to 0$. Thus we end up with an error term of the form
\begin{equation}\notag
\epsilon||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2 + C ||\delta_\beta \bm{X}'||_{L^2_\theta}^2 + \epsilon ||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2.
\end{equation}
This $L^\infty$ error term at first seems very bad, as notably the Sobolev embedding fails in $L^\infty$ and $||\delta_\beta\bm{X}'||_{L^\infty_\theta}^2$ is not controlled by our good diffusive piece $-\lambda ||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2$. However, once we integrate in $\beta$ against $\mu(|\beta|^{-1}) |\beta|^{-3/2}$ the Sobolev embedding is again true, and we can control this error term at the end of the estimate.
It is also vitally important to get a positive bound from below on the arc-chord condition $|\bm{X}(t)|_*$. In order to do this, we make use of the estimate
\begin{equation}\notag
\big| |\bm{X}(t)|_* - |\bm{X}_0|_* \big| \leq ||\bm{X}'(t)-\bm{X}'_0||_{L^\infty}.
\end{equation}
Thus in \secref{sec:ArcChord} we prove continuity of the map $t\to \bm{X}'(t)$ in $L^\infty_\theta$ for small times. Our main a priori estimate \eqref{e:primaryestimate} grants us uniform bounds on the $\BS_T$ and $\mathcal{D}_T^\subw$ norms. Using our $\mathcal{D}_T^\subw$ bound, we then control $\partial_t \bm{X}'$ in $L_{t,\theta}^2$ and use this to prove continuity of $\bm{X}'(t)$ in $L_\theta^2$. Continuity in time in $L_\theta^2$ and our bound in $\BS_T$ then gives us continuity in time in $\dot{B}^{1/2}_{2,1}$, which controls $L^\infty_\theta$.
The strong continuity estimate given in Theorem \ref{main:unique} is for the most part similar to our main a priori estimate \eqref{e:primaryestimate}. However, obtaining this estimate requires subtracting two solutions of the equation \eqref{peskin.general.tension}, which in turn requires using the higher order bound \eqref{tension.derivatives.continuity}. Additionally, when taking the difference of two solutions $\bm{X}'$ and $\bm{Y}'$ to \eqref{peskin.general.tension}, we encounter a new term of the form
\begin{equation}\label{e:StrongContNewBad}
\mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| |\tau_\beta \mathcal{K}[\bm{X}](\theta,\alpha)|
| (\bm{X}'- \bm{Y}')(\theta)| | \delta_\beta \delta_\alpha \bm{Y}'(\theta)|.
\end{equation}
The structure of this term does not allow us to obtain extra smallness from the rate of decay in \eqref{rate.decay} in the energy estimate. This major difficulty prevents closing the strong continuity estimate in the norm of $\BS_T$ in \eqref{C.space.temporal}. Instead we simply bound this term by
\begin{equation}\notag
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C \lambda^{-1} \mathcal{C}_{2\TE}^2 || \bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{Y}'||_{L^2_\theta}^2.
\end{equation}
The term $|| \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2$ can be controlled by the dissipation. But we also require a small constant in front of $|| \bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2 $ to close the continuity estimate. For this reason instead of using the norm $\BS_T$ with $\mu$ satisfying Definition \ref{subw.definition}, we need to introduce an equivalent norm as in \eqref{nu.definition} for a small constant $\varepsilon=\varepsilon(\lambda, \mathcal{C}_{2\TE},||\bm{X}'_0||_{\dot{B}^{\frac12,\mu}_{2,1}},||\bm{Y}'_0||_{\dot{B}^{\frac12,\mu}_{2,1}})>0$ as
\begin{equation}\notag
\nu(r) \eqdef 1 + \varepsilon \mu(r).
\end{equation}
Then $\nu$ also satisfies Definition \ref{subw.definition}, the norm of $\BN_T$ is equivalent to the norm of $\BS_T$, and we are able to close the continuity estimate in $\BN_T$.
We also prove continuity for $\bm{X}'(t)-\bm{Y}'(t)$ in $L^2_\theta.$ This estimate is much simpler than the strong continuity estimate, and only requires $D\mathbf{T} \in W^{1,\infty}$ rather than $D\mathbf{T} \in W^{2,\infty}$. In particular, by making use of our a priori estimate in the higher order $\BS_T$ and $\mathcal{D}_T^\subw$ norms, we are able to bound a term like \eqref{e:StrongContNewBad} directly without changing to some equivalent norm. Continuity in $L^2_\theta$ and a bound in $\BS_T$ then implies that we have control over $\bm{X}'-\bm{Y}'$ in the $\mathcal{B}^\omega_T$ norm, for any function $\omega$ satisfying that $\frac{\omega(r)}{\mu(r)}$ is eventually decreasing with $\displaystyle\lim\limits_{r\to \infty} \frac{\omega(r)}{\mu(r)} = 0.$
Our higher regularity proofs are contained in \secref{sec:smoothing}. We begin by proving an $L^\infty_t \dot{H}^1$ estimate for $\bm{X}'$ and then establish regularity of the remainder from \eqref{peskin.expand.tension}:
\begin{equation}\notag
\mathcal{V}(\theta) \eqdef \int_{\mathbb T} \frac{d\alpha}{\alpha^2}~ \mathcal{A}[\bm{X}](\theta, \alpha)\delta_\alpha \mathbf{T}(\bm{X}'(\theta)),
\end{equation}
in terms of the regularity of $\bm{X}'$. Following the proof in \cite{MR3656476} for the scalar fractional porous medium equation, we then establish higher regularity for the fully nonlinear Peskin problem with a bootstrapping argument.
\subsection{Outline} In the next \secref{sec:GenTenDerivation} we will derive the equation \eqref{peskin.general.tension} that we will study in the rest of this work. Then in \secref{sec:BesovSpace} we will prove our main a priori estimate. After that, in \secref{sec:ArcChord}, we will explain how we control the arc-chord condition \eqref{arc.cord.number} a priori along the time evolution of \eqref{peskin.general.tension}. Then in \secref{sec:DifferenceBesovSpace} we prove the a priori continuity estimates for solutions to \eqref{peskin.general.tension} that enable us to establish the strong continuity and uniqueness.
Next in \secref{sec:smoothing} we prove the higher order smoothing effects.
Finally in \secref{sec:mainThmProof} we collect the previous results to explain the proof of our main theorems. Afterwards in \secref{sec:LPtorus} we explain some of the inequalities that we use in the previous sections of this text using the Littlewood-Paley decomposition on the torus. Lastly in \secref{sec:kernelDIFF} we give the difference estimates for the kernel \eqref{kerbel.eqn.deriv} and \eqref{kerbel.A.eqn.deriv} of the equation \eqref{peskin.general.tension}.
\section{Derivation of the general tension equation}\label{sec:GenTenDerivation}
In this section we will derive our alternative formulation of the equation for $\bm{X}'(\theta)$ as in \eqref{peskin.general.tension} with \eqref{tension.map.def} and \eqref{kernel.peskin.noExpand}. It is important for our main theorems in this paper that the equation \eqref{peskin.general.tension} does not contain any terms with $\bm{X}''(\theta)$ or higher derivatives. This is not obvious, because the equation \eqref{e:boundaryintegral} does in fact contain terms with $\bm{X}''(\theta)$. We therefore explain the cancellation necessary to show that these higher derivative terms do not occur. We will first derive an alternative form of the equation for $\partial_t \bm{X}(\theta)$ in \eqref{peskin.equation.first.order.final}. Afterwards we will derive in \eqref{general.tension} the equation for $\partial_t \bm{X}'(\theta)$ that we have written previously in \eqref{peskin.general.tension}.
To this end, with a general tension $\mathcal{T}$ as in \eqref{tension.map.def}, the Peskin problem \eqref{e:boundaryintegral} takes the form of an equation for the parametrization
\begin{equation}\notag
\partial_t \bm{X}(\theta) = \int G(\bm{X}(\eta)-\bm{X}(\theta)) \partial_\eta \left( \mathbf{T}(\bm{X}')(\eta)\right)d\eta,
\end{equation}
where $G(z)= G_1(z) + G_2(z)$ is the matrix valued function from \eqref{stokeslet.def} and $\mathbf{T}(z)$ is the tension map from \eqref{tension.map.def}. In this section we will write the integral, $\int$, without a domain such as $\mathbb T$ to emphasize that our calculations in this section are independent of the parametrization.
Next making the change of variables $\eta = \theta+\alpha$ and using \eqref{delta.notation}, we write
\begin{multline*}
\partial_t \bm{X}(\theta) = \int G_1(\delta_\alpha \bm{X}) \partial_\alpha \left( \mathcal{T}(|\bm{X}'|(\theta+\alpha))\widehat{\bm{X}'}(\theta+\alpha)\right)d\alpha
\\
+ \int G_2(\delta_\alpha \bm{X}) \partial_\alpha \left( \mathcal{T}(|\bm{X}'|(\theta+\alpha))\widehat{\bm{X}'}(\theta+\alpha)\right)d\alpha.
\end{multline*}
First we will focus on the term involving $G_1(z)$.
We use integration by parts and $\bm{X}'(\theta+\alpha) = \partial_\alpha (\delta_\alpha \bm{X}(\theta))$ to obtain
\begin{multline*}
\int G_1(\delta_\alpha \bm{X}) \partial_\alpha \left( \mathcal{T}(|\bm{X}'|(\theta+\alpha))\widehat{\bm{X}'}(\theta+\alpha)\right)d\alpha
\\
= -\int \partial_\alpha [G_1(\delta_\alpha \bm{X})] \mathcal{T}(|\bm{X}'|(\theta+\alpha)) \widehat{\bm{X}'}(\theta+\alpha) d\alpha
\\ = -\int \partial_\alpha [G_1(\delta_\alpha \bm{X})] \frac{\mathcal{T}(|\bm{X}'|(\theta+\alpha))}{|\bm{X}'|(\theta+\alpha)} \partial_\alpha (\delta_\alpha \bm{X}(\theta)) d\alpha
\\ = \int \partial_\alpha \left( \frac{\mathcal{T}(|\bm{X}'|(\theta+\alpha))}{|\bm{X}'|(\theta+\alpha)} \partial_\alpha [G_1(\delta_\alpha \bm{X})]\right)\delta_\alpha \bm{X}(\theta) d\alpha.
\end{multline*}
We plug this calculation back into the equation to obtain
\begin{multline}\label{equation.intermediate}
\partial_t \bm{X}(\theta) = \int G(\delta_\alpha \bm{X}) \partial_\alpha\left( \frac{\mathcal{T}(|\bm{X}'|)}{|\bm{X}'|} \bm{X}'\right)(\theta+\alpha) d\alpha
\\= \int \partial_\alpha \left( \frac{\mathcal{T}(|\bm{X}'|)(\theta+\alpha)}{|\bm{X}'|(\theta+\alpha)} \partial_\alpha [G_1(\delta_\alpha \bm{X})]\right)\delta_\alpha \bm{X}(\theta) d\alpha
\\
+ \int G_2(\delta_\alpha \bm{X}) \partial_\alpha \left( \frac{\mathcal{T}(|\bm{X}'|)(\theta+\alpha)}{|\bm{X}'|(\theta+\alpha)} \bm{X}'(\theta+\alpha)\right) d\alpha.
\end{multline}
Next let $\nabla_u$ denote the directional derivative in the direction $u\in \mathbb R^2$, i.e.
\begin{equation}\label{directional.def}
\nabla_uf(z) \eqdef \lim\limits_{\epsilon\to 0+} \frac{f(z+\epsilon u)-f(z)}{\epsilon}.
\end{equation}
For the matrix valued functions $G_1(z)$ and $G_2(z)$ from \eqref{stokeslet.def} direct calculation gives
\begin{equation}\notag
4\pi\left\{\begin{array}{ll}
\nabla_u G_1(z) & = - \frac{u\cdot \hat{z}}{|z|} \mathcal{I},
\\ \nabla_u G_2(z) & = \frac{u\cdot \hat{z}^\perp}{|z|} \mathcal{R}(z),
\\ \nabla_u \nabla_v G_1(z) & = \frac{u \cdot \mathcal{P}(z) v}{|z|^2} \mathcal{I},
\\ \nabla_u \nabla_v G_2(z) &= -\frac{u \cdot \mathcal{R}(z)v}{|z|^2} \mathcal{R}(z) + \frac{u \cdot (\mathcal{P}(z)-\mathcal{I})v}{|z|^2} \mathcal{P}(z), \end{array}\right.
\end{equation}
where $\mathcal{R}(z)$ and $\mathcal{P}(z)$ are the reflection matrices from \eqref{matrix.operators}. Thus
\begin{equation}\notag
\partial_\alpha \left[G_1(\delta_\alpha \bm{X})\right] = \left[\nabla_{\bm{X}'(\theta+\alpha)} G_1\right](\delta_\alpha
\bm{X}),
\end{equation}
and
\begin{equation}\notag
\partial_\alpha^2 [G_1(\delta_\alpha \bm{X})] = \left[\nabla^2_{\bm{X}'(\theta+\alpha)} G_1\right](\delta_\alpha \bm{X}) + \left[\nabla_{\bm{X}''(\theta+\alpha)} G_1\right] (\delta_\alpha \bm{X}).
\end{equation}
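For the reader's convenience, we sketch how the first two of the formulas for the derivatives of $G_1$ and $G_2$ arise; here we recall (cf.\ \eqref{stokeslet.def}) that $G_1(z) = -\frac{1}{4\pi}(\log |z|)\,\mathcal{I}$ and $G_2(z) = \frac{1}{4\pi}\, \hat{z}\otimes\hat{z}$, and that $\mathcal{R}(z) = \hat{z}\otimes\hat{z}^\perp + \hat{z}^\perp\otimes\hat{z}$ is the reflection matrix from \eqref{matrix.operators}. Directly from \eqref{directional.def} we have
\begin{equation}\notag
4\pi \nabla_u G_1(z) = -\frac{u\cdot z}{|z|^2}\,\mathcal{I} = -\frac{u\cdot \hat{z}}{|z|}\,\mathcal{I},
\end{equation}
while for $G_2$ the product rule gives
\begin{equation}\notag
4\pi \nabla_u G_2(z)
= \frac{u\otimes z + z\otimes u}{|z|^2} - 2\,\frac{(u\cdot z)\, z\otimes z}{|z|^4}
= \frac{u\cdot \hat{z}^\perp}{|z|}\left(\hat{z}\otimes\hat{z}^\perp + \hat{z}^\perp\otimes\hat{z}\right)
= \frac{u\cdot \hat{z}^\perp}{|z|}\,\mathcal{R}(z),
\end{equation}
where in the second equality we decomposed $u = (u\cdot\hat{z})\hat{z} + (u\cdot\hat{z}^\perp)\hat{z}^\perp$ and observed that the $(u\cdot\hat{z})$ contributions cancel. The second order formulas follow similarly by differentiating once more.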
We now {\it claim} that
\begin{equation}\label{peskin.equation.first.order}
\partial_t \bm{X}(\theta) = \int \left[\nabla^2_{\bm{X}'(\theta+\alpha)} G_1\right](\delta_\alpha \bm{X}) \frac{\mathcal{T}(|\bm{X}'|(\theta+\alpha))}{|\bm{X}'|(\theta+\alpha)} \delta_\alpha \bm{X}(\theta)d\alpha.
\end{equation}
Then \eqref{peskin.equation.first.order} directly implies that
\begin{equation}\label{peskin.equation.first.order.final}
\partial_t \bm{X}(\theta) = \frac{1}{4\pi}\int \frac{\bm{X}'(\theta+\alpha) \cdot \mathcal{P}(\delta_\alpha \bm{X})\bm{X}'(\theta+\alpha)}{|\delta_\alpha \bm{X}|^2} \frac{\mathcal{T}(|\bm{X}'|(\theta+\alpha))}{|\bm{X}'|(\theta+\alpha)} \delta_\alpha \bm{X}(\theta) d\alpha.
\end{equation}
The identity \eqref{peskin.equation.first.order.final} will be our main expression for the Peskin equation for $\bm{X}(\theta)$.
Now \eqref{equation.intermediate} and the previous calculations imply that
\begin{equation}\notag
\begin{split}
\partial_t \bm{X}(\theta) &= \int \left[\nabla_{\bm{X}'(\theta+\alpha)}^2 G_1\right](\delta_\alpha \bm{X}) \frac{\mathcal{T}(|\bm{X}'|(\theta+\alpha))}{|\bm{X}'|(\theta+\alpha)} \delta_\alpha \bm{X}(\theta)d\alpha
\\& \quad + \int \frac{\mathcal{T}(|\bm{X}'|)}{|\bm{X}'|} \left([\nabla_{\bm{X}''(\theta+\alpha)}G_1](\delta_\alpha \bm{X})\delta_\alpha \bm{X} + G_2(\delta_\alpha \bm{X}) \bm{X}''(\theta+\alpha)\right)d\alpha
\\& \quad + \int \partial_\alpha \left(\frac{\mathcal{T}(|\bm{X}'|)}{|\bm{X}'|}\right) \left([\nabla_{\bm{X}'(\theta+\alpha)}G_1](\delta_\alpha \bm{X})\delta_\alpha \bm{X} + G_2(\delta_\alpha \bm{X}) \bm{X}'(\theta+\alpha)\right)d\alpha.
\end{split}
\end{equation}
Now to prove the {\it claim} \eqref{peskin.equation.first.order}, with \eqref{stokeslet.def} we use
\begin{equation}\label{directional.cancellation}
[\nabla_{u} G_1(z)]z = -\frac{1}{4\pi} \frac{u\cdot \widehat{z} }{|z|} z
= -\frac{u\cdot \widehat{z}}{4\pi}\widehat{z}
= - G_2(z) u.
\end{equation}
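Concretely, taking $u = \bm{X}''(\theta+\alpha)$ and $z = \delta_\alpha \bm{X}(\theta)$ in \eqref{directional.cancellation} gives
\begin{equation}\notag
[\nabla_{\bm{X}''(\theta+\alpha)} G_1](\delta_\alpha \bm{X})\,\delta_\alpha \bm{X} + G_2(\delta_\alpha \bm{X})\,\bm{X}''(\theta+\alpha) = 0,
\end{equation}
and the same identity with $u = \bm{X}'(\theta+\alpha)$ shows in the same way that the integrand of the last term above vanishes.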
This exact cancellation \eqref{directional.cancellation} is crucial, and in particular it eliminates the second order derivatives. Using it, the last two terms in the equation above vanish, and we obtain the {\it claim} in \eqref{peskin.equation.first.order}. The equation \eqref{peskin.equation.first.order.final} is then our alternative representation of the Peskin equation for $\bm{X}(t,\theta)$.
To obtain an equation for $\partial_t \bm{X}'(\theta)$, we could of course just differentiate \eqref{peskin.equation.first.order.final} in $\theta$. However, that equation contains $\bm{X}''(\theta)$ and ends up being more difficult to work with. Luckily though, there is another form for $\partial_t \bm{X}'$ which can be written in terms of only $\bm{X}'$. To begin our derivation for $\partial_t \bm{X}'$, we note that integrating by parts and using \eqref{e:boundaryintegral} with \eqref{tension.map.def} we have
\begin{equation}\notag
\begin{split}
\partial_t \bm{X}(\theta) &= \int G(\delta_\alpha \bm{X}) \partial_\alpha \mathbf{T}(\bm{X}')(\theta+\alpha) d\alpha
\\& = - \int \partial_\alpha G(\delta_\alpha \bm{X}) \delta_\alpha\mathbf{T}(\bm{X}')(\theta)d\alpha.
\end{split}
\end{equation}
Differentiating this equation with respect to $\theta,$ we see that
\begin{equation*}
\partial_t \bm{X}'(\theta) = -\int \partial_\theta \partial_\alpha G(\delta_\alpha \bm{X}) \delta_\alpha\mathbf{T}(\bm{X}')(\theta)d\alpha
-\int\partial_\alpha G(\delta_\alpha \bm{X}) \partial_\theta\delta_\alpha \mathbf{T}(\bm{X}') (\theta)d\alpha.
\end{equation*}
As $\partial_\alpha G(\delta_\alpha \bm{X})$ is the $\alpha$-derivative of a periodic function, it follows that
\begin{equation*}
\int \partial_\alpha G(\delta_\alpha \bm{X}) \partial_\theta \mathbf{T}(\bm{X}')(\theta)d\alpha
= \partial_\theta \mathbf{T}(\bm{X}')(\theta)
\int \partial_\alpha G(\delta_\alpha \bm{X})d\alpha = 0.
\end{equation*}
Notice that the zero integral above removes a highest order derivative.
We also have that
$\partial_\theta \mathbf{T}(\bm{X}')(\theta+\alpha) = \partial_\alpha \mathbf{T}(\bm{X}')(\theta+\alpha)$. So we can make this exchange and integrate by parts to obtain
\begin{equation}\notag
\begin{split}
\int \partial_\alpha G(\delta_\alpha \bm{X}) \partial_\theta \delta_\alpha \mathbf{T}(\bm{X}') (\theta) d\alpha &= \int \partial_\alpha G(\delta_\alpha \bm{X}) \partial_\alpha \delta_\alpha \mathbf{T}(\bm{X}') (\theta) d\alpha
\\&= -\int \partial_\alpha^2 G(\delta_\alpha \bm{X}) \delta_\alpha \mathbf{T}(\bm{X}')(\theta)d\alpha.
\end{split}
\end{equation}
Hence, we have that
\begin{equation}\notag
\partial_t \bm{X}'(\theta) = \int (\partial_\alpha^2 - \partial_\alpha \partial_\theta)G(\delta_\alpha \bm{X})\delta_\alpha \mathbf{T}(\bm{X}')(\theta)d\alpha.
\end{equation}
It is a straightforward calculation to see that
\begin{equation}\notag
(\partial_\alpha -\partial_\theta)[G(\delta_\alpha \bm{X})] = [(\nabla_{\bm{X}'(\theta+\alpha)} - \nabla_{\delta_\alpha \bm{X}'(\theta)})G](\delta_\alpha \bm{X}) = [\nabla_{\bm{X}'(\theta)} G](\delta_\alpha \bm{X}),
\end{equation}
and
\begin{equation}\notag
\partial_\alpha [\nabla_{\bm{X}'(\theta)}G (\delta_\alpha \bm{X})] = [\nabla_{\bm{X}'(\theta+\alpha)}\nabla_{\bm{X}'(\theta)} G](\delta_\alpha \bm{X}).
\end{equation}
Thus, using our previous calculations of the derivatives of $G(z)$, the Peskin problem for a general tension can be written as an evolution equation for $\bm{X}'(\theta)$:
\begin{equation} \label{general.tension}
\partial_t \bm{X}'(\theta) = \int \mathcal{K}_0[\bX](\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')(\theta) d\alpha.
\end{equation}
Here the kernel $\mathcal{K}_0[\bX](\theta, \alpha)$ is given by
\begin{multline}\label{kernel.peskin.nosing}
\mathcal{K}_0[\bX](\theta, \alpha) \eqdef \frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot \mathcal{P}(\delta_\alpha \bm{X})\bm{X}'(\theta)}{|\delta_\alpha \bm{X}|^2} \mathcal{I}
\\
-\frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot \mathcal{R}(\delta_\alpha \bm{X})\bm{X}'(\theta)}{|\delta_\alpha \bm{X}|^2} \mathcal{R}(\delta_\alpha \bm{X})
\\
+\frac{1}{4\pi} \frac{\bm{X}'(\theta+\alpha) \cdot (\mathcal{P}(\delta_\alpha \bm{X})-\mathcal{I})\bm{X}'(\theta)}{|\delta_\alpha \bm{X}|^2} \mathcal{P}(\delta_\alpha \bm{X}).
\end{multline}
Note that nothing we have done so far has implied periodicity of the solution $\bm{X}(\theta)$, and that these forms of the equations work for any parametrization.
Further, using \eqref{delta.notation} and \eqref{distance.X.notation} we can write $|\delta_\alpha \bm{X}|^2 = \alpha^2 |\DAL \BX (\theta)|^2$. Thus $\mathcal{K}_0[\bX](\theta, \alpha) = \alpha^{-2}\mathcal{K}[\bX](\theta, \alpha)$, where $\mathcal{K}[\bX](\theta, \alpha)$ is given by \eqref{kernel.peskin.noExpand}. This establishes equation \eqref{peskin.general.tension} from \eqref{general.tension} and \eqref{kernel.peskin.nosing}.
\section{Main estimate}\label{sec:BesovSpace}
In this section we will prove our main a priori estimate for the Peskin problem \eqref{peskin.general.tension} with a general tension \eqref{tension.map.def} in Proposition \ref{prop:general.apriori.final.local}. To this end we let $\bm{X}'(t,\theta)$ be the solution of the Peskin problem \eqref{peskin.general.tension} with the general tension map $\mathcal{T}$ given in \eqref{tension.map.def} satisfying the assumptions from \secref{sec:TensionAssumptions}
and the kernel given by \eqref{kerbel.eqn.deriv} with \eqref{kerbel.A.eqn.deriv}. We consider initial data for \eqref{peskin.general.tension}, $\bm{X}_0$, satisfying
\begin{equation}\label{initial.assumption}
||\bm{X}'_0||_{\mathcal{B}^\subw}=||\bm{X}'_0||_{\dot{B}^{\frac12,\mu}_{2,1}} \leq M, \qquad |\bm{X}_0|_* = \inf_{\theta\in\mathbb T,\, \alpha\neq 0} |D_\alpha \bm{X}_0(\theta)| >0.
\end{equation}
Here $0 < M < \infty$ is allowed to be large. We then suppose in this section that, for some fixed $\rho>0$, over a short time interval $T>0$ we have
\begin{equation}\label{apriori.bd}
|\bm{X}(t)|_*\geq \rho, \quad 0 \le t \leq T.
\end{equation}
For some $C_* >0$ we further suppose that on $[0,T]$ we have
\begin{equation}\label{apriori.bd.CM}
||\bm{X}'||_{\BS_T} \le C_* M.
\end{equation}
We recall the notation \eqref{C.space.temporal}, \eqref{D.space.temporal}, and \eqref{initial.B.space}. Then the main result in this section is the following proposition.
\begin{proposition}\label{prop:general.apriori.final.local}
Let $\bm{X}: [0,T]\times \mathbb T \to \mathbb R^2$ be a weak solution to the Peskin problem with tension $\mathcal{T}$ in the sense of Definition \ref{def:solution}. Assume that $\bm{X}_0$ and $\mathbf{T}$ satisfy the assumptions of Theorem \ref{thm:mainquant}, including \eqref{initial.assumption}. Additionally, assume that \eqref{apriori.bd} and \eqref{apriori.bd.CM} hold. Then there are uniform constants $c, C>0$ such that the solution $\bm{X}'(t,\theta)$ satisfies the following inequality
\begin{equation*}
||\bm{X}'||_{\BS_T}
+ c \lambda^{1/2} ||\bm{X}'||_{\mathcal{D}_T^\subw}
\leq 2 || \bm{X}'_0||_{\mathcal{B}^\subw}
+
CT^{1/2}
\mathcal{U}^{1/2}
||\bm{X}'||_{\BS_T}.
\end{equation*}
Above $\mathcal{U}=\mathcal{U}[M,\rho,\lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE}]$ is defined in \eqref{BU.def}. In particular there exists $T_M = T_M(M, \rho, \mu, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE})>0$ such that if $T\leq T_M$ then we have
\begin{equation}\notag
||\bm{X}'||_{\BS_T} + 2c\lambda^{1/2}||\bm{X}'||_{\mathcal{D}_T^\subw}\leq 4||\bm{X}'_0||_{\mathcal{B}^\subw}\leq 4M.
\end{equation}
\end{proposition}
In the rest of this section, we will prove Proposition \ref{prop:general.apriori.final.local}. To that end, we first fix some arbitrary $|\beta|>0$. Then direct calculation using \eqref{peskin.general.tension} gives
\begin{equation}\notag
\begin{split}
\frac{d}{dt} ||\delta_\beta \bm{X}'(t)||_{L^2}^2
&= 2\int_{\mathbb T} d\theta ~\delta_\beta \bm{X}'(\theta) \cdot \delta_\beta \partial_t \bm{X}'(\theta)
\\
&= 2\int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \bm{X}'(\theta)\cdot \delta_\beta \left(\mathcal{K}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')\right)}{\alpha^2}
\\&= - \int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \delta_\alpha \bm{X}'(\theta)\cdot \delta_\beta \left(\mathcal{K}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')\right)}{\alpha^2}.
\end{split}
\end{equation}
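Here the last equality follows by symmetrization. Indeed, under the change of variables $(\theta, \alpha) \mapsto (\theta+\alpha, -\alpha)$ we have $\delta_{-\alpha} f(\theta+\alpha) = -\delta_\alpha f(\theta)$, while the kernel satisfies $\mathcal{K}(\theta+\alpha, -\alpha) = \mathcal{K}(\theta, \alpha)$, as can be checked from \eqref{kernel.peskin.noExpand} using that $\mathcal{P}(-z) = \mathcal{P}(z)$ and $\mathcal{R}(-z) = \mathcal{R}(z)$. Averaging the original expression with the one obtained after this change of variables then yields
\begin{equation}\notag
2\int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \bm{X}'(\theta)\cdot \delta_\beta \left(\mathcal{K}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')\right)}{\alpha^2}
= -\int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \delta_\alpha \bm{X}'(\theta)\cdot \delta_\beta \left(\mathcal{K}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')\right)}{\alpha^2},
\end{equation}
since $\tau_\alpha \delta_\beta \bm{X}'(\theta) - \delta_\beta \bm{X}'(\theta) = \delta_\alpha \delta_\beta \bm{X}'(\theta)$.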
We thus conclude that
\begin{equation}\label{beta.L2.difference.1}
\begin{split}
\frac{d}{dt} ||\delta_\beta \bm{X}'(t)||_{L^2}^2
&= -\int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \delta_\alpha \bm{X}'(\theta)\cdot \tau_{\beta}\mathcal{K}(\theta, \alpha) \delta_\beta\delta_\alpha \mathbf{T}(\bm{X}')}{\alpha^2}
\\
& \quad -\int_{\mathbb T}\int_{\mathbb T} d\theta d\alpha ~\frac{\delta_\beta \delta_\alpha \bm{X}'(\theta)\cdot \delta_\beta\mathcal{K}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}')}{\alpha^2}.
\end{split}
\end{equation}
We will deal with these two integrals on the right side in order.
We next study the differences of the tension map.
First we give the following useful lemma, which tells us in particular that the operators $\delta_\alpha^\pm$ from \eqref{delta.pm.notation} appearing in the kernel \eqref{kerbel.A.eqn.deriv} satisfy the same Besov space bounds as the operator $\delta_\alpha$.
\begin{lemma}\label{continunity.delta.pm}
Let $\mathbb T = \mathbb R/2\pi \mathbb Z = [-\pi, \pi]$. Recall the operators $D_\alpha$ from \eqref{distance.X.notation} and $\delta_\alpha^\pm$ from \eqref{delta.pm.notation}. Then for any $p\in [1,\infty]$ we have
\begin{equation}\label{operator.bd.first}
\begin{split}
||\delta_\alpha^- f'||_{L^p_\theta} \leq 2 ||f'||_{L^p_\theta}, \quad
||\delta_\alpha^+ f'||_{L^p_\theta} \leq 4 ||f'||_{L^p_\theta}, \quad ||D_\alpha f||_{L^p_\theta} \leq ||f'||_{L^p_\theta}.
\end{split}
\end{equation}
Furthermore, fix $0<s<1$ and $p,q \in [1,\infty]$. Then we have the uniform estimate
\begin{equation}\label{delta.operator.ineq}
\left(\int_{\mathbb T} \frac{d\beta}{|\beta|^{1+sq}} ||\delta_\beta^\pm f||^{q}_{L^p_\theta} \right)^{1/q}
\lesssim
|| f ||_{\dot{B}^s_{p,q}}.
\end{equation}
When $q=\infty$ we use the standard modification of the left-hand side.
\end{lemma}
\begin{proof}
From \eqref{distance.X.notation} we have
\begin{equation}\notag
D_\alpha f(\theta) = \int_0^1 d\tau~ f'(\theta+\tau\alpha).
\end{equation}
Then also using \eqref{delta.pm.notation} we have
\begin{equation}\notag
\delta_\alpha^- f'(\theta)
=f'(\theta) - \int_0^1 d\tau~ f'(\theta+\tau\alpha)
=
-\int_0^1 d\tau~ \delta_{\tau\alpha} f'(\theta),
\end{equation}
and then
\begin{equation}\label{delta.plus.formula}
\delta_\alpha^+ f'(\theta) = \delta_\alpha f'(\theta) +\delta_\alpha^- f'(\theta).
\end{equation}
Then from Minkowski's integral inequality we have for example
\begin{equation}\notag
||\delta_\alpha^- f' ||_{L^p_\theta} \leq \int_0^1 d\tau~ ||\delta_{\tau\alpha} f' ||_{L^p_\theta}.
\end{equation}
Thus the inequalities in \eqref{operator.bd.first} follow from Minkowski's inequality, translation invariance, and the triangle inequality.
It remains to prove \eqref{delta.operator.ineq}. We use Minkowski's integral inequality twice as
\begin{equation}\notag
\left( \int_{\mathbb T} \frac{d\beta}{|\beta|^{1+sq}}||\delta_\beta^- f' ||_{L^p_\theta}^q \right)^{1/q}
\leq
\int_0^1 d\tau~ \left(\int_{\mathbb T} \frac{d\beta}{|\beta|^{1+sq}}||\delta_{\tau\beta} f' ||_{L^p_\theta}^q \right)^{1/q}.
\end{equation}
Now applying the change of variable $\alpha = \tau\beta$ in the inner integral gives
\begin{equation}\notag
\int_{\mathbb T} \frac{d\beta}{|\beta|^{1+sq}}||\delta_{\tau\beta} f' ||_{L^p_\theta}^q
= \tau^{sq} \int_{|\alpha|\leq \tau\pi} \frac{d\alpha}{|\alpha|^{1+sq}}||\delta_{\alpha} f' ||_{L^p_\theta}^q
\leq \int_{\mathbb T} \frac{d\alpha}{|\alpha|^{1+sq}}||\delta_{\alpha} f' ||_{L^p_\theta}^q,
\end{equation}
since $0 \leq \tau \leq 1$. As this bound is uniform in $\tau$, integrating in $\tau$ we obtain \eqref{delta.operator.ineq} for $\delta_\beta^-$. The inequality \eqref{delta.operator.ineq} for $\delta_\beta^+$ then follows from the formula \eqref{delta.plus.formula} and the triangle inequality.
\end{proof}
\begin{remark}\label{ignore.remark}
In the rest of this article, when we use the estimates of \eqref{kerbel.A.eqn.deriv} in Lemma \ref{A.bound.lem}, Lemma \ref{lemm.A.diff} and Lemma \ref{lem:Abeta.upper} that are proven in \secref{sec:kernelDIFF}, we will only write the upper bounds with $\delta_\alpha$ in place of $\delta_\alpha^+$ and $\delta_\alpha^-$ from \eqref{delta.pm.notation}. We use this simplification to ease the notation, but more importantly because these operators have no effect on our final estimates due to Lemma \ref{continunity.delta.pm} and the inequality in \eqref{delta.operator.ineq}. We will also ignore the translation operator $\tau_\beta$ from \eqref{def.translation} when we use the estimates of \eqref{kerbel.A.eqn.deriv} as in \eqref{e:Abounds}, which is justified because all of the function spaces that we use in this article are translation invariant.
\end{remark}
Now to begin studying the differences of the tension map in \eqref{beta.L2.difference.1} we write
\begin{equation}\label{e:deltaalphaT}
\delta_\alpha \mathbf{T}(\bm{X}'(\theta))
= \int_0^1 ds_1 \frac{d}{ds_1} \mathbf{T}( s_1 \delta_\alpha \bm{X}'(\theta)+\bm{X}'(\theta))
= \overline{D\bT}[\BX'] \delta_\alpha \bm{X}'(\theta),
\end{equation}
where letting $D\mathbf{T}(z)$ denote the derivative in \eqref{e:DTdefn} of the tension map $\mathbf{T}(z)$ in \eqref{tension.map.def} we have
\begin{equation}\label{DBTX.def}
\overline{D\bT}[\BX'] (\theta)\eqdef
\int_0^1 ds_1~ D\mathbf{T}(g_1[\bm{X}'](s_1, \alpha, \theta)),
\end{equation}
where
\begin{equation*}
g_1[\bm{X}'](s_1, \alpha, \theta) \eqdef s_1 \tau_\alpha \bm{X}'(\theta) +
(1-s_1)\bm{X}'(\theta).
\end{equation*}
We therefore obtain from \eqref{e:QuantitativeTensionMap} that
\begin{equation}\label{delta.alpha.BX.bound}
|\delta_\alpha \mathbf{T}(\bm{X}'(\theta))| \leq \mathcal{C}_{1\TE} |\delta_\alpha \bm{X}'(\theta)|.
\end{equation}
We will use this estimate on the second term in \eqref{beta.L2.difference.1}.
To study the first term in \eqref{beta.L2.difference.1}, we apply $\delta_\beta$ to $\delta_\alpha \mathbf{T}(\bm{X}'(\theta))$ to obtain
\begin{equation}\label{e:albe.T.XY}
\delta_\beta \delta_\alpha \mathbf{T}(\bm{X}'(\theta)) = \tau_\beta \overline{D\bT}[\BX'] (\theta) \delta_\beta \delta_\alpha \bm{X}'(\theta)
+\delta_\beta\overline{D\bT}[\BX'] (\theta) \delta_\alpha \bm{X}'(\theta),
\end{equation}
where
\begin{equation}\label{delta.beta.DBTX}
\delta_\beta\overline{D\bT}[\BX'] (\theta)
=
\int_0^1 ds_1~ \overline{D^2\bT} [\bm{X}'](\theta) (g_1[\delta_\beta\bm{X}'](s_1, \alpha, \theta) ),
\end{equation}
and
\begin{equation}\notag
\overline{D^2\bT} [\bm{X}'](\theta)
\eqdef
\int_0^1 ds_2~ D^2\mathbf{T}(g_2[\bm{X}'](s_2,s_1,\alpha,\theta,\beta)),
\end{equation}
with $g_2[\bm{X}'](s_2,s_1,\alpha,\theta,\beta)$ given by
\begin{equation}\label{g2.s2.def}
g_2[\bm{X}'](s_2,s_1,\alpha,\theta,\beta)
\eqdef
s_2 g_1[\tau_\beta\bm{X}'](s_1, \alpha, \theta)
+(1-s_2) g_1[\bm{X}'](s_1, \alpha, \theta).
\end{equation}
We conclude using Remark \ref{ignore.remark} and \eqref{e:QuantitativeTensionMap} that
\begin{equation}\label{delta.beta.DBTX.bound}
|\delta_\beta\overline{D\bT}[\BX'] (\theta) \delta_\alpha \bm{X}'(\theta)| \leq
\mathcal{C}_{2\TE} |\delta_\beta \bm{X}'(\theta)|| \delta_\alpha \bm{X}'(\theta)|,
\end{equation}
and
\begin{equation}\label{beta.alpha.TXP}
|\delta_\beta \delta_\alpha \mathbf{T}(\bm{X}')| \leq \mathcal{C}_{1\TE} |\delta_\beta\delta_\alpha \bm{X}'(\theta)| + \mathcal{C}_{2\TE} |\delta_\beta \bm{X}'(\theta)| |\delta_\alpha \bm{X}'(\theta)|.
\end{equation}
We will use this estimate on the first term in \eqref{beta.L2.difference.1}. Also notice that we have the matrix inequality in \eqref{e:DTdefn} for $\overline{D\mathbf{T}}$, since the pointwise lower bound for $D\mathbf{T}(z)$ is preserved by the averaging in \eqref{DBTX.def}.
Plugging all of this into \eqref{beta.L2.difference.1}, and using \eqref{kerbel.eqn.deriv} with \eqref{kerbel.A.eqn.deriv}, \eqref{tildeLambda:eq} and the bounds from \eqref{e:DTdefn} we obtain
\begin{equation}\label{beta.L2.difference}
\frac{d}{dt} || \delta_\beta \bm{X}' ||_{L^2_\theta}^2
+ \lambda
|| \delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}' ||_{L^2_\theta}^2
\leq \mathcal{L}_1+ \mathcal{L}_2
+ \mathcal{L}_3.
\end{equation}
Here, with \eqref{kerbel.A.eqn.deriv}, we have
\begin{equation}\notag
\mathcal{L}_1 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'(\theta)|^2 |\tau_\beta \mathcal{A}[\bm{X}](\theta, \alpha)|,
\end{equation}
\begin{equation}\notag
\mathcal{L}_2 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'(\theta)| | \delta_\alpha \bm{X}'(\theta)| |\delta_\beta \mathcal{A}[\bm{X}](\theta, \alpha)|,
\end{equation}
\begin{equation}\notag
\mathcal{L}_3 \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'(\theta)| | \delta_\beta \bm{X}'(\theta)| | \delta_\alpha \bm{X}'(\theta)| |\tau_\beta \mathcal{K}[\bm{X}](\theta, \alpha)|.
\end{equation}
We will estimate each of the terms above.
\begin{remark}
Note that in the simple tension case $D^2\mathbf{T} \equiv 0$, and hence, by \eqref{delta.beta.DBTX}, in this case $\mathcal{L}_{3} \equiv 0.$
\end{remark}
For $\mathcal{L}_{1}$ we split the kernel \eqref{kerbel.A.eqn.deriv} as
\begin{equation}\notag
\mathcal{A}(\theta, \alpha)
=
\mathcal{A}^S(\theta, \alpha)
+
\mathcal{A}^L(\theta, \alpha),
\end{equation}
where for a fixed small $\eta>0$ to be chosen
\begin{equation}\label{split.K}
\mathcal{A}^S(\theta, \alpha)
=
\mathcal{A}(\theta, \alpha) \mathds{1}_{|\alpha| < \eta},
\quad
\mathcal{A}^L(\theta, \alpha)
=
\mathcal{A}(\theta, \alpha) \mathds{1}_{|\alpha| \ge \eta}.
\end{equation}
Thus we have
\begin{equation}\notag
\mathcal{L}_1^S \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'(\theta)|^2 |\tau_\beta \mathcal{A}[\bm{X}](\theta, \alpha)| \mathds{1}_{|\alpha|< \eta}.
\end{equation}
And we define $\mathcal{L}_{1}^L = \mathcal{L}_{1} - \mathcal{L}_{1}^S$. Next, from \eqref{kerbel.A.eqn.deriv} and Remark \ref{ignore.remark} we have the general estimate
\begin{equation}\label{e:Abounds}
|\mathcal{A}(\theta, \alpha)| \lesssim |\bm{X}|_*^{-2} |\delta_\alpha \bm{X}'(\theta)|^2 + |\bm{X}|_*^{-1} | \delta_\alpha \bm{X}'(\theta)|.
\end{equation}
Then we apply H{\"o}lder's inequality using \eqref{apriori.bd} to obtain
\begin{multline}\notag
\mathcal{L}_1^S
\lesssim
\mathcal{C}_{1\TE}
\left( \int_{|\alpha|< \eta} \frac{d\alpha}{\alpha^2}
||\delta_\beta \delta_\alpha \bm{X}'||_{L^4_\theta}^4 \right)^{\frac{1}{2}}
\left( \int_{|\alpha|< \eta} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \bm{X}'||_{L^4_\theta}^4 \right)^{\frac{1}{2}}
\frac{1}{\rho^2}
\\
+
\mathcal{C}_{1\TE} ||\delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}'||_{L^2_\theta}
\left( \int_{|\alpha|< \eta} \frac{d\alpha}{\alpha^2}
||\delta_\beta \delta_\alpha \bm{X}'||_{L^4_\theta}^4 \right)^{\frac{1}{4}}
\left( \int_{|\alpha|< \eta} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \bm{X}'||_{L^4_\theta}^4 \right)^{\frac{1}{4}}\frac{1}{\rho}.
\end{multline}
In order to deal with this error term, we need to absorb it by the elliptic term. A priori, however, $\mathcal{C}_{1\TE}$ and $\rho^{-1}$ could both be very large, which might seem to require restricting our choice of tensions in \secref{sec:TensionAssumptions}.
Fortunately, the function $\mu$ from Definition \ref{subw.definition} allows us to control the decay rate of the integral on the right hand side. Specifically, since $\mu$ is non-decreasing, for any $p> 1$ we have
\begin{multline}\label{extra.smallness.besov}
\int_{|\alpha|< \eta} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \bm{X}'||_{L^p_\theta}^p
\leq
\frac{1}{\mu(\eta^{-1})^p} \int_{\mathbb T} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \bm{X}'||_{L^p_\theta}^p ~\mu(|\alpha|^{-1})^p
=
\frac{ || \bm{X}' ||_{\dot{B}_{p, p}^{\frac{1}{p},\mu}}^p}{\mu(\eta^{-1})^p}.
\end{multline}
Note that $\mu(\eta^{-1})^{-1}$ can be made arbitrarily small for $\eta>0$ small. We next use the following embeddings from Proposition \ref{besov.ineq.prop} as
\begin{equation}\label{embed.la.besov}
|| f ||_{\dot{B}^{\frac1p}_{p,p}}
\lesssim
|| f ||_{\dot{B}^{\frac12}_{2,p}}, \quad
|| f ||_{\dot{B}^{\frac1p,\mu}_{p,p}}
\lesssim
|| f ||_{\dot{B}^{\frac12,\mu}_{2,p}}, \quad p\ge 2.
\end{equation}
We remark that we will also frequently use the inequality
$|| f ||_{\dot{B}_{p, r_1}^{s}} \lesssim || f ||_{\dot{B}_{p, r_2}^{s}}$, which holds for any $r_1 \ge r_2 \ge 1$ and any $s\in \mathbb R$.
We will now use these inequalities in the form
\begin{equation}\notag
|| \delta_\beta \bm{X}'||_{\dot{B}^{\frac14}_{4,4}} \lesssim || \delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,4}}
\lesssim
|| \delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}
\approx
||\delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}'||_{L^2_\theta},
\end{equation}
and also using \eqref{apriori.bd.CM} we have,
\begin{equation}\notag
|| \bm{X}'||_{\dot{B}^{\frac{1}{4},\mu}_{4,4}} \leq C || \bm{X}'||_{\dot{B}^{\frac{1}{2},\mu}_{2,4}}
\leq C C_* M.
\end{equation}
We therefore conclude that
\begin{equation}\label{LAS.estimate}
\mathcal{L}_1^S
\leq
C \kappa_1 \mathcal{C}_{1\TE}
||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2 ,
\end{equation}
where we define $\kappa_1 = \kappa_1(\eta)$ by
\begin{equation}\label{kappa1.def}
\kappa_1 \eqdef \frac{ 1 }{\rho^2} \frac{ M^2}{\mu(\eta^{-1})^2} +\frac{ 1}{\rho} \frac{ M}{\mu(\eta^{-1})}.
\end{equation}
This will be our main estimate for $\mathcal{L}_1^S$. We will later choose $\eta>0$ small enough so that $C \kappa_1 \mathcal{C}_{1\TE} \ll \lambda$.
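One admissible choice, which we record for the reader's convenience, is the following: since $\mu(\eta^{-1})\to\infty$ as $\eta\to 0$, it suffices to take $\eta$ so small that
\begin{equation}\notag
\mu(\eta^{-1}) \geq \max\left(1, \; \frac{16\, C\, \mathcal{C}_{1\TE}}{\lambda}\left(\frac{M}{\rho} + \frac{M^2}{\rho^2}\right)\right),
\end{equation}
where $C$ is the constant in \eqref{LAS.estimate}. Indeed, since $\mu(\eta^{-1}) \geq 1$, from \eqref{kappa1.def} we have $\kappa_1 \leq \frac{1}{\mu(\eta^{-1})}\left(\frac{M^2}{\rho^2} + \frac{M}{\rho}\right)$, and hence $C \kappa_1 \mathcal{C}_{1\TE} \leq \frac{\lambda}{16}$.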
Next we will estimate $\mathcal{L}_1^L$ containing $\mathcal{A}^L(\theta, \alpha)$ from \eqref{split.K} and \eqref{kerbel.A.eqn.deriv} on the region $|\alpha| \geq \eta$. Noting that $||\delta_\alpha f||_{L^p} \leq 2 ||f||_{L^p}$ for all $p \in [1,\infty]$, we can neglect the $\delta_\alpha$'s and apply H{\"o}lder's inequality in $\theta$ to obtain
\begin{equation*}
\mathcal{L}_1^L
\lesssim
\mathcal{C}_{1\TE}\left( \int_{|\alpha|\geq \eta} \frac{d\alpha}{\alpha^2} \right)\left(\frac{||\delta_\beta \bm{X}'||_{L_\theta^4}^2 ||\bm{X}'||_{L_\theta^4}^2}{\rho^2} + \frac{ ||\delta_\beta \bm{X}'||_{L_\theta^2} ||\delta_\beta \bm{X}'||_{L_\theta^4} ||\bm{X}'||_{L_\theta^4}}{\rho}\right).
\end{equation*}
Next from Proposition \ref{besov.ineq.prop}, Lemma \ref{Besov.increase}, and \eqref{apriori.bd.CM} we use the following inequalities
\begin{equation}\notag
||\bm{X}'||_{L_\theta^4}
\lesssim
|| \bm{X}' ||_{\dot{B}^{0}_{4,2}}
\lesssim
|| \bm{X}' ||_{\dot{B}^{\frac14}_{2,2}}
\lesssim
|| \bm{X}' ||_{\dot{B}^{\frac12}_{2,\infty}} \lesssim M.
\end{equation}
For the other term we use also Lemma \ref{Besov.interpolation} to obtain
\begin{equation}\notag
||\delta_\beta \bm{X}'||_{L_\theta^4}
\lesssim
|| \delta_\beta \bm{X}' ||_{\dot{B}^{0}_{4,2}}
\lesssim
|| \delta_\beta \bm{X}' ||_{\dot{B}^{\frac14}_{2,2}}
\lesssim
||\delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'||_{L^2_\theta}^{1/2} ||\delta_\beta \bm{X}'||_{L^2_\theta}^{1/2}.
\end{equation}
We also use that $\int_{|\alpha|\geq \eta} \frac{d\alpha}{\alpha^2} \lesssim \eta^{-1}$. Then we obtain
\begin{equation*}
\mathcal{L}_1^L
\lesssim
\frac{\mathcal{C}_{1\TE} M^2}{\eta\rho^2}
||\delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}'||_{L^2_\theta} ||\delta_\beta \bm{X}'||_{L^2_\theta}
+
\frac{\mathcal{C}_{1\TE} M}{\eta\rho}
||\delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'||_{L^2_\theta}^{1/2} ||\delta_\beta \bm{X}'||_{L^2_\theta}^{3/2}.
\end{equation*}
Using Young's inequality, we can separate out the higher order terms
and get
\begin{equation}\label{estimate.LLTAL}
\mathcal{L}_1^L
\leq \frac{\lambda}{8} ||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2
+ C||\delta_\beta \bm{X}'||_{L^2_\theta}^2
\mathcal{U}_1,
\end{equation}
where
\begin{equation}\label{BB.const.def}
\mathcal{U}_1= \mathcal{U}_1 [M, \lambda, \rho, \mathcal{C}_{1\TE}, \eta] \eqdef \frac{\mathcal{C}_{1\TE}^2 M^4}{\lambda \eta^2 \rho^4} + \frac{\mathcal{C}_{1\TE}^{4/3} M^{4/3}}{\lambda^{1/3} \eta^{4/3} \rho^{4/3}}.
\end{equation}
This is our main estimate for $\mathcal{L}_1^L$.
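For the reader's convenience we record the Young's inequality steps used here, up to absolute constants. Writing $a = ||\delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}'||_{L^2_\theta}$, $b = ||\delta_\beta \bm{X}'||_{L^2_\theta}$, $K_1 = \mathcal{C}_{1\TE} M^2 \eta^{-1}\rho^{-2}$ and $K_2 = \mathcal{C}_{1\TE} M \eta^{-1}\rho^{-1}$, we used
\begin{equation}\notag
K_1\, a b \leq \frac{\lambda}{16}\, a^2 + \frac{4 K_1^2}{\lambda}\, b^2,
\qquad
K_2\, a^{1/2} b^{3/2} \leq \frac{\lambda}{16}\, a^2 + \frac{C K_2^{4/3}}{\lambda^{1/3}}\, b^2,
\end{equation}
together with the norm equivalence $||\delta_\beta \widetilde{\Lambda}^{\frac12} \bm{X}'||_{L^2_\theta} \approx ||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}$ used above; the two resulting lower order coefficients are exactly the two terms of $\mathcal{U}_1$ in \eqref{BB.const.def}.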
Now we can collect the estimates for $ \mathcal{L}_1^S$ in \eqref{LAS.estimate} and $ \mathcal{L}_1^L$ in \eqref{estimate.LLTAL} to obtain
\begin{equation}\label{LA.estimate}
\mathcal{L}_1
\leq
\left(\frac{\lambda}{8}+ C \kappa_1 \mathcal{C}_{1\TE} \right)||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2 + C ||\delta_\beta \bm{X}'||_{L^2_\theta}^2 \mathcal{U}_1.
\end{equation}
This is our main estimate for the term $\mathcal{L}_1$.
The estimate above motivates the following lemma. For some $A>0$, we consider a typical term of the following form
\begin{equation}\label{typical.term}
\mathcal{L} = \mathcal{L}[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3] \eqdef A \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'_1| | \delta_\beta \bm{X}'_2|
|\delta_\alpha \bm{X}'_3|.
\end{equation}
Here $\bm{X}'_i$ are given functions for $i=1,2,3$. Then we have
\begin{lemma}\label{lem:typical.term} For any small constant $0<c<1$, for $\lambda>0$ from \eqref{e:DTdefn}, and for any small $\eta>0$, we have the following uniform estimate for \eqref{typical.term}:
\begin{multline}\label{TTG.y.estimate}
\mathcal{L}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
\frac{C A^2}{\lambda \mu(\eta^{-1})^2} || \delta_\beta \bm{X}'_2||_{L^\infty_\theta}^2
|| \bm{X}'_3||_{\mathcal{B}^\subw}^2
\\
+
c || \delta_\beta\bm{X}'_1||_{L^2_\theta}^2
+
CA^2 \eta^{-2}|| \delta_\beta\bm{X}'_2||_{L^2_\theta}^2 ||\bm{X}'_3||_{L^{\infty}_\theta}^2 .
\end{multline}
In particular, if $\bm{X}'_1 = \bm{X}'_2$, then we also have
\begin{equation*}
\mathcal{L}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
\frac{C A^2}{\lambda \mu(\eta^{-1})^2} || \delta_\beta \bm{X}'_1||_{L^\infty_\theta}^2
|| \bm{X}'_3||_{\mathcal{B}^\subw}^2
+
C\frac{A}{\eta} || \delta_\beta\bm{X}'_1||_{L^2_\theta}^2 ||\bm{X}'_3||_{L^{\infty}_\theta}.
\end{equation*}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:typical.term}]
We split this term into $\mathcal{L}=\mathcal{L}^S + \mathcal{L}^L$, where $\mathcal{L}^S$ is restricted to the integration domain $|\alpha| < \eta$ and $\mathcal{L}^L$ to the domain $|\alpha| \ge \eta$, similarly to \eqref{split.K}. For $\mathcal{L}^S$ we use H{\"o}lder's inequality to obtain
\begin{equation}\notag
\mathcal{L}^S
\lesssim
|| \delta_\beta \bm{X}'_2||_{L^\infty_\theta}
\left( \int_{|\alpha| < \eta} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \delta_\beta\bm{X}'_1||_{L^2_\theta}^2 \right)^{\frac{1}{2}}
\left( \int_{|\alpha| < \eta} \frac{d\alpha}{\alpha^2}
|| \delta_\alpha \bm{X}'_3||_{L^{2}_\theta}^{2} \right)^{1/2}.
\end{equation}
We further use the embeddings \eqref{extra.smallness.besov} and \eqref{embed.la.besov} to obtain
\begin{equation}\notag
\mathcal{L}^S
\lesssim
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}
|| \delta_\beta \bm{X}'_2||_{L^\infty_\theta}
\frac{|| \bm{X}'_3||_{\dot{B}_{2, 2}^{\frac{1}{2},\mu}}}{\mu(\eta^{-1})}.
\end{equation}
Next we estimate $\mathcal{L}^L$. Again using H{\"o}lder's inequality we have
\begin{equation}\notag
\begin{split}
\mathcal{L}^L
&\lesssim
|| \delta_\beta\bm{X}'_1||_{L^2_\theta} || \delta_\beta\bm{X}'_2||_{L^2_\theta} ||\bm{X}'_3||_{L^{\infty}_\theta}
\left( \int_{|\alpha| \geq \eta} \frac{d\alpha}{\alpha^2}
\right)
\\& \lesssim
\eta^{-1}
|| \delta_\beta\bm{X}'_1||_{L^2_\theta} || \delta_\beta\bm{X}'_2||_{L^2_\theta} ||\bm{X}'_3||_{L^{\infty}_\theta}.
\end{split}
\end{equation}
We combine the estimates above and use the embedding $\mathcal{B}^\subw \subset \dot{B}_{2, 2}^{\frac{1}{2},\mu}$ to obtain the following general estimate
\begin{multline}\label{TTG.est}
\mathcal{L}
\leq
\frac{C A}{\mu(\eta^{-1})}
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}
|| \delta_\beta \bm{X}'_2||_{L^\infty_\theta}
|| \bm{X}'_3||_{\mathcal{B}^\subw}
\\
+
C A \eta^{-1}
|| \delta_\beta\bm{X}'_1||_{L^2_\theta} || \delta_\beta\bm{X}'_2||_{L^2_\theta} ||\bm{X}'_3||_{L^{\infty}_\theta}.
\end{multline}
Then \eqref{TTG.y.estimate} follows after applying Young's inequality.
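More precisely, applying $ab \leq \epsilon a^2 + \frac{1}{4\epsilon}b^2$ with $\epsilon = c\lambda$ to the first term of \eqref{TTG.est}, and absorbing the dependence on $c$ into $C$, gives
\begin{equation*}
\frac{C A}{\mu(\eta^{-1})}
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}
|| \delta_\beta \bm{X}'_2||_{L^\infty_\theta}
|| \bm{X}'_3||_{\mathcal{B}^\subw}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
\frac{C A^2}{\lambda \mu(\eta^{-1})^2} || \delta_\beta \bm{X}'_2||_{L^\infty_\theta}^2
|| \bm{X}'_3||_{\mathcal{B}^\subw}^2,
\end{equation*}
and the second term of \eqref{TTG.est} is treated in the same way with $\epsilon = c$.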
\end{proof}
Next, we turn our attention towards bounding the term $\mathcal{L}_{3}$ in \eqref{beta.L2.difference}. Recalling \eqref{kerbel.eqn.deriv}, \eqref{apriori.bd} and \eqref{e:Abounds}, we can bound $|\tau_\beta \mathcal{K}[\bm{X}]|$ in general as
\begin{equation}\notag
|\tau_\beta \mathcal{K}[\bm{X}](\theta, \alpha)| \leq C\left( 1 + \rho^{-1} || \bm{X}' ||_{L^\infty_\theta}
+ \rho^{-2} || \bm{X}' ||_{L^\infty_\theta}^2\right).
\end{equation}
We now state the following useful embedding:
\begin{equation}\label{embed.infty}
|| \bm{X}' ||_{L^\infty_\theta}
\lesssim
|| \bm{X}' ||_{\dot{B}_{2,1}^{\frac12}}.
\end{equation}
This embedding follows as in Proposition \ref{besov.ineq.prop}.
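A brief sketch: writing $\Delta_j$ for the dyadic frequency blocks of the Littlewood--Paley decomposition used in Proposition \ref{besov.ineq.prop} (the low-frequency block is handled separately), Bernstein's inequality gives
\begin{equation*}
|| \bm{X}' ||_{L^\infty_\theta}
\lesssim
\sum_{j} || \Delta_j \bm{X}' ||_{L^\infty_\theta}
\lesssim
\sum_{j} 2^{j/2} || \Delta_j \bm{X}' ||_{L^2_\theta}
=
|| \bm{X}' ||_{\dot{B}_{2,1}^{\frac12}}.
\end{equation*}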
Then further using $|| \bm{X}' ||_{\dot{B}_{2,1}^{\frac12}} \lesssim || \bm{X}' ||_{\dot{B}_{2,1}^{\frac12,\mu}}$ and \eqref{apriori.bd.CM} we have
\begin{equation}\label{K.bound.infty}
C\left( 1 + \rho^{-1} || \bm{X}' ||_{L^\infty_\theta}
+ \rho^{-2} || \bm{X}' ||_{L^\infty_\theta}^2\right)
\leq C (1+ \rho^{-2} M^2)\eqdef \mathcal{W}_1=\mathcal{W}_1[\bm{X}'].
\end{equation}
We thus notice that the remaining part of $\mathcal{L}_{3}$ has the form of \eqref{typical.term} with $\bm{X}_1'=\bm{X}_2'=\bm{X}_3'=\bm{X}'$. Applying \eqref{TTG.est}, we obtain
\begin{multline}\notag
\mathcal{L}_{3}
\leq
C|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'||_{L^2_\theta}
|| \delta_\beta \bm{X}'||_{L^\infty_\theta}
\frac{|| \bm{X}'||_{\mathcal{B}^\subw}}{\mu(\eta^{-1})} \mathcal{W}_1[\bm{X}'] \mathcal{C}_{2\TE}
\\
+
C\eta^{-1}
|| \delta_\beta\bm{X}'||_{L^2_\theta}^2 ||\bm{X}'||_{L^{\infty}_\theta}\mathcal{W}_1[\bm{X}'] \mathcal{C}_{2\TE}.
\end{multline}
We further apply Young's inequality to the first term above, and use \eqref{apriori.bd.CM}, to obtain
\begin{equation}\label{LTT12.estimate}
\mathcal{L}_{3}
\leq
\frac{\lambda}{8}||\delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'||_{L^2_\theta}^2
+
C \kappa_2 \mathcal{C}_{2\TE}^2
|| \delta_\beta \bm{X}'||_{L^\infty_\theta}^2
+
C
|| \delta_\beta\bm{X}'||_{L^2_\theta}^2 \mathcal{U}_2,
\end{equation}
where recalling \eqref{K.bound.infty} we have
\begin{equation}\label{BK.const.def}
\mathcal{U}_2 = \mathcal{U}_2 [M, \rho^{-1}, \mathcal{C}_{2\TE}, \eta] \eqdef
\eta^{-1} M(1+ \rho^{-2} M^2) \mathcal{C}_{2\TE},
\end{equation}
and $\kappa_2 = \kappa_2(\eta)$ is
\begin{equation}\label{kappa2.def}
\kappa_2 \eqdef \lambda^{-1} \frac{M^2}{\mu(\eta^{-1})^2} (1+ \rho^{-2} M^2)^2.
\end{equation}
This will be our main estimate for $\mathcal{L}_{3}$. We will later choose $\eta>0$ small enough so that under our assumptions $C\kappa_2 \mathcal{C}_{2\TE}^2 \ll \lambda$.
To prove further estimates, we now state the following lemma, which gives pointwise estimates for $\delta_\beta \mathcal{A}(\theta, \alpha)$. The proof of Lemma \ref{A.bound.lem} is given in \secref{sec:kernelDIFF}.
\begin{lemma}\label{A.bound.lem}
Considering $\mathcal{A}(\theta, \alpha)$ from \eqref{kerbel.A.eqn.deriv} we can split $\delta_\beta \mathcal{A}(\theta, \alpha)$ as
\begin{equation}\label{Abeta.split}
\delta_\beta \mathcal{A}(\theta, \alpha) = \mathcal{A}_{1\beta}(\theta, \alpha) + \mathcal{A}_{2\beta}(\theta, \alpha),
\end{equation}
where $\mathcal{A}_{1\beta}(\theta, \alpha)$ satisfies the following uniform upper bound
\begin{multline}\label{A1X.bound}
\left| \mathcal{A}_{1\beta}(\theta, \alpha) \right|
\lesssim
\frac{| \delta_\beta \delta_\alpha^+ \BX' (\theta)| | \tau_\beta \delta_\alpha^- \BX' (\theta)| +| \delta_\alpha^+ \BX' (\theta)| | \delta_\beta \delta_\alpha^- \BX' (\theta)| }{|\bm{X}|_*^2}
\\
+
\frac{| \delta_\beta \delta_\alpha^+ \BX' (\theta)|+| \delta_\beta \delta_\alpha^- \BX' (\theta)| }{|\bm{X}|_*}.
\end{multline}
Further $\mathcal{A}_{2\beta}(\theta, \alpha)$ satisfies the uniform upper bound
\begin{multline}\label{A2X.bound}
\left| \mathcal{A}_{2\beta}(\theta, \alpha) \right|
\lesssim
| \delta_\alpha^+ \BX' (\theta)|
\left( | \tau_\beta \delta_\alpha^- \BX' (\theta)| + | \delta_\alpha^- \BX' (\theta)| \right)
\frac{| \delta_\beta \DAL \BX (\theta)| }{|\bm{X}|_*^3}
\\
+
\left(
| \delta_\alpha^+ \BX' (\theta)|
+
| \delta_\alpha^- \BX' (\theta)|
\right)
\frac{| \delta_\beta \DAL \BX (\theta)| }{|\bm{X}|_*^2}.
\end{multline}
\end{lemma}
Next we will estimate the term $\mathcal{L}_{2}$ from \eqref{beta.L2.difference}. For future use, we will estimate the following more general term with a constant $A>0$:
\begin{equation}\label{LLT2.generalform}
\mathcal{L}_2[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3] \eqdef A \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha \bm{X}'_1(\theta)| |\delta_\beta \mathcal{A}[\bm{X}_2](\theta, \alpha)| | \delta_\alpha \bm{X}'_3(\theta)| .
\end{equation}
Here $\bm{X}_1$, $\bm{X}_2$ and $\bm{X}_3$ are given functions. Then in Lemma \ref{A.bound.lem} we split the kernel from \eqref{kerbel.A.eqn.deriv} as $\delta_\beta \mathcal{A}[\bm{X}_2](\theta, \alpha)= \mathcal{A}_{1\beta}[\bm{X}_2](\theta, \alpha) + \mathcal{A}_{2\beta}[\bm{X}_2](\theta, \alpha)$. Taking into account Remark \ref{ignore.remark}, from \eqref{A1X.bound} and \eqref{A2X.bound} we have
\begin{equation}\label{A1betaRemark}
\left| \mathcal{A}_{1\beta}[\bm{X}_2](\theta, \alpha) \right|
\lesssim
\frac{| \delta_\beta \delta_\alpha \bm{X}'_2(\theta)| |\delta_\alpha \bm{X}'_2(\theta)| }{|\bm{X}_2|_*^2}
+
\frac{| \delta_\beta \delta_\alpha \bm{X}'_2(\theta)|}{|\bm{X}_2|_*},
\end{equation}
and
\begin{equation}\label{A2betaRemark}
\left| \mathcal{A}_{2\beta}[\bm{X}_2](\theta, \alpha) \right|
\lesssim
\frac{| \delta_\alpha \bm{X}'_2(\theta)|^2 | \delta_\beta D_\alpha \bm{X}_2(\theta)| }{|\bm{X}_2|_*^3}
+
\frac{| \delta_\alpha \bm{X}'_2(\theta)| | \delta_\beta D_\alpha \bm{X}_2(\theta)| }{|\bm{X}_2|_*^2}.
\end{equation}
Now we split $\mathcal{L}_{2}=\mathcal{L}_{21}+\mathcal{L}_{22}$ according to \eqref{Abeta.split}. In particular $\mathcal{L}_{21}$ is the term $\mathcal{L}_{2}$ with $\delta_\beta \mathcal{A}(\theta, \alpha)$ replaced by $\mathcal{A}_{1\beta}(\theta, \alpha)$. Now notice that $\mathcal{L}_{21}[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3]=\mathcal{L}_{21}$ satisfies the upper bound
\begin{equation} \notag
\mathcal{L}_{21}
\lesssim
A \sum_{j=1}^2 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2}
| \delta_\beta\delta_\alpha \bm{X}'_1(\theta)|
| \delta_\beta\bm{X}'_2(\theta)|
| \delta_\alpha \bm{X}'_3(\theta)|.
\end{equation}
Therefore as in \eqref{typical.term} we have from \eqref{TTG.y.estimate} the estimate
\begin{multline}\label{LLT21.estimate}
\mathcal{L}_{21}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
\frac{C A^2}{\lambda\mu(\eta^{-1})^2} \left( \sum_{j=1}^2 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2 || \delta_\beta \bm{X}'_2||_{L^\infty_\theta}^2
|| \bm{X}'_3||_{\mathcal{B}^\subw}^2
\\
+
c || \delta_\beta\bm{X}'_1||_{L^2_\theta}^2
+
C|| \delta_\beta\bm{X}'_2||_{L^2_\theta}^2 ||\bm{X}'_3||_{L^{\infty}_\theta}^2 A^2 \eta^{-2}
\left( \sum_{j=1}^2 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2.
\end{multline}
This is our main estimate for the term $\mathcal{L}_{21}[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3]$.
Lastly we will estimate $\mathcal{L}_{22}=\mathcal{L}_{22}[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3]$. For this term we have the upper bound
\begin{equation} \notag
\left| \mathcal{L}_{22} \right|
\lesssim
A \sum_{j=2}^3 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2}
| \delta_\beta \delta_\alpha \bm{X}'_1(\theta) |
| \delta_\beta D_\alpha \bm{X}_2(\theta)|
| \delta_\alpha \bm{X}'_3(\theta)|.
\end{equation}
We can estimate this term the same way that we estimated $\mathcal{L}_{3}$ in \eqref{LTT12.estimate} using Lemma \ref{lem:typical.term}. This follows because the term $ | \delta_\beta D_\alpha \bm{X}_2(\theta)|$ in $\mathcal{L}_{22}$ is treated exactly as the term $| \delta_\beta \bm{X}_2'(\theta)|$ in \eqref{typical.term} and \eqref{TTG.y.estimate}. We can do that as in \eqref{operator.bd.first} because
\begin{equation}\label{averaging.infinity.est}
|| \delta_\beta D_\alpha \bm{X}_2||_{L^p_\theta}
\lesssim
|| \delta_\beta \bm{X}'_2 ||_{L^p_\theta},
\quad p \in[1,\infty].
\end{equation}
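This can be sketched as follows: with the convention $\delta_\alpha \bm{X}(\theta) = \bm{X}(\theta)-\bm{X}(\theta-\alpha)$ and $D_\alpha \bm{X} = \delta_\alpha \bm{X}/\alpha$, we can write $D_\alpha \bm{X}_2(\theta) = \int_0^1 \bm{X}'_2(\theta - u\alpha)\, du$, so that
\begin{equation*}
\delta_\beta D_\alpha \bm{X}_2(\theta) = \int_0^1 \delta_\beta \bm{X}'_2(\theta - u\alpha)\, du,
\end{equation*}
and \eqref{averaging.infinity.est} then follows from Minkowski's inequality and the translation invariance of the $L^p_\theta$ norm.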
Thus as in \eqref{TTG.y.estimate} we have
\begin{multline}\label{LLT22.estimate}
\mathcal{L}_{22}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
\frac{C A^2}{\lambda\mu(\eta^{-1})^2} \left( \sum_{j=2}^3 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2 || \delta_\beta \bm{X}'_2||_{L^\infty_\theta}^2
|| \bm{X}'_3||_{\mathcal{B}^\subw}^2
\\
+
c || \delta_\beta\bm{X}'_1||_{L^2_\theta}^2
+
C|| \delta_\beta\bm{X}'_2||_{L^2_\theta}^2 ||\bm{X}'_3||_{L^{\infty}_\theta}^2 A^2 \eta^{-2}
\left( \sum_{j=2}^3 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2.
\end{multline}
This is our main estimate for the term $\mathcal{L}_{22}$.
Then for the term $\mathcal{L}_{2}=\mathcal{L}_{2}[\bm{X}'_1, \bm{X}'_2, \bm{X}'_3]$ from \eqref{LLT21.estimate} and \eqref{LLT22.estimate} we have for any small constant $0<c<1$ that
\begin{multline}\label{LLT2.bound.general}
\mathcal{L}_{2}
\leq
c \lambda || \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{X}'_1||_{L^2_\theta}^2
+
C\frac{A^2 \mathcal{U}_4 [\bm{X}'_2 ]}{\lambda\mu(\eta^{-1})^2}
|| \delta_\beta \bm{X}'_2||_{L^\infty_\theta}^2 || \bm{X}'_3||_{\mathcal{B}^\subw}^2
\\
+
c || \delta_\beta\bm{X}'_1||_{L^2_\theta}^2
+
C|| \delta_\beta\bm{X}'_2||_{L^2_\theta}^2 ||\bm{X}'_3||_{L^{\infty}_\theta}^2
A^2 \eta^{-2} \mathcal{U}_4 [\bm{X}'_2 ] ,
\end{multline}
where
\begin{equation}\label{BG.const.def}
\mathcal{U}_4 =\mathcal{U}_4 [||\bm{X}'_2||_{L^\infty_\theta}, \rho^{-1} ] \eqdef
\left( \sum_{j=1}^2 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2+\left( \sum_{j=2}^3 \rho^{-j} || \bm{X}'_2||_{L^\infty_\theta}^{j-1}\right)^2.
\end{equation}
The above general estimate will be used in \secref{sec:strongCont}.
Specifically, for $\mathcal{L}_{2}=\mathcal{L}_{2}[\bm{X}', \bm{X}', \bm{X}']$ from \eqref{beta.L2.difference}, following a similar procedure we obtain
\begin{equation}\label{LLT2.bound}
\mathcal{L}_{2}
\leq \frac{\lambda}{4}||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2 +
C \kappa_3 \mathcal{C}_{1\TE}^2
||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2
+ C||\delta_\beta\bm{X}'||_{L^2_\theta}^2 \mathcal{C}_{1\TE} \mathcal{U}_3,
\end{equation}
where, recalling \eqref{apriori.bd.CM}, $\kappa_3 = \kappa_3[\bm{X}' ](\eta)$ is
\begin{equation}\label{kappa3.def}
\kappa_3 \eqdef
\frac{ M^2}{\lambda\mu(\eta^{-1})^2}
\left( \sum_{j=1}^2 \rho^{-j} M^{j-1}\right)^2(1+\rho^{-2} M^2),
\end{equation}
and
\begin{equation}\label{BC.const.def}
\mathcal{U}_3 =\mathcal{U}_3 [M, \rho^{-1}, \eta^{-1} ] \eqdef
M \eta^{-1}
\sum_{j=1}^3 \rho^{-j} M^{j-1}.
\end{equation}
This is our main estimate for the term $\mathcal{L}_{2}$. We will later choose $\eta>0$ small enough so that under our assumptions $C\kappa_3 \mathcal{C}_{1\TE}^2 \ll \lambda$.
Putting together our estimates for $\mathcal{L}_1$ \eqref{LA.estimate}, $\mathcal{L}_2$ \eqref{LLT2.bound} and $\mathcal{L}_3$ \eqref{LTT12.estimate} into \eqref{beta.L2.difference} we arrive at
\begin{multline*}
\frac{d}{dt}||\delta_\beta \bm{X}'||_{L^2_\theta}^2 + \frac{\lambda}{2} ||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2
\leq
C\kappa_1 \mathcal{C}_{1\TE} ||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2
\\
+C\left( \mathcal{C}_{2\TE}^2 \kappa_2+\mathcal{C}_{1\TE}^2 \kappa_3 \right)||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2
+ C||\delta_\beta \bm{X}'||_{L^2_\theta}^2 \left(\mathcal{U}_1+\mathcal{U}_2+\mathcal{C}_{1\TE}\mathcal{U}_3 \right),
\end{multline*}
where we recall \eqref{kappa1.def}, \eqref{kappa2.def}, \eqref{kappa3.def}, \eqref{BB.const.def}, \eqref{BK.const.def} and \eqref{BC.const.def} respectively.
For convenience from \eqref{BB.const.def}, \eqref{K.bound.infty}, \eqref{BK.const.def} and \eqref{BC.const.def} we now define $\mathcal{U}$ by
\begin{equation}\label{BU.def}
\mathcal{U} =\mathcal{U} [M,\rho,\lambda, \mathcal{C}_{2\TE}, \mathcal{C}_{1\TE}] \eqdef \mathcal{U}_1+\mathcal{U}_2+\mathcal{C}_{1\TE}\mathcal{U}_3.
\end{equation}
From \eqref{kappa2.def} and \eqref{kappa3.def} we also define
\begin{equation}\notag
\kappa_0 \eqdef \mathcal{C}_{2\TE}^2 \kappa_2+\mathcal{C}_{1\TE}^2 \kappa_3.
\end{equation}
Now we can choose $\eta>0$ small enough so that $C\kappa_1 \mathcal{C}_{1\TE} < \lambda/4$. Thus we obtain
\begin{equation*}
\frac{d}{dt}||\delta_\beta \bm{X}'||_{L^2_\theta}^2 + \frac{\lambda}{4} ||\delta_\beta \bm{X}'||_{\dot{B}^{\frac12}_{2,2}}^2
\leq
C \kappa_0 ||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2
+ C\mathcal{U} ||\delta_\beta \bm{X}'||_{L^2_\theta}^2.
\end{equation*}
Further integrating in time, we get that
\begin{multline*}
||\delta_\beta \bm{X}'||_{L^2_\theta}^2(t) + \frac{\lambda}{4} ||\delta_\beta \bm{X}'||_{L^2_t(\dot{B}^{\frac12}_{2,2})}^2
\leq ||\delta_\beta \bm{X}_0 '||_{L^2_\theta}^2
+
C \kappa_0 ||\delta_\beta \bm{X}'||_{L^2_t(L^\infty_\theta)}^2
\\
+ C\mathcal{U} ||\delta_\beta \bm{X}'||_{L^2_t(L^2_\theta)}^2.
\end{multline*}
Note that trivially, we have the bound
\begin{equation}\label{betaX.L2.Linfinity.bound}
||\delta_\beta \bm{X}'||_{L^2_t(L^2_\theta)}^2\leq t ||\delta_\beta \bm{X}'||_{L^\infty_t(L^2_\theta)}^2.
\end{equation}
Now we take the essential supremum over $0\le t \le T$, at the cost of an extra factor of 2 on the RHS, to obtain
\begin{multline}\label{ineq.refer.later}
||\delta_\beta \bm{X}'||_{L^\infty_T(L^2_\theta)}^2 + \frac{\lambda}{4} ||\delta_\beta \bm{X}'||_{L^2_T(\dot{B}^{\frac12}_{2,2})}^2
\leq 2 ||\delta_\beta \bm{X}_0 '||_{L^2_\theta}^2
+
C \kappa_0 ||\delta_\beta \bm{X}'||_{L^2_T(L^\infty_\theta)}^2
\\
+ CT
\mathcal{U} ||\delta_\beta \bm{X}'||_{L^\infty_T(L^2_\theta)}^2.
\end{multline}
Next, note that for any constants $A$ and $B$ we have
\begin{equation}\label{ineq.gen}
\frac{1}{\sqrt{2}} (|A|+|B|) \le (A^2+B^2)^{1/2} \le |A|+|B|.
\end{equation}
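This elementary inequality follows by squaring both sides and using $2|A||B| \leq A^2+B^2$:
\begin{equation*}
A^2+B^2 \leq (|A|+|B|)^2 = A^2 + 2|A||B| + B^2 \leq 2(A^2+B^2).
\end{equation*}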
So taking the previous inequality to the power $1/2$ and using \eqref{ineq.gen}, we obtain
\begin{multline*}
||\delta_\beta \bm{X}'||_{L^\infty_T(L^2_\theta)} + \left(\frac{\lambda}{4}\right)^{1/2} ||\delta_\beta \bm{X}'||_{L^2_T(\dot{B}^{\frac12}_{2,2})}
\leq 2 ||\delta_\beta \bm{X}_0 '||_{L^2_\theta}
+
C \kappa_0^{1/2} ||\delta_\beta \bm{X}'||_{L^2_T(L^\infty_\theta)}
\\
+ CT^{1/2}
\mathcal{U}^{1/2}||\delta_\beta \bm{X}'||_{L^\infty_T(L^2_\theta)}.
\end{multline*}
Further integrating the above in $d\beta$ against $|\beta|^{-3/2}\mu(|\beta|^{-1})$ then gives us
\begin{multline*}
||\bm{X}'||_{\BS_T}
+ \left(\frac{\lambda}{4}\right)^{1/2} ||\bm{X}'||_{\mathcal{D}_T^\subw}
\leq 2 || \bm{X}'_0||_{\mathcal{B}^\subw}
\\
+C \kappa_0^{1/2}
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}}\mu(|\beta|^{-1}) ||\delta_\beta \bm{X}'||_{L^2_T(L^\infty_\theta)}
+
CT^{1/2}
\mathcal{U}^{1/2}
||\bm{X}'||_{\BS_T}.
\end{multline*}
To handle the term containing $\kappa_0^{1/2}$ we will use the following lemma.
\begin{lemma}\label{L.infinity.embedding}
There exists a constant $C_{\mu}>0$ such that
\begin{equation}\label{embedding.infty.use}
||f||_{\widetilde{L}^2_T(\dot{B}^{\frac12,\mu}_{\infty,1})}
\leq
C_{\mu} \int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} \mu(|\beta|^{-1}) ||\delta_\beta \widetilde{\Lambda}^{\frac12} f||_{L^2_T(L^{2}_{\theta})}
=
C_{\mu} ||f||_{\mathcal{D}_T^\subw}.
\end{equation}
\end{lemma}
The proof of Lemma \ref{L.infinity.embedding} is a direct combination of Proposition \ref{besov.ineq.prop} with \eqref{bernstein.2}. Then after using Lemma \ref{L.infinity.embedding} we can further choose $\eta>0$ small enough so that $\left(\frac{\lambda}{4}\right)^{1/2} - C \kappa_0^{1/2} C_{\mu} \geq c \lambda^{1/2}>0$ for some small positive constant $c \ll 1$. We thus obtain
\begin{equation*}
||\bm{X}'||_{\BS_T}
+ c \lambda^{1/2} ||\bm{X}'||_{\mathcal{D}_T^\subw}
\leq 2 || \bm{X}'_0||_{\mathcal{B}^\subw}
+
CT^{1/2}
\mathcal{U}^{1/2}
||\bm{X}'||_{\BS_T}.
\end{equation*}
This completes the proof of Proposition \ref{prop:general.apriori.final.local}.
\section{Control of the arc-chord condition}\label{sec:ArcChord}
In this section we will establish a priori control of the arc-chord condition defined in \eqref{arc.cord.number} for a solution to the Peskin problem \eqref{peskin.general.tension} with a general tension \eqref{tension.map.def} satisfying the a priori estimates \eqref{apriori.bd.norm}. Recall from \eqref{peskin.expand.tension} with \eqref{kerbel.A.eqn.deriv} and \eqref{tildeLambda:eq} that $\bm{X}'(t)$ solves the equation
\begin{equation}\label{v.theta.def}
\partial_t \bm{X}' + \widetilde{\Lambda} \mathbf{T}(\bm{X}') = \mathcal{V}(\theta),
\quad \mathcal{V}(\theta) \eqdef \int_{\mathbb T} \frac{d\alpha}{\alpha^2} \mathcal{A}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}'(\theta)).
\end{equation}
We suppose that we are given initial data satisfying \eqref{initial.assumption} for equation \eqref{v.theta.def}.
We further suppose that, for some $C_*>0$, some $T>0$, some $c>0$, and $\lambda>0$ as in \eqref{e:DTdefn}, we have
\begin{equation}\label{apriori.bd.norm}
||\bm{X}'||_{\BS_T}+ c \lambda^{\frac12}||\bm{X}'||_{\mathcal{D}_T^\subw} \le C_* M.
\end{equation}
Next we have the following estimate on the $L^2_T(\dot{H}^1)$ norm of a solution.
\begin{lemma}\label{H1.small.time}
Suppose we are given a solution to \eqref{v.theta.def} satisfying \eqref{apriori.bd} and \eqref{apriori.bd.norm}. Then for any $\varepsilon>0$ there exists $T_{\varepsilon} = T(\varepsilon,M,\mu,\rho, \lambda)>0$ such that
\begin{equation}\notag
\int_0^{T_{\varepsilon}}ds~ ||\bm{X}'(s)||_{\dot{H}^1}^2 \leq \varepsilon.
\end{equation}
\end{lemma}
\begin{proof}
We split into $|\beta| < \eta$ and $|\beta| \ge \eta$ for some small $\eta>0$ to be chosen:
\begin{multline}\notag
\int_0^T ||\bm{X}'(s)||_{\dot{H}^1}^2ds \lesssim
\int_0^T ds \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\beta}{\beta^2} \int_{\mathbb T} \frac{d\alpha}{\alpha^2} |\delta_\beta \delta_\alpha \bm{X}'(s,\theta)|^2
\\
\lesssim \frac{1}{\mu(\eta^{-1})^2} \int_{|\beta| < \eta} \frac{d\beta}{\beta^2} \mu(|\beta|^{-1})^2
\int_0^T ds \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} |\delta_\beta \delta_\alpha \bm{X}'(s,\theta)|^2
\\
+ \frac{1}{\eta} \int_{\mathbb T} d\theta \int_0^T ds\int_{\mathbb T} d\alpha \frac{| \delta_\alpha \bm{X}'(s,\theta)|^2}{\alpha^2}
\\
\lesssim \frac{||\widetilde{\Lambda}^{\frac12}\bm{X}'||_{\widetilde{L}^{2}_T(\dot{B}_{2, 2}^{\frac12, \mu})}^2}{\mu(\eta^{-1})^2} + \frac{T ||\bm{X}'||_{\widetilde{L}^{\infty}_T(\dot{B}_{2, 2}^{\frac12})}^2}{\eta}
\lesssim \frac{C_*^2 M^2}{\mu(\eta^{-1})^2\lambda} + \frac{C_*^2 M^2T}{\eta}.
\end{multline}
Above we use the spaces from \eqref{Besov.CL.Space} with \eqref{Besov.mu.Space} as in \eqref{C.space.temporal} and \eqref{D.space.temporal}, and the last line follows from \eqref{apriori.bd.norm}. We can choose $\eta>0$ small enough so that
$C \frac{C_*^2 M^2}{\mu(\eta^{-1})^2\lambda} < \frac12 \varepsilon$, and then we can choose $T=T_{\varepsilon}$ small enough so that $C\frac{C_*^2 M^2 T}{\eta}< \frac12 \varepsilon$.
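Explicitly, assuming (as is used throughout) that $\mu(r) \to \infty$ as $r \to \infty$ in Definition \ref{subw.definition}, one may for instance take
\begin{equation*}
\mu(\eta^{-1})^2 \geq \frac{2C C_*^2 M^2}{\lambda \varepsilon},
\qquad
T_{\varepsilon} \eqdef \frac{\eta\, \varepsilon}{2C C_*^2 M^2}.
\end{equation*}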
\end{proof}
Next, we prove the following lemma, which controls the $L^2(\mathbb T)$ norm of the time derivative of a solution by the $\dot{H}^1(\mathbb T)$ norm.
\begin{lemma}\label{time.space.equivalent}
A solution to \eqref{v.theta.def} satisfying \eqref{apriori.bd} and \eqref{apriori.bd.CM} has the estimate
\begin{equation}\notag
||\partial_t \bm{X}'(t)||_{
L^2(\mathbb T)} \leq C_1||\bm{X}'(t)||_{\dot{H}^1(\mathbb T)},
\end{equation}
for some constant $C_1 = C_1(M, \rho, \mathcal{C}_{1\TE})>0$ and for any time $0<t<T$.
\end{lemma}
\begin{proof}
We use the equation \eqref{v.theta.def} to obtain that
\begin{equation}\notag
||\partial_t \bm{X}'||_{L^2}\leq ||\widetilde{\Lambda} \mathbf{T}(\bm{X}')||_{L^2} + ||\mathcal{V}||_{L^2}.
\end{equation}
We will therefore estimate each of the two terms in the upper bound.
For the first term, we have from \eqref{e:QuantitativeTensionMap} that
\begin{equation}\notag
\begin{split}
||\widetilde{\Lambda} \mathbf{T}(\bm{X}')||_{L^2} & \approx ||\mathbf{T}(\bm{X}')||_{\dot{H}^1} \approx ||D\mathbf{T}(\bm{X}') \bm{X}''||_{L^2} \\&\lesssim \mathcal{C}_{1\TE} ||\bm{X}''||_{L^2} \lesssim \mathcal{C}_{1\TE} ||\bm{X}'||_{\dot{H}^1}.
\end{split}
\end{equation}
For the term $||\mathcal{V}||_{L^2}$, by the structure of $\mathcal{A}$ from \eqref{e:Abounds} with \eqref{apriori.bd} and \eqref{delta.alpha.BX.bound}, it is straightforward to get that
\begin{equation}\label{Vbound.g}
|\mathcal{V}(\theta)|\lesssim \mathcal{C}_{1\TE}\int_{\mathbb T} d\alpha ~ \left( \frac{ |\delta_\alpha \bm{X}'(\theta)|^2}{\rho \alpha^2}+\frac{|\delta_\alpha \bm{X}'(\theta)|^3}{\rho^2 \alpha^2}\right).
\end{equation}
Applying Minkowski's inequality, we then get that
\begin{equation}\notag
||\mathcal{V}||_{L^2} \lesssim \mathcal{C}_{1\TE} \int_{\mathbb T}
\frac{d\alpha}{\alpha^2} \left( \rho^{-1} ||\delta_\alpha \bm{X}'||_{L^4}^2 + \rho^{-2} ||\delta_\alpha \bm{X}'||_{L^6}^3 \right) .
\end{equation}
In terms of the Besov spaces the upper bound above is
\begin{equation}\notag
||\mathcal{V}||_{L^2} \lesssim
\mathcal{C}_{1\TE} \rho^{-1} ||\bm{X}'||_{\dot{B}^{1/2}_{4,2}}^2 + \mathcal{C}_{1\TE}\rho^{-2} ||\bm{X}'||_{\dot{B}^{1/3}_{6,3}}^3.
\end{equation}
From Proposition \ref{besov.ineq.prop} and Lemma \ref{Besov.interpolation}, we have the embedding inequalities
\begin{equation}\notag
||\bm{X}'||_{\dot{B}_{4,2}^{1/2}}^2 \lesssim
||\bm{X}'||_{\dot{B}_{2,2}^{3/4}}^2 \lesssim
||\bm{X}'||_{\dot{B}^{1/2}_{2,1}} \ ||\bm{X}'||_{\dot{H}^1},
\end{equation}
and
\begin{equation}\notag
||\bm{X}'||_{\dot{B}_{6,3}^{1/3}}^3 \lesssim ||\bm{X}'||_{\dot{B}_{2,2}^{2/3}}^3 \lesssim ||\bm{X}'||_{\dot{B}^{1/2}_{2,1}}^2 \ ||\bm{X}'||_{\dot{H}^1}.
\end{equation}
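The first interpolation can be sketched via dyadic blocks; assuming the decomposition used in Lemma \ref{Besov.interpolation} and writing $\Delta_j$ for the dyadic frequency blocks,
\begin{equation*}
||\bm{X}'||_{\dot{B}_{2,2}^{3/4}}^2
= \sum_j 2^{\frac{3j}{2}} ||\Delta_j \bm{X}'||_{L^2}^2
\leq \Big( \sup_j 2^{j} ||\Delta_j \bm{X}'||_{L^2} \Big) \sum_j 2^{\frac{j}{2}} ||\Delta_j \bm{X}'||_{L^2}
\leq ||\bm{X}'||_{\dot{H}^1}\, ||\bm{X}'||_{\dot{B}^{1/2}_{2,1}},
\end{equation*}
and the second interpolation follows similarly.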
Plugging in these inequalities and using \eqref{apriori.bd.norm} we have
\begin{equation}\notag
||\mathcal{V}||_{L^2} \lesssim
\mathcal{C}_{1\TE} \left( \rho^{-1} M + \rho^{-2} M^2 \right) ||\bm{X}'||_{\dot{H}^1}.
\end{equation}
This completes the proof.
\end{proof}
\begin{corollary}\label{prop:time.estimate}
Suppose we are given a solution to \eqref{peskin.general.tension} satisfying both \eqref{apriori.bd} and \eqref{apriori.bd.norm}. Then for any $\varepsilon>0$ there exists a time $T_\varepsilon = T(\varepsilon, M, \mu, \rho, \lambda, \mathcal{C}_{1\TE})>0$ such that
\begin{equation*}
\int_0^{T_\varepsilon}dt~ ||\partial_t \bm{X}'(t)||_{L^2_\theta}^2 < \varepsilon.
\end{equation*}
\end{corollary}
The proof of Corollary \ref{prop:time.estimate} follows from Lemma \ref{time.space.equivalent} and Lemma \ref{H1.small.time}.
\begin{proposition}\label{prop:arc.chord.small}
Suppose we are given a solution to \eqref{peskin.general.tension} satisfying both \eqref{apriori.bd} and \eqref{apriori.bd.norm}. Then for any small $\varepsilon>0$ there exists a time $T_\varepsilon = T(\varepsilon, M, \mu, \rho, \lambda, \mathcal{C}_{1\TE})>0$ such that
\begin{equation}\notag
||\bm{X}'(t)-\bm{X}'_0||_{L^\infty_\theta} < \varepsilon,
\end{equation}
for all $0<t<T_{\varepsilon}$.
\end{proposition}
\begin{proof}
We use the embedding \eqref{embed.infty} and \eqref{apriori.bd.norm}, and then we have for any $\eta>0$:
\begin{multline}\notag
||\bm{X}'(t)-\bm{X}'_0||_{L^\infty_\theta} \lesssim \int_\mathbb T \frac{d\beta}{|\beta|^{3/2}} ||\delta_\beta (\bm{X}'(t)-\bm{X}'_0)||_{L^2_\theta}
\lesssim \int_{|\beta| < \eta } + \int_{|\beta| \geq \eta }
\\ \lesssim \frac{||\bm{X}'||_{\BS_T}+||\bm{X}'_0||_{\mathcal{B}^\subw}}{\mu(\eta^{-1})} + \frac{||\bm{X}'(t) - \bm{X}'_0||_{L^2_\theta}}{\eta^{1/2}}\lesssim \frac{C_* M}{\mu(\eta^{-1})} + \frac{||\bm{X}'(t) - \bm{X}'_0||_{L^2_\theta}}{\eta^{1/2}}.
\end{multline}
Now fix a small $\varepsilon>0$. Taking $\eta$ sufficiently small, we can guarantee that
\begin{equation}\notag
||\bm{X}'(t)-\bm{X}'_0||_{L^\infty_\theta} \leq \frac{\varepsilon}{2} + C\frac{||\bm{X}'(t) - \bm{X}'_0||_{L^2_\theta}}{\eta^{1/2}}.
\end{equation}
Next we apply the Minkowski and H{\"o}lder inequalities so that we can bound the latter term as follows
\begin{equation}\notag
||\bm{X}'(t) - \bm{X}'_0||_{L^2_\theta} = \bigg|\bigg| \int_0^t ds~ \partial_t \bm{X}'(s) \bigg|\bigg|_{L^2_\theta} \leq t^{\frac12}\left(\int_0^t ds~ ||\partial_t \bm{X}'(s)||_{L^2_\theta}^2\right)^{\frac12}.
\end{equation}
Lastly we apply Corollary \ref{prop:time.estimate}, and the result then follows so long as $T_{\varepsilon}>0$ is taken sufficiently small.
\end{proof}
We now point out that the argument in \cite[Prop 8.7 on page 337]{MR1867882} shows that for any two vectors $\bm{X}_1$ and $\bm{X}_2$ from \eqref{distance.X.notation} and \eqref{arc.cord.number} we have
\begin{multline*}
\left||\bm{X}_1|_* - |\bm{X}_2|_* \right|
=
\left|\inf_{\theta \ne \alpha} |D_\alpha \bm{X}_1(\theta)| - \inf_{\theta \ne \alpha} |D_\alpha \bm{X}_2(\theta)| \right|
\\
\leq
\sup_{\theta \ne \alpha}\left| \frac{|\delta_\alpha \bm{X}_1(\theta)|}{|\alpha|} - \frac{|\delta_\alpha \bm{X}_2(\theta)|}{|\alpha|} \right|
\leq
\sup_{\theta \ne \alpha} \frac{|\delta_\alpha (\bm{X}_1-\bm{X}_2)(\theta)|}{|\alpha|}.
\end{multline*}
We thus conclude that
\begin{equation}\label{chord.arc.upper}
\left||\bm{X}_1|_* - |\bm{X}_2|_* \right|
\leq
|| \bm{X}_1' - \bm{X}_2'||_{L^\infty_\theta}.
\end{equation}
We can now deduce from Proposition \ref{prop:arc.chord.small} and \eqref{chord.arc.upper} that if initially $|\bm{X}_0|_*>0$, then for a solution to \eqref{peskin.general.tension} satisfying \eqref{apriori.bd.norm} for any fixed $\rho$ satisfying $0<\rho<|\bm{X}_0|_*$ there exists a small-time $T_\rho>0$ such that \eqref{apriori.bd} holds over $0 \le t \le T_\rho$.
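In more detail, taking $\varepsilon \eqdef |\bm{X}_0|_* - \rho > 0$ in Proposition \ref{prop:arc.chord.small} and setting $T_\rho \eqdef T_\varepsilon$, the bound \eqref{chord.arc.upper} applied to $\bm{X}(t)$ and $\bm{X}_0$ gives, for $0 \le t \le T_\rho$,
\begin{equation*}
|\bm{X}(t)|_*
\geq
|\bm{X}_0|_* - ||\bm{X}'(t)-\bm{X}'_0||_{L^\infty_\theta}
>
|\bm{X}_0|_* - \varepsilon
=
\rho.
\end{equation*}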
\section{Strong continuity estimate}\label{sec:DifferenceBesovSpace}
In this section we will prove two a priori continuity estimates that will imply the uniqueness of solutions. In \secref{sec:strongCont} we will prove the estimate that will establish the strong continuity result in Theorem \ref{main:unique}. Then in \secref{sec:l2continuity} we prove the estimates that will give the uniqueness in Theorem \ref{first:unique}.
\subsection{Strong continuity estimate}\label{sec:strongCont}
We consider two different solutions to \eqref{peskin.general.tension}, $\bm{X}'(t,\theta)$ and $\bm{Y}'(t,\theta)$, with corresponding initial data $\bm{X}_0$ and $\bm{Y}_0$ respectively. In this section we will sometimes use the notation $\bm{Z}$ to denote either $\bm{X}$ or $\bm{Y}$. When we use $\bm{Z}$ in the estimates below it will not matter whether it is $\bm{X}$ or $\bm{Y}$. We consider initial data for \eqref{peskin.general.tension}, $\bm{Z}_0$, satisfying
\begin{equation}\label{initial.assumption.Z}
||\bm{Z}'_0||_{\mathcal{B}^\subw}=||\bm{Z}'_0||_{\dot{B}^{\frac12,\mu}_{2,1}} \leq M, \qquad |\bm{Z}_0|_* = \inf_{\alpha\neq \theta} |D_\alpha \bm{Z}_0(\theta)| >0.
\end{equation}
Here $0 < M < \infty$ is allowed to be large.
Then for some $C_* >0$ we suppose for $T>0$ that for some $c>0$ and $\lambda>0$ as in \eqref{e:DTdefn} we have
\begin{equation}\label{apriori.bd.norm.Z}
||\bm{Z}'||_{\BS_T}+ c \lambda^{\frac12}||\bm{Z}'||_{\mathcal{D}_T^\subw} \le C_* M.
\end{equation}
We also prove our estimate in this section, for some $\rho>0$ that is allowed to be small, under the following condition
\begin{equation}\label{apriori.bd.Z}
|\bm{Z}(t)|_*\geq \rho, \quad 0 \le t \le T.
\end{equation}
Given $\mu$ from Definition \ref{subw.definition}, we will use the equivalent semi-norm defined with $\nu$ instead of $\mu$, where $\nu$ is given by
\begin{equation}\label{nu.definition}
\nu(r) \eqdef 1 + \frac{\mu(r)}{\KCC \max\{1,M\}} , \quad C_3 \ge 1.
\end{equation}
We will choose $C_3 = C_3(\lambda^{-1}, \mathcal{C}_{1\TE})$ to be a possibly large constant at the end of the proof of Proposition \ref{prop:continuity}. Notice that $\nu$ defines equivalent norms $\BN_T$ and $\mathcal{D}_T^\nu$ to the norms $\BS_T$ and $\mathcal{D}_T^\subw$ defined in \eqref{C.space.temporal} and \eqref{D.space.temporal} respectively. In particular, from \eqref{nu.definition} we have
\begin{equation}\label{equivalent.nu.norm}
|| f ||_{\BN_T} \leq 2 || f ||_{\BS_T}, \quad
|| f ||_{\mathcal{D}_T^\nu} \leq 2 || f ||_{\mathcal{D}_T^\subw},
\end{equation}
and
\begin{equation}\notag
|| f ||_{\BS_T} \leq \KCC \max\{1,M\} || f ||_{\BN_T}, \quad
|| f ||_{\mathcal{D}_T^\subw} \leq \KCC \max\{1,M\} || f ||_{\mathcal{D}_T^\nu}.
\end{equation}
Then with this equivalent norm we will prove the following continuity estimate.
\begin{proposition}\label{prop:continuity}
Let $\bm{X}, \bm{Y}: [0,T]\times \mathbb T \to \mathbb R^2$ be two weak solutions to the Peskin problem with tension $\mathcal{T}$ in the sense of Definition \ref{def:solution} with initial data $\bm{X}_0,$ $\bm{Y}_0$ respectively. Assume that $\bm{X}_0,$ $\bm{Y}_0,$ and $\mathbf{T}$ satisfy the assumptions of Theorem \ref{main:unique} including \eqref{initial.assumption.Z}. Additionally, assume that \eqref{apriori.bd.norm.Z} and \eqref{apriori.bd.Z} hold. For the tension map \eqref{tension.map.def} we assume that \eqref{e:QuantitativeTensionMap} and \eqref{tension.derivatives.continuity} hold.
Then for the two solutions $\bm{X}'$ and $\bm{Y}'$ to \eqref{peskin.general.tension} over $0\le t \le T$ with $T>0$ we have
\begin{equation*}
||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\lambda^{\frac12} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
4 || \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}
+C || \bm{X}'-\bm{Y}'||_{\BN_T} T^{\frac12} \mathcal{W},
\end{equation*}
where $\mathcal{W}=\mathcal{W}[\rho, M]$ is defined in \eqref{WK.constant.def}.
In particular, there exists $T_M = T_M(M, \rho, \mu, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE}, \mathcal{C}_{3\TE})>0$ such that for any $0<T\leq T_M,$
we have the following estimate
\begin{equation*}
||\bm{X}' - \bm{Y}'||_{\BN_T}
+
2\lambda^{\frac12} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
8 || \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}.
\end{equation*}
\end{proposition}
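Let us note how the second estimate follows from the first: if $T_M$ is chosen so that $C T^{\frac12} \mathcal{W} \leq \frac12$ for all $0 < T \leq T_M$, then the first estimate allows us to absorb the last term into the left side, giving
\begin{equation*}
\frac12 ||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\lambda^{\frac12} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
4 || \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu},
\end{equation*}
and multiplying through by $2$ yields the second estimate.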
For use below we define the following notation using \eqref{arc.cord.number}:
\begin{equation}\label{arc.chord.XY2}
|\bm{X},\bm{Y}|_* \eqdef \min\{|\bm{X}|_*,|\bm{Y}|_*\}.
\end{equation}
Then the next two lemmas will be used in the proof of Proposition \ref{prop:continuity}.
\begin{lemma}\label{lemm.A.diff}
We have the following uniform estimate
\begin{multline} \notag
\left| \mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}] \right|
\lesssim
|\bm{X}|_*^{-1}\left(\left| \delta_\alpha^+ (\bX'-\bY') (\theta)\right|
+
\left| \delta_\alpha^- (\bX'-\bY') (\theta)\right|
\right)
\\
+
|\bm{X}|_*^{-2}\left(\left| \delta_\alpha^+ (\bX'-\bY') (\theta)\right| \left| \delta_\alpha^- \BX' (\theta)\right|
+
\left| \delta_\alpha^- (\bX'-\bY') (\theta)\right| \left| \delta_\alpha^+ \BY' (\theta)\right|
\right)
\\
+
|\bm{X},\bm{Y}|_*^{-2}\left| D_\alpha(\bm{X}'-\bm{Y}')(\theta)\right|\left( \left| \delta_\alpha^- \BY' (\theta)\right|
+\left| \delta_\alpha^+ \BY' (\theta)\right|
\right)
\\
+
|\bm{X},\bm{Y}|_*^{-3}\left| D_\alpha(\bm{X}'-\bm{Y}')(\theta)\right| \left| \delta_\alpha^- \BY' (\theta)\right|
\left| \delta_\alpha^+ \BY' (\theta)\right|.
\end{multline}
\end{lemma}
We also use the decomposition in \eqref{Abeta.split} as
\begin{equation}\notag
\delta_\beta \mathcal{A}[\bm{X}] = \mathcal{A}_{1\beta}[\bm{X}] + \mathcal{A}_{2\beta}[\bm{X}],
\quad
\delta_\beta \mathcal{A}[\bm{Y}] = \mathcal{A}_{1\beta}[\bm{Y}] + \mathcal{A}_{2\beta}[\bm{Y}].
\end{equation}
We further introduce the following notation
\begin{equation}\label{notation.two.beta}
|\delta_\beta\DAL \BX , \delta_\beta\DAL \BY |
\eqdef
\max\{|\delta_\beta\DAL \BX (\theta)|, | \delta_\beta\DAL \BY (\theta)|\}.
\end{equation}
\begin{lemma}\label{lem:Abeta.upper} We have the uniform estimate for the difference
\begin{multline}\label{A1beta.upper}
\left| \mathcal{A}_{1\beta}[\bm{X}] - \mathcal{A}_{1\beta}[\bm{Y}] \right|
\lesssim
\left( |\delta_\beta \delta_\alpha^+ (\bX'-\bY') (\theta)|
+
|\delta_\beta \delta_\alpha^- (\bX'-\bY') (\theta)|
\right)|\bm{X}|_{*}^{-1}
\\
+
\left(
|\delta_\beta \delta_\alpha^+ (\bX'-\bY') (\theta)||\tau_\beta \delta_\alpha^- \BX' (\theta)|+ |\delta_\beta \delta_\alpha^- (\bm{X}'-\bm{Y}')(\theta)| |\delta_\alpha^+ \BY' (\theta)| \right) |\bm{X}|_{*}^{-2}
\\
+
|\delta_\beta \delta_\alpha^+ \BY' (\theta)|
|\tau_\beta \delta_\alpha^- (\bX'-\bY') (\theta)|
|\bm{X}|_{*}^{-2}
+
|\delta_\alpha^+ (\bX'-\bY') (\theta)||\delta_\beta \delta_\alpha^- \BX' (\theta)| |\bm{X}|_{*}^{-2}
\\
+
\left( |\delta_\beta \delta_\alpha^+ \BY' (\theta)|
+
|\delta_\beta \delta_\alpha^- \BY' (\theta)|
\right)
|\tau_\beta D_\alpha(\bm{X}'-\bm{Y}')(\theta)||\bm{X},\bm{Y}|_{*}^{-2}
\\
+
|\delta_\beta \delta_\alpha^+ \BY' (\theta)||\tau_\beta \delta_\alpha^- \BY' (\theta)|
|\tau_\beta D_\alpha(\bm{X}'-\bm{Y}')(\theta)||\bm{X},\bm{Y}|_{*}^{-3}
\\
+
|\delta_\alpha^+ \BY' (\theta)||\delta_\beta \delta_\alpha^- \BY' (\theta)|
|D_\alpha(\bm{X}'-\bm{Y}')(\theta)| |\bm{X},\bm{Y}|_{*}^{-3}.
\end{multline}
And we have
\begin{multline}\label{A2beta.upper}
\left| \mathcal{A}_{2\beta}[\bm{X}] - \mathcal{A}_{2\beta}[\bm{Y}] \right|
\lesssim
|\delta_\alpha^+ (\bX'-\bY') (\theta)|(1+|\tau_\beta \delta_\alpha^- \BX' (\theta)|+| \delta_\alpha^- \BX' (\theta)| ) \frac{|\delta_\beta \DAL \BX (\theta)|}{|\bm{X}|_{*}^{3}}
\\
+
\left(
|\tau_\beta \delta_\alpha^- (\bX'-\bY') (\theta)|
| \delta_\alpha^+ \BY' (\theta)|
+
| \delta_\alpha^- (\bX'-\bY') (\theta)|(1+|\delta_\alpha^+ \BY' (\theta)|)
\right)\frac{|\delta_\beta \DAL \BX (\theta)|}{|\bm{X}|_{*}^{3}}
\\
+
(| \delta_\alpha^+ \BY' (\theta)|(1+| \delta_\alpha^- \BY' (\theta)| +|\tau_\beta \delta_\alpha^- \BY' (\theta)|)+| \delta_\alpha^- \BY' (\theta)|)
\frac{|\delta_\beta D_\alpha(\bm{X}'-\bm{Y}')(\theta)|}{|\bm{X},\bm{Y}|_{*}^{3}}
\\
+
| \delta_\alpha^+ \BY' (\theta)||\tau_\beta \delta_\alpha^- \BY' (\theta)|
|\tau_\beta D_\alpha(\bm{X}'-\bm{Y}')(\theta)|
\frac{|\delta_\beta\DAL \BX , \delta_\beta\DAL \BY |}{|\bm{X},\bm{Y}|_{*}^{4}}
\\
+
\left(
| \delta_\alpha^+ \BY' (\theta)||\tau_\beta \delta_\alpha^- \BY' (\theta)|
+
|\delta_\alpha^+ \BY' (\theta)|| \delta_\alpha^- \BY' (\theta)|
\right)
|D_\alpha(\bm{X}'-\bm{Y}')(\theta)| \frac{|\delta_\beta\DAL \BX , \delta_\beta\DAL \BY |}{|\bm{X},\bm{Y}|_{*}^{4}}
\\
+
\left(| \delta_\alpha^+ \BY' (\theta)|
+
| \delta_\alpha^- \BY' (\theta)|
\right)
|D_\alpha(\bm{X}'-\bm{Y}')(\theta)| \frac{|\delta_\beta\DAL \BX , \delta_\beta\DAL \BY |}{|\bm{X},\bm{Y}|_{*}^{4}}.
\end{multline}
\end{lemma}
The proofs of Lemmas \ref{lemm.A.diff} and \ref{lem:Abeta.upper} are contained in \secref{sec:kernelDIFF}.
\begin{proof}[Proof of Proposition \ref{prop:continuity}] For now we consider \eqref{peskin.general.tension}, and we take the difference of the two solutions as
\begin{multline}\notag
\partial_t \bm{X}'(\theta) -\partial_t \bm{Y}'(\theta) = \int_{\mathbb T} d\alpha \frac{\mathcal{K}[\bm{X}](\theta, \alpha)}{\alpha^2} \delta_\alpha \left(\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta))\right)
\\
+
\int_{\mathbb T} d\alpha \frac{\mathcal{A}[\bm{X}](\theta, \alpha)-\mathcal{A}[\bm{Y}](\theta, \alpha)}{\alpha^2} \delta_\alpha \mathbf{T}(\bm{Y}'(\theta)).
\end{multline}
We now take $\delta_\beta$ of the equation above to obtain
\begin{multline}\notag
\partial_t \delta_\beta(\bm{X}' - \bm{Y}')(\theta) = \int_{\mathbb T} d\alpha \frac{\tau_\beta \mathcal{K}[\bm{X}](\theta, \alpha)}{\alpha^2} \delta_\beta\delta_\alpha \left(\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta))\right)
\\
+
\int_{\mathbb T} d\alpha \frac{\delta_\beta \mathcal{A}[\bm{X}](\theta, \alpha)}{\alpha^2}
\delta_\alpha(\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
\\
+
\int_{\mathbb T} d\alpha \frac{\tau_\beta(\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])(\theta, \alpha)}{\alpha^2} \delta_\beta\delta_\alpha \mathbf{T}(\bm{Y}'(\theta))
\\
+
\int_{\mathbb T} d\alpha \frac{\delta_\beta(\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])(\theta, \alpha)}{\alpha^2} \delta_\alpha \mathbf{T}(\bm{Y}'(\theta)).
\end{multline}
Now we consider this expression in $L^2$ similar to \eqref{beta.L2.difference.1} as
\begin{multline}\label{energy.diff}
\frac{d}{dt} || \delta_\beta \bm{X}' - \delta_\beta \bm{Y}'||_{L^2}^2
\\
=
- \int_{\mathbb T} d\theta \int_{\mathbb T} d\alpha ~
\delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta) \cdot
\frac{\tau_\beta \mathcal{K}[\bm{X}](\theta, \alpha)}{\alpha^2}
\delta_\beta\delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
\\
- \int_{\mathbb T} d\theta \int_{\mathbb T} d\alpha ~
\delta_\beta \delta_\alpha (\bm{X}'- \bm{Y}')(\theta)
\cdot\frac{\delta_\beta\mathcal{A}[\bm{X}](\theta, \alpha)}{\alpha^2}
\delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
\\
- \int_{\mathbb T} d\theta \int_{\mathbb T} d\alpha ~
\delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)
\cdot \frac{ \tau_\beta (\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])(\theta, \alpha)}{\alpha^2}
\delta_\beta \delta_\alpha \mathbf{T}(\bm{Y}'(\theta))
\\
- \int_{\mathbb T} d\theta \int_{\mathbb T} d\alpha ~
\delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)
\cdot
\frac{\delta_\beta(\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])(\theta, \alpha)}{\alpha^2} \delta_\alpha \mathbf{T}(\bm{Y}'(\theta)).
\end{multline}
To prove our estimate, we will first expand out each of the terms above.
To this end we recall \eqref{e:deltaalphaT} and \eqref{e:albe.T.XY}. As in \eqref{e:deltaalphaT} we expand out
\begin{multline}\label{difference.alpha.T}
\delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
=
\overline{D\bT}[\BX'] \delta_\alpha( \bm{X}'- \bm{Y}' )(\theta) \\ + (\overline{D\bT}[\BX'] - \overline{D\bT}[\BY'] )\delta_\alpha\bm{Y}'(\theta).
\end{multline}
Further as in \eqref{DBTX.def} we calculate that
\begin{equation}\label{difference.DBTXY}
\overline{D\bT}[\BX'] - \overline{D\bT}[\BY']
=
\int_0^1 ds_1~ \widetilde{D^2\bT} (s_1,\theta,\alpha) ~
(g_1[\bm{X}']-g_1[\bm{Y}'] ),
\end{equation}
where $g_1[\bm{X}']=g_1[\bm{X}'](s_1, \alpha, \theta)$ is defined below \eqref{DBTX.def} and
\begin{equation}\label{DDBT.term}
\widetilde{D^2\bT} (s_1,\theta,\alpha)
\eqdef
\int_0^1 ds_2~ D^2\mathbf{T}
(f_2[\bm{X}',\bm{Y}'](s_1,s_2,\theta,\alpha)).
\end{equation}
Here we also use the definition
\begin{multline}\notag
f_2[\bm{X}',\bm{Y}'](s_1,s_2,\theta,\alpha)
\eqdef
s_2(s_1 \tau_\alpha ( \bm{X}'-\bm{Y}')(\theta) +
(1-s_1)( \bm{X}'-\bm{Y}')(\theta) )
\\
+s_1 \tau_\alpha \bm{Y}'(\theta) +
(1-s_1)\bm{Y}'(\theta).
\end{multline}
Thus recalling Remark \ref{ignore.remark} and using \eqref{e:QuantitativeTensionMap} we have
\begin{multline*}
| \delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta))) |
\leq
\mathcal{C}_{1\TE} |\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
\\
+\mathcal{C}_{2\TE}
| (\bm{X}'-\bm{Y}')(\theta)|
|\delta_\alpha \bm{Y}'(\theta)|.
\end{multline*}
We will use this estimate for the second term in \eqref{energy.diff}.
For the first term in \eqref{energy.diff}, we expand \eqref{e:albe.T.XY} out as
\begin{multline*}
\delta_\beta\delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
=
\tau_\beta \overline{D\bT}[\BX'] \delta_\beta\delta_\alpha( \bm{X}'- \bm{Y}' )(\theta)
\\
+ \tau_\beta (\overline{D\bT}[\BX'] - \overline{D\bT}[\BY'] )\delta_\beta\delta_\alpha\bm{Y}'(\theta)
+
\delta_\beta \overline{D\bT}[\BX'] \delta_\alpha(\bm{X}'-\bm{Y}')(\theta)
\\
+
(\delta_\beta \overline{D\bT}[\BX'] -\delta_\beta \overline{D\bT}[\BY'] )\delta_\alpha \bm{Y}'(\theta).
\end{multline*}
Notice that $\delta_\beta \overline{D\bT}[\BX'] $ is calculated in \eqref{delta.beta.DBTX} and it has the bound \eqref{delta.beta.DBTX.bound}. We further calculate using $g_1$ from \eqref{DBTX.def} and \eqref{delta.beta.DBTX} that
\begin{multline}\label{delta.beta.DBTX.minus}
\delta_\beta\overline{D\bT}[\BX'] - \delta_\beta\overline{D\bT}[\BY']
=
\int_0^1 ds_1 ~
\overline{D^2\bT} [\bm{X}'](\theta) g_1[\delta_\beta(\bm{X}'-\bm{Y}')](s_1, \alpha, \theta)
\\
+
\int_0^1 ds_1 ~
(\overline{D^2\bT} [\bm{X}']- \overline{D^2\bT} [\bm{Y}']) g_1[\delta_\beta\bm{Y}'](s_1, \alpha, \theta),
\end{multline}
where
\begin{equation*}
\overline{D^2\bT} [\bm{X}']- \overline{D^2\bT} [\bm{Y}']
=
\int_0^1 ds_2~ \overline{D^3\bT} (\theta,\alpha,\beta) g_2[\bm{X}'-\bm{Y}'],
\end{equation*}
where $g_2[\bm{X}'-\bm{Y}']$ is defined in \eqref{g2.s2.def}. We further use
\begin{equation*}
\overline{D^3\bT} (\theta,\alpha,\beta)
\eqdef
\int_0^1 ds_3~ D^3\mathbf{T}
(g_3(s_3,\theta,\alpha,\beta)),
\end{equation*}
and with $g_2[\bm{X}']$ defined in \eqref{g2.s2.def} we use
\begin{equation*}
g_3(s_3,\theta,\alpha,\beta)
\eqdef
s_3 g_2[\bm{X}'](s_2,s_1,\alpha,\theta,\beta) + (1-s_3)g_2[\bm{Y}'](s_2,s_1,\alpha,\theta,\beta).
\end{equation*}
Notice that $ \tau_\beta \overline{D\bT}[\BX'] \delta_\beta\delta_\alpha( \bm{X}'- \bm{Y}' )(\theta)$ with \eqref{DBTX.def} will give rise to the crucial elliptic term in \eqref{energy.diff} using \eqref{e:DTdefn}. Putting all of this together, including Remark \ref{ignore.remark}, and using \eqref{delta.beta.DBTX.bound}, \eqref{e:QuantitativeTensionMap} and \eqref{tension.derivatives.continuity}, we conclude the following bound
\begin{multline*}
|\delta_\beta\delta_\alpha (\mathbf{T}(\bm{X}'(\theta))-\mathbf{T}(\bm{Y}'(\theta)))
-
\tau_\beta \overline{D\bT}[\BX'] \delta_\beta\delta_\alpha( \bm{X}'- \bm{Y}' )(\theta) |
\\
\leq
\mathcal{C}_{2\TE}(
| \delta_\beta\delta_\alpha\bm{Y}'(\theta) |
|(\bm{X}'-\bm{Y}')(\theta)|
+
|\delta_\beta\bm{X}'(\theta)|
|\delta_\alpha(\bm{X}'-\bm{Y}')(\theta)|)
\\
+
\mathcal{C}_{2\TE}
|\delta_\beta( \bm{X}' - \bm{Y}')(\theta)| |\delta_\alpha \bm{Y}'(\theta)|
+
\mathcal{C}_{3\TE}
|(\bm{X}'-\bm{Y}')(\theta)|
|\delta_\beta \bm{Y}'(\theta)| |\delta_\alpha \bm{Y}'(\theta)|.
\end{multline*}
We will use this to estimate the first term in \eqref{energy.diff}.
To bound the third term in \eqref{energy.diff}, we recall \eqref{e:albe.T.XY} and \eqref{beta.alpha.TXP}.
Lastly, to bound the fourth term in \eqref{energy.diff} we recall \eqref{e:deltaalphaT} and \eqref{delta.alpha.BX.bound}.
Plugging all of these calculations into \eqref{energy.diff}, and using \eqref{kerbel.eqn.deriv} with \eqref{kerbel.A.eqn.deriv} and \eqref{e:DTdefn} and Remark \ref{ignore.remark}, we obtain
\begin{multline}\label{energy.difference.first1}
\frac{d}{dt} || \delta_\beta (\bm{X}' - \bm{Y}')||_{L^2}^2
+
\lambda\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
|\delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)|^2
\\
\leq \sum_{j=0}^{3} \mathcal{N}_j
+ \sum_{j=4}^{7} \mathcal{N}_j
+ \sum_{j=8}^{9} \mathcal{N}_j.
\end{multline}
To ease the notation, when we list the terms below we will drop the $(\theta)$ and $(\theta, \alpha)$ notation from each term. For example we will write $\tau_\beta \mathcal{A}[\bm{X}](\theta, \alpha) = \tau_\beta \mathcal{A}[\bm{X}]$. Then with \eqref{kerbel.eqn.deriv} and \eqref{kerbel.A.eqn.deriv} we have
\begin{equation}\notag
\mathcal{N}_0 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')|^2 |\tau_\beta \mathcal{A}[\bm{X}]|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_1 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| | \delta_\alpha (\bm{X}'- \bm{Y}')| |\delta_\beta \mathcal{A}[\bm{X}]|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_2 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| | \delta_\beta\delta_\alpha \bm{Y}'| |\tau_\beta (\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_3 \eqdef \mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| | \delta_\alpha \bm{Y}'| |\delta_\beta(\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_4 \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| |\tau_\beta \mathcal{K}[\bm{X}]|
| \bm{X}'- \bm{Y}'| | \delta_\beta \delta_\alpha \bm{Y}'|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_5 \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| |\tau_\beta \mathcal{K}[\bm{X}]|
| \delta_\beta \bm{X}'| | \delta_\alpha (\bm{X}'- \bm{Y}')|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_{6} \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| |\tau_\beta \mathcal{K}[\bm{X}]|
| \delta_\beta (\bm{X}'- \bm{Y}')|
| \delta_\alpha \bm{Y}'|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_{7} \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| |\delta_\beta \mathcal{A}[\bm{X}]|
|\bm{X}'- \bm{Y}'| |\delta_\alpha \bm{Y}'|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_{8} \eqdef \mathcal{C}_{2\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')| |\tau_\beta (\mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}])|
|\delta_\beta \bm{Y}'| |\delta_\alpha \bm{Y}'|,
\end{equation}
\begin{equation}\notag
\mathcal{N}_{9} \eqdef \mathcal{C}_{3\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')|
|\tau_\beta \mathcal{K}[\bm{X}]|
|\bm{X}'- \bm{Y}'|
|\delta_\beta \bm{Y}'| |\delta_\alpha \bm{Y}'|.
\end{equation}
We will estimate each of the terms above individually. In all of the following estimates we will use a small $\eta>0$, to be chosen at the end of the proof.
First notice that $\mathcal{N}_{0}$ is analogous to $\mathcal{L}_{1}$ from \eqref{beta.L2.difference} and \eqref{e:Abounds}. Thus similar to \eqref{LA.estimate} we have
\begin{equation}\label{TT1.estimate}
\mathcal{N}_{0}
\leq
\left( \frac{\lambda}{64} + C \kappa_1 \mathcal{C}_{1\TE} \right) ||\delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2 + C ||\delta_\beta (\bm{X}'-\bm{Y}')||_{L^2_\theta}^2 \mathcal{U}_1,
\end{equation}
where $\kappa_1 = \kappa_1(\eta)$ is given by \eqref{kappa1.def} and $\mathcal{U}_1$ is defined in \eqref{BB.const.def}. Later we will be able to choose $\eta>0$ small enough so that we have $C \kappa_1 \mathcal{C}_{1\TE} \leq \frac{1}{2}\frac{\lambda}{64}$. This is our main estimate for the term $\mathcal{N}_{0}$.
Then the term $\mathcal{N}_{1}$ is exactly $\mathcal{L}_2[\bm{X}'-\bm{Y}', \bm{X}', \bm{X}'-\bm{Y}']$ from \eqref{LLT2.generalform}. Thus as in \eqref{LLT2.bound.general}, $\mathcal{N}_{1}$ satisfies the estimate
\begin{multline}\label{TT2.bound}
\mathcal{N}_{1}
\leq
\frac{\lambda}{64} ||\delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
+
C\kappa_3\mathcal{C}_{1\TE}^2
|| \delta_\beta \bm{X}'||_{L^\infty_\theta}^2 || \bm{X}'-\bm{Y}'||_{\mathcal{B}^\MA}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
+C || \delta_\beta\bm{X}'||_{L^2_\theta}^2
|| \bm{X}'-\bm{Y}'||_{L^\infty_\theta}^2 \mathcal{C}_{1\TE}^2
\mathcal{W}_2,
\end{multline}
where $\kappa_3 = \kappa_3(\eta)$ is given by \eqref{kappa3.def}
and as in \eqref{BG.const.def} we have
\begin{equation}\label{WK2.constant.def}
\mathcal{W}_2 = \mathcal{W}_2[M,\rho^{-1},\eta^{-1}]
\eqdef
\eta^{-2}
\left( \sum_{j=1}^2 \rho^{-j} M^{j-1}\right)^2(1+\rho^{-2} M^2).
\end{equation}
This is our main estimate for the term $\mathcal{N}_{1}$.
Next we consider $\mathcal{N}_{7}$. After bounding $|\bm{X}'- \bm{Y}'| \lesssim ||\bm{X}'- \bm{Y}'||_{L^\infty_\theta}$, as in \eqref{LLT2.bound.general} with \eqref{apriori.bd.norm.Z}, the term $\mathcal{N}_{7}$ satisfies the bound
\begin{multline}\label{TT11.bound}
\mathcal{N}_{7}
\leq
\frac{\lambda}{64} ||\delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
+
C\kappa_3\mathcal{C}_{2\TE}^2 M^2 || \bm{X}'-\bm{Y}'||_{L^\infty_\theta}^2
|| \delta_\beta \bm{X}'||_{L^\infty_\theta}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
+
C
|| \delta_\beta\bm{X}'||_{L^2_\theta}^2
|| \bm{X}'-\bm{Y}'||_{L^\infty_\theta}^2 M^2\mathcal{C}_{2\TE}^2 \mathcal{W}_2.
\end{multline}
This is our main estimate for $\mathcal{N}_{7}$.
Next, we apply the estimate \eqref{TTG.y.estimate} to $\mathcal{N}_{5}$ to obtain
\begin{multline}\label{TT7.bound}
\mathcal{N}_{5}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C\kappa_5 \mathcal{C}_{2\TE}^2|| \delta_\beta \bm{X}'||_{L^\infty_\theta}^2 ||\bm{X}'- \bm{Y}'||_{\mathcal{B}^\MA}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2+ C || \delta_\beta\bm{X}'||_{L^2_\theta}^2
||\bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2 \mathcal{C}_{2\TE}^2
\mathcal{W}_5,
\end{multline}
where, recalling $\mathcal{W}_1=\mathcal{W}_1[\rho, M]=C(1+ \rho^{-2} M^2)$ from \eqref{K.bound.infty}, we define
\begin{equation}\label{kappa5.def}
\kappa_5 \eqdef \frac{(1+ \rho^{-2} M^2)^2}{\lambda \subw(\eta^{-1})^2},
\end{equation}
and
\begin{equation}\label{WK5.constant.def}
\mathcal{W}_5 \eqdef \eta^{-2}
(1+ \rho^{-2} M^2)^2.
\end{equation}
Next, we apply the estimate \eqref{TTG.y.estimate} to $\mathcal{N}_{6}$ to obtain
\begin{multline}\label{TT9.bound}
\mathcal{N}_{6}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C\kappa_6 \mathcal{C}_{2\TE}^2 || \delta_\beta (\bm{X}'- \bm{Y}')||_{L^\infty_\theta}^2
\\
+
C
|| \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
\mathcal{C}_{2\TE}
M (1+ \rho^{-2} M^2) \eta^{-1},
\end{multline}
where
\begin{equation}\label{kappa6.def}
\kappa_6 \eqdef \frac{M^2}{\lambda \subw(\eta^{-1})^2} (1+ \rho^{-2} M^2)^2.
\end{equation}
We also apply the estimate \eqref{TTG.y.estimate} to $\mathcal{N}_{9}$ to obtain
\begin{multline}\label{TT18.bound}
\mathcal{N}_{9}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C\kappa_6 \mathcal{C}_{3\TE}^2 ||\bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2 || \delta_\beta \bm{Y}'||_{L^\infty_\theta}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
\\
+C || \delta_\beta\bm{Y}'||_{L^2_\theta}^2
||\bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2
\mathcal{C}_{3\TE}^2
M^2 (1+ \rho^{-2} M^2)^2 \eta^{-2}.
\end{multline}
This is our main estimate for $\mathcal{N}_{9}$.
We will now estimate the terms $\mathcal{N}_{2}$ and $\mathcal{N}_{8}$. From Lemma \ref{lemm.A.diff}, Remark \ref{ignore.remark} and \eqref{apriori.bd.Z} we have
\begin{multline} \label{A.diff.est.noPM}
\left| \mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}] \right|
\lesssim
| \delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|\left( \rho^{-1}+
\rho^{-2}(| \delta_\alpha \bm{X}'(\theta)|+| \delta_\alpha \bm{Y}'(\theta)|)
\right)
\\
+
\left| D_\alpha (\bm{X} - \bm{Y})\right| | \delta_\alpha \bm{Y}'(\theta)|
\left(\rho^{-2}+ \rho^{-3}|\delta_\alpha \bm{Y}'(\theta)|\right).
\end{multline}
We thus define
\begin{equation}\label{WK21.constant.def}
\mathcal{W}_{21}= \mathcal{W}_{21}[\rho,M] \eqdef
\rho^{-1}+
\rho^{-2}M, \quad \mathcal{W}_{22}= \rho^{-1}\mathcal{W}_{21}.
\end{equation}
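With this notation, using \eqref{apriori.bd.norm.Z} to control $|\delta_\alpha \bm{X}'(\theta)|$ and $|\delta_\alpha \bm{Y}'(\theta)|$ by a constant multiple of $M$, and bounding $|D_\alpha (\bm{X} - \bm{Y})| \lesssim ||\bm{X}'-\bm{Y}'||_{L^\infty_\theta}$ as in \eqref{operator.bd.first}, the estimate \eqref{A.diff.est.noPM} can be summarized as
\begin{equation*}
\left| \mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}] \right|
\lesssim
\mathcal{W}_{21} | \delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
+
\mathcal{W}_{22} || \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}} | \delta_\alpha \bm{Y}'(\theta)|,
\end{equation*}
which is the form used in the splitting of $\mathcal{N}_{2}$ below.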
Then for $\mathcal{N}_{2}$ using \eqref{operator.bd.first} we split it up as
\begin{multline*}
\mathcal{N}_{2} \leq C \mathcal{W}_{21}\mathcal{C}_{1\TE} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| | \delta_\beta \bm{Y}'(\theta)| | \delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
\\
+
C \mathcal{W}_{22}\mathcal{C}_{1\TE} || \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}}\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| | \delta_\beta \bm{Y}'(\theta)| | \delta_\alpha \bm{Y}'(\theta)|
\\
= \mathcal{N}_{21}+\mathcal{N}_{22}.
\end{multline*}
Then for $\mathcal{N}_{21}$ we use \eqref{TTG.y.estimate} to obtain
\begin{multline}\label{TT31.estimate}
\mathcal{N}_{21}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
\frac{C \mathcal{W}_{21}^2\mathcal{C}_{1\TE}^2}{\lambda \subw(\eta^{-1})^2} || \delta_\beta \bm{Y}'||_{L^\infty_\theta}^2
|| \bm{X}'- \bm{Y}'||_{\mathcal{B}^\MA}^2
\\
+
\frac{1}{64} || \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C|| \delta_\beta\bm{Y}'||_{L^2_\theta}^2
||\bm{X}'- \bm{Y}'||_{L^{\infty}_\theta}^2
\mathcal{W}_{21}^2\mathcal{C}_{1\TE}^2 \eta^{-2}.
\end{multline}
And for $\mathcal{N}_{22}$ we similarly obtain
\begin{multline}\label{TT32.estimate}
\mathcal{N}_{22}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
\frac{C \mathcal{W}_{22}^2\mathcal{C}_{1\TE}^2 M^2}{\lambda \subw(\eta^{-1})^2} || \delta_\beta \bm{Y}'||_{L^\infty_\theta}^2 || \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C || \delta_\beta\bm{Y}'||_{L^2_\theta}^2|| \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}}^2
M^2 \mathcal{W}_{22}^2\mathcal{C}_{1\TE}^2 \eta^{-2}.
\end{multline}
These are our main estimates for $\mathcal{N}_{2}$.
For the term $\mathcal{N}_{8}$, also using \eqref{operator.bd.first}, with \eqref{apriori.bd.norm.Z} and Remark \ref{ignore.remark} we will use the following estimate
\begin{equation}\notag
\left| \mathcal{A}[\bm{X}]-\mathcal{A}[\bm{Y}] \right|
\leq
C\mathcal{W}_{8}
|| \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}},
\end{equation}
where
\begin{equation}\label{WK8.constant.def}
\mathcal{W}_{8}= \mathcal{W}_{8}[\rho,M] \eqdef
\rho^{-1}+
\rho^{-2}M
+
\rho^{-3}M^2.
\end{equation}
Then for $\mathcal{N}_{8}$ we further use \eqref{TTG.y.estimate} to find
\begin{multline}\label{TT13.estimate}
\mathcal{N}_{8}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C \frac{ \mathcal{W}_{8}^2 M^2 \mathcal{C}_{2\TE}^2
}{\lambda \subw(\eta^{-1})^2} || \delta_\beta \bm{Y}'||_{L^\infty_\theta}^2
|| \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}}^2
\\
+
\frac{1}{64}
|| \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C
|| \delta_\beta\bm{Y}'||_{L^2_\theta}^2 || \bm{X}'-\bm{Y}'||_{L^\infty_{\theta}}^2 M^2 \mathcal{W}_{8}^2
\eta^{-2} \mathcal{C}_{2\TE}^2.
\end{multline}
This is our main estimate for $\mathcal{N}_{8}$.
We will now estimate $\mathcal{N}_{3}$. To this end we use the decomposition in \eqref{Abeta.split} as
\begin{equation}\notag
\delta_\beta \mathcal{A}[\bm{X}] = \mathcal{A}_{1\beta}[\bm{X}] + \mathcal{A}_{2\beta}[\bm{X}],
\quad
\delta_\beta \mathcal{A}[\bm{Y}] = \mathcal{A}_{1\beta}[\bm{Y}] + \mathcal{A}_{2\beta}[\bm{Y}].
\end{equation}
Then from \eqref{A1beta.upper} with Remark \ref{ignore.remark} and \eqref{apriori.bd.Z} we have the following bound
\begin{multline}\notag
\left| \mathcal{A}_{1\beta}[\bm{X}] - \mathcal{A}_{1\beta}[\bm{Y}] \right|
\lesssim
|\delta_\beta \delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|\rho^{-1}
\\
+
|\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
\left(
|\delta_\beta \delta_\alpha \bm{Y}'(\theta)|
+
|\delta_\beta \delta_\alpha \bm{X}'(\theta)|
\right)
\rho^{-2}
\\
+
|\delta_\beta \delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
\left(
| \delta_\alpha \bm{Y}'(\theta)|
+
|\delta_\alpha \bm{X}'(\theta)|
\right)
\rho^{-2}
\\
+
|\delta_\beta \delta_\alpha \bm{Y}'(\theta)|
|D_\alpha(\bm{X} - \bm{Y})(\theta)|
\left(1+ \rho^{-1}|\delta_\alpha \bm{Y}'(\theta)|\right)\rho^{-2}.
\end{multline}
And from \eqref{A2beta.upper} we have
\begin{multline}\notag
\left| \mathcal{A}_{2\beta}[\bm{X}] - \mathcal{A}_{2\beta}[\bm{Y}] \right|
\lesssim
\rho^{-3}
|\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
|\delta_\beta \DAL \BX (\theta)|
\\
+
\rho^{-3}\left(
|\delta_\alpha \bm{Y}'(\theta)|
+
|\delta_\alpha \bm{X}'(\theta)|
\right)
|\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)|
|\delta_\beta \DAL \BX (\theta)|
\\
+
\rho^{-3}\left(|\delta_\alpha \bm{Y}'(\theta)|+|\delta_\alpha \bm{Y}'(\theta)|^2 \right)
|\delta_\beta D_\alpha(\bm{X} - \bm{Y})(\theta)|
\\
+
\rho^{-4}\left(|\delta_\alpha \bm{Y}'(\theta)|+|\delta_\alpha \bm{Y}'(\theta)|^2 \right)
|D_\alpha(\bm{X} - \bm{Y})(\theta)| (|\delta_\beta\DAL \BX (\theta)|+|\delta_\beta\DAL \BY (\theta)|).
\end{multline}
We thus define $\mathcal{W}_{3}= \mathcal{W}_{3}[\rho,M]$ by
\begin{equation}\label{WK3.constant.def}
\mathcal{W}_{3} \eqdef
\rho^{-1}+
\rho^{-2}\left(1+M\right)\left(1+\rho^{-1}(1+M)\right)
+
\rho^{-4}M\left(1+M \right).
\end{equation}
And then we plug these estimates in, using also \eqref{operator.bd.first}, to observe
\begin{multline}\notag
\mathcal{N}_{3}
\leq
C \mathcal{C}_{1\TE} \mathcal{W}_{3} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| |\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)| |\delta_\beta \bm{Z}'(\theta)|
\\
+
C \mathcal{C}_{1\TE} \mathcal{W}_{3} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| |\delta_\alpha (\bm{X}'-\bm{Y}')(\theta)| |\delta_\beta D_\alpha\bm{Z}(\theta)|
\\
+
C \mathcal{C}_{1\TE} \mathcal{W}_{3} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)|| \delta_\beta (\bm{X}'- \bm{Y}')(\theta)| | \delta_\alpha \bm{Z}'(\theta)|
\\
+
C \mathcal{C}_{1\TE} \mathcal{W}_{3} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| |\delta_\beta D_\alpha(\bm{X} - \bm{Y})(\theta)| | \delta_\alpha \bm{Z}'(\theta)|
\\
+
C \mathcal{C}_{1\TE} \mathcal{W}_{3} || \bm{X}'- \bm{Y}'||_{L^\infty_\theta }\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| | \delta_\alpha \bm{Y}'(\theta)| |\delta_\beta \bm{Z}'(\theta)|
\\
+
C \mathcal{C}_{1\TE} \mathcal{W}_{3}|| \bm{X}'- \bm{Y}'||_{L^\infty_\theta }\int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)| | \delta_\alpha \bm{Y}'(\theta)| |\delta_\beta D_\alpha\bm{Z}(\theta)|
\\
=\sum_{j=1}^6 \mathcal{N}_{3j}.
\end{multline}
Here we recall the notation $\bm{Z}'$ defined above \eqref{initial.assumption.Z}.
Now for the term $\mathcal{N}_{31}$ we use \eqref{TTG.y.estimate} to get
\begin{multline}\label{TT41.estimate}
\mathcal{N}_{31}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
\frac{C \mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2}{\lambda \subw(\eta^{-1})^2}
|| \bm{X}'-\bm{Y}'||_{\mathcal{B}^\MA}^2
||\delta_\beta \bm{Z}'||_{L^\infty_\theta }^2
\\
+
\frac{1}{64} || \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C||\delta_\beta \bm{Z}'||_{L^2_\theta }^2
||\bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2
\mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2 \eta^{-2}.
\end{multline}
Then because of \eqref{averaging.infinity.est}, in $\mathcal{N}_{32}$ we can treat $|\delta_\beta D_\alpha\bm{Z}(\theta)|$ the same as $|\delta_\beta \bm{Z}'(\theta)|$ in $\mathcal{N}_{31}$. Thus $\mathcal{N}_{32}$ also satisfies \eqref{TT41.estimate}.
Next for the term $\mathcal{N}_{33}$ we again use \eqref{TTG.y.estimate} and \eqref{apriori.bd.norm.Z} to obtain
\begin{multline}\label{TT45.estimate}
\mathcal{N}_{33}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
\frac{C \mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2}{\lambda \subw(\eta^{-1})^2}
M^2
||\delta_\beta (\bm{X}'-\bm{Y}')||_{L^\infty_\theta }^2
\\
+
C || \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
\mathcal{C}_{1\TE} \mathcal{W}_{3} \eta^{-1}
M.
\end{multline}
Then, again because of \eqref{averaging.infinity.est}, $\mathcal{N}_{34}$ also satisfies \eqref{TT45.estimate}.
For the term $\mathcal{N}_{35}$ we use \eqref{TTG.y.estimate} to obtain
\begin{multline}\label{TT47.estimate}
\mathcal{N}_{35}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
\frac{C \mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2}{\lambda \subw(\eta^{-1})^2}
M^2
||\delta_\beta \bm{Z}'||_{L^\infty_\theta }^2
||\bm{X}'-\bm{Y}'||_{L^\infty_\theta }^2
\\
+
\frac{1}{64} || \delta_\beta(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C || \delta_\beta \bm{Z}'||_{L^2_\theta}^2
||\bm{X}'-\bm{Y}'||_{L^\infty_\theta }^2
\mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2 \eta^{-2}
M^2.
\end{multline}
Again by \eqref{averaging.infinity.est}, $\mathcal{N}_{36}$ also satisfies \eqref{TT47.estimate}. These are our main estimates for $\mathcal{N}_{3}$.
The last term to estimate is $\mathcal{N}_{4}$. From \eqref{K.bound.infty} we can bound
\begin{equation*}
\mathcal{N}_{4}
\leq
\mathcal{W}_1 \mathcal{C}_{2\TE} || \bm{X}'- \bm{Y}'||_{L^\infty_\theta} \int_{\mathbb T} d\theta \int_{\mathbb T} \frac{d\alpha}{\alpha^2} ~
| \delta_\beta\delta_\alpha (\bm{X}'- \bm{Y}')(\theta)|
| \delta_\beta \delta_\alpha \bm{Y}'(\theta)|.
\end{equation*}
For the term $\mathcal{N}_{4}$ we apply the Cauchy-Schwarz inequality to obtain
\begin{equation*}
\mathcal{N}_{4}
\leq
\mathcal{W}_1 \mathcal{C}_{2\TE} || \bm{X}'- \bm{Y}'||_{L^\infty_\theta}
|| \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{Y}'||_{L^2_\theta}.
\end{equation*}
Notice that, unlike the other terms, this term offers no opportunity to gain extra smallness from the regularity in Definition \ref{subw.definition}, as was done in \eqref{TTG.y.estimate}. The presence of the term $\mathcal{N}_{4}$ is thus the reason why we use the equivalent norm with \eqref{nu.definition}. For now we apply Young's inequality
\begin{equation}\label{TT61.estimate}
\mathcal{N}_{4}
\leq
\frac{\lambda}{64} || \delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
+
C \lambda^{-1} \mathcal{W}_1^2 \mathcal{C}_{2\TE}^2 || \bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{Y}'||_{L^2_\theta}^2.
\end{equation}
This completes our individual estimates for all of the terms in \eqref{energy.difference.first1}.
Next we collect all the estimates above in \eqref{TT1.estimate}, \eqref{TT2.bound}, \eqref{TT11.bound}, \eqref{TT7.bound}, \eqref{TT9.bound}, \eqref{TT18.bound}, \eqref{TT31.estimate}, \eqref{TT32.estimate}, \eqref{TT13.estimate}, \eqref{TT41.estimate}, \eqref{TT45.estimate}, \eqref{TT47.estimate} and \eqref{TT61.estimate} and put them into \eqref{energy.difference.first1} to obtain
\begin{multline}\notag
\frac{d}{dt} || \delta_\beta (\bm{X}' - \bm{Y}')||_{L^2_\theta}^2
+
\frac{3\lambda}{4} || \delta_\beta \widetilde{\Lambda}^{\frac12} (\bm{X}'- \bm{Y}')||_{L^2_\theta}^2
\\
\leq
C\kappa_{7} \left(|| \delta_\beta \bm{X}'||_{L^\infty_\theta}^2+|| \delta_\beta \bm{Y}'||_{L^\infty_\theta}^2 \right)|| \bm{X}'-\bm{Y}'||_{\mathcal{B}^\MA}^2
+
C|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_\theta}^2 \mathcal{W}_{9}
\\
+
C\kappa_{9} || \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^\infty_\theta}^2
+
C\left(|| \delta_\beta \bm{X}'||_{L^2_\theta}^2+|| \delta_\beta \bm{Y}'||_{L^2_\theta}^2 \right)
|| \bm{X}'-\bm{Y}'||_{L^\infty_\theta}^2
\mathcal{W}_{7}
\\
+ C \kappa_1 \mathcal{C}_{1\TE} ||\delta_\beta \widetilde{\Lambda}^{\frac12}(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
+
C \lambda^{-1} \mathcal{C}_{2\TE}^2 \mathcal{W}_1^2
|| \delta_\beta \widetilde{\Lambda}^{\frac12} \bm{Y}'||_{L^2_\theta}^2
|| \bm{X}'- \bm{Y}'||_{L^\infty_\theta}^2.
\end{multline}
Here we recall \eqref{kappa1.def}. Further recalling \eqref{kappa3.def}, \eqref{kappa5.def}, \eqref{kappa6.def}, \eqref{WK21.constant.def}, \eqref{WK8.constant.def} and \eqref{WK3.constant.def}, we define
\begin{multline}\label{kappa7.def}
\kappa_{7}
\eqdef \kappa_3\mathcal{C}_{1\TE}^2
+
\kappa_3\mathcal{C}_{2\TE}^2 M^2
+
\kappa_5 \mathcal{C}_{2\TE}^2
+
\kappa_6 \mathcal{C}_{3\TE}^2
+
\frac{ \mathcal{W}_{21}^2\mathcal{C}_{1\TE}^2}{\lambda \subw(\eta^{-1})^2}
+
\frac{ \mathcal{W}_{22}^2\mathcal{C}_{1\TE}^2 M^2}{\lambda \subw(\eta^{-1})^2}
\\
+
\frac{ \mathcal{W}_{8}^2 M^2 \mathcal{C}_{2\TE}^2
}{\lambda \subw(\eta^{-1})^2}
+
\frac{ \mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2}{\lambda \subw(\eta^{-1})^2} (1+M^2),
\end{multline}
and additionally recalling \eqref{WK2.constant.def} and \eqref{WK5.constant.def} we have
\begin{multline}\label{WK7.constant.def}
\mathcal{W}_{7}
\eqdef \mathcal{C}_{1\TE}^2
\mathcal{W}_2
+
M^2\mathcal{C}_{2\TE}^2 \mathcal{W}_2
+
\mathcal{C}_{2\TE}^2
\mathcal{W}_5
+
\mathcal{C}_{3\TE}^2
M^2 (1+ \rho^{-2} M^2)^2 \eta^{-2}
+
\mathcal{W}_{21}^2\mathcal{C}_{1\TE}^2 \eta^{-2}
\\
+
M^2 \mathcal{W}_{22}^2\mathcal{C}_{1\TE}^2 \eta^{-2}
+
M^2 \mathcal{W}_{8}^2
\eta^{-2} \mathcal{C}_{2\TE}^2
+
\mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2 \eta^{-2}(1+M^2).
\end{multline}
Above we also used the definition
\begin{equation}\label{kappa8.def}
\kappa_{9}
\eqdef
\kappa_6 \mathcal{C}_{2\TE}^2
+
\frac{\mathcal{C}_{1\TE}^2 \mathcal{W}_{3}^2}{\lambda \subw(\eta^{-1})^2}
M^2,
\end{equation}
and additionally recalling \eqref{BB.const.def} we define
\begin{equation}\label{WK88.constant.def}
\mathcal{W}_{9}
\eqdef
1+
\mathcal{U}_1
+
\mathcal{C}_{2\TE}
M (1+ \rho^{-2} M^2) \eta^{-1}
+
\mathcal{C}_{1\TE} \mathcal{W}_{3} \eta^{-1}
M.
\end{equation}
We now choose $\eta>0$ small enough so that we have $C \kappa_1 \mathcal{C}_{1\TE} \leq \frac{1}{4}\lambda$.
Next we further integrate in time over $0 \le s \le t$ and afterwards we take the essential supremum in time over $0 \le t \le T$ to obtain
\begin{multline}\notag
\sup_{0 \le t \le T} || \delta_\beta (\bm{X}' - \bm{Y}')||_{L^2}^2(t)
+
\frac{\lambda}{2} || \delta_\beta \widetilde{\Lambda}^{\frac12} (\bm{X}'- \bm{Y}')||_{L^2_T(L^2_\theta)}^2
\\
\leq
|| \delta_\beta (\bm{X}'_0 - \bm{Y}'_0)||_{L^2}^2
+
C\kappa_{7} \left(|| \delta_\beta \bm{X}'||_{L^2_T(L^\infty_\theta)}^2+|| \delta_\beta \bm{Y}'||_{L^2_T(L^\infty_\theta)}^2 \right)|| \bm{X}'-\bm{Y}'||_{L^{\infty}_T(\mathcal{B}^\MA)}^2
\\
+
C \kappa_{9}|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_T(L^\infty_\theta)}^2
+
C|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_T(L^2_\theta)}^2 \mathcal{W}_{9}
\\
+
C\left(|| \delta_\beta \bm{X}'||_{L^2_T(L^2_\theta)}^2+|| \delta_\beta \bm{Y}'||_{L^2_T(L^2_\theta)}^2 \right)
|| \bm{X}'-\bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}^2
\mathcal{W}_{7}
\\
+
\frac{1}{2}T\sup_{0 \le t \le T}|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_\theta}^2(t)
+
C \lambda^{-1} \mathcal{C}_{2\TE}^2
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{Y}'||_{L^2_T(L^2_\theta)}^2
|| \bm{X}'- \bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}^2.
\end{multline}
Next we suppose that $0<T \le 1$, and we use the inequality \eqref{ineq.gen} to obtain
\begin{multline}\notag
\frac{1}{2} \sup_{0 \le t \le T} || \delta_\beta (\bm{X}' - \bm{Y}')||_{L^2}(t)
+
\frac{\lambda^{1/2}}{2} || \delta_\beta \widetilde{\Lambda}^{\frac12} (\bm{X}'- \bm{Y}')||_{L^2_T(L^2_\theta)}
\\
\leq
|| \delta_\beta (\bm{X}'_0 - \bm{Y}'_0)||_{L^2}
+
C\kappa_{7}^{1/2} \left(|| \delta_\beta \bm{X}'||_{L^2_T(L^\infty_\theta)}+|| \delta_\beta \bm{Y}'||_{L^2_T(L^\infty_\theta)} \right)|| \bm{X}'-\bm{Y}'||_{L^{\infty}_T(\mathcal{B}^\MA)}
\\
+
C \kappa_{9}^{1/2}|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_T(L^\infty_\theta)}
+
C|| \partial_{\beta}( \bm{X}'-\bm{Y}')||_{L^2_T(L^2_\theta)} \mathcal{W}_{9}^{1/2}
\\
+
C\left(|| \delta_\beta \bm{X}'||_{L^2_T(L^2_\theta)}+|| \delta_\beta \bm{Y}'||_{L^2_T(L^2_\theta)} \right)
|| \bm{X}'-\bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}
\mathcal{W}_{7}^{1/2}
\\
+
C \lambda^{-1/2} \mathcal{C}_{2\TE}
|| \delta_\beta \widetilde{\Lambda}^{\frac12}\bm{Y}'||_{L^2_T(L^2_\theta)}
|| \bm{X}'- \bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}.
\end{multline}
We further integrate the above in $d\beta$ against $|\beta|^{-3/2}\nu(|\beta|^{-1})$
for $\nu$ defined in \eqref{nu.definition} to obtain
\begin{multline}\notag
\frac{1}{2} ||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\frac{\lambda^{1/2}}{2} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
|| \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}
\\
+
C\kappa_{7}^{1/2} \left(|| \bm{X}'||_{\widetilde{L}^{2}_T(\dot{B}_{\infty, 1}^{\frac12, \nu})}+|| \bm{Y}'||_{\widetilde{L}^{2}_T(\dot{B}_{\infty, 1}^{\frac12, \nu})} \right)|| \bm{X}'-\bm{Y}'||_{\BN_T}
\\
+
C \kappa_{9}^{1/2}|| \bm{X}'-\bm{Y}'||_{\widetilde{L}^{2}_T(\dot{B}_{\infty, 1}^{\frac12, \nu})}
+
CT^{1/2}\mathcal{W}_{9}^{1/2} || \bm{X}'-\bm{Y}'||_{\BN_T}
\\
+
CT^{1/2}\left(||\bm{X}'||_{\BN_T}+||\bm{Y}'||_{\BN_T} \right)
|| \bm{X}'-\bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}
\mathcal{W}_{7}^{1/2}
\\
+
C \lambda^{-1/2} \mathcal{C}_{2\TE}
|| \bm{Y}'||_{\mathcal{D}_T^\nu}
|| \bm{X}'- \bm{Y}'||_{L^{\infty}_T(L^\infty_\theta)}.
\end{multline}
We use the embedding \eqref{embedding.infty.use} to see that
$|| f||_{\widetilde{L}^{2}_T(\dot{B}_{\infty, 1}^{\frac12, \nu})} \leq C_{\nu} || f||_{\mathcal{D}_T^\nu}$. We also use the embeddings in \eqref{embed.infty}, Proposition \ref{besov.ineq.prop} and then we use Definition \ref{subw.definition} to obtain
\begin{equation}\notag
|| f||_{L^{\infty}_T(L^\infty_\theta)}
\leq
C || f ||_{L^{\infty}_T(\dot{B}^{\frac{1}{2}}_{2,1})}
\leq C
|| f ||_{\BN_T}.
\end{equation}
The last inequality above follows simply because $\nu \ge 1$ in \eqref{nu.definition}.
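In more detail, under the assumption that the $\BN_T$ norm has the weighted form analogous to the $\mathcal{B}^\omega_T$ norm written out in the proof of Corollary \ref{cor.L2.cont.m} below, for each fixed $t \in [0,T]$ we have
\begin{equation}\notag
|| f(t) ||_{\dot{B}^{\frac{1}{2}}_{2,1}}
=
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} ||\delta_\beta f(t)||_{L^2_\theta}
\leq
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}} \nu(|\beta|^{-1}) \sup_{0 \le s \le T}||\delta_\beta f(s)||_{L^2_\theta}
=
|| f ||_{\BN_T},
\end{equation}
using that $\nu(|\beta|^{-1}) \ge 1$ pointwise; taking the supremum over $0 \le t \le T$ then yields the claimed bound.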
Thus we have
\begin{multline}\notag
\frac{1}{2} ||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\frac{\lambda^{1/2}}{2} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
|| \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}
\\
+
C \max\{1,M\}C_{\nu} \kappa_{7}^{1/2} \left(|| \bm{X}'||_{\mathcal{D}_T^\nu}+|| \bm{Y}'||_{\mathcal{D}_T^\nu} \right)|| \bm{X}'-\bm{Y}'||_{\BN_T}
\\
+
C C_{\nu} \kappa_{9}^{1/2}|| \bm{X}'-\bm{Y}'||_{\mathcal{D}_T^\nu}
+
CT^{1/2}\mathcal{W}_{9}^{1/2} || \bm{X}'-\bm{Y}'||_{\BN_T}
\\
+
CT^{1/2}\left(||\bm{X}'||_{\BN_T}+||\bm{Y}'||_{\BN_T} \right)
|| \bm{X}'-\bm{Y}'||_{\BN_T}
\mathcal{W}_{7}^{1/2}
\\
+
C \lambda^{-1/2} \mathcal{C}_{2\TE}
|| \bm{Y}'||_{\mathcal{D}_T^\nu}
|| \bm{X}'- \bm{Y}'||_{\BN_T}.
\end{multline}
Now using \eqref{apriori.bd.norm.Z} and \eqref{equivalent.nu.norm} we can choose $\eta>0$ additionally small enough so that $\kappa_7>0$ from \eqref{kappa7.def} enforces
\begin{equation*}
C \max\{1,M\}C_{\nu} \kappa_{7}^{1/2} \left(|| \bm{X}'||_{\mathcal{D}_T^\nu}+|| \bm{Y}'||_{\mathcal{D}_T^\nu} \right)
\leq 4 C \max\{1,M\}C_{\nu} \kappa_{7}^{1/2} M
<
\frac{1}{8}.
\end{equation*}
Then we can further choose $\eta>0$ additionally possibly smaller so that $\kappa_9$ from \eqref{kappa8.def} enforces
\begin{equation*}
C C_{\nu} \kappa_{9}^{1/2}
<
\frac{\lambda^{1/2}}{4} .
\end{equation*}
Thus we obtain
\begin{multline}\notag
\frac{3}{8} ||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\frac{\lambda^{1/2}}{4} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
|| \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}
\\
+CT^{1/2} \mathcal{W}_0 || \bm{X}'-\bm{Y}'||_{\BN_T}
+
C \lambda^{-1/2} \mathcal{C}_{2\TE}
|| \bm{Y}'||_{\mathcal{D}_T^\nu}
|| \bm{X}'- \bm{Y}'||_{\BN_T}.
\end{multline}
Here, recalling \eqref{WK88.constant.def}, \eqref{WK7.constant.def}, \eqref{apriori.bd.norm.Z} and \eqref{equivalent.nu.norm}, we define
\begin{equation}\label{WK0.constant.def}
\mathcal{W}_0 \eqdef
\mathcal{W}_{9}^{1/2}
+
M
\mathcal{W}_{7}^{1/2}.
\end{equation}
Next from \eqref{nu.definition} and \eqref{D.space.temporal} we have that
\begin{equation}\notag
|| \bm{Y}'||_{\mathcal{D}_T^\nu}
=
\int_{\mathbb T} \frac{d\beta}{|\beta|^{3/2}}
||\delta_\beta \widetilde{\Lambda}^{\frac{1}{2}}\bm{Y}'||_{L^2_T(L^2_\theta)}
+
\frac{1}{\KCC \max\{1,M\}}
|| \bm{Y}'||_{\mathcal{D}_T^\subw}.
\end{equation}
Since we can bound $|| \bm{Y}'||_{\mathcal{D}_T^\subw} \leq
C \lambda^{-1/2} || \bm{Y}'_0||_{\mathcal{B}^\subw}$ as in Proposition \ref{prop:general.apriori.final.local} and \eqref{apriori.bd.norm.Z}, we can make the second term above arbitrarily small. In particular we can choose $C_3\ge 1$ large enough so that
\begin{equation}\notag
C \lambda^{-1/2} \frac{\mathcal{C}_{2\TE} || \bm{Y}'||_{\mathcal{D}_T^\subw}}{\KCC \max\{1,M\}}
\leq C \lambda^{-1} \frac{\mathcal{C}_{2\TE} || \bm{Y}'_0||_{\mathcal{B}^\subw}}{\KCC \max\{1,M\}}
\leq C \lambda^{-1} \frac{\mathcal{C}_{2\TE}}{C_3} < \frac{1}{16}.
\end{equation}
It is important that $C_3 = C_3(\lambda, \mathcal{C}_{2\TE})$, but $C_3$ does not depend upon $\eta$. Then
for the first term above, we split the integral into the regions $|\beta| < \eta_1$ and $|\beta| \geq \eta_1$ for some small $\eta_1>0$. Then similar to \eqref{extra.smallness.besov}, using also \eqref{apriori.bd.norm.Z}, we have
\begin{equation}\notag
\int_{|\beta| < \eta_1} \frac{d\beta}{|\beta|^{3/2}}
||\delta_\beta \widetilde{\Lambda}^{\frac{1}{2}}\bm{Y}'||_{L^2_T(L^2_\theta)}
\leq
\frac{1}{\mu(\eta_1^{-1})}
|| \bm{Y}'||_{\mathcal{D}_T^\subw}
\leq
C\frac{M}{\lambda^{\frac12}\mu(\eta_1^{-1})}.
\end{equation}
For the other part, again with \eqref{apriori.bd.norm.Z}, we have
\begin{multline*}
\int_{|\beta| \geq \eta_1} \frac{d\beta}{|\beta|^{3/2}}
||\delta_\beta \widetilde{\Lambda}^{\frac{1}{2}}\bm{Y}'||_{L^2_T(L^2_\theta)}
\leq
C\eta_1^{-1/2}
||\widetilde{\Lambda}^{\frac{1}{2}}\bm{Y}'||_{L^2_T(L^2_\theta)}
\\
\leq
C T^{1/2}\eta_1^{-1/2}
||\bm{Y}'||_{L^\infty_T(\dot{B}^{\frac{1}{2}}_{2,2})}
\leq
C T^{1/2}M\eta_1^{-1/2}.
\end{multline*}
Notice that $\frac{1}{\mu(\eta_1^{-1})}$ can be made arbitrarily small for $\eta_1>0$ chosen small enough. Thus if we choose $\eta_1>0$ small enough we have
\begin{equation}\notag
C \lambda^{-1/2} \mathcal{C}_{2\TE}
\frac{M}{\lambda^{\frac12}\mu(\eta_1^{-1})}
<
\frac{1}{16}.
\end{equation}
Thus we obtain
\begin{equation}\notag
||\bm{X}' - \bm{Y}'||_{\BN_T}
+
\lambda^{1/2} || \bm{X}'- \bm{Y}'||_{\mathcal{D}_T^\nu}
\leq
4 || \bm{X}'_0 - \bm{Y}'_0||_{\mathcal{B}^\nu}
+C || \bm{X}'-\bm{Y}'||_{\BN_T} T^{1/2} \mathcal{W},
\end{equation}
where using \eqref{WK0.constant.def} we define
\begin{equation}\label{WK.constant.def}
\mathcal{W} \eqdef
\mathcal{W}_0+
\eta_1^{-1/2}
M \lambda^{-1/2} \mathcal{C}_{2\TE}.
\end{equation}
The proof is complete.
\end{proof}
\subsection{$L^2$ continuity estimate}\label{sec:l2continuity} We now suppose that, for some $C_* >0$, $T>0$, $M>0$, and $c>0$, with $\lambda>0$ as in \eqref{e:DTdefn}, we have
\begin{equation}\label{apriori.bd.norm.H1}
||\bm{Z}'||_{L^\infty_T \dot{H}^{\frac12}}+ c\lambda^{\frac12} ||\bm{Z}'||_{L^2_T \dot{H}^1} \leq C_* M.
\end{equation}
Notice that this condition is implied by \eqref{apriori.bd.norm.Z}.
Then in this subsection we will prove in the following proposition that as long as \eqref{apriori.bd.norm.H1} holds then the $L^2_\theta$ norm of the difference of two solutions to \eqref{peskin.general.tension} is stable.
\begin{proposition}\label{prop.L2.cont}
Let $\bm{X}, \bm{Y}: [0,T]\times \mathbb T \to \mathbb R^2$ be two weak solutions to the Peskin problem \eqref{peskin.general.tension} with tension $\mathcal{T}$ \eqref{tension.map.def} in the sense of Definition \ref{def:solution} with initial data $\bm{X}_0,$ $\bm{Y}_0$ respectively. Assume that $\bm{X}_0,$ $\bm{Y}_0,$ and $\mathbf{T}$ satisfy the assumptions of Theorem \ref{thm:mainquant}, in particular we assume \eqref{e:QuantitativeTensionMap}. Additionally assume \eqref{apriori.bd.Z} holds with $T>0$.
Then for two solutions $\bm{X}'$ and $\bm{Y}'$ over $0\le t \le T$ we have
\begin{equation}\notag
||(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta} \leq C ||\bm{X}_0'-\bm{Y}_0'||_{L^2_\theta},
\end{equation}
where $C = C(M, \rho, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE})>0.$
\end{proposition}
\begin{proof}
Direct calculation gives us that
\begin{multline}\notag
\frac{d}{dt}||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2 = -\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}')\cdot \left( \mathcal{K}[\bm{X}]\delta_\alpha \mathbf{T}(\bm{X}') - \mathcal{K}[\bm{Y}]\delta_\alpha \mathbf{T}(\bm{Y}')\right)
\\ = -\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}')\cdot \mathcal{K}[\bm{X}]\delta_\alpha (\mathbf{T}(\bm{X}') - \mathbf{T}(\bm{Y}'))
\\ -\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}')\cdot \left( \mathcal{A}[\bm{X}] - \mathcal{A}[\bm{Y}]\right)\delta_\alpha \mathbf{T}(\bm{Y}')
= I_{\mathcal{K}} + I_{\mathcal{A}}.
\end{multline}
Recalling \eqref{DBTX.def}, we use \eqref{kerbel.eqn.deriv} and \eqref{difference.alpha.T} to expand out
$I_{\mathcal{K}}$ as
\begin{multline}\notag
I_{\mathcal{K}} = -\frac{1}{4\pi}\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}') \cdot \overline{D\bT} [\bm{X}'] \delta_\alpha (\bm{X}'-\bm{Y}')
\\
-\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}') \cdot \mathcal{A}[\bm{X}]\overline{D\bT} [\bm{X}'] \delta_\alpha (\bm{X}'-\bm{Y}')
\\
-\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} \delta_\alpha (\bm{X}'-\bm{Y}') \cdot \mathcal{K}[\bm{X}](\overline{D\bT} [\bm{X}']-\overline{D\bT} [\bm{Y}'])\delta_\alpha \bm{Y}'
=I_{\mathcal{K}}^1+I_{\mathcal{K}}^2+I_{\mathcal{K}}^3.
\end{multline}
Then from \eqref{e:DTdefn} we have $I_{\mathcal{K}}^1 \leq -\lambda ||\widetilde{\Lambda}^{\frac12}(\bm{X}'-\bm{Y}')||_{L^2_\theta}^2$.
Next we estimate the following sample term for an integer $j\geq 1$, using Proposition \ref{besov.ineq.prop}, Lemma \ref{Besov.interpolation}, and Young's inequality with any small constant $c>0$, as
\begin{multline}\label{sample.term.est}
\int_\mathbb T d\theta \int_\mathbb T \frac{d\alpha }{\alpha^2}
|\delta_\alpha (\bm{X}'-\bm{Y}')|^2 |\delta_\alpha \bm{Z}'|^j
\leq C \int_\mathbb T \frac{d\alpha}{\alpha^2} ||\delta_\alpha (\bm{X}'-\bm{Y}')||_{L^2_\theta}^2
||\delta_\alpha \bm{Z}'||_{L^\infty_\theta}^j
\\
\leq C ||\bm{X}'-\bm{Y}'||_{\dot{B}^{\frac14}_{2,4}}^2 ||\bm{Z}'||_{\dot{B}^{\frac{1}{2j}}_{\infty, 2j}}^j
\leq C ||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac14}}^2 ||\bm{Z}'||_{\dot{B}^{\frac{j+1}{2j}}_{2, 2j}}^j
\\ \leq C ||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}
||\bm{X}'-\bm{Y}'||_{L^2_\theta}
||\bm{Z}'||_{\dot{H}^{\frac12}}^{j-1}
||\bm{Z}'||_{\dot{H}^{1}}
\\
\leq c\lambda ||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2
+
C \lambda^{-1} ||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2
||\bm{Z}'||_{\dot{H}^{\frac12}}^{2(j-1)}
||\bm{Z}'||_{\dot{H}^{1}}^2.
\end{multline}
Then for $I_{\mathcal{K}}^2$ we use \eqref{DBTX.def}, \eqref{e:QuantitativeTensionMap}, \eqref{e:Abounds}, \eqref{apriori.bd.Z} and \eqref{sample.term.est} to obtain
\begin{multline}\label{est.IK2}
I_{\mathcal{K}}^2
\leq
C\int_\mathbb T d\theta \int_\mathbb T \frac{d\alpha }{\alpha^2}|\delta_\alpha (\bm{X}'-\bm{Y}')|^2 (\rho^{-1}|\delta_\alpha \bm{X}'| + \rho^{-2}|\delta_\alpha \bm{X}'|^2)|\overline{D\bT} [\bm{X}']|
\\ \leq \frac{\lambda}{8}||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2 +C\frac{\mathcal{C}_{1\TE}^2}{\lambda}\left(1+\frac{||\bm{X}'||_{\dot{H}^{\frac12}}^2}{\rho^2}\right)\frac{||\bm{X}'||_{\dot{H}^1}^2}{\rho^2}||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2.
\end{multline}
Next we will estimate $I_{\mathcal{K}}^3$. First similar to \eqref{sample.term.est} for an integer $j\geq 1$ we estimate
\begin{multline}\label{sample.term2.est}
\int_\mathbb T d\theta \int_\mathbb T \frac{d\alpha }{\alpha^2}
|\delta_\alpha (\bm{X}'-\bm{Y}')||\bm{X}'-\bm{Y}'| |\delta_\alpha \bm{Z}'|^j
\\
\leq C
|| \bm{X}'-\bm{Y}'||_{L^2_\theta}\int_\mathbb T \frac{d\alpha}{\alpha^2} ||\delta_\alpha (\bm{X}'-\bm{Y}')||_{L^2_\theta}
||\delta_\alpha \bm{Z}'||_{L^\infty_\theta}^j
\\
\leq C || \bm{X}'-\bm{Y}'||_{L^2_\theta} || \bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}} ||\bm{Z}'||_{\dot{B}^{\frac{1}{2j}}_{\infty, 2j}}^j
\\ \leq C ||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}
||\bm{X}'-\bm{Y}'||_{L^2_\theta}
||\bm{Z}'||_{\dot{H}^{\frac12}}^{j-1}
||\bm{Z}'||_{\dot{H}^{1}}
\\
\leq c\lambda ||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2
+
C \lambda^{-1} ||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2
||\bm{Z}'||_{\dot{H}^{\frac12}}^{2(j-1)}
||\bm{Z}'||_{\dot{H}^{1}}^2.
\end{multline}
Now we use \eqref{difference.DBTXY} with \eqref{DBTX.def} and \eqref{e:QuantitativeTensionMap} to see that
\begin{equation}\notag
\left| \overline{D\bT}[\BX'] - \overline{D\bT}[\BY'] \right|
\lesssim \mathcal{C}_{2\TE} | \bm{X}'-\bm{Y}' |.
\end{equation}
Thus for $I_{\mathcal{K}}^3$ with \eqref{kerbel.eqn.deriv}, \eqref{e:Abounds} and \eqref{sample.term2.est} we have the following bound
\begin{multline}\label{e:L2contDbound}
I_{\mathcal{K}}^3 \leq C \mathcal{C}_{2\TE} \int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2}|\delta_\alpha (\bm{X}'-\bm{Y}')| |\bm{X}'-\bm{Y}'| \left(|\delta_\alpha \bm{Z}'|+\frac{|\delta_\alpha \bm{Z}'|^2}{\rho} + \frac{|\delta_\alpha \bm{Z}'|^3}{\rho^2}\right)
\\
\leq
C\frac{\mathcal{C}_{2\TE}^2}{\lambda}\left(1+\rho^{-2}||\bm{Z}'||_{\dot{H}^{\frac12}}^2+\rho^{-4}||\bm{Z}'||_{\dot{H}^{\frac12}}^4\right)||\bm{Z}'||_{\dot{H}^1}^2||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2
\\
+ \frac{\lambda}{8}||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2.
\end{multline}
These are all of our estimates for $ I_{\mathcal{K}}$.
To estimate $I_{\mathcal{A}}$ we use the bounds in \eqref{delta.alpha.BX.bound} and \eqref{A.diff.est.noPM} to see that
\begin{multline}\notag
I_{\mathcal{A}} \lesssim \mathcal{C}_{1\TE} \int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} |\delta_\alpha (\bm{X}'-\bm{Y}')|^2
\left(\rho^{-1}|\delta_\alpha \bm{Z}'| +
\rho^{-2}|\delta_\alpha \bm{Z}'|^2
\right)
\\
+\mathcal{C}_{1\TE} \rho^{-2}\int_\mathbb T \int_\mathbb T \frac{d\alpha d\theta}{\alpha^2} |\delta_\alpha (\bm{X}'-\bm{Y}')|
\left| D_\alpha (\bm{X} - \bm{Y})\right|
\left(|\delta_\alpha \bm{Z}'|^2 +
\rho^{-1}|\delta_\alpha \bm{Z}'|^3
\right)
=I_{\mathcal{A}}^1+I_{\mathcal{A}}^2.
\end{multline}
Then similar to \eqref{sample.term.est} and \eqref{est.IK2} we have
\begin{equation*}
I_{\mathcal{A}}^1
\leq \frac{\lambda}{8}||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2 +C\frac{\mathcal{C}_{1\TE}^2}{\lambda}\left(1+\frac{||\bm{Z}'||_{\dot{H}^{\frac12}}^2}{\rho^2}\right)\frac{||\bm{Z}'||_{\dot{H}^1}^2}{\rho^2}||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2.
\end{equation*}
Also using \eqref{operator.bd.first} then similar to \eqref{sample.term2.est} and \eqref{e:L2contDbound} we have
\begin{multline}\notag
I_{\mathcal{A}}^2
\leq
C\frac{\mathcal{C}_{2\TE}^2\rho^{-4}}{\lambda}\left(||\bm{Z}'||_{\dot{H}^{\frac12}}^2+\rho^{-2}||\bm{Z}'||_{\dot{H}^{\frac12}}^4\right)||\bm{Z}'||_{\dot{H}^1}^2||\bm{X}'-\bm{Y}'||_{L^2_\theta}^2
\\
+ \frac{\lambda}{8}||\bm{X}'-\bm{Y}'||_{\dot{H}^{\frac12}}^2.
\end{multline}
These are our main estimates for $I_{\mathcal{A}}$.
Now from all of the bounds above we define
\begin{equation*}
\mathcal{J} (t)
\eqdef
\frac{\mathcal{C}_{1\TE}^2}{\rho^2\lambda}\left(1+\frac{||\bm{Z}'(t)||_{\dot{H}^{\frac12}}^2}{\rho^2}\right)
+
\frac{\mathcal{C}_{2\TE}^2}{\lambda}\left(1+\rho^{-2}\right)\left(1+\rho^{-4}||\bm{Z}'(t)||_{\dot{H}^{\frac12}}^4\right).
\end{equation*}
Then putting all of these bounds together, we get that
\begin{equation*}
\frac{d}{dt}\log(||(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta}^2)
\leq C \mathcal{J}(t) ||\bm{Z}'(t)||_{\dot{H}^1}^2.
\end{equation*}
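Indeed, integrating the above differential inequality in time over $[0,t]$ gives
\begin{equation}\notag
\log\left(||(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta}^2\right)
-
\log\left(||\bm{X}'_0-\bm{Y}'_0||_{L^2_\theta}^2\right)
\leq
C\int_0^t ds~\mathcal{J}(s) ||\bm{Z}'(s)||_{\dot{H}^1}^2,
\end{equation}
which is a standard Gr\"onwall-type argument.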
We conclude that
\begin{equation}\notag
||(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta}^2 \leq \mbox{exp}\left(C\int_0^t ds~\mathcal{J}(s) ||\bm{Z}'(s)||_{\dot{H}^1}^2 \right) ||\bm{X}'_0-\bm{Y}'_0||_{L^2_\theta}^2.
\end{equation}
Then applying \eqref{apriori.bd.norm.H1} completes the proof.
\end{proof}
\begin{corollary}\label{cor.L2.cont.m}
Let $\bm{X}, \bm{Y}: [0,T]\times \mathbb T \to \mathbb R^2$ be two weak solutions to the Peskin problem \eqref{peskin.general.tension} with tension $\mathcal{T}$ in the sense of Definition \ref{def:solution} with initial data $\bm{X}_0,$ $\bm{Y}_0$ respectively, satisfying all the conditions in Proposition \ref{prop.L2.cont}. Let $\mu$ and $\omega$ satisfy the conditions in Definition \ref{subw.definition}, and additionally suppose that there exists $r_* \ge 1$ such that $\frac{\omega(r)}{\mu(r)}$ is decreasing for $r \ge r_*$ and in particular
\begin{equation*}
\displaystyle\lim\limits_{r\to \infty}\frac{\omega(r)}{\mu(r)} = 0.
\end{equation*}
For any $\varepsilon>0,$ there exists $\delta_*>0$ such that for any $0<\delta\le \delta_*$ then \eqref{apriori.bd.norm.Z} and $||\bm{X}'_0-\bm{Y}'_0||_{L^2_\theta}<\delta$ imply
\begin{equation}\notag
||\bm{X}'-\bm{Y}'||_{\mathcal{B}^{\omega}_T}<\varepsilon.
\end{equation}
\end{corollary}
\begin{proof}
For any small $\eta>0,$ we can bound
\begin{multline}\notag
||\bm{X}'-\bm{Y}'||_{\mathcal{B}^\omega_T} = \int_\mathbb T \frac{d\beta}{|\beta|^{3/2}}\omega(|\beta|^{-1}) \sup_{0\leq t\leq T} ||\delta_\beta(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta} \leq
\int_{|\beta|<\eta} + \int_{|\beta|>\eta}
\\ \leq \frac{\omega(\eta^{-1})}{\mu(\eta^{-1})}(||\bm{X}'||_{\BS_T}+||\bm{Y}'||_{\BS_T}) + \frac{\omega(\eta^{-1})}{\eta^{1/2}} \sup\limits_{0\leq t\leq T} ||(\bm{X}'-\bm{Y}')(t)||_{L^2_\theta}.
\end{multline}
Thus by our assumptions on $\mu$, $\omega$, $\bm{X}'$, and $\bm{Y}'$, we can take $\eta>0$ sufficiently small to guarantee that
$$\displaystyle\frac{\omega(\eta^{-1})}{\mu(\eta^{-1})} (||\bm{X}'||_{\BS_T}+||\bm{Y}'||_{\BS_T})<\frac{\varepsilon}{2}.$$
Then applying Proposition \ref{prop.L2.cont}, we can take $\delta>0$ sufficiently small to obtain the result.
\end{proof}
\section{Higher regularity}\label{sec:smoothing}
In this section we establish the gain of higher regularity for the solutions $\bm{X}'(t,\theta)$ to the Peskin problem \eqref{peskin.general.tension} satisfying \eqref{initial.assumption}, \eqref{apriori.bd} and \eqref{apriori.bd.norm}. In \secref{sec:oneTwoEst} we prove the $C^{\frac12}_{t,x}$ estimate. Then in \secref{sec:HigherEst} we prove the $C^{1,\alpha}_{t,x}$ estimate and the higher regularity.
\subsection{$C^{\frac12}_{t,x}$ estimate for $\bm{X}'(t,\theta)$}\label{sec:oneTwoEst}
We now prove the $C^{\frac12}_{t,x}$ estimate for solutions $\bm{X}'(t,\theta)$ to the Peskin problem \eqref{peskin.general.tension}. We first prove in Lemma \ref{estimate.q} a general estimate of some quantities that will come up repeatedly in subsequent estimates.
\begin{lemma}\label{estimate.q} For any $q\in \mathbb N$ we have the following uniform estimates:
\begin{multline}\notag
\mathcal{Y}_q \eqdef \int_{\mathbb T} \frac{d\beta}{\beta^2} \int_\mathbb T d\theta
\bigg|\int_{\mathbb T} \frac{d\alpha}{\alpha^2} | \delta_\alpha \bm{X}'(\theta)|^{q} |\delta_\beta\delta_\alpha \bm{X}'(\theta)|\bigg|^2 \lesssim ||\bm{X}'||_{\dot{H}^{\frac12}}^{2(q-1)}||\bm{X}'||_{\dot{H}^1}^4,
\\
\mathcal{Z}_q \eqdef \int_{\mathbb T} \frac{d\beta}{\beta^2} \int_\mathbb T d\theta
\bigg|\int_{\mathbb T} \frac{d\alpha}{\alpha^2} | \delta_\alpha \bm{X}'(\theta)|^{q+1} |\delta_\beta \bm{X}'(\theta)|\bigg|^2
\lesssim
||\bm{X}'||_{\dot{H}^{\frac12}}^{2q} ||\bm{X}'||_{\dot{H}^{1}}^4.
\end{multline}
\end{lemma}
\begin{proof}
Fix $q\in \mathbb N$. We apply Minkowski's inequality in $\theta$ and $\alpha$, and then we use the Cauchy-Schwarz inequality to obtain
\begin{multline}\notag
\int_\mathbb T d\theta
\bigg|\int_{\mathbb T} \frac{d\alpha}{\alpha^2} | \delta_\alpha \bm{X}'(\theta)|^{q} |\delta_\beta\delta_\alpha \bm{X}'(\theta)|\bigg|^2
\leq
\left(\int_\mathbb T \frac{d \alpha}{\alpha^2} \left[\int_\mathbb T d\theta |\delta_\alpha \bm{X}'|^{2q} |\delta_\beta\delta_\alpha \bm{X}'|^2\right]^\frac{1}{2}\right)^2
\\ \leq \left(\int_\mathbb T \frac{d\alpha}{\alpha^2}||\delta_\alpha \bm{X}'||_{L^\infty_\theta}^q||\delta_\beta \delta_\alpha \bm{X}'||_{L^2_\theta} \right)^2 \leq ||\bm{X}'||_{\dot{B}^{1/2q}_{\infty, 2q}}^{2q}||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2,
\end{multline}
and similarly
\begin{multline}\notag
\int_\mathbb T d\theta
\bigg|\int_{\mathbb T} \frac{d\alpha}{\alpha^2} | \delta_\alpha \bm{X}'(\theta)|^{q+1} |\delta_\beta \bm{X}'(\theta)|\bigg|^2 \leq \left(\int_\mathbb T \frac{d \alpha}{\alpha^2} \left[\int_\mathbb T d\theta |\delta_\alpha \bm{X}'|^{2(q+1)} |\delta_\beta \bm{X}'|^2\right]^\frac{1}{2}\right)^2
\\ \leq \left(\int_\mathbb T \frac{d\alpha}{\alpha^2}||\delta_\alpha \bm{X}'||_{L^{2(q+1)}_\theta}^{q+1}||\delta_\beta \bm{X}'||_{L^\infty_\theta} \right)^2 \leq ||\bm{X}'||_{\dot{B}^{1/(q+1)}_{2(q+1), q+1}}^{2(q+1)}||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2.
\end{multline}
Integrating against $\displaystyle\frac{d\beta}{\beta^2}$
we obtain
\begin{eqnarray}\notag
&\mathcal{Y}_q \leq ||\bm{X}'||_{\dot{B}^{1/2q}_{\infty, 2q}}^{2q}\int_\mathbb T\frac{d\beta}{\beta^2}||\delta_\beta \bm{X}'||_{\dot{H}^{1/2}}^2\lesssim ||\bm{X}'||_{\dot{B}^{1/2q}_{\infty, 2q}}^{2q}||\bm{X}'||_{\dot{H}^1}^2,
\\ \notag
&\mathcal{Z}_q \leq ||\bm{X}'||_{\dot{B}^{1/(q+1)}_{2(q+1), q+1}}^{2(q+1)}\int_\mathbb T\frac{d\beta}{\beta^2}||\delta_\beta \bm{X}'||_{L^\infty_\theta}^2
\lesssim
||\bm{X}'||_{\dot{B}^{1/(q+1)}_{2(q+1), q+1}}^{2(q+1)}
||\bm{X}'||_{\dot{B}^{1/2}_{\infty, 2}}^{2}.
\end{eqnarray}
Then above we will use $||\bm{X}'||_{\dot{B}^{1/2}_{\infty, 2}}^{2} \lesssim || \bm{X}'||_{\dot{H}^1}^2$ from Proposition \ref{besov.ineq.prop}.
Finally, since $q\ge 1$, applying Proposition \ref{besov.ineq.prop} and Lemma \ref{Besov.interpolation} gives
\begin{multline}\notag
||\bm{X}'||_{\dot{B}^{1/2q}_{\infty, 2q}}^{2q} \lesssim ||\bm{X}'||_{\dot{H}^{1/2 + 1/2q}}^{2q} \lesssim ||\bm{X}'||_{\dot{H}^{1/2}}^{2(q-1)}||\bm{X}'||_{\dot{H}^1}^2,
\\ ||\bm{X}'||_{\dot{B}^{1/(q+1)}_{2(q+1), q+1}}^{2(q+1)} \lesssim ||\bm{X}'||_{\dot{H}^{1/2 + 1/2(q+1)}}^{2(q+1)} \lesssim ||\bm{X}'||_{\dot{H}^{1/2}}^{2q}||\bm{X}'||_{\dot{H}^1}^2,
\end{multline}
completing the estimate.
\end{proof}
Let $\bm{X}'$ be a smooth solution of \eqref{peskin.general.tension} with \eqref{kerbel.eqn.deriv} and \eqref{kerbel.A.eqn.deriv}; we will use the equation in the form \eqref{v.theta.def}. Next we prove the $\dot{H}^1$ estimate.
\begin{proposition}\label{prop:H1.estimate}
For any $0 < t_0 < t < T$ we have the following estimate:
\begin{equation}\label{H1.estimate}
||\bm{X}'||_{\dot{H}^1}^2(t) \leq ||\bm{X}'||_{\dot{H}^1}^2(t_0)\mbox{exp}\left(C
|| \mathcal{H}||_{L^\infty(t_0,t)}\int_{t_0}^t ds ||\bm{X}'||_{H^1}^2(s)\right) .
\end{equation}
Here $\mathcal{H}=\mathcal{H}(s)=\mathcal{H}(||\bm{X}'(s)||_{\dot{H}^{\frac12}}, \rho^{-1}, \lambda^{-1}, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE})$ is a polynomial that is written explicitly in \eqref{HEU.def}. Thus in particular we have that
\begin{equation}\label{H1.estimate2}
||\bm{X}'||_{\dot{H}^1}^2(t)\leq C(M, \mu, \rho, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE}) ||\bm{X}'||_{\dot{H}^1}^2(t_0).
\end{equation}
\end{proposition}
\begin{proof}
Notice that we use $||\bm{X}'||_{\dot{H}^1}^2 = \int_{\mathbb T} d\theta~ | \widetilde{\Lambda} \bm{X}'(\theta)|^2$ with \eqref{tildeLambda:eq}. Thus from \eqref{v.theta.def} we have
\begin{equation}\label{time.deriv.energy}
\frac{1}{2}\frac{d}{dt}||\bm{X}'||_{\dot{H}^1}^2 = -\int_{\mathbb T} d\theta ~\widetilde{\Lambda}^{\frac32}\bm{X}' \cdot \widetilde{\Lambda}^{\frac32}\mathbf{T}(\bm{X}')
+ \int_{\mathbb T} d\theta ~\widetilde{\Lambda}^{\frac32}\bm{X}' \cdot \widetilde{\Lambda}^{\frac12}\mathcal{V}.
\end{equation}
To estimate the first term we split
\begin{equation*}
\widetilde{\Lambda}^{3/2}\mathbf{T}(\bm{X}') = D\mathbf{T}(\bm{X}'(\theta)) \widetilde{\Lambda}^{3/2} \bm{X}' + \mathcal{H}_1,
\end{equation*}
where similar to \eqref{e:deltaalphaT} and \eqref{DBTX.def} we have
\begin{equation*}
\mathcal{H}_1 \eqdef \frac{1}{4\pi}\int_{\mathbb T} \frac{d\alpha}{\alpha^{5/2}} \left(\int_0^1\left( D\mathbf{T}(\bm{X}'(\theta) + s\delta_\alpha \bm{X}'(\theta)) - D\mathbf{T}(\bm{X}'(\theta)) \right)ds\right)\delta_\alpha \bm{X}'(\theta).
\end{equation*}
Then similar to \eqref{DDBT.term} we have
\begin{equation*}
\left| D\mathbf{T}(\bm{X}'(\theta) + s\delta_\alpha \bm{X}'(\theta)) - D\mathbf{T}(\bm{X}'(\theta)) \right|
\lesssim \mathcal{C}_{2\TE} |\delta_\alpha \bm{X}'(\theta)|.
\end{equation*}
Then using Minkowski's inequality and the Besov space embeddings in Proposition \ref{besov.ineq.prop}, we bound $\mathcal{H}_1$ in $L^2$ as
\begin{equation}\notag
|| \mathcal{H}_1 ||_{L^2_\theta}
\lesssim
\mathcal{C}_{2\TE} \int_{\mathbb T} \frac{d\alpha}{\alpha^{5/2}}||\delta_\alpha \bm{X}'||_{L^4_\theta}^2
\lesssim
\mathcal{C}_{2\TE} ||\bm{X}'||_{\dot{B}^{3/4}_{4,2}}^2 \lesssim \mathcal{C}_{2\TE} ||\bm{X}'||_{\dot{H}^1}^2 .
\end{equation}
Recalling $D\mathbf{T}(z)\geq \lambda I$ from \eqref{e:DTdefn}, applying Young's inequality we thus have
\begin{equation}\label{HEK.estimate}
\begin{split}
-\int_{\mathbb T} d\theta~ \widetilde{\Lambda}^{\frac32}\bm{X}' \cdot \widetilde{\Lambda}^{\frac32}\mathbf{T}(\bm{X}') &\leq -\lambda ||\bm{X}'||_{\dot{H}^{\frac32}}^2 + C\mathcal{C}_{2\TE} ||\bm{X}'||_{\dot{H}^{\frac32}} ||\bm{X}'||_{\dot{H}^{1}}^{2}
\\&\leq -\frac{\lambda}{2} ||\bm{X}'||_{\dot{H}^{\frac32}}^2 + C \lambda^{-1}\mathcal{C}_{2\TE}^2 ||\bm{X}'||_{\dot{H}^{1}}^4.
\end{split}
\end{equation}
This is our main estimate for the first term in \eqref{time.deriv.energy}.
To estimate the second term in \eqref{time.deriv.energy}, it suffices to bound
$\widetilde{\Lambda}^{\frac12}\mathcal{V}$ from \eqref{v.theta.def} in $L^2$. This is equivalent to bounding
$ \int_{\mathbb T} \frac{d\beta}{\beta^2} \int_{\mathbb T} d\theta ~|\delta_\beta \mathcal{V}(\theta)|^2$. Thus, we have
\begin{multline*}
|| \widetilde{\Lambda}^{\frac12}\mathcal{V}||_{L^2_\theta}^2 \approx \int_{\mathbb T} \frac{d\beta}{\beta^2} \int_{\mathbb T} d\theta \bigg|\int_\mathbb T \frac{d\alpha}{\alpha^2} \delta_\beta[\mathcal{A}(\theta, \alpha) \delta_\alpha \mathbf{T}(\bm{X}'(\theta))] \bigg|^2
\\
\lesssim
\int_{\mathbb T} \frac{d\beta}{\beta^2} \int_{\mathbb T} d\theta \bigg|\int_\mathbb T \frac{d\alpha}{\alpha^2} |\mathcal{A}(\theta, \alpha)| \ |\delta_\beta\delta_\alpha \mathbf{T}(\bm{X}'(\theta))|\bigg|^2
\\
+
\int_{\mathbb T} \frac{d\beta}{\beta^2} \int_{\mathbb T} d\theta \bigg|\int_\mathbb T \frac{d\alpha}{\alpha^2} |\delta_\beta\mathcal{A}(\theta, \alpha)| \ |\tau_\beta\delta_\alpha \mathbf{T}(\bm{X}'(\theta))| \bigg|^2 = \mathcal{H}_2+\mathcal{H}_3.
\end{multline*}
We now use \eqref{beta.alpha.TXP}, \eqref{e:Abounds}, \eqref{apriori.bd} and Lemma \ref{estimate.q} to calculate that
\begin{multline*}
\mathcal{H}_2
\lesssim
\mathcal{C}_{1\TE}^2 (\rho^{-2} \mathcal{Y}_1+\rho^{-4} \mathcal{Y}_2)+\mathcal{C}_{2\TE}^2 (\rho^{-2} \mathcal{Z}_1+\rho^{-4} \mathcal{Z}_2)
\\
\lesssim
(\mathcal{C}_{1\TE}^2+ \mathcal{C}_{2\TE}^2 ||\bm{X}'||_{\dot{H}^{\frac12}}^2 ) (\rho^{-2} + \rho^{-4}||\bm{X}'||_{\dot{H}^{\frac12}}^2 ) ||\bm{X}'||_{\dot{H}^{1}}^4.
\end{multline*}
These are our main estimates for the term containing $\mathcal{H}_2$.
To bound the term $\mathcal{H}_3$, we will use \eqref{delta.alpha.BX.bound} and the estimate of $|\delta_\beta \mathcal{A}(\theta, \alpha)|$ in Lemma \ref{A.bound.lem}, \eqref{A1betaRemark} and \eqref{A2betaRemark}. Then as in Lemma \ref{estimate.q} we have
\begin{multline*}
\mathcal{H}_3
\lesssim
\mathcal{C}_{1\TE}^2 (\rho^{-2} \mathcal{Y}_1+\rho^{-4}\mathcal{Y}_2 +\rho^{-4}\mathcal{Z}_1 + \rho^{-6} \mathcal{Z}_2 )
\\
\lesssim
\mathcal{C}_{1\TE}^2 (\rho^{-2} + \rho^{-4} ||\bm{X}'||_{\dot{H}^{\frac12}}^2+ \rho^{-6} || \bm{X}'||_{\dot{H}^{\frac12}}^{4} ) ||\bm{X}'||_{\dot{H}^{1}}^4.
\end{multline*}
Notice that in the estimates above, the terms from \eqref{A2betaRemark} containing $|\delta_\beta D_\alpha \bm{X}(\theta)|$ can be treated in the same way as $|\delta_\beta \bm{X}'(\theta)|$ in Lemma \ref{estimate.q}, due to \eqref{operator.bd.first}.
Thus putting everything together,
we have that
\begin{equation}\notag
|| \widetilde{\Lambda}^{\frac12}\mathcal{V} ||^2_{L^2_\theta}
\lesssim
(\mathcal{C}_{1\TE}^2\rho^{-2}+ \mathcal{C}_{2\TE}^2 ) \rho^{-2} ||\bm{X}'||_{\dot{H}^{\frac12}}^2 (1 + \rho^{-2}||\bm{X}'||_{\dot{H}^{\frac12}}^2 ) ||\bm{X}'||_{\dot{H}^{1}}^4
+
\mathcal{C}_{1\TE}^2
\rho^{-2}
||\bm{X}'||_{\dot{H}^{1}}^4.
\end{equation}
Thus for the second term in \eqref{time.deriv.energy} after applying Young's inequality we have
\begin{multline*}
\bigg|\int_{\mathbb T} d\theta \widetilde{\Lambda}^{\frac32}\bm{X}' \cdot \widetilde{\Lambda}^{
\frac12}\mathcal{V} \bigg| \leq \frac{\lambda}{4}||\bm{X}'||_{\dot{H}^{\frac32}}^2
+
C\lambda^{-1}
\mathcal{C}_{1\TE}^2
\rho^{-2}
||\bm{X}'||_{\dot{H}^{1}}^4
\\
+ C\lambda^{-1}
(\mathcal{C}_{1\TE}^2\rho^{-2}+ \mathcal{C}_{2\TE}^2 ) \rho^{-2} ||\bm{X}'||_{\dot{H}^{\frac12}}^2 (1 + \rho^{-2}||\bm{X}'||_{\dot{H}^{\frac12}}^2 ) ||\bm{X}'||_{\dot{H}^{1}}^4.
\end{multline*}
From the above estimate and \eqref{HEK.estimate} we are motivated to define $\mathcal{H} = \mathcal{H} (s)$ by
\begin{equation}\label{HEU.def}
\mathcal{H} \eqdef
\lambda^{-1}
(\mathcal{C}_{1\TE}^2\rho^{-2}+ \mathcal{C}_{2\TE}^2 ) \left(
\rho^{-2} ||\bm{X}'(s)||_{\dot{H}^{\frac12}}^2 (1 + \rho^{-2}||\bm{X}'(s)||_{\dot{H}^{\frac12}}^2 )
+
1
\right).
\end{equation}
We plug these estimates into \eqref{time.deriv.energy} and apply Gr\"onwall's inequality to get \eqref{H1.estimate}.
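Schematically (up to constants, and using that all the upper bounds above are controlled by $\mathcal{H}(t)||\bm{X}'||_{\dot{H}^1}^4$), the estimates combine into a differential inequality of the form
\begin{equation}\notag
\frac{d}{dt} ||\bm{X}'||_{\dot{H}^1}^2 + \frac{\lambda}{4} ||\bm{X}'||_{\dot{H}^{\frac32}}^2
\lesssim \mathcal{H}(t)\, ||\bm{X}'||_{\dot{H}^1}^2\, ||\bm{X}'||_{\dot{H}^1}^2,
\end{equation}
so that dropping the dissipative term and applying Gr\"onwall's inequality on $[t_0,t]$ gives
\begin{equation}\notag
||\bm{X}'||_{\dot{H}^1}^2(t) \leq ||\bm{X}'||_{\dot{H}^1}^2(t_0)\, \exp\left(C\int_{t_0}^t ds\, \mathcal{H}(s)\, ||\bm{X}'||_{\dot{H}^1}^2(s)\right).
\end{equation}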
Recalling \eqref{apriori.bd.norm} and noting that
\begin{equation}\notag
||\bm{X}'||_{L^\infty_t \dot{H}^{\frac12}} \lesssim ||\bm{X}'||_{\BS_T}, \qquad ||\bm{X}'||_{L^2_t \dot{H}^{1}} \lesssim ||\bm{X}'||_{\mathcal{D}_T^\subw},
\end{equation}
then gives \eqref{H1.estimate2}.
\end{proof}
Next, we prove the gain of $\dot{H}^1$ for small times.
\begin{lemma}\label{H1.Linfinity.time.bound}
Let $\bm{X}'$ be a solution to the Peskin problem \eqref{peskin.general.tension}. Then for any fixed $\varepsilon>0$ sufficiently small, there exists a time $T_\varepsilon= T_\varepsilon(\varepsilon, \rho, \mu, M, \lambda)>0$ such that for all $0<t\leq T_\varepsilon$ we have
\begin{equation}\notag
||\bm{X}'||_{\dot{H}^1}(t) \leq \varepsilon t^{-1/2}.
\end{equation}
\end{lemma}
\begin{proof}
For a fixed $\varepsilon>0$, by Lemma \ref{H1.small.time}, for all $t>0$ sufficiently small we have
\begin{equation}\label{small.time.integ}
\int_0^{t}ds ||\bm{X}'||_{\dot{H}^1}^2(s) \leq \frac{\varepsilon^2\log 2}{4}.
\end{equation}
Then as
\begin{equation}\notag
\int_{t/2}^{t}ds \frac{\varepsilon^2}{4s} = \frac{\varepsilon^2 \log 2}{4},
\end{equation}
there must exist some time $t_0\in [ t/2, t]$ such that
\begin{equation}\notag
||\bm{X}'||_{\dot{H}^1}^2(t_0) \leq \frac{\varepsilon^2}{4t_0} \leq \frac{\varepsilon^2}{2t}.
\end{equation}
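The existence of such a $t_0$ follows by contradiction: if $||\bm{X}'||_{\dot{H}^1}^2(s) > \frac{\varepsilon^2}{4s}$ for every $s\in[t/2,t]$, then
\begin{equation}\notag
\int_0^{t}ds\, ||\bm{X}'||_{\dot{H}^1}^2(s)
\geq \int_{t/2}^{t}ds\, ||\bm{X}'||_{\dot{H}^1}^2(s)
> \int_{t/2}^{t}ds\, \frac{\varepsilon^2}{4s}
= \frac{\varepsilon^2\log 2}{4},
\end{equation}
contradicting \eqref{small.time.integ}.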
Then combining the $\dot{H}^1$ estimate \eqref{H1.estimate} with Lemma \ref{H1.small.time} gives us that
\begin{multline}\notag
||\bm{X}'||_{\dot{H}^1}^2(t) \leq \frac{\varepsilon^2}{2t} \exp\left(C\sup\limits_{t/2\leq s\leq t}\mathcal{H}(s)\int_{t/2}^t ds ||\bm{X}'||_{\dot{H}^1}^2(s)\right)
\\
\leq \frac{\varepsilon^2}{2t} \exp\left(C\sup\limits_{t/2\leq s\leq t}\mathcal{H}(s)\varepsilon^2\right) \leq \frac{\varepsilon^2}{t},
\end{multline}
so long as $\varepsilon$ is sufficiently small.
\end{proof}
Next we will prove the $C^{1/2}_{t,\theta}$ estimate.
\begin{lemma}\label{lem:chalf}
Let $Q_t = [\frac{t}{2}, t]\times \mathbb T$ for all times $0<t\leq T_*,$ where
$0<T_*\le T_\varepsilon$ for some fixed $\varepsilon>0$ and $T_\varepsilon$ as in Lemma \ref{H1.Linfinity.time.bound}. Then there exists a finite constant $ C=C(\mu, M, \rho, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE})>0$ such that
\begin{equation}\notag
||\bm{X}'||_{C^{1/2}_{t,\theta}(Q_t)}\leq C t^{-1/2}.
\end{equation}
\end{lemma}
\begin{proof}
Combining Proposition \ref{prop:H1.estimate}, Lemma \ref{H1.Linfinity.time.bound} and the embedding in Proposition \ref{besov.ineq.prop} gives us for any time $t/2\leq s\leq t$ that
\begin{equation}\label{oneTwoTheta}
||\bm{X}'(s)||_{C^{1/2}_\theta} \lesssim ||\bm{X}'(s)||_{\dot{H}^1} \lesssim t^{-1/2}.
\end{equation}
Thus $\bm{X}'$ is uniformly $C^{1/2}$ in $\theta$ on the time interval $[t/2, t]$.
To show H{\"o}lder continuity in time, let $t/2\leq s_1<s_2\leq t$, and $\theta\in \mathbb T.$ Fixing some $\alpha>0$ to be determined, by the $C^{1/2}_\theta$ estimate above we have for $i\in\{1,2\}$ that
\begin{equation}\label{point.minus.avg}
\bigg| \bm{X}'(s_i, \theta) - \frac{1}{2\alpha}\int_{-\alpha}^{\alpha} d\beta \bm{X}'(s_i, \theta+\beta) \bigg| \lesssim \sqrt{\frac{\alpha}{t}}.
\end{equation}
Taking the difference of the two averages at times $s_1$ and $s_2$, we get that
\begin{multline}\label{avg.difference}
\bigg|\frac{1}{2\alpha}\int_{-\alpha}^{\alpha} d\beta \left( \bm{X}'(s_2, \theta+\beta)-\bm{X}'(s_1, \theta+\beta) \right)\bigg|
\\
= \bigg|\frac{1}{2\alpha}\int_{-\alpha}^{\alpha} d\beta \int_{s_1}^{s_2}ds ~\partial_t \bm{X}'(s, \theta+\beta)\bigg|.
\end{multline}
Applying the Cauchy-Schwarz inequality in the $d\beta$ integral to equation \eqref{avg.difference}, and using Lemma \ref{time.space.equivalent} and \eqref{oneTwoTheta}, we get that
\begin{multline}\label{avg.bound}
\bigg|\frac{1}{2\alpha}\int_{-\alpha}^{\alpha} d\beta \int_{s_1}^{s_2}ds ~\partial_t \bm{X}'(s, \theta+\beta)\bigg| \lesssim \frac{1}{\sqrt{\alpha }}\int_{s_1}^{s_2}ds~ || \partial_t \bm{X}'(s)||_{L^2}
\\
\lesssim \frac{1}{\sqrt{\alpha }}\int_{s_1}^{s_2}ds ~|| \bm{X}'(s)||_{\dot{H}^1}
\lesssim \frac{|s_2-s_1|}{\sqrt{\alpha t}}.
\end{multline}
Taking $\alpha = s_2-s_1>0$ and combining equations \eqref{point.minus.avg} and \eqref{avg.bound} then gives us
\begin{equation}\notag
|\bm{X}'(s_2,\theta) - \bm{X}'(s_1,\theta)|\lesssim \frac{|s_1-s_2|^{1/2}}{t^{1/2}}.
\end{equation}
This completes the proof.
\end{proof}
\subsection{$C^{1,\alpha}$ estimate for $\bm{X}'$}\label{sec:HigherEst}
With Lemma \ref{lem:chalf}, we have shown that our solution $\bm{X}'$ is $C^{1/2}$ in both the time $t$ and the parametrization $\theta$. Our next goal is to prove that $\bm{X}'\in C^{1,\alpha}_{t,\theta}([\tau, T]\times\mathbb T ; \mathbb R^2)$ for any fixed $\tau>0$.
Our proof follows from the paper \cite{MR3656476}, where the authors prove regularity estimates for the (scalar) fractional porous medium equation
\begin{equation}\notag
\partial_t u + (-\Delta)^{\sigma/2}\varphi(u) = 0.
\end{equation}
They make similar assumptions on their scalar nonlinearity $\varphi$ as we make on our tension map $\mathbf{T}$, and their proof transfers over to our vector-valued case.
We shall go through the argument of \cite{MR3656476} and show that it applies. But first, recall that $\bm{X}'$ solves the equation
\begin{equation}\notag
\partial_t \bm{X}' + \widetilde{\Lambda} \mathbf{T}(\bm{X}') = \mathcal{V}(t,\theta),
\end{equation}
where $\mathcal{V}$ is defined in \eqref{v.theta.def}. Thus we are dealing with a fractional porous media equation with an additional forcing term, so we shall need some estimates on $\mathcal{V}$.
\begin{lemma}\label{lem.v.regularity}
Let $\mathcal{V}(t, \theta)$ be as in \eqref{v.theta.def}. If $\bm{X}'\in L^\infty_t \dot{H}^1_\theta \cap L^\infty_t \dot{H}^{\frac12}_\theta$, then
\begin{equation}\notag
\mathcal{V}(t,\theta)\in L^\infty_{t,\theta}.
\end{equation}
If $\bm{X}'\in C^{\beta}_{t,\theta}$ for some $\displaystyle\frac{1}{2}<\beta<1,$ then
\begin{equation}\notag
\mathcal{V}(t,\theta)\in C^{2\beta-1}_{t,\theta}.
\end{equation}
If $\bm{X}'\in C^{0,1}_{t, \theta}$, then $\mathcal{V}$ is log-Lipschitz. Finally, if $\bm{X}'\in C^{k, \beta}_{t, \theta}$ and $\mathcal{T}\in C^{k,\beta}_{r}$ for some $k\geq 1$ and $0<\beta\leq 1$, then all $k$-th order derivatives of $\mathcal{V}$ are log-$C^\beta$.
\end{lemma}
\begin{proof}
To prove the $L^\infty$ estimate,
as in \eqref{Vbound.g} we bound
\begin{multline}\notag
|\mathcal{V}(t,\theta)| \lesssim \mathcal{C}_{1\TE} \int_\mathbb T\frac{d\alpha}{\alpha^2} \left(\frac{|\delta_\alpha \bm{X}'|^2}{\rho} + \frac{|\delta_\alpha \bm{X}'|^3}{\rho^2}\right) \lesssim \mathcal{C}_{1\TE} \left(\frac{||\bm{X}'(t)||_{\dot{B}^{1/2}_{\infty,2}}^2}{\rho} + \frac{||\bm{X}'(t)||_{\dot{B}^{1/3}_{\infty,3}}^3}{\rho^2}\right)
\\ \lesssim \mathcal{C}_{1\TE} \left(1+ \frac{||\bm{X}'||_{L^\infty_t H^{1/2}}}{\rho}\right) \frac{||\bm{X}'||_{L^\infty_t H^1}^2}{\rho}.
\end{multline}
With Proposition \ref{besov.ineq.prop}, we just used the following embedding and interpolation
$||\bm{X}'||_{\dot{B}^{1/3}_{\infty,3}} \lesssim ||\bm{X}'||_{\dot{H}^{\frac56}}\lesssim ||\bm{X}'||_{\dot{H}^{\frac12}}^{\frac{1}{3}} ||\bm{X}'||_{\dot{H}^{1}}^{\frac{2}{3}}$.
Now assume that $\bm{X}' \in C^{\beta}_{t,\theta}$ for some $1/2<\beta<1.$
Letting $\Theta = (t,\theta),$ and $\Phi = (s,\phi)$, we need to bound the difference of $ |\mathcal{V}(\Theta) -\mathcal{V}(\Phi)|.$
To begin, we split $\mathcal{A}$ from \eqref{kerbel.A.eqn.deriv} into two pieces $\mathcal{A}_L$ and $\mathcal{A}_Q,$ where
\begin{multline}\notag
\mathcal{A}_L \eqdef \frac{(\delta_\alpha^+ \bm{X}'+\delta_\alpha^- \bm{X}') \cdot \mathcal{P}(\DAL \BX (\theta)) \DAL \BX (\theta)}{|\DAL \BX (\theta)|^2} \mathcal{I}
\\ - \frac{(\delta_\alpha^+ \bm{X}'+\delta_\alpha^- \bm{X}') \cdot \mathcal{R}(\DAL \BX (\theta)) \DAL \BX (\theta)}{|\DAL \BX (\theta)|^2} \mathcal{R}(\DAL \BX (\theta)),
\end{multline}
and
\begin{multline}\notag
\mathcal{A}_Q \eqdef \frac{\delta_\alpha^+ \bm{X}' \cdot \mathcal{P}(\DAL \BX (\theta)) \delta_\alpha^- \bm{X}'}{|\DAL \BX (\theta)|^2}\mathcal{I}
-\frac{\delta_\alpha^+ \bm{X}' \cdot \mathcal{R}(\DAL \BX (\theta)) \delta_\alpha^- \bm{X}'}{|\DAL \BX (\theta)|^2}
\mathcal{R}(\DAL \BX (\theta))
\\
+ \frac{\delta_\alpha^+ \bm{X}' \cdot (\mathcal{P}(\DAL \BX (\theta)) - \mathcal{I}) \delta_\alpha^- \bm{X}'}{|\DAL \BX (\theta)|^2} \mathcal{P}(\DAL \BX (\theta)).
\end{multline}
Correspondingly, we define $\mathcal{V}_L$ and $\mathcal{V}_Q$. We will focus on proving that $\mathcal{V}_L$ is $C^{2\beta-1}$ when $\bm{X}'$ is $C^\beta$. Since $\bm{X}'\in L^\infty \cap C^\beta$, $\mathcal{A}_Q$ is $\min\{|\alpha|^\beta, 1\}$ smoother than $\mathcal{A}_L$, so the proof for $\mathcal{V}_Q$ follows similarly.
To show that $\mathcal{V}_L$ is $2\beta-1$ H\"older continuous, fix any $\Theta\neq \Phi\in [0,T]\times \mathbb T$. Then
\begin{multline}\label{e.v.holder.int.bound}
|\mathcal{V}_L(\Theta) - \mathcal{V}_L(\Phi)| \lesssim \frac{\mathcal{C}_{1\TE}}{\rho} \int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-)(\bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'|}{\alpha^2}
\\+ \frac{\mathcal{C}_{1\TE}}{\rho} \int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'(\Theta) - \delta_\alpha \bm{X}'(\Phi)|}{\alpha^2}
\\+ \frac{\mathcal{C}_{1\TE}}{\rho^2} \int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'|}{\alpha^2} | D_\alpha ( \bm{X}(\Theta) - \bm{X}(\Phi))|
\\+ \frac{\mathcal{C}_{2\TE}}{\rho} \int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'|}{\alpha^2} \left(|\bm{X}'(\Theta) - \bm{X}'(\Phi)|+|\tau_\alpha (\bm{X}'(\Theta) - \bm{X}'(\Phi))|\right)
\\ = \frac{\mathcal{C}_{1\TE}}{\rho}I_1 + \frac{\mathcal{C}_{1\TE}}{\rho}I_2 + \frac{\mathcal{C}_{1\TE}}{\rho^2} I_3 + \frac{\mathcal{C}_{2\TE}}{\rho} I_4.
\end{multline}
Note that, above and below, when we omit the dependence on the variable $\Theta$ or $\Phi$, it is because it has no effect on the argument.
As $\beta>1/2$, we can easily bound
\begin{equation}\notag
\int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'|}{\alpha^2} \lesssim ||\bm{X}'||_{C^\beta}^2 + ||\bm{X}'||_{L^\infty}^2.
\end{equation}
Thus
\begin{equation}\label{eqn.I34.bound}
I_3+I_4 \lesssim (||\bm{X}'||_{C^\beta}^2 + ||\bm{X}'||_{L^\infty}^2) ||\bm{X}'||_{C^\beta}|\Theta - \Phi|^\beta.
\end{equation}
To bound $I_1$ and $I_2$, we split each integral into the regions where $|\alpha|< |\Theta-\Phi|$ and $|\alpha|> |\Theta-\Phi|$. For small $\alpha$, we use the bounds
\begin{equation}\notag
|\delta_\alpha \bm{X}'|, |\delta_\alpha^\pm \bm{X}'| \lesssim ||\bm{X}'||_{C^\beta} |\alpha|^\beta,
\end{equation}
and for large $\alpha$ we bound
\begin{multline}\notag
|(\delta_\alpha^+ + \delta_\alpha^-)(\bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'| \lesssim ||\bm{X}'||_{C^\beta}^2|\Theta-\Phi|^\beta |\alpha|^\beta,
\\ |(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'(\Theta) - \delta_\alpha \bm{X}'(\Phi)|\lesssim ||\bm{X}'||_{C^\beta}^2 |\Theta-\Phi|^\beta |\alpha|^\beta.
\end{multline}
Plugging in these bounds, we then get that
\begin{equation}\label{eqn.I12.bound}
I_1+I_2 \lesssim ||\bm{X}'||_{C^\beta}^2 |\Theta-\Phi|^{2\beta-1}.
\end{equation}
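The exponent $2\beta-1$ arises from evaluating the two regional integrals: writing $d = |\Theta-\Phi|$, the small-$\alpha$ and large-$\alpha$ pieces contribute
\begin{equation}\notag
\int_{|\alpha|\leq d} \frac{|\alpha|^{2\beta}}{\alpha^2}\, d\alpha \approx \frac{d^{2\beta-1}}{2\beta-1},
\qquad
\int_{|\alpha|\geq d} \frac{d^\beta |\alpha|^{\beta}}{\alpha^2}\, d\alpha
\lesssim \frac{d^\beta\, d^{\beta-1}}{1-\beta} = \frac{d^{2\beta-1}}{1-\beta},
\end{equation}
where both computations use $1/2<\beta<1$.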
As $2\beta-1 < \beta$, plugging in \eqref{eqn.I34.bound} and \eqref{eqn.I12.bound} into \eqref{e.v.holder.int.bound} gives us that $\mathcal{V}_L \in C^{2\beta-1}_{t,\theta}.$
The proof for $\mathcal{V}_Q$ follows similarly, giving the result for $\mathcal{V}$.
Now suppose that $\bm{X}'$ is Lipschitz. Focusing again on the $\mathcal{V}_L$ bound, we are left to bound \eqref{e.v.holder.int.bound}. As $\mathcal{V}$ is bounded, we may assume without loss of generality that $|\Theta-\Phi|\leq 1$. We can bound $I_3,I_4$ using the same argument as in the $1/2<\beta<1$ case to get
\begin{equation}\label{eqn.I34.bound2}
I_3, I_4 \lesssim (||\bm{X}'||_{C^{0,1}}^2 + ||\bm{X}'||_{L^\infty}^2) ||\bm{X}'||_{C^{0,1}}|\Theta - \Phi|.
\end{equation}
To bound $I_1,I_2$ we now need to split our integral into 3 regions. For $|\alpha|\leq |\Theta-\Phi|,$ we again use the bounds
\begin{equation}\label{eqn.I12.small.alpha}
|\delta_\alpha \bm{X}'|, |\delta_\alpha^\pm \bm{X}'| \lesssim ||\bm{X}'||_{C^{0,1}} |\alpha|.
\end{equation}
For $|\Theta-\Phi|\leq |\alpha|\leq 1,$ we use the bounds
\begin{multline}\label{eqn.I12.med.alpha}
|(\delta_\alpha^+ + \delta_\alpha^-)(\bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'|\lesssim ||\bm{X}'||_{C^{0,1}}^2|\Theta-\Phi| \ |\alpha|,
\\ |(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'(\Theta) - \delta_\alpha \bm{X}'(\Phi)|\lesssim ||\bm{X}'||_{C^{0,1}}^2 |\Theta-\Phi| \ |\alpha|.
\end{multline}
And for $|\alpha|>1, $ we use
\begin{multline}\label{eqn.I12.large.alpha}
|(\delta_\alpha^+ + \delta_\alpha^-)(\bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'|\lesssim ||\bm{X}'||_{C^{0,1}} ||\bm{X}'||_{L^\infty}|\Theta-\Phi| ,
\\
|(\delta_\alpha^+ + \delta_\alpha^-)\bm{X}'| \ |\delta_\alpha \bm{X}'(\Theta) - \delta_\alpha \bm{X}'(\Phi)| \lesssim ||\bm{X}'||_{C^{0,1}}||\bm{X}'||_{L^\infty} |\Theta-\Phi| .
\end{multline}
Integrating and plugging in the above bounds \eqref{eqn.I12.small.alpha}, \eqref{eqn.I12.med.alpha} and \eqref{eqn.I12.large.alpha} we then get that
\begin{equation}\label{eqn.I12.bound2}
I_1+I_2 \lesssim ||\bm{X}'||_{C^{0,1}}(||\bm{X}'||_{C^{0,1}} + ||\bm{X}'||_{L^\infty})(1-\log|\Theta-\Phi|)|\Theta-\Phi|.
\end{equation}
Plugging \eqref{eqn.I34.bound2}, \eqref{eqn.I12.bound2} into \eqref{e.v.holder.int.bound} gives us that $\mathcal{V}$ is log-Lipschitz.
Now assume that $\bm{X}'\in C_{t,\theta}^{k,\beta}$ and $\mathcal{T}\in C^{k,\beta}_r$ for some $k\geq 1$ and $0<\beta\leq 1.$ We claim that for every $0\leq j\leq k$, $\partial_t^j \partial_\theta^{k-j}\mathcal{V}$ is log-$C^\beta$. The difference $|\partial_t^j \partial_\theta^{k-j}\mathcal{V}(\Theta) - \partial_t^j \partial_\theta^{k-j}\mathcal{V}(\Phi)|$ can be bounded by a sum of integrals. They can all be bounded similarly as above, but for clarity we will directly show how to bound the two most difficult integrals, namely
\begin{multline}\notag
J_1 = \int_\mathbb T d\alpha \frac{|\partial_t^j \partial_\theta^{k-j}(\delta_\alpha^+ + \delta_\alpha^-)( \bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'|}{\alpha^2},
\\ J_2 = \int_\mathbb T d\alpha \frac{|(\delta_\alpha^+ + \delta_\alpha^-) \bm{X}'|}{\alpha^2}\bigg|\delta_\alpha D^k \mathbf{T}(\bm{X}'(\Theta)) -\delta_\alpha D^k \mathbf{T}(\bm{X}'(\Phi))\bigg| \ |\partial_t \bm{X}'|^j |\bm{X}''|^{k-j}.
\end{multline}
Without loss of generality, we assume $|\Theta-\Phi|\leq 1$. To bound $J_1$, we again split our integral into 3 regions. For $|\alpha|\leq |\Theta-\Phi|$ we use the bound
\begin{equation}\notag
|\partial_t^j \partial_\theta^{k-j}(\delta_\alpha^+ + \delta_\alpha^-)( \bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'| \lesssim ||\bm{X}'||_{C^{k,\beta}} ||\bm{X}'||_{C^{0,1}} |\alpha|^{1+\beta}.
\end{equation}
For $|\Theta-\Phi|\leq |\alpha|\leq 1,$ we use the bounds
\begin{equation}\notag
|\partial_t^j \partial_\theta^{k-j}(\delta_\alpha^+ + \delta_\alpha^-)( \bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'| \lesssim ||\bm{X}'||_{C^{k,\beta}} ||\bm{X}'||_{C^{0,1}}|\Theta-\Phi|^\beta |\alpha|.
\end{equation}
Finally for $|\alpha|>1$ we use
\begin{equation}\notag
|\partial_t^j \partial_\theta^{k-j}(\delta_\alpha^+ + \delta_\alpha^-)( \bm{X}'(\Theta)-\bm{X}'(\Phi))| \ |\delta_\alpha \bm{X}'| \lesssim ||\bm{X}'||_{C^{k,\beta}} ||\bm{X}'||_{L^\infty}|\Theta-\Phi|^\beta .
\end{equation}
Plugging these in, we get that
\begin{equation}\notag
J_1 \lesssim ||\bm{X}'||_{C^{k,\beta}}(||\bm{X}'||_{C^{0,1}}+||\bm{X}'||_{L^\infty})(1-\log|\Theta-\Phi|) |\Theta-\Phi|^\beta.
\end{equation}
The other important integral to bound is $J_2$. Note that
\begin{equation}\notag
|\partial_t \bm{X}'|^j |\bm{X}''|^{k-j}\leq ||\bm{X}'||_{C^{0,1}}^k.
\end{equation}
To bound the rest of $J_2$, we split the integral into the same 3 regions for $\alpha$ and use the 3 bounds
\begin{multline}\notag
|(\delta_\alpha^+ + \delta_\alpha^-) \bm{X}'| \ |\delta_\alpha D^k\mathbf{T}(\Theta)- \delta_\alpha D^k\mathbf{T}(\Phi)| \lesssim ||\mathbf{T}||_{C^{k,\beta}} ||\bm{X}'||_{C^{0,1}}^2 |\alpha|^{1+\beta},
\\ |(\delta_\alpha^+ + \delta_\alpha^-) \bm{X}'| \ |\delta_\alpha D^k\mathbf{T}(\Theta)- \delta_\alpha D^k\mathbf{T}(\Phi)| \lesssim ||\mathbf{T}||_{C^{k,\beta}} ||\bm{X}'||_{C^{0,1}}^2 |\Theta-\Phi|^\beta |\alpha|,
\\ |(\delta_\alpha^+ + \delta_\alpha^-) \bm{X}'| \ |\delta_\alpha D^k\mathbf{T}(\Theta)- \delta_\alpha D^k\mathbf{T}(\Phi)| \lesssim ||\mathbf{T}||_{C^{k,\beta}} ||\bm{X}'||_{C^{0,1}} ||\bm{X}'||_{L^\infty} |\Theta-\Phi|^\beta,
\end{multline}
for small, medium, and large $\alpha$, respectively. Plugging these in, we then get that
\begin{equation}\notag
J_2 \lesssim ||\mathbf{T}||_{C^{k,\beta}}||\bm{X}'||_{C^{0,1}}^{k+1}(||\bm{X}'||_{C^{0,1}}+||\bm{X}'||_{L^\infty})(1-\log |\Theta-\Phi|) |\Theta-\Phi|^\beta.
\end{equation}
All the other integrals involved in bounding $|\partial_t^j \partial_\theta^{k-j}\mathcal{V}(\Theta)-\partial_t^j \partial_\theta^{k-j}\mathcal{V}(\Phi)|$ can be bounded either following similar arguments, or by using only lower order norms. Thus all $k$-th order derivatives of $\mathcal{V}$ are log-$C^\beta.$
\end{proof}
With the regularity estimates for $\mathcal{V}$, we can now slightly modify \cite{MR3656476}'s proof of regularity for the scalar fractional porous medium equation. The crux of their argument is an a priori estimate for solutions to the fractional heat equation.
\begin{lemma}\label{lem.heat.eqn.estimates}(V\'{a}zquez, de Pablo, Quir\'{o}s and Rodr\'{i}guez \cite{MR3656476})
Let $f,g: [0,T]\times \mathbb R \to \mathbb R$ be such that
\begin{equation}\notag
\left\{\begin{array}{l} \partial_t g + \Lambda g = \Lambda f,
\\ g(0,\cdot) \equiv 0 . \end{array}\right.
\end{equation}
Fix $\Theta_0 = (t_0, \theta_0)\in (0,T)\times \mathbb R.$ Suppose that there exist some $0<\beta, \epsilon< 1$ and $r_0>0$ such that $f$ satisfies
\begin{equation}\notag
\left\{\begin{array}{l}
|f(\Theta_1)-f(\Theta_0)| \leq c |\Theta_1 - \Theta_0|^{\beta+\epsilon},
\\ |f(\Theta_1) - f(\Theta_2)| \leq c r^\epsilon |\Theta_1-\Theta_2|^\beta ,
\end{array}\right.
\end{equation}
for all $\Theta_1, \Theta_2\in B_r(\Theta_0)=\{\Theta: |\Theta - \Theta_0|<r\}$ and $0<r\leq r_0.$
Then $g$ satisfies
\begin{equation}\notag
|g(\Theta_0+\Phi) + g(\Theta_0-\Phi) - 2g(\Theta_0)| \lesssim |\Phi|^{\beta+\epsilon}.
\end{equation}
\end{lemma}
Note that Lemma \ref{lem.heat.eqn.estimates} above is a collection of Lemmas 4.1, 5.1, and 5.3 from \cite{MR3656476}. Lemma \ref{lem.heat.eqn.estimates} effectively says that if $f$ is $C^{\beta}$ everywhere and $C^{\beta+\epsilon}$ at a fixed point $\Theta_0,$ then so is the solution $g$. We also remark that Lemma \ref{lem.heat.eqn.estimates} generalizes automatically from $\mathbb R$ to $\mathbb T$. Also, as in \secref{sec:para}, Lemma \ref{lem.heat.eqn.estimates} generalizes automatically from $\partial_t g + \Lambda g = \Lambda f$ to $\partial_t g + \widetilde{\Lambda} g = \widetilde{\Lambda} f$.
As in \cite{MR3656476}, we apply Lemma \ref{lem.heat.eqn.estimates} repeatedly to steadily improve the regularity of our solution $\bm{X}'$ in a bootstrapping argument.
\begin{proposition}\label{prop.C1beta}
Let $\bm{X}: [0,T]\times \mathbb T\to \mathbb R^2$ be the solution to the Peskin problem we constructed. Then for any $0<\tau<T$, $\bm{X}' \in C^{1,\beta}_{t,\theta}([\tau, T]\times \mathbb T; \mathbb R^2)$ for all $0<\beta<1.$
\end{proposition}
\begin{proof}
To begin, fix some point $\Theta_0 = (t_0,\theta_0)\in (\tau, T)\times \mathbb T.$ Let $\bm{V}^{\Theta_0}$ be the solution to the equation
\begin{equation}\label{eqn.v.pde}
\left\{ \begin{array}{l}
\partial_t \bm{V}^{\Theta_0} + D\mathbf{T}(\bm{X}'(\Theta_0)) \widetilde{\Lambda} \bm{V}^{\Theta_0} = -\mathcal{V}[\bm{X}'],
\\ \bm{V}^{\Theta_0}(\tau/2,\cdot) = \bm{X}'(\tau/2, \cdot).
\end{array} \right.
\end{equation}
Together Proposition \ref{prop:H1.estimate} and Lemma \ref{H1.Linfinity.time.bound} imply that $\bm{X}'\in L^\infty_t([\tau/2, T]; H^1(\mathbb T;\mathbb R^2)).$ Thus in particular, by Lemma \ref{lem.v.regularity}, $\mathcal{V}\in L^\infty([\tau/2, T]\times\mathbb T).$ Notice that \eqref{eqn.v.pde} can be diagonalized using $\bm{V}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)$ and $\bm{V}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)^\perp$. Then since $\bm{V}^{\Theta_0}$ is a solution to the fractional heat equation with bounded initial data and bounded forcing term, we thus have for any $0<\beta<1$ that
\begin{equation}\label{eqn.v.low.reg}
\bm{V}^{\Theta_0}\in C^{\beta}([\tau,T]\times \mathbb T),
\end{equation}
with the constant depending on $\tau, \beta, ||\bm{X}'||_{L^\infty}, ||\mathcal{V}||_{L^\infty}, \widetilde{\Lambda},$ and $\mathcal{C}_{1\TE}$.
Now take $\bm{U}^{\Theta_0}(t,\theta) \eqdef \bm{X}'(t,\theta) - \bm{V}^{\Theta_0}(t,\theta).$ Then using \eqref{v.theta.def} we see that $\bm{U}^{\Theta_0}$ solves the system
\begin{equation}\notag
\left\{\begin{array}{l} \partial_t \bm{U}^{\Theta_0} + D\mathbf{T}(\bm{X}'(\Theta_0)) \widetilde{\Lambda} \bm{U}^{\Theta_0} = \widetilde{\Lambda} \bm{F}^{\Theta_0},
\\ \bm{U}^{\Theta_0}(\tau/2,\cdot)\equiv 0, \end{array}\right.
\end{equation}
where
\begin{equation}\notag
\bm{F}^{\Theta_0}(t,\theta) = D\mathbf{T}(\bm{X}'(\Theta_0))\bm{X}'(t,\theta) - \mathbf{T}(\bm{X}'(t,\theta)).
\end{equation}
Note that $\bm{F}^{\Theta_0}$ satisfies
\begin{multline}\label{eqn.F1}
|\bm{F}^{\Theta_0}(\Theta_1) - \bm{F}^{\Theta_0}(\Theta_0)|
\\ = |\mathbf{T}(\bm{X}'(\Theta_1)) - \mathbf{T}(\bm{X}'(\Theta_0)) - D\mathbf{T}(\bm{X}'(\Theta_0))(\bm{X}'(\Theta_1) - \bm{X}'(\Theta_0))|
\\ \leq \mathcal{C}_{2\TE} |\bm{X}'(\Theta_1) - \bm{X}'(\Theta_0)|^2,
\end{multline}
and
\begin{multline}\label{eqn.F2}
|\bm{F}^{\Theta_0}(\Theta_1) - \bm{F}^{\Theta_0}(\Theta_2)|
\\ = |\mathbf{T}(\bm{X}'(\Theta_1)) - \mathbf{T}(\bm{X}'(\Theta_2)) - D\mathbf{T}(\bm{X}'(\Theta_0))(\bm{X}'(\Theta_1) - \bm{X}'(\Theta_2))|
\\ = \bigg|\left(\int_0^1 ds D\mathbf{T}(s \bm{X}'(\Theta_1) + (1-s)\bm{X}'(\Theta_2)) - D\mathbf{T}(\bm{X}'(\Theta_0))\right)(\bm{X}'(\Theta_1)-\bm{X}'(\Theta_2))\bigg|
\\ \leq \mathcal{C}_{2\TE} \max\{|\bm{X}'(\Theta_1) - \bm{X}'(\Theta_0)|,|\bm{X}'(\Theta_2) - \bm{X}'(\Theta_0)|\} |\bm{X}'(\Theta_1)-\bm{X}'(\Theta_2)|.
\end{multline}
We will use \eqref{eqn.F1} and \eqref{eqn.F2} to apply the bounds in Lemma \ref{lem.heat.eqn.estimates}.
Let $\bm{U}^{\Theta_0}_1 = \bm{U}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)$ and $\bm{U}^{\Theta_0}_2 = \bm{U}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)^\perp.$ Then using \eqref{e:DTdefn} we see that $\bm{U}^{\Theta_0}_1$ solves the scalar equation
\begin{equation}\notag
\left\{\begin{array}{l} \partial_t \bm{U}^{\Theta_0}_1 +\mathcal{T}'(|\bm{X}'(\Theta_0)|) \widetilde{\Lambda} \bm{U}^{\Theta_0}_1 = \widetilde{\Lambda} (\bm{F}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)),
\\ \bm{U}^{\Theta_0}_1(\tau/2,\cdot)\equiv 0, \end{array}\right.
\end{equation}
and $\bm{U}^{\Theta_0}_2$ solves
\begin{equation}\notag
\left\{\begin{array}{l} \partial_t \bm{U}^{\Theta_0}_2 +\frac{\mathcal{T}(|\bm{X}'(\Theta_0)|)}{|\bm{X}'(\Theta_0)|} \widetilde{\Lambda} \bm{U}^{\Theta_0}_2 = \widetilde{\Lambda} (\bm{F}^{\Theta_0}\cdot \widehat{\bm{X}'}(\Theta_0)^\perp),
\\ \bm{U}^{\Theta_0}_2(\tau/2,\cdot)\equiv 0. \end{array}\right.
\end{equation}
Note that from \eqref{e:DTdefn} and \eqref{e:QuantitativeTensionMap} we have
$$\lambda\leq \mathcal{T}'(|\bm{X}'(\Theta_0)|), \frac{\mathcal{T}(|\bm{X}'(\Theta_0)|)}{|\bm{X}'(\Theta_0)|} \leq \mathcal{C}_{1\TE}.$$
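Behind this diagonalization is the fact that $\widehat{\bm{X}'}(\Theta_0)$ and $\widehat{\bm{X}'}(\Theta_0)^\perp$ are eigenvectors of $D\mathbf{T}(\bm{X}'(\Theta_0))$: the standard formula for the derivative of $v\mapsto \mathcal{T}(|v|)\widehat{v}$ (cf. \eqref{e:DTdefn}) reads
\begin{equation}\notag
D\mathbf{T}(v) = \mathcal{T}'(|v|)\, \widehat{v}\otimes\widehat{v} + \frac{\mathcal{T}(|v|)}{|v|}\left(\mathcal{I} - \widehat{v}\otimes\widehat{v}\right),
\end{equation}
so projecting onto $\widehat{\bm{X}'}(\Theta_0)$ and $\widehat{\bm{X}'}(\Theta_0)^\perp$ yields the two scalar equations above, with the respective eigenvalues as diffusion coefficients.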
As $\bm{F}^{\Theta_0}$ satisfies \eqref{eqn.F1}, \eqref{eqn.F2} and $\bm{X}'\in C^{1/2}_{t,\theta}$ by Lemma \ref{lem:chalf}, after rescaling in time we can apply Lemma \ref{lem.heat.eqn.estimates} to $\bm{U}^{\Theta_0}_i
$ with $\beta = \epsilon=1/2 $ to get
\begin{equation}\notag
|\bm{U}^{\Theta_0}(\Theta_0+\Phi)+\bm{U}^{\Theta_0}(\Theta_0-\Phi)-2\bm{U}^{\Theta_0}(\Theta_0)| \lesssim |\Phi|,
\end{equation}
where the constant depends on $||\bm{X}'||_{C^{1/2}}, \lambda, \mathcal{C}_{1\TE},$ and $\mathcal{C}_{2\TE}.$ In particular, we have that $\bm{U}^{\Theta_0}$ is $C^{\beta}$ at $\Theta_0$ for any $\beta<1$. As $\bm{X}' = \bm{U}^{\Theta_0}+\bm{V}^{\Theta_0}$, we thus have for any $\beta<1$ that
\begin{equation}\notag
|\bm{X}'(\Theta_0+\Phi)-\bm{X}'(\Theta_0)|\lesssim |\Phi|^\beta.
\end{equation}
Since $\Theta_0\in [\tau, T]\times \mathbb T$ was arbitrary, we thus have that $\bm{X}'\in C^\beta([\tau, T]\times \mathbb T; \mathbb R^2)$ for all $0<\beta<1.$
But now as $\bm{X}'\in C^\beta$ for all $\beta<1,$ by Lemma \ref{lem.v.regularity} we have that $\mathcal{V}$ is $C^\beta$ for all $\beta<1$ as well. Thus as $\bm{V}^{\Theta_0}$ solves \eqref{eqn.v.pde} with a $C^\beta$ forcing term, we must have $\bm{V}^{\Theta_0}\in C^{1,\beta}([\tau,T]\times \mathbb T)$ for any $\beta<1.$
As $\bm{F}^{\Theta_0}$ satisfies \eqref{eqn.F1}, \eqref{eqn.F2} and $\bm{X}'\in C^{\beta}_{t,\theta}$, after rescaling in time we can again apply Lemma \ref{lem.heat.eqn.estimates} to $\bm{U}^{\Theta_0}_i$ with $\epsilon=\beta$ for any $\beta<1$ to get
\begin{equation}\notag
|\bm{U}^{\Theta_0}(\Theta_0+\Phi)+\bm{U}^{\Theta_0}(\Theta_0-\Phi)-2\bm{U}^{\Theta_0}(\Theta_0)| \lesssim |\Phi|^{2\beta}.
\end{equation}
Since $\bm{X}' = \bm{U}^{\Theta_0}+\bm{V}^{\Theta_0}$ and $\Theta_0\in [\tau, T]\times \mathbb T$ and $\beta<1$ were arbitrary, we thus have that $\bm{X}'\in C^{1,\beta}([\tau, T]\times \mathbb T)$ for all $\beta<1.$
\end{proof}
\begin{proposition}\label{prop:higherReg}
Assume that $\mathcal{T}\in C^{k,\gamma}([0,\infty))$ for some $k\geq 2, $ and $0<\gamma<1.$ Then for any $\tau>0$, $\bm{X}'\in C^{k,\gamma}([\tau, T]\times \mathbb T; \mathbb R^2)$.
\end{proposition}
\begin{proof}
If $k=2$, we will show $\bm{X}'\in C^{2,\gamma}.$ Otherwise, we will show that $\bm{X}'\in C^{2,\beta}$ for all $\beta<1$ and then proceed by induction on $k$.
So to begin, we will prove that $\bm{X}''\in C^{1,\gamma}.$ Differentiating our equation for $\bm{X}'$ \eqref{v.theta.def}, we get that
\begin{equation}\label{eqn.X''}
\partial_t \bm{X}'' + \widetilde{\Lambda} (D\mathbf{T}(\bm{X}')\bm{X}'') = \mathcal{V}'.
\end{equation}
Fix some point $\Theta_0\in [\tau, T)\times \mathbb T.$ Then we can rewrite \eqref{eqn.X''} as
\begin{multline}\label{eqn.X''.theta0}
\partial_t \bm{X}'' + D\mathbf{T}(\bm{X}'(\Theta_0))\widetilde{\Lambda} \bm{X}'' = \mathcal{V}' - \bm{X}''(\Theta_0)\widetilde{\Lambda} D\mathbf{T}(\bm{X}')
\\- \widetilde{\Lambda} \left[(D\mathbf{T}(\bm{X}')-D\mathbf{T}(\bm{X}'(\Theta_0)))(\bm{X}''-\bm{X}''(\Theta_0))\right].
\end{multline}
As in the proof of Proposition \ref{prop.C1beta} again take $\bm{V}^{\Theta_0}$ to be the solution to
\begin{equation}\label{eqn.v.pde2}
\left\{ \begin{array}{l}
\partial_t \bm{V}^{\Theta_0} + D\mathbf{T}(\bm{X}'(\Theta_0)) \widetilde{\Lambda} \bm{V}^{\Theta_0} = \mathcal{V}'[\bm{X}']- \bm{X}''(\Theta_0)\widetilde{\Lambda} D\mathbf{T}(\bm{X}') ,
\\ \bm{V}^{\Theta_0}(\tau/2,\cdot) = \bm{X}''(\tau/2, \cdot).
\end{array} \right.
\end{equation}
By Proposition \ref{prop.C1beta} and Lemma \ref{lem.v.regularity} we have that $\mathcal{V}'\in C^{\beta}$ for all $\beta<1$. If $k>2$, then $\widetilde{\Lambda} D\mathbf{T}(\bm{X}')$ is $C^{\beta}$ for all $\beta<1$, and if $k=2$ then $\widetilde{\Lambda} D\mathbf{T}(\bm{X}')$ is $C^\gamma$. Thus $\bm{V}^{\Theta_0}\in C^{1,\beta}([\tau, T]\times \mathbb T)$ for all $\beta<1$ if $k>2$ and $\bm{V}^{\Theta_0}\in C^{1,\gamma}([\tau, T]\times \mathbb T)$ if $k=2$.
Taking $\bm{U}^{\Theta_0} = \bm{X}''-\bm{V}^{\Theta_0}$, subtracting \eqref{eqn.v.pde2} from \eqref{eqn.X''.theta0} gives us that $\bm{U}^{\Theta_0}$ solves
\begin{equation}\notag
\left\{\begin{array}{l} \partial_t \bm{U}^{\Theta_0} + D\mathbf{T}(\bm{X}'(\Theta_0)) \widetilde{\Lambda} \bm{U}^{\Theta_0} = \widetilde{\Lambda} \bm{F}^{\Theta_0},
\\ \bm{U}^{\Theta_0}(\tau/2,\cdot)\equiv 0, \end{array}\right.
\end{equation}
where
\begin{multline}\notag
\bm{F}^{\Theta_0}(\Theta) = (D\mathbf{T}(\bm{X}'(\Theta))-D\mathbf{T}(\bm{X}'(\Theta_0)))(\bm{X}''(\Theta)-\bm{X}''(\Theta_0))
\\ = O(|\bm{X}''(\Theta)-\bm{X}''(\Theta_0)|^2) = O(|\Theta-\Theta_0|^{2\beta}),
\end{multline}
for all $\beta<1.$ Using Lemma \ref{lem.heat.eqn.estimates} and following the same argument as in Proposition \ref{prop.C1beta}, we then get that $\bm{U}^{\Theta_0}$ is $C^{2\beta}$ at $\Theta_0$. If $k=2,$ then we get that $\bm{X}'' = \bm{U}^{\Theta_0}+\bm{V}^{\Theta_0}$ is $C^{1,\gamma}.$ And if $k> 2,$ then $\bm{X}''$ is $C^{1,\beta}$ for all $\beta<1.$ A symmetric argument works for $\partial_t \bm{X}',$ so we get that $\bm{X}'\in C^{2,\beta}$ for all $\beta<1$ if $k>2,$ and $\bm{X}'\in C^{2,\gamma}$ if $k=2$.
We now proceed by induction. Suppose that we have proven that $\bm{X}'\in C^{j,\beta} $ for all $\beta<1$ for some $j<k.$ Let $\partial^j = \partial_t^{l}\partial_\theta^{j-l}$, for some $0\leq l\leq j$, be a $j$-th order derivative. Then for any $\Theta_0\in [\tau, T]\times \mathbb T$, similar to \eqref{eqn.X''.theta0} we can write the equation for $\partial^j \bm{X}'$ as
\begin{multline}\notag
\partial_t (\partial^j\bm{X}') + D\mathbf{T}(\bm{X}'(\Theta_0)) \widetilde{\Lambda} \partial^j \bm{X}' = \partial^j \mathcal{V} - \widetilde{\Lambda}(\partial^j \mathbf{T}(\bm{X}') - D\mathbf{T}(\bm{X}')\partial^j\bm{X}')
\\ -\partial^j \bm{X}'(\Theta_0)\widetilde{\Lambda} D\mathbf{T}(\bm{X}')
-\widetilde{\Lambda} \left[(D\mathbf{T}(\bm{X}') - D\mathbf{T}(\bm{X}'(\Theta_0)))(\partial^j\bm{X}' - \partial^j\bm{X}'(\Theta_0))\right].
\end{multline}
Then by Lemma \ref{lem.v.regularity}, $\partial^j \mathcal{V}$ is $C^{\beta}$ for all $\beta<1$. Since $k>2$, $\widetilde{\Lambda} D\mathbf{T}(\bm{X}')$ is also $C^{\beta}$ for all $\beta<1$. Finally, $\widetilde{\Lambda} (\partial^j \mathbf{T}(\bm{X}') - D\mathbf{T}(\bm{X}')\partial^j\bm{X}')$ is either $C^{\beta}$ for all $\beta<1$ if $k>j+1$, or it is $C^\gamma$ if $k=j+1$.
Thus by defining $\bm{V}^{\Theta_0}$, $\bm{U}^{\Theta_0}$, and $\bm{F}^{\Theta_0}$ analogously, we can follow the same proof scheme as in Proposition \ref{prop.C1beta} and get that $\partial^j \bm{X}'$ is $C^{1,\beta} $ for all $\beta<1$ if $k>j+1$ or $\partial^j \bm{X}'$ is $C^{1,\gamma}$ if $k=j+1$. Thus by induction, we have that $\bm{X}'\in C^{k,\gamma}$ if $\mathcal{T}\in C^{k,\gamma}.$
\end{proof}
\section{Proof of the main theorem}\label{sec:mainThmProof}
In this section we will collect the previous a priori estimates to explain the proofs of our main theorems from \secref{sec:mainResults}. We will use an approximation argument starting with the existence and uniqueness theorem for general tension from \cite{rodenberg_thesis}:
\begin{theorem}\label{rodenberg.thm} \cite[Theorem 1.2.9 on page 17]{rodenberg_thesis}.
From \eqref{tension.map.def} we suppose that the tension $\mathcal{T}: [0,\infty) \to [0,\infty)$ satisfies $\mathcal{T} \in h^{1,\gamma}(0,\infty)$, for a fixed $0 < \gamma < 1$, and that both $\mathcal{T}(s)>0$ and $\mathcal{T}'(s)>0$. Consider the fully nonlinear Peskin problem \eqref{e:boundaryintegral} and \eqref{stokeslet.def} with initial data $\bm{X}_0 \in h^{1,\gamma}(\mathbb T)$ with $|\bm{X}_0|_*>0$. (a) Then there exists $T>0$ such that \eqref{e:boundaryintegral} and \eqref{stokeslet.def} has a unique solution $\bm{X}(t) \in C([0,T]; h^{1,\gamma}(\mathbb T))\cap C^1([0,T]; h^{0,\gamma}(\mathbb T))$. (b) There exists some $\varepsilon>0$ such that if $\bm{Y}_0 \in h^{1,\gamma}(\mathbb T)$ with $||\bm{X}_0 - \bm{Y}_0 ||_{h^{1,\gamma}}<\varepsilon$ then \eqref{e:boundaryintegral} and \eqref{stokeslet.def} has a unique solution $\bm{Y}(t; \bm{Y}_0)\in C([0,T]; h^{1,\gamma}(\mathbb T))\cap C^1([0,T]; h^{0,\gamma}(\mathbb T))$ corresponding to the initial data $\bm{Y}_0$ where $T>0$ is the same as in statement (a).
\end{theorem}
In Theorem \ref{rodenberg.thm} recall that the little H{\"o}lder spaces $h^{1,\gamma}$ are the completion of $C^\infty$ in the $C^{1,\gamma}$ norm and that $C^{1,\alpha} \subset h^{1,\gamma}$ whenever $\alpha > \gamma >0$. We refer to \cite{MR3935476,rodenberg_thesis} and the references therein for further discussion of the little H{\"o}lder spaces.
Now let $\bm{X}_0'\in \dot{B}^{\frac{1}{2}}_{2,1}(\mathbb T; \mathbb R^2)$ with $|\bm{X}_0|_*>0$. We choose $\rho$ such that $\bm{X}_0$ satisfies $$|\bm{X}_0|_*\geq 3\rho>0.$$ Then by Lemma \ref{lem:VallePoisson} there is some function $\mu$ satisfying the conditions of Definition \ref{subw.definition} and a constant $M>0$ such that $$||\bm{X}'_0||_{\dot{B}^{\frac{1}{2}, \mu}_{2,1}}\leq M <\infty .$$ Let the scalar tension $\mathcal{T}: [0,\infty) \to [0,\infty)$ satisfy the strong bounds \eqref{e:QuantitativeScalarTension}, \eqref{e:DTdefn} and \eqref{e:QuantitativeTensionMap}.
Next we define the following approximations
\begin{equation*}
\bm{X}_{0,n}(\alpha)
=
\sum_{|k| \le n} \widehat{\bm{X}}_0(k) e^{ik\alpha}, \quad n \ge 1.
\end{equation*}
Above, for $k\in \mathbb Z$, we define the standard Fourier transform on $\mathbb T$ as
\begin{equation}\notag
\mathcal{F}_{\mathbb T}(f)(k) = \widehat{f}(k) \eqdef \frac{1}{2\pi}\int_{\mathbb T} f(\alpha)e^{-ik\alpha} d\alpha.
\end{equation}
Then $\bm{X}_{0,n}(\alpha)$ is smooth, $|| \bm{X}_{0,n}' ||_{\dot{B}^{\frac{1}{2},\mu}_{2,1}} \leq || \bm{X}_{0}' ||_{\dot{B}^{\frac{1}{2},\mu}_{2,1}} \leq M$ for all $n$, and we have
\begin{equation}\notag
\bm{X}_{0,n}' \to \bm{X}_0' \mbox{ as } n\to\infty \mbox{ in } \dot{B}^{\frac{1}{2}}_{2,1} \cap \dot{B}^{\frac{1}{2},\mu}_{2,1}.
\end{equation}
Since $\dot{B}^{\frac12}_{2,1}$ controls the $L^\infty$ norm as in Lemma \ref{Besov.embedding},
we have
\begin{equation}\label{bounded.embed.12}
||f||_{L^\infty}\lesssim \int_{\mathbb T} \frac{||\delta_\beta f||_{L^2_\theta}}{|\beta|^{3/2}} d\beta \approx ||f||_{\dot{B}^{\frac12}_{2,1}}.
\end{equation}
Using this estimate and \eqref{chord.arc.upper} then as $n\to\infty$ we also have
\begin{equation}\notag
\left||\bm{X}_0|_* - |\bm{X}_{0,n}|_* \right|
\lesssim ||\bm{X}_0'-\bm{X}_{0,n}'||_{\dot{B}^{\frac12}_{2,1}} \to 0.
\end{equation}
We conclude in particular that $|\bm{X}_{0,n}|_{*} \geq |\bm{X}_{0}|_{*} + o(1)$. Therefore, for any small $\varepsilon>0$ there is $1 \le N_\varepsilon<\infty$ such that $|\bm{X}_{0,n}|_{*} \geq |\bm{X}_{0}|_{*} -\varepsilon>0$ for all $n\ge N_\varepsilon$. Since we will be taking the limit as $n\to\infty$, without loss of generality we can take $N_\varepsilon=1$ by throwing away the first $N_\varepsilon$ terms in the sequence and relabelling. Specifically we choose $\varepsilon = \rho$ and then we have $$|\bm{X}_{0,n}|_{*} \geq 2\rho>0$$ uniformly. We also have $\bm{X}_{0,n} \in h^{1,\gamma}(\mathbb T)$ for all $n \ge 1$ and any $0 < \gamma < 1$.
Then using the result in \cite{rodenberg_thesis}, as stated above in Theorem \ref{rodenberg.thm}, we have that there exists a unique solution
\begin{equation}\notag
\bm{X}_n(t,\theta)
\in C([0,T_{\text{max}}]; h^{1,\frac12}(\mathbb T))\cap C^1([0,T_{\text{max}}]; h^{0,\frac12}(\mathbb T))
\end{equation}
to the fully nonlinear Peskin problem \eqref{e:boundaryintegral} and \eqref{stokeslet.def} with tension $\mathcal{T}$ for some time $T_{\text{max}}>0$. Notice that over $0<t<T_{\text{max}}$ the solution to \eqref{e:boundaryintegral} and \eqref{stokeslet.def} in $C([0,T_{\text{max}}]; h^{1,\frac12}(\mathbb T))\cap C^1([0,T_{\text{max}}]; h^{0,\frac12}(\mathbb T))$ has enough regularity to be a weak solution of the equation \eqref{peskin.general.tension} with kernel \eqref{kerbel.eqn.deriv} in the sense of Definition \ref{def:solution}. If $T_{\text{max}}<\infty$ then either
\begin{equation}\notag
\liminf\limits_{t\to T_{\text{max}}} |\bm{X}_n(t)|_{*} = 0,
\end{equation}
or
\begin{equation}\notag
\limsup\limits_{t\to T_{\text{max}}} ||\bm{X}'_n||_{C^{1/2}_\theta}(t) = \infty.
\end{equation}
We will show that our estimates imply that this cannot happen over a uniform time interval that is independent of $n$.
To this end, since the tension $\mathcal{T}$ satisfies \eqref{e:QuantitativeScalarTension} and \eqref{e:QuantitativeTensionMap} then our previous a priori estimates apply. Next let $T_M^*$ be defined by
\begin{equation}\label{e:tMdefn}
T_M^* = \inf\left\{ T>0: ||\bm{X}'_n||_{\BS_T} +2c\lambda^{1/2}||\bm{X}'_n||_{\mathcal{D}_T^\subw}> 5M\right\},
\end{equation}
where $c$ and $\lambda^{1/2}$ are the constants in Proposition \ref{prop:general.apriori.final.local}.
We further define $T_\rho^*$ by
\begin{equation}\label{e:trhodefn}
T_\rho^* = \inf\left\{ t>0: |\bm{X}_n(t)|_* < \rho \right\}.
\end{equation}
We then take the time $T^*$ to be the minimum of the two
\begin{equation}\label{e:t*defn}
T^* = \min\{T_M^*, T_\rho^*\}.
\end{equation}
Since under our assumptions the norms $||\bm{X}'_n||_{\BS_T}$ and $||\bm{X}'_n||_{\mathcal{D}_T^\subw}$ are continuous in $T>0$, we have $T^*>0$. We will estimate this time $T^*$ from below in terms of $M$, $\mu$ and $\rho$. We will show that $T^*$ can be taken independent of $n$ and $T_{\text{max}}\geq T^*$.
We then estimate $\bm{X}'_n(t)$ on the time interval $[0,T^*]$. We shall first consider the case that $T_M^*\leq T_\rho^*$, and get a lower bound on $T_M^*$ using Proposition \ref{prop:general.apriori.final.local}. For $0\leq t\leq T^*$ under \eqref{e:tMdefn}, \eqref{e:trhodefn} and \eqref{e:t*defn} we have that
\begin{equation}\label{bounds.T.star}
||\bm{X}'_n(t)||_{\dot{B}^{\frac12,\mu}_{2,1}} \leq 5M,
\end{equation}
and
\begin{equation}\label{bounds.rho.T.star}
| \bm{X}_n(t)|_*\geq \rho>0.
\end{equation}
Then for $\mathcal{U}=\mathcal{U}[M,\rho,\lambda, \mathcal{C}_{2\TE}, \mathcal{C}_{1\TE}]$ as defined in \eqref{BU.def} with \eqref{BB.const.def}, \eqref{BK.const.def} and \eqref{BC.const.def}, we obtain from Proposition \ref{prop:general.apriori.final.local} for $0 < T_M$ sufficiently small that
\begin{equation}\notag
CT^{1/2}_M
\mathcal{U}^{1/2}[M,\rho,\lambda, \mathcal{C}_{2\TE}, \mathcal{C}_{1\TE}]
\leq
\frac{1}{2}.
\end{equation}
Thus we can plug this back into Proposition \ref{prop:general.apriori.final.local} to obtain
\begin{equation}\notag
\frac{1}{2}||\bm{X}'_n||_{\BS_T} + c\lambda^{1/2}||\bm{X}'_n||_{\mathcal{D}_T^\subw}
\leq
2 || \bm{X}'_0||_{\BS_T}
\leq 2M,
\end{equation}
which holds for all $T\in [0,T_M]$. Thus $T_M^* \ge T_M > 0$ uniformly in $n$.
Next, suppose that $T_M^*\geq T_\rho^*$. Let $T^*$ be as defined in \eqref{e:t*defn}; then we have \eqref{bounds.T.star} and \eqref{bounds.rho.T.star} over $0 \le t \le T^*$, and we also have \eqref{bounded.embed.12}. Thus, for a fixed $\eta>0$ to be chosen sufficiently small,
breaking up the integral on the right-hand side of \eqref{bounded.embed.12} into $|\beta|<\eta$ and $|\beta|>\eta$, we get in general that
\begin{equation}\notag
\begin{split}
\int_{\mathbb T} \frac{||\delta_\beta f||_{L^2}}{|\beta|^{3/2}} d\beta &= \int_{|\beta|<\eta} \frac{||\delta_\beta f||_{L^2}}{|\beta|^{3/2}} d\beta+\int_{|\beta|>\eta} \frac{||\delta_\beta f||_{L^2}}{|\beta|^{3/2}} d\beta
\\&\leq \frac{1}{\mu(\eta^{-1})} \int_{\mathbb T} \frac{||\delta_\beta f||_{L^2}}{|\beta|^{3/2}} \mu(|\beta|^{-1})d\beta + \frac{4}{\eta^{1/2}}||f||_{L^2}.
\end{split}
\end{equation}
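For the second term, the $\eta^{-1/2}$ factor comes from the following routine computation (spelled out here with the precise constant absorbed into $\lesssim$; we only use that $\delta_\beta f$ is a difference of two translates of $f$, so that $||\delta_\beta f||_{L^2}\leq 2||f||_{L^2}$ by the triangle inequality):
\begin{equation}\notag
\int_{|\beta|>\eta} \frac{||\delta_\beta f||_{L^2}}{|\beta|^{3/2}} d\beta
\leq 2||f||_{L^2}\int_{\eta<|\beta|\leq \pi} \frac{d\beta}{|\beta|^{3/2}}
= 8\left(\eta^{-1/2}-\pi^{-1/2}\right)||f||_{L^2}
\lesssim \frac{||f||_{L^2}}{\eta^{1/2}}.
\end{equation}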
Since $\mu(\eta^{-1})^{-1} \to 0$ as $\eta \to 0$, then under \eqref{bounds.T.star} for any $\varepsilon>0$, we can choose $\eta = \eta(\mu, M, \varepsilon)>0$ such that
\begin{equation}\notag
||\bm{X}'_n(t)-\bm{X}'_{0,n}||_{L^\infty}\leq \frac{\varepsilon}{2} +\frac{C}{\eta^{1/2}}||\bm{X}'_n(t)-\bm{X}'_{0,n}||_{L^2},
\end{equation}
for some universal constant $C>0$. Thus, it remains to control the continuity of $\bm{X}'_n(t)$ in $L^2$. Then from Corollary \ref{prop:time.estimate} over $0 \le t \le T_\rho$ uniformly in $n$ we have
\begin{equation}\notag
||\bm{X}'_n(t)-\bm{X}'_{0,n}||_{L^2} = \bigg|\bigg| \int_0^t \partial_t \bm{X}'_n(s) ds \bigg|\bigg|_{L^2} \leq \int_0^t ||\partial_t \bm{X}'_n(s)||_{L^2} ds \leq C_2 T_\rho^{1/2}.
\end{equation}
Then by taking $T_\rho$ sufficiently small, for some time $T_\rho = T_\rho(\mu, M, \rho, \varepsilon)>0$, and using \eqref{chord.arc.upper}
we can guarantee that
\begin{equation}\label{arccord.bound.ep}
\left||\bm{X}_n(t)|_* - |\bm{X}_{0,n}|_* \right|
\leq
||\bm{X}'_n(t)-\bm{X}'_{0,n}||_{L^\infty}\leq \varepsilon, \quad 0 \le t \le T_\rho.
\end{equation}
In particular, taking $\varepsilon = \rho$ we can guarantee that \eqref{bounds.rho.T.star} holds over $0 \le t \le T_\rho$ uniformly in $n$. Thus $T_\rho^* \ge T_\rho >0$ uniformly in $n$.
In particular, \eqref{bounds.rho.T.star} and \eqref{e:trhodefn} then imply that $T_{\text{max}}>T^*_\rho$. Next we consider $T^*_M>0$ defined in \eqref{e:tMdefn}. By Lemma \ref{H1.Linfinity.time.bound} we have for any small $t_0>0$ that
$||\bm{X}'_n(t_0)||_{\dot{H}^1} <\infty$ uniformly in $n$. Then by \eqref{H1.estimate2} for any $t_0<t < T^*_M$ we have $||\bm{X}'_n(t)||_{\dot{H}^1} \lesssim ||\bm{X}'_n(t_0)||_{\dot{H}^1}$. Further from Proposition \ref{Besov.equivalence.prop} and then Proposition \ref{besov.ineq.prop} we have
$$||\bm{X}'_n(t)||_{\dot{C}^{\frac12}_\theta} \approx ||\bm{X}'_n(t)||_{\dot{B}^{\frac12}_{\infty, \infty}}
\lesssim ||\bm{X}'_n(t)||_{\dot{B}^{1}_{2, \infty}}
\lesssim ||\bm{X}'_n(t)||_{\dot{H}^{1}} \lesssim ||\bm{X}'_n(t_0)||_{\dot{H}^1}.$$
Then using \eqref{bounded.embed.12} and \eqref{e:tMdefn} we have $||\bm{X}'_n(t)||_{L^\infty_\theta} \leq C ||\bm{X}'_n(t)||_{\dot{B}^{\frac12}_{2,1}}\leq 5 C M$ uniformly in $n$ over $0<t<T^*_M$. We conclude that $T_{\text{max}} \ge T^*_M$. Thus $T_{\text{max}} \ge T^*>0$ uniformly in $n$.
Thus the solutions $\bm{X}_n(t)$ are all defined uniformly in $n$ on the interval $[0,T^*]$. They also satisfy the uniform bounds
\begin{equation}\label{e:uniformbounds}
\begin{split}
\begin{array}{rl} ||\bm{X}'_n||_{\BS_T} + c\lambda^{1/2}||\bm{X}'_n||_{\mathcal{D}_T^\subw}& \leq 5CM,
\\ \inf_{0<t<T} |\bm{X}_n(t)|_*& \geq \rho,
\\ ||\bm{X}_n||_{C_{t,\theta}^{2,\beta}([\tau, T]\times \mathbb T)} &\leq C(M, \mu, \lambda, \mathcal{C}_{1\TE}, \mathcal{C}_{2\TE}, \tau, \beta), \quad \forall 0<\beta< 1,
\end{array}
\end{split}
\end{equation}
where the last bounds follow by Proposition \ref{prop.C1beta}.
After passing to a subsequence, we then have that the sequence converges strongly in $L^\infty_t \dot{B}^{3/2}_{2,1}\cap L^2_t H^2_\theta\cap C^{2,\beta}_{loc}((0,T]\times \mathbb T)$ to a limit $\bm{X}(t)$ satisfying the same bounds in \eqref{e:uniformbounds}. Thus $\bm{X}(t)$ will be a strong solution to the Peskin problem with tension $\mathcal{T}$ and initial data $\bm{X}_0$ in the sense of Definition \ref{def:StrongSolution}. Thus Theorem \ref{thm:mainquant} follows, and the higher regularity in Theorem \ref{thm:mainquant} is a consequence of Proposition \ref{prop:higherReg}. Theorem \ref{first:unique} is then a direct consequence of Corollary \ref{cor.L2.cont.m}.
Alternatively, if the tension $\mathcal{T}$ also satisfies \eqref{tension.derivatives.continuity}, then by Proposition \ref{prop:continuity}, using the equivalent weight $\nu$ in \eqref{nu.definition}, we have
\begin{equation*}
||\bm{X}'_n - \bm{X}'_m||_{\BN_T}
+
2\lambda^{\frac12} || \bm{X}'_n - \bm{X}'_m||_{\mathcal{D}_T^\nu}
\leq
8 || \bm{X}'_{0,n} - \bm{X}'_{0,m}||_{\mathcal{B}^\nu}.
\end{equation*}
From \eqref{equivalent.nu.norm} we have $|| \bm{X}'_{0,n} - \bm{X}'_{0,m}||_{\mathcal{B}^\nu} \leq 2 || \bm{X}'_{0,n} - \bm{X}'_{0,m}||_{\mathcal{B}^\MA} \to 0$ as $m,n\to\infty$.
Therefore $\{\bm{X}'_n(t)\}$ is a Cauchy sequence in $\BN_T \cap \mathcal{D}_T^\nu$ over $0<t<T=T^*$. Since $\bm{X}_{0,n}'\to \bm{X}_0'$ in $\mathcal{B}^\MA$ as $n \to \infty$ then $\bm{X}'_n(t) \to \bm{X}'(t)$ in $\BN_T \cap \mathcal{D}_T^\nu$ over $0<t<T=T^*$. Then the limit $\bm{X}: [0,T^*] \to \mathbb R^2$ is a solution to the Peskin problem \eqref{peskin.general.tension} for tension $\mathcal{T}$ with initial data $\bm{X}_0$. Now Theorem \ref{main:unique} follows from Proposition \ref{prop:continuity}.
Lastly, suppose that our scalar tension $\mathcal{T}$ only satisfies the weaker qualitative assumptions \eqref{e:QualitativeScalarTension}. We again assume that $\bm{X}_0$ satisfies $||\bm{X}_0'||_{\dot{B}^{\frac12, \mu}_{2,1}}\leq M$ and $|\bm{X}_0|_*\geq 3\rho>0$. Let $\tilde{\mathcal{T}}:[0,\infty)\to [0,\infty)$ be such that \begin{equation}\label{e:TildeTension}
\tilde{\mathcal{T}}(r) = \mathcal{T}(r), \qquad \rho \leq r\leq ||\bm{X}_0'||_{L^\infty}+\rho,
\end{equation}
and $\tilde{\mathcal{T}}$ satisfies the stronger assumptions
\eqref{e:QuantitativeScalarTension}, \eqref{e:DTdefn} and \eqref{e:QuantitativeTensionMap}. Then by the above argument, there exists a strong solution $\bm{X}:[0,T]\times \mathbb T\to \mathbb R^2$ to the Peskin problem with tension $\tilde{\mathcal{T}}$ and initial data $\bm{X}_0.$
We claim that $\bm{X}(t)$ is also a solution over $[0,T]$ to the Peskin problem with our original tension $\mathcal{T}$. To see this, notice that \eqref{arccord.bound.ep} implies that
\begin{equation}\label{e:Linfinityfinalbound}
||\bm{X}'(t)-\bm{X}'_0||_{L^\infty} \leq \rho, \qquad 0\leq t\leq T.
\end{equation}
We conclude that $\rho \leq |\bm{X}(t)|_* \leq \inf_\theta |\bm{X}'(t,\theta)|\leq ||\bm{X}'(t)||_{L^\infty_\theta} \leq ||\bm{X}_0'||_{L^\infty_\theta}+\rho$ over $0\leq t\leq T$.
Thus combining \eqref{e:TildeTension} and \eqref{e:Linfinityfinalbound} we obtain
\begin{multline}\notag
\partial_t \bm{X}(t,\theta) = \int_{\mathbb T} d\alpha \frac{\bm{X}'(\theta+\alpha) \mathcal{P}(D_\alpha \bm{X})\bm{X}'(\theta+\alpha)}{|\delta_\alpha \bm{X}|^2} \frac{\tilde{\mathcal{T}}(|\bm{X}'|)(\theta+\alpha)}{|\bm{X}'(\theta+\alpha)|} \delta_\alpha \bm{X}(\theta)
\\ = \int_{\mathbb T} d\alpha \frac{\bm{X}'(\theta+\alpha) \mathcal{P}(D_\alpha \bm{X})\bm{X}'(\theta+\alpha)}{|\delta_\alpha \bm{X}|^2} \frac{\mathcal{T}(|\bm{X}'|)(\theta+\alpha)}{|\bm{X}'(\theta+\alpha)|} \delta_\alpha \bm{X}(\theta).
\end{multline}
We conclude that $\bm{X}(t,\theta)$ is a solution to the Peskin problem \eqref{peskin.general.tension} for our original tension $\mathcal{T}$ on the time interval $[0,T]$. The gain of higher regularity in Theorem \ref{thm:main} follows from Proposition \ref{prop:higherReg}. We thus conclude that Theorem \ref{thm:main} holds.
// STGLIB/starray.h
// Copyright 1998, 1999, 2002 by StG Net
// auto-resizing array mechanism and usage tracking
// Does not include any MT features - derived or utilizing class must handle that
#ifndef STGLIB_STARRAY
#define STGLIB_STARRAY
#pragma message("using starray.h")
#include "/src/stglib/stcore.h"
// increase in powers of 2 instead of previous method
#define STARRAY_POW2
#ifndef STARRAY_MIN_ALLOC
#define STARRAY_MIN_ALLOC 32
#endif
#ifndef STARRAY_MUL_ALLOC
#define STARRAY_MUL_ALLOC 256
#endif
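When `STARRAY_POW2` is defined, `_ArrayResize` below rounds the requested element count up to the next power of two, starting from a floor of 32. A minimal standalone sketch of just that rounding step (the real resize also adds one element of slack for a zero terminator):

```cpp
#include <cstddef>

// Mirror of the STARRAY_POW2 branch of _ArrayResize:
// grow the request to the next power of two, floor of 32.
static size_t RoundPow2(size_t needed)
{
    size_t p = 32;
    while (needed > p)
        p <<= 1;
    return p;
}
```

So a request for 33 elements allocates 64, and allocations only ever double, which keeps the number of reallocations logarithmic in the final array size.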
#ifdef STARRAY_DEBUG
typedef struct
{
void *_Ptr;
void *_Inst;
} dbg_typ;
dbg_typ dbg[1024];
int dbg_cnt=0;
char *dbg_stop=0;
void DBG_ADD(void *ptr,void *inst,StSize size)
{
#ifdef STARRAY_DEBUG_TRACE
printf("STARRAY ADD +%08X inst=%08X (%d)\n",ptr,inst,size);
#endif
int loop=0;
while (loop<dbg_cnt)
{
if (!dbg[loop]._Ptr)
break;
++loop;
}
dbg[loop]._Ptr=ptr;
dbg[loop]._Inst=inst;
if (dbg_cnt<=loop)
dbg_cnt=loop+1;
}
void DBG_DEL(void *ptr,void *inst)
{
#ifdef STARRAY_DEBUG_TRACE
printf("STARRAY DEL -%08X inst %08X\n",ptr,inst);
#endif
int loop=0;
while (loop<dbg_cnt)
{
if (dbg[loop]._Ptr==ptr)
{
if (dbg[loop]._Inst!=inst)
{
printf("STARRAY DBG @%08X allocated by %08X illegal del attempt by %08X\n",
ptr,dbg[loop]._Inst,inst);
*dbg_stop=0;
}
// correct delete - remove it
dbg[loop]._Ptr=0;
dbg[loop]._Inst=0;
return;
}
++loop;
}
printf("STARRAY DBG del attempt %08X by %08X not found\n",
ptr,inst);
*dbg_stop=0;
}
void DBG_CHK(void *ptr,void *inst)
{
if (!ptr)
return;
int loop=0;
while (loop<dbg_cnt)
{
if (dbg[loop]._Ptr==ptr)
{
if (dbg[loop]._Inst!=inst)
{
printf("STARRAY DBG @%08X allocated by %08X trying to use by %08X\n",
ptr,dbg[loop]._Inst,inst);
*dbg_stop=0;
}
// correct ptr - return
return;
}
++loop;
}
printf("STARRAY DBG check for %08X by %08X not found\n",
ptr,inst);
*dbg_stop=0;
}
#else
#define DBG_ADD(p,i,s)
#define DBG_DEL(p,i)
#define DBG_CHK(p,i)
#endif
// global memory allocated
// may roll over, but still valid for checking leaks
StSize _StArray_Bytes;
template <class T> class StArray
{
// list of other classes that are
// allowed to play with us directly
friend class StBuffer;
friend class StPipe;
friend class StFifoBuffer;
protected:
T *_ArrayPtr; // ptr to actual storage
StSize _ArraySize; // elements allocated
StSize _ArrayUsed; // elements used (ever accessed)
public:
// insure internals are preset to zero
StArray()
{
STGLIB_CON("StArray");
_ArrayPtr=0;
_ArraySize=0;
_ArrayUsed=0;
}
// don't leak allocation
virtual ~StArray()
{
STGLIB_DES("StArray");
if (_ArrayPtr)
{
DBG_DEL(_ArrayPtr,this);
//printf("Deleting %lX -> %lX\n",&_ArrayPtr,_ArrayPtr);
delete[] _ArrayPtr;
_ArrayPtr=0; // do this to make sure we don't try to re-use it later
_StArray_Bytes-=_ArraySize*sizeof(T);
}
}
// don't allow direct copy of class!!!
StArray<T>& operator=(const StArray<T>& copyfrom)
{
if (this==&copyfrom)
return(*this);
DBG_CHK(_ArrayPtr,this);
_ArrayResize(copyfrom._ArrayUsed,1);
memcpy(_ArrayPtr,copyfrom._ArrayPtr,copyfrom._ArrayUsed*sizeof(T));
_ArrayUsed=copyfrom._ArrayUsed;
return(*this);
}
// NOTE!!! All derived classes MUST include a forward operator= function
/* StArray(StArray<T>& copyfrom)
{
_ArrayResize(copyfrom._ArrayUsed,1);
memcpy(_ArrayPtr,copyfrom._ArrayPtr,copyfrom._ArrayUsed*sizeof(T));
// return(*this);
}
*/
// return size of array _IN USE_
inline StSize operator~(void)
{
// return used elements (base class returns size of allocation)
return(_ArrayUsed);
}
// %%% MUL_ALLOC should instead be scaled with needed
// resize array to include at least X elements
// this will deallocate array if X=0
// if DontShrink is non-zero, save time by not downsizing
void _ArrayResize(StSize Needed,int DontShrink=0)
{
DBG_CHK(_ArrayPtr,this);
if (!Needed)
{
// when zero array size asked for,
// always dump all memory
if (_ArrayPtr)
{
DBG_DEL(_ArrayPtr,this);
delete[] _ArrayPtr;
_StArray_Bytes-=_ArraySize*sizeof(T);
}
_ArrayPtr=0;
_ArraySize=0;
return;
}
if (DontShrink)
{
if (Needed<=_ArraySize)
return;
}
// HACK: increase needed by one so that there is always a zero terminator in list
Needed++;
#ifdef STARRAY_POW2
{
// increase Needed to the next power of two
StSize p=32;
while (Needed>p)
p<<=1;
Needed=p;
}
#else
if (Needed<STARRAY_MIN_ALLOC)
Needed=STARRAY_MIN_ALLOC;
else
{
StSize r=Needed%STARRAY_MUL_ALLOC;
// if (r)
// this makes sure last entry always empty
Needed+=STARRAY_MUL_ALLOC-r; // bump up to multiple
}
#endif
//printf("Changing array from %d to %d\n",_ArraySize,Needed);
T *NewPtr=new T[Needed];
memset(NewPtr,0,sizeof(T)*Needed);
if (_ArrayPtr)
{
DBG_DEL(_ArrayPtr,this);
memcpy(NewPtr,_ArrayPtr,sizeof(T)*(_ArraySize<Needed?_ArraySize:Needed));
//printf("Deleting %lX -> %lX\n",&_ArrayPtr,_ArrayPtr);
delete[] _ArrayPtr;
}
_ArrayPtr=NewPtr;
DBG_ADD(_ArrayPtr,this,Needed);
//printf("Setting %lX -> %lX\n",&_ArrayPtr,_ArrayPtr);
_StArray_Bytes-=_ArraySize*sizeof(T);
_ArraySize=Needed;
_StArray_Bytes+=_ArraySize*sizeof(T);
// don't forget to downsize ArrayUsed too
if (_ArrayUsed>_ArraySize)
_ArrayUsed=_ArraySize;
}
// resize array and mark as used (useful for mapped structures)
void _ArraySetUsed(StSize Needed)
{
_ArrayResize(Needed,1);
// if (_ArrayUsed<Needed)
_ArrayUsed=Needed;
}
// quickly zero out entire array but don't delete allocation
inline void _ArrayEmpty(StSize size=0)
{
if (!size) size=_ArraySize;
DBG_CHK(_ArrayPtr,this);
memset(_ArrayPtr,0,sizeof(T)*size);
_ArrayUsed=0;
}
inline void operator!(void)
{
_ArrayEmpty();
}
// return ref to element wanted
inline T & operator[](StSize element)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed<=element)
_ArrayUsed=element+1;
if (_ArraySize<=element)
_ArrayResize(element+1);
return(_ArrayPtr[element]);
}
void _Reset(void)
{
_ArrayEmpty();
}
/* THIS IS COMMENTED OUT TO AVOID CONFUSION
// WITH STBASE STANDARD METHOD OF _Write()
void _Write(const T *data,StSize count=1)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed+count>_ArraySize)
_ArrayResize(_ArrayUsed+count,1);
memcpy(_ArrayPtr+_ArrayUsed,data,sizeof(T)*count);
_ArrayUsed+=count;
}
void _Write(const T data)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed+1>_ArraySize)
_ArrayResize(_ArrayUsed+1,1);
// memcpy(_ArrayPtr+_ArrayUsed,&data,sizeof(T));
_ArrayPtr[_ArrayUsed]=data;
_ArrayUsed+=1;
}
*/
inline void _ArrayAdd(const T *data,StSize count=1)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed+count>=_ArraySize)
_ArrayResize(_ArrayUsed+count+1,1);
if (!count)
return;
if (count==1)
{
_ArrayPtr[_ArrayUsed]=*data;
}
else
{
memcpy(_ArrayPtr+_ArrayUsed,data,sizeof(T)*count);
}
_ArrayUsed+=count;
}
/* this is no longer needed - assuming one extra byte always now
// special zero termination version for strings
void _ArrayAddZT(T *data,StSize count)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed+count+1>_ArraySize)
_ArrayResize(_ArrayUsed+count+1,1);
memcpy(_ArrayPtr+_ArrayUsed,data,sizeof(T)*count);
_ArrayUsed+=count;
memset(_ArrayPtr+_ArrayUsed,0,sizeof(T)*(_ArraySize-_ArrayUsed));
}
*/
void _ArrayAdd(T data)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed+1>_ArraySize)
_ArrayResize(_ArrayUsed+1,1);
// memcpy(_ArrayPtr+_ArrayUsed,&data,sizeof(T));
// _ArrayUsed+=1;
_ArrayPtr[_ArrayUsed++]=data;
}
StArray& operator<< (T data)
{
_ArrayAdd(data);
return(*this);
}
// insert elements
void _ArrayInsert(StSize element,StSize shift)
{
DBG_CHK(_ArrayPtr,this);
if (_ArraySize<=_ArrayUsed+shift)
_ArrayResize(_ArrayUsed+shift);
// if (_ArrayUsed<=element+shift)
// _ArrayUsed=element+shift;
StSize move=_ArrayUsed-element;
T *InsPtr=_ArrayPtr+element;
memmove(InsPtr+shift,InsPtr,sizeof(T)*move);
memset(InsPtr,0,sizeof(T)*shift);
_ArrayUsed+=shift;
}
void _ArrayInsert(StSize element,const T* data,StSize shift)
{
DBG_CHK(_ArrayPtr,this);
if (_ArraySize<=_ArrayUsed+shift)
_ArrayResize(_ArrayUsed+shift);
// if (_ArrayUsed<=element+shift)
// _ArrayUsed=element+shift;
StSize move=_ArrayUsed-element;
T *InsPtr=_ArrayPtr+element;
memmove(InsPtr+shift,InsPtr,sizeof(T)*move);
// memset(InsPtr,0,sizeof(T)*shift);
memcpy(InsPtr,data,sizeof(T)*shift);
_ArrayUsed+=shift;
}
void _ArrayDelete(StSize element,StSize shift)
{
DBG_CHK(_ArrayPtr,this);
if (_ArrayUsed<=element)
_ArrayUsed=element+1;
if (_ArraySize<=_ArrayUsed)
_ArrayResize(_ArrayUsed);
StSize move=_ArrayUsed-element-shift;
T *DelPtr=_ArrayPtr+element;
memcpy(DelPtr,DelPtr+shift,sizeof(T)*move);
memset(DelPtr+move,0,sizeof(T)*shift);
_ArrayUsed-=shift;
}
};
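The class above grows transparently when indexed past the end and tracks the highest element ever touched via `operator~`. A stripped-down, self-contained stand-in for `StArray<int>` (a sketch only — the real class also does debug tracking, keeps zero-terminator slack, and updates `_StArray_Bytes`, and it depends on `stcore.h` so it cannot be compiled in isolation here):

```cpp
#include <cstring>
#include <cstddef>

// Stand-in for StArray<int>, illustrating auto-resize-on-index
// and the "elements used" semantics of operator~.
class MiniArray
{
    int   *ptr;    // actual storage
    size_t size;   // elements allocated
    size_t used;   // highest element ever accessed, plus one
public:
    MiniArray() : ptr(0), size(0), used(0) {}
    ~MiniArray() { delete[] ptr; }
    size_t operator~() const { return used; }
    int &operator[](size_t element)
    {
        if (used <= element)
            used = element + 1;
        if (size <= element)
        {
            size_t p = 32;                 // power-of-two growth
            while (element + 1 > p)
                p <<= 1;
            int *np = new int[p];
            memset(np, 0, sizeof(int) * p);
            if (ptr)
            {
                memcpy(np, ptr, sizeof(int) * size);
                delete[] ptr;
            }
            ptr = np;
            size = p;
        }
        return ptr[element];
    }
};
```

Indexing past the end grows the storage and zero-fills it, so `a[40] = 7;` on a fresh array leaves `~a == 41` while `a[5]` still reads as zero.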
// look like an array of objects,
// but hide the actual array of pointers to the objects
// and all the new/delete'ing
template <class T> class StArrayObj
{
StArray<T*> _Array;
public:
StArrayObj()
{
STGLIB_CON("StArrayObj");
}
virtual ~StArrayObj()
{
STGLIB_DES("StArrayObj");
StSize index=0;
while (index<~_Array)
{
T* ptr=_Array[index];
if (ptr)
delete ptr;
++index;
}
}
// don't allow direct (default) copy of class!!!
// NOTE: this copies the stored pointers themselves, so both
// instances end up referring to (and deleting) the same objects
StArrayObj<T>& operator=(StArrayObj<T>& copyfrom)
{
if (this==&copyfrom)
return(*this);
_Array=copyfrom._Array;
return(*this);
}
inline T & operator[](StSize index)
{
T* ptr=_Array[index];
if (!ptr)
_Array[index]=ptr=new T;
return(*ptr);
}
// return size of array _IN USE_
inline StSize operator~(void)
{
// return used elements (base class returns size of allocation)
return(~_Array);
}
void _Randomize(void)
{
// randomize the array
StSize loop=0;
while (loop<~_Array)
{
StSize r=rnd(~_Array-loop);
T* temp=_Array[loop];
_Array[loop]=_Array[r];
_Array[r]=temp;
++loop;
}
}
};
template <class T> class StStack
{
StArray<T> _Array;
public:
StStack()
{
STGLIB_CON("StStack");
}
virtual ~StStack()
{
STGLIB_DES("StStack");
}
inline T & operator[](StSize index)
{
return(_Array[index]);
}
inline T& operator++()
{
_Array._ArrayInsert(0,1);
return(_Array[0]);
}
inline T operator--(int)
{
T copy;
copy=_Array[0];
_Array._ArrayDelete(0,1);
return(copy);
}
inline StSize operator~(void)
{
return(~_Array);
}
};
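`StStack` above pushes by inserting a zeroed slot at index 0 (`operator++`, which returns a reference to the new top) and pops from index 0 (`operator--`, which returns a copy of the old top), so it is LIFO at the front of the underlying array. The same calling pattern, sketched with `std::deque` (a hypothetical standalone equivalent, not the stglib type):

```cpp
#include <deque>
#include <cstddef>

// push: like ++stack, returns a reference to the new top slot;
// pop:  like stack--, returns a copy of the old top;
// depth: like operator~, the number of elements held.
struct MiniStack
{
    std::deque<int> d;
    int &push() { d.push_front(0); return d.front(); }
    int  pop()  { int v = d.front(); d.pop_front(); return v; }
    size_t depth() const { return d.size(); }
};
```

Pushing writes through the returned reference (`s.push() = 1;`), matching how `++stack = value;` is meant to be used with the class above.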
#endif
package Entities;
import org.json.JSONObject;
public class Coding {
private static final String SYSTEM = "system";
private static final String CODE = "code";
private static final String DISPLAY = "display";
private JSONObject entity;
public Coding(String system, String code, String display) {
entity = new JSONObject();
entity.put(SYSTEM, system);
entity.put(CODE, code);
entity.put(DISPLAY, display);
}
public JSONObject getJSONObject() {
return entity;
}
}
Mass mailing techniques are also used by spammers to produce spam.
With mass mailing techniques, hundreds or even thousands of emails are sent to recipients in a matter of minutes.
"redpajama_set_name": "RedPajamaC4"
} | 5,505 |
\section{Introduction}\label{sec_intro}
In \cite{Ru81}, Rudin gave an explicit smooth embedding of the Klein bottle into $\CC$ so that the image is {\em totally real}, i.e., no tangent space to the image in $\CC$ contains a nontrivial complex subspace. Later, Givental \cite{Gi86} constructed totally real embeddings into $\CC$ of all nonorientable surfaces that are connected sums of $n$ Klein bottles, where $n$ is odd and at least three. In fact, Givental's embeddings are {\em Lagrangian}, i.e., the pull-backs of the standard K{\"a}hler form via the embeddings vanish. The question of whether or not the Klein bottle admits {\em any} Lagrangian embedding into $\CC$ was settled much later (in the negative) by Shevchishin (see \cite{Sh09}). This places certain constraints on the convexity properties of totally real Klein bottles in $\CC$, as discussed below.
Given a compact set $X\subset\Cn$, its {\em rational hull} is defined as
\bes
\hR X=\{z\in\Cn: P(z)\in P(X)\ \text{for all polynomials}\ P:\Cn\rightarrow\C \}.
\ees
The set $X$ is said to be {\em rationally convex} if its rational hull is trivial, i.e., $\hR X=X$. One can alternatively identify $\hR X$ with the maximal ideal space of the {\em rational algebra of $X$}, which is defined as
\bes
\R(X)=\left\{f\in\cont(X):
\!\begin{array}{c}
f\ \text{is the uniform limit on $X$ of}\\
\text{rational functions with poles off $X$}
\end{array}\!\right\}.
\ees
We note that if $\cont(X)=\R(X)$, then $X$ is rationally convex. If, further, $X\subset\Cn$ is a smooth totally real submanifold, then the converse is also true (see \cite[Section 6.3]{St07}). In \cite{DuSi95}, Duval and Sibony show that for any $n$-dimensional totally real submanifold of $\Cn$, being rationally convex is equivalent to being Lagrangian with respect to some K{\"a}hler form on $\Cn$. So, although there are rationally convex topological Klein bottles in $\CC$ (see \cite{ShSu15}), by Shevchishin's result, every totally real Klein bottle in $\CC$ must have a nontrivial rational hull. It is natural to ask whether there is a constraint on the dimension of this hull. In this paper, we show that the rational hull of Rudin's Klein bottle is $2$-dimensional. More precisely, the hull consists of the Klein bottle and an attached analytic annulus (see Section~\ref{sec_klein1}). We also characterize the rational algebra of Rudin's Klein bottle.
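As a standard illustration (not drawn from the sources above), the unit circle $T=\{z\in\C:|z|=1\}$ separates rational from polynomial convexity. Since $\hR T$ is contained in the polynomial hull of $T$, which is the closed unit disk, only points $z_0$ with $|z_0|<1$ need to be excluded; for such $z_0$ the polynomial $P(z)=z-z_0$ gives
\bes
P(z_0)=0,\qquad\text{while}\qquad |P(z)|=|z-z_0|\geq 1-|z_0|>0\quad\text{for}\ z\in T,
\ees
so $P(z_0)\notin P(T)$ and $z_0\notin\hR T$. Hence $T$ is rationally convex even though it is far from polynomially convex.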
A similar question regarding the polynomial hull of Rudin's Klein bottle is addressed in \cite{An12}. The {\em polynomial hull} of a compact set $X\subset\Cn$ is
\bes
\{z\in\Cn:|P(z)|\leq \sup_X|P|\ \text{for all polynomials}\ P:\Cn\rightarrow\C\}.
\ees
It is known \cite{Al72} that the polynomial hull of any compact $n$-dimensional manifold in $\Cn$ must be of dimension at least $n+1$. In \cite{An12}, the first author shows that the polynomial hull of Rudin's Klein bottle contains an open set in $\CC$. On the other hand, he produces a totally real Klein bottle in $\CC$ whose polynomial hull is of dimension $3$. In spite of admitting a smaller polynomial hull, the rational hull of his Klein bottle is qualitatively the same as that of Rudin's Klein bottle since they are equivalent under a rational automorphism. (See Section~\ref{sec_klein1} for details.)
To compute the rational hull of Rudin's Klein bottle, we will establish a general result in the fashion of the following well-known criterion for fibered sets in $\Cn$ (see \cite[Theorem 1.2.16]{St07}): {\em If $X\subset\Cn$ is compact and if $f\in\Po(X)$ satisfies $\Po(f(X))=\cont(f(X))$, then $X$ is polynomially convex if and only if each fiber $f^{-1}(t)$, $t\in f(X)$, is polynomially
convex. If $X$ is polynomially convex, then $\Po(X)=\cont(X)$ if and only if for each $t\in f(X)$, $\Po(f^{-1}(t))=\cont(f^{-1}(t))$.} Here $\Po(X)$ denotes the algebra of functions on $X$ that are uniformly approximable on $X$ by polynomials.
The analogous statement for rational convexity is not true in general; the surfaces considered in this paper are counterexamples. The key is to impose a stronger condition than that of rational convexity of fibers --- namely, the rational convexity of fibers with respect to a common analytic curve. Given an analytic curve $V\subset\CC$, a compact set $X\subset\CC\setminus V$, and an entire function $F$ so that $V=\{F=0\}$, we denote by $\R_V(X)$ the uniform closure in $\cont(X)$ of the set
\be\label{eq_alg}
\left\{\left.\frac{G}{F^m}\right|X: G\ \text{is entire}, m\in\mathbb N\right\}.
\ee
Note that $\R_V(X)\subset\R(X)$. We now state the general result.
\begin{theorem}\label{thm_main} Let $X\subset\CC$ be a compact set and $f:\CC\rightarrow\C$ be a rational function with no poles on $X$. Suppose $\Gamma=f(X)$ satisfies $\R(\Gamma)=\cont(\Gamma)$. For $t\in\Gamma$, let $X_t=f^{-1}(t)\cap X$. Let $V$ be an analytic curve in $\CC$ that avoids $X$.
Then,
\bes
\hR X\subseteq X\cup\bigcup_{t\in\Gamma}\Om_t
\ees
where $\Om_t$ denotes the union of all the bounded components of $f^{-1}(t)\setminus X_t$ that avoid $V$. Furthermore,
\bes
\R(X)\supseteq\{\psi\in\cont(X):\psi|{X_t}\in \R_V(X_t)\ \text{for all}\ t\in\Gamma\}.
\ees
In particular, if $\R_V(X_t)=\cont(X_t)$ for all $t\in \Gamma$, then $\R(X)=\cont(X)$ (and $X$ is rationally convex).
\end{theorem}
We must clarify that if $f:\CC\rightarrow\C$ is a rational function of the form $p/q$, where $p$ and $q$ are relatively prime polynomials, then by $f^{-1}(t)$, $t\in\C$, we mean $\{z\in\CC:p(z)=tq(z)\}$.
The above result can also be applied to other examples present in the literature. A few of these have been collected in Section~\ref{sec_other}. In each of these cases, the rational hull comes from an attached annulus. Note that since the above proof only gives a partial description of the rational hull, we need the following well-known principle to complete our computations: {\em Suppose $X\subset \Cn$ is a compact set and $\Sigma\subset\Cn$ is a bordered Riemann surface whose smooth boundary $\bdy\Sigma$ is in $X$, and bounds an oriented surface $S\subset X$ in the following sense: for any smooth $1$-form $\alpha$ defined in a neighborhood of $X\cup\Sigma$, we have
\bes
\int_{S}d\alpha=\int_{\bdy\Sigma}\alpha.
\ees
Then, $\Sigma\subset\hR X$.} In the case of Rudin's Klein bottle, a modified version of this principle applies, where we must allow $S$ to have multiplicity at $\bdy\Sigma$. We provide an alternate argument in Section \ref{sec_klein1}.
Lastly, our computations show that Rudin's Klein bottle is an example of a surface $S$ with the property that $\hR S\setminus S$ is a smooth $2$-manifold. Examples of $2$-dimensional tori with this property were earlier constructed by Duval and Sibony in \cite{DuSi98} (see Section~\ref{sec_other}). These examples all address a question raised by Alexander Izzo (private communication) of whether such surfaces exist in $\CC$.
\noindent{\bf Acknowledgements.} We would like to extend our gratitude to the anonymous referee whose comments led to the general result (Theorem~\ref{thm_main}) in this article, to Harold Boas whose careful reading of the manuscript vastly improved its presentation, and to Alexander Izzo who played a key role in facilitating this collaboration.
\section{Proof of Theorem~\ref{thm_main}}\label{sec_proof}
We will use the observation that if $Y\subset\Cn$ is compact, then $y\notin\hR Y$ if and only if there is an entire function $g$ on $\Cn$ such that $g(y)\notin g(Y)$.
\begin{proof} Suppose $f=p/q$ for relatively prime polynomials $p$ and $q$ on $\CC$. Let $(z_0,w_0)\in\hR X\setminus X$. For a compact set $Y$ and a rational
function $g$ with no poles on $Y$, $g(y)\in g(Y)$ for each point $y\in\hR Y$. Thus, $t_0=f(z_0,w_0)\in\Gamma$.
Now, choose a $g\in\cont(\Gamma)$ such that $g(t_0)=1$ and $|g|<1$ on $\Gamma\setminus\{t_0\}$. Then, since $\R(\Gamma)=\cont(\Gamma)$, $(g^n\circ f)h\in\R(X)$ for all $n\in\mathbb N$ and all $h\in\R(X)$. If $\mu$ is a positive representing measure for the point $(z_0,w_0)$ with respect to the algebra $\R(X)$, then, since $(g^n\circ f)(z_0,w_0)=g(t_0)^n=1$,
\bes
|h(z_0,w_0)|=\left|\int_Xh(z,w)(g^n\circ f)(z,w)d\mu(z,w)\right|.
\ees
Letting $n\rightarrow\infty$ yields
\be\label{eq_pol}
|h(z_0,w_0)|=\left|\int_{X_{t_0}}h(z,w)d\mu(z,w)\right|\leq \sup_{X_{t_0}}|h|.
\ee
In particular, \eqref{eq_pol} holds for all polynomials $h:\CC\rightarrow\C$. Thus, $(z_0,w_0)$ is in the polynomial hull of $X_{t_0}$. Since $f^{-1}(t_0)$ is a variety in $\CC$, $(z_0,w_0)$ must be contained in a bounded component $D$ of $f^{-1}(t_0)\setminus X_{t_0}$.
Now, suppose $V$ meets $D$, where we write $V=\{F=0\}$ for some entire function $F$. Let $G:\CC\setminus V\rightarrow \C^3_{z,w,\zeta}$ be the map
\bes
(z,w)\mapsto \left(z,w,\frac{1}{F(z,w)}\right).
\ees
Then, $G(D\setminus V)$ is an unbounded component of $V'\setminus G(X_{t_0})$, where
\bes
V'=\{(z,w,\zeta)\in\C^3:p(z,w)=t_0q(z,w),\zeta F(z,w)=1\}.
\ees
Since $V'$ is an analytic curve, $G(z_0,w_0)\in G(D\setminus V)$ is not in the polynomial hull of $G(X_{t_0})$. So, there is a polynomial $P:\C^3\rightarrow\C$ with $|P\circ G(z_0,w_0)|>\sup_{X_{t_0}}|P\circ G|$. Since $P\circ G\in \R(X)$, this contradicts \eqref{eq_pol}. Thus, $D\subseteq\Om_{t_0}$.
We now prove the second part of the theorem. We abuse notation to denote $f^{-1}(A)\cap X$ simply by $f^{-1}(A)$, for any set $A\subset\Gamma$. Let $\mu$ be an extreme point of the closed unit ball of $\R(X)^\perp$, the space of finite regular Borel measures orthogonal to $\R(X)$. Then, since $\R(\Gamma)=\cont(\Gamma)$, $\mu$ is supported on a single fiber $X_t$. If not, then there is a decomposition $\Gamma=E\cup E'$ into disjoint measurable subsets $E$ and $E'$ of $\Gamma$ with the property that $\mu$ has positive mass on both $f^{-1}(E)=\cup_{t\in E}X_t$ and $f^{-1}(E')=\cup_{t\in E'}X_t$. If $K$ is a compact subset of $E$, and $g$ is a continuous function on $\Gamma$ with $g=1$ on $K$ and $|g|<1$ off $K$, then for each $h\in\R(X)$, we have
\bes
\int_X(g^n\circ f)(x)h(x)d\mu(x)=0.
\ees
As $n\rightarrow\infty$, the integral on the left-hand side converges to $\int_{f^{-1}(K)}h(x)d\mu(x)$. Thus, $\mu|f^{-1}(K)$ is orthogonal to $\R(X)$. This is true for every choice of $K$, so the measure $\mu_E=\mu|f^{-1}(E)\in\R(X)^\perp$. By a similar argument, $\mu_{E'}=\mu|f^{-1}(E')\in\R(X)^\perp$. Since $\mu=\mu_E+\mu_{E'}$, we get a contradiction to the fact that $\mu$ is an extreme point of the closed unit ball of $\R(X)^\perp$. Thus, $\mu$ must be supported on some fiber $X_t$.
Now, let $\psi\in\cont(X)$ be such that $\psi|{X_t}\in \R_V(X_t)$ for all $t\in\Gamma$. Since $\psi|{X_t}\in \R_V(X_t)$, there is a sequence $\{g_n\}_{n\in\mathbb N}$ of functions of the form $G/F^m$, $G$ entire and $m\in\mathbb N$ (see \eqref{eq_alg}), so that $\{g_n|{X_t}\}_{n\in\mathbb N}$ converges to $\psi|{X_t}$. Since $\R(\Gamma)=\cont(\Gamma)$, $X_t$ is a peak set for $\R(X)$ and $\R(X)|X_t$ --- the algebra of restrictions to $X_t$ of functions in $\R(X)$ --- is closed. Since each $g_n\in\R(X)$, $\psi|{X_t}\in\R(X)|X_t$. So, there is an $\wt \psi\in \R(X)$ that restricts to $\psi$ on $X_t$. Thus, with $\mu$ an extreme point of the closed unit ball of $\R(X)^\perp$, we have that
\bes
\int_X\psi\:d\mu=\int_{X_t}\psi\:d\mu=\int_{X_t}(\psi-\wt \psi+\wt \psi)\:d\mu
=\int_X\wt \psi\:d\mu=0.
\ees
Thus, $\psi\in\R(X)$. If we further have that $\R_V(X_t)=\cont(X_t)$ for all $t\in \Gamma$, then
\bes
\cont(X)
\supseteq\R(X)\supseteq\{\psi\in\cont(X):\psi|{X_t}\in \cont(X_t)\ \text{for all}\ t\in\Gamma\}
=\cont(X).
\ees
Hence, the claim.
\end{proof}
\section{Rudin's Klein Bottle}\label{sec_klein1}
In this section, we apply Theorem~\ref{thm_main} to compute the rational hull of
\be\label{eq_gh}
K=\left\{(e^{2 i\theta}g^2(\phi),
e^{i\theta}g(\phi)h(\phi))\in\CC
:-\pi\leq \theta,\phi<\pi \right\},
\ee
where $g(\phi)=a+b\cos\phi$ and $h(\phi)=\sin\phi+i\sin 2\phi$, for some fixed real numbers $0<b<a$.
\begin{theorem}\label{thm_klein1}
The rational hull of $K$ is $K\cup A$, where
\bes
A=\{(z,0)\in\CC: (a-b)^2\leq |z|\leq (a+b)^2\}.
\ees
Furthermore, if $A^\circ$ denotes the interior of the annulus $A$ relative to the plane $w=0$, then
\bes
\R(K)=\{\psi\in\cont(K): \psi\ \text{extends holomorphically to}\ A^\circ\}.
\ees
\end{theorem}
\begin{proof}
We first show that $A\subset \hR K$. Let $\D$ denote the open unit disc in $\C$. Consider the continuous family of maps $F_\phi:\overline\D\rightarrow\CC$ given by $F_\phi:\zeta\mapsto\big(\zeta^2g^2(\phi),
\zeta g(\phi)h(\phi)\big)$,
where $\phi\in[-\pi,0]$. Then, $\{F_\phi(\cdot)\}_{[-\pi,0]}$ is a continuous family of holomorphic discs in $\CC$ whose boundaries are attached to $K$. Note that $\zeta\mapsto F_\phi(\zeta)$ is a two-to-one map on $\overline\D\setminus\{0\}$, when $\phi$ is either $-\pi$ or $0$. Moreover,
\beas
F_{-\pi}(\overline\D)&=&\{(z,0)\in\CC:|z|\leq (a-b)^2\}\\
F_0(\overline\D)&=&\{(z,0)\in\CC:|z|\leq (a+b)^2\}.
\eeas
Thus, for any holomorphic function $f$ defined on $\CC$,
\be\label{eq_count}
\#\Z(f\circ F_0)-\#\Z(f\circ F_{-\pi})=2\: \#\Z(f|A),
\ee
for $f$ nonvanishing on $\bdy A$, where $\#\Z(g)$ denotes the number of zeros of $g$ (counting multiplicities). In particular, suppose $P$ is a polynomial that does not vanish anywhere on $K$. Then, each $f_\phi=P\circ F_\phi$ is holomorphic on $\D$, continuous up to the boundary, and nonvanishing on $\bdy\D$. So,
\bes
G:\phi\mapsto\#\Z(f_\phi) =\frac{1}{2\pi i}\int_{\bdy\D}
\frac{f_\phi '(\zeta)}{f_\phi(\zeta)}d\zeta
\ees
is a continuous integer-valued function on $[-\pi,0]$ --- therefore, it must be a constant. From \eqref{eq_count}, we have that
\bes
\#\Z(P|A)=\frac{\#\Z(f_0)-\#\Z(f_{-\pi})}{2}
=\frac{G(0)-G(-\pi)}{2}= 0.
\ees
Thus, $P$ cannot vanish on $A$. Since $P$ was an arbitrarily chosen polynomial that doesn't vanish on $K$, $A\subset\hR K$.
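To justify \eqref{eq_count}: since $h(0)=h(-\pi)=0$, both $F_0$ and $F_{-\pi}$ map into the plane $w=0$ and factor through the double cover $\zeta\mapsto\zeta^2$, so each zero of $f$ in the corresponding flat disc is picked up twice:
\bes
\#\Z(f\circ F_0)=2\:\#\Z\left(f|\{(z,0):|z|\leq (a+b)^2\}\right),\qquad
\#\Z(f\circ F_{-\pi})=2\:\#\Z\left(f|\{(z,0):|z|\leq (a-b)^2\}\right).
\ees
Subtracting, and using that $f$ has no zeros on $\bdy A$, the zeros in the smaller disc cancel and the difference is $2\:\#\Z(f|A)$.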
For the rest of the proof, we apply Theorem~\ref{thm_main} with
\bes
f(z,w)=\dfrac{w^2}{z},\ V=\{(z,w)\in\CC:z=0\},\ \text{and}\ F(z,w)=z.
\ees
In this case, the set
\bes
\Gamma=f(K)=\{(\sin\phi+i\sin 2\phi)^2:-\pi\leq \phi<0\}
\ees
is a simple closed curve in $\C$, and satisfies $\R(\Gamma)=\cont(\Gamma)$. For $t\in\Gamma$, we let $K_t=f^{-1}(t)\cap K$. Then,
\bea\label{eq_fiber}
K_t
=\begin{cases}
C_{-\pi}\cup C_0, &\text{when}\ t=0, \\
C_{\phi},\ \text{for some}\
\phi\in(0,\pi),\ &\text{when}\ t\neq 0,
\end{cases}
\eea
where
$C_\phi=\{(e^{2i\theta}g^2(\phi), e^{i\theta}g(\phi)h(\phi)):-\pi\leq \theta<\pi\}$
is a circle on $K$. Indeed, if $t=h^2(\phi)=0$, then $\phi$ is either $0$ or $-\pi$. On the other hand, if $t\neq 0$, then $h^2(\phi)=t$ yields exactly two solutions in the interval $(-\pi,\pi)$, since
$h(-\phi)=-h(\phi)$. So, $K_t=C_\phi\cup C_{-\phi}$ for some $\phi\in(0,\pi)$. However,
\beas
C_{\phi}
&=&\{(e^{2i\theta}g^2(\phi), e^{i\theta}g(\phi)h(\phi)):\theta\in\rl\}\\
&=&\{(e^{2i\eta}g^2(-\phi), e^{i\eta}g(-\phi)h(-\phi)):\eta\in\rl\}
= C_{-\phi}.
\eeas
Now, if $t\neq 0$, then the only bounded component of $f^{-1}(t)\setminus K_t$ is a holomorphic disc intersecting $V$. For $t=0$, the set $f^{-1}(0)\setminus K_0$ consists of two bounded components --- the flat holomorphic disc $\{(z,0):|z|<(a-b)^2\}$, which intersects $V$, and the annulus $A$, which avoids $V$. Thus, by Theorem~\ref{thm_main}, $\hR K\subset K\cup A$.
For the characterization of $\R(K)$, first observe that since it can be identified with $\R(\hR K)$, the inclusion $\R(K)\subseteq \{\psi\in\cont(K): \psi|\bdy A\ \text{extends holomorphically to}\ A^\circ\}$ follows. To get the other inclusion, we need to compute $\R_V(K_t)$ for all $t\in\Gamma$. We claim that
\bes
\R_V(K_t)=
\begin{cases}
\hol(A), &\text{when}\ t=0, \\
\cont(K_t),\ &\text{when}\ t\neq 0,
\end{cases}
\ees
where $\hol(A)=\{\psi\in\cont(\bdy A):\psi\ \text{extends holomorphically to}\ A^\circ\}$. Indeed, for a fixed $\phi$, we have that $z^j|{C_\phi}\in\R_V(C_\phi)$ for all positive and negative integers $j$, and $w^\ell|{C_\phi}\in\R_V(C_\phi)$ for all positive integers $\ell$. It follows that $\R_V(C_\phi)$ contains $e^{ik\theta}$ for all integers $k$. So, if $t\neq 0$, then $\R_V(K_t)=\R(K_t)=\cont(K_t)$. On the other hand, if $t=0$, $\R_V(K_t)$ contains (the restrictions to $K_0$ of) all the Laurent polynomials in $z$.
Thus, $\R_V(K_0)=\hol(A)$. This completes the characterization of $\R(K)$.
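Spelled out, for $t\neq 0$ both $g(\phi)$ and $h(\phi)$ are nonzero, and on $C_\phi$,
\bes
z^j|{C_\phi}=g^{2j}(\phi)e^{2ij\theta},\qquad
(wz^j)|{C_\phi}=g^{2j+1}(\phi)h(\phi)e^{i(2j+1)\theta},\qquad j\in\mathbb Z,
\ees
so $\R_V(C_\phi)$ contains $e^{ik\theta}$ for every even and every odd integer $k$.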
\end{proof}
We now invoke \cite{Ro62} (and the references cited therein) to obtain the following criterion as a corollary. It is immediate from the corollary that the (finite, regular, Borel) measures orthogonal to $\R(K)$ are those measures on $K$ that are concentrated on $\bdy A$ and that are orthogonal to the algebra $\R(A)$.
\begin{cor}\label{cor_meas} The function $\psi\in\cont(K)$ lies in $\R(K)$ if and only if
$$\int_{\bdy A}\psi(z,0)g(z,0)\, dz=0$$
for all functions $g$ holomorphic on a neighborhood in the $z$-plane of the annulus $A$.
\end{cor}
In \cite{An12}, the first author modifies Rudin's Klein bottle to obtain the following totally real Klein bottle in $\CC$ whose polynomial hull is shown to be of dimension three.
Let
\bes
K^*=\left\{\left(e^{2 i\theta}g^2(\phi),
e^{-i\theta}\frac{h(\phi)}{g(\phi)}\right)
:-\pi\leq \theta,\phi<\pi \right\},
\ees
where $g$ and $h$ are as in \eqref{eq_gh}. We claim that the rational hull of $K^*$ is $K^*\cup A$, where $A$ is the annulus defined in Theorem~\ref{thm_klein1}. Indeed, consider the automorphism
\bes
J:(z,w)\mapsto\left(z,\frac{w}{z}\right)
\ees
on the domain $\T=\{(z,w)\in\CC:z\neq 0\}$. The inverse of $J$ is given by $J^{-1}(z,w)=(z,zw)$. It is easy to see that the automorphism $J$ carries $K$ onto $K^*$, $J^{-1}(K^*)=K$, and it acts as the identity map on the annulus $A$. Furthermore, $J$ effects an isomorphism between $\R(K^*)$ and $\R(K)$: given a rational function $f$ on $\CC$, $f$ is holomorphic on a neighborhood of $K^*$ if and only if $f\circ J$ is holomorphic on a neighborhood of $K$. It follows that $J$ restricts to a bijection between $\hR K$ and $\hR {K^*}$. The same reasoning gives the analogues of Theorem~\ref{thm_klein1} and Corollary~\ref{cor_meas} for $\R(K^*)$.
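As a check that $J(K)=K^*$, apply $J$ to a parametrized point of $K$ from \eqref{eq_gh}:
\bes
J\left(e^{2i\theta}g^2(\phi), e^{i\theta}g(\phi)h(\phi)\right)
=\left(e^{2i\theta}g^2(\phi), \frac{e^{i\theta}g(\phi)h(\phi)}{e^{2i\theta}g^2(\phi)}\right)
=\left(e^{2i\theta}g^2(\phi), e^{-i\theta}\frac{h(\phi)}{g(\phi)}\right),
\ees
which is exactly the parametrization of $K^*$; here $g(\phi)=a+b\cos\phi\geq a-b>0$, so $K\subset\T$ and $J$ is defined on $K$.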
\section{Other examples}\label{sec_other}
We apply the general technique of this paper to compute the rational hulls of some other surfaces in $\CC$ that have appeared in the literature. The tori considered below are particular instances of a general scheme provided in \cite{DuSi98} to construct a totally real torus in $\CC$ whose rational hull consists precisely of the torus and $n$ attached annuli, where $n$ is any prescribed integer.
\subsection*{Totally real discs} In \cite{HoWe69}, and later in \cite{DuSi95}, totally real discs of the following form are considered: $D=\{(z,w)\in\CC:|z|\leq 1,w=\bar zf(|z|^2)\}$, with $f:[0,1]\rightarrow\C$ a $\cont^1$-smooth function such that $t\mapsto tf(t)$ is an immersion with a double point at some $t_0\in[0,1]$. Let $\alpha=t_0f(t_0)\in\C$. The curve $C=\{zw=\alpha\}$ intersects $D$ in two circles that bound a surface in $D$, and an annulus $A$ in $C$. So, $A\subset\hR D$. Setting $f(z,w)=zw$ and $V$ as any analytic curve that avoids $D$, we can apply Theorem~\ref{thm_main} to obtain that $\hR D=D\cup A$.
\subsection*{Conjugate Hopf Tori} In \cite{DuGa14}, Duval and Gayet provide a dichotomy for any generic totally real unknotted torus embedded in $S^3$ --- either it is rationally convex and fillable by holomorphic discs, or its rational hull contains a holomorphic annulus. As an example of the latter, they consider a family of tori that come naturally from the conjugate Hopf fibration $\Theta:S^3\rightarrow S^2\subset\C\times\rl$ given by $(z,w)\mapsto (2zw, |z|^2-|w|^2)$. Let $\pi$ denote the projection of $\C\times\rl$ onto $\C$. For any embedded closed curve $\gamma$ in $S^2$ such that $\pi(\gamma)\subset\overline\D$ is immersed, the set
\bes
T^\gamma=\Theta^{-1}(\gamma)
\ees
is a totally real torus in $S^3$. In the Duval-Gayet examples, $\gamma$ is such that $\pi(\gamma)\subset \overline\D$ is a figure eight avoiding the origin.
Fix such a $\gamma$ and let $a$ be its point of self-intersection in $\D$. Let $f(z,w)=2zw$, $V=\{z=0\}$ and $F(z,w)=z$. The fibers $f^{-1}(t)\cap{T^\gamma}$ are polynomially convex circles when $t\neq a$, and the fiber $f^{-1}(a)\cap T^\gamma$ bounds the annulus
\bes
A
=\left\{(z,w)\in \CC
:2zw=a;
-\sqrt{1-|a|^2}\leq |z|^2-|w|^2\leq \sqrt{1-|a|^2} \right\}.
\ees
Applying Theorem~\ref{thm_main}, and observing that $\bdy A$ bounds a cylinder in $T^\gamma$, we have that
\bes
\hR {T^\gamma}=T^\gamma\cup A
\ees
and
\bes
\R(T^\gamma)
=\{\psi\in\cont(T^\gamma): \psi\ \text{extends holomorphically to}\ A^\circ\}
\ees
where $A^\circ$ denotes the interior of the annulus $A$ relative to the variety $\{2zw=a\}$.
\subsection*{Spin Tori} Let $\gamma:[0,1]\rightarrow\C\times\rl$ be a smooth embedded curve, so that if $\gamma(\theta)=(z(e^{i\theta}),r(e^{i\theta}))$, then $\Gamma=z(e^{i\theta})$ is an immersed curve in $\C$ with one point of self-intersection, say $z(e^{i\theta_1})=z(e^{i\theta_2})=z_0$. Consider $T_\gamma:=\{(z(e^{i\theta}),r(e^{i\theta})e^{i\phi}):0\leq \theta,\phi\leq 2\pi\}$. Then, $T_\gamma$ is a totally real torus in $\CC$ and
\bes
\hR {T_\gamma}=T_\gamma\cup A
\ees
and
\bes
\R(T_\gamma)
=\{\psi\in\cont(T_\gamma): \psi\ \text{extends holomorphically to}\ A^\circ\}
\ees
where $A=\{(z_0,w):\min(r(e^{i\theta_1}),r(e^{i\theta_2}))\leq |w|\leq \max(r(e^{i\theta_1}),r(e^{i\theta_2}))\}$.
\section*{Acknowledgments}
We thank A. Bramon for discussions.
G.D. and G.I. acknowledge the hospitality of the
Institute of Nuclear Theory (INT) at the University of Washington,
where part of this work has been done.
Lindel's Leap - A landing spot for thoughts and ideas as a venture capital investor.
Reflecting on five months
Yesterday marked five months since I got the phone call from my mother. We lost my brother on Sunday, September 22nd, 2019. 40 years and 5 days after he came into this world.
We buried Jonny's ashes on October 5th with great love and support from a community of friends and family. It was really wonderful to see so many attend his service and to honor the relationships that Jon built over his life. We saw friends of his from grade school through college. And friends that he made later in life, including a great group of his Masonic brothers from Colbert that came to provide services and honor him. I'm grateful for all that participated and I'm happy that we laid him to rest in that way, on that day.
Weddings and funerals are such a blur of emotion and relationships. I loved seeing so many of my mother's friends at Jon's services. Even some relationships that carry back to my father. Those are friends that hold old memories and bring big smiles from our youth. It's striking that we had so many of our extended family together, some that probably hadn't been in the same room in far too many years. I also loved seeing so many of my friends, spanning from grade school to business school. It's comforting to feel that support from this community and yet painful that we don't have more time to share. I suppose it takes a wedding or a funeral but I certainly wish that we all had more cause to be surrounded by our loved ones more frequently. And to spend time in a deeper way that allows us to share those memories and those feelings that brought us all together for Jon. Thank you all for being a part of our larger family and supporting us as we began to process and grieve for Jonny.
Five months later and it's only begun to sink in and feel very real. I think of him often even though he wasn't a fixture in my daily life for the last many years. We lived apart and were on different life trajectories but had the history of youth together. His passing has prompted me to talk more about Jonny with my own family and to tell funny stories (there are plenty of them) from when we were young. It's nice to be able to smile at those memories.
I certainly missed him this week when our mom pulled a very humorous stunt. We both would have worried about her together while also laughing at her misadventure. It was the first time that I've missed him in a happy way, able to acknowledge that he's gone but still very much with us as we go through life. I hope that his extended community feels the same way.
Businesses > Starting a Business
Useful Help and Advice to Anyone Thinking of Starting a New Business in Tameside
Thinking of starting a business?
Where can I find business opportunities within the borough?
Are there any courses that I may find useful?
What taxes will I have to pay?
Where can I get property advice?
Do I need a licence?
What standards do I need to abide by?
Will I need to pay business rates?
Where can I get rid of trade waste?
How do I recruit staff?
The Council is committed to providing a wide range of services to local businesses and the community at large. The Council's Economic Development Unit plays a key role in encouraging investment in the Borough. The Unit is also working to increase the quality and variety of work skills offered to Tameside residents, helping them to make the most of new employment and training opportunities as they are created.
Live, Work, Invest is a group of local businesses, large and small, working with the Council and the Greater Manchester Chamber of Commerce, to make access to business support services easier and more approachable.
If you are thinking of starting a Business or Social Enterprise, Live, Work, Invest is here to help you succeed. We provide free business support for new and existing businesses and Social Enterprises in Tameside.
Free Business awareness advice
Free Help in creating a business plan
Free Marketing advice and PR ideas
Free Access to finding assistance
Free Premises and location searches
Free Ongoing support and mentoring
Free Networking groups: share ideas
Free Local regeneration updates
Free Employment and recruitment advice
Free Business seminars and workshops
Free Mini website on http://www.liveworkinvest.com/
If you are considering starting a social enterprise, community business or Co-operative, advice is available from the Council's Social Enterprise Project Team.
Open 4 Business (Funding Search Tool)
Grant Finder for Tameside means businesses can now register for free access to a user-friendly funding search tool that helps them to identify external funding opportunities for themselves. Open 4 Business enables businesses to search for EU, Government, LEP and local authority-provided grants, loans, venture capital funds and tax credits.
User-friendly search tool – suitable for all levels of IT experience.
Weekly e-newsletters – details on all the latest funding announcements, approaching deadlines and breaking news.
Fully maintained – all funding information is kept fully up to date and accurate.
Open 4 Business is a one stop shop of all the funding and support on offer to help local companies succeed and grow – vital in a time of challenging conditions for enterprise.
Each year, hundreds of millions of pounds in grants, loans, awards and other funds are on offer to help businesses improve productivity, buy equipment, employ and train staff, carry out innovative R&D, diversify into new markets and much, much more.
Open 4 Business lets local entrepreneurs search for free through all these funds available to them, from major European Union and Government programmes, to regional development schemes, local authority funds and prestigious business award competitions.
To search for funding and browse what funding is available or to view local support and local success stories please visit the Tameside Grant Finder page of Open 4 Business.
Tameside Libraries Information Service
Looking for somewhere to research the business you are thinking of starting, find tips on writing your business plan or access a PC with an office package? Then why not visit the Tameside Libraries Information Service at Central Library, Old Street, Ashton-under-Lyne, Tameside, OL6 7SG.
More information about the Tameside Libraries Business Information Service.
There are around 600 businesses in the UK that are franchised, but what is franchising and would it be suitable for you?
It's a way of setting up in business for yourself but not on your own. With a franchise you run the business, but using methods that have been already tried and tested by another company, called the franchisor.
It is a way of being your own boss without many of the risk factors. However, you are ultimately answerable to the franchisor and this approach may not be for some people.
There are a large number of websites that deal with selling franchises; a small number are listed below:
www.franchiseinfo.co.uk/
Jobcentre Plus can give you general advice about how your benefits will be affected if you start to do any work for yourself, and the in work support available.
Wherever you are going for financial help writing a Business Plan is essential but it is not as difficult as it may sound - it is simply a way of putting your ideas on paper.
More information on writing a Business Plan.
The Tameside Business Family runs business support workshops and seminars.
You may also find this page useful: www.tameside.gov.uk/training
Tameside College of Technology runs courses that cover many of the skills you may need when starting a new business. For more details visit the website at www.tameside.ac.uk
Other organisations that offer business start-up courses:
Prince's Trust Business Programme - available for people aged 18-30, who are either unemployed or in part-time employment.
HM Revenue and Customs have put information on their website especially for people starting up their own business. Visit the website at www.gov.uk/topic/business-tax/self-employed
For details of privately owned business properties (including industrial, office retail, land and development opportunities) currently available in the Tameside area, please contact the Economic Development Unit on 0161 342 2865/2167 or Send Us a Message or search the details directly at:
Tameside Business Property Search
The Council has a selection of properties and land for sale in addition to shops and industrial units to let.
Planning holds a balance between the need for new development and the need to protect the environment and tries to ensure all development is environmentally sustainable. Construction methods also need to comply with the Building Regulations for which there is a separate set of approvals.
There are also separate approvals governing the display of outdoor advertisements, developments affecting Listed Buildings and the demolition of unlisted buildings in Conservation areas.
You do not usually need planning permission just to work from home but sometimes you do. However there may be other implications concerning your mortgage or tenancy requirements, and concerning National Insurance or taxation. The best option is to seek advice initially from the Tameside Business Family Support Team
Generally, the key test is whether the residence has changed because of the business. If the answer to any of the following questions is "yes", then you will probably need planning permission:
Is your house no longer chiefly a private residence?
Will your business result in a marked rise in traffic, people calling or working around the house or in out buildings?
Will your business involve obvious activities not usual in a residential street?
Will your business disturb your neighbours because of noise or smells?
Basically you need to ask yourself whether the house is still mainly a home or has become business premises. This applies whatever the business, including using a room as an office, hairdressing, repairing cars, storing goods, using part of the house as a bedsit, running a "bed and breakfast", providing childminding or music teaching.
Many businesses require a licence eg. hairdressers, pet shops and places of public entertainment. More information can be obtained from the Licensing Section.
Trading Standards and Consumer Services Section is a strategic Local Authority service contributing to the well-being of the local and business community through its enforcement activities and advisory services. More information about the work of trading standards, including access to fact sheets that may be relevant to your business, and advice on weights and measures.
The Trading Standards division will give talks to any business, consumer group, school or any other organisation on the work carried out by the Trading Standards service in the following areas:
Product and toy safety
Your rights when shopping
Credit, borrowing and lending
General work of the Trading Standards division
Other subject areas that are not listed can be arranged on request. To request a talk from trading standards fill in the online trading standards request form
The value of all property in respect of which rates are payable to the Council is shown in the local ratings list. This list is available from the Valuation Office Agency.
More information on business rates.
The Trade Waste Service is available to all commercial and industrial premises in the Borough.
The Council also carries out commercial pest control work for local businesses.
The Council is committed to attracting new jobs to the area and works in partnership with other agencies to ensure residents benefit from employment opportunities created within the Borough.
More information about training and employment.
Your local job centre will advise you on recruiting staff and will advertise positions in the local centres free of charge. For more information, advice and to find your local job centre, look at the GOV.UK recruitment and hiring page.
You can also contact the local employment agencies to help recruit permanent or temporary staff although there will be a charge for this.
You can also advertise in the local press.
Tame Street Depot
Tame Street
SK15 1ST
Business in Tameside
Trading Standards Talk Request Form
The 1946 season of the Icelandic Football Championship was a season of the Icelandic first division. For the first time, the club IA Akranes took part in the competition.
Fram Reykjavik won the championship this season, another league title for the club.
The 6 participating clubs
KR Reykjavik
Fram Reykjavik
Valur Reykjavik
Vikingur Reykjavik
ÍBA Akureyri
IA Akranes
Competition
Table
The points system used to establish the table was as follows:
Win: 2 points
Draw: 1 point
Loss: 0 points
Matches
All the matches were played at the Melavöllur stadium in Reykjavik.
Season summary
See also
Internal links
Icelandic Football Championship
External links
RSSSF
Page on the website of the Icelandic football association
#include <Windows.h> //Required for OpenGL on Windows
#include <gl/GL.h> //OpenGL
#include <gl/GLU.h> //OpenGL Utilities
#include "GL/freeglut.h" //freeglut library
#include "BasicMath.h"
#include "Camera.h"
#include "CameraManager.h"
#include "Director.h"
#include "Draw.h"
#include "Input.h"
#include "MaterialLoader.h"
#include "MeshLoader.h"
#include "Scene.h"
#include "Texture2D.h"
#include <vector>
namespace Basic3D
{
#define SCENE_OBJECT_CHILD_COUNT 50
class BASIC3D_API Vector2
{
public:
GLfloat x;
GLfloat y;
Vector2() {}
Vector2(GLfloat x, GLfloat y) : x(x), y(y) {}
static GLfloat Angle(Vector2 camPos, Vector2 boardPos);
};
class BASIC3D_API Vector3
{
public:
GLfloat x;
GLfloat y;
GLfloat z;
Vector3() {}
Vector3(GLfloat value) : x(value), y(value), z(value) {}
Vector3(GLfloat x, GLfloat y, GLfloat z) : x(x), y(y), z(z) {}
// Right-hand operators.
Vector3 & operator-= (const Vector3 & rVec);
Vector3 & operator+= (const Vector3 & rVec);
Vector3 & operator*= (const Vector3 & rVec);
Vector3 & operator/= (const Vector3 & rVec);
Vector3 & operator= (const Vector3 & rVal);
Vector3 & operator-= (const GLfloat & rVal);
Vector3 & operator+= (const GLfloat & rVal);
Vector3 & operator*= (const GLfloat & rVal);
Vector3 & operator/= (const GLfloat & rVal);
Vector3 & operator= (const GLfloat & rVal);
const Vector3 operator- (const Vector3 &subtract) const;
const Vector3 operator+ (const Vector3 &add) const;
const Vector3 operator* (const Vector3 &multiply) const;
const Vector3 operator/ (const Vector3 &divide) const;
const Vector3 operator- (const GLfloat &subtract) const;
const Vector3 operator+ (const GLfloat &add) const;
const Vector3 operator* (const GLfloat &multiply) const;
const Vector3 operator/ (const GLfloat &divide) const;
GLfloat Angle3D(const Vector3 &vector);
Vector3 Normal();
GLfloat Length();
GLfloat DotProduct(const Vector3 &vector);
};
class BASIC3D_API Vector4
{
public:
float x, y, z, w;
Vector4() {}
Vector4(float x, float y, float z, float w) : x(x), y(y), z(z), w(w) {}
};
struct BASIC3D_API Colour
{
GLfloat r, g, b, a;
Colour() {}
Colour(float r, float g, float b, float a) : r(r), g(g), b(b), a(a) {}
};
class BASIC3D_API Perspective
{
public:
double fieldOfView, aspect, nearPlane, farPlane;
Perspective() {}
Perspective(double fieldOfView, double aspect, double nearPlane, double farPlane) :
fieldOfView(fieldOfView), aspect(aspect), nearPlane(nearPlane), farPlane(farPlane) {}
};
class BASIC3D_API Lighting
{
public:
GLenum lightNumber;
Vector4 Ambient, Diffuse, Specular, Position;
Lighting() {}
Lighting(GLenum lightNumber, Vector4 ambient, Vector4 diffuse, Vector4 specular, Vector4 position) :
lightNumber(lightNumber), Ambient(ambient), Diffuse(diffuse), Specular(specular), Position(position) {}
};
class BASIC3D_API Material
{
public:
Vector4 Ambient, Diffuse, Specular;
GLfloat Shininess, Alpha;
Material() {}
Material(Vector4 ambient, Vector4 diffuse, Vector4 specular, GLfloat Shininess, GLfloat Alpha) :
Ambient(ambient), Diffuse(diffuse), Specular(specular), Shininess(Shininess), Alpha(Alpha) {}
};
class BASIC3D_API BoundingBox
{
public:
Vector3 position;
Vector3 minCoords, maxCoords;
BoundingBox(GLuint meshID);
BoundingBox(GLfloat size);
BoundingBox(GLfloat width, GLfloat height, GLfloat depth);
Vector3 Left();
Vector3 Right();
Vector3 Bottom();
Vector3 Top();
Vector3 Front();
Vector3 Back();
bool Contains(Vector3 point); // AABB containment test.
bool Intersects(BoundingBox* box);
};
class BASIC3D_API Model
{
public:
GLuint meshID;
Texture2D * tex;
Material material;
Model(GLuint meshID, Texture2D * tex, Material material);
};
class BASIC3D_API Transform
{
public:
Vector3 position;
Vector3 scale;
GLfloat heading, pitch, roll;
Transform();
Transform(Vector3 position, Vector3 scale, GLfloat heading, GLfloat pitch, GLfloat roll);
virtual void Update(); // Perform transformations here.
};
class BASIC3D_API Billboard : public Transform
{
public:
Vector3 position;
Vector3 scale;
GLfloat heading, pitch, roll;
Billboard();
Billboard(Vector3 position, Vector3 scale, GLfloat heading, GLfloat pitch, GLfloat roll);
void Update();
};
class BASIC3D_API SceneObject
{
public:
SceneObject* parent;
SceneObject* children[SCENE_OBJECT_CHILD_COUNT]; // Statically allocated array of child pointers.
Model* model;
Transform* transform;
BoundingBox* box;
SceneObject(Model* model, Transform* transform);
SceneObject(Model* model, Transform* transform, BoundingBox* box, bool billboard);
};
struct BASIC3D_API Vertex
{
GLfloat x, y, z;
};
struct BASIC3D_API TexCoord
{
GLfloat u, v;
};
struct Mesh
{
std::vector<std::string> meshNames;
std::vector<std::vector<GLushort>> indices;
std::vector<Vertex> vertices;
std::vector<Vector3> normals;
std::vector<TexCoord> texCoords;
};
struct BASIC3D_API ReturnableMesh
{
std::string* meshNames;
GLushort* indices;
Vertex* vertices;
Vector3* normals;
TexCoord* texCoords;
};
}
\section{Introduction}
Understanding and measuring the risk of financial positions is nowadays an important task, as many financial institutions have experienced serious financial problems in recent years. The recent crisis highlights the importance of an accurate risk measurement system for financial institutions. A good risk measurement system is of great value to financial institutions in particular and to the economy in general.
While there are various techniques for quantifying market, credit and operational risk, generally developed by financial institutions themselves or imposed by financial regulators, there is one more component of risk which, before the financial crisis began, had received less attention than it deserves.
In fact, the recent crisis has been largely attributed to a different component of risk, namely liquidity risk. For instance, the inability of financial institutions to acquire funding or cash at low cost was one of the main causes of the crisis. This is the reason why regulatory attention to liquidity risk has increased in the years after the crisis.
This, in turn, has increased the interest of financial academics in incorporating liquidity risk into risk measures. However, a review of the state of the art of the financial literature dealing with liquidity and risk measures shows that only a few papers have been written on the topic. These include, for example, Bangia \textit{et al.} (2008), Acerbi and Scandolo (2008) and Weber \textit{et al.} (2013).
Bangia \textit{et al.} (2008) propose a liquidity-adjusted VaR measure built on bid-ask spreads. Acerbi and Scandolo (2008) measure liquidity risk by defining a coherent standard risk measure on the liquidity-adjusted value of the portfolio. The value of the portfolio depends on the so-called liquidity policy. An example of a liquidity policy is the minimum requirement on cash to be held in a portfolio composed of assets (including cash) over a fixed investment horizon. Finally, Weber \textit{et al.} (2013) extend the approach of Acerbi and Scandolo (2008) by constructing a cash-invariant liquidity-adjusted risk measure.
In this paper we present a new framework for risk measures under liquidity risk, which we call illiquidity risk measures. The measurement framework is mainly concerned with the risk of financial institutions' positions in financial securities, especially positions in which the financial institutions are long. Short selling will also be discussed in some particular cases. By securities we mean tradable assets. We use market-liquidity risk as our definition of liquidity risk, that is, the risk that a financial institution cannot easily offset a position without causing a significant movement in the price.
Financial institutions are supposed to hold, at time $0$, a given amount of a security $i$ or a portfolio composed of $n$ securities. The positions in securities and portfolios can be long or short. In the portfolio case, this means that we are considering portfolios composed of $n$ long or short positions.
The illiquidity risk is captured by the future values of the offsetting price of the security $i$, or the offsetting prices of each single security composing the portfolio, at time $T$. That is, the prices a financial institution gets when liquidating securities in which it is long at time $0$. The illiquidity risk measures are then real-valued functions of a real variable, or of $n$ real variables, equal to a convex risk measure evaluated at the future offsetting prices of the securities. The prices depend on the traded volume of the securities, and are increasing and concave in it. This way of modelling the prices is discussed for the stock case by several authors in the theoretical financial literature, see e.g. \c{C}etin (2004) and Allaj (2014), and in the empirical financial literature by Hausman \textit{et al.} (1992) and Keim and Madhavan (1996).
The illiquidity risk measures are viewed as a capital requirement, the capital required to make the one-unit positions held by financial institutions acceptable. The objective is thus to compute, for each position in a given security or portfolio, the capital requirement needed to make that position acceptable. The illiquidity risk measures defined on long positions are increasing monotonic, cash sub-additive, and convex. The first property captures the fact that financial institutions with larger long positions are more risky. The second captures the greater sensitivity of the risk measures to a one-unit increase in the amount of the security or securities held by financial institutions with respect to a cash-additive standard risk measure. Lastly, the third one encourages financial institutions to break up large trades into smaller trades.
We provide a dual representation of the illiquidity risk measures defined on the space of the positive real numbers, or on the set of positive $n$-tuples of real numbers in the portfolio case, by using a technique recently developed by El Karoui and Ravanelli (2008). That is, we introduce a new function which is increasing, translationally invariant, and convex, and from this we obtain the desired Fenchel-Moreau dual representation. In particular situations, we also define and derive a dual representation for illiquidity risk measures defined on short positions. The risk measures in this case satisfy the opposite properties of the illiquidity risk measures defined on the long side. The dual representation is independent of the (probability) space where the offsetting prices live. Several examples of risk measures are presented, including the classical VaR measure.
Taking the cue from the properties of illiquidity risk measures, we expand the previous framework to include the possibility that single financial institutions operate in the market by splitting their large trades into smaller ones. In the presence of trade splitting, the offsetting prices are no longer required to be concave. We find that the illiquidity risk measures on long positions are decreasing monotonic, cash super-additive and convex, reflecting the fact that the risk is reduced due to the beneficial effects arising from splitting the trades. Opposite results hold, in special cases, for illiquidity risk measures defined on short positions. Overall, the capital requirement is smaller than in the case without trade splitting, and dual representation results can be obtained as before.
The paper is organized as follows. Section (\ref{sec1}) introduces the one-period risk measurement model. Section (\ref{ill}) describes and derives the dual representation of illiquidity risk measures defined on a single security. Various examples are also given. In Section (\ref{sec4}), illiquidity risk measures under trade splitting are discussed. Section (\ref{secmult}) presents the multivariate illiquidity risk measures, providing the respective dual representation and an example of a multivariate illiquidity risk measure. The case of trade splitting in the presence of more than one security is treated in Section (\ref{secnew}). Finally, Section (\ref{conc}) concludes.
\section{Model}\label{sec1}
We assume a one-period risk measurement model with two dates, $0$ and $T$. At time zero, a given financial institution, such as a bank or an insurance company, has at its disposal a security or a portfolio consisting of different securities, where by a security we mean a tradable asset. We suppose that the price of each security depends on the traded size of the given security.
Let $(\Omega,\mathcal{F})$ be a measurable space. The initial monetary value of the financial institution's position in the security $i$ will be denoted by $-yX^{i}_{0}(y)$, and the final monetary value at time $T$ by $yX^{i}_{T}(w, -y)$, where $y$ denotes the initial holdings of the financial institution in the security $i$, and $X^{i}_{0}(y)$, $X^{i}_{T}(w, -y)$ the price of the security $i$ at time $0$ and $T$, respectively. We assume the securities in the market are indexed by $i\in \mathbb{N}$.
Here we suppose that positive values of $y$ indicate a long position, and negative values a short position, in the security $i$. Using the above convention, we can give an explicit meaning to the quantities $-yX^{i}_{0}(y)$ and $yX^{i}_{T}(w, -y)$. At the beginning of the period, the financial institution starts with a position having a monetary value equal to $-yX^{i}_{0}(y)$, depending on whether the financial institution is long or short on the security $i$. The position in the security $i$ provides a monetary value of $yX^{i}_{T}(w, -y)$ at time $T$. That is, the quantity $yX^{i}_{T}(w, -y)$ gives the random amount the financial institution receives for the sale of the $y$ units of security $i$ held at time $0$ when $y>0$, and the random amount it pays for the purchase of $y$ units of security $i$ when $y<0$. Put differently, $yX^{i}_{T}(w, -y)$ represents the cash coming from making an opposite transaction at time $T$.
In this work, our main interest is in those securities or portfolios which are held or owned by the given financial institution. Keeping this in mind, a risk measure applied to one unit of a given security $i$, or to a portfolio composed of one unit of each of $n$ securities, is viewed as a capital requirement which, if added to the initial position, makes it acceptable from the point of view of a regulatory agency. For some particular cases we will also discuss risk measures defined on those securities or portfolios in which financial institutions have short positions. A risk measure defined on these short positions can then be seen as a capital guarantee which ensures the securities will be returned to the financial counterparty.
The \textit{cash flow} coming from a position $y>0$ in the security $i$ is given by
\begin{equation}\label{eq1}
y[X^{i}_{T}(w, -y)-X^{i}_{0}(y)]
\end{equation}
When Equation (\ref{eq1}) assumes a positive value, it means that the financial institution is making money from buying and selling security $i$, while when it is negative it is losing money. One can then easily notice that Equation (\ref{eq1}) measures the degree of liquidity of a financial institution, at a given time $T$, in the security $i$. It says how much money, net of the initial investment, a financial institution can raise by liquidating security $i$.
\begin{assumption}\label{ass1}
The price of the security $i$ is an increasing function of the quantity $y$ such that $y_1\geq y_2$ implies $X^{i}_{T}(w,y_1)\geq X^{i}_{T}(w,y_2)$ for each $y_1, y_2 \in \mathbb{R}\setminus\{0\}$. The price $X_{T}^{i}(w,0)=\tilde{X}_{T}^{i}(w)$ is known in the literature as the marginal unaffected price for an infinitesimal order size at time $T$ (see, for example, \c{C}etin (2004)) or as the price corresponding to an informationally efficient market with zero trading costs (see Allaj (2014)). It is supposed that $X^{i}_{T}(w,-y)\leq \tilde{X}_{T}^{i}(w) \leq X^{i}_{T}(w,y)$ for each $y\geq0$.
\end{assumption}
Therefore, the risk in our model is related to the variability of the random variables $X^{i}_{T}(w, -y)$.
In measuring the risk, we are just assuming that the risk of the financial institution in the security $i$ is captured by the future value of the security's $i$ price, that is, the random price the financial institution gets when selling and buying the security $i$ at time $T$. This way of thinking was pioneered in a classic paper of Artzner \textit{et al.} (1999).
We have thus the following definition.
\begin{definition}
The future values of a position $y\in\mathbb{R}_{>0}$ in a given security $i$ are described by a mapping $Z^{i}_{T,y}:\Omega \rightarrow \mathbb{R}$, where $Z^{i}_{T,y}=X^{i}_{T}(w, -y)$ for all $y\in\mathbb{R}_{>0}$.\footnote{From now on $Z^{i}_{T,y}(w)=Z^{i}_{y}(w)$.}
\end{definition}
We note that in the case the price $X$ is not a function of the order size $y$, the cash flow of the security $i$ takes the same form for all $y\in\mathbb{R}_{>0}$, i.e. $y[X^{i}_{T}(w)-X^{i}_{0}]$, so that the per-unit risk of the security $i$ is the same for all $y\in\mathbb{R}_{>0}$. On the other side, both the cash flow and the risk of the security $i$ change when $X$ depends on $y$, assuming different values for different $y$.
\begin{assumption}\label{ass2}
It is assumed that $Z^{i}_y$ is concave in $y$ on $\mathbb{R}\setminus\{0\}$, that is, $Z^{i}_{\lambda y+(1-\lambda)v}\geq \lambda Z^{i}_y+(1-\lambda)Z^{i}_v$ for all $y, v \in \mathbb{R}\setminus\{0\}$ and $0 \leq \lambda \leq 1$.
\end{assumption}
The concavity of the price impact function is observed by different authors in the empirical financial literature dealing with stock markets. Almost all of these studies, see e.g. Hausman \textit{et al.} (1992) and Keim and Madhavan (1996), conclude that the market impact is a concave function of the traded volume. The price $X^{i}_{T}(w,y)$ is usually expressed as $X^{i}_{T}(w,y)=\tilde{X}^{i}_{T}(w) + h^{i}(y)$, where $\tilde{X}^{i}_{T}(w)$ is the unaffected price and $h^i$ an increasing function which is concave for $y>0$ and convex for $y<0$. In its simplest form, $h^i$ is just a linear function of $y$, that is $h^{i}(y)=ay$ with $a>0$.
Another common form assumed for the price impact function (see Almgren \textit{et al.} (2005), Gabaix \textit{et al.} (2007)) is given by the so-called power-law function of type $\pm\gamma |y|^{\alpha}$ with $\alpha<1$, $\gamma>0$, where $\pm\gamma |y|^{\alpha}=+\gamma |y|^{\alpha}$ if $y>0$ and $\pm\gamma |y|^{\alpha}=-\gamma |y|^{\alpha}$ if $y<0$. On the contrary, Blais and Protter (2010) show, by using data on stocks traded on the New York Stock Exchange, that the form of the price impact is linear. The example of the power-law function is quite different from what we assume in Assumption (\ref{ass2}). However, as we will see further on in this article, this assumption is innocuous when financial institutions try to break their large orders into smaller packages, and this happens quite often in practice.
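To make these two specifications concrete, the following sketch (with illustrative parameter values that are not taken from the cited studies) compares the linear impact $h^{i}(y)=ay$ with the power-law impact $\pm\gamma|y|^{\alpha}$, and checks numerically that both impacted prices are increasing in $y$ while the power-law impact is concave for $y>0$:

```python
import math

# Hypothetical parameter values, chosen only for illustration.
a, gamma, alpha = 0.01, 0.05, 0.5
x_tilde = 100.0  # unaffected price X_tilde_T(w) in one scenario

def linear_impact(y):
    # Linear specification h(y) = a*y.
    return a * y

def power_law_impact(y):
    # Power-law specification: gamma*|y|^alpha with the sign of y.
    return math.copysign(gamma * abs(y) ** alpha, y)

def price(y, h):
    # Impacted price X_T(w, y) = unaffected price + h(y).
    return x_tilde + h(y)

# Both specifications are increasing in the order size y ...
assert price(200, linear_impact) > price(100, linear_impact)
assert price(200, power_law_impact) > price(100, power_law_impact)
# ... and the power-law impact is concave for y > 0:
y1, y2 = 100, 400
mid = power_law_impact((y1 + y2) / 2)
assert mid >= 0.5 * (power_law_impact(y1) + power_law_impact(y2))
```

The concavity assertion is exactly the defining inequality of Assumption (\ref{ass2}) with $\lambda=1/2$; for the linear impact it holds with equality.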
By Assumption (\ref{ass2}) one then easily notes that $Z^{i}_y$ is decreasing and concave in $y$.
\section{Illiquidity risk measures}\label{ill}
In this section our aim is to quantify the risk of $Z^{i}_{y}(w)$, for a fixed value of $y\in\mathbb{R}_{>0}$ and security $i$, by a risk measure function. We call such a risk measure an illiquidity risk measure, which we define by explicitly accounting for the financial institution's holding in the security $i$. We denote it by $\beta^{i}: \mathbb{R}_{>0}\rightarrow \mathbb{R}$.
Thus we simply say that $y$ has some influence on the price of the security $i$ and compute for each $y\in\mathbb{R}_{>0}$ the financial institution's risk in the security $i$.
\subsection{Illiquidity risk measures definition}\label{3.1}
Given a position $y$ in the security $i$ and a convex risk measure $\rho$ on the space $\mathcal{Z}_{i}$, the risk measure $\beta^{i}$ of the position $y$ is defined as being equal to $\rho(Z^{i}_{y})$. This way of defining the illiquidity risk measure seems quite natural, since the risk of $y$ is related to the risk of the random variable $Z^{i}_{y}(w)$. This observation leads naturally to the following definition.
\begin{definition}\label{def2}
An illiquidity risk measure on the space $\mathbb{R}_{>0}$ is a function $\beta^{i}: \mathbb{R}_{>0}\rightarrow \mathbb{R}$ defined by $\beta^{i}(y)\stackrel{\text{def}}{=}\rho(Z^{i}_{y})$ for $y\in\mathbb{R}_{>0}$.
The functional $\rho: \mathcal{Z}_{i} \rightarrow \mathbb{R}$ is a standard convex monetary risk measure functional, i.e. for all $i \in\mathbb{N}$, and $V, U \in \mathcal{Z}_{i}$, it satisfies the following axioms\footnote{See F\"{o}llmer and Schied (2004) and Delbaen (2002) for the definition of a risk measure functional.}
\begin{itemize}
\item [a)] Decreasing monotonicity: $V(w) \leq U(w)$, then $\rho(V)\geq \rho(U)$;
\item[b)] Cash invariance (or cash-additivity): $\forall m\in\mathbb{R}$, $\rho(V+m)=\rho(V)-m$.
\end{itemize}
A risk measure $\rho$ is called convex if
\begin{itemize}
\item[c)] Convexity: $\rho(\lambda V+(1-\lambda)U)\leq \lambda\rho(V) + (1-\lambda)\rho(U)$, $0 \leq\lambda \leq 1$.
\end{itemize}
\end{definition}
Our first goal is to show that Definition (\ref{def2}), together with Assumptions (\ref{ass1}) and (\ref{ass2}), implies that $\beta^{i}$ is an increasing, cash sub-additive, and convex illiquidity risk measure.
\begin{proposition}\label{prop1}
Denote an illiquidity risk measure by the mapping $\beta^{i}: \mathbb{R}_{>0}\rightarrow \mathbb{R}$. Then $\beta^{i}(y)=\rho(Z^{i}_{y})$ is increasing, cash sub-additive, and convex for $y\in\mathbb{R}_{>0}$, that is, it satisfies the following
\begin{itemize}
\item [a)] Increasing monotonicity: $\forall y \geq v \in\mathbb{R}_{>0}$, then $\beta^{i}(y) \geq \beta^{i}(v)$;
\item[b)] Cash sub-additivity (or translational super-variance): $\forall y\in\mathbb{R}_{>0}$ and $m \geq 0$, $\beta^{i}(y+m)\geq \beta^{i}(y)-m$;
\item[c)] Convexity: $\forall y, v \in \mathbb{R}_{>0}$, then $\beta^{i}(\lambda y+ (1-\lambda)v) \leq \lambda\beta^{i}(y) + (1-\lambda)\beta^{i}(v)$, $0 \leq\lambda \leq 1$.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[a)] Let $y, v\in \mathbb{R}_{>0}$. From Assumption (\ref{ass1}), $y \geq v$ implies that $X_{T}^{i}(w, -y) \leq X_{T}^{i}(w, -v)$, so that $Z^{i}_{y}(w) \leq Z^{i}_{v}(w)$. By Definition (\ref{def2}), it follows that $\beta^{i}(y)=\rho(Z^{i}_{y}) \geq \rho(Z^{i}_{v})=\beta^{i}(v)$.
\item[b)] For $y\in \mathbb{R}_{>0}$, $m \geq0$ it is easily verified that $\beta^{i}(y+m)\geq \beta^{i}(y)=\rho(Z^{i}_{y})\geq \rho(Z^{i}_{y} + m)= \rho(Z^{i}_{y}) - m=\beta^{i}(y)-m$ by point (a), positivity of $m$, and Definition (\ref{def2}).
\item[c)] Let $y, v\in \mathbb{R}_{>0}$ and note that $Z^{i}_{\lambda y +(1-\lambda)v}\geq \lambda Z^{i}_y+(1-\lambda)Z^{i}_{v}$ from Assumption (\ref{ass2}). By Definition (\ref{def2}), this implies that $\beta^{i}(\lambda y+(1-\lambda)v)=\rho(Z^{i}_{\lambda y+(1-\lambda)v})\leq\rho(\lambda Z^{i}_{y}+(1-\lambda)Z^{i}_{v}) \leq \lambda\rho(Z^{i}_{y}) + (1-\lambda)\rho(Z^{i}_{v})=\lambda\beta^{i}(y)+(1-\lambda)\beta^{i}(v)$.
\end{itemize}
\end{proof}
\begin{remark}\label{rm1}
We observe that $y>0$ corresponds to the case in which the financial institution holds $y$ units of the security $i$ at time $0$ and sells them at time $T$. Axiom (a) then says that if the financial institution increases its long position in the security $i$, then its illiquidity risk measure $\beta^{i}$ should increase too, since the financial institution becomes more risky and less liquid. The meaning of Axiom (b) is: ``when the financial institution buys more than $y$ units of the security $i$, exactly $y+m$ units, the illiquidity risk cannot be reduced by more than $m$.'' Suppose $m=1$. Then Axiom (b) reads $\beta^{i}(y+1)-\beta^{i}(y)\geq -1$. This means that the illiquidity risk measure changes by at least $-1$ as the position in security $i$ changes from $y$ to $y+1$. That is, the financial institution's money is worth more in an illiquid market. The last axiom illustrates the fact that the increase in the risk of a security $i$ generated by a one-unit increase in $y>0$ is smaller when $y$ is small than when it is large. From a practical point of view, this axiom encourages a financial institution to break up a large trade into several smaller ones.
\end{remark}
\subsection{Dual representation of illiquidity risk measures}\label{secdual}
In this subsection, we suppose that the random variables $Z^{i}_{y}$, for all $y\in\mathbb{R}$, belong to the space $\mathcal{Z}_{i}$ of all bounded measurable functions defined on the measurable space $(\Omega, \mathcal{F})$. Recall that, equipping the space $\mathcal{Z}_{i}$ with the supremum norm $||Z^{i}_{y}||=\sup_{w \in \Omega}|Z^{i}_{y}(w)|$, any convex risk measure is Lipschitz with respect to this norm.
Our aim is to give a dual representation for the illiquidity risk measure $\beta^{i}$.
As shown in Subsection (\ref{3.1}), the main axioms of the illiquidity risk measure are convexity, cash sub-additivity and increasing monotonicity. The lack of the cash-additivity axiom is an important difference between a standard risk measure and the one proposed here. In order to make use of some main results in convex analysis, we will work for the rest of this section with a new translationally invariant function containing our risk measure function as a special case.
To deal with this, we introduce a new function defined in a similar fashion as in El Karoui and Ravanelli (2008). First, we define the following function
\begin{equation}\label{eq6}
f^{i}(y) = \left\{
\begin{array}{l l}
\beta^{i}(y) & \quad \text{if $y\geq0$ }\\
\rho(Z^{i}_{y})& \quad \text{if $y\leq0$}
\end{array} \right.\
\end{equation}
By convention, we put $\beta^{i}(0)=\rho(\tilde{X}_{T}^{i}(w))=\rho(Z^{i}_{0})$, so that $\beta^{i}(0)\leq\rho(Z^{i}_{y})=\beta^{i}(y)$ for each $y\geq 0$ and $\beta^{i}(0)\geq\rho(Z^{i}_{y})=\beta^{i}(y)$ for each $y\leq 0$, where $\tilde{X}_{T}^{i}(w)$ gives the unaffected price. From now on we will assume that $Z^{i}_y$ is concave for all $y\in\mathbb{R}$. One can then easily verify that $f^{i}(y)$ satisfies Proposition (\ref{prop1}) for every $y\in\mathbb{R}$.
We then let $\hat{\beta}^{i}$ be the function defined as $\hat{\beta}^{i}(h, x)\stackrel{\text{def}}{=}f^{i}((y+x)-x)+x=f^{i}(h-x)+x$, with $h=y+x$, $x\in\mathbb{R}$ and $y\in\mathbb{R}$.
The following proposition shows that $\hat{\beta}^{i}(h, x)$ satisfies the increasing monotonicity, translational invariance, and convexity properties.
\begin{proposition}\label{prop3}
The function $\hat{\beta}^{i}(h, x)$ defined as $f^{i}((y+x)-x)+x$ for every $(h,x)\in \mathbb{R}^{2}$ is increasing monotonic, translationally invariant, and convex.
\end{proposition}
\begin{proof}
\begin{itemize}
\item[a)] Increasing monotonicity: Let $y\geq u$, $x_1\geq x_2$ and $h=y+x_1$, $v=u+x_2$ such that $y, v \in \mathbb{R}$ and $x_1, x_2\in\mathbb{R}$. From the increasing monotonicity of $f^{i}$, it follows that $\hat{\beta}^{i}(h, x_1)=f^{i}((y+x_1)-x_1)+x_1=f^{i}(y)+x_1\geq f^{i}((u+x_2)-x_2)+x_2=f^{i}(u)+x_2=\hat{\beta}^{i}(v, x_2)$;
\item[ b)] Translational invariance: Assume $m\in \mathbb{R}$, $x\in\mathbb{R}$, and $y\in\mathbb{R}$. Then $\hat{\beta}^{i}(h+m, x+m)=f^{i}[(h+m) - (x+m)]+(x+m)=[f^{i}(h-x)+x]+m=[f^{i}((y+x)-x)+x]+m=\hat{\beta}^{i}(h,x)+m$;
\item[ c)] Convexity: Let $0 \leq\lambda \leq 1$, $y, u\in\mathbb{R}$, $x_1, x_2\in\mathbb{R}$, and $h=y+x_1$, $v=u+x_2$. By definition, $f^{i}\left[\lambda(h-x_1)+(1-\lambda)(v-x_2)\right]+\lambda x_1+(1-\lambda)x_2\leq\lambda f^{i}((y+x_1)-x_1)+(1-\lambda)f^{i}((v+x_2)-x_2)+\lambda x_1+(1-\lambda)x_2$. Convexity of $f^{i}$ implies then that $\hat{\beta}^{i}[\lambda(h,x_1) + (1-\lambda)(v,x_2)]\leq \lambda\hat{\beta}^{i}(h,x_1) + (1-\lambda)\hat{\beta}^{i}(v, x_2)$. This completes the proof.
\end{itemize}
\end{proof}
The lemma below shows that the function $\hat{\beta}^{i}(h, x)$ is Lipschitz continuous (with constant $\sqrt{2}$) with respect to the norm $||\cdot||$ on $\mathbb{R}^{2}$.
\begin{lemma}\label{lemm1}
The function $\hat{\beta}^{i}(\textbf{h})$ is Lipschitz continuous with respect to the norm $||\cdot||$ on $\mathbb{R}^{2}$, i.e.
\begin{equation}\label{eq3}
|\hat{\beta}^{i}(\textbf{h})-\hat{\beta}^{i}(\textbf{v})| \leq \sqrt{2}||\textbf{h}-\textbf{v}||
\end{equation}
\end{lemma}
\begin{proof}
If $\textbf{h}=(h,x_1)$, $\textbf{v}=(v,x_2)$, $h=y+x_1$ and $v=u+x_2$ then we have
\begin{equation*}
\begin{pmatrix}
h \\
x_1 \\
\end{pmatrix}\leq
\begin{pmatrix}
v \\
x_2 \\
\end{pmatrix}
+\begin{pmatrix}
|h-v| \\
|x_1-x_2| \\
\end{pmatrix}
\end{equation*}
By the Cauchy-Schwarz inequality
\begin{equation*}
\begin{pmatrix}
h \\
x_1 \\
\end{pmatrix}\leq\begin{pmatrix}
v \\
x_2 \\\end{pmatrix}+\begin{pmatrix}
\sqrt{2}||\textbf{h}-\textbf{v}|| \\
\sqrt{2}||\textbf{h}-\textbf{v}|| \\
\end{pmatrix}
\end{equation*}
We have thus that
\begin{eqnarray*}
&&\hat{\beta}^{i}(h,x_1)=f^{i}((y+x_1)-x_1)+x_1\leq \hat{\beta}^{i}(\textbf{v}+\sqrt{2}||\textbf{h}-\textbf{v}||)\nonumber\\&&=\hat{\beta}^{i}(v+\sqrt{2}||\textbf{h}-\textbf{v}||, x_2+\sqrt{2}||\textbf{h}-\textbf{v}||)\nonumber\\&&= f^{i}((u+x_2)-x_2)+x_2+\sqrt{2}||\textbf{h}-\textbf{v}||\nonumber\\&&=\hat{\beta}^{i}(v,x_2)+\sqrt{2}||\textbf{h}-\textbf{v}||
\end{eqnarray*}
by increasing monotonicity and translational invariance, or equivalently
\begin{eqnarray*}
\hat{\beta}^{i}(h,x_1)-\hat{\beta}^{i}(v,x_2)\leq\sqrt{2}||\textbf{h}-\textbf{v}||
\end{eqnarray*}
Reversing $\textbf{h}$ and $\textbf{v}$ yields the lemma
\begin{equation}
|\hat{\beta}^{i}(\textbf{h})-\hat{\beta}^{i}(\textbf{v})|\leq \sqrt{2}||\textbf{h}-\textbf{v}||
\end{equation}
\end{proof}
As an immediate consequence of Lemma (\ref{lemm1}), $\hat{\beta}^{i}(h,x)$ is a lower semi-continuous function on $\mathbb{R}^{2}$. Even more, it is proper and convex. We have already proved the convexity and lower semi-continuity properties. As for the remaining property, it is clear that $\hat{\beta}^{i}(h,x)$ is proper, being the sum of $x$ and the proper convex function $f^{i}((y+x)-x)$, defined as $f^{i}(y)=\beta^{i}(y)$ for $y\geq0$ and $f^{i}(y)=\rho(Z^{i}_y)$ for $y\leq0$.
In the light of the Fenchel-Moreau theorem, see Rockafellar (1970), Ekeland and T\'emam (1999), and Borwein and Lewis (2006) for the multivariate version of the theorem, the second conjugate of the function $\hat{\beta}^{i}(h,x)$ coincides with the function itself, that is
\begin{equation}\label{eq5*}
\hat{\beta}^{{i}^{**}}(\textbf{h})=\hat{\beta}^{i}(\textbf{h})
\end{equation}
where
\begin{equation*}
\hat{\beta}^{i}(\textbf{h})=\hat{\beta}^{{i}^{**}}(\textbf{h})=\sup_{\textbf{v}\in\mathbb{R}^{2}}\{\textbf{h}^{T}\textbf{v}-\hat{\beta}^{{i}^{*}}(\textbf{v})\}
\end{equation*}
and
\begin{equation*}
\hat{\beta}^{{i}^{*}}(\textbf{v})=\sup_{\textbf{h}\in\mathbb{R}^{2}}\{\textbf{v}^{T}\textbf{h}-\hat{\beta}^{i}(\textbf{h})\}
\end{equation*}
It is easily seen that the function $f^{i}$ can be recovered by setting $x=0$, so that from Equation (\ref{eq5*}) $\hat{\beta}^{i}(\textbf{h})$ can be treated as a function of one variable. It follows that
\begin{equation*}
\hat{\beta}^{i}((y,0))=f^{i}(y)=\sup_{u\in\mathbb{R}}\{yu-f^{{i}^{*}}(u)\}
\end{equation*}
where the conjugate of $f^{i}$ has the following expression
\begin{equation*}
f^{{i}^{*}}(u) =\hat{\beta}^{{i}^{*}}((u,0))=\sup_{y\in\mathbb{R}}\{uy-f^{i}(y)\}
\end{equation*}
Now, restricting $y$ to $\mathbb{R}_{>0}$ we are able to provide the illiquidity risk measure $\beta^{i}$ with a dual representation of the form
\begin{equation*}
\beta^{i}(y)=f^{i}(y)=\sup_{u\in\mathbb{R}}\{yu-f^{{i}^{*}}(u)\} \quad \forall y\in\mathbb{R}_{>0}
\end{equation*}
We have thus proved the following theorem.
\begin{theorem}\label{thm1}
Any illiquidity risk measure on $\mathbb{R}_{>0}$ defined as $\beta^{i}(y)=\rho(Z^{i}_y)$, with $\rho$ a convex risk measure defined on the linear space $\mathcal{Z}_{i}$ of bounded random variables and $Z^{i}_y$ decreasing and concave in $y$, can be represented as follows
\begin{equation}\label{ex4}
\beta^{i}(y)=\sup_{u\in\mathbb{R}}\{yu-f^{{i}^{*}}(u)\}
\end{equation}
with the conjugate function $f^{{i}^{*}}$ given by
\begin{equation*}
f^{{i}^{*}}(u)=\sup_{y\in\mathbb{R}}\{uy-f^{i}(y)\}
\end{equation*}
and $f^{i}$ as in Equation (\ref{eq6}).
\end{theorem}
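As a quick consistency check (not part of the original derivation), suppose that $f^{i}$ is affine, say $f^{i}(y)=\rho(\tilde{X}_{T}^{i})+ay$ with $a>0$, as in the linear supply-curve setting of Example (\ref{exlin}). The conjugate is then
\begin{equation*}
f^{{i}^{*}}(u)=\sup_{y\in\mathbb{R}}\{uy-\rho(\tilde{X}_{T}^{i})-ay\} = \left\{
\begin{array}{l l}
-\rho(\tilde{X}_{T}^{i}) & \quad \text{if $u=a$}\\
+\infty & \quad \text{otherwise}
\end{array} \right.\
\end{equation*}
so that Equation (\ref{ex4}) gives $\beta^{i}(y)=\sup_{u\in\mathbb{R}}\{yu-f^{{i}^{*}}(u)\}=ay+\rho(\tilde{X}_{T}^{i})$ for $y\in\mathbb{R}_{>0}$, in agreement with the worst-case measure computed explicitly in Example (\ref{exlin}).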
\begin{example}\label{exlin}
Suppose that the price of the security $i$ is given by $X_{T}^{i}(w,y)=\tilde{X}_{T}^{i}(w)+ay$, where $X_{T}^{i}(w,y)$ is positive and bounded measurable for every $y\in\mathbb{R}$, and $\tilde{X}_{T}^{i}(w)$ gives the unaffected price of security $i$ at time $T$. Mathematically, the final price can be negative, but this is practically impossible since, in practice, $a>0$ is usually small. A linear form of the supply curve is commonly obtained when one, for example, regresses stock prices on the signed traded volume of the stock. An empirical example is given in Blais and Protter (2010). Substituting these into the equation for $Z^{i}_y$, we get that
\begin{equation*}
Z^{i}_y=\tilde{X}_{T}^{i}(w)-ay
\end{equation*}
It easily follows that $Z^{i}_y$ is decreasing and concave and that $Z^{i}_y$ belongs to the space of bounded measurable functions.
Consider the convex worst-case risk measure $\rho$ defined on the space $\mathcal{Z}_{i}$ as
\begin{equation}\label{ex1}
\beta^{i}(y)=\rho(Z^{i}_{y})=-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)-ay\}
\end{equation}
for $y>0$.
Now, rewrite Equation (\ref{ex1})
\begin{equation*}
\beta^{i}(y)=-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)\}+ay
\end{equation*}
It follows that
\begin{equation}\label{ex3}
\beta^{i}(y)=\rho(\tilde{X}_{T}^{i})+ay
\end{equation}
As one can see, the risk measure in the case of no illiquidity can be obtained by simply taking $y=0$ in Equation (\ref{ex3}). The capital requirement of a position $y$ is then given by $y(\rho(\tilde{X}_{T}^{i})+X^{i}_{0}(y))$.
We also see that the illiquidity risk measure $\beta^{i}$ satisfies the axioms of Proposition (\ref{prop1}). Moreover, the capital requirement of a position $y$ in the presence of illiquidity is given by $y(\rho(\tilde{X}_{T}^{i})+ay+X^{i}_{0}(y))$, that is, the capital requirement per unit of $y$ is $(\rho(\tilde{X}_{T}^{i})+ay+X^{i}_{0}(y))$. This simply says that the rate at which the capital requirement increases per unit increase in $y$ depends on the standard risk measure plus two additional terms, $ay$ measuring the illiquidity risk of the security $i$ and $X^{i}_{0}(y)$ the initial price.
Thanks to Theorem (\ref{thm1}), the illiquidity risk measure also has a dual representation.
Take now $X_{T}^{i}(w,y)=\tilde{X}_{T}^{i}(w)+M_{T}^{i}(w)y$, where $X_{T}^{i}(w,y)$ is a positive bounded measurable function for all $y\in\mathbb{R}$ and $M_{T}^{i}$ is positive. The convex worst-case risk measure in this case becomes
\begin{eqnarray}
\beta^{i}(y)&=&\rho(Z^{i}_{y})=-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)-M_{T}^{i}(w)y\}\nonumber\\
&=&-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)\}+
y\sup_{w\in\Omega}\{M_{T}^{i}(w)\}\nonumber\\&=&\rho(\tilde{X}_{T}^{i}) +y\sup_{w\in\Omega}\{M_{T}^{i}(w)\}
\end{eqnarray}
The illiquidity term in this situation is given by $y\sup_{w\in\Omega}\{M_{T}^{i}(w)\}$, and the capital requirement by $y(\rho(\tilde{X}_{T}^{i}) +y\sup_{w\in\Omega}\{M_{T}^{i}(w)\}+X^{i}_{0}(y))$.
Suppose further that $X_{T}^{i}(w,y)$ is given by $X_{T}^{i}(w,y)=\tilde{X}_{T}^{i}(w)+\theta\, sgn(y)+\eta y$, with $\theta, \eta>0$, where $sgn$ is the sign function and $X_{T}^{i}(w,y)$ is positive and bounded measurable; see Almgren (2000) for a discussion. The worst-case risk measure reads as
\begin{eqnarray}
\beta^{i}(y)&=&\rho(Z^{i}_{y})=-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)-\theta sgn(y)-\eta y\}\nonumber\\&=&-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)\}+\theta sgn(y)+\eta y\nonumber\\&=&\rho(\tilde{X}_{T}^{i})+\theta sgn(y)+\eta y
\end{eqnarray}
with illiquidity term given by $\theta sgn(y)+\eta y$ and capital requirement by $y(\rho(\tilde{X}_{T}^{i})+\theta sgn(y)+\eta y+X^{i}_{0}(y))$.
Theorem (\ref{thm1}) can again be used to give a dual representation of $\beta^{i}$.
\end{example}
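As a quick numerical companion to Example (\ref{exlin}), the worst-case measure of $Z^{i}_y=\tilde{X}^{i}_T-ay$ splits into $\rho(\tilde{X}^{i}_T)+ay$; the toy scenario set and constants below are our own assumptions, not data from the paper.

```python
# Minimal numeric check (assumed toy data): on a finite scenario set the
# worst-case measure of Z_y = X_tilde - a*y equals rho(X_tilde) + a*y.

X_tilde = [9.5, 10.0, 10.8, 8.7]   # hypothetical bounded prices at T
a = 0.2

def rho_worst(Z):                  # rho(Z) = -inf_w Z(w)
    return -min(Z)

def beta(y):
    return rho_worst([x - a * y for x in X_tilde])

for y in (0.5, 1.0, 3.0):
    assert abs(beta(y) - (rho_worst(X_tilde) + a * y)) < 1e-12
```

The split holds because subtracting the deterministic amount $ay$ shifts every scenario equally, which is exactly the cash additivity used in the example.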
\subsection{Relation between $\beta$ and $\rho$}\label{relat}
By assumption made previously on the space $\mathcal{Z}_{i}$, any convex risk measure $\rho$ defined on $\mathcal{Z}_{i}$ has a dual representation of the form
\begin{equation}\label{eq9}
\rho(Z)=\sup_{h\in ba}\{h(Z)-\rho^{*}(h)\} \quad \forall Z\in\mathcal{Z}_{i}
\end{equation}
where $ba:=ba(\Omega, \mathcal{F})$ denotes the space of all finitely additive set functions with finite total variation and $\rho^{*}$ is equal to
\begin{equation}\label{eq9b}
\rho^{*}(h)= \sup_{Z\in\mathcal{Z}_{i}}\{h(Z)-\rho(Z)\}
\end{equation}
One can also write $\rho$ differently as
\begin{equation}
\rho(Z)=\sup_{Q\in \mathcal{M}_{1,f}}\{\mathbb{E}_{Q}(-Z)-\alpha(Q)\} \quad \forall Z\in\mathcal{Z}_{i}
\end{equation}
where $\mathcal{M}_{1,f}:= \mathcal{M}_{1,f}(\Omega,\mathcal{F})$ is the set of all
positive finitely additive set functions $Q : \mathcal{F}\rightarrow [0, 1]$ normalized to $Q[\Omega] = 1$, and
\begin{equation}\label{eq10}
\alpha(Q)=\sup_{Z\in \mathcal{Z}_{i}}\{\mathbb{E}_{Q}(-Z)-\rho(Z)\}
\end{equation}
is the minimal penalty function taking values in $\mathbb{R}\cup\{+\infty\}$.
Therefore, applying the dual representation in Equation (\ref{eq9}) to our case, we immediately deduce that
\begin{equation}\label{Eq11}
\beta^{i}(y)=\rho(Z^{i}_{y})=\sup_{Q\in \mathcal{M}_{1,f}}\{\mathbb{E}_{Q}(-Z^{i}_{y})-\alpha(Q)\}\quad \forall Z^{i}_y\in\mathcal{Z}_{i}, y>0
\end{equation}
with $\alpha$ as in Equation (\ref{eq10}).
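On a finite $\Omega$ the dual representation in Equation (\ref{Eq11}) can be made concrete: with zero penalty ($\alpha\equiv 0$) the supremum over probability vectors is attained at a point mass on the worst scenario. The snippet below is a hedged illustration with invented numbers.

```python
# Hedged illustration: on a finite Omega the supremum in the dual
# representation over probability vectors Q (penalty alpha = 0) is attained
# at a point mass on the worst scenario, recovering the worst-case measure.

Z = [1.2, -0.4, 0.9]                      # hypothetical Z_y on 3 scenarios
neg_Z = [-z for z in Z]

def E_Q(Q, V):
    return sum(q * v for q, v in zip(Q, V))

# coarse grid of probability vectors on the simplex
grid = [(i / 20, j / 20, 1 - i / 20 - j / 20)
        for i in range(21) for j in range(21 - i)]
dual_value = max(E_Q(Q, neg_Z) for Q in grid)

assert abs(dual_value - max(neg_Z)) < 1e-12   # = -min Z = 0.4
```

With a nonzero penalty $\alpha$ the maximizer generally shifts away from the point mass, which is what distinguishes a convex risk measure from the pure worst case.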
\begin{remark}
We immediately see that there is a clear difference between the risk measure $\rho$ and the illiquidity risk measure $\beta^{i}$. The risk measure $\rho$ is defined as a functional on the future prices of the security $i$, while $\beta^{i}$ is defined as a function on the space $\mathbb{R}_{>0}$ of traded quantities of the security $i$. This means that the risk metric is now a real-valued function of a real variable: to each positive traded quantity $y\in\mathbb{R}_{>0}$ we can associate a real number $\beta^{i}(y)$ giving the specific risk of the financial institution in the security $i$.
\end{remark}
\subsection{Illiquidity risk measures on $L^{p}$ spaces}\label{lp}
We now fix a probability measure on the measurable space $(\Omega,\mathcal{F})$ and recall the dual representation of convex risk measures on $L^{p}(\Omega,\mathcal{F},\mathbb{P})$ spaces for $1\leq p\leq\infty$.
The definition of convex risk measures on general $\mathcal{Z}^{i}:=L^{p}(\Omega,\mathcal{F},\mathbb{P})$ spaces is identical to that of Definition (\ref{def2}). In particular, a risk measure $\rho$ defined on the $L^{\infty}(\Omega,\mathcal{F},\mathbb{P})$ space has the property of being Lipschitz continuous and finite-valued. The continuity, together with the convexity of $\rho$, implies the existence of a dual representation for the risk measure $\rho$, namely
\begin{equation}
\rho(Z)=\sup_{Q\in \mathcal{M}_{1,g}}\{\mathbb{E}_{Q}(-Z)-\alpha(Q)\} \quad \forall Z\in\mathcal{Z}_{i}
\end{equation}
where now $\mathcal{M}_{1,g}$ denotes the set of all positive finitely additive set functions $Q: \mathcal{F}\rightarrow[0, 1]$ that are absolutely continuous with respect to $\mathbb{P}$ and normalized to $Q[\Omega] = 1$, and $\alpha(Q)$ is, as usual, the minimal penalty function.
In this case the illiquidity risk measure is equal to $\sup_{Q\in \mathcal{M}_{1,g}}\{\mathbb{E}_{Q}(-Z^{i}_y)-\alpha(Q)\}$.
For a convex risk measure $\rho:L^p(\Omega,\mathcal{F},\mathbb{P})\rightarrow \mathbb{R}\cup\{+\infty\}$ on the $\mathcal{Z}^{i}:=L^p(\Omega,\mathcal{F},\mathbb{P})$ space for $1 \leq p <\infty$, the existence of a dual representation is strictly connected to the lower semicontinuity (with respect to the norm $||\cdot||_{p}$) of the risk measure functional. Kaina and R\"uschendorf (2009) prove that the dual representation of a convex lower semicontinuous risk measure $\rho$ takes the form
\begin{equation}
\rho(Z)=\sup_{\mathbb{Q}\in \mathcal{M}_{1,q}}\{\mathbb{E}_{\mathbb{Q}}(-Z)-\alpha(\mathbb{Q})\} \quad \forall Z\in\mathcal{Z}_{i}
\end{equation}
with $\alpha(\mathbb{Q})$ as usual, conjugate index $q=p/(p-1)$ and
\begin{equation}
\mathcal{M}_{1,q}=\{\mathbb{Q}\in\mathcal{M}_{1}(\mathbb{P})|\frac{d\mathbb{Q}}{d\mathbb{P}}\in L^q(\Omega,\mathcal{F},\mathbb{P})\}
\end{equation}
where $\mathcal{M}_{1}(\mathbb{P})$ denotes the class of all absolutely continuous probabilities with respect to $\mathbb{P}$.
We then have $\beta^{i}(y)=\sup_{\mathbb{Q}\in \mathcal{M}_{1,q}}\{\mathbb{E}_{\mathbb{Q}}(-Z^{i}_y)-\alpha(\mathbb{Q})\}$. The difference with the illiquidity risk measures defined on the Banach space of all bounded measurable functions is that, in the $L^p$ case, the illiquidity risk measure may take the value $+\infty$.
At this point, we also want to emphasize that an illiquidity risk measure may admit a dual representation independently of whether the risk measure $\rho$ admits one. Indeed, the illiquidity risk measure is well represented on a buy order whenever there is a proper convex risk measure $\rho$ defined on a given space of random variables $\mathcal{Z}_{i}$ satisfying the axioms of Definition (\ref{def2}). We record this result in a corollary.
\begin{corollary}\label{cor2}
An illiquidity risk measure $\beta^{i}$ on the space $\mathbb{R}_{>0}$ defined as $\beta^{i}(y)=\rho(Z^{i}_y)$, where $\rho$ is a proper convex risk measure satisfying the axioms of Definition (\ref{def2}) and $Z^{i}_y$ is decreasing and concave, has a dual representation independently of whether $\rho$ has a dual representation.
\end{corollary}
We also note that a proper convex risk measure defined on a space $\mathcal{Z}^{i}$ of random variables is a sufficient condition to ensure that the illiquidity risk measure $\beta^{i}$ satisfies the axioms of Proposition (\ref{prop1}) and the dual representation of Theorem (\ref{thm1}), but it is not always a necessary condition. There are cases where, for example, a risk measure defined on the space $\mathcal{Z}^{i}$ is not convex and yet the resulting illiquidity risk measure $\beta^{i}$ satisfies Proposition (\ref{prop1}) and Theorem (\ref{thm1}). The following example illustrates this fact.
\begin{example}\label{exam1}
Let $\mathbb{P}$ be a probability measure on the measurable space $(\Omega,\mathcal{F})$ and define $Z^{i}_y$ as $Z^{i}_y=\tilde{X}_{T}^{i}(w)-ay$. The value-at-risk measure $VaR_{\delta}$, $\delta\in(0,1)$, on the space $\mathcal{Z}^{i}$ of essentially bounded random variables is naturally defined as
\begin{equation*}
\beta(y)=VaR_{\delta}(Z^{i}_y)=\inf{\{m\in\mathbb{R}|\mathbb{P}(\tilde{X}_{T}^{i}(w)-ay+m<0)\leq\delta\}}
\end{equation*}
for $y>0$.
Recall that $VaR_{\delta}$ is monotone decreasing, cash additive, positively homogeneous, but not convex on the space $\mathcal{Z}^{i}$. Then,
\begin{equation}
\beta^{i}(y)=VaR_{\delta}(\tilde{X}_{T}^{i})+ay
\end{equation}
As can be seen, $\beta^{i}$ is increasing, convex, and cash sub-additive. Moreover, it also admits a dual representation as in Theorem (\ref{thm1}). To see this, note first that $\beta^{i}$ is continuous on the space $\mathbb{R}_{>0}$. Taking $f$ as in Equation (\ref{eq6}) gives the result.
The capital requirement given by $y(VaR_{\delta}(\tilde{X}_{T}^{i})+ay+X_{0}^{i}(y))$ is an increasing function of $y$ and, as can be seen, is greater than the capital requirement needed in a liquid market.
\end{example}
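Example (\ref{exam1}) can be replayed with an empirical $VaR_{\delta}$. The sample and the crude quantile rule below are illustrative assumptions, but the cash-additivity identity $VaR_{\delta}(\tilde{X}^{i}_T-ay)=VaR_{\delta}(\tilde{X}^{i}_T)+ay$ holds exactly.

```python
# Sketch with simulated data: the empirical VaR of Z_y = X_tilde - a*y
# satisfies VaR(Z_y) = VaR(X_tilde) + a*y (cash additivity), so beta is
# increasing and convex in y even though VaR itself is not convex.

import random

random.seed(0)
X_tilde = [random.uniform(8, 12) for _ in range(1000)]
a, delta = 0.3, 0.05

def empirical_var(sample, level):
    # smallest m with #{z : z + m < 0}/n <= level (ties handled crudely)
    s = sorted(sample)
    return -s[int(level * len(sample))]

def beta(y):
    return empirical_var([x - a * y for x in X_tilde], delta)

for y in (0.5, 2.0):
    assert abs(beta(y) - (empirical_var(X_tilde, delta) + a * y)) < 1e-12
```

Shifting every sample point by the deterministic amount $ay$ shifts the empirical quantile by exactly the same amount, which is the cash additivity the example relies on.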
Inspired by Example (\ref{exam1}), we arrive at the following proposition.
\begin{proposition}\label{thm4}
If the security's price is a separable additive function of the type $X_{T}^{i}(w,y)=\tilde{X}_{T}^{i}(w) + h^i(y)$, with $h^i(y)$ increasing, concave, and deterministic for all $y\in\mathbb{R}$, and $\tilde{X}_{T}^{i}(w)$ the unaffected price, then, given a proper cash-additive functional $\rho$ defined on a space of random variables $\mathcal{Z}_{i}$ containing $Z^{i}_y$, the function $\beta^{i}$ expressed as $\beta^{i}(y)=\rho(Z^{i}_y)=\rho(\tilde{X}_{T}^{i} + h^i(-y))$ is a risk measure satisfying Proposition (\ref{prop1}). Further, it admits the dual representation of Theorem (\ref{thm1}).
\end{proposition}
\begin{proof}
By definition, $\beta^{i}(y)=\rho(\tilde{X}_{T}^{i} + h^i(-y))=\rho(\tilde{X}_{T}^{i})-h^i(-y)$, by the cash additivity of $\rho$ and the fact that $h^i(-y)$ is deterministic. Now use the concavity and monotonicity of $h^i$ to conclude that $\beta^{i}$ is increasing monotonic, cash sub-additive, and convex. The dual representation follows by making use of the function $f$ in Equation (\ref{eq6}).
\end{proof}
When $Z^{i}_{y}$ is as in Proposition (\ref{thm4}), we can also define, in a fashion similar to the previous subsection, a function $\delta^i:\mathbb{R}_{<0}\rightarrow\mathbb{R}$ which measures the illiquidity risk of the financial institution on a position $y<0$ in the security $i$. We define $\delta^{i}$ as usual by $\delta^{i}(y)=\rho(-Z^{i}_{y})$, with $\rho$ a convex risk measure defined on a given space $\mathcal{Z}_{i}$. In addition, we assume that $\rho(U)<+\infty$ and $\rho(U)>-\infty$ for some $U\in \mathcal{Z}_{i}$. We note that $-Z^{i}_{y}$ is increasing and convex for all $y\in\mathbb{R}$ and that the cash flow in this situation is given by $-y[X^{i}_{T}(w,-y)-X^{i}_{0}(y)]$. Simple calculations show that $\delta^{i}$ is decreasing monotonic, cash super-additive, and concave. Here, cash super-additivity (or translational sub-variance) means $\delta^{i}(y+m)\leq\delta^{i}(y)+m$ for every $y<0$, $m\geq0$ with $(y+m)\leq0$. Remark (\ref{rm1}) allows us to give an interpretation to these axioms.
The axioms that deserve consideration are decreasing monotonicity and concavity. In particular, the first axiom says that the illiquidity risk of the security $i$ increases as the quantity sold by the financial institution increases, thus making it less liquid and more risky. We have seen that the convexity axiom induces financial institutions to break up large trades into smaller ones. In the same spirit, the concavity axiom stimulates financial institutions to split their large trades, since the decrease in the risk of security $i$ caused by a one-unit increase in $y<0$ is larger when $y$ is small.
Next, we define the function $g^i$ as
\begin{equation}\label{eqer}
g^{i}(y) = \left\{
\begin{array}{l l}
\rho(-Z^{i}_{y}) & \quad \text{if $y\geq0$ }\\
\delta^{i}(y) & \quad \text{if $y\leq0$}
\end{array} \right.\
\end{equation}
where $\delta^{i}(0)$ is equal to $\rho(-\tilde{X}^{i}_{T})=\rho(-Z^{i}_{0})$. Then, $\hat{\delta}^{i}(h, x)\stackrel{\text{def}}{=}g^{i}(h-x)-x\stackrel{\text{def}}{=}g^{i}((y+x)-x)-x$, where $h=y+x$, $x,y\in\mathbb{R}$, is decreasing monotonic, cash additive, concave, and Lipschitz continuous with constant $\sqrt{2}$. Finally, another application of the Fenchel-Moreau theorem leads to Theorem (\ref{thm1}) with the $\sup$ operator replaced by the $\inf$ operator.
The reason why we do not define an illiquidity risk measure $\delta^{i}$ on $\mathbb{R}_{<0}$ for general random variables $-Z^{i}_{y}$ is that we cannot be sure, in general, that the resulting risk measure $\delta^{i}(y)=\rho(-Z^{i}_{y})$, where $\rho$ is a (convex) risk measure, is convex or concave, and thus we cannot make use of the Fenchel-Moreau theorem to give a dual representation of $\delta^{i}$. One also notices that Proposition (\ref{thm4}) holds for $\delta^i$ as well. Furthermore, using the illiquidity risk measure $\delta^i$ instead of $\beta^i$, the capital requirement in Example (\ref{exlin}) is $y(\rho(\tilde{X}^{i}_{T})-X_{0}^{i}(y))=y(\sup_{w\in\Omega}\{\tilde{X}^{i}_{T}(w)\}-X_{0}^{i}(y))$ when $Z^{i}_{y}=\tilde{X}^{i}_{T}(w)-ay$. Note that $\rho$ in this case is finite-valued and linear in $U\in \mathcal{Z}_{i}$. This implies that Proposition (\ref{thm4}) holds even when $h(y)$ is non-deterministic, and $\delta^i$ admits a Fenchel-Moreau dual representation. By $h(y)$ non-deterministic we mean that it has the form $B(w)F(y)$ or $B(w)+F(y)$. The other cases, together with Example (\ref{exam1}), can be derived analogously.
\begin{example}\label{exvar}
Fix a probability measure $\mathbb{P}$ on the space $(\Omega,\mathcal{F})$. Let us now suppose that the price of security $i$ follows a geometric Brownian motion with a drift term depending on the traded volume, that is
\begin{equation}
dX_{t}^{i}=X_{t}^{i}(h^i(y)+\mu)dt+X_{t}^{i}\sigma dB_{t}
\end{equation}
where $h^i(y)=ay$ is an increasing and concave function, $\sigma$ and $\mu$ are constants, $X_{0}^{i}(y)>0$ is the initial value, $y>0$, and $B$ denotes a standard Brownian motion started at zero. This way of modelling the security price is based on the framework developed by Almgren and Chriss (2005).
Solving the stochastic differential equation yields
\begin{equation}
X_{T}^{i}(w)=X_{0}^{i}(y)\exp\{ayT\}\exp\{(\mu-\frac{\sigma^2}{2})T+\sigma B_T\}
\end{equation}
Under this assumption, we let $X_{T}^{i}(w,y)$ be equal to $\exp\{ayT\}\tilde{X}_{T}^{i}(w)$ where $\tilde{X}_{T}^{i}(w)=X_{0}^{i}(y)\exp\{(\mu-\frac{\sigma^2}{2})T+\sigma B_T\}$ gives the price in the absence of illiquidity.
The $VaR_{\delta}$ applied to $Z^{i}_y$ is
\begin{eqnarray*}
&&\beta^{i}(y)=VaR_{\delta}(Z^{i}_y)\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(\exp\{-ayT\}\tilde{X}_{T}^{i}(w)+m<0)\leq \delta\}} \\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(-ayT+\ln(\tilde{X}_{T}^{i}(w))< \ln(-m))\leq \delta\}} \\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(\ln(\tilde{X}_{T}^{i}(w))< \ln(-m)+ayT)\leq \delta\}}
\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}\left(B_T<\frac{\ln(-m)+ayT-\ln(X^{i}_0(y))-(\mu - \frac{\sigma^2}{2})T}{\sigma}\right)\leq \delta\}}
\end{eqnarray*}
As $B_T$ is a standard Brownian motion at time $T$, we can represent it as $B_T=\sqrt{T}W$ with $W$ a standard normal random variable. It follows that
\begin{eqnarray*}
&&\beta^{i}(y)\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}\left(W<\frac{\ln(-m)+ayT-\ln(X^{i}_0(y))-(\mu - \frac{\sigma^2}{2})T}{\sqrt{T}\sigma}\right)\leq \delta\} }
\end{eqnarray*}
and, setting the probability equal to the level $\delta$,
\begin{equation*}
\Phi^{-1}(\delta)=\frac{\ln(-m)+ayT-\ln(X^{i}_0(y))-(\mu - \frac{\sigma^2}{2})T}{\sqrt{T}\sigma}
\end{equation*}
From this we obtain
\begin{eqnarray}
\beta^{i}(y)&=&-\exp\{-ayT\}\exp\left( \Phi^{-1}(\delta)\sqrt{T}\sigma +(\mu - \frac{\sigma^2}{2})T+\ln(X^{i}_0(y))\right)\nonumber\\&=&\exp\{-ayT\}VaR_{\delta}(\tilde{X}_{T}^{i})
\end{eqnarray}
which fulfills the three axioms of Proposition (\ref{prop1}). This type of risk measure encourages financial institutions to break up large trades, as the rate at which $\beta^{i}$ increases is more than proportional in $y$ for larger $y$. The capital requirement of a position $y$ is then given by $y(\exp\{-ayT\}VaR_{\delta}(\tilde{X}_{T}^{i})+ X^{i}_{0}(y))$, and it increases in $y$ until the financial institution loses the initial investment.
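The closed form just derived can be sanity-checked deterministically: plugging $m=\beta^{i}(y)$ back into the defining probability must return the level $\delta$. The parameters below are assumptions chosen for illustration.

```python
# Deterministic check (assumed parameters) of the closed-form beta under the
# geometric Brownian motion example: substituting m = beta(y) back into the
# defining probability returns exactly the level delta.

import math
from statistics import NormalDist

mu, sigma, T, X0, a, delta = 0.05, 0.2, 1.0, 100.0, 0.1, 0.01
N = NormalDist()

# VaR_delta of the unaffected price X_tilde (negative, since X_tilde > 0)
var_tilde = -math.exp(N.inv_cdf(delta) * sigma * math.sqrt(T)
                      + (mu - sigma**2 / 2) * T + math.log(X0))

def beta(y):
    return math.exp(-a * y * T) * var_tilde

y = 0.5
m = beta(y)
prob = N.cdf((math.log(-m) + a * y * T - math.log(X0)
              - (mu - sigma**2 / 2) * T) / (sigma * math.sqrt(T)))
assert abs(prob - delta) < 1e-9
```

The check works because $\ln(-m)$ cancels every term of the quantile equation except $\Phi^{-1}(\delta)$, so the normal cdf evaluated there is $\delta$ by construction.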
Given the values of $VaR_{u}$ for $u\in(0,\delta]$, we can also compute another familiar risk measure, the $AVaR_{\delta}$, which in the geometric Brownian motion case with $h(y)=ay$ reads
\begin{equation}
\beta^{i}(y)=AVaR_{\delta}(Z^{i}_y)=\frac{1}{\delta}\int_{0}^{\delta}VaR_{u}(Z^{i}_y)du
\end{equation}
and thus, using the positive homogeneity of $VaR_{u}$,
\begin{equation}
\beta^{i}(y)=\frac{1}{\delta}\exp\{-ayT\}\int_{0}^{\delta}VaR_{u}(\tilde{X}^{i}_{T})du
\end{equation}
Unlike the $VaR_{\delta}$ risk measure, the $AVaR_{\delta}$ is a coherent risk measure. Moreover, as in the $VaR_{\delta}$ case, $\beta^{i}$ is increasing monotonic, cash sub-additive, and convex in $y$. The capital requirement is given as usual by $y(\beta^{i}(y)+X^{i}_{0}(y))$.
\end{example}
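A discrete version of the $AVaR_{\delta}$ computation also exhibits the scaling by $\exp\{-ayT\}$, which enters through the positive homogeneity $VaR_{u}(cZ)=c\,VaR_{u}(Z)$ for $c>0$. The sample, quantile rule, and Riemann sum below are our own simplifying assumptions.

```python
# Toy numeric version (assumed data) of AVaR_delta(Z) = (1/delta) *
# integral_0^delta VaR_u(Z) du: a midpoint Riemann sum over levels u, plus
# the positive-homogeneity step used in the GBM example.

import random
random.seed(1)

sample = [random.uniform(50, 150) for _ in range(2000)]
delta = 0.05

def empirical_var(z, u):
    s = sorted(z)
    return -s[int(u * len(z))]

def avar(z, level, steps=500):
    us = [(k + 0.5) * level / steps for k in range(steps)]
    return sum(empirical_var(z, u) for u in us) / steps

c = 0.7                                   # plays the role of exp(-a*y*T)
scaled = [c * z for z in sample]
assert abs(avar(scaled, delta) - c * avar(sample, delta)) < 1e-9
assert avar(sample, delta) >= empirical_var(sample, delta)
```

The last assertion reflects that $AVaR_{\delta}$ averages the deeper tail quantiles and is therefore always at least as conservative as $VaR_{\delta}$.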
\begin{proposition}\label{propcar}
Given a proper, positively homogeneous functional $\rho$ defined on the space of random variables $\mathcal{Z}^{i}$ and a separable multiplicative function for the security's price of the form $X_{T}^{i}(w,y)=h^i(y)\tilde{X}_{T}^{i}(w)$, with $h^i(y)$ increasing, positive, concave, and deterministic on all of $\mathbb{R}$, the function $\beta^{i}(y)=\rho(Z^{i}_y)=\rho(h^i(-y)\tilde{X}_{T}^{i})$, with $Z^{i}_y\in\mathcal{Z}^{i}$, $y>0$, and $\rho$ taking negative values, is a risk measure satisfying Proposition (\ref{prop1}) and has a dual representation as in Theorem (\ref{thm1}).
\end{proposition}
\begin{proof}
The proof follows by applying the positive homogeneity of $\rho$ and the positivity of $h^i$. Indeed, $\beta^{i}(y)=h^i(-y)\rho(\tilde{X}_{T}^{i})$. Then the concavity and monotonicity of $h^i$ give the first result. The dual representation follows by using the function $f$ in Equation (\ref{eq6}).
\end{proof}
If, on the contrary, $X_{T}^{i}(w,y)$ is a negative deterministic homogeneous function, then the illiquidity risk measure $\delta^{i}$, as discussed previously, admits a dual representation according to the above proposition and the Fenchel-Moreau theorem. Again, we are assuming that $\rho(U)<+\infty$ and $\rho(U)>-\infty$ for some $U\in \mathcal{Z}^{i}$. If one wants to derive the illiquidity risk measure $\delta^i$ and the capital requirement corresponding to Example (\ref{exvar}), the procedure to follow is identical.
\begin{example}
Once again, suppose that the security's price is given by $X_{T}^{i}(w,y)=\tilde{X}_{T}^{i}(w) + M_{T}^{i}(w)y$ with $X_{T}^{i}(w,y)$ positive and essentially bounded, and consider the following risk measure defined on the space of essentially bounded measurable functions, $L^{\infty}(\Omega,\mathcal{F},\mathbb{P})$, i.e.
\begin{eqnarray}
\beta^{i}(y)&=&\rho(Z^{i}_y)=\frac{1}{\lambda} \log \mathbb{E}_{\mathbb{P}}(\exp\{-\lambda Z^{i}_y\})\nonumber \\&=&\frac{1}{\lambda}\log \mathbb{E}_{\mathbb{P}}(\exp\{-\lambda (\tilde{X}_{T}^{i}(w) - M_{T}^{i}(w)y)\})
\end{eqnarray}
where $\lambda\in[0,+\infty)$ is the risk aversion parameter. This convex risk measure is called the entropic risk measure and is strictly related to the exponential utility function (see F\"ollmer and Knispel (2011)). As can be checked, $\beta^{i}$ satisfies Proposition (\ref{prop1}) and can be represented according to Theorem (\ref{thm1}). The capital requirement is equal to $y(\beta^{i}(y)+X_{0}^{i}(y))$.
This example also shows that if we defined an illiquidity risk measure $\delta^i$ on $\mathbb{R}_{<0}$, $\delta^i$ would be a function that is decreasing, cash super-additive, and proper convex with a well-defined dual representation. This fact again confirms why we did not develop a general duality theory for the illiquidity risk measures defined on $\mathbb{R}_{<0}$.
\end{example}
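The entropic example can be replayed on a finite $\Omega$; the probabilities, prices, and impact factors below are invented for illustration. Cash additivity of $\rho$ and the monotonicity of $\beta^{i}$ in $y$ follow directly.

```python
# Sketch on a finite Omega (made-up numbers): the entropic measure
# rho(Z) = (1/lambda) log E[exp(-lambda Z)] is cash additive, so
# beta(y) = rho(X_tilde - M*y) inherits the axioms discussed above.

import math

P = [0.25, 0.25, 0.5]                      # hypothetical probabilities
X_tilde = [10.0, 9.0, 11.0]
M = [0.3, 0.5, 0.2]                        # positive impact factors
lam = 2.0

def rho(Z):
    return math.log(sum(p * math.exp(-lam * z) for p, z in zip(P, Z))) / lam

def beta(y):
    return rho([x - m * y for x, m in zip(X_tilde, M)])

# cash additivity: rho(Z + c) = rho(Z) - c
Z = [x - m * 1.5 for x, m in zip(X_tilde, M)]
c = 0.7
assert abs(rho([z + c for z in Z]) - (rho(Z) - c)) < 1e-9

# beta increases with the traded quantity y
assert beta(2.0) > beta(1.0) > beta(0.5)
```

Monotonicity in $y$ holds because $Z^{i}_y$ decreases pointwise in $y$ while the entropic $\rho$ is decreasing in its argument.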
\section{Measuring the illiquidity risk when financial institutions split their trades into smaller ones}\label{sec4}
As outlined in the previous section, the convexity of illiquidity risk measures induces financial institutions to break up their large trades into smaller orders in order to reduce the illiquidity risk. Based on this observation, in this section we suppose, similarly to the paper by Acerbi and Scandolo (2008), that financial institutions sell a quantity $y>0$ of a security $i$ by breaking it up into smaller orders $\Delta y_j$ so as to minimize the liquidity risk. Financial institutions sell at the highest price first, selling units $\Delta y_j\leq \Delta x_j$ until $\sum_{j}\Delta y_j=y$, where $\Delta x_j$ gives the maximum amount that can be sold at the price $X^{i}_{T}(w,-y_j)$ in one single order.
In this situation, we are dealing with a cash flow given by
\begin{equation}\label{carje}
\sum_{j}(X^{i}_{T}(w, -y_{j})-X^{i}_{0}(y))\Delta y_j \quad \text{for} \quad y>0
\end{equation}
with $X^{i}_{T}(w, y)$ as in Assumption (\ref{ass1}), $X^{i}_{T}(w, -y_{j})\geq X^{i}_{T}(w, -y_{k})$ if $j\leq k$, and $\Delta y_{j}>0$. Note that $X^{i}_{T}(w, -y)$ is decreasing monotonic in $y$. Furthermore, nothing changes if in Equation (\ref{carje}) we assume that the trading at time $0$ also takes place in a split-order form. The cash flow in this case will be
\begin{equation*}
\sum_{j}X^{i}_{T}(w, -y_{j})\Delta y_j-\sum_{k}X^{i}_{0}(y_k)\Delta y_k \quad \text{for} \quad y>0
\end{equation*}
with $\sum_{k}\Delta y_k=y$, $\Delta y_k\leq\Delta x_k$, and $\Delta x_k$ the maximum amount that can be bought. One can then suppose that financial institutions buy at the lowest price first, so that $X^{i}_{0}$ is increasing in $y$.
For the continuous version of Equation (\ref{carje}) to exist, we have to impose some conditions on the random variable $X^{i}_{T}(w, y)$. In particular, for convenience, we require $X^{i}_{T}(w,y)$ to be bounded a.s. in $\Omega$ for every $y\in\mathbb{R}$. Under these assumptions, the continuous version of the sum above is the integral
\begin{equation}
\int_{0}^{y}X^{i}_{T}(w, -u)du - yX^{i}_{0}(y)\quad \text{for} \quad y>0
\end{equation}
The risk is therefore captured by the random variable $Z^{i}_{y}:\Omega \rightarrow \mathbb{R}$ given by $Z^{i}_{y}=\int_{0}^{y}X^{i}_{T}(w, -u)du$. As an immediate result, we obtain that $Z^{i}_{y}$ is increasing in $y\in\mathbb{R}\setminus\{0\}$. Furthermore, it is concave in $y\in\mathbb{R}\setminus\{0\}$, since $X^{i}_{T}(w, -y)$ is a decreasing function of $y$. Note that we no longer assume that $X^{i}_{T}(w, -y)$ is concave in $y\in\mathbb{R}\setminus\{0\}$.
At this stage, one would like to define illiquidity risk measures on the space $\mathbb{R}_{>0}$. Fortunately, the theory presented in the previous section applies \textit{in toto} to the case when $Z^{i}_{y}$ is equal to $\int_{0}^{y}X^{i}_{T}(w, -u)du$.
Using Definition (\ref{def2}), one easily obtains that the illiquidity risk measure $\beta^{i}$ is decreasing, cash super-additive (or translationally super-variant), and convex for $y>0$. The difference now is that $\beta^{i}$ is decreasing and cash super-additive rather than increasing and cash sub-additive. The decreasing property can be derived by noting that $Z^{i}_{y}(w)\geq Z^{i}_{v}(w)$ implies $\rho(Z^{i}_{y}) \leq \rho(Z^{i}_{v})$ when $y\geq v$ and $y,v>0$. This property thus says that the more the financial institution's long position increases, the more the illiquidity risk measure decreases. This can be attributed to the trade-splitting effect, which, in a market without inherent limits, minimizes the impact on the securities' prices. On the other side, cash super-additivity can be obtained by noting that $\beta^{i}(y+m)=\rho(Z^{i}_{y+m})\leq\rho(Z^{i}_{y})\leq\rho(Z^{i}_{y}-m)=\rho(Z^{i}_{y})+m=\beta^{i}(y)+m$ for all $m\geq0$.
Since $y=0$ implies $X^{i}_{T}(w, 0)=\tilde{X}_{T}^{i}(w)$ with $X^{i}_{T}(w, -y)\leq X^{i}_{T}(w, 0)\leq X^{i}_{T}(w, y)$ for every positive $y$, we see that $Z^{i}_{y}$ is concave for all $y\in\mathbb{R}$. It will then follow that if we define a function $f^i$ as in Equation (\ref{eq6}) and $\hat{\beta}^{i}$ as $f^i(h-x)-x=f^i((y+x)-x)-x$ with $h, x\in\mathbb{R}$, the dual representation of Theorem (\ref{thm1}) holds since $\hat{\beta}^{i}$ is Lipschitz continuous and convex besides being decreasing and cash-additive.
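The passage from the split-order sum to the integral can be illustrated with a midpoint Riemann sum under the linear impact of Example (\ref{exlin}); all numbers below are assumptions.

```python
# Assumed linear-impact toy: the split-order proceeds sum
# sum_j X(w, -u_j) * du approaches the integral int_0^y X(w, -u) du,
# which for X = X_tilde - a*u equals y*X_tilde - a*y**2/2.

x_tilde, a, y = 10.0, 0.4, 3.0
n = 10_000
du = y / n
riemann = sum((x_tilde - a * (j + 0.5) * du) * du for j in range(n))
exact = y * x_tilde - a * y**2 / 2
assert abs(riemann - exact) < 1e-6
```

For a linear integrand the midpoint rule is exact up to floating-point error, so even a coarse splitting already realizes essentially the integral proceeds $y\tilde{X}-ay^{2}/2$.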
\begin{example}\label{last}
This example shows how the strategy of breaking up trades into smaller ones reduces the illiquidity risk measure $\beta^{i}$.
Suppose that the price $X^{i}_{T}(w,y)$ is as in Example (\ref{exlin}) and we want to compute the illiquidity risk measure $\beta^{i}(y)=\rho(Z^{i}_{y})=-\inf_{w\in\Omega}\{\int_{0}^{y}X^{i}_{T}(w, -u)du\}$. That is
\begin{equation}
\beta^{i}(y)=-\inf_{w\in\Omega}\{\int_{0}^{y}X^{i}_{T}(w, -u)du\} \quad y>0
\end{equation}
which can again be written as
\begin{eqnarray*}
\beta^{i}(y)&=&-\inf_{w\in\Omega}\{\int_{0}^{y}(\tilde{X}^{i}_{T}(w)-au) du\}
\\&=&-y\inf_{w\in\Omega}\{\tilde{X}^{i}_{T}(w)\}+a\frac{y^2}{2}\\&=&y\rho(\tilde{X}^{i}_{T})+a\frac{y^2}{2}\end{eqnarray*}
We note that $\beta^i$ is decreasing, cash super-additive, and convex. Moreover, $Z^{i}_{y}$ is concave for all $y\in\mathbb{R}$, and the dual representation of the risk measure $\beta^i$ holds. The capital requirement is given by $y(\rho(\tilde{X}^{i}_{T})+a\frac{y}{2}+X^{i}_{0}(y))$. Compared to the case when a given financial institution sells $y>0$ units of the security $i$ without breaking them up into smaller pieces, the capital requirement is smaller, since $y(\rho(\tilde{X}^{i}_{T})+a\frac{y}{2}+X^{i}_{0}(y))< y(\rho(\tilde{X}^{i}_{T})+ay+X^{i}_{0}(y))$.
If we assume further that the initial monetary value of the position $y>0$ is given by $-\sum_{k}X^{i}_{0}(y_k)\Delta y_k$, or in integral form by $-\int_{0}^{y}X^{i}_{0}(u)du$, the capital requirement is $y(\rho(\tilde{X}^{i}_{T})+ay+X_{0}^{i}(0))$, which, as can be seen, is smaller than $y(\rho(\tilde{X}^{i}_{T})+ay+X^{i}_{0}(y))$.
\end{example}
If instead we assume that $X^{i}_{T}(w,y)$ is given by $X^{i}_{T}(w,y)=\tilde{X}^{i}_{T}(w)\pm\gamma|y|^{\alpha}$, $\alpha<1$, $\gamma>0$, the illiquidity risk measure $\beta^{i}(y)=-\inf_{w\in\Omega}\{\int_{0}^{y}X^{i}_{T}(w, -u)du\}$ is given by
\begin{equation}
\beta^{i}(y)=-\inf_{w\in\Omega}\{\int_{0}^{y}X^{i}_{T}(w, -u)du\} \quad y>0
\end{equation}
which again is
\begin{eqnarray*}
\beta^{i}(y)&=&-\inf_{w\in\Omega}\{\int_{0}^{y}(\tilde{X}^{i}_{T}(w)-\gamma u^{\alpha}) du\}
\\&=&-y\inf_{w\in\Omega}\{\tilde{X}^{i}_{T}(w)\}+\gamma\frac{y^{\alpha+1}}{\alpha+1}\\&=&y\rho(\tilde{X}^{i}_{T})+\gamma\frac{y^{\alpha+1}}{\alpha+1}\end{eqnarray*}
The capital requirement is then given by $y(\rho(\tilde{X}^{i}_{T})+\gamma\frac{y^{\alpha}}{\alpha+1}+X^{i}_{0}(y))$.
All of the results previously obtained are still valid including the results (with the appropriate changes) concerning the illiquidity risk measure $\delta^i$. One of these is the increasing property of the illiquidity risk measure $\delta^i$.
\section{Multivariate illiquidity risk measures}\label{secmult}
In this section we discuss illiquidity risk measures for the multivariate case. We aim to introduce illiquidity risk measures for a portfolio composed of $n$ assets. As a starting point, we introduce the concept of a portfolio that we will use in the rest of the paper.
\begin{definition}\label{def3}
A portfolio $\textbf{y}$ is a vector $\textbf{y}=(y_1,y_2,...,y_n)\in\mathbb{R}^{n}\setminus\{\textbf{U}\}$, where $y_i$ denotes the position of the financial institution in the asset $i$, and $\textbf{U}$ is given by $\{\textbf{v}\in\mathbb{R}^{n}: \text{at least one component $v_i$ of $\textbf{v}$ is zero}\}$. We say the financial institution is long in asset $i$ when $y_i>0$ and short when $y_i<0$.
\end{definition}
As discussed in the beginning of this paper, we will build a general duality theory only for those portfolios composed of $n$ long positions.
\begin{definition}\label{def4}
Fix a measurable space $(\Omega,\mathcal{F})$. Let $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$ be a portfolio. The risk of the portfolio $\textbf{y}$ is related to the random variable $Z_{\textbf{y}}:\Omega \rightarrow \mathbb{R}$, with $Z_{\textbf{y}}$ given as
\begin{equation}
Z_{\textbf{y}}(\omega)=\sum_{i=1}^{n}Z^{i}_{y_{i}}(\omega)
\end{equation}
where $y_i>0$. The random variables $Z^{i}_{y_{i}}:\Omega \rightarrow \mathbb{R}$ are measurable with respect to $\mathcal{F}$ for each $i=1,2,...,n$ and assume the following form
\begin{equation*}
Z^{i}_{y_{i}}=X_{T}^{i}(w,-y_i)
\end{equation*}
$X_{T}^{i}(w,y_i)$ denotes the price of security $i$ at time $T$, and $X_{0}^{i}(y_i)$ the price of security $i$ at time $0$ corresponding to the quantity $y_i$. It is assumed that $X_{T}^{i}(w,y_i)$ satisfies Assumption (\ref{ass1}) for each $i=1,2,...,n$.
\end{definition}
Note that by $\mathbb{R}_{+}^{n}$ we denote the positive elements of $\mathbb{R}^{n}$, i.e. $\textbf{p}\in\mathbb{R}_{+}^{n}$ if $p_i\geq0$ for each $i=1,2,...,n$. For simplicity of notation, we set $Z_{\textbf{y}}:=Z_{T,\textbf{y}}=Z_{T}(y_1,y_2,...,y_n)$. For each $\textbf{y}$, the random variable $Z_{\textbf{y}}$ is interpreted as the risk coming from a position $y_i$, $i=1,2,...,n$, in each of the $n$ securities.
Consider a portfolio made up of $n$ long positions. As in the univariate case, we shall assume that $Z^{i}_{y_{i}}$ is concave for each $y_i\in\mathbb{R}$. As a result, $Z_{\textbf{y}}$ is concave in $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, and clearly in $\mathbb{R}^{n}$. This can be deduced from Assumption (\ref{ass2}) and the well-known fact that a decomposable function $Z_{\textbf{y}}=\sum_{i=1}^{n}Z^{i}_{y_{i}}$ is concave if all its components are concave.
\begin{assumption}\label{ass3}
The function $Z_{\textbf{y}}$ is concave on $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, that is,
\begin{equation}
Z_{\lambda\textbf{y}+(1-\lambda)\textbf{v}}\geq \lambda Z_{\textbf{y}}+(1-\lambda)Z_{\textbf{v}}\quad \textbf{y},\textbf{v}\in \mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}\quad 0\leq\lambda \leq 1
\end{equation}
\end{assumption}
The random variables $Z_{\textbf{y}}$, $\textbf{y}\in\mathbb{R}^{n}$, are assumed to belong to a space $\mathcal{Z}$ of random variables. We equip this space with a convex risk measure functional $\rho: \mathcal{Z} \rightarrow \mathbb{R}$ satisfying decreasing monotonicity, cash invariance, and convexity for every $S,U \in\mathcal{Z}$. We can therefore give the following definition of an illiquidity risk measure on the space $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$.
\begin{definition}\label{def5}
Given $\rho: \mathcal{Z} \rightarrow\mathbb{R}$ a convex risk measure functional on space $\mathcal{Z}$, the illiquidity risk measure $\beta$ on $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$ is defined as
\begin{equation}
\beta(\textbf{y})=\rho(Z_{\textbf{y}}) \quad \forall \textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation}
\end{definition}
One then readily checks that $\beta$ is an illiquidity risk measure satisfying the following axioms.
\begin{itemize}
\item[a)] Increasing monotonicity: if $\textbf{y}\geq \textbf{v}$ in $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, that is, $y_i\geq v_i$ for every $i=1,2,...,n$, then $\beta(\textbf{y}) \geq \beta(\textbf{v})$;
\item[b)] Cash sub-additivity (or translationally super-variance): $\beta(\textbf{y}+m\textbf{e})\geq \beta(\textbf{y})-m$, $\forall m\geq 0$, $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, and $\textbf{e}=(1,1,...,1)$;
\item[c)] Convexity: $\forall \textbf{y}, \textbf{v} \in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, then $\beta(\lambda \textbf{y} + (1 -\lambda)\textbf{v})\leq \lambda\beta(\textbf{y}) + (1 -\lambda)\beta(\textbf{v}),
0 \leq \lambda \leq 1$.
\end{itemize}
These axioms follow easily by recalling the properties of the functions $Z^{i}_{y_{i}}$ and the particular form of $Z_{\textbf{y}}$. More precisely, use the fact that each of the $Z^{i}_{y_{i}}$ is decreasing in $y_i$, concave in $y_i$, and that $Z_{\textbf{y}}$ is a decomposable function to derive each of the three axioms above.
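As a concrete sanity check, the three axioms can be verified on a finite sample space with the worst-case risk measure $\rho(Z)=-\inf_{w}Z(w)$ and a linear price impact; the scenario prices and impact slopes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite sample space: 5 scenarios, 3 securities (illustrative numbers).
X_tilde = rng.uniform(50.0, 150.0, size=(5, 3))   # unaffected prices X~_T^i(w)
a = np.array([0.2, 0.5, 1.0])                     # assumed price-impact slopes

def Z(y):
    # Z_y(w) = sum_i (X~_T^i(w) - a_i*y_i): decreasing and concave (affine) in y
    return X_tilde.sum(axis=1) - a @ y

def beta(y):
    return -Z(y).min()        # worst-case risk measure rho(Z) = -inf_w Z(w)

y = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 1.0, 2.0])

# a) increasing monotonicity: y >= v componentwise => beta(y) >= beta(v)
assert beta(y) >= beta(v)
# b) cash sub-additivity: beta(y + m*e) >= beta(y) - m for m >= 0
m, e = 4.0, np.ones(3)
assert beta(y + m * e) >= beta(y) - m
# c) convexity (equality here, since Z is affine in y)
lam = 0.3
assert beta(lam*y + (1-lam)*v) <= lam*beta(y) + (1-lam)*beta(v) + 1e-9
```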
\subsection{Dual representation of the multivariate illiquidity risk measure}
Theorem (\ref{thm1}) states that the illiquidity risk measure $\beta^{i}$ defined on the space $\mathbb{R}_{>0}$ has a dual representation for every proper convex risk measure defined on the space $\mathcal{Z}^{i}$. The aim of this subsection is to extend this result to the multivariate case.
To this end, it will be more instructive to work first with the space $\mathcal{Z}$ of all bounded measurable functions defined on $(\Omega,\mathcal{F})$. We suppose each $Z^{i}_{y_{i}}$ belongs to the space $\mathcal{Z}$. As a sum of bounded measurable functions, the random variable $Z_{\textbf{y}}$ belongs to $\mathcal{Z}$. Now let us consider a real-valued function $f$ defined on the space $\mathbb{R}^{n}$ by
\begin{equation}\label{eq12}
f(\textbf{y}) =
\left\{\begin{array}{l l}
\beta(\textbf{y}) & \quad \text{if $\textbf{y}\in\mathbb{R}_{+}^{n}$}\\
\rho(Z_{\textbf{y}}) & \quad \text{if $\textbf{y}\in\mathbb{R}^{n}\setminus\mathbb{R}_{+}^{n}$}
\end{array}\right.
\end{equation}
where $\textbf{y}\in\mathbb{R}^{n}\setminus\mathbb{R}_{+}^{n}$ means that $\textbf{y}\in\mathbb{R}^{n}$ and $\textbf{y}\notin\mathbb{R}_{+}^{n}$. It is immediate that $f(\textbf{y})$ is increasing, cash sub-additive, and convex in $\textbf{y}$. We put $f(\boldsymbol{0})=\beta(\boldsymbol{0})=\rho(Z_{\boldsymbol{0}})=\rho(\sum_{i=1}^{n}Z^{i}_{0})$, that is, the risk measure of a liquid buy portfolio.
We introduce a new function $\hat{\beta}$ in the same manner as we did in the previous section. More explicitly, for all $\textbf{h},\textbf{x}\in\mathbb{R}^{n}$, we let $\hat{\beta}(\textbf{h},\textbf{x})\stackrel{\text{def}}{=}f(\textbf{h}-\textbf{x})+\textbf{x}\stackrel{\text{def}}{=}f((\textbf{y}+\textbf{x})-\textbf{x})+\textbf{x}$.
The proof of the proposition below is identical to that of Proposition (\ref{prop3}) and is left to the reader.
\begin{proposition}\label{prop5}
The function $\hat{\beta}(\textbf{h}, \textbf{x})$ defined as $f(\textbf{h}-\textbf{x})+\textbf{x}$ is increasing monotonic, translationally invariant, and convex for all $(\textbf{h},\textbf{x}) \in \mathbb{R}^{2n}$.
\end{proposition}
The other result, which we have already shown in the univariate case, is that the function $\hat{\beta}(\overline{\textbf{h}})$ with $\overline{\textbf{h}}=(\textbf{h},\textbf{x})$ is Lipschitz continuous with constant equal to $\sqrt{2n}$ on the space $\mathbb{R}^{2n}$.
\begin{lemma}\label{lemm2}
The multivariate function $\hat{\beta}(\overline{\textbf{h}})$ is Lipschitz continuous with respect to the norm
$||\cdot||$ on $\mathbb{R}^{2n}$, that is
\begin{equation}
|\hat{\beta}(\overline{\textbf{h}}) - \hat{\beta}(\overline{\textbf{v}})| \leq ||\overline{\textbf{h}}- \overline{\textbf{v}}||
\end{equation}
for every $\overline{\textbf{h}}$ and $\overline{\textbf{v}}$ in $\mathbb{R}^{2n}$.
\end{lemma}
At this stage we have everything we need to apply the Fenchel-Moreau theorem to the multivariate function $\hat{\beta}(\overline{\textbf{h}})$. By this theorem, $\hat{\beta}$ is proper, convex, and lower semicontinuous if and only if it coincides with its Fenchel biconjugate, $\hat{\beta}(\overline{\textbf{h}}) = \hat{\beta}(\overline{\textbf{h}})^{**}$. Since $\hat{\beta}$ is proper, convex, and (by Lemma (\ref{lemm2})) continuous, the biconjugate representation holds. We record this important result in the following theorem.
\begin{theorem}\label{thm3}
The function $\hat{\beta}(\overline{\textbf{h}})=f(\textbf{h}-\textbf{x})+\textbf{x}$ with $f(\textbf{y})$ defined as in Equation (\ref{eq12}), has the following dual representation
\begin{equation}\label{eqnew}
\hat{\beta}(\overline{\textbf{h}})=\hat{\beta}(\overline{\textbf{h}})^{**}=\sup_{\overline{\textbf{v}}\in\mathbb{R}^{2n}}\{\overline{\textbf{h}}^{\text{T}}\overline{\textbf{v}}-\hat{\beta}^{*}(\overline{\textbf{v}})\}
\end{equation}
\end{theorem}
If we set $\textbf{x}=\boldsymbol{0}$ and take only $\textbf{y}\in\mathbb{R}^{n}_{+}\setminus\{\textbf{U}\}$, we can state the following corollary to the theorem above, which permits us to compute the illiquidity risk measure for every $\textbf{y}\in\mathbb{R}^{n}_{+}\setminus\{\textbf{U}\}$.
\begin{corollary}\label{cor3}
Any illiquidity risk measure on $\mathbb{R}^{n}_{+}\setminus\{\textbf{U}\}$ defined as $\beta(\textbf{y}) = \rho(Z_{\textbf{y}})$, where $\rho$ is a convex risk measure on the linear space $\mathcal{Z}$ of bounded random variables and the multivariate function $Z_{\textbf{y}}$ is increasing and concave in $\textbf{y}$, has the following dual representation
\begin{equation}
\beta(\textbf{y}) = \sup_{\textbf{v}\in\mathbb{R}^{n}}\{\textbf{y}^{\text{T}}\textbf{v}-f^{*}(\textbf{v})\}\quad \forall \textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation}
with conjugate $f^{*}$ given by
\begin{equation}
f^{*}(\textbf{v})=\sup_{\textbf{y}\in\mathbb{R}^{n}}\{\textbf{v}^{\text{T}}\textbf{y}-f(\textbf{y})\}
\end{equation}
and $f$ as in Equation (\ref{eq12}).
\end{corollary}
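The biconjugation underlying Corollary (\ref{cor3}) can be illustrated numerically: on a grid, the discrete Legendre--Fenchel transform of a convex function followed by a second transform recovers the function on the interior of the grid. The toy function below (with $n=1$ for readability) is an illustrative assumption, not the paper's $\beta$:

```python
import numpy as np

# 1-d illustration (n = 1): a convex toy function standing in for f;
# the function and grids are assumptions for illustration only.
grid = np.linspace(-5.0, 5.0, 401)
f = 0.5 * grid**2 + grid                 # convex in y

# discrete Legendre-Fenchel transform f*(v) = sup_y { v*y - f(y) }
vgrid = np.linspace(-6.0, 6.0, 481)
f_star = np.array([np.max(v * grid - f) for v in vgrid])

# biconjugate f**(y) = sup_v { y*v - f*(v) } recovers f (Fenchel-Moreau)
f_bi = np.array([np.max(yy * vgrid - f_star) for yy in grid])

# agreement on interior points (grid-truncation effects at the boundary)
interior = np.abs(grid) <= 4.0
assert np.max(np.abs(f_bi[interior] - f[interior])) < 1e-2
```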
\subsection{Multivariate illiquidity risk measures on general probability spaces}
By Subsection (\ref{relat}), convex risk measure functionals on the space of bounded measurable functions assume the form $\rho(Z)=\sup_{h\in ba}\{h(Z)-\rho^{*}(h)\}$ for all $Z\in\mathcal{Z}$. As a consequence, we obtain
\begin{equation*}
\beta(\textbf{y})=\rho(Z_{\textbf{y}})=\sup_{Q\in \mathcal{M}_{1,f}}\{\mathbb{E}_{Q}(-Z_{\textbf{y}})-\alpha(Q)\}\quad \forall Z_{\textbf{y}}\in\mathcal{Z}, \textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation*}
Fixing a probability measure $\mathbb{P}$ on the space $(\Omega,\mathcal{F})$, we can provide the illiquidity risk measure $\beta$ with a dual representation on the space $\mathcal{Z}=L^{\infty}(\Omega,\mathcal{F},\mathbb{P})$ different from that of Corollary (\ref{cor3}), namely
\begin{equation*}
\beta(\textbf{y})=\rho(Z_{\textbf{y}})=\sup_{Q\in \mathcal{M}_{1,g}}\{\mathbb{E}_{Q}(-Z_{\textbf{y}})-\alpha(Q)\}\quad \forall Z_{\textbf{y}}\in\mathcal{Z}, \textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation*}
If we assume further that $\rho$ is lower semicontinuous, the illiquidity risk measure $\beta$ on the space $L^{p}(\Omega,\mathcal{F},\mathbb{P})$ is equal to
\begin{equation*}
\beta(\textbf{y})=\rho(Z_{\textbf{y}})=\sup_{\mathbb{Q}\in \mathcal{M}_{1,q}}\{\mathbb{E}_{\mathbb{Q}}(-Z_{\textbf{y}})-\alpha(\mathbb{Q})\}\quad \forall Z_{\textbf{y}}\in\mathcal{Z}, \textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation*}
We conclude by pointing out that the multivariate illiquidity risk measure admits the dual representation of Corollary (\ref{cor3}) whenever $\rho$ is a proper convex risk measure satisfying Definition (\ref{def5}).
\begin{example}\label{fund}
Suppose each $Z^{i}_{y_i}$, $i=1,2,...,n$, belongs to the space of bounded measurable functions $\mathcal{Z}$. Assume in addition that $Z^{i}_{y_{i}}$ is affine in $y_i$ for every $i=1,2,...,n$, that is, $Z^{i}_{y_{i}}=X_{T}^{i}(w,-y_{i})$ where $X_{T}^{i}(w,y_i)=\tilde{X}_{T}^{i}(w)+a_iy_i$ is positive bounded measurable, and $X_{0}^{i}$ is positive bounded. The random variable $Z_{\textbf{y}}$ is then given by
\begin{equation}\label{shqip}
Z_{\textbf{y}}=\sum_{i=1}^{n}Z^{i}_{y_{i}} = \sum_{i=1}^{n}(\tilde{X}_{T}^{i}(w)-a_iy_i)
\end{equation}
Since each $Z^{i}_{y_{i}}$ is decreasing and concave, the equality above implies that $Z_{\textbf{y}}$ is decreasing and concave.
To demonstrate how to compute an illiquidity risk measure on $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, we will use the same risk measure as in Example (\ref{exlin}). We thus consider the illiquidity risk measure $\beta$ on $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$ defined as
\begin{equation}
\beta(\textbf{y})=\rho(Z_{\textbf{y}})=-\inf_{w\in\Omega}\{\sum_{i=1}^{n}(\tilde{X}_{T}^{i}(w)-a_iy_{i})\}
\end{equation}
and therefore
\begin{eqnarray}\label{eq37}
\beta(\textbf{y})&=&\sum_{i=1}^{n}-\inf_{w\in\Omega}\{\tilde{X}_{T}^{i}(w)-a_iy_{i}\}\nonumber\\&=&\sum_{i=1}^{n}(\rho(\tilde{X}_{T}^{i})+a_iy_{i})\nonumber\\&=&\sum_{i=1}^{n}\beta^{i}(y_i)
\end{eqnarray}
In this case, the portfolio illiquidity risk measure is simply the sum of individual security
illiquidity risks. As a result, it is also increasing monotonic, cash sub-additive, and convex in $\textbf{y}$.
The capital requirement of a given portfolio $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$ can be calculated as $\sum_{i=1}^{n}y_i(\rho(\tilde{X}^{i}_{T})+a_iy_i+X_{0}^{i}(y_i))$, and, as can be seen, it is an increasing function of $\textbf{y}$. Note that the standard (liquidity-free) risk measure is recovered by setting $\textbf{y}=\textbf{0}$ in Equation (\ref{eq37}). Finally, according to Corollary (\ref{cor3}) we can also give a dual representation of the illiquidity risk measure $\beta$.
\end{example}
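The additivity $\beta(\textbf{y})=\sum_{i}\beta^{i}(y_i)$ in Equation (\ref{eq37}) can be checked numerically with the worst-case risk measure. In the sketch below the unaffected prices are driven by a single common factor, so that the worst scenario is the same for every security; all numbers are illustrative assumptions:

```python
import numpy as np

# Worst-case risk measure on a finite sample space with linear impact
# X_T^i(w, y) = X~_T^i(w) + a_i*y.  The (assumed) unaffected prices share
# one common factor, so the scenario-wise minima coincide across securities.
s = np.array([0.8, 0.9, 1.0, 1.1, 1.25])      # common market factor s(w)
c = np.array([100.0, 50.0, 80.0])             # per-security scale c_i
X_tilde = np.outer(s, c)                      # X~_T^i(w) = s(w) * c_i
a = np.array([0.2, 0.5, 1.0])
y = np.array([1.0, 2.0, 3.0])

rho = lambda X: -X.min(axis=0)                # worst case, per security

beta_portfolio = -np.min(X_tilde.sum(axis=1) - a @ y)   # rho(Z_y) directly
beta_components = np.sum(rho(X_tilde) + a * y)          # sum_i beta^i(y_i)
assert abs(beta_portfolio - beta_components) < 1e-9
```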
\begin{proposition}\label{proop}
Given a proper and cash-additive risk functional $\rho$ on the space of random variables $\mathcal{Z}$ and additively separable security prices of the form $X_{T}^{i}(w, y_i) = \tilde{X}_{T}^{i}(w)+h^{i}(y_i)$ with $h^{i}(y_i)$ increasing and concave on $\mathbb{R}$, the function $\beta(\textbf{y})=\rho(Z_{\textbf{y}})=\rho(\sum_{i=1}^{n}(\tilde{X}_{T}^{i}(w)+h^{i}(-y_i)))$ with $Z_{\textbf{y}}\in\mathcal{Z}$ and $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$, is a risk measure which satisfies Proposition (\ref{prop5}) and has a dual representation as in Corollary (\ref{cor3}).
\end{proposition}
\begin{proof}
The proof is an easy exercise. It simply follows by noticing that the functional $\rho$ satisfies $\rho(\sum_{i=1}^{n}(\tilde{X}_{T}^{i}(w)+h^{i}(-y_i)))=\rho(\sum_{i=1}^{n}\tilde{X}_{T}^{i}(w))-\sum_{i=1}^{n}h^{i}(-y_i)$. The results then follow from the properties of the functions $h^{i}$.
\end{proof}
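The cash-additivity step used in the proof can be checked numerically, e.g. with the worst-case measure $\rho(X)=-\inf_{w}X(w)$ and the hypothetical choice $h^{i}(u)=1-e^{-u}$, which is increasing and concave on $\mathbb{R}$; all inputs below are illustrative:

```python
import numpy as np

# rho(X) = -inf_w X(w) is cash-additive: constants pass through with a sign flip.
rng = np.random.default_rng(2)
X_tilde = rng.uniform(50.0, 150.0, size=(6, 3))   # scenarios x securities
h = lambda u: 1.0 - np.exp(-u)                    # increasing, concave on R
rho = lambda X: -X.min()
y = np.array([1.0, 2.0, 3.0])

const = np.sum(h(-y))                        # sum_i h^i(-y_i), a constant
lhs = rho(X_tilde.sum(axis=1) + const)       # rho(sum_i X~^i + sum_i h^i(-y_i))
rhs = rho(X_tilde.sum(axis=1)) - const       # rho(sum_i X~^i) - sum_i h^i(-y_i)
assert abs(lhs - rhs) < 1e-9
```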
\begin{remark}
Applying the same arguments as in Subsection (\ref{lp}) to Proposition (\ref{proop}), we see that an illiquidity risk measure $\delta$ defined on $\mathbb{R}_{-}^{n}\setminus\{\textbf{U}\}$ is decreasing, super-additive, and concave. The space $\mathbb{R}_{-}^{n}$ denotes the nonpositive elements of $\mathbb{R}^{n}$: $\textbf{p}\in\mathbb{R}_{-}^{n}$ if $p_i\leq0$ for each $i=1,2,...,n$, and $\textbf{V}=\{\textbf{v}\in\mathbb{R}^{n}: \text{at least one component $v_i$ of $\textbf{v}$ is zero}\}$. The dual representation follows exactly in the same way, but now one has to work on the space $\mathbb{R}_{-}^{n}$. Moreover, if $\rho$ is a linear risk measure and $X_{T}^{i}(w,y)=h^i(y)\tilde{X}_{T}^{i}(w)$ for all $i=1,2,...,n$ with $h^i$ increasing, positive, and concave, Proposition (\ref{propcar}) is still valid in the multivariate case. Also in this situation, we obtain a result similar to that found in Subsection (\ref{lp}) for the illiquidity risk measure $\delta$.
\end{remark}
\begin{example}
Fix a probability measure $\mathbb{P}$ on the space $(\Omega,\mathcal{F})$. We want to compute the $VaR_{\delta}$ of a portfolio $\textbf{y}$. To this end, we will suppose that the price of each security $i$ follows a geometric Brownian motion similar to that of Example (\ref{exvar}). That is, we take $X_{T}^{i}(w,y_i)=\exp\{a_iy_iT\}X_{0}^{i}(y_i)\exp\{(\mu_i-\frac{\sigma_{i}^2}{2})T+\sigma_{i} B_{T}^{i}\}=\exp\{a_iy_iT\}\tilde{X}_{T}^{i}(w)$ for every $i=1,2,...,n$. Under this assumption, we define the $VaR_{\delta}$ of a long portfolio $\textbf{y}$ as
{\small \begin{eqnarray}
&&\beta(\textbf{y})=VaR_{\delta}(Z_{\textbf{y}})\nonumber\\&&= \inf{\{m\in\mathbb{R}|\mathbb{P}(\sum_{i=1}^{n}\ln(X_{T}^{i}(w,-y_i))+m<0)\leq \delta\}}\nonumber\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(\sum_{i=1}^{n}\ln(\exp\{-a_iy_iT\}\tilde{X}_{T}^{i}(w))+m<0)\leq \delta\}}\nonumber\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(\sum_{i=1}^{n}-(a_iy_iT-\ln(X_{0}^{i}(y_i)))+\sum_{i=1}^{n}(\mu_i-\frac{\sigma_{i}^2}{2})T+\sum_{i=1}^{n}\sigma_{i}B_{T}^{i}<-m)\leq \delta\}}\nonumber\\&&=\inf{\{m\in\mathbb{R}|\mathbb{P}(\sum_{i=1}^{n}(\mu_i-\frac{\sigma_{i}^2}{2})T+\sum_{i=1}^{n}\sigma_{i}B_{T}^{i}<-m)\leq \delta\}}+\sum_{i=1}^{n}(a_iy_iT-\ln(X_{0}^{i}(y_i)))\nonumber
\end{eqnarray}}
where the last equality follows from the cash-additivity of the risk measure $VaR_{\delta}$.
We allow $B_{T}^{1}, B_{T}^{2},...,B_{T}^{n}$ to be dependent. Using the normality of $\ln(\tilde{X}_{T}^{i}(w))$ and the fact that a sum of jointly normal random variables is again normal, we then obtain that
\begin{eqnarray}
&&\beta(\textbf{y})=-\Phi^{-1}(\delta)\sqrt{T}\sqrt{\textbf{e}'\Sigma\textbf{e}}-\sum_{i=1}^{n}(\mu_i-\frac{\sigma_{i}^{2}}{2})T+\sum_{i=1}^{n}(a_iy_iT-\ln(X_{0}^{i}(y_i)))\nonumber\\&&=VaR_{\delta}(\sum_{i=1}^{n}(\ln(\tilde{X}_{T}^{i}(w))-\ln(X_{0}^{i}(y_i))))+\sum_{i=1}^{n}(a_iy_iT-\ln(X_{0}^{i}(y_i)))\nonumber\\&&=VaR_{\delta}(\sum_{i=1}^{n}\ln(\tilde{X}_{T}^{i}(w)))+\sum_{i=1}^{n}a_iy_iT
\end{eqnarray}
where $\textbf{e}$ is an $n\times 1$ vector of all ones, and $\Sigma$ is the covariance matrix of the assets which are in the portfolio.
We note that $\beta(\textbf{y})$ is cash sub-additive, convex, and increasing monotonic. Finally, since $\beta(\textbf{y})$ gives the worst return at a confidence level of $1-\delta$, the capital requirement is given by $\sum_{i=1}^{n}y_i X_{0}^{i}(y_i)\beta(\textbf{y})$.
\end{example}
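A quick Monte-Carlo check of the closed-form expression above, for assumed parameter values (two securities; $X_{0}^{i}$ taken constant in $y_i$ for simplicity — all numbers below are illustrative):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
T, delta = 1.0, 0.05
mu = np.array([0.05, 0.03]); sigma = np.array([0.2, 0.3]); corr = 0.4
Sigma = np.array([[sigma[0]**2, corr*sigma[0]*sigma[1]],
                  [corr*sigma[0]*sigma[1], sigma[1]**2]])  # cov of (sigma_i B_1^i)
a = np.array([0.01, 0.02]); y = np.array([5.0, 3.0])
X0 = np.array([100.0, 80.0])                               # X_0^i, constant here

# closed form: beta(y) = -Phi^{-1}(delta) sqrt(T) sqrt(e' Sigma e)
#                        - sum_i (mu_i - sigma_i^2/2) T + sum_i (a_i y_i T - ln X_0^i)
e = np.ones(2)
var_cf = (-NormalDist().inv_cdf(delta) * np.sqrt(T) * np.sqrt(e @ Sigma @ e)
          - np.sum((mu - sigma**2 / 2) * T)
          + np.sum(a * y * T - np.log(X0)))

# Monte Carlo: simulate sum_i ln X_T^i(w, -y_i) and take the delta-quantile
N = 400_000
sB = rng.multivariate_normal(np.zeros(2), Sigma * T, size=N)  # (sigma_i B_T^i)
L = np.sum(-a*y*T + np.log(X0) + (mu - sigma**2/2)*T + sB, axis=1)
var_mc = -np.quantile(L, delta)      # smallest m with P(L + m < 0) <= delta
assert abs(var_mc - var_cf) < 0.02
```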
\section{Multivariate illiquidity risk measures in presence of splitting trades}\label{secnew}
We use Definition (\ref{def3}) and the framework of Section (\ref{sec4}) to define the discrete version of the cash flow of a portfolio $\textbf{y}\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$
\begin{equation}
\sum_{i=1}^{n}\sum_{j}(X^{i}_{T}(w, -y^{i}_{j})-X^{i}_{0}(y^{i}))\Delta y^{i}_{j}
\end{equation}
with $\sum_{j}\Delta y^{i}_{j} =y^i$. That is, the financial institution liquidates at time $T$ a position of $y_i>0$ units in each security $i=1,2,...,n$. It is easy to see that the continuous version is
\begin{equation}\label{eqlesh}
\sum_{i=1}^{n}(\int_{0}^{y_i}X^{i}_{T}(w, -u)du-y^iX_{0}^{i}(y^i))
\end{equation}
with $Z_{\textbf{y}}$ equal to $\sum_{i=1}^{n}\int_{0}^{y_i}X^{i}_{T}(w, -u)du=\sum_{i=1}^{n}Z^{i}_{y_{i}}(w)$. Note that $X^{i}_{T}(w, -y_i)$ is assumed to be bounded a.s. in $\Omega$ and decreasing for every $y_i\in\mathbb{R}$, $i=1,2,...,n$. With $X^{i}_{T}(w, 0)=\tilde{X}^{i}_{T}(w)$ we denote the unaffected price of each security $i=1,2,...,n$.
It follows from Equation (\ref{eqlesh}) that $Z_{\textbf{y}}$ is increasing and concave in $\textbf{y}\in\mathbb{R}^{n}$. Then, as in the univariate case the illiquidity risk measure $\beta$ defined on $\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}$ is decreasing monotonic, cash super-additive, and convex. In order to obtain a dual representation for $\beta$, it suffices to introduce the function $\hat{\beta}(\textbf{h},\textbf{x})\stackrel{\text{def}}{=}f(\textbf{h}-\textbf{x})-\textbf{x}\stackrel{\text{def}}{=}f((\textbf{y}+\textbf{x})-\textbf{x})-\textbf{x}$. Hence, Theorem (\ref{thm3}) and Corollary (\ref{cor3}) hold.
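The passage from the discrete to the continuous cash flow is a Riemann-sum limit; with a linear impact $X_{T}(w,-u)=\tilde{X}_{T}-au$ in a single scenario (illustrative constants below), a midpoint discretisation already matches the integral exactly:

```python
import numpy as np

Xt, a_imp, y_tot = 100.0, 2.0, 5.0                 # illustrative constants
exact = Xt * y_tot - a_imp * y_tot**2 / 2          # int_0^y (Xt - a*u) du = 475

for J in (10, 100, 1000):                          # number of child orders
    du = y_tot / J
    u = (np.arange(J) + 0.5) * du                  # midpoint of each slice
    riemann = float(np.sum((Xt - a_imp * u) * du)) # sum_j X_T(w, -u_j) * du
    assert abs(riemann - exact) < 1e-9             # exact for affine integrands
```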
\begin{example}
Consider now Example (\ref{last}) and let $X^{i}_{T}(w, y_i)=\tilde{X}^{i}_{T}(w)+a_iy_i$. Suppose $\beta(\textbf{y})$ is given by
\begin{equation*}
\beta(\textbf{y})=-\inf_{w\in\Omega}\{\sum_{i=1}^{n}\int_{0}^{y_i}X^{i}_{T}(w, -u)du\}
\quad \textbf{y}=(y_1,y_2,...,y_n)\in\mathbb{R}_{+}^{n}\setminus\{\textbf{U}\}
\end{equation*}
Further substitutions yield
\begin{eqnarray*}
\beta(\textbf{y})&=&-\inf_{w\in\Omega}\{\sum_{i=1}^{n}\int_{0}^{y_i}(\tilde{X}^{i}_{T}(w)-a_iu)du\}
\\&=&-\inf_{w\in \Omega}\{\sum_{i=1}^{n}y_i\tilde{X}^{i}_{T}(w)\}+\sum_{i=1}^{n}a_i\frac{y_i^{2}}{2}\\
&=&\sum_{i=1}^{n}y_i\rho(\tilde{X}^{i}_{T})+\sum_{i=1}^{n}a_i\frac{y_i^{2}}{2}
\end{eqnarray*}
\end{example}
It can be verified that $\beta(\textbf{y})$ is decreasing, cash super-additive, convex, and admits the dual representation of Corollary (\ref{cor3}). The capital requirement is given by $\sum_{i=1}^{n}y_i(\rho(\tilde{X}^{i}_{T})+a_i\frac{y_i}{2}+X_{0}^{i}(y_i))$. A simple comparison between the capital requirement needed in the presence of splitting trades and the one without splits, given by $\sum_{i=1}^{n}y_i(\rho(\tilde{X}^{i}_{T})+a_iy_i+X_{0}^{i}(y_i))$, shows that breaking up trades reduces both the illiquidity risk measure and the capital requirement.
If the splitting takes place also at time $0$, the capital requirement is given by $\sum_{i=1}^{n}y_i(\rho(\tilde{X}^{i}_{T})+a_iy_i+X_{0}^{i}(0))$ which is clearly smaller than $\sum_{i=1}^{n}y_i(\rho(\tilde{X}^{i}_{T})+a_iy_i+X_{0}^{i}(y_i))$. See Example (\ref{last}) for this point.
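The three capital-requirement formulas can be compared on assumed numbers; here the values $\rho(\tilde{X}^{i}_{T})$ are replaced by fixed constants and $X_{0}^{i}$ is an increasing function of the traded quantity (all inputs illustrative):

```python
import numpy as np

# Illustrative inputs: rho(X~_T^i) as assumed constants, increasing X_0^i(q).
rho_tilde = np.array([-80.0, -40.0])
a = np.array([0.5, 1.0]); y = np.array([4.0, 6.0])
X0 = lambda q: np.array([50.0, 30.0]) + 0.1 * q

cr_no_split = np.sum(y * (rho_tilde + a * y     + X0(y)))    # no splitting
cr_split_T  = np.sum(y * (rho_tilde + a * y / 2 + X0(y)))    # split at time T
cr_split_0T = np.sum(y * (rho_tilde + a * y     + X0(0.0)))  # split also at time 0
assert cr_split_T < cr_no_split and cr_split_0T < cr_no_split
```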
We close by pointing out that the conclusions obtained from the previous section hold (with the necessary modifications) also in the framework of the present section.
\section{Conclusions}\label{conc}
We have extended the risk measurement theory to accommodate liquidity risk. The new mechanism is able to capture the liquidity risk arising from a financial institution's trading activities in securities. The goal is achieved by assuming that security prices depend on the traded volume. We propose several examples of risk measures under liquidity risk, such as VaR and the worst-case risk measure. The capital requirement is shown to be larger than the capital requirement in a standard risk measurement framework. In particular, trade splitting helps financial institutions to reduce liquidity risk. The properties of the risk measures differ from those of standard risk measures: on the buy side, they are convex, increasing monotonic, cash sub-additive functions, while on the sell side they are concave, decreasing monotonic, cash super-additive functions. We also provide dual representation results for this new class of risk measures.
A visit to the Highlands in Scotland comes highly recommended and when you have places to stay like the Wildside Highland Lodges, then you have the perfect holiday base. The holiday lodge accommodation here is quite simply stunning and there is a large range of holiday lodges to sleep anywhere up to six people.
The lodge park lies in a woodland location on the banks of the River Fechlin, putting you deep in the heart of the Highlands. From this stunning location you can spend your days exploring the local and surrounding attractions.
Get out and about and breathe the lovely fresh mountain air and enjoy a getaway from it all holiday or break.
The holiday lodges are luxurious with many additional features to ensure you have the ideal base to relax. Inside the lodges are furnished beautifully and retain a rustic charm and elegance.
Still they give you everything you need for a comfortable self catering break. Some of the features include Jacuzzi baths, woodburning stoves, power showers and freeview TV, DVD and CD players.
Explore the full range of holiday lodges with hot tubs across the UK from here.
Outside you can relax on the private decking area and take in the majestic views overlooking the mountains and down the river. After a day out you might like to spend some time in the private hot tub while you enjoy a glass or two of your favourite wine or beer.
A stay here really is about relaxing and taking life easy and the Scottish Highlands is the perfect place to do this.
The holidays and breaks here can be booked for a combination of lengths that include short breaks and long weekends of three or four nights, or a week or more away. Depending on when and how long you want to come away for, you can pay anywhere from £400 to £1150 for a week's stay, and this is for the entire accommodation, not per person.
So if you want to come away to the highlands, have excellent accommodation, some great facilities and a lovely place to stay, then the Wildside Highland Lodges really do come highly recommended. Visit this stunning region and discover more lodges in Inverness-shire here.
Extremely spacious lodge. One double with ensuite shower and separate WC. Large open plan living/kitchen area. Fridge/freezer and washer/dryer. Doors leading to large open lawn space. Outdoor hot tub.
Fantastic luxury lodge situated in a quiet, wooded setting. One master double with ensuite luxury double shower. Light and spacious open plan living area comprising of kitchen and dining area and decking affording fantastic views of the River Fechlin and mountains. Outdoor hot tub.
Fantastic luxury lodge situated in a woodland setting with river views. One master double and one twin. Bath with overhead shower. Light and spacious open plan living area comprising of kitchen and dining area and lounge with floor-to-ceiling windows and lawn affording fantastic views of the River Fechlin. Decking with outdoor hot tub.
Superb luxury lodge nestled in the secluded far corner of the park and on the banks of the river with glorious mountain views. One master double with ensuite luxury double shower and fantastic full length picture window overlooking the River Fechlin. Walk-in shower. Open plan living area including kitchen and dining area. Doors leading to large decking area with outdoor hot tub and views of the mountains.
Luxurious lodge situated in a superb position, with river and mountain views. Perfect for couples and families alike. Master double bedroom and one twin. Bath with overhead shower. Large open plan living area including kitchen and dining/lounge area leading out to patio with outdoor hot tub.
Outstanding, luxurious and unique two storey Scandinavian lodge. Ground floor: Master double with ensuite bath with overhead shower and separate bathroom with a sumptuous spa bath and power shower. Extremely spacious open plan living area with modern, well equipped kitchen/dining area with dishwasher, fridge/freezer and washer/dryer, flowing in to the superbly designed lounge which incorporates a centrepiece woodburning stove, dramatic vaulted ceilings and doors leading out to decking affording excellent views across rolling countryside toward the mountains. Freeview. First floor: One double and one twin.
Spacious riverside lodge with stunning mountain views. One double and one twin. Luxury bathroom with double shower. Lounge with large floor-to-ceiling windows. Large lawn area with outdoor hot tub. No pets.
Q: php modulo and print_r of the result? i wanted to make my own quarter-final draw for the Champions League (tomorrow, Friday 16 March): i've got 2 questions: first, the modulo does not work: it shows "another match" after every entry in the array, whereas i wanted it to be written every two matches (every 2 entries)...
Second question : is there a better way to "print" the result? like a print_r without the index and where i could say "add \n after each entry" ?
<body>
<?php
$array = array("real", "barça", "bayern", "apoel", "chelsea", "milan", "benfica", "marseille" );
$new = array();
$incr = count($array);
while($incr>0){
$random = rand(0, count($array));
if (!in_array($array[$random], $new)){
$new[] = $array[$random];
if ( (count($new) % 2) ){
$new[] = " -- another match : ";
}
$incr--;
}
}
print_r($new);
?>
<p>results</p>
</body>
Thanks for your help
A: Another option would be to shuffle the array then just pop off each of the elements
$array = array("real", "barça", "bayern", "apoel", "chelsea", "milan", "benfica", "marseille" );
shuffle($array);
while($a = array_pop($array)) {
echo $a." vs. ".array_pop($array)." <br />";
}
Sample output:
apoel vs. real
barça vs. milan
marseille vs. bayern
chelsea vs. benfica
A: The modulo is working perfectly:
*
*The array starts empty.
*You add an element to it.
*The length is 1, so 1 % 2, so 1, so truthy, so you add -- another match to the array
*So the length is now 2
*Next iteration of the loop, you add another element to the array.
*The length is now 3, so 3 % 2, so 1, so truthy, so you add -- another match
And so on. Whatever it is you're trying to do, it's not what you told the server to do.
What you should probably do is something like this:
$array = Array(........);
while($a = array_shift($array)) {
$random = rand(0,count($array)-1); // -1 is important!
echo $a." vs. ".$array[$random]."<br />";
unset($array[$random]);
// no need to realign keys since array_shift already does that
}
A: The modulus is working exactly as you're telling it to.
(count($new) % 2) ){
when count($new) = 1, 1 % 2 = 1, = true
when count($new) = 2, 2 % 2 = 0, = false
when count($new) = 3, 3 % 2 = 1, = true
when count($new) = 4, 4 % 2 = 0, = false
when count($new) = 5, 5 % 2 = 1, = true
when count($new) = 6, 6 % 2 = 0, = false
Semblant is a Brazilian melodic death metal band.
Discography
Studio albums
Last Night of Mortality (2010)
Lunar Manifesto (2014)
Obscura (2020)
Vermilion Eclipse (2022)
Demos & EPs
Behold the Real Semblant (2008)
Behind the Mask (2011)
Band members
Current line-up
Sergio Mazul – growled vocals (2006-present)
J. Augusto – keyboards (2006-present)
Mizuho Lin – clean vocals (2010-present)
Sol Perez – guitars (2011-present)
Juliano Ribeiro – guitars (2011-present)
Thor Sikora – drums (2013-present)
Johann Piper – bass (2019-present)
Previous members
Candido Oliveira – drums (2006)
Phell Voltollini – drums (2009–2011)
Roberto Hendrigo – guitars (2008–2011)
Everson Choma – guitars (2008–2011)
Mario J. B. Gugisch – bass (2006–2007)
Marcio Lucca – drums (2006)
Alison "Djesus" de Gaivos – drums (2006–2008)
Vinicius Marcel – guitars (2006)
Rafael Bacciotti – guitars (2006–2007)
Katia Shakath – vocals (2006–2010)
Leonardo Rivabem – bass (2007–2012)
Rhandu Lopez – drums (2011–2012)
Rodrigo Garcia – bass (2012–2014)
João Vitor – bass (2014–2018)
External links
Brazilian melodic death metal musical groups
Brazilian heavy metal musical groups
{"url":"https:\/\/www.mathalino.com\/forum\/engineering-mechanics\/centroid-polar-curve-integration","text":"# Centroid of Polar Curve by Integration\n\n3 posts \/ 0 new\nlight_angel\nCentroid of Polar Curve by Integration\n\nPatulong po for this problem. I really need to know the solution of this.\nfind the centroid of the area enclosed by the cardioid r=a(1+cos theta).\n\nJhun Vert\n\n$dA = \\frac{1}{2}r^2 \\, d\\theta$\n\n$A = \\frac{1}{2}{\\displaystyle \\int_{\\theta_1}^{\\theta_2}} r^2 \\, d\\theta$\n\n$A = 2 \\left[ \\frac{1}{2} {\\displaystyle \\int_0^{\\pi}} a^2(1 + \\cos \\theta)^2 \\, d\\theta \\right]$\n\n$A = a^2 {\\displaystyle \\int_0^{\\pi}} (1 + \\cos \\theta)^2 \\, d\\theta$\n\n$A = a^2 \\left( \\frac{3}{2}\\pi \\right)$\n\n$A = \\frac{3}{2}\\pi a^2$\n\nBy symmetry\n\n$Y_G = 0$\n\nSolving for XG\n\n$AX_G = {\\displaystyle \\int_a^b} \\frac{2}{3}r \\cos \\theta \\, dA$\n\n$\\frac{3}{2}\\pi a^2 X_G = {\\displaystyle \\int_{\\theta_1}^{\\theta_2}} \\frac{2}{3}r \\cos \\theta \\left( \\frac{1}{2}r^2 \\, d\\theta \\right)$\n\n$\\frac{3}{2}\\pi a^2 X_G = \\frac{1}{3} {\\displaystyle \\int_{\\theta_1}^{\\theta_2}} r^3 \\cos \\theta \\, d\\theta$\n\n$\\frac{3}{2}\\pi a^2 X_G = \\frac{1}{3} \\left[ {\\displaystyle 2 \\int_0^{\\pi}} a^3 (1 + \\cos \\theta)^3 \\cos \\theta \\, d\\theta \\right]$\n\n$\\frac{3}{2}\\pi a^2 X_G = \\frac{2}{3}a^3 {\\displaystyle \\int_0^{\\pi}} (1 + \\cos \\theta)^3 \\cos \\theta \\, d\\theta$\n\n$\\frac{3}{2}\\pi a^2 X_G = \\frac{2}{3}a^3 \\left( \\dfrac{15\\pi}{8} \\right)$\n\n$\\frac{3}{2}\\pi a^2 X_G = \\frac{5}{4}\\pi a^3$\n\n$X_G = \\frac{5}{6}a$\n\nCentroid is at (5a\/6, 0) \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 answer\n\nlight_angel\n\nThank you sir for the help\n\n\u2022 Mathematics inside the configured delimiters is rendered by MathJax. 
The default math delimiters are $$...$$ and $...$ for displayed mathematics, and $...$ and $...$ for in-line mathematics.","date":"2023-02-03 00:02:46","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 3, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9107639193534851, \"perplexity\": 4846.663735807395}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-06\/segments\/1674764500041.2\/warc\/CC-MAIN-20230202232251-20230203022251-00014.warc.gz\"}"} | null | null |
The Majestic is based on the new Peugeot Boxer 2.0 160bhp BlueHDi Turbo Diesel engine, featuring six speed gear box, providing outstanding drive performance with improved fuel efficiency. Peugeot's BlueHDi technology featuring Adblue® complies with Euro 6 emission standards to deliver a driving experience rich in power and performance but with exceptional fuel economy and CO2 emissions.
All models have a specification with standard fittings including: passenger airbags, cruise control, new 110W solar panel, new low-level chassis with low-level entrance step, fully integrated touch screen console controlling DAB Radio/MP3 player & Sat Nav with Bluetooth hands-free & USB, plus steering mounted controls, automatic air-conditioning, TPMS (Tyre pressure monitor system), Thatcham conversion alarm, rear view observation camera, tracker system, driver and passenger airbags and much more.
All the exclusive Marquis Majestic models are built using a modern 95% wood-free construction method and have successfully achieved the Grade III classification for heating and thermal insulation. This means that every model excels in every environment to deliver luxurious year-round touring for your comfort.
{"url":"http:\/\/www.googoodolls.com\/node\/187091","text":"# Olympics closing today\n\n## News\n\n\u2022 February 24, 2002\nOlympics closing today\n\nOlympics closing today. . . . . . . . . .Hockey ?. . . . . . . . .ig you haven't heard. . . . . . . . . .ya better stop readin' unless you wanna know. . . . . . . . . . . . . . . . .Canada's got the Gold. . . . . . . God Bless 'em, they practically invented the damn thing . . . . . . . . . .Oh Right On !. . . . . . . . Welcome to edition # 604 of the Daily. . . . . . . . .A day off for the wicked. . . . . . . . .reflect. . . . . . what have we learned ?. . . . . . . . .well. . . . . . . Utah loves Jello and Sweden's got the coolest coats. . . . . . . . .the Olympic torch looks HUGE on tv . . . . . . . objective sports need to be played and judged by mature adults. . . . . . . .Sarah's cute but we still love Michelle. . . . . .There's nothin' better than seein' a last place skaters smile as he's crossin' the finish line winnin' the gold. . . . . . . . .Canada's got their Gold after some 50 years. . . . . . . .the irony of metal pin collecting and metal detectors at every turn is amazing. . . . . . . . . . that it ?. . . . . . . . . . .oh yhea. . . . . . .i bet a pot of room service coffee's not gonna be 25 bucks in Salt Lake next week. . . . . . . . . .on we go. . . . . . . .\n\nTV update. . . . . . . . .\n\n03\/10\/02 10:45 pm ET SHOWTIME The Chris Isaak Show\n\n03\/12\/02 10:00 pm ET SHOWTIME The Chris Isaak Show\n\nClick on Picture #1 for a pic of Robby with his new hero Ross \"The Intern\" Matthews from the Tonight Show. . . . . . . . . .covering the Olympics for NBC. . . . . . . . . . . . . .that's Jason on the right. . . . . . . .hey. . . . . . .this guy needs his own show. . . . . . . . . . .hope you're listenin' network executive types. . . . . . . . .we'd watch. . . . . . . . . . .\n\nHappy Birthday to the followin' Daily readers. . . . . . . . 
.Amanda Sitzmann, Andrea Knorpp, Beth Slack, Booie Johnston, Bri Taskey, Chelsea Shiery, Fiona Ferguson, Hanie Amalia, Heather, Jenny Avis, John Enquist, Kamari Ahrens, Katherine Eastman, Kirsti Shafer, Kristin Dickie, Leasa Stephenson, Luis Humberto Ojeda Leal, Maggie Campbell, Martine Van Der Linden, Michaela Dickson, Patrick Young, Sarah Solomon and Tammy-Jade Vasquez . . . . . . .\n\nThere ya go. . . . . .a DGQ tomorrow. . . . . .join us won't ya ?. . . . . . . . . .ok. . . . . . . .Monday. . . . . . LA, HOB Rock the Vote Awards show w\/ Nelly. . . . . . . .recording a live internet broadcast for AOL this week. . . . . . . . .a new link to the Much Music USA contest added to the top of the page. . . . . . .check it out. . . . . . . . ciao'. . . . . . . . . . . .\n\nbe cool. . . . . .\n\nbyebye\n\nlove\n\ngoo\n\non February 24, 2002 - 12:00am\n\nOlympics closing today. . . . . . . . . .Hockey ?. . . . . . . . .ig you haven't heard. . . . . . . . . .ya better stop readin' unless you wanna know. . . . . . . . . . . . . . . . .Canada's got the Gold. . . . . . . God Bless 'em, they practically invented the damn thing . . . . . . . . . .Oh Right On !. . . . . . . . Welcome to edition # 604 of the Daily. . . . . . . . .A day off for the wicked. . . . . . . . .reflect. . . . . . what have we learned ?. . . . . . . . .well. . . . . . . Utah loves Jello and Sweden's got the coolest coats. . . . . . . . .the Olympic torch looks HUGE on tv . . . . . . . objective sports need to be played and judged by mature adults. . . . . . . .Sarah's cute but we still love Michelle. . . . . .There's nothin' better than seein' a last place skaters smile as he's crossin' the finish line winnin' the gold. . . . . . . . .Canada's got their Gold after some 50 years. . . . . . . .the irony of metal pin collecting and metal detectors at every turn is amazing. . . . . . . . . . that it ?. . . . . . . . . . .oh yhea. . . . . . 
.i bet a pot of room service coffee's not gonna be 25 bucks in Salt Lake next week. . . . . . . . . .on we go. . . . . . . .\n\nTV update. . . . . . . . .\n\n03\/10\/02 10:45 pm ET SHOWTIME The Chris Isaak Show\n\n03\/12\/02 10:00 pm ET SHOWTIME The Chris Isaak Show\n\nClick on Picture #1 for a pic of Robby with his new hero Ross \"The Intern\" Matthews from the Tonight Show. . . . . . . . . .covering the Olympics for NBC. . . . . . . . . . . . . .that's Jason on the right. . . . . . . .hey. . . . . . .this guy needs his own show. . . . . . . . . . .hope you're listenin' network executive types. . . . . . . . .we'd watch. . . . . . . . . . .\n\nHappy Birthday to the followin' Daily readers. . . . . . . . .Amanda Sitzmann, Andrea Knorpp, Beth Slack, Booie Johnston, Bri Taskey, Chelsea Shiery, Fiona Ferguson, Hanie Amalia, Heather, Jenny Avis, John Enquist, Kamari Ahrens, Katherine Eastman, Kirsti Shafer, Kristin Dickie, Leasa Stephenson, Luis Humberto Ojeda Leal, Maggie Campbell, Martine Van Der Linden, Michaela Dickson, Patrick Young, Sarah Solomon and Tammy-Jade Vasquez . . . . . . .\n\nThere ya go. . . . . .a DGQ tomorrow. . . . . .join us won't ya ?. . . . . . . . . .ok. . . . . . . .Monday. . . . . . LA, HOB Rock the Vote Awards show w\/ Nelly. . . . . . . .recording a live internet broadcast for AOL this week. . . . . . . . .a new link to the Much Music USA contest added to the top of the page. . . . . . .check it out. . . . . . . . ciao'. . . . . . . . . . . .\n\nbe cool. . . . . 
.\n\nbyebye\n\nlove\n\ngoo","date":"2018-06-25 07:39:18","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8808367252349854, \"perplexity\": 1344.874554887946}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-26\/segments\/1529267867579.80\/warc\/CC-MAIN-20180625072642-20180625092642-00521.warc.gz\"}"} | null | null |
Det arabiska fullblodet (ar: حصان عربي) är en av världens äldsta hästraser. Den härstammar från hela den Arabiska halvön och Mellanöstern där de avlades fram av beduinerna. Araben är en ädel hästras av ökentyp som ofta beskrivs som vacker och elegant. Gångarterna är flytande och med jämna och fria rörelser. Arabhästen uppnår ofta högre ålder än de flesta andra hästraser och de ädlare arabiska hästarna har under flera tusen år använts för att förädla andra hästar, och mer än 80 % av dagens modernare hästraser har under någon tid influerats eller förädlats av araben.
Araben är en av världens mest eftertraktade hästraser, mycket på grund av sin mystik och sitt ädla utseende, men även då de är utmärkta allroundhästar som duger till all slags ridsport, westernridning och speciellt till distansritt då araber är typiska ökenhästar med en mycket stor uthållighet. Trots alla myter om arabens hetsiga humör så är rasen i de flesta fall utmärkta familjehästar som lever länge och är härdiga och motståndskraftiga mot sjukdomar.
Arabiska fullblod har en mycket tydlig och unik karaktär med kort rygg, rundad nacke och en tydligt inåtbuktande nosprofil. Huvudet är lätt trekantigt med liten mule och platt panna. Araben har oftast även bara 17 revben, 16 svanskotor och 5 ländkotor i jämförelse med de normala 18 revben, 17 svanskotor och 6 ländkotor som hos andra hästar. De färre svanskotorna bidrar till arabens höga svansföring vid rörelse. Araberna blir oftast mellan 145 och 161 cm i mankhöjd och förekommer i alla färger utom black, tigrerad och färgerna orsakade av creamgenen, t.ex. isabell och gulbrun. Inte heller konstantskimmelanlaget finns i rasen, men det förekommer ändå att hästar är registrerade som till exempel brunskimmel, som i dessa fall är en felaktig benämning på en häst som är (avblekbar) skimmel född brun. Skäck finns bara i betydelsen sabino, men de sabinoliknande mönster som finns hos araben orsakas inte av sabinogenen Sb1.
Historia
Det arabiska fullblodet har med stor säkerhet funnits i över 4000 år, men det finns en sannolikhet att de har framavlats mycket tidigare än så. Det arabiska fullblodet räknas som den absolut äldsta hästrasen som framavlats av människan. I de arabiska länderna hyllades det arabiska fullblodet med olika sorters tävlingar och uthållighetsprov där de olika uppfödarna skulle visa upp det arabiska fullblodets egenskaper. Beduinerna värderade sina hästar mycket högt och skyddade dem så gott de kunde i det hårda ökenklimatet, bland annat lät de hästarna bo hos sig i tälten vid dåligt väder och gav dem vatten innan de själva drack.
Arabernas förfäder anses vara häst typ 4 som senare utvecklades till ädla ökenhästar. När man idag talar om ökenhästar menar man oftast araber, turkmenska hästar eller berberhästar. Under 1700-talet i England hade man dock ett stort antal namn för ökenhästar när man skulle utveckla sin egen fullblodshäst, det engelska fullblodet. Bland annat delade man upp dem i syriska, turkmenska, berbiska och persiska hästar, allt beroende på varifrån de kom. En av de tre största stamfäderna till det engelska fullblodet, Godolphin Arabian, var bland annat väl omdiskuterad om huruvida han var ett arabiskt fullblod eller en Berberhäst, medan Byerley Turk ansågs vara antingen arab eller turkmensk. Enbart Darley Arabian var av tydligt arabisk härkomst.
Exakt hur det arabiska fullblodet utvecklades är osäkert men man vet att en häst med tydlig arabisk karaktär levde på Arabiska halvön cirka 2500 f.Kr. Beduinerna som har starka band till araben, går i sin tradition 3 000 år tillbaka i tiden till ett sto vid namn Baz och en hingst som hette Hoshaba. Stoet skulle ha fångats in i Jemen av en sonsons sonson till Noak som även han hette Baz. Beduinerna var väldigt noga med att hålla linjen ren och praktiserade en försiktig och selektiv avel, med en del inavel för att de bästa egenskaperna skulle föras vidare till nästa generation. Beduinerna höll även hårt på muntliga traditioner, därför skrevs aldrig hästarnas stamtavlor ner. Enbart en man bestämde sig för att sätta arabens historia i skrift. Den arabiska historikern El Kelbi satte sig ner år 786 e.Kr och han lyckades berätta en berättelse som gick mer än 3 000 år tillbaka i tiden och även om boken innehöll mer sagor än fakta så kartlade han arabernas urgamla anor.
En mer vetenskaplig teori är baserad på de fyra grundtyperna, som utvecklades ur primitiva vildhästar för flera tusen år sedan. Häst typ 4 var en mycket slank ökenhäst som levde på stäpperna i västra Asien. Denna hästtyp var dock mycket liten med en mankhöjd på 110–120 cm. Den moderna motsvarigheten anses vara den kaspiska miniatyrhästen, en mycket liten hästras från Kaukasien. En teori finns om att den kaspiska hästen ingick i utvecklingen av det arabiska fullblodet genom att de utavlades med större turkmenska hästar. En teori anser även att araben kan räknas som en ättling till den mycket primitiva och numera utdöda Tarpanen som levde vilt på stäpperna i Eurasien och skogarna i Europa. Detta är dock mer än gissning och ännu har inga bevis hittats som stödjer denna teori. Tarpanens inflytande bör i så fall vara så litet att det inte går att urskilja i ett DNA-test.
Forskningar på araben har genomförts sedan flera hundra år tillbaka och forskarna har nu hittat eventuella bevis för att araben även kan ha utvecklats ur en egen underart till hästen som hittills varit okänd. Denna art kallas än så länge equus caballus pumpelli. Denna häst kan ha varit en prototyp till araben som domesticerades av beduinerna för 4 000–5 000 år sedan. Många forskare är dock överens om att araben spreds över hela Mellanöstern och Arabiska halvön under utbredningen av islam.
Beduinernas avelssystem
Under flera hundra år höll beduinerna koll på stamtavlorna till sina hästar genom muntlig tradition. De mest renrasiga kallades för Asil-araber och det var strikt förbjudet att korsa två hästar som inte var Asil-araber. Att korsa en asil-arab med en icke-asil gick däremot bra för att undvika inavel. Ston var mest värdefulla bland beduinerna och användes både till avel och ridning. De viktigaste linjerna inom den arabiska rasen grundades även på ston medan man i moderna avel går på hingstlinjerna. Beduinerna kastrerade dock aldrig hingstarna även om de ansåg att okastrerade hingstar var alldeles för heta för att använda inom krigföringen. Många hingstföl såldes därför, eller dödades. Enbart de bästa hingstarna behölls för avel.
Under åren utvecklades flera olika typer eller linjer som alla hade sina egna unika egenskaper. Idag räknar man de fem ursprungliga linjerna till "Keheilan", "Seglawi", "Abeyan", "Hamdani" och "Habdan". Det fanns även flera linjer i olika lokala uppfödningar, men dessa fem räknas som de främsta linjerna och alla araber som inte kan spåras till dessa linjer räknas inte som Asil-araber. Beduinerna trodde mycket starkt på renrasig avel och avlade helst inte utanför de fem linjerna. De menade att om ett sto avlades med en "oren" hingst skulle dennes ättlingar för alltid vara kontaminerade. Under mitten av 1950-talet skrev Carl Raswan, en författare och arabälskare, att han trodde att det i grund och botten bara fanns tre blodslinjer som representerade kroppstyper snarare än förfäder. Han beskrev dessa väldigt utförligt och namngav dem till "Kehilan" (maskulin typ), "Seglawi" (feminin typ) och "Muniqi" (en snabb kapplöpningstyp).
Dessa mycket komplexa blodslinjer och typer var en del av beduinernas vardagliga liv. Beduinerna kände till härstamningen in i minsta detalj, tack vare den muntliga traditionen. Beduinerna använde även samma system i all slags avel, bland annat kameler och salukihunden, men även inom den egna familjen. Den första nedskrivna dokumentationen av beduinernas hästavel som myntade uttrycket "arabhäst" härstammar från 1330-talet. Senare DNA-tester på moderna araber har dock visat att två hästar som bevisligen tillhör samma linje inte alls bör tillhöra samma ätt.
Arabens spridning
Dagens forskning har visat att araben först spreds med spridningen av islam med sin början under 600-talet e.Kr. De arabiska fullbloden spreds först till norra Afrika genom landbryggorna mellan Asien, sydöstra Europa och norra Afrika. Arabiska hästar fanns därför i norra Afrika redan under antiken. Målningar och fornfynd från antikens Egypten visar ofta stridshästar med arabens karaktäristiska inåtbuktande nosprofil och numera har Egypten en egen stam av arabiska hästar som kallas egyptisk arab. Araben bör även ha spridits från Egypten till södra Europa då fynd som härstammar från antikens Grekland och även under romerska kejsardömets storhetstid visar även hästar med tydligt arabiska drag.
Under 700-talet f.Kr invaderades även Spanien och Portugal av morerna från norra Afrika. Med sig hade de bland annat sina egna Berberhästar, men även ett stort antal arabiska fullblod. Under korståget som startade år 1095 invaderades Palestina av britterna och många arabiska hästar fördes tillbaka till Storbritannien som krigsbyten. Ökenhästar, och då även araber, spreds även till Europa från Turkiet under det osmanska riket mellan 1200-talet och början av 1900-talet. Det var under den här tiden som flest antal araber fördes till Europa, bland annat när turkarna skickade över 300 000 beridna soldater till Ungern år 1522. Många av dessa red på renrasiga arabiska fullblod som man hade tagit som krigsbyte under olika räder på arabiska halvön. 1529 nådde turkarna Wien i Österrike där de stoppades av den polska och ungerska armén. Hästarna beslagtogs och blev grunden till den förstklassiga avel av araber som idag bedrivs i Polen och Ungern.
I Europa användes arabhästarna ofta för att skapa lättare och snabbare stridshästar av tunga kallblodshästar. Bland annat var Napoleon I:s favorithäst Marengo ett arabiskt fullblod. Nu importerades även de tre arabiska hingstar som skulle bli de tre stamfäderna till det engelska fullblodet. Större stuterier för avel av araber startades över hela Europa och i Ryssland under 1700-talet och 1800-talet, bland annat Babolna-stuteriet i Ungern år 1789 och Marbach-stuteriet i Tyskland som startades av kejsaren Vilhelm I, kung av Wurttemberg år 1817. Det absolut främsta stuteriet var dock Crabbet Arabian Stud i England som startades 1878 av poeten Wilfrid Scawen Blunt som gjorde många resor till Mellanöstern för att ta hem högklassiga arabiska fullblod till sin avel. En av de absolut viktigaste hingstarna i den moderna arabens historia var Skowronek, en skimmelfärgad hingst född i Polen, som köptes av Wilfrids dotter Judith Blunt-Lytton för att användas i aveln på Crabbet Arabian Stud.
Under 1500-talet och 1600-talet koloniserades även den amerikanska kontinenten, främst av spanska conquistadorer. Dessa hade då med sig arabiska fullblod som härstammade från de araber som lämnats efter morernas invasion cirka 800 år tidigare. Tillsammans med de spanska hästarna och berberhästar har araberna ett stort inflytande på nästan alla dagens amerikanska hästraser. Till Australien importerades de första hästarna under 1800-talet då ön koloniserades. Än idag kan många araber som är födda i Australien spåras till en och samma hingst som kallades "Old Hector". Även engelska fullblod som fötts i Australien kan spåras till denna hingst. De flesta araber som importerades till Australien kom från England och var framavlade vid Crabbetstuteriet.
Arabens mytologi och legender
Araben har alltid varit omsvärmad av mytologi och legender, inte minst när det gäller skapandet av det arabiska fullblodet.
Allah som skapare
Under 1800-talet diskuterade emiren Abd-el-Kader (1808-1883) arabens historia och ursprung med generalen Daumas (1803-1871). Han ville dela upp rasen i fyra olika tidsåldrar. Adam till Ismael, Ismael till Salomo, Salomo till profeten Muhammed och Muhammed till modern tid. Han berättade även hur araberna hade utvecklats. Det var en väldigt vacker historia, men den hade inte ett spår av utvecklingsteori och var mer sägen än sanning. Enligt legenden skapade Allah de arabiska hästarna med hjälp av de fyra vindarna. Araberna fick ande från Nordanvinden, styrka från Sunnanvinden, snabbhet från Östanvinden och intelligens från västanvinden. Medan han gjorde detta sa han: "Jag skapar dig, araben. Vid din pannlugg binder jag Seger i strid. På din rygg sätter jag rikedomar. Jag fäster dig som en av Jordens Skatter. Jag ger dig flykt utan vingar."
En annan version av legenden hävdar att Allah skapade hästen ur Sunnanvinden:
"När Allah skulle skapa Hästen, sade han till Sunnanvinden, -Jag vill göra en varelse av dig, Förtäta dig. Och vinden förtätade sig. Strax uppenbarade sig ärkeängeln Gabriel och tog en handfull av stoftet och gav det till Allah, som av det skapade en mörk fux. Och han sade: Jag benämner dig Häst, jag döper dig till arab och jag ger dig myrans rödbruna färg, jag har hängt lyckan i din pannlugg, du skall vara de andra djurens Herre. Män skall följa dig vart du går, du skall vara lika snabb som förföljare som i flykten, rikedomar ska forslas på din rygg och du skall bringa lycka med dig. Sedan gav han hästen ärans och lyckans tecken, en vit stjärna i pannan."
Även om berättelsen om arabens ursprung var en vacker sägen så fanns det en gnutta verklighet i de fyra tidsåldrar som Abd-el-Kader talade om. Efter Ismaels död splittrades Beduinstammarna och arabens avel fortsatte under Salomos regering. Israel hade infört lagar som förbjöd hästhållning, men trots detta uppmuntrade Salomo till avel genom att hålla sig med 1 200 ridhästar och 40 000 vagnshästar i sitt kungliga stall. Efter Salomos tid spreds hästarna utanför Mellanöstern med hjälp av profeten Muhammed.
Muhammeds skapelselegend
En annan välkänd myt om arabens ursprung namnger Muhammed som upphovsman till hela den arabiska rasen, även om arabens historia i själva verket är nästan 3000 år äldre än Muhammeds. Legenden finns i många olika varianter men den vanligaste berättar om hur Muhammed skulle välja de ston som skulle bli stamhästarna till araberna. För att välja de bästa stona skulle han testa deras mod och lojalitet. Han förde en stor flock med ston genom öknarna innan han fick syn på en oas. Han släppte därför lös alla stona och lät dem kapplöpa till oasen där de skulle få dricka vatten efter den långa vandringen. Innan hjorden nådde fram till vattnet visslade Muhammed till sig hästarna igen och enbart fem av dem vände och kom tillbaka innan de fått dricka. Dessa fem ston blev Muhammeds favoriter och kallades Al Khamsa som betydde "de fem". Detta begrepp är fortfarande vanligt bland beduinerna och hänvisar till de fem blodslinjerna som ska finnas hos renrasiga araber. Även om denna berättelse främst är en legend tror många arabiska uppfödare på att deras hästar härstammar från dessa fem ston.
De största arabstuterierna
Under 1700- och 1800-talet avlades några av de främsta arabiska hästarna fram på statliga stuterier i Europa. Polen och Ungern låg i topp på både kvalitet och kvantitet på sina arabiska hästar, tillsammans med Tyskland och Frankrike. En del privata stuterier hade också fått god status och betydde mycket för arabens framgångar. Den polska familjen Potocki var goda representanter för den polska traditionen inom arabaveln.
I Storbritannien grundade Wilfrid Scawen Blunt och hans fru Anne ett eget stuteri för att avla högklassiga araber. Deras stuteri Crabbet Arabian Stud hade flera hästar av hög kvalitet och rena blodslinjer och de flesta av dessa hästar hade paret fått hem efter sina resor till Arabien under slutet av 1800-talet, samt från det stuteri som paret hade i Egypten. Rasen var då på väg att degenereras på grund av att många högklassade varmblodshästar föddes upp som inte bara var atletiska utan även lugnare att handskas med. 1800-talet var även de hästdragna vagnarnas guldålder och den största efterfrågan låg på större, starkare och lugnare körhästar istället för araber.
Paret Blunt köpte då världens mest berömda samling av arabhästar, som då tillhörde Abbas Pascha I. Dessa hästar fördes till Crabbet stuteri i England där de senare utgjorde grunden i all arabhästavel inte bara i Storbritannien utan även i USA, Australien och Sydafrika. Paret Blunt fick senare en dotter som i vuxen ålder skulle ärva gården efter sin far, Judith Blunt-Lytton. Det var när hon tog över gården som hon köpte en polsk arabhingst vid namn Skowronek som blev en av de viktigaste avelshingstarna i Arabhästarnas nutida historia.
Idag är det arabiska fullblodet mycket populär både bland statliga och privata stuterier. Förstklassiga arabiska fullblod har alltid varit en uppskattad gåva mellan stadsöverhuvuden världen över. Bland annat håller sig livgarden i Indien med en arabisk hingst av högsta klass, en gåva från Australien, som används inom aveln för livgardets hästar.
Beskrivning
Temperament
Arabiska fullblod har ett rykte om sig att vara mycket livliga och heta, men sanningen är att de har ett relativt fromt temperament men däremot mycket skarpa sinnen och instinkter som gör dem mycket uppmärksamma på vad som händer runt omkring. Därför kan araber även reagera förhållandevis snabbare än modernare framavlade raser. Araber är mycket intelligenta, lättlärda, tillgivna och de anpassar sig lätt till omgivningen.
De arabiska hästarna har ofta rykte om sig att vara nerviga och nervösa och kräver ett vant och lugnt handlag. Men araben fungerar lika bra som familjehäst och även för barn. En välbehandlad och väluppfödd arab ska tåla att barn klänger på den och i nästa sekund ska den kunna byta skepnad på exempelvis utställningar då den ska visa sitt fina temperament och fullblodskaraktär.
Karaktär
Arabiska fullblod är runt 143–155 cm höga och väger ungefär 350–500 kg. Alla färger är tillåtna hos araben, utom skäck, tigrerad och black. Dock är det inte helt ovanligt att araben kan födas med dessa färger. Dessa hästar, eller deras avkommor, får dock inte registreras som renrasiga araber och inte heller användas inom aveln. Idag kan skäckfärgade araber istället registreras som Pintabian.
Arabens huvud är litet och brett med en mycket konkav profil. Ögonen är stora, mörka och är lågt placerade. Stora utmejslade näsborrar ska vara rörliga så att de kan utvidga sig vid ansträngning. Öronen är små och sitter långt ifrån varandra. Halsen skall vara lång, den skall vidare vara elegant böjd, och strupranden torr och välmarkerad. Ryggen skall vara kort och stark. Huden är tunn och det är inte ovanligt att ådrorna syns. Hovskägg förekommer i ringa omfattning eller saknas helt. Taglet i man och svans är mjukt och ibland lite vågigt. Manen ska ej klippas då en arab alltid ska ha lång man. Att klippa en arabs man kallas helgerån.
Araben är speciell eftersom den oftast har sjutton par revben, ibland arton som övriga hästraser. Den har sexton svanskotor istället för arton, och oftast fem ländkotor istället för sex. Detta förklarar arabens speciella, höga svansföring. Det är svansen tillsammans med huvudet som kanske tydligast utmärker en arabhäst. Rörelserna hos en arabhäst ska vara flytande och mjuk med fjädrande steg. Arabhästarna är även starka i förhållande till sin storlek men de har mest använts till förädling av andra raser på grund sin uthållighet och kondition som räknas som världens bästa .
Många entusiaster lever efter hårda regler när det gäller arabens utseende. Man säger att arabens mule ska vara så liten att den får plats i en kupad hand och käkbenen ska vara så kraftigt rundade att en knuten näve får plats mellan dem. R.S Summerhays skrev om arabens ögon som "måste vara mörka och djupa, mycket själfulla hos ston, och synnerligen vaksamma hos hingsten, fylld av en enorm och utmanande värdighet".
Användningsområden
Araben är en mycket flexibel hästras som kan användas inom all slags ridsport och även körning. Araben har dock främst gjort sig ett namn inom distansritt där de är i det närmaste oslagbara, mycket på grund av sin uthållighet och sitt förflutna som ökenhäst där de tvingades leva på sämre foder och med lite vatten. Araben har även på senare tid blivit vanlig inom westernridning. På många platser runt om i världen tävlar man även med araber inom galopp i speciell arabgalopp.
Men trots arabens rykte om att ha ett hett temperament fungerar araben som en utmärkt familjehäst och allroundhäst. Araber tävlas inom de flesta discipliner så som banhoppning och dressyr. De mjuka rörelserna och den böjda nacken gör att de passar utmärkt inom dressyren och även om araben inte har hoppanlag kan de hoppa riktigt bra. Araber kan även köras.
Araben har även blivit en mycket populär häst inom showridning och visas ofta upp på olika utställningar.
Betydelse
Arabens förgreningar
Den egyptiska araben kännetecknas av en helt ren egyptisk stam. De egyptiska araberna skiljer sig en del från de till exempel polska. Bland annat brukar egyptierna ha längre ryggar, mer extremt inåtgående nosprofil. Ofta mer högrest (längre ben), dessutom brukar de ha ett längre huvud. Den arabiska fullblodshästen är ett resultat av en strikt selektiv avel för flera tusen år sedan. Men idag anses bara en liten del av alla världens arabhästar vara helt renrasiga och härstamma från dessa första stamfäder. De hästar som har en full stamtavla och är bevisat renrasiga kallas Asil-araber, där asil är det arabiska ordet för renrasig. För att få kallas Asil-arab måste det arabiska fullblodet ha påvisat släktskap med några av de första avelshingstarna inom minst fem generationer. De måste även ha en utmärkt exteriör och uthållighet.
Det finns även olika förgreningar av det arabiska fullblodet som har utvecklats olika beroende på var de har avlats. Några av de mest renrasiga araberna är de egyptiska araberna. För att få kallas egyptisk arab krävs det att hästen i fråga enbart har bevisad egyptisk stam, utan influenser från andra sorters araber. Troligtvis är enbart 2 procent av världens araber idag av äkta egyptisk stam. Den egyptiska araben kännetecknas även av sitt utseende som inte har förändrats någonting på flera tusen år. Dagens araber kan ofta jämföras med riktigt gamla teckningar, skisser eller målningar av araber och likheten är oftast slående.
I forna Persien avlades även araber men de persiska araberna är något större. De persiska araberna utgör en något större del av det arabiska beståndet än den egyptiska men är fortfarande relativt ovanlig. Idag avlas den persiska araben främst i Iran. Även i Syrien har araber avlats fram och den syriska hästen, som även kallas syrisk arab är ganska olik den vanliga araben genom att den är mindre ädel i exteriören. Men om den syriska hästen är av helt ren arabisk stam eller om den har utavlats med andra raser är inte bevisat.
Arabens inflytande
Det arabiska fullblodet anses ofta som anfader till nästan alla världens hästraser då den ingår i eller har använts för att förbättra de flesta hästraser. Nästan alla Europeiska varmblodshästar har en bas av det engelska fullblodet i sig och det engelska fullblodet har in sin tur utvecklats ur tre arabiska fullblodshingstar som hette Byerley Turk, Darley Arabian och Godolphin Arabian. Många hästraser har sedan influerats ytterligare med inkorsning av araber. En del uppfödare har även försökt få fram egna versioner av araben bland annat den fransk-engelska angloaraben, den ungerska shagya-araben och den numera utdöda ryska streletskaraben.
Även mindre ponnyer har utvecklats med hjälp av araben, till exempel den australiska ponnyn och welshponnyn. Welsh kategori A, även kallad welsh mountain har ett tydligt arabiskt utseende. Även mindre ädla ponnyraser kan ha influerats av araberna. Detta är speciellt vanligt i länder runt Mellanöstern och i östra Asien. De japanska ponnyraserna som till exempel misakiponnyn har med största sannolikhet utvecklats med hjälp av riktigt primitiva hästar som den mongoliska vildhästen przewalskihäst som förädlats något med arabiskt blod. Då de mongoliska vildhästarna är så primitiva syns inte de arabiska influenserna lika tydligt, men ponnyerna blir något ädlare och inte lika primitiva. Ponnyrasen welara är en ren korsning mellan welshponny och arabiskt fullblod.
På en del håll har även araben använts för att förbättra eller förädla tyngre kallblodshästar, och även göra dessa lättare efter att jordbruken mekaniserades under mitten av 1900-talet. Den franska percheronhästen har ibland tydliga orientaliska influenser i utseendet trots att de i övrigt är tunga och kraftiga kallblod.
Alla dagens nord- och sydamerikanska hästraser kan spåras tillbaka till de spanska hästarna som fördes till Amerika av de spanska conquistadorerna efter upptäckten av kontinenten på slutet av 1400-talet. Dessa spanska hästar hade utvecklats med hjälp av araber och berberhästar som fördes till Spanien via Nordafrika under 700-talet, då Europa invaderades av morerna. Idag är det även vanligt att många amerikanska hästraser korsas med araber och registreras i egna register, till exempel quaraben, moraben och arappaloosan.
Övrigt om arabhästen
Man sätter bokstäverna OX efter namnet på en renrasig arab. Angloaraben betecknas enbart med ett X och det engelska fullblodet med XX.
Det triangulärt utformade huvudet ska enligt arabisk tradition bilda formen av en vapensköld som på arabiska kallas jibbah.
1964 bildades Svenska Arabhästföreningen (SAHF) - www.sahf.se.
Alla renrasiga arabhästar ska registreras i en av WAHO (= World Arabian Horse Organisation), godkänd stambok.
Den svenska och godkända stamboken för arabiska fullblod som är medlem i WAHO är ARAB (= Arabhäst Registraturen AB). ARAB nyregistrerar svenskfödda föl, importer, exporter, ägarbyten samt utfärdar pass mm. Även arabkorsningar registreras via ARAB.
Araberna kan bli så låga i mankhöjd att de egentligen skulle kategoriseras som ponny (d.v.s. under 148 cm) men kategoriseras alltid som en större häst även om de går under tillåtna mankhöjden.
Den punkt där huvudet förenas med halsen kallas på arabiska för mitbah. Ju större bågen är här desto rörligare blir huvudet.
Alla araber föds med 17 revben, 16 svanskotor och 5 ländkotor medan andra hästar oftast har 18 revben, 17 svanskotor och 6 ländkotor. Just detta är utmärkande för araben.
Napoleons favorithäst Marengo var en vit arabisk hingst.
Araben är en ståtlig och vacker häst och är väldigt känd inom utställningar.
Se även
Shagya-arab - En ungersk arabhäst som är något kraftigare än arabiskt fullblod
Engelskt fullblod - En av världens inflytelserika raser, grundat på tre arabiska hingstar.
Berberhäst - en annan ökenhäst, med nära släktskap till araben
Persisk arab - en gren av arabhästen
Egyptisk arab - en förgrening av araben med rent egyptiska blodslinjer
Gidran-arab - en lyckad arabkorsning från Ungern
Angloarab - den mest populära korsningen mellan araben och det engelska fullblodet
Arappaloosa - Korsning arab och appaloosa som fått status som egen ras
Pintabian - Araber som fötts som skäck, eller korsning arab-paint horse
Quarab - korsning mellan quarterhäst och arab.
Streletskarab - en utdöd hästras från Ryssland där arabiska och engelska fullblod korsades med ryska hästraser
Berömda arabhästar
Skowronek - Den polska hingsten som betytt så mycket för arabens avel
Godolphin Arabian - en av de hästar som utvecklat det engelska fullblodet
Darley Arabian - den andra hästen som utvecklat det engelska fullblodet
Byerley Turk - den tredje hästen i engelska fullblodets utveckling
Expert Ox - Spelade arabhingsten i filmen Sherdil med Rebecka Liljeberg.
Marengo - Napoleon I:s stridshäst
Referenser
Noter
Källförteckning
World Arabian Horse Organisation
Arabian Horse Association
Länkarkiv till sidor med temat Arabiska fullblod
Svenska Arabhästföreningen
Mer om Asil-araber
Historiska rapporter om araben
Al Khamsa Organisation (om de fem blodslinjerna Al Khamsa
Varmblodshästar
Hästraser från Asien | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,060 |
Parsley.addMessages('no', {
defaultMessage: "Verdien er ugyldig.",
type: {
email: "Verdien må være en gyldig e-post.",
url: "Verdien må være en gyldig url.",
number: "Verdien må være et gyldig tall.",
integer: "Verdien må være et gyldig heltall.",
digits: "Verdien må være et siffer",
alphanum: "Verdien må være alfanumerisk"
},
notblank: "Verdien må ikke være blank.",
required: "Verdien er obligatorisk.",
pattern: "Verdien er ugyldig.",
min: "Verdien må være større eller lik %s.",
max: "Verdien må være mindre eller lik %s.",
range: "Verdien må være mellom %s og %s.",
minlength: "Verdien er for kort. Den burde bestå av minst %s tegn.",
maxlength: "Verdien er for lang. Den kan bestå av maksimalt %s tegn.",
length: "Verdilengden er ugyldig. Den må være mellom %s og %s tegn lang.",
mincheck: "Du må huke av minst %s valg.",
maxcheck: "Du må huke av %s valg eller mindre.",
check: "Du må huke av mellom %s og %s valg.",
equalto: "Verdien må være lik."
});
Parsley.setLocale('no');
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,478 |
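The Parsley locale record above is a map from validator names to message templates whose `%s` placeholders are filled in with each constraint's value at validation time. A minimal sketch of that interpolation in plain JavaScript (independent of Parsley itself; `formatMessage` is a hypothetical helper, not Parsley's internal API):

```javascript
// Sketch only (not Parsley's actual internals): interpolating a
// locale message template like the ones registered above.
const messages = {
  minlength: "Verdien er for kort. Den burde bestå av minst %s tegn.",
  length: "Verdilengden er ugyldig. Den må være mellom %s og %s tegn lang."
};

// Replace each %s, in order, with the corresponding constraint value.
function formatMessage(template, ...values) {
  let i = 0;
  return template.replace(/%s/g, () => String(values[i++]));
}

console.log(formatMessage(messages.minlength, 4));
// → "Verdien er for kort. Den burde bestå av minst 4 tegn."
console.log(formatMessage(messages.length, 2, 8));
// → "Verdilengden er ugyldig. Den må være mellom 2 og 8 tegn lang."
```

Templates with two `%s` placeholders (like `length` or `range`) consume their values positionally, which is why the locale strings keep the placeholders in the same order as the validator's parameters.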