\section{Introduction}
In recent years, graph-based signal processing has become an active research field due to the increasing demands for signal and information processing in irregular domains \cite{shuman_emerging_2013,sandryhaila_big_2014}.
For an $N$-vertex undirected graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ denotes the vertex set and $\mathcal{E}$ denotes the edge set, if a real number is associated with each vertex of $\mathcal{G}$, these numbers of all the vertices constitute a graph signal ${\bf f}\in\mathbb{R}^N$.
Potential applications of graph signal processing have been found in areas including sensor networks \cite{zhu_graph_2012}, semi-supervised learning \cite{gadde_active_2014}, image processing \cite{yang_gesture_2014}, and structure monitoring \cite{chen_bridge_2013}.
Many concepts and techniques from classical signal processing have been extended to graph signal processing. Related problems on graphs include graph signal filtering \cite{sandryhaila_discrete_2013, chen_adaptive_2013}, graph wavelets \cite{Coifman_Diffusion_2006, hammond_wavelets_2011,narang_perfect_2012}, graph signal compression \cite{zhu_approximating_2012, nguyen_downsampling_2015}, uncertainty principle \cite{agaskar_aspectral_2013}, graph signal coarsening \cite{liu_coarsening_2014, liu_graphcoarsening_2014}, multiresolution transforms \cite{ekambaram_multiresolution_2013,shuman_aframework_2013}, parametric dictionary learning \cite{thanou_parametric_2013}, graph topology learning \cite{dong_learning_2014}, graph signal sampling and reconstruction \cite{anis_towards_2014, narang_localized_2013, wang_iterative_2014, wang_local_2014}, and distributed algorithms \cite{wang_distributed_2014, chen_distributed_2015}.
\subsection{Motivation and Related Works}
It is a natural problem in practical applications to reconstruct smooth signals from partial observations on a graph \cite{sandryhaila_discrete_2013, chen_adaptive_2013, narang_signal_2013}. For data gathering in sensor networks, sometimes only part of the nodes transmit data due to limited bandwidth or energy. Exploiting the smoothness of the data, the missing entries can be estimated from the received ones, which can be modeled as the reconstruction of a smooth graph signal from its decimation. In particular, for a sensor network with a clustering structure, the collected data within a cluster are aggregated by the cluster head; such an aggregate acts as a local measurement and can be obtained naturally. Using the measured data from all the clusters to retrieve the raw data of all the nodes can be modeled as the problem of reconstructing a smooth graph signal from local measurements, where each measurement is a linear combination of the signal amplitudes in a cluster of vertices. This problem is studied in this work for the first time.
There have been several works focusing on the theory of exactly reconstructing a bandlimited graph signal from its decimation.
Sufficient conditions for the unique reconstruction of bandlimited graph signals from decimation are given for the normalized \cite{pesenson_sampling_2008} and unnormalized Laplacian \cite{fuhr_poincare_2013}.
In \cite{anis_towards_2014}, a necessary and sufficient condition on the cutoff frequency is established and the bandwidth is estimated based on the concept of spectral moments.
Several algorithms are proposed to reconstruct graph signals from decimation.
In \cite{narang_localized_2013}, an algorithm named iterative least square reconstruction (ILSR) is proposed and the tradeoff between data-fitting and smoothness is also considered.
Two more efficient algorithms named iterative weighting reconstruction (IWR) and iterative propagating reconstruction (IPR) are proposed in \cite{wang_local_2014} with much faster convergence.
As far as we know, there is no work on reconstructing graph signals from local measurements.
The idea of local measurements can be traced back to
time-domain nonuniform sampling \cite{marvasti_nonuniform_2001}, or irregular sampling \cite{feichtinger_theory_1994, grochenig_adiscrete_1993}, which has a close relationship with graph signal sampling and reconstruction.
For signals in the time domain \cite{grochenig_reconstruction_1992, feichtinger_theory_1994}, in shift-invariant spaces \cite{aldroubi_nonuniform_2002}, or on manifolds \cite{pesenson_poincare_2004,feichtinger_recovery_2004}, there have been extensive works on reconstructing signals from local averages, built upon the theoretical results of signal reconstruction from samples. However, there is no such work on graph-signal-related problems.
\subsection{Contributions}
In this paper, we first generalize the sampling scheme for graph signals from \emph{decimation} to \emph{local measurement}.
Based on this scheme, we then propose a new method named iterative local measurement reconstruction (ILMR) to reconstruct the original signal from limited measurements. It is proved that bandlimited signals can always be exactly reconstructed from their local measurements if certain conditions are satisfied. Moreover, we demonstrate that the traditional decimation scheme, which samples by vertex, and its corresponding reconstruction methods are special cases of this work. Based on the performance analysis of ILMR, we find that local measurement is more robust than decimation in noisy scenarios. As a consequence, the optimal local weights in different noisy environments are discussed. The proposed sampling scheme has several advantages. First, it benefits situations where local measurements are easier to obtain than samples at specific vertices. Second, the proposed local measurement and reconstruction scheme is more robust against noise.
This paper is organized as follows. In section II, the basis of graph signal processing and some existing algorithms for reconstructing graph signals from decimation are reviewed. The generalized sampling scheme, i.e., local measurement, is proposed in section III. In section IV, the reconstruction algorithm ILMR is proposed and its convergence is proved. In section V, the reconstruction performance in noisy scenarios is studied, and the optimal choice of local weights and local set partition is discussed. Experimental results are demonstrated in section VI, and the paper is concluded in section VII.
\section{Preliminaries}
\subsection{Laplacian-based Graph Signal Processing and Bandlimited Graph Signals}
The Laplacian \cite{chung_spectral_1997} of an $N$-vertex undirected graph $\mathcal{G}$ is defined as
$$
{\bf L=D-A},
$$
where ${\bf A}$ is the adjacency matrix of $\mathcal{G}$, and ${\bf D}$ is the degree matrix, which is a diagonal matrix whose entries are the degrees of the corresponding vertices.
Since $\mathcal{G}$ is undirected, its Laplacian is a symmetric and positive semi-definite matrix, and all of the eigenvalues of ${\bf L}$ are real and nonnegative. If $\mathcal{G}$ is connected, there is only one zero eigenvalue. Denote the eigenvalues of ${\bf L}$ as
$0=\lambda_1<\lambda_2\le\cdots\le\lambda_N$, and the corresponding eigenvectors as $\{{\bf u}_k\}_{1\le k\le N}$. The eigenvectors can also be regarded as graph signals on $\mathcal{G}$.
The Laplacian ${\bf L}: \mathbb{R}^N\rightarrow\mathbb{R}^N$ is an operator on the space of graph signals on $\mathcal{G}$,
$$
({\bf Lf})(u)=\sum_{v\in\mathcal{V}, u\sim v}\!\!\left(f(u)-f(v)\right),\quad \forall u\in\mathcal{V},
$$
where $f(u)$ denotes the entry of ${\bf f}$ associated with vertex $u$, and $u\sim v$ denotes that there is an edge between vertices $u$ and $v$. The Laplacian can be viewed as a kind of differential operator between vertices and their neighbors. Therefore, among the eigenvectors of ${\bf L}$, those associated with small eigenvalues have similar amplitudes on connected vertices, while the eigenvectors associated with large eigenvalues vary fast on the graph. In other words, eigenvectors associated with small eigenvalues are smooth or denote low-frequency components of signals on $\mathcal{G}$.
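The differential-operator view of the Laplacian can be checked numerically. The sketch below uses a hypothetical 4-vertex path graph and an arbitrary signal, neither of which comes from the paper:

```python
import numpy as np

# Hypothetical 4-vertex path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial Laplacian L = D - A

f = np.array([1.0, 2.0, 4.0, 8.0])  # an arbitrary graph signal

# (L f)(u) = sum over neighbors v of u of (f(u) - f(v))
Lf_neighbor = np.array([sum(f[u] - f[v] for v in range(4) if A[u, v])
                        for u in range(4)])
assert np.allclose(L @ f, Lf_neighbor)
```

The agreement of the matrix product with the neighbor-difference sum is exactly the stated identity.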
For graph Fourier transform \cite{hammond_wavelets_2011}, the eigenvectors $\{{\bf u}_k\}_{1\le k\le N}$ are regarded as the Fourier basis of the frequency-domain, and the eigenvalues $\{\lambda_k\}_{1\le k\le N}$ are regarded as frequencies.
The graph Fourier transform is
$$
\hat{f}(k)=\langle {\bf f}, {\bf u}_k\rangle=\sum_{i=1}^Nf(i)u_k(i),
$$
where $\hat{f}(k)$ is the strength of frequency $\lambda_k$.
Similar to its counterpart in time-domain, if a graph signal ${\bf f}$ is smooth on $\mathcal{G}$, ${\bf f}$ may be uniquely determined by its entries on a limited number of sampled vertices. Based on the graph Laplacian, the smoothness of a graph signal is usually described as being within a bandlimited subspace. A graph signal ${\mathbf f}\in\mathbb{R}^N$ is $\omega$-bandlimited if
$${\mathbf f}\in PW_{\omega}(\mathcal{G})\triangleq\text{span}\{{\bf u}_i|\lambda_i\le\omega\},$$
which is called Paley-Wiener space on $\mathcal{G}$ \cite{pesenson_sampling_2008}.
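A minimal numerical sketch of the graph Fourier transform and of the projection onto $PW_{\omega}(\mathcal{G})$; the 5-vertex cycle graph and the cutoff below are hypothetical choices:

```python
import numpy as np

# Hypothetical 5-vertex cycle graph.
N = 5
A = np.zeros((N, N))
for u in range(N):
    A[u, (u + 1) % N] = A[(u + 1) % N, u] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)        # eigenvalues ascending; columns are u_k

rng = np.random.default_rng(0)
f = rng.standard_normal(N)
f_hat = U.T @ f                   # graph Fourier transform: f_hat[k] = <f, u_k>
assert np.allclose(U @ f_hat, f)  # the inverse transform recovers f

# Projection onto PW_omega: keep only frequencies lambda_k <= omega.
omega = 1.5
B = U[:, lam <= omega]
g = B @ (B.T @ f)                 # P_omega(f)
# g is omega-bandlimited: its spectrum vanishes above omega.
assert np.allclose((U.T @ g)[lam > omega], 0.0)
```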
\subsection{Reconstruction from Decimation of Bandlimited Graph Signals}
There have been theoretical analyses and algorithms for the reconstruction of bandlimited graph signals from decimation.
Existing results show that ${\mathbf f}\in PW_{\omega}(\mathcal{G})$ can be uniquely reconstructed from its entries $\{f(u)\}_{u\in \mathcal{S}}$ on a sampling vertex set $\mathcal{S}\subseteq \mathcal{V}$ under certain conditions.
An important concept of \emph{uniqueness set} is introduced in \cite{pesenson_sampling_2008}.
\begin{defi}[uniqueness set {\cite{pesenson_sampling_2008}}]
A set of vertices $\mathcal{S}\subseteq \mathcal{V}(\mathcal{G})$ is a uniqueness set for the space $PW_{\omega}(\mathcal{G})$ if, for all ${\mathbf f}, {\mathbf g} \in PW_{\omega}(\mathcal{G})$, $f(u)=g(u)$ for all $u\in\mathcal{S}$ implies ${\mathbf f}={\mathbf g}$.
\end{defi}
Then iterative least square reconstruction (ILSR), an algorithm to reconstruct a graph signal from its decimation, is proposed, which can be written in the following equivalent form.
\begin{thm}[ILSR {\cite{narang_localized_2013}}]\label{thm:ILSR}
If the sampling set $\mathcal{S}$ is a uniqueness set for $PW_{\omega}(\mathcal{G})$, then the original signal ${\bf f}$ can be reconstructed using the decimation $\{f(u)\}_{u\in\mathcal{S}}$
by the following ILSR method,
\begin{align}
{\bf f}^{(0)}&=\mathcal{P}_{\omega}\left(\sum_{u\in \mathcal{S}}f(u)\bm{\delta}_{u}\right),\nonumber\\
{\mathbf f}^{(k+1)}&={\mathbf f}^{(k)}+\mathcal{P}_{\omega}\left(\sum_{u\in \mathcal{S}}(f(u)-f^{(k)}(u))\bm{\delta}_{u}\right)\nonumber,
\end{align}
where $\mathcal{P}_{\omega}(\cdot)$ is the projection operator onto $PW_{\omega}(\mathcal{G})$, and $\bm{\delta}_u$ is a Dirac delta function whose entries satisfy
\begin{equation}\label{defdelta}
\delta_u(v)=
\begin{cases}
1, & v=u; \\
0, & v\neq u.
\end{cases}
\end{equation}
\end{thm}
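The ILSR iteration above can be sketched compactly. In the example below, the 6-vertex path graph, the cutoff $\omega$, and the sampling set are hypothetical choices (the sampling set is assumed to be a uniqueness set for the 2-dimensional $PW_\omega$):

```python
import numpy as np

def ilsr(L, sample_idx, samples, omega, n_iter=2000):
    """Iterative least square reconstruction from decimated values."""
    lam, U = np.linalg.eigh(L)
    B = U[:, lam <= omega]
    P = B @ B.T                          # projection onto PW_omega
    n = L.shape[0]
    e = np.zeros(n)
    e[sample_idx] = samples
    f = P @ e                            # f^(0)
    for _ in range(n_iter):
        r = np.zeros(n)
        r[sample_idx] = samples - f[sample_idx]   # data-fitting residual
        f = f + P @ r                    # f^(k+1)
    return f

# Hypothetical example: 6-vertex path graph, 2-dimensional PW_omega.
N = 6
A = np.diag(np.ones(N - 1), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
omega = (lam[1] + lam[2]) / 2            # keeps the two smoothest modes
f_true = U[:, :2] @ np.array([1.0, -0.5])   # a bandlimited signal
sample_idx = [0, 2, 5]                   # assumed uniqueness set
f_rec = ilsr(L, sample_idx, f_true[sample_idx], omega)
assert np.allclose(f_rec, f_true, atol=1e-6)
```

Each iteration projects the data-fitting residual on the sampled vertices back into the bandlimited subspace, matching the update in Theorem \ref{thm:ILSR}.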
To accelerate the convergence, an algorithm named iterative propagating reconstruction (IPR) is proposed, which is based on an important concept of \emph{local sets}.
\begin{defi}[local sets {\cite{wang_local_2014}}]\label{deflocalset1}
For a sampling set $\mathcal{S}$ on a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, assume that $\mathcal{V}$ is divided into disjoint local sets $\{\mathcal{N}(u)\}_{u\in \mathcal{S}}$ associated with the sampled vertices. For each $u\in\mathcal{S}$, denote the subgraph of $\mathcal{G}$ restricted to $\mathcal{N}(u)$ by $\mathcal{G}_{\mathcal{N}(u)}$, which is composed of vertices in $\mathcal{N}(u)$ and edges between them in $\mathcal{E}$. For each $u\in\mathcal{S}$, its local set satisfies $u\in\mathcal{N}(u)$, and the subgraph $\mathcal{G}_{\mathcal{N}(u)}$ is connected. Besides, $\{\mathcal{N}(u)\}_{u\in \mathcal{S}}$ should satisfy
$$
\bigcup_{u\in \mathcal{S}} \mathcal{N}(u)=\mathcal{V}
\text{ and }
\mathcal{N}(u)\cap \mathcal{N}(v)=\emptyset, \quad \forall u, v\in\mathcal{S}, u\neq v.
$$
\end{defi}
The property of a local set is measured by \emph{maximal multiple number} and \emph{radius}, as follows.
\begin{defi}[maximal multiple number {\cite{wang_local_2014}}]\label{defimmn}
Denoting $\mathcal{T}(u)$ as the shortest-path tree of $\mathcal{G}_{\mathcal{N}(u)}$ rooted at $u$,
for $v\sim u$ in $\mathcal{T}(u)$, $\mathcal{T}_u(v)$ is the subtree composed by $v$ and its descendants in $\mathcal{T}(u)$.
The \emph{maximal multiple number} of $\mathcal{N}(u)$ is
$$
K(u)=\max_{v\sim u \text{ in } \mathcal{T}(u)}|\mathcal{T}_u(v)|.
$$
\end{defi}
\begin{defi}[radius {\cite{wang_local_2014}}]\label{defradius}
The \emph{radius} of $\mathcal{N}(u)$ is the maximal distance from $u$ to any vertex in $\mathcal{G}_{\mathcal{N}(u)}$, denoted as
$$
R(u)=\max_{v\in \mathcal{N}(u)}\text{dist}(v,u).
$$
\end{defi}
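Both quantities can be computed with a breadth-first search inside the local set, which yields the shortest-path tree $\mathcal{T}(u)$. The sketch below uses a hypothetical path-shaped graph and local set:

```python
from collections import Counter, deque

def local_set_metrics(adj, local_set, u):
    """Return (R(u), K(u)) for the local set N(u) rooted at u.

    adj: dict mapping each vertex to a list of its neighbors in G.
    A BFS restricted to the local set yields the shortest-path tree T(u).
    """
    dist, parent = {u: 0}, {u: None}
    q = deque([u])
    while q:
        w = q.popleft()
        for v in adj[w]:
            if v in local_set and v not in dist:
                dist[v], parent[v] = dist[w] + 1, w
                q.append(v)
    R = max(dist.values())                       # radius of N(u)

    # K(u): size of the largest subtree hanging off a neighbor of u in T(u).
    def first_hop(v):
        while parent[v] != u:
            v = parent[v]
        return v
    sizes = Counter(first_hop(v) for v in dist if v != u)
    K = max(sizes.values()) if sizes else 0      # maximal multiple number
    return R, K

# Hypothetical path graph 0-1-2-3 with local set {0,1,2}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert local_set_metrics(adj, {0, 1, 2}, 0) == (2, 2)  # chain hangs off vertex 1
assert local_set_metrics(adj, {0, 1, 2}, 1) == (1, 1)  # two singleton subtrees
```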
\begin{thm}[IPR {\cite{wang_local_2014}}]\label{thm:IPR}
For a given sampling set $\mathcal S$ and associated local sets $\{\mathcal{N}(u)\}_{u\in\mathcal{S}}$ on a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, $\forall {\mathbf f}\in PW_{\omega}(\mathcal{G})$, if $\omega$ is less than $1/Q_{\rm max}^2$, ${\mathbf f}$ can be reconstructed by its decimation $\{f(u)\}_{u\in \mathcal{S}}$ through the IPR method
\begin{align}
{\mathbf f}^{(0)}&=\mathcal{P}_{\omega}\left(\sum_{u\in \mathcal{S}}f(u)\bm{\delta}_{\mathcal{N}(u)}\right),\nonumber\\
{\mathbf f}^{(k+1)}&={\mathbf f}^{(k)}+\mathcal{P}_{\omega}\left(\sum_{u\in \mathcal{S}}(f(u)-f^{(k)}(u))\bm{\delta}_{\mathcal{N}(u)}\right),\nonumber
\end{align}
where $$Q_{\rm max}=\max_{u\in\mathcal{S}}\sqrt{K(u)R(u)},$$
and $\bm{\delta}_{\mathcal{N}(u)}$ denotes the graph signal with entries
$$
\delta_{\mathcal{N}(u)}(v)=
\begin{cases}
1, & v\in \mathcal{N}(u);\\
0, & v\notin \mathcal{N}(u).
\end{cases}
$$
\end{thm}
IPR converges faster than ILSR, because in each iteration, IPR updates a larger increment than ILSR by utilizing a propagation to local sets.
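Under the sufficient condition $\omega < 1/Q_{\rm max}^2$, the IPR iteration can be sketched as follows. The 8-vertex path graph, the partition into two local sets, and the cutoff are hypothetical choices that satisfy the condition ($K(u)=R(u)=2$, so $Q_{\rm max}=2$):

```python
import numpy as np

def ipr(P, sample_idx, indicators, samples, n_iter=300):
    """Iterative propagating reconstruction (sketch of the IPR iteration).

    P: projection matrix onto PW_omega.
    indicators: rows are the indicator signals delta_{N(u)}.
    """
    f = P @ (indicators.T @ samples)                 # f^(0)
    for _ in range(n_iter):
        err = samples - f[sample_idx]                # residual at sampled vertices
        f = f + P @ (indicators.T @ err)             # propagate, then project
    return f

# Hypothetical example: 8-vertex path, local sets {0..3} and {4..7},
# sampled vertices 1 and 5, so K(u) = R(u) = 2 and Q_max = 2.
N = 8
A = np.diag(np.ones(N - 1), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
omega = 0.2                       # < 1/Q_max^2 = 0.25; keeps two modes here
B = U[:, lam <= omega]
P = B @ B.T
f_true = B @ np.array([2.0, 1.0])
ind = np.zeros((2, N)); ind[0, :4] = 1.0; ind[1, 4:] = 1.0
sample_idx = [1, 5]
f_rec = ipr(P, sample_idx, ind, f_true[sample_idx], n_iter=300)
assert np.allclose(f_rec, f_true, atol=1e-6)
```

Compared with ILSR, the residual at each sampled vertex is first propagated to its whole local set before projection, which is what yields the larger per-iteration increment.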
\section{Local Measurement: A Generalized Sampling Scheme}
We consider a new sampling scheme that measures by local sets. In this scheme, all the vertices of a graph are partitioned into disjoint clusters. Within each cluster there is no specific sampling vertex; instead, all vertices in the cluster contribute to produce one measurement. For this purpose, \emph{centerless local sets} are first introduced based on Definition \ref{deflocalset1}.
\begin{defi}[centerless local sets]\label{deflocalset2}
For a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, assume that $\mathcal{V}$ is divided into disjoint local sets $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$, where $\mathcal I$ denotes the index set of divisions. Each subgraph $\mathcal{G}_{\mathcal{N}_i}$, which denotes the subgraph of $\mathcal{G}$ restricted to $\mathcal{N}_i$, is connected. Besides, $\{\mathcal{N}_i\}_{i\in \mathcal{I}}$ should satisfy
$$
\bigcup_{i\in \mathcal{I}} \mathcal{N}_i=\mathcal{V}
\text{ and }
\mathcal{N}_i\cap \mathcal{N}_j=\emptyset, \quad \forall i, j\in\mathcal{I}, ~ i\neq j.
$$
\end{defi}
One should notice that the centerless local sets play an important role in the proposed generalized sampling scheme, while the local sets do not in the traditional decimation scheme. In the decimation scheme, the local sets are designed for specific reconstruction algorithms and have no effect on the sampling process. However, in the generalized sampling scheme, the centerless local sets are designed for sampling itself and determine the performance of reconstruction, which will be discussed in section \ref{secnoise}.
To evaluate the partition of a graph, the \emph{diameter} of a centerless local set is defined and will be utilized in the next section.
\begin{defi}[diameter]\label{diameter}
For a centerless local set $\mathcal{N}_i$, its diameter is defined as the largest distance between two vertices in $\mathcal{G}_{\mathcal{N}_i}$, i.e.,
$$
D_i = \max_{u,v\in\mathcal{N}_i}{\text{dist}}(u,v).
$$
\end{defi}
In order to produce a measurement from a specific centerless local set, a \emph{local weight} is defined to balance the contributions of all vertices in this set and to exclude the energy from other parts of the graph.
\begin{defi}[local weight]
A local weight $\bm{\varphi}_i\in \mathbb{R}^N$ associated with a centerless local set $\mathcal{N}_i$ satisfies
$$
\varphi_i(v)
\begin{cases}
\ge 0, & v\in \mathcal{N}_i;\\
= 0, & v\notin \mathcal{N}_i,
\end{cases}
$$
and
$$
\sum_{v\in \mathcal{N}_i}\varphi_i(v)=1.
$$
\end{defi}
We highlight that the choice of \emph{local} rather than \emph{global} weights comes from some natural observations. Locality and local operations are basic features of graphs and complex networks. Moreover, signal processing on graphs may rely on distributed implementation, where local operations are more feasible than global ones.
Finally, we arrive at the definition of \emph{local measurement} by linearly combining the signal amplitudes in each centerless local set using preassigned local weights.
\begin{defi}[local measurement]
For given centerless local sets and the associated local weights $\{(\mathcal{N}_i,\bm{\varphi}_i)\}_{i\in \mathcal{I}}$, a set of local measurements for a graph signal $\bf f$ is $\{f_{\bm{\varphi}_i}\}_{i\in \mathcal{I}}$, where
$$
f_{\bm{\varphi}_i}\triangleq\langle {\mathbf f}, \bm{\varphi}_i\rangle=\sum_{v\in \mathcal{N}_i}f(v)\varphi_i(v).
$$
\end{defi}
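As a small numerical sketch (with a hypothetical 6-vertex partition and uniform weights), local measurements are just inner products with the local weights; decimation-style weights recover plain vertex sampling:

```python
import numpy as np

N = 6
local_sets = [[0, 1, 2], [3, 4, 5]]    # hypothetical centerless local sets

# Uniform local weights: nonnegative, supported on N_i, summing to one.
Phi = np.zeros((len(local_sets), N))
for i, S in enumerate(local_sets):
    Phi[i, S] = 1.0 / len(S)

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
meas = Phi @ f                          # f_{phi_i} = <f, phi_i>
assert np.allclose(meas, [2.0, 5.0])    # local averages of each set

# Decimation as a special case: all weight on one vertex per set.
Phi_dec = np.zeros_like(Phi)
Phi_dec[0, 0] = Phi_dec[1, 3] = 1.0
assert np.allclose(Phi_dec @ f, f[[0, 3]])
```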
\begin{figure}[t]
\begin{center}
\includegraphics[width=11cm]{decivsmeas.pdf}
\caption{An illustration of traditional sampling (decimation) scheme versus generalized sampling (local measurement) scheme. For each centerless local set, a local measurement is produced by a linear combination of signal amplitudes associated with vertices within this set.}
\label{decivsmeas}
\end{center}
\end{figure}
The sampling schemes of decimation and of local measurement are visualized in Fig. \ref{decivsmeas}. Compared with decimation in previous works \cite{wang_local_2014, pesenson_sampling_2008}, local measurement can be regarded as a generalized sampling scheme.
The local measurement $\{f_{\bm{\varphi}_i}\}_{i\in\mathcal{I}}$ is to obtain a linear combination of the signal in each local set, while the decimation $\{f(u)\}_{u\in\mathcal{S}}$ is to obtain the signal on selected vertices in the sampling set $\mathcal{S}$.
Both sampling schemes take the inner products of the original signal and specified local weights.
Decimation can be regarded as a special case of local measurement, with all the weights in each centerless local set assigned to only one vertex, i.e., the sampled vertex.
\section{ILMR: Reconstruct Signal from Local Measurements}
We will show that under certain conditions the original signal ${\mathbf f}$ can be uniquely and exactly reconstructed from the local measurements $\{f_{\bm{\varphi}_i}\}_{i\in\mathcal{I}}$.
First of all, an operator is defined based on centerless local sets and the associated local weights.
\begin{defi}\label{limitedpropagation}
For given centerless local sets and the associated weights $\{(\mathcal{N}_i, \bm{\varphi}_i)\}_{i\in\mathcal{I}}$ on a graph $\mathcal{G}(\mathcal{V},\mathcal{E})$, an operator ${\bf G}$ is defined by
\begin{align}
{\bf G}{\mathbf f}&=\mathcal{P}_{\omega}\left(\sum_{i\in \mathcal{I}}\langle {\mathbf f}, \bm{\varphi}_i\rangle\bm{\delta}_{\mathcal{N}_i}\right)\label{localmeasprop1}\\
&= \sum_{i\in \mathcal{I}}\langle {\mathbf f}, \bm{\varphi}_i\rangle\mathcal{P}_{\omega}(\bm{\delta}_{\mathcal{N}_i}),\label{localmeasprop2}
\end{align}
where $\bm{\delta}_{\mathcal{N}_i}$ is defined as
\begin{equation}
\delta_{\mathcal{N}_i}(v)=
\begin{cases}
1, & v\in \mathcal{N}_i;\\
0, & v\notin \mathcal{N}_i.
\end{cases}\label{deltaNi}
\end{equation}
\end{defi}
For a graph signal, the proposed operator calculates the local measurement in each centerless local set, then assigns that local measurement to all the vertices in the set, and finally filters out the components beyond the bandwidth, i.e., \eqref{localmeasprop1}. Equivalently, it is a linear combination of the low-frequency parts of $\{\bm{\delta}_{\mathcal{N}_i}\}_{i\in\mathcal{I}}$, with the combination coefficients being the local measurements of the corresponding local sets, i.e., \eqref{localmeasprop2}.
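The equivalence of the two forms \eqref{localmeasprop1} and \eqref{localmeasprop2} can be verified numerically; the 6-vertex cycle graph, partition, and uniform weights below are hypothetical:

```python
import numpy as np

# Hypothetical 6-vertex cycle graph and partition into two local sets.
N = 6
A = np.zeros((N, N))
for u in range(N):
    A[u, (u + 1) % N] = A[(u + 1) % N, u] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
omega = 1.1
B = U[:, lam <= omega]
P = B @ B.T                              # projection P_omega

local_sets = [[0, 1, 2], [3, 4, 5]]
Phi = np.zeros((2, N)); ind = np.zeros((2, N))
for i, S in enumerate(local_sets):
    Phi[i, S] = 1.0 / len(S)             # uniform local weights phi_i
    ind[i, S] = 1.0                      # indicator delta_{N_i}

rng = np.random.default_rng(1)
f = rng.standard_normal(N)

# Form (localmeasprop1): measure, spread over the sets, then project.
Gf_1 = P @ (ind.T @ (Phi @ f))
# Form (localmeasprop2): combine the projected indicators.
Gf_2 = sum((Phi @ f)[i] * (P @ ind[i]) for i in range(2))
assert np.allclose(Gf_1, Gf_2)
```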
The following lemma shows that the proposed operator is bounded in $PW_{\omega}(\mathcal{G})$ under certain conditions.
\begin{lem}\label{lemma1}
For given centerless local sets and the associated weights $\{(\mathcal{N}_i, \bm{\varphi}_i)\}_{i\in\mathcal{I}}$, $\forall{\mathbf f}\in PW_{\omega}(\mathcal{G})$, the following inequality holds,
$$
\|{\mathbf f}-{\bf G}{\mathbf f}\|\le C_{\rm max}\sqrt{\omega}\|{\mathbf f}\|,
$$
where
$$
C_{\rm max}=\max_{i\in\mathcal{I}}\sqrt{|\mathcal{N}_i|D_i},
$$
and $|\cdot|$ denotes cardinality.
\end{lem}
The proof of Lemma \ref{lemma1} is postponed to section \ref{proof1}. Lemma \ref{lemma1} shows that the operator $({\bf I-G})$ is a contraction mapping in $PW_{\omega}(\mathcal{G})$ if $\omega$ is less than $1/C_{\rm max}^2$.
Based on Lemma \ref{lemma1}, it is shown in Proposition \ref{pro1} that the original signal can be reconstructed from its local measurements.
\begin{pro}\label{pro1}
For given centerless local sets and the associated weights $\{(\mathcal{N}_i, \bm{\varphi}_i)\}_{i\in\mathcal{I}}$, $\forall {\mathbf f}\in PW_{\omega}(\mathcal{G})$, where $\omega$ is less than $1/C_{\rm max}^2$, ${\mathbf f}$ can be reconstructed from its local measurements $\{f_{\bm{\varphi}_i}\}_{i\in \mathcal{I}}$ through an iterative local measurement reconstruction (ILMR) algorithm in Table \ref{alg},
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\caption{Iterative Local Measurement Reconstruction.}\label{alg}
\begin{center}
\begin{tabular}{l}
\toprule[1pt]
{\bf Input:} \hspace{0.5em} Graph $\mathcal{G}$, cutoff frequency $\omega$, centerless local sets $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$, \\
\hspace{3.5em} local weights $\{\bm{\varphi}_i\}_{i\in\mathcal{I}}$, local measurements $\{f_{\bm{\varphi}_i}\}_{i\in\mathcal{I}}$;\\
{\bf Output:} \hspace{0.5em} Interpolated signal ${\bf f}^{(k)}$;\\
\hline
{\bf Initialization:}\\
\begin{minipage}{0.45\textwidth}
\begin{equation}\label{ILMR:init}
{\mathbf f}^{(0)}={\mathcal P}_{\omega}\left(\sum_{i\in \mathcal{I}}f_{\bm{\varphi}_i}\bm{\delta}_{\mathcal{N}_i}\right);
\end{equation}
\end{minipage}\\
{\bf Loop:}\\
\begin{minipage}{0.45\textwidth}
\begin{equation}\label{ILMR:loop}
{\mathbf f}^{(k+1)}={\mathbf f}^{(k)}+{\mathcal P}_{\omega}\left(\sum_{i\in \mathcal{I}}(f_{\bm{\varphi}_i}-\langle {\mathbf f}^{(k)}, \bm{\varphi}_i\rangle)\bm{\delta}_{\mathcal{N}_i}\right);
\end{equation}
\end{minipage}\\
{\bf Until:}\hspace{0.5em} The stop condition is satisfied.\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
with the error at the $k$th iteration satisfying
$$
\|{\mathbf f}^{(k)}-{\mathbf f}\|\le \gamma^{k}\|{\mathbf f}^{(0)}-{\bf f}\|,
$$
where
\begin{equation}\label{defgamma}
\gamma =C_{\rm max}\sqrt{\omega}.
\end{equation}
\end{pro}
\begin{proof}
According to the definition of ${\bf G}$, the iteration (\ref{ILMR:loop}) can be rewritten as
\begin{equation}\label{iter}
{\mathbf f}^{(k+1)}={\mathbf f}^{(k)}+{\bf G}({\mathbf f}-{\mathbf f}^{(k)}).
\end{equation}
Note that ${\mathbf f}\in PW_{\omega}(\mathcal{G})$ and ${\mathbf f}^{(k)}\in PW_{\omega}(\mathcal{G})$ for any $k$, then ${\mathbf f}^{(k)}-{\mathbf f}\in PW_{\omega}(\mathcal{G})$.
As a consequence of Lemma \ref{lemma1},
$$
\|{\mathbf f}^{(k+1)}-{\mathbf f}\|=\|({\mathbf f}^{(k)}-{\mathbf f})-{\bf G}({\mathbf f}^{(k)}-{\mathbf f})\|\le \gamma\|{\mathbf f}^{(k)}-{\mathbf f}\|.
$$
\end{proof}
Proposition \ref{pro1} shows that a signal ${\mathbf f}$ is uniquely determined and can be reconstructed by its local measurements $\{f_{\bm{\varphi}_i}\}_{i\in \mathcal{I}}$ if $\{\bm{\varphi}_i\}_{i\in \mathcal{I}}$ are known.
The quantity $(f_{\bm{\varphi}_i}-\langle {\mathbf f}^{(k)}, \bm{\varphi}_i\rangle)$ is the estimation error between the original measurement and the reconstructed measurement at the $k$th iteration.
According to (\ref{iter}), in each iteration of ILMR, the new increment of the interpolated signal is obtained by first assigning the estimation errors to all vertices in the associated centerless local sets, and then projecting them onto the $\omega$-bandlimited subspace.
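The ILMR loop of Table \ref{alg} can be sketched compactly. The 12-vertex path graph, the partition into four local sets of three vertices, the uniform weights, and the cutoff below are hypothetical choices satisfying $\omega < 1/C_{\rm max}^2$:

```python
import numpy as np

def ilmr(P, Phi, ind, meas, n_iter=300):
    """Iterative local measurement reconstruction (sketch).

    P: projection onto PW_omega; Phi: local weights (rows phi_i);
    ind: indicators delta_{N_i} (rows); meas: local measurements f_{phi_i}.
    """
    f = P @ (ind.T @ meas)               # initialization
    for _ in range(n_iter):
        err = meas - Phi @ f             # f_{phi_i} - <f^(k), phi_i>
        f = f + P @ (ind.T @ err)        # assign errors to sets, then project
    return f

# Hypothetical example: 12-vertex path, four local sets of three vertices,
# so C_max = sqrt(3 * 2) and 1/C_max^2 = 1/6 > omega = 0.1.
N = 12
A = np.diag(np.ones(N - 1), 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam, U = np.linalg.eigh(L)
omega = 0.1
B = U[:, lam <= omega]; P = B @ B.T      # PW_omega is 2-dimensional here
Phi = np.zeros((4, N)); ind = np.zeros((4, N))
for i in range(4):
    S = range(3 * i, 3 * i + 3)
    Phi[i, S] = 1.0 / 3.0; ind[i, S] = 1.0
f_true = B @ np.array([1.0, 0.8])
f_rec = ilmr(P, Phi, ind, Phi @ f_true)
assert np.allclose(f_rec, f_true, atol=1e-6)
```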
Except for the difference between decimation and local measurement, the basic idea of ILMR is similar to that of IPR \cite{wang_local_2014}, an algorithm that reconstructs a graph signal from its decimation. The procedures of IPR and ILMR in each iteration are illustrated in Fig. \ref{algorithm}.
One may find that in the assignment or propagating step, ILMR assigns the estimation errors of the local measurements to the vertices within the local sets, while IPR propagates the estimation errors of the decimated signal on the sampled vertices to the other vertices in the local sets. In fact, ILMR degenerates to IPR if the local weight concentrates on only one vertex (the sampled vertex) in each local set, in which case local measurement degenerates to decimation.
The sufficient conditions and error bounds for ILMR and IPR are also different. Suppose the divisions into (centerless) local sets in ILMR and IPR are exactly the same, i.e., the sampling set $\mathcal{S}$ in IPR can be written as $\{u_i\}_{i\in\mathcal{I}}$, where $\mathcal{I}$ is the index set in ILMR, and $\mathcal{N}_i$ equals $\mathcal{N}(u_i)$ for all $i\in\mathcal{I}$. According to Definitions \ref{defimmn} and \ref{defradius}, we have $R(u_i) \le D_i$ and $K(u_i) \le |\mathcal{N}(u_i)|=|\mathcal{N}_i|$. Therefore, $C_{\rm max}$ is not less than $Q_{\rm max}$, which implies that a stricter condition is needed to exactly reconstruct a graph signal from local measurements than from decimation. However, since the sufficient conditions in both Theorem \ref{thm:IPR} and Proposition \ref{pro1} are not tight and there is still room for refinement, such a comparison only provides a rough analysis.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=16cm]{ILMR.pdf}
\caption{The procedures of IPR and ILMR. The former algorithm is to reconstruct a bandlimited signal from decimation, while the latter reconstructs a signal from local measurements. Essentially, ILMR becomes IPR if the local weights concentrate on only one vertex of each local set, in which case local measurement degenerates to decimation. }
\label{algorithm}
\end{center}
\end{figure*}
\begin{rem}
For potential applications, if the local measurements come from repeatable physical operations, the local weights need not even be known when conducting ILMR.
In detail, if $\{\bm{\varphi}_i\}_{i\in\mathcal{I}}$ is unknown but fixed, i.e., the local measurement operation in Fig. \ref{algorithm}(b) is a black box, $\langle {\mathbf f}^{(k)}, \bm{\varphi}_i\rangle$ may still be obtained by conducting the physical operations in each iteration. Therefore, the original signal can still be reconstructed by ILMR without exact knowledge of $\{\bm{\varphi}_i\}_{i\in\mathcal{I}}$. This is a rather interesting result, and may facilitate graph signal reconstruction in specific scenarios.
\end{rem}
\section{Performance Analysis}\label{secnoise}
In this section, we study the error performance of ILMR when the original signal is corrupted by additive noise. We first derive the reconstruction error bound under corrupted measurements. Then the expected reconstruction error is calculated under the assumption of independent Gaussian noise, and the optimal local weights are obtained in the sense of minimizing the expected reconstruction error bound. Finally, in the special case of \emph{i.i.d.} Gaussian perturbation, a greedy method for the centerless local set partition and the selection of optimal local weights are provided.
\subsection{Reconstruction Error in Noise Scenario}
Suppose the observed signal associated with each vertex is corrupted by additive noise.
The corrupted signal is denoted as $\tilde{\bf f}={\bf f+n}$, where ${\bf n}$ denotes the noise.
In the $k$th iteration of ILMR, the corrupted local measurements $\{\langle \tilde{\mathbf f}, \bm{\varphi}_i\rangle\}_{i\in\mathcal{I}}$ are utilized to produce the temporary reconstruction $\tilde{\bf f}^{(k)}$.
The following proposition gives a reconstruction error bound on $\tilde{\bf f}^{(k)}$.
\begin{pro}\label{pro2}
For given centerless local sets and the associated weights $\{(\mathcal{N}_i, \bm{\varphi}_i)\}_{i\in\mathcal{I}}$, ${\mathbf f}\in PW_{\omega}(\mathcal{G})$ is corrupted by additive noise ${\bf n}$.
If $\omega$ is less than $1/C_{\rm max}^2$, in the $k$th iteration the output of ILMR using the corrupted local measurements $\{\langle \tilde{\mathbf f}, \bm{\varphi}_i\rangle\}_{i\in\mathcal{I}}$ satisfies
\begin{align}\label{tfk-f}
\|\tilde{\bf f}^{(k)}-{\bf f}\|
\le\frac{\tilde{n}}{1-\gamma}+\gamma^{k+1}\left(\|{\mathbf f}\|+\|{\bf n}\|\right),
\end{align}
where $\gamma$ is defined as (\ref{defgamma}), $\tilde{n}$ is defined as
\begin{equation}\label{tilden}
\tilde{n}=\sum_{i\in \mathcal{I}}\sqrt{|\mathcal{N}_i|}\cdot|n_i|,
\end{equation}
and $n_i$ is the equivalent noise of centerless local set $\mathcal{N}_i$, defined as
\begin{equation}\label{ni}
n_i=\langle {\bf n}, \bm{\varphi}_i\rangle=\sum_{v\in\mathcal{N}_i}n(v)\varphi_i(v).
\end{equation}
\end{pro}
The proof of Proposition \ref{pro2} is postponed to section \ref{proof2}.
From (\ref{tfk-f}) it can be seen that in the noise scenario the reconstruction error is controlled by the sum of two parts. The former is a weighted sum of the equivalent noises of all the local sets, while the latter decays as the iteration number increases.
The former part therefore dominates as the iteration goes on. Thus minimizing it, which is determined by both the partition into centerless local sets and the local weights, may improve the performance of ILMR in noisy scenarios.
\subsection{Gaussian Noise and Optimal Local Weights}\label{subsecoptimalweight}
For a given partition $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$, some prior knowledge of unknown noise ${\bf n}$ may bring the possibility to design optimal local weights. For simplicity the noises associated with different vertices are assumed to be independent.
Suppose the noise follows zero-mean Gaussian distribution, i.e., ${\bf n}\sim \mathcal{N}({\bf 0},{\bf \Sigma})$, where ${\bf \Sigma}$ is a diagonal matrix and the noise of vertex $v$ satisfies $n(v)\sim\mathcal{N}(0,\sigma^2(v))$.
Then $\tilde{n}$ defined in (\ref{tilden}) is a random variable.
For centerless local set $\mathcal{N}_i$, according to (\ref{ni}), the equivalent noise $n_i$ also follows a Gaussian distribution
$n_i\sim\mathcal{N}(0,\sigma_i^2)$,
where
\begin{equation}\label{sigmai}
\sigma_i^2=\sum_{v\in\mathcal{N}_i}\sigma^2(v)\varphi_i^2(v).
\end{equation}
Then $|n_i|$ follows the half-normal distribution with its expectation satisfying
$$
{\rm E}\left\{|n_i|\right\}=\sigma_i\sqrt{\frac{2}{\pi}}.
$$
According to (\ref{tilden}), the expectation of $\tilde{n}$ is
\begin{equation}\label{Etilden}
{\rm E}\{\tilde{n}\}=\sqrt{\frac{2}{\pi}}\sum_{i\in\mathcal{I}}\sqrt{|\mathcal{N}_i|}\sigma_i.
\end{equation}
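The half-normal expectation used above, ${\rm E}\{|n_i|\}=\sigma_i\sqrt{2/\pi}$, can be sanity-checked by a quick Monte Carlo simulation (the value of $\sigma_i$ below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_i = 1.7                            # hypothetical equivalent-noise std
n_i = rng.normal(0.0, sigma_i, 1_000_000)
emp = np.abs(n_i).mean()                 # empirical E{|n_i|}
# Compare against the half-normal mean sigma_i * sqrt(2/pi).
assert abs(emp - sigma_i * np.sqrt(2.0 / np.pi)) < 0.01
```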
Then the following corollary is readily obtained.
\begin{cor}\label{cor1}
For given centerless local sets and the associated weights $\{(\mathcal{N}_i, \bm{\varphi}_i)\}_{i\in\mathcal{I}}$ and the original signal ${\mathbf f}\in PW_{\omega}(\mathcal{G})$, assume that the noise associated with vertex $v$ independently follows the Gaussian distribution $\mathcal{N}(0,\sigma^2(v))$.
If $\omega$ is less than $1/C_{\rm max}^2$, the expected reconstruction error of ILMR in the $k$th iteration satisfies
\begin{equation}\label{Etfk-f}
{\rm E}\left\{\|\tilde{\bf f}^{(k)}-{\bf f}\|\right\}
\le\frac{1}{1-\gamma}\sqrt{\frac{2}{\pi}}\sum_{i\in\mathcal{I}}\sqrt{|\mathcal{N}_i|}\sigma_i
+\mathcal{O}\left(\gamma^{k+1}\right),
\end{equation}
where $\gamma$ is defined as (\ref{defgamma}), and $\sigma_i$ is defined as (\ref{sigmai}).
\end{cor}
Corollary \ref{cor1} is proved by plugging (\ref{sigmai}) and (\ref{Etilden}) into the expectation of (\ref{tfk-f}).
By minimizing the right-hand side of (\ref{Etfk-f}), the optimal choice of local weights\footnote{The optimal local weights could also be studied under other criteria, e.g., the fastest convergence. In this work, however, we only consider minimizing the expected reconstruction error bound.} can be derived.
\begin{cor}\label{optweights}
For a given partition of centerless local sets $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$, if the noises associated with the vertices are independent and follow zero-mean Gaussian distributions $n(v)\sim\mathcal{N}(0,\sigma^2(v))$, then the optimal local weights $\{\bm{\varphi}_i\}_{i\in\mathcal{I}}$ are
\begin{equation}\label{optimalweight}
\varphi_i(v)=
\begin{cases}
\displaystyle{\frac{(\sigma^2(v))^{-1}}{\sum_{u\in\mathcal{N}_i}(\sigma^2(u))^{-1}}}, & v\in \mathcal{N}_i;\\
0, & v\notin \mathcal{N}_i.
\end{cases}
\end{equation}
\end{cor}
\begin{proof}
Minimizing the right hand side of (\ref{Etfk-f}) is equivalent to minimizing $\sigma_i$ for each local set $\mathcal{N}_i$.
By the Cauchy-Schwarz inequality, one has
$$
\left(\sum_{u\in\mathcal{N}_i}(\sigma^2(u))^{-1}\!\right)\sigma_i^2
=\left(\sum_{u\in\mathcal{N}_i}(\sigma^2(u))^{-1}\!\right)\!\!\left(\sum_{v\in\mathcal{N}_i}\sigma^2(v)\varphi_i^2(v)\!\right)
\ge \left(\sum_{v\in\mathcal{N}_i}\varphi_i(v)\!\right)^2=1.
$$
Therefore,
\begin{equation}\label{sigmaimin}
\sigma_i^2\ge \frac{1}{\sum_{v\in\mathcal{N}_i}(\sigma^2(v))^{-1}}.
\end{equation}
The equality of (\ref{sigmaimin}) holds if and only if (\ref{optimalweight}) is satisfied.
\end{proof}
The above analysis shows that, in the sense of minimizing the expected reconstruction error, the optimal local weight associated with vertex $v$ within $\mathcal{N}_i$ is inversely proportional to the noise variance of $v$. This is intuitive: more information is preserved in the sampling process if a larger local weight is assigned to a vertex with smaller noise variance. However, it should be noted that assigning all the weight in $\mathcal{N}_i$ to the vertex with the smallest noise variance, i.e., the optimal decimation, is not the best choice; the optimal local measurement outperforms it.
In fact, the optimal choice of local measurements is consistent with the well-known inverse variance weighting in statistics \cite{lipsey_practical_2001}.
Therefore, local measurement may reduce the disturbance of noise and reconstruct the original signal more precisely. In other words, for a given partition of centerless local sets, graph signal reconstruction from local measurements with the optimal weights performs at least as well as reconstruction from decimation, even when the vertices with the smallest noise variance are chosen in the latter sampling scheme.
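As a quick numerical illustration of the inverse-variance weighting in (\ref{optimalweight}), the following Python sketch (the three noise variances are hypothetical values chosen for illustration) compares the equivalent noise level $\sigma_i$ of (\ref{sigmai}) under uniform weights, optimal decimation, and the optimal local weights:

```python
import math

# Hypothetical noise variances of three vertices in one local set N_i
var = [1e-8, 4e-8, 25e-8]

def sigma_i(weights):
    # Equivalent noise standard deviation of the local measurement:
    # sigma_i^2 = sum_v sigma^2(v) * phi_i^2(v), cf. (sigmai)
    return math.sqrt(sum(s2 * w ** 2 for s2, w in zip(var, weights)))

uniform = [1.0 / 3] * 3
decimation = [1.0, 0.0, 0.0]              # all weight on the least-noisy vertex
inv = [1.0 / s2 for s2 in var]
optimal = [x / sum(inv) for x in inv]     # inverse-variance weighting

print(sigma_i(uniform), sigma_i(decimation), sigma_i(optimal))
```

The optimal weights yield a strictly smaller $\sigma_i$ than both uniform weighting and the best single-vertex decimation, matching the discussion above.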
\subsection{A Special Case of Independent and Identically Distributed Gaussian Noise}
Specifically, if the noise variances are the same for all the vertices, i.e., $\sigma(v)=\sigma$ for every $v\in\mathcal{V}$, $\tilde{n}$ can be written approximately in a more explicit form. For each $\mathcal{N}_i$, the optimal local weights are equal for all the vertices in $\mathcal{N}_i$, so $\varphi_i(v)=1/|\mathcal{N}_i|$ for $v\in\mathcal{N}_i$. In this case, $\sqrt{|\mathcal{N}_i|}n_i$ follows a Gaussian distribution,
$$
\sqrt{|\mathcal{N}_i|}n_i\sim\mathcal{N}(0,\sigma^2).
$$
Then $\sqrt{|\mathcal{N}_i|}\cdot|n_i|$ follows the half-normal distribution with the same parameter $\sigma$.
The above analysis shows that the terms of the sum in (\ref{tilden}) follow independent and identical half-normal distributions, with expectation and variance satisfying
\begin{align*}
{\rm E}\left\{\sqrt{|\mathcal{N}_i|}\cdot|n_i|\right\}&=\sigma\sqrt{\frac{2}{\pi}},\\
\text{Var}\left\{\sqrt{|\mathcal{N}_i|}\cdot|n_i|\right\}&=\sigma^2\left(1-\frac{2}{\pi}\right).
\end{align*}
Because the number of local sets $|\mathcal{I}|$ is typically large, by the central limit theorem,
$\tilde{n}$ approximately follows a Gaussian distribution,
$$
\tilde{n}\sim\mathcal{N}\left(|\mathcal{I}|\sigma\sqrt{\frac{2}{\pi}}, |\mathcal{I}|\sigma^2\left(1-\frac{2}{\pi}\right)\right).
$$
Then we have the following corollary.
\begin{cor}\label{cor2}
Suppose the original signal ${\mathbf f}\in PW_{\omega}(\mathcal{G})$ is sampled with given centerless local sets $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$ and the associated weights $\varphi_i(v)=1/|\mathcal{N}_i|$ for $v\in\mathcal{N}_i$, and the noise associated with each vertex follows an \emph{i.i.d.} Gaussian distribution $\mathcal{N}(0,\sigma^2)$.
If $\omega<1/C_{\rm max}^2$, the expected reconstruction error of ILMR in the $k$th iteration satisfies
\begin{align}\label{Etfk-f2}
{\rm E}\left\{\|\tilde{\bf f}^{(k)}-{\bf f}\|\right\}
\le\frac{|\mathcal{I}|\sigma}{1-\gamma}\sqrt{\frac{2}{\pi}}
+\mathcal{O}\left(\gamma^{k+1}\right),
\end{align}
where $\gamma$ is defined as (\ref{defgamma}).
\end{cor}
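The half-normal expectation underlying the bound in (\ref{Etfk-f2}), ${\rm E}\{\tilde{n}\}=|\mathcal{I}|\sigma\sqrt{2/\pi}$, can be checked with a small Monte Carlo sketch; the set size, number of sets, and $\sigma$ below are illustrative values, not the paper's experimental settings:

```python
import math
import random

random.seed(0)
sigma, n_sets, set_size, trials = 0.1, 400, 4, 500

# tilde_n = sum_i sqrt(|N_i|) * |n_i|, where n_i is the uniform-weight
# average of the i.i.d. N(0, sigma^2) noises inside local set N_i
samples = []
for _ in range(trials):
    tilde_n = 0.0
    for _ in range(n_sets):
        n_i = sum(random.gauss(0.0, sigma) for _ in range(set_size)) / set_size
        tilde_n += math.sqrt(set_size) * abs(n_i)
    samples.append(tilde_n)

empirical = sum(samples) / trials
predicted = n_sets * sigma * math.sqrt(2.0 / math.pi)
print(empirical, predicted)   # the two values should nearly coincide
```

The empirical mean of $\tilde{n}$ agrees with $|\mathcal{I}|\sigma\sqrt{2/\pi}$ up to Monte Carlo error.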
According to (\ref{Etfk-f2}), the error bound is affected by the number of centerless local sets $|\mathcal{I}|$: a partition with fewer sets may reduce the expected reconstruction error.
However, the number of centerless local sets cannot be too small, because the condition
$$
\gamma=C_{\rm max}\sqrt{\omega} = \max_{i\in\mathcal{I}}\sqrt{|\mathcal{N}_i|D_i\omega}<1,
$$
which is determined by the cutoff frequency of the original graph signal.
Besides, the factor $1/(1-\gamma)$ in (\ref{Etfk-f2}) implies that a smaller $C_{\rm max}$, which leads to a smaller $\gamma$, also reduces the error bound.
A rough calculation balances the two factors.
If there are not too many vertices in each $\mathcal{N}_i$, $C_{\rm max}$ is approximately $N_{\rm max}$, where $N_{\rm max}$ denotes the largest cardinality of the centerless local sets. Since $N_{\rm max}|\mathcal{I}|$ approximates $N$, we have
$$
\frac{1}{1-\gamma}|\mathcal{I}|\approx \frac{1}{1-\sqrt{\omega}N_{\rm max}}\cdot\frac{N}{N_{\rm max}}.
$$
To minimize the above quantity, a near optimal $N_{\rm max}$ is
\begin{equation}\label{Nmaxapp}
N_{\rm max} = \frac{1}{2\sqrt{\omega}},
\end{equation}
i.e., $\gamma$ approximates to $1/2$.
This provides a strategy to partition centerless local sets: for a given cutoff frequency $\omega$, an approximate $N_{\rm max}$ is chosen according to (\ref{Nmaxapp}), and the graph is then divided into local sets such that $|\mathcal{N}_i|$ is no more than $N_{\rm max}$ and the number of local sets is as small as possible.
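As a minimal sketch of this strategy, the near-optimal cardinality in (\ref{Nmaxapp}) follows directly from the cutoff frequency (the rounding to a feasible integer is our own illustrative choice):

```python
import math

def near_optimal_nmax(omega):
    # N_max = 1 / (2 * sqrt(omega)), rounded to the nearest feasible integer
    return max(1, round(1.0 / (2.0 * math.sqrt(omega))))

for omega in (1e-2, 1e-4):
    print(omega, near_optimal_nmax(omega))
```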
For a given $N_{\rm max}$, a greedy algorithm is proposed to partition the centerless local sets, as shown in Table \ref{algLocalSet}. The algorithm iteratively moves connected vertices with the smallest degrees from the original graph into the new set, until the cardinality of the new set reaches $N_{\rm max}$ or no connected vertex remains. The smallest-degree vertex is chosen because such a vertex is more likely to lie on the border of the graph.
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\caption{A greedy method to partition centerless local sets with maximal cardinality.}\label{algLocalSet}
\begin{center}
\begin{tabular}{l}
\toprule[1pt]
{\bf Input:} \hspace{0.5em} Graph $\mathcal{G(V,E)}$, Maximal cardinality $N_{\text{max}}$;\\
{\bf Output:} \hspace{0.5em} Centerless local sets $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$;\\
\hline
{\bf Initialization:}\hspace{0.5em} $i=0$;\\
{\bf Loop Until:} $\mathcal{V}=\emptyset$\\
\hspace{1.3em} 1) Find one vertex with the smallest degree in $\mathcal{G}$,\\
\hspace{3.9em} $\displaystyle u=\arg\min_{v\in \mathcal{V}}d_{\mathcal{G}}(v)$;\\
\hspace{1.3em} 2) $i=i+1$, $\mathcal{N}_i=\{u\}$;\\
\hspace{1.3em} 3) Obtain the neighbor set of $\mathcal{N}_i$, \\
\hspace{3.9em} $\mathcal{S}_i=\{v\in\mathcal{G}|v\sim w, w\in\mathcal{N}_i, v\notin\mathcal{N}_i\}$;\\
\hspace{1.3em} {\bf Loop Until:} $|\mathcal{N}_i|=N_{\text{max}}$ or $\mathcal{S}_i=\emptyset$\\
\hspace{2.6em} 4) Find one vertex with the smallest degree in $\mathcal{S}_i$,\\
\hspace{5.2em} $\displaystyle u=\arg\min_{v\in \mathcal{S}_i}d_{\mathcal{G}}(v)$;\\
\hspace{2.6em} 5) $\mathcal{N}_i=\mathcal{N}_i\cup\{u\}$;\\
\hspace{2.6em} 6) Update $\mathcal{S}_i=\{v\in\mathcal{G}|v\sim w, w\in\mathcal{N}_i, v\notin\mathcal{N}_i\}$;\\
\hspace{1.3em} {\bf End Loop}\\
\hspace{1.3em} 7) Remove the edges, $\displaystyle \mathcal{E}=\mathcal{E}\backslash\{(p,q)|p\in\mathcal{N}_i,q\in\mathcal{V}\}$;\\
\hspace{1.3em} 8) Remove the vertices, $\displaystyle \mathcal{V}=\mathcal{V}\backslash\mathcal{N}_i$ and $\mathcal{G=G(V,E)}$;\\
{\bf End Loop}\\
\bottomrule[1pt]
\end{tabular}
\end{center}
\end{table}
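A minimal Python sketch of the greedy partition in Table \ref{algLocalSet}, using an adjacency-set representation of the graph; the six-vertex path at the bottom is an illustrative example, not one of the paper's test graphs:

```python
def partition_local_sets(adj, n_max):
    """Greedy centerless local-set partition, following Table algLocalSet.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a list of connected vertex sets, each with at most n_max vertices.
    """
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    remaining = set(adj)
    sets = []
    while remaining:
        # 1)-2) seed with the smallest-degree remaining vertex
        u = min(remaining, key=lambda v: len(adj[v]))
        current = {u}
        # 3) neighbour set S_i of the growing local set
        frontier = adj[u] - current
        while len(current) < n_max and frontier:
            # 4)-5) absorb the smallest-degree frontier vertex
            u = min(frontier, key=lambda v: len(adj[v]))
            current.add(u)
            # 6) update the neighbour set
            frontier = set().union(*(adj[w] for w in current)) - current
        # 7)-8) remove the chosen vertices and their edges from the graph
        for v in current:
            for w in adj[v]:
                adj[w].discard(v)
            adj[v] = set()
        remaining -= current
        sets.append(current)
    return sets

# Example: six-vertex path 0-1-2-3-4-5 with N_max = 2
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
sets = partition_local_sets(path, 2)
print(sets)
```

On the path graph the result is a disjoint cover by connected sets of at most $N_{\rm max}$ vertices.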
\section{Experiments}
We choose the Minnesota road graph \cite{gleich_matlabbgl}, which has $2640$ vertices and $6604$ edges, to verify the proposed generalized sampling scheme and reconstruction algorithm. The bandlimited signals for reconstruction are generated by removing the high-frequency components of random signals, whose entries are drawn from an \emph{i.i.d.} Gaussian distribution. The centerless local sets are generated by the greedy method in Table \ref{algLocalSet} for a given $N_{\rm max}$. Five kinds of local weights are tested:
\begin{enumerate}
\item
uniform weight, where $\varphi_i(v)$ equals $1/|\mathcal{N}_i|, \forall v\in\mathcal{N}_i$;
\item
random weight, where
$$\varphi_i(v) = \frac{\varphi^\prime_i(v)}{\sum_{u\in \mathcal{N}_i}\varphi^\prime_i(u)}, \quad \forall v\in\mathcal{N}_i, \varphi^\prime_i(u)\sim\mathcal U(0,1);$$
\item
Dirac delta weight, where ${\bm \varphi}_i$ equals ${\bm \delta}_u$ for a randomly chosen $u\in\mathcal{N}_i$;
\item
the optimal weight, where
$$
\varphi_i(v) = \frac{(\sigma^2(v))^{-1}}{\sum_{u\in\mathcal{N}_i}(\sigma^2(u))^{-1}}, \quad \forall v\in \mathcal{N}_i;
$$
\item
the optimal Dirac delta weight, where ${\bm \varphi}_i$ equals ${\bm \delta}_u$ for
$$u=\arg\min_{v\in\mathcal{N}_i}\sigma^2(v).
$$
\end{enumerate}
Notice that cases 3) and 5) degenerate ILMR to IPR.
\subsection{Convergence of ILMR}
In the first experiment, the convergence of the proposed ILMR is verified for various centerless local set partitions and local weights. The graph is divided into $709$ and $358$ centerless local sets for $N_{\rm max}$ equal to $4$ and $8$, respectively. Three kinds of local weights are tested: cases 1), 2), and 3). The averaged convergence curves are plotted in Fig. \ref{exp1} for $100$ randomly generated original graph signals. According to Fig. \ref{exp1}, convergence is accelerated when the graph is divided into more local sets with a smaller $N_{\rm max}$. This is easy to understand, because more local sets bring more measurements and increase the sampling rate, which provides more information for the reconstruction. According to (\ref{defgamma}), for the same $\omega$, a smaller $N_{\rm max}$ leads to a smaller $\gamma$ and guarantees faster convergence. The experimental results also show that in the noise-free scenario, reconstruction with uniform weights converges slightly faster than that with random weights, and both cases converge much faster than reconstruction with Dirac delta weights. This means that local-measurement-based ILMR behaves better than decimation-based IPR by combining the signals on different vertices properly.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{newexp1.pdf}
\caption{The convergence behavior of ILMR for various divisions of centerless local sets and different local weights.}
\label{exp1}
\end{center}
\end{figure}
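The qualitative convergence behavior can also be reproduced on a toy example. The pure-Python sketch below runs ILMR with uniform weights on an 8-vertex path graph, whose unnormalized-Laplacian eigenvectors are the standard DCT-II basis; the graph, local sets, and bandwidth are illustrative choices, not the Minnesota setup above:

```python
import math

# Toy setup: path graph 0-1-...-7, signals bandlimited to the two
# lowest graph frequencies, so that gamma = C_max * sqrt(omega) < 1.
N = 8

def u(k):
    """k-th Laplacian eigenvector of the path graph (DCT-II basis)."""
    c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    return [c * math.cos(math.pi * k * (v + 0.5) / N) for v in range(N)]

basis = [u(0), u(1)]                           # PW_omega basis, omega = lambda_1

def project(f):
    """Orthogonal projection P_omega onto the bandlimited space."""
    out = [0.0] * N
    for b in basis:
        coef = sum(b[v] * f[v] for v in range(N))
        for v in range(N):
            out[v] += coef * b[v]
    return out

local_sets = [[0, 1], [2, 3], [4, 5], [6, 7]]  # centerless local sets

def G(f):
    """G f = P_omega(sum_i <f, phi_i> delta_{N_i}) with uniform weights."""
    g = [0.0] * N
    for S in local_sets:
        m = sum(f[v] for v in S) / len(S)
        for v in S:
            g[v] = m
    return project(g)

# Bandlimited test signal, then the ILMR iteration:
# f^(0) = G f,  f^(k+1) = f^(0) + (I - G) f^(k)
f = [2.0 * b0 - 1.0 * b1 for b0, b1 in zip(basis[0], basis[1])]
f0 = G(f)
fk = f0[:]
for _ in range(100):
    gfk = G(fk)
    fk = [f0[v] + fk[v] - gfk[v] for v in range(N)]

err = math.sqrt(sum((fk[v] - f[v]) ** 2 for v in range(N)))
print(err)   # shrinks geometrically toward zero
```

Here $|\mathcal{N}_i|D_i=2$ and $\omega=2-2\cos(\pi/8)\approx 0.152$, so $\gamma\approx 0.55<1$ and the iteration contracts as the analysis predicts.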
\subsection{Optimal Local Weights for Gaussian Noise}
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{newexp3.pdf}
\caption{The convergence curves of reconstruction with uniform weights, the optimal weights, and optimal Dirac delta weights when independent zero-mean Gaussian noise is added to each vertex. }
\label{exp3}
\end{center}
\end{figure}
In this experiment, independent zero-mean Gaussian noise with different variances is added to the vertices.
The original signal is normalized to unit norm. All of the vertices are randomly divided into three groups, with the standard deviations of the noise chosen as $\sigma=1\times 10^{-4}$, $2\times 10^{-4}$, and $5\times 10^{-4}$, respectively. The graph is partitioned into $358$ centerless local sets with $N_{\rm max}=8$. Three kinds of local weights are tested: cases 1), 4), and 5). The averaged convergence curves are illustrated in Fig. \ref{exp3} for $100$ randomly generated original graph signals. One may observe that the steady-state relative error with the optimal weights is smaller than those with uniform weights and the optimal Dirac delta weights. The experimental results verify the analysis in Section \ref{subsecoptimalweight} and imply that a better selection of local weights can reduce the reconstruction error when the noise variances on the vertices differ.
\subsection{Performance against Independent and Identically Distributed Gaussian Noise}
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{newexp2.pdf}
\caption{Relative errors of ILMR under different SNRs with various choices of local weights.
The noise associated with each vertex is \emph{i.i.d.} Gaussian.}
\label{exp2}
\end{center}
\end{figure}
In this experiment, the performance of the proposed algorithm against \emph{i.i.d.} Gaussian noise is tested for three kinds of local weights: cases 1), 2), and 3). In this case the optimal local weights are equivalent to uniform weights. The graph is partitioned into $358$ centerless local sets with $N_{\rm max}=8$. The relative reconstruction errors of the three tests are illustrated in Fig. \ref{exp2}, where each point is the average of $100$ trials.
The experimental results show that for \emph{i.i.d.} Gaussian noise, reconstruction with uniform or random weights outperforms that with Dirac delta weights, which is actually the traditional sampling scheme of decimation. This shows that, compared with decimation, the proposed generalized sampling scheme is more robust against noise, as analyzed in Section \ref{secnoise}.
\subsection{Reconstruction of Approximately Bandlimited Signals}
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{newexp4.pdf}
\caption{The convergence curves for uniform weights, random weights, and Dirac delta weights when the original graph signals are approximately bandlimited.}
\label{exp4}
\end{center}
\end{figure}
In this experiment, approximately bandlimited signals are reconstructed by ILMR.
The original signal is normalized to unit norm, and the out-of-band energy is $10^{-2}$ or $10^{-4}$. The graph is partitioned into $358$ centerless local sets, and the maximal cardinality of the local sets is $8$. Three kinds of local weights are tested: cases 1), 2), and 3). The convergence curves are shown in Fig. \ref{exp4}, where each curve is the average of $100$ trials. It is natural that the steady-state error is larger for a larger out-of-band energy. Besides, the case with uniform local weights has a smaller relative error, much better than that with Dirac delta weights. In other words, reconstruction from local measurements outperforms reconstruction from decimation when the original signals are not strictly bandlimited.
\section{Conclusion}
In this paper, a sampling scheme named local measurement is proposed to obtain sampled data from graph signals, which is a generalization of graph signal decimation.
Using the local measurements, a reconstruction algorithm named ILMR is proposed to iteratively and perfectly reconstruct the original bandlimited signals. The convergence of ILMR is proved, and its performance in noisy scenarios is analyzed.
The optimal local weights are given to minimize the effect of noise, and a greedy algorithm for local set partition is proposed.
Theoretical analysis and experimental results demonstrate that the local measurement sampling scheme, together with the reconstruction method, is more robust against additive noise than traditional decimation.
\section{Appendix}
\subsection{Proof of Lemma \ref{lemma1}}
\label{proof1}
By the definition of ${\bf G}$, and considering that $\{\mathcal{N}_i\}_{i\in\mathcal{I}}$ are disjoint, one has
\begin{align}\label{lem1-1}
\|{\mathbf f}-{\bf G}{\mathbf f}\|^2
=&\left\|P_{\omega}\left(\sum_{i\in \mathcal{I}}\left({\mathbf f}_{\mathcal{N}_i}-\langle {\mathbf f}, \bm{\varphi}_i\rangle\bm{\delta}_{\mathcal{N}_i}\right)\right)\right\|^2\nonumber\\
\le&\left\|\sum_{i\in \mathcal{I}}\left({\mathbf f}_{\mathcal{N}_i}-\langle {\mathbf f}, \bm{\varphi}_i\rangle\bm{\delta}_{\mathcal{N}_i}\right)\right\|^2\nonumber\\
=&\sum_{i\in \mathcal{I}}\left\|{\mathbf f}_{\mathcal{N}_i}-\langle {\mathbf f}, \bm{\varphi}_i\rangle\bm{\delta}_{\mathcal{N}_i}\right\|^2,
\end{align}
where
$$
f_{\mathcal{N}_i}(v)=
\begin{cases}
f(v), & v\in \mathcal{N}_i;\\
0, & v\notin \mathcal{N}_i.
\end{cases}
$$
For $i\in\mathcal{I}$, one has
\begin{align}\label{lem1-4}
\|{\mathbf f}_{\mathcal{N}_i}-\langle {\mathbf f}, \bm{\varphi}_i\rangle\bm{\delta}_{\mathcal{N}_i}\|^2\nonumber
=&\sum_{v\in \mathcal{N}_i}|f(v)-\langle {\mathbf f}, \bm{\varphi}_i\rangle|^2\nonumber\\
=&\sum_{v\in \mathcal{N}_i}\left|\sum_{p\in \mathcal{N}_i}\varphi_i(p)\left(f(v)-f(p)\right)\right|^2\nonumber\\
\le&\sum_{v\in \mathcal{N}_i}\max_{p\in \mathcal{N}_i}|f(v)-f(p)|^2.
\end{align}
Denote
$$p_i(v)=\arg \max_{p\in \mathcal{N}_i}|f(v)-f(p)|^2.$$
Since $\mathcal{N}_i$ is connected, there is a shortest path within $\mathcal{N}_i$ from $v$ to $p_i(v)$, which is denoted as $v\sim v_1\sim \cdots \sim v_{k_v} \sim p_i(v)$, and the length of this path is not longer than $D_i$.
Then for $v\in \mathcal{N}_i$, one has
\begin{align}
\max_{p\in \mathcal{N}_i}|f(v)-f(p)|^2
=|f(v)-f(p_i(v))|^2
\le &\left(|f(v)-f(v_1)|+\cdots +|f(v_{k_v})-f(p_i(v))|\right)^2 \nonumber\\
\le &D_i\left(|f(v)-f(v_1)|^2+\cdots +|f(v_{k_v})-f(p_i(v))|^2\right).\nonumber
\end{align}
Therefore, one has
\begin{equation}\label{lem1-2}
\sum_{v\in \mathcal{N}_i}\max_{p\in \mathcal{N}_i}|f(v)-f(p)|^2
\le|\mathcal{N}_i|D_i\!\!\!\sum_{p\sim q; p,q\in \mathcal{N}_i}|f(p)-f(q)|^2,
\end{equation}
where $p\sim q$ denotes there is an edge between $p$ and $q$.
Inequality (\ref{lem1-2}) holds because each edge within $\mathcal{N}_i$ is reused no more than $|\mathcal{N}_i|$ times. For the right-hand side of (\ref{lem1-2}), one has
\begin{align}
\sum_{p\sim q}|f(p)-f(q)|^2
=&{\mathbf f}^{\rm T}{\bf L}{\mathbf f}={\mathbf f}^{\rm T}{\bf U\Lambda U}^{\rm T}{\mathbf f}=\hat{{\mathbf f}}^{\rm T}{\bf \Lambda} \hat{{\mathbf f}}\nonumber\\
=&\sum_{\lambda_i\le\omega}\lambda_i |\hat{f}(i)|^2\le \omega\hat{{\mathbf f}}^{\rm T}\hat{{\mathbf f}}=\omega\|{\mathbf f}\|^2,\label{eqasdf}
\end{align}
where $\bf L, U$, and $\bf \Lambda$ denote the Laplacian, its eigenvectors, and its eigenvalues, respectively.
The last inequality in \eqref{eqasdf} is because the entries of spectrum $\hat{{\mathbf f}}={\bf U}^{\rm T}{\mathbf f}$ corresponding to the frequencies higher than $\omega$ are zero for ${\mathbf f}\in PW_{\omega}(\mathcal{G})$.
Consequently, utilizing \eqref{lem1-4}, \eqref{lem1-2}, and \eqref{eqasdf} in \eqref{lem1-1}, we have
\begin{align}
\|{\mathbf f}-{\bf G}{\mathbf f}\|^2\le&\sum_{i\in \mathcal{I}}\left(|\mathcal{N}_i|D_i\sum_{p\sim q; p,q\in \mathcal{N}_i}|f(p)-f(q)|^2\right)\nonumber\\
\le&C_{\rm max}^2\sum_{p\sim q}|f(p)-f(q)|^2\nonumber\\
\le&\omega C_{\rm max}^2\|{\mathbf f}\|^2\nonumber
\end{align}
and Lemma \ref{lemma1} is proved.
\subsection{Proof of Proposition \ref{pro2}}
\label{proof2}
According to Lemma \ref{lemma1}, we have $\|{\bf I-G}\|\le \gamma<1$ for $PW_{\omega}(\mathcal{G})$ when $\gamma=C_{\rm max}\sqrt{\omega}<1$.
Then ${\bf G}$ is invertible and $1-\gamma\le \|{\bf G}\|\le 1+\gamma$ for $PW_{\omega}(\mathcal{G})$.
The inverse of ${\bf G}$ is
$$
{\bf G}^{-1}=\sum_{j=0}^{\infty}({\bf I-G})^j.
$$
According to (\ref{localmeasprop2}), ${\bf f}$ can be written as
\begin{equation}\label{fe}
{\bf f}={\bf G}^{-1}{\bf Gf}
=\sum_{j=0}^{\infty}({\bf I-G})^j\sum_{i\in \mathcal{I}}\langle {\mathbf f}, \bm{\varphi}_i\rangle\mathcal{P}_{\omega}(\bm{\delta}_{\mathcal{N}_i})
=\sum_{i\in \mathcal{I}}\langle {\mathbf f}, \bm{\varphi}_i\rangle {\bf e}_i,
\end{equation}
where
$$
{\bf e}_i=\sum_{j=0}^{\infty}({\bf I-G})^j\mathcal{P}_{\omega}(\bm{\delta}_{\mathcal{N}_i}).
$$
Similarly, one has
\begin{align}
\tilde{\bf f}=\sum_{i\in \mathcal{I}}\langle \tilde{\mathbf f}, \bm{\varphi}_i\rangle {\bf e}_i.\nonumber
\end{align}
Using (\ref{iter}) and ${\bf f}^{(0)}={\bf Gf}$, we have
$$
{\bf f}^{(k)}={\bf f}+({\bf I-G})^{k}({\bf f}^{(0)}-{\bf f})={\bf f}-({\bf I-G})^{k+1}{\bf f}.
$$
Therefore
\begin{align}\label{fke}
\tilde{\bf f}^{(k)}=\tilde{\bf f}-({\bf I-G})^{k+1}\tilde{\bf f}
=\sum_{i\in \mathcal{I}}\langle \tilde{\mathbf f}, \bm{\varphi}_i\rangle {\bf e}_i
-({\bf I-G})^{k+1}\tilde{\bf f}.
\end{align}
If $\gamma=C_{\rm max}\sqrt{\omega}<1$, ${\bf e}_i$ satisfies
\begin{align}\label{norme}
\|{\bf e}_i\|\le\sum_{j=0}^{\infty}\gamma^j\|\mathcal{P}_{\omega}(\bm{\delta}_{\mathcal{N}_i})\|
\le\frac{1}{1-\gamma}\|\bm{\delta}_{\mathcal{N}_i}\|=\frac{1}{1-\gamma}\sqrt{|\mathcal{N}_i|}.
\end{align}
According to (\ref{fe}), (\ref{fke}), and (\ref{norme}),
\begin{align}
\|\tilde{\bf f}^{(k)}-{\bf f}\|
&=\left\|\sum_{i\in \mathcal{I}}\langle \tilde{\mathbf f}-{\bf f}, \bm{\varphi}_i\rangle {\bf e}_i
-({\bf I-G})^{k+1}\tilde{\bf f}\right\|\nonumber\\
&\le\sum_{i\in \mathcal{I}}|\langle {\bf n}, \bm{\varphi}_i\rangle| \left\|{\bf e}_i\right\|+\gamma^{k+1}\|\tilde{\mathbf f}\|\nonumber\\
&\le\frac{1}{1-\gamma}\sum_{i\in \mathcal{I}}\sqrt{|\mathcal{N}_i|}\cdot|n_i| +\gamma^{k+1}\left(\|{\mathbf f}\|+\|{\bf n}\|\right).\nonumber
\end{align}
Then Proposition \ref{pro2} is proved.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 7,610 |
By Logan Wright
According to Einstein's theory of relativity, it is impossible to exceed the speed of light.
With this in mind, physicists set out to do the impossible and succeeded… sort of. In 2000, scientists in Princeton, New Jersey sent a small pulse of laser light through a vapor of gaseous cesium. The pulse's peak was already leaving the vapor-filled chamber while the pulse was still entering it, moving roughly 300 times faster than it would have in a vacuum. Thus, in a sense, light moved faster than the speed of light.
Other experiments have since managed to conquer light's supposed speed limit, though all claims have had fine print that keeps them from being especially groundbreaking. Those seeking to truly prove Einstein wrong must instead turn to objects like black holes, which could theoretically cause things to move at faster-than-light speeds. Black holes have proven to be particularly uncooperative test subjects though, and for now this remains speculation.
More information: Warp drive when? - NASA
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 4,802 |
Contents
Introduction
Author's Disclaimer
Chapter One: What are Emerging Markets?
The Name of the Game
Emerging, Emerging, Emerged!
The FELT Criteria
Chapter Two: Top Reasons for Investing in Emerging Markets
Why the Sudden Growth Spurt?
A "Problem" of Too Many Choices
All Markets are Volatile Sometimes
Your Best Protection Is Diversification
Chapter Three: Discovering Frontier Markets
Why Invest in Frontier Markets?
Dig Deeper to Find Gold
Go Ahead, Feel the Excitement
Chapter Four: Getting Down to Business
Emerging Market Mutual Funds
Domestic Listings of Emerging Market Companies
Depositary Certificate Listings of Emerging Market Companies in Developed Stock Markets
Exchange-Traded Funds
Chapter Five: Is There a Right Way to Invest?
Value versus Growth Orientation
Short versus Long Term
Bottom-Up versus Top-Down Investment Strategies
Chapter Six: Researching Emerging Markets
All Walks of Life
Narrowing the Choices
Is It a Buy or a Sell?
What's Its Worth?
Chapter Seven: The Reality of Risk
The Big Picture
Hedging Your Bets
Sit Tight, Don't Worry, be Happy
Chapter Eight: Timing Market Factors
Understanding Foreign Exchange
The Upside of Political Uncertainty
Overcoming Your Fears and Moving On
Chapter Nine: It's Called Volatility
A Research Challenge
A Painful Lesson
Don't Forget to Use What You Learn
Chapter Ten: The Importance of Being Contrary
Paper versus Realized Loss
Keeping a Cool Head
Chapter Eleven: The Big Picture and the Small Picture
The Bad and the Good
"Trust Us"
A Welcome Tidal Wave of Privatization
It's Okay to Sell the Crown Jewels
Industry Characteristics aren't the Be-All, End-All
Chapter Twelve: Pri·va·ti·za·tion
Priming the Pump
Here's How It Works
Why They're Good Investments
Chapter Thirteen: Boom to Bust
Three Warning Signs of a Bust
What Happened in Thailand?
A Short-Selling Nightmare
And the Baht Tumbles
It's Called Crony Capitalism
Chapter Fourteen: Don't Get Emotional
Become a Fan of All the Information You Can Find
The Example of China Telecom
A Chance for Small Investors
Chapter Fifteen: Turning Fear into an Advantage Instead of a Disadvantage
Going for Liquidity
Playing Pin the Tail on the Bottom
A Rosier Outlook
Chapter Sixteen: The Crisis Bargain Bin
No Pain, No Gain
A Few Cardinal Rules about Timing
Scoping Out the Banks
Looking for Patterns
Chapter Seventeen: Overcoming Irrational Market Panic
Riding the Rio Roller Coaster
First-Class Buying Opportunities
Here We Go Again
Facing Reality
Chapter Eighteen: The World Belongs to Optimists
Work Hard and be Disciplined
Be Humble
Show Some Common Sense
Get Creative
Be Independent
Remain Flexible
Investment Tools
Always Diversify Your Investments
Don't Run from Risk
Take a Long-Term View
Make Volatility Your Friend
Acknowledgments
About the Author
Little Book Big Profits Series
In the Little Book Big Profits series, the brightest icons in the financial world write on topics that range from tried-and-true investment strategies to tomorrow's new trends. Each book offers a unique perspective on investing, allowing the reader to pick and choose from the very best in investment advice today.
Books in the Little Book Big Profits series include:
The Little Book That Still Beats the Market by Joel Greenblatt
The Little Book of Value Investing by Christopher Browne
The Little Book of Common Sense Investing by John C. Bogle
The Little Book That Makes You Rich by Louis Navellier
The Little Book That Builds Wealth by Pat Dorsey
The Little Book That Saves Your Assets by David M. Darst
The Little Book of Bull Moves by Peter D. Schiff
The Little Book of Main Street Money by Jonathan Clements
The Little Book of Safe Money by Jason Zweig
The Little Book of Behavioral Investing by James Montier
The Little Book of Big Dividends by Charles B. Carlson
The Little Book of Bulletproof Investing by Ben Stein and Phil DeMuth
The Little Book of Commodity Investing by John R. Stephenson
The Little Book of Economics by Greg Ip
The Little Book of Sideways Markets by Vitaliy N. Katsenelson
The Little Book of Currency Trading by Kathy Lien
The Little Book of Stock Market Profits by Mitch Zacks
The Little Book of Big Profits from Small Stocks by Hilary Kramer
The Little Book of Trading by Michael W. Covel
The Little Book of Alternative Investments by Ben Stein and Phil DeMuth
The Little Book of Valuation by Aswath Damodaran
The Little Book of Bull's Eye Investing by John Mauldin
The Little Book of Emerging Markets by Mark Mobius
The Little Book of Hedge Funds by Anthony Scaramucci
Copyright © 2012 by Mark Mobius.
Published by John Wiley & Sons Singapore Pte. Ltd.
1 Fusionopolis Walk, #07-01, Solaris South Tower, Singapore 138628
All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as expressly permitted by law, without either the prior written permission of the Publisher, or authorization through payment of the appropriate photocopy fee to the Copyright Clearance Center. Requests for permission should be addressed to the Publisher, John Wiley & Sons Singapore Pte. Ltd., 1 Fusionopolis Walk, #07-01, Solaris South Tower, Singapore 138628, tel: 65-6643-8000, fax: 65-6643-8008, e-mail: enquiry@wiley.com.
This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional person should be sought. Neither the author nor the Publisher is liable for any actions prompted or caused by the information presented in this book. Any views expressed herein are those of the author and do not represent the views of the organizations he works for.
Other Wiley Editorial Offices
John Wiley & Sons, 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons, The Atrium, Southern Gate, Chichester, West Sussex, P019 8SQ, United Kingdom
John Wiley & Sons (Canada) Ltd., 5353 Dundas Street West, Suite 400, Toronto, Ontario, M9B 6HB, Canada
John Wiley & Sons Australia Ltd., 42 McDougall Street, Milton, Queensland 4064, Australia
Wiley-VCH, Boschstrasse 12, D-69469 Weinheim, Germany
ISBN 978-1-118-15381-9 (Hardcover)
ISBN 978-1-118-15370-3 (ePDF)
ISBN 978-1-118-15382-6 (Mobi)
ISBN 978-1-118-15383-3 (ePub)
Typeset in 12.75/15.5, CgCloister by MPS Limited, Chennai, India
To my mother and father for giving me the opportunity to learn
Introduction
One of the most frequent questions I get asked is: "When's the best time to invest?" The answer is: The best time to invest is when you have money. The reality is that market timing is impossible, and since purchasing ordinary shares of companies traded on a stock exchange (which is called equity investing) is the best way to preserve value, rather than leaving money in a bank account, it is most advisable to just get going. Don't wait for the fabled perfect moment. That answers the question of when to buy, but what about knowing when to sell? My advice on that issue is that an investment should not be sold unless a much better investment has been found to replace it.
"The best time to invest is when you have money."
—Sir John Templeton
To me, more important than the question of when to invest is the question of where to invest. My bias rests with emerging markets. Emerging markets are the financial markets of economies that are in the growth stage of their development cycle and have low to middle per capita incomes. The opposite of an emerging market is a developed market, the financial market of a mature economy with a high per capita income.
Emerging markets possess a greater upside in the long term because of their strong economic growth. Specifically, they offer the best opportunity for higher returns and diversification. It might also surprise you to know that emerging economies account for about two-thirds of the world's land mass—that's a large part of the world that you can't afford to miss out on!
Emerging markets are close to my heart; having worked in emerging countries for more than 40 years, I've learned a great deal about how their markets work and where money can be made. This book not only introduces you to emerging markets, but also describes where, why, and how you can invest in them. I also give you insights into individual markets and some of the crises that these markets have withstood; I hope to equip you with information that will help you better navigate through your search for investments in these markets.
But while we're on the topic of when, I'd like to share something I've learned over the years: Bull markets run longer and gain more in percentage terms than bear markets, which last a short period of time and fall less in percentage terms. This is an important overall phenomenon of which to be aware, because it is a factor when deciding whether to invest.
Market timing is difficult, but it is generally safe to assume that a bull market is coming eventually and that the stock market will rise above its previous highs during that time. Moreover, if you're strong enough to hold your own in both up and down markets, the best thing to do is to buy more stocks when the bear market comes, because it is going to be shorter in duration than the bull market. Investors who bought during the last bear market in 2008, for example, in many emerging markets doubled their money. Of course we'll have more bear markets going forward, but the lesson is clear.
Before diving in, let's look at an example. In January 1988, a bull market run in emerging markets began that lasted for about nine and a half years and saw the index climb over 600% from its starting point. The ensuing bear market lasted just over a year and saw a loss of more than 50% in the value of the index. The next bull market began in September 1998, and over the course of a year and a half, gained over 110% in value. This was followed by a similar-duration bear run in which prices declined by close to 50%.
The point is clear, and can be demonstrated again from October 2001 to November 2007, when the bull market returned more than 530% in just over six years, and was followed by a bear market that lost 65% in the subsequent 12 months.
The Ups and Downs (but Mostly Ups) of the MSCI Emerging Markets Index
Note: Bear market based on 30% decline from the peak and bull market based on 30% increase from the bottom.
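To see how these percentage moves compound, here is a minimal sketch. The bull/bear figures used are the approximate ones quoted above, not exact index data:

```python
def net_return(*period_returns):
    """Compound a sequence of period returns expressed as decimals
    (e.g. 6.0 = +600%, -0.5 = -50%) into one overall return."""
    value = 1.0
    for r in period_returns:
        value *= 1 + r
    return value - 1

# Approximate sequence from the text: +600% bull, -50% bear,
# +110% bull, -50% bear.
overall = net_return(6.0, -0.50, 1.10, -0.50)
print(f"overall return: {overall:.1%}")
```

Even after a 50% bear-market loss, a portfolio that rode the 600% bull market is still up well over 200%, which is the sense in which bull markets "gain more" than bear markets lose.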
Throughout the book you'll find Field Notes from my recent trips to countries considered to have emerging or frontier equity markets. These notes highlight industries to watch for and offer a glimpse into the sentiment there.
While there is no simple secret, blueprint, or road map to guarantee long-term success in emerging markets, there are plenty of good, solid lessons such as diversification, taking a long-term view, focusing on fundamentals, and tolerating market volatility. These lessons and more are what I have tried to put together in this book. I only hope that this Little Book can be your guide to big profits in emerging markets.
Author's Disclaimer
The views expressed in this book are solely my own, and do not necessarily represent the views of my employer.
The opinions expressed should not be relied upon as investment advice or an offer for a particular security. These opinions and insights may help you understand our investment management philosophy.
Statements of fact included in this book are from sources considered reliable, but the author makes no representation or warranty as to their completeness or accuracy.
Chapter One
What Are Emerging Markets?
An Investment Opportunity Not to Be Missed
While I was studying economic development at MIT in the early 1960s, the term underdeveloped countries was still in use, while more palatable euphemisms like developing countries were just coming into being.
The term emerging markets entered the vocabulary of the investment world in the late 1980s. The International Finance Corporation defined an emerging market this way: "A market growing in size and sophistication in contrast to a market that is relatively small, inactive, and gives little appearance of change." At the time, the term was a declaration of hope and faith on the part of those of us who were studying emerging stock markets, because many of these markets—such as those of Argentina, Peru, and Venezuela—were submerging faster than they were emerging.
The Name of the Game
The purview of international portfolio investors was quite limited in the early days. In fact, if the concept of emerging markets had been current at that time, Japan would probably have been placed in that category. In the 1960s, investing in Japan was considered to be a risky and pioneering adventure; it was known as a land of cheap and shoddy exports, weak currency, and an unstable political future.
When Sir John Templeton asked me to manage the first emerging markets fund in 1987, a universally accepted operational definition of an emerging market did not exist. Intuitively it was known that emerging implied developing or underdeveloped, but it wasn't possible to ascertain where the cutoff between "emerging" and "emerged" markets would lie. However, the World Bank's classification of "high-income," "middle-income," and "low-income" countries on the basis of per capita income was a good start. The middle- and low-income countries were considered to be "emerging."
Since 1987, when that original list of emerging markets was compiled from World Bank data, there have been a number of changes in the per capita income rankings of countries, with countries moving into new categories.
For example, the matter of countries with huge natural resources in the developing world, particularly in the Middle East, had to be addressed. While these countries clearly could not be classified as developed because of the low levels of infrastructure and income distribution at that time, they often had high per capita income levels due to the strong exports of resources such as oil and gas. Thus, although such countries as Qatar and Kuwait had per capita incomes significantly higher than the low- and middle-income countries, the distribution of that income was such that general living standards had not reached developed country status.
Some of the emerging country stock markets are well developed and are considered by some international investors as not belonging to the emerging country category. For example, Hong Kong is considered by a number of international investors as one of the world's major stock markets and therefore is not included in their lists of emerging markets. One pension fund manager said, "I don't consider Hong Kong an emerging market because it's easy to invest there and it's very liquid." However, when you consider the fact that Hong Kong is part of China, which is in the middle per capita income category, it is clearly in the emerging country category. Another important factor that needs to be taken into consideration is that in the case of many of the listed companies in Hong Kong, a large part of their earnings is generated in China; thus it would be a mistake not to include those opportunities in the emerging market category.
Emerging markets are the financial markets of economies that are in the growth stage of their development cycle and have low to middle per capita incomes.
Emerging, Emerging, Emerged!
Another question: When does a stock market cease to be an emerging market? As emerging country income levels rise and emerging stock markets become more developed and easily accessible to all international investors, we then face the challenge of deciding which countries or markets should be deleted from the list and which should be added. For example, there has been much talk of whether South Korea and Taiwan should graduate to the status of developed markets.
The emerging markets list will continue to change as economic and political situations evolve around the globe. For now, though, emerging market countries are considered to include those classified as developing or emerging by the World Bank, the International Finance Corporation, the United Nations, or the countries' own authorities, as well as countries with a stock market capitalization of less than 3% of the MSCI World index. These countries are typically located in Asia (excluding Japan), the Middle East, Eastern Europe, Central and South America, and Africa. About 170 countries meet these conditions today.
At first glance, it may seem that the range of countries is prohibitively diverse for serious investment analysis. But there were, and still are, practical factors that serve to reduce the list for investors. Many countries were excluded as initial investment possibilities because of a number of barriers, such as foreign investment restrictions, taxation, and the lack of stock markets. Gradually, more and more countries abandoned the socialist/communist economic model and came to realize that a market economy would yield faster growth. This resulted in both bond and stock markets becoming open to foreign investors in addition to local investors. By opening up their markets to foreign investors, countries allow investors living outside of their borders to invest in their stock markets and thus attract more capital as they become more integrated into the larger global markets.
The FELT Criteria
Even though markets are more accessible now, some barriers remain. Before I'm willing to enter any stock market, I would like it to have some minimum requirements that I've defined by an easy acronym: FELT. It stands for:
* Fair: Are all investors treated equally? Does the company have one class of shares so that each share has equal voting rights? Is market information available to all investors?
* Efficient: Can investors buy and sell shares easily and safely? Are the systems for keeping track of trades rapid and accurate with a minimum of delays so that money is not tied up unnecessarily?
* Liquid: Is there sufficient turnover or volume to be able to freely buy and sell shares in the market? Is the free float, or number of shares normally available for trading, a high percentage of the total number of shares outstanding?
* Transparent: Is it easy to find out what's really going on in this market? Am I able to obtain information from the listed companies? Do they publish audited financial statements?
If a market meets the FELT criteria, you can get excited about it. If it doesn't, approach with great care.
In fact, when I started investing in emerging markets for Templeton in 1987, there were just a handful of markets to invest in. Of course, before entering a new market there were many administrative and technical details to surmount, such as establishing a custodial bank to keep our securities safe, studying the local laws and regulations, learning about the complexities of the specific country's trading systems, and many other details in addition to the main task of learning about the possible company investments. Over the years, the emerging markets investable universe went from only five stock markets in 1987 to over 60 today.
Chapter Two
Top Reasons for Investing in Emerging Markets
Growth and Diversification
Why invest in emerging markets? Because that's where the growth is!
Economies of emerging markets are growing much faster than those of higher-income, developed countries. The International Monetary Fund (IMF) estimated that emerging economies would grow by 6% in 2012, three times faster than the 2% growth estimated for developed countries.
Why the Sudden Growth Spurt?
What are the reasons for this uptick in emerging market growth? When a country's economy is growing at 5% a year and its population is growing at only 1%, per capita income rises quickly, by roughly 4% a year. This is what is happening in emerging markets. This fundamental development is enhanced by another high growth propellant: the relatively low base from which these nations have been emerging, which allows for spectacular jumps in growth.
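The arithmetic can be sketched as follows. The starting income figure here is hypothetical, chosen only to illustrate the 5%/1% example:

```python
def per_capita_income(income, gdp_growth, pop_growth, years):
    """Project per capita income when total GDP and population each
    grow at constant annual rates (rates given as decimals)."""
    for _ in range(years):
        income *= (1 + gdp_growth) / (1 + pop_growth)
    return income

# Hypothetical: $2,000 per capita today, 5% GDP growth, 1% population growth.
after_decade = per_capita_income(2000, 0.05, 0.01, 10)
print(f"after 10 years: ${after_decade:,.0f}")
```

Per capita income grows at roughly the GDP growth rate minus the population growth rate (about 4% a year here), so it compounds to nearly $3,000 within a decade.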
Emerging market countries are also in luck in this critical sense: They have not had to reinvent the wheel—the cell phone, the laser printer, or industrial robots—to realize the rewards of modern technology. In practical terms, for example, this means that some countries were able to establish stock exchanges that didn't need trading floors, because all trading was electronic and brokers could enter buy and sell orders using computers. The productivity enhancements gained by technological innovations could be obtained in the blink of an eye. Such technology transfers have helped propel growth in emerging markets.
Moreover, with inevitable shortages of almost every service and commodity and an unfulfilled demand for new products as the wealth of the emerging nations grows, the opportunities for businesses can be unprecedented. As spending in those economies increases and the requirements for credit and finance expand, capital and equity market developments are stimulated.
In China, for example, only 1% of rural households had refrigerators in 1990. As living standards and incomes increased over the years, this number shot up to 37% in 2009. Similarly, only 9% had washing machines in 1990, compared to more than 50% in 2009. Only 7% of rural households in China had computers in 2009. This pales in comparison with developed countries such as the United States, where close to 70% of households had Internet access at home by 2009. Clearly, the potential for further growth is huge.
Stock markets in emerging countries are flourishing in tandem with economic growth. Stock market expansion allows for greater movement in stock values. These moves can be up or down, but in light of tremendous economic expansion, they are largely expected to continue rising. As more companies find themselves in a position to go public or are privatized by the state sector, there is a greater range of companies to choose from, in significant sectors of the economy like energy, finance, and industry. In 2011, for example, initial public offerings (IPOs) and follow-on issues exceeded US$245 billion, about 30% more than that recorded in the U.S. market and about 40% of the global total. The numbers were even more significant in 2010, when IPOs and follow-on issues in emerging markets totaled US$470 billion, more than double the US$198 billion recorded in the U.S. market and about half the world's US$950 billion.
By 2011, emerging markets represented 34% of the total world's stock market capitalization compared to less than 10% 10 years before. These statistics may be shocking to many longtime investors who are accustomed to thinking only about developed markets, but they mean that investors today have many more choices and ways to expose their portfolios to these high-growth emerging markets than any of us did 20, 10, or even 5 years ago.
Investing in emerging markets in particular enhances the performance contribution or benefits of diversification, since emerging markets have generally performed better than the developed markets. Since 1988, one year after we launched the Templeton Emerging Markets Fund, the accompanying chart shows that emerging markets have outperformed the U.S. market by about 940% and the world market by more than 1,360% as of the end of January 2012.
Emerging Markets Performance versus the World and U.S. Markets
Data source: MSCI, FactSet.
Of course, from one year to the next performance will vary and in some years emerging markets will underperform other markets, but in most years emerging markets are the winners. There can be dramatic differences between various countries. For example, in 2010, Argentina and Sri Lanka returned in excess of 70% in U.S. dollar terms, while on the other end of the spectrum Bahrain and Kazakhstan saw declines of 18% and close to 15% in U.S. dollar terms. It is thus logical that if you have investments spread across a number of countries, you have a better chance of at least a portion of your holdings generating optimal returns during any one year.
A "Problem" of Too Many Choices
Whether you're thinking about putting together your own portfolio of emerging markets stocks or buying into one of the many global or emerging markets mutual funds, a critical first step is to take a long, hard look at the world. You may not have done this at any great length since the sixth grade, but time spent studying a world map can never be wasted, and can be critical to your success as a global investor.
The first thing you'll probably notice is the relatively small size of the developed markets compared to the vast swaths of land covering the emerging countries. Emerging economies cover 77% of the world's land mass, have more than 80% of the world's population, hold more than 65% of the world's foreign exchange reserves, and account for about 50% of the world's gross domestic product (GDP). In 2010, about 5.7 billion people resided in emerging countries; that's about five times the 1.2 billion population of the developed markets. China and India alone account for more than 2.5 billion people; that's almost four times the approximately 700 million in the United States and European Union.
Within the emerging markets asset class, one could easily differentiate between the more mature or well-known emerging markets and the frontier markets, which are those that are younger and less developed.
Markets such as Brazil, Hong Kong, and India could be considered to be mature emerging markets not because they are immune to the volatility common to all emerging markets but because they offer a wide range of investment opportunities, some degree of transparency in the operation of their markets, and comparatively advanced systems of investor protection, including securities regulation and treatment of minority shareholders. These markets also have a more robust and developed local investor base, and have become less susceptible to the rise and fall of foreign risk appetite.
These nations comprise key markets in emerging market portfolios and should be examined closely for opportunities by any prospective emerging markets investor with the time to do the in-depth study.
Of course, frontier markets such as Nigeria, Vietnam, and Pakistan can be places to make substantial profits because the perception by the investing public is that those places are more "risky" and thus are avoided. This attitude often means that prices are low and the opportunity to obtain cheap stocks is good. This brings us right to this truth: Taking risks is the best way investors can make profits.
Taking risks is the best way investors can make profits.
Today, China, India, Indonesia, Brazil, and Russia (known as the Big Five) are all viable emerging markets, by anyone's yardstick. Not only are they emerging, but all are among the world's 20 largest economies, with China, Brazil, and India in the top 10. These economies are clearly economic powerhouses of the twenty-first century.
Let's take a quick look at that list again: China, India, Indonesia, Brazil, and Russia. In the 1990s, all five could have easily wound up on a list of the world's financial disaster zones. But today, the Economist Intelligence Unit forecasts that growth in these rapidly emerging countries will surge an average of 6.5% a year through 2020, beating by a wide margin the comparatively tepid 1.9% growth rate of the developed economies. But, of course, the crash in Indonesia and other Asian countries in 1997 and 1998, as well as the economic collapse of Russia in mid-1998, proved that real dangers remain.
The prominence of these emerging market nations has changed the economic map of the world over the past 25 years. So what does that mean for you, the aspiring global investor—or for me, for that matter?
All five economic powerhouses are places where you should be taking a closer look to see if you can find companies you'd like to invest in. Apart from the Big Five, the high growth rates enjoyed by nearly all emerging markets today (of course there are exceptions) suggest the following: If you want to gain exposure to the world's fastest-growing economies, you've got to take the plunge into emerging stock markets. Why? It's all about GROWTH!
All Markets Are Volatile Sometimes
Between 2000 and 2010, the economies of four major emerging markets—Brazil, Russia, India, and China, commonly known as the BRIC markets, grew by 112%. Over the same period, the economies of three major advanced countries—the United States, the United Kingdom, and Japan—grew by a comparatively insipid 14%.
In 2000, the average gross domestic product (GDP) growth of the world's emerging market countries hovered around 6.5%. At the same time, the average growth of the developed countries was a little under 4%.
A decade later in 2010, the growth gap between the world's emerging and developed markets had widened even further: The emerging markets average was 7.7% but the developed markets average was just 2.6%.
Take note: These stupendous growth rates should not be taken as proof that stock markets in the emerging markets will continue to appreciate substantially in tandem with their high-growth economies.
If anything, the Asian contagion of 1997 to 1998, much like the Latin American "tequila effect" of three years before, demonstrates that emerging markets have not been immune to shocks and are liable to take a few tumbles and stumbles on the road to wealth and prosperity. However, it's that very volatility that, if managed appropriately, can generate well above average returns over the long haul.
We've all got to face up to the fact that volatility is a characteristic in all markets—even the most mature ones.
Volatility is a characteristic in all markets—even the most mature ones.
The reason for this is really quite simple: All markets, being based more in mass psychology than objective reality, have a tendency to overshoot and undershoot economic growth rates. Judging the influence of irrational emotion is, incidentally, how patient investors can make money.
If you factor emotion out of the equation and base your strategy on long-term fundamentals, you can win both when markets fall and when markets rise.
Such overshootings and undershootings tend to cancel each other out over time. This means that stock markets do eventually reflect economic growth in the long haul. But, like all markets, emerging markets tend to be cyclical. That's a nice way of saying that sometimes they go boom, and sometimes they go bust.
The best way to take the edge off this volatility, I've found, is to faithfully follow the time-honored value-oriented and sometimes contrarian strategy pioneered by our mentor, the late Sir John Templeton, often called the godfather of global investing. Sir John was among the first investors to venture into Japan in the 1960s, when it was still considered an underdeveloped economy. His strategy was to:
* Search the world for the best investment bargains.
* Focus on the long term, not the short term.
* Use common sense.
When we see unrecognized value, we are willing to be contrarian, buying when others are despondently selling, and selling when others are greedily buying.
The great paradox of value investing is that most of the money is made after—as opposed to before—the fall. Whether it's the Asian contagion, the Latin American tequila effect, or the U.S. subprime crisis, it's important to keep in mind that you're going to find the most and the best bargains during hard times, when the news is bad and when everyone else wants to sell.
Bad times can be good times. Or, as a colleague of mine once put it: "For us, bad news is good news."
In addition to increasing the probability of higher returns, diversifying investments among many countries instead of just one tends to make portfolios less volatile, because of the wider diversity of holdings and the broader range of economic and political variables affecting the investments.
When I started out in emerging markets, with only around US$100 million in the kitty (in contrast to more than US$50 billion today), only a handful of countries met our investment criteria. Of those, tiny Hong Kong—where I'd lived in my early investing days—was our pivot, our linchpin.
I knew, or imagined I knew, the practices and characteristics of Hong Kong's market and the behavior of Chinese investors there and felt safe. But when we first began investing heavily in the Hong Kong market in October 1987, the great U.S. stock market crash was just sending shock waves around the world, bringing down world markets, many of which were just starting to recover from the 1970s slump brought on by the global oil crisis.
When the tidal wave hit Asia, the head of the Hong Kong Stock Exchange closed the market down for three days. By the time it reopened, we had paper losses of roughly a third of our total portfolio value. This profoundly unsettling experience (which featured me personally persuading many spooked investors to sit tight and ride out the wave) taught me and my then-tiny emerging markets team an unforgettable lesson about the inherent risk of putting too many eggs in one basket. I learned that it was actually possible to have all your eggs in the wrong basket at the wrong time.
This traumatic financial upheaval drove home—like a nail—one of the most important rules of investing in emerging markets: Your best protection is diversification.
Your Best Protection Is Diversification
To reduce one's vulnerability to a severe downturn in one market, every investor should diversify. That is your best protection against unexpected events, natural disasters, and dishonest management, as well as investor panic. Moreover, global investing across all sectors is always superior to investing in only one market or industry. If you search worldwide, you will find more bargains and better bargains than by studying only one market. You never want to be overly dependent on the fate of any one stock, market, or sector.
Diversification of assets among many countries and many stocks leads to lower volatility and lower risk, without limiting the potential for gain. This is because a broader range of economic and political variables affects your investments in different ways in different countries. A simple example: when oil prices are high, companies in oil-exporting countries like Russia or the United Arab Emirates will be more profitable and generate better performance, while companies in oil-importing countries like Japan that depend on imported oil may turn in weaker results and see their share prices drop.
If U.S. investors diversify from their U.S. holdings by making investments only in, say, the United Kingdom, the diversification effect exists, but not as markedly as if the investor moved into markets such as Bahrain, Jordan, Bangladesh, and Slovenia. This is because the correlation coefficient, a measure of the degree to which two markets move in tandem, between the UK and U.S. stock market indexes has been found to be as high as 0.98 out of a maximum of 1.0. A correlation that close to 1.0 means the two markets move almost in lockstep: when the U.S. markets are weak, the UK markets almost always are too. Such a portfolio therefore gains only a minor degree of diversification and risk reduction. For emerging markets, the numbers are lower. Indeed, Bahrain, Jordan, Bangladesh, and Slovenia have all shown a negative correlation to the United States, which means that when the U.S. market falls, these markets may actually rise, and vice versa. Emerging and frontier market investments therefore serve to reduce the volatility of a portfolio to a much greater extent than investments in other developed markets would.
Diversification benefits also exist between different emerging and frontier markets, since even individual markets can move quite independently of each other, giving a diversified portfolio a great advantage over single-country investments. In some years, there has been a correlation coefficient of only 0.28 between the Thailand and Egypt markets, and 0.32 between Turkey and Nigeria. China and Jordan actually have had a negative correlation, as have South Korea and Bahrain. Therefore, a portfolio with a selection of emerging market stocks, rather than a portfolio with one emerging market and one developed market, is likely to gain greater diversification benefits.
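The risk-reduction effect of low or negative correlation can be illustrated with the standard two-asset portfolio-variance formula. The 25% volatilities and 50/50 weights below are hypothetical inputs, not figures from the text:

```python
import math

def portfolio_volatility(w1, sigma1, w2, sigma2, rho):
    """Volatility of a two-asset portfolio with weights w1, w2,
    asset volatilities sigma1, sigma2, and correlation rho."""
    variance = ((w1 * sigma1) ** 2 + (w2 * sigma2) ** 2
                + 2 * w1 * w2 * sigma1 * sigma2 * rho)
    return math.sqrt(variance)

# Two markets, 25% volatility each, equal weights.
print(portfolio_volatility(0.5, 0.25, 0.5, 0.25, 0.98))   # near 0.25: little risk reduction
print(portfolio_volatility(0.5, 0.25, 0.5, 0.25, -0.20))  # well below 0.25
```

With a 0.98 correlation the combined portfolio is barely less volatile than either market alone; with a negative correlation its volatility drops sharply, which is the mathematical sense in which negatively correlated markets protect a portfolio.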
Of course, as the world gets smaller, as communications become better and faster, and as global investors invest more in the emerging markets, the correlation between the developed and the emerging markets has increased. This is especially the case in times of crisis, as was evident during the U.S. subprime crisis in 2008, the Asian financial crisis in 1998, and even the Mexican peso devaluation in 1994.
Fortunately, diversification has become easier with more opportunities to choose from. Today there are many more baskets and far fewer basket cases. In 1987, when Templeton launched the first listed emerging markets mutual fund on the New York Stock Exchange, no other U.S.-based mutual funds invested significant portions of their portfolios overseas. But today investors have access to more than 6,000 equity funds that invest in emerging markets.
In addition to growing at much faster rates than developed countries, emerging economies are becoming stronger and more immune to external shocks than they were in the late 1990s. Emerging countries tend to have higher foreign exchange reserves and lower debt levels than their developed counterparts.
As of August 2011, emerging markets as a group had about US$7,000 billion in total reserves (excluding gold), double the approximately US$3,500 billion in developed markets. The world's largest foreign exchange reserve holder, China, had more than US$3,200 billion. Japan, a distant second, held about US$1,100 billion, while the next-largest holders included such emerging countries as Russia, Saudi Arabia, Korea, and Taiwan. By the end of 2010, public debt as a percentage of GDP for the G7 nations exceeded 95%. That's more than three times the approximately 30% for emerging markets. The total debt-to-GDP ratio including both public and private debt for developed countries such as Japan, the United Kingdom, Portugal, Spain, and the United States exceeded 200%, and in the case of Japan, it was more than 350%. On the other end of the spectrum, the percentages for emerging countries ranged from less than 50% for Russia and less than 80% for Turkey to about 110% for Brazil and India.
It is thus easy to understand why emerging markets are increasingly accepted as satisfying the investment objectives of portfolio diversification and higher returns. This is why investors in the United States, Europe, and Japan are increasingly restructuring their holdings to reduce domestic and developed market exposure in exchange for emerging market exposure. However, this is not being done fast enough. As we have pointed out, emerging stock markets currently account for more than 30% of the world's market capitalization, whereas on average, U.S. institutional investors have only 3 to 8% weightings dedicated to emerging markets. This means that most investors are very underweight emerging markets in their portfolios. So, as more and more investors begin to realize the strong growth potential of emerging markets equities, we expect to see more and more money flow into those markets.
Chapter Three
Discovering Frontier Markets
Don't Miss the First Mover Advantage
During a recent trip to Lagos in Nigeria, I got stuck in a hotel elevator not once but twice. I must admit, though, this was not atypical even for the nicest hotels in some emerging market cities. This experience speaks to the demand for more power sources in emerging countries, particularly frontier countries such as Nigeria. That hotel, like other hotels and businesses in Nigeria, had to depend on its own diesel generators because the public power system was so unreliable; in this case, the hotel generator failed.
In recent years, it has been widely recognized that among the emerging markets are numerous new markets that are showing even faster growth. These newer emerging markets, which we call "frontier markets," are found all over the world—in Latin America, Africa, Eastern Europe, and Asia. The list is long and includes such countries as Nigeria, Saudi Arabia, Kazakhstan, Bangladesh, Vietnam, United Arab Emirates, Qatar, Egypt, Ukraine, Romania, Argentina, and many more countries that have been underresearched or ignored totally because they were too small or perceived as being too risky or too difficult to enter because of foreign exchange restrictions and other investor barriers.
Why Invest in Frontier Markets?
In the period from 2001 to 2010, the top 10 fastest-growing countries were all emerging markets, and nine of them were frontier markets. It is surprising to learn that those fastest-growing countries, in addition to China, included the frontier markets of Angola, Myanmar, Nigeria, Ethiopia, Kazakhstan, Chad, Mozambique, Cambodia, and Rwanda. In 2010, Vietnam grew by 6%, Nigeria by 7%, and Qatar by 18.5%, compared to the average of about 2% for developed economies.
Nine of the top 10 fastest-growth countries have been frontier markets.
The future potential growth is also great. Although 16% of the world's land area and 17% of the world's population are in frontier markets, only 6% of the world's gross domestic product is in those markets. That gap is being closed rapidly given their fast growth rates as more and more countries catch up in production and consumption. For example, let's look at the penetration of mobile phone usage in several of these markets. Whereas in 2010 the penetration of mobile phones in Japan and the United States was more than 90%, in Nigeria it was only 55%, and in Bangladesh only 46%. But they are catching up fast as per capita incomes rise and distribution/communications systems expand.
As we have said, these frontier markets are markets that investors would normally shy away from because they may be perceived as too risky, too small, and too illiquid. However, we have found them to be not only faster growing but also to have a number of characteristics that make them safer than imagined. For example, they generally have lower debt and higher foreign exchange reserves in relation to their gross domestic product. And with economic growth comes capital market growth: these markets are quickly moving from small and illiquid to large and liquid.
Many of the frontier market countries have enormous reserves of natural resources. Companies that are strong producers of commodities such as oil, iron ore, aluminum, copper, nickel, and platinum look especially interesting. Infrastructure development in emerging markets has led to continued demand for hard commodities, but demand for soft commodities such as sugar, cocoa, and select grains has also increased. Many of the frontier countries are already leading producers of oil, gas, precious metals, and other raw materials and are well positioned to benefit from the growing global demand for these resources.
In the consumer area, rising per capita incomes mean that demand for consumer products is increasing fast. Decelerating population growth combined with high economic growth is what drives this rise in per capita income. This has led to positive earnings growth outlooks for consumer-related companies. We look for opportunities not only in areas related to consumer products, such as automobiles and retailing, but also in consumer services such as finance, banking, and telecommunications.
Additionally, as the economies of frontier market countries expand, they continue to increase investments in infrastructure, offering interesting opportunities in the construction, transportation, and telecommunications industries. Rising consumption provides these economies with strong purchasing power and the ability to spend their way into growth. Moreover, frontier market countries have been, and continue to be, positively impacted by the substantial investments made by large emerging market countries such as China, India, Russia, and Brazil.
The relatively low correlation of frontier markets to global markets also provides investors with an opportunity to diversify their investment portfolios. Furthermore, the economic drivers across frontier markets are diverse. For example, Botswana, one of the world's largest diamond exporters, is introducing data processing centers. Kazakhstan, a country rich in oil and other natural resources, is making significant investments in infrastructure development. These varied economic themes across frontier markets ensure the opportunity to build a diversified portfolio.
The rising number of initial public offerings (IPOs) in the frontier markets demonstrates that local capital markets have been steadily gaining strength. This is largely a result of governments selling some of their state-owned companies and assets to the public through stock market listings, while entrepreneurs have increasingly been using the capital markets as a source of funding for business expansion. The increase in IPOs has, in turn, boosted the overall equity market capitalization of the frontier market universe and is starting to bring these countries and companies to the attention of more investors.
Dig Deeper to Find Gold
Furthermore, frontier markets are generally underresearched. They thus tend to be ignored by the majority of investors. For example, in one month in 2010, approximately 30,000 company research reports were produced in the United States by brokers, banks, and other organizations. In Nigeria, the number was less than 100. This lack of information for investors can be a plus for those willing to do original on-the-spot research. Thus frontier markets display even greater opportunities for those who are willing to do their research, visit the frontier market companies, and dig for information.
The fact that frontier markets are not well known and that not many investors are active in them (yet) means that there are opportunities to be found. Time spent on due diligence to assess the quality of the management team, including more frequent on-site visits to evaluate the business effectively, can uncover great opportunities. A visit is crucial, as examinations of office operations and factories often yield critical insights that cannot be seen through reading financial statements. A meeting with the company's managers or a tour of the company's factories can provide a wealth of knowledge that may otherwise remain undiscovered. Of course, I understand that visiting companies may not be practical for individual investors. This is where the company's annual reports and website as well as the Internet can be very valuable tools. There is a wide spectrum of information available at your fingertips. Dig deeper. Don't just look at the financials of a company; research the people behind the numbers, learn about the industry, and look into competitors—you could learn useful information.
You'll notice that along with visiting individual companies, it's also important to keep your eyes open from the moment you land in a city, and to form a complete picture of the market, the company, and the people. Even small things, like how modern the airport is, the efficiency of public transportation, how crowded a restaurant or hotel might be, and whether there are many tourists, can teach you a lot about the local dynamics and the willingness, and speed, to modernize and compete, which are, eventually, important drivers of stock markets.
Go Ahead, Feel the Excitement
Stay excited about frontier markets because, in the future, many of them are likely to become quite important and eventually become full-fledged emerging markets.
Stay excited about frontier markets because, in the future, many of them are likely to become quite important and eventually become full-fledged emerging markets. Their potential for economic growth and development remains considerable, especially if the current trend toward the implementation of political and economic reforms remains on course.
Field Note: Kazakhstan
September 2010
Kazakhstan is becoming increasingly important as an investment destination. It has vast natural resources such as oil, gas, copper, uranium, and a host of other minerals. As a result of the billions of dollars pouring into the country to develop those resources, Kazakhstan could become an economic engine for Central Asia. The purpose of my visit was to take a closer look at the mining sector. Prices for several commodities, including metals such as palladium, platinum, copper, gold, and silver, rose dramatically in 2010, and that significantly benefited Kazakh metals and mining companies.
Growing consumerism and wealth were evident in Almaty, Kazakhstan's largest city, as I watched skiers shoot down a mountainside overlooking the city at a new stadium built for the 2011 Asian Games. At a mega-mall, I saw shops you would find in malls all over the world. However, improvement in general living standards still has a way to go.
Here are some notes from my visit:
Mining: My team and I took a one-and-a-half-hour flight to a mining conglomerate's headquarters and its mine site. At one of the firm's four mines in the area, a comprehensive safety briefing was provided by an enthusiastic safety engineer. Dressed in mining clothes with oxygen containers, masks, and hard hats with electric torches, my analysts and I descended 140 meters into the ground in a steel elevator cage. After going through a few iron doors, we boarded a diesel-powered all-terrain vehicle and drove three kilometers through lighted tunnels to the face of one mining site.
After ascending from the mine, my team and I traveled to the concentrating and smelting plants, where the ore is crushed and put in pools of reagents to extract the metal and other minerals. This slurry is then put in circular settling tanks where the concentrate floats to the top and is extracted, dried, smelted into cathodes, and then melted into ingots. I saw piles of gleaming ingots with shipping slips addressed to China. The Chinese influence seems to be strong: The conglomerate is receiving billions of dollars in financing from a Chinese bank, and it has a joint venture with a Chinese firm to develop another mining project.
This trip served us well in understanding the important mining sector in Kazakhstan, as well as in seeing how efficient the company's mining operations were.
Chapter Four
Getting Down to Business
How to Invest in Emerging Markets
Once you've decided you will move into emerging markets, the next question to decide for yourself is how to invest. The challenging world of emerging market investments holds great rewards, but also substantial risks for the investor. The criteria to be applied when evaluating the desirability of investments in those markets vary depending on your investment style and objectives.
As an investment manager, I believe in the efficacy of mutual fund investment and will outline why in this chapter. However, some investors may have a preference for purchasing stocks themselves, so I include a review of investment instruments and how to use them.
Primary investment instruments to access emerging markets may be summarized as follows:
* Emerging market mutual funds.
* Domestic listings of emerging market companies.
* Depositary certificate listings of emerging market companies in developed stock markets.
* Exchange-traded funds.
Let's get started by looking at mutual fund investments.
Emerging Market Mutual Funds
Emerging market trusts and funds as we know them today began in 1986, with the launch of an emerging markets fund for institutional investors by Capital International and the International Finance Corporation (IFC). Individual retail investors were able to invest in emerging market funds in 1987, when Templeton launched its New York Stock Exchange—listed Templeton Emerging Markets Fund, Inc. At that time, no other U.S.-based mutual funds invested significant portions of their portfolios outside the United States. But today more than 27,000 mutual funds globally invest in international securities. And more than 6,000 of those invest exclusively in emerging markets.
Funds make the process of investing much more accessible, and require much less monitoring and research on a day-to-day basis.
Funds make the process of investing much more accessible, and require much less monitoring and research on a day-to-day basis. There are solid reasons for selecting funds as your instrument of investment: gaining exposure to potential high returns and reduced portfolio risk while shielding yourself from the complications of direct equity market purchases.
Closed-End Funds
A closed-end fund (in the United Kingdom they are called investment trusts) operates like any publicly listed company on a stock exchange. The fund raises capital by issuing a fixed number of shares via an initial public offering (IPO). These shares are then listed and traded freely on the market.
At the very early stages of emerging market development, closed-end country funds were a popular way of establishing emerging markets and putting them on the map among investors in the United States, Europe, and Japan. At that time, because of the low liquidity of emerging market stocks, it was felt that a closed-end structure would be best. In an open-end mutual fund structure, investors may redeem their investment from the fund manager at any time; in the closed-end structure, they are not able to ask for their investment back from the fund manager but must realize their investment by selling to other holders. In this way, the fund manager would not be challenged with a situation where many investors suddenly ask to redeem but the manager finds it difficult to sell the portfolio shares. Investment trusts or closed-end funds are sold just like common shares, with the transactions going through stockbrokers, where normal commissions are paid.
Of course, closed-end funds also provide investors with a way to gain access and exposure to an emerging market without facing all the problems encountered when entering the markets themselves. In addition, since these country funds were closed-end funds and traded on the major stock exchanges, they were liquid, and investors could enter and exit the market rather simply.
There can be differences in emerging markets funds' performance in view of the wide range of individual market behavior. One significant problem is that during certain periods of time, an emerging markets fund's share price performance may not correspond with the actual value of the portfolio. Each day the net asset value (NAV) of the portfolio is calculated by dividing the total value of all the holdings in the portfolio, including cash and excluding any liabilities, by the number of outstanding fund shares. However, on the stock market where the closed-end fund is listed, the price of each fund share may not correspond to the NAV. Sometimes there may be a premium and sometimes a discount. The range of premiums or discounts can be wide. A discount indicates that investor sentiment toward emerging markets is negative and/or that investors feel the manager of the fund is not adding value to the fund. A premium, where the fund share price is higher than the NAV, indicates that investors are optimistic about emerging markets and/or believe the fund manager is doing a good job to enhance the NAV of the fund assets.
For buyers of closed-end funds, one of the greatest advantages is that they often sell at attractive discounts to their net asset values. In this way investors may purchase a basket of assets at a discount to their market value. Thus calculating the percentage difference between the share price and the net asset value per share is a key factor. Other factors to be studied are the percentage of total assets held in cash, the geographical spread of the investments, and the historical total return measured in terms of the performance of the NAV per share.
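The NAV and premium/discount arithmetic described above can be sketched in a few lines of Python. All fund figures below are invented for illustration:

```python
def nav_per_share(total_assets, liabilities, shares_outstanding):
    """Net asset value per share: (assets - liabilities) / shares."""
    return (total_assets - liabilities) / shares_outstanding

def premium_discount(market_price, nav):
    """Positive means a premium, negative a discount, as a fraction of NAV."""
    return (market_price - nav) / nav

# Hypothetical closed-end fund: $500M in portfolio holdings plus $20M cash,
# $5M of liabilities, and 25M shares outstanding.
nav = nav_per_share(500e6 + 20e6, 5e6, 25e6)
print(f"NAV per share: ${nav:.2f}")            # $20.60

# If the fund's shares trade at $18.54 on the exchange, the fund
# sits at a 10% discount to its NAV.
print(f"Premium/discount: {premium_discount(18.54, nav):+.1%}")
```

A buyer at $18.54 is, in effect, paying 90 cents for each dollar of underlying assets, which is the "basket of assets at a discount" the next paragraph describes.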
Open-End Funds
It is usually easiest to describe open-end funds as the opposite of closed-end funds. The differences between open-end and closed-end funds are numerous. However, the most important difference is the relationship between price and NAV. As we have said, the NAV of a fund is based on the sum total of all the market values of the fund's securities positions in addition to cash, and less any liabilities. In the case of open-end funds or unit trusts (as they are called in the United Kingdom) the managers must be continuously ready to offer shares to incoming investors at the current NAV plus any sales charges and expenses. They also stand ready to redeem investor shares at NAV less any charges. In contrast, as discussed previously, in the case of closed-end funds, the holder must sell his shares in the market to obtain his money. The important element is that prices of open-end funds are the same as their NAV, whereas the share price of a closed-end fund is determined by the market and tends to differ from the NAV.
Both open-end and closed-end funds offer advantages, the most important of which are:
* Diversification.
* Professional fund management.
* Lower costs as compared to investing individually.
* Convenience in record keeping.
In open-end funds there is a tendency for flows into the fund to increase at the peak of bull markets and outflows to increase in bear markets. This could make it difficult for the fund manager to perform at his or her best. However, if investors cooperate with the fund manager and invest more money when the markets are down, then, in fact, open-end funds could be more advantageous than closed-end funds.
One advantage of closed-end funds or investment trusts is that investors can precisely control the price at which they purchase the shares. In open-end funds, the price at which the shares are purchased is not known until after the investor has made the commitment, since the NAV would be computed at the end of the trading day. Of course, in most cases the differences between the NAV from day to day are normally not great.
Domestic Listings of Emerging Market Companies
For the average investor with limited time, the most difficult method of investing in emerging markets is by directly investing in stocks listed on emerging stock markets. Such direct investments, because of unique local conditions or local investor sentiments, can result in spectacular returns or spectacular losses. When making such direct investments, there are numerous considerations such as foreign currency changes and their impact on the investment as well as the business in which you are investing.
Depositary Certificate Listings of Emerging Market Companies in Developed Stock Markets
For those who prefer to take advantage of foreign stocks without going to a foreign market or foreign currency, depositary receipts such as American depositary receipts (ADRs) and global depositary receipts (GDRs) are designed to give you just that chance.
Depositary receipts are receipts for shares of a foreign company deposited in that foreign country and traded on that foreign stock market. For example, American depositary receipts are traded in the United States. Normally American banks will have a custodial operation in the foreign country where the shares are traded. The shares are kept in the custodian's vault in that foreign country, and then depositary receipts are issued against those shares.
Global depositary receipts are similar instruments, but they are traded in international exchanges, mainly in London and other European markets. They differ from American depositary receipts since they provide issuers with a means of tapping global capital markets by simultaneously issuing one security in multiple markets. Global depositary receipt issues often benefit from better-coordinated global offerings, a broadened shareholder base, and increased liquidity.
The advantage of depositary receipts is that they enable investors in the United States and Europe to invest in an emerging market company without leaving their home market. In some cases, the home market brokerage costs and other costs associated with purchasing and holding shares are even lower than in the emerging market. By not going into the emerging markets directly, the investor avoids considerable administrative and other complications. In addition, dividend collection and distribution are completed much more efficiently since the sponsoring bank undertakes to collect all dividends, and then distributes them to the depositary receipt holders after converting them into U.S. dollars or the holder's home market currency.
The disadvantages of depositary receipts are that they may sell at a higher price than the underlying stock in the home market, and they are sometimes less liquid than the underlying stocks.
Exchange-Traded Funds
Exchange-traded funds (ETFs) are much like closed-end funds in the sense that they are traded on a stock exchange like stocks. An ETF has stocks in its portfolio, and the manager attempts to hold the total value of assets close to its net asset value over the course of a trading day. Most ETFs have an objective of tracking an index. In the case of emerging market ETFs, they may want to track the MSCI Emerging Markets Index, the S&P/IFCI index, or others.
An ETF thus combines the valuation feature of an open-end fund with the trading feature of a closed-end fund. ETFs were launched in the United States in 1993 and in Europe in 1999 as index funds, but in 2008, the U.S. Securities and Exchange Commission authorized the creation of actively managed ETFs.
ETFs are sometimes attractive because they enable the investor to closely follow an index of stocks and, like closed-end funds, have stock-like features. In volatile markets, however, it can be difficult for the ETF manager to track a particular index exactly.
There is no guarantee that an ETF will always trade at exactly its NAV. If there happens to be strong investor demand, the ETF share price would rise above its NAV per share. This gives an opportunity for speculators to trade on the difference with the knowledge that the ETF manager will need to bring the NAV and price together again.
Managers of ETFs tend to use various arbitrage methods to ensure that the share price tracks the net asset value. Usually the price deviation between the daily closing price and the daily NAV is less than 2%, but sometimes the price deviations may be quite large.
Chapter Five
Is There a Right Way to Invest?
Comparing Investment Styles
I can still recall how much grief I received in 1999 from various investment critics when I refused to pay exorbitant prices for technology stocks. The share prices were unreasonable, making valuations extremely expensive and unjustified. There was a clear disconnect between earnings and stock prices. Yes, the funds I managed suffered some short-term underperformance, but over the long term, it paid off when those technology stocks crashed. It pays to look (or in this case, study) before you leap. I must say that I wasn't surprised when the bubble burst in 2000, when investors punished companies that failed to deliver expected profits by dumping their stocks. Investors who withstood the pressure and temptations of investing during those times of "irrational exuberance" were rewarded.
Too much time has been used up over the years by trying to determine which investment style is the most successful. All kinds of terminology have sprung up to describe the strategies employed: "technical," "fundamental," "active," "passive," "bottom-up," "top-down," "value," and "growth." Instead of rehashing the debate, I'll outline my personal investment approach and explain why I think it makes sense for any equity investor.
Value versus Growth Orientation
Sir John Templeton once said that there was a tendency for too many investors to focus on "outlook" or "trend." It was his belief that more opportunities could be uncovered by focusing on value and I agree with that.
Studies have shown that over the long term, stock market prices tend to be influenced by the asset value and earnings capabilities of listed shares. Also, share prices tend to fluctuate much more widely than real share values.
Opportunities can be uncovered by focusing on value.
The value approach to investing was first and best defined by Benjamin Graham and David Dodd in their 1934 book Security Analysis. In that book, they articulated the system of buying value shares whose price was cheap relative to factors such as earnings, dividends, or book assets. But studies have also revealed elaborations on the application of that fundamental value orientation. For example, one study showed that investing in shares with a low ratio of share price to cash flow was a better strategy than buying shares with a low ratio of share price to book value. Others indicate that price-to-earnings (P/E) ratios are the best determinant of future price. Needless to say, whatever individual value criteria are used, those indicators that give investors insight into the earnings power and assets of a company are the best paths to finding value.
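The three classic value ratios mentioned above are all simple quotients of the share price over a per-share fundamental. A minimal sketch, using invented company figures:

```python
def price_to_earnings(price, earnings_per_share):
    """P/E: how many dollars paid per dollar of annual earnings."""
    return price / earnings_per_share

def price_to_book(price, book_value_per_share):
    """P/B: price paid per dollar of net assets on the books."""
    return price / book_value_per_share

def price_to_cash_flow(price, cash_flow_per_share):
    """P/CF: price paid per dollar of cash flow."""
    return price / cash_flow_per_share

# A hypothetical company trading at $24.00 with $3.00 in earnings,
# $16.00 in book value, and $4.00 in cash flow per share:
price = 24.0
print("P/E :", price_to_earnings(price, 3.0))    # 8.0
print("P/B :", price_to_book(price, 16.0))       # 1.5
print("P/CF:", price_to_cash_flow(price, 4.0))   # 6.0
```

Lower readings on these ratios are what a value screen looks for, though, as the text notes, no single criterion is decisive on its own.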
Many investors speak of "value investing" but few actually diligently apply the value investing principles and perform the hard work necessary to find real value. Those investors who do work hard at it are inevitably rewarded. The investor who purchases a stock that is selling below its intrinsic value can enjoy a certain peace of mind. If, after purchasing a stock at a low price in relation to value, the price continues to decline, then it is simply a better bargain than it was before. A number of studies have shown that dividend-paying companies perform better.
On the flip side, growth investors generally believe in buying stocks with above-average earnings growth without great regard for whether the stock is a bargain. In general, growth investors are more willing to pay a premium for such companies because they expect them to continue growing at such high rates. As a result of their high growth, such companies tend to have higher price-to-earnings and price-to-book value ratios than value stocks. In addition, growth companies tend to have low or even no dividend payouts, as profits tend to be invested into the company to further boost earnings. The main risk here is that the expected growth and profits may not occur. After all, just look back to 1999 when the technology sector was booming and investors were picking up stocks at astronomical prices with little regard for their current value but counting on their high growth to yield good value at a future date. However, when those companies failed to meet the high expectations placed on them, investors dumped their shares faster than you could say "technology crash," which led to the bursting of the 2000 technology bubble.
Historically, growth and value investment styles often do not move in tandem. While the growth market was strong in 1999, investors ignored the benefits of value investing. However, after the technology crash in 2000, the market shifted and value investing began showing signs of strength and dominance. Understanding how an investment is likely to perform under different market conditions can help you avoid selling a fund or stock because its style is temporarily out of favor.
You may now be wondering which strategy is for you or which one makes more sense. I strongly believe that value investing is no doubt the way to go, especially for investors with a long-term horizon in mind. And here's why.
History has taught us that when we buy value stocks, which are trading at low valuations despite strong fundamentals, over time the market will uncover the bargain and yield higher returns. In general, value investors tend to avoid paying unreasonable prices for stocks. However, growth investors tend to pay more attention to high market expectations and are more willing to pay higher and sometimes unreasonable current prices. (Here I must emphasize "current" since a price may seem high now but with high earnings growth that price may appear very cheap in the future.) In cases where expectations fail, value investors have less to lose as the stock has already been trading at low prices. However, in the case of the growth investor, the stock price would plummet and result in heavy losses, as seen with the technology sector in 2000. As a result, growth investing tends to involve greater risk than value investing.
I've been writing this as if there is a clear differentiation between value and growth. The reality is that you as an investor must always be looking to the future, and even when you are seeking value based on current earnings and current prices, you must always have your eye on the future since you expect a value company to at least do as well as it has done in the past and, hopefully, even better. You don't want to buy companies that are not growing, even if they look cheap at current valuations.
Short versus Long Term
Another issue about which I have been particularly concerned is that of evaluation or measurement of returns. Over the years, I have consistently emphasized a long-term investment approach. I have had to write many letters to individual investors in funds I manage who have expressed concern when we have not taken opportunistic short-term positions. Another way observers forget or ignore the long-term approach is obvious when they ask questions like: "Why have the funds experienced a poor performance over the past six months?" I must continually remind investors and commentators that this is the wrong question, and that there is no "poor" six-month performance because sometimes it is necessary to underperform in the short term to outperform in the long term. If you are buying cheap stocks, they are cheap because they are unpopular and they could remain unpopular or even become more unpopular in the short term before the market wakes up and realizes that the stock is undervalued.
One problem facing the world today is the tendency for people to think in shorter and shorter time frames. A study undertaken in the 1990s indicated that stocks in U.S. companies were held for an average of two years, whereas in the 1960s they used to be held for seven years. Some shareholders look for a quick return on their investments, and thus business executives are increasingly driven by the same mentality. This short-term philosophy is detrimental to the health of the company and the investor. Unless companies and investors take a longer-term view, growth prospects are limited and planning becomes stunted. Taking a long view of emerging markets will yield excellent results for the investor prepared to be patient and willing to apply sound and tested principles in a diligent and consistent manner.
The approach we take is not that of a three-month, six-month, or even one-year period, but at least a five-year period. Over the many years that Templeton funds have been investing, I have found that striving for short-term performance increases the risks to shareholders and actually results in poorer returns. Only by taking the long view will investment managers be able to do the best job for investors.
Taking a long view of emerging markets will yield excellent results for the investor prepared to be patient and willing to apply sound and tested principles in a diligent and consistent manner.
Bottom-Up versus Top-Down Investment Strategies
There is continuing controversy over the optimal research approach strategy to apply to emerging markets portfolio management, or for that matter any equity portfolio. On one side of the fence is the bottom-up investment school of thought, and on the other side are the top-down investors. Let me make clear from the outset that I think the arcane debate over the merits of narrowly defined investment strategies is often not very productive. Any good fund manager applies all the investment information he or she can obtain to make a good decision, and is unlikely to satisfy pure patterns that are merely convenient definitions.
Both "bottom-up" and "top-down" research have a place in successful investing.
When beginning our investment research we tend to take a "bottom-up" approach by studying individual companies wherever they may be located in the world and in whatever industry they may be in. In this sense, the "bottom-up" research focuses on the details of each and every company: what the nature of their business is, how profitable they are, what the value of their assets is, and so on. The "top-down," or macroeconomic and political information, is then used to place the "bottom-up" information in context. No company can exist in isolation, and the economic and political conditions of the country or countries in which it operates will impact profitability and long-term planning. We need to be concerned with macroeconomic and political conditions to the extent that they may hinder or help a bargain company achieve its objectives. Bottom-up investors allow the country and sector allocations of their portfolios to be determined by bottom-up stock selection while taking into consideration the wider picture.
If an investor wants to take a strictly top-down approach, he or she will first select the countries in which he or she would like to invest, through the analysis of the economic and political environment in those countries. He or she may also study industry sector characteristics to determine which sectors are best. Only after those studies will he or she begin to select individual stocks within those markets and sectors, finally looking at value as well as such considerations as liquidity and market capitalization, factors that would influence the manager's ability to enter and exit the market easily. Of course these categorizations of bottom-up and top-down investment styles are gross oversimplifications, and it is difficult to find managers who neatly fit those descriptions. The subject is fraught with dangers, simply because definitions of investment styles tend to pigeonhole particular managers and leave them with fewer options. More often than not, some managers would tend to emphasize stock selection whereas others would tend to emphasize country allocation, but both would at all times be considering macroeconomic and political situations as well as individual stock differences. It is difficult for any manager to focus on matters regarding earnings, growth, and dividends that are inherent in the evaluation of companies while ignoring such macro factors as exchange rates, interest rates, and currencies, and their impact.
Chapter Six
Researching Emerging Markets
Always Keep an Open Mind
In emerging markets, I am continually reminded of the need for independent research and careful checking of what company managers, brokers, and researchers tell me. You must also constantly be aware not only of the influences and biases that impact their thinking but also those influences and biases affecting you. These influences and biases are strongest in the places where one spends the most time and from where we obtain the most information.
For this reason, it is important to keep an open mind and read news and research reports originating from all over the world and from different sources. You must try your best to exercise a great deal of objectivity in your analysis of all the relevant data, so that local or foreign, company-specific, country-specific, or industry-specific data may be given appropriate weighting.
"A verbal promise isn't worth the paper it's written on."
—Samuel Goldwyn
There must be an ability and a willingness for the emerging markets investor to obtain information from all relevant sources, whether they're local or international sources. In other words, total reliance cannot be placed on just local information or on only foreign information.
The four best sources of information for emerging markets investors are:
1. The staff of a company in which you are considering an investment.
2. The staff of the company's competitors.
3. The audited financial statements.
4. The company's customers.
Original information is the best. If you read about it in the newspaper or a magazine, then it is probably too late (except maybe to do exactly the opposite of what the article suggests!).
In today's markets, it has become increasingly difficult, if not impossible, to obtain qualitative and truly independent external advice and opinions. The interests of market participants and advisors have become very entangled and complex, and sometimes corrupted. We need to question the real independence of research conducted by investment banks since, as the past has shown, the research of these institutions cannot be entirely independent from the banks' own trading interests and advisory mandates. The many scandals that came to light during the recent financial crisis have proven that point once again.
Market information has to be treated with great care, and there is no substitute for your own hard work in researching and understanding complex situations. Glossy reports have to be handled with care, and the fact alone that there is such a large bias toward "buy" or "strong buy" recommendations has to trigger alarm bells for the diligent investor. Furthermore, if the broker is distributing the glossy publication to hundreds of investors, then there is little chance of the company still being a bargain by the time you read about it. Therefore, these research reports can be used only for background information.
For the same reason, it is not wise to unquestioningly accept financial data such as financial ratios from external sources, as they might be using a methodology or making adjustments that skew the data to a large degree. Every country has its own ways and definitions and is accustomed to different calculation methodologies, so one has to again be careful when making comparisons.
Reading periodicals and talking with brokers can be useful, but it's best to use more time to try to get to know the company's staff and its competitors. Always use the company's audited financial statements as the primary information source. If you find what seems to be a particularly good independent and reliable information source, use it but make sure it really is reliable.
Obtaining data from advisers and analysts based in the country in which the investments are being made is also beneficial even though it could be biased. As the number of investors and the amount of capital moving into emerging markets escalate, locally obtained knowledge could allow you to find yet undiscovered gems.
Experience has shown me that total reliance on a locally based analyst or adviser is not sufficient. For wise portfolio decisions, two important perspectives are necessary: first, the global outlook and experience that come from having invested in many countries, and second, a more detailed and intimate knowledge that comes from a local presence, especially about individual companies.
It is important to incorporate both perspectives by having local and country-specific information collated, digested, and then contrasted to global data. This analytic process yields much more powerful results than research that leans heavily on one or the other source of information. Locally gathered information, for example, provides insights into the real success of a business, as measured against similar companies in the same country experiencing the same economic conditions. Global information helps you to see what international economic or political forces are gathering steam and may alter the local business environment. The end results are much more valuable insights, which must yield far better long-term investment returns.
All Walks of Life
As an alternative to formal information sources, when visiting the countries in which I invest I like to talk to working people and people who are actually functioning in the economy. The people I have met have told me about their lives and how the economic conditions are affecting them. Take, for instance, a 1995 visit to Brazil, where I found that there was a subdued feeling in the country stemming from the economic slowdown. Inflation was down substantially, but the economy itself had also been slowed. As a result, the business leaders I met with were not optimistic. From their perspective alone, I would have developed a rather downbeat economic forecast for the country.
Talking to people on the street, however, changed the picture for me. One woman said, "For the first time in many years I now know how much money I am going to make at the end of the month. In the past, we had 2,000% inflation a year, so each month I didn't know how much I was going to get paid, due to the indexation system for salaries. I had to rush to the bank and get in line to cash the check and then rush to the supermarket to buy anything I could get. Now that inflation is down to 8%, I can plan and I know how much I am going to receive and what it will buy. Of course things are expensive, and I must be careful with my expenditures, but I think things are a lot better." From statements like this, I formed much more accurate expectations of coming consumer attitudes and spending than I did from talking to the businesspeople themselves.
In addition to depending on a lot of respondents from different walks of life, it is important also to use your own associates as sounding boards and sources of information.
One of the key aspects of investing in emerging markets is the need to perform a careful historical analysis of companies. Such historical information requires in-depth research of the company's balance sheet, profit and loss figures, and other financial information going back at least five years, paying particular attention to the potential for, and the stability of, earnings growth. Normally, the further back in time the analysis goes, the better it will be.
Narrowing the Choices
There are tens of thousands of emerging market companies in which you can invest. To narrow these down to a more manageable number, you can use key financial information and ratios such as market capitalization and turnover as well as price-to-earnings, price-to-book, and debt-to-equity ratios.
Factors such as a sound balance sheet, high return on equity, decent sales growth, and good profit margins are some of the items you should look at when you undertake your analysis. Of course for this sort of analysis you need to have reliable and timely access to audited accounts, which can present problems when each country has different accounting standards. Remember, improperly or falsely presented materials can make the difference between buying and avoiding a stock.
Attention to detail is important. Henry Ford once said, "A handful of men have become very rich by paying attention to details that most others ignored." Audited accounts are necessarily the starting place for the examination of any company in the emerging markets. Audited financial statements provide the first source of information an investor has about a particular company. These statements are supposed to show an unbiased account of the company's health and business.
One of the most critical factors in judging any company, telecom, utility, industrial, or what-have-you is the quality of management. So one of the first things I look for, not surprisingly, is any sign of shady dealings or ethical misconduct. If I find or learn of even a hint or wisp of impropriety, I stop there and no longer consider the company viable for investment. Usually, local people know the rumors and can give you insights into behavior that would not be included in any research reports or other publications. By having our own team members based in most markets we invest in, we get a good insight into local dynamics and rumors, scandals, and suspicious developments. Always inquire about the probity and honesty of management before anything else. (I've learned that one the hard way.)
Always inquire about the probity and honesty of management before anything else.
Number two on my personal checklist are the skills and imagination of management. The best way to check out the management is, as I have said, to meet with the managers personally. But if you can't do that, the next best thing is their annual report, the company's website, and Internet searches. There is a wide range of information freely available on the Internet. You can also sometimes write to the investor relations person at the company if you have any questions on the basis of reading the annual report.
My frequent visits permit me to develop a personal sense of where the company is going and how management maintains the facilities and its employees. The appearance of the staff, operating environment, and physical location can all speak volumes about a company's success and priorities. Meeting with corporate representatives also helps to develop an understanding of the company, giving us more sources of information than the annual company report.
Is It a Buy or a Sell?
We use the time-tested strategy of identifying securities that stand at a low price in relation to the company's long-term value. We also evaluate how a company compares to its local, regional, and industry peers. We study quantitative as well as qualitative factors and five years of historical performance (or as many years back as we can go) and five years of projections. This is usually followed by a company visit and, if everything checks out and the price is below what we consider its fair value, we purchase the shares for the funds that we manage.
For a stock to be included in the funds I manage, it must meet at least two of the following four requirements:
1. It must be cheap relative to its price history, other stocks in the market, or other stocks in its industry internationally.
2. It must have good growth prospects, with a growth rate in excess of inflation projected for the next five years.
3. It must be cheap in relation to its net tangible assets.
4. It demonstrates a concern for minority investors by paying dividends.
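The "at least two of four" screen above can be sketched in code. This is a minimal illustration only: the metric names, thresholds, and sample figures are my own assumptions, not actual fund criteria.

```python
# Hypothetical screen: a stock qualifies if it meets at least two of the
# four requirements listed above. All fields and thresholds are
# illustrative assumptions.

def meets_requirements(stock, projected_inflation=0.05):
    checks = [
        # 1. Cheap relative to peers (here: below industry-average P/E)
        stock["pe"] < stock["industry_avg_pe"],
        # 2. Projected five-year growth in excess of inflation
        stock["projected_growth"] > projected_inflation,
        # 3. Cheap in relation to net tangible assets
        stock["price"] < stock["net_tangible_assets_per_share"],
        # 4. Pays a dividend (a sign of concern for minority investors)
        stock["dividend_yield"] > 0,
    ]
    return sum(checks) >= 2

candidate = {
    "pe": 8.0, "industry_avg_pe": 14.0,
    "projected_growth": 0.03,   # below the assumed 5% inflation
    "price": 12.0, "net_tangible_assets_per_share": 10.0,
    "dividend_yield": 0.02,
}
print(meets_requirements(candidate))  # True: meets criteria 1 and 4
```

A stricter fund might require all four tests to pass; the two-of-four rule trades rigor for a wider field of candidates.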
Whenever you can buy a large amount of future earnings power for a low price, you have made a good investment. But you must remember to keep your estimates up-to-date with frequent reviews.
When the market price for a particular stock rises above the intrinsic value to an unreasonable extent, the stock can be sold if you can find a stock that is cheaper. Conversely, when the market price falls below the intrinsic value for a particular stock to an unreasonable extent, that stock can be placed on a list of stocks that can be purchased. I liked it when one of my colleagues compared the process to a ladder—when one stock reaches the top, it gets knocked off, and a new one is added to the bottom rung to replace it.
What's Its Worth?
The appraisal of value is complex and subject to numerous uncertainties. Some of these factors are management ability, growth trends, government control, assets per share, average past market prices for the shares, dividends, current earnings, average earnings in previous years, and estimates of future earnings.
With regard to the demands on your analytical skills in evaluating emerging market companies, most daunting are not only the varying accounting standards used in each country but the varying taxation regimes that affect how accounting standards are applied and as a result how accounting items are treated. It is essential, therefore, to ensure that you understand what methods the company's management and accountants are using. Accounting and taxation policies are also not static, so it is important to be aware of changes.
After decades of investing in emerging market companies, I've learned to not only examine profit and loss (P&L) statements and balance sheets, but also to study such issues as market share and technological improvements. But by far the most important single consideration in judging whether a company is over- or undervalued in the marketplace is its ability to grow earnings. A growth in earnings can materialize more easily if the assets that produce the products or services are undervalued and managed by competent people. When you're buying stock in a company, what are you really buying? All sorts of intangibles, to be sure, like goodwill, management skill, a brand name, and the rest. But at the end of the day, what you're really buying is hard assets. Assets, of course, don't always have to be things. They can even be receivables—money owed to the company. But whatever those assets are, it's important to know what they are, because the easiest way to determine the legitimate value of a share of stock is to take the value of all those assets and divide them by the number of shares outstanding. That combined with good management can result in a winning company.
If the net asset value (NAV) divided by the number of shares gives you a dollar figure higher than the share price and the company has competent managers, then you could consider it an undervalued stock.
In attempting to determine the NAV of a company, it is necessary to first look to the accountants hired by the company who produce the figures we work with. We use those accounts to make extrapolations of the value of these assets in comparison to other companies around the world.
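The NAV test described above is simple arithmetic; here is a minimal sketch with invented balance-sheet figures:

```python
# Net-asset-value test: divide NAV by shares outstanding and compare
# with the market price. All figures are hypothetical.

def nav_per_share(total_assets, total_liabilities, shares_outstanding):
    return (total_assets - total_liabilities) / shares_outstanding

price = 7.50
navps = nav_per_share(total_assets=1_200_000_000,
                      total_liabilities=400_000_000,
                      shares_outstanding=80_000_000)
print(navps)          # 10.0 per share
print(navps > price)  # True: a candidate undervalued stock,
                      # provided management is competent
```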
It should also be remembered that different yardsticks are more significant for some companies than for others. In most cases, the single greatest yardstick is how high the price is in relation to earnings; but of course it is more important to compare price not with present earnings, but with future anticipated earnings. For example, when investigating telecom companies we use a factor such as income per telephone line or income per subscriber as one of a number of means of judging the overall efficiency of the system. In the banking sector, nonperforming loans would be an important indicator, while price to net premiums would be a useful yardstick to compare insurance companies.
When analyzing accounts, keep an eye on the following areas:
* In manufacturing and sales organizations, monitor inventory, accounts receivable, and order backlog trends. These are the strongest indicators of problems and are much more closely related to stock returns than reported earnings.
* Profit margin trends are important.
* There are many ways to appraise financial statements, but one of the most common is the use of ratio analysis, whereby the various elements in financial statements are compared. When looking at the value of a firm, ratios such as price to earnings, dividend yield, return on equity, and price to book value are used. When assessing profitability, ratios such as profit margins, return on equity, and return on assets are used. When assessing safety or balance sheet strength, ratios such as debt to equity and the current ratio can be meaningful indicators.
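The ratio analysis in the last point can be sketched from a simplified set of accounts. The figures and field names below are hypothetical; real statements need the country-specific adjustments discussed earlier.

```python
# Illustrative ratio analysis from invented, simplified accounts.
accounts = {
    "net_income": 50.0, "revenue": 400.0, "equity": 250.0,
    "total_assets": 600.0, "total_debt": 150.0,
    "current_assets": 180.0, "current_liabilities": 90.0,
    "price": 20.0, "eps": 2.5, "dividend_per_share": 0.8,
    "book_value_per_share": 12.5,
}

ratios = {
    # Value
    "price_to_earnings": accounts["price"] / accounts["eps"],
    "dividend_yield": accounts["dividend_per_share"] / accounts["price"],
    "price_to_book": accounts["price"] / accounts["book_value_per_share"],
    # Profitability
    "profit_margin": accounts["net_income"] / accounts["revenue"],
    "return_on_equity": accounts["net_income"] / accounts["equity"],
    "return_on_assets": accounts["net_income"] / accounts["total_assets"],
    # Safety / balance-sheet strength
    "debt_to_equity": accounts["total_debt"] / accounts["equity"],
    "current_ratio": accounts["current_assets"] / accounts["current_liabilities"],
}

for name, value in ratios.items():
    print(f"{name}: {value:.2f}")
```

Each ratio only becomes meaningful when compared against the company's own history and against local, regional, and industry peers.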
The bottom line is that all information is useful; none should be ignored, but none should be the sole platform upon which to base an investment decision. Whether your company information is derived from local or global sources, wherever it comes from—brokers, the media, business executives, the people in the street, your associates—consider it all, and then make the most informed decision possible. Only after having lined up all of this company information do we reintroduce macroeconomic factors to see if they are favorable in relation to the company's targets. If yes, then it's a buy!
Field Note: Czech Republic
November 2011
During 2011 with all the emphasis on Western Europe's debt problems in the so-called "PIIGS" countries (Portugal, Italy, Ireland, Greece, and Spain), Eastern Europe and Russia were pretty much overlooked.
Of course, that is not to say that a number of countries in Eastern Europe did not have debt problems of their own in the past. Five years before in Hungary, for example, banks made the mistake of offering mortgage loans in Swiss Francs and Japanese Yen because interest rates in those currencies were so much lower than those for the Hungarian Forint and thus were attractive to their clients.
However, when the Hungarian Forint went from 141 per Swiss Franc in July 2008 to 253 by November 2011, a devaluation of 79%, many Hungarian mortgage holders got into trouble.
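The devaluation figure quoted above follows directly from the two exchange rates:

```python
# Forint devaluation against the Swiss Franc, from the rates cited above.
rate_2008 = 141.0   # HUF per CHF, July 2008
rate_2011 = 253.0   # HUF per CHF, November 2011

devaluation = (rate_2011 - rate_2008) / rate_2008
print(f"{devaluation:.0%}")   # 79%
```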
During my meetings in Prague, however, I found that businesses my team and I visited remained relatively unscathed by the issues in other parts of Europe. Companies visited included:
Banking: Operating in a healthy economy, the Prague banks were in fairly good shape. I was satisfied with the balance sheet of one of the banks I visited and was happy to hear that it was considering dividend payouts. Some difficulties were, however, expected in the Eurozone due to its importance as an export destination for the Czech Republic as well as an increase in value-added tax (VAT).
Electric Utilities: The largest Czech power company had significant nuclear and coal-fired generation capacities that made it one of the most profitable electricity players in the region. The power company has also taken advantage of the single, liberalized European market, where there were no barriers in exporting or importing electricity to and from neighboring countries, especially Germany. As a result, prices in the Czech Republic were able to rise up to the level charged by their counterparts in Western Europe thus benefiting the Czech producer.
Gaming: One interesting meeting was with a leading fixed-odds betting operator with an extensive branch network in Central and Eastern Europe. The management was of the opinion that gambling was a recession-resistant business since people would still gamble in tough times—similarly to drinking and smoking, I presume. The company also voiced its interest in acquisitions, possibly in Greece.
All in all, on this trip I found that things were quite normal in some Prague companies despite all the gloomy news floating around about Europe.
Chapter Seven
The Reality of Risk
And Why Not to Fear It
Modern portfolio theory gives a technical definition of "risk" that is very different from what we would normally think of as risk. It defines risk as volatility, calculated as the variance (or standard deviation) of a portfolio's historical returns. Therefore, a portfolio that is yielding excellent returns to an investor may have a "high-risk" profile if those returns have been volatile over the years.
Investing in emerging markets is not, they say, for the faint of heart. But then again, as the U.S. subprime crisis showed us, neither is investing in developed markets.
Any progress requires risk, since progress is made by moving into the unknown or the unexpected, with the possibility of making mistakes. To make progress we must be able to adapt and diversify so that any one mistake will not destroy our entire portfolio.
If we've learned anything in the past few years, it's that emerging markets are not as risky, in the traditional sense, as they were 10 or 20 years ago. If anything, some of the more mature emerging markets could be said to be even safer than the so-called developed markets, since the growth prospects are so much better. This clearly seems to be the case when you compare emerging market giants such as China and Brazil with developed economies such as Italy and Spain. However, risk in the sense of volatility still exists and because of high velocity trading, derivatives, and more efficient global trading networks, that volatility has actually increased in not only emerging markets but also in developed markets.
One solution to minimize portfolio volatility and thereby the risk (i.e., making your returns as close to constant as possible) is to invest in countries with markets that have a low relation or correlation with each other. By investing in stocks of countries that have low correlation coefficients with each other, the volatility of your global portfolio is reduced, and, by extension, the risk to your investment.
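The diversification effect described above can be illustrated with the standard two-asset portfolio-variance formula. The volatilities, weights, and correlations below are invented for illustration:

```python
import math

# Two-market portfolio volatility:
#   var = w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*corr*s1*s2
def portfolio_volatility(w1, sigma1, w2, sigma2, correlation):
    variance = (w1**2 * sigma1**2 + w2**2 * sigma2**2
                + 2 * w1 * w2 * correlation * sigma1 * sigma2)
    return math.sqrt(variance)

# Two markets, each with 25% annual volatility, equally weighted.
high = portfolio_volatility(0.5, 0.25, 0.5, 0.25, correlation=0.9)
low = portfolio_volatility(0.5, 0.25, 0.5, 0.25, correlation=0.1)
print(f"{high:.1%}")   # 24.4% -- highly correlated markets
print(f"{low:.1%}")    # 18.5% -- low correlation reduces portfolio risk
```

With identical individual volatilities, simply pairing markets that move independently cuts the portfolio's swings substantially.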
Of course, the volatility definition of risk and the relatively simple solution for reducing volatility through diversification does not explain the entire picture of risk in emerging markets investments.
The Big Picture
Over the years, I've become all too familiar with the significant risks that investors face in emerging markets. In no particular order, they include:
* Political risk: The possibility that revolutions or political turmoil in a country could significantly impact the value of an investment.
* Currency risk: The impact on an investment of fluctuations in a national currency.
* Company risk: Any risk arising from exposure to a particular company, such as the lack of information, a change in the company's management or ownership, a change in the health of a business, depression in a particular industrial sector, or a sudden price panic.
* Broker risk: Risk of unscrupulous or dishonest brokers who use customer orders to "front run," that is, to buy or sell ahead of their clients to take advantage of the market price differences.
* Settlement risk: Problems experienced in trying to settle transactions, and in obtaining, registering, and paying for securities.
* Custodial risk: Exposure to local safekeeping agents (popularly known as custodians) who may not provide adequate security for clients' shares.
* Operational risk: Risk arising from inadequate auditing and bookkeeping standards.
* Market risk: Exposure to extreme fluctuations in market values and the lack of liquidity.
I've disclosed all the major risks I can think of. So why, you might well ask after reviewing that list, should anyone bother with these messy, risky, strange situations? Because those risks exist in all markets, emerging and developed, and diversification internationally reduces the impact of such risks.
If you want to hit the financial home runs of the future, you'll need to pay attention to what goes on not only in your home country but everywhere in the world. In 2011, emerging market companies accounted for over 30% of the world's market capitalization, according to the World Federation of Exchanges. Thus, even the most prudent investors in search of diversification and asset allocation should consider investing one-third of their total portfolios in emerging markets, whether that investment be made in individual stocks or in an ever-growing list of emerging markets mutual funds.
As I have pointed out, historical stock market performances in emerging markets also back up the view that emerging markets have outperformed developed markets.
Before deciding to invest in any region, country, or company, all possible risks must be considered and one of the first to examine is the market liquidity, the investor's ability to buy and sell a stock quickly and efficiently. If a market or stock turnover is low, of course the ability to buy or sell is hindered, which is a disadvantage. However, in illiquid markets the difference between the buy and sell prices or the "spread" can be wide, thus opening up an opportunity to purchase at bargain prices. Therefore, investing in a low-liquidity, low-activity market can generate phenomenal returns because the purchase of just a few stocks by a few investors can drive the prices of those shares up dramatically. Of course, following the same pattern, a major sell-off in an illiquid market can cause prices to drop catastrophically.
Journalist Christopher Fildes once noted: "An emerging market is a market from which you cannot emerge in an emergency!" This is true of illiquid markets, but such markets should not be ignored since they could contain some excellent investments.
If there is anything we've learned in the past few years, it is that developed markets can be as volatile as emerging ones. During crisis times, it isn't uncommon to see a 5% swing in some markets in one day. But don't fly off to the United States for a safer ride: The stock market there also had its fair share of ups and downs, as was evident during the subprime crisis.
How to avoid being sideswiped by these risks?
Hedging Your Bets
Let's take another look at that map of the world. If you could overlay a transparent sheet with a graph of market lows and highs over time, you would see that with few exceptions, regional boundaries still very much define emerging markets.
In an age when jet travel and electronic commerce would seem to dominate an increasingly global economy, it's surprising how strong local and regional boundaries are. Regional economies in the less industrialized, less wired parts of the world are obviously knit together by increasingly advanced communications and transportation links. Therefore, changes in one country can impact an entire region.
For better or for worse, when the Mexican Peso crisis hit in 1994, the entire Latin American region suffered from what global investors quickly dubbed the tequila effect. In the same vein, when international currency traders began to seize on weaknesses in the Association of South East Asian Nations (ASEAN) countries—Indonesia, Malaysia, and Thailand primarily—the ripple effect that coursed through the region was called the Asian currency contagion and the Asian financial flu.
In both regions, certain countries (Hong Kong in Asia, Chile in Latin America) stood to some degree above the fray. But all global investors contemplating where to put whatever funds are at their disposal should first ponder the impact of regional changes and input that information in their research calculations. It is important to remember, however, that these regional movements where all countries in one region move together in one direction are temporary, and after a crisis, each country—each company—adopts its own behavior and stock price changes, which can be highly uncorrelated.
So, what should you do when there is a market crisis and it seems that all the stock markets are on fire?
Sit Tight, Don't Worry, Be Happy
If you have done your homework and selected a well-managed mutual fund or put together a well-diversified portfolio of undervalued stocks, then the best thing to do is: "Sit tight, don't worry, be happy." As long as the fundamentals are on the money, the companies will bounce back again, usually stronger than ever before. Please remember: Your best protection is diversification, and patience is more than just its own reward.
If someone tells me he wants to become rich in less than one year in emerging markets, I tell him to take his money elsewhere. The swings in stock markets make market timing difficult and, if your timing is terrible, it can be very costly. The only way to consistently stay ahead of the game is to adopt a long-term view and, if appropriate, with a strong contrarian spin.
When we look at the balance sheet of a company, we might be willing to pay a high P/E ratio if we think that the company will achieve the high growth needed to obtain what might become a low P/E ratio in five years' time.
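The arithmetic behind that point is worth making explicit: a high P/E today can imply a low P/E in five years if earnings grow fast enough. The numbers below are invented for illustration:

```python
# If earnings grow at a constant rate, the P/E on today's price shrinks
# by the cumulative growth factor.
pe_today = 30.0
growth = 0.25   # assumed 25% annual earnings growth

pe_in_five_years = pe_today / (1 + growth) ** 5
print(round(pe_in_five_years, 1))   # 9.8
```

A stock trading at 30 times current earnings would trade at under 10 times its year-five earnings, which is why the growth projection matters as much as the headline multiple.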
The single most important lesson I've learned is that long-term planning pays. The reason for this is astoundingly simple. All markets are fundamentally cyclical. Like people—because they're composed of the aggregate decisions people make—they're given to extended bouts of irrational fear and panic and equally irrational exuberance.
The single most important lesson I've learned is that long-term planning pays.
Like adolescents, we can get a little bit carried away at times. But—and I can't repeat this enough—it is by riding these wild mood swings like a surfer taking a wave that the investor can make money. An entirely rational market, after all, is a market that would barely budge at all.
The time of maximum pessimism is the best time to buy, and the time of maximum optimism is the best time to sell. That seems counterintuitive, right? Well, it is, and therefore it requires a positively stoic ability to let your mind rule over your passions. This brings us unerringly to: If you can see the light at the end of the tunnel, it's probably too late to buy (or sell).
Chapter Eight
Timing Market Factors: Currencies
Why a Crisis Can Be the Best Time to Buy
Timing the market successfully on a consistent basis is impossible. However, there are conditions under which it is the best time to buy and conditions under which it is the best time to sell. The guide to such conditions can be found in company valuations. But those valuations are impacted by currency changes. Contagion from the financial crisis that swept through the Asian markets like wildfire helped to strike panic in the hearts of even the most stalwart and savvy investors. On the face of it, that's not too hard to understand. After all, when a foreign currency declines in value, any stock that you own denominated in that currency is going to decline in value relative to one of the global reserve currencies, such as the U.S. Dollar, unless there is a local currency price increase that compensates for the local currency's devaluation.
So what to do during a currency crisis? Well, the first thing to realize is that despite everyone ranting about destitution and impoverishment, a devalued currency is not necessarily a catastrophe for any economy. In fact, it can be the engine for the next cycle of growth. There are winners and losers in such conditions. A cheap currency can mean that exporters will find it easier to export their goods, while importers will have difficulty.
Understanding Foreign Exchange
When examining foreign exchange (forex) and the expected value of a currency relative to other currencies, it is important to understand not only how forex traders behave but also some fundamentals. One of the most important is purchasing power parity (PPP), a way of comparing inflation in one country to inflation in another. If inflation in Country A is higher than inflation in Country B, then Country A's currency would be expected to weaken against Country B's. It's best to construct a line chart showing the trend of this relationship by dividing the monthly inflation rate in, say, the United States by the inflation rate in, say, Thailand. If inflation in Thailand is higher than in the United States and we track the monthly PPP line over time, the moving line will help us project whether the currency is over- or undervalued and whether the Thai Baht is expected to weaken or strengthen against the U.S. Dollar, as illustrated in the accompanying chart.
Purchasing Power Parity Chart of the Thai Baht
You can tell at a glance whether a particular currency is strengthening or weakening against the U.S. Dollar, which helps keep currency risk in perspective. The reason that the PPP index works so well to determine the strength or weakness of a given currency is that one of the key indicators of weakness in any economy is, of course, inflation.
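For readers who like to see the arithmetic, the PPP idea can be sketched in a few lines of code. This is only an illustration of the mechanics: the function name, the starting index value, and the inflation figures are my own assumptions, not data from the chart.

```python
# Illustrative sketch of a PPP index: compound the monthly inflation
# differential between two countries. A rising index suggests the
# higher-inflation currency should weaken over time.

def ppp_index(base_inflation, foreign_inflation, start=100.0):
    """Build a relative-price index from two lists of monthly inflation
    rates (in percent). If foreign (e.g., Thai) inflation persistently
    exceeds base (e.g., U.S.) inflation, the index drifts upward."""
    assert len(base_inflation) == len(foreign_inflation)
    index = [start]
    for base, foreign in zip(base_inflation, foreign_inflation):
        # Compound the monthly differential onto the running index.
        index.append(index[-1] * (1 + foreign / 100) / (1 + base / 100))
    return index

# Hypothetical example: Thai inflation runs 0.8%/month vs. 0.3% in the U.S.
series = ppp_index([0.3] * 12, [0.8] * 12)
print(round(series[-1], 2))  # drifts above 100 -> Baht expected to weaken
```

Tracking such a line month by month is what lets you project, at a glance, whether a currency looks over- or undervalued.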
Despite the bad rap that currency speculators—they prefer to be called forex traders—get in the press, a wise global investor will always do well to try to think like a foreign exchange trader, if for no better reason than to anticipate their next moves.
Global currency traders spend a great deal of time probing for weaknesses in a nation's defenses. High inflation rates are a big sign of weakness, and a form of bait to the wily forex trader.
The second thing that forex traders tend to look at is current account balances or imbalances. A country's current account factors in all of its imports, exports, payments out, and payments in. A high current account deficit could raise concerns. Some countries take into consideration so-called invisibles, as opposed to just hard trade numbers, to correctly assess, for example, the value of services as opposed to simply trade in hard goods.
The Upside of Political Uncertainty
The third thing forex traders look at is the political environment. If something looks odd to them on the political side, they'll dump a currency without remorse. In one country we found an extremely unpopular prime minister who only made matters worse by delaying and showing uncertainty in the face of a rapidly devaluing currency when decisive and fast decisions were needed.
In retrospect, it wasn't so hard to figure out why he and his cabinet were all so unconcerned in the face of a currency disaster: Personally, most of them were worth billions, not millions. And that's not in their local currency but in U.S. greenbacks. This military man turned politician failed to purge shady power brokers from his cabinet, and couldn't keep from meddling in the positive policies of the few brave-hearted bureaucrats truly committed to financial reform. In short, this guy was a walking disaster, and the very personification of a major risk of global investing: political risk. However, once the crisis hit, the Thai people got out a big broom and swept these guys out of power, which is one of the reasons these periodic crises are not such a bad thing: They promote much-needed change.
Political uncertainty—like any other form of uncertainty—can be your green light to move into a market.
Overcoming Your Fears and Moving On
Uncertainty depresses stock prices. If you have faith in your own analysis and research, an uncertain atmosphere can be just the break that you've been looking for to pick up wonderful, inexpensive stocks that otherwise would be too expensive to even consider.
When many markets are crashing, what do you do? Where do you go? Is there nowhere to run? Nowhere to hide? Does it make sense to race for the exits? In a word: No.
What I try to do is not behave like a sheep but like a shark. The great thing about market crashes is that suddenly all sorts of stocks we had been looking at but rejecting as too expensive become affordable, for the first time in years.
At one particularly dire point in one of the market crises, one stock market had fallen by over 70% from its peak! Why? You know perfectly well why. Because all the smart money was groaning that it would take at least five years for recovery to take place. Of course, that was not the case, and within one year the market shot up to its previous high; instead of rushing for the exits, everyone was rushing to enter the market. There are many such cases we can point to. After the Mexican tequila crisis of 1994, it took only two years for the Mexican market to bounce back.
Often market commentators tend to try to draw parallels between one crisis and another, but each one is different and it is dangerous to make quick judgments since each country, each company, and each sector is different, and most should be individually considered based on their own merits. For example, in early 2012, many commentators were predicting a crash in the Chinese property market with dire consequences for banks and the economy as a whole. Unfortunately they were looking at the Chinese property market through the lens of the subprime property crisis in the United States. Such a comparison was faulty because the economic conditions and structure of both markets are markedly different. Looking at just one variable, derivatives, the U.S. property market was heavily influenced by credit default swaps (CDSs), which had no influence on the Chinese property market.
In deciding whether to buy given stocks, one local analyst observed that most investors and even many analysts and brokers were often acting more on raw emotion than anything else. As a result, the real talent is not necessarily the ability to identify good companies, but the ability to anticipate which stock the masses of overly emotional investors will rush to next—making sure that you are ready and aware of these mood changes.
Don't forget that reading local papers and watching local newscasts can provide valuable insights unavailable anywhere else into the mentality of local markets.
The number of unreliable predictions about the Mexican meltdown of 1994 and the tequila effect that spoiled Latin American markets for a few years was remarkable. But even more remarkable—though much less remarked upon—is the salient fact that you never, ever saw a front-page headline that blared: "Mexican Markets Stage Fast Recovery."
The problem with recoveries is that they don't make good copy. Market meltdowns make headlines. Market recoveries make money.
Market meltdowns make headlines. Market recoveries make money.
The thing that I find interesting about market commentators is that, much like those they seek to advise, they tend to go to extremes: "It's a disaster!" "It's a catastrophe!" "There's blood in the streets!" Now, if you can, take away all those exclamation points from all those opinions. Don't you feel better? More calm? More collected?
Okay, take a deep breath, count to 10—and start thinking about going on a shopping spree. A classic quote often attributed to the first Baron Rothschild is that the best time to buy is when there's "blood in the streets." The second part of that bit of advice, I was told, is "even when the blood is your own." This obviously makes the Baron even more interesting than I thought he was.
I would personally amend that to take the subjective factor into account: The best time to buy is when everyone else is screaming that there's blood in the streets. Because I'm willing to bet that no matter what happens, if you take a look down into that gutter, all you're likely to see is a gutter full of wine.
Field Note: South Korea
August 2011
The South Korean economy and stock market had been doing fairly well in recent years. Although the GDP growth fell because of the subprime crisis, there was a dramatic recovery in 2010. Like other emerging market currencies, the Korean Won was steadily strengthening against the U.S. Dollar.
My meetings in Seoul reflected the changing business environment in South Korea with a move into higher-value-added products and advanced technology.
Home shopping was also on the rise in South Korea, and the field was getting crowded with many newcomers. As a result, competition for popular cable TV channels was heating up among home shopping companies as well as new multichannel providers.
Other strong industries in South Korea include:
Shipbuilding: South Korea had one of the world's leading shipbuilding industries. Large shipbuilders had been benefiting from the boom in offshore oil drilling production rigs as well as sophisticated liquid natural gas carriers.
Construction: In addition to shipbuilders, South Korean construction and engineering firms were active globally. One major firm we met had a diversified business portfolio ranging from housing to overseas chemical plants, architectural services, and civil engineering for roads, bridges, and other such facilities.
Real Estate: The Seoul housing market had not been doing as well as in other cities and provincial areas in recent years. In 2010, however, construction companies began providing discounts on unsold apartments, resulting in greater sales. The South Korean government had also been trying to support the property market by supplying low-end/mid-end apartments to increase supply, reducing or removing tax on owners of multiple homes, and deregulating the reconstruction/redevelopment market to increase the number of new apartments.
Internet Business: South Korea's Internet business was alive and well. Although the online gaming market faced lower growth rates than in the past due to stricter government regulations, the search portals were doing well.
South Korea's strength in such a diverse range of sectors left us confident of the country's bright future.
Chapter Nine
It's Called Volatility
What Goes Up Comes Down, and What Goes Down Comes Back Up
The essence of Hong Kong is the refinement of risk taking. In the socialist lexicon, assuming even a reasonable level of risk is called speculation, and is regarded as the root of all evil. But to the committed free marketeer, speculation is a morally neutral matter of setting your sights on a target in the near future, running a real risk of being wrong, but being more confident of being right.
The most lasting lesson I learned during nearly 30 years of living in Hong Kong is the ultimate value of risk taking. The Hong Kong stock market makes an ideal case in point. It's famously volatile, and requires nerves of steel to ride out its hair-raising roller coasters. But from those dark days of 30 years ago, when a good number of Hong Kong fortunes were made by those willing to see a bright future even before seeing the light at the end of the tunnel, I learned that it usually pays to take the long view. In Hong Kong I also learned that all markets are inherently elastic, and that while what goes up always goes down, the converse is equally true.
A Research Challenge
One day in 1973, I got a call from the grandson of the famous Chinese comprador who made millions by being the middleman between Chinese businessmen and the British colonial business establishment. When the British Colonial Office ruled Hong Kong, Chinese people were not permitted to live high on Hong Kong Island's Peak, but this businessman was allowed to do so not only because of the valuable services he rendered to the British but also because of the great wealth he had amassed. I was surprised when he said: "I want to know what's going on in the stock market. I want to know what's going to go up and what's going to go down. Can you give me some research?" I would, I could, and I did.
I approached my new commission as I would have any other market research assignment: by hitting the books. Still a perennial student at heart, I started out studying the technical analysis of price movements using the chartist methodology. The deeper I delved into this arcane art, the more intrigued I became by its possible predictive potential. Of course, this was before I was introduced to value investing and thus my approach now would be considered rather naïve. In any case, that year, 1973, just happened to mark the peak of one of the longest, most aggressive bull runs in postwar Hong Kong history. So, as a fledgling chartist, I was a bit taken aback to discern one of the best-known technical price formations, the famous head and shoulders formation, taking place right there in Hong Kong.
A head and shoulders formation occurs when aggregate stock prices form a top, correct, then run up to a higher top, then retreat and finally come up to a third peak, which is not quite as high as the previous highest top. The line on a graph assumes the shape of a shoulder and then a head, and then falls down to the other shoulder. If you see a head and shoulders formation taking shape on a stock market chart, the chartists say that you should beware and it's probably time to exit and exit fast.
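The shape described above can be checked mechanically: find the successive peaks and see whether the middle one tops two roughly equal shoulders. The sketch below is a toy illustration of that logic, not a trading system; the function names, tolerance, and price series are all my own assumptions.

```python
# Toy sketch of the head-and-shoulders pattern: three successive peaks
# where the middle peak (the head) tops both shoulders.

def local_peaks(prices):
    """Indices where a price is higher than both of its neighbors."""
    return [i for i in range(1, len(prices) - 1)
            if prices[i] > prices[i - 1] and prices[i] > prices[i + 1]]

def looks_like_head_and_shoulders(prices, tolerance=0.05):
    """True if the last three peaks form shoulder < head > shoulder,
    with the two shoulders within `tolerance` of each other."""
    peaks = [prices[i] for i in local_peaks(prices)]
    if len(peaks) < 3:
        return False
    left, head, right = peaks[-3:]
    shoulders_match = abs(left - right) / max(left, right) <= tolerance
    return head > left and head > right and shoulders_match

# Hypothetical series: shoulder at 100, head at 110, shoulder at 101.
prices = [90, 100, 95, 110, 96, 101, 92]
print(looks_like_head_and_shoulders(prices))  # True
```

When a check like this fires on a market index, the chartists' advice is exactly as stated above: beware, and think about heading for the exit.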
A Painful Lesson
Armed with this ominous information, I filled the Chinese client in on my forebodings. Unfortunately, I neglected to follow my own advice. One too-hot-to-handle stock that an old friend and colleague was particularly enthralled by was called Mosbert Holdings.
Mosbert was a sprawling, ill-defined Malaysian holding company that had made a mysterious entry into Hong Kong, and despite the fact that nobody could figure out where the money came from, had been noisily, busily buying up everything in sight: companies, buildings—whatever it could get its hands on.
"I bought Mosbert at eight and now it's down to three and a half, half of what I bought it for," my friend excitedly explained, in a somewhat muddled version of contrarian thinking. "It's a fabulous buying opportunity."
Well, maybe yes, and maybe no. Before taking the plunge, I decided to do some minimum due diligence. I picked up the phone and gave the folks at Mosbert a call. The fellow I spoke to was remarkably unfriendly, and to put it mildly, the last thing from an open book.
"I can't give out any information over the phone," he said brusquely. "And nothing's available in print." As if to drive home that point a little deeper, he hung up on me.
Needless to say, I found this all a bit disconcerting. But my friend repeatedly assured me that Mosbert was going to be the next great Hong Kong financial miracle, not to mention, at present prices, the bargain of the century. Against my better judgment, I said, "Let's go for it."
And so we did. Suffice it to say, the slew of cheaply printed Mosbert shares were not worth the paper they were printed on. Not only does a high-flying market cover all sins, it covers all scams. Mosbert Holdings turned out to be one of the biggest scams to emerge from a scandal-ridden Hong Kong stock market, which within weeks was collapsing all around us.
Mosbert, of course, went belly-up. Looking back in anger, we had been complete dolts. We should have wondered why the stock was depressed. The people on the street who had been driving Mosbert stock down knew a few things that we didn't: that Mosbert was not entirely on the up-and-up. We should have held on to our wallets when this publicly listed company refused to provide any information to us about its operations and finances. It was a good lesson on why fundamental value research is so important before making any investment decisions.
And with Mosbert on the ropes, the great Hong Kong bull market had buckled to its knees. From a peak approaching 300 (to put matters in perspective, it previously had hovered as high as 2,500), the Hang Seng index dropped like a stone to less than 100. At that point, it settled into a gradual, steady decline. What was the lesson to be learned from all this? Was it to stay out of volatile markets like Hong Kong? No. As far as I was concerned, the lesson to be learned from this disaster was: What goes down usually goes back up, if you're willing to be patient and don't hit the panic button.
What goes down usually goes back up, if you're willing to be patient and don't hit the panic button.
Don't Forget to Use What You Learn
If you don't follow what you learn, and you don't act on the information that you have gathered, and if you give in too readily to what Alan Greenspan so memorably dubbed the "irrational exuberance" of a runaway bull market, you might end up diving off your own head and shoulders.
At times of distress, there's a tendency to live too much in the moment. Emotions take place in the moment; rationality looks forward and backward in time. Panic and fear—as well as greed—bring sentiment into the foreground and make rationality take an emotional backseat. But examining a subject in the light of history generally helps you take the long view.
Chapter Ten
The Importance of Being Contrary
Don't Follow the Crowd
I can hear you asking: How on earth, when all the smart money was running scared out of Thailand at the height of Asian contagion, could it have possibly made sense to buy there?
For the right answer to a good question, let's take another look at a pet phrase from Sir John Templeton: "To buy when others are despondently selling and to sell when others are greedily buying requires the greatest fortitude but pays the greatest rewards."
"If you buy the same securities as other people, you'll get the same results as other people."
—Sir John Templeton
Now, the emphasis here is on the value placed on an asset by sentiment, as opposed to pure, dispassionate logic. Despondency and greed are emotions. They aren't about thinking, but feeling. You may feel in your gut that a stock is going to go up, but you're better off testing that hunch against the hard-core reality of a corporate balance sheet and such things as the competitive environment.
Of course, in the short term, the smart money tends to be right: When the Thai Baht drops by 50% in two months, you've got a full-fledged disaster on your hands. But if you sell then, history has shown that your decision would have been based on sentiment and emotion, not on a rational assessment of long-term fundamentals.
Of the decisions made by the herd, a certain percentage will be based on reason, but a far greater proportion will be based on emotion. So turn the picture upside down: You earn money by discounting a market's emotional quotient.
Paper versus Realized Loss
Many years ago, my brother's wife bought shares in one of our global emerging markets funds. Her market timing was bad: she purchased the shares at the height of the 1993 emerging markets boom, because everyone else was buying at that time and the news was all positive.
That boom, unfortunately, was quickly followed by the great 1994 emerging markets bust, brought on by the Mexican Peso crisis, which led to the tequila effect, leading to a crash in Latin American markets.
Now, when my sister-in-law saw her monthly fund financial statement she was shocked to find that since the net asset value of her shares had declined, she had "lost" money.
However, even though the assets had declined in value, she hadn't actually lost anything unless for some reason she was compelled to sell at that point. In fact, this was the time of greatest opportunity for her, as well as for me, because for every dollar she invested in our fund during the bust, she was getting more assets for her money.
But for a while there, since she hadn't yet caught on to that fact, you could have cut the atmosphere in her and my brother's house with a knife on the night I stopped by for dinner.
My sister-in-law, being the fine woman she is, didn't hide what she thought under a bushel basket. "It went down," she moaned, the first time I saw her after the big drop, in hushed tones, as if the neighbors might hear and think the less of us for our disgrace.
I begged her to have faith, and to take advantage (if she still had the stomach for it) of the marvelous discounts at which she could now buy more shares in my falling fund. She looked at me as if I'd lost my marbles. How could any sane person, she asked, buy into a falling—she might even have said failing—fund?
Because, I explained, "If you buy low now, that gives you an opportunity to sell high later."
This is obvious in its own way, but it's astonishing how many investors do precisely the opposite. Suffice it to say, my sister-in-law bought high and sold low. This is not the way to buy and sell mutual funds or anything else, for that matter.
Buy stocks whose prices are going down, not up.
If a market is down 20% or more from a recent peak and value can be seen, it's a good idea to start buying.
Sounds crazy, right? Wrong.
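The rule of thumb above is easy to make concrete. Here is a minimal sketch of the arithmetic, with illustrative index levels of my own invention; note that the rule also requires that value can be seen, a judgment this snippet does not attempt to model.

```python
# Minimal sketch of the 20%-off-the-peak rule: flag a market as a
# candidate for buying once it has fallen 20% or more from its peak.

def drawdown_from_peak(levels):
    """Fractional decline of the latest level from the running peak."""
    peak = max(levels)
    return (peak - levels[-1]) / peak

def buy_signal(levels, threshold=0.20):
    """True once the drawdown reaches the threshold (default 20%)."""
    return drawdown_from_peak(levels) >= threshold

# Hypothetical index levels: peak of 1,100, now at 860.
index_levels = [1000, 1050, 1100, 980, 860]
print(round(drawdown_from_peak(index_levels), 2))  # 0.22
print(buy_signal(index_levels))  # True
```

A 22% drawdown trips the signal; the valuation work still has to be done by hand.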
Keeping a Cool Head
A case in point: In 1991, one fund company found that Japanese investors were excited about investing in Indonesia so they launched an Indonesian fund for sale to investors in Japan. The launch of the fund coincided with the then peak of the Indonesian market, just prior to a major market meltdown. Of course, no one knew that at the time. However, the fund manager smelled a bubble aching to burst because stock prices were sailing skyward like helium balloons. At the time, because I was in Japan, I asked a Japanese fund manager his strategy for picking stocks. "I select stocks," he solemnly said, "that are going up." This is, of course, a brilliant strategy in a bull market.
But if you get in at the start of a bear market, it's a prescription for ending up with real problems with your portfolio. In any event, examining the hard numbers—the P/E ratios—of the leading Indonesian stocks, the prices being asked relative to the company histories made the whole country look very expensive, particularly in relation to other, rival markets in the region. The fund manager of that Indonesian fund was in a real quandary because the Japanese investors, who had entrusted him with millions of dollars, expected to make a killing in his fund but he was having trouble finding bargains and was looking a bear market in the face. What he did was sit tight. "Hurry up!" investors kept calling and faxing and e-mailing him. "Get invested 100%! If you're not 100% invested you're not earning your fee!" He took the heat, and didn't budge one inch from his stationary position. Shortly thereafter, the bottom fell out of the market, and it didn't take two minutes for those same investors to start calling him up, suddenly singing a different tune: "Stop buying stock! Start selling stock! Stay heavy in cash!" He didn't breathe a word, not even the four famous words: "I told you so."
Nobody ever thanked him, by the way, for saving their skins. But even more to the point, the shareholders began putting pressure on him to stop buying just when it made sense to start buying. They were frantic about losing their money, but he had to keep telling them, doing his best to keep his exasperation in check: "These short-term losses are only paper losses. The only way to make money is by buying now." Their problem, of course, was a lack of a proper long-term perspective.
Over more than two decades of emerging markets investing, I've found that being a genial yet cynical optimist is the best posture to earn long-term dividends in emerging markets. Because despite all the stops and false starts, booms and busts, bubbles and crashes, over the long haul the same logic seems to apply: Time heals most ills . . . particularly with regard to emerging markets.
Time heals most ills . . . particularly with regard to emerging markets.
Chapter Eleven
The Big Picture and the Small Picture
A Case Study of Russia
Often the big picture contradicts the small picture.
I can still recall that when I traveled to Russia looking for investments in the early 1990s, the big picture was that the place was dirty, low-down, and dishonest. But the small picture presented isolated pockets of real opportunity. We're talking macro versus micro views here. Although Russia's political, economic, social, and financial situations all left a lot to be desired, there were still bargains to be found.
By correcting the gap between the macro and micro views, you can get a jump ahead of the crowd.
And by correcting the gap between the macro and micro views, you can get a jump ahead of the crowd.
The Bad and the Good
When we first began tentatively sniffing around Russia, the macro picture could not have been more bearish, unless a full-fledged civil war had broken out.
* A besieged Boris Yeltsin had barely staved off a countercoup by shelling the parliament.
* The place was a hotbed of hard-core Communist resistance.
* Inflation was skyrocketing.
* Industrial output had hit rock bottom.
* Capital flight was endemic. Any Russian with a few rubles to his or her name had smuggled the money out of the country and shoved it in some offshore safe haven far removed from the long arm of the Moscow tax collector.
* There were no well-organized stock exchanges.
* There were no balance sheets.
* There were no earnings reports, because there were no earnings.
* The country was deep into what became known as the Great Contraction, when the country's gross domestic product (GDP) plunged by nearly half in five years.
I could go on; Russia was in a tough spot at the time.
So what attracted us? There was stock to be bought, for nickels and dimes to the share. Companies were being privatized right and left—big ones, small ones, good ones, bad ones, sometimes as many as a dozen a week.
Assets were being auctioned off like excess inventory for pennies on the dollar—or old rubles to new rubles—to sometimes not even the highest but often the only bidder. Entire companies—oil and ore giants, telecoms, energy companies—could be had for peanuts.
So why was the Russian state so determined to conduct a fire sale of its potentially valuable assets? The government desperately needed money. And a lot of individuals—mostly managers—also desperately needed money. So they were skimming and scamming and getting rich in the chaos. This made many ordinary people who were excluded from those deals very angry.
But there was also another, more legitimate reason that things were so cheap in Russia back then: No one could be sure which way the wind would blow—toward a viable conversion to a market economy or toward a civil war between would-be capitalists and ardent counterrevolutionaries.
The ultimate big winners of the second Russian Revolution (to convert to a market economy) were by no means clear. So we foreigners, in effect, had to get paid to take the plunge. Fortunately, those of us willing to take the risk came out smelling like roses—at least for a while, until the expected and probably inevitable deluge.
"Trust Us"
When we first set foot in Russia, registration of shares was a problem. "Who registers the shares?" we would ask, toward the end of nearly every company visit. "Oh, we do," the company official would smilingly reply. "But what's our guarantee that if you don't like our face, you won't just go and erase our name from the registry?" "Trust us" came the reply. But personally, the lack of a central share registry made me very nervous.
Back in 1994, the Russian Stock Exchange was so primitive that trading began at around three o'clock in the afternoon, give or take an hour or so, when a BMW would pull up to the stock exchange building in Moscow to unload a few million bucks' worth of cold cash.
Brokers would sit at long tables waiting for workers and ordinary citizens who had been given share vouchers—which could be exchanged for shares in newly privatized Russian companies—to bring them in by the bushel and sell them for a song.
At around six o'clock in the evening, the BMW would return to collect the vouchers that the brokers had bought on the cheap from the gullible workers and citizens. As one veteran of the scene recently recalled, "It really was 'over the counter.'"
Fast-forward two years. By 1996, the Russian Trading System (RTS), an electronic link between brokers and dealers established under the aegis of the U.S. Agency for International Development (USAID), was trading an average turnover of the equivalent of US$14.2 million daily.
This wasn't too bad, considering that the vast majority of Russian stocks were still so thinly traded that we had to wait days, or even weeks, to execute a trade.
Five years after we purchased our first Russian stock, the Russian boom had entered its second year in high gear. So the same two questions now arose that hover ominously over all runaway booms, at all times, anywhere in the world:
1. Is this the peak before the decline?
2. Or is this the early stages of a prolonged wild party?
There were, on the macro level, a few things to feel good about and to give us confidence that despite all our misgivings, some real and sustainable growth was taking place that would justify those rising share prices.
After plunging 43% since 1989—the year the Berlin Wall came crashing down on the Communists' heads and the Iron Curtain opened like a venetian blind—the Russian domestic economy in 1997 actually reported a marginal gain in GDP of a not exactly staggering 0.4%.
This may not sound like a lot to you, but given the amount of money being made that never got booked, it was mighty impressive—particularly since the GDP had risen for two consecutive quarters, the first such back-to-back upswings in five years. It could have been, for all we knew, the beginning of a Great Expansion—or just a minor blip on the screen.
A Welcome Tidal Wave of Privatization
There had been some impressive—and little-noticed—improvements in the results of that tidal wave of privatization. Despite the fact that the program had been highly corrupt and much criticized for being riddled with errors, it had accomplished many of its initial goals.
By year-end 1998, 75% of manufacturing enterprises had been privatized, while 85% of manufacturing output was being generated by privatized companies. More than 80% of industrial workers were employed either in privatized or quasi-privatized firms.
Even more critically, the huge industrial overcapacity that had plagued Russia during the Communist era had been largely squeezed out of the now heavily privatized economy. A burgeoning service industry had been created from scratch, with large and small banks, advertising agencies, and shops thriving on the new opportunities.
Under the old system, the prevailing wisdom had been: "He who doesn't steal from the state is stealing from his own children." Another common pearl of workers' wisdom: "We pretend to work, and they pretend to pay us." Taken together, it's not hard to understand why the Soviet system collapsed. In fact, the truly astonishing thing is that it took so long to buckle under its own internal contradictions.
After just a few years of free-market policies, employee morale had sharply improved. "It's become more difficult to steal from new owners than to steal from the state," a Russian oil and gas analyst told the Wall Street Journal. This fellow attributed the 1.3% rise in oil production in 1998 (not very much but still the first increase since 1988) to the fact that: "Management has become more motivated." In his industry, newly invigorated, financially incentivized managers were "overhauling oil wells, installing new technology, and investing more wisely."
Thank God a few things were going right. From the standpoint of cultural values, the country was in the midst of a major turnaround. A once-insular country, which out of ideological distaste had shunned foreign trade, became a major player in international commerce.
Still, judging by my own personal experience, in too many of these promptly privatized firms, the same old sad socialist sacks were still running the show. This meant, more often than not, running their companies straight into the ground. Five years of high-pressure, full-speed-ahead privatization and economic shock therapy had failed to bring fresh blood into the ranks of too many senior management teams.
If anything, the prospect of getting rich off privatization had encouraged many over-the-hill managers to stay put, hoping to cash in their chips before heading out. As a result, threatened and paranoid managers tended to adopt, in desperation, a passive survival mentality.
This lose-lose strategy involved:
* Lowering output.
* Cutting employment and wages.
* Running up massive arrears to suppliers and the federal budget.
It's Okay to Sell the Crown Jewels
Something, sooner or later, would have to give. The basket of blue-chip stocks held by our Russia Fund had done well. But the investments were getting expensive. The Fund's strategy in Russia in the second year of its boom was the same one that I employ during all booms anywhere and everywhere: Move down the list from the big, large-cap stocks, which have gotten expensive, and look for second-tier companies with small market capitalizations and big growth potential.
Take a good, hard look at your portfolio. Find all the stocks that have gone up 100% or more in one year or less, where the earnings have not risen as much and the five-year projection is not good, and consider dumping them.
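The screening rule above can be sketched as a simple filter. This is a minimal illustration, not a real screener: the holdings data, field names, and figures are all hypothetical (the ticker names echo the Russia Fund examples in this chapter, but the earnings and outlook values are made up).

```python
# Hypothetical sketch of the "consider dumping" screen: up 100%+ in a
# year, earnings lagging the price, and a poor five-year outlook.

def flag_for_sale(holdings):
    """Return names of stocks meeting all three sell criteria."""
    flagged = []
    for h in holdings:
        price_doubled = h["one_year_return_pct"] >= 100
        earnings_lagging = h["earnings_growth_pct"] < h["one_year_return_pct"]
        weak_outlook = not h["five_year_outlook_good"]
        if price_doubled and earnings_lagging and weak_outlook:
            flagged.append(h["name"])
    return flagged

# Illustrative portfolio; every figure here is invented.
portfolio = [
    {"name": "Lukoil", "one_year_return_pct": 184,
     "earnings_growth_pct": 30, "five_year_outlook_good": True},
    {"name": "GUM Trading House", "one_year_return_pct": 132,
     "earnings_growth_pct": 10, "five_year_outlook_good": False},
]

print(flag_for_sale(portfolio))  # only the second holding meets all three tests
```

Note that the rule is a conjunction: a stock that has doubled but whose earnings kept pace, or whose five-year projection is strong, is deliberately left alone.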
What? Am I nuts? Why sell stocks that are your crown jewels? Because your crown jewel stocks can be your most dangerous stocks. Sure, you feel loyal to them because they've done well by you, and you by them. But watch out. They can be deadly. Let's take a quick look at the situation.
Here it was, a splendid collection of Russian blue chips, nearly every one of which had gone right through the roof—not altogether surprising, in a market that had tripled in value in a mere 18 months. The Fund had a good chunk of:
* The oil giant Lukoil (up 184% in one year).
* Vimpelcom, Russia's number-two cellular phone company (up 154%).
* GUM Trading House (up 132% in one year), the wonderful old department store in Moscow housed in a dramatic location right off Red Square.
* Rostelekom, the huge phone monopoly, and St. Petersburg City Telephone Network, two of the brightest telecom investments available, both well up in the triple digits in the past year.
So I had hit all the sectors and picked up the cream of the crop. I should have been happy, proud as punch. Instead, I was running scared. You don't maintain high performance by holding on to old blue chips that are no longer blue. Find the next batch of blue chips before they turn blue.
The fine, seasoned stocks in our portfolio weren't priceless oils, guaranteed to appreciate forever. They were more like prime steaks, capable of going bad if you held on to them for too long.
As I advised our Russia Fund shareholders after our second successful year in operation: "It's difficult to imagine just how big Russia is. If you fly eastward from Moscow, it takes nine hours to reach Vladivostok, on Russia's Pacific coast." Like the United States, Brazil, and China, Russia's size is intrinsic to its character.
Even after being shorn of many of its Soviet-era and tsarist imperial possessions (and pretensions), Russia is still the largest country in the world, as measured by land mass. It covers nine time zones, stretches nearly halfway across the globe, and contains just about every conceivable form of landscape on earth, from snowy mountains to hot, sandy deserts, from fertile lowlands to dry grasslands, from endless tracts of tundra to lush forests.
Its natural resources are staggering:
* It's the world's largest producer of palladium.
* It's the second-largest producer of platinum after South Africa.
* It's the second-largest producer of diamonds after Botswana.
* It's one of the largest producers of nickel, gold, oil, and natural gas in the world.
On the upside, a big country is a place where companies can grow to fit the landscape. Where the potential domestic markets are huge, companies don't have to rely so much on export-driven or global strategies to succeed. If they can become category killers in their own country, they're halfway there. And having conquered their own territory (assuming that it's big and diversified enough), moving on to the rest of the world doesn't seem like such a quantum leap.
On the downside, large countries' problems are often tailored to their size. They're not nimble. They're not quick. Unlike a small car or a small country, a big market can't easily turn, or be turned around, on a dime.
One time, I visited a company in Vladivostok, an industrial port city on the Sea of Japan. The firm was struggling, with less than 10% of its potential capacity being utilized. Just three years before, the producer of radio and TV components had been hard-pressed to keep up with orders, because it enjoyed a solid share of the Russian domestic market.
But no longer. With fierce competition from cheaper Asian imports (many originating in countries located not far from Vladivostok by plane), this company's electronics business was distinctly endangered.
Industry Characteristics Aren't the Be-All, End-All
But the electronic components manufacturer's attitude was positive. The company had established ties with U.S., South Korean, and European firms, and although these alliances had not produced much business—let alone hard cash—the managing director radiated hope, optimism, and confidence.
He was excited, he said, about a contract he hoped to sign soon with a South Korean company that might increase production by 50%. Because his business was fundamentally linked to the global economy, the electronic components manufacturer had been forced to engage with the new global realities. Objectively, in some ways, he was in a difficult place. But subjectively, he was miles ahead of the curve. Another, equally important lesson to be learned from these not-so-random encounters is: a strictly industry-level analysis of a company's situation can be misleading. A visit on the ground can make the difference.
Field Note: Russia
June 2010
The Russian economy and stock market, like those of other emerging markets, have had a remarkable recovery. By June 2010, Russian equities had more than doubled from the recent low in January 2009. Although the Russian economy contracted by 8% in 2009, it was actually expected to grow by 4 to 5% in 2010, supported by export growth.
Some of the companies we visited included:
Beverages: At one of the leading producers of vodka in Russia, I learned that the company had been investing heavily in marketing and promotion to move into the high-end vodka market and increase sales of its premium-brand vodka not just domestically but also internationally. Despite the government's efforts to decrease vodka consumption locally by imposing high taxes on vodka, the firm was of the opinion that the crackdown on illegal vodka producers (accounting for as much as 35% of total production) would help legitimate producers such as itself even though overall consumption was decreasing.
Food: Our next visit was to a food-processing company. The story here was market share growth. The market was very fragmented, and the company was expected to have opportunities to grow organically and inorganically. With the exception of pork, which could have profit margins as high as 40%, the meat-processing business had low margins. There were tax subsidies in place for agricultural producers in Russia that would last until 2012, and those might be extended. The valuations also looked attractive; however, the biggest risk was the high capital expenditure. The company was in massive expansionary mode because credit was cheap and management believed it could gain significant market share.
Information Technology: While visiting an information technology (IT) company, I learned about the progress in Russia's IT services sector. Involved in software development, IT services, and computer hardware for more than 1,000 organizations, including government institutions as well as large public companies, this particular firm was a beneficiary of the government's efforts to upgrade its IT systems.
Overall, it was a very fulfilling trip to Russia, where we continued to learn about the investment opportunities in that market.
Chapter Twelve
Pri·va·ti·za·tion
The Trend That Can Bring Huge Opportunities
So how did privatization help transform markets such as Russia? It sounds like the driest of bureaucratic abstractions. Just say it—privatization—and those Latinate syllables literally drag on your tongue. But privatization is much more than an abstraction. It's a revolutionary trend that has been sweeping the world for a number of good reasons, not the least of which is that it is the only way for long-dormant value to be pried out of moribund state industries, which have been black holes for capital instead of generators of it.
This has proven true not only in former Communist and socialist countries, and not only in emerging markets, but in developed markets as well. Great Britain and France made fortunes by privatizing their national phone companies, while achieving better service into the bargain.
Privatization has been the engine driving the lion's share of the world's emerging markets. For example, during the first half of 1997, Latin American stock funds produced some of the world's highest annualized returns. The reason? Stock markets in Brazil, Argentina, Mexico, Venezuela, Colombia, Chile, and Peru were all fired up by a wave of privatizations.
In Brazil, the wholesale shifting from the public to the private sector of several key companies, including Telebras (the national phone company), Eletrobras (the national electrical utility), and Petrobras (the state petroleum company), was big news.
Priming the Pump
Getting in early on newly privatized companies is one of the best ways to benefit from the resulting unlocking of value. Of course, gauging these situations can often be tricky, because some countries (and some central governments) rig the process to benefit insiders, while others simply bungle it in less venal ways that can result in horrendous rip-offs of shareholder value.
To the institutional investor, visiting companies in the early stages of privatization—generally before the shares become listed on international exchanges—is the next best thing to knowing the future. A comparable step for a small investor is buying the shares locally if there is a local listing early on in the privatization process. If the valuations are attractive, an investment via the initial public offering (IPO) can be a good opportunity.
Privatization allows governments saddled with unprofitable state industries to obtain the level of investment necessary to turn these crippled colossi around. Though the fairy tale of the sleeping frog being kissed by the princess and turning into a prince might be a somewhat romantic analogy, privatization can be almost miraculous in its ability to pry long-lost treasures out of rusty industrial chests to which the state had long ago misplaced the key—the key being incentive, of course. More than money, incentives are the royal road to riches.
Profitable investing in emerging markets demands a close study of the privatization process, because the difference between a good and a bad investment can be simply a matter of timing—buying at the right moment in the privatization process. Getting in early on the privatization curve is the key to riding the wave of the future.
Here's How It Works
Under the classic privatization model, state-owned companies seeking to go private are often told by their governments—which still own them—to go hunting for a strong strategic partner, typically a leading company in the same industry as the company undergoing privatization.
The rationale is to provide the resulting joint venture with the technical and managerial benefits provided by the strategic partner, so that these more often than not antiquated, clunky dinosaurs can be turned into lean, mean high-tech machines.
The oft-heard phrase strategic investor was no sweeter music to my ears than the dread term underwriters—the latter are the investment bankers who manage the issuing of new shares on behalf of new companies, and they like to set higher prices than I like to pay.
In general terms, my experience with strategic investors has not been terribly positive. Why? Because while portfolio investors like us are looking to make money, a strategic investor is looking to gain control.
In most national privatization programs, the national telecommunications companies have been among the first firms to be taken private. That's because the revenue to be raised by selling off the telecom operation can be quite hefty, while the investment required to upgrade the system to international standards is typically so great that only a well-heeled, deep-pocketed strategic partner is going to be up to the task of jump-starting these creaky jalopies and kicking them into high gear.
In the 1990s, in Estonia, for example, the Ministry of Posts and Communications hoped for a partner willing to help upgrade the entire domestic phone system, not just the urban network. This was a potential fly in the ointment, because although it was clearly more profitable to serve the more densely populated urban areas, for political reasons the government could hardly ignore the sizable slice of citizenry who lived out in the countryside, many of whom had been on waiting lists for decades hoping one day to be granted the privilege of having their own phone.
Now let's look a little more closely at this decision, because it's critical to any foreign investor, large or small. Why is the company often placed under a legal obligation to find a strategic partner? Because, just as the management of any private company looking to sell it would want to buff it up to fetch the highest price, the government's goals tend to be:
* To maximize the proceeds from privatization.
* To curry political favor by improving services.
What usually happens is that the new company gets saddled with the political imperatives of the old company. So the government says to the strategic investor: Let's make a deal. We're going to give you an opportunity to make one heck of a lot of money. But in exchange for this opportunity (which we're going to guarantee for a period of time by extending the monopoly enjoyed by the present company's state-owned predecessor) we've put down two nonnegotiable demands:
1. You're not going to be able to provide service only to the people who can pay a lot.
2. You're going to have to serve everyone, even if that involves taking a few losses.
If all goes well, a well-handled privatization can be a win-win situation for everyone involved, from the government to managers to customers to underwriters (investment bankers) to you and me—foreign investors in general.
Why They're Good Investments
The reason that privatizations (provided you get in on them early enough) tend to make good investments is that a combination of higher investment—from the people who buy the shares and from the strategic partner, if there is one—with improved management almost invariably leads to higher productivity. I said almost invariably—not always.
Whenever you buy stock in a company, you're placing a bet on that company's long-term prospects. The price of that stock is really just the average of a range of potential buyers' and sellers' opinions of what the shares are going to be worth in the future.
Common characteristics of public-sector telecom companies in emerging and, even more so, frontier markets include low penetration rates, excessive prices for the average citizen, and poor phone service. As mentioned earlier in the book, in the developed countries we tend to take mobile phone service—or more than 90% market penetration—practically for granted. But in Nigeria, for example, penetration was only 55%, while in Bangladesh it was even lower, at 46%!
Privatization offers cash-strapped emerging nations:
* A way to get cash out of their antiquated phone networks, which would take millions to bring up to speed.
* A way to close the telecom gap as quickly and efficiently as possible, at next to no cost to the taxpayer.
Low telecom penetration rates represent high potential growth. (Investors like telecom companies that start out near the bottom, because that only enhances their upside potential.)
After telecoms, public utilities usually comprise the next wave of state-owned companies to be privatized. They tend to be vast and unproductive, and need serious money to be upgraded into profitability. But on the upside, once they've got their infrastructure in place, their costs can be pushed lower with good management and their profitability can be pushed up with fair rates.
Utilities may not be sexy, but they can be sleepers. The three big questions to ask with any utility:
1. How subject is it to regulation?
2. If it is subject to regulation (and most of them are), how onerous is that regulation?
3. If they're not subject to government regulation, chances are that's because they're no longer a monopoly. And if they're no longer a monopoly, the overriding question becomes: Can they stand the heat of competition?
Thus, investing in a newly privatized company can lead to substantial profits since you're getting into the company in its early stages of development. As the company becomes more efficient and productive, profitability and subsequently share prices should increase. So it's always wise to keep a lookout for such companies—especially in frontier markets.
Chapter Thirteen
Boom to Bust
How, When, and Why?
When the Asian financial flu first broke out with a vengeance in Thailand in the summer of 1997, after years of unrestrained lending on speculative real estate projects, the local banks started to look kind of shaky, at least to impartial outside observers.
Three Warning Signs of a Bust
Here are three of the warning signs, by which you can sometimes tell if a boom—any boom—is about to go bust:
1. The nation's current account is perilously low. A current account takes the payments a country must make to outsiders, and compares them to all the revenues it's taking in. If the account is out of balance, that's a bad sign. And if the balance skews way toward the net outflow column, that's when global investors start getting nervous.
2. Inflation is rising. If the inflation rate starts rising far and fast in any country, take it as a major red flag because the usual central bank response is to raise interest rates, which could create an economic downturn.
3. Companies are taking out huge loans in Dollars, betting that the local currency will stay healthy enough for the loans to be easily repaid. Companies do this because interest rates are often lower on foreign currency loans than on loans in their own currency.
Three warning signs of a bust: perilously low current account, rapidly rising inflation, and huge foreign currency debt.
What Happened in Thailand?
In the case of Thailand, companies took on huge foreign currency debt because the interest rates were lower on Dollar-denominated loans than on Baht-denominated loans. They would take the Dollars they borrowed, convert them into Baht, and rake in the profits on the interest rate differential. It was a great way to make money as long as the Baht remained strong. But any sign that the Baht might weaken would surely bring the whole house of currency cards down. In effect, the entire country was gambling on the strength of its own currency, and in 1997 that gamble was looking a little risky.
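The carry trade described above is easy to work through with round numbers. The sketch below is purely illustrative: the 6% Dollar rate, 12% Baht rate, and exchange rates are assumptions for the sake of the arithmetic, not historical figures.

```python
# Illustrative arithmetic for the Dollar-Baht carry trade: borrow cheap
# Dollars, hold Baht at a higher rate, convert back a year later.
# All rates and exchange rates here are hypothetical round numbers.

def carry_trade_pnl(usd_borrowed, usd_rate, thb_rate, fx_start, fx_end):
    """USD profit (or loss) on a one-year borrowed-Dollar carry trade."""
    thb_held = usd_borrowed * fx_start            # convert loan to Baht
    thb_with_interest = thb_held * (1 + thb_rate) # earn the higher Baht rate
    usd_back = thb_with_interest / fx_end         # convert back to Dollars
    usd_owed = usd_borrowed * (1 + usd_rate)      # repay the Dollar loan
    return usd_back - usd_owed

# Peg holds at 25 Baht per Dollar: the rate differential is pure profit.
print(carry_trade_pnl(1_000_000, 0.06, 0.12, 25, 25))  # roughly +60,000 USD
# Baht halves in value to 50 per Dollar: the same trade is crushed.
print(carry_trade_pnl(1_000_000, 0.06, 0.12, 25, 50))  # roughly -500,000 USD
```

The asymmetry is the point: a stable currency yields a modest, steady spread, while a devaluation wipes out many years of that spread at once, which is why the whole structure depended on the peg holding.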
The big commercial banks—no doubt tipped off by the central bank in Thailand—were beginning to get an idea that perhaps the Baht was not as strong as had once been thought. And the problem with currency is that any perception of weakness may bring about actual weakness. The more people got the idea that the Baht was in danger of being devalued, the greater the chances that it actually would be.
And what would that do? Well, a devalued Baht would make it much harder for all those Thai companies to pay back all those Dollar-denominated loans, because they would need to make many more Baht to pay back those Dollars. That was terrible news for overleveraged companies—which included some of the largest companies in the country—whose debts in some cases would soon start outstripping their assets. This, in turn, would make the banks even edgier, and make them less likely to extend or roll over these loans, because these banks would be looking at big black holes themselves, in the mirror, and would be trying to call in any and all loans to keep themselves from going under.
Into this tense, anxious pool of people biting their nails, waiting for the other shoe to drop, quietly slipped a group of ladies and gentlemen collectively known as forex traders. As I've mentioned, they're also less kindly known as currency speculators, a term that has caught on because it captures some of the flavor of what they do.
And what do they do? They buy and sell various countries' currencies, of course. Now, there was a time when most major currencies were fixed to the gold standard—last established under the Bretton Woods agreement after World War II. But in 1971, most of the world's industrial powers allowed currency exchange rates to float—which meant that they would be permitted to seek their own level on world markets, and that currencies could be traded against each other just like any other commodity. Still, currency trading remained a relative backwater on the financial scene until the mid-1980s, when a massive increase in the volume of foreign trade caused more and more money to go whirring around the world, constantly being exchanged for local currency when it stopped to buy some goods or services. To get just some idea of the size of growth, daily currency trading turnover soared from US$190 billion in 1986 to an estimated US$1.3 trillion in 1998.
As this market grew, traders—with large lines of credit extended from banks, brokerage houses, and other financial entities seeking to get into what can be an extremely lucrative activity—began speculating on these fluctuating currency rates, aided by computer programs and models that help them trade massive amounts of currency in the blink of an eye. Traders trade on margin and put up only a fraction of the amount of currencies they buy and sell. Thousands of currency traders sitting at computer screens all over the world—some working for banks, others for brokerage houses, still others for companies and central banks—influence with countless buy and sell orders the value of many currencies, which trade much like stocks, bonds, or any other sort of financial instrument.
Still, the central banks of many countries sometimes try—and often fail—to fix the rates at which their currencies are exchanged on global markets against other currencies, in an effort to prop up their own economies. If a government wants to favor exports, it will take steps to let its currency drop so its exports become more attractive. If a central bank wants to stimulate imports it will strengthen its currency by purchasing large amounts of its own currency on currency markets.
When a country sets a price at which its currency can be exchanged against another currency, that's called a peg. Pegs—typically measured against a stable reserve currency like the Dollar—became popular in some Asian and Latin American countries seeking to lend stability to turbulent or hyperinflated economies. And sometimes, when the central bank of a country has enough hard currency in reserve to support the rate of exchange it wants to maintain by buying vast sums of it on the open market, these pegs hold. And sometimes, they don't.
When they don't hold, it's usually because the currency traders no longer believe in the price being asked for the currency by the central government of a given country. So what do they do?
A Short-Selling Nightmare
The currency traders start selling it short, which means that they make a bet with somebody else that the price of that currency will fall. What do I mean by the term short selling, whether it's a stock or a Baht? I call it "selling something you don't have at a price that you don't want to pay."
What short sellers do is:
1. Borrow stocks, or bonds, or currencies, or what-have-you from their owners.
2. Sell the shares that they've borrowed, hoping that the price will fall.
3. Buy the stock, or currency, back at a lower price (if the price does fall).
4. Pocket the difference.
The "shorts"—which is short for short sellers—are taking the risk and making a bet that the price of whatever commodity they're selling will fall by a given amount within a given period of time. Where they can get caught is if the price of that commodity, instead of falling, goes up—at which point they're forced to come up with the difference. So it's by no means a win-win proposition. Sometimes, short sellers end up shorting themselves.
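The four numbered steps above reduce to one subtraction. This toy sketch uses made-up prices and ignores borrowing fees and margin, but it shows both sides of the bet, including the case where the shorts get caught.

```python
# Toy sketch of short-sale mechanics (hypothetical prices, no fees):
# borrow, sell now, buy back later, return the shares, keep the difference.

def short_sale_pnl(shares, sell_price, buyback_price):
    """Profit or loss on a short sale of `shares` units."""
    proceeds = shares * sell_price   # step 2: sell the borrowed shares
    cost = shares * buyback_price    # step 3: buy them back to return
    return proceeds - cost           # step 4: pocket (or pay) the difference

print(short_sale_pnl(1_000, 50, 40))  # price fell: profit of 10,000
print(short_sale_pnl(1_000, 50, 65))  # price rose: loss of 15,000, the short is caught
```

Note that the loss in the second case is open-ended: there is no ceiling on how high the buyback price can go, which is exactly why shorting is "by no means a win-win proposition."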
But if enough currency traders start feeling in their bones that the Thai Baht or Mexican Peso is overvalued at the current or prevailing rate and that it's destined to take a dive, all of their actions, taken together, will produce that effect. It's a perfect example of the tendency for markets, being based fundamentally in psychology—hope and greed—to create self-fulfilling prophecies. Lo and behold, the currency will drop, which gives the central bank of the country two options:
1. Cave in and let the currency float freely to seek its new natural level.
2. Fight, and start spending its currency reserves to defend the Baht, Peso, or whatever its currency happens to be.
And the Baht Tumbles
In the case of the Thai central bank—the Bank of Thailand—on July 2, 1997, the bank decided to abandon the Baht's fixed peg to the Dollar (it was actually not a precise peg, but a so-called trading band, or limited range of rates) and let it float on international currency markets. As expected—though not perhaps by the central bank—the Baht collapsed. Over the next few weeks, in an ill-advised and ultimately futile attempt to bolster the Baht, the Thai central bank spent some US$60 billion (US$23 billion of that borrowed) before throwing in the towel.
After that, the Baht was on its own. And what it did was sink like a stone from about THB20 to US$1 to THB50 to US$1. Those with U.S. Dollar loans thus saw their debts more than double in a very short period of time. When global investors—banks, institutions, money funds, and individuals—saw what was going on in Thailand, which was that the vast majority of companies had Dollar-denominated debts greater than their assets, and the Thai banks and so-called finance companies that had extended those loans were going to be in deep trouble, they did just what global investors always do during a time of crisis: They pulled all their money out before they lost any more of it.
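The "debts more than doubled" claim can be checked directly with the round figures in the text (THB20 per Dollar before the float, THB50 after). The Dollar principal of the loan never changes; only the number of Baht needed to buy those Dollars does.

```python
# Working through the text's own figures: a Dollar debt costs 2.5x as
# many Baht after the fall from THB20/USD to THB50/USD.
# The US$1 million loan size is an arbitrary example.

def local_debt_burden(usd_debt, thb_per_usd):
    """Baht needed to repay a Dollar-denominated debt at a given rate."""
    return usd_debt * thb_per_usd

before = local_debt_burden(1_000_000, 20)  # THB 20,000,000
after = local_debt_burden(1_000_000, 50)   # THB 50,000,000
print(after / before)                      # 2.5: the burden more than doubled
```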
During the ensuing period of currency contagion—so called because the rapid drop of the Baht prompted similarly shaky currencies to fall, from Malaysia to Indonesia—a whole lot of very mad people out there (many of them governing the Southeast Asian countries) began stridently denouncing the currency traders and accusing them of engaging in a conspiracy to ruin their once-high-flying Asian tiger economies.
Chief among the proponents of this conspiracy theory was the prime minister of Malaysia, who, having proudly presided over what for a long time had been known as the Malaysian Miracle, had no desire to go down in history as the man who had presided over the collapse of said Malaysian Miracle.
He became convinced that the huge drops in the Malaysian currency, the Ringgit, were the result of this malicious conspiracy on the part of a lot of devilish forex traders. A Muslim, he even went so far as to denounce this ring as a "Jewish" conspiracy, in part because the most visible and famous currency trader of them all, George Soros, just happened to be Jewish.
If this all sounds somewhat unlikely, you may do well to recall that in the 1960s, when the British Pound sterling was suffering pretty much the same fate, Prime Minister Harold Wilson fiercely blamed a cabal of Swiss bankers he called the Gnomes of Zurich for making his life miserable.
With the Baht locked into a sickening downward spiral, the Thai banks—which were now sitting on huge loans in Baht that were looking a lot less likely to be repaid—promptly cut off all lending, bringing the breakneck economy to a screeching halt. This newfound conservatism was in sharp contrast to previous practice, which had been to lend virtually without restriction to just about anyone, particularly anyone who was anyone—that is, with political or social connections to the military, bureaucratic, business, or government elites.
It's Called Crony Capitalism
They even had a name for that sort of thing: crony capitalism. It was invented by a clever U.S. journalist to describe the cozy cartels that rose up in the Philippines under the late, not very lamented President Ferdinand Marcos. But here it was being used to describe the economic system most favored across Southeast Asia, in which cartels and conglomerates with connections to the government or to the army dominated economic affairs.
In South Korea, they called these cartels chaebol. In Japan, they called them keiretsu or zaibatsu before World War II. In Russia, there was semibankirschina, which meant "rule of the seven bankers." The entire system rested on the theory that a high-growth economy was best controlled by a governing elite composed of military and government officials, bankers, corporate honchos, and other favored insiders, the same people who popularized the term and the concept of so-called Asian values. That was all very well as long as the governments and economies involved could deliver the goods: high rates of growth. But once the music in this game of musical chairs stopped, the governments—and those officials and cronies who had been raking so much off the top—suddenly found themselves being denounced as crooks and scoundrels.
It emerged that there was no such thing, really, as Asian values. There were just fair, open, and transparent economies, and unfair, closed, and opaque ones.
It's not easy to understand exactly how, when, and why a crisis can emerge. Hopefully, you can now recognize what you're dealing with, so let's move on to learning about how you can benefit from it.
Chapter Fourteen
Don't Get Emotional
How to Profit from the Panic
So how does a panic begin? What really happens? And as an investor caught in the downturn, or someone looking to cash in on the panic, what do you do?
The number one thing that you do is: Don't panic. Panic, after all, is an irrational visceral response to a sense of powerlessness and helplessness, which often comes from a lack of understanding of the actual circumstances. But panics, odd as this may sound, are nothing to be scared of.
As Franklin Roosevelt once said during the Great Depression, "The only thing we have to fear is fear itself." Understanding the origins of the difficulty can help to diminish anxiety. And in any number of critical ways, all busts start with a boom. Why? Because all busts start with a gathering consensus that a market has gone too far, too fast.
The same people who were so in love with the market, and with every stock in it, that they'd sell their grandmothers into slavery to buy more stock, now all of a sudden won't touch a share of stock with a 10-foot pole. Objectively, this fickle attitude makes absolutely no sense. But that's failing to take into account the rule of emotion, which tends to stimulate snap judgments. Emotions make people see only in black and white, good and bad, up and down, so what was good suddenly becomes bad. What do you do in such conditions?
Wait for the panic and the inevitable crash in prices. Then, calmly, buy.
Why? Because you're being paid to take a risk that the short-term sentiment is greatly exaggerated. In the perception gap between emotion and reason, you'll find your buy window.
In the perception gap between emotion and reason, you'll find your buy window.
Become a Fan of All the Information You Can Find
Hoping to avoid getting roasted, some stock market crystal ball gazers perform a mathematical ritual known as technical analysis to forecast these big market drops.
Such analysis is another aid to understanding what is happening in the market. It is the study of price movements across all kinds of markets, including stock markets. It is different from the fundamental analysis of such variables as price-earnings ratios, profits, earnings, market share, and other factors impacting corporate performance since it focuses only on the stock prices.
As I pointed out, early in my career, when I was working in Hong Kong as a consultant, I started my study of stock markets with technical analysis. From a chartist's point of view, there are certain definable patterns in price movements as put on a chart, which could help us predict what could happen to the price in the future. In the case of the Thai crash discussed previously, the chart pattern was what is referred to as a quadruple top, where the market peaked four times before crashing. Such patterns are unusual, since most crashes are preceded by a head and shoulders pattern, as we discussed, or a double or triple top. A quadruple top usually means a severe and dramatic movement, which was what Thailand experienced.
In times of market volatility, technical analysis can help the contrarian investor find the right times to enter and exit, provided that the value fundamentals are clear. The important point is that when everyone is dying to get in the market and the stocks are too expensive, it is best to exit—but when everyone is screaming to get out and the stocks are cheap, that is the time to buy.
When everyone else is dying to get in, get out. When everyone else is screaming to get out, get in.
The Example of China Telecom
I can still recall the frenzy around the listing of China Telecom in 1997, in the midst of the Asian financial crisis.
The New York Times expressed the spirit of the time when it stated: "For investors still willing to try the bumpy ride in Asia, it's hard to think of a sexier pair of words than 'China Telecom.'" (China Telecom was broken up in 1999, and reborn as China Mobile.)
Broker analysts had dubbed the company the hottest "red-chip" initial public offering to come down the pike since the handover, when Hong Kong was returned to China. ("Red chips" are mainland Chinese companies listed on the Hong Kong Exchange).
Even as Hong Kong's financial secretary was publicly insisting that there was "no political or economic need for us to disband the Hong Kong Dollar peg," one of the most renowned Hong Kong stock analysts was blandly assuring us that buying China Telecom was a "no-brainer"—and guaranteed to make money for those who purchased the stock.
"The issue is oversubscribed by 300 times," he bleated, visibly salivating at the very thought. "It's a hot issue. I'd buy as much as you can get. The gray market is saying you'll double your money in one day. It's putting a 100% premium on the market price."
Whenever you hear the words "no-brainer" and "hot issue," it's best to turn on the alarms.
The issue was finally oversubscribed not by 300 times but by 30 times. Two days later, with the Hang Seng index doing a splendid imitation of a lead weight in free fall, China Telecom was set to open at HK$10.00—below the initial offering price of HK$11.68. This highly touted US$4 billion stock offering had promised to be the bellwether for red chips in the post-handover Hong Kong.
But come the fall, the overheated market in red chips had cooled considerably. By Black October, red chips had dropped 40% from their peak in late August. What to do? Well, the first thing I did was have a meeting with some of the top executives at China Telecom. Taken at face value, the numbers looked great.
China Telecom was, according to its glossy prospectus, expertly prepared by the lead underwriters of the initial public offering, "the dominant provider of cellular telecommunications services in Guangdong and Zhejiang provinces which are among China's most economically developed provinces and the two provinces in China with the largest numbers of cellular subscribers." In other words, la crème de la crème, cellularly speaking.
Not only that, but the telecommunications industry in China had experienced rapid growth in recent years, and the cellular services sector was one of the fastest growing sectors within the telecommunications industry. In short, what was there not to like? After all, China Telecom's cellular subscriber base had grown at an annual rate of 88% over the previous three years. The strong growth trend was expected to continue, according to management. On top of that—as the stock underwriters had put it in touting the stock—"You're buying an effective monopoly."
Even without all the rocking, rolling, and roiling in Asian markets—which I calmly considered a fleeting epiphenomenon—buying China Telecom at its high initial offering price was, despite the underwriters' flamboyant assurances, by no means a no-brainer.
This was also despite the gray market placing a 100% premium on the shares—according to rumor. The gray market, incidentally, is a market in shares that have been allocated to certain subscribers at the initial share issue who immediately turn around and sell their share allocation, right out of the gate, to buyers willing to pay a steep premium for the shares before the actual listing.
My little head-mounted antennae were quivering like tuning forks, picking up danger signals. As best I could figure it, China Telecom's assets when compared to similarly situated cellular companies elsewhere in the world were being valued at a very high price.
Of course, for everyone involved, buying this stock meant placing a bet on the future. For my purposes, I wasn't as concerned about today's price-earnings ratio as I was about those five years down the line. But for that price-earnings ratio to be any kind of bargain in five years' time, China Telecom's growth in revenues—as opposed to subscriber base—would have to be staggering. And even then, growth in numbers or market share wasn't the point. It was growth in profits that mattered.
To their credit, I found the managers of China Telecom impressive. And I had not the slightest doubt that under such obviously capable management, China Telecom would flourish in its local market. But I did harbor some doubts about the company's long-term revenue growth prospects. Simply put, as cellular phone service becomes more of a mass medium, prices—and possibly profits—were bound to go down.
The looming question was: Would increased volume compensate for the drop in revenue per subscriber? Add to that uncertain mix the feeding frenzy that typically accompanies these red-chip initial public offerings, and my gut response was rank skepticism.
When an initial public offering is oversubscribed, this means that more people—institutional and retail investors—have put in their bids (and in many cases written sizable checks for shares they hoped to buy) than will ever lay hands on the shares.
China Telecom's underwriters had been granted the right by the company to essentially allocate the shares as they saw fit, which meant that a certain (small) percentage would go to New York in the form of American depositary receipts (ADRs), and a certain (higher) percentage would be listed in Hong Kong.
Initial public offerings are intrinsically unfair, insofar as both share underwriters and the company are permitted a wide degree of latitude in granting preferential treatment to most favored customers. With a "hot" one like China Telecom—piping hot until the day the bottom dropped out of the Hong Kong stock market—the gray market quickly bid the price way up above the initial share, or opening, price.
A Chance for Small Investors
The gray market is made up of all sorts of frustrated buyers who, because they haven't been given a share allocation, promise to pay a substantial premium to any people who did get their hands on some shares over the official market price, if they will sell their shares to them.
This means that anyone who's lucky enough to get a chunk of stock on the first round—a privilege bestowed sometimes by random lottery, and at other times because of connections to the underwriters—can simply turn around and flip the stock, minutes after buying it, into the gray market and make a tidy profit in no time flat.
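For readers who like to see the numbers, the flip arithmetic can be sketched in a few lines of Python. This is purely illustrative: the function name, the share count, and the assumption of zero fees and taxes are all mine; only the HK$11.68 offer price and the rumored 100% premium come from the story above.

```python
def flip_profit(shares, offer_price, premium):
    """Profit from flipping an IPO allocation into the gray market.

    `premium` is the gray-market premium over the offer price,
    e.g. 1.0 for a rumored 100% premium. Fees and taxes are ignored.
    """
    gray_price = offer_price * (1 + premium)
    return shares * (gray_price - offer_price)

# A hypothetical 1,000 shares allotted at the HK$11.68 offer price,
# flipped at a 100% premium: the profit equals the entire original outlay.
print(flip_profit(1_000, 11.68, 1.0))  # → 11680.0
```

At a 100% premium, "double your money in one day" is literal: the gray-market buyer hands the lucky allottee a second HK$11.68 for every share.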
One factor influencing the so-called gray marketeers is the expectation that the new share issue will soar like a hot-air balloon. What I often do in initial public offerings is hang back and wait to see what happens to the shares in the aftermarket—that's the open market—in a few weeks or months. Of course, there's no way to be sure that the price will drop, but once the initial euphoria has ended, it's not uncommon in my experience for initial public offering prices to dip, or at least drift, once the support limits promised by the first round of buyers have been breached.
In any given share allocation scheme, a certain proportion of the total shares issued is guaranteed to be sold to the general public. So it's perfectly possible for small investors to participate in an initial public offering.
Chapter Fifteen
Turning Fear into an Advantage Instead of a Disadvantage
A Case Study of Thailand
Let's take the Thai people. I'd lived in Thailand for a few years, back in the 1960s, and I'd been back quite a few times in the years since. I thought I knew the Thai people pretty well, and I had great respect and admiration for them. In particular, I respected them for their capacity to persevere in times of adversity and smile when things got difficult.
When everyone else is getting all pessimistic, that's usually when it is time to turn optimistic.
Something that people tend to forget is that during times of panic, adversity often brings out the best in people. And, by the same token, prosperity often brings out the worst in people.
Within a matter of months, not years, after the Asian financial crisis, I could see the progress that the Thai people and the formerly stagnant Thai government had made to set things right. It takes a major blowout for tectonic plates to shift.
The boiling point had come when the embattled prime minister seriously suggested that the economy could be saved by opening up more Thai restaurants and popularizing Thai kickboxing.
It took just a few serious protests in Bangkok—mounted not by disgruntled leftists and radicals, but by sober, dark-suited businesspeople, middle-class people feeling the pinch and getting hopping mad about it—to suddenly result in a new government and a new, more respected prime minister. The king had stepped in and started to exert moral pressure to lessen corruption and self-dealing on the part of the local elites. Would all these bad people all of a sudden turn into angels? Of course not, but for a while at least, it would behoove them to keep their noses clean. That, in time, would help revive morale and bring the market around.
The people get it together. They start pulling together. They clean up their acts, and start demanding that people in charge do the same. They start working harder. They start saving more. They stop spending money. This was what William James meant when he spoke of "the moral equivalent of war."
As a partner in the Bangkok office of a consulting firm said during the depths of the crisis: "Even turkeys can fly in a hurricane. But when the wind dies down, it's much more difficult to sustain performance. It's a question of muscle."
He soberly added, "During a downturn, you need to not just cut fat. It's even more important to start building muscle." This means that sooner rather than later, the situation begins slowly but surely to turn around.
Going for Liquidity
The first thing to do in times of crisis is go for liquidity. You could call it a flight to quality. It only stands to reason that if I now have a choice between a small illiquid stock and a large liquid stock, I'll pick a liquid one every time. In fact, the only time that I buy illiquid or less liquid stocks is if I have to—during booms, when liquid stocks get too expensive.
The first thing to do in times of crisis is go for liquidity.
Liquid stocks tend to be the market leaders, large-cap stocks, index stocks, and blue chips—the stocks you can never buy during a boom, but are your first choice at the first sign of a bust. As sentiment sours, these stocks will begin to come down to more reasonable levels.
In Thailand, I was aching to take a closer look at Siam Cement, one of the country's blue-chip companies, partly owned by the royal household. This was more than just a cement company; it was a diversified group with holdings in building materials, petrochemicals, plastics, and a number of other basic building-block raw chemical materials.
It was getting a bad rap in the press: exports down, plants being closed. In other words, now was the perfect time to pay a visit. Was Siam Cement, you might ask, one of those companies that irresponsibly took out loans in dollars with the expectation that they could be paid back in Baht? Well, yes. And, I ask you, so what?
The point is, they all did it. The point is, such a strategy seemed sound at the time. Indeed, the fact that Siam Cement was widely known to have suffered that exposure made it worth buying. Why? Because well-publicized risk exposure of that kind tends to depress sentiment, which in turn makes such stocks attractive targets.
With the currency devaluation, the beleaguered Baht would soon be looking competitive again. Under pressure to increase business through exports, Siam Cement was going to start exporting like crazy, because it could make and sell cement and all of its other products more cheaply than its competitors in neighboring countries could.
And Thailand, which had slapped an export tax on cement, had—lo and behold—canceled that tax for the duration. This made Siam Cement even cheaper to the Malaysian market—even cheaper, in some cases, than Malaysian cement. So, like a kid in a candy store, I was rarin' to start buying up all those juicy blue chips I couldn't afford before the crash.
Playing Pin the Tail on the Bottom
As the Asian virus raged ominously throughout the continent, the new verbal game being played out grimly in global financial circles became "pin the tail on the bottom."
One global trader announced that this wasn't a panic, but "a systematic meltdown of testing a new bottom." Say what? Other self-styled experts pronounced this a terrific time for "bottom fishing." Still others spoke loftily about "breaching the quadruple bottom."
Meanwhile, into the breach plunged the intrepid International Monetary Fund, the global institution best equipped to deal with such crises of confidence. It promptly stepped up to the Baht with a generous offer: a US$17 billion bailout and rescue package, which was dangled like a carrot in order to force the ruling elite to swallow the harsh medicine of financial discipline.
But, in the short term, sentiment was so sour that even the prospect of powerful external forces leveraging the Thai economy back into line did little to raise morale.
Hoping to take a leaf out of my own book, I hopped onto the next flight to Bangkok. Now how on earth, you might ask, could we be the slightest bit optimistic about Thailand when all the smart money was deserting the place, as if the country had contracted the plague?
The main reason that we felt positive was because all of the trends were so negative. This was not just to be stubbornly or rigidly contrarian—because being a true contrarian means not to go slavishly against the grain, but to be always independent in your thinking. It was simply that we and the short-term smart money were operating according to different time frames.
A Rosier Outlook
In the short term, the smart money was right on the money. For the near future, Thailand was a mess. But over time—I reckoned three to four years, maybe five at the outside—precisely because things were going to get so tough, the Thai people would change their behavior dramatically. Here's how (and why):
* They would not borrow as much.
* They would not buy as much.
* They would save more.
* They would work harder.
* They would break their necks to export because they would need dollars.
* Their industrial output would increase.
And last but not least, not by a long shot:
* They would demand more from their government by way of reform.
And they did. The case study of Thailand is just one example of how the only thing to fear is fear itself. And, in actuality, fear can be used to the investor's advantage time and time again.
Chapter Sixteen
The Crisis Bargain Bin
Taking the Long-Term View in the Aftermath of a Crisis
Within just a few months of the start of the Thai crisis, Thailand showed some measurable improvement in a couple of key areas. An irate citizenry forced government changes, and a new, more forceful administration came in. Everyone we spoke to praised the new administrators and generally agreed that they were the most talented group to run the country in a long time. That alone gave me hope for growth.
Tough times bring leaders to the fore. An apt analogy would be Franklin Roosevelt's famous first 100 days, when the New Deal was ushered in to help rescue the country from collapse.
Only in times of crisis will people change their destructive behavior patterns. Only when there's a consensus that something is broken will anyone take the trouble to see that it gets fixed.
Many Bangkok blue chips enjoyed pride of place in our portfolio as jewels in our crown and, at the right price, deserved a larger place. It was time to focus on those bargains and winnow out those companies that could not survive and prosper in the new environment. It was time to buy some stocks and sell some stocks, reasonably, rationally, prudently and cautiously. Someone asked me how I was purchasing Thai stocks. My reply: "Like porcupines make love . . . with great care!"
No Pain, No Gain
Looking at the macro picture for the near term, things could not have been much worse. However, from my point of view, that was not altogether a bad thing, because we trade in perceptions as much as reality. When we're hunting for bargains, we look for stocks that look lousy but are in fact simply being misjudged. By the end of 1997, the Thai stock index had declined 70% from its all-time peak. In 1993, at the height of the Southeast Asian boom, the total market capitalization of the Thai stock market had been US$133 billion. By early 1998, it had declined to a dismal US$22 billion.
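There is an unforgiving asymmetry hidden in numbers like these: the deeper a market falls, the disproportionately larger the rebound needed just to regain the old peak. A quick sketch in Python (my own illustration, using only the figures quoted above):

```python
def required_recovery(drawdown):
    """Gain needed to regain the old peak after a given fractional drawdown.

    A 70% fall (drawdown = 0.70) leaves 30 cents on the dollar, and
    climbing from 0.30 back to 1.00 takes a 233% gain, not a 70% one.
    """
    return drawdown / (1 - drawdown)

print(f"{required_recovery(0.70):.0%}")  # gain needed after the index's 70% fall
# The market-cap slide from US$133 billion to US$22 billion is an even
# deeper drawdown:
print(f"{1 - 22 / 133:.0%}")
```

This is part of why a bottomed-out market can rebound sharply and still leave long-term holders far underwater, and why buying near the bottom matters so much more than buying near the top.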
From my point of view, even with the index down in the dumpster, there had to be more than a few treasures that were being tarred with the same brush. The Stock Exchange of Thailand index's calamitous drop clearly indicated that the average investor in Thailand currently regarded Thai stocks as poor bets.
If we'd been looking at the same group of stocks from the same short-term perspective, we, too, would probably have cut our losses and run. But as we saw it, the market was once again overshooting the mark, and opening up for us a rare window of opportunity for increasing our positions in many companies formerly too rich for our blood.
Given the magnitude of the market's decline, you did have to wonder: Could the index go down to zero? In my opinion, the answer was: Not very likely. This was not, I'm afraid, due as much to sound fundamentals as to the sheer volume of money being pumped into the country. The initial US$17 billion injected by the International Monetary Fund to shore up the country's foreign currency reserves was, we assumed, just the first installment of a program that would in time stabilize the currency. It was not nearly enough money to shore up the country's ailing financial system, but was mainly a stopgap measure to plug the holes in the dikes to stanch the flow of funds leaking out.
The chance that the Thai index would decline an additional 50% before stabilizing, much less recovering, had to be considered. But when stacked up against the far greater likelihood of a gradual if erratic recovery, I held out for the chance of recovery—sooner, rather than later. This issue was of more than academic concern to us, because we'd been sniffing and snooping around and shifting stocks in Thailand since the market took its first big 40% drop. And because we'd been buying stocks aggressively on the way down, that meant that as the market kept dropping, we were losing money hand over fist—on paper, at least.
In what country, I was asked time and time again during the crisis, did I see the greatest bargains? Without hesitation, I'd reply: "Thailand."
If you're savvy enough to buy stocks on the way down instead of on the way up, you need to be willing to rack up losses in the short term. But at certain strategic points in time sometimes you've got to take some pain in the short term in order to outperform in the future.
You've sometimes got to take some pain in the short term in order to outperform in the future.
A Few Cardinal Rules about Timing
The best rule about timing is not to do it. Market timing is not a very fruitful investment technique since it is so difficult to do successfully. Although we generally discourage trying to time the markets, during extreme meltdowns a few cardinal rules do apply. One is that a market in free fall will tend to hit bottom and then rebound as much as 30% before collapsing again.
Why? Because markets generally pick up to a point where spooked investors who've been holding off on selling because they want to keep their losses to a minimum are ready to take their hits and move on.
When buying stocks during a bust, you need to make sure that you're picking long-term recovery prospects, not corpses shortly to be found not on an action list but on a watch list.
Prowling through Bangkok's back streets and alleys and chugging along congested highways lined with half-empty skyscrapers looking for bargains was a bit like panning for gold in a stream of worthless sand. A better analogy might be hunting for diamonds in a bucket of zircons, because although there were quite a few superficially attractive companies out there, too many—if you dug a little deeper or read the fine profit and loss (P&L) print—were deceptively attractive, as opposed to genuinely undervalued.
The two big booby traps, as I saw them, that tend to make trouble during any bust are:
1. Excessively high levels of dollar-denominated debt.
2. Management shock: higher-ups prone to deep denial and/or suffering from what I like to call "deer frozen by oncoming headlights" syndrome.
For example, of the 480 companies listed on the Bangkok Stock Exchange, about 40 had already gone belly-up by year's end 1997. An equal number of sick companies had seen trading in their shares suspended out of fear that if they were traded they, too, would fail. By our calculations, we expected at least another 20 companies to go under before the situation stabilized. We had to keep our eyes out for weak companies and for companies exposed to those companies that wouldn't survive. Although we don't mind losing money in the short term, we don't like it when stocks we own self-destruct in a puff of smoke—and mirrors.
Scoping Out the Banks
So where did we start our search for bargains? The financial sector. Why? Because that's where the damage was perceived to be the greatest and where recovery could come the fastest.
If you watch a bank like a hawk, you'll see in the patterns of its lending practices a blueprint of the macro picture.
If you watch a bank like a hawk, you'll see in the patterns of its lending practices a blueprint of the macro picture.
During a downturn, banks are always the first to take the hit, and usually the first to recover. Banks, as lenders to individuals and businesses that either can or cannot pay them back, are the canaries in the coal mine of any economy. Coal miners used to carry canaries (in cages) down into deep mines, as early-warning signals of any dangerous level of toxic gas. Since canaries are more sensitive to toxic gas than humans, they'd keel over at the first whiff of gas, long before any coal miner would be affected. Banks, despite their often bloated size, are highly sensitive gauges of any economy, assuming you know how to read the dials. Investors call them proxies for the economy at large.
In times of uncertainty, banks make excellent catchall stocks, because if you buy a piece of a bank, you're buying a piece of every loan on its books, which subsumes the whole economy. Let's say you learn that a bank is pulling back in a certain sector. That could be a sign of weakness in that sector and it will be necessary to investigate carefully before investing. But if you find out that a bank is suddenly extending itself to service a certain sector at a time of widespread distress, that means the bank has found some bright spots in an otherwise dismal picture.
Bankers tend to get a bad rap whenever things turn really sour. Think of the international banker as a guy with a monkey wrench who mans the valves through which money flows, like oil in a pipeline. When times are good, he opens the valves. When times turn bad, he tightens up or even shuts down the valves. With the flow of money frozen, he starts hunting down everyone and anyone capable of funneling some of that money back into the tank. During the sad, sordid saga of Southeast Asia in the late 1990s, international bankers poured some US$400 billion into the region—excluding substantial loans to Hong Kong and Singapore—before abruptly shutting off the valves, throwing the debt engine into reverse, and leaving the poor rejected countries gasping for air. "There was a huge euphoria about Asia and Southeast Asia," a spokesman for one of Germany's major banks admitted to the New York Times. "It was the place to be."
The problem is always that when the going is good, it tends to be great. But when it gets bad, it gets horrid. International flight capital—money that can be moved in and out of a market at a moment's notice—is poured in at the first sign of strength, and yanked out at the first sight of weakness. What disturbed even the most hard-core free marketeers in the financial community about the Asian contagion was the deadly speed with which this money tap could be turned on and off and, what's more, turned into a kind of financial vacuum cleaner, sucking its victims dry. When the taps get turned off, it's like a deep freeze: No one can budge until the guy with the wrench turns the tap on again.
Another reason to invest in banks when a declining currency is bound to stimulate exports is that they offer an easy way to benefit from export growth and from its positive impact on the entire economy. There can also be opportunities to benefit from the direct export sector.
A prime example was Delta Electronics, a diversified manufacturer of electrical components whose products would at that point in time have been 50% cheaper than they had been six months before, making their prices highly competitive. Delta was an excellent company, well managed, with a profit growth of 20% for the year. But Delta was also one of a handful of Thai companies clearly in a position to benefit from the currency collapse, and had therefore seen its shares soar 150% since the devaluation of the Baht six months before. Soon Delta was too expensive, and it was time to turn to the banks. I was more inclined to buy the banks because—contrary to popular perception—they were the most efficient way to make an export play: the banks were financing the exporters. When an entire country goes bad, sentiment usually quickly sours on the banks. But all the banks in a position to do so, one could safely presume, would be doing what Thai Farmers Bank affirmed as its policy: shifting the bulk of its loans from importers to exporters.
The shares should in time reflect the gathering strength of the export-driven recovery. But we were also on the lookout for export plays that were not quite so painfully obvious as electronics.
Looking for Patterns
One key point to always keep in the forefront of your mind while suffering through the bust end of the boom-and-bust business cycle is that the first country to get hit, and hit hard, is typically the first one to recover.
The first country to get hit, and hit hard, is typically the first one to recover.
A corollary to that: The country hardest hit is forced to confront the deepest root causes of its problems, and will therefore be forced to improve its behavior most dramatically in order to attract now-edgy global investors.
Economic recoveries are no different from any other form of recovery: They follow a classic trajectory of a high high, a low low, and an upturn that could be dramatically rapid.
Thailand had gone on a binge—which in this case had been to gorge itself on cheap credit—and was now going to have to go cold turkey. But it didn't pay to sit on the sidelines in Asia and do nothing while waiting for the storm to blow over. In my opinion, it always pays to take direct action. Reason: Stock markets move before the economy. Stock markets anticipate that economic movement is coming in a year or two.
Chapter Seventeen
Overcoming Irrational Market Panic
Learning to Be Objective
If the world needed any greater proof that globalization was (1) here to stay and (2) a force to be reckoned with, the negative impact displayed by Latin American markets in the wake of the Asian crisis supplied it in spades. On a morning in late fall 1997 after we checked into our hotel on Rio de Janeiro's Copacabana Beach, the high-flying Rio stock market took a stomach-churning 15% tumble, prompted by desperate doings in far-off Hong Kong. That same morning, a sickeningly swift 10% drop on the São Paulo exchange in the first four minutes of the session forced the governors to halt trading for the first time in the exchange's history.
Riding the Rio Roller Coaster
The steepest declines were racked up by a series of high-flying Brazilian blue chips that had enjoyed an average 93% climb in the first nine months of 1997. Anchoring the group were the three prime poles of the Brazilian privatization tripod: Telebras, the state-owned telecom company, Eletrobras, the state-owned electrical utility, and Petrobras, the state-owned oil and energy company. With the Bovespa (Brazilian Stock Exchange index) dropping like mercury in an ice storm, it looked as if the second-best-performing stock market in the world (after Russia) was heading full-speed into a brick wall.
So why were global investors getting so hot and bothered over Latin America, when the currency crisis appeared to be confined to Asia, half a planet away? One easy answer, as well as a true one, was that the whole world was now so intricately interconnected that it was no longer possible to buffer any one market from the behavior of others due to distance alone. Just as chaos theorists had proposed that a butterfly flapping its wings on one side of the earth could cause a hurricane to break out on the other, the slightest trembles and tremors in one financial market could easily infect and influence the others, even if their commercial connections were rather remote. Of course, markets also move in their own directions, one often markedly different from the other.
First-Class Buying Opportunities
How does that affect the investor? Here's how: These global perturbations often present an unprecedented wave of first-class buying opportunities. When markets overshoot and undershoot due to irrational factors, that's when a cool head can win.
If market sentiment suddenly sours on an entire country mainly because it's perceived to be linked to problems elsewhere, that sentiment may be a function of irrational panic, not cold calculation.
In the same way, if sentiment sours on a whole country when individual companies inside that country are still doing well, you can find bargains in stocks that are artificially depressed because of no other reason than popular knee-jerk reactions to temporary events.
With most emerging markets overexposed to the ins and outs and ups and downs of flight capital—money that can be moved in and out of a market at a moment's notice—it's critical to figure out whether these sudden panics (as well as flights of irrational exuberance) are justified by the fundamentals. Because if they're not, betting against them can be your ticket to ride.
Let's take Brazil, for example; the swift transformation from a mixed to a market economy that Brazil pushed itself through during the five years prior to 1997 had set the pace for a regional record 5% growth rate in Latin America in the 1990s, and a record US$45 billion in direct foreign investment flowing into the region as a whole.
In Southeast Asia, the forex traders' alleged collective decision to attack a currency—otherwise known as short selling that currency, with many individual players placing the same bet that it would wilt under pressure—sent a signal to less-plugged-in investors that possibly a few things were rotten in the underpinnings of the so-called Asian miracle—things like rampant corruption, insider dealing, and a lack of transparent markets. That behavior sent those markets into a tailspin, but it would have been realistic, not masochistic, for government officials of Malaysia and Indonesia to bless those currency traders, not curse them.
Why? Because by shining their harsh speculative spotlights on the flagrant weaknesses in their economies, the forex folks were expertly diagnosing precisely what ailed them. Those governments didn't need to hire high-priced consultants, because the currency traders were doing the job for free, with the icing on the cake being a quick profit if their bets went the right way.
But what about Latin America? The prospect of Asian contagion infecting Latin America was like a golden door swinging wide open for bargain hunters, because the supposed links between these two vibrant regional economies were not nearly so strong as popular opinion would have it. Yes, these countries did trade together, and did compete as exporters of goods to more mature markets. But just because they shared some superficial similarities didn't make Latin America just like Asia. If anything, the two continents were growing less alike than ever before.
When the raft of Brazilian blue chips took their first real nosedive in a decade, mainly due to trouble in Asia, I felt in my bones that we were looking at a phenomenon known as spooking.
When the ordinary run of buyers gets spooked, that's the time to step up to the plate and start putting your money down on the table.
Here We Go Again
A major reason that global investors were up in arms over Latin America, after those same markets had doubled, on average, over the previous six months, was that some of the same sins—low or negative current account balances, high foreign trade deficits, creeping inflation—that had brought the Asian tigers low were running rampant in Latin America.
Brazil was more vulnerable to being downgraded in investors' books than its neighbors were, because its currency—the Real—was widely regarded as at least 30% overvalued. But it was important to recognize that although the prospect of a currency devaluation—either abrupt or gradual—represents currency risk in action, a devalued currency can be a country's saving grace, because gradual, panic-free devaluations can kick exports through the roof.
Some simplistic strategies would have you bail out of a country at the slightest hint of a devaluation coming down the pike, because the same share of stock will be worth that much less, in Dollar terms, once the currency declines. But I see things differently. Combined with the negative sentiment that such a prospect represents, I view signs of an impending devaluation as a possible signal to buy into a country—because even if the markets do pull a temporary nosedive, the eventual bounce-back will be all that much stronger than before.
In Rio, the first casualty of the currency traders' preliminary skirmishes was the confidence of local stock traders, who found themselves caught with their pants down by a wave of frantic short selling in anticipation of the coming currency crisis. As one young Rio derivatives trader gloomily observed to the New York Times on the morning of the crash (euphemistically called by some brokers a "correction"): "What can I say? Brazil today is no-man's-land. The future is cloudy and stormy, and everyone around here is concerned and desperate. Everything here is so exaggerated. Brazil is a land where death comes suddenly."
Talk about an upbeat assessment!
From my point of view, reading words like these in a daily newspaper was like being handed an engraved invitation to a party. If Sir John Templeton said it once, he said it a thousand times: The right time to buy is always at the point of maximum despondency. Well, here we were at rock bottom, and I was drawing up a shopping list the length of my arm.
Having just left Thailand (which I would have nominated as the global capital of pessimism), it now seemed as if Brazil might be a contender for that dubious distinction.
Among the many reasons being proffered in the press to back up the purported link between Brazil (and Latin America in general) and the problems besetting Asia was that South Korean banks, when the going was good, had bought scads of high-yield Brazilian Brady bonds and would now be forced to dump them on the open market, at fire sale prices, in a desperate bid to raise cash. (Brady bonds, incidentally, were an innovation devised by U.S. Treasury Secretary Nicholas Brady of the Bush administration, by which the U.S. government backed emerging market debt, permitting such countries to borrow on international markets at lower rates.)
Though the Brazilian-Korean bondfire sounded fine on paper, personally I wasn't buying it. For one thing, the amount of money involved, when compared to the size of Brazil's economy, was inconsequential. Another supposed connection between Brazil and troubled South Korea was that South Korean producers were going to start flooding the world with cut-rate cement. Although not entirely off the mark, the adverse effect on a single industry was hardly sufficient to justify the downgrade of a whole country, much less a full-scale market stampede.
It's important to realize that when a panic starts, it takes only the slightest hair-trigger to turn a run into a rout. It can be cement, Brady bonds, or bubblegum—it doesn't matter. The reason for turning tail could even be a missing millionaire. On our second day in Brazil, a completely unfounded rumor that a major Mexican industrialist had disappeared—presumed kidnapped, or even killed—sent shock waves through the region's capital markets. Just as inanely, they promptly bounced back once it was disclosed that the rich gent in question was alive and kicking.
Korean-owned Brazilian bonds, cut-rate Asian cement, and allegedly abducted industrialists all provided jumpy investors with the lazy-headed excuse they were looking for to start dumping Brazil big-time. When things are looking lousy from any number of angles, the collective unconscious always starts snooping around, looking for evidence—no matter how far-fetched—to bolster its depressed emotional state. And that's just the time, as a savvy investor, that you should start taking a serious look at the country.
If the whole world is down on a country for exaggerated, short-term reasons, think of shifting it from a hold to a buy.
Note: You can take advantage of rumors like the missing industrialist by snapping up stock during the moment's downturn, and riding the shares back up north when the smoke clears. But this sort of market timing is a risky, hair-raising strategy—not a move for the novice player or an investor most interested in maintaining a long-term investment horizon.
Don't get me wrong. There were more than a few superficial similarities between Asia and Latin America. Like South Korea and like Thailand, Brazil's once-red-hot export growth had been slowing to a crawl all through the fall, a slowdown that had spread to other Latin American economies. The country's current account balance was starting to look like the credit card bill of a confirmed shopaholic. But in contrast to Thailand, where the government had proved ludicrously ineffective in staving off the alleged forex traders' assaults, Brazil's popular president, Fernando Henrique Cardoso, was determined not to let that shadowy crowd get the best of him. If they wanted to fight, he was more than willing. In fact, for better or for worse—personally, I felt for worse—he was willing to defend his currency to the political death, if need be.
Facing Reality
Unfortunately for his citizens, who would soon start feeling the heat, President Cardoso had an election coming up. So he had to act, and act fast. The 66-year-old president knew better than anyone that if the Brazilian Real—which he had personally created—were to start heading south, he would be hitting the South Polar ice cap right along with it. In short, the man's credibility was on the line.
The only problem I had with Cardoso's tough-it-out strategy was that there are times when a devalued currency can kick a failing economy back into high gear, by jump-starting exports. Preventing a currency devaluation can have as much to do with salvaging national pride as with hard-core economic reality. When, in January 1999—shortly after his reelection—he was forced to let the Real float freely, the Brazilian market surprisingly soared. Why? Because the markets were finally forcing the Real to face reality.
Field Note: Brazil
February 2011
Brazil's economy was doing very well. After the contraction in 2009, there was a dramatic recovery in 2010, resulting in a growth rate of 7.5%. Moreover, both inflation and unemployment were at half the high levels experienced in 2003. Foreign reserves stood at over US$300 billion, up from only US$50 billion in 2006.
One of the big worries was the strengthening exchange rate of Brazil's Real, which had moved from the 2002 low of close to R$4 to US$1, to the current rate of R$1.7 to US$1. Nevertheless, exports were booming on the back of a growing global demand for Brazil's iron ore and agricultural goods. The stock market had also recovered from its recent low in November 2008, returning more than 300% in U.S. Dollar terms as of the end of February 2011.
During our trip to Brazil, we gained insights from a number of companies.
Transportation: The upcoming Olympics and World Cup were expected to benefit companies in the transportation industry. At a leading bus manufacturer, revenues were up almost 50% in 2010, while margins had also improved significantly. In addition to holding more than 40% of the domestic market share, the company had a significant international presence with exports to more than 60 countries.
Agriculture: Brazil's favorable climate and soil as well as relatively cheap labor bode well for agricultural companies in the country. Furthermore, high soft commodity prices had been supporting earnings in this industry, and I expected this trend to continue. One of the companies I visited reported good earnings on cotton as a result of the high global prices. The three most important variables for these businesses were the cost of raw materials (fertilizers and chemicals), selling prices, and the exchange rate of the Real to the U.S. Dollar.
Natural Resources: Another beneficiary of high commodity prices, natural resources was another sector in which I was interested. Management at a steel company mentioned that although steel continued to suffer margin contraction, the company had been able to deliver above-average results because of its iron ore mines as well as its limestone and dolomite resources in the mineral-rich state of Minas Gerais. This well-diversified company also engaged in cement production since it had dolomite, limestone, and slag from steel operations.
These visits and others I made in Brazil indicated a favorable business environment with continued growth.
Chapter Eighteen
The World Belongs to Optimists
Golden Investment Attributes and Rules
In emerging markets investment, I believe it is necessary to be optimistic. The fact remains that there have always been problems and there will continue to be so in the coming years throughout the world. But we are entering an era that is perhaps unparalleled in the history of mankind. With higher income and living standards, better communications and technology, improved travel, greater international trade, and generally better relations between nations, emerging markets investors have the perfect opportunity to capitalize on the benefits.
Studies have shown that stock market investments made in a patient and consistent manner will invariably grow, since there is a natural tendency for the value of equity investments to rise in order to keep up with inflation. In addition, independently managed businesses, competing successfully in the marketplace, are generally winners in the stock markets as there is a tendency for their sales, profits, and assets to expand. However, it is not always possible to predict whether a company is going to be successful or unsuccessful, so it is necessary to diversify.
"The world belongs to optimists. The pessimists are only spectators."
—François Guizot
While one can certainly learn numerous technical skills that help in making investments or managing a portfolio, a large percentage of investing is still psychological. Both buyers and sellers act on a combination of instinct, information, and logic. The development of certain personal characteristics could play a key role in contributing to your investment success.
Here are a few key personal attributes that can make for good investment results. These include: discipline, hard work, humility, common sense, creativity, independence, and flexibility.
Work Hard and Be Disciplined
Someone asked me once if I could condense into five words the most important qualities needed for a good investor, and I replied: "Motivation, humility, hard work, discipline." It stands to reason that the more time and effort that are put into researching investments, the more knowledge that will be gained and wiser decisions made.
Be Humble
Humility is needed so you are able and willing to ask questions. If you think you know all the answers, you probably don't even know the questions. As Sir John Templeton once said, "If we become increasingly humble about how little we know, we may be more eager to search."
Show Some Common Sense
To me, common sense is most important when making investment decisions, since the words common sense imply the clarity and simplification required to successfully integrate all the complex information with which investors are faced.
Get Creative
I think a significant amount of creativity is required for successful investing, since it is necessary to use a multifaceted approach in looking at investments, considering all the variables that could negatively or positively affect an investment. Also, creative thinking is required to look forward to the future and try to forecast the outcome of current business plans.
Be Independent
A number of successful investors have commented on the importance of independent and individual decision making. Sir John Templeton said, "If you buy the same securities as other people, you'll get the same results as other people." It is impossible to produce superior results unless one does something different from the majority.
Remain Flexible
It is important for investors to be flexible and not permanently adopt a particular type of asset. I think the best approach is to migrate from the popular to the unpopular securities or sectors. Flexibility is also an attribute that keeps one from holding on to a stock out of loyalty—flexibility allows one to change as times change and as new opportunities present themselves.
Investment Tools
So that's the personal preparation that goes into investing. On the professional side, it can't be stressed enough that in addition to meeting company managements and their competitors, it is important to be a voracious reader. A wide variety of reading contributes enormously to an investor's ability to make insightful decisions. Reading is like your body's muscles; use it or lose it. If you don't exercise regularly you will lose muscle tone and bone strength. If you don't read you will not be able to absorb new information and techniques.
Of course, there are also investment attitudes that will benefit your investment results. I'm going to use this space to list what I consider to be some of the most important investment rules I've picked up over the years.
Always Diversify Your Investments
Diversification is your best strategy to guard against unexpected events such as earthquakes, political upheaval, floods, investor panic, and the like—not only within a particular market, but also across markets globally. You never want to be overly dependent on the fate of any one stock or security, particularly if you don't have control over a company's management or events. Some successful investors with a limited number of holdings believe in the Mark Twain school of thought: "Put all your eggs in one basket—and watch that basket!" But these investors often have some influence on companies and management. Most investors are not able to do that, and if you fall into the latter category, you will always be better off diversifying across countries and companies. Global investing is always superior to investing in only your home market or one market. If you search worldwide, you will find more bargains and better bargains than by studying only one nation.
Don't Run from Risk
Without risk, I believe it is difficult for your portfolio to achieve superior investment returns. But that risk taking is not the same as playing roulette or skydiving. The assumption of risk I'm talking about must be carefully planned and researched. Investment decisions must always be made on insufficient information. There is never enough time to learn all there is to know about an investment, as equity investments are undergoing continuous change. There comes a time when a decision must be made and a risk assumed. The ability to take just the right amount of risk based on the most diligently researched available information, in my mind, is the mark of a good investor.
Take a Long-Term View
Take a long-term view even if you desire short-term rewards. If you take a long-term view of the world and the markets, you are likely to (1) be less emotional and thus less likely to make costly mistakes; (2) see beyond the short-term volatility of the market; and (3) take a step back to see broader patterns of market, political, and economic behavior that may not be evident to a short-term observer. By looking at the long-term growth and prospects of companies and countries, particularly those stocks that are out of favor or unpopular, you will have a much greater chance of obtaining superior returns.
Make Volatility Your Friend
Markets are volatile, like a combustible material. You can warm up gasoline only to a certain point, after which it ignites and explodes. But these market explosions give us an opportunity to buy low and sell high, as long as you have been wearing protective gear (such as diversification). The extreme sensitivity of markets to any news, what I call their "manic-depressive" nature, means that they often rise and fall by much more than they should. Remember that the time of maximum pessimism is the best time to buy and the point of maximum optimism is the best time to sell. If you can look beyond the emotional roller coaster of the volatility and use it to your advantage, you might be able to do rather well on your investments.
Okay, you win some, you lose some—you can't avoid it. It's not only the name of the game, but the only game in town for emerging markets investors. However, some losses are not only avoidable but meaningful, because you learn lessons from them.
With just about every major loss we've incurred, we've picked up a few pointers. I try to avoid repeating the same mistakes again and again. Because, like most people, I hate being wrong too often, and I know that fund managers are only as good as their last performance.
We've had our fair share of losses, believe me. But at the end of the day, we must be ready to make mistakes; otherwise we would never learn. You might even say that hitting a few foul balls goes with the territory.
Even today, when the worst wounds have healed, the mere mention of some losers fills me with dread and regret, but, curiously, not anger—because any hard feelings are softened by the fact that we invariably learn from the experience.
As someone who's been punched by the markets more than once, I have some advice: Roll with the punches. If you're going to take real risks, you can't always count on rewards.
If you follow the contrarian path, you're going to find yourself buying stock in distressed companies. You can't avoid it, because the vast majority of the bargains out there are in shares of organizations that have made a few mistakes along the way. That's why these stocks are so cheap. That's why a whole lot of smart people think they're headed nowhere in a hurry.
Our job is to prove them wrong. So how can we be so arrogant? Because we have the luxury, and the opportunity, of shifting our focus to a five-year time horizon. We can look ahead to a point out there in the distance where the long-term outlook may be more positive than the outlook in the short term. We also have the advantage of stacking the company up against other, similar companies elsewhere, often faced with similar challenges. Sometimes we can see an opportunity and some light where others see only darkness. And, of course, sometimes we're wrong.
We like finding good companies, and good countries, that have fallen on hard times, but only—we hope—temporarily; companies, and countries, poised to stage an unlikely comeback; companies, and countries, fighting an uphill battle to thrive, even to survive—because only when the odds are against you are you going to find any real bargains.
Those are your big winners. More often than not, the line separating the winners from the losers can be embarrassingly thin.
It's also important to recognize that not only is nobody perfect, but mistakes are an integral part of investing. You're never going to always pick winners, and picking losers—in my opinion—should in no way reflect any lack of care or negligence in the selection of investments for a portfolio under management.
These marvelous markets, as they continue to evolve and develop, may be volatile at times. But I sincerely hope that after reading this book, you'll no longer view volatility only in negative terms. Volatility can be a good thing for investors; be prepared to benefit from it. Market ups and downs, even the most violent ones, provide incentives for people to adapt to new environments and to shifting realities. Free markets can be harsh taskmasters, but to paraphrase what Winston Churchill once said of democracy, the free market may be a lousy system, but it just happens to be the best one we've got.
Acknowledgments
If I were to attempt to acknowledge all the wonderful people who helped formulate the ideas that went into this book, it would take many pages. Suffice it to say that I have learned a great deal from the thousands of people working in and studying emerging markets since I began this adventure in the 1970s even before I started managing the Templeton Emerging Markets Fund. Over the years, experts within the Franklin Resources organization have helped the Templeton Emerging Markets Group grow enormously in assets under management, and instead of five countries in which to invest in 1987 with only US$100 million, we now cover more than 60 countries with over US$50 billion invested.
Special thanks go to Shalini Dadlani for her excellent research and editing as well as the team at John Wiley & Sons who have helped bring this project to fruition. Of course I must take sole responsibility for any errors and omissions.
About the Author
Dr. Mark Mobius, Executive Chairman of Templeton Emerging Markets Group, joined Templeton in 1987 as managing director of its Far East Division in Hong Kong, with responsibility for supporting the Templeton Group's research expertise in emerging market countries. He directs the analysis for 17 global locations including Hong Kong, China, Singapore, Vietnam, India, South Korea, Malaysia, Thailand, United Arab Emirates, South Africa, Argentina, Brazil, Austria, Romania, Turkey, Poland, and Russia with a total of over US$50 billion under his supervision. Dr. Mobius has spent more than 40 years working in emerging markets and has extensive experience in economic research and analysis.
He has been named one of the "50 Most Influential People" by Bloomberg Markets magazine, "Top 100 Most Powerful and Influential People" by Asiamoney magazine, "Emerging Markets Equity Manager" by International Money Marketing, "Ten Top Money Managers of the 20th Century" by the Carson Group, "Investment Manager of the Year" by the Sunday Telegraph, and "Closed-End Fund Manager of the Year" by Morningstar. He holds a Bachelor's and Master's degree from Boston University and received his Ph.D. in Economics and Political Science from Massachusetts Institute of Technology.
Dr. Mobius spends over 300 out of every 365 days traveling around the world in search of the world's best bargains.
Q: How to create horizontal indeterminate progress drawable with round corners for progressbar in android? I want to create horizontal indeterminate progress drawable with round corners for progress bar in android. How to create that using xml?
Edit 1: I am using following drawable:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@android:id/background">
<shape>
<solid android:color="?attr/dividerAndProgressColor" />
<corners android:radius="20dp" />
</shape>
</item>
<item android:id="@android:id/secondaryProgress">
<clip>
<shape>
<solid android:color="?attr/secondaryText" />
<corners android:radius="20dp" />
</shape>
</clip>
</item>
<item android:id="@android:id/progress">
<clip>
<shape>
<solid android:color="?attr/secondaryText" />
<corners android:radius="20dp" />
</shape>
</clip>
</item>
</layer-list>
android:indeterminateDrawable="@drawable/progress_drawable_start_end_indeterminate"
But there is one problem, animation starts from 0 reaches 100 and then restarts. This is not desired in case of indeterminate progressbar.
A: Round corners
Have a look at this: https://stackoverflow.com/a/42646939/12709358
<ProgressBar
android:id="@+id/progressbar_horizontal"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:indeterminateOnly="true"
android:indeterminateDrawable="@drawable/progressbar_indeterminate_horizontal"
android:progressDrawable="@drawable/progressbar_horizontal"
android:minHeight="24dip"
android:maxHeight="24dip"
/>
Also see this: https://stackoverflow.com/a/63463786/12709358
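A simpler alternative, if you can depend on the Material Components library (version 1.3.0 or later, as far as I know), is its LinearProgressIndicator, which supports rounded corners out of the box and keeps a proper indeterminate animation. The radius/thickness values below are only examples, and the `app` namespace must be declared on your layout root:

```xml
<!-- Requires xmlns:app="http://schemas.android.com/apk/res-auto" on the root element -->
<com.google.android.material.progressindicator.LinearProgressIndicator
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:indeterminate="true"
    app:trackThickness="24dp"
    app:trackCornerRadius="12dp" />
```

This avoids the restart-from-zero artifact you describe, because the rounded shape is drawn by the component itself rather than by clipping a progress drawable.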
\section{Introduction}\label{sec0}
\par
Continuous, discrete and semi-continuous convolutions appear
naturally when searching for estimates between short-time Fourier
transforms with
different window functions. By straight-forward application
of Fourier's inversion formula, the short-time Fourier
transform $V_\phi f$ of the function or (ultra-)distribution
$f$ with window function $\phi$ is linked to $V_{\phi _0}f$
by
\begin{equation}\label{Eq:STFTConvRel}
|V_\phi f| \lesssim |V_\phi \phi _0|*|V_{\phi _0}f|
\end{equation}
(cf. e.{\,}g. \cite[Chapter 11]{Gc2}). Here $*$ denotes the
usual (continuous) convolution and it is assumed that the
window functions $\phi$ and $\phi _0$ are fixed and belong
to suitable classes (see \cite{Ho1,Gc2} and
Section \ref{sec1} for notations).
\par
Modulation spaces appear by imposing norm or quasi-norm
estimates on
the short-time Fourier transforms of (ultra-)distributions
in Fourier-invariant spaces. In most situations these
(quasi-)norms are mixed norms of (weighted) Lebesgue types.
More precisely, let $\mathscr B$ be a mixed quasi-Banach space
of Lebesgue type with functions defined on the phase space,
and let $\omega$ be a moderate weight. Then the modulation
space $M(\omega ,\mathscr B)$ consists of all ultra-distributions
$f$ such that
\begin{equation}\label{Eq:ModNorm}
\nm f{M(\omega ,\mathscr B)}\equiv \nm {V_\phi f \cdot \omega}{\mathscr B}
\end{equation}
is finite.
\par
If $\mathscr B$ is a Banach space of mixed Lebesgue type, then the
inequality \eqref{Eq:STFTConvRel} can be used to deduce:
\begin{enumerate}
\item that $M(\omega ,\mathscr B )$ is independent of the choice of
window function $\phi$ in \eqref{Eq:ModNorm}, and that different
$\phi$ give rise to equivalent norms.
\vspace{0.1cm}
\item that $M(\omega ,\mathscr B)$ increases with the Lebesgue
exponents.
\vspace{0.1cm}
\item that $M(\omega ,\mathscr B)$ is complete.
\end{enumerate}
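\par
For instance, a sketch of (1), under the standard assumptions that
$\omega$ is $v$-moderate and that $\mathscr B$ is translation invariant
with $\nm {F*G}{\mathscr B}\le \nm F{L^1}\nm G{\mathscr B}$: combining
\eqref{Eq:STFTConvRel} with the moderateness of $\omega$ gives the
pointwise bound
$|V_\phi f|\, \omega \lesssim (|V_\phi \phi _0|\, v)*(|V_{\phi _0}f|\, \omega )$,
and hence
$$
\nm {V_\phi f\cdot \omega }{\mathscr B}
\lesssim
\nm {V_\phi \phi _0\cdot v}{L^1}
\nm {V_{\phi _0}f\cdot \omega }{\mathscr B}.
$$
Interchanging the roles of $\phi$ and $\phi _0$ gives the reverse
estimate, so that different windows in \eqref{Eq:ModNorm} give
equivalent norms.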
\par
Essential parts of these basic properties for modulation spaces
were established in the pioneering paper \cite{Fe2}, but some tracks
goes back to \cite{Fe1}. The theory has thereafter been developed
in different ways, see e.{\,}g. \cite{Fe3,FG1,FG2,Gc2}.
\par
A more complicated situation appears when some of the Lebesgue parameters
for $\mathscr B$ above are strictly smaller than one, since $\mathscr B$
is then merely a quasi-Banach space and not a Banach space, because
only a weaker form of the triangle inequality holds true. In this
situation, $\mathscr B$ even fails to be a locally convex topological vector
space, and the analysis based on \eqref{Eq:STFTConvRel} seems unable to
reach (1)--(3) above in their full strength. (Some partial properties
can be achieved if, for example, the Fourier transforms of $\phi$ and
$\phi _0$ are required to be compactly supported, see
e.{\,}g. \cite{WaHu}.)
\par
In \cite{GaSa}, a more discrete approach is used to handle this
situation, where a Gabor expansion of $\phi$ with $\phi _0$ as
Gabor window leads to the estimate
\begin{equation}\label{Eq:STFTConvRe2}
|V_\phi f| \lesssim a*_{[E]}|V_{\phi _0}f|,
\end{equation}
for some non-negative sequence $a$ with enough rapid decay
towards zero at infinity.
Here $*_{[E]}$ denotes the semi-continuous convolution
$$
a*_{[E]}F \equiv \sum _{j\in \Lambda _E}F(\, \cdot \, -j)a(j)
$$
with respect to the basis $E$, between functions
$F$ and sequences $a$, and $\Lambda _E$ is the lattice spanned by
$E$. It follows that $*_{[E]}$ is similar to discrete convolutions.
\par
For the discrete convolution $*$ both the classical
Young's inequality
\begin{alignat}{3}
\nm {a*b}{\ell ^{p_0}} &\le \nm a{\ell ^{p_1}}\nm b{\ell ^{p_2}},&
\quad
\frac 1{p_1}+\frac 1{p_2} &= 1+\frac 1{p_0},&\ p_j&\in [1,\infty ],
\intertext{as well as}
\nm {a*b}{\ell ^p} &\le \nm a{\ell ^p}\nm b{\ell ^r},&
\quad r&\le \min (1,p),& \ \ p,r&\in (0,\infty ],
\end{alignat}
hold true. It is proved in \cite{GaSa}, and extended in \cite{To15},
that similar facts hold true
for semi-continuous convolutions. In the end
the following restatement of \cite[Proposition 2.1]{To15} is deduced.
The result also extends \cite[Lemma 2.6]{GaSa}.
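\par
As a quick numerical sanity check of the first of these inequalities (our own illustration, not part of the argument), one may take finitely supported sequences with $p_1=1$, $p_2=2$, and hence $p_0=2$; for the discrete convolution, Young's inequality in fact holds with constant one.

```python
def lp_norm(seq, p):
    # l^p norm of a finitely supported sequence given as a dict.
    return sum(abs(v) ** p for v in seq.values()) ** (1.0 / p)

def conv(a, b):
    # Discrete convolution (a * b)(n) = sum_k a(k) b(n - k).
    out = {}
    for j, aj in a.items():
        for k, bk in b.items():
            out[j + k] = out.get(j + k, 0.0) + aj * bk
    return out

# 1/p1 + 1/p2 = 1 + 1/p0 with p1 = 1, p2 = 2, p0 = 2.
a = {0: 1.0, 1: -2.0}
b = {0: 3.0, 2: 1.0}
lhs = lp_norm(conv(a, b), 2)          # ||a*b||_{l^2}
rhs = lp_norm(a, 1) * lp_norm(b, 2)   # ||a||_{l^1} ||b||_{l^2}
```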
\par
\begin{thm}\label{Thm:SemiContConvEst0}
Let $E$ be an ordered basis
of $\rr d$,
$\omega ,v\in \mathscr P _E(\rr d)$ be
such that $\omega$ is $v$-moderate, and let
${\boldsymbol p} ,\boldsymbol r \in (0,\infty ]^{d}$ be such that
$$
r_k\le \min _{m\le k}(1,p_m).
$$
Also let
$f$ be measurable.
Then the map $(a,f)\mapsto a*_{[E]}f$ from $\ell _0(\Lambda _E)\times
\Sigma _1(\rr d)$ to
$L^{{\boldsymbol p}}_{E,(\omega )}(\rr d)$ extends uniquely to a
linear and continuous map from $\ell ^{\boldsymbol r}_{E ,(v)}(\Lambda _E)
\times L^{{\boldsymbol p}}_{E,(\omega )}(\rr d)$ to
$L^{{\boldsymbol p}}_{E,(\omega )}(\rr d)$, and
\begin{equation}\label{convest0}
\nm {a*_{[E]}f}{L^{{\boldsymbol p} }_{E,(\omega )}}\lesssim
\nm {a}{\ell ^{\boldsymbol r}_{E ,(v)}}
\nm {f}{L^{{\boldsymbol p} }_{E,(\omega )}}.
\end{equation}
\end{thm}
\par
In \cite{GaSa}, \eqref{Eq:STFTConvRe2} in combination with
\cite[??]{GaSa} is used to show that (1)--(3) still hold
when $\mathscr B =L^{p,q}$ and $\omega$ is a moderate weight of
polynomial type. In \cite{To15}, \eqref{Eq:STFTConvRe2} in
combination with Theorem \ref{Thm:SemiContConvEst0} are used
to show (1)--(3) for an even broader class of mixed
Lebesgue spaces $\mathscr B$ and weight functions $\omega$.
\par
The aim of the paper is to extend Theorem
\ref{Thm:SemiContConvEst0} so that $f$ is allowed to be periodic
in some directions (variables), or to satisfy a weaker form of
periodicity, leading to the class of \emph{echo-periodic functions}. Such functions
appear for example when applying the short-time Fourier transform on
periodic or quasi-periodic functions. In fact, if
$f$ is $E$-periodic, then $x\mapsto |V_\phi f(x,\xi )|$
is $E$-periodic for every $\xi$. A function or distribution
$F(x,\xi )$ is called quasi-periodic of order $\rho >0$, if
\begin{equation*}
\begin{alignedat}{2}
F(x+\rho k,\xi ) &= e^{2\pi i\rho \scal k\xi} F(x,\xi ),& \quad k &\in \zz d,
\\[1ex]
F(x,\xi +\kappa /\rho ) &= F(x,\xi ),& \quad \kappa &\in \zz d.
\end{alignedat}
\end{equation*}
By straightforward computations it then follows that
\begin{alignat}{2}
|(V_\Phi F)(x+\rho k,\xi ,\eta ,y)| &=
|(V_\Phi F)(x,\xi ,\eta ,y-2\pi k)|,
& \quad k &\in \zz d,\label{Eq:PerTransfer}
\\[1ex]
|(V_\Phi F)(x,\xi +\kappa /\rho ,\eta ,y)| &=
|(V_\Phi F)(x,\xi ,\eta ,y)|,
&\quad \kappa &\in \zz d,\notag
\end{alignat}
for such $F$.
\par
It is expected that
the achieved extensions will be useful when performing
local investigations of short-time Fourier transforms
of periodic and quasi-periodic functions, e.{\,}g. in
\cite{To18}.
\par
\section{Preliminaries}\label{sec1}
\par
In this section we recall some basic facts and introduce
some notation. In the first part we recall the notion
of weight functions. Thereafter we discuss mixed quasi-norm
spaces of Lebesgue types. Finally we consider periodic
functions and distributions, and introduce the notion of
echo-periodic functions, a weaker form of periodicity
which at the same time covers the notion of
quasi-periodicity.
\par
\subsection{Weight functions}\label{subsec1.1}
\par
A \emph{weight} on $\rr d$ is a positive function $\omega
\in L^\infty _{loc}(\rr d)$ such that $1/\omega \in L^\infty _{loc}(\rr d)$.
A usual condition on $\omega$ is that it should be \emph{moderate},
or \emph{$v$-moderate} for some positive function $v \in
L^\infty _{loc}(\rr d)$. This means that
\begin{equation}\label{moderate}
\omega (x+y) \lesssim \omega (x)v(y),\qquad x,y\in \rr d.
\end{equation}
We note that \eqref{moderate} implies that $\omega$ fulfills
the estimates
\begin{equation}\label{moderateconseq}
v(-x)^{-1}\lesssim \omega (x)\lesssim v(x),\quad x\in \rr d.
\end{equation}
We let $\mathscr P _E(\rr d)$ be the set of all moderate weights on $\rr d$.
\par
It can be proved that if $\omega \in \mathscr P _E(\rr d)$, then
$\omega$ is $v$-moderate for some $v(x) = e^{r|x|}$, provided the
positive constant $r$ is large enough (cf. \cite{Gc2.5}). In particular,
\eqref{moderateconseq} shows that for any $\omega \in \mathscr P
_E(\rr d)$, there is a constant $r>0$ such that
$$
e^{-r|x|}\lesssim \omega (x)\lesssim e^{r|x|},\quad x\in \rr d.
$$
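\par
For example (a standard illustration, not specific to the present setting), the weights of polynomial type
$$
\omega _s(x) = (1+|x|)^s,\qquad s\in \mathbf R ,
$$
belong to $\mathscr P _E(\rr d)$ and are $v$-moderate with $v(x)=(1+|x|)^{|s|}$. In fact, the elementary inequality $1+|x+y|\le (1+|x|)(1+|y|)$ gives \eqref{moderate} directly when $s\ge 0$, and when $s<0$ one instead applies $1+|x|\le (1+|x+y|)(1+|y|)$ to the reciprocals.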
\par
We say that $v$ is
\emph{submultiplicative} if $v$ is even and \eqref{moderate}
holds with $\omega =v$. In the sequel, $v$ and $v_j$ for
$j\ge 0$, always stand for submultiplicative weights if
nothing else is stated.
\par
\subsection{Mixed quasi-norm spaces of
Lebesgue types}\label{subsec1.2}
Our discussions on periodicity are done in terms of suitable bases.
\par
\begin{defn}\label{Def:OrdBasis}
Let $E$ be an (ordered) basis $e_1,\dots,e_d$ of
$\rr {d}$. Then
\begin{align*}
\Lambda _E
&=
\sets{n_1e_1+\cdots+n_de_d}{(n_1,\dots,n_d)\in \zz d}
\end{align*}
is the corresponding lattice.
\end{defn}
\par
Evidently, if $E$ is as in Definition \ref{Def:OrdBasis},
then there is a unique matrix $T_E$ which maps the standard basis
of $\rr d$ onto $E$. The dual basis $E'$ of $E$ is then the image
of the standard basis under the map $T_{E'}= 2\pi(T^{-1}_E)^t$.
\par
\begin{defn}\label{Def:DiscLebSpaces}
Let $E$ be a basis of $\rr d$, $\kappa (E)$ be the
parallelepiped spanned by $E$, $\omega \in \mathscr P _E(\rr d)$,
${\boldsymbol p} =(p_1,\dots ,p_d)\in (0,\infty ]^{d}$ and $r=\min (1,{\boldsymbol p} )$.
If $f\in L^r_{loc}(\rr d)$, then
$$
\nm f{L^{{\boldsymbol p} }_{E,(\omega )}}\equiv
\nm {g_{d-1}}{L^{p_{d}}(\mathbf R)}
$$
where $g_k(\boldsymbol z _k)$, $\boldsymbol z _k\in \rr {d-k}$,
$k=0,\dots ,d-1$, are inductively defined as
\begin{align*}
g_0(x_1,\dots ,x_{d})
&\equiv
|f(x_1e_1+\cdots +x_{d}e_d)\omega (x_1e_1+\cdots +x_{d}e_d)|,
\\[1ex]
\intertext{and}
g_k(\boldsymbol z _k) &\equiv
\nm {g_{k-1}(\, \cdot \, ,\boldsymbol z _k)}{L^{p_k}(\mathbf R)},
\quad k=1,\dots ,d-1.
\end{align*}
\begin{enumerate}
\item If $\Omega \subseteq \rr d$ is measurable,
then $L^{{\boldsymbol p} }_{E,(\omega )}(\Omega )$ consists
of all $f\in L^r_{loc}(\Omega )$ with finite quasi-norm
$$
\nm f{L^{{\boldsymbol p}}_{E,(\omega )}(\Omega )}
\equiv
\nm {f_\Omega }{L^{{\boldsymbol p}}_{E,(\omega )}(\rr d)},
\qquad
f_\Omega (x)
\equiv
\begin{cases}
f(x), &\text{when}\ x\in \Omega
\\[1ex]
0, &\text{when}\ x\notin \Omega .
\end{cases}
$$
The space $L^{{\boldsymbol p} }_{E,(\omega )}(\Omega )$ is called
\emph{$E$-split Lebesgue space (with respect to $\omega$, ${\boldsymbol p}$,
$\Omega$ and $E$)};
\vspace{0.1cm}
\item If $\Lambda \subseteq \rr d$ is a lattice such that
$\Lambda _E\subseteq \Lambda$, then the quasi-Banach space
$\ell ^{{\boldsymbol p} } _{E ,(\omega )}(\Lambda )$ consists of all
$a\in \ell _0'(\Lambda )$ such that
$$
\nm a{\ell ^{{\boldsymbol p} }_{E,(\omega )}(\Lambda )}
\equiv
\Nm {\sum _{j\in \Lambda}a(j)\chi _{j+\kappa (E)}}
{L^{{\boldsymbol p} }_{E,(\omega )}(\rr d)}
$$
is finite. The space $\ell ^{{\boldsymbol p} }_{E,(\omega )} \equiv \ell ^{{\boldsymbol p} }
_{E,(\omega )}(\Lambda _E)$ is called the
\emph{discrete version of $L^{{\boldsymbol p} }_{E,(\omega )}(\rr d)$}.
\end{enumerate}
\end{defn}
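\par
To make the iterated structure of the quasi-norms in Definition \ref{Def:DiscLebSpaces} concrete, the following sketch computes the discrete mixed norm for $d=2$, under the simplifying assumptions (our own, not restrictions in the definition) that $E$ is the standard basis, $\omega =1$ and the exponents are finite; it also illustrates that the order of the exponents matters.

```python
def mixed_lp_norm(A, p):
    # Mixed l^(p1, p2) norm of a 2D array A[j2][j1]: first the l^{p1}
    # norm in the inner index j1, then the l^{p2} norm over j2.
    # Finite exponents only, for simplicity.
    p1, p2 = p
    inner = [sum(abs(x) ** p1 for x in row) ** (1.0 / p1) for row in A]
    return sum(g ** p2 for g in inner) ** (1.0 / p2)

A = [[1.0, -1.0], [2.0, 0.0]]   # illustrative sequence on a 2x2 block of Z^2
n21 = mixed_lp_norm(A, (2, 1))  # l^2 in j1, then l^1 in j2
n12 = mixed_lp_norm(A, (1, 2))  # l^1 in j1, then l^2 in j2
```

Note that $n_{21}\ne n_{12}$ in general, reflecting that the exponents in $\ell ^{{\boldsymbol p}}_E$ are tied to the order of the basis vectors.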
\par
Evidently, $L^{{\boldsymbol p}}_{E,(\omega )} (\Omega )$ and
$\ell ^{{\boldsymbol p}}_{E,(\omega )} (\Lambda )$
in Definition \ref{Def:DiscLebSpaces} are quasi-Banach spaces of order
$\min ({\boldsymbol p} ,1)$. We set
$$
L^{{\boldsymbol p}}_{E} = L^{{\boldsymbol p}}_{E,(\omega )}
\quad \text{and}\quad
\ell ^{{\boldsymbol p}}_{E} = \ell ^{{\boldsymbol p}}_{E,(\omega )}
$$
when $\omega =1$, and if ${\boldsymbol p} = (p,\dots ,p)$ for some
$p\in (0,\infty ]$, then
\begin{alignat*}{5}
L^{p}_{E,(\omega )} &= L^{{\boldsymbol p}}_{E,(\omega )},
&\quad
L^{p}_{E} &= L^{{\boldsymbol p}}_{E},
&\quad
\ell ^{p}_{E,(\omega )} &= \ell ^{{\boldsymbol p}}_{E,(\omega )}
&\quad &\text{and} &\quad
\ell ^{p}_{E} &= \ell ^{{\boldsymbol p}}_{E}
\intertext{agree with}
&L^p_{(\omega )}, &\qquad
&L^p, &\qquad
&\ell ^{p}_{(\omega )}
&\quad &\text{and}&\quad
&\ell ^{p},
\end{alignat*}
respectively, with equivalent quasi-norms.
\par
\subsection{Periodic and echo-periodic functions}
\par
We recall that if $E =\{ e_1,\dots ,e_d\}$ is an ordered
basis of $\rr d$, then
the function or distribution $f$ on $\rr d$ is called $E$-periodic, if
$f(\, \cdot \, +v)=f$ for every $v\in E$. More generally, if $E_0\subseteq E$,
then $f$ above is called $E_0$-periodic, if
$f(\, \cdot \, +v)=f$ for every $v\in E_0$. We shall consider functions that
satisfy weaker periodicity-like conditions, which appear when dealing with
e.{\,}g. quasi-periodic functions and their short-time Fourier transforms.
\par
\begin{defn}\label{Def:PerEcho}
Let $E =\{ e_1,\dots ,e_d\}$ be an ordered basis of $\rr d$,
$E_0\subseteq E$
and let $f$ be a (complex-valued) function on $\rr d$. For every
$k\in \{ 1,\dots ,d\}$, let
$M_{k}$ be the set of all $l\in \{ 1,\dots ,k\}$ such that
$e_l\in E\setminus E_0$.
Then $f$ is called an \emph{echo-periodic function with
respect to $E_0$}, if for every
$e_k\in E_0$, there is a
vector
$$
v_k = \sum _{l\in M_k} v_{k,l}e_l
$$
such that
\begin{equation}\label{Eq:PerEchoDef}
|f(\, \cdot \, +e_k)|=|f(\, \cdot \, +v_k)|.
\end{equation}
\end{defn}
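\par
A simple example (our own illustration) is obtained for $d=2$ by letting $E$ be the standard basis, $E_0=\{ e_2\}$ and $f(x_1,x_2)=h(x_1-\rho x_2)$ for some measurable $h$ on $\mathbf R$ and fixed $\rho \in \mathbf R$. Then $M_2=\{ 1\}$, and
$$
f(x+e_2) = h(x_1-\rho x_2-\rho ) = f(x+v_2),\qquad v_2=-\rho e_1,
$$
shows that $f$ is echo-periodic with respect to $E_0$: a unit shift in the $e_2$ direction is traded for a shift in the $e_1$ direction, in analogy with \eqref{Eq:PerTransfer}.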
\par
We notice that relations of the form \eqref{Eq:PerEchoDef}
appear in \eqref{Eq:PerTransfer}.
\par
\begin{rem}\label{Rem:PerEcho}
Let $E$, $E_0$ and $M_k$ be the same as in Definition \ref{Def:PerEcho},
and let $f$ be a (complex-valued) function on $\rr d$
such that \eqref{Eq:PerEchoDef} holds true. Also let
\begin{alignat*}{1}
J_k
&=
{
\begin{cases}
\mathbf R,& k\in M_d,
\\[1ex]
[0,1],& k\notin M_d,
\end{cases}
}
\\[1ex]
I_k &= \sets {xe_k}{x\in J_k},\qquad k\in \{1,\dots ,d\}
\intertext{and}
I &= \sets {x_1e_1+\cdots +x_de_d}{x_k\in J_k,\ k=1,\dots ,d}
\\
&\simeq I_1\times \cdots \times I_d.
\end{alignat*}
Then evidently,
$|f(\, \cdot \, +ne_k)|=|f(\, \cdot \, +nv_k)|$ for every integer $n$.
Hence, if $f$
is measurable and echo-periodic with respect to
$E_0$, and ${\boldsymbol p} \in (0,\infty ]^d$, then
it follows by straight-forward computations that
$$
\nm {f(\, \cdot \, +ne_k)}{L^{{\boldsymbol p}}_E(I)} = \nm f{L^{{\boldsymbol p}}_E(I)}
$$
for every integer $n$ and $e_k\in E_0$.
\end{rem}
\par
\begin{defn}\label{Def:PerEchoLebSpaces}
Let $E$, $E_0$ and $I\subseteq \rr d$ be the same as in Remark
\ref{Rem:PerEcho}, $\omega \in \mathscr P _E(\rr d)$
and let ${\boldsymbol p} \in (0,\infty ]^d$. Then
$L^{{\boldsymbol p} ,E_0}_{E,(\omega )}(\rr d)$ denotes the set of
all complex-valued measurable echo-periodic functions
$f$ with respect to $E_0$ such that
$$
\nm f{L^{{\boldsymbol p} ,E_0}_{E,(\omega )}}
\equiv
\nm f{L^{{\boldsymbol p}}_{E,(\omega )}(I)}
$$
is finite.
\end{defn}
\par
In the next section we shall deduce weighted $L^{{\boldsymbol p}}_E(I)$
estimates of the \emph{semi-discrete convolution}
\begin{equation}\label{EqDistSemContConv}
(a*_{[E]}f)(x) \equiv \sum _{j\in \Lambda _E}a(j)f(x - j),
\end{equation}
of the measurable function $f$ on $\rr d$ and
$a \in \ell _0 (\Lambda _E)$,
with respect to the ordered basis $E$.
\par
\section{Weighted Lebesgue estimates
on semi-discrete convolutions}\label{sec2}
\par
In this section we extend Theorem \ref{Thm:SemiContConvEst0}
from the introduction such that $L^{{\boldsymbol p}}_E(I)$-estimates
of echo-periodic functions are included.
\par
Let $E$, $E_0$, $M_k$ and $J_k$, $k=1,\dots ,d$,
be the same as in Remark \ref{Rem:PerEcho}.
In what follows we let $\Sigma _1^{E_0}(\rr d)$ be the set
of all $E_0$-periodic $f\in C^\infty (\rr d)$ such that if
$$
g(x_1,\dots ,x_d)\equiv f(x_1e_1+\cdots +x_de_d),
$$
then
$$
\sup _{\alpha ,\beta \in \nn d}
\frac {\nm {x^\alpha D^\beta g}{L^\infty (I)}}
{h^{|\alpha +\beta|}\alpha !\beta !}
$$
is finite for every $h>0$. By the assumptions and
basic properties due to \cite{ChuChuKim} it follows that
$\Sigma _1^{E_0}(\rr d)\subseteq
L^{{\boldsymbol p} ,E_0}_{E,(\omega )}(\rr d)$ for every choice
of $\omega \in \mathscr P _E(\rr d)$ and ${\boldsymbol p} \in (0,\infty ]^d$
such that
\begin{equation}\label{Eq:E0WeightCond}
\omega (x)=\omega (x_0)
\qquad \text{when}\qquad
x = \sum _{k=1}^d x_ke_k,
\quad
x_0 = \sum _{k\in M_d} x_ke_k.
\end{equation}
\par
Our extension of Theorem \ref{Thm:SemiContConvEst0}
to include echo-periodic functions is the following,
which is also our main result.
\par
\begin{thm}\label{Thm:SemiContConvEst}
Let $E$ be an ordered basis
of $\rr d$, $E_0\subseteq E$,
$\omega ,v\in \mathscr P _E(\rr d)$ be
such that $\omega$ is $v$-moderate and satisfy
\eqref{Eq:E0WeightCond}, and let
${\boldsymbol p} ,\boldsymbol r \in (0,\infty ]^{d}$ be such that
$$
r_k\le \min _{m\le k}(1,p_m).
$$
Also let
$f$ be a measurable echo-periodic function with respect to
$E_0$, and let
$I\subseteq \rr d$ be as in Remark \ref{Rem:PerEcho}.
Then the map $(a,f)\mapsto a*_{[E]}f$ from $\ell _0(\Lambda _E)\times
\Sigma _1^{E_0}(\rr d)$ to
$L^{{\boldsymbol p}}_{E,(\omega )}(I)$ extends uniquely to a
linear and continuous map from $\ell ^{\boldsymbol r}_{E ,(v)}(\Lambda _E)
\times L^{{\boldsymbol p} ,E_0}_{E,(\omega )}(\rr d)$ to
$L^{{\boldsymbol p}}_{E,(\omega )}(I)$, and
\begin{equation}\label{convest1}
\nm {a*_{[E]}f}{L^{{\boldsymbol p} }_{E,(\omega )}(I)}\lesssim
\nm {a}{\ell ^{\boldsymbol r}_{E ,(v)}(\Lambda _E)}
\nm {f}{L^{{\boldsymbol p} }_{E,(\omega )}(I)}.
\end{equation}
\end{thm}
\par
For the proof we recall that
\begin{equation}\label{Eq:OtherMinkowski}
\left (
\sum _{j\in I} |b(j)|
\right )^{r}
\le
\sum _{j\in I} |b(j)|^r
\qquad
0<r\le 1,
\end{equation}
for any sequence $b$ and countable set $I$, which follows from
the elementary inequality $(s+t)^r\le s^r+t^r$ for $s,t\ge 0$
and $0<r\le 1$.
\par
\begin{proof}
By letting
\begin{align*}
f_\omega (x_1,\dots ,x_d)
&=
|f(x_1e_1+\cdots +x_de_d)\omega (x_1e_1+\cdots +x_de_d)|,
\\[1ex]
a_v(l_1,\dots ,l_d)
&=
|a(l_1e_1+\cdots +l_de_d)v(l_1e_1+\cdots +l_de_d)|
\end{align*}
and using the inequality
$$
|(a*_{[E]}f) \cdot \omega |\lesssim a_v *_{[E]}f_\omega ,
$$
we reduce ourselves to the case when $E$ is the standard
basis, $\omega =v=1$ and $f,a\ge 0$. This implies that we may identify
$I_k$ in Remark \ref{Rem:PerEcho} with $J_k$ for every $k$.
\par
Let
\begin{gather*}
\boldsymbol z _k = (x_{k+1},\dots ,x_d)\in \rr {d-k},\qquad
\boldsymbol m _k = (l_{k+1},\dots ,l_d)\in \zz {d-k}
\intertext{for $k=0,\dots ,d-1$, and let}
f_0 = f,\qquad a_0=a,\qquad g_0= a*_{[E]}f.
\end{gather*}
Then $\boldsymbol z _{k-1}=(x_k,\boldsymbol z _k)$ and
$\boldsymbol m _{k-1}=(l_k,\boldsymbol m _k)$. It follows that
$x_k\in I_k$ when applying the mixed quasi-norms of
Lebesgue types, and that
\begin{multline}\label{Eq:ConvRef1}
0\le (a*_{[E]}f)(x_1,\dots ,x_d)
\\[1ex]
\le
\sum _{\boldsymbol m _0\in \zz d}
f(x_1-\varphi _1(\boldsymbol m _0),\dots ,x_d-\varphi _d(\boldsymbol m _{d-1}))a(\boldsymbol m _0),
\end{multline}
for some linear functions $\varphi _k$ from
$\rr {d+1-k}$ to $\mathbf R$, which satisfy
\begin{equation}\label{Eq:PhiSeqDef}
\varphi _k(\boldsymbol z _{k-1})
=
\begin{cases}
x_k +\psi _k(\boldsymbol z _k),& J_k=\mathbf R,
\\[1ex]
0, & J_k=[0,1],
\end{cases}
\end{equation}
for some linear forms $\psi _k$ on $\rr {d-k}$, $k=1,\dots ,d$.
\par
Define inductively
\begin{align*}
f_k(\boldsymbol z _k) &= \nm {f_{k-1}(\, \cdot \, ,\boldsymbol z _k)}{L^{p_k}(J_k)},
\quad
a_k(\boldsymbol m _k) = \nm {a_{k-1}(\, \cdot \, ,\boldsymbol m _k)}{\ell ^{r_k}(\mathbf Z)},
\intertext{and}
g_k(\boldsymbol z _k) &= \nm {g_{k-1}(\, \cdot \, ,\boldsymbol z _k)}{L^{p_k}(J_k)},\qquad k=1,\dots ,d.
\end{align*}
Also let
$$
\boldsymbol \varphi _k(\boldsymbol z _k) = (\varphi _{k+1}(\boldsymbol z _{k}),\dots ,\varphi _d(\boldsymbol z _{d-1})),
\quad k=0,\dots ,d-1.
$$
Then \eqref{Eq:ConvRef1} is the same as
\begin{equation}\label{Eq:ConvRef2}
0\le (a*_{[E]}f)(x_1,\dots ,x_d)
\le
\sum _{\boldsymbol m _0\in \zz d}
f(\boldsymbol z _0 -\boldsymbol \varphi _0(\boldsymbol m _0))a(\boldsymbol m _0).
\end{equation}
\par
We claim
$$
g_k(\boldsymbol z _k)
\lesssim
\left (
\sum _{\boldsymbol m _k} f_k(x_{k+1}-\varphi _{k+1}(\boldsymbol m _k),
\dots ,x_{d}-\varphi _{d}(\boldsymbol m _{d-1}))^{p_{0,k}}a_k(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}},
$$
which in view of the links between \eqref{Eq:ConvRef1} and
\eqref{Eq:ConvRef2} is the same as
\begin{equation}\label{Eq:ConvIndEst1}
g_k(\boldsymbol z _k)
\lesssim
\left (
\sum _{\boldsymbol m _k} f_k(\boldsymbol z _{k}-\boldsymbol \varphi _{k}(\boldsymbol m _k))^{p_{0,k}}
a_k(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}}
\end{equation}
when $k=0, \dots ,d$. Here we set $p_{0,0}=1$ and
$p_{0,k}=\min (p_{0,k-1},p_k)$ for $k=1,\dots ,d$, and
interpret $f_d$, $a_d$, $g_d$ and the right-hand side of
\eqref{Eq:ConvIndEst1} as $\nm f{L^{{\boldsymbol p}}_E(I)}$, $\nm a{\ell
^{{\boldsymbol p} _0}_E(\zz d)}$, $\nm {g_0}{L^{{\boldsymbol p}}_E(I)}$ and
$\nm f{L^{{\boldsymbol p}}_E(I)}\nm a{\ell ^{{\boldsymbol p} _0}_E(\zz d)}$,
respectively. The result then follows by letting $k=d$ in
\eqref{Eq:ConvIndEst1}.
\par
We shall prove \eqref{Eq:ConvIndEst1} by induction. The result
is evidently true when $k=0$. Suppose that it is true for $k-1$,
where $k\in \{1,\dots ,d\}$. We shall consider the cases when
$p_k\ge p_{0,k-1}$ or $p_k\le p_{0,k-1}$, and $J_k=\mathbf R$ or $J_k=[0,1]$,
separately, and for convenience we set $p_{0,k-1}=p$ and $f_{k-1}=h$.
\par
First assume that $p_k\ge p_{0,k-1}$. Then $p_{0,k}=p_{0,k-1}$.
Also suppose $J_k=\mathbf R$. Then
it follows from the induction
hypothesis that
\begin{multline*}
g_k(\boldsymbol z _k)
\\[1ex]
\lesssim
\left (
\int _{-\infty}^\infty
\left (
\sum
h(x_k-\varphi _k(\boldsymbol m _{k-1}), \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}},
\end{multline*}
where the sum is taken over all $(l_k,\boldsymbol m _k)\in \mathbf Z\times \zz {d-k}$.
By Minkowski's inequality, the right-hand side can be estimated by
\begin{multline*}
\left (
\sum
\left (
\int _{-\infty}^\infty
h(x_k-\varphi _k(\boldsymbol m _{k-1}), \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_{k}}\, dx_k
\right )^{\frac {p}{p_k}}
a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac 1{p}}
\\[1ex]
=
\left (
\sum
\left (
\int _{-\infty}^\infty
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_{k}}\, dx_k
\right )^{\frac {p}{p_k}}
a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac 1{p}}
\\[1ex]
=
\left (
\sum
f_{k}(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k)) ^{p}
a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac 1{p}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _k\in \zz {d-k}}
f_{k}(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k)) ^{p}
\left (
\sum _{l_k\in \mathbf Z}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )
\right )^{\frac 1{p}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _k\in \zz {d-k}}
f_{k}(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k)) ^{p_{0,k}}
a_{k}(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}},
\end{multline*}
and \eqref{Eq:ConvIndEst1} follows in the case $p_k\ge p_{0,k-1}$
and $J_k=\mathbf R$ by combining these estimates.
\par
Next we consider the case when $p_k\ge p_{0,k-1}$
and $J_k=[0,1]$. Then $\varphi _k(\boldsymbol m _{k-1})=0$, and
by the induction hypothesis and Minkowski's
inequality we get
\begin{multline*}
g_k(\boldsymbol z _k)
\\[1ex]
\lesssim
\left (
\int _{0}^1
\left (
\sum _{\boldsymbol m _{k-1}}
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}}
\\[1ex]
\le
\left (
\sum _{\boldsymbol m _{k-1}}
\left (
\int _{0}^1
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_{k}}\, dx_k
\right )^{\frac {p}{p_k}}
a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac 1{p}}
\end{multline*}
\begin{multline*}
=
\left (
\sum _{\boldsymbol m _{k-1}}
f_{k}(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k)) ^{p}
a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac 1{p}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _k\in \zz {d-k}}
f_{k}(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k)) ^{p_{0,k}}
a_{k}(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}},
\end{multline*}
and \eqref{Eq:ConvIndEst1} follows in the case $p_k\ge p_{0,k-1}$
and $J_k=[0,1]$ as well.
\par
Next assume that $p_k\le p_{0,k-1}$
and $J_k=\mathbf R$. Then
$$
p_k/p_{0,k-1}=p_k/p\le 1
\quad \text{and}\quad
p_{0,k}=p_k,
$$
and \eqref{Eq:OtherMinkowski} gives
\begin{multline*}
g_k(\boldsymbol z _k)
\\[1ex]
\lesssim
\left (
\int _{-\infty}^\infty
\left (
\sum _{\boldsymbol m _{k-1}}
h(x_k-\varphi _k(\boldsymbol m _{k-1}), \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}}
\\[1ex]
\lesssim
\left (
\int _{-\infty}^\infty
\sum _{\boldsymbol m _{k-1}}
\left (
h(x_k-\varphi _k(\boldsymbol m _{k-1}), \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _{k-1}}
\left (
\int _{-\infty}^\infty
h(x_k-\varphi _k(\boldsymbol m _{k-1}), \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_k}\, dx_k
\right )
a_{k-1}(l_k,\boldsymbol m _k)^{p_k}
\right )^{\frac 1{p_k}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _{k-1}}
\left (
\int _{-\infty}^\infty
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_k}\, dx_k
\right )
a_{k-1}(l_k,\boldsymbol m _k)^{p_k}
\right )^{\frac 1{p_k}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _{k}}
f_k(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_k}
\left (
\sum _{l_k}a_{k-1}(l_k,\boldsymbol m _k)^{p_k}
\right )
\right )^{\frac 1{p_k}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _{k}}
f_k(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_{0,k}}
a_{k}(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}},
\end{multline*}
and \eqref{Eq:ConvIndEst1} follows in this case
as well.
\par
It remains to consider the case $p_k\le p_{0,k-1}$
and $J_k=[0,1]$. Then $\varphi _k(\boldsymbol m _{k-1})=0$, and
by similar arguments as above we get
\begin{multline*}
g_k(\boldsymbol z _k)
\\[1ex]
\lesssim
\left (
\int _{0}^1
\left (
\sum _{\boldsymbol m _{k-1}}
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}}
\\[1ex]
\lesssim
\left (
\int _{0}^1
\sum _{\boldsymbol m _{k-1}}
\left (
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p}a_{k-1}(l_k,\boldsymbol m _k)^{p}
\right )^{\frac {p_k}{p}}\, dx_k
\right )^{\frac 1{p_k}}
\end{multline*}
\begin{multline*}
=
\left (
\sum _{\boldsymbol m _{k-1}}
\left (
\int _{0}^1
h(x_k, \boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_k}\, dx_k
\right )
a_{k-1}(l_k,\boldsymbol m _k)^{p_k}
\right )^{\frac 1{p_k}}
\\[1ex]
=
\left (
\sum _{\boldsymbol m _{k}}
f_k(\boldsymbol z _k-\boldsymbol \varphi _k(\boldsymbol m _k))
^{p_{0,k}}
a_{k}(\boldsymbol m _k)^{p_{0,k}}
\right )^{\frac 1{p_{0,k}}},
\end{multline*}
and \eqref{Eq:ConvIndEst1}, and thereby the result,
follow.
\end{proof}
\par
The Indian cricket team began a tour of Australia in December 2007, playing a four-match Test series for the Border–Gavaskar Trophy, followed by a single Twenty20 match on 1 February 2008. They also participated in the Commonwealth Bank tri-series against Australia and Sri Lanka from 3 February to 4 March.
Squads
Tour matches
Before heading into the Test series, only one tour game was planned for India, against Victoria. After the Second Test, a second tour game was played, against an ACT Invitational XI in Canberra. MS Dhoni led the Indians, as regular captain Anil Kumble was rested.
Test series
1st Test
{{Test match
| date = 26–29 December
| team1 =
| team2 =
| score-team1-inns1 = 343 (92.4 overs)
| runs-team1-inns1 = Matthew Hayden 124 (183)
| wickets-team1-inns1 = Anil Kumble 5/84 (25 overs)
| score-team2-inns1 = 196 (71.5 overs)
| runs-team2-inns1 = Sachin Tendulkar 62 (77)
| wickets-team2-inns1 = Stuart Clark 4/28 (15 overs)
| score-team1-inns2 = 351/7d (88 overs)
| runs-team1-inns2 = Michael Clarke 73 (113)
| wickets-team1-inns2 = Harbhajan Singh 3/101 (26 overs)
| score-team2-inns2 = 161 (74 overs)
| runs-team2-inns2 = VVS Laxman 42 (112)
| wickets-team2-inns2 = Mitchell Johnson 3/21 (15 overs)
| result = Australia won by 337 runs
| venue = Melbourne Cricket Ground, Melbourne. Attendance: Day 1: 68,778; Day 2: 44,797; Day 3: 36,346; Day 4: 16,742
| umpires = Mark Benson (Eng) and Billy Bowden (NZ)
| motm = Matthew Hayden (Aus)
| report = Scorecard
| toss = Australia won the toss and elected to bat.
| notes = Sourav Ganguly (Ind) played his 100th Test.
Adam Gilchrist went past Ian Healy's record for most dismissals by an Australian wicket-keeper (396).
}}
Day One
Australia won the toss and elected to bat. After surviving some plays and misses and edging some balls through the cordon, the openers built a strong platform with a century opening stand before Phil Jaques was stumped on 66. Ponting and Hussey got out shortly after, falling for 4 and 2 respectively. Hayden brought up another Boxing Day hundred with a four. India then picked up the wicket of a struggling Clarke for 20, and Kumble followed with the wickets of Symonds and Gilchrist. Lee fell for a duck and Hogg was also out, leaving Australia 9/323.
Day Two
On the second day, Johnson and Clark batted well before Clark was out, leaving Australia with a healthy score of 343. The Indian team were sent in to bat, and Dravid and Jaffer batted very slowly before India lost early wickets. Tendulkar was the highlight of the innings, scoring 62 before being bowled by Clark. The rest of the batting line-up fell cheaply, and India were all out for 196. Australia then went in to bat late in the day, with Jaques and Hayden not out at stumps to leave Australia in a healthy position.
Day Three
Australia resumed play firmly on top, with all the batsmen except Ponting getting starts and Clarke top-scoring with 73. At 7/351 Ponting declared, sending India in to bat with a near-impossible target of 499 to win. Jaffer and Dravid survived until the end of the day, giving India a slight chance of winning.
Day Four
India lost early wickets on day four, and only Laxman and Ganguly scored above 20. The Australian bowlers ripped through the middle order, with Johnson taking three wickets along with the match-ending wicket of RP Singh. India were all out for 161, and Australia won by 337 runs with a day to spare to go 1–0 up in the four-match series. The win also brought up 15 straight Test victories for Australia, just one behind the record of 16 set under the captaincy of Steve Waugh.
2nd Test
Day One
India lost the toss again and Australia elected to bat. RP Singh dismissed Jaques for 0, his first ever Test duck, in front of his home crowd. This left Australia one wicket down with no runs scored. Hayden followed soon after, leaving Ponting and Hussey at the crease. The two survived until lunch, Ponting scoring a half-century. Harbhajan dismissed Ponting again, sparking a middle-order collapse that left Australia 6/137. On 30, Symonds was clearly caught behind off the bowling of Sharma, but umpire Steve Bucknor controversially did not give it out. Symonds went on to make 162 not out, with Hogg also scoring his first Test 50.
Day Two
India dismissed the lower order in the morning, with Australia all out for 463. Brett Lee bowled Jaffer before the partnership of Dravid and Laxman took over at the crease. Laxman looked to play shots while Dravid was on the defensive. Dravid made 55 while Laxman scored his second century at the SCG. Australia took late wickets, leaving Tendulkar and Ganguly at the crease at stumps.
Day Three
Tendulkar made another SCG century, scoring 154 not out, with the rest of the batsmen contributing healthy scores. India held a 69-run lead, quite remarkable considering the position they were in after day one. Hayden and Jaques were unbeaten at stumps.
Day Four
Jaques was out to Kumble shortly after Australia regained the lead. Ponting was caught off Harbhajan, sparking major celebrations among the Indians. Hayden and Hussey took over at the crease, with Hayden scoring another hundred. Hayden required Ricky Ponting as a runner due to an injury, and was eventually dismissed off Kumble for 123. On the next ball Kumble claimed Clarke for a golden duck. On the hat-trick ball Kumble hit Symonds on the pads, sparking a big appeal from the Indian team, but it was not given. Symonds and Hussey remained unbeaten at stumps.
Day Five
Australia started the final day at a slower rate than expected. Hussey went on to make his first century against India, ending on 145 not out. Symonds also scored a half-century before being caught at slip. With two overs available to bowl before lunch, Australia declared, setting India a target of 333 to win. Many commentators opined that Ponting had declared too late in the innings. The situation of the game meant that India needed a run rate of well over 4, nearly impossible on the decaying SCG pitch, while Australia needed 10 wickets to win in a minimum of 72 overs.
Before lunch Jaffer fell to Lee, as he had in all four innings to date on the tour. The rest of the top and middle order fell without a large change on the scoreboard, the highest score being Ganguly's 51, ended by a controversial decision. Captain Anil Kumble led by example after Laxman's dismissal, scoring 45 not out and spending over two hours at the crease. With just two overs remaining on day five, India had three wickets in hand and were 122 runs behind, and the game looked destined to be a draw. However, Michael Clarke took 3 wickets in 5 balls to give Australia victory with just 7 balls remaining.
The umpiring was heavily criticised after the match, with India believing they had received a disproportionate share of bad decisions. After the match the Indian team sought to have one of the umpires replaced for the 3rd Test, going against a prior agreement stating that "Neither team has a right to object to an umpire's appointment."
3rd Test
Day One
India won the toss for the first time in the series and elected to bat first. Virender Sehwag got his first chance in the series and played a typically attacking innings, providing the team with a good start along with Wasim Jaffer. Both openers fell within two runs of each other, and Australia slowed the scoring for some time. Sachin Tendulkar and Rahul Dravid then came in and steadied the ship, taking the team to a relatively safe score with individual fifties before being dismissed, Dravid missing his century by seven runs. Australia took a few wickets at the end of the day to leave the honours even, with India at 297/6 at the close of play. Stuart Clark and Brett Lee were the standout performers for Australia.
Day Two
The Indian batsmen added a few runs before Australia took four wickets for just two runs, closing the innings at 330. In reply, India's young pace attack kept Australia down with some fine swing bowling, reducing them to 5/61. Andrew Symonds and Adam Gilchrist then put up an attacking 102-run partnership to engineer an Australian revival, but both were soon dismissed in quick succession. Indian captain Anil Kumble claimed Symonds as his 600th Test wicket. The pacers then cleaned up the tail, overcoming a few minor scares from tailenders Mitchell Johnson and Shaun Tait. Australia finished at 212, 118 runs behind, leaving the Indian batsmen to negotiate the last hour of the day's play. Stuart Clark took an early wicket, dismissing Wasim Jaffer, and India ended the day with a lead of 170 and nine wickets remaining, with Sehwag and nightwatchman Irfan Pathan at the crease.
Day Three
The morning session was finely balanced, with Australia reducing India to 125 for 5. Sehwag fell to Clark, while Brett Lee took the wickets of Rahul Dravid and Tendulkar, and Mitchell Johnson dismissed Sourav Ganguly. India's day was saved by the lower order, led by VVS Laxman's 79. Pathan finished on 46 and Mahendra Singh Dhoni made a gritty 38, but the biggest irritant to Australia proved to be RP Singh, who scored a career-high 30 as part of a 51-run ninth-wicket partnership with Laxman that took India's lead past 400 runs. India were finally dismissed for 294 when Laxman edged a Lee delivery to Gilchrist. Australia were set a daunting target of 413 to win, greater than all but one successful run chase in Test history to date. Pathan took the wickets of Chris Rogers and Phil Jaques before stumps, leaving Australia on 65/2.
Day Four
Ricky Ponting and Mike Hussey stayed at the crease for a major part of the morning session. Ishant Sharma troubled Ponting throughout this period, with the Australian captain unable to take control. After a seven-over spell, Anil Kumble was about to replace Sharma with RP Singh, when Virender Sehwag asked him to retain Sharma. The ploy worked and Ponting was dismissed off the first ball of that over. This triggered the fall of the Australian resistance, as they lost four wickets in the session after lunch (including the contentious dismissals of Hussey and Andrew Symonds). Sehwag was brought in to bowl and responded with the prize scalps of Adam Gilchrist and Brett Lee. Towards the end, Michael Clarke (61) kept up the resistance in partnerships of 50 and 24 with Gilchrist and Mitchell Johnson. Johnson himself made his first Test fifty and was involved in a whirlwind partnership of 74 with Stuart Clark, but once last man Shaun Tait came in at the fall of Clark, it was only a matter of time before India took the match; RP Singh did the honours with a yorker that hit Tait's foot outside the leg stump and rolled onto his stumps, half an hour before the close of the day's play.
The game was widely praised for the high standard of cricket on offer throughout. The Indians were particularly praised for coming back from two games down in the series to deny Australia a record seventeenth consecutive Test victory at a venue whose pitch has, over the years, proved to be the downfall of almost every visiting team. Indian captain Anil Kumble considered this his best win ever.
The defeat also ended Australia's unbeaten streak in Tests at the WACA, which had stood for 11 years since their 1997 loss to the West Indies. As a result, India became the first and only Asian team to win a Test match at the WACA.
4th Test
Day One
India won the toss and elected to bat first. While India brought in Harbhajan Singh in place of Wasim Jaffer, Australia brought back Matthew Hayden and Brad Hogg replacing Chris Rogers and Shaun Tait.
Irfan Pathan and Virender Sehwag opened the innings for India. At lunch, India were at 89/2, with Mitchell Johnson taking the wickets of both Irfan Pathan and Rahul Dravid. In the post-lunch session Sehwag (63) and Sourav Ganguly (7) got out, as India went to the tea break at 187/4. In the final session of the day, Sachin Tendulkar scored his 39th Test hundred, and VVS Laxman got out after scoring 51. At the end of the day's play, India were at 309/5, with Tendulkar and Dhoni remaining as the not-out batsmen. Brett Lee and Mitchell Johnson took 2 wickets each, while Brad Hogg took the single wicket of Sourav Ganguly. Throughout the day the over rate was slow, with Australia completing only 86 overs even after 30 minutes of additional play.
Day Two
Tendulkar and Dhoni took the overnight score to 336 before Dhoni was out, caught by Symonds off the bowling of Johnson, early on the second day. India suffered a blow when Tendulkar's was the next wicket to fall, caught by Hogg off the bowling of Lee; his final score was 153. At lunch, India had reached 405/7 with Kumble and Harbhajan Singh at the crease. The pair ended up putting on a 107-run partnership before Harbhajan was finally out, caught by Gilchrist off Symonds, in the 131st over. Harbhajan's dismissal meant Gilchrist beat Mark Boucher's record for wicket-keeper dismissals; Gilchrist became the new record-holder with 414 dismissals. Following the day's play, Gilchrist announced that the Adelaide Test would be his last, effectively retiring from Test cricket and from all international cricket once the one-day series with India and Sri Lanka concluded in March. India were finally dismissed for 526 after tea on the second day; they now needed to dismiss Australia cheaply to be in a good position to win the Test and thereby square the series. Johnson proved to be the most effective of the Australian bowlers, finishing with innings figures of 4/126. With Australia having to face 21 overs before stumps on the second day, India were disappointed not to take a wicket, Australia ending the day on 62/0.
Day Three
Any hope of an Indian breakthrough early on the third day soon evaporated as Australia's openers, Jaques and Hayden, continued to make runs. They batted cautiously, averaging 3 runs an over, and ended up putting on 159 for the first wicket. Kumble made the breakthrough not long after lunch when he bowled Jaques. Hayden was able to make his 30th Test century before he was bowled by Sharma on 103. Ponting and Hussey then guided Australia safely to 225/2 at tea, making the most of the placid pitch. Pathan then bowled Hussey shortly after tea and Australia's score was 241/3. Ponting and Clarke batted out the rest of the day, however, and with Australia finishing on 322/3, a draw was looking the most likely result.
Day Four
The Adelaide pitch continued to hold up for the batsmen, and Ponting and Clarke took off largely from where they had left off the previous day. The pair ended up making a 210-run partnership before Ponting was bowled by Sehwag after lunch on the fourth day. He had ended up with 140 runs off 396 balls, the cautious batting reflecting Australia's desire not to be dismissed cheaply and risk losing the Test and a series victory. Clarke had become the fourth batsman in the Test to make a century after he reached 100 in the over where Ponting was dismissed; he was finally out to Sharma, caught by Laxman, on 118, with Australia's score at 490/5. His dismissal brought Gilchrist to the crease in what was looking increasingly like his last Test innings given that it was the fourth day and India were still to bat again. However, any hope of final-innings glory was dashed when Gilchrist was dismissed cheaply by a catch from Sehwag in the covers off the bowling of Pathan. His final Test score was 14 and he finished with career figures of 5,570 runs at an average of 47.60. The score was now 506/6, and 30 from Symonds, with some late contributions from Hogg and Johnson, allowed Australia's innings to conclude at 563, a slight first-innings lead of 37. The wickets were spread relatively evenly among India's bowlers, with both Pathan and Sharma taking three wickets each. With a further 17 overs to play, India started their second innings and had reached 45/1 at stumps on the fourth day.
Day Five
After the loss of the first wicket the previous day, India batted solidly and had made 128 before the loss of its second wicket, Tendulkar, shortly before lunch. With Australia needing a cheap dismissal of India to have any hope of winning the Test, Sehwag put paid to this outcome by making a commanding 151. By the time he was out after tea, India had made 253/6 and had put the Test out of danger. With a draw being the only possible result, Kumble did the inevitable and declared India's innings over at 269/7, causing play to finish early. The outcome of the Test reflected both the closeness of the series and the evenness between the two sides.
T20I match
Commonwealth Bank Series
The 2007–08 edition of the Commonwealth Bank Series was a One Day International cricket tournament held in Australia. The Commonwealth Bank Series is an annual event involving the national teams of Australia, India and Sri Lanka. India won the event with a 2–0 sweep of the hosts in the final series.
The first two matches were shaping up to be excellent contests after their first innings. However, heavy rain in Brisbane, caused by a cyclone in the Pacific Ocean, saw the first two matches of the series abandoned. Australia was the first to win a match in the series, after a Sri Lankan collapse in game 3.
Australia was the best team during the regular matches, taking 4 wins with bonus points. However, India defeated Australia 2–0 in the best-of-three finals series to win the tournament. This was the second time in a row Australia had lost their home tri-nations series, having lost to England the previous year. Fast bowler Nathan Bracken was the leading wicket-taker in the tournament with 21 wickets and was named Player of the Series. Australian wicketkeeper Adam Gilchrist and left-arm spinner Brad Hogg retired from One Day Internationals after the second final.
Controversies
Umpiring incidents
The Second Test witnessed many controversial umpiring decisions from the two on-field umpires - Steve Bucknor and Mark Benson - and even the third umpire. The first of Bucknor's gaffes occurred when he did not give Andrew Symonds out caught behind at 30 when TV replays clearly showed that the ball had touched the bat's edge. The second was when Bucknor did not refer a stumping call against Symonds (now 148) to the third umpire. Replays showed the Australia all-rounder's foot wasn't grounded inside the crease when the bails came off. Symonds went on to make 162 not out and brought Australia back into the game. After these incidents, Symonds said, "I was very lucky. I was out when I was 30, given not out. That's cricket though, I can sit here and tell you about my bad decisions as well - but I won't." On the fifth day, Bucknor declared Rahul Dravid out caught behind though television replays later showed the ball had brushed his pad without touching his bat. In response to an official complaint about Bucknor's umpiring from the BCCI, the International Cricket Council (ICC) withdrew Bucknor from umpiring in the Third Test, and assigned Billy Bowden as his replacement. The other incident was when Benson consulted the fielding captain, Ricky Ponting, instead of Bucknor at square leg on whether Michael Clarke had taken the catch of Sourav Ganguly cleanly; he gave Ganguly out but the replay showed that the ball was touching the ground. (There had been a pre-series agreement between the captains about taking the fielder's word on catches; it was dropped after this Test.) These and other umpiring errors created a huge backlash against the Australian cricket team for not playing in the spirit of the game.
Monkeygate and unsportsmanlike conduct
Following the Second Test, there was speculation that the tour could be in jeopardy due to the fallout of an incident between Harbhajan and Symonds, with referee Mike Procter issuing a three-match ban against Harbhajan for racial sledging. This resulted in the Indians feeling hard done by; the ban was later rescinded after an appeal before a New Zealand High Court judge. Other acrimony included the reporting of Brad Hogg for unsportsmanlike conduct, reports of sledging by the Australian team and the scrapping of the captains' agreement about taking the fielder's word on catches. Indian captain Anil Kumble echoed the Bodyline quote, saying during an interview immediately after the match "Only one team was playing with the spirit of the game, that's all I can say."
Records
The Australian team equalled the world record of 16 consecutive Test wins after winning the 2nd Test of this series in Sydney. The record is shared with Steve Waugh's team, which set it in 2001.
Anil Kumble secured his 600th Test wicket, in the 3rd Test of this series in Perth. Kumble became the third bowler to achieve this feat, after Shane Warne and Muttiah Muralitharan.
For the second time, after the 2001 Test in Kolkata, the Indian team ended a record sequence of Test wins for Australia. By losing to India in the 3rd Test of this series, Australia's winning sequence of 16 consecutive Tests, which began in the 2005 season, came to an end.
Australian wicketkeeper Adam Gilchrist broke the record for most dismissals (414) in Test cricket by a wicketkeeper, previously held by Mark Boucher (413) of South Africa.
References
2007-08
2007 in cricket
2008 in cricket
2007–08 Australian cricket season
International cricket competitions in 2007–08
2007 in Indian cricket
2008 in Indian cricket
Cricket controversies
MwEPA-LLC (formerly Midwest Energy Performance Analytics, Inc.) is located in Downers Grove, Illinois - just 45 minutes from the Chicago Loop. We are pleased to serve clients throughout the Midwest region.
You can call us at 630 971-2016.
For additional information, contact James Cavallo, Ph.D., at jdcavallo@MwEPA.com or cavallo@Kouba-Cavallo.com.
This page was last updated on March 25, 2019.
Martha is married to her wonderful husband, Ray. Ray and Martha have a son, Kyle, who is a senior at Belmont, where she teaches full-time.
Martha enjoys working with preschoolers, children and their families, and the volunteers who give their time to Crievewood.
The 1975–76 Pittsburgh Penguins season was their ninth in the National Hockey League. They finished third in the Norris Division, as they had in 1974–75. Despite strong seasons by Pierre Larouche, who set new club records for goals in a season (53) and points in a season (111), Jean Pronovost, and Syl Apps, Jr. (who set a new club record for assists in a season with 67), the Penguins' powerful offense scored a meagre three goals in three games against the Toronto Maple Leafs in the preliminary round of the Stanley Cup playoffs, ending their season.
Regular season
Division standings
Schedule and results
|- style="background:#cfc;"
| 1 || Oct 7 || Pittsburgh Penguins || 4–2 || Washington Capitals || Capital Centre || 1–0–0 || 2
|- style="background:#cfc;"
| 2 || Oct 11 || Washington Capitals || 5–7 || Pittsburgh Penguins || Civic Arena || 2–0–0 || 4
|- style="background:#cfc;"
| 3 || Oct 15 || Pittsburgh Penguins || 8–4 || Toronto Maple Leafs || Maple Leaf Gardens || 3–0–0 || 6
|- style="background:#cfc;"
| 4 || Oct 18 || Detroit Red Wings || 1–6 || Pittsburgh Penguins || Civic Arena || 4–0–0 || 8
|- style="background:#fcf;"
| 5 || Oct 21 || Montreal Canadiens || 7–1 || Pittsburgh Penguins || Civic Arena || 4–1–0 || 8
|- style="background:#ffc;"
| 6 || Oct 25 || Philadelphia Flyers || 4–4 || Pittsburgh Penguins || Civic Arena || 4–1–1 || 9
|- style="background:#fcf;"
| 7 || Oct 30 || Pittsburgh Penguins || 0–4 || Los Angeles Kings || The Forum || 4–2–1 || 9
|-
|- style="background:#fcf;"
| 8 || Nov 1 || Pittsburgh Penguins || 3–7 || Minnesota North Stars || Met Center || 4–3–1 || 9
|- style="background:#fcf;"
| 9 || Nov 2 || Pittsburgh Penguins || 2–7 || Buffalo Sabres || Buffalo Memorial Auditorium || 4–4–1 || 9
|- style="background:#cfc;"
| 10 || Nov 5 || New York Islanders || 6–7 || Pittsburgh Penguins || Civic Arena || 5–4–1 || 11
|- style="background:#cfc;"
| 11 || Nov 6 || Pittsburgh Penguins || 5–3 || St. Louis Blues || St. Louis Arena || 6–4–1 || 13
|- style="background:#fcf;"
| 12 || Nov 8 || Chicago Black Hawks || 7–5 || Pittsburgh Penguins || Civic Arena || 6–5–1 || 13
|- style="background:#fcf;"
| 13 || Nov 9 || Pittsburgh Penguins || 4–6 || Philadelphia Flyers || The Spectrum || 6–6–1 || 13
|- style="background:#ffc;"
| 14 || Nov 12 || Pittsburgh Penguins || 6–6 || Washington Capitals || Capital Centre || 6–6–2 || 14
|- style="background:#fcf;"
| 15 || Nov 13 || Montreal Canadiens || 5–4 || Pittsburgh Penguins || Civic Arena || 6–7–2 || 14
|- style="background:#fcf;"
| 16 || Nov 15 || Buffalo Sabres || 5–2 || Pittsburgh Penguins || Civic Arena || 6–8–2 || 14
|- style="background:#fcf;"
| 17 || Nov 18 || California Golden Seals || 5–3 || Pittsburgh Penguins || Civic Arena || 6–9–2 || 14
|- style="background:#cfc;"
| 18 || Nov 21 || Pittsburgh Penguins || 4–1 || Atlanta Flames || Omni Coliseum || 7–9–2 || 16
|- style="background:#cfc;"
| 19 || Nov 22 || Los Angeles Kings || 3–6 || Pittsburgh Penguins || Civic Arena || 8–9–2 || 18
|- style="background:#cfc;"
| 20 || Nov 26 || Detroit Red Wings || 2–5 || Pittsburgh Penguins || Civic Arena || 9–9–2 || 20
|- style="background:#cfc;"
| 21 || Nov 29 || New York Rangers || 3–8 || Pittsburgh Penguins || Civic Arena || 10–9–2 || 22
|- style="background:#fcf;"
| 22 || Nov 30 || Pittsburgh Penguins || 2–4 || Boston Bruins || Boston Garden || 10–10–2 || 22
|-
|- style="background:#ffc;"
| 23 || Dec 3 || Pittsburgh Penguins || 3–3 || Chicago Black Hawks || Chicago Stadium || 10–10–3 || 23
|- style="background:#fcf;"
| 24 || Dec 4 || Pittsburgh Penguins || 1–6 || New York Islanders || Nassau Veterans Memorial Coliseum || 10–11–3 || 23
|- style="background:#cfc;"
| 25 || Dec 7 || Toronto Maple Leafs || 3–6 || Pittsburgh Penguins || Civic Arena || 11–11–3 || 25
|- style="background:#fcf;"
| 26 || Dec 9 || Pittsburgh Penguins || 2–3 || Kansas City Scouts || Kemper Arena || 11–12–3 || 25
|- style="background:#fcf;"
| 27 || Dec 10 || Pittsburgh Penguins || 2–3 || Detroit Red Wings || Olympia Stadium || 11–13–3 || 25
|- style="background:#ffc;"
| 28 || Dec 13 || Boston Bruins || 4–4 || Pittsburgh Penguins || Civic Arena || 11–13–4 || 26
|- style="background:#fcf;"
| 29 || Dec 14 || Pittsburgh Penguins || 4–7 || Montreal Canadiens || Montreal Forum || 11–14–4 || 26
|- style="background:#cfc;"
| 30 || Dec 17 || Pittsburgh Penguins || 9–2 || California Golden Seals || Oakland Coliseum Arena || 12–14–4 || 28
|- style="background:#fcf;"
| 31 || Dec 19 || Pittsburgh Penguins || 1–5 || Vancouver Canucks || Pacific Coliseum || 12–15–4 || 28
|- style="background:#cfc;"
| 32 || Dec 20 || Pittsburgh Penguins || 5–1 || Los Angeles Kings || The Forum || 13–15–4 || 30
|- style="background:#fcf;"
| 33 || Dec 23 || Pittsburgh Penguins || 3–4 || New York Rangers || Madison Square Garden (IV) || 13–16–4 || 30
|- style="background:#fcf;"
| 34 || Dec 26 || Pittsburgh Penguins || 3–4 || Atlanta Flames || Omni Coliseum || 13–17–4 || 30
|- style="background:#cfc;"
| 35 || Dec 27 || Atlanta Flames || 2–3 || Pittsburgh Penguins || Civic Arena || 14–17–4 || 32
|- style="background:#cfc;"
| 36 || Dec 31 || Los Angeles Kings || 1–5 || Pittsburgh Penguins || Civic Arena || 15–17–4 || 34
|-
|- style="background:#fcf;"
| 37 || Jan 3 || Philadelphia Flyers || 8–4 || Pittsburgh Penguins || Civic Arena || 15–18–4 || 34
|- style="background:#fcf;"
| 38 || Jan 4 || Pittsburgh Penguins || 3–5 || Chicago Black Hawks || Chicago Stadium || 15–19–4 || 34
|- style="background:#fcf;"
| 39 || Jan 7 || Pittsburgh Penguins || 1–4 || California Golden Seals || Oakland Coliseum Arena || 15–20–4 || 34
|- style="background:#ffc;"
| 40 || Jan 10 || Vancouver Canucks || 3–3 || Pittsburgh Penguins || Civic Arena || 15–20–5 || 35
|- style="background:#fcf;"
| 41 || Jan 11 || Pittsburgh Penguins || 0–6 || Buffalo Sabres || Buffalo Memorial Auditorium || 15–21–5 || 35
|- style="background:#fcf;"
| 42 || Jan 13 || Pittsburgh Penguins || 2–6 || Boston Bruins || Boston Garden || 15–22–5 || 35
|- style="background:#fcf;"
| 43 || Jan 15 || Pittsburgh Penguins || 1–4 || Philadelphia Flyers || The Spectrum || 15–23–5 || 35
|- style="background:#cfc;"
| 44 || Jan 17 || Buffalo Sabres || 2–3 || Pittsburgh Penguins || Civic Arena || 16–23–5 || 37
|- style="background:#cfc;"
| 45 || Jan 18 || New York Rangers || 3–8 || Pittsburgh Penguins || Civic Arena || 17–23–5 || 39
|- style="background:#fcf;"
| 46 || Jan 22 || Montreal Canadiens || 4–3 || Pittsburgh Penguins || Civic Arena || 17–24–5 || 39
|- style="background:#cfc;"
| 47 || Jan 24 || Washington Capitals || 2–8 || Pittsburgh Penguins || Civic Arena || 18–24–5 || 41
|- style="background:#ffc;"
| 48 || Jan 25 || Minnesota North Stars || 1–1 || Pittsburgh Penguins || Civic Arena || 18–24–6 || 42
|- style="background:#cfc;"
| 49 || Jan 29 || Kansas City Scouts || 2–6 || Pittsburgh Penguins || Civic Arena || 19–24–6 || 44
|- style="background:#ffc;"
| 50 || Jan 31 || Pittsburgh Penguins || 4–4 || Kansas City Scouts || Kemper Arena || 19–24–7 || 45
|-
|- style="background:#cfc;"
| 51 || Feb 1 || Toronto Maple Leafs || 1–7 || Pittsburgh Penguins || Civic Arena || 20–24–7 || 47
|- style="background:#fcf;"
| 52 || Feb 5 || Pittsburgh Penguins || 1–5 || Boston Bruins || Boston Garden || 20–25–7 || 47
|- style="background:#cfc;"
| 53 || Feb 7 || Pittsburgh Penguins || 7–3 || Los Angeles Kings || The Forum || 21–25–7 || 49
|- style="background:#cfc;"
| 54 || Feb 8 || Pittsburgh Penguins || 7–3 || Vancouver Canucks || Pacific Coliseum || 22–25–7 || 51
|- style="background:#ffc;"
| 55 || Feb 11 || Pittsburgh Penguins || 4–4 || California Golden Seals || Oakland Coliseum Arena || 22–25–8 || 52
|- style="background:#ffc;"
| 56 || Feb 14 || Pittsburgh Penguins || 4–4 || New York Islanders || Nassau Veterans Memorial Coliseum || 22–25–9 || 53
|- style="background:#cfc;"
| 57 || Feb 15 || Los Angeles Kings || 4–6 || Pittsburgh Penguins || Civic Arena || 23–25–9 || 55
|- style="background:#cfc;"
| 58 || Feb 17 || Kansas City Scouts || 1–6 || Pittsburgh Penguins || Civic Arena || 24–25–9 || 57
|- style="background:#cfc;"
| 59 || Feb 19 || Toronto Maple Leafs || 5–7 || Pittsburgh Penguins || Civic Arena || 25–25–9 || 59
|- style="background:#cfc;"
| 60 || Feb 21 || Chicago Black Hawks || 1–10 || Pittsburgh Penguins || Civic Arena || 26–25–9 || 61
|- style="background:#ffc;"
| 61 || Feb 22 || Pittsburgh Penguins || 2–2 || Detroit Red Wings || Olympia Stadium || 26–25–10 || 62
|- style="background:#ffc;"
| 62 || Feb 25 || Atlanta Flames || 3–3 || Pittsburgh Penguins || Civic Arena || 26–25–11 || 63
|- style="background:#cfc;"
| 63 || Feb 28 || Vancouver Canucks || 4–5 || Pittsburgh Penguins || Civic Arena || 27–25–11 || 65
|- style="background:#fcf;"
| 64 || Feb 29 || St. Louis Blues || 5–3 || Pittsburgh Penguins || Civic Arena || 27–26–11 || 65
|-
|- style="background:#cfc;"
| 65 || Mar 2 || Pittsburgh Penguins || 6–2 || Minnesota North Stars || Met Center || 28–26–11 || 67
|- style="background:#cfc;"
| 66 || Mar 6 || Minnesota North Stars || 0–5 || Pittsburgh Penguins || Civic Arena || 29–26–11 || 69
|- style="background:#fcf;"
| 67 || Mar 7 || New York Islanders || 5–3 || Pittsburgh Penguins || Civic Arena || 29–27–11 || 69
|- style="background:#fcf;"
| 68 || Mar 10 || Buffalo Sabres || 7–6 || Pittsburgh Penguins || Civic Arena || 29–28–11 || 69
|- style="background:#cfc;"
| 69 || Mar 13 || California Golden Seals || 2–4 || Pittsburgh Penguins || Civic Arena || 30–28–11 || 71
|- style="background:#cfc;"
| 70 || Mar 14 || St. Louis Blues || 1–7 || Pittsburgh Penguins || Civic Arena || 31–28–11 || 73
|- style="background:#fcf;"
| 71 || Mar 16 || Pittsburgh Penguins || 4–5 || Montreal Canadiens || Montreal Forum || 31–29–11 || 73
|- style="background:#cfc;"
| 72 || Mar 19 || Pittsburgh Penguins || 7–3 || Washington Capitals || Capital Centre || 32–29–11 || 75
|- style="background:#cfc;"
| 73 || Mar 21 || Pittsburgh Penguins || 4–2 || New York Rangers || Madison Square Garden (IV) || 33–29–11 || 77
|- style="background:#ffc;"
| 74 || Mar 24 || Boston Bruins || 5–5 || Pittsburgh Penguins || Civic Arena || 33–29–12 || 78
|- style="background:#fcf;"
| 75 || Mar 25 || Pittsburgh Penguins || 2–5 || St. Louis Blues || St. Louis Arena || 33–30–12 || 78
|- style="background:#cfc;"
| 76 || Mar 28 || Detroit Red Wings || 0–3 || Pittsburgh Penguins || Civic Arena || 34–30–12 || 80
|- style="background:#fcf;"
| 77 || Mar 29 || Pittsburgh Penguins || 4–5 || Toronto Maple Leafs || Maple Leaf Gardens || 34–31–12 || 80
|- style="background:#fcf;"
| 78 || Mar 31 || Pittsburgh Penguins || 3–7 || Montreal Canadiens || Montreal Forum || 34–32–12 || 80
|-
|- style="background:#fcf;"
| 79 || Apr 3 || Washington Capitals || 5–4 || Pittsburgh Penguins || Civic Arena || 34–33–12 || 80
|- style="background:#cfc;"
| 80 || Apr 4 || Pittsburgh Penguins || 6–5 || Detroit Red Wings || Olympia Stadium || 35–33–12 || 82
|-
|- style="text-align:center;"
| Legend: Green = Win, Pink = Loss, Yellow = Tie
Playoffs
The Penguins made the playoffs for the fourth time in their history, losing in the preliminary round to Toronto.
Player statistics
Skaters
Goaltenders
†Denotes player spent time with another team before joining the Penguins. Stats reflect time with the Penguins only.
‡Denotes player was traded mid-season. Stats reflect time with the Penguins only.
Awards and records
Jean Pronovost became the first player to score 200 goals for the Penguins. He did so in a 4–5 loss to Montreal on November 13.
Jean Pronovost became the first player to score 400 points for the Penguins. He did so in a 5–2 win over Detroit on November 26.
Jean Pronovost became the first person to score 50 goals in a season for the Penguins. He did so in a 5–5 tie with Boston on March 24.
Pierre Larouche became the first person to score 100 points in a season for the Penguins. He did so in a 5–5 tie with Boston on March 24.
Pierre Larouche established a new franchise record for goals in a season with 53, besting the previous high of 52 held by Jean Pronovost.
Pierre Larouche established a new franchise record for points in a season with 111, besting the previous high of 86 held by Ron Schock.
Syl Apps, Jr. established a new franchise record for assists in a season with 67, besting the previous high of 63 held by Ron Schock.
Ron Stackhouse established a new franchise record for assists (60) and points (71) by a defenseman in a season. He topped the previous highs of 45 assists and 60 points, both held by himself.
Ron Stackhouse established a new franchise record for career points by a defenseman with 150, besting the previous high of 104 held by Duane Rupp.
Transactions
The Penguins were involved in the following transactions during the 1975–76 season:
Trades
Additions and subtractions
Awards and honors
Draft picks
The 1975 NHL Amateur Draft was held in Montreal, Quebec.
References
Penguins on Hockey Database
Pittsburgh Penguins seasons
package io.enmasse.systemtest;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
/**
* This test suite is organized using different test profiles and those test profiles use a different set of tags.
*
* This class defines all the tags used in the test suite.
*/
public class TestTag {
public static final String ISOLATED = "isolated";
public static final String ISOLATED_STANDARD = "isolated-standard";
public static final String ISOLATED_BROKER = "isolated-broker";
public static final String SHARED_STANDARD = "shared-standard";
public static final String SHARED_BROKERED = "shared-brokered";
public static final String SHARED_MQTT = "shared-mqtt";
public static final String SHARED_IOT = "shared-iot";
public static final String ISOLATED_IOT = "isolated-iot";
public static final String SOAK = "soak";
public static final String NON_PR = "nonPR";
public static final String UPGRADE = "upgrade";
public static final String SMOKE = "smoke";
public static final String ACCEPTANCE = "acceptance";
public static final String SCALE = "scale";
public static final String OLM = "olm";
public static final String FRAMEWORK = "framework";
public static final Set<String> SHARED_TAGS = new HashSet<>(Arrays.asList(SHARED_BROKERED, SHARED_STANDARD, SHARED_MQTT, SHARED_IOT));
public static final Set<String> IOT_TAGS = new HashSet<>(Arrays.asList(SHARED_IOT, ISOLATED_IOT));
}
Décor Studio
Office, Industrial & Retail
Sorbara Group of Companies Nominated for Tarion Homeowners' Choice Awards for Second Year in a Row
The Satisfaction of Our Homeowners is Our Greatest Achievement.
We are proud to announce that the Sorbara Group of Companies has been selected as a finalist for the Tarion Homeowners' Choice Awards for the second year in a row. These are the only awards that give Ontario's new home buyers the power to have their new home builder recognized for customer service excellence.
"With today's increasingly competitive housing markets, a new home commands an even larger proportion of a person's household income than ever before," said Howard Bogach, Tarion President and CEO. "As prices rise, so too do homeowner expectations. What these awards recognize are the builders who have not only met, but exceeded, those expectations through service excellence."
Every fall, Tarion engages a third-party research firm to conduct a province-wide customer satisfaction survey of new home owners in their first year of ownership – specifically, homeowners who took possession between October 1, 2017 and September 30, 2018.
Over 54,500 invitations to complete the survey were sent by email and post. More than 14,000 completed surveys were returned, representing a response rate of 21 per cent.
Survey questions focused on homeowners' satisfaction with their builder, covering every stage in the homeowner-builder relationship – from the signing of the Agreement of Purchase and Sale, through construction and the pre-delivery period, to after-sales service.
"It's important that new home buyers are able to have confidence in their builders to not only deliver a quality built home but also back it up with excellent after sales service," said Bogach. "The Homeowners' Choice Awards recognize builders who have earned the trust of their customers and help champion the importance of customer service in making the home buying journey a positive one."
A huge thank-you to our entire team for their hard work in making our customers a top priority.
3700 Steeles Ave. West, Suite 800
Vaughan, ON L4L 8M9
Tor/Tel.: 416.798.7254
4291 Steeles Ave. West
Toronto, ON M3N 1V7
© Sorbara Group of Companies 2018 All rights reserved.
All illustrations are artist's concepts. Prices and specifications subject to change without notice. E.&O.E.
Group Terms Of Use
Designed by Channel 13
Affiliations
Q: How can a web page play mp3 on iPhone/iPad As far as I know, web pages play mp3 sounds by embedding invisible Flash players. Since Flash is not available on mobile WebKit (iPhone/iPad), how is it possible to play mp3 on this platform?
A: HTML5 defines two media tags (audio/video), both of which Safari supports:
In your case, you can use the audio tag.
<audio src="horse.ogg" controls="controls">
your browser doesn't support HTML5 audio
</audio>
The only catch is getting the right format for your media. :-)
Safari supports MP3 for audio and MPEG-4 for video... Firefox supports OGG for audio/video (I believe Opera does too), and Chrome supports both.
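Because support differs per browser, a page that wants broad coverage typically lists several sources and lets the browser pick the first one it can play. Below is a hedged sketch of that selection logic; the `canPlay` callback stands in for `HTMLMediaElement.canPlayType` (which returns `"probably"`, `"maybe"`, or an empty string), and the file names are illustrative only:

```javascript
// Pick the first source whose MIME type the browser claims it can play.
// `canPlay` abstracts HTMLMediaElement.canPlayType so the logic is
// testable outside a browser.
function pickSource(sources, canPlay) {
  for (const src of sources) {
    const verdict = canPlay(src.type);
    if (verdict === "probably" || verdict === "maybe") {
      return src.url;
    }
  }
  return null; // nothing playable
}

// Simulated Safari-like support table: MP3 yes, OGG no.
const safariLike = (type) => (type === "audio/mpeg" ? "probably" : "");

const chosen = pickSource(
  [
    { type: "audio/ogg", url: "horse.ogg" },
    { type: "audio/mpeg", url: "horse.mp3" },
  ],
  safariLike
);
// chosen === "horse.mp3"
```

In a real page you would pass something like `(t) => new Audio().canPlayType(t)` as the callback, or simply list multiple `<source>` elements inside the `<audio>` tag and let the browser do the same selection natively.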
A: More information about the supported file formats in HTML5 are here:
http://www.html5laboratory.com/playing-with-audio-files.php
That links to a really useful page allowing you to test the format support on your particular browser:
http://www.jplayer.org/HTML5.Audio.Support/
Togayly () is a village in Turkestan Region, Kazakhstan. It falls under the jurisdiction of the city administration of Arys and is part of the Akdala rural district. The KATO code is 611633500.
Population
According to 1999 data, the village had a population of 959. According to the 2009 census, 116 people lived in the village.
Notes
Populated places of the Arys city administration
Timpson Transport
Specializing in Multi Axle Flatbed and Oversize Loads, Sand and Topsoil Deliveries
Sand Pit
About Timpson Transport, Inc.
Timpson Transport, Inc. is a West Michigan based trucking company specializing in hauling multi axle flatbeds and oversized loads. The company is a fourth generation owned and operated family business. Years ago it began as an apple orchard and has grown and evolved into transporting specialized products. Our fleet includes a large selection of tractor trailer units that deliver to surrounding areas daily. The company also has a fully operational sandpit with various excavating materials available for purchase and delivery. Timpson Transport, Inc. is committed to serving our community and customers with consistency. Customer satisfaction has earned us an outstanding reputation within the community and we continue to strive for only the highest standards in every aspect of our business.
3175 Segwun Ave. SE,
Lowell, MI 49331
# Extending a function on a submanifold to the ambient manifold & proof of a property of a vector field

$\newcommand{\wt}[1]{\widetilde{#1}}$
Hello, I just tried my hand at two exercises from John M. Lee's book Riemannian Geometry and I would like to know whether my reasoning is sound or if I did something wrong. This is about exercise 2.3 (a) and (c).

Let $M^n\subset\widetilde{M}^m$ be an embedded submanifold.

(a) If $f\in C^\infty(M)$, show that $f$ can be extended to a smooth function on a neighborhood of $M$ in $\widetilde{M}$.

(c) Let $\widetilde{X}$ be a vector field on $\widetilde{M}$, i.e. $\widetilde{X}\in\mathfrak{X}(\widetilde{M})$. Show: $\widetilde{X}$ is tangent to $M$ $\iff$ whenever $f\in C^\infty(\widetilde{M})$ with $f_{|M}=0$, then $(\wt{X}f)_{|M}=0$.

Okay, so here's what I did for (a).

Choose open sets $\wt{U}_\alpha\subset\wt{M}$ such that $M\subseteq\bigcup_\alpha \wt{U}_\alpha$ and such that they admit slice coordinates for the sets $U_\alpha=\wt{U}_\alpha\cap M$, and both form open covers of $M$. A note on the slice coordinates (abusing the notation a bit, but this is the way he does it in that particular book): if $\wt{U}_\alpha\leadsto x^1,\dots,x^m$, then $U_\alpha\leadsto x^1,\dots,x^n$.

Define $\wt{f}_\alpha\in C^\infty(\wt{U}_\alpha)$ via $\wt{f}_\alpha(x^1,\dots,x^m)=f(x^1,\dots,x^n)$. This definition is obviously independent of the last $m-n$ coordinates.

Choose a partition of unity $p_\alpha$ subordinate to $\wt{U}_\alpha$, and set $F=\sum_\alpha p_\alpha\wt{f}_\alpha$. Then $F\in C^\infty\left(\bigcup_\alpha\wt{U}_\alpha\right)$ is an extension of $f$ to a neighborhood of $M$.

Now, for (c):

We again use the slice coordinates $x^i$ as above. Note first that if $f_{|M}=0$, then $Df_{|\wt{U}_\alpha}=\partial_i f\,\mathrm{d}x^i_{|\wt{U}_\alpha}$ with $n<i\leq m$; so let $f$ w.l.o.g. vanish on $M$.

$\wt{X}_{|M}$ is tangent to $M$
$\iff$ in local coordinates of $\wt{U}_\alpha$, $\wt{X}_{|\wt{U}_\alpha}=X^i\partial_i{}_{|\wt{U}_\alpha}$ with $X^i(p)=0$ for all $p\in U_\alpha$ and $n<i\leq m$
$\iff$ $\wt{X}f_{|\wt{U}_\alpha}=0$, since, roughly speaking, there is no matching pair of indices in the $\partial_i$ and $\mathrm{d}x^j$. But this holds for all $\wt{U}_\alpha$, so for every point in $M$.

Personally, I can't think of any conflicts there, but that's why I'm asking here: I don't have too much experience. So, did I make a mistake somewhere, was I sloppy, etc.?

Thanks!
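A step the argument for (a) leaves implicit, and which is worth spelling out (this check is not in the original post): on $M$ the candidate extension really does restrict to $f$, because the partition of unity sums to $1$,

$$F(q)=\sum_\alpha p_\alpha(q)\,\widetilde{f}_\alpha(q)=\Bigl(\sum_\alpha p_\alpha(q)\Bigr)f(q)=f(q)\qquad\text{for all } q\in M,$$

using that each $\widetilde{f}_\alpha$ agrees with $f$ on $U_\alpha=\widetilde{U}_\alpha\cap M$.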
The Burbank High baseball team showed it can be explosive on offense in a Pacific League game Friday night against visiting Arcadia.
In a battle of two of three teams atop the league standings, the Bulldogs broke open the contest with 10 runs in the fourth inning to record a 13-3 victory against the Apaches.
The game was called in the fifth inning because of the 10-run mercy rule.
The win keeps Burbank (15-8, 10-2 in league) in first place with Crescenta Valley (18-7, 10-2), which defeated Pasadena on Friday, 7-3.
It was an important victory for the Bulldogs, who are trying to capture their first league championship since 1991, when they were members of the Foothill League.
In the big fourth inning, Burbank had three home runs, including a grand slam by Angel Villagran that scored John White, Dylan Mersola and Harrison Hernandez. Angel Roman hit a solo shot in the inning and Ian McKinnon had a two-run blast.
"It was one of those nights that everything came together offensively," Burbank Coach Bob Hart said. "I've been waiting for that to happen and I knew they were capable of doing that."
The Bulldogs began the game by pushing across one run in the first inning. Ricky Perez started with a single to center field. He then advanced to second on a sacrifice by White. Perez was able to score on an RBI double to left field by Mersola.
The Burbank lead was short-lived, however, as Arcadia scored two runs in the second to take a 2-1 lead.
But Burbank responded with two runs in its half of the second with a run-scoring single by CC Okimoto and an RBI groundout by McKinnon.
The Apaches kept coming in the fourth inning, tallying a run to knot the score at 3.
Mersola and McKinnon had two RBIs each for Burbank, which hammered out 12 hits.
On the hill, Villagran got the win, allowing three runs, striking out one, walking one and giving up eight hits.
The Meaning of Rice : And Other Tales from the Belly of Japan
Book Title: The Meaning of Rice : And Other Tales from the Belly of Japan
Author: Michael Booth
**Shortlisted for the 2017 Andre Simon Food and Drink Book Awards**
**Shortlisted for the 2018 Fortnum & Mason Food Book Award**
'The next Bill Bryson.' New York Times
Food and travel writer Michael Booth and his family embark on an epic journey the length of Japan to explore its dazzling food culture. They find a country much altered since their previous visit ten years earlier (which resulted in the award-winning international bestseller Sushi and Beyond).
Over the last decade the country's restaurants have won a record number of Michelin stars and its cuisine was awarded United Nations heritage status. The world's top chefs now flock to learn more about the extraordinary dedication of Japan's food artisans, while the country's fast foods - ramen, sushi and yakitori - have conquered the world. As well as the plaudits, Japan is also facing enormous challenges. Ironically, as Booth discovers, the future of Japan's culinary heritage is under threat.
Often venturing far off the beaten track, the author and his family discover intriguing future food trends and meet a fascinating cast of food heroes, from a couple lavishing love on rotten fish, to a chef who literally sacrificed a limb in pursuit of the ultimate bowl of ramen, and a farmer who has dedicated his life to growing the finest rice in the world... in the shadow of Fukushima.
Quitting Plastic : Easy and Practical Ways to Cut Down the Plastic in Your Life
The Wall of Storms
Little London : Child-friendly Days Out and Fun Things To Do
The Great Big Book of Horrible Things : The Definitive Chronicle of History's 100 Worst Atrocities
The Girl in the Red Coat
Classic Fairy Tales : Candlewick Illustrated Classic
Scarpetta : Scarpetta (Book 16)
Panzerartillerie : Firepower for the Panzer Divisions
Film Editing : Great Cuts Every Filmmaker and Movie Lover Must Know
Socrates: A Very Short Introduction
Drawing picture of singly linked list

mukesh1 (joined Mar 22, 2020):
I find linked lists quite difficult in C programming. I have read many tutorials and looked at many linked-list programs, but I still don't have a clear understanding. I think I don't understand the basic concept of a singly linked list.

So I've drawn a singly linked list diagram on paper that looks right to me.

[Attachment 254129: hand-drawn diagram of a singly linked list]

I am trying to make a picture that shows an empty list, then what happens when the first node is added to the list, then the second, third, fourth, and fifth nodes, with pointers.

BobTPH (joined Jun 5, 2013):
Do you understand pointers?

Bob

mukesh1:
Yes, I understand pointers, structures, pointers to structures, and dynamic memory.

I'm having trouble with the function where the node is added to the list at the end.

Can you tell me how you would draw the diagram for the following?

A struct pointer head is declared in the main function, which is initially null. We have a function AddNode that creates a node in the list. The new node should always be added to the end of the list when AddNode is called from main. The last node in the list should always point to a null value.

But it's too early for me to write a program. That's why I want to draw a picture of the description.

BobTPH:
Do you really mean that you cannot draw it, or that you don't know how to code it? If you can't draw it, that means you can draw a list with five elements, but not one with six elements. How can that be?

Bob

Papabravo (joined Feb 24, 2006):
A list with no elements has a head pointer that contains a NULL. In the absence of any other information you have to "walk" the list from the head to the tail to add a new element. You change the pointer in the tail element to point to the new element and you make the new element the tail by setting its pointer to NULL.

Some singly linked list headers will also contain a pointer to the tail, in which case it is not necessary to "walk" the list to find the tail.

mukesh1:
I've written code for a list, but I don't think it's a good way to do it.

C:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node
{
    int data;
    struct Node *next;
} list;

int main(void)
{
    list *first = NULL;   /* first node; the posted snippet had lost this
                             declaration, restored here to match the output */
    list *second = NULL;
    list *third = NULL;
    list *fourth = NULL;
    list *fifth = NULL;
    list *temp = NULL;

    first = malloc(sizeof(*first));
    second = malloc(sizeof(*second));
    third = malloc(sizeof(*third));
    fourth = malloc(sizeof(*fourth));
    fifth = malloc(sizeof(*fifth));

    first->data = 3;
    first->next = second;

    second->data = 5;
    second->next = third;

    third->data = 9;
    third->next = fourth;

    fourth->data = 12;
    fourth->next = fifth;

    fifth->data = 20;
    fifth->next = NULL;

    temp = first;
    while (temp != NULL)
    {
        printf(" %d ", temp->data);
        temp = temp->next;
    }

    return 0;
}

Output from program:
3 5 9 12 20

mukesh1 (replying to Papabravo):
I need to write a function that adds the node at the end of the list. Dummy code:

C:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node
{
    int data;
    struct Node *next;
} list;

int main(void)
{
    return 0;
}

That was the reason I was first trying to draw it on paper: to understand how the function will be implemented for the list.

Papabravo:
So can you translate the verbal description that I gave you into code?

mukesh1:
The head pointer points to a null value when the list is empty. I don't know more than that.

Here are some points that should be done in the function:
• We need a function AddNode that creates a node in the list.
• The new node should always be added to the end of the list.
• The last node in the list should always point to a null value.

I am neither able to write the code nor draw the diagram for this function.

Papabravo:
I'm sorry to say that my coding skills have atrophied in retirement and I don't have any compiler to check prototype solutions, so I'm not going to be much help to you. I should point out that the "head" pointer is a standalone element. It is a pointer to a list, but it is not a list, because a list element has two members and the head pointer has only one. Before messing with malloc() you might want to try using a statically allocated array of structures of type list. That will be easier to debug when you get to that stage.

ApacheKid (joined Jan 12, 2015):
In general (and in functional languages certainly) items are added to a singly linked list only at the head, period. Another way to grasp this is that lists are, ideally, immutable: once some code has a pointer to some list, the list never ever changes.

Above, if some code has a pointer to element 3034, then that pointer and the elements that follow it never change as items are added to the list.

Now immutability is of course not forced upon us, but it is highly desirable for a host of reasons (for example, the list might be accessed concurrently by multiple threads), so that's my opinion on this.

So having said that, adding elements to the list in your diagram is simple:

1. Create the new element.
2. Set the element's next_ptr to the value held by the head_ptr.
3. Set head_ptr to point to this new element.

Done.

Of course, if you don't want immutability then there are algorithms for inserting nodes more arbitrarily; in this situation there are just three cases to consider:

1. Add the element at the head of the list.
2. Add the element at the tail of the list.
3. Add the element into the middle of the list (that is, after being added there is an element before and after it).

All you need to consider when designing this is a two-element list and the new node to add; if you code for each of the three cases, the solution will work for lists of any length. Of course you need a rule for deciding at what point to insert a new node (that is, how items are ordered); that rule likely depends on some data inside the node. It's good practice to extract that rule so that it is not part of the insert logic; that is, make the list insert independent of the rule so the rule can be changed without any impact on the insert. The insert code is then reusable for any kind of list with any rule.

mukesh1:
In general, when we make a list on paper, we add items one by one. For example, if I want to list the players of a cricket team, I will start with 1, 2, and the last player will be number 11. The new number in the list is added last. That's why I want to create a list in which I add a new node to the end of the list.

I have no idea how to write a function for the code posted in #7 that adds the new node at the end of the list. I am asking for help to write this function.

I'm sure the diagram shows that the new node is added to the end of the list. No node has ever been added before the head node.

ApacheKid:
In which case this can be achieved by adjusting the "head" structure to also point to the tail of the list. If it is the tail to which items must be appended, then having a pointer to it in the "head" structure is the sensible thing to do.

Papabravo:
We already discussed this earlier in the thread. In the case that you have a "head" pointer ONLY, you need to "walk" the list to find the "tail". If you do it often enough, then having a pointer to the "tail" makes sense. If you do it rarely, or never, then not so much. Like everything else in the software arena, there are compromises and tradeoffs.

ApacheKid:
Sure, but if the list at some point has a million items, then it is a serious CPU cost to find the tail. My point is, if the problem is to append things to the tail of the list and we only have a pointer to the head, then the problem is that design; it is inadequate and a poor solution.

So I guess the next question is what exactly is driving this requirement: the requirement to not have a pointer to the tail, yet at the same time the requirement to be able to append things to that tail. Only Mukesh can answer this, I guess.

ApacheKid:
Here are some suggestions too. I've designed very sophisticated systems in C, in fact a compiler too, which entails huge lists and trees. Here's a tip for how to define and declare basic structures:

Code:
typedef void * Any_ptr;
typedef struct list_struct * List_ptr;
typedef struct node_struct * Node_ptr;

typedef struct list_struct
{
    Node_ptr head_ptr;   /* restored: the insert functions below test it */
    Node_ptr tail_ptr;
} List;

typedef struct node_struct
{
    Node_ptr prev_ptr;
    Node_ptr next_ptr;
    Any_ptr data_ptr;
    int data_len; // storing the length of the data here lets us free the data memory automatically if we ever need to.
} Node;

/* we are now able to simply declare items as "Node" or "List" as well as "Node_ptr" and so on */

List_ptr CreateList(void)
{
    List_ptr ptr = malloc(sizeof(List));

    ptr->head_ptr = NULL;
    ptr->tail_ptr = NULL;

    return ptr;
}

Node_ptr CreateNodeFromData(Any_ptr node_data, int node_data_len)
{
    Node_ptr ptr = malloc(sizeof(Node));

    ptr->prev_ptr = NULL;
    ptr->next_ptr = NULL;
    ptr->data_ptr = node_data;
    ptr->data_len = node_data_len;

    return ptr;
}

OK, you can see that I define typedefs for both structures and pointers to those structures; this makes code much more readable and greatly reduces the need to declare stuff with * all the time.

You can see how creating a new empty list and creating a new element for the list are coded.

In this basic design the list nodes do not contain data, but they do contain a pointer to the data. This makes the design completely independent of the type of stuff you might want to put into the list.

The next step is to write functions like InsertAtHead() and InsertAtTail(), for example:

Code:
void InsertAtTail(List_ptr list_ptr, Node_ptr node_ptr)
{
    if (list_ptr->head_ptr == NULL && list_ptr->tail_ptr == NULL) // The list is currently empty
    {
        list_ptr->head_ptr = node_ptr;
        list_ptr->tail_ptr = node_ptr;
        return;
    }

    node_ptr->prev_ptr = list_ptr->tail_ptr;
    list_ptr->tail_ptr->next_ptr = node_ptr;   /* link the old tail forward */
    list_ptr->tail_ptr = node_ptr;
}

void InsertAtHead(List_ptr list_ptr, Node_ptr node_ptr)
{
    if (list_ptr->head_ptr == NULL && list_ptr->tail_ptr == NULL) // The list is currently empty
    {
        list_ptr->head_ptr = node_ptr;
        list_ptr->tail_ptr = node_ptr;
        return;
    }

    node_ptr->next_ptr = list_ptr->head_ptr;
    list_ptr->head_ptr->prev_ptr = node_ptr;
    list_ptr->head_ptr = node_ptr;
}

Papabravo:
I did state the conditions under which it would be reasonable.

mukesh1:
Advice is being given for two pointers; let's try to make the list now. Now, how will the new node be added at the end of the list?

C:
#include <stdio.h>
#include <stdlib.h>

typedef struct Node
{
    int data;
    struct Node *next;
} list;

void InsertAtNode(list *head, list *tail, int value)
{
    list *new = NULL;
    new = malloc(sizeof(*new));
}

int main(void)
{
    list *head = NULL;
    list *tail = NULL;
    InsertAtNode(head, tail, 10);

    return 0;
}

ApacheKid:
May I ask, why are you seeking an answer to this question? Are you building/designing something, or might this be some kind of college assignment you've been given?
**NIGEL HEWITT-COOPER**
CARNIVOROUS PLANTS
Gardening with Extraordinary Botanicals
**TIMBER PRESS**
PORTLAND, OREGON
_Cephalotus follicularis, Darlingtonia californica, Sarracenia flava_ var. _cuprea, Heliamphora nutans, Nepenthes rajah, Dionaea muscipula_.
Copyright © 2016 by Nigel Hewitt-Cooper. All rights reserved.
Published in 2016 by Timber Press, Inc.
Photo credits appear on page 223.
Thanks are offered to those who granted permission for use of materials. While every reasonable effort has been made to contact copyright holders and secure permission for all materials reproduced in this work, we offer apologies for any instances in which this was not possible and for any inadvertent omissions.
The Haseltine Building
133 S.W. Second Avenue, Suite 450
Portland, Oregon 97204-3527
timberpress.com
Book design by Stacy Wakefield Forte
Cover design by Kristi Pfeffer
Library of Congress Cataloging-in-Publication Data
Hewitt-Cooper, Nigel, author.
Carnivorous plants : gardening with extraordinary botanicals/Nigel Hewitt-Cooper. —First edition.
p. cm
Includes index.
ISBN 978-1-60469-758-2
1. Carnivorous plants. I. Title.
SB432.7.H48 2016
635.9'3375—dc23
2015029658
A catalogue record for this book is also available from the British Library.
To my family, who have tolerated my botanical madness for the past three decades—especially my wife Polly and my three children, Tom, Lily, and Daisy. And to Uncle Jim, who bought me that first Venus flytrap in 1981.
CONTENTS
Introduction
Carnivorous Plant Basics
Cultivation in the Home & Garden
Where to Grow Plants of Prey
Year-Round Care & Maintenance
Common Carnivores for Easy Growing
Taking Things to the Next Level
Children, Beginners & Education
Resources
Recommended Reading
Acknowledgments
Photography Credits
Index
INTRODUCTION
**"Nigel, It is obvious you share my passion and knowledge of the species. Long may you continue to do so."**
**—ADRIAN SLACK**
This brief message came to me as this book was in production. I have made the acquaintance of Mr. Slack, and the fact that he considers my passion and knowledge of these species to be in the same realm as his is deeply gratifying. He is without doubt the preeminent name in the modern-day cultivation of carnivorous plants. His books fueled my early interest; in fact, his _Insect-Eating Plants and How to Grow Them_ was the last book written by an Englishman on carnivorous plant cultivation, some thirty years ago.
My interest in the botanical world (today I admit to a straight-up obsession) came at an early age. I grew up in the once-leafy London suburbs, where my grandmother, who lived locally, was one of those people who could grow virtually anything with a good degree of success. She had a small wood-framed greenhouse in her garden which was barely six feet (two metres) square. As a child, I would delight in accompanying her there to water the occupants within.
Even at that young stage, I found the incredible variation of form and colour fascinating, and as I looked upward through the green canopy in this tiny Eden, I recall being awestruck by the beauty of the diversity.
When I was seven, an uncle bought me my first Venus flytrap. At that time, such plants (almost certainly ripped from the wild, unfortunately) could be found in garden centres, protected under unnecessary plastic domes—further prompting the curiosity of my seven-year-old mind. This poor specimen, like millions of its brethren, was doomed to die. But for me, it was an introduction to a group of plants which was to become a fascination that has captivated and at times frustrated me for thirty-five years. The largest obstacle to overcome in the early 1980s was the lack of information available on the subject. Aside from a handful of dedicated fanatics, carnivorous plants were generally unknown to the wider public; even now there are many who could not name an example besides the ubiquitous flytrap.
Back then, Adrian Slack's groundbreaking _Carnivorous Plants_ , published in 1979, was the only English-language book on the subject that was up to date. This title not only (though predominantly) covered the taxonomic nuances of these plants, but also for the first time touched, albeit briefly, on cultivation.
Only a year earlier, the UK's Carnivorous Plant Society had been formed, its aim to bring together like-minded individuals who had been bitten by the bug, so to speak. Through the organization's journals and newsletters one could read of other enthusiasts' successes and failures, the latter of which were common for us all.
_Dionaea muscipula_ 'Sawtooth'.
The paucity of information and available plant material meant that a grower's primary goal was to maintain plants in cultivation. Any experiment that might result in a loss was out of the question, and the suggestion that temperate species could be grown outside was the furthest thing from our minds.
Slack followed up his first book with _Insect-Eating Plants and How to Grow Them_ in 1986. As suggested in the title, the book's focus was cultivation; it brought together early successes and served to introduce the hobby to a wider audience. My own collection grew as quickly as it could with the limited number of plants available, and I had my own personal triumphs and disasters, at times in equal measure. But the interest remained resolute.
By the early '90s there were a number of books, both botanical and with an emphasis on cultivation, written by authors around the world. The number of species in cultivation increased exponentially as more people became hooked on the diversity and beauty of such unusual plants. This growth, combined with the advent of the Internet and the ease with which information can be shared, has meant an explosion of knowledge over the past fifteen years. The combined work and travel of a comparatively small number of intrepid adventurers has also resulted in a large number of new species being described since 2000.
It seems there remains a gap in the market, however—a lack of information in print as to the general cultivation of carnivorous plants. I still hear this from people, as well as the problem that the huge amount of information available online is bewildering and often contradictory.
The purpose of this book is to act as both a general introduction to the genre of carnivorous plants, and as a guide for the more advanced grower who may have had a few successes and wishes to delve deeper into this peculiar and somewhat alien world. Cultivation is the primary intent, and I will cover the more commonly and easily grown representatives of a number of different genera, relying on plants that can successfully be grown in the home and garden.
There are a number of myths and misconceptions that surround carnivorous plants, in fact probably more than any other horticultural grouping, which does little to endear them to potential growers. I hope to dispel many of these erroneous assumptions.
To many, the mention of carnivorous plants evokes images of hot, tropical conditions. There are species that thrive in such environments, but most that you are likely to encounter and grow successfully are in fact temperate.
Though few people contemplate growing these plants in the confines of an average house, let alone cultivating them outside, I want to challenge such views and attitudes. Some dismiss carnivorous plants as novelties; many pass my displays at flower shows, declaring, "Oh, I don't like those things." When introduced to the intrinsic beauty and wide-ranging diversity of these plants, however, I find that most people can appreciate their grace and elegance.
I will also challenge the outdated opinion that carnivorous plants are strictly greenhouse inhabitants. Indeed, a good many are candidates for the garden, and hence deserving of a place alongside today's favourite ornamental plants.
The stocky pitcher of _Nepenthes mira_.
CARNIVOROUS PLANT BASICS
Man has been aware of carnivorous plants for centuries, though their feeding habits weren't categorically confirmed until Charles Darwin wrote _Insectivorous Plants_ , published in 1875. In Henry Lyte's _New Herball_ of 1578, the widespread sundew species _Drosera rotundifolia_ is pictured and described, albeit somewhat erroneously in the chapter on mosses, despite the recognition that it possesses white flowers.
This herb is of a very strange nature and marvellous: for although that the sun does shine hot, and a long time thereon, yet you shall find it always moist and bedewed, and the hairs thereof always full of little drops of water: and the hotter the sun shineth upon this herb, so much the moistier it is and the more bedewed, and for that cause it was called _Ros Solis_ in Latin, which is to say in English, The Dew of the Sun, or Sun Dew.
A charming description of what was recognized then as a somewhat unusual plant—but no mention of its carnivorous habit, nor any clue that Lyte noticed a presence of insects caught on the leaves in his observation of the plant.
Early references to _Sarracenia_ also do not take into account the genus's potential carnivorous habit. The botanist Mark Catesby in his _Natural History of Carolina, Florida, and the Bahama Islands_ , published in 1754, illustrates both _S. flava_ and _S. purpurea_ , and even goes so far as to state of _S. purpurea_ that the leaves contain water and that they seem "to serve as an asylum or secure retreat for numerous insects, from frogs and other animals which feed on them."
_Sarracenia flava_ , from Mark Catesby, 1754.
_Drosera rotundifolia_ as illustrated in Henry Lyte's _New Herball_ , 1578.
The first suggestion that these plants may have evolved elaborate traps to catch insects was made in the 1760s when the Venus flytrap was dubbed the "fly trap sensitive" by Arthur Dobbs, governor of North Carolina at the time. But the notion that the plant was deriving some nutritional benefit wasn't mentioned until 1770, when the plant was formally described as _Dionaea muscipula_ by John Ellis, a textile merchant and naturalist. In his description, he states that "nature may have some view toward its nourishment, in forming the upper joint of its leaf like a machine to catch food."
This view wasn't widely accepted. At a time when all aspects of nature were considered to be God's work, the notion that a plant could devour animals was quickly dismissed, even by the great Carl Linnaeus, with whom Ellis had corresponded.
_Sarracenia purpurea_ , from Mark Catesby, 1754—protection or danger?
Ellis's illustration of the Venus flytrap, 1770.
For the next century, the notion of plant carnivory was kicked to the long grass to languish alongside other absurd notions of the day. However, the discovery of new species (such as the gargantuan pitcher plant _Nepenthes rajah_ on the island of Borneo) rekindled interest, and it was then that Darwin embraced the genre. In 1859 he had published his _Origin of Species_ , striking the first fracturing blow to the established views of nature and humanity's place in the world. The book's reception had at first been hostile, but since he was an established and respected scientist, his peers couldn't completely dismiss his ideas. Who better to tackle the delicate subject of carnivorous plants? _Insectivorous Plants_ was the first detailed study on the topic, with much of the volume concentrating on his study of the sundew _Drosera rotundifolia_ , but he also examined and included a number of other genera.
On the basis of his studies, Darwin concluded, "There is a class of plants which digest and afterward absorb animal matter." Well, it couldn't be clearer than that, and Darwin himself proclaimed, "I care more about _Drosera_ than the origin of all the species."
The discovery of a huge number of carnivorous plants from around the world fuelled a frenzy of plant collecting during the Victorian era, and large collections were amassed by those with the necessary wealth to both obtain and maintain them. Nurseries such as the famous Veitch of Chelsea, which also traded in Exeter, introduced hundreds of new exotic species to cultivation, even having a dedicated _Nepenthes_ house.
This time of plenty for the fortunate few came crashing down, however, with the start of the First World War, and the amassed plant collections were lost as maintenance costs soared and interest in caring for the plants waned. Interestingly, the Chelsea Veitch nursery ceased trading in 1914, while its Exeter branch continued to operate. Most of the privately held collections were lost, and serious interest in carnivorous plants wasn't rekindled for decades.
The next great work on the subject came in 1942, with the publication of Francis E. Lloyd's _The Carnivorous Plants_ , which concentrated on taxonomy rather than cultivation. Continuing the work of Darwin and more obscure studies completed in the intervening period, Lloyd's work is still regarded as relevant today, and helped bring the plants back into the limelight. A small number of individuals formed the International Carnivorous Plant Society in 1972, and that combined with Adrian Slack's book in 1979 gave the hobby a shaky rebirth.
A nineteenth-century illustration of the Veitch nursery _Nepenthes_ house.
WHERE ARE THEY FOUND? TROPICAL VS. TEMPERATE
Carnivorous plants enjoy worldwide distribution. Across the temperate regions of Asia, Europe, North America, and other similar climates, they are typically bog plants, inhabiting wet, peaty areas—environments which certainly in Europe and North America have been greatly reduced due to land drainage and peat extraction. Peat is formed very slowly by the decaying of vegetable matter, predominantly sphagnum moss. The process is lengthy because of the absence of oxygen, due to the waterlogged ground conditions. It is this slow formation that leads peat to be generally considered non-renewable, forming at the rate of around 1/25 in. (1 mm) per year.
Settings such as these are generally open areas, acidic and very low in nutrients; typical vegetation includes grasses and sedges, mosses, and their most unusual residents, carnivorous plants. Which leads to the fundamental question about our subject matter: Why _are_ these plants capable of consuming creatures for sustenance?
As in so many cases, where there is a need, nature provides a solution. Boggy areas are so limited in nutrients (especially nitrogen and phosphorus) that over many millennia, some of the plants in these environments adapted to catch and digest animals to supplement the meagre diet provided by the conditions. The ability to digest mostly insects affords the plants an advantage over their non-carnivorous neighbours.
A sphagnum bog in Southern England. The red-coloured plants are the English sundew, _Drosera anglica_.
In other areas of the world, the habitats in which carnivorous plants are found vary from those in temperate regions. In countries such as Australia, South Africa, and Mexico, habitats can be only seasonally wet and then dry part of the year. This presents a challenge for the plants since a good amount of water is required during their active carnivorous phase. Certainly in the genus _Drosera_ there are some interesting environmental adaptations to surviving the arid season; a couple of droseras are annual—which means they germinate, grow, flower, and ultimately produce seed in a single season before dying. The seeds lie on the ground until the rains return, when the cycle repeats. The majority of carnivorous plants, however, employ additional tactics to survive. Some produce long, thick, fleshy roots which penetrate deep into the soil, where they retreat to survive the dry season. One group of Australian sundews goes a stage further, producing an underground tuber each year.
The Portuguese dewy pine, _Drosophyllum lusitanicum_ , an unusual sticky-leaved plant which is found in coastal Portugal, southern Spain, and northern Morocco, grows mainly on arid slopes where summers are hot and dry, and winter temperatures can drop to freezing. Again, a substantial and expansive rootstock sustains the plant in its harsh environment.
Cloud forest at the base of Mount Roraima, Venezuela.
A large specimen of _Drosophyllum lusitanicum_.
Then there are the stereotypical tropical environments that most people assume are the natural habitats of all carnivorous plants. In reality, only a handful of genera call the lowland tropics home. The rainforest evokes images of hot, steamy jungles, dense with vegetation, inhabited by leeches and other creatures ready and waiting to bite the unwary visitor. This environment (hot and humid year-round) is found at low altitude, and comparatively few carnivorous plants grow here—a few species of the tropical pitcher plant ( _Nepenthes_ ) in Southeast Asia, and some bladderwort species ( _Utricularia_ ), but not many other than those.
Perhaps surprisingly, looking a bit higher in altitude, where conditions are not only humid but cooler, one finds many additional species, including the majority of those in the genus _Nepenthes_. While there are currently around 150 species of _Nepenthes_ , the total is increasing all the time, as botanically unexplored mountains are conquered and their green treasures discovered. In this habitat, trees are festooned with mosses, epiphytic plants such as orchids, and in some cases carnivorous bladderworts, which cling to branches.
So, generally speaking, native habitats can be roughly divided into tropical or temperate. For ease, let's assume the plants I've just mentioned fall into the former, and all others, the latter. Temperate dwellers fill the other ecological niches and contain among their number those most likely to be cultivated successfully in Europe and the United States.
Now, I'm not saying that everything other than the tropical species can be grown together, or outside, but with a little consideration to requirements, you will be surprised what you can actually grow at home.
WHAT DO THEY EAT, AND CAN THEY BE USED AS INSECT CONTROL?
It could be argued that the term "carnivorous" is something of a misnomer. It would probably be more apt to refer to carnivorous plants as insectivorous, as the bulk of these plants' diet is made up of insects. There are exceptions, however—in some cases, quite extraordinary ones.
Defining carnivorous
To begin, one must consider the attributes necessary for a plant to be classified as carnivorous. The following are all necessary criteria.
**AN ATTRACTION, SOMETHING THAT ENTICES INSECTS TO THE PLANT.** This is usually in the form of sugar-laden nectar, produced in copious amounts by sarracenias and other pitcher plants. Nectar is also sometimes used in conjunction with ultraviolet patterning, which renders leaves highly visible to insects.
**A METHOD BY WHICH THE PLANT CATCHES AND HOLDS ITS PREY.** This could be a liquid-filled bath, a mucilaginous glue, or even a sudden restraining movement.
**A WAY OF KILLING AND DIGESTING THE ANIMAL.** In carnivorous plants this is generally achieved by smothering or crushing. Digestion is almost exclusively through enzymatic action. A number of distinct enzymes have been isolated from different genera.
**THE ASSIMILATION OF VARIOUS PRODUCTS OF DIGESTION INTO THE PLANT FOR ITS BENEFIT.** As carnivorous plants exhibit a wide range of trap types (which will be explained later in more detail), the range of prey they capture also varies. Bladderworts ( _Utricularia_ ) possess generally tiny traps around only ¹⁄₁₀ in. (2½ mm) in diameter. They capture correspondingly small prey such as protozoans and tiny crustaceans; larger species occasionally add mosquito larvae.
A dissection of a _Sarracenia_ pitcher in the autumn reveals the plant's diet of flies and similar-sized insects.
_Nepenthes ×mixta_ with a somewhat surprising prey item.
Larger-growing plants, and those of greater interest for us, catch larger insect matter. These are carnivores such as sundews, Venus flytrap, and _Sarracenia_ pitcher plants, all of which are capable of catching large numbers of houseflies, bluebottles, and wasps. Pitcher plant leaves become gorged over the course of their season, due to their capturing efficiency.
Perhaps most intriguing of all are plants capable of enticing and holding larger creatures. If ever there were true carnivores among plants, it would be select species of the tropical pitcher plants ( _Nepenthes_ ). Their traps range greatly in size, from as small as a thumbnail to the cavernous, bucket-like pitchers of _Nepenthes rajah_. Insects are the primary diet for the majority of species, but as the traps become bigger, so do the animals caught. Small frogs, lizards, and rodents—even rats—have been found drowned in the fluid within the pitchers' traps. In cultivation these plants can occasionally catch mice.
I had a pitcher plant that I once hung in a tree outside in my tropical garden, during the summer months. One afternoon as I was walking by the tree, I noticed the tail feathers of a common blue tit ( _Cyanistes caeruleus_ ) protruding from the plant's pitcher.
This of course is far from the norm. Indeed, it was one of only a very few documented cases of birds becoming trapped by carnivorous plants in cultivation in Europe.
Exactly how the unfortunate bird came to meet its early demise is unknown. One thing is certain, however: the pitcher plant was not actively attracting birds, or there would be more than a few known cases. I believe that the bird was attracted by something else, most likely the insects being drawn to the plant. I can imagine that while perched on the front rim of the pitcher's mouth, it leaned forward to retrieve an insect, became wedged in the trap, and drowned in the fluid.
A Venus flytrap with a small lizard. Larger prey such as this are somewhat unusual.
The large size of the prey item, in comparison to the relatively small dimensions of the trap (some 6 in. [15 cm] from the base to the bottom of the lid), meant that the leaf decayed before any nutritional gain could be made by the plant, but it does demonstrate how some of these unusual captures can occur. This type of event isn't limited to the larger species of carnivorous plant. A number of years ago, when I operated my nursery from Surrey, a few miles south of London, I found several small lizards caught by Venus flytraps.
The diet of these plants is almost exclusively made up of insects, either crawling or flying, but they have been observed in the wild capturing small frogs—which for a large plant makes a convenient meal, as long as the animal doesn't push itself out of the leaf with its powerful rear legs. The capture of lizards is another unusual occurrence, certainly in cultivation, and again I suspect this is the result of a blunder rather than the plant actively attracting such animals.
Assuming that these plants stick to their usual diet, they make both effective and beautiful methods of insect control around the home. On a sunny windowsill, mid-height sarracenias, some species of sundew, the cobra lily, and the Venus flytrap are effective at capturing houseflies and bluebottles, and are certainly more interesting and aesthetically pleasing than those sticky-tape traps. They're also friendlier to us than chemical sprays.
In an environment such as a sunny conservatory or greenhouse, the range of plant sizes can be increased to include larger species and hybrids of _Sarracenia_ —this is where an impressive display of these plants can be staged. Imagine a selection of brightly coloured, organ pipe–like pitchers in a crescendo, from the smaller species at the front up to the metre-high, fluted leaves of some forms of _Sarracenia flava_. A sprinkling of sundews and Venus flytraps completes the display. With the embellishment of a few props—cork bark or clean, salt-free driftwood—you have something capable of making the neighbours envious and the local insect population fearful. A similar setup can be achieved outside, especially in a sunny, sheltered position where pitcher plants make ideal candidates for containers on patios and decks, and even as interesting and unusual specimens for the margins of a pond.
CURRENT UNDERSTANDING
Our knowledge of the cultural requirements of these plants has increased immeasurably over the past thirty years. Plants which in the 1980s were considered to require temperatures above freezing over the winter months are in fact quite hardy. As the availability of cultivated material increased, it enhanced our understanding of the scope of their tolerances. This broader range generally comes as a surprise to people but shows just how much more widely carnivorous plants could and should be grown.
Finally, I should make a few comments concerning the names of the plants. All plants and animals have Latin names, which follow the binomial system devised by the Swedish botanist and zoologist Carl Linnaeus (1707–1778). Names consist of two parts, the generic (genus) and specific (species) names. The easiest way of understanding this is to consider the two names as make and model.
Species
Let's take the Venus flytrap, whose Latin name is _Dionaea muscipula_. _Dionaea_ is the genus (make), and within that we have the species (model), which is _muscipula_. As there are no other species, this is called a monotypic genus.
With North American pitcher plants ( _Sarracenia_ ), there are eight species, and so we have _S. flava_ , _S. leucophylla_ , and others. You will often see the genus name reduced to its first letter (which is always a capital), but the species name is almost always written in full, beginning with a lower case letter.
The concept of allowing carnivorous plants to freeze was alien until fairly recently.
When a species is written down, it is correct to follow it with the name of the person responsible for describing it. So we have _Sarracenia flava_ L., in which L. is the abbreviation for Carl Linnaeus himself. For the Venus flytrap, it is _Dionaea muscipula_ Ellis. However, this rule is rarely adhered to and is considered unnecessary in anything other than scientific or taxonomic texts.
Varieties
Sometimes we have a situation in which a species is defined further, because of a certain stable characteristic which differentiates it, but not to the extent to which it can be regarded as a separate species. _Sarracenia flava_ is a prime example, as there are seven named, naturally occurring varieties. These are written as _S. flava_ var. _flava, S. flava_ var. _ornata_ , and so on. The term "var." stands for variety, and in this instance the varieties differ little in stature but are distinct in their colours and patterning.
Subspecies
If a plant has distinct features which go further than simply, say, colour, they are categorized by subspecies, and again we can use _Sarracenia_ to demonstrate. _Sarracenia purpurea_ has two of these subspecies, which are written _S. purpurea_ subsp. _purpurea_ and _S. purpurea_ subsp. _venosa_. Although the two plants are clearly the same species, they have different natural ranges and slightly different forms, one with slimmer, hairless (glabrous) leaves, and the other with much larger, more voluminous leaves, which often have a covering of fine, short hairs (pubescence).
Forms
Occasionally we see odd forms which are given legitimate status if they are naturally occurring, and we can again use _Sarracenia purpurea_ to demonstrate. Each subspecies has an all-green variety which lacks the red pigment anthocyanin, the chemical compound which gives most plants their red colouration (anthocyanin is also used as a food colourant). Such a plant could be regarded by the layman as akin to an albino, and the easiest way to reliably identify this anomaly is to look at the emerging new growth, which is usually flushed red but in these individuals is always lime green. I mention this as there are sometimes veinless green plants found which still contain anthocyanin.
So we have _Sarracenia purpurea_ subsp. _purpurea_ f. _heterophylla_ (the f. standing for forma), and _S. purpurea_ subsp. _venosa_ var. _venosa_ f. _pallidiflora_ , wherein the subspecies itself is broken down into two distinct forms.
Hybrids
A species is a plant in its purest form. A hybrid is a cross between two or more species; in the case of _Sarracenia_ , often between other hybrids, as they are all interfertile. A primary hybrid—that is, a cross between only two species—is indicated by an × between the two names. So the cross between _S. flava_ and _S. purpurea_ is _S. ×catesbaei_ , and the cross between _S. flava_ and _S. leucophylla_ is _S. ×moorei_. In the case of crosses between two hybrids, the name can be written as, for example, ( _S. ×catesbaei_ ) × ( _S. ×moorei_ ). Alternatively, you can list the individual components: ( _S. flava × S. purpurea_ ) × ( _S. flava × S. leucophylla_ ).
Cultivars
Finally, we come to cultivars. These are generally plants of horticultural origin and are names given to individual clones (genetically distinct individual plants) with outstanding characteristics. There is no limit to the number of cultivars that exist, even within an individual species or hybrid. They are given non-Latin names and are frequently named after people—for example, my own _Sarracenia_ 'Joyce Cooper', in which the name of the plant is within single quotation marks, with capital lettering at the beginning of each word.
Here is a vitally important thing to remember with cultivars. Virtually all are so named because of a unique character, and so to preserve these attributes, cultivars can _only_ be propagated vegetatively, that is by cutting, division, or tissue culture (micropropagation), in which tiny fragments of the growth points are grown in sterile jars on a nutrient jelly.
Plants should never be grown from seeds produced by a cultivar, nor labelled, swapped, or sold with that name. This applies to all plants, not just those we are covering in this book. That isn't to say you can't use a cultivar as a parent to produce seed; indeed there are many outstanding plants that can be raised, but the resultant offspring are to be labelled accordingly.
Very occasionally, a plant is named which has a unique characteristic that is genetic and passed on to its offspring. If it is specified in the cultivar description that any plant with this trait can be labelled as such, then it is fine to do so.
Cultivars need to be registered with a recognized body; in the case of our plants, that is the International Carnivorous Plant Society. They can also be named and described in the published catalogue of a nursery, or in a published book.
A mixed planted basket makes a striking garden addition.
CULTIVATION IN THE HOME & GARDEN
As this is a book aimed at those with little or no experience growing these remarkable plants, I'll endeavour to keep most instructions as simple as possible. That's not because I like to patronize people, but because I can well remember how baffled I was, when first drawn to this hobby, by the advice of those determined to overcomplicate matters.
PROCURING PLANTS
When purchasing these plants, it really is important that you buy from a reputable specialist, for a couple of reasons. First, you are not just buying an item on a shelf, you are buying the care and knowledge of an expert grower, someone who has been through the rigours of success and many failures to develop a certain level of know-how. Good nursery folks will share this knowledge, so you are able to grow your plants successfully. Second, and just as important, you will need plants that are hard grown (grown in harsh conditions to enhance adaptability). This is the opposite of soft grown, and refers to the conditions in which the plants have been raised. In soft conditions, environments are kept at optimal levels to produce quick growth. Constant warmth and controlled light levels ensure the plants are grown in the minimum amount of time at minimum cost before they are sold. If you consider that most of us don't have these controlled conditions, you will realize that these plants won't be too happy in your house or garden and are much more likely to fail in the sudden change of conditions.
Things are different at a specialist nursery. Temperature control means opening the windows when it gets too hot to work in the greenhouses. As for heating in the winter, we don't bother if it isn't an essential requirement. So if you buy temperate plants from a specialist in the winter months, there is usually very little to see in the way of growth.
GROWING PLANTS FROM SEED
Some like the satisfaction of growing things from seed. This is possible with carnivorous plants, but be forewarned: it is a long road from seed to adult plants. Years, in fact, in some cases. If that doesn't deter you, here are some fundamentals.
Before you can grow plants from seed, seed needs to be produced. For seed to be produced, pollination must happen. This is the transference of the male pollen to the female stigma. In nature, it is usually facilitated by an animal, usually an insect. This can still happen in many species within the confines of cultivation, especially if plants are outside where the indigenous insect population willingly performs the task. Cross-pollination happens when the pollen of one flower is transferred to the stigmas of another.
Plant reproduction
Most plants are hermaphroditic, which means their reproductive male and female parts are found in the same flower. Dioecious plants are much scarcer, and have male and female flowers on separate plants, necessitating both a male and a female plant for pollination. The only dioecious carnivorous genus is _Nepenthes_. There are a few terms concerning different types of flowers which are of relevance to us.
**SELF-FERTILE** A self-fertile plant is compatible with itself. While it is still preferable to pollinate with a genetically different individual, it can produce seed all on its own. If there is only one flower present, or you just have one plant, it can be self-pollinated or selfed as it is often known. The advantage of cross-pollination, however, is that the resulting offspring will be more vigorous. Selfed plants are in effect inbred, and therefore lack genetic variation.
**SELF-STERILE** A self-sterile plant is not compatible with itself and requires pollinating with another genetically different individual.
**SELF-POLLINATING** Some flowers have the ability to pollinate themselves to produce seed. These plants will still flower and seek an external pollinator, but if one is not forthcoming, they can complete the process themselves, often as the flower closes, to ensure that seed is produced. Some sundew species have this ability.
Manual pollination
Manual pollination is needed when a plant requires an external pollinating agent—in nature an animal, or in cultivation, the assistance of the grower. This is an easy process, which simply involves using a small, soft-bristled paintbrush to gently brush the stamens and collect the pollen. It is then transferred to the stigma of another flower, and if you have more than one genetically different individual clone, this is even better. The pollen will stick to the stigma and the job is done. Repeat this process daily three or four times and you will have a good seed set. For plants with small or extremely fine flowers, it is often easier to simply rub the faces of the two flowers together.
People are often wary of the perceived difficulties of growing all kinds of plants from seed (not only carnivorous plants). Indeed, there is a valuable lesson here which is relevant to a number of the plants we will cover in this book.
Collecting and sowing seeds
When virtually any seed is shed by a plant, it is in a state of dormancy, and needs to be awakened. Seeds of temperate genera, such as _Darlingtonia, Dionaea_ , and _Sarracenia_ , are usually shed in the autumn, at the end of the growing season. They have been developing throughout the summer months; remember, these plants flower prior to any leaf production. Because of their small size, if these seeds were to germinate immediately, they would run the very real risk of being killed by the ravages of winter. That's why they germinate in the spring, once the danger of freezing conditions has passed.
Seeds do not germinate immediately after shedding, because they require a certain treatment to break this chemically induced dormancy. In the case of temperate plants, the treatment involves a period of cold to break down the chemical inhibitors contained in the seed, which facilitates germination. This is known as stratification; its practice requires nothing more technical than allowing the seed to experience a cold period.
After collecting seeds, you must ensure that they are dry. Lay them on a sheet of paper for a few days on a windowsill. Once seeds are dry, store them in a paper or glassine envelope. Do not use plastic bags for storage; there is still an element of water in the seeds and they can rot if not allowed to "breathe." Keep them in the refrigerator for six to eight weeks prior to sowing in the spring (late March to late April).
Sow them directly onto the surface of the same potting mix as the adult plant. Do not cover. Depending on the number of seeds you have, this can be done in a pot, a shallow seed tray, or in a propagator (a covered, often heated container filled with earth or potting mix, used for germinating or raising seedlings or plants). Moisten generously from above with a watering can rose, then set in water, in a sunny position under cover, ensuring they remain wet at all times (as seedlings, even the briefest period of drying out will kill them). Don't place the seeds outside, as they are likely to be scattered in heavy rain, and remember that birds find seeds particularly appetizing. Germination should occur in four to six weeks.
If you don't wish to sow seeds the spring after they've been collected, they may remain in the refrigerator, but try to not store seeds for more than two or three years, as their viability (ability to germinate) reduces with each passing season. Some species of _Nepenthes_ are viable only for a matter of a few weeks, and a couple of utricularias actually produce live green seeds which have to fall onto wet soil or water immediately once they are shed.
Avoid at all costs those packeted seed kits. The seeds will likely have been sitting on shelves at room temperature for many months, and are often already dead when purchased. This can be especially disappointing if they have been bought for a child. If your plan was to buy such a seed kit, make the investment and buy a fully grown plant instead.
The virtue of patience
With many carnivorous genera, especially _Darlingtonia, Sarracenia_ , and _Dionaea_ , slow growth is the norm. In the case of _Darlingtonia_ and _Sarracenia_ , the time it takes for a seed to grow to adult size can be five to eight years; three to five years for a hybrid of _Sarracenia_. For _Dionaea_ , you will wait three to four years.
When cobra lilies and sarracenias first germinate, you will notice two small green seed leaves, followed by tiny tubular pitchers. These increase in size on a yearly basis, with the plants gaining height annually until they reach adulthood. As seedlings and young plants, they need to be treated exactly the same way as adults. They can be left in their pot or tray for two to three years before being pricked out and potted individually.
For dionaeas, you will again see two seed leaves followed by tiny traps, which will often become larger with each subsequent leaf, though some will stay small for longer—really, until they start catching prey. At that point, they will increase in size more rapidly.
An alternative method of sowing, one which I frequently use for convenience, is to sow in the autumn in the same way as mentioned, leaving seeds in the greenhouse to stratify over winter. Seeds will then germinate in the spring in their own time when conditions suit. This simply gives me one less task to do at a very busy time of year.
Adaptation for the southern hemisphere
In the southern hemisphere, you would sow in late September to late October. In countries such as Australia the cultivation of temperate species carries its own issues, such as providing a suitable dormant period; winters are just too warm in many areas. I have heard of growers in the hotter regions of the country unpotting their plants, washing off the planting mix, wrapping them in sphagnum, and storing in the refrigerator for three months. A rather extreme solution, but it reportedly works.
THE THREE GOLDEN RULES FOR TEMPERATE SPECIES
We've already seen that temperate species of carnivorous plants aren't the hothouse softies we so often assume they are, but they do perform best when the following important rules are observed.
As we are primarily considering the temperate species of carnivorous plants, I'll introduce you to the three golden rules for succeeding with them. Other plants of interest will be covered later, as their requirements will differ.
Golden rule number one: Full sun
The bogland habitat is too poor to sustain larger plants, creating an open environment with low-lying vegetation and little except occasional grasses to afford any shade. Plants there have evolved and adapted to tolerate high light levels, and this must be replicated in cultivation. Plants grown in insufficient light are usually pale green and insipid, bereft of the colours which make them so interesting from an aesthetic point of view, whereas those in good light are bright and vivid.
Species of _Sarracenia_ fare even worse, the upright plants suffering particularly. Any pale green leaves produced will struggle to remain upright, falling over as they stretch (or etiolate, to give the phenomenon its correct term) in their search for more light. This recumbent position renders the traps quite useless.
To succeed, plants require a good six hours of direct sunlight to maintain healthy growth and colouration. Don't be frightened of overdoing it—in short, you can't. Keep an eye out for sudden extreme increases in temperature and light levels early in the season, though, because developing leaves can occasionally scorch in such periods. If this happens, just remove the damaged leaves and the plant will resume growth.
Two examples of _Drosera capensis_. The sundew on the right has been shaded. It's a more lax, open specimen with less colour, and isn't producing sufficient glue on its leaves.
With many species, especially those of _Sarracenia_ , the more light the plant receives, the better the colouration, and for a few varieties the level of colour seen on wild plants is difficult to achieve in cultivation, especially in the often grey English climate. Plants grown farther south in Europe and in the United States seem to fare a little better when it comes to reaching their potential in this respect. To put the light issue into context, my nursery receives in the region of twelve to fourteen hours of direct sun at the height of summer.
A shaded sarracenia lacks red colouration, and the background colour is a dull green, not the vibrant yellow that's characteristic of _Sarracenia flava_.
If you aren't in a situation to provide the levels of light required, all is not lost, but the number of species you can grow is somewhat curtailed. A couple of sundew species, the Mexican butterworts, and a number of terrestrial bladderworts are all suitable candidates for bright but sunless windows.
ARTIFICIAL LIGHT
There are times when some form of artificial light will be required—for instance, if you are growing your plants in a terrarium, if they don't receive enough natural light due to position, or if you are trying to encourage growth early in the season. Understanding a little about artificial light and how it relates to natural light can help you choose the right setup.
Plants require light to undertake the process of photosynthesis, light that is supplied in the visible spectrum from the sun, and of the type that we as humans can see. This fairly narrow bandwidth appears at first glance to be white light, but as we know, it can be split into the spectrum of colours—think of a rainbow.
The sun supplies light across this spectrum from violet to red, and so plants receive the whole range of colours. They rely on the violet end of the spectrum for healthy growth, and the red end to facilitate flowering.
Artificial lighting typically provides light from a very narrow bandwidth, often from either the red or violet end, which will prompt either growth or flowering. Lighting is available, however, that has a wider bandwidth and is therefore more suitable as an all-around solution. Let's have a look at some of the more common options.
**FLUORESCENT LIGHTING** is the best option for most hobbyists and for terrarium cultivation. It is available in a variety of lengths, and is relatively cheap to purchase (and certainly cheap to run). It does require a starter unit (ballast) which can easily be concealed, and several tubes can be mounted in the lid, allowing tubes from different ends of the spectrum to be used together. Full spectrum lights are also available. A trip to the nearest aquarium or reptile supplier will give you a good idea of options.
A reflector mounted above the lights is essential, and the plants will need to be positioned 6 to 12 in. (15 to 30 cm) below them. Fluorescent lighting operates at a low temperature, so there is little risk of burning the leaves unless they come in direct contact with the tubes.
**LED (LIGHT-EMITTING DIODE) LAMPS** are fairly new technology as far as grow lights are concerned. They have the advantage of being cheap to run individually, but more important, of being very bright collectively, while producing little or no extra heat—though this could also be construed as a negative. I've not used LEDs personally, but there are a number of options and a dizzying amount of conflicting information, facts, and figures. In light of this and the fact that the technology seems to be changing and progressing quickly, a little research will be required prior to purchase. There are many types of LED units available on the Internet; keep in mind that you usually get what you pay for.
**HID (HIGH INTENSITY DISCHARGE) LIGHTS** are the units used in commercial nurseries, and hence suited to larger greenhouse applications. They are completely unsuitable for terrarium use.
Sodium vapour HID lamps produce light from the red end of the spectrum, and are used extensively to produce flowering plants for the mass market. The light they produce is warm, however, and not conducive to viewing plants.
Metal halide HID lamps have a broader spectrum that is concentrated in the blue end, so they encourage growth. The light they produce appears almost white to human eyes and lifts the colours of the plants. Subjects photographed under this type of light, though, appear yellow. In a greenhouse situation they are a handy addition to natural light, and I use them occasionally to kick-start the growth of sarracenias if I have an early flower show at which I'm exhibiting.
HID lights are expensive to purchase; require a large, heavy ballast that can buzz irritatingly; and, at 400 or 600 watts per bulb, are very expensive to run. In short, they are of little use to the amateur hobbyist.
All these light types will need to be set on a timer to allow a photoperiod (period of time each day that an organism receives illumination) of twelve to fourteen hours per day, unless they are being used as supplementary lighting. Standard plug-in timers for fluorescent tubes and LED lights are fine. Consult a well-stocked electrical supplier, preferably in conjunction with a qualified electrician.
Golden rule number two: Keep plants wet with rainwater
I say rainwater, but this can be widened to include other options, so don't panic if you live in a flat and have no capacity to collect it. Water is an important component, though, and one which will kill your plants if you don't get it correct. People constantly ask me at flower shows why their Venus flytrap died, and the answer is invariably because they used tap water.
What's the problem? The chlorine level is a consideration, but this will dissipate if water is left to stand for twenty-four hours. The main issue is the hardness of tap water, which is essentially the level of dissolved minerals it contains. In Britain and the United States, our drinking water is generally considered to be hard. Some areas are fortunate to have soft water, but even in those areas I would shy away from using it.
A common misconception about tap water is that it is okay to use if it has been boiled. People assume this removes all impurities; unfortunately it isn't that simple. Tap water contains two types of hardness: temporary and permanent. Temporary hardness precipitates out as the water boils. In hard water areas, this can be seen as scale in the kettle. It is the carbonate element of the hardness. Permanent hardness, however, isn't removed by boiling. This sulphate hardness remains in the water and actually increases in concentration as water boils and evaporates, rendering the water even more unsuitable. Filtering tap water is also not recommended; filters often use salts to soften the water, and some remove only the temporary hardness. Bottled mineral and spring waters should be avoided too, as they usually contain higher levels of mineral salts.
The best and cheapest option (and the most reliable in many parts of the UK and North America) is rainwater. A water tank that collects from a downspout off the roof is the best method to ensure a good supply. Downspouts from tiled, glass, or polycarbonate roofs are all fine, but be aware that concrete tiles will contain lime (calcium carbonate), which will harden your water and could make it unusable. Install the largest collection tank you can; it's surprising how quickly the supply goes down in a period of hot weather. If you have a number of plants, it may be a good idea to link a few tanks together.
Any sturdy plastic water tanks are suitable, as is a simple bucket placed under a downspout if you just have a single plant. It is advisable to install some kind of filter to the downspout to prevent leaves and other debris from soiling your water supply. The cheapest and most effective filter is a double layer of women's tights, fixed securely at the end of the pipe.
If you have no access to rainwater there are alternatives—not all as inexpensive as the stuff that falls from the sky, unfortunately. Water from an air conditioning unit or dehumidifier is produced by simply condensing moisture from the air—fine for use. Distilled and deionized water are also suitable, such as the kind available for use in car batteries and irons. Deionized water has been through a chemical purification process to remove the mineral ions. Distilling water is a much more energy-intensive process; water is boiled and the resultant steam is condensed. Both processes result in water of a similar purity. Reverse osmosis (RO) water is also fine, but requires the purchase of an expensive RO unit for home production. This device filters out impurities by forcing the water through a semipermeable membrane, producing water with an impurity level as low as 10 parts per million (ppm). However, this method will convert only 5 to 15 percent of the water that enters the system, resulting in a large quantity of wastewater unsuitable for use on any plants, due to its high level of mineral salts.
Golden rule number three: Cold dormancy
Temperate regions of the world witness distinct seasons, generally with hot summers and cold winters. Plants have to adopt a growth pattern to suit their environment. Trees are the most obvious example of this adaptation, with many species in growth during the summer months, then losing their leaves for the winter when growth ceases.
Plants from these regions, most notably sarracenias, some droseras, dionaeas, and darlingtonias, require similar treatment in cultivation if they are to survive in the long term. In autumn, they will die back and lose their leaves in readiness for winter. Sarracenias die back from the top of the pitcher gradually down to the base; the speed at which this is done varies greatly between species.
_Sarracenia oreophila_ has a shorter growing season than the other species, and can lose its leaves entirely over a period of only two to three weeks late in the summer. On the other hand, a couple of species, including _S. leucophylla_ , have a second crop of leaves which can last well into winter; these are lost before the following spring, when new leaves emerge. Do not panic when this die-back happens, as there are always green leaf bases left on the plant. The exception is _S. oreophila_ , but that plant produces non-carnivorous winter leaves.
Leaves of sarracenias dying back in the fall. Don't worry when you see this happening.
The Venus flytrap tends to lose some of its leaves, especially those held erect, and retains a small, flat, basal rosette. Occasionally a plant will lose all its leaves, yet a little excavation will reveal that the plant is not dead, but in fact alive and well.
The sundew species found in temperate regions employ a slightly different method. These are much smaller, more slender plants, which lose their leaves and produce a tight winter resting bud to protect themselves from low temperatures. These buds are composed of the much-reduced last leaves of the season and are held low to the ground, covering the growth point within.
All these plants require a similar cold rest period in cultivation. I often hear that people have lost their plants, and if it's not due to watering with tap water, it is invariably because the dormant period hasn't been respected. Creating this state isn't simply a case of leaving the plant in a cool room. When one considers the temperature ranges of their natural habitats, it's evident that these plants are capable of tolerating a far lower temperature than many realize. Much of Europe witnesses winters with long periods of weather below 32°F (0°C), as does the southeastern United States, where the greatest concentration of temperate carnivorous plants exists.
For the rest of the world, it's sometimes hard to imagine cold weather in Florida, especially as it's marketed to potential visitors as the Sunshine State, but of course the farther north you go in the state, the colder it becomes. Tallahassee has experienced a winter low of -2°F (-19°C).
WINTER REQUIREMENTS AND IDEAL POSITIONS FOR PLANTS
In the UK, temperatures of below 14°F (-10°C) are fairly unusual, certainly in the southern half of the country. I've witnessed temperatures this low on two or three occasions, one such time a number of years ago while camping in January! It follows that the farther north one travels, the colder the temperature becomes, and with the correct climatic conditions there can be a disparity of up to twenty-seven degrees Fahrenheit (fifteen degrees Celsius) between the Scottish Isles and London.
European growers aren't as fortunate, and see much colder winters, though they can still succeed with their plants in such hostile climates—proving just how cold-tolerant the temperate species really are. The level of cold dormancy these plants endure, and indeed require, is greatly understated, so do not bring temperate plants into the house for the winter just because the weather outside is cold. It will be far too warm inside.
Rather than placing a sarracenia on a cool windowsill in the house for the winter, put it in either an unheated greenhouse, or outside. There are a couple of species which seem to benefit from a little protection from the elements: _Sarracenia psittacina_ , and in some areas, _S. leucophylla_.
From my nursery work, taking these plants through winters over the last thirty-five years, I've found that the harder the winter they endure, the better they seem to grow the following season. Of course, there are other factors at play which could have a bearing, such as sunlight hours, but it is a pattern that seems to repeat itself.
_Sarracenia_ is able to withstand a winter minimum of 5 to 14°F (-15 to -10°C). Our native species likewise need to be left outside, or at least in a cold, unheated greenhouse, as a warm winter period will soon kill them.
This low temperature requirement is not essential for those other species so easily grown as windowsill and greenhouse plants, which hail from otherwise milder climates. For example, the many and varied sundews from South Africa, a good number of which are easy to grow in the home, can generally be kept on a sunny windowsill year-round if the temperature remains above freezing. The same applies to Mexican butterworts ( _Pinguicula_ ). Although some can tolerate a brief freeze, I would advise against taking the risk.
We should also consider what extremes can be tolerated at the other end of the spectrum. In the confines of a greenhouse or conservatory, temperatures can soar on a hot summer day; a little ventilation is required on these occasions. My nursery greenhouse, at 30 by 100 ft. (10 by 30 m), is a little larger than the average garden structure. But even with the many roof vents open, it can reach 120°F (49°C). That's too hot for me to work in, but the plants, as long as they are kept wet in 2 to 3 in. (5 to 7 cm) of rainwater, are fine. At these temperatures, young plants may struggle (especially some species of sundew), and _Sarracenia oreophila_ will be pushed into an early dormancy, but there should be no long-term detriment.
POTTING MIX
To grow carnivorous plants successfully, a small range of materials will be required for potting mix (also referred to as compost in the UK). This may seem daunting, but don't worry as it's not the fabled rocket science you would be led to believe. I'll cover each ingredient individually, and we'll tie them all together with the different plants later.
PEAT MOSS
As we've seen, many of these plants are inhabitants of peat bogs, and so peat moss (called moss peat in the UK) is the base for most of the mixes we'll be creating. Peat is a divisive issue in the UK and there are current government drives to reduce its use.
There are two types of peat available: peat moss, which is what we require, and sedge peat, which is unsuitable as it can be slightly alkaline. Sedge peat is also sold as "Rich Dark Peat," so beware. Peat should be pure, with no additives, fertilizer, or wetting agents.
Peat moss is a dark brown colour and has a rich, slightly earthy aroma. It often contains larger lumps or fibres, all of which can be used. It is an acidic material with a pH of between 3 and 5. Peat moss maintains its structure well, which is one reason it was marketed as a soil improver for many years, a practice which I can't endorse. It is a finite resource, and one that shouldn't be squandered. However, peat-fired power stations can be found around the world; each burns more than a million tons of the material every year. I'll leave you to argue the ethics of that one.
Peat moss, not to be confused with sedge peat.
It is imperative that any sand you use be lime-free—don't necessarily trust what's written on the packaging.
SAND
The sand we use must, above all, be lime-free. It can be (preferably) silver sand or horticultural sand, but an important word of caution: even if the package states "lime-free" or "washed," it may be anything but. A number of years ago, I was experimenting with potting mixes for _Sarracenia_ and lost over 500 plants because I trusted what was printed on a bag.
Avoid children's play sand and building sand, as both are unsuitable, and building sand is also unwashed. Generally, good-quality, clean-looking silver sand with a pale yellow colour is fine, but a quick check with a soil pH testing kit will take only five minutes—and could potentially save you a lot of extra work repotting later.
CORNISH GRIT
Cornish grit is crushed granite that is available in different grades, coarse and fine. Fine is a good alternative to silver sand. Coarse is useful for some potting mixes which need to be more open in nature. It is a silver-grey colour which makes an ideal contrast to plants such as Mexican butterworts.
PERLITE
Perlite is a natural material derived from obsidian, a volcanic glass. When tiny fragments are heated they expand greatly, turning white in the process. Perlite is used in hydroponics and as a medium in which cuttings can be rooted. It is also used in potting mix, where its high permeability and low water retention make it an ideal addition to prevent soil compaction. It is completely inert, adding only structure and no nutritional benefit, and is a good alternative to sand in a number of cases.
Cornish grit. Note how coarse the particles are.
Perlite.
VERMICULITE
Only a handful of the Mexican butterworts ( _Pinguicula_ ) will utilize vermiculite, but it's important to mention, as it's often confused with perlite. It is a silicate material which, like perlite, expands when heated to form small fragments of layered, slightly gold-coloured pieces. Again, it contains nothing in the way of nutritional value and is used to create structure. It has various applications as a heat-proof material—I was more than a little surprised to see my new oven at home filled with it prior to installation.
Vermiculite. Note how it differs from perlite, though the two are often confused.
Orchid bark. Choose a coarse grade of orchid bark for nepenthes plants. It will be used in conjunction with sphagnum moss.
ORCHID BARK
This will be used with only one genus, _Nepenthes_. As the name suggests, orchid bark is primarily produced for the orchid fanciers' market, but it is a valuable material for us as well. It is usually pine in origin, and is available in medium or coarse grades. I prefer the coarse.
SPHAGNUM MOSS
_Sphagnum_ is the genus of moss that degrades over time to produce the bulk of the peat in wetlands. In its live or dried form, it can be used as a planting medium, especially for nepenthes plants, but also as a topping for potting mixes for other species. It is also used outside in bog gardens.
_Sphagnum_ is a beautiful moss; in its live state it displays a wonderful array of green and red. It can also be purchased in a dried and compressed state, which when rehydrated will soon colour up, producing the most wonderful green carpet. Unfortunately, it is also highly prized by birds as ideal nesting material, and an errant chicken can wreak havoc with your deep-pile emerald rug.
In its dry state, sphagnum moss is light brown in colour.
Hydrated sphagnum returns to a luscious green.
Tufa, ground and as larger pieces.
TUFA
Tufa is a type of limestone, which at first glance is an odd ingredient to use with carnivorous plants as it is alkaline in nature, but as with most things there are exceptions. It is a soft, crumbly rock and can be used with certain species of _Pinguicula_ (especially the Mexican butterworts, which are found in alkaline conditions), and some rarer European species.
COIR AND PEAT ALTERNATIVES
With the efforts to reduce the use of peat in horticulture, alternatives are emerging. Much of the experimentation is in its infancy, and I have seen various products suggested: pine or larch needles, pure sands, sphagnum moss, and most commonly coir, which is derived from coconut husks. While coir is an alternative, it's important to note that the coconut industry is not without controversy either; it has been linked to the destruction of virgin forests.
Coir is available from specialist suppliers in various guises, most conveniently as compressed blocks which require soaking prior to use. A search online will turn up a supplier in your country. Ensure that any coir you purchase is washed, as it can be high in salt, which will be poisonous to your plants. It can be used in various mixes where peat is used, but I would suggest experimenting with a couple of surplus plants initially.
**Potting Mixes for Carnivorous Plants**
**PLANT** | **POTTING MIX**
---|---
_Cephalotus_ | Peat and sand 1:1
_Darlingtonia_ | Peat and perlite 1:1
_Dionaea_ | Peat
_Drosera_ (pygmy and tuberous) | Peat and sand 1:3
_Drosera_ (other species) | Peat and sand 1:1
_Heliamphora_ | Peat and perlite 1:1
_Nepenthes_ | Sphagnum moss and orchid bark 1:1
_Pinguicula_ (temperate European acidic) | Peat and sand 1:1
_Pinguicula_ (temperate European calcareous) | Peat, sand, and tufa 1:1:1
_Pinguicula_ (Mexican) | Peat, perlite, sand, and vermiculite 1:2:2:2
_Sarracenia_ | Peat and perlite 1:1
_Utricularia_ (terrestrial) | Peat and sand 1:1
_Utricularia_ (epiphytic) | Lower half of pot peat; upper half sphagnum moss and orchid bark 1:1
CONTAINERS
What are the best containers in which to grow one's plants? Let's begin by looking at what is not suitable. Clay pots, although perhaps more attractive, are unsuitable for all but one or two carnivorous plants. As the plants generally need to stand in water, clay pots rapidly become covered in unsightly moss and slimy algae, and their porous nature means they lose precious rainwater through evaporation from the sides—far more than plastic pots do. Containers made of concrete are also unsuitable because they contain lime. However, pots made from either of these materials can be fine if they are lined and waterproof.
A tray of sarracenias on a sunny windowsill. Cheers!
My container of choice is plastic. The standard-depth terracotta-coloured pots are perfect, but I prefer to use black, as the colour complements the occupants, something I find with all plants. I also prefer square pots; they provide a greater upper surface in which plants can wander around, and make better use of space on a greenhouse bench.
Pot size is important, and it can be surprising just how large plants can become, especially sarracenias. As a general rule, carnivorous plants harbor no aversion to being a little pot bound, but when they have filled their containers and begin to distort the pots, they are in definite need of moving up a pot size or two. Larger plants will require larger accommodation, and I have display plants in 5-gallon (20-litre) pots with a diameter of 13 in. (32 cm) and larger.
Various troughs can be utilized to produce small displays of maybe four or five plants for a windowsill. You can stand a number of pots in larger, deep plastic trays—ideal in a sunny conservatory.
Troughs and other decorative containers can be planted for a more professional finish.
Metal troughs are great for contemporary settings. The powder-coated variants can be coloured, which lifts the colour of the plants within.
WHERE TO GROW PLANTS OF PREY
Despite their exotic looks, most carnivorous plants can find happy homes in temperate climates around the world, either inside or as an ornamental addition to the garden.
Venus flytraps on a bright windowsill.
ON THE WINDOWSILL
There are a number of species which are ideal for windowsill cultivation, especially if the window receives direct sun. Some species will survive on a window that gets afternoon and evening sun, as it is stronger than morning light. Remember, the more direct sun a plant receives, the better it will grow and colour. Ideal candidates for the windowsill are the Venus flytrap, all the pitcher plants, _Sarracenia_ (except the very tallest plants unless you have large windows), and all the commonly grown South African sundews ( _Drosera_ ).
If you have only low-light windows, your choice is somewhat limited, but there are still plants that can thrive in such locations. Mexican butterworts ( _Pinguicula_ ) and the fascinating terrestrial bladderworts ( _Utricularia_ ) are ideal for these situations, as they don't require full sun. Both of these can produce the most wonderful flowers, and can be in bloom for months at a time. Although their small stature means they're not exactly going to hoover up annoying houseflies and wasps, their captivating beauty helps us forgive their small appetites.
Three striking carnivorous plants on a sunny windowsill: a sundew, a pitcher plant, and a Venus flytrap.
Another candidate for the windowsill is a peculiar relative of _Sarracenia_ : the cobra lily ( _Darlingtonia californica_ ). This unique pitcher plant, complete with twisting forked tongue, can tolerate slightly lower light levels than its cousins because of the way in which the pitcher is hollow along its length to the base, thus giving it a stronger structure. That's not to say it doesn't like the sun; it can colour as spectacularly as sarracenias, but it does have an aversion to overheating, which can usually be avoided with windowsill placement.
It probably goes without saying that as windowsill residents, these plants prefer to be next to the glass and not behind shading such as curtains—although curtains can be helpful in allowing those shade lovers to be grown in a sunny spot. One thing to bear in mind is that some plants require a cold rest period; refer to the winter requirements table in the previous chapter for specifics.
IN THE GREENHOUSE AND CONSERVATORY
I'm including greenhouse and conservatory together, because they are essentially the same, the latter perhaps a little more conducive to relaxing on a lazy afternoon.
In this environment, the number of plants you can grow and the perfection which you can achieve is greater than elsewhere. Again, position is everything and a sunny conservatory is best. I've seen some exceptionally well-grown plants in conservatories with orientations other than south, as long as the sunniest spot within has been chosen. Indeed, these are ideal plants to put in that hot spot where little else grows and the thought of a cactus is uninspiring.
Here you can create a display of plants where you can appreciate their beauty and they can earn their keep, diverting wasps from your afternoon snacks. With a deep windowsill, a large plastic tray can be positioned next to the glass, or a small table can accommodate a more substantial arrangement.
Simply stand the plants in the tray and fill it with rainwater. Then make sure it never dries out in the growing season, keeping plants just damp in the winter. Remember, these are predominantly bog plants and you can't overwater them. I'm often asked what to do in the summer during vacations and holidays. Just stand the taller plants in buckets and fill them up and above the potting mix surface, even as far as halfway up the leaves. Smaller plants can cope with similar conditions; again, ensure half of each leaf is above the water surface.
Why do I say to just keep them damp over winter, since bogs are generally even wetter places at this time of year? Consider where you're growing. In the greenhouse or conservatory, we usually keep the doors and windows closed in the winter, and the air within is still. Fungal spores which are naturally present in the air will settle rather than be carried around on air currents, landing on plants and attacking any dead leaves, especially if the plants are too wet. By reducing the moisture, you reduce the risk of plants developing grey mould ( _Botrytis cinerea_ ), a disease which can kill plants but is easily preventable.
_Drosera binata_ makes an ideal subject for a hanging basket.
A glazed ceramic pot holds a _Sarracenia_ hybrid.
In a greenhouse situation, it is usually easy enough to leave doors open a few inches, or to install an electric fan for airflow. There are a number of fans available specifically for this purpose. If your conservatory is unheated and freezes, your plants can be left in situ, though remember to remove those that require a little heat.
The range and number of plants you can grow in a greenhouse increases dramatically, and it is in this environment that you can find yourself on a slippery slope. Remember, like many others, I began with a single plant—and have spent the last thirty-five years struggling for available space to house my ever-increasing collection. It's imperative, therefore, if you are going to purchase a greenhouse, that you buy the largest your budget and space can accommodate, as it is surprising how quickly you will fill it.
I prefer aluminum-framed greenhouses, as they are stronger, longer lasting, and more affordable. Avoid the flimsy, cheap things you see advertised; you're never going to drink champagne on beer money. Go instead with a reliable and trusted manufacturer.
Larger sarracenias can be placed in a deep plastic tray. The addition of a few pieces of cork bark adds interest and helps hide the pots.
Next you will need benching, usually a width of 2 to 3 ft. (60 to 90 cm) along each side. With a greenhouse 12 ft. (3.5 m) wide or more, you can comfortably accommodate a middle bench, a handy addition for the tallest of the sarracenias. Be sure to purchase strong, all-aluminum benching. Remember, the extra water these plants need makes them very heavy, and the last thing you want is a collapsed mess. Having a bench that is entirely aluminum also eliminates the risk of it rotting, saving you money in the long run. I prefer slatted benching as it is stronger than the flat sheet aluminum variety.
The position of the greenhouse is important, and you will usually have more choice in this aspect than you will have with a conservatory. A sunny orientation is best if you have it, where it will receive direct sun for the longest period. Remember, if you are growing species which require a little shade, that is easy to add, whereas you cannot add extra sun.
If the greenhouse is in a prominent position in the garden, and certainly if you have children, safety glass is an essential (albeit more expensive) necessity. Even if safely fenced away, the greenhouse is still somehow a magnet for errant footballs.
Now you have a decision to make. Do you require heating for the greenhouse, or are you going to concentrate on the cold-growing species? Let's assume some heat in the winter will be required, in which case there are a few options.
Gas heaters are fairly cheap to purchase, and can run from a propane cylinder. They are clean, but do produce moisture as a result of the combustion. Avoid anything that burns butane—it is not as effective in cold temperatures, just when you need it most. The cylinders will need regular changing, and it is a good idea to have an automatic changeover valve which connects two cylinders, swapping them over when one empties in readiness for you to replace it. Although heavy, large propane cylinders are best in terms of convenience and value, they will require a suitable trolley to move them around if you wish to avoid injury. Gas heaters are also fitted with a thermostat, giving you control over the temperature.

Kerosene (paraffin in the UK) heaters are still available, and this was the method I used to heat my first greenhouse. They are fairly inexpensive to buy, but increasingly expensive to run, as you rarely find hardware shops with kerosene pumps in store now, and instead have to rely on ready-bottled fuel. They burn with a hot blue-orange flame. As well as producing moisture, kerosene heaters also coat everything with a sticky residue, and are somewhat dirty. There is no control in the form of a thermostat, meaning they will need regular relighting and extinguishing, and the wick will need to be trimmed to ensure that the flame burns cleanly and doesn't produce smoke. I would avoid this option if given the choice.
Sarracenias thrive in a greenhouse environment.
Remember, anything that heats with a burning flame will require a small amount of ventilation to ensure the flame isn't extinguished, though this is unlikely in a greenhouse environment. Electricity is increasingly expensive to buy, but is more controllable, convenient, and reliable—and, if you hadn't guessed, my preference. Electric heaters specifically designed for and safer to use in a greenhouse will cost more than basic models, but are worth the investment. Heaters with fans can be set to have the fan run continuously, and hence maintain a degree of air movement at all times. Most have fitted thermostats, which are generally quite accurate—to within a degree or two—making them very efficient.
The most expensive aspect of this option is the electricity supply to the greenhouse. If you have a powered garage nearby, it is usually more convenient to spur off of that supply, but please get a professional to do the work. Remember, you are connecting a live electrical supply to a glass and metal box with metal benching, with the addition of copious amounts of water. These components could be a recipe for tragedy, so use external fittings and covered sockets which are waterproof. There are some things I refuse to mess around with, and electricity is one of them.
Another consideration for the winter is insulation. Heating a greenhouse can be costly, and with no insulation, somewhere around 50 percent of the heat will be lost through the glass. A single layer of bubble insulation will reduce your heating bill by a similar amount. This insulation is different from the packing bubble material, as it is double skinned and UV stabilized to ensure a longer life. It is available in two grades, one with small bubbles ⅜ in. (1 cm) in diameter, and the other ¾ in. (2 cm) in diameter. I prefer the larger; it has greater insulation properties and better light transmission, and is more hard wearing, ensuring I can reuse it for three or four years. It is more cost effective to purchase a roll rather than short sections if you require a good amount, and a horticultural wholesaler will be able to supply it.
Alliclips, used for securing bubble insulation to the channel in aluminum greenhouse bars.
There are a couple of widths of bubble insulation available, the best being 5 ft. (1½ m). Fitting it is an easy enough process; I simply run a length of it along the centre of the roof, and one along each side, with a slight overlap. If you have a wooden greenhouse, you can purchase heavy duty pins to secure the insulation to the frame. Aluminum-framed greenhouses usually have channels which are designed to take small plastic clips known as "alliclips." When simply pushed in and turned, these secure the insulation.
Try to cover the entire structure in this way. To grow the widest range of plants in a single greenhouse you will need to maintain a minimum winter temperature of 45°F (7°C). Your heating bill could be further reduced by the addition of a partition, so only half of the greenhouse is heated—preferably the rear section, which is less vulnerable to draughts from the doors.
These amendments will enable you to grow species which do not require any heat alongside those that do. For many years the bulk of my collection was in a single 8 by 12 ft. (2.5 by 4 m) greenhouse, in which I installed a bubble insulation partition midautumn. Sarracenias, darlingtonias, dionaeas, and cold-loving droseras and pinguiculas lived in the front section, while the rear half housed everything else except nepenthes plants, which require more specialized care, as we shall discuss later. If your greenhouse is large enough, you may wish to consider a glass partition, which you can install during assembly. With the arrival of warmer, sunnier days, remove the insulation and store it until the following winter.
Your plants will need to be standing in water for most of the year, and the easiest way to achieve this is with large trays. Unfortunately, those available for such purposes are generally too shallow or too small for greenhouse use, and you would be refilling them every day. I found making my own trays to be the best option.
Trays of the size made in the accompanying do-it-yourself project (opposite) are much easier to keep full of water during the summer. If you are fortunate enough to have a power supply and a water butt connected to the guttering on the greenhouse (or preferably, several connected together), you can suspend a submersible pump in one and pump the water via a hose directly into the tray, saving yourself considerable effort. If you don't have a power supply, the exercise will do you good.
Hygiene is an important consideration in a greenhouse environment, to reduce the risk of pests and diseases. Any dead material should be routinely removed from the plants and disposed of, and the floor swept clear of debris after each purge. Keep the glass clean—especially in areas of high rainfall, which encourages the growth of algae. A wipe with a window-cleaning rubber squeegee followed by a rinse with clean water will eliminate any build-up and subsequent loss of light transmission. This is best performed early on a sunny spring morning, after the doors have been closed overnight to ensure there is condensation on the inside of the glass. Any algal build-up will be soft and easily removed. Once every year should be more than sufficient. As with the conservatory, the water should be reduced over the winter months, to minimize the risk of grey mould.
Finally, a word on safety. As a father, I soon learned to recognize areas of hazard. I've already mentioned safety glass, which to reiterate is essential if children will be around. But also consider danger zones in the rest of the greenhouse. Sharp corners on aluminum benches seem to be just the height of a two-year-old's forehead. Cover such corners using pipe insulation and cable ties. Store chemicals out of easy reach. Keep water containers covered and secure lids if possible. A quick look-around can save mishaps, tears, or worse.
**Create your own plant tray**
**1.** Line the benching with sheets of ¹⁄₆ in. (4 mm) plywood.
**2.** Make a frame using 1 by 4 in. (25 by 100 mm) sawn timber, held together on the external corners with 90-degree metal brackets, ensuring the screws do not protrude through to the inside of the frame. The length you decide upon is entirely dependent on the length of your benching, and also on the necessity for the bench to be level.
**3.** Line the frame with pond liner and nail the outside overlap with ½-in. (13-mm) galvanized nails, ensuring there is sufficient liner tucked into the inside of the frame so it is not stretched when full of water. The liner can be made of PVC or butyl (though PVC is considerably less expensive), and either can be purchased from an aquatic supplier. A thickness of ½ mm is perfect. Make sure it is UV stabilized. Avoid any other plastic sheeting, as the sun and heat in the greenhouse will degrade it in a surprisingly brief time, resulting in it splitting and draining—invariably on the hottest day of the year, while you are not around. Although pond liner is more expensive, it will last years, even in this environment. Don't be tempted to use woven liner as this has very little strength, something I learned the hard way.
**4.** Trim the excess liner from the outside.
IN THE GARDEN
Many gardeners struggle with whether or not their outdoor environment is viable for growing carnivorous plants. However, with the application of a little thought and, sometimes, ingenuity, locations that may not seem hospitable can produce surprisingly successful results.
A warning, though: every condition is different. My garden, for example, is slightly more protected than my neighbours' because of how I've planted it. It has a marginally different microclimate, which enables me to grow plants that they cannot, and vice versa. What works for one may not for others, but as you will see, I have collected successes from growers in various countries.
The bog garden
A bog garden is the stereotypical method of outside cultivation, and in its purest form, amounts to a hole dug in the ground and lined. The size of this garden will depend on your objective and the space available, but the primary consideration should be position. Choose a site that receives at least five hours of direct sun in the summer. If you hope to grow the taller species of _Sarracenia_ , select a location with protection from strong winds. If your site is open, just choose shorter-growing plants.
Be aware that in a hot summer, you will still need to water your bog garden in all likelihood, so don't position it too far from a water supply. By ensuring a good depth, you will reduce the frequency of watering.
Dig it as you would a pond. Either a liner or a pre-formed plastic or fibreglass moulding can be used, but if you are using a liner, choose a high-quality one. A fleece underlay is a good idea to prevent sharp stones from piercing the liner—worth the small extra cost.
Once lined, or positioned in the case of a moulded pond, fill to within a couple of inches of the top with peat moss, firm down, and level as best you can. To reduce the amount of peat moss you use, add 50 percent perlite or washed gravel if you prefer. Now slowly add water (a hose from a rainwater barrel is ideal), until the peat is saturated. Leave to settle overnight and trim the excess liner, or obscure it under stones if appropriate the following day.
Position your selected plants on the surface, moving them around to find the most pleasing arrangement before planting them in the peat moss. Add a good quantity of sphagnum moss to the surface, which will grow across the bog and complement your plants.
Finally, adding some props, in the form of cork bark sections, salt-free driftwood, or the like, will further enhance the finish.
Fill the pond with peat moss, add water, and allow to settle.
Position your selections before planting.
_Sarracenia flava_ var. _ornata, S. purpurea_ subsp. _purpurea, S. oreophila, S. ×catesbaei_ , and _Drosera filiformis_ var. _filiformis_ thrive in this bog garden.
Suggested plants include all _Sarracenia_ (except _S. psittacina_ , which tends to get rather lost among the sphagnum). All hardy sundew species ( _Drosera_ ) are suitable; I've often used the North American _D. filiformis_ var. _filiformis_. Finally, any of the hardy butterwort species ( _Pinguicula_ ) can also be added, along with the cobra lily, _Darlingtonia californica_.
The best time to create this type of bog garden is in the spring, when the plants have a complete season in which to establish themselves. Next, let's have a look at a few bog garden features created by people in different countries and settings.
A general view of Ian Salter's bog garden beside his greenhouse.
Two subspecies of _Sarracenia purpurea_ growing together.
WALES
Ian Salter has created his bog garden in an existing graveled area, next to one of his greenhouses, which adds an extra level of interest and also serves to draw the eye and greenhouse visitors outside. It was dug to a depth of 2 ft. (60 cm) and all sharp stones were removed. A layer of sand was added to protect the pond liner, which was then used to contain the bog. The hole was filled with peat moss and allowed to settle as described previously, before drawing the gravel back across to obscure the edges. The elongate, triangular shape gives the bed an interesting flow.
Once settled, he planted it with _Pinguicula grandiflora_ and a number of _Sarracenia_ species ( _S. flava_ var. _flava, S. purpurea_ subsp. _purpurea, S. purpurea_ subsp. _venosa_ var. _burkii_ , _S. minor_ var. _okefenokeensis_ , and _S. rubra_ subsp. _rubra_ ), and added a few small reeds as complement plants.
Ian lives in the Neath Valley in South Wales, an area known for its high rainfall. This climate, along with the depth of the bog, means he doesn't have to water too often. The area averages a winter low of around 23°F (-5°C), and occasional highs in the summer of 86°F (30°C), although this is a rare occurrence.
Mark Griffin's bog garden becomes another border within the garden.
ENGLAND
Mark Griffin has created his bog in a fashion similar to Ian's, lining it with pond liner and filling with peat moss, sand, and gravel. This garden is incorporated as more of a standard flower bed, protruding onto the grass and edged with a few rocks to break the rigidity of the liner. He's planted it with a variety of _Sarracenia_ species and hybrids. Mark lives in the southern county of Wiltshire, where summers are warm—generally around 68°F (20°C), but occasionally up to 86°F (30°C)—and average winter low temperatures are around 36°F (2°C), occasionally dropping to as low as 14°F (-10°C). The county sees good rainfall all times of year, with occasional snow during the winter.
Matt DeRhodes's raised bog makes a fantastic feature on the lawn.
In winter, Matt covers the garden to protect plants from the worst of the snow. This will also encourage early growth in the spring.
THE UNITED STATES
Based in Cincinnati, Ohio, Matt DeRhodes has created a raised bed for his plants. This is a more formal feature and sits comfortably on the lawn, where it becomes a prominent element of the garden. Again, he dug down to around 6 in. (15 cm) deep, and lined the bog with pond liner. The three layers of blocks add extra depth, with the liner tucked under the top row. Extra height above ground level also makes close observation easier. This can be adjusted to your own climate; add layers to make the bog higher in warmer climates, and reduce the number in colder areas.
The space created was then filled with peat moss, perlite, and pine needles, and planted with various _Sarracenia_ species and hybrids, as well as Venus flytrap ( _Dionaea muscipula_ ).
The climate sees a wide temperature span, with an average winter minimum of 22°F (-5°C), sometimes dropping to -10°F (-23°C), and warm summer days averaging 86°F (30°C), occasionally reaching 98°F (37°C).
Matt chooses to cover the bog in the winter months to protect it from the effects of desiccating winds, using a cloth cover held over a frame. This would also serve to encourage early growth in the spring, and can be removed once the danger of severe frost has passed.
SERBIA
Andrej Jarkov lives in Belgrade and manages to grow his plants outside, despite the harsh conditions. The average temperatures there represent something of an extreme, with the summer average 88°F (31°C), the winter average a cold 9°F (-13°C), and spikes well above and below these averages. There is rainfall year-round, peaking in May and June.
Andrej grows his plants in mini container bogs, which are half-buried to protect the roots of the plants from the extremes. These are filled with a mix of peat moss and silver sand, and topped with sphagnum moss. In this setup he plants various _Sarracenia_ species and hybrids, and also a few sedges ( _Carex_ ), as well as some small, noninvasive grasses which complement the carnivorous plants. Despite the harsh conditions, he has had _Sarracenia leucophylla_ produce a leaf to 30 in. (75 cm) in height!
Andrej Jarkov partially buries his containers for added protection in the harsh winter.
Even in this extreme environment the plants are happy.
Pond marginals
Carnivorous plants aren't usually thought of as candidates for the margins of ponds, but over the past ten years I've sold many plants—almost exclusively sarracenias—for use in these areas and can report some remarkably positive results.
When one considers that they like their feet in water in a sunny position, they seem ideal for the conditions around many ponds, and certainly make elegant and unusual additions. It is in these environments that I have seen some interesting uses of the plants, none more so than Tom Hoblyn's 2009 Chelsea Flower Show garden, which was inspired by carnivorous plants and their vulnerability in their natural habitat. It demonstrated perfectly how the plants can be used in both natural and contemporary settings, and it was also notable for his use of purely natural materials.
I have several baskets of sarracenias in my ponds, and they have remained in the water for the past seven or so years, through several very hard winters, with no ill effects. It is simply a matter of planting them in standard pond baskets. The best type are the square ones; they are more stable once in the water, especially for taller plants, which are more likely to sway in the wind.
First I line the basket. I prefer using a piece of ground cover membrane—the sort used to suppress weed growth, because it's UV-stabilized and will not rot in the water—which I puncture a few times with a knife to allow a good ingress of water. The membrane will prevent any small particles of peat moss from escaping into the water. Next, select your plants and consider positioning. If your pond is sheltered, you will be better able to grow the taller species of _Sarracenia_. If it's exposed, choose the lower-growing plants.
Tom Hoblyn's 2009 Chelsea Flower Show garden featured blue _Iris_ and _Sarracenia flava_ specimens.
For tall plants, use a deep basket and place a few large pebbles in the bottom for stability. You can pot several plants together to make a small display of contrasting heights and colours. Use a mix of peat moss and perlite in equal parts, though this doesn't have to be exact.
Finally, a layer of washed gravel on the surface will prevent the potting mix from blowing across the water and the perlite from floating up to the surface. Now soak well using a watering can rose, and gently lower the basket into the pond, making sure the plant bases are just above the water surface. The position will need to be along the marginal shelf, or on blocks to raise the height (a small upturned bucket drilled with holes will suffice for this). Inevitable fluctuations in water level are fine.
A large specimen of _Sarracenia purpurea_ subsp. _purpurea_ , an ideal species for windier situations.
Mixed planted baskets add unusual colour and form, and work equally well in traditional and contemporary situations.
The best types of ponds are those that aren't overstocked with fish, as they will add too many nutrients to the water. The large koi which are so popular (and large enough to feed a family of six) are somewhat destructive in that they tend to knock baskets over.
Avoid using water that has been treated to remove algae. The best ponds are those of a more natural type—left to its own devices, a pond should clear naturally a year or so after initial filling, rendering methods and potions for clearing the water superfluous.
Plants can be left in the water year-round, and yes, that also means they will freeze in the winter. Being marginals, they are in the shallows, which means that they are in the layer that freezes. They will be encased in ice, and this is fine; there is no need for concern.
The same principle works in a more modern setup; the taller-growing species have a structural elegance that enhances contemporary designs; various suitable containers are available for this purpose.
Another option is floating containers, which carry the benefit of not having to be placed in the margin or on a plinth. These can be planted in the same fashion as a basket, but without the membrane liner, then simply lowered into the water. The surrounding polystyrene floats around the plants, ensuring they remain upright and don't sink. These are ideal for small- to medium-height plants; slightly taller plants can be used if the pond is in a sheltered location.
Floating baskets are ideal where a marginal ledge is absent.
Outdoor containers
Floating containers are great for ponds; in fact, there are many places where these plants can be grown in containers. The selection of ready-to-use pots is wide and varied, and there are also options that require only a bit of adaptation to be suitable as vessels.
Belfast or farmhouse sinks—deep, white, glazed ceramic kitchen sinks—make perfect containers. Since they are watertight (when plugged) and glazed, there is nothing to react with the potting mix and damage the plants. They are also fitted with a handy overflow regulator, which maintains the water level. They make ideal mini bog gardens and can be planted with any of the cold hardy species.
Bear in mind the positioning before you start. Work in the same way you would with a buried bog, filling the sink with peat moss, or peat moss and perlite, before firming down and filling the basin with rainwater until the potting mix is saturated. Plant as discussed previously.
If your planting is raised above ground level, you can appreciate your plants more easily. It's also a great way to get young children interested, a topic we'll cover later. Of course, it doesn't have to remain above ground, and by burying it you can successfully "grow" carnivorous plants in the ground (as long as they don't dry out).
Let your imagination go. Don't think that you have to own a heavy Belfast sink to create your own mini bog—any container will suffice. It doesn't even have to be particularly watertight, because you can easily line any suitably sized vessel with pond liner and prepare as directed. Remember what I said about concrete containers, however—these will definitely require lining, as will anything metal, due to the acidity of the potting mix and the reaction it can cause.
Oak containers can be used as planters (such as here), or as mini ponds.
A simple wooden decorative container fashioned to resemble a half barrel is an ideal container for a sunny patio or deck. It will look great all summer when planted with sarracenias, especially if a few colourful hybrids are included that have a longer growing season; their colour will intensify as the autumn progresses.
Plastic containers are often a good choice. They are durable, shatterproof when the kids are kicking footballs around the garden, and long lasting. They can also resemble stone containers, are easy to move if necessary, and have the added advantage that they do not need to be lined.
Plastic planters are versatile and won't crack in the winter as some ceramics will.
Fibreglass mini ponds are perfect for smaller gardens, and add appeal to patios and decking.
Metal containers have a different look and feel altogether, conveying a clean, modern impression. These are available in various sizes and can be galvanized or powder-coated so they won't rust. If metal pots are sealed, the occupants can be planted directly inside. Alternatively, metal containers can hold other pots, allowing plants to be easily swapped out. Try to avoid the cheap, flimsy versions, as they lack the strength of the high-quality types, especially if you envisage moving the pots at any time—they may be very heavy. These are suitable for both inside and outside use.
Containers do not have to be large in terms of size, or require a lot of effort. Small tabletop displays have the advantage of being somewhat more movable than large troughs, and are great conversation pieces. These can be as small as a ceramic bowl for a single pot, and can be used to adorn patios and tables both outside and in.
A variant on container growing is the use of a mini pond, which can be positioned anywhere suitable, especially where some extra interest is called for. These mini ponds can be planted entirely with carnivorous plants, or with a combination of other aquatics.
Again, a variety of vessels can be utilized for such a project. Oak half barrels offer a rustic feel and work well in a sunny spot in a more traditional garden. Modern containers such as fibreglass ponds fit with contemporary environments, and are designed for just this purpose.
Both types are perfect for mini ponds. Position the container in its final location—someplace sunny where the plants will receive the light they require—then simply fill with rainwater and plant. In addition to sarracenias, I like to include a dwarf water lily, and some elodea is useful for a year or so, until the water chemistry balances and the algae (which is guaranteed to develop) clears, leaving a healthy ecosystem filled with aquatic life.
At this stage you can either remove the elodea, or at least keep it in check by thinning it out. Once the water clears, the time is right to drop in another carnivore, such as the bladderwort _Utricularia vulgaris_.
IN TERRARIUMS
For many, a terrarium is an ideal place to grow carnivorous plants, especially for anyone not blessed with a greenhouse or garden. Terrariums don't suit all plants, so you will be somewhat limited in the number and range of species you can grow successfully, but this should not put you off.
The first step is to consider what terrariums actually are. Generally, they are a glass or plastic box with a removable lid, or front sliding doors. Their interiors are high-humidity environments. They can be a simple glass tank or a more elaborate piece of custom-made furniture.
The tropical conditions within limit the number of plants you can successfully grow, but do include some of the most desirable of carnivorous plants. Aside from the tropical butterworts ( _Pinguicula_ ), the many terrestrial and epiphytic species of bladderwort ( _Utricularia_ ), a couple of sundew species, and the sun pitchers ( _Heliamphora_ ), by far the largest group in number and stature are the tropical pitchers ( _Nepenthes_ ).
Don't even begin to contemplate growing any temperate species unless the terrarium receives full sun (in which case it will require ventilating) or has very strong artificial lighting, and you are able to move the occupants to cold winter quarters for their rest period. If you are restricted to growing the plants solely indoors, go for those just listed.
There are different types of terrariums available. In the traditional sense of the word, they are sealed glass cases, originally known as Wardian cases because they were developed during the Victorian era by Dr. Nathaniel Ward. Their purpose was to enable the safe transport of collected plant specimens from overseas, helping the Victorian obsession with exotic plants immeasurably, since previously the majority of collected specimens perished during the long journey back to England.
Planted by Matthew Wagstaffe, this is a perfect example of how a terrarium can become both an attractive and unusual piece of furniture.
The cases were made of steel or wood and glazed to create a sealed environment, which was self-sustaining, as evaporated water could not escape. It simply condensed on the inside walls and ran back down, hydrating the plants within.
Modern terrariums operate on a similar principle, but aren't completely sealed; we like to examine our plants more closely and access is required for maintenance. They can be as simple as an empty fish tank with a lid, or a specifically designed affair with a heated base and lights. While the latter will undoubtedly be more expensive, it will also enable the plants to grow more successfully.
The average room temperature inside a house will be sufficient, especially under lights. Added lights will also raise the daytime temperature, creating the same effect one sees in the wild, where day temperatures are higher due to the sun, with a distinct drop at nighttime—a pattern certainly seen in many tropical regions.
Because of their often elongate shape, and the limited space available, fluorescent tubes or LEDs are the best types of lighting for our purposes.
The terrarium can be planted if the occupants are to remain inside year-round, which will afford an aesthetically pleasing result. On the other hand, planting will necessitate more maintenance, in terms of controlling the rampant advance of mosses and weeds, which will thrive as readily as your plants. It will also further limit the range of plants you can grow, because different species have different soil requirements.
If you choose to plant your terrarium, here's how to begin. Start by adding and leveling 2 in. (5 cm) of clean, washed horticultural sand in the base. Next add 4 to 5 in. (10 to 12 cm) of your preferred potting mix—an equal blend of peat moss and the same sand will be ideal. Arrange your plants on the surface until you are happy with the layout and balance, then remove them from their pots and plant. Maintain a water line of at least an inch (2½ cm), which will be visible in the sand layer. The addition of props will add an extra level of interest, and sphagnum moss can be placed on the surface to grow (though this will need to be kept in check as it will outgrow some of the smaller species).
If this is to be a sub-tropical terrarium which will witness an average temperature of around 75°F (24°C), you can successfully grow _Cephalotus_ ; tropical, pygmy, and sub-tropical sundews ( _Drosera_ ); terrestrial and some epiphytic bladderworts ( _Utricularia_ ); and (with a slight adjustment to some of the soil) a couple of the sun pitchers ( _Heliamphora_ ).
Although this temperature will suit some of the higher-altitude tropical pitcher plants (such as _Nepenthes_ ), the soil will not—one of the annoying limitations we mentioned about terrariums. An alternative, and one which will undoubtedly widen your range, is to grow plants in their pots and then to simply place the pots in trays of water within the terrarium. This allows you the freedom to add and remove them as required, and means they can also be taken out for closer inspection.
The potted terrarium is more practical if you like to remove your plants for closer inspection. This example belongs to Vincent Fiechter from Geneva, Switzerland.
This method of tray watering will give you far greater control over the amount of water a plant receives, and in the case of nepenthes plants, which do not like to stand in water, it is a logical way of growing the widest range of plants possible. In this environment, you can grow all the genera listed earlier for planted terrariums, with the addition of some smaller-growing species of _Nepenthes_ and all the heliamphoras. Although you will need to take care watering, the high humidity within will ensure that the length of time between refills is much reduced.
The same method can be utilized for a heated terrarium, often referred to as a "lowland terrarium." _Nepenthes_ is loosely divided into two groups with regard to cultivation: highland and lowland. The highlanders are found in the wild at altitudes over 3280 ft. (1000 m), whereas the lowlanders are found under this threshold.
Temperature drops with increasing altitude, approximately one degree Fahrenheit (six-tenths of a degree Celsius) for each 328 ft. (100 m), and as such, the temperature in a tropical climate is considerably higher at sea level than it is on a mountain at, say, 13,123 ft. (4000 m). By creating this differentiation, we can accommodate plants from different altitudes, though they will need to be grown separately.
In a lowland terrarium, the temperature needs to be raised, with a nighttime minimum of 75°F (24°C), and a higher daytime temperature—which is easily achieved with the addition of artificial heat.
In this hot and humid environment, you will be limited further in terms of the range you can grow, but if your particular penchant is for lowland _Nepenthes_ , then this will be ideal. You can also add a few tropical bladderworts ( _Utricularia_ ) and tropical sundews ( _Drosera_ ) for extra interest.
There are a couple of methods of heating the terrarium. An aquarium heater (available from most pet shops) submersed in a bath of water will not only heat but also humidify the terrarium. Keep a close eye on the bath to ensure it never dries out. Alternatively, a flat heat mat of the type used for heating reptile and amphibian vivariums is ideal. These are relatively inexpensive and available in a variety of sizes; simply select the size that best fits under the terrarium without protruding from either end. If you are going to use one of these for the heat source, you will need to stand the terrarium on an insulated surface, such as a layer of polystyrene or foam rubber, to ensure the heat is concentrated upward, not down, to increase efficiency.
The addition of a thermostat to control the heat and a timer to control the lights will give you a fairly self-sufficient unit in terms of the day-to-day environment. Just keep an eye on water requirements, and perform occasional surgery on those plants whose intention it is to escape their confines and join you on the sofa.
Whichever type of terrarium you choose, always ensure that the plants you add are free of pests and diseases, which will spread through your other plants like wildfire. More on pests and diseases a little later.
YEAR-ROUND CARE & MAINTENANCE
A little about annual maintenance. Apart from keeping plants wet and in full sun during the summer and adhering to the three golden rules, most carnivorous plants require very little in the way of everyday care. In the autumn, deciduous species will need tidying up. Annual clean-up can seem a chore, especially if you have a large number of plants. But alas, it is unavoidable—an essential aspect of the cultivation of carnivorous plants.
Evergreen species of _Drosera_ (here, _D. slackii_ ) can, over time, develop a substantial pedestal.
Each of the two most prominent genera has its own general maintenance needs. _Sarracenia_ plants die back at differing times over the autumn and winter. Once the leaves have browned off sufficiently, the old growth must be removed—a simple process we'll look at in more detail later. Those species of _Drosera_ that die back to their roots or form winter resting buds can simply be cut back with scissors, removing the dead growth. The evergreen species, such as _Drosera capensis_ from South Africa, develop a skirt of dead leaves around the base of the plant, which helps support the upright stem and can be left intact—as can the old leaves under the rosette species. Over time, the dead leaves form a pedestal of old growth under the live leaves.
TOOLS
If you are fortunate enough to possess a substantial number of plants, it is wise to avail yourself of appropriate tools for the job. Good-quality pruning shears (secateurs) will pay dividends, and the extra investment will give you years of service around the entire garden.
Sharp shears of the type used for topiary are, for me, a real asset, saving me many hours. They self-sharpen as they're used, with the edges of the blades rubbing together, enabling you to cut through a large number of wiry sarracenia leaf bases in one cut—great for larger specimens, and for saving time.
Long tweezers can be extremely useful for removing dead leaves from sticky-leaved plants, for plucking out seedlings, and for any one of a number of tasks where large or cold, numbed fingers just won't suffice. Again, go for quality—they will be much sturdier. Mine are from a bonsai-growing nurseryman friend and have a flattened, blunt blade at the top which is ideal for making holes in the potting mix when transplanting.
A sharp pair of scissors is always helpful for more delicate pruning of fine plants, and some very good ones are available from specialist bonsai growers. All the tools you use should be kept in good condition, cleaned regularly to eliminate any risk of cross infection, and sharpened when necessary. Specific sharpening stones are available for pruning shears, and they should be used regularly to ensure you are performing clean cuts with minimal effort.
Always select high-quality tools.
A large plant such as this _Sarracenia minor_ is best tackled with topiary shears.
Cleanliness is paramount, especially as you will invariably use your tools around the garden. After each use a quick wipe with disinfectant will sterilize the blades, and a good rub with one of those baby wipes will remove any sap residue. Some advocate the use of a strong bleach solution, but this is poisonous to all green plants, so I avoid it. A squirt of oil after cleaning will prevent any rust developing on the metal surfaces, and keep the moving parts lubricated.
PESTS AND DISEASES
It can seem incongruous to learn that carnivorous plants can be prone to pest attacks when you consider their nature, but like any group of plants, there is a veritable army of insect warriors on the march. The reason the plants cannot defend themselves is that the pests usually attack developing leaves. The pests are also usually very small and elude capture.
There are differing methods of control which I'll suggest, but first a mention of spraying. This is a controversial subject, but most insecticides are fine to use on carnivorous plants, though some care or experimentation should be used with glandular-leaved plants such as the sundews and butterworts. There's a fine balance here. I find a close eye and a reactive approach work well, rather than overspraying and risking the development of a local resistance to the chemicals. This can also be avoided by changing the insecticide on a regular basis. Some offer a systemic cover, in which the active ingredients are taken into the plant's system, where they remain and offer a period of protection, usually around six weeks. I feel these claims are a little overstated, so a keen eye is the best way to prevent a pest from gaining a foothold. Ready-to-use varieties are ideal and convenient. If you are using a concentrate, mix it according to the manufacturer's instructions. The available range changes as it is discovered that chemicals which have been used widely are in fact somewhat dangerous to health or the wider environment, but here are a few currently in use. I have listed the active ingredient rather than the proprietary name, which varies from country to country.
**ACETAMIPRID** A good all-around insecticide sold by Scotts under the name Bug Clear Ultra in the UK and Ortho Bug-B-Gon: Systemic Insect Killer in the United States. Available as a concentrate and a ready-to-use spray, and I've used this on all the genera covered in this book.
**THIACLOPRID** Another good insecticide. Also available as a ready-to-use spray and concentrate, this is a systemic product, and I have used it on all the genera covered in this book.
**ORGANIC SPRAY** There are a few organic insecticide sprays available which contain fatty acids. These have their benefits, especially where pets and children may be in close proximity. The way they act ensures no risk of localized immunity to the active ingredients. Bayer manufactures a good example which seems to be safe to use on dionaeas, droseras, and sarracenias.
**BIOLOGICAL CONTROLS** Biological controls are available for many greenhouse pests, but their employment is rarely justified for a small collection unless the infestation is severe. There is also the very real risk of plants catching "good bug" predators, especially if they have a penchant for the nectar the plants produce.
**FUNGICIDES** Here's where the old adage of an ounce of prevention being worth a pound of cure certainly rings true, as there appear to be no currently available fungicides which will prevent grey mould ( _Botrytis_ ). Remember my earlier comments about good air circulation.
Pests
ANTS
Ants are rarely an issue themselves. Quite the contrary, it's quite a sight to see a long trail of them moving along the benching and up a _Sarracenia_ pitcher, where they fall one after another into its depths. However, their habit of "farming" some pests can be a nuisance. Plants grown outside are not usually bothered by their nesting habits, if the potting mix is kept wet enough.
APHIDS
There are a number of species of this all-too-familiar garden and greenhouse pest. They are typically ¹⁄₂₅ to ⅛ in. (1 to 3 mm) in length and a green or black colour, often forming a large colony which can clothe entire shoots. Their success lies in their prolific breeding habits, meaning that an infestation can take hold surprisingly quickly. They are sapsuckers, feeding through a sharp stylet which they insert into a plant's vascular system to feed, causing disfigurement to the emerging leaves—generally the first sign that they have attacked your plants. Another side effect is sooty mildew (more on this later), which develops on the sweet honeydew the aphids excrete—ants will "farm" aphids to feed off this exudate. A more sinister side effect is the transference of plant viruses while aphids are feeding, which although very unlikely with carnivorous plants, is still worth bearing in mind. A small number of aphids can simply be squashed between the fingers, but if you feel squeamish doing this, or if the infestation is heavy, you can either submerse the whole plant in a bucket of rainwater for twenty-four hours, which will drown them, or spray with an insecticide.
Deformed leaves, the sign of aphid damage.
Birds are an occasional pest, especially when they learn a free meal is to be had in the guise of insects caught within _Sarracenia_ pitchers.
Although not usually a major pest, caterpillars can cause severe damage, as here on a leaf of _Nepenthes burbidgeae_.
BIRDS
Not your usual suspects, and I can remember only a couple of years when an enterprising bird learned to rip the _Sarracenia_ pitchers open to retrieve a free meal (coincidentally, it happened while writing this book). Thankfully, the trick appeared to be forgotten the following season. They can be a problem when trapped in the greenhouse and disoriented, flying at the glass to escape and knocking plants and pots over. Some birds seem more intelligent than others. Wrens frequent the nursery on a regular basis, flying around and through, completely unfazed by my presence. Blackbirds and blue tits, however, are carefully ushered out.
If you live in a rural area as I do, be aware of pheasants. They are without doubt the stupidest of all creatures and can cause considerable damage, both to plants and bench linings, with their sharp claws. They do, however, roast well.
CATERPILLARS
Caterpillars are another of those casual pests. Our plants aren't their usual diet, but occasionally they bite the soft young growth. Random holes and orange-coloured excrement announce their handiwork, often in the autumn for some reason, when the last pitchers are produced on the sarracenias, and when the damage is rather inconsequential due to impending dieback. Again, manual removal is usually all that is required; insecticidal intervention is not normally needed.
CHILDREN
Carnivorous plants are the perfect subjects for introducing plants to children, and I cover some educational ideas in the last chapter of this book. However, in the greenhouse, especially unsupervised, they can amuse themselves by closing all your flytraps, or checking to see if the _Nepenthes_ lids move (they don't—unfortunately, they snap off), or running curious fingers through your sundews. This is all fine if you don't mind their hands-on approach—there will be no long-term damage. But having once been setting up my display at a flower show, only to find my son doing all of the aforementioned to my display plants, I can fully appreciate the potential carnage and horror! (I should add that he was only six or seven at the time.) Joking aside, while kids are young, do allow them to touch the plants in a controlled situation, explaining the hows and whys of such interesting living things.
Allow kids a supervised, hands-on approach to your plants. Get them interested early. Balls and associated projectiles are the primary cause for concern.
A mealybug on a _Sarracenia_ pitcher leaf.
MEALYBUGS
Another soft-bodied, sap-sucking insect which can be persistent and requires a careful regimen of treatment to eradicate. There are many species, but typically mealybugs are oval-shaped, segmented insects to around ¹⁄₅ in. (5 mm) long, a whitish grey in colour, with two filamentous spurs on the rear end. Their preferred carnivorous hosts in cultivation are sarracenias. Mealybugs exude a white, waxy material, which resembles cotton wool, in which they hide and lay their eggs. As they feed, they also excrete a sweet material known as honeydew, which, as with aphids, can lead to sooty mildew. Clusters of mealybugs can be found at the base of leaves, and their population increases in the autumn, although they are active during the entire growing season. Control can be tricky; the key is understanding their life cycle, which takes approximately thirty to forty days to complete. Life cycles often overlap. A spray with a suitable insecticide will remove any live insects, but will not kill the eggs, which will continue hatching. Two or three subsequent applications at two-week intervals should solve the problem. It is this phenomenon that has prompted the assumption that they are difficult to eradicate.
PITCHER PLANT MOTH AND PITCHER PLANT RHIZOME BORER
These two pests are confined to the natural range of _Sarracenia_ in North America, and are (thankfully) absent in the UK. The caterpillars of the pitcher plant moth species consume the plants' pitcher walls, causing them to collapse and topple over. The rhizome borer, as the name suggests, eats into the rhizome, leaving a trail of orange-coloured droppings. (Rhizomes are swollen, horizontal underground stems that serve as plant storage organs, from which shoots and roots grow.) Both can be controlled by a suitable insecticide and prompt removal of infected growth.
RABBITS AND DEER
Rabbits are by far the most troublesome and destructive pests I encounter. These insidious creatures dig up or eat just about everything I plant, from _Sempervivum_ and even _Agave_ , to roses. They strip the bark from trees and shrubs, and treat my garden as a vegetarian smorgasbord. However, they do leave the many _Sarracenia_ plants I have outside alone, and I have never seen a rabbit showing the remotest interest in them. I take my revenge on these mammalian menaces, and eat them occasionally, which provides a degree of satisfaction. Like rabbits, deer appear to show a similar disdain for our plants and are therefore not a concern.
RED SPIDER MITES
These tiny red-coloured mites, which are not spiders, are only ¹⁄₅₀ in. (½ mm) in length. They are serious crop pests in some countries, and are usually noticed when they have produced a substantial colony within a silk tent. They suck the contents from individual cells, causing brown spots and seriously impeding a plant's photosynthetic abilities. Although a greenhouse pest, red spider mites prefer somewhat drier conditions than carnivorous plants, so are less likely to be encountered than other pests. They can be destroyed by an appropriate insecticide/miticide (check the label to be sure).
RODENTS
Rodents will generally appear in the autumn, a greenhouse being ideal winter quarters—especially if it's a heated enclosure. In a well-maintained house with little in the way of clutter on the floor, they are unlikely to arrive. In the nursery they are a problem, and an autumnal bait keeps them at bay. Very occasionally, mice and rats will eat the rhizomes of sarracenias, though I have only seen this once in the nursery, a number of years ago.
SCALE INSECTS
There are many species of the sap-sucking scale insect, though the typical greenhouse variety is small, flat, and oval, with a light tan colour, to ¹⁄₅ in. (5 mm) in length. They are not as prolific as mealybugs, but their treatment is the same. They too produce a sweet exudate which encourages the growth of sooty mildew.
Scale insects are usually found in small numbers, here seen on a _Heliamphora_ pitcher.
SLUGS AND SNAILS
These mollusc pests aren't usually problematic. They seem to graze casually on the likes of sarracenias rather than do any real damage. However, they do have a penchant for, and can be destructive to, the soft, fleshy leaves of butterworts ( _Pinguicula_ ). Active in the cooler temperatures of night, they nibble away while you have your feet up, leaving a trail of destruction and excrement over your pots. If you have a problem, a trip to the greenhouse at night will catch them out and about, so they can be manually removed. A few slug pellets on the floor of the greenhouse should stop any new arrivals, but be careful not to allow other animals or children to come into contact with the pellets.
SQUIRRELS
Like birds, squirrels can be destructive if they feel trapped in a greenhouse. They do like to bury nuts, though, and in the spring, I always find a number of hastily buried walnuts in pots. A word of warning: never corner a squirrel.
THRIPS
Thrips are small, winged insects (though poor flyers), around ¹⁄₂₅ in. (1 mm) in length and black in colour. They will eat the surface layer of leaves, mainly species of _Sarracenia_ and _Darlingtonia_. Control them with an insecticide.
VINE WEEVILS
This serious garden pest has become very prevalent in recent years. The issue isn't the ½-in.- (1-cm-) long black adults which nibble leaves, producing characteristic semi-circular notches. It's the pale larvae, which are of a similar size and consume the roots, destroying the plants—in our case, mainly sarracenias. Fortunately for fanciers of carnivorous plants, the larvae easily drown, and the adults cannot swim, so a good depth of water maintained under your plants will act as an effective barrier. This is an important consideration for plants kept outside, where there should be either a barrier between the edge of the water tray and the container housing the plants, or a high water line to ensure the potting mix is saturated.
Diseases
GREY MOULD
Generally regarded as the nemesis of carnivorous plant growers, grey mould ( _Botrytis cinerea_ ) is a fungal infection that attacks dead plant growth, predominantly on sarracenias. This is definitely a case where prevention is better (and easier) than cure. The removal of such material and good air movement virtually eliminates the risk of it developing.
If it does occur and has spread to live growth, you will see that an entire growth point (crown) has been infected. If there are a number of growth points on the plant, then you have plenty of material and don't need to worry about losing the plant. If it is a single-crowned plant, you have a problem.
The first thing to do is remove the affected growth, which involves cutting the point off, leaving a stub of white rhizome (the rhizome can very rarely be flushed pink). If the rhizome is brown, it is still affected and further surgery is required until you reach clean material. Wipe the pruning shears with disinfectant after each cut to ensure you do not infect healthy tissue.
Grey mould attacking a _Sarracenia leucophylla_ specimen. Note how the entire growth point is affected.
Remove the growth point so that only white tissue remains.
A large plant can usually breeze through this kind of treatment, but a single-crowned individual may be left as a rhizome cutting. Other genera are rarely affected in the same way, but a watchful eye is warranted. Interestingly, _Botrytis_ is the historical "noble rot" referenced in winemaking, describing when grey mould covers grapes and increases sugar levels, accounting for the smooth sweetness of dessert wines.
SOOTY MILDEW
I've mentioned sooty mildew a few times already, as it's associated with some of the pests listed previously. The truth is, it's not all that bad, and is often a handy indication of an infestation of some sort. It is found on most _Sarracenia_ plants by mid- to late summer, because it also develops on the nectar produced by the plants, even in the wild.
It manifests as a black deposit over the leaf surface, and only becomes detrimental if it covers a substantial area and can affect a plant's ability to photosynthesize. If it appears heavily over the base of the plant (again, especially species of _Sarracenia_ ) it is almost certainly the result of a pest attack. When it shows up on the lids and throats of the pitchers (and it will), there is usually no cause for concern. If it bothers you, remove with a soft, wet cloth.
You can see the pattern of sooty mildew following the nectar glands around the edge of this _Sarracenia flava_ lid.
Sooty mildew will also colonize the leaves farther down. Here you can see it at the base of _Sarracenia minor_. This small amount is quite normal, especially later in the summer.
FEEDING
Feeding may seem a little at odds with our subject matter—after all, why would we need to provide food for plants so adept at feeding themselves?
The answer lies in supply and demand. In the conservatory during summer months, there is no paucity of insect life on hand to satisfy the requirements of our plants. However, in the confines of the terrarium, or for a not insubstantial number of winter-growing plants, things are a little different. There are times when feeding is recommended, though it is not absolutely essential. Plants will still grow, just not as quickly and vigorously as when given a food source, so there are definite benefits. There is also the issue of frequency. Perhaps it's the erroneous animal link that people attach to carnivorous plants, but there is an assumption by many that the plants need feeding two or three times per day. But these are not animals and a long period without food will do them no harm. If your plants are outdoors or in a greenhouse/conservatory, they will at times catch a huge number of insects and should be left to their own devices.
Let's first look at what _not_ to feed your plants. I have heard from countless people what they have fed their plants—usually followed by a comment like "... and then the leaves all turned black." So here are a few examples of things you _don't_ feed carnivorous plants, taken from anecdotal reporting.
Ham and other processed meats, including hamburger
Cat food
Chocolate and other forms of confectionary
Cookies
Dog food
Lego bricks
Milk
Although some of these contain nutrients beneficial to the plants, they do encourage leaves to rot, especially if given in larger-than-necessary quantities. The best policy is to avoid feeding carnivorous plants anything other than insect matter.
In a terrarium, a few insects every couple of weeks will be more than sufficient, administered directly to the traps. Don't be tempted to overfeed them in the hopes that you'll find Audrey II the following morning; you will simply succeed in rotting the leaves in the high humidity and creating an unpleasant smell. The size of the trap has a direct correlation to the intended size of the prey, so don't attempt to feed a hornet-sized insect to a small sundew.
There have been many suggestions over the years to feed carnivorous plants with dilute concentrations of proprietary plant foods, something I've never found to be necessary. To keep things simple, let's disregard this debate. The best items to feed your plants are those handy, self-contained insects which they are designed to consume, and these can be live, dried, or even tinned.
Some dieback of traps is to be expected, especially older leaves which have reached the end of their lifespan.
With live insects, the cheapest and most convenient method of gathering is to wander around the garden in the darkness of evening, looking under stones and on the undersides of leaves, collecting whatever you find. A good range of insects can be captured in this way, and under cover of darkness you are less likely to be seen by your neighbours, who will already have their reservations about you, since you grow carnivorous plants. Any insect will suffice; the plants are not selective about what they eat—though avoid earthworms, because they have the ability to escape most traps. In pitchers, they will produce a nasty smell.
Woodlice (sow bugs), flies of all denominations, beetles, caterpillars (though these can literally eat their way out of a trap), wasps (best avoided if you don't want sore fingers), spiders, and small grubs are all suitable. They can be fed directly to your plants. Remember to match the size of the prey with the size of the trap—insects should be large enough to be of value to the plant, but not so large that they kill the leaf.
Suitable insects for our plants. Essentially, if it moves, it's fair game.
Naturally caught insect prey in dionaea leaves.
If you prefer not to stalk around your garden in the darkness, you can always buy insects. A wide variety is available from reptile suppliers online. The most convenient is the common brown cricket, _Acheta domesticus_ , which is bred extensively as a live food. They are available in a variety of sizes, known as instars (sizes go from first instar to second instar, and so on, as they moult and become progressively larger).
The biggest challenge is catching them in the box in which they are supplied. Here, a pair of long tweezers will assist greatly. A word of caution: if you grasp one by a back leg, they will often detach that leg and hop away, usually across the living room floor, where they become difficult to recapture. I well remember being in trouble for just this misdemeanor as a child (when I kept tropical frogs). Adding insult to injury, my mother had a strong aversion to crickets! Placing the crickets in the fridge (not the freezer) for ten minutes will slow them down and make them easier to handle.
Grasshoppers are also available in a variety of instar sizes, but have the disadvantage of being able to jump great distances, making them more difficult to round up in the house.
Freeze-dried insects are a very convenient food for sticky-leaved plants and I use dried bloodworm, which is the larval stage of a midge, for feeding my winter-growing sundews. Dried bloodworm is also ideal for smaller terrarium plants.
A range of canned insects is available in some countries, though not in the UK. These are suitable, but once the can is open, its contents need to be used quickly.
Close-up of a mature cobra lily in its natural habitat in Oregon.
COMMON CARNIVORES FOR EASY GROWING
Now that you realize just how easily you can grow carnivorous plants, you have no excuse not to do so. In this chapter, we'll look at six genera, and when appropriate, cover every species they contain, or a selection of species. The suggestions are not exhaustive, though—there are over a thousand carnivorous plant species in total. The sundews ( _Drosera_ ), for example, contain more than 200 representatives—too many to include here.
In the cultivation sections, when possible, I have included the standard USDA hardiness zone into which each genus and/or species fits (<http://planthardiness.ars.usda.gov/PHZMWeb/>). These guidelines take into account annual average minimum temperatures. This is designed as a general guide, so there will be other factors at play. Earlier I described how individual microclimates can vary, literally between neighbouring houses. When considering plants that are grown exclusively outside, variables such as winter sun, or lack thereof, can also affect a plant's hardiness.
I gauge the hardiness based on my own observations and experience over the past thirty-five years. I hope this will enable you to make informed decisions as to which plants will be suitable for your particular region. With each genus (and sometimes individual species), we will look at the requirements for cultivation and the various methods of propagation.
_Darlingtonia_
**COBRA LILY**
The genus _Darlingtonia_ contains just one species, _californica_ , which we know as the cobra lily and which is found in the same family as _Sarracenia_ and _Heliamphora_. Native to the mountains of California and Oregon in the western United States, the cobra lily is without doubt the most sinister looking of all carnivorous plants. Well deserving of its epithet, with its turning, serpent-like pitchers, this is a plant that is guaranteed to attract attention wherever you choose to grow it.
A large _Darlingtonia californica_ colony in habitat, Oregon.
The cobra lily can still be seen in vast colonies in the wild, due to its preference for wet seeps in remote mountainous locations. It prefers sphagnum bogs and is also found extensively on serpentine substrates. Serpentine refers to soil that is naturally low in essential nutrients such as nitrogen, potassium, and phosphorus, but high in heavy metals which are poisonous to many plants. So, the cobra lily can colonize these areas unhindered by competing vegetation. The wet nature of both sphagnum bogs and serpentine substrates also serves to impede the growth of other plants.
In habitat, the conditions witnessed by the plants can be harsh; winters can reach temperatures as low as 5°F (-15°C), with frequent snow. In summer, the temperature can reach 80°F (27°C), and with little shade from surrounding vegetation, the plants are adapted to tolerate a high light level.
_Darlingtonia californica_ is a clump-forming herbaceous perennial. Herbaceous means it dies back during the winter. Perennial means it lives longer than two years. It is also stoloniferous, which means it has the ability to spread by means of stolons, or runners, in the same way a strawberry plant does. By doing this, plants can form large clumps over time, the resultant divisions each a clone of the original parent plant and hence genetically identical.
It produces two types of leaf. The first is a juvenile type for the first two to three years of life, which is in effect a small tubular leaf to around 2 in. (5 cm) in length, open at one end with a long spur.
Next, the plants produce their characteristic serpent-like leaves, or pitchers, and there are two distinct types of these as well. When young, the leaves are short, 3 to 4 in. (7½ to 10 cm) in length, and are held semi-erect, often with the forked tongue touching the soil, no doubt to guide crawling insects into the pitcher mouth. Mature pitchers are tall and upright, up to 3 ft. (90 cm) high in the wild, occasionally taller if shaded. As they emerge, they face inward toward the centre of the plant, turning around 180 degrees as they develop. By the time they open, they face outward. The first leaf to open is always the tallest; subsequent pitchers are progressively smaller as the season continues. The pitchers are an attractive apple green when they open, fading to a yellow-green. Some forms display varying amounts of red in strong sun.
As with all carnivorous plants, the cobra lily demonstrates a remarkable level of engineering in its design. Once open, the dome at the top of the leaf inflates and the trap is ready to catch its prey. Nectar is secreted through cells on the tongue, which also acts as a convenient landing platform for flying insects. Bugs follow the trail toward the mouth of the leaf, where there is a concentration of nectar. The mouth has a margin which is rolled and protrudes inward, resembling the entrance of a lobster pot. Once the insect enters, it finds itself surrounded by translucent windows, or fenestrations, which fill the dome with light.
The juvenile pitchers of _Darlingtonia californica_ bear little resemblance to those on the adult plant.
A cross section of a _Darlingtonia californica_ pitcher. Note the rolled mouth, forming something akin to a lobster pot.
As caught prey attempts to escape, it invariably loses its footing on the waxy internal surface and falls downward into the pitcher tube. To prevent escape, downward-pointing hairs cover the lower two-thirds of the tube.
The leaf is partially filled with water, which the plant secretes and regulates as there is no possible means of rain entering. It is within this pool that the insects drown. Unlike most other carnivorous plants, the cobra lily does not secrete digestive enzymes, relying rather on bacterial breakdown for the assimilation of nutrients.
Like its cousins in the genus _Sarracenia, Darlingtonia californica_ flowers in the spring, though the emerging bud is produced, sheathed in protective bracts (leaf-like structures often found on a flower scape), in the autumn after the final leaf opens. It sits at the plant base until the advent of warmer weather. The rather brittle flower stalk reaches a height of up to 30 in. (75 cm), and has at its apex a single flower which hangs downward like a small bell.
A favourite spring sight, the lantern-like flowers of _Darlingtonia californica_ , illuminated in the morning sun.
It is a unique flower structure like no other: five bright green, narrow sepals to about 1½ in. (4 cm) surround another five petals of slightly shorter length, which are wider and pinched a little before their tips. These are held together so that the pinched sections align to form five openings, allowing pollinators access. The petals are a bright red colour, streaked longitudinally with translucent orange-green lines. These flowers glow like small red lanterns when backlit by the sun—a wonderful spring sight.
Inside the flower there is a large green ovary, ringed at the top of the structure by pollen-releasing stamens. At its base is a five-pointed, star-shaped receptive stigma.
Once pollinated, the petals fall away, leaving the sepals and the green ovary, which swells and gradually lifts from its hanging position to become upright. When the seeds are ripe in the autumn, the seed head browns and splits open for them to be shed. The split head acts as a shaker, throwing the seeds away from the parent plant in the wind, like a poppy.
Cobra lily flower with the petals removed to expose the pale yellow anthers at the top of the structure and the star-shaped receptive stigma at the bottom.
_Darlingtonia californica_ seed pod and seed.
In the autumn, plants cease their growth but retain their leaves through most of the winter period, dying back and looking rather sorry for themselves by spring. As flowers and new shoots emerge, old pitchers can be removed.
It is a fairly variable plant in terms of stature and colour, but only one variant has been formally named: _Darlingtonia californica_ f. _viridiflora_. This unusual plant lacks the red pigment anthocyanin, which is found in the standard forms. The name _viridiflora_ means "green flower," and not surprisingly, the bloom is lime green with green petals. It was initially given cultivar status as _Darlingtonia californica_ 'Othello'.
CULTIVATION
Given the natural habitat of the cobra lily, you would expect these plants to be tolerant of high temperatures in the greenhouse or conservatory—and you would be quite wrong. Instead, they favour areas with water flowing over and through the roots (which has a cooling effect) and some consideration will need to be given to this requirement. If you can mitigate the effects of the roots overheating, then it's likely you can grow this plant in the greenhouse or outdoors. The tray method, in which plants stand permanently in rainwater, is ideal. The preferred position is in full sun, but in contact with a cold base, such as the greenhouse or conservatory floor. The best specimens I grew in a greenhouse were planted in one of those large plastic boxes, under which I positioned a similar-sized box full of water. I circulated the water by means of a small pump. This enabled the plants to enjoy the full sun and heat, but prevented the roots from overheating.
There are alternatives to this elaborate setup. You could position your plants so their pots are obscured by others and prevent the sun from overheating the pot walls. You can also stand them in trays on the floor of the greenhouse under the staging, or in the conservatory. This will give you surprisingly good results, as the coolness of the floor helps prevent unnecessary heat build-up.
Don't be afraid to allow them some shading, if necessary. Unlike _Sarracenia_ , the leaves are tubular to their base and, structurally speaking, are stronger and have greater stability. Plants kept in a little shade are less colourful and a bit taller, but just as vigorous and effective, which is advantageous if your greenhouse becomes excessively hot.
A sunny windowsill in the house is also ideal, where the plant can be stood in a glazed ceramic container or something similar. This will also minimize the effects of overheating and will look better than a bland pot standing in a saucer. Just bear in mind its winter requirements.
_Darlingtonia californica_ is an ideal candidate for cultivation outdoors in Zone 7. It is especially well-suited to planting in either a container or a bog garden, or used as a pond marginal. An interesting effect can be achieved by planting them in a trickling waterfall, replicating their natural preference. With a little thought and ingenuity, not to mention a pond basket or two (which may need cutting down), you could easily create a mossy planter of rearing cobras.
There are a couple of options for potting mix. Traditionally it was stated that plants grew best in pure sphagnum moss, which is fine, but you will find that the moss in the lower portion of the pot will gradually decay and smell. I prefer a 50/50 mix of peat moss and perlite (you can increase the amount of perlite if you like, to a 30/70 peat-to-perlite ratio), but add a layer of sphagnum on the surface. This can be live or dried; it will soon regenerate and form an emerald green carpet under your plants. Be aware though: the birds will also appreciate your moss.
Individual stolon divisions removed. A section of the stolon on each plant has been retained, from which the initial roots are produced.
A mature cobra lily without its pot and potting mix, to show the structure. The plants form at the end of stolons, making them easy to remove without breaking apart the main plant, though you can also split this plant up if you wish.
In a greenhouse, the plants retain their leaves through the winter months, dying back in the spring as flowers emerge. But those outside generally have their pitchers removed by the ravages of frost and snow. Cut the leaves off once they have died back.
Being a plant from temperate regions, it requires a cold dormant period—either outside, where it will withstand temperatures from 14 to 5°F (-10 to -15°C), or in a cold greenhouse. Do not leave this species in the house or in a heated environment over the winter, as you will see a drastic decline in growth the following season, and an untimely death. Move it somewhere cold from October until February.
PROPAGATION
There are three methods of increasing your cobra lily stock, all of which are easy. The most straightforward is by division, in which you take a large, mature specimen and carefully break the individual growth points apart, ensuring each has a few roots.
This must be done in the early spring, before flower stems are 2 in. (5 cm) in height. In fact, if stems are present it is best to remove them on plants you are dividing. Pot the individual plants in the peat moss/perlite potting mix mentioned, and treat as adult plants.
This is the ideal time to divide many plants, for a couple of reasons. Temperatures are generally cool and the sun is far from its strongest, allowing the divisions to settle and root through without being placed under unnecessary stress. This timing also allows plants a full season to re-establish prior to the next winter.
Its distinctive pitcher means _Darlingtonia californica_ needs little in the way of formal introduction.
Since these are stoloniferous, you will soon notice that a container of these plants begins to expand and multiply. Stolons are formed from the base of the plant under the soil surface; in potted plants, these are often found circling the pot several times before rising to the top and forming a new plant.
Stolons can be over 2 ft. (60 cm) long, and often have roots along them. These sections can also be used for propagation, when cut into at least 2-in. (5-cm) lengths and potted. Place sections at the soil surface and keep them shaded until small plantlets appear.
The final propagation method is by seed, which was discussed in the chapter on cultivation.
_Dionaea muscipula_ plants in the wild with the sundew _Drosera capillaris_.
_Dionaea_
**VENUS FLYTRAP**
As with _Darlingtonia_ , there is just one species in the genus _Dionaea: D. muscipula_ (Venus flytrap). It is placed in the same family as the sundews, the Droseraceae. This small herbaceous perennial is usually the first stop on the road to a deeper fascination with carnivorous plants; in fact, its popularity was almost its downfall, when millions of plants were stripped from their fragile and small natural habitat to satisfy the needs of the market from the 1950s until the 1990s. This practice has now been reduced (thanks to mass tissue culture propagation, rendering the removal of plants from the wild unnecessary) but not ceased entirely. It is estimated that there are now only a few tens of thousands of plants in the wild; with habitat destruction, the collection of plants and seeds, and fire suppression (naturally occurring fires are nature's way of clearing competing vegetation), the odds appear to be stacked against the long-term survival of _Dionaea muscipula_ outside of cultivation.
Despite being undoubtedly the most widely known and grown of all carnivorous plants, the Venus flytrap has a surprisingly small natural distribution, straddling the border between North and South Carolina in the United States, where it inhabits low-lying acidic peat bogs and open pine forests. It is usually found in open, sunny sites where the plants can develop a vivid colouration. In shaded areas, the plants are often larger and green, with no red colour, due to a lack of light. Here, plants will produce longer leaves which typically are unable to hold themselves upright when mature. This is the plant's attempt to reach brighter sunlight.
Its range is Zone 9, where snow is a rare occurrence but the average winter minimum is 15°F (-9°C), occasionally lower.
The growth pattern of the Venus flytrap follows that of most temperate species, generally commencing in March in my nursery with a flush of new leaves. Typically, plants form a flat rosette of leaves to 4 in. (10 cm) in diameter, often in conjunction with a number of upright leaves in the summer months. Occasionally this latter type of leaf predominates. There is a cessation of leaf production in early summer, with each plant or active growth point producing a single flower stalk. These are easy to recognize as they are circular in cross section. Flowers attain a height of 12 to 16 in. (30 to 40 cm), and are topped with eight to ten blooms approximately 1 in. (2½ cm) in diameter, which open one to three at a time. Flowers are white, with each petal bearing fine green veins along its length. They open for a couple of days before closing, with the scape remaining green until autumn, when the seed is ripe.
Venus flytraps are self-fertile after an initial period, but require manual pollination, a process as easy as rubbing the fine structures of the flowers gently together, or using the fine-haired paintbrush mentioned earlier and going from flower to flower, gently brushing over the pollen-bearing stamens and then transferring the pollen to the central stigma (receptive when it has feathered out). Pollen is released initially prior to the stigma becoming receptive, to encourage cross- rather than self-pollination. Seed heads gradually swell and split in the autumn, to display around thirty shiny black seeds about ¹⁄₂₅ in. (1 mm) in length.
Flowers will slow the progress of _Dionaea muscipula_ , though they are unlikely to kill the plant unless it is weak or dying anyway. If you don't wish to produce seed, pinch blooms out as they appear.
Shiny black seeds are produced in late summer, seen here in the mature seed pod.
The leaves are remarkable feats of natural design, and after all these years of growing them, I am still fascinated by the elaborate and sinister-looking traps. They arise from an odd bulb-like structure, which unlike a traditional bulb forms a chain of old-leaf bases, more akin in habit to a rhizome, and indeed it has the ability to branch and divide. Over many years, and in optimum conditions, the plants can form substantial clumps of individuals, all genetically identical.
The leaves are composed of two parts. The lower stem (petiole) in adult plants is usually 1 to 4 in. (2½ to 10 cm) in length, the rosette leaves being shorter than the upright individuals. They are normally a lime-green colour, and also vary in shape and width, with shorter and wider petioles produced in the spring and autumn, and longer thinner ones in the summer, which often hold the traps upright or semi-upright. In autumn, plants will lose a number of their leaves, and these can be simply removed by supporting the base of the plant and gently pulling them away.
The upper section is the trap itself. This includes two lobes held together at their bases at about ninety degrees when open. The upper margins of the lobes are lined with ferocious-looking spines approximately ½ in. (1 cm) in length. They have a flexible quality, rather than being akin to thorns. The interior of the trap varies in colour due to genetics and environment; typically it is a deep red in strong sunlight, in contrast to the green petioles, the trap exterior, and spines. In some forms, the colouration is faint or absent altogether, while in others the entire plant can be a deep red.
The trap's interior surface faces upward, to attract as many insects (both crawling and flying) as possible. It is baited and has to draw its prey in using a variety of methods. Like a flower, the trap possesses UV patterning, which although not visible to our eyes, is attractive to insects. The sometimes vivid red colouration of the trap interior may also be attractive, although insects see differently than we do.
There are numerous nectar-secreting glands on the interior, especially concentrated in a band under the base of the spines, positioned perfectly for the insect to trigger the trap. Close inspection typically reveals three short, rigid spines ¹⁄₂₅ to ⅛ in. (1 to 3 mm) in length, arranged to form a triangle on each lobe. These are the trigger hairs, which taper to a point and possess a hinged base.
Petioles of _Dionaea muscipula_. A shorter spring example on the left; a longer summer leaf on the right.
Once the insect is in the trap interior, it invariably touches one of these triggers. The first stimulus seemingly does nothing: it is considered to be a safeguard against wind-borne or other debris closing the trap unnecessarily. A second touch is required—on either a different hair or the same one—for the trap to close. Bear in mind: because it can recognize two stimuli, _Dionaea muscipula_ is the only member of the plant kingdom that can count.
Traps can close with lightning speed—in as little as _one-tenth of a second_ in the case of a healthy plant on a warm day. The mechanism by which this movement occurs is remarkable: charged particles pass through cellular membranes, which in turn cause a number of cells on the outer surface of the trap to expand, pushing the sides of the trap together. There is no hinge mechanism at the base of the lobes.
When the trap closes initially, the spines mesh, leaving a row of gaps allowing the escape of insect prey too small to be of value to the plant. In these cases the trap will reopen within about twenty-four hours. If the insect cannot escape, it will continue to touch triggers as it struggles. The trap closes ever tighter, eventually producing an airtight seal, with the spines bowing slightly outward.
Trigger hairs are located on the inside of the trap, typically three on each side.
With the second stimulation, the trap snaps shut, spines interlocked.
When digesting, the trap seals tightly and the spines reflex outward.
Glands on the leaf's inner surface secrete digestive enzymes onto the insect. The creature's soft parts are broken down; the soluble products are absorbed by the same glands. The duration of the digestion process is dictated by the size of the prey; a small fly may take only two or three days, but a larger item such as a wasp may require ten to fourteen days.
When digestion is complete, the trap slowly reopens, a reverse action to closure, with the inner surface increasing in size and pushing the two lobes apart. What remains inside looks for all intents and purposes to be a complete insect—but closer inspection reveals just the hard, indigestible chitin exoskeleton. In the wild, these wash or blow out, but in the confines of a greenhouse they usually remain on the leaf. A warning perhaps to others!
The closure and subsequent reopening of the trap is an irreversible growth process which can occur several times, and the action of digestion can take place approximately three times per leaf. If the animal caught is too large for the trap, however, the trap will usually blacken and die midway through the process.
CULTIVATION
Considering that the natural range of this plant rarely encounters snow, it's surprising how cold tolerant Venus flytraps are. I have had frozen plants in the nursery to as low as 14°F (-10°C), and I have based my Zone 9 recommendation on this. Bear in mind that this was in the confines of a greenhouse, which afforded some protection. However, I did have a few plants living outside for a number of years, and I know growers whose plants are all outside.
As autumn approaches, the leaves, especially any upright examples, will quickly blacken.
A group of individual plants can be potted in a bowl.
For the healthiest and most colourful flytraps, place them in full sun. They are at their best in a greenhouse or conservatory, or on a sunny windowsill. In those environs, they will need to sit in a tray of rainwater to a depth of 1 to 2 in. (2½ to 5 cm) in the summer. Keep them just damp in the winter months, when they lose any upright traps and retain a few prostrate leaves. The plants should follow the temperate regimen of hot summers and cold winters. Those denied winter dormancy will decline after a few years, so be sure to provide cold surroundings between October and February. Remember, this means moving them out of the house—it may seem chilly to you in the bathroom, but it will still be too warm for _Dionaea muscipula_.
Pot plants in either straight peat moss or a mix of equal parts peat moss and lime-free horticultural sand. A fairly small container, around 4 in. (10 cm) in diameter, is ideal. Fill the pot with your chosen medium, firmly pat it down, and make a hole of finger depth. Thread the fine black roots down into the hole. The white-coloured leaf bases which make up the rhizome structure should be buried in the soil to a depth of about ½ in. (1 cm).
A large, mature plant in need of dividing.
_Dionaea muscipula_ divisions, ready for potting.
Samples of leaf cuttings or pullings. Note they are stripped down to their white or pink bases.
A division showing a section of the rhizome removed. This can be potted separately.
For a more impressive display, consider grouping a number of plants in a larger container, either a pot or a bulb bowl, where the plants will have room to spread. Alternatively, they can sit and happily snap away at the front of a planting of other temperate carnivorous plants. I am often asked how to repot these plants without triggering the traps, and the answer is quite simply that you can't. A newly repotted plant will look dreadful, but don't worry, the new pot will encourage new growth, and before long, the plants will once again look happy and healthy.
Much has been said of the flowering process killing plants, and although there is an element of truth to the claim, it is certainly not always the case. To flower and reproduce successfully, a plant should be mature and in good health, and its winter rest period should be respected. When the flower stalks are produced in early summer, you have the choice of leaving them if you wish to produce seed, or pinching them off if you do not. You will notice a visible slowing in the plant's growth progress while flowering takes place. Don't worry, though—normal growth rate is usually restored afterward.
If a plant has not been allowed winter dormancy, or is in general poor health, the flowering process can indeed result in its death. It has also been proposed that a dying plant flowers in an attempt to produce seed and continue its lineage prior to its demise.
PROPAGATION
There are three simple methods of propagating the Venus flytrap, and one made possible in occasional circumstances. The one which will give instant results is division. As a plant grows and branches, you will notice small clumps developing. In the confines of a pot, vigour will begin to slow after three to four years. This is the time to repot and divide the plants. Only attempt this in the spring so the plants have the whole season ahead of them to re-establish. Remove the plants from their container and shake off the old potting mix. Mature plants will typically have two or three individual growth points, and usually a chain of old leaf bases which remain alive for a considerable time after the leaves themselves have died.
Begin by removing the chain of leaf bases, retaining them. Gently grasp the growth points and snap them apart. They are quite brittle and will come apart easily, leaving you with individual plants. Pot plants as described.
You will be left with the rhizome. This can itself be useful—at the base of each segment, there is a dormant bud which will begin to grow if exposed. Many plants employ this tactic as a safeguard if the main stem or crown of the plant is damaged. By breaking the rhizome into pieces ½ in. (1 cm) in length, you can use the pieces as cuttings. Simply reduce the length of the roots by half, and push them into a container or seed tray to half their depth. Green shoots should appear in around four weeks.
Leaf cuttings are just as straightforward, though the word "pulling" is more appropriate. One aspect is essential for success: the leaf must be complete, including the white (sometimes pink) base. This is where the active cells for regeneration are to be found (exactly the same as the rhizome cuttings). Select a healthy plant with six to eight leaves and remove it from its pot (you could try to pull the leaves out without doing this, but you'll snap at least half of them).
A batch of _Dionaea muscipula_ seedlings.
Holding the plant, carefully strip up to half of the leaves downward so that the whole leaf comes away. Select as many as you require and snip off the traps. Fill a seed tray with peat moss and soak well, before laying the leaves on the surface so that the bases are all pressed in ¹⁄₅ in. (5 mm). A light sprinkling of sphagnum moss will help keep the humidity high around the developing plantlets. You can either stand the tray in water somewhere shaded, or cover with a plastic bag and place on a bright windowsill out of direct sun.
Small plants will be visible in three to six weeks. Once they are 1 in. (2½ cm) in height they can be treated as adult plants. Pot them separately the following spring. Be careful to harden plants off first by making a small hole in the bag every day for a week before transplanting, so they do not dry out. Also keep in mind that they will scorch if placed in full sun immediately, so introduce this gradually as well.
_Dionaea muscipula_ 'Australian Red Rosette'.
_Dionaea muscipula_ 'Cross Teeth'.
Propagating by seed is straightforward, if somewhat slow. Once the flowers have produced their seed in the autumn, stratify and sow them as described in the cultivation chapter. From seed to maturity is three to five years, but you will have the benefit of raising plants of various stature and colour.
Finally, there is a strange phenomenon in which plantlets are sometimes formed on the flower scapes, in addition to the flowers, attaining a good size over the course of the season. Plantlets can be carefully snapped off and will generally root through in a few weeks. Although this is another intriguing aspect of these plants, it is a fairly rare occurrence, and cannot be regarded as a reliable method of propagation.
Plant Suggestions
_Dionaea muscipula_ is a variable plant; though it has not been formally broken down into any recognized varieties, there are a large number of formally and informally named cultivars and variants, a few of which I shall describe.
First though, a note on naming. As described in the introduction, plants are given the status of cultivar for an outstanding or unusual characteristic that makes them horticulturally desirable.
_Dionaea muscipula_ 'Dentate Traps'.
_Dionaea muscipula_ 'Red Piranha'.
_Dionaea muscipula_ 'Sawtooth'.
_Dionaea muscipula_ 'Royal Red'.
In recent years a large number of Venus flytraps have been afforded this accolade for being hideously deformed—usually the result of a fault in the tissue culture process, causing occasionally stable mutations. One has only to read the cultivar descriptions to see the oft-repeated line that the plant was "discovered in a garden centre" to realize that these random mutations are being exploited by individuals who seek to make a name for themselves. In reality, such abominations deserve no future other than the compost heap.
Don't get me wrong. There are a few deformed cultivars which are attractive, but the current craze for naming the most abhorrent of tissue culture's mishaps really should stop. Back in the 1980s there were no cultivars to speak of, merely a range of forms designated by colour, generally combinations of red and green. The discovery of the first all-red forms in the early 1990s began the surge of cultivar naming we see today. With that in mind, here are a few worthy cultivars.
'Australian Red Rosette', a large and squat plant, produces flat, prostrate rosettes. Its large, chunky traps grow to 1¹⁄₅ in. (3 cm) across, always held flat to the ground. Vivid crimson colouration contrasts with apple-green foliage.
_Dionaea muscipula_ 'South West Giant'.
An unusual plant of small stature, 'Cross Teeth' produces traps with long, eyelash-like spines, some of which are slightly fused, causing them to cross over. The interior of the traps is a bright blood-red colour.
'Dentate Traps' is similar in size to 'Royal Red', but with normal green and red colouration. Its teeth are reduced from spines in the spring to jagged, triangular points in the summer. A large number of different plants are now circulating with this characteristic—this was foreseen and a Dentate Traps Group has also been registered.
'Red Piranha', an all-red plant, produces traps with teeth that are reduced in the same way as 'Sawtooth'. Flat rosettes produce traps to ⁴⁄₅ in. (2 cm).
The first of several very similar plants to have been named, 'Royal Red' produces leaves and a flower scape that develop a uniform purple-red colouration. Summer leaves are around 5 in. (12 cm) in height and are held erect, with traps to about 1 in. (2½ cm). Flowers are the same as the type.
'Sawtooth' is an interesting variety in which the teeth, similar to 'Dentate Traps', are reduced. Here, they form a margin which, as the name implies, resembles that of a saw.
'South West Giant' was named by my good friends Alistair and Jenny Pearce of South West Carnivorous Plants in Devon. A large and vigorous plant, it bears the typical colouration but grows large summer traps on long petioles to 1½ in. (4 cm) across.
_Drosera callistos_ is a pygmy species.
_Drosera_
**SUNDEW**
_Drosera_ is a genus of over 200 species, with worldwide distribution. Plants are found on every continent except Antarctica—a truly successful and remarkable group when one considers the adaptations they display to survive in their respective habitats. Some species make ideal and visually impressive plants for a bright windowsill, where the sun will illuminate their leaves and give them a fiery appearance. In this location, the five-petalled flowers of droseras are produced at various times during the growth cycle of the plant, and are usually open for one to three days.
An example of a tuberous species, the climbing _Drosera pallida_.
_Drosera capensis_ wrapping its leaves around a housefly.
The headquarters of the genus is Australia, with some 50 percent of all known species, followed by South Africa and South America. The remaining species are found across Asia, Africa, Europe, and North America, as well as unusual species endemic to islands, including Madagascar and New Caledonia.
With a small handful of exceptions, they are all perennials, varying greatly in shape and stature from the tiny _Drosera occidentalis_ from Australia, barely ⅓ in. (8 mm) in diameter, to the stately and regal _Drosera regia_ from South Africa, which can produce a leaf 18 in. (45 cm) in length. The plants display an uncommon variety and diversity, and although many are beyond the scope of this book, it is worth mentioning these adaptations, in the hope that you will become addicted to these beautiful plants.
The temperate species lose their leaves in the autumn, and instead produce a tight winter resting bud, often termed a hibernaculum, to protect the base of the plant against winter cold. Our native European and North American species adopt this strategy, with one species, _Drosera filiformis_ , clothing itself in a mass of fuzzy brown hair. The plants sit in this suspended state until the return of warmer conditions in the spring, at which time the bud opens up and new leaves emerge. A similar challenge is presented to those species whose habitats become too hot or dry for part of the year. That hotbed of species, Australia, includes a couple of groups of sundews that have this challenge to overcome.
The tuberous species, of which there are around fifty, are predominantly found in Western Australia, and grow in the cooler and wetter winter months, a habit that they continue in cultivation. Some species emerge, develop, flower, and set seed in only three to four months, before they die back in the spring and retreat underground, away from the heat of summer, resting in a tuber until the following autumn. The tuber is a storage organ, the most common example being a potato.
The same region of Australia is also home to most of the thirty or so species of pygmy _Drosera_ , which as the name implies are all small in stature, but must still employ a tactic to survive the harsh heat of summer. They utilize a couple of methods to do this. Firstly, they produce a structure similar to the hibernacula of temperate species, and lose their leaves. However, they cover this with white- or silver-coloured hairs called stipules—small outgrowths that can be found on most species of _Drosera_. Instead of the hairs serving as protection from cold, as with _Drosera filiformis_ , these are enlarged and serve to reflect the heat of the sun away from the growth point.
A dormant winter resting bud of _Drosera filiformis_.
The dormant resting bud and stipules of _Drosera closterostigma_.
A few species appear to take it one step further, and produce a stem. In _Drosera dichrosepala_ , this can be up to 4 in. (10 cm) in height. As they become taller, they produce aerial roots which serve to steady the plant, resulting in what appears to be a plant on stilts, held up above the baking soil surface. For some South African species, the habitat, as in Australia, dries over the summer. To survive this, a number of species produce roots which are thick and fleshy, and in a way similar to tuberous plants, they lose their growth in the spring and the roots remain dormant until the following autumn.
Whatever their particular environmental adaptations, all droseras share a common characteristic: leaves with mobile tentacles, each of which is furnished at its tip with a swollen gland surrounded by a droplet of mucilaginous glue. The density of these tentacles varies between species, with some densely embellished and others, such as the Australian _Drosera schizandra_ , having only a sparse covering.
Structurally they are all similar, and the amount of movement varies; some barely moving at all. The length of the hairs is loosely dictated by the size of the leaves. In a number of species, the leaves themselves are also capable of movement, curling over and around their prey like an octopus, wrapping up flies and even wasps. In _Drosera capensis_ and _D. regia_ , the effect can be dramatic.
_Drosera capensis_ , wild in South Africa.
Prey are attracted to the glistening leaves by the promise of nectar, with the mucilage reflecting light both in the visible and ultraviolet spectrums, an enticing lure to insects. However, on contact with the leaf, an insect soon becomes ensnared in the glue, which because of its elastic properties, effectively smothers the animal as it struggles. This movement is detected by the leaf and the surrounding tentacles, which bend inward toward the animal, ensuring as many hairs as possible make contact. In the case of species in which the leaf itself moves, this can be a substantial number. The movement is slow and barely visible to the naked eye, although some species are furnished with a row of extra-long tentacles on the leaf tip, which are able to snap over very quickly. The insect soon suffocates, as the glue blocks its breathing pores. Enzymes are then released through the tentacles, entering the insect's body. During the digestive process, the resultant soluble nutrients are absorbed into the plant before the leaf and tentacles gently unfurl, leaving only the insect's exoskeleton.
A typical _Drosera_ flower on _D. rubrifolia_.
Seed from drosera plants is tiny. Shown here is _D. capensis_.
CULTIVATION
These few basics apply to the entire genus; specific individual requirements are covered in Plant Suggestions.
Whatever their origins, nearly all species of _Drosera_ require full sun. For such seemingly delicate plants, they are remarkably tolerant, reveling in the greenhouse, conservatory, or sunny windowsill, where most houseplants would burn.
The commonly grown species are generally bog plants, and they need to permanently stand in a tray of rainwater. This can be to a depth of 2 in. (5 cm) or deeper during the hot summer months and reduced in the winter to a thin film. But plants can never dry out, as this will certainly kill them.
The majority of temperate species need a hot summer and a cooler winter period. The depth of cooling is dictated by the plant's origin, with the cold tolerant species being best in an unheated greenhouse or outdoors. But many species are content in a position where there is a lesser drop in temperature, making them ideal for cool windowsills.
For virtually all the droseras covered here, a potting mix of equal parts peat moss and lime-free horticultural sand is generally perfect; I'll mention any variance from this with individual listings. Those plants which require similar conditions can be planted together in a single container or bowl—creating a sticky alien landscape.
PROPAGATION
The majority of droseras grow easily from seed, surface sown onto the same potting mix as the adult plants. Seed should be collected late summer, stored in the fridge, and sown in the spring. The cold tolerant species require a cold stratification as described in the cultivation chapter. Most species appear to be self-fertile, though not all of these self-pollinate, and may require manual intervention if you have only a single plant. Flowers are usually pink or white, and in a very few cases are fragrant. Seed is tiny, usually around ¹⁄₂₅ in. (1 mm) in length and very narrow, and is often produced in large quantity. To collect seeds, simply cut off the spent flower scape while ensuring it remains upright. Turn it upside down over a sheet of paper, onto which the dust-like seeds can fall. A gentle tap will shake any stragglers free. Seed germinates in two to four weeks. Depending on the species, adult plants can be produced in as little as one to two years.
A planted bowl of mixed sundews makes an interesting and unusual display. Two forms of _Drosera capensis_ are at the rear, _D. cuneifolia_ is front left, and _D. slackii_ is front right.
A number of species produce long roots, and can be propagated by root cuttings. This simple procedure, best performed in the spring, involves unpotting the plants, shaking off the potting mix, and breaking off around half the length of the roots. These root cuttings are then placed on the surface of the same planting mix from which the adult plant came. A seed tray works well for this. Cover the cuttings lightly with potting mix. New plants will be seen in four to six weeks. The donor plants can be repotted.
There are a very small number of _Drosera_ hybrids, mostly man-made. These are all sterile, so seed cannot be produced from them.
Plant Suggestions
_Drosera adelae_
_Drosera adelae_ is one of three species found in the Queensland rainforest, the others being _Drosera prolifera_ and _D. schizandra_. They are known collectively as the Queensland sundews. _Drosera adelae_ is a perennial evergreen hailing from a tropical region, making it an ideal candidate for terrariums, where it will thrive in the warm environment. The plant produces lance-shaped leaves to 3 in. (7½ cm) in length, which unfurl in a semi-erect fashion and gradually lower to a prostrate position. The upper surface is covered with sticky tentacles. Leaves are a bright green in low light levels (something this species is able to tolerate), and flush an attractive bronze-red colour in good light. Short flower scapes to about 3 in. (7½ cm) high are produced in the summer months, each with a dozen or so small, red-petalled, star-shaped flowers to ⅓ in. (8 mm) in diameter.
**CULTIVATION** This plant can be grown in a greenhouse or conservatory, a terrarium, or on a windowsill with indirect light. It will tolerate lower light levels and is one of the very few species that can grow on a shady windowsill. As it requires good humidity, it is best to keep it in the confines of a propagator with its sister species. It also grows well in the type of low bowl you see for hyacinths and similar bulbs. Maintain a winter minimum of 45°F (7°C).
**PROPAGATION** This species doesn't self-seed, but is one of the few species of sundew which produces daughter rosettes from its roots, forming clumps and filling its container over time. These offspring can be simply dug out and potted separately, preferably in the spring.
The unusual star-shaped flowers of _Drosera adelae_.
_Drosera adelae_.
_Drosera aliciae_
_Drosera aliciae_ is one of a number of evergreen, rosetted species found in South Africa. Many look very similar at first glance, and many are also variable, which has led to much confusion concerning their identity over the years. This plant produces rosettes of 1 to 3 in. (2½ to 7½ cm) in diameter composed of pale green leaves. The leaves are wedge-shaped and slightly rounded at their tips, covered on their upper surface by sticky tentacles, which are a beautiful pink-red. In full sun, the leaves can also flush red. During the summer months, tall, wiry flower scapes to 16 in. (40 cm) are produced, unfurling like watch springs from the rosette and covered, as close examination will reveal, by tiny, sticky glands. Around twenty pale pink flowers to about ⅓ in. (8 mm) in diameter are produced in succession, the lowest opening first.
_Drosera aliciae_.
_Drosera binata_ flowers.
To propagate, spread divided roots on the surface of the potting mix and cover lightly.
A year after root propagation, a sizable clump will be produced.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun. Maintain a winter minimum above 32°F (0°C), though no cold period is required.
**PROPAGATION** This species freely self-seeds, without the need for manual pollination. Store seed in the refrigerator, although a cold stratification is unnecessary, and surface sow in spring.
_Drosera binata_
The species name _binata_ means "divided in pairs" (a reference to the leaves being forked), hence the common name of forked-leaf sundew for the impressive _Drosera binata_. An Australian species, it is found in the more temperate southeast of the country, and also in Tasmania and New Zealand, as well as some other smaller islands. It is a variable species with four distinct forms, all worthy of cultivation for their notable stature and easy-to-please attitude. A well-grown clump of this perennial species is a dramatic sight, especially early in the morning when the sun is low and the plant is ignited in the light.
All _Drosera binata_ produce white (occasionally pink-blushed) flowers in the summer months, ½ to ¾ in. (1 to 1½ cm) in diameter, held aloft on sometimes tall, sturdy stems. Most plants in general cultivation appear to be self-sterile, but beware, some forms do self-seed prolifically, scattering their seeds far and wide in the greenhouse. In the autumn they die back to their roots, the entire plant blackening off in a surprisingly short time before the onset of winter. In the spring, they unfurl from the soil, exploding like fireworks.
**CULTIVATION** _Drosera binata_ can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun. The habit of the larger forms makes them best in hanging containers, where they are free to explore their surroundings without interfering with any neighbours. _D. binata_ tolerates a winter minimum of 14°F (-10°C).
**PROPAGATION** The quickest method of propagation is to split mature _Drosera binata_ plants that have formed clumps. As they develop in early spring, you can simply pull sections apart, or cut them with a knife. They produce very long, wiry roots, which can also be used as cuttings. Simply spread them on the surface of a pot or seed tray, and lightly cover with planting mix. Within a few weeks you will see leaves developing, and mature plants can be produced in a single year this way. Seed can be stored in the refrigerator and surface sown in the spring, though that is perhaps a little unnecessary in the case of this species because of the ease with which it can be divided.
_Drosera binata_ var. _dichotoma_.
_Drosera binata_ var. _binata_ is the typical form (commonly referred to as the T-form), an erect plant producing a dense colony of glabrous, green- to bronze-coloured petioles to 6 in. (15 cm), topped by a singly divided sticky leaf, each side 1 to 3 in. (2½ to 7½ cm) in length. The whole plant can attain a height of some 10 in. (25 cm).
_Drosera binata_ var. _dichotoma_ is a generally larger plant in all respects. Its leaves are divided twice, similar to _D. binata_ var. _binata_ , but with an extra pair of points, making four. The petioles can be up to 10 in. (25 cm) in height, and are apple green, blushing red. The sticky leaves can be up to 5 in. (12½ cm) in diameter. This variety is a little slower to divide than _D. binata_ var. _binata_.
_Drosera binata_ var. _multifida_ is a finer, more gracile (slender) plant all around. Like _D. binata_ var. _binata_ , it forms a dense clump, but we see a change of habit in this variety—from upright to hanging over the edge of the container, due to leaf weight. And the leaves in this plant are divided still further, typically to eight to ten points; the forms in cultivation bear leaves to around 4 in. (10 cm) in diameter, held on thin petioles to a similar length.
Leaf examples. From left: two leaves of _Drosera binata_ var. _binata_ , one leaf of _D. binata_ var. _dichotoma_ , one leaf of _D. binata_ var. _multifida_ , and one leaf of _D. binata_ var. _multifida_ f. _extrema_.
_Drosera binata_ var. _multifida_ f. _extrema_ represents this species at its most extreme (pardon the pun), and in my opinion, at its most beautiful. Frequently boasting leaves of over twenty points (in the wild, the figure of sixty has been suggested), these sticky red spider webs can attain a diameter of 8 in. (20 cm) on hanging, similarly coloured, wiry petioles to 10 in. (25 cm) in length. In the wild they are often found on vertical cliffs, where their hanging habit works perfectly, projecting traps out into the air, where they lie in wait for prey. It will survive 14°F (-10°C), but does seem to regrow slower in the spring. Try to keep this form above freezing.
_Drosera capensis_
A native of the Cape region of South Africa, _Drosera capensis_ is absolutely the first plant you should have on your windowsill (after the obligatory Venus flytrap, that is). Surprisingly rare in the wild, this showy and impressive plant has everything a beginner could hope for. It produces long, strap-shaped leaves to 4 in. (10 cm) long, which are covered in vivid red tentacles for half their length. Small houseflies and similar insects are fatally attracted, and the plant catches these with relish, leaves curling over and winding up their prey, then unfurling when they have finished their meal. In the summer a profusion of self-fertile pink flowers are produced on tall, slightly hairy stems to over 12 in. (30 cm), followed by many seeds, which can be stored and sown in the spring. The ease with which this species can be grown makes it the perfect plant for children to grow. A few rather attractive all-red forms are also in cultivation, identical in all respects except for the entire plant developing an intense red colour—a perfect contrast to the standard forms, and also to the green stems of _Sarracenia_ , when grown together. When kept above the freezing point it remains evergreen, though it can freeze and will die back to its roots, returning in the spring. This deep freeze isn't a requirement, though, and the plant can be kept indoors year-round.
The easy and vigorous _Drosera capensis_. Make this your first drosera.
_Drosera filiformis_ var. _tracyi_ enjoying the morning sun in a greenhouse.
A beautiful red form of _Drosera capensis_.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun. Maintain a winter minimum above 32°F (0°C) to preserve evergreen leaves.
**PROPAGATION** Individual plants will gradually divide and can be split in the spring. Alternatively, sow the copious amounts of seed.
_Drosera filiformis_
_Drosera filiformis_ is an interesting North American species with self-supporting, thread-like leaves held in unusually attractive upright clumps, up to 8 in. (20 cm) in height. There are two varieties: _Drosera filiformis_ var. _filiformis_ , which is a finer, more delicate plant, with green leaves and red tentacles; and _D. filiformis_ var. _tracyi_ , which is a stockier plant with green leaves and white tentacles that catch the low morning sun beautifully. Plants die back to a winter resting bud and need to be kept cold over the winter months, re-emerging in the spring, their leaves unwinding out of the base. Large, pale pink, self-fertile flowers to 1 in. (2½ cm) across are produced in the summer.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer. It must be cold over the winter, so somewhere unheated or outside is best. It is also good in the bog garden or in an outdoor container. As a pond marginal, it will complement _Sarracenia_ , with which it grows naturally in the wild. _Drosera filiformis_ tolerates a winter minimum of 14°F (-10°C).
**PROPAGATION** Individual plants will gradually divide and can be split in the spring. Leaf cuttings also work well. Take cuttings in the summer, slice them into 1-in. (2½-cm) lengths, lay them on a surface of potting mix, and lightly cover with a little sphagnum moss. Young plants are formed in four to five weeks. Alternatively, sow seeds in the spring after they have been stored in the refrigerator. As this is a temperate species, cold stratification is necessary prior to germination.
The olive green rosettes of _Drosera hamiltonii_.
_Drosera hamiltonii_
One of my favourite species, _Drosera hamiltonii_ from Australia is another rosetted plant which reaches about 2½ in. (5 cm) across. It is unusual in its colouration—rather than the standard pale green of its fellow species, it is a dark olive green, offset with purple tentacles, quite unlike any other. For the larger part of the year it is a spectacular plant, with its bejeweled leaves shimmering in the sun. But in the hottest part of the summer, it can lapse into a state of semi-dormancy, where the leaves are temporarily stripped of their glue. It is one of those plants best described as shy to flower, and indeed over the past twenty-five years I've succeeded only a handful of times, often a short period after repotting. The deep purple flowers are worth the wait: over 1 in. (2½ cm) across—huge for a drosera—and borne on wiry glandular stems to 16 in. (40 cm) high. They are self-sterile.
Tiny _Drosera pulchella_ rosettes.
Flowers of _Drosera pulchella_.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer. _Drosera hamiltonii_ prefers a winter minimum of 41°F (5°C), no cold period required.
**PROPAGATION** Individual plants will gradually divide and can be split in the spring. They also produce very long roots which are ideal for cuttings. Lay them on the soil surface and cover lightly. New plantlets will appear in a few weeks.
_Drosera pulchella_
_Drosera pulchella_ is one of the Australian pygmy species I mentioned earlier. Although not large and showy, the plant is fascinating, whether used as a stand-alone specimen or mixed with other species. Plants are around ¾ in. (2 cm) in diameter, and a bright, vivid green colour which contrasts strikingly with other species. In addition to this, they also produce correspondingly small flowers to ¹⁄₅ in. (5 mm), usually pale pink, but white and orange in some forms. Seed is rarely produced.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer. _Drosera pulchella_ prefers a winter minimum of 41°F (5°C); no cold period required.
**PROPAGATION** Pygmy sundews employ a somewhat different tactic for propagation. In the latter half of the winter months, a number of tiny green buds, gemmae, are produced in the centre of the plant. These vegetative buds can be transferred, using a fine paintbrush, from the plant onto a sheet of paper, or directly onto the surface of new quarters. They will grow into clones of the adult plant. This is best performed every two to three years for all the pygmy species, as the plants are fairly short-lived and lose their vigour.
Stately _Drosera regia_ plants in a greenhouse. The large stature means it is best here or in a conservatory.
Flower of _Drosera regia_.
_Drosera regia_
Meet the king of the sundews. I'm including the remarkable _Drosera regia_ not because I would necessarily recommend it as a plant for beginners, but because any book that covers this genus would do it a great injustice not to include it. It is found in just a single valley in South Africa, where it is divided between two tiny populations, and holds the honor of being the largest member of the genus in terms of bulk. The beautiful, sword-shaped leaves rise vertically, slicing through the air to a height of 15 in. (37 cm). They are wider at their bases, to 1 in. (2½ cm), taper along their length to a point, and are covered on the upper surface with large, sticky tentacles. The underside is smooth with a prominent, raised midrib. This is the only drosera to produce a rhizome, gradually dividing over a number of years to become a substantial sticky monster. Its size makes it capable of taking on larger prey such as wasps, and its leaf movements are agile, ensuring nothing escapes. Through the summer months, mature plants produce tall flower scapes to 24 in. (60 cm), which support a cluster of around twelve large flowers to 1 in. (2½ cm) across. Different in structure from the blooms of other species, these flowers are bright pink with darker veins and glowing yellow stamens. Over the winter months the plant dies down, losing its carnivorous leaves and producing a few non-glandular leaves which are often only about 1 in. (2½ cm) in length. Growth is resumed in the spring. This is one of those plants with a reputation for being a bit of a diva, though it seems to do well once established. Perhaps try a few of the more forgiving species first.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, where it will receive full sun in the summer. Because of its large size I wouldn't recommend it for the windowsill. The king of the sundews prefers a winter minimum of 41°F (5°C); no cold period required.
**PROPAGATION** Individual plants will gradually divide from their rhizome and can be split in the spring. They also produce very long, thick roots which are ideal for cuttings. Lay them on the soil surface and cover lightly; new plantlets will appear in a few weeks.
_Drosera rotundifolia_
The common round-leaved sundew _Drosera rotundifolia_ is one of three species native to the UK and has a vast distribution throughout Asia, Europe, and North America. Surprisingly, despite this huge range, it is not particularly variable. It produces a loose rosette to 3 in. (7½ cm) in diameter of semi-erect to prostrate leaves, each consisting of a thin petiole topped by a round glandular leaf. This, along with its native colleagues _Drosera anglica_ and _D. intermedia_ , is one of the stalwarts for the bog garden, and is perfect when grown in a carpet of green sphagnum moss, with its red-highlighted leaves held aloft. In the summer, short flower scapes to 4 in. (10 cm) in height are produced, topped with small white self-fertile flowers.
_Drosera rotundifolia_ , in the wilds of southern England.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer, but it doesn't appreciate excessively high temperatures. It must be cold over the winter, so somewhere unheated or outside is best. It is also good in the bog garden or in an outdoor container. As a pond marginal, it will complement other carnivorous plants. _Drosera rotundifolia_ tolerates a winter minimum of -4°F (-20°C).
**PROPAGATION** This species is best propagated from the seed that it freely produces. It can be collected and stored in the refrigerator, or left to fall in situ, where the plants will gradually colonize. A cold stratification is essential for germination.
The fragrant rosettes of _Drosera slackii_.
Flowers of _Drosera slackii_ , with guest.
_Drosera slackii_
As far as the rosetted species are concerned, spectacular _Drosera slackii_ is hard to beat. It boasts large, stocky, deep crimson rosettes to 3½ in. (9 cm) across. Over the summer, these rosettes send out wiry, red glandular flower scapes topped with deep pink flowers ½ in. (1 cm) in diameter. The plants gradually form pedestals. A native of South Africa, this is a perfect candidate for anywhere in full sun where its remarkable colouration can develop. An interesting quirk of the plant is the release of a floral fragrance from the rosettes in hot weather; the flowers have no discernible scent. Whether planted in a mixed scheme or used as a stand-alone plant, this species is one you shouldn't be without.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer to produce its best colour. _Drosera slackii_ prefers a winter minimum of 41°F (5°C); no cold period required.
**PROPAGATION** Individual plants will gradually divide and can be split in the spring. They also produce very long roots, which are ideal for cuttings. Lay cuttings on the soil surface and cover lightly. New plantlets will appear in a few weeks.
_Drosera spatulata_ , easy to grow year-round on a sunny windowsill.
_Drosera spatulata_
Another rosette species with wide distribution, _Drosera spatulata_ is found in Eastern Asia, down through Southeast Asia, and across the area known as Australasia. Its wide range affords it broad diversity, and there are many forms. Some of these were named informally in cultivation, but are now diluted by the varied strains which have been introduced over the past thirty years. The plants produce loose rosettes of spoon-shaped leaves, between ⅔ and 2 in. (1½ and 5 cm) in diameter, in an array of colour variations from plain green to blushed through with orange, pink, and red. The self-fertile flowers, held on fine scapes, are either white or pink. At barely ¹⁄₅ in. (5 mm), the blooms aren't particularly impressive, but do add interest. Some forms from colder climates will cease their growth and produce a winter rest bud, resuming in the spring, but they will not survive a deep freeze outside.
**CULTIVATION** This plant can be grown in the greenhouse or conservatory, or on a windowsill where it will receive full sun in the summer to produce its best colour forms. _Drosera spatulata_ prefers a winter minimum of 41°F (5°C); no cold period required.
**PROPAGATION** The easiest method of propagation is by seed, which this species produces well. Surface sow these in the spring, and adult plants can be raised in a little over a year.
_Pinguicula_
**BUTTERWORT**
There are somewhere in the region of eighty species of butterwort, found across Asia, Europe, and North America, along with a small handful of satellite species in countries such as Cuba and Japan. However, the main concentration is in Central America, specifically Mexico, which over the past twenty-five years has yielded a large number of new species. These plants impart a subtle beauty and elegance which makes them enchanting specimen subjects in the home. At first glance, most people would not suspect these beautiful and at times innocuous plants of any untoward habit such as carnivory. They typically produce a flat rosette of bright green leaves pressed to the ground, from which they launch the most delicately beautiful flowers, always borne singly.
The leaves, upon close inspection, reveal thousands of tiny, sticky hairs, which when touched impart a greasy feeling—hence the Latin name, derived from _pinguis_ (fat) and the diminutive suffix _-ula_ (small). Tiny insects generally smaller than a housefly become ensnared in this mucilage and suffocate. Glands on the leaf surface release digestive enzymes onto the carcass, breaking it down. Some species have the ability to move, though slowly; motion is usually restricted to the curling over of the leaf margins to prevent the loss of prey and valuable digestive juices.
The flowers of all species are beautiful, with some being barely ½ in. (1 cm) across and others up to 2 in. (5 cm). The flowers are symmetrical in only one plane, and can be divided in half from top to bottom—a characteristic shared with the bladderworts (which belong to the same family). Blooms consist of an upper and lower lobe; the upper is divided into two sections and the lower is divided into three. Many of the flowers last for over two weeks, and can bring a welcome splash of colour at an otherwise drab time of year. This colour, when combined with props such as tufa (on which some species naturally grow), can be used to great effect when creating a small display.
Flower of _Pinguicula_ 'Tina'.
CULTIVATION
Just as they are found in diverse environments, butterworts have different requirements in cultivation. For our purposes, the two most common groups of species are the Mexican and the temperate.
**MEXICAN SPECIES** While a few of the Mexican species are evergreen and grow in an environment that favours year-round growth, the majority do not, losing their carnivorous leaves over the winter in favour of a reduced rosette of many small, non-carnivorous succulent leaves. A number of species flower while in this state. The reason for the change of growth habit is not to protect the plant from cold; rather, it is an adaptation against desiccation where winters are cool and dry.
Mexican species of _Pinguicula_ are somewhat different in their requirements from temperate species, and can be regarded as more akin to houseplants, a role in which they can excel. A key distinction, though: their potting mix requirement is a far cry from that needed by temperate species. Do not be tempted to keep them in wet peat moss or similar, as they are prone to rotting—though a few commercial hybrids are able to tolerate this, especially those that grow year-round. There are a couple of options for planting material. For many years I had great success with a mix that was equal parts perlite, vermiculite, and sand, with a small scattering of peat moss to provide an element of organic matter. In recent times, I've found that the plants seem to be happy in coarse Cornish grit (crushed granite), with nothing else added. This is one group of plants for which you could ask a dozen growers about preferred potting mix, and receive a dozen different answers.
The winter rosette of _Pinguicula crassifolia_ forming in the autumn.
_Pinguicula grandiflora_ and friends in a bog container.
Some hardy _Pinguicula_ species in a bowl: _P. grandiflora_ subsp. _rosea_ in front, _P. poldinii_ behind.
Mexican species are often found naturally on north-facing cliffs, away from direct sun, so a degree of shading is necessary. This makes Mexican _Pinguicula_ species some of the few carnivorous plants that will thrive on indirectly lit windowsills and in conservatories that are not suitable for their sun-loving cousins. While this means they are not good candidates for mixed display, their singular beauty makes them stellar stand-alone specimens. As their carnivorous leaves die back in autumn, they are best kept dry until spring, when growth resumes. A daily mist of rainwater is quite sufficient for their needs when they are dormant, and they can remain inside year-round in a cool location. If you are growing Mexican species in a greenhouse or conservatory, they prefer a winter minimum of 45°F (7°C). Although a brief freeze seems to do little harm to the winter rosette, it will result in the destruction of any remaining carnivorous leaves. Plants should stand in rainwater for the growing season, to a depth of around ½ in. (1 cm); the coarse nature of the potting mix ensures that the soil surface will remain somewhat damp rather than wet. The root systems of these plants are surprisingly shallow and the dampness is perfectly sufficient. Once a plant has formed its winter rosette, stop watering and allow it to dry out. This mimics conditions in the wild, where the plants have adapted to endure an arid environment. If, however, your plant remains in growth—as species such as _Pinguicula moranensis_ and some horticultural hybrids do—continue to water as usual.
**TEMPERATE SPECIES** Temperate pinguiculas are found across Asia, Europe, and some parts of North America, and behave in the usual manner of bog plants—that is, producing growth during the more conducive summer months, then dying back to a much-reduced winter bud (hibernaculum) in the colder months. These species require a cold winter and are hence unsuitable for indoor cultivation, but are in some cases perfect for outdoor culture, tolerating a winter minimum of 14°F (-10°C). Once they produce their hibernacula and the carnivorous leaves die back, cooler temperatures are imperative. As with hardy sundews, temperate pinguiculas are naturally small in stature, and can be used as companion plants in a bog garden. The European species are ideal for this, especially native plants. An alternative is to plant them in containers, which allow closer inspection. This means plants can be moved easily for maximum appreciation when they flower, and such a pot of specimens makes a beautiful centerpiece for a garden table.
Most temperate species of _Pinguicula_ prefer a degree of shading. In fact, direct sun can rapidly scorch their delicate leaves, so take care to prevent this. A planting mix of equal parts peat moss and horticultural sand is ideal, or even pure peat moss, keeping them wet in the growing season and, if under cover, damp over the winter. Plants outside will remain wet year-round. Do watch out for birds, which certainly in my locale have a penchant for occasionally picking small plants from pots—no doubt while digging around for suitable food.
PROPAGATION
Seed is a good option with the Mexican species of _Pinguicula_ , although plants are not generally self-fertile, so manual pollination is necessary. Seeds are shed toward the end of the dry season. Simply sow them in a tray of the same potting mix as the adult plants and stand in water. A daily spray will help promote germination. Perhaps surprisingly, a number of people have reported good success germinating seeds by floating them on the surface of a dish of water, pricking them out once they have sprouted. The easiest method of propagation for these species can be performed during the winter, when plants are in possession of their tight winter rosettes. Using a pair of tweezers, start at the edge of the rosette and gently pull away the small succulent leaves, taking care to not bruise them. Exercise some caution when handling, but you can be ruthless in the number you take—it's safe to remove at least a third. Gently push the leaf pullings into the surface of a tray of perlite. Stand in water until they root and produce small plantlets, generally at around four weeks.
For temperate pinguiculas, seed is a straightforward method of propagation. They are often self-fertile, but don't appear to self-pollinate, and outside plants are usually visited by pollinating insects. Manual pollination is a simple process of inserting a small paintbrush into the throat of the flower, then withdrawing while pushing slightly upward before moving on to the next flower. Seed is shed in the autumn and can simply be surface sown immediately and kept cold in an unheated greenhouse or similar. Alternatively, seed may be stored in the refrigerator over the winter and sown come spring. Germination takes around four to six weeks and adult plants develop in two to three years.
Many of the temperate European species also form so-called daughter hibernacula over the winter. Surrounding the main resting bud, a number of much smaller buds often develop. These can be removed with a pair of tweezers—carefully so as not to damage them—and placed on the surface of the potting mix away from the mother plant, where they will develop in the spring. Any buds left on the plant will be smothered when the plant resumes its growth.
Leaf pullings can be taken from the winter rosettes of the Mexican species.
Mexican species that have obviously divided can be split in the spring.
Finally, you can split the Mexican species which have divided. This can be done in the spring or while the plant is dormant. It is simply a case of removing the plant from its container and gently pulling the individual rosettes apart.
Flower of _Pinguicula ehlersae._
_Pinguicula esseriana_.
_Pinguicula lauana_.
Plant Suggestions
MEXICAN SPECIES
_Pinguicula ehlersae_
_Pinguicula ehlersae_ is one of a few closely related and similar-looking species. All are small, and perfect for displaying together in a bowl or similar container, where they flower en masse during the winter. The rosettes are small, to 1½ in. (4 cm) across, and apple green, blushing a delicate pink in good light. The flowers are stunning, an intense purple with an almost iridescent quality, 1 in. (2½ cm) in length.
_Pinguicula esseriana_
Exquisite _Pinguicula esseriana_ is one of those closely allied to _Pinguicula ehlersae_ , sharing many similarities in size and appearance, though the rosettes do not blush pink. Flowers are pale pink with a white throat that is highlighted by a dark purple stigma within. Another great plant to grow as a colony in a small bowl, where the delicate flowers will brighten dull winter days.
_Pinguicula lauana_
Without doubt, _Pinguicula lauana_ is one of the finest species in the genus—and one of only a couple to bear red flowers. The rosettes can be large, to 3 in. (7½ cm) in diameter, and are bright green, often with a beautiful bronze tint in good light. The long-lasting flowers are spectacular, variable in colour and shape and 1½ in. (3½ cm) long. Some are a deep red-burgundy, others a more pure bright red, but all with some of the finest colouration you will find in this fascinating genus.
_Pinguicula moranensis_ ( _white-flowered form_ ).
_Pinguicula_ 'Tina' in a shallow contemporary tray, ideal for a bright windowsill out of direct sun.
_Pinguicula moranensis_
A variable species, _Pinguicula moranensis_ is one of the first Mexican plants to have entered cultivation. Early books split this species into two: _Pinguicula caudata_ and _P. mexicana_ , but these now-defunct names refer to the same variable plant. It can be large, with huge apple-green rosettes up to 6 in. (15 cm) in diameter, though usually smaller. The flowers are a deep pink colour, with the generally standard white throat. A particularly attractive white-flowered form is also in cultivation and worthy of a place in anyone's home.
Mexican _Pinguicula_ hybrids
As with many genera, _Pinguicula_ demonstrates hybrid vigour, and in recent years a handful of worthy plants have been named. More important in many respects is that a couple of them have entered the mass production market and are propagated en masse in tissue culture. I say this, as it demonstrates the potential of these particular plants to become houseplants—more so than any other genus of carnivorous plant. Two such plants are _Pinguicula_ 'Tina', a grand plant with purple flowers streaked with white and large rosettes that freely divide to produce a wonderful clump, and _Pinguicula_ 'Weser', a smaller plant to 3 in. (7½ cm) in diameter, with stunning purple-pink flowers to 1 in. (2½ cm) across.
_Pinguicula grandiflora_.
_Pinguicula vulgaris_.
TEMPERATE SPECIES
_Pinguicula grandiflora_
A native species found across Europe, _Pinguicula grandiflora_ is one of the showier of the easier-to-grow temperate plants. It forms attractive, slightly lax yellow-green rosettes to around 3 in. (7½ cm) in diameter. In early summer it produces beautiful, blue-purple flowers to ¾ in. (1½ cm) across, each with a white throat touched with fine purple veins that contrast well with the darker purple.
_Pinguicula vulgaris_
Another native plant, _Pinguicula vulgaris_ is known to carpet the wet seeps of mountainous regions with small violet flowers, the blooms held above the delicate, yellow-green, starfish-like rosettes. The flowers are not as large as those of _Pinguicula grandiflora_ , being slightly closed in comparison, but of a similar colour and with varying degrees of white within the throat. Their vigorous nature makes these plants ideal for the first-timer, and the ease with which they are grown makes them perfect for the outdoor bog garden.
_Sarracenia_
**NORTH AMERICAN PITCHER PLANT**
These are the well-known pitcher plants which, certainly in temperate cultivation, are the backbone of any collection. Big, bold, and absolutely essential, these stunning plants give stature and structure when planted into any kind of display. They are also perfect as individual specimen plants, where their singular beauty can be admired. If I had to recommend a single plant to you, it would be one of these.
Found in the southeastern United States, the range of _Sarracenia_ extends southward into Florida, west into Texas, and one species, _S. purpurea_ , reaches northward into much of Canada.
The genus _Sarracenia_ is a member of the family Sarraceniaceae, as are _Darlingtonia_ and _Heliamphora_. Found in low-lying boggy areas and coastal plains, the range of _Sarracenia_ overlaps with several other carnivorous genera ( _Dionaea, Drosera, Pinguicula_ , and _Utricularia_ ).
Plants growing in open areas and hence subject to the strongest sun can produce the most incredibly coloured pitchers. Those inhabiting more shaded areas, where the surrounding vegetation has encroached on the habitat, are duller in colour and will remain that way until a wildfire burns through, removing surrounding scrub and opening the area up again.
A dissected _Sarracenia flava_ flower. The receptive stigma points are at each of the five tips of the removed umbrella structure on the right flower. They are found just at the base of the notches and protrude inward. Pollen collects on the floor of the umbrella from the stamens you can see at the top of the flower.
There are eight species of _Sarracenia_ , six of which have leaves of a similar design: tubular and upright. Leaves of _S. purpurea_ and _S. psittacina_ are short, open, and squat, and in _S. psittacina_ , they produce a flat rosette, pressed to the ground. They are all herbaceous perennials, growing from a stout rhizome which, as with an iris, divides and branches, forming in time a large and impressive individual clump.
The new year begins in most cases with the production of flowers. These never fail to impress those who see them for the first time, as their unusual structure is unlike any other flower.
_Sarracenia leucophylla_ growing wild in Alabama.
Species are either red or yellow (rarely producing pink or white flowers), but the hybrids, of which there are many, can produce red, pink, yellow, and even orange flowers. They can be large, to over 3 in. (7½ cm) across, and held singly on sturdy stems up to 2½ ft. (75 cm) in height. Each mature growth point is capable of producing a flower, and a large specimen in flower will brighten any dull spring day.
Flowers usually develop before any pitchers are produced (with the exception of _Sarracenia minor_ and _S. oreophila_ , which leaf first). Once the buds reach their optimum height, they tip over 180 degrees, hanging downward, and open—unfurling and releasing the five petals.
The flowers are self-fertile, but require manual pollination, though the best seed will be produced by crossing flowers of genetically different plants. This is a simple process, as the pollen is released and collected in the umbrella-shaped structure at the base of the flower. This can be collected with the small paintbrush mentioned earlier, then transferred to the five stigma points on the inside of each tip of the same umbrella, or on that of another flower.
Once pollination has occurred, the petals wither and fall, leaving the remainder of the flower. It gradually reverts to its upright position by the time the seeds ripen in the autumn, coinciding with the autumnal die back. Occasionally a plant may produce a flower in the autumn at the end of the season. These are always held low to the base of the plant, seldom on stems more than 3 in. (7½ cm) in height. They die back prior to seed production and can be removed along with the leaves during the annual task of cutting back dead growth.
A mature specimen of _Sarracenia flava_ var. _ornata_.
The ostentatious bloom of _Sarracenia_ 'Joyce Cooper'.
Cut _Sarracenia_ flowers last well in a vase. Even once the petals have been shed, the flower structure remains green for a considerable time. As part of a mixed display with other flowers, or on their own, their unique shape and form make them exceedingly eye-catching. This is also the case with the leaves, and the taller upright specimens make unusual subjects for display, either alone in a mixed vase, or, in the case of elegant species such as _S. leucophylla_ , as part of a contemporary piece. Ensure the cut stem is fresh when you insert it in water; if it's been left in the air for more than a couple of minutes, recut it just above the initial wound.
_Sarracenia_ flower colours complement each other.
_Sarracenia_ pitchers are produced starting in early spring, the exact time being dictated by geographical location and ambient temperature. Pitchers vary in size and structure, but the upright species can be anything from 6 in. (15 cm) to 39 in. (100 cm) in height. They are typically made up of a leaf rolled to form a tube, with a strengthening ridge along the length of the front, and topped by a lid, which in most cases fully covers the open mouth when viewed from above.
This lid is often brightly coloured and is also covered in glands, which release a nectar with a sweet smell and taste. In some cases, the nectar actually contains the neurotoxin coniine, which paralyzes unsuspecting prey. The lid (which incidentally does not move or snap shut in any way) acts as a convenient landing platform for flying insects, who follow the trail of the nectar to the underside and throat area, where the alluring substance is most abundant. This area and the rolled lip of the mouth have a waxy feel, with surfaces that insects cannot grip onto. With careful observation, one can watch them slipping, even as they continue to feed.
Eventually, the insects lose their footing and slip into the tube, which narrows rapidly, and is also furnished with many downward-pointing hairs: a one-way system to prevent creatures from crawling out. Once inside, the insects die and are digested by the enzymes released into the pitcher by the plant. The lid is fashioned to channel any water from above down the rear of the pitcher body, away from the mouth, ensuring that the contents are not diluted by excessive rainwater.
In autumn, the leaves die back, and with two of the species habitually producing non-carnivorous winter leaves, the accumulated nutrients of summer hunting are stored in the rhizomes to fuel the following season's growth.
As well as the individual species, there are many hybrids, both man-made and found naturally where the ranges of individual species overlap. Sarracenias are unusual in that when crossed, the resultant offspring are approximately midway between the parent plants. The issue of hybridization is further complicated by the fact that the hybrids retain their fertility, so that complex and back crosses can be made, some with spectacular results, others resulting in less desirable plants.
Any autumn flowers are held low to the base of the plant and do not produce seed.
It can be interesting to create one's own hybrids. When breeding from parents of hybrid origin, it is often possible to grow plants of completely different appearances from the same cross, as genetic characteristics from their lineage make an appearance.
CULTIVATION
Despite many sarracenias being found in what is regarded as the warm southeastern United States, they are remarkably tough—the more northerly representatives even more so. Temperatures across their range vary, increasing as one moves southward, but many plants still see winter freezes, and all seem to be remarkably tolerant of low temperatures in cultivation.
_Sarracenia_ flowers are well suited to displays.
The leaves can also be cut, some lasting longer than others.
A contemporary display using _Sarracenia leucophylla, Fatsia_ , and _Phormium_ leaves.
I recommend Zone 7, with hardiness to 5°F (-15°C). Here in England, I have grown all the species outside, the only exception being _Sarracenia psittacina_ , which is best kept under cover—though that said, I have still kept _S. psittacina_ below 14°F (-10°C). What's more, they are equally adapted to the other end of the temperature spectrum. This wide tolerance range makes _Sarracenia_ pitchers ideal for so many locations: outside in bog gardens or as pond marginals; in greenhouses, conservatories, sun rooms, and sunny windows; anywhere, in short, that receives full sun in the summer.
The key time is winter, when the plants have to be allowed their cold dormancy. If there is any heat in the place where you are keeping them, place them outside or in an unheated greenhouse. The optimum time for this treatment is between Halloween and Valentine's Day. After Valentine's Day, return the plants to their summer quarters if you've moved them. The rise in temperature along with the increasing day length will kick-start them into growth, and you will soon see flower stems appear.
While _Sarracenia_ blooms are spectacular, a word of warning. The most frequently encountered and grown species, _S. flava_ , produces the largest flowers in the genus, sulphur yellow and held on tall, erect stems. These are accompanied, however, by a strong and somewhat controversial odour. I must admit that after many years of growing these remarkable plants, my nose smells an acidic lemon perfume, one that reminds me of warm spring days. Others are a little less kind, and the flowers are frequently described as having a feline smell. Judge them for yourself. You may decide to keep them exclusively outside, where they can be admired from afar. The other species are more benign, with faint sweet odours in some, and nothing detectable in others. Once the petals drop, any smell ceases and the plants continue with leaf production.
A fly dances with danger in the throat of _Sarracenia flava_ var. _flava_.
Pitchers offer a kaleidoscope of colour in the summer.
Watching a tiny shoot develop into a tall tube which then splits and flares open is fascinating, and a true wonder of the natural world. In some species, such as _Sarracenia oreophila_ , virtually all the pitchers are produced in the spring, but in others (most notably the exquisite _S. leucophylla_ ) there are two distinct growth phases: spring leaves which are thin and wiry, and the summer pitchers which are taller, stockier, and usually more colourful.
As the days shorten, the pitchers begin to die back. With _Sarracenia oreophila_ this can be as early as late summer, especially if it's exceptionally hot, whereas others such as _S. minor_ can retain their leaves well into winter. When they do begin to die back, the first indication is brown patches on the sides of the leaves, where the insects within are broken down. Don't be alarmed at this; it is quite natural and is followed by a browning of the lid which then extends down along the length of the leaf.
Don't panic! Brown patches that develop on the pitcher walls in summer are perfectly normal.
Dead leaf bases from the previous year can be removed along with the current year's leaf dieback.
By early autumn, many of the pitchers are dying back.
If you have many plants, it's often easier to remove leaves near their bases.
At this juncture there is a call to action—the commencement of the most important task of the year. For most of the upright species and hybrids, this means simply waiting until the leaves have died three-quarters along their length from the top downward, then removing the dead section with a pair of sharp pruning shears or scissors. Leave a couple of inches of green base on the plant. Of the upright species, _Sarracenia oreophila_ differs in that its leaves die right back to the base. Simply wait until they are completely brown, and pull them away.
If you prefer, you can remove the leaves a little at a time as they brown, though this is not essential and may not be practical if you have a large number of plants. Once the current year's dead sections are removed, you are left with the green leaf bases from those just cut, and a number of dead bases from the previous year's surgery. These dead bases need to be removed, as they are now likely to attract grey mould. Holding the base of the plant, pull the dead bases out. You will notice that a dead base detaches from the rhizome cleanly, along with the flared joint where it was attached. Not only is this good for preventing disease, it also encourages the rhizome to produce offshoots from a dormant lateral bud, which will often begin to grow once exposed.
The pitcher of _Sarracenia oreophila_ dies back, and the whole leaf can be pulled out carefully from the base, leaving the sickle-shaped winter leaves.
Winter leaves of _Sarracenia flava_ are sword-shaped and can be left on the plant.
_Sarracenia purpurea_ leaves can be left until they die back completely, usually in the spring.
Species such as _Sarracenia leucophylla_ and _S. rubra_ and their subspecies and varieties will lose their spring leaves in the autumn, and their summer leaves will remain green and on the plant well into the winter. Just remove the dying leaves, leaving those still in good condition to enjoy for as long as possible.
_Sarracenia flava_ and _S. oreophila_ will produce winter leaves, or phyllodes. These appear at the end of the growth cycle and are non-carnivorous. In _S. flava_ , they are sword-shaped and upright; in _S. oreophila_ they are sickle-shaped and recumbent, arching over with their tips often touching the soil. These structures can be left on the plant. With _S. flava_ , you can cut the leaves down as described, leaving the phyllodes. If they are removed inadvertently, however, there is no detriment to the plant.
Some species retain their leaves until the following spring, and the best time to remove them is at this time as they die back. _Sarracenia minor_ can simply be cut back as the new leaves emerge, though this is often while the previous season's leaves are still very much green. _Sarracenia purpurea_ and _S. psittacina_ are best left until their leaves have died back completely. Then, carefully holding the base of the plant so as to avoid breaking the fragile rhizome, gently pull the leaf away.
The many hybrids will follow one of the patterns above, a characteristic they inherit from one or other of their parents. Just keep an eye on the plant and observe how it behaves in its first season.
A large sarracenia ready for dividing.
Remove the plant and wash the potting mix away to expose its structure.
Individual growth points can be severed as shown.
Individual rhizome cuttings and the remainder of the rhizome.
The rhizome can be cut into lengths to be used as cuttings, discarding the oldest brown portion.
After cutting back dead leaves on those plants that require it, allow the plants to rest somewhere cold. If they are under cover, keep them damp rather than wet, and if they are in an entirely unventilated environment, be sure you have removed all dead material. Do keep an occasional eye on plants, as the effects of drying out can be surprisingly sudden and go unnoticed during the colder months.
Plants that live their lives permanently out of doors will have a somewhat shorter growing season than those afforded the protection of a greenhouse, even an unheated one. The subtle differences in bloom times in a greenhouse become much more exaggerated outside, where temperature swings are wider. In slightly warmer environments, and certainly in the native habitat of _Sarracenia_ , spring will be heralded by the return of warm weather, enabling plants to commence growth.
Whatever container you choose, you will need to plant _Sarracenia_ specimens in an equal mix of peat moss and perlite. If you are growing the plant as a pond marginal, a covering of washed, coarse gravel will prevent the perlite from being flushed out of the potting mix and across the pond's surface.
The potted division. Note the depth of the rhizome.
When potting, note the position of the cutting, with the growth point facing toward the middle of the pot.
Two _Sarracenia_ seed capsules, the one on the right dissected to show the seeds within.
PROPAGATION
There are two good methods of propagating _Sarracenia_ —one fast (by division), and one somewhat slower (by seed).
Being rhizomatous, sarracenias will begin to divide once mature and of a good size. This manner of propagation results in adult plants straight away, and spring is the time to repot and divide your plants. It's ideal to get this done before emerging flowers are 1 in. (2½ cm) in height, because they are brittle when first developing and vulnerable to breaking off during the procedure.
Remove the plant from its pot by turning the container upside down and squeezing its sides to loosen the plant, then gently shaking it free. It should come out as a single pot-shaped root ball. Larger plants will require some brute force, and those which have altered the shape of their quarters may require the pot to be cut off. Be careful doing this; pots tend to yield suddenly, and it's easy to run the knife through your hand as well as the plastic.
Once the plant has been released, loosen the root ball and shake the potting mix free. You may wish to wash off the rest in a bucket, especially if this is the first time you have divided a plant, as it will help you see the plant's structure more easily.
With the plant exposed, evaluate exactly what to split off. You will notice individual growth points on the ends of old rhizomes. These rhizome sections can be 3 to 4 in. (7½ to 10 cm) in length, depending on the time the plant has been in the pot. If a growth point is young, it may have little or no rhizome behind it, but it can still be detached as long as there are a few roots attached.
Using a sharp pair of pruning shears, cut off the rhizome sections, allowing if possible at least 1 in. (2½ cm) of rhizome with each growth point. Once all the growth points are removed, you will be left with a mass of old rhizome sections. Bear in mind that as one end of the rhizome grows and develops, the other end is dying back, so start at the rear, cutting through incrementally until you hit live, white tissue.
These live sections can then be cut into lengths of at least 1 in. (2½ cm) and treated as cuttings; they will invariably sprout new growth in their first year. Just remove the roots to 1 in. (2½ cm) and insert in a seed tray of standard potting mix for _Sarracenia_ , or keep the roots and plant the cuttings in pots. Keep the cuttings away from fierce heat until they have sprouted.
Year-old _Sarracenia_ plant seedlings.
If you are repotting a plant that has outgrown its container and are simply dividing it, follow the same method as just stated, though you will probably find it easier to physically break it apart to start. This may require some considerable effort on your part; a sharp serrated knife or pruning shears can help. Once divided, remove the old, dead back sections of the rhizome, and while it is exposed, remove any leaf bases.
The live divisions—that is, those with a live growth point and split plants—can be potted up into containers. Remember that these can be tall plants, and the roots act as an important anchor to secure them in the ground. Place a 1-in. (2½-cm) layer of potting mix in the base of the container. Now hold the plant in the pot, allowing the roots to touch the bottom if they are long enough, but keeping the rhizome at ground level. This is important; they don't like to be buried—with the exception of _Sarracenia minor_ and _S. oreophila_ , both of which should be buried to around 1 in. (2½ cm) under the soil surface if you are repotting plants with growth points intact, or on the soil surface if you have rhizome cuttings. If in doubt, just position them at the level they were previously. As they continue to grow they will find their own level.
At this point, it's helpful to consider the direction in which the plant will grow. Remember, the rhizome is a horizontal stem that travels along the ground in the direction of the growth point. When you position your plant, always ensure that you have the rhizome end toward one edge of the pot, but not touching it, to allow the plant room to grow and develop, and also to ensure it can remain in its pot for as long as possible.
Fill the container halfway with potting mix, giving the pot a tap to shake the mix through the roots. Then use the fingers of your free hand as well to push the mix down and through the roots. Continue this process until the pot is full and the potting mix is firmed down, without using excessive force.
The other method of propagation is via seed. This method can produce a large number of plants, can be fun (especially if you are creating hybrids), and is easy. But you must be prepared for a long process and therefore patience is required. Collect ripe seed in the autumn when the swollen seed capsules are dry and often split open. Either surface sow them immediately and keep in a cold greenhouse or similar over winter, or store them dry in the fridge until March. Sow onto the same potting mix as the adult plants and germination will occur in the spring: two small seed leaves first, then tiny hollow pitchers to about ½ in. (1 cm) in height.
As seedlings, all sarracenias look the same, so don't be concerned if they bear little resemblance to the adult plants you were expecting to see. Remember, they increase in size in stages each year, becoming progressively larger until they reach maturity. For a species this is five to eight years, three to five for a hybrid. Leave the seedlings for two to three years in the seed tray before pricking them out and into small pots no larger than 3 in. (7½ cm) in diameter. Potting them into larger pots will not speed up the process. Treat seedlings in the same way as adult plants.
_Sarracenia alata_ flowers.
_Sarracenia alata_ var. _alata_ , the typical form of the species.
Plant Suggestions
_Sarracenia alata_
**PALE PITCHER**
_Sarracenia alata_ is one of a couple of plants which in my opinion are underrated. A prolific devourer of wasps, it is a variable species which has recently been reclassified into six distinct varieties (based on stable colour variation) and a form, represented by an all-green, anthocyanin-free plant, _Sarracenia alata_ f. _viridescens_. The different varieties work well together with their contrasting colours, and the finer forms of some of the varieties work to highlight and frame larger species in a display. These narrow, elegant pitchers are typically up to 24 in. (60 cm) in height, occasionally reaching as high as 39 in. (100 cm), though these are the exceptions. The beautiful, pale yellow flowers, produced before the pitchers, are approximately half the height and 3 in. (7½ cm) across. Petals occasionally wear a very slight brush of red.
_Sarracenia alata_ var. _alata_ , the typical form, boasts beautiful apple-green leaves, touched with fine purple filigree veins. Pubescent forms which wear a coat of fine velvety hair are common, and add to the interest.
_Sarracenia alata_ var. _atrorubra_.
_Sarracenia alata_ var. _atrorubra_ is the all-red variety, and requires strong light to colour well. A stunning individual; its pitchers are gradually suffused with a solid crimson colour over the season.
_Sarracenia alata_ var. _cuprea_ is at first glance the same as _S. alata_ var. _alata_ , but differs in that it produces a copper-coloured lid, sometimes accented with pink.
_Sarracenia alata_ var. _nigropurpurea_ , the so-called black-tubed _S. alata_ , requires good light levels to colour well; the exterior of the pitcher turns a dark purple-black on the upper half. The interior of the mouth is often darker than the rest of the leaf. Some forms can be exceptionally tall; others bear a pubescence.
_Sarracenia alata_ var. _cuprea_.
_Sarracenia alata_ var. _ornata_.
_Sarracenia alata_ var. _rubrioperculata_.
_Sarracenia alata_ var. _nigropurpurea_.
_Sarracenia alata_ var. _ornata_ is the elegant, heavily veined form, bearing a covering of fine reticulate veins over the upper half of the pitcher and inside the mouth. Its background is bright green, which contrasts and lifts the veining perfectly. Although it often has the red lid of _S. alata_ var. _rubrioperculata_ , the intense veining confirms its identity.
_Sarracenia alata_ var. _rubrioperculata_ has a wide range of veining variability, but is not as heavily veined as _S. alata_ var. _ornata_ , and is set apart by the presence of a red colouration on the underside of the lid. In many, this is restricted to the lid and ends abruptly; in others, the colour extends into the pitcher throat.
_Sarracenia flava_
**YELLOW TRUMPET**
If I had to recommend one carnivorous plant, it would undoubtedly be _Sarracenia flava_. Big, brash, confident, and easy to please, this stalwart should be your first port of call (or second if you've killed your Venus flytrap and don't want a _Drosera capensis_ ). In a display of any kind—pond, bog garden, greenhouse, or conservatory—this plant will give you both structure and bulk. As with _Sarracenia alata_ it is divided, this time into seven distinct varieties governed by colour and patterning. There is also an all-green, anthocyanin-free form, _S. flava_ f. _viridescens_. The flowers are the largest in the genus, sulphur yellow and up to 4 in. (10 cm) across, with long petals of similar length. They sit atop sturdy stems which can be anything up to 39 in. (100 cm) in height, towering above some other species. Pitcher height varies greatly. One of the chief attributes of this species is its vigour, with most varieties forming a dense clump over several years. A typical plant will divide roughly once per year, with each growth point capable of producing a flower and four to six pitchers. Dividing every three to four years will maintain this vigour. The red-tubed forms seem to divide more slowly and produce a sparser, more open plant. Prior to the autumn dieback of the pitchers, a handful of sword-shaped winter leaves usually appear, which can be up to 8 in. (20 cm) in height. These are non-carnivorous leaves and remain on the plant until spring.
The glorious, nodding flowers of _Sarracenia flava_ herald the start of spring.
_Sarracenia flava_ var. _flava_ is the typical form of the species, with light green pitchers, often fluted, which fade to yellow over the season. The name _flava_ means yellow. Often this colour change is dramatic and plants can be a lemon yellow by the end of their growing season. The upper section of the pitcher bears a small number of delicate red veins. Plants vary in height, typically to around 24 in. (60 cm), but up to 39 in. (100 cm). A cultivar of _Sarracenia flava_ var. _flava_ , 'Maxima' is a glorious monster with correspondingly tall flowers preceding the leaves. It carries the standard vein patterning, and was named for its vigour and size. A note of caution: don't confuse this cultivar with _S. flava_ var. _maxima_ , which has no leaf veining.
_Sarracenia flava_ var. _flava_.
_Sarracenia flava_ var. _atropurpurea_.
_Sarracenia flava_ var. _flava_ 'Maxima'.
_Sarracenia flava_ var. _maxima_.
_Sarracenia flava_ var. _cuprea_.
_Sarracenia flava_ var. _atropurpurea_ is the rarest of the varieties, and perhaps the most intensely beautiful. Its leaves develop a solid background of plum red, which in good light covers both the exterior of the leaf, including the lid, and the interior. Light is the key to achieving full colouration; lower light levels will produce little more than a reddish blush. Plants can be large, up to 32 in. (80 cm) in height.
_Sarracenia flava_ var. _cuprea_ is usually smaller in stature, up to a maximum of 24 in. (60 cm). A distinct copper colour develops over the upper surface of the lid, sometimes looking almost brown. This colour is strongest from spring into summer, often fading prior to autumn dieback. It is a variable plant, with veining being slight to heavy. I have one form which is entirely veinless, with lime green leaves and a copper top—most unusual.
_Sarracenia flava_ var. _maxima_ is often tall and elegant, occasionally up to 30 in. (75 cm), with fluted leaves flaring at the mouth, though it can also be shorter and stockier. This veinless, all-green variety creates a beautiful contrast when grown in conjunction with the red-tubed forms. Note: do not confuse this with the rare anthocyanin-free clones of the same name, nor with the similarly named cultivar _S. flava_ var. _flava_ 'Maxima'.
_Sarracenia flava_ var. _ornata_.
_Sarracenia flava_ var. _ornata_ gets its ornate reference from the heavy veining across the upper third of the leaf and in the throat. This can be extreme, with the interior of the throat sometimes developing a solid red or purple suffusion. Similar in height and stature to _S. flava_ var. _cuprea_ , this variety possesses a variability that sets it apart from others. This is one of my personal favourites.
_Sarracenia flava_ var. _rubricorpora_ is another red-tubed plant ( _rubricorpora_ means "red body"). It develops a solid plum-red colour in good light, though its lid and interior remain green and veined—a startling contrast. Though this variety typically grows to around 24 in. (60 cm), larger forms may reach over 32 in. (80 cm). Sometimes not dividing as rapidly as other varieties, it can be rather sparse, resulting in an open plant that showcases individual leaves. In displays, highlight this feature by positioning it in front of denser green plants.
_Sarracenia flava_ var. _rubricorpora_.
_Sarracenia flava_ var. _rugelii_.
_Sarracenia flava_ var. _rugelii_ is a variable plant in size and stature; either short and stocky or taller, elegant, and fluted. At its upper height, as much as 39 in. (100 cm), it is stunning, with a graceful air that some of the other giants lack. Pitchers are veinless, boasting a red splotch within the throat. This colouration is also variable: in some forms it is large and stretches across the throat, in others it is reduced to a thin vertical sliver and may be vibrant red or a dark purple-red. I have one clone which is almost black.
The most elegant and beautiful species in the genus, _Sarracenia leucophylla_.
The rare _Sarracenia leucophylla_ var. _alba_ is still variable and bears veins on the exterior, but produces leaves devoid of any veining in the interior of the mouth.
_Sarracenia leucophylla_
**WHITE TRUMPET**
Although I sing the praises of all sarracenias, there are one or two species that stand out. In terms of beauty, the remarkable _Sarracenia leucophylla_ wins hands down. As both the common and Latin names suggest, it boasts characteristic white-topped leaves, a rare attribute. It also features an all-green, anthocyanin-free form, _Sarracenia leucophylla_ f. _viridescens_.
This is a variable species, with individuals of many sizes and patterns. The confusing array is further complicated by the presence of hybrid plants where the natural range overlaps with other species. Complex back crosses can occur, sometimes making exact identification difficult.
Tall red flowers precede _Sarracenia leucophylla_ pitchers in the spring.
In its pure form, plants can be large, to over 39 in. (100 cm) tall—bright green breaking into pure white, overlaid with a lace of either green or red veins. In some extreme cases, the white top can be pure and unbroken, the pitchers appearing to glow in the sun. Big red flowers are produced in the spring atop tall stems similar to the leaves in height; multi-crowned plants are an impressive sight.
_Sarracenia leucophylla_ produces an open clump, the rhizomes dividing only occasionally, so a large specimen may consist of only six or seven growth points. It also has a distinct growth pattern: the first pitchers produced after flowering are thin and wiry, often not quite to their full height. These are the spring leaves; once developed, their growth ceases until midsummer, when a second crop is produced.
Summer leaves (right) of _Sarracenia leucophylla_ are truly spectacular, towering over now-shabby spring pitchers (left) and often twice the width. Their vivid colour is a stark contrast to other tall-growing species.
_Sarracenia minor_ var. _minor_ , the typical form.
_Sarracenia minor_
**HOODED PITCHER**
_Sarracenia minor_ is immediately distinguishable by its unique "hooded" lid which usually overhangs the mouth. It is also furnished with what at first appear to be white spots on the rear of the leaf, but are in fact translucent windows—no doubt to confuse attracted prey. Insects fly toward the windows thinking they are escape routes, but instead hit the rear of the tube, tumbling down into the trap.
_Sarracenia minor_ var. _okefenokeensis_.
A comparison of the two varieties: _Sarracenia minor_ var. _okefenokeensis_ on the left, _S. minor_ var. _minor_ on the right.
The plant displays a little variation in colour, with some forms flushing red, occasionally with an orange tinge in the autumn. There is also an anthocyanin-free form: _Sarracenia minor_ var. _minor_ f. _viridescens_. Depending on your locality, this species may be best under cover for the winter in an unheated greenhouse or conservatory.
Plants produce a dense clump over time, and unlike most other species, flower after the production of leaves. The small flowers are a beautiful green-yellow, to only 2 in. (5 cm) across, held below the now-open mouths of the pitchers.
_Sarracenia minor_ var. _minor_ is the typical form, a beautiful, densely clumping plant with stocky leaves to around 12 in. (30 cm) in height. Being a shorter-growing species, this is ideal for the front of a mixed display or in a situation where space is limited.
_Sarracenia minor_ var. _okefenokeensis_ bears a name that refers to its native range within the great Okefenokee Swamp in the United States. It is similar in all respects to _S. minor_ var. _minor_ , but its pitchers can reach over 2½ ft. (75 cm). This is a variety that likes to bury its rhizome to aid in the support of its leaves, which can be another 6 to 12 in. (15 to 30 cm) below the surface. Adding this section of the pitcher increases the height further. This plant can become vast; retain control by dividing every three to four years.
_Sarracenia oreophila_ var. _oreophila_.
_Sarracenia oreophila_
**GREEN TRUMPET**
_Sarracenia oreophila_ has the distinction of being the rarest species of _Sarracenia_ in its native states of Alabama, Georgia, and North Carolina, and is now considered to be critically endangered. As with _S. minor_, it has a slightly different growth cycle than other sarracenias: leaves are produced in the spring, prior to the flowers. Blooms are a green-yellow colour, to 2½ in. (6½ cm) across, with short petals held on stems which are always taller than the open pitchers. Pitchers themselves are typically to around 15 in. (38 cm) in height. There are two stable varieties designated according to leaf pattern.
_Sarracenia oreophila_ var. _oreophila_ is the typical form of the species, with bright green leaves and a few red to purple veins. Its leaves are often elegant and either stocky or occasionally fluted. Leaves die back in mid- to late summer, earlier if the weather is exceptionally hot, giving it the shortest season of the pitcher plants. Dieback can be rapid and leaves brown to their bases; simply pull them away. Before dieback, a few sickle-shaped winter leaves are produced which sit close to the ground and remain on the plant until the following spring.
_Sarracenia oreophila_ var. _ornata_.
_Sarracenia oreophila_ var. _ornata_ is a beautiful and moderately rare plant, named for the ornate appearance of the heavily veined leaves. These scarce individuals can be very striking—their colouration immediately sets them apart, making them highly sought after by collectors.
_Sarracenia psittacina_
**PARROT PITCHER**
_Sarracenia psittacina_ is in many respects the odd man out of the genus, with a form unlike any of the other species. Ground-hugging rather than upright, its leaves are held flat to the soil. The structure of the leaves also differs: while characteristically tubular, they are lined internally with long, very pronounced interlocking hairs, forming a one-way route for prey that prevents escape.
The small flower of _Sarracenia psittacina_.
The entrance to the lobster pot–like trap of _Sarracenia psittacina_.
The top of the leaf has an inflated dome, with a mouth at its base facing into the centre of the rosette. It is similar in structure to a lobster pot, and works in a similar way—crawling insects enter the dome and are guided to the tube, where they are digested in the same manner as other species of _Sarracenia_.
Because of its unique form and often intense colouration, _Sarracenia psittacina_ is essential as the frontman in a mixed display. It looks good year-round as it retains its leaves through the winter. Dieback doesn't occur until spring, as new growth emerges. Flowers are small, barely 1 in. (2½ cm) across, but are a rich dark burgundy colour. It doesn't flower as freely as other species, so do not expect them every year.
This is a species I would not recommend keeping outside year-round. Rather, afford it a little protection over winter in a cold greenhouse or similar. It is divided into two varieties, each with an anthocyanin-free form.
_Sarracenia psittacina_ var. _okefenokeensis_.
_Sarracenia psittacina_ var. _psittacina_ , the typical variety, produces leaves to around 3 in. (7½ cm) in length, of variable colour, with individuals of pale to intense red, and even occasionally orange-flushed specimens. Over time it forms a dense clump.
_Sarracenia psittacina_ var. _okefenokeensis_ is similar to the typical form in all respects except size—leaves are up to 6 in. (15 cm) long, and domed hoods can be 2 in. (5 cm) across. Hailing from the Okefenokee Swamp in the United States, this plant imparts an alien-like quality when planted among mosses.
The flower of _Sarracenia purpurea_ subsp. _venosa_.
_Sarracenia purpurea_
**PURPLE PITCHER**
Another unique, low-growing species for the front of a display, but one which also makes an ideal stand-alone specimen. _Sarracenia purpurea_ is short and squat, and differs from most in the genus in that the pitcher's mouth is open, designed to collect rainwater under an often large and undulating frilled hood.
Plants are clump forming, with a large, mature plant being an impressive sight—especially in the spring, when they fire out typically red-petalled flowers that can be up to 3 in. (7½ cm) across. Leaves are typically 4 to 6 in. (10 to 15 cm) long and vary in shape. They are retained through the winter, with old growth dying off in the spring. Allow complete dieback, then pull out dead growth, using one hand to support the base of the plant to avoid snapping the delicate rhizomes.
_Sarracenia purpurea_ subsp. _purpurea_ is the northern subspecies, and has the accolade of being the hardiest of all sarracenias, with its natural range extending across much of Canada and naturalized populations in Ireland, England, and even Switzerland. The leaves are usually rather slender, and may be solid red, veined, blushed orange, or any combination of these. An anthocyanin-free plant, _S. purpurea_ subsp. _purpurea_ f. _heterophylla_ is lime green, a striking contrast.
Anthocyanin-free _Sarracenia purpurea_ subsp. _purpurea_ f. _heterophylla_ (left) and _Sarracenia purpurea_ subsp. _purpurea_ (right).
Anthocyanin-free _Sarracenia purpurea_ subsp. _venosa_ f. _pallidiflora_ (left) and _Sarracenia purpurea_ subsp. _venosa_ (right).
Anthocyanin-free _Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ f. _luteola_ (left) and _Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ (right).
Unlike the blooms of others in the species, _Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ has a pink flower.
_Sarracenia purpurea_ subsp. _venosa_ is the southern form, found from the state of New Jersey southward. It is a stockier plant in all respects, with a wider pitcher and a more open, flared mouth. Varying degrees of attractive veining are present, and can become a solid pink-red in good light. There is also an anthocyanin-free plant, _S. purpurea_ subsp. _venosa_ f. _pallidiflora_.
To many, the most beautiful expression of the species is pink-flowered _Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ , a stocky variant with a bulbous pitcher and a distinctive, thick lip across the mouth. An anthocyanin-free plant has delicately pale yellow-green leaves, and is blessed with perhaps the longest name of any carnivorous plant: _S. purpurea_ subsp. _venosa_ var. _burkii_ f. _luteola_. It's scarce, but worth growing for the name alone.
_Sarracenia rubra_
**SWEET TRUMPET**
I have a soft spot for _Sarracenia rubra_ because it is often overlooked due to its smaller stature and growth habit. To me (always on the side of the underdog) this is a shame, as it is a beautiful plant and deserves a second look. It is a favourite for a couple of reasons. First, it is the only species to produce more than one flower from each growth point, a characteristic which on a mature plant can result in a large and impressive number of blooms in the spring. Second, the flowers are among the smallest in the genus, dark red in colour and often only around 1 in. (2½ cm) across on dainty stems, their modest size affording them a delicate elegance against the more substantial flowers of larger species. They possess a charm and grace absent from their ostentatious cousins.
Samples of the five subspecies of _Sarracenia rubra_ (from left): _S. rubra_ subsp. _alabamensis, S. rubra_ subsp. _gulfensis, S. rubra_ subsp. _jonesii, S. rubra_ subsp. _rubra_ , and _S. rubra_ subsp. _wherryi_.
_Sarracenia rubra_ follows the usual pattern of flowering prior to leaves opening. There are two bursts of growth: slender and at times floppy leaves appear in the spring, and a second round of taller, stockier leaves are produced in the summer, lasting well into winter. There are five subspecies of _Sarracenia rubra_ , and plants grown or positioned together offer a wide selection of colours—a good display can be made just with representatives of this single species.
Together, _Sarracenia rubra_ subspecies add a splash of colour and vibrancy to an autumnal morning.
_Sarracenia rubra_ subsp. _rubra_ is the typical form, a slim and densely clump-forming plant, generally apple green with purple veining. Occasionally rather attractive red individuals are found. It is small in stature, the tallest plants generally under 12 in. (30 cm).
_Sarracenia rubra_ subsp. _alabamensis_ has the largest of the summer pitchers, the tallest up to 20 in. (50 cm). It is a vibrant green and often sports a good cover of reticulate purple veining in the throat, under the large lid.
The delicate flowers of _Sarracenia rubra_ , nodding in the greenhouse.
_Sarracenia rubra_ subsp. _gulfensis_.
_Sarracenia rubra_ subsp. _gulfensis_ is a more variable plant, producing tall leaves to around 20 in. (50 cm). The top section of the often olive-green pitcher bulges slightly, but is not as wide as _S. rubra_ subsp. _alabamensis_. Leaves are occasionally shaded an attractive red, with copper-coloured tops. There is also an anthocyanin-free form, _S. rubra_ subsp. _gulfensis_ f. _luteoviridis_ , which often bears white patches over the upper quarter of its leaves.
_Sarracenia rubra_ subsp. _jonesii_ is another tall, variable plant in the wild, but comparatively few clones of this scarce plant are in cultivation. It too bears a pitcher which slightly bulges at the top, and those likely to be encountered are often ornate with good veining. An anthocyanin-free plant is _S. rubra_ subsp. _jonesii_ f. _viridescens_.
_Sarracenia rubra_ subsp. _wherryi_ is my personal favourite of the subspecies. It is smaller in stature, the summer leaves generally only up to 9 in. (23 cm) high, and it often develops a wonderful, subtle colour which can include hints of copper, red, and pink. These pitchers are usually slightly pubescent, giving them a felty, tactile nature.
A mixed display of carnivorous plants by the author, with a variety of _Sarracenia_ species and hybrids.
_Sarracenia_ hybrids
This is where things get messy, and neatly compartmentalized species give way to a dazzling array of hybrid plants. As we've mentioned, a hybrid is a cross of two plants of different types, and in their simplest form they are known as primary hybrids—that is, a cross between two species. These occur in the wild where the ranges overlap, but can also be created in cultivation.
Hybrids can introduce an extra element of vibrancy and form to any display of sarracenias, and serve to lift the colours of the species. They also demonstrate what is known as hybrid vigour—the increased vitality that the offspring of two parents has when it comes to growth rates, size, and often longevity of leaf life.
_Sarracenia ×mitchelliana_ is a great plant to add intermediate height between the smaller species and taller specimens. It and the similar _S. ×catesbaei_ bring a different shape, as their lids (from the _S. purpurea_ parent) are upright and open. _Sarracenia ×mitchelliana_ is also serrated and often heavily veined from the _S. leucophylla_ influence.
_Sarracenia ×moorei_ 'Adrian Slack' is one of those highly sought-after plants, named after the man responsible for bringing carnivorous plants to popular cultivation in the 1970s and '80s. It is a beautiful plant with vivid red-pink colour in the mouth and a white lid, carefully drawn with heavy and deliberate veins.
_Sarracenia_ × _moorei_ 'Adrian Slack'.
_Sarracenia_ 'Joyce Cooper'.
_Sarracenia ×moorei_ 'Brook's Hybrid'.
_Sarracenia_ 'Constance Healy'.
_Sarracenia ×moorei_ 'Brook's Hybrid' is a plant which has been in general cultivation for a number of years. Essential for sheer size (reaching over 39 in. [100 cm] in height regularly), its colour is predominantly green, with a slight degree of white speckling on the lid and in the throat. It is lightly veined in red, and also has a remnant patch of solid red in the throat from its _S. flava_ var. _rugelii_ parent.
_Sarracenia_ 'Constance Healy' is one of two plants I named after my grandmothers, who did so much to encourage my appreciation of the botanical world. A complex cross between a red form of _S. ×catesbaei_ , and _S. ×moorei_ 'Brook's Hybrid', the plant attains a height of around 12 in. (30 cm). It has an apple-green pitcher with fine red veining in the upper half, giving way to a lid that can be over 4 in. (10 cm) wide, where the green breaks into white while retaining the veining. Its large flowers have broad, dark pink petals.
_Sarracenia_ 'Joyce Cooper' is the cultivar named for my other grandmother, and is from the same cross ( _S. ×catesbaei_ and _S. ×moorei_ 'Brook's Hybrid'), which demonstrates just how variable offspring can be from complex crosses. It is of a height similar to 'Constance Healy' but produces slimmer pitchers with narrower lids. Colouration is unique, with the coloured leaves from its _S. ×catesbaei_ parent presenting as a mottled effect. Leaves are orange. As it develops, the veins on the outside of the lid are reversed, appearing green against an orange background.
**Examples of _Sarracenia_ Primary Hybrids**
Bear in mind that these hybrids are all fertile, and can hybridize with each other to create ever more elaborate and complex crosses. With this in mind, we'll just consider a handful of interesting plants.
_Sarracenia_ × _areolata_ , a cross of _S. leucophylla_ and _S. alata_.
_Sarracenia_ × _catesbaei_ , a cross of _S. purpurea_ and _S. flava_.
_Sarracenia_ × _chelsonii_ , a cross of _S. purpurea_ and _S. rubra_.
_Sarracenia ×excellens_ , a cross of _S. minor_ and _S. leucophylla_.
_Sarracenia ×exornata_ , a cross of _S. purpurea_ and _S. alata_.
_Sarracenia_ × _moorei_ , a cross of _S. leucophylla_ and _S. flava_.
_Sarracenia_ × _mitchelliana_ , a cross of _S. leucophylla_ and _S. purpurea_.
_Sarracenia_ × _popei_ , a cross of _S. flava_ and _S. rubra_.
_Utricularia_
**BLADDERWORT**
Bladderworts represent one of the largest genera of carnivorous plants, with some 230 species. They have worldwide distribution and like the droseras are found on every continent except Antarctica. This is in itself a remarkable achievement, but when combined with their diversity and the fact that they possess the fastest-moving structures and most highly sophisticated level of natural engineering in the plant kingdom, the genus _Utricularia_ becomes truly awe-inspiring. From a cultivation standpoint, the bladderworts are grown primarily for their often incredible, orchid-like flowers, which range in size from a few millimetres to over 2 in. (5 cm) across.
_Utricularia vulgaris_ , an aquatic bladderwort. Note the winter resting buds developing at the ends of the thin branches.
_Utricularia campbelliana_ , growing among moss on tree branches in Venezuela.
The variability and adaptations found within the genus are vast, but we'll keep things simple by following the traditional division of bladderworts (at least for cultivational purposes) into three groups: aquatic, epiphytic, and terrestrial.
Aquatic species are found across Europe and in other parts of the world, existing as free-floating stems and branches. In temperate regions they lose their growth and form small spherical buds, which sink to the bottom of their ponds to survive the cold of winter, then float to the top in the spring.
Epiphytic species are typically found living in mosses, either at ground level or higher up on the trunks and branches of trees, generally in tropical areas. There the risk of drying out is minimal due to the high level of rainfall.
Terrestrial plants, which make up the largest number of species, are found in wet soils, sometimes as evergreen perennials. In other examples, they exist as annuals in seasonally wet areas, surviving as seed during the dry period of the year.
Whether aquatic, epiphytic, or terrestrial, the bladders themselves (as they are called) all require the presence of water to function and trap their prey. Bladders are small, generally oval structures which are hollow and have an entrance door at the narrow end, creating a seal and holding a vacuum within, giving the trap a lean, pinched appearance. Size varies greatly, from around ¹⁄₅₀ in. (½ mm) to the largest at around ½ in. (1 cm) in diameter. When an insect touches a trigger hair, it causes the door to open, allowing the walls of the trap to spring out and suck the water and insect inside before reclosing—a process which takes less than two thousandths of a second! Once inside, the water is then pumped out of the trap, leaving just the insect within to be digested.
_Utricularia reticulata_ , a typical terrestrial species with small leaves pressed to the soil surface.
A close-up image of botanical engineering at its finest and most extreme: the traps of _Utricularia reniformis_.
CULTIVATION
As with _Pinguicula_ , it is wise for cultivation purposes to approach _Utricularia_ in the species groups discussed: aquatic, epiphytic, and terrestrial. Of course, a large and diverse genus such as this will have exceptions to the rule, but we are concentrating on the common species.
**AQUATIC SPECIES** Whatever their origins, all aquatic species of _Utricularia_ must be submersed to function and trap prey. Their meals consist of tiny aquatic organisms such as daphnias, so these are not the plants to use for clearing wasps from the greenhouse.
A couple of temperate aquatics are ideal for growing in your garden pond, especially if it's a wildlife pond free of destructive fish (and unnecessary nutrients). They will tolerate a winter minimum of -4°F (-20°C). You can generally leave the plants to their own devices; by their very nature they will grow in the summer and die back and sink in the winter. Not particularly inspiring, but interesting just the same. Other species from warmer climes can be grown in a tank or small vessel in the home or terrarium. The biggest issue with these containers is a build-up of destructive algae—virtually unavoidable in such settings, and requiring an ongoing investment of time, effort, and attention.
**EPIPHYTIC SPECIES** Epiphytic utricularias grow within mosses and the like, always on another living plant—though unlike a parasite, not to the host's detriment. No carnivorous plants are parasitic.
The fine stature of the terrestrial species makes them ideal to use as accents, such as with bonsai plantings. Left to right: _Utricularia bisquamata_ 'Betty's Bay', _U. sandersonii_ , and _U. livida_.
Regarding cultivation, I recommend Zone 11, where plants are hardy to 45°F (7°C). The key distinction in growing epiphytic species is in the potting mix. I like to keep my epiphytics in pots with the lower 50 percent plain peat, and the remainder topped with a mix of sphagnum moss and orchid bark. I fill the lower section with peat moss because sphagnum placed there would break down and produce an offensive smell in a very short time.
Keep plants wet and in good light, affording some shade from the strongest sun. In the winter, allow them to dry a little so they're damp (with the winter minimum of 45°F [7°C] mentioned previously). In temperate climates they tend to lose their leaves, regrowing in the spring.
Epiphytic bladderworts are also ideal for terrarium cultivation. This environment will widen your choice of species, and in it, plants can be kept warm and in growth year-round.
**TERRESTRIAL SPECIES** This group offers the largest number of plants available to grow on a windowsill, or even as part of a larger display. All thrive in equal parts peat moss and lime-free horticultural sand, set permanently in a dish of rainwater. In summer, this water level can be as high as the soil surface; less water is required in winter. Never allow these plants to dry out—they consist of extremely fine strands of growth which will die quickly if desiccated.
Terrestrial utricularias are not particularly fussy about light levels, which allows them to succeed in a range of environments. Outside plantings can be grown in Zone 11, where they will tolerate 45°F (7°C). Sunny and shady greenhouses are fine, as are sunny and shady windowsills—a good choice for low-light windows. As small container-grown plants (shallow dishes work well), they are almost botanical ornaments. Use them to adorn tables and such, where their small flowers can be appreciated. Their modest stature also makes them suitable as accent plants for other specimens such as bonsai, the delicate nature of terrestrial utricularias a perfect contrast to the solid nature of trees and other bonsai forms. In terrariums, keep them potted to curb their wandering nature. Here, you will be able to grow a plethora of tropical species which will appreciate the warmth and reward you with year-long flowers.
Divisions of utricularias. The epiphyte _Utricularia reniformis_ on the left, terrestrial _U. livida_ on the right.
PROPAGATION
There are two options when it comes to propagating bladderworts. When beginning with a new species, seed is often the only way to go. Because of the wide variation in their native habitats, however, it is difficult to give one-size-fits-all advice on how to germinate them. Some have habitats that dry out on an annual basis and will only grow after a period of hot desiccation (the complete opposite of sarracenias and other temperate plants). Others need warm and wet conditions. One or two are shed as living, green seeds with a visible embryo and need to hit water immediately. As a rule of thumb, though, they all need to be sown and kept very wet, and with a little warmth they will germinate in around four weeks if viable. A thorough daily spray from a hand-held mister will help.
If you already have a particular plant and wish to propagate it, forget seed, as many do not self-seed anyway. (Believe me, this is a blessing, as the ultra-fine seed tends to blow around in the greenhouse, infecting other pots.) Division is quick and easy. With terrestrial species, this is as straightforward as either taking the plant out of the pot and splitting it into sections, or cutting a section at least ½ in. (1 cm) in diameter and planting it in the same potting mix as the adult plant. You will be surprised how quickly new plants will take hold. With epiphytic species, the principle is the same, but a little more care is required. They are larger plants, often with thick, fleshy roots, and not as neat and compact as terrestrial species, so you will need to divide carefully. Ensure that a good section of roots is in each division, and simply replant in the same potting mix as the adult plant. Aquatic plants can be simply pulled apart and dropped in water.
_Utricularia vulgaris_. Note the structure of the traps and the developing winter resting bud (turion) to the left.
Plant Suggestions
AQUATIC SPECIES
_Utricularia vulgaris_
_Utricularia vulgaris_ , the common bladderwort, is a vigorous and widespread plant found across Europe and Asia as well as the UK (though scarce there). This plant is best suited to a garden pond—specifically a wildlife pond without destructive fish that are likely to eat the plant's winter resting buds (turions). Simply drop the plant into the water and let it do its thing. Occasionally, it produces the most beautiful yellow flowers, to ½ in. (1 cm) across, held to 4 in. (10 cm) above the water surface.
Flowers of _Utricularia alpina_.
The kidney-shaped leaves of _Utricularia reniformis_.
_Utricularia reniformis_ flower.
EPIPHYTIC SPECIES
_Utricularia alpina_
A beautiful species found in northern South America and the Antilles, _Utricularia alpina_ produces broadly spear-shaped leaves to around 4 in. (10 cm) in length, which are held upright. The large white flowers with a yellow area on the lower lobe are unmistakable. It forms tubers to enable it to survive dry periods, and is a perfect candidate for the terrarium.
_Utricularia reniformis_
A favourite plant, _Utricularia reniformis_ is also from northern South America, and one of the largest as well. Bright green, kidney-shaped leaves grow to 4 in. (10 cm) across, held aloft on wiry red stems. The flowers, reminiscent of an orchid, are to 2 in. (5 cm) wide and of a pale violet colour, the lower lobe furnished with two vertical yellow stripes. This can be a large plant, ideal for the terrarium or greenhouse.
_Utricularia bisquamata_ 'Betty's Bay'.
_Utricularia_ sp. "Kerala."
TERRESTRIAL SPECIES
_Utricularia bisquamata_
_Utricularia bisquamata_ is probably the most common bladderwort of all. This South African plant is particularly vigorous; in most forms it will rapidly self-seed and become a nuisance, albeit a nuisance with charm. It is exceedingly variable, with most forms being small and perhaps a bit innocuous. The best form to grow has been elevated to cultivar status, _Utricularia bisquamata_ 'Betty's Bay': a fine, self-sterile clone with large flowers to ⅓ in. (8 mm) across, and a pale purple lower lobe, darker at its upper edge before giving way to a vivid yellow. Suitable for the windowsill, greenhouse or conservatory.
_Utricularia_ sp. _"Kerala"_
_Utricularia_ sp. "Kerala" is an as-yet-unnamed species found in the Indian state of Kerala, and therefore ideal in the terrarium. I find this plant both prolific and extremely floriferous, making it ideal for the beginner. It produces many delicate, pale pink flowers, each lower lobe touched with a yellow patch on its upper section atop a darker purple line. The upper lobe is of the same colour, and the pointed white spur extends downward. These are produced on stems to around 4 in. (10 cm) in height, with several flowers open at any one time on each scape. While this plant, like other terrestrial species, tolerates temperatures to 45°F (7°C), it prefers warmer conditions.
_Utricularia livida_ (pink-purple clone).
_Utricularia parthenopipes_.
_Utricularia livida_
_Utricularia livida_ is an interesting and variable plant found in both Central and South America and in Africa. The flowers can be pale violet and white in some forms, a darker blue in others, held above broad, apple-green leaves. The flowers open singly one above another, with as many as eight to ten on each fine scape.
_Utricularia parthenopipes_
A South American species with a charm nearly as elegant as its name. Although _Utricularia parthenopipes_ will tolerate the same temperatures as the others listed, it does better in the warmer confines of the house or terrarium. Its white flowers, which are up to ½ in. (1 cm) long, are touched with a vivid yellow patch at the top of the lower lobe, and a cluster of deep purple veins on the upper. In comfortable conditions it can flower prolifically, seemingly producing viable seed, so beware.
_Utricularia prehensilis_
_Utricularia prehensilis_ is a stunning plant with sulphur-yellow flowers on tall stems. The stems have the ability to twine, and in the wild, wrap themselves around surrounding grasses, always in a counterclockwise direction. This is an attribute shared with around twenty species, all but one twining in the same direction. Not content to stay put, it is often seen escaping from the base of its pot. Flowers are to around ½ in. (1 cm) long with a lower lip that protrudes outward. The strap-shaped leaves are a similar length.
_Utricularia prehensilis_.
_Utricularia sandersonii_.
_Utricularia uniflora_.
_Utricularia sandersonii_
One of the most common and recognizable species in cultivation, and for my money, one of the prettiest. _Utricularia sandersonii_ produces a profusion of small, paddle-shaped, bright green leaves, and a similar quantity of white flowers up to ½ in. (1 cm) long. They are unusual compared to other species, in that they possess a long, arching spur which reaches forward from the rear to the underside of the flower. Also, the upper lobe is divided into two "ears" which give the flower the appearance of a rabbit. These are touched with a light streak of violet, as is the wide lower lobe. They are produced in profusion, each flower scape carrying around six blooms.
_Utricularia uniflora_
_Utricularia uniflora_ is an Australian species which, as the name implies, produces its flowers singly on tall stems. These are pink in colour, the lower section fanning out like a skirt, topped with two tiny vertical yellow lines, themselves underlined with a dark pink stripe. The upper lobe is small and pale. The dark stems can be up to 8 in. (20 cm) in height.
The exquisite form of _Cephalotus follicularis_.
TAKING THINGS TO THE NEXT LEVEL
At some point, you will begin to tune in to your plants and their particular foibles and habits. You will get the hang of the unique processes they employ to survive and thrive, and what role you play in their success. As your knowledge and experience grows, you may become ready to take on carnivorous plants that push your ability a little further (we also sometimes refer to this stage of interest as "hooked").
The three genera that follow should be perfect for that stage of your development as a carnivorous plant gardener. However, please heed this warning: although you may be tempted, do not attempt growing these plants until you have had a degree of success with the more common plants from the previous chapter—early failures can be very disappointing. (Trust me, when it comes to cultivation, I and many others have fallen into the carnivorous plant trap of running before we could walk.)
_Cephalotus_
**ALBANY PITCHER PLANT**
As with _Darlingtonia_ and _Dionaea_ , there is just one species of the genus _Cephalotus: C. follicularis_ , and it is the sole representative of its family, the Cephalotaceae. This unique pitcher is unlikely to be confused with any other plant. It is found in a small coastal area near the town of Albany in southwest Australia, where it inhabits open, peaty swamps. The habitat rarely sees temperatures as low as freezing, and the plants are usually found in full sun, but sometimes in shaded areas.
A perennial, its unique appearance makes it highly sought after, a popular choice for collectors. It is a low-growing species which produces two types of leaves. One is a flat, dark green, non-carnivorous leaf. The other is a pitcher, exquisite in both form and design, with a remarkable intricacy. The pitchers are small, the largest being only around 2 in. (5 cm) in length, but with a level of detail generally found only in the genus _Nepenthes_.
It is a rosetted plant which gradually forms a clump from its thick subterranean rhizome, each rosette up to 5 in. (12½ cm) in diameter and consisting of a combination of carnivorous and non-carnivorous leaves. The squat pitchers are attached by a sometimes long stalk to their rear, and are often partly buried to around a quarter of their depth. This is in order to catch crawling insects such as ants. Depending on their location, pitchers can be a range of colours, from green (in semi-shade) to bright burgundy (in full sun). Each is furnished with three substantial ridges, which run vertically up the front of the trap, each ridge fringed with long, white hairs. These ridges are dual purpose: they add to the intrinsic strength of the structure, and also guide crawling insects upward toward the mouth, which is guarded by a palisade of glossy red, inward-pointing spines.
A _Cephalotus_ plant is perfect in full sun next to a window.
A lid overhangs the mouth to avoid the ingress of water. This lid is marked on its lower side by dark red patches interspersed at the margin by translucent windows, perhaps to allow light to reflect on the liquid within. The throat of the pitcher is pale green, and below the spines it forms a ridge before giving way to the belly of the trap—another barrier to prevent escape from within. Small insects are guided and channeled along the pitcher length by the ridges. Nectar is secreted over the mouth region and acts as a further attractant. Here the prey lose their footing and fall into the trap, where they are digested. The traps have a substantial feel; because of their design they are surprisingly strong, presumably to prevent mechanical damage.
The underwhelming flowers of _Cephalotus_.
In the spring a somewhat absurdly tall flower scape is produced, up to 24 in. (60 cm) high. It is topped by a cluster of what frankly must be the biggest floral anti-climax in the world of carnivorous plants: small, inconspicuous white flowers with yellow stamens and pink stigmas, around ¹⁄₆ in. (4 mm) in diameter. Several open at once. Although self-fertile, a pollinating agent is required to fertilize the flowers. This results in small, hairy seeds to ¹⁄₁₂ in. (2 mm) long being produced in the autumn.
CULTIVATION
_Cephalotus_ is one of those genera with a reputation for being difficult in cultivation, though I believe a combination of hard conditions and infrequent repotting goes a long way toward ensuring success.
As with many carnivorous plants, it inhabits a harsh environment in the wild and is adapted to full sun, although as I alluded to previously, it will grow in shade. Such sun-restricted conditions, however, result in traps that although larger, are green and somewhat insipid. For this reason I recommend placing it in the sunniest location you have. That said, due to its tolerance of lower light levels, it is a candidate for those shadier windowsills—though I would stop short of placing it in a truly shade-dominated window. It is also a good option for the terrarium, especially under lights.
Follow the standard regimen of keeping the container in a tray containing 1 to 2 in. (2½ to 5 cm) of rainwater in the summer months, reducing that amount to maintain dampness over the winter without allowing the plant to dry out. Grow _Cephalotus_ in Zone 11; an ideal winter minimum is 41°F (5°C), although a cooler location in the house is fine, as this plant doesn't need quite the intense drop in temperature that more temperate species require. _Cephalotus follicularis_ is remarkably tough, though; I have frozen plants, not that I can endorse such behaviour. Apart from a little leaf damage, they were fine, regrowing in the spring.
Plant in standard-depth containers or similar. Potting mix should be equal parts peat moss and horticultural sand. There is the temptation with a plant whose habit is low to the ground to pot it in shallow containers—a bulb bowl, for example—but avoid this. _Cephalotus_ plants like a good depth; their roots are fine and can penetrate deep below the stocky rhizome. This fine constitution results in some fragility, though, and newly repotted plants, especially if they are large, have a tendency to lose a number of roots. The effect is that they appear to sulk somewhat, often dropping their lids over their mouths for a period of two to three weeks. If this happens, place a plastic bag over the plant (which will help reduce water loss), and move the pot to a shadier location until the plant appears to be back to normal. Smaller specimens seem to be more resilient, perhaps as they are easier to handle and repot and lose fewer roots in the process.
PROPAGATION
There are three methods of propagation for the Albany pitcher plant: by seed, rhizome, and leaf cuttings. Seed is a good way of raising multiples of the plant, though it's a little slow. The tiny seeds are best collected in the autumn, when they are shed, and stored in the refrigerator. In the spring, they can be surface sown on the same potting mix as the adult plants, then kept wet and in full sun. Germination occurs in around four weeks. Move the plants on as they develop.
As there is a rhizome on this plant, it is an ideal candidate for division in the spring. This results in adult-sized plants, albeit on a smaller scale, almost immediately. Carefully removing the plant from its pot (bearing in mind my previous cautions), simply break off the individual growth points, along with a portion of the rhizome and roots. Try to leave the major section of the plant (the part you aim to keep) as undisturbed as possible, to minimize root loss and prevent repotting problems. Plant the divisions separately in the same potting mix. Smaller potted sections appear to be more resilient, perhaps because they are easier to handle. Once wetted thoroughly, they should resume their growth. Keep the divisions out of full sun, under cover in a propagator for a few weeks until established.
_Cephalotus_ divisions. Note the thick rootstock.
Leaf pullings from _Cephalotus_ plants need to include the paler base of the stem.
Push the pullings into the potting mix, up to the point where the leaves begin to flare out.
The third option, and one which prevents disturbance to the root system entirely, is to take leaf cuttings or pullings. This method utilizes the flat, green, non-carnivorous leaves and is best undertaken in the spring. The leaves should be gripped firmly between the fingers, as low as you can manage, because the trick is to remove the leaf from the rhizome _with_ its base, at the end of the stalk. This is where the genetic material (which will produce the new plant) is to be found. Once you have harvested a few (and don't be afraid to take over half of them), fill a container with the potting mix mentioned. Make small holes in the mix with a plant label or similar, and insert the leaves to the depth of the stalk, with the leaf remaining above the soil surface.
Water well from above to wash the potting mix around the leaf bases. Keep wet and in a bright position away from scorching sun. New plants can be seen developing in around eight weeks. They should be left in the pot until they are sturdy enough to handle, at which point they can be potted separately.
_Heliamphora_
**SUN PITCHER**
Native to those peculiar flat-topped table mountains known as _tepuis_ , and their surrounding marshy lowlands, this genus is the South American representative of the family Sarraceniaceae. As little as thirty years ago, only five species were known, so remote are their often highly restricted ranges. In that comparatively short time, their ranks have swelled to include some twenty-four members.
Sun pitchers possess an elegance not so apparent in other plants. They work like, and indeed are similar in design to, the sarracenias, though they appear to be a little more primitive. A pitcher is comprised of a simple leaf, which has been rolled around and attached at the front to form a tube. However, _Heliamphora_ is precisely adapted to its unique habitat. The leaf opening is not finished with an overhanging lid, as with the upright sarracenias. Instead, it is furnished with a much-reduced structure, commonly referred to as a nectar spoon, due to its small size. This secretes a sometimes sweet-smelling nectar, which acts as an attractant to passing insects. While drinking, the bugs lose their footing on the slippery surface directly beneath, falling into the water-filled pitcher. There they are broken down by bacterial decomposition.
Plants are generally found growing atop the tepuis, isolated from each other and the surrounding lowlands. It is this isolation which over eons produced the speciation we see today—individual species are often found on only a single mountain. We have a small number of determined botanists and explorers to thank for the relatively recent discovery of so many of these fascinating plants; in the scientists' research, they have often been the first humans to step onto these alien landscapes.
These mountains experience high rainfall, generating their own weather systems as hot lowland air is pushed upward by the sheer sides, to condense over the vast plateaus. They are among the wettest places on Earth. Rather than battle to keep their leaves free of excess water, which would flush the contents out, the pitchers freely fill on a regular basis. They do, however, have a mechanism to prevent loss of food items. At the front of the leaf, where the join is, close inspection will reveal either a thin drainage slit extending about a quarter of the way down beneath the mouth, or a small drainage hole. Each option bears a number of short hairs which act as filters, so the excess water can escape, but the prey cannot.
Two tepuis in Venezuela: Kukenam on the left, and the famous Roraima on the right.
_Heliamphora glabra_ in habitat, atop Mount Roraima.
The plants vary widely in height, from around 6 in. (15 cm) in the case of _Heliamphora minor_ , up to 13 ft. (4 m) in _H. tatei_ , which forms a tall, upright stem topped by a rosette of pitchers. This enables the plant to grow through surrounding scrub. The pitchers are still among the tallest, up to 18 in. (45 cm); they are often fluted and elegant with wide mouths. Between species, plants can be remarkably colourful: some variation of green, red, red and green, and orange is the general range of their spectrum.
Leaves rise from a subterranean rhizome, which enables the plant to gradually divide and produce a clump. This can take a number of years—it is not uncommon for a single-crowned adult of some of the taller species to need a decade to become sizable.
In cultivation, sun pitchers flower in late winter through to early summer, and although variable, they are of a similar structure. Each bloom is comprised of four white tepals, which surround the yellow stamens and stigma. They are pendulous, hanging downward in the same manner as other flowers in the family. The pure white tepals are held on tall, wiry stems which are either green or bright red, contrasting beautifully with the often colourful pitchers. With no petals to drop, the flowers last for three or four months, the tepals gradually darkening until the scape dies back. If pollination occurs, seeds are shed late summer. The flowers have an interesting mechanism to prevent self-pollination: pollen is shed prior to the ripening of the receptive stigma, and then only with the aid of the vibration from the pollinators' wings, which shake the anthers. In cultivation, it is best to remove a few of the stamens and split them open to access the pollen, transferring it to another flower with a green stigma.
Species of _Heliamphora_ together in the morning sun.
The flower of _Heliamphora nutans_.
CULTIVATION
Traditional literature has often stated that you should spray heliamphoras on a regular basis, but this is unnecessary and can even be detrimental by seemingly encouraging sooty mildew. I never spray my plants. I do, however, water them regularly from above, to ensure the pitchers are kept full—remember, where these plants are found it often rains every day, sometimes for days at a time. Understanding how a plant grows in the wild is key to its successful cultivation, and this fact is especially true with heliamphoras.
A large _Heliamphora nutans_ ready to divide.
Carefully remove the pot. Remember, all parts of these plants are very brittle.
It is also a misconception that these plants do not like standing in water; some of the lower-growing plants can be flooded up to the level of their mouths for long periods of time. They all need to be kept wet to a depth of up to 3 in. (7½ cm) of rainwater in the growing season. Reduce the amount to keep them merely damp over the winter if they are in a greenhouse or on a windowsill.
Coming from an equatorial region, sun pitchers are adapted to very high light levels. This is particularly the case with species found atop the tepuis, where light intensity can be fierce and is exacerbated by the high altitude. Conversely, temperatures on the summit plateaus rarely exceed 77°F (25°C) during the day, and can plummet to as low as 37°F (3°C) at night. When one considers their natural habitat conditions, it's not surprising that _Heliamphora_ plants in cultivation require intensely high light for optimal colouration. They are generally suited to Zone 11, tender to 40°F (4°C), and do not like overheating—temperatures over 86°F (30°C) are detrimental. The more commonly grown species will be fine on a sunny windowsill in your home. But in a greenhouse or conservatory, take care to keep the temperature down. A couple of suggestions in the greenhouse are to stand plants in the bed nearest the door to allow access to the breeze, and also to soak them around the middle of the day if possible, giving them a good drenching. Wetting the floor thoroughly will also have a cooling effect as the water evaporates.
Divisions ready to be potted separately.
Once plants are potted, keep them shaded and cool for a few weeks to aid re-establishment.
These plants make ideal candidates for the terrarium, where the temperature can be controlled a little more easily, especially if lit by cooler lights such as fluorescent tubes or LEDs. This environment makes it easy to provide sufficient illumination for good plant colour, and is also conducive to easy watering and ventilation. There are a number of species in the ideal 6 to 10 in. (15 to 25 cm) height range for terrariums.
Use a potting mix of equal parts peat moss and perlite. Provide good-sized containers so the plants can divide and clump. This will also mean less-frequent repotting—an important consideration, since all parts of the plant are unusually brittle and don't appreciate regular disturbance. Repot in the spring, when flowers are open, as this will allow time for the plants to recover sufficiently.
PROPAGATION
There are two options for propagating heliamphoras: seed and division. Seed is, alas, very slow but will result in a number of plants being produced. The seeds should be sown as soon as they are shed, which means late summer, or as soon as you receive them (in their natural habitat the temperatures are fairly constant year-round). Seeds will appreciate the coolness of spring in which to germinate. Germination takes four to six weeks, and seeds should be sprayed a couple of times per day to replicate the wet season in their natural habitat. It has even been suggested that the growth inhibitors which prevent a seed from germinating at an inappropriate time require this flushing to disperse. Treat the seedlings as adult plants, but be aware that they are exceptionally susceptible to heat stress at this size.
Division is far preferable, but take care because of the plants' delicate nature. This should be done in the spring. Tip the contents of the pitcher out and remove the plant by upending the pot, supporting the plant carefully. Remove as much of the loose potting mix as you can, before washing off the bulk of the remainder in a bucket of rainwater. This will allow you to see the structure of the plant and will make the division process easier. Either divide the plant into two or three pieces, or just remove a couple of growth points, ensuring that each has a few roots attached. Plant these in the standard potting mix, firming down gently, and thoroughly soak. Because the plants are so brittle, it is common for the crowns to snap off without roots. Crowns can still be potted, burying the base of the plant 1 in. (2½ cm) into the soil, where they will root through in time. These must be kept shaded until they have rooted.
Pitchers of _Heliamphora heterodoxa_.
Plant Suggestions
_Heliamphora heterodoxa_
A species with wide distribution, found in Venezuela both high atop Ptari-tepui, and in lowland areas of the Gran Sabana (great savanna). _Heliamphora heterodoxa_ is a variable plant to 10 in. (25 cm) in height, apple green, flushing red as the leaves age. The red-rimmed mouth holds aloft a red nectar spoon that can be broad and flattened, elongate and overhanging the mouth, or even helmet-shaped. The white flowers, to 2 in. (5 cm) in length, are borne on long red stems to 24 in. (60 cm) high. Ideal for terrarium or greenhouse cultivation.
An insect's-eye view of _Heliamphora minor_.
_Heliamphora nutans_.
_Heliamphora minor_
_Heliamphora minor_ , the smallest species, is found on Auyan-tepui, the mountain from which Angel Falls, the world's highest uninterrupted waterfall, flows. It produces pitchers to only 6 in. (15 cm) at their largest, usually smaller. It is a beautiful plant, with leaves opening a vibrant green and often touched with red veins that darken to a deep crimson, occasionally almost black, in intense sun. It is a dense, clump-forming plant which in the wild can be up to 39 in. (100 cm) across, with white nodding flowers to 1¼ in. (3 cm) long on red stems to 12 in. (30 cm) high. Ideal for terrarium or greenhouse cultivation.
_Heliamphora nutans_
The first species of _Heliamphora_ to be discovered, _H. nutans_ is generally considered to be the easiest to grow in cultivation—particularly tolerant and a good choice for first-time growers. It is found on several tepuis, and is typically up to 8 in. (20 cm) in height, the pitchers often producing flushes of pink-red in cultivation, darkening to solid red as they age or in good light. Elegant leaves contrast well in the spring with the white flowers, which are to 2 in. (5 cm) long and are held on bright red stems to 24 in. (60 cm) high. Ideal for terrarium or greenhouse cultivation.
_Heliamphora tatei_.
The large flower of _Heliamphora tatei_.
_Heliamphora tatei_
I include _Heliamphora tatei_ , a somewhat less-available plant, because it provides such a contrast to the other species mentioned. Though harder to source, it seems to revel in the same conditions those species favour. The tall fluted pitchers are typically a solid green colour, and are to a maximum height of 18 in. (45 cm), though usually a little smaller. It rarely produces the stem I spoke of earlier, and indeed this can take many years, suggesting that some fantastically old plants exist in the wild. Ideal for terrarium or greenhouse cultivation.
_Nepenthes_
**TROPICAL PITCHER PLANT**
On your knees for royalty! _Nepenthes_ is regarded by many as the monarch among carnivorous plant genera. With their unique, unrivalled spectrum of form and size, these plants represent extremes of evolution.
From the left, three stages of pitcher development in _Nepenthes boschiana_.
From left, the lower, intermediate, and upper pitchers of _Nepenthes boschiana_.
Tropical pitcher plants are found predominantly in Southeast Asia, with a few satellite species living as far afield as Madagascar and India. These are the stereotypical tropical jungle plants and are the sole genus in the family Nepenthaceae, which currently boasts around 150 species. This number continues to increase as more new species are identified by dedicated explorers—a fascinating fact, considering how few horticultural discoveries we assume are left in this twenty-first century.
Plants generally form a climbing stem from a terrestrial rosette. The stem is clothed in leaves, from which the midrib extends, producing a long tendril that flares out at its tip to form the pitcher. The mouth is surrounded by a slippery rim known as a peristome, which exudes nectar. This rim overhangs the interior of the trap, where digestive fluid is contained. There is a lid, which in many species acts as an effective barrier to prevent ingress and dilution of the pitcher fluid, though in other species, the lid is less effective.
There are two distinct types of pitcher. The lower pitchers, produced at ground level, nestle among mosses or leaf litter. They are joined to the tendril at the front of the pitcher, which is fringed along its length by two flanges. These flanges are edged with soft spines, perhaps as a guide for crawling prey in the same way as the _Cephalotus_ pitcher. Lower pitchers range in size from the fingernail-small _Nepenthes argentii_ to the gargantuan _N. rajah_ from Borneo. It is with these huge pitchers that some species of _Nepenthes_ have evolved to become true carnivores, rather than mere insectivores like the other genera. Traps can be the size of an American football, and the largest single _Nepenthes_ pitcher discovered held over 3½ quarts (3½ litres) of fluid. This is the only plant genus ever found to contain the remains of rats within its traps. The capture of vertebrates is not unusual with these plants, and frequent prey include frogs, lizards, and smaller rodents, as well as the usual insect fare.
Upper pitchers, as the name implies, are found higher up the stem, and they are formed when the plant begins to climb. These differ from lower pitchers in several ways: they are smaller, more funnel-shaped, and the tendrils attach at the rear, so the mouth is facing away and designed to attract flying insects. Tendrils in these upper leaves are also prehensile and will wrap and coil around surrounding vegetation for support.
Male flowers on _Nepenthes rajah_ , in habitat on the island of Borneo.
The flanges are absent in these pitchers, reduced to a pair of ridges, and the climbing stems in the largest plants may scramble up to 50 ft. (15 m), though often less. Intermediate pitchers, which are midway between the two types, are occasionally formed.
The plants are dioecious, which means there are male and female individuals each producing slightly different, and somewhat innocuous, flowers—unique among carnivorous plants.
It is interesting to note that these plants are nearly always found as terrestrials, which then scramble and climb. There is a common assumption that they are all epiphytes, but in reality very few species of _Nepenthes_ live in this manner.
CULTIVATION
Let's not beat about the bush. These can be large, climbing plants, a fact which can present problems in smaller spaces. I'm not going to lie—to grow nepenthes plants truly successfully you really should invest in a separate greenhouse, both to accommodate the plants' growth tendencies, and to provide the environment they require. However, if a greenhouse setup is not possible, don't dismiss the entire genus. There are a number of smaller-growing plants which are suitable for the terrarium.
A large and impressive setup, created by Jeremiah Harris, Colorado, United States.
For cultivation purposes, tropical pitcher plants are loosely divided into two groups: highland and lowland, which are differentiated mainly by temperature. The lowland plants are generally found at altitudes below 3280 ft. (1000 metres), and the highland plants are above this. The most important factor is temperature; this reduces with altitude and hence the plants become more tolerant of cooler conditions, an important consideration with regard to winter heating.
**LOWLAND SPECIES** These species are inhabitants of hot and humid environments, often the shady tropical rain forests we might imagine. For the lowland plants, a winter minimum of 64°F (18°C) is ideal. Light is an important factor, with lowland plants often tolerant of shady conditions.
**HIGHLAND SPECIES** Highland plants prefer brighter conditions. However, as with _Heliamphora_ , highland nepenthes plants do not like the excessively high temperatures which are often associated with brighter conditions in a greenhouse—but they do need good humidity, which in itself serves to cool the environment. If grown in a greenhouse, highland species are cheaper to heat due to their preference for lower temperatures; a minimum of 45°F (7°C) will suffice for many. The highland group also offers a much wider variety of plant choices than the lowland group, including some of the most interesting and attractive in the genus.
When they are small, my plants are grown in pond baskets, in a mix of equal parts coarse orchid bark and sphagnum moss. I move them to large, 2½- or 4-gallon (10- or 15-litre) containers once they are big enough. This may seem a little excessive, especially since they have comparatively fine root systems, but there is a reason: it reduces the frequency of necessary watering. _Nepenthes_ also appreciates an open, airy mix in which to grow. Again, this is a genus for which growers seem to develop their own preferred planting mix recipes, and you will soon realize there are many variations. Virtually all the species loathe standing in water, yet conversely must never dry out.
Thoroughly drench the potting mix every day in the summer, gauging this according to your plants' needs and the size of containers you are using. In hot weather, regular spraying of the plants with rainwater (using a hose) and dampening the floor several times a day will help regulate excess heat. These tasks are not easily undertaken if one works outside the home, and a little ingenuity may be required. You can install a few misting jets and a suitable pump (not as expensive as one might imagine) to avoid the worry of your plants frying. These can be combined with either a timer, or a humidistat set to around 85 percent. A layer of shade netting will also help regulate the temperature and allow the plants enough light to colour well. During the winter, watering can be reduced to as little as once a week, again depending on your specific conditions. You will soon get a feel for what is needed.
A _Nepenthes mira_ pitcher dying back at the end of its useful life.
Being a tropical genus, _Nepenthes_ grows year-round in the wild. But in cultivation, you will find with the cooler temperatures and lower light levels of winter that they tend to sit in a state of suspended animation. When early spring arrives, they resume the production of pitchers.
If a dedicated greenhouse is impractical, a terrarium is the best solution. There are some advantages to this cultivation method. The temperature and light (if artificial) can be maintained year-round, keeping the plants in growth. The daily maintenance and vital watering regimen required with a greenhouse will be drastically reduced in a smaller, more confined environment. And the maintenance costs associated with a terrarium will be much less than the expense of running a greenhouse. Do keep an eye on the temperature and ensure it doesn't rise too high in hot weather—positioning the terrarium out of direct sun will prevent this. Highland species will require no extra heat if in the house, but lowland species may, to the temperatures mentioned previously. A degree of periodic maintenance will be necessary to keep terrarium-grown plants under control—they will happily plot their escape while your back is turned, and the climbing stems will begin to explore. When this happens, simply trim the stems back to a more manageable length.
As with the majority of climbing plants, each leaf has at its base a dormant lateral bud which only commences growth if the stem above that point is damaged or removed, and the plant will regrow from this point. If you do prune, take care that the cut you make is on green stem—a cut on the brown, woody section is unlikely to regrow. You will also notice the presence of small rosettes forming at the base, which will occasionally produce new stems. Many plants will have several stems when they are mature, so removing one or two will do little harm to the plant.
A container of the hybrid _Nepenthes ×mixta_ , hanging outside for the summer in the author's garden, Somerset, England.
There are a few horticultural hybrids of _Nepenthes_ which can occasionally be found in garden centres. These are often a little more tolerant of harsher conditions and are ideal candidates for a shady position within the greenhouse or conservatory, especially among other plants. If some heat is provided over the winter, they will happily thrive in this environment year-round. Water and spray daily, more frequently in hot weather.
Tropical pitcher plants can also be grown outside in the summer months (that was when I discovered my "bird-eating" specimen). Planted in large containers and hung from the branches of a tree, they make fascinating and unusual subjects for a tropical garden.
PROPAGATION
There are two methods of propagating species of _Nepenthes_ : by seed and by cuttings. Seed is sometimes available and is a good method of raising a number of plants, but it has its drawbacks. It is slow—you will wait four to five years for a good-sized plant to be produced. The seeds also must be fresh, with what is probably the narrowest viability window of all carnivorous plants. In the case of _Nepenthes bicalcarata_ , that window is as little as two to three weeks, so the seed needs to be sown immediately if it is to have any reasonable chance of germinating. Seed should be surface sown onto chopped sphagnum moss, and kept damp and in high humidity. This can be in a propagator or wherever you are growing your adult plants. Good light is also important, but keep newly sown seed out of direct sun. Some bottom heat will also help. Germination can occur in as little as two to three weeks. Seedlings should be pricked out and placed into individual pots once they are large enough to handle, and treated as adult plants.
_Nepenthes_ cuttings. Reduce the leaf length by half.
Splitting the ends of the cuttings can help rooting.
Cuttings are the most convenient method of increasing your stock, and these should be taken in the spring to allow new plants the whole season to develop. Because nepenthes plants are keen to explore, there is likely to be an abundance of material from which to take cuttings; healthy, undamaged stems should be used. If the stem is short, simply remove the top section of perhaps four leaves plus the growing tip, cutting midway between two leaves. Cut each of the leaves with scissors to half their length; this will reduce the amount of water loss. Dip the bottom end of the cutting in rooting hormone powder as per the instructions. The most important requirement is high humidity, followed by good light. Cuttings should root if both these needs are met either in a propagator (some bottom heat will again help here), or preferably in the high humidity environment of your other plants. Cuttings will begin to root in a few weeks, and once established, the new plant will commence growth.
Cut upward through the leaf node to a depth that's about half of the stem.
Insert a few strands of sphagnum into the wound to keep it open.
Wrap the wound with more sphagnum and enclose in plastic.
Once rooted, the stem can be severed and the cutting potted and treated as the adult plant. You can see the black fibrous roots on the sphagnum.
Cuttings can also be taken from midsections of the stem. These will grow from the upper leaf node (remember the dormant lateral bud I mentioned previously). Be sure that the cuttings are taken from live, green sections of stem and not the lower, brown, woody section.
As another option, cuttings can be rooted in water. Simply stand them in 1 in. (2½ cm) of rainwater, and keep them in a bright location out of direct sun. A humid environment, such as a propagator, is ideal. To aid rooting, the lower 1 in. (2½ cm) can be split open, as one would a banana.
Air layering is another method of propagation—perhaps a preferable approach for rarer species, or for those which do not produce many stems. It is, in effect, taking a cutting from the plant without actually removing it from the adult. The cutting must be taken from soft green stems. Begin by selecting a healthy, undamaged stem, and remove a leaf at the point at which you want roots to form. Using a clean-bladed knife, make a cut upward through the leaf node, to a depth of about one-half of the stem. Press a couple of strands of sphagnum moss into the cut to keep it open and to prevent the wound from resealing. You can add a little rooting powder at this point, although its effectiveness has been questioned with layering. Wrap a small quantity of sphagnum moss (about the size of a golf ball) around the wound. Use a cut-up plastic bag to hold the moss securely, tying the bag firmly at the bottom (though not too tight), and a little more loosely at the top to allow water to be added if the moss looks dry. After a few weeks, you will notice a number of black roots within the plastic; at this point the stem can be severed below them, and the plant potted and treated as an adult.
The upper pitcher of _Nepenthes albomarginata_ in habitat, Borneo.
Plant Suggestions
Here I recommend a few plants that I feel are both easy and rewarding to grow. A closer look at the genus will open up a whole world of variation, which can quickly descend into the beginnings of true obsession. Don't say I didn't warn you.
LOWLAND SPECIES
_Nepenthes albomarginata_
Found in Borneo and Peninsular Malaysia, _Nepenthes albomarginata_ is unique within the genus in that its pitchers have a white band underneath the mouth, composed of many compressed white hairs. These hairs appear to be harvested by termites, which in the wild make up a large portion of the plant's captured prey. The lower pitchers reach around 3 in. (7 cm) and the upper ones 6 in. (15 cm) in height, and they are variably coloured. An ideal plant for the greenhouse or terrarium.
_Nepenthes ampullaria_
This is the nepenthes with the widest distribution; it is found across Southeast Asia. _Nepenthes ampullaria_ is a plant unlikely to be confused with any other, with its squat, oval pitchers. In the wild, these pitchers can form a dense carpet of open mouths, their narrow lids reflexed back. Without effective cover, the pitchers catch a large amount of debris falling from the trees above, and it has been suggested that the plant is adapting to derive nutriment from this leaf litter. Pitchers are up to 4 in. (10 cm) high, typically green and speckled in red, though various red forms exist. Upper pitchers are rarely produced and are small and functionless. _Nepenthes ampullaria_ is good in a greenhouse situation where it can be allowed the space to develop and spread, and is also ideal for the terrarium. Grow this plant in pure peat moss, as it is often found in wet, swampy ground.
The lower pitcher of _Nepenthes ampullaria_ in Kuching, Sarawak, Malaysia.
_Nepenthes bicalcarata_ in habitat, Kuching, Sarawak, Malaysia.
The delicate upper pitcher of _Nepenthes glabrata_.
_Nepenthes bicalcarata_
Another unique species from Borneo, _Nepenthes bicalcarata_ bears traps which are furnished with two fang-like teeth that overhang the mouth. The lower pitchers are large—up to 10 in. (25 cm) in height, and can be voluminous, containing up to 1 quart (1 litre) of fluid. This is one of the biggest species of _Nepenthes_. If given free rein in a greenhouse, it will soon begin to wander, suspending its smaller-toothed upper traps in the air as it climbs. It is also suitable for the terrarium, though will need to be kept in check.
HIGHLAND SPECIES
_Nepenthes glabrata_
A native of the island of Sulawesi and a personal favourite of mine, _Nepenthes glabrata_ has a dainty stature that makes it a perfect choice for the terrarium. If left to scramble, however (in a greenhouse, for example), it will soon reach the rafters. The lower pitchers are small, usually around 1 in. (2.5 cm) in height, coloured pale green with purple blotches under a broad green peristome. Its tendrils can be surprisingly long and are produced from narrow, bright green leaves, themselves emerging from dark red stems. The upper pitchers are unusual in that they are often larger than their lower counterparts. They are a paler green, almost porcelain-like, with blood red markings. All in all, _N. glabrata_ is a stunner.
_Nepenthes mira_
Found on the island of Palawan in the Philippines, beautiful _Nepenthes mira_ produces brightly coloured, stocky pitchers to around 8 in. (20 cm) in height. The peristome surrounding the mouth is made up of many individual teeth, which hang over the liquid within.
The stocky pitcher of _Nepenthes mira_.
_Nepenthes rajah_
Not the most practical of choices—and, due to its size, certainly not one for long-term success in the terrarium— _Nepenthes rajah_ is nonetheless a truly remarkable plant. A native of two mountains on the island of Borneo (Tambuyukon and the famous Kinabalu), this species holds the distinction of bearing the largest traps in the genus. Its green and crimson pitchers can be up to nearly 16 in. (40 cm) high and 8 in. (20 cm) wide, giving them unsurpassed capacity. Due to their great size, _N. rajah_ pitchers sit on the ground, a large convex lid overhanging the wide mouth. When plants produce climbing stems, they are rarely more than 6 ft. (2 m) in height. This is an ideal plant for the greenhouse; it enjoys good light and a potting mix of equal parts peat moss and lime-free sand. Keep it wet but do not stand it in water.
A word of warning, and a valuable lesson: I had carefully nurtured my plant, and it was producing pitchers to around half their adult size, when a faulty light tripped the power one frosty evening. I lost a number of rare _Nepenthes_ specimens in one night. With expensive plants such as these, it is best to have a backup heater that will take over in such a scenario—gas is ideal.
_Nepenthes rajah_ in Mesilau, Malaysia.
_Nepenthes ramispina_.
_Nepenthes spathulata_.
_Nepenthes ramispina_
A beautiful species from Peninsular Malaysia, _Nepenthes ramispina_ produces pitchers to 8 in. (20 cm) in height. Unique in colouration, they become purple-black in good sunlight. Good-sized traps are produced while the plant is still fairly small, making _N. ramispina_ a good selection for the terrarium.
_Nepenthes spathulata_
Found on the island of Sumatra, _Nepenthes spathulata_ is another favourite of mine that is vigorous and easy to grow. The fleshy, bright green leaves produce comparatively huge lower pitchers of a similar shade of green, with red-crimson peristomes. These can be up to 10 in. (25 cm) in height and can be formed while the plant is still young, making _N. spathulata_ a good choice for the terrarium. It is also well suited to the greenhouse, where it can (and will) scramble to its heart's content.
_Nepenthes truncata_.
_Nepenthes truncata_
I've listed _Nepenthes truncata_ as a highland species, but it is found across a wide altitudinal range on the island of Mindanao, in the Philippines, so both highland and lowland forms exist. It is a fantastic creature, producing some of the largest upper pitchers in the genus, frequently over 12 in. (30 cm) in height. They are of a red-green colour with wide, frilly, orange-red peristomes.
CHILDREN, BEGINNERS & EDUCATION
Learning should be fun in every sense, so I'm not suggesting you make your children (or yourself) sit through dry experiments and exams; rather, you should use these fascinating plants to introduce young people to the wider world of green, growing things. After all, to my mind, there is no group of plants quite as amazing as the carnivores.
Start kids young. Let them touch a Venus flytrap. It will grab their attention and do no long-term damage to the plant if the encounters are brief and occasional. Botany is seen as a dull subject, simply because plants do not move and operate in the same time frame we do. To avoid the continuing decline in interest, we need to snatch our children's developing minds away from electronic games and other distractions of the modern world, and encourage kids to become fascinated with the wonders of the natural environment.
Of course, children are not the only ones susceptible to the marvels of carnivorous plants, and I am happy to see increasing numbers of gardeners and hobbyists become infatuated. The following information is intended to help introduce curious minds of all ages to this captivating group of plants.
OBSERVING
These plants are great fun to watch, and simply observing their behaviour can facilitate discussion on trapping methods, effectiveness, adaptation and evolution, and methods of attraction.
Pitcher in a bottle
This activity requires a small receptacle that will hold water; a clear, 67-oz. (2-litre) plastic soft drink bottle with the label removed; a couple of flies of any variety; and an upright _Sarracenia_ pitcher, freshly cut from one of your plants.
Begin by pouring roughly a cup of water into the bottom of the receptacle. Cut a hole a short distance from the end of the tube section of the pitcher, large enough for a fly to escape. Cut off the bottom section of the bottle, to form a bell shape. (If you are doing this with children, supervise or handle the cutting steps.) Place the pitcher in the water and position the bottle over it, leaving the lid on.
Lift the edge of the bottle and release a fly or two. It may help with handling the flies to first place them in the freezer for a couple of minutes (no more or they might die)—this will slow them down a bit. Once under the bottle cover, the flies will soon return to normal activity. Watch carefully, as the flies are attracted to and subsequently caught by the pitcher. The hole you've cut in the pitcher will provide an escape hatch for the fly, but with luck, the process should repeat several times.
Venus flytraps and their prey will always fascinate young gardeners.
The same activity can be performed with full plants inside a standard propagator. Try _Dionaea muscipula_ or a species of _Drosera_. However, without the escape hole, it will of course be "game over" for the flies.
Filmed traps in action
Plants in the genus _Pinguicula_ move too slowly for their traps to be watched in action. Conversely, specimens of _Utricularia_ move too quickly and are too small. The mechanisms of these carnivorous plants are best observed on film, and a quick Internet search will bring up ample footage of the plants' trapping sequences.
Growing carnivorous plants from seed
As we know, carnivorous plants can take a long time to germinate and grow to full size. To preserve any hope of keeping a child's interest alive (or that of many an adult), forget about growing Venus flytraps or pitcher plants, and instead choose plants that will develop faster. Some species of _Drosera_ , most notably _D. capensis_ , are easy to grow and do so relatively quickly.
Please, please avoid prepacked seed kits. The chances of seeds from these packages germinating are only slightly more than nil, and failure will likely only discourage would-be botanists, as well as further the prevalent view that carnivorous plants are hard to grow.
FURTHER INVESTIGATION
A small collection of carnivorous plants kept in a suitable location in a classroom can be a useful aid to all types of botanical education, for students of any age. For older students, there are a multitude of questions that can spur experiments and analysis. Use the following as prompts to initiate or enhance study.
VENUS FLYTRAPS
How does temperature affect closure speed?
How long do traps take to reopen with prey of different sizes?
What effect do different fertilizers have?
SUNDEWS
What is the smallest particle mass that can be detected?
Do different species react with different speeds?
What types of chemical solutions will facilitate a reaction?
_SARRACENIA_ PITCHER PLANTS
Does the volume of nectar produced vary between species?
Does the application of UV light affect trap efficiency?
Do some species drug their prey?
_Nepenthes glabrata_ , upper pitcher.
RESOURCES
Nurseries
The following nurseries specialize in carnivorous plants. They all have long-standing, sturdy reputations.
UK
Hampshire Carnivorous Plants
hantsflytrap.com
Hewitt-Cooper Carnivorous Plants
hccarnivorousplants.co.uk
South West Carnivorous Plants
littleshopofhorrors.co.uk
P & J Plants
pj-plants.co.uk
UNITED STATES
California Carnivores
californiacarnivores.com
EUROPE
Nature et Paysages
natureetpaysages.fr
Wistuba
wistuba.com
Best Carnivorous Plants
bestcarnivorousplants.net
Equipment
The following are a few equipment and sundries suppliers in the UK whom I know and would recommend. An Internet search in your respective country and/or region would undoubtedly bring up similar suppliers.
Two Wests
www.twowests.co.uk
This company has been trading for many years and supplies a wide range of high-quality equipment, not least its exceedingly strong commercial benching, capable of bearing the heavy loads required. Custom work can be created to your specifications, and they also supply heated trays and covers.
Waterside Nursery
www.watersidenursery.co.uk
Owned by my good friends Linda and Phil Smith, Waterside supplies not only water plants for your pond but also a selection of containers including the fibreglass mini pond shown on p. 67.
Metal Planters UK
www.metalplantersuk.co.uk
Manufacturers of galvanized and powder-coated plant troughs, which can be used for planting or as containers for holding pots. These are of superlative quality compared to most you see, and are available in myriad colours.
Societies
Societies are the perfect way to meet other like-minded individuals who share your interest. I feel it's important to make the effort to both join your local society and to attend their meetings. This is because as useful as the Internet is as a resource, it is still no replacement for meeting and talking to people who have shared the same successes and frustrations as you have. That said, the Internet may be _very_ useful in helping you find societies in your area other than those listed here.
AUSTRALIA AND NEW ZEALAND
Australasian Carnivorous Plant Society
www.auscps.com
New Zealand Carnivorous Plant Society
www.nzcps.co.nz
BELGIUM
Drosera vzw
droseravzw.org
CZECH REPUBLIC
Darwiniana
darwiniana.cz
FRANCE
Association Francophone des Amateurs de Plantes Carnivores
dionee.org
GERMANY
Gesellschaft für Fleischfressende Pflanzen
carnivoren.org/gfp
ITALY
Associazione Italiana Piante Carnivore
aipcnet.it
JAPAN
Insectivorous Plant Society
ips.2-d.jp
NETHERLANDS
Carnivora
carnivora.nl
PORTUGAL
Associação Portuguesa de Plantas Carnívoras
appcarnivoras.org
SWEDEN
Scandinavian Carnivorous Plant Society
scps.se
UK
The Carnivorous Plant Society
thecps.org.uk
UNITED STATES
The International Carnivorous Plant Society
carnivorousplants.org
RECOMMENDED READING
Bailey, Tim, and Stewart McPherson. 2012. _Dionaea, The Venus's Flytrap_. Poole, Dorset: Redfern Natural History Publications.
Barthlott, Wilhelm, et al. 2007. _The Curious World of Carnivorous Plants_. Portland, OR: Timber Press.
D'Amato, Peter. 1998. _The Savage Garden_. Berkeley, CA: Ten Speed Press.
Fleischmann, Andreas. 2012. _Monograph of the Genus Genlisea_. Poole, Dorset: Redfern Natural History Publications.
Lowrie, Allen. 2014. _Carnivorous Plants of Australia, Volumes 1–3_. Poole, Dorset: Redfern Natural History Publications.
McPherson, Stewart. 2010. _Carnivorous Plants and their Habitats, Volumes 1–2_. Poole, Dorset: Redfern Natural History Publications.
———. 2009. _Pitcher Plants of the Old World, Volumes 1–2_. Poole, Dorset: Redfern Natural History Publications.
McPherson, Stewart, and Donald Schnell. 2011. _Sarraceniaceae of North America_. Poole, Dorset: Redfern Natural History Publications.
McPherson, Stewart, et al. 2010. _Sarraceniaceae of South America_. Poole, Dorset: Redfern Natural History Publications.
ACKNOWLEDGMENTS
There are a number of people I'd like to thank for helping me with this project. First, Matt DeRhodes, Mark Griffin, Andrej Jarkov, and Ian Salter for sharing their experiences of growing these plants in different outside locations, and indeed in different countries. A big thank-you to Alan and Sylvia Smith of West Pennard, Somerset, for allowing me to regularly commandeer their conservatory and garden for photographic purposes. To Tim Bailey, who supplied the darlingtonia seeds in my hour of need. Thanks to the Red Lion at West Pennard for allowing me to set up my tripod in the bar for one of the shots. And finally, appreciation to Greg Allan, Matt DeRhodes, Vincent Fiechter, Mark Griffin, Jeremiah Harris, Andrej Jarkov, Lynn Keddie, Stewart McPherson, Kamil Pasek, Dianne Riddiford, Ian Salter, Andy Sturgeon, and Matthew Wagstaffe, who contributed photographs for the book.
PHOTOGRAPHY CREDITS
Greg Allan
Matt DeRhodes
NoahElhardt/wikimedia.org
Earl/Flickr
Vincent Fiechter
Mark Griffin
Jeremiah Harris
Andrej Jarkov
Lynn Keddie
Stewart McPherson
Kamil Pasek
Dianne Riddiford
Ian Salter
Andy Sturgeon
Matthew Wagstaffe
H. Zell/wikimedia.org
All other photos are by the author.
INDEX
A
Albany pitcher plant
ants
aphids
aquatic species
Australian sundews
B
benching, in the greenhouse
binomial system of plant names
birds
bladderwort(s)
bog gardens
boggy areas, nutrient scarcity in
bonsai plantings
_Botrytis cinerea_ (grey mould)
butterwort(s)
C
_Carnivorous Plants_ (Slack)
_Carnivorous Plants, The_ (Lloyd)
Carnivorous Plant Society, UK
caterpillars
Catesby, Mark
Cephalotaceae family
_Cephalotus_ (Albany pitcher plant)
cultivation
overview
propagation
_Cephalotus follicularis_
Chelsea Flower Show garden
children
classification criteria for carnivorous plants
classroom investigations
clones
cloud forest, Mount Roraima, Venezuela
cobra lily
coir, in potting mixes
cold dormancy requirements for temperate species
common blue tit
containers, for cultivation in the home and garden
Cornish grit, in potting mixes
cultivation from seeds, 317
cultural requirements for carnivorous plants
_Cyanistes caeruleus_ (common blue tit)
D
_Darlingtonia_ (cobra lily)
collecting and sowing seeds
cultivation
growth rate
overview
propagation
_Darlingtonia californica_ (cobra lily)
_Darlingtonia californica_ f. _viridiflora_
Darwin, Charles
deer
DeRhodes, Matt
digestion by carnivorous plants
dioecious plants
_Dionaea_ (Venus flytrap)
cultivation
overview
plant suggestions
propagation
_Dionaea muscipula_ (Venus flytrap)
_Dionaea muscipula_ 'Australian Red Rosette'
_Dionaea muscipula_ 'Cross Teeth'
_Dionaea muscipula_ 'Dentate Traps'
_Dionaea muscipula_ 'Red Piranha'
_Dionaea muscipula_ 'Royal Red'
_Dionaea muscipula_ 'Sawtooth'
_Dionaea muscipula_ 'South West Giant'
diseases
distribution of carnivorous plants
Dobbs, Arthur
_Drosera_ (sundew)
cultivation
environmental adaptations
overview
plant suggestions
propagation
South African, cold dormancy
trap type and prey
_Drosera adelae_
_Drosera aliciae_
_Drosera anglica_ (English sundew)
_Drosera binata_ (forked-leaf sundew)
_Drosera binata_ var. _binata_
_Drosera binata_ var. _dichotoma_
_Drosera binata_ var. _multifida_
_Drosera binata_ var. _multifida_ f. _extrema_
_Drosera callistos_
_Drosera capensis_
_Drosera capillaris_
Droseraceae family
_Drosera closterostigma_
_Drosera cuneifolia_
_Drosera dichrosepala_
_Drosera filiformis_
_Drosera filiformis_ var. _filiformis_
_Drosera filiformis_ var. _tracyi_
_Drosera hamiltonii_
_Drosera occidentalis_
_Drosera pallida_
_Drosera prolifera_
_Drosera pulchella_
_Drosera regia_
_Drosera rotundifolia_
_Drosera rubrifolia_
_Drosera schizandra_
_Drosera slackii_
_Drosera spatulata_
_Drosophyllum lusitanicum_ (Portuguese dewy pine)
E
Ellis, John
England, bog garden in
English sundew
environmental adaptations of carnivorous plants
F
fluorescent lighting
food and feeding
forked-leaf sundew
freezing temperatures, and carnivorous plants
G
garden cultivation
germination of seeds
greenhouse and conservatory cultivation
green trumpet
grey mould
Griffin, Mark
growth rates of carnivorous genera
H
hard grown vs. soft grown plants
heaters for greenhouses
_Heliamphora_ (sun pitcher)
cultivation
overview
plant suggestions
propagation
_Heliamphora glabra_
_Heliamphora heterodoxa_
_Heliamphora minor_
_Heliamphora nutans_
_Heliamphora tatei_
HID (high intensity discharge) lights
Hoblyn, Tom
hooded pitcher
hygiene
I
insect control
_Insect-Eating Plants and How to Grow Them_ (Slack)
_Insectivorous Plants_ (Darwin), –,
International Carnivorous Plant Society, ,
J
Jarkov, Andrej,
L
LED (light-emitting diode) lamps, –
light requirements for temperate species, –
Linnaeus, Carl, ,
Lloyd, Francis E.,
Lyte, Henry,
M
mealybugs, –
Mexican butterworts, , , ,
monoecious plants,
Mount Roraima, Venezuela,
N
native habitats of carnivorous plants, –
_Natural History of Carolina, Florida, and the Bahama Islands_ (Catesby),
Nepenthaceae family,
_Nepenthes_ (tropical pitcher plant)
cultivation, –
highland species, –
lowland species, –
native habitats, –
overview, –
seed viability,
trap type and prey,
_Nepenthes albomarginata_ ,
_Nepenthes ampullaria_ , –
_Nepenthes argentii_ ,
_Nepenthes bicalcarata_ ,
_Nepenthes boschiana_ ,
_Nepenthes glabrata_ , –,
_Nepenthes mira_ , , ,
_Nepenthes ×mixta_ , ,
_Nepenthes rajah_ , , , ,
_Nepenthes ramispina_ ,
_Nepenthes spathulata_ ,
_Nepenthes truncata_ ,
_New Herball_ (Lyte),
North American pitcher plant(s), –,
O
orchid bark, in potting mixes,
_Origin of Species_ (Darwin),
P
pale pitcher,
parrot pitcher,
peat, –
peat alternatives, in potting mixes,
peat moss, in potting mixes,
perlite, in potting mixes,
pests and diseases, –
_Pinguicula_ (butterwort)
cold dormancy,
cultivation, –
Mexican hybrids, –
Mexican species, –
on a windowsill,
overview, –
propagation, –
temperate species, –
_Pinguicula crassifolia_ ,
_Pinguicula ehlersae_ ,
_Pinguicula esseriana_ ,
_Pinguicula grandiflora_ , ,
_Pinguicula grandiflora_ subsp. _rosea_ ,
_Pinguicula lauana_ , –
_Pinguicula moranensis_ ,
_Pinguicula poldinii_ ,
_Pinguicula_ 'Tina', ,
_Pinguicula vulgaris_ ,
_Pinguicula_ 'Weser',
pitcher in a bottle (activity), –
pitcher plant moth,
pitcher plant rhizome borer,
pitcher plant(s), ,
plant carnivory, notion of, –
plant collecting, in Victorian era, –
plant cultivation from seeds, –,
plant families
Cephalotaceae,
Droseraceae,
Nepenthaceae,
Sarraceniaceae, , ,
plant names, binomial system, –
plant procurement,
plant reproduction,
plant trays, –
pollination, manual, –
pond marginals, –
Portuguese dewy pine, –
potting mixes for carnivorous plants, –,
propagation, vegetative,
purple pitcher,
pygmy sundews, , , –
Q
Queensland sundews,
R
rabbits,
rainforest,
rainwater collection and use, –
recommended reading,
red spider mites,
resources, –
reverse osmosis (RO),
rhizomes, defined,
rodents,
round-leaved sundew,
S
Salter, Ian,
sand, in potting mixes, –
_Sarracenia_ (North American pitcher plant)
cultivation, –
dormancy requirements, –
in the classroom,
light requirements, –
overview, –
plant suggestions, –
propagation, –
trap type and prey,
_Sarracenia alata_ (pale pitcher), –
_Sarracenia alata_ f. _viridescens_ ,
_Sarracenia alata_ var. _atrorubra_ ,
_Sarracenia alata_ var. _cuprea_ , ,
_Sarracenia alata_ var. _nigropurpurea_ , –
_Sarracenia alata_ var. _ornata_ ,
_Sarracenia alata_ var. _rubrioperculata_ ,
_Sarracenia ×areolata_ ,
_Sarracenia ×catesbaei_ , ,
Sarraceniaceae family, , ,
_Sarracenia ×chelsonii_ ,
_Sarracenia_ 'Constance Healy',
_Sarracenia ×excellens_ ,
_Sarracenia ×exornata_ ,
_Sarracenia flava_ (yellow trumpet)
dissected flower of,
Catesby illustration,
cultivation, –,
overview, –
light requirements for,
with sooty mildew,
_Sarracenia flava_ f. _viridescens_ ,
_Sarracenia flava_ var. _atropurpurea_ ,
_Sarracenia flava_ var. _cuprea_ ,
_Sarracenia flava_ var. _flava_ , ,
_Sarracenia flava_ var. _flava_ 'Maxima',
_Sarracenia flava_ var. _maxima_ ,
_Sarracenia flava_ var. _ornata_ , , ,
_Sarracenia flava_ var. _rubricorpora_ , –
_Sarracenia flava_ var. _rugelii_ ,
_Sarracenia_ hybrids, , –
_Sarracenia_ 'Joyce Cooper', ,
_Sarracenia leucophylla_ (white trumpet)
as cut flower, ,
cold dormancy, ,
cultivation,
growing season, –
in the wild,
overview, –
with grey mould,
_Sarracenia leucophylla_ var. _alba_ ,
_Sarracenia minor_ (hooded pitcher), , , , , –
_Sarracenia minor_ var. _minor_ , ,
_Sarracenia minor_ var. _minor_ f. _viridescens_ ,
_Sarracenia minor_ var. _okefenokeensis_ ,
_Sarracenia_ × _mitchelliana_ , ,
_Sarracenia_ × _moorei_ ,
_Sarracenia ×moorei_ 'Adrian Slack',
_Sarracenia ×moorei_ 'Brook's Hybrid',
_Sarracenia oreophila_ (green trumpet), –, , , –, ,
_Sarracenia oreophila_ var. _oreophila_ ,
_Sarracenia oreophila_ var. _ornata_ ,
_Sarracenia ×popei_ ,
_Sarracenia psittacina_ (parrot pitcher), , , , , –
_Sarracenia psittacina_ var. _okefenokeensis_ ,
_Sarracenia psittacina_ var. _psittacina_ ,
_Sarracenia purpurea_ (purple pitcher), , , , , , –
_Sarracenia purpurea_ subsp. _purpurea_ , , ,
_Sarracenia purpurea_ subsp. _purpurea_ f. _heterophylla_ , ,
_Sarracenia purpurea_ subsp. _venosa_ , ,
_Sarracenia purpurea_ subsp. _venosa_ f. _pallidiflora_ , ,
_Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ , ,
_Sarracenia purpurea_ subsp. _venosa_ var. _burkii_ f. _luteola_ , ,
_Sarracenia rubra_ (sweet trumpet), , –
_Sarracenia rubra_ subsp. _alabamensis_ , , ,
_Sarracenia rubra_ subsp. _gulfensis_ , ,
_Sarracenia rubra_ subsp. _gulfensis_ f. _luteoviridis_ ,
_Sarracenia rubra_ subsp. _jonesii_ , ,
_Sarracenia rubra_ subsp. _jonesii_ f. _viridescens_ ,
_Sarracenia rubra_ subsp. _rubra_ , ,
_Sarracenia rubra_ subsp. _wherryi_ , ,
scale insects,
scientific names for plants, –
sedge peat,
seed kits,
seeds, plant cultivation from, –,
Serbia, bog garden in,
Slack, Adrian, –, , ,
slugs,
snails,
sooty mildew, –
South African sundews,
sphagnum bog, Southern England,
sphagnum moss, in potting mixes, –
sprays and spraying, –
squirrels,
stratification, defined,
sundew(s)
feeding,
in bog gardens,
in the classroom,
in a conservatory,
in dormancy,
in Lyte's _New Herball_ ,
in terrariums, , –,
light and, –
on a windowsill,
self-pollination,
sprays and spraying,
sun pitcher(s), , , –,
sweet trumpet,
T
tap water, –
temperate species, golden rules for, –
terrariums, –,
thrips,
tools, for care and maintenance of plants, –
traps, filming in action,
trap types,
tropical pitcher plant(s), , , ,
tropical species, –, , –, , ,
tufa, in potting mixes,
U
United States, bog garden in, –
_Utricularia_ (bladderwort)
cultivation, –
native habitat, –
overview, –
plant suggestions, –
propagation, –
traps and prey, ,
_Utricularia alpina_ ,
_Utricularia bisquamata_ ,
_Utricularia bisquamata_ 'Betty's Bay', ,
_Utricularia campbelliana_ ,
_Utricularia livida_ , , ,
_Utricularia parthenopipes_ ,
_Utricularia prehensilis_ , –
_Utricularia reniformis_ , ,
_Utricularia reticulata_ ,
_Utricularia sandersonii_ , ,
_Utricularia_ sp. _"Kerala"_ ,
_Utricularia uniflora_ ,
_Utricularia vulgaris_ , ,
V
vegetative propagation,
Veitch nursery, Chelsea and Exeter,
Venus flytraps
author and, ,
children and, , –
die-back during dormancy,
Dobbs and,
Ellis illustration,
as insect control,
on a windowsill, ,
prey of, ,
tap water and,
vermiculite, in potting mixes, –
Victorian era, plant collecting in, –
vine weevils,
W
Wagstaffe, Matthew,
Wales, bog garden in,
Ward, Nathaniel,
Wardian cases,
white trumpet,
windowsill cultivation, –
winter requirements, and ideal positions for plants,
World War I,
Y
yellow trumpet,
Plantsman **Nigel Hewitt-Cooper** has been fascinated by and immersed in the beautiful and strange world of carnivorous plants since he received a Venus flytrap from his uncle in 1981. By the late 1990s, his collection had grown to several hundred species and forms, and Nigel opened his own nursery. His plants have been awarded many accolades, including a number of Chelsea Flower Show gold medals. Nigel is a regular contributor to botanical journals and newsletters, has appeared on radio and television, and lectures on carnivorous plants.
from bcc import BPF
import time
from unittest import main, TestCase


class TestCallchain(TestCase):
    def test_callchain1(self):
        # histogram of observed call chains: {callchain: count}
        hist = {}

        def cb(pid, callchain):
            # invoked once per sampled event; tally its call chain
            hist[callchain] = hist.get(callchain, 0) + 1

        b = BPF(text="""
#include <linux/ptrace.h>

int kprobe__finish_task_switch(struct pt_regs *ctx) {
    return 1;
}
""", cb=cb)

        # poll for events for one second
        start = time.time()
        while time.time() < start + 1:
            b.kprobe_poll()

        # resolve raw kernel addresses to symbol names and print the counts
        for k, v in hist.items():
            syms = [b.ksym(addr) for addr in k]
            print("%-08d:" % v, syms)


if __name__ == "__main__":
    main()
As you browse southdownhomes.com, advertising cookies will be placed on your computer so that we can understand what you are interested in. Our display advertising partners then enable us to present you with remarketing advertising on other sites based on your previous interaction with southdownhomes.com. The techniques our partners employ do not collect personal information such as your name, email address, postal address, or telephone number. You can visit this page to opt out of interest-based advertising and remarketing.
# faces

A quick way to generate an image with two opposing faces set to True, which is used throughout PoreSpy to indicate inlets and outlets.

[1]:
```python
import porespy as ps
import matplotlib.pyplot as plt
import numpy as np
import inspect
```

The arguments and default values of the function can be found as follows:

[2]:
```python
inspect.signature(ps.generators.faces)
```
[2]:
```
<Signature (shape, inlet=None, outlet=None)>
```

## shape

This would be the same shape as the actual image under study. Let's say we have an image of blobs:

[3]:
```python
im = ps.generators.blobs(shape=[10, 10, 10])
faces = ps.generators.faces(shape=im.shape, inlet=0, outlet=0)

# a 3D axis is needed for the voxel plot
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.voxels(faces, edgecolor='k', linewidth=0.25);
```

## inlet and outlet

These indicate the axis along which the True values should be placed, with inlets placed at the start of the axis and outlets placed at the end:

[4]:
```python
faces = ps.generators.faces(shape=im.shape, inlet=2, outlet=0)
```
Laparoscopic vaginoplasty procedure using a modified peritoneal pull-down technique with uterine strand incision in patients with Mayer–Rokitansky–Küster–Hauser syndrome: Kisu modification
Iori Kisu, Miho Iida, Kanako Nakamura, Kouji Banno, Tetsuro Shiraishi, Asahi Tokuoka, Keigo Yamaguchi, Kunio Tanaka, Moito Iijima, Hiroshi Senba, Kiyoko Matsuda, Nobumaru Hirao
Department of Preventive Medicine and Public Health
Department of Obstetrics and Gynecology (Gynecology)
Various vaginoplasty procedures have been developed for patients with Mayer–Rokitansky– Küster–Hauser (MRKH) syndrome. Here, we describe a novel laparoscopic vaginoplasty procedure, known as the Kisu modification, using a pull-down technique of the peritoneal flaps with additional structural support to the neovaginal apex using the incised uterine strand in patients with MRKH syndrome. Ten patients with MRKH syndrome (mean age at surgery: 23.9 ± 6.5 years, mean postoperative follow-up period: 17.3 ± 3.7 months) underwent construction of a neovagina via laparoscopic vaginoplasty. All surgeries were performed successfully without complications. The mean neovaginal length at discharge was 10.3 ± 0.5 cm. Anatomical success was achieved in all patients, as two fingers were easily introduced, the neovagina was epithelialized, and the mean neovaginal length was 10.1 ± 1.0 cm 1 year postoperatively. No obliteration, granulation tissue formation at the neovaginal apex, or neovaginal prolapse was recorded. Five of the 10 patients attempted sexual intercourse and all five patients were satisfied with the sexual activity, indicating functional success. Although the number of cases in this case series is few, our favorable experience suggests that the Kisu modification of laparoscopic vaginoplasty procedure is an effective, feasible, and safe approach for neovaginal creation in patients with MRKH syndrome.
Journal of Clinical Medicine, Vol. 10, No. 23, 5510
https://doi.org/10.3390/jcm10235510
Published - 2021 Dec 1
Davydov procedure
Mayer–Rokitansky–Küster–Hauser syndrome
Neovagina
Uterine factor infertility
Uterus transplantation
Vaginal agenesis
Vaginoplasty
\section{Introduction}
Sagittarius A* (Sgr A*) is the $(4.152\pm0.014)\times10^{6}~\rm{M}_\odot$ supermassive black hole located at the Galactic Center at a distance of $8178$ pc from the Earth \citep{Gravity2019}. Sgr A* is well-known to vary across the electromagnetic spectrum \citep[e.g.,][]{FYZ2011, Neilsen2013, Subroweit2017, Do2019, Witzel2021, Wielgus2022a}. In the radio and submillimeter (submm), the emission is dominated by two components, both of which originate from the accretion flow: (quasi-)quiescent and variable radiation. The accretion flow is turbulent, which causes variations in its flux on timescales $\gtrsim$ minute. The accretion flow produces an overall net level of quiescent emission on top of low- and high-level amplitude changes owing to its variable nature. The large-amplitude variations are known as ``flares'' and dominate the variable emission. Previous work on Sgr A*'s variability in the radio/submm regimes has focused on total intensity observations. However, these earlier results largely ignore the polarization of Sgr A*, which helps uncover the accretion flow's magnetic properties.
\cite{Bower1999a, Bower1999c, Bower2001} first searched for linear polarization (LP) between 5-112 GHz and found Sgr A* to be linearly unpolarized at these frequencies. However, circular polarization (CP) was detected at 5 and 8 GHz \citep{Bower1999b, Sault1999}. Further observations extended polarimetric wavelength coverage from 1.5 GHz to 230 GHz \citep{Bower2002,Tsuboi2003,Munoz2012}. \cite{Aitken2000} first hinted at intrinsic LP from Sgr A* at 400 GHz; however, their measurements may have been contaminated by the abundance of polarized dust immediately surrounding Sgr A* in the circumnuclear disk \citep[e.g.,][]{Hsieh2018}.
The first interferometric observations detected LP at 230 and 340 GHz \citep{Bower2003, Marrone2006}. Later observations broadened this range from 86 to 700 GHz \citep{Macquart2006, Liu2016a, Liu2016b}. Presently, Sgr A* is known to be circularly polarized from 1.5-230 GHz and linearly polarized between 86-700 GHz. \cite{Liu2016a,Liu2016b} find Sgr A*'s LP percent to increase from $\sim0.5\%$ at 93 GHz to $\sim8.5\%$ at 500 GHz, which may decrease at higher frequencies. \cite{Munoz2012} compiled CP measurements of Sgr A* from 1.5 GHz to 345 GHz finding levels of $\sim-0.2\%$ to $\sim-1\%$, respectively. While the absolute CP amplitude is known to vary \citep[see][]{Munoz2012}, the \textit{sign} is consistently negative in the radio and submm in all currently published data, which they suggest is caused by a highly coherent magnetic field configuration throughout the accretion flow.
In addition to studying Sgr A*'s long-term polarimetric trends \citep[i.e.,][]{Bower2002, Bower2005, Macquart2006, Munoz2012, Bower2018}, the detection of hourly-timescale variation of LP by \cite{Marrone2006} opened up an additional avenue by which to study the accretion flow via the variable emission. Several models describing the total intensity flaring emission have been proposed, such as jets/outflows \citep{Falcke2000, Brinkerink2015} and adiabatically-expanding synchrotron hotspots embedded in the accretion flow \citep{VDL1966, FYZ2006}.
\cite{FYZ2007} first modeled the full-Stokes light curves of these hotspots using an analytic formalism for the transfer of polarized synchrotron radiation through a homogeneous medium \citep{Jones1977a}. Supplementing this simple picture with full-Stokes radiative transfer presents a new opportunity to study the magnetic field configuration in a localized region of the accretion flow. Previously, only the equipartition magnetic field strength could be estimated from this model. However, the observed polarization light curves are regulated by the orientation of the magnetic field relative to the observer, a crucial physical parameter that could not previously be determined. \cite{FYZ2007} tested this full-polarization hotspot model at 22 and 43 GHz; however, their analysis was limited as the LP level was low ($\sim0.2-0.8\%$) and the data were noisy. Sgr A* is brighter and more strongly linearly and circularly polarized at submm frequencies, which decreases the overall uncertainty in the polarimetric properties.
In this paper, we present the first full-Stokes modeling of Sgr A*'s submm flaring emission using the adiabatically-expanding hotspot model. This paper is organized as follows. In Section \ref{sec:data}, we discuss the observations and processing of the data and analyze possible systematic issues in the CP products. In Section \ref{sec:analysis}, we describe the models adopted for the quiescent and variable components used to fit the full-Stokes light curves and present the best-fit values. For the first time, we determine the orientation of the hotspot's magnetic field on the plane-of-sky and along the line-of-sight. We find the projected magnetic field to be oriented along the Galactic Plane and approximately perpendicular to the line-of-sight. This has interesting implications for the accretion flow's magnetic field configuration, which we discuss in Section \ref{sec:disc}. Furthermore, in Section \ref{sec:disc}, we discuss the other results in the context of previous analyses in the literature and consider some limitations with our chosen data set. Finally, in Section \ref{sec:summary}, we present a summary of our findings and discuss future work.
\section{Data}\label{sec:data}
\subsection{Observations and Processing}\label{ssec:observations}
The Atacama Large Millimeter/submillimeter Array (ALMA) observed Sgr A* on 16 July 2017 in band 6 ($\approx230$ GHz) in full polarization (project ID 2016.A.00037.T). These data, part of a multi-wavelength campaign of Sgr A* concurrent with the Chandra X-ray Observatory and the Spitzer Space Telescope, were taken with the 12-meter array in the C40-5 configuration (the baselines ranged from 17 to 1100 meters). For our analysis, we focus only on the submm data.
The observation consists of two line and two continuum spectral windows. The two continuum windows are centered on 233 and 235 GHz, each having a bandwidth of 2 GHz with 64 31.25-MHz bandwidth channels. The first spectral line window is centered on SiO (5-4) at $\approx$217 GHz with a 1.875 GHz bandwidth of 1920 0.976-MHz channels. The second spectral line window is centered on $^{13}$CO (2-1) at 220.398 GHz with 1920 0.244-MHz channels for a total bandwidth of $\approx0.47$ GHz. In this analysis, we average over all of the channels per spectral window to obtain four frequency-averaged continuum windows.
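The per-window channel averaging can be sketched in a few lines of numpy; the array layout and the function name here are illustrative assumptions, not our actual pipeline code:

```python
import numpy as np


def frequency_average(vis, flags=None):
    """Collapse a (n_integration, n_channel) array for one spectral
    window into a single frequency-averaged continuum time series.

    `flags` (optional, same shape) marks bad channels (True = flagged);
    flagged samples are excluded from the average.
    """
    vis = np.asarray(vis)
    if flags is None:
        return vis.mean(axis=1)
    # masked mean ignores flagged channels; empty rows become NaN
    data = np.ma.masked_array(vis, mask=flags)
    return data.mean(axis=1).filled(np.nan)
```

Applied per spectral window, this reduces the 64 continuum channels (or the 1920 line channels) to one frequency-averaged sample per integration.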
Only one of five execution blocks was observed owing to technical issues which occurred during the observation. We used the ALMA pipeline (version 2020.1.0.40) with CASA 6.1.1.15 \citep{McMullin2007} to generate the calibrated data. The following calibrators are used to generate the calibration tables: J1733-1304 (flux), J1517-2422 (bandpass), J1549+0237 (instrumental polarization), and J1744-3116 (phase). The QA2 team designated these data ``semi-pass'' since the parallactic coverage ($\approx46^\circ$) was lower than recommended to determine the instrumental polarization terms ($60^\circ$). Despite this, we were able to calibrate the instrumental polarization. We imaged and phase self-calibrated the data starting with a solution interval of $30$ seconds and stopping at a single integration time ($6.05$ seconds). After phase self-calibration, we flagged any obvious misbehaving baselines or antennas.
\begin{figure*}
\centering
\includegraphics[trim = 1.8cm 0cm 0cm 0.0cm, clip, width=0.95\textwidth]{July16_30s_BinEx.pdf}
\caption{A sample of example images of Sgr A* on 16 July 2017 at 00:28:07 UTC using a 30-second binning time for each Stokes parameter at 233.5 GHz. The full image is $\approx51\arcsec$ per side. We flag pixels below a normalized primary beam limit of 20\%, resulting in an image that is approximately $37\arcsec$ per side. The panels use the same gray scale to show the noise level. The inset is a $2\farcs5\times2\farcs5$ subregion centered on Sgr A*.}
\label{fig:example_binned}
\end{figure*}
We developed a CASA script to autonomously determine the full polarization light curves for a point source located at the phase center using \texttt{TCLEAN} and \texttt{IMFIT}. Briefly, the code bins every scan on a single source to a user-defined value for imaging. \texttt{TCLEAN} images the visibilities in all four Stokes parameters for each time bin. \texttt{IMFIT} is used to fit a point source + zero-level offset at the phase center, where Sgr A* is located, in each image and polarization to determine the point source flux density and (statistical) error. We construct the point source light curves using the fitted \texttt{IMFIT} parameters and export them to a text file for further analysis, where we calculate the polarization product light curves (see Appendix \ref{appendix:pol_conv} for our chosen conventions). Since Sgr A* is surrounded by diffuse emission which is not fully resolved out in the observed configuration, this method yields contamination-free light curves without restricting the projected baseline length, which would lower the overall sensitivity.
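The light-curve assembly amounts to grouping the per-integration fits into fixed-width time bins. A minimal numpy sketch of that step, with hypothetical `t`/`flux` arrays standing in for the \texttt{IMFIT} output of one Stokes parameter:

```python
import numpy as np


def bin_light_curve(t, flux, bin_width=30.0):
    """Average a flux-density time series into fixed-width time bins.

    t         : sample times in seconds
    flux      : flux densities for one Stokes parameter
    bin_width : bin width in seconds (30 s in our analysis)

    Returns bin centres, per-bin mean flux, and the standard error of
    the mean as a simple statistical uncertainty.
    """
    t, flux = np.asarray(t), np.asarray(flux)
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    idx = np.digitize(t, edges) - 1
    centres, means, errs = [], [], []
    for i in range(len(edges) - 1):
        sel = idx == i
        if not sel.any():
            continue  # skip empty bins (e.g. calibrator scans)
        centres.append(0.5 * (edges[i] + edges[i + 1]))
        means.append(flux[sel].mean())
        n = sel.sum()
        errs.append(flux[sel].std(ddof=1) / np.sqrt(n) if n > 1 else 0.0)
    return np.array(centres), np.array(means), np.array(errs)
```

In the actual pipeline each bin is imaged and fit rather than averaged in the image plane, but the bookkeeping is the same.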
We use 30-second binning in our analysis. Each image is $1024\times1024$ pixels ($\approx51\arcsec\times51\arcsec$) with a cell size of $0\farcs05$. We apply the standard 20\% primary beam cut to remove imaging artifacts toward the edge of the image, resulting in a final image size of $\approx37\arcsec\times37\arcsec$. We do not primary-beam correct the image since Sgr A* is at the phase center. We restrict the maximum number of iterations to $1000$ to properly clean any extended emission while not cleaning noise artifacts. We show sample images of Sgr A* during a single 30-second bin in all four Stokes parameters in Figure \ref{fig:example_binned}. In Figure \ref{fig:july_lcs}, we show the final Stokes I, Q, U, and V, LP percent ($p_{l}$), CP percent ($p_{c}$), and LP angle ($\chi$) light curves used in our analysis. Overall, we find Sgr A* to be linearly and circularly polarized at levels of $\sim10\%$ and $\sim-1\%$, respectively. The definitions of these parameters and their uncertainties are detailed in Appendix \ref{appendix:pol_conv}. We discuss the absence of CP products for J1744-3116 in Section \ref{ssec:circular_pol}.
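With the exported Stokes light curves in hand, the polarization products follow from the standard definitions (our adopted conventions are detailed in Appendix \ref{appendix:pol_conv}). A minimal sketch, using illustrative (not measured) flux densities and a hypothetical helper name:

```python
import numpy as np

def polarization_products(I, Q, U, V):
    """Compute LP percent, CP percent, and LP angle (degrees, East of
    North) from full-Stokes light curves, using the standard definitions:
    p_l = sqrt(Q^2 + U^2)/I, p_c = V/I, chi = 0.5*arctan2(U, Q)."""
    I, Q, U, V = map(np.asarray, (I, Q, U, V))
    p_l = 100.0 * np.hypot(Q, U) / I          # linear polarization percent
    p_c = 100.0 * V / I                        # circular polarization percent
    chi = 0.5 * np.degrees(np.arctan2(U, Q))   # EVPA, wrapped by pi-ambiguity
    return p_l, p_c, chi

# Illustrative values near those reported for Sgr A* (not the measured data):
I = np.array([2.5])      # Jy
Q = np.array([-0.083])   # Jy
U = np.array([-0.292])   # Jy
V = np.array([-0.025])   # Jy
p_l, p_c, chi = polarization_products(I, Q, U, V)
```

With these inputs the routine returns LP of roughly $12\%$ and CP of $-1\%$, consistent with the levels quoted above.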
\subsection{Verifying Sgr A*'s Circular Polarization Detection}\label{ssec:circular_pol}
Great care has been taken in previous polarimetric analyses to rule out calibration-error-based CP detections. \cite{Goddi2021} present a detailed description of the issue (see their Appendix G). In short, the polarization calibrator is assumed to have Stokes $V = 0$, which can induce a false (and time-dependent) CP onto the target sources. To check for systematics, we focus on the CP characteristics of J1517-2422 and J1744-3116 following the prescription given in \cite{Munoz2012}. J1517-2422 has a declination similar to that of Sgr A* (located at $17^{\rm{h}}45^{\rm{m}}40.04^{\rm{s}},-29^\circ00\arcmin28.17\arcsec$), while J1744-3116 has a comparable right ascension. To check for intrinsic CP in the calibrators, we image each spectral window in Stokes I and V over the entire observing window using the same non-interactive process described in Section \ref{ssec:observations}; imaging the entire observation yields the sensitivity needed to detect faint CP. We fit a point source at the phase center using \texttt{IMFIT} and report the integrated flux density and statistical error. These results are shown in Table \ref{tab:calibrator_pc}. While \texttt{IMFIT} returns converged flux densities and errors for J1744-3116 in Stokes V, the images do not show circularly polarized emission at or near the phase center. To quantify the $3\sigma$ upper limit on CP, we calculate $3\times$ the Stokes V root-mean-square (RMS) provided by \texttt{IMSTAT}.
For J1517-2422, we detect a statistically significant $p_c \approx -0.1\%$. Since this source is bright ($>3$ Jy), residual or uncalibrated instrumental polarization terms in V could lead to spurious CP measurements. Despite the unfortunately sparse coverage of this source, we compare our results to those in the literature. \cite{Bower2018} report $p_c\approx0.1\%$ for this source in August 2016 at $\approx240$ GHz, the same magnitude (but opposite sign) as our result. Following \cite{Goddi2021}, we use the AMAPOLA\footnote{\url{http://www.alma.cl/~skameno/AMAPOLA/}} project, which tracks the flux density and polarization properties of several ALMA calibrators, for observations closer in time to July 2017. At 233 GHz, the CP of J1517-2422 ranged between roughly $-0.4\%$ and $0.3\%$ during January--April 2017 and between $-1.0\%$ and $-0.4\%$ during October--December 2017. Given that our $-0.1\%$ detection is well within the historical range and that Sgr A* is at least $10\times$ more circularly polarized than either J1517-2422 or J1744-3116, we robustly detect intrinsic CP from Sgr A*.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline\hline
$\nu$ & Stokes I & Stokes V & $p_c$ \\
$\rm{[GHz]}$ & [mJy] & [mJy] & [\%] \\
\hline
\multicolumn{4}{c}{J1517-2422} \\
\hline
$217.1$ & $3099 \pm 0.54$ & $-3.33 \pm 0.17$ & $-0.11\pm0.005$ \\
$220.0$ & $3088 \pm 0.59$ & $-2.37 \pm 0.17$ & $-0.08\pm0.006$\\
$233.5$ & $3059 \pm 0.61$ & $-2.75 \pm 0.17$ & $-0.09\pm0.006$\\
$235.0$ & $3049 \pm 0.56$ & $-2.16 \pm 0.14$ & $-0.07\pm0.005$\\
\hline
\multicolumn{4}{c}{J1744-3116}\\
\multicolumn{4}{c}{$3\sigma$ Upper Limits}\\
\hline
$217.1$ & $ 217.4 \pm 0.17$ & $<|0.27|$ & $<|0.12|$ \\
$220.0$ & $ 214.3 \pm 0.29$ & $<|0.51|$ & $<|0.24|$ \\
$233.5$ & $210.2\pm0.15$ & $<|0.27|$ & $<|0.13|$ \\
$235.0$ & $207.8\pm 0.17$ & $<|0.27|$ & $<|0.13|$ \\
\hline
\end{tabular}
\caption{The measured Stokes I and V properties for the two non-instrumental polarization calibrators on 16 July 2017. The errors quoted are only statistical. We do not detect CP in J1744-3116 but do detect it in J1517-2422 at a level $\approx-0.1\%$.}
\label{tab:calibrator_pc}
\end{table}
The final aspect to consider is a time-dependent Stokes V leakage. We cannot directly check for this as J1744-3116 is not circularly polarized, and no other gain calibrators were observed. \citet[][Appendix G]{Goddi2021} study the measured Stokes V as a function of feed angle (parallactic angle + receiver rotation relative to the antenna mount) in search of uncalibrated Stokes V terms. In their April 2017 data (near 230 GHz), they found a modulating $\approx0.1\%$ leakage in Stokes V for the calibrators nearest to Sgr A* (J1733-1304 and J1924-2914). This modulation occurs over a range of $\gtrsim 100^\circ$ in feed angle. In our observation, the feed angle changes by only $\approx8^\circ$, which by our estimates induces at most $\approx2$ mJy of (absolute) variation in Stokes V. As Sgr A*'s Stokes V light curves vary by $\approx15$ mJy, we conclude these time-dependent variations are intrinsic to Sgr A* and are not caused by uncalibrated polarization terms.
\section{Modeling the Light Curves}\label{sec:analysis}
\begin{figure*}
\centering
\includegraphics[trim = 0.8cm 1.3cm 2.25cm 0.5cm, clip, width=0.49\textwidth]{July16_LightCurves_SgrA.pdf}
\includegraphics[trim = 0.8cm 1.3cm 2.25cm 0.5cm, clip, width=0.49\textwidth]{July16_LightCurves_J1744.pdf}
\caption{Full-Stokes and polarization light curves of Sgr A* (red, left) and the phase calibrator J1744-3116 (blue, right) on 16 July 2017. As J1744-3116 is not circularly polarized (see Section \ref{ssec:circular_pol}), we do not show its Stokes V and $p_c$. Error bars for both sources are shown and are often smaller than the marker size.}
\label{fig:july_lcs}
\end{figure*}
We adopt a two-component model consistent with previous work to account for the variable and (quasi-)quiescent components of Sgr A*'s light curves. In contrast to previous work, however, we incorporate a full-Stokes picture. The flaring component is modeled as a homogeneous, spherical synchrotron hotspot adiabatically expanding at a constant speed on a roughly one-hour timescale. This model is characterized by several physical parameters, such as the initial radius, expansion speed, magnetic field strength and orientation, and power-law population of relativistic electrons. Our model does not intrinsically include orbital motion (i.e., a varying magnetic field orientation), gravitational effects (e.g., lensing), non-symmetric geometric evolution (e.g., shearing), or a sense of the hotspot's location in the accretion flow. We account for secular variations in the accretion flow by modeling the slowly-varying, frequency-dependent quiescent component. At each frequency, the four Stokes parameters are assumed to rise or fall linearly during the observation and are characterized by phenomenological parameters, such as gradients with respect to time and reference flux densities. Additionally, the Stokes parameters are frequency-dependent, accounting for physical properties of the accretion flow (such as the rotation measure, RM), which we model with spectral indices and gradients with respect to frequency. The two components are described in detail below.
\subsection{Polarization Model for Flaring Emission}
The temporal- and frequency-dependent Stokes I flaring emission is well modeled by an adiabatically-expanding synchrotron plasma \citep[henceforth referred to as a ``hotspot;''][]{VDL1966, FYZ2006}. The hotspot is homogeneous and characterized by five parameters: $I_p$, $p$, $v_{\rm{exp}}$, $R_0$, and $t_0$. $I_p$ is the peak flare flux density at frequency $\nu_0$ at time $t_0$, when the hotspot has radius $R_0$; $p$ is the electron energy power-law index $\left(N(E)\propto E^{-p}\right)$, valid between energies $E_{\rm{min}}$ and $E_{\rm{max}}$; and $v_{\rm{exp}}$ is the (normalized) radial expansion velocity. The flux density at any frequency and size is calculated via
\begin{align}
I_{f}\left(R\right) = I_p\left(\dfrac{\nu}{\nu_0}\right)^{5/2}\left(\dfrac{R}{R_0}\right)^3\dfrac{f(\tau_{\nu})}{f(\tau_0)}.
\end{align}
$f(\cdot)$ is a non-trivial function encompassing the full-Stokes radiative transfer equations (briefly described below). $\tau_\nu$, the frequency- and size-dependent optical depth, is given by
\begin{align}
\tau_\nu(R) = \tau_0\left(\dfrac{\nu}{\nu_0}\right)^{-(p+4)/2}\left(\dfrac{R}{R_0}\right)^{-(2p+3)}.
\end{align}
$\tau_0$ is the critical optical depth where the hotspot becomes optically thin and is determined by
\begin{align}
e^{\tau_0} - \left(\dfrac{2p}{3}+1\right)\tau_0-1=0.
\end{align}
Finally, we assume a uniform expansion to relate the time and radius:
\begin{align}
R(t) = R_0\left(1 + v_{\rm{exp}}\left(t - t_0\right)\right).
\end{align}
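For concreteness, these expressions can be evaluated numerically. The sketch below is illustrative only: it substitutes the classic van der Laan form $f(\tau)=1-e^{-\tau}$ for the full-Stokes function $f$ used in this work, normalizes $R_0$ to unity, and adopts the fitted Stokes I values from this analysis ($I_p=0.19$ Jy, $p=3.10$, $v_{\rm{exp}}=1.48$ hr$^{-1}$, $t_0=0.455$ hr UTC) as defaults:

```python
import numpy as np
from scipy.optimize import brentq

def tau_0(p):
    """Positive root of exp(tau0) - (2p/3 + 1)*tau0 - 1 = 0."""
    g = lambda tau: np.exp(tau) - (2.0 * p / 3.0 + 1.0) * tau - 1.0
    return brentq(g, 1e-3, 20.0)  # root is bracketed for p of order unity

def stokes_I_flare(t, nu, I_p=0.19, p=3.10, v_exp=1.48, t_0=0.455,
                   nu_0=235.1e9, R_0=1.0):
    """Stokes I light curve of an adiabatically expanding hotspot.

    NOTE: f(tau) = 1 - exp(-tau) is the classic van der Laan
    approximation, not the full-Stokes function used in the text.
    """
    tau0 = tau_0(p)
    R = R_0 * (1.0 + v_exp * (t - t_0))                  # uniform expansion
    tau = tau0 * (nu / nu_0) ** (-(p + 4.0) / 2.0) \
               * (R / R_0) ** (-(2.0 * p + 3.0))
    f = lambda x: 1.0 - np.exp(-x)
    return I_p * (nu / nu_0) ** 2.5 * (R / R_0) ** 3 * f(tau) / f(tau0)
```

A quick sanity check: at $t=t_0$ and $\nu=\nu_0$ the expression reduces to $I_p$ by construction, and the light curve decays after the peak.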
Assuming magnetic equipartition, we can determine the hotspot's physical radius and expansion velocity, mass, magnetic field strength, and electron number density.
Given the remarkably high-sensitivity light curves found in Section \ref{sec:data}, we model them using a full-Stokes adiabatically-expanding synchrotron hotspot. To do so, we use the prescription given in \cite{Jones1977a}, which describes the transfer of full-Stokes synchrotron radiation for a static source through a homogeneous medium, i.e., where $v_{\rm{exp}}\ll c$. The Stokes I model described above evolves temporally depending upon the size of the emitting region. Since this temporal evolution is secular, we can model the expanding source as a sequence of hotspots with varying parameters (such as radius, magnetic field strength, and electron population) to account for its time-dependent properties as it grows. In this way, we convert a stationary solution into a dynamic one to produce full-Stokes light curves of such a source.
Supplementing this model with polarization adds only two parameters: $\theta$ and $\phi$, which are related to the orientation of the magnetic field. A schematic of these angles is shown in Figure \ref{fig:angles}. $\phi$ is the intrinsic electric vector position angle (EVPA; see Equation \ref{eq:polangle}) of the hotspot, measured East of North, projected in the plane of the sky (POS). It is closely related to the projected magnetic field orientation in the POS, $\phi_B = \phi + \pi/2$. $\phi$ and $\phi_B$ are pseudovectors and obey the $\pi$-ambiguity. $\theta$ is the angle between the projected magnetic field vector and the line of sight (LOS). In this convention, $\theta=0^\circ$ and $\theta=90^\circ$ occur when the projected magnetic field is along and perpendicular to the LOS, respectively.
Under the assumption of homogeneity, the radiative transfer equations as given by \cite{Jones1977a} read
\begin{align}
\begin{bmatrix}
J_\nu\\
\epsilon_QJ_\nu\\
0\\
\epsilon_VJ_\nu
\end{bmatrix}=
\begin{bmatrix}
\left(\frac{d}{d\tau_\nu} + 1\right) & \zeta_Q & 0 & \zeta_V\\
\zeta_Q & \left(\frac{d}{d\tau_\nu} + 1\right) & \zeta_V^* & 0\\
0 & -\zeta_V^* & \left(\frac{d}{d\tau_\nu} + 1\right) & \zeta_Q^*\\
\zeta_V & 0 & -\zeta_Q^* & \left(\frac{d}{d\tau_\nu} + 1\right)
\end{bmatrix}
\begin{bmatrix}
I_f/\Omega\\
Q_f/\Omega\\
U_f/\Omega\\
V_f/\Omega
\end{bmatrix}.
\label{eq:poltransfer}
\end{align}
In this transfer matrix, $\tau_\nu$ is the optical depth, $\zeta_{\{Q,V\}}$ are the Stokes Q and V absorption coefficients, $\zeta^{*}_{V}$ is the rotativity coefficient (responsible for Faraday rotation), and $\zeta^{*}_Q$ is the convertibility \citep[or repolarization;][]{Pacholczyk1973} coefficient between LP and CP. $\epsilon_{\{Q, V\}}$ are the Stokes Q and V emissivity coefficients, and $J_\nu$ is the source function. $\Omega$ is the solid angle subtended by the source, given by $\Omega\equiv\pi R^2/d^2$, where $R$ and $d$ are the radius of the hotspot and its distance from the Earth, respectively. \cite{Jones1977a} provide analytic solutions for the emergent Stokes flux densities ($I_f$, $Q_f$, $U_f$, and $V_f$) integrated over the homogeneous source (see their equations C4-C17).
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Angles_Figure.pdf}
\caption{We show several perspectives of the various angles used in this analysis. (a) The general schematic setup, where ``N,'' ``E,'' and ``$\hat{k}$'' denote north, east (in equatorial coordinates), and the unit vector toward the observer, respectively. The hotspot (gray sphere) possesses a three-dimensional magnetic field vector ($\vec{B}$, orange). The pink dashed line denotes the projected magnetic field orientation ($\phi_B$) in the North-East plane perpendicular to the LOS, measured East of North. The dot-dashed cyan arrow shows the angle between the projected magnetic field vector and the LOS ($\theta$). (b) The schematic along the observer's LOS. The dashed pink line again shows the projected magnetic field orientation. In this analysis, we focus on the electric vector position angle (EVPA, $\phi$), shown as a solid green line. The EVPA is also measured East of North. $\phi_B$ and $\phi$ are related by $\phi_B = \phi+\pi/2$, wrapped through the $\pi$-ambiguity. For clarity, we do not show the dot-dashed cyan vector. (c) A ``side'' view along the eastern direction to show the projected magnetic field vector along the LOS. Again, for clarity, we do not show the dashed pink or solid green lines.}
\label{fig:angles}
\end{figure*}
We note the number of assumptions made in this picture, specifically that the hotspot undergoes only secular evolution. This is a limitation in our modeling and differs from other approaches \citep[such as][]{Tiede2020, Gelles2021} that include the hotspot's evolution as it orbits Sgr A*. Our formal fit to these data is intended to describe the general nature of the expanding hotspot. Had these secondary processes dominated over adiabatic expansion, we would not expect this modeling to be successful, owing to the frequency- and polarization-dependent coupling of the polarized radiative transfer equations (Equation \ref{eq:poltransfer}; see Section \ref{ssec:num_fit_params}). These secondary effects can be included and are planned for future work.
\subsection{Quiescent Frequency and Temporal Variations}
The quiescent component is known to have frequency- and time-dependent baselines that must be accounted for while modeling the flaring emission. The time dependence likely arises from continual, longer-term variability within the accretion flow; for example, \cite{Dexter2014} find an $\sim8$-hour characteristic timescale in Sgr A*'s submm light curves. While their analysis focuses only on Stokes I, we include time-dependent terms for the other three Stokes parameters for consistency. If there is no time dependence in the Stokes Q, U, and V light curves, we expect their time-dependent fitting parameters to be consistent with $0$. Frequency-dependent variations emerge from processes such as optical depth effects (Stokes I and V) or the RM (Stokes Q and U).
To account for the frequency- and time-dependent nature of the quiescent component, we use the following model:
\begin{align}
I_q(\nu, t) &= \left(I_0 + I_1 \left(t - t_0\right)\right)\left(\dfrac{\nu}{\nu_0}\right)^{\alpha_I}\,,\label{eq:I_quiescent}\\
Q_q(\nu, t) &= Q_0 + Q_1 \left(t - t_0\right) + Q_2 \left(\nu - \nu_0\right)\,,\label{eq:Q_quiescent}\\
U_q(\nu, t) &= U_0 + U_1 \left(t - t_0\right) + U_2 \left(\nu - \nu_0\right)\,,\label{eq:U_quiescent} \\
V_q(\nu, t) &= \left(V_0 + V_1 \left(t - t_0\right)\right)\left(\dfrac{\nu}{\nu_0}\right)^{\alpha_V}\,.\label{eq:V_quiescent}
\end{align}
We choose two different frequency dependencies based on the Stokes parameter. The sign of Stokes I cannot change, and the sign of Stokes V does not change, across the 18 GHz of bandwidth \citep[the sign of Stokes V is constant from $1.4$ to $340$ GHz;][]{Munoz2012}. Therefore, we use the classic frequency power-law form for these two parameters; Equations \ref{eq:I_quiescent} and \ref{eq:V_quiescent} follow the time- and frequency-dependent model in \cite{Michail2021b}. In contrast, Faraday rotation can change the signs of Stokes Q and U across the bandpass, so we model their frequency-dependent changes using a linear form. Here, $I_i, Q_i, U_i,$ and $V_i$ are all constants; parameters with a ``$0$'' subscript are reference flux densities for the four Stokes parameters at frequency $\nu_0$ at time $t_0$. The time- and frequency-dependent slopes are denoted by subscripts ``$1$'' and ``$2$'', respectively. The spectral indices for Stokes I and V are $\alpha_I$ and $\alpha_V$, respectively.
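Equations \ref{eq:I_quiescent}--\ref{eq:V_quiescent} translate directly into code. A sketch, where `quiescent` is a hypothetical helper name and the defaults are the fitted quiescent values from Table \ref{tab:fitted_parameters} converted to Jy:

```python
# Fitted quiescent parameters, converted to Jy
# (time in hr UTC, frequency in GHz; nu_0 = 235.1 GHz, t_0 = 0.455 hr)
PARS = dict(I0=2.46,    I1=-0.2382, alpha_I=0.16,
            Q0=-0.0826, Q1=-0.1183, Q2=0.00282,
            U0=-0.2923, U1=-0.1466, U2=-0.00288,
            V0=-0.0165, V1=0.0065,  alpha_V=-1.31)

def quiescent(nu, t, p=PARS, nu0=235.1, t0=0.455):
    """Evaluate the full-Stokes quiescent model at frequency nu [GHz]
    and time t [hr UTC]; returns (I_q, Q_q, U_q, V_q) in Jy."""
    dt = t - t0
    I = (p['I0'] + p['I1'] * dt) * (nu / nu0) ** p['alpha_I']
    Q = p['Q0'] + p['Q1'] * dt + p['Q2'] * (nu - nu0)
    U = p['U0'] + p['U1'] * dt + p['U2'] * (nu - nu0)
    V = (p['V0'] + p['V1'] * dt) * (nu / nu0) ** p['alpha_V']
    return I, Q, U, V
```

At $(\nu_0, t_0)$ this returns the reference flux densities $(I_0, Q_0, U_0, V_0)$, a convenient consistency check.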
\subsection{Results of Model Fitting}\label{ssec:model_fitting}
We use \texttt{LMFIT} \citep{lmfit} to simultaneously fit the 16 light curves (4 spectral windows $\times$ 4 Stokes parameters) by minimizing the $\chi^2$ of the variable + quiescent models (i.e., $I_\nu = I_f + I_q$, $Q_\nu = Q_f + Q_q$, $U_\nu = U_f + U_q$, $V_\nu = V_f + V_q$) discussed above. Due to the lack of time coverage, we only fit the data through 00:36 hrs UTC. While there appears to be a second flare beginning near 00:45 hrs UTC, the time coverage is insufficient to model it, so we do not include those data in the fit. We discuss the implications of this limited time coverage in Section \ref{ssec:caveats}. For this analysis, we set the reference frequency to $\nu_0 = 235.1$ GHz. In Table \ref{tab:fitted_parameters}, we present the fitted parameter values and errors. In Figure \ref{fig:data_model_fits} (left), we show the best-fit model superimposed on Sgr A*'s light curves.
\begin{table*}
\centering
\begin{tabular}{|c|l|c|c|}
\hline\hline
Parameter & Description & Value & Unit \\\hline
\multicolumn{4}{c}{Hotspot}\\\hline
$I_p$ & Peak Flare Flux Density at 235.1 GHz & $0.19 \pm 0.01$ & Jy \\
$p$ & Electron Power-Law Index & $3.10\pm0.05$ & --\\
$v_{\rm{exp}}$ & Relative Expansion Speed & $1.48 \pm 0.05$ & hr$^{-1}$\\
$\phi$ & Intrinsic EVPA projected in POS (East of North) & $144.8 \pm 1.5$ & degrees\\
$\theta$ & Angle of projected magnetic field vector relative to LOS & $90.09 \pm 0.01$ & degrees \\
$t_0$ & Time of Peak Flux at 235.1 GHz & $0.455 \pm 0.002$ & hr UTC\\
\hline
\multicolumn{4}{c}{Quiescent Component}\\\hline
$I_0$ & Stokes I flux density at $t=t_0$ at 235.1 GHz & $2.46 \pm 0.01$ & Jy\\
$Q_0$ & Stokes Q flux density at $t=t_0$ at 235.1 GHz & $-82.6 \pm 3.7$ & mJy \\
$U_0$ & Stokes U flux density at $t=t_0$ at 235.1 GHz & $-292.3 \pm 4.7$ & mJy \\
$V_0$ & Stokes V flux density at $t=t_0$ at 235.1 GHz & $-16.5 \pm 1.2$ & mJy \\
\\
$I_1$ & Stokes I Time-Dependent Slope & $-238.2 \pm 13.0$ & mJy hr$^{-1}$\\
$Q_1$ & Stokes Q Time-Dependent Slope & $-118.3 \pm 14.2$ & mJy hr$^{-1}$\\
$U_1$ & Stokes U Time-Dependent Slope & $-146.6 \pm 16.5$ & mJy hr$^{-1}$\\
$V_1$ & Stokes V Time-Dependent Slope & $6.5 \pm 5.2$ & mJy hr$^{-1}$\\
\\
$\alpha_I$ & Stokes I Spectral Index & $0.16 \pm 0.01$ & --\\
$Q_2$ & Stokes Q Frequency-Dependent Slope & $2.82 \pm 0.07$ & mJy GHz$^{-1}$\\
$U_2$ & Stokes U Frequency-Dependent Slope & $-2.88 \pm 0.09$ & mJy GHz$^{-1}$\\
$\alpha_V$ & Stokes V Spectral Index & $-1.31 \pm 0.45$ & --\\\hline
\end{tabular}
\caption{Fitted parameters (with errors) for the joint quiescent and variable model discussed in Section \ref{sec:analysis}.}
\label{tab:fitted_parameters}
\end{table*}
\begin{figure*}
\centering
\includegraphics[trim = 0.8cm 1.55cm 2.25cm 0.5cm, clip, width=0.49\textwidth]{16Jul2017_Fitted.pdf}
\includegraphics[trim = 0.8cm 1.55cm 2.5cm 0.5cm, clip, width=0.49\textwidth]{Quiescent_PolParams.pdf}
\caption{\textit{Left}: Sgr A*'s light curves are shown in red, and the best-fit model is superimposed in black. Due to the short time coverage, we only model the light curves before 00:36 hr UTC. \textit{Right}: Light curves and linear polarimetric quantities for the quiescent component. The best-fit model is plotted in red, and the $1\sigma$ error range is shaded in gray. Unlike the left panels, these panels are plotted only up to 00:36 hr UTC.}
\label{fig:data_model_fits}
\end{figure*}
\begin{table}
\centering
\begin{tabular}{l|l|r}
\hline\hline
Parameter & Description & Value \\\hline
\multicolumn{3}{c}{Adopted Parameters}\\\hline
$E_{\rm{min}}$ & Electron Lower Energy Bound & $1$ MeV\\
$E_{\rm{max}}$ & Electron Upper Energy Bound & $500$ MeV\\\hline
\multicolumn{3}{c}{Derived Parameters}\\\hline
$n_e$ & Electron density & $6.5\times10^{7}$ cm$^{-3}$\\
$R_0$ & Radius of flaring region at $t=t_0$ & $9.2\times10^{11}$ cm\\
$v_{\rm{exp}}\cdot R_0$ & Physical Expansion Speed & $0.013$c\\
$B_{\rm{eq}}$ & Equipartition magnetic field strength & $71$ G\\
$M$ & Mass of flaring region & $3.62\times10^{20}$ g\\\hline
\end{tabular}
\caption{Adopted and derived hotspot properties from the variable parameters in Table \ref{tab:fitted_parameters}. We do not account for non-relativistic electrons or protons while estimating the equipartition magnetic field strength. Therefore, this is a lower limit on the true value.}
\label{tab:Plasmon_phys}
\end{table}
\subsubsection{Variable Component}\label{sssec:var}
Modeling the light curves gives the six variable-component parameters, which characterize the hotspot and are listed in Table \ref{tab:fitted_parameters}. To determine physical parameters, we assume the hotspot is in magnetic equipartition with the electrons responsible for the synchrotron emission between energies $E_{\rm{min}}$ and $E_{\rm{max}}$. In Table \ref{tab:Plasmon_phys}, we present the physical properties of the hotspot, fixing $E_{\rm{min}}$ and $E_{\rm{max}}$ to $1$ and $500$ MeV ($\gamma_e\sim2-1000$), respectively. We disregard contributions from protons and non-relativistic electrons to the magnetic field strength, so this is a lower limit on the true value. Overall, we find a $235.1$ GHz peak flare flux density of $0.19$ Jy produced by an electron energy spectrum $N(E)\propto E^{-3.1}$. The hotspot expands at a speed of $\approx0.013c$ with an equipartition magnetic field strength of $71$ G and radius $0.75~R_{\rm{S}}$ ($1~R_{\rm{S}}=1.23\times10^{12}$ cm for a $4.152\times10^{6}~M_{\odot}$ Schwarzschild black hole). Our model robustly constrains the two new parameters in this full-polarization fit: $\theta$ and $\phi$. For the intrinsic EVPA of the source, we find $\phi\approx145^\circ$, corresponding to $\phi_B=55^\circ$ East of North ($\phi+\pi/2$ wrapped through the $\pi$-ambiguity). Additionally, we determine $\theta=90.09^\circ$, placing the projected magnetic field orientation approximately perpendicular to the LOS.
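The kinematic entries in Table \ref{tab:Plasmon_phys} follow from straightforward unit conversions of the fitted values; for example, the physical expansion speed and the radius in Schwarzschild units can be recovered as follows (a sketch using the quoted numbers):

```python
C_CMS = 2.99792458e10          # speed of light [cm/s]
R_S = 1.23e12                  # Schwarzschild radius for 4.152e6 Msun [cm]

R0 = 9.2e11                    # fitted hotspot radius at t = t_0 [cm]
v_exp = 1.48                   # fitted relative expansion speed [1/hr]

v_phys = v_exp * R0 / 3600.0   # physical expansion speed [cm/s]
beta = v_phys / C_CMS          # -> ~0.013 c, as quoted in the table
radius_rs = R0 / R_S           # -> ~0.75 R_S, as quoted in the text
```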
To compare the overall variability of the flaring component to the quiescent emission, we calculate the hotspot's mean LP and CP and their relative fractional change ($\rm{RFC}\equiv\left(\rm{max} - \rm{min}\right)/\rm{average}$). During the modeled range, we find the flare to have average LP and CP of $\approx35\%$ and $\approx-4.2\%$, respectively, at $235.1$ GHz. The LP goes from a minimum of $\approx9.5\%$ to a maximum of $\approx81\%$, giving an RFC of $2.04$. The CP ranges from $\approx-15\%$ to $\approx-0.1\%$ with an RFC of $3.48$.
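The RFC is straightforward to recompute from the quoted extrema; a sketch, where we take the absolute value of the average (our interpretation, so that the RFC is positive for the negative-mean CP values) and note that the quoted numbers are rounded, so the recomputed values agree only approximately:

```python
def rfc(vmax, vmin, avg):
    """Relative fractional change: (max - min) / |average|."""
    return (vmax - vmin) / abs(avg)

lp_rfc = rfc(81.0, 9.5, 35.0)    # flare LP extrema and mean quoted above
cp_rfc = rfc(-0.1, -15.0, -4.2)  # flare CP; |average| keeps the RFC positive
```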
\subsubsection{Quiescent Component}\label{sssec:quiescent}
We find statistically-significant time dependencies in the quiescent component's Stokes I, Q, and U light curves. In Figure \ref{fig:data_model_fits} (right), we present the quiescent-only full-Stokes light curves during our modeled range. While the Stokes I time dependence has been observed previously \citep[e.g.,][]{Michail2021b}, this is the first detection of the quiescent component's Stokes Q and U time variability. We do not find any changes in the quiescent emission's Stokes V properties, as the time-dependent term is not significant. An uncalibrated Stokes V polarization term (Section \ref{ssec:circular_pol}) would have contributed to this fitted value; its consistency with zero provides further evidence that the variations in Stokes V are intrinsic to Sgr A*'s flaring emission.
We find the quiescent emission has an average LP of $\approx12\%$ and an average CP of $\approx-0.7\%$. The LP ranges between $\approx9.9\%$ and $\approx14\%$, giving an RFC of $0.31$. Since we concluded above that the Stokes V quiescent emission is not time-dependent, we do not calculate its RFC.
Additionally, we identify strong frequency-dependent terms in Stokes I, Q, and U, and a marginal dependence in Stokes V. We find the quiescent emission's Stokes I spectral index is $\alpha_I = 0.15$, while the Stokes V spectral index is much steeper at $\alpha_V = -1.31$. The detection of frequency-dependent slopes in Stokes Q and U implies a non-zero RM in the quiescent emission, which has been found in previous analyses \citep[e.g.,][]{Marrone2006, Bower2018}.
\section{Discussion}\label{sec:disc}
In the previous section, we presented the first full-Stokes modeling of Sgr A*'s total-intensity and polarized light curves to simultaneously characterize the quiescent and variable emission. Determining the fitted parameters for our two-component model allows us to study and derive additional physical properties of both components; we derived several of these properties for the variable emission above by assuming magnetic equipartition. In this section, we compare our results for each component to previous analyses and broadly find them consistent with the literature. Finally, given our $\sim40$-minute observation and the 18 free parameters in this fit, we examine their implications for our results.
\subsection{Variable Component}\label{ssec:variable_disc}
The electron power-law index responsible for the flaring emission is consistent with multi-wavelength-constrained values, which broadly range from $p\approx1-3$ \citep[e.g.,][]{FYZ2006,FYZ2008,Eckart2009,Michail2021c,Gravity2021,Witzel2021}. The calculated magnetic field strength is on the higher side of those previously reported, which typically average a few to tens of Gauss, although \cite{FYZ2008} and \cite{Eckart2009} report magnetic field strengths of $\approx80$ G. These values are somewhat stronger than the average field strengths of stellar wind-fed simulations \citep[e.g.,][]{Ressler2020}, suggesting the hotspot may have occurred near the inner accretion flow, where field strengths are stronger, and/or in a concentration of magnetic flux if particle acceleration drives the flaring. The $\approx7\times10^7$ cm$^{-3}$ electron density is consistent with the \cite{Witzel2021} joint variability analysis of Sgr A* at submm, infrared, and X-ray frequencies.
Previous observations \citep[e.g.,][]{Marrone2007, Gravity2018, Wielgus2022b} have detected variability in the LP angle ($\chi$) of Sgr A* caused by orbital motion; $\chi$ changes by $180^\circ$ over half of the orbital period of the hotspot \citep{Gravity2018}. We only observe $\Delta\chi\sim15^\circ$ over $\sim40$ minutes, the latter of which roughly corresponds to the period at the innermost stable circular orbit (ISCO) for a non-spinning black hole with the mass of Sgr A* ($P = 31.5$ minutes). If the hotspot were near the ISCO, we would expect $\Delta\chi\sim180^\circ$ from orbital motion during our modeling range, which would dominate over changes caused by adiabatic expansion. The observed change of $\sim15^\circ$ instead implies the hotspot is far outside the ISCO, and orbital motion-induced variations in $\chi$ are subdominant to changes from the adiabatic expansion. Therefore, the $71$ G field we derive above, which places us towards the inner accretion flow when compared to \citet{Ressler2020}, is likely overestimated. We discuss the possible cause in Section \ref{ssec:caveats}.
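The quoted ISCO period follows from the Keplerian period at $r_{\rm ISCO}=6GM/c^2$ for a Schwarzschild black hole; a quick check in cgs units:

```python
import math

G = 6.674e-8                   # gravitational constant [cgs]
C = 2.99792458e10              # speed of light [cm/s]
M_SUN = 1.989e33               # solar mass [g]

M = 4.152e6 * M_SUN            # mass of Sgr A*
r_isco = 6.0 * G * M / C**2    # ISCO radius, non-spinning black hole [cm]
P = 2.0 * math.pi * math.sqrt(r_isco**3 / (G * M))  # Keplerian period [s]
P_min = P / 60.0               # -> ~31.5 minutes, as quoted in the text
```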
We find the magnetic field angle projected on the POS is $\phi_B=55^\circ$ East of North. As a point probe of the conditions within the accretion flow, these results present the first \textit{direct} detection of the accretion flow's projected magnetic field orientation in the POS. Several analyses suggest this position angle is a favored orientation for the Sgr A* system. Near-infrared polarimetric observations of Sgr A*'s flaring emission over several years find a mean EVPA of approximately $60^\circ$ East of North with a range of about $45^\circ$ \citep{Eckart2006, Meyer2007}. \cite{Eckart2006} speculate this indicates the projected spin axis of a disk around Sgr A*, while \cite{Meyer2007} merely propose this as a preferred orientation for the Sgr A* system. Continuum and spectral observations near 1.5 GHz by \cite{FYZ2020} find a symmetric jet-like structure oriented along the Galactic plane at a position angle $\sim60^\circ$, which they attribute to evidence of a jet/outflow from Sgr A*. A more recent analysis by \cite{Wielgus2022b} uses ALMA linear polarimetry at similar frequencies ($\sim220$ GHz) in the context of an orbiting, non-expanding hotspot. Their analysis again confirms a $\sim60^\circ$ EVPA, which they conclude is the hotspot's projected orbital angular momentum axis. \cite{Wielgus2022b} conclude that the orbital motion of a near-infrared hotspot observed by \cite{Gravity2018} is consistent with their results, as well. In the context of these previous observations, we conclude the magnetic and angular momentum axes of the accretion flow are aligned, a key signature of magnetically arrested disks \citep[MAD;][]{Narayan2003}.
Of all 18 parameters required to fit our model, $\theta$ (the angle between the LOS and the projected magnetic field vector) is the best constrained. For self-absorbed synchrotron sources, \cite{Jones1977a} show that the Stokes V absorption, emission, and rotativity coefficients ($\zeta_V$, $\epsilon_V$, and $\zeta_V^*$, respectively) depend on $\theta$. As $\theta\rightarrow90^\circ$, the variations in Stokes V decrease as CP emission and absorption are suppressed. Additionally, the ``strength'' of internal Faraday rotation within the hotspot decreases as $\theta\rightarrow90^\circ$. The convertibility coefficient ($\zeta_Q^*$) is not a function of $\theta$, so repolarization dominates over Faraday rotation in this regime. Repolarization has been suggested as one possible explanation for Sgr A*'s low LP but high CP detections at radio frequencies \citep[e.g.,][]{Bower1999b, Sault1999}. If the magnetic field configuration throughout the accretion flow is uniform and stable in time \citep[as suggested by][]{Munoz2012}, then this result corroborates repolarization as the cause of the CP-only detections in the radio.
\subsection{Quiescent Component}
The time- and frequency-dependent nature of the quiescent component is clear and leads us to search for secular variations in the RM and intrinsic EVPA ($\chi_0$). Using error-weighted linear least squares, we calculate these values by assuming the normal form for Faraday rotation (i.e., $\chi=\rm{RM}\cdot\lambda^2 + \chi_0$). We show the fitted values over the modeled time range in Figure \ref{fig:quiescent_pols}. Overall, we find the RM varies between $\approx-4.9\times10^{5}$ rad m$^{-2}$ and $\approx-3.8\times10^{5}$ rad m$^{-2}$, while $\chi_0$ ranges between $\approx-4^\circ$ and $\approx-19^\circ$. Variations in these two parameters are strongly anti-correlated, as \cite{Bower2018} also suggest. \cite{Goddi2021} found RMs in the range $-4.84\times10^{5}$ to $-3.28\times10^{5}$ rad m$^{-2}$ and $\chi_0$ ranging from $-18.8^\circ$ to $-14.7^\circ$, values averaged over entire observations to determine the ``quiescent'' parameters. Our fitted RM and $\chi_0$ are consistent with those from their analysis of April 2017 data.
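The error-weighted linear fit of $\chi=\rm{RM}\cdot\lambda^2+\chi_0$ can be sketched as follows. The $\chi$ values below are synthetic, generated from assumed parameters near the fitted range (not the measured data), and `np.polyfit` stands in for whichever weighted least-squares routine is used:

```python
import numpy as np

C = 2.99792458e8                                    # speed of light [m/s]
nu = np.array([217.1, 220.0, 233.5, 235.0]) * 1e9   # spw frequencies [Hz]
lam2 = (C / nu) ** 2                                # lambda^2 [m^2]

# Synthetic data: chi = RM * lambda^2 + chi_0, values near the fitted range
RM_true, chi0_true = -4.5e5, np.radians(-10.0)      # [rad/m^2], [rad]
chi = RM_true * lam2 + chi0_true
sigma = np.full_like(chi, np.radians(0.5))          # assumed 0.5 deg errors

# Error-weighted linear least squares (np.polyfit weights are 1/sigma)
RM_fit, chi0_fit = np.polyfit(lam2, chi, 1, w=1.0 / sigma)
```

With noiseless synthetic input, the fit recovers the generating RM and $\chi_0$, which provides a useful sanity check before applying the same routine to the measured EVPAs.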
We calculate the average LP and RFC for both components in Sections \ref{sssec:var} and \ref{sssec:quiescent}. We find that the variable emission is $\sim3\times$ more linearly polarized than the quiescent component. Unsurprisingly, the flare's LP properties also vary $\approx6.6\times$ more than the quiescent emission. However, the quiescent emission's modeled LP does change by an appreciable amount (RFC $= 0.31$). Some of its variability may be caused by unmodeled hotspot evolution. The two dominant sources are likely non-symmetric geometric evolution, such as shearing, and orbital motion. Shearing occurs when the hotspot size is $\sim 0.5~R_{\rm{S}}$ \citep{Wielgus2022b}; in our modeling, we find $R\sim0.75~R_{\rm{S}}$. However, the shearing timescale is similar to the orbital period \citep{Tiede2020}. As discussed in Section \ref{ssec:variable_disc}, the hotspot is not in the inner accretion flow, so the orbital period is longer than our modeling range. The hotspot's orbital motion will affect the measured Stokes Q and U light curves as $\phi$ will vary in the POS and result in some level of hotspot-induced time variability in the quiescent emission's linear polarization properties. These are modelable effects \citep[i.e.,][]{Jones1977b, Wielgus2022b} that will be considered in future work.
\begin{figure}
\centering
\includegraphics[trim = 0.60cm 0.55cm 0.65cm 0.15cm, clip, width=0.6\textwidth]{RM_chi0_quiescent.pdf}
\caption{\textit{Left}: A plot of the quiescent component's RM during the modeled time range. Red depicts the best-fit RM value, and the shaded region is the $1\sigma$ model range. \textit{Right}: Similar to the left panel but for the intrinsic EVPA ($\chi_0$) of the quiescent emission.}
\label{fig:quiescent_pols}
\end{figure}
\cite{Goddi2021} found the Stokes I spectral index ($\alpha_I$) to be consistent with $0$, whereas we find $0.15$. This is likely explained by spectral index variability between April and July 2017: at submm frequencies, the spectral index varies on daily to weekly timescales \citep[see][]{Wielgus2022a}. \cite{Goddi2021} accounted for the $\approx10\%$ absolute uncertainty in ALMA's flux calibration, whereas we only factor in statistical errors. While this means our uncertainty on $\alpha_I$ is underestimated by $\approx20\%$, it cannot fully account for the discrepancy.
We find $\alpha_V\approx-1.3$, which implies weaker (less negative) Stokes V flux density at higher frequencies. This is in contrast to \cite{Munoz2012}, who find the Stokes V flux density $\propto\nu^{0.35}$ (more negative) at increasingly higher frequencies. \cite{Bower2018} find epochs consistent with both positive and negative $\alpha_V$, although they used a linear frequency term instead of a power law. Despite only having three epochs from which to draw conclusions, there seems to be a general trend in their data: when Sgr A* is brighter, Stokes V is stronger (more negative, $\alpha_V > 0$) at higher frequencies; when Stokes I is lower, Stokes V is weaker ($\alpha_V < 0$) at higher frequencies. In one of three epochs, they find $\alpha_V < 0$ when Sgr A* is $2.68$ Jy at $233$ GHz. Here, we find $\alpha_V<0$ when Sgr A*'s quiescent component is $2.46$ Jy at $235$ GHz. Notably, $\alpha_V > 0$ when Sgr A*'s $227$ GHz flux density was $\approx3.6$ Jy \citep{Munoz2012}. This may suggest a fundamental relationship between Sgr A*'s flux density and the CP spectrum. However, additional data and a more uniform analysis are required before such a correlation is proposed.
\begin{figure}
\centering
\includegraphics[width=0.35\columnwidth]{RM_nu.pdf}
\caption{Predicted absolute RM for the quiescent component as a function of frequency from our linear model is plotted as the black dashed line. We compare our predicted RM to those previously published in the literature. \citet[221 GHz]{Goddi2021}, \citet[227/343 GHz]{Marrone2007}, and \citet[233 GHz]{Bower2018} published multiple RM values in a single paper. In these cases, the marker shows the average value while the vertical bars denote the total published range.}
\label{fig:rm_nu}
\end{figure}
This phenomenological model predicts a frequency-dependent RM for the quiescent emission. \cite{FYZ2007} also find this trend from compiling published RM values. This suggests that classic Faraday rotation, where the RM is frequency-independent, is not valid for Sgr A*. In Figure \ref{fig:rm_nu}, we plot the predicted (absolute) RM as a function of frequency using our model. We compare these values to those previously published in the literature. At lower frequencies, the model and published values are discrepant by more than an order of magnitude. However, our model predicts the same general trend, where the (absolute) RM falls off at lower frequencies. Unfortunately, the lack of $\approx350$ GHz RM measurements \citep[outside of][]{Marrone2007} makes it difficult to determine if our model underpredicts the average RM, if Sgr A*'s RM significantly changes near this frequency, or a combination of both.
\subsection{Effect of Short Time Coverage}\label{ssec:caveats}
We only model about half of the light curve on this day (see Section \ref{ssec:model_fitting}). Data after 00:45 hr UTC appear to show the beginning of a new flare. It is impossible to fit this second flare with the hotspot model because its peak, which is crucial for determining the physical parameters, was not observed. The short time coverage, compounded with not detecting the beginning of the first flare, complicates fitting the proper quiescent baseline. At submm wavelengths, the flaring emission is historically $\approx20\%$ of the overall emission. Here, however, $I_p/I_0\approx8\%$. While $I_p/I_0$ can vary, and $I_0$ matches Sgr A*'s flux density in April 2017 \citep{Goddi2021}, this does seem somewhat low.
We explore the sensitivity to $I_p/I_0$ by assuming $I_p=0.5$ Jy, which is closer to the historic $I_p/I_0\approx20\%$ mentioned above. If $I_p=0.5$ Jy, then the fixed-$I_p$ model has $p=2.53$, $\phi=147.5^\circ$ ($\phi_B=57.5^\circ$), $\theta=90.06^\circ$, and $v_{\rm{exp}}=0.86$ hr$^{-1}$. The hotspot has an equipartition magnetic field strength of $55$ G, a radius of $1~R_{\rm{S}}$, an electron density of $2.73\times10^{7}$ cm$^{-3}$, and an expansion velocity of $0.01$c. Noticeably, $\theta$ and $\phi$ are practically unchanged. For completeness, we list other physical properties of the quiescent emission: average RM $=-3.60\times10^5$ rad m$^{-2}$, average $\chi_0=-12.96^\circ$, $\alpha_I=-0.016$, average LP $=18.4\%$, and average CP $=-0.57\%$. While these values are within their nominal ranges, the reduced $\chi^2$ of this fixed-$I_p$ fit is $26\%$ higher than for the model with our best-fitting parameters. We find that the properties of the flaring and quiescent components are not extremely sensitive to the value of $I_p$; $\theta$ and $\phi$, which regulate the variations in the Stokes Q, U, and V light curves, are virtually unaffected.
\subsection{Concerning the Number of Fitting Parameters}\label{ssec:num_fit_params}
We require fitting 18 parameters to model the quiescent and variable components. There is a concern that given the number of parameters, we may be able to fit any data regardless of its true nature. We have taken several steps throughout this analysis as a safeguard, which we detail below.
Of the 18 values fitted here, 12 are dedicated to characterizing the quiescent emission's frequency dependence and temporal evolution. Most of these parameters are phenomenological (i.e., the time-slopes, frequency-slopes, and reference flux densities) as we do not have a physical model for the quiescent emission. To limit over-fitting, we require the quiescent model (Equations \ref{eq:I_quiescent}--\ref{eq:V_quiescent}) to have the same time-slope across all frequencies, but we do not couple these terms across the Stokes parameters. We fit three quiescent terms using four spectral windows for each Stokes parameter. By not pairing terms across Stokes parameters, we guarantee that any non-linear changes in the light curves are due to the variable component.
The final six fitting parameters for the flaring emission characterize these non-linear amplitude variations and link changes in the four Stokes parameters. The resulting full-Stokes light curves have unique patterns depending on the physical parameters of the hotspot. The power of ALMA's wide frequency bandwidths is that we can simultaneously observe Sgr A*'s time and frequency dependence. Since we jointly fit all 16 light curves, variations in any spectral window and/or Stokes parameter must occur in the other light curves following the full-polarization radiative transfer model. Therefore, we conclude that these simultaneous fits cannot model any light curves that do not follow this full-Stokes prescription.
\section{Summary}\label{sec:summary}
We present the first full-Stokes analysis of the adiabatically-expanding synchrotron hotspot model using 230 GHz ALMA light curves of Sgr A* on 16 July 2017. By including all four Stokes parameter light curves, this work is the most robust test of the hotspot model yet performed in any frequency regime. The full-polarization modeling we complete provides additional evidence for the adiabatically-expanding hotspot model, independent of time-delay measurements. By modeling the time- and frequency-dependent nature of the variable and quiescent components, we constrain the physical and magnetic properties of the hotspot located within Sgr A*'s accretion flow. Our results are fundamental to the Event Horizon Telescope's future efforts to untangle the full-Stokes variable emission from that of the underlying accretion flow \citep{Broderick2022, EHT2022c, EHT2022d}. Our analysis will benefit from past \citep[i.e.,][]{EHT2022e} and future simultaneous multi-wavelength observations, even those with only total intensity data. These additional data would further constrain the frequency-dependent nature of the variable emission. Our fitted parameters show remarkable consistency with those previously published in the literature. We describe several of our key findings below:
\begin{enumerate}
\item We observe average LP and CP detections of Sgr A* at levels $\approx10\%$ and $\approx-1\%$ at 235 GHz, respectively, which are consistent with previous measurements.
\item We find the quiescent component's average RM and $\chi_0$ as $-4.22\times10^5$ rad m$^{-2}$ and $-13.3^\circ$, respectively. These values match well with a recent analysis of Sgr A*'s April 2017 average polarimetric properties \citep{Goddi2021}.
\item As a point probe of the accretion flow, this hotspot is likely located near the inner edge of the accretion flow owing to the inferred magnetic field strength and electron density being a few times larger than those typically found in MHD simulations \citep[i.e.,][]{Ressler2020}.
\item The hotspot's magnetic field orientation projected on the POS is aligned parallel to the Galactic Plane, matching a previously discovered jet-like feature emanating from Sgr A* and near-infrared/submm polarization results that trace the accretion flow's angular momentum axis. This provides the first direct evidence that the accretion flow's magnetic and angular momentum axes are aligned in parallel, a key signature of a magnetically-arrested disk.
\item The hotspot's magnetic field axis is aligned almost perpendicular to the LOS. This suggests repolarization is dominant over Faraday rotation and corroborates it as the cause of low-LP but high-CP in Sgr A* at radio frequencies.
\item We find that the results for the variable component are not drastically (or at all) altered by the short time coverage of these data.
\end{enumerate}
Several exciting prospects remain to test this model. There is a diverse dataset of full-track, long-duration ALMA and Submillimeter Array (SMA) observations of Sgr A*. Many of these are simultaneous observations at similar or vastly different frequencies. In the former case, these data provide full-Stokes light curves over a broader time range than a single array could provide. In the latter case, multiwavelength observations allow us to constrain the frequency-dependent total intensity and polarized nature of the quiescent emission and rigorously test this full-Stokes hotspot model across a wide range of frequencies. Future analyses will benefit from fast-frequency switching or sub-array ALMA observations (for example, simultaneously using the 12-meter and 7-meter arrays at two separate frequencies). This expanded analysis will allow us to test for variability in the hotspot's physical parameters, such as the magnetic field orientation and strength. For example, if $\phi$ is variable, as is suggested in the near-infrared \citep[i.e.,][]{Eckart2006}, its range may signify the opening angle of an outflow emanating from Sgr A*.
\section*{Acknowledgements}
We thank the anonymous referee for their very helpful and constructive comments, which strengthened the arguments and analysis in this work. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2016.A.00037.T. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
\section*{Data Availability}
The data used in this analysis are publicly available from the ALMA archive. The light curves and code used to model these data are available upon request to the first author.
\bibliographystyle{mnras}
\section{Composition Theorems}\label{sec:composition-theorems}
It is often convenient to design a mechanism as the function composition
of two mechanisms, $\ensuremath{\mathcal{M}}\xspace = \ensuremath{\mathcal{M}}\xspace_2 \circ \ensuremath{\mathcal{M}}\xspace_1$. We present ``composition theorems''
which yield flexible accuracy and differential privacy guarantees for \ensuremath{\mathcal{M}}\xspace in terms of those for $\ensuremath{\mathcal{M}}\xspace_1$ and $\ensuremath{\mathcal{M}}\xspace_2$.
\subsection{Flexible Accuracy Under Composition}\label{sec:comp-accuracy}
In order to give our composition theorem for flexible accuracy, we need to define two new sensitivity notions: {\em distortion sensitivity} for a function and {\em error sensitivity} for a mechanism.
We give motivation behind each of these sensitivity notions when defining them in their respective subsections below.
\subsubsection{Distortion Sensitivity}\label{sec:dist-sens}
When we compose two flexibly accurate mechanisms $M_1$ and $M_2$ for $f_1:A\to B$ and $f_2:B\to C$, respectively, to obtain the flexible accuracy guarantee of $M_2\circ M_1$ for $f_2\circ f_1:A\to C$, we would like to attribute all the distortion made in $A$ and $B$ (for measuring the output error of $M_1$ and $M_2$, respectively) to the distortion in $A$. This requires transferring the input distortion from $B$ back into $A$, and the notion of distortion sensitivity allows us to quantify this.
Informally, the distortion sensitivity of a function $f$ (denoted by $\distsens{f}{}$) captures the amount of distortion required in the domain of $f$ to account for a given amount of distortion in the codomain of $f$. We formalize this intuition below.
\begin{defn}[Distortion sensitivity]\label{def:dist-sens}
Let $f: A \to B$ be a randomized function where $B$ admits Wasserstein distances.
Let $\ensuremath{\mathsf{\partial}}\xspace_1,\ensuremath{\mathsf{\partial}}\xspace_2$ be measures of distortion on $A, B$, respectively.
Then, the \emph{distortion-sensitivity} of $f$ w.r.t.\ $(\ensuremath{\mathsf{\partial}}\xspace_1,\ensuremath{\mathsf{\partial}}\xspace_2)$ is defined as the function
$\distsens{f}{}:\ensuremath{\R_{\ge0}}\xspace \cup \{\infty\}\to\ensuremath{\R_{\ge0}}\xspace \cup \{\infty\}$ given by
\begin{equation}\label{eq:dist-sens}
\distsens{f}{}(\alpha) =
\sup_{\substack{x,Y:\\\ensuremath{{\widehat\dn}}\xspace_2(f(x),\prob{Y}) \le \alpha}} \inf_{\substack{X:\\ f(X) = Y}} \ensuremath{{\widehat\dn}}\xspace_1(x, \prob{X})
\end{equation}
where $x\in A$, and the random variables $X$ and $Y$ are distributed over $A$ and $B$,
respectively.
Above, infimum over an empty set is defined to be $\infty$.
\end{defn}
See \Figureref{dist-sens} on page~\pageref{fig:dist-sens} for an illustration of distortion sensitivity using a pebbling game.
\paragraph{Distortion sensitivity at $\alpha=0$.} It is easy to verify that for any randomized function $f$, we have $\distsens{f}{}(0)=0$. We will use this for deriving flexible accuracy guarantees for any histogram-based-statistic in \Sectionref{HBS}.
\paragraph {Distortion sensitivity of deterministic bijective functions:} When $f:A\to B$ is a deterministic and bijective map, then for every $x,Y$ such that $\ensuremath{{\widehat\dn}}\xspace_2(f(x),\prob{Y})\leq\alpha$, there is only one choice of $X$ for which $f(X)= Y$ holds, that is $X=f^{-1}(Y)$.
Since for any point $x\in A$ and distribution $P$ over $A$, we have $\ensuremath{{\widehat\dn}}\xspace_1(x,P)=\sup_{x'\in\ensuremath{\mathrm{support}}\xspace(P)}\ensuremath{\mathsf{\partial}}\xspace_1(x,x')$, it follows that
\begin{align}\label{bijective-distsens}
\distsens{f}{}(\alpha)\quad = \sup_{\substack{x,Y:\\\ensuremath{{\widehat\dn}}\xspace_2(f(x),\prob{Y}) \le \alpha}}\ensuremath{{\widehat\dn}}\xspace_1(x,\prob{f^{-1}(Y)})
\quad = \sup_{\substack{x\in A,y\in B:\\\ensuremath{\mathsf{\partial}}\xspace_2(f(x),y) \le \alpha}} \ensuremath{\mathsf{\partial}}\xspace_1(x, f^{-1}(y)).
\end{align}
In particular, if $f:A\to A$ is an identity function and $\ensuremath{\mathsf{\partial}}\xspace_1=\ensuremath{\mathsf{\partial}}\xspace_2$, then we have $\distsens{f}{}(\alpha)\leq\alpha$. Many of the mechanisms in this paper for which we derive flexible accuracy guarantees are given for the identity function over the space of histograms; see, for example, the result for our basic histogram mechanism (\Theoremref{hist-priv-accu}), the bucketing mechanism (see \Claimref{bucket-accuracy}), and their composition (\Theoremref{bucketing-hist}).
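For intuition, when $f$ is deterministic and attention is restricted to point distributions, \eqref{bijective-distsens} extends to non-injective $f$ by minimizing over preimages and can be evaluated by brute force on small finite sets. The sketch below is purely illustrative; the function name and the restriction to point masses are our simplifications, not part of the formal definition:

```python
def dist_sens_pointwise(f, A, B, d1, d2, alpha):
    """Brute-force point-distribution distortion sensitivity of a
    deterministic f: A -> B w.r.t. distortion measures d1 (on A) and
    d2 (on B): the sup over (x, y) with d2(f(x), y) <= alpha of the
    min over preimages x' of y of d1(x, x')."""
    worst = 0.0
    for x in A:
        for y in B:
            if d2(f(x), y) <= alpha:
                preimages = [xp for xp in A if f(xp) == y]
                if not preimages:
                    return float("inf")  # no X with f(X) = Y exists
                worst = max(worst, min(d1(x, xp) for xp in preimages))
    return worst

# Identity function on {0, ..., 5} with absolute-difference distortion:
# the distortion sensitivity at alpha equals alpha, matching the bound
# for bijective maps.
A = range(6)
d = lambda a, b: abs(a - b)
sens = dist_sens_pointwise(lambda x: x, A, A, d, d, 2)
```

For a bijective $f$ each preimage set is a singleton, and the inner minimum collapses to the closed form in \eqref{bijective-distsens}.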
\paragraph{Distortion sensitivity of the histogram function w.r.t.\ $(\ensuremath{\dn_{\mathrm{drop}}}\xspace,\ensuremath{\dn_{\mathrm{drop}}}\xspace)$:}
Let $\ensuremath{\mathcal{G}}\xspace=\{g_1,g_2,\hdots,g_k\}$ denote a finite set.
The histogram function $f_{\ensuremath{\mathrm{hist}}\xspace}$ takes an unordered dataset $\ensuremath{\mathbf{x}}\xspace=\{x_1,\hdots,x_n\}$ (where each $x_i\in\ensuremath{\mathcal{G}}\xspace$ and the ordering of $x_i$'s does not matter) as input and outputs the histogram of the dataset, i.e., a $k$-tuple $(\ensuremath{\mathbf{x}}\xspace(g_1),\hdots,\ensuremath{\mathbf{x}}\xspace(g_k))$ where $\ensuremath{\mathbf{x}}\xspace(g_i):=|\{j:x_j=g_i\}|$ denotes the multiplicity of $g_i$ in $\ensuremath{\mathbf{x}}\xspace$. Note that $f_{\ensuremath{\mathrm{hist}}\xspace}$ is a deterministic bijective function. It is easy to verify (from \eqref{bijective-distsens}) that $\distsens{f_{\ensuremath{\mathrm{hist}}\xspace}}{}(\alpha)\leq\alpha$ w.r.t.\ $(\ensuremath{\dn_{\mathrm{drop}}}\xspace,\ensuremath{\dn_{\mathrm{drop}}}\xspace)$. Later on, it will be convenient for us to represent a histogram over a finite set $\ensuremath{\mathcal{G}}\xspace$ as a map $\ensuremath{\mathbf{x}}\xspace:\ensuremath{\mathcal{G}}\xspace\to\ensuremath{{\mathbb N}}\xspace$, that outputs the multiplicity of any element from $\ensuremath{\mathcal{G}}\xspace$ in the dataset.
\begin{remark}
The distortion sensitivity is bounded in many circumstances, including all the applications we consider in this paper; however, due to the strict requirement of having an $X$ such that $f(X)=Y$ (under the infimum) in its definition, it may be infinite in situations where this condition cannot be satisfied. To accommodate more functions, we can relax the definition with additional parameters $\gamma\in[0,1]$, $\omega\geq0$ as follows:
\[ \distsens{f}{\gamma,\omega}(\alpha) =
\sup_{\substack{x,Y:\\\ensuremath{{\widehat\dn}}\xspace_2(f(x),\prob{Y}) \le \alpha}} \inf_{\substack{X:\\ \Winf{\gamma}(f(X), \prob{Y})\leq\omega}} \ensuremath{{\widehat\dn}}\xspace_1(x, \prob{X}).\]
All the results in this paper can be extended to work with this more general definition of distortion sensitivity.
\end{remark}
\begin{figure}[t]
\centering
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[width=\linewidth]{fa.pdf}
\captionsetup{justification=centering}
\caption{Flexible Accuracy}
\label{fig:fa}
\end{subfigure}\hfill
\begin{subfigure}{0.29\textwidth}
\centering
\includegraphics[width=\linewidth]{dist-sens.pdf}
\captionsetup{justification=centering}
\caption{Distortion Sensitivity}
\label{fig:dist-sens}
\end{subfigure}\hfill
\begin{subfigure}{0.40\textwidth}
\centering
\includegraphics[width=\linewidth]{err-sens.pdf}
\captionsetup{justification=centering}
\caption{Error Sensitivity}
\label{fig:err-sens}
\end{subfigure}
\caption{An illustration of the flexible accuracy, distortion sensitivity, and error sensitivity.
Dotted arrows
indicate closeness in terms of distortion between histograms (or
distributions thereof), and the solid two-sided arrows indicate closeness
in terms of the lossy Wasserstein distance. Each figure
shows the corresponding guarantee (accuracy, error sensitivity or
distortion sensitivity) as a pebbling game: The white boxes with
black pebbles correspond to given histograms, and the yellow
boxes indicate histograms that are guaranteed to exist, such that the
given closeness relations hold. This allows those boxes to be pebbled.
}
\label{fig:fa-dist-err-sens}
\end{figure}
\begin{SCfigure}
\centering
\includegraphics[width=0.55\textwidth]{fa-composition.pdf}
\caption{An illustration of the composition theorem, \Theoremref{compose-accuracy}. Accuracy guarantee of $M_2 \circ M_1$
is derived by first applying the pebbling rule of accuracy of $M_1$
(to obtain the purple pebbles), then that of the error sensitivity of
$M_2$ (to get the pink pebbles), and finally using the pebbling rule of
the distortion sensitivity of $f_1$ to pebble the remaining yellow box.
The final parameters are $\alpha=\alpha_1+\distsens{f_1}{}(\alpha_2)$, $\beta=\tau_{M_2,f_2}^{\alpha_2,\gamma_2}(\beta_1,\gamma_1)$, and $\gamma=\gamma_2$.}
\label{fig:composition}
\end{SCfigure}
\subsubsection{Error Sensitivity}\label{sec:err-sens}
Suppose we want to compose an $(\alpha_1,\beta_1,\gamma_1)$-accurate mechanism $M_1$ for $f_1:A\to B$ with another flexibly accurate mechanism $M_2$ for $f_2:B\to C$ to obtain flexible accuracy guarantee of the composed mechanism $M_2\circ M_1$ for $f_2\circ f_1:A\to C$.
For this, on any input $x\in A$, first we measure the output error of $M_1$ on input $x$ in terms of $\Winf{\gamma_1}(M_1(x),f_1(X'))$, where $X'$ is an $\alpha_1$-distortion of the {\em same} $x$ on which we run the mechanism $M_1$; see \eqref{eq:fa-defn}.
Now, for composition, we need to run $M_2$ on $M_1(x)$ and distort $f_1(X')$ to obtain another r.v.\ $Y$, and the output error of the composed mechanism is given by $\Winf{\gamma}(M_2(M_1(x)),f_2(Y))$. The problem here is that since the input (distribution) $f_1(X')$ that we distort is {\em not the same} as the input (distribution) $M_1(x)$ that we run $M_2$ on, we cannot directly obtain the output error guarantee of the composed mechanism from that of $M_2$. Therefore, we need a way to generalize the measure of accuracy (output error) of a flexibly accurate mechanism to the case where the input (distribution) to the mechanism is not the same as the input (distribution) that we distort, but the two are at a bounded distance from each other (as measured in terms of the lossy $\infty$-Wasserstein distance). The notion of error sensitivity formalizes this intuition. Informally, it captures the sensitivity of the output error of a flexibly accurate mechanism in such situations.
\begin{defn}[Error sensitivity]\label{def:err-sens}
Let $\ensuremath{\mathcal{M}}\xspace: B \to C$ be any mechanism for a function $f:B\to C$,
where both $B$ and $C$ have associated Wasserstein distances.
Let $\ensuremath{{\widehat\dn}}\xspace$ be a measure of distortion on $B$. Then, for $\alpha_2,
\gamma_2\geq0$, the error-sensitivity $\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{\alpha_2, \gamma_2}:\ensuremath{\R_{\ge0}}\xspace
\times [0,1] \to \ensuremath{\R_{\ge0}}\xspace$ of \ensuremath{\mathcal{M}}\xspace w.r.t.\ $f$ is defined as:
\begin{align}\label{eq:err-sens}
\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{\alpha_2, \gamma_2}(\beta_1, \gamma_1) = \sup_{\substack{X,X': \\ \Winf{\gamma_1}(\prob{X},\prob{X'})\leq \beta_1}} \ \inf_{\substack{Y: \\ \ensuremath{{\widehat\dn}}\xspace(X', Y) \le \alpha_2}} \Winf{\gamma_2}(\ensuremath{\mathcal{M}}\xspace(X),f(Y)).
\end{align}
\end{defn}
In other words, if $\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{\alpha_2, \gamma_2}(\beta_1, \gamma_1) = \beta_2$, then for any distributions $X,X'$ over $B$ such that $\Winf{\gamma_1}(\prob{X},\prob{X'})\leq \beta_1$, one can $\alpha_2$-distort $X'$ to $Y$ in such a way that $\Winf{\gamma_2}(\ensuremath{\mathcal{M}}\xspace(X),f(Y))\leq\beta_2$.
See \Figureref{err-sens} on page~\pageref{fig:err-sens} for an illustration of error sensitivity using a pebbling game.
\begin{remark}
As mentioned earlier, the notion of error sensitivity generalizes the definition of flexible accuracy. In other words, if a mechanism $\ensuremath{\mathcal{M}}\xspace$ for computing a function $f$ is $(\alpha,\beta,\gamma)$-accurate, then $\beta=\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{\alpha, \gamma}(0, 0)$.
\end{remark}
We can simplify the expression for error sensitivity in some special cases that arise later in \Sectionref{mechanisms}; we discuss these after stating our composition theorem for flexible accuracy in the next subsection.
\subsubsection{Composition Theorem for Flexible Accuracy}\label{sec:compostion_FA}
Having defined the distortion and error sensitivities, we shall now see how they come into play in a composition $M_2 \circ M_1$ for $f_2 \circ f_1$, where $M_1, M_2$ are mechanisms with flexible accuracy guarantees.
\begin{thm}[Flexible Accuracy Composition]\label{thm:compose-accuracy}
Let $\ensuremath{\mathcal{M}}\xspace_1 : A \to B$ and $\ensuremath{\mathcal{M}}\xspace_2 : B \to C$ be mechanisms, respectively, with $(\alpha_1,\beta_1,\gamma_1)$-accuracy for $f_1: A \to B$ and $\tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}$ error sensitivity for $f_2: B \to C$, w.r.t.\ measures of distortion $\ensuremath{\mathsf{\partial}}\xspace_1$, $\ensuremath{\mathsf{\partial}}\xspace_2$ defined on $A, B$ and metrics $\ensuremath{\mathfrak{d}}\xspace_1, \ensuremath{\mathfrak{d}}\xspace_2$ defined on $B, C$, respectively.
Then, for any $\alpha_2 \ge 0$ such that $\distsens{f_1}{}(\alpha_2)$ is finite and any $\gamma_2 \in [0,1]$, the mechanism $\ensuremath{\mathcal{M}}\xspace_2 \circ \ensuremath{\mathcal{M}}\xspace_1 : A \to C$ is $(\alpha, \beta ,\gamma)$-accurate for the function $f_2\circ f_1$ w.r.t.~$\ensuremath{\mathsf{\partial}}\xspace_1$ and $\ensuremath{\mathfrak{d}}\xspace_2$, where
$\alpha = \alpha_1 + \distsens{f_1}{}(\alpha_2)$, $\beta = \tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{\alpha_2, \gamma_2}(\beta_1, \gamma_1)$, and $\gamma = \gamma_2$.
\end{thm}
We prove \Theoremref{compose-accuracy} in \Sectionref{compose-accuracy_proof}.
An illustration of how the composition theorem works is given as a pebbling game in \Figureref{composition}.
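The parameter bookkeeping of \Theoremref{compose-accuracy} can be traced with placeholder callables for the two sensitivities; the concrete functions below (an identity-like distortion sensitivity and an additive error sensitivity) are illustrative stand-ins, not derived from any mechanism in this paper:

```python
def compose_accuracy(alpha1, beta1, gamma1, dist_sens_f1,
                     err_sens_m2, alpha2, gamma2):
    """Parameter bookkeeping of the flexible-accuracy composition theorem:
    alpha = alpha1 + dist_sens_f1(alpha2),
    beta  = err_sens_m2(beta1, gamma1)   (at fixed alpha2, gamma2),
    gamma = gamma2."""
    alpha = alpha1 + dist_sens_f1(alpha2)
    if alpha == float("inf"):
        raise ValueError("distortion sensitivity of f1 must be finite")
    return alpha, err_sens_m2(beta1, gamma1), gamma2

# Illustration: f1 is the identity (distortion sensitivity alpha -> alpha)
# and M2 has an additive error sensitivity tau(beta1, gamma1) = beta1 + 0.25.
alpha, beta, gamma = compose_accuracy(
    alpha1=1.0, beta1=0.25, gamma1=0.0,
    dist_sens_f1=lambda a: a,
    err_sens_m2=lambda b, g: b + 0.25,
    alpha2=2.0, gamma2=0.0)
```

The guard on an infinite distortion sensitivity mirrors the theorem's hypothesis that $\distsens{f_1}{}(\alpha_2)$ is finite.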
\Theoremref{compose-accuracy} requires computing/bounding the error sensitivity of $\ensuremath{\mathcal{M}}\xspace_2$ in order to compute the flexible accuracy parameter $\beta$ of the composed mechanism $\ensuremath{\mathcal{M}}\xspace_2\circ\ensuremath{\mathcal{M}}\xspace_1$.
Now we show that the expression of error sensitivity can be simplified in some important special cases.
\paragraph{$\bullet$ When $\ensuremath{\mathcal{M}}\xspace_1,f_1$ are deterministic maps and $\ensuremath{\mathcal{M}}\xspace_1$ is $(0,\beta_1,0)$-accurate.}
This setting arises when we compute the flexible accuracy parameters of our bucketed histogram mechanism $\mBhist{} = \mtrlap{} \circ \mbuc{}$ (\Algorithmref{histogram-mech}) while proving \Theoremref{bucketing-hist}.
In this case, for any $x\in\ensuremath{\mathcal{A}}\xspace$, both $\ensuremath{\mathcal{M}}\xspace_1(x)$ and $f_1(x)$ are point distributions. This means that in order to compute the error sensitivity of $\ensuremath{\mathcal{M}}\xspace_2$, we only need to take the supremum in \eqref{eq:err-sens} over point distributions $\prob{y},\prob{y'}$ over $\ensuremath{\mathcal{B}}\xspace$ (supported on $y:=\ensuremath{\mathcal{M}}\xspace_1(x)$ and $y':=f_1(x)$, respectively) such that $\Winf{}(\prob{y},\prob{y'})\leq\beta_1$.
Since $\Winf{}(\prob{y},\prob{y'})=\ensuremath{\met_{\mathrm{\B}}}\xspace(y,y')$, we only need to take the supremum in \eqref{eq:err-sens} over $y,y'\in\ensuremath{\mathcal{B}}\xspace$ such that $\ensuremath{\met_{\mathrm{\B}}}\xspace(y,y')\leq\beta_1$.
\paragraph{$\bullet$ When $\ensuremath{\mathcal{M}}\xspace_2,f_2$ are deterministic maps and $\ensuremath{\mathcal{M}}\xspace_1$ is $(\alpha_1,\beta_1,0)$-accurate and $\alpha_2=\gamma_2=0$.}
This setting arises in the case of histogram-based-statistics (denoted by a deterministic function $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$) in \Sectionref{HBS}, in which we use the composed mechanism $\ensuremath{{f_{\mathrm{HBS}}}}\xspace \circ \mBhist{}$ for computing $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$, where \mBhist{} is our final histogram mechanism that is $(\alpha,\beta,0)$-accurate (see \Theoremref{bucketing-hist}) and $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ (as a mechanism) is $(0,0,0)$-accurate for computing $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$.
Upon substituting these parameters in \eqref{eq:err-sens}, the expression for the error sensitivity reduces to computing $\tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{0,0}(\beta_1,0) = \sup_{X,X': \Winf{}(\prob{X},\prob{X'})\leq \beta_1} \Winf{}(\ensuremath{\mathcal{M}}\xspace_2(X),f_2(X'))$, which can be simplified further as shown in the lemma below, which we prove in \Appendixref{err-sens-deterministic_proof}.
\begin{lem}\label{lem:err-sens-deterministic}
Let $\ensuremath{\mathcal{M}}\xspace:\ensuremath{\mathcal{B}}\xspace\to\ensuremath{\mathcal{C}}\xspace$ be a deterministic mechanism for a deterministic function $f:\ensuremath{\mathcal{B}}\xspace\to\ensuremath{\mathcal{C}}\xspace$. Then, for any $\beta_1\geq0$, we have
\begin{align}\label{eq:err-sens-deterministic}
\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{0,0}(\beta_1,0) \quad = \sup_{\substack{X,X': \\ \Winf{}(\prob{X},\prob{X'})\leq \beta_1}} \Winf{}(\ensuremath{\mathcal{M}}\xspace(X),f(X')) \quad = \sup_{\substack{x,x'\in\ensuremath{\mathcal{A}}\xspace : \\ \dB{x}{x'} \leq \beta_1}} \dC{\ensuremath{\mathcal{M}}\xspace(x)}{f(x')}.
\end{align}
\end{lem}
\subsection{Differential Privacy Under Composition}
First we formally define the notion of differential privacy.
\paragraph{Differential Privacy.}
Let \ensuremath{\mathcal{X}}\xspace denote a universe of possible ``databases'' with a
symmetric neighborhood relation $\sim$. In typical applications, two
databases \ensuremath{\mathbf{x}}\xspace and $\ensuremath{\mathbf{x}}\xspace'$ are considered neighbors if one is obtained from the
other by removing the data corresponding to a single ``individual.''
A \emph{mechanism} \ensuremath{\mathcal{M}}\xspace over \ensuremath{\mathcal{X}}\xspace is an algorithm which takes $\ensuremath{\mathbf{x}}\xspace\in\ensuremath{\mathcal{X}}\xspace$ as
input and samples an output from an output space \ensuremath{\mathcal{Y}}\xspace, according to some
distribution. We shall denote this distribution by $\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace)$.
\begin{defn}[Differential Privacy \cite{DworkMNS06,DworkKMMN06}]\label{def:epsdel-DP}
A randomized algorithm $\ensuremath{\mathcal{M}}\xspace:\ensuremath{\mathcal{X}}\xspace\to\ensuremath{\mathcal{Y}}\xspace$ is $(\ensuremath{\epsilon}\xspace,\delta)$-differentially private (DP), if for all neighboring databases $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\ensuremath{\mathcal{X}}\xspace$
and all measurable subsets $S\subseteq\ensuremath{\mathcal{Y}}\xspace$, we have
$\Pr[\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace)\in S] \leq e^{\ensuremath{\epsilon}\xspace}\Pr[\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace')\in S] + \delta$.
\end{defn}
A simple but very useful result in differential privacy is the ``post-processing'' theorem for DP (see \cite[Proposition 2.1]{DworkRo14}), which states that if $\ensuremath{\mathcal{M}}\xspace_1$ is $(\epsilon,\delta)$-DP, then for any mechanism $\ensuremath{\mathcal{M}}\xspace_2$, the composed mechanism $\ensuremath{\mathcal{M}}\xspace_2 \circ \ensuremath{\mathcal{M}}\xspace_1$ would remain $(\epsilon,\delta)$-DP.
We prove a ``pre-processing'' theorem for differential privacy, which can be viewed as complementing the ``post-processing'' theorem for DP.
Our pre-processing theorem for DP states that if $\ensuremath{\mathcal{M}}\xspace_2$ is private,
then so would $\ensuremath{\mathcal{M}}\xspace_2 \circ \ensuremath{\mathcal{M}}\xspace_1$ be (i.e.,
pre-processing does not hurt privacy), provided that $\ensuremath{\mathcal{M}}\xspace_1$ is well-behaved.
The following notion of being well-behaved suffices for our purposes.
\begin{defn}[Neighborhood preserving Mechanism]
A mechanism $\ensuremath{\mathcal{M}}\xspace:A\rightarrow B$ is \emph{neighborhood preserving} w.r.t.\
neighborhood relations $\sim_A$ over $A$ and $\sim_B$ over $B$, if for all
$x,y \in A$ s.t. $x \sim_A y$, there exists a pair of jointly distributed
random variables $(X,Y)$ s.t. $\prob{X}=\ensuremath{\mathcal{M}}\xspace(x)$, $\prob{Y}=\ensuremath{\mathcal{M}}\xspace(y)$, and
$\Pr[X\sim_B Y] = 1$.
\end{defn}
The following theorem states our pre-processing theorem for DP, which we prove in \Appendixref{comp-privacy}.
\begin{thm}[Differential Privacy Composition]\label{thm:compose-DP}
Let $\ensuremath{\mathcal{M}}\xspace_1:A\to B$ and $\ensuremath{\mathcal{M}}\xspace_2:B\to C$ be any two mechanisms.
If $\ensuremath{\mathcal{M}}\xspace_1$ is neighborhood-preserving w.r.t.\
neighborhood relations $\sim_A$ and $\sim_B$ over $A$ and $B$, respectively,
and $\ensuremath{\mathcal{M}}\xspace_2$ is $(\epsilon, \delta)$-DP w.r.t.\ $\sim_B$,
then $\ensuremath{\mathcal{M}}\xspace_2\circ \ensuremath{\mathcal{M}}\xspace_1:A\to C$ is $(\epsilon, \delta)$-DP w.r.t.\ $\sim_A$.
\end{thm}
It is important to note here that we are not releasing the output of the neighborhood-preserving mechanism $\ensuremath{\mathcal{M}}\xspace_1$; we only release the output of $\ensuremath{\mathcal{M}}\xspace_2\circ \ensuremath{\mathcal{M}}\xspace_1$.
Looking ahead, we will require \Theoremref{compose-DP} to establish the DP guarantee of our bucketed-histogram mechanism (\Algorithmref{histogram-mech}) which is obtained by pre-processing our $(\ensuremath{\epsilon}\xspace,\delta)$-DP histogram mechanism (\Algorithmref{hist-mech}) with the neighborhood-preserving bucketing mechanism (\Algorithmref{bucketing}).
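As a toy illustration of a neighborhood-preserving pre-processing step (a sketch of our own; the clamping map and the replace-one neighborhood relation are illustrative choices, not the bucketing mechanism of \Algorithmref{bucketing}):

```python
def clamp_dataset(x, lo, hi):
    """Deterministic, element-wise pre-processing map.

    Because it acts on each record independently, datasets that differ in a
    single record are mapped to datasets that differ in at most that record,
    so the map is neighborhood preserving w.r.t. the replace-one relation.
    """
    return [min(max(v, lo), hi) for v in x]
```

By \Theoremref{compose-DP}, following such a map with any $(\ensuremath{\epsilon}\xspace,\delta)$-DP mechanism leaves the composition $(\ensuremath{\epsilon}\xspace,\delta)$-DP, even though clamping itself adds no noise.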
\section{Experimental Evaluations}\label{sec:eval}
We empirically compare our basic mechanism \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} (\Algorithmref{hist-mech})
on a ground set $\ensuremath{\mathcal{G}}\xspace=\{1,\cdots,B\}$, against various
competing mechanisms, for accuracy on a few histogram-based statistics
computed on it. We plot average
errors (actual and flexible) on different histogram distributions%
\footnote{For each data distribution, the plots were averaged over 100 data sets, with 100 runs each for each mechanism.}
for functions $\max_k(\ensuremath{\mathbf{x}}\xspace) := \max \{ i \mid \ensuremath{\mathbf{x}}\xspace(i) \ge k \}$, $\max:=\max_1$,
and $\ensuremath{\mathrm{mode}}\xspace(\ensuremath{\mathbf{x}}\xspace) := \arg\max_i \ensuremath{\mathbf{x}}\xspace(i)$; note that $\ensuremath{\mathrm{mode}}\xspace(\ensuremath{\mathbf{x}}\xspace)$ is equal to the most frequently occurring data item in $\ensuremath{\mathbf{x}}\xspace$.
The parameters for \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} that we use in this section are given in \Corollaryref{simpler-dp-hist}.
We emphasize that the plots are only indicative of the performance of our algorithm on specific histograms, and do not suggest \emph{worst-case accuracy guarantees}. On the other hand, our theorems do provide worst-case accuracy guarantees.
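For concreteness, the three statistics can be computed from a histogram as follows (a minimal sketch; we represent a histogram simply as a Python list of bar heights indexed from $0$, and break ties in $\arg\max$ toward the smallest index):

```python
def max_k(x, k):
    """Largest index i with bar height x[i] >= k (assumes some bar reaches k)."""
    return max(i for i, h in enumerate(x) if h >= k)

def hist_max(x):
    """max := max_1, i.e., the right-most non-empty bar."""
    return max_k(x, 1)

def mode(x):
    """arg max_i x(i); ties broken toward the smallest index."""
    return x.index(max(x))
```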
We will empirically compare our results against the Exponential Mechanism \cite{ExponentialMech}, Propose-Test-Release Mechanism \cite{DworkL09}, Smooth-sensitivity Mechanism \cite{NissimRS07}, Stability-Based Sanitized Histogram \cite{BNS}, and Choosing-Based Histogram Mechanism \cite{BeimelNiSt16}.
First we present the comparison of our mechanisms against all these on different data distributions in \Sectionref{evals-carried-out}, and then describe these mechanisms briefly in \Sectionref{compared-mechs}.
We point out one notable omission from our plots: the Encode-Shuffle-Analyze
histogram mechanism \cite{erlingsson2020encode}, which appeared
independently and concurrently to our mechanism,%
\footnote{Preliminary versions of the current work were available online and had been presented publicly (as an invited talk \cite{prelimversion19}) before \cite{erlingsson2020encode} was available.}
also uses a shifted (but not truncated) Laplace mechanism, and in all the
examples plotted, yields a behavior that is virtually identical to our
mechanism's. However, we emphasize that \cite{erlingsson2020encode} claims
accuracy only for the histogram itself, and indeed, for the functions that
we consider, it does not enjoy the \emph{worst-case accuracy guarantees}
that we provide.
\begin{figure}
\centering
\includegraphics[scale=0.55]{allplots.pdf}
\caption{For each evaluation, a typical histogram used is shown in inset.
The different data distributions elicit a variety of behaviors of the
different mechanisms. Experiment (2) shows an instance which is hard for all
the mechanisms without considering flexible accuracy; on the other hand,
in Experiment (3), flexible accuracy makes no difference (the plots
overlap). In these two experiments, BNS and our new mechanism match each
other. In all the other experiments, our new mechanism dominates the others,
with or without considering flexible accuracy.
\label{fig:app-eval}}
\end{figure}
\subsection{Evaluations Carried Out}\label{sec:evals-carried-out}
In each of the following empirical evaluations, a histogram distribution and
one of the following functions were fixed: $\max_k(\ensuremath{\mathbf{x}}\xspace) := \max \{ i \mid
\ensuremath{\mathbf{x}}\xspace(i) \ge k \}$, $\max:=\max_1$, and $\ensuremath{\mathrm{mode}}\xspace(\ensuremath{\mathbf{x}}\xspace) := \arg\max_i \ensuremath{\mathbf{x}}\xspace(i)$.
\begin{enumerate}
\item[\textbf{(1)}] Function $\max$. Histogram of about 10,000 items drawn i.i.d.\ from a Cauchy distribution with median $45$ and scale $4$, restricted to 100 bars, with the last 10 set to empty bars.
\item[\textbf{(2)}] Function $\max$. Step histogram with two steps (height $\times$ width) : [$1000 \times 50$, $1 \times 50$].
\item[\textbf{(3)}] Function $\max_{500}$. Same histogram distribution as in (1) above,
but without zeroing out the right-most bars.
\item[\textbf{(4)}] Function $\max_{500}$. Step histogram with 100 bars, with two steps (height $\times$ width) : [$540 \times 50$, $490 \times 50$].
\item[\textbf{(5)}] Function \ensuremath{\mathrm{mode}}\xspace. Histogram of 30 bars, each bar has height drawn from i.i.d.\ Poisson with mean 250.
\item[\textbf{(6)}] Function \ensuremath{\mathrm{mode}}\xspace. Noisy step histogram, with steps
[$130 \times 120$, $200 \times 5$, $185 \times 85$, $190 \times 10$, $130\times80$].
\end{enumerate}
The results are shown in \Figureref{app-eval}. In each experiment, a range of values of $\epsilon$ is chosen, while $\delta$ is fixed at $2^{-20}$. Errors are shown on the y-axis as a percentage of the full range $[0,B)$. In all experiments, for each mechanism we also compute flexible accuracy, allowing a distortion of $\ensuremath{\dn_{\mathrm{drop}}}\xspace=0.005$.
\subsection{Description of the Compared Mechanisms}\label{sec:compared-mechs}
\paragraph{Exponential Mechanism.}
The Exponential Mechanism \cite{ExponentialMech} can be tailored for an
abstract utility function. We consider the negative of the error as the utility of a response $y$ on input histogram \ensuremath{\mathbf{x}}\xspace, i.e., $q(\ensuremath{\mathbf{x}}\xspace,y)=-\ensuremath{\mathrm{err}}\xspace(\ensuremath{\mathbf{x}}\xspace,y)=-|\max(\ensuremath{\mathbf{x}}\xspace)-y|$.
However, for both $\max_k$ and \ensuremath{\mathrm{mode}}\xspace, error has high sensitivity -- changing
a single element in the histogram can change the error by as much as the
number of bars in the histogram. Since the mechanism produces an output $y$
with probability proportional to $e ^ {\frac{\epsilon q(\ensuremath{\mathbf{x}}\xspace, y)}{2
\Delta_\ensuremath{\mathrm{err}}\xspace}}$, where $\Delta_\ensuremath{\mathrm{err}}\xspace$ is the sensitivity of $\ensuremath{\mathrm{err}}\xspace(\cdot,\cdot)$, having a
large sensitivity has the effect of moving the output distribution close to
a uniform distribution. This is reflected in the performance of this
mechanism in all our plots.
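A minimal sketch of this mechanism over a finite output domain (illustrative code; the log-sum-exp shift is only for numerical stability and does not change the sampling distribution):

```python
import math
import random

def exponential_mechanism(x, candidates, utility, sensitivity, eps, rng=random):
    """Sample r from candidates with Pr[r] proportional to
    exp(eps * utility(x, r) / (2 * sensitivity))."""
    scores = [eps * utility(x, r) / (2 * sensitivity) for r in candidates]
    shift = max(scores)  # stabilize exp() without changing the distribution
    weights = [math.exp(s - shift) for s in scores]
    u = rng.random() * sum(weights)
    acc = 0.0
    for r, w in zip(candidates, weights):
        acc += w
        if u <= acc:
            return r
    return candidates[-1]
```

With the utility $q(\ensuremath{\mathbf{x}}\xspace,y)=-|\max(\ensuremath{\mathbf{x}}\xspace)-y|$ and $\Delta_\ensuremath{\mathrm{err}}\xspace$ equal to the number of bars, a small $\epsilon$ makes the weights nearly uniform, which is the flattening effect described above.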
\paragraph{Propose-Test-Release Mechanism (PTR).}
We consider the commonly used form of the PTR mechanism of Dwork and Lei
\cite{DworkL09}, namely, ``releasing stable values'' (see \cite[Section 3.3]{VadhanSurvey}). On input \ensuremath{\mathbf{x}}\xspace, the
mechanism either releases the correct result $f(\ensuremath{\mathbf{x}}\xspace)$ or refuses to do so (replacing it with a random output value), depending on whether the radius of the neighborhood of \ensuremath{\mathbf{x}}\xspace on which $f$ remains constant is sufficiently large (after adding some noise). For a function $f$, the parameter setting $\beta = 0$, and privacy parameters $\epsilon, \delta$, the mechanism calculates this radius for an input $\ensuremath{\mathbf{x}}\xspace$ as
$r = d(\ensuremath{\mathbf{x}}\xspace, \{\ensuremath{\mathbf{x}}\xspace' : \text{LS}_f(\ensuremath{\mathbf{x}}\xspace') > 0\}) + \text{Lap}(\nicefrac{1}{\epsilon})$, where $d(\ensuremath{\mathbf{x}}\xspace,\mathcal{S})$ is the minimum Hamming distance between $\ensuremath{\mathbf{x}}\xspace$ and any point in the set $\mathcal{S}$, and $\text{LS}_f(\ensuremath{\mathbf{y}}\xspace):=\max\{|f(\ensuremath{\mathbf{y}}\xspace)-f(\ensuremath{\mathbf{z}}\xspace)|:\ensuremath{\mathbf{y}}\xspace\sim\ensuremath{\mathbf{z}}\xspace\}$ is the local sensitivity of the function $f$ at $\ensuremath{\mathbf{y}}\xspace$.
If this radius $r$ is greater than $\nicefrac{\ln{(\nicefrac{1}{\delta})}}{\epsilon}$, the
mechanism will output the exact answer $f(\ensuremath{\mathbf{x}}\xspace)$; otherwise, it outputs a random value from the domain.
For the functions we consider, this radius of the stable region can be computed efficiently and is typically small or even zero for the input distributions considered, which is reflected in our plots.
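A sketch of this ``releasing stable values'' variant (illustrative; the `stable_radius` callback must be supplied per function $f$, and the inverse-CDF Laplace sampler is our own helper):

```python
import math
import random

def ptr_release(x, f, stable_radius, eps, delta, output_domain, rng=random):
    """Release f(x) only if the (noisy) distance from x to the nearest
    dataset with positive local sensitivity clears ln(1/delta)/eps;
    otherwise output a uniformly random value from the domain."""
    u = rng.random() - 0.5                     # Lap(1/eps) via inverse CDF
    noise = -(1.0 / eps) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    r = stable_radius(x) + noise
    if r > math.log(1.0 / delta) / eps:
        return f(x)
    return rng.choice(output_domain)
```

For $\max$-like functions the stable radius is frequently zero, so the mechanism almost always falls back to a random output, which explains its poor performance in the plots.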
\paragraph{Smooth-sensitivity Mechanism (SS).}
This mechanism, due to Nissim et al.\ \cite{NissimRS07}, uses
the smooth sensitivity of a function $f$, defined as $SS_{f}^{\beta}(\ensuremath{\mathbf{x}}\xspace) = \max \{ LS_f(\ensuremath{\mathbf{x}}\xspace') e^{-\beta d(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace')} \mid \ensuremath{\mathbf{x}}\xspace' \in \Hspace\ensuremath{\mathcal{G}}\xspace \}$, where $LS_f(\ensuremath{\mathbf{x}}\xspace')$ denotes the \emph{local sensitivity} of $f$ at $\ensuremath{\mathbf{x}}\xspace'$, and $d(\cdot,\cdot)$ is the Hamming distance. Given an input histogram \ensuremath{\mathbf{x}}\xspace, the mechanism adds noise $O(SS_f^{\beta}(\ensuremath{\mathbf{x}}\xspace)/\alpha)$ to $f(\ensuremath{\mathbf{x}}\xspace)$ for appropriate values of $\alpha$ and $\beta$ to obtain $(\epsilon, \delta)$-DP.
For functions like $\max_k$ and \ensuremath{\mathrm{mode}}\xspace, local sensitivity (and hence smooth
sensitivity), like global sensitivity, tends to be large on many histograms,
which leads this mechanism to add a large amount of noise.
\paragraph{Stability-Based Sanitized Histogram Mechanism.}
This mechanism was proposed by Bun et al. \cite{BNS} (also see \cite[Theorem 3.5]{VadhanSurvey}) for
releasing histograms with provable worst-case guarantees. However, these
guarantees are in terms of the errors in the individual bar heights of the
histogram, and do not necessarily translate to the histogram-based functions
that we consider. Nevertheless, this mechanism is a potential candidate for
any histogram-based statistic.
For each bar of the histogram, the mechanism adds Laplace noise to the bar
height, and the resulting value is reported only if it exceeds a threshold;
otherwise a $0$ is reported. By treating empty bars differently, this
mechanism achieves flexible accuracy comparable to that of our mechanism in
the case of $\max$. However, this does not generalize to $\max_k$. In particular, in
the example in (4) in \Figureref{app-eval}, by adding (possibly positive)
noise to histogram bars of height lower than $k$, the mechanism is very
likely to find a bar which is much further to the right than the point where
the bar heights cross $k$.
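A sketch of the thresholding idea (with illustrative constants; the exact noise scale and threshold in \cite{BNS} differ):

```python
import math
import random

def stability_histogram(x, eps, delta, rng=random):
    """Add Lap(2/eps) noise to every non-empty bar and keep the noisy height
    only if it clears a threshold of order log(1/delta)/eps; empty bars are
    reported as 0 without being perturbed."""
    threshold = 1.0 + (2.0 / eps) * math.log(2.0 / delta)
    out = []
    for h in x:
        if h == 0:
            out.append(0.0)        # the special treatment of empty bars
            continue
        u = rng.random() - 0.5     # Lap(2/eps) via inverse CDF
        noisy = h - (2.0 / eps) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        out.append(noisy if noisy > threshold else 0.0)
    return out
```

Note how only \emph{empty} bars are protected from noise: a bar of height just below $k$ can still receive positive noise, which is exactly the failure mode for $\max_k$ described above.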
\paragraph{Choosing-Based Histogram Mechanism.}
Beimel et al.~\cite{BeimelNiSt16} presented a mechanism SanPoints for
producing a sanitized histogram, with formal PAC-guarantees for the height
of each bar of the histogram. The mechanism involves iteratively choosing
bars from the histogram, without replacement, and adding some noise
to the bar heights. The bars are chosen according to a DP mechanism for picking the tallest bar
(which in turn uses the exponential mechanism).
For the functions we consider, SanPoints yields mixed results, but is
dominated by BNS and our new mechanism.
\section{Flexible Accuracy}\label{sec:flex-accu}
The high-level idea of flexible accuracy is to
allow for some \emph{distortion of the input} before measuring accuracy.
We would like to define ``natural'' distortions of a database, that are meaningful for the
function in question. For many functions, removing a few data points (say, outliers) would be a natural distortion, while for others, perturbing the data points (or a combination of both) is more natural. Note that \emph{adding} new entries -- even just one -- is often not a reasonable
distortion. Therefore, distortion is generally defined not using a metric over
databases, but a {\em quasi-metric} (which is not required
to be symmetric).\footnote{A function $\ensuremath{\mathsf{\partial}}\xspace:\ensuremath{\mathcal{X}}\xspace\times\ensuremath{\mathcal{X}}\xspace\to[0,\infty)$ is called a quasi-metric, if for every $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace,\ensuremath{\mathbf{z}}\xspace$, we have {\sf (i)} $\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)=0\iff\ensuremath{\mathbf{x}}\xspace=\ensuremath{\mathbf{y}}\xspace$ and {\sf (ii)} $\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leq\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)+\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)$.}
\subsection{Measure of Distortion}\label{sec:distortion-measures}
We shall use quasi-metrics with range $\ensuremath{\R_{\ge0}}\xspace\cup\{\infty\}$ to define a measure of distortion,
where $\infty$ indicates that one database cannot be distorted into another one.
As we shall need a distortion measure between two distributions in our accuracy guarantees, and also in the definitions of distortion and error sensitivities, it will be useful to extend the distortion measure to distributions. This can be done in the same way as $\ensuremath{W^\infty}\xspace$, but with respect to a quasi-metric rather than a metric.
\begin{defn}[Measure of Distortion]\label{def:distortion}
A \emph{measure of distortion} on a set $\ensuremath{\mathcal{X}}\xspace$ is a function $\ensuremath{\mathsf{\partial}}\xspace:\ensuremath{\mathcal{X}}\xspace\times \ensuremath{\mathcal{X}}\xspace
\rightarrow \ensuremath{\R_{\ge0}}\xspace \cup \{\infty\}$ which forms a quasi-metric over $\ensuremath{\mathcal{X}}\xspace$.
We also define \ensuremath{{\widehat\dn}}\xspace as the extension of $\ensuremath{\mathsf{\partial}}\xspace$ to distributions, which maps a pair
of distributions $P,Q$ over \ensuremath{\mathcal{X}}\xspace to a real number as
\[
\ensuremath{{\widehat\dn}}\xspace(P, Q) := \inf_{\phi\in\Phi^0(P, Q)} \sup_{(x,y)\leftarrow\phi}\ensuremath{\mathsf{\partial}}\xspace(x,y).
\]
If $P$ is a point distribution with all its mass on a point $x$, we denote
$\ensuremath{{\widehat\dn}}\xspace(P,Q)$ as $\ensuremath{{\widehat\dn}}\xspace(x,Q)$, which can be simplified as $\ensuremath{{\widehat\dn}}\xspace(x,Q)=\sup_{x'\in\ensuremath{\mathrm{support}}\xspace(Q)}\ensuremath{\mathsf{\partial}}\xspace(x,x')$.
Furthermore, if both $P$ and $Q$ are point distributions on $x$ and $y$, respectively, then $\ensuremath{{\widehat\dn}}\xspace(P,Q)=\ensuremath{\mathsf{\partial}}\xspace(x,y)$, and we will write $\ensuremath{{\widehat\dn}}\xspace(P,Q)$ simply by $\ensuremath{\mathsf{\partial}}\xspace(x,y)$.
\end{defn}
It is easy to verify that if $\ensuremath{\mathsf{\partial}}\xspace$ is a quasi-metric, so is $\ensuremath{{\widehat\dn}}\xspace$. We prove this in \Lemmaref{dnx-quasi-metric} in \Appendixref{beyond-drop}.
\paragraph{Examples of measures of distortion.}
We formally define three measures of distortion: $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ for dropping elements, $\ensuremath{\dn_{\mathrm{move}}}\xspace$ for perturbing/moving elements, and $\dropmove\eta$ for a combination of dropping and moving elements.
These are defined when each element in $\ensuremath{\mathcal{X}}\xspace$ is a finite multiset over a ground set \ensuremath{\mathcal{G}}\xspace. Formally,
$\ensuremath{\mathbf{x}}\xspace\in\ensuremath{\mathcal{X}}\xspace$ is a function $\ensuremath{\mathbf{x}}\xspace:\ensuremath{\mathcal{G}}\xspace\rightarrow\ensuremath{{\mathbb N}}\xspace$ (where $\ensuremath{{\mathbb N}}\xspace$ denotes the set of non-negative integers) that outputs the multiplicity
of each element of \ensuremath{\mathcal{G}}\xspace in $\ensuremath{\mathbf{x}}\xspace$. We denote the \emph{size} of $\ensuremath{\mathbf{x}}\xspace$ by $|\ensuremath{\mathbf{x}}\xspace| := \sum_{i\in\ensuremath{\mathcal{G}}\xspace} \ensuremath{\mathbf{x}}\xspace(i)$.
\begin{enumerate}
\item {\bf Dropping elements:}
For finite $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace' \in \ensuremath{\mathcal{X}}\xspace$, we define $\ensuremath{\dn_{\mathrm{drop}}}\xspace$, a measure of distortion for dropping elements, as follows:
\begin{equation}\label{eq:drop_defn}
\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace') :=
\begin{cases}
\frac{\sum_{g\in \ensuremath{\mathcal{G}}\xspace} \left(\ensuremath{\mathbf{x}}\xspace(g)-\ensuremath{\mathbf{x}}\xspace'(g)\right)}{\sum_{g\in \ensuremath{\mathcal{G}}\xspace} \ensuremath{\mathbf{x}}\xspace(g)} & \text{ if } \forall g\in\ensuremath{\mathcal{G}}\xspace, \ensuremath{\mathbf{x}}\xspace(g) \ge \ensuremath{\mathbf{x}}\xspace'(g), \\
\infty & \text{ otherwise.}
\end{cases}
\end{equation}
That is, $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace')$ measures the fraction of elements in \ensuremath{\mathbf{x}}\xspace that are to
be dropped for it to become $\ensuremath{\mathbf{x}}\xspace'$ (unless $\ensuremath{\mathbf{x}}\xspace'$ cannot be derived thus).
It is easy to see that $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ is a quasi-metric.
\item {\bf Perturbing/Moving elements:}
For finite $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace \in \ensuremath{\mathcal{X}}\xspace$, we define $\ensuremath{\dn_{\mathrm{move}}}\xspace$, a measure of distortion for moving elements, as follows:
\begin{equation}\label{eq:move_defn}
\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace) =
\begin{cases}
\Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}) & \text{ if } |\ensuremath{\mathbf{x}}\xspace|=|\ensuremath{\mathbf{y}}\xspace|, \\
\infty & \text{ otherwise},
\end{cases}
\end{equation}
where $\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|}$ (similarly, $\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}$) is treated as a probability vector of size $|\ensuremath{\mathcal{G}}\xspace|$, indexed by the elements of $\ensuremath{\mathcal{G}}\xspace$; the $i$'th element of $\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|}$ is equal to $\frac{\ensuremath{\mathbf{x}}\xspace(i)}{|\ensuremath{\mathbf{x}}\xspace|}$.
We show in \Claimref{move-metric} in \Appendixref{beyond-drop} that $\ensuremath{\dn_{\mathrm{move}}}\xspace$ is a metric.
\item {\bf Both dropping and moving elements:} For finite $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace \in \ensuremath{\mathcal{X}}\xspace$, we define $\dropmove\eta$, a measure of distortion for both moving and dropping elements, as follows:
\begin{equation}\label{eq:drop_move_defn}
\dropmove\eta(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace) = \inf_{\ensuremath{\mathbf{z}}\xspace} \left(\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)\right).
\end{equation}
We show in \Claimref{drop-move-quasi-metric} in \Appendixref{beyond-drop} that $\dropmove\eta$ is a quasi-metric.\footnote{While showing that $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ is a quasi-metric is trivial, it is not always so with other measures of distortion; in particular, showing that $\dropmove\eta$ is a quasi-metric is non-trivial.}
\end{enumerate}
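The simplest of these, $\ensuremath{\dn_{\mathrm{drop}}}\xspace$, is straightforward to compute on explicit multisets (a sketch; multisets are represented as Python dictionaries mapping ground-set elements to multiplicities):

```python
def d_drop(x, y):
    """The fraction of elements of multiset x that must be dropped to obtain
    y, or infinity if y is not a sub-multiset of x (elements cannot be added)."""
    if any(y.get(g, 0) > x.get(g, 0) for g in set(x) | set(y)):
        return float('inf')
    size_x = sum(x.values())
    return (size_x - sum(y.values())) / size_x
```

The infinite value in one direction but not the other is what makes this a quasi-metric rather than a metric.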
Most of the results in this paper are derived w.r.t.\ the distortion $\ensuremath{\dn_{\mathrm{drop}}}\xspace$, but they can also be extended to the distortion $\dropmove\eta$; see \Sectionref{beyond-drop} for the extension.
\subsection{Defining Flexible Accuracy}\label{sec:defn-fa}
Informally, flexible accuracy with a distortion bound $\alpha$ guarantees
that on an input \ensuremath{\mathbf{x}}\xspace, a mechanism shall produce an output that corresponds to
$f(\ensuremath{\mathbf{x}}\xspace')$ for some $\ensuremath{\mathbf{x}}\xspace'$ such that $\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace')\leq \alpha$. In addition to
such input distortion, we may allow the output to be also probably
approximately correct, with an approximation error parameter $\beta$ and an
error probability parameter $\gamma$. Formally, the probabilistic
approximation guarantee of the output is given as a bound of $\beta$ on a
$\gamma$-lossy $\infty$-Wasserstein distance.
\begin{defn}[$(\alpha,\beta,\gamma)$-accuracy]\label{def:alpha-beta-gamma-accu}
Let \ensuremath{\mathsf{\partial}}\xspace be a measure of distortion on a set $\ensuremath{\mathcal{X}}\xspace$ and $f:\ensuremath{\mathcal{X}}\xspace\to\ensuremath{\mathcal{Y}}\xspace$ be a randomized function
such that \ensuremath{\mathcal{Y}}\xspace admits a metric. A mechanism $\ensuremath{\mathcal{M}}\xspace$ is said to
be \emph{$(\alpha,\beta,\gamma)$-accurate for $f$ with respect to $\ensuremath{\mathsf{\partial}}\xspace$}, if
\begin{equation}\label{eq:fa-defn}
\left(\sup_{\ensuremath{\mathbf{x}}\xspace\in \ensuremath{\mathcal{X}}\xspace}\inf_{X':\ensuremath{{\widehat\dn}}\xspace(\ensuremath{\mathbf{x}}\xspace,\prob{X'})\leq\alpha}\Winf{\gamma}(\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace),f(X'))\right) \leq \beta.
\end{equation}
In other words, for each $\ensuremath{\mathbf{x}}\xspace\in\ensuremath{\mathcal{X}}\xspace$, there is a random variable $X'$ satisfying $\ensuremath{{\widehat\dn}}\xspace(\ensuremath{\mathbf{x}}\xspace,\prob{X'})\leq\alpha$ (i.e., $\ensuremath{\mathsf{\partial}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace')\leq\alpha$ for all $\ensuremath{\mathbf{x}}\xspace'\in\ensuremath{\mathrm{support}}\xspace(X')$) such that $\Winf{\gamma}(\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace),f(X'))\leq\beta$.
\end{defn}
See \Figureref{fa} on page~\pageref{fig:fa} for an illustration of flexible accuracy using a pebbling game.
\paragraph{Flexible accuracy generalizes existing accuracy definitions.}
It should be noted that flexible accuracy is not a completely disparate notion, but a generalization of the standard accuracy guarantees. In particular:
\begin{itemize}
\item As mentioned in \Sectionref{lossy-wass}, $(0,\beta,\gamma)$-accuracy already extends the PAC guarantees. For example, the Laplace mechanism (see \cite[Chapter 3]{DworkRo14}) for a function $f:\ensuremath{\mathcal{X}}\xspace\to\ensuremath{{\mathbb R}}\xspace$ that achieves $\epsilon$-DP is $(0,\frac{\nabla_f}{\ensuremath{\epsilon}\xspace}\ln(1/\gamma),\gamma)$-accurate for any $\gamma>0$, where $\nabla_f$ is the sensitivity of $f$.
\item Blum et al.~\cite{BLR} introduced \emph{usefulness} to
measure accuracy with respect to a ``perturbed'' function. While adequate
for the function classes they considered (half-space queries, range queries
etc.), it is not applicable to queries like maximum. Flexible accuracy
generalizes usefulness (see \Appendixref{Comparison_BLR}).
\end{itemize}
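The first bullet's claim about the Laplace mechanism can be sanity-checked numerically (a sketch; the mechanism and the check are illustrative, with sensitivity $\nabla_f=1$):

```python
import math
import random

def laplace_mech(fx, sensitivity, eps, rng):
    """eps-DP Laplace mechanism: f(x) + Lap(sensitivity/eps)."""
    u = rng.random() - 0.5
    return fx - (sensitivity / eps) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# (0, beta, gamma)-accuracy with beta = (sensitivity/eps) * ln(1/gamma):
# the output should miss f(x) by more than beta with frequency about gamma.
rng = random.Random(0)
eps, sens, gamma = 1.0, 1.0, 0.05
beta = (sens / eps) * math.log(1.0 / gamma)
misses = sum(abs(laplace_mech(0.0, sens, eps, rng)) > beta
             for _ in range(100_000))
# misses / 100_000 concentrates around gamma = 0.05
```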
As we show later, flexible accuracy lets us develop DP mechanisms for highly
sensitive functions ({\em e.g.}, $\max$), for which existing DP mechanisms offered only limited,
if not vacuous, guarantees.
\section{Introduction}
\label{sec:intro}
In the era of big data, privacy has been a major concern, to the point that
recent legislative moves, like General Data Protection Regulation (GDPR) in
the European Union, have mandated various measures for ensuring privacy. Further, in
the face of a global pandemic that has prompted governments to collect and
share individual-level information for epidemiological purposes, debates on
privacy-utility trade-offs have been brought into sharper relief.
Against this backdrop, mathematical theories of privacy are of great
importance.
Differential Privacy \cite{DworkMNS06} is by far the most impactful
mathematical framework today for privacy in statistical databases. It has
seen large scale adoption in theory and practice, including machine learning
applications and large scale commercial implementations (e.g.,
\cite{AbadiCGMMTZ16,BorgsCS15,BorgsCSZ18,Rappor14,AppleDP17}).
In this work, we make foundational contributions to the area of Differential
Privacy (DP), extending its applicability. Our main contribution is the notion of \emph{Flexible Accuracy} --
a new framework for measuring the \emph{accuracy} of a mechanism (while
retaining the DP framework unaltered for quantifying privacy). This lets us
develop new DP mechanisms with non-trivial provable (and empirically
demonstrable) accuracy guarantees in settings involving high-sensitivity functions.
\paragraph{Motivating Flexible Accuracy (FA).}
Consider querying a database consisting of integer valued observations --
say, ages of patients who recovered from a certain disease -- for the
maximum value. For the sake of privacy, one may wish to apply a DP
mechanism, rather than output the maximum in the data itself. Two possible
datasets which differ in only one patient are considered neighbors and a DP
mechanism needs to make the outputs on these two samples indistinguishable
from each other. However, the function in question is \emph{highly
sensitive} -- two neighboring datasets can have their maxima differ by as
much as the entire range of possible ages%
\footnote{In fact, \emph{all datasets} with low maximum values have high
sensitivity \emph{locally}, by considering a neighboring dataset with a
single additional data item with a large value.}
-- and, as we shall see in our empirical evaluations in \Sectionref{eval}, the various kinds of mechanisms in the literature~\cite{ExponentialMech,BNS,VadhanSurvey,DworkL09,NissimRS07,BeimelNiSt16} do not provide a satisfactory solution.
The difficulty in solving this problem is related to another issue.
Consider the problem of reporting a \emph{histogram} (again,
say, of patients' ages). Here a standard DP mechanism, of adding a zero-mean
Laplace noise to each bar of the histogram is indeed reasonable, as the
histogram function has low sensitivity in each bar. Now, note that
\emph{maximum can be computed as a function of the histogram}. However, even
though the histogram mechanism was sufficiently accurate in the standard
sense, the maximum computed from its output is no longer accurate! This is
because when a non-zero count is added to a large-valued item which
originally has a count of 0, the maximum can increase arbitrarily.
Flexible Accuracy (FA) is a relaxed notion of accuracy that lets us address both
of the above issues. In particular, it not only enables new DP mechanisms
for maximum, but also allows one to derive the mechanism from a new DP
mechanism for histograms. We provide a general \emph{composition theorem} that
enables such a transfer of accuracy guarantees, which is not possible with
conventional accuracy measures.
The high-level idea of Flexible Accuracy is to allow for some
\emph{distortion of the input} when measuring accuracy. We shall require
distortion to be defined using a \emph{quasi-metric} over the input space (a
quasi-metric is akin to a metric, but is not required to be symmetric). A
good example of distortion is \emph{dropping a few items} from the dataset;
note that in this case, \emph{adding} a data item is \emph{not} considered low distortion.
Referring back to the example of reporting maximum, given a dataset with a
single elderly patient and many young patients, flexible accuracy with
respect to this distortion allows a mechanism for maximum to report the
maximum age of the younger group.%
\footnote{Of course, it is not obvious what should determine which items
should be dropped and with what probability. This will be the subject of
our new mechanisms.}
Flexible accuracy needs to account for errors that can be attributed to
distortion of the input (input error), as well as to inaccuracies in the
output (output error). To be able to exploit input distortion while retaining privacy, we allow
input distortion to be randomized. A side-effect of this is that our
measure of output accuracy needs to allow the ``correct output'' to be
randomized (i.e., defined by a distribution), even if we are interested in
only deterministic functions. To generalize the conventional
\emph{probabilistically approximately correct} (PAC) guarantees to this
setting, we introduce a natural, but new quantity called \emph{lossy
$\infty$-Wasserstein distance}.
Our final definition of flexible accuracy is a 3-parameter quantity, with
one parameter accounting for input distortion and two parameters for the
output error, measured using the lossy $\infty$-Wasserstein distance.
\subsection{Our Contributions}
Our contributions are in three parts:
\begin{itemize}
\item \textit{Definitions:} We present a conceptual
enhancement to the framework of DP -- \emph{flexible accuracy} -- which considers error after allowing for a small \emph{distortion of the input}; see \Definitionref{alpha-beta-gamma-accu}.
To account for randomized distortion (and more generally, to be able to
consider distributions over inputs and/or randomized functions) we
need an error measure that compares a mechanism's output distribution to not
a fixed ``correct value,'' but a ``correct distribution.'' For this, we
introduce and use a new measure called \emph{lossy $\infty$-Wasserstein distance} (see \Definitionref{infty-delta-wass-dist}),
extending the classical notion of Wasserstein distance (or Earth Mover
Distance). This also generalizes several existing notions, such as the PAC guarantee, the notion of total variation distance, etc.
\item \textit{Composition Theorems:}
We present a composition theorem for flexible accuracy (see \Theoremref{compose-accuracy}),
which gives an FA guarantee for a composed mechanism from those of the constituent ones. This involves identifying new quantities including \emph{distortion sensitivity} (see \Definitionref{dist-sens})
and \emph{error sensitivity} (see \Definitionref{err-sens}).
To be able to use such composed mechanisms for DP, we rely on the well-known post-processing theorem of DP,
as well as a new pre-processing theorem (see \Theoremref{compose-DP}).
\item \textit{Mechanisms:} We give a DP mechanism with FA guarantee for releasing a sanitized histogram (called the \ensuremath{\text{Shifted-Truncated Laplace}}\xspace mechanism; see \Algorithmref{hist-mech} and \Algorithmref{histogram-mech}),
which, via our composition theorems, yields DP mechanisms with FA
guarantees for \emph{histogram-based statistics} (see
\Theoremref{bucketing-general}).
These functions include several high-sensitivity functions, such as maximum
and minimum, support of a set, range, median, maximum margin separator, etc.\ (we give concrete bounds for max/min and support).
We present an
empirical comparison against state-of-the-art DP mechanisms, which
reveals that, besides providing theoretical guarantees where none were
previously available, our mechanisms also compare favorably with the
others in terms of empirical accuracy (flexible and otherwise).
\end{itemize}
\subsection{The Surprising Power of Flexible Accuracy}
Consider a sequence of $n+1$ neighboring histograms, such that the first in the sequence has all
its $n$ elements in the first bar, and the last one has all elements in the last
bar, and the first and the last bars are far away from each other.
In any reasonably accurate (flexible or not) mechanism for a histogram-based statistic like max, the answers for these two extremes must be very different with
probability almost 1. So, intuitively, there should be some pair of neighbors in this
sequence whose answers differ significantly with probability at least $1/n$.
This seems to preclude obtaining $(\ensuremath{\epsilon}\xspace,\delta)$-DP for a small constant
$\ensuremath{\epsilon}\xspace$ with $\delta \ll 1/n$. Remarkably, this intuition turns out to be
wrong! By carefully calibrating the probability of the responses (while
also making sure that the responses can be attributed to only dropping a few
items -- as permitted by flexible accuracy), our mechanism
can obtain the following guarantee for the max function (see \Corollaryref{bucketing-max}): \\
\noindent \fbox{\parbox{\linewidth}{{\bf Informal result for max:}\label{informal-result-max} Our flexibly-accurate mechanism for max over a bounded range achieves $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\alpha n)}\right)$-DP while incurring an arbitrarily small output error after dropping only $\alpha n$ elements.}} \\
The above result gives a trade-off between the privacy guarantee and number of elements dropped. For example:
{\sf (i)} By choosing $\ensuremath{\epsilon}\xspace=\frac{1}{n^{1/4}}$ and $\alpha=\frac{1}{\sqrt{n}}$, our mechanism is $(\frac{1}{n^{1/4}},e^{-\Omega(n^{1/4})})$-DP while dropping only $O(\sqrt{n})$ elements.
{\sf (ii)} By choosing $\ensuremath{\epsilon}\xspace$ to be a small constant (say, $0.1$) and say, $\alpha=\frac{\log^2 n}{n}$, our mechanism is $(0.1,n^{-\Omega(\log n)})$-DP while dropping only $O(\log^2 n)$ elements.
See \Sectionref{choosing-params} for several other parameter choices that are of interest.
\paragraph{Significance of the New Mechanisms.} Traditional DP literature has largely
not addressed functions like the maximum function, \ensuremath{f_{\mathrm{max}}}\xspace. This is in part due to the very high
sensitivity of such functions: When the database has entries from $[0,B]$,
the sensitivity of $\ensuremath{f_{\mathrm{max}}}\xspace$ is $B$.%
\footnote{The sensitivity of a real-valued function $f:\ensuremath{\mathcal{X}}\xspace\to\ensuremath{{\mathbb R}}\xspace$ is defined by $\Delta_f:=\max_{\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\ensuremath{\mathcal{X}}\xspace:\ensuremath{\mathbf{x}}\xspace\sim\ensuremath{\mathbf{x}}\xspace'}|f(\ensuremath{\mathbf{x}}\xspace)-f(\ensuremath{\mathbf{x}}\xspace')|$. In the case of \ensuremath{f_{\mathrm{max}}}\xspace, there are neighboring databases $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'$, where the first database $\ensuremath{\mathbf{x}}\xspace$ has all the inputs as $0$ and the second database has $n-1$ inputs as $0$ but one input is $B$, so, $\Delta_{\ensuremath{f_{\mathrm{max}}}\xspace}=B$.}
The same holds for other functions like a ``thresholded maximum'' $\max_k$
which outputs the maximum value that appears at least $k$ times in the
database. Although these are natural functions about the shape of the data, no DP
mechanisms for them have been offered in the literature.
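As a concrete check of the sensitivity claim in the footnote, the following minimal sketch (with illustrative values of $B$ and $n$ of our choosing) exhibits a pair of neighboring databases witnessing $\Delta_{\ensuremath{f_{\mathrm{max}}}\xspace}=B$:

```python
def f_max(db):
    # The maximum function over a database of reals in [0, B].
    return max(db)

B, n = 100.0, 10                   # illustrative values of our choosing
x  = [0.0] * n                     # all entries 0
x2 = [0.0] * (n - 1) + [B]         # neighboring database: one entry raised to B
assert abs(f_max(x) - f_max(x2)) == B   # the sensitivity bound Delta = B is attained
```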
With FA, \emph{for the first time, we provide DP mechanisms for such
functions, with meaningful worst-case accuracy guarantees}. We
emphasize that we retain the \emph{standard} definition of
$(\ensuremath{\epsilon}\xspace,\delta)$-DP, and achieve strong parameters for it (see above).
Further, the additional dimension of inaccuracy that we allow -- namely,
input distortion -- is in line with what applications like (robust) Machine
Learning often anticipate and tolerate.
We also remark that, \emph{on specific data distributions}, some of the
existing DP mechanisms may already enable empirical FA guarantees (see
\Sectionref{eval} where such guarantees are compared). But crucially,
such guarantees are not always available in the worst case, and even on data
distributions where they do exist, they were not identified previously.
\subsection{Related Work and Paper Organization}
\paragraph{Related work.}
DP, defined by Dwork et al.~\cite{DworkMNS06}, has developed into a highly
influential framework for providing formal privacy guarantees (see
\cite{DworkRo14} for more details). The notion of flexible accuracy we
define is motivated by the difficulty in handling outliers in the data. Some
of the work leading to DP explicitly attempts to address the privacy of
outliers \cite{ChawlaDMSW05,ChawlaDMT05}, as did some of the later works
within the DP framework \cite{DworkL09,BNS,Stable1}. These results rely on having a distribution over the data, or
respond only when the answer is a ``stable value''.
Blum et al.~\cite{BLR} introduced the notion of \emph{usefulness}, which is motivated by similar limitations of
DP as those which motivated flexible accuracy, but as explained later, is
less generally applicable.
Incidentally, Wasserstein distance has been used
in privacy mechanisms in the Pufferfish framework \cite{KiferM14,SongWC17},
but assuming a data distribution.
Several DP mechanisms for histograms are available with a variety of
accuracy guarantees, as discussed in \Sectionref{eval}. While these
mechanisms do not claim any accuracy guarantees for functions computed from
histograms, on specific data distributions and for some of these mechanisms,
we see that FA can be used to empirically capture meaningful accuracy guarantees.
\paragraph{Paper organization.}
We define the lossy Wasserstein distance and establish its properties in \Sectionref{lossy-wass}. We define flexible accuracy in \Sectionref{flex-accu}, where we also give several examples of distortion measures.
In \Sectionref{composition-theorems}, we present our composition theorems for flexible accuracy and differential privacy. We also motivate and define distortion and error sensitivities (with examples) in \Sectionref{dist-sens} and \Sectionref{err-sens}, respectively.
In \Sectionref{histogram_mechs}, we present our (bucketed)-histogram mechanism and state its flexible accuracy and privacy guarantees, and we post-process that mechanism by any histogram-based statistic in \Sectionref{HBS}. Results with distortions other than dropping elements are presented in \Sectionref{beyond-drop}. All the proofs are presented in \Sectionref{proofs}. We empirically evaluate our mechanisms against several other mechanisms from the literature in \Sectionref{eval}. Omitted details are provided in appendices.
\section*{Acknowledgements}
The work of Deepesh Data was supported in part by NSF grants \#1740047, \#2007714, and UC-NL grant LFR-18-548554.
The work of Manoj Prabhakaran was supported in part by the Joint Indo-Israel Project DST/INT/ISR/P-16/2017 and the Ramanujan Fellowship of Dept. of Science and Technology, India.
\bibliographystyle{alpha}
\section{Mechanisms That Exploit Flexible Accuracy}\label{sec:mechanisms}
In this section, we propose and analyze concrete mechanisms for several
important functions. First, we present a new DP mechanism for the histogram
function with flexible accuracy in \Sectionref{histogram_mechs} and then extend it to any ``histogram based
statistic'' ({\em e.g.}, max and support) in \Sectionref{HBS}.
In \Sectionref{beyond-drop}, we show our results for other measures of distortion, beyond just dropping elements.
Also, in \Appendixref{Comparison_BLR}, we note that the mechanisms (e.g., for half-space
queries) for which \cite{BLR} introduced the accuracy notion of
\emph{usefulness} can be cast in the framework of flexible accuracy.
\subsection{A Private Mechanism for Releasing Histograms with Flexible Accuracy}\label{sec:histogram_mechs}
Before describing our new mechanism for releasing histograms with flexible accuracy, let us consider a simpler Boolean task of privately reporting whether a given set is empty or not. Deriving a solution to this simpler problem will pave the way towards our new histogram mechanism.
\paragraph{Private mechanism for determining whether a given set is empty or not.}
For this, the only input distortion we are allowed is to drop some elements -- i.e., we cannot
report an empty set as non-empty. Since we seek to limit the extent of
distortion, let us add a constraint that if a set has $q$ or more elements,
then with probability 1 (or very close to 1) we should report the set as
being non-empty. Let $p_k$ denote the probability that a set of size $k\in[0,q]$ is
reported as being non-empty, so that $p_0=0$ and $p_q=1$.
For our scheme to be \ensuremath{(\epsilon, \delta)}{}-differentially private, we require
\begin{align*}
p_k \leq p_{k+1} \ensuremath{e^{\epsilon}} + \delta, &\qquad
p_{k+1} \leq p_k \ensuremath{e^{\epsilon}} + \delta, \\
(1 - p_k) \leq (1 - p_{k+1}) \ensuremath{e^{\epsilon}} + \delta, &\qquad
(1 - p_{k+1}) \leq (1 - p_k) \ensuremath{e^{\epsilon}} + \delta,
\end{align*}
for $0\le k < q$, with boundary conditions
$p_0 = 0$ and $p_q = 1$.
We are interested in simultaneously reducing $\epsilon$ and $\delta$
subject to the above constraints.
The Pareto-optimal \ensuremath{(\epsilon, \delta)}{} turns out to be given by
$\delta \eepsratio{(q/2) \ensuremath{\epsilon}\xspace} = \frac{1}{2}$, with corresponding
values of $p_k$ being given by
\begin{align}
\label{eq:optprob}
p_{k} = \delta \eepsratio{{k}\ensuremath{\epsilon}\xspace}\text{ for } k \le \nicefrac{q}{2} \quad
\text{ and }\quad
p_{k} = 1 - p_{q-k}\text{ for } k \ge \nicefrac{q}{2}.
\end{align}
The condition $\delta \eepsratio{(q/2) \ensuremath{\epsilon}\xspace} = \frac{1}{2}$ implies that we can achieve $(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace q)})$-differential privacy.
In particular, we may choose $\ensuremath{\epsilon}\xspace = O\left(\frac1{\sqrt{q}}\right)$, and
$\displaystyle \delta = O\left(\frac{e^{-\sqrt{q}/2}}{\sqrt{q}}\right)$, providing a
useful privacy guarantee when $q$ is sufficiently large.
In \Figureref{truncLap}, on the left, we plot the probabilities $p_k$ against $\nicefrac{k}q$ for this choice of \ensuremath{(\epsilon, \delta)}.
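These closed-form probabilities and the four DP constraints can be verified numerically. The sketch below is our illustration (not the paper's code) and assumes that $\eepsratio{z}$ denotes $(e^{z}-1)/(e^{\ensuremath{\epsilon}\xspace}-1)$, the closed form forced by the recursion $p_{k+1}=p_k e^{\ensuremath{\epsilon}\xspace}+\delta$ with $p_0=0$:

```python
import math

def empty_set_probs(q, eps):
    """Report-probabilities p_k of Eq. (eq:optprob), assuming the closed form
    p_k = delta * (e^{k*eps}-1)/(e^{eps}-1), which solves the recursion
    p_{k+1} = p_k * e^{eps} + delta with p_0 = 0; delta is then fixed by the
    Pareto-optimality condition p_{q/2} = 1/2."""
    ratio = lambda z: (math.exp(z) - 1.0) / (math.exp(eps) - 1.0)
    delta = 0.5 / ratio(q * eps / 2.0)
    p = [delta * ratio(k * eps) if k <= q / 2 else 1.0 - delta * ratio((q - k) * eps)
         for k in range(q + 1)]
    return p, delta

# Numerically verify all four (eps, delta)-DP constraints for 0 <= k < q.
q, eps = 100, 0.2
p, delta = empty_set_probs(q, eps)
e, tol = math.exp(eps), 1e-9
assert abs(p[0]) < tol and abs(p[q] - 1.0) < tol and abs(p[q // 2] - 0.5) < tol
for k in range(q):
    assert p[k] <= p[k + 1] * e + delta + tol
    assert p[k + 1] <= p[k] * e + delta + tol
    assert 1 - p[k] <= (1 - p[k + 1]) * e + delta + tol
    assert 1 - p[k + 1] <= (1 - p[k]) * e + delta + tol
```

The binding constraints hold with equality, reflecting the Pareto-optimality of this choice of $p_k$.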
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=0.4\textwidth]{cryptoplot1.PNG}
\includegraphics[width=0.4\textwidth]{cryptoplot2.PNG}
\end{center}
\caption{The probability function in the optimal mechanism for reporting whether a set is empty or not
(left), which can be interpreted as adding noise according to a truncated
Laplace distribution with a negative mean (right). \label{fig:truncLap}}
\end{figure}
\paragraph{Towards a private mechanism for histograms.} To generalize this Boolean mechanism to a full-fledged histogram mechanism, we reinterpret it. In a histogram mechanism,
where again, the distortion allowed in the input is to only drop elements,
we can add a \emph{negative noise} to the count in each ``bar'' of the
histogram. (If the reduced count is negative, we report it as 0.) We seek a noise function such that the probability of the
reported count being 0 (when the actual count is $k\in[0,q]$) is the same as
that of the above mechanism reporting that a set of size $k$ is empty.
That is, the probability of adding a noise $\nu \le -k$ should be $1-p_k$.
Equivalently, if the noise distribution is given by the density function $\sigma$,
we require that
\begin{align*}
\int_{-q}^{-k} \sigma(t) \cdot dt = 1-p_k \qquad \text{ and } \qquad
\sigma(t) = 0 \text{ for } t\not\in[-q,0] \notag.
\end{align*}
Substituting the expression for $p_k$ from \eqref{eq:optprob}, and then differentiating this identity with respect to $k$,
we obtain the following expression for $\sigma(t)$:
\begin{equation}\label{eq:trun_Lap_noise}
\sigma(t) =
\begin{cases}
\frac{1}{1 - e^{-\ensuremath{\epsilon}\xspace q/2}}\ensuremath{\mathrm{Lap}}\xspace(t \mid -\frac{q}{2}, \frac1\ensuremath{\epsilon}\xspace), & \text{ if } t\in[-q,0], \\
0, & \text{ otherwise},
\end{cases}
\end{equation}
where $\ensuremath{\mathrm{Lap}}\xspace$ is the Laplace noise distribution with mean $-\frac{q}2$ and
scale parameter $\nicefrac1\ensuremath{\epsilon}\xspace$.\footnote{The Laplace distribution over \ensuremath{{\mathbb R}}\xspace, with \emph{scaling parameter} $b > 0$ and mean $\mu$, is defined by the density function $\ensuremath{\mathrm{Lap}}\xspace(x|\mu,b) := \frac{1}{2b}e^{\frac{-|x-\mu|}{b}}$ for all $x\in\ensuremath{{\mathbb R}}\xspace$.
We denote a random variable that is distributed according to the Laplace distribution with the scaling parameter $b$ and mean 0 by $\ensuremath{\mathrm{Lap}}\xspace(b)$.
}
We call $\sigma(t)$ the shifted-truncated Laplace distribution, which is equal to the (normalized) Laplace distribution with mean $-\frac{q}2$ and scale parameter $\frac1\ensuremath{\epsilon}\xspace$ when $t\in[-q,0]$, and equal to zero when $t\notin[-q,0]$.
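A sampler for $\sigma(t)$ can be obtained by inverse-CDF sampling from the truncated Laplace distribution. The sketch below is ours (helper names are illustrative); it also deterministically checks the identity $\int_{-q}^{-k}\sigma(t)\,dt=1-p_k$ against the closed form of $p_k$ implied by \eqref{eq:optprob}:

```python
import math, random

def sample_sigma(q, eps, rng):
    # Inverse-CDF sample from sigma: Laplace(mean=-q/2, scale=1/eps),
    # truncated and renormalized to [-q, 0].
    mu, b = -q / 2.0, 1.0 / eps
    cdf = lambda x: (0.5 * math.exp((x - mu) / b) if x <= mu
                     else 1.0 - 0.5 * math.exp(-(x - mu) / b))
    u = cdf(-q) + rng.random() * (cdf(0.0) - cdf(-q))  # uniform over truncated mass
    return mu + b * math.log(2 * u) if u <= 0.5 else mu - b * math.log(2 * (1 - u))

q, eps = 100.0, 0.2
rng = random.Random(0)
xs = [sample_sigma(q, eps, rng) for _ in range(100_000)]
assert all(-q <= v <= 0.0 for v in xs)           # support is [-q, 0]
assert abs(sum(xs) / len(xs) + q / 2) < 0.5      # symmetric around the mean -q/2

# Deterministic check of  integral_{-q}^{-k} sigma(t) dt = 1 - p_k,
# with p_k = delta * (e^{k*eps}-1)/(e^{eps}-1) for k <= q/2 (symmetric above).
mu, b = -q / 2.0, 1.0 / eps
cdf = lambda x: (0.5 * math.exp((x - mu) / b) if x <= mu
                 else 1.0 - 0.5 * math.exp(-(x - mu) / b))
delta = 0.5 * (math.exp(eps) - 1.0) / (math.exp(q * eps / 2.0) - 1.0)
ratio = lambda z: (math.exp(z) - 1.0) / (math.exp(eps) - 1.0)
for k in (0.0, 10.0, 50.0, 90.0, 100.0):
    p_k = delta * ratio(k * eps) if k <= q / 2 else 1.0 - delta * ratio((q - k) * eps)
    mass_below = (cdf(-k) - cdf(-q)) / (cdf(0.0) - cdf(-q))
    assert abs(mass_below - (1.0 - p_k)) < 1e-9
```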
\begin{algorithm}
\caption{Shifted and Truncated Laplace Mechanism, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}}\label{algo:hist-mech}
{\bf Parameter:} Threshold $\ensuremath{\tau}\xspace\in[0,1)$; ground set \ensuremath{\mathcal{G}}\xspace; and $\ensuremath{\epsilon}\xspace>0$. \\
{\bf Input:} A histogram, $\ensuremath{\mathbf{x}}\xspace:\ensuremath{\mathcal{G}}\xspace\rightarrow\ensuremath{{\mathbb N}}\xspace$. \\
{\bf Output:} A histogram, $\ensuremath{\mathbf{y}}\xspace:\ensuremath{\mathcal{G}}\xspace\rightarrow\ensuremath{{\mathbb N}}\xspace$. \\
\vspace{-0.3cm}
\begin{algorithmic}[1]
\ForAll{$g\in\ensuremath{\mathcal{G}}\xspace$}
\State $z_g \leftarrow \pi_q$, where $q:=\ensuremath{\tau}\xspace|\ensuremath{\mathbf{x}}\xspace|$ and
$\displaystyle
\pi_q(z) =
\begin{cases}
\frac{1}{1-e^{-\ensuremath{\epsilon}\xspace{q}/{2}}} \ensuremath{\mathrm{Lap}}\xspace(z \mid -\frac{q}{2},\frac{1}{\ensuremath{\epsilon}\xspace}) & \text{ if } z\in[-q,0], \\
0 & \text{ otherwise. }\\
\end{cases}
$
\State $\ensuremath{\mathbf{y}}\xspace(g) := \max(0,\lfloor \ensuremath{\mathbf{x}}\xspace(g) + z_g \rceil)$
\Comment{$z_g$ need not be computed for $g$ s.t.\ $\ensuremath{\mathbf{x}}\xspace(g)=0$}
\EndFor
\State Return $\ensuremath{\mathbf{y}}\xspace$.
\end{algorithmic}
\end{algorithm}
\paragraph{The shifted-truncated Laplace mechanism for releasing histograms with flexible accuracy.}
Our final histogram mechanism is derived by adding the noise distributed according to $\sigma(t)$ from \eqref{eq:trun_Lap_noise} with appropriate parameter $q$ to each bar of the histogram, followed by rounding to the nearest integer (or to $0$, if it is negative).
Before describing the mechanism, we need some notation.
Datasets can be abstractly represented by multi-sets, and each element in the multi-set belongs to a ground set $\ensuremath{\mathcal{G}}\xspace$.
Formally, a multi-set $\ensuremath{\mathbf{x}}\xspace$ over the ground set $\ensuremath{\mathcal{G}}\xspace$
is a function $\ensuremath{\mathbf{x}}\xspace:\ensuremath{\mathcal{G}}\xspace\rightarrow\ensuremath{{\mathbb N}}\xspace$ that outputs the multiplicity of elements
in \ensuremath{\mathcal{G}}\xspace. The \emph{size} and \emph{support} of \ensuremath{\mathbf{x}}\xspace are defined as $|\ensuremath{\mathbf{x}}\xspace| :=
\sum_{i\in\ensuremath{\mathcal{G}}\xspace} \ensuremath{\mathbf{x}}\xspace(i)$ and $\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace):=\{i\in\ensuremath{\mathcal{G}}\xspace:\ensuremath{\mathbf{x}}\xspace(i)\neq0\}$,
respectively. We shall be interested in finite-sized multi-sets, which we
refer to as histograms. We denote the domain of all histograms over \ensuremath{\mathcal{G}}\xspace by
\Hspace\ensuremath{\mathcal{G}}\xspace. For DP, the standard notion of neighborhood among histograms is
defined as $\ensuremath{\mathbf{x}}\xspace\ensuremath{\sim_{\mathrm{hist}}\xspace}\ensuremath{\mathbf{x}}\xspace'$ iff $\sum_{i\in\ensuremath{\mathcal{G}}\xspace}|\ensuremath{\mathbf{x}}\xspace(i)-\ensuremath{\mathbf{x}}\xspace'(i)|\le 1$.
Later, we shall also require \ensuremath{\mathcal{G}}\xspace to be a metric space, endowed with a metric
\ensuremath{\mathfrak{d}}\xspace.
We describe our shifted-truncated Laplace mechanism for the identity function (which maps histograms to histograms and is denoted by $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}:\Hspace\ensuremath{\mathcal{G}}\xspace\to\Hspace\ensuremath{\mathcal{G}}\xspace$) in \Algorithmref{hist-mech}.
It simply \emph{decreases} the multiplicity of each element by adding a bounded quantity
sampled from the shifted-truncated Laplace distribution. The following theorem, proven in \Sectionref{hist-priv-accu-proof}, summarizes the privacy and flexible accuracy guarantees achieved by \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} for a particular choice of $\ensuremath{\epsilon}\xspace$.
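A minimal sketch of \Algorithmref{hist-mech}, with histograms represented as Python dictionaries from ground-set elements to multiplicities (helper names and parameter values are ours):

```python
import math, random

def sample_pi_q(q, eps, rng):
    # Inverse-CDF sample from pi_q: Laplace(-q/2, 1/eps) truncated to [-q, 0].
    mu, b = -q / 2.0, 1.0 / eps
    cdf = lambda x: (0.5 * math.exp((x - mu) / b) if x <= mu
                     else 1.0 - 0.5 * math.exp(-(x - mu) / b))
    u = cdf(-q) + rng.random() * (cdf(0.0) - cdf(-q))
    return mu + b * math.log(2 * u) if u <= 0.5 else mu - b * math.log(2 * (1 - u))

def m_trlap(x, tau, eps, rng):
    """Shifted-Truncated Laplace mechanism (a sketch of Algorithm 1).
    x maps ground-set elements to multiplicities; the noise is <= 0, so
    multiplicities only decrease, matching the drop-only distortion d_drop."""
    q = tau * sum(x.values())               # q = tau * |x|
    y = {}
    for g, count in x.items():
        if count == 0:
            continue                        # z_g need not be sampled
        y[g] = max(0, round(count + sample_pi_q(q, eps, rng)))
    return y

rng = random.Random(1)
x = {0.5: 40, 3.5: 60}                      # a 2-bar histogram, |x| = 100
y = m_trlap(x, tau=0.1, eps=1.0, rng=rng)
assert all(y[g] <= x[g] for g in x)         # counts never increase
assert sum(x.values()) - sum(y.values()) <= 20  # at most q = 10 dropped per bar
```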
\begin{thm}\label{thm:hist-priv-accu}
On inputs $\ensuremath{\mathbf{x}}\xspace$ of size $n$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} from \Algorithmref{hist-mech} satisfies the following guarantees:
\begin{itemize}
\item \underline{Privacy:} For any $\ensuremath{\epsilon}\xspace,\tau$ such that $\ensuremath{\epsilon}\xspace\tau n \geq 2$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}.
\item \underline{Flexible accuracy:} If $|\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace)|\le t$, then for any $\ensuremath{\epsilon}\xspace>0$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $(\ensuremath{\tau}\xspace t,0,0)$-accurate for the identity function, w.r.t.\ the distortion measure \ensuremath{\dn_{\mathrm{drop}}}\xspace.
\end{itemize}
\end{thm}
\begin{remark}\label{remark:hist-params}
There are many choices of $\ensuremath{\epsilon}\xspace,\tau$ for which we get favorable privacy parameters in \Theoremref{hist-priv-accu}. For instance, choosing $\ensuremath{\epsilon}\xspace=\frac{1}{\sqrt{\tau n}}$ gives that \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $\left(\frac{1}{\sqrt{\tau n}},\frac{e^{-\Omega(\sqrt{\tau n})}}{\sqrt{\tau n}}\right)$-DP, provided $\tau$ is such that $\sqrt{\tau n} \geq 2$. Note that $\tau$ is the maximum overall fraction of elements we drop from each bar of the histogram. For example, by choosing $\tau=\frac{1}{n^{1/2}}$, we get that \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $\left(\frac{1}{n^{1/4}},\frac{e^{-\Omega(n^{1/4})}}{n^{1/4}}\right)$-DP and $(\frac{t}{n^{1/4}},0,0)$-accurate. See also \Sectionref{choosing-params} for more discussion.
\end{remark}
\Remarkref{hist-params} shows that the privacy parameters of \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} improve as the database size $|\ensuremath{\mathbf{x}}\xspace|$ grows, by dropping only a small number of elements, provided that the support size $t$ is small. To handle larger supports, this mechanism can be composed with a simple bucketing mechanism of fixed width $w$, which yields a small support size, as described next.
\paragraph{Bucketed shifted-truncated Laplace mechanism.}
In order to explain the idea behind our bucketing mechanism, for simplicity, we consider the ground set $\ensuremath{\mathcal{G}}\xspace=[0,B)$.\footnote{\label{foot:dim-d}We also present the general results for $\ensuremath{\mathcal{G}}\xspace=[0,B)^d$ (which is a $d$-dimensional cube with side-length equal to $B$) in \Appendixref{d-dim-results}. Also see \Remarkref{d-dim} in \Sectionref{beyond-drop}.} In our bucketing mechanism, we divide the interval $[0,B)$ into $t=\lceil \frac{B}{w}\rceil$ sub-intervals (buckets) of length $w$, and map each input point to the center of the nearest sub-interval (bucket). This mapping of input points to the nearest bucket introduces error in the output space, and the value of $w$ depends on the amount of error we want to tolerate in the output space. In our bucketed shifted-truncated Laplace mechanism, we run our shifted-truncated Laplace mechanism (\Algorithmref{hist-mech}) on the bucketed histogram.
\begin{algorithm}
\caption{Bucketing Mechanism, \mbuc{w,[0,B)}}\label{algo:bucketing}
{\bf Parameter:} Bucket width $w$; ground set $[0,B)$. \\
{\bf Input:} A histogram $\ensuremath{\mathbf{x}}\xspace$ over $[0,B)$. \\
{\bf Output:} A histogram $\ensuremath{\mathbf{y}}\xspace$ over $S =\{ w(i-\frac12) : i \in [t], t = \lceil \frac{B}{w} \rceil \}$, and $|\ensuremath{\mathbf{y}}\xspace| = |\ensuremath{\mathbf{x}}\xspace|$. \\
\vspace{-0.3cm}
\begin{algorithmic}[1]
\ForAll{$s \in S $}
\State $\ensuremath{\mathbf{y}}\xspace(s) := \sum_{g:g-s \in [\frac{-w}2,\frac{w}2)} \; \ensuremath{\mathbf{x}}\xspace(g)$
\EndFor
\State Return \ensuremath{\mathbf{y}}\xspace
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{BucketHist Mechanism, \mBhist{\alpha,\beta,[0,B)}}\label{algo:histogram-mech}
{\bf Parameter:} Accuracy parameters $\alpha,\beta$; ground set $[0,B)$. \\
{\bf Input:} A histogram \ensuremath{\mathbf{x}}\xspace over $[0,B)$. \\
{\bf Output:} A histogram \ensuremath{\mathbf{y}}\xspace over $[0,B)$. \\
\vspace{-0.3cm}
\begin{algorithmic}[1]
\State $w := 2\beta$, $t := \lceil \frac{B}{w}\rceil$, $\ensuremath{\tau}\xspace := \alpha/t$
\State Return $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)} (\ensuremath{\mathbf{x}}\xspace)$
\Comment{where \mbuc{w,[0,B)} is in \Algorithmref{bucketing}}
\end{algorithmic}
\end{algorithm}
Our bucketing mechanism $\mbuc{w,[0,B)}$ and the final bucketed-histogram mechanism $\mBhist{\alpha,\beta,[0,B)}$ are presented in \Algorithmref{bucketing} and \Algorithmref{histogram-mech}, respectively.
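The bucketing step \mbuc{w,[0,B)} of \Algorithmref{bucketing} can be sketched as follows, with histograms again represented as dictionaries (names and values are illustrative):

```python
import math

def m_bucket(x, w, B):
    """Bucketing mechanism (a sketch of Algorithm 2): move every point of a
    histogram over [0, B) to the center of its width-w bucket; |y| = |x|."""
    t = math.ceil(B / w)
    y = {}
    for g, count in x.items():
        s = w * (min(int(g // w), t - 1) + 0.5)   # center of g's bucket
        y[s] = y.get(s, 0) + count
    return y

x = {0.2: 3, 0.9: 2, 7.6: 5}     # histogram over [0, 8)
y = m_bucket(x, w=2.0, B=8.0)
assert y == {1.0: 5, 7.0: 5}     # each point moved by at most w/2 = 1.0
assert sum(y.values()) == sum(x.values())
```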
Since \mbuc{w,[0,B)} introduces error in the output space, we need a metric over $\Hspace{[0,B)}$ to analyze its flexible accuracy. We use the following natural metric \ensuremath{\met_{\mathrm{hist}}}\xspace over $\Hspace{[0,B)}$, which is defined as $\dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}:=\Winf{}(\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace'}{|\ensuremath{\mathbf{y}}\xspace'|})$. Here, $\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}$ is treated as a probability distribution and the underlying metric for $\Winf{}$ is the standard distance metric over $\ensuremath{{\mathbb R}}\xspace$.
The following theorem presents the accuracy and privacy guarantees of $\mBhist{\alpha,\beta,[0,B)}$, which we prove in \Sectionref{bucketHist-priv-accu-proof}.
\begin{thm}\label{thm:bucketing-hist}
On inputs of size $n$, $\mBhist{\alpha,\beta,[0,B)}$ is $(\alpha, \beta, 0)$-accurate for the identity function, w.r.t.~the distortion measure $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ and metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$. Furthermore, for any $\ensuremath{\epsilon}\xspace>0$, and $\tau=\alpha(\frac{2\beta}B)$, if $\ensuremath{\epsilon}\xspace\tau n \geq 2$, then $\mBhist{\alpha,\beta,[0,B)}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}.
\end{thm}
We can instantiate \Theoremref{bucketing-hist} with different parameter settings to achieve favorable privacy-accuracy tradeoffs. See \Sectionref{choosing-params} for more details.
\subsection{Histogram-Based-Statistics}\label{sec:HBS}
\Theoremref{bucketing-hist} provides a powerful tool to obtain a DP mechanism
for \emph{any deterministic} histogram-based-statistic $\ensuremath{{f_{\mathrm{HBS}}}}\xspace: \Hspace{[0,B)} \rightarrow \ensuremath{\mathcal{A}}\xspace$, simply
by defining
\begin{align}
\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)} = \ensuremath{{f_{\mathrm{HBS}}}}\xspace \circ \mBhist{\alpha,\beta,[0,B)}. \label{fhbs-mech-1dim}
\end{align}
To analyze the flexible accuracy of $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}$, we
define the \emph{metric sensitivity} function of $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$.
\begin{defn}\label{def:metric-sensitivity}
The \emph{metric sensitivity} of a histogram-based-statistic $\ensuremath{{f_{\mathrm{HBS}}}}\xspace: \Hspace{[0,B)} \rightarrow \ensuremath{\mathcal{A}}\xspace$ is the function $\Delta_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}: \ensuremath{\R_{\ge0}}\xspace \rightarrow \ensuremath{\R_{\ge0}}\xspace$,
defined in terms of a metric \ensuremath{\met_{\mathrm{\A}}}\xspace over $\ensuremath{\mathcal{A}}\xspace$ as
\begin{align}\label{eq:fhbs-sens}
\Delta_\ensuremath{{f_{\mathrm{HBS}}}}\xspace(\beta) = \sup_{\substack{\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace'\in\Hspace{[0,B)} : \\ \dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} \leq \beta}} \dA{\ensuremath{{f_{\mathrm{HBS}}}}\xspace(\ensuremath{\mathbf{x}}\xspace)}{\ensuremath{{f_{\mathrm{HBS}}}}\xspace(\ensuremath{\mathbf{x}}\xspace')}.
\end{align}
\end{defn}
The privacy and accuracy guarantees of our HBS mechanism are stated in the following theorem, which we prove in \Sectionref{proof_bucketing-general}.
\begin{thm}\label{thm:bucketing-general}
On inputs of size $n$, $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ is $(\alpha, \Delta_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}(\beta), 0)$-accurate for \ensuremath{{f_{\mathrm{HBS}}}}\xspace w.r.t.\ distortion \ensuremath{\dn_{\mathrm{drop}}}\xspace and metric \ensuremath{\met_{\mathrm{\A}}}\xspace. Furthermore, for any $\ensuremath{\epsilon}\xspace>0$, and $\tau=\alpha(\frac{2\beta}B)$, if $\ensuremath{\epsilon}\xspace\tau n \geq 2$, then $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP.
\end{thm}
We can instantiate \Theoremref{bucketing-general} with different parameter settings to achieve favorable privacy-accuracy tradeoffs. See \Sectionref{choosing-params} for more details.
\Theoremref{bucketing-general} has direct applications to functions
which have high sensitivity (defined w.r.t.\ the neighborhood relation
$\sim$), but low metric sensitivity. We point out two such examples,
for which no solutions with non-trivial guarantees were previously offered.
\subsubsection{Computing the Maximum or Minimum Element of a Multi-set}\label{sec:max}
We define \ensuremath{f_{\mathrm{max}}}\xspace (or simply $\max$) for histograms over real numbers as $\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{x}}\xspace):=\max\{g:\ensuremath{\mathbf{x}}\xspace(g)>0\}$.
Similarly, we can define \ensuremath{f_{\mathrm{min}}}\xspace (or simply $\min$) as $\ensuremath{f_{\mathrm{min}}}\xspace(\ensuremath{\mathbf{x}}\xspace):=\min\{g:\ensuremath{\mathbf{x}}\xspace(g)>0\}$.
We give our result for $\ensuremath{f_{\mathrm{max}}}\xspace$ only; the same result holds for $\ensuremath{f_{\mathrm{min}}}\xspace$ as well.
\begin{corol}\label{corol:bucketing-max}
On inputs of size $n$, $\mmax{\alpha, \beta, [0,B)}$ is $(\alpha, \beta, 0)$-accurate for $\ensuremath{f_{\mathrm{max}}}\xspace$ w.r.t.\ the distortion \ensuremath{\dn_{\mathrm{drop}}}\xspace and the standard distance metric over \ensuremath{{\mathbb R}}\xspace. Furthermore, for any $\ensuremath{\epsilon}\xspace>0$, and $\tau=\alpha(\frac{2\beta}B)$, if $\ensuremath{\epsilon}\xspace\tau n \geq 2$, then $\mmax{\alpha, \beta, [0,B)}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP.
\end{corol}
The proof of \Corollaryref{bucketing-max} is straightforward; we give it in \Sectionref{bucketing-max_proof}.
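For illustration, the composition $\mmax{\alpha,\beta,[0,B)}=\ensuremath{f_{\mathrm{max}}}\xspace\circ\mBhist{\alpha,\beta,[0,B)}$ from \eqref{fhbs-mech-1dim} can be sketched end-to-end as below; all helper names and parameter values are ours, not the paper's:

```python
import math, random

def sample_pi_q(q, eps, rng):
    # Laplace(-q/2, 1/eps) truncated to [-q, 0], via inverse-CDF sampling.
    mu, b = -q / 2.0, 1.0 / eps
    cdf = lambda x: (0.5 * math.exp((x - mu) / b) if x <= mu
                     else 1.0 - 0.5 * math.exp(-(x - mu) / b))
    u = cdf(-q) + rng.random() * (cdf(0.0) - cdf(-q))
    return mu + b * math.log(2 * u) if u <= 0.5 else mu - b * math.log(2 * (1 - u))

def m_max(x, alpha, beta, B, eps, rng):
    """f_max composed with BucketHist (Algorithm 3): bucket with width w = 2*beta,
    add shifted-truncated Laplace noise with tau = alpha/t, then return the max
    surviving bucket center (None if every bar was zeroed out)."""
    w = 2.0 * beta
    t = math.ceil(B / w)
    tau = alpha / t
    buckets = {}
    for g, c in x.items():                              # M_bucket
        s = w * (min(int(g // w), t - 1) + 0.5)
        buckets[s] = buckets.get(s, 0) + c
    q = tau * sum(x.values())
    y = {s: max(0, round(c + sample_pi_q(q, eps, rng)))
         for s, c in buckets.items()}                   # M_trLap
    support = [s for s, c in y.items() if c > 0]
    return max(support) if support else None            # f_max

rng = random.Random(2)
ages = {25.0: 500, 30.0: 490, 82.0: 10}   # e.g., patient ages over [0, 100)
out = m_max(ages, alpha=0.1, beta=2.5, B=100.0, eps=1.0, rng=rng)
# Here q = 5, so the 10-element bar at 82 always survives, and the
# reported value 82.5 is within beta = 2.5 of the true maximum 82.
assert out == 82.5
```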
\subsubsection{Computing the Support of a Multi-set}\label{sec:support}
\ensuremath{f_{\mathrm{supp}}}\xspace (or simply \ensuremath{\mathrm{support}}\xspace) is defined as $\ensuremath{f_{\mathrm{supp}}}\xspace(\ensuremath{\mathbf{x}}\xspace):=\{g:\ensuremath{\mathbf{x}}\xspace(g)>0\}$,
which maps a multiset to the set that forms its
support. To measure accuracy, we use a metric \ensuremath{\met_{\mathrm{supp}}}\xspace over
the set of finite subsets of \ensuremath{{\mathbb R}}\xspace: for any two finite subsets $\ensuremath{\mathcal{S}}\xspace_1,\ensuremath{\mathcal{S}}\xspace_2\subseteq \ensuremath{{\mathbb R}}\xspace$, define
\[\dsupp{\ensuremath{\mathcal{S}}\xspace_1}{\ensuremath{\mathcal{S}}\xspace_2} := \max\left\{
\max_{s_1\in \ensuremath{\mathcal{S}}\xspace_1} \min_{s_2\in \ensuremath{\mathcal{S}}\xspace_2} |s_1-s_2|,\, \max_{s_2 \in \ensuremath{\mathcal{S}}\xspace_2}\min_{s_1 \in \ensuremath{\mathcal{S}}\xspace_1} |s_2-s_1|\right\}.\]
\ensuremath{\met_{\mathrm{supp}}}\xspace is the Hausdorff distance between the two sets: it measures the farthest that any point in one of the sets is from the nearest point of the other set. For example, if $s_i^{\min}:=\min_{s\in\ensuremath{\mathcal{S}}\xspace_i}\{s\}$ and $s_i^{\max}:=\max_{s\in\ensuremath{\mathcal{S}}\xspace_i}\{s\}$ denote the minimum and the maximum elements of the set $\ensuremath{\mathcal{S}}\xspace_i$ (for $i=1,2$), respectively, then it can be verified that $\dsupp{\ensuremath{\mathcal{S}}\xspace_1}{\ensuremath{\mathcal{S}}\xspace_2}\geq\max\{|s_1^{\min}-s_2^{\min}|,|s_1^{\max}-s_2^{\max}|\}$, with equality when each set contains at most two elements (but not in general: for $\ensuremath{\mathcal{S}}\xspace_1=\{0,10\}$ and $\ensuremath{\mathcal{S}}\xspace_2=\{0,5,10\}$, the left-hand side is $5$ while the right-hand side is $0$).
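For intuition, \ensuremath{\met_{\mathrm{supp}}}\xspace can be computed directly from its definition. The following minimal Python sketch (illustrative only, not part of any mechanism in the paper) does so for small finite sets.

```python
# Illustrative computation of the support-distance metric d_supp (the
# Hausdorff distance between two finite subsets of the reals); this is a
# sketch for intuition only, not part of any mechanism in the paper.
def d_supp(s1, s2):
    """Farthest that any point of one set is from the nearest point of the other."""
    s1, s2 = set(s1), set(s2)
    assert s1 and s2, "both sets must be non-empty"
    fwd = max(min(abs(a - b) for b in s2) for a in s1)
    bwd = max(min(abs(b - a) for a in s1) for b in s2)
    return max(fwd, bwd)

print(d_supp({0, 10}, {0, 10}))     # 0: identical supports
print(d_supp({0, 10}, {0, 5, 10}))  # 5: the point 5 is 5 away from {0, 10}
```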
\begin{corol}\label{corol:bucketing-supp}
On inputs of size $n$, $\msupp{\alpha, \beta, [0,B)}$ is $(\alpha, \beta, 0)$-accurate for $\ensuremath{f_{\mathrm{supp}}}\xspace$ w.r.t.\ the distortion \ensuremath{\dn_{\mathrm{drop}}}\xspace and metric $\ensuremath{\met_{\mathrm{supp}}}\xspace$. Furthermore, for any $\ensuremath{\epsilon}\xspace>0$, and $\tau=\alpha(\frac{2\beta}B)$, if $\ensuremath{\epsilon}\xspace\tau n \geq 2$, then $\msupp{\alpha, \beta, [0,B)}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP.
\end{corol}
The proof of \Corollaryref{bucketing-supp} is straightforward, and we prove it in \Sectionref{bucketing-supp_proof}.
\subsubsection{Choosing the Parameters}\label{sec:choosing-params}
As mentioned in \Remarkref{hist-params} for \Theoremref{hist-priv-accu}, there are many choices of $\ensuremath{\epsilon}\xspace,\tau$ for which we can get favorable privacy and accuracy parameters in Theorems~\ref{thm:bucketing-hist},~\ref{thm:bucketing-general}, and Corollaries~\ref{corol:bucketing-max},~\ref{corol:bucketing-supp}.
For concreteness, in the following, we illustrate the privacy-accuracy trade-off by choosing parameters for the $\mmax{\alpha, \beta, [0,B)}$ mechanism in \Corollaryref{bucketing-max}; the same result applies to Theorems~\ref{thm:bucketing-hist},~\ref{thm:bucketing-general}, and \Corollaryref{bucketing-supp} as well.
If we choose $\ensuremath{\epsilon}\xspace=\frac{1}{\sqrt{\tau n}}$ and $\tau$ is such that $\frac{1}{\ensuremath{\epsilon}\xspace}=\sqrt{\tau n} \geq 2$, then by dropping only $\alpha n = \frac{1}{\ensuremath{\epsilon}\xspace^2}\frac{B}{2\beta}$ elements from the entire dataset, the mechanism $\mmax{\alpha,\beta,[0,B)}$ achieves $\left(\frac{1}{\sqrt{\tau n}},\frac{e^{-\Omega(\sqrt{\tau n})}}{\sqrt{\tau n}}\right)$-differential privacy.
If $\beta/B$ is a small constant (say, $1/100$), which corresponds to perturbing the output by a small constant fraction of the whole range $B$, then by dropping only $\alpha n = O(\frac{1}{\ensuremath{\epsilon}\xspace^2})$ elements, $\mmax{\alpha,\beta,[0,B)}$ achieves $(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\frac{1}{\ensuremath{\epsilon}\xspace})})$-differential privacy.
We can set any $\tau$ that satisfies $\frac{1}{\ensuremath{\epsilon}\xspace}=\sqrt{\tau n}\geq2$ in this result. For example,\\
\parbox{15.5cm}{By setting $\ensuremath{\epsilon}\xspace=\frac{1}{(\log n)^2}$, we get that by dropping only $O((\log n)^4)$ elements from the entire dataset, $\mmax{\alpha,\beta,[0,B)}$ achieves $(\frac{1}{(\log n)^2},\frac{n^{-\Omega(\log n)}}{(\log n)^2})$-differential privacy while incurring only a small constant error (of the entire range) in the output.\\}
\noindent Note that in the above setting of parameters, we take $\ensuremath{\epsilon}\xspace=\frac{1}{\sqrt{\tau n}}$, which implies that the bound on $\delta$ can at best be a small constant for any constant $\ensuremath{\epsilon}\xspace$. This is because $\ensuremath{\epsilon}\xspace\tau n=\sqrt{\tau n} = \frac{1}{\ensuremath{\epsilon}\xspace}$ is a constant, which implies that $\delta=\ensuremath{\epsilon}\xspace e^{-\Omega(\frac{1}{\ensuremath{\epsilon}\xspace})}$ will be a constant too. Therefore, to obtain privacy guarantees with a small constant $\ensuremath{\epsilon}\xspace$ such that $\delta$ decays (exponentially) with $n$, we will work with the general privacy result of $(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)})$-DP as in \Corollaryref{bucketing-max}.
For example, \\
\parbox{15.5cm}{By setting $\ensuremath{\epsilon}\xspace=0.1$ and $\tau=\frac{1}{n^c}$ (for any $c\in(0,1)$), we get that by dropping only $\alpha n = \tau n \frac{B}{2\beta} = O(n^{1-c})$ elements from the entire dataset, $\mmax{\alpha,\beta,[0,B)}$ achieves $(0.1,e^{-\Omega(n^{1-c})})$-differential privacy while incurring only a small constant error (of the entire range) in the output.\\}
\noindent For other parameter settings, see the result on page~\pageref{informal-result-max}, following the statement of our informal result for max.
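The arithmetic behind such parameter choices is easy to check numerically. The sketch below uses hypothetical values for $n$, $B$, $\beta$, $\ensuremath{\epsilon}\xspace$, and $c$, and treats the constant hidden in the $\Omega(\cdot)$ as $1$, so the computed $\delta$ bound is indicative only.

```python
import math

# Hypothetical parameter check for the max mechanism with tau = 1/n^c and
# eps = 0.1, as in the boxed example above. The Omega(.) constant is unknown;
# we take it to be 1, so delta_bound is indicative only.
n, B, beta = 10**6, 100.0, 1.0       # assumed dataset size and range
eps, c = 0.1, 0.5
tau = n ** (-c)                      # tau = 1/n^c for some c in (0, 1)
alpha = tau * B / (2 * beta)         # from tau = alpha * (2*beta/B)
dropped = alpha * n                  # = O(n^{1-c}) dropped elements
assert eps * tau * n >= 2            # condition required by the corollary
delta_bound = eps * math.exp(-eps * tau * n)
print(dropped, delta_bound)
```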
\subsection{Further Applications: Beyond \ensuremath{\dn_{\mathrm{drop}}}\xspace}\label{sec:beyond-drop}
Useful variants of \Theoremref{bucketing-general} can be obtained with
measures of distortion other than \ensuremath{\dn_{\mathrm{drop}}}\xspace. In particular, in \eqref{eq:move_defn} and \eqref{eq:drop_move_defn}, we defined the distortions $\ensuremath{\dn_{\mathrm{move}}}\xspace$ and $\dropmove\eta$, respectively, where $\ensuremath{\dn_{\mathrm{move}}}\xspace$ allows moving/perturbing of data points and $\dropmove\eta$ allows both dropping and moving.
The following theorem provides the privacy and accuracy guarantees of $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ (defined in \eqref{fhbs-mech-1dim}) w.r.t.\ the distortion measure $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace$, and we prove it in \Sectionref{beyond-drop_proofs}.
\begin{thm}\label{thm:bucketing-general-drmv}
On inputs of size $n$, $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ is $(\alpha+\eta\beta, 0, 0)$-accurate for \ensuremath{{f_{\mathrm{HBS}}}}\xspace w.r.t.\ the distortion measure $\dropmove\eta$. Furthermore, for any $\ensuremath{\epsilon}\xspace>0$, and $\tau=\alpha(\frac{2\beta}B)$, if $\ensuremath{\epsilon}\xspace\tau n \geq 2$, then $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP.
\end{thm}
\begin{remark}
This is analogous to \Theoremref{bucketing-general},
but with the important difference that it does
not refer to the metric sensitivity of the function \ensuremath{{f_{\mathrm{HBS}}}}\xspace, and does not even
require a metric over its codomain \ensuremath{\mathcal{A}}\xspace. This makes this result applicable to
complex function families like maximum-margin separators or neural net
classifiers. However, the accuracy notion uses a measure of distortion that
allows dropping a (small) fraction of the data \emph{and} (slightly) moving all
data points, which may not be acceptable in all applications.
\end{remark}
\begin{remark}[Extending the results from $[0,B)$ to $[0,B)^d$]\label{remark:d-dim}
Note that the bucketing mechanism $\mbuc{w,[0,B)}$ (\Algorithmref{bucketing}) and the bucketed-histogram mechanism $\mBhist{\alpha,\beta,[0,B)}$ (\Algorithmref{histogram-mech}) are given for the ground set $\ensuremath{\mathcal{G}}\xspace=[0,B)$.
However, as mentioned in \Footnoteref{dim-d}, they can easily be extended to the $d$-dimensional ground set $\ensuremath{\mathcal{G}}\xspace=[0,B)^d$, and we present the $d$-dimensional analogues of the above two mechanisms in \Appendixref{d-dim-results}. All our results in \Theoremref{bucketing-hist}, \Theoremref{bucketing-general}, and \Theoremref{bucketing-general-drmv} will hold verbatim with these generalized mechanisms, except for the value of $\tau$, which will be replaced by $\tau=\alpha(\frac{2\beta}{B\sqrt{d}})^d$; see \Appendixref{d-dim-results} for a proof of this.
\end{remark}
\section{Proofs}\label{sec:proofs}
In our proofs, when dealing with infimum/supremum (for example, in the definitions of the lossy Wasserstein distance, measure of distortion, distortion and error sensitivities, etc.), for simplicity, we assume that the infimum/supremum is always achieved; all our proofs can be easily extended to work without this assumption by taking appropriate limits when working with infinitesimal quantities.
\subsection{Proof of \Lemmaref{wass-triangle} -- Triangle Inequality for $\Winf{\gamma}$}\label{sec:triangle-ineq_Wass}
In this section we prove \Lemmaref{wass-triangle}, and along the way derive useful properties about lossy Wasserstein distance, that may be of independent interest.
The following lemma is crucial to proving \Lemmaref{wass-triangle}.
\begin{lem}\label{lem:wass-marginal-loss}
Let $P$ and $Q$ be any two distributions over a metric space
$(\Omega,\ensuremath{\mathfrak{d}}\xspace)$. If $\Winf{\gamma}(P, Q) = \beta$, then for all $\gamma_1 \in [0, \gamma]$, there exist distributions $P'$ and $Q'$ s.t. $\Delta(P, P') \le \gamma_1$, $\Delta(Q, Q') \le \gamma - \gamma_1$, and $\Winf{}(P', Q') = \beta$.
\end{lem}
\begin{proof}
Let $P$ and $Q$ be any two distributions over a metric space
$(\Omega,\ensuremath{\mathfrak{d}}\xspace)$. Let us assume that the optimal $\Winf{\gamma}(P, Q)$ (= $\beta$) is obtained at the joint distribution $\phi_{opt}$. Let the first and the second marginal distributions of $\phi_{opt}$ be $P_{opt}$ and $Q_{opt}$, respectively. Let $\Delta(P, P_{opt}) = \gamma_{opt}$, which implies that $\Delta(Q, Q_{opt}) \le \gamma - \gamma_{opt}$.
Define a function $R_{opt}:\Omega\to\ensuremath{{\mathbb R}}\xspace$ as $R_{opt}(\omega) := \Prob{P_{opt}}{\omega} - \Prob{P}{\omega}$ for all $\omega\in\Omega$. Clearly, $\int_{\Omega} R_{opt}(\omega)\ensuremath{\,\mathrm{d}}\xspace\omega= 0$ and $\int_{\Omega} |R_{opt}(\omega)|\ensuremath{\,\mathrm{d}}\xspace\omega= 2\gamma_{opt}$.
In the discussion below, we shall take a general $\gamma_1 \in [0,\gamma_{opt})$ and construct distributions $P'$ and $Q'$ s.t.\ $\Delta(P, P') \le \gamma_1$, $\Delta(Q, Q') \le \gamma - \gamma_1$, and $\Winf{}(P', Q') = \beta$, as required in the conclusion of \Lemmaref{wass-marginal-loss}.
We can show a similar result for the other case also when $\gamma_1 \in (\gamma_{opt}, \gamma]$ (by swapping the roles of $P$ and $Q$ in the above as well as in the argument below). This will complete the proof of \Lemmaref{wass-marginal-loss}.
Define a function $R':\Omega\to\ensuremath{{\mathbb R}}\xspace$ as $R'(\omega) := \frac{\gamma_1}{\gamma_{opt}}R_{opt}(\omega)$.
For any $\omega\in\Omega$, let $P'(\omega)=\Prob{P}{\omega} + R'(\omega)$.
After substituting the value of $R_{opt}(\omega) = \Prob{P_{opt}}{\omega} - \Prob{P}{\omega}$, we get $P'(\omega)=\frac{\gamma_1}{\gamma_{opt}}\Prob{P_{opt}}{\omega} + \left(1-\frac{\gamma_1}{\gamma_{opt}}\right)\Prob{P}{\omega}$. Since $P'$ is a convex combination of two distributions, it is also a valid distribution.
It is easy to see that $\Delta(P, P') = \gamma_1$.
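In the discrete case, this construction and the equality $\Delta(P, P') = \gamma_1$ can be verified directly; the following Python sketch does so for a hypothetical pair of distributions on four points.

```python
# Discrete check of the construction P' = (gamma1/gamma_opt) * P_opt
# + (1 - gamma1/gamma_opt) * P and of the claim Delta(P, P') = gamma1.
# P and P_opt below are hypothetical distributions on four points.
def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

P     = [0.4, 0.3, 0.2, 0.1]
P_opt = [0.25, 0.25, 0.25, 0.25]
gamma_opt = tv(P, P_opt)                     # here 0.2

gamma1 = 0.5 * gamma_opt                     # any gamma1 in [0, gamma_opt)
lam = gamma1 / gamma_opt
P_prime = [lam * po + (1 - lam) * p for po, p in zip(P_opt, P)]

assert abs(tv(P, P_prime) - gamma1) < 1e-12  # Delta(P, P') = gamma1
assert abs(sum(P_prime) - 1.0) < 1e-12       # P' is a valid distribution
```

The check works because $P' - P = \frac{\gamma_1}{\gamma_{opt}}(P_{opt} - P)$, so the total-variation distance scales by exactly $\frac{\gamma_1}{\gamma_{opt}}$.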
Define a joint distribution $\phi'$ as follows: for every $(x,y)\in\Omega\times\Omega$, define
\begin{align*}
\phi'(x, y) := \begin{cases}
\phi_{opt}(x, y)\frac{\Prob{P'}{x}}{\Prob{P_{opt}}{x}} & \text{if } \Prob{P_{opt}}{x} > 0\\
\Prob{P'}{x}\delta(x - y) & \text{otherwise}
\end{cases}
\end{align*}
where $\delta(\cdot)$ is the Dirac delta function.
It follows from the definition that $\int_{\Omega} \phi'(x, y)\ensuremath{\,\mathrm{d}}\xspace y = \Prob{P'}{x}$, i.e., the first marginal of $\phi'$ is $\Prob{P'}{\cdot}$. This also implies that $\phi'$ is a valid joint distribution because
{\sf (i)} $\phi'(x,y)\geq0$ for all $(x,y)\in\Omega\times\Omega$, and
{\sf (ii)} $\int_{\Omega\times\Omega}\phi'(x,y)\ensuremath{\,\mathrm{d}}\xspace x \ensuremath{\,\mathrm{d}}\xspace y = \int_{\Omega}\Prob{P'}{x}\ensuremath{\,\mathrm{d}}\xspace x = 1$.
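For a discrete space, the reweighting that defines $\phi'$ and the two properties above can be checked directly; the following Python sketch does so for a hypothetical $2\times2$ coupling (omitting the Dirac branch, since the example has $\Prob{P_{opt}}{x}>0$ everywhere).

```python
# Discrete check of the reweighting phi'(x, y) = phi_opt(x, y) * P'(x) / P_opt(x)
# (the Dirac branch for P_opt(x) = 0 is omitted). phi_opt and P' are hypothetical.
phi_opt = [[0.20, 0.05],
           [0.10, 0.65]]                     # rows indexed by x, columns by y
P_opt = [sum(row) for row in phi_opt]        # first marginal: [0.25, 0.75]
P_prime = [0.4, 0.6]                         # target first marginal (P_opt[x] > 0)

phi_prime = [[v * P_prime[x] / P_opt[x] for v in row]
             for x, row in enumerate(phi_opt)]

first_marginal = [sum(row) for row in phi_prime]
assert all(abs(m - p) < 1e-12 for m, p in zip(first_marginal, P_prime))
assert abs(sum(first_marginal) - 1.0) < 1e-12  # phi' is a valid joint distribution
```

Each row of $\phi_{opt}$ is rescaled as a whole, so the conditional distribution of $y$ given $x$ is unchanged; only the mass assigned to each $x$ moves from $\Prob{P_{opt}}{x}$ to $\Prob{P'}{x}$.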
Let the second marginal of $\phi'$ be $Q'$. We show in \Claimref{TV_dist_Q-Qprime} in \Appendixref{wasserstein} that $\Delta(Q, Q')\leq\gamma-\gamma_1$.
It remains to show that $\Winf{}(P', Q') = \beta$ for the $P'$ and $Q'$ constructed above.
We first show $\Winf{}(P', Q') \ge \beta$ and then show $\Winf{}(P', Q') \le \beta$.
\begin{itemize}
\item {\bf Showing $\Winf{}(P', Q') \ge \beta$:} This follows from the following claim, which we prove in \Appendixref{wasserstein}.
\begin{claim}\label{clm:wass_alternate}
For distributions $P$ and $Q$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$ and $\gamma\in [0,1]$, we have
\begin{align}\label{eq:wass_alternate}
\Winf{\gamma}(P,Q) \quad= \displaystyle \inf_{\substack{\hat{P},\hat{Q}:\\ \Delta(P,\hat{P}) + \Delta(Q,\hat{Q})\leq\gamma}} \Winf{} (\hat{P},\hat{Q}).
\end{align}
\end{claim}
Now, since $P',Q'$ satisfy $\Delta(P,P') + \Delta(Q,Q')\leq\gamma$, we have $\Winf{\gamma}(P,Q)\leq\Winf{}(P',Q')$. Since $\Winf{\gamma}(P,Q)=\beta$, we have shown that $\Winf{}(P',Q')\geq\beta$.
\item {\bf Showing $\Winf{}(P', Q') \le \beta$:}
For the sake of contradiction, let us assume that $\Winf{}(P', Q') > \beta$.
Then there is a pair $(x, y) \in \Omega^2$ such that $\phi'(x, y) > 0$ and $\ensuremath{\mathfrak{d}}\xspace(x, y) > \beta$.
This implies that $\phi_{opt}(x, y) = 0$, because, otherwise, we would have $\Winf{\gamma}(P, Q) > \beta$,
which contradicts our hypothesis that $\Winf{\gamma}(P, Q) = \beta$.
So, we know that $\phi'(x, y) > 0$ and $\phi_{opt}(x, y) = 0$. From the definition of $\phi'$, this is only possible if $\Prob{P_{opt}}{x} = 0$ and $\Prob{P'}{x}\delta(x - y) > 0$. This can happen only if $x = y$, but this implies $\ensuremath{\mathfrak{d}}\xspace(x, y) = 0 \le \beta$, which is a contradiction. Hence $\Winf{}(P', Q') \le \beta$.
\end{itemize}
This completes the proof of \Lemmaref{wass-marginal-loss}.
\end{proof}
Now we are ready to prove \Lemmaref{wass-triangle}.
Let $\Winf{\gamma_1}(P, Q) = \beta_1$ and $\Winf{\gamma_2}(Q, R) = \beta_2$.
It follows from \Lemmaref{wass-marginal-loss} that there exists a distribution $P'$ such that $\Delta(P, P') \le \gamma_1$ and $\Winf{}(P', Q) = \beta_1$. Similarly, there exists a distribution $R'$ such that $\Delta(R, R') \le \gamma_2$ and $\Winf{}(Q, R') = \beta_2$.
Using these, we have from \Lemmaref{Winf_triangle} that $\Winf{}(P', R') \leq \beta_1 + \beta_2$.
Now, the result follows from the following set of inequalities.
\begin{align*}
\Winf{\gamma_1+\gamma_2}(P,R)
\ \stackrel{\text{(d)}}{=}\hspace{-0.5cm}\displaystyle \inf_{\substack{\hat{P},\hat{R}:\\ \Delta(P,\hat{P}) + \Delta(R,\hat{R})\leq\gamma_1+\gamma_2}}\hspace{-0.5cm} \Winf{} (\hat{P},\hat{R})
\ \stackrel{\text{(e)}}{\leq} \ \Winf{}(P',R')
\ \leq\ \beta_1 + \beta_2
\ = \ \Winf{\gamma_1}(P, Q) + \Winf{\gamma_2}(Q, R),
\end{align*}
where (d) follows from \Claimref{wass_alternate}
and (e) follows because $P',R'$ satisfy $\Delta(P,P') + \Delta(R,R')\leq\gamma_1+\gamma_2$.
This concludes the proof of \Lemmaref{wass-triangle}.
\subsection{Proof of \Theoremref{compose-accuracy} -- Composition Theorem for Flexible Accuracy}\label{sec:compose-accuracy_proof}
The following lemma will be useful in proving \Theoremref{compose-accuracy}.
It translates the definition of distortion sensitivity
(\Definitionref{dist-sens}) to apply to distortion of input distributions.
We prove it in \Appendixref{comp-accuracy}.
\begin{lem}\label{lem:distsens-distrib}
Suppose $f: A \to B$ has distortion sensitivity \distsens{f}{} w.r.t.\ $(\ensuremath{\mathsf{\partial}}\xspace_1,\ensuremath{\mathsf{\partial}}\xspace_2)$.
For all r.v.s $X_0$ over $A$ and $Y$ over $B$ such that $\ensuremath{{\widehat\dn}}\xspace_2(f(X_0),\prob{Y}) \le \alpha$ for some $\alpha\geq0$, there must exist a r.v.\ $X$ over $A$ such that $Y=f(X)$ and $\ensuremath{{\widehat\dn}}\xspace_1(\prob{X_0}, \prob{X}) \le \distsens{f}{}(\alpha)$, provided $\distsens{f}{}(\alpha)$ is finite.
\end{lem}
Now we prove \Theoremref{compose-accuracy}, which essentially formalizes the pictorial proof given in \Figureref{composition}.
For a given element $x \in A$, since $\ensuremath{\mathcal{M}}\xspace_1$ is an $(\alpha_1 , \beta_1, \gamma_1)$-accurate mechanism for $f_1$, we have from \Definitionref{alpha-beta-gamma-accu} that there exists a r.v.\ $X'$ such that
\begin{align}
\ensuremath{{\widehat\dn}}\xspace_1(x, \prob{X'}) &\le \alpha_1, \label{eq:compose-accuracy-dnx1}\\
\Winf{\gamma_1}(f_1(X'),\ensuremath{\mathcal{M}}\xspace_1(x))&\leq\beta_1. \label{eq:compose-accuracy-W1}
\end{align}
Now, applying the mechanism $\ensuremath{\mathcal{M}}\xspace_2$ on $\ensuremath{\mathcal{M}}\xspace_1(x)$, we incur an overall error of at most $\tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{\alpha_2, \gamma_2}(\beta_1,\gamma_1)$ to the output of function $f_2$ over a distorted input (see \Definitionref{err-sens}). Therefore, there exists a r.v.\ $Y^*$ such that,
\begin{align}
\ensuremath{{\widehat\dn}}\xspace_2( f_1(X'), \prob{Y^*}) &\leq \alpha_2, \label{eq:compose-accuracy-dnx2}\\
\Winf{\gamma_2}(f_2 (Y^*),\ensuremath{\mathcal{M}}\xspace_2(\ensuremath{\mathcal{M}}\xspace_1(x))) &\leq \tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{\alpha_2, \gamma_2}(\beta_1,\gamma_1). \label{eq:compose-accuracy-W2}
\end{align}
Since $\distsens{f_1}{}(\alpha_2)$ is finite (by assumption), it follows from \eqref{eq:compose-accuracy-dnx2} and \Lemmaref{distsens-distrib} that there exists a r.v.\ $X$ over $A$ such that
\begin{align}
\ensuremath{{\widehat\dn}}\xspace_1(\prob{X'}, \prob{X}) &\leq \distsens{f_1}{}(\alpha_2), \label{eq:compose-accuracy_interim 2} \\
Y^* &= f_1(X). \label{eq:compose-accuracy_interim 3}
\end{align}
Since $\ensuremath{\mathsf{\partial}}\xspace_1$ is a quasi-metric, it follows that $\ensuremath{{\widehat\dn}}\xspace_1$ is also a quasi-metric; see \Lemmaref{dnx-quasi-metric} in \Appendixref{comp-accuracy} for a proof. This, together with
\eqref{eq:compose-accuracy-dnx1} and \eqref{eq:compose-accuracy_interim 2}, implies that
\begin{align}
\ensuremath{{\widehat\dn}}\xspace_1(x, \prob{X}) \leq \alpha_1 + \distsens{f_1}{}(\alpha_2). \label{eq:compose-accuracy_interim1}
\end{align}
Substituting $Y^* = f_1(X)$ from \eqref{eq:compose-accuracy_interim 3} into \eqref{eq:compose-accuracy-W2} gives
\begin{align}\label{eq:compose-accuracy_interim5}
\Winf{\gamma_2}(f_2 (f_1(X)),\ensuremath{\mathcal{M}}\xspace_2(\ensuremath{\mathcal{M}}\xspace_1(x))) \leq \tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{\alpha_2, \gamma_2}(\beta_1,\gamma_1).
\end{align}
\eqref{eq:compose-accuracy_interim1} and \eqref{eq:compose-accuracy_interim5} imply that $\ensuremath{\mathcal{M}}\xspace_2\circ\ensuremath{\mathcal{M}}\xspace_1$ is $(\alpha,\beta,\gamma)$-accurate for $f_2\circ f_1$ w.r.t.\ the distortion measure $\ensuremath{\mathsf{\partial}}\xspace_1$ on $A$ and metric $\ensuremath{\mathfrak{d}}\xspace_2$ on $C$, where $\alpha=\alpha_1 + \distsens{f_1}{}(\alpha_2)$, $\beta=\tau_{\ensuremath{\mathcal{M}}\xspace_2,f_2}^{\alpha_2, \gamma_2}(\beta_1,\gamma_1)$, and $\gamma=\gamma_2$.
This concludes the proof of \Theoremref{compose-accuracy}.
\subsection{Proof of \Theoremref{hist-priv-accu} -- Truncated Laplace Mechanism for Histograms}\label{sec:hist-priv-accu-proof}
First we prove the flexible accuracy part, which is easy, and then we will move on to proving the privacy part, which is more involved than the existing privacy analysis of differentially-private histogram mechanisms.
We also note that the requirement of $|\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace)|\le t$ is only needed for the accuracy result.
\paragraph{Flexible accuracy.}
Note that the noise added by \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} to each bar of the histogram is bounded below by $-q=-\ensuremath{\tau}\xspace |\ensuremath{\mathbf{x}}\xspace|$, so at most a \ensuremath{\tau}\xspace fraction of the total number of elements can be dropped from each bar. Combined with the fact that $|\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace)|\le t$, the maximum fraction of elements that can be dropped overall is $\ensuremath{\tau}\xspace t$. Hence, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $(\ensuremath{\tau}\xspace t,0,0)$-accurate. \\
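As an illustration of why this accuracy bound holds, the following Python sketch mimics the mechanism. It assumes, for concreteness, a per-bar noise density proportional to $e^{(\ensuremath{\epsilon}\xspace/2)z}$ truncated to $[-q,0]$ (the paper's noise distribution $\lnoise{z}$ defines the exact shape), and uses hypothetical histogram and parameter values.

```python
import math, random

def truncated_laplace_hist(hist, tau, eps, rng=random.Random(0)):
    """Hedged sketch of the truncated-Laplace histogram mechanism: add to each
    bar a one-sided noise z in [-q, 0] (density assumed proportional to
    e^{(eps/2) z}; the paper defines the exact shape), then round and clamp
    negative values to 0, so at most q elements are dropped per bar."""
    n = sum(hist.values())
    q = tau * n                                    # per-bar noise magnitude bound
    a = math.exp(-eps * q / 2)
    out = {}
    for g, count in hist.items():
        u = rng.random()
        z = (2 / eps) * math.log(a + u * (1 - a))  # inverse-CDF sample in [-q, 0]
        out[g] = max(0, round(count + z))
    return out

noisy = truncated_laplace_hist({0.5: 500, 1.5: 300, 2.5: 200}, tau=0.01, eps=1.0)
```

With these illustrative values, $q=\tau n=10$, so each bar loses at most $10$ elements, matching the per-bar drop bound above.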
\paragraph{Differential privacy.} Our proof of the privacy part of \Theoremref{hist-priv-accu} depends on the following lemma.
\begin{lem}\label{lem:hist-epdel}
For any $\nu\geq0,\ensuremath{\epsilon}\xspace>0$ and on inputs $\ensuremath{\mathbf{x}}\xspace$ s.t.\ $|\ensuremath{\mathbf{x}}\xspace| \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is
$\left((1 + \nu)\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}, where $q=\tau|\ensuremath{\mathbf{x}}\xspace|$.
\end{lem}
\begin{proof}
We shall in fact prove that a mechanism that outputs $\hat\ensuremath{\mathbf{y}}\xspace$ with $\hat\ensuremath{\mathbf{y}}\xspace(i) :=
\ensuremath{\mathbf{x}}\xspace(i)+z_i$ (without rounding, and without replacing negative values with 0)
is already differentially private as desired. Then, since the actual mechanism is a
post-processing of this mechanism, it will also be differentially
private with the same parameters.
Let $\ensuremath{\mathbf{x}}\xspace$ and $\ensuremath{\mathbf{x}}\xspace'$ be two neighbouring histograms.
For simplicity, for every $i\in\ensuremath{\mathcal{G}}\xspace$, define $x_i:=\ensuremath{\mathbf{x}}\xspace(i)$ and $x_i':=\ensuremath{\mathbf{x}}\xspace'(i)$.
Since $\ensuremath{\mathbf{x}}\xspace\sim\ensuremath{\mathbf{x}}\xspace'$, there exists an
$i^*\in\ensuremath{\mathcal{G}}\xspace$ such that $|x_{i^*}-x_{i^*}'|= 1$ and that $x_i=x_i'$ for every $i\in \ensuremath{\mathcal{G}}\xspace\setminus\{i^*\}$.
Without loss of generality, assume that $x_{i^*}=x_{i^*}'+1$, which implies $|\ensuremath{\mathbf{x}}\xspace| = |\ensuremath{\mathbf{x}}\xspace'|+1 = n + 1$.
Let $q = \ensuremath{\tau}\xspace (n + 1)$ and $q' = \ensuremath{\tau}\xspace n$.
For simplicity of notation, we will denote $\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{y}}\xspace)$ by $\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{y}}\xspace}$ for any $\ensuremath{\mathbf{y}}\xspace\in\{\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\}$.
In order to prove the lemma, for every subset $S\subseteq\Hspace\ensuremath{\mathcal{G}}\xspace$, we need to show that
\begin{align}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S] \leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S]+\delta, \label{eq:hist-dp_interim2} \\
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S] \leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S]+\delta, \label{eq:hist-dp_interim1}
\end{align}
where $\delta=\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}$.
We only prove \eqref{eq:hist-dp_interim2}; \eqref{eq:hist-dp_interim1} can be shown similarly.
Fix an arbitrary subset $S\subseteq\Hspace\ensuremath{\mathcal{G}}\xspace$.
Since $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}$ adds independent noise to each bar of the histogram according to $\lnoise{z}$, for every $\ensuremath{\mathbf{s}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace$ we have $\prob{\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)}(\ensuremath{\mathbf{s}}\xspace)=\prod_{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace}} \lnoise{s_i - x_i}$, where $s_i = \ensuremath{\mathbf{s}}\xspace(i)$. Thus, we have
\begin{align}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S] &= \int_{S} \big[\prod_{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace}} \lnoise{s_i - x_i}\big]\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace, \label{eq:app-mech2-eps-delta-dp2} \\
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S] &= \int_{S} \big[\prod_{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'}} \dnoise{s_i - x_i'}\big]\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace. \label{eq:app-mech2-eps-delta-dp1}
\end{align}
Now, using the fact that $\forall k \neq i^*, x_k = x_k'$ and $x_{i^*} = x_{i^*}' + 1$, we partition $S$ into three disjoint sets:
\begin{enumerate}
\item $S_0:=\{\ensuremath{\mathbf{s}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace: s_{i^*} - x_{i^*}' < -q'\} \cup \{\ensuremath{\mathbf{s}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace: 0 < s_{i^*} - x_{i^*}'\}$.
\item $S_1:=\{\ensuremath{\mathbf{s}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace: -q' \le s_{i^*} - x_{i^*}' < -q' + (1 - \ensuremath{\tau}\xspace)\}$.
\item $S_2:=\{\ensuremath{\mathbf{s}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace: -q' + (1 - \ensuremath{\tau}\xspace) \le s_{i^*} - x_{i^*}' \le 0\}$.
\end{enumerate}
The proof of \eqref{eq:hist-dp_interim2} is a simple corollary of the following two claims, which we prove in \Appendixref{histogram}.
\begin{claim}\label{clm:hist-epdel-c2}
$\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_0\cup S_2]\leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S_0\cup S_2]$, provided $n \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)$.
\end{claim}
\begin{claim}\label{clm:hist-epdel-c3}
$\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_{1}]\leq \delta$, where $\delta=\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}$.
\end{claim}
The above two claims together imply \eqref{eq:hist-dp_interim2} as follows:
\begin{align}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S] &= \Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_0\cup S_2] + \Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_1]\nonumber\\
&\leq e^{(1+\nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S_0\cup S_2] + \delta \nonumber\\
&\leq e^{(1+\nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S] + \delta. \tag{Since $S_0\cup S_2\subseteq S$}
\end{align}
This completes the proof of \Lemmaref{hist-epdel}.
\end{proof}
In \Lemmaref{hist-epdel}, $\nu$ is a free variable. By taking $\nu=0$, we get the following result in \Corollaryref{simpler-dp-hist}. We can also get different guarantees by restricting to $\nu>0$; see \Remarkref{priv-nu-bigger-0} below for this.
\begin{corol}\label{corol:simpler-dp-hist}
For any $\ensuremath{\epsilon}\xspace,\tau,\ensuremath{\mathbf{x}}\xspace$ such that $\tau|\ensuremath{\mathbf{x}}\xspace|\ensuremath{\epsilon}\xspace \ge 2$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is
$\left(\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}, where $q=\tau|\ensuremath{\mathbf{x}}\xspace|$.
\end{corol}
\begin{proof}
Substituting $\nu=0$ in \Lemmaref{hist-epdel} gives that when $\ensuremath{\mathbf{x}}\xspace$ satisfies $|\ensuremath{\mathbf{x}}\xspace| \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}} - 1} \right)$, we have that \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $\left(\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace n\ensuremath{\tau}\xspace}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}. Now, the corollary follows because
$\frac{2}{\ensuremath{\epsilon}\xspace\tau}\geq\frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}} \geq \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}} \right) = \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}} - 1} \right)$,
where the first inequality uses $e^{-x} \le 1$ and the second uses $x \ge \ln(1+x)$ for $x>0$.
\end{proof}
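The two elementary facts used in the chain of inequalities in this proof are easy to spot-check numerically (the values of $a=\nicefrac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}$ below are chosen arbitrarily):

```python
import math

# Spot-check of the identity (1 - e^{-a}) / (e^{a} - 1) = e^{-a}
# and of the bound ln(1 + x) <= x for x > 0, for a few values a > 0.
for a in (0.01, 0.5, 2.0):
    lhs = (1 - math.exp(-a)) / (math.exp(a) - 1)
    assert math.isclose(lhs, math.exp(-a))  # multiply through by e^{-a} to see this
    x = math.exp(-a)
    assert math.log(1 + x) <= x
```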
\begin{remark}\label{remark:priv-nu-bigger-0}
We show in \Lemmaref{hist-priv-nu-bigger-0} in \Appendixref{histogram} that by restricting \Lemmaref{hist-epdel} to $\nu>0$, we can get a weaker condition than what we have in \Corollaryref{simpler-dp-hist} with a slight increase in the privacy parameter $\ensuremath{\epsilon}\xspace$. In particular, we show that for all $\ensuremath{\epsilon}\xspace,\ensuremath{\mathbf{x}}\xspace$ such that $\ensuremath{\epsilon}\xspace\nu\geq\ln\left(1+\frac{1}{|\ensuremath{\mathbf{x}}\xspace|}\right)$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $\left((1+\nu)\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}. We can take $\nu=1$ here.
\end{remark}
Now the privacy part of \Theoremref{hist-priv-accu} follows because $q=\tau|\ensuremath{\mathbf{x}}\xspace|$ and $\tau|\ensuremath{\mathbf{x}}\xspace|\ensuremath{\epsilon}\xspace\geq2$ (note that $\tau|\ensuremath{\mathbf{x}}\xspace|\ensuremath{\epsilon}\xspace$ is typically a much bigger number than $2$ as it scales with the size of the dataset), which implies that $\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}= \ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau|\ensuremath{\mathbf{x}}\xspace|)}$. Hence, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is $(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau|\ensuremath{\mathbf{x}}\xspace|)})$-DP.
This completes the proof of \Theoremref{hist-priv-accu}.
\subsection{Proof of \Theoremref{bucketing-hist} -- Bucketed Truncated Laplace Mechanism}\label{sec:bucketHist-priv-accu-proof}
Note that $\mBhist{\alpha,\beta,[0,B)} = \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)}$, with $w = 2\beta$ and $\ensuremath{\tau}\xspace = \frac{\alpha}{t}$, where $t = \lceil \frac{B}{2\beta} \rceil$.
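The parameter bookkeeping of this composition can be transcribed directly (a minimal sketch; `alpha`, `beta`, `B` play the roles of $\alpha$, $\beta$, $B$, and the function name is illustrative):

```python
import math

def bucketed_params(alpha, beta, B):
    """Parameters of the composed mechanism as stated above: bucket width
    w = 2*beta, number of buckets t = ceil(B / w), and tau = alpha / t."""
    w = 2 * beta
    t = math.ceil(B / w)
    tau = alpha / t
    return w, t, tau

w, t, tau = bucketed_params(alpha=0.1, beta=0.5, B=10)
assert (w, t, tau) == (1.0, 10, 0.01)
```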
We will use \Theoremref{compose-DP} to show the DP guarantee and \Theoremref{compose-accuracy} to show the flexible accuracy guarantee of $\mBhist{\alpha,\beta,[0,B)}$.
\paragraph{Differential privacy.}
First note that $\mbuc{w,[0,B)}$ is a neighborhood-preserving mechanism w.r.t.\ the neighborhood relation \ensuremath{\sim_{\mathrm{hist}}\xspace}. This follows because adding/removing any one element changes the output of bucketing by at most one element; hence, neighbors remain neighbors after bucketing. Now, since $\mbuc{w,[0,B)}$ outputs a histogram whose support size is at most $t=\lceil\frac{B}{w}\rceil$, and $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$ on input histograms with support size at most $t$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-differentially private w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}, it follows from \Theoremref{compose-DP} that $\mBhist{\alpha,\beta,[0,B)}$ is also differentially private w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace} with the same parameters.
\paragraph{Flexible accuracy.}
First we state in the following claim the flexible accuracy guarantee of the bucketing mechanism $\mbuc{w,[0,B)}$; we prove it in \Appendixref{bucketHist_proofs}.
\begin{claim}\label{clm:bucket-accuracy}
$\mbuc{w,[0,B)}$ is $\left(0, \frac{w}{2}, 0\right)$-accurate for the identity function $f_{\emph{id}}$ over $\Hspace{[0,B)}$ w.r.t.\ the metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$.
\end{claim}
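The claim mirrors the elementary fact that snapping each point to the center of its width-$w$ bucket moves it by at most $\frac{w}{2}$. A toy Python check (illustrative; it ignores the clipping of the last partial bucket, which only shrinks displacements):

```python
def bucket_centers(points, w):
    """Map each point to the center of its width-w bucket: a minimal sketch
    of the bucketing step; every point moves by at most w/2."""
    return [(int(p // w) + 0.5) * w for p in points]

pts, w = [0.1, 0.9, 1.4, 3.7], 1.0
for p, c in zip(pts, bucket_centers(pts, w)):
    assert abs(p - c) <= w / 2
```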
Note that when we apply \Theoremref{compose-accuracy} to compute the flexible accuracy parameters of the composed mechanism $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)}$, these parameters depend on the distortion sensitivity $\distsens{f_1}{}(\alpha_2)$ and on the error sensitivity of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$. We compute both below. \\
\noindent $\bullet$ {\it Distortion sensitivity of $f_1$:}
Since $f_1$ is the identity function $f_{\text{id}}$ over $\Hspace{[0,B)}$, we have (as noted in the first example in \Sectionref{dist-sens}) that $\distsens{f_1}{}(\alpha_2)\leq\alpha_2$. \\
\noindent $\bullet$ {\it Error sensitivity of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$:}
Note that the bucketing mechanism $\mbuc{w,[0,B)}:\Hspace{[0,B)}\to\Hspace{[0,B)}$ is a deterministic map and is $(0,\beta,0)$-accurate (see \Claimref{bucket-accuracy}) for computing the identity function $f_{\text{id}}$, where $\beta=\frac{w}{2}$. As mentioned in the first bullet after the statement of \Theoremref{compose-accuracy}, this implies that when computing the error sensitivity of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$ (which is required for calculating the output error $\beta$ of the composed mechanism $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)}$), we only need to take the supremum in \eqref{eq:err-sens} over point distributions $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'$ such that $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq\beta$, where $\dhist{}{}$ is the metric that we use over $\Hspace{[0,B)}$. In other words, in order to compute the error sensitivity of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$, we only need to bound $\sup_{\substack{\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace': \dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq \beta}} \ \inf_{\substack{Y: \ensuremath{{\widehat\dn}}\xspace(\ensuremath{\mathbf{x}}\xspace',Y) \le \alpha}}\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y})$. We bound this in \Lemmaref{hist-error-sens} below.
\begin{lem}\label{lem:hist-error-sens}
For any $\alpha,\beta \geq0$, we have
\[\tau^{\alpha, 0}_{\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}, f_\ensuremath{\mathrm{id}}\xspace} (\beta, 0)\quad = \sup_{\substack{\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace': \\ \dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq \beta}} \
\inf_{\substack{Y:\\ \ensuremath{{\widehat\dn}}\xspace(\ensuremath{\mathbf{x}}\xspace',Y) \le \alpha}}
\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta\]
w.r.t.\ the distortion \ensuremath{\dn_{\mathrm{drop}}}\xspace and the metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$. Here, input histograms to the mechanism $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}$ are restricted to $t$ bars and $\tau = \alpha/t$.
\end{lem}
\begin{proof}
For simplicity, we denote $[0,B)$ by $\ensuremath{\mathcal{G}}\xspace$.
For any two histograms $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\Hspace\ensuremath{\mathcal{G}}\xspace$ such that $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} \le \beta$, we will construct a r.v.\ $Y$ over $\Hspace\ensuremath{\mathcal{G}}\xspace$ such that $\ensuremath{{\widehat\dn}}\xspace_{\text{drop}}(\ensuremath{\mathbf{x}}\xspace',\prob{Y}) \le \alpha$ and $\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta$. The lemma then follows immediately. Details follow.
Consider any two histograms $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\Hspace\ensuremath{\mathcal{G}}\xspace$ such that $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} \le \beta$.
Let $\dG{\cdot}{\cdot}$ denote the underlying metric over $\ensuremath{\mathcal{G}}\xspace$ (which consists of $t$ elements), and let $|\ensuremath{\mathbf{x}}\xspace|$ denote the number of elements in the histogram $\ensuremath{\mathbf{x}}\xspace$.
By definition of $\dhist\cdot\cdot$, we have $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}=\Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|},\frac{\ensuremath{\mathbf{x}}\xspace'}{|\ensuremath{\mathbf{x}}\xspace'|})$.
Let $\phi$ be an optimal coupling of $\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|}$ and $\frac{\ensuremath{\mathbf{x}}\xspace'}{|\ensuremath{\mathbf{x}}\xspace'|}$ such that
\begin{align}
\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} = \Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|},\frac{\ensuremath{\mathbf{x}}\xspace'}{|\ensuremath{\mathbf{x}}\xspace'|}) = \sup_{\substack{(a, b) \leftarrow \phi}}\dG{a}{b} \leq \beta.
\end{align}
Using $\phi$ we define a transformation $f_{\phi}$, which, when given a histogram $\ensuremath{\mathbf{z}}\xspace$ that is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$, returns a histogram $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$ that is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace'$. Recall that for a histogram $\ensuremath{\mathbf{x}}\xspace$ and $a\in\ensuremath{\mathcal{G}}\xspace$, we denote by $\ensuremath{\mathbf{x}}\xspace(a)$ the multiplicity of $a$ in $\ensuremath{\mathbf{x}}\xspace$. Now, for any $b\in[0,B)$, we define $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)$ as follows:
\[f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b) := |\ensuremath{\mathbf{x}}\xspace'|\sum_{a\in\ensuremath{\mathcal{G}}\xspace} \frac{\ensuremath{\mathbf{z}}\xspace(a) \phi(a , b)}{\ensuremath{\mathbf{x}}\xspace(a)}.\]
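On a toy example, $f_{\phi}$ can be computed directly from its definition (a sketch with illustrative names: histograms as dicts mapping support point to count, the coupling as a dict over pairs):

```python
from collections import defaultdict

def f_phi(z, x, xp_mass, phi):
    """f_phi(z)(b) = |x'| * sum_a z(a) * phi(a, b) / x(a)."""
    out = defaultdict(float)
    for (a, b), mass in phi.items():
        out[b] += xp_mass * z.get(a, 0) * mass / x[a]
    return dict(out)

x  = {0: 2, 1: 2}                   # |x| = 4
xp = {0: 2, 2: 2}                   # |x'| = 4
phi = {(0, 0): 0.5, (1, 2): 0.5}    # a coupling of x/4 and x'/4
z = {0: 1, 1: 2}                    # alpha-distorted from x with alpha = 1/4

fz = f_phi(z, x, xp_mass=4, phi=phi)
assert all(fz[b] <= xp[b] for b in fz)     # pointwise below x'
assert sum(fz.values()) >= (1 - 0.25) * 4  # keeps at least a (1-alpha) fraction
```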
The following claim is proved in \Appendixref{bucketHist_proofs}.
\begin{claim}\label{clm:fphi_distort}
For any $\ensuremath{\mathbf{x}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace$, if $\ensuremath{\mathbf{z}}\xspace$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$, then $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace'$.
\end{claim}
Recall that $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ outputs $\alpha$-distorted histograms from $\ensuremath{\mathbf{x}}\xspace$. This suggests
defining a r.v.\ $Y:=f_{\phi}\left(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\right)$ over $\Hspace\ensuremath{\mathcal{G}}\xspace$, whose distribution is given as follows:
\[\text{For }\ensuremath{\mathbf{y}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace, \text{ define }\Pr[Y=\ensuremath{\mathbf{y}}\xspace]:=\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in f_{\phi}^{-1}(\ensuremath{\mathbf{y}}\xspace)],\]
where $f_{\phi}^{-1}(\ensuremath{\mathbf{y}}\xspace):=\{\ensuremath{\mathbf{z}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace:f_{\phi}(\ensuremath{\mathbf{z}}\xspace)=\ensuremath{\mathbf{y}}\xspace\}$ is the inverse mapping of $f_{\phi}$.
In the following two claims (which we prove in \Appendixref{bucketHist_proofs}), we show that the above defined $Y$ satisfies $\ensuremath{{\widehat\dn}}\xspace_{\text{drop}}(\ensuremath{\mathbf{x}}\xspace',\prob{Y}) \le \alpha$ and $\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta$.
\begin{claim}\label{clm:distortion_bw_xprime-Y}
$\widehat{\partial}_{\emph{drop}}(\ensuremath{\mathbf{x}}\xspace',\prob{Y}) \leq \alpha$.
\end{claim}
\begin{claim}\label{clm:Winf_Y_trLap-bound}
$\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta$.
\end{claim}
It follows from \Claimref{distortion_bw_xprime-Y} and \Claimref{Winf_Y_trLap-bound} that $\inf_{Y: \ensuremath{{\widehat\dn}}\xspace(\ensuremath{\mathbf{x}}\xspace',Y) \le \alpha}\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta$. Since this holds for any two histograms $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\Hspace\ensuremath{\mathcal{G}}\xspace$ such that $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq \beta$, we have proved \Lemmaref{hist-error-sens}.
\end{proof}
Now, applying \Theoremref{compose-accuracy} to $\mBhist{\alpha,\beta,[0,B)} = \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)}$, we get that $\mBhist{\alpha,\beta,[0,B)}$ is $(\alpha, \beta, 0)$-accurate.
This completes the proof of \Theoremref{bucketing-hist}.
\subsection{Omitted Proofs from \Sectionref{HBS}}\label{sec:proof_bucketing-general}
In this section, we will prove \Theoremref{bucketing-general}, \Corollaryref{bucketing-max}, and \Corollaryref{bucketing-supp}.
\subsubsection{Proof of \Theoremref{bucketing-general} -- Any Histogram-Based-Statistic}\label{sec:proof_bucketing-general-thm}
First we show the flexible accuracy and then the differential privacy guarantee of our composed mechanism $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha,\beta,[0,B)} = \ensuremath{{f_{\mathrm{HBS}}}}\xspace \circ \mBhist{\alpha,\beta,[0,B)}$.
\paragraph{Flexible accuracy.}
Note that $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ (as a mechanism) for computing $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ is $(0,0,0)$-accurate, and we have from \Theoremref{bucketing-hist} that $\mBhist{\alpha,\beta,[0,B)}$ is $(\alpha,\beta,0)$-accurate for the identity function $f_{\ensuremath{\mathrm{id}}\xspace}$ w.r.t.\ the distortion measure \ensuremath{\dn_{\mathrm{drop}}}\xspace and the metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$.
Applying \Theoremref{compose-accuracy}, we get that $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha,\beta,[0,B)}$ is $(\alpha+\distsens{f_{\ensuremath{\mathrm{id}}\xspace}}{}(0),\tau_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace,\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{0,0}(0,\beta),0)$-accurate.
It follows from \eqref{eq:err-sens-deterministic} (by substituting $\ensuremath{\mathcal{M}}\xspace=\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ as a mechanism for $f=\ensuremath{{f_{\mathrm{HBS}}}}\xspace$) and the definition of the metric sensitivity \eqref{eq:fhbs-sens}, that $\tau_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace,\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{0,0}(0,\beta)=\Delta_{{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}}(\beta)$. We have also noted after \eqref{eq:dist-sens} that the distortion sensitivity of any randomized function at zero is equal to zero; in particular, $\distsens{f_{\ensuremath{\mathrm{id}}\xspace}}{}(0)=0$. Substituting these in the flexible accuracy parameters of $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha,\beta,[0,B)}$, we get that $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha,\beta,[0,B)}$ is $(\alpha,\Delta_{{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}}(\beta),0)$-accurate for \ensuremath{{f_{\mathrm{HBS}}}}\xspace w.r.t.\ distortion \ensuremath{\dn_{\mathrm{drop}}}\xspace and metric \ensuremath{\met_{\mathrm{\A}}}\xspace.
\paragraph{Differential privacy.} Since $\mBhist{}$ is $\left(\ensuremath{\epsilon}\xspace,\ensuremath{\epsilon}\xspace e^{-\Omega(\ensuremath{\epsilon}\xspace\tau n)}\right)$-DP, and $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{}$ is a post-processing of $\mBhist{}$, it follows that $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{}$ is also differentially private with the same parameters.
This completes the proof of \Theoremref{bucketing-general}.
\subsubsection{Proof of \Corollaryref{bucketing-max} -- Computing the Maximum}\label{sec:bucketing-max_proof}
For any two histograms \ensuremath{\mathbf{y}}\xspace, $\ensuremath{\mathbf{y}}\xspace'$, by the definition of $\dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}=\Winf{}(\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace'}{|\ensuremath{\mathbf{y}}\xspace'|})$ and of $\ensuremath{f_{\mathrm{max}}}\xspace$, it follows that
$|\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace')| \leq \dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}$.
Using this in \eqref{eq:fhbs-sens} implies that $\Delta_{\ensuremath{f_{\mathrm{max}}}\xspace}(\beta) \leq \beta$ for every $\beta\geq0$.
Then, the corollary follows from \Theoremref{bucketing-general}, with $\ensuremath{{f_{\mathrm{HBS}}}}\xspace = \ensuremath{f_{\mathrm{max}}}\xspace$.
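For intuition, on the real line with two equal-size multisets the monotone (sorted) coupling is optimal for $\Winf{}$, so $\dhist{}{}$ reduces to the maximum sorted gap, and the bound $|\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace')| \leq \dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}$ can be checked directly (a toy sketch; equal sizes assumed for simplicity):

```python
def winf_equal_size(y, yp):
    # W_inf between uniform empirical distributions of equal-size multisets on
    # the line: the sorted matching is optimal, so take the max sorted gap.
    return max(abs(a - b) for a, b in zip(sorted(y), sorted(yp)))

y, yp = [0.2, 1.5, 3.0], [0.1, 1.9, 2.8]
d = winf_equal_size(y, yp)
assert abs(max(y) - max(yp)) <= d
assert abs(min(y) - min(yp)) <= d   # the analogous bound used for the support
```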
\subsubsection{Proof of \Corollaryref{bucketing-supp} -- Computing the Support}\label{sec:bucketing-supp_proof}
Since $\dsupp{\ensuremath{\mathcal{S}}\xspace_1}{\ensuremath{\mathcal{S}}\xspace_2}$ is the larger of the difference between the maximum elements and the difference between the minimum elements of $\ensuremath{\mathcal{S}}\xspace_1$ and $\ensuremath{\mathcal{S}}\xspace_2$, it follows that for any two histograms \ensuremath{\mathbf{y}}\xspace and $\ensuremath{\mathbf{y}}\xspace'$, we have $\dsupp{\ensuremath{f_{\mathrm{supp}}}\xspace(\ensuremath{\mathbf{y}}\xspace)}{\ensuremath{f_{\mathrm{supp}}}\xspace(\ensuremath{\mathbf{y}}\xspace')} \leq \max\{|\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace')|,|\ensuremath{f_{\mathrm{min}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{min}}}\xspace(\ensuremath{\mathbf{y}}\xspace')|\}$, where $|\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{max}}}\xspace(\ensuremath{\mathbf{y}}\xspace')|\leq \dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}$ (from \Corollaryref{bucketing-max}), and similarly, $|\ensuremath{f_{\mathrm{min}}}\xspace(\ensuremath{\mathbf{y}}\xspace) -\ensuremath{f_{\mathrm{min}}}\xspace(\ensuremath{\mathbf{y}}\xspace')|\leq \dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{y}}\xspace'}$.
Using this in \eqref{eq:fhbs-sens} implies that $\Delta_{\ensuremath{f_{\mathrm{supp}}}\xspace}(\beta) \leq \beta$ for every $\beta\geq0$.
Then, the corollary follows from \Theoremref{bucketing-general}, with $\ensuremath{{f_{\mathrm{HBS}}}}\xspace = \ensuremath{f_{\mathrm{supp}}}\xspace$.
\subsection{Proof of \Theoremref{bucketing-general-drmv}}\label{sec:beyond-drop_proofs}
Since $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$ is the same mechanism as in \Theoremref{bucketing-hist}, the privacy guarantee of \Theoremref{bucketing-hist} carries over unchanged. In the rest of this proof, we prove the flexible accuracy part.
Since $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ is a $(0,0,0)$-accurate mechanism for $\ensuremath{{f_{\mathrm{HBS}}}}\xspace$ (which implies that $\Delta_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}(0)=0$), in order to prove the accuracy guarantee of $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)}$, it suffices to show that $\mBhist{\alpha,\beta,[0,B)}$ is $(\alpha+\eta\beta,0,0)$-accurate w.r.t.\ $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace$.
Note that $\ensuremath{\mathcal{M}}\xspace_{\ensuremath{{f_{\mathrm{HBS}}}}\xspace}^{\alpha, \beta, [0,B)} = \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)} \circ \mbuc{w,[0,B)}$.
On any input $\ensuremath{\mathbf{x}}\xspace$, first we produce an intermediate bucketed output $\ensuremath{\mathbf{z}}\xspace:=\mbuc{w,[0,B)}(\ensuremath{\mathbf{x}}\xspace)$ and then produce $\ensuremath{\mathbf{y}}\xspace:=\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}(\ensuremath{\mathbf{z}}\xspace)$ as the final output.
We have shown in \Claimref{bucket-accuracy} in the proof of \Theoremref{bucketing-hist} that the output $\ensuremath{\mathbf{z}}\xspace$ produced by $\mbuc{w,[0,B)}$ on input $\ensuremath{\mathbf{x}}\xspace$ satisfies $\Winf{}(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)\leq\beta$. This, by definition of the distortion $\ensuremath{\dn_{\mathrm{move}}}\xspace$, implies $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)\leq\beta$.
We have also shown in the proof of \Theoremref{hist-priv-accu} that the output $\ensuremath{\mathbf{y}}\xspace$ produced by $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)}$ on input $\ensuremath{\mathbf{z}}\xspace$ satisfies $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leq\alpha$. So, we have $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)\leq\beta$ and $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leq\alpha$. This, together with \Lemmaref{drop-move-switch}, implies the existence of a histogram $\ensuremath{\mathbf{s}}\xspace$ such that $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{s}}\xspace)\leq\alpha$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leq\beta$. Using these in the definition of $\dropmove\eta$ in \eqref{eq:drop_move_defn} implies that $\dropmove\eta(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leq\alpha+\eta\beta$. Since we have attributed all the error to the input distortion, we have shown that $\mBhist{\alpha,\beta,[0,B)}$ is $(\alpha+\eta\beta,0,0)$-accurate w.r.t.\ the distortion $\dropmove\eta$.
This completes the proof of \Theoremref{bucketing-general-drmv}.
\section{Details Omitted from \Sectionref{bucketHist-priv-accu-proof} -- Bucketed Histogram Mechanism}\label{app:bucketHist_proofs}
\begin{claim*}[Restating \Claimref{bucket-accuracy}]
$\mbuc{w,[0,B)}$ is $\left(0, \frac{w}{2}, 0\right)$-accurate for the identity function $f_{\emph{id}}$ over $\Hspace{[0,B)}$ w.r.t.\ the metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$.
\end{claim*}
\begin{proof}
Since both $f_{\text{id}}$ and $\mbuc{w,[0,B)}$ are deterministic maps, on any input $\ensuremath{\mathbf{x}}\xspace\in\Hspace{[0,B)}$ we regard both $\ensuremath{\mathbf{x}}\xspace$ (the output of $f_{\text{id}}(\ensuremath{\mathbf{x}}\xspace)$) and $\mbuc{w,[0,B)}(\ensuremath{\mathbf{x}}\xspace)$ as point distributions over $\Hspace{[0,B)}$.
Now, in order to prove the claim, we need to show that $\Winf{}(\mbuc{w,[0,B)}(\ensuremath{\mathbf{x}}\xspace),\ensuremath{\mathbf{x}}\xspace)\leq\frac{w}{2}$ holds for any $\ensuremath{\mathbf{x}}\xspace\in\Hspace{[0,B)}$.
Fix any $\ensuremath{\mathbf{x}}\xspace\in\Hspace{[0,B)}$ and define $\ensuremath{\mathbf{y}}\xspace:=\mbuc{w,[0,B)}(\ensuremath{\mathbf{x}}\xspace)$.
Since $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace$ are point distributions and the underlying metric is $\ensuremath{\met_{\mathrm{hist}}}\xspace$, we have $\Winf{}(\ensuremath{\mathbf{y}}\xspace,\ensuremath{\mathbf{x}}\xspace)=\ensuremath{\met_{\mathrm{hist}}}\xspace(\ensuremath{\mathbf{y}}\xspace,\ensuremath{\mathbf{x}}\xspace)$, where $\ensuremath{\met_{\mathrm{hist}}}\xspace$ is defined as $\dhist{\ensuremath{\mathbf{y}}\xspace}{\ensuremath{\mathbf{x}}\xspace}=\Winf{}(\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|},\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|})$. Since $\ensuremath{\mathbf{y}}\xspace$ is a deterministic function of $\ensuremath{\mathbf{x}}\xspace$, $\Winf{}(\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|},\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|})$ is upper bounded by the maximum distance that any point of $\ensuremath{\mathbf{x}}\xspace$ moves to form $\ensuremath{\mathbf{y}}\xspace$, which is at most the maximum distance of the center of a bucket from any point in that bucket, namely $\frac{w}{2}$.
\end{proof}
\begin{claim*}[Restating \Claimref{fphi_distort}]
For any $\ensuremath{\mathbf{x}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace$, if $\ensuremath{\mathbf{z}}\xspace$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$, then $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace'$.
\end{claim*}
\begin{proof}
We need to show two things:
{\sf (i)} $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b) \leq \ensuremath{\mathbf{x}}\xspace'(b)$ holds for every $b\in\ensuremath{\mathcal{G}}\xspace$, and
{\sf (ii)} $\sum_{b\in\ensuremath{\mathcal{G}}\xspace}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b) \geq (1-\alpha)\sum_{b\in\ensuremath{\mathcal{G}}\xspace}\ensuremath{\mathbf{x}}\xspace'(b)$.
The first condition holds because $\ensuremath{\mathbf{z}}\xspace(a)\leq\ensuremath{\mathbf{x}}\xspace(a),\forall a\in\ensuremath{\mathcal{G}}\xspace$ (since $\ensuremath{\mathbf{z}}\xspace$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$) and that $\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\phi(a,b)=\frac{\ensuremath{\mathbf{x}}\xspace'(b)}{|\ensuremath{\mathbf{x}}\xspace'|}$.
For the second condition,
\begin{align}
\sum_{b\in\ensuremath{\mathcal{G}}\xspace}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b) &= \sum_{b\in\ensuremath{\mathcal{G}}\xspace}|\ensuremath{\mathbf{x}}\xspace'|\sum_{a\in\ensuremath{\mathcal{G}}\xspace} \frac{\ensuremath{\mathbf{z}}\xspace(a) \phi(a , b)}{\ensuremath{\mathbf{x}}\xspace(a)}
= |\ensuremath{\mathbf{x}}\xspace'|\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\frac{\ensuremath{\mathbf{z}}\xspace(a)}{\ensuremath{\mathbf{x}}\xspace(a)}\sum_{b\in\ensuremath{\mathcal{G}}\xspace}\phi(a,b)
\stackrel{\text{(a)}}{=} \frac{|\ensuremath{\mathbf{x}}\xspace'|}{|\ensuremath{\mathbf{x}}\xspace|} \sum_{a\in\ensuremath{\mathcal{G}}\xspace}\ensuremath{\mathbf{z}}\xspace(a)
\stackrel{\text{(b)}}{\geq} (1-\alpha)|\ensuremath{\mathbf{x}}\xspace'|, \label{fphi_dist_bound}
\end{align}
where (a) follows from $\sum_{b\in\ensuremath{\mathcal{G}}\xspace}\phi(a,b)=\frac{\ensuremath{\mathbf{x}}\xspace(a)}{|\ensuremath{\mathbf{x}}\xspace|}$ and (b) follows because $\ensuremath{\mathbf{z}}\xspace$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$, which implies that $\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\ensuremath{\mathbf{z}}\xspace(a) = |\ensuremath{\mathbf{z}}\xspace| \geq (1-\alpha)|\ensuremath{\mathbf{x}}\xspace|$.
Therefore, $f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace'$.
\end{proof}
\begin{claim*}[Restating \Claimref{distortion_bw_xprime-Y}]
$\widehat{\partial}_{\emph{drop}}(\ensuremath{\mathbf{x}}\xspace',\prob{Y}) \leq \alpha$.
\end{claim*}
\begin{proof}
Note that the support of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ is the set of all $\alpha$-distorted histograms from $\ensuremath{\mathbf{x}}\xspace$.
We have shown in \Claimref{fphi_distort} that for any $\ensuremath{\mathbf{z}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace$ such that $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)\leq\alpha$, we have $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace',f_{\phi}(\ensuremath{\mathbf{z}}\xspace))\leq\alpha$.
This implies that $\sup_{\ensuremath{\mathbf{y}}\xspace\in\ensuremath{\mathrm{support}}\xspace(f_{\phi}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)))}\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace',\ensuremath{\mathbf{y}}\xspace)\leq\alpha$, which in turn implies that $\widehat{\partial}_{\text{drop}}(\ensuremath{\mathbf{x}}\xspace',\prob{Y})\leq\alpha$.
\end{proof}
\begin{claim*}[Restating \Claimref{Winf_Y_trLap-bound}]
$\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \beta$.
\end{claim*}
\begin{proof}
Define a coupling $\phi_{\ensuremath{\mathbf{x}}\xspace}$ of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ and $\prob{Y}$ over $\Hspace\ensuremath{\mathcal{G}}\xspace\times\Hspace\ensuremath{\mathcal{G}}\xspace$ as follows:
\[\phi_{\ensuremath{\mathbf{x}}\xspace}(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace) :=
\begin{cases}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace) = \ensuremath{\mathbf{z}}\xspace] & \text{ if } \ensuremath{\mathbf{y}}\xspace= f_{\phi}(\ensuremath{\mathbf{z}}\xspace), \\
0 & \text{ otherwise}.
\end{cases}
\]
It is easy to verify that the above defined $\phi_{\ensuremath{\mathbf{x}}\xspace}$ is a valid coupling of $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ and $\prob{Y}$, i.e., its first marginal is equal to $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ and the second marginal is equal to $\prob{Y}$. Note that $\phi_{\ensuremath{\mathbf{x}}\xspace}(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ is non-zero only when $\ensuremath{\mathbf{y}}\xspace=f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$.
This implies that
\[\Winf{}(\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace),\prob{Y}) \leq \sup_{(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)\leftarrow\phi_{\ensuremath{\mathbf{x}}\xspace}}\dhist{\ensuremath{\mathbf{z}}\xspace}{\ensuremath{\mathbf{y}}\xspace} = \sup_{(\ensuremath{\mathbf{z}}\xspace,f_{\phi}(\ensuremath{\mathbf{z}}\xspace))\leftarrow\phi_{\ensuremath{\mathbf{x}}\xspace}}\dhist{\ensuremath{\mathbf{z}}\xspace}{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)} \leq \beta,\]
where the last inequality follows from \Claimref{f_phi_preserves_dhist} (stated and proven below) and using the fact that $\ensuremath{\mathbf{z}}\xspace\sim\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)$ is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$.
\end{proof}
\begin{claim}\label{clm:f_phi_preserves_dhist}
Let $\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\Hspace\ensuremath{\mathcal{G}}\xspace$ be such that $\dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq\beta$. Then, for any $\ensuremath{\mathbf{z}}\xspace$ that is $\alpha$-distorted from $\ensuremath{\mathbf{x}}\xspace$, we have $\dhist{\ensuremath{\mathbf{z}}\xspace}{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)} \leq \dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq\beta$.
\end{claim}
\begin{proof}
Define $\phi'(a, b) = \frac{\ensuremath{\mathbf{z}}\xspace(a)|\ensuremath{\mathbf{x}}\xspace|\phi(a , b)}{\ensuremath{\mathbf{x}}\xspace(a)|\ensuremath{\mathbf{z}}\xspace|}$.
For any $a\in\ensuremath{\mathcal{G}}\xspace$, the first marginal of $\phi'$ is equal to $\sum_{b\in\ensuremath{\mathcal{G}}\xspace}\phi'(a,b)=\frac{\ensuremath{\mathbf{z}}\xspace(a)}{|\ensuremath{\mathbf{z}}\xspace|}$.
For any $b\in\ensuremath{\mathcal{G}}\xspace$, the second marginal of $\phi'$ is equal to
$\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\phi'(a,b) = \frac{|\ensuremath{\mathbf{x}}\xspace|}{|\ensuremath{\mathbf{z}}\xspace|}\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\frac{\ensuremath{\mathbf{z}}\xspace(a)}{\ensuremath{\mathbf{x}}\xspace(a)}\phi(a , b) = \frac{|\ensuremath{\mathbf{x}}\xspace|}{|\ensuremath{\mathbf{x}}\xspace'||\ensuremath{\mathbf{z}}\xspace|}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)$. We would like to say that the quantity on the RHS is equal to $\frac{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)}{|f_{\phi}(\ensuremath{\mathbf{z}}\xspace)|}$. We show this as follows:
Since $|\ensuremath{\mathbf{z}}\xspace|\geq(1-\alpha)|\ensuremath{\mathbf{x}}\xspace|$, there exists $c\geq0$ such that $|\ensuremath{\mathbf{z}}\xspace|=(1-\alpha+c)|\ensuremath{\mathbf{x}}\xspace|$. Substituting this equality for the inequality $|\ensuremath{\mathbf{z}}\xspace|\geq(1-\alpha)|\ensuremath{\mathbf{x}}\xspace|$ in \eqref{fphi_dist_bound} gives $\sum_{b\in\ensuremath{\mathcal{G}}\xspace}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)=(1-\alpha+c)|\ensuremath{\mathbf{x}}\xspace'|$. With these substitutions, we get $\frac{|\ensuremath{\mathbf{x}}\xspace|}{|\ensuremath{\mathbf{x}}\xspace'||\ensuremath{\mathbf{z}}\xspace|}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)=\frac{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)}{\sum_{b\in\ensuremath{\mathcal{G}}\xspace}f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)}$, which implies that the second marginal of $\phi'$ is equal to $\sum_{a\in\ensuremath{\mathcal{G}}\xspace}\phi'(a,b)=\frac{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)(b)}{|f_{\phi}(\ensuremath{\mathbf{z}}\xspace)|}$ for any $b\in\ensuremath{\mathcal{G}}\xspace$.
This means that $\phi'(a,b)$ is a valid coupling of $\ensuremath{\mathbf{z}}\xspace, f_{\phi}(\ensuremath{\mathbf{z}}\xspace)$. This implies that
\begin{align*}
\dhist{\ensuremath{\mathbf{z}}\xspace}{f_{\phi}(\ensuremath{\mathbf{z}}\xspace)} = \Winf{}(\ensuremath{\mathbf{z}}\xspace,f_{\phi}(\ensuremath{\mathbf{z}}\xspace)) \leq \sup_{\substack{(a', b') \leftarrow \phi'}}\dG{a'}{b'} \stackrel{\text{(c)}}{\leq} \sup_{\substack{(a', b') \leftarrow \phi}}\dG{a'}{b'} = \dhist{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} \leq \beta,
\end{align*}
where (c) holds because $\text{support}(\phi') \subseteq \text{support}(\phi)$ (by the definition of $\phi'$).
\end{proof}
\section{Details Omitted from \Sectionref{compostion_FA} -- Proof of \Lemmaref{err-sens-deterministic}}\label{app:err-sens-deterministic_proof}
For convenience, we write the lemma statement below.
\begin{lem*}[Restating \Lemmaref{err-sens-deterministic}]
Let $\ensuremath{\mathcal{M}}\xspace:\ensuremath{\mathcal{B}}\xspace\to\ensuremath{\mathcal{C}}\xspace$ be a deterministic mechanism for a deterministic function $f:\ensuremath{\mathcal{B}}\xspace\to\ensuremath{\mathcal{C}}\xspace$. Then, for any $\beta_1\geq0$, we have
\begin{align*}
\tau_{\ensuremath{\mathcal{M}}\xspace,f}^{0,0}(\beta_1,0) \quad = \sup_{\substack{X,X': \\ \Winf{}(\prob{X},\prob{X'})\leq \beta_1}} \Winf{}(\ensuremath{\mathcal{M}}\xspace(X),f(X')) \quad = \sup_{\substack{x,x'\in\ensuremath{\mathcal{B}}\xspace : \\ \dB{x}{x'} \leq \beta_1}} \dC{\ensuremath{\mathcal{M}}\xspace(x)}{f(x')}.
\end{align*}
\end{lem*}
\begin{proof}
The first equality follows from the definition of error sensitivity. We only need to prove the second equality.
\begin{itemize}
\item {\it LHS $\geq$ RHS:} This is the easy part.
\begin{align*}
\sup_{\substack{X,X': \\ \Winf{}(\prob{X},\prob{X'})\leq \beta_1}} \Winf{}(\ensuremath{\mathcal{M}}\xspace(X),f(X')) \ \ \ge \sup_{\substack{x,x'\in\ensuremath{\mathcal{B}}\xspace: \\ \Winf{}(\prob{x},\prob{x'})\leq \beta_1}} \Winf{}(\ensuremath{\mathcal{M}}\xspace(x),f(x')) \ \
= \sup_{\substack{x,x'\in\ensuremath{\mathcal{B}}\xspace: \\ \dB{x}{x'}\leq \beta_1}} \dC{\ensuremath{\mathcal{M}}\xspace(x)}{f(x')},
\end{align*}
where the inequality holds because restricting to point distributions shrinks the set over which we take the supremum, and the equality holds because the $\infty$-Wasserstein distance between two point distributions equals the distance, in the underlying metric, between the points on which they are supported.
\item {\it LHS $\leq$ RHS:}
Consider any two distributions $\prob{X}, \prob{X'}$ over $\ensuremath{\mathcal{B}}\xspace$ s.t.\ $\Winf{}(\prob{X},\prob{X'})\leq \beta_1$. Let $\phi_1$ be the optimal coupling between $\prob{X}, \prob{X'}$ such that
\[\Winf{}(\prob{X},\prob{X'}) \quad = \sup_{(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace') \leftarrow \phi_1} \dB{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'} \leq \beta_1. \]
Using $\phi_1,\ensuremath{\mathcal{M}}\xspace,f$, we define a joint distribution $\phi_2$ over $\ensuremath{\mathcal{C}}\xspace\times\ensuremath{\mathcal{C}}\xspace$ as follows: For any ${\bf a,b}\in\ensuremath{\mathcal{C}}\xspace$, define
\[\phi_2({\bf a}, {\bf b}) \quad := \sum_{\substack{\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace' : \\
\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace) = {\bf a}, f(\ensuremath{\mathbf{x}}\xspace') = {\bf b}}} \phi_1(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace').\]
It can be verified that $\phi_2\in\Phi(\ensuremath{\mathcal{M}}\xspace(X),f(X'))$, i.e., $\phi_2$ is a valid coupling between $\ensuremath{\mathcal{M}}\xspace(X), f(X')$.
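The verification that $\phi_2$ amounts to pushing $\phi_1$ forward through $(\ensuremath{\mathcal{M}}\xspace,f)$ can be sanity-checked numerically. The following is a minimal Python sketch on a toy instance; the maps \texttt{M}, \texttt{f} and the coupling \texttt{phi1} are hypothetical, not from the paper.

```python
from collections import defaultdict

# Hypothetical toy instance: deterministic maps M, f : B -> C and a coupling
# phi1 of (X, X') over B x B, with B = {0, 1, 2} and C = {"a", "b"}.
M = {0: "a", 1: "a", 2: "b"}
f = {0: "a", 1: "b", 2: "b"}
phi1 = {(0, 0): 0.2, (0, 1): 0.3, (1, 2): 0.1, (2, 2): 0.4}

# phi2(a, b) := sum of phi1(x, x') over all x, x' with M(x) = a, f(x') = b.
phi2 = defaultdict(float)
for (x, xp), p in phi1.items():
    phi2[(M[x], f[xp])] += p

# Marginals of phi2 ...
marg1, marg2 = defaultdict(float), defaultdict(float)
for (a, b), p in phi2.items():
    marg1[a] += p
    marg2[b] += p

# ... must equal the laws of M(X) and f(X') (marginals of phi1 pushed forward).
law_MX, law_fXp = defaultdict(float), defaultdict(float)
for (x, xp), p in phi1.items():
    law_MX[M[x]] += p
    law_fXp[f[xp]] += p

assert all(abs(marg1[c] - law_MX[c]) < 1e-12 for c in ("a", "b"))
assert all(abs(marg2[c] - law_fXp[c]) < 1e-12 for c in ("a", "b"))
```

The assertions are exactly the two marginal conditions that make $\phi_2$ a member of $\Phi(\ensuremath{\mathcal{M}}\xspace(X),f(X'))$.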
Now
\begin{align*}
\Winf{}(\ensuremath{\mathcal{M}}\xspace(X),f(X'))
\ \ \leq \sup_{({\bf a},{\bf b})\leftarrow\phi_2}\dC{\bf a}{\bf b}
\ \ = \sup_{(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace')\leftarrow\phi_1}\dC{\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace)}{f(\ensuremath{\mathbf{x}}\xspace')}
\ \ \leq \sup_{\substack{\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'\in\ensuremath{\mathcal{B}}\xspace: \\ \dB{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq \beta_1}}\dC{\ensuremath{\mathcal{M}}\xspace(\ensuremath{\mathbf{x}}\xspace)}{f(\ensuremath{\mathbf{x}}\xspace')},
\end{align*}
where the last inequality holds because $\{(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'):(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace')\leftarrow\phi_1\}\subseteq\{(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace'):\dB{\ensuremath{\mathbf{x}}\xspace}{\ensuremath{\mathbf{x}}\xspace'}\leq\beta_1\}$.
Note that the RHS of the last inequality does not depend on $X,X'$. So, taking supremum over all distributions $X, X'$ such that $\Winf{}(\prob{X},\prob{X'})\leq\beta_1$ gives the required result.
\end{itemize}
This completes the proof of \Lemmaref{err-sens-deterministic}.
\end{proof}
\section{Omitted Details from \Sectionref{compose-accuracy_proof} -- Flexible Accuracy Under Composition}\label{app:comp-accuracy}
In this section, we prove \Lemmaref{distsens-distrib}.
\begin{lem*}[Restating \Lemmaref{distsens-distrib}]
Suppose $f: A \to B$ has distortion sensitivity \distsens{f}{} w.r.t.\ $(\ensuremath{\mathsf{\partial}}\xspace_1,\ensuremath{\mathsf{\partial}}\xspace_2)$.
For all r.v.s $X_0$ over $A$ and $Y$ over $B$ such that $\ensuremath{{\widehat\dn}}\xspace_2(f(X_0),\prob{Y}) \le \alpha$ for some $\alpha\geq0$, there must exist a r.v.\ $X$ over $A$ such that $Y=f(X)$ and $\ensuremath{{\widehat\dn}}\xspace_1(\prob{X_0}, \prob{X}) \le \distsens{f}{}(\alpha)$, provided $\distsens{f}{}(\alpha)$ is finite.
\end{lem*}
\begin{proof}
Fix random variables $X_0$ over $A$ and $Y$ over $B$ such that $\ensuremath{{\widehat\dn}}\xspace_2(f(X_0),\prob{Y}) \le \alpha$.
Let $\phi$ be an optimal coupling that achieves the infimum in the definition of $\ensuremath{{\widehat\dn}}\xspace_2(f(X_0),\prob{Y})$, i.e.,
\begin{equation}\label{eq:distsens-distrib-interim1}
\ensuremath{{\widehat\dn}}\xspace_2(f(X_0),\prob{Y}) = \sup_{(u,y)\leftarrow\phi}\ensuremath{\mathsf{\partial}}\xspace_2(u,y) \leq \alpha.
\end{equation}
For each $x_0 \in \ensuremath{\mathrm{support}}\xspace(X_0)$, consider the conditional distribution $\phi_{x_0} = \phi | \{X_0 = x_0\}$.
Clearly, the first marginal of $\phi_{x_0}$ is a point distribution supported at $f(x_0)$. Let its second marginal be denoted by $\prob{Y_{x_0}}$. First we show that for each $x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)$, we have $\ensuremath{{\widehat\dn}}\xspace_2(f(x_0),\prob{Y_{x_0}})\leq \alpha$.
\begin{align*}
\ensuremath{{\widehat\dn}}\xspace_2(f(x_0),\prob{Y_{x_0}}) = \inf_{\varphi\in\Phi^0(f(x_0),\prob{Y_{x_0}})} \sup_{(u,y)\leftarrow\varphi}\ensuremath{\mathsf{\partial}}\xspace_2(u,y)
\leq \sup_{(u,y)\leftarrow\phi_{x_0}}\ensuremath{\mathsf{\partial}}\xspace_2(u,y)
\overset{\text{(a)}}{\leq} \sup_{(u,y)\leftarrow\phi}\ensuremath{\mathsf{\partial}}\xspace_2(u,y)
\overset{\text{(b)}}{\leq} \alpha.
\end{align*}
Here $(a)$ follows from the fact that $\ensuremath{\mathrm{support}}\xspace(\phi_{x_0}) \subseteq \ensuremath{\mathrm{support}}\xspace(\phi)$ and (b) follows from \eqref{eq:distsens-distrib-interim1}.
Thus for each $x_0 \in \ensuremath{\mathrm{support}}\xspace(X_0)$, we have $\ensuremath{{\widehat\dn}}\xspace_2(f(x_0),\prob{Y_{x_0}}) \leq \alpha$.
Since $\distsens{f}{}(\alpha)$ is finite, by the definition of $\distsens{f}{}$, there exists a r.v.\ $X_{x_0}$ such that
\begin{align}
Y_{x_0} &= f(X_{x_0}), \label{distsens-distrib-interim3} \\
\ensuremath{{\widehat\dn}}\xspace_1(x_0,\prob{X_{x_0}}) &\le \distsens{f}{}(\alpha). \label{distsens-distrib-interim4}
\end{align}
Define $X = \sum_{x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)} \prob{X_0}(x_0) X_{x_0}$, i.e., $X$ is the mixture that first samples $x_0\sim\prob{X_0}$ and then outputs a sample of $X_{x_0}$.
Now we show that $Y = f(X)$ and $\ensuremath{{\widehat\dn}}\xspace_1(\prob{X_0},\prob{X})\leq \distsens{f}{}(\alpha)$.
\begin{itemize}
\item {\bf Showing $Y = f(X)$:}
Note that $Y=\sum_{x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)} \prob{X_0}(x_0) Y_{x_0}$ and $f(X)=\sum_{x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)} \prob{X_0}(x_0) f(X_{x_0})$. Now the claim follows because $Y_{x_0} = f(X_{x_0})$ for each $x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)$ (from \eqref{distsens-distrib-interim3}).
\item {\bf Showing $\ensuremath{{\widehat\dn}}\xspace_1(\prob{X_0},\prob{X})\leq \distsens{f}{}(\alpha)$:}
For each $x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)$, let $\psi_{x_0}$ be the optimal coupling that achieves the infimum in the definition of
$\ensuremath{{\widehat\dn}}\xspace_1(x_0,\prob{X_{x_0}})$. That is, for each $x_0$, $\psi_{x_0} \in \Phi^{0}(x_0,\prob{X_{x_0}})$ and
$\ensuremath{{\widehat\dn}}\xspace_1(x_0,\prob{X_{x_0}})=\sup_{(a,b)\leftarrow\psi_{x_0}}\ensuremath{\mathsf{\partial}}\xspace_1(a,b)$.
Let $\psi$ be defined by $\psi(a,b)=\sum_{x_0\in\ensuremath{\mathrm{support}}\xspace(X_0)}\prob{X_0}(x_0)\psi_{x_0}(a,b)$; since each $\psi_{x_0}$ is supported on $\{x_0\}\times A$, this sum has at most one nonzero term.
It is easy to verify that $\psi\in\Phi^{0}(\prob{X_0},\prob{X})$.
Further,
\begin{align*}
\ensuremath{{\widehat\dn}}\xspace_1(\prob{X_0},\prob{X}) &\leq \sup_{(a,b)\leftarrow\psi}\ensuremath{\mathsf{\partial}}\xspace_1(a,b)
= \sup_{x_0\leftarrow \prob{X_0}} \sup_{(a,b)\leftarrow\psi_{x_0}}\ensuremath{\mathsf{\partial}}\xspace_1(a,b)
= \sup_{x_0\leftarrow \prob{X_0}} \ensuremath{{\widehat\dn}}\xspace_1(x_0,\prob{X_{x_0}})
\leq \distsens{f}{}(\alpha),
\end{align*}
where the last inequality follows from \eqref{distsens-distrib-interim4}.
\end{itemize}
This completes the proof of \Lemmaref{distsens-distrib}.
\end{proof}
\section{Proof of \Theoremref{compose-DP} -- Differential Privacy Under Composition}\label{app:comp-privacy}
\begin{thm*}[Restating \Theoremref{compose-DP}]
Let $\ensuremath{\mathcal{M}}\xspace_1:A\to B$ and $\ensuremath{\mathcal{M}}\xspace_2:B\to C$ be any two mechanisms.
If $\ensuremath{\mathcal{M}}\xspace_1$ is neighborhood-preserving w.r.t.\
neighborhood relations $\sim_A$ and $\sim_B$ over $A$ and $B$, respectively,
and $\ensuremath{\mathcal{M}}\xspace_2$ is $(\epsilon, \delta)$-DP w.r.t.\ $\sim_B$,
then $\ensuremath{\mathcal{M}}\xspace_2\circ \ensuremath{\mathcal{M}}\xspace_1:A\to C$ is $(\epsilon, \delta)$-DP w.r.t.\ $\sim_A$.
\end{thm*}
\begin{proof}
For simplicity, we consider the case when $B$ is discrete. The proof can be
generalized to the continuous setting.
Since the mechanism $\ensuremath{\mathcal{M}}\xspace_1$ is neighborhood preserving, for $x, x' \in A$
s.t.\ $x \sim_A x'$, there exists a pair of jointly distributed random
variables $(X_1,X_2)$ over $B\times B$ s.t.\ $\prob{X_1} = \ensuremath{\mathcal{M}}\xspace_1(x)$,
$\prob{X_2} = \ensuremath{\mathcal{M}}\xspace_1(x')$ and $\Pr[X_1 \sim_B X_2] = 1$. So, for all $(x_1,
x_2)$ such that $\prob{X_1,X_2}(x_1,x_2)>0$, we have $x_1 \sim_B x_2$ and
hence, by the $(\epsilon , \delta)$-differential privacy of the mechanism
$\ensuremath{\mathcal{M}}\xspace_2$, for all subsets $S \subseteq C$, we have,
\begin{align*}
\Pr(\ensuremath{\mathcal{M}}\xspace_2(x_1) \in S) &\leq e^{\epsilon} \Pr(\ensuremath{\mathcal{M}}\xspace_2(x_2) \in S) + \delta.
\end{align*}
Thus, if $x\sim_A x'$, then for any subset $S\subseteq C$, we have,
\begin{align*}
\Pr[\ensuremath{\mathcal{M}}\xspace_2(\ensuremath{\mathcal{M}}\xspace_1(x)) \in S]
&= \sum_{x_1} \prob{X_1}(x_1)\Pr[\ensuremath{\mathcal{M}}\xspace_2(x_1) \in S] \\
&= \sum_{(x_1, x_2)} \prob{X_1,X_2}(x_1, x_2) \Pr[\ensuremath{\mathcal{M}}\xspace_2(x_1) \in S] \\
&\le \sum_{(x_1, x_2)} \prob{X_1,X_2}(x_1, x_2) \left(e^{\epsilon} \Pr[\ensuremath{\mathcal{M}}\xspace_2(x_2) \in S] + \delta\right) \\
&= e^\epsilon \left( \sum_{(x_1, x_2)} \prob{X_1,X_2}(x_1, x_2) \Pr[\ensuremath{\mathcal{M}}\xspace_2(x_2) \in S] \right) + \delta \\
&= e^\epsilon \left( \sum_{x_2} \prob{X_2}(x_2) \Pr[\ensuremath{\mathcal{M}}\xspace_2(x_2) \in S] \right) + \delta \\
&= e^{\epsilon} \Pr[\ensuremath{\mathcal{M}}\xspace_2(\ensuremath{\mathcal{M}}\xspace_1(x')) \in S] + \delta.
\end{align*}
This completes the proof of \Theoremref{compose-DP}.
\end{proof}
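The inequality chain in the proof can be checked numerically on a toy instance. In the sketch below, the coupling \texttt{joint} (standing in for $\prob{X_1,X_2}$) and the randomized-response mechanism standing in for $\ensuremath{\mathcal{M}}\xspace_2$ are hypothetical choices of ours: all pairs of $B=\{0,1\}$ are treated as neighbors, and the composed output is verified to satisfy the $\epsilon$-DP inequality for every event $S$.

```python
import itertools
import math

# Hypothetical toy instance: B = C = {0, 1}, all pairs in B are neighbors,
# and M2 is randomized response, which is eps-DP since p / (1 - p) = e^eps.
eps = math.log(3)
p = math.e**eps / (1 + math.e**eps)
M2 = {0: {0: p, 1: 1 - p}, 1: {0: 1 - p, 1: p}}

# A coupling of (M1(x), M1(x')) supported only on neighboring pairs of B,
# as guaranteed by M1 being neighborhood preserving.
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 1): 0.5}
P1 = {b: sum(q for (b1, _), q in joint.items() if b1 == b) for b in (0, 1)}
P2 = {b: sum(q for (_, b2), q in joint.items() if b2 == b) for b in (0, 1)}

def pr(P, S):
    # Pr[M2(M1(.)) in S] when M1(.) has law P -- the mixture from the proof.
    return sum(P[b] * sum(M2[b][c] for c in S) for b in (0, 1))

# The composed mechanism satisfies the eps-DP inequality for every event S.
for r in range(3):
    for S in itertools.combinations((0, 1), r):
        assert pr(P1, S) <= math.e**eps * pr(P2, S) + 1e-12
        assert pr(P2, S) <= math.e**eps * pr(P1, S) + 1e-12
```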
\section{$d$-Dimensional Analogues of our Mechanisms/Results}\label{app:d-dim-results}
In our $d$-dimensional bucketing mechanism for $\ensuremath{\mathcal{G}}\xspace=[0,B)^d$, we divide $[0,B)^d$ into $t=\lceil \frac{B}{w}\rceil^d$ $d$-dimensional cubes (buckets), each of side length $w$, and map each input point to the center of the cube (bucket) containing it. Note that the distance from any point in $[0,B)^d$ to the center of its bucket is at most $\frac{w}{2}\sqrt{d}$. In the following, we will ignore the ceiling for simplicity.
\begin{algorithm}
\caption{Bucketing Mechanism over $[0,B)^d$, \mbuc{w,[0,B)^d}}\label{algo:app-bucketing}
{\bf Parameter:} Bucket (a $d$-dimensional cube) side length $w$, ground set $[0,B)^d$. \\
{\bf Input:} A histogram $\ensuremath{\mathbf{x}}\xspace$ over $[0,B)^d$. \\
{\bf Output:} A histogram $\ensuremath{\mathbf{y}}\xspace$ over $S = T^d$, where $T=\{ w(i-\frac12) : i \in [\lceil \frac{B}{w} \rceil] \}$, and $|\ensuremath{\mathbf{y}}\xspace| = |\ensuremath{\mathbf{x}}\xspace|$. \\
\vspace{-0.3cm}
\begin{algorithmic}[1]
\ForAll{$s \in S $}
\State $\ensuremath{\mathbf{y}}\xspace(s) := \sum_{g:g-s \in [\frac{-w}2,\frac{w}2)^d} \; \ensuremath{\mathbf{x}}\xspace(g)$
\EndFor
\State Return \ensuremath{\mathbf{y}}\xspace
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{BucketHist Mechanism over $[0,B)^d$, \mBhist{\alpha,\beta,[0,B)^d}}\label{algo:app-histogram-mech}
{\bf Parameter:} Accuracy parameters $\alpha,\beta$; ground set $[0,B)^d$. \\
{\bf Input:} A histogram \ensuremath{\mathbf{x}}\xspace over $[0,B)^d$. \\
{\bf Output:} A histogram \ensuremath{\mathbf{y}}\xspace over $[0,B)^d$. \\
\vspace{-0.3cm}
\begin{algorithmic}[1]
\State $w := 2\beta$, $t := \lceil \frac{B}{w}\rceil^d$, $\ensuremath{\tau}\xspace := \alpha/t$
\State Return $\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,[0,B)^d} \circ \mbuc{w,[0,B)^d} (\ensuremath{\mathbf{x}}\xspace)$ \Comment{where \mbuc{w,[0,B)^d} is in \Algorithmref{app-bucketing}}
\end{algorithmic}
\end{algorithm}
Our $d$-dimensional bucketing mechanism $\mbuc{w,[0,B)^d}$ and the final $d$-dimensional bucketed-histogram mechanism $\mBhist{\alpha,\beta,[0,B)^d}$ are presented in \Algorithmref{app-bucketing} and \Algorithmref{app-histogram-mech}, respectively.
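The bucketing step can be sketched in a few lines of Python, assuming the input histogram is finitely supported and represented as a dictionary from $d$-dimensional points to masses (the function name \texttt{bucketing} and the toy histogram are ours):

```python
import math

def bucketing(x_hist, w):
    """Sketch of M_buc over [0, B)^d: each mass point is moved to the center
    of the side-w cube (bucket) that contains it; bucket masses add up."""
    y_hist = {}
    for g, mass in x_hist.items():
        center = tuple(w * (math.floor(gi / w) + 0.5) for gi in g)
        y_hist[center] = y_hist.get(center, 0.0) + mass
    return y_hist

# Toy 2-D histogram over [0, 2)^2 with w = 1: total mass is preserved and
# every point moves by at most (w / 2) * sqrt(d).
x = {(0.1, 0.9): 2.0, (0.4, 0.4): 1.0, (1.7, 0.2): 3.0}
y = bucketing(x, w=1.0)
assert abs(sum(y.values()) - sum(x.values())) < 1e-12
assert all(
    math.dist(g, min(y, key=lambda c: math.dist(g, c))) <= 0.5 * math.sqrt(2)
    for g in x
)
```

The two assertions mirror the two facts used later: $|\ensuremath{\mathbf{y}}\xspace|=|\ensuremath{\mathbf{x}}\xspace|$, and no point is displaced by more than $\frac{w}{2}\sqrt{d}$.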
As mentioned in \Remarkref{d-dim} in \Sectionref{beyond-drop}, with these modified mechanisms, all our results in \Theoremref{bucketing-hist}, \Theoremref{bucketing-general}, and \Theoremref{bucketing-general-drmv} will hold verbatim, except for the value of $\tau$, which will be replaced by $\tau=\alpha(\frac{2\beta}{B\sqrt{d}})^d$. Note that for the one dimensional case, we have $\tau=\frac{\alpha}{t}=\alpha(\frac{w}{B})$, where $w=2\beta$, which comes from the $(0,\frac{w}{2},0)$-accuracy of the bucketing mechanism $\mbuc{w,[0,B)}$ (see \Claimref{bucket-accuracy} in \Sectionref{bucketHist-priv-accu-proof}).
The $d$-dimensional analogue of that result is stated in the following claim, which can be proven along the lines of the proof of \Claimref{bucket-accuracy}.
\begin{claim}\label{clm:bucket-accuracy-d-dim}
$\mbuc{w,[0,B)^d}$ is $\left(0, \frac{w}{2}\sqrt{d}, 0\right)$-accurate for the identity function $f_{\emph{id}}$ over $\Hspace{[0,B)^d}$ w.r.t.\ the metric $\ensuremath{\met_{\mathrm{hist}}}\xspace$.
\end{claim}
It follows from \Claimref{bucket-accuracy-d-dim} that the output error of $\mbuc{w,[0,B)^d}$ is $\beta=\frac{w}{2}\sqrt{d}$. This implies $\tau=\frac{\alpha}{t}=\alpha(\frac{w}{B})^d=\alpha(\frac{2\beta}{B\sqrt{d}})^d$.
\section{Details Omitted from \Sectionref{distortion-measures}}\label{app:beyond-drop}
In this section, first we prove that $\ensuremath{{\widehat\dn}}\xspace$ is a quasi-metric (assuming that $\ensuremath{\mathsf{\partial}}\xspace$ is a quasi-metric), and then prove that our two distortions $\ensuremath{\dn_{\mathrm{move}}}\xspace$ and $\dropmove\eta$ (defined in \eqref{eq:move_defn} and \eqref{eq:drop_move_defn}, respectively) are metric and quasi-metric, respectively.
\begin{lem}\label{lem:dnx-quasi-metric}
If \ensuremath{\mathsf{\partial}}\xspace is a quasi-metric, then \ensuremath{{\widehat\dn}}\xspace is a quasi-metric.
\end{lem}
\begin{proof}
We need to show that for any three distributions $P$, $Q$, and $R$ over the same space $A$, we have
{\sf (i)} $\ensuremath{{\widehat\dn}}\xspace(P, Q)\geq 0$, where the equality holds if and only if $P = Q$, and {\sf (ii)} $\ensuremath{{\widehat\dn}}\xspace$ satisfies the triangle inequality: $\ensuremath{{\widehat\dn}}\xspace(P, Q)\leq \ensuremath{{\widehat\dn}}\xspace(P, R) + \ensuremath{{\widehat\dn}}\xspace(R, Q)$. We show them one by one below:
\begin{enumerate}
\item The first property follows from the definition of $\ensuremath{{\widehat\dn}}\xspace$ (see \Definitionref{distortion}): If $\ensuremath{{\widehat\dn}}\xspace(P, Q)= 0$, then the optimal $\phi\in\Phi(P, Q)$ is a diagonal distribution, which means that $P = Q$. On the other hand, if $P = Q$, then there exists a coupling $\phi$ in $\Phi(P, Q)$, which is a diagonal distribution and hence $\ensuremath{{\widehat\dn}}\xspace(P, Q)= 0$.
\item Since the definition of $\ensuremath{{\widehat\dn}}\xspace$ is the same as that of $\Winf{}$, except that the former is defined w.r.t.\ a quasi-metric whereas the latter is defined w.r.t.\ a metric, we can show the triangle inequality for $\ensuremath{{\widehat\dn}}\xspace$ along the lines of the proof of \Lemmaref{Winf_triangle}. Note that we did not use the symmetry of the underlying metric $\ensuremath{\mathfrak{d}}\xspace$ while proving \Lemmaref{Winf_triangle}; we only used that $\ensuremath{\mathfrak{d}}\xspace$ satisfies the triangle inequality, which also holds for the quasi-metric $\ensuremath{\mathsf{\partial}}\xspace$.
\end{enumerate}
This completes the proof of \Lemmaref{dnx-quasi-metric}.
\end{proof}
In \Sectionref{distortion-measures}, we introduced two new distortions: $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ in \eqref{eq:move_defn} and $\dropmove\eta(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ in \eqref{eq:drop_move_defn}. We prove that $\ensuremath{\dn_{\mathrm{move}}}\xspace$ is a metric in \Claimref{move-metric} and that $\dropmove\eta$ is a quasi-metric in \Claimref{drop-move-quasi-metric}.
We present the definitions of these distortions here again for convenience:
\begin{align*}
\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace) &= \begin{cases}
\Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}) & \text{ if } |\ensuremath{\mathbf{x}}\xspace|=|\ensuremath{\mathbf{y}}\xspace| \\
\infty & \text{ otherwise}
\end{cases}
&
\dropmove\eta(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace) = \inf_{\ensuremath{\mathbf{z}}\xspace} \left(\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)\right).
\end{align*}
Note that in the definition of $\ensuremath{\dn_{\mathrm{move}}}\xspace$, when $|\ensuremath{\mathbf{x}}\xspace| = |\ensuremath{\mathbf{y}}\xspace| = 0$, we define $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) = 0$.
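In one dimension, the $\infty$-Wasserstein distance between two equal-mass histograms is attained by the quantile (comonotone) coupling, which gives a simple way to evaluate $\ensuremath{\dn_{\mathrm{move}}}\xspace$. The following Python sketch is restricted to that 1-D setting (the function name \texttt{d\_move\_1d} is ours):

```python
def d_move_1d(x_hist, y_hist, tol=1e-12):
    """Sketch of d_move for 1-D histograms: when |x| = |y|, the comonotone
    (quantile) coupling attains W_inf on the real line, so we greedily match
    mass in sorted order and record the largest displacement."""
    mx, my = sum(x_hist.values()), sum(y_hist.values())
    if abs(mx - my) > tol:
        return float("inf")  # d_move is infinite when the masses differ
    if mx <= tol:
        return 0.0  # both histograms empty: distance defined to be 0
    xs = sorted((g, m / mx) for g, m in x_hist.items())
    ys = sorted((g, m / my) for g, m in y_hist.items())
    i = j = 0
    dist = 0.0
    rx, ry = xs[0][1], ys[0][1]  # remaining mass at current support points
    while i < len(xs) and j < len(ys):
        dist = max(dist, abs(xs[i][0] - ys[j][0]))
        moved = min(rx, ry)
        rx -= moved
        ry -= moved
        if rx <= tol:
            i += 1
            rx = xs[i][1] if i < len(xs) else 0.0
        if ry <= tol:
            j += 1
            ry = ys[j][1] if j < len(ys) else 0.0
    return dist
```

For example, the histogram with unit masses at $0$ and $1$ is at $\ensuremath{\dn_{\mathrm{move}}}\xspace$-distance $\frac12$ from the histogram with mass $2$ at $\frac12$, while any mass mismatch yields distance $\infty$.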
\begin{claim}\label{clm:move-metric}
$\ensuremath{\dn_{\mathrm{move}}}\xspace(\cdot,\cdot)$ is a metric.
\end{claim}
\begin{proof}
Since $\ensuremath{\dn_{\mathrm{move}}}\xspace(\cdot,\cdot)$ is defined as the $\infty$-Wasserstein distance between normalized histograms, it suffices to show that the $\infty$-Wasserstein distance is a metric. We need to show three things for any triple of distributions $P,Q,R$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$:
{\sf (i)} $\Winf{}(P,Q)\geq0$ and equality holds if and only if $P=Q$, {\sf (ii)} $\Winf{}(P,Q)=\Winf{}(Q,P)$, and {\sf (iii)} $\Winf{}(P,R)\leq\Winf{}(P,Q)+\Winf{}(Q,R)$.
By definition, $\Winf{}(P, R) = \inf_{\phi\in\Phi(P,R)} \sup_{(x,z):\phi(x,z)\neq 0}\ensuremath{\mathfrak{d}}\xspace(x,z)$.
Now, the first two conditions follow because $\ensuremath{\mathfrak{d}}\xspace$ is a metric, and the last condition (the triangle inequality) is shown in \Lemmaref{Winf_triangle} in \Appendixref{wasserstein}.
Note that when $|\ensuremath{\mathbf{x}}\xspace| = |\ensuremath{\mathbf{y}}\xspace| = 0$, the Wasserstein distance is undefined, but we have defined $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace)$ in this case separately as $0$ which is consistent with the properties of a metric.
\end{proof}
We first give an intermediate result (\Lemmaref{drop-move-switch} below) which will be used in proving that $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace$ is a quasi-metric. The result of this lemma is also used in the proof of \Theoremref{bucketing-general-drmv}.
\begin{lem}\label{lem:drop-move-switch}
Let $\ensuremath{\mathbf{x}}\xspace$, $\ensuremath{\mathbf{y}}\xspace$ and $\ensuremath{\mathbf{z}}\xspace$ be any three histograms over a ground set $\ensuremath{\mathcal{G}}\xspace$, associated with a metric $\ensuremath{\mathfrak{d}}\xspace$, such that $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{z}}\xspace) = \alpha_1$ and $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{z}}\xspace, \ensuremath{\mathbf{y}}\xspace) = \alpha_2$ with $\alpha_1 \ge 0$ and $\alpha_2 < 1$. Then there exists a histogram $\ensuremath{\mathbf{s}}\xspace$ such that $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{s}}\xspace) = \alpha_2$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace, \ensuremath{\mathbf{y}}\xspace) \leq \alpha_1$.
\end{lem}
\begin{proof}
Using the definitions of $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace$, we have the following:
\begin{enumerate}[label=\textbf{Z.\arabic*}]
\item \label{Z1} $|\ensuremath{\mathbf{x}}\xspace| = |\ensuremath{\mathbf{z}}\xspace|$
\item \label{Z2} $\Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|}, \frac{\ensuremath{\mathbf{z}}\xspace}{|\ensuremath{\mathbf{z}}\xspace|}) \leq \alpha_1$. We will use $\phi_z$ to denote the optimal joint distribution which achieves the infimum in the definition of $\Winf{}(\frac{\ensuremath{\mathbf{x}}\xspace}{|\ensuremath{\mathbf{x}}\xspace|}, \frac{\ensuremath{\mathbf{z}}\xspace}{|\ensuremath{\mathbf{z}}\xspace|})$.
\item \label{Z3} $|\ensuremath{\mathbf{y}}\xspace| = (1 - \alpha_2)|\ensuremath{\mathbf{z}}\xspace|$
\item \label{Z4} For all $g \in \ensuremath{\mathcal{G}}\xspace$, $0 \le \ensuremath{\mathbf{y}}\xspace(g) \le \ensuremath{\mathbf{z}}\xspace(g)$
\end{enumerate}
Now we want to prove the existence of a histogram $\ensuremath{\mathbf{s}}\xspace$ with the following properties:
\begin{enumerate}[label=\textbf{S.\arabic*}]
\item \label{S1} $|\ensuremath{\mathbf{s}}\xspace| = (1 - \alpha_2)|\ensuremath{\mathbf{x}}\xspace|$
\item \label{S2} For all $g \in \ensuremath{\mathcal{G}}\xspace$, $0 \le \ensuremath{\mathbf{s}}\xspace(g) \le \ensuremath{\mathbf{x}}\xspace(g)$
\item \label{S3} $|\ensuremath{\mathbf{s}}\xspace| = |\ensuremath{\mathbf{y}}\xspace|$
\item \label{S4} $\Winf{}(\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|}, \frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}) \leq \alpha_1$.
\end{enumerate}
Consider the following joint distribution $\phi_s$:
\begin{equation}\label{eq:drmv-phi}
\phi_s(g_x, g_y) = \begin{cases}
\frac{1}{1 - \alpha_2} \phi_z(g_x, g_y)\frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)} & \text{if } \ensuremath{\mathbf{z}}\xspace(g_y) > 0\\
0 & \text{otherwise}
\end{cases}
\end{equation}
We denote the first marginal of $\phi_s$ by $\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|}$, where $\ensuremath{\mathbf{s}}\xspace$ corresponds to the histogram that we want to show exists.
By definition, for all $g_x, g_y \in \ensuremath{\mathcal{G}}\xspace$, we have $\phi_s(g_x, g_y) \ge 0$.
Also note that, if $\ensuremath{\mathbf{z}}\xspace(g_y) = 0$, then for all $g_x \in \ensuremath{\mathcal{G}}\xspace$, we have $\phi_z(g_x, g_y) = 0$; this is because $\frac{\ensuremath{\mathbf{z}}\xspace}{|\ensuremath{\mathbf{z}}\xspace|}$ is the second marginal of $\phi_z$.
Now we show that the above-defined $\phi_s$ satisfies properties \ref{S1}-\ref{S4} -- we show these in the order \ref{S4}, \ref{S3}, \ref{S1}, \ref{S2}.
\begin{itemize}
\item {\bf Proof of \ref{S4}.}
Note that the first marginal of $\phi_s$ is assumed to be $\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|}$.
Now we show that its second marginal is $\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}$ and that $\max_{(g_x,g_y)\leftarrow\phi_s}\ensuremath{\mathfrak{d}}\xspace(g_x,g_y)\leq\alpha_1$. Note that these together imply that $\Winf{}(\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|})\leq\alpha_1$.
\begin{itemize}
\item {\it Second marginal of $\phi_s$ is $\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}$:}
We show it in two parts, first for $g_y\in\ensuremath{\mathcal{G}}\xspace$ for which $\ensuremath{\mathbf{z}}\xspace(g_y)=0$ and then for the rest of the $g_y\in\ensuremath{\mathcal{G}}\xspace$. Note that when $\ensuremath{\mathbf{z}}\xspace(g_y) = 0$, we have from \ref{Z4} that $\ensuremath{\mathbf{y}}\xspace(g_y)=0$.
Now we show that $\int_{\ensuremath{\mathcal{G}}\xspace}\phi_s(g_x,g_y)\ensuremath{\,\mathrm{d}}\xspace g_x=0$.
It follows from \eqref{eq:drmv-phi} that for all $g_y$ such that $\ensuremath{\mathbf{z}}\xspace(g_y) = 0$, we have
$\phi_s(g_x,g_y)=0, \forall g_x\in\ensuremath{\mathcal{G}}\xspace$, which implies that $\int_{\ensuremath{\mathcal{G}}\xspace}\phi_s(g_x,g_y)\ensuremath{\,\mathrm{d}}\xspace g_x=0$.
Now we analyze the case when $\ensuremath{\mathbf{z}}\xspace(g_y) > 0$.
\begin{align*}
\int_{\ensuremath{\mathcal{G}}\xspace} \phi_s(g_x, g_y)\ensuremath{\,\mathrm{d}}\xspace g_x &= \int_{\ensuremath{\mathcal{G}}\xspace} \frac{1}{1 - \alpha_2} \phi_z(g_x, g_y)\frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)}\ensuremath{\,\mathrm{d}}\xspace g_x \tag{using \Equationref{drmv-phi}}\\
&= \frac{1}{1 - \alpha_2} \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)} \int_{\ensuremath{\mathcal{G}}\xspace} \phi_z(g_x, g_y)\ensuremath{\,\mathrm{d}}\xspace g_x\\
&= \frac{1}{1 - \alpha_2} \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)} \frac{\ensuremath{\mathbf{z}}\xspace(g_y)}{|\ensuremath{\mathbf{z}}\xspace|}\tag{using \ref{Z2}}\\
&= \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{(1 - \alpha_2)|\ensuremath{\mathbf{z}}\xspace|}\\
&= \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{|\ensuremath{\mathbf{y}}\xspace|}. \tag{Using \ref{Z3}}
\end{align*}
\item {\it $\Winf{}(\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|},\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|})\leq\alpha_1$:}
We have shown that the first and the second marginals of $\phi_s$ are $\frac{\ensuremath{\mathbf{s}}\xspace}{|\ensuremath{\mathbf{s}}\xspace|}$ and $\frac{\ensuremath{\mathbf{y}}\xspace}{|\ensuremath{\mathbf{y}}\xspace|}$, respectively. So, it suffices to show that $\max_{(g_x,g_y)\leftarrow\phi_s}\ensuremath{\mathfrak{d}}\xspace(g_x,g_y)\leq\alpha_1$.
Consider any pair $(g_x, g_y) \in \ensuremath{\mathcal{G}}\xspace^2$ s.t. $\phi_s(g_x, g_y) > 0$. This is possible only if $\phi_z(g_x, g_y) > 0$ (see \Equationref{drmv-phi}), which, when combined with \ref{Z2}, gives $\ensuremath{\mathfrak{d}}\xspace(g_x, g_y) \le \alpha_1$. Hence, for any pair $(g_x, g_y) \in \ensuremath{\mathcal{G}}\xspace^2$ s.t. $\phi_s(g_x, g_y) > 0$, we have $\ensuremath{\mathfrak{d}}\xspace(g_x, g_y) \le \alpha_1$.
\end{itemize}
\item {\bf Proof of \ref{S3}.}
Note that \Equationref{drmv-phi} gives the normalized $\ensuremath{\mathbf{s}}\xspace$, but we still have the freedom to choose $|\ensuremath{\mathbf{s}}\xspace|$. To satisfy \ref{S3}, we set $|\ensuremath{\mathbf{s}}\xspace| = |\ensuremath{\mathbf{y}}\xspace|$.
\item {\bf Proof of \ref{S1}.}
Note that \ref{S1} is already satisfied using \ref{Z1}, \ref{Z3}, and \ref{S3}.
\item {\bf Proof of \ref{S2}.}
Let us denote $\{g \in \ensuremath{\mathcal{G}}\xspace\ |\ \ensuremath{\mathbf{z}}\xspace(g) > 0\}$ by $\ensuremath{\mathcal{G}}\xspace_z$. We will show that for any $g \in \ensuremath{\mathcal{G}}\xspace$, we have $\ensuremath{\mathbf{x}}\xspace(g) - \ensuremath{\mathbf{s}}\xspace(g) \ge 0$:
\begin{align*}
\ensuremath{\mathbf{x}}\xspace(g) - \ensuremath{\mathbf{s}}\xspace(g) &= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace} \phi_z(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y - |\ensuremath{\mathbf{s}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace} \phi_s(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y \tag{using \ref{Z2} and \ref{S4}}\\
&= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y - |\ensuremath{\mathbf{s}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace} \phi_s(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y \tag{Since $\ensuremath{\mathbf{z}}\xspace(g_y) = 0 \implies \phi_z(g, g_y) = 0, \forall g\in\ensuremath{\mathcal{G}}\xspace$;~\ref{Z2}}\\
&= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y - |\ensuremath{\mathbf{s}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \frac{1}{1 - \alpha_2} \phi_z(g, g_y)\frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)}\ensuremath{\,\mathrm{d}}\xspace g_y \tag{using \Equationref{drmv-phi}}\\
&= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y - \frac{(1 - \alpha_2)|\ensuremath{\mathbf{x}}\xspace|}{1 - \alpha_2}\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)}\ensuremath{\,\mathrm{d}}\xspace g_y \tag{using \ref{S1}}\\
&= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\ensuremath{\,\mathrm{d}}\xspace g_y - |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y) \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)}\ensuremath{\,\mathrm{d}}\xspace g_y \\
&= |\ensuremath{\mathbf{x}}\xspace|\int_{\ensuremath{\mathcal{G}}\xspace_z} \phi_z(g, g_y)\left(1 - \frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)}\right)\ensuremath{\,\mathrm{d}}\xspace g_y\\
&\ge 0. \tag{using \ref{Z4}, $\frac{\ensuremath{\mathbf{y}}\xspace(g_y)}{\ensuremath{\mathbf{z}}\xspace(g_y)} \le 1$}
\end{align*}
\end{itemize}
Thus, we have shown that the joint distribution $\phi_s$ defined in \eqref{eq:drmv-phi} satisfies all four properties \ref{S1}-\ref{S4}.
This completes the proof of \Lemmaref{drop-move-switch}.
\end{proof}
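The reweighting \eqref{eq:drmv-phi} can also be checked numerically: the Python sketch below builds $\phi_s$ from a toy $\phi_z$, $\ensuremath{\mathbf{x}}\xspace$, $\ensuremath{\mathbf{z}}\xspace$, $\ensuremath{\mathbf{y}}\xspace$ of our own choosing (all hypothetical) and verifies the mass and marginal conditions behind \ref{S1}-\ref{S4}.

```python
def switch_coupling(phi_z, z, y):
    """Sketch of the coupling phi_s: reweight phi_z(gx, gy) by y(gy)/z(gy)
    and renormalize by 1 / (1 - alpha2), where alpha2 = 1 - |y| / |z|."""
    alpha2 = 1.0 - sum(y.values()) / sum(z.values())
    return {
        (gx, gy): p * y.get(gy, 0.0) / z[gy] / (1.0 - alpha2)
        for (gx, gy), p in phi_z.items()
        if z.get(gy, 0.0) > 0
    }

# Hypothetical toy instance on the ground set {0, 1}:
x = {0: 2.0, 1: 2.0}              # |x| = 4
z = {0: 3.0, 1: 1.0}              # |z| = 4 = |x|
y = {0: 1.5, 1: 0.5}              # y <= z entrywise, |y| = 2, alpha2 = 1/2
phi_z = {(0, 0): 0.5, (1, 0): 0.25, (1, 1): 0.25}  # couples x/|x|, z/|z|

phi_s = switch_coupling(phi_z, z, y)
# s is |y| times the first marginal of phi_s (so S.3 holds by construction).
s = {}
for (gx, _), p in phi_s.items():
    s[gx] = s.get(gx, 0.0) + p * sum(y.values())

assert abs(sum(s.values()) - sum(y.values())) < 1e-12        # S.1 and S.3
assert all(0.0 <= s[g] <= x[g] + 1e-12 for g in s)           # S.2
# Second marginal of phi_s equals y/|y| (part of S.4).
for gy in y:
    m = sum(p for (_, b), p in phi_s.items() if b == gy)
    assert abs(m - y[gy] / sum(y.values())) < 1e-12
```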
\begin{claim}\label{clm:drop-move-quasi-metric}
For all $\eta \in \ensuremath{\R_{\ge0}}\xspace$, $\dropmove\eta(\cdot,\cdot)$ is a quasi-metric.
\end{claim}
\begin{proof}
Note that both $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace$ are quasi-metrics. Hence, for any $\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace$, we have $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) \ge 0$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) \ge 0$, which implies that for every $\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace$, $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) \ge 0$. We now prove, one by one, that $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace$ satisfies the properties of a quasi-metric:
\textit{Property \#1: For all $\ensuremath{\mathbf{x}}\xspace$ and $\ensuremath{\mathbf{y}}\xspace$, $\ensuremath{\mathbf{x}}\xspace = \ensuremath{\mathbf{y}}\xspace \Leftrightarrow \ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) = 0$.}
\begin{enumerate}
\item For all $\ensuremath{\mathbf{x}}\xspace$, $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace) = 0$:
\begin{align*}
\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace) &= \inf_{\ensuremath{\mathbf{z}}\xspace} \left( \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{x}}\xspace)\right) \\
&\le \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{x}}\xspace) \tag{the infimum over a set is $\le$ the value at any fixed point in the set}\\
&= 0
\end{align*}
Since $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace) \ge 0$ as well as $\le 0$, $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{x}}\xspace) = 0$.
\item For all $\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace$, $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) = 0 \implies \ensuremath{\mathbf{x}}\xspace = \ensuremath{\mathbf{y}}\xspace$:\\
$\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) = 0$ implies that $\inf_{\ensuremath{\mathbf{z}}\xspace} \left( \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace) \right) = 0$. As both $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace)$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace)$ are $\ge 0$ for any value of $\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{z}}\xspace$, this is possible only if $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{z}}\xspace) = \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{z}}\xspace,\ensuremath{\mathbf{y}}\xspace) = 0$ which means that $\ensuremath{\mathbf{x}}\xspace = \ensuremath{\mathbf{z}}\xspace = \ensuremath{\mathbf{y}}\xspace$. Hence $\ensuremath{\mathbf{x}}\xspace = \ensuremath{\mathbf{y}}\xspace$.
\end{enumerate}
\textit{Property \#2: For all $\ensuremath{\mathbf{x}}\xspace$, $\ensuremath{\mathbf{y}}\xspace$ and $\ensuremath{\mathbf{z}}\xspace$, $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{z}}\xspace) \le \ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) + \ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{z}}\xspace)$.}
We assume that the infimum in both $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace)$ and $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{z}}\xspace)$ is achieved by $\ensuremath{\mathbf{s}}\xspace_1$ and $\ensuremath{\mathbf{s}}\xspace_2$, respectively
(the proof can be easily extended to the case when the infimum is not achieved).
This means that there exists $a,b,c,d\geq0$, such that
\[\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{s}}\xspace_1) = a;\ \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1, \ensuremath{\mathbf{y}}\xspace) = b;\ \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{s}}\xspace_2) = c;\ \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace_2, \ensuremath{\mathbf{z}}\xspace) = d,\]
which implies $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{y}}\xspace) = a + \eta b$ and $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{z}}\xspace) = c + \eta d$.
We need to show that $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{z}}\xspace) \le (a + c) + \eta(b + d)$.
Using \Lemmaref{drop-move-switch} with $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1, \ensuremath{\mathbf{y}}\xspace) = b$ and $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{y}}\xspace, \ensuremath{\mathbf{s}}\xspace_2) = c$, we get that there is a $\ensuremath{\mathbf{y}}\xspace'$ such that $\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1, \ensuremath{\mathbf{y}}\xspace') = c$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{y}}\xspace', \ensuremath{\mathbf{s}}\xspace_2) \leq b$. This gives the following:
\[\ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{s}}\xspace_1) = a;\ \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1, \ensuremath{\mathbf{y}}\xspace') = c;\ \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{y}}\xspace', \ensuremath{\mathbf{s}}\xspace_2) \leq b;\ \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace_2, \ensuremath{\mathbf{z}}\xspace) = d.\]
Now we prove that $\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{z}}\xspace) \le (a + c) + \eta(b + d)$:
\begin{align*}
\ensuremath{\dn_{\mathrm{drmv}}^{\eta}}\xspace(\ensuremath{\mathbf{x}}\xspace, \ensuremath{\mathbf{z}}\xspace) &= \inf_{\ensuremath{\mathbf{w}}\xspace} \left( \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{w}}\xspace) + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{w}}\xspace,\ensuremath{\mathbf{z}}\xspace) \right) \\
&\le \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{y}}\xspace') + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{y}}\xspace',\ensuremath{\mathbf{z}}\xspace)\\
&\le \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{s}}\xspace_1) + \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1,\ensuremath{\mathbf{y}}\xspace') + \eta \cdot \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{y}}\xspace',\ensuremath{\mathbf{z}}\xspace) \tag{$\ensuremath{\dn_{\mathrm{drop}}}\xspace$ is a quasi-metric}\\
&\le \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{x}}\xspace,\ensuremath{\mathbf{s}}\xspace_1) + \ensuremath{\dn_{\mathrm{drop}}}\xspace(\ensuremath{\mathbf{s}}\xspace_1,\ensuremath{\mathbf{y}}\xspace') + \eta \cdot (\ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{y}}\xspace',\ensuremath{\mathbf{s}}\xspace_2) + \ensuremath{\dn_{\mathrm{move}}}\xspace(\ensuremath{\mathbf{s}}\xspace_2,\ensuremath{\mathbf{z}}\xspace)) \tag{$\ensuremath{\dn_{\mathrm{move}}}\xspace$ is a metric}\\
&\le (a + c) + \eta (b + d).
\end{align*}
This concludes the proof of \Claimref{drop-move-quasi-metric}.
\end{proof}
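As an informal sanity check of \Claimref{drop-move-quasi-metric}, the composed distance can be exercised numerically on a toy model. In the Python sketch below, \texttt{d\_drop} (the number of dropped elements, infinite when the target is not a sub-multiset) and \texttt{d\_move} (the bottleneck matching distance between equal-size multisets on the line) are our own illustrative stand-ins for $\ensuremath{\dn_{\mathrm{drop}}}\xspace$ and $\ensuremath{\dn_{\mathrm{move}}}\xspace$, not the distortion measures defined in the paper; \texttt{d\_drmv} takes the infimum over intermediate states by brute force.

```python
import itertools
import random
from collections import Counter

INF = float("inf")

def d_drop(x, z):
    # Toy quasi-metric: number of elements dropped going from multiset x to z;
    # infinite if z is not a sub-multiset of x (drops are irreversible).
    cx, cz = Counter(x), Counter(z)
    if any(cz[k] > cx[k] for k in cz):
        return INF
    return len(x) - len(z)

def d_move(z, y):
    # Toy metric: bottleneck matching distance between equal-size multisets of
    # reals; on the line, the sorted (monotone) matching achieves the optimum.
    if len(z) != len(y):
        return INF
    return max((abs(a - b) for a, b in zip(sorted(z), sorted(y))), default=0.0)

def d_drmv(x, y, eta):
    # inf over intermediate states z of d_drop(x, z) + eta * d_move(z, y);
    # only sub-multisets of x with |z| = |y| can give a finite value.
    if len(y) > len(x):
        return INF
    best = INF
    for z in itertools.combinations(sorted(x), len(y)):
        best = min(best, d_drop(x, z) + eta * d_move(z, y))
    return best

# Check identity and the triangle inequality d(x,w) <= d(x,y) + d(y,w)
# on random triples with non-increasing sizes.
rng = random.Random(0)
eta = 0.5
for _ in range(200):
    x = tuple(rng.randint(0, 9) for _ in range(4))
    y = tuple(rng.randint(0, 9) for _ in range(3))
    w = tuple(rng.randint(0, 9) for _ in range(2))
    assert d_drmv(x, x, eta) == 0
    assert d_drmv(x, w, eta) <= d_drmv(x, y, eta) + d_drmv(y, w, eta) + 1e-12
```

On this toy model the drop-then-move switch argument goes through as well (drop from $\ensuremath{\mathbf{s}}\xspace_1$ exactly the elements matched to the dropped part of $\ensuremath{\mathbf{y}}\xspace$), which is why the triangle inequality holds on every sampled triple.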
\section{Details Omitted from \Sectionref{defn-fa} -- Usefulness~\cite{BLR} vs.\ Flexible Accuracy}\label{app:Comparison_BLR}
To express accuracy
guarantees of their mechanisms, Blum et al.~\cite{BLR} introduced a notion of
\emph{$(\beta,\gamma,\psi)$-usefulness} that parallels
$(\alpha,\beta,\gamma)$-accuracy, except that $\psi$ measures perturbation
of the function rather than input distortion. Note that this is a reasonable
notion for the function classes they considered (half-space queries, range queries, etc.),
but it is not applicable to queries like the maximum.
Flexible accuracy generalizes the notion of usefulness.
Firstly, mechanisms which are
$(\beta,\gamma,0)$-useful are $(0,\beta,\gamma)$-accurate (in \cite{BLR},
such mechanisms were given for interval queries). But even general
usefulness can be translated to flexible accuracy generically, by redefining
the function to have an extra input parameter that specifies perturbation.
Further, the specific $(\beta,\gamma,\psi)$-useful DP mechanism of
\cite{BLR} for half-space counting queries -- with data points on a unit
sphere, where the perturbation of the function corresponds to rotating the
half-space by $\psi$ radians -- is $(\psi,\beta,\gamma)$-accurate for the
same functions, w.r.t.\ the distortion \ensuremath{\dn_{\mathrm{move}}}\xspace. This is because the rotation
of the half-space can be modeled as moving all the points on the unit sphere
by a distance of at most $\psi$.
\section{Details Omitted from \Sectionref{hist-priv-accu-proof} -- Shifted-Truncated Laplace Mechanism}\label{app:histogram}
\begin{claim*}[Restating \Claimref{hist-epdel-c2}]
$\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_0\cup S_2]\leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S_0\cup S_2]$, provided $n \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)$.
\end{claim*}
\begin{proof}
First we show that for $\ensuremath{\mathbf{s}}\xspace\in S_0\cup S_2$, we have, $\dnoise{s_{i^*} - x_{i^*}'}\leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\lnoise{s_{i^*} - x_{i^*}}$, provided $n \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)$, and then we show how this implies the result.
For $\ensuremath{\mathbf{s}}\xspace\in S_0$, $\dnoise{s_{i^*} - x_{i^*}'} = 0$ so the inequality trivially holds.
For $\ensuremath{\mathbf{s}}\xspace\in S_2$, both $\dnoise{s_{i^*} - x_{i^*}'} > 0$ and $\lnoise{s_{i^*} - x_{i^*}} > 0$; hence, we will be done if we show that
$\frac{\dnoise{s_{i^*} - x_{i^*}'}}{\lnoise{s_{i^*} - x_{i^*}}} \leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}$. Note that we are given the following inequality:
\begin{align*}
n &\ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right),
\end{align*}
which can be rewritten as follows (as we show in \Claimref{equiv-ineq_claim-hist} after this proof):
\begin{align}
\ln\left(\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace (n+1)}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}}}\right) &\leq \ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2}). \label{eq:n-bound}
\end{align}
By substituting $q=\ensuremath{\tau}\xspace (n+1)$ and $q'=\ensuremath{\tau}\xspace n$, \eqref{eq:n-bound} is equivalent to
\[\frac{1}{\ensuremath{\epsilon}\xspace}\ln\left(\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q'}{2}}}\right) + (1 - \frac{\ensuremath{\tau}\xspace}{2}) \leq 1 + \nu.\]
This, using the triangle inequality, implies that
\[\frac{1}{\ensuremath{\epsilon}\xspace}\ln\left(\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q'}{2}}}\right) + \left|s_{i^*} - x_{i^*} + \frac{q}{2}\right| - \left|s_{i^*} - x_{i^*} + \frac{q}{2} + (1 - \frac{\ensuremath{\tau}\xspace}{2})\right| \leq 1 + \nu.\]
Putting $q'=q-\ensuremath{\tau}\xspace$ and $x_{i^*}'=x_{i^*}-1$, we get
\[\frac{1}{\ensuremath{\epsilon}\xspace}\ln\left(\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{q'}{2}}}\right) + \left|s_{i^*} - x_{i^*} + \frac{q}{2}\right| - \left|s_{i^*} - x_{i^*}' + \frac{q'}{2}\right| \leq 1 + \nu.\]
Multiplying both sides by $\ensuremath{\epsilon}\xspace$ and taking exponents, this is equivalent to showing
\[\frac{(1 - e^{-\ensuremath{\epsilon}\xspace\frac{q}{2}})}{(1 - e^{-\ensuremath{\epsilon}\xspace\frac{q'}{2}})} \frac{e^{-\ensuremath{\epsilon}\xspace|s_{i^*} - x_{i^*}' + \frac{q'}{2}|}}{e^{-\ensuremath{\epsilon}\xspace|s_{i^*} - x_{i^*} + \frac{q}{2}|}} \leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}.\]
By substituting the values of $\lnoise{s_{i^*} - x_{i^*}}$ and $\dnoise{s_{i^*} - x_{i^*}'}$, this can be equivalently written as
\begin{equation}\label{hist-epdel-c2_interim1}
\frac{\dnoise{s_{i^*} - x_{i^*}'}}{\lnoise{s_{i^*} - x_{i^*}}} \leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}.
\end{equation}
\noindent Now we show $\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_0\cup S_2]\leq e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S_0\cup S_2]$. Recall that $\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace}=\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace)$ for any histogram $\ensuremath{\mathbf{x}}\xspace\in\Hspace\ensuremath{\mathcal{G}}\xspace$.
\begin{align*}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_0\cup S_2] &= \int_{S_0\cup S_2} \big[\prod_{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'}} \dnoise{s_i - x_i'}\big]\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace \nonumber\\
&= \int_{S_0\cup S_2} \big[\prod_{\substack{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'} : i\neq i^*}} \dnoise{s_i - x_i'}\big]\dnoise{s_{i^*} - x_{i^*}'}\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace \nonumber\\
&\leq \int_{S_0\cup S_2} \big[\prod_{\substack{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace} : i\neq i^*}} \lnoise{s_i - x_i}\big]e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\lnoise{s_{i^*} - x_{i^*}}\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace \nonumber \tag{Using \eqref{hist-epdel-c2_interim1} and that $x_i=x_i', \forall i\neq i^*$}\\
&= e^{(1 + \nu)\ensuremath{\epsilon}\xspace}\int_{S_0\cup S_2} \big[\prod_{i\in\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace}} \lnoise{s_i - x_i}\big]\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace \nonumber \\
&= e^{(1 + \nu)\ensuremath{\epsilon}\xspace} \Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace)\in S_0\cup S_2]
\end{align*}
This completes the proof of \Claimref{hist-epdel-c2}.
\end{proof}
\begin{claim}\label{clm:equiv-ineq_claim-hist}
\[n \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)\quad \Longleftrightarrow \quad \ln\left(\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace (n+1)}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}}}\right) \leq \ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2}).\]
\end{claim}
\begin{proof}
We will start with the RHS and show that it is equivalent to the LHS.
\begin{align*}
\frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace (n+1)}{2}}}{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}}} &\leq e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} \\
\Longleftrightarrow 1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace (n+1)}{2}} &\leq e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})}e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}} \\
\Longleftrightarrow 1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}}e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}} &\leq e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})}e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}} \\
\Longleftrightarrow e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}}\left(e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}\right) &\leq e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1 \\
\Longleftrightarrow e^{\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}} &\geq \frac{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \\
\Longleftrightarrow e^{\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace n}{2}} &\geq 1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \\
\Longleftrightarrow n &\ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right).
\end{align*}
\end{proof}
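Since every step in the chain above is an equivalence, \Claimref{equiv-ineq_claim-hist} also lends itself to a direct numerical check. The Python sketch below (function names are ours) evaluates both sides of the equivalence on a parameter grid and confirms they agree:

```python
import math

def lhs_holds(n, eps, tau, nu):
    # n >= (2/(eps*tau)) * ln(1 + (1 - e^{-eps*tau/2}) / (e^{eps*(nu + tau/2)} - 1))
    bound = (2.0 / (eps * tau)) * math.log(
        1.0 + (1.0 - math.exp(-eps * tau / 2.0))
            / (math.exp(eps * (nu + tau / 2.0)) - 1.0))
    return n >= bound

def rhs_holds(n, eps, tau, nu):
    # ln((1 - e^{-eps*tau*(n+1)/2}) / (1 - e^{-eps*tau*n/2})) <= eps*(nu + tau/2)
    ratio = ((1.0 - math.exp(-eps * tau * (n + 1) / 2.0))
             / (1.0 - math.exp(-eps * tau * n / 2.0)))
    return math.log(ratio) <= eps * (nu + tau / 2.0)

# The two conditions agree across the whole grid.
for eps in (0.1, 0.5, 1.0, 2.0):
    for tau in (0.1, 0.3, 0.7):
        for nu in (0.05, 0.2, 1.0):
            for n in range(1, 40):
                assert lhs_holds(n, eps, tau, nu) == rhs_holds(n, eps, tau, nu)
```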
\begin{claim*}[Restating \Claimref{hist-epdel-c3}]
$\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_{1}]\leq \frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}$.
\end{claim*}
\begin{proof}
Observe that, for every $\ensuremath{\mathbf{s}}\xspace\in S_1$, we have $-q' \le s_{i^*} - x_{i^*}' < -q' + (1 - \ensuremath{\tau}\xspace)$.
Recall that $\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'}=\ensuremath{\mathrm{support}}\xspace(\ensuremath{\mathbf{x}}\xspace')$ and $|\ensuremath{\mathbf{x}}\xspace'|=n$.
Let $|\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'}|=t$ for some $t\leq n$, and, for simplicity, assume that $\ensuremath{\mathcal{G}}\xspace_{\ensuremath{\mathbf{x}}\xspace'}=\{1,2,\hdots,t\}$.
For $i\in[t]$, define $S_1(i):=\{\hat{s}_i:\exists \ensuremath{\mathbf{s}}\xspace\in S_1 \text{ s.t. } \hat{s}_i=s_i\}$, i.e., the set of values that the $i$-th coordinate (the multiplicity of $i$) takes over the histograms in $S_1$.
\begin{align*}
\Pr[\mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace}(\ensuremath{\mathbf{x}}\xspace')\in S_1] &= \int_{S_1} \big[\prod_{i=1}^t \dnoise{s_i - x_i'}\big]\ensuremath{\,\mathrm{d}}\xspace\ensuremath{\mathbf{s}}\xspace \nonumber\\
& = \int_{S_1(1)}\ldots\int_{S_1(i^*)}\ldots\int_{S_1(t)} \big[\prod_{i=1}^t \dnoise{s_i - x_i'}\big]\ensuremath{\,\mathrm{d}}\xspace s_t\ldots \ensuremath{\,\mathrm{d}}\xspace s_{i^*}\ldots \ensuremath{\,\mathrm{d}}\xspace s_1 \nonumber \\
& = \int_{S_1(i^*)}\dnoise{s_{i^*} - x_{i^*}'} \underbrace{\bigg(\int_{S_1(1)}\ldots \int_{S_1(t)} \big[\prod_{\substack{i=1 : i\neq i^*}}^t \dnoise{s_i - x_i'}\big]\ensuremath{\,\mathrm{d}}\xspace s_t\ldots \ensuremath{\,\mathrm{d}}\xspace s_1\bigg)}_{\leq\ 1} \ensuremath{\,\mathrm{d}}\xspace s_{i^*} \nonumber\\
& \leq \int_{S_1(i^*)}\dnoise{s_{i^*} - x_{i^*}'} \ensuremath{\,\mathrm{d}}\xspace s_{i^*} \nonumber\\
&= \int_{q'}^{q' + (1 - \ensuremath{\tau}\xspace)}\dnoise{z} \ensuremath{\,\mathrm{d}}\xspace z \tag{Since $\forall \ensuremath{\mathbf{s}}\xspace\in S_1$, $(s_{i^*}-x_{i^*}')\in[-q',-q' + (1-\ensuremath{\tau}\xspace))$} \nonumber \\
& = \frac{e^{(1-\ensuremath{\tau}\xspace)\ensuremath{\epsilon}\xspace} - 1}{2(1 - e^{-\ensuremath{\epsilon}\xspace q/2})}e^{-\ensuremath{\epsilon}\xspace \nicefrac{q}{2}} \nonumber\\
& \le \frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\ensuremath{\epsilon}\xspace \nicefrac{q}{2}} - 1)}. \tag{Since $\ensuremath{\tau}\xspace > 0$}
\end{align*}
This proves \Claimref{hist-epdel-c3}.
\end{proof}
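The closed form of the integral and the final bound in the proof above are purely algebraic and easy to verify numerically. The Python sketch below (names are ours) checks the identity $\frac{e^{-\ensuremath{\epsilon}\xspace q/2}}{1-e^{-\ensuremath{\epsilon}\xspace q/2}} = \frac{1}{e^{\ensuremath{\epsilon}\xspace q/2}-1}$ behind the last step, and the inequality for $\ensuremath{\tau}\xspace > 0$:

```python
import math

def integral_value(eps, tau, q):
    # Value of the integral computed in the proof:
    # (e^{(1-tau) eps} - 1) / (2 (1 - e^{-eps q/2})) * e^{-eps q/2}
    return ((math.exp((1.0 - tau) * eps) - 1.0)
            / (2.0 * (1.0 - math.exp(-eps * q / 2.0)))) * math.exp(-eps * q / 2.0)

def final_bound(eps, q):
    # Claimed upper bound: (e^eps - 1) / (2 (e^{eps q/2} - 1))
    return (math.exp(eps) - 1.0) / (2.0 * (math.exp(eps * q / 2.0) - 1.0))

for eps in (0.1, 0.5, 1.0):
    for tau in (0.1, 0.5, 0.9):
        for q in (1.0, 2.0, 5.0):
            v = integral_value(eps, tau, q)
            # identity: e^{-eps q/2} / (1 - e^{-eps q/2}) = 1 / (e^{eps q/2} - 1)
            w = ((math.exp((1.0 - tau) * eps) - 1.0)
                 / (2.0 * (math.exp(eps * q / 2.0) - 1.0)))
            assert abs(v - w) < 1e-12
            # the bound only loosens the numerator, using tau > 0
            assert v <= final_bound(eps, q) + 1e-15
```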
\begin{lem}\label{lem:hist-priv-nu-bigger-0}
For any $\nu,\ensuremath{\epsilon}\xspace>0$ and $\ensuremath{\mathbf{x}}\xspace$ such that $\ensuremath{\epsilon}\xspace\nu>\ln\left(1+\frac{1}{|\ensuremath{\mathbf{x}}\xspace|}\right)$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is
$\left((1 + \nu)\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}, where $q=\tau|\ensuremath{\mathbf{x}}\xspace|$.
\end{lem}
\begin{proof}
We use \Lemmaref{hist-epdel} with the additional restriction that $\nu > 0$, and analyze the effect of this restriction on the bound on $|\ensuremath{\mathbf{x}}\xspace|$. We restate the bound on $|\ensuremath{\mathbf{x}}\xspace|$ here for convenience:
\[|\ensuremath{\mathbf{x}}\xspace| \ge \frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)\]
It can be easily checked that for any fixed $\ensuremath{\epsilon}\xspace, \nu > 0$, the RHS is a decreasing function of $\ensuremath{\tau}\xspace$.
Hence, if we set $\ensuremath{\tau}\xspace$ to its minimum value, we get a lower bound on $|\ensuremath{\mathbf{x}}\xspace|$ which is independent of $\tau$.
Since this expression is not defined at $\ensuremath{\tau}\xspace = 0$, we will take its one-sided limit as $\ensuremath{\tau}\xspace \rightarrow 0^+$, i.e.,
\begin{align*}
\lim_{\ensuremath{\tau}\xspace \rightarrow 0^+}\frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right)
\end{align*}
We substitute $l = \frac{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace}{2}$. As $\ensuremath{\tau}\xspace \rightarrow 0^+$, $l \rightarrow 0^+$, and we get
\begin{align*}
\lim_{\ensuremath{\tau}\xspace \rightarrow 0^+}\frac{2}{\ensuremath{\epsilon}\xspace\ensuremath{\tau}\xspace} \ln\left(1 + \frac{1 - e^{-\ensuremath{\epsilon}\xspace\frac{\ensuremath{\tau}\xspace}{2}}}{e^{\ensuremath{\epsilon}\xspace(\nu + \frac{\ensuremath{\tau}\xspace}{2})} - 1} \right) &= \lim_{l \rightarrow 0^+}\frac{1}{l} \ln\left(1 + \frac{1 - e^{-l}}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1} \right)\\
&= \lim_{l \rightarrow 0^+}\frac{1}{l} \ln\left(1 + \frac{1 - e^{-l}}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1} \right)\left(\frac{1 - e^{-l}}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1}\right)\left(\frac{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1}{1 - e^{-l}}\right)\\
&= \lim_{l \rightarrow 0^+}\left(\frac{1}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1}\right)\left(\frac{1 - e^{-l}}{l}\right) \left(\frac{\ln\left(1 + \frac{1 - e^{-l}}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1} \right)}{\frac{1 - e^{-l}}{e^{\ensuremath{\epsilon}\xspace\nu + l} - 1}}\right)\\
&= \frac{1}{e^{\ensuremath{\epsilon}\xspace\nu} - 1}. \tag{$\lim_{x \rightarrow 0^+} \frac{1-e^{-x}}{x} = 1$; $\lim_{x \rightarrow 0^+} \frac{\ln(1+x)}{x} = 1$}
\end{align*}
We have proved that on inputs $\ensuremath{\mathbf{x}}\xspace$ s.t.\ $|\ensuremath{\mathbf{x}}\xspace| > \frac{1}{e^{\ensuremath{\epsilon}\xspace\nu} - 1}$, which is equivalent to the condition that $\ensuremath{\epsilon}\xspace\nu>\ln\left(1+\frac{1}{|\ensuremath{\mathbf{x}}\xspace|}\right)$, \mtrlap{\ensuremath{\tau}\xspace,\ensuremath{\epsilon}\xspace,\ensuremath{\mathcal{G}}\xspace} is
$\left((1 + \nu)\ensuremath{\epsilon}\xspace,\frac{e^{\ensuremath{\epsilon}\xspace} - 1}{2(e^{\nicefrac{\ensuremath{\epsilon}\xspace q}{2}} - 1)}\right)$-DP w.r.t.\ \ensuremath{\sim_{\mathrm{hist}}\xspace}, where $q=\tau|\ensuremath{\mathbf{x}}\xspace|$.
\end{proof}
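The limit computation above can be corroborated numerically: the bound is decreasing in $\ensuremath{\tau}\xspace$ and approaches $\frac{1}{e^{\ensuremath{\epsilon}\xspace\nu}-1}$ from below as $\ensuremath{\tau}\xspace \rightarrow 0^+$. A Python sketch (function names are ours):

```python
import math

def n_bound(eps, nu, tau):
    # Lower bound on |x| from the lemma, as a function of tau.
    return (2.0 / (eps * tau)) * math.log(
        1.0 + (1.0 - math.exp(-eps * tau / 2.0))
            / (math.exp(eps * (nu + tau / 2.0)) - 1.0))

def limit_value(eps, nu):
    # Claimed limit as tau -> 0+.
    return 1.0 / (math.exp(eps * nu) - 1.0)

for eps in (0.5, 1.0, 2.0):
    for nu in (0.1, 0.5, 1.0):
        # The bound increases as tau decreases ...
        taus = [0.9, 0.5, 0.1, 0.01]
        vals = [n_bound(eps, nu, t) for t in taus]
        assert all(a <= b for a, b in zip(vals, vals[1:]))
        # ... and converges to the claimed limit.
        assert abs(n_bound(eps, nu, 1e-6) - limit_value(eps, nu)) < 1e-3
```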
\section{Details Omitted from \Sectionref{lossy-wass}}
\subsection{Lossy $\infty$-Wasserstein Distance}\label{app:wasserstein}
\begin{lem}[\Lemmaref{wass-triangle} at $\gamma_1=\gamma_2=0$]\label{lem:Winf_triangle}
For distributions $P$, $Q$, and $R$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$, we have
\begin{align*}
\Winf{}(P, R) &\leq \Winf{}(P, Q) + \Winf{}(Q, R).
\end{align*}
\end{lem}
\begin{proof}
Let $\phi_2\in\Phi(P, Q)$ and $\phi_3\in\Phi(Q, R)$ denote the optimal couplings for $\Winf{}(P, Q)$ and $\Winf{}(Q, R)$, respectively, i.e., $\Winf{}(P, Q)=\sup_{\substack{(x,y):\\\phi_2(x,y)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,y)$ and $\Winf{}(Q,R)=\sup_{\substack{(y,z):\\\phi_3(y,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(y,z)$.
It follows from the Gluing Lemma \cite{Villani_OptimalTransport08} that we can find a coupling $\phi'$ over $\Omega\times \Omega\times \Omega$ such that the projection of $\phi'$ onto its first two coordinates is equal to $\phi_2$ and its last two coordinates is equal to $\phi_3$.
Let $\phi_1$ denote the projection of $\phi'$ onto its first and the third coordinates. Note that $\phi_1\in\Phi(P, R)$, but it may not be an optimal coupling for $\Winf{}(P, R)$.
Now the triangle inequality follows from the following set of inequalities:
\begin{align*}
\Winf{}(P, R) &= \inf_{\phi\in\Phi(P,R)} \sup_{\substack{(x,z):\\\phi(x,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,z)
\quad\leq \sup_{\substack{(x,z):\\\phi_1(x,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,z)
\quad= \sup_{\substack{(x,y,z):\\\phi'(x,y,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,z) \\
&\stackrel{\text{(a)}}{\leq} \sup_{\substack{(x,y,z):\\\phi'(x,y,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,y) + \ensuremath{\mathfrak{d}}\xspace(y,z) \\
&= \sup_{\substack{(x,y,z):\\\phi'(x,y,z)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,y) \quad+ \sup_{\substack{(x,y,z):\\\phi'(x,y,z)\neq 0}} \ensuremath{\mathfrak{d}}\xspace(y,z) \\
&= \sup_{\substack{(x,y):\\\phi_2(x,y)\neq 0}}\ensuremath{\mathfrak{d}}\xspace(x,y) \quad+ \sup_{\substack{(y,z):\\\phi_3(y,z)\neq 0}} \ensuremath{\mathfrak{d}}\xspace(y,z) \\
&= \Winf{}(P, Q) + \Winf{}(Q, R),
\end{align*}
where (a) follows from the fact that $\ensuremath{\mathfrak{d}}\xspace$ is a metric, and so it satisfies the triangle inequality.
\end{proof}
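For intuition, the triangle inequality for $\Winf{}$ can be observed concretely for uniform empirical distributions on the real line, where the monotone (sorted) coupling is optimal. The Python sketch below is our own illustration of this special case, not a general-purpose implementation:

```python
import random

def w_inf_line(xs, ys):
    # infinity-Wasserstein distance between two uniform empirical distributions
    # on R with the same number of atoms; on the line, the sorted (monotone)
    # matching achieves the optimal coupling.
    assert len(xs) == len(ys)
    return max(abs(a - b) for a, b in zip(sorted(xs), sorted(ys)))

rng = random.Random(1)
for _ in range(500):
    p = [rng.uniform(-1, 1) for _ in range(6)]
    q = [rng.uniform(-1, 1) for _ in range(6)]
    r = [rng.uniform(-1, 1) for _ in range(6)]
    # metric axioms on this family: identity, symmetry, triangle inequality
    assert w_inf_line(p, p) == 0
    assert w_inf_line(p, q) == w_inf_line(q, p)
    assert w_inf_line(p, r) <= w_inf_line(p, q) + w_inf_line(q, r) + 1e-12
```

Here the triangle inequality follows coordinate-wise from the sorted matching, mirroring step (a) of the proof above.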
\begin{claim}\label{clm:TV_dist_Q-Qprime}
$\Delta(Q, Q')\leq\gamma-\gamma_1$.
\end{claim}
\begin{proof}
The claim follows from the following set of inequalities.
\begin{align}
\Delta(Q, Q') &\le \Delta(Q, Q_{opt}) + \Delta(Q_{opt}, Q') \notag \\
&\le \gamma - \gamma_{opt} + \frac{1}{2}\int_{\Omega}\left|\Prob{Q_{opt}}{y} - \Prob{Q'}{y}\right|\ensuremath{\,\mathrm{d}}\xspace y \tag{Since $\Delta(Q, Q_{opt})\leq\gamma - \gamma_{opt}$} \notag \\
&= \frac{1}{2}\int_{\Omega} \left|\int_{\Omega}\phi_{opt}(x, y)\ensuremath{\,\mathrm{d}}\xspace x - \int_{\Omega}\phi'(x, y)\ensuremath{\,\mathrm{d}}\xspace x\right| \ensuremath{\,\mathrm{d}}\xspace y + (\gamma - \gamma_{opt}) \notag \\
&\le \frac{1}{2}\int_{\Omega} \int_{\Omega} \left|\phi_{opt}(x, y) - \phi'(x, y)\right|\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + (\gamma - \gamma_{opt}) \notag
\end{align}
Define $\Omega_1:=\{x\in\Omega:\Prob{P_{opt}}{x}>0\}$ and $\overline{\Omega}_1:=\Omega\setminus\Omega_1$.
Since $\Prob{P_{opt}}{x} = 0$ for all $x\in\overline{\Omega}_1$ and $\Prob{{P}_{opt}}$ is the first marginal of $\phi_{opt}$, we have that $\phi_{opt}(x,y)=0$ for all $x\in\overline{\Omega}_1$ and $y\in\Omega$.
Now, continuing from above, we get
\begin{align}
\Delta(Q, Q') &\leq \frac{1}{2}\int_{\Omega} \int_{x\in\Omega_1} \left|\phi_{opt}(x, y) - \phi'(x, y)\right|\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + \frac{1}{2}\int_{\Omega} \int_{x\in\overline{\Omega}_1} \left|\phi_{opt}(x, y) - \phi'(x, y)\right|\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + (\gamma - \gamma_{opt}) \notag \\
&= \frac{1}{2}\int_{\Omega}\int_{\Omega_1}\phi_{opt}(x, y)\left|1 - \frac{\Prob{P'}{x}}{\Prob{P_{opt}}{x}}\right|\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + \frac{1}{2}\int_{\Omega}\int_{\overline{\Omega}_1}\left|\phi'(x, y)\right|\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + (\gamma - \gamma_{opt}) \notag \\
&= \frac{1}{2}\int_{\Omega_1}\left|1 - \frac{\Prob{P'}{x}}{\Prob{P_{opt}}{x}}\right|\ensuremath{\,\mathrm{d}}\xspace x \int_{\Omega}\phi_{opt}(x, y)\ensuremath{\,\mathrm{d}}\xspace y + \frac{1}{2}\int_{\Omega}\int_{\overline{\Omega}_1}\Prob{P'}{x}\delta(x - y)\ensuremath{\,\mathrm{d}}\xspace x\ensuremath{\,\mathrm{d}}\xspace y + (\gamma - \gamma_{opt}) \tag{Since $\phi'(x, y) = \Prob{P'}{x}\delta(x - y)$ for $x\in\overline{\Omega}_1$} \\
&= \frac{1}{2}\int_{\Omega_1}\left|\Prob{P_{opt}}{x} - \Prob{P'}{x}\right|\ensuremath{\,\mathrm{d}}\xspace x + \frac{1}{2}\int_{\overline{\Omega}_1}\Prob{P'}{x}\ensuremath{\,\mathrm{d}}\xspace x + (\gamma - \gamma_{opt}) \tag{Since $\int_{\Omega}\phi_{opt}(x, y)\ensuremath{\,\mathrm{d}}\xspace y = \Prob{P_{opt}}{x}$ and $\int_{\Omega}\delta(x-y)\ensuremath{\,\mathrm{d}}\xspace y = 1$ for any $x$} \\
&= \frac{1}{2}\int_{\Omega_1}\left|\Prob{P_{opt}}{x} - \Prob{P'}{x}\right|\ensuremath{\,\mathrm{d}}\xspace x + \frac{1}{2}\int_{\overline{\Omega}_1}\left|\Prob{P_{opt}}{x} - \Prob{P'}{x}\right|\ensuremath{\,\mathrm{d}}\xspace x + (\gamma - \gamma_{opt}) \tag{Since $\Prob{P_{opt}}{x}=0$ whenever $x\in\overline{\Omega}_1$} \notag \\
&= \frac{1}{2}\int_{\Omega}\left|\Prob{P_{opt}}{x} - \Prob{P'}{x}\right|\ensuremath{\,\mathrm{d}}\xspace x + (\gamma - \gamma_{opt}) \notag \\
&\stackrel{\text{(a)}}{=} \frac{1}{2}\int_{\Omega}\left|\Big(1-\frac{\gamma_1}{\gamma_{opt}}\Big)R_{opt}(x)\right|\ensuremath{\,\mathrm{d}}\xspace x + (\gamma - \gamma_{opt}) \notag \\
&= \frac{(\gamma_{opt}-\gamma_1)}{2\gamma_{opt}} \int_{\Omega}\left|R_{opt}(x)\right|\ensuremath{\,\mathrm{d}}\xspace x + (\gamma - \gamma_{opt}) \notag \\
&= (\gamma_{opt} - \gamma_1) + (\gamma - \gamma_{opt}) \tag{Since $\int_{\Omega} |R_{opt}(\omega)|\ensuremath{\,\mathrm{d}}\xspace\omega= 2\gamma_{opt}$} \\
&= \gamma - \gamma_1. \notag
\end{align}
Here (a) follows because for every $x\in\Omega$, we have $\Prob{P_{opt}}{x} - \Prob{P'}{x} = R_{opt}(x) + \Prob{P}{x} - \Prob{P'}{x} = R_{opt}(x) - R'(x) = R_{opt}(x) - \frac{\gamma_1}{\gamma_{opt}}R_{opt}(x)$.
\end{proof}
\begin{claim*}[Restating \Claimref{wass_alternate}]
For distributions $P$ and $Q$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$ and $\gamma\in [0,1]$, we have
\begin{align*}
\Winf{\gamma}(P,Q) \quad= \displaystyle \inf_{\substack{\hat{P},\hat{Q}:\\ \Delta(P,\hat{P}) + \Delta(Q,\hat{Q})\leq\gamma}} \Winf{} (\hat{P},\hat{Q}).
\end{align*}
\end{claim*}
\begin{proof}
This claim simply follows by viewing the infimum set in the definition of the $\gamma$-lossy $\infty$-Wasserstein distance differently.
\begin{align}
\Winf{\gamma}(P,Q) \quad &\stackrel{\text{(a)}}{=} \displaystyle \inf_{\phi\in\Phi^{\gamma}(P,Q)} \max_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x, y) \notag \\
&\stackrel{\text{(b)}}{=} \displaystyle \inf_{\substack{\hat{P},\hat{Q}:\\ \Delta(P,\hat{P}) + \Delta(Q,\hat{Q})\leq\gamma}} \displaystyle \inf_{\phi\in\Phi^0(\hat{P},\hat{Q})} \max_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x, y) \notag \\
&\stackrel{\text{(c)}}{=} \displaystyle \inf_{\substack{\hat{P},\hat{Q}:\\ \Delta(P,\hat{P}) + \Delta(Q,\hat{Q})\leq\gamma}} \Winf{} (\hat{P},\hat{Q}). \notag
\end{align}
where (a) follows from the definition of the $\gamma$-Lossy $\infty$-Wasserstein distance;
(b) holds by viewing the infimum set differently: a $\gamma$-lossy coupling of $P$ and $Q$ is precisely a lossless coupling of some pair $(\hat{P},\hat{Q})$ with $\Delta(P,\hat{P})+\Delta(Q,\hat{Q})\leq\gamma$;
and in (c) we substituted the definition of $\Winf{}$.
\end{proof}
\subsection{Average Version of Lossy Wasserstein Distance}\label{app:average_lossy-wass}
Our definition of \Winf\theta uses a worst case notion of distance.
Many of the results using this notion have analogues using an average
case version. We formally present this definition below, as it may be of interest
elsewhere.
\begin{defn}[$\theta$-Lossy Average Wasserstein Distance]\label{def:gamma-wass-dist}
Let $P$ and $Q$ be two probability distributions over a metric space
$(\Omega,\ensuremath{\mathfrak{d}}\xspace)$, and let $\theta\in[0,1]$.
The \emph{$\theta$-lossy average Wasserstein distance} between $P$ and $Q$ is defined as:
\begin{equation}\label{eq:gamma-wass-dist}
\W{\theta}(P, Q) = \inf_{\phi\in\Phi^{\theta}(P, Q)}\E_{(x,y)\leftarrow\phi}[\ensuremath{\mathfrak{d}}\xspace(x,y)].
\end{equation}
\end{defn}
The following lemma relates lossy average Wasserstein and lossy $\infty$-Wasserstein
distances.
\begin{lem}\label{lem:W-Winf}
For any two distributions $P, Q$, and $0 \le \beta' < \beta \le 1$,
\[ \W{\beta}(P, Q) \le \Winf{\beta}(P, Q) \le \frac{\W{\beta'}(P, Q)}{(\beta-\beta')}. \]
\end{lem}
\begin{proof}
Clearly from the definitions, $\W{\beta}(P, Q) \le \Winf{\beta}(P, Q)$.
Suppose $\W{\beta'}(P, Q)=\gamma$ and $\phi\in\Phi^{\beta'}(P, Q)$ is an optimal
coupling that realizes this. Then, in $\phi$, by Markov's inequality the total mass that is
transported more than a distance $\gamma'$ is at most $\gamma/\gamma'$, and
the total mass that is lost is at most $\beta'$. By choosing to simply not
transport this mass at all, one loses at most $\beta'+\gamma/\gamma'$ mass, but no
mass is transported more than a distance $\gamma'$. Choosing
$\gamma'=\gamma/(\beta-\beta')$ this upper bound on loss is $\beta$, and
hence this modified coupling shows that $\Winf{\beta}(P, Q) \le \gamma'$.
\end{proof}
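To illustrate \Lemmaref{W-Winf} on a concrete two-point example, consider the real line with $\ensuremath{\mathfrak{d}}\xspace(x,y)=|x-y|$, let $P=F_0$ be the point distribution at $0$, and let $Q=(1-c)F_0+c\,F_D$ for some $c\in(0,1)$ and $D>0$; take $\beta'=0$. Every lossless coupling must transport mass $c$ over distance $D$, so $\W{0}(P,Q)=cD$. For the lossy $\infty$-version: if $\beta\geq c$, the coupling $\phi=F_{(0,0)}$ has $\Delta(\phi_1,P)+\Delta(\phi_2,Q)=0+c\leq\beta$, so $\Winf{\beta}(P,Q)=0$; if $\beta<c$, at least $c-\beta$ mass must still reach $D$, so $\Winf{\beta}(P,Q)=D$. Both regimes are consistent with the bound $\Winf{\beta}(P,Q)\leq\W{0}(P,Q)/\beta=cD/\beta$ from the lemma: for $\beta<c$ the bound reads $cD/\beta>D$, while for $\beta\geq c$ it is trivially satisfied.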
\subsection{$\gamma$-Lossy $\infty$-Wasserstein Distance Generalizes Existing Notions}\label{app:wass-generalizes}
\begin{lem}\label{lem:generalize-pac}
Let $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$ be a metric space. Let $F_f$ be a point distribution on some $f\in\Omega$ and $G$ be a distribution over $\Omega$. Then for any $\gamma\in[0,1]$ and $\beta\geq0$, we have
\[\Winf{\gamma}(F_f,G)\leq\beta\quad \Longleftrightarrow \quad \Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g) > \beta] \le \gamma.\]
\end{lem}
\begin{proof}
We show both the directions below.
\begin{itemize}
\item {\bf Only if part ($\Rightarrow$):} Suppose $\Winf{\gamma}(F_f,G)\leq\beta$. It follows from \Lemmaref{wass-marginal-loss} that there exists a distribution $G'$ such that $\Delta(G',G)\leq\gamma$ and $\Winf{}(F_f,G')\leq\beta$. Since $F_f$ is a point distribution, all couplings $\phi\in\Phi^0(F_f,G')$ will be such that $\phi_1=F_f$ and $\phi_2=G'$, which implies that $\Winf{}(F_f,G')=\sup_{g'\leftarrow G'}\ensuremath{\mathfrak{d}}\xspace(f,g')\leq\beta$.
Now we show that, together with $\Delta(G',G)\leq\gamma$, this implies $\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g)>\beta]\leq\gamma$:
\begin{align*}
\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g)>\beta] &= \underbrace{\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g)>\beta\ |\ g\in\ensuremath{\mathrm{support}}\xspace(G')]}_{=\ 0}\Pr_{g\leftarrow G}[g\in\ensuremath{\mathrm{support}}\xspace(G')] \notag \\
&\hspace{2cm} + \underbrace{\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g)>\beta\ |\ g\notin\ensuremath{\mathrm{support}}\xspace(G')]}_{\leq\ 1}\Pr_{g\leftarrow G}[g\notin\ensuremath{\mathrm{support}}\xspace(G')] \notag \\
&\leq \Pr_{g\leftarrow G}[g\notin\ensuremath{\mathrm{support}}\xspace(G')] \notag \\
&= \int_{g\in\Omega:\ p_{G}(g)>0\ \&\ p_{G'}(g)=0}p_G(g)dg \notag \\
&= \int_{g\in\Omega:\ p_{G}(g)>0\ \&\ p_{G'}(g)=0}(p_G(g)-p_{G'}(g))dg \notag \\
&\stackrel{\text{(a)}}{\leq} \int_{g\in\Omega:\ p_G(g)>p_{G'}(g)}(p_G(g)-p_{G'}(g))dg \notag \\
&\stackrel{\text{(b)}}{=} \Delta(G,G') \leq \gamma,
\end{align*}
where (a) follows because $\{g\in\Omega:p_{G}(g)>0\ \&\ p_{G'}(g)=0\}\subseteq\{g\in\Omega:p_G(g)>p_{G'}(g)\}$, and (b) follows from the reasoning given below.
Define $\Omega_G^+ := \{g\in\Omega:p_G(g)>p_{G'}(g)\}$ and $\Omega_G^- := \{g\in\Omega:p_G(g)<p_{G'}(g)\}$.
Since $\int_{g\in\Omega}p_G(g)dg = \int_{g\in\Omega}p_{G'}(g)dg$, it follows that $\int_{g\in\Omega_G^+}(p_G(g)-p_{G'}(g))dg = \int_{g\in\Omega_G^-}(p_{G'}(g)-p_G(g))dg$.
Substituting this in the definition of $\Delta(G,G')$, we get $\Delta(G,G')=\int_{g\in\Omega_G^+}(p_G(g)-p_{G'}(g))dg$.
\item {\bf If part ($\Leftarrow$):} Suppose $\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g) > \beta] \le \gamma$. Let $\Omega'=\{g\in\Omega:\ensuremath{\mathfrak{d}}\xspace(f,g)\leq\beta\}$ and $G'$ be a distribution supported on $\Omega'$ such that $p_{G'}(g)=\frac{1}{\eta}p_G(g)$ when $g\in\Omega'$, otherwise $p_{G'}(g)=0$. Here $\eta=\int_{g\in\Omega'}p_G(g)dg\geq(1-\gamma)$ is the normalizing constant.
First we show that $\Delta(G,G')\leq\gamma$.
\begin{align}
\Delta(G,G') &= \frac{1}{2}\int_{g\in\Omega}|p_{G'}(g)-p_G(g)|dg \notag \\
&= \frac{1}{2}\int_{g\in\Omega'}|p_{G'}(g)-p_G(g)|dg + \frac{1}{2}\int_{g\in\Omega\setminus\Omega'}p_G(g)dg \tag{Since $p_{G'}(g)=0$ when $g\in\Omega\setminus\Omega'$} \\
&= \frac{1}{2}\int_{g\in\Omega'}p_G(g)(\frac{1}{\eta}-1)dg + \frac{1}{2}\int_{g\in\Omega\setminus\Omega'}p_G(g)dg \notag \\
&= \frac{1}{2}(\frac{1}{\eta}-1)\eta + \frac{1}{2}(1-\eta) \tag{Since $\int_{g\in\Omega'}p_G(g)dg=\eta$} \\
&= 1-\eta \leq \gamma. \notag
\end{align}
Now define a joint distribution $\phi$, whose first marginal is the point distribution $F_f$ and the second marginal is $G'$, which implies that $\sup_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x,y)=\sup_{g'\in\Omega'}\ensuremath{\mathfrak{d}}\xspace(f,g')$. It follows from the argument above that $\phi\in\Phi^{\gamma}(F_f,G)$, which implies that $\Winf{\gamma}(F_f,G)\leq\sup_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x,y)=\sup_{g'\in\Omega'}\ensuremath{\mathfrak{d}}\xspace(f,g')\leq\beta$, where the last inequality is by definition of $\Omega'$. Hence, we get $\Winf{\gamma}(F_f,G)\leq\beta$.
\end{itemize}
This completes the proof of \Lemmaref{generalize-pac}.
\end{proof}
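For a finite distribution $G$, \Lemmaref{generalize-pac} yields a direct recipe for computing $\Winf{\gamma}(F_f,G)$: it is the smallest $\beta$ with $\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g)>\beta]\leq\gamma$, i.e., a $(1-\gamma)$-quantile of the distance from $f$. The following minimal Python sketch implements this recipe (the point $f$, the atoms, and the weights are illustrative):

```python
def lossy_winf_point(f, atoms, probs, gamma, dist=lambda x, y: abs(x - y)):
    """gamma-lossy infinity-Wasserstein distance between the point
    distribution F_f and a finite distribution G = (atoms, probs).

    By the lemma above, this equals the smallest beta with
    Pr_{g~G}[d(f, g) > beta] <= gamma, i.e. a (1-gamma)-quantile of d(f, .).
    """
    # Sort the atoms of G by their distance from f.
    pairs = sorted((dist(f, g), p) for g, p in zip(atoms, probs))
    acc = 0.0
    for d, p in pairs:
        acc += p
        if acc >= 1.0 - gamma - 1e-12:   # enough mass within radius d
            return d
    return pairs[-1][0]

# G puts mass 0.7 at 1.0, 0.2 at 2.0, 0.1 at 10.0; f = 0.
atoms, probs = [1.0, 2.0, 10.0], [0.7, 0.2, 0.1]
print(lossy_winf_point(0.0, atoms, probs, gamma=0.0))   # 10.0: no loss allowed
print(lossy_winf_point(0.0, atoms, probs, gamma=0.1))   # 2.0: drop the far atom
print(lossy_winf_point(0.0, atoms, probs, gamma=0.3))   # 1.0
```

Increasing $\gamma$ discards the farthest atoms first, which is exactly the construction of $G'$ in the if part of the proof.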
\begin{lem}\label{lem:generalizing-tv}
For any two distributions $P,Q$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$ and $\gamma\in[0,1]$, we have
\[\Winf{\gamma}(P,Q)=0 \quad \Longleftrightarrow \quad \Delta(P,Q)\leq\gamma.\]
\end{lem}
\begin{proof} We show both the directions below.
\begin{itemize}
\item {\bf Only if part ($\Rightarrow$):} Suppose $\Winf{\gamma}(P,Q)=0$. This implies that there exists a joint distribution $\phi\in\Phi^{\gamma}(P,Q)$ such that $\sup_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x,y)=0$. Since $\ensuremath{\mathfrak{d}}\xspace$ is a metric, this implies that for all $(x,y)\leftarrow\phi$, we have $x=y$. Hence, the first marginal $\phi_1$ and the second marginal $\phi_2$ of $\phi$ are equal, which implies that $\Delta(\phi_1,P)+\Delta(\phi_2,Q)\leq\gamma$. Then, by triangle inequality and that $\phi_1=\phi_2$, we get $\Delta(P,Q)\leq\gamma$.
\item {\bf If part ($\Leftarrow$):} Suppose $\Delta(P,Q)\leq\gamma$. Define $\phi$ to be the diagonal coupling of $P$ with itself, i.e., the distribution of $(x,x)$ with $x\leftarrow P$. Since $\phi_1=\phi_2=P$, we have $\Delta(\phi_1,P)+\Delta(\phi_2,Q)=\Delta(P,Q)\leq\gamma$, and hence $\phi\in\Phi^{\gamma}(P,Q)$. This, by definition, implies $\Winf{\gamma}(P,Q)\leq\sup_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x,y)$. Since $\phi$ is supported on the diagonal, we have $\ensuremath{\mathfrak{d}}\xspace(x,y)=0$ for every $(x,y)\leftarrow\phi$. This, by the non-negativity of $\Winf{\gamma}(P,Q)$, gives $\Winf{\gamma}(P,Q)=0$.
\end{itemize}
\end{proof}
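\Lemmaref{generalizing-tv} is easy to check numerically for finite distributions: the loss budget $\gamma$ at which $\Winf{\gamma}(P,Q)$ first drops to zero is exactly the total variation distance. A minimal Python sketch of the computation (the supports and weights are illustrative):

```python
def total_variation(p, q):
    """Total variation distance between two finite distributions, given as
    dicts mapping atoms to probabilities: TV(P,Q) = (1/2) sum_x |P(x)-Q(x)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

P = {0.0: 0.5, 1.0: 0.5}
Q = {0.0: 0.3, 1.0: 0.5, 5.0: 0.2}
print(total_variation(P, Q))  # 0.2
# By the lemma above, W_inf^gamma(P, Q) = 0 for every gamma >= 0.2: the
# diagonal coupling phi = law of (x, x), x <- P, has both marginals P, so
# Delta(phi_1, P) + Delta(phi_2, Q) = TV(P, Q) <= gamma.
```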
\section{Lossy Wasserstein Distance}\label{sec:lossy-wass}
Central to the formalization of all the results in this work is a new notion of distance between
distributions over a metric space, that we call \emph{lossy Wasserstein
distance}. Lossy Wasserstein distance generalizes the notion of Wasserstein
distance~\cite{Villani_OptimalTransport08}, or Earth Mover's Distance,
which is the minimum cost of transporting probability mass (``earth'') of one
distribution to make it match the other. Loss refers to the fact that some
of the mass is allowed to be lost during this transportation. We shall
use the ``infinity
norm'' version, where the cost paid is the maximum distance any mass is
transported.
Formally, consider a metric space with ground set
$\Omega$, and metric \ensuremath{\mathfrak{d}}\xspace, where Wasserstein distance can be defined. For
example, one may consider $\Omega=\ensuremath{{\mathbb R}}\xspace^n$ and the metric \ensuremath{\mathfrak{d}}\xspace being an
$\ell_p$-metric.
For $\gamma\in[0,1]$, and distributions $P,Q$ over the metric space
$(\Omega,\ensuremath{\mathfrak{d}}\xspace)$,\footnote{We will use upper case letters ($P,Q,X,Y$, etc.) to denote random variables (r.v.), as well as the probability distributions associated with them. Sometimes, we will also denote the probability distribution associated with a r.v.\ $X$ by $\prob{X}$.} we define $\Phi^{\gamma}(P,Q)$, the set of \emph{$\gamma$-lossy couplings of $P$
and $Q$}, as consisting of joint distributions $\phi$
over $\Omega^2$ with marginals $\phi_1$ and $\phi_2$ such that
$\Delta(\phi_1,P) + \Delta(\phi_2,Q)\leq\gamma$, where
$\Delta(P,Q) := \frac{1}{2}\int_{\Omega}|\Prob{P}\omega-\Prob{Q}\omega|\ensuremath{\,\mathrm{d}}\xspace \omega$ denotes the total variation distance between $P$ and $Q$.
Note that $\Phi^0(P,Q)$ consists of joint distributions with marginals exactly equal to $P$ and $Q$.
\begin{defn}[$\gamma$-Lossy $\infty$-Wasserstein Distance]\label{def:infty-delta-wass-dist}
Let $P$ and $Q$ be two distributions over a metric space
$(\Omega,\ensuremath{\mathfrak{d}}\xspace)$.
For $\gamma\in[0,1]$, the $\gamma$-lossy $\infty$-Wasserstein distance between $P$ and $Q$ is defined as:
\begin{equation}\label{eq:infty-delta-wass-dist}
\Winf{\gamma}(P,Q) = \inf_{\phi\in\Phi^{\gamma}(P,Q)} \sup_{(x,y)\leftarrow\phi}\ensuremath{\mathfrak{d}}\xspace(x,y).
\end{equation}
\end{defn}
For simplicity, we write $\ensuremath{W^\infty}\xspace(P,Q)$ to denote $\Winf0(P,Q)$.
We remark that while our definition of \Winf\gamma uses a worst case notion of distance
(as signified by $\infty$), there is an analogous average case definition,
that may be of independent interest. We define this in
\Appendixref{average_lossy-wass}.
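For distributions with finite support, $\Winf{\gamma}$ can be computed exactly. One can check, by an argument in the spirit of \Claimref{wass_alternate} (trim at most $\gamma$ mass from a lossy coupling, or complete a partial plan along the diagonal), that $\Winf{\gamma}(P,Q)\leq\beta$ holds iff some partial transport plan $\psi\geq0$ with $\psi_1\leq P$, $\psi_2\leq Q$, and total mass at least $1-\gamma$ moves no mass farther than $\beta$. For rational weights this is a max-flow feasibility test, and $\Winf{\gamma}(P,Q)$ is the smallest $\beta$ (among $0$ and the pairwise distances) that passes it. The Python sketch below hand-rolls the max-flow step; the example data are illustrative:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dense integer capacity matrix."""
    n, flow = len(cap), 0
    cap = [row[:] for row in cap]          # work on a residual copy
    while True:
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] < 0:     # BFS for an augmenting path
            u = queue.popleft()
            for v in range(n):
                if parent[v] < 0 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] < 0:
            return flow
        aug, v = float("inf"), t           # bottleneck capacity of the path
        while v != s:
            aug = min(aug, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                      # update residual capacities
            u = parent[v]
            cap[u][v] -= aug
            cap[v][u] += aug
            v = u
        flow += aug

def lossy_winf(P, Q, gamma, dist=lambda x, y: abs(x - y), denom=1000):
    """gamma-lossy infinity-Wasserstein distance between finite distributions
    P, Q (dicts atom -> probability), via the partial-transport criterion:
    W_inf^gamma(P, Q) <= beta  iff  at least (1 - gamma) mass can be matched
    across pairs at distance <= beta.  Weights are scaled to integers."""
    xs, ys = list(P), list(Q)
    need = round((1.0 - gamma) * denom)    # integer mass that must be matched
    n = len(xs) + len(ys) + 2
    s, t = 0, n - 1
    for beta in sorted({0.0} | {dist(x, y) for x in xs for y in ys}):
        cap = [[0] * n for _ in range(n)]
        for i, x in enumerate(xs):
            cap[s][1 + i] = round(P[x] * denom)
        for j, y in enumerate(ys):
            cap[1 + len(xs) + j][t] = round(Q[y] * denom)
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                if dist(x, y) <= beta:
                    cap[1 + i][1 + len(xs) + j] = denom
        if max_flow(cap, s, t) >= need:
            return beta
    return max(dist(x, y) for x in xs for y in ys)

P = {0.0: 1.0}
Q = {1.0: 0.8, 10.0: 0.2}
print(lossy_winf(P, Q, gamma=0.0))   # 10.0: all mass must be moved
print(lossy_winf(P, Q, gamma=0.2))   # 1.0: the far atom may be dropped
```

The same routine also reproduces \Lemmaref{generalizing-tv} on finite examples: once $\gamma$ reaches $\Delta(P,Q)$, the feasibility test already passes at $\beta=0$.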
\subsection{Lossy $\infty$-Wasserstein Distance Generalizes Some Existing Notions}
Now we show that the lossy $\infty$-Wasserstein distance generalizes both the ``Probably Approximately Correct'' (PAC) guarantee and the total variation distance.
\begin{itemize}
\item {\bf Generalizing the PAC guarantee:} The PAC guarantee states that
a randomized quantity $G$ is, except with some small probability $\gamma$,
within an approximation radius $\beta$ of a desired \emph{deterministic}
quantity $f$: i.e., $\Pr_{g\leftarrow G}[\ensuremath{\mathfrak{d}}\xspace(f,g) > \beta] \le \gamma$. For example, when $G$ takes values over $\ensuremath{{\mathbb R}}\xspace$, $\ensuremath{\mathfrak{d}}\xspace$ can be the standard distance metric on $\ensuremath{{\mathbb R}}\xspace$, i.e., $\ensuremath{\mathfrak{d}}\xspace(f,g)=|f-g|$.
Representing $f$ by a point distribution $F_f$, this can
be equivalently written as $\Winf{\gamma}(F_f,G) \le \beta$, where the underlying metric is $\ensuremath{\mathfrak{d}}\xspace$;
see \Lemmaref{generalize-pac} in \Appendixref{wass-generalizes} for a proof of this.
\item {\bf Generalizing the total variation distance:} It also
generalizes the total variation distance $\Delta(P,Q)$ between two distributions, since
$\Winf\gamma(P,Q)=0$ iff $\Delta(P,Q) \le \gamma$; see \Lemmaref{generalizing-tv} in \Appendixref{wass-generalizes} for a proof of this.
\end{itemize}
\subsection{Triangle Inequality for Lossy Wasserstein Distance}
The Lossy $\infty$-Wasserstein distance satisfies the following triangle inequality.
\begin{lem}\label{lem:wass-triangle}
For distributions $P$, $Q$, and $R$ over a metric space $(\Omega,\ensuremath{\mathfrak{d}}\xspace)$ and for all
$\gamma_1,\gamma_2 \in [0,1]$, we have
\begin{align}
\Winf{\gamma_1 + \gamma_2}(P, R) &\leq \Winf{\gamma_1}(P, Q) + \Winf{\gamma_2}(Q, R). \label{eq:wass-triangle-gamma-infty}
\end{align}
\end{lem}
We can easily prove \Lemmaref{wass-triangle} for the special case when $\gamma_1=\gamma_2=0$ using standard tools from \cite{Villani_OptimalTransport08}; see \Lemmaref{Winf_triangle} in \Appendixref{wasserstein} for a proof. However, proving \Lemmaref{wass-triangle} in its full generality requires a significantly more involved proof, which we present in \Sectionref{triangle-ineq_Wass}.
Q: Tables side by side with input/search on top of first table I have two tables that I would like next to each other. "myTable" needs to be centered on screen with "myInput" centered on top. "myTable1" needs to be just to the right of "myTable" and aligned with its top. I can't manage to get the code correct. I threw some of what I've tried in there with inline-block and float right, but then it moves "myInput" to the left.
<input type="text" id="myInput" onkeyup="myFunction()" placeholder="Search by any part of name or speed code" title="Type in a name">
<table id="myTable" align="center" style="display: inline-block;">
<tr>
<td>Albert Einstein College of Medicine</td>
<td>718-904-2444</td>
<td>123</td>
</tr>
<tr>
<td>Bronx Lebanon Hospital Center</td>
<td>718-518-5118</td>
<td>123</td>
</tr>
</table>
<table id="myTable1" style="float: right;">
<tr>
<td>1</td>
</tr>
<tr>
<td>2</td>
</tr>
<tr>
<td>3</td>
</tr>
</table>
A: <div align="center">
<table id="myTable" align="center" style="display: inline-block;">
<tbody>
<tr style="text-align: center;">
<td><input type="text" id="myInput" onkeyup="myFunction()" placeholder="Search by any part of name or speed code" title="Type in a name"></td>
</tr>
<tr>
<td>Albert Einstein College of Medicine</td>
<td>718-904-2444</td>
<td>123</td>
</tr>
<tr>
<td>Bronx Lebanon Hospital Center</td>
<td>718-518-5118</td>
<td>123</td>
</tr>
</tbody></table>
<table id="myTable1" style="display: inline-block;">
<tbody><tr>
<td>1</td>
</tr>
<tr>
<td>2</td>
</tr>
<tr>
<td>3</td>
</tr>
</tbody></table>
</div>
Julián de Zaracondegui (? – 1873) was a Peruvian merchant and politician. He made his fortune as a guano consignee and later invested his capital in export agriculture (cotton and sugar). He was mayor of Lima from 1859 to 1860 and Minister of Finance in 1864.
Biography
Julián de Zaracondegui was a typical member of the emerging social class of the early Republic that profited from the guano trade and then invested its capital in export agriculture; from there, entering politics with the primary aim of defending his own interests was only one step away.
Zaracondegui already appears involved in Lima's commercial activities from the 1850s, founding several export and import companies. He was also a director of the Banco de Lima, a member of the Sociedad de Beneficencia Pública de Lima, and a member of the Chamber of Deputies.
In 1858 he was elected mayor of Lima, with Miguel Pardo as deputy mayor. Both were absent, however, and were replaced on an interim basis by José Rojas from November 22 to November 24, 1858. Rojas was appointed again on November 29 and remained in the post until December 24 of the same year, when, owing to illness, he was replaced by Colonel Estanislao Correa y Garay, a councilman of the municipality.
In 1859 Zaracondegui continued as mayor of Lima, but as he did not take up the post, his deputy mayor Miguel Pardo stood in for him. He eventually resigned. Colonel Estanislao Correa y Garay was elected as the new mayor, together with Manuel Vitorero as deputy mayor (1860).
During the government of Juan Antonio Pezet he served briefly as Minister of Finance in the cabinet of Manuel Costas Arce, from August 11 to September 5, 1864. He resigned following articles published in the opposition newspaper El Perú, run by José María Químper.
In the late 1850s and throughout the 1860s he ventured into the cotton and sugar business, in view of the strong international demand for those products during the American Civil War and, above all, the growth of a world market that insistently demanded greater quantities of raw materials. He partnered with Ramón Aspíllaga, and the two acquired the Cayaltí hacienda, of almost 4,000 hectares, located in the valley of the Zaña River in Lambayeque (northern Peru). By 1870 Cayaltí was producing sugar on a large scale, and its owners decided to install a modern sugar mill with machinery brought from England; the labor force employed was that of the "coolies", or Chinese workers. In contrast to Cayaltí's boom, however, Zaracondegui's other businesses began to decline, so his partners in the sugar venture raised a mortgage for 338,700 dollars. Zaracondegui received this sum, thereby transferring his share of the hacienda to the Aspíllagas. Shortly afterwards he declared bankruptcy and committed suicide.
See also
List of mayors of Lima
Guano Era
Bibliography
Basadre, Jorge: Historia de la República del Perú. 1822–1933, 8th edition, revised and expanded. Volume 4, p. 1038. Published by the Lima newspaper La República and the Universidad Ricardo Palma. Printed in Santiago de Chile, 1998.
Orrego, Juan Luis: La República Oligárquica (1850–1950). Included in Historia del Perú. Lima, Lexus Editores, 2000. ISBN 9972-625-35-4
Vargas Ugarte, Rubén: Historia General del Perú. La República (1844–1879). Volume IX, p. 104. 2nd edition. Editor Carlos Milla Batres. Lima, Peru, 1984. Legal deposit: B. 22436-84 (IX)
Vidaurre, Pedro N.: Relación cronológica de los alcaldes que han presidido el ayuntamiento de Lima desde su fundación hasta nuestros días… Solis, 1889, 109 pages.
Mayors of Lima
Ministers of Economy and Finance of Peru
Guano Era
Taras Shevchenko Boulevard, a boulevard in Brest.
Shevchenko Boulevard, a boulevard in Minsk.
Boulevard Shevchenko, LaSalle, QC, Canada
Shevchenko Boulevard, a boulevard in Donetsk
Shevchenko Boulevard, a boulevard in Zaporizhzhia
Taras Shevchenko Boulevard, a boulevard in Kyiv
Shevchenko Boulevard, a boulevard in Mariupol
Taras Shevchenko Boulevard, a boulevard in Ternopil
Shevchenko Boulevard, a boulevard in Cherkasy
See also
Shevchenko Street
Shevchenko Avenue
Taras Shevchenko Square
Taras Shevchenko Embankment
Taras Shevchenko Park
Taras Shevchenko Public Garden
Boulevards
\section{\label{sec:intro}Introduction}
Great interest in the community was prompted by the announcement of a gravitational-wave signal identified as the merger of two neutron stars (NSs). The GW170817 event \cite{Abbott2017} triggered an alert followed by many observatories and satellites, and at least 70 positive detections were reported. Among the most important observational highlights, a gamma-ray burst (GRB) definitely associated with the event \cite{AbbottGRB} confirmed the expectation that ``short'' GRBs are produced by such mergers, although the referred event was particularly faint (probably due to off-axis emission \cite{Tsvi}) and its recognition has been disputed \cite{Istvan}.
Important observations of the light-curve showed, on the other hand, a distinctive IR excess a few days after the outburst, of the type now known as ``kilonovae'' \cite{AbbottGRB, Valenti}. It was linked to the production of lanthanides and actinides \cite{Nucleo}, given that high-opacity in the ejecta neatly explains the temporal behavior \cite{Nucleo2}. The recent identification of strontium in the spectrum of the source \cite{Watson2019} added credibility to this interpretation. Actinides are also expected to form in the event, perhaps dominating the production of many heavy isotopes in the galaxy and populating the end of the Periodic Table \cite{Nucleo3}.
In spite of this benchmark advance, the theory of NS merging still needs to provide many answers for
the whole picture to be complete and compelling. This is quite a difficult task and should involve a
number of physical ingredients and high-performance computation. One key ingredient is the
composition of the matter in the colliding stars. Even within the standard picture, nucleons are hardly
the only particles present, since hyperons are expected above a certain interior density and have been explicitly
considered for many years~\cite{ref1,ref2,ref3,ref4,ref5,ref6,ref7,Yo}. Even more exotic components
have been considered, notably quark matter, both as a part of the innermost region of the stars or
as an absolutely stable state ({\it strange quark matter}, SQM) composing essentially all the star
up to the upper layers~\cite{Bodmer,Witten,Terazawa,Itoh}. The latter idea has been around for more
than three decades, and some indirect observational evidence for its possible existence has been
given in Refs.~\cite{SS-Evidences}.
To establish the link between the star internal composition and the observational data is the main goal of many investigations. In this regard, several pieces of information can be crucial for discriminating among the proposed compositions: for example, the results of reliable calculations of the nucleosynthesis process \cite{nos}, the so-called {\it tidal deformability} that can be extracted from the GW170817 data~\cite{lvc1,lvc2,Abbott2}, the star mass-radius relationship, etc.
In this paper we investigate if the equation of state~(EOS) of a strange star, described by a color-flavor-locked~(CFL) model with vector interactions and the gluon self-energy contribution \cite{Our-Gluons}, satisfies several observational constraints derived from the tidal deformability inferred from the GW170817 event, the maximum-mass constraints from various known pulsars, and mass-radius estimates derived from the Neutron Star Interior Composition Explorer (NICER) data. When considering the information from the GW170817 event, we will assume that the two stars participating in the binary NS coalescence have the same EOS.
We show that the deformability parameter space predicted by our model matches the one obtained from the GW170817 data \cite{Abbott2}. Furthermore, the maximum-mass constraints corresponding to PSR J1614-2230, PSR J0348+0432, and MSP J0740+6620 with $M=1.97\pm 0.04M_{\odot}$~\cite{Demorest}, $M=2.01\pm 0.04M_{\odot}$~\cite{Antoniadis}, and $2.14{{+0.10}\atop{-0.09}} M_{\odot}$~\cite{Cromartie}, respectively, are satisfied for the parameter values under consideration. In this matter, the inclusion of gluon effects increases the range of $G_V$ compatible with the observations. The reason for this is that the combined effect of the vector interactions and the gluon contribution makes the strange matter malleable, so that it can be sufficiently deformed while stiff enough to reach a high maximum mass. We also verify that the calculated dimensionless tidal deformabilites are of the same order as those obtained in relativistic and non-relativistic hadronic models studied in Refs.~\cite{had2,had3,had4,had5}.
This paper is organized as follows: In Sec.~\ref{sec:model}, we introduce the model and its EOS, which then is used as input to solve the Tolman-Oppenheimer-Volkoff (TOV) equations. The definitions of the tidal deformabilities are presented in Sec.~\ref{tidaldef}. Our results and comparisons with the GW170817 event are shown in Sec.~\ref{results}. Finally, we present the summary and concluding remarks of our study in Sec.~\ref{conclusions}.
\section{\label{sec:model}Modelling of self-bound compact stars}
According to the Bodmer–Terazawa–Witten (BTW) hypothesis \cite{Bodmer, Terazawa, Witten}, strange matter, which consists of roughly equal numbers of up, down, and strange quarks at high densities, is conjectured to be absolutely stable (it has lower energy per baryon than ordinary iron nuclei). If this is the case, the whole interior of a NS will likely be converted into strange matter.
On the other hand, the ground state of the superdense quark system is unstable with respect to the formation of diquark condensates \cite{CS}, a non-perturbative phenomenon essentially equivalent to the Cooper instability of BCS superconductivity. Given that in QCD one gluon exchange between two quarks is attractive in the color-antitriplet channel, at sufficiently high density and sufficiently small temperature quarks should condense into Cooper pairs, which are color antitriplets. At densities much higher than the masses of the $u$, $d$, and $s$ quarks (a condition usually written as $\mu\gtrsim m_s^2/2\Delta$, with $m_s$ being the strange quark mass and $\Delta$ the pairing gap), one can assume that the three quarks are massless. In this asymptotic region the most favored state is the CFL phase \cite{CFL}, characterized by a spin-zero diquark condensate antisymmetric in both color and flavor.
When considering CFL matter through a Nambu-Jona-Lasinio (NJL) model with four-fermion interactions at finite density, other interactions, besides the diquark channel \cite{DiQuark}, can also be considered. Among these additional interaction channels, vector interactions \cite{Vector-Int,dynamical} are the most relevant, as they can significantly affect the stiffness of the EOS, and hence they will be considered in our analysis. On the other hand, gluon degrees of freedom are usually disregarded as negligible at zero temperature and finite density. However, in the color superconducting background, the gluons acquire Debye ($m_D$) and Meissner ($m_M$) masses
\begin{eqnarray}\label{Masses}
m_D^2=\frac{21-8\ln 2}{18} m_g^2,\;\;\;\;\;\; m_M^2=\frac{21-8\ln 2}{54}m_g^2,\nonumber
\\
m_g^2=g^2\mu^2N_f/6\pi^2. \qquad \qquad\qquad
\end{eqnarray}
that depend on the chemical potential $\mu$ \cite{Gluon-Mass} and thus can affect the EOS of the CFL phase \cite{Our-Gluons}. In (\ref{Masses}), $N_f$ is the number of flavors and $g$ is the quark-gluon gauge coupling constant. Then, the net effect of the gluons in the CFL background is a $\mu$-dependent contribution that increases the energy density and decreases the pressure.
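Note from Eq.~(\ref{Masses}) that the Debye and Meissner masses are tied by the fixed ratio $m_D^2/m_M^2=3$, independent of $g$ and $\mu$. The short Python check below evaluates both masses (the values $g=3.5$ and $\mu=500$ MeV are illustrative, not fitted to the model):

```python
import math

def gluon_masses(g, mu, Nf=3):
    """Debye and Meissner gluon masses (MeV) in the CFL background:
    m_g^2 = g^2 mu^2 Nf / (6 pi^2),
    m_D^2 = (21 - 8 ln 2)/18 * m_g^2,  m_M^2 = (21 - 8 ln 2)/54 * m_g^2."""
    mg2 = g**2 * mu**2 * Nf / (6.0 * math.pi**2)
    c = 21.0 - 8.0 * math.log(2.0)
    return math.sqrt(c / 18.0 * mg2), math.sqrt(c / 54.0 * mg2)

mD, mM = gluon_masses(g=3.5, mu=500.0)
print(mD, mM, mD / mM)   # the ratio is sqrt(3), independent of g and mu
```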
The CFL thermodynamic potential with the contributions of the vector interactions and the gluons takes the form
\begin{equation}\label{modelo}
\Omega_{\mbox{\tiny CFLg}}=\Omega_{q}+ \Omega_{g} - \Omega_{vac}
\end{equation}
where the quark contribution at zero temperature is
\begin{eqnarray} \label{Gamma0}
\Omega_{q}&=&-\frac{1}{4\pi^2}\int_0^{\Lambda_{\rm cut}} dp p^2 (16|
\varepsilon|+16|\overline{\varepsilon}|)\nonumber
\\
&-&\frac{1}{4\pi^2}\int_0^{\Lambda_{\rm cut}}
dp p^2 (2|\varepsilon'|+2|\overline{\varepsilon'}|) +\frac{3\Delta^2}{G_D}-G_V\rho^2,
\end{eqnarray}
with
\begin{equation}\label{Spectra}
\varepsilon=\pm \sqrt{(p-\tilde{\mu})^2+\Delta^2}, \quad
\overline{\varepsilon}=\pm \sqrt{(p+\tilde{\mu})^2+\Delta^2},\nonumber
\end{equation}
\begin{eqnarray}\label{Spectra-2}
\varepsilon'=\pm \sqrt{(p-\tilde{\mu})^2+4\Delta^2},\quad
\overline{\varepsilon}'=\pm \sqrt{(p+\tilde{\mu})^2+4\Delta^2}.\qquad
\end{eqnarray}
with $\tilde{\mu}=\mu-2G_V \rho$, and
\begin{eqnarray}
\Omega_{g} &=& \frac{2}{\pi^2}\int_0^{\Lambda_{\rm cut}} dp
p^2\left[3\sqrt{p^2+\tilde{m}_M^2\theta(\Delta-p)}\right.
\nonumber \\
&+&\left.\sqrt{p^2+ \tilde{m}^2_D \theta({\Delta}-p)+3\tilde{m}^2_g\theta(\tilde{\mu}-p)\theta(p-\Delta)} \right]
\label{TP-gluons-T0}
\end{eqnarray}
is the gluon contribution at $T=0$. In (\ref{modelo}) we subtracted the vacuum constant
$\Omega_{vac}\equiv \Omega_{\mbox{\tiny CFLg}}(\mu=0, \Delta=0)$.
The dynamical quantities $\Delta$ and
$\rho$ are found from the equations
\begin{equation} \label{Gap-Eq1}
\frac{\partial\Omega_{\mbox{\tiny CFLg}}}{\partial\Delta} = 0, \;\;\;\; \rho=-\frac{\partial\Omega_q}{\partial\tilde{\mu}}
\end{equation}
The solution of the gap equation (first equation in (\ref{Gap-Eq1})) is a minimum of the thermodynamic potential while the solution of the second equation is a maximum \cite{Vector-Int}, since it defines, as usual in statistics, the particle number density $\rho=\langle\bar{\psi}\gamma_0\psi\rangle$.
Having the thermodynamic potential (\ref{modelo}), we can write the EOS of the system as
\begin{equation}\label{Pressure}
P_{\mbox{\tiny CFLg}}= -(\Omega_{q}+ \Omega_{g} - \Omega_{vac})+(B-B_0),
\end{equation}
\begin{equation}\label{Energy}
\epsilon_{\mbox{\tiny CFLg}} = \Omega_{q}+ \Omega_{g} - \Omega_{vac} + \tilde{\mu} \rho-(B-B_0)
\end{equation}
Notice that the chemical potential that multiplies the particle number density in the energy density is $\tilde{\mu}$ instead of $\mu$. This result can be derived following the same calculations of Ref. \cite{Israel} to find the quantum-statistical average of the energy-momentum tensor component $\tau_{00}$.
In (\ref{Pressure})-(\ref{Energy}), we added the bag constant $B$, which in the NJL model can be dynamically found in the mean-field approximation in terms of the chiral condensates that exist at low density \cite{Oertel}. The vacuum bag constant $B_0=B|_{\rho_u=\rho_d=\rho_s=0}$ is introduced to ensure that $\epsilon_{\mbox{\tiny CFLg}}=P_{\mbox{\tiny CFLg}}=0$ in vacuum. Using the results of \cite{Oertel}, one can readily see that for the parameter set under consideration, the vacuum bag constant takes the value $B_0=B|_{\rho_u=\rho_d=\rho_s=0}=57.3$ MeV/fm$^3$. Moreover, at the high densities where the CFL phase occurs, the chiral condensates are all zero, and consequently $B=0$ \cite{Our-Gluons}.
The mass-radius relationship of the system can be obtained using the EOS and the Tolman-Oppenheimer-Volkoff (TOV) equations
\begin{eqnarray}
\frac{dm(r)}{dr}&=&4\pi r^2\epsilon(r) \label{TOV1}\\
\frac{dP (r)}{dr} &=& -\frac{\left[\epsilon(r) + P(r)\right]\left[m(r) + 4\pi r^3 P (r)\right]}{r^2f(r)}
\label{TOV2}
\end{eqnarray}
written in natural units where $c = G = 1$. Here, $f(r)=1-2m(r)/r$, and $m(R)=M$ is the mass of the star with radius $R$. Since there is no strong evidence in favor of high spins in the GW170817 data, we shall not refer to this case.
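As a concrete illustration of how Eqs.~(\ref{TOV1})-(\ref{TOV2}) are used, the Python sketch below integrates them for a self-bound star with the simple linear (bag-model-like) EOS $P=(\epsilon-4B)/3$, taking $B=57.3$ MeV/fm$^3$ as quoted above. We stress that this toy EOS only mimics the stiffness of quark matter at the crudest level and is \emph{not} the full CFL EOS of Sec.~\ref{sec:model}; the Euler step and the central pressure are illustrative choices. Geometrized units ($c=G=1$, lengths in km) are used.

```python
import math

MEV_FM3_TO_KM2 = 1.3237e-6      # 1 MeV/fm^3 in geometrized units (km^-2)
MSUN_KM = 1.4766                # G*Msun/c^2 in km
B = 57.3 * MEV_FM3_TO_KM2       # bag constant from the text, in km^-2

def eps_of_P(P):
    """Linear bag-model-like EOS: P = (eps - 4B)/3  =>  eps = 3P + 4B."""
    return 3.0 * P + 4.0 * B

def tov_star(Pc, dr=1.0e-3):
    """Integrate the TOV equations outward with a simple Euler step,
    starting from central pressure Pc (km^-2); returns (M/Msun, R in km)."""
    r = dr
    P = Pc
    m = (4.0 / 3.0) * math.pi * r**3 * eps_of_P(Pc)
    while P > 0.0:                         # stop at the surface, P(R) = 0
        eps = eps_of_P(P)
        dP = -(eps + P) * (m + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * m))
        m += 4.0 * math.pi * r**2 * eps * dr
        P += dP * dr
        r += dr
    return m / MSUN_KM, r

M, R = tov_star(Pc=100.0 * MEV_FM3_TO_KM2)
print(M, R)   # mass (in Msun) and radius (in km) of this configuration
```

Sweeping the central pressure $P_c$ traces out the mass-radius sequence; the maximum of $M(P_c)$ is the quantity confronted with the pulsar mass constraints discussed below.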
Using these equations one can show that for each $G_V$, the gluons tend to decrease the maximum star mass in about $20 \%$~\cite{Our-Gluons}. The effect is even bigger at lower values of $G_V$. Sequences including gluons do not reach $2M_{\odot}$ unless $G_V/G_S>0.2$~\cite{Our-Gluons}.
In the following sections, we add further constraints to determine the compatibility of the CFL model (with and without gluons) with new observations: updated maximum-mass values, the tidal deformability of strange stars, and the mass-radius estimates obtained from NICER.
\section{Tidal Deformability}
\label{tidaldef}
The tidal deformability is a dynamical property of matter subject to a tidal field. A close analogy with known phenomena is easily recognized in nuclear physics, where several modes related to nuclear structure (dipole, giant resonances, etc.) can be measured when the nucleus is subjected to a perturbation (obviously not a tidal one). The linear regime of tidal deformability is seen every day in ocean tides. In the context of neutron star collisions, tidal deformability probes an extreme non-linear regime of what occurs in bulk matter.
On very general grounds, and irrespective of a Newtonian or relativistic approach, the tidal deformability $\lambda \equiv {Q_{ij}\over{\varepsilon_{ij}}}$ is defined as the quotient of the induced quadrupole moment $Q_{ij}$ and the tidal field $\varepsilon_{ij}$, and is dimensionally expected to scale as the fifth power of the star radius, $R^{5}$. In fact, introducing the
{\it gravitational Love number} $k_{2}$, the precise relation is
\begin{equation}
\lambda = {2\over{3}} k_{2} R^{5}.
\label{lambda}
\end{equation}
Direct calculations for a collection of equations of state yield $k_{2} \sim 0.2-0.3$. For general purposes, the tidal deformability can be made dimensionless by dividing it by the fifth power of the stellar mass $M$, namely
\begin{equation}
\Lambda = {2\over{3}} k_{2} {R^{5}\over{M^{5}}} \equiv {2\over{3}} k_{2} C^{-5}
\label{diml}
\end{equation}
where $C \equiv {M\over{R}}$ is the compactness. Numerically, $\Lambda$ can vary by roughly three orders of magnitude
between $\sim 1 M_{\odot}$ stars and the maximum-mass configuration for a fixed EOS (not considering other effects such as rotation, the dynamical response to the tidal fields, and magnetic fields). This is why many
works have focused on this quantity, which is very sensitive to the stars' composition \cite{kata}. Thus, even though we shall refer to only one event (GW170817), its observation is potentially important for assessing the state of stellar interiors.
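A one-line numerical check of Eq.~(\ref{diml}) makes the strong compactness dependence explicit; the value $k_2=0.25$ used below is a representative Love number within the range quoted above, not a fitted one.

```python
# Lambda = (2/3) k2 / C^5, Eq. (diml); k2 = 0.25 is a representative value.
k2 = 0.25

def dimensionless_lambda(C):
    """Dimensionless tidal deformability for compactness C = M/R."""
    return (2.0 / 3.0) * k2 / C**5

# Lambda drops by more than two orders of magnitude as C grows from 0.10 to 0.30.
low_C, high_C = dimensionless_lambda(0.10), dimensionless_lambda(0.30)
```

Since low-mass stars are much less compact than the maximum-mass configuration, this fifth-power dependence is what drives the large variation of $\Lambda$ along a stellar sequence.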
In addition to this novel test of the EOS, known tests must also be enforced to select a realistic form of the pressure and energy density for a given composition. This type of approach has been attempted in connection with heavy-ion data, that is, a reconstruction of the allowed zones of the EOS \cite{Recons}. Of course, ``static'' information on neutron stars should also be considered: the degree of stiffness of the EOS must allow at least $2.14^{+0.10}_{-0.09} M_{\odot}$~\cite{Cromartie} for the maximum mass, together with the relatively large radius $13.02^{+1.24}_{-1.06}$~km obtained~\cite{Col} from the
emission fits to the NICER data for PSR J0030+0451, with a determined mass of $1.44^{+0.15}_{-0.14} M_{\odot}$.
To proceed, we must make contact with the problem of two colliding compact stars, not necessarily of the same mass. In the final inspiral phase of a binary system, periodic gravitational waves (GW) are emitted with a phase that can be expressed as a post-Newtonian expansion in powers of $v/c$ (also written in terms of $u = (\pi M f)^{1/3}$, with $f$ the gravitational wave frequency), yielding at lowest order a ``tidal'' term $\propto -(39/2) \tilde{\Lambda} u^{10}$. The coefficient $\tilde{\Lambda}$ is given by
\begin{eqnarray}
{\tilde{\Lambda}} = {16\over{13}}{{(M_{1}+12M_{2})M_{1}^{4}\Lambda_{1} + (M_{2}+12M_{1}) M_{2}^{4}\Lambda_{2}} \over {(M_{1}+M_{2})^{5}}}\qquad
\label{tilde}
\end{eqnarray}
where $\Lambda_{1}$ and $\Lambda_{2}$ are the dimensionless tidal deformabilities of each star as defined above. This result was first obtained by Flanagan and Hinderer \cite{FH}; extracted directly from the observed waveform, it serves to probe the response of the stellar material to the tidal field.
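Eq.~(\ref{tilde}) is simple to evaluate directly; the transcription below comes with the sanity check that an equal-mass, equal-deformability binary gives $\tilde{\Lambda}=\Lambda$.

```python
def lambda_tilde(M1, L1, M2, L2):
    """Combined binary tidal deformability, Eq. (tilde) (Flanagan & Hinderer).

    M1, M2 are the component masses (any common unit) and L1, L2 the
    corresponding dimensionless tidal deformabilities.
    """
    num = (M1 + 12.0 * M2) * M1**4 * L1 + (M2 + 12.0 * M1) * M2**4 * L2
    return (16.0 / 13.0) * num / (M1 + M2)**5
```

For $M_1=M_2$ and $\Lambda_1=\Lambda_2=\Lambda$ the prefactors combine to exactly $\tilde{\Lambda}=\Lambda$, a useful consistency check; for unequal masses $\tilde{\Lambda}$ lies between the two component deformabilities, weighted toward the more massive star.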
\section{Numerical results and the GW170817 event}
\label{results}
In this section, we investigate how well the model described in Section \ref{sec:model} of a
self-bound compact star with CFL matter satisfies the tidal-deformability constraints imposed by the
GW170817 event, the most recently observed maximum-mass values, and the mass-radius fits to the NICER
data for PSR J0030+0451. We consider CFL matter with and without gluons and discuss the region
of compatibility in each case.
The model parameters used in the numerical calculations are defined following a standard
procedure, with the energy cutoff $\Lambda_{\rm cut}=602.3$ MeV and the
quark-antiquark coupling $G_S\Lambda^2_{\rm cut}=1.835$
adjusted to fit $f_\pi$, $m_\pi$, $m_K$ and $m_{\eta'}$ to their empirical values in the sharp-cutoff regularization \cite{Rehberg}. Then, the diquark coupling $G_D$ that produces a gap $\Delta\simeq10$ MeV at $\mu=500$ MeV is found to be $G_D=1.2 G_S$. A similar ratio $G_D/G_S$ was already considered in \cite{GD-GS} to investigate the $M$-$R$ relationship in hybrid compact stars with color-superconducting cores. Changing $\Lambda_{\rm cut}$ by a few percent, while simultaneously modifying $G_D$ to produce the same value of $\Delta$, does not affect our qualitative results. As for the vector coupling, it is known that if the vector channel originates from a Fierz transformation of a local color current-current interaction, the resulting coupling strength is $G_{V}=0.5 G_S$. If, instead, one starts from the molecular instanton liquid model or the PNJL model, the Fierz transformations give rise to much smaller values of $G_V$ \cite{GV-Vacuum}. Based on these considerations, $G_V$ is usually taken as a free parameter in the range $G_V=(0-0.5)G_S$, and we adopt this same range here.
In order to correctly describe a strange star within this model, we consider stellar matter composed of $u$, $d$ and~$s$ quarks, with the equations of state used as input to the TOV equations given by $\epsilon=\epsilon_{\mbox{\tiny CFLg}}$ and $P=P_{\mbox{\tiny CFLg}}$ from (\ref{Pressure})-(\ref{Energy}) for the case with gluons, and by the same equations with $\Omega_g=0$ for the case without gluons. Once the inputs are defined, the solution of the TOV Eqs.~(\ref{TOV1}) and~(\ref{TOV2}) is subject to the central boundary conditions $P(0) = P_c$ (central pressure) and $m(0) = 0$. The mass of the star for each set of parameters is obtained by integrating the TOV equations out to the point where the pressure vanishes, i.e., the surface of the star.
In Fig.~\ref{mr}, we present the mass-radius profiles of strange stars obtained with the NJL model used in this work, with and without the inclusion of gluons in its thermodynamics. The parametrizations were constructed by varying the vector channel strength of the model within the physically acceptable range of $G_V$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.35]{mr-revised.eps}
\caption{Quark star mass, in units of $M_\odot$, as a function of its radius generated from the CFL phase (a) without and (b) with gluons contribution. Bands extracted from Refs.~\cite{Demorest,Antoniadis,Cromartie}. Circles with error bars are related to the NICER data~\cite{Col,l21,l25}.}
\label{mr}
\end{figure}
From Fig.~\ref{mr}, it can be seen that the gluon contribution reduces the value of the maximum star mass obtained in each parametrization. This result agrees with the one already reported in Ref.~\cite{Our-Gluons}. Here we go further in the analysis of these diagrams by comparing them with more recent observational data. Two of these data are the mass values of the objects PSR J1614-2230 and PSR J0348+0432, $M=1.97\pm 0.04M_{\odot}$~\cite{Demorest} and $M=2.01\pm 0.04M_{\odot}$~\cite{Antoniadis}, lower and middle bands, respectively. The upper band represents the new result of $2.14^{+0.10}_{-0.09} M_{\odot}$ for the mass of the pulsar MSP J0740+6620 at the $68.3\%$ credibility level, recently presented in Ref.~\cite{Cromartie}. One can see that even with the overall reduction of the maximum mass in the presence of gluons, there is a range of $G_V$ consistent with the maximum-mass observations. More precisely, in the absence of gluons the range is $\frac{G_V}{G_S}> 0.02$, while with gluons it becomes $\frac{G_V}{G_S} > 0.21$ for the MSP J0740+6620 pulsar.
The range of allowable parameters is further constrained by the recent mass-radius estimates extracted from the NICER data, namely, $M=1.44^{+0.15}_{-0.14}M_{\odot}$ with $R=13.02^{+1.24}_{-1.06}$~km~\cite{Col}, $M=1.34^{+0.15}_{-0.16}M_{\odot}$ with $R=12.71^{+1.14}_{-1.19}$~km~\cite{l21}, and $R_{1.44}>10.7$~km~\cite{l25}. These estimates are indicated by black dots in the figure with their corresponding error bars, and each dot determines a corresponding range of allowable $G_V$. In the case without gluons, for each dot there is a range of $G_V$ consistent with both constraints, NICER's and the maximum mass. Adding the gluons reduces the compatibility to just one of the NICER estimates, $R_{1.44}>10.7$~km, which is the only one that can overlap with the condition $\frac{G_V}{G_S}> 0.21$.
While the number of accurately measured masses is increasing steadily, radii are much more
difficult to obtain. The recent determination by the NICER group for the neutron star PSR J0030+0451
is probably the most reliable measurement today. As pointed out, it predicts a radius of about $11$~km
to $13$~km for $M\sim 1.4M_\odot$, on the ``high'' side of expected values.
Reports of small radii have been presented over the years (see, for example,
Refs.~\cite{bogdanov,ozelfreire}), although they involve some form of modeling and are not as direct.
For example, the radius of the NS in the quiescent low-mass X-ray binary X5 has been constrained to
$R=9.6^{+0.9}_{-1.1}$~km for a $M = 1.4 M_\odot$ NS, according to Ref.~\cite{bogdanov}. If we
considered these data instead of NICER's, we would find that only the model with gluons, for
$\frac{G_V}{G_S}\lesssim 0.1$, can reproduce them. There are established methods to
infer the radii, and small values could ultimately be confirmed, but there is work to be done and
questions ahead that need to be answered~\cite{Lattimer}. Needless to say, this is a
very important question because it may be indicative of a ``two-family'' scenario~\cite{alvarez},
among other possibilities.
Regarding the results depicted in Fig.~\ref{mr}, we remark that the CFL model, with and without gluons,
predicts high values for the NS maximum mass. In this direction, the detection of the unusual event
GW190814~\cite{Abbot}, featuring a member of the pair in the interval $(2.5-2.67)M_{\odot}$, is
important in the context of the maximum-mass issue of NSs and the equation of state. Even though
the object may well be a black hole (of the ``light'' type, which has never been observed in the
local Universe), there is mounting evidence that it could also be an extreme case of the compact
star branch. This stems from i) the analysis of the LIGO-Virgo Collaboration showing that the
``light'' object is an outlier from the BH distribution detected from mergers, hence it should be on
the compact star side~\cite{Abbottetal2021}; ii) the statistical evidence that the maximum mass
$M_{max}$ is high, around $(2.5-2.6)M_\odot$~\cite{Alsing,Horvath}; and iii) the studies that have
argued for the possible nature of the lighter object as a strange quark
star~\cite{Bombaci,HorvathMoraes2021}, for which the theoretical sequences can reach this higher
level without obvious fatal problems. In summary, while we are not claiming that the GW190814 light
component must be a compact star, this possibility has been reinforced recently and warrants
extended studies, with clear connections to the subject of the present paper.
Now we consider the compatibility with the tidal deformability associated with the observation
of GW emission from the binary merger event GW170817, detected by the LIGO/Virgo Collaboration
(LVC)~\cite{Abbott2,lvc1,lvc2}. The GW emission caused an energy flux out of the binary system and
drove the inspiral motion of the stars~\cite{Taylor,Hulsel}. The data allowed the LVC to
establish constraints on $\Lambda_1$ and $\Lambda_2$, and also to determine a range
for $\Lambda_{1.4}$ (the deformability of a star with $M=1.4M_\odot$). In order to calculate $\Lambda$
as a function of $M$ or $R$ through Eq.~(\ref{diml}), one needs the second Love number $k_2$, which is
given by
\begin{align}
k_2 &=\frac{8C^5}{5}(1-2C)^2[2+2C(y_R-1)-y_R]\nonumber\\
&\times\Big\{2C [6-3y_R+3C(5y_R-8)] \nonumber\\
&+ 4C^3[13-11y_R+C(3y_R-2) + 2C^2(1+y_R)]\nonumber\\
&+ 3(1-2C)^2[2-y_R+2C(y_R-1)]\ln(1-2C)\Big\}^{-1},
\label{k2}
\end{align}
with $y_R\equiv y(R)$, and $y(r)$ obtained as the solution of
\begin{align}
r\frac{dy}{dr} + y^2 + yF(r) + r^2Q(r) = 0,
\label{dydr}
\end{align}
that has to be solved as part of a coupled system containing the TOV equations, Eqs.~(\ref{TOV1}) and~(\ref{TOV2}). Here, $F(r)$ and $Q(r)$ are defined as
\begin{eqnarray}
F(r) &=& \frac{1 - 4\pi r^2[\epsilon(r) - P(r)]}{f(r)},
\\
Q(r)&=&\frac{4\pi}{f(r)}\left[5\epsilon(r) + 9P(r) +
\frac{\epsilon(r)+P(r)}{v_s^2(r)}- \frac{6}{4\pi r^2}\right]
\nonumber\\
&-& 4\left[ \frac{m(r)+4\pi r^3 P(r)}{r^2f(r)} \right]^2,
\label{qr}
\end{eqnarray}
where $v_s^2(r)=\partial P(r)/\partial\epsilon(r)$ is the squared sound velocity~\cite{tanj10,Prakash,hind08,damour,tayl09}.
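For reference, the right-hand side of Eq.~(\ref{dydr}), with $F(r)$ and $Q(r)$ as defined above, can be transcribed as follows. The local values of $P$, $m$, $\epsilon$ and $v_s^2$ are assumed to come from a previously computed TOV profile; the regular solution is started from $y(0)=2$ at the center.

```python
# Right-hand side of the Riccati-type equation r y' = -(y^2 + y F + r^2 Q),
# Eq. (dydr), in geometrized units (G = c = 1).
import math

def y_rhs(r, y, P, m, eps_, cs2):
    """dy/dr at radius r, given the local TOV quantities.

    P, m, eps_ and cs2 are the pressure, enclosed mass, energy density
    and squared sound speed taken from the TOV solution at the same r.
    """
    f = 1.0 - 2.0 * m / r                       # metric function f(r)
    F = (1.0 - 4.0 * math.pi * r**2 * (eps_ - P)) / f
    Q = (4.0 * math.pi / f) * (5.0 * eps_ + 9.0 * P
                               + (eps_ + P) / cs2
                               - 6.0 / (4.0 * math.pi * r**2)) \
        - 4.0 * ((m + 4.0 * math.pi * r**3 * P) / (r**2 * f))**2
    return -(y * y + y * F + r * r * Q) / r

rhs_sample = y_rhs(5.0, 2.0, 1.0e-4, 0.5, 1.0e-3, 1.0 / 3.0)
```

This function would be stepped together with the TOV right-hand sides in a single coupled integration, since $F$ and $Q$ depend on the instantaneous profile.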
While solving the TOV equations, the star surface is defined as the point where the pressure vanishes, $P(R)=0$, as mentioned before. Nevertheless, in the case of a bare strange star the energy density is finite at this point, as one can see in Fig.~\ref{pe}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.35]{pe.eps}
\caption{Energy density as a function of pressure for the CFL phase (a) without and (b) with gluons contribution.}
\label{pe}
\end{figure}
This requires a correction to the calculation of~$y_R$ to account for the energy-density discontinuity at the star's surface, reading~\cite{angli,wang,mingli,Takatsy2020}
\begin{equation}
y_R\rightarrow y_R - \frac{4\pi R^3\epsilon_s}{M},
\label{yr}
\end{equation}
where $\epsilon_s$ is the energy density difference between the internal and external regions.
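The algebra of Eq.~(\ref{k2}) and the surface correction of Eq.~(\ref{yr}) can be packaged as below; this is a sketch of the post-processing step only, with $C$, $y_R$, $R$, $M$ and $\epsilon_s$ assumed to be available from a solved TOV profile.

```python
# Love number k2 from Eq. (k2), plus the bare-strange-star surface
# correction of Eq. (yr).  Geometrized units (G = c = 1).
import math

def love_k2(C, yR):
    """k2 for compactness C = M/R and logarithmic derivative yR = y(R)."""
    num = (8.0 * C**5 / 5.0) * (1.0 - 2.0 * C)**2 \
          * (2.0 + 2.0 * C * (yR - 1.0) - yR)
    den = (2.0 * C * (6.0 - 3.0 * yR + 3.0 * C * (5.0 * yR - 8.0))
           + 4.0 * C**3 * (13.0 - 11.0 * yR + C * (3.0 * yR - 2.0)
                           + 2.0 * C**2 * (1.0 + yR))
           + 3.0 * (1.0 - 2.0 * C)**2
             * (2.0 - yR + 2.0 * C * (yR - 1.0)) * math.log(1.0 - 2.0 * C))
    return num / den

def corrected_yR(yR, R, M, eps_s):
    """Surface-discontinuity correction, Eq. (yr): yR -> yR - 4 pi R^3 eps_s / M."""
    return yR - 4.0 * math.pi * R**3 * eps_s / M
```

Note that $k_2$ decreases as $y_R$ grows; since the correction lowers $y_R$, it raises $k_2$ and hence $\Lambda$, which is why it matters quantitatively for bare strange stars.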
Solving the TOV equations coupled to Eq.~(\ref{dydr}), together with the correction of Eq.~(\ref{yr}), we obtain the tidal deformabilities in the framework of the CFL model, with and without gluons, for the different parametrizations generated by varying $G_V$. We compare these quantities with the observational data extracted by the LVC. In Fig.~\ref{lm}, we show the dimensionless tidal deformability as a function of $M$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.35]{lm.eps}
\caption{Dimensionless tidal deformability as a function of quark star mass, in units of $M_\odot$, for the CFL phase (a) without and (b) with gluons contribution. Full circle: result of $\Lambda_{1.4}=190_{-120}^{+390}$ obtained by LVC~\cite{lvc2}.}
\label{lm}
\end{figure}
From Fig.~\ref{lm}, one can see that the vector interactions tend to increase $\Lambda$ at any given value of $M$ in both cases, i.e., with and without gluons. On the other hand, the effect of the gluons is to decrease the tidal deformability at any given $M$ and $G_V$.
For the specific case of $\Lambda_{1.4}$, for which an observational value has been determined by the LVC, namely $\Lambda_{1.4}=190_{-120}^{+390}$~\cite{lvc2} (GW170817 event), the CFL phase with the gluon contribution shows a clear trend toward the LVC data. Furthermore, a clear linear increase of $\Lambda_{1.4}$ as a function of $G_V$ is observed, as displayed in Fig.~\ref{l14-gv} for both cases, with and without the gluon contribution. In this figure, each circle/square represents the value of $\Lambda_{1.4}$ for a given value of $G_V/G_S$. Notice that the parametrizations with $0\leqslant G_V/G_S \leqslant 0.4$ are completely inside the GW170817 constraint on $\Lambda_{1.4}$ for the case with gluons. With no gluon contribution, this range becomes more stringent, namely $0\leqslant G_V/G_S \leqslant 0.1$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.33]{l14-gv.eps}
\caption{$\Lambda_{1.4}$ as a function of the vector channel strength, in units of $G_S$, for the CFL phase with and without gluons contribution. Dashed blue lines: $\Lambda_{1.4}=190_{-120}^{+390}$ obtained by LVC~\cite{lvc2}.}
\label{l14-gv}
\end{figure}
For the sake of completeness, we show in Fig.~\ref{ldim} how $\lambda$, calculated from Eq.~(\ref{lambda}), depends on the star radius~$R$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.35]{ldim.eps}
\caption{$\lambda$ as a function of $R$ for the CFL phase (a) without and (b) with gluons contribution. }
\label{ldim}
\end{figure}
We verify that the same features observed in Fig.~\ref{lm} are also present in the $\lambda$ vs $R$ curves, namely, $\lambda$ increases with $G_V$ at fixed $R$, and the gluon contribution reduces the $\lambda$ values. In addition, one can also see a reduction of the stellar radii when gluons are included, an effect also verified in the mass-radius profiles of Fig.~\ref{mr}.
In Fig.~\ref{l1l2} we show the tidal deformabilities $\Lambda_1$ and $\Lambda_2$ of the binary system in the CFL phase. We also depict the contour lines of $50\%$ and $90\%$ credible levels (full orange curves) related to the GW170817 event~\cite{lvc2}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.34]{l1l2.eps}
\caption{Dimensionless tidal deformabilities for the case of high-mass ($\Lambda_1$)
and low-mass ($\Lambda_2$) components of the GW170817 event for stars in the CFL phase (a) without and (b) with gluons contribution. The confidence lines (50\% and 90\%) are taken from Ref.~\cite{lvc2}. The dots on the curves denote the values that correspond to $M_1=1.4M_\odot$ and $\Lambda_1=\Lambda_{1.4}$.
\label{l1l2}
\end{figure}
In order to produce these curves for different $G_V$ values, we vary the mass of one of the stars, $M_1$, in the range $1.37 \leqslant M_1/M_\odot \leqslant 1.60$~\cite{lvc2,Abbott2}. The mass of the second star, $M_2$, is related to $M_1$ through the chirp mass, defined as \mbox{${\mathcal M} = (M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$}~\cite{lvc1}. The LVC analysis determined $\mathcal{M}=1.188^{+0.004}_{-0.002}M_\odot$~\cite{lvc1}, which yields the range $1.17 \leqslant M_2/M_\odot \leqslant 1.36$~\cite{lvc1,lvc2} for the mass of the companion star.
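Given $M_1$ and the chirp mass, the companion mass follows from inverting ${\mathcal M} = (M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$, for instance by bisection, since the chirp mass is monotonic in $M_2$ at fixed $M_1$. The bracket $[0.5, M_1]$ below is an illustrative choice.

```python
def companion_mass(M1, Mchirp, lo=0.5, hi=None, tol=1e-10):
    """Solve Mchirp = (M1*M2)^(3/5) / (M1+M2)^(1/5) for M2 by bisection.

    Assumes the root lies in [lo, hi] (hi defaults to M1); the chirp
    mass grows monotonically with M2 at fixed M1, so bisection is safe.
    """
    if hi is None:
        hi = M1

    def chirp(M2):
        return (M1 * M2)**0.6 / (M1 + M2)**0.2

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chirp(mid) < Mchirp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

M2 = companion_mass(1.4, 1.188)   # companion of a 1.4 Msun primary
```

For $M_1=1.4\,M_\odot$ and $\mathcal{M}=1.188\,M_\odot$ this gives $M_2\simeq 1.33\,M_\odot$, inside the companion range quoted above.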
Comparing the curves in Figs.~\ref{l1l2}(a) and (b), we clearly notice (once more) that including the gluon contribution helps the curves satisfy the LVC constraint in the $\Lambda_1\times\Lambda_2$ plane. In this case, all curves with $G_V\leqslant 0.4G_S$ are completely inside the $90\%$ credible region, while the one with $G_V=0.5G_S$ lies at the limit of the external boundary curve.
We also mark with a dot the points on the curves where $M_1=1.4M_\odot$ and, consequently, $\Lambda_1=\Lambda_{1.4}$. From these points we can observe a connection between the results shown in Figs.~\ref{l14-gv} and~\ref{l1l2}. Decreasing $G_V$ implies lower values of $\Lambda_{1.4}$ and, for the CFL phase studied here, leads to agreement with the LVC constraint on this quantity, as pointed out before. The same kind of compatibility is verified along the entire $\Lambda_1\times\Lambda_2$ curves: the reduction of $\Lambda_{1.4}$ with decreasing $G_V$ is accompanied by a shift of all the curves toward the observational region predicted by the GW170817 event. This feature is observed for the CFL phase with or without the gluon contribution. We also remark that the magnitude of the curves exhibited in Figs.~\ref{lm} and~\ref{l1l2} is compatible with those obtained from relativistic and nonrelativistic hadronic models~\cite{had1}, which are also in agreement with the observational data reported by the LVC; see, for example, Refs.~\cite{had2,had3,had4,had5}.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.34]{ltilde.eps}
\caption{$\tilde{\Lambda}$ for different parametrizations (different $G_V$ values) for strange stars in the CFL phase (a) without and (b) with gluons contribution. Dashed lines: range of $\tilde{\Lambda}=300^{+420}_{-230}$ determined by LVC~\cite{Abbott2}.}
\label{ltilde}
\end{figure}
Finally, Fig.~\ref{ltilde} shows the ranges of $\tilde{\Lambda}$, Eq.~(\ref{tilde}), obtained for strange stars in the CFL phase. $\tilde{\Lambda}$ is calculated as a function of the mass of one of the stars forming the binary system, namely, $\tilde{\Lambda}=\tilde{\Lambda}(M_1)$ or $\tilde{\Lambda}=\tilde{\Lambda}(M_2)$. Since $M_1$ (or $M_2$) is restricted to a particular range by the GW170817 event, each parametrization with a fixed $G_V$ value produces a range for $\tilde{\Lambda}$. We compare the results with the constraint on the combined dimensionless tidal deformability obtained by the LVC, namely, $\tilde{\Lambda}=300^{+420}_{-230}$~\cite{Abbott2}. Once again, the CFL phase with the gluon contribution agrees with the observational data from the GW170817 event. Just as with the behavior of $\Lambda_{1.4}$ versus $G_V$ depicted in Fig.~\ref{l14-gv}, there is also a strong linear relation between $\tilde{\Lambda}$ and $G_V$. Lastly, the parametrizations with $G_V/G_S < 0.2$~($G_V/G_S < 0.5$) for the CFL phase without~(with) gluons satisfy the constraint imposed by the observational range of $\tilde{\Lambda}$.
\section{Summary and concluding remarks}
\label{conclusions}
In this paper, we explored the compatibility of strange stars in the CFL phase with a set of observational constraints: the GW170817 event; the maximum stellar masses of PSR J1614-2230~\cite{Demorest}, PSR J0348+0432~\cite{Antoniadis}, and MSP J0740+6620~\cite{Cromartie}; and the mass-radius estimates from recent NICER data. An important goal of this paper has been to present a systematic approach to testing the observational compatibility of a quark star in a particular phase.
We considered an absolutely stable strange star made of massless $u$, $d$, and $s$ quarks in the CFL phase modeled by an NJL theory with diquark and vector interaction channels. Gluon effects were incorporated by adding the gluon self-energy calculated in the finite-density color-superconducting medium \cite{Our-Gluons} to the thermodynamic potential.
In Fig.~\ref{finalranges} we summarize our main findings and the range of overall compatibility for the CFL phase, with and without the gluon term. The regions between the dashed vertical lines in Figs.~\ref{finalranges}{\color{blue}a} and~\ref{finalranges}{\color{blue}b} indicate the ranges of $G_V/G_S$ compatible with all the constraints simultaneously. In general, including gluons tends to better accommodate the tidal-deformability observations. Gluons also help widen the range of vector interactions compatible with all the observations, although they increase the minimum $G_V$ needed to satisfy the constraints; the resulting range is still within the theoretically acceptable values of the vector interaction strength.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.34]{gv.eps}
\caption{Compilation of $G_V$ ranges predicted by this work from comparison with astrophysical data. The intersection of the ranges is (a) $0.02<G_V/G_S<0.1$, and (b) $0.21<G_V/G_S<0.4$ for the model without and with gluons contribution included, respectively.}
\label{finalranges}
\end{figure}
Our results show that the CFL phase, with or without the gluon contribution, is compatible with the set of recent observations considered in this paper. This, of course, does not ensure that future observations and/or refinements of the estimates from known observations cannot push the CFL phase out of the compatibility region. Even in such a case, other phases that can be realized in a strange star would be worth examining against the new constraints using the same approach followed here.
It is interesting that the dimensionless tidal deformabilities of CFL stars found in this paper are comparable to those of hadronic stars \cite{had2,had3,had4,had5}, i.e., they have the same order of magnitude within the allowable parameter range.
Finally, we call the reader's attention to the fact that, strictly speaking, the CFL phase of massless $u$, $d$ and~$s$ quarks is energetically favored only at asymptotically large densities (i.e., at densities much higher than the $s$-quark mass scale). At more realistic densities, the effect of the $s$-quark mass may lead to chromomagnetic instabilities and eventually to a spatially inhomogeneous phase \cite{chromomagnetic inst}. At those densities, other phases may compete with the CFL phase and become plausible candidates for the strange star phase. Along this direction, inhomogeneous phases of dense quark matter with chiral quark-hole condensates have attracted much interest in recent years \cite{inh-con}. In this context, one of those phases, the so-called magnetic dual chiral density wave (MDCDW) phase, has emerged as a viable candidate: it has so far satisfied some important astrophysical constraints, for instance the observed $\sim 2M_{\odot}$ masses~\cite{Carignano}, and its stability against collective fluctuations has recently been established \cite{MDCDW stability}, ensuring its robustness at the density and temperature conditions of neutron stars.
\begin{acknowledgments}
The work of E.J.F. and V.I. was supported in part by NSF grant PHY-2013222. The work of C.H.L., M.D., and O.L. is part of the project INCT-FNA proc. No. 464898/2014-5. It is also supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grants No. 310242/2017-7, 312410/2020-4, 406958/2018-1 (O.L.), and No. 433369/2018-3 (M.D.). We also acknowledge Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) under Thematic Project 2017/05660-0 (O.L., M.D., C.H.L), Grant No. 2020/05238-9 (O.L., M.D., C.H.L), and Thematic Project 2013/26258-4 (L.P.). J.E. Horvath has been supported by Fapesp Agency (S\~ao Paulo) and CNPq
(Federal Agency, Brazil) through grants and scholarships.
\end{acknowledgments}
Completely Foamy
Sometimes you have to get a little messy to feel this good. Like tonight after dinner when we had a shaving cream fight down on the landsports field. Chase announced the optional event in the dining hall, and from the cheering it was clear we would have plenty of girls ready to romp about with the slippery white foam.
The point of a shaving cream fight is simple— spray the contents of your can both on others and on yourself. Then run around smearing, wiping and rubbing the shaving cream into everyone's hair, on their backs, and ultimately everywhere. Beyond that, the goal is to have fun, be silly and enjoy the mess of it all. It's as simple as that.
There are no teams, and this is not a competition where we pick a winner at the end. So it's not much of a "fight," really. It's more cooperative, since it's just as much fun to be attacked as it is to splatter others. Part of the fun is surprising someone, sneaking up and planting a blob right on their back, shoulder or leg, then racing away grinning, secretly hoping (while also looking out for) someone who will do the same to you.
And it's absolutely hilarious! Once the spraying begins, you can't hear anything except shrieks of delight and laughter. We all (yes, counselors and directors too!) quickly begin to look pretty funny, our hair sticking up, with white beards and mustaches, if not completely foamy.
A shaving cream fight feels liberating too. It's a little mischievous and outrageous, but still sanctioned, even celebrated at camp. It's a harmless way to go a little crazy, while at the same time laugh and play with your friends.
What to learn from a shaving cream fight? I'm not sure, but I'd say it's a wonderful way to experience uninhibited joy, a deep feeling that in our ordinary lives too often struggles to find expression. At camp though, it's pretty easy; we find it everyday.
Exciting for Everyone
Arriving at camp, as our 2nd July mini session campers did today, is exciting for everyone. For the full session girls already here and half way through their long session, the arrival of new friends, many of whom we already know, is invigorating because it means camp will again kick up a notch with new conversations and new people to play with. For the girls arriving, the anticipation of camp starting —all that pent up enthusiasm and energy— can finally be released. For everyone, today was a chance to reunite with old camp friends… and we saw plenty of full-on hugs to prove that! …or to meet new people that surely will become friends before long. The whole morning was a festival of smiles as the arriving mini session campers smoothly checked in, met their counselors and got settled in the cabins.
Right away, the arriving girls got busy with hikes to Rockbrook Falls, which is one of the larger waterfalls on the camp property. They gathered on the tennis courts to hit a few balls and play a "speed game." Some, as another option, chose to stop by the gym to play gaga ball or basketball, while others made their first lanyard or friendship bracelet on the hillside lodge porch. I could tell the girls appreciated getting started with a camp activity in the first few minutes they arrived.
Rick's homemade pizza, along with more salad than we could eat, made our first meal delicious and familiar at the same time. Tours of camp during rest hour, and trips to the lake for swimming demonstrations, plus cabin meetings (a chance to get to know each other, rearrange trunks and other personal items, and learn important camp rules) came next. It being a hot sunny afternoon, roaming around the camp and finally stopping at the lake for a quick swim felt really good.
What better way to open the camp session, though, than with an afternoon carnival? When the bell rang about 3pm, Chase our program director, with the help of almost 20 other staff members, pulled out all the stops for this amazing all-camp event on the grassy hill in the center of camp. Like all great parties, this event combined fun dance music, several options for snacks, group games, challenge games, and in this case, about 200 excited girls to enjoy everything with.
There were two huge inflatables to try: a 35-foot water slide called the "Wild Rapid," and an obstacle course that allowed two girls at a time to climb, crawl and scramble through. There was a cake walk organized for girls to earn small cupcakes. One area had girls playing Pin the Tail on the Donkey and choosing rubber ducks from a pond, while another allowed girls to "fish" (using a stick with a string and magnet attached) for prizes. Two Hi-Ups enjoyed running a pie throwing station, two more, a beanbag "dunk booth," and two others a challenge that involved eating an apple or doughnut hung by a string. We had face painting, yard checkers, a ring toss game, a giant bubble station, and a game of "Messy Twister" (messy from shaving cream and a little body paint) also going on. Pumping across the hill was music by DJ Dawg, who was also teaching dance moves to groups of girls. Snow cones and popcorn kept us snacking too. Plus we had fun busting open several piñatas and scrambling for the pieces of candy that came spilling out.
With so many options available, you could stand on the hill and see happy groups of girls in all directions, each smiling and laughing as they got a little wet, maybe a little messy, and had a blast zipping from area to area. One small Junior camper ran by me, snow cone in hand, shouting, "This is the best day ever!"
Later in the afternoon, some girls chose to take a dip in the lake, while others took a shower before dinner, capping off a fantastic opening day. We'll start right in with camp activities in the mornings, and soon the first whitewater rafting trip will be going out.
We've only just gotten started, and there's so much to look forward to!
Happy and Tired
At breakfast this morning we announced a special activity the girls could select today: attending a wood turning workshop presented by local artist George Peterson. George is married to an alumna of Rockbrook and has two daughters who attend camp. He is known nationally for working with wood, shaping, etching, carving, burning and finishing it into amazing functional and decorative pieces. He just returned from showing his work in Japan, and has worked with galleries in New York, San Francisco and Atlanta as well. One of his bowls was recently chosen for the Oval Office! His Web site, The Circle Factory, shows some of his latest work. Check out what he's done with old skateboards.
George started by demonstrating how a wood lathe can spin a block of wood, and allow his sharp chisel to cut away curly shavings, slowly revealing a uniform shape. It was a little loud, but so fascinating to watch a bowl materialize from the block with each chip of wood removed. After forming the interior of the bowl, George demonstrated how to shape the exterior and bottom using an electric carving tool. This wasn't just a demonstration though. George was ready for each girl to have her own bowl to work on. He had the interiors started, and with George guiding the tools, the girls carved and sanded their bowls, readying them for the final two touches: burning the letters "RBC" using a metal brand, and adding a coat of mineral oil to protect the wood and give it a pleasing shine. Throughout the day, in a total of 4 workshops, campers were carving and sanding very cool wooden bowls, now keepsakes of their session at Rockbrook.
The Rockbrook lake already has floats, beach balls, kickboards, noodles, tubes, and other assorted floating balls and toys, but today the lifeguards added a few other items "just for fun," as they put it. For the morning periods, it was an arsenal of water pistols and water-shooting devices. The junior campers in particular had fun spraying each other, easily refilling their weapons with water from the lake. In the afternoon, suddenly there was a watermelon to play with. Some of the older girls took turns swimming with it, tossing it from the diving board, and watching it (after a very excellent splash) slowly resurface. After each toss and loud kerplunk, the girls would laugh and laugh, ready to pass the watermelon back up for another throw. Simple stuff, I know, but you would love it too!
Tonight was an event that many of the girls, especially the older campers, look forward to all session, and that has become a camp tradition over the years: a dance with Camp Carolina. Our tankless water heaters probably fired continuously this afternoon, and the very few mirrors in camp attracted a constant crowd, as the girls prepared for the night, pulling out a special outfit or maybe dressing in a silly costume. Once again we split the two camps and held two dances, our Juniors and Middlers staying at the Rockbrook gym with the younger boys and our Seniors and Hi-Ups dancing in the CCB dining hall with their older boys. At Rockbrook, our friend Marcus (aka, DJ Dawg) played all the music, doing a great job selecting songs the girls know, as well as songs with popular dance moves like "Watch Me." At both dances we outnumbered the boys about 2:1, making the night, for the girls at least, more about dancing with their friends than with the boys. Sweaty and tired from jumping and dancing around for an hour and a half, the older girls were very excited and chatty on the ride back to Rockbrook. Happy and tired: that's another good camp day.
Tagged: camp dance, girls games, kids games, rockbrook lake, swimming, visiting artist
A Terrific Evening
During the two "Free Swim" periods of each day, 45 minutes before both lunch and dinner, it's common to see a good number of girls swimming laps at the lake. Some using kick boards and others varying their strokes, girls are clocking laps back and forth. And they are keeping count of exactly how many they finish, because if they reach 200 (150 for Middlers, and 100 for Juniors) they join the "Mermaid Club." You can imagine completing that many laps is no one-day affair; it takes dedication and multiple trips to the lake. When a camper joins the Mermaid Club, Chrissy, our Waterfront Director, will read out your name in the dining hall during the announcements after a meal, and then the whole camp sings the "Mermaid Song" inserting the camper's name in the final line. Chrissy wrote the song, and here are the words.
The Mermaid Song
Way down at Rockbrook in the chilly lake,
There were some girls a-swimming,
Who started to shiver and shake.
We saw some scales a-glinting,
And tails they did sprout!
Lo and behold a mermaid and the whole camp did shout
"Oh Mermaid, Mermaid, What's your name?
[name]! [name]! You're a mermaid!"
More of a chant than a song, it's an honor to be recognized by everyone in this way. In addition to the recognition, some girls are (at least partially!) motivated by another perk awarded members of the Mermaid Club each session: a trip to Dolly's Dairy Bar. For girls who simply love the waterfront, the water slide "Big Samantha," the diving board, or just floating around on a tube in the sun, this is a concrete way to show it. Here's a short video to give you a better sense of it all. I wonder if your girls are striving to join the Mermaid Club… (Hint. Hint. You could write them and ask!)
For those who prefer more land-based activities to fill their free time, the gym is one place to go because there's bound to be a basketball or dodgeball game in the works. Right outside the gym, the GaGa pit is a great option. The tennis courts are also available to practice your serve or just to hit a few balls with a friend. A group of "Rockbrook Runners," which includes walkers, leaves for a loop around the camp during the first Free Swim of the day. Like the Mermaid Club, the Rockbrook Runners have a club based on how many loops/laps are completed by the girls. It's the "Marathon Club," and as you might guess, the runners aim to finish 26 miles while they are here for the full session (though less if at camp for fewer weeks). And yes, the same extra sweet, creamy reward awaits those who run the required amount. Running for ice cream… I suppose that makes sense in some way or another.
The tankless hot water heaters were humming constantly this afternoon after we announced at lunch that tonight we would travel over to Camp High Rocks for a square dance with their boys. After braiding a lot of clean hair, dressing in whatever combination of flannel, jeans and bandannas we could gather, our entire camp made the short journey up the mountain (10 bus/van loads plus a couple of cars for extra counselors!). When we arrived, the boys were waiting for us out on their tennis courts and the bluegrass music was already playing from a set of speakers on the small hill nearby. Some of the girls seemed a little nervous about not knowing how to square dance, but the High Rocks boys, and their counselors, were friendly and relaxed about the whole event and helped the girls learn different moves. Once we got going that uncertainty passed and soon everyone was smiling and laughing with every turn and do-si-do.
After about an hour of dancing, we took a short break to mingle and recharge with some homemade oatmeal raisin cookies and lemonade. A little more dancing and we were back down the mountain discussing what made tonight's square dance (for some, surprisingly) so fun. Maybe it was the outdoor setting with beautiful evening sunlight, or the lighthearted friendly atmosphere, or the opportunities to talk with each other, or the gentlemanly behavior of the High Rocks boys, or the genre of the music (… Well, for the girls, maybe not that.). Whatever the reason, we were all sure it was a terrific evening.
Tagged: camp dance, girls games, gym sports, mermaid club, Rockbrook Runners, square dance, swimming
Jittery Excitement
"Welcome back to camp!" and "Welcome to Rockbrook!" were the phrases of the day as we opened our main session of camp today and a record-setting 227 campers arrived (a bit above capacity because our 16-year-olds make a huge group this session). Beyond the phrase, the feeling of the day was jittery excitement as campers arrived and were greeted by Sarah, the other directors and their cabin counselors. Everyone was fired up and ready to get started. All this positive energy buoyed most everyone's spirits as the line to check in moved entirely too slowly. Interestingly, also today, we had only about 8 girls flying into the airport, as opposed to what's ordinarily 30 or so. I suppose air travel is becoming more burdensome for everyone! The whole morning had staff members hustling to help campers settle into their bunks while campers took short hikes, made bracelets, decorated name tags, and played their first game of ga-ga. We had a picnic lunch on the hill, and cabin meetings all before beginning the many tours of camp and the activity areas. With this much excitement bubbling up around here, the whole morning was a lot of fun.
Before we allow anyone to use the Rockbrook lake (or participate in any of the "water trips" like whitewater rafting or kayaking), we want to make sure, for obvious safety reasons and as part of our American Camp Association accreditation, they can swim well and be comfortable in the water. For this reason, we asked everyone to demonstrate their swimming ability this afternoon by jumping in the lake, swimming out 50 feet, back another 50 feet using a back stroke, and treading water for 1 full minute. Our lake is fed directly by a mountain stream, so it is notoriously "refreshing," or "shockingly cold," as one camper put it. Fortunately, it was hot and sunny during the demonstrations today, and the waterfront staff saw very few girls struggle to complete the test. Everyone who passes receives both a swim tag labeled with their name and a bright green bracelet that serves as a way for the lifeguards to identify who is eligible to swim in the deep area of the lake. Girls who need to retake the swim test receive a different colored swim tag and can still enjoy the lake, but we require that they wear a life vest and stay in the shallow area. When the lifeguards call for a "Tag Check," the girls in the deep area (who should have bracelets) hold up their arms, and it quickly becomes clear how many swimmers are in each area of the lake and that the total number matches the arrangement of tags on the tag board. It's an elaborate system, but it is an essential and effective safety check for our waterfront.
Before the girls sign up for activities, which as you know is something done twice per week here at Rockbrook rather than in advance at home, it's helpful for them to learn more about what each option entails. Likewise, it's fun to see which counselors and staff members will be the instructors for each of the 28 different offerings. With these two goals in mind, we spent time late this afternoon assembled in the gym as the activity instructors performed short skits to introduce what they have planned for this session. Like all good Rockbrook skits, these were a little silly, involved costumes, props, a little dancing, but also singing. Some of the skits, for example those by the climbing and paddling staff, included plenty of cool looking equipment… Ropes, paddles, helmets, and so forth. Many of the crafts areas presented finished examples of their upcoming projects… weaving, jewelry, and a "bunny pillow," for example. The five ceramics instructors, dressed simply as different "pots," sang a song set to the tune "Be Our Guest" from the Disney movie Beauty and the Beast. "We can coil, we can pinch, after all miss this is camp! Make a handle, take a spot, and you'll surely make a pot. Make a pot, make a pot, make a pot." After each skit, it was all cheers for the animated enthusiasm demonstrated by the staff, and plenty of chatter among the girls about which activities they would be trying first.
We're off to a great start, with all the girls settling down… jitters subsiding… getting to know one another, and now even more excited for tomorrow's first day of regular activities. Seeing that energy, I can already tell we've got all the ingredients for a great camp session. Stay tuned!
Tagged: activity skits, girls games, opening day, swim demos
Happy and Excited
Ordinarily at camp the wake up bell rings at 8am giving the girls time to dress and do a few cabin chores before the breakfast bell at 8:30am. Today though, we surprised everyone with a special pancake breakfast held in each Line's stone lodge. The kitchen gave us a head start by making a few hundred pancakes, but then teams of counselors, armed with griddles and huge bowls of batter, poured and flipped hot pancakes starting around 8. When the breakfast bell rang, the girls went to their lodges and found sausage and pancakes, milk and juice, but also a pancake toppings station loaded with all kinds of yummy sweet syrups, chocolate chips, marshmallow spread, butter, blueberries and cut strawberries. The girls spilled out into the sunshine around the lodges, sat in their crazy creek chairs, or lined up in the red porch rockers chatting while they watched the fog lift from the mountains in the distance. It was a lovely morning, and a big hit with the campers.
Lunch today turned toward the deep south with Rick and his team in the kitchen frying up sliced green tomatoes for everyone to make sandwiches. With a dab of his homemade rémoulade sauce, or a slice of cheese for the truly bold, this made a delicious sandwich. As a side, Rick prepared several pans of summer squash casserole made with a perfect balance of breadcrumbs, fried onions, cheddar cheese and butter. Cut cantaloupe, strawberries and grapes balanced out the table. Of course, the super-stocked salad bars saw plenty of action too, as did the peanut butter and jelly station.
When it's your birthday at camp, as it was for Frances today, it's a big deal. Before breakfast begins, the counselors will secretly decorate your cabin's table with a colorful painted banner— Happy Birthday Frances! —to surprise everyone about your special day. Then at lunch, we interrupt the meal to carry out one of Katie's (Rockbrook's fabulous baker) delicious cakes, highly decorated for the occasion and lit with candles. The whole camp, which is close to 280 people, then sings a big boisterous version of "Happy Birthday" followed by chanting "Tell us when to stop!" Clapping in unison, one clap for each year old, everyone counts out until the birthday girl waves us off at the right number. Also, for birthdays we happily make an exception to our "No Packages" policy, making it even more exciting to receive a few presents from home. Sharing your birthday (and your cake!) with so many friends, is really a special experience.
This afternoon, as is the case most Wednesdays, we paused our regularly scheduled individual activity periods and enjoyed special all-cabin and whole-line trips. It's our "Cabin Day." (Have you seen this glossary of camp terms?) Some cabins were having "Paint and Polish Parties" where fingers and toes gained fresh color. Others had letter writing projects, cabin name plaques to paint, or had plans to hike the steep climb up to Castle Rock. The Juniors had a silly costume fashion show in the Hillside Lodge. The photos of that event are hilarious!
Late in the afternoon, all the Middlers and their counselors took a ride into the Forest for a picnic, a few chilly rides down sliding rock, and a frozen ice cream treat at Dolly's. The girls had a great time playing group games in the grassy field after our dinner of hotdogs, chips and fruit. The "I'm a Rockbrook Girl" game seemed to be the most popular as it got everyone dashing across the huge circle a group this size (about 85) required. Our timing at sliding rock was again ideal because we found the place deserted, leaving us free to slide as much as we wanted. The water is cold enough, and by now it was late enough, that most girls slid 2 or 3 times, even as a handful braved the plunge 8 times. Good fun. And an extra large scoop of Dolly's ice cream made the evening complete. A little chilled, but happy and excited to sing on the bus, we made our way back to camp in the dark and called it "another wonderful day" at camp.
Tagged: birthday, cabin day, Camp food, costumes, girls games, sliding rock
Resilient Camp Girls
At 8:00 this morning, as is usual, the girls were awoken to the clear tones of the iron bell ringing throughout the camp, but also today to the tapping of rain on every roof. It was one of those rare mornings when raincoats came out for breakfast, when the temperature was cooler, and droplets of mud seemed to spring up on most things at camp. On a day like this, some girls resist the weather and gear up completely with waterproof hats, jackets, boots and umbrellas, while others just embrace it, stomping around in flip flops, wet hair and soggy clothes.
Either way, there's something important going on; the girls are showing their resilience, their ability to carry on despite the rain. Even with the minor discomfort and reshuffling of plans a rainy day presents, the girls coped just fine, confidently and without a parent determining every step. Life often includes moments like this when unexpected misfortune rears its head, so learning to be resilient, to land on your feet ultimately, is a crucial skill, and it's something that camp is perfectly suited to teach. Here's an article discussing how Rockbrook teaches resilience, what our program, staff and overall philosophy provide to help our girls handle setbacks later in their lives. (Please take a moment to read it.) We've said it many times before, and this is an example; woven into all the excitement and fun of camp are really significant lifelong benefits for kids.
Today's rafting trips are another example of your girls' resilience. After a great night camping at our outpost located further upstream on the Nantahala River, complete with s'more making and wildlife encounters (a beautiful Eastern Box Turtle, a couple of girls discovered by flashlight), we woke to a light rain. By the time we reached the put-in to begin rafting, we had a steady, let's-get-wet, kind of rain. Without hesitation or any sign of dampened spirits, the girls were soon suited up in blue spray jackets (for a little added warmth), PFDs, helmets and paddles, and ready to go. It's hard to hold back an excited group of girls, and this was no exception. Even before the first rapid, boats were singing, cheering, bouncing around in the rafts, and doing "high fives" with their paddles. Rain or no rain, perfect conditions or not, these girls were having big fun.
At camp, lunch was an elaborate taco fiesta, complete with Eulogia's homemade guacamole to top ground beef, black beans, diced tomatoes, Mexican rice, cheese and salsa. Each table/cabin had a plate of crunchy and soft taco shells, and an unlimited supply in the kitchen for seconds. There was a little action over at the peanut butter and jelly station, but not much. Oh, and the muffins today were another of Katie's creative combination recipes: Krispy Kreme, Applejack Muffins. Yep, they had chopped doughnuts in the batter and Applejack cereal blended in "for color and a little crunch," as she put it. And for dessert tonight, Katie surprised everyone with homemade cinnamon rolls that she baked with just the right amount of sugar rolled up in a thin dough, sliced, and lightly glazed. We had no trouble gobbling those right up!
After dinner, a group of counselors presented a new, action-packed Twilight activity called "Gold Rush." Working in cabin groups, the girls learned that hidden around the camp were "golden nuggets" (wiffle balls painted gold, actually) and that they were to find as many as they could, with the cabin gathering the most winning a special treat (spending rest hour by the lake, for example). They also stationed "Bandits" around the camp who could steal a cabin's gold if the girls couldn't sing a certain RBC song or answer a trivia question correctly. This was a high-energy event with the campers looking high and low all over the camp. In the end, we awarded several prizes to each age group. It was an evening spent enjoying the wooded setting of camp, the cool, fresh mountain air, and the company of friends playing a silly game… Exactly the kind of evening we love around here.
Filed Under: Girls Camps
Tagged: benefits of camp, Camp food, girls games, rafting, rainy days, special events, twilight
# Load helpers and parse the six positional arguments.
source "$POWERTRAIN_DIR/var/ARGS.sh"
enforce_args_length 6

VERSION_SCRIPT=${ARGS[4]}
PT_CONTEXT=${ARGS[5]}

source "$POWERTRAIN_DIR/var/IMAGE.sh" ${ARGS[0]} ${ARGS[1]}
source "$POWERTRAIN_DIR/var/REGISTRY.sh" ${ARGS[2]}
source "$POWERTRAIN_DIR/var/DEFAULT.sh" "DOCKERFILE" ${ARGS[3]}
echo "Building \"${REGISTRY}${IMAGE}\"..."

# Pass -f only when a Dockerfile path was supplied; the flag variable is
# left unquoted below so "-f <path>" splits into two separate arguments.
DOCKERFILE_FLAG=""
if [ -n "$DOCKERFILE" ]; then
  DOCKERFILE_FLAG="-f $DOCKERFILE"
fi

docker build $DOCKERFILE_FLAG -t "${REGISTRY}${IMAGE}" "$PT_CONTEXT"
Q: Is there a way to use a variable that is inside the for? For example: I'm working with heredoc, and whenever I need a for loop I have to close the heredoc, write the for, and then open the mentioned tag again.
Is there some way to read a variable inside the for that will return all the results? E.g.:
$telefones = 3;
for($i = 0; $i < $telefones; $i++) {
$numero_tel = $buscar_telefones[$i]['numero'];
$todosNumeros = "telefone: $numero_tel";
}
echo <<< EOT
$todosNumeros
EOT;
PRINT : telefone: 9999-9999 telefone: 9999-7777 telefone: 9999-8888
Thank you very much!
A: Just declare the variable beforehand, outside the for().
There are a few more changes that need to be made, as shown below:
$todosNumeros = "";
$telefones = 3;
for($i = 0; $i < $telefones; $i++) {
$numero_tel = $buscar_telefones[$i]['numero'];
$todosNumeros .= "telefone: " . $numero_tel . " ";
}
echo <<< EOT
$todosNumeros
EOT;
A: You can perfectly well use, outside the for, any variable created within the for() context, and it is not mandatory to create it beforehand;
$telefones = 3;
for($i = 0; $i < $telefones; $i++) {
$numero_tel = $buscar_telefones[$i]['numero'];
$todosNumeros .= "telefone: $numero_tel ";
}
echo $todosNumeros;
\def\scs#1{\section{\sc #1}}
\def\scss#1{\subsection{\sc #1}}
\def\scsss#1{\subsubsection{\sc #1}}
\newcommand{\bin}[2]{{#1 \choose #2}}
\newcommand{\comp}[2]{\phantom{\alpha}^{(#1)}\hspace{-19pt}\alpha_{\phantom{(1)}#2}}
\newcommand{\compt}[2]{\phantom{\alpha}^{(#1)}\hspace{-19pt}\widetilde{\alpha}_{\phantom{(1)}#2}}
\thispagestyle{empty}
\begin{document}
\begin{titlepage}
\begin{center}
\vskip 0.2cm
\vskip 0.2cm
{\Large\sc Integrable Scalar Cosmologies }\vskip 4pt
{\sc II. Can they fit into Gauged Extended Supergravity or be encoded in $\mathcal{N}$=1 superpotentials?}\\[1cm]
{\sc P.~Fr\'e${}^{\; a}$\footnote{Prof. Fr\'e is presently fulfilling the duties of Scientific Counselor of the Italian Embassy in the Russian Federation, Denezhnij pereulok, 5, 121002 Moscow, Russia.}, A.S.~Sorin$^{\; b}$ and M. Trigiante$^{\; c}$}\\[10pt]
{${}^a$\sl\small Dipartimento di Fisica, Universit\'a di Torino\\INFN -- Sezione di Torino \\
via P. Giuria 1, \ 10125 Torino \ ITALY \\}\emph{e-mail:} \quad {\small {\tt
fre@to.infn.it}}\\
\vspace{5pt}
{{\em $^{b}$\sl\small Bogoliubov Laboratory of Theoretical Physics}}\\
{{\em {\tt and} Veksler and Baldin Laboratory of High Energy Physics,}}\\
{{\em Joint Institute for Nuclear Research,}}\\
{\em 141980 Dubna, Moscow Region, Russia}~\quad\\
\emph{e-mail:}\quad {\small {\tt sorin@theor.jinr.ru}}
\quad \\
\vspace{5pt}
{{\em $^c$\sl\small Dipartimento di Fisica Politecnico di Torino,}}\\
{\em C.so Duca degli Abruzzi, 24, I-10129 Torino, Italy}~\quad\\
\emph{e-mail:}\quad {\small {\tt mario.trigiante@gmail.com}}
\quad \vspace{8pt}
\vspace{15pt}
\begin{abstract}
The question whether the integrable one-field cosmologies classified in a previous paper by Fr\'e, Sagnotti and Sorin can be embedded as consistent one-field truncations into Extended Gauged Supergravity or in $\mathcal{N}=1$ supergravity gauged by a superpotential without the use of $D$-terms is addressed in this paper. The answer is that such an embedding is very difficult and rare but not impossible. Indeed we were able to find two examples of integrable models embedded in Supergravity in this way. Both examples are fitted into $\mathcal{N}=1$ Supergravity by means of a very specific and interesting choice of the superpotential $W(z)$. The question whether there are examples of such an embedding in extended Gauged Supergravity remains open. In the present paper, relying on the embedding tensor formalism, we classified all gaugings of the $\mathcal{N}=2$ STU model, confirming, in the absence of hypermultiplets, the uniqueness of the stable de Sitter vacuum found several years ago by Fr\'e, Trigiante and Van Proeyen, and excluding the embedding of any integrable cosmological model.
A detailed analysis of the space of exact solutions of the first Supergravity--embedded integrable cosmological model revealed several new features worth in-depth consideration. When the scalar potential has an extremum at a negative value, the universe necessarily collapses into a Big Crunch notwithstanding its spatial flatness. The causal structure of these universes is quite different from that of the closed, positively curved universe: indeed in this case the particle and event horizons do not coincide and develop complicated patterns. The cosmological consequences of this unexpected mechanism deserve careful consideration.
\end{abstract}
\end{center}
\end{titlepage}
\tableofcontents
\noindent {}
\newpage
\section{\sc Introduction}
In a recent paper \cite{primopapero} some of us have addressed the question of classifying integrable one-field cosmological models based on a slightly generalized ansatz for the spatially flat metric,
\begin{equation}\label{piatttosa}
ds^2 \ = \ - \ e^{\,2\,{\cal B}(t)} \, \mathrm{d}t^2 \ + \ a^2(t) \ \mathrm{d}\mathbf{x}\cdot \mathrm{d}\mathbf{x}
\ ,
\end{equation}
and on a suitable choice of a potential $V(\phi)$ for the unique scalar field $\phi$, whose kinetic term is supposed to be canonical:
\begin{equation}\label{kin}
\mathcal{L}_{kin}(\phi) \, = \, \frac{1}{2} \partial_\mu\phi \, \partial^\mu \phi \, \sqrt{-g}\, .
\end{equation}
The suitable potential functions $V(\phi)$ that lead to exactly integrable Maxwell Einstein field equations were searched for within the family of linear combinations of exponential functions $\exp \beta \phi$, or rational functions thereof. The motivations for such a choice were provided by both String Theory and Supergravity arguments, and a rather remarkable bestiary of exact cosmological solutions was uncovered, endowed with quite interesting mathematical properties. Some of these solutions also have some appeal as candidate models of the inflationary scenario, capable of explaining the structure of the primordial power spectrum.
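To fix ideas with a purely illustrative example (the notation here is ours, and not the systematic classification of \cite{primopapero}), a typical candidate potential is a finite combination of exponentials,
\begin{equation}
V(\phi) \, = \, \sum_{i} \, c_i \, e^{\,\beta_i \, \phi} \, ,
\end{equation}
with real constants $c_i$ and $\beta_i$, integrability emerging only on special loci in the space of the coefficients $c_i$ and of the exponents $\beta_i$.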
\par
In \cite{primopapero} the classical Friedman equations
\begin{eqnarray}
&& H^2 \ = \ \frac{1}{3} \ \dot{\phi}^2 \, + \, \frac{2}{3} \ V(\phi) \ , \nonumber\\
&& \dot{H} \ = \ - \, \dot{\phi}^2 \ , \nonumber\\
&& \ddot{\phi} \,+ \, 3 \, H \, \dot{\phi} \, + \, V^{\,\prime} \ = \ 0 \ , \label{fridmano}
\end{eqnarray}
where
\begin{equation}
a(t) \ = \ e^{\, A(t)} \ , \qquad H(t) \, \equiv \, \frac{\dot{a}(t)}{a(t)} \ = \ \dot{A}(t) \,
\end{equation}
are respectively the scale factor and the Hubble function, were revisited in the more general gauge with ${\cal B}\ne 0$ which allows for the construction of exact solutions, whenever the effective two dimensional dynamical system lying behind
eq.s (\ref{fridmano}) can be mapped, by means of a suitable canonical transformation, into an integrable dynamical model endowed with two Hamiltonians in involution. Such a procedure produced the bestiary constructed and analyzed in \cite{primopapero}.
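Schematically, and in our own notation, the integrability invoked here is integrability in the Liouville sense: the effective two-dimensional system admits two functionally independent conserved Hamiltonians $\mathcal{H}_1$, $\mathcal{H}_2$ whose Poisson bracket vanishes,
\begin{equation}
\left\{ \mathcal{H}_1 \, , \, \mathcal{H}_2 \right\} \, \equiv \, \sum_{i=1}^{2} \left( \frac{\partial \mathcal{H}_1}{\partial q^i} \, \frac{\partial \mathcal{H}_2}{\partial p_i} \, - \, \frac{\partial \mathcal{H}_1}{\partial p_i} \, \frac{\partial \mathcal{H}_2}{\partial q^i} \right) \, = \, 0 \, ,
\end{equation}
where $(q^i, p_i)$ are canonical coordinates and momenta for the two degrees of freedom.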
\par
After the change of perspective produced by the recent series of papers \cite{johndimitri},\cite{Ketov:2010qz},\cite{Ketov:2012jt},\cite{Kallosh:2013hoa},\cite{Kallosh:2013lkr},\cite{Farakos:2013cqa},
\cite{minimalsergioKLP},\cite{primosashapietro},\cite{Ferrara:2013wka},\cite{Ferrara:2013kca},\cite{Ferrara:2013pla} and in particular after
\cite{minimalsergioKLP},\cite{primosashapietro}, we know that all positive definite members of the above mentioned bestiary can be embedded into $\mathcal{N}=1$ supergravity as $D$-terms produced by the gauging of an axial symmetry, provided the K\"ahler manifold to which we assign the Wess-Zumino multiplet of the inflaton is consistent with the chosen potential $V(\phi)$, namely it has an axially symmetric K\"ahler potential defined in a precise way by $V(\phi)$. In \cite{secondosashapietro}, which appears at the same time as the present paper, two of us have analysed the mathematical algorithm lying behind this embedding mechanism, which we have named the $D$-map. In the same paper a possible path toward the microscopic interpretation of the peculiar axially symmetric K\"ahler manifolds required by the $D$-type supergravity embedding of the integrable potentials is proposed and discussed. Such a microscopic interpretation is obligatory in order to give a sound physical meaning to the supergravity embedding.
\par
The main theme that we are going to address in the present paper is instead the following: can integrable cosmologies be embedded into gauged extended supergravity, or into $\mathcal{N}=1$ supergravity gauged by the F-terms produced by the choice of some suitable superpotential $W(z)$? In the present paper the choice of the K\"ahler geometry for the inflaton will not depend on the potential $V(\phi)$. The inflaton Wess-Zumino multiplet will always be assigned to a constant curvature K\"ahler manifold, as is the case in compactifications on tori, orbifolds or orientifolds.
\par
Having clarified this fundamental distinction between the complementary approaches of the present paper and of the parallel paper \cite{secondosashapietro}, we continue our discussion of the Friedman system (\ref{fridmano}).
Referring to the classical cosmic-time formulation (\ref{fridmano}) of the Friedman equations and to the very instructive hydrodynamical picture, we recall that the energy density and the pressure of the fluid describing the scalar matter can be identified with the two combinations
\begin{eqnarray}
&& \rho \ = \ \frac{1}{4} \ \dot{\phi}^2 \, + \, \frac{1}{2}\ V(\phi)\ , \nonumber\\
&& p \ = \ \frac{1}{4} \ \dot{\phi}^2 \, - \, \frac{1}{2} \ V(\phi)\ ,\label{patatefritte}
\end{eqnarray}
since, in this fashion, the first of eqs.~\eqref{fridmano} translates into the familiar link between the Hubble constant and the energy density of the Universe,
\begin{equation}\label{gordilatinus}
H^2 \, = \, \frac{4}{3} \ \rho \ .
\end{equation}
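As a quick consistency check on this hydrodynamical dictionary, the continuity equation $\dot{\rho} + 3H(\rho+p) = 0$ is proportional to the Klein--Gordon equation, the third of eqs.~(\ref{fridmano}). A minimal sympy sketch (the symbol names are ours, purely illustrative), with the chain rule $\dot V = V'\dot\phi$ supplied by hand:

```python
import sympy as sp

# placeholders for H, phi-dot, phi-double-dot, V(phi) and V'(phi)
H, pd, pdd, V, Vp = sp.symbols('H phidot phiddot V Vprime')

# energy density and pressure of eq. (patatefritte)
rho = sp.Rational(1, 4)*pd**2 + sp.Rational(1, 2)*V
p   = sp.Rational(1, 4)*pd**2 - sp.Rational(1, 2)*V

# d(rho)/dt by the chain rule, with dV/dt = V'(phi) * phidot
rho_dot = sp.Rational(1, 2)*pd*pdd + sp.Rational(1, 2)*Vp*pd
continuity = rho_dot + 3*H*(rho + p)

# Klein-Gordon equation, third of eqs. (fridmano)
kg = pdd + 3*H*pd + Vp

# the continuity law is exactly (phidot/2) times the Klein-Gordon equation
assert sp.expand(continuity - sp.Rational(1, 2)*pd*kg) == 0
```

so that energy conservation holds automatically on the scalar field equation of motion.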
A standard result in General Relativity (see for instance \cite{pietroGR}) is that for a fluid whose equation of state is
\begin{equation}\label{equatastata}
p\, = \, w \, \rho \qquad \quad w\, \in \,\mathbb{R}
\end{equation}
the relation between the energy density and the scale factor takes the form
\begin{equation}\label{forense2}
\frac{\rho}{\rho_0} \, = \, \left(\frac{a_0}{a} \right)^{3(1+w)} \ ,
\end{equation}
where $\rho_0$ and $a_0$ are their values at some reference time $t_0$.
Combining eq.~(\ref{forense2}) with the first of eqs.~(\ref{fridmano}) one can then deduce that
\begin{equation}\label{andamentus}
a(t) \, \sim \, \left(t-t_i\right)^{\frac{2}{3 (w+1)}} \ ,
\end{equation}
where $t_i$ is an initial cosmic time. All values $-1\leq w \leq 1$ can be encompassed by eqs.~\eqref{patatefritte}, including the two particularly important cases of a dust--filled Universe, for which $w=0$ and $a(t) \, \sim \, \left(t-t_i\right)^{\frac{2}{3}}$, and of a radiation--filled Universe, for which $w=\frac{1}{3}$ and $a(t) \, \sim \, \left(t-t_i\right)^{\frac{1}{2}}$. Moreover, when the potential energy $V(\phi)$ becomes negligible with respect to the kinetic energy in eqs.~\eqref{patatefritte}, $w \approx 1$. On the other hand, when the potential energy $V(\phi)$ dominates $w\approx-1$, and eq.~(\ref{forense2}) implies that the energy density is approximately constant (vacuum energy) $\rho \, = \, \rho_0$. The behavior of the scale factor is then exponential, since the Hubble function is also a constant $H_0$ on account of eq.~\eqref{gordilatinus}, and therefore
\begin{equation}
a(t) \, \sim \, \exp\left [ H_0 \, t \right ] \ , \qquad H_0 \, = \, \sqrt{\frac{4}{3}\ \rho_0} \ .
\end{equation}
The actual solutions of the bestiary described in \cite{primopapero} correspond to complicated equations of state whose index $w$ varies in time. Nonetheless they are qualitatively akin, at different epochs, to these simple types of behavior.
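The power-law behavior (\ref{andamentus}) can be checked directly: with $a \sim t^{2/(3(w+1))}$ and $\rho \propto a^{-3(1+w)}$, both $H^2$ and $\frac{4}{3}\rho$ scale as $t^{-2}$. A small sympy sketch (illustrative symbols, with $t_i=0$ and $a_0=1$):

```python
import sympy as sp

t, w, rho0 = sp.symbols('t w rho0', positive=True)

# power-law ansatz of eq. (andamentus), with t_i = 0 for simplicity
a = t**(sp.Rational(2, 3)/(w + 1))
H = sp.diff(a, t)/a                   # Hubble function H = a'/a
rho = rho0*a**(-3*(1 + w))            # eq. (forense2), with a0 = 1

# for the ansatz to solve H^2 = (4/3) rho, the two sides must have the
# same t-dependence: their ratio is t-independent
ratio = sp.simplify(H**2/rho)
assert t not in ratio.free_symbols

# dust (w = 0) and radiation (w = 1/3) give the exponents 2/3 and 1/2
exponent = sp.Rational(2, 3)/(w + 1)
assert exponent.subs(w, 0) == sp.Rational(2, 3)
assert exponent.subs(w, sp.Rational(1, 3)) == sp.Rational(1, 2)
```

Fixing the overall normalization $\rho_0$ then makes the ansatz an exact solution.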
\par
As we just stressed, the next question that constitutes the main issue of the present paper is whether the integrable potentials classified in \cite{primopapero} play a role in consistent one--field truncations of \emph{four--dimensional} gauged Supergravity.
A striking and fascinating feature of Supergravity is in fact that its scalar potentials are not completely free. Rather, they emerge from a well defined gauging procedure that becomes more and more restrictive as the number $\mathcal{N}$ of supercharges increases, so that the link between the integrable cosmologies of \cite{primopapero} and this structure is clearly of interest.
\par
The first encouraging observation was already mentioned: in all integrable models found in \cite{primopapero} the potential ${V}(\phi)$ is a polynomial or rational function of exponentials $\exp[ \beta \, \phi]$ of a field $\phi$ whose kinetic term is canonical. If we discard the rational cases and retain only the polynomial ones, which are the majority, this feature naturally connects such cosmological models to Gauged Supergravity with scalar fields belonging to \emph{non-compact, symmetric coset manifolds} $\mathrm{G/H}$. This wide class encompasses not only all $\mathcal{N}>2$ theories, but also some $\mathcal{N} \le 2$ models that are frequently considered in connection with Cosmology, Black Holes, Compactifications and other issues.
Since the coset manifolds $\mathrm{G/H}$ relevant to supergravity are characterized by a numerator group $\mathrm{G}$ that is a non-compact semi-simple group, in these models one can always resort to a \textit{solvable parameterization} of the scalar manifold \cite{SUGRA_solvable}, so that the scalar fields fall into two classes:
\begin{enumerate}
\item The \textit{Cartan fields} $\mathfrak{h}^i$ associated with the Cartan generators of the Lie algebra $\mathbb{G}$, whose number equals the rank $r$ of $\mathrm{G/H}$. For instance, in models associated with toroidal or orbifold compactifications, fields of this type are generically interpreted as radii of the underlying multi--tori.
\item The \textit{axion fields} $b^I$ associated with the roots of the Lie algebra $\mathbb{G}$.
\end{enumerate}
The kinetic terms of Cartan scalars have the canonical form
\begin{equation}
\sum_{i=1}^{r}\frac{\alpha_i^2}{2}\ \partial_\mu \mathfrak{h}^i \, \partial^\mu \mathfrak{h}^i \ ,
\end{equation}
up to constant coefficients, while for the axion scalars entering solvable coset representatives, the $\alpha_i^2$ factors give way to exponential functions $\exp[ \beta_i \, \mathfrak{h}^i]$ of the Cartan fields. The scalar potentials of gauged Supergravity are polynomial functions of the coset representatives, so that after the truncation to the Cartan sector, obtained by setting the axions to constant values, one is naturally led to combinations of exponentials of the type encountered in \cite{primopapero}. Yet the devil lies in the details: the integrable potentials do result from exponential functions $\exp[ \beta \, \mathfrak{h}] $, but with rigidly fixed ratios between the $\beta_i$ entering the exponents and the $\alpha_i$ entering the kinetic terms. The candidate potentials are displayed in tables \ref{tab:families} and \ref{Sporadic}, following the notations and the nomenclature of \cite{primopapero}.
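To make the role of these ratios concrete, consider a toy one-field truncation (purely illustrative, not a specific gauging) with kinetic term $\frac{\alpha^2}{2}\,\dot{\mathfrak{h}}^2$ and potential $C\, e^{\beta\,\mathfrak{h}}$: the canonically normalized field is $\varphi = \alpha\,\mathfrak{h}$, and the canonical exponent is the quotient $\beta/\alpha$, which integrability constrains to specific rational values. A sympy sketch:

```python
import sympy as sp

hh, varphi, alpha, beta, C = sp.symbols('h varphi alpha beta C', positive=True)

# toy truncation: kinetic term (alpha^2/2) h'^2, potential C exp(beta h)
V = C*sp.exp(beta*hh)

# canonical normalization varphi = alpha*h brings the kinetic term to
# (1/2) varphi'^2 and fixes the canonical exponent to the quotient beta/alpha
V_canonical = V.subs(hh, varphi/alpha)
assert sp.simplify(V_canonical - C*sp.exp((beta/alpha)*varphi)) == 0

# matching, e.g., the exponent 2 of family I_3 imposes the rigid
# condition beta = 2*alpha on the supergravity data
assert sp.Eq(beta/alpha, 2).subs({beta: 2*alpha}) == sp.true
```

It is precisely these quotients, not the $\beta_i$ alone, that must reproduce the rational exponents of the integrable families.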
\begin{table}[ht!]
\centering
\begin{tabular}{|lc|}
\hline
\null & Potential function \\
\hline
\null&\null\\
$I_1$ & $\! C_{11} \, e^{\,\varphi} \, + \, 2\, C_{12} \, + \, C_{22} \, e^{\, - \varphi}$ \\
\null&\null\\
$I_2$ & $\! C_1 \, e^{\,2\,\gamma \,\varphi}\, +\, C_2e^{\,(\gamma+1)\, \varphi} $ \\
\null&\null\\
$I_3$ & $\! C_1 \, e^{\, 2\, \varphi} \ + \ C_2$ \\
\null&\null\\
$I_7$ & $\! C_1 \, \Big(\cosh\,\gamma\,\varphi \Big)^{\frac{2}{\gamma} \, - \, 2}\, + \, C_2 \Big( \sinh\,\gamma\,\varphi \Big)^{\frac{2}{\gamma} \, - \, 2}$ \\
\null&\null\\
$I_8$ & $\! C_1 \left(\cosh [2 \, \gamma \, \varphi] \right)^{\frac {1}{\gamma} -1}\,\cos\left[\left(\frac {1}{\gamma} -1\right)\, \arccos\left(\tanh[2\,\gamma\, \varphi]\,+\,C_2\right)\right]$ \\
\null&\null\\
$I_9$ & $\! C_1 \ e^{2\,\gamma\,\varphi} \ + \ C_2 \
e^{\frac{2}{\gamma}\,\varphi} $ \\
\null&\null\\
\hline
\end{tabular}
\caption{The families of integrable potentials classified in \cite{primopapero} (and further extended in \cite{secondosashapietro}) that, being pure linear combinations of exponentials, might have a chance to be fitted into Gauged Supergravity are those corresponding to the numbers $I_1$, $I_2$, $I_3$, $I_7$, $I_8$ (if $\gamma = \frac{1}{n}$ with $n\in \mathbb{Z}$) and $I_9$. In all cases the $C_i$ are real parameters and $\gamma \in \mathbb{Q}$ is a rational number.}
\label{tab:families}
\end{table}
As a result, the possible role of integrable potentials in gauged supergravity theories is not evident a priori, and the required ratios are actually quite difficult to obtain. Notwithstanding these difficulties, we were able to identify a pair of examples, showing that, although rare, supergravity integrable cosmological models based on $\mathrm{G/H}$ scalar manifolds\footnote{The main consequence of the $D$-embedding of integrable potentials discussed in the parallel paper \cite{secondosashapietro} is that the K\"ahler manifold hosting the inflaton is not a constant curvature coset manifold $\mathrm{G/H}$.} do exist and might provide a very useful testing ground where exact calculations can be performed \textit{ab initio} to the very end.
\begin{table}[ht!]
\centering
\begin{tabular}{|lc|}
\hline
\null & \null \\
\null & Sporadic Integrable Potentials \\
\null & \null \\
\null & $\begin{array}{lcr}\mathcal{V}_{Ia}(\varphi)
& = & \frac{\lambda}{4} \left[(a+b)
\cosh\left(\frac{6}{5}\varphi\right)+(3 a-b)
\cosh\left(\frac{2}{5}\varphi\right)\right] \end{array}$ \\
\null & \null \\
\null & $\begin{array}{lcr}\mathcal{V}_{Ib}(\varphi) & = & \frac{\lambda}{4} \left[(a+b)
\sinh\left(\frac{6}{5}\varphi\right)-(3 a-b)
\sinh\left(\frac{2}{5}\varphi\right)\right] \end{array} $\\
\null & \null \\
where & $ \left\{a,b\right\} \, = \, \left\{
\begin{array}{cc}
1 & -3 \\
1 & -\frac{1}{2} \\
1 & -\frac{3}{16}
\end{array}
\right\} $\\
\null & \null \\
\hline
\null & \null \\
\null & $\begin{array}{lcr}
\mathcal{V}_{II}(\varphi)
& = & \frac{\lambda}{8} \left[3 a+3 b- c+4( a- b)
\cosh\left(\frac{2}{3}\varphi \right)+(a+b+c) \cosh\left(\frac{4}{3}\varphi \right)\right]\nonumber\ ,
\end{array}$\\
\null & \null \\
where & $\left\{a,b,c\right\} \, = \, \left\{
\begin{array}{ccc}
1 & 1 & -2
\\
1 & 1 & -6
\\
1 & 8 & -6
\\
1 & 16 & -12
\\
1 & \frac{1}{8} &
-\frac{3}{4} \\
1 & \frac{1}{16} &
-\frac{3}{4}
\end{array}
\right\} $\\
\null & \null \\
\hline
\null & \null \\
\null & $\begin{array}{lcr}
\mathcal{V}_{IIIa}(\varphi) & = & \frac{\lambda}{16} \left[\left(1-\frac{1}{3
\sqrt{3}}\right) e^{-6 \varphi
/5}+\left(7+\frac{1}{\sqrt{3}}\right)
e^{-2 \varphi
/5} \right. \\
&& \left. +\left(7-\frac{1}{\sqrt{3}}\right)
e^{2 \varphi /5}+\left(1+\frac{1}{3
\sqrt{3}}\right) e^{6 \varphi
/5}\right]\ .
\end{array}$\\
\null&\null\\
\hline
\null & \null \\
\null &$\begin{array}{lcr}
\mathcal{V}_{IIIb}(\varphi) &=& \frac{\lambda}{16} \left[\left(2-18
\sqrt{3}\right) e^{-6 \varphi
/5}+\left(6+30 \sqrt{3}\right) e^{-2
\varphi /5}\right.\\
&&\left. +\left(6-30
\sqrt{3}\right) e^{2 \varphi
/5}+\left(2+18 \sqrt{3}\right) e^{6
\varphi /5}\right]
\end{array}
$ \\
\null&\null\\
\hline
\end{tabular}
\caption{In this table of the sporadic integrable potentials classified in \cite{primopapero} we retain only those that, being pure linear combinations of exponentials, have an a priori possibility of being realized in some truncation of Gauged Supergravity models.}
\label{Sporadic}
\end{table}
\section{\sc The set up for comparison with Supergravity}\label{sec:supergravity}
In this paper we focus on $D=4$ supergravity models.
In order to compare the effective dynamical model considered in \cite{primopapero} with the possible one-field truncations of supergravity,
it is convenient to adopt a slightly different starting point, which touches upon some fundamental features of all supersymmetric extensions of gravity. As we have already mentioned, differently from non-supersymmetric theories, where the kinetic and potential terms of the scalar fields are uncorrelated and adjustable at will, the fascination of \textit{sugras} lies precisely in the close and mandatory relation that exists here between these two terms. Indeed, the potential is created by the gauging procedure, and the explicit formulae for it always involve the metric of the target manifold, which in turn determines the scalar field kinetic terms. Thus, in one-field truncations, the form of the kinetic term cannot be normalized at will, but comes out differently depending on the considered model and on the chosen truncation. A sufficiently ductile Lagrangian that encodes the various sugra truncations discussed in this paper is the following one:
\begin{equation}\label{unopratino}
{\cal L}_{eff} \, = \, \mbox{const} \, \times \, e^{\,{3 \, A} \ - \ {\cal B} } \ \left[ \ - \, \frac{3}{2} \dot{ A}^{\,2} \ + \ \frac{q}{2} \, \dot{\mathfrak{h}}^2 \ - \ e^{\,2\,{\cal B}} \ {{V}}(\mathfrak{h}) \right]
\end{equation}
The field $\mathfrak{h}$ is the residual dilaton field after all the other dilatonic and axionic fields have been fixed to their stationary values, and $q$ is a parameter, usually an integer, that depends both on the chosen supergravity model and on the chosen truncation. The correspondence with the setup of \cite{primopapero} is simple: $\phi=\sqrt{q} \, \mathfrak{h}$.
Hence altogether the transformation formulae that correlate the general discussion of this paper with the bestiary of supergravity potentials, found in \cite{primopapero} and displayed in tables \ref{tab:families} and \ref{Sporadic} are the following ones:
\begin{eqnarray}
\label{babushka}
\dot{\cal A}(t) &=& 3 \, H(t) \, = \, 3 \, \frac{\mathrm{d}}{\mathrm{d}t} \, \log [a(t)]\nonumber\\
&\updownarrow & \nonumber\\
{\cal A}(t) & = & 3 A(t) \nonumber\\
{\cal B}(t) & = & {\cal B}(t)\nonumber\\
\varphi &=& \sqrt{3 q} \, \mathfrak{h} \nonumber\\
\mathcal{V}(\varphi) &=& 3 \, V(\mathfrak{h}) \, = \, 3 \,V(\frac{\varphi}{\sqrt{3q}})
\end{eqnarray}
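As an illustration of this dictionary, consider (anticipating the $\mathrm{STU}$ example discussed below) a truncation with $q=1$ and $V(\mathfrak{h}) = \mu^2 \cosh \mathfrak{h}$: the map (\ref{babushka}) produces $\mathcal{V}(\varphi) = 3\,\mu^2\cosh\left(\varphi/\sqrt{3}\right)$. A minimal sympy sketch of the map:

```python
import sympy as sp

frakh, varphi, mu = sp.symbols('frakh varphi mu', positive=True)

def to_primopapero(V_of_h, q_val):
    """Map a truncated supergravity potential V(h) to the normalization of
    [primopapero], eq. (babushka): phi = sqrt(3 q) h, V(phi) = 3 V(phi/sqrt(3 q))."""
    return 3*V_of_h.subs(frakh, varphi/sp.sqrt(3*q_val))

# example: the cosh potential of a q = 1 truncation
V = mu**2*sp.cosh(frakh)
Vcal = to_primopapero(V, 1)
assert sp.simplify(Vcal - 3*mu**2*sp.cosh(varphi/sp.sqrt(3))) == 0
```

The same helper applies verbatim to any other truncated potential and value of $q$.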
We will consider examples of $\mathcal{N}=2$ and $\mathcal{N}=1$ models trying to spot the crucial points that make it unexpectedly difficult to fit integrable cosmological models into the well established framework of \textit{gauged supergravities}. Difficult but not impossible since we were able to identify at least one integrable $\mathcal{N}=1$ supergravity model based on the coupling of a single Wess-Zumino multiplet endowed with a very specific superpotential.
While postponing to a further paper the classification of all the gaugings of the $\mathcal{N}=2$ models based on symmetric spaces \cite{ToineCremmerOld} (see table \ref{homomodels}) and the analysis of their one-field truncation in the quest of possible matching with
\begin{table}[ht!]
\begin{center}
{\small
\begin{tabular}{||c|c||c||}
\hline
coset &coset & susy\\
D=4 & D=3 & \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}}$ & $ \frac{\mathrm{G_{2(2)}}}{\mathrm{SU(2)\times SU(2)}}$ & $\mathcal{N}=2$ \\
\null & \null & n=1 \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{Sp(6,R)}}{\mathrm{SU(3)\times U(1)}}$ & $ \frac{\mathrm{F_{4(4)}}}{\mathrm{USp(6)\times SU(2)}}$ & $\mathcal{N}=2$ \\
\null & \null & $n=6$ \\
\null & \null &\null \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{SU(3,3)}}{\mathrm{SU(3)\times SU(3) \times U(1)}}$ & $ \frac{\mathrm{E_{6(2)}}}{\mathrm{SU(6)\times SU(2)}}$ & $\mathcal{N}=2$ \\
\null & \null & $n=9$ \\
\null & \null &\null \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{SO^\star(12)}}{\mathrm{SU(6)\times U(1)}}$ & $ \frac{\mathrm{E_{7(-5)}}}{\mathrm{SO(12)\times SU(2)}}$ & $\mathcal{N}=2$ \\
\null & \null & n=15 \\
\null & \null &\null \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{E_{7(-25)}}}{\mathrm{E_{6(-78)} \times U(1)}}$ & $ \frac{\mathrm{E_{8(-24)}}}{\mathrm{E_{7(-133)}\times SU(2)}}$ & $\mathcal{N}=2$ \\
\null & \null & $n=27$ \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{SL(2,\mathbb{R})}}{\mathrm{SO(2)}}\times\frac{\mathrm{SO(2,2+p)}}{\mathrm{SO(2)\times SO(2+p)}}$ & $ \frac{\mathrm{SO(4,4+p)}}{\mathrm{SO(4)\times SO(4+p)}}$ & $\mathcal{N}=2$ \\
\null & \null & n=3+p \\
\hline
\null & \null &\null \\
$ \frac{\mathrm{SU(p+1,1)}}{\mathrm{SU(p+1)\times U(1)}}$ & $ \frac{\mathrm{SU(p+2,2)}}{\mathrm{SU(p+2)\times SU(2)}}$ & $\mathcal{N}=2$\\
\null & \null &\null \\
\hline
\end{tabular}
}
\caption{List of special K\"ahler homogeneous spaces in $D=4$ with their $D=3$ enlarged counterparts, obtained through Kaluza-Klein reduction. The number $n$ denotes the number of vector multiplets. The total number of vector fields is therefore $n_V \, =\, n+1$. \label{homomodels}}
\end{center}
\end{table}
the integrable potentials, in the present paper we will consider in some detail another possible point of view, named the \textit{minimalist approach} in the conclusions of \cite{primopapero}. Possibly no physically relevant cosmological model extracted from Gauged Supergravity is integrable, yet the solutions of its field equations might be effectively simulated, in their essential behavior, by the exact solutions of a neighboring integrable model. Relying on the classification of fixed points presented in \cite{primopapero}, we advocate that if there is a one-parameter family of potentials that includes both an integrable case and a case derived from supergravity, and if the fixed-point type is the same for the two cases, then the integrable model provides a viable substitute for the physical one, and its solutions provide good approximations of the physical ones, which are accessible only through numerical evaluation. We will illustrate this viewpoint with a detailed analysis of one particularly relevant case.
\par
The obvious limitation of this approach is the absence of an algorithm to evaluate the error that separates the unknown physical solution from its integrable model clone. Yet a posteriori numerical experiments show that this error is rather small and that all essential features of the physical solution are captured by the solutions of the appropriate integrable member of the same family.
\par
Certainly it would be very rewarding if other integrable potentials could be derived from specific truncations of specially chosen supergravity gaugings. Were such a case realized, the particular choice of parameters leading to integrability would certainly encode some profound physical significance.
\section{\sc $\mathcal{N}=2$ Models and Stable de Sitter Vacua}
\label{STUgauginghi}
An issue of high relevance for a theoretical explanation of current cosmological data is the
construction of \textit{stable de Sitter string vacua} that break all supersymmetries \cite{kklt}, a question that is actually formulated at the level of the low-energy $\mathcal{N}$-extended Supergravity. As recently reviewed in \cite{scrucca}, for $\mathcal{N} > 2$ no stable de Sitter vacua have ever been found, and they do not seem to be possible. In $\mathcal{N}=1$ Supergravity coupled only to chiral multiplets, stability criteria can be formulated in terms of sectional curvatures of the underlying K\"ahler manifold, but they are quite involved, so that their general solution has not been worked out to date.
\par
In $\mathcal{N}=2$ Supergravity stable de Sitter vacua have been obtained, until very recently, only in a unique class of models \cite{mapietoine} (later generalized in \cite{Roest:2009tt})\footnote{For a recent construction of meta-stable de Sitter vacua in abelian gaugings of $\mathcal{N}=2$ supergravity, see \cite{Catino:2013syn}.} and, as stressed there, the mechanism that generates a scalar potential with the desired properties results from three equally essential ingredients:
\begin{enumerate}
\item The gauging of a \textit{non-compact, non-abelian group}, which in the models considered there is $\mathfrak{so}(2,1)$.
\item The introduction of Fayet--Iliopoulos terms corresponding to the gauging of compact $\mathfrak{u}(1)$ factors.
\item The introduction of a Wagemans--de Roo angle that, within special K\"ahler geometry, rotates the directions associated with the non-compact gauge group with respect to those associated with the compact one.
\end{enumerate}
The class of models constructed in \cite{mapietoine} relies on the coupling of vector multiplets to Supergravity as dictated by the special K{\"a}hler manifold
\begin{eqnarray}
\mathcal{SK}_n & = & \mathcal{ST}[2,n] \, \equiv \,
\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}} \, \times \,
\frac{\mathrm{SO(2,n)}}{\mathrm{SO(2)\times SO(n)}}\ , \label{spiffero}
\end{eqnarray}
which accommodates the scalar fields and governs the entire structure of the Lagrangian. There are two interesting special cases: for $n=1$ one obtains the $\mathrm{ST}$ model, which describes two vector multiplets, while for $n=2$ one obtains the $\mathrm{STU}$-model, which constitutes the core of most supergravity theories and is thus ubiquitous in the study of string compactifications at low energies. In this case, due to accidental Lie algebra automorphisms, the scalar manifold factorizes, since
\begin{equation}\label{vorticino}
\mathcal{ST}[2,2] \, \equiv \,\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}} \, \times \,\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}} \, \times \,\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}}
\end{equation}
Starting from the Lagrangian of ungauged $\mathcal{N}=2$ Supergravity based on this special K\"ahler geometry, the scalar potential is generated gauging a subgroup $\mathrm{G_{gauge}} \subset \mathrm{SU(1,1)} \times \mathrm{SO(2,n)}$.
The three models explicitly constructed in \cite{mapietoine}, whose scalar potential admits stable de Sitter extrema, are
\begin{itemize}
\item The $\mathrm{STU}$ model with 3 vector multiplets, based on the manifold $\mathcal{ST}[2,2]$, whose vector fields, together with the graviphoton, gauge
$\mathop{\rm SO}(2,1)\times \mathop{\rm {}U}(1)$, with a Fayet--Iliopoulos term for the $\mathop{\rm {}U}(1)$ factor;
\item a model with 5 vector multiplets, based on the manifold $\mathcal{ST}[2,4]$, whose vector fields, together with the graviphoton, gauge
$\mathop{\rm SO}(2,1)\times \mathop{\rm SO}(3)$, with a Fayet--Iliopoulos term for the
$\mathop{\rm SO}(3)$ factor; and
\item the last model extended with 2 hypermultiplets, whose 8 real
scalars span the coset $\frac{\mathop{\rm SO}(4,2)}{\mathop{\rm SO}(4)\times \mathop{\rm SO}(2)}$.
\end{itemize}
The choice of the hypermultiplet sector for the third model is possible since
the coset $\frac{\mathop{\rm SO}(4,2)}{\mathop{\rm SO}(4)\times \mathop{\rm SO}(2)}$ can be viewed as a
factor in the special K{\"a}hler manifold $\mathcal{ST}[2,4]$, or alternatively as a
quaternionic-K{\"a}hler manifold by itself. The scalar potentials of the three models are qualitatively very similar, while the key ingredient behind the emergence of de Sitter extrema is the introduction of a non--trivial Wagemans--de Roo angle. For this reason we shall analyze only the first and simplest of these three models.
The explicit form of the scalar potential obtained from this gauging can be illustrated by introducing a parametrization of the scalar sector according to Special Geometry, whose main ingredients are the symplectic sections. In the notation of \cite{Andrianopoli:1997cm}, the holomorphic section reads
\begin{equation}
\Omega= \left( \begin{array}{c}
X^\Lambda \\
F_\Sigma
\end{array}\right),
\label{Omegabig}
\end{equation}
where
\begin{eqnarray}
X^\Lambda(S,y) & = & \left( \begin{array}{c}
\frac{1}{2} \, \left( 1+y^2\right) \\
\frac{1}{2} \, {\rm i} \, (1-y^2) \\
y^a
\end{array}\right) \qquad ;\quad a=1,\dots , n\,, \quad \nonumber\\
F_\Lambda(S,y) & = & \left( \begin{array}{c}
\frac{1}{2} \, S \, \left( 1+y^2\right) \\
\frac{1}{2} \, {\rm i} \, S \, (1-y^2) \\
- S \, y^a
\end{array}\right) \quad ;\quad y^2 = \sum_{a=1}^{n} (y^a)^2\ ,
\label{symsecso21}
\end{eqnarray}
The complex $y^a$ fields are
Calabi--Vesentini coordinates for the homogeneous manifold
$\frac{\mathrm{SO(2,n)}}{\mathrm{SO(2)\times SO(n)}}$, while the complex
field $S$ parameterizes the homogeneous space
$\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}}$, which is identified with the
complex upper half-plane. With these conventions, the positivity domain of our Lagrangian is
\begin{equation}
\mbox{Im} \, S \, > \, 0\,.
\label{positdom}
\end{equation}
The K{\"a}hler potential is by definition
\begin{equation}
{\cal K}\, = \, -\mbox{log}\left (- {\rm i}\langle \Omega \,
\vert \, \bar \Omega
\rangle \right )\, =\, -\mbox{log}\left [- {\rm i} \left ({\bar X}^\Lambda
F_\Lambda - {\bar F}_\Sigma X^\Sigma \right ) \right ] \ , \label{specpot}
\end{equation}
so that in this example the K{\"a}hler potential and the K{\"a}hler metric read
\begin{eqnarray}
\mathcal{K} & = & \mathcal{K}_1 + \mathcal{K}_2 \,,\nonumber\\
\mathcal{K}_1 & = & -\log \, \left[ - {\rm i} \, \left(
S-\overline{S}\right) \right],\qquad
\mathcal{K}_2 = -\log \left[ \frac{1}{2}
\, \left( 1-2\overline{y}^a \, y^a + | y^a y^a|^2
\right) \right],
\nonumber\\
g_{S\overline{S}}&= & \frac{1}{(2\mbox{Im} S)^2}\,,\qquad \,\qquad \, \quad
g_{a \bar b}= \,\frac{\partial }{\partial
y^a}\,\frac{\partial }{\partial \bar {y}^b}\, \mathcal{K}_2\,.
\label{kalermetr}
\end{eqnarray}
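The expression for $g_{S\overline{S}}$ in eq.~(\ref{kalermetr}) can be checked directly from $\mathcal{K}_1$; a short sympy sketch in the real coordinates $S = x + \mathrm{i}\, y$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)   # S = x + i*y, Im S = y > 0
S = x + sp.I*y

# K_1 = -log(-i(S - Sbar)) = -log(2y), cf. eq. (kalermetr)
K1 = -sp.log(sp.expand(-sp.I*(S - sp.conjugate(S))))

# g_{S Sbar} = d_S d_Sbar K1, with d_S = (d_x - i d_y)/2, d_Sbar = (d_x + i d_y)/2
dSbarK = (sp.diff(K1, x) + sp.I*sp.diff(K1, y))/2
g = sp.simplify((sp.diff(dSbarK, x) - sp.I*sp.diff(dSbarK, y))/2)

# matches g_{S Sbar} = 1/(2 Im S)^2
assert sp.simplify(g - 1/(4*y**2)) == 0
```

This is the standard Poincar\'e metric on the upper half-plane $\mathrm{Im}\, S > 0$.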
\subsection{\sc de Roo -- Wagemans angles}
\label{stabdesitter}
As we have stressed, in the construction of \cite{mapietoine}, the \textit{de
Roo--Wagemans angles} are essential ingredients for the existence of de Sitter extrema. They were originally introduced \cite{deRoo:1985jh,Wagemans:1990mv} in ${\cal N}=4$ supergravity with semisimple gaugings to characterize the relative embeddings of each simple factor $\mathrm{G_k}$ of the gauge group inside
$\mathrm{Sp(2({\bf n}+2),\mathbb{R})}$, performing a \textit{symplectic rotation} on the
holomorphic section of the manifold prior to gauging. Different choices
of the angles yield inequivalent gauged models with different properties. For $n=2$, with ${\rm SO(2,1)\times U(1)}$ gauging, there is just one de Roo--Wagemans angle, and the corresponding rotation matrix reads
\begin{equation}
\mathcal{R} \ = \ \left(\begin{array}{cc} A & B\cr -B & A \end{array}\right)\ ,\end{equation}
where
\begin{equation}
A \ = \ \left(\begin{array}{cc} {\mathbf 1}_{3\times 3} &0 \\
0 & \cos{(\theta)} \end{array} \right) \ , \qquad
B \ = \ \left(\begin{array}{cc} \mathbf{0}_{3\times 3} & 0\\
0 & \sin{(\theta)} \end{array} \right) \ .
\label{dRWagangles}
\end{equation}
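As a consistency check, $\mathcal{R}$ is indeed symplectic: it preserves the invariant antisymmetric form, since $A$ and $B$ are diagonal (hence commuting) and $A^2+B^2=\mathbf{1}$. A sympy sketch:

```python
import sympy as sp

th = sp.symbols('theta', real=True)

# blocks of eq. (dRWagangles)
A = sp.diag(1, 1, 1, sp.cos(th))
B = sp.diag(0, 0, 0, sp.sin(th))
R = sp.Matrix(sp.BlockMatrix([[A, B], [-B, A]]))

# invariant symplectic form ((0, 1), (-1, 0)) in 4+4 block form
I4 = sp.eye(4)
Z4 = sp.zeros(4, 4)
Omega = sp.Matrix(sp.BlockMatrix([[Z4, I4], [-I4, Z4]]))

# R^T Omega R = Omega  <=>  R is an element of Sp(8, R)
assert sp.simplify(R.T*Omega*R - Omega) == sp.zeros(8, 8)
```

so that the rotation is a legitimate symplectic change of frame.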
The symplectic section is rotated as
\begin{eqnarray}
\Omega & \rightarrow & \Omega_R \, \equiv \, \mathcal{R}\cdot \Omega\ ,
\end{eqnarray}
while the K{\"a}hler potential is clearly left invariant by the
transformation. The de Roo--Wagemans angle appears explicitly in the scalar potential, which is determined by the symplectic section $\Omega_R$ and by
\begin{equation}
V_R
\equiv \exp \left[\mathcal{ K}\right] \, \Omega_R
\end{equation}
and reads~\cite{mapietoine}
\begin{equation}
\mathcal{V}_{\mathop{\rm SO}(2,1)\times \mathop{\rm {}U}(1)}=\mathcal{V}_3+\mathcal{V}_1=
\frac{1}{2\mbox{Im} S} \,\left(e_1{}^2 |\cos \theta -S\,\sin \theta|^2 +
e_0{}^2\,\frac{{P_2^+}(y)}{{P_2^-}(y)}
\right) \ ,
\label{Potentabel}
\end{equation}
where $P_2^\pm (y) $ are polynomial functions in the Calabi--Vesentini
variables of holomorphic degree specified by their index,
\begin{equation}
P_2^\pm (y) = 1 - 2\,y_1\,\overline{y}_1\pm
2\,y_2\,{\overline{y}_2} +y^2\bar y^2\ ,
\label{polinabel}
\end{equation}
while $e_{0,1}$ are the coupling constants for the $\mathfrak{so}(2,1)$ and $\mathfrak{u}(1)$ gauge algebras.
In order to study the properties of this potential one has to perform a coordinate transformation from the Calabi--Vesentini coordinates to the standard ones that provide a \textit{solvable parametrization} of the three
Lobachevsky--Poincar\'e planes displayed in eq.~(\ref{vorticino}).
With some care such a transformation can be worked out and reads
\begin{eqnarray}
y_1 &=& -\frac{{\rm i} \left({\rm i} b_1 \left({\rm i}
b_2+e^{h_2}\right)+e^{h_1+h_2}+{\rm i} e^{h_1}
b_2-1\right)}{\left({\rm i} b_1+e^{h_1}+1\right) \left({\rm i}
b_2+e^{h_2}+1\right)}\nonumber \\
y_2 &=& \frac{{\rm i} b_1+e^{h_1}-e^{h_2}-{\rm i} b_2}{\left({\rm i}
b_1+e^{h_1}+1\right) \left({\rm i} b_2+e^{h_2}+1\right)}\nonumber \\
S &=& {\rm i} e^h \, + \, b \ . \label{transformus}
\end{eqnarray}
After this coordinate change, the complete K\"ahler potential becomes
\begin{equation}\label{nuovoK}
\mathcal{K} \, = \, -\log \left(\frac{16\, e^{h}\,
e^{h_1+h_2}}{\left(\left(1+e^{h_1}\right)^2+b_1^2\right)
\left(\left(1+e^{h_2}\right)^2+b_2^2\right)}\right)
\end{equation}
so that the K\"ahler metric is
\begin{equation}\label{rillu}
ds_{K}^2 \, = \, \frac{1}{4} \left(e^{-2 h} {db}^2+{dh}^2+e^{-2
h_1} {db}_1^2+e^{-2 h_2}
{db}_2^2+{dh}_1^2+{dh}_2^2\right) \ ,
\end{equation}
while in the new coordinates the scalar potential takes the form
\begin{eqnarray}\label{potentissimo}
V & = & \, - \, \frac{1}{8} e^{-h-h_1-h_2} \left[2 e^{h_1+h_2}
\left(-b^2+2 \sin (2 \theta ) b-e^{2
h}+\left(b^2+e^{2 h}-1\right) \cos (2 \theta
)-1\right)
e_1^2 \nonumber \right.\\
&& \left.-\left(\left(e^{h_1}+e^{h_2}\right)^2+b_1^2+b_2^
2-2 b_1 b_2\right) e_0^2\right] \ .
\end{eqnarray}
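The diagonal metric (\ref{rillu}) reflects the factorized geometry (\ref{vorticino}): each Lobachevsky--Poincar\'e plane, in the solvable coordinates $z = b + \mathrm{i}\, e^{h}$ with K\"ahler potential $-\log(2 e^{h})$ up to K\"ahler transformations, contributes a block $\frac{1}{4}\left(e^{-2h}\,db^2 + dh^2\right)$. A sympy sketch for one such factor:

```python
import sympy as sp

b, h = sp.symbols('b h', real=True)

# one Lobachevsky-Poincare factor: z = b + i e^h, K = -log(-i(z - zbar)) = -log(2 e^h)
K = -sp.log(2*sp.exp(h))

# holomorphic derivatives in the real coordinates (b, h): since Im z = e^h,
# d_z = (d_b - i e^{-h} d_h)/2 and d_zbar = (d_b + i e^{-h} d_h)/2
dzbarK = (sp.diff(K, b) + sp.I*sp.exp(-h)*sp.diff(K, h))/2
g = sp.simplify((sp.diff(dzbarK, b) - sp.I*sp.exp(-h)*sp.diff(dzbarK, h))/2)
assert sp.simplify(g - sp.exp(-2*h)/4) == 0

# line element g |dz|^2 with |dz|^2 = db^2 + e^{2h} dh^2 reproduces
# the block (1/4)(e^{-2h} db^2 + dh^2) of eq. (rillu)
db, dh = sp.symbols('db dh', real=True)
ds2 = sp.expand(g*(db**2 + sp.exp(2*h)*dh**2))
assert sp.simplify(ds2 - (sp.exp(-2*h)*db**2 + dh**2)/4) == 0
```

Summing the three blocks reproduces eq.~(\ref{rillu}).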
Let us now turn to exploring consistent truncation patterns to one--field models with standard kinetic terms for the residual scalars. To this effect, one can verify that the constant values
\begin{equation}\label{assioni}
\left \{b \, , \, b_1 \, , \, b_2 \right \} \, \Rightarrow \, \vec{b}_0 \, \equiv \, \left\{-\frac{\sin (2 \theta )}{\cos (2 \theta )-1} \, , \, \kappa \, , \, \kappa \right \}
\end{equation}
result in the vanishing of the derivatives of the potential with respect to the three axions, identically in the remaining fields, so that one can safely introduce these values (\ref{assioni}) in the potential to arrive at the reduced form
\begin{equation}\label{barpot}
\bar{V} \, = \, V|_{\vec{b}=\vec{b}_0} \, = \, \frac{1}{4} e^{-h} e_0^2+\frac{1}{8} e^{-h+h_1-h_2}
e_0^2+\frac{1}{8} e^{-h-h_1+h_2} e_0^2+\frac{1}{2}
e^h \sin ^2(\theta ) e_1^2 \ .
\end{equation}
The last step of the reduction is performed by setting the two fields $h_{1,2}$ to a common constant value:
\begin{equation}\label{fixingh12}
h_{1,2}\, = \, \ell \ .
\end{equation}
Indeed, it can be simply verified that for these values the derivatives of $\bar V$ with respect to $h_{1,2}$
vanish identically. Finally, redefining the field $h$ by means of the constant shift
\begin{equation}\label{rulito}
h \, = \, \mathfrak{h} \, + \, \log \left(\frac{\csc (\theta ) e_0}{e_1}\right)
\end{equation}
the one-field potential becomes
\begin{equation}\label{finalpotential}
V(\mathfrak{h}) \, = \, \underbrace{\,\sin(\theta )\, e_0\, e_1}_{\bar{\mu}^2} \, \cosh \, \mathfrak{h} \ .
\end{equation}
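The chain of reductions (\ref{assioni}), (\ref{fixingh12}) and (\ref{rulito}) can be verified by direct substitution into the potential (\ref{potentissimo}); the following sympy sketch (with sample numerical parameter values of our choosing) checks that the result is indeed $\sin(\theta)\, e_0\, e_1 \cosh\mathfrak{h}$:

```python
import sympy as sp

h, h1, h2, b, b1, b2 = sp.symbols('h h1 h2 b b1 b2', real=True)
th, e0, e1, frakh, ell, kappa = sp.symbols('theta e0 e1 frakh ell kappa', real=True)

# scalar potential of eq. (potentissimo)
V = -sp.Rational(1, 8)*sp.exp(-h - h1 - h2)*(
    2*sp.exp(h1 + h2)*(-b**2 + 2*sp.sin(2*th)*b - sp.exp(2*h)
                       + (b**2 + sp.exp(2*h) - 1)*sp.cos(2*th) - 1)*e1**2
    - ((sp.exp(h1) + sp.exp(h2))**2 + b1**2 + b2**2 - 2*b1*b2)*e0**2)

# axion extremum (assioni) and the common value h1 = h2 = ell (fixingh12)
Vbar = V.subs({b: -sp.sin(2*th)/(sp.cos(2*th) - 1),
               b1: kappa, b2: kappa, h1: ell, h2: ell})

# constant shift (rulito) and the expected one-field form (finalpotential)
Vshift = Vbar.subs(h, frakh + sp.log(e0/(e1*sp.sin(th))))
target = sp.sin(th)*e0*e1*sp.cosh(frakh)

# numerical check at sample values (0 < theta < pi, e0, e1 > 0)
vals = {th: 0.3, e0: 1.1, e1: 0.7, frakh: 0.4, ell: -0.2, kappa: 0.5}
assert abs(float(sp.N((Vshift - target).subs(vals)))) < 1e-9
```

The residual dependence on $\ell$ and $\kappa$ drops out, as it must for a consistent truncation.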
This information suffices to determine the corresponding dynamical system.
We start from the general form of the $\mathcal{N}=2$ supergravity action truncated to the scalar sector which is the following:
\begin{eqnarray}
\mathcal{S}^{\mathcal{N}=2} & = & \int d^4x \, \, \mathcal{L}^{\mathcal{N}=2}_{SUGRA} \nonumber\\
\mathcal{L}^{\mathcal{N}=2}_{SUGRA} & = & \sqrt{-g} \, \left[ R[g] \, + 2\, g^{SK}_{ij^\star} \, \partial_\mu z^i \, \partial^\mu {\bar z}^{j^\star} \, - 2\, V(z,{\bar z}) \,\right ] \label{n2sugra}
\end{eqnarray}
where $g^{SK}_{ij^\star}$ is the special K\"ahler metric of the target manifold and $V(z,{\bar z})$ is the potential that we have been discussing. Reduced to the residual dynamical field content, after fixing the other fields to their extremal values, the above action becomes:
\begin{eqnarray}
\mathcal{S}^{\mathcal{N}=2} & = & \int d^4x \, \sqrt{-g} \, \left( \mathcal{R}[g] \, + \, \frac{1}{2} \, \partial_\mu \mathfrak{h} \, \partial^\mu \mathfrak{h} \, - \, \mu^2 \, \cosh\left[ \mathfrak{h}\right] \, \right ) \label{n2sugrareduzia}
\end{eqnarray}
where we have redefined $\mu^2 \, = \, 2 \, {\bar{\mu}}$.
Hence the effective one-field dynamical system is described by the following Lagrangian:
\begin{equation}\label{lagruccona}
\mathcal{L}_{eff} \, = \, \exp[3 A \, - \, \mathcal{B}] \, \left(\frac{1}{2} \, \dot{\mathfrak{h}}^2 \, - \, \frac{3}{2} \, \dot{A}^2 \, - \, \exp[2 \mathcal{B}] \, \underbrace{\mu^2 \,\cosh [ \mathfrak{h} ]}_{V(\mathfrak{h})}\right )
\end{equation}
which agrees with the general form (\ref{unopratino}), introduced above.
\par
In light of this, the effective dynamical model of the gauged $\mathrm{STU}$ model would be integrable if the potential
\begin{equation}\label{signorina}
\mathcal{V}(\varphi) \, = \, 3 \, \mu^2 \, \cosh\left[\frac{1}{\sqrt{3}}\, \varphi \right ]
\end{equation}
could be identified with any of the integrable potentials listed in tables \ref{tab:families} and \ref{Sporadic}. We show below that this is not the case. Nonetheless, the results of \cite{primopapero} provide qualitative information on the behavior of the solutions of this supergravity model. As a special case, one can simply retrieve the de Sitter vacuum from this formulation in terms of a dynamical system. Choosing the gauge $\mathcal{B}=0$, the field equations associated with the Lagrangian \eqref{lagruccona} are solved by setting $\mathfrak{h}\, = \, 0$, which corresponds to the extremum of the potential, and
\begin{equation}\label{rutilant}
A(t) \, = \, H_0 \, t \quad ; \quad H_0 \, = \, \sqrt{ \frac{2}{3} \mu^2} \, = \, \sqrt{ \frac{4}{3} \sin (\theta ) e_0 e_1} \ ,
\end{equation}
which corresponds to the eternal exponential expansion of de Sitter space. This solution is an attractor for all the other solutions, as shown in \cite{primopapero}.
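That this configuration solves the field equations can be checked symbolically. The following sketch (sympy, taking $A$ to be the logarithm of the scale factor, as implied by the factor $e^{3A}$ in the Lagrangian) verifies that $\mathfrak{h}=0$, $A=H_0\,t$ with $H_0^2=\tfrac{2}{3}\mu^2$ satisfies both Euler--Lagrange equations of (\ref{lagruccona}) in the gauge $\mathcal{B}=0$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
mu, H0 = sp.symbols('mu H0', positive=True)
A = sp.Function('A')(t)
fh = sp.Function('fh')(t)

# effective Lagrangian (lagruccona) in the gauge B = 0
L = sp.exp(3*A)*(sp.Rational(1, 2)*fh.diff(t)**2
                 - sp.Rational(3, 2)*A.diff(t)**2
                 - mu**2*sp.cosh(fh))

def eom(q):
    # Euler-Lagrange expression dL/dq - d/dt (dL/dq')
    return sp.diff(L, q) - sp.diff(L.diff(q.diff(t)), t)

# candidate de Sitter solution: fh = 0 at the extremum, A = H0 * t
sol = lambda e: e.subs(fh, 0).subs(A, H0*t).doit()
eom_A = sp.simplify(sol(eom(A)).subs(H0, sp.sqrt(sp.Rational(2, 3))*mu))
eom_fh = sp.simplify(sol(eom(fh)))
print(eom_A, eom_fh)  # both vanish
```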
\par
In order to answer the question whether the Lagrangian (\ref{lagruccona}) defines an integrable system, so that its general solutions can be written down in analytic form, it is useful to reformulate our question in slightly more general terms, observing that the Lagrangian under consideration belongs to the family
\begin{equation}\label{lagrucconata}
\mathcal{L}_{cosh} \, = \, \exp[3 A \, - \, \mathcal{B}] \, \left(\frac{q}{2} \, \dot{\mathfrak{h}}^2 \, - \, \frac{3}{2} \, \dot{A}^2 \, - \, \exp[2 \mathcal{B}] \, \mu^2 \,\cosh [ p \,\mathfrak{h} ]\right )
\end{equation}
that depends on two parameters $q$ and $p$. Comparing with the list of integrable models, one can see that there are just two integrable cases, corresponding to the choices
\begin{equation}\label{integratus}
\frac{p}{\sqrt{3\, q}} \, = \, 1 \quad ; \quad \frac{p}{\sqrt{3\, q}}\, = \, \frac{2}{3}
\end{equation}
The first case $\frac{p}{\sqrt{3\, q}} \, = \, 1$, corresponding to the potential $\mathcal{V}(\varphi)\sim \cosh[\varphi]$ can be mapped into three different integrable series among those displayed in table \ref{tab:families}. The first embedding is into the series $I_1$ by choosing $C_{11}=C_{22} \ne 0$ and $C_{12}=0$. The second embedding is into series $I_2$, by choosing $\gamma \, = \,\frac{1}{2}$ and $C_1=C_2$. The third embedding is into model $I_7$, by choosing once again $\gamma \, = \,\frac{1}{2}$ and $C_1=C_2$. The second case $\frac{p}{\sqrt{3\, q}}\, = \, \frac{2}{3}$ corresponding to the potential $\mathcal{V}(\varphi)\sim \cosh[\frac{2}{3}\varphi]$ can be mapped into series $I_2$ of table \ref{tab:families}, by choosing $\gamma \, = \, - \,\frac{1}{3}$ and $C_1=C_2$. It can also be mapped into the series $I_7$ by choosing $\gamma \, = \, \frac{1}{3}$ and $C_1= - C_2$. Unfortunately, none of these solutions corresponds to the Lagrangian (\ref{lagruccona}), where
\begin{equation}\label{ginopaoli}
p \, = \, 1 \quad ; \quad q \, = \, 1
\end{equation}
so that the one--field cosmology emerging from the non--compact non--abelian $\mathfrak{so}(2,1)$ gauging of the $\mathrm{STU}$ model is indeed not integrable! This analysis emphasizes that embedding an integrable model into the gauging of an extended supergravity theory is a difficult task.
\par
In section \ref{zerlina} we will consider in more detail the $Cosh$-model defined by eq.~(\ref{lagrucconata}). There we will show that it can be reduced to a normal form depending only on one parameter that we name the index:
\begin{equation}\label{indexomega}
\omega \, = \, \frac{p}{\sqrt{q}}
\end{equation}
and we will compare its behavior for various values of the index $\omega$. The two integrable cases mentioned above correspond respectively to the following critical indices,
\begin{equation}\label{criticalindices}
\omega_c^f \, =\, \sqrt{3} \quad , \quad \omega_c^n \, = \, \frac{2}{\sqrt{3}}
\end{equation}
The first critical index has been denoted with the superscript $f$ since, in the language of \cite{primopapero} the fixed point of the corresponding dynamical system is of \textit{focus} type. Similarly, the second critical index has been given the superscript $n$ since the fixed point of the corresponding dynamical system is of the \textit{node} type.
In these two cases we are able to integrate the field equations explicitly. For the other values of $\omega$ we are confined to numerical integration. Such a numerical study reveals that, when the initial conditions are identical, the solutions of the non-integrable models have a behavior very similar to that of the exact solutions of the integrable model, as long as the type of fixed point defined by the extremum of the potential is the same. Hence the behavior of the one-field cosmology emerging from the $\mathfrak{so}(2,1)$ gauging of the $\mathrm{STU}$ model can be approximated by the exact analytic solutions of the $cosh$-model with index $\omega_c^n$.
\par
It remains a fact that the value of $\omega$ selected by the Gauged Supergravity model is $\omega = 1$ rather than the integrable one, a conclusion that will be reinforced by a study of Fayet--Iliopoulos gaugings in the $S^3$ model~\cite{FayetIlio}.
\par
Considering instead the integrable series $I_2$ of table \ref{tab:families}, we will show in section \ref{integsusymodel} that there is just one case
there that can be fitted into a Gauged Supergravity model. It corresponds to the value $\gamma\, = \, \frac{2}{3}$, which can be realized in $\mathcal{N}=1$ supergravity by an acceptable and well defined superpotential. After a wide inspection, this seems to be one of the very few integrable supersymmetric models so far available. A second one will be identified in section \ref{fluxscan}. As we shall emphasize, the superpotential underlying both instances of supersymmetric integrable models is strictly $\mathcal{N}=1$ and does not arise from a
Fayet--Iliopoulos gauging of a corresponding $\mathcal{N}=2$ model.
\subsection{\sc Behavior of the solutions in the $\mathcal{N}=2$ $STU$ model with $\mathfrak{so}(1,2)$-gauging}
Although the $\mathcal{N}=2$ model that we have been considering is not integrable, its Friedmann equations can be integrated numerically, providing a qualitative understanding of the nature of the solutions.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=40mm]{n2scalfat1.eps}
\includegraphics[height=40mm]{n2hfildo1.eps}
\fi
\end{center}
\caption{\it
The numerical solution shows that the de Sitter solution is indeed an attractor.}
\label{soluziapiat}
\end{figure}
In fig.~\ref{soluziapiat} we show the behavior of both the scale factor and the scalar field for a generic choice of regular initial conditions. The plot clearly shows that the de Sitter solution, corresponding to an indefinite exponential expansion, is an attractor, as predicted by the fixed point analysis of the differential system. Indeed the numerical integration reveals a slow-roll phase that works in reverse order with respect to the standard inflationary scenario \cite{inflation}. At first the scalar field is high up and descends rapidly, while the expansion of the Universe proceeds rather slowly; then the scalar field reaches the bottom of the potential and rolls slowly toward its minimum, while the Universe expands exponentially, becoming asymptotically de Sitter.
\subsection{\sc A More Systematic Approach: The Embedding Tensor Formalism}
In the previous section we have reviewed and analyzed, from a cosmological perspective, a special class of $\mathcal{N}=2$ models which exhibit stable de Sitter vacua. A complete analysis of one-field cosmological models emerging from $\mathcal{N}=2$ supergravities (or even $\mathcal{N}>2$ theories) is a considerably more ambitious project, which requires a systematic study of the possible gaugings of extended supergravities. A precious tool in this respect is the embedding tensor formulation of gauged extended supergravities \cite{embeddingtensor}. This approach consists in writing the gauged theory as a deformation of an ungauged one with the same field content and supersymmetry. The additional terms in the Lagrangian (minimal couplings, fermion mass terms and scalar potential) and in the supersymmetry transformation laws, which are needed in order to make the theory locally invariant with respect to the chosen gauge group $\mathcal{G}$ while keeping the original supersymmetry unbroken, are all expressed in terms of a single matrix of coupling constants (the embedding tensor), which can be described as a covariant tensor with respect to the global symmetry group ${\rm G}$ of the original ungauged model. Let us denote by $\{t_\alpha\}$ the generators of the Lie algebra $\mathfrak{g}$ of ${\rm G}$ and by $X_\Lambda$ the gauge generators, gauged by the vector fields $A^\Lambda_\mu$ of the model;
since the gauge group must be contained in ${\rm G}$, $X_\Lambda$ must be a linear combination of the $t_\alpha$:
\begin{equation}
X_\Lambda=\Theta_\Lambda{}^\alpha\,t_\alpha\,.
\end{equation}
The matrix $\Theta_\Lambda{}^\alpha$ is the embedding tensor and defines all the information about the embedding of the gauge algebra inside $\mathfrak{g}$. A formulation of the gauging which is independent of the symplectic frame of the original ungauged theory, was given in \cite{magnetic} and extends the definition of the embedding tensor by including, besides the electric components defined above, also magnetic ones:
\begin{equation}
\Theta_M{}^\alpha=\{\Theta_\Lambda{}^\alpha,\,\Theta^{\Lambda\,\alpha}\}\,.
\end{equation}
The index $M$ is now associated with the symplectic duality representation ${\bf W}$ of ${\rm G}$ in which the electric field strengths and their magnetic duals transform, so that $\Theta_M{}^\alpha$ formally belongs to the product ${\bf W}\otimes {\bf Adj}({\rm G})$, namely it is a ${\rm G}$-covariant tensor. Since all the deformations of the original ungauged action, implied by the gauging procedure, are written in terms of $\Theta_M{}^\alpha$ in a ${\rm G}$-covariant way, the gauged equations of motion and the Bianchi identities formally retain the original global ${\rm G}$-invariance, provided $\Theta_M{}^\alpha$ is transformed as well. Since, however, the action of ${\rm G}$ at the gauged level affects the coupling constants of the theory, encoded in the embedding tensor, it should be viewed as an equivalence between theories rather than a symmetry, and gauged models whose embedding tensors are related by ${\rm G}$-transformations share the same physics. Thus gauged extended supergravities obtained from the same ungauged model can be classified in universality classes defined by the orbits of the embedding tensor under the action of ${\rm G}$. Classifying such classes is a rather non-trivial task. In simple models like the STU one, this can be done thoroughly. In the following we perform this analysis and study the possible one-field cosmological models for each class, leaving its extension to more general $\mathcal{N}=2$ gauged models to a future investigation \cite{FayetIlio}.\par
To set the stage, let us consider an $\mathcal{N}=2$ theory with $n_V$ vector fields and a global symmetry group of the form:
\begin{equation}
\mathrm{G}=\mathrm{U}_{SK}\times \mathrm{G}_{QK}\,,
\end{equation}
where $\mathrm{U}_{SK},\, \mathrm{G}_{QK}$ are the isometry groups of the Special K\"ahler and Quaternionic K\"ahler manifolds (in the absence of hypermultiplets $\mathrm{G}_{QK}={\rm SO}(3)$).
Let $\mathfrak{g},\,\mathfrak{g}_{SK},\, \mathfrak{g}_{QK}$ denote the Lie algebras of $\mathrm{G},\,\mathrm{U}_{SK},\, \mathrm{G}_{QK}$ and $\{t_\alpha\},\,\{t_A\},\,\{t_a\}$, $\alpha=1,\dots, {\rm dim}(\mathrm{G}),\,A=1,\dots, {\rm dim}(\mathrm{U}_{SK}),\,a=1,\dots, {\rm dim}(\mathrm{G}_{QK})$, a set of corresponding bases.
Only the group $\mathrm{U}_{SK}$ has a symplectic duality action on the $2n_V$-dimensional vector $\mathbb{F}^M_{\mu\nu}$, $M=1,\dots, 2n_V$, consisting of the electric field strengths and their duals:
\begin{equation}\label{symplettone}
\mathbb{F}^M_{\mu\nu} \, \equiv \, \left( \begin{array}{c}
F^\Lambda_{\mu\nu} \\
G_{\Lambda\,\mu\nu}
\end{array}
\right )
\end{equation}
namely:
\begin{equation}
\forall u\in \mathrm{U}_{SK}\,\,:\,\,\,\mathbb{F}^M_{\mu\nu}\rightarrow \mathbb{F}^{'M}_{\mu\nu}=u^M{}_N\,\mathbb{F}^N_{\mu\nu}\,.
\end{equation}
We have denoted by ${\bf W}$ the corresponding $2n_V$-dimensional, symplectic representation of $\mathrm{U}_{SK}$.
\par
For the reader's convenience we summarize the index conventions in the following table:
{\small\begin{center}
\begin{tabular}{|l||c|c|c|c|}
\hline
groups and & $\mathrm{G}$ & $\mathrm{U}_{SK}$ & $\mathrm{G}_{QK}$ & $\mathbf{W}$-rep \\
represent. & \null & \null & \null & of $\mathrm{U}_{SK}$\\
\hline
Lie algebras & $\mathfrak{g}$ & $\mathfrak{g}_{SK}$ & $\mathfrak{g}_{QK}$ & $\mathbf{W}$-rep \\
\hline
action & global & on vector & on & on elect/magn. \\
\null & \null & multiplets & hypermultiplets & $\mathbb{F}^M_{\mu\nu}$ \\
\hline
generators & $t_\alpha$ & $t_A$ & $t_a$ & $t_{AM}^{\phantom{AM}N}$ \\
\hline
range & $\alpha=1,\dots, {\rm dim}(\mathrm{G})$ & $A=1,\dots, {\rm dim}\mathrm{U}_{SK}$ & $a=1,\dots, {\rm dim}\mathrm{G}_{QK}$ & $M=1,\dots, 2 n_V$\\
\null & \null & \null & \null & $\Lambda = 1,\dots, n_V$\\
\hline
\end{tabular}
\end{center}
}
\par
The embedding tensor has the general form:
\begin{equation}
\{\Theta_M{}^\alpha\}=\{\Theta_M{}^A,\,\Theta_M{}^a\}\,,
\end{equation}
and defines the embedding of the gauge algebra $\mathfrak{g}_{gauge}=\{X_M\}$ inside $\mathfrak{g}$:
\begin{equation}
\label{Xcombo}
X_M=\Theta_M{}^A\,t_A+\Theta_M{}^a\,t_a\,.
\end{equation}
In the absence of hypermultiplets, the components $\Theta_M{}^a$, $a=1,2,3$ running over the adjoint representation of the global symmetry ${\rm SO}(3)$, are the Fayet-Iliopoulos terms. The generators $t_A$ of $\mathfrak{g}_{SK}$ have a non-trivial ${\bf W}$-representation: $t_A=(t_{A M}{}^N)$ while the generators $t_a$ do not.
Thus we can define the following tensor:
\begin{equation}
\label{Xtensoro}
X_{MN}{}^P=\Theta_M{}^A\,t_{A N}{}^P\,\,;\,\,\,\,X_{MNP}=X_{MN}{}^Q\,\mathbb{C}_{QP}\,,
\end{equation}
where $\mathbb{C}$ is the $2n_V\times 2n_V$ skew-symmetric, invariant ${\rm Sp}(2n_V,\mathbb{R})$-matrix.
\par
Gauge-invariance and supersymmetry of the action impose linear and quadratic constraints on $\Theta$:
\begin{itemize}
\item The {\bf linear constraints} are:
\begin{equation}
X_{MN}{}^M=0\,\,;\,\,\,X_{(MNP)}=0\,.\label{1constr}
\end{equation}
\item The {\bf quadratic constraints} originate from the condition that $X_M$ close an algebra inside $\mathfrak{g}$ with structure constants given in terms of $X_{MN}{}^P$, and from the condition that the symplectic vectors $\Theta_M{}^\alpha$, labeled by $\alpha$, be \emph{mutually local}:
\begin{align}
[X_M,\,X_N]&=-X_{MN}{}^P\,X_P\,,\label{2constr1}\\
\mathbb{C}^{MN}\Theta_M{}^\alpha \Theta_N{}^\beta&=0\,\,\Leftrightarrow \,\,\,\Theta^{\Lambda\,[\alpha} \Theta_\Lambda{}^{\beta]}=0\label{2constr2}\,.
\end{align}
The former can be rewritten as the following set of two equations:
\begin{align}
\Theta_M{}^A\Theta_N{}^B\,f_{AB}{}^C+ \Theta_M{}^A\,t_{A N}{}^P\,\Theta_P{}^C&=0\,,\label{2constr11}\\
\Theta_M{}^a\Theta_N{}^b\,f_{ab}{}^c+ \Theta_M{}^A\,t_{A N}{}^P\,\Theta_P{}^c&=0\,,\label{2constr12}
\end{align}
where $f_{AB}{}^C,\,f_{ab}{}^c$ are the structure constants of $\mathfrak{g}_{SK}$ and $\mathfrak{g}_{QK}$, respectively.
It can be shown that eqs.~(\ref{1constr}) and (\ref{2constr11}) imply $\Theta^{\Lambda\,[A} \Theta_\Lambda{}^{B]}=0$, which is the part of (\ref{2constr2}) corresponding to $\alpha=A,\,\beta=B$.
\end{itemize}
Let us now denote by $k_A^i,\,k_A^{\bar{\imath}}$ the Killing vectors on the Special K\"ahler manifold corresponding to the isometry generator $t_A$, by $k_a^u$ the Killing vectors on the Quaternionic K\"ahler manifold corresponding to the isometry generator $t_a$, and by $\mathcal{P}_a^x$, $x=1,2,3$, the corresponding momentum maps (note that these quantities are, by definition, associated only with the geometry of the scalar manifold and thus independent of the gauging). The scalar potential can be written in the following way \cite{Andrianopoli:1997cm}:
\begin{align}
\mathcal{V}&=\mathcal{V}_{hyperino}+\mathcal{V}_{gaugino,1}+\mathcal{V}_{gaugino,2}+\mathcal{V}_{gravitino}\,,\nonumber\\
\mathcal{V}_{hyperino}&= 4\,\overline{V}^M\,V^N\,\Theta_M{}^a\Theta_N{}^b\,k_a^u\,k_b^v\,h_{uv}\,,\nonumber\\
\mathcal{V}_{gaugino,1}&= \overline{V}^M\,V^N\,\Theta_M{}^A\Theta_N{}^B\,k_A^i\,k_B^{\bar{\jmath}}\,g_{i\bar{\jmath}}\,,\nonumber\\
\mathcal{V}_{gaugino,2}&= g^{i\bar{\jmath}}\,D_iV^M\,D_{\bar{\jmath}}\overline{V}^N\,\Theta_M{}^a\Theta_N{}^b\,\mathcal{P}_a^x\,\mathcal{P}_b^x\,,\nonumber\\
\mathcal{V}_{gravitino}&=-3\,\overline{V}^M\,V^N\,\Theta_M{}^a\Theta_N{}^b\,\mathcal{P}_a^x\,\mathcal{P}_b^x\,,
\end{align}
where $V^M$ is the covariantly holomorphic symplectic section of the Special K\"ahler manifold under consideration: $(V^M)=(L^\Lambda,\,M_\Lambda)$.
\subsection{\sc Scan of the STU Model Gaugings and Their Duality Orbits}\label{gaustu}
Consider the STU model with no hypermultiplets. This corresponds to the sixth item in table \ref{homomodels} for $p=0$.
The global symmetry group is $ \mathrm{G} \, = \, {\rm SL}(2,\mathbb{R})^3\times {\rm SO}(3)$, the latter
factor being the form of $\mathrm{G}_{QK}$ in the absence of hypermultiplets. It is
relevant to our discussion only in the case in which we want to add FI terms,
i.e. when we introduce non-vanishing components $\Theta_M{}^a$, $a=1,2,3$.
\par
The symplectic
$\mathbf{W}$-representation of the electric-magnetic charges is the
$(\frac{1}{2},\frac{1}{2},\frac{1}{2})$ of $\mathrm{U}_{SK}={\rm
SL}(2,\mathbb{R})^3$. Let us use the indices $i,j,k=1,2$ to label
the fundamental representation of ${\rm SL}(2,\mathbb{R})$.
As $\mathfrak{sl}(2)$-generators in this spinor representation of $\mathfrak{so}(2,1) \sim \mathfrak{sl}(2)$, we make the following choice:
$\{s_x\}=\{\sigma_1,i\,\sigma_2,\,\sigma_3\}$, $\sigma_x$ being the
Pauli matrices. The index $M$ can be written as $M=(i_1,i_2,i_3)$
and the embedding tensor $\Theta_M{}^A$ takes the following form:
\begin{equation}
\Theta_M{}^A=\{\Theta_{(i_1,i_2,i_3)}{}^{x_1},\,\Theta_{(i_1,i_2,i_3)}{}^{x_2},\,\Theta_{(i_1,i_2,i_3)}{}^{x_3}\}\,\,\in\,\,\, \left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\times [(1,0,0)+(0,1,0)+(0,0,1)]\,,
\end{equation}
where $x_i$ run over the adjoint (vector)-representations of the three $\mathfrak{sl}(2)$ algebras.
Since:
\begin{equation}
\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\times [(1,0,0)+(0,1,0)+(0,0,1)]=3\times
\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)+\left(\frac{3}{2},\frac{1}{2},\frac{1}{2}\right)+\left(\frac{1}{2},\frac{3}{2},\frac{1}{2}\right)+\left(\frac{1}{2},\frac{1}{2},\frac{3}{2}\right)\,,
\end{equation}
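A quick dimensional bookkeeping confirms this branching. The sketch below counts states on both sides, using the fact that a spin label $(j_1,j_2,j_3)$ of $\mathfrak{sl}(2)^3$ has dimension $(2j_1+1)(2j_2+1)(2j_3+1)$:

```python
from fractions import Fraction as F

def dim(j1, j2, j3):
    # dimension of the SL(2)^3 representation with spins (j1, j2, j3)
    return int((2*j1 + 1)*(2*j2 + 1)*(2*j3 + 1))

half, threehalf = F(1, 2), F(3, 2)
# left-hand side: (1/2,1/2,1/2) x [(1,0,0) + (0,1,0) + (0,0,1)]
lhs = dim(half, half, half)*(dim(1, 0, 0) + dim(0, 1, 0) + dim(0, 0, 1))
# right-hand side: 3 x (1/2,1/2,1/2) plus the three (3/2,1/2,1/2)-type terms
rhs = (3*dim(half, half, half)
       + dim(threehalf, half, half)
       + dim(half, threehalf, half)
       + dim(half, half, threehalf))
print(lhs, rhs)  # 72 72
```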
each component of the embedding tensor can be split into its $\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ and
$\left(\frac{3}{2},\frac{1}{2},\frac{1}{2}\right)$ irreducible parts as follows:
\begin{align}
\Theta_{(i_1,i_2,i_3)}{}^{x_1}&=(s^{x_1})_{i_1}{}^j\,\xi^{(1)}_{j\,i_2\,i_3}+\Xi_{i_1,i_2,i_3}{}^{x_1}\,,\nonumber\\
\Theta_{(i_1,i_2,i_3)}{}^{x_2}&=(s^{x_2})_{i_2}{}^j\,\xi^{(2)}_{i_1\,j\,i_3}+\Xi_{i_1,i_2,i_3}{}^{x_2}\,,\nonumber\\
\Theta_{(i_1,i_2,i_3)}{}^{x_3}&=(s^{x_3})_{i_3}{}^j\,\xi^{(3)}_{i_1\,i_2\,j}+\Xi_{i_1,i_2,i_3}{}^{x_3}\,.
\end{align}
The irreducible $\left(\frac{3}{2},\frac{1}{2},\frac{1}{2}\right)$ tensors $\Xi_{i_1,i_2,i_3}{}^{x_i}$ are defined by the vanishing of the appropriate gamma-trace, namely:
\begin{equation}
\Xi_{j,i_2,i_3}{}^{x_1} (s_{x_1})_{i_1}{}^j=\Xi_{i_1,j,i_3}{}^{x_2} (s_{x_2})_{i_2}{}^j=\Xi_{i_1,i_2,j}{}^{x_3} (s_{x_3})_{i_3}{}^j=0\,.
\label{purlo}
\end{equation}
Let us now define the embedded gauge generators $X_{MN}{}^P$:
\begin{align}
X_{(i_1,i_2,i_3),(j_1,j_2,j_3)}{}^{(k_1,k_2,k_3)}&=\Theta_{(i_1,i_2,i_3)}{}^{x_1}\,(s_{x_1})_{j_1}{}^{k_1}\delta_{j_2}^{k_2}\delta_{j_3}^{k_3}+
\Theta_{(i_1,i_2,i_3)}{}^{x_2}\,(s_{x_2})_{j_2}{}^{k_2}\delta_{j_1}^{k_1}\delta_{j_3}^{k_3}+\nonumber\\
&+\Theta_{(i_1,i_2,i_3)}{}^{x_3}\,(s_{x_3})_{j_3}{}^{k_3}\delta_{j_2}^{k_2}\delta_{j_1}^{k_1}\,.
\label{fischietto}
\end{align}
The linear constraints (\ref{1constr}) become:
\begin{align}
& X_{(i_1,i_2,i_3),(j_1,j_2,j_3)}{}^{(i_1,i_2,i_3)}=0\,\,\Rightarrow\,\,\,\,\xi^{(1)}_{i_1\,i_2\,i_3}+
\xi^{(2)}_{i_1\,i_2\,i_3}+\xi^{(3)}_{i_1\,i_2\,i_3}=0\,,\nonumber\\
&X_{(i_1,i_2,i_3),(j_1,j_2,j_3),(k_1,k_2,k_3)}+X_{(k_1,k_2,k_3),(j_1,j_2,j_3),(i_1,i_2,i_3)}+
X_{(j_1,j_2,j_3),(i_1,i_2,i_3),(k_1,k_2,k_3)}=0\,\Rightarrow\nonumber\\&\Rightarrow\,\, \Xi_{i_1,i_2,i_3}{}^{x_i}=0\,.
\end{align}
This corresponds to the elimination of the three representations of type $\left(\frac{3}{2},\frac{1}{2},\frac{1}{2}\right)$, leaving us only with three tensors in the $\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ representation.
Explicitly the linearly constrained embedding tensor reads as follows:
\begin{align}
\Theta_{(i_1,i_2,i_3)}{}^{A}&=\{(s^{x_1})_{i_1}{}^j\,\xi^{(1)}_{j\,i_2\,i_3},\,(s^{x_2})_{i_2}{}^j\,\xi^{(2)}_{i_1\,j\,i_3},\,
(s^{x_3})_{i_3}{}^j\,\xi^{(3)}_{i_1\,i_2\,j}\}\,,
\end{align}
and the additional linear constraint reduces further the independent tensors in the $\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ representation to two, since we obtain the condition
\begin{equation}
\xi^{(1)}_{i_1\,i_2\,i_3}+\xi^{(2)}_{i_1\,i_2\,i_3}+\xi^{(3)}_{i_1\,i_2\,i_3}=0\,.
\end{equation}
The quadratic condition for $\Theta_M{}^A$ is (\ref{2constr2}), which, applied to our solution of the linear constraints, takes the following form:
\begin{equation}
\epsilon^{i_1 j_1}\epsilon^{i_2 j_2}\epsilon^{i_3 j_3}\,\Theta_{(i_1,i_2,i_3)}{}^{A}\,\Theta_{(j_1,j_2,j_3)}{}^{B}=0\,.
\end{equation}
By means of a MATHEMATICA computer code we were able to find 36 solutions to this equation, all of which correspond to non-semisimple gauge groups. We do not display them here since, in section \ref{noFIgauge}, we show how to classify the orbits into which such solutions are organized, and it will be sufficient to consider only one representative for each orbit.
\subsubsection{\sc The Special Geometry of the STU Model}
In equation (\ref{transformus}) we derived the transformation from the Calabi--Vesentini coordinates $\{S,y_{1,2}\}$ to a triplet of complex coordinates $z_{1,2,3}$ parameterizing the three identical copies of the coset manifold $\frac{\mathrm{SL(2,\mathbb{R})}}{\mathrm{SO(2)}}$ which compose this special instance of special K\"ahler manifold. Indeed setting:
\begin{equation}\label{rinomo}
{\rm i} e^{h} + b \, \equiv \, z_1 \quad ; \quad {\rm i} e^{h_1} + b_1 \, \equiv \, z_2 \quad ; \quad {\rm i} e^{h_2} + b_2 \, \equiv \, z_3
\end{equation}
the transformation (\ref{transformus}) can be rewritten as follows:
\begin{equation}\label{transformer}
\begin{array}{lll}
S & = & z_1 \\
y_1 & = & -\frac{{\rm i}
\left(z_2
z_3+1\right)}{\left(z_2+{\rm i}\right) \left(z_3+{\rm i}\right)}
\\
y_2 & = & \frac{{\rm i}
\left(z_2-z_3\right)}{\left (z_2+{\rm i}\right) \left(z_3+{\rm i}\right)}
\end{array}
\end{equation}
In the sequel we will adopt the symmetric renaming of variables
\begin{equation}\label{bongobongo}
z_i \, = \, {\rm i} e^{\mathfrak{h}_i} + \mathfrak{b}_i
\end{equation}
Applying the transformation (\ref{transformer}) of its arguments to the Calabi-Vesentini holomorphic section $\Omega_{CV} $, we find:
\begin{eqnarray}
\Omega_{CV}(z) &=& \mathfrak{f}(z) \,\left(
\begin{array}{l}
\frac{z_2}{\sqrt{2}}+\frac{
z_3}{\sqrt{2}} \\
\frac{z_2
z_3}{\sqrt{2}}-\frac{1}{\sqrt{2}} \\
-\frac{z_2
z_3}{\sqrt{2}}-\frac{1}{\sqrt{2}} \\
\frac{z_2}{\sqrt{2}}-\frac{
z_3}{\sqrt{2}} \\
\frac{z_1
z_2}{\sqrt{2}}+\frac{z_1
z_3}{\sqrt{2}} \\
\frac{z_1 z_2
z_3}{\sqrt{2}}-\frac{z_1}{\sqrt{2}} \\
\frac{z_2 z_3
z_1}{\sqrt{2}}+\frac{z_1}{\sqrt{2}} \\
\frac{z_1
z_3}{\sqrt{2}}-\frac{z_1
z_2}{\sqrt{2}}
\end{array}
\right)\\
\mathfrak{f}(z) &=& \frac{{\rm i} \, \sqrt{2}}{\left(z_2+{\rm i}\right)
\left(z_3+{\rm i}\right)}
\end{eqnarray}
As is well known, the overall holomorphic factor $\mathfrak{f}(z)$ in front of the section has no consequence on the determination of the K\"ahler metric: it simply adds the real part of a holomorphic function to the K\"ahler potential. Similarly, at the level of ungauged supergravity, the symplectic frame plays no role in the Lagrangian and we are free to perform any desired symplectic rotation on the section, the preserved symplectic metric being the following one:
\begin{equation}\label{CCmatruzza}
\mathbb{C} \, = \, \left(
\begin{array}{llllllll}
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0
\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\\
-1 & 0 & 0 & 0 & 0 & 0 & 0 &
0 \\
0 & -1 & 0 & 0 & 0 & 0 & 0 &
0 \\
0 & 0 & -1 & 0 & 0 & 0 & 0 &
0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 &
0
\end{array}
\right)
\end{equation}
On the other hand, at the level of \textit{gauged supergravity} the choice of the symplectic frame is physically relevant. The Calabi--Vesentini frame is the one where the $\mathrm{SO(2,2)}$ isometries of the manifold are all linearly realized on the electric vector field strengths, while the $\mathrm{SL(2,\mathbb{R})}$ factor acts as a group of electric/magnetic duality transformations. For this reason the CV frame was chosen in paper \cite{mapietoine}, since in such a frame it was easy to single out the non-compact gauge group $\mathrm{SO(2,1)}$. On the other hand, the so-called special coordinate frame, which admits a description in terms of a prepotential, is the one where the three group factors $\mathrm{SL(2,\mathbb{R})}$ are all on the same footing and the $\mathbf{W}$-representation is identified as the $(2,2,2)\sim \left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$.
\par
The philosophy underlying the \textit{embedding tensor approach} to gaugings is that the embedding tensor already contains all possible symplectic frame choices, since it transforms as a good tensor under the symplectic group. Hence we can choose any preferred symplectic frame to start with.
\par
In view of these considerations we introduce the following symplectic matrix:
\begin{eqnarray}\label{symgroupel}
\mathcal{S}& = & \left(
\begin{array}{llllllll}
0 & 0 & \frac{1}{\sqrt{2}} &
\frac{1}{\sqrt{2}} & 0 & 0
& 0 & 0 \\
-\frac{1}{\sqrt{2}} & 0 & 0 &
0 & 0 & \frac{1}{\sqrt{2}}
& 0 & 0 \\
-\frac{1}{\sqrt{2}} & 0 & 0 &
0 & 0 & -\frac{1}{\sqrt{2}}
& 0 & 0 \\
0 & 0 & \frac{1}{\sqrt{2}} &
-\frac{1}{\sqrt{2}} & 0 & 0
& 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 &
\frac{1}{\sqrt{2}} &
\frac{1}{\sqrt{2}} \\
0 & -\frac{1}{\sqrt{2}} & 0 &
0 & -\frac{1}{\sqrt{2}} & 0
& 0 & 0 \\
0 & \frac{1}{\sqrt{2}} & 0 &
0 & -\frac{1}{\sqrt{2}} & 0
& 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 &
\frac{1}{\sqrt{2}} &
-\frac{1}{\sqrt{2}}
\end{array}
\right)\label{symgroupebisl}\\
\mathbb{C} &=& \mathcal{S}^T\, \mathbb{C} \, \mathcal{S}
\end{eqnarray}
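The symplectic property of $\mathcal{S}$ asserted in the last line can be verified numerically; a minimal sketch (matrix entries transcribed from the displays above; note that the check is insensitive to the overall sign convention chosen for $\mathbb{C}$):

```python
import numpy as np

u = 1/np.sqrt(2)
# symplectic metric C, eq. (CCmatruzza), in block form [[0, 1], [-1, 0]]
C = np.block([[np.zeros((4, 4)), np.eye(4)],
              [-np.eye(4), np.zeros((4, 4))]])
# change-of-frame matrix S, eq. (symgroupel), transcribed row by row
S = np.array([
    [0,  0,  u,  u,  0,  0,  0,  0],
    [-u, 0,  0,  0,  0,  u,  0,  0],
    [-u, 0,  0,  0,  0, -u,  0,  0],
    [0,  0,  u, -u,  0,  0,  0,  0],
    [0,  0,  0,  0,  0,  0,  u,  u],
    [0, -u,  0,  0, -u,  0,  0,  0],
    [0,  u,  0,  0, -u,  0,  0,  0],
    [0,  0,  0,  0,  0,  0,  u, -u],
])
# S is symplectic: S^T C S = C
assert np.allclose(S.T @ C @ S, C)
print("S preserves the symplectic metric")
```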
and we introduce the symplectic section in the \textit{special coordinate frame} by setting
\begin{equation}\label{specialframe}
\Omega_{SF}^M \, = \, \frac{1}{\mathfrak{f}(z)} \, \mathcal{S}^{-1}\, \Omega_{CV} \, = \, \left(\begin{array}{l}
1 \\
z_1 \\
z_2 \\
z_3 \\
-z_1 z_2 z_3 \\
z_2 z_3 \\
z_1 z_3 \\
z_1 z_2
\end{array}
\right) \, \equiv \, \left(\begin{matrix}X^\Lambda (z)\cr
F_\Lambda(z)\end{matrix}\right)
\end{equation}
Note that this frame admits a prepotential description. Introducing the following holomorphic prepotential:
\begin{equation}
\mathcal{F}(z)=z^1 z^2 z^3=s t u\,,
\end{equation}
the symplectic section $\Omega_{SF}$ can be written as follows:
\begin{equation}
\Omega_{SF}(z) =\left(1,\, \underbrace{z^m}_{m=1,2,3},\, -\mathcal{F}(z),\, \underbrace{\frac{\partial \mathcal{F}(z)}
{\partial z^m}}_{m=1,2,3} \right)\,.
\end{equation}
It is useful to work also with real fields $(\phi^r)=(\mathfrak{b}_m,\,\mathfrak{h}_m)$, defined as in equation (\ref{bongobongo}).
The K\"ahler potential is expressed as follows:
\begin{equation}
\mathcal{K}(z^m,\bar{z})=-\log\left[-{\rm i}\,\Omega\,\mathbb{C}\,\bar{\Omega}\right]=
-\log\left[-{\rm i}\,(X^\Lambda \bar{F}_\Lambda-F_\Lambda
\bar{X}^\Lambda)\right]\,,\label{Kom}
\end{equation}
and, in real coordinates we have:
\begin{equation}
e^{-\mathcal{K}}=8\, e^{\mathfrak{h} _1+\mathfrak{h} _2+\mathfrak{h} _3}\,.
\end{equation}
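This value can be reproduced symbolically from the section (\ref{specialframe}). In the sketch below the overall factor of ${\rm i}$ multiplying the symplectic invariant is fixed by requiring $e^{-\mathcal{K}}>0$, since the phase conventions for $\mathbb{C}$ and for the prepotential vary in the literature:

```python
import sympy as sp

# real fields of eq. (bongobongo): z_m = i e^{h_m} + b_m
b1, b2, b3 = sp.symbols('b1 b2 b3', real=True)
h1, h2, h3 = sp.symbols('h1 h2 h3', real=True)
zs = sp.symbols('z1 z2 z3')

# section in the special coordinate frame, eq. (specialframe):
# X^Lambda = (1, z1, z2, z3), F_Lambda = (-F, dF/dz_m) with F = z1 z2 z3
prep = zs[0]*zs[1]*zs[2]
X = [sp.Integer(1), zs[0], zs[1], zs[2]]
Fl = [-prep] + [sp.diff(prep, zm) for zm in zs]

vals = {zs[0]: sp.I*sp.exp(h1) + b1,
        zs[1]: sp.I*sp.exp(h2) + b2,
        zs[2]: sp.I*sp.exp(h3) + b3}
Xv = [e.subs(vals) for e in X]
Flv = [e.subs(vals) for e in Fl]

# symplectic invariant X^L conj(F_L) - F_L conj(X^L); the factor i below
# is chosen so that e^{-K} is positive, matching e^{-K} = 8 e^{h1+h2+h3}
inv = sum(x*sp.conjugate(f) - f*sp.conjugate(x) for x, f in zip(Xv, Flv))
expK = sp.I*inv
residual = sp.simplify(sp.expand(expK) - 8*sp.exp(h1 + h2 + h3))
print(residual)  # 0
```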
We also introduce the covariantly holomorphic symplectic section $V=e^{\mathcal{K}/2}\,\Omega_{SF}$ satisfying the condition:
\begin{equation}
\nabla_{\bar{a}}V\equiv
(\partial_{\bar{a}}-\frac{1}{2}\,\partial_{\bar{a}}\mathcal{K})V=0\,,
\label{covholo}
\end{equation}
and its covariant derivatives:
\begin{equation}
U_m=(U_m{}^M)\equiv
\nabla_{{m}}V=(\partial_m+\frac{1}{2}\,\partial_m\mathcal{K})V\,.
\end{equation}
The following properties hold:
\begin{align}
V\mathbb{C}\bar{V}&={\rm i}\,\,;\,\,\,U_m\mathbb{C}\bar{V}=\bar{U}_{\bar{m}}\mathbb{C}\bar{V}=0\,\,;\,\,\,
U_m\mathbb{C}\bar{U}_{\bar{n}}=-{\rm i}\,g_{m\bar{n}}\,.\label{props}
\end{align} If
$E_m{}^I$, $I=1,\dots, 3$, is the complex vielbein matrix of the manifold,
$g_{m\bar{n}}=\sum_{I}E_m{}^I\bar{E}_{\bar{n}}{}^I$, and ${E}_I{}^m$
its inverse, we introduce the quantities $U_I\equiv E_I{}^m\,U_m$,
in terms of which the following $8\times 8$ matrix
$\hat{\mathbb{L}}_4=(\hat{\mathbb{L}}_4{}^M{}_N)$ is defined:
\begin{equation}
\hat{\mathbb{L}}_4(z,\bar{z})=\left(V,\overline{U}_{I},\,\overline{V},\,U_I\right)\mathcal{C}=\sqrt{2}\,\left({\rm Re}(V),\,{\rm
Re}(U_I),\,-{\rm Im}(V),{\rm Im}(U_I)\right)\,,
\end{equation}
where $\mathcal{C}$ is the Cayley matrix.
By virtue of eq.s (\ref{props}), the matrix $\hat{\mathbb{L}}_4$ is symplectic:
$\hat{\mathbb{L}}^T_4\mathbb{C}\hat{\mathbb{L}}_4=\mathbb{C}$.
\par
In order to find the coset representative $\mathbb{L}$ as an ${\rm Sp}(8,\mathbb{R})$ matrix in the solvable gauge, and the symplectic representation of the isometry generators $t_A$ in the special coordinate basis,
we proceed as follows. We construct a symplectic matrix $\mathcal{L}$ which coincides with the identity at the origin where $\phi^r\equiv 0\,\Leftrightarrow\,\,\mathfrak{h}_m=\mathfrak{b}_m=0$:
\begin{equation}
\mathcal{L}(\phi^r)=\hat{\mathbb{L}}_4(\phi^r)\,\hat{\mathbb{L}}_4(\phi^r\equiv 0)^{-1}\,.
\end{equation}
The following property holds:
\begin{equation}
V(\phi^r)= \mathbb{L}(\phi^r)\,V(\phi^r\equiv 0)\,.
\end{equation}
The matrix $ \mathbb{L}$ is the coset representative in the solvable gauge. To show this we compute the following generators:
\begin{equation}
{\bf h}_m=\left.\frac{\partial \mathbb{L}}{\partial
\mathfrak{h}_m}\right\vert_{\phi^r\equiv 0}\,\,;\,\,\,{\bf
a}_m=\left.\frac{\partial \mathbb{L}}{\partial
\mathfrak{b}_m}\right\vert_{\phi^r\equiv 0}\,.
\end{equation}
These generators close a solvable Lie algebra $Solv$ which is the Borel subalgebra of $\mathfrak{g}_{SK}$. The above construction is general and applies to any symmetric Special K\"ahler manifold. In our case $Solv=Solv_2^{(1)}\oplus Solv_2^{(2)}\oplus Solv_2^{(3)}$, where
\begin{equation}
Solv_2^{(m)}\equiv \{{\bf h}_m,\,{\bf
a}_m\}\,\,;\,\,\,[{\bf h}_m,\,{\bf
a}_n]=\delta_{mn}\,{\bf
a}_n\,\,;\,\,\,[Solv_2^{(m)},\,Solv_2^{(n)}]=0\,.
\end{equation}
One can verify that
\begin{equation}
\mathbb{L}(\mathfrak{h}_m,\,\mathfrak{b}_m)=\mathbb{L}_{axion}(\mathfrak{b}_m)\,\mathbb{L}_{dilaton}(\mathfrak{h}_m)=
e^{\mathfrak{b}_m\,{\bf a}_m}\,e^{\mathfrak{h}_m\,{\bf h}_m}\,.\label{cosetrep}
\end{equation}
Each $\mathfrak{sl}(2)$ algebra is spanned by $\{{\bf h}_m,\,{\bf a}_m,\,{\bf a}^T_m\}$, with $[{\bf a}_m,\,{\bf
a}_n^T]=2\,\delta_{mn}\,{\bf
h}_n$. The explicit matrix representations of these generators are:
\begin{align}
{\bf h}_1 &=\left(
\begin{array}{llllllll}
-\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -\frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2}
\end{array}
\right)\,\,;\,\,\,{\bf a}_1=\left(
\begin{array}{llllllll}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0
\end{array}
\right)\,,\nonumber\\
{\bf h}_2 &=\left(
\begin{array}{llllllll}
- \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & - \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & - \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & - \frac{1}{2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2}
\end{array}
\right)\,\,;\,\,\,{\bf a}_2=\left(
\begin{array}{llllllll}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & - 1 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right)\,,\nonumber\\
{\bf h}_3 &=\left(
\begin{array}{llllllll}
- \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & - \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & - \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & - \frac{1}{2}
\end{array}
\right)\,\,;\,\,\,{\bf a}_3=\left(
\begin{array}{llllllll}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & - 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}
\right)\,.
\end{align}
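As an editorial cross-check (not part of the derivation), the algebraic relations quoted above can be verified directly on the explicit $8\times 8$ matrices, e.g. with the following NumPy sketch:

```python
import numpy as np

# Explicit 8x8 generators transcribed from the matrices above.
h = [0.5 * np.diag(d) for d in (
    [-1, 1, -1, -1, 1, -1, 1, 1],    # h_1
    [-1, -1, 1, -1, 1, 1, -1, 1],    # h_2
    [-1, -1, -1, 1, 1, 1, 1, -1],    # h_3
)]
a = [np.zeros((8, 8)) for _ in range(3)]
for m, entries in enumerate((
    [(1, 0, 1), (4, 5, -1), (6, 3, 1), (7, 2, 1)],   # a_1
    [(2, 0, 1), (4, 6, -1), (5, 3, 1), (7, 1, 1)],   # a_2
    [(3, 0, 1), (4, 7, -1), (5, 2, 1), (6, 1, 1)],   # a_3
)):
    for i, j, v in entries:
        a[m][i, j] = v

comm = lambda X, Y: X @ Y - Y @ X
for m in range(3):
    for n in range(3):
        delta = 1.0 if m == n else 0.0
        # [h_m, a_n] = delta_{mn} a_n  and  [a_m, a_n^T] = 2 delta_{mn} h_n
        assert np.allclose(comm(h[m], a[n]), delta * a[n])
        assert np.allclose(comm(a[m], a[n].T), 2 * delta * h[n])
        # the three Solv_2 factors commute with one another
        if m != n:
            assert np.allclose(comm(a[m], a[n]), 0)
```

All assertions pass, confirming both the $Solv_2^{(m)}$ relations and the $\mathfrak{sl}(2)$ relations on these matrices.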
The axionic and dilatonic parts of the coset representative in (\ref{cosetrep}) have the following matrix form:
\begin{align}
\mathbb{L}_{axion}(\mathfrak{b}_m)&=e^{\mathfrak{b}_m\,{\bf a}_m}=\left(
\begin{array}{llllllll}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\mathfrak{b}_1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\mathfrak{b}_2 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
\mathfrak{b}_3 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
-\mathfrak{b}_1 \mathfrak{b}_2 \mathfrak{b}_3 & -\mathfrak{b}_2 \mathfrak{b}_3 & -\mathfrak{b}_1 \mathfrak{b}_3 & -\mathfrak{b}_1 \mathfrak{b}_2 & 1 & -\mathfrak{b}_1 & -\mathfrak{b}_2 & -\mathfrak{b}_3 \\
\mathfrak{b}_2 \mathfrak{b}_3 & 0 & \mathfrak{b}_3 & \mathfrak{b}_2 & 0 & 1 & 0 & 0 \\
\mathfrak{b}_1 \mathfrak{b}_3 & \mathfrak{b}_3 & 0 & \mathfrak{b}_1 & 0 & 0 & 1 & 0 \\
\mathfrak{b}_1 \mathfrak{b}_2 & \mathfrak{b}_2 & \mathfrak{b}_1 & 0 & 0 & 0 & 0 & 1
\end{array}
\right)\,,\nonumber\\
\mathbb{L}_{dilaton}(\mathfrak{h}_m)&=e^{\mathfrak{h}_m\,{\bf h}_m}={\rm diag}\left(e^{-\frac{\mathfrak{h} _1}{2}-\frac{\mathfrak{h} _2}{2}-\frac{\mathfrak{h} _3}{2}},e^{\frac{\mathfrak{h} _1}{2}-\frac{\mathfrak{h} _2}{2}-\frac{\mathfrak{h}
_3}{2}},e^{-\frac{\mathfrak{h} _1}{2}+\frac{\mathfrak{h} _2}{2}-\frac{\mathfrak{h} _3}{2}},e^{-\frac{\mathfrak{h} _1}{2}-\frac{\mathfrak{h} _2}{2}+\frac{\mathfrak{h}
_3}{2}},\right.\nonumber\\&\left.e^{\frac{\mathfrak{h} _1}{2}+\frac{\mathfrak{h} _2}{2}+\frac{\mathfrak{h} _3}{2}},e^{-\frac{\mathfrak{h} _1}{2}+\frac{\mathfrak{h} _2}{2}+\frac{\mathfrak{h}
_3}{2}},e^{\frac{\mathfrak{h} _1}{2}-\frac{\mathfrak{h} _2}{2}+\frac{\mathfrak{h} _3}{2}},e^{\frac{\mathfrak{h} _1}{2}+\frac{\mathfrak{h} _2}{2}-\frac{\mathfrak{h}
_3}{2}}\right)\,.\nonumber
\end{align}
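Since the ${\bf a}_m$ are mutually commuting and nilpotent, the exponential in (\ref{cosetrep}) truncates to a polynomial; the explicit axionic matrix above can be cross-checked numerically (an illustrative sketch, with the generators transcribed from the matrices given earlier):

```python
import numpy as np
from scipy.linalg import expm

# Nilpotent axionic generators a_m transcribed from the text.
a = [np.zeros((8, 8)) for _ in range(3)]
for m, entries in enumerate((
    [(1, 0, 1), (4, 5, -1), (6, 3, 1), (7, 2, 1)],
    [(2, 0, 1), (4, 6, -1), (5, 3, 1), (7, 1, 1)],
    [(3, 0, 1), (4, 7, -1), (5, 2, 1), (6, 1, 1)],
)):
    for i, j, v in entries:
        a[m][i, j] = v

b1, b2, b3 = 0.7, -1.3, 2.1   # arbitrary test values of the axions

# Explicit form of L_axion quoted in the text.
L_axion = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [b1, 1, 0, 0, 0, 0, 0, 0],
    [b2, 0, 1, 0, 0, 0, 0, 0],
    [b3, 0, 0, 1, 0, 0, 0, 0],
    [-b1*b2*b3, -b2*b3, -b1*b3, -b1*b2, 1, -b1, -b2, -b3],
    [b2*b3, 0, b3, b2, 0, 1, 0, 0],
    [b1*b3, b3, 0, b1, 0, 0, 1, 0],
    [b1*b2, b2, b1, 0, 0, 0, 0, 1],
], dtype=float)

# The a_m commute and square to zero, so expm truncates at third order.
assert np.allclose(expm(b1*a[0] + b2*a[1] + b3*a[2]), L_axion)
```

The cubic term $\mathfrak{b}_1\mathfrak{b}_2\mathfrak{b}_3\,{\bf a}_1{\bf a}_2{\bf a}_3$ produces the single entry $-\mathfrak{b}_1\mathfrak{b}_2\mathfrak{b}_3$ in the fifth row.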
To make contact with the discussion of the embedding tensor provided in the previous section, we define the transformation between the basis $(i_1,i_2,i_3)$ and the special coordinate symplectic frame. We start with an ordering of the independent components of a vector $W_{i_1,i_2,i_3}$, which defines a symplectic basis to be dubbed ``old'':
\begin{equation}
W^{old} =(W^{old}_M)=\left(W_{ 1,1,1 },W_{ 1,1,2 },W_{ 1,2,1 },W_{ 1,2,2 },W_{ 2,1,1 },W_{ 2,1,2 },W_{ 2,2,1 },W_{ 2,2,2 }
\right)\,.
\end{equation}
The new special coordinate basis is related to the old one by an orthogonal transformation $\mathcal{O}$:
\begin{align}
W^{s.c.}_M&=\mathcal{O}_M{}^N\,W^{old}_N\,\,;\,\,\,\,
\mathcal{O}=\frac{1}{2\sqrt{2}}\left(
\begin{array}{llllllll}
1 & -1 & -1 & 1 & -1 & 1 & 1 & -1 \\
-1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 \\
-1 & 1 & -1 & 1 & 1 & -1 & 1 & -1 \\
-1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1
\end{array}
\right)\,.
\end{align}
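The orthogonality of $\mathcal{O}$ admits a one-line numerical check (editorial sketch):

```python
import numpy as np

# The change-of-basis matrix O transcribed from the text.
O = (1 / (2 * np.sqrt(2))) * np.array([
    [ 1, -1, -1,  1, -1,  1,  1, -1],
    [-1,  1,  1, -1, -1,  1,  1, -1],
    [-1,  1, -1,  1,  1, -1,  1, -1],
    [-1, -1,  1,  1,  1,  1, -1, -1],
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
    [ 1, -1,  1, -1,  1, -1,  1, -1],
], dtype=float)

# O is orthogonal: the inverse change of basis is simply the transpose.
assert np.allclose(O @ O.T, np.eye(8))
```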
The $\mathfrak{sl}(2)^3$ generators $t_A$ in the old basis read:
\begin{align}
(t_{x_1})_{j_1,j_2, j_3}{}^{k_1,k_2, k_3}&=(s_{x_1})_{j_1}{}^{k_1}\delta_{j_2}^{k_2}\delta_{j_3}^{k_3}\,\,;\,\,\,
(t_{x_2})_{j_1,j_2, j_3}{}^{k_1,k_2, k_3}=(s_{x_2})_{j_2}{}^{k_2}\delta_{j_1}^{k_1}\delta_{j_3}^{k_3}\,\,;\nonumber\\
(t_{x_3})_{j_1,j_2, j_3}{}^{k_1,k_2, k_3}&=(s_{x_3})_{j_3}{}^{k_3}\delta_{j_1}^{k_1}\delta_{j_2}^{k_2}\,.
\end{align}
In the new basis their representation is deduced from their relation to the $Solv$ generators and their transpose:
\begin{equation}
t_{1_m}=2\,\,{\bf h}_m\,\,;\,\,\,t_{2_m}= {\bf a}_m-{\bf a}_m^T\,\,;\,\,\,t_{3_m}= -{\bf a}_m-{\bf a}_m^T\,.
\end{equation}
The commutation relations among them read:
\begin{equation}
[t_{x_m},\,t_{y_n}]=-2\,\delta_{mn}\,\epsilon_{xy}{}^z\,t_{z_n}\,,
\end{equation}
where the adjoint index is raised with $\eta_{xy}={\rm diag}(+1,-1,+1)$.
\paragraph{\sc The Killing Vectors}
A standard procedure in coset geometry allows one to compute the Killing vectors $\{k_A\}=\{k_{x_m}^r\,\frac{\partial}{\partial \phi^r}\}_{m=1,2,3}$:
\begin{align}
k_{1_m}&=-2 \,(\partial_{\mathfrak{h}_m}+\mathfrak{b}_m\,\partial_{\mathfrak{b}_m})\,\,;\,\,\,k_{2_m}=2 \mathfrak{b}_m\,\partial _{\mathfrak{h} _m} +\left(\mathfrak{b}_m^2-e^{2 \mathfrak{h} _m}+1\right)\,\partial _{\mathfrak{b}_m}\,,\nonumber\\
k_{3_m}&=-2 \mathfrak{b}_m\,\partial _{\mathfrak{h} _m} - \left(\mathfrak{b}_m^2-e^{2 \mathfrak{h} _m}-1\right)\,\partial _{\mathfrak{b}_m}\,.
\end{align}
For the purpose of computing the scalar potential, it is convenient
to determine the holomorphic Killing vectors
$k^m,\,k^{\bar{m}}$. To this end we solve the equation:
\begin{equation}
\delta_\alpha \Omega(z)^N=-\Omega(z)^M\,t_{\alpha
M}{}^N=k_\alpha^m\,\partial_m\Omega(z)^N+\ell_\alpha\,\Omega(z)^N\,,
\end{equation}
and find:
\begin{equation}
k_{1_m}=-2\,z^m\,\partial_m\,\,;\,\,\,k_{2_m}=(1+(z^m)^2)\,\partial_m\,\,;\,\,\,k_{3_m}=(1-(z^m)^2)\,\partial_m\,.
\end{equation}
These are conveniently expressed in terms of a holomorphic
prepotential $\mathcal{P}_\alpha(z)$:
\begin{align}
\mathcal{P}_\alpha&=-\overline{V}^M\,t_{\alpha
M}{}^N\,\mathbb{C}_{NL}\,V^L\,,\nonumber\\
\mathcal{P}_{1_m}&=-i\,\frac{z^m+\bar{z}^m}{z^m-\bar{z}^m}\,,\,\,\mathcal{P}_{2_m}=i\,\frac{1+|z^m|^2}{z^m-\bar{z}^m}\,,\,\,\mathcal{P}_{3_m}=i\,\frac{1-|z^m|^2}{z^m-\bar{z}^m}\,,
\end{align}
the relation being:
\begin{equation}
k^{\bar{m}}_\alpha=-i\,g^{\bar{m}n}\,\partial_n\mathcal{P}_\alpha\,.
\end{equation}
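Both the closure of the holomorphic Killing vectors and the prepotential relation above can be verified symbolically (an editorial cross-check, assuming the standard per-modulus K\"ahler potential $\mathcal{K}_m=-\log\left[{\rm i}\,(\bar{z}^m-z^m)\right]$ of the STU model, which is not spelled out in this section):

```python
import sympy as sp

z, zb = sp.symbols('z zbar')

# Holomorphic Killing vectors k = f(z) d/dz for a single modulus.
k1, k2, k3 = -2*z, 1 + z**2, 1 - z**2

def bracket(f, g):
    """Lie bracket [f d/dz, g d/dz] = (f g' - g f') d/dz."""
    return sp.expand(f*sp.diff(g, z) - g*sp.diff(f, z))

# Closure of each sl(2) factor; note the overall sign flip relative
# to the matrix commutators, as usual for Killing vectors.
assert bracket(k1, k2) == sp.expand(2*k3)
assert bracket(k2, k3) == sp.expand(2*k1)
assert bracket(k3, k1) == sp.expand(-2*k2)

# Assumed per-modulus Kahler potential: K = -log[i(zbar - z)], Im z > 0.
K = -sp.log(sp.I*(zb - z))
g_inv = sp.simplify(1/sp.diff(K, z, zb))   # inverse metric g^{zbar z}

# Prepotentials quoted in the text.
P = {1: -sp.I*(z + zb)/(z - zb),
     2:  sp.I*(1 + z*zb)/(z - zb),
     3:  sp.I*(1 - z*zb)/(z - zb)}
k_bar = {1: -2*zb, 2: 1 + zb**2, 3: 1 - zb**2}   # conjugate Killing vectors

# k^{zbar} = -i g^{zbar z} d_z P reproduces the conjugate Killing vectors.
for x in (1, 2, 3):
    assert sp.simplify(-sp.I*g_inv*sp.diff(P[x], z) - k_bar[x]) == 0
```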
\subsubsection{{\sc The gaugings with no Fayet Iliopoulos terms}}
\label{noFIgauge}
We first consider the case of no Fayet Iliopoulos terms, namely
$\Theta_M{}^a=0$. We can use the global symmetry $\mathrm{G}$ of the
theory to simplify our analysis. Indeed the field equations and
Bianchi identities are invariant if we $\mathrm{G}$-transform the
field and embedding tensors at the same time. This is in particular
true for the scalar potential $\mathcal{V}(\phi, \Theta)$:
\begin{equation}
\forall g\in G\,\,\,:\,\,\,\,\mathcal{V}(\phi,
\Theta)=\mathcal{V}(g\star\phi, g\star\Theta)\,,\label{G-inv}
\end{equation}
where $(g\star\phi)^r$ are the scalar fields obtained from $\phi^r$
by the action of the isometry $g$, and $(g\star\Theta)_M{}^\alpha$
is the $g$-transformed embedding tensor. Notice that we can have
other formal symmetries of the potential which are not in $\mathrm{U}_{SK}$.
Consider for instance the symplectic transformation:
\begin{equation}
\mathcal{S}={\rm
diag}(1,\varepsilon_m,1,\varepsilon_m)\,,\label{Sep}
\end{equation}
where $\varepsilon_m=\pm 1$,
$\varepsilon_1\varepsilon_2\varepsilon_3=1$. These transformations
correspond to the isometries $z^m\rightarrow \varepsilon_m\,z^m$,
which however do not preserve the physical domain defined by the
upper half plane for each complex coordinate: ${\rm Im}(z^m)>0$.
Therefore embedding tensors connected by such transformations are to
be regarded as physically inequivalent.
\par
We have shown in sect. \ref{gaustu} that the embedding tensor solving the linear constraints, in the absence of Fayet Iliopoulos terms, is parameterized by two independent tensors $\xi^{(2)},\,\xi^{(3)}$ in the
$\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ of $\mathrm{U}_{SK}$.
These are then subject to the quadratic constraints that restrict the $\mathrm{U}_{SK}$-orbits of these two quantities.
We can act on $\xi^{(2)}$ by means of $\mathrm{U}_{SK}$, so as
to make it as simple as possible. By virtue of eq. (\ref{G-inv}) this will not
change the physics of the gauged model (vacua, spectra,
interactions), but just make its analysis simpler.
\par
Let us recall that the $\mathrm{U}_{SK}$-orbits of a single object, say
$\xi^{(2)M}$, in the
$\mathbf{W} = \left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$ representation
are described by a quartic invariant $\mathfrak{I}_4(\xi^{(2)})$, defined as:
\begin{equation}
\mathfrak{I}_4(\xi^{(2)})=-\frac{2}{3}\,t_{A\,M
N}\,t^A{}_{PQ}\,\xi^{(2)M}\xi^{(2)N}\xi^{(2)P}\xi^{(2)Q}\,.
\end{equation}
A very important observation is that, by definition, the $\mathbf{W}$ representation is that of the electric-magnetic charges of a black-hole solution of ungauged supergravity. Hence the components of the $\xi^{(2)}$-tensor can be identified with the charges $\mathcal{Q}$ of such a black hole, and the classification of the orbits of $\mathrm{U}_{SK}$ in the representation $\mathbf{W}$ coincides with the classification of black-hole solutions. The quartic invariant is the same one that, in the black-hole case, determines the area of the horizon. Here we make the first contact with the profound relation linking the black-hole potential to the potentials of the gauged models.
The orbits in the $\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)$-representation are classified as follows\footnote{Strictly speaking, for all models in the sixth line of table \ref{homomodels}, there is a further fine structure (see \cite{Borsten:2011ai}) in some of the orbits classified above which depends on the $\mathrm{U}_{SK}$-invariant sign of the time-like component (denoted by $\mathcal{I}_2$) of the 3-vector $s^{x\,\alpha\beta} \xi^{(2)}_{\alpha\alpha_1\alpha_2}\xi^{(2)}_{\beta\beta_1\beta_2}\epsilon^{\alpha_1\beta_1}\epsilon^{\alpha_2\beta_2}$ (i.e. the $x=2$ component in our conventions). This further splitting, however, is not relevant in the STU model since it yields isomorphic orbits.}:
\begin{itemize}
\item[i)] Regular, $\mathfrak{I}_4>0$, and there exists a $\mathbb{Z}_3$-centralizer;
\item[ii)]Regular, $\mathfrak{I}_4>0$, no $\mathbb{Z}_3$-centralizer;
\item[iii)]Regular, $\mathfrak{I}_4<0$;
\item[iv)]\emph{Light-like}, $\mathfrak{I}_4=0$, $\partial_M \mathfrak{I}_4\neq 0$;
\item[v)]\emph{Critical}, $\mathfrak{I}_4=0$, $\partial_M \mathfrak{I}_4= 0$, $t_A{}^{MN}\,\partial_M \partial_N\,\mathfrak{I}_4\neq 0$;
\item[vi)]\emph{Doubly critical}, $\mathfrak{I}_4=0$, $\partial_M \mathfrak{I}_4= 0$, $t_A{}^{MN}\,\partial_M \partial_N\,\mathfrak{I}_4=
0$\,,
\end{itemize}
where $\partial_M\equiv \partial/\partial \xi^{(2)M}$. The quadratic
constraints (\ref{2constr11}) restrict $\xi^{(2)}$ (and $\xi^{(3)}$)
to be either in the \emph{critical} or in the
\emph{doubly-critical} orbit. Let us analyze the two cases
separately.
\paragraph{\sc $\xi^{(2)}$ Critical.}
The quadratic constraints imply $\xi^{(3)}=0$ and thus the embedding
tensor is parameterized by $\xi^{(1)}=\xi^{(2)}$, namely the diagonal
of the first two $\mathrm{SL}(2,\mathbb{R})$ groups in $\mathrm{G}_{SK}$. We
can choose a representative of the orbit in the form:
\begin{equation}
\xi^{(2)}=g\,(0,1,c,0,0,0,0,0)\,.
\end{equation}
The scalar potential reads:
\begin{equation}
\mathcal{V}=\mathcal{V}_{gaugino,\,1}=
g^2\,e^{-\mathfrak{h}_1-\mathfrak{h}_2-\mathfrak{h}_3}\,\left((\mathfrak{b}_1+c\,\mathfrak{b}_2)^2+(e^{\mathfrak{h}_1}-c\,e^{\mathfrak{h}_2})^2\right)\,.
\end{equation}
The truncation to the dilatons ($\mathfrak{b}_m \, = \, 0$) is a consistent one:
\begin{equation}
\left.\frac{\partial \mathcal{V}}{\partial
\mathfrak{b}_m}\right\vert_{\mathfrak{b}_m=0}=0\,,
\end{equation}
and
\begin{equation}
\left.\mathcal{V}\right\vert_{\mathfrak{b}_m=0}=g^2\,\left(e^{-\frac{1}{2}(-\mathfrak{h}_1+\mathfrak{h}_2+\mathfrak{h}_3)}
-c\,e^{-\frac{1}{2}(\mathfrak{h}_1-\mathfrak{h}_2+\mathfrak{h}_3)}\right)^2\,.
\end{equation}
The above potential has an extremum if $c>0$, for
$e^{\mathfrak{h}_1}=c\,e^{\mathfrak{h}_2}$, while it is runaway if $c<0$. The
sign of $c$ is changed by a transformation of the kind (\ref{Sep})
with $\varepsilon_1=-\varepsilon_2=-\varepsilon_3=1$. For the reason
outlined above, in passing from a negative to a positive $c$, the
critical point of the potential moves to the unphysical domain
(${\rm Im}(z^2) < 0$). The gauging for $c=-1$ coincides with the one
considered in \cite{mapietoine}, in the absence of Fayet Iliopoulos terms, with
potential (compare with eq.(\ref{Potentabel})):
\begin{equation}
\mathcal{V}_{CV}\, = \, \frac{e_0^2}{2\,{\rm
Im}(S)}\,\frac{P_2^+(y)}{P_2^-(y)}\,,
\end{equation}
where $P_2^\pm(y)=1-2 \,y_0\bar{y}_0\pm
2\,y_1\bar{y}_1+y^2\bar{y}^2$ and $y^2=y_0^2+y_1^2$. The two
potentials are connected by the transformation relations between
the Calabi-Vesentini and the special coordinates spelled out in eq.(\ref{transformer})
and by setting $e_0=2\sqrt{2}\,g$.
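The extremal structure claimed above can be checked symbolically; the following sketch (an editorial cross-check, for the $c>0$ branch) verifies that the dilaton-truncated potential, being a perfect square, vanishes identically together with its gradient on the locus $e^{\mathfrak{h}_1}=c\,e^{\mathfrak{h}_2}$:

```python
import sympy as sp

h1, h2, h3 = sp.symbols('h1 h2 h3', real=True)
c, g = sp.symbols('c g', positive=True)   # c > 0 branch, as in the text

# Dilaton-truncated potential of the xi^(2)-critical gauging.
V = g**2*(sp.exp(-(-h1 + h2 + h3)/2) - c*sp.exp(-(h1 - h2 + h3)/2))**2

# On the locus e^{h1} = c e^{h2} the potential vanishes identically ...
V_loc = sp.simplify(V.subs(h1, h2 + sp.log(c)))
assert V_loc == 0
# ... and so does its gradient: a flat valley of Minkowski extrema.
for hh in (h1, h2, h3):
    assert sp.simplify(sp.diff(V, hh).subs(h1, h2 + sp.log(c))) == 0
```

Note that the extremum is degenerate along $\mathfrak{h}_3$ and along the locus itself, consistent with a runaway behavior once $c<0$.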
\paragraph{\sc $\xi^{(2)}$ Doubly-Critical.}
We can choose a representative of the orbit in the form:
\begin{equation}
\xi^{(2)}=g\,(1,0,0,0,0,0,0,0)\,.
\end{equation}
In this case $\xi^{(3)}$ is non-vanishing and has the form:
\begin{equation}
\xi^{(3)}=g'\,(1,0,0,0,0,0,0,0)\,.
\end{equation}
The gauging is electric ($\Theta^\Lambda=0$) and the gauge
generators $X_\Lambda=(X_0,X_m)$, $m=1,2,3$, satisfy the following
commutation relations:
\begin{equation}
[X_0,\,X_m]=M_m{}^n\,X_n,\,\,\,\,M_m{}^n={\rm
diag}(-2\,(g+g'),2\,g,\,2\,g')\,,
\end{equation}
all other commutators being zero. This gauging originates from a
Scherk-Schwarz reduction from $D=5$, in which the semisimple global
symmetry generator defining the reduction is the 2-parameter
combination $M_m{}^n$ of the $\mathfrak{so}(1,1)^2$ global symmetry
generators of the $D=5 $ parent theory.\par The scalar potential is
axion-independent and reads:
\begin{equation}
\mathcal{V}= (g^2+g
g'+g^{'2})\,e^{-\mathfrak{h}_1-\mathfrak{h}_2-\mathfrak{h}_3}\,.
\end{equation}
This potential is trivially integrable since it contains only one exponential of a single scalar field combination.
\subsubsection{\sc Adding ${\rm U}(1)$ Fayet Iliopoulos terms}
Let us now consider adding a component of the embedding tensor along
one generator of the ${\rm SO}(3)$ global symmetry group:
$\theta_M=\Theta_M{}^{a=1}$. The constraints on $\theta_M$ are
(\ref{2constr2}) and (\ref{2constr12}), which read:
\begin{equation}
\theta_M\,\mathbb{C}^{MP}\,X_{PN}{}^Q=0\,\,,\,\,\,X_{PN}{}^Q\,\theta_Q=0\,,
\end{equation}
while the constraints (\ref{2constr11}) on $\Theta_M{}^A$ are just
the same as before and induce the same restrictions on the orbits of
$\xi^{(2)},\,\xi^{(3)}$. Clearly if $X_{PN}{}^Q=0$, namely
$\Theta_M{}^A=0$, no SK isometries are gauged and there are no
constraints on $\theta_M$. We shall consider this case
separately.\par The potential reads
\begin{align}
\mathcal{V}&= \mathcal{V}_{gaugino,\,1}+ \mathcal{V}_{gaugino,\,2}+
\mathcal{V}_{gravitino}\,,
\end{align}
where $\mathcal{V}_{gaugino,\,1}$ was constructed in the various
cases in the previous section, while:
\begin{align}
\mathcal{V}_{gaugino,\,2}+
\mathcal{V}_{gravitino}=\left(g^{\bar{m}n}\,\mathcal{D}_{\bar{m}}
\overline{V}^{M} \mathcal{D}_n\,V^{N}-3\,\overline{V}^{M}
V^{N}\right)\,\theta_M\,\theta_N\,, \label{salianka}
\end{align}
has just the form of an $\mathcal{N}=1$ potential generated by a superpotential:
\begin{equation}\label{superpatata}
W_h \,=\, \theta_M \, \Omega_{SF}^M\,,
\end{equation}
as discussed later in eq. (\ref{frattocchia}).
It is interesting to rewrite the above contribution to the potential
in terms of quantities which are familiar in the context of black
holes in supergravity. We use the property:
\begin{equation}
g^{\bar{m}n}\,\mathcal{D}_{\bar{m}} \overline{V}^{(M}
\mathcal{D}_n\,V^{N)}+\overline{V}^{(M}
V^{N)}=-\frac{1}{2}\,\mathcal{M}^{-1\,MN}\,,\label{contriV}
\end{equation}
where $\mathcal{M}_{MN}$ is the symplectic, symmetric,
negative-definite matrix defined later in eq. (\ref{inversem4}) in terms of the $\mathcal{N}_{\Lambda\Sigma}(z,{\bar z})$ matrix which appears in the $D=4$ Lagrangian (See eq.(\ref{d4generlag})).
Let us now define
the complex quantity $Z=V^M\,\theta_M$. The FI contribution (\ref{salianka}) to the
scalar potential can be recast in the form:
\begin{equation}
\mathcal{V}_{gaugino,\,2}+
\mathcal{V}_{gravitino}=-\frac{1}{2}\,\theta_M\,\mathcal{M}^{-1\,MN}\,\theta_N-4\,|Z|^2=V_{BH}-4\,|Z|^2\,.
\label{madrileno}
\end{equation}
The first term has the same form as the (positive-definite) effective
potential for a static black hole with charges
$\mathcal{Q}^M=\mathbb{C}^{MN}\theta_N$, while the second one is the
squared modulus of the black hole \emph{central charge}. Notice that
we can also write
\begin{equation}
V_{BH}=-\frac{1}{2}\,\theta_M\,\mathcal{M}^{-1\,MN}\,\theta_N=|Z|^2+g^{\bar{m}n}\,D_{\bar{m}}
\overline{Z} D_n\,Z>0\,.
\end{equation}
Let us now study the full scalar potential in the relevant cases.
\paragraph{\sc $\xi^{(2)}$ Critical.}
In this case, choosing
\begin{equation}
\xi^{(2)}=g\,(0,1,c,0,0,0,0,0)\,,
\end{equation}
we find for $\theta_M$ the following general solution to the
quadratic constraints:
\begin{equation}
\theta_M=(0,\frac{f_1}{c},\,f_1,\,0,\,0,\,f_2,\,\frac{f_2}{c},\,0)\,,
\end{equation}
where $f_1,\,f_2$ are constants.
\par The scalar potential reads:
\begin{equation}
\label{guliashi}
\mathcal{V}=g^2\,e^{-\mathfrak{h}_1-\mathfrak{h}_2-\mathfrak{h}_3}\,\left((\mathfrak{b}_1+c\,\mathfrak{b}_2)^2+(e^{\mathfrak{h}_1}-c\,e^{\mathfrak{h}_2})^2\right)+
\frac{e^{-\mathfrak{h}_3}}{c}\,\left[(f_1+f_2\,\mathfrak{b}_3)^2+f_2^2\,e^{2\,\mathfrak{h}_3}\right]\,.
\end{equation}
The above potential (if $g>0$) has an extremum only for
$c<0,\,f_2>0$ and:
\begin{equation}
\mathfrak{h}_1=\mathfrak{h}_2+\log(-c),\,\mathfrak{h}_3=-\log\left(-\frac{2c
g}{f_2}\right)\,,\,\,\,y_3=-\frac{f_1}{f_2}\,,\,\,y_1=-c\,y_2\,.
\end{equation}
The potential at the extremum is
\begin{equation}
\mathcal{V}_0= 4\,g\,f_2>0\,,
\end{equation}
while the squared scalar mass matrix reads:
\begin{equation}
\left.(\partial_r\partial_s \mathcal{V}\,g^{st})\right\vert_0={\rm
diag}(2,2,1,1,0,0)\times \mathcal{V}_0\,.
\end{equation}
In this way we retrieve the stable dS vacuum of \cite{mapietoine}, discussed in Subsect. \ref{stabdesitter}, the two
parameters $f_1,\,f_2$ being related to $e_1$ and the de
Roo-Wagemans angle.
\paragraph{\sc $\xi^{(2)}$ Doubly-Critical.}
In this case the constraints on $\theta_M$ impose:
\begin{align}
(g+g')\theta_1&=0\,,\,g\,\theta_2=0\,,\,g'\,\theta_3=0\,,\,g\,\theta^0=g'\,\theta^0=0\,,\,g\,\theta^1=g'\,\theta^1=0\,,\,
g\,\theta^2=g'\,\theta^2=0\,,\nonumber\\
g\,\theta^3&=g'\,\theta^3=0\,.
\end{align}
Under these conditions, unless $g=g'=0$, which is the case we shall
consider next, the FI contribution to the scalar potential vanishes.
\paragraph{\sc Case $\Theta_M{}^A=0$. Pure Fayet Iliopoulos gauging.}
In this case, we can act on $\theta_M$ by means of $\mathrm{U}_{SK}$ and
reduce the theta vector to its canonical normal form:
\begin{equation}
\theta_M=(0,f_1,f_2,f_3,f^0,0,0,0)\,.
\end{equation}
The scalar potential reads:
\begin{align}
\mathcal{V}&= \mathcal{V}_{gaugino,\,2}+
\mathcal{V}_{gravitino}=-\sum_{m=1}^3\,e^{-\mathfrak{h}_m}\,\left(f_mf^0\,(\mathfrak{b}_m^2+e^{2\mathfrak{h}_m})+f_n
f_p\,\right)\,,
\end{align}
where $n\neq p\neq m$. The truncation to the dilatons is consistent
and we find:
\begin{equation}
\label{doppiocritFI}
\left.\mathcal{V}\right\vert_{\mathfrak{b}_m=0}=-\sum_{m=1}^3\,\left(f_mf^0\,e^{\mathfrak{h}_m}+f_n
f_p\,e^{-\mathfrak{h}_m}\right)\,,
\end{equation}
which is extremized with respect to the dilatons by setting
\begin{equation}
e^{2\mathfrak{h}_m}=\frac{f_n f_p}{f_mf^0}\,,
\end{equation}
and the potential at the extremum reads:
\begin{equation}
\mathcal{V}_0=-6\,\varepsilon\,\sqrt{f^0 f_1 f_2 f_3}\,.
\end{equation}
This extremum exists only if $f^0 f_1 f_2 f_3>0$. This implies that
$\theta_M$ should be either in the orbit $i)$ ($\varepsilon=1$ in the above expression for $\mathcal{V}_0$) or in the orbit $ii)$ ($\varepsilon=-1$). Using the
analogy between $\theta_M$ and black hole charges, these two orbits
correspond to BPS and non-BPS with $\mathfrak{I}_4>0$ black holes. The extremum
condition for $V_{BH}$ fixes the scalar fields at the horizon
according to the attractor behavior. Now the potential has an
additional term $-4\,|Z|^2$ which, however, for the orbits $i)$,
$ii)$, has the same extrema as $V_{BH}$ since its derivative with
respect to $z^m$ is $-4 \mathcal{D}_m Z\,\bar{Z}$ which vanishes for the $i)$
orbit since at the extremum of $V_{BH}$ (BPS black hole horizon)
$\mathcal{D}_m Z=0$, and for the $ii)$ orbit since at the extremum of $V_{BH}$
(black hole horizon) $Z=0$.
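The extremal value quoted above can be checked symbolically; the following sketch (an editorial cross-check, taking all of $f^0,f_1,f_2,f_3$ positive, i.e. the $\varepsilon=1$ branch) verifies both the extremum condition and $\mathcal{V}_0=-6\sqrt{f^0 f_1 f_2 f_3}$:

```python
import sympy as sp

f0, f1, f2, f3 = sp.symbols('f0 f1 f2 f3', positive=True)
h1, h2, h3 = sp.symbols('h1 h2 h3', real=True)
f = {1: f1, 2: f2, 3: f3}
h = {1: h1, 2: h2, 3: h3}
triples = ((1, 2, 3), (2, 3, 1), (3, 1, 2))

# Dilaton-truncated FI potential: V = -sum_m (f_m f^0 e^{h_m} + f_n f_p e^{-h_m}).
V = sum(-(f[m]*f0*sp.exp(h[m]) + f[n]*f[p]*sp.exp(-h[m])) for m, n, p in triples)

# Extremum condition e^{2 h_m} = f_n f_p / (f_m f^0).
sol = {h[m]: sp.Rational(1, 2)*sp.log(f[n]*f[p]/(f[m]*f0)) for m, n, p in triples}

# The gradient vanishes at the extremum ...
for m in (1, 2, 3):
    assert sp.simplify(sp.diff(V, h[m]).subs(sol)) == 0
# ... and the extremal value is V_0 = -6 sqrt(f^0 f_1 f_2 f_3).
assert sp.simplify(V.subs(sol) + 6*sp.sqrt(f0*f1*f2*f3)) == 0
```

Each of the six terms contributes $-\sqrt{f^0 f_1 f_2 f_3}$ at the extremum, whence the factor $6$.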
\par We conclude that in the ``BPS'' orbit $i)$ the extremum corresponds to an AdS-vacuum where the scalar mass
spectrum reads as follows:
\begin{equation}
\left.(\partial_r\partial_s \mathcal{V}\,g^{st})\right\vert_0={\rm
diag}\left(\frac{2}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3},\frac{2}{3}\right)\times
\mathcal{V}_0<0\,.
\end{equation}
These models have provided a useful supergravity framework where to study black hole solutions in anti-de Sitter spacetime \cite{adsblackholes}.\par
In the ``non-BPS'' orbit $ii)$ the potential has a de Sitter extremum which, however, is not stable, having tachyonic directions:
\begin{equation}\left.(\partial_r\partial_s \mathcal{V}\,g^{st})\right\vert_0={\rm
diag}\left(-2,-2,2,2,2,2\right)\times
\mathcal{V}_0>0\,.
\end{equation}
We shall not consider this case in what follows.
\subsection{\sc Conclusions on the one-field cosmologies that can be derived from the gaugings of the $\mathcal{N}=2$ STU model}
Let us summarize the results of the above systematic discussion. From the gaugings of the $\mathcal{N}=2$ STU model one can obtain the following dilatonic potentials:
\begin{description}
\item[A)] \textbf{Critical Orbit without FI terms}. We have the potential:
\begin{equation}\label{purto1}
{V}\, =\, g^2\,\left(e^{-\frac{1}{2}(-\mathfrak{h}_1+\mathfrak{h}_2+\mathfrak{h}_3)}
+ \,e^{-\frac{1}{2}(\mathfrak{h}_1-\mathfrak{h}_2+\mathfrak{h}_3)}\right)^2\,.
\end{equation}
In this case we have a consistent truncation to one dilaton by setting: $ \mathfrak{h}_1 \, = \, \mathfrak{h}_2 \, = \, \ell \in \, \mathbb{R}$, since the derivatives of the potential with respect to $\mathfrak{h}_{1,2}$ vanish on such a line. The residual one dilaton potential is:
\begin{equation}\label{cucco1}
{V}\, =\, 4 \, g^2\, e^{- \mathfrak{h}_3}
\end{equation}
which upon use of the translation rule (\ref{babushka}) yields
\begin{equation}\label{fulatto}
\mathcal{V}\, =\, 12 \, g^2\, e^{- \frac{\varphi}{\sqrt{3}}}
\end{equation}
The above potential is trivially integrable, being a pure under critical exponential.
\item[B)] \textbf{Doubly Critical Orbit without FI terms}. We have the potential:
\begin{equation}\label{purto2}
{V}\, =\, \mbox{const}\, e^{-\mathfrak{h}_1-\mathfrak{h}_2-\mathfrak{h}_3}
\end{equation}
Introducing the following field redefinitions:
\begin{equation}\label{gurkina1}
\phi_1 \, = \, \mathfrak{h}_1+\mathfrak{h}_2+\mathfrak{h}_3 \quad ; \quad \phi_2 \, = \, \mathfrak{h}_2-\mathfrak{h}_3
\quad ; \quad \phi_3 \, = \, -2 \, \mathfrak{h}_1+\mathfrak{h}_2+\mathfrak{h}_3
\end{equation}
the kinetic term:
\begin{equation}\label{kinesi1}
\mbox{kin} \, = \, \frac{1}{2} \, \left (\dot{ \mathfrak{h}}_1^2+\dot{\mathfrak{h}}_2^2+\dot{\mathfrak{h}}_3^2 \right)
\end{equation}
is transformed into:
\begin{equation}\label{kinesi2}
\mbox{kin} \, = \, \frac{1}{6} \, \dot{ \phi}_1^2+\frac{1}{4} \, \dot{ \phi}_2^2+\frac{1}{12} \, \dot{ \phi}_3^2
\end{equation}
while the potential (\ref{purto2}) depends only on $\phi_1$. Hence we can consistently truncate to one field by setting
$\phi_2=\phi_3 \, = \, \mbox{const}$ and upon use of the translation rule (\ref{babushka}) we obtain a trivially integrable over critical exponential potential:
\begin{equation}\label{fulatto2}
\mathcal{V}\, =\, \mbox{const} \, e^{- \sqrt{3} \, \varphi}
\end{equation}
\item[C)] \textbf{Critical Orbit with FI terms}. This case leads to the potential (\ref{guliashi}) which, as we showed, reproduces the potential (\ref{potentissimo}) of the $\mathfrak{so}(2,1)\times \mathfrak{u}(1)$ gauging extensively discussed in sect. \ref{stabdesitter}. Such a potential admits a stable de Sitter vacuum and a consistent one dilaton truncation to a model with a $\cosh$ potential which is not integrable, since the intrinsic index $\omega$ does not match any one of the three integrable cases.
\item[D)] \textbf{Doubly Critical Orbit with FI terms}. Upon a constant shift of the dilatons in eq.(\ref{doppiocritFI}) this gauging leads to the following negative potential:
\begin{equation}\label{papalone}
V \, = \, - \, 2\, \sqrt{f^0 \, f_1 \, f_2 \, f_3} \sum_{i=1}^3 \, \cosh\left[ \mathfrak{h}_i \right]
\end{equation}
that has a stable anti de Sitter extremum. We have a consistent truncation to one field by setting to zero any two of the three dilatons. Upon use of the translation rule (\ref{babushka}) we find the potential:
\begin{equation}\label{gugullo}
\mathcal{V} \, = \, - \, \mbox{const}\left(2+\cosh\left[\frac{\varphi}{\sqrt{3}}\right]\right)
\end{equation}
which does not fit into any one of the integrable series of tables \ref{tab:families} and \ref{Sporadic}.
\end{description}
Hence, apart from pure exponentials without critical points, no integrable model can be obtained from any gauging of the $\mathcal{N}=2$ STU model.
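As a final cross-check of case D), one can verify symbolically (an editorial sketch, with all of $f^0,f_m$ positive) that the constant shift of the dilatons in eq.(\ref{doppiocritFI}) produces the pure cosh potential (\ref{papalone}) and its one-field truncation:

```python
import sympy as sp

f0, f1, f2, f3 = sp.symbols('f0 f1 f2 f3', positive=True)
h1, h2, h3 = sp.symbols('h1 h2 h3', real=True)
f = {1: f1, 2: f2, 3: f3}
h = {1: h1, 2: h2, 3: h3}
triples = ((1, 2, 3), (2, 3, 1), (3, 1, 2))

# Dilaton-truncated FI potential, eq. (doppiocritFI) in the text.
V = sum(-(f[m]*f0*sp.exp(h[m]) + f[n]*f[p]*sp.exp(-h[m])) for m, n, p in triples)

# Constant shift of each dilaton by its extremal value.
shift = {h[m]: h[m] + sp.Rational(1, 2)*sp.log(f[n]*f[p]/(f[m]*f0))
         for m, n, p in triples}
V_shifted = V.subs(shift, simultaneous=True)

root = sp.sqrt(f0*f1*f2*f3)

# The shifted potential is the pure cosh potential of case D).
target = (-2*root*(sp.cosh(h1) + sp.cosh(h2) + sp.cosh(h3))).rewrite(sp.exp)
assert sp.simplify(V_shifted - target) == 0

# One-field truncation: two dilatons set to zero.
V_one = V_shifted.subs({h2: 0, h3: 0})
assert sp.simplify(V_one - (-2*root*(2 + sp.cosh(h1))).rewrite(sp.exp)) == 0
```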
\section{\sc $\mathcal{N}=1$ models with a superpotential}
Let us now turn to consider the case of $\mathcal{N}=1$ Supergravity coupled to Wess--Zumino multiplets \cite{cfgv}.
Following the notations of \cite{castdauriafre}, the general bosonic Lagrangian of this class of models is\footnote{Observe that here we consider only the graviton multiplet coupled to Wess-Zumino multiplets. There are no gauge multiplets and no $D$-terms. The embedding mechanism discussed in \cite{secondosashapietro} is thus lost from the very beginning.}
\begin{equation}\label{n1sugra}
\mathcal{L}^{\mathcal{N}=1}_{SUGRA} \, = \, \sqrt{-g} \, \left[ \mathcal{R}[g] \, + \,2\, g^{HK}_{ij^\star} \, \partial_\mu z^i \, \partial^\mu {\bar z}^{j^\star} \, - \, 2\, V(z,{\bar z}) \,\right ]\ ,
\end{equation}
where the scalar metric is K\"ahler (the scalar manifold must be Hodge--K\"ahler)
\begin{eqnarray}
g_{ij^\star} &=& \partial_i \, \partial_{j^\star} \, \mathcal{K} \\
\mathcal{K} &=& \overline{\mathcal{K}} \, = \, \mbox{K\"ahler potential}
\end{eqnarray}
and the potential is
\begin{eqnarray}
V & = & 4 \, e^2 \,\exp \left[ \mathcal{K} \right]\left ( g^{ij^\star}\, \mathcal{D}_i W_h(z)\, \mathcal{D}_{j^\star} \overline{W_h}({\bar z}) \, - \, 3 \, |W_h(z)|^2 \right )\ ,
\label{frullini}
\end{eqnarray}
where the superpotential $W_h(z)$ is a holomorphic function. Furthermore
\begin{eqnarray}
\mathcal{D}_i \, W&=& \partial_i W \, + \, \partial_i \mathcal{K} \, W \nonumber \\
\mathcal{D}_{j^\star}\, { \overline{W}} &=& \partial_{j^\star} \overline{W} \, + \,
\partial_{j^\star} \mathcal{K} \, \overline{W}\label{felucide}
\end{eqnarray}
are usually referred to as K\"ahler covariant derivatives. They arise since $W_h(z)$, rather than a function, is actually a holomorphic section of the line bundle $\mathcal{L} \rightarrow \mathcal{M}_{K} $ over the K\"ahler manifold whose first Chern class is the K\"ahler class, as required by the definition of Hodge--K\"ahler manifolds. In other words, $c_1\left( \mathcal{L}\right) \, = \, \left [ \mathrm{K} \right ]$, the latter being the K\"ahler two--form. The fiber metric on this line bundle is $h \, = \, \exp\left [ \mathcal{K}\right]$, so that a generic section $W(z,{\bar z})$
of $\mathcal{L}$ (not necessarily holomorphic) admits the invariant norm
\begin{equation}\label{invanorma}
\parallel W \parallel ^2 \, \equiv \, W\, \overline{W} \, \exp\left [ \mathcal{K}\right]
\end{equation}
A generic gauge transformation of the line bundle takes the form
\begin{eqnarray}
W^\prime(z,{\bar z}) &=& \exp\left [ \frac{1}{2} f(z)\right ] \times W(z,{\bar z}) \nonumber\\
{\overline{W}}^\prime(z,{\bar z}) &=& \exp\left [ \frac{1}{2} \overline{f(z)}\right ] \times \overline{W}(z,{\bar z}) \, \label{frattocchia}
\end{eqnarray}
where $f(z)$ is a holomorphic complex function. Under the gauge transformation (\ref{frattocchia}), the fiber metric changes according to
\begin{equation}\label{coriandolo}
\mathcal{K}^\prime(z,{\bar z}) \, = \, \mathcal{K}(z,{\bar z}) \, - \, \mbox{Re} f(z)
\end{equation}
while the norm (\ref{invanorma}) stays invariant. It is important to stress that the same K\"ahler metric
$g_{ij^\star} \, = \, \partial_i \partial_{j^\star} \, \mathcal{K}$ is obtained from $\mathcal{K}^\prime$, since ${\rm Re}\, f(z)$, being the real part of a holomorphic function, is annihilated by $\partial_i \partial_{j^\star}$.
All transition functions from one local trivialization of the line bundle to another one are of the form (\ref{frattocchia}) and
(\ref{coriandolo}), with an appropriate $f(z)$. The fiber metric introduces a canonical connection $\theta \, = \, h^{-1}\partial h$ leading to the covariant derivatives (\ref{felucide}). In covariant notation, the potential (\ref{frullini}) takes the form
\begin{equation}\label{rucolafina}
V \, = \, \, 4 \, e^2 \, \left( \parallel \mathcal{D} W \parallel^2 \, - \, 3 \, \parallel W \parallel^2 \right)
\end{equation}
where by definition
\begin{eqnarray}
\parallel \mathcal{D} W \parallel^2 &=& g^{ij^\star}\mathcal{D}_i \, W \, \mathcal{D}_{j^\star}\, \overline{W} \,\exp \left [ \mathcal{K} \right]\nonumber \\
\, \parallel W \parallel^2 &=& W \, { \overline{W}}\, \exp\left [ \mathcal{K}\right] \label{trifoglio}
\end{eqnarray}
Let us now consider the notion of covariantly holomorphic section, defined by the condition
\begin{equation}\label{condizia}
\mathcal{D}_{j^\star}\,W \, = \,0
\end{equation}
From any covariantly holomorphic section, one can retrieve a holomorphic one by setting
\begin{equation}\label{olomorfina}
W_{h}(z) \, = \, \exp\left [ - \,\frac{1}{2} \, \mathcal{K}\right] \,W \, \quad \Rightarrow \quad \partial_{j^\star} W_h \, = \, 0
\end{equation}
\par
By hypothesis the superpotential $W$ that appears in the potential (\ref{rucolafina}) is covariantly holomorphic. The compact notation (\ref{rucolafina}) is very instructive, since it stresses that the scalar potential results from the difference of two positive definite terms originating from two different contributions. The first contribution is the absolute square of the auxiliary fields appearing in the supersymmetry transformations of the spin $\frac{1}{2}$--fermions (the chiralinos belonging to Wess--Zumino multiplets), while the second is the square of the auxiliary field appearing in the supersymmetry transformation of the spin
$\frac{3}{2}$--gravitino. Indeed
\begin{eqnarray}
\delta_{SUSY} \chi^i & = & {\rm i} \, \partial_\mu z^i \, \gamma^\mu \, \epsilon^\bullet \, + \, \mathcal{H}^i \, \epsilon_\bullet \\
\delta_{SUSY} \Psi_{\mu\bullet} &=& \mathcal{D}_\mu \, \epsilon_\bullet \, + \, S \, \gamma_\mu \, \epsilon^\bullet
\label{ballaconilupi}
\end{eqnarray}
where $\epsilon^\bullet \, , \epsilon_\bullet$ denote the two chiral projections of the supersymmetry parameter and the scalar field dependent auxiliary fields are
\begin{eqnarray}
S &=& {\rm i}\, e\, \sqrt{\parallel W\parallel^2} \, = \, {\rm i}\, e\, \sqrt{| W_h|^2} \,
\exp\left[\frac{1}{2} \mathcal{K} \right] \nonumber\\
\mathcal{H}^i &=& 2 \, e\, g^{ij^\star} \, \, \mathcal{D}_{j^\star} W \, \exp\left[\frac{1}{2} \mathcal{K} \right]\label{auxiliary}
\end{eqnarray}
\par
This structure of the potential shows that any de Sitter vacuum characterized by a potential $V(z_0)$ that is positive at the extremum necessarily breaks supersymmetry, since positivity implies that the chiralino auxiliary fields are different from zero
in the vacuum: $\langle\mathcal{H}^i\rangle \, = \, \mathcal{H}^i(z_0) \, \ne \, 0$.
Let us also stress that the parameter $e$ appearing in the potential is just a dimensionful parameter which fixes the scale of all the masses generated by the gauging, \textit{i.e.} by the introduction of a superpotential.
\subsection{\sc One--field models}
In this general framework the simplest possibility is a model with one scalar multiplet assigned to the homogeneous
K\"ahler manifold
\begin{equation}\label{tripini}
\mathcal{M}_{K} \, = \, \frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}}
\end{equation}
and K\"ahler potential
\begin{equation}\label{kelero1}
\mathcal{K} \, = \, - \, \log \, \left[ (z \, - \, \bar{z})^q\right ]\ ,
\end{equation}
which leads to the K\"ahler metric
\begin{equation}\label{kelero2}
g_{z\bar{z}} \, = \, - \, \frac{q}{(z \, - \, \bar{z})^2}\ ,
\end{equation}
where $q$ is an integer number. Its preferred value, $q=3$, corresponds to the $\mathcal{N}=1$ truncation of the $\mathcal{N}=2$ model $S^3$ which, in turn, arises from the $\mathrm{STU}$ model discussed in the previous section upon identification of the three scalar multiplets $S$, $T$ and $U$. Alternatively, the case $q=1$ corresponds to the $\mathcal{N}=1$ truncation of an $\mathcal{N}=2$ theory with vanishing Yukawa couplings. Because of their $\mathcal{N}=2$ origin, both instances of the familiar Poincar\'e--Lobachevsky plane are not only Hodge--K\"ahler but actually \textit{special K\"ahler} manifolds.
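As a minimal symbolic cross-check (our addition, not part of the original derivation), one can verify that the metric (\ref{kelero2}) follows from the K\"ahler potential (\ref{kelero1}), treating $z$ and ${\bar z}$ as formally independent variables:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
q = sp.symbols('q', positive=True)

# Kaehler potential K = -log[(z - zbar)^q], written as -q log(z - zbar)
K = -q * sp.log(z - zb)

# metric g_{z zbar} = d_z d_zbar K
g = sp.diff(K, z, zb)

assert sp.simplify(g + q/(z - zb)**2) == 0   # g = -q/(z - zbar)^2
```

The check runs with any recent version of sympy.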
In the notation of \cite{PietroSashaMarioBH1}, the holomorphic symplectic section governing this geometry is given
by the four--component vector
\begin{equation}\label{seziona}
\Omega \, = \,\left\{-\sqrt{3}z^2,z^3,\sqrt{3} z,1\right\}\ ,
\end{equation}
which transforms in the spin $j\, = \, \frac{3}{2}$ of the $\mathrm{SL(2,\mathbb{R})} \sim \mathrm{SU(1,1)}$ group that happens to be
four--dimensional symplectic
\begin{equation}
\label{frilli}
\mathrm{SL(2,\mathbb{R})} \, \ni \,\left(\begin{array}{ll}
a & b \\
c & d
\end{array} \right) \, \Longrightarrow \, \left(
\begin{array}{cccc}
a^2 d+2 a b c & -\sqrt{3}\, a^2 c & -b^2 c-2 a b d & -\sqrt{3}\, b^2 d \\
-\sqrt{3}\, a^2 b & a^3 & \sqrt{3}\, a b^2 & b^3 \\
-b c^2-2 a c d & \sqrt{3}\, a c^2 & a d^2+2 b c d & \sqrt{3}\, b d^2 \\
-\sqrt{3}\, c^2 d & c^3 & \sqrt{3}\, c d^2 & d^3
\end{array}
\right) \, \in \, \mathrm{Sp(4,\mathbb{R})}
\end{equation}
where the preserved symplectic metric is
\begin{equation}\label{goriaci}
\mathbb{C} \, = \, \left(
\begin{array}{llll}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{array}
\right)
\end{equation}
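The symplectic property of the embedding (\ref{frilli}) can be verified symbolically; a sympy sketch (our addition) imposes the unimodularity condition $ad-bc=1$ and checks that the $4\times 4$ matrix preserves $\mathbb{C}$:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
s3 = sp.sqrt(3)

# spin j = 3/2 image of a generic SL(2,R) element, as in eq. (frilli)
D = sp.Matrix([
    [a**2*d + 2*a*b*c,  -s3*a**2*c,  -b**2*c - 2*a*b*d, -s3*b**2*d],
    [-s3*a**2*b,         a**3,        s3*a*b**2,          b**3],
    [-b*c**2 - 2*a*c*d,  s3*a*c**2,   a*d**2 + 2*b*c*d,   s3*b*d**2],
    [-s3*c**2*d,         c**3,        s3*c*d**2,          d**3],
])

# the preserved symplectic metric of eq. (goriaci)
C = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])

# impose a d - b c = 1 and check D^T C D = C
unimodular = {d: (1 + b*c)/a}
assert sp.simplify((D.T*C*D - C).subs(unimodular)) == sp.zeros(4, 4)
```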
According to the general set up of Special Geometry (for a recent review see \cite{pietroGR}), the K\"ahler potential (\ref{kelero1}) is retrieved letting
\begin{equation}\label{fagano}
\mathcal{K}(z,{\bar z}) \, = \, - \log \left[ - {\rm i} \Omega \, \mathbb{C} \, \
\overline{\Omega} \right ]
\end{equation}
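Indeed a short symbolic check (our addition) confirms that, with the section (\ref{seziona}) and the metric (\ref{goriaci}), the symplectic pairing reduces to $(z-{\bar z})^3$, so that (\ref{fagano}) reproduces (\ref{kelero1}) with $q=3$, up to a constant K\"ahler gauge term from the $-{\rm i}$ inside the logarithm:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
s3 = sp.sqrt(3)

Omega  = sp.Matrix([-s3*z**2,  z**3,  s3*z,  1])   # holomorphic section (seziona)
Omegab = sp.Matrix([-s3*zb**2, zb**3, s3*zb, 1])   # formal conjugate section
C = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])

# Omega^T C Omegabar = (z - zbar)^3
pairing = sp.expand((Omega.T*C*Omegab)[0, 0])
assert pairing == sp.expand((z - zb)**3)
```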
Independently of the special structure that is essential for $\mathcal{N}=2$ supersymmetry, at the $\mathcal{N}=1$ level
one can consider general superpotentials that are consistent with the Hodge--K\"ahler structure, provided they are holomorphic,
namely provided they can be expanded in a power series of the unique complex field $z$
\begin{equation}\label{supipoti}
W_h(z) \, = \, \sum_{n \, \in \, \mathbb{N}} \, c_n \, \, z^n \ ,
\end{equation}
where the $c_n$ are complex coefficients. The sum over $n$ extends to a finite or infinite subset of the natural
numbers $\mathbb{N}$, while rational or irrational powers leading to cuts are excluded in order to obtain properly transforming
sections of the Hodge line bundle.
Notwithstanding this wider choice available at the $\mathcal{N}=1$ level, it is interesting to note that, in discussing black--hole solutions of the corresponding $\mathcal{N}=2$ model, one is led to an effective sigma--model whose Lagrangian closely resembles the effective Lagrangian of the cosmological sigma--model and displays a potential that is also built in terms of a superpotential, although the latter is more restricted. The comparison between cosmological and black--hole constructions provides inspiring hints on the choice of appropriate superpotentials.
Let us briefly see how this works.
\subsection{\sc Cosmological versus black--hole potentials}
The common starting point for black--hole and cosmological solutions is the general form of the bosonic portion of the four--dimensional Supergravity, which takes the form (for a recent review see Chapter 8, Vol 2 in \cite{pietroGR} and all references therein)
\begin{eqnarray}
\mathcal{L}^{(4)} &=& \sqrt{|\mbox{det}\, g|}\left[R[g] - \frac{1}{2}
\partial_{ {\mu}}\phi^a\partial^{ {\mu}}\phi^b \mathfrak{g}_{ab}(\phi) \,
+ \,
2 \, \mbox{Im}\mathcal{N}_{\Lambda\Sigma}(\phi) \, F_{ {\mu} {\nu}}^\Lambda
F^{\Sigma| {\mu} {\nu}}\right. \nonumber\\
&&+\left. \, e^2 \, V(\phi) \, \right] \, + \,
\mbox{Re}\mathcal{N}_{\Lambda\Sigma}(\phi)\, F_{ {\mu} {\nu}}^\Lambda
F^{\Sigma}_{ {\rho} {\sigma}}\epsilon^{ {\mu} {\nu} {\rho} {\sigma}}\, ,
\label{d4generlag}
\end{eqnarray}
where $F_{ {\mu} {\nu}}^\Lambda\equiv (\partial_{ {\mu}}A^\Lambda_{ {\nu}}-\partial_{ {\nu}}A^\Lambda_{ {\mu}})/2$ are the field strengths of the vector fields, $\phi^a$ denotes the collection of $n_{\mathrm{s}}$ scalar fields
parameterizing the scalar manifold $ \mathcal{M}_{scalar}^{D=4}$, $\mathfrak{g}_{ab}(\phi)$ is its metric, and the field--dependent complex matrix $\mathcal{N}_{\Lambda\Sigma}(\phi)$ is fully determined by constraints imposed by duality symmetries. In addition, the scalar potential $V(\phi)$ is determined by the appropriate gauging procedures, while $e$ is the gauge coupling constant, which vanishes in ungauged supergravity.
\par
Although the discussion can be extended also to higher $\mathcal{N}$, for simplicity we focus on the $\mathcal{N}=2,1$ cases, where the real scalar fields are grouped in complex combinations $z^i$ and their kinetic term becomes
\begin{equation}\label{calerone}
\frac{1}{2}
\partial_{ {\mu}}\phi^a\partial^{ {\mu}}\phi^b \mathfrak{g}_{ab}(\phi) \, \mapsto \, 2 \, g_{ij^\star}(z,{\bar z}) \, \partial_\mu z^i \,
\partial^{ {\mu}} \, {\bar z}^{j^\star}
\end{equation}
In the case of extremal black--hole solutions of ungauged Supergravity ($e=0$), the four--dimensional metric is of the form
\begin{equation}\label{metruzza}
ds^2_{BH} \, = \, - \, \exp[U(\tau)] \, dt^2 \, + \, \, \exp[- U(\tau)] \, dx^i \otimes dx^j \, \delta_{ij}
\end{equation}
where $\tau \, = \, - \,\left(\sum_{i=1}^3 \, x_i^2\right)^{\,-\,\frac{1}{2}}$ is the (negative) reciprocal of the radial distance, one is led to the effective Euclidean $\sigma$--model (for a recent review see Chapter 9, Vol 2 in \cite{pietroGR} and all references therein)
\begin{eqnarray}
S_{BH} & \equiv & \int \, {\cal L}_{BH}(\tau) \, d\tau \quad \nonumber\\
{\cal L}_{BH}(\tau ) & = & \frac{1}{4} \,\left( \frac{dU}{d\tau} \right)^2 +
g_{ij^\star} \, \frac{dz^i}{d\tau} \, \frac{dz^{j^\star}}{d\tau} + e^{U}
\, V_{BH}(z, {\bar z} , \mathcal{Q})
\label{effact}
\end{eqnarray}
The geodesic potential $V_{BH}(z, {\bar z} , \mathcal{Q})$ is defined by
\begin{equation}\label{geopotentissimo}
V_{BH}(z, {\bar z}, \mathcal{Q})\, = \, \frac{1}{4} \, \mathcal{Q}^t \, {\cal M}_4^{-1}\left( {\cal N}\right) \, \mathcal{Q} \ .
\end{equation}
Here $\mathcal{Q}$ is the vector of electric and magnetic charges of the hole,
which transforms in the same representation of the K\"ahler isometry group $\mathrm{G}$ as the symplectic section of Special Geometry.
In the $S^3$ case $G=\mathrm{SL(2,\mathbb{R})}$ and the four charges of the hole
\begin{equation}\label{grutinata}
\mathcal{Q} \, = \, \left\{p_1,p_2,q_1,q_2\right\}
\end{equation}
transform by means of the matrix (\ref{frilli}).
The $(\mathrm{2n+2})\times(\mathrm{2n+2}) $ matrix ${\cal M}_4^{-1}$ appearing in eq.~(\ref{geopotentissimo}) is given in terms of the $(\mathrm{n+1})\times(\mathrm{n+1}) $ matrix $\mathcal{N}_{\Lambda\Sigma}(\phi)$ that appears in the $4D$ Lagrangian. In detail,
\begin{eqnarray}
\mathcal{M}_4^{-1} & = &
\left(\begin{array}{c|c}
{\mathrm{Im}}\mathcal{N}\,
+\, {\mathrm{Re}}\mathcal{N} \, { \mathrm{Im}}\mathcal{N}^{-1}\, {\mathrm{Re}}\mathcal{N} & \, -{\mathrm{Re}}\mathcal{N}\,{ \mathrm{Im}}\,\mathcal{N}^{-1}\\
\hline
-\, { \mathrm{Im}}\mathcal{N}^{-1}\,{\mathrm{Re}}\mathcal{N} & { \mathrm{Im}}\mathcal{N}^{-1} \
\end{array}\right) \ , \label{inversem4}
\end{eqnarray}
where $n$ is the number of vector multiplets coupled to Supergravity.
Starting instead from the spatially flat cosmological metric
\begin{equation}\label{metruzzolla}
ds^2_{Cosm} \, = \, - \, \exp[3A(t)] \, dt^2 \, + \, \, \exp[2 A(t)] \, dx^i \otimes dx^j \, \delta_{ij}
\end{equation}
which, in the language of the preceding sections, corresponds to the gauge $\mathcal{B}=3A$, one is led to the effective sigma model
\begin{eqnarray}
S_{Cosm} & \equiv & \int \, {\cal L}_{Cosm}(t) \, dt \quad \nonumber\\
{\cal L}_{Cosm}(t) & = & - \,\frac{3}{2} \,\left( \frac{dA}{dt} \right)^2 +
g_{ij^\star} \, \frac{dz^i}{dt} \, \frac{dz^{j^\star}}{dt} + e^{6 A}
\, V_{Cosm}(z, {\bar z})
\label{effactbis}
\end{eqnarray}
where $V_{Cosm}(z, {\bar z})\, = \, e^2 \, V(\phi)$ is the scalar potential produced by gauging that, in an $\mathcal{N}=1$ theory, or in an $\mathcal{N}=2$ one with only abelian gauge groups (Fayet--Iliopoulos terms), admits the representation in terms of a holomorphic superpotential recalled in eq.(\ref{frullini}).
The similarity between the cosmological and black--hole cases becomes striking if one recalls that the black--hole geodesic potential (\ref{geopotentissimo}) admits the alternative representation
\begin{eqnarray}\label{potenzialusgeodesicus}
V_{BH}(z, {\bar z}, \mathcal{Q})&= & -\,\frac{1}{2} \,\left( \vert Z \vert ^2 + g^{ij^\star} \mathcal{D}_i Z \, \mathcal{D}_{j^\star} \bar{Z} \right)\ .
\end{eqnarray}
Here $Z$ denotes the field--dependent central charge of the supersymmetry algebra
\begin{equation}\label{centralcharge}
Z \, \equiv \, \exp \left[ \frac{1}{2} \, \mathcal{K}(z,{\bar z})\right] \, \mathcal{Q}^T \, \mathbb{C} \, \Omega (z)\ ,
\end{equation}
$\Omega (z)$ denotes the holomorphic symplectic section of special K\"ahler geometry (that of eq.~(\ref{Omegabig})
for the $\mathrm{STU}$ model, or that of eq.~(\ref{seziona}) for the $S^3$ model) and $ \mathcal{K}(z,{\bar z})$ denotes the K\"ahler potential.
Introducing the black--hole holomorphic superpotential
\begin{equation}\label{governoladro}
W_{BH}(z) \, \equiv \, \mathcal{Q}^T \, \mathbb{C} \, \Omega (z)
\end{equation}
eq.~(\ref{potenzialusgeodesicus}) for the geodesic potential can be recast in the form
\begin{eqnarray}\label{potenzialato}
V_{BH}(z, {\bar z}, \mathcal{Q})&= & -\,\frac{1}{2} \,\exp \left[\mathcal{K}(z,{\bar z})\right] \,
\left( g^{ij^\star} \mathcal{D}_i W_{BH} \, \mathcal{D}_{j^\star} \bar{W}_{BH} + \vert W_{BH} \vert ^2 \right)
\end{eqnarray}
which is almost identical to eq.~(\ref{frullini}) yielding the cosmological potential, up to a crucial change of sign and coefficient. The coefficient $-3$ of the second term becomes $+1$, and in this fashion the black hole potential is strictly positive definite since it is the sum of two squares. Yet the entire discussion suggests that black--hole superpotentials, that are group theoretically classified by the available $\mathrm{G}$--orbits of charge vectors $\mathcal{Q}$, form a good class of superpotentials also for Gauged Supergravity models. Indeed we already saw, by means of the systematic analysis of the STU model, that black--hole superpotentials encode and exhaust the available abelian gaugings for $\mathcal{N}=2$ supergravity theories.
\subsection{\sc Cosmological Potentials from the $S^3$ model}
Relying on the preceding discussion, let us consider the abelian gaugings of the $S^3$ model provided by the superpotential
\begin{equation}
\label{chiridone}
W_{\mathcal{Q}}(z) \, = \, \mathcal{Q}^T \,\mathbb{C}\, \Omega \, = \, -q_2 z^3+\sqrt{3} q_1
z^2+\sqrt{3} p_1 z+p_2 \ ,
\end{equation}
which happens to be the most general third--order polynomial. Let us stress that in multi--field models based on larger special K\"ahler homogeneous manifolds $\mathrm{G/H}$, despite the existence of many coordinates $z_i$, the order of the superpotential will stay three since this is the polynomial order of the symplectic section for all such special geometries. Inserting (\ref{chiridone}) into
(\ref{frullini}) yields the four--parameter potential
\begin{equation}\label{4parami}
V(z,{\bar z},\mathcal{Q}) \, = \, -\frac{{\rm i} \left(2\, p_1^2+\left((z+{\bar z})\, q_1+2 \sqrt{3}\, z {\bar z}\, q_2\right) p_1+2\, z {\bar z}\, q_1^2+p_2 \left(3 (z+{\bar z})\, q_2-2 \sqrt{3}\, q_1\right)\right)}{z-{\bar z}}
\end{equation}
in which one can decompose $z$ into its real and imaginary parts according to
\begin{equation}\label{donnaiolo}
z \, = \, {\rm i} \, e^\mathfrak{h} \, + \, \mathfrak{b}
\end{equation}
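The four--parameter potential (\ref{4parami}) can be cross-checked symbolically. The following sympy sketch (our addition) assumes the normalization $V = -{\rm i}\, e^{\mathcal{K}}\left(g^{z{\bar z}}\,\mathcal{D}W\,\overline{\mathcal{D}W} - 3\,W\overline{W}\right)$ with overall constants absorbed, which is our reading of the conventions implicit in (\ref{frullini}) and (\ref{4parami}):

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
p1, p2, q1, q2 = sp.symbols('p1 p2 q1 q2', real=True)
s3 = sp.sqrt(3)

# superpotential (chiridone) and its formal conjugate
W  = -q2*z**3  + s3*q1*z**2  + s3*p1*z  + p2
Wb = -q2*zb**3 + s3*q1*zb**2 + s3*p1*zb + p2

K    = -3*sp.log(z - zb)     # Kaehler potential for q = 3
eK   = (z - zb)**(-3)        # exp(K), formally
ginv = -(z - zb)**2/3        # inverse of g = -3/(z - zbar)^2

DW  = sp.diff(W, z)   + sp.diff(K, z)*W     # Kaehler covariant derivatives
DWb = sp.diff(Wb, zb) + sp.diff(K, zb)*Wb

# potential in the normalization of (4parami); the overall -i is an assumption
Vformula = -sp.I*eK*(ginv*DW*DWb - 3*W*Wb)

Vlisted = -sp.I*(2*p1**2 + ((z + zb)*q1 + 2*s3*z*zb*q2)*p1
                 + 2*z*zb*q1**2 + p2*(3*(z + zb)*q2 - 2*s3*q1))/(z - zb)

assert sp.simplify(Vformula - Vlisted) == 0
```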
Not every choice of the charge vector $\mathcal{Q}$ allows for a consistent truncation to vanishing axion, which requires the condition
\begin{equation}\label{lordoftherings}
\partial_\mathfrak{b} \,V(z,{\bar z},\mathcal{Q})|_{\mathfrak{b}=0} \, = \,0
\end{equation}
and yet there is a representative with such a property for every $\mathrm{SL(2,\mathbb{R})}$ orbit in the $j=\frac{3}{2}$ representation except for the largest one. Following the results of \cite{noinilpotenti} one can identify the orbits
\begin{enumerate}
\item The very small orbit with a parabolic stability group: $\mathcal{O}_1 \, = \, \left\{p_1\to 0,\,p_2\to 0,\,q_1\to 0,\,q_2\to \mathfrak{q}\right\}$
\item The small orbit with no stability group: $\mathcal{O}_2 \, = \, \left\{p_1\to \sqrt{3}\,\mathfrak{p},\,p_2\to 0,\,q_1\to 0,\,q_2\to 0\right\}$
\item The large orbit with a $\mathbb{Z}_3$ stability group: $\mathcal{O}_3 \, = \, \left\{p_1\to 0,\,p_2\to \mathfrak{p},\,q_1\to -\sqrt{3}\, \mathfrak{q},\,q_2\to 0\right\}$ ($\mathfrak{p}\mathfrak{q}<0$, regular BPS in black--hole constructions)
\item The large orbit with no stability group: $\mathcal{O}_4 \, = \, \left\{p_1\to 0,\,p_2\to \mathfrak{p},\,q_1\to \sqrt{3}\, \mathfrak{q},\,q_2\to 0\right\}$ ($\mathfrak{p}\mathfrak{q}>0$, regular non--BPS in black--hole constructions)
\item The very large orbit with no stability group: $\mathcal{O}_5 \, = \, \left\{p_1\to p_1,\,p_2\to 0,\,q_1\to q_1,\,q_2\to q_2\right\}$ \ ,
\end{enumerate}
and the following superpotentials and potentials:
\begin{description}
\item[$\mathcal{O}_1$] The superpotential is purely cubic $W\, = \, -\mathfrak{q} \,z^3$ and the potential vanishes
\begin{equation}\label{flatto}
V \, = \, 0
\end{equation}
This is an instance of flat potentials \cite{noscale}: supersymmetry is broken by the presence of non--vanishing auxiliary fields, yet the vacuum energy is exactly zero and the ground state is Minkowski space.
\item[$\mathcal{O}_2$] The superpotential is linear $W\, = \, 3 \mathfrak{p} z$ and the consistent truncation to zero axion yields a pure exponential
\begin{equation}\label{expoto}
V \, = \, -3 e^{-\mathfrak{h}} \mathfrak{p}^2
\end{equation}
This potential is trivially integrable.
\item[$\mathcal{O}_3$] The superpotential is quadratic $W\, = \, \mathfrak{p}-3 \mathfrak{q} z^2$ and the consistent truncation to zero axion yields the following potential
\begin{equation}\label{cossho}
V \, = \, -3 e^{-\mathfrak{h}} \mathfrak{q} \left(\mathfrak{p}+e^{2 \mathfrak{h}} \,
\mathfrak{q}\right)\, \simeq \, - 3 \mathfrak{q}^2 \, \cosh \hat{\mathfrak{h}}
\end{equation}
The last form of the potential can always be achieved by means of a constant shift of the scalar field $\mathfrak{h}\mapsto \mathfrak{h} + \mbox{const}$. In this case the intrinsic index is:
\begin{equation}\label{formidabile}
\omega \, = \, \frac{1}{3}
\end{equation}
since the kinetic term of the $S^3$--model corresponds to $q=3$. This value differs from the $\omega = 1$ obtained from the non--abelian $\mathfrak{so}(1,2)$--gauging of the same model, and it also differs from both of the integrable indices: $\omega \ne \sqrt{3}$ and $\omega \ne \frac{2}{\sqrt{3}}$.
This result confirms what we already learned: consistent one--field truncations of Gauged Supergravity easily yield cosmological models of the $\cosh$--type, yet non--integrable ones. It is interesting to note that the $\cosh$ case is in correspondence with the regular BPS black holes.
\item[$\mathcal{O}_4$] The superpotential is quadratic $W\, = \, \mathfrak{p}+3 \mathfrak{q} z^2$ but with a different relative sign between the constant and quadratic terms. The consistent truncation to zero axion yields the following potential
\begin{equation}\label{cosshobis}
V \, = \, 3\, e^{-\mathfrak{h}}\, \mathfrak{p} \mathfrak{q}-3\, e^{\mathfrak{h}}\, \mathfrak{q}^2\, \simeq \, - 3 \mathfrak{q}^2 \, \sinh \hat{\mathfrak{h}}
\end{equation}
As in the previous case, the last form of the potential can always be achieved by means of a constant shift of the scalar field. It is interesting to note that the $\sinh$ case of the potential is in correspondence with the regular non--BPS black holes. Once again the index $\omega$ is not a critical one for integrability.
\item[$\mathcal{O}_5$] In this case no consistent truncation to zero axion exists.
\end{description}
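The zero-axion truncations listed above can be checked directly from the four--parameter potential (\ref{4parami}); a sympy sketch (our addition) substitutes the orbit representatives and the slice $z = {\rm i}\, e^{\mathfrak{h}}$:

```python
import sympy as sp

h = sp.symbols('h', real=True)
p, q = sp.symbols('p q', real=True)
z, zb, p1, p2, q1, q2 = sp.symbols('z zbar p1 p2 q1 q2')
s3 = sp.sqrt(3)

# the four-parameter potential (4parami)
V = -sp.I*(2*p1**2 + ((z + zb)*q1 + 2*s3*z*zb*q2)*p1 + 2*z*zb*q1**2
           + p2*(3*(z + zb)*q2 - 2*s3*q1))/(z - zb)

ax0 = {z: sp.I*sp.exp(h), zb: -sp.I*sp.exp(h)}   # zero-axion slice

def orbit(charges):
    return sp.simplify(V.subs(charges).subs(ax0))

# O1: W = -q z^3 gives a flat potential (flatto)
assert orbit({p1: 0, p2: 0, q1: 0, q2: q}) == 0
# O2: W = 3 p z gives a pure exponential (expoto)
assert sp.simplify(orbit({p1: s3*p, p2: 0, q1: 0, q2: 0}) + 3*sp.exp(-h)*p**2) == 0
# O3: W = p - 3 q z^2 gives the cosh-like potential (cossho)
assert sp.simplify(orbit({p1: 0, p2: p, q1: -s3*q, q2: 0})
                   + 3*sp.exp(-h)*q*(p + sp.exp(2*h)*q)) == 0
# O4: W = p + 3 q z^2 gives the sinh-like potential (cosshobis)
assert sp.simplify(orbit({p1: 0, p2: p, q1: s3*q, q2: 0})
                   - (3*sp.exp(-h)*p*q - 3*sp.exp(h)*q**2)) == 0
```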
\section{\sc The supersymmetric integrable model with one multiplet}
\label{integsusymodel}
The unique integrable model that so far we have been able to fit into the considered supersymmetric framework with just one multiplet (the K\"ahler metric being fixed once and for all by the choice (\ref{tripini})) belongs to the series $I_2$ of table \ref{tab:families} and occurs for the under--critical value $\gamma = \frac{2}{3}$. Before proceeding with the further analysis of this particular integrable model, it is appropriate to stress that
in a couple of separate publications Sagnotti and collaborators \cite{dks},\cite{dkps} have also shown that the phenomenon of climbing scalars, displayed by all of the integrable models we were able to classify, has the potential ability to explain the oscillations in the low angular momentum part of the CMB spectrum, apparently observed by PLANCK. In his recent talk given at the Dubna SQS2013 workshop, our coauthor Sagnotti has also shown a best fit to the PLANCK data for the low $\ell$ part of the spectrum, by using precisely the series of integrable potentials $I_2$\footnote{In comparing the following equation with the Table of paper \cite{primopapero}, please note the coefficient $\sqrt{3}$ appearing in the exponents that has been introduced to convert the unconventional normalization of the field $\varphi$ used there to the canonical normalization of the field $\phi$ used here.}
\begin{equation}\label{gammaserie}
V(\phi) \, = \, a \, \exp\left[ 2\, \sqrt{3} \, \gamma \, \phi\right] + b \, \exp\left[ \sqrt{3} \, (\gamma +1)\, \phi\right]
\end{equation}
with the particularly nice value $\gamma \, = \, -\frac{7}{6}$ (see \cite{secondosashapietro} for details about the D--map insertion into supergravity). Here the different subcritical value $\gamma \, = \, \frac{2}{3}$ is selected by supergravity when we try to realize the integrable model through a superpotential (F--embedding).
Indeed this potential can be obtained from the $S^3$--model with a carefully calibrated and unique superpotential that we now describe. We immediately anticipate that such a superpotential is not of the form discussed in the previous section and therefore strictly corresponds to an $\mathcal{N}=1$ theory, not to an abelian gauging of the $\mathcal{N}=2$ model. Technically, the difficulty met when trying to fit an integrable case into supergravity coupled to just one multiplet resides in the following. If the superpotential involves powers only up to the cubic order, as pertains to the construction via symplectic sections, the dilaton truncation can contain at most two types of exponentials, one positive and one negative, so that one can reach either $\cosh p \mathfrak{h}$ or $\sinh p\mathfrak{h}$ models. Yet the obtained index $p$ is always $1$, different from the $p=3,2$ required by integrability. In order to get higher values of $p$, one would need higher powers $z^n$ in the superpotential, but as the degree of $W(z)$ increases one is confronted with new problems: one can generate higher exponentials $\exp\left [ p \mathfrak{h}\right]$, but only positive ones, while negative exponents are bounded from below, so that the list ends with $\exp\left [ - 3\, \mathfrak{h}\right]$. On the other hand, together with the highest positive exponential $\exp\left [ p_{max} \mathfrak{h}\right]$, also subleading ones with $ 0< p < p_{max}$ appear and cannot be eliminated by a choice of coefficients in $W(z)$. As a result, the possible match with integrable models of the $\cosh$ and $\sinh$ type is ruled out, as is the match with the sporadic potentials of table \ref{Sporadic}, all of which have the property of being symmetric in positive and negative exponentials. One is thus left with the two series $I_2$ and [9] of table \ref{tab:families}.
The latter is easily ruled out, since the exponents $6\gamma$ and $\frac{6}{\gamma}$ can be simultaneously integer only for $\gamma=1,2,3$, and no superpotential produces these values without also producing other exponentials with intermediate subleading exponents. The hunting ground is thus restricted to the series $I_2$ of table \ref{tab:families} (the $\cosh$ models have already been discussed), where one is to spot a combination of powers in $W(z)$ that gives rise to only two exponents in the potential, whose indices should be related by the very restrictive relation defining the series.
A careful and systematic analysis led us to the unique solution provided by the following superpotential:
\begin{equation}\label{integsuppot}
W_{int} \, = \, \lambda z^4 \, +\, {\rm i} \, \kappa z^3 \ ,
\end{equation}
where $\lambda$ and $\kappa$ are real constants. Performing the construction of the scalar potential one is led to
\begin{equation}\label{integSupPot}
V_{int} (z,{\bar z})\, = \, \frac{z^2 {\bar z}^2 \lambda \left(3\, z^2 \kappa +4\, {\rm i}\, {\bar z}\, z^2 \lambda -4\, {\rm i}\, {\bar z}^2 \lambda\, z+3\, {\bar z}^2 \kappa \right)}{3\, (z-{\bar z})^2}\ .
\end{equation}
To study the extrema of the above potential and for convenience in the further development of the integration it is useful to change parametrization, reabsorbing the overall coupling constant $\lambda$ into a rescaling of the space-time coordinates and setting:
\begin{equation}\label{changeofvariable}
\lambda \, = \,\frac{6}{\sqrt{5}} \quad ; \quad \kappa \, = \, \frac{2\omega}{\sqrt{5}}
\end{equation}
In this way the potential (\ref{integSupPot}) becomes
\begin{equation}
V_{int} (z,{\bar z}) \, = \, \frac{12\, z^2 {\bar z}^2 \left(\left(4\, {\rm i}\,{\bar z}+\omega \right) z^2-4\, {\rm i}\, {\bar z}^2 z+{\bar z}^2 \omega \right)}{5\,(z-{\bar z})^2} \label{integSupPotBis}
\end{equation}
Next let us consider the derivative of the potential with respect to the complex field $z$:
\begin{eqnarray}
\partial_z V_{int} & = & \frac{24\, z\, {\bar z}^2 \left(\left(4\, {\rm i}\, {\bar z}+\omega \right) z^3+2\, {\bar z} \left(-5\, {\rm i}\, {\bar z}-\omega \right) z^2+6\, {\rm i}\, {\bar z}^3 z-{\bar z}^3 \omega \right)}{5\,(z-{\bar z})^3} \label{derivoz} \\
& = & \frac{6}{5}\, b\, e^{-2 h} \left(b^2+e^{2h}\right) \left(3 \left(4\, e^h-\omega\right) b^2+e^{2 h} \left(\omega +12\, e^h\right)\right) \label{realpartder}\\
&& -\,{\rm i} \, \frac{6}{5}\, e^{-3 h} \left(b^2+e^{2 h}\right)\left(\left(\omega -2\, e^h\right) b^4+e^{2h} \left(8\, e^h-\omega \right) b^2+2\, e^{4 h}\left(\omega +5\, e^h\right)\right) \label{imagpartder}
\end{eqnarray}
In eqs.~(\ref{realpartder}) and (\ref{imagpartder}) we have separated the real and imaginary parts of the derivative of the potential, after replacing the field $z$ with its standard parametrization in terms of a dilaton and an axion:
\begin{equation}\label{fruttosio}
z \, = \, {\rm i} \exp[h] \, + \, b
\end{equation}
In order to get a true extremum, both the real and the imaginary part of the derivative should vanish for appropriate values of $b$ and $h$. We begin by considering the zeros of the real part (\ref{realpartder}) in the axion $b$. It is immediately evident that there are three of them:
\begin{equation}\label{zerosReal}
b \, = \, 0 \quad ; \quad b \, = \, \pm \, \frac{\sqrt{-e^{2 h}\, \omega -12\, e^{3h}}}{\sqrt{3}\, \sqrt{4\, e^h-\omega }}
\end{equation}
The first zero in (\ref{zerosReal}) is always available. The other two can occur only if $\omega <0$ and
\begin{equation}\label{condiziata}
-e^{2 h}\, \omega -12\, e^{3 h} \, > \, 0
\end{equation}
If we choose the first zero (truncation of the axion) and insert it into the imaginary part of the derivative we get:
\begin{equation}\label{allowatoprimo}
-\frac{12}{5} e^{3 h} \omega -12 e^{4 h} \, = \, 0 \quad \Rightarrow \quad h \, = \,\left\{\begin{array}{ll}-\log \left(-\frac{5}{\omega }\right) & \mbox{if} \,\,\omega < 0 \\
-\,\infty & \mbox{always} \end{array} \right.
\end{equation}
In case the second and third zeros displayed in (\ref{zerosReal}) are permissible ($\omega < 0$), substituting their values into the imaginary part of the derivative (\ref{imagpartder}) we obtain the condition
\begin{equation}\label{nonallowato}
-\frac{128\, e^{3 h} \left(2 e^h-\omega \right)\omega ^3}{45 \left(4 e^h-\omega \right)^3}\, = \, 0 \quad \Rightarrow \quad h \, = \,- \infty
\end{equation}
Indeed for $\omega < 0$, no other solutions of the above equation are available.
We conclude that if $\omega >0$ the only extremum of the potential is at $h \, = \, -\,\infty$, where the potential vanishes, so that such an extremum corresponds to a \textit{Minkowski vacuum}. If $\omega <0 $ we have instead an additional extremum at:
\begin{equation}\label{puffoidato}
z_0 \, = \, {\rm i} \, \frac{|\omega|}{5}
\end{equation}
where the potential takes the following negative value:
\begin{equation}\label{pastruffo}
V_{int}(z_0) \, = \, \frac{6 \omega ^5}{15625} \, < \, 0
\end{equation}
Hence the extremum (\ref{puffoidato}) defines an anti--de Sitter vacuum. One may wonder whether such an AdS vacuum is supersymmetric and whether it is stable. The first possibility can be immediately ruled out by computing the derivative of the superpotential at the extremum:
\begin{equation}\label{dersupext}
\partial_z W_{int}(z)|_{z=z_0} \, = \, -\frac{6\, {\rm i}\, \omega ^3}{125 \sqrt{5}} \, \ne \, 0
\end{equation}
Since $\partial_z W_{int}(z)$ does not vanish at the extremum, the auxiliary field of the \textit{chiralino} is different from zero and supersymmetry is broken. In order to investigate the stability of the AdS vacuum we have to consider the Breitenlohner--Freedman bound \cite{bretelloneF} which, in the normalizations of \cite{castdauriafre} (see page 462 of Vol. I), is given by:
\begin{equation}\label{bretella}
\lambda_i \, \ge \, \frac{3}{4} \, V_{int}(z_0)
\end{equation}
where by $\lambda_i$ we have denoted the eigenvalues of the Hessian of the potential, $\partial_i\partial_j V_{int}$, calculated at the extremum
(\ref{puffoidato}). Using $\{h,b\}$ as field basis we immediately obtain:
\begin{equation}\label{balengone}
\partial_i\partial_j V_{int}|_{z=z_0} \, = \,
\left( \begin{array}{ll}
-\frac{24 \omega ^5}{3125} & 0 \\
0 & -\frac{84 \omega ^3}{625}
\end{array}
\right)
\end{equation}
from which the two eigenvalues are immediately read off and, for $\omega<0$, seen to be both positive. Hence the Breitenlohner--Freedman bound is certainly satisfied and we can conclude that for $\omega <0$ we have two vacua: a Minkowski vacuum at infinity and a \textit{stable, non supersymmetric $\mathrm{AdS}$ vacuum} at the extremum (\ref{puffoidato}). For $\omega >0$, instead, we have only the Minkowski vacuum at infinity.
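The whole chain of statements about the extremum can be reproduced symbolically; the following sympy sketch (our addition) checks stationarity, the vacuum energy (\ref{pastruffo}), the Hessian (\ref{balengone}) and the non-vanishing of $\partial_z W_{int}$ at $z_0$:

```python
import sympy as sp

h, b = sp.symbols('h b', real=True)
w = sp.symbols('omega', negative=True)

# z = i e^h + b, as in eq. (fruttosio)
z  =  sp.I*sp.exp(h) + b
zb = -sp.I*sp.exp(h) + b

# potential (integSupPotBis) in the dilaton/axion parametrization
V = 12*z**2*zb**2*((4*sp.I*zb + w)*z**2 - 4*sp.I*zb**2*z + zb**2*w)/(5*(z - zb)**2)
V = sp.expand(V)

ext = {h: sp.log(-w/5), b: 0}   # candidate extremum z0 = i|omega|/5, omega < 0

assert sp.simplify(sp.diff(V, h).subs(ext)) == 0
assert sp.simplify(sp.diff(V, b).subs(ext)) == 0
assert sp.simplify(V.subs(ext) - sp.Rational(6, 15625)*w**5) == 0

# Hessian in the {h, b} basis, eq. (balengone)
H = sp.Matrix([[sp.diff(V, h, 2), sp.diff(V, h, b)],
               [sp.diff(V, b, h), sp.diff(V, b, 2)]]).subs(ext)
target = sp.Matrix([[sp.Rational(-24, 3125)*w**5, 0],
                    [0, sp.Rational(-84, 625)*w**3]])
assert sp.simplify(H - target) == sp.zeros(2, 2)

# supersymmetry breaking: dW/dz does not vanish at z0
zc = sp.symbols('zc')
Wint = (6/sp.sqrt(5))*zc**4 + sp.I*(2*w/sp.sqrt(5))*zc**3
dW = sp.diff(Wint, zc).subs(zc, -sp.I*w/5)
assert sp.simplify(dW + 6*sp.I*w**3/(125*sp.sqrt(5))) == 0
```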
\subsection{\sc Truncation to zero axion}
The potential (\ref{integSupPotBis}) of the considered supergravity model can be consistently truncated to a vanishing value of the axion $b$, since its derivative with respect to $b$ vanishes at $b=0$. Imposing such a truncation from
(\ref{integSupPotBis}) we obtain the following dilatonic potential
\begin{equation}\label{farimboldobis}
V_{Int} \, = \, \frac{6}{5} e^{4 \mathfrak{h}} \left(\omega +4 e^\mathfrak{h}\right)
\end{equation}
which by means of the replacement (\ref{babushka}) is mapped into the case $\gamma=\frac{2}{3}$ of the series $I_2$ of integrable potentials listed in table \ref{tab:families}.
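The truncation leading to (\ref{farimboldobis}) is easily verified symbolically; a short sympy sketch (our addition) substitutes the zero-axion slice $z = {\rm i}\, e^{\mathfrak{h}}$ into (\ref{integSupPotBis}):

```python
import sympy as sp

h, w = sp.symbols('h omega', real=True)
z, zb = sp.symbols('z zbar')

# potential (integSupPotBis)
V = 12*z**2*zb**2*((4*sp.I*zb + w)*z**2 - 4*sp.I*zb**2*z + zb**2*w)/(5*(z - zb)**2)

# truncate the axion: z = i e^h
V_trunc = sp.simplify(V.subs({z: sp.I*sp.exp(h), zb: -sp.I*sp.exp(h)}))

# this is exactly the dilatonic potential (farimboldobis)
assert sp.simplify(V_trunc - sp.Rational(6, 5)*sp.exp(4*h)*(w + 4*sp.exp(h))) == 0
```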
According to the previous analysis of extrema of the full theory, we see that,
depending on the sign of the parameter $\omega$, this potential is either monotonic or it has a minimum (see fig.\ref{zuzzolo})
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{susypotpiu.eps}
\vskip 2cm
\includegraphics[height=55mm]{susypoymeno.eps}
\fi
\end{center}
\caption{\it
In this figure we display the behavior of the supersymmetric integrable potential (\ref{farimboldobis}) for the two choices $\omega =1$ (above) and $\omega = -1$ (below).
}
\label{zuzzolo}
\end{figure}
The important thing to note is that, when it exists, the extremum of the potential always occurs at a negative value of the potential. It corresponds to the stable $\mathrm{AdS}$ vacuum discussed in the previous subsection. As is well known, $\mathrm{AdS}$ space admits no parametrization in terms of spatially flat constant--time slices. Hence, if we assume from the start a spatially flat ansatz for the metric, as we do in eq.~(\ref{piatttosa}), no solution of the Friedmann equations can stabilize the scalar field at the $\mathrm{AdS}$ extremum. Indeed the exact solutions produced by the available general integral show that the scalar field always flows to infinity at the beginning and at the end of cosmic time.
\subsection{\sc Explicit integration of the supersymmetric integrable model}
In order to integrate the field equations of this model, it is convenient to write down the Lagrangian explicitly:
\begin{eqnarray}\label{frangente}
\mathcal{L}_{int} & = & e^{3 {A}(t)- {\mathcal{B}}(t)}
\left(-\frac{3}{2}
{A}'(t)^2+\frac{3}{2}
\mathfrak{h}'(t)^2-\frac{6}{5} e^{2 {\mathcal{B}}(t)+4
\mathfrak{h}(t)} \left(\omega +4
e^{\mathfrak{h}(t)}\right)\right)
\end{eqnarray}
and, following the strategy outlined in \cite{primopapero}, one can pass to two new functions $U(\tau)$ and $V(\tau)$ defined by
\begin{eqnarray}
A(\tau) &=& \frac{1}{5} \log (U(\tau ))+\log (V(\tau )) \nonumber\\
B(\tau) &=& 2 \log (V(\tau ))-\frac{2}{5} \log (U(\tau ))\nonumber\\
\mathfrak{h}(\tau) &=& \frac{1}{5} (\log (U(\tau ))-5 \log (V(\tau
))) \label{integtrasformazia}
\end{eqnarray}
Inserting the transformation (\ref{integtrasformazia}) into the Lagrangian (\ref{frangente}), the latter becomes (up to an overall factor $\frac{6}{5}$, irrelevant for the field equations)
\begin{equation}\label{fiduciato}
\mathcal{L}_{int} \, = \, -4 U(\tau )^{6/5}-\omega V(\tau ) U(\tau
)-U'(\tau ) V'(\tau )
\end{equation}
while the Hamiltonian constraint takes the form
\begin{equation}\label{sfiducioso}
\mathcal{H} \, = \, 4\, U(\tau )^{6/5} \, + \, \omega \,V(\tau )\, U(\tau ) \, - \, U'(\tau )\, V'(\tau ) \, = \, 0
\end{equation}
The field equations associated with (\ref{fiduciato}) have the following triangular form:
\begin{equation}\label{equemozionica}
\begin{array}{lcl}
\omega U(\tau )-U''(\tau ) & = & 0 \\
\omega V(\tau )-V''(\tau )+\frac{24}{5}
\sqrt[5]{U(\tau )} &=& 0
\end{array}
\end{equation}
and can be integrated by means of trigonometric or hyperbolic functions depending on the sign of $\omega$.
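The triangular system can be reproduced mechanically from (\ref{fiduciato}). The sketch below (ours, not part of the paper) derives the Euler--Lagrange equations with a computer-algebra system and checks them against (\ref{equemozionica}):

```python
import sympy as sp

tau, w = sp.symbols('tau omega')
U = sp.Function('U')(tau)
V = sp.Function('V')(tau)

# Effective Lagrangian (\ref{fiduciato})
L = -4*U**sp.Rational(6, 5) - w*V*U - U.diff(tau)*V.diff(tau)

# Euler-Lagrange: d/dtau (dL/dq') - dL/dq = 0 for q = U, V
eom_from_V = sp.diff(L, V.diff(tau)).diff(tau) - sp.diff(L, V)  # varying V gives the U equation
eom_from_U = sp.diff(L, U.diff(tau)).diff(tau) - sp.diff(L, U)  # varying U gives the V equation

# They coincide with the triangular system (\ref{equemozionica})
assert sp.simplify(eom_from_V - (w*U - U.diff(tau, 2))) == 0
assert sp.simplify(eom_from_U
                   - (w*V - V.diff(tau, 2) + sp.Rational(24, 5)*U**sp.Rational(1, 5))) == 0
```

The triangular structure is manifest here: the $U$ equation is autonomous, and its solution then sources the linear equation for $V$.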
\subsection{\sc Trigonometric solutions in the potential with $\mathrm{AdS}$ extremum}
If we set $\omega\, = \, -\nu^2$, the first equation becomes that of a standard harmonic oscillator and we have:
{\small
\begin{eqnarray}
U(\tau) &=& a \cos (\nu \tau )+b \sin (\nu \tau ) \label{armonium}\\
V(\tau) &=& \left( 4 \cot \left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right) \, \times \right.\nonumber \\
&&\left. _2F_1\left(\frac{1}{2},\frac{9}{10};\frac{
3}{2};\cos ^2\left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right)
\right) \sin ^2\left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right)^{9/10
} \left(b \cos (\nu \tau )-a \sin (\nu \tau
)\right)\right. \nonumber\\
&& \left. +5 \left(\cos (\nu \tau ) \left(c (a
\cos (\nu \tau )+b \sin (\nu \tau
))^{4/5} \nu ^2+4 a\right)\right.\right.\nonumber\\
&&\left. +\sin (\nu \tau
) \left(d (a \cos (\nu \tau )+b \sin (\nu
\tau ))^{4/5} \nu ^2+4 b\right)\right) \times (5
\nu ^2 (a \cos (\nu \tau )+b \sin (\nu\tau ))^{-4/5})
\end{eqnarray}
}
where $_2F_1$ denotes a hypergeometric function of the specified indices. The parameters $a,b,c,d$ are four integration constants, on which the Hamiltonian constraint imposes the condition
\begin{equation}\label{dariopisco}
(b c + a d) \, = \, 0 \ .
\end{equation}
We solve the constraint by setting $d=-\rho \, a$, $c=\rho \,b$. In this way we obtain an explicit general integral depending on three parameters $(a,b,\rho)$. The explicit forms of the solution for the scale factor, for the $\exp[\mathcal{B}]$ function and for the scalar field $\mathfrak{h}(\tau)$ are given below.
{\small
\begin{eqnarray}
a(\tau;a,b,\rho,\nu) &=& \frac{\mathfrak{J}(\tau;a,b,\rho,\nu)}{5 \nu ^2 (a \cos (\nu \tau )+b \sin (\nu
\tau ))^{3/5}}\nonumber \\
\mathfrak{J}(\tau;a,b,\rho,\nu) &=& 5 \left(\sin (\nu \tau ) \left(4 b-a \nu ^2
\rho (a \cos (\nu \tau )+b \sin (\nu
\tau ))^{4/5}\right)\right.\nonumber\\
&& \left.+ \cos (\nu \tau )
\left(b \rho (a \cos (\nu \tau )+b \sin
(\nu \tau ))^{4/5} \nu ^2+4
a\right)\right)\nonumber \\
&& + 4 \cot \left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right) \,
_2F_1\left(\frac{1}{2},\frac{9}{10};\frac{3
}{2};\cos ^2\left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right)\right) \times \nonumber\\
&& (b \cos (\nu \tau )-a \sin (\nu \tau )) \sin
^2\left(\nu \tau +\tan
^{-1}\left(\frac{a}{b}\right)\right)^{9/10}\nonumber\\
\exp[\mathcal{B}(\tau;a,b,\rho,\nu)] &=& \frac{\left(\mathfrak{J}(\tau;a,b,\rho,\nu)\right)^2}{25 \nu ^4 (a \cos (\nu \tau )+b \sin (\nu
\tau ))^2}\\
\mathfrak{h}(\tau;a,b,\rho,\nu) &=& \log\left[\frac{5 \nu ^2 \left(a \cos (\nu \tau )+b \sin (\nu
\tau )\right)}{ \mathfrak{J}(\tau;a,b,\rho,\nu)} \right] \label{grantrigsoluz}
\end{eqnarray}
}
From the explicit form of the solution the structure of its time development is not immediately evident. Yet it is clear that it must be periodic, since all addends are constructed in terms of trigonometric functions with the same frequency $\nu$. Therefore we are led to suspect that the scalar field will go to infinity and the scale factor to zero in a periodic fashion. In other words we expect solutions with a Big Bang and a Big Crunch. This expectation is supported by the general arguments of paper \cite{primopapero}. Indeed the considered scalar potential has an absolute minimum, yet this minimum is at a negative value, so that in the phase portrait of the equivalent first-order system there is no fixed point; under these conditions the only possible solutions are blow-up solutions, physically corresponding to Big Bang/Big Crunch universes.
\subsubsection{\sc Structure of the moduli space of the general integral}
In order to understand the actual form and behavior of this type of solutions it is convenient to investigate first the physical interpretation of the three integration constants $a,b,\rho$ that we have introduced and to reduce the general integral to a simpler canonical form.
\par
An \textit{a priori} observation valid for all solutions of the Friedman equations is that only one parameter effectively labels such solutions, two parameters being accounted for by the uninteresting overall scale of the scale factor and by the equally uninteresting possibility of shifting the parametric time $\tau$ by a constant. What has to be done case by case is to work out those combinations of the parameters that can be disposed of by the above mentioned symmetries and to single out the unique meaningful deformation parameter.
\par
In the present case we begin by noting that all functions in the solution depend on $\tau$ only through the combination $\nu \tau$. Hence the frequency parameter $\nu$ can be reabsorbed by rescaling the parametric time:
\begin{equation}\label{reassorbo}
\nu \tau \, = \, \tau^\prime.
\end{equation}
In other words, without loss of generality we can set $\nu=1$. Secondly, let us consider the following rescaling of the solution parameters:
\begin{equation}\label{riscalaggio}
a \, \mapsto \, \lambda \, a \quad ; \quad b \, \mapsto \, \lambda \, b \quad ; \quad \rho \, \mapsto \, \lambda^{-4/5} \, \rho
\end{equation}
under such a transformation we have:
\begin{eqnarray}
a(\tau;\, \lambda \, a, \,\lambda \, b, \, \lambda^{-4/5} \, \rho, \, 1) &=& \lambda^{2/5} \, a(\tau;\, a, \, b, \, \rho, \, 1) \nonumber \\
\exp[\mathcal{B}(\tau;\, \lambda \, a, \,\lambda \, b, \, \lambda^{-4/5} \, \rho, \, 1)] &=& \exp[\mathcal{B}(\tau;\, a, \, b, \, \rho, \, 1)]\nonumber \\
\mathfrak{h}(\tau;\, \lambda \, a, \,\lambda \, b, \, \lambda^{-4/5} \, \rho, \, 1) &=& \mathfrak{h}(\tau;\, a, \, b, \, \rho, \, 1) \label{bamboccio}
\end{eqnarray}
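The scaling property (\ref{bamboccio}) can be spot-checked numerically. The sketch below (ours, not part of the paper) transcribes $\mathfrak{J}$, the scale factor and $\exp[\mathcal{B}]$ from (\ref{grantrigsoluz}) with $\nu=1$ and verifies the behavior under (\ref{riscalaggio}) at a sample point:

```python
import mpmath as mp
mp.mp.dps = 30

def frakJ(t, pa, pb, rho, nu=1):
    # \mathfrak{J} from (\ref{grantrigsoluz}); pa, pb play the role of a, b
    U = pa*mp.cos(nu*t) + pb*mp.sin(nu*t)
    sh = nu*t + mp.atan(pa/pb)
    F = mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, mp.cos(sh)**2)
    trig = 5*(mp.sin(nu*t)*(4*pb - pa*nu**2*rho*U**(mp.mpf(4)/5))
              + mp.cos(nu*t)*(pb*rho*U**(mp.mpf(4)/5)*nu**2 + 4*pa))
    hyp = 4*(mp.cos(sh)/mp.sin(sh))*F \
          * (pb*mp.cos(nu*t) - pa*mp.sin(nu*t))*(mp.sin(sh)**2)**(mp.mpf(9)/10)
    return trig + hyp

def scale_factor(t, pa, pb, rho, nu=1):
    U = pa*mp.cos(nu*t) + pb*mp.sin(nu*t)
    return frakJ(t, pa, pb, rho, nu)/(5*nu**2*U**(mp.mpf(3)/5))

def expB(t, pa, pb, rho, nu=1):
    U = pa*mp.cos(nu*t) + pb*mp.sin(nu*t)
    return frakJ(t, pa, pb, rho, nu)**2/(25*nu**4*U**2)

t, lam, rho = mp.mpf('0.3'), mp.mpf(2), mp.mpf('0.5')
rho_l = rho*lam**(-mp.mpf(4)/5)

# a -> lambda^{2/5} a, while exp[B] is invariant, as in (\ref{bamboccio})
assert abs(scale_factor(t, lam, lam, rho_l)/scale_factor(t, 1, 1, rho)
           - lam**(mp.mpf(2)/5)) < mp.mpf('1e-15')
assert abs(expB(t, lam, lam, rho_l)/expB(t, 1, 1, rho) - 1) < mp.mpf('1e-15')
```

The sample point $\tau=0.3$ is chosen so that $a\cos\tau+b\sin\tau>0$ and all fractional powers are real.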
From this we deduce that a suitable combination of the parameters $a,b,\rho$ is just the overall scale of the scale factor, as we announced. Using the symmetry of eq.(\ref{bamboccio}) we could for instance fix the gauge where one of the three parameters $a,b,\rho$ equals $1$. Yet we can do much better if we realize that the ratio $a/b$ actually amounts to a shift of the parametric time. Indeed, by means of several analytic manipulations, we can prove the following identities:
\begin{eqnarray}
a\left(\tau + \arctan\left(\frac{b}{a}\right);\, a, \, b, \, \rho, \, 1\right)&=& \frac{4}{5} \left(
\sqrt{{b^2}+{a^2}} \cos (\tau
)\right)^{2/5} \times \nonumber \\
&&\left(\cos ^2(\tau )^{9/10} \,
_2F_1\left(\frac{1}{2},\frac{9}{10};\frac{3}{2};\sin ^2(\tau )\right)\, \tan ^2(\tau )+5\right) \nonumber\\
&& -\rho \, \left(\sqrt{{b^2}+{a^2}}\, \cos (\tau )\right)^{6/5} \tan (\tau ) \nonumber\\
\exp\left[\mathcal{B}\left(\tau + \arctan\left(\frac{b}{a}\right);\, a, \, b, \, \rho, \, 1\right)\right]&=& \frac{1}{25} \left(4 \cos ^2(\tau
)^{9/10} \, _2F_1\left(\frac{1}{2},\frac{9}{10};\frac{3}{2};\sin ^2(\tau )\right)\, \tan ^2(\tau ) \right. \nonumber \\
&&\left. -5 \rho \left(\sqrt{{b^2}+{a^2}}\cos (\tau)\right)^{4/5} \tan (\tau)+20\right)^2 \nonumber\\
\mathfrak{h}\left(\tau + \arctan\left(\frac{b}{a}\right);\, a, \, b, \, \rho, \, 1\right)&=& -\log \left(\frac{4}{5} \cos ^2(\tau
)^{9/10} \, _2F_1\left(\frac{1}{2},\frac{9}{10};\frac{3}{2};\sin ^2(\tau )\right)\tan ^2(\tau )\right.\nonumber\\
&&\left.-\rho \left(\sqrt{{b^2}+{a^2}} \cos (\tau)\right)^{4/5} \tan (\tau )+4\right)\label{kukletta}
\end{eqnarray}
In this way we realize that after the shift $\tau \, \mapsto \, \tau \, + \, \arctan\left(\frac{b}{a}\right)$ the solution functions depend only on the two parameters $ \sqrt{{b^2}+{a^2}} $ and $\rho$. Furthermore the explicit result (\ref{kukletta}) suggests that we redefine the latter as follows:
\begin{equation}\label{rifriggo}
\sqrt{{b^2}+{a^2}} \, = \, \Lambda^{\frac{5}{2}} \quad ; \quad \rho \, = \, \frac{Y}{\Lambda^2}
\end{equation}
so that we obtain:
\begin{eqnarray}
a\left(\tau,\Lambda,Y\right) \, = \, \Lambda\, \mathfrak{a}(\tau,Y) &=& \Lambda \,\left[ \frac{4}{5} \cos ^{\frac{2}{5}}(\tau )
\left(\left(\cos \tau \right)^{9/5} \,
_2F_1\left(\frac{1}{2},\frac{9}{10};
\frac{3}{2};\sin ^2(\tau )\right)
\tan ^2 (\tau)+5\right)\right.\nonumber\\
&&\left.-Y \cos
^{\frac{1}{5}}(\tau ) \sin (\tau )\right]\nonumber\\
\exp\left[\mathcal{B}\left(\tau ,Y \right)\right]&=& \frac{1}{25} \left(4 \cos ^2(\tau
)^{9/10} \, _2F_1\left(\frac{1}{2},\frac{9}{10};\frac{3}{2};\sin ^2(\tau )\right)\tan ^2(\tau )
-\frac{5 Y \sin (\tau)}{\cos ^{\frac{1}{5}}(\tau )}+20\right)^2 \nonumber\\
\mathfrak{h}\left(\tau,Y\right)&=& -\log \left(\frac{4}{5} \cos ^2(\tau
)^{9/10} \,
_2F_1\left(\frac{1}{2},\frac{9}{10};
\frac{3}{2};\sin ^2(\tau )\right)
\tan ^2(\tau )-\frac{Y \sin (\tau
)}{\cos ^{\frac{1}{5}}(\tau
)}+4\right) \nonumber\\
\label{confettofalqui}
\end{eqnarray}
which makes manifest the only relevant deformation parameter, namely $Y$. We can get some understanding of its meaning by plotting the solution functions for various values of $Y$. We begin by analyzing the simplest and most symmetric solution, at $Y=0$.
\subsubsection{\sc The simplest trigonometric solution at $Y=0$}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{Y0SF.eps}
\includegraphics[height=60mm]{Y0EXPB.eps}
\includegraphics[height=60mm]{Y0PHI.eps}
\else
\end{center}
\fi
\caption{\it
Plots of the real and imaginary parts of the scale factor, of the $\exp[\mathcal{B}]$ factor and of the scalar field for the case of parameter $Y=0$. In the three diagrams the solid line represents the real part, while the dashed line represents the imaginary part. We note the periodicity of all the functions and the windows where all three of them are simultaneously real. Taking as basic reference window the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ we note the reflection symmetry of the plots with respect to the point $\tau =0$.}
\label{Y0triplots}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
In fig.\ref{Y0triplots} we display the behavior of the real and imaginary parts of the three main functions composing the solution in the case $Y=0$. From these plots it is evident that a physical solution of scalar-matter coupled gravity exists only in those windows where the three functions are simultaneously real, for instance in the interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. In such an interval of the parametric time $\tau$ the scale factor goes from one zero to another, so that the cosmic evolution should correspond to a Universe that starts with a Big Bang and ends its life collapsing into a Big Crunch. To put such a conclusion on firm ground we actually have to verify that both zeros of the scale factor correspond to a true space-like singularity, and this can only be done by considering the intrinsic components of the curvature two-form and showing that they all blow up to infinity at the initial and final points. This we will do shortly. First let us verify analytically the limits of the scale factor and of the scalar field at the initial and final points of the reality domain of the solution. We find:
\begin{eqnarray}
\label{apertivo}
\lim_{\tau \rightarrow \pm \frac{\pi}{2}} \mathfrak{a}(\tau,0) &=& 0 \nonumber\\
\lim_{\tau \rightarrow \pm \frac{\pi}{2}} \mathfrak{h}(\tau,0) &=& -\infty\nonumber\\
\lim_{\tau \rightarrow \pm \frac{\pi}{2}} \exp[\mathcal{B}(\tau,0)] &=& +\infty
\end{eqnarray}
This means that a life cycle of this universe is contained in the finite interval of parametric time $\left[ -\frac{\pi}{2}\, , \, \frac{\pi}{2} \right]$, which the $\exp[\mathcal{B}(\tau;1,1,0,1)]$ function maps monotonically into a finite interval of cosmic time. Indeed, defining:
\begin{equation}\label{cosmicotempus}
T_c(\tau) \, = \, \int_{-\frac{\pi}{2}}^{\tau} \,\exp[\mathcal{B}(t;1,1,0,1)] \, \mathrm{d}t
\end{equation}
we find:
\begin{equation}\label{corleonite}
T_c(-\frac{\pi}{2}) \, = \, 0 \quad ; \quad T_c(\frac{\pi}{2}) \, = \, 84.7046
\end{equation}
the plot of $T_c(\tau)$ being displayed in fig.\ref{cosmicomica}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{susycosmictime.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the behavior of the cosmic time with respect to the parametric time for the trigonometric type of solution of the supersymmetric cosmological model with parameter $Y=0$.
}
\label{cosmicomica}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
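The value quoted in eq.(\ref{corleonite}) is easy to reproduce numerically. The sketch below (ours, not part of the paper) integrates the canonical $Y=0$ form of $\exp[\mathcal{B}]$ taken from (\ref{confettofalqui}); the mild $\cos^{-2/5}$ endpoint singularities are handled by the tanh-sinh quadrature:

```python
import mpmath as mp
mp.mp.dps = 25

def expB(t):
    # exp[B(tau, Y=0)] from (\ref{confettofalqui})
    c, s = mp.cos(t), mp.sin(t)
    F = mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, s**2)
    return (4*c**(mp.mpf(9)/5)*F*(s/c)**2 + 20)**2/25

# Total duration of the life cycle in cosmic time, T_c(pi/2)
T = mp.quad(expB, [-mp.pi/2, 0, mp.pi/2])
assert abs(T - mp.mpf('84.7046')) < mp.mpf('0.01')
```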
Due to these properties of the cosmic time function we do not lose any essential information by plotting the solution in parametric rather than in cosmic time. The essential difference between this case and the case of positive potentials with positive extrema discussed in \cite{primopapero} is best appreciated by considering the phase portrait of this solution, presented in fig.\ref{facciaportratto}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{susyfasaportrig.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the phase portrait of the solution defined by $Y=0$. The axes are the scalar field $\Phi \equiv \mathfrak{h}$ and its derivative with respect to the cosmic time $V \equiv \partial_{T_c} \mathfrak{h}$. The extremum of the potential is at $\Phi_0 \, = \, - \log[5]$. The solution reaches it, however, with a non-vanishing velocity. The field also reaches vanishing velocity, yet not at an extremum of the potential. Hence there is no fixed point and the trajectory runs from infinity to infinity.
}
\label{facciaportratto}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
The absence of a fixed point implies the structure of a blow-up solution with a Big Bang and a Big Crunch which is displayed in fig.\ref{cuginetto}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{susytriscalafactor.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display plots (in parametric time) of the scale factor (solid line) and of the scalar field (dashed line) for the trigonometric type at $Y=0$. It is evident that in a finite time the Universe undergoes a Big Bang, a decelerated expansion and then a Big Crunch. At the same time the scalar field climbs from $-\infty$ to a maximum and then descends again to $-\infty$.
}
\label{cuginetto}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
As we already emphasized the interpretation of Big Bang and Big Crunch is suggested by the plots, yet it has to be verified by an appropriate study of the curvature singularity.
\subsubsection{\sc The curvature two-form and its singularities}
Throughout this paper we consider metrics of the form (\ref{piatttosa}). It is important to calculate the explicit general form of the curvature 2-form associated with such metrics. To this effect we introduce the vielbein:
\begin{equation}\label{vielbeinus}
E^1 \, = \, \exp\left[\mathcal{B}(\tau)\right] \, d\tau \quad ; \quad E^i \, = \, \exp\left[A(\tau)\right]\, dx^i \quad (i\, = \, 2,3,4)
\end{equation}
and we obtain the following result for the matrix valued curvature two-form:
\begin{eqnarray}\label{curbura}
& R^{AB} \equiv d\omega^{AB} \, + \, \omega^{AC} \, \wedge \, \omega^{DB} \, \eta_{CD} \, = \, & \nonumber\\
& \null & \nonumber\\
& \left(
\begin{array}{c|c|c|c}
0 & -\frac{E^1 \wedge E^2
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} &
-\frac{E^1 \wedge E^3
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} &
-\frac{E^1 \wedge E^4
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} \\
\null & \null & \null & \null \\
\hline
\null & \null & \null & \null \\
\frac{E^1 \wedge E^2
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} & 0 &
-\frac{E^2 \wedge E^3
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} &
-\frac{E^2 \wedge E^4
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} \\
\null & \null & \null & \null \\
\hline
\null & \null & \null & \null \\
\frac{E^1 \wedge E^3
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} &
\frac{E^2 \wedge E^3
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} & 0 &
-\frac{E^3 \wedge E^4
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} \\
\null & \null & \null & \null \\
\hline
\null & \null & \null & \null \\
\frac{E^1 \wedge E^4
\left(\mathfrak{b}
\mathfrak{a}''-\mathfrak{a}'
\mathfrak{b}'\right)}{\mathfrak{a}
\mathfrak{b}^3} &
\frac{E^2 \wedge E^4
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} &
\frac{E^3 \wedge E^4
(\mathfrak{a}')^2}{\mathfrak{a}^2 \mathfrak{b}^2} & 0
\end{array}
\right) &
\end{eqnarray}
having denoted by $\omega^{AB}$ the Levi-Civita spin connection defined by:
\begin{equation}\label{levicivita}
dE^A \, + \, \omega^{AB} \, \wedge \, E^{C} \, \eta_{BC} \, = \,0
\end{equation}
and having introduced the following notation:
\begin{equation}\label{franzusko}
\mathfrak{a} \, \equiv \, \exp[A(\tau)] \, = \, a(\tau) \quad ; \quad \mathfrak{b} \, \equiv \, \exp[\mathcal{B}(\tau)]
\end{equation}
If the functions $\mathfrak{a},\mathfrak{b}$ are specialized to the form (\ref{confettofalqui}), we obtain some rather formidable, yet fully explicit analytic expressions that correspond to the intrinsic components of the Riemann tensor for the solution under consideration. We can calculate the limit of the curvature two-form when $\tau$ approaches a zero of the scale-factor. In the case $Y=0$, the only zeros are at $\tau\, = \, \pm \frac{\pi}{2}$ and we find:
\begin{equation}\label{limitus}
\lim_{\tau \rightarrow \pm \frac{\pi}{2}} \, R^{AB} \, = \, \left(
\begin{array}{llll}
0 & \infty E^1\wedge E^2 & \infty
E^1\wedge E^3 & \infty E^1\wedge
E^4 \\
-\infty E^1\wedge E^2 & 0 & -\infty
E^2\wedge E^3 & -\infty
E^2\wedge E^4 \\
-\infty E^1\wedge E^3 & \infty
E^2\wedge E^3 & 0 & -\infty
E^3\wedge E^4 \\
-\infty E^1\wedge E^4 & \infty
E^2\wedge E^4 & \infty E^3\wedge
E^4 & 0
\end{array}
\right)
\end{equation}
Hence the Riemann tensor diverges in all directions, and both the initial and final zeros of the scale factor correspond to true singularities, confirming their interpretation as the Big Bang and Big Crunch points.
Actually we can make the statement even more precise. In the case of the $Y=0$ solution we can calculate the asymptotic expansion of the curvature components in the neighborhood of the two singularities and we find:
\begin{equation}\label{divergendo}
R^{AB} \, \stackrel{\tau \rightarrow \pm \pi /2}{\approx }\, \frac{1}{(\mp \frac{\pi}{2}+\tau)^{6/5}}\,
\frac{25 \Gamma \left(\frac{3}{5}\right)^4}{8 \pi ^2
\Gamma \left(\frac{1}{10}\right)^4} \, \left(
\begin{array}{llll}
0 & E^1\wedge E^2 & E^1\wedge E^3
& E^1\wedge E^4 \\
-E^1\wedge E^2 & 0 & -\frac{1}{2}
E^2\wedge E^3 & -\frac{1}{2}
E^2\wedge E^4 \\
-E^1\wedge E^3 & \frac{1}{2}
E^2\wedge E^3 & 0 & -\frac{1}{2}
E^3\wedge E^4 \\
-E^1\wedge E^4 & \frac{1}{2}
E^2\wedge E^4 & \frac{1}{2}
E^3\wedge E^4 & 0
\end{array}
\right)
\end{equation}
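Both the exponent $6/5$ and the coefficient in (\ref{divergendo}) can be spot-checked numerically. The sketch below (ours, not part of the paper) evaluates, close to $\tau=\frac{\pi}{2}$, the combinations $\frac{\mathfrak{b}\,\mathfrak{a}''-\mathfrak{a}'\mathfrak{b}'}{\mathfrak{a}\,\mathfrak{b}^3}$ and $\frac{(\mathfrak{a}')^2}{\mathfrak{a}^2\mathfrak{b}^2}$ entering (\ref{curbura}), built from the $Y=0$ functions (\ref{confettofalqui}); it uses the exact algebraic relation $\mathfrak{b}=\mathfrak{a}^2\cos^{-4/5}\tau$ implied by those formulas:

```python
import mpmath as mp
mp.mp.dps = 40

def frakA(t):
    # scale factor a(tau, Y=0) from (\ref{confettofalqui})
    c, s = mp.cos(t), mp.sin(t)
    F = mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, s**2)
    return mp.mpf(4)/5*c**(mp.mpf(2)/5)*(c**(mp.mpf(9)/5)*F*(s/c)**2 + 5)

def frakB(t):
    # exp[B] = a^2 cos^{-4/5}, an exact consequence of (\ref{confettofalqui})
    return frakA(t)**2*mp.cos(t)**(-mp.mpf(4)/5)

eps = mp.mpf('1e-6')
t0 = mp.pi/2 - eps
a0, a1, a2 = frakA(t0), mp.diff(frakA, t0), mp.diff(frakA, t0, 2)
b0, b1 = frakB(t0), mp.diff(frakB, t0)

C12 = (b0*a2 - a1*b1)/(a0*b0**3)   # E^1 ^ E^2 entry of (\ref{curbura})
C23 = a1**2/(a0**2*b0**2)          # E^2 ^ E^3 entry of (\ref{curbura})

# coefficient of (\ref{divergendo})
K = 25*mp.gamma(mp.mpf(3)/5)**4/(8*mp.pi**2*mp.gamma(mp.mpf(1)/10)**4)

assert abs(abs(C12)*eps**(mp.mpf(6)/5)/K - 1) < mp.mpf('1e-4')  # power 6/5 and coefficient
assert abs(C12/C23 + 2) < mp.mpf('1e-4')                        # relative factor 1/2
```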
Hence all the components of the intrinsic curvature tensor have the same degree of divergence, which is identical at the Big Bang and at the Big Crunch. This reflects the already noted $\mathbb{Z}_2$ symmetry of the solution.
\subsubsection{\sc $Y$-deformed solutions}
We established that the actual moduli space of the trigonometric solutions is parameterized by the deformation parameter $Y$. It is interesting to explore the quality of the solutions that the latter parameterizes. The first fundamental question is whether all solutions have a Big Bang and a Big Crunch or whether other behaviors are possible. Periodicity of the solution functions guarantees that, in any case, the scale factor has zeros at $\tau = \pm \frac{\pi}{2} + n \, \pi $ for $n\in \mathbb{Z}$, yet there is another possibility which has to be taken into account: an additional zero might or might not occur
in the interval $\left [ -\frac{\pi}{2} , \frac{\pi}{2} \right ]$. This depends on the value of $Y$.
Given the form (\ref{confettofalqui}) of the solution, a zero of the scale factor can occur at a value $\tau_0$ which satisfies the equation:
\begin{equation}\label{goliardo}
Y \, = \, \frac{4}{5} \cos
^{\frac{1}{5}}\left(\tau _0\right)
\csc \left(\tau _0\right) \left(\cos
^2\left(\tau _0\right)^{9/10} \,
_2F_1\left(\frac{1}{2},\frac{9}{10};
\frac{3}{2};\sin ^2\left(\tau
_0\right)\right) \tan ^2\left(\tau
_0\right)+5\right) \, \equiv \, \mathfrak{f}(\tau_0)
\end{equation}
The plot of the function $\mathfrak{f}(\tau)$ defined above is displayed in fig.\ref{ziotta}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{ycoef.eps}
\else
\end{center}
\fi
\caption{\it
Plot of the function $\mathfrak{f}(\tau)$. A zero of the scale factor occurs when $\mathfrak{f}(\tau_0) = Y$. Hence for all those values of $Y$ that are never attained by the function $\mathfrak{f}(\tau)$ in the range $\left [ -\frac{\pi}{2} , \frac{\pi}{2} \right ]$ there is no early Big Crunch. The two straight asymptotic lines are at the values $\pm Y_0 \, = \, \pm \frac{4 \sqrt{\pi } \Gamma\left(\frac{11}{10}\right)}{\Gamma\left(\frac{3}{5}\right)}$.
}
\label{ziotta}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
We see that for
\begin{equation}\label{gomorroida}
|Y| \, \le \, Y_0 \, \equiv \, \frac{4 \sqrt{\pi } \Gamma
\left(\frac{11}{10}\right)}{\Gamma
\left(\frac{3}{5}\right)}
\end{equation}
the candidate Big Bang and Big Crunch are at $\tau \, = \, \pm \frac{\pi}{2}$, while for $|Y| \, > \, Y_0$ the candidate Big Bang is at $\tau \, = \, - \frac{\pi}{2}$ and the Big Crunch occurs earlier at:
\begin{equation}\label{georgy}
\tau_0 \, = \, \mathfrak{f}^{-1}(Y)
\end{equation}
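Numerically (our check, not in the paper) $Y_0 \simeq 4.5292$; equivalently $Y_0=\frac{4}{5}\,{}_2F_1\!\left(\frac{1}{2},\frac{9}{10};\frac{3}{2};1\right)$ by the Gauss summation formula, and $\mathfrak{f}(\tau)$ indeed approaches its asymptote $Y_0$ at the edge of the window:

```python
import mpmath as mp
mp.mp.dps = 25

def frakf(t):
    # \mathfrak{f}(tau) from (\ref{goliardo})
    c, s = mp.cos(t), mp.sin(t)
    F = mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, s**2)
    return mp.mpf(4)/5*c**(mp.mpf(1)/5)/s*(c**(mp.mpf(9)/5)*F*(s/c)**2 + 5)

Y0 = 4*mp.sqrt(mp.pi)*mp.gamma(mp.mpf(11)/10)/mp.gamma(mp.mpf(3)/5)

assert abs(Y0 - mp.mpf('4.5292')) < mp.mpf('2e-3')
# Gauss summation at z=1 (convergent since c-a-b = 1/10 > 0)
assert abs(Y0 - mp.mpf(4)/5*mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, 1)) \
       < mp.mpf('1e-15')
# f(tau) -> Y0 as tau -> pi/2
assert abs(frakf(mp.pi/2 - mp.mpf('1e-4')) - Y0) < mp.mpf('1e-6')
```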
It is reasonable to expect a significantly different structure of the solution in the two cases.
\paragraph{$Y\ne0$ but less than critical.}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{Y1SF.eps}
\includegraphics[height=50mm]{Y1EXPB.eps}
\includegraphics[height=50mm]{Y1PHI.eps}
\else
\end{center}
\fi
\caption{\it
Plots of the real and imaginary parts of the scale factor, of the $\exp[\mathcal{B}]$-factor and of the scalar field for the case of parameter $Y=1$, which is non-zero but smaller than the critical value $Y_0$. In the three diagrams the solid line represents the real part, while the dashed line represents the imaginary part. The interval in which the three functions are simultaneously real is still $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$, as in the $Y=0$ case. Yet the shape of the plots is no longer symmetric, and the $\exp[\mathcal{B}]$ factor and the scalar field also start developing imaginary parts that are instead identically zero over the full range of $\tau$ when $Y=0$. }
\label{Y1triplots}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
In fig.\ref{Y1triplots} we display the plots of the real and imaginary parts of the three functions composing the solution for the subcritical case $Y=1$. As we see, the shape of the plots is no longer symmetric and imaginary parts are developed also by the scalar field and by the $\exp[\mathcal{B}]$-factor, yet the candidate Big Crunch occurs once again at the parametric time $\tau =\frac{\pi}{2}$. Furthermore the scalar field, after climbing to some finite value, drops again to $-\infty$ at the end of the life cycle of this Universe. The no longer symmetric phase portrait of this solution is displayed in fig.\ref{facciaportratto2}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{fasaportrig2.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the phase portrait of the solution defined by $Y=1$. The axes are the scalar field $\Phi \equiv \mathfrak{h}$ and its derivative with respect to the cosmic time $V \equiv \partial_{T_c} \mathfrak{h}$. The extremum of the potential is at $\Phi_0 \, = \, - \log[5]$. The solution reaches it, however, with a non-vanishing velocity. The field also reaches vanishing velocity, yet not at the extremum of the potential. Hence there is no fixed point and the trajectory runs from infinity to infinity. The symmetric shape of the $Y=0$ trajectory is lost.
}
\label{facciaportratto2}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
The verification that the zeros of the scale factor are indeed singularities is done by inspecting the divergences of the curvature two-form at $\pm \frac{\pi}{2}$. In this case it is much more difficult to calculate the coefficients of the asymptotic expansion, yet it is sufficiently easy to determine the divergence order of the various components. We find:
\begin{equation}
R^{AB} \, \stackrel{\tau \rightarrow \pm \pi /2}{\approx }\, \left(
\begin{array}{llll}
0 & \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right)
& \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right)
& \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right)
\\
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right) & 0
& \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{14/5}}\right)
& \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{14/5}}\right)
\\
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right) &
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{14/5}}\right) &
0 & \mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi
}{2}\right)^{14/5}}\right) \\
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{6/5}}\right) &
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{14/5}}\right) &
\mathcal{O}\left(\frac{1}{\left(\tau \mp \frac{\pi }{2}\right)^{14/5}}\right) &
0
\end{array}
\right) \label{divergendo2}
\end{equation}
At $Y\ne 0$, differently from the $Y=0$ case, there are two different velocities of approach to infinity for the curvature components: half of them go faster and half of them go slower. Yet the relevant point is that all of them blow up, certifying that we are in the presence of a true singularity, both at the beginning and at the end of time.
The overall shape of the solution for the scalar field and for the scale factor is displayed in fig.\ref{nipotino}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{susytriscalafactor1.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display plots (in parametric time) of the scale factor (solid line) and of the scalar field (dashed line) for the trigonometric type of solutions at $Y=1$. It is evident that in a finite time the Universe undergoes a Big Bang, a decelerated expansion and then a Big Crunch. At the same time the scalar field climbs from $-\infty$ to a maximum and then descends again to $-\infty$.
}
\label{nipotino}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\paragraph{Overcritical $Y>Y_0$.} When the parameter $Y$ is overcritical we have a new zero of the scale factor, which occurs at some $\tau_0$ in the fundamental interval $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. A practical way to deal with this type of solutions is to invert the procedure and use $\tau_0$ as parameter, by setting $Y=\mathfrak{f}(\tau_0)$. As an illustrative example we choose $\tau_0 \, = \, \frac{\pi}{6}$ and we get:
\begin{equation}\label{ybullo}
Y_\bullet \, = \, \mathfrak{f}\left(\frac{\pi}{6}\right) \, = \, 4 \times 2^{4/5} \sqrt[10]{3}+\frac{2}{5} \,
_2F_1\left(\frac{1}{2},\frac{9}{10};
\frac{3}{2};\frac{1}{4}\right)
\end{equation}
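As a numerical sanity check (ours, not part of the paper), the closed form (\ref{ybullo}) indeed coincides with $\mathfrak{f}\!\left(\frac{\pi}{6}\right)$ and is overcritical:

```python
import mpmath as mp
mp.mp.dps = 25

def frakf(t):
    # \mathfrak{f}(tau) from (\ref{goliardo})
    c, s = mp.cos(t), mp.sin(t)
    F = mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, s**2)
    return mp.mpf(4)/5*c**(mp.mpf(1)/5)/s*(c**(mp.mpf(9)/5)*F*(s/c)**2 + 5)

# Closed form (\ref{ybullo})
Ybul = 4*2**(mp.mpf(4)/5)*3**(mp.mpf(1)/10) \
       + mp.mpf(2)/5*mp.hyp2f1(mp.mpf(1)/2, mp.mpf(9)/10, mp.mpf(3)/2, mp.mpf(1)/4)
Y0 = 4*mp.sqrt(mp.pi)*mp.gamma(mp.mpf(11)/10)/mp.gamma(mp.mpf(3)/5)

assert abs(frakf(mp.pi/6) - Ybul) < mp.mpf('1e-18')  # matches f(pi/6)
assert Ybul > Y0                                     # overcritical, as claimed
```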
The behavior of the real and imaginary parts of the three functions composing the solution for $Y=Y_\bullet$ is displayed in fig.\ref{Ybultriplots}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{YbulSF.eps}
\includegraphics[height=50mm]{YbulEXPB.eps}
\includegraphics[height=50mm]{YbulPHI.eps}
\else
\end{center}
\fi
\caption{\it
Plots of the real and imaginary parts of the scale factor, of the $\exp[\mathcal{B}]$-factor and of the scalar field for the case of parameter $Y=Y_\bullet$, which is overcritical, $Y_\bullet > Y_0$. In the three diagrams the solid line represents the real part, while the dashed line represents the imaginary part. The interval in which the three functions are simultaneously real is now reduced to $\left[-\frac{\pi}{2},\frac{\pi}{3}\right]$. In the real range the scalar field climbs from $-\infty$ to $+\infty$. }
\label{Ybultriplots}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
The new character of the solution is immediately evident from such plots. The earlier zero of the scale factor corresponds to a divergence of the scalar field, which now climbs from $-\infty$ to $+\infty$, as displayed in fig.\ref{giunglatroops}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{SpercriticalTrigo.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display plots (in parametric time) of the scale factor (solid line) and of the scalar field (dashed line) for the supercritical trigonometric type of solutions at $Y=Y_\bullet$. It is evident that in the finite parametric time interval $\left[-\frac{\pi}{2},\frac{\pi}{3}\right]$ the Universe undergoes a Big Bang, a decelerated expansion and then a Big Crunch. At the same time the scalar field climbs from $-\infty$ to $+\infty$.
}
\label{giunglatroops}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
The structure of the phase portrait changes significantly with respect to the subcritical cases; it is displayed in fig.\ref{facciaportratto3}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{fasaportrig3.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the phase portrait of the solution defined by the supercritical value $Y=Y_\bullet$. The axes are the scalar field $\Phi \equiv \mathfrak{h}$ and its derivative with respect to the cosmic time $V \equiv \partial_{T_c} \mathfrak{h}$. The extremum of the potential is at $\Phi_0 \, = \, - \log[5]$. The solution reaches it, however, with a non-vanishing velocity. The field also reaches vanishing velocity, yet, differently from the previous case, before rather than after the extremum. This allows for the continuous climbing of the field up to $+\infty$.
}
\label{facciaportratto3}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
At the Big Bang point $\tau \, = \, -\frac{\pi}{2}$ the curvature components diverge just in the same way as in eq.(\ref{divergendo2}), namely the fastest approach to infinity is $\mathcal{O}\left( \frac{1}{(\tau+\pi/2)^{14/5}}\right)$. Instead at the new Big Crunch point $\tau=\pi/3$ the fastest diverging components of the curvature tensor have a much stronger singularity, namely they diverge as $\mathcal{O}\left( \frac{1}{(\tau-\pi/3)^{7}}\right)$. This further shows the clear-cut separation between subcritical and overcritical solutions of the trigonometric type.
\par
Apart from this finer structure, the above detailed analysis has explicitly demonstrated the main point that we want to stress, since it is somewhat new in General Relativity. Notwithstanding the spatial flatness of the metric and notwithstanding the positive asymptotic behavior of the potential $V(\mathfrak{h})$, which goes to $+\infty$ for large values of the scalar field $\mathfrak{h}$, the presence of a negative extremum of $V(\mathfrak{h})$ (no matter whether a maximum or a minimum) always implies a collapse of the Universe at a finite value of the cosmic or parametric time.
\par
The Big Crunch collapse is the typical destiny of a closed Universe with positive spatial curvature. Therefore one is naturally led to inquire whether Universes such as those discussed above have just the same causal structure as a closed Universe. To answer this question we consider the Particle and Event Horizons.
\subsubsection{\sc Particle and Event Horizons}
\label{parteventhoriz}
Two important concepts in Cosmology are those of Particle and Event Horizons. Given a metric of the form (\ref{piatttosa}), let us rewrite it in polar coordinates:
\begin{equation}\label{polaris}
ds^2 \, = \, \exp\left[2\mathcal{B}(\tau) \right] \, d\tau^2 - \mathfrak{a}^2(\tau) \, \left( dr^2 + r^2 d\Omega^2\right)
\end{equation}
where, as usual, $d\Omega^2$ denotes the metric on a two-sphere, and let us consider the radial light-like geodesics defined by the equation:
\begin{equation}\label{bomboladigas}
0 \, = \, \exp\left[2\mathcal{B}(\tau) \right] \, d\tau^2 - \mathfrak{a}^2(\tau) \, dr^2 \quad \rightarrow \quad \int_{0}^R \, dr \, = \, \int_{- \frac{\pi}{2}}^T \, d\tau \, \frac{\exp\left[\mathcal{B}(\tau) \right] }{\mathfrak{a}(\tau)}
\end{equation}
From (\ref{bomboladigas}) it follows that at any parametric time $T$ after the Big Bang, the remotest radial coordinate from which we can receive a signal is given by:
\begin{equation}\label{curlandia}
R(T) \, = \,\int_{- \frac{\pi}{2}}^T \, d\tau \, \frac{\exp\left[\mathcal{B}(\tau) \right] }{\mathfrak{a}(\tau)}
\end{equation}
and in any case the maximal physical value of such a radial coordinate is given by:
\begin{equation}\label{curlandiabis}
r_{max} \, = \,\int_{- \frac{\pi}{2}}^{T_{max}} \, d\tau \, \frac{\exp\left[\mathcal{B}(\tau) \right] }{\mathfrak{a}(\tau)}
\end{equation}
where $T_{max}$ is the Big Crunch parametric time. It is therefore convenient to measure radial coordinates $r$ in fractions of this maximal one and to measure the scale factor in fractions of the maximal one attained during the time evolution:
\begin{equation}\label{massimino}
a_{max} \, \equiv \, \mathfrak{a}(\hat{\tau}) \quad ; \quad \mbox{where} \quad \partial_\tau \mathfrak{a}(\tau)|_{\tau = \hat{\tau}} \, = \, 0
\end{equation}
Setting:
\begin{equation}\label{gugulini}
{\bar{R}}(T) \, = \, \frac{R(T)}{r_{max}} \quad ; \quad \bar{\mathfrak{a}}(\tau) \, = \, \frac{\mathfrak{a}(\tau)}{a_{max}}
\end{equation}
we conclude that the farthest distance from which an observer can receive a signal at any instant of time is:
\begin{equation}\label{patacchio}
\mathcal{P}(T) \, = \, \frac{\mathfrak{a}(T)}{a_{max} \, r_{max} } \, \int_{- \frac{\pi}{2}}^T \, d\tau \, \frac{\exp\left[\mathcal{B}(\tau) \right] }{\mathfrak{a}(\tau)}
\end{equation}
By definition this distance is the \textit{Particle Horizon} and it defines the portion of Space that is visible to an Observer living at time $T$.
On the other hand the \textit{Event Horizon} is the boundary of the Physical Space from which no signal will ever reach an Observer living at time $T$ at any time of his future. In full analogy with equation (\ref{patacchio}) the Event Horizon is defined by:
\begin{equation}\label{spazzolone}
\mathcal{E}(T) \, = \, \frac{\mathfrak{a}(T)}{a_{max} \, r_{max} } \, \int_{T}^{T_{max}} \, d\tau \, \frac{\exp\left[\mathcal{B}(\tau) \right] }{\mathfrak{a}(\tau)}
\end{equation}
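The two horizon formulas (\ref{patacchio}) and (\ref{spazzolone}) are easy to evaluate by direct quadrature. The following Python sketch does so for an illustrative toy Big Bang/Big Crunch model, with $\mathfrak{a}(\tau)=\cos\tau$ on $\tau\in(-\pi/2,\pi/2)$ and $\exp\left[\mathcal{B}(\tau)\right]=\mathfrak{a}^2(\tau)$ chosen only so that the null-ray integrand $\exp\left[\mathcal{B}\right]/\mathfrak{a}=\cos\tau$ stays finite at the singular endpoints; this is a stand-in, not one of the exact trigonometric solutions of the text.

```python
import math

# Toy Big Bang / Big Crunch model, an illustrative stand-in (NOT one of the
# exact trigonometric solutions of the text): a(tau) = cos(tau) on
# tau in (-pi/2, pi/2), with exp[B(tau)] = a(tau)^2, so that the null-ray
# integrand exp[B]/a = cos(tau) stays finite at both singular endpoints.

def a(tau):
    return math.cos(tau)

def integrand(tau):
    # exp[B(tau)] / a(tau)
    return math.cos(tau)

def comoving_radius(t0, t1, n=20000):
    # midpoint rule for int_{t0}^{t1} exp[B]/a dtau, cf. eq.(curlandia)
    h = (t1 - t0) / n
    return h * sum(integrand(t0 + (k + 0.5) * h) for k in range(n))

T_BB, T_BC = -math.pi / 2, math.pi / 2          # Big Bang / Big Crunch
r_max = comoving_radius(T_BB, T_BC)             # exact value: 2
a_max = 1.0                                     # a(tau) peaks at tau = 0

def particle_horizon(T):
    # eq.(patacchio)
    return a(T) / (a_max * r_max) * comoving_radius(T_BB, T)

def event_horizon(T):
    # eq.(spazzolone)
    return a(T) / (a_max * r_max) * comoving_radius(T, T_BC)
```

In this toy model $R(T)=1+\sin T$, $r_{max}=2$ and $\mathcal{P}(T)+\mathcal{E}(T)=\mathfrak{a}(T)$, so the whole comoving space becomes visible only at the Crunch; the quadrature reproduces these closed forms.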
It is well known (see for instance \cite{pietroGR}) that in a matter dominated, closed Universe the Particle Horizon and the Event Horizon exactly coincide. This means that in such a Universe the portion of space that is invisible to an observer living at time $T$ will remain invisible to him also at all later times. Furthermore in such a Universe the Particle/Event Horizon contracts to zero exactly at the moment when the Universe reaches its maximum extension. An observer living at that time is completely blind and will stay blind for the rest of his life.
In the Universes we have considered in this section things go quite differently, since the Particle and the Event Horizon do not coincide and actually have a somewhat opposite behavior. Plots of the Particle and Event Horizons are shown in figure \ref{partucla} for the three solutions of trigonometric type we have been considering.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{PEhor0.eps}
\includegraphics[height=50mm]{PEhor1.eps}
\includegraphics[height=50mm]{PEhorBul.eps}
\else
\end{center}
\fi
\caption{\it
The three plots in this figure respectively refer to the solutions $Y=0$, $Y=1$ and $Y=Y_\bullet$. In each plot the solid line represents the Scale Factor, the dashed line with longer dashes represents the Particle Horizon, while the dashed line with shorter and denser dashes represents the Event Horizon. In all cases the Event Horizon goes to zero faster than the Particle Horizon and invisible portions of the Universe become visible to the same observer at later times.
\label{partucla}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
In all cases the Particle Horizon does not coincide with the Event Horizon. At the beginning the latter is larger than the former, which means that there is a portion of the invisible Universe which will reveal itself to a given observer in his future. Then the Particle Horizon grows and the Event Horizon rapidly decreases. This means that as time goes on the visible Universe becomes larger and larger, but so does the portion that will never reveal itself to an observer living at that time. In all cases, before the end of its life, the entire Universe becomes visible to an observer living at that time. This happens at relatively early times for supercritical solutions with $Y> Y_0$.
\par
This rather intricate structure is quite different from that of a closed Universe with positive spatial curvature. Contrary to generally accepted lore, these solutions of the Einstein-Klein-Gordon equations show that spatial flatness of the Universe should not lead us to automatically exclude the possibility of a final collapse into a singularity. We think that this is an important warning and for this reason we analyzed the case in depth. A second motivation for our detailed analysis was the confirmation of the climbing/descending mechanism that strictly correlates the space-time singularities with the divergences of the scalar field, which can only start from and end up at infinity. Without a positive extremum the scalar field cannot stop at a fixed point and space-time has no other option than exploding and then collapsing.
\par
In the next section we consider the much simpler and rather smoothly behaved hyperbolic solutions that occur when the potential has no extremum.
\subsection{\sc Hyperbolic solutions in the run away potential without finite extrema}
When we choose $\omega=\nu^2$ the potential has no minimum; the solution of eq.s(\ref{equemozionica}) drastically simplifies and is given in terms of exponential functions.
\par
Explicitly we obtain:
{\small
\begin{eqnarray}
\label{HyperUVsol}
U(\tau) &=& a \, e^{\nu \tau } +b \, e^{-\nu
\tau } \\
V(\tau) &=& \left(e^{-\nu \tau }
\left(\left(e^{2 \nu \tau }
a+b\right) \left(c e^{2 \nu
\tau } \nu ^2+d \nu ^2-4 e^{\nu
\tau } \sqrt[5]{e^{\nu \tau }
a+b e^{-\nu \tau
}}\right)\right.\right.\nonumber\\
&&\left.\left.-e^{\nu \tau }
\sqrt[5]{e^{\nu \tau } a+b
e^{-\nu \tau }} \left(b-a e^{2
\nu \tau }\right)
\left(\frac{e^{2 \nu \tau }
a}{b}+1\right)^{4/5} \,
_2F_1\left(\frac{2}{5},\frac{4}{
5};\frac{7}{5};-\frac{a e^{2 \nu
\tau
}}{b}\right)\right)\right)\times\nonumber\\
&& \left(e^{2
\nu \tau } a+b\right) \nu ^2
\end{eqnarray}
}
where $a,b,c,d$ are integration constants. Once inserted into the formula (\ref{integtrasformazia}) for the physical fields,
the solution (\ref{HyperUVsol}) produces a solution of the original equation upon implementation of the same constraint
(\ref{dariopisco}) as in the trigonometric case, which we can solve with the same ansatz, namely by setting $d=-\rho \, a$, $c=\rho \,b$.
The final form of the solution depending on three parameters is the following one:
\begin{eqnarray}
\mathfrak{P}(\tau;a,b,\rho,\nu) &=& e^{-\nu \tau } \sqrt[5]{e^{\nu
\tau } a+b e^{-\nu \tau }}
\left(\left(e^{2 \nu \tau }
a+b\right) \left(b e^{2 \nu
\tau } \rho \nu ^2-a \rho \nu^2 \right.\right.\nonumber\\
&&\left.\left. -4 e^{\nu \tau }
\sqrt[5]{e^{\nu \tau } a+b
e^{-\nu \tau }}\right)-e^{\nu
\tau } \sqrt[5]{e^{\nu \tau }
a+b e^{-\nu \tau }} \left(b-a
e^{2 \nu \tau }\right)
\left(\frac{e^{2 \nu \tau }
a}{b}+1\right)^{4/5} \, \times \right.\nonumber\\
&&\left.
_2F_1\left(\frac{2}{5},\frac{4}{
5};\frac{7}{5};-\frac{a e^{2 \nu
\tau }}{b}\right)\right) \nonumber\\
a(\tau;a,b,\rho,\nu) &=& \frac{\mathfrak{P}(\tau;a,b,\rho,\nu)}{\left(e^{2 \nu \tau } a+b\right)
\nu ^2} \nonumber \\
\end{eqnarray}
\begin{eqnarray}
\exp\left[\mathcal{B}\right](\tau;a,b,\rho,\nu) &=& \frac{\left(\mathfrak{P}(\tau;a,b,\rho,\nu)\right)^2}{\left(e^{\nu \tau } a+b e^{-\nu
\tau }\right)^{4/5} \left(e^{2
\nu \tau } a+b\right)^2 \nu ^4} \nonumber\\
\mathfrak{h}(\tau;a,b,\rho,\nu) &=& \log \left[\frac{\left(e^{\nu \tau } a+b e^{-\nu
\tau }\right)^{2/5} \left(e^{2
\nu \tau } a+b\right) \nu ^2}{\mathfrak{P}(\tau;a,b,\rho,\nu)} \right]
\end{eqnarray}
\subsubsection{\sc The simplest hyperbolic solution}
The simplest solution of the hyperbolic type is obtained for the choice $a=0$, $b=1$, $\rho=1$, since in this case the hypergeometric function reduces to unity and we simply get:
\begin{eqnarray}
\label{finocchius}
\mathbf{a}(t,\nu) &\equiv & a\left(t-\frac{5 \log \left(\frac{\nu
^2}{5}\right)}{6 \nu };0,1,1,\nu\right) \, = \, \frac{5^{2/3} e^{-\frac{2 t
\nu }{5}}
\left(-1+e^{\frac{6 t \nu
}{5}}\right)}{\nu ^{4/3}} \nonumber \\
\exp\left[\mathbf{B}(t,\nu) \right] &\equiv& \exp\left[\mathcal{B}\left(t-\frac{5 \log \left(\frac{\nu
^2}{5}\right)}{6 \nu };0,1,1,\nu\right)\right] \, = \, \frac{25 \left(-1+e^{\frac{6 t
\nu }{5}}\right)^2}{\nu ^4}\nonumber\\
\mathbf{h}(t,\nu) &\equiv & \mathfrak{h}\left(t-\frac{5 \log \left(\frac{\nu
^2}{5}\right)}{6 \nu };a,b,\rho,\nu\right) \, = \, \log \left(\frac{1}{5
\left(-1+e^{\frac{6 t \nu
}{5}}\right)}\right)+2 \log (\nu )
\end{eqnarray}
The shift in the parametric time variable $\tau \rightarrow t -\frac{5 \log \left(\frac{\nu^2}{5}\right)}{6 \nu }$ has been specifically arranged in such a way that $t=0$ is a zero of the scale factor, namely corresponds to the Big Bang. Furthermore, in this case, which involves only elementary transcendental functions, the relation between parametric and cosmic time can be explicitly evaluated. We have:
\begin{equation}\label{radiboga}
T_c(t) \, \equiv \, \int_0^t \, dx \,\exp\left[\mathbf{B}(x,\nu) \right] \, = \, \frac{25 t}{\nu ^4}-\frac{125 e^{\frac{6 t \nu
}{5}}}{3 \nu ^5}+\frac{125 e^{\frac{12 t \nu
}{5}}}{12 \nu ^5}+\frac{125}{4 \nu ^5}
\end{equation}
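The closed form (\ref{radiboga}) can be cross-checked numerically: it should agree with a direct quadrature of $\int_0^t \exp\left[\mathbf{B}(x,\nu)\right]dx$. A minimal Python sketch, with $\nu=\frac{1}{4}$ as an illustrative value:

```python
import math

# Numerical check of eq.(radiboga): the closed form for T_c(t) should agree
# with a direct quadrature of int_0^t exp[B(x, nu)] dx. The value nu = 1/4
# is just an illustrative choice.

def expB(x, nu):
    # eq.(finocchius): exp[B(t, nu)] = 25 (exp(6 t nu / 5) - 1)^2 / nu^4
    return 25.0 * (math.exp(6 * x * nu / 5) - 1.0) ** 2 / nu ** 4

def Tc_closed(t, nu):
    # eq.(radiboga)
    return (25 * t / nu ** 4
            - 125 * math.exp(6 * t * nu / 5) / (3 * nu ** 5)
            + 125 * math.exp(12 * t * nu / 5) / (12 * nu ** 5)
            + 125 / (4 * nu ** 5))

def Tc_numeric(t, nu, n=100000):
    # midpoint-rule quadrature of exp[B]
    h = t / n
    return h * sum(expB((k + 0.5) * h, nu) for k in range(n))
```

The constant $\frac{125}{4\nu^5}$ is precisely what makes $T_c(0)=0$, i.e. what places the Big Bang at $t=0$.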
This allows for a simple evaluation of the asymptotic behavior of both the scale factor and the scalar field at very late and very early times. We calculate the limit:
\begin{eqnarray}\label{gordiano}
\lim_{t\,\to \, \infty} \, \frac{\log[\mathbf{a}(t,\nu)]}{\log[T_c(t)]} & = & \frac{1}{3}
\end{eqnarray}
This means that at late times, independently from the parameter $\nu$ the scale factor behaves like the cubic root of the cosmic time.
\begin{equation}
\mathbf{a}(T_c,\nu) \, \stackrel{T_c \to \infty}{\simeq} \, \mbox{const} \times T_c^{\frac{1}{3}} \nonumber \\
\label{asintotus1}
\end{equation}
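The limit (\ref{gordiano}) can be illustrated numerically; in the sketch below $\nu=\frac{1}{4}$ is an arbitrary choice, and the ratio $\log\mathbf{a}/\log T_c$ is seen to approach $\frac{1}{3}$ as $t$ grows.

```python
import math

# Numerical illustration of the limit (gordiano): for nu = 1/4 (an arbitrary
# choice) the ratio log a / log T_c tends to 1/3 at large t.

def scale_factor(t, nu):
    # eq.(finocchius)
    return (5 ** (2 / 3) * math.exp(-2 * t * nu / 5)
            * (math.exp(6 * t * nu / 5) - 1.0) / nu ** (4 / 3))

def Tc(t, nu):
    # eq.(radiboga)
    return (25 * t / nu ** 4 - 125 * math.exp(6 * t * nu / 5) / (3 * nu ** 5)
            + 125 * math.exp(12 * t * nu / 5) / (12 * nu ** 5) + 125 / (4 * nu ** 5))

def ratio(t, nu=0.25):
    return math.log(scale_factor(t, nu)) / math.log(Tc(t, nu))
```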
This corresponds to an equation of state of type (\ref{equatastata}) with $w=1$. In view of eq.s(\ref{patatefritte}) this means that at late times the predominant contribution to the energy density is the kinetic one, the potential energy being negligible. Such a conclusion can be matched with the information on the asymptotic behavior of the scalar field at late times. The latter can be worked out in the following way. As $t \to \infty$ (for $\nu >0$) we have:
\begin{equation}\label{ciaccius1}
T_c \, \stackrel{t \to \infty}{\simeq} \,\frac{125 e^{\frac{12 t \nu
}{5}}}{12 \nu ^5}
\end{equation}
while for the scalar field we get:
\begin{equation}\label{ciaccius2}
\mathbf{h}(t,\nu) \, \stackrel{t \to \infty}{\simeq} \, -\frac{6 t \nu }{5}
\end{equation}
Combining the two results we get:
\begin{equation}\label{ciaccius3}
\mathbf{h} \, \stackrel{T_c \to \infty}{\simeq} \, - \frac{1}{2} \, \log[T_c]
\end{equation}
namely, the scalar field goes logarithmically to $-\infty$ when plotted against cosmic time. Obviously the value of the potential (\ref{farimboldobis}) at $\mathfrak{h} \, = \, -\infty$ is zero and this explains the asymptotic dominance of the kinetic energy.
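The logarithmic descent (\ref{ciaccius3}) can be checked numerically in the same way; again $\nu=\frac{1}{4}$ is an arbitrary illustrative choice.

```python
import math

# Numerical illustration of eq.(ciaccius3): for nu = 1/4 (arbitrary choice)
# the ratio h / log T_c tends to -1/2 at large t.

def h_field(t, nu):
    # eq.(finocchius)
    return math.log(1.0 / (5.0 * (math.exp(6 * t * nu / 5) - 1.0))) + 2.0 * math.log(nu)

def Tc(t, nu):
    # eq.(radiboga)
    return (25 * t / nu ** 4 - 125 * math.exp(6 * t * nu / 5) / (3 * nu ** 5)
            + 125 * math.exp(12 * t * nu / 5) / (12 * nu ** 5) + 125 / (4 * nu ** 5))

def slope(t, nu=0.25):
    return h_field(t, nu) / math.log(Tc(t, nu))
```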
\par
Working out the behavior at very early times is more complicated, yet we can predict it by inspecting the behavior of the energy density and of the pressure. Inserting the form of the solution and of the potential in eq.(\ref{patatefritte}) we obtain the parametric time behavior of the energy density and of the pressure \footnote{Note that to obtain this result we have calculated the derivative of the scalar field with respect to the cosmic time and not with respect to the parametric time}:
\begin{eqnarray}
\label{quaquastata}
\rho &=& \frac{3 \nu ^8 \left(-4 \nu ^2+2 e^{\frac{6 t \nu
}{5}} \left(2 \nu ^2+5\right)+e^{\frac{12 t \nu
}{5}} \left(3 \nu ^2-5\right)-5\right)}{15625
\left(-1+e^{\frac{6 t \nu }{5}}\right)^6} \\
p &=& \frac{3 \nu ^8 \left(4 \nu ^2-2 e^{\frac{6 t \nu
}{5}} \left(2 \nu ^2+5\right)+e^{\frac{12 t \nu
}{5}} \left(3 \nu ^2+5\right)+5\right)}{15625
\left(-1+e^{\frac{6 t \nu }{5}}\right)^6}
\end{eqnarray}
Expanding both functions in power series for $t\sim 0$ we obtain:
\begin{eqnarray}
\rho & \stackrel{t \to 0}{\sim} & \frac{\nu ^4}{1728 t^6}-\frac{\nu ^5}{2592
t^5}+O\left(\frac{1}{t^4}\right) \\
p & \stackrel{t \to 0}{\sim} & \frac{\nu ^4}{1728 t^6}-\frac{13 \nu ^5}{12960
t^5}+O\left(\frac{1}{t^4}\right)
\end{eqnarray}
Both the pressure and the energy density diverge as $1/t^6$ plus subleading singularities; the equality of the coefficients of the leading poles in the two expansions implies that also at very early times the effective equation of state is
%
\begin{equation}\label{equastata}
p \, = \, \rho \quad \Leftrightarrow \quad w \, = \, 1
\end{equation}
%
which implies the following behavior for the scale factor:
\begin{equation}
\mathbf{a}(T_c,\nu) \, \stackrel{T_c \to 0}{\simeq} \, \mbox{const} \times T_c^{\frac{1}{3}} \nonumber \\
\label{asintotus0}
\end{equation}
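The early-time equation of state (\ref{equastata}) can be verified directly from the expressions of eq.(\ref{quaquastata}), without using the series expansions; the value $\nu=1$ in the sketch below is an illustrative choice.

```python
import math

# Numerical check, using the expressions of eq.(quaquastata), that the
# effective equation-of-state parameter w = p / rho tends to 1 at very
# early times. The value nu = 1 is an illustrative choice.

def rho(t, nu):
    u = math.exp(6 * t * nu / 5)
    num = 3 * nu ** 8 * (-4 * nu ** 2 + 2 * u * (2 * nu ** 2 + 5)
                         + u ** 2 * (3 * nu ** 2 - 5) - 5)
    return num / (15625 * (u - 1.0) ** 6)

def pressure(t, nu):
    u = math.exp(6 * t * nu / 5)
    num = 3 * nu ** 8 * (4 * nu ** 2 - 2 * u * (2 * nu ** 2 + 5)
                         + u ** 2 * (3 * nu ** 2 + 5) + 5)
    return num / (15625 * (u - 1.0) ** 6)

def w_eff(t, nu=1.0):
    return pressure(t, nu) / rho(t, nu)
```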
With the same technique we can also work out the asymptotic behavior of the scalar field near the origin of time:
\begin{equation}\label{ciaccius5}
\mathbf{h} \, \stackrel{T_c \to 0}{\simeq} \, - \frac{1}{3} \, \log[T_c]
\end{equation}
In fig.\ref{soluziasemplice} we present the plots of an explicit example of such solutions.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{hypersusyAfac.eps}
\vskip 2cm
\includegraphics[height=60mm]{hypersusyScal.eps}
\else
\end{center}
\fi
\caption{\it
Here we present the behavior of the scale factor and of the scalar field for the simplest of the hyperbolic type solutions ($\omega = \nu^2 $) of the cosmological model based on the potential of
eq.(\ref{farimboldobis}).
The analytic form of the solution is given in
eq.(\ref{finocchius}).
For the plot we have chosen $\nu = \frac{1}{4}$. In the first graph, describing the scale factor, the solid line is the actual solution while the dashed curves are of the form $\alpha_{1,2} \, T_c^{\frac 13}$ with two different coefficients $\alpha_1 = \frac{3^{2/3}}{{10}^{1/3}}$ and $\alpha_2 = \left(\frac{3}{5}\right)^{1/3}$. The first curve is tangential to the solution at $T_c\to 0$ while the second is tangential to the solution at $T_c\to \infty$. The same style of presentation is adopted in the second picture. Here we plot the scalar field against the logarithm of the cosmic time. The two dashed straight lines represent the curves $-\frac{1}{3}\, \log \left[ T_c \right] $, and $-\frac{1}{2}\, \log \left[ T_c \right]$. The first is tangential to the solution at $T_c \to 0$, the second is tangential to the solution at $T_c\to \infty$. }
\label{soluziasemplice}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
Finally in fig.\ref{curbettis} we present the phase portrait for the solutions of hyperbolic type. We compare the phase portrait of the simple solution analyzed above with that of the generic solutions, which involve also the hypergeometric term. The quality of the picture is essentially the same, yet there is an important difference concerning the asymptotic behavior.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=40mm]{fasahypersemp.eps}
\vskip 2cm
\includegraphics[height=40mm]{fasahypercomp.eps}
\else
\end{center}
\fi
\caption{\it
In this picture we present some trajectories making up the phase portrait for the solutions of hyperbolic type presented in this section. In the upper figure we use only the simple solution involving elementary functions. In the lower figure we also utilize solutions involving the hypergeometric term. The quality of the portraits is just the same. }
\label{curbettis}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
\subsubsection{\sc Hyperbolic solutions displaying the second asymptotic behavior for late times}
According to the analysis of \cite{primopapero} and of the previous literature \cite{dks,lm,exponential_pot}, when the potential is of the exponential type considered in this paper, namely
\begin{equation}
\mathcal{V}(\varphi)\ = \ \sum_{k=1}^{n}\,{\cal V}_{0k}\,e^{2\,\gamma_k\,\varphi}\ , \quad \gamma_{k} \ > \ \gamma_{k+1}\
\label{SingAn2}
\end{equation}
there are two possible different asymptotic behaviors of the scale factor in the vicinity of a Big Bang or of a Big Crunch. One behavior is universal and it is the one we have met in the previous example of the simplest hyperbolic solution, namely:
\begin{equation}\label{gorko}
a(T_c) \, \sim \, T_c^{\frac{1}{3}} \quad \Leftrightarrow \quad w\, = \, 1 \, \quad \mbox{kinetic asymptopia}
\end{equation}
The \textit{universal asymptopia} is the only one available, both at the beginning and at the end of time, when the solution for the scalar field is \textit{climbing}. As already stressed it corresponds to a complete dominance of the kinetic energy of the scalar field over its potential energy. On the other hand if the scalar is descending there are a priori two possible asymptopiae available: in addition to the universal kinetic one (\ref{gorko}), there is also the following one:
\begin{equation}\label{babele}
a(T_c) \, \sim \, T_c^{\frac{1}{3\,\gamma_{dom}^2}} \quad \Leftrightarrow \quad w\, = \, 2 \, \gamma_{dom}^2 -1 \, \quad \mbox{potential asymptopia}
\end{equation}
where $\gamma_{dom}$ is the coefficient of the dominant exponential appearing in the potential, once the latter is written in its normal form (\ref{SingAn2}) by means of the replacement (\ref{babushka}). For descending scalars that tend to $-\infty$, the dominant exponential is the one with the smallest positive $\gamma_k$. In our case $\gamma_{dom} \, = \, \frac{2}{3}$, so that the second asymptopia available in the case of descending solutions is:
\begin{equation}\label{babelebis}
a(T_c) \, \sim \, T_c^{\frac{3}{4}} \quad \Leftrightarrow \quad w\, = \, - \, \frac{1}{9} \quad \mbox{potential asymptopia}
\end{equation}
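The numerology of the two asymptopiae can be checked in exact arithmetic. The sketch below also verifies consistency with the standard FRW relation $a\propto T_c^{\frac{2}{3(1+w)}}$, which is assumed here as an input and agrees with both (\ref{gorko}) and (\ref{babelebis}).

```python
from fractions import Fraction

# Exact-arithmetic check of the two asymptopiae. The FRW relation
# a ~ T_c^(2 / (3 (1 + w))) is assumed here; it is consistent with both
# eq.(gorko) and eq.(babelebis).

gamma_dom = Fraction(2, 3)

w_potential = 2 * gamma_dom ** 2 - 1            # eq.(babele): w = 2 gamma^2 - 1
exp_potential = 1 / (3 * gamma_dom ** 2)        # eq.(babele): exponent 1/(3 gamma^2)

w_kinetic = Fraction(1)                         # eq.(gorko)
exp_kinetic = Fraction(1, 3)                    # eq.(gorko)

def frw_exponent(w):
    # exponent of T_c for a fluid with equation of state p = w rho
    return Fraction(2) / (3 * (1 + w))
```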
The simplest asymptotic solution described in the previous section does not take advantage of the second possibility for its asymptotic behavior: both at very early and at very late times the scale factor grows as $T_c^{\frac{1}{3}}$. This is no longer the case for the solutions with parameter $a\ne0$, where the contribution of the hypergeometric function is switched on. Indeed we have verified that for all such solutions the scale factor grows as $T_c^{\frac{1}{3}}$ near the initial singularity but as $T_c^{\frac{3}{4}}$ at late times.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=40mm]{AggScalFac.eps}
\includegraphics[height=40mm]{AggScalField.eps}
\vskip 2cm
\includegraphics[height=40mm]{AggLogLog.eps}
\else
\end{center}
\fi
\caption{\it
In this picture we present the plots of the scale factor (first plot) and of the scalar field (second plot) in the hyperbolic solution with parameters $a= -1,b=\rho=\nu=1$. The asymptotic behavior for late cosmic times of the scale factor is visualized by the third plot of the logarithm of the function divided by the logarithm of its argument. It is quite evident that this ratio goes rapidly to $0.75 \, = \, \frac{3}{4}$. }
\label{scappavia}
\iffigs
\hskip 1cm \unitlength=1.1mm
\end{center}
\fi
\end{figure}
An example of such a behavior is provided by the plots of fig.\ref{scappavia}. The effective equation of state at late times corresponds to a negative pressure, albeit a very weak one ($w \, = \, -\frac{1}{9}$). This negative pressure is provided by a small dominance of the potential energy over the kinetic one and causes an indefinite expansion of the Universe, slightly stronger than that of a matter dominated Universe ($T_c^{\frac{2}{3}}$) yet very far from an exponential one.
\section{\sc Analysis of the $\cosh$-model}
\label{zerlina}
We come finally to one of the main questions posed in this paper, namely how valuable the exact solutions of integrable one-field cosmologies are as simulations or approximations of the solutions of physical cosmological models, in particular of those provided by consistent one-field truncations of supergravity theories. These latter, as we have emphasized, are mostly not integrable, as we have already seen and as we are going to show further in the last section.
In other words the present section is devoted to assessing the viability of the \textit{minimalist approach}.
\par
Assuming that cosmological models derived from supergravity are not integrable, we still would like to ascertain whether integrable potentials that are similar to actual physical ones have solutions that simulate in a reasonable way the solutions of the supergravity derived models.
\par
In this context a primary example is provided by the $\cosh$ model already introduced in previous sections and defined by the Lagrangian (\ref{lagrucconata}),
which depends on the three parameters $q$, $p$ and $\mu^2$.
As we already pointed out, the family of $\cosh$--models defined by the Lagrangian (\ref{lagrucconata}) with the potential
\begin{equation}\label{gorrona}
V(\mathfrak{h}) \, = \, - \, \mu^2 \, \cosh \left[ p\, \mathfrak{h}\right] \quad ; \quad \mu^2 \, = \,\mbox{either positive or negative}
\end{equation}
includes two integrable cases, when $\frac{p}{\sqrt{3 q}}=1$ or $\frac{p}{\sqrt{3 q}}=\frac{2}{3}$. At the same time the case $p=1$, $q=1$, which is by no means integrable, corresponds to the consistent one-field truncation of the $\mathrm{STU}$ model. Furthermore, other instances of the same Lagrangian are expected to appear in consistent truncations of other Gauged Supergravity models.
\par
Hence the model (\ref{lagrucconata}) is a perfect testing ground for the questions we have posed.
\par
An important conclusion that was reached in \cite{primopapero} is that the qualitative behavior of the solutions is dictated by the type of critical points possessed by the equivalent first order dynamical system which, in its turn, is dictated by the properties of the extrema of the potential. Hence the first question that arises in connection with our model (\ref{lagrucconata}) is whether its critical points are always of the same type or fall into different classes depending on the parameters $p,q$.
\par
To this effect we begin by summarizing some of the results of \cite{primopapero} in a language less mathematically oriented and closer to the jargon of the supergravity community. Furthermore in such a summary we utilize the customary physical normalizations of the Friedman equations, recalled in eq.s(\ref{fridmano}), rather than the normalizations and notations of \cite{primopapero}, which are less familiar to cosmologists.
\subsection{\sc Summary of the mathematical results on the structure of Friedman equations and on the qualitative description of their solutions}
\label{sec:qualitativesummary}
Choosing the standard gauge ${\cal B} \, = \, 0 $
and considering the standard form (\ref{fridmano}) of the Friedman equations, the main crucial observation put forward in \cite{primopapero} is that the logarithm of the scale factor $A(t)$ is a cyclic variable, since it appears only through the Hubble function, namely always under a derivative.
The next crucial observation was that the second order differential system (\ref{fridmano}) can be rewritten
in two different ways as a system of first order ordinary differential equations for two variables. These rewritings were named \textit{irreducible subsystems} and it was advocated that each of them, when solved, generates solutions of the initial second order system (\ref{fridmano}). Adopting such a language allowed for the use of some powerful theorems that can predict the general qualitative behavior of the solutions of Friedman equations once the potential $V(\phi)$ is specified.
\par
The two subsystems are hereby rewritten in the standard notation of General Relativity and Cosmology:
\paragraph{\sc Subsystem I}. The first subsystem uses as independent variables the scalar field $\phi(t)$ and its time derivative $\dot{\phi}(t)$, which is renamed $\mathrm{v}(t)$. Hence one writes:
\begin{eqnarray}
&& \dot{\phi} \ = \ \mathrm{v} \ ,\nonumber \\
&& \dot{\mathrm{v}} \ = \, - \, 3 \, \sigma \, \mathrm{v} \, \sqrt{\frac{1}{3} \, \mathrm{v}^{\,2} \ + \ \frac{2}{3}\, {V}(\phi)}\ - \
{V}^{\,\prime}(\phi)\ ,
\label{MODE3}
\end{eqnarray}
where $\sigma \, = \, \pm 1$ takes into account the two branches of the square root in solving the quadratic equation for the Hubble function. One has to \textit{exclude} those branches of the solutions of (\ref{MODE3}) that satisfy the following conditions:
\begin{eqnarray}
&& \frac{1}{3} \, \mathrm{v}^{\,2} \ + \ \frac{2}{3}\, {V}(\phi)\, = \, 0 \quad \mbox{if} \quad
{V}(\phi) \, \neq \, 0 \ ,\nonumber \\
&& \frac{1}{3} \, \mathrm{v}^{\,2} \ + \ \frac{2}{3}\, {V}(\phi) \, < \, 0
\label{MODE3a}
\end{eqnarray}
The remaining solutions are named \textit{admissible}.
When an admissible solution of eqs. (\ref{MODE3}) is given, namely when the functions $\phi(t)$ and $\mathrm{v}(t)$ have been determined, the Hubble function $H(t)$ is immediately obtained,
\begin{eqnarray}
H(t) \ = \, \sigma \, \sqrt{\frac{1}{3} \, \mathrm{v}^{\,2}(t) \ + \ \frac{2}{3}\, {V}(\phi(t))}\
\label{MODE5}
\end{eqnarray}
and, by means of a further integration, one obtains also the scale factor:
\begin{equation}\label{corinzio}
a(t) \, = \, \exp \left[ \int \, H(t) \, dt \,\right]
\end{equation}
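Subsystem I is straightforward to integrate numerically. The sketch below uses a fourth-order Runge-Kutta step and the illustrative potential $V(\phi)=\frac12 m^2\phi^2$, which is not one of the potentials studied in this paper; the Hubble function and the scale factor are then recovered from (\ref{MODE5}) and (\ref{corinzio}).

```python
import math

# Sketch: numerical integration of Subsystem I, eq.(MODE3), with an
# ILLUSTRATIVE potential V(phi) = (1/2) m^2 phi^2 -- not one of the
# potentials studied in the text.

def V(phi, m=1.0):
    return 0.5 * m * m * phi * phi

def dV(phi, m=1.0):
    return m * m * phi

def hubble(phi, v, sigma=+1):
    # eq.(MODE5): H = sigma * sqrt(v^2/3 + 2 V/3)
    return sigma * math.sqrt(v * v / 3.0 + 2.0 * V(phi) / 3.0)

def rhs(phi, v, sigma=+1):
    # eq.(MODE3): returns (phi_dot, v_dot)
    return v, -3.0 * hubble(phi, v, sigma) * v - dV(phi)

def evolve(phi0, v0, dt=1e-3, steps=5000, sigma=+1):
    phi, v, log_a = phi0, v0, 0.0
    for _ in range(steps):
        # classic RK4 step for (phi, v)
        k1 = rhs(phi, v, sigma)
        k2 = rhs(phi + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1], sigma)
        k3 = rhs(phi + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1], sigma)
        k4 = rhs(phi + dt * k3[0], v + dt * k3[1], sigma)
        phi += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        v += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        log_a += hubble(phi, v, sigma) * dt   # eq.(corinzio): log a = int H dt
    return phi, v, log_a
```

On the expanding branch $\sigma=+1$ the Hubble friction term $-3H\mathrm{v}$ damps the field toward the minimum while $\log a$ grows monotonically.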
\paragraph{\sc Subsystem II.} The second subsystem uses as independent variables the scalar field and the Hubble function. In this way we are led to the following first order equations:
\begin{eqnarray}
&& \dot{\phi} \, = \, \sigma \, \left(3 \, H^{\,2} \ - \ 2\, {V}(\phi)\right)^{\frac{1}{2}}\quad ; \quad \sigma = \pm 1 \nonumber \\
&& \dot{H} \ = \ -\, \left(3 \, H^{\,2} \ - \ 2\, {V}(\phi)\right)\
\label{MODE4}
\end{eqnarray}
As in the first instance of irreducible subsystem, also here we have to exclude inadmissible branches of solutions to eq.s (\ref{MODE4}), namely those that satisfy the following conditions:
\begin{eqnarray}
&& 3 \, H^{\,2} \ - \ 2\, {V}(\phi) \, = \, 0 \quad \mbox{if} \quad
{V}^{\,\prime}(\phi) \ \neq \ 0 \ , \nonumber\\
&& 3 \, H^{\,2} \ - \ 2\, {V}(\phi) \, < \, 0
\label{MODE4a}
\end{eqnarray}
It is easily verified that all equations of the original Friedman system (\ref{fridmano})
follow from either one of the subsystems (\ref{MODE3}) and (\ref{MODE4}).
\par
In mathematical language, both subsystems (\ref{MODE3}) and (\ref{MODE4}) are nonlinear autonomous systems of first-order ordinary differential equations on a two-dimensional Euclidean plane, namely either $\mathbb{R}^2 \, \ni \, (\phi, \, \mathrm{v})$ or $\mathbb{R}^2 \, \ni \,(\phi, \, H)$.
\par
The mathematical theory of planar dynamical systems is highly developed and allows for a qualitative analysis of both the local and the global behavior of their \textit{phase portraits}, namely of their trajectories (also named orbits). According to such a theory a generic planar system is very regular: it admits only a few different types of trajectories and limit sets. Explicitly we can have:
\begin{description}
\item[a)] periodic orbits, also named cycles,
\item[b)] heteroclinic orbits that connect two different critical points of the system,
\item[c)] homoclinic orbits that start from a critical point and return to it at the end of time,
\item[d)] trajectories that connect the point at infinity of $\mathbb{R}^2$ with a fixed point.
\end{description}
As a result no planar dynamical system can be chaotic. This property distinguishes planar systems very strongly from dynamical systems in dimensions higher than two, where various chaotic regimes are generically allowed.
\par
From these considerations it follows that one-field cosmologies are not chaotic and one can obtain a qualitative understanding of their solutions. In the integrable cases the solutions can also be worked out analytically.
The relevant point is that such analytical solutions can be taken as trustworthy models of the behavior of the solutions for entire classes of potentials whose integrable representatives occur only at very special values of their parameters.
\subsubsection{\sc Subsystem I : qualitative analysis}
To illustrate these general ideas in a concrete manner we choose to work with subsystem I.
Fixed points of the subsystem (\ref{MODE3}) are defined by the following equations:
\begin{eqnarray}
\mathrm{v}_0 \, = \, 0\ , \quad {V}^{\,\prime}(\phi_0)\, = \, 0
\label{MODE6}
\end{eqnarray}
and are admissible if they satisfy the condition:
\begin{eqnarray}
{V}(\phi_0)\, \geq \, 0
\label{MODE6a}
\end{eqnarray}
In plain physical words a fixed point of this dynamical system is just a vacuum solution of scalar coupled gravity, namely a constant configuration of the scalar field sitting at an extremum of the potential. At the same time the space-time metric is either the Minkowski metric, if $V(\phi_0) =0$, or the de Sitter metric, if $V(\phi_0) = \Lambda >0$.
\par
From the dynamical system point of view, if the condition (\ref{MODE6a}) is not fulfilled, then the subsystem does not possess fixed points at all, i.e. all its phase space points are regular. Without fixed points a nonlinear system admits only monotonic solutions, which can also blow up in a finite time. From the physical point of view an extremum of the potential at a negative value corresponds to an anti-de Sitter space, yet anti-de Sitter space, differently from de Sitter space, admits no representation in terms of flat constant time slices, which is our initial assumption. Hence the only possible outcome is a blowing up solution with a Big Bang followed by a Big Crunch.
\paragraph{\sc Linearization of the first order system in a neighborhood of a fixed point}
Let us consider the linearization of the first order system in a neighborhood of the fixed point by setting:
\begin{eqnarray}
\phi&=& \phi_0 + \Delta\phi \nonumber\\
\mathrm{v} &=& \Delta \mathrm{v} \label{fluttuo}
\end{eqnarray}
To first order in the deviations we obtain
\begin{eqnarray}
&& \Delta\dot{\phi} \ = \ \Delta\mathrm{v} \ ,\nonumber \\
&& \Delta\dot{ \mathrm{v}} \ = \ -\, \sigma \, \sqrt{ 6 \,{V}(\phi_0)}\, \Delta \mathrm{v}\ - \, {V}^{\,\prime \prime}(\phi_0) \, \Delta\phi \ + \ h.o.t.\ ,
\label{MODE7}
\end{eqnarray}
where the abbreviation h.o.t. means higher order terms.
\par
As explained in \cite{primopapero}, the eigenvalues of the linearization matrix:
\begin{equation}
\mathcal{M} \, = \, \left[\begin{array}{cc} 0 & 1 \\ - V^{\prime\prime}_0 & - \, \sigma\, \sqrt{6 \, V_0}\\
\end{array}\right]\label{somber}
\end{equation}
namely:
\begin{eqnarray}
\lambda_{\pm} \ = \, \frac{1}{\sqrt{2}}\,\left(-\,\sigma \, \sqrt{3 \, V_0}\, \pm \, \sqrt{3 \, V_0\, - \, 2 \, V^{\prime \prime}_0 } \right)
\label{MODE8}
\end{eqnarray}
characterize the type of the corresponding critical points and consequently define the phase portrait of the linearization. The main theorem that allows one to predict the qualitative behavior of the solutions states the following.
In the case the fixed point is hyperbolic, namely when both eigenvalues have a non vanishing real part (i.e. $\mathrm{Re}\,(\lambda_{\pm})\,\neq\,0$), the phase portraits of the nonlinear system and of its linearization are diffeomorphic in a finite neighborhood of the hyperbolic fixed point.
Hence the analysis of the linearization gives a valuable information about the phase portrait of the original nonlinear system we are interested in.
\par
We have in particular the following classification of non degenerate fixed points for which, by definition, the linearization matrix has no zero eigenvalue:
\paragraph{\sc Classification of fixed point types}
\begin{description}
\item[a) \textbf{Saddle }] When the two real eigenvalues have opposite sign $\lambda_+ >0, \,\lambda_- < 0$ or
$\lambda_+ <0, \,\lambda_- > 0$.
\item[b) \textbf{Node }] When the two real eigenvalues have the same sign $\lambda_+ >0, \,\lambda_- > 0$ or
$\lambda_+ <0, \,\lambda_- < 0$.
\item[c) \textbf{Improper Node}] When the two eigenvalues coincide $\lambda_+ \, = \, \lambda_-$, yet the linearization matrix (\ref{somber}) is not diagonal.
\item[d) \textbf{Degenerate Node}] When the linearization matrix (\ref{somber}) is proportional to identity.
\item[e) \textbf{Focus}] When the two eigenvalues are complex conjugate to each other, $\lambda_\pm \, = \, x \, \pm {\rm i} \, y$, with both the real part and the imaginary part non vanishing.
\item[f) \textbf{Center}] When the two eigenvalues are purely imaginary and conjugate to each other.
\end{description}
For each of these fixed point types the trajectories have a distinct behavior that we are going to illustrate by means of our concrete example, namely the \textit{Cosh Model} analysed in the present section.
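The classification above can be carried out mechanically. The following Python sketch (an illustration, not part of the original analysis) classifies a planar fixed point from the eigenvalues of an arbitrary $2\times 2$ linearization matrix; the function \verb|cosh_model_matrix| instantiates the matrix (\ref{somber}) for the potential $2\cosh(\omega\phi)$ of the present section, assuming $\sigma=1$:

```python
import numpy as np

def classify_fixed_point(M, tol=1e-12):
    """Classify a non degenerate planar fixed point from the
    eigenvalues of its 2x2 linearization matrix M."""
    M = np.asarray(M, dtype=float)
    lam = np.linalg.eigvals(M)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) > tol):                 # complex conjugate pair
        return "center" if np.all(np.abs(re) < tol) else "focus"
    l1, l2 = sorted(re)
    if l1 * l2 < 0:
        return "saddle"
    if abs(l1 - l2) > tol:
        return "node"
    # coinciding real eigenvalues
    return "degenerate node" if np.allclose(M, l1 * np.eye(2)) else "improper node"

# Linearization matrix of eq.(somber) with sigma = 1, specialized to the
# Cosh model potential V = 2 cosh(omega*phi): V0 = 2, V''(0) = 2 omega^2.
def cosh_model_matrix(omega):
    return [[0.0, 1.0], [-2.0 * omega**2, -np.sqrt(12.0)]]

print(classify_fixed_point(cosh_model_matrix(1.0)))           # node
print(classify_fixed_point(cosh_model_matrix(np.sqrt(3.0))))  # focus
```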
\par
Furthermore we should recall the result that the subsystem (\ref{MODE3}) has no periodic trajectories according to Dulac's criterion since:
\begin{eqnarray}
\frac{\partial \dot{\phi}}{\partial \phi} \ + \
\frac{\partial \dot{\mathrm{v}}}{\partial \mathrm{v}} \ \equiv \ -\,2\,\sigma \, \frac{\mathrm{v}^{\,2} \ + \ {V}(\phi)}{\sqrt{\frac{1}{3}\,\mathrm{v}^{\,2} \ + \ \frac{2}{3}\, {V}(\phi)}}
\label{MODE3per}
\end{eqnarray}
does not change sign over the whole two-dimensional plane. Thus, we are led to the conclusion that this subsystem can have only fixed points (i.e. vacuum solutions) as well as heteroclinic/homoclinic orbits, orbits connecting infinity with a fixed point or orbits connecting infinity with infinity in the case of fixed points of the saddle type.
As we know, the case $p=1$, $q=1$ is the one which appears in the non-abelian gauging of the $\mathrm{STU}$ model, while the case $p=1$, $q=3$ can be obtained in the $S^3$ model by means of an abelian gauging.
\par
Although the $\mathcal{N}=2$ case is not integrable, it belongs to the same subclass (Node) as the integrable case $\frac{p}{\sqrt{3 q}}=\frac{2}{3}$. Hence we can probably learn about its behavior from an analysis of the integrable case close to it.
\par
The natural question which we have posed, namely \textit{how much do the solutions of the physically relevant $\cosh$-models depart from the exact solutions of the integrable members of the same family}, can now be partially answered. As we just stressed, the physically relevant $\mathcal{N}=2$ case is of the \textit{Node type}, so that any integrable model with the same type of fixed point would capture all the features of the physical $\mathrm{STU}$ model. The other integrable case $\frac{p}{\sqrt{3 q}}=1$ is instead of the \textit{Focus type}, and therefore has slightly more structure than the $\mathrm{STU}$-model. Any other supersymmetric one-field model with a \textit{Cosh potential} of the focus type, although not integrable, might be well described by the $\frac{p}{\sqrt{3 q}}=1$ integrable member of the family (\ref{lagrucconata}).
\subsection{\sc Normal Form of the Cosh-model}
In this spirit let us first show how the \textit{Cosh model} can be put into a normal form, displaying a unique parameter $\omega$ whose value will determine the type of fixed point and, for two special choices, yield two integrable models.
\par
To this effect we introduce the following rescaled fields and variables:
\begin{equation}\label{cambiovario}
\mathfrak{h}[t] \, = \, \frac{\phi(\tau)}{\sqrt{q}} \quad ; \quad t \, = \, \frac{\sqrt{2} \tau }{\mu } \quad ; \quad A(t) \, = \, A(\tau) \quad ; \quad \omega \, \equiv \, \frac{p}{\sqrt{q}}
\end{equation}
In terms of these new variables, the effective Lagrangian (\ref{lagrucconata}) becomes
\begin{equation}\label{borragine}
\mathcal{L} \, = \, e^{3 A(\tau) \, - \, \mathcal{B}(\tau)}\left\{-\frac{3}{2} A'(\tau)^2+\frac{1}{2} \phi'(\tau)^2\, \mp \, 2 e^{2\mathcal{B}(\tau)} \, \cosh [ \omega \phi(\tau)]\right\} \ ,
\end{equation}
where the sign choice distinguishes two drastically different systems. The first choice yields a positive definite potential with an absolute minimum that allows for a stable de Sitter vacuum, while the second yields a potential unbounded from below with an absolute maximum.
If we choose the gauge $\mathcal{B}=0$, the field equations of this system, including the hamiltonian constraint, can be written as the following three Friedman equations:
\begin{equation}\label{friemano}
\begin{array}{lcl}
\frac{a'(\tau )^2}{a(\tau )^2}-\frac{1}{3}\,\phi'(\tau )^2\, \mp \,\frac{4}{3}\,\cosh [\omega \phi(\tau )] & = & 0\\
\null & \null & \null \\
\frac{2}{3}\, \phi'(\tau )^2\, \mp \, \frac{4}{3}\, \cosh [\omega\, \phi(\tau )]+\frac{a''(\tau )}{a(\tau )} & = & 0\\
\null & \null & \null \\
\pm \, 2 \omega \sinh [\omega\, \phi(\tau )]+\frac{3\, a'(\tau )\, \phi'(\tau )}{a(\tau )}+\phi''(\tau ) & = & 0 \\
\end{array}
\end{equation}
where $a(\tau)\, = \, \exp[A(\tau)]$.
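For later reference, the system (\ref{friemano}) in the gauge $\mathcal{B}=0$ is straightforward to integrate numerically. The following Python sketch (the value $\omega=1$ and the Big-Bang-like initial data are illustrative assumptions) integrates the second and third equations with the upper sign choice (positive potential) and monitors the first, which plays the role of the hamiltonian constraint:

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 1.0  # illustrative choice of omega

def friedman_rhs(tau, y):
    """First order form of eq.s (friemano), gauge B = 0, positive
    potential (upper signs); y = (a, a', phi, phi')."""
    a, da, phi, dphi = y
    dda = a * (4.0/3.0 * np.cosh(OMEGA * phi) - 2.0/3.0 * dphi**2)
    ddphi = -3.0 * (da / a) * dphi - 2.0 * OMEGA * np.sinh(OMEGA * phi)
    return [da, dda, dphi, ddphi]

def constraint(y):
    """First of eq.s (friemano): must remain zero along the flow."""
    a, da, phi, dphi = y
    return (da / a)**2 - dphi**2 / 3.0 - 4.0/3.0 * np.cosh(OMEGA * phi)

# Illustrative data: small scale factor, large field; a'(0) is fixed by
# solving the constraint on the expanding branch.
a0, phi0, dphi0 = 0.1, 2.0, -2.0
da0 = a0 * np.sqrt(dphi0**2 / 3.0 + 4.0/3.0 * np.cosh(OMEGA * phi0))
sol = solve_ivp(friedman_rhs, (0.0, 2.0), [a0, da0, phi0, dphi0],
                rtol=1e-10, atol=1e-12)

print("constraint violation at tau = 2:", abs(constraint(sol.y[:, -1])))
```

The constraint is preserved to the accuracy of the integrator, which confirms the consistency of the three equations.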
\subsection{\sc The general integral for the case $\omega = \sqrt{3}$.}
In the integrable case $\omega\, = \,\sqrt{3}$, by means of the integrating transformation described in \cite{primopapero} we obtain the following general solution of eq.s (\ref{friemano}) depending on three parameters, the scale $\lambda$ and the two angles $\psi$ and $\theta$, which applies to the case of the positive potential (upper choice in eq.(\ref{borragine})):
\begin{eqnarray}
a_+(\tau) &=& \sqrt[3]{\left(\lambda \cos (\psi ) \cosh \left(\sqrt{3} \tau \right)+\lambda \sinh \left(\sqrt{3} \tau \right)\right)^2-\lambda ^2 \cos ^2\left(\theta -\sqrt{3} \tau \right) \sin ^2(\psi )} \\
\phi_+(\tau) &=& \frac{1}{\sqrt{3}}\log \left[\frac{\cos (\psi ) \cosh \left(\sqrt{3} \tau \right)-\cos \left(\theta -\sqrt{3} \tau \right) \sin (\psi )+\sinh \left(\sqrt{3} \tau \right)}{\cos (\psi ) \cosh \left(\sqrt{3} \tau \right)+\cos \left(\theta -\sqrt{3} \tau \right) \sin (\psi )+\sinh \left(\sqrt{3} \tau \right)}\right] \label{soluziapiu}
\end{eqnarray}
For the negative potential (lower choice in eq.(\ref{borragine})) we find instead:
\begin{eqnarray}
a_-(\tau) &=& \sqrt[3]{\lambda ^2 \left(\cosh ^2(\psi ) \sinh ^2\left(\theta -\sqrt{3} \tau \right)-\left(\sin \left(\sqrt{3} \tau \right)-\cos \left(\sqrt{3} \tau \right) \sinh (\psi )\right)^2\right)} \\
\phi_-(\tau)&=& \frac{1}{\sqrt{3}} \, \log \left[ \frac{\sin \left(\sqrt{3} \tau \right)+\cosh (\psi ) \sinh \left(\theta -\sqrt{3} \tau \right)-\cos \left(\sqrt{3} \tau \right) \sinh (\psi )}{\sin \left(\sqrt{3} \tau \right)-\cosh (\psi ) \sinh \left(\theta -\sqrt{3} \tau \right)-\cos \left(\sqrt{3} \tau \right) \sinh (\psi )}\right]\label{soluziameno}
\end{eqnarray}
One important observation is the following. In the case of the positive potential, by choosing the parameters $\psi=0,\theta=0$ we obtain the very simple solution:
\begin{equation}\label{carriome}
a_0(\tau) \, = \, \lambda^{2/3}\, \exp\left[ \frac{2 \tau }{\sqrt{3}} \right] \quad ; \quad \phi\, = \, 0 \quad \Rightarrow \quad \mathfrak{h}\, = \, 0
\end{equation}
This is the de Sitter solution where the scalar field is stationary at its absolute minimum and the scale factor grows exponentially.
Such a solution is ruled out in the case of the negative potential which does not allow for any static scalar field solution.
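As an independent consistency check (not part of the original derivation), one can verify with sympy that the solution (\ref{soluziapiu}) satisfies the first and third of eq.s (\ref{friemano}) with the upper sign choice; the sample parameter values below are arbitrary:

```python
import sympy as sp

tau, lam, psi, theta = sp.symbols('tau lambda psi theta', real=True)
s3 = sp.sqrt(3)

# Exact solution (soluziapiu): positive potential, omega = sqrt(3)
P = lam*sp.cos(psi)*sp.cosh(s3*tau) + lam*sp.sinh(s3*tau)
Q = lam*sp.cos(theta - s3*tau)*sp.sin(psi)
a = (P**2 - Q**2)**sp.Rational(1, 3)
N = sp.cos(psi)*sp.cosh(s3*tau) - sp.cos(theta - s3*tau)*sp.sin(psi) + sp.sinh(s3*tau)
D = sp.cos(psi)*sp.cosh(s3*tau) + sp.cos(theta - s3*tau)*sp.sin(psi) + sp.sinh(s3*tau)
phi = sp.log(N/D)/s3

# First and third Friedman equations (friemano), upper signs
eq1 = sp.diff(a, tau)**2/a**2 - sp.diff(phi, tau)**2/3 \
      - sp.Rational(4, 3)*sp.cosh(s3*phi)
eq3 = sp.diff(phi, tau, 2) + 3*sp.diff(a, tau)*sp.diff(phi, tau)/a \
      + 2*s3*sp.sinh(s3*phi)

# Residuals at an arbitrary sample point (exact rationals for precision)
vals = {tau: sp.Rational(37, 100), lam: sp.Rational(13, 10),
        psi: sp.Rational(2, 5), theta: sp.Rational(11, 10)}
print(abs(eq1.subs(vals).evalf(30)), abs(eq3.subs(vals).evalf(30)))
```

Both residuals evaluate to zero within the working precision; the second Friedman equation follows from the other two.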
\paragraph{\sc The second hamiltonian structure}
We have explicitly integrated the integrable cosmological model and obtained its general integral. We can now address the question: why is it integrable? The answer is that it admits not just one but two functionally independent conserved hamiltonians. Examining their structure is worthwhile, since it provides hints about the underlying properties of the field theory that might be responsible for the emergence
of integrability at the cosmological level. Consider then the Lagrangian and the hamiltonian for the model under consideration:
\begin{eqnarray}
\mathcal{L}_{0} & = & \exp[3 A ] \, \left\{ \frac{q}{2} \, \dot{\mathfrak{h}}^2 \, - \, \frac{3}{2} \, \dot{A}^2 \, - \, \mu^2 \cosh \left [ 3 \, \mathfrak{h} \right ] \right\}\nonumber\\
\mathcal{H}_{0} & = & \exp[3 A ] \, \left\{ \frac{q}{2} \, \dot{\mathfrak{h}}^2 \, - \, \frac{3}{2} \, \dot{A}^2 \, + \, \underbrace{\mu^2 \cosh \left [ 3 \, \mathfrak{h} \right]}_{V_0(\mathfrak{h})} \right\}
\end{eqnarray}
By means of direct evaluation we can easily check that the following two functionals:
\begin{eqnarray}
\mathcal{H}_{1} &=& -\frac{1}{2}\, e^{3 A }\left(2 \mu ^2 \cosh^2\left(\frac{3\mathfrak{h}}{2}\right)+3\, \dot{A}^2\cosh^2\left(\frac{3\mathfrak{h}}{2}\right)+3 \sinh^2\left(\frac{3\mathfrak{h}}{2}\right) \dot{\mathfrak{h}}^2+3\sinh (3 \mathfrak{h} )\, \dot{A}\,\dot{\mathfrak{h}} \right) \label{hamma1}\\
\mathcal{H}_{2} &=& \frac{1}{2}\, e^{3 A }\left(-2 \mu ^2 \sinh^2\left(\frac{3\mathfrak{h}}{2}\right)+3\, \dot{A}^2\sinh^2\left(\frac{3\mathfrak{h}}{2}\right)+3 \cosh^2\left(\frac{3\mathfrak{h}}{2}\right) \dot{\mathfrak{h}}^2+3\sinh (3 \mathfrak{h} )\, \dot{A}\,\dot{\mathfrak{h}} \right) \label{hamma2}
\end{eqnarray}
satisfy the following conditions:
\begin{eqnarray}
\mathcal{H}_{1}+\mathcal{H}_{2} &=& \mathcal{H}_{0} \\
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{H}_{1,2}&=& 0 \quad \mbox{upon use of the field equations derived from the Lagrangian}
\end{eqnarray}
Hence $\mathcal{H}_{1,2} $ are the two conserved hamiltonians that guarantee the integrability of the system. As we know, the actual solution of Friedman equations is obtained by enforcing also the constraint $\mathcal{H}_{0}\, = \,0$ so that for our general solution
(\ref{soluziapiu}-\ref{soluziameno}) we have: $\mathcal{H}_{1}\,=\,- \,\mathcal{H}_{2}$.
\subsection{\sc The general integral in the case $\omega = \frac{2}{\sqrt{3}}$}
In this case, as in the previous one, the solution can be obtained by means of the same substitution described in \cite{primopapero}; yet with respect to the previous case there is one relevant difference. In this case the gauge $\mathcal{B}=0$ cannot be chosen
and there is a difference between the cosmic time $t_c$ and the parametric time $\tau$. The form of the space-time metric is the following:
\begin{equation}\label{spaziailtempone}
ds^2 \, = \, - \, \exp\left [ - 2 A(\tau) \right ] \, \mathrm{d}\tau^2 \, + \, \exp\left [ 2 A(\tau) \right ] \, \mathrm{d}\vec{x}^2
\end{equation}
corresponding to the gauge $\mathcal{B}(\tau) \, = \, - A(\tau)$.
In principle the general integral depends on three integration constants, but in this case one of them can be immediately reabsorbed into a shift of the parametric time and thus we are left only with two relevant constants.
\par
At the end of the computations the general integral can be written in a very suggestive and elegant form in terms of the four roots of a quartic polynomial. Precisely we have:
\begin{eqnarray}
A(\tau)&=& \log \left[\frac{2 \sqrt[4]{\left(\tau -\lambda _1\right)\left(\tau -\lambda _2\right) \left(\tau -\lambda _3\right)\left(\tau -\lambda _4\right)}}{\sqrt{3}}\right] \label{AfattoCosh2} \\
\phi(\tau) &=& \frac{1}{4} \sqrt{3} \log\left(\frac{\left(\tau -\lambda _3\right)\left(\tau -\lambda _4\right)}{\left(\tau -\lambda _1\right)\left(\tau -\lambda _2\right)}\right) \\
\mathcal{B}(\tau) &=& -A(\tau) \label{soluzionna}
\end{eqnarray}
One important caveat, however, is the following. Let us name
\begin{equation}\label{maxmin}
\mbox{Re} \lambda_{min} \quad ; \quad \mbox{Re} \lambda_{max}
\end{equation}
the smallest and the largest of the real parts of the four roots. The functions (\ref{soluzionna}) provide an exact solution of the Friedman equations under two conditions:
\begin{description}
\item[A)] The parametric time $\tau$ lies either in the range $\mbox{Re}\, \lambda_{max} \, \le \, \tau \, < \, +\infty$ or in the range
$-\infty \, < \, \tau \, \le \, \mbox{Re}\, \lambda_{min}$
\item[B)] The four roots satisfy the following constraint:
\begin{equation}
2 \lambda _1 \lambda _2-\lambda _2 \lambda _3 -\lambda _2 \lambda _4-\lambda _1 \lambda _3-\lambda _1\lambda _4+2 \lambda _3 \lambda _4 \, = \, 0 \label{custretto1}
\end{equation}
\end{description}
When $ \tau$ is in the range $\mbox{Re}\, \lambda_{min} \, \le \, \tau \, \le \, \mbox{Re}\, \lambda_{max}$, the expressions (\ref{soluzionna}) do not satisfy the Friedman equations.
\par
Solving (\ref{custretto1}) explicitly and replacing such a solution back into the expression (\ref{soluzionna}) of the fields, we obtain the general integral depending on three parameters. As we already stressed, one of these three parameters can always be reabsorbed by means of a shift of the parametric time coordinate $\tau$, as is evident from the form (\ref{soluzionna}). A convenient way of taking this gauge fixing into account is provided by solving the constraint (\ref{custretto1}) in terms of two real parameters $(\alpha,\beta)$, as follows:
\begin{eqnarray}
\lambda_1 &=& \alpha -\sqrt{2} \sqrt{\alpha ^2-\beta } \nonumber\\
\lambda_2 &=& \alpha +\sqrt{2} \sqrt{\alpha ^2-\beta }\nonumber\\
\lambda_3 &=& -\alpha -\sqrt{2} \sqrt{\alpha ^2+\beta } \nonumber\\
\lambda_4 &=& \sqrt{2} \sqrt{\alpha ^2+\beta }-\alpha \label{fraticello}
\end{eqnarray}
which greatly facilitates the discussion. Depending on whether $|\beta| < \alpha^2$ or $|\beta| > \alpha^2$, we either have four real roots, and hence four zeros of the scale factor, or two real roots and two complex conjugate ones. If $|\beta| = \alpha^2$ we have only real roots, but three of them coincide and the fourth is different. Finally, if both $\alpha$ and $\beta$ vanish, the four roots coincide and the corresponding solution degenerates into the de Sitter solution. Substituting the values (\ref{fraticello}) explicitly into (\ref{soluzionna}) we obtain the general integral in the form:
\begin{eqnarray}
a(\tau,\alpha,\beta) \, \equiv \, \exp\left[A(\tau,\alpha,\beta)\right] &=&\frac{2 \sqrt[4]{\alpha ^4-6 \tau ^2\alpha ^2+8 \beta \tau \alpha +\tau^4-4 \beta ^2}}{\sqrt{3}} \label{afattoCosh2}\\
\phi(\tau,\alpha,\beta) &=& \frac{1}{4} \sqrt{3} \log\left(\frac{\alpha ^2-2 \tau \alpha-\tau ^2+2 \beta }{\alpha ^2+2 \tau\alpha -\tau ^2-2 \beta }\right)\label{scalfattoCosh2}\\
\exp\left[B(\tau,\alpha,\beta)\right] &=& \frac{\sqrt{3}}{2 \sqrt[4]{\alpha ^4-6 \tau ^2\alpha ^2+8 \beta \tau \alpha +\tau^4-4 \beta ^2}} \label{BfattoCosh2}
\end{eqnarray}
which will be very useful in the discussion of the properties of the solutions in the vicinity of the fixed point, which for this case happens to be of the Node type.
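The algebra behind eq.s (\ref{fraticello}) is easily verified symbolically. The following sympy sketch checks that the parametrization (\ref{fraticello}) solves the constraint (\ref{custretto1}) and reproduces the quartic appearing under the fourth root in (\ref{afattoCosh2}):

```python
import sympy as sp

tau, alpha, beta = sp.symbols('tau alpha beta', real=True)

# The four roots of eq.(fraticello)
l1 = alpha - sp.sqrt(2)*sp.sqrt(alpha**2 - beta)
l2 = alpha + sp.sqrt(2)*sp.sqrt(alpha**2 - beta)
l3 = -alpha - sp.sqrt(2)*sp.sqrt(alpha**2 + beta)
l4 = -alpha + sp.sqrt(2)*sp.sqrt(alpha**2 + beta)

# They solve the constraint (custretto1) ...
constraint = 2*l1*l2 - l2*l3 - l2*l4 - l1*l3 - l1*l4 + 2*l3*l4
print(sp.expand(constraint))  # 0

# ... and reproduce the quartic under the fourth root of (afattoCosh2)
quartic = sp.expand((tau - l1)*(tau - l2)*(tau - l3)*(tau - l4))
target = alpha**4 - 6*alpha**2*tau**2 + 8*alpha*beta*tau + tau**4 - 4*beta**2
print(sp.expand(quartic - target))  # 0
```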
\subsection{\sc Discussion of the fixed points}
Inserting the potential $2\, \cosh\left( \omega \phi\right)$ into the formulae (\ref{somber}) and (\ref{MODE8})
for the linearization matrix and for its eigenvalues we immediately obtain:
\begin{equation}\label{mygoodness}
\mathcal{M} \, = \, \left[ \begin{array}{cc}
0 & 1 \\
-2 \, \omega^2 & -\sqrt{12}
\end{array}
\right] \quad \Rightarrow \quad \lambda_\pm \, = \, - \sqrt{3} \pm \sqrt{3\, - \, 2 \omega^2}
\end{equation}
From this we learn that for $0 \, < \, \omega \, < \, \sqrt{\frac{3}{2}}$ the fixed point at $\phi=0$ is of the \textit{Node type}.
For $\omega = \sqrt{\frac{3}{2}}$ the fixed point is exactly of the \textit{improper node type}, while for $\omega > \sqrt{\frac{3}{2}}$ it is always of the focus type. What this implies for the behavior of the scalar field we will shortly see.
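The threshold $\omega = \sqrt{\frac{3}{2}}$ is immediate from eq.(\ref{mygoodness}). As a quick numerical illustration (the chosen $\omega$ values anticipate the simulations below):

```python
import numpy as np

# lambda_pm = -sqrt(3) +/- sqrt(3 - 2 omega^2): real pair (node) for
# omega < sqrt(3/2), complex conjugate pair (focus) for omega > sqrt(3/2).
def eigenvalues(omega):
    s = np.sqrt(complex(3.0 - 2.0 * omega**2))
    return -np.sqrt(3.0) + s, -np.sqrt(3.0) - s

for omega in (1.0, 2/np.sqrt(3), np.sqrt(3), 3.0):
    lp, lm = eigenvalues(omega)
    kind = "focus" if abs(lp.imag) > 1e-12 else "node"
    print(f"omega = {omega:.4f}: lambda_pm = {lp:.4f}, {lm:.4f} -> {kind}")
```

In all cases the real parts are negative, so the fixed point is attractive; only the winding behavior changes across the threshold.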
\par
One important point to stress concerns the initial conditions that have always got to be of the Big Bang type (initial singularity). The exact solutions of the integrable case are very instructive in this respect. Fixing the Big Bang initial condition $a(0)=0$ at a finite reference time ($\tau = 0$) from (\ref{soluziapiu}) we obtain a relation between the angle $\psi$ and the angle $\theta$
\begin{equation}\label{bigbango}
\cos^2 (\psi)
- \cos^2\left(\theta\right) \sin ^2(\psi ) \, = \, 0
\end{equation}
which inserted back into the formula (\ref{soluziapiu}) for the scalar field implies $\phi(0) \, = \, \infty$.
This is not a peculiarity of the integrable model, rather it is a general fact. The zeros at a finite time of the scale factor are always in correspondence with a singularity of the scalar field. Hence the only initial condition that can be fixed independently at the Big Bang is the initial velocity of the field $\dot{\phi}(0)$.
\par
Emerging from the initial singularity at $\pm \infty$, the scalar field can flow to its fixed point value, namely zero (if the potential is positive), and the way it does so depends on the fixed-point type. However, it can also happen that, before reaching the fixed point, the scalar field goes again to $\pm \infty$. In this case we have a blow-up solution where, at a finite time, the scale factor goes again to zero and we have a Big Crunch. This happens only when the extremum of the potential (either minimum or maximum) is negative. If the extremum of the potential is positive we always have an asymptotic de Sitter destiny for the Universe under consideration.
\subsection{\sc Behavior of solutions in the neighborhood of a Node critical point}
The best way to discuss the qualitative behavior of solutions is by means of the so-called phase portrait of the dynamical system, where we plot the trajectories of the solution in the plane $\{\phi\, , \, \mathrm{v}\}$. As we have already emphasized, there are solutions that go to the fixed point $\{\phi\, , \, \mathrm{v}\}= \{0,0\}$ and solutions that never reach it. The solutions that reach the fixed point have a universal type of behavior in its neighborhood, which we now describe for the case of the Node. We illustrate such a behavior by means of the exact solutions of the integrable case $\omega = \frac{2}{\sqrt{3}}$. The two eigenvectors corresponding to the two eigenvalues in eq.(\ref{mygoodness}) are:
\begin{equation}\label{eigenfunctions}
\mathbf{v}_{\pm} \, = \,\left\{
\begin{array}{rcl}
\{\frac{\sqrt{3-2 \omega ^2}-\sqrt{3}}{2\omega ^2} &,& 1 \} \\
\null&\null&\null \\
\{-\frac{\sqrt{3-2 \omega ^2}+\sqrt{3}}{2\omega ^2} &,& 1\}
\end{array}\right. \quad \stackrel{\omega = \frac{2}{\sqrt{3}}}{\Longrightarrow} \quad \left\{
\begin{array}{rcl}
\{ -\frac{\sqrt{3}}{4} &,& 1\} \\
\{-\frac{\sqrt{3}}{2} &,& 1\}
\end{array}
\right.
\end{equation}
According to theory, all solutions of the differential system approach the fixed point along the eigenvector $\{-\frac{\sqrt{3-2 \omega ^2}+\sqrt{3}}{2 \omega ^2} , 1\} \, \Rightarrow \, \{-\frac{\sqrt{3}}{2} , 1\}$ corresponding to the eigenvalue of smallest absolute value, except for a unique, exceptional one, named the \textbf{separatrix}, that approaches the fixed point along the other eigenvector $\{ -\frac{\sqrt{3}}{4} , 1\}$. This can be checked analytically by computing a limit. Let us define the function:
\begin{equation}\label{tangentus}
T(\tau,\alpha,\beta) \, = \, \frac{\partial_\tau \phi(\tau,\alpha,\beta) \,\, a(\tau,\alpha,\beta)}{\phi(\tau,\alpha,\beta)} \, =\, \frac{\partial_{t_c} \phi(t_c)}{\phi(t_c)}
\end{equation}
which, due to the metric (\ref{spaziailtempone}), represents the logarithmic derivative of the scalar field with respect to the cosmic time. By explicit calculation we find:
\begin{eqnarray}
\lim_{\tau \rightarrow \pm \infty} \, T(\tau,\alpha,\beta) &=& \mp\frac{2}{\sqrt{3}} \\
\lim_{\tau \rightarrow \pm \infty} \, T(\tau,\alpha,0) &=& \mp\frac{2}{\sqrt{3}} \\
\lim_{\tau \rightarrow \pm \infty} \, T(\tau,0,\beta) &=& \mp\frac{4}{\sqrt{3}}
\end{eqnarray}
We conclude that the separatrix solution is given by the choice $\alpha=0$, all the other solutions being generic. An instructive view of the phase portrait is given in fig.\ref{separanonsepara}.
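The two limits can be reproduced directly from the closed form (\ref{afattoCosh2})-(\ref{scalfattoCosh2}). The following sympy sketch evaluates $T(\tau,\alpha,\beta)$ at a large value of $\tau$ for a generic solution and for the separatrix (the parameter choices are illustrative):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)

def T(alpha, beta):
    """T = a * d(phi)/d(tau) / phi of eq.(tangentus), built from the
    explicit solution (afattoCosh2)-(scalfattoCosh2)."""
    quartic = alpha**4 - 6*alpha**2*tau**2 + 8*alpha*beta*tau + tau**4 - 4*beta**2
    a = 2*quartic**sp.Rational(1, 4)/sp.sqrt(3)
    phi = sp.sqrt(3)/4*sp.log((alpha**2 - 2*tau*alpha - tau**2 + 2*beta)
                              / (alpha**2 + 2*tau*alpha - tau**2 - 2*beta))
    return a*sp.diff(phi, tau)/phi

# Generic solution (alpha = 2, beta = 0) versus separatrix (alpha = 0)
print(T(2, sp.Integer(0)).subs(tau, 10**6).evalf(30))    # close to -2/sqrt(3)
print(T(0, sp.Rational(1, 2)).subs(tau, 10**6).evalf(30))  # close to -4/sqrt(3)
```

The two values are the slopes of the two tangent vectors $\{ -\frac{\sqrt{3}}{2},1\}$ and $\{ -\frac{\sqrt{3}}{4},1\}$ discussed above.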
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=70mm]{PhasPortNode.eps}
\else
\end{center}
\fi
\caption{\it
In this figure are shown the $(\phi,\mathrm{v})$ trajectories corresponding to six solutions of the $\omega \, = \frac{2}{\sqrt{3}}$ Cosh-model with parameters $(\alpha,\beta)$ = $(0,\frac 1 2)$, $(2,0)$, $(5,0)$, $(3,-3)$, $(2,5)$, $(1,-18)$. The case $(0,\frac 1 2)$ corresponds to the separatrix, which approaches the fixed point along the tangent vector $\{ -\frac{\sqrt{3}}{4} , 1\}$. All the other solutions approach the fixed point along the tangent vector $\{ -\frac{\sqrt{3}}{2} , 1\}$. The two dashed lines represent these two tangent vectors.
\label{separanonsepara}}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi\end{figure}
\subsection{\sc Analysis of the separatrix solution}
Because of its exceptional status, the separatrix solution is worth analysing in some detail.
Explicitly we have:
\begin{eqnarray}
a(\tau) &=& \frac{2 \sqrt[4]{\tau^4-1}}{\sqrt{3}} \\
\phi(\tau) &=&\frac{1}{4} \sqrt{3} \log\left(\frac{\tau ^2-1}{\tau^2+1}\right) \\
e^{B} &=& \frac{\sqrt{3}}{2 \sqrt[4]{\tau^4-1}}
\end{eqnarray}
The roots of $a(\tau)$ are $\pm {\rm i}$ and $\pm 1$. Hence the solution exists and is real only for $|\tau| >1$. So we have two identical real branches of the solution, one in the range $[-\infty, -1]$ and one in the range $[1, +\infty]$. The cosmic time can be explicitly integrated and admits the following expression in terms of a generalized hypergeometric function:
\begin{equation}\label{cosmictimeSeparo}
\int_{1}^{T}\, \frac{\sqrt{3}}{2 \sqrt[4]{\tau^4-1}} \, \mathrm{d}\tau \, = \, \frac{1}{2} \sqrt{3}\left(-\frac{\,_3F_2\left(1,1,\frac{5}{4};2,2;\frac{1}{T^4}\right)}{16\, T^4}+\log (T)+\frac{1}{8}(-\pi +\log (64))\right)
\end{equation}
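The identity (\ref{cosmictimeSeparo}) can be checked numerically with mpmath, comparing a direct quadrature of the left hand side with the hypergeometric closed form (the endpoint $T=2$ is an arbitrary test value):

```python
from mpmath import mp, mpf, quad, hyp3f2, log, pi, sqrt

mp.dps = 30  # working precision

def lhs(T):
    """Left hand side: direct quadrature of the cosmic time integral."""
    return quad(lambda t: sqrt(3)/(2*(t**4 - 1)**mpf('0.25')), [1, T])

def rhs(T):
    """Right hand side: the 3F2 closed form of (cosmictimeSeparo)."""
    return sqrt(3)/2*(-hyp3f2(1, 1, mpf(5)/4, 2, 2, 1/T**4)/(16*T**4)
                      + log(T) + (log(64) - pi)/8)

print(lhs(mpf(2)), rhs(mpf(2)))  # the two values agree
```

The tanh-sinh quadrature used by mpmath handles the integrable $(\tau-1)^{-1/4}$ singularity at the lower endpoint.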
The behavior of the scale factor and of the scalar field for the separatrix solution is displayed in fig.\ref{separando}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{SeparatrixLogA.eps}
\vskip 2cm
\includegraphics[height=55mm]{SeparatrixHfil.eps}
\else
\end{center}
\fi
\caption{\it
Behavior of the scale factor and of the scalar field in the case of the separatrix solution. As one can see, we have a climbing scalar that goes asymptotically from minus infinity to its extremum value $\phi = 0$, while the scale factor has an asymptotic exponential behavior, as in most of the other regular solutions that go to the fixed point; the special character of this solution is visible only in the phase portrait.
}
\label{separando}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\subsection{\sc Analysis of a solution with four real roots}
Next we analyze the exact solution with parameters $(\alpha , \beta) = (2,0)$.
In this case the solution has the following form:
\begin{eqnarray}
\label{duebraccia}
a(\tau)&=& \frac{2 \sqrt[4]{\tau ^4-24 \tau^2+16}}{\sqrt{3}}\\
\phi(\tau) &=& \frac{1}{4} \sqrt{3} \log\left(\frac{\tau ^2+4 \tau -4}{\tau^2-4 \tau -4}\right) \\
e^B &=& \frac{\sqrt{3}}{2 \sqrt[4]{\tau ^4-24 \tau^2+16}}
\end{eqnarray}
and the scale factor admits four real roots, namely:
\begin{equation}\label{ruttoni}
\lambda_{1,2,3,4} \, = \, \left\{2+2 \sqrt{2},\;-2+2 \sqrt{2},\;2-2\sqrt{2},\;-2-2 \sqrt{2}\right\}
\end{equation}
The scale factor is real in the intervals $[-\infty, \lambda_4]$, $[\lambda_3, \lambda_2]$ and $[\lambda_1, +\infty]$. In the two identical branches (it suffices to change $\tau \leftrightarrow - \tau$) $[-\infty, \lambda_4]$ and $[\lambda_1, +\infty]$, the solution reaches the fixed point and is asymptotically de Sitter (exponential increase of the scale factor). On the other hand, in the branch $[\lambda_3, \lambda_2]$ we might think that we have a solution that begins with a Big Bang and ends up with a Big Crunch (the Universe collapses) in a finite amount of cosmic time. Yet this branch of the functions (\ref{duebraccia}) simply does not satisfy the Friedman equations, and such an embarrassing solution does not exist! The two phase-space trajectories composing the solution and the fake solution are displayed in fig.\ref{duebraccine}.
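As a quick check (illustrative only), the roots (\ref{ruttoni}) are recovered numerically from the quartic under the fourth root in (\ref{duebraccia}):

```python
import numpy as np

# tau^4 - 24 tau^2 + 16 is the (alpha, beta) = (2, 0) case of the
# quartic in (afattoCosh2)
roots = np.sort(np.roots([1, 0, -24, 0, 16]).real)
expected = np.sort([2 + 2*np.sqrt(2), -2 + 2*np.sqrt(2),
                    2 - 2*np.sqrt(2), -2 - 2*np.sqrt(2)])
print(roots)
print(np.allclose(roots, expected))  # True
```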
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=70mm]{PhasporTwoBranch.eps}
\else
\end{center}
\fi
\caption{\it In this figure we display the two trajectories in the $(\phi,\mathrm{v})$-plane corresponding to the solution of parameters $( \alpha,\beta)=(2,0)$. One branch connects infinity with infinity and never reaches the fixed point. This branch is actually fake, since in this range of the parametric time the Friedman equations are not satisfied! The other, physical branch, which satisfies the Friedman equations, connects infinity with the fixed point, which is reached along the universal tangent vector of all solutions except the separatrix.
}
\label{duebraccine}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
The physical branch of this solution which is connected with the fixed point and has an asymptotically de Sitter behavior is displayed in fig.\ref{turoldo}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=55mm]{PrimBruchLogA.eps}
\vskip 2cm
\includegraphics[height=55mm]{PrimBrunchHfil.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the behavior of the scale factor and of the scalar field for the asymptotic branch of the solution $(\alpha,\beta) = (2,0)$. The scalar descends from $+\infty$ to $0$ in an infinite time. In a parallel way the scale factor approaches an asymptotically exponential behavior.
}
\label{turoldo}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\subsection{\sc Numerical simulations}
In this section our goal is to explore the phase portrait and the behavior of the solutions of the Friedman equations for a few different values of $\omega$, which correspond to different fixed point types. We consider the following four cases:
\begin{equation}\label{perdinci}
\omega \, = \, \underbrace{1}_{\mbox{Node}} \, , \, \underbrace{\frac{2}{\sqrt{3}}}_{\mbox{Node \& integrable}} \, , \,
\underbrace{\sqrt{3}}_{\mbox{Focus \& integrable}} \, , \, \underbrace{3}_{\mbox{Focus}}
\end{equation}
and we make a comparison of their behavior by solving numerically the Friedman equations with the same initial conditions in the four cases. We cannot choose exactly $a(0)\,=\,0$, since this corresponds to a singularity, so we just choose $a(0)$ quite small and $\phi(0)$ quite large. A precise way of choosing the initial conditions to be applied to all four cases can be provided by the analytic solution determined by the integrable cases. This time we use the solution eq.(\ref{soluziapiu}) of the integrable model $\omega \, = \, \sqrt{3}$ characterized by the parameters:
\begin{equation}\label{gonzallo}
\lambda \, = \, 1 \quad ; \quad \theta \, = \, \pi \quad ; \quad \psi \, = \, \frac{\pi}{6}
\end{equation}
we obtain:
\begin{equation}\label{gordolino}
a(0) \, = \, 2^{-1/3} \quad ; \quad \phi(0) \, = \, \frac{1}{\sqrt{3}}\,\log\left(\frac{1+\sqrt{3}}{-1+\sqrt{3}}\right) \quad ;\quad \dot{\phi}(0) \, = \, -2
\end{equation}
which will be the initial values for the integration programme in all four cases (\ref{perdinci}).
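The initial data (\ref{gordolino}) can be reproduced from the exact solution (\ref{soluziapiu}). A small sympy sketch, identifying the third parameter in (\ref{gonzallo}) with the angle $\psi$ of (\ref{soluziapiu}):

```python
import sympy as sp

tau = sp.symbols('tau', real=True)
s3 = sp.sqrt(3)
lam, psi, theta = 1, sp.pi/6, sp.pi   # the parameters of eq.(gonzallo)

# a^3 and phi from the exact solution (soluziapiu)
a3 = (lam*sp.cos(psi)*sp.cosh(s3*tau) + lam*sp.sinh(s3*tau))**2 \
     - lam**2*sp.cos(theta - s3*tau)**2*sp.sin(psi)**2
phi = sp.log((sp.cos(psi)*sp.cosh(s3*tau) - sp.cos(theta - s3*tau)*sp.sin(psi)
              + sp.sinh(s3*tau))
             / (sp.cos(psi)*sp.cosh(s3*tau) + sp.cos(theta - s3*tau)*sp.sin(psi)
                + sp.sinh(s3*tau)))/s3

a0 = sp.simplify(a3.subs(tau, 0))**sp.Rational(1, 3)
phi0 = sp.simplify(phi.subs(tau, 0))
dphi0 = sp.simplify(sp.diff(phi, tau).subs(tau, 0))
print(a0, phi0, dphi0)
```

The three printed values match $a(0)=2^{-1/3}$, $\phi(0)=\frac{1}{\sqrt{3}}\log\frac{1+\sqrt{3}}{-1+\sqrt{3}}$ and $\dot{\phi}(0)=-2$ of eq.(\ref{gordolino}).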
The result is provided in a series of figures.
\par
We begin with the node case $\omega\, = \, 1$ whose behavior is displayed in fig.s \ref{psiconano1a} and \ref{psiconano1b}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{UnoLogA.eps}
\vskip 2cm
\includegraphics[height=50mm]{UnoHfil.eps}
\else
\end{center}
\fi
\caption{\it
Setting the initial conditions eq.(\ref{gordolino}), in this figure we display the evolution of the scale factor $a(\tau)$ and of the scalar field $\phi(\tau)$ for the case $\omega = 1$, where the fixed point is of the node type. As we see we just have a descending scalar that goes smoothly to the fixed value with no oscillations.
}
\label{psiconano1a}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{Unopshasport.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the trajectory determined by the initial conditions of eq.(\ref{gordolino}) in the phase space $(\phi,\mathrm{v})$ for the case $\omega \, = \, 1$. This trajectory goes from infinity to the fixed point without winding around it (node) and following the direction fixed by the linearization matrix.
}
\label{psiconano1b}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
We continue with the integrable case $\omega\, = \, \frac{2}{\sqrt{3}}$ whose behavior is displayed in fig.s \ref{psiconano2a} and \ref{psiconano2b}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{2sqrt3LogA.eps}
\vskip 2cm
\includegraphics[height=50mm]{2sqrt3Hfil.eps}
\else
\end{center}
\fi
\caption{\it
Setting the initial conditions eq.(\ref{gordolino}), in this figure we display the evolution of the scale factor $a(\tau)$ and of the scalar field $\phi(\tau)$ for the case $\omega = \frac{2}{\sqrt{3}}$, which is actually integrable. Its analytic behavior was extensively analysed before.
}
\label{psiconano2a}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{2sqrt3phasport.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the trajectory determined by the initial conditions of eq.(\ref{gordolino}) in the phase space $(\phi,\mathrm{v})$ for the case $\omega \, = \, \frac{2}{\sqrt{3}}$. This trajectory goes from infinity to the fixed point without winding around it: Node.
}
\label{psiconano2b}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
Next we consider the integrable case $\omega\, = \, \sqrt{3}$ whose behavior, already of the focus type, is displayed in fig.s \ref{psiconano3a} and \ref{psiconano3b}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{sqrt3LogA.eps}
\vskip 2cm
\includegraphics[height=50mm]{sqrt3Hfil.eps}
\else
\end{center}
\fi
\caption{\it
Setting the initial conditions eq.(\ref{gordolino}), in this figure we display the evolution of the scale factor $a(\tau)$ and of the scalar field $\phi(\tau)$ for the case $\omega = \sqrt{3}$, where the fixed point is of the focus type. This is an integrable case for which we possess the analytical solution. As we see, we have a descending scalar that goes to a negative valued minimum and then climbs again to its fixed point value at zero.
}
\label{psiconano3a}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{sqrt3phasport.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the trajectory determined by the initial conditions of eq.(\ref{gordolino}) in the phase space $( \phi,\mathrm{v})$ for the case $\omega \, = \, \sqrt{3}$. This trajectory goes from infinity to its fixed point winding a little bit around it (focus).
}
\label{psiconano3b}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
Finally we consider the case $\omega\, = \, 3$, which is not integrable, yet shares with the integrable case $\omega = \sqrt{3}$ the nature of its fixed point, namely the focus type. The behavior is displayed in fig.(\ref{psiconano4a})
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{3LogA.eps}
\vskip 2cm
\includegraphics[height=50mm]{3Hfil.eps}
\else
\end{center}
\fi
\caption{\it
Setting the initial conditions eq.(\ref{gordolino}), in this figure we display the evolution of the scale factor $a(\tau)$ and of the scalar field $\phi(\tau)$ for the case $\omega = 3$, where the fixed point is of the focus type. As we see, the scalar field descends to negative values, passing through various oscillations.
}
\label{psiconano4a}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
and in fig.\ref{psiconano4b}.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=60mm]{3Phasport.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display the trajectory determined by the initial conditions of eq.(\ref{gordolino}) in the phase space $(\phi,\mathrm{v})$ for the case $\omega \, = \, 3$. This trajectory goes from infinity to its fixed point winding a few times around it (focus).
}
\label{psiconano4b}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
The conclusion of this comparison is that what we can learn from the integrable models is the behavior of solutions that share the same type of fixed point. The asymptotic behavior at very late times of both the scalar field and the scale factor is also captured by the integrable case and is shared by all other members of the family. The structure of the scalar field behavior at finite times, instead, depends rather strongly on the value of $\omega$. For sufficiently large $\omega$ we get the focus case and the scalar oscillates around the fixed point. The larger $\omega$ is, the more the trajectory winds around the fixed point. This winding corresponds to oscillations of the scalar field, which are potentially interesting in cosmology since they might be at the heart of the reheating mechanism after inflation.
\par
The structure of possible trajectories is summarized in fig.\ref{buonanno}, which plots together the behavior of the scalar field for the various considered values of $\omega$, and does the same for the scale factors.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=50mm]{tuttiloghi.eps}
\vskip 2cm
\includegraphics[height=50mm]{tuttiscali.eps}
\else
\end{center}
\fi
\caption{\it
In this figure we display in the same plot the behavior of the scale factor and of the scalar field, starting with the same initial condition but following the equations for the different values of $\omega$ considered in eq.(\ref{perdinci}).
}
\label{buonanno}
\iffigs
\hskip 1cm \unitlength=1.0mm
\end{center}
\fi
\end{figure}
\section{\sc A brief scan of $\mathcal{N}=1$ superpotentials from Flux Compactifications}
\label{fluxscan}
A lot of progress has been made since the year 2000 in the context of flux compactifications of string theory, with the aim of obtaining four-dimensional effective theories with phenomenologically desirable features; after the discovery of the late-time acceleration of the Universe, the need to find de Sitter vacua ranks high among them.
\par
A vast literature in this field deals with the search for \textit{flux backgrounds} that are compatible with minimal $\mathcal{N}=1$ supersymmetry in $D=4$ and relies on the mechanism, first discovered in \cite{gkp}, of inducing an effective $\mathcal{N}=1$ superpotential from fluxes. This mechanism has been extensively utilized in those compactifications that give rise to an $\mathrm{STU}$-model as low energy description \cite{Derendinger:2005ph,Derendinger:2004kf,Denef:2007pq,Kachru:2002he,Kachru:2004jr}.
The final outcome of these procedures, which are motivated by orbifold and orientifold compactifications \cite{orientifolds}, is just an explicit expression for the superpotential $W(S,T,U)$, which can be used to calculate the scalar potential, whose extrema and consistent one-field truncations can be studied in a systematic way. The remarkable feature of this description is that each coefficient in the expansion of $W(S,T,U)$ in the constituent fields $S,T,U$ has a direct interpretation in terms of fluxes and, in an appropriate basis, admits only quantized integer values.
\par
Recently a series of papers has appeared \cite{Dibitetto:2012xd,Dibitetto:2011gm,Danielsson:2012by} aiming at a systematic charting of the landscape of these superpotentials and of the extrema of their corresponding scalar potentials.
\par
In the present section we briefly consider such a landscape with the aim of singling out consistent one-dilaton truncations and working out the corresponding potential $\mathcal{V}(\varphi)$, to be compared with our list of integrable ones. As a result we obtain several examples of one-field potentials, all falling into the general family of linear combinations of exponentials that we consider, yet very seldom satisfying the severe relations on the exponents and the coefficients that are required for integrability. Although in this run we identified only one new integrable model, the lesson that we learn from it is that, by considering truncations of multi-field supergravities, the variety of possible outcomes is significantly enlarged. Indeed what matters are the intrinsic indices $\omega_i \, = \, p_i/\sqrt{3\,q}$, the numbers $p_i$ being the exponent coefficients of the field $\mathfrak{h}$ that is kept in the truncation, while $q$ is the coefficient of the kinetic term of the latter. In view of this, when $\mathfrak{h}$ happens to be a linear combination of several other dilatons, the coefficients of the linear combination play a role, both in generating a variety of $p_i$.s and in giving rise to non-standard $q$.s, making the final result difficult to predict a priori.
On the other hand, the coefficients of the appropriate linear combination that can constitute a consistent truncation are searched for by diagonalizing the mass matrix of the theory in the vicinity of an extremum. Indeed the mass eigenstates around an exact vacuum of the theory constitute a natural basis in which some fields can be consistently set to zero, with the exception of one which survives.
\par
We plan to use the landscape charted by Dibitetto et al. as a means to illustrate the above ideas.
\subsection{\sc The $\mathrm{STU}$ playing ground}
Originally, in orbifold compactifications on $\mathrm{T}^6/\mathbb{Z}_2\times \mathbb{Z}_2$, one arrives at seven complex moduli fields:
\begin{equation}\label{dottorbalordo}
S, \, T_1, \,T_2, \,T_3, \, U_1, \,U_2, \,U_3
\end{equation}
the first being related to the original dilaton and Kalb-Ramond field of $10$-dimensional supergravity, the remaining six being the appropriate complexifications of the six radii of $\mathrm{T}^6$. Embedding these fields into $\mathcal{N}=4$ supergravity, which is a necessary intermediate step when supersymmetry is halved by the orbifold projection, Dibitetto et al. \cite{Dibitetto:2011gm} have been able to reduce the playing ground for the search of flux induced superpotentials to three fields:
\begin{equation}\label{STUmolti}
S \quad ; \quad T \, = \, T_1 =T_2=T_3 \quad ; \quad U \, = \, U_1 =U_2=U_3
\end{equation}
This is done by consistently truncating $\mathcal{N}=4$ supergravity to the singlets with respect to an appropriately chosen global $\mathrm{SO(3)}$ symmetry group. This truncation breaks supersymmetry to $\mathcal{N}=1$. As a result of the identifications in eq.(\ref{STUmolti}) the K\"ahler potential for the residual fields takes the following form:
\begin{equation}\label{carlettopoto}
\mathcal{K} \, = \, - \, \log[-i\,(S-\bar{S})]\, - \, \log[i\,(T-\bar{T})^3]\, - \, \log[ i\,(U-\bar{U})^3]
\end{equation}
Next the authors of \cite{Dibitetto:2011gm} have identified a list of $32$ monomials out of which the superpotential can be constructed as a linear combination. Assigning a real coefficient $\lambda_{1,\dots,16}$ to the even powers and an imaginary coefficient ${\rm i} \mu_{1,\dots,16}$ to the odd ones, one guarantees a priori that the truncation to zero axions for all the three fields is consistent. With this proviso, the most general $\mathcal{N}=1$ superpotential considered by Dibitetto et al is the following one:
\begin{eqnarray}
W_{gen} &=& \lambda _1+U^2 \lambda _2+S U \lambda
_3+S U^3 \lambda _4+T U \lambda _5+T
U^3 \lambda _6+S T \lambda _7+S T
U^2 \lambda _8 \nonumber\\
&& +T^3 U^3 \lambda
_9+T^3 U \lambda _{10}+S T^3 U^2
\lambda _{11}+S T^3 \lambda
_{12}\nonumber\\
&& +T^2 U^2 \lambda _{13}+T^2
\lambda _{14}+S T^2 U^3 \lambda
_{15}+S T^2 U \lambda _{16}\nonumber\\
&& +{\rm i} U \mu
_1+{\rm i} U^3 \mu _2+{\rm i} S \mu _3+{\rm i} S U^2
\mu _4+i T \mu _5+i T U^2 \mu _6+{\rm i} S
T U \mu _7+{\rm i} S T U^3 \mu _8\nonumber\\
&& +{\rm i} T^3
U^2 \mu _9+{\rm i} T^3 \mu _{10}+{\rm i} S T^3
U^3 \mu _{11}+{\rm i} S T^3 U \mu _{12}\nonumber\\
&&+{\rm i}
T^2 U^3 \mu _{13}+{\rm i} T^2 U \mu
_{14}+{\rm i} S T^2 U^2 \mu _{15}+{\rm i} S T^2
\mu _{16}
\end{eqnarray}
The interesting point is that each of the $\lambda$.s and each of the $\mu$.s has a precise interpretation in terms of $10$-dimensional fluxes of various type.
\par
A completely general approach to the study of vacua and possible one-field truncations would consist of the following precise algorithm:
\begin{center}
\textbf{Truncation Charting Algorithm from Flux Superpotentials (TCAFS)}
\end{center}
\begin{enumerate}
\item Calculate from $W_{gen}$ the scalar potential $V_{gen}\left(\lambda,\mu,h_{1,2,3},b_{1,2,3}\right)$ depending on three dilatons, three axions and 32 real parameters $\{\lambda,\mu\}$.
\item Consistently truncate this potential to zero axions $\hat{V}_{gen}\left(\lambda,\mu,h_{1,2,3}\right)=V_{gen}\left(\lambda,\mu,h_{1,2,3},0\right)$.
\item Calculate the three derivatives of the potential with respect to the remaining dilatons $\partial_{h_i} \hat{V}$ and impose that they are zero at $h_{1,2,3}\, = \,0$:
\begin{equation}\label{estremadura}
\partial_{h_i} \hat{V}\left(\lambda,\mu,h_{1,2,3}\right)|_{h_{1,2,3}=0} \, = \, 0
\end{equation}
These conditions impose that the base point of the manifold $S=T=U={\rm i}$ should be an extremum of the potential. This choice entails no loss of generality, since the dilatons are defined up to a translation and any extremum can be mapped into the reference point $S=T=U={\rm i}$ at the price of rescaling some of the coefficients $\{\lambda,\mu\}$. Hence, as long as we keep $\{\lambda,\mu\}$ general, we do not lose anything by deciding a priori where the extremum should be located. In this way eq.s(\ref{estremadura}) become a set of three algebraic equations of higher order for the coefficients $\{\lambda,\mu\}$.
\item Solve, if possible, the algebraic equations (\ref{estremadura}). In principle this results in a set of $m$ solutions:
\begin{equation}\label{solutini}
\lambda_i \, = \, \lambda_i^{(\alpha)} \quad ; \quad \mu_i \, = \, \mu_i^{(\alpha)} \quad ; \quad \alpha\, = \, 1,\dots,m
\end{equation}
\item Replace one by one the solutions (\ref{solutini}) into $\hat{V}_{gen}\left(\lambda,\mu,h_{1,2,3}\right)$, obtaining $m$ potentials of the three dilaton fields:
\begin{equation}\label{pistacchi}
V^{(\alpha)}(h_1,h_2,h_3) \, \equiv \, \hat{V}_{gen}\left(\lambda^{(\alpha)},\mu^{(\alpha)},h_{1,2,3}\right) \quad ; \quad \alpha\, = \, 1,\dots,m
\end{equation}
which, by construction, have an extremum in $h_{1,2,3} \, = \,0$. Verify whether each extremum corresponds to Minkowski ($V^{(\alpha)}(0) \, = \, 0$), de Sitter ($V^{(\alpha)}(0) \, > \, 0$) or anti de Sitter ($V^{(\alpha)}(0) \, < \, 0$) space.
\item For each potential $V^{(\alpha)}(h_1,h_2,h_3)$ calculate the mass-matrix in the extremum:
\begin{equation}\label{massamatrata}
M_{ij}^{(\alpha)} \, = \, \partial_i\partial_j V^{(\alpha)}(h_1,h_2,h_3)|_{h_{1,2,3}=0} \quad ; \quad \alpha\, = \, 1,\dots,m
\end{equation}
and the corresponding eigenvalues $\Lambda_{I}^{(\alpha)}$ ($I=1,2,3$) and eigenvectors $\vec{v}_I^{(\alpha)}$.
\item From the eigenvalues $\Lambda_{I}^{(\alpha)}$ ($I=1,2,3$) we learn about stability or instability of the corresponding vacuum. By means of the corresponding eigenvectors introduce a new basis of three fields well-adapted to the potential $V^{(\alpha)}$:
\begin{equation}\label{gomoide}
\phi_I^{(\alpha)} \, \equiv \, \vec{v}_I^{(\alpha)} \cdot \vec{h} \quad ; \quad \alpha\, = \, 1,\dots,m \quad I\, =\, 1,2,3
\end{equation}
\item Transform the potential and the kinetic term to the new well adapted basis and inspect if truncation to any of the $\phi_{1,2,3}$, by setting to zero the other two is consistent.
\item In case of positive answer to the previous question calculate the effective coefficient $q$ in the kinetic term and by means of the transformation (\ref{babushka}) produce a potential $\mathcal{V}(\varphi)$ to be compared with the list of integrable ones.
\end{enumerate}
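The linear-algebra core of steps 3 and 5--7 is entirely mechanical and easy to script. Below is a minimal sympy sketch run on a hypothetical toy potential (it is \textit{not} one of the flux-induced potentials discussed here, just a stand-in to exercise the algorithm): it checks the extremum condition at the base point, reads off the vacuum type from $V(0)$, and extracts the mass matrix whose eigenvectors define the well-adapted basis of eq.(\ref{gomoide}).

```python
import sympy as sp

h1, h2, h3 = sp.symbols('h1 h2 h3')
fields = (h1, h2, h3)

# Hypothetical toy potential with an extremum at h = 0 (illustration only,
# not one of the flux-induced potentials of the text).
V = sp.exp(-h1 - h2) - 2*sp.exp(-(h1 + h2)/2) + sp.exp(h3) + sp.exp(-h3)

origin = {h1: 0, h2: 0, h3: 0}

# Step 3: check that the base point h = 0 is an extremum.
gradient = [sp.diff(V, f).subs(origin) for f in fields]
assert all(g == 0 for g in gradient)

# Step 5: classify the extremum from V(0):
# Minkowski (= 0), de Sitter (> 0) or anti de Sitter (< 0).
V0 = V.subs(origin)

# Steps 6-7: mass matrix at the extremum; its eigenvalues decide stability,
# its eigenvectors define the well-adapted field basis phi_I = v_I . h.
M = sp.Matrix(3, 3, lambda i, j: sp.diff(V, fields[i], fields[j]).subs(origin))
spectrum = M.eigenvals(multiple=True)
```

For this toy potential $V(0)=1>0$ (a dS-type extremum) and the spectrum contains a vanishing eigenvalue, i.e. a flat direction at quadratic order.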
The problem with the above algorithm is simply computational. Using all of the 32 terms in the superpotential and truncating to zero axions we are left with a three-dilaton potential that contains 480 terms, and the three algebraic equations (\ref{estremadura}) in 32 unknowns are too large to be solved by standard codes in MATHEMATICA. Some strategy to reduce the parameter space has to be found. What we were able to do with ease was to test the TCAFS on some reduced spaces suggested by the special superpotentials reviewed in \cite{Dibitetto:2011gm}. We did not assume the coefficients presented in that paper; we simply restricted the parameter space to that spanned by the monomials included in each of these superpotentials, and by running the TCAFS algorithm on such parameter spaces we retrieved exactly the same results presented in \cite{Dibitetto:2011gm}. For all these extrema we have also calculated the mass-matrix and we have found some consistent one-field truncations for which we could determine the corresponding one-field potential. As anticipated, they all fall in the family of linear combinations of exponentials and, although none coincides with an integrable one, we start seeing new powers and new structures that were absent in the one-field constructions.
\par
Finally, with some ingenuity, we were able to derive new instances of superpotentials that lead to interesting one-field truncations. In one case we obtained a new instance of a supersymmetric integrable cosmological model.
\subsection{\sc Locally Geometric Flux induced superpotentials}
In \cite{Dibitetto:2011gm}, the authors consider a particular superpotential that is denominated \textit{locally geometric}, since its origin is claimed to arise from a combination of geometric type IIB fluxes with non-geometric ones, the resulting composition still admitting a \textit{locally geometric} description. Leaving aside the discussion of its ten-dimensional origin in type IIB or type IIA compactifications, the above-mentioned superpotential has the following form:
\begin{equation}\label{partus}
W_{locgeo} \, = \, \lambda _1+S U^3 \lambda _4+T U \lambda _5+S T U^2 \lambda _8
\end{equation}
and corresponds to a truncation of the $32$ dimensional parameter space to a four-dimensional one spanned by $\{\lambda_1,\lambda_4 ,
\lambda_5,\lambda_8\}$. Using the standard parametrization of the fields:
\begin{equation}\label{fischiuto}
S \, = \, {\rm i} \, \exp[h_1] + b_1 \quad ; \quad T \, = \, {\rm i} \, \exp[h_2] + b_2 \quad ; \quad U \, = \, {\rm i} \, \exp[h_3] + b_3
\end{equation}
and implementing the first two steps of the TCAFS we obtain the following potential:
\begin{eqnarray}\label{Vlocgeo}
V & = &\frac{1}{64} e^{-h_1-3 h_2-3 h_3}
\lambda _1^2-\frac{1}{32} e^{-3 h_2}
\lambda _1 \lambda _4+\frac{1}{64}
e^{h_1-3 h_2+3 h_3} \lambda
_4^2+\frac{1}{32} e^{-2 h_2+h_3}
\lambda _4 \lambda _5 \nonumber\\
&&-\frac{1}{192}
e^{-h_1-h_2-h_3} \lambda
_5^2-\frac{1}{32} e^{-2 h_2-h_3}
\lambda _1 \lambda _8+\frac{1}{32}
e^{-h_2} \lambda _5 \lambda
_8-\frac{1}{192} e^{h_1-h_2+h_3}
\lambda _8^2
\end{eqnarray}
Implementing next the steps 3 and 4 of the TCAFS we obtain five non vanishing solutions for the $\lambda$-coefficients that can be displayed by writing the corresponding superpotentials:
\begin{eqnarray}
W_{(2)}^{locgeo} &=& 1+3\, T U+3\, S T U^2-S U^3 \label{theo2}\\
W_{(3a)}^{locgeo}&=& 5-9 \,T U+3\, S T U^2-S U^3\label{theo3a}\\
W_{(Mink)}^{locgeo} &=& 1+S U^3 \label{theoMink}\\
W_{(1)}^{locgeo} &=&-1-3 \,T U+3\, S T U^2-S U^3\label{theo1}\\
W_{(3b)}^{locgeo} &=&-\frac{1}{3}-T U+3\, S T
U^2+\frac{5 \,S U^3}{3}\label{theo3b}
\end{eqnarray}
The names given to these solutions are taken from the nomenclature utilized in table 5 of \cite{Dibitetto:2011gm} since the superpotentials we found exactly correspond to those considered there, up to a multiplicative overall constant in the last case.
The values in the extremum of the corresponding scalar potentials are:
\begin{equation}\label{governolo}
\left\{\underbrace{V_{(2)}^{locgeo}(\vec{0})}_{\mathrm{dS}}, \, \underbrace{V_{(3a)}^{locgeo}(\vec{0})}_{\mathrm{AdS}}, \, \underbrace{V_{(Mink)}^{locgeo}(\vec{0})}_{\mathrm{Mink}}, \, \underbrace{V_{(1)}^{locgeo}(\vec{0})}_{\mathrm{AdS}}, \, \underbrace{V_{(3b)}^{locgeo}(\vec{0}) }_{\mathrm{AdS}}\right\} \, = \, \left\{\frac{1}{16},-\frac{15}
{16},0,-\frac{3}{16},-\frac{5}{48}\right\}
\end{equation}
Hence we conclude that we have one de Sitter vacuum, one Minkowski vacuum and three anti de Sitter vacua. This is just the same result found by the authors of \cite{Dibitetto:2011gm}. The corresponding dilaton potentials that give rise to such vacua at their extremum have the following explicit form:
\begin{eqnarray}
V_{(2)}^{locgeo}(\vec{h})&=&-\left( -\frac{1}{32} e^{-3 h_2}-\frac{9}{32}\,e^{-h_2}-\frac{1}{64}
e^{-h_1-3 h_2-3 h_3}+\frac{3}{32} e^{-2 h_2-h_3}+\frac{3}{64}
e^{-h_1-h_2-h_3}\right.\nonumber\\
&&\left. +\frac{3}{32} e^{-2 h_2+h_3}+\frac{3}{64}
e^{h_1-h_2+h_3}-\frac{1}{64} e^{h_1-3 h_2+3 h_3}\right)\label{theo2V} \\
V_{(3a)}^{locgeo}(\vec{h}) &=& -\left(-\frac{5}{32} e^{-3 h_2}+\frac{27}{32}\,e^{-h_2}-\frac{25}{64}
e^{-h_1-3 h_2-3 h_3}+\frac{15}{32} e^{-2 h_2-h_3}\right.\nonumber\\
&&\left.+\frac{27}{64} e^{-h_1-h_2-h_3}-\frac{9}{32} e^{h_3-2 h_2}+\frac{3}{64}
e^{h_1-h_2+h_3}-\frac{1}{64} e^{h_1-3 h_2+3 h_3}\right) \label{theo3aV}\\
V_{(Mink)}^{locgeo}(\vec{h}) &=&- \left(\frac{1}{32} e^{-3 h_2}-\frac{1}{64} e^{-h_1-3
h_2-3 h_3}-\frac{1}{64} e^{h_1-3 h_2+3 h_3} \right)\\
V_{(1)}^{locgeo}(\vec{h}) &=&-\left( \frac{1}{32} e^{-3 h_2}+\frac{9}{32}\,e^{-h_2}-\frac{1}{64}
e^{-h_1-3 h_2-3 h_3}-\frac{3}{32} e^{-2 h_2-h_3}\right.\nonumber\\
&&\left.+\frac{3}{64} e^{-h_1-h_2-h_3}-\frac{3}{32} e^{h_3-2 h_2}+\frac{3}{64}
e^{h_1-h_2+h_3}-\frac{1}{64} e^{h_1-3 h_2+3 h_3}\right)\label{theo1V}\\
V_{(3b)}^{locgeo}(\vec{h}) &=&-\left( -\frac{5}{288} e^{-3 h_2}+\frac{3}{32}\,e^{-h_2}-\frac{1}{576}
e^{-h_1-3 h_2-3 h_3}-\frac{1}{32} e^{-2 h_2-h_3}\right.\nonumber\\
&&\left. +\frac{1}{192} e^{-h_1-h_2-h_3}+\frac{5}{96} e^{h_3-2 h_2}+\frac{3}{64}
e^{h_1-h_2+h_3}-\frac{25}{576} e^{h_1-3 h_2+3 h_3}\right)\label{theo3bV}
\end{eqnarray}
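As a cross-check, the extremum values of eq.(\ref{governolo}) follow from evaluating the generic potential (\ref{Vlocgeo}) at $h_{1,2,3}=0$ on each coefficient set. The short Python sketch below (with the quadruples $(\lambda_1,\lambda_4,\lambda_5,\lambda_8)$ read off by us from the five superpotentials (\ref{theo2})--(\ref{theo3b})) reproduces all five rational values exactly.

```python
from fractions import Fraction as F

def V0(l1, l4, l5, l8):
    """Value of the locally geometric potential (Vlocgeo) at h1=h2=h3=0."""
    return (F(1, 64)*l1**2 - F(1, 32)*l1*l4 + F(1, 64)*l4**2
            + F(1, 32)*l4*l5 - F(1, 192)*l5**2 - F(1, 32)*l1*l8
            + F(1, 32)*l5*l8 - F(1, 192)*l8**2)

# (lambda_1, lambda_4, lambda_5, lambda_8) read off from the five
# superpotentials W_(2), W_(3a), W_(Mink), W_(1), W_(3b).
cases = {
    '2':    (F(1), F(-1), F(3), F(3)),
    '3a':   (F(5), F(-1), F(-9), F(3)),
    'Mink': (F(1), F(1), F(0), F(0)),
    '1':    (F(-1), F(-1), F(-3), F(3)),
    '3b':   (F(-1, 3), F(5, 3), F(-1), F(3)),
}
values = {name: V0(*lam) for name, lam in cases.items()}
# values == {'2': 1/16, '3a': -15/16, 'Mink': 0, '1': -3/16, '3b': -5/48}
```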
We continue the development of the TCAFS algorithm, case by case.
\subsubsection{\sc The $\mathrm{dS}$ potential}
Calculating the mass matrix in the extremum of the potential $ V_{(2)}^{locgeo}(\vec{h})$ we obtain:
\begin{equation}\label{massmat1}
M_{mass} \, = \, \left(
\begin{array}{lll}
- \frac{1}{16} & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right)
\end{equation}
Hence we have one negative and two null eigenvalues, which means that this $\mathrm{dS}$ vacuum is unstable. Since the mass matrix is diagonal, the fields themselves coincide with the mass eigenstates and we can explore whether there are consistent truncations. By direct evaluation of the derivatives we find that there are two consistent one-field truncations:
\begin{description}
\item[A-truncation.] $h_1 \, = \, h_3 \, = \, 0$. In this case the residual potential is:
$$V\, = \, \frac{1}{16} e^{-3 h_2} \left(1-3 e^{h_2}+3 e^{2 h_2}\right)$$
Since the kinetic term of $h_2$ has a factor $q=3$, by means of the substitution (\ref{babushka}), we obtain:
\begin{equation}\label{potentus1}
\mathcal{V}(\varphi) \, = \, \frac{e^{-\varphi}}{16}-\frac{3}{16} e^{-2\varphi /3}+\frac{3 e^{-\varphi /3}}{16}
\end{equation}
which is not any of the integrable potentials but belongs to the same class.
\item[B-truncation.] $h_2 \, = \, h_3 \, = \, 0$. In this case the residual potential is:
$$V\, = \,- \, \frac{1}{32}\left(-4+e^{-h_1}+e^{h_1}\right)$$
Since the kinetic term of the $h_1$ field has a factor $q=1$, by means of the substitution (\ref{babushka}), we obtain:
\begin{equation}\label{potentus2}
\mathcal{V}(\varphi) \, = \, \frac{1}{16}\left(2-\cosh\left(\frac{\varphi}{\sqrt{3}}\right)\right)
\end{equation}
which also belongs to the class of exponential potentials here considered but does not fit into any integrable series or sporadic case.
\end{description}
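Both truncations can be verified mechanically. The sympy sketch below (our own transcription of $V_{(2)}^{locgeo}$) checks that the derivatives along the frozen directions vanish identically on each locus; for the A-truncation this happens when $h_1$ and $h_3$ are frozen and the $T$-dilaton $h_2$ is kept, consistently with the factor $q=3$ in its kinetic term.

```python
import sympy as sp

h1, h2, h3 = sp.symbols('h1 h2 h3')
e = sp.exp

# V_(2)^{locgeo}: the generic potential (Vlocgeo) evaluated on the
# coefficients of W_(2), i.e. (l1, l4, l5, l8) = (1, -1, 3, 3).
V2 = (e(-h1 - 3*h2 - 3*h3)/64 + e(-3*h2)/32 + e(h1 - 3*h2 + 3*h3)/64
      - 3*e(-2*h2 + h3)/32 - 3*e(-h1 - h2 - h3)/64
      - 3*e(-2*h2 - h3)/32 + 9*e(-h2)/32 - 3*e(h1 - h2 + h3)/64)

def is_consistent(V, frozen):
    """A one-field truncation is consistent iff the derivatives with
    respect to the frozen fields vanish identically on the locus."""
    locus = {f: 0 for f in frozen}
    return all(sp.simplify(sp.diff(V, f).subs(locus)) == 0 for f in frozen)

# A-truncation: freeze h1 and h3, keep the T-modulus dilaton h2.
assert is_consistent(V2, [h1, h3])
residual_A = sp.expand(V2.subs({h1: 0, h3: 0}))

# B-truncation: freeze h2 and h3, keep the S dilaton h1.
assert is_consistent(V2, [h2, h3])
residual_B = sp.expand(V2.subs({h2: 0, h3: 0}))
```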
\subsubsection{\sc The $\mathrm{AdS}$ potential 3a}
Calculating the mass matrix in the extremum of the potential $ V_{(3a)}^{locgeo}(\vec{h})$ we obtain:
\begin{equation}\label{massmat2}
M_{mass} \, = \, - \, \left(
\begin{array}{lll}
\frac{1}{16} & -\frac{3}{4} &
-\frac{3}{4} \\
-\frac{3}{4} & -3 &
-\frac{3}{2} \\
-\frac{3}{4} & -\frac{3}{2} &
-3
\end{array}
\right)
\end{equation}
The eigenvalues of this mass-matrix are:
\begin{equation}\label{eigevallla2}
\mathrm{Eigenval} \, = \, \left\{- \underbrace{\frac{1}{32} \left(-71-\sqrt{6481}\right )}_{> \, 0},\underbrace{\frac{3}{2}}_{> \, 0},\underbrace{- \frac{1}{32}\left(-71+\sqrt{6481}\right)}_{< \, 0}\right\}
\end{equation}
showing that this anti de Sitter vacuum is stable, since all the eigenvalues satisfy the Breitenlohner-Freedman bound $\lambda_i \, > \, -\, \frac{45}{64}$. The corresponding mass eigenstates are the following fields
\begin{equation}\label{basato}
\left \{\phi_1,\, \phi_2,\, \phi_3 \right \} \, = \, \left\{\frac{1}{24}\left(-73+\sqrt{6481}\right) h_1+h_2+h_3,\; -h_2+h_3,\; -\frac{1}{24}\left(73+\sqrt{6481}\right) h_1+h_2+h_3\right\}
\end{equation}
Calculating the derivatives we verify that there is no consistent truncation of this potential.
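The spectrum (\ref{eigevallla2}) and the stability statement are easy to confirm numerically; in the sketch below the Breitenlohner-Freedman bound is written as $\frac{3}{4}V(0)$ with $V(0)=-\frac{15}{16}$, an assumption about the normalization that matches the bounds quoted for the other AdS cases.

```python
import numpy as np

# Mass matrix (massmat2) of V_(3a) at the extremum.
M = -np.array([[1/16, -3/4, -3/4],
               [-3/4, -3.0, -3/2],
               [-3/4, -3/2, -3.0]])

evs = np.sort(np.linalg.eigvalsh(M))
analytic = np.sort([(71 - np.sqrt(6481))/32, 3/2, (71 + np.sqrt(6481))/32])
assert np.allclose(evs, analytic)

# One eigenvalue is negative, yet the AdS vacuum is stable: every
# eigenvalue lies above the (assumed) Breitenlohner-Freedman bound
# (3/4) V(0) = -45/64.
assert np.all(evs > 0.75 * (-15/16))
```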
\subsubsection{\sc The $\mathrm{AdS}$ potential 3b}
Calculating the mass matrix in the extremum of the potential $ V_{(3b)}^{locgeo}(\vec{h})$ we obtain:
\begin{equation}\label{massmat3}
M_{mass} \, = \, - \, \left(
\begin{array}{lll}
\frac{1}{144} & \frac{1}{12}
& -\frac{1}{12} \\
\frac{1}{12} & -\frac{1}{3} &
\frac{1}{6} \\
-\frac{1}{12} & \frac{1}{6} &
-\frac{1}{3}
\end{array}
\right)
\end{equation}
The eigenvalues of this mass-matrix are:
\begin{equation}\label{eigevallla3}
\mathrm{Eigenval} \, = \, \left\{\underbrace{\frac{1}{288} \left(71+\sqrt{6481}\right )}_{> \, 0},\underbrace{\frac{1}{6}}_{> \, 0},\underbrace{\frac{1}{288}\left(71-\sqrt{6481}\right)}_{< \, 0}\right\}
\end{equation}
showing that also this anti de Sitter vacuum is stable, since all eigenvalues satisfy the Breitenlohner-Freedman bound $\lambda_i \, > \, - \, \frac{5}{64}$. The corresponding mass eigenstates are the following fields
\begin{equation}\label{basatobis}
\left \{\phi_1,\, \phi_2,\, \phi_3 \right \} \, = \, \left\{\frac{1}{24}\left(-73+\sqrt{6481}\right) h_1-h_2+h_3,\; h_2+h_3,\; -\frac{1}{24}\left(73+\sqrt{6481}\right) h_1-h_2+h_3\right\}
\end{equation}
Calculating the derivatives we verify that there is no consistent truncation of this potential.
\subsubsection{\sc The $\mathrm{AdS}$ potential 1}
Calculating the mass matrix in the extremum of the potential $ V_{(1)}^{locgeo}(\vec{h})$ we obtain:
\begin{equation}\label{massmat4}
M_{mass} \, = \, - \, \left(
\begin{array}{lll}
\frac{1}{16} & 0 & 0 \\
0 & -\frac{3}{8} & 0 \\
0 & 0 & -\frac{3}{8}
\end{array}
\right)
\end{equation}
which is diagonal and the eigenvalues are immediately read off:
\begin{equation}\label{eigevallla4}
\mathrm{Eigenval} \, = \, \left\{\underbrace{- \frac{1}{16}}_{<\, 0},\underbrace{\frac{3}{8}}_{> \, 0},\underbrace{\frac{3}{8}}_{> \, 0}\right\}
\end{equation}
showing that also this anti de Sitter vacuum is stable. Indeed also in this case the Breitenlohner-Freedman bound is satisfied $\lambda_i \, > \, - \, \frac{9}{64}$. The mass eigenstates coincide in this case with the charge eigenstates and we have two consistent one-field truncations:
\begin{description}
\item[A-truncation.] $h_1 \, = \, h_3 \, = \, 0$. In this case the residual potential is:
$$V\, = \, \frac{3}{16} e^{-2 h_2}-\frac{3 e^{-h_2}}{8}$$
Since the kinetic term of $h_2$ has a factor $q=3$, by means of the substitution (\ref{babushka}), we obtain:
\begin{equation}\label{potentus4}
\mathcal{V}(\varphi) \, = \, \frac{3}{16} e^{-2 \varphi /3}-\frac{3 e^{-\varphi /3}}{8}
\end{equation}
which is not any of the integrable potentials but belongs to the same class.
\item[B-truncation.] $h_2 \, = \, h_3 \, = \, 0$. In this case the residual potential is:
$$V\, = \, - \, \frac{1}{32}
\left(4+e^{-h_1}+e^{h_1}\right)$$
Since the kinetic term of the $h_1$ field has a factor $q=1$, by means of the substitution (\ref{babushka}), we obtain:
\begin{equation}\label{potentus5}
\mathcal{V}(\varphi) \, = \, - \, \frac{1}{16} \left(\cosh\left(\frac{\varphi}{\sqrt{3}}\right)+2\right)
\end{equation}
which also belongs to the class of exponential potentials here considered but does not fit into any integrable series or sporadic case.
\end{description}
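The same mechanical check applies to this case. The sympy sketch below (our own transcription of $V_{(1)}^{locgeo}$ from the coefficients of $W_{(1)}$) confirms that both truncation loci are consistent and reproduces the residual potentials quoted above.

```python
import sympy as sp

h1, h2, h3 = sp.symbols('h1 h2 h3')
e = sp.exp

# V_(1)^{locgeo}: the potential (Vlocgeo) on the coefficients of W_(1),
# i.e. (l1, l4, l5, l8) = (-1, -1, -3, 3).
V1 = (e(-h1 - 3*h2 - 3*h3)/64 - e(-3*h2)/32 + e(h1 - 3*h2 + 3*h3)/64
      + 3*e(-2*h2 + h3)/32 - 3*e(-h1 - h2 - h3)/64
      + 3*e(-2*h2 - h3)/32 - 9*e(-h2)/32 - 3*e(h1 - h2 + h3)/64)

# A-truncation (h1 = h3 = 0): derivatives along the frozen directions
# vanish identically; residual = 3/16 e^{-2 h2} - 3/8 e^{-h2}.
locusA = {h1: 0, h3: 0}
assert all(sp.simplify(sp.diff(V1, f).subs(locusA)) == 0 for f in (h1, h3))
residual_A = sp.expand(V1.subs(locusA))

# B-truncation (h2 = h3 = 0): residual = -1/32 (4 + e^{-h1} + e^{h1}).
locusB = {h2: 0, h3: 0}
assert all(sp.simplify(sp.diff(V1, f).subs(locusB)) == 0 for f in (h2, h3))
residual_B = sp.expand(V1.subs(locusB))
```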
\subsection{\sc Another more complex example}
Inspired by the superpotential presented in eq.(5.12) of \cite{Dibitetto:2011gm} we have also considered the following extension of the superpotential (\ref{partus}):
\begin{eqnarray}\label{gonzaldino}
\hat{W}^{locgeo} & = & \lambda _1+S U^3 \lambda _4+TU \lambda _5+S T U^2\lambda _8 \nonumber\\
&&+T^3 U^3 \lambda_9+S T^3 \lambda _{12}+T^2U^2 \lambda _{13}+S T^2 U\lambda _{16}
\end{eqnarray}
which leads to a potential with 30 terms and 26 different types of exponentials. Finding all the roots of the equations that determine the existence of an extremum turned out to be too difficult; yet, apart from the already known solutions of the previous sections, we were able to find by trial and error another solution, corresponding to the following superpotential which depends on the overall parameter $\lambda_{4}$:
\begin{equation}\label{fasciofascio}
\hat{W}_0 \, = \, -T U\lambda _4-S T^2 U \lambda _4-2 S TU^2 \lambda _4
+2 T^2 U^2 \lambda_4+S U^3 \lambda _4+T^3 U^3 \lambda_4
\end{equation}
The corresponding scalar potential, which for brevity we do not write, can be consistently truncated to the dilatons by setting all the axions to zero, and by construction it has an extremum at $S=T=U={\rm i}$ where it takes the positive value $\frac{\lambda _4^2}{12}$. Hence, choosing the superpotential (\ref{fasciofascio}) we find a $\mathrm{dS}$ vacuum. Calculating the mass matrix in this extremum we find:
\begin{equation}\label{matamat5}
M \, = \, - \, \frac{\lambda_4^2}{24} \, \left(
\begin{array}{lll}
1 & 3 & 0 \\
3 & -1 & 0 \\
0 & 0 & -4
\end{array}
\right)
\end{equation}
It is convenient to choose $\lambda_4=\sqrt{24}$, and in this way the eigenvalues of $M$ take the following simple form:
\begin{equation}\label{eigati}
\mbox{Eigenvalues}[M] \, \equiv \, \Lambda_i \, = \, \left\{4,\sqrt{10},\, - \,\sqrt{10}\right\}
\end{equation}
The presence of a negative one among the eigenvalues (\ref{eigati}) shows that the constructed $\mathrm{dS}$-vacuum is unstable.
The eigenstates corresponding to the above eigenvalues are the following fields:
\begin{equation}\label{autostatti}
\left\{\phi_1 , \, \phi_2 , \, \phi_3 \right\} \, = \, \left\{h_3,\,\left(\frac{1}{3}-\frac{\sqrt{10}}{3}\right) h_1+h_2,\,\left(\frac{1}{3}+\frac{\sqrt{10}}{3}\right) h_1+h_2\right\}
\end{equation}
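Both the spectrum (\ref{eigati}) and the eigenstates (\ref{autostatti}) are easy to confirm numerically; a minimal sketch:

```python
import numpy as np

lam4_sq = 24.0                      # the convenient choice lambda_4 = sqrt(24)
M = -(lam4_sq/24) * np.array([[1.0,  3.0,  0.0],
                              [3.0, -1.0,  0.0],
                              [0.0,  0.0, -4.0]])

# Spectrum {4, sqrt(10), -sqrt(10)}: the negative eigenvalue signals an
# unstable dS vacuum.
evs = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(evs, np.sort([4.0, np.sqrt(10.0), -np.sqrt(10.0)]))

# Eigen-directions of eq. (autostatti), written as vectors in the h-basis:
v1 = np.array([0.0, 0.0, 1.0])                      # phi_1 = h_3
v2 = np.array([(1 - np.sqrt(10.0))/3, 1.0, 0.0])    # phi_2
v3 = np.array([(1 + np.sqrt(10.0))/3, 1.0, 0.0])    # phi_3
assert np.allclose(M @ v1, 4.0*v1)
assert np.allclose(M @ v2, np.sqrt(10.0)*v2)
assert np.allclose(M @ v3, -np.sqrt(10.0)*v3)
```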
By calculating the derivatives we find that setting $\phi_2\,=\,\phi_3\,=\, 0$ is a consistent truncation. The surviving potential has the form:
\begin{equation}\label{formidino}
V \, = \, 1+\frac{e^{-\phi _1}}{2}+2 e^{\phi_1}-3 e^{2 \phi _1}+\frac{3 e^{3 \phi _1}}{2}
\end{equation}
while the kinetic term of the field $\phi_1$ has $q=3$. By means of the transformation (\ref{babushka}) the potential (\ref{formidino}) is mapped into:
\begin{equation}\label{cassandra}
\mathcal{V}(\varphi) \, = \, 1+\frac{e^{-\varphi /3}}{2}+2 e^{\varphi /3}-3 e^{2 \varphi /3}+\frac{3 e^{\varphi }}{2}
\end{equation}
which is a combination of four different exponentials but does not fit into any of the integrable cases listed by us in tables \ref{tab:families} and \ref{Sporadic}.
\par
We might find still more examples, yet we think that those provided already illustrate the variety of one-field multi-exponential potentials one can obtain by consistent truncations of Gauged Supergravity. In this large variety, identifying combinations that perfectly match one of the integrable cases is quite difficult for want of a strategy able to orient our choices a priori; yet with some art we were able to single out at least one of them.
\subsection{\sc A new integrable model embedded into supergravity}
Working in a reduced parameter space that was determined with some inspired guessing we found the following very simple superpotential:
\begin{equation}\label{gordingo}
W_{integ} \, = \, \left(i T^3+1\right) \left(S U^3-1\right)
\end{equation}
which, inserted into the formula for the scalar potential and consistently truncated to zero axions, produces the following dilatonic potential:
\begin{eqnarray}
V_{dil}(\vec{h}) &=& \frac{5}{32}+\frac{1}{32} e^{-3 h_2}+\frac{e^{3
h_2}}{32}-\frac{1}{64} e^{-h_1-3 h_3}+\frac{1}{64}
e^{-h_1-3 h_2-3 h_3}\nonumber\\
&&+\frac{1}{64} e^{-h_1+3 h_2-3
h_3}-\frac{1}{64} e^{h_1+3 h_3}+\frac{1}{64} e^{h_1-3
h_2+3 h_3}+\frac{1}{64} e^{h_1+3 h_2+3 h_3} \label{dilatus}
\end{eqnarray}
Performing the following field redefinition:
\begin{equation}\label{rotaziosca}
h_1\to \sqrt{3} \phi _2 \quad , \quad h_2\to -\frac{\phi
_1}{\sqrt{3}}\quad , \quad h_3\to \phi _3
\end{equation}
which is a rotation that preserves the form of the dilaton kinetic term:
\begin{equation}\label{furbacchione}
\frac{1}{2} \dot{h}_1^2 + \frac{3}{2} \dot{h}_2^2 + \frac{3}{2} \dot{h}_3^2 \, \rightarrow \, \frac{1}{2} \dot{\phi}_1^2 + \frac{3}{2} \dot{\phi}_2^2 + \frac{3}{2} \dot{\phi}_3^2
\end{equation}
the dilaton potential (\ref{dilatus}) transforms into
\begin{eqnarray}
V_{dil}(\vec{\phi}) &=& \frac{5}{32}+\frac{1}{32} e^{-\sqrt{3} \phi
_1}+\frac{1}{32} e^{\sqrt{3} \phi _1}-\frac{1}{64}
e^{-\sqrt{3} \phi _2-3 \phi _3}+\frac{1}{64}
e^{-\sqrt{3} \phi _1-\sqrt{3} \phi _2-3 \phi
_3}\nonumber\\
&&+\frac{1}{64} e^{\sqrt{3} \phi _1-\sqrt{3} \phi
_2-3 \phi _3}-\frac{1}{64} e^{\sqrt{3} \phi _2+3 \phi
_3}+\frac{1}{64} e^{-\sqrt{3} \phi _1+\sqrt{3} \phi
_2+3 \phi _3}+\frac{1}{64} e^{\sqrt{3} \phi
_1+\sqrt{3} \phi _2+3 \phi _3}\nonumber\\
&&\label{baruffa}
\end{eqnarray}
The above potential has a de Sitter extremum at $\phi_{1,2,3}\, = \, 0$:
\begin{equation}\label{dSstabulo}
\left.\frac{\partial}{\partial \phi_{1,2,3}} \, V_{dil}(\vec{\phi})\right|_{\phi_{1,2,3}\, = \, 0} \, = \, 0 \quad ; \quad
\left. V_{dil}(\vec{\phi})\right|_{\phi_{1,2,3}\, = \, 0}\, = \, \frac{1}{4} \, > \, 0
\end{equation}
This dS vacuum is stable since the mass matrix:
\begin{equation}\label{massamatriciotta}
\mbox{Mass}^2 \, \equiv \, \left.\frac{\partial^2}{\partial \phi_{i}\partial \phi_{j}} \, V_{dil}(\vec{\phi})\right|_{\phi_{1,2,3}\, = \, 0} \, = \, \left(
\begin{array}{lll}
\frac{3}{8} & 0 & 0 \\
0 & \frac{3}{32} & \frac{3 \sqrt{3}}{32} \\
0 & \frac{3 \sqrt{3}}{32} & \frac{9}{32}
\end{array}
\right)
\end{equation}
has two positive and one null eigenvalue:
\begin{equation}\label{eigenvaluti}
\mbox{Eigenvalues} \, \mbox{Mass}^2 \, = \, \left\{\frac{3}{8},\frac{3}{8},0\right\}
\end{equation}
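These statements are straightforward to verify symbolically. The following SymPy sketch, added here as a cross-check rather than as part of the original computation, reproduces the vanishing gradient of (\ref{baruffa}) at the origin, the value $V=\frac{1}{4}$, and the eigenvalues (\ref{eigenvaluti}) of the mass matrix (\ref{massamatriciotta}):

```python
import sympy as sp

f1, f2, f3 = sp.symbols('phi1 phi2 phi3')
s3 = sp.sqrt(3)

# the potential (baruffa)
V = (sp.Rational(5, 32) + sp.exp(-s3*f1)/32 + sp.exp(s3*f1)/32
     - sp.exp(-s3*f2 - 3*f3)/64 + sp.exp(-s3*f1 - s3*f2 - 3*f3)/64
     + sp.exp(s3*f1 - s3*f2 - 3*f3)/64 - sp.exp(s3*f2 + 3*f3)/64
     + sp.exp(-s3*f1 + s3*f2 + 3*f3)/64 + sp.exp(s3*f1 + s3*f2 + 3*f3)/64)

origin = {f1: 0, f2: 0, f3: 0}

grad = [sp.diff(V, f).subs(origin) for f in (f1, f2, f3)]  # extremum condition
V0 = V.subs(origin)                                        # value at the extremum
M2 = sp.hessian(V, (f1, f2, f3)).subs(origin)              # the mass matrix
eig = M2.eigenvals()                                       # multiplicities included
print(grad, V0, eig)
```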
The corresponding mass eigenstates are the following fields:
\begin{equation}\label{caratto}
\left \{ \omega_1 \, ,\, \omega_2 \, ,\,\omega_3 \right \} \, = \, \left\{\frac{\phi _2}{\sqrt{3}}+\phi _3,\phi _1,\phi
_3-\sqrt{3} \phi _2\right\}
\end{equation}
and transformed to the $\omega_i$ basis the dilatonic potential (\ref{baruffa}) becomes:
\begin{eqnarray}
V_{dil}(\vec{\omega})\ &=& \frac{5}{32}-\frac{1}{64} e^{-3 \omega _1}-\frac{e^{3
\omega _1}}{64}+\frac{1}{32} e^{-\sqrt{3} \omega
_2}+\frac{1}{32} e^{\sqrt{3} \omega _2}+\frac{1}{64}
e^{-3 \omega _1-\sqrt{3} \omega _2}\nonumber\\
&&+\frac{1}{64} e^{3
\omega _1-\sqrt{3} \omega _2}+\frac{1}{64}
e^{\sqrt{3} \omega _2-3 \omega _1}+\frac{1}{64} e^{3
\omega _1+\sqrt{3} \omega _2}\label{finocchio}
\end{eqnarray}
which is explicitly independent of the massless field $\omega_3$ and can be consistently truncated to either one of the two massive modes $\omega_{1,2}$. In terms of the mass eigenstates the kinetic term has the following form:
\begin{equation}\label{kineticoso}
\mbox{kin} \, = \, \frac{9 \dot{\omega} _1^2}{8}+\frac{\dot{\omega} _2^2}{2}+\frac{3
\dot{\omega} _3^2}{8}
\end{equation}
In view of this and using the translation rule (\ref{babushka}), the two potentials that we obtain from the two consistent truncations are:
\begin{eqnarray}
3 \left. V_{dil}(\vec{\omega})\right|_{\omega_{1}\, = \, 0 \, ;\, \omega_2 \, = \, \frac{\varphi}{\sqrt{3}}}&=& \frac{3}{8} (\cosh (\varphi )+1)\label{integpotenz1} \\
3 \left. V_{dil}(\vec{\omega})\right|_{\omega_{1}\, = \, \frac{2 \, \varphi}{3\sqrt{3}} \, ;\, \omega_2 \, = \, 0} &=& \frac{3}{32} \left(\cosh \left(\frac{2 \varphi
}{\sqrt{3}}\right)+7\right) \label{nonintegpotenz2}
\end{eqnarray}
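As a cross-check of the two truncations, one can substitute the truncation conditions into (\ref{finocchio}) symbolically. The following SymPy sketch, added here for verification only, reproduces the potentials (\ref{integpotenz1}) and (\ref{nonintegpotenz2}):

```python
import sympy as sp

w1, w2, phi = sp.symbols('omega1 omega2 varphi')
s3 = sp.sqrt(3)

# potential (finocchio) in the mass-eigenstate basis (omega3 has dropped out)
V = (sp.Rational(5, 32) - sp.exp(-3*w1)/64 - sp.exp(3*w1)/64
     + sp.exp(-s3*w2)/32 + sp.exp(s3*w2)/32
     + sp.exp(-3*w1 - s3*w2)/64 + sp.exp(3*w1 - s3*w2)/64
     + sp.exp(s3*w2 - 3*w1)/64 + sp.exp(3*w1 + s3*w2)/64)

# first truncation: omega1 = 0, omega2 = varphi/sqrt(3)
V1 = 3*V.subs({w1: 0, w2: phi/s3})
target1 = sp.Rational(3, 8)*(sp.cosh(phi) + 1)

# second truncation: omega1 = 2 varphi/(3 sqrt(3)), omega2 = 0
V2 = 3*V.subs({w1: 2*phi/(3*s3), w2: 0})
target2 = sp.Rational(3, 32)*(sp.cosh(2*phi/s3) + 7)

# rewrite the cosh targets in exponentials and compare term by term
ok1 = sp.expand((V1 - target1).rewrite(sp.exp)) == 0
ok2 = sp.expand((V2 - target2).rewrite(sp.exp)) == 0
print(ok1, ok2)
```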
The potential (\ref{integpotenz1}) fits into the integrable series $I_1$ of table \ref{tab:families} with $C_{11}=C_{22}=C_{12}$, while the potential (\ref{nonintegpotenz2}) associated with the second consistent truncation does not fit into any integrable series.
\begin{figure}[!hbt]
\begin{center}
\iffigs
\includegraphics[height=70mm]{susypotenziallo.eps}
\fi
\end{center}
\caption{\it
Structure of the two-field supergravity potential which hosts an integrable $\cosh[\varphi]$ model. The integrable potential is cut out by the intersection with the plane $\omega_1=0$.}
\label{bellopotente}
\end{figure}
\par
The lesson that we learn from this example is quite illuminating for appreciating the significance and the role of integrable models in the framework of supergravity. Leaving aside the axions, which should in any case be taken into account but which we assume to be stabilized at their vanishing values, a generic solution of the supergravity field equations would involve a scalar field moving, in the current example, on the two-dimensional surface of the potential displayed in fig.~\ref{bellopotente}. Certainly the two-field model is not integrable, since it admits non-integrable reductions, so that generic solutions cannot be derived. Yet the existence of an integrable one-field reduction implies that we can work out some special exact solutions of the multi-field theory by using the integrability of a particular reduction. Conceptually this means that the emphasis on general integrals usually attached to integrability is to be dismissed in this case. The general integral of the reduction is in any case a set of particular solutions of the complete physical theory which does not capture the full space of initial conditions. The correct attitude is to consider the whole machinery of integrability as an algorithm to construct particular solutions of Einstein's equations that might be more or less relevant depending on their properties.
\section{\sc Summary and Conclusions}\label{sec:conclusion}
In the present paper we have addressed two questions:
\begin{description}
\item[A)] Whether any of the integrable one-field cosmological models classified in \cite{primopapero} can be fitted into Gauged Supergravity (in the $\mathcal{N}=1$ case we have restricted ourselves to F-term supergravities, where the contribution of the vector multiplets to the scalar potential is negligible) based on constant curvature scalar manifolds $\mathrm{G/H}$, as consistent one-field truncations of appropriate multi-field models.
\item[B)] Whether the solutions of integrable one-field cosmological models can be used as a handy simulation of the behavior of unknown exact cosmological solutions of the Friedman equations.
\end{description}
Both questions have received a positive answer.
\par
As for question A) we have shown that the embedding of integrable one-field cosmologies into Gauged Supergravity is rather difficult but not impossible. Indeed we were able to identify two independent examples of integrable potentials that can be embedded into $\mathcal{N}=1$ supergravity, gauged by means of suitable and particularly nice superpotentials:
\begin{description}
\item[Model One)] The first integrable supersymmetric model corresponds to $\mathcal{N}=1$ supergravity coupled to a single Wess-Zumino multiplet with the K\"ahler Geometry of $\frac{\mathrm{SU(1,1)}}{\mathrm{U(1)}}$ and the following quartic superpotential:
\begin{equation}\label{superpot1}
W_{int1}(z) \, = \, \frac{2}{\sqrt{5}} \left (3\, z^4 \, + \, {\rm i} \, \omega \, z^3\right)
\end{equation}
and gives rise, after truncation to the dilaton, to the integrable potential of series $I_2$ in table \ref{tab:families} with $\gamma \, = \, \frac{2}{3}$.
\item [Model Two)] It is obtained within the STU-model with a pure $\mathcal{N}=1$ K\"ahler structure (the special K\"ahler structure is violated) and a very specific superpotential:
\begin{equation}\label{superpot2}
W_{int2}(S,T,U) \, = \, \left(i T^3+1\right) \left(S U^3-1\right)
\end{equation}
whose interpretation within flux compactification is an interesting issue to be pursued further. A consistent truncation of this $\mathcal{N}=1$ model yields the integrable potential $I_1$ of table \ref{tab:families}.
\end{description}
An important additional result of the present paper is the complete classification of all possible $\mathcal{N}=2$ gaugings of the STU model that was presented in section \ref{STUgauginghi}. This classification was performed within the embedding tensor formalism, by means of which we reduced the enumeration of non-abelian gaugings to the enumeration of admissible $\mathrm{G}$-orbits in the $\mathbf{W}$-representation, the same representation to which black-hole charges are assigned. In each admissible orbit we have the choice of switching on the Fayet-Iliopoulos terms or keeping them zero. This yields two different gaugings for each admissible orbit. Finally one can consider purely abelian gaugings, which are once again in correspondence with the $\mathrm{G}$-orbits in the $\mathbf{W}$-representation.
This classification provided two results. On one hand we verified that the only (stable) de Sitter vacuum is the one that was obtained several years ago in \cite{mapietoine}. On the other hand we were led to conclude that no integrable model can be embedded in any of these gauged models.
\par
Provisionally it follows that the very few examples of integrable cosmologies admitting a supergravity embedding are found within the $\mathcal{N}=1$ framework with F-term gauging. On the other hand, utilizing the D-terms and the axial symmetric K\"ahlerian manifolds that are in the image of the D-map, infinite series of integrable cosmological models can be embedded into $\mathcal{N}=1$ Supergravity \cite{secondosashapietro}.
We plan to pursue further the classification of all the gaugings for the $\mathcal{N}=2$ models of table \ref{homomodels} in order to ascertain whether these conclusions hold true in all cases or whether there are new integrable truncations \cite{terzopapero}.
\par
As for question B), we have addressed it concretely in the case of the $\cosh$-model, which emerges in many one-field truncations but is integrable only for a few distinct values of the parameters. We have shown that if the non-integrable model under consideration and the integrable one share the same type of fixed point (for instance a node), then the solutions of the integrable case capture all the features of the solutions of the non-integrable model and are actually numerically very close to them. Such a demonstration is necessarily only qualitative and can be appreciated by looking at the plots. A precise algorithm to estimate the error is so far absent.
\par
The detailed analysis, presented in section \ref{integsusymodel}, of the space of solutions of the supersymmetric integrable \textit{Model One} revealed a new interesting phenomenon that, to the best of our knowledge, had so far gone unnoticed in General Relativity. As we stressed in paper \cite{primopapero}, whenever the scalar potential has an extremum at negative values, the solutions of the Friedman equation describe a Universe that, notwithstanding its spatial flatness, ends its life in a Big Crunch like a closed Universe with positive spatial curvature. This is already an interesting novelty, but an even more striking one was discovered in our analysis of sect.~\ref{parteventhoriz}. The causal structure of these spatially flat collapsing universes is significantly different from that of the closed universe, since here the particle and event horizons do not coincide and have an interesting evolution during the universe's life cycle. We think that the possible cosmological implications of this mechanism should be attentively considered.
\vskip 2cm
\section*{Acknowledgments}
The authors wish to thank A. Sagnotti for enlightening discussions and critical reading of the manuscript. \\
The work of A.S. was supported in part by the RFBR Grants No. 11-02-01335-a, No. 13-02-91330-NNIO-a and No. 13-02-90602-Arm-a.
\newpage
Q: String identification in text files using regex in R

This is my first post on Stack Overflow and I'll try and explain my problem as succinctly as possible.
The problem is pretty simple. I'm trying to identify strings containing alphanumeric characters (and alphanumeric characters with symbols) and remove them. I looked at previous questions on Stack Overflow and found a solution that looks good.
https://stackoverflow.com/a/21456918/7467476
I tried the provided regex (slightly modified) in Notepad++ on some sample data just to see if it's working (and yes, it works). Then, I proceeded to use the same regex in R and use gsub to replace the string with "" (code given below).
replace_alnumsym <- function(x) {
return(gsub("(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])(?=.*[_-])[A-Za-z0-9_-]{8,}", "", x, perl = T))
}
replace_alnum <- function(x) {
return(gsub("(?=.*[a-z])(?=.*[A-Z])(?=.*[0-9])[a-zA-Z0-9]{8,}", "", x, perl = T))
}
sample <- c("abc def ghi WQE34324Wweasfsdfs23234", "abcd efgh WQWEQtWe_232")
output1 <- sapply(sample, replace_alnum)
output2 <- sapply(sample, replace_alnumsym)
The code runs fine but the output still contains the strings. It hasn't been removed. I'm not getting any errors when I run the code (output below). The output format is also strange. Each element is printed twice (once without and once within quotes).
> output1
abc def ghi WQE34324Wweasfsdfs23234 abcd efgh WQWEQtWe_232
"abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh WQWEQtWe_232"
> output2
abc def ghi WQE34324Wweasfsdfs23234 abcd efgh WQWEQtWe_232
"abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh WQWEQtWe_232"
The desired result would be:
> output1
abc def ghi abcd efgh WQWEQtWe_232
> output2
abc def ghi WQE34324Wweasfsdfs23234 abcd efgh
I think I'm probably overlooking something very obvious.
Appreciate any assistance that you can provide.
Thanks
A: Your outputs are not printing twice, they're being output as named vectors. The unquoted line is the element names, the quoted line is the output itself. You can see this by checking the length of an output:
length( sapply( sample, replace_alnum ) )
# [1] 2
So you know there are only 2 elements there.
If you want them without the names, you can unname the vector on output:
unname( sapply( sample, replace_alnum ) )
# [1] "abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh WQWEQtWe_232"
Alternatively, you can rename them something more to your liking:
output <- sapply( sample, replace_alnum )
names( output ) <- c( "name1", "name2" )
output
# name1 name2
# "abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh WQWEQtWe_232"
As far as the regex itself, it sounds like what you want is to apply it to each string separately. If so, and if you want them back to where they were at the end, you need to split them by space, then recombine them at the end.
# split by space (leaving results in separate list items for recombining later)
input <- sapply( sample, strsplit, split = " " )
# apply your function on each list item separately
output <- sapply( input, replace_alnumsym )
# recombine each list item as they looked at the start
output <- sapply( output, paste, collapse = " " )
output <- unname( output )
output
# [1] "abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh "
And if you want to clean up the trailing white space:
output <- trimws( output )
output
# [1] "abc def ghi WQE34324Wweasfsdfs23234" "abcd efgh"
A: No idea if this regex-based approach is really fine, but it is possible if we assume that:
* alnumsym "words" are non-whitespace chunks delimited with whitespace and start/end of string;
* alnum words are chunks of letters/digits separated with non-letter/digits or start/end of string.
Then, you may use
sample <- c("abc def ghi WQE34324Wweasfsdfs23234", "abcd efgh WQWEQtWe_232")
gsub("\\b(?=\\w*[a-z])(?=\\w*[A-Z])(?=\\w*\\d)\\w{8,}", "", sample, perl=TRUE) ## replace_alnum
gsub("(?<!\\S)(?=\\S*[a-z])(?=\\S*[A-Z])(?=\\S*[0-9])(?=\\S*[_-])[A-Za-z0-9_-]{8,}", "", sample, perl=TRUE) ## replace_alnumsym
See the R demo online.
Pattern 1 details:
* \\b - a leading word boundary (we need to match a word)
* (?=\\w*[a-z]) - (a positive lookahead) after 0+ word chars (\w*) there must be a lowercase ASCII letter
* (?=\\w*[A-Z]) - an uppercase ASCII letter must be inside this word
* (?=\\w*\\d) - and a digit, too
* \\w{8,} - if all the conditions above matched, match 8+ word chars
Note that to avoid matching _ (\w matches _) you need to replace \w with [^\W_].
Pattern 2 details:
* (?<!\\S) - (a negative lookbehind) no non-whitespace can appear immediately to the left of the current location (a whitespace or start of string should be in front)
* (?=\\S*[a-z]) - after 0+ non-whitespace chars, there must be a lowercase ASCII letter
* (?=\\S*[A-Z]) - the non-whitespace chunk must contain an uppercase ASCII letter
* (?=\\S*[0-9]) - and a digit
* (?=\\S*[_-]) - and either _ or -
* [A-Za-z0-9_-]{8,} - if all the conditions above matched, match 8+ ASCII letters, digits or _ or -.
Say "Manchester" and the first thought to strike is football and Manchester United. But not during the months of June and July in 2019! During these two months, the city is going to drown in the roars of the crowd cheering for their favorite international cricket team at the Old Trafford Cricket Stadium. The city might be known for its football team, but Manchester will be owned by the cricket fans during the ICC World Cup 2019 season.
Being a host to six nail-biting matches, including a semi-final, Manchester is certainly a city in England that would see many visitors during the ICC World Cup 2019. So, while you are in the city, here are a few essential things to know about Manchester to have an optimum time.
Also, check out the World Cup schedule in London!
Staying at a hotel near the Old Trafford Cricket Stadium in Manchester will be your best bet if you are in the city to attend a match. This will help you reach the stadium on match day without any hassle, and who knows, you might even catch a glimpse of a cricket team from your hotel room balcony.
Hilton Garden Inn Manchester Emirates Old Trafford: This one is at a walking distance from the stadium. The mid-range chain hotel offers numerous facilities along with rooms that offer cricket ground views.
Trafford Hall Hotel: If you are in Manchester on a budget, then the Trafford Hall Hotel is your next best option. The quaint-looking hotel with a dash of Victorian style is just a 15-20 minute walk away.
Chesters Hotel: Another budget hotel, this one lies a block away from the Old Trafford Cricket Stadium. Start walking from the hotel, and you will find yourself at the stadium within 10-15 minutes.
Other neighborhoods where you would find a variety of accommodations in Manchester include Ordsall and Spinningfields.
Who says you just have to attend a cricket match during your visit to Manchester? The best way to make the most of your time in the city is to explore it to the fullest. Utilize the time you have before or after a match and visit the famous attractions in Manchester. Here are a few that you must include in your trip plan.
Old Trafford Football Stadium: This one is just half a mile away from the Old Trafford Cricket Stadium. Being a sports fan, you absolutely can't miss a tour of this iconic stadium that the Manchester United football team calls home. You may check out a few of these Manchester United Museum and Stadium tours at Old Trafford.
Manchester Museum: This place is located around 3 miles away from the Old Trafford Cricket Stadium. It is one of the best places in Manchester to discover great works of archaeology, natural history, and anthropology.
Heaton Park: The Heaton Park is about 8 miles away from the Old Trafford Cricket Stadium and may prove to be a little farther from the venue. But, a stroll through this vast park on a pleasant evening is something you should not miss.
Manchester Art Gallery: This art-lovers' paradise is just about 3 miles away from the Old Trafford Cricket Stadium. With a collection spanning the artworks of about 6 centuries, it is definitely one of the must-visit places in Manchester.
People's History Museum, Chill Factore, and The John Rylands Library are a few other attractions in Manchester. You may take a look at these several other things to do in Manchester that promise a great time in the city.
A great way to celebrate your team's victory is by having a delicious dinner at one of the Manchester restaurants. Why, even if the team loses, just head towards one of the best places to drink in Manchester and wash away the sorrows in tasty beverages!
The Wharf, Castlefield: This is one of the best places to eat and drink in Manchester. Stay downstairs for the ales or walk upstairs to have delicious food at a full table.
Rosylee, Northern Quarter: This is just the right place to have an afternoon tea before heading for a day/night World Cup match at the Old Trafford Cricket Stadium in Manchester.
The Laundrette, Chorlton: This bar and restaurant is the right place to hit for a mouth-watering pizza, coupled with a tasty cocktail.
Mr. Cooper's House & Garden: Serving an international menu, this is a fine-dine restaurant with some sleek and contemporary setting.
The Oast House, Spinningfields: Looking at its appearance in daylight, you may feel like giving this place a skip. But, you would miss some great food and drinks if you do so. The rustic-style pub is also known to offer beer tasting tours.
You may also take a look at how to have a great time in the small town of Chester-Le-Street during the World Cup Matches.
Manchester has a thriving nightlife scene. With many pubs and bars in the city, you have all the reasons for celebrating the night after a match at the Old Trafford Cricket Stadium. Here are a few neighborhoods where you can taste the nightlife in Manchester.
Spinningfields: This area in Manchester is home to many restaurants, pubs, and bars. Besides, it is also close to the cricket stadium. The Dockyard, Slug & Lettuce, and Neighbourhood Manchester are a few places that you may try.
Canal Street: The Canal Street in Manchester's Gay Village is lined with numerous bars, clubs, and restaurants. Store Street Craft Bar, O2 Ritz, and Oscars are a few must-try places.
The Northern Quarter: This trendy neighborhood in Manchester is dotted with the liveliest music venues, bohemian bars, and buzzing restaurants. Matt & Phreds Jazz Club, Port Street Beer House, and Twenty Twenty Two are some of the popular places in The Northern Quarter.
Follow this Manchester guide during your World Cup trip to Manchester and rest assured that you will have a wonderful time.
Also, here's how you can make the most of the World Cup 2019 in Wales!
\section{Acknowledgments}
Much of my work on the quantum Hall effect has been in collaboration with Allan
MacDonald. The more recent work on quantum Hall ferromagnets has also been done
in collaboration with M.\ Abolfath, L.\ Belkhir, L.\ Brey, R.\ C\^{o}t\'{e}, H.
Fertig, P.\ Henelius, K.\ Moon, H.\ Mori, J.\ J.\ Palacios, A.\ Sandvik, H.
Stoof, C.\ Timm, K.\ Yang, D.\ Yoshioka, S.\ C.\ Zhang, and L.\ Zheng. It is a
pleasure to acknowledge many useful conversations with S.\ Das Sarma, M.\ P.\ A.
Fisher, N.\ Read, and S.\ Sachdev.
It is a pleasure to thank Ms.~Daphne Klemme for her expert typesetting
of my scribbled notes and Jairo Sinova for numerous helpful comments on
the manuscript.
This work was supported by NSF DMR-9714055.
\chapter{Berry's Phase and Adiabatic Transport}
\label{app:BerryPhase}
Consider a quantum system with a Hamiltonian $H_{\vec{R}}$ which depends on a
set of externally controlled parameters represented by the vector $\vec{R}$.
Assume that for some domain of $\vec{R}$ there is always a finite excitation gap
separating the ground state energy from the rest of the spectrum of
$H_{\vec{R}}$. Consider now the situation where the parameters $\vec{R}(t)$ are
slowly varied around a closed loop in parameter space in a time interval $T$
\begin{equation}
\vec{R}(0) = \vec{R}(T).
\end{equation}
If the circuit is traversed sufficiently slowly so that $h/T \ll$
\Delta_{\mathrm{min}}$ where $\Delta_{\mathrm{min}}$ is the minimum excitation
gap along the circuit, then the state will evolve \textit{adiabatically}. That
is, the state will always be the local ground state $\Psi_{\vec{R}(t)}^{(0)}$ of
the instantaneous Hamiltonian $H_{\vec{R}(t)}$. Given the complete set of energy
eigenstates for a given $\vec{R}$
\begin{equation}
H_{\vec{R}} \Psi_{\vec{R}}^{(j)} = \epsilon_{\vec{R}}^{(j)}
\Psi_{\vec{R}}^{(j)},
\end{equation}
the solution of the time-dependent Schr\"{o}dinger equation
\begin{equation}
i\hbar \frac{\partial\psi(\vec{r},t)}{\partial t} = H_{\vec{R}(t)}
\psi(\vec{r},t)
\label{eq:berry3}
\end{equation}
is
\begin{eqnarray}
\psi(\vec{r},t) &=& \Psi_{\vec{R}(t)}^{(0)}(\vec{r}\,)\; e^{i\gamma(t)}\;
e^{-\frac{i}{\hbar}\int_{0}^{t}dt'\; \epsilon_{\vec{R}(t')}^{(0)}}\nonumber\\
&&+\sum_{j\neq 0} a_{j}(t)\; \Psi_{\vec{R}(t)}^{(j)}.
\end{eqnarray}
The adiabatic approximation consists of neglecting the admixture of excited
states represented by the second term. In the limit of extremely slow variation
of $\vec{R}(t)$, this becomes exact as long as the excitation gap remains
finite. The only unknown at this point is the Berry Phase \cite{Berry}
$\gamma(t)$ which can be found by requiring that $\psi(\vec{r},t)$ satisfy the
time-dependent Schr\"{o}dinger equation. The LHS of eq.~(\ref{eq:berry3}) is
\begin{eqnarray}
i\hbar \frac{\partial\psi(\vec{r},t)}{\partial t} &=&
\left[-\hbar\dot{\gamma}(t) + \epsilon_{\vec{R}(t)}^{(0)}\right]\;
\psi(\vec{r},t)\nonumber\\
&&+i\hbar\dot{R}^{\mu}\; \left[\frac{\partial}{\partial R^{\mu}}\;
\Psi_{\vec{R}(t)}^{(0)}(\vec{r}\,)\right]\; e^{i\gamma(t)}\;
e^{-\frac{i}{\hbar}\int_{0}^{t}dt'\; \epsilon_{\vec{R}(t')}^{(0)}}
\label{eq:berry5}
\end{eqnarray}
if we neglect the $a_{j}(t)$ for $j > 0$. The RHS of eq.~(\ref{eq:berry3}) is
\begin{equation}
H_{\vec{R}(t)}\; \psi(\vec{r},t) = \epsilon_{\vec{R}(t)}^{(0)}\; \psi(\vec{r},t)
\label{eq:berry6}
\end{equation}
within the same approximation. Now use the completeness relation
\begin{equation}
\left|\frac{\partial}{\partial R^{\mu}}\; \Psi_{\vec{R}}^{(0)}\right\rangle =
\sum_{j=0}^{\infty} \left|\Psi_{\vec{R}}^{(j)}\right\rangle\;
\left\langle\Psi_{\vec{R}}^{(j)} \left|\frac{\partial}{\partial
R^{\mu}}\right.\; \Psi_{\vec{R}}^{(0)}\right\rangle.
\end{equation}
In the adiabatic limit we can neglect the excited state contributions so
eq.~(\ref{eq:berry5}) becomes
\begin{equation}
i\hbar \frac{\partial\psi}{\partial t} = \left[-\hbar\dot{\gamma}(t) +
i\hbar\dot{R}^{\mu}\; \left\langle\Psi_{\vec{R}}^{(0)}
\left|\frac{\partial}{\partial R^{\mu}}\right.\;
\Psi_{\vec{R}(t)}^{(0)}\right\rangle + \epsilon_{\vec{R}(t)}^{(0)}\right]\; \psi .
\end{equation}
This matches eq.~(\ref{eq:berry6}) provided
\begin{equation}
\dot{\gamma}(t) = i\dot{R}^{\mu}(t)\; \left\langle\Psi_{\vec{R}(t)}^{(0)}
\left|\frac{\partial}{\partial R^{\mu}}\right.\;
\Psi_{\vec{R}(t)}^{(0)}\right\rangle.
\end{equation}
The constraint $\left\langle\Psi_{\vec{R}}^{(0)}
\left|\Psi_{\vec{R}}^{(0)}\right.\right\rangle = 1$ guarantees that
$\dot{\gamma}$ is purely real.
Notice that there is a kind of gauge freedom here. For each $\vec{R}$ we have a
different set of basis states and we are free to choose their phases
independently. We can think of this as a gauge choice in the \textit{parameter}
space. Hence $\dot{\gamma}$ and $\gamma$ are `gauge dependent' quantities. It is
often possible to choose a gauge in which $\dot{\gamma}$ vanishes. The key
insight of Berry \cite{Berry} however was that this is not always the case. For
some problems involving a closed-circuit $\Gamma$ in parameter space the
\textit{gauge invariant} phase
\begin{equation}
\gamma_{\mathrm{Berry}} \equiv \int_{0}^{T} dt\; \dot{\gamma} = i \oint_{\Gamma}
dR^{\mu}\; \left\langle\Psi_{\vec{R}}^{(0)} \left|\frac{\partial}{\partial
R^{\mu}}\right.\; \Psi_{\vec{R}}^{(0)}\right\rangle
\end{equation}
is non-zero. This is a gauge invariant quantity because the system returns to
its starting point in parameter space and the arbitrary phase choice drops out
of the answer. This is precisely analogous to the result in electrodynamics that
the line integral of the vector potential around a closed loop is gauge
invariant. In fact it is useful to define the `Berry connection' $\mathcal{A}$
on the parameter space by
\begin{equation}
\mathcal{A}^{\mu}(\vec{R}\,) = i\; \left\langle\Psi_{\vec{R}}^{(0)}
\left|\frac{\partial}{\partial R^{\mu}}\right.\;
\Psi_{\vec{R}}^{(0)}\right\rangle
\label{eq:berry11}
\end{equation}
which gives the suggestive formula
\begin{equation}
\gamma_{\mathrm{Berry}} = \oint_{\Gamma} d\vec{R} \cdot \mathcal{A}(\vec{R}\,).
\end{equation}
Notice that the Berry phase is a purely geometric object independent of the
particular velocity $\dot{R}^{\mu}(t)$ and dependent solely on the path taken in
parameter space. It is often easiest to evaluate this expression using Stokes'
theorem, since the curl of $\mathcal{A}$ is a gauge invariant quantity.
As a simple example \cite{Berry} let us consider the Aharonov-Bohm effect where
$\mathcal{A}$ will turn out to literally be the electromagnetic vector
potential. Let there be an infinitely long solenoid running along the $z$ axis.
Consider a particle with charge $q$ trapped inside a box by a potential $V$
\begin{equation}
H = \frac{1}{2m}\; \left(\vec{p} - \frac{q}{c} \vec{A}\right)^{2} +
V\left(\vec{r} - \vec{R}(t)\right).
\label{eq:berry13}
\end{equation}
The position of the box is moved along a closed path $\vec{R}(t)$ which
encircles the solenoid but keeps the particle outside the region of magnetic
flux. Let $\chi^{(0)}\left(\vec{r} - \vec{R}(t)\right)$ be the adiabatic wave
function in the absence of the vector potential. Because the particle only sees
the vector potential in a region where it has no curl, the exact wave function
in the presence of $\vec{A}$ is readily constructed
\begin{equation}
\Psi_{\vec{R}(t)}^{(0)}(\vec{r}\,) = e^{\frac{i}{\hbar} \frac{q}{c}
\int_{\vec{R}(t)}^{\vec{r}} d\vec{r}' \cdot \vec{A}(\vec{r}')}\;
\chi^{(0)}\left(\vec{r} - \vec{R}(t)\right)
\label{eq:berry14}
\end{equation}
where the precise choice of integration path is immaterial since it is interior
to the box where $\vec{A}$ has no curl. It is straightforward to verify that
$\Psi_{\vec{R}(t)}^{(0)}$ exactly solves the Schr\"{o}dinger equation for the
Hamiltonian in eq.~(\ref{eq:berry13}) in the adiabatic limit.
The arbitrary decision to start the line integral in eq.~(\ref{eq:berry14}) at
$\vec{R}$ constitutes a gauge choice in parameter space for the Berry
connection. Using eq.~(\ref{eq:berry11}) the Berry connection is easily found to
be
\begin{equation}
\mathcal{A}^{\mu}(\vec{R}\,) = +\frac{q}{\hbar c}\; A^{\mu}(\vec{R}\,)
\end{equation}
and the Berry phase for the circuit around the flux tube is simply the
Aharonov-Bohm phase
\begin{equation}
\gamma_{\mathrm{Berry}} = \oint dR^{\mu}\; \mathcal{A}^{\mu} = 2\pi
\frac{\Phi}{\Phi_{0}}
\end{equation}
where $\Phi$ is the flux in the solenoid and $\Phi_{0} \equiv hc/q$ is the flux
quantum.
As a second example \cite{Berry} let us consider a quantum spin with Hamiltonian
\begin{equation}
H = -\vec{\Delta}(t) \cdot \vec{S}.
\label{eq:berry17}
\end{equation}
The gap to the first excited state is $\hbar |\vec{\Delta}|$ and so the circuit
in parameter space must avoid the origin $\vec{\Delta} = \vec{0}$ where the
spectrum has a degeneracy. Clearly the adiabatic ground state has
\begin{equation}
\left\langle\Psi_{\vec{\Delta}}^{(0)} \left|\vec{S}\right|
\Psi_{\vec{\Delta}}^{(0)}\right\rangle = \hbar S
\frac{\vec{\Delta}}{|\vec{\Delta}|}.
\end{equation}
If the orientation of $\vec{\Delta}$ is defined by polar angle $\theta$ and
azimuthal angle $\varphi$, the same must be true for $\langle\vec{S}\rangle$. An
appropriate set of states obeying this for the case $S = \frac{1}{2}$ is
\begin{equation}
|\psi_{\theta,\varphi}\rangle = \left(\begin{array}{c}
\cos{\frac{\theta}{2}}\\
\sin{\frac{\theta}{2}}\; e^{i\varphi}\end{array}\right)
\end{equation}
since these obey
\begin{equation}
\left\langle\psi_{\theta,\varphi} \left|S^{z}\right|
\psi_{\theta,\varphi}\right\rangle = \hbar S \left(\cos^{2}{\frac{\theta}{2}} -
\sin^{2}{\frac{\theta}{2}}\right) = \hbar S \cos{\theta}
\end{equation}
and
\begin{equation}
\left\langle\psi_{\theta,\varphi} \left|S^{x} + iS^{y}\right|
\psi_{\theta,\varphi}\right\rangle = \left\langle\psi_{\theta,\varphi}
\left|S^{+}\right| \psi_{\theta,\varphi}\right\rangle = \hbar S \sin{\theta}\;
e^{i\varphi}.
\end{equation}
Consider the Berry's phase for the case where $\vec{\Delta}$ rotates slowly
about the $z$ axis at constant $\theta$
\begin{eqnarray}
\gamma_{\mathrm{Berry}} &=& i \int_{0}^{2\pi} d\varphi\;
\left\langle\psi_{\theta,\varphi} \left|\frac{\partial}{\partial\varphi}\right.
\psi_{\theta,\varphi}\right\rangle\nonumber\\
&=& i \int_{0}^{2\pi} d\varphi\; \left(\cos{\frac{\theta}{2}}\, ,\;
\sin{\frac{\theta}{2}}\; e^{-i\varphi}\right)\; \left(\begin{array}{c}
0\\
i \sin{\frac{\theta}{2}}\; e^{i\varphi}\end{array}\right)\nonumber\\
&=& -S \int_{0}^{2\pi} d\varphi\; (1 - \cos{\theta})\nonumber\\
&=& -S \int_{0}^{2\pi} d\varphi\; \int_{\cos{\theta}}^{1} d\cos{\theta'} =
-S\Omega
\label{eq:berry22}
\end{eqnarray}
where $\Omega$ is the solid angle subtended by the path as viewed from the
origin of the parameter space. This is precisely the Aharonov-Bohm phase one
expects for a charge $-S$ particle traveling on the surface of a unit sphere
surrounding a magnetic monopole. It turns out that it is the degeneracy in the
spectrum at the origin which produces the monopole \cite{Berry}.
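This result is easy to check numerically. A gauge-invariant discretization of the Berry phase is the loop product $\gamma = -\sum_{k}\arg\langle\psi_{k}|\psi_{k+1}\rangle$ over closely spaced states around the circuit. The sketch below (an illustrative numerical check, not part of the derivation; the step count is an arbitrary choice) confirms $\gamma_{\mathrm{Berry}} = -S\Omega$ for $S = \frac{1}{2}$:

```python
import numpy as np

def spin_half_state(theta, phi):
    # |psi_{theta,phi}> = (cos(theta/2), sin(theta/2) e^{i phi})
    return np.array([np.cos(theta / 2.0),
                     np.sin(theta / 2.0) * np.exp(1j * phi)])

def berry_phase(theta, steps=2000):
    # gauge-invariant discretization: gamma = -sum_k arg <psi_k|psi_{k+1}>
    phis = np.linspace(0.0, 2.0 * np.pi, steps + 1)
    total = 0.0
    for k in range(steps):
        a = spin_half_state(theta, phis[k])
        b = spin_half_state(theta, phis[k + 1])
        total += np.angle(np.vdot(a, b))
    return -total

S = 0.5
theta = np.pi / 3.0
omega = 2.0 * np.pi * (1.0 - np.cos(theta))  # solid angle of the polar cap
print(berry_phase(theta), -S * omega)        # the two values agree
```

For $\theta = \pi/3$ the cap subtends $\Omega = \pi$, so the accumulated phase is $-\pi/2$, matching eq.~(\ref{eq:berry22}).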
Notice that there is a singularity in the connection at the `south pole' $\theta
= \pi$. This can be viewed as the Dirac string (solenoid containing one quantum
of flux) that is attached to the monopole. If we had chosen the basis
\begin{equation}
e^{-i\varphi}\; |\psi_{\theta,\varphi}\rangle
\end{equation}
the singularity would have been at the north pole. The reader is directed to
Berry's original paper \cite{Berry} for further details.
In order to correctly reproduce the Berry phase in a path integral for the spin
whose Hamiltonian is given by eq.~(\ref{eq:berry17}), the Lagrangian must be
\begin{equation}
\mathcal{L} = \hbar S\; \left\{-\dot{m}^{\mu}\mathcal{A}^{\mu} +
\Delta^{\mu}m^{\mu} + \lambda(m^{\mu}m^{\mu} - 1)\right\}
\end{equation}
where $\vec{m}$ is the spin coordinate on a unit sphere, $\lambda$ enforces the
length constraint, and
\begin{equation}
\vec{\nabla}_{m} \times \vec{\mathcal{A}} = \vec{m}
\end{equation}
is the monopole vector potential. As discussed in the text in
section~\ref{sec:qhf}, this Lagrangian correctly reproduces the spin precession
equations of motion.
\section{Charged Excitations}
Except for the fact that they are gapped, the neutral magnetophonon excitations
are closely analogous to the phonon excitations in superfluid ${}^{4}\hbox{He}$.
We further pursue this analogy with a search for the analog of vortices in
superfluid films. A vortex is a topological defect which is the quantum version
of the familiar whirlpool. A reasonably good variational wave function for a
vortex in a two-dimensional film of ${}^{4}\hbox{He}$ is
\begin{equation}
\psi_{\vec{R}}^{\pm} = \left\{\prod_{j=1}^{N} f\left(|\vec{r}_{j} -
\vec{R}|\right)\; e^{\pm i\theta(\vec{r}_{j}-\vec{R})}\right\}\Phi_{0}.
\end{equation}
Here $\theta$ is the azimuthal angle that the particle's position makes relative
to $\vec{R}$, the location of the vortex center. The function $f$ vanishes as
$\vec{r}$ approaches $\vec{R}$ and goes to unity far away. The choice of sign in
the phase determines whether the vortex is right or left handed.
The interpretation of this wave function is the following. The vortex is a
topological defect because if any particle is dragged around a closed loop
surrounding $\vec{R}$, the phase of the wave function winds by $\pm 2\pi$. This
phase gradient means that current is circulating around the core. Consider a
large circle of radius $\xi$ centered on $\vec{R}$. The phase change of $2\pi$
around the circle occurs in a distance $2\pi\xi$ so the local gradient seen by
\textit{every} particle is $\hat{\theta}/\xi$. Recalling eq.~(\ref{eq:12125}) we
see that locally the center of mass momentum has been boosted by
$\pm\frac{\hbar}{\xi}\; \hat{\theta}$ so that the current density of the
whirlpool falls off inversely with distance from the core.\footnote{This slow
algebraic decay of the current density means that the total kinetic energy of a
single vortex diverges logarithmically with the size of the system. This in turn
leads to the Kosterlitz-Thouless phase transition in which pairs of vortices
bind together below a critical temperature. As we will see below there is no
corresponding finite temperature transition in a quantum Hall system.} Near the
core $f$ falls to zero because of the `centrifugal barrier' associated with this
circulation. In a more accurate variational wave function the core would be
treated slightly differently but the asymptotic large distance behavior would be
unchanged.
What is the analog of all this for the lowest Landau level? For $\psi^{+}$ we
see that every particle has its angular momentum boosted by one unit. In the
lowest Landau level analyticity (in the symmetric gauge) requires us to replace
$e^{i\theta}$ by $z = x + iy$. Thus we are led to the Laughlin `quasi-hole'
wave function
\begin{equation}
\psi_{Z}^{+}[z] = \prod_{j=1}^{N} (z_{j} - Z)\; \psi_{m}[z] \label{eq:12145}
\end{equation}
where $Z$ is a complex number denoting the position of the vortex and $\psi_{m}$
is the Laughlin wave function at filling factor $\nu = 1/m$. The corresponding
antivortex (`quasi-electron' state) involves $z_{j}^{*}$ suitably projected (as
discussed in App.~\ref{app:projection}):
\begin{equation}
\psi_{Z}^{-}[z] = \prod_{j=1}^{N} \left(2\frac{\partial}{\partial z_{j}} -
Z^{*}\right)\; \psi_{m}[z] \label{eq:12146}
\end{equation}
where as usual the derivatives act only on the polynomial part of $\psi_{m}$.
All these derivatives make $\psi^{-}$ somewhat difficult to work with. We will
therefore concentrate on the quasi-hole state $\psi^{+}$. The origin of the
names quasi-hole and quasi-electron will become clear shortly.
Unlike the case of a superfluid film, the presence of the vector potential
allows these vortices to cost only a finite energy to produce and hence the
electrical dissipation is always finite at any non-zero temperature. There is no
finite temperature transition into a superfluid state as in the
Kosterlitz-Thouless transition. From a field theoretic point of view, this is
closely analogous to the Higgs mechanism \cite{compositeboson}.
Just as in our study of the Laughlin wave function, it is very useful to see how
the plasma analogy works for the quasi-hole state
\begin{equation}
|\psi_{Z}^{+}|^{2} = e^{-\beta U_{\mathrm{class}}}\; e^{-\beta V}
\end{equation}
where $U_{\mathrm{class}}$ is given by eq.~(\ref{eq:Uclass}), $\beta = 2/m$ as
before and
\begin{equation}
V \equiv m \sum_{j=1}^{N} \left(-\ln{|z_{j} - Z|}\right).
\end{equation}
Thus we have the classical statistical mechanics of a one-component plasma of
(fake) charge $m$ objects seeing a neutralizing jellium background plus a new
potential energy $V$ representing the interaction of these objects with an
`impurity' located at $Z$ and having unit charge.
Recall that the chief desire of the plasma is to maintain charge neutrality.
Hence the plasma particles will be repelled from $Z$. Because the plasma
particles have fake charge $m$, the screening cloud will have to have a net
reduction of $1/m$ particles to screen the impurity. But this means that the
quasi-hole has fractional fermion number! The (true) physical charge of the
object is a fraction of the elementary charge
\begin{equation}
q^{*} = \frac{e}{m}.
\end{equation}
This is very strange! How can we possibly have an elementary excitation carrying
fractional charge in a system made up entirely of electrons? To understand this
let us consider an example of another quantum system that seems to have
fractional charge, but in reality doesn't. Imagine three protons arranged in an
equilateral triangle as shown in fig.~(\ref{fig:1201}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{fraccharge.xfig.eps}}
\caption[]{Illustration of an electron tunneling among the 1S orbitals of three
protons. The tunneling is exponentially slow for large separations which leads
to only exponentially small lifting of what would otherwise be a three-fold
degenerate ground state.}
\label{fig:1201}
\end{figure}
Let there be one electron in the system. In the spirit of the tight-binding
model we consider only the 1S orbital on each of the three `lattice sites'. The
Bloch states are
\begin{equation}
\psi_{k} = \frac{1}{\sqrt{3}} \sum_{j=1}^{3} e^{ikj}\; |j\rangle
\end{equation}
where $|j\rangle$ is the 1S orbital for the $j$th atom. The equilateral triangle
is like a linear system of length 3 with periodic boundary conditions. Hence the
allowed values of the wavevector are $\left\{ k_{\alpha} = \frac{2\pi}{3}\alpha;\;\;
\alpha = -1,0,+1\right\}$. The energy eigenvalues are
\begin{equation}
\epsilon_{k_{\alpha}} = -E_{\mathrm{1S}} - 2J\; \cos{k_{\alpha}}
\end{equation}
where $E_{\mathrm{1S}}$ is the isolated atom energy and $-J$ is the hopping
matrix element related to the orbital overlap and is exponentially small for
large separations of the atoms.
The projection operator that measures whether or not the particle is on site $n$
is
\begin{equation}
P_{n} \equiv |n\rangle\; \langle n|.
\end{equation}
Its expectation value in any of the three eigenstates is
\begin{equation}
\left\langle\psi_{k_{\alpha}}|P_{n}|\psi_{k_{\alpha}}\right\rangle = \frac{1}{3}.
\end{equation}
This equation simply reflects the fact that as the particle tunnels from site to
site it is equally likely to be found on any site. Hence it will, on average, be
found on a particular site $n$ only 1/3 of the time. The average electron number
per site is thus 1/3. This however is a trivial example because the value of the
measured charge is always an integer. Two-thirds of the time we measure zero and
one third of the time we measure unity. This means that the charge
\textit{fluctuates}. One measure of the fluctuations is
\begin{equation}
\sqrt{\langle P_{n}^{2}\rangle - \langle P_{n}\rangle^{2}} = \sqrt{\frac{1}{3} -
\frac{1}{9}} = \frac{\sqrt{2}}{3},
\end{equation}
which shows that the fluctuations are larger than the mean value. This result is
most easily obtained by noting $P_{n}^{2} = P_{n}$.
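These expectation values can be verified directly. The sketch below (illustrative numerics, not from the notes) builds the three Bloch states and confirms $\langle P_{n}\rangle = 1/3$ and the fluctuation $\sqrt{2}/3$ in every eigenstate:

```python
import numpy as np

n_sites = 3

def bloch_state(alpha):
    # psi_k = (1/sqrt(3)) sum_j e^{ikj} |j>,  k_alpha = 2*pi*alpha/3
    k = 2.0 * np.pi * alpha / n_sites
    j = np.arange(1, n_sites + 1)
    return np.exp(1j * k * j) / np.sqrt(n_sites)

for alpha in (-1, 0, +1):
    psi = bloch_state(alpha)
    p = abs(psi[0]) ** 2         # <P_n> for one chosen site
    fluct = np.sqrt(p - p ** 2)  # sqrt(<P_n^2> - <P_n>^2), using P_n^2 = P_n
    print(alpha, p, fluct)       # 1/3 and sqrt(2)/3 for every alpha
```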
A characteristic feature of this `imposter' fractional charge $\frac{e}{m}$ that
guarantees that it fluctuates is the existence in the spectrum of the
Hamiltonian of a set of $m$ nearly degenerate states. (In our toy example here,
$m=3$.) The characteristic time scale for the charge fluctuations is $\tau \sim
\hbar/\Delta\epsilon$ where $\Delta\epsilon$ is the energy splitting of the
quasi-degenerate manifold of states. In our tight-binding example $\tau \sim
\hbar/J$ is the characteristic time it takes an electron to tunnel from the 1S
orbital on one site to the next. As the separation between the sites increases
this tunneling time grows exponentially large and the charge fluctuations become
exponentially slow and thus easy to detect.
In a certain precise sense, the fractional charge of the Laughlin quasiparticles
behaves very differently from this. An electron added at low energies to a $\nu
= 1/3$ quantum Hall fluid breaks up into three charge 1/3 Laughlin
quasiparticles. These quasiparticles can move arbitrarily far apart from each
other\footnote{Recall that unlike the case of vortices in superfluids, these
objects are unconfined.} and yet no quasi-degenerate manifold of states appears.
The excitation gap to the first excited state remains finite. The only
degeneracy is that associated with the positions of the quasiparticles. If we
imagine that there are three impurity potentials that pin down the positions of
the three quasiparticles, then the state of the system is \textit{uniquely}
specified. Because there is no quasidegeneracy, we do not have to specify any
more information other than the positions of the quasiparticles. Hence in a deep
sense, they are true \textit{elementary particles} whose fractional charge is a
sharp quantum observable.
Of course, since the system is made up only of electrons, if we capture the
charges in some region in a box, we will always get an integer number of
electrons inside the box. However in order to close the box we have to locally
destroy the Laughlin state. This will cost (at a minimum) the excitation gap.
This may not seem important since the gap is small --- only a few Kelvin or so.
But imagine that the gap were an MeV or a GeV. Then we would have to build a
particle accelerator to `close the box' and probe the fluctuations in the
charge. These fluctuations would be analogous to the ones seen in quantum
electrodynamics at energies above $2m_{e}c^{2}$ where electron-positron pairs
are produced during the measurement of charge form factors by means of a
scattering experiment.
Put another way, the charge of the Laughlin quasiparticle fluctuates but only at
high frequencies $\sim \Delta/\hbar$. If this frequency (which is $\sim
50\hbox{GHz}$) is higher than the frequency response limit of our voltage
probes, we will see no charge fluctuations. We can formalize this by writing a
modified projection operator \cite{KivelsonGoldhaber} for the charge on some
site $n$ by
\begin{equation}
P_{n}^{(\Omega)} \equiv P^{(\Omega)}\; P_{n}\; P^{(\Omega)}
\end{equation}
where $P_{n} = |n\rangle\; \langle n|$ as before and
\begin{equation}
P^{(\Omega)} \equiv \theta(\Omega - H + E_{0})
\end{equation}
is the operator that projects onto the subset of eigenstates with excitation
energies less than $\Omega$. $P_{n}^{(\Omega)}$ thus represents a measurement
with a high-frequency cutoff built in to represent the finite bandwidth of the
detector. Returning to our tight-binding example, consider the situation where
$J$ is large enough that the excitation gap $\Delta = 2J\left(1 -
\cos{\frac{2\pi}{3}}\right)$ exceeds the cutoff $\Omega$. Then
\begin{eqnarray}
P^{(\Omega)} &=& \sum_{\alpha=-1}^{+1} |\psi_{k_{\alpha}}\rangle\;
\theta(\Omega - \epsilon_{k_{\alpha}} + \epsilon_{k_{0}})\;
\langle\psi_{k_{\alpha}}|\nonumber\\
&=& |\psi_{k_{0}}\rangle\; \langle\psi_{k_{0}}|
\end{eqnarray}
is simply a projector on the ground state. In this case
\begin{equation}
P_{n}^{(\Omega)} = |\psi_{k_{0}}\rangle\; \frac{1}{3}\; \langle\psi_{k_{0}}|
\end{equation}
and
\begin{equation}
\left\langle\psi_{k_{0}}\left|[P_{n}^{(\Omega)}]^{2}\right|\psi_{k_{0}}\right\rangle
- \left\langle\psi_{k_{0}}|P_{n}^{(\Omega)}|\psi_{k_{0}}\right\rangle^{2} = 0.
\end{equation}
The charge fluctuations in the ground state are then zero (as measured by the
finite bandwidth detector).
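Continuing the same toy model (again an illustrative sketch), one can build the bandwidth-limited operator $P_{n}^{(\Omega)}$ explicitly and see that its ground-state fluctuations vanish:

```python
import numpy as np

n_sites = 3
j = np.arange(1, n_sites + 1)
psi0 = np.exp(1j * 0.0 * j) / np.sqrt(n_sites)  # k = 0 ground state

P0 = np.outer(psi0, psi0.conj())    # projector onto the ground state
Pn = np.zeros((n_sites, n_sites))   # site projector |n><n| for one site
Pn[0, 0] = 1.0
PnO = P0 @ Pn @ P0                  # finite-bandwidth measurement operator

mean = np.real(psi0.conj() @ PnO @ psi0)                 # = 1/3
var = np.real(psi0.conj() @ PnO @ PnO @ psi0) - mean**2  # = 0
print(mean, var)
```

The mean charge is still $1/3$, but the detector with a built-in frequency cutoff sees no fluctuations at all.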
The argument for the Laughlin quasiparticles is similar. We again emphasize that
one can not think of a single charge tunneling among three sites because the
excitation gap remains finite no matter how far apart the quasiparticle sites
are located. This is possible only because it is a correlated many-particle
system.
To gain a better understanding of fractional charge it is useful to compare this
situation to that in high energy physics. In that field of study one knows the
physics at low energies --- these are just the phenomena of our everyday world.
The goal is to study the high energy (short length scale) limit to see where
this low energy physics comes from. What force laws lead to our world? Probing
the proton with high energy electrons we can temporarily break it up into three
fractionally charged quarks, for example.
Condensed matter physics in a sense does the reverse. We know the phenomena at
`high' energies (i.e. room temperature) and we would like to see how the known
dynamics (Coulomb's law and non-relativistic quantum mechanics) leads to unknown
and surprising collective effects at low temperatures and long length scales.
The analog of the particle accelerator is the dilution refrigerator.
To further understand Laughlin quasiparticles consider the point of view of
`flatland' physicists living in the cold, two-dimensional world of a $\nu = 1/3$
quantum Hall sample. As far as the flatlanders are concerned the `vacuum' (the
Laughlin liquid) is completely inert and featureless. They discover however that
the universe is not completely empty. There are a few elementary particles
around, all having the same charge $q$. The flatland equivalent of Benjamin
Franklin chooses a unit of charge which not only makes $q$ negative but gives it
the fractional value $-1/3$. For some reason the Flatlanders go along with this.
Flatland cosmologists theorize that these objects are `cosmic strings',
topological defects left over from the `big cool down' that followed the
creation of the universe. Flatland experimentalists call for the creation of a
national accelerator facility which will reach the unprecedented energy scale of
10 Kelvin. With great effort and expense this energy scale is reached and the
accelerator is used to smash together three charged particles. To the
astonishment of the entire world a new short-lived particle is temporarily
created with the bizarre property of having integer charge!
There is another way to see that the Laughlin quasiparticles carry fractional
charge which is useful to understand because it shows the deep connection
between the sharp fractional charge and the sharp quantization of the Hall
conductivity. Imagine piercing the sample with an infinitely thin magnetic
solenoid as shown in fig.~(\ref{fig:solenoid})
\begin{figure}
\centerline{\epsfysize=10cm
\epsffile{laughlinqp.xfig.eps}}
\caption[]{Construction of a Laughlin quasiparticle by adiabatically threading
flux $\Phi(t)$ through a point in the sample. Faraday induction gives an
azimuthal electric field $E(t)$ which in turn produces a radial current $J(t)$.
For each quantum of flux added, charge $\nu e$ flows into (or out of) the region
due to the quantized Hall conductivity $\nu e^{2}/h$. A flux tube containing an
integer number of flux quanta is invisible to the particles (since the Aharonov-Bohm
phase shift is an integer multiple of $2\pi$) and so can be removed by a
singular gauge transformation.}
\label{fig:solenoid}
\end{figure}
and slowly increasing the magnetic flux $\Phi$ from 0 to $\Phi_{0} =
\frac{hc}{e}$ the quantum of flux. Because of the existence of a finite
excitation gap $\Delta$ the process is adiabatic and reversible if performed
slowly on a time scale long compared to $\hbar/\Delta$.
Faraday's law tells us that the changing flux induces an electric field obeying
\begin{equation}
\oint_{\Gamma} d\vec{r} \cdot \vec{E} = -\frac{1}{c}\;
\frac{\partial\Phi}{\partial t}
\end{equation}
where $\Gamma$ is any contour surrounding the flux tube. Because the electric
field contains only Fourier components at frequencies $\omega$ obeying
$\hbar\omega < \Delta$, there is no dissipation and $\sigma_{xx} = \sigma_{yy} =
\rho_{xx} = \rho_{yy} = 0$. The electric field induces a current density obeying
\begin{equation}
\vec{E} = \rho_{xy}\; \vec{J} \times \hat{z}
\end{equation}
so that
\begin{equation}
\rho_{xy} \oint_{\Gamma} \vec{J} \cdot (\hat{z} \times d\vec{r}) = -
\frac{1}{c}\; \frac{d\Phi}{dt}.
\end{equation}
The integral on the LHS represents the total current flowing into the region
enclosed by the contour. Thus the charge inside this region obeys
\begin{equation}
\rho_{xy}\; \frac{dQ}{dt} = -\frac{1}{c}\; \frac{d\Phi}{dt}.
\end{equation}
After one quantum of flux has been added the final charge is
\begin{equation}
Q = \frac{1}{c}\; \sigma_{xy} \Phi_{0} = \frac{h}{e}\; \sigma_{xy}.
\label{eq:1124165}
\end{equation}
Thus on the quantized Hall plateau at filling factor $\nu$ where $\sigma_{xy} =
\nu\; \frac{e^{2}}{h}$ we have the result
\begin{equation}
Q = \nu e.
\end{equation}
Reversing the sign of the added flux would reverse the sign of the charge.
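In SI units the same bookkeeping reads $Q = \sigma_{xy}\Phi_{0}$ with $\Phi_{0} = h/e$ (the factors of $c$ drop out). The quick numerical check below (purely illustrative, using the exact 2019 SI constants) confirms that one flux quantum transports exactly $\nu e$:

```python
# SI units: Phi_0 = h/e, Q = sigma_xy * Phi_0
e = 1.602176634e-19   # C   (exact, 2019 SI)
h = 6.62607015e-34    # J s (exact, 2019 SI)

for nu in (1.0 / 3.0, 1.0 / 5.0, 2.0 / 5.0):
    sigma_xy = nu * e ** 2 / h   # quantized Hall conductivity
    Q = sigma_xy * (h / e)       # charge pulled in per added flux quantum
    print(nu, Q / e)             # Q/e = nu
```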
The final step in the argument is to note that an infinitesimal tube containing
a quantum of flux is invisible to the particles. This is because the
Aharonov-Bohm phase factor for traveling around the flux tube is unity.
\begin{equation}
\exp{\left\{ i \frac{e}{\hbar c} \oint_{\Gamma} \delta\vec{A} \cdot
d\vec{r}\right\}} = e^{\pm 2\pi i} = 1.
\end{equation}
Here $\delta\vec{A}$ is the additional vector potential due to the solenoid.
Assuming the flux tube is located at the origin and making the gauge choice
\begin{equation}
\delta\vec{A} = \Phi_{0}\; \frac{\hat{\theta}}{2\pi r},
\end{equation}
one can see by direct substitution into the Schr\"{o}dinger equation that the
only effect of the quantized flux tube is to change the phase of the wave
function by
\begin{equation}
\psi \rightarrow \psi \prod_{j} \frac{z_{j}}{|z_{j}|} = \psi \prod_{j}
e^{i\theta_{j}}.
\end{equation}
The removal of a quantized flux tube is thus a `singular gauge change' which has
no physical effect.
Let us reiterate. Adiabatic insertion of a flux quantum changes the state of the
system by pulling in (or pushing out) a (fractionally) quantized amount of
charge. Once the flux tube contains a quantum of flux it effectively becomes
invisible to the electrons and can be removed by means of a singular gauge
transformation.
Because the excitation gap is preserved during the adiabatic addition of the
flux, the state of the system is fully specified by the position of the
resulting quasiparticle. As discussed before there are no low-lying
quasi-degenerate states. This version of the argument highlights the essential
importance of the fact that $\sigma_{xx} = 0$ and $\sigma_{xy}$ is quantized.
The existence of the fractionally quantized Hall transport coefficients
guarantees the existence of fractionally charged elementary excitations.
These fractionally charged objects have been observed directly by using an
ultrasensitive electrometer made from a quantum dot \cite{Vgoldman} and by the
reduced shot noise which they produce when they carry current \cite{shotnoise}.
Because the Laughlin quasiparticles are discrete objects they cost a non-zero
(but finite) energy to produce. Since they are charged they can be thermally
excited only in neutral pairs. The charge excitation gap is therefore
\begin{equation}
\Delta_{c} = \Delta_{+} + \Delta_{-}
\end{equation}
where $\Delta_{\pm}$ is the vortex/antivortex (quasielectron/quasihole)
excitation energy. In the presence of a transport current these thermally
excited charges can move under the influence of the Hall electric field and
dissipate energy. The resulting resistivity has the Arrhenius form
\begin{equation}
\rho_{xx} \sim \gamma \frac{h}{e^{2}}\; e^{-\beta\Delta_{c}/2}
\end{equation}
where $\gamma$ is a dimensionless constant of order unity. Note that the law of
mass action tells us that the activation energy is $\Delta_{c}/2$ not
$\Delta_{c}$ since the charges are excited in pairs. There is a close analogy
between the dissipation described here and the flux flow resistance caused by
vortices in a superconducting film.
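For orientation, the sketch below (with an assumed charge gap of a few kelvin; the numbers are illustrative, not measured values) shows how strongly the activated $\rho_{xx}$ is suppressed on cooling, with activation energy $\Delta_{c}/2$:

```python
import numpy as np

Delta_c = 5.0  # assumed charge gap in kelvin (working in k_B = 1 units)

def rho_ratio(T1, T2):
    # rho_xx(T1)/rho_xx(T2) for rho_xx ~ exp(-Delta_c/(2 T)); the activation
    # energy is Delta_c/2 because the charges are excited in +/- pairs
    return np.exp(-Delta_c / (2.0 * T1) + Delta_c / (2.0 * T2))

print(rho_ratio(1.0, 2.0))  # cooling from 2 K to 1 K suppresses rho_xx
```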
Theoretical estimates of $\Delta_{c}$ are in good agreement with experimental
values determined from transport measurements \cite{gapmeasures}. Typical values
of $\Delta_{c}$ are only a few percent of $e^{2}/\epsilon\ell$ and hence no
larger than a few Kelvin. In a superfluid time-reversal symmetry guarantees that
vortices and antivortices have equal energies. The lack of time reversal
symmetry here means that $\Delta_{+}$ and $\Delta_{-}$ can be quite different.
Consider for example the hard-core model for which the Laughlin wave function
$\psi_{m}$ is an exact zero energy ground state as shown in
eq.~(\ref{eq:12103}). Equation~(\ref{eq:12145}) shows that the quasihole state
contains $\psi_{m}$ as a factor and hence is also an exact zero energy
eigenstate for the hard-core interaction. Thus the quasihole costs zero energy.
On the other hand eq.~(\ref{eq:12146}) tells us that the derivatives reduce the
degree of homogeneity of the Laughlin polynomial and therefore the energy of the
quasielectron \textit{must} be non-zero in the hard-core model. At filling factor
$\nu = 1/m$ this asymmetry has no particular significance since the
quasiparticles must be excited in pairs.
Consider now what happens when the magnetic field is increased slightly or the
particle number is decreased slightly so that the filling factor is slightly
smaller than $1/m$. The lowest energy way to accommodate this is to inject $m$
quasiholes into the Laughlin state for each electron that is removed (or for
each $m \Phi_{0}$ of flux that is added). The system energy (ignoring disorder
and interactions in the dilute gas of quasiparticles) is
\begin{equation}
E_{+} = E_{m} - \delta N\; m\Delta_{+}
\end{equation}
where $E_{m}$ is the Laughlin ground state energy and $-\delta N$ is the number
of added holes. Conversely for filling factors slightly greater than $1/m$ the
energy is (with $+\delta N$ being the number of added electrons)
\begin{equation}
E_{-} = E_{m} + \delta N\; m\Delta_{-}.
\end{equation}
This is illustrated in fig.~(\ref{fig:energyslope}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{chempot.xfig.eps}}
\caption[]{Energy cost for inserting $\delta N$ electrons into the Laughlin
state near filling factor $\nu=1/m$. The slope of the line is the chemical
potential. Its discontinuity at $\nu=1/m$ measures the charge excitation gap.}
\label{fig:energyslope}
\end{figure}
The slope of the lines in the figure determines the chemical potential
\begin{equation}
\mu_{\pm} = \frac{\partial E_{\pm}}{\partial\delta N} = \mp m\Delta_{\pm}.
\end{equation}
The chemical potential suffers a jump discontinuity of $m(\Delta_{+} +
\Delta_{-}) = m\Delta_{c}$ just at filling factor $\nu = 1/m$. This jump in the
chemical potential is the signature of the charge excitation gap just as it is
in a semiconductor or insulator. Notice that this form of the energy is very
reminiscent of the energy of a type-II superconductor as a function of the
applied magnetic field (which induces vortices and therefore has an energy cost
$\Delta E \sim |B|$).
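The sign structure of $\mu_{\pm}$ is easy to mix up, so here is a minimal numerical restatement (with assumed quasiparticle energies chosen only for illustration) showing that the discontinuity is $m\Delta_{c}$:

```python
m = 3
Delta_plus, Delta_minus = 0.02, 0.05  # assumed quasihole/quasielectron energies

# E_+ = E_m - dN m Delta_+ holds for dN < 0; E_- = E_m + dN m Delta_- for dN > 0
mu_plus = -m * Delta_plus    # slope of E vs dN just below nu = 1/m
mu_minus = +m * Delta_minus  # slope just above nu = 1/m
jump = mu_minus - mu_plus
print(jump, m * (Delta_plus + Delta_minus))  # both equal m * Delta_c
```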
Recall that in order to have a quantized Hall plateau of finite width it is
necessary to have disorder present. For the integer case we found that disorder
localizes the excess electrons so that the transport coefficients do not change
with the filling factor. Here it is the fractionally-charged quasiparticles that
are localized by the disorder.\footnote{Note again the essential importance of
the fact that the objects are `elementary particles'. That is, there are no
residual degeneracies once the positions are pinned down.} Just as in the
integer case the disorder may fill in the gap in the density of states but the
DC value of $\sigma_{xx}$ can remain zero because of the localization. Thus the
fractional plateaus can have finite width.
If the density of quasiparticles becomes too high they may delocalize and
condense into a correlated Laughlin state of their own. This gives rise to a
hierarchical family of Hall plateaus at rational fractional filling factors $\nu
= p/q$ (generically with $q$ odd due to the Pauli principle). There are several
different but entirely equivalent ways of constructing and viewing this
hierarchy which we will not delve into here \cite{SMGBOOK,TAPASHbook,DasSarmabook}.
\section{Classical and Semi-Classical Dynamics}
\subsection{Classical Approximation}
The classical equations of motion for an electron of charge $-e$ moving in two
dimensions under the influence of the Lorentz force $\frac{-e}{c}\vec{v}\times
\vec{B}$ caused by a magnetic field $\vec{B} = B\hat{z}$ are
\begin{eqnarray}
m \ddot{x} &=& -\frac{eB}{c} \dot{y} \label{eq:9812-02}\\
m \ddot{y} &=& +\frac{eB}{c} \dot{x}.
\label{eq:lorentz}
\end{eqnarray}
The general solution of these equations corresponds to motion in a circle of
arbitrary radius $R$
\begin{equation}
\vec{r} = R\left(\cos(\omega_{c} t+\delta),\sin(\omega_{c} t+\delta)\right).
\end{equation}
Here $\delta$ is an arbitrary phase for the motion and
\begin{equation}
\omega_{c}\equiv \frac{eB}{mc}
\end{equation}
is known as the classical cyclotron frequency. Notice that the period of the
orbit is independent of the radius and that the tangential speed
\begin{equation}
v = R\omega_{c}
\end{equation}
controls the radius. A fast particle travels in a large circle but returns to
the starting point in the same length of time as a slow particle which
(necessarily) travels in a small circle. The motion is thus \textit{isochronous}
much like that of a harmonic oscillator whose period is independent of the
amplitude of the motion. This apparent analogy is not an accident as we shall
see when we study the Hamiltonian (which we will need for the full quantum
solution).
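The isochronous character of the motion is easy to see numerically: integrating the equations of motion above for one period $2\pi/\omega_{c}$ (a minimal midpoint-rule sketch; the step count and the two speeds are arbitrary choices), both a slow and a fast particle return to their starting point at the same time:

```python
import numpy as np

def integrate_orbit(v0, omega_c=1.0, steps=20000):
    # v_x' = -omega_c v_y,  v_y' = +omega_c v_x  (midpoint / RK2 integration)
    dt = (2.0 * np.pi / omega_c) / steps  # integrate for exactly one period
    x = np.zeros(2)
    v = np.array([v0, 0.0])
    rot = lambda u: omega_c * np.array([-u[1], u[0]])
    for _ in range(steps):
        vm = v + 0.5 * dt * rot(v)  # midpoint velocity
        x = x + dt * vm
        v = v + dt * rot(vm)
    return x, v

for v0 in (0.5, 2.0):  # slow particle, small circle; fast particle, large circle
    x, v = integrate_orbit(v0)
    print(v0, np.linalg.norm(x))  # both are back at the origin after T = 2 pi
```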
Because of some subtleties involving distinctions between canonical and
mechanical momentum in the presence of a magnetic field, it is worth reviewing
the formal Lagrangian and Hamiltonian approaches to this problem. The above
classical equations of motion follow from the Lagrangian
\begin{equation}
\mathcal{L} = \frac{1}{2}m\dot{x}^{\mu}\dot{x}^{\mu} - \frac{e}{c}\dot{x}^{\mu}
A^{\mu},
\label{eq:classicalLagrangian}
\end{equation}
where $\mu=1,2$ refers to $x$ and $y$ respectively and $\vec{A}$ is the vector
potential evaluated at the position of the particle. (We use the Einstein
summation convention throughout this discussion.) Using
\begin{equation}
\frac{\delta \mathcal{L}}{\delta x^{\nu}} = -\frac{e}{c} \dot{x}^{\mu}\,
\partial_{\nu} A^{\mu}
\end{equation}
and
\begin{equation}
\frac{\delta \mathcal{L}}{\delta \dot{x}^{\nu}} = m\dot{x}^{\nu} -\frac{e}{c}
A^{\nu}
\end{equation}
the Euler-Lagrange equation of motion becomes
\begin{equation}
m \ddot{x}^{\nu} = -\frac{e}{c}\left[\partial_{\nu} A^{\mu} - \partial_{\mu}
A^{\nu}\right]\dot{x}^{\mu}.
\label{eq:euler-lagrange}
\end{equation}
Using
\begin{eqnarray}
\vec{B} &=& \vec\nabla\times\vec{A}\\
B^{\alpha} &=& \epsilon^{\alpha\beta\gamma}\partial_{\beta} A^{\gamma}
\end{eqnarray}
shows that this is equivalent to eqs.~(\ref{eq:9812-02}--\ref{eq:lorentz}).
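The same algebra can be delegated to a computer algebra system. The sketch below (using \texttt{sympy} and the symmetric gauge $\vec{A} = \frac{B}{2}(-y,x)$, a gauge choice made only for this illustration) recovers the Lorentz-force equations from the Lagrangian:

```python
import sympy as sp

t = sp.symbols('t')
m, e, c, B = sp.symbols('m e c B', positive=True)
x = sp.Function('x')(t)
y = sp.Function('y')(t)

# Symmetric gauge A = (B/2)(-y, x), so that curl A = B zhat
Ax, Ay = -B * y / 2, B * x / 2
L = m * (x.diff(t)**2 + y.diff(t)**2) / 2 \
    - (e / c) * (x.diff(t) * Ax + y.diff(t) * Ay)

def euler_lagrange(q):
    # d/dt (dL/d qdot) - dL/dq
    return (sp.diff(L, q.diff(t)).diff(t) - sp.diff(L, q)).expand()

eq_x = euler_lagrange(x)  # m x'' + (e B / c) y'
eq_y = euler_lagrange(y)  # m y'' - (e B / c) x'
print(eq_x, eq_y)
```

Setting `eq_x` and `eq_y` to zero reproduces eqs.~(\ref{eq:9812-02}--\ref{eq:lorentz}), independent of the gauge chosen for $\vec{A}$.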
Once we have the Lagrangian we can deduce the canonical momentum
\begin{eqnarray}
p^{\mu} &\equiv& \frac{\delta \mathcal{L}}{\delta \dot{x}^{\mu}}\nonumber\\
&=& m\dot{x}^{\mu} -\frac{e}{c}A^{\mu},
\end{eqnarray}
and the Hamiltonian
\begin{eqnarray}
H[\vec{p},\vec{x}] &\equiv& \dot{x}^{\mu} p^{\mu} - \mathcal{L}(\dot{\vec{x}},
\vec{x})\nonumber\\
&=& \frac{1}{2m} \left(p^{\mu} + \frac{e}{c}A^{\mu}\right) \left(p^{\mu} +
\frac{e}{c}A^{\mu}\right).
\end{eqnarray}
(Recall that the Lagrangian is canonically a function of the positions and
velocities while the Hamiltonian is canonically a function of the positions and
momenta). The quantity
\begin{equation}
p_{\mathrm{mech}}^{\mu} \equiv p^{\mu} + \frac{e}{c}A^{\mu}
\end{equation}
is known as the \textit{mechanical} momentum. Hamilton's equations of motion
\begin{eqnarray}
\dot{x}^{\mu} &=& \frac{\partial H}{\partial p^{\mu}} =
\frac{1}{m}p_{\mathrm{mech}}^{\mu} \label{eq:9812-03}\\
\dot{p}^{\mu} &=& -\frac{\partial H}{\partial x^{\mu}} = -\frac{e}{mc}
\left(p^{\nu} + \frac{e}{c}A^{\nu}\right)\partial_{\mu} A^{\nu}
\label{eq:hamilton}
\end{eqnarray}
show that it is the mechanical momentum, not the canonical momentum, which is
equal to the usual expression related to the velocity
\begin{equation}
p_{\mathrm{mech}}^{\mu} = m \dot{x}^{\mu}.
\label{eq:mechanical}
\end{equation}
Using Hamilton's equations of motion we can recover Newton's law for the Lorentz
force given in eq.~(\ref{eq:euler-lagrange}) by simply taking a time derivative
of $\dot{x}^{\mu}$ in eq.~(\ref{eq:9812-03}) and then using
eq.~(\ref{eq:hamilton}).
The distinction between canonical and mechanical momentum can lead to confusion.
For example it is possible for the particle to have a finite velocity while
having zero (canonical) momentum! Furthermore the canonical momentum is
dependent (as we will see later) on the choice of gauge for the vector potential
and \textit{hence is not a physical observable}. The mechanical momentum, being
simply related to the velocity (and hence the current) is physically observable
and gauge invariant. The classical equations of motion only involve the curl of
the vector potential and so the particular gauge choice is not very important at
the classical level. We will therefore delay discussion of gauge choices until
we study the full quantum solution, where the issue is unavoidable.
\subsection{Semi-classical Approximation}
Recall that in the semi-classical approximation used in transport theory we
consider wave packets $\Psi_{\vec{R}(t),\vec{K}(t)}(\vec{r},t)$ made up of a
linear superposition of Bloch waves. These packets are large on the scale of the
de Broglie wavelength so that they have a well-defined central wave vector
$\vec{K}(t)$, but they are small on the scale of everything else (external
potentials, etc.) so that they simultaneously can be considered to have
well-defined mean position $\vec{R}(t)$. (Note that $\vec{K}$ and $\vec{R}$ are
\textit{parameters} labeling the wave packet not arguments.) We then argue (and
will discuss further below) that the solution of the Schr\"{o}dinger equation in
this semiclassical limit gives a wave packet whose parameters $\vec{K}(t)$ and
$\vec{R}(t)$ obey the appropriate analog of the classical Hamilton equations of
motion
\begin{eqnarray}
\dot{R}^{\mu} &=& \frac{\partial \langle
\Psi_{\vec{R},\vec{K}}|H|\Psi_{\vec{R},\vec{K}}\rangle} {\partial \hbar K^{\mu}}\\
\hbar\dot{K}^{\mu} &=& -\frac{\partial \langle
\Psi_{\vec{R},\vec{K}}|H|\Psi_{\vec{R},\vec{K}}\rangle} {\partial R^{\mu}}.
\label{eq:semiclassical}
\end{eqnarray}
Naturally this leads to the same circular motion of the wave packet at the
classical cyclotron frequency discussed above. For weak fields and fast
electrons the radius of these circular orbits will be large compared to the size
of the wave packets and the semi-classical approximation will be valid. However
at strong fields, the approximation begins to break down because the orbits are
too small and because $\hbar\omega_{c}$ becomes a significant (large) energy.
Thus we anticipate that the semi-classical regime requires $\hbar\omega_{c} \ll
\ensuremath{\epsilon_{\mathrm{F}}}$, where $\ensuremath{\epsilon_{\mathrm{F}}}$ is the Fermi energy.
We have already seen hints that the problem we are studying is really a harmonic
oscillator problem. For the harmonic oscillator there is a characteristic energy
scale $\hbar\omega$ (in this case $\hbar\omega_{c}$) and a characteristic length
scale $\ell$ for the zero-point fluctuations of the position in the ground
state. The analog quantity in this problem is the so-called magnetic length
\begin{equation}
\ell \equiv \sqrt{\frac{\hbar c}{eB}} = \frac{257\,\hbox{\AA}}{\sqrt{B/(1\,\mathrm{tesla})}}.
\end{equation}
The physical interpretation of this length is that the area $2\pi\ell^{2}$
contains one quantum of magnetic flux $\Phi_{0}$ where\footnote{Note that in the
study of superconductors the flux quantum is defined with a factor of $2e$
rather than $e$ to account for the pairing of the electrons in the condensate.}
\begin{equation}
\Phi_{0} = \frac{hc}{e}.
\end{equation}
That is to say, the density of magnetic flux is
\begin{equation}
B = \frac{\Phi_{0}}{2\pi\ell^{2}}.
\end{equation}
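As a numerical sanity check (a sketch in SI units, where $\ell = \sqrt{\hbar/eB}$; the factor of $c$ in the Gaussian-units formula above drops out), the magnetic length and the flux-quantum relation can be verified directly:

```python
import math

hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C

def magnetic_length(B_tesla):
    """Magnetic length in meters (SI: l = sqrt(hbar / (e B)))."""
    return math.sqrt(hbar / (e * B_tesla))

def flux_quantum():
    """Flux quantum Phi_0 = h / e, in webers."""
    return 2 * math.pi * hbar / e

l1 = magnetic_length(1.0)   # ~257 Angstroms at B = 1 tesla
# The area 2*pi*l^2 threaded by field B contains exactly one flux quantum:
flux_ratio = 1.0 * 2 * math.pi * l1**2 / flux_quantum()
```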
To be in the semiclassical limit then requires that the Fermi wavelength be
small on the scale of the magnetic length so that $\ensuremath{k_{\mathrm{F}}}\ell \gg 1$. This
condition turns out to be equivalent to $\hbar\omega_{c} \ll \ensuremath{\epsilon_{\mathrm{F}}}$ so they are
not separate constraints.
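To see why the two conditions coincide, write $\epsilon_{\mathrm{F}} = \hbar^{2}k_{\mathrm{F}}^{2}/2M$ and $\hbar\omega_{c} = \hbar^{2}/M\ell^{2}$; then

```latex
\frac{\epsilon_{\mathrm{F}}}{\hbar\omega_{c}}
 = \frac{\hbar^{2}k_{\mathrm{F}}^{2}/2M}{\hbar^{2}/M\ell^{2}}
 = \frac{1}{2}\left(k_{\mathrm{F}}\ell\right)^{2},
```

so $\hbar\omega_{c} \ll \epsilon_{\mathrm{F}}$ holds precisely when $k_{\mathrm{F}}\ell \gg 1$.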
\boxedtext{\begin{exercise}
Use the Bohr-Sommerfeld quantization condition that the orbit have a
circumference containing an integral number of de Broglie wavelengths to find the
allowed orbits of a 2D electron moving in a uniform magnetic field. Show that
each successive orbit encloses precisely one additional quantum of flux in its
interior. Hint: It is important to make the distinction between the canonical
momentum (which controls the de Broglie wavelength) and the mechanical momentum
(which controls the velocity). The calculation is simplified if one uses the
symmetric gauge $\vec{A} = -\frac{1}{2}\vec{r} \times \vec{B}$ in which the
vector potential is purely azimuthal and independent of the azimuthal angle.
\label{ex:stateperflux}
\end{exercise}}
\section{Neutral Collective Excitations}
So far we have studied one particular variational wave function and found that
it has good correlations built into it as graphically illustrated in
Fig.~\ref{fig:snapshot}. To further bolster the case that this wave function
captures the physics of the fractional Hall effect we must now demonstrate that
there is finite energy cost to produce excitations above this ground state. In
this section we will study the neutral collective excitations. We will examine
the charged excitations in the next section.
It turns out that the neutral excitations are phonon-like excitations similar to
those in solids and in superfluid helium. We can therefore use a simple
modification of Feynman's theory of the excitations in superfluid helium
\cite{feynman72,GMP}.
By way of introduction let us start with the simple harmonic oscillator. The
ground state is of the form
\begin{equation}
\psi_{0}(x) \sim e^{-\alpha x^{2}}.
\end{equation}
Suppose we did not know the excited state and tried to make a variational
ansatz for it. Normally we think of the variational method as applying only to
ground states. However it is not hard to see that the first excited state energy
is given by
\begin{equation}
\epsilon_{1} = \mathrm{min}\,
\left\{\frac{\langle\psi|H|\psi\rangle}{\langle\psi|\psi\rangle}\right\}
\end{equation}
provided that we do the minimization over the set of states $\psi$ which are
constrained to be orthogonal to the ground state $\psi_{0}$. One simple way to
produce a variational state which is automatically orthogonal to the ground
state is to change the parity by multiplying by the first power of the
coordinate
\begin{equation}
\psi_{1}(x) \sim x\; e^{-\alpha x^{2}}. \label{eq:1201}
\end{equation}
Variation with respect to $\alpha$ of course leads (in this special case) to the
\textit{exact} first excited state.
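As a concrete check (a numerical sketch in units $\hbar = m = \omega = 1$, where the exact spectrum is $\epsilon_{n} = n + \frac{1}{2}$), minimizing the Rayleigh quotient for the trial state $x\,e^{-\alpha x^{2}}$ indeed returns $\alpha = \frac{1}{2}$ and $\epsilon_{1} = \frac{3}{2}$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def rayleigh_quotient(alpha):
    """<psi|H|psi>/<psi|psi> for psi = x exp(-alpha x^2), H = -(1/2)d^2/dx^2 + (1/2)x^2."""
    psi = lambda x: x * np.exp(-alpha * x**2)
    # H psi worked out analytically: -(1/2)psi'' + (1/2)x^2 psi
    Hpsi = lambda x: ((3 * alpha * x - 2 * alpha**2 * x**3)
                      + 0.5 * x**3) * np.exp(-alpha * x**2)
    num, _ = quad(lambda x: psi(x) * Hpsi(x), -np.inf, np.inf)
    den, _ = quad(lambda x: psi(x)**2, -np.inf, np.inf)
    return num / den

res = minimize_scalar(rayleigh_quotient, bounds=(0.1, 2.0), method='bounded')
# Minimum at alpha = 1/2 with energy 3/2: the exact first excited state.
```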
With this background let us now consider the case of phonons in superfluid
${}^{4}\hbox{He}$. Feynman argued that because of the Bose statistics of the
particles, there are no low-lying single-particle excitations. This is in stark
contrast to a Fermi gas, which has a high density of low-lying excitations around
the Fermi surface. Feynman argued that the only low-lying excitations in
${}^{4}\hbox{He}$ are collective density oscillations that are well-described by
the following family of variational wave functions (with no adjustable
parameters) labeled by the wave vector
\begin{equation}
\psi_{\vec{k}} = \frac{1}{\sqrt{N}}\; \rho_{\vec{k}}\; \Phi_{0} \label{eq:1202}
\end{equation}
where $\Phi_{0}$ is the exact ground state and
\begin{equation}
\rho_{\vec{k}} \equiv \sum_{j=1}^{N} e^{-i\vec{k}\cdot\vec{r}_{j}}
\label{eq:1203}
\end{equation}
is the Fourier transform of the density. The physical picture behind this is
that at long wavelengths the fluid acts like an elastic continuum and
$\rho_{\vec{k}}$ can be treated as a generalized oscillator normal-mode
coordinate. In this sense eq.~(\ref{eq:1202}) is then analogous to
eq.~(\ref{eq:1201}). To see that $\psi_{\vec{k}}$ is orthogonal to the ground
state we simply note that
\begin{eqnarray}
\langle\Phi_{0}|\psi_{\vec{k}}\rangle &=& \frac{1}{\sqrt{N}}\;
\langle\Phi_{0}|\rho_{\vec{k}}|\Phi_{0}\rangle\nonumber\\
&=& \frac{1}{\sqrt{N}} \int d^{3}r\; e^{-i\vec{k}\cdot\vec{r}}\;
\langle\Phi_{0}|\rho(\vec{r}\,)|\Phi_{0}\rangle \label{eq:1204}
\end{eqnarray}
where
\begin{equation}
\rho(\vec{r}\,) \equiv \sum_{j=1}^{N} \delta^{3}(\vec{r}_{j} - \vec{r}\,)
\end{equation}
is the density operator. If $\Phi_{0}$ describes a translationally invariant
liquid ground state then the Fourier transform of the mean density vanishes for
$k\neq 0$.
There are several reasons why $\psi_{\vec{k}}$ is a good variational wave
function, especially for small $k$. First, it contains the ground state as a
factor. Hence it contains all the special correlations built into the ground
state to make sure that the particles avoid close approaches to each other
without paying a high price in kinetic energy. Second, $\psi_{\vec{k}}$ builds
in the features we expect on physical grounds for a density wave. To see this,
consider evaluating $\psi_{\vec{k}}$ for a configuration of the particles like
that shown in fig.~(\ref{fig:densitywave}a)
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{densitywave.xfig.eps}}
\caption[]{(a) Configuration of particles in which the Fourier transform of the
density at wave vector $k$ is non-zero. (b) The Fourier amplitude will have a
similar magnitude for this configuration but a different phase.}
\label{fig:densitywave}
\end{figure}
which has a density modulation at wave vector $\vec{k}$. This is not a
configuration that maximizes $|\Phi_{0}|^{2}$, but as long as the density
modulation is not too large and the particles avoid close approaches,
$|\Phi_{0}|^{2}$ will not fall too far below its maximum value. More
importantly, $|\rho_{\vec{k}}|^{2}$ will be much larger than it would for a more
nearly uniform distribution of positions. As a result $|\psi_{\vec{k}}|^{2}$ will be
large and this will be a likely configuration of the particles in the excited
state. For a configuration like that in fig.~(\ref{fig:densitywave}b), the phase
of $\rho_{\vec{k}}$ will shift but $|\psi_{\vec{k}}|^{2}$ will have the same
magnitude. This is analogous to the parity change in the harmonic oscillator
example. Because all the different phases of the density wave are equally
likely, the excited state has a uniform (translationally invariant) mean
density.
To proceed with the calculation of the variational estimate for the excitation
energy $\Delta(k)$ of the density wave state we write
\begin{equation}
\Delta(k) = \frac{f(k)}{s(k)} \label{eq:1205}
\end{equation}
where
\begin{equation}
f(k) \equiv \left\langle\psi_{\vec{k}}|(H - E_{0})|\psi_{\vec{k}}\right\rangle,
\label{eq:1206}
\end{equation}
with $E_{0}$ being the exact ground state energy and
\begin{equation}
s(k) \equiv \langle\psi_{\vec{k}}|\psi_{\vec{k}}\rangle = \frac{1}{N}\;
\langle\Phi_{0}|\rho_{\vec{k}}^{\dagger}\rho_{\vec{k}}^{\phantom{\dagger}}|\Phi_
{0}\rangle.
\label{eq:1207}
\end{equation}
We see that the norm of the variational state $s(k)$ turns out to be the static
structure factor of the ground state. It is a measure of the mean square density
fluctuations at wave vector $\vec{k}$. Continuing the harmonic oscillator
analogy, we can view this as a measure of the zero-point fluctuations of the
normal-mode oscillator coordinate $\rho_{\vec{k}}$. For superfluid
${}^{4}\hbox{He}$ $s(k)$ can be directly measured by neutron scattering and can
also be computed theoretically using quantum Monte Carlo methods
\cite{ceperley95}. We will return to this point shortly.
\boxedtext{\begin{exercise}
Show that for a uniform liquid state of density $n$, the static structure factor
is related to the Fourier transform of the radial distribution function by
\[
s(k) = N\; \delta_{\vec{k},\vec{0}} + 1 + n \int d^{3}r\;
e^{i\vec{k}\cdot\vec{r}}\; \left[g(r) - 1\right]
\]
\label{ex:static2g(r)}
\end{exercise}}
The numerator in eq.~(\ref{eq:1206}) is called the oscillator strength and can
be written
\begin{equation}
f(k) = \frac{1}{N}\; \left\langle\Phi_{0}|\rho_{\vec{k}}^{\dagger}
[H,\rho_{\vec{k}}^{\phantom{\dagger}}]|\Phi_{0}\right\rangle. \label{eq:1208}
\end{equation}
For uniform systems with parity symmetry we can write this as a double
commutator
\begin{equation}
f(k) = \frac{1}{2N}\; \left\langle\Phi_{0}\left|\left[\rho_{\vec{k}}^{\dagger}, [H,
\rho_{\vec{k}}^{\phantom{\dagger}}]\right]\right|\Phi_{0}\right\rangle
\label{eq:1209}
\end{equation}
from which we can derive the justifiably famous oscillator strength sum rule
\begin{equation}
f(k) = \frac{\hbar^{2}k^{2}}{2M}, \label{eq:1210}
\end{equation}
where $M$ is the (band) mass of the particles.\footnote{Later on in
Eq.~(\ref{eq:1217}) we will express the oscillator strength in terms of a
frequency integral. Strictly speaking if this is integrated up to very high
frequencies including interband transitions, then $M$ is replaced by the bare
electron mass.} Remarkably (and conveniently) this is a universal result
independent of the form of the interaction potential between the particles. This
follows from the fact that only the kinetic energy part of the Hamiltonian fails
to commute with the density.
\boxedtext{\begin{exercise}
Derive eq.~(\ref{eq:1209}) and then eq.~(\ref{eq:1210}) from
eq.~(\ref{eq:1208}) for a system of interacting particles.
\label{ex:oscstrength}
\end{exercise}}
We thus arrive at the Feynman-Bijl formula for the collective mode excitation
energy
\begin{equation}
\Delta(k) = \frac{\hbar^{2}k^{2}}{2M}\; \frac{1}{s(k)}. \label{eq:1211}
\end{equation}
We can interpret the first term as the energy cost if a single particle
(initially at rest) were to absorb all the momentum and the second term is a
renormalization factor describing momentum (and position) correlations among the
particles. One of the remarkable features of the Feynman-Bijl formula is that it
manages to express a \textit{dynamical} quantity $\Delta (k)$, which is a
property of the excited state spectrum, solely in terms of a \textit{static}
property of the ground state, namely $s(k)$. This is a very powerful and useful
approximation.
Returning to eq.~(\ref{eq:1202}) we see that $\psi_{\vec{k}}$ describes a linear
superposition of states in which one single particle has had its momentum
boosted by $\hbar\vec{k}$. We do not know which one however. The summation in
eq.~(\ref{eq:1203}) tells us that it is equally likely to be particle 1
\textit{or} particle 2 \textit{or} \dots, etc. This state should not be confused
with the state in which boost is applied to particle 1 \textit{and} particle 2
\textit{and} \dots, etc. This state is described by a product
\begin{equation}
\Phi_{\vec{k}} \equiv \left(\prod_{j=1}^{N} e^{i\vec{k}\cdot\vec{r}_{j}}\right)\;
\Phi_{0}
\end{equation}
which can be rewritten
\begin{equation}
\Phi_{\vec{k}} = \exp{\left\{ iN\vec{k} \cdot \left(\frac{1}{N} \sum_{j=1}^{N}
\vec{r}_{j}\right)\right\}}\; \Phi_{0}
\label{eq:12125}
\end{equation}
showing that this is an exact energy eigenstate (with energy $N\;
\frac{\hbar^{2}k^{2}}{2M}$) in which the center of mass momentum has been
boosted by $N\hbar\vec{k}$.
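The quoted energy follows immediately from the boosted center-of-mass momentum, since the total mass is $NM$:

```latex
E_{\vec{k}} - E_{0} = \frac{\left(N\hbar k\right)^{2}}{2NM}
 = N\,\frac{\hbar^{2}k^{2}}{2M}.
```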
In superfluid ${}^{4}\hbox{He}$ the structure factor vanishes linearly at small
wave vectors
\begin{equation}
s(k) \sim \xi k \label{eq:1212}
\end{equation}
so that $\Delta(k)$ is linear as expected for a sound mode
\begin{equation}
\Delta(k) = \left(\frac{\hbar^{2}}{2M}\; \frac{1}{\xi}\right)\; k
\label{eq:1213}
\end{equation}
from which we see that the sound velocity is given by
\begin{equation}
c_{\mathrm{s}} = \frac{\hbar}{2M}\; \frac{1}{\xi}. \label{eq:1214}
\end{equation}
This phonon mode should not be confused with the ordinary hydrodynamic sound
mode in classical fluids. The latter occurs in a collision dominated regime
$\omega\tau \ll 1$ in which collision-induced pressure provides the restoring
force. The phonon mode described here by $\psi_{\vec{k}}$ is a low-lying
eigenstate of the quantum Hamiltonian.
At larger wave vectors there is a peak in the static structure factor caused by
the solid-like oscillations in the radial distribution function $g(r)$ similar
to those shown in Fig.~\ref{fig:2pointqhe} for the Laughlin liquid. This peak in
$s(k)$ leads to the so-called roton minimum in $\Delta(k)$ as illustrated in
fig.~(\ref{fig:Heroton}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{heliumroton.xfig.eps}}
\caption[]{Schematic illustration of the phonon dispersion in superfluid liquid
${}^{4}$He. For small wave vectors the dispersion is linear, as is expected for
a gapless Goldstone mode. The roton minimum due to the peak in the static
structure factor occurs at a wave vector $k$ of approximately 2.0 in units of
inverse \AA. The roton energy is approximately $10$ in units of Kelvins.}
\label{fig:Heroton}
\end{figure}
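To see how a peak in $s(k)$ carves out such a minimum, consider a toy numerical sketch (the model structure factor below, a smooth background plus a Gaussian peak at $k_{0}$, is purely illustrative and not fitted to helium data):

```python
import numpy as np

hbar = M = 1.0  # natural units

def s_model(k, k0=2.0, amp=1.0, width=0.25):
    """Toy static structure factor: vanishes at k=0, saturates at large k, peaked near k0."""
    return k / (1.0 + k) + amp * np.exp(-(k - k0)**2 / (2 * width**2))

k = np.linspace(0.05, 4.0, 800)
delta = (hbar**2 * k**2 / (2 * M)) / s_model(k)  # Feynman-Bijl formula

# Phonon-like linear rise at small k; a dip (roton minimum) appears where
# the peak in s(k) enhances the denominator.
roton_mask = (k > 1.5) & (k < 2.5)
k_roton = k[roton_mask][np.argmin(delta[roton_mask])]
roton_gap = delta[roton_mask].min()
maxon = delta[(k > 1.0) & (k < 1.6)].max()  # local maximum before the dip
```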
To better understand the Feynman picture of the collective excited states recall
that the dynamical structure factor is defined (at zero temperature) by
\begin{equation}
S(q,\omega) \equiv \frac{2\pi}{N}\;
\left\langle\Phi_{0}\left|\rho_{\vec{q}}^{\dagger}\;\, \delta\left(\omega -
\frac{H - E_{0}}{\hbar}\right)
\rho_{\vec{q}}^{\phantom{\dagger}}\right|\Phi_{0}\right\rangle. \label{eq:1215}
\end{equation}
The static structure factor is the zeroth frequency moment
\begin{equation}
s(q) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\; S(q,\omega) =
\int_{0}^{\infty} \frac{d\omega}{2\pi}\; S(q,\omega) \label{eq:1216}
\end{equation}
(with the second equality valid only at zero temperature). Similarly the
oscillator strength in eq.~(\ref{eq:1206}) becomes (at zero temperature)
\begin{equation}
f(q) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\; \hbar\omega\; S(q,\omega)
= \int_{0}^{\infty} \frac{d\omega}{2\pi}\; \hbar\omega\; S(q,\omega).
\label{eq:1217}
\end{equation}
Thus we arrive at the result that the Feynman-Bijl formula can be rewritten
\begin{equation}
\Delta(q) = \frac{\int_{0}^{\infty} \frac{d\omega}{2\pi}\; \hbar\omega\;
S(q,\omega)}{\int_{0}^{\infty} \frac{d\omega}{2\pi}\; S(q,\omega)}.
\end{equation}
That is, $\Delta(q)$ is the mean excitation energy (weighted by the square of
the density operator matrix element). Clearly the mean exceeds the minimum and
so the estimate is variational as claimed. Feynman's approximation is equivalent
to the assumption that only a single mode contributes any oscillator strength so
that the zero-temperature dynamical structure factor contains only a single
delta function peak
\begin{equation}
S(q,\omega) = 2\pi\; s(q)\; \delta\left(\omega - \frac{1}{\hbar}\;
\Delta(q)\right). \label{eq:1218}
\end{equation}
Notice that this approximate form satisfies both eq.~(\ref{eq:1216}) and
eq.~(\ref{eq:1217}) provided that the collective mode energy $\Delta(q)$ obeys
the Feynman-Bijl formula in eq.~(\ref{eq:1211}).
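Explicitly, inserting this single delta-function form into the two frequency moments gives

```latex
\int_{0}^{\infty} \frac{d\omega}{2\pi}\; S(q,\omega) = s(q), \qquad
\int_{0}^{\infty} \frac{d\omega}{2\pi}\; \hbar\omega\; S(q,\omega)
 = s(q)\,\Delta(q),
```

and the second moment equals $f(q)$ exactly when $\Delta(q) = f(q)/s(q)$.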
\boxedtext{\begin{exercise}
For a system with a homogeneous liquid ground state, the (linear response)
static susceptibility of the density to a perturbation $U = V_{\vec{q}}
\rho_{-\vec{q}}$ is defined by
\begin{equation}
\left\langle\rho_{\vec{q}}\right\rangle = \chi(q)V_{\vec{q}}.
\label{eq:linearresponse}
\end{equation}
Using first order perturbation theory show that the static susceptibility is
given in terms of the dynamical structure factor by
\begin{equation}
\chi(q) = -2\int_{0}^{\infty} \frac{d\omega}{2\pi} \frac{1}{\hbar\omega}
S(q,\omega).
\label{eq:static}
\end{equation}
Using the single mode approximation and the oscillator strength sum rule, derive
an expression for the collective mode dispersion in terms of $\chi(q)$. (Your
answer should \textbf{not} involve the static structure factor. Note also that
eq.~(\ref{eq:linearresponse}) is not needed to produce the answer to this part.
Just work with eq.~(\ref{eq:static}).)
\label{ex:9812}
\end{exercise}}
As we mentioned previously, Feynman argued that in ${}^{4}\hbox{He}$ the Bose
symmetry of the wave functions guarantees that, unlike in Fermi systems, there
is only a single low-lying mode, namely the phonon density mode. The paucity of
low-energy single particle excitations in boson systems is what helps make them
superfluid--there are no dissipative channels for the current to decay into.
Despite the fact that the quantum Hall system is made up of fermions, the
behavior is also reminiscent of superfluidity since the current flow is
dissipationless. Indeed, within the `composite boson' picture, one views the
FQHE ground state as a bose condensate \cite{compositeboson,sciam,sczhang}. Let
us therefore blindly make the single-mode approximation and see what happens.
{}From eq.~(\ref{eq:12105}) we see that the static structure factor for the
$m$th Laughlin state is (for small wave vectors only)
\begin{equation}
s(q) = \frac{L^{2}}{N}\; \frac{q^{2}}{4\pi m} = \frac{1}{2}\; q^{2}\ell^{2},
\label{eq:1219}
\end{equation}
where we have used $L^{2}/N = 2\pi\ell^{2}m$. The Feynman-Bijl formula then
yields\footnote{We will continue to use the symbol $M$ here for the band mass of
the electrons to avoid confusion with the inverse filling factor $m$.}
\begin{equation}
\Delta(q) = \frac{\hbar^{2}q^{2}}{2M}\; \frac{2}{q^{2}\ell^{2}} =
\hbar\omega_{c}. \label{eq:1220}
\end{equation}
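The final equality is an identity following from the definitions of the magnetic length and the cyclotron frequency (in the Gaussian units used here):

```latex
\frac{\hbar^{2}}{M\ell^{2}} = \frac{\hbar^{2}}{M}\,\frac{eB}{\hbar c}
 = \hbar\,\frac{eB}{Mc} = \hbar\omega_{c}.
```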
This predicts that there is an excitation gap that is independent of wave vector
(for small $q$) and equal to the cyclotron energy. It is in fact correct that at
long wavelengths the oscillator strength is dominated by transitions in which a
single particle is excited from the $n=0$ to the $n=1$ Landau level.
Furthermore, Kohn's theorem guarantees that the mode energy is precisely
$\hbar\omega_{c}$. Eq.~(\ref{eq:1220}) was derived specifically for the Laughlin
state, but it is actually quite general, applying to any translationally
invariant liquid ground state.
One might expect that the single mode approximation (SMA) will not work well in
an ordinary Fermi gas due to the high density of excitations around the Fermi
surface.\footnote{This expectation is only partly correct however as one
discovers when studying collective plasma oscillations in systems with
long-range Coulomb forces.} Here however the Fermi surface has been destroyed by
the magnetic field and the continuum of excitations with different kinetic
energies has been turned into a set of discrete inter-Landau-level excitations,
the lowest of which dominates the oscillator strength.
For filling factor $\nu =1$ the Pauli principle prevents any intra-level
excitations and the excitation gap is in fact $\hbar\omega_{c}$ as predicted by
the SMA. However for $\nu < 1$ there should exist intra-Landau-level excitations
whose energy scale is set by the interaction scale $e^{2}/\epsilon\ell$ rather
than the kinetic energy scale $\hbar\omega_{c}$. Indeed we can formally think of
taking the band mass to zero ($M \rightarrow 0$) which would send
$\hbar\omega_{c} \rightarrow \infty$ while keeping $e^{2}/\epsilon\ell$ fixed.
Unfortunately the SMA as it stands now is not very useful in this limit. What we
need is a variational wave function that represents a density wave but is
restricted to lie in the Hilbert space of the lowest Landau level. This can be
formally accomplished by replacing eq.~(\ref{eq:1202}) by
\begin{equation}
\psi_{\vec{k}} = \bar{\rho}_{\vec{k}}\; \psi_{m} \label{eq:1221}
\end{equation}
where the overbar indicates that the density operator has been projected into
the lowest Landau level. The details of how this is accomplished are presented
in appendix~\ref{app:projection}.
The analog of eq.~(\ref{eq:1205}) is
\begin{equation}
\Delta(k) = \frac{\bar{f}(k)}{\bar{s}(k)} \label{eq:1222}
\end{equation}
where $\bar{f}$ and $\bar{s}$ are the projected oscillator strength and
structure factor, respectively. As shown in appendix~\ref{app:projection}
\begin{eqnarray}
\bar{s}(k) &\equiv& \frac{1}{N}\;
\left\langle\psi_{m}|\bar{\rho}_{\vec{k}}^{\dagger}\;
\bar{\rho}_{\vec{k}}|\psi_{m}\right\rangle = s(k) - \left[ 1 - e^{-
\frac{1}{2}|k|^{2}\ell^{2}}\right]\nonumber\\
&=& s(k) - s_{\nu=1}(k). \label{eq:1223}
\end{eqnarray}
This vanishes for the filled Landau level because the Pauli principle forbids
all intra-Landau-level excitations. For the $m$th Laughlin state
eq.~(\ref{eq:1219}) shows us that the leading term in $s(k)$ for small $k$ is
$\frac{1}{2}k^{2}\ell^{2}$. Putting this into eq.~(\ref{eq:1223}) we see that
the leading behavior for $\bar{s}(k)$ is therefore quartic
\begin{equation}
\bar{s}(k) \sim a(k\ell)^{4} + \ldots .
\end{equation}
We cannot compute the coefficient $a$ without finding the $k^{4}$ correction to
eq.~(\ref{eq:1219}). It turns out that there exists a compressibility sum rule
for the fake plasma from which we can obtain the exact result \cite{GMP}
\begin{equation}
a = \frac{m-1}{8}.
\end{equation}
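A quick series expansion (a sketch using sympy) makes the cancellation explicit: the filled-level piece $1 - e^{-\frac{1}{2}k^{2}\ell^{2}}$ removes the leading $\frac{1}{2}k^{2}\ell^{2}$ of $s(k)$ exactly, leaving a quartic term whose coefficient from the Gaussian alone is $1/8$; the remaining $(m-2)/8$ must come from the $k^{4}$ correction to $s(k)$:

```python
import sympy as sp

kl = sp.symbols('kl', positive=True)  # shorthand for k*ell

s_leading = kl**2 / 2                  # leading term of s(k) for the Laughlin state
s_nu1 = 1 - sp.exp(-kl**2 / 2)         # structure factor of the filled Landau level
sbar = sp.series(s_leading - s_nu1, kl, 0, 8).removeO()

# The quadratic terms cancel; the leading surviving term is quartic.
coeff2 = sbar.coeff(kl, 2)
coeff4 = sbar.coeff(kl, 4)
```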
The projected oscillator strength is given by eq.~(\ref{eq:1209}) with the
density operators replaced by their projections. In the case of
${}^{4}\hbox{He}$ only the kinetic energy part of the Hamiltonian failed to
commute with the density. It was for this reason that the oscillator strength
came out to be a universal number related to the mass of the particles. Within
the lowest Landau level however the kinetic energy is an irrelevant constant.
Instead, after projection the density operators no longer commute with each
other (see appendix~\ref{app:projection}). It follows from these commutation
relations that the projected oscillator strength is proportional to the strength
of the interaction term. The leading small $k$ behavior is \cite{GMP}
\begin{equation}
\bar{f}(k) = b\; \frac{e^{2}}{\epsilon\ell}(k\ell)^{4} + \ldots
\end{equation}
where $b$ is a dimensionless constant that depends on the details of the
interaction potential. The intra-Landau level excitation energy therefore has a
finite gap at small $k$
\begin{equation}
\Delta(k) = \frac{\bar{f}(k)}{\bar{s}(k)} \sim \frac{b}{a}\;
\frac{e^{2}}{\epsilon\ell} + \mathcal{O}(k^{2}) + \ldots
\end{equation}
This is quite different from the case of superfluid ${}^{4}\hbox{He}$ in which
the mode is gapless. However like the case of the superfluid, this
`magnetophonon' mode has a `magnetoroton' minimum at finite $k$ as illustrated
in fig.~(\ref{fig:magnetoroton}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{roton1.eps}}
\caption[]{Comparison of the single mode approximation (SMA) prediction of the
collective mode energy for filling factors $\nu=1/3,1/5,1/7$ (solid lines) with
small-system numerical results for $N$ particles. Crosses indicate the $N=7,
\nu=1/3$ spherical system, triangles indicate the $N=6, \nu=1/3$ hexagonal unit
cell system results of Haldane and Rezayi \cite{haldane-rezayi}. Solid dots are
for $N=9,\nu=1/3$ and $N=7, \nu=1/5$ spherical system calculations of Fano et
al.~\cite{Fano}. Arrows at the top indicate the magnitude of the reciprocal
lattice vector of the Wigner crystal at the corresponding filling factor. Notice
that unlike the phonon collective mode in superfluid helium shown in
fig.~(\ref{fig:Heroton}), the mode here is gapped.}
\label{fig:magnetoroton}
\end{figure}
The figure also shows results from numerical exact diagonalization studies which
demonstrate that the single mode approximation is extremely accurate. Note that
the magnetoroton minimum occurs close to the position of the smallest reciprocal
lattice vector in the Wigner crystal of the same density. In the crystal the
phonon frequency would go exactly to zero at this point. (Recall that in a
crystal the phonon dispersion curves have the periodicity of the reciprocal
lattice.)
Because the oscillator strength is almost entirely in the cyclotron mode, the
dipole matrix element for coupling the collective excitations to light is very
small. They have however been observed in Raman scattering \cite{pinczukroton}
and found to have an energy gap in excellent quantitative agreement with the
single mode approximation.
Finally we remark that these collective excitations are characterized by a
well-defined wave vector $\vec{k}$ despite the presence of the strong magnetic
field. This is only possible because they are charge neutral which allows one to
define a gauge invariant conserved momentum \cite{Kallin}.
\section{Double-Layer Quantum Hall Ferromagnets}
\label{sec:doublelayer}
\subsection{Introduction}
\label{sec:intro}
We learned in our study of quantum Hall ferromagnets that the Coulomb
interaction plays an important role at Landau level filling factor $\nu=1$
because it causes the electron spins to spontaneously align ferromagnetically
and this in turn profoundly alters the charge excitation spectrum by producing a
gap.\footnote{Because the charged excitations are skyrmions, this gap is not as
large as naive estimates would suggest, but it is still finite as long as the
spin stiffness is finite.} A closely related effect occurs in double-layer
systems in which layer index is analogous to spin
\cite{murphyPRL,JPEbook,GMbook}. Building on our knowledge of the dynamics of
ferromagnets developed in the last section, we will use this analogy to explore
the rich physics of double-layer systems.
Novel fractional quantum Hall effects due to correlations \cite{gsnum} in
multicomponent systems were anticipated in early work by Halperin
\cite{BIHhelv} and the now extensive literature has been reviewed in
\cite{GMbook}. There have also been recent interesting studies of systems in
which the spin and layer degrees of freedom are coupled in novel ways
\cite{Pinczukdouble,DasSarmaSachdev}.
As described in this volume by Shayegan \cite{ShayeganLesHouches}, modern MBE
techniques make it possible to produce double-layer (and multi-layer)
two-dimensional electron gas systems of extremely low disorder and high
mobility. As illustrated schematically in Fig.~(\ref{fig:wellschematic}),
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{wellschematic.eps}}
\caption[]{Schematic conduction band edge profile for a
double-layer two-dimensional electron gas system. Typical widths
and separations are $W\sim d\sim 100\hbox{\AA}$
and are comparable to the spacing
between electrons within each inversion layer.}
\label{fig:wellschematic}
\end{figure}
these systems consist of a pair of 2D electron gases separated by a distance $d$
so small ($d\sim 100\mbox{\AA}$) as to be comparable to the typical spacing
between electrons in the same layer. A second type of system has also recently
been developed to a high degree of perfection \cite{mansour}. These systems
consist of single wide quantum wells in which strong mixing of the two lowest
electric subbands allows the electrons to localize themselves on opposite sides
of the well to reduce their correlation energy. We will take the point of view
that these systems can also be approximately viewed as double-well systems with
some effective layer separation and tunnel barrier height.
As we have already learned, correlations are especially important in the strong
magnetic field regime because all electrons can be accommodated within the
lowest Landau level and execute cyclotron orbits with a common kinetic energy.
The fractional quantum Hall effect occurs when the system has a gap for making
charged excitations, \textit{i.e.} when the system is incompressible. Theory has
predicted \cite{BIHhelv,gsnum,amdreview} that at some Landau level filling
factors, gaps occur in double-layer systems only if interlayer interactions are
sufficiently strong. These theoretical predictions have been
confirmed \cite{expamd}. More recently work from several different points of
view \cite{wenandzee,ezawa,ahmz1,gapless,harfok,JasonHo} has suggested that
inter-layer correlations can also lead to unusual broken symmetry states with a
novel kind of spontaneous phase coherence between layers which are isolated from
each other except for inter-layer Coulomb interactions. It is this spontaneous
interlayer phase coherence which is responsible \cite{usPRL,Ilong,II,GMbook} for
a variety of novel features seen in the experimental data to be discussed below
\cite{murphyPRL,JPEbook}.
\subsection{Pseudospin Analogy}
We will make the simplifying assumption that the Zeeman energy is large enough
that fluctuations of the (true) spin order can be ignored, leaving out the
possibility of mixed spin and pseudospin correlations
\cite{Pinczukdouble,DasSarmaSachdev}. We will limit our attention to the lowest
electric subband of each quantum well (or equivalently, the two lowest bands of
a single wide well). Hence we have a two-state system that can be labeled by a
pseudospin 1/2 degree of freedom. Pseudospin up means that the electron is in
the (lowest electric subband of the) upper layer and pseudospin down means that
the electron is in the (lowest electric subband of the) lower layer.
Just as in our study of ferromagnetism we will consider states with total
filling factor $\nu \equiv \nu_{\uparrow} + \nu_{\downarrow} =1$. A state
exhibiting interlayer phase coherence and having the pseudospins
ferromagnetically aligned in the direction defined by polar angle $\theta$ and
azimuthal angle $\varphi$ can be written in the Landau gauge just as for
ordinary spin
\begin{equation}
|\psi\rangle = \prod_{k}\left\{\cos(\theta/2) c^{\dagger}_{k\uparrow}
+ \sin(\theta/2)e^{i\varphi} c^{\dagger}_{k\downarrow}\right\}|0\rangle.
\end{equation}
Every $k$ state contains one electron and hence this state has $\nu=1$
as desired. Note however that the layer index for each electron is
uncertain. The amplitude to find a particular electron in the upper
layer is $\cos(\theta/2)$ and the amplitude to find it in the lower layer
is $\sin(\theta/2)e^{i\varphi}$. Even if the two layers are
completely independent with no tunneling between them, quantum mechanics
allows for the somewhat peculiar possibility that we are uncertain
which layer the electron is in.
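Since every $k$ state carries the same spinor, the magnetization direction follows from a single two-component spinor. A small numerical illustration (the test angles are arbitrary) confirms that the pseudospin points along the $(\theta, \varphi)$ direction:

```python
import numpy as np

theta, phi = np.pi / 3, np.pi / 4  # arbitrary test angles

# Spinor of (upper-layer, lower-layer) amplitudes for one k state
chi = np.array([np.cos(theta / 2),
                np.sin(theta / 2) * np.exp(1j * phi)])

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

# Expectation values of the Pauli matrices give the unit magnetization vector
m = np.real([chi.conj() @ s @ chi for s in (sx, sy, sz)])
# m = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta))
```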
For the case of ordinary spin we found that the Coulomb interaction produced an
exchange energy which strongly favored having the spins locally parallel. Using
the fact that the Coulomb interaction is completely spin independent (it is only
the Pauli principle that indirectly induces the ferromagnetism) we wrote down
the spin rotation invariant effective theory in eq.~(\ref{eq:1124219}). Here we
do not have full SU(2) invariance because the interaction between electrons in
the same layer is clearly stronger than the interaction between electrons in
opposite layers. Thus for example, if all the electrons are in the upper (or
lower) layer, the system will look like a charged capacitor and have higher
energy than if the layer occupancies are equal. Hence to leading order in
gradients we expect the effective action to be modified slightly
\begin{eqnarray}
\mathcal{L} = &-&\int d^{2}r\; \left\{\hbar S n\, \dot{m}^{\mu}(\vec{r}\,)
\mathcal{A}^{\mu}[\vec{m}] -\lambda(\vec{r}\,) (m^{\mu} m^{\mu} - 1)
\right\}\nonumber\\
&-&\int d^{2}r\, \left\{\frac{1}{2} \rho_{s} \partial_{\mu} m^{\nu}
\partial_{\mu} m^{\nu} + \beta m^{z} m^{z} - \Delta m^{z} - nt m^{x}\right\}.
\label{eq:easyplane}
\end{eqnarray}
The spin stiffness $\rho_{s}$ represents the SU(2) invariant part of the
exchange energy and is therefore somewhat smaller than the value computed in
eq.~(\ref{eq:060322}). The coefficient $\beta$ is a measure of the capacitive
charging energy.\footnote{We have taken the charging energy to be a local
quantity characterized by a fixed, wave vector independent capacitance. This is
appropriate only if $m^{z}(\vec{r}\,)$ represents the local charge imbalance
between the layers coarse-grained over a scale larger than the layer separation.
Any wave vector dependence of the capacitance will be represented by higher
derivative terms which we will ignore.} The analog of the Zeeman energy $\Delta$
represents an external electric field applied along the MBE growth direction
which unbalances the charge densities in the two layers. The coefficient $t$
represents the amplitude for the electrons to tunnel between the two layers. It
prefers the pseudospin to be aligned in the $\hat x$ direction because this
corresponds to the spinor
\begin{equation}
\frac{1}{\sqrt{2}} \left(\begin{array}{c}
1\\
1
\end{array}\right)
\end{equation}
which represents the \textit{symmetric} (i.e. bonding) linear combination of the
two well states. The state with the pseudospin pointing in the $-\hat{x}$
direction represents the \textit{antisymmetric} (i.e. antibonding) linear
combination which is higher in energy.
For the moment we will assume that both $t$ and $\Delta$ vanish, leaving only
the $\beta$ term which breaks the pseudospin rotational symmetry. The case
$\beta < 0$ would represent `Ising anisotropy'. Clearly the physically realistic
case for the capacitive energy gives $\beta > 0$ which represents so-called
`easy plane anisotropy.' The energy is minimized when $m^{z} = 0$ so that the
order parameter lies in the XY plane giving equal charge densities in the two
layers. Thus we are left with an effective XY model which should exhibit
long-range off-diagonal order\footnote{At finite temperatures the expectation
value $\Psi(\vec{r}\,)$ will vanish, but below the Kosterlitz-Thouless phase
transition temperature the field has long-range, algebraically decaying
correlations; above the transition they fall off exponentially.}
\begin{equation}
\Psi(\vec{r}\,) = \langle m^{x}(\vec{r}\,) + i m^{y}(\vec{r}\,)\rangle.
\end{equation}
The order is `off-diagonal' because it corresponds microscopically to an
operator
\begin{equation}
\Psi(\vec{r}\,) = \langle s^{+}(\vec{r}\,)\rangle = \langle
\psi^{\dagger}_{\uparrow}(\vec{r}\,) \psi_{\downarrow}(\vec{r}\,)\rangle
\label{eq:odlro}
\end{equation}
which is not diagonal in the $s^{z}$ basis, much as in a superfluid where the
field operator changes the particle number and yet it condenses and acquires a
finite expectation value.
One other comment worth making at this point is that eq.~(\ref{eq:odlro}) shows
that, unlike the order parameter in a superconductor or superfluid, this one
corresponds to a charge neutral operator. Hence it will be able to condense
despite the strong magnetic field (which fills charged condensates with vortices
and generally destroys the order).
In the next subsection we review the experimental evidence that long-range XY
correlations exist and that as a result, the system exhibits excitations which
are highly collective in nature. After that we will return to further analysis
and interpretation of the effective Lagrangian in eq.~(\ref{eq:easyplane}) to
understand those excitations.
\subsection{Experimental Background}
\label{sec:expback}
As illustrated by the dashed lines in fig.~(\ref{fig:wellschematic}), the lowest
energy eigenstates split into symmetric and antisymmetric combinations separated
by an energy gap $\Delta_{\mathrm{SAS}} = 2t$ which can, depending on the sample,
vary from essentially zero to many hundreds of Kelvins. The splitting can
therefore be much less than or greater than the interlayer interaction energy
scale, $E_{\mathrm{c}} \equiv e^{2}/\epsilon d$. Thus it is possible to make
systems which are in either the weak or strong correlation limits.
When the layers are widely separated, there will be no correlations between them
and we expect no dissipationless quantum Hall state since each layer has
$\nu = 1/2$ \cite{nuhalf}. For smaller separations, it is observed
experimentally that there is an excitation gap and a quantized Hall plateau
\cite{greg,mansour,murphyPRL}. This has either a trivial or a highly non-trivial
explanation, depending on the ratio $\Delta_{\mathrm{SAS}}/ E_{\mathrm{c}}$. For
large $\Delta_{\mathrm{SAS}}$ the electrons tunnel back and forth so rapidly
that it is as if there is only a single quantum well. The tunnel splitting
$\Delta_{\mathrm{SAS}}$ is then analogous to the electric subband splitting in a
(wide) single well. All symmetric states are occupied and all antisymmetric
states are empty and we simply have the ordinary $\nu = 1$ integer Hall effect.
Correlations are irrelevant in this limit and the excitation gap is close to the
single-particle gap $\Delta_{\mathrm{SAS}}$ (or $\hbar\omega_{\mathrm{c}}$,
whichever is smaller). What is highly non-trivial about this system is the fact
that the $\nu = 1$ quantum Hall plateau survives even when
$\Delta_{\mathrm{SAS}} \ll E_{\mathrm{c}}$. In this limit the excitation gap has
clearly changed to become highly collective in nature since the observed
\cite{mansour,murphyPRL} gap can be on the scale of 20K even when
$\Delta_{\mathrm{SAS}} \sim 1~\mathrm{K}$. Because of the spontaneously broken
XY symmetry \cite{wenandzee,ezawa,harfok,usPRL,Ilong}, the excitation gap
actually survives the limit $\Delta_{\mathrm{SAS}} \longrightarrow 0$! This
cross-over from single-particle to collective gap is quite analogous to that for
spin polarized single layers. There the excitation gap survives the limit of
zero Zeeman splitting so long as the Coulomb interaction makes the spin
stiffness non-zero. This effect in double-layer systems is visible in
fig.~(\ref{fig:qhe-noqhe})
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{figqhe-noqhe.eps}}
\caption[]{Phase diagram for the double layer QHE system (after
Murphy et al. \cite{murphyPRL}). Only samples whose parameters
lie below the dashed line exhibit
a quantized Hall plateau and excitation gap.}
\label{fig:qhe-noqhe}
\end{figure}
which shows the QHE phase diagram obtained by Murphy \textit{et al.}
\cite{murphyPRL,JPEbook} as a function of layer-separation and tunneling energy.
A $\nu=1$ quantum Hall plateau and gap is observed in the regime below the
dashed line. Notice that far to the right, the single-particle tunneling energy
dominates over the Coulomb energy and we have essentially a one-body integer QHE
state. However, the QHE survives all the way down to $\Delta_{\mathrm{SAS}} = 0$
provided that the layer separation is below a critical value
$d/\ell_{\mathrm{B}} \approx 2$. In this limit there is no tunneling and the gap
is purely many-body in origin and, as we will show, is associated with the
remarkable `pseudospin ferromagnetic' quantum state exhibiting spontaneous
interlayer phase coherence.
A second indication of the highly collective nature of the excitations can be
seen in the Arrhenius plots of thermally activated dissipation \cite{murphyPRL}
shown in the inset of fig.~(\ref{fig:arrhenius})
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{nu1prl_fig2.eps}}
\caption[]{The charge activation energy gap, $\Delta$, as a function of tilt
angle in a weakly tunneling double-layer sample ($\Delta_{\mathrm{SAS}} = 0.8$K). The
solid circles are for filling $\nu=1$, open triangles for $\nu=2/3$. The arrow
indicates the critical angle $\theta_{\mathrm{c}}$. The solid line is a guide to the
eye. The dashed line refers to a simple estimate of the renormalization of the
tunneling amplitude by the parallel magnetic field. Relative to the actual
decrease, this one-body effect is very weak and we have neglected it. Inset:
Arrhenius plot of dissipation. The low temperature activation energy is $\Delta
= 8.66$K and yet the gap collapses at a much lower temperature scale of about
$0.4$K ($1/T\approx 2.5$). (After Murphy \textit{et al.} \cite{murphyPRL}).}
\label{fig:arrhenius}
\end{figure}
The low temperature activation energy $\Delta$ is, as already noted, much larger
than $\Delta_{\mathrm{SAS}}$. If $\Delta$ were nevertheless somehow a
single-particle gap, one would expect the Arrhenius law to be valid up to
temperatures of order $\Delta$. Instead one observes a fairly sharp leveling off
in the dissipation as the temperature increases past values as low as $\sim 0.05
\Delta$. This is consistent with the notion of a thermally induced collapse of
the order that had been producing the collective gap.
The third significant feature of the experimental data pointing to a
highly-ordered collective state is the strong response of the system to
relatively weak magnetic fields $B_{\parallel}$ applied in the plane of the 2D
electron gases. In fig.~(\ref{fig:arrhenius}) we see that the charge activation
gap drops dramatically as the magnetic field is tilted (keeping $B_{\perp}$
constant).
Within a model that neglects higher electric subbands, we can treat the electron
gases as strictly two-dimensional. This is important since $B_{\parallel}$ can
affect the system only if there are processes that carry electrons around closed
loops containing flux. A prototypical such process is illustrated in
fig.~(\ref{fig:figloop}).
\begin{figure}
\centerline{\epsfxsize=8cm
\epsffile{figloop.eps}}
\caption[]{A process in a double-layer two-dimensional electron gas system which
encloses flux from the parallel component of the magnetic field. One interpretation
of this process is that an electron tunnels from
the upper layer to the lower layer (near the left end of the figure). The resulting
particle-hole pair then travels coherently to the right and is annihilated by a subsequent
tunneling event in the reverse direction.
The quantum amplitude for such paths is sensitive to the parallel component of the field.}
\label{fig:figloop}
\end{figure}
An electron tunnels from one layer to the other at point A, and travels to point
B. Then it (or another indistinguishable electron) tunnels back and returns to
the starting point. The parallel field contributes to the quantum amplitude for
this process (in the 2D gas limit) a gauge-invariant Aharonov-Bohm phase factor
$\exp\left(2\pi i \Phi/\Phi_{0}\right)$ where $\Phi$ is the enclosed flux and
$\Phi_{0}$ is the quantum of flux.
Such loop paths evidently contribute significantly to correlations in the system
since the activation energy gap is observed to decrease very rapidly with
$B_{\parallel}$, falling by factors of order two or more until a critical field,
$B^{*}_{\parallel} \sim 0.8\mathrm{T}$, is reached at which the gap essentially
ceases changing \cite{murphyPRL}. To understand how remarkably small
$B^{*}_{\parallel}$ is, consider the following. We can define a length
$L_{\parallel}$ from the size of the loop needed to enclose one quantum of flux:
$L_{\parallel} B^{*}_{\parallel} d = \Phi_{0}$. ($L_{\parallel} [\hbox{\AA}] =
4.137 \times 10^{5} / \bigl(d [\hbox{\AA}]\, B^{*}_{\parallel} [\mathrm{T}]\bigr)$.) For
$B^{*}_{\parallel} = 0.8\mathrm{T}$ and $d = 150 \hbox{\AA}$, $L_{\parallel} =
2700 \hbox{\AA} $ which is approximately twenty times the spacing between
electrons in a given layer and thirty times larger than the quantized cyclotron
orbit radius $\ell \equiv (\hbar c / e B_{\perp})^{1/2}$ within an individual
layer. Significant drops in the excitation gap are already seen at fields as low
as 0.1~T, implying that enormous phase-coherent correlation lengths must exist.
Again this shows the highly collective, long-range nature of the ordering in
this system.
In the next subsection we shall briefly outline a detailed model which explains
all these observed effects.
\subsection{Interlayer Phase Coherence}
\label{sec:coherence}
The essential physics of spontaneous inter-layer phase coherence can be examined
from a microscopic point of view \cite{ahmz1,gapless,harfok,usPRL,Ilong} or a
macroscopic Chern-Simons field theory point of view
\cite{wenandzee,ezawa,usPRL,Ilong}, but it is perhaps most easily visualized in
the simple variational wave function which places the spins purely in the XY
plane \cite{Ilong}
\begin{equation}
|\psi\rangle = \prod_{k} \left\{c^{\dagger}_{k\uparrow} +
c^{\dagger}_{k\downarrow} e^{i\varphi}\right\} |0\rangle.
\label{eq:variational}
\end{equation}
Note for example, that if $\varphi=0$ then we have precisely the non-interacting
single Slater determinant ground state in which electrons are in the symmetric
state which, as discussed previously in the analysis of the effective Lagrangian
in eq.~(\ref{eq:easyplane}), minimizes the tunneling energy. This means that the
system has a definite total number of particles ($\nu=1$ exactly) but an
indefinite number of particles in each layer. In the absence of inter-layer
tunneling, the particle number in each layer is a good quantum number. Hence
this wave function represents a state of spontaneously broken symmetry
\cite{wenandzee,ezawa,Ilong} in the same sense that the BCS state for a
superconductor has indefinite (total) particle number but a definite phase
relationship between states of different particle number.
In the absence of tunneling ($t=0$) the energy cannot depend on the phase angle
$\varphi$ and the system exhibits a global $U(1)$ symmetry associated with
conservation of particle number in each layer \cite{wenandzee}. One can imagine
allowing $\varphi$ to vary slowly with position to produce excited states. Given
the $U(1)$ symmetry, the effective Hartree-Fock energy functional for these
states is restricted to have the leading form
\begin{equation}
H = \frac{1}{2}\rho_{s}\int d^{2}r |\nabla\varphi|^{2} + \ldots\,\,.
\label{eq:xymod}
\end{equation}
The origin of the finite `spin stiffness' $\rho_{s}$ is the loss of exchange
energy which occurs when $\varphi$ varies with position. Imagine that two
particles approach each other. They are in a linear superposition of states in
each of the layers (even though there is no tunneling!). If they are
characterized by the same phase $\varphi$, then the wave function is symmetric
under pseudospin exchange and so the spatial wave function is antisymmetric and
must vanish as the particles approach each other. This lowers the Coulomb
energy. If a phase gradient exists then there is a larger amplitude for the
particles to be near each other and hence the energy is higher. This loss of
exchange energy is the source of the finite spin stiffness and is what causes
the system to spontaneously `magnetize'.
We see immediately that the $U(1)$ symmetry leads to eq.~(\ref{eq:xymod}), which
defines an effective XY model containing vortex excitations that interact
logarithmically \cite{goldenfeld,boulevard}.
In a superconducting film the vortices interact
logarithmically because of the kinetic energy cost of the supercurrents
circulating around the vortex centers. Here the same logarithm appears, but
it is due to the potential energy cost (loss of exchange) associated with the
phase gradients (circulating pseudo-spin currents).
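The logarithm is easy to exhibit directly. For a single vortex $\varphi = \theta$ one has $|\nabla\varphi| = 1/r$, and eq.~(\ref{eq:xymod}) gives $E = \pi\rho_{s}\ln(R/a)$ between a core cutoff $a$ and system size $R$. A small quadrature check (a sketch in assumed units $\rho_{s}=1$, not part of the notes):

```python
import math

def vortex_energy(rho_s, a, R, n=2000):
    """Integrate (1/2)*rho_s*|grad phi|^2 over the annulus a < r < R for
    a single vortex phi = theta, for which |grad phi| = 1/r."""
    total = 0.0
    for k in range(n):  # trapezoid rule on a log-spaced radial grid
        r0 = a * (R / a) ** (k / n)
        r1 = a * (R / a) ** ((k + 1) / n)
        # angular integral done analytically: (1/2)*rho_s*(1/r^2)*(2*pi*r)
        f0 = math.pi * rho_s / r0
        f1 = math.pi * rho_s / r1
        total += 0.5 * (f0 + f1) * (r1 - r0)
    return total

E = vortex_energy(rho_s=1.0, a=1.0, R=1.0e3)
E_analytic = math.pi * math.log(1.0e3)  # pi * rho_s * ln(R/a)
```

Doubling $R$ adds a fixed $\pi\rho_{s}\ln 2$, the hallmark of the logarithmic vortex--antivortex interaction.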
Hartree-Fock estimates \cite{Ilong} indicate that the spin stiffness $\rho_{s}$
and hence the Kosterlitz-Thouless (KT) critical temperature are on the scale of
0.5~K in typical samples. Vortices in the $\varphi$ field are reminiscent of
Laughlin's fractionally charged quasiparticles but in this case carry charges
$\pm\frac{1}{2} e$ and can be left- or right-handed for a total of four
`flavors' \cite{usPRL,Ilong}. It is also possible to show \cite{Ilong,II} that
the presence of spontaneous magnetization due to the finite spin stiffness means
that the charge excitation gap is finite (even though the tunnel splitting is
zero). Thus the QHE survives \cite{Ilong} the limit $\Delta_{\mathrm{SAS}}
\longrightarrow 0$.
Since the `charge' conjugate to the phase $\varphi$ is the $z$ component of the
pseudo spin $S^{z}$, the pseudospin `supercurrent'
\begin{equation}
\vec{J} = \rho_{s} \vec{\nabla}\varphi
\end{equation}
represents oppositely directed charge currents in each layer. Below the KT
transition temperature, such current flow will be dissipationless (in linear
response) just as in an ordinary superfluid. Likewise there will be a linearly
dispersing collective Goldstone mode as in a superfluid
\cite{ahmz1,wenandzee,ezawa,usPRL,Ilong} rather than a mode with quadratic
dispersion as in the SU(2) symmetric ferromagnet. (This is somewhat akin to the
difference between an ideal bose gas and a repulsively interacting bose gas.)
If found, this Kosterlitz-Thouless transition would be the first example of a
finite-temperature phase transition in a QHE system. The transition itself has
not yet been observed because the tunneling amplitude $t$ is significant in
samples whose layers are close enough together to be strongly correlated. As
we have seen above however, significant effects which imply the existence of
long-range XY order correlations have been found. Whether or not an appropriate
sample can be constructed to observe the phase transition is an open question at
this point.
\boxedtext{\begin{exercise}
Following the method used to derive eq.~(\ref{eq:1105228}), show that the
collective mode for the Lagrangian in eq.~(\ref{eq:easyplane}) has linear rather
than quadratic dispersion due to the presence of the $\beta$ term. (Assume
$\Delta=t=0$.) Hint: Consider small fluctuations of the magnetization away from
$\vec{m} = (1,0,0)$ and choose an appropriate gauge for $\cal A$ for this
circumstance.
Present a qualitative argument that layer imbalance caused by $\Delta$ does not
fundamentally change any of the results described in this section but rather
simply renormalizes quantities like the collective mode velocity. That is,
explain why the $\nu=1$ QHE state is robust against charge imbalance. (This is
an important signature of the underlying physics. Certain other interlayer
correlated states (such as the one at total filling $\nu=1/2$) are quite
sensitive to charge imbalance \cite{GMbook}.)
\label{ex:981201}
\end{exercise}}
\subsection{Interlayer Tunneling and Tilted Field Effects}
As mentioned earlier, a finite tunneling amplitude $t$ between the layers breaks
the $U(1)$ symmetry
\begin{equation}
H_{\mathrm{eff}} = \int d^{2}r \left[\frac{1}{2}\rho_{s}
\vert\nabla\varphi\vert^{2} - nt \cos{\varphi}\right]
\label{eq:H_eff}
\end{equation}
by giving a preference to symmetric tunneling states. This can be seen from the
tunneling Hamiltonian
\begin{equation}
H_{\mathrm{T}} = - t \int d^{2}r \left\{\psi_{\uparrow}^{\dagger} (\mathbf{r})
\psi_{\downarrow} (\mathbf{r}) + \psi_{\downarrow}^{\dagger} (\mathbf{r})
\psi_{\uparrow} (\mathbf{r})\right\}
\label{eq:tunnel}
\end{equation}
which can be written in the spin representation as
\begin{equation}
H_{\mathrm{T}} = - 2t \int d^{2}r S^{x}(\mathbf{r}).
\end{equation}
(Recall that the eigenstates of $S^{x}$ are symmetric and antisymmetric
combinations of up and down.)
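This remark is a two-by-two matrix statement and can be checked in a few lines (a sketch; the basis ordering (upper layer, lower layer) is our convention, not notation from the text):

```python
import math

# S^x = sigma^x / 2 in the (upper-layer, lower-layer) amplitude basis
Sx = [[0.0, 0.5],
      [0.5, 0.0]]

def apply(mat, vec):
    return [sum(mat[i][j] * vec[j] for j in range(2)) for i in range(2)]

s = 1 / math.sqrt(2)
symmetric = [s, s]       # bonding combination: S^x eigenvalue +1/2
antisymmetric = [s, -s]  # antibonding combination: S^x eigenvalue -1/2

# Since H_T = -2t * S^x, the symmetric state costs energy -t per electron
# and the antisymmetric state +t, so tunneling favors pseudospin along +x.
```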
As the separation $d$ increases, a critical point $d^{*}$ is reached at which
the magnetization vanishes and the ordered phase is destroyed by quantum
fluctuations \cite{usPRL,Ilong}. This is illustrated in
fig.~(\ref{fig:qhe-noqhe}). For \textit{finite} tunneling $t$, we will see below
that the collective mode becomes massive and quantum fluctuations will be less
severe. Hence the phase boundary in fig.~(\ref{fig:qhe-noqhe}) curves upward
with increasing $\Delta_{\mathrm{SAS}}$.
The introduction of a finite tunneling amplitude destroys the $U(1)$ symmetry and
makes the simple vortex-pair configuration extremely expensive. To lower the
energy the system distorts the spin deviations into a domain wall or `string'
connecting the vortex cores as shown in fig.~(\ref{fig:meron_string}).
\begin{figure}
\centerline{\epsfysize=10cm
\epsffile{meron_string.xfig.eps}}
\caption[]{Meron pair connected by a domain wall. Each meron carries a charge
$e/2$ which tries to repel the other one.}
\label{fig:meron_string}
\end{figure}
The spins are oriented in the $\hat{x}$ direction everywhere except in the
central domain wall region where they tumble rapidly through $2\pi$. The domain
wall has a fixed energy per unit length and so the vortices are now confined by
a linear `string tension' rather than logarithmically. We can estimate the
string tension by examining the energy of a domain wall of infinite length. The
optimal form for a domain wall lying along the $y$ axis is given by
\begin{equation}
\varphi(\vec{r}) = 2 \arcsin{[\tanh{(\lambda x)}]},
\label{eq:soliton}
\end{equation}
where the characteristic width of the string is
\begin{equation}
\lambda^{-1} = \left[\frac{2\pi\ell^{2}\rho_{s}}{t}\right]^\frac{1}{2}.
\end{equation}
The resulting string tension is
\begin{equation}
T_{0} = 8 \left[\frac{t\rho_{s}}{2\pi\ell^{2}}\right]^\frac{1}{2}.
\label{eq:tension_{0}}
\end{equation}
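The string tension can be checked numerically (an illustrative sketch, not part of the notes). In dimensionless units with $\rho_{s} = t/2\pi\ell^{2} = 1$ the wall width is $\lambda^{-1} = 1$ and $T_{0}$ should come out equal to $8$. The snippet relaxes a discretized wall winding from $0$ to $2\pi$, which is the profile of eq.~(\ref{eq:soliton}) up to a constant shift of $\varphi$ by $\pi$ (a choice of phase origin):

```python
import math

# Dimensionless units: rho_s = 1 and g = t/(2*pi*l^2) = 1, so the wall
# width is 1 and the tension should approach T0 = 8*sqrt(g*rho_s) = 8.
rho_s, g = 1.0, 1.0
L, n = 10.0, 400
dx = 2 * L / n
xs = [-L + i * dx for i in range(n + 1)]
phi = [math.pi * (1 + math.tanh(x)) for x in xs]  # rough initial wall, 0 -> 2*pi

def wall_energy(phi):
    e = 0.0
    for i in range(n):
        grad = (phi[i + 1] - phi[i]) / dx
        mid = 0.5 * (phi[i] + phi[i + 1])
        e += (0.5 * rho_s * grad ** 2 + g * (1 - math.cos(mid))) * dx
    return e

# Gauss-Seidel gradient descent on the interior points (endpoints pinned)
step = 0.2 * dx ** 2
for sweep in range(4000):
    for i in range(1, n):
        lap = (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx ** 2
        phi[i] += step * (rho_s * lap - g * math.sin(phi[i]))

T0_numeric = wall_energy(phi)  # energy per unit length of the relaxed wall
```

The relaxed tension lands within a few percent of $8\sqrt{t\rho_{s}/2\pi\ell^{2}}$, confirming the quoted $T_{0}$.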
Provided the string is long enough ($R\lambda \gg 1$), the total energy of a
segment of length $R$ will be well-approximated by the expression
\begin{equation}
E_{\mathrm{pair}}' = 2E_{\mathrm{mc}}' + \frac{e^{2}}{4R} + T_{0}R.
\label{string_pair}
\end{equation}
This is minimized at $R^{*} = \sqrt{e^{2}/4T_{0}}$. The linear confinement
brings the charged vortices closer together and rapidly increases the Coulomb
energy. In the limit of very large tunneling, the meron pair shrinks and the
single-particle excitation (hole or extra spin-reversed electron) limit must be
recovered.
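The minimization of eq.~(\ref{string_pair}) is elementary: $dE/dR = -e^{2}/4R^{2} + T_{0} = 0$ gives $R^{*} = \sqrt{e^{2}/4T_{0}}$, and the $R$-dependent part of the energy at the minimum is $\sqrt{e^{2}T_{0}}$. A brute-force check in dimensionless units (the values $e^{2}=1$, $T_{0}=1/4$ are illustrative, and the $R$-independent meron core energies are dropped):

```python
import math

e2, T0 = 1.0, 0.25  # illustrative values; core energies omitted (R-independent)

def pair_energy(R):
    """Coulomb repulsion of the two charge-e/2 merons plus the string tension."""
    return e2 / (4 * R) + T0 * R

R_star = math.sqrt(e2 / (4 * T0))            # claimed minimizer: 1.0 here
R_grid = [0.01 * k for k in range(1, 1000)]  # brute-force scan of separations
R_best = min(R_grid, key=pair_energy)
# at the minimum the R-dependent energy equals sqrt(e2 * T0)
```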
The presence of a parallel field $B_{\parallel}$ can be conveniently described
with the gauge choice
\begin{equation}
\vec A_{\parallel} = xB_{\parallel} \hat{z}
\end{equation}
where $\hat{z}$ is the growth direction. In this gauge the tunneling amplitude
transforms to
\begin{equation}
t \rightarrow t\; e^{iQx}
\end{equation}
and the energy becomes
\begin{equation}
H = \int d^{2}r \left[\frac{1}{2} \rho_{s} \vert\vec{\nabla}\varphi\vert^{2} -
\frac{t}{2\pi\ell^{2}}\cos{(\varphi - Qx)}\right]
\end{equation}
where $Q = 2\pi /L_{\parallel}$ and $L_{\parallel}$ is the length associated
with one quantum of flux for the loops shown in fig.~\ref{fig:figloop}. This is
the so-called Pokrovsky-Talapov model which exhibits a
commensurate-incommensurate phase transition. At low $B_{\parallel}$, $Q$ is
small and the low energy state has $\varphi \approx Qx$; i.e. the local spin
orientation `tumbles'. In contrast, at large $B_{\parallel}$ the gradient cost
is too large and we have $\varphi \approx \hbox{constant}$. It is possible to
show \cite{Ilong,II} that this phase transition semiquantitatively explains the
rapid drop and subsequent leveling off of the activation energy vs.
$B_{\parallel}$ seen in fig.~(\ref{fig:arrhenius}).
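The competition can be illustrated with a crude two-trial-state comparison (a sketch only, in assumed units $\rho_{s} = t/2\pi\ell^{2} = 1$; the true Pokrovsky-Talapov transition involves soliton proliferation and sits at a $Q$ differing by a numerical factor). The fully tumbling state $\varphi = Qx$ gains the entire tunneling energy at a gradient cost $\frac{1}{2}\rho_{s}Q^{2}$, while a uniform $\varphi$ gains nothing because the cosine averages to zero:

```python
import math

rho_s, g = 1.0, 1.0  # g stands for t/(2*pi*l^2); illustrative units

def e_tumbling(Q):
    # phi(x) = Q*x locks to the field: cos(phi - Q*x) = 1 everywhere,
    # so the full tunneling energy is gained at a gradient cost.
    return 0.5 * rho_s * Q ** 2 - g

def e_uniform(Q):
    # phi = const: cos(phi - Q*x) averages to zero over a large system.
    return 0.0

Q_c = math.sqrt(2 * g / rho_s)  # crossover of the two variational energies
```

Small $Q$ (weak $B_{\parallel}$) favors the tumbling, commensurate state; large $Q$ favors $\varphi \approx \hbox{constant}$, in accord with the behavior described above.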
\boxedtext{\begin{exercise}
Derive eq.~(\ref{eq:soliton}) for the form of the `soliton' that minimizes the energy cost
for the Hamiltonian in eq.~(\ref{eq:H_eff}).
\label{ex:string}
\end{exercise}}
\section{FQHE Edge States}
We learned in our study of the integer QHE that gapless edge excitations exist
even when the bulk has a large excitation gap. Because the bulk is
incompressible the only gapless neutral excitations must be area-preserving
shape distortions such as those illustrated for a disk geometry in
fig.~(\ref{fig:edgewaves}a).
\begin{figure}
\centerline{\epsfysize=14cm
\epsffile{shape1.xfig.eps}}
\caption[]{Area-preserving shape distortions of the incompressible quantum Hall
state. (a) IQHE Laughlin liquid `droplet' at $\nu=1$. (b) FQHE annulus at
$\nu=1/m$ formed by injecting a large number $n$ of flux quanta at the origin to
create $n$ quasiholes. There are thus two edge modes of opposite chirality.
Changing $n$ by one unit transfers fractional charge $\nu e$ from one edge to
the other by expanding or shrinking the size of the central hole. Thus the edge
modes have topological sectors labeled by the `winding number' $n$ and one can
view the gapless edge excitations as a gas of fractionally charged Laughlin
quasiparticles.}
\label{fig:edgewaves}
\end{figure}
Because of the confining potential at the edges these shape distortions have a
characteristic velocity produced by the $\vec{E} \times \vec{B}$ drift. It is
possible to show that this view of the gapless neutral excitations is precisely
equivalent to the usual Fermi gas particle-hole pair excitations that we
considered previously in our discussion of edge states. Recall that we argued
that the contour line of the electrostatic potential separating the occupied
from the empty states could be viewed as a real-space analog of the Fermi
surface (since position and momentum are equivalent in the Landau gauge). The
charged excitations at the edge are simply ordinary electrons added or removed
from the vicinity of the edge.
In the case of a fractional QHE state at $\nu = 1/m$ the bulk gap is caused by
Coulomb correlations and is smaller but still finite. Again the only gapless
excitations are area-preserving shape distortions. Now however the charge of
each edge can be varied in units of $e/m$. Consider the annulus of Hall fluid
shown in fig.~(\ref{fig:edgewaves}b). The extension of the Laughlin wave
function $\psi_{m}$ to this situation is
\begin{equation}
\psi_{mn}[z] = \left(\prod_{j=1}^{N} z_{j}^{n}\right)\; \psi_{m}.
\end{equation}
This simply places a large number $n \gg 1$ of quasiholes at the origin.
Following the plasma analogy we see that this looks like a highly charged
impurity at the origin which repels the plasma, producing the annulus shown in
fig.~(\ref{fig:edgewaves}b). Each time we increase $n$ by one unit, the
annulus expands. We can view this expansion as increasing the electron number at
the outer edge by $1/m$ and reducing it by $1/m$ at the inner edge. (Thereby
keeping the total electron number integral as it must be.)
It is appropriate to view the Laughlin quasiparticles, which are gapped in the
bulk, as being liberated at the edge. The gapless shape distortions in the Hall
liquid are thus excitations in a `gas' of fractionally charged quasiparticles.
This fact produces a profound alteration in the tunneling density of states to
inject an electron into the system. An electron which is suddenly added to an
edge (by tunneling through a barrier from an external electrode) will have very
high energy unless it breaks up into $m$ Laughlin quasiparticles. This leads to
an `orthogonality catastrophe' which simply means that the probability for this
process is smaller and smaller for final states of lower and lower energy. As a
result the current-voltage characteristic for the tunnel junction becomes
non-linear \cite{KaneFisher,Chamon,Wen}
\begin{equation}
I \sim V^{m}.
\end{equation}
For the filled Landau level $m=1$ the quasiparticles have charge $q = e/m = e$
and are ordinary electrons. Hence there is no orthogonality catastrophe and the
I-V characteristic is linear as expected for an ordinary metallic tunnel
junction. The non-linear tunneling for the $m=3$ state is shown in
fig.~(\ref{fig:changdata}).
\begin{figure}
\centerline{\epsfysize=14cm
\epsffile{grayson.eps}}
\caption[]{Non-linear current voltage response for tunneling an electron into a
FQHE edge state. Because the electron must break up into $m$ fractionally
charged quasiparticles, there is an orthogonality catastrophe leading to a
power-law density of states. The flattening at low currents is due to the finite
temperature. The upper panel shows the $\nu=1/3$ Hall plateau. The theory
\cite{KaneFisher,Chamon} works extremely well on the 1/3 quantized Hall plateau,
but the unexpectedly smooth variation of the exponent with magnetic field away
from the plateau shown in the lower panel is not yet fully understood. (After M.
Grayson \textit{et al.}, Ref.~\cite{grayson}.)}
\label{fig:changdata}
\end{figure}
\section{Quantum Hall Ferromagnets}
\label{sec:qhf}
\subsection{Introduction}
\label{subsec:QHEIntroduction}
Naively one might imagine that electrons in the QHE have their spin dynamics
frozen out by the Zeeman splitting $g\mu_{\mathrm{B}}B$. In free space with $g = 2$
(neglecting QED corrections) the Zeeman splitting is exactly equal to the
cyclotron splitting $\hbar\omega_{\mathrm{c}}$, as illustrated in
fig.~(\ref{fig:zeeman}a).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{landau_level.xfig.eps}}
\caption[]{(a) Landau energy levels for an electron in free space. Numbers label
the Landau levels and $+ (-)$ refers to spin up (down). Since the $g$ factor is
2, the Zeeman splitting is exactly equal to the Landau level spacing,
$\hbar\omega_{c}$ and there are extra degeneracies as indicated. (b) Same for an
electron in GaAs. Because the effective mass is small and $g\approx -0.4$, the
degeneracy is strongly lifted and the spin assignments are reversed.}
\label{fig:zeeman}
\end{figure}
Thus at low temperatures we would expect that for filling factors $\nu < 1$ all
the spins would be fully aligned. It turns out however that this naive expectation
is incorrect in GaAs for two reasons. First, the small effective mass $(m^{*} =
0.068)$ in the conduction band of GaAs increases the cyclotron energy by a
factor of $m/m^{*} \sim 14$, to roughly $100~\mathrm{K}$ at typical fields.
Second, spin-orbit scattering tumbles the spins around in a way which reduces
the magnitude of their effective coupling to the external magnetic field by a
factor of $5$, making the $g$ factor $\approx -0.4$. The Zeeman energy is thus
some 70 times smaller than the cyclotron energy and typically has a value of
about 2~K, as indicated in fig.~(\ref{fig:zeeman}b).
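These scales are easy to check numerically (a rough sketch; $B = 7~\mathrm{T}$ is an assumed representative field, not a value from the text, and the ratio of cyclotron to Zeeman energy is independent of $B$):

```python
# Sanity check of the GaAs energy-scale hierarchy using standard constants.
hbar = 1.054571e-34   # J s
e = 1.602177e-19      # C
m_e = 9.109384e-31    # kg
k_B = 1.380649e-23    # J/K
mu_B = 9.274010e-24   # J/T (Bohr magneton)

B = 7.0               # tesla -- assumed typical field, not from the text
m_star = 0.068 * m_e  # GaAs conduction-band effective mass
g_star = 0.4          # magnitude of the effective g factor in GaAs

cyclotron_K = hbar * (e * B / m_star) / k_B  # roughly 140 K at 7 T
zeeman_K = g_star * mu_B * B / k_B           # roughly 2 K at 7 T
ratio = cyclotron_K / zeeman_K               # about 70, independent of B
```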
This decoupling of the scales of the orbital and spin energies means that it is
possible to be in a regime in which the orbital motion is fully quantized
($k_{\mathrm{B}}T \ll \hbar\omega_{c}$) but the low-energy spin fluctuations are not
completely frozen out ($k_{\mathrm{B}}T \sim g^{*}\mu_{\mathrm{B}}B$). The spin dynamics
in this regime are extremely unusual and interesting because the system is an
itinerant magnet with a quantized Hall coefficient. As we shall see, this leads
to quite novel physical effects.
The introduction of the spin degree of freedom means that we are dealing with
the QHE in multicomponent systems. This subject has a long history going back
to an early paper by Halperin \cite{BIHhelv} and has been reviewed extensively
\cite{GMbook,JPEbook,TAPASHbook}. In addition to the spin degree of freedom
there has been considerable recent interest in other multicomponent systems in
which spin is replaced by a pseudo-spin representing the layer index in double
well QHE systems or the electric subband index in wide single well systems.
Experiments on these systems are discussed by Shayegan in this volume
\cite{ShayeganLesHouches} and have also been reviewed in \cite{JPEbook}.
Our discussion will focus primarily on ferromagnetism near filling factor $\nu =
1$. In the subsequent section we will address analogous effects for pseudo-spin
degrees of freedom in multilayer systems.
\subsection{Coulomb Exchange}
\label{subsec:coulomb}
We tend to think of the integer QHE as being associated with the gap due to the
kinetic energy and ascribe importance to the Coulomb interaction only in the
fractional QHE. However, the study of ferromagnetism near integer filling factor $\nu
= 1$ has taught us that Coulomb interactions play an important role there as
well \cite{Sondhi}.
Magnetism occurs not because of direct magnetic forces, but rather because of a
combination of electrostatic forces and the Pauli principle. In a fully
ferromagnetically aligned state all the spins are parallel and hence the spin
part of the wave function is exchange symmetric
\begin{equation}
|\psi\rangle = \Phi(z_{1}, \ldots ,z_{N})\;
|\uparrow\uparrow\uparrow\uparrow\uparrow\ldots\uparrow\rangle .
\label{eq:052601}
\end{equation}
The spatial part $\Phi$ of the wave function must therefore be fully
antisymmetric and vanish when any two particles approach each other. This means
that each particle is surrounded by an `exchange hole' which thus lowers the
Coulomb energy per particle as shown in eq.~(\ref{eq:12109}). For filling factor
$\nu = 1$
\begin{equation}
\frac{\langle V\rangle}{N} = -\sqrt{\frac{\pi}{8}}\; \frac{e^{2}}{\epsilon\ell}
\sim -200\,\mathrm{K}.
This energy scale is two orders of magnitude larger than the Zeeman splitting
and hence strongly stabilizes the ferromagnetic state. Indeed at $\nu = 1$ the
ground state is spontaneously fully polarized at zero temperature even in the
absence of the Zeeman term. Ordinary ferromagnets like iron are generally only
partially polarized because of the extra kinetic energy cost of raising the
fermi level for the majority carriers. Here however the kinetic energy has been
quenched by the magnetic field and all states in the lowest Landau level are
degenerate. For $\nu = 1$ the large gap to the next Landau level means that we
know the spatial wave function $\Phi$ essentially exactly. It is simply the
single Slater determinant representing the fully filled Landau level. That is,
it is the $m = 1$ Laughlin wave function. This simple circumstance makes this
perhaps the world's best understood ferromagnet.
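For reference, the coefficient $\sqrt{\pi/8}$ follows from the lowest Landau level exchange integral with the Coulomb interaction $v(k) = 2\pi e^{2}/\epsilon k$, converting the sum over $k$ to an integral (this reading of the conventions is mine):

```latex
\frac{\langle V\rangle}{N}
  = -\frac{1}{2} \int \frac{d^{2}k}{(2\pi)^{2}}\;
    \frac{2\pi e^{2}}{\epsilon k}\; e^{-\frac{1}{2}k^{2}\ell^{2}}
  = -\frac{e^{2}}{2\epsilon} \int_{0}^{\infty} dk\; e^{-\frac{1}{2}k^{2}\ell^{2}}
  = -\sqrt{\frac{\pi}{8}}\; \frac{e^{2}}{\epsilon\ell}
```

Numerically $\sqrt{\pi/8} \approx 0.63$.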
\subsection{Spin Wave Excitations}
\label{subsec:spinwave}
It turns out that the low-lying `magnon' (spin wave) excited states can also be
obtained exactly. Before doing this for the QHE system let us remind ourselves
how the calculation goes in the lattice Heisenberg model for $N$ local moments
in an insulating ferromagnet
\begin{eqnarray}
H &=& -J \sum_{\langle ij\rangle} \vec{S}_{i} \cdot \vec{S}_{j} - \Delta
\sum_{j} S_{j}^{z}\nonumber\\
&=& -J \sum_{\langle ij\rangle} \left\{ S_{i}^{z} S_{j}^{z} + \frac{1}{2} \left(
S_{i}^{+} S_{j}^{-} + S_{i}^{-} S_{j}^{+}\right)\right\} - \Delta \sum_{j}
S_{j}^{z}
\end{eqnarray}
The ground state for $J > 0$ is the fully ferromagnetic state with total spin $S
= N/2$. Let us choose our coordinates in spin space so that $S_{z} = N/2$.
Because the spins are fully aligned the spin-flip terms in $H$ are ineffective
and (ignoring the Zeeman term)
\begin{equation}
H\; |\uparrow\uparrow\uparrow\ldots\uparrow\rangle = -\frac{J}{4} N_{b}\;
|\uparrow\uparrow\uparrow\ldots\uparrow\rangle
\end{equation}
where $N_{b}$ is the number of near-neighbor bonds and we have set $\hbar = 1$.
There are of course $2S + 1 = N + 1$ states in this multiplet of total spin $S =
N/2$, all degenerate in the absence of the Zeeman coupling. The other $N$ states
are generated by successive applications of the total spin lowering operator
\begin{eqnarray}
S^{-} &\equiv& \sum_{j=1}^{N} S_{j}^{-}\\
S^{-}\; |\uparrow\uparrow\uparrow\ldots\uparrow\rangle &=&
|\downarrow\uparrow\uparrow\ldots\uparrow\rangle +
|\uparrow\downarrow\uparrow\ldots\uparrow\rangle\nonumber\\
&+&|\uparrow\uparrow\downarrow\ldots\uparrow\rangle + \dots
\end{eqnarray}
It is not hard to show that the one-magnon excited states are created by a
closely related operator
\begin{equation}
S_{\vec{q}}^{-} = \sum_{j=1}^{N} e^{-i\vec{q}\cdot\vec{R}_{j}}\; S_{j}^{-}
\end{equation}
where $\vec{q}$ lies inside the Brillouin zone and is the magnon wave
vector.\footnote{We use the phase factor $e^{-i\vec{q}\cdot\vec{R}_{j}}$ here
rather than $e^{+i\vec{q}\cdot\vec{R}_{j}}$ simply to be consistent with
$S_{\vec{q}}^{-}$ being the Fourier transform of $S_{j}^{-}$.} Denote these
states by
\begin{equation}
|\psi_{\vec{q}}\rangle = S_{\vec{q}}^{-}\; |\psi_{0}\rangle
\label{eq:052608}
\end{equation}
where $|\psi_{0}\rangle$ is the ground state. Because there is one flipped spin
in these states the transverse part of the Heisenberg interaction is able to
move the flipped spin from one site to a neighboring site
\begin{eqnarray}
H|\psi_{\vec{q}}\rangle &=& \left(E_{0} + \Delta + \frac{Jz}{2}\right)\;
|\psi_{\vec{q}}\rangle\nonumber\\
&&-\frac{J}{2} \sum_{\vec{\delta}} \sum_{j=1}^{N}
e^{-i\vec{q}\cdot\vec{R}_{j}}\; S_{j+\vec{\delta}}^{-}\; |\psi_{0}\rangle\\
H|\psi_{\vec{q}}\rangle &=& (E_{0} + \epsilon_{\vec{q}})\;
|\psi_{\vec{q}}\rangle
\end{eqnarray}
where $z$ is the coordination number, $\vec{\delta}$ is summed over near
neighbor lattice vectors and the magnon energy is
\begin{equation}
\epsilon_{\vec{q}} \equiv \frac{Jz}{2}\; \left\{1 - \frac{1}{z} \sum_{\vec{\delta}}
e^{-i\vec{q}\cdot\vec{\delta}}\right\} + \Delta
\end{equation}
For small $\vec{q}$ the dispersion is quadratic and for a 2D square lattice
\begin{equation}
\epsilon_{\vec{q}} \sim \frac{Ja^{2}}{2}\, q^{2} + \Delta
\end{equation}
where $a$ is the lattice constant.
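As a sanity check on this derivation, the one-magnon sector can be diagonalized numerically on a small ring (1D chain, $z = 2$), where the same formula gives $\epsilon_{q} = J(1 - \cos q) + \Delta$. A minimal sketch; the system size and couplings are arbitrary choices of mine.

```python
import numpy as np

# One-magnon sector of the spin-1/2 Heisenberg ferromagnet on an N-site ring
# (z = 2).  Basis state |j> = spin flipped at site j, all others up; energies
# are measured from the ground state energy E_0.
N, J, Delta = 8, 1.0, 0.2

H = np.zeros((N, N))
for j in range(N):
    H[j, j] = J + Delta              # two broken bonds at J/2 each, plus Zeeman
    H[j, (j + 1) % N] = -J / 2       # transverse term hops the flipped spin
    H[j, (j - 1) % N] = -J / 2

levels = np.sort(np.linalg.eigvalsh(H))
q = 2 * np.pi * np.arange(N) / N
magnon = np.sort(J * (1 - np.cos(q)) + Delta)   # (Jz/2)(1 - gamma_q) + Delta, z = 2

assert np.allclose(levels, magnon)
print(levels)
```

The $q = 0$ magnon costs only the Zeeman energy $\Delta$, as expected for a Goldstone mode gapped solely by the symmetry-breaking field.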
This is very different from the result for the antiferromagnet which has a
linearly dispersing collective mode. There the ground and excited states can
only be approximately determined because the ground state does not have all the
spins parallel and so is subject to quantum fluctuations induced by the
transverse part of the interaction. This physics will reappear when we study
non-collinear states in QHE magnets away from filling factor $\nu = 1$.
The magnon dispersion for the ferromagnet can be understood in terms of bosonic
`particle' (the flipped spin) hopping on the lattice with a tight-binding model
dispersion relation. The magnons are bosons because spin operators on different
sites commute. They are not free bosons however because of the hard core
constraint that (for spin 1/2) there can be no more than one flipped spin per
site. Hence multi-magnon excited states can not be computed exactly. Some nice
renormalization group arguments about magnon interactions can be found in
\cite{ReadandSachdev}.
The QHE ferromagnet is itinerant and we have to develop a somewhat different
picture. Nevertheless there will be strong similarities to the lattice
Heisenberg model. The exact ground state is given by eq.~(\ref{eq:052601}) with
\begin{equation}
\Phi(z_{1},\ldots,z_{N}) = \prod_{i<j} (z_{i} - z_{j})\; e^{-
\frac{1}{4}\sum_{k}|z_{k}|^{2}}.
\end{equation}
To find the spin wave excited states we need to find the analog of
eq.~(\ref{eq:052608}). The Fourier transform of the spin lowering operator for
the continuum system is
\begin{equation}
S_{\vec{q}}^{-} \equiv \sum_{j=1}^{N} e^{-i\vec{q}\cdot\vec{r}_{j}}\; S_{j}^{-}
\label{eq:052614}
\end{equation}
where $\vec{r}_{j}$ is the position operator for the $j$th particle. Recall from
eq.~(\ref{eq:1221}) that we had to modify Feynman's theory of the collective
mode in superfluid helium by projecting the density operator onto the Hilbert
space of the lowest Landau level. This suggests that we do the same in
eq.~(\ref{eq:052614}) to obtain the projected spin flip operator. In contrast to
the good but approximate result we obtained for the collective density mode,
this procedure actually yields the \textit{exact} one-magnon excited state (much
like we found for the lattice model).
Using the results of appendix~\ref{app:projection}, the projected spin lowering
operator is
\begin{equation}
\bar{S}_{q}^{-} = e^{-\frac{1}{4}|q|^{2}} \sum_{j=1}^{N} \tau_{q}(j)\; S_{j}^{-}
\label{eq:052615}
\end{equation}
where $q$ is the complex number representing the dimensionless wave vector
$\vec{q}\ell$ and $\tau_{q}(j)$ is the magnetic translation operator for the
$j$th particle. The commutator of this operator with the Coulomb interaction
Hamiltonian is
\begin{eqnarray}
{}[H,\bar{S}_{q}^{-}] &=& \frac{1}{2} \sum_{k\neq 0} v(k)\;
\left[\bar{\rho}_{-k}\bar{\rho}_{k},\bar{S}_{q}^{-}\right]\nonumber\\
&=& \frac{1}{2} \sum_{k\neq 0} v(k)\; \left\{\bar{\rho}_{-k}
\left[\bar{\rho}_{k},\bar{S}_{q}^{-}\right] +
\left[\bar{\rho}_{-k},\bar{S}_{q}^{-}\right]\; \bar{\rho}_{k}\right\}.
\end{eqnarray}
We will shortly be applying this to the fully polarized ground state
$|\psi\rangle$. As discussed in appendix~\ref{app:projection}, no density wave
excitations are allowed in this state and so it is annihilated by
$\bar{\rho}_{k}$. Hence we can without approximation drop the second term above
and replace the first one by
\begin{equation}
{}[H,\bar{S}_{q}^{-}]\; |\psi\rangle = \frac{1}{2} \sum_{k\neq 0} v(k)\;
\left[\bar{\rho}_{-k},\left[\bar{\rho}_{k},\bar{S}_{q}^{-}\right]\right]\;
|\psi\rangle
\end{equation}
Evaluation of the double commutator following the rules in
appendix~\ref{app:projection} yields
\begin{equation}
{}[H,\bar{S}_{q}^{-}]\; |\psi\rangle = \epsilon_{q}\; \bar{S}_{q}^{-}\;
|\psi\rangle
\end{equation}
where
\begin{equation}
\epsilon_{q} \equiv 2\sum_{k\neq 0} e^{-\frac{1}{2}|k|^{2}}\; v(k)\;
\sin^{2}{\left(\frac{1}{2} q \wedge k\right)}.
\label{eq:1123195}
\end{equation}
Since $|\psi\rangle$ is an eigenstate of $H$, this proves that
$\bar{S}_{q}^{-}\; |\psi\rangle$ is an exact excited state of $H$ with
excitation energy $\epsilon_{q}$. In the presence of the Zeeman coupling
$\epsilon_{q} \rightarrow \epsilon_{q} + \Delta$.
This result tells us that, unlike the case of the density excitation, the
single-mode approximation is exact for the spin density excitation.
The only assumption we made is that the ground state is fully polarized and has
$\nu = 1$.
For small $q$ the dispersion starts out quadratically
\begin{equation}
\epsilon_{q} \sim Aq^{2}
\label{eq:1124198}
\end{equation}
with
\begin{equation}
A \equiv \frac{1}{4} \sum_{k\neq 0} e^{-\frac{1}{2}|k|^{2}}\; v(k)\; |k|^{2}
\end{equation}
as can be seen by expanding the sine function to lowest order. For very large
$q$, the $\sin^{2}$ factor can be replaced by its average value of $\frac{1}{2}$ to yield
\begin{equation}
\epsilon_{q} \sim \sum_{k\neq 0} v(k)\; e^{-\frac{1}{2}|k|^{2}}.
\end{equation}
Thus the energy saturates at a constant value for $q \rightarrow \infty$ as
shown in fig.~(\ref{fig:LLLspinwave}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{spinwave.xfig.eps}}
\caption[]{Schematic illustration of the QHE ferromagnet spinwave dispersion.
There is a gap at small $k$ equal to the Zeeman splitting,
$\Delta_{\mathrm{Z}}$. At large wave vectors, the energy saturates at the
Coulomb exchange energy scale $\Delta_{x} + \Delta_{\mathrm{Z}} \sim 100$K.}
\label{fig:LLLspinwave}
\end{figure}
(Note that in the lattice model the wave vectors are restricted to the first
Brillouin zone, but here they are not.)
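Eq.~(\ref{eq:1123195}) can be evaluated explicitly for the Coulomb interaction. Interpreting the sum as an integral with $v(k) = 2\pi/k$ in units of $e^{2}/\epsilon\ell$ (my reading of the conventions), the angular average reduces it to $\epsilon_{q} = \int_{0}^{\infty} dk\, e^{-k^{2}/2}\,[1 - J_{0}(qk)]$, which has the closed form $\sqrt{\pi/2}\,[1 - e^{-q^{2}/4} I_{0}(q^{2}/4)]$. A numerical sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, i0e

def eps_magnon(q):
    """Spin wave energy in units of e^2/(epsilon l); q in units of 1/l.
    Radial form of 2 sum_k v(k) e^{-|k|^2/2} sin^2(q^k/2) with v(k) = 2 pi/k:
    the angular average of 1 - cos(q^k) gives 1 - J_0(q k)."""
    val, _ = quad(lambda k: np.exp(-k**2 / 2) * (1.0 - j0(q * k)), 0, np.inf)
    return val

def eps_closed(q):
    # Closed form sqrt(pi/2) [1 - e^{-q^2/4} I_0(q^2/4)]; i0e(x) = e^{-x} I_0(x)
    return np.sqrt(np.pi / 2) * (1.0 - i0e(q**2 / 4))

for q in (0.5, 1.0, 2.0, 5.0):
    assert abs(eps_magnon(q) - eps_closed(q)) < 1e-7

# q -> 0: quadratic with coefficient sqrt(pi/2)/4.  q -> infinity: saturates at
# sqrt(pi/2) e^2/(epsilon l), twice the magnitude of the exchange energy per particle.
print(eps_closed(0.2) / 0.2**2, np.sqrt(np.pi / 2) / 4)
print(eps_closed(50.0), np.sqrt(np.pi / 2))
```

The large-$q$ saturation value $\sqrt{\pi/2}\,e^{2}/\epsilon\ell$ is exactly twice $|\langle V\rangle/N|$, consistent with the picture of a particle pulled completely out of its exchange hole.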
While the derivation of this exact result for the spin wave dispersion is
algebraically rather simple and looks quite similar (except for the LLL
projection) to the result for the lattice Heisenberg model, it does not give a
very clear physical picture of the nature of the spin wave collective mode. This
we can obtain from eq.~(\ref{eq:052615}) by noting that $\tau_{q}(j)$ translates
the particle a distance $\vec{q} \times \hat{z}\ell^{2}$. Hence the spin wave
operator $\bar{S}_{q}^{-}$ flips the spin of one of the particles and translates
it spatially leaving a hole behind and creating a particle-hole pair carrying
net momentum proportional to their separation as illustrated in
fig.~(\ref{fig:phpairflip}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{spinflip.xfig.eps}}
\caption[]{Illustration of the fact that the spin flip operator causes
translations when projected into the lowest Landau level. For very large wave
vectors the particle is translated completely away from the exchange hole and
loses all its favorable Coulomb exchange energy.}
\label{fig:phpairflip}
\end{figure}
For large separations the excitonic Coulomb attraction between the particle and
hole is negligible and the energy cost saturates at a value related to the
Coulomb exchange energy of the ground state given in eq.~(\ref{eq:12109}). The
exact dispersion relation can also be obtained by noting that scattering
processes of the type illustrated by the dashed lines in
fig.~(\ref{fig:phpairflip}) mix together Landau gauge states
\begin{equation}
c_{k - q_{y},\downarrow}^{\dagger}\; c_{k,\uparrow}^{\phantom{\dagger}}\;
|\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\rangle
\end{equation}
with different wave vectors $k$. Requiring that the state be an eigenvector of
translation uniquely restricts the mixing to linear combinations of the form
\begin{equation}
\sum_{k} e^{-ikq_{x}\ell^{2}}\; c_{k-q_{y},\downarrow}^{\dagger}\;
c_{k,\uparrow}^{\phantom{\dagger}}\;
|\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow\rangle.
\end{equation}
Evaluation of the Coulomb matrix elements shows that this is indeed an exact
eigenstate.
\subsection{Effective Action}
\label{subsec:effectiveaction}
It is useful to try to reproduce these microscopic results for the spin wave
excitations within an effective field theory for the spin degrees of freedom.
Let $\vec{m}(\vec{r}\,)$ be a vector field obeying $\vec{m} \cdot \vec{m} = 1$
which describes the local orientation of the order parameter (the
magnetization). Because the Coulomb forces are spin independent, the potential
energy cost can not depend on the orientation of $\vec{m}$ but only on its
gradients. Hence we must have to leading order in a gradient expansion
\begin{equation}
U = \frac{1}{2} \rho_{s} \int d^{2}r\; \partial_{\mu}m^{\nu}\;
\partial_{\mu}m^{\nu} - \frac{1}{2} n \Delta \int d^{2}r\; m^{z}
\label{eq:1124203}
\end{equation}
where $\rho_{s}$ is a phenomenological `spin stiffness' which in two dimensions
has units of energy and $n \equiv \frac{\nu}{2\pi\ell^{2}}$ is the particle
density. We will learn how to evaluate $\rho_{s}$ later.
We can think of this expression for the energy as the leading terms in a
functional Taylor series expansion. Symmetry requires that (except for the
Zeeman term) the expression for the energy be invariant under uniform global
rotations of $\vec{m}$. In addition, in the absence of disorder, it must be
translationally invariant. Clearly the expression in (\ref{eq:1124203})
satisfies these symmetries. The only zero-derivative term of the appropriate
symmetry is $m^{\mu} m^{\mu}$ which is constrained to be unity everywhere. There
exist terms with more derivatives but these are irrelevant to the physics at
very long wavelengths. (Such terms have been discussed by Read and Sachdev
\cite{ReadandSachdev}.)
To understand how time derivatives enter the effective action we have to recall
that spins obey a first-order (in time) precession equation under the influence
of the local exchange field.\footnote{That is, the Coulomb exchange energy which
tries to keep the spins locally parallel. In a Hartree-Fock picture we could
represent this by a term of the form $-\vec{h}(\vec{r}\,) \cdot \vec{s}(\vec{r}\,)$
where $\vec{h}(\vec{r}\,)$ is the self-consistent field.} Consider as a toy model
a single spin in an external field $\vec{\Delta}$.
\begin{equation}
H = -\hbar\Delta^{\alpha} S^{\alpha}
\end{equation}
The Lagrangian describing this toy model
needs to contain a first order time derivative and so must have
the form (see discussion in appendix~\ref{app:BerryPhase})
\begin{equation}
\mathcal{L} = \hbar S\; \left\{- \dot{m}^{\mu} \mathcal{A}^{\mu}[\vec{m}] +
\Delta^{\mu} m^{\mu} + \lambda (m^{\mu}m^{\mu} - 1)\right\}
\label{eq:1124204}
\end{equation}
where $S = \frac{1}{2}$ is the spin length and $\lambda$ is a Lagrange
multiplier to enforce the fixed length constraint. The unknown vector
$\vec{\mathcal{A}}$ can be determined by requiring that it reproduce the correct
precession equation of motion. The precession equation is
\begin{eqnarray}
\frac{d}{dt} S^{\mu} &=& \frac{i}{\hbar} [H,S^{\mu}] = -i\Delta^{\alpha}
[S^{\alpha},S^{\mu}]\nonumber\\
&=& \epsilon^{\alpha\mu\beta} \Delta^{\alpha} S^{\beta}\\
\dot{\vec{S}} &=& -{\vec \Delta} \times \vec{S}
\label{eq:precession}
\end{eqnarray}
which corresponds to \textit{counterclockwise} precession around the magnetic
field.
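The precession equation can be integrated numerically to confirm that the dynamics conserves both the spin length and the component of $\vec{m}$ along the field. A sketch with an arbitrary initial tilt:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dm/dt = -Delta x m for a classical spin and check that |m| and the
# component of m along the field are conserved (constant-latitude precession).
Delta = np.array([0.0, 0.0, 1.0])    # field along z, arbitrary magnitude

def rhs(t, m):
    return -np.cross(Delta, m)

tilt = 0.3
m0 = np.array([np.sin(tilt), 0.0, np.cos(tilt)])  # tilted 0.3 rad from z
sol = solve_ivp(rhs, (0.0, 20.0), m0, rtol=1e-10, atol=1e-12)

assert np.allclose(np.linalg.norm(sol.y, axis=0), 1.0, atol=1e-6)   # |m| = 1
assert np.allclose(sol.y[2], np.cos(tilt), atol=1e-6)               # constant latitude
```

This first-order motion is exactly what the $-\dot{m}^{\mu}\mathcal{A}^{\mu}$ term in the Lagrangian of eq.~(\ref{eq:1124204}) must reproduce.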
We must obtain the same equation of motion from
the Euler-Lagrange equation for the Lagrangian in eq.~(\ref{eq:1124204})
\begin{equation}
\frac{d}{dt}\; \frac{\delta\mathcal{L}}{\delta\dot{m}^{\mu}} -
\frac{\delta\mathcal{L}}{\delta m^{\mu}} = 0
\end{equation}
which may be written as
\begin{equation}
\Delta^{\mu} + 2\lambda m^{\mu} = F^{\mu\nu} \dot{m}^{\nu}
\label{eq:060301}
\end{equation}
where
\begin{equation}
F^{\mu\nu} \equiv \partial_{\mu} \mathcal{A}_{\nu} - \partial_{\nu}
\mathcal{A}_{\mu}
\end{equation}
and $\partial_{\mu}$ means $\frac{\partial}{\partial m^{\mu}}$ (\textit{not} the
derivative with respect to some spatial coordinate). Since
$F^{\mu\nu}$ is antisymmetric let us guess a solution of the form
\begin{equation}
F^{\mu\nu} = \epsilon^{\alpha\mu\nu} m^{\alpha}.
\label{eq:060303}
\end{equation}
Using this in
eq.~(\ref{eq:060301}) yields
\begin{equation}
\Delta^{\mu} + 2\lambda m^{\mu} = \epsilon^{\alpha\mu\nu} m^{\alpha}
\dot m^{\nu}.
\end{equation}
Applying $\epsilon^{\gamma\beta\mu}m^\beta$ to both sides and
using the identity
\begin{equation}
\epsilon^{\nu\alpha\beta} \epsilon^{\nu\lambda\eta} = \delta_{\alpha\lambda}
\delta_{\beta\eta} - \delta_{\alpha\eta} \delta_{\beta\lambda}
\end{equation}
we obtain
\begin{equation}
-(\vec\Delta\times\vec m)^\gamma = \dot m^\gamma - m^\gamma(\dot m^\beta m^\beta).
\end{equation}
The last term on the right vanishes due to the length constraint. Thus we find that our ansatz
in eq.~(\ref{eq:060303}) does indeed make the Euler-Lagrange equation correctly reproduce
eq.~(\ref{eq:precession}).
Eq.~(\ref{eq:060303}) is equivalent to
\begin{equation}
\vec{\nabla}_{m} \times \vec{\mathcal{A}} [\vec{m}] = \vec{m}
\label{eq:060308}
\end{equation}
indicating that $\vec{\mathcal{A}}$ is the vector potential of a unit magnetic
monopole sitting at the center of the unit sphere on which $\vec{m}$ lives as
illustrated in fig.~(\ref{fig:monopole}). Note (the always confusing point) that we are
interpreting $\vec m$ as the coordinate of a fictitious particle living on the unit sphere
(in spin space) surrounding the monopole.
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{monopole.xfig.eps}}
\caption[]{Magnetic monopole in spin space. Arrows indicate the curl of the
Berry connection $\vec{\nabla} \times \vec{\mathcal{A}}$ emanating from the
origin. Shaded region indicates closed path $\vec{m}(t)$ taken by the spin order
parameter during which it acquires a Berry phase proportional to the monopole
flux passing through the shaded region.}
\label{fig:monopole}
\end{figure}
Recalling eq.~(\ref{eq:classicalLagrangian}), we see that
the Lagrangian for a single spin in eq.~(\ref{eq:1124204}) is equivalent to
the Lagrangian of a massless object of charge $-S$,
located at position $\vec m$, moving on the unit sphere containing a
magnetic monopole. The Zeeman term represents a constant electric field
$-\vec{\Delta}$ producing a force $\vec{\Delta}S$ on the particle. The
Lorentz force caused by the monopole causes the particle to orbit the sphere at
constant `latitude'. Because no kinetic term of the form $\dot{m}^{\alpha}
\dot{m}^{\alpha}$ enters the Lagrangian, the charged particle is massless and so
lies only in the lowest Landau level of the monopole field. Note the similarity
here to the previous discussion of the high field limit and the semiclassical
percolation picture of the integer Hall effect. For further details the reader
is directed to appendix~\ref{app:BerryPhase} and to Haldane's discussion of
monopole spherical harmonics \cite{HaldaneSMGbook}.
If the `charge' moves slowly around a closed counterclockwise path $\vec{m}(t)$
during the time interval $[0,T]$ as illustrated in fig.~(\ref{fig:monopole}),
the quantum amplitude
\begin{equation}
e^{\frac{i}{\hbar}\int_{0}^{T}dt\mathcal{L}}
\end{equation}
contains a Berry's phase \cite{Berry} contribution proportional to the
`magnetic flux' enclosed by the path
\begin{equation}
e^{-iS\int_{0}^{T}dt\dot{m}^{\nu}\mathcal{A}^{\nu}} =
e^{-iS\oint\vec{\mathcal{A}}\cdot d\vec{m}}.
\end{equation}
As discussed in appendix~\ref{app:BerryPhase}, this is a purely geometric phase in
the sense that it depends only on the geometry of the path and not the rate at
which the path is traversed (since the expression is time reparameterization
invariant). Using Stokes theorem and eq.~(\ref{eq:060308}) we can write the
contour integral as a surface integral
\begin{equation}
e^{-iS\oint\vec{\mathcal{A}}\cdot d\vec{m}} = e^{-iS\int
d\vec{\Omega}\cdot\vec{\nabla}\times\vec{\mathcal{A}}} = e^{-iS\Omega}
\end{equation}
where $d\vec{\Omega} = \vec{m}d\Omega$ is the directed area (solid angle)
element and $\Omega$ is the total solid angle subtended by the contour as viewed
from the position of the monopole. Note from fig.~(\ref{fig:monopole}) that
there is an ambiguity on the sphere as to which is the inside and which is the
outside of the contour. Since the total solid angle is $4\pi$ we could equally
well have obtained\footnote{The change in the sign from $+i$ to $-i$ is due to
the fact that the contour switches from being counterclockwise to clockwise if
viewed as enclosing the $4\pi - \Omega$ area instead of the $\Omega$ area.}
\begin{equation}
e^{+iS(4\pi-\Omega)}.
\end{equation}
Consistency of the two expressions requires $e^{-iS\Omega} = e^{+iS(4\pi -
\Omega)}$, that is, $e^{i4\pi S} = 1$. Thus the phase is ambiguous unless $S$ is
an integer or half-integer. This constitutes a `proof' that the quantum spin
length must be quantized.
Having obtained the correct Lagrangian for our toy model we can now readily
generalize it to the spin wave problem using the potential energy in
eq.~(\ref{eq:1124203})
\begin{eqnarray}
\mathcal{L} &=& -\hbar Sn \int d^{2}r\; \Biggl\{\dot{m}^{\mu}(\vec{r}\,)\;
\mathcal{A}^{\mu}[\vec{m}] - \Delta m^{z}(\vec{r}\,)\Biggr\}\nonumber\\
&&-\frac{1}{2} \rho_{s} \int d^{2}r\; \partial_{\mu} m^{\nu} \partial_{\mu}
m^{\nu} + \int d^{2}r\; \lambda(\vec{r}\,)\; (m^{\mu} m^{\mu} - 1).
\label{eq:1124219}
\end{eqnarray}
The classical equation of motion can be analyzed just as for the toy model;
however, we will take a slightly different approach here. Let us look in the low
energy sector where the spins all lie close to the $\hat{z}$ direction. Then we
can write
\begin{eqnarray}
\vec{m} &=& \left(m^{x}, m^{y}, \sqrt{1-m^{x}m^{x}-m^{y}m^{y}}\right)\nonumber\\
&\approx& \left(m^{x}, m^{y}, 1 - \frac{1}{2} m^{x}m^{x} - \frac{1}{2}
m^{y}m^{y}\right).
\end{eqnarray}
Now choose the `symmetric gauge'
\begin{equation}
\vec{\mathcal{A}} \approx \frac{1}{2} (-m^{y}, m^{x}, 0)
\end{equation}
which obeys eq.~(\ref{eq:060308}) for $\vec{m}$ close to $\hat{z}$.
Keeping only quadratic terms in the Lagrangian we obtain
\begin{eqnarray}
\mathcal{L} &=& -\hbar Sn \int d^{2}r\; \bigg\{\frac{1}{2} (\dot{m}^{y} m^{x} -
\dot{m}^{x} m^{y}) \nonumber\\
&&- \Delta \left(1 - \frac{1}{2} m^{x}m^{x} - \frac{1}{2}
m^{y}m^{y}\right)\bigg\}\nonumber\\
&&-\frac{1}{2} \rho_{s} \int d^{2}r\; (\partial_{\mu}m^{x} \partial_{\mu}m^{x} +
\partial_{\mu}m^{y} \partial_{\mu}m^{y}).
\end{eqnarray}
This can be conveniently rewritten by defining a complex field
\begin{displaymath}
\psi \equiv m^{x} + im^{y}
\end{displaymath}
\begin{eqnarray}
\mathcal{L} &=&-Sn \hbar \int d^{2}r\; \bigg\{\frac{1}{4}
\left[\psi^{*}\left(-i\frac{\partial}{\partial t}\right)\psi -
\psi\left(-i\frac{\partial}{\partial t}\right)\psi^{*}\right] \nonumber\\
&&- \Delta \left(1 - \frac{1}{2} \psi^{*}\psi\right)\bigg\}
-\frac{1}{2} \rho_{s} \int d^{2}r\; \partial_{\mu}\psi^{*} \partial_{\mu}\psi
\end{eqnarray}
The classical equation of motion is the Schr\"{o}dinger-like equation
\begin{equation}
+i\hbar\frac{\partial\psi}{\partial t} = -\frac{\rho_{s}}{nS}
\partial_{\mu}^{2}\psi + \hbar\Delta\psi.
\end{equation}
This has plane wave solutions with quantum energy
\begin{equation}
\epsilon_{k} = \hbar\Delta + \frac{\rho_{s}}{nS} k^{2}.
\label{eq:1105228}
\end{equation}
We can fit the phenomenological stiffness to the exact dispersion relation in
eq.~(\ref{eq:1124198}) to obtain
\begin{equation}
\rho_{s} = \frac{nS}{4} \sum_{k\neq 0} e^{-\frac{1}{2}|k|^{2}}\; v(k) |k|^{2}.
\label{eq:060322}
\end{equation}
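For the Coulomb interaction this stiffness can be evaluated in closed form. Converting the sum to an integral with $v(k) = 2\pi/k$ in units of $e^{2}/\epsilon\ell$ and $k$ dimensionless (my reading of the conventions), eq.~(\ref{eq:060322}) gives $\rho_{s} = e^{2}/(16\sqrt{2\pi}\,\epsilon\ell)$. A numerical sketch:

```python
import numpy as np
from scipy.integrate import quad

# rho_s = (n S / 4) * l^2 * Int d^2k/(2 pi)^2 v(k) e^{-k^2/2} |k|^2,
# with v(k) = 2 pi/k in units of e^2/(epsilon l) and k dimensionless.
# The angular integral leaves a single radial integral over k.
S = 0.5
n_l2 = 1.0 / (2.0 * np.pi)          # n l^2 at nu = 1

radial, _ = quad(lambda k: k**2 * np.exp(-k**2 / 2), 0, np.inf)
rho_s = (n_l2 * S / 4) * radial     # in units of e^2/(epsilon l)

assert abs(radial - np.sqrt(np.pi / 2)) < 1e-8
assert abs(rho_s - 1 / (16 * np.sqrt(2 * np.pi))) < 1e-8
print(rho_s)
```

This gives $\rho_{s} \approx 0.025\, e^{2}/\epsilon\ell$, consistent with the small-$q$ coefficient of the exact dispersion in eq.~(\ref{eq:1123195}).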
\boxedtext{\begin{exercise}
Derive eq.~(\ref{eq:060322}) from first principles by evaluating the loss of
exchange energy when the Landau gauge $\nu = 1$ ground state is distorted to
make the spin tumble in the $x$ direction
\begin{equation}
|\psi\rangle = \prod_{k} \left(\cos{\frac{\theta_{k}}{2}}
c_{k\uparrow}^{\dagger} + \sin{\frac{\theta_{k}}{2}}
c_{k\downarrow}^{\dagger}\right) |0\rangle
\end{equation}
where $\theta_{k} = -\gamma k\ell^{2}$ and $\gamma =
\frac{\partial\theta}{\partial x}$ is the (constant) spin rotation angle
gradient (since $x = -k\ell^{2}$ in this gauge).
\label{ex:9809}
\end{exercise}}
\subsection{Topological Excitations}
\label{subsec:topological}
So far we have studied neutral collective excitations that take the form of spin
waves. They are neutral because as we have seen from eq.~(\ref{eq:052615}) they
consist of a particle-hole pair. For very large momenta the spin-flipped particle
is translated a large distance $\vec{q} \times \hat{z}\ell^{2}$ away from its
original position as discussed in appendix~\ref{app:projection}. This looks
locally like a charged excitation but it is very expensive because it loses all
of its exchange energy. It is sensible to inquire if it is possible to make a
cheaper charged excitation. This can indeed be done by taking into account the
desire of the spins to be locally parallel and producing a smooth topological
defect in the spin orientation
\cite{LeeandKane,Sondhi,Ilong,Tsvelik,Rodriguez,Abolfath,Apel,GreenTsvelik}
known as a skyrmion by analogy with related objects in the Skyrme model of
nuclear physics \cite{skyrme}. Such an object has the beautiful form exhibited
in fig.~(\ref{fig:skyrmion}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{skyrmion.eps}}
\caption[]{Illustration of a skyrmion spin texture. The spin is down at the
origin and gradually turns up at infinite radius. At intermediate distances, the
XY components of the spin exhibit a vortex-like winding. Unlike a $U(1)$ vortex,
there is no singularity at the origin.}
\label{fig:skyrmion}
\end{figure}
Rather than having a single spin suddenly flip over, this object gradually turns
over the spins as the center is approached. At intermediate distances the spins
have a vortex-like configuration. However unlike a $U(1)$ vortex, there is no
singularity in the core region because the spins are able to rotate downwards
out of the xy plane.
In nuclear physics the Skyrme model envisions that the vacuum is a `ferromagnet'
described by a four component field $\Phi^{\mu}$ subject to the constraint
$\Phi^{\mu} \Phi^{\mu} = 1$. There are three massless (i.e. linearly dispersing)
spin wave excitations corresponding to the three directions of oscillation about
the ordered direction. These three massless modes represent the three (nearly)
massless pions $\pi^{+}, \pi^{0}, \pi^{-}$. The nucleons (proton and neutron)
are represented by skyrmion spin textures. Remarkably, it can be shown (for an
appropriate form of the action) that these objects are \textit{fermions} despite
the fact that they are in a sense made up of a coherent superposition of (an
infinite number of) \textit{bosonic} spin waves.
We shall see a very similar phenomenology in QHE ferromagnets. At filling factor
$\nu$, skyrmions have charge $\pm \nu e$ and fractional statistics much like
Laughlin quasiparticles. For $\nu = 1$ these objects are fermions. Unlike
Laughlin quasiparticles, skyrmions are extended objects, and they involve many
flipped (and partially flipped) spins. This property has profound implications
as we shall see.
Let us begin our analysis by understanding how it is that spin textures can
carry charge. It is clear from the Pauli principle that it is \textit{necessary}
to flip at least some spins to locally increase the charge density in a $\nu =
1$ ferromagnet. What is the \textit{sufficient} condition on the spin
distortions in order to have a density fluctuation? Remarkably it turns out to
be possible, as we shall see, to uniquely express the charge density solely in
terms of gradients of the local spin orientation.
Consider a ferromagnet with local spin orientation $\vec{m}(\vec{r}\,)$ which is
static. As each electron travels we assume that the strong exchange field keeps
the spin following the local orientation $\vec{m}$. If the electron has velocity
$\dot{x}^{\mu}$, the rate of change of the local spin orientation it sees is
$\dot{m}^{\nu} = \dot{x}^{\mu} \frac{\partial}{\partial x^{\mu}} m^{\nu}$. This
in turn induces an additional Berry's phase as the spin orientation varies. Thus
the single-particle Lagrangian contains an additional first order time
derivative in addition to the one induced by the magnetic field coupling to the
orbital motion
\begin{equation}
\mathcal{L}_{0} = -\frac{e}{c} \dot{x}^{\mu} A^{\mu} + \hbar S
\dot{m}^{\nu} \mathcal{A}^{\nu}[\vec{m}].
\end{equation}
Here $A^{\mu}$ refers to the electromagnetic vector potential and
$\mathcal{A}^{\nu}$ refers to the monopole vector potential obeying
eq.~(\ref{eq:060308}) and we have set the mass to zero (i.e. dropped the
$\frac{1}{2}M\; \dot{x}^{\mu} \dot{x}^{\mu}$ term). This can be rewritten
\begin{equation}
\mathcal{L}_{0} = -\frac{e}{c} \dot{x}^{\mu} (A^{\mu} + a^{\mu})
\end{equation}
where (with $\Phi_{0}$ being the flux quantum)
\begin{equation}
a^{\mu} \equiv -\Phi_{0}S \left(\frac{\partial}{\partial x^{\mu}}
m^{\nu}\right)\; \mathcal{A}^{\nu}[\vec{m}]
\end{equation}
represents the `Berry connection', an additional vector potential which
reproduces the Berry phase. The additional fake magnetic flux due to the curl of
the Berry connection is
\begin{eqnarray}
b &=& \epsilon^{\alpha\beta} \frac{\partial}{\partial x^{\alpha}}
a^{\beta}\nonumber\\
&=& -\Phi_{0} S\epsilon^{\alpha\beta} \frac{\partial}{\partial x^{\alpha}}\;
\left(\frac{\partial}{\partial x^{\beta}} m^{\nu}\right)
\mathcal{A}^{\nu}[\vec{m}]\nonumber\\
&=& -\Phi_{0} S\epsilon^{\alpha\beta} \left\{\left(\frac{\partial}{\partial
x^{\alpha}}\; \frac{\partial}{\partial x^{\beta}} m^{\nu}\right)
\mathcal{A}^{\nu}[\vec{m}]\right.\nonumber\\
&&\left.+\left(\frac{\partial}{\partial x^{\beta}} m^{\nu}\right)\;
\frac{\partial m^{\gamma}}{\partial x^{\alpha}}\; \frac{\partial
\mathcal{A}^{\nu}}{\partial m^{\gamma}}\right\}.
\end{eqnarray}
The first term vanishes by symmetry leaving
\begin{equation}
b = -\Phi_{0} S\epsilon^{\alpha\beta} \frac{\partial m^{\nu}}{\partial
x^{\beta}}\; \frac{\partial m^{\gamma}}{\partial x^{\alpha}}\; \frac{1}{2}
F^{\nu\gamma}
\end{equation}
where $F^{\nu\gamma}$ is given by eq.~(\ref{eq:060303}) and we have taken
advantage of the fact that the remaining factors are antisymmetric under the
exchange $\nu \leftrightarrow \gamma$. Using eq.~(\ref{eq:060303}) and setting
$S = \frac{1}{2}$ we obtain
\begin{equation}
b = -\Phi_{0} \tilde{\rho}
\label{eq:060329}
\end{equation}
where
\begin{eqnarray}
\tilde{\rho} &\equiv& \frac{1}{8\pi} \epsilon^{\alpha\beta} \epsilon^{abc} m^{a}
\partial_{\alpha} m^{b} \partial_{\beta} m^{c}\nonumber\\
&=& \frac{1}{8\pi} \epsilon^{\alpha\beta} \vec{m} \cdot \partial_{\alpha}\vec{m}
\times \partial_{\beta}\vec{m}
\end{eqnarray}
is (for reasons that will become clear shortly) called the \textit{topological
density} or the Pontryagin density.
Imagine now that we adiabatically deform the uniformly magnetized spin state
into some spin texture state. We see from eq.~(\ref{eq:060329}) that the orbital
degrees of freedom see this as adiabatically adding additional flux
$b(\vec{r}\,)$. Recall from eq.~(\ref{eq:1124165}) and the discussion of the
charge of the Laughlin quasiparticle, that extra charge density is associated
with extra flux in the amount
\begin{eqnarray}
\delta\rho &=& \frac{1}{c} \sigma_{xy} b\\
\delta\rho &=& \nu e\tilde{\rho}.
\end{eqnarray}
Thus we have the remarkable result that the changes in the electron charge
density are proportional to the topological density.
Our assumption of adiabaticity is valid as long as the spin fluctuation
frequency is much lower than the charge excitation gap. This is an excellent
approximation for $\nu = 1$ and still good on the stronger fractional Hall
plateaus.
It is interesting that the fermionic charge density in this model can be
expressed solely in terms of the vector boson field $\vec{m}(\vec{r}\,)$, but
there is something even more significant here. The skyrmion spin texture has
total topological charge
\begin{equation}
Q_{\mathrm{top}} \equiv \frac{1}{8\pi} \int d^{2}r\; \epsilon^{\alpha\beta}
\vec{m} \cdot \partial_{\alpha}\vec{m} \times \partial_{\beta}\vec{m}
\label{eq:060333}
\end{equation}
which is always an integer. In fact for \textit{any} smooth spin texture in
which the spins at infinity are all parallel, $Q_{\mathrm{top}}$ is always an
integer. Since it is impossible to continuously deform one integer into another,
$Q_{\mathrm{top}}$ is a topological invariant. That is, if $Q_{\mathrm{top}} =
\pm 1$ because a skyrmion (anti-skyrmion) is present, $Q_{\mathrm{top}}$ is
stable against smooth continuous distortions of the field $\vec{m}$. For example
a spin wave could pass through the skyrmion and $Q_{\mathrm{top}}$ would remain
invariant. Thus this charged object is topologically stable and has fermion
number (i.e., the number of fermions (electrons) that flow into the region when the object
is formed)
\begin{equation}
N = \nu Q_{\mathrm{top}}.
\end{equation}
For $\nu = 1$, $N$ is an integer ($\pm 1$ say) and has the fermion number of an
electron. It is thus continuously connected to the single flipped spin example
discussed earlier.
We are thus led to the remarkable conclusion that the spin degree of freedom
couples to the electrostatic potential. Because skyrmions carry charge, we can
affect the spin configuration using electric rather than magnetic fields!
To understand how $Q_{\mathrm{top}}$ always turns out to be an integer, it is
useful to consider a simpler case of a one-dimensional ring. We follow here the
discussion of \cite{Rajaraman}. Consider the unit circle (known to topologists
as the one-dimensional sphere $S_{1}$). Let the angle $\theta \in [0,2\pi]$
parameterize the position along the curve. Consider a
continuous, suitably well-behaved, complex function $\psi(\theta) =
e^{i\varphi(\theta)}$ defined at each point on the circle and obeying $|\psi| =
1$. Thus associated with each point $\theta$ is another unit circle giving the
possible range of values of $\psi(\theta)$. The function $\psi(\theta)$ thus
defines a trajectory on the torus $S_{1} \times S_{1}$ illustrated in
fig.~(\ref{fig:toruswinding}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{mapping.xfig.eps}}
\caption[]{Illustration of mappings $\varphi(\theta)$ with: zero winding
number (left) and winding number $+2$ (right).}
\label{fig:toruswinding}
\end{figure}
The possible functions $\psi(\theta)$ can be classified into different homotopy
classes according to their winding number $n \in \mathbf{Z}$
\begin{eqnarray}
n &\equiv& \frac{1}{2\pi} \int_{0}^{2\pi} d\theta\; \psi^{*}\left(-i
\frac{d}{d\theta}\right)\psi\nonumber\\
&=& \frac{1}{2\pi} \int_{0}^{2\pi} d\theta\;
\frac{d\varphi}{d\theta} = \frac{1}{2\pi} \left[\varphi(2\pi) -
\varphi(0)\right].
\label{eq:s1winding}
\end{eqnarray}
Because the points $\theta = 0$ and $\theta = 2\pi$ are identified as the same
point
\begin{equation}
\psi(0) = \psi(2\pi) \Rightarrow \varphi(2\pi) - \varphi(0) = 2\pi \times
\mbox{ integer}
\end{equation}
and so $n$ is an integer.
Notice the crucial role played by the fact that the `topological density'
$\frac{1}{2\pi}\; \frac{d\varphi}{d\theta}$ is the Jacobian for converting from
the coordinate $\theta$ in the domain to the coordinate $\varphi$ in the range.
It is this fact that makes the integral in eq.~(\ref{eq:s1winding}) independent of
the detailed local form of the mapping $\varphi(\theta)$
and depend only on the overall winding number.
As we shall shortly see,
this same feature will also turn out to be true for the Pontryagin density.
Think of the function $\varphi(\theta)$ as defining the path of an elastic band
wrapped around the torus. Clearly the band can be stretched, pulled and
distorted in any smooth way without any effect on $n$. The only way to change
the winding number from one integer to another is to discontinuously break the
elastic band, unwind (or wind) some extra turns, and then rejoin the cut pieces.
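It is instructive to check the winding number integral of eq.~(\ref{eq:s1winding}) numerically. The following sketch (plain Python; the sample maps are illustrative choices, not taken from the text) accumulates the wrapped phase increments of $\psi(\theta) = e^{i\varphi(\theta)}$ around the circle:

```python
import cmath
import math

def winding_number(phi, n_samples=2000):
    # n = (1/2pi) * (sum of phase increments of psi(theta) = exp(i phi(theta)));
    # each increment is wrapped into (-pi, pi], so smooth deformations of phi
    # cannot change the result -- only cutting the "elastic band" can.
    total = 0.0
    for k in range(n_samples):
        t0 = 2.0 * math.pi * k / n_samples
        t1 = 2.0 * math.pi * (k + 1) / n_samples
        total += cmath.phase(cmath.exp(1j * (phi(t1) - phi(t0))))
    return round(total / (2.0 * math.pi))

# A winding-number-2 map with a smooth "stretch" added, and a contractible map:
assert winding_number(lambda t: 2.0 * t + math.sin(t)) == 2
assert winding_number(lambda t: math.sin(3.0 * t)) == 0
```

Distorting either map smoothly changes the individual phase increments but not their wrapped sum, which is the discrete analogue of the elastic band argument above.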
Another way to visualize the homotopy properties of mappings from $S_{1}$ to
$S_{1}$ is illustrated in fig.~(\ref{fig:homotopySISI}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{winding.xfig.eps}}
\caption[]{A different representation of the mappings from $\theta$ to
$\varphi$. The dashed line represents the domain $\theta$ and the solid line
represents the range $\varphi$. The domain is `lifted up' by the mapping and
placed on the range. The winding number $n$ is the number of times the dashed
circle wraps the solid circle (with a possible minus sign depending on the
orientation).}
\label{fig:homotopySISI}
\end{figure}
The dashed circle represents the domain $\theta$ and the solid circle
represents the range $\varphi$. It is useful to imagine the $\theta$ circle as
being an elastic band (with points on it labeled by coordinates running from
$0$ to $2\pi$) which can be `lifted up' to the $\varphi$ circle in such a
way that each point of $\theta$ lies just outside the image point
$\varphi(\theta)$. The figure illustrates how the winding number $n$ can be
interpreted as the number of times the domain $\theta$ circle wraps around the
range $\varphi$ circle. (Note:
even though the elastic band is `stretched' and may wrap around
the $\varphi$ circle more than once, its coordinate labels still only run from $0$ to $2\pi$.)
This interpretation is the one which we will generalize
for the case of skyrmions in 2D ferromagnets.
We can think of the equivalence class of mappings having a given winding number
as an element of a group called the homotopy group $\pi_{1}(S_{1})$. The group
operation is addition and the winding number of the sum of two functions,
$\varphi(\theta) \equiv \varphi_{1}(\theta) + \varphi_{2}(\theta)$, is the sum
of the two winding numbers $n = n_{1} + n_{2}$. Thus $\pi_{1}(S_{1})$ is
isomorphic to $\mathbf{Z}$, the group of integers under addition.
Returning now to the ferromagnet we see that the unit vector order parameter
$\vec{m}$ defines a mapping from the plane $R_{2}$ to the two-sphere $S_{2}$
(i.e. an ordinary sphere in three dimensions having a two-dimensional surface).
Because we assume that $\vec{m} = \hat{z}$ for all spatial points far from the
location of the skyrmion, we can safely use a projective map to `compactify'
$R_{2}$ into a sphere $S_{2}$. In this process all points at infinity in $R_{2}$
are mapped into a single point on $S_{2}$, but since $\vec{m}(\vec r)$ is the
same for all these different points, no harm is done. We are thus interested in
the generalization of the concept of the winding number to the mapping $S_{2}
\rightarrow S_{2}$. The corresponding homotopy group $\pi_{2}(S_{2})$ is also
equivalent to $\mathbf{Z}$ as we shall see.
Consider the following four points in the plane and their images (illustrated in
fig.~(\ref{fig:mapping})) under the mapping
\begin{eqnarray}
(x,y) &\longrightarrow& \vec{m}(x,y)\nonumber\\
(x+dx,y) &\longrightarrow& \vec{m}(x+dx,y)\nonumber\\
(x,y+dy) &\longrightarrow& \vec{m}(x,y+dy)\nonumber\\
(x+dx,y+dy) &\longrightarrow& \vec{m}(x+dx,y+dy).
\end{eqnarray}
The four points in the plane define a rectangle of area $dxdy$. The four points
on the order parameter (spin)
sphere define an approximate parallelogram whose area (solid angle) is
\begin{eqnarray}
d\omega &\approx& \left[\vec{m}(x+dx,y) - \vec{m}(x,y)\right] \times
\left[\vec{m}(x,y+dy) - \vec{m}(x,y)\right] \cdot \vec{m}(x,y)\nonumber\\
&\approx& \frac{1}{2} \epsilon^{\mu\nu}\; \vec{m} \cdot \partial_{\mu}\vec{m}
\times \partial_{\nu} \vec{m}\; dxdy\nonumber\\
&=& 4\pi \tilde{\rho}\; dxdy.
\end{eqnarray}
Thus the Jacobian converting area in the plane into solid angle on the sphere is
$4\pi$ times the Pontryagin density $\tilde{\rho}$. This means that the total
topological charge given in eq.~(\ref{eq:060333}) must be an integer since it
counts the number of times the compactified plane is wrapped around the order
parameter sphere by the mapping. The `wrapping' is done by lifting each point
$\vec{r}$ in the compactified plane up to the corresponding point
$\vec{m}(\vec{r}\,)$ on the sphere just as was described for $\pi_{1}(S_{1})$ in
fig.~(\ref{fig:homotopySISI}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{stokes.xfig.eps}}
\caption[]{Infinitesimal circuit in spin space associated with an infinitesimal circuit in
real space via the mapping $\vec m(\vec r)$.}
\label{fig:mapping}
\end{figure}
For the skyrmion illustrated in fig.~(\ref{fig:skyrmion}) the order parameter
function $\vec{m}(\vec{r}\,)$ was chosen to be the standard form that minimizes
the gradient energy \cite{Rajaraman}
\alpheqn{
\begin{eqnarray}
m^{x} &=& \frac{2\lambda r\; \cos{(\theta - \varphi)}}{\lambda^{2} +
r^{2}}\label{eq:060339a}\\
m^{y} &=& \frac{2\lambda r\; \sin{(\theta - \varphi)}}{\lambda^{2} +
r^{2}}\label{eq:060339b}\\
m^{z} &=& \frac{r^{2} - \lambda^{2}}{\lambda^{2} + r^{2}}
\label{eq:060339c}
\end{eqnarray}}
\reseteqn
\noindent where $(r,\theta)$ are the polar coordinates in the plane, $\lambda$
is a constant that controls the size scale, and $\varphi$ is a constant that
controls the XY spin orientation. (Rotations about the Zeeman axis leave the
energy invariant.) From the figure it is not hard to see that the skyrmion
mapping wraps the compactified plane around the order parameter sphere exactly
once. The sense is such that $Q_{\mathrm{top}} = -1$.
\boxedtext{\begin{exercise}
Show that the topological density can be written in polar spatial coordinates as
\[
\tilde{\rho} = \frac{1}{4\pi r} \vec{m} \cdot \frac{\partial\vec{m}}{\partial r}
\times \frac{\partial\vec{m}}{\partial\theta}.
\]
Use this result to show
\[
\tilde{\rho} = -\frac{1}{4\pi} \left(\frac{2\lambda}{\lambda^{2} + r^{2}}\right)^{2}
\]
and hence
\[
Q_{\mathrm{top}} = -1
\] for the skyrmion mapping in eqs.~(\ref{eq:060339a}--\ref{eq:060339c}).
\label{ex:9810}
\end{exercise}}
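As an independent check on ex.~\ref{ex:9810}, the topological charge of the texture in eqs.~(\ref{eq:060339a}--\ref{eq:060339c}) can be integrated by brute force. The sketch below (plain Python; the grid sizes, radial cutoff, and the choice $\varphi = 0$ are arbitrary) forms the polar topological density from numerical derivatives of $\vec{m}$:

```python
import math

def m_vec(r, th, lam):
    # Standard skyrmion texture, eqs. (a)-(c), with the XY phase varphi = 0
    d = lam * lam + r * r
    return (2.0 * lam * r * math.cos(th) / d,
            2.0 * lam * r * math.sin(th) / d,
            (r * r - lam * lam) / d)

def pontryagin_Q(lam, r_max=200.0, nr=2000, nth=16):
    # Q_top = int r dr dth (1/(4 pi r)) m . (d_r m x d_th m),
    # with the derivatives taken by central differences.
    Q, dr, dth, eps = 0.0, r_max / nr, 2.0 * math.pi / nth, 1e-4
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nth):
            th = j * dth
            m = m_vec(r, th, lam)
            mr = [(a - b) / (2 * eps)
                  for a, b in zip(m_vec(r + eps, th, lam), m_vec(r - eps, th, lam))]
            mt = [(a - b) / (2 * eps)
                  for a, b in zip(m_vec(r, th + eps, lam), m_vec(r, th - eps, lam))]
            triple = (m[0] * (mr[1] * mt[2] - mr[2] * mt[1])
                      + m[1] * (mr[2] * mt[0] - mr[0] * mt[2])
                      + m[2] * (mr[0] * mt[1] - mr[1] * mt[0]))
            Q += triple / (4.0 * math.pi * r) * r * dr * dth
    return Q

assert abs(pontryagin_Q(1.0) + 1.0) < 0.01   # Q_top = -1
assert abs(pontryagin_Q(2.0) + 1.0) < 0.01   # ... independent of the scale lam
```

The value $-1$ is independent of $\lambda$, as it must be for a topological invariant.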
It is worthwhile to note that it is possible to write down simple microscopic
variational wave functions for the skyrmion which are closely related to the
continuum field theory results obtained above. Consider the following state in
the plane \cite{Ilong}
\begin{equation}
\psi_{\lambda} = \prod_j \left(\begin{array}{c}
z_{j} \\
\lambda\end{array}\right)_{j} \Psi_{1},
\label{eq:skyrmicro}
\end{equation}
where $\Psi_{1}$ is the $\nu = 1$ filled Landau level state, $(\cdot)_{j}$ refers
to the spinor for the $j$th particle, and $\lambda$ is a fixed length scale.
This is a skyrmion because it has its spin purely down at the origin (where
$z_{j} = 0$) and has spin purely up at infinity (where $|z_{j}| \gg \lambda$).
The parameter $\lambda$ is simply the size scale of the skyrmion
\cite{Sondhi,Rajaraman}. At radius $\lambda$ the spinor has equal weight for up
and down spin states (since $|z_{j}| = \lambda$) and hence the spin lies in the
XY plane just as it does for the solution in eq.~(\ref{eq:060339c}). Notice that
in the limit $\lambda \longrightarrow 0$ (where the continuum effective action
is invalid but this microscopic wave function is still sensible) we recover a
fully spin polarized filled Landau level with a charge-1 Laughlin quasihole at
the origin. Hence the number of flipped spins interpolates continuously from
zero to infinity as $\lambda$ increases.
In order to analyze the skyrmion wave function in eq.~(\ref{eq:skyrmicro}), we
use the Laughlin plasma analogy. Recall from our discussion in
sec.~\ref{subsec:nuequalsone} that in this analogy the norm of $\psi_{\lambda}$,
$\mathrm{Tr}_{\{\sigma\}} \int D[z]\; |\psi_{\lambda}[z]|^{2}$, is viewed as the partition function of a
Coulomb gas. In order to compute the density distribution we simply need to take
a trace over the spin
\begin{equation}
Z = \int D[z]\; e^{-2\left\{\sum_{i>j} - \log{|z_{i}-z_{j}|} - \frac{1}{2}
\sum_{k} \log{(|z_{k}|^{2}+\lambda^{2})} + \frac{1}{4} \sum_{k}
|z_{k}|^{2}\right\}}.
\label{eqm20}
\end{equation}
This partition function describes the usual logarithmically interacting Coulomb
gas with uniform background charge plus a spatially varying impurity background
charge $\Delta\rho_b(r)$,
\begin{eqnarray}
\Delta\rho_{b}(r) &\equiv& -\frac{1}{2\pi} \nabla^{2} V(r) =
+\frac{\lambda^{2}}{\pi(r^{2}+\lambda^{2})^{2}}, \label{eqm30}\\
V(r) &=& -\frac{1}{2} \log{(r^{2}+\lambda^{2})}.
\label{eqm40}
\end{eqnarray}
For large enough scale size $\lambda \gg \ell$, local neutrality of the plasma
\cite{jasonho} forces the electrons to be expelled from the vicinity of the
origin and implies that the excess electron number density is precisely
$-\Delta\rho_{b}(r)$, so that eq.~(\ref{eqm30}) is in agreement with the standard
result \cite{Rajaraman} for the topological density given in ex.~\ref{ex:9810}.
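Both eq.~(\ref{eqm30}) and the unit impurity charge it implies are easy to verify numerically. In the sketch below (plain Python; the step sizes, sample radii, and radial cutoff are arbitrary choices), the radial Laplacian $\partial_{r}^{2} + \frac{1}{r}\partial_{r}$ is applied to $V(r)$ by finite differences:

```python
import math

LAM = 1.0  # skyrmion scale; any positive value works

def V(r):
    # Impurity potential of eq. (eqm40)
    return -0.5 * math.log(r * r + LAM * LAM)

def delta_rho_b(r):
    # Impurity background charge of eq. (eqm30)
    return LAM**2 / (math.pi * (r * r + LAM * LAM) ** 2)

# Check Delta rho_b = -(1/2pi) nabla^2 V, with nabla^2 = d^2/dr^2 + (1/r) d/dr
h = 1e-3
for r in (0.5, 1.0, 3.0):
    d1 = (V(r + h) - V(r - h)) / (2 * h)
    d2 = (V(r + h) - 2 * V(r) + V(r - h)) / (h * h)
    assert abs(-(d2 + d1 / r) / (2 * math.pi) - delta_rho_b(r)) < 1e-4

# Check the impurity charge integrates to one: int 2 pi r dr Delta rho_b = 1
total, dr = 0.0, 1e-3
for i in range(200000):  # r up to 200 >> LAM; the tail contributes ~(LAM/200)^2
    r = (i + 0.5) * dr
    total += 2 * math.pi * r * delta_rho_b(r) * dr
assert abs(total - 1.0) < 1e-3
```

The unit total background charge is the plasma-analogy statement that the skyrmion expels exactly one electron, consistent with $Q_{\mathrm{top}} = -1$.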
Just as it was easy to find an explicit wave function for the Laughlin
quasi-hole but proved difficult to write down an analytic wave function for the
Laughlin quasi-electron, it is similarly difficult to make an explicit wave
function for the anti-skyrmion. Finally, we note that by replacing $z \choose
\lambda$ by $z^{n} \choose \lambda^{n}$, we can generate a skyrmion with a
Pontryagin index $n$.
\boxedtext{
\begin{exercise}
The argument given above for the charge density of the microscopic skyrmion
state wave function used local neutrality of the plasma and hence is valid only
on large length scales and thus requires $\lambda \gg \ell$. Find the complete
microscopic analytic solution for the charge density valid for arbitrary
$\lambda$, by using the fact that the proposed manybody wave function is nothing
but a Slater determinant of the single particle states $\phi_m(z)$,
\begin{equation}
\phi_m(z) = \frac {z^m}{\sqrt{2\pi 2^{m+1} m!
\left(m+1+\frac{\lambda^{2}}{2}\right)}} {z \choose \lambda} e^{-\frac
{|z|^{2}}{4}}.
\label{eqm50}
\end{equation}
Show that the excess electron number density is then
\begin{equation}
\Delta n^{(1)}(z) \equiv \sum_{m=0}^{N-1} |\phi_m(z)|^{2} -\frac{1}{2\pi},
\label{eqm60}
\end{equation}
which yields
\begin{equation}
\Delta n^{(1)}(z) = \frac{1}{2\pi} \left(\frac{1}{2} \int_{0}^{1} d\alpha\;
\alpha^{\frac{\lambda^{2}}{2}} e^{-\frac{|z|^{2}}{2} (1-\alpha)}
(|z|^{2}+\lambda^{2}) - 1 \right).
\label{eqm70}
\end{equation}
Similarly, find the spin density distribution $S^z(r)$ and show that it also
agrees with the field-theoretic expression in eq.~(\ref{eq:060339c}) in the
large $\lambda$ limit.
\label{ex:9811}
\end{exercise}}
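The closed form of eq.~(\ref{eqm70}) can be checked against the direct sum of eq.~(\ref{eqm60}) without doing the $\alpha$ integral analytically. In the sketch below (plain Python; the truncation of the sum, the quadrature grid, and the sample points are arbitrary choices), both sides are evaluated at several radii, with the variable \texttt{z2} standing for $|z|^{2}$:

```python
import math

def delta_n_sum(z2, lam, N=400):
    # Direct sum over |phi_m|^2 from eq. (eqm50):
    # |phi_m|^2 = |z|^{2m} (|z|^2 + lam^2) e^{-|z|^2/2}
    #             / (2 pi 2^{m+1} m! (m + 1 + lam^2/2))
    s = 0.0
    log_term = 0.0  # running log of (z2/2)^m / m!
    for m in range(N):
        if m > 0:
            log_term += math.log(z2 / (2.0 * m))
        s += math.exp(log_term - z2 / 2.0) * (z2 + lam * lam) / (
            4.0 * math.pi * (m + 1.0 + lam * lam / 2.0))
    return s - 1.0 / (2.0 * math.pi)

def delta_n_closed(z2, lam, n=20000):
    # Midpoint quadrature of the alpha integral in eq. (eqm70)
    acc = 0.0
    for i in range(n):
        a = (i + 0.5) / n
        acc += a ** (lam * lam / 2.0) * math.exp(-0.5 * z2 * (1.0 - a)) / n
    return (0.5 * acc * (z2 + lam * lam) - 1.0) / (2.0 * math.pi)

for lam in (2.0, 3.0):
    for z2 in (0.25, 1.0, 4.0, 16.0):
        assert abs(delta_n_sum(z2, lam) - delta_n_closed(z2, lam)) < 1e-5
```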
The skyrmion solution in eqs.~(\ref{eq:060339a}--\ref{eq:060339c}) minimizes the
gradient energy
\begin{equation}
E_{0} = \frac{1}{2} \rho_{s} \int d^{2}r\; \partial_{\mu}m^{\nu}
\partial_{\mu}m^{\nu}.
\end{equation}
Notice that the energy cost is scale invariant since this expression contains
two integrals and two derivatives. Hence the total gradient energy is
independent of the scale factor $\lambda$ and for a single skyrmion is given by
\cite{Sondhi,Rajaraman}
\begin{equation}
E_{0} = 4\pi\rho_{s} = \frac{1}{4} \epsilon_{\infty}
\label{eq:1105255}
\end{equation}
where $\epsilon_{\infty}$ is the asymptotic large $q$ limit of the spin wave
energy in eq.~(\ref{eq:1123195}). Since this spin wave excitation produces a
widely separated particle-hole pair, we see that the energy of a widely
separated skyrmion-antiskyrmion pair $\left(\frac{1}{4} + \frac{1}{4}\right)
\epsilon_{\infty}$ is only half as large. Thus skyrmions are considerably
cheaper to create than simple flipped spins.\footnote{This energy advantage is
reduced if the finite thickness of the inversion layer is taken into account.
The skyrmion may in some cases turn out to be disadvantageous in higher Landau
levels.}
Notice that eq.~(\ref{eq:1105255}) tells us that the charge excitation gap,
while only half as large as naively expected, is finite as long as the spin
stiffness $\rho_{s}$ is finite. Thus we can expect a dissipationless Hall
plateau. Therefore, as emphasized by Sondhi \textit{et al.}\ \cite{Sondhi}, the Coulomb
interaction plays a central role in the $\nu = 1$ integer Hall effect. Without
the Coulomb interaction the charge gap would simply be the tiny Zeeman gap. With
the Coulomb interaction the gap is large even in the limit of zero Zeeman energy
because of the spontaneous ferromagnetic order induced by the spin stiffness.
At precisely $\nu = 1$ skyrmion/antiskyrmion pairs will be thermally activated
and hence exponentially rare at low temperatures. On the other hand, because
they are the cheapest way to inject charge into the system, there will be a
finite density of skyrmions even in the ground state if $\nu \neq 1$. Skyrmions
also occur in ordinary 2D magnetic films but since they do not carry charge (and
are energetically expensive since $\rho_{s}$ is quite large) they readily freeze
out and are not particularly important.
The charge of a skyrmion is sharply quantized but its number of flipped spins
depends on its area $\sim \lambda^{2}$. Hence if the energy were truly scale
invariant, the number of flipped spins could take on any value. Indeed one of
the early theoretical motivations for skyrmions was the discovery in numerical
work by Rezayi \cite{Sondhi,Rezayi} that adding a single charge to a filled
Landau level converted the maximally ferromagnetic state into a spin singlet. In
the presence of a finite Zeeman energy the scale invariance is lost and there is
a term in the energy that scales with $\Delta\lambda^{2}$ and tries to minimize
the size of the skyrmion. Competing with this however is a Coulomb term which we
now discuss.
The Lagrangian in eq.~(\ref{eq:1124219}) contains the correct leading order
terms in a gradient expansion. There are several possible terms which are fourth
order in gradients, but a particular one dominates over the others at long
distances. This is the Hartree energy associated with the charge density of the
skyrmion
\begin{equation}
V_{\mathrm{H}} = \frac{1}{2\epsilon} \int d^{2}r\; \int d^{2}r'\;
\frac{\delta\rho(\vec{r}\,)\; \delta\rho(\vec{r}^{\,\prime})}{|\vec{r} -
\vec{r}^{\,\prime}|}
\label{eq:fourthHartree}
\end{equation}
where
\begin{equation}
\delta\rho = \frac{\nu e}{8\pi}\; \epsilon^{\alpha\beta}\; \vec{m} \cdot
\partial_{\alpha}\vec{m} \times \partial_{\beta}\vec{m}
\end{equation}
and $\epsilon$ is the dielectric constant. The long range of the Coulomb
interaction makes this effectively a three gradient term that distinguishes it
from the other possible terms at this order. Recall that the Coulomb interaction
already entered in lower order in the computation of $\rho_{s}$. That however
was the exchange energy while the present term is the Hartree energy. The
Hartree energy scales like $\frac{e^{2}}{\epsilon\lambda}$ and so prefers to
expand the skyrmion size. The competition between the Coulomb and Zeeman
energies yields an optimal number of approximately four flipped spins according
to microscopic Hartree Fock calculations \cite{FertigHF}.
Thus a significant prediction for this model is that each charge added (or
removed) from a filled Landau level will flip several $(\sim 4)$ spins. This is
very different from what is expected for non-interacting electrons. As
illustrated in fig.~(\ref{fig:non-int})
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{nonintelectron.xfig.eps}}
\caption[]{Illustration of the spin configurations for non-interacting electrons
at filling factor $\nu=1$ in the presence of a hole (top) and an extra electron
(bottom).}
\label{fig:non-int}
\end{figure}
removing an electron leaves the non-interacting system still polarized. The
Pauli principle forces an added electron to be spin reversed and the
magnetization drops from unity at $\nu = 1$ to zero at $\nu = 2$ where both spin
states of the lowest Landau level are fully occupied.
Direct experimental evidence for the existence of skyrmions was first obtained
by Barrett \textit{et al.}\ \cite{Barrett} using a novel optically pumped NMR technique.
The Hamiltonian for a nucleus is \cite{Slichter}
\begin{equation}
H_{N} = -\Delta_{N}I^{z} + \Omega\vec{I} \cdot \vec{s}
\end{equation}
where $\vec{I}$ is the nuclear angular momentum, $\Delta_{N}$ is the nuclear
Zeeman frequency (about 3 orders of magnitude smaller than the electron Zeeman
frequency), $\Omega$ is the hyperfine coupling and $\vec{s}$ is the electron
spin density at the nuclear site. If, as a first approximation we replace
$\vec{s}$ by its average value
\begin{equation}
H_{N} \approx \left(-\Delta_{N} + \Omega\langle s^{z}\rangle\right)\; I^{z}
\end{equation}
we see that the precession frequency of the nucleus will be shifted by an amount
proportional to the magnetization of the electron gas. The magnetization deduced
using this so-called Knight shift is shown in fig.~(\ref{fig:knightdata}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{knight.eps}}
\caption[]{NMR Knight shift measurement of the electron spin polarization near
filling factor $\nu=1$. Circles are the data of Barrett \textit{et al.}~\cite{Barrett}.
The dashed line is a guide to the eye. The solid line is the prediction for
non-interacting electrons. The peak represents 100\% polarization at $\nu=1$.
The steep slope on each side indicates that many ($\sim 4$) spins flip over for
each charge added (or subtracted). The observed symmetry around $\nu=1$ is due
to the particle-hole symmetry between skyrmions and antiskyrmions not present in
the free-electron model.}
\label{fig:knightdata}
\end{figure}
The electron gas is 100\% polarized at $\nu = 1$, but the polarization drops
off sharply (and symmetrically) as charge is added or subtracted. This is in
sharp disagreement with the prediction of the free electron model as shown in
the figure. The initial steep slope of the data allows one to deduce that 3.5--4
spins reverse for each charge added or removed. This is in excellent
quantitative agreement with Hartree-Fock calculations for the skyrmion model
\cite{FertigHF}.
Other evidence for skyrmions comes from the large change in Zeeman energy with
field due to the large number of flipped spins. This has been observed in
transport \cite{Eisensteinskyrme} and in optical spectroscopy \cite{BennettGoldberg}. Recall
that spin-orbit effects in GaAs make the electron $g$ factor $-0.4$. Under
hydrostatic pressure $g$ can be tuned towards zero which should greatly enhance
the skyrmion size. Evidence for this effect has been seen \cite{pressuretune}.
\section{Fractional QHE}
Under some circumstances of weak (but non-zero) disorder, quantized Hall
plateaus appear which are characterized by simple rational fractional quantum
numbers. For example, at magnetic fields three times larger than those at which
the $\nu=1$ integer filling factor plateau occurs, the lowest Landau level is
only 1/3 occupied. The system ought to be below the percolation threshold and
hence be insulating. Instead a robust quantized Hall plateau is observed
indicating that electrons can travel through the sample and that (since
$\sigma_{xx}\longrightarrow 0$) there is an excitation gap. This novel and quite
unexpected physics is controlled by Coulomb repulsion between the electrons. It
is best understood by first ignoring the disorder and trying to discover the
nature of the special correlated many-body ground state into which the electrons
condense when the filling factor is a rational fraction.
For reasons that will become clear later, it is convenient to analyze the
problem in a new gauge
\begin{equation}
\vec{A} = -\frac{1}{2} \vec{r} \times \vec{B}
\end{equation}
known as the symmetric gauge. Unlike the Landau gauge which preserves
translation symmetry in one direction, the symmetric gauge preserves rotational
symmetry about the origin. Hence we anticipate that angular momentum (rather
than $y$ linear momentum) will be a good quantum number in this gauge.
For simplicity we will restrict our attention to the lowest Landau level only
and (simply to avoid some awkward minus signs) change the sign of the $B$ field:
$\vec{B} = -B\hat{z}$. With these restrictions, it is not hard to show that the
solutions of the free-particle Schr\"{o}dinger equation having definite angular
momentum are
\begin{equation}
\varphi_{m} = \frac{1}{\sqrt{2\pi\ell^{2} 2^{m} m!} } z^{m}
e^{-\frac{1}{4}|z|^{2}}
\label{eq:symmgauge}
\end{equation}
where $z=(x+iy)/\ell$ is a dimensionless complex number representing the
position vector $\vec{r} \equiv (x,y)$ and $m\ge 0$ is an integer.
\boxedtext{\begin{exercise}
Verify that the basis functions in eq.~(\ref{eq:symmgauge}) do solve the
Schr\"{o}dinger equation in the absence of a potential and do lie in the lowest
Landau level. Hint: Rewrite the kinetic energy in such a way that $\vec{p}\cdot
\vec{A}$ becomes $\vec{B}\cdot \vec{L}$.
\label{ex:ssymmgauge}
\end{exercise}}
The angular momentum of these basis states is of course $\hbar m$. If we
restrict our attention to the lowest Landau level, then there exists only one
state with any given angular momentum and only non-negative values of $m$ are
allowed. This `handedness' is a result of the chirality built into the problem
by the magnetic field.
It seems rather peculiar that in the Landau gauge we had a continuous
one-dimensional family of basis states for this two-dimensional problem. Now we
find that in a different gauge, we have a discrete one dimensional label for the
basis states! Nevertheless, we still end up with the correct density of states
per unit area. To see this note that the peak value of $|\varphi_{m}|^{2}$
occurs at a radius of $R_{\mathrm{peak}}=\sqrt{2m\ell^{2}}$. The area
$2\pi\ell^{2} m$ of a circle of this radius contains $m$ flux quanta. Hence we
obtain the standard result of one state per Landau level per quantum of flux
penetrating the sample.
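The peak radius quoted above follows from maximizing $|\varphi_{m}|^{2} \propto r^{2m} e^{-r^{2}/2\ell^{2}}$, and can be confirmed by a direct scan (plain Python with $\ell = 1$; the scan range and resolution are arbitrary choices):

```python
import math

def log_density(m, r):
    # log of r^{2m} e^{-r^2/2}, the radial profile of |phi_m|^2 (l = 1)
    return 2 * m * math.log(r) - 0.5 * r * r

def peak_radius(m, r_max=40.0, n=200000):
    # Brute-force scan for the maximum of the (unimodal) radial profile
    best_r, best = None, -float("inf")
    for i in range(1, n):
        r = r_max * i / n
        d = log_density(m, r)
        if d > best:
            best, best_r = d, r
    return best_r

# R_peak = sqrt(2 m) in units of l, as in the flux-counting argument
for m in (1, 10, 50):
    assert abs(peak_radius(m) - math.sqrt(2 * m)) < 1e-3
```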
Because all the basis states are degenerate, any linear combination of them is
also an allowed solution of the Schr\"{o}dinger equation. Hence any function of
the form \cite{girvinjach}
\begin{equation}
\Psi(x,y) = f(z) e^{-\frac{1}{4}|z|^{2}}
\end{equation}
is allowed so long as $f$ is \textit{analytic} in its argument. In particular,
arbitrary polynomials of any degree $N$
\begin{equation}
f(z) = \prod_{j=1}^{N} (z-Z_{j})
\end{equation}
are allowed (at least in the thermodynamic limit) and are conveniently defined
by the locations of their $N$ zeros $\{Z_{j}; j=1,2,\dots,N\}$.
Another useful solution is the so-called coherent state which is a particular
infinite order polynomial
\begin{equation}
f_{\lambda}(z) \equiv \frac{1}{\sqrt{2\pi\ell^{2}}} e^{\frac{1}{2}\lambda^{*}
z}e^{-\frac{1}{4}\lambda^{*}\lambda} .
\end{equation}
The wave function using this polynomial has the property that it is a narrow
gaussian wave packet centered at the position defined by the complex number
$\lambda$. Completing the square shows that the probability density is given by
\begin{equation}
|\Psi_{\lambda}|^{2} = |f_{\lambda}|^{2} e^{-\frac{1}{2} |z|^{2}} =
\frac{1}{2\pi\ell^{2}}e^{-\frac{1}{2}|z-\lambda|^{2}}.
\end{equation}
This is the smallest wave packet that can be constructed from states within the
lowest Landau level. The reader will find it instructive to compare this
gaussian packet to the one constructed in the Landau gauge in
exercise~(\ref{ex:landaupacket}).
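The completing-the-square step is easily verified. The sketch below (plain Python with $\ell = 1$; the sample points are arbitrary) confirms that $|f_{\lambda}|^{2} e^{-\frac{1}{2}|z|^{2}}$ collapses to the displaced gaussian for arbitrary complex $z$ and $\lambda$:

```python
import cmath
import math

def psi_sq(z, lam):
    # |f_lam(z)|^2 e^{-|z|^2/2} with
    # f_lam(z) = (2 pi)^{-1/2} e^{lam* z / 2} e^{-lam* lam / 4}  (l = 1)
    f = cmath.exp(0.5 * lam.conjugate() * z - 0.25 * abs(lam) ** 2) \
        / math.sqrt(2 * math.pi)
    return abs(f) ** 2 * math.exp(-0.5 * abs(z) ** 2)

for z, lam in [(1 + 2j, 0.5 - 1j), (0.1 + 0j, 3 + 0j), (-2 + 1j, -1 - 1j)]:
    gauss = math.exp(-0.5 * abs(z - lam) ** 2) / (2 * math.pi)
    assert abs(psi_sq(z, lam) - gauss) < 1e-12
```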
Because the kinetic energy is completely degenerate, the effect of Coulomb
interactions among the particles is nontrivial. To develop a feel for the
problem, let us begin by solving the two-body problem. Recall that the standard
procedure is to take advantage of the rotational symmetry to write down a
solution with the relative angular momentum of the particles being a good
quantum number and then solve the Schr\"{o}dinger equation for the radial part
of the wave function. Here we find that the analyticity properties of the wave
functions in the lowest Landau level greatly simplify the situation. If we
know the angular behavior of a wave function, analyticity uniquely defines the
radial behavior. Thus for example for a single particle, knowing that the
angular part of the wave function is $e^{im\theta}$, we know that the full wave
function is uniquely determined to be $r^{m}e^{im\theta}e^{-\frac{1}{4}|z|^{2}}
= z^{m}e^{-\frac{1}{4}|z|^{2}}$.
Consider now the two body problem for particles with relative angular momentum
$m$ and center of mass angular momentum $M$. The \textit{unique} analytic wave
function is (ignoring normalization factors)
\begin{equation}
\Psi_{mM}(z_{1},z_{2}) = (z_{1} - z_{2})^{m} (z_{1}+z_{2})^{M}
e^{-\frac{1}{4}(|z_{1}|^{2} + |z_{2}|^{2})}.
\end{equation}
If $m$ and $M$ are non-negative integers, then the prefactor of the exponential
is simply a polynomial in the two arguments and so is a state made up of linear
combinations of the degenerate one-body basis states $\varphi_{m}$ given in
eq.~(\ref{eq:symmgauge}) and therefore lies in the lowest Landau level. Note that
if the particles are spinless fermions then $m$ must be odd to give the correct
exchange symmetry. Remarkably, this is the exact (neglecting Landau level
mixing) solution for the Schr\"{o}dinger equation for \textit{any} central
potential $V(|z_{1}-z_{2}|)$ acting between the two particles.\footnote{Note
that neglecting Landau level mixing is a poor approximation for strong
potentials $V \gg \hbar\omega_{c}$ unless they are very smooth on the scale of
the magnetic length.} We do not need to solve any radial equation because of the
powerful restrictions due to analyticity. There is only one state in the (lowest
Landau level) Hilbert space with relative angular momentum $m$ and center of
mass angular momentum $M$. Hence (neglecting Landau level mixing) it is an exact
eigenstate of \textit{any} central potential. $\Psi_{mM}$ is the exact answer
independent of the Hamiltonian!
The corresponding energy eigenvalue $v_{m}$ is independent of $M$ and is
referred to as the $m$th Haldane pseudopotential
\begin{equation}
v_{m} = \frac{\left\langle mM|V|mM\right\rangle}{\left\langle
mM|mM\right\rangle}.
\end{equation}
The Haldane pseudopotentials for the repulsive Coulomb potential are shown in
fig.~(\ref{fig:pseudopots}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{pseudopot.xmgr.eps}}
\caption[]{The Haldane pseudopotential $v_{m}$ vs. relative angular momentum $m$
for two particles interacting via the Coulomb interaction. Units are
${e^{2}}/{\epsilon\ell}$, where $\epsilon$ is the dielectric constant of the
host semiconductor and the finite thickness of the quantum well has been
neglected.}
\label{fig:pseudopots}
\end{figure}
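A numerical sketch of these matrix elements is straightforward. Writing $z = z_{1} - z_{2}$, a short calculation starting from $\Psi_{mM}$ shows that the relative part carries the weight $|z|^{2m} e^{-|z|^{2}/4}$, so $v_{m}$ reduces to a ratio of two radial integrals. The code below (plain Python; the quadrature parameters are arbitrary, and the closed form $\Gamma(m+\frac{1}{2})/(2\,m!)$ in units of $e^{2}/\epsilon\ell$ is quoted as the standard planar result and used only as a cross-check, cf.\ ex.~\ref{ex:pseudopot2}) evaluates them by midpoint quadrature:

```python
import math

def v_m_numeric(m, u_max=60.0, n=120000):
    # <1/r> in the relative state z^m e^{-|z|^2/8}: with u = r/l the weight is
    # u^{2m} e^{-u^2/4}, so
    # v_m = (int du u^{2m} e^{-u^2/4}) / (int du u^{2m+1} e^{-u^2/4})
    du = u_max / n
    num = den = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        w = math.exp(2 * m * math.log(u) - u * u / 4.0)
        num += w * du
        den += w * u * du
    return num / den  # in units of e^2 / (eps l)

for m in range(6):
    closed = math.gamma(m + 0.5) / (2.0 * math.factorial(m))
    assert abs(v_m_numeric(m) - closed) < 1e-4
```

The slow falloff of $v_{m}$ with $m$ visible in the figure comes out directly: $v_{0} \approx 0.886$, $v_{1} \approx 0.443$, $v_{3} \approx 0.277$ in units of $e^{2}/\epsilon\ell$.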
These discrete energy eigenstates represent bound states of the repulsive
potential. If there were no magnetic field present, a repulsive potential would
of course have only a continuous spectrum with no discrete bound states. However
in the presence of the magnetic field, there are effectively bound states
because the kinetic energy has been quenched. Ordinarily two particles that have
a lot of potential energy because of their repulsive interaction can fly apart
converting that potential energy into kinetic energy. Here however (neglecting
Landau level mixing) the particles all have fixed kinetic energy. Hence
particles that are repelling each other are stuck and can not escape from each
other. One can view this semi-classically as the two particles orbiting each
other under the influence of $\vec{E}\times\vec{B}$ drift with the Lorentz force
preventing them from flying apart. In the presence of an attractive potential
the eigenvalues change sign, but of course the eigenfunctions remain exactly the
same (since they are unique)!
The fact that a repulsive potential has a discrete spectrum for a pair of
particles is (as we will shortly see) the central feature of the physics
underlying the existence of an excitation gap in the fractional quantum Hall
effect. One might hope that since we have found analyticity to uniquely
determine the two-body eigenstates, we might be able to determine many-particle
eigenstates exactly. The situation is complicated however by the fact that for
three or more particles, the various relative angular momenta
$L_{12},L_{13},L_{23}$, etc.\ do not all commute. Thus we can not write down
general exact eigenstates. We will however be able to use the analyticity to
great advantage and make exact statements for certain special cases.
\boxedtext{\begin{exercise}
Express the exact lowest Landau level two-body eigenstate
\[
\Psi(z_{1},z_{2}) = (z_{1} - z_{2})^{3}\;
e^{-\frac{1}{4}\left\{|z_{1}|^{2}+|z_{2}|^{2}\right\}}
\]
in terms of the basis of all possible two-body Slater determinants.
\label{ex:9805}
\end{exercise}
\begin{exercise}
Verify the claim that the Haldane pseudopotential $v_{m}$ is independent of the
center of mass angular momentum $M$.
\label{ex:pseudopot1}
\end{exercise}
\begin{exercise}
Evaluate the Haldane pseudopotentials for the Coulomb potential
$\frac{e^{2}}{\epsilon r}$. Express your answer in units of
$\frac{e^{2}}{\epsilon\ell}$. For the specific case of $\epsilon=10$ and
$B=10$T, express your answer in Kelvin.
\label{ex:pseudopot2}
\end{exercise}
\begin{exercise}
Take into account the finite thickness of the quantum well by assuming that the
one-particle basis states have the form
\begin{displaymath}
\psi_{m}(z,s) = \varphi_{m}(z)\Phi(s),
\end{displaymath}
where $s$ is the coordinate in the direction normal to the quantum well. Write
down (but do not evaluate) the formal expression for the Haldane
pseudo-potentials in this case. Qualitatively describe the effect of finite
thickness on the values of the different pseudopotentials for the case where the
well thickness is approximately equal to the magnetic length.
\label{ex:pseudopot3}
\end{exercise}}
\subsection{The $\nu=1$ many-body state}
\label{subsec:nuequalsone}
So far we have found the one- and two-body states. Our next task is to write
down the wave function for a fully filled Landau level. We need to find
\begin{equation}
\psi[z] = f[z]\;
e^{-\frac{1}{4}\sum_{j} |z_{j}|^{2}}
\end{equation}
where $[z]$ stands for $(z_{1},z_{2},\ldots ,z_{N})$ and
$f$ is a polynomial representing the Slater determinant with all states
occupied. Consider the simple example of two particles. We want one particle in
the orbital $\varphi_{0}$ and one in $\varphi_1$, as illustrated schematically in
fig.~(\ref{fig:slater2}a).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{orbitalocc.xfig.eps}}
\caption[]{Orbital occupancies for the maximal density filled Landau level state
with (a) two particles and (b) three particles. There are no particle labels
here. In the Slater determinant wave function, the particles are labeled but a
sum is taken over all possible permutations of the labels in order to
antisymmetrize the wave function.}
\label{fig:slater2}
\end{figure}
Thus (again ignoring normalization)
\begin{eqnarray}
f[z] &=& \left|\begin{array}{cc}
(z_{1})^{0} & (z_{2})^{0}\\
(z_{1})^{1} & (z_{2})^{1}\end{array}\right| = (z_{1})^{0} (z_{2})^{1} -
(z_{2})^{0} (z_{1})^{1}\nonumber\\
&=& (z_{2} - z_{1})
\end{eqnarray}
This is the lowest possible order polynomial that is antisymmetric. For the case
of three particles we have (see fig.~(\ref{fig:slater2}b))
\begin{eqnarray}
f[z] &=& \left|\begin{array}{ccc}
(z_{1})^{0} & (z_{2})^{0} & (z_{3})^{0}\\
(z_{1})^{1} & (z_{2})^{1} & (z_{3})^{1}\\
(z_{1})^{2} & (z_{2})^{2} & (z_{3})^{2}\end{array}\right| = z_{2}z_{3}^{2} -
z_{3}z_{2}^{2} - z_{1}z_{3}^{2} + z_{3}z_{1}^{2} + z_{1}z_{2}^{2} -
z_{2}z_{1}^{2}\nonumber\\
&=& -(z_{1} - z_{2}) (z_{1} - z_{3}) (z_{2} - z_{3})\nonumber\\
&=& -\prod_{i<j}^{3} (z_{i} - z_{j}) \label{eq:1282}
\end{eqnarray}
This form for the Slater determinant is known as the Vandermonde polynomial. The
overall minus sign is unimportant and we will drop it.
The single Slater determinant to fill the first $N$ angular momentum states is
a simple generalization of eq.~(\ref{eq:1282})
\begin{equation}
f_{N}[z] = \prod_{i<j}^{N} (z_{i} - z_{j}).
\end{equation}
To prove that this is true for general $N$, note that the polynomial is fully
antisymmetric and the highest power of any $z$ that appears is $z^{N-1}$. Thus
the highest angular momentum state that is occupied is $m = N - 1$. But since
the antisymmetry guarantees that no two particles can be in the same state, all
$N$ states from $m = 0$ to $m = N - 1$ must be occupied. This proves that we
have the correct Slater determinant.
\boxedtext{\begin{exercise}
Show carefully that the Vandermonde polynomial for $N$ particles is in fact
totally antisymmetric.
\label{ex:9806}
\end{exercise}}
One can also use induction to show that the Vandermonde polynomial is the correct
Slater determinant by writing
\begin{equation}
f_{N+1}[z] = f_{N}[z]\; \prod_{i=1}^{N} (z_{i} - z_{N+1})
\end{equation}
which can be shown to agree with the result of expanding the determinant of the
$(N + 1) \times (N + 1)$ matrix in terms of the minors associated with the
$(N+1)$st row or column.
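Both the equality between the Slater determinant and the product form, and
the inductive identity above, are easy to verify numerically for small $N$.
A quick sketch (the code uses the equivalent convention
$\prod_{i<j}(z_{j}-z_{i})$, which matches the determinant exactly; it
differs from the text's $\prod_{i<j}(z_{i}-z_{j})$ only by the unimportant
overall sign):

```python
import numpy as np

def slater(z):
    """Slater determinant filling orbitals m = 0..N-1: det of M[m, j] = z_j**m."""
    N = len(z)
    M = np.array([[zj ** m for zj in z] for m in range(N)])
    return np.linalg.det(M)

def vandermonde(z):
    """Product form prod_{i<j} (z_j - z_i)."""
    N = len(z)
    out = 1.0 + 0.0j
    for i in range(N):
        for j in range(i + 1, N):
            out *= z[j] - z[i]
    return out

rng = np.random.default_rng(0)
z = rng.standard_normal(6) + 1j * rng.standard_normal(6)
assert np.allclose(slater(z), vandermonde(z))

# inductive step: f_{N+1} = f_N * prod_i (z_{N+1} - z_i) in this sign convention
w = np.append(z, 0.3 + 0.7j)
extra = np.prod(w[-1] - z)
assert np.allclose(vandermonde(w), vandermonde(z) * extra)
```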
Note that since the Vandermonde polynomial corresponds to the filled Landau
level it is the unique state having the maximum density and hence is an exact
eigenstate for any form of interaction among the particles (neglecting Landau
level mixing and ignoring the degeneracy in the center of mass angular
momentum).
The (unnormalized) probability distribution for particles in the filled Landau
level state is
\begin{equation}
\left|\Psi[z]\right|^{2} = \prod_{i<j}^{N} |z_{i} - z_{j}|^{2}\; e^{-\frac{1}{2}
\sum_{j=1}^{N} |z_{j}|^{2}}.
\end{equation}
This seems like
a rather complicated object about which it is hard to make any useful
statements. It is clear that the polynomial term tries to keep the particles
away from each other and gets larger as the particles spread out. It is also
clear that the exponential term is small if the particles spread out too much.
Such simple questions as, `Is the density uniform?', seem hard to answer
however.
It turns out that there is a beautiful analogy to plasma physics developed by
R.\ B.\ Laughlin which sheds a great deal of light on the nature of this many
particle probability distribution. To see how this works, let us pretend that
the norm of the wave function
\begin{equation}
Z \equiv \int d^{2}z_{1} \ldots \int d^{2}z_{N}\; |\psi[z]|^{2}
\end{equation}
is the partition function of a classical statistical mechanics problem with
Boltzmann weight
\begin{equation}
\left|\Psi[z]\right|^{2} = e^{-\beta U_{\mathrm{class}}}
\end{equation}
where $\beta \equiv \frac{2}{m}$ and
\begin{equation}
U_{\mathrm{class}} \equiv m^{2} \sum_{i<j} \left(-\ln{|z_{i} - z_{j}|}\right) +
\frac{m}{4} \sum_{k} |z_{k}|^{2}.
\label{eq:Uclass}
\end{equation}
(The parameter $m$ equals $1$ in the present case but we introduce it for later
convenience.) It is perhaps not obvious at first glance that we have made
tremendous progress, but we have. This is because $U_{\mathrm{class}}$ turns out
to be the potential energy of a fake classical one-component plasma of particles
of charge $m$ in a uniform (`jellium') neutralizing background. Hence we can
bring to bear well-developed intuition about classical plasma physics to study
the properties of $|\Psi|^{2}$. Please remember however that all the statements
we make here are about a particular wave function. There are no actual
long-range logarithmic interactions in the quantum Hamiltonian for which this
wave function is the approximate groundstate.
To understand this, let us first review the electrostatics of charges in three
dimensions. For a charge $Q$ particle in 3D, the surface integral of the electric
field on a sphere of radius $R$ surrounding the charge obeys
\begin{equation}
\int d\vec{A} \cdot \vec{E} = 4\pi Q.
\end{equation}
Since the area of the sphere is $4\pi R^{2}$ we deduce
\begin{eqnarray}
\vec{E}(\vec{r}\,) &=& Q \frac{\hat{r}}{r^{2}}\\
\varphi(\vec{r}\,) &=& \frac{Q}{r}
\end{eqnarray}
and
\begin{equation}
\vec{\nabla} \cdot \vec{E} = -\nabla^{2}\varphi = 4\pi Q\; \delta^{3}(\vec{r}\,)
\end{equation}
where $\varphi$ is the electrostatic potential. Now consider a two-dimensional
world where all the field lines are confined to a plane (or equivalently
consider the electrostatics of infinitely long charged rods in 3D). The
analogous equation for the line integral of the normal electric field on a
\textit{circle} of radius $R$ is
\begin{equation}
\int d\vec{s} \cdot \vec{E} = 2\pi Q
\end{equation}
where the $2\pi$ (instead of $4\pi$) appears because the circumference of a
circle is $2\pi R$ (and is analogous to $4\pi R^{2}$). Thus we find
\begin{eqnarray}
\vec{E}(\vec{r}\,) &=& \frac{Q\hat{r}}{r}\\
\varphi(\vec{r}\,) &=& Q \left(-\ln{\frac{r}{r_{0}}}\right)
\end{eqnarray}
and the 2D version of Poisson's equation is
\begin{equation}
\vec{\nabla} \cdot \vec{E} = -\nabla^{2}\varphi = 2\pi Q\; \delta^{2}(\vec{r}\,).
\end{equation}
Here $r_{0}$ is an arbitrary scale factor whose value is immaterial since it
only shifts $\varphi$ by a constant.
We now see why the potential energy of interaction among a group of objects with
charge $m$ is
\begin{equation}
U_{0} = m^{2} \sum_{i<j} \left(-\ln{|z_{i} - z_{j}|}\right).
\end{equation}
(Since $z = (x + iy)/\ell$ we are using $r_{0} = \ell$.) This explains the first
term in eq.~(\ref{eq:Uclass}).
To understand the second term notice that
\begin{equation}
-\nabla^{2}\; \frac{1}{4}|z|^{2} = -\frac{1}{\ell^{2}} = 2\pi\rho_{\mathrm{B}}
\label{eq:9812-04}
\end{equation}
where
\begin{equation}
\rho_{\mathrm{B}} \equiv -\frac{1}{2\pi\ell^{2}}.
\end{equation}
Eq.~(\ref{eq:9812-04}) can be interpreted as Poisson's equation and tells us
that $\frac{1}{4}|z|^{2}$ represents the electrostatic potential of a constant
charge density $\rho_{\mathrm{B}}$. Thus the second term in
eq.~(\ref{eq:Uclass}) is the energy of charge $m$ objects interacting with this
negative background.
Notice that $2\pi\ell^{2}$ is precisely the area containing one quantum of flux.
Thus the magnitude of the background charge density is precisely $B/\Phi_{0}$,
the density of flux in units of the flux quantum.
The very long range forces in this fake plasma cost huge (fake) `energy' unless
the plasma is everywhere locally neutral (on length scales larger than the Debye
screening length which in this case is comparable to the particle spacing). In
order to be neutral, the density $n$ of particles must obey
\begin{eqnarray}
nm + \rho_{\mathrm{B}} &=& 0\\
\Rightarrow\qquad n &=& \frac{1}{m}\; \frac{1}{2\pi\ell^{2}}
\end{eqnarray}
since each particle carries (fake) charge $m$. For our filled Landau level with
$m=1$, this is of course the correct answer for the density since every
single-particle state is occupied and there is one state per quantum of flux.
We again emphasize that the energy of the fake plasma has nothing to do with the
quantum Hamiltonian and the true energy. The plasma analogy is merely a
statement about this particular choice of wave function. It says that the square
of the wave function is very small (because $U_\mathrm{class}$ is large) for
configurations in which the density deviates even a small amount from
$1/(2\pi\ell^2)$. The electrons can in principle be found anywhere, but the
overwhelming probability is that they are found in a configuration which is
locally random (liquid-like) but with negligible density fluctuations on long
length scales. We will discuss the nature of the typical configurations again
further below in connection with fig.~(\ref{fig:snapshot}).
When the fractional quantum Hall effect was discovered, Robert Laughlin realized
that one could write down a many-body variational wave function at filling
factor $\nu = 1/m$ by simply taking the $m$th power of the polynomial that
describes the filled Landau level
\begin{equation}
f_{N}^{m}[z] = \prod_{i<j}^{N} (z_{i} - z_{j})^{m}.
\end{equation}
In order for this to remain analytic, $m$ must be an integer. To preserve the
antisymmetry $m$ must be restricted to the odd integers. In the plasma analogy
the particles now have fake charge $m$ (rather than unity) and the density of
electrons is $n = \frac{1}{m}\; \frac{1}{2\pi\ell^{2}}$ so the Landau level
filling factor $\nu = \frac{1}{m} = \frac{1}{3}, \frac{1}{5}, \frac{1}{7}$,
etc.\ (Later on, other wave functions were developed to describe more general
states in the hierarchy of rational fractional filling factors at which
quantized Hall plateaus were observed
\cite{SMGBOOK,TAPASHbook,DasSarmabook,stonebook,sciam}.)
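For concreteness, the electron densities corresponding to these filling
factors follow from one state per flux quantum: $n = \nu B/\Phi_{0}$ with
$\Phi_{0} = h/e$. A quick numerical sketch at an illustrative field of
$B = 10$ T:

```python
h = 6.62607015e-34    # Planck constant (J s)
e = 1.602176634e-19   # elementary charge (C)
phi0 = h / e          # flux quantum h/e, in webers

B = 10.0              # illustrative magnetic field, in tesla
densities = {m: B / (m * phi0) for m in (1, 3, 5)}  # n = nu*B/phi0, in m^-2
for m, n in densities.items():
    print(f"nu = 1/{m}: n = {n:.3e} m^-2")
```

At $\nu = 1/3$ and $B = 10$ T this gives $n \approx 8 \times 10^{14}\;
\mathrm{m}^{-2}$, i.e. $\sim 10^{11}\;\mathrm{cm}^{-2}$, a typical density
for the two-dimensional electron gases in which the fractional effect is
observed.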
The Laughlin wave function naturally builds in good correlations among the
electrons because each particle sees an $m$-fold zero at the positions of all
the other particles. The wave function vanishes extremely rapidly if any two
particles approach each other, and this helps minimize the expectation value of
the Coulomb energy.
Since the kinetic energy is fixed we need only concern ourselves with the
expectation value of the potential energy for this variational wave function.
Despite the fact that there are no adjustable variational parameters (other than
$m$ which controls the density) the Laughlin wave functions have proven to be
very nearly exact for almost any realistic form of repulsive interaction. To
understand how this can be so, it is instructive to consider a model for which
this wave function actually is the exact ground state. Notice that the form of
the wave function guarantees that every pair of particles has relative angular
momentum greater than or equal to $m$. One should not make the mistake of
thinking that every pair has relative angular momentum precisely equal to $m$.
This would require the spatial separation between particles to be very nearly
the same for every pair, which is of course impossible.
Suppose that we write the Hamiltonian in terms of the Haldane pseudopotentials
\begin{equation}
V = \sum_{m'=0}^{\infty}\; \sum_{i<j} v_{m'}\; P_{m'}(ij)
\end{equation}
where $P_{m}(ij)$ is the projection operator which selects out states in which
particles $i$ and $j$ have relative angular momentum $m$. If $P_{m'}(ij)$ and
$P_{m''}(jk)$ commuted with each other, things would be simple to solve, but
this is not the case. However if we consider the case of a `hard-core potential'
defined by $v_{m'} = 0$ for $m' \geq m$, then clearly the $m$th Laughlin state
is an exact, zero energy eigenstate
\begin{equation}
V\psi_{m}[z] = 0. \label{eq:12103}
\end{equation}
This follows from the fact that
\begin{equation}
P_{m'}(ij)\psi_{m} = 0
\end{equation}
for any $m' < m$ since every pair has relative angular momentum of at least $m$.
Because the relative angular momentum of a pair can change only in discrete
(even integer) units, it turns out that this hard core model has an excitation
gap. For example for $m = 3$, any excitation out of the Laughlin ground state
necessarily weakens the nearly ideal correlations by forcing at least one pair
of particles to have relative angular momentum $1$ instead of $3$ (or larger).
This costs an excitation energy of order $v_{1}$.
This excitation gap is essential to the existence of dissipationless
$(\sigma_{xx} = \rho_{xx} = 0)$ current flow. In addition this gap means that
the Laughlin state is stable against perturbations. Thus the difference between
the Haldane pseudopotentials $v_{m}$ for the Coulomb interaction and the
pseudopotentials for the hard core model can be treated as a small perturbation
(relative to the excitation gap). Numerical studies show that for realistic
pseudopotentials the overlap between the true ground state and the Laughlin
state is extremely good.
To get a better understanding of the correlations built into the Laughlin wave
function it is useful to consider the snapshot in fig.~(\ref{fig:snapshot})
\begin{figure}
\centerline{\epsfysize=5cm
\epsffile{randomdots.eps}\hfill \epsfysize=5cm\epsffile{laughlindots.eps}}
\caption[]{Comparison of typical configurations for a completely uncorrelated
(Poisson) distribution of 1000 particles (left panel) to the distribution given
by the Laughlin wave function for $m=3$ (right panel). The latter is a snapshot
taken during a Monte Carlo simulation of the distribution. The Monte Carlo
procedure consists of proposing a random trial move of one of the particles to a
new position. If this move increases the value of $|\Psi|^2$ it is always
accepted. If the move decreases the value of $|\Psi|^2$ by a factor $p$, then
the move is accepted with probability $p$. After equilibration of the plasma by
a large number of such moves one finds that the configurations generated are
distributed according to $|\Psi|^2$. (After R. B. Laughlin, Chap. 7 in
\cite{SMGBOOK}.)}
\label{fig:snapshot}
\end{figure}
which shows a typical configuration of particles in the Laughlin ground state
(obtained from a Monte Carlo sampling of $|\psi|^{2}$) compared to a random
(Poisson) distribution. Focussing first on the large scale features we see that
density fluctuations at long wavelengths are severely suppressed in the
Laughlin state. This is easily understood in terms of the plasma analogy and the
desire for local neutrality. A simple estimate for the density fluctuations
$\rho_{\vec{q}}$ at wave vector $\vec{q}$ can be obtained by noting that the
fake plasma potential energy can be written (ignoring a constant arising from
the inclusion of self-interaction terms)
\begin{equation}
U_{\mathrm{class}} = \frac{1}{2L^{2}} \sum_{\vec{q}\neq 0} \frac{2\pi
m^{2}}{q^{2}}\; \rho_{\vec{q}}\rho_{-\vec{q}}
\end{equation}
where $L^{2}$ is the area of the system and $\frac{2\pi}{q^{2}}$ is the Fourier
transform of the logarithmic potential (easily derived from
$\nabla^{2}\left(-\ln{r}\right) = -2\pi\; \delta^{2}(\vec{r}\,)\,$). At long
wavelengths $(q^{2} \ll n)$ it is legitimate to treat $\rho_{\vec{q}}$ as a
collective coordinate of an elastic continuum. The distribution $e^{-\beta
U_{\mathrm{class}}}$ of these coordinates is a gaussian and so obeys (taking
into account the fact that $\rho_{-\vec{q}} = (\rho_{\vec{q}})^{*}$)
\begin{equation}
\langle\rho_{\vec{q}}\rho_{-\vec{q}}\rangle = L^{2} \frac{q^{2}}{4\pi m}.
\label{eq:12105}
\end{equation}
We clearly see that the long-range (fake) forces in the (fake) plasma strongly
suppress long wavelength density fluctuations. We will return more to this point
later when we study collective density wave excitations above the Laughlin
ground state.
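The Monte Carlo procedure described in the caption of
fig.~(\ref{fig:snapshot}) can be sketched in a few lines. Below is a minimal
Metropolis sampler of $|\Psi|^{2}$ for the Laughlin state, working with
$\log|\Psi|^{2}$ for numerical stability; the particle number, step count,
and move size are illustrative choices, not tuned production values:

```python
import numpy as np

def log_weight(z, m):
    """log|Psi|^2 = 2m sum_{i<j} ln|z_i - z_j| - (1/2) sum_k |z_k|^2."""
    i, j = np.triu_indices(len(z), k=1)
    return 2.0 * m * np.sum(np.log(np.abs(z[i] - z[j]))) - 0.5 * np.sum(np.abs(z) ** 2)

def metropolis(N=20, m=3, steps=20000, delta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    lw = log_weight(z, m)
    accepted = 0
    for _ in range(steps):
        k = rng.integers(N)
        trial = z.copy()
        trial[k] += delta * (rng.standard_normal() + 1j * rng.standard_normal())
        lw_new = log_weight(trial, m)
        # accept if |Psi|^2 increases; if it decreases by a factor p,
        # accept with probability p (Metropolis rule from the caption)
        if np.log(rng.random()) < lw_new - lw:
            z, lw = trial, lw_new
            accepted += 1
    return z, accepted / steps

positions, rate = metropolis()
```

After equilibration the configurations are distributed according to
$|\Psi|^{2}$; plotting `positions` for $m=3$ reproduces the qualitative
appearance of the right panel of fig.~(\ref{fig:snapshot}).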
The density fluctuations on short length scales are best studied in real space.
The radial correlation function $g(r)$ is a convenient object to consider.
$g(r)$ tells us the density at $r$ given that there is a particle at
the origin
\begin{equation}
g(r) = \frac{N(N-1)}{n^{2}Z} \int d^{2}z_{3} \ldots \int d^{2}z_{N}\; \left|\psi
(0,r,z_{3},\ldots ,z_{N})\right|^{2}
\end{equation}
where $Z \equiv \langle\psi|\psi\rangle$, $n$ is the density (assumed uniform)
and the remaining factors account for all the different pairs of particles that
could contribute. The factors of density are included in the denominator so that
$\lim_{r\rightarrow\infty} g(r) = 1$.
Because the $m=1$ state is a single Slater determinant, $g(z)$ can be computed
exactly
\begin{equation}
g(z) = 1 - e^{-\frac{1}{2}|z|^{2}}.
\label{eq:12108}
\end{equation}
Fig.~(\ref{fig:2pointqhe})
\begin{figure}
\centerline{\epsfxsize=5cm
\epsffile{hof3.eps}\hfill \epsfxsize=5cm\epsffile{hof5.eps}}
\caption[]{Plot of the two-point correlation function $h(r) \equiv 1-g(r)$ for
the Laughlin plasma with $\nu^{-1}= m = 3$ (left panel) and $m=5$ (right panel).
Notice that, unlike the result for $m=1$ given in eq.~(\ref{eq:12108}), $g(r)$
exhibits the oscillatory behavior characteristic of a strongly coupled plasma
with short-range solid-like local order.}
\label{fig:2pointqhe}
\end{figure}
shows numerical estimates of $h(r) \equiv 1-g(r)$ for the cases $m=3$ and $5$.
Notice that for the $\nu = 1/m$ state $g(z) \sim |z|^{2m}$ for small distances.
Because of the strong suppression of density fluctuations at long wavelengths,
$g(z)$ converges exponentially rapidly to unity at large distances. For $m > 1$,
$g$ develops oscillations indicative of solid-like correlations and the plasma
actually freezes\footnote{That is, Monte Carlo simulation of $|\Psi|^2$ shows
that the particles are most likely to be found in a crystalline configuration
which breaks translation symmetry. Again we emphasize that this is a statement
about the Laughlin variational wave function, not necessarily a statement about
what the electrons actually do. It turns out that for $m \gtrsim 7$ the Laughlin
wave function is no longer the best variational wave function. One can write
down wave functions describing Wigner crystal states which have lower
variational energy than the Laughlin liquid.} at $m \approx 65$. The Coulomb
interaction energy can be expressed in terms of $g(z)$ as\footnote{This
expression assumes a strictly zero thickness electron gas. Otherwise one must
replace $\frac{e^2}{\epsilon|z|}$ by $\frac{e^{2}}{\epsilon}
\int_{-\infty}^{+\infty}ds \frac{\left| F(s)\right|^{2}}{\sqrt{|z|^{2} +
s^{2}}}$ where $F$ is the wavefunction factor describing the quantum well bound
state.}
\begin{equation}
\frac{\langle\psi|V|\psi\rangle}{\langle\psi|\psi\rangle} = \frac{nN}{2} \int
d^{2}z\; \frac{e^{2}}{\epsilon|z|}\; \left[ g(z) - 1\right] \label{eq:12109}
\end{equation}
where the $(-1)$ term accounts for the neutralizing background and $\epsilon$ is
the dielectric constant of the host semiconductor. We can interpret $g(z) - 1$
as the density of the `exchange-correlation hole' surrounding each particle.
The correlation energies per particle for $m=3$ and $5$ are \cite{levesque84}
\begin{equation}
\frac{1}{N}\;
\frac{\langle\psi_{3}|V|\psi_{3}\rangle}{\langle\psi_{3}|\psi_{3}\rangle} =
-0.4100\pm 0.0001
\end{equation}
and
\begin{equation}
\frac{1}{N}\;
\frac{\langle\psi_{5}|V|\psi_{5}\rangle}{\langle\psi_{5}|\psi_{5}\rangle} =
-0.3277\pm 0.0002
\end{equation}
in units of $e^{2}/\epsilon\ell$ which is $\approx 161~\mathrm{K}$ for $\epsilon
= 12.8$ (the value in GaAs), $B = 10\mathrm{T}$. For the filled Landau level
($m=1$) the exchange energy is $-\sqrt{\frac{\pi}{8}}$ as can be seen from
eqs.~(\ref{eq:12108}) and (\ref{eq:12109}).
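The quoted $m=1$ exchange energy can be checked directly by inserting
eq.~(\ref{eq:12108}) into eq.~(\ref{eq:12109}). A short numerical sketch in
units of $e^{2}/\epsilon\ell$, using the dimensionless density
$\tilde{n} = 1/2\pi$ (lengths measured in units of $\ell$); the angular
integral supplies a factor $2\pi|z|$, cancelling the Coulomb $1/|z|$:

```python
from math import exp, pi, sqrt

# Energy per particle of the nu = 1 state, in units of e^2/(epsilon*ell):
#   (n_tilde/2) * Int d^2z (1/|z|) [g(z) - 1],  g(z) = 1 - exp(-|z|^2/2),
# which reduces to E = -(1/2) * Int_0^infty du exp(-u^2/2).
# Evaluate by the midpoint rule, truncating at u = 50 (tail is negligible).
du = 1e-4
E = -0.5 * sum(exp(-(((k + 0.5) * du) ** 2) / 2.0) for k in range(int(50 / du))) * du

exact = -sqrt(pi / 8.0)   # = -0.626657...
assert abs(E - exact) < 1e-6
print(E, exact)
```

The quadrature reproduces $-\sqrt{\pi/8} \approx -0.6267$, the exchange
energy per particle quoted above for the filled Landau level.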
\boxedtext{\begin{exercise}
Find the radial distribution function for a one-dimensional spinless free
electron gas of density $n$ by writing the ground state wave function as a
single Slater determinant and then integrating out all but two of the
coordinates. Use this first quantization method even if you already know how to
do this calculation using second quantization. Hint: Take advantage of the
following representation of the determinant of an $N \times N$ matrix $M$ in terms
of permutations $P$ of $N$ objects.
\[
\mathrm{Det}\; M = \sum_{P} (-1)^{P} \prod_{j=1}^{N} M_{jP_{j}}.
\]
\label{ex:9807}
\end{exercise}
\begin{exercise}
Using the same method derive eq.~(\ref{eq:12108}).
\label{ex:9808}
\end{exercise}}
\section{IQHE Edge States}
Now that we understand drift in a uniform electric field, we can consider the
problem of electrons confined in a Hall bar of finite width by a non-uniform
electric field. For simplicity, we will consider the situation where the
potential $V(x)$ is smooth on the scale of the magnetic length, but this is not
central to the discussion. If we assume that the system still has translation
symmetry in the $y$ direction, the solution to the Schr\"{o}dinger equation must
still be of the form
\begin{equation}
\psi(x,y) = \frac{1}{\sqrt{L_{y}}}e^{iky}f_{k}(x).
\label{eq:psiHallbar}
\end{equation}
The function $f_{k}$ will no longer be a simple harmonic wave function as we
found in the case of the uniform electric field. However we can anticipate that
$f_{k}$ will still be peaked near (but in general not precisely at) the point
$X_{k}\equiv -k\ell^{2}$. The eigenvalues $\epsilon_{k}$ will no longer be
precisely linear in $k$ but will still reflect the kinetic energy of the
cyclotron motion plus the local potential energy $V(X_{k})$ (plus small
corrections analogous to the one in eq.~(\ref{eq:driftepsilon})). This is
illustrated in fig.~(\ref{fig:hallbarLL}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{smoothpot.xfig.eps}}
\caption[]{Illustration of a smooth confining potential which varies only in the
$x$ direction. The horizontal dashed line indicates the equilibrium fermi level.
The dashed curve indicates the wave packet envelope $f_{k}$ which is displaced
from its nominal position $X_{k} \equiv -k\ell^{2}$ by the slope of the
potential.}
\label{fig:hallbarLL}
\end{figure}
We see that the group velocity
\begin{equation}
\vec{v}_{k} = \frac{1}{\hbar}\frac{\partial \epsilon_{k}}{\partial k} \hat{y}
\label{eq:groupv}
\end{equation}
has the opposite sign on the two edges of the sample. This means that in the
ground state there are edge currents of opposite sign flowing in the sample. The
semi-classical interpretation of these currents is that they represent `skipping
orbits' in which the circular cyclotron motion is interrupted by collisions with
the walls at the edges as illustrated in fig.~(\ref{fig:skipping}).
\begin{figure}
\centerline{\epsfysize=6cm
\epsffile{skippingorbs.xfig.eps}}
\caption[]{Semi-classical view of skipping orbits at the fermi level at the two
edges of the sample where the confining electric field causes $\vec{E} \times
\vec{B}$ drift. The circular orbit illustrated in the center of the sample
carries no net drift current if the local electric field is zero.}
\label{fig:skipping}
\end{figure}
One way to analyze the Hall effect in this system is quite analogous to the
Landauer picture of transport in narrow wires \cite{KaneFisher,Buttiker}. The
edge states play the role of the left and right moving states at the two fermi
points. Because (as we saw earlier) momentum in a magnetic field corresponds to
position, the edge states are essentially real space realizations of the fermi
surface. A Hall voltage drop across the sample in the $x$ direction corresponds
to a difference in electrochemical potential between the two edges. Borrowing
from the Landauer formulation of transport, we will choose to apply this in the
form of a chemical potential difference and ignore any changes in electrostatic
potential.\footnote{This has led to various confusions in the literature. If
there is an electrostatic potential gradient then some of the net Hall current
may be carried in the bulk rather than at the edges, but the final answer is the
same. In any case, the essential part of the physics is that the only place
where there are low lying excitations is at the edges.} What this does is
increase the number of electrons in skipping orbits on one edge of the sample
and/or decrease the number on the other edge. Previously the net current due to
the two edges was zero, but now there is a net Hall current. To calculate this
current we have to add up the group velocities of all the occupied states
\begin{equation}
I = -\frac{e}{L_{y}}\int_{-\infty}^{+\infty} dk\frac{L_{y}}{2\pi}\,
\frac{1}{\hbar}\frac{\partial\epsilon_{k}}{\partial k} n_{k},
\end{equation}
where for the moment we assume that in the bulk, only a single Landau level is
occupied and $n_{k}$ is the probability that state $k$ in that Landau level is
occupied. Assuming zero temperature and noting that the integrand is a perfect
derivative, we have
\begin{equation}
I = -\frac{e}{h}\int_{\mu_{R}}^{\mu_{L}} d\epsilon = -\frac{e}{h}\left[\mu_{L} -
\mu_{R}\right].
\end{equation}
(To understand the order of limits of integration, recall that as $k$ increases,
$X_{k}$ decreases.) The definition of the Hall voltage drop is\footnote{To get
the signs straight here, note that an increase in chemical potential brings in
more electrons. This is equivalent to a more positive voltage and hence a more
negative potential energy $-eV$. Since $H-\mu N$ enters the thermodynamics,
electrostatic potential energy and chemical potential move the electron density
oppositely. $V$ and $\mu$ thus have the same sign of effect because electrons
are negatively charged.}
\begin{equation}
(+e)V_{H} \equiv (+e)\left[V_{R} - V_{L}\right] = \left[\mu_{R} -
\mu_{L}\right].
\end{equation}
Hence
\begin{equation}
I = -\nu \frac{e^{2}}{h}V_{H},
\end{equation}
where we have now allowed for the possibility that $\nu$ different Landau levels
are occupied in the bulk and hence there are $\nu$ separate edge channels
contributing to the current. This is the analog of having $\nu$ `open' channels
in the Landauer transport picture. In the Landauer picture for an ordinary wire,
we are considering the longitudinal voltage drop (and computing $\sigma_{xx}$),
while here we have the Hall voltage drop (and are computing $\sigma_{xy}$). The
analogy is quite precise however because we view the right and left movers as
having distributions controlled by separate chemical potentials. It just happens
in the QHE case that the right and left movers are physically separated in such
a way that the voltage drop is transverse to the current. Using the above result
and the fact that the current flows at right angles to the voltage drop we have
the desired results
\begin{eqnarray}
\sigma_{xx} &=& 0\\
\sigma_{xy} &=& -\nu\frac{e^{2}}{h},
\end{eqnarray}
with the quantum number $\nu$ being an integer.
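It is instructive to put in numbers. A quick sketch of the quantized Hall
conductance and resistance for the first few integer plateaus (the $\nu=1$
resistance is the von Klitzing constant $R_{K} = h/e^{2} \approx
25813\;\Omega$; the code prints magnitudes, with signs as in the conventions
above):

```python
h = 6.62607015e-34    # Planck constant (J s)
e = 1.602176634e-19   # elementary charge (C)

R_K = h / e**2        # von Klitzing constant, in ohms
for nu in (1, 2, 3, 4):
    sigma_xy = nu * e**2 / h   # magnitude of the Hall conductance, siemens
    R_H = R_K / nu             # quantized Hall resistance, ohms
    print(f"nu = {nu}: sigma_xy = {sigma_xy:.6e} S, R_H = {R_H:.3f} ohm")
```

The $\nu=1$ value, $R_{K} \approx 25812.807\;\Omega$, is the universal
quantity measured to extraordinary precision in resistance metrology.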
So far we have been ignoring the possible effects of disorder. Recall that for a
single-channel one-dimensional wire in the Landauer picture, a disordered region
in the middle of the wire will reduce the conductance to
\begin{equation}
G = \frac{e^{2}}{h} |T|^{2},
\end{equation}
\end{equation}
where $|T|^{2}$ is the probability for an electron to be transmitted through the
disordered region. The reduction in transmitted current is due to \textit{back
scattering}. Remarkably, in the QHE case, the back scattering is essentially
zero in very wide samples. To see this note that in the case of the Hall bar,
scattering into a backward moving state would require transfer of the electron
from one edge of the sample to the other since the edge states are spatially
separated. For samples which are very wide compared to the magnetic length (more
precisely, to the Anderson localization length) the matrix element for this is
exponentially small. In short, there can be nothing but forward scattering. An
incoming wave given by eq.~(\ref{eq:psiHallbar}) can only be transmitted in the
forward direction, at most suffering a simple phase shift $\delta_{k}$
\begin{equation}
\psi_{\mathrm{out}}(x,y) = \frac{1}{\sqrt{L_{y}}}e^{i\delta_{k}}e^{iky}f_{k}(x).
\end{equation}
This is because no other states of the same energy are available. If the
disorder causes Landau level mixing at the edges to occur (because the confining
potential is relatively steep) then it is possible for an electron in one edge
channel to scatter into another, but the current is still going in the same
direction so that there is no reduction in overall transmission probability. It
is this \textit{chiral} (unidirectional) nature of the edge states which is
responsible for the fact that the Hall conductance is correctly quantized
independent of the disorder.
Disorder will broaden the Landau levels in the bulk and provide a reservoir of
(localized) states which will allow the chemical potential to vary smoothly with
density. These localized states will not contribute to the transport and so the
Hall conductance will be quantized over a plateau of finite width in $B$ (or
density) as seen in the data. Thus obtaining the universal value of quantized
Hall conductance to a precision of $10^{-10}$ does not require fine tuning the
applied $B$ field to a similar precision.
The localization of states in the bulk by disorder is an essential part of the
physics of the quantum Hall effect as we saw when we studied the role of
translation invariance. We learned previously that in zero magnetic field all
states are (weakly) localized in two dimensions. In the presence of a quantizing
magnetic field, most states are strongly localized as discussed above. However
if all states were localized then it would be impossible to have a quantum phase
transition from one QHE plateau to the next. To understand how this works it is
convenient to work in a semiclassical percolation picture to be described below.
\boxedtext{\begin{exercise}
Show that the number of edge channels whose energies lie in the gap between two
Landau levels scales with the length $L$ of the sample, while the number of bulk
states scales with the area. Use these facts to show that the range of magnetic
field in which the chemical potential lies in between two Landau levels scales
to zero in the thermodynamic limit. Hence finite width quantized Hall plateaus
can not occur in the absence of disorder that produces a reservoir of localized
states in the bulk whose number is proportional to the area.
\label{ex:edgecount}
\end{exercise}}
\section{Introduction}
\label{sec:introduction}
The quantum Hall effect (QHE) is one of the most remarkable condensed-matter
phenomena discovered in the second half of the 20th century. It rivals
superconductivity in its fundamental significance as a manifestation of quantum
mechanics on macroscopic scales. The basic experimental observation is the
nearly vanishing dissipation
\begin{equation}
\sigma_{xx} \rightarrow 0
\end{equation}
and the quantization of the Hall conductance
\begin{equation}
\sigma_{xy} = \nu \frac{e^{2}}{h} \label{eq:9812-01}
\end{equation}
of a real (as opposed to some theorist's fantasy) transistor-like device
(similar in some cases to the transistors in computer chips) containing a
two-dimensional electron gas subjected to a strong magnetic field. This
quantization is universal and independent of all microscopic details such as the
type of semiconductor material, the purity of the sample, the precise value of
the magnetic field, and so forth. As a result, the effect is now used to
maintain\footnote{Maintain does \textit{not} mean \textit{define}. The SI ohm is
defined in terms of the kilogram, the second and the speed of light (formerly
the meter). It is best realized using the reactive impedance of a capacitor
whose capacitance is computed from first principles. This is an extremely
tedious procedure and the QHE is a very convenient method for realizing a fixed,
reproducible impedance to check for drifts of resistance standards. It does not
however \textit{define} the ohm. Eq.~(\ref{eq:9812-01}) is given in cgs units.
When converted to SI units the quantum of resistance is $h/e^{2}(\mathrm{cgs})
\rightarrow \frac{Z}{2\alpha} \approx 25,812.80~\Omega~(\mathrm{SI})$ where
$\alpha$ is the fine structure constant and $Z \equiv
\sqrt{\mu_{0}/\epsilon_{0}}$ is the impedance of free space.} the standard of
electrical resistance by metrology laboratories around the world. In addition,
since the speed of light is now defined, a measurement of $e^{2}/h$ is
equivalent to a measurement of the fine structure constant of fundamental
importance in quantum electrodynamics.
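As a quick numerical cross-check of the footnote's figure of $25{,}812.80~\Omega$, the sketch below (Python, using CODATA constants; not part of the original text) evaluates $h/e^{2}$ directly and via $Z/2\alpha$:

```python
# Numerical cross-check (SI) of the footnote's resistance quantum:
# h/e^2 evaluated directly and via Z/(2 alpha), with Z = sqrt(mu0/eps0)
# the impedance of free space.  CODATA values are used throughout.
import math

h = 6.62607015e-34        # Planck constant (J s, exact)
e = 1.602176634e-19       # elementary charge (C, exact)
mu0 = 1.25663706212e-6    # vacuum permeability (N/A^2)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
alpha = 7.2973525693e-3   # fine structure constant

R_K = h / e**2                 # von Klitzing constant
Z = math.sqrt(mu0 / eps0)      # impedance of free space, ~376.73 ohm

print(f"h/e^2       = {R_K:.2f} ohm")              # 25812.81 ohm
print(f"Z/(2 alpha) = {Z / (2 * alpha):.2f} ohm")  # agrees to ~1e-9
```

The two expressions agree because $Z/2\alpha$ is just $h/e^{2}$ rewritten in terms of the free-space impedance.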
In the so-called integer quantum Hall effect (IQHE) discovered by von Klitzing
in 1980, the quantum number $\nu$ is a simple integer with a precision of about
$10^{-10}$ and an absolute accuracy of about $10^{-8}$ (both being limited by
our ability to do resistance metrology).
In 1982, Tsui, St\"{o}rmer and Gossard discovered that in certain devices with
reduced (but still non-zero) disorder, the quantum number $\nu$ could take on
rational fractional values. This so-called fractional quantum Hall effect (FQHE)
is the result of quite different underlying physics involving strong Coulomb
interactions and correlations among the electrons. The particles condense into
special quantum states whose excitations have the bizarre property of being
described by fractional quantum numbers, including fractional charge and
fractional statistics that are intermediate between ordinary Bose and Fermi
statistics. The FQHE has proven to be a rich and surprising arena for the
testing of our understanding of strongly correlated quantum systems. With a
simple twist of a dial on her apparatus, the quantum Hall experimentalist can
cause the electrons to condense into a bewildering array of new `vacua', each of
which is described by a different quantum field theory. The novel order
parameters describing each of these phases are completely unprecedented.
We begin with a brief description of why two-dimensionality is important to the
universality of the result and how modern semiconductor processing techniques
can be used to generate a nearly ideal two-dimensional electron gas (2DEG). We
then give a review of the classical and semi-classical theories of the motion of
charged particles in a magnetic field. Next we consider the limit of low
temperatures and strong fields where a full quantum treatment of the dynamics is
required. After that we will be in a position to understand the localization
phase transition in the IQHE. We will then study the origins of the FQHE and the
physics described by the novel wave function invented by Robert Laughlin to
describe the special condensed state of the electrons. Finally we will discuss
topological excitations and broken symmetries in quantum Hall ferromagnets.
The review presented here is by no means complete. It is primarily an
introduction to the basics followed by a more advanced discussion of recent
developments in quantum Hall ferromagnetism. Among the many topics which receive
little or no discussion are the FQHE hierarchical states, interlayer drag
effects, FQHE edge state tunneling and the composite boson \cite{compositeboson}
and fermion \cite{compositefermion} pictures of the FQHE. A number of general
reviews exist which the reader may be interested in consulting
\cite{SMGBOOK,TAPASHbook,macdbook,DasSarmabook,Hajdu,stonebook,sciam,sczhang,macdleshouches}.
\subsection{Why 2D Is Important}
As one learns in the study of scaling in the localization transition,
resistivity (which is what theorists calculate) and resistance (which is what
experimentalists measure) for classical systems (in the shape of a hypercube) of
size $L$ are related by \cite{LeeRamakrishnan,sondhiRMP97}
\begin{equation}
R = \rho L^{(2-d)}.
\end{equation}
Two dimensions is therefore special since in this case the resistance of the
sample is scale invariant and $(e^{2}/h)R$ is dimensionless. This turns out to
be crucial to the universality of the result. In particular it means that one
does not have to measure the physical dimensions of the sample to one part in
$10^{10}$ in order to obtain the resistivity to that precision. Since the
locations of the edges of the sample are not well-defined enough to even
contemplate such a measurement, this is a very fortunate feature of having
available a 2DEG. It further turns out that, since the dissipation is nearly
zero in the QHE states, even the shape of the sample and the precise location of
the Hall voltage probes are almost completely irrelevant.
\subsection{Constructing the 2DEG}
There are a variety of techniques to construct two-dimensional electron gases.
Fig.~(\ref{fig:2DEG})
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{quantum_well.xfig.eps}}
\caption[]{Schematic illustration of a GaAs/AlAs heterostructure quantum well.
The vertical axis is band energy and the horizontal axis is position in the MBE
growth direction. The dark circles indicate the Si$^{+}$ ions which have donated
electrons into the quantum well. The lowest electric subband wave function of the
quantum well is illustrated by the dashed line. It is common to use an alloy of
GaAs and AlAs rather than pure AlAs for the barrier region as illustrated here.}
\label{fig:2DEG}
\end{figure}
shows one example in which the energy bands in a GaAs/AlAs heterostructure are
used to create a `quantum well'. Electrons from a Si donor layer fall into the
quantum well to create the 2DEG. The energy level (`electric subband') spacing
for the `particle in a box' states of the well can be of order $10^{3}~\mbox{K}$
which is much larger than the cryogenic temperatures at which QHE experiments
are performed. Hence all the electrons are frozen into the lowest electric
subband (if this is consistent with the Pauli principle) but remain free to move
in the plane of the GaAs layer forming the well. The dynamics of the electrons
is therefore effectively two-dimensional even though the quantum well is not
literally two-dimensional.
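To see why the subband spacing can reach $\sim 10^{3}~\mbox{K}$, a back-of-the-envelope estimate for an infinite square well is sketched below; the well width $w = 10$~nm and the GaAs effective mass $m^{*} = 0.067\,m_{e}$ are illustrative values assumed here, not taken from the text:

```python
# Order-of-magnitude estimate of the electric subband splitting of a
# square well, E_n = (hbar pi n)^2 / (2 m* w^2).  The well width
# w = 10 nm and the GaAs effective mass m* = 0.067 m_e are assumed,
# illustrative values, not taken from the text.
import math

hbar = 1.054571817e-34    # J s
m_e = 9.1093837015e-31    # kg
k_B = 1.380649e-23        # J/K

m_star = 0.067 * m_e
w = 10e-9                 # well width (m), assumed

def E_subband(n):
    return (hbar * math.pi * n)**2 / (2 * m_star * w**2)

splitting_K = (E_subband(2) - E_subband(1)) / k_B
print(f"E2 - E1 ~ {splitting_K:.0f} K")   # ~2000 K, i.e. of order 10^3 K
```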
Heterostructures that are grown one atomic layer at a time by Molecular Beam
Epitaxy (MBE) are nearly perfectly ordered on the atomic scale. In addition the
Si donor layer can be set back a considerable distance ($\sim 0.5\mu\mathrm{m}$)
to minimize the random scattering from the ionized Si donors. Using these
techniques, electron mobilities of $10^{7}~\mathrm{cm^{2}/Vs}$ can be achieved
at low temperatures corresponding to incredibly long mean free paths of $\sim
0.1~\mbox{mm}$. As a result of the extremely low disorder in these systems,
subtle electronic correlation energies come to the fore and yield a remarkable
variety of quantum ground states, some of which we shall explore here.
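The quoted mean free path can be estimated from the mobility alone: with $\tau = \mu m^{*}/e$ and $v_{F} = \hbar k_{F}/m^{*}$, the effective mass cancels and $\ell_{\mathrm{mfp}} = \hbar k_{F}\mu/e$. A minimal sketch, assuming a typical 2DEG density $n = 3\times 10^{11}~\mathrm{cm^{-2}}$ (not given in the text):

```python
# Mean free path implied by the quoted mobility mu = 1e7 cm^2/Vs:
# l = v_F tau with tau = mu m*/e and v_F = hbar k_F/m*, so the effective
# mass cancels and l = hbar k_F mu / e.  The 2DEG density n = 3e11 cm^-2
# is an assumed typical value, not taken from the text.
import math

hbar = 1.054571817e-34    # J s
e = 1.602176634e-19       # C

mu = 1e7 * 1e-4           # 1e7 cm^2/Vs -> m^2/Vs
n = 3e11 * 1e4            # 3e11 cm^-2  -> m^-2
k_F = math.sqrt(2 * math.pi * n)   # Fermi wavevector, spin-degenerate band

l_mfp = hbar * k_F * mu / e
print(f"mean free path ~ {l_mfp * 1e3:.2f} mm")   # ~0.09 mm
```

The result is consistent with the $\sim 0.1~\mbox{mm}$ quoted above.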
The same MBE and remote doping technology is used to make GaAs quantum well High
Electron Mobility Transistors (HEMTs) which are used in all cellular telephones
and in radio telescope receivers where they are prized for their low noise and
ability to amplify extremely weak signals. The same technology is widely
utilized to produce the quantum well lasers used in compact disk players.
\subsection{Why are Disorder and Localization Important?}
Paradoxically, the extreme universality of the transport properties in the
quantum Hall regime occurs because of, rather than in spite of, the random
disorder and uncontrolled imperfections which the devices contain. Anderson
localization in the presence of disorder plays an essential role in the
quantization, but this localization is strongly modified by the strong magnetic
field.
In two dimensions (for zero magnetic field and non-interacting electrons) all
states are localized even for arbitrarily weak disorder. The essence of this
weak localization effect is the current `echo' associated with the quantum
interference corrections to classical transport \cite{Bergmann}. These quantum
interference effects rely crucially on the existence of time-reversal symmetry.
In the presence of a strong quantizing magnetic field, time-reversal symmetry is
destroyed and the localization properties of the disordered 2D electron gas are
radically altered. We will shortly see that there exists a novel phase
transition, not between a metal and insulator, but rather between two distinctly
different insulating states.
In the absence of any impurities the 2DEG is translationally invariant and there
is no preferred frame of reference.\footnote{This assumes that we can ignore the
periodic potential of the crystal which is of course fixed in the lab frame.
Within the effective mass approximation this potential modifies the mass but
does not destroy the Galilean invariance since the energy is still quadratic in
the momentum.} As a result we can transform to a frame of reference moving with
velocity $-\vec{v}$ relative to the lab frame. In this frame the electrons
appear to be moving at velocity $+\vec{v}$ and carrying current density
\begin{equation}
\vec{J} = -ne\vec{v},
\end{equation}
where $n$ is the areal density and we use the convention that the electron
charge is $-e$. In the lab frame, the electromagnetic fields are
\begin{eqnarray}
\vec{E} &=& \vec{0}\\
\vec{B} &=& B \hat{z}.
\end{eqnarray}
In the moving frame they are (to lowest order in $v/c$)
\begin{eqnarray}
\vec{E} &=& -\frac{1}{c}\vec{v}\times \vec{B}\\
\vec{B} &=& B \hat{z}.
\end{eqnarray}
This Lorentz transformation picture is precisely equivalent to the usual
statement that an electric field must exist which just cancels the Lorentz force
$\frac{-e}{c} \vec{v}\times \vec{B}$ in order for the device to carry the current
straight through without deflection. Thus we have
\begin{equation}
\vec{E} = \frac{B}{nec} \vec{J}\times \hat{B}.
\end{equation}
The resistivity tensor is defined by
\begin{equation}
E^{\mu} = \rho_{\mu\nu} J^{\nu}.
\end{equation}
Hence we can make the identification
\begin{equation}
\underline{\underline{\rho}} = \frac{B}{nec}
\left(\begin{array}{cc}
0&+1\\
-1&0
\end{array}\right)
\end{equation}
The conductivity tensor is the matrix inverse of this so that
\begin{equation}
J^{\mu} = \sigma_{\mu\nu} E^{\nu},
\end{equation}
and
\begin{equation}
\underline{\underline{\sigma}} = \frac{nec}{B}
\left(\begin{array}{cc}
0&-1\\
+1&0
\end{array}\right)
\end{equation}
Notice that, paradoxically, the system looks insulating since $\sigma_{xx}=0$
and yet it looks like a perfect conductor since $\rho_{xx} = 0$. In an ordinary
insulator $\sigma_{xy}=0$ and so $\rho_{xx}=\infty$. Here $\sigma_{xy} =
-\frac{nec}{B} \ne 0$ and so the inverse exists.
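The tensor inversion is simple enough to verify by machine. The sketch below (in units where $B/nec = 1$) confirms that $\sigma_{xx}$ and $\rho_{xx}$ vanish together even though the two tensors are matrix inverses of each other:

```python
# Verify that the conductivity tensor is the matrix inverse of the
# resistivity tensor, in units where B/(nec) = 1: sigma_xx and rho_xx
# vanish together, with all the action in the off-diagonal entries.
import numpy as np

rho = np.array([[0.0, 1.0],
                [-1.0, 0.0]])      # (B/nec) [[0, +1], [-1, 0]]
sigma = np.linalg.inv(rho)         # expect (nec/B) [[0, -1], [+1, 0]]

print(sigma)                       # off-diagonal entries flip sign
print(sigma[0, 0], rho[0, 0])      # both vanish
```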
The argument given above relies only on Lorentz covariance. The only property of
the 2DEG that entered was the density. The argument works equally well whether
the system is classical or quantum, whether the electron state is liquid, vapor,
or solid. It simply does not matter. Thus, in the absence of disorder, the Hall
effect teaches us nothing about the system other than its density. The Hall
resistivity is simply a linear function of magnetic field whose slope tells us
about the density. In the quantum Hall regime we would therefore see none of the
novel physics in the absence of disorder since disorder is needed to destroy
translation invariance. Once the translation invariance is destroyed there is a
preferred frame of reference and the Lorentz covariance argument given above
fails.
Figure~(\ref{fig:qhedata})
\begin{figure}
\centerline{\epsfxsize=12cm
\epsffile{qhe_transport_data.eps}}
\caption[]{Integer and fractional quantum Hall transport data showing the
plateau regions in the Hall resistance $R_{\rm H}$ and associated dips in the
dissipative resistance $R$. The numbers indicate the Landau level filling
factors at which various features occur. After ref.~\cite{transport-data}.}
\label{fig:qhedata}
\end{figure}
shows the remarkable transport data for a real device in the quantum Hall
regime. Instead of a Hall resistivity which is simply a linear function of
magnetic field, we see a series of so-called \textit{Hall plateaus} in which
$\rho_{xy}$ is a universal constant
\begin{equation}
\rho_{xy} = -\frac{1}{\nu}\frac{h}{e^{2}}
\end{equation}
independent of all microscopic details (including the precise value of the
magnetic field). Associated with each of these plateaus is a dramatic decrease
in the dissipative resistivity $\rho_{xx}\longrightarrow 0$ which drops as much
as 13 orders of magnitude in the plateau regions. Clearly the system is
undergoing some sort of sequence of phase transitions into highly idealized
dissipationless states. Just as in a superconductor, the dissipationless state
supports persistent currents. These can be produced in devices having the
Corbino ring geometry shown in fig.~(\ref{fig:corbino}).
\begin{figure}
\centerline{\epsfxsize=6cm
\epsffile{corbino.xfig.eps}}
\caption[]{Persistent current circulating in a quantum Hall device having the
Corbino geometry. The radial electric field is maintained by the charges which
can not flow back together because $\sigma_{xx}$ is nearly zero. These charges
result from the radial current pulse associated with the azimuthal electric
field pulse produced by the applied flux $\Phi(t)$.}
\label{fig:corbino}
\end{figure}
Applying additional flux through the ring produces a temporary azimuthal
electric field by Faraday induction. A current pulse is induced at right angles
to the $E$ field and produces a radial charge polarization as shown. This
polarization induces a (quasi-) permanent radial electric field which in turn
causes persistent azimuthal currents. Torque magnetometer measurements
\cite{torque} have shown that the currents can persist $\sim 10^{3}~\mbox{secs}$
at very low temperatures. After this time the tiny $\sigma_{xx}$ gradually
allows the radial charge polarization to dissipate. We can think of the
azimuthal currents as gradually spiraling outwards due to the Hall angle
(between current and electric field) being very slightly less than $90^{\circ}$
(by $\sim 10^{-13}$).
We have shown that the random impurity potential (and by implication Anderson
localization) is a necessary condition for Hall plateaus to occur, but we have
not yet understood precisely how this novel behavior comes about. That is our
next task.
\chapter{Lowest Landau Level Projection}
\label{app:projection}
A convenient formulation of quantum mechanics within the subspace of the lowest
Landau level (LLL) was developed by Girvin and Jach \cite{girvinjach}, and was exploited
by Girvin, MacDonald and Platzman in the magneto-roton theory of collective
excitations of the incompressible states responsible for the fractional quantum
Hall effect \cite{GMP}. Here we briefly review this formalism. See also
Ref.~\cite{stonebook}.
We first consider the one-body case and choose the symmetric gauge. The
single-particle eigenfunctions of kinetic energy and angular momentum in the LLL
are given in Eq.~(\ref{eq:symmgauge})
\begin{equation}
\phi_m(z)=\frac{1}{(2\pi 2^m m!)^{1/2}}\> z^m\>
\exp{\biggl( -\frac{\vert z\vert^2}{4}\biggr)} ,
\label{eq3.10}
\end{equation}
where $m$ is a non-negative integer, and $z = (x + iy)/\ell$. From
(\ref{eq3.10}) it is clear that any wave function in the LLL can be written in
the form
\begin{equation}
\psi (z)=f(z)\> e^{-\frac{\vert z\vert^2}{4}}
\label{eq3.20}
\end{equation}
where $f(z)$ is an analytic function of $z$, so the subspace in the LLL is
isomorphic to the Hilbert space of analytic functions
\cite{girvinjach,Bargman,stonebook}. Following Bargmann \cite{girvinjach,Bargman}, we define
the inner product of two analytic functions as
\begin{equation}
(f, g)=\int d\mu (z)\> f^\ast (z)\> g(z),
\label{eq3.30}
\end{equation}
where
\begin{equation}
d\mu (z)\equiv (2\pi )^{-1}\> dxdy\> e^{-\frac{\vert z\vert^2}{2}} .
\label{eq3.40}
\end{equation}
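As a sanity check on this measure, one can verify numerically that the analytic parts $f_{m}(z) = z^{m}/\sqrt{2^{m}m!}$ of the orbitals in (\ref{eq3.10}) come out orthonormal under $d\mu$. The sketch below does the angular integral analytically and the radial one by quadrature:

```python
# Sanity check on the Bargmann measure: the analytic parts
# f_m(z) = z^m / sqrt(2^m m!) of the LLL orbitals are orthonormal under
# d mu = (2 pi)^{-1} dx dy exp(-|z|^2/2).  The angular integral gives
# delta_{mn}; the radial integral is evaluated by quadrature.
import math
from scipy.integrate import quad

def bargmann_overlap(m, n):
    if m != n:
        return 0.0   # angular integral of exp(i(n-m) theta) vanishes
    radial, _ = quad(lambda r: r**(2 * n + 1) * math.exp(-r**2 / 2),
                     0, math.inf)
    return radial / (2**n * math.factorial(n))

for m in range(4):
    print(m, bargmann_overlap(m, m))   # each close to 1.0
```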
Now we can define bosonic ladder operators that connect $\phi_m$ to $\phi_{m\pm
1}$ (and which act on the polynomial part of $\phi_m$ only):
\alpheqn{
\begin{eqnarray}
a^\dagger &=& \frac{z}{\sqrt{2}} ,\label{eq3.50a}\\
a &=& \sqrt{2}\> \frac{\partial}{\partial z} ,\label{eq3.50b}
\end{eqnarray}}
\reseteqn
\noindent so that
\alpheqn{
\begin{eqnarray}
a^\dagger\> \phi_m &=& \sqrt{m+1}\> \phi_{m+1} ,\label{eq3.60a}\\
a\> \phi_m &=& \sqrt{m}\> \phi_{m-1} ,\label{eq3.60b}\\
(f, a^\dagger\; g) &=& (a\; f, g) , \label{eq3.60c}\\
(f, a\; g) &=& (a^\dagger\; f, g) .\label{eq3.60d}
\end{eqnarray}}
\reseteqn
\noindent All operators that have non-zero matrix elements only within the LLL
can be expressed in terms of $a$ and $a^\dagger$. It is essential to notice that
the adjoint of $a^\dagger$ is not $z^\ast/\sqrt{2}$ but $a\equiv
\sqrt{2}\partial/\partial z$, because $z^\ast$ connects states in the LLL to
higher Landau levels. Actually $a$ is the projection of $z^\ast/\sqrt{2}$ onto
the LLL as seen clearly in the following expression:
\[
(f, \frac{z^\ast}{\sqrt{2}}\; g)=(\frac{z}{\sqrt{2}}\; f, g)=(a^\dagger\; f,
g)=(f, a\; g).
\]
So we find
\begin{equation}
\overline{z^\ast}=2\frac{\partial}{\partial z},
\label{eq3.70}
\end{equation}
where the overbar indicates projection onto the LLL. Since $\overline{z^\ast}$
and $z$ do not commute, we need to be very careful to properly order the operators
before projection. A little thought shows that in order to project an operator which is a
combination of $z^\ast$ and $z$, we must first normal order all the $z^\ast$'s to the
left of the $z$'s, and then replace $z^\ast$ by $\overline{z^\ast}$. With this
rule in mind and (\ref{eq3.70}), we can easily project onto the LLL any operator that
involves space coordinates only.
For example, the one-body density operator in momentum space is
\[
\rho_{\mathbf{q}} = \frac{1}{\sqrt{A}}\> e^{-i\mathbf{q}\cdot \mathbf{r}} =
\frac{1}{\sqrt{A}}\> e^{-\frac{i}{2}(q^\ast z + qz^\ast)} = \frac{1}{\sqrt{A}}\>
e^{-\frac{i}{2}qz^\ast}\; e^{-\frac{i}{2}q^\ast z} ,
\]
where $A$ is the area of the system, and $q=q_x + iq_y$. Hence
\begin{equation}
\overline{\rho_q} = \frac{1}{\sqrt{A}}\> e^{-iq\frac{\partial}{\partial z}}\;
e^{-\frac{i}{2}q^\ast z} = \frac{1}{\sqrt{A}}\> e^{-\frac{\vert q\vert^2}{4}}\;
\tau_q ,
\label{eq3.80}
\end{equation}
where
\begin{equation}
\tau_q = e^{-iq\frac{\partial}{\partial z} - \frac{i}{2}q^\ast z}
\label{eq3.90}
\end{equation}
is a unitary operator satisfying the closed Lie algebra
\alpheqn{
\begin{eqnarray}
\tau_q\tau_k &=& \tau_{q+k}\> e^{\frac{i}{2}q\wedge k} ,\label{eq3.100a}\\
{}[\tau_q, \tau_k] &=& 2i\; \tau_{q+k}\> \sin{\frac{q\wedge k}{2}},
\label{eq3.100b}
\end{eqnarray}}
\reseteqn
\noindent where $q\wedge k \equiv \ell^2(\mathbf{q}\times \mathbf{k}) \cdot
\hat{\mathbf{z}}$. We also have $\tau_q\tau_k\> \tau_{-q}\tau_{-k} = e^{iq\wedge
k}$. This is a familiar feature of the group of translations in a magnetic
field, because $q\wedge k$ is exactly the phase generated by the flux in the
parallelogram generated by $\mathbf{q}\ell^2$ and $\mathbf{k}\ell^2$. Hence the
$\tau$'s form a representation of the magnetic translation group [see
Fig.~(\ref{fig:magtrans})].
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{translation.xfig.eps}}
\caption[]{Illustration of magnetic translations and phase factors. When an
electron travels around a parallelogram (generated by
$\tau_{q}\tau_{k}\tau_{-q}\tau_{-k}$) it picks up a phase $\phi = 2\pi
\frac{\Phi}{\Phi_{0}} = q\wedge k$, where $\Phi$ is the flux enclosed in the
parallelogram and $\Phi_{0}$ is the flux quantum.}
\label{fig:magtrans}
\end{figure}
In fact $\tau_{q}$ translates the particle a distance $\ell^{2}\hat{\mathbf{z}}
\times \mathbf{q}$. This means that different wave vector components of the
charge density do not commute. It is from here that non-trivial dynamics arises
even though the kinetic energy is totally quenched in the LLL subspace.
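The Lie algebra (\ref{eq3.100a})--(\ref{eq3.100b}) can also be checked numerically. In terms of the ladder operators, $\tau_{q} = \exp[-(i/\sqrt{2})(q\,a + q^{\ast}a^{\dagger})]$, a displacement operator that can be truncated to a finite matrix; a sketch (the identity then holds on the low-lying states, up to truncation error):

```python
# Finite-matrix check of the magnetic translation algebra
# tau_q tau_k = tau_{q+k} exp(i (q wedge k)/2).  In the Bargmann
# representation tau_q = exp(-(i/sqrt 2)(q a + q* a^dag)) is a
# displacement operator in the ladder operators; truncating the
# oscillator space at N levels, the identity holds on the low-lying
# states up to truncation error.  Units with l = 1.
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
adag = a.conj().T

def tau(q):
    # q is the complex number q_x + i q_y
    return expm(-1j / np.sqrt(2) * (q * a + np.conj(q) * adag))

q, k = 0.3 + 0.2j, -0.1 + 0.4j
wedge = (np.conj(q) * k).imag      # q_x k_y - q_y k_x

lhs = tau(q) @ tau(k)
rhs = tau(q + k) * np.exp(1j * wedge / 2)

# compare matrix elements between low-lying states, away from the edge
err = np.max(np.abs(lhs[:8, :8] - rhs[:8, :8]))
print("max deviation:", err)       # limited only by truncation/roundoff
```

Since the generator is anti-Hermitian, each truncated $\tau_{q}$ is still exactly unitary, so only the product rule needs the truncation caveat.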
This formalism is readily generalized to the case of many particles with spin,
as we will show next. In a system with area $A$ and $N$ particles the projected
charge and spin density operators are
\alpheqn{
\begin{eqnarray}
\overline{\rho _{q}} &=& \frac{1}{\sqrt{A}}\> \sum_{i=1}^N
\overline{e^{-i\mathbf{q} \cdot \mathbf{r}_i}} = \frac{1}{\sqrt{A}}\>
\sum_{i=1}^N e^{-\frac{\vert q\vert^2}{4}}\> \tau_{q}(i) \label{eq3.110a}\\
\overline{S_{q}^\mu} &=& \frac{1}{\sqrt{A}}\> \sum_{i=1}^N
\overline{e^{-i\mathbf{q}\cdot \mathbf{r}_i}}\> S_i^\mu = \frac{1}{\sqrt{A}}\>
\sum_{i=1}^N e^{-\frac{\vert q\vert^2}{4}}\> \tau_{q}(i)\> S_i^\mu ,
\label{eq3.110b}
\end{eqnarray}}
\reseteqn
\noindent where $\tau_{q}(i)$ is the magnetic translation operator for the $i$th
particle and $S_i^\mu$ is the $\mu$th component of the spin operator for the
$i$th particle. We immediately find that unlike the unprojected operators, the
projected spin and charge density operators do not commute:
\begin{equation}
[\bar{\rho}_{k}, \bar{S}_{q}^\mu] = \frac{2i}{\sqrt{A}}\> e^{\frac{\vert k +
q\vert^2 - \vert k\vert^2 - \vert q\vert^2}{4}}\> \overline{S_{k + q}^\mu}\>
\sin{\biggl(\frac{k\wedge q}{2}\biggr)} \neq 0.
\label{eq3.120}
\end{equation}
This implies that within the LLL the dynamics of spin and charge are
entangled: rotating the spin moves the charge. As a consequence, spin
textures carry charge, as discussed in the text.
\section{Quantum Dynamics in Strong B Fields}
Since we will be dealing with the Hamiltonian and the Schr\"{o}dinger equation,
our first order of business is to choose a gauge for the vector potential. One
convenient choice is the so-called Landau gauge:
\begin{equation}
\vec{A}(\vec{r}\,) = xB\hat{y}
\end{equation}
which obeys $\vec{\nabla} \times \vec{A} = B\hat{z}$. In this gauge the vector
potential points in the $y$ direction but varies only with the $x$ position, as
illustrated in fig.~(\ref{fig:gauge}).
\begin{figure}
\centerline{\epsfysize=6cm
\epsffile{vectorpot.xfig.eps}}
\caption[]{Illustration of the Landau gauge vector potential $\vec{A} =
xB\hat{y}$. The magnetic field is perfectly uniform, but the vector potential
has a preferred origin and orientation corresponding to the particular gauge
choice.}
\label{fig:gauge}
\end{figure}
Hence the system still has translation invariance in the $y$ direction. Notice
that the magnetic field (and hence all the physics) is translationally
invariant, but the Hamiltonian is not! (See exercise~\ref{ex:9801}.) This is one
of many peculiarities of dealing with vector potentials.
\boxedtext{\begin{exercise}
Show for the Landau gauge that even though the Hamiltonian is not invariant for
translations in the $x$ direction, the physics is still invariant since the
change in the Hamiltonian that occurs under translation is simply equivalent to
a gauge change. Prove this for any arbitrary gauge, assuming only that the
magnetic field is uniform.
\label{ex:9801}
\end{exercise}}
The Hamiltonian can be written in the Landau gauge as
\begin{equation}
H = \frac{1}{2m}\left( p_{x}^{2} + (p_{y} + \frac{eB}{c}x)^{2} \right)
\end{equation}
Taking advantage of the translation symmetry in the $y$ direction, let us attempt
a separation of variables by writing the wave function in the form
\begin{equation}
\psi_{k}(x,y) = e^{i ky} f_{k}(x).
\end{equation}
This has the advantage that it is an eigenstate of $p_{y}$ and hence we can make
the replacement $p_{y} \longrightarrow \hbar k$ in the Hamiltonian. After
separating variables we have the effective one-dimensional Schr\"{o}dinger
equation
\begin{equation}
h_{k} f_{k}(x) = \epsilon_{k} f_{k}(x),
\end{equation}
where
\begin{equation}
h_{k} \equiv \frac{1}{2m} p_{x}^{2}+\frac{1}{2m}\left(\hbar k +
\frac{eB}{c}x\right)^{2}.
\end{equation}
This is simply a one-dimensional displaced harmonic oscillator\footnote{Thus we
have arrived at the harmonic oscillator hinted at semiclassically, but
paradoxically it is only one-dimensional, not two. The other degree of freedom
appears (in this gauge) in the $y$ momentum.}
\begin{equation}
h_{k} = \frac{1}{2m} p_{x}^{2} + \frac{1}{2}m\omega_{c}^{2} \left(x +
k\ell^{2}\right)^{2}
\label{eq:1d-displaced}
\end{equation}
whose frequency is the classical cyclotron frequency and whose central position
$X_{k} = -k\ell^{2}$ is (somewhat paradoxically) determined by the $y$ momentum
quantum number. Thus for each plane wave chosen for the $y$ direction there will
be an entire family of energy eigenvalues
\begin{equation}
\epsilon_{kn} = (n+\frac{1}{2})\hbar\omega_{c}
\end{equation}
which depend only on $n$ and are completely independent of the $y$ momentum
$\hbar k$. The corresponding (unnormalized) eigenfunctions are
\begin{equation}
\psi_{nk}(\vec{r}\,) = \frac{1}{\sqrt{L}} e^{iky}
H_{n}(x+k\ell^{2})e^{-\frac{1}{2\ell^{2}}(x+k\ell^{2})^{2}},
\label{eq:landaupsi}
\end{equation}
where $H_{n}$ is (as usual for harmonic oscillators) the $n$th Hermite
polynomial (in this case displaced to the new central position $X_{k}$).
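That the spectrum really is independent of $k$ is easy to confirm by brute force: diagonalizing $h_{k}$ on a grid (units $\hbar = m = \omega_{c} = \ell = 1$) for two different $k$ values gives the same ladder $(n+\frac{1}{2})$, with only the center position shifted. A minimal sketch:

```python
# Brute-force check that the Landau-level energies do not depend on the
# y-momentum k: diagonalize h_k = p_x^2/2 + (x + k)^2/2 on a grid
# (units hbar = m = omega_c = l = 1) for two different k values.
import numpy as np

def landau_energies(k, L=30.0, M=1500):
    x = np.linspace(-L / 2, L / 2, M)
    dx = x[1] - x[0]
    # kinetic energy -1/2 d^2/dx^2 by central differences
    T = (-0.5 / dx**2) * (np.diag(np.ones(M - 1), 1)
                          + np.diag(np.ones(M - 1), -1)
                          - 2.0 * np.eye(M))
    V = np.diag(0.5 * (x + k)**2)    # oscillator displaced to X_k = -k
    return np.linalg.eigvalsh(T + V)[:4]

print(landau_energies(0.0))   # close to [0.5, 1.5, 2.5, 3.5]
print(landau_energies(5.0))   # same energies, center shifted to x = -5
```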
\boxedtext{\begin{exercise}
Verify that eq.~(\ref{eq:landaupsi}) is in fact a solution of the
Schr\"{o}dinger equation as claimed.
\label{ex:9802}
\end{exercise}}
These harmonic oscillator levels are called Landau levels. Due to the lack of
dependence of the energy on $k$, the degeneracy of each level is enormous, as we
will now show. We assume periodic boundary conditions in the $y$ direction.
Because of the vector potential, it is \textit{impossible} to simultaneously
have periodic boundary conditions in the $x$ direction. However since the basis
wave functions are harmonic oscillator polynomials multiplied by strongly
converging gaussians, they rapidly vanish for positions away from the center
position $X_{k} = -k\ell^{2}$. Let us suppose that the sample is rectangular
with dimensions $L_{x},L_{y}$ and that the left hand edge is at $x=-L_{x}$ and
the right hand edge is at $x=0$. Then the values of the wavevector $k$ for which
the basis state is substantially inside the sample run from $k=0$ to
$k=L_{x}/\ell^{2}$. It is clear that the states at the left edge and the right
edge differ strongly in their $k$ values and hence periodic boundary conditions
are impossible.\footnote{The best one can achieve is so-called quasi-periodic
boundary conditions in which the phase difference between the left and right
edges is zero at the bottom and rises linearly with height, reaching $2\pi
N_{\Phi} \equiv L_{x}L_{y}/\ell^{2}$ at the top. The eigenfunctions with these
boundary conditions are elliptic theta functions which are linear combinations
of the gaussians discussed here. See the discussion by Haldane in
Ref.~\cite{SMGBOOK}.}
The total number of states in \textit{each} Landau level is then
\begin{equation}
N = \frac{L_{y}}{2\pi}\int_{0}^{L_{x}/\ell^{2}} dk =
\frac{L_{x}L_{y}}{2\pi\ell^{2}} = N_{\Phi}
\end{equation}
where
\begin{equation}
N_{\Phi}\equiv\frac{BL_{x}L_{y}}{\Phi_{0}}
\end{equation}
is the number of flux quanta penetrating the sample. Thus there is one state per
Landau level per flux quantum which is consistent with the semiclassical result
from Exercise~(\ref{ex:stateperflux}). Notice that even though the family of
allowed wavevectors is only one-dimensional, we find that the degeneracy of each
Landau level is extensive in the two-dimensional area. The reason for this is
that the spacing between wave vectors allowed by the periodic boundary
conditions $\Delta_{k} = \frac{2\pi}{L_{y}}$ \textit{decreases} while the
range of allowed wave vectors $[0,L_{x}/\ell^{2}]$ \textit{increases} with
increasing $L$. The reader may also worry that for very large samples, the
range of allowed values of $k$ will be so large that it will fall outside the
first Brillouin zone forcing us to include band mixing and the periodic lattice
potential beyond the effective mass approximation. This is not true however,
since the canonical momentum is a gauge dependent quantity. The value of $k$ in
any particular region of the sample can be made small by shifting the origin of
the coordinate system to that region (thereby making a gauge transformation).
The width of the harmonic oscillator wave functions in the $n$th Landau level is
of order $\sqrt{n}\ell$. This is microscopic compared to the system size, but
note that the spacing between the centers
\begin{equation}
\Delta = \Delta_{k}\ell^{2} = \frac{2\pi\ell^{2}}{L_{y}}
\end{equation}
is vastly smaller (assuming $L_{y} \gg \ell$). Thus the supports of the different
basis states are strongly overlapping (but they are still orthogonal).
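Some illustrative magnitudes for the quantities above; the field $B = 10~\mbox{T}$ and the $1~\mbox{mm}\times 1~\mbox{mm}$ sample size are assumed here for concreteness, not taken from the text:

```python
# Illustrative magnitudes: magnetic length, Landau-level degeneracy
# N_Phi = B A / Phi_0, and guiding-center spacing Delta = 2 pi l^2/L_y.
# The field B = 10 T and the 1 mm x 1 mm sample size are assumed for
# concreteness, not taken from the text.
import math

h = 6.62607015e-34        # J s
e = 1.602176634e-19       # C
hbar = h / (2 * math.pi)

B = 10.0                  # tesla (assumed)
Lx = Ly = 1e-3            # sample dimensions (m, assumed)

l = math.sqrt(hbar / (e * B))     # magnetic length (SI form of sqrt(hbar c/eB))
Phi0 = h / e                      # flux quantum
N_Phi = B * Lx * Ly / Phi0        # states per Landau level
Delta = 2 * math.pi * l**2 / Ly   # spacing of adjacent guiding centers

print(f"l     = {l * 1e9:.1f} nm")       # ~8.1 nm
print(f"N_Phi = {N_Phi:.2e}")            # ~2.4e9
print(f"Delta = {Delta * 1e15:.0f} fm")  # vastly smaller than l
```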
\boxedtext{\begin{exercise}
Using the fact that the energy for the $n$th harmonic oscillator state is
$(n+\frac{1}{2})\hbar\omega_{c}$, present a semi-classical argument explaining
the result claimed above that the width of the support of the wave function
scales as $\sqrt{n}\ell$.
\label{ex:9803}
\end{exercise}
\begin{exercise}
Using the Landau gauge, construct a gaussian wave packet in the lowest Landau
level of the form
\[
\Psi(x,y) = \int_{-\infty}^{+\infty} dk\, a_{k}\,
e^{iky}e^{-\frac{1}{2\ell^{2}}(x+k\ell^{2})^{2}},
\]
choosing $a_{k}$ in such a way that the wave packet is localized as closely as
possible around some point $\vec{R}$. What is the smallest size wave packet that
can be constructed without mixing in higher Landau levels?
\label{ex:landaupacket}
\end{exercise}}
Having now found the eigenfunctions for an electron in a strong magnetic field
we can relate them back to the semi-classical picture of wave packets undergoing
circular cyclotron motion. Consider an initial semiclassical wave packet located
at some position and having some specified momentum. In the semiclassical limit
the mean energy of this packet will greatly exceed the cyclotron energy
$\frac{\hbar^{2}K^{2}}{2m}\gg\hbar\omega_{c}$ and hence it will be made up of a
linear combination of a large number of different Landau level states centered
around $\bar n = \frac{\hbar^{2}K^{2}}{2m\hbar\omega_{c}}$
\begin{equation}
\Psi(\vec{r},t) = \sum_{n}\int L_{y}\frac{dk}{2\pi}\, a_{n}(\vec{k})
\psi_{nk}(\vec{r}\,) e^{-i(n+\frac{1}{2})\omega_{c} t}.
\end{equation}
Notice that in an ordinary 2D problem at zero field, the complete set of plane
wave states would be labeled by a 2D continuous momentum label. Here we have
one discrete label (the Landau level index) and a 1D continuous label (the $y$
wave vector). Thus the `sum' over the complete set of states is actually a
combination of a summation and an integration.
The details of the initial position and momentum are controlled by the
amplitudes $a_{n}(\vec{k})$. We can immediately see, however, that since the
energy levels are exactly evenly spaced, the motion is exactly periodic:
\begin{equation}
\Psi(\vec{r},t+\frac{2\pi}{\omega_{c}}) = e^{-i\pi}\,\Psi(\vec{r},t).
\end{equation}
(The overall phase $e^{-i\pi}$ comes from the zero-point energy and is unobservable.)
If one works through the details, one finds that the motion is indeed circular
and corresponds to the expected semi-classical cyclotron orbit.
For simplicity we will restrict the remainder of our discussion to the lowest
Landau level where the (correctly normalized) eigenfunctions in the Landau gauge
are (dropping the index $n=0$ from now on):
\begin{equation}
\psi_{k}(\vec{r}\,) = \frac{1}{\sqrt{\pi^{1/2}L\ell}} e^{iky}
e^{-\frac{1}{2\ell^{2}}(x+k\ell^{2})^{2}}
\label{eq:lowlandaupsi}
\end{equation}
and every state has the same energy eigenvalue $\epsilon_{k}=
\frac{1}{2}\hbar\omega_{c}$.
We imagine that the magnetic field (and hence the Landau level splitting) is
very large so that we can ignore higher Landau levels. (There are some
subtleties here to which we will return.) Because the states are all degenerate,
any wave packet made up of any combination of the basis states will be a
stationary state. The total current will therefore be zero. We anticipate
however from semiclassical considerations that there should be some remnant of
the classical circular motion visible in the local current density. To see this
note that the expectation value of the current in the $k$th basis state is
\begin{equation}
\langle\vec{J}\,\rangle = -e\frac{1}{m} \left\langle\Psi_{k}\left|\left(\vec{p}
+ \frac{e}{c}\vec{A}\,\right)\right|\Psi_{k}\right\rangle.
\end{equation}
The $y$ component of the current is
\begin{eqnarray}
\langle J_{y}\rangle &=& -\frac{e}{m\pi^{1/2}\ell} \int
dx\,e^{-\frac{1}{2\ell^{2}}(x+k\ell^{2})^{2}} \left(\hbar k +
\frac{eB}{c}x\right) e^{-\frac{1}{2\ell^{2}}(x+k\ell^{2})^{2}}\nonumber\\
&=& -\frac{e\omega_{c}}{\pi^{1/2}\ell} \int
dx\,e^{-\frac{1}{\ell^{2}}(x+k\ell^{2})^{2}} \left(x+k\ell^{2}\right)
\end{eqnarray}
We see from the integrand that the current density is antisymmetric about the
peak of the gaussian and hence the total current vanishes. This antisymmetry
(positive vertical current on the left, negative vertical current on the right)
is the remnant of the semiclassical circular motion.
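The vanishing of the total current can be confirmed symbolically. In the sympy sketch below, $u = x + k\ell^{2}$ is the integration variable of the last line above:

```python
import sympy as sp

u   = sp.symbols('u', real=True)        # u = x + k*ell^2
ell = sp.symbols('ell', positive=True)

# Integrand of <J_y> after the change of variable:
# an odd function times an even gaussian, so the integral vanishes.
integrand = u * sp.exp(-u**2 / ell**2)
total = sp.integrate(integrand, (u, -sp.oo, sp.oo))
assert total == 0
```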
Let us now consider the case of a uniform electric field pointing in the $x$
direction and giving rise to the potential energy
\begin{equation}
V(\vec{r}\,) = +eEx.
\end{equation}
This still has translation symmetry in the $y$ direction and so our Landau gauge
choice is still the most convenient. Again separating variables we see that the
solution is nearly the same as before, except that the displacement of the
harmonic oscillator is slightly different. The displaced-oscillator Hamiltonian
becomes
\begin{equation}
h_{k} = \frac{1}{2m} p_{x}^{2} + \frac{1}{2}m\omega_{c}^{2} \left(x +
k\ell^{2}\right)^{2} +eEx.
\label{eq:1d-displacedE}
\end{equation}
Completing the square we see that the oscillator is now centered at the new
position
\begin{equation}
X_{k} = -k\ell^{2} - \frac{eE}{m\omega_{c}^{2}}
\end{equation}
and the energy eigenvalue is now linearly dependent on the particle's peak
position $X_{k}$ (and therefore linear in the $y$ momentum)
\begin{equation}
\epsilon_{k} = \frac{1}{2}\hbar\omega_{c} +eEX_{k} + \frac{1}{2}m\bar{v}^{2},
\label{eq:driftepsilon}
\end{equation}
where
\begin{equation}
\bar{v} \equiv -c\frac{E}{B}.
\end{equation}
Because of the shift in the peak position of the wavefunction, the perfect
antisymmetry of the current distribution is destroyed and there is a net current
\begin{equation}
\langle J_{y}\rangle = -e\bar{v}
\end{equation}
showing that $\bar{v}\hat{y}$ is simply the usual $c\vec{E} \times
\vec{B}/B^{2}$ drift velocity. This result can be derived either by explicitly
doing the integral for the current or by noting that the wave packet group
velocity is
\begin{equation}
\frac{1}{\hbar}\frac{\partial \epsilon_{k}}{\partial k} =
\frac{eE}{\hbar}\frac{\partial X_{k}}{\partial k} = \bar{v}
\end{equation}
independent of the value of $k$ (since the electric field is a constant in this
case, giving rise to a strictly linear potential). Thus we have recovered the
correct kinematics from our quantum solution.
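The group-velocity route can be checked symbolically. The sympy sketch below assumes Gaussian units, with $\omega_{c} = eB/mc$ and $\ell^{2} = \hbar c/eB$ as used throughout:

```python
import sympy as sp

hbar, e, E, B, c, m, k = sp.symbols('hbar e E B c m k', positive=True)

omega_c = e*B/(m*c)        # cyclotron frequency (Gaussian units)
ell2    = hbar*c/(e*B)     # magnetic length squared
vbar    = -c*E/B           # expected E x B drift velocity

X_k = -k*ell2 - e*E/(m*omega_c**2)                  # shifted oscillator center
eps = (sp.Rational(1, 2)*hbar*omega_c               # zero-point energy
       + e*E*X_k                                    # potential at the center
       + sp.Rational(1, 2)*m*vbar**2)               # drift kinetic energy

group_velocity = sp.diff(eps, k) / hbar
assert sp.simplify(group_velocity - vbar) == 0      # (1/hbar) d eps/dk = vbar
```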
It should be noted that the applied electric field `tilts' the Landau levels in
the sense that their energy is now linear in position as illustrated in
fig.(\ref{fig:LLtilt}).
\begin{figure}
\centerline{\epsfxsize=10cm
\epsffile{LLtilt.xfig.eps}}
\caption[]{Illustration of electron Landau energy levels
$\left(n + \frac{1}{2}\right)\hbar\omega_{\mathrm{c}}$ vs.\ position $x_{k} =
-k\ell^{2}$. (a) Zero electric field case. (b) Case with finite electric field
pointing in the $+\hat{x}$ direction.}
\label{fig:LLtilt}
\end{figure}
This means that there are degeneracies between different Landau level states
because different kinetic energy can compensate different potential energy in
the electric field. Nevertheless, we have found the exact eigenstates (i.e., the
stationary states). It is not possible for an electron to decay into one of the
other degenerate states because they have different canonical momenta. If however
disorder or phonons are available to break translation symmetry, then these
decays become allowed and dissipation can appear. The matrix elements for such
processes are small if the electric field is weak because the degenerate states
are widely separated spatially due to the small tilt of the Landau levels.
\boxedtext{\begin{exercise}
It is interesting to note that the exact eigenstates in the presence of the
electric field can be viewed as displaced oscillator states in the original
(zero $E$ field) basis. In this basis the displaced states are linear
combinations of all the Landau level excited states of the same $k$. Use
first-order perturbation theory to find the amount by which the $n=1$ Landau
level is mixed into the $n=0$ state. Compare this with the exact amount of
mixing computed using the exact displaced oscillator state. Show that the two
results agree to first order in $E$. Because the displaced state is a linear
combination of more than one Landau level, it can carry a finite current. Give
an argument, based on perturbation theory why the amount of this current is
inversely proportional to the $B$ field, but is independent of the mass of the
particle. Hint: how does the mass affect the Landau level energy spacing and the
current operator?
\label{ex:9804}
\end{exercise}}
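Without working the exercise in full, the agreement it asks for can be cross-checked symbolically. The sympy sketch below assumes the standard identification $\ell^{2} = \hbar/m\omega_{c}$ and one fixed sign convention for the displacement; with other conventions the overall sign may differ:

```python
import sympy as sp

hbar, m, wc, e, E = sp.symbols('hbar m omega_c e E', positive=True)

ell = sp.sqrt(hbar/(m*wc))                 # oscillator length = magnetic length

# First-order perturbation theory: mixing of n=1 into n=0 by V = eEx,
# using <1|x|0> = ell/sqrt(2) and E_0 - E_1 = -hbar*omega_c.
c1_pt = (e*E*ell/sp.sqrt(2)) / (-hbar*wc)

# Exact displaced oscillator: coherent-state amplitude for displacement d.
d = e*E/(m*wc**2)
c1_exact = -d/(sp.sqrt(2)*ell)

assert sp.simplify(c1_pt - c1_exact) == 0  # agree to first order in E
```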
\section{Semiclassical Percolation Picture}
Let us consider a smooth random potential caused, say, by ionized silicon donors
remotely located away from the 2DEG in the GaAs semiconductor host. We take the
magnetic field to be very large so that the magnetic length is small on the
scale over which the potential varies. In addition, we ignore the Coulomb
interactions among the electrons.
What is the nature of the eigenfunctions in this random potential? We have
learned how to solve the problem exactly for the case of a constant electric
field and know the general form of the solution when there is translation
invariance in one direction. We found that the wave functions were plane waves
running along lines of constant potential energy and having a width
perpendicular to this which is very small and on the order of the magnetic
length. The reason for this is the discreteness of the kinetic energy in a
strong magnetic field. It is impossible for an electron stuck in a given Landau
level to continuously vary its kinetic energy. Hence energy conservation
restricts its motion to regions of constant potential energy. In the limit of
infinite magnetic field where Landau level mixing is completely negligible, this
confinement to lines of constant potential becomes exact (as the magnetic length
goes to zero).
We are led to the following somewhat paradoxical picture. The strong magnetic
field should be viewed as putting the system in the quantum limit in the sense
that $\hbar\omega_{c}$ is a very large energy (comparable to $\ensuremath{\epsilon_{\mathrm{F}}}$). At the same
time (if one assumes the potential is smooth) one can argue that since the
magnetic length is small compared to the scale over which the random potential
varies, the system is in a semi-classical limit where small wave packets (on the
scale of $\ell$) follow classical $\vec{E}\times \vec{B}$ drift trajectories.
From this discussion it then seems very reasonable that in the presence of a
smooth random potential, with no particular translation symmetry, the
eigenfunctions will live on contour lines of constant energy on the random
energy surface. Thus low energy states will be found lying along contours in
deep valleys in the potential landscape while high energy states will be found
encircling `mountain tops' in the landscape. Naturally these extreme states will
be strongly localized about these extrema in the potential.
\boxedtext{\begin{exercise}
Using the Lagrangian for a charged particle in a magnetic field with a scalar potential
$V(\vec{r}\,)$, consider the high field limit by setting the mass to zero (thereby sending
the quantum cyclotron energy to infinity).
\begin{enumerate}
\item Derive the classical equations of motion from the Lagrangian and show that they yield
simple $\vec{E} \times \vec{B}$ drift along isopotential contours.
\item Find the momentum conjugate to the coordinate $x$ and show that (with an appropriate
gauge choice) it is the coordinate $y$:
\begin{equation}
p_{x} = -\frac{\hbar}{\ell^{2}}y
\end{equation}
so that we have the strange
commutation relation
\begin{equation}
[x,y]=-i\ell^{2}.
\end{equation}
\end{enumerate}
In the infinite field limit where $\ell\rightarrow 0$ the coordinates commute and we recover
the semi-classical result in which effectively point particles drift along isopotentials.
\label{ex:semiclassical}
\end{exercise}}
To understand the nature of states at intermediate energies, it is useful to
imagine gradually filling a random landscape with water as illustrated in
fig.~(\ref{fig:sealevel}).
\begin{figure}
\centerline{\epsfysize=6cm
\epsffile{percolation.xfig.eps}}
\caption[]{Contour map of a smooth random landscape. Closed dashed lines
indicate local mountain peaks. Closed solid lines indicate valleys. From top to
bottom, the gray filled areas indicate the increasing `sea level' whose
shoreline finally percolates from one edge of the sample to the other (bottom
panel). The particle-hole excitations live along the shoreline and become
gapless when the shoreline becomes infinite in extent.}
\label{fig:sealevel}
\end{figure}
In this analogy, sea level represents the chemical potential for the electrons.
When only a small amount of water has been added, the water will fill the
deepest valleys and form small lakes. As the sea level is increased the lakes
will grow larger and their shorelines will begin to take on more complex shapes.
At a certain critical value of sea level a phase transition will occur in which
the shoreline percolates from one side of the system to the other. As the sea
level is raised still further, the ocean will cover the majority of the land and
only a few mountain tops will stick out above the water. The shore line will no
longer percolate but only surround the mountain tops.
As the sea level is raised still higher, additional percolation transitions
occur in turn as each successive Landau level passes under water. If Landau
level mixing is small and the disorder potential is symmetrically distributed
about zero, then the critical value of the chemical potential for the $n$th
percolation transition will occur near the center of the $n$th Landau level
\begin{equation}
\mu^{*}_{n} = (n+\frac{1}{2})\hbar\omega_{c}.
\end{equation}
This percolation transition corresponds to the transition between quantized Hall
plateaus. To see why, note that when the sea level is below the percolation
point, most of the sample is dry land. The electron gas is therefore insulating.
When sea level is above the percolation point, most of the sample is covered
with water. The electron gas is therefore connected throughout the majority of
the sample and a quantized Hall current can be carried. Another way to see this
is to note that when the sea level is above the percolation point, the confining
potential will make a shoreline along the full length of each edge of the
sample. The edge states will then carry current from one end of the sample to
the other.
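The flooding picture lends itself to a direct numerical sketch. In the Python fragment below the smooth random potential is a hypothetical stand-in built from a few long-wavelength Fourier modes, and left-to-right percolation of the flooded region is detected with a breadth-first search:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
N = 64                                    # grid points per side
xg, yg = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing="ij")

# Smooth random landscape: a few long-wavelength Fourier modes.
V = np.zeros((N, N))
for _ in range(12):
    qx, qy = rng.integers(1, 4, size=2)
    phase  = rng.uniform(0, 2*np.pi, size=2)
    V += rng.normal() * np.cos(2*np.pi*qx*xg + phase[0]) * np.cos(2*np.pi*qy*yg + phase[1])

def flooded_percolates(level):
    """True if the flooded region V < level connects the left edge to the right."""
    wet  = V < level
    seen = np.zeros_like(wet)
    q = deque((0, j) for j in range(N) if wet[0, j])
    for i, j in q:
        seen[i, j] = True
    while q:
        i, j = q.popleft()
        if i == N - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < N and 0 <= b < N and wet[a, b] and not seen[a, b]:
                seen[a, b] = True
                q.append((a, b))
    return False

# Raise the sea level until the shoreline first spans the sample.
levels   = np.linspace(V.min(), V.max(), 200)
critical = next(lv for lv in levels if flooded_percolates(lv))
print("critical sea level:", critical)
```

Raising the level in small steps and recording the first percolating level mimics sweeping the chemical potential through a plateau transition.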
We can also understand from this picture why the dissipative conductivity
$\sigma_{xx}$ has a sharp peak just as the plateau transition occurs. (Recall
the data in fig.~(\ref{fig:qhedata}).) Away from the critical point the
circumference of any particular patch of shoreline is finite. The period of the
semiclassical orbit around this is finite and hence so is the quantum level
spacing. Thus there are small energy gaps for excitation of states across these
real-space Fermi levels. Adding an infinitesimal electric field will only weakly
perturb these states due to the gap and the finiteness of the perturbing matrix
element, which is limited to values of order $eED$, where $D$ is
the diameter of the orbit. If however the shoreline percolates from one end of
the sample to the other then the orbital period diverges and the gap vanishes.
An infinitesimal electric field can then cause dissipation of energy.
Another way to see this is that as the percolation level is approached from
above, the edge states on the two sides will begin taking detours deeper and
deeper into the bulk and begin communicating with each other as the localization
length diverges and the shoreline zigzags throughout the bulk of the sample.
Thus electrons in one edge state can be backscattered into the other edge
states and ultimately reflected from the sample as illustrated in
fig.~(\ref{fig:zigzag}).
\begin{figure}
\centerline{\epsfysize=6cm
\epsffile{backscatter.xfig.eps}}
\caption[]{Illustration of edge states that wander deep into the bulk as the
quantum Hall localization transition is approached from the conducting side.
Solid arrows indicate the direction of drift along the isopotential lines.
Dashed arrows indicate quantum tunneling from one semi-classical orbit (edge
state) to the other. This backscattering localizes the eigenstates and prevents
transmission through the sample using the `edge' states (which become part of
the bulk localized states).}
\label{fig:zigzag}
\end{figure}
Because the random potential broadens out the Landau level density of states,
the quantized Hall plateaus will have finite width. As the chemical potential is
varied in the regime of localized states in between the Landau level peaks, only
the occupancy of localized states is changing. Hence the transport properties
remain constant until the next percolation transition occurs. It is important to
have the disorder present to produce this finite density of states and to
localize those states.
It is known that as the (classical) percolation point is approached in two
dimensions, the characteristic size (diameter) of the shoreline orbits diverges
like
\begin{equation}
\xi \sim |\delta|^{-4/3},
\end{equation}
where $\delta$ measures the deviation of the sea level from its critical value.
The shoreline structure is not smooth and in fact its circumference diverges
with a larger exponent $7/3$ showing that these are highly ramified fractal
objects whose circumference scales as the $7/4$th power of the diameter.
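The three exponents quoted here are mutually consistent: a hull whose diameter diverges as $|\delta|^{-4/3}$ and whose length diverges as $|\delta|^{-7/3}$ has length scaling as the $(7/3)/(4/3) = 7/4$ power of the diameter. A one-line arithmetic check:

```python
from fractions import Fraction

diameter_exponent      = Fraction(4, 3)   # xi ~ |delta|^(-4/3)
circumference_exponent = Fraction(7, 3)   # hull length ~ |delta|^(-7/3)

# circumference ~ diameter^(7/4)
assert circumference_exponent / diameter_exponent == Fraction(7, 4)
```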
So far we have assumed that the magnetic length is essentially zero. That is, we
have ignored the fact that the wave function support extends a small distance
transverse to the isopotential lines. If two different orbits with the same
energy pass near each other but are classically disconnected, the particle can
still tunnel between them if the magnetic length is finite. This quantum
tunneling causes the localization length to diverge faster than the classical
percolation model predicts. Numerical simulations find that the localization
length diverges like \cite{huckestein,chalker,HuoBhatt,DasSarmalocalizationbook}
\begin{equation}
\xi \sim |\delta|^{-\nu}
\end{equation}
where the exponent $\nu$ (not to be confused with the Landau level filling
factor!) has a value close to (but probably not exactly equal to) $7/3$ rather
than the $4/3$ found in classical percolation. It is believed that this exponent is
universal and independent of Landau level index.
Experiments on the quantum critical behavior are quite difficult but there is
evidence \cite{Wei}, at least in selected samples which show good scaling, that
$\nu$ is indeed close to $7/3$ (although there is some recent controversy on this
point \cite{shahar}) and that the conductivity tensor is universal at the critical
point \cite{HuoBhatt,Yanguniversality}.
Why Coulomb interactions that are present in
real samples do not spoil agreement with the numerical simulations is something
of a mystery at the time of this writing. For a discussion of some of these issues see
\cite{sondhiRMP97}.
\section{Skyrmion Dynamics}
\label{sec:skyrmion}
NMR \cite{Barrett} and nuclear specific heat \cite{Bayot} data indicate that
skyrmions dramatically enhance the rate at which the nuclear spins relax. This
nuclear spin relaxation is due to the transverse terms in the hyperfine
interaction which we neglected in discussing the Knight shift
\begin{equation}
\frac{1}{2} \Omega\; (I^{+}s^{-} + I^{-}s^{+}) = \frac{1}{2} \Omega\; \left\{
I^{+} \sum_{\vec{q}} S_{\vec{q}}^{-} + \mbox{h.c.}\right\}.
\label{eq:060807}
\end{equation}
The free electron model would predict that it would be impossible for an
electron and a nucleus to undergo mutual spin flips because the Zeeman energy
would not be conserved. (Recall that $\Delta_{N} \sim 10^{-3}\Delta$.) The spin
wave model shows that the problem is even worse than this. Recall from
eq.~(\ref{eq:1123195}) that the Coulomb interaction makes the spin wave energy
much larger than the electron Zeeman gap except at very long wavelengths. The
lowest frequency spin wave excitations lie above 20--50~GHz while the nuclei
precess at 10--100~MHz. Hence the two sets of spins are unable to couple
effectively. At $\nu = 1$ this simple picture is correct. The nuclear relaxation
time $T_{1}$ is extremely long (tens of minutes to many hours depending on the
temperature) as shown in fig.~(\ref{fig:relaxrate}).
\begin{figure}
\centerline{\rotatebox{-90}{\epsfysize=10cm \epsffile{1overT1.eps}}}
\caption[]{NMR nuclear spin relaxation rate $1/T_1$ as a function of filling
factor. After Tycko \textit{et al.}~\cite{Tycko}. The relaxation rate is very small at
$\nu=1$, but rises dramatically away from $\nu=1$ due to the presence of
skyrmions.}
\label{fig:relaxrate}
\end{figure}
However the figure also shows that for $\nu \neq 1$ the relaxation rate
$1/T_{1}$ rises dramatically and $T_{1}$ falls to $\sim 20~\mbox{seconds}$. In
order to understand this dramatic variation we need to develop a theory of spin
dynamics in the presence of skyrmions.
The $1/T_{1}$ data is telling us that for $\nu \neq 1$ at least some of the
electron spin fluctuations are orders of magnitude lower in frequency than the
Zeeman splitting and these low frequency modes can couple strongly to the
nuclei. One way this might occur is through the presence of disorder. We see
from eq.~(\ref{eq:060807}) that NMR is a local probe which couples to spin flip
excitations at all wave vectors. Recall from eq.~(\ref{eq:052615}) that lowest
Landau level projection implies that $\overline{S_{\vec{q}}^{-}}$ contains a
translation operator $\tau_{q}$. In the presence of strong disorder the Zeeman
and exchange cost of the spin flips could be compensated by translation to a
region of lower potential energy. Such a mechanism was studied in
\cite{Antoniou} but does not show sharp features in $1/T_{1}$ around $\nu = 1$.
We are left only with the possibility that the dynamics of skyrmions somehow
involves low frequency spin fluctuations. For simplicity we will analyze this
possibility ignoring the effects of disorder, although this may not be a valid
approximation.
Let us begin by considering a ferromagnetic $\nu = 1$ state containing a single
skyrmion of the form parameterized in eqs.~(\ref{eq:060339a}--\ref{eq:060339c}).
There are two degeneracies at the classical level in the effective field theory:
The energy does not depend on the position of the skyrmion and it does not
depend on the angular orientation $\varphi$. These continuous degeneracies are
known as zero modes \cite{Rajaraman} and require special treatment of the
quantum fluctuations about the classical solution.
In the presence of one or more skyrmions, the quantum Hall ferromagnet is
\textit{non-colinear}. In an ordinary ferromagnet where all the spins are
parallel, global rotations about the magnetization axis only change the quantum phase
of the state --- they do not produce a new state.\footnote{Rotation about the
Zeeman alignment axis is accomplished by $R = e^{-\frac{i}{\hbar}\varphi S^{z}}$.
But a colinear ferromagnet ground state is an eigenstate of $S^{z}$, so rotation
leaves the state invariant up to a phase.} Because the skyrmion has a
distinguishable orientation, each one induces a new $U(1)$ degree of freedom in
the system. In addition because the skyrmion has a distinguishable location,
each one induces a new translation degree of freedom. As noted above, both of
these are zero energy modes at the classical level suggesting that they might
well be the source of low energy excitations which couple so effectively to the
nuclei. We shall see that this is indeed the case, although the story is
somewhat complicated by the necessity of correctly quantizing these modes.
Let us begin by finding the effective Lagrangian for the translation mode
\cite{stonebook}. We take the spin configuration to be
\begin{equation}
\vec{m}(\vec{r},t) = \vec{m}_{0}\left(\vec{r} - \vec{R}(t)\right)
\end{equation}
where $\vec{m}_{0}$ is the static classical skyrmion solution and $\vec{R}(t)$
is the position degree of freedom. We ignore all other spin wave degrees of
freedom since they are gapped. (The gapless $U(1)$ rotation mode will be treated
separately below.) Eq.~(\ref{eq:1124219}) yields a Berry phase term
\begin{equation}
\mathcal{L}_{0} = -\hbar S \int d^{2}r\; \dot{m}^{\mu}
\mathcal{A}^{\mu}[\vec{m}]\; n(\vec{r}\,)
\label{eq:060808}
\end{equation}
where
\begin{equation}
\dot{m}^{\mu} = -\dot{R}^{\nu} \frac{\partial}{\partial r^{\nu}}
m_{0}^{\mu}(\vec{r} - \vec{R})
\end{equation}
and unlike in eq.~(\ref{eq:1124219}) we have taken into account our new-found
knowledge that the density is non-uniform
\begin{equation}
n(\vec{r}\,) = n_{0} + \frac{1}{8\pi} \epsilon^{\mu\nu}\; \vec{m} \cdot
\partial_{\mu}\vec{m} \times \partial_{\nu}\vec{m}.
\label{eq:060810}
\end{equation}
The second term in eq.~(\ref{eq:060810}) can be shown to produce an extra Berry
phase when two skyrmions are exchanged leading to the correct minus sign for
Fermi statistics (on the $\nu = 1$ plateau) but we will not treat it further.
Eq.~(\ref{eq:060808}) then becomes
\begin{equation}
\mathcal{L}_{0} = +\hbar\dot{R}^{\nu} a^{\nu}(\vec{R}\,)
\label{eq:060811}
\end{equation}
where the `vector potential'
\begin{equation}
a^{\nu}(\vec{R}\,) \equiv Sn_{0} \int d^{2}r\; (\partial_{\nu}m^{\mu}) {\cal A}^{\mu}
\end{equation}
has curl
\begin{eqnarray}
\epsilon^{\lambda\nu} \frac{\partial}{\partial R^{\lambda}}a^{\nu} &=& -
\epsilon^{\lambda\nu} \frac{\partial}{\partial r^{\lambda}}a^{\nu}\nonumber\\
&=& -Sn_{0}\; \epsilon^{\lambda\nu} \int d^{2}r\; \partial_{\lambda}
\left\{(\partial_{\nu}m^{\mu}) {\cal A}^{\mu}\right\}\nonumber\\
&=& -Sn_{0}\; \epsilon^{\lambda\nu} \int d^{2}r\; (\partial_{\nu}m^{\mu})\;
(\partial_{\lambda}m^{\gamma}) \frac{\partial {\cal A}^{\mu}}{\partial
m^{\gamma}}\nonumber\\
&=& -\frac{Sn_{0}}{2} \int d^{2}r\; \epsilon^{\lambda\nu}\; \partial_{\nu}m^{\mu}
\partial_{\lambda}m^{\gamma} F^{\gamma\mu}\nonumber\\
&=& -2\pi n_{0} Q_{\mathrm{top}}
\end{eqnarray}
Thus eq.~(\ref{eq:060811}) corresponds to the kinetic Lagrangian for a
massless particle
of charge $-eQ_{\mathrm{top}}$ moving in a uniform magnetic field of strength $B
= \frac{\Phi_{0}}{2\pi\ell^{2}}$. But this of course is precisely what the
skyrmion is \cite{stonebook}.
We have kept here only the lowest order adiabatic time derivative term in the
action.\footnote{There may exist higher-order time-derivative terms which give
the skyrmion a mass and there will also be damping due to radiation of spin
waves at higher velocities. \cite{fertigradiation}} This is justified by the
existence of the spin excitation gap and the fact that we are interested only in
much lower frequencies (for the NMR).
If we ignore the disorder potential then the kinetic Lagrangian simply leads to
a Hamiltonian that yields quantum states in the lowest Landau level, all of
which are degenerate in energy and therefore capable of relaxing the nuclei
(whose precession frequency is extremely low on the scale of the electronic
Zeeman energy).
Let us turn now to the rotational degree of freedom represented by the
coordinate $\varphi$ in eqs.~(\ref{eq:060339a}--\ref{eq:060339c}). The full
Lagrangian is complicated and contains the degrees of freedom of the continuous
field $\vec m(\vec r)$. We need to introduce the collective coordinate $\varphi$
describing the orientation of the skyrmion as one of the degrees of freedom and
then carry out the Feynman path integration over the quantum fluctuations in all
the infinite number of remaining degrees of freedom.\footnote{Examples of how to
do this are discussed in various field theory texts, including Rajaraman
\cite{Rajaraman}.} This is a non-trivial task, but fortunately we do not
actually have to carry it out. Instead we will simply write down the answer. The
answer is some functional of the path for the single variable $\varphi(t)$. We
will express this functional (using a functional Taylor series expansion) in the
most general form possible that is consistent with the symmetries in the
problem. Then we will attempt to identify the meaning of the various terms in
the expansion and evaluate their coefficients (or assign them values
phenomenologically). After integrating out the high frequency spin wave
fluctuations, the lowest-order symmetry-allowed terms in the action are
\begin{equation}
\mathcal{L}_{\varphi} = \hbar K \dot{\varphi} + \frac{\hbar^{2}}{2U}
\dot{\varphi}^{2} + \ldots
\label{eq:060814}
\end{equation}
Again, there is a first-order term allowed by the lack of time-reversal symmetry
and we have included the leading non-adiabatic correction. The full action
involving $\vec{m}(\vec{r},t)$ contains only a first-order time derivative but a
second order term is allowed by symmetry to be generated upon integrating out
the high frequency fluctuations. We will not perform this explicitly but rather
treat $U$ as a phenomenological fitting parameter.
The coefficient $K$ can be computed exactly since it is simply the Berry phase
term. Under a slow rotation of all the spins through $2\pi$ the Berry phase is
(using eq.~(\ref{eq:berry22}) in appendix~\ref{app:BerryPhase})
\begin{equation}
\int d^{2}r\; n(\vec{r}\,)\; (-S2\pi)\; \left[1 - m_{0}^{z}(\vec{r}\,)\right] =
\frac{1}{\hbar} \int_{0}^{T}\! dt\; \mathcal{L}_{\varphi} = 2\pi K.
\end{equation}
(The non-adiabatic term gives a $1/T$ contribution that vanishes in the
adiabatic limit $T \rightarrow \infty$.) Thus we arrive at the important
conclusion that $K$ is the expectation value of the number of overturned spins
for the classical solution $\vec{m}_{0}(\vec{r}\,)$. We emphasize that this is
the Hartree-Fock (i.e., `classical') skyrmion solution and therefore $K$ need
not be an integer.
The canonical angular momentum conjugate to $\varphi$ in eq.~(\ref{eq:060814})
is
\begin{equation}
L_{z} = \frac{\delta\mathcal{L}_{\varphi}}{\delta\dot{\varphi}} = \hbar K +
\frac{\hbar^{2}}{U} \dot{\varphi}
\end{equation}
and hence the Hamiltonian is
\begin{eqnarray}
H_{\varphi} &=& L_{z} \dot{\varphi} - \mathcal{L}_{\varphi}\nonumber\\
&=& \left(\hbar K + \frac{\hbar^{2}}{U} \dot{\varphi}\right) \dot{\varphi} -
\hbar K \dot{\varphi} - \frac{\hbar^{2}}{2U} \dot{\varphi}^{2}\nonumber\\
&=& +\frac{\hbar^{2}}{2U} \dot{\varphi}^{2}=\frac{U}{2\hbar^{2}} (L_{z} - \hbar K)^{2}
\label{eq:1268old}
\end{eqnarray}
Having identified the Hamiltonian and expressed it in terms of the coordinate
and the canonical momentum conjugate to that coordinate, we quantize $H_\varphi$
by simply making the substitution
\begin{equation}
L_z \longrightarrow -i\hbar\frac{\partial}{\partial\varphi}
\end{equation}
to obtain
\begin{equation}
H_\varphi= +\frac{U}{2}\;
\left(-i\frac{\partial}{\partial\varphi} - K\right)^{2}.
\label{eq:1268}
\end{equation}
This can be interpreted as the Hamiltonian of a (charged) XY quantum rotor
with moment of inertia $\hbar^{2}/U$ circling a solenoid containing $K$ flux
quanta. (The Berry phase term in eq.~(\ref{eq:060814}) is then interpreted as
the Aharonov-Bohm phase.) The eigenfunctions are
\begin{equation}
\psi_{m}(\varphi) = \frac{1}{\sqrt{2\pi}} e^{im\varphi}
\label{eq:060818}
\end{equation}
and the eigenvalues are
\begin{equation}
\epsilon_{m} = \frac{U}{2} (m - K)^{2}.
\label{eq:060819}
\end{equation}
The angular momentum operator $L_{z}$ is actually the operator giving the number
of flipped spins in the skyrmion. Because of the rotational symmetry about the
Zeeman axis, this is a good quantum number and therefore takes on integer
values (as required in any quantum system of finite size with rotational symmetry
about the $z$ axis). The ground state value of $m$ is the nearest integer to
$K$. The ground state angular velocity is
\begin{equation}
\dot{\varphi} = \left\langle\frac{\partial H_{\varphi}}{\partial
L_{z}}\right\rangle = \frac{U}{\hbar} (m - K).
\end{equation}
Hence if $K$ is not an integer the skyrmion is spinning around at a finite
velocity. In any case the actual orientation angle $\varphi$ for the skyrmion is
completely uncertain: from eq.~(\ref{eq:060818}),
\begin{equation}
\left|\psi_{m}(\varphi)\right|^{2} = \frac{1}{2\pi},
\end{equation}
so $\varphi$ has a flat probability distribution (due to quantum zero point
motion). We interpret this as telling us that the global U(1) rotation symmetry
broken in the classical solution is restored in the quantum solution because of
quantum fluctuations in the coordinate $\varphi$. This issue will arise again in
our study of the Skyrme lattice where we will find that for an infinite array of
skyrmions, the symmetry can sometimes remain broken.
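A minimal numerical sketch of the rotor spectrum in eq.~(\ref{eq:060819}) (Python; $U = 1$ and $K = 4.3$ are illustrative values, $K$ being non-integer in general):

```python
import numpy as np

hbar = 1.0
U, K = 1.0, 4.3        # illustrative values; K need not be an integer

m   = np.arange(-10, 20)                 # allowed integer numbers of flipped spins
eps = 0.5 * U * (m - K)**2               # rotor spectrum

m_ground = m[np.argmin(eps)]             # nearest integer to K
phi_dot  = (U / hbar) * (m_ground - K)   # ground-state angular velocity

print(m_ground, phi_dot)
```

For $K = 4.3$ the ground state has $m = 4$ and the skyrmion spins at angular velocity $-0.3\,U/\hbar$.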
Microscopic analytical \cite{Breywithoutsigma} and numerical \cite{FertigHF}
calculations do indeed find a family of low energy excitations with an
approximately parabolic relation between the energy and the number of flipped
spins just as is predicted by eq.~(\ref{eq:060819}). As mentioned earlier, $K
\sim 4$ for typical parameters. Except for the special case where $K$ is a half
integer the spectrum is non-degenerate and has an excitation gap on the scale of
$U$ which is in turn some fraction of the Coulomb energy scale $\sim
100~\mbox{K}$. In the absence of disorder even a gap of only 1~K would make
these excitations irrelevant to the NMR. We shall see however that this
conclusion is dramatically altered in the case where many skyrmions are present.
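As a quick numerical illustration of this spectrum (a sketch in units $\hbar = U = 1$, with the illustrative value $K = 4.3$ chosen close to the quoted $K \sim 4$; these are not fitted parameters):

```python
import numpy as np

# epsilon_m = (U/2)(m - K)^2 for integer m; U = 1 sets the energy scale and
# K = 4.3 is an illustrative value, not a fitted parameter.
U, K = 1.0, 4.3
m = np.arange(0, 10)
eps = 0.5 * U * (m - K) ** 2

m0 = int(m[np.argmin(eps)])        # ground state: nearest integer to K
gap = np.sort(eps)[1] - eps.min()  # gap to the lowest excited state
omega_dot = U * (m0 - K)           # ground-state angular velocity (hbar = 1)
print(m0, gap, omega_dot)          # ~ (4, 0.2, -0.3)
```

For $K = 4.3$ the ground state is $m = 4$, the gap to the lowest excited state is $0.2\,U$, and since $K$ is not an integer the skyrmion spins at $\dot{\varphi} = -0.3\,U/\hbar$, consistent with the formulas above.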
\subsection{Skyrme Lattices}
For filling factors slightly away from $\nu = 1$ there will be a finite density
of skyrmions or antiskyrmions (all with the same sign of topological charge) in
the ground state \cite{Breyxtal,usPRL,GreenTsvelik}. Hartree-Fock calculations
\cite{Breyxtal} indicate that the ground state is a Skyrme crystal. Because the
skyrmions are charged, the Coulomb potential in eq.~(\ref{eq:fourthHartree}) is
optimized for the triangular lattice. This is indeed the preferred structure for
very small values of $|\nu - 1|$ where the skyrmion density is low. However at
moderate densities the square lattice is preferred. The Hartree-Fock ground
state has the angular variable $\varphi_{j}$ shifted by $\pi$ between
neighboring skyrmions as illustrated in fig.~(\ref{fig:skyrmelattice}).
\begin{figure}
\centerline{\rotatebox{-90}{\epsfysize=10cm\epsffile{escorial2.plt.eps}}}
\caption[]{Electronic structure of the skyrmion lattice as determined by
numerical Hartree-Fock calculations for filling factor $\nu=1.1$ and Zeeman
energy $0.015\frac{e^{2}}{\epsilon\ell}$. (a) Excess charge density (in units of
$1/(2\pi\ell^{2})$) and (b) Two-dimensional vector representation of the XY
components of the spin density. The spin stiffness makes the square lattice more
stable than the triangular lattice at this filling factor and Zeeman coupling.
Because of the $U(1)$ rotational symmetry about the Zeeman axis, this is simply
one representative member of a continuous family of degenerate Hartree-Fock
solutions. After Brey \textit{et al.}~\cite{Breywithoutsigma}.}
\label{fig:skyrmelattice}
\end{figure}
This `antiferromagnetic' arrangement of the XY spin orientation minimizes the
spin gradient energy and would be frustrated on the triangular lattice. Hence it
is the spin stiffness that stabilizes the square lattice structure.
The Hartree-Fock ground state breaks both global translation and global $U(1)$
spin rotation symmetry. It is a kind of `supersolid' with both diagonal
\begin{equation}
G^{z} \equiv \left\langle s^{z}(\vec{r}\,)\;
s^{z}(\vec{r}^{\,\prime})\right\rangle
\end{equation}
and off-diagonal
\begin{equation}
G^{\perp} \equiv \left\langle s^{+}(\vec{r}\,)\;
s^{-}(\vec{r}^{\,\prime})\right\rangle
\end{equation}
long-range order. For the case of a single skyrmion we found that the $U(1)$
symmetry was broken at the Hartree-Fock (classical) level but fully restored by
quantum fluctuations of the zero mode coordinate $\varphi$. In the thermodynamic
limit of an infinite number of skyrmions coupled together, it is possible for
the global $U(1)$ rotational symmetry breaking to survive quantum
fluctuations.\footnote{Loosely speaking this corresponds to the infinite system
having an infinite moment of inertia (for global rotations) which allows a
quantum wave packet which is initially localized at a particular orientation
$\varphi$ not to spread out even for long times.} If this occurs then an
excitation gap is \textit{not} produced. Instead we have a new kind of gapless
spin wave Goldstone mode \cite{Senthil,skyrmelatticePRL}. This mode is gapless
despite the presence of the Zeeman field and hence has a profound effect on the
NMR relaxation rate. The gapless Goldstone mode associated with the broken
translation symmetry is the ordinary magneto-phonon of the Wigner crystal. This
too contributes to the nuclear relaxation rate.
In actual practice, disorder will be important. In addition, the NMR experiments
have so far been performed at temperatures which are likely well above the
lattice melting temperature. Nevertheless the zero temperature lattice
calculations to be discussed below probably capture the essential physics of
this non co-linear magnet. Namely, there exist spin fluctuations at frequencies
orders of magnitude below the Zeeman gap. At zero temperature these are coherent
Goldstone modes. Above the lattice melting temperature they will be overdamped
diffusive modes derived from the Goldstone modes. The essential physics will
still be that the spin fluctuations have strong spectral density at frequencies
far below the Zeeman gap.
It turns out that at long wavelengths the magnetophonon and $U(1)$ spin modes
are decoupled. We will therefore ignore the positional degrees of freedom when
analyzing the new $U(1)$ mode. We have already found the $U(1)$ Hamiltonian for
a single skyrmion in eq.~(\ref{eq:1268}). The simplest generalization to the
Skyrme lattice which is consistent with the symmetries of the problem is
\begin{equation}
H = \frac{U}{2} \sum_{j} (\hat{K}_{j} - K)^{2} - J \sum_{\langle ij\rangle}
\cos{(\varphi_{i} - \varphi_{j})}
\label{eq:061103}
\end{equation}
where $\hat{K}_{j} \equiv -i \frac{\partial}{\partial\varphi_{j}}$ is the
angular momentum operator. The global $U(1)$ symmetry requires that the
interactive term be invariant if all of the $\varphi_{j}$'s are increased by a
constant. In addition $H$ must be invariant under $\varphi_{j} \rightarrow
\varphi_{j} + 2\pi$ for any single skyrmion. We have assumed the simplest
possible near-neighbor coupling, neglecting the possibility of longer range
higher-order couplings of the form $\cos n(\varphi_{i} - \varphi_{j})$ which are
also symmetry allowed. The phenomenological coupling $J$ must be negative to be
consistent with the `antiferromagnetic' XY order found in the Hartree-Fock
ground state illustrated in fig.~(\ref{fig:skyrmelattice}). However we will find it
convenient to instead make $J$ positive and compensate for this by a `gauge' change
$\varphi_j\rightarrow \varphi_j+\pi$ on one sublattice. This is convenient
because it makes the coupling `ferromagnetic' rather than `antiferromagnetic.'
Eq.~(\ref{eq:061103}) is the Hamiltonian for the quantum XY rotor model, closely
related to the boson Hubbard model \cite{Cha,Sorensen,Fisherboson}. Readers
familiar with superconductivity will recognize that this model is commonly used
to describe the superconductor-insulator transition in Josephson arrays
\cite{Cha,Sorensen}. The angular momentum eigenvalue of the $\hat{K}_{j}$
operator represents the number of bosons (Cooper pairs) on site $j$ and the $U$
term describes the charging energy cost when this number deviates from the
electrostatically optimal value of $K$. The boson number is non-negative while
$\hat{K}_{j}$ has negative eigenvalues. However we assume that $K \gg 1$ so that
the negative angular momentum states are very high in energy.
The $J$ term in the quantum rotor model is a mutual torque that transfers units
of angular momentum between neighboring sites. In the boson language the wave
function for the state with $m$ bosons on site $j$ contains a factor
\begin{equation}
\psi_{m}(\varphi_{j}) = e^{im\varphi_{j}}.
\end{equation}
The raising and lowering operators are thus\footnote{These operators have matrix
elements $\langle\psi_{m+1}|e^{+i\varphi}|\psi_{m}\rangle = 1$ whereas a boson
raising operator would have matrix element $\sqrt{m+1}$. For $K \gg 1$, $m \sim
K$ and this is nearly a constant. Arguments like this strongly suggest that the
boson Hubbard model and the quantum rotor model are essentially equivalent. In
particular their order/disorder transitions are believed to be in the same
universality class.} $e^{\pm i\varphi_{j}}$. This shows us that the cosine term
in eq.~(\ref{eq:061103}) represents the Josephson coupling that hops bosons
between neighboring sites.
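The ladder structure quoted in the footnote can be checked directly in a truncated angular-momentum basis (a sketch; the truncation size is our choice, not from the text):

```python
import numpy as np

# Basis |m>, m = 0..M-1.  K|m> = m|m>, and e^{+i phi}|m> = |m+1> with unit
# matrix element, as stated in the footnote.
M = 8
K_op = np.diag(np.arange(M, dtype=float))
raise_op = np.diag(np.ones(M - 1), k=-1)   # rows/columns indexed by m

# [K, e^{+i phi}] = e^{+i phi}: the raising operator transfers one unit of
# angular momentum, which is why the cosine term in eq. (eq:061103) acts as
# a Josephson coupling hopping one 'boson' between neighboring sites.
comm = K_op @ raise_op - raise_op @ K_op
print(np.allclose(comm, raise_op))         # True
```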
For $U \gg J$ the system is in an insulating phase well-described by the wave
function
\begin{equation}
\psi(\varphi_{1},\varphi_{2},\ldots,\varphi_{N}) = \prod_{j} e^{im\varphi_{j}}
\end{equation}
where $m$ is the nearest integer to $K$. In this state every rotor has the same
fixed angular momentum and thus every site has the same fixed particle number in
the boson language. There is a large excitation gap
\begin{equation}
\Delta \approx U\left(1 - 2|m - K|\right)
\end{equation}
and the system is insulating.\footnote{An exception occurs if $|m - K| =
\frac{1}{2}$ where the gap vanishes. See \cite{Fisherboson}.}
Clearly $|\psi|^{2} \approx 1$ in this phase and it is therefore quantum
disordered. That is, the phases $\{\varphi_{j}\}$ are wildly fluctuating because
every configuration is equally likely. The phase fluctuations are nearly
uncorrelated
\begin{equation}
\langle e^{i\varphi_{j}}\; e^{-i\varphi_{k}}\rangle \sim e^{-|\vec{r}_{j}-
\vec{r}_{k}|/\xi}.
\end{equation}
For $J \gg U$ the phases on neighboring sites are strongly coupled together and
the system is a superconductor. A crude variational wave function that captures
the essential physics is
\begin{equation}
\psi(\varphi_{1},\varphi_{2},\ldots,\varphi_{N}) \sim e^{\lambda\sum_{\langle
ij\rangle}\;\cos{(\varphi_{i}-\varphi_{j})}}
\end{equation}
where $\lambda$ is a variational parameter \cite{Rana}. This is the simplest
ansatz consistent with invariance under $\varphi_{j} \rightarrow \varphi_{j} +
2\pi$. For $J \gg U$, $\lambda \gg 1$ and $|\psi|^{2}$ is large only for spin
configurations with all of the XY spins locally parallel. Expanding the cosine
term in eq.~(\ref{eq:061103}) to second order gives a harmonic Hamiltonian which
can be exactly solved. The resulting gapless `spin waves' are the Goldstone
modes of the superconducting phase.
For simplicity we work with the Lagrangian rather than the Hamiltonian
\begin{equation}
\mathcal{L} = \sum_{j} \left[\hbar K\dot{\varphi}_{j} +
\frac{\hbar^{2}}{2U}\dot{\varphi}_{j}^{2}\right] + J\sum_{\langle ij\rangle}
\cos{(\varphi_{i} - \varphi_{j})}
\end{equation}
The Berry phase term is a
total derivative and can not affect the equations of motion.\footnote{In fact in
the quantum path integral this term has no effect except for time histories in
which a `vortex' encircles site $j$ causing the phase to wind
$\varphi_{j}(\hbar\beta) = \varphi_{j}(0) \pm 2\pi$. We explicitly ignore this
possibility when we make the harmonic approximation.} Dropping this term and
expanding the cosine in the harmonic approximation yields
\begin{equation}
\mathcal{L} = \frac{\hbar^{2}}{2U} \sum_{j} \dot{\varphi}_{j}^{2} - \frac{J}{2}
\sum_{\langle ij\rangle} (\varphi_{i} - \varphi_{j})^{2}.
\end{equation}
This `phonon' model has linearly dispersing gapless collective modes at small
wavevectors
\begin{equation}
\hbar\omega_{q} = \sqrt{UJ}\; qa
\end{equation}
where $a$ is the lattice constant. The parameters $U$ and $J$ can be fixed by
fitting to microscopic Hartree-Fock calculations of the spin wave velocity and
the magnetic susceptibility (`boson compressibility')
\cite{FertigHF,skyrmelatticePRL}. This in turn allows one to estimate the
regime of filling factor and Zeeman energy in which the $U(1)$ symmetry is not
destroyed by quantum fluctuations \cite{skyrmelatticePRL}.
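Expanding the cosine and solving the harmonic equations of motion with a plane-wave ansatz $\varphi_j \propto e^{i(\vec{q}\cdot\vec{r}_j - \omega t)}$ on the square lattice gives, in our derivation, $\hbar^2\omega_q^2 = UJ\left[4 - 2\cos(q_x a) - 2\cos(q_y a)\right]$, which reduces to the quoted $\hbar\omega_q = \sqrt{UJ}\,qa$ for $qa \ll 1$. A minimal numerical check (illustrative values of $U$ and $J$; units $\hbar = a = 1$):

```python
import numpy as np

# Square-lattice 'phonon' dispersion from the harmonic Lagrangian
# (our derivation); U and J are illustrative numbers, not fitted values.
U, J = 2.0, 0.5

def omega(qx, qy):
    return np.sqrt(U * J * (4.0 - 2.0 * np.cos(qx) - 2.0 * np.cos(qy)))

q = 1e-3
print(omega(q, 0.0) / q, np.sqrt(U * J))   # both ~ 1.0: linear, slope sqrt(UJ)
```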
Let us now translate all of this into the language of our non-colinear QHE
ferromagnet \cite{Senthil,skyrmelatticePRL}. Recall that the angular momentum
(the `charge') conjugate to the phase angle $\varphi$ is the spin angular
momentum of the overturned spins that form the skyrmion. In the quantum
disordered `insulating' phase, each skyrmion has a well defined integer-valued
`charge' (number of overturned spins) much like we found when we quantized the
$U(1)$ zero mode for the plane angle $\varphi$ of a single isolated skyrmion in
eq.~(\ref{eq:060818}). There is an excitation gap separating the energies of the
discrete quantized values of the spin.
The `superfluid' state with broken $U(1)$ symmetry is a totally new kind of spin
state unique to non-colinear magnets \cite{Senthil,skyrmelatticePRL}. Here the
phase angle is well-defined and the number of overturned spins is uncertain. The
off-diagonal long-range order of a superfluid becomes
\begin{equation}
\langle b_{j}^{\dagger} b_{k}^{\phantom{\dagger}}\rangle \rightarrow \langle
e^{i\varphi_{j}} e^{-i\varphi_{k}}\rangle
\end{equation}
or in the spin language\footnote{There is a slight complication here. Because
the XY spin configuration of the skyrmion has a vortex-like structure $\langle
s^{+}\rangle \equiv \langle s^{x} + is^{y}\rangle$ winds in phase around the
skyrmion so the `bose condensation' is not at zero wave vector.}
\begin{equation}
\left\langle s^{+}(\vec{r}\,) s^{-}(\vec{r}^{\,\prime})\right\rangle.
\end{equation}
Thus in a sense we can interpret a spin flip interaction between an electron and
a nucleus as creating a boson in the superfluid. But this boson has a finite
probability of `disappearing' into the superfluid `condensate' and hence the
system does not have to pay the Zeeman price to create the flipped spin. That
is, the superfluid state has an uncertain number of flipped spins (even though
$S_{\mathrm{tot}}^{z}$ commutes with $H$) and so the Zeeman energy cost is
uncertain.
In classical language the skyrmions locally have finite (slowly varying) $x$ and $y$
spin components which act as effective magnetic fields around which the nuclear spins
precess, and which thus cause $I^{z}$ to change with
time. The key here is that $s^{x}$ and $s^{y}$ can, because of the broken $U(1)$
symmetry, fluctuate very slowly (i.e. at MHz frequencies that the nuclei can
follow rather than just the very high Zeeman precession frequency).
Detailed numerical calculations \cite{skyrmelatticePRL} show that the Skyrme
lattice is very efficient at relaxing the nuclei: $1/T_{1}$ is enhanced
by a factor of $\sim 10^{3}$ over the corresponding rate at zero magnetic field.
We expect this qualitative distinction to survive even above the Skyrme lattice
melting temperature for the reasons discussed earlier.
Because the nuclear relaxation rate increases by orders of magnitude, the
equilibration time at low temperatures drops from hours to seconds. This means
that the nuclei come into thermal equilibrium with the electrons and hence the
lattice. The nuclei therefore have a well-defined temperature and contribute to
the specific heat. Because the temperature is much greater than the nuclear
Zeeman energy scale $\Delta \sim 1~\mbox{mK}$, each nucleus contributes only a
tiny amount $\sim k_{\mathrm{B}} \frac{\Delta^{2}}{T^{2}}$ to the specific heat.
On the other hand, the electronic specific heat per particle $\sim
k_{\mathrm{B}} \frac{T}{T_{\mathrm{Fermi}}}$ is low, and the electron density is
low as well. In fact there are about $10^{6}$ nuclei per quantum well electron, and the
nuclei actually enhance the specific heat by more than 5 orders of magnitude
\cite{Bayot}!
Surprisingly, at around 30~mK there is a \textit{further} enhancement of the
specific heat by an additional order of magnitude. This may be a signal of the
Skyrme lattice melting transition \cite{Bayot,skyrmelatticePRL,TimmMelting},
although the situation is somewhat murky at the present time. The peak can not
possibly be due to the tiny amount of entropy change in the Skyrme lattice
itself. Rather it is due to the nuclei in the thick AlAs barrier between the
quantum wells.\footnote{For somewhat complicated reasons it may be that the
barrier nuclei are efficiently dipole coupled to the nuclei in the quantum
wells (and therefore in thermal equilibrium) only due to the critical slowing
down of the electronic motion in the vicinity of the Skyrme lattice melting
transition.}
<?php
/**
* @file
* Contains \Drupal\Core\Field\Plugin\Field\FieldType\PasswordItem.
*/
namespace Drupal\Core\Field\Plugin\Field\FieldType;
use Drupal\Core\Entity\EntityMalformedException;
use Drupal\Core\Field\FieldStorageDefinitionInterface;
use Drupal\Core\StringTranslation\TranslationWrapper;
use Drupal\Core\TypedData\DataDefinition;
/**
* Defines the 'password' entity field type.
*
* @FieldType(
* id = "password",
* label = @Translation("Password"),
* description = @Translation("An entity field containing a password value."),
* no_ui = TRUE,
* )
*/
class PasswordItem extends StringItem {
/**
* {@inheritdoc}
*/
public static function propertyDefinitions(FieldStorageDefinitionInterface $field_definition) {
$properties['value'] = DataDefinition::create('string')
->setLabel(new TranslationWrapper('The hashed password'))
->setSetting('case_sensitive', TRUE);
$properties['existing'] = DataDefinition::create('string')
->setLabel(new TranslationWrapper('Existing password'));
return $properties;
}
/**
* {@inheritdoc}
*/
public function preSave() {
parent::preSave();
$entity = $this->getEntity();
// Update the user password if it has changed.
if ($entity->isNew() || ($this->value && $this->value != $entity->original->{$this->getFieldDefinition()->getName()}->value)) {
// Allow alternate password hashing schemes.
$this->value = \Drupal::service('password')->hash(trim($this->value));
// Abort if the hashing failed and returned FALSE.
if (!$this->value) {
throw new EntityMalformedException('The entity does not have a password.');
}
}
if (!$entity->isNew()) {
// If the password is empty, that means it was not changed, so use the
// original password.
if (empty($this->value)) {
$this->value = $entity->original->{$this->getFieldDefinition()->getName()}->value;
}
}
}
}
\section{Introduction}
Compelling evidence in favour of neutrino oscillations obtained in
recent years in the Super-Kamiokande \cite{SK-atm-1998,SK-solar}, SNO
\cite{SNO}, KamLAND \cite{KamLAND}, K2K \cite{K2K} and other neutrino
experiments (see e.g.~\cite{maltoni} and references therein) is a
major breakthrough in the search for physics beyond the Standard
Model.
All existing neutrino oscillation data with the exception of the LSND
data \cite{LSND}\footnote{The result of the LSND experiment is planned
to be checked by the MiniBooNE experiment \cite{miniboone} which is
currently taking data.} are well described if we assume
three-neutrino mixing. Defining $\Delta m^2_{jk}= m_j^2 - m_k^2$,
where the $m_j$ are the neutrino masses, the best fit values
\begin{equation}
\Delta m^{2}_{21} = 7.9 \times 10^{-5}\:\mathrm{eV}^{2}
\quad \mbox{and} \quad
\left| \Delta m^{2}_{32} \right| = 2.4 \times 10^{-3}\:\mathrm{eV}^{2},
\label{1}
\end{equation}
were found for the solar \cite{KamLAND} and atmospheric \cite{SK-atm} neutrino
mass-squared differences, respectively.
These values of the neutrino mass-squared differences were obtained
from neutrino oscillation data under the assumption that the neutrino
transition and survival probabilities have the standard form (see
e.g.\ the reviews in Ref.~\cite{reviews}). Neutrino oscillations are
due to the interference of the amplitudes of the propagation of
neutrinos with different masses and the standard phase differences are
given by the expression
\begin{equation}
\Delta\varphi_{jk}=\frac{\Delta m^2_{jk} L}{2E}.
\label{2}
\end{equation}
Here $E$ is the neutrino energy and $L$ is the
distance between neutrino production and neutrino
interaction points. The theory of neutrino oscillations has a long
history starting with the paper of Gribov and Pontecorvo \cite{gribov}
(for other early papers see \cite{pontecorvo,fritzsch}, for
historical overviews see \cite{nuhistory}).
There is also a rich literature on more elaborate derivations of
neutrino transition and survival probabilities based on quantum
mechanics and quantum field theory (for a choice of these papers see
\cite{kayser,okun,nussinov,giunti93,rich,stockinger,stodolsky,%
cardall,kim,beuthe,lipkin04},
more citations are found in the
reviews~\cite{zralek,giunti01,beuthe-review,giunti04}),
which all result in the
standard oscillation phases of Eq.~(\ref{2}).
There exist, however, claims \cite{field} that the phase differences in
neutrino transition probabilities
differ from the standard ones by a factor of two and are equal to
\begin{equation}
\overline{\Delta\varphi_{jk}}=\frac{\Delta m^2_{jk}L}{E}.
\label{3}
\end{equation}
Other authors \cite{deleo} claim that there is an ambiguity in the
oscillation phase. Theoretical discussions about the factor of two or
other factors in oscillation phases have continued for many years---see
e.g.~\cite{kim,lipkin04,giunti04,tsukerman,lipkin} where these
additional factors have been refuted on theoretical grounds. Taking
into account the fundamental importance of the problem we believe that
it is worthwhile to think about possibilities to confront the
different oscillation phases to experimental data.
The same non-quantum-theoretical arguments which lead to an additional
factor of two in neutrino oscillation phases can be applied to the
oscillation phases in $M^0 \leftrightarrows \bar M^0$ oscillations of
neutral bosons $M^0 = K^0$, $B^0_d$, etc., as was demonstrated in
Ref.~\cite{lipkin}. A more complicated additional factor has been
obtained in Ref.~\cite{srivastava}, but was subsequently refuted in
Ref.~\cite{ancochea}. Since in $M^0 \leftrightarrows \bar M^0$
oscillation experiments the mesons are often non-relativistic, the
relevant oscillation phase is
\begin{equation}\label{QMphase}
\Delta \varphi_\mathrm{QT} = \frac{\Delta m^2 L}{2p},
\end{equation}
where $p$ is the momentum of the neutral meson. In the
ultra-relativistic limit, Eq.~(\ref{QMphase}) coincides with
Eq.~(\ref{2}). In the following we use the subscript QT for the
standard phase~(\ref{QMphase}), whereas phases different from the
standard phase are marked by a bar---see Eq.~(\ref{3}).
In recent years remarkable progress has been achieved in the measurement of $|V_{cb}|$,
$|V_{ub}|$ and other elements of the CKM matrix (see
e.g.~\cite{CKM-group}). Another great achievement was the measurement
of the CP parameter $\sin2\beta$ with an accuracy of about 5\% in the
BaBar \cite{babar} and Belle \cite{belle} experiments at asymmetric
B-factories.
This has made it possible to perform a new check of the Standard Model based on
testing the unitarity of the CKM mixing matrix,
the so-called unitarity triangle test of the SM.
It was shown \cite{buras1,buras2,silva,UT2000,UT2005} that the SM with three
families of quarks is in good agreement with existing data, including
the measurements of the effects of CP violation.
In the unitarity triangle (UT) test the experimental values of the
$K_L-K_S$ mass difference $\Delta m_K$ and the $B_{dH}-B_{dL}$
mass difference $\Delta m_{B_d}$ are used.
The values of $\Delta m_{K}$ and $\Delta m_{B_d}$ were obtained from
an analysis of the experimental data based on the standard transition
probabilities with the standard oscillation phase~(\ref{QMphase}).
In this paper we will present the result of the UT
test under the assumption that oscillation phases in
$K^0 \leftrightarrows \bar K^0$ and $B^0_d \leftrightarrows \bar B^0_d$
oscillations differ from
the standard ones by the above factor of two.
We will show that such an assumption is disfavoured by the
existing data at the level of more than $3\,\sigma$.
The plan of the paper is as follows.
In Section~\ref{notorious} we will discuss in some detail how this
notorious factor of two in the oscillation phase
appears. Considerations how to confront the factor of two with
experiment are found in Section~\ref{confronting}.
Section~\ref{fit} contains our UT fit with and without
the factor of two. Our conclusions are presented in
Section~\ref{concl}. The technical details of the UT fit are deferred
to an appendix.
\section{The notorious factor of two}
\label{notorious}
\subsection{Notation}
For simplicity we consider oscillations between only two states.
Thus we have two different masses $m_j$ ($j=1,2$). We adopt the
convention $m_1 < m_2$. For each mass eigenstate
the relevant phase is
\begin{equation}\label{phi}
\varphi_j = E_j t - p_j L,
\end{equation}
where $E_j = \sqrt{p_j^2 + m_j^2}$ and $p_j$ are energy and momentum,
respectively.
Though there are some arguments that in particle oscillations mass
eigenstates with the same energies are coherent
\cite{stockinger,stodolsky,lipkin04,lipkin},
we want to be general and assume neither equal energies nor equal momenta.
It is useful to define quantities $\Delta p$ and $\Delta m$ via
\begin{equation}\label{averagequantities}
p_{1,2} = p \mp \frac{1}{2} \Delta p, \quad
m_{1,2} = m \mp \frac{1}{2} \Delta m,
\end{equation}
where $p$ and $m$ denote average momentum and mass, respectively.
Defining $\Delta m^2 = m_2^2 - m_1^2$ and $\Delta m = m_2 - m_1$, we
have the relation
\begin{equation}
\Delta m^2 = 2m \Delta m.
\end{equation}
In the following we will use the approximations
\begin{equation}\label{approximations}
p \gg |\Delta p| \quad \mbox{with} \quad \Delta p = a \Delta m.
\end{equation}
The dimensionless constant $a$ is zero for $p_1 = p_2$. In general it
will be of order one or even larger. In the non-relativistic case one
can have $a \sim m/p$.
The first relation of Eq.~(\ref{approximations})
excludes particles which are nearly at rest; such a situation is not
contained in our discussion. Consequently, we do not allow for
$p \ll m$ or $a \gg 1$.
However, we will take care that
all our considerations hold also in the moderately non-relativistic limit.
The second relation in Eq.~(\ref{approximations}) states our coherence
assumption: mass eigenstates with momenta which differ by more than the mass
difference can still be coherent.
Note that with Eq.~(\ref{approximations}) we have
\begin{equation}
p \gg \Delta m.
\end{equation}
In the following we will need
\begin{equation}\label{diffE}
\Delta E \equiv E_2 - E_1 =
\frac{1}{E} \left( m \Delta m + p \Delta p \right) =
\frac{\Delta m^2}{2E} + \frac{p \Delta p}{E}
\quad \mbox{with} \quad E = \frac{1}{2} \left( E_1 + E_2 \right).
\end{equation}
\subsection{``Derivation'' of extra factors in oscillation phases}
Particle oscillation phases different from that of Eq.~(\ref{QMphase})
have been found for instance in Refs.~\cite{field,srivastava}, and an
ambiguity of a factor of two in the oscillation phase has been
diagnosed in Ref.~\cite{deleo}. It was stressed first in
Ref.~\cite{lipkin} and then in Refs.~\cite{giunti04,tsukerman} that in
essence the discrepancy to the standard result~(\ref{QMphase}) is due
to the assumption that the two mass eigenstates are detected at the
same space point but at different times
\begin{equation}\label{wrong}
t_j = L/v_j = LE_j/p_j.
\end{equation}
For each mass eigenstate, the corresponding time $t_j$ is inserted
into the phase (\ref{phi}). The motivation
for this is that particles with different masses move with
different velocities $v_j$. This picture mixes
quantum-theoretical and classical considerations
in an ad hoc fashion and leads to the
conclusion that particle phases taken at \emph{different times},
though at the same space point, produce the interference, which is in
contradiction to the rules of quantum theory.
Eq.~(\ref{wrong}) gives the phase
\begin{equation}
\overline{\varphi}_j = E_j t_j - p_j L =
\frac{E_j^2 L}{p_j} - p_j L = \frac{m_j^2 L}{p_j}
\end{equation}
and, therefore, the phase difference
\begin{equation}\label{wrongdiff}
\overline{\Delta \varphi} =
\frac{m_2^2 L}{p_2} - \frac{m_1^2 L}{p_1}.
\end{equation}
Then, using only $\Delta p \ll p$, we obtain
\begin{equation}\label{nonQM}
\overline{\Delta \varphi} \simeq
2\, \Delta \varphi_\mathrm{QT} -
\frac{\left( m_1^2 + m_2^2 \right) \Delta p\, L}{2\, p^2}.
\end{equation}
As seen from this equation, $\overline{\Delta \varphi}$ differs from
$\Delta \varphi_\mathrm{QT}$ not only by a factor of two, but also by
an additional term which contains the \emph{arbitrary}
quantity\footnote{In principle, one should be able to determine an
upper limit on $\Delta p$ from the widths of the wave packets of the
particles participating in the neutrino, $K^0$, $B_d^0$, etc.\
production and detection processes \cite{giunti93,stockinger,beuthe}.}
$\Delta p$. In the ultra-relativistic case, which always applies to
neutrinos but also to $M^0 \leftrightarrows \bar M^0$ oscillations when
their energy is high enough, the additional term is negligible and we
have the ultra-relativistic phase
\begin{equation}
\left( \overline{\Delta \varphi} \right)_\mathrm{UR} \simeq
2\, \Delta \varphi_\mathrm{QT}.
\end{equation}
For oscillations of non-relativistic neutral flavoured mesons, the
additional term can not only be comparable with the first term but
could even dominate in Eq.~(\ref{nonQM}). Since $\Delta p$ is
arbitrary, we come to the conclusion that, for oscillations of
non-relativistic particles, Eq.~(\ref{wrong}) leads to an
arbitrary---and thus unphysical---oscillation phase.
In order to illustrate the latter point, let us consider the two
extreme cases of equal momenta and equal energies.
In the first case with
$\Delta p = 0$, Eq.~(\ref{nonQM}) gives
\begin{equation}\label{factor2}
\overline{\Delta \varphi} =
\frac{\Delta m^2 L}{p} = \frac{2m \Delta m L}{p}.
\end{equation}
Clearly, we have again the notorious factor of two, in comparison with
the quantum-theoreti\-cal result.
On the other hand, equal energies correspond to
$\Delta p = -\Delta m^2/(2p)$ (see Eq.~(\ref{diffE}))
and with Eq.~(\ref{nonQM}) the result is
\begin{equation}\label{factor2'}
\overline{\Delta \varphi} =
\frac{\Delta m^2 L}{p} \left( 1 + \frac{m^2}{2p^2} \right).
\end{equation}
This oscillation phase, which is similar to the one advocated in
Ref.~\cite{srivastava}, agrees with Eq.~(\ref{factor2}) only in the
ultra-relativistic limit.
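The algebra leading from Eq.~(\ref{wrongdiff}) to Eq.~(\ref{nonQM}) can be verified symbolically; a minimal sketch (the symbol names are ours, the definitions are those of Eq.~(\ref{averagequantities})):

```python
import sympy as sp

# Dm = Delta m, Dp = Delta p, with m_{1,2} = m -+ Dm/2 and p_{1,2} = p -+ Dp/2.
p, L, m, Dm, Dp = sp.symbols('p L m Dm Dp', positive=True)
m1, m2 = m - Dm / 2, m + Dm / 2
p1, p2 = p - Dp / 2, p + Dp / 2

wrong = m2**2 * L / p2 - m1**2 * L / p1   # eq. (wrongdiff)

# Exact identity: wrong = L * [p*(m2^2-m1^2) - (Dp/2)*(m1^2+m2^2)] / (p1*p2)
bracket = p * (m2**2 - m1**2) - (Dp / 2) * (m1**2 + m2**2)
assert sp.simplify(wrong - L * bracket / (p1 * p2)) == 0

# Replacing p1*p2 by p^2 (this uses only Dp << p) reproduces eq. (nonQM):
nonQM = (m2**2 - m1**2) * L / p - (m1**2 + m2**2) * Dp * L / (2 * p**2)
assert sp.simplify(L * bracket / p**2 - nonQM) == 0
print('eq. (nonQM) recovered')
```

Note that the first term, $(m_2^2 - m_1^2)L/p = \Delta m^2 L/p$, is precisely $2\,\Delta\varphi_\mathrm{QT}$.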
\subsection{The quantum-theoretical oscillation phase}
Although it has been stressed many times (see
e.g.\ Ref.~\cite{giunti01}) that the quantum-theoretical oscillation
phase does \emph{not} suffer from any ambiguity, it is instructive to
repeat the derivation of this fact here, in order to compare with the
derivation of Eq.~(\ref{nonQM}). Quantum theory requires the two
phases~(\ref{phi}) to be taken at the \emph{same space-time
point}. Therefore, we have
\begin{equation}
\Delta \varphi_\mathrm{QT} = \Delta E\, T - \Delta p L,
\end{equation}
where $T$ characterizes the time when the interference takes
place.
Then, with $T = LE/p$ we obtain the quantum-theoretical result
\begin{equation}\label{phiQM}
\Delta \varphi_\mathrm{QT} =
\left( \frac{\Delta m^2}{2E} + \frac{p \Delta p}{E} \right)
\frac{EL}{p} - \Delta p L = \frac{\Delta m^2L}{2p} =
\frac{m\Delta mL}{p},
\end{equation}
where the arbitrary quantity $\Delta p$
has dropped out.\footnote{It is reasonable to assume
that $T$ is $L/v_1$ or $L/v_2$
or some average of these two expressions.
What one takes precisely as $T$ is
irrelevant, because all these
possibilities differ only in terms suppressed by $\Delta m$ and
$\Delta p$. Since $\Delta E$ is already small in that sense
(see Eq.~(\ref{diffE})) and the first order in
$\Delta m$ and $\Delta p$ is sufficient, we take the velocity $p/E$.}
For $M^0 \leftrightarrows \bar M^0$ oscillations,
the phase~(\ref{phiQM}) can also be written in the familiar form
$\Delta m\, \tau$, where $\tau$ is the eigentime of the particle for
covering a distance $L$.
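That the arbitrary $\Delta p$ drops out of the quantum-theoretical phase~(\ref{phiQM}) can also be checked symbolically (a sketch using $\Delta E$ from Eq.~(\ref{diffE}) and $T = LE/p$ as in the text):

```python
import sympy as sp

p, L, m, Dm, Dp, E = sp.symbols('p L m Dm Dp E', positive=True)
dE = (m * Dm + p * Dp) / E     # eq. (diffE): Delta E
T = L * E / p                  # interference time used in the text
phase = dE * T - Dp * L        # Delta phi_QT = Delta E * T - Delta p * L
print(sp.simplify(phase))      # m*Dm*L/p: the arbitrary Dp has cancelled
```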
We want to emphasize that a more complete understanding of the
oscillation phase needs a full quantum-mechanical or quantum
field-theoretical approach. All such treatments (see for instance the
reviews~\cite{zralek,beuthe-review,giunti04} and
references therein) consistently give the result of
Eq.~(\ref{phiQM}). In approaches not guided by quantum mechanics or
quantum field theory the conversion of time into a distance is always
the subtle point \cite{lipkin,ancochea}. In all present experiments,
oscillations are treated as phenomena in space. If eigentimes are used
for the evaluation of data, then distances are converted into times
(see e.g.~\cite{babar,belle,hummel}).
\section{Confronting non-quantum-theoretical phases with experiment}
\label{confronting}
Since we have seen that the derivation of the phase~(\ref{nonQM}) does not
conform to the rules of quantum theory whereas Eq.~(\ref{QMphase})
does, one could ask why the phase~(\ref{nonQM}) should be considered
at all. From our point of view, the reason for this is twofold:
\begin{itemize}
\item
On the one hand, there is the subtlety that the time difference
$\Delta t = \left| t_2 - t_1 \right|$ (see Eq.~(\ref{wrong})), which
is the culprit of the discrepancy with the quantum-theoretical result,
is immeasurably small.
\item
On the other hand, as we will show, the phases~(\ref{factor2}) and
(\ref{factor2'}) can actually be tested experimentally.
\end{itemize}
The time difference can be expressed as
\begin{equation}\label{timediff}
\Delta t \simeq
\frac{L}{2pE}
\left| \Delta m^2 -
\left( m_1^2 + m_2^2 \right) \frac{\Delta p}{p} \right|.
\end{equation}
To get a feeling for the size of $\Delta t$, we take the $K^0 \bar
K^0$ system with $\Delta m_K \simeq 3.48 \times 10^{-12}$~MeV and use
for example $L = 1$~m, $p = 1$~GeV and $\Delta p = 0$. Then we find
$\Delta t \sim 5 \times10^{-24}$~sec, which is indeed far beyond
measurability.
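The quoted order of magnitude can be reproduced in a few lines. The numerical values for the kaon mass, $\hbar$ and $\hbar c$ below are standard ones, not taken from the text; the script only evaluates Eq.~(\ref{timediff}) with $\Delta p = 0$:

```python
import math

hbar_c = 197.327e-15     # MeV * m
hbar   = 6.582e-22       # MeV * s
m_K    = 497.611         # neutral-kaon mass, MeV
dm_K   = 3.48e-12        # K_L - K_S mass difference, MeV
L, p   = 1.0, 1000.0     # baseline (m) and momentum (MeV), as in the text

E   = math.hypot(p, m_K)              # relativistic energy, MeV
dm2 = 2 * m_K * dm_K                  # Dm^2 ~ 2 m dm, MeV^2

dt = (L / hbar_c) * dm2 / (2 * p * E) * hbar   # Eq. (timediff), Dp = 0

print(f"Delta t ~ {dt:.1e} s")        # ~ 5e-24 s, far beyond measurability
```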
As for an experimental test of the phase~(\ref{factor2'}) we consider
two different measurements of the $K_L-K_S$ mass difference. Since
this phase has an additional dependence on the momentum, it is useful
to compare two measurements which have different average kaon
momenta. The CPLEAR experiment has measured \cite{CPLEAR}
$\Delta m_K = (5295 \pm 20 \pm 3) \times 10^6\;
\hbar \mathrm{s}^{-1}$. In that experiment kaons are produced in the
reaction $p \bar p \to K^+ \pi^- {\bar K}^0$ and the charged-conjugate
reaction, with $p \bar p$ annihilation at rest. Thus the kaons are
non-relativistic. In the KTeV experiment
the kaons are in the ultra-relativistic regime; this experiment has
obtained \cite{KTeV}
$\Delta m_K = (5261 \pm 15) \times 10^6\;
\hbar \mathrm{s}^{-1}$.
According to Eq.~(\ref{factor2'}) the mass differences extracted in these
experiments should be different and related by
\begin{equation}\label{ratio}
\frac{\left( \Delta m_K \right)_\mathrm{CPLEAR}}%
{\left( \Delta m_K \right)_\mathrm{KTeV}} =
1 + \frac{m_{K^0}^2}{2\,p_{K^0}^2} \geq
1 + \frac{m_{K^0}^2}{2\,p_{K^0\,\mathrm{max}}^2},
\end{equation}
where $p_{K^0}$ is the (average) neutral-kaon momentum in the CPLEAR
experiment.\footnote{If Eq.~(\ref{factor2'}) were correct, there
should also be a dependence of the extracted mass difference on
$p_{K^0}$.} One can show that the maximal energy of the neutral kaon
in the CPLEAR reaction is given by
\begin{equation}\label{Emax}
E_{K^0\,\mathrm{max}} =
\frac{4\, m_p^2 - m_\pi^2 - 2\, m_\pi m_K}{4\, m_p},
\end{equation}
where $m_p$, $m_\pi$ and $m_K$ are proton, pion and kaon mass,
respectively. For our purpose the distinction between the mass values
of the charged and neutral kaon masses is irrelevant. With the
numbers above for the mass differences obtained by the CPLEAR and KTeV
experiments, we use the law of propagation of errors to compute the
value $1.006 \pm 0.005$ for the ratio on the left-hand side of
Eq.~(\ref{ratio}). We insert the values of the particle masses into
Eq.~(\ref{Emax}) and calculate $p_{K^0\,\mathrm{max}}$; then we arrive
at 1.22 for the right-hand side of Eq.~(\ref{ratio}), which is about
40 standard deviations larger than the ratio of $K_L - K_S$ mass
differences. Consequently, we conclude that the
phase~(\ref{factor2'}) is in contradiction to the results of the
CPLEAR and KTeV experiments.
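The numbers entering this comparison can be checked directly. The sketch below uses standard particle masses (our input, not the paper's table) to evaluate Eq.~(\ref{Emax}), the right-hand side of Eq.~(\ref{ratio}), and the measured ratio with propagated errors:

```python
import math

m_p, m_pi, m_K = 938.272, 139.570, 497.611      # standard masses, MeV

# Maximal neutral-kaon energy in p pbar -> K+ pi- K0bar at rest, Eq. (Emax)
E_max = (4 * m_p**2 - m_pi**2 - 2 * m_pi * m_K) / (4 * m_p)
p_max = math.sqrt(E_max**2 - m_K**2)

rhs = 1 + m_K**2 / (2 * p_max**2)               # RHS of Eq. (ratio)

# CPLEAR (5295 +- 20 +- 3) vs KTeV (5261 +- 15), in 10^6 hbar/s
r  = 5295 / 5261
dr = r * math.hypot(math.hypot(20, 3) / 5295, 15 / 5261)

print(f"p_max ~ {p_max:.0f} MeV")               # ~ 745 MeV
print(f"measured ratio: {r:.3f} +- {dr:.3f}")   # 1.006 +- 0.005
print(f"RHS of Eq. (ratio): {rhs:.2f}")         # ~ 1.22, tens of sigma away
```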
The phase~(\ref{factor2}) which contains the notorious factor of two
needs a different approach; in the next section we will use the fit to
the unitarity triangle constructed from the CKM matrix to show that
this factor of two is experimentally strongly disfavoured. For the
idea to compare the $\Delta m^2$ result of the solar neutrino
experiments with that of the KamLAND experiment see
Ref.~\cite{smirnov}.
\section{The unitarity triangle fit}
\label{fit}
\subsection{Description of the unitarity triangle analysis}
\label{sec:fit-description}
Following the traditional way, the unitarity triangle (UT)
is given by the three points $A
= (\bar\rho,\bar\eta)$, $B = (1,0)$, $C = (0,0)$ in the plane of the
parameters $\bar\rho$ and $\bar\eta$, which are defined by
\begin{equation}
\bar\rho = \rho \left( 1 - \frac{\lambda^2}{2} \right) \,,\quad
\bar\eta = \eta \left( 1 - \frac{\lambda^2}{2} \right) \,,
\end{equation}
where $\lambda,\rho,\eta$ are the Wolfenstein parameters of the CKM
matrix. Pedagogical introductions to the UT can be found e.g.\ in
Refs.~\cite{buras1,buras2,silva}.
Our numerical analysis is based on the input data
as given in Tab.~1 of Ref.~\cite{UT2005}, and we use the following
constraints to determine the point $A = (\bar\rho,\bar\eta)$:
\begin{itemize}
\item
The measured value of $\varepsilon_K = (2.280\pm0.013)\times
10^{-3}$. The theoretical prediction for this quantity, which is a
measure for CP violation in $K^0 - \bar K^0$ mixing, is given
by\footnote{For the sake of brevity we drop the phase factor
$\exp (i\pi/4)$ in $\varepsilon_K$, since it plays no role in the following.}
\begin{equation}\label{eq:epsilon}
\varepsilon_K = \frac{\hat B_K \, C}{\Delta m_K} \,
\bar\eta \, \left[ (1-\bar\rho) \, D - E \right] \,,
\end{equation}
where $\Delta m_K$ is the $K_L-K_S$ mass difference and $\hat B_K, C,
D, E$ are numbers which have to be calculated and/or depend on
measured quantities such as $\lambda,\, m_t,\, m_c,\, |V_{cb}|$
(see e.g.\ Ref.~\cite{buras2} for precise definitions).
\item
The experimental determination of $|V_{ub}/V_{cb}|$. This ratio is
connected to $\bar\rho,\bar\eta$ by
\begin{equation}
\sqrt{\bar\eta^2 + \bar\rho^2} =
\left( \frac{1}{\lambda} - \frac{\lambda}{2} \right)
\left| \frac{V_{ub}}{V_{cb}} \right| \,.
\end{equation}
\item
The measurement of the $B_{dH} - B_{dL}$ mass difference
\begin{equation}\label{eq:DmBd}
\Delta m_{B_d} = (0.502\pm0.006)\,\hbar\,\mathrm{ps}^{-1} \,.
\end{equation}
The theoretical prediction for the square root of $\Delta m_{B_d}$
as a function of $\bar\rho,\bar\eta$ is given by
\begin{equation}\label{eq:DmBdconstraint}
\sqrt{\Delta m_{B_d}} = F \, |V_{cb}| \, \lambda
\sqrt{\bar\eta^2 + (1 - \bar\rho)^2} \,,
\end{equation}
where $F$ is a constant depending on $m_t$ and other quantities
subject to theoretical uncertainties (see e.g.\ Ref.~\cite{buras2}).
\item
In addition we use direct information on the angles of the unitarity
triangle $\alpha,\beta,\gamma$. The angle $\beta$ has been measured
at BaBar and Belle, and we use the value $\sin2\beta =
0.726\pm0.037$. For $\gamma$ we use the value $(59.1\pm16.7)^\circ$
(see Ref.~\cite{UT2005} for details), whereas for $\alpha$ we use the
likelihood function extracted from Fig.~10 of Ref.~\cite{UT2005} to
take into account the two allowed regions for $\alpha$ around
$107^\circ$ and $176^\circ$.
\end{itemize}
We do not use the constraint from $\Delta m_{B_s}$ which usually is
included in UT fits. The reason is that at present only a lower bound
exists on $\Delta m_{B_s}$, and therefore no further constraint is
obtained for the oscillation phase with the extra factor of two.
However, we remark that once an upper bound on $\Delta m_{B_s}$ is
established in the future, it will provide additional information on
the oscillation phase.
The fit is performed by constructing a $\chi^2$-function
$\chi^2(\bar\rho,\bar\eta)$ from these observables, including
experimental as well as theoretical errors. Technical details on our
analysis are given in Appendix~\ref{appendix}.
\subsection{Results of the UT analysis}
The result of the standard UT fit is shown in the upper panel of
Fig.~\ref{fig:UT}. It is in good agreement with the results of various groups
performing this analysis, compare e.g.\ Refs.~\cite{CKM-group,UT2005,RPP}. We
show the allowed regions in the plane of $\bar\rho$ and $\bar\eta$ at 95\%~CL
for the individual constraints from $\varepsilon_K, |V_{ub}/V_{cb}|, \Delta
m_{B_d}, \sin 2\beta$, as well as the combined analysis including in addition
the information on $\alpha$ and $\gamma$. The 95\%~CL regions are obtained
within the Gaussian approximation for 2 degrees of freedom (dof), i.e.\ they
are given by contours of $\Delta\chi^2 = 5.99$. For the best fit point of the
combined analysis we obtain $\bar\rho = 0.237, \bar\eta = 0.325$ with the
95\%~CL allowed region shown as the ellipse in Fig.~\ref{fig:UT}. Assuming
that the $\chi^2$-minimum follows a $\chi^2$-distribution with $(6-2)$ dof, our
value of $\chi^2_\mathrm{min} = 1.4$ implies an excellent goodness of fit of
84\%.
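The quoted goodness of fit is easy to verify: for an even number of dof the $\chi^2$ survival function has a closed form. A minimal stdlib-only sketch (the function name is ours):

```python
import math

def chi2_sf_even(x, k):
    """Survival function P(chi^2_k > x) for an even number of dof k:
    exp(-x/2) * sum_{j < k/2} (x/2)^j / j!"""
    t = x / 2
    return math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k // 2))

gof = chi2_sf_even(1.4, 4)            # chi^2_min = 1.4, (6 - 2) dof
print(f"goodness of fit: {gof:.0%}")  # -> 84%
```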
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{UT.eps}
\caption{Unitarity triangle fit with $\Delta m_K$ and $\Delta m_{B_d}$
obtained from the standard oscillation phase (upper panel) and the
oscillation phase with the extra factor of two (lower panel). The shaded
regions correspond to the 95\% CL regions (2 dof) obtained from the
constraints given by $\varepsilon_K, |V_{ub}/V_{cb}|, \Delta m_{B_d}$ and
$\sin2\beta$. In addition, constraints from the measurement of the
angles $\alpha$ and $\gamma$ are used in the fit (not shown in the
figure). The ellipses correspond to the 95\% CL regions from all data
combined, and the stars mark the best fit points.}
\label{fig:UT}
\end{figure}
Let us now discuss how an extra factor of two in the oscillation phase will
affect the UT fit. If such a factor is present the mass differences inferred
from particle--antiparticle oscillation experiments will be two times
smaller. Therefore, whenever in the UT analysis a mass difference
inferred from oscillations enters one has to use
\begin{equation}\label{eq:r}
\overline{\Delta m} = r\, \Delta m
\end{equation}
with $r=1/2$, where $\Delta m$ is the value obtained with the standard
oscillation phase, i.e.\ this is the value which is given by the
Particle Data Group~\cite{RPP}. In the lower panel of
Fig.~\ref{fig:UT} we show the result of the UT fit by using the extra
factor of two in the oscillation phase. This factor affects two
observables relevant for the UT fit.
\begin{enumerate}
\item
In the prediction for $\varepsilon_K$ shown in Eq.~(\ref{eq:epsilon}) the
experimental value for $\Delta m_K$ is used. Since this value is
obtained from $K^0 \leftrightarrows \bar K^0$ oscillations,
$\Delta m_K$ has to be
replaced by $\overline{\Delta m}_K$ if there is an extra factor of two in
the oscillation phase. This moves the hyperbola in the
($\bar\rho,\bar\eta$) plane from $\varepsilon_K$ to the right, as visible
in Fig.~\ref{fig:UT}.
\item
The experimental value for $\Delta m_{B_d}$ given in
Eq.~(\ref{eq:DmBd}) has to be replaced by $\overline{\Delta m}_{B_d}$,
which is a factor of two smaller. Therefore, from
Eq.~(\ref{eq:DmBdconstraint}) it is clear that the radius of the
circle in the ($\bar\rho,\bar\eta$) plane from $\Delta m_{B_d}$ is
reduced by a factor $\sqrt{2}$, as can be seen also in
Fig.~\ref{fig:UT}.
\end{enumerate}
The other constraints from $|V_{ub}/V_{cb}|, \sin2\beta, \alpha$ and $\gamma$
are obtained from particle decays without involving any oscillation effect,
and therefore they do not depend on the oscillation phase. One observes from
Fig.~\ref{fig:UT} that the agreement of the individual constraints gets
significantly worse using the extra factor of two.
In particular, at 95\%~CL there
is only a very marginal overlap of the intersection of the allowed regions
from $|V_{ub}/V_{cb}|$ and $\sin2\beta$ with the one from $\varepsilon_K$. The
best fit point in the lower panel of Fig.~\ref{fig:UT} has
$\chi^2_\mathrm{min} = 13.8$, which implies a goodness of fit of only 0.8\%,
assuming a $\chi^2$-distribution for 4~dof.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{chisq_r.eps}
\caption{$\chi^2$ of the unitarity triangle fit as a function of the
parameter $r$ defined in Eq.~(\ref{eq:r}). For fixed $r$ the $\chi^2$
is minimized with respect to $\bar\rho$ and $\bar\eta$.}
\label{fig:chisq}
\end{figure}
In Fig.~\ref{fig:chisq} we show the $\chi^2$ minimized with respect to
$\bar\rho$ and $\bar\eta$ as a function of the parameter $r$ given in
Eq.~(\ref{eq:r}). Hence, $r = 1$ corresponds to the standard
oscillation phase, and $r = 1/2$ corresponds to the extra factor
of two. From this figure one observes the remarkable feature that the best
fit point occurs exactly at $r=1$. In other words, even if
the extra factor in the
oscillation phase is treated as a free parameter to be determined by
the fit, the data prefer the standard oscillation phase. For the value
$r=1/2$ we obtain a $\Delta \chi^2 = 12.4$ with respect to the best
fit point, which corresponds to an exclusion at $3.5\sigma$ for
1~dof. We conclude that the extra factor of two in the oscillation
phase is strongly disfavoured by the UT fit.
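The statistical statements above can be reproduced with a short stdlib-only check: the goodness of fit for $r = 1/2$ from the $\chi^2$ survival function (closed form for even dof), and the exclusion significance from $\Delta\chi^2 = 12.4$ with 1~dof, for which $n_\sigma = \sqrt{\Delta\chi^2}$:

```python
import math

def chi2_sf_even(x, k):
    """Survival function P(chi^2_k > x) for an even number of dof k."""
    t = x / 2
    return math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k // 2))

p_half  = chi2_sf_even(13.8, 4)     # goodness of fit at r = 1/2
n_sigma = math.sqrt(12.4)           # 1 dof: Delta chi^2 = n_sigma^2

print(f"r = 1/2: goodness of fit {p_half:.1%}")   # -> 0.8%
print(f"exclusion: {n_sigma:.1f} sigma")          # -> 3.5 sigma
```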
\subsection{Robustness of the UT analysis}
\label{sec:robustness}
In this subsection we investigate the robustness of our conclusion with
respect to variations of the input data for the UT fit. To this aim we show in
Tab.~\ref{tab:variations} the results of our analysis by changing some of the
numbers entering the UT fit. The line ``standard analysis'' in the table
corresponds to the analysis described in the previous two subsections. In
particular, exactly the input data given in Tab.~1 of Ref.~\cite{UT2005}
are used.
First we have investigated how our analysis depends on the value for
$|V_{ub}|$. We show the results of the fit by using only the value
from exclusive ($|V_{ub}|_\mathrm{(excl)}$) or inclusive
($|V_{ub}|_\mathrm{(incl)}$) decays, where the numbers are taken from
Ref.~\cite{UT2005}. Note that in our standard analysis both values are
taken into account, as described in Appendix~\ref{appendix}. We
observe from the numbers given in Tab.~\ref{tab:variations} that for
the relatively small value for $|V_{ub}|$ from exclusive measurements
the fit gets notably worse for $r=1/2$. In contrast, for the
relatively large value from inclusive measurements the fit gets worse
for the standard oscillation phase ($\chi^2_\mathrm{min} = 3.9$),
whereas for $r=1/2$ the fit improves with respect to the standard
analysis ($\chi^2_\mathrm{min} = 7.8$). The reason is that for large
values of $|V_{ub}|$ the radius of the circle in the
($\bar\rho,\bar\eta$) plane from $|V_{ub} / V_{cb}|$ becomes larger,
which worsens the fit for $r=1$, whereas for $r=1/2$ the agreement of
the individual allowed regions becomes better. Note, however, that even
for $|V_{ub}|_\mathrm{(incl)}$ the goodness of fit for $r=1/2$ is only
1\%, and $r=1/2$ is disfavoured with respect to $r=1$ by 2$\sigma$.
We have also performed the analysis by using the (inclusive and
exclusive) averaged value $|V_{ub}|_\mathrm{(PDG)}$ obtained by the
PDG~\cite{RPP}. The fit using the extra factor of two is slightly improved
with respect to our standard analysis, however $r=1/2$ can still be
excluded at $3.2\sigma$.
\begin{table}
\centering
\begin{tabular}{|l|c|c|c|}
\hline\hline
& $\chi^2_\mathrm{min} ( r =1)$
& $\chi^2_\mathrm{min} ( r =1/2)$
& number of $\sigma$ \\
\hline
standard analysis & 1.4 & 13.8 & 3.5 \\
\hline
$|V_{ub}|_\mathrm{(excl)} = (33.0 \pm 2.4 \pm 4.6)\times 10^{-4} $
& 2.9 & 17.6 & 3.8 \\
$|V_{ub}|_\mathrm{(incl)} = (47.0 \pm 4.4) \times 10^{-4}$ & 3.9 & 7.8 & 2.0 \\
$|V_{ub}|_\mathrm{(PDG)} = (36.7 \pm 4.7) \times 10^{-4}$ & 1.6 & 11.9 & 3.2 \\
\hline
$m_c = (1.2 \pm 0.2)$ GeV & 1.4 & 11.9 & 3.2 \\
\hline
constraints on $\alpha,\gamma$ not used & 0.13 & 9.6 & 3.1\\
\hline\hline
\end{tabular}
\caption{The $\chi^2_\mathrm{min}$ for the standard oscillation phase ($r=1$)
and for the oscillation phase with the extra factor of two ($r=1/2$) for
variations of the input data (see text for details). The column ``number of
$\sigma$'' gives the number of standard deviations with which $r=1/2$ is
disfavoured with respect to $r=1$.}
\label{tab:variations}
\end{table}
Furthermore we have investigated how our result depends on the input
value for the charm quark mass $m_c$. The value $m_c = (1.2 \pm
0.2)$~GeV is adopted by the CKM-fitter group~\cite{CKM-group},
in contrast to the value $m_c = (1.3 \pm 0.1)$~GeV from the UTfit
Collaboration~\cite{UT2005} used in our standard analysis. The mild
improvement of the fit for $r=1/2$ comes mainly from the larger error
on $m_c$, which leads to a slightly larger allowed region from
$\varepsilon_K$.
In the last line of Tab.~\ref{tab:variations} we have removed the
constraints for the angles $\alpha$ and $\gamma$ from the fit, i.e.\
we use only $\varepsilon_K, |V_{ub}/V_{cb}|, \Delta m_{B_d}, \sin
2\beta$. We observe that the direct constraints of $\alpha$ and
$\gamma$ contribute $4.2$ units of $\chi^2$ to the
$\chi^2_\mathrm{min}$ for $r=1/2$. However, also without the
constraints for $\alpha$ and $\gamma$ the extra factor of two in the
oscillation phase is excluded by more than $3\sigma$.
Finally, let us comment on the very small value of
$\chi^2_\mathrm{min} = 0.13$ (for 2~dof), which we obtain without the
constraints on $\alpha$ and $\gamma$ for the standard oscillation
phase. In fact, the $\chi^2$-minimum value of 1.4 in the standard
analysis comes mainly from $\alpha$. To include the information on
this angle we are using the likelihood function from Fig.~10 of
Ref.~\cite{UT2005} (see Appendix~\ref{appendix}), which has two maxima
around $107^\circ$ and $176^\circ$. The maximum at $176^\circ$ is
slightly preferred, whereas the UT fit requires the other maximum. The
very small $\chi^2$-minimum value obtained without using the
likelihood for $\alpha$ shows that the fit is dominated by rather
large theoretical errors. Therefore, $\chi^2$ is significantly lower
than expected from statistics alone. The fact that even with these
assumptions on theoretical errors the $\chi^2$ is large for $r=1/2$
implies that the exclusion of the extra factor of two in the
oscillation phase is rather robust.
\section{Conclusions}
\label{concl}
In this paper we have reconsidered claims that the standard oscillation
phase~(\ref{phiQM}) has to be corrected by extra factors. We have
focused on possible tests of these extra factors by using experimental
data. The usual starting point to derive these non-quantum-theoretical
expressions for the oscillation phase is Eq.~(\ref{wrong}), which
says that mass eigenstates with different masses need different
times to reach the spatial point where the interference of the
amplitudes for the different mass eigenstates takes place. In this way
we have derived the phase $\overline{\Delta \varphi}$ of
Eq.~(\ref{nonQM}). The aim of our theoretical discussion was to
consider both neutrino oscillations and
oscillations of neutral flavoured mesons. For $M^0 - \bar M^0$
oscillations, it was important to include the non-relativistic limit
in our phase considerations.
We have obtained the following results:
\begin{enumerate}
\item
The non-quantum-theoretical phase
$\overline{\Delta \varphi}$ of Eq.~(\ref{nonQM}) becomes ambiguous in
the non-relativistic case, because it contains a small but arbitrary
momentum difference $\Delta p$. We have stressed that in the correct
quantum-theoretical treatment, where the amplitudes interfere at the
\emph{same} time, this arbitrary term does \emph{not} show up.
\item
If we adjust $\Delta p$ in Eq.~(\ref{nonQM})
such that the mass eigenstates have the same
energy, then a momentum-dependent extra factor appears in
$\overline{\Delta \varphi}$---see Eq.~(\ref{factor2'}). We have shown
that this extra momentum dependence is in disagreement with
measurements of the $K_L - K_S$ mass difference at different kaon
energies.
\item
If $\Delta p = 0$, the notorious factor of two appears in
$\overline{\Delta \varphi}$---see Eq.~(\ref{factor2}).
We have demonstrated that using $K_L - K_S$ and $B_{dH} - B_{dL}$
mass differences extracted from the data with the extra factor of two in the
$K^0 \leftrightarrows \bar K^0$ and
$B_d^0 \leftrightarrows \bar B_d^0$ oscillation phases,
respectively, the unitarity triangle fit in the Standard Model becomes
significantly worse compared to the fit with the standard mass
differences. The phase with the extra factor of two is excluded at
more than three standard deviations with respect to the standard phase.
\end{enumerate}
Concerning this last point, as an additional check, we have treated
the extra factor in the oscillation phase as a free parameter $r$ (see
Eq.~(\ref{eq:r})) and considered $\chi^2$ as a function of $r$. It is
remarkable that the minimum of $\chi^2$ occurs nearly precisely at
$r=1$, which corresponds to the standard oscillation phase. This
result can be regarded as a successful test of quantum theory. It is
likely that in the future, with accumulated data used in the unitarity
triangle fit, the exclusion of the extra factor of two will become
even more significant.
\vspace{5mm}
\noindent
\textbf{Acknowledgements:}
S.M.B.\ acknowledges the support by the
Italian Program ``Rientro dei Cervelli''.
W.G.\ would like to thank S.T.\ Petcov for an invitation to SISSA,
where part of this work was performed. He is also grateful to
A.Yu.\ Smirnov for a useful discussion.
T.S.\ is supported by a ``Marie Curie Intra-European Fellowship within
the 6th European Community Framework Programme.''
\begin{appendix}
\section{Details of our UT fit procedure}
\label{appendix}
The fit of the UT is performed by adopting the following $\chi^2$-function:
\begin{eqnarray}
\chi^2(\bar\rho,\bar\eta, \hat B_K, |V_{ub}|) &=&
\sum_{i,j} (x_i^\mathrm{exp} - x_i^\mathrm{pred}) S^{-1}_{ij}
(x_j^\mathrm{exp} - x_j^\mathrm{pred}) +
\chi^2_\alpha \nonumber\\
&+&
\chi^2_\mathrm{syst}(\hat B_K) + \chi^2_\mathrm{syst}(|V_{ub}|)
\label{eq:chisq}
\end{eqnarray}
The final $\chi^2$ is obtained by minimizing Eq.~(\ref{eq:chisq}) with
respect to $ \hat B_K$ and $|V_{ub}|$:
\begin{equation}
\chi^2(\bar\rho,\bar\eta)
=
\mathrm{Min}
\left[\chi^2(\bar\rho,\bar\eta, \hat B_K, |V_{ub}|); \, \hat B_K, |V_{ub}|
\right] \,.
\end{equation}
In Eq.~(\ref{eq:chisq}) the indices $i,j$ run over $(\varepsilon_K,
|V_{ub}/V_{cb}|, \Delta m_{B_d}, \beta, \gamma)$ and $S_{ij}$ is the
covariance matrix of these observables containing the experimental as
well as theoretical uncertainties. It also takes into account correlations
between the various observables induced by the experimental errors of
parameters such as $m_t,\lambda$ and $|V_{cb}|$, which are common to
more than one observable.
The term $\chi^2_\alpha$ contains the information on the angle
$\alpha$, and is defined as $\chi^2_\alpha = -2 \ln
[\mathcal{L}(\alpha) / \mathrm{Max} \, \mathcal{L}(\alpha) ]$, where
$\mathcal{L}(\alpha)$ is the likelihood function for $\alpha$ read off
from Fig.~10 of Ref.~\cite{UT2005}.
For the treatment of theoretical uncertainties we follow the common
practice in UT fits to split the error into a Gaussian
part and into a ``flat'' part, which cannot be assigned a
probabilistic interpretation~\cite{CKM-group,UT2000,UT2005}. For the
parameter $\hat B_K$ relevant for $\varepsilon_K$ one has $\hat B_K =
0.86 \pm 0.06 \pm 0.14$, where the first error is Gaussian and the
second is ``flat''. To include both errors in our fit we construct a
likelihood function $\mathcal{L}(\hat B_K)$ by convolving a Gaussian
distribution with width $0.06$ with a flat distribution which is
non-zero in the interval $[-0.14,+0.14]$ and zero outside. Then this
likelihood is converted into a $\chi^2$ by $\chi^2_\mathrm{syst}(\hat
B_K) = -2 \ln [ \mathcal{L}(\hat B_K) / \mathrm{Max} \, \mathcal{L}(\hat
B_K)]$ which is added to the total $\chi^2$ according to
Eq.~(\ref{eq:chisq}). The resulting $\chi^2$ is minimised for fixed
$\bar\rho$ and $\bar\eta$ with respect to $\hat B_K$.
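The flat-plus-Gaussian error treatment described above can be sketched numerically. The following is only an illustration for $\hat B_K = 0.86 \pm 0.06 \pm 0.14$; the grid resolution and function names are our choices, not part of the published analysis:

```python
import math

mu, sig, half = 0.86, 0.06, 0.14     # Gaussian width and flat half-width

def likelihood(bk, n=2001):
    # Convolution: average the Gaussian over the flat interval [-half, half]
    acc = 0.0
    for i in range(n):
        s = -half + 2 * half * i / (n - 1)
        acc += math.exp(-0.5 * ((bk - mu - s) / sig) ** 2)
    return acc / n

l_max = likelihood(mu)               # maximum at the central value (symmetry)

def chi2_syst(bk):
    return -2.0 * math.log(likelihood(bk) / l_max)

print(f"chi2_syst(0.86) = {chi2_syst(0.86):.2f}")   # 0 at the centre
print(f"chi2_syst(1.10) = {chi2_syst(1.10):.2f}")   # penalised far outside
```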
The value of $|V_{ub}|$ can be obtained from exclusive and inclusive decays,
where the exclusive measurement suffers from theoretical uncertainties
characterized by a ``flat'' error (see e.g.\ Tab.~1 of Ref.~\cite{UT2005}). In
our standard analysis we include both values by constructing a likelihood
function $\mathcal{L}(|V_{ub}|) = \mathcal{L}_\mathrm{excl}(|V_{ub}|) \times
\mathcal{L}_\mathrm{incl}(|V_{ub}|)$, where
$\mathcal{L}_\mathrm{excl}(|V_{ub}|)$ is obtained in the same way as in the case of
$\hat B_K$ by folding a Gaussian and a flat distribution, whereas
$\mathcal{L}_\mathrm{incl}(|V_{ub}|)$ is just a Gaussian
distribution. Finally, the term $\chi^2_\mathrm{syst}(|V_{ub}|)$ in
Eq.~(\ref{eq:chisq}) is obtained by $\chi^2_\mathrm{syst}(|V_{ub}|) = -2 \ln [
\mathcal{L}(|V_{ub}|) / \mathrm{Max} \mathcal{L}(|V_{ub}|)]$. The dependence
of our results on the treatment of $|V_{ub}|$ is discussed in
Sec.~\ref{sec:robustness}.
\end{appendix}
Just hoist those mammas up with industrial strength rigging, and it's plain sailing for you and your twin buoys! And the author can thank himself for being part of the problem with his insensitive unrealistic blurb. Such a shame that some people or women you can regard as beautiful on the outside are still ugly people on the inside. It looks like a giant ball of black and red mess. She acquired tattoos of him and PTV symbology in a ritualistic fashion in the lead up to their first meeting in person, committing her skin as an altar to thee TOPY.
Andrea L April 13, I squealed when I saw your username, Zelda. Jayne Mansfield and other actresses of the day had better proportions, and who cares about the dress size discrepancy, her thighs and hips WERE quite large and her upper body was quite small. Probably the perception of the difference between then and now lies more in the fact that the average American is a lot bigger today. And this is it! What happened with that larger than life romance with Lady Jaye?
Syntretus pusio is a species of wasp first described by Marshall in 1898. Syntretus pusio belongs to the genus Syntretus and the family Braconidae. No subspecies are listed in the Catalogue of Life.
Sources
Braconid wasps
pusio
Sault Ste. Marie, Rogers City, Cheboygan and Harbor Springs all advanced to the semifinals in the District 13 Little League Major Division (ages 11-12) tournament over the weekend at Bates Park.
In the eight-team pool play tournament, Sault Ste. Marie and Cheboygan both won their respective pools with 3-0 records, while Harbor Springs went 2-1 and Rogers City also went 2-1.
Today, Monday, in the semifinals Sault Ste. Marie will face Rogers City on Field A at Bates Park, while Harbor Springs will face Cheboygan on Field B. Both games are scheduled to begin at 6 p.m.
Petoskey finished with a 1-2 record and was eliminated from the tournament as they fell to Rogers City, 6-3; defeated Tri-Rivers, 14-6; and fell to Cheboygan, 18-5. In other Pool B games, Cheboygan defeated Tri-Rivers, 26-0; Cheboygan defeated Rogers City, 10-1; and Rogers City topped Tri-Rivers, 11-1.
In Pool A games, Sault Ste. Marie topped Harbor Springs, 11-1; Onaway defeated North Emmet, 24-14; Sault Ste. Marie topped North Emmet, 10-0; Harbor Springs beat Onaway, 6-0; Sault Ste. Marie defeated Onaway, 10-0; and Harbor Springs defeated North Emmet, 10-3.
The tournament final is scheduled for 6 p.m. Tuesday, July 9, at Bates Park.
1. May 11, 2009

### Imperil

A movie theatre sells tickets for $8.50 each. The manager is considering raising the prices but knows that for every 50 cents the price is raised, 20 fewer people go to the movies. The equation R = -40c^2 = 720c describes the relationship between the cost of tickets, c dollars, and the amount of revenue, R dollars, that the theatre makes. What price should the theatre charge to maximize revenue?

I believe what I need to do is find the maximum vertex of the parabola in order to solve the equation. So I did the following:

R = -40c^2 - 720c
= -40(c^2 - 18c)
= -40(c^2 - 18c + 9^2 - 9^2) <-- complete the square
= -40(c^2 - 18c + 81 - 81)
= -40[(c - 9)^2 - 81]
= -40(c - 9)^2 + 3240

Which would give me a vertex (9, 3240) but this does not make sense to me, I am not sure what I am looking for to be honest. I believe that the maximum price would be $9.00 to have a revenue of $3240, is this correct and I am just second guessing?

2. May 11, 2009

### symbolipoint

You seem to be thinking in the right direction, although I did not analyze your work in detail. One spot of confusion is what you say, "equation R = -40c^2 = 720c describes the relationship between the cost of tickets, c dollars, and the amount of revenue, R dollars, that the theatre makes", does not make sense. OOOOHHH, you mean -40c^2 - 720c = R, this could be better.

3. May 11, 2009

### nickjer

You pulled out a negative but you left the 2nd term negative as well. Double check the equation you were given, because you miswrote it in the problem, and it could have a mistake when you first started solving it.

4. May 12, 2009

### Imperil

Now I am fairly confused as it really does not make sense to me. I double checked the equation and I was correct in my work that it is the following: R = -40c^2 - 720c. After correcting my mistake (that was pointed out by nickjer) I now have the following:

R = -40c^2 - 720c
= -40(c^2 + 18c)
= -40(c^2 + 18c + 81 - 81) <-- complete the square
= -40[(c + 9)^2 - 81]
= -40(c + 9)^2 + 3240

Which would give a vertex of (-9, 3240) which makes no sense to me in the context of the question. I am really not sure where to go from here.

5. May 12, 2009

### gabbagabbahey

Surely, this equation should be R = -40c^2 + 720c instead!

6. May 12, 2009

### Imperil

I have triple checked and it is definitely -720c which is why I am confused.

7. May 12, 2009

### gabbagabbahey

It must be a typo! If the equation were -40c^2 - 720c, then if you charged $1.00 per ticket, you would have a revenue of -$760.00; but revenue is always a positive quantity.

I would assume that the equation is supposed to be -40c^2 + 720c and just ask your instructor about it when you see him/her.

8. May 12, 2009

### Imperil

I thought this exact same thing but figured maybe I was thinking about it wrong! Thanks for your help, I just contacted my teacher by email regarding this. It is a key problem in my correspondence that I need to hand in, so I am shocked they included this typo.
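The thread's resolution checks out numerically, assuming the corrected revenue equation R = -40c^2 + 720c (the sign of the 720c term being the typo). This quick sketch is ours, not part of the original thread:

```python
def revenue(c):
    # Corrected revenue model: R = -40c^2 + 720c
    return -40 * c**2 + 720 * c

a, b = -40, 720
c_best = -b / (2 * a)            # vertex of a parabola R = a*c^2 + b*c

print(c_best, revenue(c_best))   # -> 9.0 3240.0: charge $9.00, revenue $3240
assert revenue(c_best) > revenue(8.5) and revenue(c_best) > revenue(9.5)
```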
Kathryn Henderson
The smells and sounds of a five-star meal underway permeate the room as Rick Neal prepares the first dish for the first course in his newly established culinary workshop. Neal spent years behind the scenes as an executive chef, gaining titles as 2017 Texas Chef of the Year and World Master Chef's Society member. Now, Neal is branching out to share his knowledge, skills and love of food through a unique outlet in the heart of Tyler.
Chef Rick Neal of R&D Culinary stands at his Bergfeld Center storefront. 📷 all photos by Kathryn Henderson
Opening shop in the Bergfeld Center, R&D Culinary is the brainchild of Neal and his wife, Denise. It is their second business venture, after becoming owners of Village Bakery in 2020. The bakery is a Tyler fixture, holding its trademark for the past 73 years.
Returning from Dallas, Texas, Neal is planning to offer a number of nationally and locally produced wares and services, including U.S.-made, hand-forged chefs' knives; locally sourced spices and ingredients; and cooking courses led by professionals. Neal's vision is not in the contents on the shelves, though. It is the experience people have when they enter.
Spiceology is one of the American-owned companies partnered with R&D Culinary. Based out of Washington state, they specialize in locally sourced spices.
"I want my visitors to step into a different world. The retail part is my toy box, it's things that I enjoy and that I want to share with others," Neal said. "My main focus is in the cooking classes, the fun and the journey that we can all take together."
The courses involve themes such as "Couples' Night," "Around the World with Chef" (with 13 "stops" along the tour) and "Chef's Surprise Night." Alongside culinary education, Neal hopes to support other chefs and locally owned businesses through his endeavor.
Chef Neal's second dish of the evening Dec. 15: seared red snapper topped with homemade fish sauce on a carrot puree alongside green beans, carrots and turnips.
Amina Petty is a Tyler resident and line cook at Longhorn Steakhouse. With little experience in the kitchen besides her workplace, she was intrigued by the opportunity to learn new recipes and from a nationally known chef. One of R&D's first class attendees, she reported a positive experience. "Chef's class was very welcoming and open. Personally, I grew up not spending a lot of time in the kitchen, and his class made me feel more prepared to start on my own."
"He was very personable and relaxed which made a huge difference in learning."
"We're working really hard to source U.S.A. products as much as we can. We've partnered up with several local and nationally based chef-owned companies already that offer high quality goods," Neal said. "It's important to us that we can support local businesses."
Hailing from Yorba Linda, Arizona, and moving to Tyler when he was 16, Neal pictured his future with a different set of tools in his hands. He imagined working with wrenches and engines or stencils and ink guns. When his parents were less-than-ecstatic, he pivoted to the culinary arts.
"I was 19, and I wanted to be a Harley Davidson mechanic and build bikes. Mom didn't think that was a good idea," Neal said. "Then, I wanted to be a tattoo artist, and Dad didn't think that was a good idea. So cooking is what I picked up and where my career has revolved around."
R&D Culinary Chef Rick Neal walks the class through the science of sauce reduction Dec. 15 during a couples night-themed class.
Neal's classes delve into a number of topics and lessons, skills and dishes. Neal makes a point to build a foundation of cooking with his students. The classes at R&D dive into the science, math and origins behind each dish, laying the groundwork to explore new skills and ideas.
R&D course attendees can leave with a dish sample and hopefully the confidence to go home and try some new dishes and techniques.
The first dish of the evening of Dec. 15 at R&D Culinary featured butter-braised bison on a jalapeño crisp, topped with cherry tomatoes.
Much of Neal's expertise comes from his training at Aims Culinary Academy of Dallas and thereafter working in restaurants. Neal began as a line cook and made his way through the ranks as executive chef.
Neal hopes R&D Culinary will bring people together around a shared love of food through instruction, practice and delicious flavors.
"We look forward to helping others branch out of their comfort zones in the kitchen and to be able to create new dialogue and ideas with the community," Neal said.
Kathryn Henderson is a Jacksonville, Texas, native and a Tyler resident of 21 years. She graduated from Whitehouse High School where she developed her passion for news writing and storytelling. Her interests include art, music and political activism.
CancerConnect
Treatment of Stage II and III Bladder Cancer
by Dr. C.H. Weaver M.D. updated 1/2020
Patients with Stage II (T2) bladder cancer have cancer that invades through the connective tissue into the muscle wall, but has not spread outside the bladder wall or to local lymph nodes. Patients with stage II bladder cancer invading the inner half of the muscle of the bladder wall have a better outcome than patients with invasion into the deep muscle (outer half of the muscle of the bladder wall).
Patients with Stage III bladder cancer have cancer that invades through the connective tissue and muscle and into the immediate tissue outside the bladder and/or invades the prostate gland in males or the uterus and/or vagina in females. With Stage III bladder cancer, there is no spread to lymph nodes or distant sites. Stage III bladder cancer is classified as a "deep" or "invasive" bladder cancer.
There are essentially two ways to treat stage II-III bladder cancers: primary surgical treatment consisting of radical cystectomy with some form of urinary diversion, or combined modality treatment consisting of administration of chemotherapy and/or radiation therapy, followed by radical cystectomy only for those patients who do not achieve a complete response. Patients who achieve a complete response following chemotherapy and/or radiation are followed closely and are treated with a radical cystectomy if cancer returns. It is important to realize that several physicians, including a urologist, a medical oncologist, and a radiation oncologist, may be required to assist you in making the appropriate decision concerning the initial choice of treatment for stage II-III bladder cancer.
The general health condition of the patient may help determine which approach to treatment is most appropriate. It is important to consider whether the patient is well enough to undergo radical cystectomy and creation of an artificial bladder. It is the general health condition, rather than age, that can be the limiting factor for this type of surgery. For patients in good condition, the choice will depend on the extent of cancer and the preferences of the patient and treating physicians.
Surgery as Primary Treatment
Radical cystectomy is a standard treatment for invasive bladder cancer. A radical cystectomy involves removal of the bladder, tissue around the bladder, the prostate, and seminal vesicles in men and the uterus, fallopian tubes, ovaries, anterior vaginal wall, and urethra in women. In addition, a radical cystectomy may or may not be accompanied by pelvic lymph node dissection. Radical cystectomy was once considered a procedure that seriously affected a patient's quality of life. With the creation of artificial bladders, referred to as continent reservoirs or "neobladders," that preserve voiding function, a radical cystectomy is now a far more acceptable procedure.
Segmental cystectomy (partial removal of the bladder) may be appropriate therapy in some patients with smaller cancers. In some cases, stage II bladder cancer may also be controlled by transurethral resection (TUR) if the cancer is small enough and does not extend far into the bladder wall. A TUR is an operation that is performed for both the diagnosis and management of bladder cancer. During a TUR, a urologist inserts a thin, lighted tube called a cystoscope into the bladder through the urethra to examine the lining of the bladder. The urologist can remove samples of tissue through this tube or can remove some or all of the cancer in the bladder.
Approximately 50-80% of patients with stage II - III bladder cancer are cured after undergoing a radical cystectomy.
To learn more about TUR and cystectomy, go to Surgery for Bladder Cancer
Systemic Therapy Prior to Cystectomy (Neoadjuvant Therapy)
Following a radical cystectomy, local recurrence of cancer is uncommon because the cancer and bladder are removed. Some patients however will still develop distant recurrences because undetected cancer cells called micrometastases spread to other locations in the body before the bladder was removed. Treatment with a systemic (whole-body) therapy such as chemotherapy or immunotherapy may reduce or eliminate these micrometastases and reduce the risk of cancer recurrence.
Neoadjuvant therapy refers to systemic therapy that is given before surgery. The rationale behind neoadjuvant therapy for bladder cancer is twofold. First, pre-operative treatment can shrink some bladder cancers and therefore, may allow more complete surgical removal of the cancer. Second, because systemic therapy kills undetectable cancer cells in the body, it may help prevent the spread of cancer when used initially rather than waiting for patient recovery following the surgical procedure.
A study published in the New England Journal of Medicine reported that patients with muscle-invasive bladder cancer who received chemotherapy prior to cystectomy had better survival than patients treated with cystectomy alone. (1)
Adjuvant Therapy After Surgery
Adjuvant therapy is a systemic treatment that follows surgical cystectomy. Clinical trials have compared adjuvant chemotherapy to no additional treatment in advanced bladder cancer and found that cystectomy plus adjuvant chemotherapy improves survival compared to treatment with cystectomy alone. (2) One study also suggests that adjuvant therapy should not be delayed. The study found that immediate treatment (< 90 days) from cystectomy delayed cancer progression and improved survival. (3)
Chemotherapy and Radiation Therapy for Primary Treatment
Over the past decade, there have been many clinical trials in the United States and Europe evaluating the combination of radiation and chemotherapy for initial treatment of patients with Stage II bladder cancer for the purpose of preserving the bladder. Bladder-preserving therapy is appealing because patients who achieve a complete response to treatment can often avoid additional treatment with a radical cystectomy unless they experience recurrence of their cancer.
In some clinical trials, approximately half or more of patients who were treated with bladder-preserving therapy (initial TUR of as much cancer as possible, plus chemotherapy and radiation therapy) survived cancer-free for three to four years after treatment. These results appear as good as those observed with radical cystectomy, but there have been no direct comparisons between bladder-preserving therapy and radical cystectomy. While bladder-preserving therapy has been widely adopted for the treatment of Stage II bladder cancer, some physicians still think it should be limited to clinical trials and not adopted as standard therapy.
Chemotherapy Alone as Primary Treatment
Treatment with chemotherapy alone is less effective than combined approaches to treatment. (4)
Radiation Therapy Alone as Primary Treatment
Currently, the use of radiation therapy alone has been replaced by the use of radiation therapy and chemotherapy. However, there may be some patients who cannot tolerate chemotherapy, and radiation alone could be beneficial.
To learn more, go to Radiation Therapy for Bladder Cancer.
Questions to Ask Your Physician About the Treatment of Stage II Bladder Cancer
What are the long-term results of treatment with radical cystectomy at the treating institution?
What is the quality of life with the type of artificial bladder constructed at the treating institution?
What are the long-term results of bladder-sparing treatments at the treating institution?
How will systemic therapy improve my outcome compared to treatment with surgery alone?
Strategies to Improve Treatment
Most new treatments are developed in clinical trials. Clinical trials are studies that evaluate the effectiveness of new treatment strategies. The development of more effective cancer treatment for bladder cancer requires that new and innovative therapies be evaluated in patients. Participation in a clinical trial may offer access to better treatments and advance the existing knowledge about treatment of bladder cancer. Patients who are interested in participating in a clinical trial should discuss the risks and benefits with their physician.
Adjuvant Treatment After Surgery: Clinical trials have compared adjuvant chemotherapy to no additional treatment in advanced bladder cancer and found that cystectomy plus adjuvant chemotherapy improves survival compared to treatment with cystectomy alone. Additional trials are ongoing to determine the optimal combination of medications to achieve the best outcomes with adjuvant chemotherapy and whether newer immunologic therapies can provide additional benefit either alone or in combination with chemotherapy.
Precision Cancer Medicines & Immunotherapy utilizes molecular diagnostic testing, including DNA sequencing, to identify cancer-driving abnormalities in a cancer's genome. Once a genetic abnormality is identified, a specific targeted therapy can be designed to attack a specific mutation or other cancer-related change in the DNA programming of the cancer cells. Precision cancer medicine uses targeted drugs and immunotherapies engineered to directly attack the cancer cells with specific abnormalities, leaving normal cells largely unharmed. Researchers are currently evaluating whether precision cancer immunotherapy that helps to restore the body's immune system can improve outcomes for bladder cancer when administered alone or in combination with chemotherapy in all settings.
There are several PD-1 and PD-L1 inhibitors that work in bladder cancer, and they are collectively referred to as "checkpoint inhibitors". Checkpoint inhibitors create their anti-cancer effect by blocking specific proteins, called PD-1 and PD-L1, that cancer cells use to escape an attack by the immune system. Once PD-L1 is blocked, cells of the immune system are able to identify cancer cells as a threat and initiate an attack to destroy the cancer. (5-8)
Keytruda (pembrolizumab)
Imfinzi (durvalumab)
Tecentriq (atezolizumab)
Bavencio (avelumab)
Opdivo (nivolumab)
Maintenance Bavencio Prolongs Survival: In 2017, the FDA approved Bavencio for the treatment of patients with locally advanced or metastatic bladder carcinoma who have disease progression during or following platinum-containing chemotherapy, or who have disease progression within 12 months of neoadjuvant or adjuvant treatment with platinum-containing chemotherapy. The JAVELIN Bladder 100 (NCT02603432) clinical trial compared maintenance Bavencio plus best supportive care (BSC) to BSC alone in patients with locally advanced or metastatic bladder cancer whose disease did not progress after completion of first-line platinum-containing chemotherapy. A total of 700 patients whose disease had not progressed after induction chemotherapy were treated with either Bavencio plus BSC or BSC alone.
Preliminary trial results suggest that first-line maintenance therapy with Bavencio prolonged survival duration compared to BSC alone. Full trial results will be released at an upcoming medical meeting.
Grossman HB, Natale RB, Tangen CM et al. Neoadjuvant Chemotherapy Plus Cystectomy Compared with Cystectomy Alone for Locally Advanced Bladder Cancer. New England Journal of Medicine. 2003;349:859-66.
Galsky MD, Stensland K, Moshier EL, et al. Comparative Effectiveness of Adjuvant Chemotherapy (AC) versus Observation in Patients with ≥ pT3 and/or pN+ Bladder Cancer (BCa). Presented at the 2015 Genitourinary Cancers Symposium. Journal of Clinical Oncology. 2015; 33 (supplement 7; abstract 292).
Sternberg CN, Skoneczna IA, Kerst JM, et al. Final results of EORTC intergroup randomized phase III trial comparing immediate versus deferred chemotherapy after radical cystectomy in patients with pT3T4 and/or N+ M0 transitional cell carcinoma (TCC) of the bladder. Journal of Clinical Oncology. 32:5s, 2014 (suppl; abstr 4500).
National Comprehensive Cancer Network. NCCN Clinical Practice Guidelines in Oncology.™ Bladder Cancer. V.2.2008. © National Comprehensive Cancer Network, Inc. 2008. NCCN and NATIONAL COMPREHENSIVE CANCER NETWORK are registered trademarks of National Comprehensive Cancer Network, Inc.
Merck's KEYNOTE-045 Studying KEYTRUDA® (pembrolizumab) in Advanced Bladder Cancer (Urothelial Cancer) Meets Primary Endpoint and Stops Early
United States Food and Drug Administration. (2016.) News Release. FDA approves new, targeted treatment for bladder cancer.
Durvalumab (Imfinzi)
FDA approves new, targeted treatment for bladder cancer. Accessed May 31, 2016.
After two intense months of closing festivities (of which, although it seems incredible, there are still some; you can consult them in our special closings 2018 guide), Ibiza gets ready for one of the most anticipated and absolutely fun winter parties: Halloween. A celebration that in recent years has been gaining followers, and for which people prepare absolutely delirious attire; it is the night when monsters take to the streets and the party is guaranteed! Something that, honestly, you cannot miss.
As you probably know, the name Halloween comes from the English expression "All Hallows' Eve", that is, "the eve of All Saints". Both for the night of the dead, this year on Wednesday, October 31, 2018, and for All Souls' Day on Thursday, November 1, 2018, as well as for the previous weekend, especially Saturday, October 27, 2018, there are a lot of parties for adults and children with amazing activities.
If you do not want to miss anything this Halloween 2018 in Ibiza, pay attention to this little guide, which we will keep updating with all the events as they appear, and in which you will find the most fun proposals for spending a monstrous Halloween. Creatures of the night, take Ibiza!
The Halloween Fiestuki of Piruleto, a great plan for the kids!
\section{INTRODUCTION}
Recommender systems \cite{RecSys, RecSysCACM} are one of the most prominent applications
of preference handling technology \cite{PrefHandling} and a highly active area of research.
In particular, fueled by the Netflix competition and its one million dollar prize money \cite{Netflix},
research on collaborative recommendation techniques \cite{CFRecSys} has recently made significant advances,
most notably through the introduction of \emph{factor models} \cite{KorenBellVolinskyFact, UnifiedFact}.
In collaborative recommender systems, \emph{users} repeatedly express their preferences for \emph{items},
which usually is done by giving explicit \emph{ratings} on some predefined numerical scale. This data
can be modeled using a \emph{rating matrix,} whose rows correspond to items, columns to users, and entries
to ratings. Typically, ratings matrices are very sparse, that is, only a small fraction of all possible
ratings have actually been observed. Personalized recommendations are generated by predicting
unobserved ratings from the available data and, for each user, selecting those items considered to be most appealing.
Most state-of-the-art collaborative recommendation methods---including the winner of the Netflix Prize---are
based on factor models, which are known to yield much more accurate predictions than traditional neighborhood-based methods
\cite{KorenTemporal, KorenFactNgbr, UnifiedFact, NNMFTakacs, MMMF}.
In factor models, each user and each item is represented
by a vector in some shared real coordinate space. The vectors are chosen such that each observed
rating is closely approximated by the dot product of the corresponding item and user vectors. The selection
of coordinates usually is formalized as an optimization problem. Predictions for unobserved ratings
are generated by computing the respective scalar products. Equivalently, this approach can be seen
as a factorization of the rating matrix into the product of an item matrix (whose rows are the item vectors)
and a user matrix (whose columns are the user vectors).
The success of factor models is usually attributed to the intuition that the coordinate space
used to represent items and users actually is a \emph{latent feature space}. That is, its dimensions
capture the items' perceptual properties as well as the users' preference judgments regarding
these properties. For example, when items are movies, the individual dimensions are generally thought
to measure (more or less) \enquote{obvious} features such as horror vs. romance, the level of sophistication,
or orientation towards adults. For users, each coordinate is thought to describe the relative degree of importance
attached to the respective dimension. This understanding of factor models can be found throughout the literature,
for example, in \cite{Netflix, KorenFactNgbr, KorenBellVolinskyFact, FunkSVD, NNMFTakacs}.
Although it is intuively appealing, to our knowledge, the correspondence to features has never been
systematically proven, but is only reported anecdotically. For example, Koren et al. \cite{KorenBellVolinskyFact} performed
a factorization on the Netflix movie data set and manually interpreted the first two coordinates
for selected movies as follows:
\begin{quote}
Someone familiar with the movies shown can see clear meaning in the latent factors. The first
factor has on one side lowbrow comedies and horror movies,
aimed at a male or adolescent audience,
while the other side contains drama or comedy with serious undertones and strong
female leads. The second factorization
axis has independent, critically acclaimed, quirky films
on the top, and on the bottom, mainstream formulaic films.
\end{quote}
Further evidence has been provided by Takács et al. \cite{NNMFTakacs}.
After performing a factorization of the Netflix data set, they manually assigned labels
to individual dimensions of their coordinate space, such as \emph{Legendary,}
\emph{Typical for men,} \emph{Romantic,} and \emph{NOT Monty Python}.
In this paper, we propose a \emph{systematic} method for studying the coordinate spaces derived from factor models
and apply it the MovieLens\,10M data set, a large real-world collection of movie ratings.
The main contribution of our work consists in laying important groundwork,
on which further research in recommender systems and preference handling can be build.
In particular, we see two concrete directions for future work:
\begin{itemize}
\item
First, knowing what kind of semantic information is extracted by factor models---and how
it is represented in coordinate spaces---will enable a deeper understanding of
these methods. Ultimately, these findings may lead to a more systematic development
and refinement of recommender systems. In particular, a systematic
assessment of semantic structures provides an additional way of evaluating the effectiveness
of factor-based recommenders. This would perfectly complement
traditional evaluation methods \cite{RecSysEval}, which focus on predictive accuracy.
\item
Second, we believe that factor models might be a powerful tool for automatically extracting
meaningful descriptions of otherwise hard-to-describe items such as movies or songs---particularly,
essential features of movies cannot be characterized at all by purely technical features such as runtime,
language, or release date.\footnote{A complementary approach to closing this \emph{semantic gap}
is content-based image and video retrieval \cite{ES03}.}
But given a coordinate representation of movies that matches human perception,
the full machinery developed in preference handling research can be applied \cite{PrefHandling, PrefLearning}.
For example, clustering techniques can give user an initial high-level impression of the available items,
item rankings can be learnt from ordinal preference statements \cite{RankSVM} or utilities \cite{BRV09}, and
the best items can be retrieved by means of Top-k algorithms \cite{IBS08}.
\end{itemize}
Since our primary research interest lies in applying preference-based retrieval techniques
to item collections, in this paper we will concentrate on evaluating the semantic structures
contained in the item matrix $A$. Performing a similar analysis of the user matrix $B$ may require
entirely different methods.
The paper is structured as follows: After introducing notation and reviewing
the most important factor models, we develop general guidelines on how to evaluate
coordinate spaces for semantic information. Then, we illustrate how to apply
these guidelines to the evaluation of factor spaces generated from movie rating data
and perform experiments on the MovieLens\,10M data set.
\section{PRELIMINARIES}
In the following, we use the variables $i$ and $j$ to identify items,
whereas $u$ and $v$ denote users. We are dealing with ratings given to $I$ items
by $U$ users. Let $R = (r_{i, u}) \in (\mathbb{R} \cup \{\emptyset\})^{I \times U}$ be the
corresponding rating matrix, where $r_{i, u} = \emptyset$, if item $i$ has not been rated by user $u$;
otherwise, $r_{i, u}$ expresses the strength of user $u$'s preference for item $i$.
Ratings are usually limited to a fixed integer scale (for example, one to ten stars).
Moreover, $\mathcal{R} = \bigl\{(i, u)\,|\,r_{i, u} \neq \emptyset\bigr\}$ is
the set of all item--user pairs for which ratings are known. Let $n$ be the total number
of ratings observed (the cardinality of $\mathcal{R}$). Typically, $n$ is very small compared
to the number of possible ratings $I \cdot U$ (for example, in the Netflix data set it is $\frac{n}{I \cdot U} \approx 1.4\%$).
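To make this notation concrete, the following sketch (in Python with NumPy; the toy ratings are purely hypothetical) stores the observed ratings as (item, user, rating) triples and computes the fill rate $n / (I \cdot U)$:

```python
import numpy as np

# Hypothetical toy data: observed ratings as (item, user, rating) triples,
# i.e., the observed pairs together with their rating values.
ratings = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 2.0), (2, 2, 1.0)]
I, U = 3, 3  # number of items and users

# Dense rating matrix; NaN plays the role of the "empty" symbol.
R = np.full((I, U), np.nan)
for i, u, r in ratings:
    R[i, u] = r

n = len(ratings)          # total number of observed ratings
fill_rate = n / (I * U)   # n / (I * U); about 1.4% for the Netflix data set
```

In realistic data sets the triple representation is the natural one, since materializing the dense matrix would mostly store empty entries.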
Given some target dimensionality $d$, the basic idea underlying factor models is
to find matrices $A = (a_{i, r}) \in \mathbb{R}^{I \times d}$ and $B = (b_{r, u}) \in \mathbb{R}^{d \times U}$
such that their product $\hat{R} = A \cdot B$ closely resembles $R$ on all known entries.
To quantify this notion of \enquote{close resemblance,} the sum of squared errors (SSE)
is popularly chosen. The SSE difference between the rating matrix $R$ and its estimation $\hat{R} = (\hat{r}_{i, u})$
is defined as
\begin{displaymath}
\text{SSE}\bigl(R, \hat{R}\bigr) = \sum_{(i, u) \in \mathcal{R}} \bigl(r_{i, u} - \hat{r}_{i, u}\bigr)^2.
\end{displaymath}
Factor models are typically formulated as optimization problems over $A$ and $B$, in which the SSE (or some
other measure) is to be minimized.
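Since only observed entries enter the SSE, it is convenient to compute it directly from the triple representation. A minimal sketch (Python/NumPy; the matrices and ratings are hypothetical toy data):

```python
import numpy as np

def sse(ratings, A, B):
    """Sum of squared errors over the observed entries only."""
    return sum((r - A[i] @ B[:, u]) ** 2 for i, u, r in ratings)

# Tiny hypothetical example with d = 2 latent dimensions.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # item matrix, rows are item vectors
B = np.array([[3.0, 1.0],
              [2.0, 4.0]])            # user matrix, columns are user vectors
ratings = [(0, 0, 3.0), (1, 1, 4.0), (0, 1, 2.0)]

error = sse(ratings, A, B)
```

Only the observed pairs contribute to the sum, mirroring the restriction of the SSE to the set of known ratings in the definition above.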
Probably the most popular factor model is Brandyn Webb's regularized SVD model \cite{KorenBellVolinskyFact, FunkSVD},
in which $A$ and $B$ are defined as the solution of the least squares problem
\begin{displaymath}
\min_{A, B}\quad\text{SSE}\bigl(R, A \cdot B\bigr) + \lambda\hspace{-0.3em} \sum_{(i, u) \in \mathcal{R}} \sum_{r = 1}^d \left(a_{i, r}^2 + b_{r, u}^2\right)\text{.}
\end{displaymath}
Here, $\lambda \geq 0$ is a regularization constant used to avoid overfitting.
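The regularized SVD problem is commonly attacked with stochastic gradient descent over the observed ratings. The following sketch illustrates this (Python/NumPy; the learning rate, regularization constant, and epoch count are illustrative choices, not values taken from the literature):

```python
import numpy as np

def fit_svd(ratings, I, U, d=2, lam=0.05, lr=0.01, epochs=500, seed=0):
    """Stochastic gradient descent for the regularized SVD objective."""
    rng = np.random.default_rng(seed)
    A = 0.1 * rng.standard_normal((I, d))   # item vectors as rows
    B = 0.1 * rng.standard_normal((d, U))   # user vectors as columns
    for _ in range(epochs):
        for i, u, r in ratings:
            e = r - A[i] @ B[:, u]          # error on this observed rating
            a_old = A[i].copy()
            A[i]    += lr * (e * B[:, u] - lam * A[i])
            B[:, u] += lr * (e * a_old   - lam * B[:, u])
    return A, B

# Hypothetical toy data for I = 2 items and U = 2 users.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]
A, B = fit_svd(ratings, I=2, U=2)  # afterwards, A @ B approximates R
```

Each update moves one item vector and one user vector along the negative gradient of the squared error for a single observed rating, shrunk slightly by the regularization term.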
More advanced versions of the SVD model exclude systematic rating deviations from the factorization and model them explicitly
using new variables. Bell and Koren \cite{SVDB} propose to estimate rating $r_{i, u}$ by
\begin{displaymath}
\hat{r}_{i, u} = \mu + \delta_i + \delta_u + \sum_{r = 1}^d a_{i, r} b_{r, u}\text{,}
\end{displaymath}
where the constant $\mu$ denotes the mean of all observed ratings; $\delta_i$ and $\delta_u$ are $I + U$ new model parameters expressing
systematic item and user deviations from $\mu$. Again, the parameters are chosen according to a regularized
least squares problem:
\begin{displaymath}
\min_{A, B, \delta_\star}\quad\text{SSE}\bigl(R, \hat{R}\bigr) + \lambda\hspace{-0.3em} \sum_{(i, u) \in \mathcal{R}} \left(\sum_{r = 1}^d \left(a_{i, r}^2 + b_{r, u}^2\right) + \delta_i^2 + \delta_u^2\right)\text{.}
\end{displaymath}
The rationale underlying this approach---which we refer to as $\delta$-SVD in the following---is
that the removal of item- and user-specific general trends from the factorization
allows to focus on more sophisticated rating patterns.
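The $\delta$-SVD prediction rule translates directly into code; a sketch (Python/NumPy, with purely hypothetical parameter values):

```python
import numpy as np

def predict(mu, delta_item, delta_user, A, B, i, u):
    """delta-SVD estimate: mu + delta_i + delta_u + <a_i, b_u>."""
    return mu + delta_item[i] + delta_user[u] + A[i] @ B[:, u]

# Hypothetical parameters for I = 2 items, U = 2 users, d = 2.
mu = 3.5                             # mean of all observed ratings
delta_item = np.array([0.5, -0.25])  # systematic item deviations
delta_user = np.array([-0.1, 0.3])   # systematic user deviations
A = np.array([[1.0, 0.2], [0.1, -0.4]])
B = np.array([[0.3, -0.2], [0.1, 0.5]])

r_hat = predict(mu, delta_item, delta_user, A, B, 0, 1)
```

The dot product only has to account for the structure left over after the global mean and the item and user deviations have been removed.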
The third basic factor model being relevant to our work performs a non-negative factorization of the rating matrix \cite{NNMFTakacs}.
It is identical to the regularized SVD model up to the additional constraint that all entries of $A$ and $B$
must be non-negative. Extending this model by explicit item and user deviations is not reasonable since this
would require negative entries in $A$ and $B$ to approximate $R$ close enough. The non-negative matrix factorization
model aims at creating a coordinate space in which effects of different dimensions on the estimated ratings
cannot cancel out each other. Henceforth, we refer to this model as NNMF.
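One simple way to obtain non-negative factors is projected stochastic gradient descent, clipping negative entries to zero after each update. The following sketch (Python/NumPy, hypothetical toy data) uses this projection approach as an illustrative choice; it is not necessarily the specific algorithm of the cited work:

```python
import numpy as np

def fit_nnmf(ratings, I, U, d=2, lam=0.02, lr=0.01, epochs=500, seed=0):
    """SGD for the SSE objective with A and B constrained to be non-negative."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.1, 0.5, (I, d))   # non-negative initialization
    B = rng.uniform(0.1, 0.5, (d, U))
    for _ in range(epochs):
        for i, u, r in ratings:
            e = r - A[i] @ B[:, u]
            a_old = A[i].copy()
            # Gradient step followed by projection onto the non-negative orthant.
            A[i]    = np.maximum(0.0, A[i]    + lr * (e * B[:, u] - lam * A[i]))
            B[:, u] = np.maximum(0.0, B[:, u] + lr * (e * a_old   - lam * B[:, u]))
    return A, B

ratings = [(0, 0, 4.0), (0, 1, 1.0), (1, 0, 2.0), (1, 1, 3.0)]
A, B = fit_nnmf(ratings, I=2, U=2)
```

By construction, every entry of the returned matrices is non-negative, so no dimension can cancel the contribution of another in an estimated rating.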
\section{EVALUATING COORDINATE SPACES}
Given an item--feature matrix $A \in \mathbb{R}^{I \times d}$ generated by some factor model,
how can we determine whether the items' coordinates in this $d$-dimensional space resemble
a \enquote{semantically meaningful} pattern? The most straightforward approach consists in
extending and systematizing the casual investigations described in the introduction.
This could easily be done by presenting the item coordinate space to a number of different
people and asking them to label its dimensions. The correspondence between the generated item coordinates
and human perception could, for example, be assessed by measuring the degree of consensus among people
or the average time needed to come up with adequate labels.
Although this kind of investigation seems very reasonable,
it contains some severe flaws, which cannot be fixed by careful study design:
\begin{enumerate}
\item
The dimensionality chosen in most applications of factor models typically ranges between
$d = 10$ and $d = 100$. A comprehensive analysis of the resulting
data sets would require the users to comprehend high-dimensional spaces,
which is impossible even when using advanced visualization techniques.
\item
Due to hindsight bias, given enough time, users will be able to assign a fitting label to almost
any dimension of the coordinate space. Chances are good that this effect accounts
for rather questionable labels such as \emph{NOT Monty Python}.
\item
By using free association to name dimensions, the resulting collection of labels tends
to show high variability and reflects individual differences between users.
To produce statistically significant results, either the sample size must be extended
(which requires more study participants and results in higher costs), or the variability must be reduced,
for example, by training participants to use an established domain-specific vocabulary
to articulate the semantic properties they recognize in the data (which also increases time and effort).
\item
Typically, there are many near-optimal solutions to the above-mentioned optimization problems,
which can be transformed into one another by rotation of the coordinate axes. This is because,
for any invertible matrix $M \in \mathbb{R}^{d \times d}$, the solution pairs $(A, B)$ and
$(AM, M^{-1}B)$ produce the same SSE. Although regularization usually enforces the theoretical existence of
a unique optimal solution pair, in practice the enormous problem size often
allows only finding one of the many near-optimal solutions. Consequently, the direction of the coordinate axes
is completely arbitrary, which makes the task of assigning labels a hopeless undertaking.
\end{enumerate}
\subsection{Some Guidelines}
In this section, we devise a set of guidelines on which to base
more appropriate approaches to the analysis of coordinate spaces.
\begin{itemize}
\item
In view of problems
(1) and (4), we recommend avoiding any direct human interaction with \emph{item coordinates.}
Instead, human input should concentrate on describing \emph{item properties,} which in turn are
related to coordinates and compared by algorithmic means.
\item
The only effective way
to eliminate hindsight bias (2) is collecting feedback on items before generating and
presenting any information extracted by the factor models under consideration.
\item
To resolve problem (3), we primarily recommend adopting a domain-specific vocabulary that allows a
structured description of items. For example, to characterize music, the rich vocabulary
developed by allmusic\footnote{\url{http://www.allmusic.com}} seems appropriate; amongst others,
it includes very detailed information about genres, styles, moods, and connections between artists.
Since this kind of semantic information can be (or already has been) provided by a small number of experts and
is usually little prone to debate, it is easy to assemble and work with. In later stages of
analysis, unrestricted user feedback may be included to reveal the position and extent of
more fine-grained and rather subjective concepts in the coordinate space.
\end{itemize}
We also propose to apply a standardization procedure to the generated coordinate space.
This is for the following reasons: First, recall that, for any invertible matrix
$M \in \mathbb{R}^{d \times d}$, the solution pairs $(A, B)$ and $(AM, M^{-1}B)$ are equivalent;
to enable comparisons between different factor models and even different runs of
the same optimization algorithms, we need to define one solution pair as the standard representation.
Second, to enable a better separation of different effects in the data, the axes of the item (and user) coordinate space
should be chosen to be orthogonal. Moreover, axes should be ordered according to their relative importance
(measured by the variance of data along each axis); that is, the first dimension should be assigned to the most
important axis.
The perfect tool for matching these requirements is the singular value decomposition, a well-known matrix factorization technique
from linear algebra, which inspired the SVD factor model. It is based on the fact that, for any rank-$d$ matrix $X \in \mathbb{R}^{I \times U}$,
there is a column-orthonormal matrix $U \in \mathbb{R}^{I \times d}$, a diagonal matrix $S \in \mathbb{R}^{d \times d}$,
and a row-orthonormal matrix $V \in \mathbb{R}^{d \times U}$ such that $X = USV$. By reordering rows and columns,
$S$ can be chosen such that its diagonal elements are ordered by decreasing magnitude. Moreover, the diagonal matrix $S$
can be eliminated from this factorization by setting $X = U'V'$, where $U' = U S^{\frac{1}{2}}$ and $V' = S^{\frac{1}{2}} V$.
The matrices $U'$ and $V'$ are unique if all diagonal elements of $S$ are mutually distinct.
In our setting, we will apply the singular value decomposition to transform the product $X = A \cdot B$ into
a new product $A' \cdot B'$ as just described. Since rating data tends to be very \enquote{noisy,} we can safely assume
that $(A', B')$ is a unique representation of $(A, B)$; we did not encounter any counterexamples during our
experiments on large real-world rating data. Moreover, any equivalent pair $(AM, M^{-1}B)$ also gets
transformed into $(A', B')$, which we define as the corresponding standard representation.
It can be computed efficiently using the product decomposition algorithm proposed in \cite[Sec.\,3]{ProductSVD}.
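A minimal numpy sketch of this standardization step (our own QR-based variant of a product SVD, not necessarily the exact algorithm of the cited work) looks as follows; it never forms the full $I \times U$ product:

```python
import numpy as np

def standardize(A, B):
    """Standard representation (A', B') of a factorization A @ B.

    Uses QR decompositions of the factors plus a small d x d SVD,
    so the full item-user product is never formed. Singular values
    come out in decreasing order and are split evenly between the
    two sides.
    """
    Qa, Ra = np.linalg.qr(A)               # A = Qa Ra, Ra is d x d
    Qb, Rb = np.linalg.qr(B.T)             # B^T = Qb Rb
    U, s, Vt = np.linalg.svd(Ra @ Rb.T)    # SVD of the small core matrix
    s_half = np.sqrt(s)
    A_std = (Qa @ U) * s_half              # A' = Qa U S^(1/2)
    B_std = (s_half[:, None] * Vt) @ Qb.T  # B' = S^(1/2) V^T Qb^T
    return A_std, B_std
```

Any equivalent pair $(AM, M^{-1}B)$ maps to the same $(A', B')$, up to simultaneous sign flips of corresponding columns of $A'$ and rows of $B'$.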
\subsection{Use Case: Movie Ratings}
Based on these guidelines, we now present a concrete method for performing
a basic evaluation of coordinate spaces generated from movie ratings.
Our focus rests on immediate applicability, so we relate the item coordinates
to reference data that is already available.
The reference source for all kinds of movie-related information is IMDb,
the Internet Movie Database\footnote{\url{http://www.imdb.com}}, which currently
covers about 1.6 million titles. Most of IMDb's data has been created with the
help of its users. Therefore, a large proportion of the available content can freely
be downloaded and used for non-commercial purposes\footnote{\url{http://www.imdb.com/interfaces\#plain}}.
Based on this comprehensive data, one should be able to cross-reference any collection of movie ratings with IMDb.
For the semantic evaluations we are going to perform, the following attributes of titles
may prove helpful: genres, certifications (e.g., USA:PG for \emph{parental guidance suggested}),
year of release, and plot keywords. To illustrate the general procedure,
we will only exploit genre information in this paper. Extending our method to
other types of semantic information is straightforward. Checking the correspondence between
genres and item coordinates also makes up a good first test of whether
at least some basic semantic properties of movies are represented in coordinate spaces, which
is exactly the purpose of the current work.
IMDb recognizes 28 different genres, from \emph{Action} to \emph{Western,} where each movie may belong
to multiple genres. The assignment of genres is done by IMDb's expert staff in cooperation
with IMDb users. To enforce consistency, this process is based upon a collection of publicly available
guidelines\footnote{\url{http://www.imdb.com/updates/guide/genres}}. Therefore, this data source
matches the requirements developed in the previous section.
To analyze whether the distribution of genres in coordinate space displays any
significant pattern, we turn to established classification algorithms, which have been
explicitly designed to exploit relevant patterns in the data, if any exist.
In particular, we propose to measure the degree of adherence to a pattern by
the classification accuracy shown by these algorithms when predicting the genre of movies
based on their coordinates. In essence, we transform our analysis into
a sequence of binary classification problems (one for each genre), which enables us to build on solid grounds.
Following the common methodology, we use cross-validation; that is, accuracy is measured
on a data set that is independent of the one used to train the classifier.
By applying proven techniques to counter overfitting, our approach also overcomes any possible problems
related to hindsight bias.
For a start, we selected two popular classification algorithms, which are able to detect
different kinds of patterns in the data:
support vector machines and kNN-classifiers.
Support vector machines will be used in two different flavors: first, using a linear kernel (referred to as SVM-lin),
and second, using a Gaussian radial basis function kernel (SVM-RBF). Linear support vector machines
will show a high classification accuracy if most movies of the respective genre are grouped at one
side of the data set, which can be separated from all remaining movies by a hyperplane. For example, this can be used to test
the hypothesis that there exists a direction in the coordinate space along which, say, the amount of action
increases monotonically. In contrast, the SVM-RBF classifier detects whether groups of movies with the same genre
tend to be located in close vicinity.
kNN-classifiers perform well if the distance between movies having the same genre typically is smaller
than the distance to movies not having this genre. Therefore, they can be used to check whether genres form
spatially separated patterns in coordinate space. Since factor models are not based on a notion of proximity,
it is not clear what measure of distance suits factor models best. We will try out the following four measures:
Euclidean distance, standardized Euclidean distance (where, to ensure equally weighted dimensions, coordinate values
are divided by the standard deviation of the data with respect to each dimension), negative scalar product
(which essentially adapts the method of rating prediction to measure distance), and cosine distance
(which is monotonically related to the angle between two vectors).
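Written out, the four measures look as follows (a minimal sketch; `sigma` denotes the precomputed per-dimension standard deviation of the item coordinates, and the cosine measure is turned into a distance as one minus the similarity, which preserves the monotone relation to the angle):

```python
import numpy as np

def euclidean(x, y):
    return float(np.linalg.norm(x - y))

def std_euclidean(x, y, sigma):
    # sigma: per-dimension standard deviation of the item coordinates,
    # so that every dimension contributes with equal weight
    return float(np.linalg.norm((x - y) / sigma))

def neg_scalar(x, y):
    # adapts the rating-prediction rule (a scalar product) to a distance
    return -float(x @ y)

def cosine_dist(x, y):
    # one minus cosine similarity; monotonically related to the angle
    return 1.0 - float(x @ y) / float(np.linalg.norm(x) * np.linalg.norm(y))
```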
To evaluate the true benefit of coordinate spaces generated from factor models,
we propose the following baseline, which is derived from traditional neighborhood-based
recommendation methods \cite{ItemItemSim} and constructed as follows:
First, for any items $i$ and $j$, we compute their Pearson correlation coefficient
\begin{displaymath}
\varrho_{i, j} = \frac{\sum_{u \in \mathcal{R}_{i, j}} (r_{i, u} - \mu_{i, j}) (r_{j, u} - \mu_{j, i})}{\sqrt{\sum_{u \in \mathcal{R}_{i, j}} (r_{i, u} - \mu_{i, j})^2} \sqrt{\sum_{u \in \mathcal{R}_{i, j}} (r_{j, u} - \mu_{j, i})^2}},
\end{displaymath}
where $\mathcal{R}_{i, j}$ is the set of all users who rated both $i$ and $j$, and
$\mu_{i, j}$ is the mean rating given to item $i$ by users who rated both $i$ and $j$.
If $\mathcal{R}_{i, j}$ is empty, then $\varrho_{i, j}$ is undefined. The
Pearson correlation coefficient $\varrho_{i, j}$ measures the tendency of
users to rate items $i$ and $j$ similarly. To avoid biased estimates in cases where
$n_{i, j} = \lvert\mathcal{R}_{i, j}\rvert$ is very small, we derive a new measure of similarity
\begin{displaymath}
s_{i, j} = \frac{n_{i, j}}{n_{i, j} + \lambda} \cdot \varrho_{i, j}
\end{displaymath}
from $\varrho_{i, j}$ by shrinking towards zero \cite{KorenFactNgbr}. Here,
$\lambda \geq 0$ is a regularization parameter. Finally, we convert these similarities
into distances by applying a logarithmic transformation:
\begin{displaymath}
d_{i, j} = -\ln\left(\frac{1 + s_{i, j}}{2}\right).
\end{displaymath}
To derive a $d$-dimensional coordinate space in which items $i$ and $j$ approximately have distance $d_{i, j}$,
we use metric multidimensional scaling \cite{MDS}.
Since neighborhood-based recommendation methods are usually outperformed by factor models,
we expect our baseline coordinate space to be far inferior to those constructed using factor models.
We refer to our baseline model as MDS.
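The three formulas above combine into a single item--item distance as follows (a minimal sketch; `R` is an items-by-users array with `NaN` marking unrated entries, and the pairwise means are taken over the common raters, as in the definition of $\mu_{i,j}$):

```python
import numpy as np

def shrunk_distance(R, i, j, lam=20.0):
    """Distance between items i and j for the MDS baseline.

    R: items x users rating array, NaN = unrated.
    lam: shrinkage regularization parameter (lambda >= 0).
    """
    both = ~np.isnan(R[i]) & ~np.isnan(R[j])     # users who rated both
    n = int(both.sum())
    if n == 0:
        return np.nan                            # treated as missing later
    ri, rj = R[i, both], R[j, both]
    ri_c, rj_c = ri - ri.mean(), rj - rj.mean()  # subtract mu_{i,j}, mu_{j,i}
    denom = np.sqrt((ri_c ** 2).sum()) * np.sqrt((rj_c ** 2).sum())
    rho = 0.0 if denom == 0 else float((ri_c * rj_c).sum() / denom)
    s = n / (n + lam) * rho                      # shrink towards zero
    return -np.log((1.0 + s) / 2.0)              # logarithmic transform
```

The resulting distance matrix is then fed into metric multidimensional scaling.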
\section{EXPERIMENTS ON MOVIELENS\,10M}
We applied our approach to the MovieLens\,10M data set\footnote{\url{http://www.grouplens.org/node/73}},
which consists of about 10 million ratings collected by the online movie recommender service
MovieLens\footnote{\url{http://www.movielens.org}}. After postprocessing the original data (removing one non-existing movie,
merging several duplicate movie entries, and removing movies that received fewer than 20 ratings),
our new data set consists of 9{,}984{,}419 ratings of 8{,}938 movies provided by 69{,}878 users.
The ratings use a 10-point scale from 0.5 (worst) to 5 (best). Each user contributed at least 14 ratings.
Our analysis requires the genre information maintained by IMDb, so
we had to map each movie in the data set to its corresponding IMDb entry.
This task was simplified considerably by the fact that all items in the MovieLens\,10M data set
are relatively well-known movies produced for cinema.\footnote{This is the reason why
we did not consider the Netflix data set. It consists of all kinds of
DVD titles, which often lack a clear correspondence in IMDb.}
We mapped about 8000 movies automatically by comparing titles and release years; the remaining
movies have been assigned manually or semi-automatically.
To avoid the problem of learning from very small samples for now, we did not use all 28 genres distinguished by IMDb.
Instead, we took only those genres into consideration that have been assigned to at least
5\% of all movies in our data set. Table\,\ref{genres} lists all remaining 13 genres and
their relative frequencies. On average, 2.3 genres have been assigned to each movie.
\begin{table}
\centering
\begin{tabular}{@{}lrclr@{}} \toprule
Genre & \% & \hspace{5em} & Genre & \%\\ \midrule
Action & 16.0 & & Horror & 10.1\\
Adventure & 12.7 & & Mystery & 9.1\\
Comedy & 38.2 & & Romance & 25.2\\
Crime & 16.6 & & Sci-Fi & 8.6\\
Drama & 54.6 & & Thriller & 24.2\\
Family & 8.4 & & War & 5.2 \\
Fantasy & 8.3\\ \bottomrule
\end{tabular}
\caption{Relative frequencies of genres.}
\label{genres}
\end{table}
\subsection{Generating Coordinate Spaces}
We implemented each of the four coordinate extraction methods in MATLAB
and executed them on our rating data.
For SVD, $\delta$-SVD, and NNMF, we followed the literature
and used an optimization procedure based on gradient descent; to reduce computation time,
we applied the Hessian speedup proposed in \cite{RaikoSpeedup}. Adapting
the common methodology, we chose the regularization parameter $\lambda$ by cross-validation
such that the SSE is minimized on randomly chosen test sets. We ended up with
a value of $\lambda = 0.04$ for each of the three algorithms.
Since optimization by gradient descent is known to get stuck in local extrema
of the function to be minimized, we ran the three procedures at least three times,
each with different, randomly chosen initial coordinates.
For each result, we computed the standardized solution pair as described in the previous
section. We found that the solutions generated by each extractor do not differ
significantly after standardization. This indicates that our coordinate spaces
match the unique solution of each optimization problem.
For our MDS procedure, we used the regularization constant $\lambda = 20$, which we determined
by adapting the recommendation Koren gave for the Netflix data set \cite{KorenFactNgbr}.
The coordinates have been generated by MATLAB's \texttt{mdscale} function using the
metric stress criterion. Since in our data set about 14 percent of all movie--movie pairs
had no raters in common, we treated the respective entries of the distance matrix as missing data.
To measure the effect of dimensionality, we generated three different coordinate spaces
with each extractor by varying the parameter $d$. We chose $d = 10$, $d = 50$, and $d = 100$.
\subsection{Applying the Classifiers}
In total, we used 14 different classifiers to evaluate each of the 12 coordinate spaces
with respect to each of the 13 genres.
We implemented the two support vector machine classifiers as soft-margin SVMs with parameters
$C = 4$ and (for SVM-RBF) $\gamma = 0.1$, which have been determined by cross-validation
to maximize classification accuracy.
Each of the four different kNN-classifiers was applied to the data sets with three different
choices of $k$. To measure whether movies of the same genre tend to occur in larger groups,
we chose $k = 1$, $k = 3$, and $k = 9$. In the following, we refer to these 12 classifiers
as $k$NN-Eucl, $k$NN-sEucl, $k$NN-scal, and $k$NN-cos.
To enable comparisons among classifiers and data sets, we generated 20 pairs of training and test sets,
each by randomly choosing 40\% of all movies for training and 10\% (of the remaining movies) for testing.
For each of the resulting 2184 combinations of coordinate spaces, classifiers, and genres, we used the same
20 pairs of item sets for training and testing. In each case, we measured the classification accuracy.
All results reported below are averages over the 20 runs.
\subsection{Results}
Probably the most popular way of assessing a classifier's performance is measuring its accuracy, that is,
the fraction of test items which have been classified correctly. However, in our setting, this measure
is not very helpful. To see this, recall that the relative frequency of genres is very different in our
data set. For example, over half of all movies belong to the genre \emph{Drama,} but there are only about 5\% \emph{War} movies.
While attaining an accuracy of 95\% would be significant for the genre \emph{Drama,} it can easily be achieved
for the genre \emph{War} just by classifying any movie as \emph{non-War}. To enable comparisons
across genres, we propose to use a modified version of Cohen's kappa measure.
Any result of a binary classification task can be described by four numbers, which sum up to $1$:
the fraction of true positives ($\alpha_\text{tp}$), the fraction of false positives ($\alpha_\text{fp}$),
the fraction of false negatives ($\alpha_\text{fn}$), and the fraction of true negatives ($\alpha_\text{tn}$).
Accuracy is defined as $acc = \alpha_\text{tp} + \alpha_\text{tn}$. Moreover, the accuracy of a static majority-based
classifier (which always returns the label of the more frequent class) is
$acc_\text{maj} = \max\{\alpha_\text{tp} + \alpha_\text{fn}, \alpha_\text{fp} + \alpha_\text{tn}\}$.
We propose to use this kind of naive classifier for normalizing the accuracy and define
$\kappa = (acc - acc_\text{maj}) / (1 - acc_\text{maj})$. This measure expresses a classifier's relative
performance with respect to the majority-based classifier. If $acc = 1$ then $\kappa = 1$,
if $acc > acc_\text{maj}$, then $\kappa > 0$, if $acc = acc_\text{maj}$, then $\kappa = 0$, and if
$acc < acc_\text{maj}$, then $\kappa < 0$.
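The measure is straightforward to compute from the four fractions; the sketch below assumes that both classes actually occur in the test set, so that $acc_\text{maj} < 1$:

```python
def kappa(tp, fp, fn, tn):
    """Majority-normalized accuracy from the four outcome fractions.

    tp, fp, fn, tn must sum to 1; both classes are assumed to occur,
    so acc_maj < 1 and the denominator is nonzero.
    """
    acc = tp + tn                          # plain accuracy
    acc_maj = max(tp + fn, fp + tn)        # static majority-based classifier
    return (acc - acc_maj) / (1.0 - acc_maj)
```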
By measuring accuracy in terms of $\kappa$, we can average classification performance over different genres.
Tables\,\ref{kappas1}--\ref{kappas4} report the mean $\kappa$s over all 260 classification results obtained
for each combination of coordinate space and classifier type. All entries larger than 0.10 have been
marked in boldface. We can observe the following:
\begin{table}
\centering
\begin{tabular}{@{}rrrr@{}} \toprule
& SVD-10 & SVD-50 & SVD-100\\ \midrule
SVM-lin & 0.08 & \textbf{0.18} & \textbf{0.20}\\
SVM-RBF & \textbf{0.15} & \textbf{0.23} & \textbf{0.25}\\
1NN-Eucl & $-$0.24 & $-$0.21 & $-$0.19\\
3NN-Eucl & 0.01 & 0.05 & 0.04\\
9NN-Eucl & \textbf{0.12} & \textbf{0.16} & \textbf{0.14}\\
1NN-sEucl & $-$0.25 & $-$0.27 & $-$0.31\\
3NN-sEucl & 0.01 & 0.00 & $-$0.06\\
9NN-sEucl & \textbf{0.12} & \textbf{0.12} & 0.04\\
1NN-scal & $-$0.42 & $-$0.30 & $-$0.30\\
3NN-scal & $-$0.16 & $-$0.03 & $-$0.03\\
9NN-scal & 0.01 & \textbf{0.11} & \textbf{0.12}\\
1NN-cos & $-$0.25 & $-$0.18 & $-$0.16\\
3NN-cos & 0.00 & 0.06 & 0.06\\
9NN-cos & \textbf{0.12} & \textbf{0.17} & \textbf{0.16}\\ \bottomrule
\end{tabular}
\caption{Kappas for coordinates generated by SVD.}
\label{kappas1}
\end{table}
\begin{table}
\centering
\begin{tabular}{@{}rrrr@{}} \toprule
& $\delta$-SVD-10 & $\delta$-SVD-50 & $\delta$-SVD-100\\ \midrule
SVM-lin & 0.07 & \textbf{0.16} & \textbf{0.18}\\
SVM-RBF & \textbf{0.13} & \textbf{0.20} & \textbf{0.23}\\
1NN-Eucl & $-$0.26 & $-$0.26 & $-$0.26\\
3NN-Eucl & $-$0.01 & 0.01 & $-$0.02\\
9NN-Eucl & \textbf{0.11} & \textbf{0.12} & 0.08\\
1NN-sEucl & $-$0.26 & $-$0.29 & $-$0.36\\
3NN-sEucl & 0.00 & $-$0.03 & $-$0.11\\
9NN-sEucl & \textbf{0.11} & 0.09 & $-$0.01\\
1NN-scal & $-$0.41 & $-$0.28 & $-$0.22\\
3NN-scal & $-$0.06 & 0.02 & 0.06\\
9NN-scal & 0.05 & \textbf{0.13} & \textbf{0.16}\\
1NN-cos & $-$0.26 & $-$0.19 & $-$0.16\\
3NN-cos & 0.00 & 0.07 & 0.09\\
9NN-cos & \textbf{0.12} & \textbf{0.18} & \textbf{0.19}\\ \bottomrule
\end{tabular}
\caption{Kappas for coordinates generated by $\delta$-SVD.}
\label{kappas2}
\end{table}
\begin{table}
\centering
\begin{tabular}{@{}rrrr@{}} \toprule
& NNMF-10 & NNMF-50 & NNMF-100\\ \midrule
SVM-lin & 0.02 & 0.05 & \textbf{0.11}\\
SVM-RBF & 0.02 & 0.09 & \textbf{0.14}\\
1NN-Eucl & $-$0.56 & $-$0.47 & $-$0.41\\
3NN-Eucl & $-$0.20 & $-$0.16 & $-$0.13\\
9NN-Eucl & $-$0.02 & 0.01 & 0.02\\
1NN-sEucl & $-$0.56 & $-$0.47 & $-$0.45\\
3NN-sEucl & $-$0.20 & $-$0.16 & $-$0.16\\
9NN-sEucl & $-$0.02 & 0.01 & 0.00\\
1NN-scal & $-$0.37 & $-$0.34 & $-$0.34\\
3NN-scal & $-$0.11 & $-$0.10 & $-$0.09\\
9NN-scal & $-$0.02 & 0.00 & 0.02\\
1NN-cos & $-$0.56 & $-$0.45 & $-$0.41\\
3NN-cos & $-$0.20 & $-$0.15 & $-$0.13\\
9NN-cos & $-$0.03 & 0.02 & 0.03\\ \bottomrule
\end{tabular}
\caption{Kappas for coordinates generated by NNMF.}
\label{kappas3}
\end{table}
\begin{table}
\centering
\begin{tabular}{@{}rrrr@{}} \toprule
& MDS-10 & MDS-50 & MDS-100\\ \midrule
SVM-lin & $-$0.16 & \textbf{0.15} & \textbf{0.19}\\
SVM-RBF & 0.03 & \textbf{0.16} & \textbf{0.17}\\
1NN-Eucl & $-$0.29 & $-$0.19 & $-$0.18\\
3NN-Eucl & $-$0.01 & 0.06 & 0.06\\
9NN-Eucl & \textbf{0.13} & \textbf{0.18} & \textbf{0.18}\\
1NN-sEucl & $-$0.29 & $-$0.23 & $-$0.29\\
3NN-sEucl & $-$0.01 & 0.05 & $-$0.01\\
9NN-sEucl & \textbf{0.13} & \textbf{0.17} & \textbf{0.12}\\
1NN-scal & $-$0.29 & $-$0.19 & $-$0.18\\
3NN-scal & $-$0.01 & 0.07 & 0.08\\
9NN-scal & \textbf{0.12} & \textbf{0.18} & \textbf{0.18}\\
1NN-cos & $-$0.28 & $-$0.18 & $-$0.16\\
3NN-cos & 0.00 & 0.07 & 0.08\\
9NN-cos & \textbf{0.13} & \textbf{0.19} & \textbf{0.19}\\ \bottomrule
\end{tabular}
\caption{Kappas for coordinates generated by MDS.}
\label{kappas4}
\end{table}
\begin{itemize}
\item
The coordinate space derived by NNMF does not contain much helpful information about
genres that can be exploited by our classifiers. The performance in all other spaces
is significantly better.
\item
Except for $k$NN-sEucl, classification performance generally improves with increasing dimensionality.
However, the difference in performance between $d = 10$ and $d = 50$ is much larger
than the one between $d = 50$ and $d = 100$. This indicates that our ordering of dimensions
during standardization indeed captures some notion of relative importance. This is probably
also the reason for $k$NN-sEucl's decreasing performance with growing $d$; treating all dimensions
equally seems to overweight information from dimensions at the end of the list.
\item
The SVM-RBF classifier slightly outperforms SVM-lin, but is comparable in
performance to 9NN-Eucl, 9NN-scal, and 9NN-cos. This indicates
that genres indeed tend to cluster in coordinate spaces, even with
respect to different measures of distance.
\item
The $k$NN-classifiers perform poorly for $k = 1$ and $k = 3$, which indicates
that, although movies of the same genre roughly occur in clusters,
each cluster usually also contains movies that have not been assigned the respective genre.
\item
In contrast to our expectations, the performance in coordinate spaces generated by factor models
is comparable to the performance shown on our baseline coordinate space MDS.
\end{itemize}
Moreover, the results suggest that the performance of $k$NN-classifiers might increase even further for larger
values of $k$. To check this, we performed some preliminary tests with $k \approx 20$, but were not able
to confirm this conjecture.
We also investigated the influence of individual genres on classification performance;
as an example, the results for SVM-RBF are reported in Table\,\ref{kappasgen}. Entries larger than 0.20 are
marked in boldface. We can see that some genres, such as \emph{Horror} and \emph{Drama,} can clearly be identified by the classifier,
while others cannot. We had expected much better performance on clear-cut genres such as \emph{War.}
\begin{table}
\centering
\begin{tabular}{@{}rrrrr@{}} \toprule
& SVD-100 & $\delta$-SVD-100 & NNMF-100 & MDS-100\\ \midrule
Action & \textbf{0.34} & \textbf{0.31} & \textbf{0.22} & \textbf{0.22}\\
Adventure & 0.13 & 0.12 & 0.08 & 0.00\\
Comedy & \textbf{0.45} & \textbf{0.42} & \textbf{0.25} & \textbf{0.42}\\
Crime & 0.08 & 0.06 & $-$0.01 & 0.00\\
Drama & \textbf{0.47} & \textbf{0.43} & \textbf{0.37} & \textbf{0.44}\\
Family & \textbf{0.43} & \textbf{0.46} & \textbf{0.31} & \textbf{0.34}\\
Fantasy & 0.03 & 0.05 & 0.01 & 0.00\\
Horror & \textbf{0.56} & \textbf{0.54} & \textbf{0.31} & \textbf{0.61}\\
Mystery & 0.06 & 0.04 & $-$0.00 & 0.00\\
Romance & 0.11 & 0.10 & $-$0.00 & 0.00\\
Sci-Fi & \textbf{0.23} & 0.20 & 0.09 & 0.00\\
Thriller & \textbf{0.31} & \textbf{0.27} & 0.14 & 0.15\\
War & 0.05 & 0.06 & $-$0.00 & 0.00\\ \bottomrule
\end{tabular}
\caption{Kappas for SVM-RBF by genre.}
\label{kappasgen}
\end{table}
In summary, these preliminary experiments suggest that the coordinate spaces derived by SVD, $\delta$-SVD,
and MDS indeed contain some significant semantic information about the represented movies.
However, the situation is far from being as clear as the literature claims.
\section{CONCLUSION AND OUTLOOK}
In the current paper, we presented a general methodology for systematically analyzing
whether coordinate spaces generated from factor models contain semantic information,
as it is commonly claimed. We applied our approach to the MovieLens\,10M data set
and found initial evidence for this claim.
Our results encourage us to follow this line of research in several ways.
First, we would like to investigate whether our results also carry over to
more advanced and complex factor models, which have been proposed very recently \cite{KorenSVD++, KorenFactNgbr}.
It would also be interesting to see what more traditional methods such as multidimensional
scaling can contribute to the problem of feature extraction from rating data,
since our results indicate that these methods can successfully be modified
for use in our new setting.
Miami Hurricane, October 22, 1971
UM Sports
Kelly Cochran, once a superstar, now a bench-warmer, see p. 10.
Exclusive
Want to become a genius? See p. 3 for story.
Diane Tina
Homecoming Queen Election On Thursday
By F. J. MIZZLES, JR.
Hurricane Reporter
"Impact '71 is less than a week away and the planned activities are in full swing.
Eighty-seven contestants entered the "Draft the Queen" contest and only twelve are left," Hope Kourland, chairman of the queen's contest, said.
Miss Kourland said that the students selected five finalists out of the 12 semi-finalists in an election for the Homecoming Queen yesterday in the Student Union Breezeway.
"There were more freshmen in the contest than from any other classification. There are three semifinalists from every class. They were judged on beauty and poise," Miss Kourland said.
The Queen will be drafted and crowned during a "unique" program planned for Thursday, October 28 on the Student Union Patio at 8 p.m. The finalists as well as the judges will all be dressed in Army uniforms since the theme is "The Army Feels the Draft."
"For the final judgment all the contestants will perform variety acts. The best act will receive the crown, provided she has excelled in other phases of competition. The theme will be 'Beat the Army,' " Miss Kourland said.
The names of the semi-finalists are: Roberta Haivey, 1968 Complex, Freshman; Peggy Litchford, Alpha Delta Pi, Freshman; Robyn Rentz,
Kappa Kappa Gamma, Freshman; Eloise Taylor, ZBT, Soph.; Linda Thompson, Chi Omega, Soph.; Holiday Jones, Kappa Kappa Gamma, Soph.; Angela Miller, ZBT, Jr.; Sunni Beakley, Pi Kappa Alpha, Jr.; Lois V. Smith, Phi Beta Sigma, Jr.; Tina Etling, Delta Gamma, Sr.; Diane Daughetee, Delta Zeta, Sr.; Debra Butler, Black Sisters, Sr.
"We feel that this contest will be better than those in the past, since the standards are higher this year than they have been in the past," Miss Kourland said.
Miss Kourland said that the judges come from the University administration and local citizens consisting of five men judges and one woman judge.
"During this time there will also be a pep-rally and a boat burning for the UM-Army game. We hope that this occasion will instill in the students the true spirit of UM," Bill Hartman, chairman of homecoming said.
Hartman said that after the rally we will hold our candlelight procession around the lake. Students and Alumni are welcome. "We hope that all the lighted candles will prove to be a beautiful and moving experience," Hartman said.
"I would like to remind everyone that the Homecoming parade will be on Wednesday night October 27, at
7:30 p.m.," Hartman said. "We have more of everything in this parade and expect it to be one of our best. We are expecting a large turnout for it."
Elevated Campus Crime Rate Indicated By Students' Fears
By KINGSLEY RUSH and ERIC BALOFF
Of The Hurricane Staff
Fear for personal safety and fear of robbery and theft appears to be on the minds of a majority of UM students this year.
In our weekly poll, 100 UM students were polled on controversial questions.
This is how it went this week:
When on campus, do you feel safe from personal harm, robbery and theft?
YES 32%, NO 61%, UNDECIDED 7%
Due to a rash of crime on campus, we thought it might be interesting to test the mood of the campus on this subject.
"I've walked in Central Park and felt less threatened than when I have to walk from the library to the
dorms," one New York coed said.
Most students who replied negatively seemed to have experienced some kind of theft or robbery, or knew someone who had.
In a continuation of last week's poll, we asked the following question:
If the national elections were held today, who would you vote for?
LINDSAY 36%, NIXON 43%, WALLACE 4%, UNDECIDED 17%
McGOVERN 31%, NIXON 45%, WALLACE 4%, UNDECIDED 20%
Although President Nixon is polling a majority of UM students, John V. Lindsay drew some Republicans over to the Democratic ticket.
However, the trend is still a conservative one. Of the four leading contenders from the Democratic party, Lindsay seems to be the most popular so far.
Aid Former Student
A 21-year-old former UM student facing certain death from kidney failure is in need of $5,000 to help pay for a kidney transplant.
A drive to collect money for the George Nottage Kidney Fund will start Friday in the Breezeway of the Student Union.
Nottage, who is not enrolled in UM this semester because of his illness, now spends seven hours a day, every other day, on a kidney dialysis machine which cleanses his blood.
Nottage will die if he does not continue the dialysis treatment or undergo the kidney transplant. He chose the transplant even though it is a high-risk operation because, as a young man, he feels he could not go through life living off of a machine.
Would you favor seating Formosa, Red China, or both Red China and Formosa in the United Nations?
FORMOSA 30%, RED CHINA 37%, BOTH 24%, UNDECIDED 19%
The United Nations vote on this question is rapidly approaching. The United States appears to back a "Two China" policy while Red China and Formosa have stated they will not accept a seat if they must share it.
"Both Chinas represent a certain group of people with certain philosophies," one student said. "I believe they both should be seated; equally but separately."
More students felt that both Chinas should be allowed in than those who felt that only Red China or Formosa should be seated alone. However, more students favored Red China being seated than Formosa.
Next week, we will offer two more choices.
In a lighter vein, students were shown this photograph and asked the following question:
Would you buy insurance from this man?
YES NO UNDECIDED 7% 89% 4%
The photograph is that of Howard Zusman, Student Body Government Treasurer. Howard has been working recently on a plan to add revenue to SBG coffers. The plan is to sell insurance to students. All we can add at this point is, GOOD LUCK Howard!
Tonight's Concert May Be Our Last
By JILL MOVSHIN and KINGSLEY RUSH
Of The Hurricane Staff
Tonight's Cannonball Adderly concert may be UM's last if a newly formed Concert Evaluation Committee reports unfavorably on general concert security and crowd behavior.
(See Dr. Butler's response to the Hurricane concerning concerts on Page 3.)
UM Vice President for Student Affairs William R. Butler, in a memorandum to the Hurricane staff writer Jill H. Movshin, announced the formation of the committee.
Butler cited "inadequate handling of (last) Sunday evening's concert" as the reason for his action in forming the committee and his refusal to allow a reappearance of the musical group, It's A Beautiful Day, on Monday evening.
The Concert Evaluation Committee will be responsible for reporting on crowd and security activities at each concert. A similar committee was formed last year to evaluate student conduct at concerts after a moratorium on campus concerts was lifted.
Butler said that the new committee would also study and make recommendations for a location to be used as a permanent site for concerts.
In a meeting last Wednesday morning, Butler outlined and discussed with concerned students and administrators the new concert series changes that could be made to alleviate the present situation.
Butler said that reports he had received on the Boz Scaggs-It's A Beautiful Day concert led him to the conclusion that the student marshal system had failed in preventing non-students from gaining entrance to the concert.
Flagrant use of drugs and a lack of student responsibility toward required administrative details at last Sunday's concert were cited by Butler as reasons to end the UM concert series.
"There were roving bands of young people at Sunday's concert, and I don't want to see anyone hurt," Butler said.
According to Student Entertainment Committee Chairman Glen Lipnick, about 400 people were seated in the restricted baseball field.
Butler said that the inadequate lighting in the concert area compounded the danger of students being harmed. He pointed out that tonight's concert might provide some special problems for the security forces.
Butler felt that the proximity of black neighborhoods in the UM area, along with Cannonball Adderly performing tonight, could be an attraction for young blacks to attend.
Steve Schifrin, in charge of the student security force at concerts, said that many of the security people were attacked last week when they tried to prevent outsiders from "crashing" the concert.
'Cane Takes A Break
The Hurricane will not appear on your local newsstands Tuesday, Oct. 26. The Hurricane will resume publication Friday Oct. 29 with its annual Homecoming issue.
Sunday's Concert Crowds Proved To Be Unruly
... it can't happen again
Concert Ticket info.
To gain entrance to tonight's concert featuring Cannonball Adderly at the sewage treatment plant, here's what you must do:
• Present your UM student I.D. card at the window in the breezeway of the Student Union directly behind the information booth. The window will be open until 9 p.m. tonight.
• You will receive one ticket for yourself and an additional guest ticket.
• Present your ticket along with your I.D. card at the entrance gate to the concert area. Gates open at 8 p.m.
• If you bring a guest he must have a guest ticket and he must enter the gate with you when you present your ticket.
• No one will be allowed to attend unless they have all of the required tickets and I.D.
Future Concerts In Peril
HURRICANE OPINION
The lack of security precautions and the flagrant drug abuse which was evident at last Sunday's concert have threatened the future of on-campus concerts. And only the students have the power to make sure that they remain.
When the edict banning concerts from campus was passed last year, there was an uproar from the students who felt that it was absurd to make them travel all the way to Miami Marine Stadium when the Soccer Field was perfectly adequate. When given the chance to have concerts back on campus (at the Sewage Treatment Plant) the situation was vastly improved. "Outsiders" were kept to a minimum, drug abuse was less obvious and security was more than adequate.
However, last Sunday the situation was worse than it had ever been. The threat of physical violence constantly hung over the concert grounds. People sitting directly in front of the stage were lighting joints and freely passing them around. Lighting was also inadequate.
Vice President for Student Affairs Dr. William Butler became concerned, as he should have been, when he heard the report. On some
campuses, the concert series would have been abolished immediately
after such a fiasco. But Dr. Butler has decided to give UM students one more chance.
A committee has been set up of students and non-students to observe the Cannonball Adderly concert which takes place tonight. This is it. If you blow it this time, you may have blown it for good.
The advice we have to give is the same as last year. Blow your dope at home and don't add to the problem. Don't help "outsiders" to get in. You're only helping to jeopardize the concert series that you are paying for when you help high school students sneak in.
Better security personnel must be hired. Too many students hired to work security conveniently forgot what they were being paid to do once the music started.
Also, Coral Gables and University police should be placed outside the gates to help disperse crowds of potential gate crashers.
It has become obvious the past two years that the concert series is the most popular service provided to the students. The future of this series hangs on what happens tonight.
We have one more chance. Let's not blow it.
Muskie Discusses Platform With UM
By JOHN REILLY
Hurricane News Editor
Senator Edmund Muskie, a democrat from Maine, brought his presidential campaign to UM yesterday in the form of an hour long question and answer period with UM students.
Muskie covered a range of subjects including the Washington D.C. gossip.
"Everyone in Washington wants to be president," he said, "except Henry Kissinger, he's happy just running the country."
Muskie said his statement in Los Angeles concerning a black vice-presidential candidate was widely misunderstood.
"I said the American people are not prepared to support a national ticket with a black on it."
Muskie said this is regrettable but the American people would not support a black vice-presidential candidate for the same reason blacks are denied equal rights and
Continued on Page 2
Senator Muskie Campaigns
. . . focuses on many aspects
Title Miami Hurricane, October 22, 1971
UM Sports: Kelly Cochran, once a superstar, now a bench-warmer, see p. 10. Exclusive: Want to become a genius? See p. 3 for story.
Homecoming Queen Election On Thursday
By F. J. MIZZLES, JR.
Hurricane Reporter
"Impact '71 is less than a week away and the planned activities are in full swing. Eighty-seven contestants entered the 'Draft the Queen' contest and only twelve are left," Hope Kourland, chairman of the queen's contest, said.
Miss Kourland said that the students selected five finalists out of the 12 semi-finalists in an election for the Homecoming Queen yesterday in the Student Union Breezeway. "There were more freshmen in the contest than from any other classification. There are three semifinalists from every class. They were judged on beauty and poise," Miss Kourland said.
The Queen will be drafted and crowned during a "unique" program planned for Thursday, October 28 on the Student Union Patio at 8 p.m. The finalists as well as the judges will all be dressed in Army uniforms since the theme is "The Army Feels the Draft." "For the final judgment all the contestants will perform variety acts. The best act will receive the crown, provided she has excelled in other phases of competition. The theme will be 'Beat the Army,'" Miss Kourland said.
The names of the semi-finalists are: Roberta Haivey, 1968 Complex, Freshman; Peggy Litchford, Alpha Delta Pi, Freshman; Robyn Rentz, Kappa Kappa Gamma, Freshman; Eloise Taylor, ZBT, Soph.; Linda Thompson, Chi Omega, Soph.; Holiday Jones, Kappa Kappa Gamma, Soph.; Angela Miller, ZBT, Jr.; Sunni Beakley, Pi Kappa Alpha, Jr.; Lois V. Smith, Phi Beta Sigma, Jr.; Tina Etling, Delta Gamma, Sr.; Diane Daughetee, Delta Zeta, Sr.; Debra Butler, Black Sisters, Sr.
"We feel that this contest will be better than those in the past, since the standards are higher this year than they have been in the past," Miss Kourland said. Miss Kourland said that the judges come from the University administration and local citizens, consisting of five men judges and one woman judge.
"During this time there will also be a pep rally and a boat burning for the UM-Army game. We hope that this occasion will instill in the students the true spirit of UM," Bill Hartman, chairman of homecoming, said. Hartman said that after the rally "we will hold our candlelight procession around the lake. Students and Alumni are welcome. We hope that all the lighted candles will prove to be a beautiful and moving experience." "I would like to remind everyone that the Homecoming parade will be on Wednesday night, October 27, at 7:30 p.m.," Hartman said. "We have more of everything in this parade and expect it to be one of our best. We are expecting a large turnout for it."
Elevated Campus Crime Rate Indicated By Students' Fears
H'cane Opinion Poll
By KINGSLEY RUSH and ERIC BALOFF
Of The Hurricane Staff
Fear for personal safety and fear of robbery and theft appears to be on the minds of a majority of UM students this year. In our weekly poll, 100 UM students were polled on controversial questions.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,777 |
Devils' Signing of Hamilton Just What They Needed
July 29, 2021 by Alex Chauvancy
The New Jersey Devils did it. They landed this year's prized free agent in Dougie Hamilton, signing him to a seven-year deal worth a total of $63 million. He spent the past three seasons with the Carolina Hurricanes and finished with 40 points in 52 games in 2020-21 — a 63-point pace over 82 games.
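A quick aside not in the original article: that "63-point pace" is just Hamilton's 2020-21 total prorated over a full 82-game schedule, which you can sanity-check in a couple of lines of Python:

```python
# Prorate a point total over an 82-game season.
# Hamilton's 2020-21 line, per the article: 40 points in 52 games.
points, games = 40, 52
pace_82 = points / games * 82
print(round(pace_82))  # → 63
```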
Overhauling the team's defense was a priority for Devils general manager Tom Fitzgerald this offseason. He got the ball rolling by acquiring Ryan Graves from the Colorado Avalanche before the expansion draft. Now with Hamilton in the mix, their blue line will have a drastically different look next season. And one for the better, as Hamilton is an ideal fit for how the Devils want to play. What is it about his game that makes him a perfect match for the team as they look to take a big step forward in their rebuild? Let's find out.
Hamilton an Elite All-Around Defender
When it comes to Hamilton, the first thing to note is his offensive ability. He's averaged almost 54 points per 82 games over the last three seasons and is constantly a scoring threat in the offensive zone. Since the start of the 2018-19 campaign, his even-strength offense has been worth an expected goals above replacement (xGAR) of 43.3. That ranks first in the NHL among defensemen, with Shea Theodore being next at 36.3. xGAR is a better tool for measuring a defenseman's impact because it places less emphasis on finishing, so Hamilton has been an absolute tank offensively.
Related: New Jersey Devils Sign Dougie Hamilton to 7-Year Contract
When Hamilton is on the ice, his team always seems to have control of the puck. He had a Corsi percentage (CF%) of 57.25 percent and an expected goals percentage of 57.69 percent over the last three seasons with the Hurricanes. So they were constantly out-attempting and out-chancing their opponents with Hamilton at five-on-five.
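For readers unfamiliar with the stat, Corsi-for percentage is simply the on-ice share of all shot attempts. A small sketch (the attempt counts below are hypothetical, chosen only to land near the 57.25 percent cited above; they are not Hamilton's actual totals):

```python
def corsi_pct(cf: int, ca: int) -> float:
    """Corsi-for percentage: the team's share of all 5-on-5 shot
    attempts (for + against) while the player is on the ice."""
    return 100.0 * cf / (cf + ca)

# Hypothetical attempt counts for illustration only.
print(round(corsi_pct(850, 635), 2))  # → 57.24
```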
There have been some questions about Hamilton's defensive game over the years. But I'm not sure how much those hold up. The Hurricanes were not giving Hamilton soft minutes at five-on-five, and far from it. He played against elite competition quite a bit and fared very well in those minutes (via PuckIQ):
Season Percent of Ice Time vs. Elite Competition CF%
2018-19 36.4 54.7
2019-20 38 56.6
It's pretty clear Hamilton is a force at five-on-five, but he's also had good to excellent results on special teams. Though he's not an elite penalty-killer, he was a decent shot and chance suppressor for the Hurricanes when down a man. He'll likely get time there for the Devils, too, even if it's only on the second unit.
But where Hamilton's biggest strength lies on special teams is the power play. It's not a stretch to say that he's one of the best power play quarterbacks in the NHL. He had an overwhelmingly strong positive impact on the man advantage, something the Devils should benefit from greatly after their power play struggles in 2020-21. Combine that with Hamilton's overwhelmingly strong positive impact at even strength, and you have an elite defenseman:
EV & PP RAPM Type (per 60), Standardized, 18-21 (via Evolving-Hockey)
If there was any concern about Hamilton's defensive game, those seem to be minor. He might not be an elite shutdown defender, but he's still had a positive impact, as shown in the RAPM chart above (xGA/60, CA/60). So he's going to play first-pair minutes against opponents' top lines, something the Devils haven't had on the right side of their blue line in many years.
Another component of Hamilton's game the Devils should benefit from is his ability to make things happen when he gains the offensive zone with puck possession. In his previous three seasons before 2020-21, he ranked in the 100th percentile in individual shots when in the offensive zone. And he managed to gain the offensive zone with puck possession more often than not:
Dougie Hamilton's effectiveness in all three zones of the ice
Hamilton has always been a polarizing defenseman, with many people questioning how good he actually is. The truth is he's an elite offensive defenseman who handles himself well defensively and has a significant positive impact on the power play. He's miles better than Seth Jones, who the Chicago Blackhawks signed to a mega extension. And he'll be making $500,000 less per year than Jones for the next seven years. So the Devils will get their money's worth.
Hamilton a Perfect Fit in More Ways Than One
Hamilton is in the upper echelon of NHL defensemen. That's clearly why the Devils signed him, but he's also a perfect fit for how they want to play under head coach Lindy Ruff. In his first season as coach in 2020-21, it's pretty clear how Ruff wanted the team to play. Unlike his predecessor John Hynes, he implemented an up-tempo, rush-based system similar to the one he had as the Dallas Stars head coach.
The issue with that kind of system is you need puck-moving defensemen to execute it properly. The Devils had a couple that found success with Ruff last season in Ty Smith and Damon Severson, and even P.K. Subban found some new life. But that's only half a blue line, hence why Fitzgerald made it a priority this offseason.
Related: Devils 2021 Draft Haul Fills Needs, Adds Promise
Though Graves is more of a defensive defenseman, he's a decent enough puck-mover to find success in Ruff's system. Hamilton, on the other hand, should thrive tremendously in it. He's great at transitioning the puck up the ice, whether it's on his own stick or making breakout passes to forwards. He's not afraid to jump in on the rush either, which should work out well in Ruff's system. A good comparison would be John Klingberg, whose best years with the Stars came under Ruff. It's not a stretch to think Hamilton will have similar or even better results than Klingberg did in Dallas.
As for who Hamilton's defense partner could be, there are a couple of fits. Graves played a bit alongside Cale Makar with the Colorado Avalanche, so he has some experience playing with an elite defender. Graves is 6-foot-5, 220 pounds, while Hamilton is 6-foot-6, 227 pounds, so that would make for a massive first-pairing but one with mobility. Teams would probably have a tough time breaking through a unit with that kind of size, mobility and defensive prowess.
Newest New Jersey Devils defenseman Dougie Hamilton (Photo by Amy Irvin / The Hockey Writers)
The other option would be to play Smith, who had a good but not great rookie season in 2020-21, alongside Hamilton. He's just 21 years old and projects to be a high-end puck-moving defender down the road. But it might be too soon to pencil him into top-pair minutes. The more likely scenario sees him paired with Severson once again. The two had success together in 2020-21 and would make for a formidable second pair. Add Subban and Jonas Siegenthaler, who the Devils acquired at this year's trade deadline, and here's what the defense pairs could look like:
Graves – Hamilton
Smith – Severson
Siegenthaler – Subban
X – Christian Jaros
That is a drastically different look than last season and is much closer to a blue line that can help the Devils be much more competitive. It's all because of what Hamilton brings as a player, as he can play in all situations and does a bit of everything at a high level. And because there are a variety of ways Ruff can utilize him, it makes him an ideal fit for the team.
Devils Will Get What They Paid For
Finally, there's Hamilton's contract. There's always risk in signing a 28-year-old free agent to a max seven-year deal. It's no different with Hamilton, but I wouldn't be too concerned. For starters, he's showing no signs of decline and is still at the top of his game. Plus, it's usually second-tier free agents whose deals end up aging poorly (Milan Lucic, Andrew Ladd, to name a couple), not elite players such as Hamilton.
And for what it's worth, Dom Luszczyszyn's model still has Hamilton as an elite defender in the seventh year of his deal. That may be a stretch, but it's not unreasonable to think he could still be a top-four blueliner by the time his contract is coming to an end.
Dougie Hamilton, who is better than Seth Jones and is somehow being paid $500K less, signs the best deal of the day. pic.twitter.com/Mo1vOUM2hH
— dom at the athletic (@domluszczyszyn) July 28, 2021
All in all, the Devils' signing of Hamilton is a complete game-changer. They haven't had a defenseman of his caliber in their lineup since Brian Rafalski, which was many moons ago. The Devils still have some work to do as far as adding a scorer or two, but Fitzgerald got the job done on defense. They should have one of the better defensive groups in the Metropolitan Division next season. And with Jonathan Bernier coming in as a 1B to Mackenzie Blackwood, opponents should find the back of the net less often than they did in 2020-21.
Advanced stats from Natural Stat Trick, Evolving-Hockey
Alex Chauvancy
Alex Chauvancy is a New Jersey Devils writer for The Hockey Writers who has a penchant for advanced stats, prospects, signings and trades. He previously wrote for Devils Army Blog, a New Jersey Devils fan blog, from 2015-2017
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,677 |
Q: Activity won't start in Android application project I am using Eclipse and I am building an Android application. I have created a new Android application project in which I created a blank activity (just checked the checkbox). The application is a basic Hello World; I have made no changes to the application that was created by default. I start the application by clicking Run As -> Android Application. The Android Virtual Device starts, but the activity does not. Any ideas what might be wrong?
This is the console output:
[2012-10-31 06:39:28 - newand] ------------------------------
[2012-10-31 06:39:28 - newand] Android Launch!
[2012-10-31 06:39:28 - newand] adb is running normally.
[2012-10-31 06:39:28 - newand] Performing com.example.newand.MainActivity activity launch
[2012-10-31 06:39:28 - newand] Automatic Target Mode: launching new emulator with compatible AVD 'androidEMP'
[2012-10-31 06:39:28 - newand] Launching a new emulator with Virtual Device 'androidEMP'
A: Sometimes the AVD is started but fails to load the application. Try starting the AVD first and then running the application within it.
If it still fails, you can try restarting the adb server from the command line (you need to have the adb tools installed for this):
adb kill-server
adb start-server
Sometimes another AVD will open even if one is already active. In that case, close the old AVD and let the new one stay open. Then try killing and restarting the adb server.
A: In your AndroidManifest.xml file, add an entry like the following
<activity android:name=".yourActivityName">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
Place it inside the application tag.
This should solve your problem.
\section{Introduction}
\label{sec:intro}
The Simultaneous \Pade approximation problem concerns approximating
several power series $S_1,\ldots,S_n \in {\mathsf{K}}[[x]]$ with rational
functions $\frac {\sigma_1}\lambda,\ldots,\frac {\sigma_n} \lambda$,
all sharing the same denominator $\lambda$. In other words, for
some $d \in \ZZ_{\geq 0}$, we seek $\lambda \in {\mathsf{K}}[x]$ of low degree such
that each of
\[
\word{rem}(\lambda S_1,\ x^d) , \word{rem}(\lambda S_2,\ x^d),\ \ldots,\ \word{rem}(\lambda S_n,\ x^d)
\]
has low degree. The study of Simultaneous \Pade approximations
traces back to Hermite's proof of the transcendence of $e$
\cite{hermite_sur_1878}. Solving Simultaneous \Pade
approximations has numerous applications, such as in coding theory,
e.g.~\cite{feng_generalization_1991,schmidt_collaborative_2009};
or in distributed, reliable computation \cite{clement_pernet_high_2014}.
Many algorithms have been developed for this problem, see e.g.~\cite{beckermann_uniform_1992,olesh_vector_2006,sidorenko_linear_2011,nielsen_generalised_2013} as well as the references therein.
Usually one cares about the regime where $d \gg n$.
Obtaining complexity $O(n d^2)$ is classical through successive cancellation; see \cite{beckermann_uniform_1994}, or \cite{feng_generalization_1991} for a Berlekamp--Massey-type variant.
Using fast arithmetic, the previous best was $\Osoft(n^\omega d)$, where $\omega$ is the exponent for matrix multiplication, see \cref{ssec:cost}.
That can be done by computing a minimal approximant basis with e.g.~\cite{giorgi_complexity_2003,GuptaSarkarStorjohannValeriote11}; this approach traces back to \cite{barel_general_1992,beckermann_uniform_1992}.
Another possibility which achieves the same complexity is fast algorithms for solving structured linear systems, e.g.~\cite{bostan_solving_2008}; see \cite{chowdhury_faster_2015} for a discussion of this approach.
A common description is to require $\deg \lambda < N_0$ for some degree bound $N_0$, and similarly $\deg \word{rem}(\lambda S_i,\, x^d) < N_i$ for $i = 1,\ldots,n$.
The degree bounds could arise naturally from the application, or could be set such that a solution must exist.
A natural generalisation is also to replace the $x^d$ moduli with arbitrary $g_1,\ldots,g_n \in {\mathsf{K}}[x]$.
Formally, for any field ${\mathsf{K}}$:
\begin{problem}
\label{prob:sim_pade}
Given a tuple $(\vec S, \vec g, \vec N)$ where
\begin{itemize}
\item
$\vec S = (S_1,\ldots,S_n) \in {\mathsf{K}}[x]^n$ is a sequence of polynomials,
\item $\vec g = (g_1,\ldots,g_n) \in {\mathsf{K}}[x]^n$ is a sequence of moduli polynomials with $\deg S_i < \deg g_i$ for $i=1,\ldots,n$,
\item and $\vec N = (N_0,\ldots,N_n)
\in \ZZ_{\geq 0}^{n+1}$ are degree bounds
satisfying $1\leq N_0 \leq \max_i \deg g_i$ and $N_i \leq \deg g_i$ for $i=1,\ldots,n$,
\end{itemize}
find, if it exists, a non-zero vector $(\lambda, \phi_1, \ldots, \phi_n)$ such that
\begin{enumerate}
\item $\lambda S_i \equiv \phi_i \mod g_i$ for $i = 1,\ldots, n$, and \label{p1item1}
\item $\deg \lambda < N_0$ and $\deg \phi_i < N_i$ for $i=1,\ldots,n$.
\end{enumerate}
\end{problem}
We will call any vector $(\lambda, \phi_1, \ldots, \phi_n)$ as above \emph{a solution} to a given Simultaneous \Pade approximation problem.
Note that if the $N_i$ are set too low, then it might be the case that no solution exists.
\begin{example}
\label{ex:simpade}
Consider over $\FF 2[x]$ that $g_1 = g_2 = g_3 = x^5$, and
$\vec S = (S_1,S_2,S_3) =
\left(x^{4} + x^{2} + 1,\,x^{4} + 1,\,x^{4} + x^{3} + 1\right)$,
with degree bounds $\vec N = (5, 3, 4, 5)$.
Then $\lambda_1 = x^4 + 1$ is a solution, since $\deg \lambda_1 < 5$ and
\[
\lambda_1 \vec S \equiv
\left(x^{2} + 1,\ 1,\ x^{3} + 1\right)
\mod x^5 \ .
\]
$\lambda_2 = x^{3} + x$ is another solution, since
\[
\lambda_2 \vec S \equiv
\left(x,\ x^{3} + x,\ x^4+x^3 + x\right)
\mod x^5 \ .
\]
These two solutions are linearly independent over $\FF 2[x]$ and span all solutions.
\end{example}
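The two claimed solutions are easy to check mechanically. The following sketch (our own illustration, not part of the development) encodes $\FF 2[x]$ polynomials as integer bitmasks, with bit $k$ holding the coefficient of $x^k$, and verifies the congruences and degree bounds for both $\lambda_1$ and $\lambda_2$; the helper names are ad hoc:

```python
def gf2_mul(a, b):
    # carry-less product of GF(2)[x] polynomials stored as int bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def rem_xd(p, d):
    # remainder modulo x^d: keep the d low-order coefficients
    return p & ((1 << d) - 1)

S = [0b10101, 0b10001, 0b11001]   # x^4+x^2+1, x^4+1, x^4+x^3+1
N = [5, 3, 4, 5]

for lam in (0b10001, 0b1010):     # lambda_1 = x^4+1, lambda_2 = x^3+x
    assert lam.bit_length() <= N[0]            # deg lambda < N_0
    for i, s in enumerate(S):
        phi = rem_xd(gf2_mul(lam, s), 5)
        assert phi.bit_length() <= N[i + 1]    # deg phi_i < N_i
```

For $\lambda_1$ the three residues come out as $x^2+1$, $1$ and $x^3+1$, matching the display above.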
Several previous algorithms for solving \cref{prob:sim_pade} are
more ambitious and produce an entire \emph{basis} of solutions
that satisfy the first output condition $\lambda S_i \equiv \phi_i \mod g_i$
for $i=1,\ldots,n$,
including solutions that do not satisfy the degree bounds stipulated
by the second output condition. Our algorithms are slightly more
restricted in that we only return the sub-basis that generates
the set of solutions that satisfy both output requirements of
\cref{prob:sim_pade}.
Formally:
\begin{problem}
\label{prob:sim_pade_basis}
Given an instance of \cref{prob:sim_pade}, find a matrix $A \in {\mathsf{K}}[x]^{* \times (n+1)}$ such that:
\begin{itemize}
\item Each row of $A$ is a solution to the instance.
\item All solutions are in the ${\mathsf{K}}[x]$-row space of $A$.
\item $A$ is $(-\vec N)$-row reduced\footnote{%
The notions $(-\vec N)$-degree, $\deg_{-\vec N}$ and $(-\vec N)$-row reduced are recalled in \cref{sec:preliminaries}.}.
\end{itemize}
\end{problem}
The last condition ensures that $A$ is minimal, in a sense, according to the degree bounds $\vec N$, and that we can easily parametrise which linear combinations of
the rows of $A$ are solutions.
We recall the relevant definitions and lemmas in \cref{sec:preliminaries}.
We will call such a matrix $A$ a \emph{solution basis}.
In the complexities we report here, we cannot afford to compute
$A$ explicitly. For example, if all $g_i = x^d$,
the number of field elements required to explicitly
write down all of the entries of $A$ could be $\Omega(n^2d)$.
Instead, we remark that $A$ is completely given by
the problem instance as well as the first column of $A$, containing
the $\lambda$ polynomials.\footnote{%
The restriction $N_i \leq \deg g_i$ in \cref{prob:sim_pade} ensures
that for a given $\lambda$, the only possibilities for the $\phi_i$
in a solution are $\word{rem}(\lambda S_i, \ g_i)$. In particular, if
we allowed $N_i > \deg g_i$ then $(0,\ldots, 0, g_i, 0, \ldots,
0)$ would be a solution which can not be directly reconstructed
from its first element.
}
Our algorithms will therefore represent $A$ row-wise using the
following compact representation.
\begin{definition}
For a given instance of \cref{prob:sim_pade_basis}, a \emph{solution
specification} is a tuple $(\vec \lambda,\vec \delta) \in
{\mathsf{K}}[x]^{k \times 1} \times \ZZ_{<0}^k$ such that the \emph{completion} of $\vec
\lambda$ is a solution basis, and where $\vec \delta$ are the $(-\vec N)$-degrees of the
rows of the completion of $\vec \lambda$.
The \emph{completion} of $\vec \lambda = (\lambda_1,\ldots,\lambda_k)^\top$ is
the matrix
\[
\begin{bmatrix}
\lambda_1 & \word{rem}(\lambda_1 S_1,\ g_1) & \ldots & \word{rem}(\lambda_1 S_n,\ g_n) \\
\vdots & & \ddots & \vdots \\
\lambda_k & \word{rem}(\lambda_k S_1,\ g_1) & \ldots & \word{rem}(\lambda_k S_n,\ g_n) \\
\end{bmatrix}
\ .
\]
\end{definition}
Note that $\vec \delta$ will consist of only negative numbers, since any solution $\vec v$ by definition has $\deg_{-\vec N} \vec v < 0$.
\begin{example}
A solution specification for the problem in \cref{ex:simpade} is
\[
(\vec \lambda, \vec \delta) = \big( [x^4 + 1,\ x^3 + x]^\top ,\ (-1, -1) \big) \ .
\]
The completion of this is
\[
A = \begin{bmatrix}
x^4 + 1 & x^{2} + 1 & 1 & x^{3} + 1 \\
x^3 + x & x & x^{3} + x & x^4+x^3 + x
\end{bmatrix}
\]
One can verify that $A$ is $(-\vec N)$-row reduced.
\end{example}
We present two algorithms for solving \cref{prob:sim_pade_basis},
both with complexity $O\big(n^{\omega-1}\, {\mathsf{M}}(d)\,(\log d)\,(\log
d/n)^2\big)$, where $d = \max_i \deg g_i$ and ${\mathsf{M}}(d)$ is the cost of multiplying two polynomials of degree $d$, see \cref{ssec:cost}.
They both depend crucially on recent developments
that allow computing minimal approximant bases of non-square matrices
faster than for the square case
\cite{zhou_efficient_2012,jeannerod_computation_2016}.
We remark that from the solution basis, one can also compute the expanded form of
one or a few of the solutions in the same complexity, for instance if a single, expanded solution to the Simultaneous \Pade problem is needed.
Our first algorithm in \cref{sec:dual} assumes $g_i = x^d$ for all
$i$ and some $d \in \ZZ_{\geq 0}$. It utilises a well-known duality between
Simultaneous \Pade approximations and Hermite \Pade approximations,
see e.g.~\cite{beckermann_uniform_1992}. The Hermite \Pade problem
is immediately solvable by fast minimal approximant basis computation.
A remaining step is to efficiently compute a single row of the
adjoint of a matrix in Popov form, and this is done by combining
partial linearisation \cite{GuptaSarkarStorjohannValeriote11} and high-order
lifting \cite{storjohann_high-order_2003}.
Our second algorithm in \cref{sec:intersect} supports arbitrary $g_i$.
The algorithm first solves $n$ single-sequence \Pade approximations, each of $S_1,\ldots,S_n$.
The solution bases for two problem instances can be combined by computing the
intersection of their row spaces; this is handled by a minimal approximant basis
computation.
A solution basis of the full Simultaneous \Pade problem is then obtained by structuring intersections along a binary tree.
Before we describe our algorithms, we give some preliminary notation and definitions in \cref{sec:preliminaries}, and in \cref{sec:subroutines} we describe some of the computational tools that we employ.
Both our algorithms have been implemented in Sage v. 7.0 \cite{stein_sagemath_????} (though asymptotically slower alternatives to the computational tools are used).
The source code can be downloaded from \url{http://jsrn.dk/code-for-articles}.
\subsection{Cost model}
\label{ssec:cost}
We count basic arithmetic operations in ${\mathsf{K}}$ on an algebraic RAM.
We will state complexity results in terms of an exponent $\omega$
for matrix multiplication, and a function ${\mathsf{M}}(\cdot)$ that is a
multiplication time for
${\mathsf{K}}[x]$ \cite[Definition~8.26]{von_zur_gathen_modern_2012}. Then two
$n\times n$ matrices over ${\mathsf{K}}$ can be multiplied in $O(n^{\omega})$
operations in ${\mathsf{K}}$, and two polynomials in ${\mathsf{K}}[x]$ of degree
strictly less than $d$ can be multiplied in ${\mathsf{M}}(d)$ operations in
${\mathsf{K}}$. The best known algorithms allow $\omega < 2.38$
\cite{coppersmith_matrix_1990, LeGall14}, and we can always take
${\mathsf{M}}(d) \in O(d (\log d) (\loglog d))$ \cite{CantorKaltofen}.
In
this paper we assume that $\omega > 2$, and that ${\mathsf{M}}(d)$ is super-linear while
${\mathsf{M}}(d) \in O(d^{\omega-1})$. The assumption ${\mathsf{M}}(d) \in O(d^{\omega-1})$ simply
stipulates that if fast matrix multiplication techniques are used
then fast polynomial multiplication should be used also:
for example, $n \, {\mathsf{M}}(nd) \in O(n^{\omega} \, {\mathsf{M}}(d))$.
\section{Preliminaries}
\label{sec:preliminaries}
Here we gather together some definitions and results regarding row
reduced bases, minimal approximant basis, and their shifted variants.
For a matrix $A$ we denote by $A_{i,j}$ the entry in row $i$ and
column $j$. For a matrix $A$ over ${\mathsf{K}}[x]$ we denote by $\word{Row}(A)$
the ${\mathsf{K}}[x]$-linear row space of $A$.
\subsection{Degrees and shifted degrees}
The degree of a nonzero vector $\vec v \in {\mathsf{K}}[x]^{1 \times m}$ or
matrix $A \in {\mathsf{K}}[x]^{n\times m}$ is denoted by
$\deg \vec v$ or $\deg A$, and is the maximal degree of entries of
$\vec v$ or $A$. If $A$ has no zero rows the {\em row degrees} of $A$, denoted
by $\word{rowdeg}\, A$, is the tuple $(d_1,\ldots,d_n)$ with $d_i = \deg
\word{row}(A,i)$.
The (row-wise) {\em leading matrix} of $A$, denoted by
${\rm LM}(A) \in {\mathsf{K}}^{n \times m}$, has ${\rm LM}(A)_{i,j}$
equal to the coefficient of $x^{d_i}$ of $A_{i,j}$.
Next we recall~\cite{barel_general_1992,zhou_efficient_2012,jeannerod_computation_2016}
the shifted variants of the notion of degree, row degrees, and leading
matrix. For a {\em shift} $\vec s =
(s_1,\ldots,s_n) \in \ZZ^n$,
define the $n \times n$ diagonal matrix $x^{\vec s}$
by $$x^{\vec s} := \left [ \begin{array}{ccc} x^{s_1}
& & \\
& \ddots & \\ & & x^{s_n} \end{array} \right ].$$
Then the {\em ${\vec s}$-degree} of $\vec v$, the {\em ${\vec s}$-row
degrees} of $A$, and the {\em $\vec s$-leading matrix} of $A$, are
defined by $\deg_{\vec s} \vec v := \deg \vec v x^{\vec s}$, $\word{rowdeg}_{\vec
s} A := \word{rowdeg}\, Ax^{\vec s}$, and ${\rm LM}_{\vec s}(A) := {\rm
LM}(Ax^{\vec s})$.
Note that we pass over the ring of Laurent polynomials only for
convenience; our algorithms will only compute with polynomials.
As pointed out in~\cite{jeannerod_computation_2016}, up to negation
the definition of ${\vec s}$-degree is equivalent to that used
in~\cite{BeckermannLabahnVillard06} and to the notion of {\em defect}
in~\cite{beckermann_uniform_1994}.
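These definitions can be made concrete on the completion matrix from \cref{ex:simpade}. The sketch below (our own; the helper names are hypothetical) computes the $(-\vec N)$-row degrees and the $(-\vec N)$-leading matrix over $\FF 2$, with polynomials stored as integer bitmasks:

```python
def deg(p):
    # degree of a GF(2)[x] polynomial stored as an int bitmask
    return p.bit_length() - 1 if p else -10**9   # sentinel for deg(0) = -infinity

def shifted_rowdeg(row, s):
    # s-degree of a row: max over entries of deg(entry) + shift
    return max(deg(p) + sj for p, sj in zip(row, s))

def leading_matrix(M, s):
    # row-wise s-leading matrix over GF(2): coefficient of x^(d - s_j) in entry j
    L = []
    for row in M:
        d = shifted_rowdeg(row, s)
        L.append([(p >> (d - sj)) & 1 if d - sj >= 0 else 0
                  for p, sj in zip(row, s)])
    return L

# completion matrix of Example ex:simpade and the shift s = -N
A = [[0b10001, 0b101, 0b1,    0b1001],    # x^4+1, x^2+1, 1,     x^3+1
     [0b1010,  0b10,  0b1010, 0b11010]]   # x^3+x, x,     x^3+x, x^4+x^3+x
shift = [-5, -3, -4, -5]
```

Both rows have $(-\vec N)$-degree $-1$, and the resulting leading matrix has rows $(1,1,0,0)$ and $(0,0,1,1)$, which are independent over $\FF 2$; this confirms that the completion is $(-\vec N)$-row reduced.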
For an instance $(\vec S, \vec g, \vec N)$ of \cref{prob:sim_pade}, when defining
matrices we will use $\vec S$ and $\vec g$ as vectors, and we denote by $\diagg$
the diagonal matrix with the entries of $\vec g$ on its diagonal.
\subsection{Row reduced}
Although row reducedness can be defined for matrices of arbitrary
shape and rank, it suffices here to consider the case of matrices
of full row rank. A matrix $R \in {\mathsf{K}}[x]^{n \times m}$ is
{\em row reduced} if ${\rm LM}(R)$ has full row rank, and
{\em $\vec{s}$-row reduced} if ${\rm LM}_{\vec s}(R)$ has full row rank.
Every $A \in {\mathsf{K}}[x]^{n \times m}$ of full row rank is left equivalent
to a matrix $R \in {\mathsf{K}}[x]^{n \times m}$ that is ${\vec s}$-row
reduced. The rows of $R$ give a basis for $\word{Row}(A)$ that is minimal
in the following sense: the list of ${\vec s}$-degrees of the rows
of $R$, when sorted in non-decreasing order, will be lexicographically
minimal. An important feature of row reduced matrices is
the so-called ``predictable degree''-property~\cite[Theorem~6.3-13]{kailath_linear_1980}: for any
$\vec v \in {\mathsf{K}}[x]^{1 \times n}$, we have
\[
\deg_{\vec s}(\vec v R) = \max_{i=1,\ldots,n}( \deg_{\vec s} {\rm row}(R,i)
+ \deg v_i ) \ .
\]
A canonical $\vec{s}$-reduced basis is provided by the ${\vec
s}$-Popov form. Although an ${\vec s}$-Popov form can be defined
for a matrix of arbitrary shape and rank, it suffices
here to consider the case of a non-singular matrix. The
following definition is equivalent
to~\cite[Definition~1.2]{jeannerod_computation_2016}.
\begin{definition} \label{def:popov}
A non-singular matrix $R \in {\mathsf{K}}[x]^{n\times n}$ is in ${\vec s}$-Popov
form if ${\rm LM}_{\vec s}(R)$ is unit lower triangular and the
degrees of off-diagonal entries of $R$ are strictly less than the
degree of the diagonal entry in the same column.
\end{definition}
\subsection{Adjoints of row reduced matrices}
For a non-singular matrix $A$ recall that the adjoint of $A$, denoted
by ${\rm adj}(A)$, is equal to $(\det A)A^{-1}$, and that entry
$(i,j)$ of $\word{adj}(A)^\top$ is equal to $(-1)^{i+j}$ times the determinant
of the $(n-1) \times (n-1)$ sub-matrix that is obtained from $A$ by
deleting row $i$ and column $j$.
\begin{lemma}
\label{lem:adjointRowReduced}
Let $A \in {\mathsf{K}}[x]^{n \times n}$ be $\vec s$-row reduced. Then
$\word{adj}(A)^\top$ is $(-\vec s)$-row reduced with
\[
\word{rowdeg}_{(-\vec s)} \word{adj}(A)^\top =(\eta - s - \eta_1,\ldots , \eta - s -\eta_n) \ ,
\]
where $\vec \eta = \word{rowdeg}_{\vec s} A$, $\eta = \sum_i \eta_i$ and $s = \sum_i s_i$.
\end{lemma}
\begin{proof}
Since $A$ is $\vec s$-row reduced then $A x^{\vec s}$ is row reduced.
Note that $\word{adj}(A x^{\vec s})^\top (A x^{\vec s})^\top
= (\det A x^{\vec s}) I_m$ with $\deg
\det A x^{\vec s} = \eta$. It follows that
row $i$ of $\word{adj}(A x^{\vec s})^\top$ must
have degree at least $\eta - \eta_i$ since
$\eta_i$ is the degree of column $i$
of $(A x^{\vec s})^\top$. However, entries in row
$i$ of $\word{adj}(A x^{\vec s})^\top$ are minors of the matrix obtained from
$A x^{\vec s}$ by removing row $i$, hence have degree at most $\eta
- \eta_i$. It follows that the (row-wise) leading coefficient matrix of
$\word{adj}(A x^{\vec s})^\top$ is non-singular, hence $\word{adj} (A x^{\vec
s})^\top$ is row reduced. Since $\word{adj} (A x^{\vec s})^\top =
(\det x^{\vec s}) \word{adj}(A)^\top x^{-\vec s}$ we conclude that $\word{adj}(A)^\top$
is $(-\vec s)$-row reduced with $\word{rowdeg}_{(-\vec s)} \word{adj}(A)^\top =
(\eta - \eta_1 - s, \ldots, \eta - \eta_n - s)$.
\end{proof}
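As a quick sanity check of \cref{lem:adjointRowReduced} in its simplest instance, the sketch below (our own; it hard-codes the $2 \times 2$ adjugate, and the cofactor signs vanish in characteristic $2$) takes $\vec s = \vec 0$:

```python
def deg(p):
    # degree of a GF(2)[x] polynomial stored as an int bitmask
    return p.bit_length() - 1

# A = [[x^2, 1], [1, x]] is row reduced: its leading matrix is the identity.
A = [[0b100, 0b1], [0b1, 0b10]]
eta = [2, 1]                 # row degrees of A; deg det(A) = 3 = sum(eta)

# adj([[a, b], [c, d]]) = [[d, -b], [-c, a]]; over GF(2) the signs disappear.
adjT = [[A[1][1], A[1][0]],  # transpose of the adjugate
        [A[0][1], A[0][0]]]

rowdegs = [max(deg(p) for p in row) for row in adjT]
# The lemma (with s = 0) predicts row i of adj(A)^T has degree sum(eta) - eta_i.
```

Here `rowdegs` comes out as $(1, 2) = (\eta - \eta_1, \eta - \eta_2)$, and each row of the transposed adjugate attains its degree in a distinct column, so $\word{adj}(A)^\top$ is row reduced as the lemma asserts.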
\subsection{Minimal approximant bases}
We recall the standard notion of minimal approximant basis, sometimes
known as order basis or $\sigma$-basis \cite{beckermann_uniform_1994}.
For a matrix $A \in {\mathsf{K}}[x]^{n \times m}$ and order $d \in
\ZZ_{\geq 0}$, an \emph{order $d$ approximant} is a vector $\vec p \in
{\mathsf{K}}[x]^{1 \times n}$ such that
$\vec pA \equiv \vec 0 \mod x^d.$
An \emph{approximant basis of order $d$} is then a matrix $F \in
{\mathsf{K}}[x]^{n \times n}$ which is a basis of all order $d$ approximants.
Such a basis always exists and has full rank $n$.
For a shift $\vec s \in \ZZ^n$,
$F$ is then an
\emph{$\vec s$-minimal approximant basis} if it is $\vec s$-row
reduced.
Let $\algoname{MinBasis}(d,A,\vec s)$ be a function that returns $(F,\vec
\delta)$, where $F$ is an $\vec s$-minimal approximant basis of $A$
of order $d$, and $\vec \delta = \word{rowdeg}_{\vec s} F$. The next
lemma recalls a well known method of constructing minimal approximant
bases recursively. Although the output of $\algoname{MinBasis}$ may not be
unique, the lemma holds for \emph{any} $\vec s$-minimal approximant basis
that $\algoname{MinBasis}$ might return.
\begin{lemma} \label{lem:paderec} Let $A = \left [ \begin{array}{c|c}
A_1 & A_2 \end{array} \right ]$ over ${\mathsf{K}}[x]$. If
$(F_1, \vec \delta_1) = \algoname{MinBasis}(d,A_1,\vec s)$
and $(F_2,\vec \delta_2) =
\algoname{MinBasis}(d,F_1A_2,\vec \delta_1)$, then
$F_2F_1$ is an $\vec s$-minimal approximant basis of $A$ of order $d$
with $\vec \delta_2 = \word{rowdeg}_{\vec s} F_2 F_1$.
\end{lemma}
Sometimes only the {\em negative part} of an $\vec s$-minimal
approximant basis is required: the submatrix of the approximant
basis consisting of rows with negative $\vec s$-degree.
Let function $\algoname{NegMinBasis}(d,A,\vec
s)$ have the same output as $\algoname{MinBasis}$, but with
$F$ restricted to the negative part.
\begin{corollary} \label{lem:paderecprune} \cref{lem:paderec} still
holds if $\algoname{MinBasis}$ is replaced by $\algoname{NegMinBasis}$, and
``an $\vec s$-minimal'' is replaced with ``the negative part of an $\vec s$-minimal.''
\end{corollary}
Using for example the algorithm \texttt{M-Basis} of
\cite{giorgi_complexity_2003}, it is easy to show
that any order
$d$ approximant basis $G$ for an $A$ of column dimension $m$ has
$\det G = x^D$ for some $D \in \ZZ_{\geq 0}$ with $D \leq md$.
Many problems on ${\mathsf{K}}[x]$ matrices and approximations reduce to the
computation of (shifted) minimal approximant bases, see
e.g.~\cite{beckermann_uniform_1994,giorgi_complexity_2003},
often resulting in the best known asymptotic complexities for these
problems.
\subsection{Direct solving of Simultaneous \Pade approximations}
\label{sec:direct_solve}
Let $(\vec S, \vec g, \vec N)$ be an instance of \cref{prob:sim_pade_basis}
of size $n$. We recall some \jsrn{should we add: ``but not all''?} known approaches for computing a solution
specification using row reduction and minimal approximant basis
computation.
\subsubsection{Via reduced basis}
\label{sec:direct_reduced_basis}
Using the predictable degree property
it is easy to show that if $R \in {\mathsf{K}}[x]^{(n+1)
\times (n+1)}$ is an $(-\vec N)$-reduced basis of
\[ A =
\left [ \begin{array}{c|c}
1 & \vec S \\\hline
& \diagg
\end{array} \right]
\in {\mathsf{K}}[x]^{(n+1) \times (n+1)},
\]
then the sub-matrix
of $R$ comprised of the rows with negative $(-\vec N)$-degree form
a solution basis. A solution specification $(\vec \lambda, \vec
\delta)$ is then a subvector $\vec \lambda$
of the first column of $R$, with $\vec \delta$
the corresponding subtuple $\vec \delta$ of $\word{rowdeg}_{(- \vec N)} R$.
Mulders and Storjohann \cite{mulders_lattice_2003} gave an iterative algorithm
for performing row reduction by successive cancellation; it is similar to but
faster than earlier algorithms
\cite{kailath_linear_1980,lenstra_factoring_1985}.
Generically, on input $F \in {\mathsf{K}}[x]^{n \times n}$ it has complexity
$O(n^3 (\deg F)^2)$.
Alekhnovich \cite{alekhnovich_linear_2005} gave what is essentially a Divide \&
Conquer variant of Mulders and Storjohann's algorithm, with complexity
$\Osoft(n^{\omega+1}\deg F)$.
Nielsen remarked \cite{nielsen_generalised_2013} that these algorithms
perform fewer iterations when applied to the matrix $A$ above, due to its low
\emph{orthogonality defect}: ${\rm OD}(F) = \sum\word{rowdeg} F - \deg \det F$, resulting
in $O(n^2(\deg A)^2)$ respectively $\Osoft(n^\omega \deg A)$.
Nielsen also used the special shape of $A$ to give a variant of the
Mulders--Storjohann algorithm that computes coefficients in the working matrix in a lazy
manner with a resulting complexity $O(n \,\mathsf{P}(\deg A))$, where
$\mathsf P(\deg A) = (\deg A)^2$ when the $g_i$ are all powers of $x$, and
$\mathsf P(\deg A) = {\mathsf{M}}(\deg A)\deg A$ otherwise.
Giorgi, et al. \cite{giorgi_complexity_2003} gave a reduction for performing
row reduction by computing a minimal approximant basis.
For the special matrix $A$, this essentially boils down to the approach
described in the following section.
When $n = 1$, the extended Euclidean algorithm on input $S_1$ and $g_1$ can solve the approximation problem by essentially computing the reduced basis of the $2 \times 2$ matrix $A$: each iteration corresponds to a reduced basis for a range of possible shifts \cite{sugiyama_further_1976,justesen_complexity_1976,gustavson_fast_1979}.
The complexity of this is $O({\mathsf{M}}(\deg g_1) \log \deg g_1)$.
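For illustration, the truncated Euclidean iteration for $n = 1$ can be sketched as follows (our own code over $\FF 2$, with polynomials as integer bitmasks; the helper names are ad hoc):

```python
def deg(p):
    return p.bit_length() - 1   # deg(0) = -1, adequate for the guards below

def gf2_mul(a, b):
    # carry-less product of GF(2)[x] polynomials stored as int bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_divmod(a, b):
    # polynomial quotient and remainder over GF(2)
    q = 0
    while a and deg(a) >= deg(b):
        sh = deg(a) - deg(b)
        q |= 1 << sh
        a ^= b << sh
    return q, a

def pade_eea(S, g, N1):
    # find (lam, phi) with lam*S = phi mod g and deg(phi) < N1, by
    # truncating the extended Euclidean algorithm on (g, S)
    r0, r1 = g, S
    t0, t1 = 0, 1
    while r1 and deg(r1) >= N1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, t0 ^ gf2_mul(q, t1)
    return t1, r1
```

On $S_1 = x^4 + x^2 + 1$, $g = x^5$ and $N_1 = 3$ from \cref{ex:simpade}, this yields $\lambda = x^2 + 1$ and $\phi = 1$; indeed $(x^2+1)\,S_1 \equiv 1 \pmod{x^5}$.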
\subsubsection{Via minimal approximant basis}
First consider the special case when all $g_i = x^d$ for the same
$d$. An approximant $\vec v = (\lambda, \phi_1, \ldots, \phi_n)$
of order $d$ of
\begin{align*}
A &= \left [ \begin{array}{c}
- \vec S \\
I
\end{array} \right ]
\in {\mathsf{K}}[x]^{(n+1) \times n}
\end{align*}
clearly satisfies $\lambda S_i \equiv \phi_i \mod x^d$ for $i =
1,\ldots,n$; conversely, any such vector $\vec v$
satisfying these congruences must be an approximant of $A$
of order $d$. So the negative part of a $(-\vec N)$-minimal
approximant basis of $A$ of order $d$ is a solution basis.
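This special case can be made concrete with a naive order-by-order iteration in the style of \texttt{M-Basis} (quadratic in $d$, for illustration only; the fast routines of \cref{sec:subroutines} are what give the stated complexities). The sketch below is our own, over $\FF 2$ with polynomials as integer bitmasks:

```python
def gf2_mul(a, b):
    # carry-less product of GF(2)[x] polynomials stored as int bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def neg_min_basis_xd(S, d, N):
    """Negative part of a (-N)-minimal approximant basis of A = [[-S],[I]]
    of order d over GF(2) (so -S = S).  Rows are (lambda, phi_1, ..., phi_n),
    computed one order and one column at a time."""
    n = len(S)
    rows = n + 1
    # A has n columns: row 0 carries S, rows 1..n the identity
    A = [list(S)] + [[int(j == i) for j in range(n)] for i in range(n)]
    P = [[int(j == i) for j in range(rows)] for i in range(rows)]
    delta = [-Ni for Ni in N]          # shifted row degrees under shift -N
    for k in range(d):
        for c in range(n):
            # residues: coefficient of x^k in (P * A)[i][c]
            res = [0] * rows
            for i in range(rows):
                for j in range(rows):
                    res[i] ^= (gf2_mul(P[i][j], A[j][c]) >> k) & 1
            J = [i for i in range(rows) if res[i]]
            if not J:
                continue
            piv = min(J, key=lambda i: (delta[i], i))   # minimal shift-degree
            for i in J:
                if i != piv:
                    P[i] = [a ^ b for a, b in zip(P[i], P[piv])]
            P[piv] = [p << 1 for p in P[piv]]           # multiply pivot row by x
            delta[piv] += 1
    return [(P[i], delta[i]) for i in range(rows) if delta[i] < 0]
```

On the instance of \cref{ex:simpade} this recovers exactly the two rows of the completion shown there, each of $(-\vec N)$-degree $-1$.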
In the general case we can reduce to a minimal approximant basis
computation, as shown in \cref{alg:simpadedirect}.
Correctness of the algorithm follows from the following result.
\begin{theorem} \label{thm:simpadedirect}
Corresponding to an instance $(\vec S, \vec g, \vec N)$ of
\cref{prob:sim_pade_basis} of size $n$, define a shift
$\vec h$ and order $d$:
\begin{itemize}
\item $\vec h := -(\vec N \mid N_0 -1, \ldots, N_0-1) \in \ZZ^{2n+1}$
\item $d := N_0 + \max_i \deg g_i -1$
\end{itemize}
If $G$ is the negative part of an $\vec h$-minimal approximant basis of
$$
H = \left [ \begin{array}{c} -\vec S \\
I \\
\diagg \end{array} \right ] \in {\mathsf{K}}[x]^{(2n+1) \times n}
$$
of order $d$, then the submatrix of $G$ comprised of the
first $n+1$ columns is a solution basis to the problem instance.
\end{theorem}
\begin{proof} An approximant
$\vec v = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots, q_n)$ of order
$d$ of $H$ clearly satisfies
\begin{align} \label{eq:lambda}
\lambda S_i = \phi_i + q_ig_i \bmod
x^d
\end{align} for $i=1,\ldots,n$; conversely, any such vector $\vec v$
satisfying these
congruences must be an approximant of $H$ of order $d$.
Now suppose $\vec v$ is an order $d$ approximant of $H$ with negative
$\vec h$-degree, so $\deg \lambda \leq N_0-1$, $\deg \phi_i \leq
N_i-1$, and $\deg q_i \leq N_0 - 2$.
Since \cref{prob:sim_pade}
specifies that $\deg S_i < \deg g_i$ and
$N_i \leq \deg g_i$, both $\lambda S_i$ and $q_i g_i$
will have degree bounded by $N_0 + \deg g_i - 2$.
Since \cref{prob:sim_pade} specifies that $N_0 \geq 1$,
it follows that both the left and right hand sides of (\ref{eq:lambda})
have degree bounded by $N_0+ \deg g_i -2$, which is strictly less
than $d$. We conclude that
\begin{align} \label{eq:lambda2}
\lambda S_i = \phi_i + q_i g_i
\end{align} for
$i=1,\ldots,n$.
It follows that $\vec v H = 0$ so $\vec v$ is in the left kernel
of $H$. Moreover, restricting $\vec v$ to its first $n+1$ entries
gives $\bar{\vec v} := (\lambda, \phi_1,\ldots,\phi_n)$, a solution
to the Simultaneous \Pade problem with $\deg_{- \vec N} \bar{\vec v} = \deg_{\vec h} \vec v$.
Conversely, if $\bar{\vec v} = (\lambda, \phi_1,\ldots,\phi_n)$ is
a solution to the Simultaneous \Pade problem, then the extension
$\vec v = (\lambda, \phi_1,\ldots,\phi_n,q_1,\ldots,q_n)$ with $q_i
= (\lambda S_i - \phi_i)/g_i \in {\mathsf{K}}[x]$ for $i=1,\ldots,n$ is an
approximant of $H$ of order $d$ with $\deg_{\vec h} \vec v =
\deg_{-\vec N} \bar{\vec v}$.
Finally, consider that a left kernel basis for $H$ is given by
$$
K = \left[ \begin{array}{c|c} K_1 & K_2 \end{array} \right ] =
\left [ \begin{array}{cc|c} 1 & \vec S & \\
& \diagg & -I \end{array} \right ].
$$
We must have $G = M K$ for some polynomial matrix $M$ of full row
rank. But then $M K_1$ also has full row rank with $\word{rowdeg}_{-\vec N} MK_1
= \word{rowdeg}_{\vec h} G$.
\end{proof}
\begin{algorithm}[t]
\caption{\algoname{DirectSimPade}}
\label{alg:simpadedirect}
\begin{algorithmic}[1]
%
\Require{$(\vec S, \vec g, \vec N)$, an instance of
\cref{prob:sim_pade_basis} of size $n$.}
\Ensure{$(\vec \lambda, \vec \delta)$, a solution specification.}
\State $\vec h \leftarrow -( \vec N \mid N_0-1,\ldots,N_0-1) \in \ZZ^{2n+1}$
\State $d \leftarrow N_0 + \max_i \deg g_i - 1$
\State $H =
\left[ \begin{array}{c}
-\vec S \\
I \\
\diagg
\end{array} \right]$
\State $(\left [ \begin{array}{c|c} \vec \lambda & \ast \end{array}
\right ], \vec \delta) \leftarrow \algoname{NegMinBasis}(d, H, \vec h)$
\State $\textbf{return } (\vec \lambda, \vec \delta)$
\end{algorithmic}
\end{algorithm}
\algoname{DirectSimPade} can be performed in time
$\Osoft(n^{\omega} \deg H) = \Osoft(n^\omega \max_i \deg g_i)$ using the minimal approximant basis algorithm by Jeannerod, et al.~\cite{jeannerod_computation_2016}, see \cref{sec:subroutines}.
A closely related alternative to \algoname{DirectSimPade} is the recent algorithm by Neiger \cite{neiger_fast_2016} for computing solutions to modular equations with general moduli $g_i$.
This would give the complexity $\Osoft(n^{\omega-1} \sum_i \deg g_i) \subset \Osoft(n^\omega \max_i \deg g_i)$.
All of the above solutions ignore the sparse, simple structure of the input
matrices, which is why they do not obtain the improved complexity that we do here.
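For general moduli, the construction in \cref{thm:simpadedirect} can likewise be exercised with a naive order-by-order iteration (our own quadratic sketch over $\FF 2$, for illustration only; polynomials are integer bitmasks and all names are ad hoc):

```python
def gf2_mul(a, b):
    # carry-less product of GF(2)[x] polynomials stored as int bitmasks
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_mod(a, b):
    # remainder of a modulo b over GF(2)
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def neg_approx_basis(A, d, s):
    # negative part of an s-minimal approximant basis of A of order d,
    # computed order by order and column by column
    rows, cols = len(A), len(A[0])
    P = [[int(j == i) for j in range(rows)] for i in range(rows)]
    delta = list(s)
    for k in range(d):
        for c in range(cols):
            res = [0] * rows
            for i in range(rows):
                for j in range(rows):
                    res[i] ^= (gf2_mul(P[i][j], A[j][c]) >> k) & 1
            J = [i for i in range(rows) if res[i]]
            if not J:
                continue
            piv = min(J, key=lambda i: (delta[i], i))
            for i in J:
                if i != piv:
                    P[i] = [a ^ b for a, b in zip(P[i], P[piv])]
            P[piv] = [p << 1 for p in P[piv]]
            delta[piv] += 1
    return [P[i] for i in range(rows) if delta[i] < 0]

# Instance of Example ex:simpade, run through the theorem's construction:
S = [0b10101, 0b10001, 0b11001]
g = [0b100000] * 3                      # all g_i = x^5
N = [5, 3, 4, 5]
n, d = 3, N[0] + 5 - 1                  # d = N_0 + max_i deg g_i - 1
H = ([S]                                                             # -S (= S over GF(2))
     + [[int(j == i) for j in range(n)] for i in range(n)]           # identity block
     + [[g[i] if j == i else 0 for j in range(n)] for i in range(n)])  # diag(g)
h = [-Ni for Ni in N] + [-(N[0] - 1)] * n
G = neg_approx_basis(H, d, h)
```

Restricting the returned rows to their first $n+1$ entries gives a solution basis, as the theorem predicts; on this instance two rows come back, each satisfying the congruences and degree bounds.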
\section{Computational tools}
\label{sec:subroutines}
The main computational tool we will use is the following very recent
result from Jeannerod, Neiger, Schost and
Villard~\cite{jeannerod_computation_2016} on minimal approximant
basis computation.
\begin{theorem}[\protect{\cite[Special case of Theorem~1.4]{jeannerod_computation_2016}}]
\label{thm:orderbasis}
There exists an algorithm $\algoname{PopovBasis}(d,A, \vec s)$ where the
input is an order $d \in \ZZ_+$, a polynomial matrix $A \in {\mathsf{K}}[x]^{n
\times m}$ of degree at most $d$, and shift $\vec s \in \ZZ^n$,
and which returns $(F, \vec \delta)$, where
$F$ is an $\vec s$-minimal approximant basis of $A$ of order $d$,
$F$ is in $\vec s$-Popov form, and $\vec \delta = \word{rowdeg}_{\vec s} F$.
\algoname{PopovBasis} has complexity
$O(n^{\omega-1}\, {\mathsf{M}}(\sigma)\, (\log \sigma) \, (\log \sigma /n)^2)$ operations in ${\mathsf{K}}$, where $\sigma = md$.
\end{theorem}
Our next result says that we can quickly compute the first row of $\word{adj}(F)$ if $F$ is a minimal approximant basis in Popov form.
In particular, since $F$ is an approximant basis, $\det F = x^D$ for some $D \leq \sigma$, where $\sigma = md$ as in \cref{thm:orderbasis}.
\begin{theorem} \label{thm:fastsolver}
Let $F \in {\mathsf{K}}[x]^{n \times n}$ be in Popov form and with $\det F = x^D$ for some $D \in \ZZ_{\geq 0}$.
Then the first row of $\word{adj}(F)$ can be computed in $O(n^{\omega-1}\, {\mathsf{M}}(D)\, (\log D) \, (\log D/n))$ operations in ${\mathsf{K}}$.
\end{theorem}
\begin{proof}
Because $F$ is in Popov form, $D$ is the sum of the column degrees of $F$.
We consider two cases: $D \geq n$ and $D < n$.
First suppose $D
\geq n$. Partial linearisation
\cite[Corollary~2]{GuptaSarkarStorjohannValeriote11}
can produce from $F$, with
no operations in ${\mathsf{K}}$, a new matrix $G \in {\mathsf{K}}[x]^{\bar n \times \bar n}$ with
dimension $\bar{n} < 2n$, $\deg G \leq \lceil D/n\rceil$, $\det G = \det F$,
and such that $F^\mo$ is equal to the principal $n \times n$
sub-matrix of $G^\mo$. Let $\vec v \in {\mathsf{K}}[x]^{1 \times \bar{n}}$
be the first row of $x^DI_{\bar{n}}$.
Then the first row of $\word{adj}(F)$ will be the first $n$ entries of
the first row of $\vec vG^{-1}$. High-order $X$-adic lifting
\cite[Algorithm~5]{storjohann_high-order_2003} using the modulus $X=(x-1)^{\lceil
D/n \rceil}$ will compute $\vec vG^{-1}$ in $O\big(n^{\omega}\,
{\mathsf{M}}(\lceil D/n \rceil) \,(\log \lceil D/n \rceil)\big)$ operations
in ${\mathsf{K}}$ \cite[Corollary~16]{storjohann_high-order_2003}. Since $D \geq n$
this cost estimate remains valid if we replace $\lceil D/n \rceil$
with $D/n$. Finally, from the super-linearity assumption on ${\mathsf{M}}(\cdot)$
we have $M(D/n) \leq (1/n) {\mathsf{M}}(D)$, thus matching our target cost.
Now suppose $D < n$. In this case we cannot directly appeal to the
partial linearisation technique, since the resulting cost
$O(n^{\omega} \lceil D/n\rceil)$ may be asymptotically
larger than our target cost. But $D < n$ means that $F$ has
(possibly many) columns of degree $0$; since $F$ is in Popov form,
such a column has a $1$ on the diagonal and $0$ in all
remaining entries. We now describe how to essentially
ignore those columns. Afterwards, $D$ is at least the
number of remaining columns, so partial linearisation again
yields the desired gain.
If $n-k$ is the number of such columns in $F$, then we can find a
permutation matrix $P$ such that
\[
\hat{F} := PFP^\top = \left [ \begin{array}{c|c}
F_1 & \\\hline
F_2 & I_{n-k}
\end{array} \right ] \ ,
\]
with each column of $F_1$ having degree strictly greater than zero.
Let $i$ be the row index of the single 1 in the first column of
$P^\top$. Since $F^{-1} = P^\top \hat{F}^{-1}P$, we have
\begin{equation}
\label{first}
{\rm row}(\word{adj}(F),1)P^{-1} = x^D\, {\rm row}(\hat{F}^{-1},i).
\end{equation}
Considering that
\[
\hat{F}^{-1} = \left [ \begin{array}{c|c} F_1^{-1} & \\\hline -F_2F_1^{-1} & I_{n-k} \end{array} \right ],
\]
it will suffice to compute the first $k$ entries of the vector on
the right hand side of~(\ref{first}). If $i \leq k$ then let $\vec
v \in {\mathsf{K}}[x]^{1 \times k}$ be row $i$ of $x^{D}I_k$. Otherwise, if
$i>k$ then let $\vec v$ be row $i-k$ of $-x^{D}F_2$. Then in both cases, $\vec
vF_1^{-1}$ will be equal to the first $k$ entries of the vector on
the right hand side of~(\ref{first}). Like before, high-order
lifting combined with partial linearisation will compute this vector
in $O\big(k^{\omega}\, {\mathsf{M}}(\lceil D/k \rceil)\,(\log \lceil D/k \rceil)
\big)$ operations in ${\mathsf{K}}$. Since $D\geq k$ the cost estimate
remains valid if $\lceil D/k \rceil$ is replaced with $D/k$.
\end{proof}
\section{Reduction to Hermite \Pade}
\label{sec:dual}
In this section we present an algorithm for solving
\cref{prob:sim_pade_basis} when $g_1 = \ldots = g_n = x^d$ for some
$d \in \ZZ_{\geq 0}$. The algorithm is based on the well-known duality
between the Simultaneous \Pade problem and the Hermite \Pade problem,
see for example~\cite{beckermann_uniform_1992}. This duality, first
observed in a special case~\cite{Mahler68}, and then later in the
general case~\cite{beckermann_recursiveness_1997}, was exploited
in~\cite{beckermann_fraction-free_2009} to develop algorithms
for the fraction free computation of Simultaneous \Pade approximation.
We begin with a technical
lemma that is at the heart of this duality.
\begin{lemma}
\label{lem:duality}
Let $\hat A, \hat B \in {\mathsf{K}}[x]^{(n+1)\times(n+1)}$ be as follows.
\begin{align*}
\hat A &=
\left [ \begin{array}{c|c}
x^d & -\vec S \\\hline
& I
\end{array} \right ]
&&
\hspace*{-1em}\hat B &=
\left [ \begin{array}{c|cccc}
1 & \\\hlineSpace{2pt}
\vec S^\top & x^d I
\end{array} \right ]
\end{align*}
Then $\hat B$ is the adjoint of $\hat A^\top$.
Furthermore, $\hat A^\top$ is an approximant basis for $\hat
B$ of order $d$, and $\hat B^\top$ is an approximant basis of
$\hat A$ of order $d$.
\end{lemma}
\begin{proof}
Direct computation shows that $\hat A^\top \hat B = x^d I_{n+1} =
(\det \hat A^\top)\, I_{n+1}$, so $\hat B$ is the adjoint of $\hat
A^\top$.
Let now $G$ be an approximant basis of $\hat B$. By the above
computation the row space of $\hat A^\top$ must be a subset of
the row space of $G$. But since $G \hat B = x^d R$ for some
$R \in {\mathsf{K}}[x]^{(n+1)\times(n+1)}$, we have $\det G = x^d \det R$. Thus
$x^d \mid \det G$. But $\det \hat A^\top = x^d$, so the row space
of $\hat A^\top$ can not be smaller than the row space of $G$.
That is, $\hat A^\top$ is an approximant basis for $\hat B$ of order
$d$. Transposing the argument shows that $\hat
B^\top$ is an approximant basis of $\hat A$ of order $d$.
\end{proof}
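As a quick sanity check of the identity $\hat A^\top \hat B = x^d I$ at the heart of this duality, the following sketch (illustrative only, not part of the paper's algorithms) verifies it for a small instance with $n = 2$ and $d = 3$, using naive coefficient-list arithmetic over the integers; the two series $S_1, S_2$ are arbitrary choices.

```python
# Sanity check of \hat A^\top \hat B = x^d I for n = 2, d = 3.
# Polynomials are coefficient lists (index = degree).

def pmul(a, b):
    # product of two coefficient lists
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def padd(a, b):
    # sum of two coefficient lists
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def trim(a):
    # drop trailing zero coefficients (keep at least one entry)
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def matmul(A, B):
    # product of matrices whose entries are coefficient lists
    C = []
    for row in A:
        crow = []
        for j in range(len(B[0])):
            acc = [0]
            for t in range(len(B)):
                acc = padd(acc, pmul(row[t], B[t][j]))
            crow.append(trim(acc))
        C.append(crow)
    return C

d = 3
xd = [0, 0, 0, 1]                      # x^3
S1, S2 = [1, 2, 0, 5], [3, 0, 1, 4]    # arbitrary truncated series
neg = lambda p: [-c for c in p]

A_hat = [[xd, neg(S1), neg(S2)],       # \hat A as in the lemma
         [[0], [1], [0]],
         [[0], [0], [1]]]
B_hat = [[[1], [0], [0]],              # \hat B as in the lemma
         [S1, xd, [0]],
         [S2, [0], xd]]
A_hat_T = [[A_hat[j][i] for j in range(3)] for i in range(3)]
P = matmul(A_hat_T, B_hat)
assert all(P[i][j] == (xd if i == j else [0])
           for i in range(3) for j in range(3))   # P = x^3 * I
```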
\begin{theorem}
\label{thm:dualityMinbasis}
Let $A$ and $B$ be as follows.
\begin{align*}
A &= \left [ \begin{array}{cccc}
-\vec S \\\hlineSpace{1pt}
I
\end{array} \right ] \in {\mathsf{K}}[x]^{(n+1) \times (n+1)}
&&&
\hspace*{-1em}B &= \left [ \begin{array}{c}
1 \\ \vec S
\end{array} \right] \in {\mathsf{K}}[x]^{(n+1) \times 1}
\end{align*}
If $G$ is an $\vec N$-minimal approximant basis of $B$ of order $d$ with shift
$\vec N \in \ZZ_{\geq 0}^{n+1}$, then
$\word{adj}(G^\top)$ is a $(-\vec N)$-minimal
approximant basis of $A$ of order $d$. Moreover,
if $\vec \eta = \word{rowdeg}_{\vec N} G$, then
$\word{rowdeg}_{-\vec N} \word{adj}(G^\top) =(\eta - N - \eta_1,\ldots , \eta - N
-\eta_{n+1})$, where $\eta = \sum_i \eta_i$ and $N = \sum_i N_i$.
\end{theorem}
\begin{proof}
Introduce $\hat A$ and $\hat B$ as in \cref{lem:duality}.
Clearly $G$ is also an $\vec N$-minimal approximant basis of $\hat B$ of order $d$.
Likewise, $\hat A$ and $A$ have the same minimal approximant bases for given order and shift.
Assume, without loss of generality, that we have scaled $G$ such
that $\det G$ is monic. Since $\hat A^\top$ is also an approximant
basis for $\hat B$ of order $d$, then $\det G = \det \hat A^\top
= x^d$. By definition $G\hat B = x^d R$ for some matrix $R \in
{\mathsf{K}}[x]^{(n+1)\times(n+1)}$. That means
\begin{align*}
x^{2d}((G\hat B)^\top)^\mo &= x^{2d}((x^d R)^\top)^\mo \ , & \textrm{so} \\
(x^d(G^\top)^\mo)(x^d(\hat B^\top)^\mo) &= x^{d}(R^\top)^\mo \ , & \textrm{that is} \\
\word{adj}(G^\top) \hat A &= x^d (R^\top)^\mo \ .
\end{align*}
Now $\det R = 1$ since $(x^d)^{n+1} \det R = \det(G\hat B) =
x^{d+nd}$, so $(R^\top)^\mo = \word{adj}(R^\top) \in {\mathsf{K}}[x]^{(n+1)
\times (n+1)}$.
Therefore $\word{adj}(G^\top)$ is an approximant basis of $\hat A$
of order $d$.
The theorem now follows from \cref{lem:adjointRowReduced}
by noting that $G$ is $\vec N$-row reduced.
\end{proof}
\begin{example}
We apply \cref{thm:dualityMinbasis} to the problem of \cref{ex:simpade} with shifts $\vec N = (5, 3, 4, 5)$.
We have
\begin{align*}
A &=
\left[\begin{array}{rrr}
x^{4} + x^{2} + 1 & x^{4} + 1 & x^{4} + x^{3} + 1 \\
1 & & \\
& 1 & \\
& & 1
\end{array}\right]
\\
B &= \left[\begin{array}{r}
1 \\ x^4 + x^2 + 1 \\ x^{4} + 1 \\ x^{4} + x^{3} + 1
\end{array}\right]
\end{align*}
An $\vec N$-minimal approximant basis to order $d = 5$ of $B$ is
\begin{align*}
G &=
\left[\begin{array}{rrrr}
x & 0 & x & 0 \\
1 & x^{2} + 1 & 0 & 0 \\
0 & 1 & x^{2} + 1 & 0 \\
0 & x & x + 1 & 1
\end{array}\right] , \textrm{ and}
\\
\word{adj}(G)^\top &=
\left[\begin{array}{rrrr}
x^{4} + 1 & x^{2} + 1 & 1 & x^{3} + 1 \\
x & x^{3} + x & x & x^{4} + x \\
x^{3} + x & x & x^{3} + x & x^{4} + x^{3} + x \\
0 & 0 & 0 & x^{5}
\end{array}\right]
\ .
\end{align*}
$\word{adj}(G)^\top$ can be confirmed to be a $(-\vec N)$-minimal approximant basis of
$A$, since $\word{adj}(G)^\top A \equiv 0 \mod x^d$, and since the $(-\vec N)$-leading coefficient matrix of $\word{adj}(G)^\top$ has full rank.
\end{example}
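Both defining properties of this example can be checked mechanically. The sketch below (illustrative only) encodes the GF(2) polynomials above as bitmasks, with bit $k$ holding the coefficient of $x^k$, and verifies $G B \equiv 0 \bmod x^5$ as well as $\word{adj}(G)^\top A \equiv 0 \bmod x^5$.

```python
# Verify G*B = 0 mod x^5 and adj(G)^T * A = 0 mod x^5 over GF(2).
# Polynomials are bitmasks: bit k is the coefficient of x^k,
# e.g. 0b10101 encodes x^4 + x^2 + 1.

def pmul2(a, b):
    # carry-less (GF(2)) polynomial multiplication on bitmasks
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r

d = 5
low = (1 << d) - 1                        # mask for terms of degree < 5

G = [[0b10, 0, 0b10, 0],                  # the matrix G of the example
     [0b1, 0b101, 0, 0],
     [0, 0b1, 0b101, 0],
     [0, 0b10, 0b11, 0b1]]
B = [0b1, 0b10101, 0b10001, 0b11001]      # the column vector B

adjGT = [[0b10001, 0b101, 0b1, 0b1001],   # adj(G)^T of the example
         [0b10, 0b1010, 0b10, 0b10010],
         [0b1010, 0b10, 0b1010, 0b11010],
         [0, 0, 0, 0b100000]]
A = [[0b10101, 0b10001, 0b11001],         # the matrix A of the example
     [0b1, 0, 0],
     [0, 0b1, 0],
     [0, 0, 0b1]]

GB = [0] * 4
for i in range(4):
    for j in range(4):
        GB[i] ^= pmul2(G[i][j], B[j])
assert all(v & low == 0 for v in GB)      # G*B = 0 mod x^5

prod = [[0] * 3 for _ in range(4)]
for i in range(4):
    for j in range(3):
        for k in range(4):
            prod[i][j] ^= pmul2(adjGT[i][k], A[k][j])
assert all(prod[i][j] & low == 0          # adj(G)^T * A = 0 mod x^5
           for i in range(4) for j in range(3))
```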
Algorithm~\ref{alg:simpadedual} uses \cref{thm:dualityMinbasis} to solve a Simultaneous \Pade approximation by computing a minimal approximant basis of $B$ in Popov form.
\begin{algorithm}[t]
\caption{\algoname{DualitySimPade}}
\label{alg:simpadedual}
\begin{algorithmic}[1]
%
\Require{$(\vec S, (x^d,\ldots, x^d), \vec N)$, an instance of
\cref{prob:sim_pade_basis} of size~$n$.
}
\Ensure{$(\vec \lambda, \vec \delta)$, solution specification.}
\State $B \leftarrow [ 1, S_1, \ldots, S_n ]^\top \in {\mathsf{K}}[x]^{(n+1) \times 1}$
\State $G \leftarrow \algoname{PopovBasis}(d, B, \vec N)$
\label{line:simpadedual:basis}
\State $\vec \eta \leftarrow \word{rowdeg}_{\vec N} G$
\State $\hat{\vec \lambda} \leftarrow $ first column of $\word{adj}(G^\top)$
\label{line:simpadedual:firstcol}
\State $\hat{ \vec\delta} \leftarrow (\eta - N -\eta_1, \ldots, \eta - N - \eta_{n+1})$,
where $\eta = \sum_i \eta_i$ and $N = \sum_i N_i$
\label{line:simpadedual:degrees}
\State $I \leftarrow \{ i \mid \hat{\vec\delta}_i < 0 \}$, and $k \leftarrow |I|$
\State $(\vec \lambda,\ \vec \delta) \leftarrow \big( (\hat{\vec\lambda}_i)_{i \in I},\ (\hat{\vec\delta}_i)_{i \in I}
\big) \in {\mathsf{K}}[x]^{k \times 1} \times \ZZ^{k} $
\State $\textbf{return } (\vec \lambda, \vec \delta)$
\end{algorithmic}
\end{algorithm}
\begin{theorem} \cref{alg:simpadedual} is correct.
The cost of the algorithm is $O(n^{\omega-1}\, {\mathsf{M}}(d) (\log d) (\log d/n)^2)$
operations in ${\mathsf{K}}$.
\end{theorem}
\begin{proof}
Correctness follows from \cref{thm:dualityMinbasis}. The complexity
estimate is achieved if the algorithms supporting \cref{thm:orderbasis}
and \cref{thm:fastsolver} are used for the computations in
lines~\ref{line:simpadedual:basis} and~\ref{line:simpadedual:firstcol}, respectively.
\end{proof}
\section{A Divide \& Conquer algorithm}
\label{sec:intersect}
Our second algorithm can handle the full generality of \cref{prob:sim_pade_basis}.
It works by first solving $n$ single \Pade approximations, one for each of the $S_i$ individually, and then intersecting these solutions to form approximations of multiple $S_i$ simultaneously.
The intersection is structured in a Divide \& Conquer tree, and performed by computing minimal approximant bases.
Let $(\vec S, \vec g, \vec N)$ be an instance of \cref{prob:sim_pade_basis}
of size $n$.
The idea of the intersection algorithm is the following:
consider that we have solution specifications for two different Simultaneous \Pade problems, $(\vec\lambda_1, \vec\delta_1)$ and $(\vec\lambda_2, \vec\delta_2)$.
We then compute an approximant basis $G$ of the following matrix:
\begin{equation}
\label{eqn:intersect_R}
R =
\left[\begin{array}{@{}c|c@{}}
1 & 1 \\ \hline
-\vec\lambda_1 & \\\hline
& -\vec\lambda_2 \\
\end{array}\right]
\end{equation}
$G$ then encodes the \emph{intersection} of the ${\mathsf{K}}[x]$-linear combinations of the $\vec\lambda_1$ with the ${\mathsf{K}}[x]$-linear combinations of the $\vec\lambda_2$:
any $\lambda \in {\mathsf{K}}[x]$ residing in both sets of polynomials will appear as the first entry of a vector in the row space of $G$.
We compute $G$ as an $\vec r$-minimal approximant basis to high enough order, where $\vec r$ is selected carefully such that the $\vec r$-degree of any $(\lambda \mid \ldots) \in \word{Row}(G)$ will equal the $(-\vec N)$-degree of the completion of $\lambda$ according to the combined Simultaneous \Pade problem, whenever this degree is negative.
From those rows of $G$ with negative $\vec r$-degree we then get a solution specification for the combined problem.
\begin{example}
Consider again \cref{ex:simpade}.
We divide the problem into two sub-problems $\vec S_1 = (S_1, S_2)$, $\vec N_1 = (5, 3, 4)$, and $\vec S_2 = (S_3)$ and $\vec N_2 = (5, 5)$.
Note that $N_{1,0} = N_{2,0} = 5$, since this is the degree bound on the sought $\lambda$ for the combined problem.
The sub-problems have the following solution specifications and their completions:
\begin{align*}
(\vec\lambda_1, \vec \delta_1) &= \big( [ x^{4} + 1,\ x^3 + x ]^\top,\ ( -1, -1 ) \big)
\\
A_1 &=
\left(\begin{array}{rrr}
x^{4} + 1 & x^{2} + 1 & 1 \\
x^{3} + x & x & x^{3} + x
\end{array}\right)
\\
(\vec\lambda_2, \vec \delta_2) &= \big( [ x^2,\ x^3 + x + 1 ]^\top,\ ( -3, -2 ) \big)
\\
A_2 &=
\left(\begin{array}{rr}
x^{2} & x^{2} \\
x^{3} + x + 1 & x + 1
\end{array}\right)
\end{align*}
We construct $R$ as in \eqref{eqn:intersect_R}, and compute $G$, a minimal approximant basis of $R$ of order $7$ and with shifts $\vec r= (-5 \mid -1, -1 \mid -3, -2)$ (the $G$ below is actually in $\vec r$-Popov form):
\[
G = \left(\begin{array}{rrrrr}
x^{8} & 0 & 0 & 0 & 0 \\
x^{3} + x + 1 & x^{4} + 1 & 1 & 0 & 1 \\
x^{3} + x^{2} + x + 1 & 1 & x + 1 & 1 & 1 \\
x^{4} + x^{3} + x + 1 & 1 & 1 & x^{2} & 1 \\
x^{4} + 1 & 1 & 0 & x + 1 & x + 1
\end{array}\right)
\]
$G$ has $\vec r$-row degree $(3, 3, 0, -1, -1)$.
Only rows 4 and 5 have negative $\vec r$-degree, and their first entries are the linearly independent solutions $x^4 + x^3 + x + 1$ and $x^4 + 1$.
Both solutions complete into vectors with $(-\vec N)$-degree -1.
\end{example}
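The claimed congruence in this example can again be checked mechanically over GF(2). The following sketch (illustrative only; bitmask encoding, bit $k$ = coefficient of $x^k$) verifies that every row of $G$ annihilates $R$ modulo $x^7$, so in particular the rows with negative $\vec r$-degree encode approximants of the combined problem.

```python
# Verify G*R = 0 mod x^7 over GF(2) for the intersection example.
# Polynomials are bitmasks: bit k is the coefficient of x^k.

def pmul2(a, b):
    # carry-less (GF(2)) polynomial multiplication on bitmasks
    r = 0
    while a:
        if a & 1:
            r ^= b
        a >>= 1
        b <<= 1
    return r

order = 7
low = (1 << order) - 1                 # mask for terms of degree < 7

lam1 = [0b10001, 0b1010]               # x^4 + 1,  x^3 + x
lam2 = [0b100, 0b1011]                 # x^2,      x^3 + x + 1

# R as in the displayed equation; over GF(2) the signs are immaterial
R = [[0b1, 0b1],
     [lam1[0], 0],
     [lam1[1], 0],
     [0, lam2[0]],
     [0, lam2[1]]]

G = [[0b100000000, 0, 0, 0, 0],        # the r-Popov basis G above
     [0b1011, 0b10001, 0b1, 0, 0b1],
     [0b1111, 0b1, 0b11, 0b1, 0b1],
     [0b11011, 0b1, 0b1, 0b100, 0b1],
     [0b10001, 0b1, 0, 0b11, 0b11]]

GR = [[0, 0] for _ in range(5)]
for i in range(5):
    for j in range(2):
        for k in range(5):
            GR[i][j] ^= pmul2(G[i][k], R[k][j])
assert all(GR[i][j] & low == 0         # G*R = 0 mod x^7
           for i in range(5) for j in range(2))
```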
To prove the correctness of the above intuition, we will use \cref{alg:simpadedirect} (\algoname{DirectSimPade}).
The following lemma says that to solve two simultaneous \Pade approximations, one can compute a minimal approximant basis of one big matrix $A$ constructed essentially from two of the matrices employed in \algoname{DirectSimPade}.
Afterwards, \cref{lem:recursive_R_solves} uses this to show that a minimal approximant basis of $R$ in \eqref{eqn:intersect_R} provides the crucial information in a minimal approximant basis of $A$.
\begin{algorithm}[t]
\caption{\algoname{RecursiveSimPade}}
\label{alg:recsimpade}
\begin{algorithmic}[1]
%
\Require{$(\vec S, \vec g, \vec N)$, an instance of
\cref{prob:sim_pade_basis} of size $n$.}
\Ensure{$(\vec \lambda, \vec \delta)$, a solution specification.}
\If{$n=1$}
\State $\textbf{return } \algoname{DirectSimPade}(\vec S, \vec g, \vec N)$
\Else
\State $\vec S_1, \vec g_1 \leftarrow $
the first $\ceil{n/2}$ elements of $\vec S, \vec g$
\State $\vec S_2, \vec g_2 \leftarrow $
the last $\floor{n/2}$ elements of $\vec S, \vec g$
\State $\vec N_1
\leftarrow (N_0,N_1,\ldots,N_{\ceil{n/2}})$
\State $\vec N_2
\leftarrow (N_0,N_{\ceil{n/2}+1},\ldots,N_n)$
\State $(\vec \lambda_1, \vec \delta_1) \leftarrow \algoname{RecursiveSimPade}\big(\vec S_1, \vec g_1, \vec N_1)$
\State $(\vec \lambda_2, \vec \delta_2) \leftarrow \algoname{RecursiveSimPade}
\big(\vec S_2, \vec g_2, \vec N_2)$
\State $\vec r \leftarrow (-N_0 \mid \vec \delta_1 \mid \vec \delta_2)$
\State $d \leftarrow N_0 + \max_i \deg g_i - 1$ \label{line:recursive:choosed}
\State \label{lineH} $R \leftarrow
\left[\begin{array}{c|c}
1 & 1\\ \hline
-\vec \lambda_1 & \\\hline
& -\vec \lambda_2
\end{array}\right]$
\State $(\left [\begin{array}{c|c} \vec \lambda & \ast \end{array} \right ],
\vec \delta) \leftarrow \algoname{NegMinBasis}(d, R, \vec r)$
\label{calltoNegMin}
\State $\textbf{return } (\vec \lambda, \vec \delta)$
\EndIf
\end{algorithmic}
\end{algorithm}
\begin{lemma}
\label{lem:recursive_big_matrix}
Let $(\vec S_1, \vec g_1,\vec N_1)$ and $(\vec S_2, \vec g_2,\vec N_2)$ be two instances of \cref{prob:sim_pade_basis} of lengths $n_1, n_2$ respectively, and where $\vec N_1 = (N_0 \mid \grave{\vec N_1})$ and $\vec N_2 = (N_0 \mid \grave{\vec N_2})$.
Let $\vec S = (\vec S_1 \mid \vec S_2)$, $\vec g = (\vec g_1 \mid \vec g_2)$ and $\vec N = (N_0 \mid \grave{\vec N_1} \mid \grave{\vec N_2})$ be the combined problem having length $n = n_1 + n_2$.
Let $\vec h_i = (-\vec N_i \mid N_0 -1, \ldots, N_0 -1) \in \ZZ^{2n_i+1}$ for $i=1,2$.
Let $(F, \vec\delta) = \algoname{NegMinBasis}(d, A, \vec a)$, where $A$ of dimension $(2n+3) \times (n+2)$ is given as:
\[
A = \left [ \begin{array}{c|c}
A_1 & A_2
\end{array} \right ]
= \left [ \begin{array}{cc|cc}
& & 1 & 1 \\
-\vec S_1 & & -1 \\
I & & \\
\diagg[1] & & \\
& -\vec S_2 & & -1 \\
& I & \\
& \diagg[2] &
\end{array} \right ] ,
\]
with $\vec a = (- N_0 \mid \vec h_1 \mid \vec h_2)$ and $d = N_0 + \max_i \deg g_i - 1$.
Then $(\vec \lambda, \vec \delta)$ is a solution specification to $(\vec S, \vec g, \vec N)$, where $\vec \lambda$ is the first column of $F$.
\end{lemma}
\begin{proof}
Note that the matrix $A$ is right equivalent to the following matrix $B$:
\[
B := A
\left [ \begin{array}{cccc}
& & I & \\
& & & I \\
1 & & \vec S_1 & \\
& 1 & & \vec S_2
\end{array} \right ] =
\left [ \begin{array}{cc|cc}
1 & 1 & - \vec S_1 & -\vec S_2 \\
-1 & & & \\
& & I & \\
& & \diagg[1] & \\
& -1 & & \\
& & & I \\
& & & \diagg[2]
\end{array} \right ].
\]
Since $F$ is an ${\vec a}$-minimal approximant of $A$ of order $d$, then it will also be one for $B$.
Let $P$ be the permutation matrix that produces the following
matrix $C := P B$:
\[
\setlength{\arraycolsep}{.8\arraycolsep}
C = P B =
\left [ \begin{array}{cc|cc}
1 & 1 & - \vec S_1 & -\vec S_2 \\
& & I & \\
& & & I \\
& & \diagg[1] & \\
& & & \diagg[2] \\\hline
-1 & & & \\
& -1 & &
\end{array} \right ] =
\left [ \begin{array}{cc|c}
1 & 1 & - \vec S \\ & & I \\ & & \diagg
\\\hline
-1 & & \\
& -1 &
\end{array} \right ]
\]
Define $\vec c := \vec a P^{-1}$, and note that $\vec c = (\vec h \mid -N_0, -N_0)$.
Since $F = \algoname{NegMinBasis}(d, A, \vec a)$, then $(FP^{-1}, \vec \delta)$ is a valid output of $\algoname{NegMinBasis}(d, C, \vec c)$.
Furthermore, since the first column of $P$ is $(1, 0, \ldots, 0)$, the first column of $F$ will be equal to the first column of $FP^\mo$.
We are therefore finished if we can show that if $(F', \vec\delta')$ is any valid output of $\algoname{NegMinBasis}(d, C, \vec c)$, then the first column of $F'$ together with $\vec \delta'$ form a solution specification to $(\vec S, \vec g, \vec N)$.
Consider therefore such an $(F', \vec \delta')$.
By the first two columns of $C$, we must have $F'_{*,1} \equiv F'_{*,2n+2} \equiv F'_{*,2n+3} \mod x^d$, where $F'_{*,i}$ denotes the $i$'th column of $F'$.
Since each row of $F'$ has negative $\vec c$-degree, and since $N_0 < d$, the congruences must lift to equalities.
We can therefore write $F' = [ G \mid F'_{*,1} \mid F'_{*,1} ]$ for some $k$ and some $G \in {\mathsf{K}}[x]^{k \times (2n+1)}$, and we have $\word{rowdeg}_{\vec h} G = \word{rowdeg}_{\vec c} F' = \vec \delta'$.
By the last $n$ columns of $C$, we have $G H \equiv 0 \mod x^d$, where
\[
H =
\left[\begin{array}{c}
-\vec S \\
I \\
\diagg
\end{array}\right] \ .
\]
In fact, $(G, \vec \delta')$ is a valid output for $\algoname{NegMinBasis}(d, H, \vec h)$: for $G$ has full row rank since $F'$ does; $G$ is $\vec h$-row reduced since $F'$ is $\vec c$-row reduced; and any negative $\vec h$-order $d$ approximant of $H$ must clearly be in the span of $G$ since $F'$ is a negative $\vec c$-minimal approximant basis of $C$.
By the choice of $d$, \cref{thm:simpadedirect} therefore implies that the first column of $G$ together with $\vec \delta'$ form a solution specification to the problem $(\vec S, \vec g, \vec N)$.
Since the first column of $G$ is also the first column of $F'$, this finishes the proof.
\end{proof}
\begin{lemma}
\label{lem:recursive_R_solves}
In the context of \cref{lem:recursive_big_matrix}, let $(\vec \lambda_1, \vec \delta_1)$ and $(\vec \lambda_2, \vec \delta_2)$ be solution specifications to the two sub-problems, and let $\vec r = (-N_0 \mid \vec \delta_1 \mid \vec \delta_2)$.
If $([ \vec\lambda \mid * ], \vec\delta) = \algoname{NegMinBasis}(d, R, \vec r)$, where $\vec \lambda$ is a column vector and
\[
R =
\left[\begin{array}{c|c}
1 & 1\\ \hline
-\vec \lambda_1 & \\\hline
& -\vec \lambda_2
\end{array}\right] \ ,
\]
then $(\vec\lambda, \vec\delta)$ is a solution specification for the combined problem.
\end{lemma}
\begin{proof}
We will prove the lemma by using \cref{lem:paderecprune} to relate valid outputs of $\algoname{NegMinBasis}(d, R, \vec r)$ with valid outputs of $\algoname{NegMinBasis}(d, A, \vec a)$ from \cref{lem:recursive_big_matrix}.
For $i=1,2$, since $(\vec \lambda_i, \vec \delta_i)$ is a solution specification to the $i$'th problem, then by \cref{thm:simpadedirect} there is some $G_i \in {\mathsf{K}}[x]^{k_i \times (2n_i+1)}$ whose first column is $\vec\lambda_i$ and such that $G_i$ is a valid output of $\algoname{NegMinBasis}(d, H_i, \vec h_i)$, where
\[
H_i = \left [ \begin{array}{c}
-\vec S_i \\
I \\
\diagg[i]
\end{array} \right ] \in {\mathsf{K}}[x]^{(2n_i+1) \times n_i} ,
\]
and $\vec h_i$ is as in \cref{lem:recursive_big_matrix}.
Note now that if
\[
F_1 := \left [\begin{array}{ccc}
1 & & \\
& G_1 \\
& & G_2
\end{array} \right ] \in {\mathsf{K}}[x]^{(k_1+k_2+1) \times (2n_1 + 2n_2 + 3)} ,
\]
then $(F_1, \vec r)$ is a valid output of $\algoname{NegMinBasis}(d, A_1, \vec a)$: for $\word{rowdeg}_{\vec a} F_1$ is clearly $\vec r$; $F_1$ has full row rank and is $\vec r$-row reduced; and the rows of $F_1$ must span all $\vec a$-order $d$ approximants of $A_1$, since the three column ``parts'' of $F_1$ correspond to the three row parts of $A_1$.
Note now that $F_1 A_2 = R$.
Thus by \cref{lem:paderecprune}, if $(F_2, \vec \delta) = \algoname{NegMinBasis}(d, R, \vec r)$, then $(F_2 F_1, \vec \delta)$ is a valid output of $\algoname{NegMinBasis}(d, A, \vec a)$.
Note that by the shape of $F_1$ then the first column $\vec\lambda$ of $F_2 F_1$ is the first column of $F_2$.
Thus $\vec\lambda, \vec\delta$ are exactly as stated in the lemma, and by \cref{lem:recursive_big_matrix} they must be a solution specification to the combined problem.
\end{proof}
\begin{theorem}
\cref{alg:recsimpade} is correct. The cost of
the algorithm is
$O(n^{\omega-1}\, {\mathsf{M}}(d) (\log d) (\log d/n)^2)$ operations in ${\mathsf{K}}$,
where $d = \max_i \deg g_i$.
\end{theorem}
\begin{proof}
Correctness follows from \cref{lem:recursive_R_solves}.
For the complexity, note that the choice of the order in \cref{line:recursive:choosed} is bounded by $2\max_i \deg g_i$, i.e.\ twice the value of $d$ in this theorem.
So if $T(n)$ is the cost of \cref{alg:recsimpade} for a given $n$, where the order is bounded by $O(d)$, then we have the following recursion:
\[
T(n) = \left \{\begin{array}{ll}
2T(n/2) + P(n) & \textrm{if } n > 1 \\
O({\mathsf{M}}(d)\log d) & \textrm{if } n = 1 \textrm{ (see \cref{sec:direct_reduced_basis})}
\end{array}\right . \ ,
\]
where $P(n)$ is the cost of line \ref{calltoNegMin}.
Using algorithm \algoname{PopovBasis} for the computation
of the negative part of the minimal approximant bases
we can set $P(n)$ to the target cost.
The recursion then implies $T(n) \in O(P(n))$.
\end{proof}
\noindent
{\bf Acknowledgements.}
The authors would like to thank George Labahn for valuable
discussions, and for making us aware of the Hermite--Simultaneous
\Pade duality. We would also like to thank Vincent Neiger for
making preprints of \cite{jeannerod_computation_2016} available to
us. The first author would like to thank the Digiteo Foundation
for funding the research visit at Waterloo, during which most of
the ideas of this paper were developed.
\bibliographystyle{abbrv}
Pnie – a village in Poland, located in the Masovian Voivodeship, in Białobrzegi County, in Gmina Promna.
From 1975 to 1998 the village administratively belonged to the Radom Voivodeship.
The village was founded at the turn of the 19th and 20th centuries (it is recorded in the 1921 national census).
The name derives from the word "pień" ("tree stump"). The settlement arose on former manorial land and in 1921 it had 15 houses and 104 inhabitants.
Today the village has a fire station with more than 50 years of tradition, equipped with two fire engines.
The "OSP Pnie" volunteer fire brigade takes an active part in many rescue and firefighting operations.
In the centre of the village there is a pond, the showpiece of Pnie, as it is the only body of water of this size in the area (the water surface covers almost 2 hectares). It is regularly stocked with fish.
Most of the inhabitants make their living from fruit growing.
The gently rolling terrain with fertile soils favours the cultivation of fruit trees.
References
External links
Municipality website. History of the village
Promna (gmina)
\section{Introduction}
The pioneering work of Voiculescu \cite{Voi91} identified the eigenvalue density of
the sum of two Hermitian $N\times N$ matrices $A$ and $B$ in a general relative position as
the free additive convolution of the eigenvalue densities $\mu_A$ and $\mu_B$ of $A$ and $B$. The primary example
for general relative position is asymptotic freeness that can be generated by conjugation via a Haar distributed
unitary matrix. In fact, under some mild regularity condition on $\mu_A$ and $\mu_B$, {\it local laws} also hold, asserting that
the empirical eigenvalue density of the sum converges on small scales as well. The optimal
precision in such local law pins down the location of individual eigenvalues with
an error bar that is just slightly above the local eigenvalue spacing. With an optimal error term, it
identifies the speed of convergence of order $N^{-1+\epsilon}$ in Voiculescu's limit theorem.
After several gradual improvements on the precision in \cite{Kargin2012, Kargin, BES15}, the local law
on the optimal $N^{-1+\epsilon}$ scale was established in \cite{BES15b} and the optimal convergence speed was
obtained in \cite{BES16}. All these results were, however, restricted to the {\it regular bulk} spectrum, \emph{i.e., }
to the spectral regime where the density of the free convolution is non-vanishing and bounded from above.
In particular, the regime of the spectral edges was not covered. Under mild conditions on the
limiting eigenvalue densities of $A$ and $B$, the free convolution density always vanishes as the square-root
function near the edges of its support. We call such edges {\it regular}. We remark that the regular edge is typical in many random matrix models, for instance, the semicircle law; \emph{i.e., }
the limiting density for Wigner matrices.
Near the edges the eigenvalues are sparser hence they fluctuate more; naively, the extreme
eigenvalues might be prone to very large fluctuations due to the room available to them
on the opposite side of the support. Nevertheless, for Wigner matrices and many related ensembles
with independent or weakly dependent entries it has been shown that the eigenvalue fluctuation
does not exceed its natural threshold, the local spacing, even at the edge; see \emph{e.g., }~\cite{EKYY, LS13, AEK15} and references therein.
In general, it implies a very strong concentration of the empirical measure.
For the smallest and largest eigenvalues
it means a fluctuation of order $N^{-2/3}$.
In fact, the precise fluctuation is universal and it
follows the Tracy--Widom distribution; see \emph{e.g., }~\cite{TW, BEY14, LS16} for proofs in various models.
In this paper we present a comprehensive edge local law on optimal scale and with optimal precision for the ensemble $A+UBU^*$
where $U$ is Haar unitary. We assume that the laws of $A$ and $B$ are close to continuous limiting profiles $\mu_\alpha$ and $\mu_\beta$
with a single interval support and power law behavior at the edge with exponent less than one. We prove that
the free convolution $\mu_\alpha\boxplus\mu_\beta$ has a square root singularity at its edge and $\mu_A\boxplus \mu_B$
closely trails this behavior. Furthermore, we establish that the eigenvalues of $A+UBU^*$ follow $\mu_A\boxplus \mu_B$
down to the scale of the local spacing, uniformly throughout the spectrum. In particular, we show that the extreme eigenvalues
are in the optimal $N^{-\frac23+\varepsilon}$ vicinity of the deterministic spectral edges. Previously, a similar result was only known with $o(1)$ precision; see \cite{CM} for instance. We expect that the Tracy--Widom law holds at the regular edge of our additive model. Very recently, bulk universality has been demonstrated in \cite{CL}.
Our analysis also implies an optimal rate of
convergence in Voiculescu's global law for free convolution densities with the typical square root edges.
The result demonstrates that the Haar randomness in the additive model has a similarly strong concentration
of the empirical density as already proved for the Wigner ensemble earlier. In fact, the additive model is only the simplest prototype
of a large family of models involving polynomials of Haar unitaries and deterministic matrices; other examples include
the ensemble in the single ring theorem \cite{GKZ11, BES16b}.
The technique developed in the current paper can potentially handle square root edges in more complicated ensembles
where the main source of randomness is the Haar unitaries.
After the statement of the main result and the introduction of a few basic quantities,
we show in Section~\ref{s. subordination at the edge} that $\mu_\alpha\boxplus\mu_\beta$ has, under suitable conditions, a square root singularity at the lowest edge, and we establish stability properties of the subordination equations around that edge. In Section~\ref{sec:general} we give an informal outline of the proof, explaining the main difficulties stemming from the edge
in contrast to the related analysis in the bulk. Here we highlight only the key point.
A typical proof of the local laws has two parts: $(i)$ stability analysis of a deterministic (Dyson) equation for the limiting eigenvalue
distribution, and $(ii)$ a proof that the empirical density approximately satisfies the Dyson equation,
together with an estimate of the error. Given these two inputs, the local law follows by simply inverting the Dyson equation.
For our model the Dyson equation is actually the pair of the {\it subordination equations},
that define the free convolution.
Near the spectral edge, the subordination equations become unstable. A similar phenomenon is well known
for the Dyson equation of Wigner type models, but it has not yet been analyzed for the
subordination equations. This instability can only be compensated by a very accurate estimate
on the approximation error; a formidable task given the complexity of the analogous error estimates
in the bulk \cite{BES16}. Already the bulk analysis required carefully selected counter terms and weights
in the fluctuation averaging mechanisms before recursive moment estimates could be started.
All these ideas are used at the edge, even up to higher order, but they still fall short of the necessary precision.
The key novelty is to identify a very specific linear combination of two basic fluctuating quantities whose fluctuation is smaller
than those of its constituents, indicating a very special strong correlation between~them.
{\it Notation:}
The symbols $O(\,\cdot\,)$ and $o(\,\cdot\,)$ stand for the standard big-O and little-o notation. We use~$c$ and~$C$ to denote positive finite constants that do not depend on the matrix size~$N$. Their values may change from line to line.
We denote by $M_N({\mathbb C})$ the set of $N\times N$ matrices over ${\mathbb C}$. For a vector $\mathbf{v}\in \mathbb{C}^N$, we use $\|\mathbf{v}\|$ to denote its Euclidean norm. For $A\in M_N({\mathbb C})$, we denote by $\|A\|$ its operator norm and by $\|A\|_2$ its Hilbert-Schmidt norm. We use $\mathrm{tr}\, A=\frac{1}{N}\sum_{i} A_{ii}$ to denote the normalized trace of an $N\times N$ matrix $A=(A_{ij})_{N,N}$.
Let $\mathbf{g}=(g_1,\ldots, g_N)$ be a real or complex Gaussian vector. We write $\mathbf{g}\sim \mathcal{N}_{\mathbb{R}}(0,\sigma^2I_N)$ if $g_1,\ldots, g_N$ are independent and identically distributed (i.i.d.) $N(0,\sigma^2)$ normal variables; and we write $\mathbf{g}\sim \mathcal{N}_{\mathbb{C}}(0,\sigma^2I_N)$ if $g_1,\ldots, g_N$ are i.i.d.\ $N_{\mathbb{C}}(0,\sigma^2)$ variables, where $g_i\sim N_{\mathbb{C}}(0,\sigma^2)$ means that $\Re g_i$ and $\Im g_i$ are independent $N(0,\frac{\sigma^2}{2})$ normal variables.
For two possibly $N$-dependent numbers $a,b\in \mathbb{C}$, we write $a\sim b$ if there is a (large) positive constant $C>1$ such that $C^{-1}|a|\leq |b|\leq C|a|$.
Finally, we use double brackets to denote index sets, \emph{i.e., } for $n_1, n_2\in{\mathbb R }$, $\llbracket n_1,n_2\rrbracket\mathrel{\mathop:}= [n_1, n_2] \cap{\mathbb Z}$.
\section{Definition of the Model and main results}
\subsection{Model and assumptions} Let $A\equiv A_N=\text{diag}(a_1, \ldots, a_N)$ and $B\equiv B_N=\text{diag}(b_1, \ldots, b_N)$ be two deterministic real diagonal matrices in $ M_N(\mathbb{C})$. Let $U\equiv U_N$ be a random unitary matrix which is Haar distributed on $\mathcal{U}(N)$, where $\mathcal{U}(N)$ is the $N$-dimensional unitary group. We study the following random Hermitian matrix
\begin{align}
H\equiv H_N\mathrel{\mathop:}= A+UBU^*. \label{17080150}
\end{align}
More specifically, we study the eigenvalues of $H$, denoted by $\lambda_1\leq\ldots\leq \lambda_N$.
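As a toy illustration (not taken from the paper, and not Haar sampling) of how the spectrum of $H$ depends on the relative position of the eigenbases encoded by $U$: for $N=2$ and $A = B = \mathrm{diag}(0,1)$, taking $U = I$ gives eigenvalues $\{0, 2\}$, while conjugating $B$ by a fixed Hadamard rotation moves them to $1 \mp 1/\sqrt{2}$; the trace $\mathrm{tr}\, H = \mathrm{tr}\, A + \mathrm{tr}\, B$ is invariant in both cases.

```python
# Toy 2x2 example of H = A + U B U^*: a fixed rotation U (not a Haar
# sample) already shows that the spectrum depends on the relative
# position of the eigenbases, while the trace does not.
import math

def eig2(h11, h12, h22):
    # eigenvalues of the real symmetric matrix [[h11, h12], [h12, h22]]
    tr = h11 + h22
    det = h11 * h22 - h12 * h12
    disc = math.sqrt(tr * tr / 4 - det)
    return (tr / 2 - disc, tr / 2 + disc)

# A = diag(0, 1), B = diag(0, 1).
# U = I:  H = A + B = diag(0, 2)
lam_id = eig2(0.0, 0.0, 2.0)

# U = Hadamard rotation (1/sqrt(2)) [[1, 1], [1, -1]]:
# U B U^T = [[1/2, -1/2], [-1/2, 1/2]],  so  H = [[1/2, -1/2], [-1/2, 3/2]]
lam_rot = eig2(0.5, -0.5, 1.5)

# trace invariance, and the rotated eigenvalues 1 -/+ 1/sqrt(2)
assert abs(sum(lam_id) - 2.0) < 1e-12 and abs(sum(lam_rot) - 2.0) < 1e-12
assert abs(lam_rot[0] - (1 - 1 / math.sqrt(2))) < 1e-12
```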
Throughout the paper, we are mainly working in the vicinity of the bottom of the spectrum. The discussion for the top of the spectrum is analogous. Let $\mu_A$, $\mu_B$ and $\mu_H$ be the empirical eigenvalue distributions of $A$, $B$, and $H$,~\emph{i.e., }
\begin{align*}
\mu_A\mathrel{\mathop:}=\frac{1}{N}\sum_{i=1}^N \delta_{a_i}\,,\quad \qquad \mu_B\mathrel{\mathop:}=\frac{1}{N} \sum_{i=1}^N \delta_{b_i}\,,\quad \qquad \mu_H\mathrel{\mathop:}=\frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i} \,.
\end{align*}
For any probability measure $\mu$ on the real line, its Stieltjes transform is defined as
\begin{align*}
m_\mu(z)\mathrel{\mathop:}=\int_{\mathbb{R}} \frac{1}{x-z}\mathrm{d}\mu(x)\,,\qquad \qquad z\in \mathbb{C}^+\,,
\end{align*}
where $z$ is called the {\it spectral parameter}. Throughout the paper, we write $z=E+\mathrm{i}\eta$ with $E=\Re z$ and $\eta=\Im z$.
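As a quick numerical aside (not part of the paper's argument), the Stieltjes transform of an empirical measure is a finite sum and can be evaluated directly. The sketch below, with helper names of our choosing, checks the basic properties $\Im m_\mu(z)>0$ for $z\in\mathbb{C}^+$ and $m_\mu(\mathrm{i}\eta)=-1/(\mathrm{i}\eta)+O(\eta^{-2})$ for large $\eta$.

```python
import numpy as np

def stieltjes_empirical(points, z):
    """m_mu(z) = (1/N) sum_i 1/(x_i - z) for the empirical measure
    mu = (1/N) sum_i delta_{x_i}; valid for Im z > 0."""
    return np.mean(1.0 / (np.asarray(points, dtype=float) - z))

x = np.linspace(-1.0, 1.0, 1000)       # a test spectrum
m = stieltjes_empirical(x, 0.3 + 0.1j)  # Im m > 0 since Im z > 0
```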
In this paper, we assume that there are two $N$-independent absolutely continuous probability measures~$\mu_\alpha$ and~$\mu_\beta$ with continuous density functions $\rho_\alpha$ and $\rho_\beta$, respectively, such that the following assumptions, Assumptions~\ref{a.regularity of the measures} and~\ref{a. levy distance}, are satisfied. The first one discusses some qualitative properties of $\mu_\alpha$ and $\mu_\beta$, while the second one demands that $\mu_A$ and $\mu_B$ are close to $\mu_\alpha$ and $\mu_\beta$, respectively.
\begin{assumption} \label{a.regularity of the measures} We assume the following:
\begin{itemize}
\item[$(i)$] Both density functions $\rho_\alpha$ and $\rho_\beta$ have single non-empty interval supports, $[E_-^\alpha,E_+^\alpha]$ and $[E_-^\beta,E_+^\beta]$, respectively,
and $\rho_\alpha$ and $\rho_\beta$ are strictly positive in the interior of their supports.
\item[$(ii)$] In a small $\delta$-neighborhood of the lower edges of the supports,
these measures have a power law behavior, namely, there is a (small) constant $\delta>0$ and exponents $ -1<t_-^\alpha, t_-^\beta<1$ such that
\begin{align*}
C^{-1}\leq\frac{\rho_\alpha(x)}{(x-E_-^\alpha)^{t_-^\alpha}}\leq C\,,\qquad\qquad \forall x\in [E_-^\alpha, E_-^\alpha+\delta]\,, \nonumber\\
C^{-1}\leq\frac{\rho_\beta(x)}{(x-E_-^\beta)^{t_-^\beta}}\leq C\,,\qquad\qquad \forall x\in [E_-^\beta, E_-^\beta+\delta]\,,
\end{align*}
hold for some positive constant $C>1$.
\item[$(iii)$] We assume that at least one of the following two bounds holds
\begin{equation}\label{mbound}
\sup_{z\in {\mathbb C}^+}|m_{\mu_\alpha}(z)|\le C\,, \qquad\qquad \sup_{z\in {\mathbb C}^+}|m_{\mu_\beta}(z)|\le C\,,
\end{equation}
for some positive constant $C$.
\end{itemize}
\end{assumption}
\begin{assumption}\label{a. levy distance} We assume the following:
\begin{itemize}
\item[$(iv)$] For the L\'evy distance $d_L$, we have that
\begin{align}\label{levy}
\mathbf{d}\mathrel{\mathop:}= d_L(\mu_A, \mu_\alpha) + d_L(\mu_B, \mu_\beta)\le N^{-1+\epsilon}\,,
\end{align}
for any constant $\epsilon>0$ when $N$ is sufficiently large.
\item[$(v)$] For the lower edges, we have
\begin{align}\label{supab}
\inf\, \mathrm{supp} \,\mu_A\ge E_-^\alpha -\delta\,, \qquad\qquad \inf\, \mathrm{supp}\, \mu_B\ge E_-^\beta -\delta\,,
\end{align}
for any constant $\delta>0$ when $N$ is sufficiently large.
\item[$(vi)$] For the upper edges, we assume that there is a constant $C$ such that
\begin{align}
\sup\, \mathrm{supp} \,\mu_A\le C\,, \qquad\qquad \sup\, \mathrm{supp}\, \mu_B\le C\,.
\end{align}
\end{itemize}
\end{assumption}
A direct consequence of $(v)$ and $(vi)$ above is that there is a constant $C'$ such that $\|A\|, \|B\|\leq C'\,.$
Since \cite{Voi91}, it has been well known that $\mu_H$ can be weakly approximated by a deterministic probability measure, called the free additive convolution of $\mu_A$ and $\mu_B$. Here we briefly introduce some notation concerning the free additive convolution, which will be necessary to state our main results.
For a probability measure $\mu$ on ${\mathbb R }$, we denote by $F_\mu$ its negative reciprocal Stieltjes transform,~\emph{i.e., }
\begin{align}\label{le reciprocal m}
F_\mu(z)\mathrel{\mathop:}= -\frac{1}{m_\mu(z)}\,,\qquad\qquad z\in{\mathbb C}^+\,.
\end{align}
Note that $F_{\mu}\,:{\mathbb C}^+\rightarrow {\mathbb C}^+$ is analytic such that
\begin{align}
\lim_{\eta\nearrow\infty}\frac{F_{\mu}(\mathrm{i}\eta)}{\mathrm{i}\eta}=1\,.
\end{align}
Conversely, if $F\,:\,{\mathbb C}^+\rightarrow{\mathbb C}^+$ is an analytic function with
$\lim_{\eta\nearrow\infty} F(\mathrm{i}\eta)/\mathrm{i}\eta=1$, then $F$ is the negative reciprocal Stieltjes transform of a probability
measure $\mu$, \emph{i.e., } $F(z) = F_{\mu}(z)$, for all $z\in{\mathbb C}^+$; see \emph{e.g., }~\cite{Aki}.
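The characterization above is easy to probe numerically. The following sketch (ours; `F_emp` is a hypothetical helper) evaluates $F_\mu$ for an empirical measure and checks that $\Im F_\mu(z)\ge \Im z$ (a standard property of negative reciprocal Stieltjes transforms) and that $F_\mu(\mathrm{i}\eta)/\mathrm{i}\eta\to 1$ as $\eta\to\infty$.

```python
import numpy as np

def F_emp(points, z):
    """Negative reciprocal Stieltjes transform F_mu(z) = -1/m_mu(z)
    of the empirical measure on `points`, for Im z > 0."""
    return -1.0 / np.mean(1.0 / (np.asarray(points, dtype=float) - z))

x = np.linspace(-1.0, 1.0, 500)               # a test measure mu
vals = [F_emp(x, 1j * eta) for eta in (1.0, 10.0, 100.0)]
ratio = F_emp(x, 1e6j) / 1e6j                 # should approach 1
```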
The {\it free additive convolution} is the symmetric binary operation on Borel probability measures on~${\mathbb R }$ characterized by the following result.
\begin{pro}[Theorem 4.1 in~\cite{BB}, Theorem~2.1 in~\cite{CG}]\label{le prop 1}
Given two Borel probability measures, $\mu_1$ and $\mu_2$, on ${\mathbb R }$, there exist unique analytic functions, $\omega_1,\omega_2\,:\,{\mathbb C}^+\rightarrow {\mathbb C}^+$, such that,
\begin{itemize}[noitemsep,topsep=0pt,partopsep=0pt,parsep=0pt]
\item[$(i)$] for all $z\in {\mathbb C}^+$, $\mathrm{Im}\, \omega_1(z),\,\mathrm{Im}\, \omega_2(z)\ge \mathrm{Im}\, z$, and
\begin{align}\label{le limit of omega}
\lim_{\eta\nearrow\infty}\frac{\omega_1(\mathrm{i}\eta)}{\mathrm{i}\eta}=\lim_{\eta\nearrow\infty}\frac{\omega_2(\mathrm{i}\eta)}{\mathrm{i}\eta}=1\,;
\end{align}
\item[$(ii)$] for all $z\in{\mathbb C}^+$,
\begin{align}\label{le definiting equations}
F_{\mu_1}(\omega_{2}(z))=F_{\mu_2}(\omega_{1}(z))\,,\qquad\qquad \omega_1(z)+\omega_2(z)-z=F_{\mu_1}(\omega_{2}(z))\,.
\end{align}
\end{itemize}
\end{pro}
The analytic function $F\,:\,{\mathbb C}^+\rightarrow {\mathbb C}^+$ defined by
\begin{align}\label{le kkv}
F(z)\mathrel{\mathop:}= F_{\mu_1}(\omega_{2}(z))=F_{\mu_2}(\omega_{1}(z))\,,
\end{align}
is, in virtue of~\eqref{le limit of omega}, the negative reciprocal Stieltjes transform of a probability measure $\mu$, called the free additive convolution of $\mu_1$ and $\mu_2$, denoted by $\mu\equiv\mu_1\boxplus\mu_2$. The functions $\omega_1$ and $\omega_2$ are referred to as the {\it subordination functions}. The subordination phenomenon for the addition of freely independent non-commutative random variables was first noted by Voiculescu~\cite{Voi93} in a generic situation
and extended to full generality by Biane~\cite{Bia98}.
Choosing $(\mu_1, \mu_2)=(\mu_\alpha, \mu_\beta)$ in Proposition \ref{le prop 1}, we denote the associated subordination functions $\omega_1$ and $\omega_2$ by $\omega_\alpha$ and $\omega_\beta$, respectively. Analogously, for the choice $(\mu_1, \mu_2)=(\mu_A, \mu_B)$, we denote by $\omega_A$ and $\omega_B$ the associated subordination functions. With the above notations, we obtain from (\ref{le definiting equations}) and (\ref{le kkv}) the following subordination equations
\begin{align}
m_{\mu_A}(\omega_B(z))&=m_{\mu_B}(\omega_A(z))=m_{\mu_A\boxplus\mu_B}(z),\nonumber\\
\omega_A(z)+\omega_B(z)-z&=-\frac{1}{m_{\mu_A\boxplus\mu_B}(z)}. \label{170730100}
\end{align}
The same system of equations holds if we replace the subscripts $(A, B)$ by $(\alpha, \beta)$.
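For a concrete sense of how the system \eqref{170730100} determines $\omega_A$, $\omega_B$ and $m_{\mu_A\boxplus\mu_B}$, here is a small numerical sketch (ours, and not part of any proof): it solves the subordination equations for empirical measures by a naive fixed-point iteration, whose convergence is only heuristic here. For $\mu_A=\mu_B=\frac12(\delta_{-1}+\delta_{1})$, the free additive convolution is the arcsine law on $[-2,2]$, with Stieltjes transform $-1/\sqrt{z^2-4}$, which serves as a check.

```python
import numpy as np

def F_emp(points, w):
    """Negative reciprocal Stieltjes transform F_mu(w) = -1/m_mu(w)
    of the empirical measure on `points`."""
    return -1.0 / np.mean(1.0 / (np.asarray(points, dtype=float) - w))

def subordination(a, b, z, tol=1e-12, max_iter=10000):
    """Naive fixed-point iteration for the subordination system
        omega_A + omega_B - z = F_A(omega_B) = F_B(omega_A).
    Returns (omega_A, omega_B, m_{mu_A boxplus mu_B}(z))."""
    wA = wB = z
    for _ in range(max_iter):
        wA_new = z - wB + F_emp(a, wB)          # omega_A = z - omega_B + F_A(omega_B)
        wB_new = z - wA_new + F_emp(b, wA_new)  # omega_B = z - omega_A + F_B(omega_A)
        if abs(wA_new - wA) + abs(wB_new - wB) < tol:
            wA, wB = wA_new, wB_new
            break
        wA, wB = wA_new, wB_new
    m = np.mean(1.0 / (np.asarray(a, dtype=float) - wB))  # m_{mu_A}(omega_B(z))
    return wA, wB, m

# mu_A = mu_B = (delta_{-1} + delta_1)/2; arcsine-law check at z = i
wA, wB, m = subordination([-1.0, 1.0], [-1.0, 1.0], 1.0j)
```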
We denote the lower and upper edges of the support of $\mu_\alpha\boxplus\mu_\beta$ by
\begin{align}
E_-\mathrel{\mathop:}=\inf\,\mathrm{supp}\, \mu_\alpha\boxplus\mu_\beta\,, \qquad\qquad E_+\mathrel{\mathop:}= \sup\,\textrm{supp} \,\mu_\alpha\boxplus\mu_\beta\,. \label{17080330}
\end{align}
In Section~\ref{s. subordination at the edge}, we establish
various qualitative properties of $\mu_\alpha\boxplus\mu_\beta$ and of $\mu_A\boxplus\mu_B$.
In particular, under Assumption \ref{a.regularity of the measures},
we show that $\mu_\alpha\boxplus\mu_\beta$ has a square-root decay at the lower edge, see (\ref{17080390}).
\subsection{Main results}
To state our results, we introduce some more terminology.
We denote the Green function or resolvent of $H$ and its normalized trace by
\begin{align*}
G(z)\equiv G_H(z)\mathrel{\mathop:}= \frac{1}{H-z}\,, \qquad m_H(z)\mathrel{\mathop:}=\mathrm{tr}\, G(z)=\frac{1}{N}\sum_{i=1}^N G_{ii}(z)\,, \qquad\qquad z\in \mathbb{C}^+.
\end{align*}
Observe that $m_H(z)$ is also the Stieltjes transform of $\mu_H$, \emph{i.e., }
\begin{align*}
m_H(z)= \int_{\mathbb R } \frac{1}{x-z} \mathrm{d}\mu_H(x)=\frac{1}{N}\sum_{i=1}^N \frac{1}{\lambda_i-z}, \qquad z\in \mathbb{C}^+.
\end{align*}
We further set
\begin{align}
\mathcal{K}\mathrel{\mathop:}= \|A\|+\|B\|+1\,. \label{17072840}
\end{align}
Moreover, for any spectral parameter $z=E+\mathrm{i}\eta\in \mathbb{C}^+$, we let
\begin{align}
\kappa\equiv \kappa(z)\mathrel{\mathop:}= \min \{|E-E_-|, |E-E_+|\}\,, \label{17080102}
\end{align}
with $E_\pm$ given in~\eqref{17080330}. We then introduce the following domain of the spectral parameter $z$: For any $0<a\le b$ and $0<\tau<\frac{E_+-E_-}{2}$,
\begin{align}
\mathcal{D}_\tau(a, b)\mathrel{\mathop:}= \{z=E+\mathrm{i} \eta\in \mathbb{C}^+: -\mathcal{K}\leq E\leq E_-+\tau, \quad a\leq \eta\leq b\}. \label{17020201}
\end{align}
For any (small) positive constant $\gamma>0$, we set
\begin{align*}
\eta_{\rm m}\mathrel{\mathop:}= N^{-1+\gamma}.
\end{align*}
Let $\eta_\mathrm{M}>1$ be some sufficiently large constant.
In the rest of the paper, we will mainly work in the regime $z\in \mathcal{D}_{\tau}(\eta_{\rm m}, \eta_\mathrm{M})$ with sufficiently small constant $\tau>0$. In particular, we usually have $\eta_{\mathrm{m}}\le \eta\le\eta_{\mathrm{M}}$.
We also need the following definition on high-probability estimates from~\cite{EKY}. In Appendix~\ref{appendix A} we collect some of its properties.
\begin{defn}\label{definition of stochastic domination}
Let $\mathcal{X}\equiv \mathcal{X}^{(N)}$ and $\mathcal{Y}\equiv \mathcal{Y}^{(N)}$ be two sequences of
nonnegative random variables. We say that~$\mathcal{Y}$ stochastically dominates~$\mathcal{X}$ if, for all (small) $\epsilon>0$ and (large)~$D>0$,
\begin{align}
\P\big(\mathcal{X}^{(N)}>N^{\epsilon} \mathcal{Y}^{(N)}\big)\le N^{-D},
\end{align}
for sufficiently large $N\ge N_0(\epsilon,D)$, and we write $\mathcal{X} \prec \mathcal{Y}$ or $\mathcal{X}=O_\prec(\mathcal{Y})$.
When
$\mathcal{X}^{(N)}$ and $\mathcal{Y}^{(N)}$ depend on a parameter $v\in \mathcal{V}$ (typically an index label or a spectral parameter), then $\mathcal{X}(v) \prec \mathcal{Y} (v)$, uniformly in $v\in \mathcal{V}$, means that the threshold $N_0(\epsilon,D)$ can be chosen independently of $v$.
\end{defn}
With these definitions and notations, we now state our main result.
\begin{thm}[Local law at the regular edge] \label{thm. strong law at the edge} Suppose that Assumptions \ref{a.regularity of the measures} and \ref{a. levy distance} hold. Let $\tau>0$ be a sufficiently small constant and fix any (small) constants $\gamma>0$ and $\varepsilon>0$. Let $d_1, \ldots, d_N\in \mathbb{C}$ be any deterministic complex numbers satisfying
\begin{align*}
\max_{i\in \llbracket 1, N\rrbracket} |d_i| \leq 1.
\end{align*}
Then
\begin{align}
\Big| \frac{1}{N} \sum_{i=1}^N d_i\Big(G_{ii}(z)-\frac{1}{a_i-\omega_B(z)}\Big)\Big|\prec \frac{1}{N\eta} \label{17072330}
\end{align}
holds uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ with $\eta_{\rm m}=N^{-1+\gamma}$ and any constant $\eta_\mathrm{M}>0$. In particular, choosing $d_i=1$ for all $i\in \llbracket 1, N\rrbracket$, we have the estimate
\begin{align}
\Big| m_H(z)-m_{\mu_A\boxplus\mu_B}(z)\Big|\prec \frac{1}{N\eta}\,, \label{17011304}
\end{align}
uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Moreover, we have the improved estimate
\begin{align}
\Big| m_H(z)-m_{\mu_A\boxplus\mu_B}(z)\Big|\prec \frac{1}{N(\kappa+\eta)}\,, \label{17072847}
\end{align}
uniformly for all $z=E+\mathrm{i}\eta\in \mathcal{D}_\tau(0,\eta_\mathrm{M})$ with $E\leq E_--N^{-\frac{2}{3}+\varepsilon}$. Here, $\kappa=|E-E_-|$ is given in~\eqref{17080102}.
\end{thm}
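As an illustration of the averaged law \eqref{17011304} (a numerical sketch, not the proof's mechanism), one can sample $H=A+UBU^*$ and observe that $m_H(z)$ concentrates: two independent Haar samples agree to within the predicted $O_\prec(1/(N\eta))$ scale. All function names below are ours; the Haar unitary is generated by the standard QR recipe with phase correction on the diagonal of $R$.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-distributed unitary via QR of a complex Ginibre matrix,
    with the standard phase correction on the diagonal of R."""
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2.0)
    q, r = np.linalg.qr(g)
    d = np.diag(r)
    return q * (d / np.abs(d))

def m_H(a, b, z, rng):
    """Stieltjes transform of the empirical spectral measure of
    H = A + U B U^* for one Haar sample U."""
    u = haar_unitary(len(a), rng)
    h = np.diag(a) + u @ np.diag(b) @ u.conj().T
    lam = np.linalg.eigvalsh(h)
    return np.mean(1.0 / (lam - z))

n = 300
a = np.linspace(0.0, 1.0, n)
b = np.linspace(0.0, 1.0, n)
z = 0.5 + 0.5j
m1 = m_H(a, b, z, rng)
m2 = m_H(a, b, z, rng)  # independent Haar sample
```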
Let $\gamma_j$ be the $j$-th $ N$-quantile of $\mu_\alpha\boxplus \mu_\beta$, \emph{i.e., } $\gamma_j$ is the smallest real number such that
\begin{align}\label{quantile}
\mu_{\alpha}\boxplus\mu_\beta\big((-\infty, \gamma_j]\big)= \frac{j}{N}.
\end{align}
Similarly, we define $\gamma_j^*$ to be the $j$-th $N$-quantile of $\mu_A\boxplus \mu_B$.
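The $N$-quantiles of an absolutely continuous law are straightforward to compute numerically. In the sketch below (ours), we integrate a density and invert its CDF; since $\mu_\alpha\boxplus\mu_\beta$ has no closed form in general, the semicircle law on $[-2,2]$ serves as a stand-in test case.

```python
import numpy as np

def quantiles(density, lo, hi, N, grid=200001):
    """gamma_j (j = 1..N): smallest x with mu((-inf, x]) = j/N, obtained
    by trapezoidal integration of `density` on its support [lo, hi]."""
    x = np.linspace(lo, hi, grid)
    d = density(x)
    cdf = np.concatenate(([0.0], np.cumsum((d[1:] + d[:-1]) / 2.0) * (x[1] - x[0])))
    cdf /= cdf[-1]                     # normalize away discretization error
    levels = np.arange(1, N + 1) / N
    return np.interp(levels, cdf, x)

# Semicircle density on [-2, 2], a stand-in for mu_alpha boxplus mu_beta
rho = lambda x: np.sqrt(np.clip(4.0 - x**2, 0.0, None)) / (2.0 * np.pi)
gamma = quantiles(rho, -2.0, 2.0, 100)
```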
The following theorem is on the rigidity property of the eigenvalues of $H$.
\begin{thm}[Rigidity at the lower edge] \label{thm. rigidity of eigenvalues} Suppose that Assumptions \ref{a.regularity of the measures} and \ref{a. levy distance} hold. For any sufficiently small constant $c>0$, we have that for all $1\leq i\leq cN $,
\begin{align}
|\lambda_i-\gamma_i^*|\prec i^{-\frac{1}{3}}N^{-\frac23}. \label{17072845a}
\end{align}
In fact, the same estimate also holds if $\gamma_i^*$ is replaced with $\gamma_i$.
\end{thm}
With the following additional assumptions on the upper edges of $\mu_\alpha$, $\mu_\beta$ and $\mu_A$, $\mu_B$, we can combine the current edge analysis with our strong local law in the bulk regime in~\cite{BES16}. This yields the rigidity result for the whole spectrum.
\begin{assumption}\label{a. rigidity entire spectrum} We assume the following:
\begin{itemize}
\item[$(ii')$] In a small $\delta$-neighborhood of the upper edges of their supports, the measures $\mu_\alpha$ and $\mu_\beta$ have a power law behavior, namely, there is a (large) constant $C\geq 1$ and exponents ${ -1< t_+^\alpha, t_+^\beta<1}$ such that
\begin{align*}
C^{-1}\leq\frac{\rho_\alpha(x)}{(E_+^\alpha-x)^{t_+^\alpha}}\leq C\,,\qquad\qquad \forall x\in [E_+^\alpha-\delta, E_+^\alpha]\,, \nonumber\\
C^{-1}\leq\frac{\rho_\beta(x)}{(E_+^\beta-x)^{t_+^\beta}}\leq C\,,\qquad\qquad \forall x\in [E_+^\beta-\delta, E_+^\beta]\,,
\end{align*}
hold for some sufficiently small constant $\delta>0$.
\item[$(v')$] For the upper edges of $\mu_A$ and $\mu_B$, we have
\begin{align*}
\sup\,\mathrm{supp} \,\mu_A\le E_+^\alpha +\delta\,, \qquad\qquad \sup\,\mathrm{supp}\, \mu_B\le E_+^\beta +\delta\,,
\end{align*}
for any constant $\delta>0$ when $N$ is sufficiently large.
\item[$(vii)$] The density function of $\mu_\alpha\boxplus\mu_\beta$ has a single interval support, \emph{i.e., }
\begin{align*}
\mathrm{supp}\, \mu_\alpha\boxplus\mu_\beta=[E_-, E_+]\,.
\end{align*}
\end{itemize}
\end{assumption}
\begin{cor} [Rigidity for the whole spectrum] \label{c. rigidity for whole spectrum} Suppose that Assumptions \ref{a.regularity of the measures}, \ref{a. levy distance} and \ref{a. rigidity entire spectrum} hold. Then we have, for all $i\in \llbracket 1, N\rrbracket $, the estimate
\begin{align}
|\lambda_i-\gamma_i^*|\prec \min \big\{i^{-\frac{1}{3}}, (N-i+1)^{-\frac{1}{3}}\big\}N^{-\frac23}. \label{17072845}
\end{align}
The same estimate also holds if $\gamma_i^*$ is replaced with $\gamma_i$.
Moreover, we have the following estimate on the convergence rate of $\mu_H$,
\begin{align}
\sup_{x\in \mathbb{R}} \big| \mu_H((-\infty, x])-\mu_A\boxplus\mu_B((-\infty, x])\big|\prec \frac{1}{N}\,. \label{17080225}
\end{align}
\end{cor}
We remark here that all of our results above also hold in the orthogonal setup, \emph{i.e., } when $U$ is a random orthogonal matrix Haar distributed on the orthogonal group $\mathcal{O}(N)$. The proof is nearly the same as in the unitary setup. A discussion of the necessary modifications for the block additive model
in the bulk regime can be found in Appendix C of \cite{BES16b}. For our model, the modifications can be made in the same way. We omit the details.
\section{Properties of the subordination functions at the regular edge} \label{s. subordination at the edge}
In this section, we collect some key properties of the subordination functions and related quantities that will often be used
in Sections~\ref{s. Entrywise estimate}-\ref{s.rigidity}. We first introduce
\begin{align}
\mathcal{S}_{AB}&\equiv \mathcal{S}_{AB}(z)\mathrel{\mathop:}= (F'_{A}(\omega_B(z))-1)(F'_{B}(\omega_A(z))-1)-1\,, \nonumber\\
\mathcal{T}_A&\equiv \mathcal{T}_A(z)\mathrel{\mathop:}= \frac{1}{2}\Big(F''_{A}(\omega_B(z)) (F'_{B}(\omega_A(z))-1)^2+F''_{B}(\omega_A(z))(F'_{A}(\omega_B(z))-1) \Big)\,,\nonumber\\
\mathcal{T}_B&\equiv \mathcal{T}_B(z)\mathrel{\mathop:}= \frac{1}{2}\Big(F''_{B}(\omega_A(z)) (F'_{A}(\omega_B(z))-1)^2+F''_{A}(\omega_B(z))(F'_{B}(\omega_A(z))-1) \Big)\,, \label{17080110}
\end{align}
where we use the shorthand notation $F_A\equiv F_{\mu_A}$ and $F_B\equiv F_{\mu_B}$ for the negative reciprocal Stieltjes transforms of~$\mu_A$ and~$\mu_B$, and where $\omega_A$ and $\omega_B$ are the subordination functions associated through~\eqref{le definiting equations}. The main result in this section is the following proposition on the domain $\mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})$; see~\eqref{17020201}.
\begin{pro}\label{le proposition 3.1} Suppose that Assumptions \ref{a.regularity of the measures} and \ref{a. levy distance} hold. Then, for sufficiently small constant $\tau>0$, we have the following statements:
\begin{itemize}
\item[$(i)$] There exist strictly positive constants $k$ and $K$, such that
\begin{align}
&\min_{i}| a_i-\omega_B(z)|\geq k\,, \qquad& &\min_i | b_i-\omega_A(z)|\geq k\,, \label{17020502}\\
&\big|\omega_A(z)\big|\leq K, \qquad& &\big|\omega_B(z)\big|\leq K\,, \label{17020503}
\end{align}
hold uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$.
\item[$(ii)$] For the Stieltjes transform $m_{\mu_A\boxplus\mu_B}$ of $\mu_A\boxplus\mu_B$, we have that
\begin{align}
\Im m_{\mu_A\boxplus\mu_B} (z)\sim
\begin{cases}
\sqrt{\kappa+\eta}\,,\quad& \text{if }\qquad E\in \mathrm{supp}\,\mu_A\boxplus\mu_B\,,\\
\frac{\eta}{\sqrt{\kappa+\eta}}\,, & \text{if }\qquad E\not\in \mathrm{supp}\,\mu_A\boxplus\mu_B\,,
\end{cases}
\label{17080120}
\end{align}
uniformly on $z=E+\mathrm{i}\eta\in\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$, with $\kappa$ given in~\eqref{17080102}.
\item[$(iii)$] For $\mathcal{S}_{AB}$, $\mathcal{T}_A$ and $\mathcal{T}_B$ defined in (\ref{17080110}), we have
\begin{align}
\mathcal{S}_{AB}(z)\sim \sqrt{\kappa+\eta}\,, \quad\qquad |\mathcal{T}_A(z)|\leq C\,,\qquad\quad |\mathcal{T}_B(z)|\leq C\,, \label{17080121}
\end{align}
uniformly on $z\in \mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})$, for some constant $C$. In addition, for $z=E+\mathrm{i}\eta\in \mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})$ with $|E-E_-|\leq \delta$ and $\eta\leq \delta$ for some sufficiently small constant $\delta>0$, we also have
\begin{align}
|\mathcal{T}_A(z)|\geq c\,,\quad \qquad |\mathcal{T}_B(z)|\geq c\,, \label{17080122}
\end{align}
for some strictly positive constant $c=c(\delta)$.
\item[$(iv)$] For $\omega_A$, $\omega_B$ and $\mathcal{S}_{AB}$ we have
\begin{align}\label{le lipschitz stuff}
|\omega_A'(z)| \leq C\frac{1}{\sqrt{\kappa+\eta}}\,,\quad \qquad |\omega_B'(z)| \leq C\frac{1}{\sqrt{\kappa+\eta}}\,,\quad \qquad |\mathcal{S}_{AB}' (z)|\leq C\frac{1}{\sqrt{\kappa+\eta}}\,,
\end{align}
for any $z\in \mathcal{D}_\tau (\eta_{\rm m}, \eta_\mathrm{M})$ and some constant $C$.
\end{itemize}
\end{pro}
The proof of Proposition~\ref{le proposition 3.1} is split into two steps. In the first step, carried out in Subsection~\ref{appendix. properties of free convolution}, we derive the analogous statements for the $N$-independent measures $\mu_\alpha$ and $\mu_\beta$. This step requires only Assumption~\ref{a.regularity of the measures}. In the second step, carried out in Subsection~\ref{app:stab}, we show that the statements carry over to the $N$-dependent measures $\mu_A$ and $\mu_B$ under Assumption~\ref{a. levy distance}, for $N$ sufficiently large.
\subsection{Free convolution measure $\mu_\alpha\boxplus\mu_\beta$} \label{appendix. properties of free convolution}
In this subsection, we derive some properties of the free additive convolution of $\mu_\alpha$ and $\mu_\beta$. We will always assume that $\mu_\alpha$ and $\mu_\beta$ satisfy Assumption~\ref{a.regularity of the measures}. From Assumption \ref{a.regularity of the measures} $(iii)$ and Lemma 4.1 in \cite{Voi93}, we know that
\begin{align}
\sup_{z\in \mathbb{C}^+} |m_{\mu_\alpha\boxplus\mu_\beta}(z)|\leq C. \label{17080326}
\end{align}
In addition, under Assumption \ref{a.regularity of the measures}, we see from Theorem 2.3 and Remark 2.4 in \cite{Bel1} that $\omega_\alpha(z)$, $\omega_\beta(z)$ and $m_{\mu_\alpha\boxplus\mu_\beta}(z)$ can be extended continuously to $\mathbb{C}^+\cup \mathbb{R}$. This together with (\ref{17080326}) implies that $\mu_\alpha\boxplus\mu_\beta$ is absolutely continuous with a continuous and bounded density function.
Recall from Assumption~\ref{a.regularity of the measures} that $\mathrm{supp}\,\mu_\alpha= [E_-^\alpha,E_+^\alpha]$ and $\mathrm{supp}\,\mu_\beta=[E_-^\beta,E_+^\beta]$. We introduce the spectral domain $\mathcal{E}\subset{\mathbb C}$ by setting
\begin{align}
\mathcal{E}\mathrel{\mathop:}=\{z\in{\mathbb C}^+\cup{\mathbb R }\,:\, E_-^\alpha+E_-^\beta-1\le \mathrm{Re}\, z\le E_+^\alpha+E_+^\beta+1\,, 0\le\mathrm{Im}\, z\le \eta_\mathrm{M}\}\,,
\end{align}
where $\eta_\mathrm{M}>0$ is any constant. By Lemma~3.1 in~\cite{Voi86}, we have that $\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta\subset \mathcal{E}\cap{\mathbb R }$.
\begin{lem}\label{lemma 1}
There exists a constant $C$ such that
\begin{align}
\sup_{z\in\mathcal{E}}( |\omega_\alpha(z)|+|\omega_\beta(z)|)\le C\,.
\end{align}
\end{lem}
\begin{proof}
Let $L>\max\{|E_+^\alpha+E_+^\beta+1|,|E_-^\alpha+E_-^\beta-1|\}$ and $M>10$ be large numbers to be chosen later. We will argue by contradiction. Assume first that there is $z\in\mathcal{E}$ such that
\begin{align}\label{ram1}
|\omega_\alpha(z)|> LM \,,\qquad\qquad |\omega_\beta(z)|> L\,.
\end{align}
Then we have from~\eqref{le definiting equations} that
\begin{align}
\frac{1}{\omega_\alpha(z)+\omega_\beta(z)-z}=-\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{x-\omega_\beta(z)}&=\frac{1}{\omega_\beta(z)}+O((\omega_\beta(z))^{-2})\label{le first s}\,,\\
\frac{1}{\omega_\alpha(z)+\omega_\beta(z)-z}=-\int_{\mathbb R }\frac{\mathrm{d}\mu_\beta(x)}{x-\omega_\alpha(z)}&=\frac{1}{\omega_\alpha(z)}+O((\omega_\alpha(z))^{-2})\,,\label{le second s}
\end{align}
as $L\rightarrow\infty$. Thus we get from~\eqref{le second s}, as $z\in\mathcal{E}$, that in the same limit
\begin{align}\label{ram11}
\frac{\omega_\beta(z)}{\omega_\alpha(z)}=O\big((\omega_\alpha(z))^{-1}\big)\,.
\end{align}
But then we have from~\eqref{ram1} and~\eqref{ram11} that
\begin{align}
\frac{L}{|\omega_\alpha(z)|}\le\frac{|\omega_\beta(z)|}{|\omega_\alpha(z)|}\le C\frac{1}{|\omega_\alpha(z)|}\,,
\end{align}
hence for $L$ sufficiently large, we get a contradiction.
Next, assume that there is $z\in\mathcal{E}$ such that
\begin{align}\label{ram2}
|\omega_\alpha(z)|> LM \,,\qquad\qquad |\omega_\beta(z)|\le L\,.
\end{align}
Then we conclude from~\eqref{le definiting equations} that
\begin{align}\label{ram4}
\frac{1}{|m_{\mu_\alpha}(\omega_\beta(z))|}=|\omega_\alpha(z)+\omega_\beta(z)-z|\ge\frac{LM}{2}\,,
\end{align}
for $M$ sufficiently large, where we used that $z\in\mathcal{E}$. On the other hand, the Stieltjes transform $m_{\mu_\alpha}(z)$ does not have any zeros in $\mathcal{E}$ as the support of $\mu_\alpha$ is connected. Thus there is a constant $c>0$, depending on $L$, such that $|m_{\mu_\alpha}(z')|\ge c$, for all $z'\in{\mathbb C}^+$ with $|z'|\le L$. Hence, for $M$ sufficiently large, we get a contradiction from~\eqref{ram4}.
Finally, as both, ~\eqref{ram1} and~\eqref{ram2}, have been ruled out, we can conclude that
\begin{align}
|\omega_\alpha(z) |\le LM\,,\qquad\qquad |\omega_\beta(z)|\le L\,,
\end{align}
for all $z\in \mathcal{E}$. This completes the proof of Lemma~\ref{lemma 1}.
\end{proof}
Recall from (\ref{17080330}) that $E_-=\inf\,\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta$. Recall further that, for any spectral parameter $z$, $\kappa=\kappa(z)$ defined in (\ref{17080102}) is the distance of $\Re z$ to the endpoints of $\mathrm{supp}(\mu_\alpha\boxplus\mu_\beta)$.
\begin{lem}\label{lemma 2}
Let $u\in{\mathbb R }$ with $u\le E_-$, then we have
\begin{align}\label{le C15}
\mathrm{Re}\,\omega_\alpha(u)\le E_-^\beta\,,\qquad\qquad \mathrm{Re}\,\omega_\beta(u)\le E_-^\alpha\,.
\end{align}
Moreover, $\mathrm{Re}\,\omega_\alpha$ and $\mathrm{Re}\,\omega_\beta$ are monotone increasing on $(-\infty,E_-)$.
\end{lem}
\begin{proof}
We argue by contradiction. Assume that there exists $y'$ with $y'\le E_-$ such that $\mathrm{Re}\, \omega_\alpha(y')>E_-^\beta$. Then either $\Re\omega_\alpha(y')\in(E_-^\beta,E_+^\beta)$ or $\Re\omega_\alpha(y')\ge E_+^\beta$. In the first case, taking the imaginary part of the identity $m_{\mu_\alpha\boxplus\mu_\beta}(z)= m_{\mu_\beta}(\omega_\alpha(z))$, we conclude that $\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(y')>0$, \emph{i.e., } the density of $\mu_\alpha\boxplus\mu_\beta$ at $y'$ is strictly positive. This contradicts the definition of $E_-$ (as the lowest endpoint of $\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta$).
In the second case, $\mathrm{Re}\,\omega_\alpha(y')\ge E_+^\beta$, we have
\begin{align}\label{llk}
\mathrm{Re}\, m_{\mu_\beta}(\omega_\alpha(y'))=\int_{E_-^\beta}^{E_+^\beta}\frac{ (x-\mathrm{Re}\,\omega_\alpha(y'))\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(y')|^2}<0\,.
\end{align}
However, since $\mathrm{Re}\, m_{\mu_\beta}(\omega_\alpha(y'))=\mathrm{Re}\, m_{\mu_\alpha\boxplus\mu_\beta}(y')$, we get a contradiction as
\begin{align}
\mathrm{Re}\, m_{\mu_\alpha\boxplus\mu_\beta}(y')=\int_{E_-}^\infty\frac{\mathrm{d}(\mu_\alpha\boxplus\mu_\beta)(x)}{x-y'}>0\,,
\end{align}
by the definition of $E_-$.
From the above, we get $\mathrm{Re}\, \omega_\alpha(y')\le E_-^\beta$. Repeating the argument for $\omega_\beta$, we~obtain~\eqref{le C15}.
Finally, that $\mathrm{Re}\,\omega_\alpha$ is increasing on $(-\infty,E_-)$ follows from the observation that $\mathrm{Re}\, m_{\mu_\alpha\boxplus\mu_\beta}$ is increasing on $(-\infty,E_-)$, the subordination property $m_{\mu_\alpha\boxplus\mu_\beta}(z)=m_{\mu_\beta}(\omega_\alpha(z))$ and~\eqref{llk}. The same argument, with the roles of $\alpha$ and $\beta$ interchanged, shows that $\mathrm{Re}\, \omega_\beta$ is increasing on $(-\infty,E_-)$. This finishes the proof of Lemma~\ref{lemma 2}.
\end{proof}
We now show that we actually have $\mathrm{Re}\, \omega_\alpha(E_-)\leq E_-^\beta-k_0$ and $\mathrm{Re}\, \omega_\beta(E_-)\le E_-^\alpha-k_0$, for some constant $k_0>0$. Our argument relies on the following computational lemma.
\begin{lem}\label{lemma computation}
Let $\omega=\lambda+\mathrm{i} \nu$, with $\nu\ge0$ and $|\omega|\le \vartheta$, for some small $\vartheta>0$. Let $-1<t<1$.
Then,
\begin{align}
\int_0^\vartheta\frac{x^t\,\mathrm{d} x}{(x-\lambda)^2+\nu^2}\sim\begin{cases} \frac{\lambda^t}{\nu}\,,\qquad &\textrm{if}\qquad \lambda>\nu\,,\\
|\omega|^{t-1}\sim \lambda^{t-1}\,,\qquad &\textrm{if}\qquad \lambda<0\,, |\lambda|>\nu\,,\\
\nu^{t-1}\,,\qquad &\textrm{if}\qquad \nu>|\lambda|\,.
\end{cases}
\end{align}
\end{lem}
\begin{proof}
This follows from elementary estimates.
\end{proof}
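Since the proof is only sketched, here is a quick numerical sanity check of the three regimes in Lemma~\ref{lemma computation} (our illustration; the comparability constants are verified only at one parameter choice per regime, with $t=\frac12$ and $\vartheta=0.1$).

```python
import numpy as np

def I(t, lam, nu, theta=0.1, n=2_000_000):
    """Midpoint-rule approximation of int_0^theta x^t / ((x-lam)^2 + nu^2) dx."""
    x = (np.arange(n) + 0.5) * (theta / n)
    return np.sum(x**t / ((x - lam) ** 2 + nu**2)) * (theta / n)

t = 0.5
r1 = I(t, 0.01, 0.001) / (0.01**t / 0.001)   # lam > nu:            ~ lam^t / nu
r2 = I(t, -0.01, 0.001) / 0.01 ** (t - 1.0)  # lam < 0, |lam| > nu: ~ |lam|^(t-1)
r3 = I(t, 0.0, 0.01) / 0.01 ** (t - 1.0)     # nu > |lam|:          ~ nu^(t-1)
```

Each ratio should be bounded above and below by constants of order one, in line with the three cases of the lemma.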
Recall from~\eqref{le reciprocal m} that $F_{\mu}(w)=-1/m_{\mu}(w)$, $w\in{\mathbb C}^+$, denotes the negative reciprocal Stieltjes transform of any probability measure $\mu$. As $F_{\mu}\,:{\mathbb C}^+\rightarrow {\mathbb C}^+$ is analytic, and since $\mu$ is a probability measure, it admits the representation
\begin{align}\label{representation}
F_{\mu}(z)-z=\Re F_{\mu}(\mathrm{i})+\int_{\mathbb R }\left(\frac{1}{x-z}-\frac{x}{1+x^2}\right)\,\mathrm{d} \widehat\mu(x)\,,
\end{align}
for some finite Borel measure $\widehat\mu$ on ${\mathbb R }$. Note that $\widehat\mu$ is in general not a probability measure. In particular, we have $\widehat\mu\equiv 0$ if and only if $\mu$ is supported at a single point. The following result about the support of the measure $\widehat\mu$ associated with the measure $\mu$ is of relevance.
\begin{lem}\label{lemma 6}
Let $\mu$ be a probability measure on ${\mathbb R }$ which is supported at more than one point, is of bounded support and satisfies $m_{\mu}(x)\not=0$, for all $x\in{\mathbb R }\backslash \mathrm{supp}\, {\mu}$. Then we have that
\begin{align}\label{lefrasu}
\mathrm{supp}\,\mu=\mathrm{supp}\,\widehat\mu\,,
\end{align}
where $\widehat\mu$ is the finite Borel measure associated with $\mu$ through~\eqref{representation}.
\end{lem}
\begin{proof}
Given any probability measure $\nu$ on ${\mathbb R }$, we first note that $x\in{\mathbb R }$ is in the support of $\nu$ if and only if its Stieltjes transform fails to be analytic in a neighborhood of $x$. For the measure $\mu$ from above, we have $m_{\mu}(x)\not=0$ for all $x\in{\mathbb R }\backslash\mathrm{supp}\,\mu$. Therefore, we know that $x\in{\mathbb R }$ is in the support of~$\mu$ if and only if the negative reciprocal Stieltjes transform $F_{\mu}$ fails to be analytic in a neighborhood of $x$.
Since $\mu$ is supported at more than one point, we have $\widehat\mu\not=0$ in~\eqref{representation}. We then apply the same reasoning to conclude that $x\in{\mathbb R }$ is in the support of the measure~$\widehat\mu$ if and only if $F_{\mu}$ fails to be analytic in a neighborhood of $x$. Thus~\eqref{lefrasu} directly follows.
\end{proof}
\begin{lem}\label{lemma 3}
There is a constant $k_0>0$, such that
\begin{align}\label{key to everything}
\mathrm{Re}\, \omega_\alpha(E_-)\le E_-^\beta-k_0\,,\qquad\mathrm{Re}\,\omega_\beta(E_-)\le E_-^\alpha-k_0\,.
\end{align}
Moreover, there exists a constant $C$, such that
\begin{align}\label{a lot of rs}
\mathrm{Im}\, \omega_\alpha(z)+\mathrm{Im}\,\omega_\beta(z)\le \eta+C\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)\,,
\end{align}
for all $z\in\mathcal{E}$. The constants $k_0$ and $C$ only depend on $\mu_\alpha$ and $\mu_\beta$.
\end{lem}
\begin{proof}Let $z\in\mathcal{E}$. Taking the imaginary part in the subordination equations~\eqref{le definiting equations} we get
\begin{align*}
\frac{\mathrm{Im}\,\omega_\alpha(z)+\mathrm{Im}\,\omega_\beta(z)-\mathrm{Im}\, z}{|\omega_\alpha(z)+\omega_\beta(z)-z|^2}=\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)\,.
\end{align*}
Thus we obtain
\begin{align*}
\mathrm{Im}\, \omega_\alpha(z)+\mathrm{Im}\,\omega_\beta(z)=\mathrm{Im}\, z+|\omega_\alpha(z)+\omega_\beta(z)-z|^2\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)\le \eta+C\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)\,,
\end{align*}
where we used Lemma~\ref{lemma 1} to get the inequality. This proves~\eqref{a lot of rs}.
We move on to prove the estimates in~\eqref{key to everything}. Using
\begin{align}\label{needed later on}
\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)=\mathrm{Im}\, \omega_\alpha(z)\int_{\mathbb R }\frac{\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(z)|^2}=\mathrm{Im}\,\omega_\beta(z)\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(z)|^2}\,,
\end{align}
and~\eqref{le definiting equations}, we can write
\begin{align*}
\frac{\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)}{\mathrm{Im}\, z}\bigg(\Big({\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(z)|^2}}\Big)^{-1}+\Big({\int_{\mathbb R }\frac{\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(z)|^2}} \Big)^{-1}\bigg)-1&=\frac{\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)}{\mathrm{Im}\, z}\frac{1}{|m_{\mu_\alpha\boxplus\mu_\beta}(z)|^2}\,,
\end{align*}
for all $z\in\mathcal{E}\cap{\mathbb C}^+$. Since $\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(z)/\mathrm{Im}\, z>0$, for all $z\in\mathcal{E}\cap{\mathbb C}^+$, we obtain
\begin{align}\label{good identity}
\Big({\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(z)|^2}}\Big)^{-1}+\Big({\int_{\mathbb R }\frac{\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(z)|^2}}\Big)^{-1}\ge|m_{\mu_\alpha\boxplus\mu_\beta}(z)|^{-2}=|\omega_\alpha(z)+\omega_\beta(z)-z|^2\,,
\end{align}
for all $z\in\mathcal{E}\cap{\mathbb C}^+$, and we can take the limit $\mathrm{Im}\, z\rightarrow 0$ to obtain the conclusion also for $z\in\mathcal{E}$.
Next, we introduce the quantities
\begin{align}
d_\alpha\mathrel{\mathop:}= |\mathrm{Re}\, \omega_\alpha(E_-)-E_-^\beta|\,,\qquad d_\beta\mathrel{\mathop:}= |\mathrm{Re}\, \omega_\beta(E_-)-E_-^\alpha|\,.
\end{align}
We now claim that $d_\alpha\ge k_0$ and $d_\beta\ge k_0$, for some constant $k_0>0$. Without loss of generality, we may assume that $d_\beta\ge d_\alpha$. We then proceed by distinguishing two cases: First assume that
\begin{align}\label{le case a}
d_\alpha\le \epsilon k\,,\qquad\qquad d_\beta>k\,,
\end{align}
for some small constants $k>0$ and $\epsilon>0$ to be chosen below.
Recalling Lemma~\ref{lemma computation}, we note that, for fixed small $\vartheta>0$,
\begin{align}\label{from the computational lemma}
\int_{E_-^\beta}^{E_-^\beta+\vartheta}\frac{\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(z)|^2}\sim\begin{cases}
\frac{(\mathrm{Re}\, \omega_\alpha(z)-E_-^\beta)^{t_-^\beta}}{\mathrm{Im}\, \omega_\alpha(z)}\,,\qquad &\textrm{ if }\qquad \mathrm{Re}\,\omega_\alpha(z)-E_-^\beta\ge\mathrm{Im}\,\omega_\alpha(z)\,,\\
|\mathrm{Re}\,\omega_\alpha(z)-E_-^\beta|^{t_-^\beta-1}\,, &\textrm{ if }\qquad \mathrm{Re}\, \omega_\alpha(z)-E_-^\beta\le-\mathrm{Im}\,\omega_\alpha(z)\,,\\
(\mathrm{Im}\, \omega_\alpha(z))^{t_-^\beta-1}\,, &\textrm{ if }\qquad \mathrm{Im}\,\omega_\alpha(z)>|\mathrm{Re}\,\omega_\alpha(z)-E_-^\beta|\,,
\end{cases}
\end{align}
uniformly on the domain $\mathcal{E}$, where we have $-1<t_-^\beta<1$. (In the limit $\mathrm{Im}\, z\rightarrow 0$, the integral may be divergent, but this does not affect the following argument.) Fixing a small $\delta>0$ and setting $z=E_--\delta$, we obtain from all three cases in~\eqref{from the computational lemma} that
\begin{align}\label{new liebling}
\bigg(\int_{E_-^\beta}^{E_+^\beta}\frac{\mathrm{d}\mu_\beta(x)}{|x-\omega_\alpha(E_--\delta)|^2}\bigg)^{-1}\le c|\mathrm{Re}\, \omega_\alpha(E_--\delta)-E_-^\beta|^{1-t_-^\beta}\le c( d_\alpha)^{1-t_-^\beta}\,,
\end{align}
where we used that $\mathrm{Re}\,\omega_\alpha(E_--\delta)$ is increasing as $\delta$ decreases, by Lemma~\ref{lemma 2}. In particular, we can take the limit $\delta\searrow 0$.
Thus, when $ d_\alpha<\epsilon k$ and $ d_\beta>k$, we have from~\eqref{good identity} and~\eqref{new liebling} that
\begin{align}
\bigg(\frac{1}{\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}}+c(\epsilon k)^{1-t_-^\beta} \bigg)\ge |m_{\mu_\alpha\boxplus\mu_\beta}(E_--\delta) |^{-2}\,,
\end{align}
which implies
\begin{align}\label{the contradiction follows from this}
1&\ge \int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}\big( |m_{\mu_\alpha\boxplus\mu_\beta}(E_--\delta) |^{-2}-c(\epsilon k)^{1-t_-^\beta} \big)\\
&=\frac{\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}}{\big|\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{x-\omega_\beta(E_--\delta)} \big|^2}-c(\epsilon k)^{1-t_-^\beta} \int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}\,,
\end{align}
where we used~\eqref{le definiting equations} to get the equality. As we are currently assuming that $d_\beta>k$, we have
\begin{align}\label{sb1}
c (\epsilon k)^{1-t_-^\beta} \int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}\le c(\epsilon k)^{1-t_-^\beta} \frac{1}{d_\beta^2}\le c\epsilon^{1-t_-^\beta}k^{-t_-^\beta-1}\,,
\end{align}
where we used that $\mathrm{Re}\,\omega_\beta(E_--\delta)$ is increasing as $\delta$ decreases.
Next, as we assume that $\mu_\alpha$ is not a single point mass, we have by the Cauchy-Schwarz inequality
\begin{align}\label{sb2}
\frac{\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}}{\big|\int_{\mathbb R }\frac{\mathrm{d}\mu_\alpha(x)}{x-\omega_\beta(E_--\delta)} \big|^2}\ge (1+C_S)\,,
\end{align}
for some constant $C_S>0$, uniformly for, say, all $0\le\delta\le 1/10$.
Hence, returning to~\eqref{the contradiction follows from this} and taking the limit $\delta\searrow 0$, we conclude from~\eqref{sb1} and~\eqref{sb2}
\begin{align}
1\ge 1+C_S- c\epsilon^{1-t_-^\beta}k^{-t_-^\beta-1}\,.
\end{align}
We therefore get, for $\epsilon<( C_S k^{1+t_-^\beta}/c)^{1/(1-t_-^\beta)}$, for any $k>0$, a contradiction. Here we use that $t_-^\beta<1$. Thus, we can reject~\eqref{le case a} for any $k$ if $\epsilon$ is sufficiently small depending on $k$.
Assume next that
\begin{align}\label{le case b}
d_\alpha\le \epsilon k\,,\qquad\qquad d_\beta\le k\,.
\end{align}
Following the lines from~\eqref{from the computational lemma} to~\eqref{new liebling} with $\alpha$ and $\beta$ interchanged, we find that for any small $\delta>0$,
\begin{align}\label{new liebling 2}
\Big(\int_{E_-^\alpha}^{E_+^\alpha}\frac{\mathrm{d}\mu_\alpha(x)}{|x-\omega_\beta(E_--\delta)|^2}\Big)^{-1}\le c|\mathrm{Re}\, \omega_\beta(E_--\delta)-E_-^\alpha|^{1-t_-^\alpha}\le c( d_\beta)^{1-t_-^\alpha}\,.
\end{align}
Hence, together with~\eqref{new liebling}, we get from~\eqref{good identity} that
\begin{align}
c(\epsilon k)^{1-t_-^\beta}+ck^{1-t_-^\alpha}\ge |m_{\mu_\alpha\boxplus\mu_\beta}(E_--\delta)|^{-2}\,.
\end{align}
As $m_{\mu_\alpha\boxplus\mu_\beta}(E_--\delta)$ is increasing as $\delta$ decreases, we can take the limit $\delta\searrow 0$. Thus
\begin{align}\label{groena lund}
|m_{\mu_\alpha\boxplus\mu_\beta}(E_-)|^{-2}\le c(\epsilon k)^{1-t_-^\beta}+ck^{1-t_-^\alpha}\,.
\end{align}
By~(\ref{17080326}), the left side of~\eqref{groena lund} is bounded from below by a positive constant. Hence, since $t_-^\alpha<1$ and $t_-^\beta<1$, we get a contradiction by choosing $k>0$ sufficiently small in~\eqref{groena lund}. Thus~\eqref{le case b} is ruled out. Here we only used that $\epsilon<1$.
Having ruled out both~\eqref{le case a} and~\eqref{le case b}, we conclude that
\begin{align}
d_\alpha> \epsilon k\,,\qquad\qquad d_\beta> k\,,
\end{align}
for $\epsilon>0$ and $k>0$ sufficiently small. Together with (\ref{le C15}) this proves~\eqref{key to everything} with $k_0\mathrel{\mathop:}=\epsilon k$ and concludes the proof of Lemma~\ref{lemma 3}.
\end{proof}
\begin{lem}\label{lemma 7}
The lowest endpoint $E_-$ of $\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta$ is the smallest real solution to the equation
\begin{align}\label{ja wo ist das edge}
(F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)=1\,,\qquad z\in{\mathbb R }\,.
\end{align}
Moreover, there are constants $\kappa_0>0$ and $\eta_0>0$ such that
\begin{align}\label{omega behavior}
\Im m_{\mu_\alpha\boxplus\mu_\beta}(z)\sim\Im \omega_\alpha(z)\sim \Im\,\omega_\beta(z)\sim\begin{cases}\sqrt{\kappa+\eta}\,,\quad & \textrm{if}\; E\ge E_-\,,\\
\frac{\eta}{\sqrt{\kappa+\eta}}\,,\quad &\textrm{if}\; E<E_-\,,\end{cases}
\end{align}
uniformly for all $z=E+\mathrm{i}\eta\in\mathcal{E}_0$ where
\begin{align}\label{def:eps0}
\mathcal{E}_0\mathrel{\mathop:}= \big\{z\in \mathcal{E}: -\kappa_0\le \Re z-E_- \le\kappa_0\,,\; 0\le \Im z\leq \eta_0\big\}\,.
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lemma 7}]
From Lemma~\ref{lemma 3} we know that $\Re\omega_\alpha(E_-)\le E_-^\beta -K$ and $\Re\omega_\beta(E_-)\le E_-^\alpha-K$. From the subordination equations~\eqref{le definiting equations} and~\eqref{representation}, we have that
\begin{align}\label{le franz}
F_{\mu_\alpha\boxplus\mu_\beta}(z)=F_{\mu_\alpha}(\omega_\beta(z))=\Re F_{\mu_\alpha}(\mathrm{i})+\omega_\beta(z)+\int_{\mathbb R }\left(\frac{1}{x-\omega_\beta(z)}-\frac{x}{1+x^2}\right)\mathrm{d}\widehat{\mu}_{\alpha}(x)\,,
\end{align}
for some Borel measure $\widehat\mu_\alpha$ on ${\mathbb R }$ with, according to Lemma~\ref{lemma 6}, $\mathrm{supp}\,\widehat\mu_\alpha=\mathrm{supp}\,\mu_\alpha$. Arguing as in the proof of Lemma~\ref{lemma 6}, we notice that $u\in{\mathbb R }$ is an edge of the measure $\mu_\alpha\boxplus\mu_\beta$ if $m_{\mu_\alpha\boxplus\mu_\beta}$ fails to be analytic at $u\in{\mathbb R }$ and $\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta}(u)=0$. Analyticity breaks down if either $F_{\mu_\alpha\boxplus\mu_\beta}(u)=0$ or, according to~\eqref{le franz}, if $\omega_\beta(u)\in \mathrm{supp}\,\widehat{\mu}_\alpha=\mathrm{supp}\,\mu_\alpha$, or if $\omega_\beta$ fails to be analytic at $u$. For the lowest edge at $u=E_-$, we can exclude $F_{\mu_\alpha\boxplus\mu_\beta}(u)=0$ by (\ref{17080326}) and also $\omega_\beta(u)\in\mathrm{supp}\,\mu_\alpha$ as $\mathrm{Re}\,\omega_\beta(E_-)\le E_-^\alpha-k_0$, $k_0>0$. Thus $E_-\in{\mathbb R }$ is the smallest point where $\omega_\beta$ is not analytic.
We next claim that $\omega_\beta$ is not analytic at $u\in{\mathbb R }$ if $(F_{\mu_\alpha}'(\omega_\beta(u))-1)(F_{\mu_\beta}'(\omega_\alpha(u))-1)=1$. We argue as follows. From~\eqref{representation} we know that there is a Borel measure $\widehat\mu_\beta$ such that
\begin{align}
F_{\mu_\beta}(\omega)=\Re F_{\mu_\beta}(\mathrm{i})+\omega+\int_{{\mathbb R }}\left(\frac{1}{x-\omega}-\frac{x}{1+x^2}\right)\,\mathrm{d}\widehat\mu_\beta(x)\,,
\end{align}
and $F_{\mu_\beta}$ is analytic in a disk of radius $K$ centered at $\omega=\omega_\beta(E_-)$ by (\ref{key to everything}). Here we also used that $\mathrm{supp}\,\widehat\mu_\beta=\mathrm{supp}\,\mu_\beta$ by Lemma~\ref{lemma 6}. It follows that
\begin{align}\label{hurra}
F'_{\mu_\beta}(\omega)=1+\int_{{\mathbb R }}\frac{\mathrm{d}\widehat\mu_\beta(x)}{(x-\omega)^2}\,,
\end{align}
and in particular that $F'_{\mu_\beta}(\omega_\alpha(E_-))>1$, since $\omega_\alpha(E_-)$ is real-valued, $E_-$ being the lower endpoint of the support of $\mu_\alpha\boxplus\mu_\beta$. By the analytic inverse function theorem, the functional inverse $F^{(-1)}_{\mu_\beta}$ of $F_{\mu_\beta}$ is analytic in a neighborhood of $F_{\mu_\beta}(\omega_\alpha(E_-))$. Thus the function
\begin{align}\label{le z}
\widetilde z(\omega)\mathrel{\mathop:}= -F_{\mu_\alpha}(\omega)+\omega+F^{(-1)}_{\mu_\beta}\circ F_{\mu_\alpha}(\omega)
\end{align}
is well-defined and analytic in a neighborhood of $\omega_\beta(E_-)$. It follows from~\eqref{le definiting equations} that $\omega=\omega_\beta(z)$ solves the equation $z=\widetilde z(\omega)$ (with $\mathrm{Im}\, \omega_\beta(z)\ge \mathrm{Im}\, z$). Moreover, we have $\omega_\alpha(z)=F^{(-1)}_{\mu_\beta}\circ F_{\mu_\alpha}(\omega_\beta(z))$.
The function $\widetilde{z}(\omega)$ admits the following Taylor expansion in a neighborhood of $\omega_\beta(E_-)$,
\begin{align}\label{taylor expansion}
\widetilde{z}(\omega)=E_-+\widetilde z'(\omega_\beta(E_-))(\omega-\omega_\beta(E_-))+\frac{1}{2}\widetilde z''(\omega_\beta(E_-))(\omega-\omega_\beta(E_-))^2+O\left((\omega-\omega_\beta(E_-))^3\right)\,.
\end{align}
In particular, $\widetilde{z}(\omega)$ admits an inverse around $z=E_-$ that is locally analytic if and only if $\widetilde z'(\omega_\beta(E_-))\not=0$. Thus the smallest edge $E_-$ of the support of $\mu_\alpha\boxplus\mu_\beta$ is the smallest $u\in{\mathbb R }$ such that $\widetilde z'(\omega_\beta(u))=0$. To find the location of the edge, we compute
\begin{align}
\widetilde z'(\omega)=-F'_{\mu_\alpha}(\omega)+1+\frac{1}{F'_{\mu_\beta}\circ F_{\mu_\beta}^{(-1)}\circ F_{\mu_\alpha}(\omega)}F'_{\mu_\alpha}(\omega)\,.
\end{align}
Hence, choosing $\omega=\omega_\beta(z)$, we get
\begin{align}\label{stinker}
\widetilde z'(\omega_\beta(z))=-F'_{\mu_\alpha}(\omega_\beta(z))+1+\frac{1}{F'_{\mu_\beta}(\omega_\alpha(z))}F'_{\mu_\alpha}(\omega_\beta(z))\,,
\end{align}
hence, from $\widetilde z'(\omega_\beta(E_-))=0$ we obtain
\begin{align}\label{stinker bis}
(F'_{\mu_\alpha}(\omega_\beta(E_-))-1)(F'_{\mu_\beta}(\omega_\alpha(E_-))-1)=1\,.
\end{align}
This proves~\eqref{ja wo ist das edge}.
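As an illustrative sanity check (an aside, not used in the sequel), the edge equation can be verified in closed form when $\mu_\alpha=\mu_\beta$ is the standard semicircle law on $[-2,2]$, for which $\mu_\alpha\boxplus\mu_\beta$ is the variance-$2$ semicircle law with lower edge $E_-=-2\sqrt 2$; all helper names in the sketch are ad hoc.

```python
import numpy as np

def branch_sqrt(z):
    # sqrt(z^2 - 4) on the branch that behaves like z at infinity.
    return np.sqrt(z - 2) * np.sqrt(z + 2)

def G_sc(z):
    # Stieltjes transform E[1/(z - x)] of the standard semicircle law.
    return (z - branch_sqrt(z)) / 2

def Fprime_sc(z):
    # F = 1/G_sc, hence F' = -G_sc'/G_sc^2, with G_sc'(z) = (1 - z/sqrt(z^2-4))/2.
    Gp = (1 - z / branch_sqrt(z)) / 2
    return -Gp / G_sc(z) ** 2

def omega_at(z):
    # For two identical factors, omega_alpha = omega_beta = (z + 1/G_free(z))/2,
    # where G_free is the transform of the variance-2 semicircle law.
    a = 2 * np.sqrt(2)
    G_free = (z - np.sqrt(z - a) * np.sqrt(z + a)) / 4
    return (z + 1 / G_free) / 2

E_minus = complex(-2 * np.sqrt(2))

# At the edge the product (F' - 1)(F' - 1) equals 1 ...
Fp_edge = Fprime_sc(omega_at(E_minus))
lhs_edge = (Fp_edge - 1) ** 2
print(lhs_edge.real)                 # 1 up to rounding

# ... while strictly to the left of the edge it stays below 1.
Fp_left = Fprime_sc(omega_at(E_minus - 0.5))
lhs_left = (Fp_left - 1) ** 2
print(lhs_left.real)                 # a value strictly between 0 and 1
```

In this symmetric example one finds $F'(\omega(E_-))=2$ exactly, so the product of the two factors $F'-1$ is exactly $1$ at $E_-$ and drops below $1$ away from it.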
We move on to proving~\eqref{omega behavior}. From~\eqref{le z} we compute,
\begin{align}
\widetilde z''(\omega)&=-F''_{\mu_\alpha}(\omega)+\frac{1}{F'_{\mu_\beta}\circ F_{\mu_\beta}^{(-1)}\circ F_{\mu_\alpha}(\omega)}F''_{\mu_\alpha}(\omega)\nonumber\\&\qquad-\frac{1}{(F'_{\mu_\beta}\circ F_{\mu_\beta}^{(-1)}\circ F_{\mu_\alpha}(\omega))^3}\left(F''_{\mu_\beta}\circ F_{\mu_\beta}^{(-1)}\circ F_{\mu_\alpha}(\omega)\right)\cdot( F'_{\mu_\alpha}(\omega))^2\nonumber\,,
\end{align}
and thus by choosing $\omega=\omega_\beta(z)$, we get
\begin{align*}
\widetilde z''(\omega_\beta(z))=-F''_{\mu_\alpha}(\omega_\beta(z))+\frac{1}{F'_{\mu_\beta}(\omega_\alpha(z))}F''_{\mu_\alpha}(\omega_\beta(z))-\frac{1}{(F'_{\mu_\beta}(\omega_\alpha(z)))^3}F''_{\mu_\beta}(\omega_\alpha(z))\cdot( F'_{\mu_\alpha}(\omega_\beta(z)))^2\,.
\end{align*}
This we can rewrite as
\begin{align}
\widetilde z''(\omega_\beta(z))=\frac{F''_{\mu_\alpha}(\omega_\beta(z))}{F'_{\mu_\beta}(\omega_\alpha(z))}(1-F'_{\mu_\beta}(\omega_\alpha(z)))-\frac{1}{(F'_{\mu_\beta}(\omega_\alpha(z)))^3}F''_{\mu_\beta}(\omega_\alpha(z))\cdot( F'_{\mu_\alpha}(\omega_\beta(z)))^2\,.
\end{align}
Thus choosing~$z=E_-$ and recalling~\eqref{stinker} and~\eqref{stinker bis}, we get
\begin{align}
\widetilde z''(\omega_\beta(E_-))=\frac{F''_{\mu_\alpha}(\omega_\beta(E_-))}{F'_{\mu_\beta}(\omega_\alpha(E_-))}(1-F'_{\mu_\beta}(\omega_\alpha(E_-)))-\frac{F''_{\mu_\beta}(\omega_\alpha(E_-))}{F'_{\mu_\beta}(\omega_\alpha(E_-))}(F'_{\mu_\alpha}(\omega_\beta(E_-))-1)^2\,.
\end{align}
From~\eqref{hurra}, we directly get
\begin{align}\label{sas 1}
F'_{\mu_\beta}(\omega_\alpha(E_-))=1+\int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_\beta(x)}{(x-\omega_\alpha(E_-))^2}\,,\qquad F'_{\mu_\alpha}(\omega_\beta(E_-))=1+\int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_\alpha(x)}{(x-\omega_\beta(E_-))^2}\,,
\end{align}
as well as
\begin{align}\label{sas 2}
F''_{\mu_\beta}(\omega_\alpha(E_-))=\int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_\beta(x)}{(x-\omega_\alpha(E_-))^3}\,,\qquad F''_{\mu_\alpha}(\omega_\beta(E_-))=\int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_\alpha(x)}{(x-\omega_\beta(E_-))^3}\,.
\end{align}
Recalling that $\omega_\alpha(E_-)\le E_-^\beta-K$, $\omega_\beta(E_-)\le E_-^\alpha-K$ and that $\widehat\mu_\alpha\not=0$, $\widehat\mu_\beta\not=0$ (as $\mu_\alpha$ and $\mu_\beta$ are not single point masses), we infer from~\eqref{sas 1} and~\eqref{sas 2} that there are constants $c>0$ and $C<\infty$ such~that
\begin{align}\label{flugi}
c\le-\widetilde z''(\omega_\beta(E_-))\le C\,.
\end{align}
Choosing $\omega=\omega_\beta(z)$ (thus $\widetilde z(\omega_\beta(z))=z$)
and using $\widetilde z'(\omega_\beta(E_-))=0$, $\widetilde z''(\omega_\beta(E_-))\not=0$ in~\eqref{taylor expansion}, we get
\begin{align}\label{kind nervt}
\omega_\beta(z)-\omega_\beta(E_-)=\sqrt{\frac{2}{-\widetilde z''(\omega_\beta(E_-))}}\,\sqrt{E_--z}+O(|z-E_-|)\,,
\end{align}
for $z$ in a neighborhood of $E_-$. The branch of the square root is chosen such that $\mathrm{Im}\, \omega_\beta(z)>0$, $z\in{\mathbb C}^+$.
Next, setting $z=E+\mathrm{i}\eta$, we observe that~\eqref{flugi} and~\eqref{kind nervt} imply, for $z$ near $E_-$, that
\begin{align}
\mathrm{Im}\, \omega_\beta(z)\sim\begin{cases} \sqrt{\kappa+\eta}\,,\qquad &\textrm{if}\; E\ge E_-\,, \\ \frac{\eta}{\sqrt{\kappa+\eta}}\,, &\textrm{if}\; E<E_-\,.\end{cases}
\end{align}
This proves the third estimate in~\eqref{omega behavior}. The second estimate is obtained in the same way by interchanging the r\^oles of the indices $\alpha$ and $\beta$. Finally, the first estimate follows from~\eqref{needed later on} and the fact that $\omega_\alpha(z)$ and $\omega_\beta(z)$, $z\in\mathcal{E}_0$, stay away from the supports of the measures $\mu_\beta$ and $\mu_\alpha$, respectively, by~\eqref{key to everything} and~\eqref{kind nervt}. This shows~\eqref{omega behavior} and concludes the proof of Lemma~\ref{lemma 7}.
\end{proof}
\begin{rem} From (\ref{kind nervt}) and $m_{\mu_\alpha\boxplus\mu_\beta}(z)=m_{\mu_\alpha}(\omega_\beta(z))$ we get the precise behavior of $m_{\mu_\alpha\boxplus\mu_\beta}(z)$~on~$\mathcal{E}_0$,
\begin{align*}
m_{\mu_\alpha\boxplus\mu_\beta}(z)- m_{\mu_\alpha\boxplus\mu_\beta}(E_-)= m_{\mu_\alpha}'(\omega_\beta(E_-))\sqrt{\frac{2}{-\widetilde z''(\omega_\beta(E_-))}}\,\sqrt{E_--z}+O(|z-E_-|)\,,
\end{align*}
and thus by the Stieltjes inversion formula we have the square root behavior for the density of $\mu_\alpha\boxplus\mu_\beta$,
\begin{align}
{\rm d} \mu_{\alpha}\boxplus\mu_\beta (x)\sim \sqrt{x-E_-} \, {\rm d} x\,, \qquad \forall x\in [E_-, E_-+\kappa_0]\,. \label{17080390}
\end{align}
\end{rem}
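The square-root behavior~\eqref{17080390} can also be observed numerically. The sketch below (an illustration only; all helper names are ad hoc) computes the subordination function for two standard semicircle laws by the standard fixed-point iteration $\omega\mapsto z+h(z+h(\omega))$, $h(w)=F(w)-w$, which is known to converge on the upper half-plane, and evaluates $\mathrm{Im}\, m$ near the lower edge $E_-=-2\sqrt 2$ of the resulting variance-$2$ semicircle law.

```python
import numpy as np

def G_sc(z):
    # Stieltjes transform E[1/(z - x)] of the standard semicircle law;
    # the product of principal square roots fixes the branch on the upper
    # half-plane.
    return (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

def h(w):
    # h = F - id with F = 1/G_sc the reciprocal (F-)transform.
    return 1.0 / G_sc(w) - w

def omega_beta(z, n_iter=3000):
    # Fixed-point iteration for the subordination function; since
    # Im h >= 0, the iterates never leave the upper half-plane.
    w = z
    for _ in range(n_iter):
        w = z + h(z + h(w))
    return w

def im_m(z):
    # m(z) = m_sc(omega_beta(z)) = -G_sc(omega_beta(z)); as Im z -> 0 its
    # imaginary part tends to pi times the density of the free convolution.
    return (-G_sc(omega_beta(z))).imag

E_minus = -2 * np.sqrt(2)
eta = 1e-4
im1 = im_m(E_minus + 0.04 + 1j * eta)   # distance 0.04 from the edge
im2 = im_m(E_minus + 0.16 + 1j * eta)   # distance 0.16 from the edge
print(im2 / im1)                         # close to sqrt(0.16/0.04) = 2
```

Quadrupling the distance to the edge roughly doubles $\mathrm{Im}\, m$, the signature of the square-root density in~\eqref{17080390}.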
\begin{cor}\label{le second corollary} Let $\mathcal{E}_0$ be as in~\eqref{def:eps0}. Then the following behaviors hold uniformly for $z\in\mathcal{E}_0$,
\begin{align}\label{mprime}
|m'_{\mu_\alpha\boxplus\mu_\beta}(z)|&\sim \frac{1}{\sqrt{|z-E_-|}}\,,\qquad & |m''_{\mu_\alpha\boxplus\mu_\beta}(z)|\sim \frac{1}{|z-E_-|^{3/2}}\,,\\
|\omega'_\alpha(z)|&\sim \frac{1}{\sqrt{|z-E_-|}}\,,\qquad& |\omega''_\alpha(z)|\sim \frac{1}{|z-E_-|^{3/2}}\,,\label{bring it on 2}
\end{align}
and
\begin{align}
& F'_{\mu_\alpha}(\omega_\beta(z))\sim 1\,,& F''_{\mu_\alpha}(\omega_\beta(z))\sim 1\,,\qquad \qquad& F'''_{\mu_\alpha}(\omega_\beta(z))\sim 1\,.\label{bring it on 3}
\end{align}
The same estimates hold true when the r\^oles of the subscripts $\alpha$ and $\beta$ are interchanged.
\end{cor}
\begin{proof}
Having established~\eqref{omega behavior} for the behavior of $\omega_\alpha$ and $\omega_\beta$ around the smallest edge $E_-$, the behaviors in~\eqref{mprime} follow directly. Using the subordination equations~\eqref{le definiting equations}, we note that $F'_{\mu_\alpha}(\omega_\beta(z))\omega'_\beta(z)=F'_{\mu_\beta}(\omega_\alpha(z))\omega'_\alpha(z)=m'_{\mu_\alpha\boxplus\mu_\beta}(z)/(m_{\mu_\alpha\boxplus\mu_\beta}(z))^{2}$, which together with~\eqref{mprime} implies~\eqref{bring it on 2}. Finally,~\eqref{bring it on 3} follows directly from the analyticity of $F_{\mu_\beta}$ and $F_{\mu_\alpha}$ in a neighborhood of $\omega_\alpha(E_-)$, respectively $\omega_\beta(E_-)$.
\end{proof}
Let us define a second subdomain $\mathcal{E}_{\kappa_0}$ of $\mathcal{E}$ by setting
\begin{align} \label{def:E kappa0}
\mathcal{E}_{\kappa_0}\mathrel{\mathop:}=
\{z\in\mathcal{E}\,:\, E_-^\alpha+E_-^\beta-1\le \mathrm{Re}\, z\le E_-+\kappa_0\,,\; 0\le\mathrm{Im}\, z\le \eta_\mathrm{M}\}
\end{align}
with $\kappa_0$ and $\eta_0$ as in~\eqref{def:eps0} and $\eta_\mathrm{M}$ as in the definition of $\mathcal{E}$. Note that $\mathcal{E}_0\subset\mathcal{E}_{\kappa_0}\subset\mathcal{E}$.
We further introduce the functions
\begin{align}\label{T}
&\mathcal{S}_{\alpha\beta}\equiv \mathcal{S}_{\alpha\beta}(z)\mathrel{\mathop:}= (F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)-1\,, \nonumber\\
&\mathcal{T}_\alpha\equiv \mathcal{T}_\alpha(z)\mathrel{\mathop:}= \frac{1}{2}\big(F''_{\mu_\alpha}(\omega_\beta(z)) (F'_{\mu_\beta}(\omega_\alpha(z))-1)^2+
F''_{\mu_\beta}(\omega_\alpha(z))(F'_{\mu_\alpha}(\omega_\beta(z))-1) \big)\,, \nonumber\\
&\mathcal{T}_\beta\equiv \mathcal{T}_\beta(z)\mathrel{\mathop:}= \frac{1}{2}\big(F''_{\mu_\beta}(\omega_\alpha(z)) (F'_{\mu_\alpha}(\omega_\beta(z))-1)^2+F''_{\mu_\alpha}(\omega_\beta(z))(F'_{\mu_\beta}(\omega_\alpha(z))-1) \big)\,,\qquad z\in{\mathbb C}^+\,.
\end{align}
These functions are essentially the first and second order derivatives of the subordination equations~(\ref{le definiting equations}). We have the following corollary on the estimates of $m_{\mu_\alpha\boxplus\mu_\beta}$, $\omega_\alpha$, $\omega_\beta$ and also the above functions.
\begin{cor} \label{cor.17080370} Let $\mathcal{E}_{\kappa_0}$ be as in~\eqref{def:E kappa0} and let $\mathcal{E}_0$ be as in~\eqref{def:eps0}. Then
\begin{align}\label{kev1}
\Im m_{\mu_\alpha\boxplus\mu_\beta}(z)\sim \Im \omega_\alpha(z)\sim \Im \omega_\beta(z)\sim
\begin{cases}\sqrt{\kappa+\eta}\,, \qquad&\textrm{if } E\ge E_-\,,\\
\frac{\eta}{\sqrt{\kappa+\eta}}\,, & \textrm{if } E<E_-\,,
\end{cases}
\end{align}
and
\begin{align}\label{kev2}
\mathcal{S}_{\alpha\beta}(z)\sim\sqrt{\kappa+\eta}
\end{align}
hold uniformly for $z\in\mathcal{E}_{\kappa_0}$, with $\kappa$ given in~\eqref{17080102}. Moreover, we have
\begin{align}\label{kev3}
\mathcal{T}_\alpha(z)\sim 1,\qquad \qquad
\mathcal{T}_\beta(z)\sim 1\,,
\end{align}
uniformly for $z\in\mathcal{E}_0$, respectively
\begin{align}\label{kev3334}
&|\mathcal{T}_\alpha(z)|\leq C,\qquad\qquad |\mathcal{T}_\beta(z)|\leq C\,,
\end{align}
uniformly for $z\in\mathcal{E}_{\kappa_0}$, for some constant $C$.
\end{cor}
\begin{proof}[Proof of Corollary \ref{cor.17080370}]
Having established~\eqref{omega behavior} for the behavior of $\omega_\alpha$ and $\omega_\beta$ on $\mathcal{E}_0$, the behaviors in~\eqref{kev1},~\eqref{kev2} and~\eqref{kev3} can be checked by elementary computations using Taylor expansions as in the proof of Lemma~\ref{lemma 7}, and the estimates in~\eqref{sas 1} and~\eqref{sas 2}.
Consider now the complementary domain $\mathcal{E}_{\kappa_0}\setminus \mathcal{E}_0$. Observe that $\kappa+\eta\sim 1$ in $\mathcal{E}_{\kappa_0}\setminus \mathcal{E}_0$. Hence, we have
\begin{align}
\Im m_{\mu_\alpha\boxplus\mu_\beta}(z)=\int_{\mathbb R }\frac{\eta}{(x-E)^2+\eta^2} {\rm d} \mu_{\alpha}\boxplus\mu_\beta(x)\sim \eta \label{17080350}
\end{align}
uniformly on $\mathcal{E}_{\kappa_0}\setminus \mathcal{E}_0$. Then, from (\ref{a lot of rs}), (\ref{17080350}) and $\Im \omega_\alpha(z)\geq \eta$, $\Im \omega_\beta(z)\geq \eta$, we get
\begin{align}
\Im \omega_\alpha(z)\sim \eta, \qquad\qquad \Im \omega_\beta(z)\sim \eta. \label{17080380}
\end{align}
Observe that both expressions on the right side of (\ref{kev1}) are of order $\eta$ if $z\in \mathcal{E}_{\kappa_0}\setminus\mathcal{E}_0$. Hence, (\ref{kev1}) holds on all of $\mathcal{E}_{\kappa_0}$.
Next, we show that (\ref{kev2}) can be extended to the whole $\mathcal{E}_{\kappa_0}\setminus \mathcal{E}_0$. Since $\kappa+\eta\sim 1$, it suffices to show that the left side of (\ref{kev2}) is comparable to $1$ on $\mathcal{E}_{\kappa_0}\setminus \mathcal{E}_0$. We first consider real $z\in [E_-^\alpha+E_-^\beta-1, E_--\kappa_0]$. Using (\ref{hurra}) and its analogue for $F'_{\mu_\alpha}$, (\ref{stinker bis}), (\ref{kev2}), the monotonicity of $\omega_\alpha(z)$ and $\omega_\beta(z)$ on $(-\infty, E_--\kappa_0]$ (\emph{c.f., } Lemma \ref{lemma 2}), and (\ref{key to everything}), we see that
\begin{align*}
0\leq (F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)\leq 1- c\,, \qquad \quad\forall z\in [E_-^\alpha+E_-^\beta-1, E_--\kappa_0] \,,
\end{align*}
for some small constant $c>0$. Hence, we have
\begin{align}
\big|(F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)-1\big|\sim 1, \qquad\quad\forall z\in [E_-^\alpha+E_-^\beta-1, E_--\kappa_0]\,. \label{17080360}
\end{align}
Then, (\ref{17080360}) can be extended to all $z=E+\mathrm{i}\eta$, with $E\in [E_-^\alpha+E_-^\beta-1, E_--\kappa_0]$ and $0\leq \eta\leq \widetilde{\eta}_0$ for sufficiently small constant $\widetilde{\eta}_0>0$ by continuity. This together with (\ref{kev2}) gives the estimate in the regime $E\in [E_-^\alpha+E_-^\beta-1, E_-+\kappa_0]$ and $0\leq \eta\leq \eta_0$ after possibly reducing $\eta_0$ to $\widetilde{\eta}_0$ if $\eta_0>\widetilde{\eta}_0$.
It remains to show that the left side of (\ref{kev2}) is comparable to $1$ when $E\in [E_-^\alpha+E_-^\beta-1, E_-+\kappa_0]$ and $\eta_0\leq \eta\leq \eta_\mathrm{M}$. To this end, we first recall (\ref{hurra}), and observe from (\ref{le franz}) that
\begin{align}
\frac{\Im F_{\mu_\alpha}(\omega_\beta(z))-\Im \omega_\beta(z)}{\Im \omega_\beta(z)}=\int_{\mathbb R }\frac{1}{|x-\omega_\beta(z)|^2}\,\mathrm{d}\widehat{\mu}_{\alpha}(x)\,. \label{17080378}
\end{align}
Hence, using (\ref{hurra}), (\ref{17080378}) and their $F_{\mu_\beta}$ analogues, we have
\begin{align}
|(F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)| &\leq \frac{\Im F_{\mu_\alpha}(\omega_\beta(z))-\Im \omega_\beta(z)}{\Im \omega_\beta(z)}\frac{\Im F_{\mu_\beta}(\omega_\alpha(z))-\Im \omega_\alpha(z)}{\Im \omega_\alpha(z)}\nonumber\\
& = \frac{\Im \omega_\alpha(z)-\eta}{\Im \omega_\beta(z)}\frac{\Im \omega_\beta(z)-\eta}{\Im \omega_\alpha(z)}\leq 1-c\,, \label{17080379}
\end{align}
for a strictly positive constant $c$, where in the second step we used the second equation in (\ref{le definiting equations}) and in the last step we used the fact that $\eta\geq \eta_0$ and (\ref{17080380}). Then, from (\ref{17080379}) we get (\ref{kev2}) in the whole $\mathcal{E}_{\kappa_0}$.
Similarly, the upper bound in (\ref{kev3334}) follows from (\ref{17080380}), (\ref{key to everything}), the monotonicity in Lemma \ref{lemma 2}, and the continuity of $\omega_\alpha$ and $\omega_\beta$. Omitting the details, we conclude the proof of Corollary \ref{cor.17080370}.
\end{proof}
At this stage we have completed the first step in the proof of Proposition~\ref{le proposition 3.1}. In the next subsection, we carry out the second step where we translate results obtained so far for $\mu_\alpha$ and $\mu_\beta$ to the measures $\mu_A$ and $\mu_B$ by giving the actual proof of Proposition~\ref{le proposition 3.1}.
\subsection{Proof of Proposition~\ref{le proposition 3.1}}\label{app:stab}
\renewcommand{\epsilon}{\varepsilon}
In this subsection, we prove Proposition~\ref{le proposition 3.1}. Consider the $N$-dependent measures $\mu_A$ and $\mu_B$ while always assuming that they satisfy Assumption~\ref{a. levy distance}.
Let $\omega_A(z)$ and $\omega_B(z)$ denote the subordination functions associated by~\eqref{170730100} to the measures $\mu_A$ and $\mu_B$. Recall further the definition of the $z$-dependent quantities $\mathcal{S}_{AB}$, $\mathcal{T}_A$ and $\mathcal{T}_B$ in~\eqref{17080110}.
Recall that $E_-=\inf\,\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta$. Fix sufficiently small $\varepsilon,\delta>0$ and let the domain $\mathcal{D}$ be defined by
\begin{align*}
\mathcal{D}\mathrel{\mathop:}= \mathcal{D}_{\mathrm{in}}\cup \mathcal{D}_{\mathrm{out}}\,,
\end{align*}
with
\begin{align*}
\mathcal{D}_{\mathrm{in}}& \mathrel{\mathop:}=\{ z\in {\mathbb C}^+ : |z-E_-|\le \delta\} \cap \{ \mathrm{Im}\, z\ge N^{-1+10\varepsilon}, \mathrm{Re}\, z>E_- - N^{-1+10\varepsilon} \} \,,\\
\mathcal{D}_{\mathrm{out}}&\mathrel{\mathop:}= \{ z\in {\mathbb C}^+: |z-E_-|\le \delta\}\cap \{ \mathrm{Re}\, z <E_- - N^{-1+10\varepsilon} \} \,.
\end{align*}
Notice that the bounds on the $A, B$-quantities will be for spectral parameters $z$ that are separated from the limiting spectrum (\emph{e.g., } by assuming that $\mathrm{Im}\, z\ge N^{-1+10\varepsilon}$), unlike in the case of the $\alpha, \beta$-quantities.
\begin{lem}\label{thm.17080401}
Let $\mu_A$, $\mu_B$, $\mu_\alpha$ and $\mu_\beta$ satisfy Assumptions~\ref{a.regularity of the measures} and~\ref{a. levy distance}.
Then, there is a constant $c>0$ such that for any $z\in\mathcal{D}$ we have
\begin{align}\label{omegadiff}
|\omega_A (z)- \omega_\alpha(z)| + |\omega_B (z)- \omega_\beta(z)| & \lesssim \frac{N^{-1+c\varepsilon}} {\sqrt{|z-E_-|}} \le N^{-1/2+c\varepsilon}\,,\\
|\mathcal{S}_{AB}(z)|& \sim \sqrt{|z-E_-|}\,,\label{Sdiff}
\end{align}
and
\begin{equation}\label{Tdiff}
|\mathcal{T}_A(z)| \sim 1, \qquad |\mathcal{T}_B(z)| \sim 1\,,
\end{equation}
for $N$ sufficiently large. Moreover, we have for any $z\in\mathcal{D}$ that
\begin{align}
\mathrm{Im}\, m_{\mu_A\boxplus\mu_B}(z) &\sim \sqrt{|z-E_-|}, \qquad\qquad & &z \in\mathcal{D}_{\mathrm{in}} \,,\label{mdiff1}\\
\mathrm{Im}\, m_{\mu_A\boxplus\mu_B}(z) &\lesssim
\frac{\mathrm{Im}\, z + O(N^{-1+c\varepsilon})}{ \sqrt{|z-E_-|}} ,\qquad & &z\in \mathcal{D}_{\mathrm{out}}\,,\label{mdiff2}
\end{align}
for $N$ sufficiently large. Furthermore, for the imaginary parts the bound \eqref{omegadiff} is, for $N$ sufficiently large, sharpened to
\begin{equation}\label{imbound}
|\mathrm{Im}\, \omega_A -\mathrm{Im}\, \omega_\alpha| + |\mathrm{Im}\,\omega_B-\mathrm{Im}\,\omega_\beta| \le
\frac{ (\mathrm{Im}\, \omega_\alpha + \mathrm{Im}\, \omega_\beta) N^{-1+c\varepsilon} + \mathrm{Im}\, z}{\sqrt{|z-E_-|}}\,,
\end{equation}
for $z\in \mathcal{D}_{\mathrm{out}}$, $\eta\le N^{-1}$,
which implies that
\begin{equation}\label{bottom}
\inf \mathrm{supp}\, \mu_{A}\boxplus \mu_B \ge E_-- N^{-1+10\varepsilon}\,.
\end{equation}
Away from the edge we have the following weaker versions of \eqref{Sdiff}, \eqref{Tdiff}:
\begin{equation}\label{Sdiff1}
|\mathcal{S}_{AB}(z)| \sim 1\,,
\end{equation}
\begin{equation}\label{Tdiff1}
|\mathcal{T}_A(z)| +|\mathcal{T}_B(z)| \le C\,,
\end{equation}
hold uniformly for any $z$ with $\delta \le |z-E_-|\le C$, for $N$ sufficiently large.
\end{lem}
\begin{proof} First, note that we can rewrite the subordination equation for $\mu_\alpha$ and $\mu_\beta$ (\emph{c.f., }~\eqref{le definiting equations} with $\mu_1=\mu_\alpha$, $\mu_2=\mu_\beta$) as
\begin{align}\label{le perturbed subo}
F_{\mu_A}(\omega_\beta(z)) -\omega_\alpha(z)-\omega_\beta(z)+z &= r_1(z)\,,\nonumber\\
F_{\mu_B}(\omega_\alpha(z)) -\omega_\alpha(z) - \omega_\beta(z) +z & = r_2(z)\,,
\end{align}
where we introduced
\begin{align}\label{the r's}
r_1(z)\mathrel{\mathop:}= F_{\mu_A}(\omega_\beta(z)) - F_{\mu_\alpha}(\omega_\beta(z))\,,\qquad \qquad r_2(z)\mathrel{\mathop:}= F_{\mu_B}(\omega_\alpha(z)) - F_{\mu_\beta}(\omega_\alpha(z))\,.
\end{align}
By Lemma~\ref{lemma 3} and Lemma~\ref{lemma 7}, we know that $\omega_\beta(z)$, $z\in\mathcal{D}$, is far away from the support of $\mu_\alpha$ and also from the support of $\mu_A$, using \eqref{supab}. Hence, using Corollary~\ref{le second corollary} and Lemma~\ref{lemma 6}, we have
\begin{align}
| r_1(z)|\le C\mathbf{d} = CN^{-1+\varepsilon}\,,\qquad |r_2(z)|\le C\mathbf{d} = CN^{-1+\varepsilon}\,,\qquad\qquad z\in\mathcal{D}\,,
\end{align}
with $\mathbf{d}$ given in~\eqref{levy}. We rely on the following local stability result of the system~\eqref{le perturbed subo}.
\begin{lem}\label{lemma local stability}
Fix $z_0\in\mathcal{D}$. Assume that the functions $\omega_\alpha$, $\omega_\beta$, $r_1$, $r_2\,:\,{\mathbb C}^+\rightarrow {\mathbb C}$ satisfy~\eqref{le perturbed subo} with $z=z_0$. Assume moreover that there is a function $q\equiv q(z_0)$ such that
\begin{align}\label{le apriori closeness}
|\omega_A(z_0)-\omega_\alpha(z_0)|\le q(z_0)\,,\qquad |\omega_B(z_0)-\omega_\beta(z_0)|\le q(z_0)\,,
\end{align}
with $q(z_0)/|\mathcal{S}_{\alpha\beta}(z_0)|=o(1)$, where $\mathcal{S}_{\alpha\beta}$ is given in~\eqref{T}. Then we have
\begin{equation}\label{omom}
|\omega_A(z_0)-\omega_\alpha(z_0)|+ |\omega_B(z_0) - \omega_\beta(z_0)| \le 2\frac{|r_1(z_0)|+|r_2(z_0)|}{|\mathcal S_{\alpha\beta}(z_0)|}\,,
\end{equation}
for $N$ sufficiently large.
\end{lem}
\begin{proof}
The proof is almost identical to the proof of Proposition~4.1 in~\cite{BES15}. The only difference is that, by Corollary~\ref{le second corollary}, $F''_{\mu_\alpha}(\omega_\beta(z))$ and $F''_{\mu_\beta}(\omega_\alpha(z))$ are $O(1)$ uniformly in $z\in\mathcal{D}$. Hence, in~(4.11) of~\cite{BES15}, we can stop the Taylor expansion in $\Omega_2(z)=\omega_B(z)-\omega_\beta(z)$ at second order and estimate the remainder by $O(|\Omega_2(z)|^2)$. This means that the factor $K/k^2$ in the subsequent formulas (4.12) and (4.13) can be replaced by a constant. Recalling that the current $\mathcal{S}_{\alpha\beta}$ plays the r\^ole of $1/S$ in~\cite{BES15}, we find that in the equation~(4.13) we are in the linear regime provided that $q(z_0)/|\mathcal{S}_{\alpha\beta}(z_0)|\ll 1$. Following the dichotomy argument of~\cite{BES15}, we prove Lemma~\ref{lemma local stability}. We omit the details.
\end{proof}
Continuing the proof of Lemma~\ref{thm.17080401}, we use a continuity argument to establish~\eqref{omom} with $q(z)\mathrel{\mathop:}= N^{-1+5\epsilon}/\sqrt{|z-E_-|}$. For $z\in\mathcal{D}$ with $\mathrm{Im}\, z=\eta_{\mathrm{M}}$, for some fixed $\eta_{\mathrm{M}}=O(1)$, the local linear stability result of Lemma~4.2 of~\cite{BES15} shows that
$ |\omega_A(z)-\omega_\alpha(z)|+ |\omega_B(z) - \omega_\beta(z)| \le 2|r_1(z)|+2|r_2(z)|\le N^{-1+2\epsilon}$, provided that $\mathrm{Im}\, \omega_A(z) - \mathrm{Im}\, z \ge c>0$ and $\mathrm{Im}\,\omega_B(z)-\mathrm{Im}\, z\ge c>0$. These bounds follow from the subordination equation and
the representation:
$$
\mathrm{Im}\, \omega_A(z) - \mathrm{Im}\, z = \mathrm{Im}\, F_{\mu_A}(\omega_B(z)) - \mathrm{Im}\, \omega_B(z) = (\mathrm{Im}\, \omega_B(z))\int_{\mathbb R } \frac{\mathrm{d}\widehat\mu_A(x)}{|x-\omega_B(z)|^2} \ge c'>0
$$
if $\mathrm{Im}\, z\ge \eta_{\mathrm{M}}$, and similarly for $\omega_B$.
Using the Lipschitz continuity of the subordination functions on $\mathcal{D}$, in particular $|\omega_A'(z)|$, $|\omega_B'(z)|\le \eta^{-2}$, and similarly for $\omega_\alpha$ and $\omega_\beta$, we can bootstrap~\eqref{le apriori closeness} and~\eqref{omom} with $q(z)= N^{-1+5\epsilon}/\sqrt{|z-E_-|}$, as then $q(z)/|\mathcal{S}_{\alpha\beta}(z)|\lesssim N^{-5\epsilon}$ (since $\mathcal{S}_{\alpha\beta}(z)\sim\sqrt{|z-E_-|}$ by~\eqref{kev2}). Thus we have
$$
|\omega_A(z)-\omega_\alpha(z)|+ |\omega_B(z) - \omega_\beta(z)| \lesssim \frac{\mathbf{d}}{|\mathcal S_{\alpha\beta}|} \le
\frac{N^{-1+\varepsilon}} {\sqrt{|z-E_-|}} \le N^{-1/2+\varepsilon}, \qquad z\in\mathcal{D}\,,
$$
since for $z\in \mathcal{D}$ we have $|z-E_-|\ge N^{-1+10\varepsilon}$, \emph{i.e., } $|\mathcal{S}_{\alpha\beta}(z)|\ge N^{-1/2+5\varepsilon}$. This proves \eqref{omegadiff}.
From this bound we can compare $\mathcal{S}_{\alpha\beta}$ and $\mathcal{S}_{AB}$, $\mathcal{T}_\alpha$ and $\mathcal{T}_A$, and $\mathcal{T}_\beta$ and $\mathcal{T}_B$, \emph{e.g., }
\begin{align*}
|\mathcal{S}_{AB}(z) - \mathcal{S}_{\alpha\beta}(z)|& \le | (F'_{\mu_A}(\omega_B(z))-1)(F'_{\mu_B}(\omega_A(z))-1) -
(F'_{\mu_A}(\omega_\beta(z))-1)(F'_{\mu_B}(\omega_\alpha(z))-1)|\\
&+ | (F'_{\mu_A}(\omega_\beta(z))-1)(F'_{\mu_B}(\omega_\alpha(z))-1) -
(F'_{\mu_\alpha}(\omega_\beta(z))-1)(F'_{\mu_\beta}(\omega_\alpha(z))-1)|\\
& \lesssim |\omega_A(z)-\omega_\alpha(z)|+ |\omega_B(z) - \omega_\beta(z)| +d \le N^{-1/2+\varepsilon}\,, \qquad z\in\mathcal{D}\,,
\end{align*}
(in the first estimate we used that the $F$'s are all regular, and in the second we used the same facts in addition to \eqref{key to everything} and
\eqref{supab}).
Since $|\mathcal{S}_{\alpha\beta}|\ge N^{-1/2+5\varepsilon}$ in this regime, we immediately get \eqref{Sdiff}.
The bounds~\eqref{Tdiff},~\eqref{mdiff1},~\eqref{mdiff2},~\eqref{Sdiff1} are proven in exactly the same way
by showing that the difference between the finite-$N$ quantity and the limiting quantity is
smaller than the size of the limiting quantity given in \eqref{T} and~\eqref{mprime}.
The proof of \eqref{imbound} requires one more argument.
Outside of the support, \eqref{omegadiff} is not optimal for the imaginary parts. Recall $r_1$ and $r_2$ from~\eqref{the r's} for $z\in{\mathbb C}^+$. Clearly
$$
|\mathrm{Im}\, r_1(z)| \le C (\mathrm{Im}\, \omega_\beta(z)) N^{-1+\varepsilon}, \qquad |\mathrm{Im}\, r_2(z)| \le C (\mathrm{Im}\, \omega_\alpha(z)) N^{-1+\varepsilon}\,,\qquad\qquad z\in\mathcal{D}\,,
$$
since
$$
\mathrm{Im}\, F_{\mu_A}(\omega_\beta(z)) = \frac{\mathrm{Im}\, m_{\mu_A}(\omega_\beta(z))}{| m_{\mu_A}(\omega_\beta(z))|^2} = \frac { \mathrm{Im}\,\omega_\beta(z)}{| m_{\mu_A}(\omega_\beta(z))|^2}
\int_{\mathbb R } \frac{\mathrm{d}\mu_A(x)}{|x-\omega_\beta(z)|^2}\,,
$$
so changing $A$ to $\alpha$ yields a factor $N^{-1+\varepsilon}$ by \eqref{levy} since $\omega_\beta(z)$ is away from the support
of~$\mu_A$. Taking imaginary parts in~\eqref{le perturbed subo} and using the representations from~\eqref{representation} gives,
\begin{align}\label{one}
\mathrm{Im}\, \omega_\beta(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_A(x)}{|x-\omega_\beta(z)|^2} - \mathrm{Im}\,\omega_\alpha(z) + \mathrm{Im}\, z &= \mathrm{Im}\, r_1(z) = O\big(
\mathrm{Im}\, \omega_\beta(z) N^{-1+\varepsilon}\big)\,,\nonumber\\
\mathrm{Im}\, \omega_\alpha(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_B(x)}{|x-\omega_\alpha(z)|^2} - \mathrm{Im}\,\omega_\beta(z) + \mathrm{Im}\, z &= \mathrm{Im}\, r_2(z)= O\big(
\mathrm{Im}\, \omega_\alpha(z) N^{-1+\varepsilon}\big)\,,
\end{align}
$z\in\mathcal{D}$, and similarly, starting from the subordination equations for $\mu_A$ and $\mu_B$, we have
\begin{align}\label{one1}
\mathrm{Im}\, \omega_B(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_A(x)}{|x-\omega_B(z)|^2} - \mathrm{Im}\,\omega_A(z) + \mathrm{Im}\, z &= 0\,,\nonumber\\
\mathrm{Im}\, \omega_A(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_B(x)}{|x-\omega_A(z)|^2} - \mathrm{Im}\,\omega_B(z) + \mathrm{Im}\, z &= 0\,.
\end{align}
In fact, we can change $\omega_\beta$ to $\omega_B$ and $\omega_\alpha$ to $\omega_A$ in \eqref{one}, to get
\begin{align}\label{one2}
\mathrm{Im}\, \omega_\beta(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_A(x)}{|x-\omega_B(z)|^2} - \mathrm{Im}\,\omega_\alpha(z) + \mathrm{Im}\, z &= O\big(
\mathrm{Im}\, \omega_\beta(z) N^{-1+\varepsilon}\big)\,,\nonumber\\
\mathrm{Im}\, \omega_\alpha(z) \int_{\mathbb R } \frac{\mathrm{d}\widehat \mu_B(x)}{|x-\omega_A(z)|^2} - \mathrm{Im}\,\omega_\beta(z) + \mathrm{Im}\, z &= O\big(
\mathrm{Im}\, \omega_\alpha(z) N^{-1+\varepsilon}\big)\,,
\end{align}
$z\in\mathcal{D}$. Subtracting \eqref{one1} from \eqref{one2} and using that, for very small
$\eta$, the determinant of the resulting linear system is very close to
$\mathcal S_{AB}(z)\sim \sqrt{|z-E_-|}$, $z\in\mathcal{D}$,
by \eqref{Sdiff}, we
obtain~\eqref{imbound}.
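For orientation, we spell out the subtracted system (a sketch). Writing $I_A(z)\mathrel{\mathop:}= \int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_A(x)}{|x-\omega_B(z)|^2}$ and $I_B(z)\mathrel{\mathop:}= \int_{\mathbb R }\frac{\mathrm{d}\widehat\mu_B(x)}{|x-\omega_A(z)|^2}$, the difference of \eqref{one2} and \eqref{one1} reads
\begin{align*}
\big(\mathrm{Im}\,\omega_\beta(z)-\mathrm{Im}\,\omega_B(z)\big) I_A(z) - \big(\mathrm{Im}\,\omega_\alpha(z)-\mathrm{Im}\,\omega_A(z)\big) &= O\big(\mathrm{Im}\,\omega_\beta(z) N^{-1+\varepsilon}\big)\,,\\
\big(\mathrm{Im}\,\omega_\alpha(z)-\mathrm{Im}\,\omega_A(z)\big) I_B(z) - \big(\mathrm{Im}\,\omega_\beta(z)-\mathrm{Im}\,\omega_B(z)\big) &= O\big(\mathrm{Im}\,\omega_\alpha(z) N^{-1+\varepsilon}\big)\,,
\end{align*}
a linear system in $\mathrm{Im}\,\omega_\beta-\mathrm{Im}\,\omega_B$ and $\mathrm{Im}\,\omega_\alpha-\mathrm{Im}\,\omega_A$ whose determinant is $I_A(z)I_B(z)-1$; as noted above, for very small $\eta$ this determinant is very close to $\mathcal{S}_{AB}(z)$.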
To prove \eqref{bottom}, let $z= x+ \mathrm{i}\eta$ with $x\le E_-- N^{-1+10\varepsilon}$. At a distance of at least $N^{-1+10\varepsilon}$ below $E_-$,~we~get
$$
\mathrm{Im}\, m_{\mu_\alpha\boxplus\mu_\beta} (z)= \mathrm{Im}\, z \int_{\mathbb R } \frac{\mathrm{d}\mu_\alpha\boxplus\mu_\beta(x)}{|x-z|^2} \le N\mathrm{Im}\, z\,.
$$
Moreover from $m_{\mu_\alpha\boxplus\mu_\beta}(z)= m_\alpha(\omega_\beta(z))$ we have $\mathrm{Im}\, m_\alpha (\omega_\beta(z)) \sim \mathrm{Im}\, \omega_\beta(z)$
since $\omega_\beta(z)$ is away from the support of $\mu_\alpha$. The same holds for $\omega_\alpha(z)$, so we get $\mathrm{Im}\, \omega_\alpha(z)+ \mathrm{Im}\, \omega_\beta(z) \le N\mathrm{Im}\, z$.
Taking $\eta\searrow 0$, we note that the right hand side of \eqref{imbound} goes to zero. Thus
we get $\mathrm{Im}\, \omega_A(x) = \mathrm{Im}\, \omega_B (x) =0$. Since $\mathrm{Im}\, m_{\mu_A\boxplus \mu_B}(z) \sim \mathrm{Im}\, \omega_A(z)$
in this regime,~$x$ cannot lie in the support of $\mu_A\boxplus \mu_B$. This proves~\eqref{bottom}.
\end{proof}
Recall that $\gamma_j$ denotes the $j$-th $N$-quantile of $\mu_\alpha\boxplus\mu_\beta$
from \eqref{quantile} and similarly let $\gamma^*_j$ denote the $j$-th $N$-quantile of
$\mu_A\boxplus\mu_B$, \emph{i.e., } these are the smallest numbers $\gamma_j$ and $\gamma_j^*$ such that
$$
\mu_\alpha\boxplus\mu_\beta\big( (-\infty, \gamma_j]\big)
= \mu_A\boxplus\mu_B\big( (-\infty, \gamma_j^*]\big) = \frac{j}{N}.
$$
\begin{lem}[Rigidity] Suppose that Assumptions~\ref{a.regularity of the measures} and \ref{a. levy distance} hold.
Then we have the rigidity bound
\begin{equation}\label{rigi2}
|\gamma_j -\gamma_j^*|\le j^{-1/3} N^{-\frac{2}{3}+\varepsilon}, \qquad j\in\llbracket 1, cN\rrbracket\,,
\end{equation}
for $N$ sufficiently large and for some sufficiently small constant $c>0$.
Under the additional Assumption~\ref{a. rigidity entire spectrum} we have the rigidity estimate for all quantiles, \emph{i.e., }
\begin{equation}\label{rigi}
|\gamma_j -\gamma_j^*|\le \min\{ j^{-1/3}, (N+1-j)^{-1/3}\} N^{-\frac{2}{3}+\varepsilon}, \qquad j\in\llbracket 1,N\rrbracket\,.
\end{equation}
\end{lem}
\begin{proof}
The proofs of these rigidity results are fairly straightforward given the information collected so far,
using standard arguments to translate the closeness of the Stieltjes transforms of
two measures into the closeness of their quantiles. We just outline the argument.
Recall the domain $\mathcal{E}_{\kappa_0}$ from \eqref{def:eps0}.
First, we establish that there are at most $N^\varepsilon$ quantiles $\gamma_j$, and at most $N^\varepsilon$
quantiles $\gamma_j^*$, in an $N^{-2/3+\varepsilon}$ vicinity of $E_-=\inf\,\mathrm{supp}\,\mu_\alpha\boxplus\mu_\beta$.
This fact is immediate for the $\gamma_j$ quantiles since their distribution is given by the regular square root law, see
\eqref{17080390}. For the $\gamma_j^*$-quantiles, we know
from \eqref{bottom} that
$\gamma_1^* \ge E_- - N^{-1+10\varepsilon}$.
We compute from \eqref{mdiff1}
\begin{align*}
\frac{j}{N} &= \int_{-\infty}^{\gamma_j^*} {\rm d} \mu_{A}\boxplus \mu_B = \int_{E_--N^{-1+10\varepsilon}}^{\gamma_j^*} \mu_{A}\boxplus \mu_B(x) {\rm d}x
\le C\int_{E_--N^{-1+10\varepsilon}}^{\gamma_j^*} \mathrm{Im}\, m_{A\boxplus B}(x+\mathrm{i} N^{-1+10\varepsilon}) {\rm d}x\\
&\le C\int_{E_--N^{-1+10\varepsilon}}^{\gamma_j^*} \big[ |x-E_-| + N^{-1+10\varepsilon}\big]^{1/2} {\rm d}x \le C |\gamma_j^*-E_-|^{3/2} + CN^{-1+10\varepsilon}|\gamma_j^*-E_-|\,,
\end{align*}
which means that
$$
|\gamma_j^*-E_-|\ge c\Big(\frac{j}{N}\Big)^{2/3}\,,
$$
with some positive constant $c>0$. So we have
\begin{equation}\label{first}
\gamma_j^* \ge E_- + c N^{-2/3+\varepsilon}, \qquad \mbox{if} \quad j\ge c N^{3\varepsilon/2},
\end{equation}
and note that the condition
$ j\ge cN^{3\varepsilon/2} $ is equivalent to $\gamma_j \ge E_-+ cN^{-2/3+\varepsilon}$. In the other direction~we~use
$$
\int_{E_--N^{-1+10\varepsilon}}^{\gamma_j^*} \mu_{A}\boxplus \mu_B(x)\, {\rm d}x
\ge c\int_{E_--N^{-1+10\varepsilon}}^{\gamma_j^*} \mathrm{Im}\, m_{A\boxplus B}(x+\mathrm{i} N^{-1+10\varepsilon})\, {\rm d}x
$$
if $|\gamma_j^* - E_-|\gg N^{-1+10\varepsilon}$. Using again \eqref{mdiff1} we get
$$
\frac{j}{N} \ge c |\gamma_j^*-E_-|^{3/2}, \qquad \mbox{\emph{i.e., }} \qquad \gamma_j^* \le E_-+ C \Big(\frac{j}{N}\Big)^{2/3} \qquad \forall j,
$$
since this latter bound also holds in the case when $|\gamma_j^* - E_-|\gg N^{-1+10\varepsilon}$ is not satisfied.
Thus we have established
\begin{equation}\label{ed}
|\gamma_j - \gamma_j^*| \le |\gamma_j -E_-| + |\gamma_j^*-E_-| \le C N^{-2/3+\varepsilon}, \qquad \mbox{whenever}\quad
\gamma_j \le E_- + N^{-2/3+\varepsilon}.
\end{equation}
From the continuity of the free convolution (Proposition 4.13 of \cite{BeV93}) and
the condition \eqref{levy} we get
$$
{\rm d}_L (\mu_{A}\boxplus \mu_B, \mu_\alpha \boxplus \mu_\beta)\le
{\rm d}_L(\mu_A, \mu_\alpha) + {\rm d}_L(\mu_B, \mu_\beta)\le N^{-1+\epsilon}\,.
$$
On the other hand, the definition of the L\'evy distance and the boundedness of the density of
$\mu_{\alpha} \boxplus \mu_\beta$ below $E_-+\kappa_0$ (see \eqref{17080390})
directly imply that
\begin{align}\label{cumulative}
\big| \mu_A \boxplus \mu_B \big( (-\infty, x) \big) - \mu_\alpha \boxplus \mu_\beta \big( (-\infty, x) \big) \big|\le CN^{-1+\varepsilon}
\end{align}
holds for any $x\le E_-+\kappa_0$. Together with \eqref{ed}, this estimate immediately implies
the bound \eqref{rigi2}.
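In more detail (a sketch of the standard argument): for $j$ with $\gamma_j \ge E_-+N^{-2/3+\varepsilon}$, the square root behavior of the density of $\mu_\alpha\boxplus\mu_\beta$ near $E_-$ (see \eqref{17080390}) gives a density of order $(j/N)^{1/3}$ around $\gamma_j$, and by \eqref{first} the same holds around $\gamma_j^*$. Since $\mu_\alpha\boxplus\mu_\beta\big((-\infty,\gamma_j]\big)=j/N$ while, by \eqref{cumulative}, $\mu_\alpha\boxplus\mu_\beta\big((-\infty,\gamma_j^*]\big)=j/N+O(N^{-1+\varepsilon})$, we get
$$
\mu_\alpha\boxplus\mu_\beta\big( (\gamma_j\wedge\gamma_j^*, \gamma_j\vee\gamma_j^*]\big)\le CN^{-1+\varepsilon}\,,
$$
and hence $|\gamma_j-\gamma_j^*|\le C N^{-1+\varepsilon} (N/j)^{1/3}= Cj^{-1/3}N^{-2/3+\varepsilon}$ in this regime, up to adjusting $\varepsilon$.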
For the proof of \eqref{rigi}, we note that $(ii')$ and $(v')$ of Assumption~\ref{a. rigidity entire spectrum}
guarantee that near the upper edge of the support of $\mu_\alpha\boxplus\mu_\beta$
a similar rigidity statement holds as \eqref{rigi2}. Finally, $(ii')$ of Assumption~\ref{a. rigidity entire spectrum}
together with the continuity and boundedness of the density of $\mu_\alpha\boxplus\mu_\beta$
(see \eqref{17080326}) imply that the density has positive lower and upper bounds away from the two extreme edges
of its support. This information, together with \eqref{levy},
is sufficient to conclude that \eqref{cumulative} holds uniformly for all $x\in{\mathbb R }$. The corresponding
result \eqref{rigi} for the quantiles follows immediately.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{le proposition 3.1}] First, on the domain $\mathcal{D}$, $(i)$ of Proposition~\ref{le proposition 3.1} follows from (\ref{omegadiff}), (\ref{key to everything}), the assumption (\ref{supab}) and the continuity of $\omega_\alpha$ and $\omega_\beta$. In the complementary domain $\mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})\setminus \mathcal{D}$, we first prove (\ref{17020503}). Using the equations $m_{\mu_A\boxplus\mu_B}=m_{\mu_A}(\omega_B)= m_{\mu_B}(\omega_A)$, we see that the upper bounds on $\omega_A$ and $\omega_B$ follow from the fact that $|m_{\mu_A\boxplus\mu_B}(z)|\geq c$, which can easily be derived from the rigidity (\ref{rigi2}). For (\ref{17020502}), we further split into two regimes. In the regime $\eta\geq \eta_0$, for some small $\eta_0>0$, we use the fact that $\Im \omega_A(z), \Im \omega_B(z)\geq \eta$ directly. In the regime $\eta\leq \eta_0$, we use the continuity of $\omega_A$ and $\omega_B$, and also the monotonicity of $\omega_A (u)$ and $\omega_B (u)$ for $u\in (-\infty, E_--\delta]$, which can be proved similarly to the monotonicity of $\omega_\alpha (u)$ and $\omega_\beta (u)$ (\emph{c.f., } (\ref{le C15})).
Similarly, on the domain $\mathcal{D}$, Proposition~\ref{le proposition 3.1} $(ii)$ follows from (\ref{mdiff1}) and (\ref{mdiff2}) directly. In the complementary domain $\mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})\setminus \mathcal{D}$, we apply again the rigidity result (\ref{rigi2}) to conclude the proof.
Statement $(iii)$ follows from~(\ref{Sdiff}), (\ref{Tdiff}), (\ref{Sdiff1}) and~(\ref{Tdiff1}).
Finally, to prove item $(iv)$, we differentiate the subordination equations~\eqref{le definiting equations} with respect to $z$~to~get
\begin{align*}
\left(
\begin{array}{ccc}
1 & 1-F_A'(\omega_B(z))\\
1-F_B'(\omega_A(z)) & 1
\end{array}
\right) \left(\begin{array}{cc}
\omega_A'(z)\\
\omega_B'(z)
\end{array}
\right)
=\left(\begin{array}{cc}
1\\
1
\end{array}
\right)\,,
\end{align*}
with the shorthand $F_A\equiv F_{\mu_A}$, $F_B\equiv F_{\mu_B}$.
Hence,
\begin{align*}
\left(\begin{array}{cc}
\omega_A'(z)\\
\omega_B'(z)
\end{array}
\right)=\mathcal{S}^{-1} \left(\begin{array}{cc}
F'_A(\omega_B(z))-1\\
F'_B(\omega_A(z))-1
\end{array}
\right),
\end{align*}
where $\mathcal{S}\equiv \mathcal {S}_{AB}$.
Using (\ref{17080110}), (\ref{17020502}) and~\eqref{17080121}, we directly get the first two estimates in~\eqref{le lipschitz stuff}.
Next, from the definition of $\mathcal{S}(z)$ in (\ref{17080110}), we observe that
\begin{align}
|\mathcal{S}'(z)|=\Big|F''_B(\omega_A)(F'_A(\omega_B)-1)\omega_A'(z)+F''_A(\omega_B)(F'_B(\omega_A)-1) \omega_B'(z)\Big|\leq C |\mathcal{S}(z)|^{-1}, \label{17072610}
\end{align}
where in the inequality we used (\ref{17020502}) and the first two estimates in (\ref{le lipschitz stuff}). Hence, by~\eqref{17080121} we get the third estimate in~\eqref{le lipschitz stuff} and statement $(iv)$ is proved. This finishes the proof of Proposition~\ref{le proposition 3.1}.
\end{proof}
\section{General structure of the proof} \label{sec:general}
\subsection{Partial randomness decomposition} In this subsection, we recall the partial randomness decomposition of the Haar unitary matrix used in~\cite{BES15b}, which will be used frequently below.
Let $\mathbf{u}_i=(u_{i1}, \ldots, u_{iN})$ be the $i$-th column of $U$. Let $\theta_i$ be the argument of $u_{ii}$. The following partial randomness decomposition of $U$ is taken from~\cite{DS87} (see also \cite{Mezzadri}): For any $i\in \llbracket 1, N\rrbracket$, we can write
\begin{align}
U=-\e{\mathrm{i}\theta_i}R_i U^{\langle i\rangle}\,, \label{17072510}
\end{align}
where $U^{\langle i\rangle}$ is a unitary block-diagonal matrix whose $(i,i)$-th entry equals $1$ and whose $(i,i)$-minor is Haar distributed on $\mathcal{U}(N-1)$. Hence, $U^{\langle i\rangle}\mathbf{e}_i=\mathbf{e}_i$ and $\mathbf{e}_i^* U^{\langle i\rangle}=\mathbf{e}_i^*$, where $\mathbf{e}_i$ is the $i$-th coordinate vector. Here $R_i$ is a reflection matrix, defined as
\begin{align}
R_i\mathrel{\mathop:}= I-\mathbf{r}_i\mathbf{r}_i^*\,, \label{17072593}
\end{align}
where
\begin{align}
\mathbf{r}_i\mathrel{\mathop:}=\sqrt{2} \frac{\mathbf{e}_i+\e{-\mathrm{i}\theta_i}\mathbf{u}_i}{\|\mathbf{e}_i+\e{-\mathrm{i}\theta_i} \mathbf{u}_i\|}\,. \label{17072517}
\end{align}
Using $U^{\langle i\rangle}\mathbf{e}_i=\mathbf{e}_i$ and (\ref{17072510}), we see that
\begin{align}
\mathbf{u}_i=U\mathbf{e}_i= -\e{\mathrm{i}\theta_i}R_i\mathbf{e}_i\,. \label{17072530}
\end{align}
Hence, $R_i=R_i^*$ is actually the Householder reflection (up to a sign) sending $\mathbf{e}_i$ to $-\e{-\mathrm{i}\theta_i}\mathbf{u}_i$. With the decomposition in (\ref{17072510}), we can write
\begin{align*}
H= A+\widetilde{B}=A+R_i \widetilde{B}^{\langle i\rangle} R_i\,,
\end{align*}
where we introduced the notations
\begin{align*}
\widetilde{B}\mathrel{\mathop:}= UBU^*, \qquad \widetilde{B}^{\langle i\rangle}\mathrel{\mathop:}= U^{\langle i\rangle} B (U^{\langle i\rangle})^*\,.
\end{align*}
Observe that $\widetilde{B}^{\langle i\rangle}\mathbf{e}_i=b_i\mathbf{e}_i$ and $\mathbf{e}_i^* \widetilde{B}^{\langle i\rangle}=b_i\mathbf{e}_i^*$. Clearly, $\widetilde{B}^{\langle i\rangle}$ is independent of $\mathbf{u}_i$.
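Indeed, using $(U^{\langle i\rangle})^*\mathbf{e}_i=\mathbf{e}_i$ and $B\mathbf{e}_i=b_i\mathbf{e}_i$, we have
$$
\widetilde{B}^{\langle i\rangle}\mathbf{e}_i=U^{\langle i\rangle} B (U^{\langle i\rangle})^*\mathbf{e}_i=U^{\langle i\rangle} B\mathbf{e}_i=b_i U^{\langle i\rangle}\mathbf{e}_i=b_i\mathbf{e}_i\,,
$$
and similarly for $\mathbf{e}_i^* \widetilde{B}^{\langle i\rangle}$.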
It is known that $\mathbf{u}_i\in S_{\mathbb{C}}^{N-1}\mathrel{\mathop:}=\{\mathbf{x}\in \mathbb{C}^N\,:\, \mathbf{x}^*\mathbf{x}=1\}$ is a uniformly distributed complex vector, and there exists a Gaussian vector $\widetilde{\mathbf{g}}_i\sim \mathcal{N}_{\mathbb{C}}(0, N^{-1}I_N)$ such that
\begin{align*}
\mathbf{u}_i=\frac{\widetilde{\mathbf{g}}_i}{\|\widetilde{\mathbf{g}}_i\|}\,.
\end{align*}
We then further introduce the notations
\begin{align}
\mathbf{g}_i\mathrel{\mathop:}=\e{-\mathrm{i}\theta_i}\widetilde{\mathbf{g}}_i\,, \qquad \quad\mathbf{h}_i\mathrel{\mathop:}=\frac{\mathbf{g}_i}{\|\mathbf{g}_i\| }=\e{-\mathrm{i}\theta_i} \mathbf{u}_i\,, \qquad\quad\ell_i\mathrel{\mathop:}=\frac{\sqrt{2}}{\| \mathbf{e}_i+\mathbf{h}_i\|}\,. \label{17072590}
\end{align}
Observe that the components $g_{ik}$ of $\mathbf{g}_i$ are independent. Moreover, for $k\neq i$, $g_{ik}\sim \mathcal{N}_{\mathbb{C}}(0, \frac{1}{N})$, while $g_{ii}$ is a $\chi$-distributed random variable with $\mathbb{E}g_{ii}^2=\frac{1}{N}$. With the above notations, we can write $\mathbf{r}_i$ in (\ref{17072517})~as
\begin{align}
\mathbf{r}_i=\ell_i (\mathbf{e}_i+\mathbf{h}_i)\,. \label{170725100}
\end{align}
In addition, using (\ref{17072530}) and the fact $R_i^2=I$, we have
\begin{align}
R_i\mathbf{e}_i=-\mathbf{h}_i\,,\qquad \qquad R_i\mathbf{h}_i=-\mathbf{e}_i\,, \label{17072573}
\end{align}
which also imply
\begin{align}
\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}R_i= -\mathbf{e}_i^* \widetilde{B}\,,\qquad \qquad \mathbf{e}_i^* \widetilde{B}
^{\langle i\rangle}R_i= -b_i\mathbf{h}_i^*=-\mathbf{h}_i^*\widetilde{B}\,. \label{170726105}
\end{align}
Here, in the first equality of the second equation we used that $\mathbf{e}_i^* \widetilde{B}^{\langle i\rangle}=b_i\mathbf{e}_i^*$. We introduce the vectors
\begin{align*}
\mathring{\mathbf{g}}_i\mathrel{\mathop:}= \mathbf{g}_i-g_{ii}\mathbf{e}_i\,,\qquad\qquad \mathring{\mathbf{h}}_i\mathrel{\mathop:}= \frac{\mathring{\mathbf{g}}_i}{\|\mathbf{g}_i\|}\,,
\end{align*}
where the $\chi$-distributed variable $g_{ii}$ has been removed.
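As an aside, the identities \eqref{17072510}, \eqref{17072530} and \eqref{17072573} can be checked numerically. The following sketch (with arbitrary illustrative choices of the dimension $N$ and index $i$) samples a Haar unitary via the QR factorization recipe of \cite{Mezzadri} and verifies them:

```python
import numpy as np

rng = np.random.default_rng(0)
N, i = 6, 2  # small dimension and a column index, chosen arbitrarily

# Sample a Haar unitary: QR of a complex Ginibre matrix, with phases fixed.
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

e_i = np.zeros(N); e_i[i] = 1.0
u_i = U[:, i]
theta_i = np.angle(U[i, i])
h_i = np.exp(-1j * theta_i) * u_i               # h_i = e^{-i theta_i} u_i
r_i = np.sqrt(2) * (e_i + h_i) / np.linalg.norm(e_i + h_i)
R_i = np.eye(N) - np.outer(r_i, r_i.conj())     # reflection R_i = I - r_i r_i^*

# Reflection identities: R_i e_i = -h_i and R_i h_i = -e_i.
assert np.allclose(R_i @ e_i, -h_i)
assert np.allclose(R_i @ h_i, -e_i)

# U^{<i>} := -e^{-i theta_i} R_i U fixes e_i from both sides, and the
# decomposition U = -e^{i theta_i} R_i U^{<i>} holds.
U_i = -np.exp(-1j * theta_i) * R_i @ U
assert np.allclose(U_i @ e_i, e_i) and np.allclose(e_i @ U_i, e_i)
assert np.allclose(U, -np.exp(1j * theta_i) * R_i @ U_i)
print("decomposition identities verified")
```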
\subsection{Summary of the proof route}
In this subsection, we summarize the main route of the proof.
While the final goal of the local law is to understand $G_{ii}$, $i\in\llbracket1,N\rrbracket$, and its averaged version, we work with several auxiliary quantities first. To understand their origin, it is useful to review the structure of our previous proofs
of the local laws in the bulk \cite{BES15b,BES16}. We first introduce the following control parameters
\begin{align}
\Psi\equiv \Psi(z)\mathrel{\mathop:}= \sqrt{\frac{1}{N\eta}}\,,\qquad\qquad\Pi\equiv \Pi(z)\mathrel{\mathop:}= \sqrt{\frac{\Im m_H}{N\eta}}\,. \label{17012101}
\end{align}
In~\cite{BES15b}, we investigated two main quantities:
\begin{align}
S_i\equiv S_i(z)\mathrel{\mathop:}= \mathbf{h}_i^* \widetilde{B}^{\langle i\rangle} G\mathbf{e}_i\,,\qquad \qquad T_i\equiv T_i(z)\mathrel{\mathop:}= \mathbf{h}_i^* G\mathbf{e}_i\,. \label{17072580}
\end{align}
In particular we showed that
$$
S_i = \frac{z-\omega_B(z)}{a_i-\omega_B(z)} + O_\prec (\Psi)\,, \qquad\qquad T_i = O_\prec (\Psi)\,,
$$
by performing integration by parts in the $\mathbf{h}_i^*$ variable.
Using the identity
\begin{align*}
G_{ii} = \frac{1- (\widetilde{B} G)_{ii}}{a_i-z}
\end{align*}
and that
\begin{align*}
(\widetilde{B} G)_{ii} &= \mathbf{e}_i^* R_i \widetilde{B}^{\langle i\rangle} R_i G\mathbf{e}_i =
- \mathbf{h}_i^*\widetilde{B}^{\langle i\rangle} R_i G\mathbf{e}_i = - S_i + \mathbf{h}_i^*\widetilde{B}^{\langle i\rangle} \mathbf{r}_i \mathbf{r}_i^*G\mathbf{e}_i\\
&= - S_i + \ell_i^2 ( \mathbf{h}_i^*\widetilde{B}^{\langle i\rangle} \mathbf{h}_i + b_ih_{ii}) (G_{ii}+T_i)\,,
\end{align*}
we obtained the entry-wise local law for $G_{ii}$ from a precise control on $S_i$ and $T_i$.
Technically, $S_i$ is a better quantity to handle than $G_{ii}$, since integration by parts can be applied to it directly.
However, in the course of the calculation the quantity $T_i$ appeared, and a second integration by parts was needed to control it.
We obtained a closed system of equations on the expectations of $S_i$ and $T_i$ (see (6.23)--(6.24) of \cite{BES15b})
from which the entry-wise local law in the bulk followed.
To obtain the law for the normalized trace of $G$ in \cite{BES16}, we performed fluctuation averaging, but again not for $G_{ii}$ directly. We considered
averages (with arbitrary weights $d_i$) of the quantity
\begin{align*}
Z_i \mathrel{\mathop:}= Q_i + G_{ii} \Upsilon\,,
\end{align*}
where we defined
\begin{align}
Q_i&\equiv Q_i(z)\mathrel{\mathop:}= (\widetilde{B}G)_{ii}\mathrm{tr}\, G-G_{ii} \mathrm{tr}\, \widetilde{B}G\,, \label{17021701}\\
\Upsilon&\equiv \Upsilon(z)\mathrel{\mathop:}= \mathrm{tr}\, \widetilde{B}G-(\mathrm{tr}\, \widetilde{B}G)^2+\mathrm{tr}\, G\mathrm{tr}\, \widetilde{B} G\widetilde{B}\,. \label{17020511}
\end{align}
From the entry-wise laws it is clear that $|Q_i|, |\Upsilon| \prec \Psi$, and now we improve these bounds, at least
in an averaged sense in the case of $Q_i$.
Notice that $Q_i$ is the most ``symmetric" quantity, in particular $\sum_i Q_i =0$, but
technically it is not the most convenient object to start a high moment estimate for $\frac{1}{N}\sum_i d_i Q_{i}$.
The reason is that one step of integration by parts generates an additional term, $G_{ii} \Upsilon$, which is hard to control
directly. So instead of averaging $Q_i$, in \cite{BES16} we included a counter term, \emph{i.e., } we averaged $Z_i$ instead.
We first proved that that average is one order better, \emph{i.e., }
\begin{align}\label{Qav}
\Big| \frac{1}{N}\sum_{i=1}^N d_i Z_i\Big| \prec \Psi^2.
\end{align}
Then, using~\eqref{Qav} with $d_i \equiv 1$, we obtained $|\Upsilon| \prec \Psi^2$. Thus {\it a posteriori} we showed that the
counter term $G_{ii} \Upsilon$ is irrelevant for estimates of order $\Psi^2$ and we obtained the same bound \eqref{Qav}
for $Q_i$ as well. Finally, the bounds on the average of $Q_i$ with careful choices of the weights $d_i$
and using the algebraic identities between $G$ and $\widetilde{B} G$ yielded the averaged law for $G_{ii}$
with the optimal $O_\prec(\Psi^2)$ error.
All results in \cite{BES15b,BES16} concerned the bulk. It is well known from the analogous
results for Wigner matrices that the edge analysis is more difficult. The main reason is that the
corresponding Dyson equation, the subordination equation in the current model, is unstable at the spectral edge, hence more precise
estimates are necessary for the error terms. Theoretically all error terms involving $\Psi = \frac{1}{\sqrt{N\eta}}$ should be
improved by a factor of $\sqrt{\mathrm{Im}\, m}$, where we set $m=m_{\mu_A\boxplus\mu_B}$. This factor reflects that the density of states is small at the edge (at a square root edge
we have $\mathrm{Im}\, m(z) \sim \sqrt{\kappa+ \eta}$, where $\eta =\mathrm{Im}\, z$ and $\kappa$ is the distance of $\mathrm{Re}\, z$ to the edge).
This improvement exactly compensates for the bound of order $(\kappa+\eta)^{-1/2}$ on the inverse of the
linearization of the subordination equation near the edge.
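Schematically, for the averaged law: an error of size $\Pi^2$ fed through the unstable linearization, whose inverse is of size $|\mathcal{S}_{AB}|^{-1}\sim(\kappa+\eta)^{-1/2}$, still yields
$$
\Pi^2\cdot \frac{1}{\sqrt{\kappa+\eta}} \sim \frac{\sqrt{\kappa+\eta}}{N\eta}\cdot\frac{1}{\sqrt{\kappa+\eta}} = \frac{1}{N\eta} = \Psi^2\,,
$$
so the final error matches the bulk rate $(N\eta)^{-1}$.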
However, this improvement is quite complicated to obtain and the method in \cite{BES16} is not sufficient.
In this paper we present a new strategy to obtain the stronger bound. To prepare for the higher accuracy, already in
the entry-wise law we work with two new quantities $P_i$ and $K_i$ instead of $S_i$ and $T_i$. They are defined as
\begin{align}
P_{i}&\equiv P_i(z)\mathrel{\mathop:}= (\widetilde{B}G)_{ii} \mathrm{tr}\, G-G_{ii} \mathrm{tr}\, (\widetilde{B} G)+(G_{ii}+T_{i})\Upsilon\,, \label{17011301}\\
K_i&\equiv K_i(z)\mathrel{\mathop:}= T_{i}+ (b_i T_{i}+(\widetilde{B}G)_{ii}) \mathrm{tr}\, G-(G_{ii}+T_i) \mathrm{tr}\, (\widetilde{B}G)\,. \label{17072420}
\end{align}
We recognize that $P_i = Q_i + (G_{ii}+T_i) \Upsilon = Z_i + T_i\Upsilon$, \emph{i.e., } we included an additional counter term $T_i\Upsilon$
to the previous $Z_i$. While {\it a posteriori} this counter term turns out to be irrelevant, it is necessary in order to perform the
integration by parts more precisely.
Similarly,
\begin{align}\label{ki}
K_i= \big(1 + b_i \mathrm{tr}\, G- \mathrm{tr}\, (\widetilde{B}G)\big) T_i + Q_i\,,
\end{align}
\emph{i.e., } $K_i$ is a linear combination of $T_i$ and $Q_i$; it is nevertheless easier to work with $K_i$.
The proof is divided into three parts.
In the first part (Section \ref{s. Entrywise estimate}) we obtain entry-wise bounds of the form
\begin{align}\label{entry}
|K_i|, |Q_i|, |T_i|, |P_i| \prec \Psi, \qquad \mbox{as well as}\qquad |\Upsilon| \prec \Psi\,;
\end{align}
see Proposition \ref{pro.17020310}. Notice that the estimates are still in terms of $\Psi =\frac{1}{\sqrt{N\eta}}$ without the improving factor
$\sqrt{\mathrm{Im}\, m}$. These results could be derived directly from the estimates in \cite{BES15b}
by operating with $S_i$ and $T_i$; we nevertheless use the new quantities, since the formulas derived
along the way to the entry-wise bounds will be reused for the improved bounds later.
There is yet another reason for introducing the new quantities $P_i$ and $K_i$, namely that in the current paper
we have also changed the strategy concerning the entry-wise laws. In \cite{BES15b}, a precursor to \cite{BES16},
we first proved entry-wise laws by deriving a system of equations
for the expectation values (of $S_i$ and $T_i$), complemented with concentration inequalities
to enhance them to high probability bounds. For the improved bound on averaged quantities
high moment estimates were performed only in \cite{BES16}, using the entry-wise law as an input.
In the current paper we organize the proof in a more straightforward way, similarly to
\cite{BES16b}.
We bypass the fairly complicated
argument leading to the entry-wise law in \cite{BES15b} and we rely on high moment estimates directly
even for the entry-wise law. This strategy is not only conceptually cleaner but also allows us to use essentially
the same calculations for the entry-wise and the averaged law. The estimates of many error terms are
shared in the two parts of the proofs; in case of some other estimates it will be sufficient to point out the
necessary improvements.
However, high moment estimates require more carefully chosen quantities. For example, no direct high moment
estimate is possible for $S_i$, since it is not even a small quantity. But high moment estimates
even for $T_i$ and $Q_i$ produce additional terms that are difficult to handle. It turns out that
the carefully chosen counter terms in $P_i$ and $K_i$ make them suitable for performing high moment bounds.
More precisely, in the first step we compute the high moments of $K_i$ and conclude that $|K_i|\prec \Psi$.
In the second step we prove a high moment bound for $P_i = Q_i+(G_{ii} +T_i)\Upsilon$, \emph{i.e., } prove $|P_i|\prec \Psi$.
In the third step we average this bound and conclude $|\Upsilon| \prec \Psi$, which in turn yields that $|Q_i|\prec \Psi$.
Finally, from \eqref{ki} we conclude that $|T_i|\prec \Psi$. This proves \eqref{entry} and completes the entry-wise bounds.
In the second part of the proof (Section \ref{s. rough bound}) we derive a rough bound on the averaged quantities.
We will focus on
$\frac{1}{N}\sum_i d_i Q_i$ since $Q_i$ is the most fundamental quantity.
Averaged quantities are typically one order better than the trivial entry-wise
bounds indicate, \emph{i.e., } we expect $|\frac{1}{N}\sum_i d_i Q_i| \prec \Psi^2= (N\eta)^{-1}$, and indeed this was proven in \cite{BES16}
in the bulk and could be extended to the edge.
Due to the improvement at the edge, now we expect a bound of order $\Pi^2\approx\mathrm{Im}\, m/N\eta$, but we cannot obtain this in general.
In this second part of the proof,
we prove a bound of the form $\Pi\Psi\approx\sqrt{\mathrm{Im}\, m}/N\eta$, which is ``half-way" between the standard fluctuation averaging
bound and the optimal bound.
We compute the high moments of $\frac{1}{N}\sum_i d_i Q_i$ to achieve this bound. Interestingly,
the apparently leading term in the high moment calculation already gives the optimal bound $\Pi^2$
(first term on the right of \eqref{17071833}),
but a ``cross-term" (when the derivative hits another factor of $\frac{1}{N}\sum_i d_i Q_i$)
is responsible for the weaker $\Pi\Psi$ bound.
Another point to make is that it is not necessary to compute the high moments of another quantity
for the rough averaged bound,
unlike in \cite{BES15b,BES16} and in the first part of the current proof, where we always operated with two
different quantities in parallel. Various error terms
along the calculation of $\frac{1}{N}\sum_i d_i Q_i$ do contain $T_i$, but
these terms can all be estimated using
the entry-wise bound $T_i\prec \Psi$ only. Choosing a special weight sequence $d_i$ we also improve the bound on $\Upsilon$ to
$\Upsilon \prec \Pi \Psi$. In particular we could obtain an improved averaged bound on $P_i = Q_i + (G_{ii}+T_i)\Upsilon$ immediately,
and with a little effort on~$K_i$ and~$T_i$ as well, but we do not need them.
Finally, in the third part of the proof (Section \ref{s.optimal FL}) we obtain the optimal $\Pi^2$ bound for the average of $Q_i$, but
only for two very specially chosen weights, see \eqref{17021710}--\eqref{17022001}.
In fact, only the estimates on the ``cross-term" need to be
improved and the weights are chosen to achieve an additional cancellation.
Nevertheless, linear combinations of $Q_i$'s with these two special sequences of weights are sufficient to invert the subordination equations and
conclude that $\Lambda_\iota\mathrel{\mathop:}= \omega_\iota^c -\omega_\iota \prec \Psi^2$, $\iota=A,B$. We finally notice~that
$$
\frac{1}{N}\sum_{i=1}^N d_i \Big( G_{ii} - \frac{1}{a_i - \omega_B^c}\Big)
$$
may be expressed as a linear combination of the $Q_i$, see \eqref{17072501}; hence this quantity is already stochastically bounded
by $ \Pi\Psi \le \Psi^2$ from the second part of the proof. Since replacing $\omega_B^c$ with $\omega_B$ yields an error
of at most $\Psi^2$, we obtain \eqref{17072330}, the optimal average law for $G_{ii}$.
The actual proofs are considerably more complicated than this informal summary. On one hand, many error terms need
to be estimated that have not been mentioned here, in particular we need fluctuation averaging with random
weights, a novel complication that has not been considered before.
On the other hand, in this summary we used the deterministic $\Psi=(N\eta)^{-1/2}$ and $\Pi \approx (\mathrm{Im}\, m/N\eta)^{1/2}$
as control parameters. In fact, $\Pi$ is random, see \eqref{17012101}, containing $\mathrm{Im}\, m_H$
which is $\mathrm{Im}\, m_{A\boxplus B}$ up to a random error that itself depends on $\Lambda\mathrel{\mathop:}= |\Lambda_A|+|\Lambda_B|$.
In the third part of the proof (Section \ref{s.optimal FL}) we obtain a self-consistent inequality for this random quantity $\Lambda$ (see \eqref{17030301}).
Therefore an additional continuity argument in $\eta$ is necessary to conclude a deterministic bound on $\Lambda$.
\section{Entry-wise Green function subordination} \label{s. Entrywise estimate}
In this section, we prove a subordination property for the Green function entries. From this section to Appendix \ref{appendix B}, without loss of generality, we assume that
\begin{align}
\mathrm{tr}\, A=\mathrm{tr}\, B=0\,. \label{17072620}
\end{align}
We define the {\it approximate subordination functions} as
\begin{align}
\omega_A^c(z)\mathrel{\mathop:}= z-\frac{\mathrm{tr}\, AG(z)}{m_H(z)}\,,\quad \qquad \omega_B^c(z)\mathrel{\mathop:}= z-\frac{ \mathrm{tr}\, \widetilde{B}G(z)}{ m_{H}(z)}\,, \qquad\qquad z\in \mathbb{C}^+\,. \label{17072550}
\end{align}
It will be seen that the functions $\omega_A^c$ and $\omega_B^c$ are good approximations of $\omega_A$ and $\omega_B$ defined in (\ref{le prop 1}) with $(\mu_1,\mu_2)=(\mu_A, \mu_B)$. Switching the r\^oles of $A$ and $B$, and also the r\^oles of $U$ and $U^*$, we introduce the following analogues of $\widetilde{B}$, $H$, and $G(z)$, respectively,
\begin{align}\label{the tilda guys}
\widetilde{A}\mathrel{\mathop:}= U^*AU\,,\qquad \qquad \mathcal{H}\mathrel{\mathop:}= B+\widetilde{A}\,,\qquad \qquad \mathcal{G}\equiv \mathcal{G}(z)\mathrel{\mathop:}=(\mathcal{H}-z)^{-1}\,.
\end{align}
Observe that, by the cyclicity of the trace,
\begin{align*}
\omega_A^c(z)=z-\frac{\mathrm{tr}\, \widetilde{A}\mathcal{G}(z)}{m_H(z)}\,.
\end{align*}
From (\ref{17072550}) and the identity $(A+\widetilde{B}-z)G=I$, it is easy to check that
\begin{align}
\omega_A^c(z)+\omega_B^c(z)-z=-\frac{1}{m_H(z)}\,, \qquad\qquad z\in \mathbb{C}^+\,. \label{170725130}
\end{align}
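Indeed, the check is a one-line computation: since $\mathrm{tr}\,(A+\widetilde{B})G=\mathrm{tr}\,(H-z)G+z\,\mathrm{tr}\, G=1+z\, m_H(z)$, the definitions in (\ref{17072550}) give
\begin{align*}
\omega_A^c(z)+\omega_B^c(z)-z= z-\frac{\mathrm{tr}\,(A+\widetilde{B})G(z)}{m_H(z)}= z-\frac{1+z\, m_H(z)}{m_H(z)}=-\frac{1}{m_H(z)}\,.
\end{align*}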
Recall the quantities $S_i$ and $T_i$ defined in (\ref{17072580}).
We will also need their variants
\begin{align}
\mathring{S}_i\equiv \mathring{S}_i(z)\mathrel{\mathop:}=\mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} G\mathbf{e}_i=S_i-h_{ii}b_iG_{ii}\,, \qquad \quad \mathring{T}_i\equiv \mathring{T}_i(z)\mathrel{\mathop:}=\mathring{\mathbf{h}}_i^* G\mathbf{e}_i=T_i-h_{ii}G_{ii}\,, \label{17072581}
\end{align}
where the random variable $h_{ii}$ is removed.
Further, we denote (dropping the $z$-dependence from the notation for brevity)
\begin{align}
\Lambda_{{\rm d} i}\mathrel{\mathop:}=\Big|G_{ii}-\frac{1}{a_i-\omega_B}\Big|\,,\qquad \qquad \Lambda_{{\rm d}}\mathrel{\mathop:}=\max_i \Lambda_{{\rm d} i}\,,\qquad \qquad\Lambda_T\mathrel{\mathop:}=\max_i|T_{i}|\,. \label{17072571}
\end{align}
We also define $\Lambda_{{\rm d} i}^c$ and $\Lambda_{{\rm d}}^c$ analogously by replacing $\omega_B$ by $\omega_B^c$ in the definitions of $\Lambda_{{\rm d} i}$ and $\Lambda_{{\rm d}}$, respectively.
In addition, we use the notations $\widetilde{\Lambda}_{{\rm d} i}, \widetilde{\Lambda}_{{\rm d}}, \widetilde{\Lambda}_T, \widetilde{\Lambda}_{{\rm d} i}^c, \widetilde{\Lambda}_{{\rm d}}^c$ to represent their analogues, obtained by switching the r\^oles of $A$ and $B$, and the r\^oles of $U$ and $U^*$, in the definitions of $\Lambda_{{\rm d} i}, \Lambda_{{\rm d}}, \Lambda_T, \Lambda_{{\rm d} i}^c, \Lambda_{{\rm d}}^c$,~\emph{e.g., }
\begin{align}
\Lambda_{{\rm d} i}^c\mathrel{\mathop:}=\Big|G_{ii}-\frac{1}{a_i-\omega_B^c}\Big|\,,\qquad\qquad\widetilde{\Lambda}_{{\rm d} i}\mathrel{\mathop:}=\Big|\mathcal{G}_{ii}-\frac{1}{b_i-\omega_A}\Big|\,. \label{17080305}
\end{align}
Recall $P_{i}$, $K_i$, and $\Upsilon$ defined in~\eqref{17011301},~\eqref{17072420} and~\eqref{17020511}. We further observe the elementary identities
\begin{align}
\widetilde{B}G=I-(A-z)G\,, \qquad\qquad G\widetilde{B}=I-G(A-z)\,. \label{17020508}
\end{align}
Using the first identity in (\ref{17020508}), we can rewrite $\Upsilon$ defined in~\eqref{17020511} as
\begin{align}
\Upsilon=\mathrm{tr}\, AG\; \mathrm{tr}\, \widetilde{B}G-\mathrm{tr}\, G\;\mathrm{tr}\, \widetilde{B}G A=\frac{1}{N}\sum_{i=1}^N a_i \Big(G_{ii} \mathrm{tr}\, \widetilde{B}G-(\widetilde{B}G)_{ii} \mathrm{tr}\, G\Big)\,. \label{17011302}
\end{align}
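This rewriting uses only that $A$ is diagonal and that $\mathrm{tr}$ denotes the normalized trace, so that $\mathrm{tr}\, \widetilde{B}GA=\frac{1}{N}\sum_i a_i(\widetilde{B}G)_{ii}$; it is an exact algebraic identity, which can be verified numerically as follows (a minimal sketch; the matrix size and random choices are purely illustrative):

```python
import numpy as np

# Check the rewriting of Upsilon in (17011302): with tr the normalized trace,
# A diagonal, and Bt = U B U*, the identity
#   tr(AG) tr(Bt G) - tr(G) tr(Bt G A)
#     = (1/N) sum_i a_i ( G_ii tr(Bt G) - (Bt G)_ii tr(G) )
# holds exactly, since tr(Bt G A) = (1/N) sum_i a_i (Bt G)_ii for diagonal A.
rng = np.random.default_rng(0)
N = 8
a = rng.standard_normal(N)
A = np.diag(a)
B = np.diag(rng.standard_normal(N))
# Haar-like unitary from the QR decomposition of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
Bt = U @ B @ U.conj().T
z = 0.5 + 0.3j
G = np.linalg.inv(A + Bt - z * np.eye(N))
tr = lambda M: np.trace(M) / N           # normalized trace

lhs = tr(A @ G) * tr(Bt @ G) - tr(G) * tr(Bt @ G @ A)
rhs = np.mean(a * (np.diag(G) * tr(Bt @ G) - np.diag(Bt @ G) * tr(G)))
```

Both sides coincide up to machine precision for any such choice of $A$, $B$, $U$, and $z\in\mathbb{C}^+$.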
To ease the presentation, we further introduce the control parameter
\begin{align}
\Pi_i\equiv \Pi_i(z)\mathrel{\mathop:}=\sqrt{\frac{\Im (G_{ii}(z)+\mathcal{G}_{ii}(z))}{N\eta}}\,,\qquad\qquad i\in\llbracket 1,N\rrbracket \,.\label{17020550}
\end{align}
Note that since $\|H\|< \mathcal{K}$ (\emph{c.f., } (\ref{17072840})), it is easy to see that $\Im G_{ii}(z)\gtrsim \eta$ and $\Im \mathcal{G}_{ii}(z)\gtrsim \eta$ for all $z\in \mathcal{D}_\tau(0,\eta_\mathrm{M})$, by spectral decomposition. This implies
\begin{align}
\frac{1}{\sqrt{N}}\lesssim \Pi_i(z)\,, \qquad \qquad\forall z\in \mathcal{D}_\tau(0,\eta_\mathrm{M})\,. \label{17020530}
\end{align}
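In more detail, writing the spectral decomposition $H=\sum_{k=1}^N\lambda_k\mathbf{u}_k\mathbf{u}_k^*$ and $z=E+\mathrm{i}\eta$, the bound $\|H\|<\mathcal{K}$ and the boundedness of $\mathcal{D}_\tau(0,\eta_\mathrm{M})$ yield
\begin{align*}
\Im\, G_{ii}(z)=\sum_{k=1}^N \frac{\eta\, |\mathbf{u}_k(i)|^2}{(\lambda_k-E)^2+\eta^2}\geq \frac{\eta}{(\mathcal{K}+|E|)^2+\eta_\mathrm{M}^2}\sum_{k=1}^N |\mathbf{u}_k(i)|^2\gtrsim \eta\,,
\end{align*}
since $\sum_{k=1}^N |\mathbf{u}_k(i)|^2=1$ by normalization; the same argument applies to $\Im\,\mathcal{G}_{ii}(z)$.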
In this section, we derive the following Green function subordination property.
\begin{pro} \label{pro.17020310} Suppose that the assumptions of Theorem \ref{thm. strong law at the edge} hold. Fix $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Assume~that
\begin{align}
\Lambda_{{\rm d}} (z)\prec N^{-\frac{\gamma}{4}}, \qquad \widetilde{\Lambda}_{{\rm d}}(z)\prec N^{-\frac{\gamma}{4}}, \qquad \Lambda_{T}(z)\prec 1, \qquad \widetilde{\Lambda}_T(z)\prec 1. \label{17020501}
\end{align}
Then we have, for all $i\in \llbracket 1, N\rrbracket$, that
\begin{align}
| P_i(z)|\prec \Psi(z), \qquad\qquad | K_i(z)|\prec \Psi(z). \label{17020301}
\end{align}
In addition, we also have that
\begin{align}
|\Upsilon(z) |\prec \Psi(z) \label{17020302}
\end{align}
and, for all $i\in \llbracket 1, N\rrbracket$, that
\begin{align}
\Lambda_{{\rm d} i}^c(z)\prec \Psi(z),\qquad\qquad |T_{i}|\prec \Psi(z). \label{17020303}
\end{align}
The same statements hold if we switch the r\^oles of $A$ and $B$, and also the r\^oles of $U$ and $U^*$.
\end{pro}
Before the actual proof of Proposition \ref{pro.17020310}, we establish several bounds that follow from the assumptions in (\ref{17020501}). From the definitions in (\ref{17072571}) and the assumptions in (\ref{17020501}), together with (\ref{17020502}), we see~that
\begin{align}
\max_{i\in \llbracket 1, N\rrbracket }|G_{ii}|\prec 1\,,\qquad \qquad \max_{i\in \llbracket 1, N\rrbracket }|T_{i}|\prec 1\,. \label{17020505}
\end{align}
Analogously, we also have $\max_{i\in \llbracket 1, N\rrbracket }|\mathcal{G}_{ii}|\prec 1$. Hence, under (\ref{17020501}), we see that
\begin{align*}
\max_{i\in \llbracket 1, N\rrbracket }\Pi_i(z)\prec \Psi(z).
\end{align*}
Moreover, using the identities in (\ref{17020508}),
we also get from the first bound in (\ref{17020505}) that
\begin{align}
\max_{i\in \llbracket 1, N\rrbracket} |(XGY)_{ii}|\prec 1, \qquad\qquad X, Y=I \;\text{or}\; \widetilde{B}. \label{170726100}
\end{align}
In addition, from (\ref{170730100}) we see that
\begin{align}
\frac{1}{N}\sum_{i=1}^N \frac{1}{a_i-\omega_B(z)}=m_{\mu_A}(\omega_B(z))=m_{\mu_A\boxplus\mu_B} (z). \label{17020507}
\end{align}
Then, the first bound in (\ref{17020501}), together with (\ref{17020507}), (\ref{17020508}), (\ref{17020503}) and (\ref{17020502}), leads to the following estimates
\begin{align}
\mathrm{tr}\, G&= m_{\mu_A\boxplus\mu_B} +O_\prec(N^{-\frac{\gamma}{4}})\,,\nonumber\\
\mathrm{tr}\, \widetilde{B} G&= (z-\omega_B) m_{\mu_A\boxplus \mu_B} +O_\prec(N^{-\frac{\gamma}{4}})\,, \nonumber\\
\mathrm{tr}\, \widetilde{B} G\widetilde{B}&=(\omega_B-z) \big(1+(\omega_B-z) m_{\mu_A\boxplus\mu_B}\big)+O_\prec(N^{-\frac{\gamma}{4}})\,. \label{17020535}
\end{align}
Furthermore, by (\ref{17020502}), (\ref{17020503}), and (\ref{17020507}), we see that all the above tracial quantities are $O_\prec(1)$. This also implies that $|\Upsilon|\prec 1$ (\emph{c.f., } (\ref{17020511})). Moreover, from
(\ref{17072550}) and the first two equations in (\ref{17020535}), we can get the following rough estimate under (\ref{17020501}) and (\ref{17020502}),
\begin{align}
\omega_B^c=\omega_B+O_\prec(N^{-\frac{\gamma}{4}})\,. \label{170725110}
\end{align}
\begin{proof}[Proof of Proposition \ref{pro.17020310}]
To prove (\ref{17020301}), it suffices to show the high order moment estimates
\begin{align}
\mathbb{E}\big[ | P_i|^{2p}\big]\prec \Psi^{2p}\,,\qquad \qquad \mathbb{E} \big[ | K_i|^{2p}\big]\prec \Psi^{2p}\,, \label{17072410}
\end{align}
for any fixed $p\in \mathbb{N}$. Let us introduce the notations
\begin{align}
\mathfrak{m}_i^{ (k,l)}\mathrel{\mathop:}= P_{i}^k\overline{ P_i^l}\,,\quad \quad \mathfrak{n}_i^{{ (k,l)}}\mathrel{\mathop:}= K_i^k\overline{ K_i^l},\qquad \qquad k,l\in \mathbb{N}\,,\quad\quad i\in\llbracket 1,N\rrbracket\,. \label{17072350}
\end{align}
Further, we make the following convention in the rest of the paper: the notation $O_\prec(\Psi^k)$, for any given integer $k$, represents some generic (possibly) $z$-dependent random variable $X\equiv X(z)$ which satisfies
\begin{align*}
|X|\prec \Psi^k, \qquad \text{and}\qquad \mathbb{E}|X|^q\prec \Psi^{qk}\,,
\end{align*}
for any given positive integer $q$. The first bound above follows directly from the original definition of the notation $O_\prec(\cdot)$. It turns out to be more convenient to also require the second one in our discussions below. It will be clear that the second bound always follows from the first one whenever this notation is used. For more details, we refer to the paragraph above Proposition~6.1 in~\cite{BES16}. Analogously, we make the same convention for any notation of the form $O_\prec(\Gamma)$ with a deterministic control parameter $\Gamma$.
With the definitions in (\ref{17072350}) and the convention made above, we have the following recursive moment estimates. This type of estimate was first used in~\cite{LS16} to derive local laws for sparse Wigner matrices.
\begin{lem}[Recursive moment estimate for $ P_i$ and $ K_i$] \label{lem.17021230} \label{lem.17020520} Suppose the assumptions of Proposition \ref{pro.17020310}. For any fixed integer $p\geq 1$ and any $i\in \llbracket 1, N\rrbracket$, we have
\begin{align}
\mathbb{E}[\mathfrak{m}_i^{(p,p)}]&=\mathbb{E}[O_\prec(\Psi)\mathfrak{m}_i^{(p-1,p)}]+\mathbb{E}[O_\prec(\Psi^2) \mathfrak{m}_i^{(p-2,p)}]+\mathbb{E}[O_\prec(\Psi^2) \mathfrak{m}_i^{(p-1,p-1)}]\,,\label{17021020}\\
\mathbb{E}[\mathfrak{n}_i^{(p,p)}]&=\mathbb{E}[O_\prec(\Psi)\mathfrak{n}_i^{(p-1,p)}]+\mathbb{E}[O_\prec(\Psi^2) \mathfrak{n}_i^{(p-2,p)}]+\mathbb{E}[O_\prec(\Psi^2) \mathfrak{n}_i^{(p-1,p-1)}]\,, \label{17021021}
\end{align}
where we made the convention $\mathfrak{m}_i^{(0,0)}=\mathfrak{n}_i^{(0,0)}=1$ and $\mathfrak{m}_i^{(-1,1)}=\mathfrak{n}_i^{(-1,1)}=0$ if $p=1$.
\end{lem}
Although the statements of Lemma~\ref{lem.17021230} are formulated in terms of $\Psi$, in the proof we actually obtain better estimates, with $\Pi_i^2$ in place of $\Psi^2$, for some of the error terms. We keep the stronger form of these estimates since the same errors will also appear in the averaged bounds in Section \ref{s. rough bound}. The average of these errors is typically smaller than $\Psi^2$.
\begin{proof}[Proof of Lemma \ref{lem.17020520}] The proof is very similar to that of Lemma 7.3 of \cite{BES16b}, which is presented for the block additive model in the bulk regime. It suffices to go through the strategy in \cite{BES16b} for our additive model again. The strategy also works well at the regular edge, provided (\ref{17020502}) and (\ref{17020503}) hold. In addition, instead of the control parameter $\Psi$ used in the proof of Lemma 7.3 of \cite{BES16b}, we aim here at controlling many errors in terms of $\Pi_i$. This requires a more careful estimate on the error terms. Due to the similarity to the proof of Lemma 7.3 of \cite{BES16b}, we only sketch the proof of Lemma \ref{lem.17020520} in the sequel.
For each $i\in \llbracket 1, N\rrbracket$, we write
\begin{align}
\mathbb{E}[\mathfrak{m}_i^{(p,p)}] =\mathbb{E}[ P_i\mathfrak{m}_i^{(p-1,p)}]&=\mathbb{E}[(\widetilde{B}G)_{ii} \mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}]+\mathbb{E}\big[\big(-G_{ii}\mathrm{tr}\, \widetilde{B}G+(G_{ii}+T_{i})\Upsilon\big)\mathfrak{m}_i^{(p-1,p)}\big]\,,\label{17020532}
\end{align}
respectively,
\begin{align}
\mathbb{E}[\mathfrak{n}_i^{(p,p)}] =\mathbb{E}[ K_i \mathfrak{n}_i^{(p-1,p)}]=\mathbb{E}[T_i \mathfrak{n}_i^{(p-1,p)}]+\mathbb{E}\big[\big( (b_iT_i+(\widetilde{B}G)_{ii})\mathrm{tr}\, G- (G_{ii}+T_i) \mathrm{tr}\, \widetilde{B}G\big)\mathfrak{n}_i^{(p-1,p)}\big]\,. \label{17020533}
\end{align}
Using the fact $\mathbf{e}_i^*R_i=-\mathbf{h}_i^*$ (\emph{c.f., } (\ref{17072573})), we can write
\begin{align}
(\widetilde{B}G)_{ii}&=\mathbf{e}_i^* R_i\widetilde{B}^{\langle i\rangle} R_i G\mathbf{e}_i=-\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle} R_i G\mathbf{e}_i=-\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}G\mathbf{e}_i+\ell_i^2 \mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}(\mathbf{e}_i+\mathbf{h}_i)(\mathbf{e}_i+\mathbf{h}_i)^*G\mathbf{e}_i\nonumber\\
&=-S_i+\ell_i^2(b_i h_{ii}+\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}\mathbf{h}_i) (G_{ii}+T_i)= -\mathring{S}_{i}+\varepsilon_{i1}\,, \label{17020531}
\end{align}
where $S_i$ and $\mathring{S}_i$ are defined in (\ref{17072580}) and (\ref{17072581}), respectively, and
\begin{align}
\varepsilon_{i1}\mathrel{\mathop:}=\big((\ell_i^2-1)b_i h_{ii}+\ell_i^2 \mathbf{h}_i^* \widetilde{B}^{\langle i\rangle} \mathbf{h}_i\big) G_{ii}+\ell_i^2 \big(b_i h_{ii}+\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}\mathbf{h}_i\big) T_i\,. \label{17071805}
\end{align}
With the aid of Lemma \ref{lem.091720}, it is elementary to check
\begin{align}
|h_{ii}|\prec \frac{1}{\sqrt{N}}\,,\qquad\quad |\ell_i^2-1|\prec \frac{1}{\sqrt{N}}\,,\quad \qquad |\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle}\mathbf{h}_i|\prec \frac{1}{\sqrt{N}}\,, \label{17021540}
\end{align}
where in the last inequality we also used the fact that $\mathrm{tr}\, \widetilde{B}^{\langle i\rangle}=\mathrm{tr}\, B=0$, under the convention (\ref{17072620}).
Applying the bounds in (\ref{17020505}) and (\ref{17021540}),
it is easy to see that
\begin{align}
|\varepsilon_{i1}|\prec \frac{1}{\sqrt{N}}\,. \label{17020534}
\end{align}
Substituting (\ref{17020531}) and (\ref{17020534}) into the first term on the right hand side of (\ref{17020532}), we have
\begin{align}
\mathbb{E}[(\widetilde{B}G)_{ii} \mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}]=-\mathbb{E}[\mathring{S}_i \mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}]+\mathbb{E}[O_\prec(N^{-\frac12})\mathfrak{m}_i^{(p-1,p)}]\,, \label{17020536}
\end{align}
where for the second term on the right hand side above we also used $\mathrm{tr}\, G=O_\prec(1)$; \emph{c.f., } (\ref{17020535}). We recall the definition of $\mathring{S}_i$ from (\ref{17072581}) and rewrite
\begin{align*}
\mathring{S}_i=\sum_{k}^{(i)} \bar{g}_{ik} \frac{1}{\|\mathbf{g}_i\| }\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle}G\mathbf{e}_i.
\end{align*}
Hereafter, we use the notation $\sum_{k}^{(i)}$ to represent the sum over $k\in \llbracket 1, N\rrbracket\setminus\{i\}$.
Thus, the first term on the right of (\ref{17020536}) is of the form $\mathbb{E}[\sum_{k}^{(i)} \bar{g}_{ik} \langle \cdots \rangle]$, where $\langle \cdots\rangle$ can be regarded as a function of the $\bar{g}_{ik}$'s and the $g_{ik}$'s. Recall the following integration by parts formula for complex centered Gaussian variables,
\begin{align}
\int_{\mathbb{C}} \bar{g} f(g,\bar{g}) \e{-\frac{|g|^2}{\sigma^2}} {\rm d}^2 g=\sigma^2 \int_{\mathbb{C}} \partial_g f(g,\bar{g}) \e{-\frac{|g|^2}{\sigma^2}}{\rm d}^2 g \,, \label{17021237}
\end{align}
for any differentiable function $f: \mathbb{C}^2\to \mathbb{C}$. Applying (\ref{17021237}) to the first term on the right of (\ref{17020536}), we~get
\begin{align}
&\mathbb{E}[\mathring{S}_i \mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}]=\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i)}{\partial g_{ik}}\mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}\Big]\nonumber\\
&\qquad+ \frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i\mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}\Big]+\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial \mathrm{tr}\, G}{\partial g_{ik}}\mathfrak{m}_i^{(p-1,p)} \Big]\nonumber\\
&\qquad+ \frac{p-1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \mathrm{tr}\, G \frac{\partial P_i}{\partial g_{ik}}\mathfrak{m}_i^{(p-2,p)}\Big]+ \frac{p}{N} \sum_k^{(i)} \mathbb{E} \Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \mathrm{tr}\, G \frac{\partial \overline{ P_i}}{\partial g_{ik}}\mathfrak{m}_i^{(p-1,p-1)}\Big]. \label{17020540}
\end{align}
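The integration by parts formula (\ref{17021237}) can also be sanity-checked numerically; the following minimal Monte Carlo sketch in Python (all names are illustrative) tests it for the sample function $f(g,\bar g)=g^2\bar g$, whose Wirtinger derivative is $\partial_g f=2g\bar g$, so that both sides equal $\mathbb{E}|g|^4=2\sigma^4$:

```python
import numpy as np

# Monte Carlo check of E[ conj(g) f ] = sigma^2 * E[ d_g f ] for a centered
# complex Gaussian g with E|g|^2 = sigma^2, using f(g, gbar) = g^2 * gbar,
# whose Wirtinger derivative in g (with gbar held fixed) is 2*g*gbar.
rng = np.random.default_rng(0)
sigma2 = 1.0
n = 10**6
# complex Gaussian with E|g|^2 = sigma2: real/imag parts are N(0, sigma2/2)
g = rng.normal(0.0, np.sqrt(sigma2 / 2), n) + 1j * rng.normal(0.0, np.sqrt(sigma2 / 2), n)

lhs = np.mean(np.conj(g) * g**2 * np.conj(g))   # E[ conj(g) * f(g, conj(g)) ]
rhs = sigma2 * np.mean(2 * g * np.conj(g))      # sigma^2 * E[ d_g f ]
# both sides approximate E|g|^4 = 2*sigma2^2 = 2
```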
Analogously, by $T_i=\mathring{T}_i+h_{ii}G_{ii}$, (\ref{17072581}), the first bound in (\ref{17020505}), the first bound in (\ref{17021540}), and also (\ref{17020530}), we can write the first term on the right hand side of (\ref{17020533}) as
\begin{align}
\mathbb{E}[T_i \mathfrak{n}_i^{(p-1,p)}]=\mathbb{E}[\mathring{T}_i \mathfrak{n}_i^{(p-1,p)}]+\mathbb{E}[O_\prec(N^{-\frac12}) \mathfrak{n}_i^{(p-1,p)}]\,. \label{17071802}
\end{align}
Similarly to (\ref{17020540}), applying the integration by parts formula, we obtain
\begin{align}
&\mathbb{E}[\mathring{T}_i \mathfrak{n}_i^{(p-1,p)}]=\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^*G\mathbf{e}_i)}{\partial g_{ik}} \mathfrak{n}_i^{(p-1,p)}\Big]+ \frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \mathfrak{n}_i^{(p-1,p)}\Big]\nonumber\\
&\;\;+ \frac{p-1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial K_i}{\partial g_{ik}}\mathfrak{n}_i^{(p-2,p)}\Big]+ \frac{p}{N} \sum_k^{(i)} \mathbb{E} \Big[ \frac{ \mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial \overline{ K_i}}{\partial g_{ik}}\mathfrak{n}_i^{(p-1,p-1)}\Big]\,. \label{17021011111}
\end{align}
First, we consider the first term on the right side of (\ref{17020540}). Recall $\ell_i$ from (\ref{17072590}). For brevity, we set
\begin{align}
c_i\mathrel{\mathop:}=\frac{\ell_i^2}{\|\mathbf{g}_i\| }. \label{170725102}
\end{align}
It is elementary to derive that
\begin{align}
&\frac{\partial G}{\partial g_{ik}}= c_i\big(G\mathbf{e}_k (\mathbf{e}_i+\mathbf{h}_i^*) \widetilde{B}^{\langle i\rangle} R_i G+GR_i\widetilde{B}^{\langle i\rangle}\mathbf{e}_k (\mathbf{e}_i+\mathbf{h}_i)^*G\big)+\Delta_G(i,k)\,. \label{17071801}
\end{align}
Here $\Delta_G(i,k)$ is a small remainder, defined as
\begin{align}
\Delta_G(i,k)\mathrel{\mathop:}=-G\Delta_R(i,k)\widetilde{B}^{\langle i\rangle} R_i G-GR_i\widetilde{B}^{\langle i\rangle}\Delta_R(i,k)G, \label{17022801}
\end{align}
where
\begin{align}
\Delta_R(i,k)\mathrel{\mathop:}=\frac{\ell_i^2}{2\|\mathbf{g}_i\| ^2} \bar{g}_{ik}\big(\mathbf{e}_i\mathbf{h}_i^*+\mathbf{h}_i\mathbf{e}_i^*+2\mathbf{h}_i\mathbf{h}_i^*\big)-\frac{\ell_i^4}{2\|\mathbf{g}_i\| ^3} g_{ii}\bar{g}_{ik}\big(\mathbf{e}_i+\mathbf{h}_i\big)\big(\mathbf{e}_i+\mathbf{h}_i\big)^*\,. \label{170729100}
\end{align}
The $\Delta_G(i,k)$'s are irrelevant error terms. We handle quantities with $\Delta_G(i,k)$ separately in Appendix~\ref{appendix B}.
Similarly to~(7.55) of \cite{BES16b}, using (\ref{17071801}), we can get
\begin{multline}
\frac{1}{N} \sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* \widetilde{B}^{\langle i\rangle}G\mathbf{e}_i)}{\partial g_{ik}}=-c_i \frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} G\mathbf{e}_k (b_i T_i +(\widetilde{B}G)_{ii})\\
+c_i\frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} G R_i \widetilde{B}^{\langle i\rangle} \mathbf{e}_k (G_{ii}+T_i)+\frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} \Delta_G(i,k) \mathbf{e}_i\,. \label{17020551}
\end{multline}
Note that $T_i$ naturally appears in the first term of (\ref{17020540}) after integrating by parts the $\mathring{S}_i$ term. This explains why we need to study the high moments of $K_i$ to get another equation.
Now, we claim that
\begin{align}
\frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} G\mathbf{e}_k=\mathrm{tr}\, \widetilde{B}G+O_\prec(\Pi_i^2)\,,\qquad \frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} G R_i \widetilde{B}^{\langle i\rangle} \mathbf{e}_k=\mathrm{tr}\, \widetilde{B}G\widetilde{B}+O_\prec(\Pi_i^2)\,, \label{17072401}
\end{align}
with $\Pi_i$ given in~\eqref{17020550}. We state the proof for the first estimate in (\ref{17072401}). Note that
\begin{align}
\frac{1}{N} \sum_k^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} G\mathbf{e}_k= \mathrm{tr}\, \widetilde{B}^{\langle i\rangle} G-\frac{1}{N} (\widetilde{B}^{\langle i\rangle} G)_{ii}=\mathrm{tr}\, \widetilde{B}^{\langle i\rangle} G+O_\prec(\frac{1}{N})\,, \label{17021510}
\end{align}
where the last step follows from the identity $(\widetilde{B}^{\langle i\rangle} G)_{ii}=b_i G_{ii}$ and (\ref{17020505}). Then, using that $\widetilde{B}^{\langle i\rangle}=R_i\widetilde{B} R_i$ and $R_i=I-\mathbf{r}_i\mathbf{r}_i^*$ (\emph{c.f., } (\ref{17072593})), we see that
\begin{align*}
\mathrm{tr}\, \widetilde{B}G-\mathrm{tr}\, \widetilde{B}^{\langle i\rangle} G= \mathrm{tr}\, \widetilde{B} G- \mathrm{tr}\, R_i\widetilde{B} R_iG=\frac{1}{N} \mathbf{r}_i^* \widetilde{B}G\mathbf{r}_i+\frac{1}{N}\mathbf{r}_i^*G\widetilde{B} \mathbf{r}_i-\frac{1}{N}\mathbf{r}_i^*\widetilde{B}\mathbf{r}_i \mathbf{r}_i^*G\mathbf{r}_i\,.
\end{align*}
Using (\ref{170725100}), $\ell_i=1+O_\prec(\frac{1}{\sqrt{N}})$ and $\|\mathbf{r}_i^* \widetilde{B}\|\lesssim 1$, we get by Cauchy-Schwarz that
\begin{align*}
\big|\mathbf{r}_i^* \widetilde{B}G\mathbf{r}_i\big|\lesssim\Big( \|G\mathbf{e}_i\|^2+\|G\mathbf{h}_i\|^2\Big)^{\frac{1}{2}}=\Big(\frac{\Im (G_{ii}+ \mathbf{h}_i^* G\mathbf{h}_i)}{\eta}\Big)^{\frac{1}{2}}= \Big(\frac{\Im (G_{ii}+\mathcal{G}_{ii})}{\eta}\Big)^{\frac{1}{2}},
\end{align*}
with $\mathcal{G}$ given in~\eqref{the tilda guys}, where in the last step we used
\begin{align}
\mathbf{h}_i^* G\mathbf{h}_i=\mathbf{u}_i^* G\mathbf{u}_i=\mathbf{e}_i^*U^*GU\mathbf{e}_i=\mathcal{G}_{ii} \label{170726140}
\end{align}
and the identities $|G|^2=\frac{1}{\eta} \Im G$ and $ |\mathcal{G}|^2=\frac{1}{\eta} \Im \mathcal{G}$.
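The identity $|G|^2=\frac{1}{\eta}\, \Im\, G$, with $|G|^2=G^*G$ and $\Im\, G=(G-G^*)/2\mathrm{i}$ understood in the matrix sense, follows from the resolvent computation $G-G^*=G^*\big((H-\bar z)-(H-z)\big)G=2\mathrm{i}\eta\, G^*G$; a minimal numerical sketch (illustrative size and test matrix):

```python
import numpy as np

# Numerical check of |G|^2 = Im(G)/eta for the resolvent G = (H - z)^{-1} of a
# Hermitian H, with |G|^2 = G*G and Im(G) = (G - G*)/(2i) in the matrix sense.
rng = np.random.default_rng(0)
N = 30
W = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (W + W.conj().T) / 2                 # Hermitian test matrix
E, eta = 0.7, 0.2
G = np.linalg.inv(H - (E + 1j * eta) * np.eye(N))

abs2_G = G.conj().T @ G                  # |G|^2
im_G = (G - G.conj().T) / 2j             # matrix imaginary part of G
# abs2_G and im_G / eta coincide up to machine precision
```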
Similarly, we have
\begin{align*}
\big|\mathbf{r}_i^*G\widetilde{B} \mathbf{r}_i\big|\lesssim \Big(\frac{\Im (G_{ii}+\mathcal{G}_{ii})}{\eta}\Big)^{\frac{1}{2}},\qquad \big|\mathbf{r}_i^*G \mathbf{r}_i\big|\lesssim \Big(\frac{\Im (G_{ii}+\mathcal{G}_{ii})}{\eta}\Big)^{\frac{1}{2}}.
\end{align*}
Hence, we have
\begin{align}
\big|\mathrm{tr}\, \widetilde{B}G-\mathrm{tr}\, \widetilde{B}^{\langle i\rangle} G\big|\lesssim \frac{1}{N} \Big(\frac{\Im (G_{ii}+\mathcal{G}_{ii})}{\eta}\Big)^{\frac{1}{2}}\lesssim \frac{\Im (G_{ii}+\mathcal{G}_{ii})}{N \eta}=O_\prec(\Pi_i^2)\,, \label{17021511}
\end{align}
where in the second step, we used the fact $\Im G_{ii}, \Im \mathcal{G}_{ii}\gtrsim \eta$. Combining (\ref{17021510}) with (\ref{17021511}) we obtain the first estimate of (\ref{17072401}). The second estimate in (\ref{17072401}) is proved in the same way.
Hence, using (\ref{17072401}) and the first estimate in (\ref{17030105}), we obtain from (\ref{17020551}) that
\begin{align}
\frac{1}{N} \sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* \widetilde{B}^{\langle i\rangle}G\mathbf{e}_i)}{\partial g_{ik}}=-c_i \mathrm{tr}\, \widetilde{B}G \big(b_i T_i+(\widetilde{B}G)_{ii}\big)+c_i \mathrm{tr}\, \widetilde{B} G\widetilde{B} \big( G_{ii}+T_i\big)+O_\prec({\Pi}_i^2)\,. \label{17020801}
\end{align}
Analogously, we can show that
\begin{align}
\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}}=-c_i\mathrm{tr}\, G\big( b_i T_i+(\widetilde{B}G)_{ii}\big)+c_i\mathrm{tr}\, \widetilde{B}G \big( G_{ii}+T_i\big)+O_\prec({\Pi}_i^2)\,. \label{17020802}
\end{align}
Using (\ref{17020533}), (\ref{17071802}), (\ref{17021011111}) and (\ref{17020802}) and the estimate $\frac{c_i}{\|\mathbf{g}_i\|}=1+O_\prec(\frac{1}{\sqrt{N}})$, we obtain
\begin{multline}
\mathbb{E}[ \mathfrak{n}_i^{(p,p)}]=\mathbb{E}\Big[ O_\prec(\Psi)\mathfrak{n}_i^{(p-1,p)}\Big]+ \frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \mathfrak{n}_i^{(p-1,p)}\Big]\\+ \frac{p-1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial K_i}{\partial g_{ik}}\mathfrak{n}_i^{(p-2,p)}\Big]+ \frac{p}{N} \sum_k^{(i)} \mathbb{E} \Big[ \frac{\mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial \overline{ K_i}}{\partial g_{ik}}\mathfrak{n}_i^{(p-1,p-1)}\Big]\,. \label{17021011}
\end{multline}
Then, combining (\ref{17020801}) with (\ref{17020802}), we obtain
\begin{multline}
\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* \widetilde{B}^{\langle i\rangle}G\mathbf{e}_i)}{\partial g_{ik}} \mathrm{tr}\, G =-c_i(G_{ii}+T_i) \big(\mathrm{tr}\, \widetilde{B}G-\Upsilon\big)+\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}} \mathrm{tr}\, \widetilde{B}G+O_\prec({\Pi}_i^2)\\=-c_i(G_{ii}+T_i) \big(\mathrm{tr}\, \widetilde{B}G-\Upsilon\big)+\mathring{T}_i \mathrm{tr}\, \widetilde{B}G+\Big(\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}}-\mathring{T}_i \Big)\mathrm{tr}\, \widetilde{B}G+O_\prec({\Pi}_i^2)\,. \label{17020812}
\end{multline}
Recall the definition of $c_i$ from (\ref{170725102}). It is elementary to check that
\begin{align}
c_i=\|\mathbf{g}_i\| -h_{ii}-\big( \|\mathbf{g}_i\| ^2-1\big)+O_\prec(\frac{1}{N})\,. \label{17020811}
\end{align}
Plugging (\ref{17020811}) into (\ref{17020812}) and also using the second equation in (\ref{17072581}), we can write
\begin{multline}
\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* \widetilde{B}^{\langle i\rangle}G\mathbf{e}_i)}{\partial g_{ik}} \mathrm{tr}\, G= -\|\mathbf{g}_i\| \big( G_{ii}\mathrm{tr}\, \widetilde{B}G-(G_{ii}+T_i)\Upsilon\big)\\
+\Big(\frac{1}{N}\sum_k^{(i)} \frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}}-\|\mathbf{g}_i\| \mathring{T}_i \Big)\mathrm{tr}\, \widetilde{B}G+\varepsilon_{i2}+O_\prec({\Pi}_i^2), \label{17021008}
\end{multline}
where $\varepsilon_{i2}$ collects irrelevant terms
\begin{align}
\varepsilon_{i2}\mathrel{\mathop:}= &\big(\|\mathbf{g}_i\| -c_i\big) \big(G_{ii}\mathrm{tr}\, \widetilde{B}G-(G_{ii}+T_i)\Upsilon\big)+\big(\|\mathbf{g}_i\| \mathring{T}_i-c_i T_i\big)\mathrm{tr}\, \widetilde{B}G\nonumber\\
=& \big( \|\mathbf{g}_i\| ^2-1\big)G_{ii}\mathrm{tr}\, \widetilde{B}G- \big(h_{ii}+\big( \|\mathbf{g}_i\| ^2-1\big)\big) (G_{ii}+T_i)\Upsilon\nonumber\\
&\qquad+\big(h_{ii}+\big( \|\mathbf{g}_i\| ^2-1\big)\big)T_i \mathrm{tr}\, \widetilde{B}G+O_\prec\big(\frac{1}{N}\big)\,. \label{17021004}
\end{align}
From the estimates $|h_{ii}|\prec\frac{1}{\sqrt{N}}$, $\|\mathbf{g}_i\| =1+O_\prec(\frac{1}{\sqrt{N}})$, (\ref{17020505}) and the observation that the tracial quantities are $O_\prec (1)$, we~see that
\begin{align}
\varepsilon_{i2}=O_\prec\big(\frac{1}{\sqrt{N}}\big)\,. \label{170729110}
\end{align}
Combining (\ref{17020532}), (\ref{17020531}), (\ref{17020540}) and (\ref{17021008}), we have
\begin{align}
&\mathbb{E}[\mathfrak{m}_i^{(p,p)}] =-\mathbb{E}[(\mathring{S}_{i}-\varepsilon_{i1}) \mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}]+\mathbb{E}\big[\big(-G_{ii}\mathrm{tr}\, \widetilde{B}G+(G_{ii}+T_{i})\Upsilon\big)\mathfrak{m}_i^{(p-1,p)}\big]\nonumber\\
&= \mathbb{E}\Big[ \Big(\mathring{T}_i-\frac{1}{\|\mathbf{g}_i\| }\frac{1}{N} \sum_k^{(i)}\frac{\partial (\mathbf{e}_k^*G\mathbf{e}_i)}{\partial g_{ik}}\Big)\mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-1,p)}\Big]-\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i\mathrm{tr}\, G \mathfrak{m}_i^{(p-1,p)}\Big]\nonumber\\
& -\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial \mathrm{tr}\, G}{\partial g_{ik}}\mathfrak{m}_i^{(p-1,p)} \Big]- \frac{p-1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \mathrm{tr}\, G \frac{\partial P_i}{\partial g_{ik}}\mathfrak{m}_i^{(p-2,p)}\Big]\nonumber\\
&-\frac{p}{N} \sum_k^{(i)} \mathbb{E} \Big[ \frac{\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i}{\|\mathbf{g}_i\| } \mathrm{tr}\, G \frac{\partial \overline{ P_i}}{\partial g_{ik}}\mathfrak{m}_i^{(p-1,p-1)}\Big]+\mathbb{E}\Big[\Big(\varepsilon_{i1}\mathrm{tr}\, G-\frac{1}{\|\mathbf{g}_i\| } \varepsilon_{i2}+O_\prec(\Pi_i^2) \Big) \mathfrak{m}_i^{(p-1,p)}\Big]. \label{17021023}
\end{align}
For the first term on the right of (\ref{17021023}), analogously to (\ref{17021011111}), applying (\ref{17021237}) to the $\mathring{T}_i$-term, we get
\begin{align}
&\mathbb{E}\Big[\Big(\mathring{T}_i- \frac{1}{\|\mathbf{g}_i\| }\frac{1}{N} \sum_k^{(i)}\frac{\partial (\mathbf{e}_k^*G\mathbf{e}_i)}{\partial g_{ik}}\Big)\mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-1,p)}\Big]\nonumber\\
&=\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial \mathrm{tr}\, \widetilde{B}G}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-1,p)}\Big] +\frac{1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-1,p)}\Big]\nonumber\\
&\qquad+ \frac{p-1}{N} \sum_k^{(i)} \mathbb{E}\Big[ \frac{ \mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial P_i}{\partial g_{ik}}\mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-2,p)}\Big]+ \frac{p}{N} \sum_k^{(i)} \mathbb{E} \Big[ \frac{\mathbf{e}_k^* G\mathbf{e}_i}{\|\mathbf{g}_i\| } \frac{\partial \overline{ P_i}}{\partial g_{ik}}\mathrm{tr}\, \widetilde{B}G \mathfrak{m}_i^{(p-1,p-1)}\Big]. \label{17021024}
\end{align}
Recall the estimates of $\varepsilon_{i1}$ and $\varepsilon_{i2}$ in (\ref{17020534}) and (\ref{170729110}), respectively, which imply that $|\varepsilon_{i1}|\prec\Psi$ and $|\varepsilon_{i2}|\prec \Psi$. Therefore, to show (\ref{17021020}), it suffices to estimate the second to the fifth terms on the right side of (\ref{17021023}), and all the terms on the right side of (\ref{17021024}). Similarly, in light of (\ref{17020533}), (\ref{17071802}), and (\ref{17020802}), to show (\ref{17021021}), it suffices to estimate the last three terms on the right side of (\ref{17021011}). All these terms can be estimated based on the following lemma.
\begin{lem} \label{lem.17021201} Suppose that the assumptions in Proposition \ref{pro.17020310} hold. Set $X_i=I$ or $\widetilde{B}^{\langle i\rangle}$, and let $X=I$ or $A$. Let $Q$ be any (possibly random) diagonal matrix satisfying $\|Q\|\prec 1$. We have the following estimates
\begin{align}
&\frac{1}{N} \sum_k^{(i)} \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* X_i G\mathbf{e}_i=O_\prec(\frac{1}{N}), \qquad& &\frac{1}{N} \sum_k^{(i)} \mathbf{e}_i^*X \frac{\partial G}{\partial g_{ik}} \mathbf{e}_i\mathbf{e}_k^* X_i G\mathbf{e}_i=O_\prec({\Pi}_i^2), \nonumber\\
& \frac{1}{N} \sum_k^{(i)} \frac{\partial T_{i}}{ \partial g_{ik}} \mathbf{e}_k^* X_i G\mathbf{e}_i= O_\prec({\Pi}_i^2),\qquad& &\frac{1}{N} \sum_k^{(i)} \mathrm{tr}\, \Big(Q X\frac{\partial G}{\partial g_{ik}}\Big) \mathbf{e}_k^* X_i G\mathbf{e}_i=O_\prec\big(\Psi^2\Pi_i^2\big),\nonumber\\
&\frac{1}{N} \sum_k^{(i)} \mathrm{tr}\, \Big(Q X\frac{\partial G}{\partial g_{ik}}\Big) \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i=O_\prec\big(\Psi^2\Pi_i^2\big). \label{17021202}
\end{align}
In addition, the same estimates hold if we replace $\frac{\partial G}{\partial g_{ik}}$ and $\frac{\partial T_i}{\partial g_{ik}}$ by their complex conjugates $\frac{\partial \overline{G}}{\partial g_{ik}}$ and $\frac{\partial \overline{T}_i}{\partial g_{ik}}$ in the last four equations above.
\end{lem}
The proof of Lemma \ref{lem.17021201} will be postponed to Appendix \ref{appendix B}.
With the aid of Lemma \ref{lem.17021201}, the remaining proof of Lemma \ref{lem.17020520} is the same as the corresponding part of the proof of Lemma 7.3 in \cite{BES16b}. The only difference is that we use the improved bounds in Lemma \ref{lem.17021201} instead of those in Lemma 7.4 in \cite{BES16b}. Specifically, the estimates for the second term of (\ref{17021011}), the second term of (\ref{17021023}), and the second term of (\ref{17021024}) follow from the first equation in (\ref{17021202}).
The third term of (\ref{17021023}) and the first term of (\ref{17021024}) can be estimated by the last equation in (\ref{17021202}), after writing $\mathrm{tr}\, \widetilde{B}G= 1-\mathrm{tr}\, (A-z)G$. All the other terms involve $\frac{\partial K_i}{\partial g_{ik}}$, $\frac{\partial P_i}{\partial g_{ik}}$, or their complex conjugates. Recall the definitions in (\ref{17011301}) and (\ref{17072420}), and also the first equation in (\ref{17020508}). Then, by the chain rule, we see that all terms in (\ref{17021011}), (\ref{17021023}) and (\ref{17021024}) involving $\frac{\partial K_i}{\partial g_{ik}}$, $\frac{\partial P_i}{\partial g_{ik}}$, or their complex conjugates can be estimated by combining the last three equations in (\ref{17021202}).
This completes the proof of Lemma \ref{lem.17020520}.
\end{proof}
With Lemma \ref{lem.17020520}, we can complete the proof of Proposition \ref{pro.17020310}.
\begin{proof}[Proof of Proposition \ref{pro.17020310}]
The proof is nearly the same as that of Theorem 7.2 in \cite{BES16b}. For the convenience of the reader, we sketch it below.
First, using Young's inequality, we obtain from (\ref{17021020}) that for any given (small) $\varepsilon>0$,
\begin{align*}
\mathbb{E}\big[\mathfrak{m}_i^{(p,p)} \big]\leq \frac{1}{3} \frac{1}{2p} N^{2p\varepsilon}\Psi^{2p}+3\frac{2p-1}{2p} N^{-\frac{2p\varepsilon}{2p-1}} \mathbb{E} \big[\mathfrak{m}_i^{(p,p)}\big].
\end{align*}
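Here we applied the weighted Young inequality: for $x, y\geq 0$, any $\varepsilon>0$, and the conjugate exponents $2p$ and $\frac{2p}{2p-1}$,
\begin{align*}
xy=(N^{\varepsilon}x)(N^{-\varepsilon}y)\leq \frac{1}{2p} N^{2p\varepsilon} x^{2p}+\frac{2p-1}{2p} N^{-\frac{2p\varepsilon}{2p-1}} y^{\frac{2p}{2p-1}}\,,
\end{align*}
applied term by term to the right hand side of (\ref{17021020}).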
Since $\varepsilon>0$ was arbitrary, this implies the first bound in (\ref{17072410}). The second one then follows from (\ref{17021021}) in the same manner. By Markov's inequality, we get (\ref{17020301}).
Next, we show how (\ref{17020302}) and (\ref{17020303}) follow from (\ref{17020301}) and the assumption (\ref{17020501}). To this end, we first prove the following crude bound
\begin{align}
\Lambda_T(z)\prec N^{-\frac{\gamma}{4}}\,. \label{17072441}
\end{align}
From the definition in (\ref{17072420}), we can rewrite the second estimate in (\ref{17020301}) as
\begin{align}
(1+b_i\mathrm{tr}\, G-\mathrm{tr}\, (\widetilde{B}G))T_{i}= G_{ii}\mathrm{tr}\, (\widetilde{B}G)-(\widetilde{B}G)_{ii} \mathrm{tr}\, G+O_\prec(\Psi)\,. \label{17072432}
\end{align}
Using the identity
\begin{align}
(\widetilde{B}G)_{ii}=1-(a_i-z)G_{ii}(z)\,, \label{170725131}
\end{align}
and approximating $G_{ii}$ by $(a_i-\omega_B)^{-1}$, we get from (\ref{17020501}) and (\ref{17020502}) that
\begin{align}
(\widetilde{B}G)_{ii}=\frac{z-\omega_B}{a_i-\omega_B}+O_\prec(N^{-\frac{\gamma}{4}})\,. \label{17072431}
\end{align}
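Indeed, replacing $G_{ii}$ by $(a_i-\omega_B)^{-1}$ in (\ref{170725131}) yields
\begin{align*}
1-\frac{a_i-z}{a_i-\omega_B}=\frac{(a_i-\omega_B)-(a_i-z)}{a_i-\omega_B}=\frac{z-\omega_B}{a_i-\omega_B}\,,
\end{align*}
and the replacement error is $O_\prec(N^{-\frac{\gamma}{4}})$ by (\ref{17020501}) and (\ref{17020502}).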
We also recall the estimates of the tracial quantities in (\ref{17020535}) under the assumption (\ref{17020501}). Plugging (\ref{17072431}), (\ref{17020535}) and the first bound in the assumption (\ref{17020501}) into (\ref{17072432}), we~get
\begin{align}
\big(1+(b_i-z+\omega_B) m_{\mu_A\boxplus \mu_B}+O_\prec(N^{-\frac{\gamma}{4}})\big)T_{i}= O_\prec(N^{-\frac{\gamma}{4}})+O_\prec(\Psi)=O_\prec(N^{-\frac{\gamma}{4}})\,, \label{17072440}
\end{align}
where in the last step we used that $\Psi\leq N^{-\frac{\gamma}{2}}$ for all $\eta\geq \eta_{\rm m}$. From the second line in (\ref{170730100}), we note~that
\begin{align*}
1+(b_i-z+\omega_B) m_{\mu_A\boxplus \mu_B}=m_{\mu_A\boxplus \mu_B}\Big(\frac{1}{m_{\mu_A\boxplus \mu_B}}+b_i-z+\omega_B\Big)= m_{\mu_A\boxplus \mu_B}(b_i-\omega_A)\,.
\end{align*}
Using (\ref{17020502}) and $\|A\|, \|B\|\leq C$, we get $|m_{\mu_A\boxplus \mu_B}(b_i-\omega_A)|\gtrsim 1$. This together with (\ref{17072440}) implies (\ref{17072441}).
To prove (\ref{17020302}), we recall the definition of $P_i$ in (\ref{17011301}), which implies that
\begin{align}
\frac{1}{N}\sum_{i=1}^N (G_{ii}+T_i) \Upsilon=\frac{1}{N}\sum_{i=1}^N P_i= O_\prec(\Psi)\,. \label{17072447}
\end{align}
Using the facts $\frac{1}{N}\sum_{i=1}^NG_{ii}=m_{\mu_A\boxplus\mu_B}+O_{\prec}(N^{-\frac{\gamma}{4}})$ (\emph{c.f., } (\ref{17020535})), and $\frac{1}{N}\sum_{i=1}^N T_i=O_\prec(N^{-\frac{\gamma}{4}})$, and also $|m_{\mu_A\boxplus\mu_B}|\gtrsim 1$, we get (\ref{17020302}) from (\ref{17072447}).
Then, combining (\ref{17020302}) with the first estimate in (\ref{17020301}), we get
\begin{align}
(\widetilde{B}G)_{ii}\mathrm{tr}\, G-G_{ii}\mathrm{tr}\, \widetilde{B}G=O_\prec(\Psi)\,. \label{17072450}
\end{align}
Applying the identity (\ref{170725131}) and the definition of $\omega_B^c$, we can rewrite (\ref{17072450}) as
\begin{align*}
\big((a_i-\omega_B^c)G_{ii}-1\big) \mathrm{tr}\, G=O_\prec(\Psi)\,.
\end{align*}
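To see this in detail, note that by (\ref{170725131}),
\begin{align*}
(\widetilde{B}G)_{ii}\mathrm{tr}\, G-G_{ii}\mathrm{tr}\, \widetilde{B}G=\mathrm{tr}\, G-G_{ii}\big((a_i-z)\mathrm{tr}\, G+\mathrm{tr}\, \widetilde{B}G\big)=-\big((a_i-\omega_B^c)G_{ii}-1\big)\mathrm{tr}\, G\,,
\end{align*}
where the last step uses that $\omega_B^c=z-\frac{\mathrm{tr}\, \widetilde{B}G}{\mathrm{tr}\, G}$ (\emph{c.f., } (\ref{17072550})).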
Since, as shown above, $|\mathrm{tr}\, G|\gtrsim 1$ with high probability under the assumption (\ref{17020501}), we get
$(a_i-\omega_B^c)G_{ii}-1=O_\prec(\Psi)$. By (\ref{170725110}) and (\ref{17020502}), we also note that $|a_i-\omega_B^c|\gtrsim 1$ with high probability. This further implies the first estimate in (\ref{17020303}).
Finally, plugging (\ref{17072450}) back into (\ref{17072432}), we can improve the right hand side of (\ref{17072440}) to $O_\prec(\Psi)$. Then the second estimate in (\ref{17020303}) follows. This completes the proof of Proposition \ref{pro.17020310}.
\end{proof}
\section{Rough fluctuation averaging for general linear combinations} \label{s. rough bound}
In this section, we prove a rough fluctuation averaging estimate for the basic quantities $Q_i$ defined in (\ref{17021701}).
From (\ref{17072450}), we see that
\begin{align}
|Q_i|\prec \Psi. \label{17021737}
\end{align}
Recall the definition of the control parameters $\Pi$ and $\Pi_i$ in~\eqref{17012101} and~\eqref{17020550}, respectively. The following proposition states that the average of the $Q_i$'s is typically smaller than an individual $Q_i$.
\begin{pro} \label{lem. rough fluctuation averaging} Fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro.17020310} hold. Set $X_i=I$ or $\widetilde{B}^{\langle i\rangle}$. Let $d_1, \ldots, d_N\in \mathbb{C}$ be possibly $H$-dependent quantities satisfying $\max_j|d_j|\prec 1$. Assume that they depend only weakly on the randomness in the sense that the following hold, for all $i,j\in \llbracket 1, N\rrbracket$,
\begin{align}
\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \frac{\partial d_j}{\partial g_{ik}} \mathbf{e}_k^* X_i G\mathbf{e}_i=O_\prec\big(\Psi^2\Pi^2\big)\,,\qquad\qquad \frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \frac{\partial d_j}{\partial g_{ik}} \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i=O_\prec\big(\Psi^2\Pi^2\big)\,, \label{17022530}
\end{align}
and the same bounds hold when the $d_j$'s are replaced by their complex conjugates $\overline{d_j}$.
Suppose that $\Pi(z)\prec \hat{\Pi}(z)$ for some deterministic and positive function $\hat{\Pi}(z)$ that satisfies ${\frac{1}{\sqrt{N\sqrt{\eta}}}}\prec \hat{\Pi}\prec \Psi$.
Then,
\begin{align}
\Big|\frac{1}{N} \sum_{i=1}^N d_i Q_i\Big | \prec \Psi \hat{\Pi}\,. \label{170723113}
\end{align}
\end{pro}
We remark that whenever the $d_j$'s are deterministic, (\ref{17022530}) trivially holds. However, we will also need~(\ref{170723113}) with certain random $d_j$'s that satisfy (\ref{17022530}).
For any $d_i$'s satisfying the assumption in Proposition \ref{lem. rough fluctuation averaging}, we introduce the notation
\begin{align}
\mathfrak{m}^{(k,l)}\mathrel{\mathop:}=\Big(\frac{1}{N}\sum_{i=1}^N d_i Q_i\Big)^k\Big(\frac{1}{N}\sum_{i=1}^N \overline{d_i}\; \overline{ Q_i}\Big)^l\,,\qquad\qquad k,l\in{\mathbb N}\,. \label{17071810}
\end{align}
Similarly to Lemma \ref{lem.17021230}, it suffices to prove the following recursive moment estimate.
\begin{lem} \label{lem.17021231} Fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{lem. rough fluctuation averaging} hold. Then, for any fixed integer $p\geq 1$, we have
\begin{align}
\mathbb{E}\big[ \mathfrak{m}^{(p,p)}\big]=\mathbb{E}\big[O_\prec(\hat{\Pi}^2)\mathfrak{m}^{(p-1,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-2,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-1,p-1)}\big]. \label{17071833}
\end{align}
\end{lem}
\begin{proof}[Proof of Proposition \ref{lem. rough fluctuation averaging}]
Similarly to the proof of (\ref{17020301}) from Lemma \ref{lem.17021230}, with Lemma \ref{lem.17021231}, we can get (\ref{170723113}) by applying Young's and Markov's inequalities. This completes the proof of Proposition~\ref{lem. rough fluctuation averaging}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem.17021231}] We first claim that it suffices to prove the following statements: If $|\Upsilon(z)|\prec \hat{\Upsilon}(z)$ for any deterministic and positive function $\hat{\Upsilon}(z)\leq\Psi(z)$, then
\begin{align}
\mathbb{E}\big[ \mathfrak{m}^{(p,p)}\big]=&\mathbb{E}\big[(O_\prec(\hat{\Pi}^2)+O_\prec(\Psi \hat{\Upsilon}))\mathfrak{m}^{(p-1,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-2,p)}\big]\nonumber\\
&\qquad+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-1,p-1)}\big]\,. \label{17072903}
\end{align}
Indeed, similarly to the proof of (\ref{17020301}) from Lemma \ref{lem.17021230}, we can again apply Young's inequality and Markov's inequality to get, for any $d_i$'s satisfying the assumptions in Proposition \ref{lem. rough fluctuation averaging}, that~\eqref{17072903} implies
\begin{align}
\Big|\frac{1}{N}\sum_{i=1}^N d_i Q_i\Big|\prec \hat{\Pi}^2+\Psi \hat{\Upsilon}+ \Psi\hat{\Pi}\prec \Psi \hat{\Upsilon}+ \Psi\hat{\Pi}\,, \label{17072901}
\end{align}
where in the last step we used the assumption $\hat{\Pi}\prec\Psi$.
Next, recall from (\ref{17011302}) that
\begin{align*}
\Upsilon=-\frac{1}{N}\sum_{i=1}^N a_i Q_i\,.
\end{align*}
Choosing $d_i=a_i$ for all $i$, we get from (\ref{17072901})
\begin{align}
|\Upsilon|\prec \Psi \hat{\Upsilon}+ \Psi\hat{\Pi}\prec N^{-\frac{\gamma}{4}} \hat{\Upsilon}+ \Psi\hat{\Pi}\,. \label{17072902}
\end{align}
Using the right hand side of (\ref{17072902}) as a new deterministic bound on $\Upsilon$, in place of the initial $\hat{\Upsilon}$ in (\ref{17072903}), and performing the above argument iteratively, we finally get
\begin{align*}
|\Upsilon|\prec \Psi\hat{\Pi}\,.
\end{align*}
Hence, at the end, we can choose $\hat{\Upsilon}= \Psi\hat{\Pi}$ in (\ref{17072903}) and get
\begin{align}
\mathbb{E}\big[ \mathfrak{m}^{(p,p)}\big]=&\mathbb{E}\big[(O_\prec(\hat{\Pi}^2)+O_\prec(\Psi^2 \hat{\Pi}))\mathfrak{m}^{(p-1,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-2,p)}\big]\nonumber\\
&\qquad+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-1,p-1)}\big]. \label{17072910}
\end{align}
Observe that by the assumption that $\frac{1}{N\sqrt{\eta}}\prec \hat{\Pi}$, we also have $\Pi^2\prec \hat{\Pi}$ on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$.
Then the $O_\prec(\Psi^2 \hat{\Pi})$ term can be absorbed by the $O_\prec(\hat{\Pi}^2)$ in (\ref{17072910}). Hence, we conclude (\ref{17071833}) from (\ref{17072903}). Therefore, in the sequel, we will focus on proving (\ref{17072903}).
Set $D\mathrel{\mathop:}=\text{diag}(d_i)_{i=1}^N$.
We first write
\begin{align}
\frac{1}{N} \sum_{i=1}^N d_i Q_i
= \frac{1}{N}\sum_{i=1}^N (\widetilde{B}G)_{ii} \big(d_i \mathrm{tr}\, G- \mathrm{tr}\, DG\big)=\frac{1}{N}\sum_{i=1}^N (\widetilde{B}G)_{ii}\mathrm{tr}\, G \tau_{i1} , \label{17021232}
\end{align}
where we introduced the notation
\begin{align}
\tau_{i1}\mathrel{\mathop:}= d_i-\frac{\mathrm{tr}\, D G}{\mathrm{tr}\, G}. \label{17021305}
\end{align}
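The first equality in (\ref{17021232}) used that
\begin{align*}
\frac{1}{N}\sum_{i=1}^N d_i G_{ii} \mathrm{tr}\, \widetilde{B}G=\mathrm{tr}\, DG\, \mathrm{tr}\, \widetilde{B}G=\frac{1}{N}\sum_{i=1}^N (\widetilde{B}G)_{ii}\,\mathrm{tr}\, DG\,.
\end{align*}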
Similarly to the proof of (\ref{17020301}), we approximate $(\widetilde{B}G)_{ii}$ by $-\mathring{S}_i$ (\emph{c.f., } (\ref{17020531})), and then perform integration by parts using~\eqref{17021237} with respect to $\mathring{\mathbf{g}}_i$ in $\mathring{S}_i$. More specifically, we write
\begin{align}
\mathbb{E}\big[ \mathfrak{m}^{(p,p)}\big] &=\frac{1}{N}\sum_{i=1}^N \mathbb{E}\Big[ (\widetilde{B}G)_{ii} \mathrm{tr}\, G \tau_{i1}\mathfrak{m}^{(p-1,p)}\Big]\nonumber\\
&=-\frac{1}{N}\sum_{i=1}^N \mathbb{E}\Big[ \mathring{S}_{i} \mathrm{tr}\, G \tau_{i1}\mathfrak{m}^{(p-1,p)}\Big]+
\mathbb{E}\Big[ \varepsilon_1\mathfrak{m}^{(p-1,p)}\Big] , \label{17021240}
\end{align}
where we used the notation
\begin{align}
\varepsilon_{1}\mathrel{\mathop:}=\frac{1}{N}\sum_{i=1}^N \varepsilon_{i1} \mathrm{tr}\, G \tau_{i1}. \label{17021320}
\end{align}
Here $\varepsilon_{i1}$ is defined in (\ref{17071805}). To ease the presentation, we further introduce the notation
\begin{align}
\tau_{i2}\mathrel{\mathop:}=-\tau_{i1} \mathrm{tr}\, \widetilde{B}G. \label{1702130512}
\end{align}
Using assumption (\ref{17020501}), (\ref{17020535}), and also~\eqref{17020502}, one checks that $|\tau_{i1}|\prec 1$, $|\tau_{i2}|\prec 1$, for all $i\in \llbracket 1, N\rrbracket $.
Similarly to (\ref{17020540}), applying~(\ref{17021237}) to the first term on the right hand side of (\ref{17021240}), we obtain
\begin{align}
\frac{1}{N}\sum_{i=1}^N \mathbb{E}\Big[ \mathring{S}_{i} &\mathrm{tr}\, G \tau_{i1}\mathfrak{m}^{(p-1,p)}\Big]=\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i)}{\partial g_{ik}}\mathrm{tr}\, G \tau_{i1} \mathfrak{m}^{(p-1,p)}\Big]\nonumber\\
&\quad+ \frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i\mathrm{tr}\, G \tau_{i1} \mathfrak{m}^{(p-1,p)}\Big]\nonumber\\
&\quad +\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| } \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i \frac{\partial (\mathrm{tr}\, G \tau_{i1})}{\partial g_{ik}}\mathfrak{m}^{(p-1,p)} \Big]\nonumber\\
&\quad+ \frac{p-1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| } \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i \mathrm{tr}\, G \tau_{i1} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (d_jQ_j)}{\partial g_{ik}}\Big)\mathfrak{m}^{(p-2,p)}\Big]\nonumber\\
&\quad+ \frac{p}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E} \Big[ \frac{1}{\|\mathbf{g}_i\| } \mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i \mathrm{tr}\, G \tau_{i1} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (\overline{d_j}\overline{ Q_j})}{\partial g_{ik}}\Big)\mathfrak{m}^{(p-1,p-1)}\Big]. \label{17021250}
\end{align}
First, we estimate the first term on the right hand side of (\ref{17021250}). Using (\ref{17021008}) and the bound
\begin{align*}
\frac{1}{N}\sum_{i=1}^N \Pi_i^2\leq 2 \Pi^2,
\end{align*}
we have
\begin{align*}
&\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i)}{\partial g_{ik}}\mathrm{tr}\, G \tau_{i1}=-\frac{1}{N} \sum_{i=1}^N \big( G_{ii}\mathrm{tr}\, \widetilde{B}G-(G_{ii}+T_i)\Upsilon\big) \tau_{i1}\nonumber\\
&\qquad\qquad \qquad\qquad+\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \Big(\mathring{T}_i-\frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}} \Big) \tau_{i2}+\varepsilon_{2}+O_\prec(\Pi^2)\,,
\end{align*}
where we have introduced
\begin{align}
\varepsilon_{2}\mathrel{\mathop:}=\frac{1}{N} \sum_{i=1}^N \frac{1}{\|\mathbf{g}_i\| } \tau_{i1}\varepsilon_{i2}\,; \label{17021310}
\end{align}
see~(\ref{17021004}) for the definition of $\varepsilon_{i2}$.
According to the definition in (\ref{17021305}), we observe that
\begin{align*}
&\frac{1}{N} \sum_{i=1}^N \big( G_{ii}\mathrm{tr}\, \widetilde{B}G-(G_{ii}+T_i)\Upsilon\big)\tau_{i1}=\frac{1}{N} \sum_{i=1}^N G_{ii} \tau_{i1}\big(\mathrm{tr}\, \widetilde{B}G-\Upsilon\big) - \frac{1}{N} \sum_{i=1}^N T_i \tau_{i1} \Upsilon =O_\prec(\Psi \hat{\Upsilon})\,.
\end{align*}
Here in the last step we used the facts
\begin{align}
\sum_{i=1}^N G_{ii} \tau_{i1}=0\,,\qquad\qquad \frac{1}{N}\sum_{i=1}^N T_i \tau_{i1} \Upsilon =O_\prec(\Psi \hat{\Upsilon})\,, \label{170728100}
\end{align}
where the second estimate is implied by the second estimate in (\ref{17020303}), and the assumption that $|\Upsilon|\prec \hat{\Upsilon}$.
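The first identity in (\ref{170728100}) is checked directly from the definition (\ref{17021305}):
\begin{align*}
\sum_{i=1}^N G_{ii}\tau_{i1}=\sum_{i=1}^N d_i G_{ii}-\frac{\mathrm{tr}\, DG}{\mathrm{tr}\, G}\sum_{i=1}^N G_{ii}=N\,\mathrm{tr}\, DG-\frac{\mathrm{tr}\, DG}{\mathrm{tr}\, G}\, N\, \mathrm{tr}\, G=0\,.
\end{align*}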
Therefore, for the first term on the right hand side of (\ref{17021250}), we have
\begin{align}
&\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^*\widetilde{B}^{\langle i\rangle} G\mathbf{e}_i)}{\partial g_{ik}}\mathrm{tr}\, G \tau_{i1} \mathfrak{m}^{(p-1,p)}\Big]\nonumber\\
&= \frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)}\mathbb{E}\Big[ \Big(\mathring{T}_i-\frac{1}{\|\mathbf{g}_i\| }\frac{\partial (\mathbf{e}_k^* G\mathbf{e}_i)}{\partial g_{ik}} \Big)\tau_{i2} \mathfrak{m}^{(p-1,p)}\Big]+\mathbb{E}\big[(\varepsilon_{2}+O_\prec(\Pi^2)+O_\prec(\Psi \hat{\Upsilon}))\mathfrak{m}^{(p-1,p)}\big]\nonumber\\
&= \frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \tau_{i2} \mathfrak{m}^{(p-1,p)}\Big] +\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| }\frac{\partial \tau_{i2}}{\partial g_{ik}} \mathbf{e}_k^* G\mathbf{e}_i \mathfrak{m}^{(p-1,p)}\Big]\nonumber\\
&\qquad+ \frac{p-1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E}\Big[ \frac{1}{\|\mathbf{g}_i\| } \mathbf{e}_k^* G\mathbf{e}_i \tau_{i2} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (d_jQ_j)}{\partial g_{ik}}\Big) \mathfrak{m}^{(p-2,p)}\Big]\nonumber\\
&\qquad+ \frac{p}{N^2} \sum_{i=1}^N \sum_k^{(i)} \mathbb{E} \Big[ \frac{1}{\|\mathbf{g}_i\| } \mathbf{e}_k^* G\mathbf{e}_i \tau_{i2} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (\overline{d_j}\overline{ Q_j})}{\partial g_{ik}}\Big)\mathfrak{m}^{(p-1,p-1)}\Big]\nonumber\\
&\qquad+\mathbb{E}\big[\big(\varepsilon_{2}+O_\prec(\Pi^2)+O_\prec(\Psi \hat{\Upsilon})\big)\mathfrak{m}^{(p-1,p)}\big], \label{17021252}
\end{align}
where the second equation is obtained analogously to (\ref{17021024}), by writing $\mathring{T}_i=\sum_k^{(i)}\bar{g}_{ik} \mathbf{e}_k^*G\mathbf{e}_i/\|\mathbf{g}_i\| $ and performing integration by parts with respect to the $g_{ik}$'s.
According to (\ref{17021240}), (\ref{17021250}), and (\ref{17021252}), it suffices to estimate the last term on the right side of (\ref{17021240}), the last four terms on the right side of (\ref{17021250}), and all the terms on the right side of (\ref{17021252}). All the desired estimates can be derived from the following lemma.
\begin{lem} \label{lem.17021301} Fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{lem. rough fluctuation averaging} hold, especially (\ref{17022530}) holds for $d_1, \ldots, d_N$ in the definition (\ref{17071810}). Let $\tilde{d}_1, \ldots, \tilde{d}_N \in \mathbb{C}$ be any (possibly random) numbers with the bound $\max_i|\tilde{d}_i|\prec 1$. Let $Q$ be any (possibly random) diagonal matrix that satisfies $\| Q\|\prec 1$. Set $X=I$ or $A$, and set $X_i=I$ or $\widetilde{B}^{\langle i\rangle}$. Then we have
\begin{align}
&\frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \frac{\partial \|\mathbf{g}_i\| ^{-1}}{\partial g_{ik}} \mathbf{e}_k^* X_iG\mathbf{e}_i=O_\prec(\frac{1}{N})\,,\label{17021302}\\
& \frac{1}{N^2} \sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathrm{tr}\, \Big(Q X\frac{\partial G}{\partial g_{ik}}\Big) \mathbf{e}_k^* X_iG\mathbf{e}_i=O_\prec(\Psi^2\Pi^2) \,,\label{17021303}
\end{align}
and the same estimate holds if we replace $\frac{\partial G}{\partial g_{ik}}$ by the complex conjugate $\frac{\partial \overline{G}}{\partial g_{ik}}$ in (\ref{17021303}).
Further, we~have
\begin{multline}
\mathbb{E} \big[ \varepsilon_j\mathfrak{m}^{(p-1,p)}\big]=\mathbb{E}\big[O_\prec(\hat{\Pi}^2)\mathfrak{m}^{(p-1,p)}\big]\\+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-2,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{m}^{(p-1,p-1)}\big]\,, \qquad j=1,2.\label{17021311}
\end{multline}
\end{lem}
We postpone the proof of Lemma \ref{lem.17021301} and continue with the proof of Lemma \ref{lem.17021231} instead.
The second term of (\ref{17021250}) and the first term of (\ref{17021252}) are directly estimated by (\ref{17021302}).
Using the definition of $\tau_{i1}$ in (\ref{17021305}) and that of $\tau_{i2}$ in (\ref{1702130512}), the boundedness of the tracial quantities (\emph{c.f., } (\ref{17020535})), and the chain rule, we obtain the estimates on the third term of (\ref{17021250}) and the second term of (\ref{17021252}) from (\ref{17021303}) and the assumption (\ref{17022530}). For the last two terms of (\ref{17021250}), and the third and fourth terms of (\ref{17021252}), we note that
\begin{align*}
\frac{1}{N} \sum_{j=1}^N d_j Q_j=\mathrm{tr}\, D\widetilde{B}G\,\mathrm{tr}\, G-\mathrm{tr}\,\widetilde{B}G \,\mathrm{tr}\, DG=\mathrm{tr}\, D \,\mathrm{tr}\, G-\mathrm{tr}\, DG- \mathrm{tr}\, DAG\,\mathrm{tr}\, G+\mathrm{tr}\, AG\, \mathrm{tr}\, DG\,,
\end{align*}
where in the last step we used the first identity of (\ref{17020508}). Hence, by the chain rule, the fourth term of (\ref{17021250}) and the third term of (\ref{17021252}) are estimated with the aid of (\ref{17021303}) and (\ref{17022530}). The last term of (\ref{17021250}) and the fourth term of (\ref{17021252}) can be estimated analogously.
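For completeness, the computation behind the identity above reads
\begin{align*}
\mathrm{tr}\, D\widetilde{B}G\,\mathrm{tr}\, G-\mathrm{tr}\,\widetilde{B}G \,\mathrm{tr}\, DG&=\big(\mathrm{tr}\, D-\mathrm{tr}\, DAG+z\,\mathrm{tr}\, DG\big)\mathrm{tr}\, G-\big(1-\mathrm{tr}\, AG+z\,\mathrm{tr}\, G\big)\mathrm{tr}\, DG\\
&=\mathrm{tr}\, D \,\mathrm{tr}\, G-\mathrm{tr}\, DG- \mathrm{tr}\, DAG\,\mathrm{tr}\, G+\mathrm{tr}\, AG\, \mathrm{tr}\, DG\,,
\end{align*}
since $\widetilde{B}G=I-(A-z)G$; the terms proportional to $z$ cancel.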
Finally, the estimates of the second term of (\ref{17021240}) and the last term of (\ref{17021252}) are given by (\ref{17021311}). Thus we conclude the proof of Lemma~\ref{lem.17021231}.
\end{proof}
In the sequel, we prove Lemma \ref{lem.17021301}.
\begin{proof}[Proof of Lemma \ref{lem.17021301}]
Note that (\ref{17021302}) and (\ref{17021303}) follow from the first and the last estimates in (\ref{17021202}), respectively, by averaging over the index $i$. Hence, it suffices to prove (\ref{17021311}). Recall the definition of $\varepsilon_1$ from (\ref{17021320}) and of $\varepsilon_{2}$ from (\ref{17021310}).
We first consider $\mathbb{E}[\varepsilon_1\mathfrak{m}^{(p-1,p)}]$. Recall the definition of $\varepsilon_{i1}$ from (\ref{17071805}). Using (\ref{17020302}), (\ref{17020303}), the first bound in (\ref{17020505}), and (\ref{17021540}),
we have
\begin{align}
\varepsilon_{i1}= \frac{\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle} \mathbf{h}_i}{a_i-\omega_B^c}+O_\prec\big(\frac{\Psi}{\sqrt{N}}\big)= \frac{\mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i}{a_i-\omega_B^c}+O_\prec(\hat{\Pi}^2)\,. \label{17021550}
\end{align}
Here the last step follows from the assumption $\frac{1}{N\sqrt{\eta}} \prec \hat{\Pi}^2$, and that $\mathbf{h}_i=\mathring{\mathbf{h}}_i+\frac{g_{ii}}{\|\mathbf{g}_i\|}\mathbf{e}_i$ with
\begin{align*}
|g_{ii}|\prec \frac{1}{\sqrt{N}}\,, \qquad\qquad \mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathbf{e}_i=b_i\mathring{\mathbf{h}}_i^* \mathbf{e}_i=0\,.
\end{align*}
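Indeed, since $\widetilde{B}^{\langle i\rangle}$ is Hermitian, the two identities above show that the cross terms in the quadratic form vanish:
\begin{align*}
\mathbf{h}_i^* \widetilde{B}^{\langle i\rangle} \mathbf{h}_i=\mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i+\frac{|g_{ii}|^2}{\|\mathbf{g}_i\|^2}\, b_i=\mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i+O_\prec\Big(\frac{1}{N}\Big)\,.
\end{align*}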
Hence, by the definition of $\varepsilon_1$ in~(\ref{17021320}), we have
\begin{align*}
\varepsilon_1=\frac{1}{N}\sum_{i=1}^N \mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i \frac{d_i \mathrm{tr}\, G-\mathrm{tr}\, DG}{a_i-\omega_B^c}+O_\prec(\hat{\Pi}^2)= \frac{1}{N}\sum_{i=1}^N \mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i \tau_{i3}+O_\prec(\hat{\Pi}^2)\,,
\end{align*}
where we introduced the notation
\begin{align*}
\tau_{i3}\mathrel{\mathop:}= \frac{d_i \mathrm{tr}\, G-\mathrm{tr}\, DG}{a_i-\omega_B^c}\,.
\end{align*}
Using the integration by parts formula~\eqref{17021237}, we obtain
\begin{align}
\frac{1}{N}\sum_{i=1}^N \mathbb{E}\big[ \mathring{\mathbf{h}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{h}}_i \tau_{i3}\mathfrak{m}^{(p-1,p)}\big]&= \frac{1}{N}\sum_{i=1}^N \sum_{k}^{(i)}\mathbb{E}\big[ \frac{1}{\|\mathbf{g}_i\| ^2} \bar{g}_{ik} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3}\mathfrak{m}^{(p-1,p)}\big]\nonumber\\
&=\frac{1}{N^2}\sum_{i=1}^N \sum_{k}^{(i)}\mathbb{E}\Big[ \frac{\partial \big(\|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3}\mathfrak{m}^{(p-1,p)}\big)}{\partial g_{ik}}\Big]. \label{17021531}
\end{align}
Note that
\begin{align}
&\frac{\partial \big(\|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3}\mathfrak{m}^{(p-1,p)}\big)}{\partial g_{ik}}=\frac{ \partial \|\mathbf{g}_i\| ^{-2} }{ \partial g_{ik}} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3}\mathfrak{m}^{(p-1,p)}+ \|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathbf{e}_k \tau_{i3}\mathfrak{m}^{(p-1,p)}\nonumber\\
&\quad +\|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \frac{\partial \tau_{i3}}{\partial g_{ik}}\mathfrak{m}^{(p-1,p)}+ (p-1) \|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (d_jQ_j)}{\partial g_{ik}}\Big)\mathfrak{m}^{(p-2,p)}\nonumber\\
&\quad + p \|\mathbf{g}_i\| ^{-2} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i \tau_{i3} \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (\overline{d_j}\overline{Q_j})}{\partial g_{ik}}\Big)\mathfrak{m}^{(p-1,p-1)}\,. \label{17021530}
\end{align}
Notice that $\frac{\partial \|\mathbf{g}_i\| ^{-2}}{\partial g_{ik}}=-\|\mathbf{g}_i\| ^{-4}\bar{g}_{ik}$ and that $\tau_{i3}=O_\prec (1)$. In addition, we also have that
\begin{align*}
\sum_k^{(i)}\bar{g}_{ik}\mathbf{e}_k^*=\mathring{\mathbf{g}}_i^*\,, \qquad\qquad \sum_{k}^{(i)} \mathbf{e}_k^* \widetilde{B}^{\langle i\rangle}\mathbf{e}_k=\text{Tr} B-b_i=b_i\,.
\end{align*}
Denoting by $\tilde{d}_1,\ldots, \tilde{d}_N\in \mathbb{C}$ generic (possibly random) numbers with $\max_i |\tilde{d}_i|\prec 1$, we see that the contributions of the first two terms on the right side of (\ref{17021530}) to (\ref{17021531}) are controlled by the~estimates
\begin{align*}
\frac{1}{N^2}\sum_{i=1}^N \tilde{d}_i\mathring{\mathbf{g}}_i^* \widetilde{B}^{\langle i\rangle} \mathring{\mathbf{g}}_i =O_\prec(\frac{1}{N})\,, \qquad\qquad
\frac{1}{N^2}\sum_{i=1}^N \tilde{d}_i b_i =O_\prec(\frac{1}{N}) \,.
\end{align*}
Here $\tilde{d}_i$ includes $\tau_{i3}$ and an appropriate power of $\|\mathbf{g}_i\|$.
In addition, for the estimate of the remaining terms in (\ref{17021530}),
we claim that, for $X_i=I, \widetilde{B}^{\langle i\rangle}$,
\begin{align}
&\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i \frac{\partial \tau_{i3}}{\partial g_{ik}}=O_\prec(\Psi^2\Pi^2)\,,\\
&\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (d_j Q_j)}{\partial g_{ik}}\Big)=O_\prec(\Psi^2\Pi^2)\,,\label{17022401}\\
&\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i \Big(\frac{1}{N}\sum_{j=1}^N \frac{\partial (\overline{d_j} \overline{Q_j})}{\partial g_{ik}}\Big)=O_\prec(\Psi^2\Pi^2)\,. \label{17022402}
\end{align}
The above three bounds follow from the last estimate in (\ref{17021202}) and the chain rule. Hence, we conclude the proof of (\ref{17021311}) with $j=1$.
The proof of (\ref{17021311}) for $j=2$ is similar to that for $j=1$. Recall the definition of $\varepsilon_{i2}$ from (\ref{17021004}). Using (\ref{17020302}), (\ref{17020303}), the first bound in (\ref{17020505}), and also the bounds in (\ref{17021540}), we have
\begin{align*}
\varepsilon_{i2}= &\big( \|\mathbf{g}_i\| ^2-1\big)G_{ii}\mathrm{tr}\, \widetilde{B}G+O_\prec\Big(\frac{\Psi}{\sqrt{N}}\Big)=\big( \mathring{\mathbf{g}}_i^*\mathring{\mathbf{g}}_i-1\big)\frac{\mathrm{tr}\, \widetilde{B}G}{a_i-\omega_B^c}+O_\prec(\hat{\Pi}^2)\,,
\end{align*}
which possesses a structure very similar to that of (\ref{17021550}). The remaining proof is nearly the same as that for $\varepsilon_{1}$; it suffices to replace $\mathring{\mathbf{g}}_i^*\widetilde{B}^{\langle i\rangle}\mathring{\mathbf{g}}_i$ by $\mathring{\mathbf{g}}_i^*\mathring{\mathbf{g}}_i$ throughout the proof. We thus omit the details. Hence, we conclude the proof of Lemma \ref{lem.17021301}.
\end{proof}
\section{Optimal fluctuation averaging} \label{s.optimal FL}
In this section, we establish the optimal fluctuation averaging estimate for a very special linear combination of the $Q_i$'s and of their analogues, the $\mathcal{Q}_i$'s (\emph{c.f., } (\ref{17071820})), under the assumption (\ref{17020501}).
Recall the definition of the approximate subordination functions $\omega_A^c$ and $\omega_B^c$ in~\eqref{17072550}. We denote
\begin{align}
\Lambda_A\mathrel{\mathop:}=\omega_A^c-\omega_A\,,\qquad \Lambda_B\mathrel{\mathop:}=\omega_B^c-\omega_B\,, \qquad \Lambda\mathrel{\mathop:}=|\Lambda_A|+|\Lambda_B|\,.\label{le gros lambda}
\end{align}
Recall $\mathcal{S}_{AB}$, $\mathcal{T}_A$ and $\mathcal{T}_B$ defined in (\ref{17080110}). For brevity, in the sequel, we use the shorthand notation
\begin{align*}
\mathcal{S}\equiv\mathcal{S}_{AB}.
\end{align*}
\begin{pro} \label{pro.17021715}Fix a $z=E+\mathrm{i}\eta\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro.17020310} hold and that $\Lambda(z)\prec \hat{\Lambda}(z)$ for some deterministic and positive function $\hat{\Lambda}(z)\prec N^{-\frac{\gamma}{4}}$. Then
\begin{align}
&\Big|\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2+O(\Lambda_\iota^3)\Big|\prec \frac{\sqrt{(\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda})(|\mathcal{S}|+\hat{\Lambda})}}{N\eta}+\frac{1}{(N\eta)^2},\qquad \iota=A, B\,. \label{17030301}
\end{align}
\end{pro}
Before commencing the proof of Proposition~\ref{pro.17021715}, we first claim that the control parameter
$\hat{\Pi}$ in Proposition~\ref{lem. rough fluctuation averaging} can be chosen as the square root of the right side of~\eqref{17030301} as long as $\Lambda\prec \hat{\Lambda}$, \emph{i.e., }
\begin{align}
\hat{\Pi}\mathrel{\mathop:}= \Bigg( \frac{\sqrt{(\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda})(|\mathcal{S}|+\hat{\Lambda})}}{N\eta}+\frac{1}{(N\eta)^2}\Bigg)^{\frac12} \,.\label{17072960}
\end{align}
Indeed, observe that when $\Lambda\prec \hat{\Lambda}\prec N^{-\frac{\gamma}{4}}$, we obtain from the second line of (\ref{170730100}) that
\begin{align}
|m_H-m_{\mu_A\boxplus\mu_B}|=|m_{H}m_{\mu_A\boxplus\mu_B}|\Big|\frac{1}{m_H(z)}-\frac{1}{m_{\mu_A\boxplus\mu_B}(z)}\Big|\prec |m_{H}m_{\mu_A\boxplus\mu_B}|\,\Lambda\,. \label{17073101}
\end{align}
Further, from the first line of (\ref{170730100}) and (\ref{17020502}), we see that, for any $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$,
\begin{align}
|m_{H}m_{\mu_A\boxplus\mu_B}|\prec \big|(m_{\mu_A\boxplus\mu_B}+O_\prec(N^{-\frac{\gamma}{4}}))m_{\mu_A\boxplus\mu_B}\big|\prec 1\,. \label{17073102}
\end{align}
Hence, we conclude from (\ref{17073101}) and (\ref{17073102}) that
\begin{align}
|m_H-m_{\mu_A\boxplus\mu_B}|\prec \Lambda\prec \hat{\Lambda} \,.\label{17073111}
\end{align}
Therefore, recalling~\eqref{17012101}, we have
\begin{align*}
\Pi^2\prec \frac{\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda}}{N\eta}\prec \frac{\sqrt{(\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda})(|\mathcal{S}|+\hat{\Lambda})}}{N\eta}\prec \Psi^2,
\end{align*}
where in the last two steps we used that $\Im m_{\mu_A\boxplus\mu_B}\lesssim |\mathcal{S}|\prec 1$; see (\ref{17080120}) and (\ref{17080121}). In addition, from~(\ref{17080120}) and (\ref{17080121}), we also have $\Im m_{\mu_A\boxplus\mu_B} |\mathcal{S}|\gtrsim \eta $. Thus we also have
\begin{align*}
\frac{1}{N\sqrt{\eta}}\prec \frac{\sqrt{(\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda})(|\mathcal{S}|+\hat{\Lambda})}}{N\eta}\,.
\end{align*}
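Indeed,
\begin{align*}
\frac{\sqrt{(\Im m_{\mu_A\boxplus\mu_B}+\hat{\Lambda})(|\mathcal{S}|+\hat{\Lambda})}}{N\eta}\geq \frac{\sqrt{\Im m_{\mu_A\boxplus\mu_B}\, |\mathcal{S}|}}{N\eta}\gtrsim \frac{\sqrt{\eta}}{N\eta}=\frac{1}{N\sqrt{\eta}}\,.
\end{align*}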
From the definition of $\Pi$ in (\ref{17012101}), we note that, up to a $\frac{1}{N\eta}$ term, $\hat{\Pi}$ here is
equivalent to $\Pi$ inside the spectrum, but it is much
larger than $\Pi$ in the outside regime where $\mathcal{S}\gg \Im m_{\mu_A\boxplus\mu_B}$ (\emph{c.f., } (\ref{17080120}), (\ref{17080121})).
With the above notation, we can rewrite (\ref{17030301}) as
\begin{align}
&\Big|\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2+O(\Lambda_\iota^3)\Big|\prec \hat{\Pi}^2,\qquad\qquad \iota=A, B.
\end{align}
Recall the definition of $Q_i$ from (\ref{17021701}). We also introduce their analogues
\begin{align}
\mathcal{Q}_i\equiv \mathcal{Q}_i(z)\mathrel{\mathop:}=(\widetilde{A}\mathcal{G})_{ii}\mathrm{tr}\, \mathcal{G}-\mathcal{G}_{ii} \mathrm{tr}\, \widetilde{A}\mathcal{G}\,,\qquad\qquad i\in\llbracket 1,N\rrbracket\,, \label{17071820}
\end{align}
with $\widetilde A$ and $\mathcal{G}$ given in~\eqref{the tilda guys}. To prove Proposition \ref{pro.17021715}, we need an optimal fluctuation averaging for a very special combination of $Q_i$'s and $\mathcal{Q}_i$'s. To this end, we define the functions $\Phi_1,\Phi_2\,:\, ({\mathbb C}^+)^3\longrightarrow {\mathbb C}$,
\begin{align}
\Phi_1(\omega_1,\omega_2,z)\mathrel{\mathop:}= F_A(\omega_2)-\omega_1-\omega_2+z\,,\qquad\qquad\Phi_2(\omega_1,\omega_2,z)\mathrel{\mathop:}= F_B(\omega_1)-\omega_1-\omega_2+z\,. \label{17073115}
\end{align}
From (\ref{170730100}), we have $\Phi_1(\omega_A, \omega_B,z)=\Phi_2(\omega_A, \omega_B,z)=0$, with $\omega_A\equiv \omega_A(z)$ and $\omega_B\equiv\omega_B(z)$. For brevity, we use the shorthand notations
\begin{align}
\Phi_1^c\mathrel{\mathop:}=\Phi_1(\omega_A^c,\omega_B^c,z)\,,\qquad \qquad \Phi_2^c\mathrel{\mathop:}= \Phi_2(\omega_A^c,\omega_B^c,z)\,. \label{072960}
\end{align}
Further, we define the quantities
\begin{align}
\mathcal{Z}_1\mathrel{\mathop:}= \Phi_1^c+(F_A'(\omega_B)-1)\Phi_2^c\,,
\qquad \qquad
\mathcal{Z}_2\mathrel{\mathop:}=\Phi_2^c+(F_B'(\omega_A)-1)\Phi_1^c\,. \label{17021710}
\end{align}
We are going to show that $\mathcal{Z}_1$ and $\mathcal{Z}_2$ are actually certain linear combinations of the $Q_i$'s and the $\mathcal{Q}_i$'s.
We start with the identities
\begin{align}
\Phi_1^c= -\frac{F_A(\omega_B^c)}{(m_H(z))^2} \frac{1}{N} \sum_{i=1}^N \frac{1}{a_i-\omega_B^c} Q_i\,,\qquad\qquad \Phi_2^c
= -\frac{F_B(\omega_A^c)}{(m_H(z))^2} \frac{1}{N} \sum_{i=1}^N \frac{1}{b_i-\omega_A^c} \mathcal{Q}_i\,, \label{17021711}
\end{align}
which can be derived by combining (\ref{17072550}), (\ref{170725130}) and (\ref{170725131}).
For all $i\in \llbracket 1, N\rrbracket $, we set
\begin{align}
& \mathfrak{d}_{i,1}\mathrel{\mathop:}= -\frac{F_A(\omega_B^c)}{(m_H(z))^2}\frac{1}{a_i-\omega_B^c}\,,\qquad\qquad \mathfrak{d}_{i,2}\mathrel{\mathop:}= -(F_A'(\omega_B)-1)\frac{F_B(\omega_A^c)}{(m_H(z))^2}\frac{1}{b_i-\omega_A^c}\,. \label{17022001}
\end{align}
According to the definition in (\ref{17021710}), (\ref{17021711}), and also (\ref{17022001}), we can write
\begin{align}
\mathcal{Z}_1=\frac{1}{N} \sum_{i=1}^N \mathfrak{d}_{i,1} Q_i+ \frac{1}{N} \sum_{i=1}^N \mathfrak{d}_{i,2} \mathcal{Q}_i\,, \label{17012202}
\end{align}
and $\mathcal{Z}_2$ can be represented in a similar way.
Now, we choose $d_i=\mathfrak{d}_{i,1}$, $i\in \llbracket 1, N\rrbracket$, in Proposition \ref{lem. rough fluctuation averaging}. Observe that $\mathfrak{d}_{i,1}$ can be regarded as a smooth function of $\mathrm{tr}\, \widetilde{B}G=1-\mathrm{tr}\,(A-z)G$ and $m_H(z)=\mathrm{tr}\, G$, according to the definition in (\ref{17022001}) and that of $\omega_B^c$ in (\ref{17072550}). Then, using the chain rule, the estimates of the tracial quantities in (\ref{17020535}) and (\ref{17021202}), one can check that the first equation in assumption (\ref{17022530}) is satisfied for the choice $d_i=\mathfrak{d}_{i,1}$, $i\in \llbracket 1, N\rrbracket$. The second equation can be checked analogously. Hence, applying Proposition \ref{lem. rough fluctuation averaging}, we get
\begin{align}
|\Phi_1^c|\prec \Psi\hat{\Pi}\,,\qquad\qquad |\Phi_2^c|\prec \Psi\hat{\Pi}\,, \label{170724501}
\end{align}
where $\hat{\Pi}$ is chosen as in (\ref{17072960}).
The main technical task in this section is to establish the following estimates for $\mathcal{Z}_1$ and $\mathcal{Z}_2$, which strengthen the previous bounds of order $\Psi\hat{\Pi}$ from (\ref{170723113}).
\begin{pro} \label{pro. 17021720}
Fix $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro.17020310} hold and that $\Lambda(z)\prec \hat{\Lambda}(z)$ for some deterministic and positive function $\hat{\Lambda}(z)\leq N^{-\frac{\gamma}{4}}$. Choose $\hat{\Pi}(z)$ as in (\ref{17072960}). Then,
\begin{align}
|\mathcal{Z}_1|\prec \hat{\Pi}^2\,, \qquad\qquad |\mathcal{Z}_2|\prec \hat{\Pi}^2 \,. \label{17021740}
\end{align}
\end{pro}
We postpone the proof of Proposition \ref{pro. 17021720} and first prove Proposition \ref{pro.17021715} with the aid of Proposition~\ref{pro. 17021720}.
\begin{proof}[Proof of Proposition \ref{pro.17021715}] By assumption, we see that $|\Lambda_A|, |\Lambda_B|\prec N^{-\frac{\gamma}{4}}$. First of all, expanding $\Phi_1^c$ and $\Phi_2^c$ around $(\omega_A, \omega_B)$ and using the subordination equations $\Phi_1(\omega_A, \omega_B,z)=\Phi_2(\omega_A, \omega_B,z)=0$, we get
\begin{align}
&\Phi_1^c=-\Lambda_A+(F'_A(\omega_B)-1)\Lambda_B+\frac{1}{2}F''_A(\omega_B) \Lambda_B^2+O(\Lambda_B^3)\,,\nonumber\\
& \Phi_2^c=-\Lambda_B+(F'_B(\omega_A)-1)\Lambda_A+\frac{1}{2}F''_B(\omega_A) \Lambda_A^2+O(\Lambda_A^3)\,. \label{17021730}
\end{align}
We rewrite the second equation in (\ref{17021730}) as
\begin{align}
\Lambda_B=-\Phi_2^c+(F'_B(\omega_A)-1)\Lambda_A+\frac{1}{2}F''_B(\omega_A) \Lambda_A^2+O(\Lambda_A^3)\,. \label{17021731}
\end{align}
Substituting (\ref{17021731}) into the first equation in (\ref{17021730}) yields
\begin{align*}
\Phi_1^c &=-(F'_A(\omega_B)-1)\Phi_2^c+\mathcal{S}\Lambda_A+\mathcal{T}_A\Lambda_A^2+O((\Phi_2^c)^2)+O(\Phi_2^c\Lambda_A)+O(\Lambda_A^3)\,,
\end{align*}
where $\mathcal{T}_A$ is defined in (\ref{17080110}).
In light of the definition in (\ref{17021710}), we have
\begin{align}
\mathcal{Z}_1=\mathcal{S}\Lambda_A+\mathcal{T}_A\Lambda_A^2+O((\Phi_2^c)^2)+O(\Phi_2^c\Lambda_A)+ O(\Lambda_A^3)\,. \label{17021741}
\end{align}
Combining (\ref{170724501}) and (\ref{17021740}) with
(\ref{17021741}) leads to
\begin{align}
\big|\mathcal{S}\Lambda_A+\mathcal{T}_A\Lambda_A^2+O(\Lambda_A^3)\big|\prec \hat{\Pi}^2 +\Psi \hat{\Pi} \hat{\Lambda}\,. \label{1702175511}
\end{align}
The second term on the right hand side of (\ref{1702175511}) can be absorbed into the first term, in light of the fact that $\Psi\hat{\Lambda}\prec \hat{\Pi}$ (\emph{c.f., } (\ref{17072960})). Hence, we have
\begin{align}
\big|\mathcal{S}\Lambda_A+\mathcal{T}_A\Lambda_A^2+O(\Lambda_A^3)\big|\prec \hat{\Pi}^2\,. \label{17021755}
\end{align}
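In more detail, the absorption of the second term on the right hand side of (\ref{1702175511}) only uses the relation $\Psi\hat{\Lambda}\prec \hat{\Pi}$ from the choice (\ref{17072960}):
\begin{align*}
\Psi\hat{\Pi}\hat{\Lambda}=(\Psi\hat{\Lambda})\hat{\Pi}\prec \hat{\Pi}\cdot \hat{\Pi}=\hat{\Pi}^2\,.
\end{align*}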
Analogously, we also have
\begin{align}
\big|\mathcal{S}\Lambda_B+\mathcal{T}_B\Lambda_B^2+O(\Lambda_B^3)\big|\prec \hat{\Pi}^2\,. \label{17021756}
\end{align}
This completes the proof of Proposition \ref{pro.17021715}.
\end{proof}
It remains to prove Proposition \ref{pro. 17021720}.
We state the proof for $\mathcal{Z}_1$; $\mathcal{Z}_2$ is handled similarly.
We set
\begin{align*}
\mathfrak{l}^{(k,l)}\mathrel{\mathop:}= \mathcal{Z}_1^k\overline{\mathcal{Z}_1^l}\,,\qquad\qquad k,l\in {\mathbb N}\,.
\end{align*}
We can now prove a stronger estimate on $\mathbb{E}[\mathfrak{l}^{(p,p)}]$ than the one obtained from Lemma \ref{lem.17021231}, by improving the error terms from $O_\prec(\Psi\hat{\Pi})$ to $O_\prec(\hat{\Pi}^2)$.
\begin{lem} \label{lem.17022410}
Fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro. 17021720} hold. For any fixed integer $p\geq 1$, we have
\begin{align*}
\mathbb{E}\big[ \mathfrak{l}^{(p,p)}\big]=\mathbb{E}\big[O_\prec(\hat{\Pi}^2)\mathfrak{l}^{(p-1,p)}\big]+\mathbb{E}\big[O_\prec(\hat{\Pi}^4) \mathfrak{l}^{(p-2,p)}\big]+\mathbb{E}\big[O_\prec(\hat{\Pi}^4) \mathfrak{l}^{(p-1,p-1)}\big].
\end{align*}
\end{lem}
Now, with Lemma \ref{lem.17022410}, we can prove Proposition \ref{pro. 17021720}.
\begin{proof}[Proof of Proposition \ref{pro. 17021720}] Similarly to the proof of (\ref{17020301}) from Lemma \ref{lem.17021230}, with Lemma \ref{lem.17022410}, we can get (\ref{17021740}) by applying Young's and Markov's inequalities. This completes the proof of Proposition \ref{pro. 17021720}.
\end{proof}
In the sequel, we prove Lemma \ref{lem.17022410}.
\begin{proof} [Proof of Lemma \ref{lem.17022410}] Recall the definition of $\mathcal{Z}_1$ in (\ref{17012202}). We can write
\begin{align*}
\mathbb{E}\big[ \mathfrak{l}^{(p,p)}\big]= \frac{1}{N} \sum_{i=1}^N \mathbb{E}\big[ \mathfrak{d}_{i,1} Q_i \mathfrak{l}^{(p-1,p)}\big]+ \frac{1}{N} \sum_{i=1}^N \mathbb{E}\big[\mathfrak{d}_{i,2} \mathcal{Q}_i \mathfrak{l}^{(p-1,p)}\big].
\end{align*}
We only state the estimate for the first term on the right hand side above. The second term can be estimated in a similar way. By (\ref{17021232}), we can write
\begin{align*}
\frac{1}{N} \sum_{i=1}^N \mathfrak{d}_{i,1} Q_i= \frac{1}{N}\sum_{i=1}^N (\widetilde{B}G)_{ii} \mathrm{tr}\, G\tau_{i1},
\end{align*}
where we chose $d_i=\mathfrak{d}_{i,1}, i\in\llbracket 1,N\rrbracket$, in the definition of $\tau_{i1}$ in (\ref{17021305}).
Then, analogously to (\ref{17021240}), we can also write
\begin{align}
&\frac{1}{N} \sum_{i=1}^N \mathbb{E}\big[ \mathfrak{d}_{i,1} Q_i \mathfrak{l}^{(p-1,p)}\big] =\frac{1}{N}\sum_{i=1}^N \mathbb{E}\Big[ (\widetilde{B}G)_{ii} \mathrm{tr}\, G \tau_{i1} \mathfrak{l}^{(p-1,p)}\Big]
\end{align}
with $d_i=\mathfrak{d}_{i,1}, i\in\llbracket 1,N\rrbracket$. Analogously to (\ref{17071833}), we can show
\begin{align*}
&\frac{1}{N} \sum_{i=1}^N \mathbb{E}\big[ \mathfrak{d}_{i,1} Q_i \mathfrak{l}^{(p-1,p)}\big]=\mathbb{E}\big[O_\prec(\hat{\Pi}^2) \mathfrak{l}^{(p-1,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{l}^{(p-2,p)}\big]+\mathbb{E}\big[O_\prec(\Psi^2\hat{\Pi}^2) \mathfrak{l}^{(p-1,p-1)}\big],
\end{align*}
where the last two terms come from the estimates of the analogues of the last two terms of (\ref{17021250}), the third and fourth terms on the right-hand side of (\ref{17021252}), and also the terms in (\ref{17022401}) and (\ref{17022402}), but with $\frac{1}{N}\sum_{j=1}^N d_jQ_j$ replaced by $\mathcal{Z}_1$. It suffices to improve the estimates of these terms. All these terms contain a derivative $\frac{\partial \mathcal{Z}_1}{\partial g_{ik}}$ or $\frac{\partial \overline{\mathcal{Z}}_1}{\partial g_{ik}}$, which is smaller than the derivative $\partial (\frac{1}{N}\sum_i d_i Q_i)/\partial g_{ik}$ or $\partial (\frac{1}{N}\sum_i d_i \mathcal{Q}_i)/\partial g_{ik}$ of an arbitrary linear combination, due to the special choice of the $\mathfrak{d}_{i,1}$'s and $\mathfrak{d}_{i,2}$'s.
Specifically, we shall show the following lemma, which contains the estimates of all necessary terms.
\begin{lem} \label{lem. partial Z} Fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro.17020310} hold. Let $\tilde{d}_1, \ldots, \tilde{d}_N \in \mathbb{C}$ be (possibly random) numbers with $\max_i|\tilde{d}_i|\prec 1$. Let $X_i=I$ or $\widetilde{B}^{\langle i\rangle}$. Then we~have
\begin{align}
&\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i G\mathbf{e}_i \frac{\partial \mathcal{Z}_1}{\partial g_{ik}}=O_\prec(\hat{\Pi}^4)\,,\qquad & & \frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i G\mathbf{e}_i \frac{\partial \overline{\mathcal{Z}_1}}{\partial g_{ik}}=O_\prec(\hat{\Pi}^4)\,,\nonumber\\
&\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i \frac{\partial \mathcal{Z}_1}{\partial g_{ik}}=O_\prec(\hat{\Pi}^4)\,, \qquad & & \frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i \mathring{\mathbf{g}}_i \frac{\partial \overline{\mathcal{Z}_1}}{\partial g_{ik}}=O_\prec(\hat{\Pi}^4)\,.\label{17022420}
\end{align}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lem. partial Z}]
We give the proof for the first estimate in (\ref{17022420}). The third one is analogous, and the other two are just their complex conjugates. From the definitions in (\ref{072960}) and (\ref{17021710}), we~get
\begin{align*}
\frac{\partial \mathcal{Z}_1}{\partial g_{ik}}&=\frac{\partial \Phi_1^c}{\partial g_{ik}}+(F_A'(\omega_B)-1)\frac{\partial \Phi_2^c}{\partial g_{ik}}\nonumber\\
&=\Big(\big(F_A'(\omega_B)-1\big)\big(F_B'(\omega_A^c)-1\big)-1\Big)\frac{\partial \omega_A^c}{\partial g_{ik}}+\big(F_A'(\omega_B^c)-F_A'(\omega_B)\big)\frac{\partial \omega_B^c}{\partial g_{ik}}.
\end{align*}
Note that by the regularity of $F_A$ and $F_B$, we have
\begin{align*}
\big(F_A'(\omega_B)-1\big)\big(F_B'(\omega_A^c)-1\big)-1=\mathcal{S}+O(|\Lambda_A|)\,,\qquad F_A'(\omega_B^c)-F_A'(\omega_B)=O(|\Lambda_B|)\,.
\end{align*}
The smallness of these coefficients carries the gain.
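To see the first relation, recall from the derivation of (\ref{17021741}) that $\mathcal{S}=(F_A'(\omega_B)-1)(F_B'(\omega_A)-1)-1$ (\emph{c.f., } (\ref{17080110})). A first order Taylor expansion of $F_B'$ around $\omega_A$ then gives
\begin{align*}
\big(F_A'(\omega_B)-1\big)\big(F_B'(\omega_A^c)-1\big)-1=\mathcal{S}+\big(F_A'(\omega_B)-1\big)\big(F_B'(\omega_A^c)-F_B'(\omega_A)\big)=\mathcal{S}+O(|\Lambda_A|)\,,
\end{align*}
where we used the boundedness of $F_A'(\omega_B)$ and of $F_B''$ in a neighborhood of $\omega_A$. The second relation is obtained analogously by expanding $F_A'$ around $\omega_B$.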
According to the definition of $\hat{\Pi}$ in (\ref{17072960}), we see that
\begin{align*}
(|\mathcal{S}|+\Lambda)\Psi^2\Pi^2 \leq \hat{\Pi}^4
\end{align*} if $\Lambda\leq \hat{\Lambda}$. Hence, for the first estimate in (\ref{17022420}),
it suffices to show that
\begin{align}
\frac{1}{N^2}\sum_{i=1}^N \sum_k^{(i)} \tilde{d}_i \mathbf{e}_k^* X_i G\mathbf{e}_i \frac{\partial \omega_\iota^c}{\partial g_{ik}}=O_\prec(\Psi^2\Pi^2)\,, \qquad \iota=A,B\,. \label{17073001}
\end{align}
This follows from (\ref{17021303}), the fact that $\omega_A^c$ and $\omega_B^c$ are tracial quantities, and the chain rule. The other terms in (\ref{17022420}) can be estimated similarly.
This concludes the proof of Lemma \ref{lem. partial Z}.
\end{proof}
With the aid of Lemma \ref{lem. partial Z}, we can conclude the proof of Lemma \ref{lem.17022410}.
\end{proof}
\section{Strong local law}
In this section, we use a continuity argument to prove the strong local law, \emph{i.e., } Theorem \ref{thm. strong law at the edge},
based on Propositions \ref{pro.17020310}, \ref{lem. rough fluctuation averaging}, and \ref{pro.17021715}. We start with the following lemma. Recall $\mathcal{S}\equiv \mathcal{S}_{AB}$ from~\eqref{17080110} and $\Lambda=|\Lambda_A|+|\Lambda_B|$ from~\eqref{le gros lambda}. Further recall that $\eta_\mathrm{m}=N^{-1+\gamma}$, with $\gamma>0$ as in Theorem~\ref{thm. strong law at the edge}.
\begin{lem} \label{lem.17030310}
Fix $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Suppose that the assumptions of Proposition \ref{pro.17020310} hold. Let $\varepsilon\in (0, \frac{\gamma}{12})$. Suppose that $\Lambda\prec \hat{\Lambda}$ for some deterministic control parameter $ \hat{\Lambda}\leq N^{-\frac{\gamma}{4}}$. If $ \hat{\Lambda} \geq \frac{N^{3\varepsilon}}{N\eta}$, then we have:
\noindent $(i)$: If $
\sqrt{\kappa+\eta}> N^{-\varepsilon} \hat{\Lambda}
$, there is a sufficiently large constant $K_0>0$, such that
\begin{align}
\mathbf{1}\Big(\Lambda\leq \frac{|\mathcal{S}|}{K_0}\Big) |\Lambda_A| \prec N^{-2\varepsilon} \hat{\Lambda}\,,\qquad\qquad\mathbf{1}\Big(\Lambda\leq \frac{|\mathcal{S}|}{K_0}\Big) |\Lambda_B| \prec N^{-2\varepsilon} \hat{\Lambda}\,;\label{17080301}
\end{align}
$(ii)$: If $
\sqrt{\kappa+\eta}\leq N^{-\varepsilon} \hat{\Lambda}
$, we have
\begin{align*}
|\Lambda_A|\prec N^{-\varepsilon} \hat{\Lambda}\,, \qquad\qquad|\Lambda_B|\prec N^{-\varepsilon} \hat{\Lambda}\,.
\end{align*}
\end{lem}
\begin{proof} From (\ref{17080120}) and (\ref{17080121}), we see that $|\mathcal{S}|\gtrsim \Im m_{\mu_A\boxplus\mu_B}$ for all $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Thus~(\ref{17030301}) gives
\begin{align}
&\Big|\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2+O(\Lambda_\iota^3)\Big|\prec \frac{|\mathcal{S}|+ \hat{\Lambda}}{N\eta}+\frac{1}{(N\eta)^2},\qquad \iota=A, B, \label{1703030111}
\end{align}
with $\mathcal{S}$, $\mathcal{T}_A$ and $\mathcal{T}_B$ given in~\eqref{17080110}. Then, from $|\Lambda_\iota|\prec\hat{\Lambda}\leq N^{-\frac{\gamma}{4}}$, we have
\begin{align}
\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2=O_\prec \Big( \frac{|\mathcal{S}|+ \hat{\Lambda}}{N\eta}+\frac{1}{(N\eta)^2}+N^{-\frac{\gamma}{4}} \hat{\Lambda}^2\Big),\qquad \iota=A, B. \label{17030302}
\end{align}
If $
\sqrt{\kappa+\eta}> N^{-\varepsilon} \hat{\Lambda}
$, we have for $\iota=A, B$,
\begin{align}
\mathbf{1}\Big(\Lambda\leq \frac{|\mathcal{S}|}{K_0}\Big) |\Lambda_\iota|\prec |\mathcal{S}|^{-1} \Big( \frac{|\mathcal{S}|+ \hat{\Lambda}}{N\eta}+\frac{1}{(N\eta)^2}+N^{-\frac{\gamma}{4}} \hat{\Lambda}^2\Big)\leq C \frac{N^\varepsilon}{N\eta}+ N^{\varepsilon-\frac{\gamma}{4}} \hat{\Lambda}\leq C N^{-2\varepsilon} \hat{\Lambda}\,. \label{17030303}
\end{align}
Here we absorbed the quadratic term on the left hand side in (\ref{17030302}) into the linear term. Hence, we proved $(i)$.
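We spell out this absorption; here we assume, as suggested by the regularity of $F_A$ and $F_B$, that $|\mathcal{T}_\iota|\leq C_0$ uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ for some constant $C_0>0$. On the event $\{\Lambda\leq \frac{|\mathcal{S}|}{K_0}\}$ we then have
\begin{align*}
|\mathcal{T}_\iota\Lambda_\iota^2|\leq C_0\, \Lambda\,|\Lambda_\iota|\leq \frac{C_0}{K_0}\,|\mathcal{S}|\,|\Lambda_\iota|\leq \frac{1}{2}\,|\mathcal{S}|\,|\Lambda_\iota|\,,
\end{align*}
provided $K_0\geq 2C_0$, so that $|\mathcal{S}|\,|\Lambda_\iota|\leq 2\,|\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2|$, and the first bound in (\ref{17030303}) indeed follows from (\ref{17030302}) upon division by $|\mathcal{S}|$. The $O(\Lambda_\iota^3)$ term is handled in the same way.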
From (\ref{17030303}), we also see that if $ \sqrt{\kappa+\eta}> N^{-\varepsilon} \hat{\Lambda} $, then
\begin{align}
\mathbf{1}\Big(\Lambda\leq \frac{|\mathcal{S}|}{K_0}\Big) |\Lambda_\iota| \prec N^{-\varepsilon} |\mathcal{S}|,\qquad \qquad \iota=A, B. \label{17071901}
\end{align}
Next, we prove $(ii)$. If $ \sqrt{\kappa+\eta}\leq N^{-\varepsilon} \hat{\Lambda} $, from (\ref{17080121}) and (\ref{17080122}) we see that $\mathcal{T}_\iota\sim 1$. Hence, solving the quadratic relation (\ref{17030302}) for $\Lambda_\iota$ directly, we get
\begin{align*}
|\Lambda_\iota|\prec |\mathcal{S}|+ \Big(\frac{|\mathcal{S}|+ \hat{\Lambda}}{N\eta}+\frac{1}{(N\eta)^2}+N^{-\frac{\gamma}{4}} \hat{\Lambda}^2\Big)^{\frac{1}{2}}\leq CN^{-\varepsilon} \hat{\Lambda}, \qquad \iota=A, B \,,
\end{align*}
under the assumption that $ \hat{\Lambda}\geq \frac{N^{3\varepsilon}}{N\eta}$. This concludes the proof of Lemma \ref{lem.17030310}.
\end{proof}
Recall the definitions of $\mathcal{S}$ in (\ref{17080110}) and of $\Lambda_{\rm d}$, $\widetilde{\Lambda}_{\rm d}$, ${\Lambda}_T$, $ \widetilde{\Lambda}_T$ in~\eqref{17072571}. For any $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ and any $\delta\in [0,1]$, we define the event
\begin{align}
\Theta(z, \delta)\mathrel{\mathop:}= \Big\{ \Lambda_{\rm d}(z)\leq \delta, \; \widetilde{\Lambda}_{\rm d}(z) \leq \delta,\;
\Lambda(z)\leq \delta^2, \; \Lambda_T(z)\leq 1,\; \widetilde{\Lambda}_T(z)\leq 1\Big\}. \label{17072310}
\end{align}
We further decompose the domain $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ into the following two disjoint parts:
\begin{align}
&\mathcal{D}_{>}\mathrel{\mathop:}= \Big\{z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M}): \sqrt{\kappa+\eta}> \frac{N^{2\varepsilon}}{N\eta} \Big\},\quad\mathcal{D}_{\leq}\mathrel{\mathop:}=\Big\{z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M}): \sqrt{\kappa+\eta}\leq \frac{N^{2\varepsilon}}{N\eta} \Big\}\,. \label{17072803}
\end{align}
For $z\in \mathcal{D}_{>}$, any $\delta\in [0,1]$ and any $\varepsilon'\in [0,1]$, we define the event $\Theta_>(z, \delta, \varepsilon')\subset \Theta(z, \delta)$ as
\begin{align*}
\Theta_>(z, \delta, \varepsilon')\mathrel{\mathop:}= \Big\{ \Lambda_{\rm d}(z)\leq \delta, \; \widetilde{\Lambda}_{\rm d}(z) \leq \delta,\;
\Lambda(z)\leq \min\{\delta^2, N^{-\varepsilon'} |\mathcal{S}| \}, \; \Lambda_T(z)\leq 1,\; \widetilde{\Lambda}_T(z)\leq 1\Big\}\,.
\end{align*}
\begin{lem} \label{lem.17030512} Suppose that the assumptions in Theorem \ref{thm. strong law at the edge} hold. For any fixed $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$, any $\varepsilon\in (0, \frac{\gamma}{12})$ and any $D>0$, there exists a positive integer $N_1(D, \varepsilon)$ and an event $\Omega(z)\equiv \Omega(z, D,\varepsilon)$ with
\begin{align}
\mathbb{P}(\Omega(z))\geq 1- N^{-D},\qquad \forall N\geq N_1(D, \varepsilon) \label{17072304}
\end{align}
such that the following hold:
(i) If $z\in \mathcal{D}_{>}$, we have
\begin{align}
\Theta_{>} \Big(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}},\frac{\varepsilon}{10}\Big) \cap \Omega(z) \subset \Theta_{>} \Big(z, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{2}\Big). \label{17072135}
\end{align}
(ii) If $z\in \mathcal{D}_{\leq}$, we have
\begin{align}
\Theta \Big(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}}\Big) \cap \Omega(z) \subset \Theta \Big(z, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\,. \label{17072136}
\end{align}
\end{lem}
\begin{proof} In this proof, we fix a $z\in \mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. From Proposition \ref{pro.17020310}, we see that under the assumption
\begin{align}
\Lambda_{{\rm d}} (z)\prec N^{-\frac{\gamma}{4}}, \qquad \widetilde{\Lambda}_{{\rm d}}(z)\prec N^{-\frac{\gamma}{4}}, \qquad \Lambda_{T}(z)\prec 1, \qquad \widetilde{\Lambda}_T(z)\prec 1, \label{17030501}
\end{align}
we have, using~\eqref{17020303}, that
\begin{align}
\Lambda_{{\rm d}}^c (z)\prec {\frac{1}{\sqrt{N\eta}}}\,, \qquad \widetilde{\Lambda}_{{\rm d}}^c(z)\prec {\frac{1}{\sqrt{N\eta}}}\,, \qquad \Lambda_{T}(z)\prec {\frac{1}{\sqrt{N\eta}}}\,, \qquad \widetilde{\Lambda}_T(z)\prec {\frac{1}{\sqrt{N\eta}}}\,. \label{17030502}
\end{align}
The following more quantitative statement for (\ref{17030502}) can be derived if one carries out the proof of Proposition~\ref{pro.17020310} in a quantitative way: if the event $\Theta(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}})$ holds, then
\begin{align}
\Lambda_{{\rm d}}^c (z)\leq {\frac{N^\frac\varepsilon2}{\sqrt{N\eta}}}, \qquad \widetilde{\Lambda}_{{\rm d}}^c(z)\leq {\frac{N^\frac\varepsilon2}{\sqrt{N\eta}}}, \qquad \Lambda_{T}(z)\leq {\frac{N^\frac\varepsilon2}{\sqrt{N\eta}}}, \qquad \widetilde{\Lambda}_T(z)\leq {\frac{N^\frac\varepsilon2}{\sqrt{N\eta}}}, \label{17030511}
\end{align}
hold on $\Theta(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}})\cap \Omega(z)$. Here $\Omega(z)$ is the typical ``event'' on which all the concentration estimates in the proof of Proposition \ref{pro.17020310} hold. Since these concentration estimates are done with respect to the entries or quadratic forms of the Gaussian vectors $\mathbf{g}_i$, the probability of $\Omega(z)$ is independent of $z$. Hence, we have a positive integer $N_1(D, \varepsilon)$, uniform in $z$, such that (\ref{17072304}) holds. Moreover, on $\Omega(z)$, we can write Lemma \ref{lem.17030310} in a quantitative way. For instance, (\ref{17080301}) can be written as $\mathbf{1}\big(\Lambda\leq \frac{|\mathcal{S}|}{K_0}\big) |\Lambda_\iota| \leq N^{-\varepsilon} \hat{\Lambda}$~on~$\Omega(z)$.
Now, we choose $ \hat{\Lambda}=\frac{N^{3\varepsilon}}{N\eta}$ in Lemma \ref{lem.17030310}. From Lemma \ref{lem.17030310}~$(i)$ and (\ref{17071901}), we see that for $z\in \mathcal{D}_{>}$, the following bound holds on the event $\Theta_>(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{10})\cap \Omega(z)$,
\begin{align}
\Lambda\leq \min \Big\{\frac{N^{\frac{9}{4}\varepsilon}}{N\eta}, N^{-\frac{\varepsilon}{2}} |\mathcal{S}|\Big\}. \label{1703051011}
\end{align}
From Lemma \ref{lem.17030310}~$(ii)$, we see that for $z\in \mathcal{D}_{\leq}$, the following bound holds on the event $\Theta(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}})\cap \Omega(z)$,
\begin{align}
\Lambda\leq \frac{N^{\frac{9}{4}\varepsilon}}{N\eta}. \label{17030510}
\end{align}
Substituting (\ref{1703051011}) and (\ref{17030510}) into the first two estimates in (\ref{17030511}), we further get that
\begin{align*}
\Lambda_{{\rm d}} (z)\leq {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \qquad\qquad \widetilde{\Lambda}_{{\rm d}}(z)\leq {\frac{N^{\frac{5}{4}\varepsilon}}{ \sqrt{N\eta}}}
\end{align*}
hold on $\Theta_>(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{10})\cap \Omega(z)$ if $z\in \mathcal{D}_{>}$ and on $\Theta(z, {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}})\cap \Omega(z)$ if $z\in \mathcal{D}_{\leq}$.
This completes the~proof.
\end{proof}
With Lemma \ref{lem.17030512}, we can now prove (\ref{17072330}) and (\ref{17011304}) in Theorem \ref{thm. strong law at the edge}, using a continuity argument. The proof of (\ref{17072847}) will be stated in Section \ref{s.rigidity}.
\begin{proof}[Proof of (\ref{17072330}) and (\ref{17011304}) in Theorem \ref{thm. strong law at the edge}] With Lemma \ref{lem.17030512}, the remaining proof of Theorem \ref{thm. strong law at the edge} is quite similar to the proof of Theorem 7.1 of \cite{BES16b}. So we only sketch the arguments.
We start with an entry-wise Green function subordination estimate on global scale, \emph{i.e., } $\eta=\eta_\mathrm{M}$ for some sufficiently large constant $\eta_\mathrm{M}>0$. Recall $Q_i$ from (\ref{17021701}). We regard $Q_i$ as a function of the random unitary matrix $U$. Then, for $z=E+\mathrm{i}\widetilde{\eta}_M$ with any fixed $E$ and any $\widetilde{\eta}_M\geq \eta_\mathrm{M}$, we apply the Gromov-Milman concentration inequality (\emph{c.f., } (6.2) in \cite{BES16b}), and get
\begin{align}
|Q_i(E+\mathrm{i}\widetilde{\eta}_M)-\mathbb{E} Q_i(E+\mathrm{i} \widetilde{\eta}_M)|\prec\frac{1}{\sqrt{N \widetilde{\eta}_M^4}}\,; \label{170731101}
\end{align}
see Section 6.2 of \cite{BES16b} for similar estimates for the Green function entries of the block additive model.
Next, using the invariance of the Haar measure, one can check the equation
\begin{align}
\mathbb{E} (\widetilde{B}G\otimes G-G\otimes \widetilde{B}G)=0\,; \label{170731100}
\end{align}
see Proposition 3.2 of~\cite{VP}.
Taking the $(i,i)$-th entry for the first component and the normalized trace for the second component in the tensor product, we obtain from (\ref{170731100}) that
\begin{align}
\mathbb{E} Q_i= \mathbb{E} \big((\widetilde{B}G)_{ii}\mathrm{tr}\, G-G_{ii}\mathrm{tr}\, \widetilde{B}G\big)=0\,. \label{170731102}
\end{align}
We claim that, for sufficiently large $\eta_\mathrm{M}>1$, we have
\begin{align}
\sup_{z: \Im z\geq \eta_\mathrm{M}}|Q_i(z)|\prec\frac{1}{\sqrt{N}}\,,\qquad\qquad \forall i\in\llbracket 1,N\rrbracket\,, \label{170731120}
\end{align}
which follows from (\ref{170731101}), (\ref{170731102}), the Lipschitz continuity of $Q_i$ in the regime $|z|\leq \sqrt{N}$, and the deterministic bound $|Q_i(z)|\leq\frac{C}{\sqrt{N}}$ for $|z|\geq \sqrt{N}$.
In addition, using that $\|H\|\leq \|A\|+\|B\|< \mathcal{K}$ and the convention $\mathrm{tr}\, \widetilde{B}=\mathrm{tr}\, B=0$ (\emph{c.f., } (\ref{17072620})), we have, for $z=E+\mathrm{i}\widetilde{\eta}_M$ with fixed $E$ and any $\widetilde{\eta}_M\geq \eta_\mathrm{M}$, the expansions
\begin{align}
&\mathrm{tr}\, G(z)=-\frac{1}{z}+O(\frac{1}{|z|^2})=\frac{\mathrm{i}}{\widetilde{\eta}_M}+O\big(\frac{1}{\widetilde{\eta}_M^2}\big), \qquad \quad\mathrm{tr}\, \widetilde{B}G(z)=-\frac{\mathrm{tr}\, \widetilde{B}}{z}+O(\frac{1}{|z|^2})=O(\frac{1}{\widetilde{\eta}_M^2})\,, \label{170731125}
\end{align}
where we used $\mathrm{tr}\, B=0$ in the second equality. Hence, by the definition of $\omega_B^c$ in (\ref{17072550}), we see that,
\begin{align}
\omega_B^c(z)= z+O(\frac{1}{\widetilde{\eta}_M}), \qquad\qquad z=E+\mathrm{i}\widetilde{\eta}_M. \label{170731130}
\end{align}
Using the identity $(\widetilde{B}G)_{ii}=1-(a_i-z)G_{ii}$, we can rewrite (\ref{170731120}) as
\begin{align*}
(1-(a_i-\omega_B^c) G_{ii}) \mathrm{tr}\, G=O_\prec(\frac{1}{\sqrt{N}}),\qquad \qquad z=E+\mathrm{i}\widetilde{\eta}_M.
\end{align*}
From the first formula in (\ref{170731125}) and from (\ref{170731130}) we get
\begin{align}
\Lambda_{{\rm d}}^c (z )\prec {\frac{1}{\sqrt N }},\qquad \qquad z=E+\mathrm{i}\widetilde{\eta}_M. \label{170731140}
\end{align}
Analogously, we also have
\begin{align}
\widetilde{\Lambda}_{{\rm d}}^c(z )\prec\frac{1}{\sqrt{N}},\qquad \qquad z=E+\mathrm{i}\widetilde{\eta}_M. \label{170731141}
\end{align}
Averaging over the index $i$ in the definition of $\Lambda_{di}^c$ and $\widetilde{\Lambda}_{di}^c$ (\emph{c.f., }(\ref{17080305})), and using (\ref{170731140}), (\ref{170731141}) and the fact that $\mathrm{tr}\, G=\mathrm{tr}\, \mathcal{G}=m_H$, we obtain
\begin{align}
\sup_{z: \Im z\geq \eta_\mathrm{M}}\big|m_H(z)- m_A(\omega_B^c(z))\big|\prec \frac{1}{\sqrt{N}}\,,\qquad \qquad \sup_{z: \Im z\geq \eta_\mathrm{M}}\big|m_H(z)- m_B(\omega_A^c(z))\big|\prec \frac{1}{\sqrt{N}}\,, \label{170731153}
\end{align}
where in the large $z$ regime these bounds even hold deterministically, similarly to (\ref{170731120}).
This together with (\ref{170725130}) gives us the system
\begin{align}
\sup_{z: \Im z\geq \eta_\mathrm{M}}|\Phi_1(\omega_A^c(z), \omega_B^c(z),z)|\prec \frac{1}{\sqrt{N}}\,,\qquad \sup_{z: \Im z\geq \eta_\mathrm{M}}|\Phi_2(\omega_A^c(z), \omega_B^c(z),z)|\prec \frac{1}{\sqrt{N}}, \label{170731150}
\end{align}
where $\Phi_1$ and $\Phi_2$ are defined in (\ref{17073115}). We regard (\ref{170731150}) as a perturbation of $\Phi_1(\omega_A(z), \omega_B(z),z)=0$, $\Phi_2(\omega_A(z), \omega_B(z),z)=0$. The stability of this system in the large $\eta$ regime is analyzed in Lemma~\ref{lem. stability for large eta}.
Choosing $(\mu_1, \mu_2)=(\mu_A, \mu_B)$ and $(\widetilde{\omega}_1(z), \widetilde{\omega}_2(z))= (\omega_A^c(z), \omega_B^c(z))$ in Lemma \ref{lem. stability for large eta} below, and using that (\ref{170731150}) and (\ref{170731130}) hold for any sufficiently large $\widetilde{\eta}_M$, we obtain from this stability lemma that
\begin{align}
|\Lambda_\iota(z)|=|\omega_\iota^c(z)-\omega_\iota(z)|\prec \frac{1}{\sqrt{N}}, \qquad \qquad \iota=A,B, \qquad z=E+\mathrm{i}\eta_\mathrm{M} \label{17080105}
\end{align}
for any sufficiently large constant $\eta_\mathrm{M}>1$, say.
Substituting (\ref{17080105}) into (\ref{170731140}) and (\ref{170731141}) gives
\begin{align}
\Lambda_{{\rm d}} (E+\mathrm{i}\eta_\mathrm{M} )\prec \frac{1}{\sqrt{N}}\,, \qquad \qquad \widetilde{\Lambda}_{{\rm d}}(E+\mathrm{i}\eta_\mathrm{M} )\prec \frac{1}{\sqrt{N}}\,,\label{17072101}
\end{align}
for any fixed $E\in \mathbb{R}$. Using the bound $\|G\|\leq \frac{1}{\eta}$ and the inequality $|\mathbf{x}^*G\mathbf{y}|\leq \|G\|\|\mathbf{x}\| \|\mathbf{y}\| $, we also~get
\begin{align}
\Lambda_T(E+\mathrm{i}\eta_\mathrm{M} )\leq \frac{1}{\eta_\mathrm{M}}\,, \qquad\qquad \widetilde{\Lambda}_T(E+\mathrm{i}\eta_\mathrm{M} )\leq \frac{1}{\eta_\mathrm{M}} \,,\label{17072102}
\end{align}
for any fixed $E\in \mathbb{R}$. Since (\ref{17072101}) and (\ref{17072102}) guarantee assumption~(\ref{17020501}), similarly to (\ref{17030502}), we can apply Proposition \ref{pro.17020310} to get, for any fixed $E\in \mathbb{R}$, that
\begin{align}
& \Lambda_{T}(E+\mathrm{i}\eta_\mathrm{M} )\prec \frac{1}{\sqrt{N }}\,, \qquad \qquad \widetilde{\Lambda}_T(E+\mathrm{i}\eta_\mathrm{M} )\prec \frac{1}{\sqrt{N} }\,. \label{17072111}
\end{align}
Also observe that $E+\mathrm{i}\eta_\mathrm{M} \in \mathcal{D}_{>}$, for any fixed $E$, and that $|\mathcal{S}(E+\mathrm{i}\eta_\mathrm{M} )|\gtrsim 1$. Hence $\Lambda(E+\mathrm{i}\eta_\mathrm{M} )\prec N^{-\varepsilon} |\mathcal{S}(E+\mathrm{i} \eta_\mathrm{M})|$. Then we can apply
Lemma \ref{lem.17030310}~$(i)$ repeatedly, with successively smaller control parameters $\hat{\Lambda}$, to~get
\begin{align}
\Lambda(E+\mathrm{i}\eta_\mathrm{M} ) \prec \frac{1}{N}. \label{17072110}
\end{align}
Combining (\ref{17072101}), (\ref{17072111}) and (\ref{17072110}) with the fact that $\Lambda(E+\mathrm{i}\eta_\mathrm{M} )\prec N^{-\varepsilon} |\mathcal{S}(E+\mathrm{i}\eta_\mathrm{M} )|$, we see that the event $\Theta_{>} (E+\mathrm{i}\eta_\mathrm{M} , {\frac{N^{\frac32\varepsilon}}{\sqrt N}}, {\frac{\varepsilon}{10}})$ holds with high probability. More quantitatively, we have for any fixed $E$ that
\begin{align}
\mathbb{P} \Big( \Theta_{>} \big(E+\mathrm{i}\eta_\mathrm{M} , {\frac{N^{\frac32\varepsilon}}{\sqrt N}}, {\frac{\varepsilon}{10}}\big)\Big)\geq 1-N^{-D}\,, \label{17072115}
\end{align}
for all $D>0$ and $N\geq N_2(D, \varepsilon)$ with some threshold $N_2(D, \varepsilon)$.
Now we take (\ref{17072115}) as the initial input, and use a continuity argument based on Lemma \ref{lem.17030512}, to control the probability of the ``good" events $\Theta_{>}$ for $z\in \mathcal{D}_{>}$ and $\Theta$ for $z\in \mathcal{D}_{\leq}$. To this end, we first recall the event $\Omega(z)$ in Lemma \ref{lem.17030512}. The main task is to show for any $z=E+\mathrm{i}\eta\in \mathcal{D}_{>}$,
\begin{align}
\Theta_{>} \Big(E+\mathrm{i}\eta,{\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{2}\Big)\cap \Omega(E+\mathrm{i}(\eta-N^{-5}))\subset \Theta_{>} \Big(E+\mathrm{i}(\eta-N^{-5}), {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{2}\Big), \label{17072123}
\end{align}
and, for any $z=E+\mathrm{i}\eta\in \mathcal{D}_{\leq}$,
\begin{align}
\Theta \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\cap \Omega(E+\mathrm{i}(\eta-N^{-5}))\subset \Theta \Big(E+\mathrm{i}(\eta-N^{-5}), {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big). \label{17072124}
\end{align}
The inclusions (\ref{17072123}) and (\ref{17072124}) are analogous to (7.20) of \cite{BES15b}. The only difference is that here we decompose the domain $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ into $\mathcal{D}_{>}$ and $\mathcal{D}_{\leq}$, and in $\mathcal{D}_{>}$ we also keep monitoring the event $\Lambda\leq N^{-\frac{\varepsilon}{2}}|\mathcal{S}|$ in order to use Lemma~\ref{lem.17030310}~$(i)$. As we gradually reduce $\Im z$, once $z$ enters the domain $\mathcal{D}_{\leq}$, we no longer need to monitor $\mathcal{S}$.
The proofs of (\ref{17072123}) and (\ref{17072124}) rely on the Lipschitz continuity of the Green function, $\|G(z)-G(z')\|\leq N^2|z-z'|$, and of the subordination functions and $\mathcal{S}$ in~\eqref{le lipschitz stuff}. Using the Lipschitz continuity of these functions, it is not difficult to see the following two inclusions:
\begin{align}
&\Theta_{>} \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{2}\Big)\subset \Theta_{>} \Big(E+\mathrm{i}(\eta-N^{-5}), {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{10}\Big),\qquad & & z=E+\mathrm{i}\eta\in \mathcal{D}_{>}\,, \label{17072130} \\
&\Theta \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\subset \Theta \Big(E+\mathrm{i}(\eta-N^{-5}), {\frac{N^{\frac32\varepsilon}}{\sqrt{N\eta}}}\Big), \qquad & & z=E+\mathrm{i}\eta\in \mathcal{D}_{\leq}\,. \label{17072131}
\end{align}
Then, (\ref{17072130}) together with (\ref{17072135}) implies (\ref{17072123}). Similarly, (\ref{17072131}) together with (\ref{17072136}) implies (\ref{17072124}). Applying (\ref{17072123}) and (\ref{17072124}) recursively and using the simple fact that the domains $\mathcal{D}_{>}$ and $\mathcal{D}_{\leq}$ are connected, one can go from $\eta=\eta_\mathrm{M}$ to $\eta=\eta_{\rm m}$ in steps of size $N^{-5}$.
Consequently, we obtain for any $\eta\in [\eta_{\rm m},\eta_\mathrm{M}]\cap N^{-5}\mathbb{Z}$ that, if $E+\mathrm{i}\eta\in \mathcal{D}_{>}$ then
\begin{multline}
\Theta_{>} \Big(E+\mathrm{i}\eta_\mathrm{M}, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta_\mathrm{M}}}}, \frac{\varepsilon}{2}\Big)\cap \Omega(E+\mathrm{i}(\eta_\mathrm{M}-N^{-5}))\cap\ldots\cap \Omega(E+\mathrm{i}\eta)\\
\subset \Theta_{>} \Big(E+\mathrm{i}\eta,{\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}, \frac{\varepsilon}{2}\Big)\subset \Theta_{>} \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\,, \label{17072301}
\end{multline}
respectively, if $E+\mathrm{i}\eta\in \mathcal{D}_{\leq}$ then
\begin{align}
&\Theta_{>} \Big(E+\mathrm{i}\eta_\mathrm{M}, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta_\mathrm{M}}}}, \frac{\varepsilon}{2}\Big)\cap \Omega(E+\mathrm{i}(\eta_\mathrm{M}-N^{-5}))\cap\ldots\cap \Omega(E+\mathrm{i}\eta)\subset \Theta \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\,.\label{17072302}
\end{align}
Combining (\ref{17072304}), (\ref{17072115}), (\ref{17072301}) and (\ref{17072302}), we have
\begin{align}
\mathbb{P}\Big(\Theta \Big(E+\mathrm{i}\eta, {\frac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}}\Big)\Big)\geq 1-N^{-D} (1+N^5(\eta_\mathrm{M}-\eta))\,,\label{17073030}
\end{align}
uniformly for all $\eta\in [\eta_{\rm m}, \eta_\mathrm{M}]\cap N^{-5}\mathbb{Z}$, when $N\geq \max\{N_1(D, \varepsilon), N_2(D, \varepsilon)\}$. Finally, by the Lipschitz continuity of the Green function and also that of the subordination functions in~\eqref{le lipschitz stuff}, we can extend the bounds from $z$ in the discrete lattice to the entire domain $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$.
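For concreteness, the error bookkeeping behind (\ref{17073030}) is a union bound over the $K\mathrel{\mathop:}= N^5(\eta_\mathrm{M}-\eta)$ lattice steps; a schematic version, assuming (as in (\ref{17072304}) and (\ref{17072115})) that the initial event and each $\Omega$ fail with probability at most $N^{-D}$, reads
\begin{align*}
\mathbb{P}\Big(\Theta\Big(E+\mathrm{i}\eta, \tfrac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta}}\Big)\Big)
&\geq \mathbb{P}\Big(\Theta_{>}\Big(E+\mathrm{i}\eta_\mathrm{M}, \tfrac{N^{\frac{5}{4}\varepsilon}}{\sqrt{N\eta_\mathrm{M}}},\tfrac{\varepsilon}{2}\Big)\cap \bigcap_{k=1}^{K}\Omega\big(E+\mathrm{i}(\eta_\mathrm{M}-kN^{-5})\big)\Big)\\
&\geq 1-N^{-D}-K\,N^{-D}= 1-N^{-D}\big(1+N^5(\eta_\mathrm{M}-\eta)\big)\,,
\end{align*}
where the first inequality is (\ref{17072301}) (resp.\ (\ref{17072302})) and the second is a union bound.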
By the definition in (\ref{17072310}), we obtain from (\ref{17073030}) that
\begin{align}
\max_{i\in \llbracket 1, N\rrbracket}\Big|G_{ii}(z)-\frac{1}{a_i-\omega_B(z)}\Big|&\prec \frac{1}{\sqrt{N\eta}}\,, \qquad & & |\Lambda_A(z)|\prec \frac{1}{N\eta} \nonumber\,, \\
\max_{i\in \llbracket 1, N\rrbracket}\Big|\mathcal{G}_{ii}(z)-\frac{1}{b_i-\omega_A(z)}\Big|&\prec \frac{1}{\sqrt{N\eta}}\,,
& &|\Lambda_B(z)|\prec \frac{1}{N\eta}\,, \label{17072320}
\end{align}
uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$ with high probability. For any deterministic $d_1, \ldots, d_N\in \mathbb{C}$, we further write
\begin{align}
\frac{1}{N}\sum_{i=1}^N d_i \Big(G_{ii}-\frac{1}{a_i-\omega_B^c}\Big)= \frac{1}{N}\sum_{i=1}^N \frac{d_i}{\mathrm{tr}\, G (a_i-\omega_B^c)}Q_i\,, \label{17072501}
\end{align}
which can easily be checked from the definition of $\omega_B^c$, $Q_i$ and the equation $(a_i-z)G_{ii}+(\widetilde{B}G)_{ii}=1$. Regarding $\frac{d_i}{\mathrm{tr}\, G (a_i-\omega_B^c)}$ as the random coefficients $d_i$ in (\ref{170723113}), it is not difficult to check that (\ref{17022530}) holds, similarly to the last two equations in (\ref{17021202}). Hence, we have
\begin{align}
\Big|\frac{1}{N}\sum_{i=1}^N d_i \Big(G_{ii}-\frac{1}{a_i-\omega_B^c}\Big)\Big|\prec \Psi\hat{\Pi}\,. \label{17072321}
\end{align}
Plugging the last estimate in (\ref{17072320}) into (\ref{17072321}), and using (\ref{17020502}), we obtain (\ref{17072330}) uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Finally, choosing $d_i=1$ for all $i\in \llbracket 1, N\rrbracket $ in (\ref{17072321}), we get (\ref{17011304}) uniformly on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. This completes the proof of (\ref{17072330}) and (\ref{17011304}) in Theorem \ref{thm. strong law at the edge}.
\end{proof}
\section{Rigidity of the eigenvalues} \label{s.rigidity}
In this section, we prove Theorem \ref{thm. rigidity of eigenvalues}, and also (\ref{17072847}) in Theorem \ref{thm. strong law at the edge}. Recall the definition of $\mathcal{D}_>$ in~\eqref{17072803}. We start by improving the estimate of $\Lambda$ defined in~\eqref{le gros lambda} in the following subdomain of $\mathcal{D}_>$,
\begin{align}
\widetilde{\mathcal{D}}_{>}\mathrel{\mathop:}= \{z=E+\mathrm{i}\eta\in\mathcal{D}_{>}: E<E_-\}\,. \label{17073110}
\end{align}
\begin{lem} \label{lem. away from the support} Suppose that the assumptions in Theorem~\ref{thm. strong law at the edge} hold.
Then, we have the following uniform estimate for all $z\in\widetilde{\mathcal{D}}_{>}$,
\begin{align}
\Lambda(z)\prec \frac{1}{N\sqrt{(\kappa+\eta)\eta}}+\frac{1}{\sqrt{\kappa+\eta}}\frac{1}{(N\eta)^2}\,.\label{17072802}
\end{align}
\end{lem}
\begin{proof} First, from (\ref{17072320}), we see that $\Lambda\prec\frac{1}{N\eta}$ on $\mathcal{D}_\tau(\eta_{\rm m},\eta_\mathrm{M})$. Now, suppose that $\Lambda\prec \hat{\Lambda}$ for some deterministic $ \hat{\Lambda}\equiv \hat{\Lambda}(z)$ that satisfies
\begin{align}
N^{\varepsilon} \Big(\frac{1}{N\sqrt{(\kappa+\eta)\eta}}+\frac{1}{\sqrt{\kappa+\eta}}\frac{1}{(N\eta)^2}\Big)\leq \hat{\Lambda}(z)\leq \frac{N^{\varepsilon}}{N\eta}\,. \label{17072801}
\end{align}
Observe that such $\hat{\Lambda}$ always exists on $\mathcal{D}_>$.
From (\ref{17030301}), (\ref{17080120}) and (\ref{17080121}), we have for $\iota=A, B$, and $z\in \widetilde{\mathcal{D}}_{>}$
\begin{align}
\Big|\mathcal{S}\Lambda_\iota+\mathcal{T}_\iota\Lambda_\iota^2\Big| &\prec \frac{\sqrt{(\frac{\eta}{\sqrt{\kappa+\eta}}+ \hat{\Lambda})(\sqrt{\kappa+\eta}+ \hat{\Lambda})}}{N\eta}+\frac{1}{(N\eta)^2}\prec \frac{\sqrt{ \hat{\Lambda}\sqrt{\kappa+\eta}}}{N\eta}+\frac{\sqrt{\eta}}{N\eta}+\frac{1}{(N\eta)^2}\,, \label{170727100}
\end{align}
where we used that $ \hat{\Lambda} \prec \frac{N^{\varepsilon}}{N\eta}\leq N^{-\varepsilon}\sqrt{\kappa+\eta}$ for all $z\in \widetilde{\mathcal{D}}_{>}$.
Moreover, for $z\in \widetilde{\mathcal{D}}_{>}$, we see that
\begin{align*}
|\Lambda_\iota|\prec \frac{1}{N\eta}\leq N^{-2\varepsilon}\sqrt{\kappa+\eta}\sim N^{-2\varepsilon}|\mathcal{S}|\,,
\end{align*}
for $\iota=A, B$. Hence, according to the fact $\mathcal{T}_\iota\leq C$ (\emph{c.f., } (\ref{17080121})), we can absorb the second term on the left side of (\ref{170727100}) into the first term, and thus we have for $\iota=A, B$
\begin{align*}
|\Lambda_\iota|\prec \frac{1}{\sqrt{\kappa+\eta}} \bigg(\frac{\sqrt{ \hat{\Lambda}\sqrt{\kappa+\eta}}}{N\eta}+\frac{\sqrt{\eta}}{N\eta}+\frac{1}{(N\eta)^2}\bigg)\leq \frac{1}{N\eta(\kappa+\eta)^{\frac14}} \hat{\Lambda}^{\frac12}+N^{-\varepsilon} \hat{\Lambda} \leq N^{-\frac{\varepsilon}{4}} \hat{\Lambda}\,,
\end{align*}
where in the second step we used the lower bound in (\ref{17072801}) directly, and in the last step we used the fact $(N\eta)^{-1}(\kappa+\eta)^{-\frac14}\leq N^{-\frac{\varepsilon}{2}}\hat{\Lambda}^{\frac12}$ which again follows from the lower bound in (\ref{17072801}).
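Schematically, each pass through this estimate improves the bound by a factor $N^{-\varepsilon/4}$, so that, writing $L(z)$ for the quantity on the left side of (\ref{17072801}), the iteration produces
\begin{align*}
\Lambda \prec \frac{N^{\varepsilon}}{N\eta}
\;\Longrightarrow\;
\Lambda \prec \max\Big\{N^{-\frac{\varepsilon}{4}}\,\frac{N^{\varepsilon}}{N\eta},\, L(z)\Big\}
\;\Longrightarrow\;\cdots\;\Longrightarrow\;
\Lambda \prec \max\Big\{N^{-\frac{k\varepsilon}{4}}\,\frac{N^{\varepsilon}}{N\eta},\, L(z)\Big\}\,;
\end{align*}
since the ratio of the two sides of (\ref{17072801}) is bounded by a fixed power of $N$, after $k=O(1/\varepsilon)$ steps (finitely many, uniformly in $N$) the first term drops below $L(z)$.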
Hence, we have improved the bound from $\Lambda\prec \hat{\Lambda}$ to $\Lambda\prec N^{-\frac{\varepsilon}{4}} \hat{\Lambda}$, as long as the lower bound in (\ref{17072801}) holds. Performing the above improvement iteratively, one finally gets (\ref{17072802}), which completes the proof.
\end{proof}
With the aid of Lemma \ref{lem. away from the support}, we can now prove Theorem \ref{thm. rigidity of eigenvalues}.
\begin{proof}[Proof of Theorem \ref{thm. rigidity of eigenvalues}] We first show (\ref{17072845a}) for the smallest eigenvalue $\lambda_1$, \emph{i.e., }
\begin{align}
|\lambda_1-\gamma_1|\prec N^{-\frac23}\,. \label{17072820}
\end{align}
Recall $\mathcal{K}$ defined in (\ref{17072840}). For any (small) constant $\varepsilon>0$, we define the line segment
\begin{align}
\widetilde{\mathcal{D}}(\varepsilon)\mathrel{\mathop:}=\{z=E+\mathrm{i}\eta: E\in [-\mathcal{K}, E_--N^{-\frac23+6\varepsilon}]\,, \,\eta=N^{-\frac{2}{3}+\varepsilon}\}.
\end{align}
Then it is easy to check that $\widetilde{\mathcal{D}}(\varepsilon)\subset \widetilde{\mathcal{D}}_{>}$ (\emph{c.f., } (\ref{17073110})). Applying \eqref{17072802}, we obtain $\Lambda\prec \frac{N^{-\varepsilon}}{N\eta}$ uniformly on $\widetilde{\mathcal{D}}(\varepsilon)$, which together with (\ref{17073111}) implies
\begin{align}
|m_H(z)-m_{\mu_A\boxplus\mu_B}(z)|\prec \frac{N^{-\varepsilon}}{N\eta}\,, \label{17073130}
\end{align}
uniformly on $\widetilde{\mathcal{D}}(\varepsilon)$. Moreover, by (\ref{17080120}), we have
\begin{align}
\Im m_{\mu_A\boxplus\mu_B}(z)\sim \frac{\eta}{\sqrt{\kappa+\eta}}\leq \frac{N^{-\varepsilon}}{N\eta}\,,\label{17073131}
\end{align}
uniformly on $\widetilde{\mathcal{D}}(\varepsilon)$. Combining (\ref{17073130}) with (\ref{17073131}) yields
\begin{align}
\Im m_H(z)\prec \frac{N^{-\varepsilon}}{N\eta}\,, \label{17080315}
\end{align}
uniformly on $\widetilde{\mathcal{D}}(\varepsilon)$. Since $\|H\|< \mathcal{K}$, to see (\ref{17072820}), it suffices to show
that with high probability $\lambda_1$ is not in the interval $[-\mathcal{K}, E_--N^{-\frac23+6\varepsilon}]$. We prove it by contradiction. Suppose that $\lambda_1\in [-\mathcal{K}, E_--N^{-\frac23+6\varepsilon}]$. Then clearly for any $\eta>0$,
\begin{align*}
\sup_{E\in[-\mathcal{K}, E_--N^{-\frac23+6\varepsilon}]} \Im m_H(E+\mathrm{i}\eta)= \sup_{E\in [-\mathcal{K}, E_--N^{-\frac23+6\varepsilon}]}\frac{1}{N}\sum_{i=1}^N \frac{\eta}{(\lambda_i-E)^2+\eta^2}\geq \frac{1}{N\eta}\,,
\end{align*}
which contradicts the fact that (\ref{17080315}) holds uniformly on $\widetilde{\mathcal{D}}(\varepsilon)$. Hence, we have (\ref{17072820}).
Next, combining (\ref{17011304}), (\ref{mdiff1}) and (\ref{mdiff2}) with a standard application of the Helffer-Sj{\"o}strand formula (\emph{c.f., } Lemma 5.1 of \cite{AEK15}) on $\mathcal{D}_\tau (\eta_{\rm m}, \eta_\mathrm{M})$ yields
\begin{align}
\sup_{x\leq E_-+c}|\mu_H((-\infty,x])-\mu_A\boxplus\mu_B((-\infty, x])|\prec \frac{1}{N} \,,\label{17073140}
\end{align}
for any sufficiently small $c=c(\tau)$.
Then (\ref{17072820}), (\ref{17073140}), together with the rigidity (\ref{rigi2}) and the square root behavior of the distribution $\mu_\alpha\boxplus\mu_\beta$ (\emph{c.f., } (\ref{17080390})) will lead to the conclusion. The same conclusion holds with $\gamma_j^*$'s replaced by $\gamma_j$'s by rigidity (\ref{rigi2}).
\end{proof}
Finally, with the aid of Theorem \ref{thm. rigidity of eigenvalues}, we can prove (\ref{17072847}) in Theorem \ref{thm. strong law at the edge}.
\begin{proof}[Proof of (\ref{17072847}) in Theorem \ref{thm. strong law at the edge}] Let $\varepsilon>0$ be any (small) constant. Since $\kappa=E_--E\geq N^{-\frac23+\varepsilon}$ in~\eqref{17072847}, we see that (\ref{17072847}) follows from (\ref{17011304}) directly in the regime $\eta\geq \frac{\kappa}{4}$, say. Hence, in the sequel, we work in the regime $\eta\leq \frac{\kappa}{4}$ only. For any $z=E+\mathrm{i}\eta \in \mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})$ with $\kappa\geq N^{-\frac23+\varepsilon}$, we set the contour
\begin{align*}
\mathcal{C}\equiv \mathcal{C}(z)\mathrel{\mathop:}=\mathcal{C}_l\cup\mathcal{C}_r\cup \mathcal{C}_u\cup \overline{\mathcal{C}}_u,
\end{align*}
where
\begin{align*}
&\mathcal{C}_l\equiv \mathcal{C}_l(z)\mathrel{\mathop:}=\big\{\tilde{z}=E+\frac{\kappa}{2}+\mathrm{i}\tilde{\eta}: -\eta-\kappa\leq \tilde{\eta} \leq \eta+\kappa\big\},\nonumber\\
&\mathcal{C}_r\equiv \mathcal{C}_r(z)\mathrel{\mathop:}=\big\{\tilde{z}=E-\frac{\kappa}{2}+\mathrm{i}\tilde{\eta}: -\eta-\kappa\leq \tilde{\eta} \leq \eta+\kappa\big\},\nonumber\\
&\mathcal{C}_u\equiv \mathcal{C}_u(z)\mathrel{\mathop:}= \big\{ \tilde{z}= \tilde{E}+\mathrm{i}(\eta+\kappa): E-\frac{\kappa}{2}\leq \tilde{E}\leq E+\frac{\kappa}{2}\big\},\nonumber\\
&\overline{\mathcal{C}}_u\equiv \overline{\mathcal{C}}_u(z)\mathrel{\mathop:}= \big\{ \tilde{z}= \tilde{E}-\mathrm{i}(\eta+\kappa): E-\frac{\kappa}{2}\leq \tilde{E}\leq E+\frac{\kappa}{2}\big\}.
\end{align*}
We then further decompose $\mathcal{C}=\mathcal{C}_{<}\cup\mathcal{C}_{\geq}$, where
\begin{align*}
\mathcal{C}_<\equiv \mathcal{C}_<(z)\mathrel{\mathop:}= \big\{\tilde{z}\in \mathcal{C}: |\Im \tilde{z}|< \eta_{\rm m}\big\}, \qquad \mathcal{C}_{\geq}\equiv\mathcal{C}_{\geq} (z)\mathrel{\mathop:}=\mathcal{C}\setminus \mathcal{C}_<.
\end{align*}
Now, we further introduce the event
\begin{align*}
\Xi\mathrel{\mathop:}=\bigcap_{\tilde{z}\in \mathcal{C}_{\geq}}\Big\{ \big| m_H(\tilde{z})-m_{\mu_A\boxplus\mu_B}(\tilde{z})\big|\leq \frac{N^\varepsilon}{N\Im \tilde{z}}\Big\} \bigcap \Big\{\lambda_1\geq E_--\frac{1}{4}N^{-2/3+\varepsilon} \Big\}\,.
\end{align*}
Then, on the event $\Xi$, we have
\begin{align}
m_H(z)-m_{\mu_A\boxplus\mu_B}(z) &= \frac{1}{2\pi \mathrm{i}}\oint_{\mathcal{C}} \frac{1}{\tilde{z}-z} \big(m_H(\tilde{z})-m_{\mu_A\boxplus\mu_B}(\tilde{z})\big){\rm d} \tilde{z}\nonumber\\
&= \frac{1}{2\pi \mathrm{i}}\Big(\int_{\mathcal{C}_<}+ \int_{\mathcal{C}_\geq }\Big)\frac{1}{\tilde{z}-z} \big(m_H(\tilde{z})-m_{\mu_A\boxplus\mu_B}(\tilde{z})\big){\rm d} \tilde{z}. \label{17080201}
\end{align}
Note that, for $\tilde{z}\in \mathcal{C}$, we always have $\frac{1}{|\tilde{z}-z|}\leq \frac{2}{\kappa} $. In addition, for $\tilde{z}\in\mathcal{C}_<$, we have the fact $|\mathcal{C}_<|\leq \eta_{\rm m}$, and
\begin{align*}
|m_H(\tilde{z})|\leq \frac{C}{\kappa}\,, \qquad \qquad|m_{\mu_A\boxplus\mu_B}(\tilde{z})|\leq \frac{C}{\kappa}\,,
\end{align*}
which hold on $\Xi$. For $\tilde{z}\in\mathcal{C}_\geq$, we have the fact $|\mathcal{C}_\geq|\leq C\kappa$ and the bound
\begin{align*}
\big| m_H(\tilde{z})-m_{\mu_A\boxplus\mu_B}(\tilde{z})\big|\leq \frac{N^\varepsilon}{N\Im \tilde{z}}\,,
\end{align*}
which holds on $\Xi$. Applying the above bounds to (\ref{17080201}), it is elementary to check that
\begin{align*}
|m_H(z)-m_{\mu_A\boxplus\mu_B}(z)|\leq C\big(\eta_{\rm m}+N^{-1+\varepsilon}\log N\big) \frac{1}{\kappa}
\end{align*}
on $\Xi$. Since $\gamma$ in $\eta_{\rm m}=N^{-1+\gamma}$ and $\varepsilon$ can be chosen arbitrarily small, we can conclude that
\begin{align}
|m_H(z)-m_{\mu_A\boxplus\mu_B}(z)|\prec \frac{1}{N\kappa} \label{17080210}
\end{align}
if we can show that $\Xi$ holds with high probability. Using (\ref{17072820}), it suffices to show that
\begin{align*}
\big| m_H(\tilde{z})-m_{\mu_A\boxplus\mu_B}(\tilde{z})\big|\prec \frac{1}{N\Im \tilde{z}}\,,
\end{align*}
uniformly in $\tilde{z}\in\mathcal{C}_{\geq}$. This only requires us to enlarge the domain $\mathcal{D}_\tau(\eta_{\rm m}, \eta_\mathrm{M})$, and also to consider its complex conjugate, so as to include $\mathcal{C}_{\geq}$ during the proof of (\ref{17011304}). Hence, we conclude the proof of (\ref{17072847}) by combining the
$\frac{1}{N\kappa}$ bound in (\ref{17080210}) with the $\frac{1}{N\eta}$ bound in (\ref{17011304}).
\end{proof}
We conclude the main part of the paper with the proof of Corollary~\ref{c. rigidity for whole spectrum}.
\begin{proof}[Proof of Corollary \ref{c. rigidity for whole spectrum}] With the additional Assumption \ref{a. rigidity entire spectrum}, we can show analogously that the estimates (\ref{17011304}) and (\ref{17072845a}) hold as well around the upper edge. According to Assumption \ref{a. rigidity entire spectrum} $(vii)$ and the fact $\sup_{\mathbb{C}^+}|m_{\mu_\alpha\boxplus\mu_\beta}|\leq C$ (\emph{c.f., } (\ref{17080326})), we see that except for the two vicinities of the lower and upper edge, the remaining spectrum is within the regular bulk. Together with the strong local law in the bulk regime, \emph{c.f., } Theorem 2.4 in \cite{BES16}, we have
\begin{align}
\big| m_H(z)-m_{\mu_A\boxplus\mu_B}(z)\big|\prec \frac{1}{N\eta}\,, \label{17080220}
\end{align}
uniformly on the domain $
\mathcal{D}(\eta_{\rm m}, \eta_\mathrm{M})\mathrel{\mathop:}=\{z=E+\mathrm{i} \eta\in \mathbb{C}^+: -\mathcal{K}\leq E\leq \mathcal{K}, \quad \eta_{\rm m}\leq \eta\leq \eta_\mathrm{M}\}$.
Then, (\ref{17080220}) together with (\ref{17072845a}) and its counterpart at the upper edge implies the rigidity for all eigenvalues, \emph{i.e., }~\eqref{17072845} can be proved again with Helffer-Sj{\"o}strand formula. Then, from (\ref{17072845}), we conclude that (\ref{17080225}) holds. This completes the proof of Corollary~\ref{c. rigidity for whole spectrum}.
\end{proof}
<!DOCTYPE html>
<html lang="en">
<head>
<!-- Required meta tags always come first -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta http-equiv="x-ua-compatible" content="ie=edge">
<title>Lights</title>
{% block stylesheets %}
<!-- Bootstrap CSS -->
<link href="//cdn.bootcss.com/bootstrap/3.3.6/css/bootstrap.min.css" rel="stylesheet">
<link href="//cdn.bootcss.com/toastr.js/latest/css/toastr.min.css" rel="stylesheet">
<link rel="stylesheet" href="{{url_for('static', filename='css/style.css')}}">
{% endblock %}
</head>
<body>
<nav class="navbar navbar-default">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar-collapse" aria-expanded="false">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#">Brand</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="navbar-collapse">
<ul class="nav navbar-nav">
<li class="nav-item">
<a class="nav-link" href="{{url_for('light.light_list')}}">Lights <span class="sr-only">(current)</span></a>
</li>
<li class="nav-item">
<a class="nav-link" href="{{ url_for('switch.switch_list') }}">Switches</a>
</li>
</ul>
</div><!-- /.navbar-collapse -->
</div><!-- /.container-fluid -->
</nav>
{% block container %}
{% endblock %}
{% block javascripts %}
<!-- jQuery first, then Bootstrap JS. -->
<script src="https://cdn.bootcss.com/jquery/2.1.4/jquery.min.js"></script>
<!--<script src="//cdn.bootcss.com/bootstrap/4.0.0-alpha.2/js/bootstrap.min.js"></script>-->
<script src="//cdn.bootcss.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>
<script src="//cdn.bootcss.com/toastr.js/latest/js/toastr.min.js"></script>
{% endblock %}
</body>
</html>
Fraser Coast
Burrum Heads
Torbanlea
SELF-BELIEF: CQUniversity graduate Tiffany Brown at Central Queensland Innovation and Research Precinct. Allan Reinikka ROK181218atiffany
'Never give up hope' on achieving university dream
by Aden Stokes
CQUNIVERSITY Rockhampton graduate Tiffany Brown has credited her mum's determination in life as being a constant source of inspiration.
Ms Brown's family has faced a number of financial challenges over the years but her mum has not once lost hope.
"My mother Shirley was hit by a truck when she was six years old and left paralysed on one side," Ms Brown said.
"She was told by doctors she could not do many things in her life, including have children, but her determination has shown me you can do anything if you're passionate."
Shirley Brown went on to have five children and the family has survived and thrived on a single income.
CQUniversity graduate Tiffany Brown at CQIRP. Allan Reinikka ROK181218atiffany
Ms Brown, who was the third child in her family to study at university, said two of her sisters didn't because they couldn't see a way to afford it.
"They got full time jobs straight away - they couldn't see any other option I guess," she said.
"There is always help out there.
"You wouldn't think that there is so much out there, and you may be looking at all these scholarships and thinking I won't get it or it's not going to be me, but if you try for every single one you will definitely get something. It's worth trying.
Hugh and Gail Grant, David, Tiffany and Shirley Brown and Bob Pleash. Contributed
"Never give up hope. If you're passionate enough about it, you will make it."
It was thanks to an Iwasaki Foundation tertiary education bursary, worth $30,000 over three years, that enabled Ms Brown to follow her dreams and complete a Bachelor of Environmental Science (Water Management) with Distinction.
"It was my childhood dream to study marine biology, but I found jobs were limited in Australia so I looked more broadly," she said.
"The Iwasaki Bursary helped seal my decision to study locally once I found the opportunities with the Environmental Science degree.
"For those with no other financial support options, like myself, the balancing act between work and university is greatly increased and negatively impacts their potential.
Tiffany Brown flanked by Hugh Grant and Bob Pleash representing the Iwasaki Foundation. Contributed
"So many people see the financial challenges of going to university as impassable.
"The directors of Iwasaki Foundation recognise this challenge and offer a generous solution to those hardworking, financially disadvantaged students."
Ms Brown said the key to thriving at university is to believe in yourself and put 100 per cent into the experience.
"You will always wonder 'am I even going to get through this term', but by the end of it you will look back and wonder where all the time went," she said.
"I personally found that by going into the university itself, going to the library and studying with friends, I got the most out of it as opposed to trying to do it all online. It was a much better experience going in and doing it with people.
"It goes so quickly, so enjoy it."
Ms Brown is now working as a CQUni research assistant at Central Queensland Innovation and Research Precinct (CQIRP) with a focus on helping coal mines with water quality assurance.
She has arranged a summer scholarship to kickstart her Honours research to pinpoint where micro-plastics tend to float in the ocean - near the surface, in deeper water or near the bottom.
"I will be visiting Gladstone over summer to start field measurements using a plankton scoop in the harbour," she said.
"Knowing the location of micro-plastics is an important stage in cleaning them up."
Ms Brown said she is loving the idea of a research workforce and getting into the business early has driven her to continue down this path.
CQUniversity cost breakdown
Students doing one of the more straight-forward degree pathways, such as the Bachelor of Business, could expect to pay around $33,700 but deferrable for Australian students using HECS-HELP and SA-HELP.
Students looking at a degree such as Bachelor of Business, which does not have extra costs associated with uniforms, the Bookshop suggests a budget around $2800 across the three years for textbook costs.
On-campus accommodation costs range from around $20,000 across a three-year degree (assuming studying from home on-line during the summer terms, BYO Linen and in a standard self-catered room).
© The Maryborough Hervey Bay Newspaper Company Pty Ltd 2019. Unauthorised reproduction is prohibited under the laws of Australia and by international treaty.
Evluma values diversity as a Core Value and its suppliers as an integral part of our sourcing strategy. We have developed a policy statement that is consistent with those values.
Our program allows us to enhance our supplier base for new ideas and high-quality goods and services, while enhancing economic growth opportunity. Evluma is dedicated to the strategy, purpose and goals of a strong Supplier Diversity Program. Continual efforts are being made to strive to enhance and build upon our program to identify small, local, minority, disadvantaged, women-owned, Hubzone, veteran-owned, and service disabled veteran-owned businesses as suppliers. It is clear that companies with global business vision understand the value of diversity relationships, not only with employees, but with customers, suppliers and investors.
Policy Statement: Evluma recognizes the importance of Small, Minority-owned and Women-owned Business Enterprises (SBE MWBE) to the economies of the nation, the state, and the communities it serves, as well as the corporation itself. Therefore, we are committed to pursuing business relationships with such enterprises and using innovative approaches designed to continually improve business opportunities.
Policy Program Objective: The diversity program objective is to promote and develop long-term relationships with small, local, women and minority-owned businesses and to create and sustain a culture that embraces supplier diversity to bring about change through creativity & inclusion.
• Actively and routinely seek out qualified and certified local, small and minority-owned and women-owned business enterprises that can provide competitive and high-quality commodities and services.
• Encourage non-MWBE companies' participation in and support of supplier diversity through Tier II tracking and reporting.
• Seek out opportunities to assist in the development and competitiveness of SBE & MWBEs through instruction, mentoring, and other outreach activities.
Evluma uses the Federal Government's definitions for businesses located at http://www.sba.gov.
Please note: Evluma makes no promises or commitments to transact business with any suppliers who register with us. Registration does not in any manner guarantee registrants that their company will be identified, contacted, evaluated or selected to enter any business relationships or constitute any agreement with Evluma or any of its operating divisions.
The Tambov Governorate () was a governorate (gubernija) of the Russian Empire and later of the Soviet Union. Established in 1796, it existed until 1937; its capital was Tambov.

Other projects

Tambov
Tambov Oblast
\section{Introduction}
Topological quantum computing is a theoretical paradigm of large-scale quantum error correction in which important data is encoded in non-local features of a vast entangled state. So long as the physical errors on the overall system stay below some threshold value, the data is protected. The archetypal example is the $\mathbb{C}\mathbb{Z}_2$ surface code \cite{DKLP,BK1}, a system which requires only nearest-neighbour connectivity between qubits and has a high threshold against errors \cite{WFH}. The key practical feature of the surface code, as opposed to the earlier toric code \cite{Kit1}, is that it may be embedded on the plane with boundaries, and does not require exotic homology to encode data.
Lattice surgery was developed by Horsman et al. \cite{HFDM} as a method of computation using the surface code. It is conceptually simple and flexible, and believed to be efficient in its consumption of resources such as qubits and time \cite{FG,Lit1} compared to other methods such as defect braiding \cite{FMMC}. Lattice surgery starts with patches of surface code, then employs `splits' and `merges' on these, which act non-unitarily on logical states. In fact, merges are described by completely positive trace preserving (CPTP) maps, and cannot be performed deterministically in general. Both features make the model cumbersome to describe using the circuit model.
Interestingly, computation using lattice surgery closely mirrors the Hopf algebra structure of $\mathbb{C} \mathbb{Z}_2$, the group algebra of $\mathbb{Z}_2$ over the field $\mathbb{C}$. Coincidentally, this is one of the building blocks of the ZX-calculus, a formal graphical language for reasoning about quantum computation \cite{CD}. It represents quantum processes using ZX-diagrams, which may then be rewritten using the axioms of the calculus. The initial presentation of the ZX-calculus applied only to qubits, and can be summarised algebraically as $\mathbb{C} \mathbb{Z}_2$ and $\mathbb{C}(\mathbb{Z}_2)$ sitting on the same vector space, plus the inherent Fourier transform and a so-called phase group.
De Beaudrap and Horsman \cite{BH} noticed this relationship between lattice surgery and qubit ZX-calculus, and leveraged it to develop novel lattice surgery procedures. Other techniques using the same idea have also been developed, e.g. for efficient compilation of magic state distillation circuits \cite{GF} and reasoning about implementing deterministic programs despite non-deterministic merges \cite{BDHP}.
The ZX-calculus has since been generalised to qudits using $\mathbb{C} \mathbb{Z}_d$, for $d\in \mathbb{N}$ \cite{W1}. We observe that lattice surgery may similarly be generalised. The procedure is algebraically very simple, with the most advanced technology required being the Fourier transform. We give a concrete description of the computational model, although for brevity we elide some of the details such as the Pauli frame. We then leverage the qudit ZX-calculus to describe transformations on the logical data. We use this description to give a series of qudit lattice surgery procedures, and show that the model still requires magic state injection for universality.
\section{The $\mathbb{C}\mathbb{Z}_d$ surface code}
In this section we introduce the surface code for qudits; readers familiar with Kitaev models may wish to skip to Section~\ref{sec:lattice_surgery}. Throughout, we let $\mathbb{Z}_d$ be the cyclic group with $d$ elements labelled by integers $0,\cdots,d-1$ with addition as group multiplication. We assume $d\geq 2$, as the $d=1$ case is trivial. In the interest of brevity, proofs are brief or relegated to the appendices. For more explicit and thorough treatments at a higher level of generality see e.g. \cite{Kit1,Bom,Cow}. Throughout, we occasionally ignore normalisation (typically factors of $d$ or $1\over d$) when convenient.
\begin{definition} Let $\mathbb{C}\mathbb{Z}_d$ be the group Hopf algebra with basis states $|i\>$ for $i \in \mathbb{Z}_d$. $\mathbb{C}\mathbb{Z}_d$ has multiplication given by a linear extension of its native group multiplication, so $|i\>\otimes |j\> \mapsto |i+j\>$, and the unit $|0\>$. It has comultiplication given by $|i\>\mapsto |i\>\otimes |i\>$, and the counit $|i\>\mapsto 1 \in \mathbb{C}$. It has the normalised integral element $\Lambda_{\mathbb{C}\mathbb{Z}_d} = \frac{1}{d}\sum_i |i\>$ and the antipode is the group inverse. $\mathbb{C}\mathbb{Z}_d$ is commutative and cocommutative.
\end{definition}
\begin{definition}Let $\mathbb{C}(\mathbb{Z}_d)$ be the function Hopf algebra with basis states $|\delta_i\>$ for $i\in \mathbb{Z}_d$. $\mathbb{C}(\mathbb{Z}_d)$ is the dual algebra to $\mathbb{C}\mathbb{Z}_d$. $\mathbb{C}(\mathbb{Z}_d)$ has multiplication $|\delta_i\> \otimes |\delta_j\> \mapsto \delta_{i,j}|\delta_i\>$ and the unit $\sum_i |\delta_i\>$. It has comultiplication $|\delta_i\> \mapsto \sum_{h\in \mathbb{Z}_d} |\delta_h\>\otimes|\delta_{i-h}\>$ and counit $|\delta_i\>\mapsto \delta_{i,0}$. It has the normalised integral element $\Lambda_{\mathbb{C}(\mathbb{Z}_d)} = |\delta_0\>$ and the antipode is also the inverse. $\mathbb{C}(\mathbb{Z}_d)$ is commutative and cocommutative.
\end{definition}
\begin{lemma}\label{lem:fourier}
The algebras are related by the Fourier isomorphism, so $\mathbb{C}(\mathbb{Z}_d)\cong \mathbb{C}\mathbb{Z}_d$ as Hopf algebras. In particular this isomorphism has maps
\begin{equation}\label{Zisom} |j\> \mapsto \sum_k q^{jk}|\delta_k\>,\quad |\delta_j\>\mapsto {1\over d} \sum_k q^{-jk}|k\>,\end{equation}
where $q = e^{2\pi i/d}$ is a primitive $d$th root of unity.
\end{lemma}
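As an informal sanity check (not part of the formal development), the maps in \eqref{Zisom} can be verified numerically to be mutually inverse, and to take the multiplication of $\mathbb{C}\mathbb{Z}_d$ (addition of indices) to the pointwise multiplication of $\mathbb{C}(\mathbb{Z}_d)$. A minimal sketch in Python with \texttt{numpy}, with $d$ chosen arbitrarily:

```python
import numpy as np

d = 5
q = np.exp(2j * np.pi / d)

# Matrix of |j> -> sum_k q^{jk} |delta_k>; column j holds the image of |j>
F = np.array([[q ** (j * k) for j in range(d)] for k in range(d)])
# Matrix of the inverse map |delta_j> -> (1/d) sum_k q^{-jk} |k>
F_inv = np.array([[q ** (-j * k) for j in range(d)] for k in range(d)]) / d

# The two maps are mutually inverse
assert np.allclose(F @ F_inv, np.eye(d))
assert np.allclose(F_inv @ F, np.eye(d))

# Algebra map: |i+j> goes to the pointwise product of the images of |i>, |j>
for i in range(d):
    for j in range(d):
        assert np.allclose(F[:, (i + j) % d], F[:, i] * F[:, j])
```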
\begin{definition}\label{def:lattice_acts}
Now let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation. The corresponding Hilbert space $\hbox{{$\mathcal H$}}$ will be a tensor product of vector spaces with one copy of $\mathbb{C}\mathbb{Z}_d$ at each arrow in $E$, with basis denoted by $\{|i\>\}_{i\in \mathbb{Z}_d}$ as before. Next, for each vertex $v \in V$ and each face $p \in P$ we define actions of $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$, which act on the vector spaces around the vertex or face respectively, and trivially elsewhere, according to
\[\tikzfig{vertex_action}\]
and
\[\tikzfig{face_action}\]
for $|l\> \in \mathbb{C}\mathbb{Z}_d$ and $|\delta_j\>\in \mathbb{C}(\mathbb{Z}_d)$.
\end{definition}
Here $|l\>{\triangleright}_v$ subtracts in the case of arrows pointing towards the vertex, and $|\delta_j\>{\triangleright}_p$ has $c,d$ entering negatively in the $\delta$-function because these are contra to a {\em clockwise} flow around the face in our conventions. The vertex actions are built from fourfold products of the operators $X$ and $X^\dagger$, where $X^l|i\>=|i+l\>$. Consider the face actions of elements $\sum_j q^{mj}|\delta_j\>$, i.e. the Fourier transformed basis of $\mathbb{C}(\mathbb{Z}_d)$; these face actions are made up of $Z$ and $Z^\dagger$, where $Z^m|i\>=q^{mi}|i\>$, and $Z$, $X$ obey $ZX=qXZ$.
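Concretely, $X$ and $Z$ are the generalised Pauli (shift and clock) matrices. The commutation relation $ZX=qXZ$ and the order-$d$ property $X^d=Z^d=1$ are easily checked numerically; a minimal sketch, with $d$ chosen arbitrarily:

```python
import numpy as np

d = 3
q = np.exp(2j * np.pi / d)

# X^l |i> = |i+l mod d> (shift), Z^m |i> = q^{mi} |i> (clock)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag([q ** i for i in range(d)])

assert np.allclose(Z @ X, q * X @ Z)                      # ZX = qXZ
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))  # X^d = 1
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))  # Z^d = 1
```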
Stabilisers on the lattice are given by measurements of the $X\otimes X\otimes X^\dagger\otimes X^\dagger$ and $Z\otimes Z\otimes Z^\dagger\otimes Z^\dagger$ operators on vertices and faces respectively; that is, for the vertices we non-deterministically perform one of the $d$ projectors $P_v(j) = \sum_k q^{jk}|k\>{\triangleright}_v$ for $j\in \mathbb{Z}_d$, corresponding to the $d$ possible measurement outcomes. Similarly for faces, we perform one of the $d$ projectors $P_p(j) =|\delta_j\>{\triangleright}_p$. In practice, this requires additional `syndrome' qudits at each vertex and face; we give explicit circuits for these in Appendix~\ref{app:circs}. At each round of measurement, we measure all of the stabilisers on the whole lattice. Physically, we may also say that the system occupies an eigenspace of a certain Hamiltonian:
\[H=-(\sum_v A(v) + \sum_p B(p))+{\rm const.}\]
where
\[ A(v)=P_v(0)=\Lambda{\triangleright}_v={1\over d}\sum_i |i\>{\triangleright}_v,\quad B(p)=P_p(0)=\Lambda^*{\triangleright}_p=|\delta_0\>{\triangleright}_p.\]
It is easy to see that
\[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\]
where $v,v'$ are different vertices, and $p,p'$ are different faces.
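Though the stabilisers act on many qudits at once, their algebra is already visible on a single tensor factor. The following sketch (illustrative only; we include the $1/d$ normalisation that the text elides) checks that the single-qudit analogues $P(j) = {1\over d}\sum_k q^{jk}X^k$ of the $P_v(j)$ are mutually orthogonal projectors resolving the identity:

```python
import numpy as np

d = 4
q = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)  # X|i> = |i+1 mod d>

# Single-qudit analogue of P_v(j): P(j) = (1/d) sum_k q^{jk} X^k
P = [sum(q ** (j * k) * np.linalg.matrix_power(X, k) for k in range(d)) / d
     for j in range(d)]

for j in range(d):
    for jp in range(d):
        # P(j) P(j') = delta_{j,j'} P(j): orthogonal projectors
        expected = P[j] if j == jp else np.zeros((d, d))
        assert np.allclose(P[j] @ P[jp], expected)
# sum_j P(j) = 1: the d outcomes exhaust the measurement
assert np.allclose(sum(P), np.eye(d))
```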
When the measurements at a vertex $v$ and face $p$ yield the projectors $A(v)$ and $B(p)$ we say that no errors were detected at these locations, and we are locally in the vacuum. Then if we obtain the projectors $A(v)$ and $B(p)$ everywhere we are in the global vacuum space $\hbox{{$\mathcal H$}}_{vac}$. One can check that a state ${|{\rm vac}\>} \in \hbox{{$\mathcal H$}}_{vac}$ obeys
\[|l\>{\triangleright}_v{|{\rm vac}\>} = A(v){|{\rm vac}\>}= \sum_j q^{mj}|\delta_j\>{\triangleright}_p{|{\rm vac}\>} = B(p){|{\rm vac}\>}={|{\rm vac}\>}\]
for all $l, m\in \mathbb{Z}_d, v\in V, p\in P$.
\begin{definition}\label{def:log_states}
We can always write down at least two vacuum states\footnote{In certain cases, such as when the lattice is embedded onto a sphere, these states coincide.}, which we shall call:
\[|0\>_L := \prod_{v}A(v)\bigotimes_E |0\>\]
and
\[|\delta_0{}\>_L := \prod_{p}B(p)\bigotimes_E \sum_i |i\>.\]
\end{definition}
Computationally, the vacuum space is also the \textit{logical space}, the subspace in which we store data; the subscript ${}_L$ refers to this logical space, and $|0\>_L$, $|\delta_0{}\>_L$ are canonical logical states.
If measurements yield other projectors $P_v(j)$ or $P_p(j)$ then we have detected an error; in physics jargon we have detected the presence of an electric or magnetic particle. One important feature of the code is that if we receive the measurement outcome $P_v(j)$, say, at a vertex then there will be another vertex at which we detect $P_v(-j)$ instead. This is because all operators on the lattice come in the form of \textit{string operators}. String operators come in two types: $X$ and $Z$.
\begin{definition}
An $X$-type string operator ${}_xF^i_\xi$ acts on the lattice as
\[\tikzfig{x_string_operator}\]
where $\xi$ is a string that passes between faces and for each crossed edge we apply either an $X^i$ or $X^i{}^\dagger$ depending on the orientation, as shown.
\end{definition}
The $X$-type string operators satisfy $({}_xF^i_{\xi})^\dagger = {}_xF^{-i}_{\xi}$. Additionally we have
\[{}_xF^i_{\xi}\circ{}_xF^j_{\xi} = {}_xF^{i+j}_{\xi}\]
and, given concatenated strings $\xi, \xi'$,
\[{}_xF^i_{\xi'\circ\xi} = {}_xF^i_{\xi'}\circ{}_xF^i_{\xi}\]
where one can see multiplication and comultiplication of $\mathbb{C} \mathbb{Z}_d$; more generally they obey the same Hopf laws as $\mathbb{C} \mathbb{Z}_d$. The other axioms are easy to check.
The $X$-type string operators make magnetic quasiparticles `appear' at the faces at which a string ends. In particular, given an initial vacuum state ${|{\rm vac}\>}$, we can check that
\[P_{p_0}(i){}_xF^i_\xi{|{\rm vac}\>} = P_{p_1}(-i){}_xF^i_\xi{|{\rm vac}\>} = {}_xF^i_\xi{|{\rm vac}\>}\]
where $p_0,p_1$ are the start and endpoints of the string, so we will detect errors at these locations upon measurement. However, the string operators leave the system in the vacuum in the intermediate faces of the string, as we have:
\[B(p){}_xF^i_\xi = {}_xF^i_\xi B(p)\]
for any $p \neq p_0$ or $p_1$. As a consequence, we may think of string operators as equivalent up to a sort of discrete framed isotopy.
\begin{definition}
A $Z$-type string operator ${}_zF^{\delta_j}_\xi$ acts on the lattice as
\[\tikzfig{z_string_operator}\]
by passing between vertices. For each crossed edge we include a term in the $\delta$-function, as shown. Observe that $\sum_j q^{mj} {}_zF^{\delta_j}_\xi$ applies a $Z^m$ or $Z^m{}^\dagger$ at each edge, and that this is the Fourier transformed basis of the $Z$-type string operators.
\end{definition}
The $Z$-type string operators satisfy $({}_zF^{\delta_i}_\xi)^\dagger = {}_zF^{\delta_i}_\xi$. Additionally we have
\[{}_zF^{\delta_i}_\xi\circ {}_zF^{\delta_j}_\xi = \delta_{i,j}\ {}_zF^{\delta_j}_\xi\]
and
\[{}_zF^{\delta_i}_{\xi'\circ\xi} = \sum_{h}{}_zF^{\delta_h}_{\xi'}\circ{}_zF^{\delta_{i-h}}_{\xi},\]
so $Z$-type string operators obey the same Hopf laws as $\mathbb{C}(\mathbb{Z}_d)$.
The $Z$-type string operators generate electric quasiparticles at the vertices at which a string ends. We have
\[P_{v_0}(i)\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\>}=P_{v_1}(-i)\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\>}=\sum_j q^{ij}{}_zF^{\delta_j}_\xi{|{\rm vac}\>}.\]
As a result, we refer to this basis of the $Z$-type string operators as the `quasiparticle basis'. They leave the system in the vacuum in the intermediate vertices of the string:
\[A(v)\sum_j q^{ij}{}_zF^{\delta_j}_\xi = \sum_j q^{ij}{}_zF^{\delta_j}_\xi A(v)\]
for any $v\neq v_0$ or $v_1$.
In the quasiparticle basis we have
\[\sum_j q^{ij}{}_zF^{\delta_j}_\xi\circ\sum_j q^{kj}{}_zF^{\delta_j}_\xi = \sum_j q^{(i+k)j}{}_zF^{\delta_j}_\xi\]
and
\[\sum_j q^{ij}{}_zF^{\delta_j}_{\xi'\circ\xi} = \sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}\circ \sum_k q^{ik}{}_zF^{\delta_k}_{\xi}\]
i.e. the same algebraic rules as the $X$-type string operators, and as $\mathbb{C} \mathbb{Z}_d$.
\begin{lemma}\cite{Kit1}
String operators which form a closed loop on a locally vacuum lattice segment either act as the identity or are physically impossible, i.e. they take the system to 0.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
First, assume the string passes between faces, so we have an $X$-type string operator. In this case, we may tile the loop with squares on the dual lattice (that is, the dual in the graph-theoretic sense). Then one can check the closed string operator acts as a product over the tiles of $|l\>{\triangleright}_v$ actions. As $|l\>{\triangleright}_v{|{\rm vac}\>} = {|{\rm vac}\>}$ the state is left unchanged.
If the string passes between vertices, tile the loop with squares. Consider the $Z$-type string operators in the quasiparticle basis. Then the closed string operator acts as a product of $\sum_j q^{mj}|\delta_j\>{\triangleright}_p$ actions. As $\sum_j q^{mj}|\delta_j\>{\triangleright}_p{|{\rm vac}\>} = {|{\rm vac}\>}$ the state is left unchanged. In the original basis, the product of $|\delta_h\>{\triangleright}_p$ actions acts as identity if $h=0$; otherwise it takes the system to 0.
\endproof
We are now ready to define a \textit{patch}.
\begin{definition}
A patch is a rectangular segment of lattice bordered by two rough and two smooth boundaries, like so:
\[\tikzfig{patch}\]
where rough boundaries are at the top and bottom, while smooth boundaries are at the left and right.\footnote{One can of course define patches with other combinations of boundaries, which are useful for specific kinds of circuits \cite{Lit1}, but this is a convenient definition for our purposes.}
\end{definition}
There are assumed to be no parts of the lattice beyond the patch; these are all of the edges in the lattice. The stabilisers on the boundaries are the same as in the bulk, with the exceptions that (a) stabilisers obviously exclude the edges which are not present, and (b) there are no stabilisers for single edges. So, in particular, there are no vertex measurements which include only the single top and bottom edges; likewise, there are no face measurements which include only the single left and right edges.\footnote{More generally, boundary conditions are defined by a subgroup $K\subseteq\mathbb{Z}_d$. This leads to a rich algebraic theory \cite{BSW}. In the present case, the subgroups $K$ associated to rough and smooth boundaries are $K=\{0\}$ and $K=\mathbb{Z}_d$ respectively.}
\begin{lemma}
Let the system be in a vacuum state. All $X$-type string operators which extend between the left and right boundaries, for example in the manner below
\[\tikzfig{patch_X_string}\]
leave the system in vacuum, but do not generally act as identity.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
There are no face stabilisers for the single edges at the end, and at all other faces $B(p)$ commutes with the string operators. However, while the string can be smoothly deformed up and down the sides of the patch while leaving the operation on the vacuum invariant, it cannot be expressed as a product of vertex or face operators, and explicit checks on small (but nontrivial) examples show that ${}_xF^i_{\xi}$ does not act as identity unless $i=0$.
\endproof
In fact, we have a stronger property: all operators which act as a product of $X$ operations on edges and leave the system in vacuum may be expressed as a linear combination of $X$-type string operators extending between left and right, so the $d$ different $X$-type string operators ${}_xF^i_{\xi}$ form an orthonormal basis for the algebra of such operators. We have a similar result for the $Z$-type string operators in the quasiparticle basis, which extend between the top and bottom boundaries. These properties motivate the following:
\begin{lemma}
A patch as defined above with underlying group algebra $\mathbb{C} \mathbb{Z}_d$ has ${\rm dim}(\hbox{{$\mathcal H$}}_{vac}) = d$ and two canonical bases, $\{|i\>_L\}_{i\in\mathbb{Z}_d}$ and $\{|\delta_i\>_L\}_{i\in \mathbb{Z}_d}$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
The states in $\hbox{{$\mathcal H$}}_{vac}$, and hence the logical space of the code, are uniquely characterised by the algebra of operators upon them. Given a reference state $|{\rm ref}\>$ in the vacuum, if there is another vacuum state $|\psi\>$ there must be some linear map which transforms $|{\rm ref}\>$ into $|\psi\>$. Thus $\{{}_xF^i_{\xi}|{\rm ref}\>\}_{i\in\mathbb{Z}_d}$ automatically gives an orthonormal basis for $\hbox{{$\mathcal H$}}_{vac}$.
Let us call $|0\>_L$ the reference state from Def~\ref{def:log_states}. Then $|i\>_L := {}_xF^i_{\xi}|0\>_L$, where $\xi$ is any string extending from the left boundary to the right. We may call ${}_xF^i_{\xi}$ a logical $X^i$ gate, i.e. $X^i_L$.
As with $\mathbb{C}\mathbb{Z}_d$ itself, we have a Fourier basis for the patch's logical space. To begin with, we have $|\delta_0\>_L = \sum_i {}_xF^i_{\xi}|0\>_L$. Then, we define further logical states in the Fourier basis by $|\delta_i\>_L = \sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}|\delta_0\>_L$, where now the string $\xi'$ extends from the top to bottom, and we claim that $|\delta_i\>_L = \sum_kq^{-ik}|k\>_L$. We check on a small example that this is consistent with Lemma~\ref{lem:fourier} in Appendix~\ref{app:fourier_patch}, and assert that it holds generally. As a result we call $\sum_j q^{ij}{}_zF^{\delta_j}_{\xi'}$ a logical $Z^i$ gate, $Z^i_L$.
\endproof
Note that the logical space is independent of the size of the lattice, and depends only on the topology. The lattice size is relevant only to the probability of correcting errors.
\section{Lattice surgery}\label{sec:lattice_surgery}
If we have two patches with logical spaces $(\hbox{{$\mathcal H$}}_{vac})_1$ and $(\hbox{{$\mathcal H$}}_{vac})_2$ which are disjoint in space then we evidently have a combined logical space $\hbox{{$\mathcal H$}}_{vac} = (\hbox{{$\mathcal H$}}_{vac})_1 \mathop{{\otimes}} (\hbox{{$\mathcal H$}}_{vac})_2$.
We may start with one patch and `split' it to convert it into two patches.
\subsection{Splits}
To perform a smooth split, take a patch and measure out a string of intermediate qudits from top to bottom in the $\{|\delta_i\>\}$ basis, like so:
\[\tikzfig{split1}\]
Regardless of the measurement results we get, we now have two disjoint patches next to each other. We can see the effect on the logical state by considering an $X$-type string operator which had been extending across a string $\xi$ from left to right on the original patch. Previously it had been ${}_xF^i_{\xi}$, say. Now, let $\xi = \xi''\circ\xi'$, where $\xi'$ extends across the left patch after the split and $\xi''$ extends across the right one. Then ${}_xF^i_{\xi} = {}_xF^i_{\xi'}\circ{}_xF^i_{\xi''}$; our $X^i_L$ gate on the original logical space is taken to $X^i_L\mathop{{\otimes}} X^i_L$ on $(\hbox{{$\mathcal H$}}_{vac})_1 \mathop{{\otimes}} (\hbox{{$\mathcal H$}}_{vac})_2$. It is easy to see that this then gives the map:
\[\Delta_s : |i\>_L\mapsto |i\>_L\otimes |i\>_L\]
for $i\in\mathbb{Z}_d$. This is the same regardless of the measurement outcomes on the intermediate qudits we measured out.
To perform a rough split, take a patch and measure out a string of qudits from left to right in the $\{|i\>\}$ basis. A similar analysis to before, but for $Z^i_L$ gates, shows that we have
\[\Delta_r : |\delta_i\>_L\mapsto |\delta_i\>_L\otimes |\delta_i\>_L.\]
\begin{remark}\rm
We now note a subtlety: for both smooth and rough splits we induce a copy in the relevant bases, that is the comultiplication of $\mathbb{C}\mathbb{Z}_d$, rather than the comultiplication of $\mathbb{C}(\mathbb{Z}_d)$ for the rough splits. This is because we are placing both algebras on the same object, using the non-natural isomorphism $V\cong V^*$ for vector spaces $V$. Thus if we take the rough split map in the other basis we get
\[\Delta_r : |i\>_L\mapsto \sum_h |h\>_L \otimes |i-h\>_L.\]
This follows directly from Lemma~\ref{lem:fourier}. The fact that both algebras are placed on the same object allows us to relate the model to the ZX-calculus in Section~\ref{sec:zx}.
\end{remark}
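The remark can be checked numerically. Below is a minimal sketch (illustrative only) which models the logical space directly as $\mathbb{C}^d$: the rough split copies in the $\{|\delta_i\>_L\}$ basis, and conjugating by the change of basis $|\delta_j\>_L = \sum_k q^{-jk}|k\>_L$ recovers the convolution form $\Delta_r : |i\>_L\mapsto \sum_h |h\>_L \otimes |i-h\>_L$:

```python
import numpy as np

d = 3
q = np.exp(2j * np.pi / d)

# Change of basis: column j is |delta_j>_L = sum_k q^{-jk} |k>_L
B = np.array([[q ** (-j * k) for j in range(d)] for k in range(d)])
B_inv = np.linalg.inv(B)

def copy_map(dim):
    # Basis copy |j> -> |j> (x) |j>, as a dim^2 x dim matrix
    M = np.zeros((dim * dim, dim))
    for j in range(dim):
        M[j * dim + j, j] = 1
    return M

# Rough split: copy in the delta basis, expressed in the |i> basis
Delta_r = np.kron(B, B) @ copy_map(d) @ B_inv

# Check Delta_r |i> = sum_h |h> (x) |i-h>
for i in range(d):
    out = Delta_r @ np.eye(d)[:, i]
    expected = np.zeros(d * d)
    for h in range(d):
        expected[h * d + ((i - h) % d)] = 1
    assert np.allclose(out, expected)
```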
\subsection{Merges}
To perform a smooth merge, we do the reverse operation. Start with two disjoint patches:
\[\tikzfig{split2}\]
and then initialise between them a string of intermediate qudits, each in the $\sum_i|i\>$ state, like so:
\[\tikzfig{merge}\]
Then measure the stabilisers at all sites on the now merged lattice. Assuming no errors have occurred, all the stabilisers are automatically satisfied everywhere except the measurements which include the new edges. These measurements realise a measurement of $Z_L\mathop{{\otimes}} Z_L$ on the logical space $(\hbox{{$\mathcal H$}}_{vac})_1 \mathop{{\otimes}} (\hbox{{$\mathcal H$}}_{vac})_2$. We prove this in Appendix~\ref{app:merge}. Unlike with splits, the resultant logical state after merging also depends on the measurement outcomes.
Depending on which `frame' we choose we can have two different sets of possible maps from the smooth merge, see \cite{BH} for the easier qubit case. Here we choose to adopt the Pauli frame of the second patch. In the Fourier basis we thus have the Kraus operators:
\[\nabla_s: \{|\delta_i\>_L\mathop{{\otimes}}|\delta_j\>_L\mapsto q^{in}|\delta_{i+j}\>_L\}_{n \in \{0,\cdots,d-1\}}\]
where $q^{in}$ is a factor introduced by the $Z_L\mathop{{\otimes}} Z_L$ measurement; we have $n \in \{0,\cdots,d-1\}$ for the $d$ different possible measurement outcomes. Considering only the $n=0$ case for a moment, one can see that this is the correct map using the $Z_L$ logical operators:
\[\sum_j q^{ij}{}_zF^{\delta_j}_\xi\circ\sum_j q^{kj}{}_zF^{\delta_j}_\xi = \sum_j q^{(i+k)j}{}_zF^{\delta_j}_\xi\]
from earlier, where $\xi$ extends from bottom to top on both original patches. Then when we merge the patches, we get the combined string operator. In the other basis of logical states, the smooth merge gives:
\[\nabla_s: \{|i\>_L\mathop{{\otimes}}|j\>_L\mapsto \delta_{i+n,j}|i+n\>_L\}_{n\in \{0,\cdots,d-1\}}.\]
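The consistency of the two expressions for $\nabla_s$ can be checked numerically. A minimal sketch, modelling the logical space directly as $\mathbb{C}^d$ and conjugating the Fourier-basis map by the change of basis $|\delta_j\>_L = \sum_k q^{-jk}|k\>_L$; the outcome label $n$ is fixed arbitrarily:

```python
import numpy as np

d, n = 3, 1   # n: an arbitrary choice of merge measurement outcome
q = np.exp(2j * np.pi / d)

# Columns of B are the |delta_j>_L states in |k>_L coordinates
B = np.array([[q ** (-j * k) for j in range(d)] for k in range(d)])
B_inv = np.linalg.inv(B)

# Smooth merge in the delta basis: |delta_i> (x) |delta_j> -> q^{in} |delta_{i+j}>
M = np.zeros((d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        M[(i + j) % d, i * d + j] = q ** (i * n)

# The same map written in the |i> basis
nabla_s = B @ M @ np.kron(B_inv, B_inv)

# Check |a> (x) |b> -> delta_{a+n,b} |a+n>
for a in range(d):
    for b in range(d):
        out = nabla_s @ np.kron(np.eye(d)[:, a], np.eye(d)[:, b])
        expected = np.eye(d)[:, (a + n) % d] if (a + n) % d == b else np.zeros(d)
        assert np.allclose(out, expected)
```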
\begin{remark}\rm
It is common in categorical quantum mechanics to consider the so-called multiplicative fragment of quantum mechanics. In this fragment, we may post-select rather than just make measurements according to the traditional postulates. As such, there is a canonical choice of post-selection such that $n=0$ and we acquire the multiplication of $\mathbb{C}\mathbb{Z}_d$ or $\mathbb{C}(\mathbb{Z}_d)$ depending on basis. While physically we cannot post-select, this is a useful toy model in which algebraic notions may be more conveniently related to quantum mechanical processes.
\end{remark}
Considering the same convention of frame, a rough merge gives:
\[\nabla_r: \{|i\>_L\mathop{{\otimes}}|j\>_L\mapsto q^{in}|i+j\>_L\}_{n\in \{0,\cdots,d-1\}}\]
by a similar argument, this time performing a measurement of $X_L\mathop{{\otimes}} X_L$ to merge patches at the top and bottom.
\subsection{Units and deletion}
While we are on the subject of measurements, we can delete a patch by measuring out every qudit associated to its lattice in the $Z$-basis. If we do so, we obtain the maps
\[{\epsilon}_r: \{|i\>_L\mapsto \delta_{n,i}\}_{n\in \{0,\cdots,d-1\}}.\]
In the $n=0$ outcome this is precisely the counit of $\mathbb{C}(\mathbb{Z}_d)$. We check this in Appendix~\ref{app:counit}. If we instead measure out each qudit in the $X$-basis we get
\[{\epsilon}_s: \{|i\>_L\mapsto q^{in}\}_{n\in \{0,\cdots,d-1\}},\]
where we see the counit of $\mathbb{C}\mathbb{Z}_d$.
One can clearly also construct the units of $\mathbb{C}(\mathbb{Z}_d)$ and $\mathbb{C}\mathbb{Z}_d$, being $\eta_s: \sum_i|i\>_L$ and $\eta_r: |0\>_L$ respectively. The last remaining pieces of the puzzle are the antipode and Fourier transform on the logical space.
\subsection{Antipode}
First we demonstrate how to map between the $|0\>_L$ and $|\delta_i\>_L$ states. If we are in the $|0\>_L = \prod_{v}A(v)\bigotimes_E |0\>$ state and apply a Fourier transform $H = \sum_{j,k}q^{-jk}|k\>\<j|$ to every qudit we have $H|0\> = \sum_i |i\>$.\footnote{The $H$ stands for Hadamard, which is what the qubit Fourier transform is commonly called. The qudit Fourier transform is not a Hadamard matrix in general.} Similarly, as $HX = Z^\dagger H$ (and $XH = HZ$) we translate all $A(v)$ projectors to $B(p)$ projectors by rotating the lattice to exchange vertices with faces
\[\tikzfig{vertex_rotate}\]
such that the $X, X^\dagger$ match up with $Z^\dagger, Z$ appropriately when considering the clockwise conventions from Def~\ref{def:lattice_acts}.
This is just a conceptual rotation, and there does not need to be any \textit{physical} rotation in space. Thus we have
\[H_L |0\>_L=(\bigotimes_E H) \prod_{v}A(v)\bigotimes_E |0\> = \prod_pB(p)\bigotimes_E \sum_i |i\> = |\delta_0\>_L\]
where $H_L = \bigotimes_E H$ is the logical Fourier transform, and the lattice has been mapped:
\[\tikzfig{rotate_patch}\]
$H_L$ also takes $X$-type string operators to $Z$-type string operators in the quasiparticle basis but with a sign change, and thus we have
\[H_L|i\>_L = H_LX^i|0\>_L=Z^{-i}H_L|0\>_L = \sum_{k}q^{-ik}|k\>_L=|\delta_i\>_L\]
so it is genuinely a Fourier transform. Applying it twice gives
\[H_LH_L|i\>_L = \sum_{k,l}q^{-ik}q^{-kl}|l\>_L = \sum_l \delta_{l,-i}|l\>_L = |-i\>_L\]
where the lattice is now as though the whole patch has been rotated in space by $\pi$ by the same argument as before. This is evidently the \textit{logical antipode}, $S_L = H_LH_L$.
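On a single qudit, the corresponding relations for $H$ can be checked directly. A numerical sketch (we normalise $H$ by $1/\sqrt{d}$, a factor the text elides, so that $H^2$ is exactly the negation map):

```python
import numpy as np

d = 5
q = np.exp(2j * np.pi / d)

# Qudit Fourier transform H|j> = (1/sqrt d) sum_k q^{-jk} |k>
H = np.array([[q ** (-j * k) for j in range(d)] for k in range(d)]) / np.sqrt(d)

X = np.roll(np.eye(d), 1, axis=0)          # X|i> = |i+1 mod d>
Z = np.diag([q ** i for i in range(d)])    # Z|i> = q^i |i>

# The negation map |i> -> |-i mod d>, the single-qudit antipode
neg = np.zeros((d, d))
for i in range(d):
    neg[(-i) % d, i] = 1

assert np.allclose(H @ H, neg)               # H^2 is the antipode
assert np.allclose(H @ X, Z.conj().T @ H)    # HX = Z^dagger H
assert np.allclose(X @ H, H @ Z)             # XH = HZ
```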
This completes the set of fault-tolerant operations we may perform with the $\mathbb{C}\mathbb{Z}_d$ lattice surgery. One can create other states in a non-error corrected manner and then perform state distillation to acquire the correct state with a high probability, but this is beyond the scope of the paper and very similar to e.g. \cite{FSG}.
\section{The ZX-calculus}\label{sec:zx}
The ZX-calculus is based on Hopf-Frobenius algebras sitting on the same object. It imports ideas from monoidal category theory to justify its graphical formalism \cite{Sel}. See \cite{HV} for an introduction from the categorical point of view. Calculations may be performed by transforming diagrams into one another, and the calculus may be thought of as a tensor network theory equipped with rewriting rules.
Here we present the syntax and semantics of ZX-diagrams for $\mathbb{C}\mathbb{Z}_d$. We are unconcerned with either universality or completeness \cite{Back}, and give only the necessary generators for our purposes; moreover, we adopt a slightly simplified convention. First, we have generators:
\[\tikzfig{units}\]
for elements, where the small red and green nodes are called `spiders', and diagrams flow from bottom to top.\footnote{Red and green are dark and light shades in greyscale.} The labels associated to a spider are called phases. Then we have the multiplication maps,
\[\tikzfig{merge_spiders}\]
comultiplication,
\[\tikzfig{comult_spiders}\]
maps to $\mathbb{C}$,
\[\tikzfig{counits}\]
and Fourier transform\footnote{The Hadamard symbol here makes it look like it is vertically reversible, i.e. $H^\dagger = H$, but it is not; this is just a notational flaw.} plus antipode:
\[\tikzfig{hadamard}\]
Now, these generators obey all the normal Hopf rules: associativity of multiplication and comultiplication, unit and counit, bialgebra and antipode laws, but that is not all. The ZX-calculus makes use of an old result by Pareigis \cite{Par}, which states that all finite-dimensional Hopf algebras on vector spaces automatically give two Frobenius structures, which in the present case correspond to the red and green spiders above. In this case, they are in fact so-called $\dagger$-special commutative Frobenius algebras ($\dagger$-SCFAs) \cite{CPV}. Such algebras have a normal form, such that any connected set of green or red spiders may be combined into a single green or red spider respectively, summing the phases \cite{CD}. This is called the \textit{spider theorem}. As an easy example, observe that we can define the $X^a$ gate in the ZX-calculus as:
\[\tikzfig{X_gate_spider}\]
and similarly for a $Z^b$ gate,
\[\tikzfig{Z_gate_spider}.\]
The Fourier transform then `changes colour' between green and red spiders. We show these axioms in Appendix~\ref{app:zx_axioms}. For a detailed exposition of the qudit ZX-calculus in greater generality see \cite{W1}.
Now, one can immediately see that the generators are automatically (by virtue of the $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ structures) in bijection with the lattice surgery operations described previously. The bijection between this fragment of the ZX-calculus and lattice surgery was spotted by de Beaudrap and Horsman in the qubit case \cite{BH}; however, their presentation emphasises the Frobenius structures. The algebraic explanation for the lattice surgery properties is all in the Hopf structure: in summary, it is because the string operators are Hopf-like.\footnote{We formalise such operators as module maps in \cite{Cow}.} The Frobenius structures are still useful diagrammatic reasoning tools because of the spider theorem, and also because the two interacting Frobenius algebras correspond to the rough (red spider) and smooth (green spider) operations. There is a convenient 3-dimensional visualisation for this using `logical blocks', which we defer to Appendix~\ref{app:block}. There we also include Table~\ref{tbl:lat_oper}, which is a dictionary between lattice operations, ZX-diagrams and linear maps.
\subsection{Gate synthesis}\label{sec:synth}
Using the ZX-calculus we can thus design logical protocols in a straightforward manner. We have already implicitly shown a state injection protocol, being the spider merges for the $X^a$ and $Z^b$ gates above, but we can go further. A common gate in the circuit model is the controlled-$X$ ($CX$) gate. In qudit quantum computing this is defined as the map
\[CX: |i\> \otimes |j\> \mapsto |i\> \otimes |i+j\>\]
which in the ZX-calculus we might represent as, say,
\[\tikzfig{cnot_spiders}.\]
In the first diagram we perform a rough split followed by a smooth merge; in the second we do the opposite. In the third and fourth we first generate a maximally entangled state and then perform a smooth and rough merge on either side. Trivial rewrites using the spider theorem show that these are equal, and conversions into linear maps do indeed yield the $CX$. Note that we implicitly assumed the $n=0$ measurement outcomes for the merges, but we assert that in this case the protocol works deterministically by applying corrections. This is a generalisation of protocols specified in \cite{BH}, and the correction arguments are identical.
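As an informal check, in the spirit of the second diagram, the following sketch composes a smooth split on the control with a rough merge into the target (assuming the $n=0$ outcome throughout), modelling the logical space directly as $\mathbb{C}^d$, and recovers the $CX$:

```python
import numpy as np

d = 3

def basis(i, dim):
    e = np.zeros(dim)
    e[i] = 1
    return e

# Smooth split: |i> -> |i> (x) |i>
Delta_s = np.zeros((d * d, d))
for i in range(d):
    Delta_s[i * d + i, i] = 1

# Rough merge with the n = 0 outcome: |i> (x) |j> -> |i+j>
nabla_r = np.zeros((d, d * d))
for i in range(d):
    for j in range(d):
        nabla_r[(i + j) % d, i * d + j] = 1

# Copy the control, then add the copy into the target
CX = np.kron(np.eye(d), nabla_r) @ np.kron(Delta_s, np.eye(d))

# Check CX: |i> (x) |j> -> |i> (x) |i+j>
for i in range(d):
    for j in range(d):
        out = CX @ np.kron(basis(i, d), basis(j, d))
        assert np.allclose(out, np.kron(basis(i, d), basis((i + j) % d, d)))
```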
We can also easily see that the lattice surgery operations are not universal, even with the addition of logical $X_L$ and $Z_L$ gates using string operators. All phases have integer values, and so we cannot even achieve all single-qudit gates in the 2nd level of the Clifford hierarchy fault-tolerantly. For example, we cannot construct a $\sqrt{X}_L$ gate with the operations listed here.
With this limitation in mind, in Appendix~\ref{app:generalisations} we discuss the prospects for expanding the scope of the model to other group algebras and to Hopf algebras more generally.
\section{Conclusion}
We have shown that lattice surgery is straightforward to generalise to qudits, assuming an underlying abelian group structure. The resultant diagrammatics which can be used to describe computation are elegant, concise and powerful. We currently do not know how this generalises further, and what the connections are to quantum field theories. We aim to tackle these issues in future work.
\input{biblio}
\input{appendix}
\bibliographystyle{eptcs}
\end{document}
\section{Circuits for measuring stabilisers}\label{app:circs}
Given the face
\[\tikzfig{face_to_be_measured}\]
we can perform a face measurement using the circuit
\[\tikzfig{Z_stab_measurement}\]
i.e. a measurement of the $Z\mathop{{\otimes}} Z\mathop{{\otimes}} Z^\dagger\mathop{{\otimes}} Z^\dagger$ operator. The $CX$ gates act as $|i\>\mathop{{\otimes}} |j\>\mapsto |i\>\mathop{{\otimes}}|i+j\>$, and the yellow boxes are Fourier transforms $H$. Note that $H^2 : |i\>\mapsto |-i\>$. Hence one can calculate that this circuit is the map
\[|i\>\mathop{{\otimes}}|j\>\mathop{{\otimes}}|k\>\mathop{{\otimes}}|l\>\mapsto \delta_a(i+j-k-l)|i\>\mathop{{\otimes}}|j\>\mathop{{\otimes}}|k\>\mathop{{\otimes}}|l\> \]
for some $a\in \mathbb{Z}_d$. For a vertex
\[\tikzfig{vertex_to_be_measured}\]
we have
\[\tikzfig{X_stab_measurement}\]
measuring the $X\mathop{{\otimes}} X\mathop{{\otimes}} X^\dagger\mathop{{\otimes}} X^\dagger$ operator. The $CX$ gates also act as $|\delta_i\>\mathop{{\otimes}}|\delta_j\>\mapsto |\delta_{i-j}\>\mathop{{\otimes}}|\delta_j\>$, motivating the exchanged control and target and application of $H^2$ to the other qudits. Then we see that this circuit is the map
\[|\delta_i\>\mathop{{\otimes}}|\delta_j\>\mathop{{\otimes}}|\delta_k\>\mathop{{\otimes}}|\delta_l\>\mapsto \delta_b(i+j-k-l)|\delta_i\>\mathop{{\otimes}}|\delta_j\>\mathop{{\otimes}}|\delta_k\>\mathop{{\otimes}}|\delta_l\>\]
for some $b\in \mathbb{Z}_d$.
\section{Fourier basis for patches}\label{app:fourier_patch}
Consider the small patch
\[\tikzfig{small_patch}\]
Now, $|i\>_L$ is the following state:
\[\tikzfig{patch_0}\]
where we have taken $|0\>_L$ and applied an $X$-type string from left to right. Now, consider $|\delta_0\>_L$:
\[\tikzfig{patch_plus}\]
where we performed a change of variables $g\mapsto -g$, $h\mapsto -h$. Now, $\delta_0(d+a-c-f-g+h)$ holds iff $d+a-g=i$ and $-f-c+h=-i$ for some $i\in \mathbb{Z}_d$. Thus we have $|\delta_0\>_L = \sum_i|i\>_L$. If we then apply a $Z$-type string operator from top to bottom in the quasiparticle basis we see that $|\delta_j\>_L = \sum_iq^{-ij}|i\>_L$.
One could then show that the bases are consistent under Fourier transform for all sizes of patch by induction, using the above as the base case.
\section{Proof of lattice merges}\label{app:merge}
We demonstrate the smooth merge on a small patch but it is easy to see that the same method applies for arbitrary large patches. We begin with two patches, in the $|\delta_g\>_L$ and $|\delta_h\>_L$ states respectively.
\[\tikzfig{patch_delta_0}\]
Then initialise two new edges between, each in the $\delta_0$ state.
\[\tikzfig{patch_delta_0_together}\]
where we have exaggerated the length of the new edges for emphasis. Now if we apply stabiliser measurements at all points we see that the only relevant ones are the face measurements including the new edges (the vertex measurements will still yield $A(v)$ unless a physical error has appeared). The relevant measurements give us
\[\delta_s(c-w-k);\quad \delta_r(k+i+d-c-j-x+w-l);\quad \delta_t(-d+l+x)\]
for each new face, where $r,s,t\in \mathbb{Z}_d$. By substitution this gives
\[\delta_r(k+i+d-c-j-x+w-l) = \delta_r(-t-s+i-j) = \delta_{r+t+s}(i-j) = \delta_n(i-j) = \delta_i(n+j)\]
where $n$ is the group product of $r,t,s$ in $\mathbb{Z}_d$. Computationally, $n$ is the important \textit{measurement outcome} of the merge. Plugging back in to the patches we have
\[\tikzfig{merge_outcome_patch}\]
In the positive outcome case, i.e. when $s=r=t=0$, it is immediate that we have $|\delta_{g+h}\>_L$ on the combined patch. Otherwise, we can `fix' the internal additions of $s, t, n$ to the edges with string operators or otherwise accommodate them into the Pauli frame in the same manner as described in e.g. \cite{BH}. Then we are left with $q^{ng}|\delta_{g+h}\>_L$, as stated.
The Fourier transformed version of the above explains the rough merges as well, so we do not describe it explicitly.
\section{Proof of lattice counits}\label{app:counit}
We now show a `smooth counit' on a patch with state $|\delta_j\>_L$:
\[\tikzfig{delta_j_patch}\]
Measure out all edges in the $Z$ basis, giving
\[\sum_{a,b,c,d,i}q^{ij}\delta_r(a)\delta_s(a-c)\delta_t(c)\delta_u(i+b-a)\delta_v(b-d)\delta_w(i+d-c)\delta_x(-b)\delta_y(-d)\]
for some $r,\cdots,y\in \mathbb{Z}_d$. Then we observe that $\delta_u(i+b-a)=\delta_i(a-b-u)=\delta_i(n)$ for $n=a-b-u$, and by performing some other substitutions we arrive at
\[q^{nj}\delta_v(y-x)\delta_w(n-y-t)\delta_s(n-u-x-t)\]
Importantly, the only factor here which depends on the input state is $q^{nj}$. All the $\delta$-functions are merely conditions regarding which measurement outcomes are possible due to the lattice geometry. These will always be satisfied by our measurements, thus we have just
\[|\delta_j\>_L\mapsto q^{nj}\]
for $n\in\mathbb{Z}_d$, which in the other basis is $|i\>_L\mapsto \delta_{n,i}$ as stated. The rough counit follows similarly.
\section{Qudit ZX-calculus axioms}\label{app:zx_axioms}
We show some relevant axioms for the fragment of qudit ZX-calculus which interests us. These simply coincide with the rules from Hopf and Frobenius structures, along with the Fourier transform. We ignore the more general phase group \cite{W1}, and also leave out non-zero scalars. First, we define a spider
\[\tikzfig{spider_theorem}\]
which is well-defined due to associativity and specialness of the underlying Frobenius structure. The spider is also invariant under exchange of input wires with each other, and likewise for output wires, as the Frobenius algebra is (co)commutative. A phaseless spider with 1 input and 1 output is the identity:
\[\tikzfig{phaseless_spider}\]
Then we have the Fourier exchange rule:
\[\tikzfig{fourier_exchange}\]
which encodes Lemma~\ref{lem:fourier} graphically.
Then we have the bialgebra rules
\[\tikzfig{bialgebra_rules}\]
and rules pertaining to the antipode:
\[\tikzfig{antipode_axioms}\]
This is far from an exhaustive (or complete) list of rules.
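The bialgebra and antipode rules can be checked concretely for $\mathbb{C}\mathbb{Z}_d$. The following Python sketch is our own illustration (it is not part of the calculus itself): it realises the green generators as matrices in the computational basis and verifies the main bialgebra rule and the antipode rule.

```python
import numpy as np

d = 5  # qudit dimension; any d >= 2 works

# copy (comultiplication) |i> -> |i>|i> and add (multiplication) |i>|j> -> |i+j>
copy = np.zeros((d * d, d))
add = np.zeros((d, d * d))
for i in range(d):
    copy[i * d + i, i] = 1
    for j in range(d):
        add[(i + j) % d, i * d + j] = 1

# antipode |i> -> |-i> and the swap on two qudits
antipode = np.zeros((d, d))
swap = np.zeros((d * d, d * d))
for i in range(d):
    antipode[(-i) % d, i] = 1
    for j in range(d):
        swap[j * d + i, i * d + j] = 1

I = np.eye(d)

# main bialgebra rule:
# copy . add = (add (x) add) . (I (x) swap (x) I) . (copy (x) copy)
lhs = copy @ add
rhs = np.kron(add, add) @ np.kron(I, np.kron(swap, I)) @ np.kron(copy, copy)
assert np.array_equal(lhs, rhs)

# antipode rule: add . (antipode (x) I) . copy = unit . counit, i.e. |i> -> |0>
unit_counit = np.zeros((d, d))
unit_counit[0, :] = 1
assert np.array_equal(add @ np.kron(antipode, I) @ copy, unit_counit)
```

The red family is the Fourier conjugate of this green family, so the same checks hold after conjugating by the transform.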
\section{The logical block depiction}\label{app:block}
The lattice at a given time is drawn with a red line for a smooth boundary and green for a rough boundary:
\[\includegraphics[width=0.35\textwidth]{colour_patch}\]
where the surface is shaded blue for clarity. A block extending upwards represents the transformation over time. For example:
\[\includegraphics[width=0.1\textwidth]{id_block}\]
We call this the `logical block' depiction, following similar work in \cite{Logic}.
Table~\ref{tbl:lat_oper} is an explicit dictionary between lattice surgery operations, qudit ZX-calculus and linear maps in the multiplicative fragment, i.e. the $n=0$ measurement outcomes. We choose to use the multiplicative fragment to highlight the visual connection between the columns. We see that red and green spiders correspond to rough and smooth operations respectively.
\begin{table}
\centering
\begin{tabular}{ | m{3cm} | c | m{2cm} | m{3cm} | }
\hline
Lattice operation & Logical block & ZX-diagram & Linear map\\ \hline
smooth unit
&
\begin{minipage}{.05\textwidth}
\includegraphics[width=\linewidth]{smooth_unit_block}
\end{minipage}
&
\[\tikzfig{smooth_unit_ZX}\]
&
\[\sum_i|i\>\]
\\
\hline
smooth split
&
\begin{minipage}{.1\textwidth}
\includegraphics[width=\linewidth]{smooth_split_block}
\end{minipage}
&
\[\tikzfig{smooth_split_ZX}\]
&
\[|i\>\mapsto |i\>\mathop{{\otimes}}|i\>\]
\\
\hline
smooth merge
&
\begin{minipage}{.1\textwidth}
\includegraphics[width=\linewidth]{smooth_merge_block}
\end{minipage}
&
\[\tikzfig{smooth_merge_ZX}\]
&
\[|i\>\mathop{{\otimes}}|j\>\mapsto \delta_{i,j}|i\>\]
\\
\hline
smooth counit
&
\begin{minipage}{.05\textwidth}
\includegraphics[width=\linewidth]{smooth_counit_block}
\end{minipage}
&
\[\tikzfig{smooth_counit_ZX}\]
&
\[|i\>\mapsto 1\]
\\
\hline
rough unit
&
\begin{minipage}{.05\textwidth}
\includegraphics[width=\linewidth]{rough_unit_block}
\end{minipage}
&
\[\tikzfig{rough_unit_ZX}\]
&
\[|0\>\]
\\
\hline
rough split
&
\begin{minipage}{.1\textwidth}
\includegraphics[width=\linewidth]{rough_split_block}
\end{minipage}
&
\[\tikzfig{rough_split_ZX}\]
&
\[|i\>\mapsto \sum_h|h\>\otimes |i-h\>\]
\\
\hline
rough merge
&
\begin{minipage}{.1\textwidth}
\includegraphics[width=\linewidth]{rough_merge_block}
\end{minipage}
&
\[\tikzfig{rough_merge_ZX}\]
&
\[|i\>\mathop{{\otimes}}|j\>\mapsto |i+j\>\]
\\
\hline
rough counit
&
\begin{minipage}{.05\textwidth}
\includegraphics[width=\linewidth]{rough_counit_block}
\end{minipage}
&
\[\tikzfig{rough_counit_ZX}\]
&
\[|i\>\mapsto\delta_{i,0}\]
\\
\hline
rotation
&
\begin{minipage}{.05\textwidth}
\includegraphics[width=\linewidth]{fourier_block}
\end{minipage}
&
\[\tikzfig{fourier_spider}\]
&
\[|i\>\mapsto\sum_jq^{-ij}|j\>\]
\\
\hline
\end{tabular}
\caption{Dictionary of lattice surgery operations in the multiplicative fragment.}\label{tbl:lat_oper}
\end{table}
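The linear maps in the last column of Table~\ref{tbl:lat_oper} are easy to realise as matrices and check against each other. The following Python sketch is our own illustration; recall that the table drops non-zero scalars, which reappear below as factors of $d$.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)  # a primitive d-th root of unity, playing the role of q

# smooth split |i> -> |i>|i>  and  smooth merge |i>|j> -> delta_{ij}|i>
split_s = np.zeros((d * d, d))
for i in range(d):
    split_s[i * d + i, i] = 1
merge_s = split_s.T

# rough split |i> -> sum_h |h>|i-h>  and  rough merge |i>|j> -> |i+j>
split_r = np.zeros((d * d, d))
for i in range(d):
    for h in range(d):
        split_r[h * d + (i - h) % d, i] = 1
merge_r = split_r.T

# rotation |i> -> sum_j q^{-ij}|j>
F = np.array([[omega ** (-(i * j)) for i in range(d)] for j in range(d)])

I = np.eye(d)
# smooth merge after smooth split is the identity
assert np.array_equal(merge_s @ split_s, I)
# rough merge after rough split is d * identity (a suppressed scalar)
assert np.array_equal(merge_r @ split_r, d * I)
# units are units for the corresponding merges
unit_s = np.ones((d, 1))
unit_r = np.zeros((d, 1)); unit_r[0, 0] = 1
assert np.array_equal(merge_s @ np.kron(unit_s, I), I)
assert np.array_equal(merge_r @ np.kron(unit_r, I), I)
# the rotation is unitary up to the scalar d, and rough merge is the
# Fourier conjugate of smooth merge (again up to a factor of d)
assert np.allclose(F @ F.conj().T, d * I)
assert np.allclose(np.linalg.inv(F) @ merge_r @ np.kron(F, F), d * merge_s)
```

The factors of $d$ above are precisely the non-zero scalars suppressed in the table.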
We have no new results or proofs in this section, but we would like to discuss the diagrams of logical blocks. Diagrams of this sort have been used in an engineering setting to compile quantum circuits to lattice surgery \cite{GF,Logic}. To go from the cubes shown there to the tubes which we show here we merely relax the discretisation of space and time somewhat to expose the relationship with algebra. This relationship is relevant because such diagrams have appeared in a seemingly quite different context.
It is well known that the category of `2-dimensional thick tangles', \textbf{2Thick}, is monoidally equivalent to the category \textbf{Frob} freely generated by a noncommutative Frobenius algebra \cite{Lauda}. This should be unsurprising to those familiar with the notion of a `pair of pants' algebra. We say that \textbf{2Thick} is a \textit{presentation} of \textbf{Frob}. Similarly, the symmetric monoidal category \textbf{2Cob} of (diffeomorphism classes of) 2-dimensional cobordisms between (disjoint unions of) circles is a presentation of \textbf{ComFrob}, the category freely generated by a commutative Frobenius algebra \cite{Kock}.
This fact is important for topological quantum field theories (TQFTs). One can define an $n$-dimensional TQFT as a symmetric monoidal functor $\textbf{nCob}\rightarrow \textbf{Vect}$, where $\textbf{Vect}$ is the category of finite-dimensional vector spaces. The key point is that the functor takes (diffeomorphism classes of) manifolds as inputs and outputs linear maps between vector spaces, which are by definition manifold invariants. One can see that 2D TQFTs are in bijection with commutative Frobenius algebras in $\textbf{Vect}$.
In \cite{Reut}, Reutter gives a slightly different monoidal category, which we will call \textbf{2Block}. It has as objects disjoint unions of squares, with the same shading of sides as those in the logical block diagrams above. Then morphisms are classes of surfaces between the squares, such that the borders between the surfaces match up with the edges of the squares at the source and target objects and the surface colours are consistent with those of the squares' sides. While the morphisms are obviously quotiented by equivalence of surfaces up to border-preserving diffeomorphism, Reutter quotients by `saddle-invertibility' as well, which is not a rule one can acquire through topological moves alone, as it involves the closing and opening of holes.
Reutter conjectures that $\textbf{2Block}\simeq \textbf{uHopf}$, where \textbf{uHopf} is the category freely generated by a unimodular Hopf algebra.\footnote{In \textbf{Vect}, unimodularity is typically defined using integrals \cite{Ma:book}. In this more abstract setting it is defined by some axioms on dualities.} While we do not know enough about topology or geometry to prove (or disprove) this conjecture, we suspect one route is to consider Morse functions and classify the diffeomorphism classes near critical points. This is similar to one proof of $\textbf{2Cob}\simeq \textbf{ComFrob}$ \cite{Kock}. For the reader's convenience, we now reproduce a handful of the equivalences under topological deformation which motivate this conjecture. We have the axioms of a Frobenius algebra,
\[\includegraphics[width=0.3\linewidth]{unit_mult_block}\]
\[\includegraphics[width=0.3\linewidth]{associativity_block}\]
\[\includegraphics[width=0.3\linewidth]{frob_block}\]
and the same for red faces. These are just widened versions of the diagrams in \textbf{2Thick}. Then one can see the interpretation of a unimodular Hopf algebra as two interacting Frobenius algebras. We start with two Frobenius algebras and glue them together in such a way that they give the bialgebra and antipode axioms. The main bialgebra rule is
\[\includegraphics[width=0.3\linewidth]{bialgebra_block}\]
where we require saddle invertibility to close up a hole in the middle. This is also required for showing that comultiplication is a unit map and so on. Given all of these deformations and those involving the antipode, which is a twist by $\pi$, one can see that they define a functor $\textbf{uHopf}\rightarrow\textbf{2Block}$; the hard part is proving that this is an equivalence.
Now, Reutter also draws a comparison with representation theory and tensor category theory. It is striking that, given the unimodular Hopf algebra $\mathbb{C}\mathbb{Z}_d$, we can create a logical space on a patch isomorphic to the vector space of $\mathbb{C}\mathbb{Z}_d$ itself, and the logical operations precisely coincide with the linear maps defined by the algebra. We conjecture that lattice surgery is the `computational implementation' of this presentation of unimodular Hopf algebras, in the same way that the logical space of the Kitaev model on a closed orientable manifold $\mathcal{M}$ is isomorphic to the vector space $F(\mathcal{M})$ in the image of a Dijkgraaf-Witten theory $F : \textbf{2Cob}\rightarrow \textbf{Vect}$ when given the same manifold $\mathcal{M}$ \cite[Thm~3.2]{Cow}. It remains to be seen whether this extends further than just abelian group algebras.
\section{Generalisations and Hopf algebras}\label{app:generalisations}
While we have shown that lattice surgery works for arbitrary dimensional qudits, we emphasise that the algebraic structures involved are very simple so far. The lattice model in the bulk can be generalised significantly: first, one can replace $\mathbb{C}\mathbb{Z}_d$ with another finite abelian group algebra. As all finite abelian groups decompose into direct sums of cyclic groups this case follows immediately from the work herein and is uninteresting.
At the second level up, we can replace it with an arbitrary finite group algebra $\mathbb{C} G$. At this level several assumptions break down:
\begin{itemize}
\item $\mathbb{C} G$ still has a dual function algebra $\mathbb{C}(G)$, but the Fourier transform no longer coincides with Pontryagin duality, and the two algebras will no longer be isomorphic in general. One can still define a Fourier transform in the sense that it translates between convolution and multiplication, but in this case the Fourier transform is the Peter-Weyl isomorphism, i.e. a bimodule isomorphism between $\mathbb{C} G$ and a direct sum of matrix algebras labelled by the irreps of $G$.
\item The $\mathbb{C} G$ lattice model can no longer be described using string operators, and these must be promoted to ribbon operators \cite{Kit1}. This is because the lattice model is based on the Drinfeld double $D(G) = \mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$, where the associated action is conjugation. In the abelian case conjugation acts trivially and so we have $D(\mathbb{Z}_d) = \mathbb{C}(\mathbb{Z}_d) \otimes \mathbb{C}\mathbb{Z}_d$: the double splits into independent algebras, which give the $X$-type and $Z$-type string operators respectively.
\item There are still canonical choices of rough and smooth boundary, labelled by subgroups $K = \{e\}$ and $K = G$ for rough and smooth boundaries respectively. Similarly, we still have well-defined measurements, using representations of $\mathbb{C} G$ and $\mathbb{C}(G)$ for vertices and faces. However, the algebra of ribbon operators which are undetectable at the boundary, and hence the logical operations on a patch, becomes significantly more complicated, see \cite{PS2} for the underlying module theory. Preliminary calculations indicate that they are labelled by conjugacy classes (i.e. irreps) of $G$, and it is not even obvious that ${\rm dim}(\hbox{{$\mathcal H$}}_{vac}) = |G|$ as in the abelian case. This is quite an obstruction to calculating the logical maps corresponding to lattice surgery operations.
\end{itemize}
Of course, the Kitaev model can be generalised much further still. The third level would be arbitrary finite-dimensional Hopf $\mathbb{C}^*$-algebras. At this level even the calculations in the bulk are tricky, and many features were only recently resolved \cite{Cow,Meu,Chen}. Understanding lattice surgery in these models seems a formidable task. We aim to at least make some progress on this in upcoming work \cite{Cow2}.
The fourth (and highest) level is the maximal generality: weak Hopf $\mathbb{C}^*$-algebras, which are in bijection (up to an equivalence) with so-called \textit{unitary fusion categories} \cite{EGNO}. Even at this extreme generality, there are glimpses of hope. There are two canonical choices of boundaries given by the trivial (rough) and regular (smooth) module categories \cite{Os}, and we speculate that calculating some basic features like ${\rm dim}(\hbox{{$\mathcal H$}}_{vac})$ of a patch could be done using techniques from topological quantum field theory (TQFT). At this level of generality, the connections with TQFT become more tantalising. The parallels between topological quantum computing in the bulk and TQFTs are well-known, see e.g. \cite{Kir}, but lattice surgery introduces discontinuous deformations in the manner of geometric surgery. While boundaries of TQFTs are well-studied \cite{KS,FS}, we do not know whether TQFT theorists study the relation between geometric surgery on manifolds and linear algebra in the same manner as they do for, say, diffeomorphism classes of cobordisms.
class ChangeGlobalIdToInt < ActiveRecord::Migration
  def change
    # :integer is the Rails column type symbol (":int" is not recognised)
    change_column :users, :global_id, :integer
  end
end
\section{Introduction and Statement of Results}
We consider the enumeration of directed paths constrained to lie within a strip, with steps taken from a finite set of allowed steps having prescribed weights. Previous work in this context on bounded excursions found generating function expressions in terms of rectangular Schur functions \cite{Bousquet2006}. In related work \cite{2013arXiv1303.2724B}, bounded meanders were studied using a transfer matrix approach. Both meanders and excursions start at height zero, but while excursions are restricted to also end at height zero, meanders have no such endpoint restriction. In this paper, we extend these results by considering bounded paths starting and ending at arbitrary given heights. We express their generating functions in terms of skew Schur functions, and provide an expansion of these skew Schur functions in terms of a linear combination of Schur functions.
Related work has appeared in \cite{MR2735330}; in Theorem 4 of that paper the authors arrive at an expression using a generating variable for the endpoint position. An alternative proof of our Corollary \ref{schurcorol} could in principle be obtained from that expression.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.99\textwidth]{gd2}
\caption{Directed path of length $n=16$ with northeast steps $A=\{1,3,4,6\}$ and southeast steps $B=\{1,2,3,4\}$ ($\alpha=6$ and $\beta=4$) in a slit of width $w=7$, starting at height $u=2$ and ending at height $v=4$.
}
\label{gd}
\end{figure}
Consider a directed $n$-step path in the slit $ \mathbb{Z} \times \{0,1,\cdots,w\}$ of width $w$, starting at point $(0,u)$ and ending at point $(n,v)$, taking its steps from $\{1\} \times S$, where $S \subset \mathbb{Z}$ is a finite set. For simplicity we call $S$ the step set. Figure \ref{gd} shows such a path.
We separate the step set $S$ into sets of up and down steps by defining $A=S \cap \mathbb{Z}_0^+$ and $B=-(S \backslash A)$, where we have included the horizontal step in the set $A$. Every up step of height $a\in A$ is weighted by a weight $p_a$, and every down step of height $b\in B$ is weighted by a weight $q_b$. We denote the maximum of $A$ and $B$ by $\alpha$ and $\beta$, respectively, and assume that the weights $p_{\alpha}$ and $q_{\beta}$ are nonzero. The weight $\omega_{\varphi}$ of a path $\varphi$ is then the product of the weights of all the steps in the path. The introduction of weights implies that by assigning a weight of zero to any integer not appearing in $S$ we can without loss of generality consider $A=\{0,1,\ldots,\alpha\}$ and $B=\{1,2,\ldots,\beta\}$.
Given a step set $S$ and associated step weights, let $\Omega_{(u,v),n}^{w,\alpha,\beta}$ be the set of directed $n$-step paths in a strip of width $w$ starting at $(0,u)$ and ending at $(n,v)$. The main object of this paper is the generating function of directed weighted paths
\begin{equation}
G_{(u,v)}^{w,\alpha,\beta}(t)=\sum_{n=0}^\infty t^n\sum_{\varphi\in\Omega_{(u,v),n}^{w,\alpha,\beta}}\omega_{\varphi}\;.
\end{equation}
Having a finite strip width automatically implies that the generating function is rational, as the enumeration problem can be cast as a random walk problem on a finite graph and thus the generating function can be found from its transition matrix. This approach has for example been followed in \cite{2013arXiv1303.2724B}. One can easily deduce some complexity results, such as giving upper bounds on the degree of the polynomials appearing in the rational generating function, and also compute $G_{(u,v)}^{w,\alpha,\beta}(t)$ for specific parameter values. However, computing a general expression is considerably more difficult, with only some results available for meanders, {\it i.e.}~$G_{(0,v)}^{w,\alpha,\beta}(t)$ \cite{2013arXiv1303.2724B}. Following along ideas from \cite{Bousquet2006}, where an explicit expression was obtained for excursions, {\it i.e.}~$G_{(0,0)}^{w,\alpha,\beta}(t)$, our approach enables us to provide a general solution for $G_{(u,v)}^{w,\alpha,\beta}(t)$ in Theorem \ref{theorem1}.
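To make the transfer-matrix remark concrete, the following Python sketch (our own illustration; the function name and conventions are not from the literature cited above) computes the weighted path counts, i.e.~the coefficients of $G_{(u,v)}^{w,\alpha,\beta}(t)$, by iterating the one-step transition on the heights $0,\ldots,w$.

```python
def count_paths(w, up_wts, down_wts, u, v, n):
    """Weighted count of n-step paths from height u to height v staying in
    the strip [0, w].  up_wts[a] = p_a for a = 0..alpha (p_0 weights the
    horizontal step), down_wts[b-1] = q_b for b = 1..beta."""
    cur = [0] * (w + 1)
    cur[u] = 1
    for _ in range(n):
        nxt = [0] * (w + 1)
        for h, c in enumerate(cur):
            if not c:
                continue
            for a, p in enumerate(up_wts):
                if p and h + a <= w:
                    nxt[h + a] += c * p
            for b, q in enumerate(down_wts, start=1):
                if q and h - b >= 0:
                    nxt[h - b] += c * q
        cur = nxt
    return cur[v]

# sanity checks with +-1 steps and unit weights:
# a strip of width 1 admits only the single zig-zag excursion
assert count_paths(1, [0, 1], [1], 0, 0, 6) == 1
# for w = 4 the height bound is slack up to length 8, giving Catalan numbers
assert [count_paths(4, [0, 1], [1], 0, 0, 2 * n) for n in range(5)] == [1, 1, 2, 5, 14]
```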
\begin{theorem}
The generating function $G_{(u,v)}^{w,\alpha,\beta}(t)$ of directed weighted paths is given by
\begin{equation}
G_{(u,v)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\label{skewresult}
\end{equation}
where $\bar z$ are the $\alpha+\beta$ roots of
\[
K(t,z)=1-t\sum_{a \in A}p_az^a-t\sum_{b\in B}q_bz^{-b}\;,
\]
and $s_{\lambda/\mu}(z)$ is a skew Schur function.
\label{theorem1}
\end{theorem}
Schur functions form a linear basis for the space of all symmetric polynomials \cite{stanley_fomin_1999}. We can therefore express the skew Schur function in Theorem \ref{theorem1} as a linear combination of Schur functions.
\begin{corollary}
\label{schurcorol}
The generating function $G_{(u,v)}^{w,\alpha,\beta}(t)$ of directed weighted paths is given by
\[
G_{(u,v)}^{w,\alpha,\beta}(t)=(-1)^{1-\alpha}\frac{1}{tp_{\alpha}}\frac{\sum\limits_{l=0}^{r}s_{(w^{\alpha-1},w-(v-u)_+-l,(u-v)_++l,0^{\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\]
where $r=\min(u,v,w-u,w-v)$.
\end{corollary}
At this point we should like to remark that numerical experimentation with Maple led us to conjecture Corollary \ref{schurcorol} first, however we did not find a direct proof that avoided skew Schur functions.
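A numerical check of this kind is easy to reproduce. The sketch below (our own illustration, with assumed parameter values) evaluates the right-hand side of Corollary~\ref{schurcorol} via the bialternant determinant formula for Schur polynomials and compares it with a truncated power series obtained by direct path counting, for the unweighted $\pm1$ step set.

```python
import numpy as np

# assumed illustrative parameters: +-1 steps, so alpha = beta = 1, w = 2, t = 0.1
t, w, alpha, beta = 0.1, 2, 1, 1
p = [0.0, 1.0]   # p_0, p_1
q = [1.0]        # q_1

# kernel roots, from the coefficients of z^beta * K(t,z) (highest degree first)
deg = alpha + beta
coeffs = [0.0] * (deg + 1)
coeffs[alpha] += 1.0
for a, pa in enumerate(p):
    coeffs[alpha - a] -= t * pa
for b, qb in enumerate(q, start=1):
    coeffs[alpha + b] -= t * qb
zbar = np.roots(coeffs)

def schur(lam, z):
    """Schur polynomial s_lam(z) via the bialternant (determinant) formula."""
    n = len(z)
    lam = list(lam) + [0] * (n - len(lam))
    num = np.array([[zi ** (lam[j] + n - 1 - j) for j in range(n)] for zi in z])
    den = np.array([[zi ** (n - 1 - j) for j in range(n)] for zi in z])
    return np.linalg.det(num) / np.linalg.det(den)

def G_schur(u, v):
    """Right-hand side of the Corollary."""
    r = min(u, v, w - u, w - v)
    total = sum(
        schur([w] * (alpha - 1)
              + [w - max(v - u, 0) - l, max(u - v, 0) + l]
              + [0] * (beta - 1), zbar)
        for l in range(r + 1))
    return (-1) ** (1 - alpha) / (t * p[alpha]) * total / schur([w + 1] * alpha + [0] * beta, zbar)

def G_series(u, v, nmax=80):
    """Truncated power series of G_{(u,v)}(t) by direct path counting."""
    cur = [0.0] * (w + 1)
    cur[u] = 1.0
    total, tn = cur[v], 1.0
    for _ in range(nmax):
        nxt = [0.0] * (w + 1)
        for h, c in enumerate(cur):
            for a, pa in enumerate(p):
                if pa and h + a <= w:
                    nxt[h + a] += c * pa
            for bb, qb in enumerate(q, start=1):
                if qb and h - bb >= 0:
                    nxt[h - bb] += c * qb
        cur, tn = nxt, tn * t
        total += cur[v] * tn
    return total

for u in range(w + 1):
    for v in range(w + 1):
        assert abs(G_schur(u, v) - G_series(u, v)) < 1e-8
```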
Excursions, bridges, and meanders are all contained as special cases. For excursions we recover the result given in \cite{Bousquet2006},
\begin{equation}
G_{(0,0)}^{w,\alpha,\beta}(t)=G_{(w,w)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha},0^{\beta})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\end{equation}
and for bridges we find
\begin{equation}
G_{(0,w)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha-1},0^{\beta+1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}
\end{equation}
and
\begin{equation}
G_{(w,0)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha+1},0^{\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\end{equation}
which are related by obvious symmetry. Similarly, for meanders we find
\begin{equation}
G_{(0,v)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha-1},w-v,0^{\beta})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\end{equation}
\begin{equation}
G_{(w,v)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha},w-v,0^{\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\end{equation}
\begin{equation}
G_{(u,w)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha-1},u,0^{\beta})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;,
\end{equation}
\begin{equation}
G_{(u,0)}^{w,\alpha,\beta}(t)=\frac{(-1)^{1-\alpha}}{tp_{\alpha}}\frac{s_{(w^{\alpha},u,0^{\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^{\beta})}(\bar{z})}\;.
\end{equation}
We prove Theorem \ref{theorem1} and Corollary \ref{schurcorol} in a sequence of steps in Section 2. Section 3 contains examples of some specific step sets.
\section{Proofs}
We consider the generating function
\begin{equation}
G(t,z)=\sum_{v=0}^{w}G_{(u,v)}^{w,\alpha,\beta}(t)z^v\;,
\label{gf}
\end{equation}
where for convenience we drop the indices $w$, $\alpha$, $\beta$ and $u$ on the left-hand side. We present a functional equation satisfied by $G(t,z)$ and define the notion of the kernel for this functional equation (this is $K(t,z)$ in the statement of Theorem \ref{theorem1}), which up to a prefactor is a polynomial in $z$ of degree $\alpha+\beta$. Coefficients of the kernel can be interpreted in terms of elementary symmetric functions of the roots, which will be central in our approach. The functional equation is equivalent to setting up a system of linear equations, and using elementary symmetric functions will allow us to employ the Jacobi-Trudi formula to express the solution of the system in terms of skew Schur functions, leading to the expression in Theorem \ref{theorem1}.
\begin{proposition}
The generating function $G(t,z)$ satisfies the functional equation
\begin{multline}
G(t,z)=z^u+t\left(\sum_{a \in A} p_{a} z^a+\sum_{b \in B}\frac{q_{b}}{z^b}\right)G(t,z)\\-t\sum_{j=1}^{\infty}z^{w+j}\sum_{a \geq j}p_aG_{(u,w-a+j)}(t)-t\sum_{j=1}^{\infty}z^{-j}\sum_{b \geq j}q_bG_{(u,b-j)}(t)\;,
\label{feq}
\end{multline}
where $G_{(u,v)}(t)=G_{(u,v)}^{w,\alpha,\beta}(t)$.
\end{proposition}
\begin{proof}
An $n$-step walk is constructed by adding steps from the step set $S$ to an $(n-1)$-step walk, provided $n>0$.
The zero-step walk starting and ending at height $u$ is represented by $z^u$. The term $t\left(\sum_{a \in A} p_{a} z^a+\sum_{b \in B}\frac{q_{b}}{z^b}\right)G(t,z)$ corresponds to appending steps without regard to the boundaries. The disallowed steps are then removed by subtracting terms which account for steps crossing the strip boundaries at $y=0$ and $y=w$. More precisely, $t\sum_{j=1}^{\infty}z^{w+j}\sum_{a \geq j}p_aG_{(u,w-a+j)}(t)$ corrects the overcounting by steps going above the line $y=w$, and $t\sum_{j=1}^{\infty}z^{-j}\sum_{b \geq j}q_bG_{(u,b-j)}(t)$ corrects the overcounting by steps going below the line $y=0$.
\end{proof}
Next, we rearrange the functional equation as
\begin{multline}
\left(1-t\sum_{a \in A} p_{a} z^a-t\sum_{b \in B}\frac{q_{b}}{z^b}\right)G(t,z)=\\z^u-t\sum_{j=1}^{\infty}z^{w+j}\sum_{a \geq j}p_aG_{(u,w-a+j)}(t)-t\sum_{j=1}^{\infty}z^{-j}\sum_{b \geq j}q_bG_{(u,b-j)}(t).
\label{kfeq}
\end{multline}
The prefactor of $G(t,z)$ in (\ref{kfeq}) is called the kernel of the functional equation,
\begin{equation}
K(t,z)=1-t\sum_{a \in A}p_az^a-t\sum_{b \in B}q_bz^{-b}.
\end{equation}
It will be convenient to relate the coefficients of the kernel to elementary symmetric functions.
\begin{lemma}
\label{lemma1}
The kernel $K(t,z)$ can be written as
\begin{equation}
K(t,z)=-tp_{\alpha}\sum_{i=0}^{\alpha+\beta}z^{\alpha-i}(-1)^{i}e_i(z_1,z_2, \ldots, z_{\alpha+\beta})
\end{equation}
where $\bar{z}=z_1,z_2, \ldots, z_{\alpha+\beta}$ are the roots of the kernel $K(t,z)$, and we have
\begin{equation}
-tp_a=-tp_{\alpha}(-1)^{\alpha-a}e_{\alpha-a}(\bar{z})
\label{p-val}
\end{equation}
\begin{equation}
1-tp_0=-tp_{\alpha} (-1)^{\alpha}e_{\alpha}(\bar{z})
\label{p0-val}
\end{equation}
\begin{equation}
-tq_b=-tp_{\alpha}(-1)^{\alpha+b}e_{\alpha+b}(\bar{z})
\label{q-val}
\end{equation}
for $1\le a\le\alpha$ and $1\le b\le\beta$.
\end{lemma}
\begin{proof}
Writing the kernel in terms of its roots $\bar z$ we get
\begin{equation}
K(t,z)
=-\frac{tp_{\alpha}}{z^{\beta}}\prod_{k=1}^{\alpha+\beta}(z-z_k)
=-tp_{\alpha}\sum_{i=0}^{\alpha+\beta}z^{\alpha-i}(-1)^{i}e_i(\bar{z})\;,
\end{equation}
where we have introduced the elementary symmetric functions $e_j$ \cite{macdonald1998symmetric} defined by
\begin{equation}
\prod_{k=1}^{n}(z+z_k)=\sum_{j=0}^{n}z^{n-j}e_{j}(\bar{z})\;.
\end{equation}
Comparing coefficients of
\begin{equation}
1-t\sum_{a=0}^{\alpha}p_az^a-t\sum_{b=1}^{\beta}q_bz^{-b}=-tp_{\alpha}\sum_{i=0}^{\alpha+\beta}z^{\alpha-i}(-1)^{i}e_i(\bar{z})
\label{kernel_e}
\end{equation}
for different powers of $z$ completes the proof.
\end{proof}
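The identities of Lemma~\ref{lemma1} are easy to verify numerically. The Python sketch below (our own illustration, with assumed weights) computes the kernel roots with \texttt{numpy.roots} and compares both sides of \eqref{p-val}, \eqref{p0-val} and \eqref{q-val}.

```python
import numpy as np
from itertools import combinations
from math import prod

# assumed illustrative parameters: alpha = 2, beta = 1
t = 0.1
alpha, beta = 2, 1
p = [0.2, 0.3, 0.5]  # p_0, ..., p_alpha
q = [0.4]            # q_1, ..., q_beta

# coefficients of z^beta * K(t,z), highest degree (alpha + beta) first
deg = alpha + beta
coeffs = [0.0] * (deg + 1)
coeffs[alpha] += 1.0                # the constant 1 contributes at z^beta
for a, pa in enumerate(p):
    coeffs[alpha - a] -= t * pa     # -t p_a z^{a+beta}
for b, qb in enumerate(q, start=1):
    coeffs[alpha + b] -= t * qb     # -t q_b z^{beta-b}

zbar = np.roots(coeffs)             # the alpha + beta kernel roots

def e(i):
    """Elementary symmetric polynomial e_i of the kernel roots."""
    return sum(prod(c) for c in combinations(zbar, i))

# identities (p-val), (p0-val) and (q-val)
for a in range(1, alpha + 1):
    assert abs(-t * p[a] - (-t * p[alpha]) * (-1) ** (alpha - a) * e(alpha - a)) < 1e-9
assert abs((1 - t * p[0]) - (-t * p[alpha]) * (-1) ** alpha * e(alpha)) < 1e-9
for b, qb in enumerate(q, start=1):
    assert abs(-t * qb - (-t * p[alpha]) * (-1) ** (alpha + b) * e(alpha + b)) < 1e-9
```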
\begin{proposition}
The functional equation \eqref{feq} is equivalent to
\begin{equation}
\sum_{v=0}^{w} \left(\sum_{i=0}^{\alpha+\beta}(-1)^{i}e_iG_{(u,v-\alpha+i)}(t)\right)z^v =-\frac{z^u}{tp_{\alpha}}
\label{syseq}
\end{equation}
\end{proposition}
\begin{proof}
We aim to rewrite the functional equation \eqref{kfeq} in terms of elementary symmetric functions instead of weights $p_a, q_b$ and $t$.
Using Lemma \ref{lemma1}, we find
\begin{multline}
\left(\sum_{i=0}^{\alpha+\beta}z^{\alpha-i}(-1)^{i}e_i\right)\sum_{v=0}^{w}G_{(u,v)}(t)z^v=
-\frac{z^u}{tp_{\alpha}}\\+\sum_{j=1}^{\infty}\sum_{a \geq j}z^{w+j}(-1)^{\alpha-a}e_{\alpha-a}G_{(u,w-a+j)}(t)\\
+\sum_{j=1}^{\infty}\sum_{b \geq j}z^{-j}(-1)^{\alpha+b}e_{\alpha+b}G_{(u,b-j)}(t)\;.
\label{stuff}
\end{multline}
We rewrite the left hand side of \eqref{stuff} as
\begin{multline}
\left(\sum_{i=0}^{\alpha+\beta}z^{\alpha-i}(-1)^{i}e_i\right)\sum_{v=0}^{w}G_{(u,v)}(t)z^v
=\sum_{v=-\infty}^{\infty}\left(\sum_{i=0}^{\alpha+\beta}(-1)^{i}e_iG_{(u,v-\alpha+i)}(t)\right)z^{v},
\end{multline}
where we have extended the limits of summation over $v$, as $G_{(u,v)}(t)$ is zero if the end point of the path is outside the strip.
Careful inspection of \eqref{stuff} shows that all the powers $z^v$ with $v<0$ or $v>w$ cancel,
and we are left with the desired result.
\end{proof}
The boundary corrections in the functional equation have of course been introduced to precisely that effect, as they were added to correct for steps that went beyond the upper and lower boundaries.
\begin{proof}[Proof of Theorem \ref{theorem1}]
Comparing coefficients of $z^v$ for $0 \leq v \leq w$, equation \eqref{syseq} is equivalent to a system of $w+1$ equations given by
\begin{equation}
\tilde{A}x=b\;,
\label{meq}
\end{equation}
where $x$ is the vector of unknowns $G_{(u,v)}(t)$, and $b$ is the column vector on the right hand side with a single non-zero entry $-\frac{1}{tp_{\alpha}}$ at position $u$.
Using the convention that $e_k=0$ if $k<0$ or $k>\alpha+\beta$, $\tilde{A}$ is the coefficient matrix
\begin{equation}
\tilde{A}=\left((-1)^{\alpha+j-i}e_{\alpha+j-i}\right)_{i,j=0}^w\;,
\end{equation}
so that the non-zero entries of $\tilde A$ form a diagonal band. We can evaluate the unknowns $G_{(u,v)}(t)$ for $v=0,\ldots,w$ by using Cramer's rule. Before we do this, we first remove the negative signs of the entries in $\tilde{A}$ to write \eqref{meq} in terms of the matrix
\begin{equation}
A=\left(e_{\alpha+j-i}\right)_{i,j=0}^w=\begin{bmatrix}
e_{\alpha} & e_{\alpha+1}& e_{\alpha+2} & \cdots & e_{\alpha+\beta} & \cdots & 0 \\
e_{\alpha-1} & e_{\alpha} & e_{\alpha+1} & \cdots & e_{\alpha+\beta-1} & \cdots & 0 \\
e_{\alpha-2} & e_{\alpha-1} & e_{\alpha} & \cdots & e_{\alpha+\beta-2} & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
e_0 & e_1 & e_2 & \cdots & e_{\beta}& \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 &0& \cdots &e_{\alpha}
\end{bmatrix}\;.
\label{amatrix}
\end{equation}
We accomplish this by applying a transformation given by the diagonal matrix $S$ with entries $(S)_{ii}=(-1)^i$ for $0\leq i\leq w$.
The matrix equation \eqref{meq} will be transformed as
$S\tilde{A}S^{-1}Sx=Sb$.
We note that $S\tilde{A}S^{-1}=(-1)^{\alpha}A$ and $Sb=(-1)^ub$, so we have
\begin{equation}
(-1)^{\alpha}A(Sx)=(-1)^ub,
\label{tmateq}
\end{equation}
where
$(Sx)_k=(-1)^k G_{(u,k)}(t)$.
To evaluate $Sx$ using Cramer's rule, let $A_{(u,v)}$ be the matrix formed by replacing column $v$ in $A$ with the column vector $(-1)^\alpha Sb$, which has $(-1)^{u+1-\alpha}\frac{1}{tp_{\alpha}}$ at position $u$, so that
\begin{equation}
(-1)^vG_{(u,v)}(t)=\frac{|A_{(u,v)}|}{|A|}\,.
\end{equation}
What is left is to compute the determinants $|A|$ and $|A_{(u,v)}|$.
Using the second Jacobi--Trudi formula \cite{macdonald1998symmetric}, which
expresses the Schur function as a determinant in terms of the elementary symmetric functions as
\begin{equation}
s_{\lambda}=\det(e_{\lambda_i'+j-i})_{i,j=1}^{l(\lambda')}\;,
\label{jtfs}
\end{equation}
where $\lambda'$ is the partition conjugate to $\lambda$,
we can write $|A|$ in terms of a Schur function $s_\lambda$.
Comparing the determinant in \eqref{jtfs} with the matrix $A$ in \eqref{amatrix}, we can see that the conjugate partition $\lambda'$ is given by
\begin{equation}
\lambda'=\left(\alpha^{w+1}\right).
\end{equation}
From this we can write $\lambda=((w+1)^{\alpha},0^\beta)$ and so $|A|$ can be written as
\begin{equation}
|A|=s_{((w+1)^{\alpha},0^{\beta})}(z_1,z_2,\cdots,z_{\alpha+\beta}).
\label{dv}
\end{equation}
Note that we have chosen the convention to let the partition have the same number of parts as we have roots $z_1,z_2,\cdots,z_{\alpha+\beta}$, so that we supplement the partition with zero size parts as needed.
To evaluate the determinant of the matrix $A_{(u,v)}$, we make use of the fact that the only non-zero entry in the $v$-th column is $(-1)^{u+1-\alpha}\frac{1}{tp_{\alpha}}$ and
expand the determinant by that column to get
\begin{multline}
|A_{(u,v)}|=\frac{(-1)^{2u+v+1-\alpha}}{tp_{\alpha}}\\
\times
\begin{vmatrix}
e_{\alpha} & \cdots & e_{\alpha+v-1} & e_{\alpha+v+1} & \cdots& e_{\alpha+\beta} & \cdots & 0 \\
e_{\alpha-1} & \cdots & e_{\alpha+v-2} & e_{\alpha+v} & \cdots & e_{\alpha+\beta-1} & \cdots & 0 \\
e_{\alpha-2} & \cdots & e_{\alpha+v-3} & e_{\alpha+v-1} & \cdots & e_{\alpha+\beta-2} & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots& \ddots & \vdots \\
e_{\alpha-u+1} & \cdots & e_{\alpha+v-u} & e_{\alpha+v-u+2} & \cdots & e_{\alpha+\beta-u+1} & \cdots & 0 \\
e_{\alpha-u-1} & \cdots & e_{\alpha+v-u-2} & e_{\alpha+v-u-1} & \cdots & e_{\alpha+\beta-u-1} & \cdots & 0 \\
e_{\alpha-u-2} & \cdots & e_{\alpha+v-u-3} & e_{\alpha+v-u-1} & \cdots & e_{\alpha+\beta-u-2} & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
e_0 & \cdots & e_{v-1} & e_{v+1}& \cdots & e_{\beta} & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
0 & 0 & 0 &0 & \cdots &0 & \cdots & e_{\alpha}
\end{vmatrix}_{[w]}\;,
\label{remm}
\end{multline}
where the indices of $e_k$ increase by $2$ from the $(v-1)$-st to the $v$-th column.
Using the second Jacobi--Trudi formula \cite{macdonald1998symmetric} for skew Schur functions,
\begin{equation}
s_{\lambda/\mu}=\det(e_{\lambda_i'-\mu_j'+j-i})_{i,j=1}^{l(\lambda')}\;,
\label{ssjtf}
\end{equation}
we can express the determinant in \eqref{remm} by a skew Schur function. We find
\[\lambda'=(\alpha+1^u,\alpha^{w-u})\quad\mbox{and}\quad\mu'=(1^v,0^{w-v})\;,\]
and hence
\begin{equation}
\lambda=(w^{\alpha},u,0^{\beta-1})
\quad\mbox{and}\quad
\mu=(v,0^{\alpha+\beta-1}),
\end{equation}
where we have again added zero size parts to follow the convention established above.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{drawing-skew}
\caption{Skew partition $\lambda/\mu=(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})$ for the skew Schur function related to $\det A_{(u,v)}$. Here and in what follows we employ the `British' convention that the parts of the partition are depicted such that the largest part is at the top and the smallest one at the bottom. Note that we do not show parts of zero size.}
\label{figurethatneedstobementioned}
\end{figure}
A pictorial representation of the skew partition is given in Figure \ref{figurethatneedstobementioned}. We see that the associated skew partition is given by a rectangle of size $w\times\alpha$ which has a row of size $u$ added below and a row of size $v$ removed from the top row.
The corresponding skew Schur function is
$s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}(\bar{z})$,
and therefore
\begin{equation}
|A_{(u,v)}|=(-1)^{v+1-\alpha}\frac{1}{tp_{\alpha}}s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}(\bar{z})\;.
\end{equation}
Together with the expression for $|A|$ from \eqref{dv}, we find that $G_{(u,v)}$ is given by
\begin{equation}
(-1)^vG_{(u,v)}(t)=\frac{|A_{(u,v)}|}{|A|}=(-1)^{v+1-\alpha}\frac{1}{tp_{\alpha}}\frac{
s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}(\bar{z})}{s_{((w+1)^{\alpha},0^\beta)}(\bar{z})}\;,
\end{equation}
which gives \eqref{skewresult} as needed.
\end{proof}
To prove Corollary \ref{schurcorol}, we need a technical lemma expanding the skew Schur function occurring in Theorem \ref{theorem1} in terms of Schur functions.
\begin{lemma}
\label{lemma}
Let $\alpha,\beta,w>0$. Then for $0\leq u,v\leq w$ we have
\begin{multline}
s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}(z_1,\ldots,z_{\alpha+\beta})=\\
\sum_{l=0}^{r}s_{(w^{\alpha-1},w-(v-u)_+-l,(u-v)_++l,0^{\beta-1})}(z_1,\ldots,z_{\alpha+\beta}),
\label{skewtoschur}
\end{multline}
where $r=\min(u,v,w-u,w-v)$.
\end{lemma}
\begin{proof}
From Pieri's rule \cite[Corollary 7.15.9]{stanley_fomin_1999}, we know that for a skew partition $\lambda/\nu$, where $\nu$ is a single-part partition $(v)$,
\begin{equation}
s_{\lambda/\nu}(z)=\sum _{\mu}s_{\mu}(z),
\label{cor7159}
\end{equation}
where the sum ranges over all partitions $\mu \subseteq \lambda$ for which $\lambda/\mu$ is a horizontal strip of size $v$. In order to prove this lemma we specify the partitions $\lambda$ and $\nu$ as on the left hand side of \eqref{skewtoschur}. The partitions associated with the skew Schur function are
\begin{equation}
\lambda=(w^{\alpha},u,0^{\beta-1})
\end{equation}
and
\begin{equation}
\nu=(v,0^{\alpha+\beta-1}).
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{drawingl}
\caption{A diagram of the partition $\lambda=(w^{\alpha},u,0^{\beta-1})$ occurring in the identity \eqref{cor7159}.}
\label{lpartition}
\end{figure}
The aim is to find an explicit expression for all partitions $\mu$ in the sum on the right hand side of \eqref{cor7159}.
Given a partition $\lambda$ of the shape depicted in Figure \ref{lpartition}, we want to find all partitions $\mu$ for which $\lambda/\mu$ is a horizontal strip of size $v$. This can be viewed as removing a strip of size $v$ from $\lambda$ so that the remaining object is still a valid partition. This removal can only be done from the last two rows, as removing anything from above the last two rows will not correspond to the removal of a strip. As the bottom row is of size $u$, the options of removing a strip of size $v$ depend on the relative sizes of $u$ and $v$. For this we consider two cases depending on whether the size $v$ of the strip to be removed exceeds the length $u$ of the bottom row or not.
\subsubsection*{Case $u \leq v$:}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{proof-pic}
\caption{A diagram showing the structure of the partition $\mu=(w^{\alpha-1},w-(v-u)-l,l)$ in the case $u\leq v$. The shaded part corresponds to a strip of size $v$.}
\label{proofpic}
\end{figure}
Consider a skew partition $\lambda/\nu$ where $\lambda=(w^{\alpha},u,0^{\beta-1})$ and $\nu={(v,0^{\alpha+\beta-1})}$ as shown in Figure \ref{figurethatneedstobementioned}. If $u\leq v$ then the structure of the partitions $\mu$ appearing in the sum on the right hand side of \eqref{cor7159} is indicated in Figure \ref{proofpic}. The shaded portion shows the strip $\nu$ to be removed.
We remove part of $\nu$ from the bottom row of length $u$ and the remaining part from the row above, i.e. we shorten the bottom row by $u-l$ and the row above by $v-u+l$.
Removing the strip $\nu$ from $\lambda$ gives the following partition
\begin{equation}
\mu=(w^{\alpha-1},w-(v-u)-l,l)\;.
\end{equation}
Here, $l$ is constrained by
$$ l\leq \min(u,w-v).$$
Remembering that $u\leq v$, the sum can therefore be written as claimed,
\begin{equation}
s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}=\sum_{l=0}^{\min(u,v,w-u,w-v)}s_{(w^{\alpha-1},w-(v-u)-l,l,0^{\beta-1})}\;.
\label{schurulv}
\end{equation}
\subsubsection*{Case $u > v$:}
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{proof-pic2}
\caption{A diagram showing the structure of the partition $\mu=(w^{\alpha-1},w-l,u-v+l)$ in the case $u>v$. The shaded part corresponds to a strip of size $v$.}
\label{proofpic2}
\end{figure}
We use the same idea as in the first case and remove the strip $\nu$ from the partition $\lambda$. For $v<u$ the structure of the partitions $\mu$ appearing in the sum on the right hand side of \eqref{cor7159} is indicated in Figure \ref{proofpic2}.
Since $v<u$, we can remove $\nu$ completely from the lowest row and nothing from the row above, or we can remove part of it from the lowest row and the rest from the row above. We thus shorten the bottom row by $v-l$ and the row above by $l$.
Removing the strip $\nu$ from $\lambda$ therefore gives the partition
\begin{equation}
\mu=(w^{\alpha-1},w-l,u-v+l)\;.
\end{equation}
Here, $l$ is constrained by $$ l\leq \min(v,w-u).$$
Remembering that $u>v$, the sum can therefore also be written as claimed,
\begin{equation}
s_{(w^{\alpha},u,0^{\beta-1})/(v,0^{\alpha+\beta-1})}=\sum_{l=0}^{\min(u,v,w-u,w-v)}s_{(w^{\alpha-1},w-l,u-v+l,0^{\beta-1})}\;.
\label{schurvlu}
\end{equation}
Taken together, \eqref{schurulv} and \eqref{schurvlu} prove the lemma.
\end{proof}
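The expansion \eqref{skewtoschur} just proved can also be spot-checked numerically. The following Python/SymPy sketch does this for the illustrative choice $\alpha=1$, $\beta=2$, $w=3$ and all $0\le u,v\le w$, with both sides evaluated at an arbitrary rational point (a sanity check at one point, not a proof):

```python
import sympy
from itertools import combinations

zs = [sympy.Rational(1, 2), sympy.Rational(1, 3), sympy.Rational(1, 5)]  # alpha+beta = 3 variables

def e(k):
    # elementary symmetric polynomial e_k evaluated at zs
    if k < 0 or k > len(zs):
        return sympy.Integer(0)
    if k == 0:
        return sympy.Integer(1)
    return sum(sympy.prod(c) for c in combinations(zs, k))

def conj(lam):
    # conjugate partition (zero parts allowed)
    m = max(lam) if lam else 0
    return [sum(1 for p in lam if p >= i) for i in range(1, m + 1)]

def skew_schur(lam, mu):
    # dual Jacobi-Trudi determinant det(e_{lam'_i - mu'_j + j - i})
    lc, mc = conj(lam), conj(mu)
    mc = mc + [0]*(len(lc) - len(mc))
    n = len(lc)
    return sympy.Matrix(n, n, lambda i, j: e(lc[i] - mc[j] + j - i)).det()

def schur(lam):
    return skew_schur(lam, [])

w = 3  # alpha = 1, beta = 2: lambda = (w, u), mu = (v)
for u in range(w + 1):
    for v in range(w + 1):
        lhs = skew_schur([w, u], [v])
        r = min(u, v, w - u, w - v)
        vp, up = max(v - u, 0), max(u - v, 0)
        rhs = sum(schur([w - vp - l, up + l]) for l in range(r + 1))
        assert lhs == rhs
```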
We now use this Lemma to state the desired equivalent result for Theorem \ref{theorem1} in terms of Schur functions. Note that while in Lemma \ref{lemma} we did not need to specify the arguments of the functions, here it is important that the arguments are given by the kernel roots.
\begin{proof}[Proof of Corollary \ref{schurcorol}]
Applying Lemma \ref{lemma} to the skew Schur function appearing in Theorem \ref{theorem1} immediately yields the corollary.
\end{proof}
\section{Examples}
We now present several special cases involving small values of $\alpha$ and $\beta$. The first case we examine is
$(\alpha,\beta)=(1,1)$, which corresponds to weighted Motzkin paths, and also includes Dyck paths as a special case, if the weight of the horizontal step is set to $p_0=0$. This has been studied previously \cite{JOSUATVERGES20112064,CHEN2008329}, but the Schur function approach used here is different and focusses more on the structure of the problem than just giving explicit generating functions. We then examine the cases $(\alpha,\beta)=(1,2)$ and $(\alpha,\beta)=(2,1)$, the solution of which involves roots of cubic equations. Here, the strength of our Schur function approach becomes apparent, as any explicit solution involves cumbersome algebraic expressions.
\subsection{Motzkin paths}
Theorem \ref{theorem1} shows that the geometric structure of the problem is encoded in the partition shapes, while the step weights are ``hidden'' in the kernel roots. For Motzkin paths the result is particularly simple and elegant, involving only partitions with two parts,
\begin{equation}
G_{(u,v)}^{w,1,1}(t)
=
\frac{1}{tp_1}\frac{s_{(w,u)/(v,0)}(z_1,z_2)}{s_{(w+1,0)}(z_1,z_2)}\;.
\label{motzskewschur}
\end{equation}
From a computational point of view, skew Schur functions are of course not that easy to evaluate, but with the help of Corollary \ref{schurcorol} we are able to state the result in terms of Schur functions,
\begin{equation}
G_{(u,v)}^{w,1,1}(t)
=
\frac{1}{tp_1}\frac{\sum\limits_{l=0}^{r}s_{(w-(v-u)_+-l,(u-v)_++l)}(z_1,z_2)}{s_{(w+1,0)}(z_1,z_2)}.
\label{motzschur}
\end{equation}
To expand the Schur functions we write them in terms of determinants. The Schur function in the denominator of Equation \eqref{motzschur} is given by
\begin{align}
s_{(w+1,0)}(z_1,z_2)=&\frac{1}{\Delta}\begin{vmatrix}
z_1^{w+2} & z_2^{w+2}\\
z_1^0 & z_2^0
\end{vmatrix}\\
=&\frac{1}{\Delta}
(z_1^{w+2}-z_2^{w+2})\;,
\end{align}
where $\Delta=\Delta(z_1,z_2)=z_1-z_2$ comes from a Vandermonde determinant evaluation.
Similarly expressing the Schur function in the numerator of Equation \eqref{motzschur} as a determinant implies
\begin{multline}
s_{(w-(v-u)_+-l,(u-v)_++l)}(z_1,z_2)=\frac{1}{\Delta}\begin{vmatrix}
z_1^{w-(v-u)_+-l+1} & z_2^{w-(v-u)_+-l+1}\\
z_1^{(u-v)_++l} & z_2^{(u-v)_++l}
\end{vmatrix}\\
=\frac{1}{\Delta}(
z_1^{w-(v-u)_+-l+1}z_2^{(u-v)_++l} - z_2^{w-(v-u)_+-l+1}z_1^{(u-v)_++l}
).
\end{multline}
Now substituting the expansion of these Schur functions into \eqref{motzschur}, we finally obtain
\begin{equation}
G_{(u,v)}^{w,1,1}(t)=\frac{1}{tp_1}\dfrac{\sum\limits_{l=0}^{r}(
z_1^{w-(v-u)_+-l+1}z_2^{(u-v)_++l} - z_2^{w-(v-u)_+-l+1}z_1^{(u-v)_++l}
)}{
z_1^{w+2}-z_2^{w+2}}\;.
\end{equation}
Here, $z_1=z_1(t)$ and $z_2=z_2(t)$ are the roots of the kernel $K(t,z)=1-tp_0-tp_1z-tq_1/z$, so that they can be explicitly given as solutions of the quadratic equation
\begin{equation}
z^2-\frac{1/t-p_0}{p_1}z+\frac{q_1}{p_1}=0\;.
\label{motzkernel}
\end{equation}
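The Motzkin result can be sanity-checked against a direct transfer-matrix expansion of the paths in the strip. Every Schur function in \eqref{motzschur} is symmetric in $z_1,z_2$ and hence a polynomial in $e_1=z_1+z_2=(1/t-p_0)/p_1$ and $e_2=z_1z_2=q_1/p_1$, which \eqref{motzkernel} provides directly, so the roots never have to be extracted. A minimal Python/SymPy sketch, with two assumptions of this check made explicit: unit weights $p_0=p_1=q_1=1$, and the reading of $G_{(u,v)}$ as the generating function of paths from height $u$ to height $v$ in the strip $0\le\mathrm{height}\le w$:

```python
import sympy

t = sympy.symbols('t')
p0 = p1 = q1 = 1                   # unit step weights (illustrative assumption)
e1 = (1/t - p0)/p1                 # z1 + z2, read off the kernel quadratic
e2 = sympy.Rational(q1, p1)        # z1*z2

def h(m):
    # complete homogeneous h_m(z1,z2) via h_m = e1*h_{m-1} - e2*h_{m-2}
    if m < 0:
        return sympy.Integer(0)
    if m == 0:
        return sympy.Integer(1)
    return sympy.expand(e1*h(m - 1) - e2*h(m - 2))

def schur2(a, b):
    # s_{(a,b)}(z1,z2) = (z1*z2)^b * h_{a-b}(z1,z2)
    return e2**b * h(a - b)

def G(u, v, w):
    # the Schur-function formula for G^{w,1,1}_{(u,v)}
    r = min(u, v, w - u, w - v)
    vp, up = max(v - u, 0), max(u - v, 0)
    num = sum(schur2(w - vp - l, up + l) for l in range(r + 1))
    return sympy.cancel(num/(t*p1*schur2(w + 1, 0)))

def strip_counts(u, v, w, nmax):
    # transfer-matrix counts of Motzkin paths from height u to v in [0, w]
    M = sympy.Matrix(w + 1, w + 1, lambda i, j: 1 if abs(i - j) <= 1 else 0)
    A, out = sympy.eye(w + 1), [1 if u == v else 0]
    for _ in range(nmax):
        A = A*M
        out.append(A[u, v])
    return out

ser = sympy.series(G(0, 1, 2), t, 0, 7).removeO()
assert [ser.coeff(t, n) for n in range(7)] == strip_counts(0, 1, 2, 6)
```

For $w=1$, $u=v=0$ the formula collapses to $(1-t)/(1-2t)$, the generating function of paths that stay in a two-level strip.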
\subsection{Case ($\alpha=1$, $\beta=2$)}
Structurally, this case is rather similar to the preceding one; however, the Schur functions now have as arguments the three kernel roots $z_1(t), z_2(t)$ and $z_3(t)$, which are the solutions of the kernel equation given by
\begin{equation}
z^3-\frac{1/t-p_0}{p_1}z^2+\frac{q_1}{p_1}z+\frac{q_2}{p_1}=0\;,
\end{equation}
so that a general explicit solution would involve roots of a cubic equation.
Theorem \ref{theorem1} implies that
\begin{equation}
G_{(u,v)}^{w,1,2}(t)=\frac{1}{tp_{1}}\frac{s_{(w,u,0)/(v,0,0)}(z_1,z_2,z_3)}{s_{(w+1,0,0)}(z_1,z_2,z_3)}\;,
\label{motzskweschur12}
\end{equation}
and the result given in Corollary \ref{schurcorol} can be written as
\begin{equation}
G_{(u,v)}^{w,1,2}(t)=\frac{1}{tp_1}\frac{\sum\limits_{l=0}^{r}s_{(w-(v-u)_+-l,(u-v)_++l,0)}(z_1,z_2,z_3)}{s_{(w+1,0,0)}(z_1,z_2,z_3)}\;.
\label{motzschur12}
\end{equation}
We expand the Schur functions and write them in the form of determinants. The Schur function in the denominator is given by
\begin{multline}
s_{(w+1,0,0)}(z_1,z_2,z_3)=\frac{1}{\Delta}\begin{vmatrix}
z_1^{w+3} & z_2^{w+3} & z_3^{w+3}\\
z_1^1 & z_2^1 & z_3^1\\
z_1^0 & z_2^0 & z_3^0
\end{vmatrix}\\
=\frac{1}{\Delta}
(z_1^{w+3}(z_2-z_3)-z_2^{w+3}(z_1-z_3)+z_3^{w+3}(z_1-z_2))\;,
\end{multline}
where $\Delta=(z_1-z_2)(z_1-z_3)(z_2-z_3)$ is again a Vandermonde determinant (which will however cancel out in the final result).
Similarly expressing the Schur function in the numerator as a determinant implies
\begin{multline}
s_{(w-(v-u)_+-l,(u-v)_++l,0)}(z_1,z_2,z_3)
=\\
\frac{1}{\Delta}\left(
z_1^{w-(v-u)_+-l+2}(z_2^{(u-v)_++l+1}-z_3^{(u-v)_++l+1})\right.\\
\hspace*{1.5cm}- z_2^{w-(v-u)_+-l+2}(z_1^{(u-v)_++l+1}-z_3^{(u-v)_++l+1})\\
\left. +z_3^{w-(v-u)_+-l+2}(z_1^{(u-v)_++l+1}-z_2^{(u-v)_++l+1})
\right).
\end{multline}
Now substituting the expansion of Schur functions in \eqref{motzschur12}, we obtain
\begin{multline}
G_{(u,v)}^{w,1,2}(t)=\\
\frac{1}{tp_1}
\dfrac{\mathlarger{\mathlarger{\mathlarger{\sum}}}\limits_{l=0}^r\left(
\splitdfrac{\splitdfrac{z_1^{w-(v-u)_+-l+2}(z_2^{(u-v)_++l+1}-z_3^{(u-v)_++l+1})}
{- z_2^{w-(v-u)_+-l+2}(z_1^{(u-v)_++l+1}-z_3^{(u-v)_++l+1})}}{+z_3^{w-(v-u)_+-l+2}(z_1^{(u-v)_++l+1}-z_2^{(u-v)_++l+1})}
\right)}
{z_1^{w+3}(z_2-z_3)-z_2^{w+3}(z_1-z_3)+z_3^{w+3}(z_1-z_2)}.
\label{12final}
\end{multline}
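The result \eqref{12final} (equivalently the Schur-function form \eqref{motzschur12}) can be sanity-checked in the same way as the Motzkin case. From the cubic one reads off $e_1=(1/t-p_0)/p_1$, $e_2=q_1/p_1$ and $e_3=-q_2/p_1$, and partitions with at most two nonzero parts satisfy the Jacobi--Trudi identity $s_{(a,b,0)}=h_ah_b-h_{a+1}h_{b-1}$. A Python/SymPy sketch with unit weights (illustrative assumption, paths read as running from height $u$ to height $v$), compared against a transfer matrix with down steps of sizes one and two:

```python
import sympy

t = sympy.symbols('t')
p0 = p1 = q1 = q2 = 1               # unit step weights (illustrative assumption)
E1 = (1/t - p0)/p1                  # z1 + z2 + z3
E2 = sympy.Rational(q1, p1)         # z1*z2 + z1*z3 + z2*z3
E3 = -sympy.Rational(q2, p1)        # z1*z2*z3 (note the sign in the cubic)

def h(m):
    # complete homogeneous h_m(z1,z2,z3) via h_m = E1*h_{m-1} - E2*h_{m-2} + E3*h_{m-3}
    if m < 0:
        return sympy.Integer(0)
    if m == 0:
        return sympy.Integer(1)
    return sympy.expand(E1*h(m - 1) - E2*h(m - 2) + E3*h(m - 3))

def schur3(a, b):
    # s_{(a,b,0)}(z1,z2,z3) = h_a*h_b - h_{a+1}*h_{b-1} (first Jacobi-Trudi)
    return sympy.expand(h(a)*h(b) - h(a + 1)*h(b - 1))

def G12(u, v, w):
    # the Schur-function formula for G^{w,1,2}_{(u,v)}
    r = min(u, v, w - u, w - v)
    vp, up = max(v - u, 0), max(u - v, 0)
    num = sum(schur3(w - vp - l, up + l) for l in range(r + 1))
    return sympy.cancel(num/(t*p1*schur3(w + 1, 0)))

def strip_counts(u, v, w, nmax):
    # paths in the strip [0, w]: steps 0, +1, -1, -2, all with weight one
    M = sympy.Matrix(w + 1, w + 1,
                     lambda i, j: 1 if j - i in (0, 1, -1, -2) else 0)
    A, out = sympy.eye(w + 1), [1 if u == v else 0]
    for _ in range(nmax):
        A = A*M
        out.append(A[u, v])
    return out

ser = sympy.series(G12(0, 0, 2), t, 0, 5).removeO()
assert [ser.coeff(t, n) for n in range(5)] == strip_counts(0, 0, 2, 4)
```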
\subsection{Case ($\alpha=2$, $\beta=1$)}
The kernel equation now leads to
\begin{equation}
z^3+\frac{p_1}{p_2}z^2-\frac{1/t-p_0}{p_2}z+\frac{q_1}{p_2}=0\;.
\end{equation}
We note that exchanging $\alpha$ and $\beta$ is akin to exchanging up and down steps and adjusting the weights appropriately. More precisely, making all the parameters explicit we have
\begin{equation}
K^{(2,1)}_{p_0,p_1,p_2,q_1}(t,z)=K^{(1,2)}_{p_0,q_1,p_1,p_2}(t,1/z)\;,
\end{equation}
which in the case of unit weights implies that the kernel roots for $(\alpha,\beta)=(2,1)$ and $(\alpha,\beta)=(1,2)$ are simply inverses of each other. This symmetry is not as explicit when writing the generating functions in terms of Schur functions. Symmetry considerations would dictate that we need to replace $u$ and $v$ by $w-u$ and $w-v$, respectively, but this is not obvious from the result given in Theorem \ref{theorem1}, which now reads
\begin{equation}
G_{(u,v)}^{w,2,1}(t)=-\frac{1}{tp_{2}}\frac{s_{(w,w,u)/(v,0,0)}(z_1,z_2,z_3)}{s_{(w+1,w+1,0)}(z_1,z_2,z_3)}\;.
\label{motzskweschur21}
\end{equation}
From Corollary \ref{schurcorol}, this can be written as
\begin{equation}
G_{(u,v)}^{w,2,1}(t)=-\frac{1}{tp_2}\frac{\sum\limits_{l=0}^{r}s_{(w,w-(v-u)_+-l,(u-v)_++l)}(z_1,z_2,z_3)}{s_{(w+1,w+1,0)}(z_1,z_2,z_3)}\;.
\label{motzschur21}
\end{equation}
We expand the Schur functions, write them in the form of determinants,
and obtain
\begin{multline}
G_{(u,v)}^{w,2,1}(t)=
-\frac{1}{tp_2}\times \\
\dfrac{\mathlarger{\mathlarger{\mathlarger{\sum}}}\limits_{l=0}^r\left(
\splitdfrac{\splitdfrac{z_1^{w+2}(z_2^{w-(v-u)_+-l+1}z_3^{(u-v)_++l}-z_3^{w-(v-u)_+-l+1}z_2^{(u-v)_++l})}
{- z_2^{w+2}(z_1^{w-(v-u)_+-l+1}z_3^{(u-v)_++l}-z_3^{w-(v-u)_+-l+1}z_1^{(u-v)_++l})}}{+z_3^{w+2}(z_1^{w-(v-u)_+-l+1}z_2^{(u-v)_++l}-z_2^{w-(v-u)_+-l+1}z_1^{(u-v)_++l})}
\right)}
{z_1^{w+3}(z_2^{w+2}-z_3^{w+2})-z_2^{w+3}(z_1^{w+2}-z_3^{w+2})+z_3^{w+3}(z_1^{w+2}-z_2^{w+2})}.
\label{21final}
\end{multline}
When written in terms of kernel roots, we see some structural similarity between \eqref{21final} and \eqref{12final}, in line with the symmetry observation made above. A more general study of the effect of this symmetry would be an interesting topic for future work.
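The up--down kernel symmetry noted above is easy to confirm symbolically. A minimal Python/SymPy sketch; the parameter slots are assumed to be ordered as (horizontal weight, up-step weights by increasing size, down-step weights by increasing size):

```python
import sympy

t, z, p0, p1, p2, q1 = sympy.symbols('t z p0 p1 p2 q1')

def K12(a, b, c, d, zz):
    # (alpha,beta) = (1,2) kernel: horizontal a, up-by-one b, down-by-one c, down-by-two d
    return 1 - t*(a + b*zz + c/zz + d/zz**2)

def K21(a, b, c, d, zz):
    # (alpha,beta) = (2,1) kernel: horizontal a, up-by-one b, up-by-two c, down-by-one d
    return 1 - t*(a + b*zz + c*zz**2 + d/zz)

# the (2,1) kernel equals the (1,2) kernel with up/down weights swapped, at 1/z
assert sympy.simplify(K21(p0, p1, p2, q1, z) - K12(p0, q1, p1, p2, 1/z)) == 0
```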
[Build Status](https://travis-ci.org/webino/Webino)
[Code Quality](https://scrutinizer-ci.com/g/webino/Webino/?branch=prototype)
[Code Coverage](https://coveralls.io/r/webino/Webino?branch=prototype)
Webino™ is a platform for creating quality and modular web applications.
It is built on [PHP](http://php.net/) with [Zend Framework](http://framework.zend.com/)
and other useful open-source packages.
- Examples: [demo.webino.org](http://demo.webino.org) `Powered by Heroku`
- Documentation: [docs.webino.org](http://docs.webino.org) `Powered by GitHub Pages`
## Goals
The main goal of Webino™ is to be a platform for high quality, modular and scalable web applications,
providing tools for rapid development, testing and building processes, including support for continuous
integration and continuous delivery.
- **High quality code**
- Classes must be short, prefer composition over inheritance. Use PSR-2, OOP, design-patterns and refactoring.
- **Configurable**
- Everything must be configurable, even the templates rendering.
- **Modular architecture**
- Modules can override each others configuration influencing the resulting effect.
- **Low coupling**
- The computing code must be separated into routines emitting events and listeners.
- **Testable**
- Testing must be so easy that it will be the first place you want to start typing code.
- **Scalable**
- It is built on PHP.
- **Deliverable**
- Continuous integration development and continuous delivery of minimized code.
## Requirements
- [PHP 7.1](http://php.net)
- Web server: [Apache](https://www.apache.org) | [NGINX](https://nginx.org) [TODO]
- Database: [MySQL](https://www.mysql.com) [TODO] | [MariaDB](https://mariadb.org) [TODO] | [PostgreSQL](https://www.postgresql.org) [TODO] `optional`
## Architecture
- Inversion of Control
- Dependency Injection
- Event-Driven Architecture
- Object-Oriented Programming
## Addendum
Learn how to develop web applications with Webino™.
[Read documentation](https://docs.webino.org)
Cottage Gardens will be presenting a Miniature, Succulent and Fairy Garden Workshop on Saturday APRIL 13 at 10:00 AM and (if needed) 1 PM. We will help you make your own miniature garden, succulent planter, fairy garden or even a terrarium.
Great fun for all ages: Moms, Grandmas, and Kids. Bring a container or get one from us.
Tell your neighbors, bring a friend! Class size is limited, so reserve a space by calling 712-338-4569 or emailing info@cottagegardens.net.
Cottage Gardens will be hosting a Container Garden Workshop on Sat. April 20 at 10:00 AM.
Bring in one or two containers to plant. For larger containers, just measure the inside opening to plant an insert.
Help us celebrate our 20th year in business! Refreshments, door prizes, specials!
See what's new, get ideas, enjoy the greenhouse in full bloom!
Sign up to receive email notices about events and specials.
Email us and let us know if you'd like to receive our email newsletter.
\section{Supplemental Material: Symmetry breaking in the folding of thick
ribbons moderated by solvent}
\subsection{Differential geometry of thick ribbons}
Consider a ribbon of length $L$, width $W$ and thickness $h$. Assume that the
midplane of the ribbon is differentiable and in Cartesian coordinates it can be
parameterized as
$\vec R_\mathrm{mid} (\alpha,\beta)$, where $\alpha$ and $\beta$ are
the parameters with $\alpha \in [\alpha_0,\alpha_m]$
and $\beta \in [\beta_0, \beta_m]$. The choice of $\alpha$ and $\beta$
will conveniently depend on the type of ribbon conformation and will be given
in the next subsections. The metric tensor \cite{Kreyszig} of the midplane can
be calculated as \begin{equation}
g_{\alpha \beta} = (\partial_\alpha \vec R_\mathrm{mid}) \cdot
(\partial_\beta \vec R_\mathrm{mid}) \ ,
\end{equation}
where $\partial_\alpha$ and $\partial_\beta$ are partial derivatives with
respect to $\alpha$ and $\beta$, respectively.
The determinant of this tensor is denoted as
\begin{equation}
g = \det(g_{\alpha\beta})\ .
\end{equation}
Assume that the surface area of the midplane is conserved and given by
\begin{equation}
\Sigma = \int_{\alpha_0}^{\alpha_m} d\alpha \int_{\beta_0}^{\beta_m} d\beta\, \sqrt{g}
= LW \ .
\end{equation}
The local normal vector to the ribbon midplane is given by
\begin{equation}
\hat N =
\frac{(\partial_\alpha \vec R_\mathrm{mid}) \times (\partial_\beta \vec R_\mathrm{mid} )}
{|(\partial_\alpha \vec R_\mathrm{mid}) \times (\partial_\beta \vec R_\mathrm{mid} )|}\ .
\end{equation}
The ribbon physical surfaces can be constructed from the midplane as
\begin{equation}
\vec R_\mathrm{surface}^\pm = \vec R_\mathrm{mid} \pm \frac{h}{2} \hat N \ ,
\label{eq:Rsurf}
\end{equation}
where $\pm$ denote the upper and lower surfaces, respectively.
One should be able to calculate the metric tensors of these surfaces
\begin{equation}
g_{\alpha \beta}^\pm = (\partial_\alpha \vec R_\mathrm{surface}^\pm) \cdot
(\partial_\beta \vec R_\mathrm{surface}^\pm) \ ,
\end{equation}
and their determinants
\begin{equation}
g^\pm = \det(g_{\alpha\beta}^\pm)\ .
\end{equation}
The areas of the ribbon upper and lower surfaces are then given by
\begin{equation}
\Sigma^\pm = \int_{\alpha_0}^{\alpha_m} d\alpha \int_{\beta_0}^{\beta_m} d\beta
\, \sqrt{g^\pm} \ .
\end{equation}
Because of the non-zero thickness, $\Sigma^\pm$ are generally different from
$\Sigma$. We will show later that the total surface area of a ribbon need not
be conserved, i.e., $\Sigma^{+} + \Sigma^{-} \neq 2 \Sigma$, as found in
the case of the twisted ribbon.
The surface curvatures of the midplane can be obtained by considering the
second fundamental form tensor \cite{Kreyszig}
\begin{equation}
b_{\alpha \beta} = (\partial_{\alpha,\beta} \vec R_\mathrm{mid} ) \cdot \vec N \ .
\end{equation}
The Gaussian curvature can be calculated as
\begin{equation}
K = \frac{\det(b_{\alpha\beta})}{\det(g_{\alpha\beta})} \ ,
\end{equation}
whereas the mean curvature is given by
\begin{equation}
H = \frac{1}{2} b_{\alpha \beta} g^{\beta \alpha} \ ,
\end{equation}
where $g^{\alpha \beta}$ is the inverse metric tensor
($g_{\alpha \gamma} g^{\gamma \beta} = \delta_{\alpha}^{\beta}$).
\subsection{Exposed surface area and bending energy of the rolled (Archimedean
spiral) ribbon}
We consider a general case of the rolled conformation which has a hole (Fig.
\ref{fig:cartoon}b) in the middle and whose midplane surface is
parameterized as
\begin{equation}
\vec R_\mathrm{mid} (\phi,z) = \left(\frac{\phi}{2\pi}p
\cos \phi, \frac{\phi}{2\pi}p \sin \phi, z\right) ,
\end{equation}
where
$\phi \in [\phi_0,\phi_m]$ is the azimuthal angle,
$p$ is the distance between consecutive turns of the spiral,
and $z \in [-W/2,W/2]$ is the ribbon's lateral coordinate.
The metric tensor determinant of the midplane is obtained as
\begin{equation}
g = \frac{p^2(1+\phi^2)}{4\pi^2} \ .
\end{equation}
The conservation condition of the midplane surface area is given by
\begin{equation}
\Sigma = LW = \int_{-W/2}^{W/2} dz \, \int_{\phi_0}^{\phi_m} \sqrt{g} \,d\phi
= \left. \frac{W p}{4\pi} \left[\phi \sqrt{1+\phi^2} + \arcsinh(\phi) \right]
\right|_{\phi_0}^{\phi_m} \ ,
\end{equation}
from which one can numerically calculate $\phi_m$ knowing $\phi_0$, $L$, and $p$.
The metric tensor determinants of the ribbon's physical surfaces are given by
\begin{equation}
g^\pm = \left[ \frac{p\sqrt{1+\phi^2}}{2\pi} \pm \frac{h}{2}
\left(\frac{2+\phi^2}{1+\phi^2}\right) \right]^2 \ .
\end{equation}
The surface areas of the ribbon's upper and lower surfaces can be exactly
calculated as
\begin{equation}
\Sigma^{\pm} = \int_{-W/2}^{W/2} dz \int_{\phi_0}^{\phi_m} \sqrt{g^\pm}\, d\phi
= \left. W \left[ \frac{p}{4\pi}\left(\phi \sqrt{1+\phi^2} + \arcsinh(\phi)\right)
\pm \frac{h}{2} \left(\phi + \arctan(\phi)\right) \right] \right|_{\phi_0}^{\phi_m} \ .
\label{eq:sigmarolled}
\end{equation}
As is easily seen, the total surface area of the rolled ribbon is conserved, i.e.,
$\Sigma^{+} + \Sigma^{-} = 2 \Sigma$.
For the self-avoidance condition, we will simply assume that $p \ge h$.
For a given solvent diameter $D$, we will consider only the rolled
conformations, such that the contact surfaces between successive turns of the
roll are fully buried, which are found when $p < h + D$.
In these conformations, only the outer surface and
possibly also the inner surface of the spiral are
exposed. The ribbon's total exposed area thus is given by
\begin{equation}
S_\mathrm{rolled} = W \int_{\phi_m-2\pi}^{\phi_m} \sqrt{g^{+}}\, d\phi
+ W \int_{\phi_0}^{\phi_0+2\pi} \sqrt{g^{-}}\, d\phi \ ,
\label{eq:srolled}
\end{equation}
where the second term on the right-hand side is included only if
the hole size is larger than the solvent diameter.
The integrals in Eq. (\ref{eq:srolled}) are easily calculated using
the result of Eq. (\ref{eq:sigmarolled}).
The Gaussian curvature of the midplane surface of the rolled conformation is
always zero, while the mean curvature is obtained as
\begin{equation}
H = - \frac{\pi (2+\phi^2)}{p\, (1+\phi^2)^{3/2}} \ ,
\end{equation}
where the minus sign means that one of the principal curvatures points in
the direction opposite to the normal vector.
For a given $p$, the minimum value of $\phi_0$ can be obtained by requiring
that the spiral radius of curvature at $\phi=\phi_0$ must be larger than $h/2$.
The bending energy can be calculated as
\begin{eqnarray}
U_\mathrm{rolled} & = & \int_{-W/2}^{W/2} dz \int_{\phi_0}^{\phi_m}
\frac{\kappa}{2} H^2 \sqrt{g}\, d\phi \nonumber \\
& = &
W \frac{\pi \kappa}{4p} \left. \left[
\frac{\phi(9 + 8\phi^2)}{3(1+\phi^2)^{3/2}} + \arcsinh(\phi)
\right] \right|_{\phi_0}^{\phi_m} \ .
\end{eqnarray}
The optimal rolled conformation is obtained by minimizing the total
energy (Eq. (1)) with respect to $p$ and $\phi_0$. For sufficiently small
stiffness, this minimization leads to a rolled conformation with no hole (Fig.
\ref{fig:cartoon}a) with $p=h$ and $\phi_0\approx 0.541 \pi$.
For $L \gg h$, one gets
\begin{equation}
\phi_m \approx \left(\frac{4\pi L}{h}\right)^{1/2} \ .
\end{equation}
In this large $L$ limit, the exposed area of the
rolled conformation with no hole can be estimated as
\begin{equation}
S_\mathrm{rolled} \approx 2 W (\pi h L)^{1/2} \ ,
\end{equation}
whereas the bending energy is approximately given by
\begin{equation}
U_\mathrm{rolled} \approx \frac{\kappa \pi W}{8h}
\ln\left( \frac{16 \pi L}{h}\right) \ ,
\end{equation}
given that $\arcsinh(x) = \ln(x + \sqrt{1+x^2})$.
\subsection{Exposed surface area and bending energy of the curled ribbon}
Suppose that the curled ribbon (Fig. \ref{fig:cartoon}c) has $n$ turns.
Denote $r$ the midplane's radius of curvature at the turns.
We consider only the turns with $h/2 \leq r < (h+D)/2$, so that the ribbon's
inner surface associated with the turns are completely buried.
By simple geometrical consideration,
it can be shown that its exposed surface area for a solvent
diameter $D$ is equal to
\begin{equation}
S_\mathrm{curled} = W \left[ 2 \left(\frac{L - n\pi r}{n+1}\right)
+ \left(r+\frac{h}{2}\right)\left[n\pi - 2\alpha \Theta(n-1)
- 2 \beta \Theta(n-2)\right] \right] \ ,
\end{equation}
where
$\alpha=\arcsin(\frac{D}{2r+h+D})$,
$\beta=\arccos(\frac{4r}{2r+h+D})$,
$\Theta(x)$ is the step function equal to 1 if $x > 0$ and 0 otherwise,
$n$ is an integer satisfying
$1 \leq n \leq n_\mathrm{max} \approx \frac{L}{\pi r}$.
The Gaussian curvature of the midplane of the curled ribbon is always zero,
while the mean curvature is non-zero only at the turns, at which $H = 1/r$.
Therefore, the bending energy of the curled ribbon is given by
\begin{equation}
U_\mathrm{curled} = \frac{\kappa n \pi W}{2r} \ .
\end{equation}
The optimal curled conformation is obtained numerically by minimizing the total
energy on changing $r$ and $n$. Suppose that the energy minimum is observed at
$r=r^*$ and $n=n^*$.
The optimal conformation with $n^* = n_\mathrm{max}$ may be obtained for
sufficiently large solvent size ($D \gg h$) and small stiffness. In this
particular conformation, denoted as `crinkled' conformation
\cite{Hoang2012EPL}, the ribbon is globally straight but locally modulated.
In the limits of small solvent size ($D \ll h$) and
vanishing stiffness, $n^* \approx \sqrt{n_\mathrm{max}+1}-1$.
For large length ($L \gg h$), one gets $n^* \approx (2 L/\pi h)^{1/2}$ and
\begin{equation}
S_\mathrm{curled} \approx 2W (2 \pi h L)^{1/2} \ ,
\end{equation}
\begin{equation}
U_\mathrm{curled} \approx \kappa W h^{-3/2} (2\pi L)^{1/2}\ .
\end{equation}
\subsection{Exposed surface area and bending energy of the twisted ribbon}
We parameterize the midplane of the twisted ribbon (Fig. \ref{fig:cartoon}e) as:
\begin{equation}
\vec R_\mathrm{mid} (u,z) = (u\cos kz, u \sin kz, z) ,
\label{eq:midplane}
\end{equation}
where $u \in \left[-\frac{W}{2},\frac{W}{2}\right]$, $z \in [0, z_m]$, and $k$
is the wave number of the twisting along the $z$ axis.
The determinant of the metric tensor of the midplane is given by
\begin{equation}
g = 1 + k^2 u^2 \ .
\end{equation}
$z_\mathrm{m}$ can be determined from the conservation of the midplane surface area:
\begin{equation}
\Sigma = LW = \int_0^{z_\mathrm{m}} dz \int_{-W/2}^{W/2} \sqrt{g} \,du
=
z_\mathrm{m} \left[\frac{W\sqrt{4+k^2 W^2}}{4} +
\frac{\arcsinh(\frac{kW}{2})}{k} \right] \ .
\label{eq:zmtwist}
\end{equation}
The physical surfaces of a twisted ribbon of thickness $h$ can be constructed
from the midplane using the standard procedure as given by Eq. (\ref{eq:Rsurf}).
The determinants of the metric tensors of the ribbon upper
and lower surfaces are given by
\begin{equation}
g^\pm = \frac{[k^2 h^2 - 4(1 + k^2 u^2)^2]^2}{16(1+k^2u^2)^{3}} \ .
\label{eq:gpm}
\end{equation}
Note that the metric tensor determinant is the same for the
upper and lower surfaces.
The upper and lower surface areas of the twisted ribbon can be
determined analytically and are given by
\begin{equation}
\Sigma_\mathrm{twisted}^\pm =
\int_0^{z_m} dz \int_{-W/2}^{W/2} \sqrt{g^\pm} \, du =
z_m \left[\frac{W\sqrt{4+k^2W^2}}{4}
+ \frac{\arcsinh(\frac{kW}{2})}{k}
- \frac{h^2 k^2 W}{2\sqrt{4+ k^2 W^2}} \right] \ .
\end{equation}
By using Eq. (\ref{eq:zmtwist}) one obtains
\begin{equation}
\Sigma_\mathrm{twisted}^\pm = \Sigma
- \frac{z_m h^2 k^2 W}{2\sqrt{4+ k^2 W^2}} \ .
\label{eq:sigmatwisted}
\end{equation}
Eq. (\ref{eq:sigmatwisted}) clearly shows that
the surface areas of the physical surfaces of the twisted ribbon
are smaller than that of the midplane and this is due
to the effect of non-zero thickness $h$.
Fig. \ref{fig:stwist} shows that the ribbon surface area
decreases when either $k$ or $h$ increases.
Note that $\Sigma_\mathrm{twisted}^\pm$ is also the exposed area
for the case of $D=0$, thus the twisted conformation is
favorable in terms of surface energy for any solvent size. We will
show that this is also true for the case of DNA twist later
in this supplemental material.
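The closed form for $\Sigma^\pm_\mathrm{twisted}$ can be cross-checked by direct numerical quadrature of $\sqrt{g^\pm}$ from Eq. (\ref{eq:gpm}); a short Python sketch with illustrative parameter values (chosen so that $k\le k_\mathrm{SA}=2/h$, see below):

```python
import math

W, h, k, zm = 1.0, 0.3, 2.0, 1.0   # illustrative parameters with k <= 2/h

def sqrt_g(u):
    # sqrt(g^pm) for the physical surfaces of the twisted ribbon
    c = 1 + (k*u)**2
    return abs(k*k*h*h - 4*c*c)/(4*c**1.5)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    step = (b - a)/n
    s = f(a) + f(b) + sum(f(a + i*step)*(4 if i % 2 else 2) for i in range(1, n))
    return s*step/3

numeric = zm*simpson(sqrt_g, -W/2, W/2)
closed = zm*(W*math.sqrt(4 + (k*W)**2)/4
             + math.asinh(k*W/2)/k
             - h*h*k*k*W/(2*math.sqrt(4 + (k*W)**2)))
assert abs(numeric - closed) < 1e-9
```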
\begin{figure}
\includegraphics[width=3in]{figstwist.eps}
\caption{
Dependence of the twisted ribbon's upper and lower surface areas,
$\Sigma^{\pm}_\mathrm{twisted}$, on the twist's wave number $k$. The areas are
shown relative to the midplane area $\Sigma$. The data are obtained
for $h=0.3W$, 0.5$W$ and 0.7$W$ as indicated. The range of $k$ is
$[0,k_\mathrm{max}]$ with $k_\mathrm{max}=2/h$.
}
\label{fig:stwist}
\end{figure}
The ribbon thickness also has a strong effect on self-avoidance condition of
the twisted conformation. If the ribbon has zero thickness, the twist's wave
number $k$ can increase to infinity while the ribbon size along
the $z$ axis shrinks to zero. If the ribbon has a finite thickness,
self-avoidance prevents infinite twisting. As the wave number $k$ increases to
a certain value, the physical surface of the ribbon starts to intersect itself.
The onset of self-intersection occurs exactly at the point where the metric of the
surface vanishes ($g^\pm=0$), which must occur at $u=0$ by
symmetry. By using Eq. (\ref{eq:gpm}) for $g^\pm$, one obtains the
self-avoidance condition as
\begin{equation}
k \leq k_\mathrm{SA} = \frac{2}{h} \ ,
\end{equation}
where $k_\mathrm{SA}$ is the self-avoidance limit of $k$ determined by the thickness
$h$.
For a twisted ribbon submerged in a solvent, the closest distance from the
center of a solvent molecule to the ribbon midplane is $(h+D)/2$. One can
construct the excluded volume surfaces as
\begin{equation}
\vec R_\mathrm{ES}^\pm = \vec R_\mathrm{mid} \pm \frac{h+D}{2} \hat N .
\end{equation}
These new surfaces, unlike the ribbon's physical surfaces, are virtual surfaces
and can self-intersect. This self-intersection indicates that some
regions of the ribbon physical surfaces are inaccessible to the solvent.
Similarly to the self-avoidance condition, the self-intersection happens only
when $k$ is sufficiently large, i.e.
\begin{equation}
k \geq k_\mathrm{D} = \frac{2}{h+D} \ .
\end{equation}
The intersection line of the excluded volume surface corresponds to the borders
of the buried area on the physical surface. For the twisted ribbon, it
is a helical curve lying midway between the ribbon's successive turns and has a
constant $u$ coordinate.
In order to calculate the ribbon's exposed area, we first determine the $u$
coordinate of the excluded volume surface intersection.
Thanks to symmetry, this task can be done by considering the intersection
contour of one of the excluded volume surface with the $z=0$ plane. For the
upper excluded volume surface, the contour's coordinates can be found as
\begin{equation}
\vec C^{+} (u) \equiv
\left. \vec R_{ES}^{+} \right|_{z=0} =
\left[
u\cos(k^2 u v) - v \sin(k^2 u v), -v \cos(k^2 u v) - u \sin(k^2 u v), 0
\right]
\end{equation}
\begin{equation}
\mathrm{with} \qquad v=\frac{h+D}{2\sqrt{1+k^2 u^2}} \ .
\end{equation}
The contour $\vec C^{-}(u)$ corresponding to the lower excluded volume surface can be obtained
from the above equations by just changing the sign of $v$.
Fig. \ref{fig:contour} shows that the self-intersection of the contour
occurs only on the $y$ axis of the $z=0$ plane. Thus,
the $u$ coordinate of the self-intersection
is the solution of the following equation
\begin{equation}
\tan \left(\frac{k^2u(h+D)}{2\sqrt{k^2 u^2+1}}\right) =
\frac{2 u \sqrt{k^2u^2+1}}{h+D} \ ,
\label{eq:tan}
\end{equation}
which can be solved numerically.
We are interested only in the solution $0< u^* \leq \frac{W}{2}$,
given that $-u^*$ is another solution by symmetry.
For a given $k$, such that $k_D \le k \le k_{SA}$, and a solvent size $D$,
solving this equation leads to one of two situations: (a) there is
a solution $0 < u^* \leq \frac{W}{2}$ (the ribbon surfaces are partially
exposed), and (b) there is no such solution (the ribbon surfaces are
completely shielded).
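As a concrete illustration, the transcendental equation above can be solved with a standard bracketing root finder. The sketch below assumes the geometry of the caption of Fig. \ref{fig:contour} ($W=1$, $h=0.3W$, $k=4\pi/3$) together with $D=0.3W$; the trivial root $u=0$ is excluded by starting the bracket slightly above zero, and for these parameters the tangent argument stays below $\pi/2$ on $(0,W/2]$, so the residual is continuous on the bracket.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed geometry (taken from the caption of the contour figure, not
# prescribed by the text): width W=1, thickness h=0.3W, wave number
# k=4*pi/3, solvent diameter D=0.3W.
W, h, k, D = 1.0, 0.3, 4.0 * np.pi / 3.0, 0.3
s = h + D  # excluded-volume offset scale

def f(u):
    """Residual of Eq. (tan): tan(k^2 u s / (2 sqrt(k^2 u^2 + 1)))
    minus 2 u sqrt(k^2 u^2 + 1) / s."""
    r = np.sqrt(k**2 * u**2 + 1.0)
    return np.tan(k**2 * u * s / (2.0 * r)) - 2.0 * u * r / s

# Bracket away from the trivial solution u=0; f > 0 near zero and
# f < 0 at u = W/2 for these parameters, so brentq finds the root.
u_star = brentq(f, 1e-3, W / 2.0)
print(u_star)  # u coordinate of the excluded-surface self-intersection
```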
In case (a), the ribbon's total exposed area on both upper and lower
surfaces is given by:
\begin{eqnarray}
S_\mathrm{twisted} & = &
4 \int_{0}^{z_m} dz \int_{u^*}^{\frac{W}{2}} \sqrt {g^\pm} \; du \nonumber \\
& = &
\left. 2\, z_m \left( u \sqrt{1+ k^2 u^2} + \frac{\mathrm{arcsinh}(ku)}{k}
-\frac{h^2 k^2 u}{2\sqrt{1 + k^2 u^2}} \right) \right|_{u^*}^{W/2} \ .
\end{eqnarray}
The exposed area is equal to zero for $D > D^*$, where $D^*$ is
a solvent diameter such that $u^*=W/2$. The value of $D^*$ can be determined
numerically through Eq. (\ref{eq:tan}). It can also be shown that
\begin{equation}
h + D^* < \frac{\pi}{k}\sqrt{1+(2/kW)^2} \ .
\end{equation}
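The threshold $D^*$ can be obtained by setting $u^*=W/2$ in Eq. (\ref{eq:tan}), which turns it into a one-dimensional equation for $s=h+D^*$; the stated bound is exactly the value of $h+D$ at which the tangent argument at $u=W/2$ reaches $\pi/2$, so it brackets the root from above. A sketch, again with the assumed parameters $W=1$, $h=0.3W$, $k=4\pi/3$ from the caption of Fig. \ref{fig:contour}:

```python
import numpy as np
from scipy.optimize import brentq

# Assumed geometry (from the contour-figure caption): W=1, h=0.3W, k=4*pi/3.
W, h, k = 1.0, 0.3, 4.0 * np.pi / 3.0
r = np.sqrt(k**2 * (W / 2.0)**2 + 1.0)  # sqrt(k^2 u^2 + 1) at u = W/2

def f(s):
    """Eq. (tan) evaluated at u = W/2, as a function of s = h + D."""
    return np.tan(k**2 * (W / 2.0) * s / (2.0 * r)) - W * r / s

# The bound h + D* < (pi/k) sqrt(1 + (2/(k W))^2) is where the tan
# argument at u = W/2 reaches pi/2, so it brackets the root from above.
bound = (np.pi / k) * np.sqrt(1.0 + (2.0 / (k * W))**2)
s_star = brentq(f, 0.1, 0.999 * bound)
D_star = s_star - h
print(D_star, bound - h)
```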
\begin{figure}
\includegraphics[width=3in]{contourtwist.eps}
\caption{
Contours of the intersections of the excluded volume surface of a
twisted ribbon with the $z=0$ plane for four solvent diameters,
$D=0$, $0.1W$, $0.3W$ and $0.5 W$, as indicated.
The contours are calculated for a twisted ribbon of width $W=1$, thickness
$h=0.3 W$ and wave number $k=(4\pi/3) W^{-1}$.
The self-intersection of the contour on the $y$ axis is seen for $D=0.3 W$.
}
\label{fig:contour}
\end{figure}
The midplane of the twisted conformation has a zero mean curvature, $H=0$, as
found for an ideal helicoid \cite{Kreyszig}. The Gaussian curvature can be
determined through calculating the second fundamental form tensor and is given
by
\begin{equation}
K = - \frac{k^2}{(1+k^2 u^2)^2} \ .
\end{equation}
The bending energy for the twisted configuration is calculated as:
\begin{eqnarray}
U_\mathrm{twisted} &=& \iint (-\kappa K) \sqrt{g} \; du\, dz \\
&=&
\int_0^{z_\mathrm{m}} dz \int_{-W/2}^{W/2} \frac{\kappa k^2}{(1+k^2 u^2)^{3/2}} \, du \\
&=&
\kappa z_\mathrm{m} \frac{2 k^2 W}{\sqrt{4 + k^2 W^2}} \ .
\end{eqnarray}
Because $z_m$ is approximately a linear function of $L$, for $L \gg W$ one
finds that, for the twisted ribbon, $S_\mathrm{twisted} \propto L$ and
$U_\mathrm{twisted} \propto L$.
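The $u$-integral leading to $U_\mathrm{twisted}$ has the closed form used in the last line of the derivation above; this reduction can be checked by quadrature. A minimal sketch, with arbitrary assumed values $W=1$, $k=2$, $\kappa=1$, $z_m=1$:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary assumed parameters for the consistency check.
W, k, kappa, z_m = 1.0, 2.0, 1.0, 1.0

# Numerical value of z_m * int_{-W/2}^{W/2} kappa k^2 (1+k^2 u^2)^{-3/2} du
num, _ = quad(lambda u: kappa * k**2 / (1.0 + k**2 * u**2)**1.5,
              -W / 2.0, W / 2.0)
num *= z_m

# Closed form from the last line of the derivation.
closed = kappa * z_m * 2.0 * k**2 * W / np.sqrt(4.0 + k**2 * W**2)
print(num, closed)
```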
\subsection{The spherical spiral (globular) ribbon}
Our simulations show that for sufficiently large $L$ and sufficiently large
$D$, a ribbon may form compact conformations close to a globular shape with
a small exposed area of its hydrophobic surface. In these globule-like
conformations, the non-hydrophobic edges of the ribbon are exposed
to the solvent. These conformations also display a significant helical feature.
Some irregularities observed may be due to the small sizes of the
ribbons. There are many possible conformations of a globular ribbon. Here,
based on hints from the simulation results, we consider a parameterized model
with a globular shape, in which the ribbon
forms a {\it spherical spiral} as shown in Fig. \ref{fig:globule}.
The spherical spiral has two poles corresponding to the two ends of the ribbon.
Overall, it forms a spherical layer of thickness $W$.
The midplane of the spherical spiral ribbon can be parameterized as
\begin{equation}
\vec R_\mathrm{mid} (\phi,u) = \left[ (R+u) \sin(k \phi)
\cos \phi, (R+u)\sin(k \phi) \sin \phi,
(R+u) \cos(k \phi) \right] \ ,
\end{equation}
where $R$ is the radius of the sphere passing through the central curve of the
ribbon, $u \in [-W/2,W/2]$, $\phi \in [0,\phi_m]$ and $k=\pi/\phi_m$.
It is straightforward to calculate the metric tensor of the midplane, whose
determinant is given by
\begin{equation}
g = (R+u)^2[k^2+\sin^2(k\phi)] \ .
\end{equation}
For a given $\phi_m$, and $k=\pi/\phi_m$, the radius $R$ can be determined from
the midplane surface area conservation
\begin{equation}
\Sigma = LW = \int_{-W/2}^{W/2} du \int_0^{\phi_m} \sqrt{g} \, d\phi
= R W \int_0^{\phi_m}
\sqrt{k^2+ \sin^2(k\phi)} \,d\phi \ .
\end{equation}
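For a given $\phi_m$, the radius $R$ therefore follows from a single quadrature. A minimal sketch, with arbitrary assumed values of $L$, $W$, and $\phi_m$ (these numbers are illustrative, not from the text):

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary assumed ribbon dimensions and winding parameter.
L, W, phi_m = 20.0, 1.0, 10.0 * np.pi
k = np.pi / phi_m

# Area conservation: L W = R W * int_0^{phi_m} sqrt(k^2 + sin^2(k phi)) dphi,
# so R = L / I with I the phi-integral.
I, _ = quad(lambda phi: np.sqrt(k**2 + np.sin(k * phi)**2), 0.0, phi_m)
R = L / I
print(R)  # sphere radius consistent with the midplane area L*W
```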
The upper and lower surfaces of the ribbon can be constructed from the
midplane as shown in Fig. \ref{fig:globule}. The metric tensor determinants
of these surfaces are given by
\begin{equation}
g^\pm = \left[(R+u)\sqrt{k^2+\sin^2(k\phi)} \pm h \cos(k\phi)
\frac{2 k^2 + \sin^2(k\phi)}{2 k^2 + 2 \sin^2(k\phi)}
\right]^2 \ .
\end{equation}
It can be easily shown that $\Sigma^{+} = \Sigma^{-} = \Sigma$.
To check the self-avoidance condition and to calculate the exposed area, we
will employ an approximate approach by considering the cross section of the
ribbon with the sphere of radius $(R+u)$. Such a cross section is a spherical
spiral stripe of width
\begin{equation}
{\cal W} (u,h) = 2(R+u)\arctan\left(\frac{h}{2(R+u)}\right) \ ,
\end{equation}
and midline contour length
\begin{equation}
{\cal L}(u) = \int_0^{\phi_m} |\partial_\phi \vec R_\mathrm{mid}|\, d\phi
= (R+u) \int_0^{\phi_m} \sqrt{k^2 + \sin^2(k\phi)} \, d\phi
= \frac{L(R+u)}{R} \ .
\end{equation}
For a self-avoiding ribbon, the area of the stripe must not be larger
than the surface area of the sphere
\begin{equation}
{\cal L}(u) \cdot {\cal W}(u,h) \leq 4\pi (R+u)^2 \quad \Rightarrow
\quad
\arctan\left(\frac{h}{2(R+u)}\right) \leq \frac{2\pi R}{L} \ .
\end{equation}
It is enough to check the above inequality for $u=-W/2$.
By using the same argument as above for a stripe corresponding to
the excluded volume surface of the ribbon with a solvent of diameter $D$, one
finds that the ribbon surface elements at a given parameter $u$ are exposed to
the solvent if
\begin{equation}
\arctan\left(\frac{h+D}{2(R+u)}\right) \leq \frac{2\pi R}{L} \ .
\end{equation}
Assume that equality in the above relation is attained at $u^*$, with
$-\frac{W}{2} \leq u^* \leq \frac{W}{2}$; the ribbon's exposed area is then given by
\begin{equation}
S_\mathrm{globule} = 2 \int_{0}^{\phi_m} d\phi
\int_{u^*}^{W/2} \sqrt{g^{+}} \, du
= \frac{L}{R}\left(2R + \frac{W}{2} + u^*\right)\left(\frac{W}{2}-u^*\right)
\ .
\end{equation}
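Because the exposure condition involves only an arctangent, the marginal coordinate has the closed form $u^* = (h+D)/[2\tan(2\pi R/L)] - R$ whenever $2\pi R/L < \pi/2$. A sketch with illustrative assumed values (not taken from the text; the chosen $L$ and $R$ satisfy $L > 4R$ so the tangent is well defined):

```python
import numpy as np

# Illustrative assumed parameters for a long ribbon (L >> R, L > 4R).
L, W, h, D, R = 20.0, 1.0, 0.3, 0.3, 1.0

# Equality in the exposure condition arctan((h+D)/(2(R+u))) = 2 pi R / L
u_star = (h + D) / (2.0 * np.tan(2.0 * np.pi * R / L)) - R

# Exposed area, valid when -W/2 <= u_star <= W/2.
S_globule = (L / R) * (2.0 * R + W / 2.0 + u_star) * (W / 2.0 - u_star)
print(u_star, S_globule)
```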
\begin{figure}
\includegraphics[width=0.5\columnwidth]{globule2.eps}
\caption{
(a) A spherical spiral curve.
(b) The midplane of a spherical spiral ribbon
with the ribbon center line following the curve shown in a.
(c) A thick spherical spiral (globular) ribbon with the midplane shown in b.
}
\label{fig:globule}
\end{figure}
\begin{figure}
\includegraphics[width=3in]{uglob.eps}
\caption{
Dependence of bending energy, $U$, on length, $L$, of the rolled ribbon with no
hole and the spherical spiral (globular) ribbon. Discrete points are
obtained by numerical evaluation, whereas smooth curves represent fits with the
$\log(L)$ dependence. The data are obtained for tightly folded ribbons of
thickness $h=0.5W$.
}
\label{fig:uglob}
\end{figure}
By calculating the second fundamental form tensor of the midplane, one
immediately finds that the spherical spiral ribbon has a zero
Gaussian curvature, $K=0$,
whereas the mean curvature is given by
\begin{equation}
H = -\frac{\cos(k\phi)[2k^2+\sin^2(k\phi)]}
{2(R+u)[k^2+\sin^2(k\phi)]^{3/2}} \ .
\end{equation}
The bending energy thus can be numerically calculated from the integral
\begin{eqnarray}
U_\mathrm{globule} &=& \int_0^{\phi_m} d\phi \int_{-W/2}^{W/2} du \;
\frac{\kappa}{2} H^2 \sqrt{g} \nonumber \\
&=& \frac{\kappa}{8} \ln\left(\frac{2R+W}{2R-W}\right)
\int_0^{\phi_m} \frac{\cos^2(k\phi)[2k^2+\sin^2(k\phi)]^2}
{[k^2+\sin^2(k\phi)]^{5/2}} \, d\phi
\ .
\end{eqnarray}
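The $u$-integral in the first line can be done analytically, giving the logarithmic prefactor in the second line; the reduction can be verified numerically. A sketch with assumed illustrative values of $\kappa$, $R$, $W$, and $\phi_m$ (chosen so that $2R > W$ and the logarithm is finite):

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Assumed illustrative parameters (2R > W keeps the log finite).
kappa, R, W, phi_m = 1.0, 1.0, 1.0, 4.0 * np.pi
k = np.pi / phi_m

def H(phi, u):
    """Mean curvature of the spherical-spiral midplane."""
    s2 = np.sin(k * phi)**2
    return -np.cos(k * phi) * (2.0 * k**2 + s2) / (
        2.0 * (R + u) * (k**2 + s2)**1.5)

def sqrt_g(phi, u):
    return (R + u) * np.sqrt(k**2 + np.sin(k * phi)**2)

# Direct double integral of the bending energy density (kappa/2) H^2 sqrt(g).
U_direct, _ = dblquad(lambda u, phi: 0.5 * kappa * H(phi, u)**2 * sqrt_g(phi, u),
                      0.0, phi_m, lambda phi: -W / 2.0, lambda phi: W / 2.0)

# Reduced form: (kappa/8) ln((2R+W)/(2R-W)) times the single phi-integral.
I, _ = quad(lambda phi: np.cos(k * phi)**2
            * (2.0 * k**2 + np.sin(k * phi)**2)**2
            / (k**2 + np.sin(k * phi)**2)**2.5, 0.0, phi_m)
U_reduced = (kappa / 8.0) * np.log((2.0 * R + W) / (2.0 * R - W)) * I
print(U_direct, U_reduced)
```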
Note that $R$ also depends on $\phi_m$. Our numerical calculations indicate
that the bending energy of the tightly folded spherical spiral ribbon grows
logarithmically with $L$, similar to that of the rolled conformation (Fig.
\ref{fig:uglob}). It is also shown that $U_\mathrm{globule}$ is larger than
$U_\mathrm{rolled}$.
The optimal spherical spiral conformation is obtained by minimizing the
total energy with respect to $\phi_m$. In the large length limit ($L \gg h$), one
can write
\begin{equation}
S_\mathrm{globule} \propto L \qquad \mathrm{and} \qquad
U_\mathrm{globule} \propto \frac{W}{h} \ln L \ .
\end{equation}
\subsection{Ground state phase diagram of a thick ribbon}
We studied the ground state phase diagram of a thick ribbon as a function of
the ribbon's length $L$ and the solvent diameter $D$. For a ribbon
of given $L$ and $D$, together with the thickness $h$ and
stiffness $\kappa$, each of the four conformations (the rolled, the
curled, the twisted and the spherical spiral one) is optimized with
respect to its total energy. The ground state is the lowest energy
conformation among the four optimized configurations.
Fig. \ref{fig:diagram} in the main text shows the phase diagrams for
ribbons of the same thickness $h$ but for different values of
stiffness $\kappa$. On the other hand, Fig. \ref{fig:phaseh} shows
the phase diagrams for ribbons of the same $\kappa$ but
different thicknesses $h$. It is shown that the twisted conformation
appears as the ground state for ribbons of either low stiffness (as shown in
Fig. \ref{fig:diagram}) or large thickness (Fig. \ref{fig:phaseh}).
\begin{figure}
\includegraphics[width=3.3in]{phaseh.eps}
\caption{
Ground state phase diagram of ribbons as a function of the ribbon's length $L$
and solvent diameter $D$. The phase diagram is shown for different ribbon
thicknesses, $h=0.1W$ (a), $0.2W$ (b), $0.3W$ (c) and $0.4W$ (d).
The bending stiffness is $\kappa=0.1\,\sigma W^2$ for all cases.
Different phases are indicated by colors as given in the legends (top).
}
\label{fig:phaseh}
\end{figure}
\subsection{Thick ribbon description of DNA twist}
Consider the B-DNA double helix structure as a twisted thick ribbon.
We parameterize the midplane of this ribbon as
\begin{equation}
{\vec R}_\mathrm{mid} (u,z) = (u \cos \delta \cos kz,
u \cos \delta \sin kz, z + u \sin \delta) ,
\end{equation}
where $u \in [-W/2,W/2]$ and $k=2\pi/p$, with $W$ and $p$ corresponding to the
width and the pitch of DNA, respectively; $\delta$ is the tilt angle of the
ribbon's lateral direction with respect to the plane perpendicular to the main
axis of the twist (the $z$ axis). The ribbon thickness is denoted $h$.
One can calculate the metric tensor of the midplane and obtain the
metric tensor determinant
\begin{equation}
g
= (1 + k^2 u^2) \cos^2 \delta \ .
\end{equation}
It is straightforward to construct
the ribbon's upper and lower physical surfaces ${\vec R}^{\pm}_\mathrm{surface}$.
These surfaces are shown in Fig. \ref{fig:dna} for realistic parameters of DNA
with a clear appearance of the minor ($+$) and major ($-$) grooves. The metric
tensor determinants of the physical surfaces are given by
\begin{eqnarray}
g^{\pm} & = &
\frac{\left[
(h^2 k^2 - 4(1+k^2u^2)^2) \cos\delta \pm
2hk(2+k^2u^2)\sqrt{1+k^2u^2} \sin \delta
\right]^2}
{16 (1+k^2 u^2)^{3}} \ .
\end{eqnarray}
Note that due to the tilt angle $\delta$, the obtained metric tensors are
different for the upper and lower surfaces.
Thus, the surface areas of the grooves are also different as calculated by
\begin{equation}
\Sigma^\pm = \int_0^{z_m} dz \int_{-W/2}^{W/2} du \, \sqrt{g^\pm} \ .
\end{equation}
In fact, for $k>0$, it is found that $\Sigma^{+} < \Sigma < \Sigma^{-}$ and
$\Sigma^{+} + \Sigma^{-} < 2 \Sigma$ with $\Sigma$ the midplane area.
Fig. \ref{fig:dna}b shows that the total groove surface area
decreases with $k$.
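These inequalities can be checked directly by quadrature of the metric determinants above, using the realistic DNA parameters assumed in the figure captions ($W=2$ nm, $h=0.6$ nm, $p=3.4$ nm, $\delta=0.08\pi$); the common factor $z_m$ cancels out of the comparison. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# Realistic DNA parameters, as assumed in the figure captions.
W, h, p, delta = 2.0, 0.6, 3.4, 0.08 * np.pi
k = 2.0 * np.pi / p

def sqrt_g_pm(u, sign):
    """sqrt(g^pm) for the upper (+, minor) / lower (-, major) surface."""
    q = 1.0 + k**2 * u**2
    bracket = ((h**2 * k**2 - 4.0 * q**2) * np.cos(delta)
               + sign * 2.0 * h * k * (2.0 + k**2 * u**2)
               * np.sqrt(q) * np.sin(delta))
    return np.abs(bracket) / (4.0 * q**1.5)

# Per-unit-length areas (the common factor z_m cancels).
Sp, _ = quad(lambda u: sqrt_g_pm(u, +1.0), -W / 2.0, W / 2.0)
Sm, _ = quad(lambda u: sqrt_g_pm(u, -1.0), -W / 2.0, W / 2.0)
S0, _ = quad(lambda u: np.sqrt(1.0 + k**2 * u**2) * np.cos(delta),
             -W / 2.0, W / 2.0)
print(Sp, S0, Sm)  # minor-groove, midplane, major-groove areas per z_m
```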
If $k$ is increased, the ribbon surfaces can intersect themselves. As for the
case of the ideal twisted ribbon, the self-intersection first occurs at
$u=0$, where the metric vanishes. This yields the
self-avoidance condition of the ribbon as
\begin{equation}
h \leq \frac{2}{k} \, \frac{(1 \mp \sin \delta)}{\cos \delta} \ ,
\end{equation}
for the minor ($-$) and major ($+$) grooves, respectively. Because the minor
groove yields a smaller limit for $h$, the self-avoidance constraint is imposed
by the minor groove.
\begin{figure}
\includegraphics[width=3in]{contour_minor.eps}
\caption{
Contours of the intersections of the excluded volume surface, ${\vec
R}_{ES}^{+}$, of the DNA minor groove with the $z=0$ plane for three
solvent diameters, $D=0$, 0.239 nm and 0.42 nm, as indicated. The contours are
calculated by using realistic DNA parameters of $W=2$ nm, $h=0.6$ nm, $p=3.4$
nm and $\delta=0.08\pi$.
The case of $D=0.239$ nm corresponds to the contour that is about to
intersect itself at $u=0$.
For $D=0.42$ nm, the self-intersection of the contour is seen on
$y$ axis.
}
\label{fig:dnaminor}
\end{figure}
\begin{figure}
\includegraphics[width=3in]{contour_major.eps}
\caption{
Same as Fig. \ref{fig:dnaminor} but for the DNA major groove and for three
values of solvent diameter, $D=0$, 0.795 nm and 1.2 nm, as indicated.
The case of $D=0.795$ nm corresponds to the contour that is about to
intersect itself at $u=0$.
}
\label{fig:dnamajor}
\end{figure}
In order to calculate the exposed area of the ribbon for a solvent of diameter
$D$, one constructs the ribbon's excluded volume surfaces $\vec R^{\pm}_{ES}$.
The latter have the same form as $\vec R_\mathrm{surface}^\pm$ with $h$ being
replaced by $h+D$. The ribbon surface is fully exposed to solvent if the
excluded volume surface does not self-intersect, which means that
\begin{equation}
h + D \leq \frac{2}{k} \, \frac{(1 \mp \sin \delta)}{\cos \delta} \ ,
\end{equation}
for the minor ($-$) and major ($+$) grooves, respectively.
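For the realistic DNA parameters assumed in the captions of Figs. \ref{fig:dnaminor} and \ref{fig:dnamajor}, the largest solvent diameters allowing full groove exposure follow immediately from this condition. A minimal sketch:

```python
import numpy as np

# Realistic DNA parameters, as assumed in the figure captions.
h, p, delta = 0.6, 3.4, 0.08 * np.pi  # nm, nm, rad
k = 2.0 * np.pi / p

# Full exposure requires h + D <= (2/k)(1 -/+ sin delta)/cos delta,
# for the minor (-) and major (+) grooves respectively.
D_minor = (2.0 / k) * (1.0 - np.sin(delta)) / np.cos(delta) - h
D_major = (2.0 / k) * (1.0 + np.sin(delta)) / np.cos(delta) - h
print(D_minor, D_major)  # largest fully-exposing solvent diameters, nm
```

The resulting values reproduce the solvent diameters ($\approx 0.239$ nm and $\approx 0.795$ nm) quoted in the figure captions as the onset of self-intersection at $u=0$.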
If the excluded volume surface of a groove self-intersects, the groove surface
is partially exposed. Like for the case of the ideal twisted ribbon, the
$u^*$ position of the self-intersection can be determined numerically by
considering the contour of the excluded volume surface
${\vec R}_{ES}^{\pm}$ on the $z=0$ plane. As before, we consider only
$0 < u^* \leq W/2$.
Figs. \ref{fig:dnaminor} \& \ref{fig:dnamajor} show that the contour always
intersects itself on the $y$ axis, and that this starts to happen at a smaller
solvent size for the minor groove. Thus, the two grooves can have different values of $u^*$,
denoted as $u^*_{+}$ and $u^*_{-}$.
The exposed areas of the grooves are given by
\begin{equation}
S_\mathrm{DNA}^{\pm} = 2 \int_0^{z_m} dz \int_{u^*_{\pm}}^{W/2} \sqrt{g^\pm} \, du \ .
\end{equation}
Fig. \ref{fig:dnaexp} shows the dependence of the fraction of exposed area
on the solvent diameter $D$ for the two grooves.
\begin{figure}
\includegraphics[width=3in]{figdnaexp.eps}
\caption{
Dependence of the fraction of exposed area on the solvent diameter $D$
for the DNA minor (solid) and major (dashed) grooves. The data are obtained
by using realistic parameters for DNA as given in the caption of Fig.
\ref{fig:dnaminor}.
}
\label{fig:dnaexp}
\end{figure}
The curvatures of the midplane of the DNA ribbon can be determined by
calculating the tensors of the first and second fundamental forms.
One obtains the mean curvature
\begin{equation}
H = \frac{k(-2 + k^2 u^2 + k^4 u^4 \cos^2 \delta) \sin(2\delta)}
{4\sqrt{1+ k^2 u^2}} \ ,
\end{equation}
and the Gaussian curvature
\begin{equation}
K = -\frac{k^2}{(1+k^2 u^2)^2} \ .
\end{equation}
Note that unlike the ideal helicoid, the mean curvature of the DNA midplane
is mostly non-zero for $\delta \ne 0$. The Gaussian curvature of the DNA
ribbon, on the other hand, remains the same as for the ideal helicoid. The
bending energy of the DNA ribbon can be calculated numerically by integrating
the bending energy density over the midplane surface
\begin{equation}
U_\mathrm{DNA} = \frac{\kappa}{2} \int_0^{z_m} dz \int_{-W/2}^{W/2}
(H^2 - 2 K) \sqrt{g} \, du \ .
\end{equation}
\end{document}
Study: Earth a Less Volatile Version of Sun
Planet composition key indicator of its habitability
Australian National University | Phys.org - March 18, 2019
ANU scientists have found that Earth is made of the same elements as the Sun but has less of the volatile elements such as hydrogen, helium, oxygen and nitrogen.
Lead author of the study, Dr. Haiyang Wang, said they made the best estimate of the composition of Earth and the Sun with the aim of creating a new tool to measure the elemental composition of other stars and rocky planets that orbit them.
"The composition of a rocky planet is one of the most important missing pieces in our efforts to find out whether a planet is habitable or not," said Dr. Wang from the ANU Research School of Astronomy and Astrophysics (RSAA).
Other rocky planets in the Universe are devolatilized pieces of their host stars, just like Earth.
Co-author and RSAA colleague Associate Professor Charley Lineweaver said every star had some kind of planetary system in orbit around it.
"The majority of stars probably have rocky planets in or near the habitable zone," he said.
Co-author Professor Trevor Ireland, from the ANU Research School of Earth Sciences, said the team conducted the study by comparing the composition of Earth rocks with meteorites and the Sun's outer shell.
"This comparison yields a wealth of information about the way the Earth formed. There is a remarkably linear volatility trend that can be used as a baseline to understand the relationships between meteorite, planet and stellar compositions," he said.
The research will be published in the journal Icarus.
\section{Introduction}
Convolutional networks, introduced in~\cite{Lecun98gradient-basedlearning}, have demonstrated excellent performance on image classification and other tasks.
There are at least two key components of this model: computational efficiency manifested by leveraging the convolution operator, and a deep architecture, in which the features of a given layer serve as the inputs to the next layer above.
Since that seminal contribution, much work has been undertaken on improving deep convolutional networks~\citep{Lecun98gradient-basedlearning},
deep deconvolutional networks~\citep{Zeiler10CVPR},
convolutional deep restricted Boltzmann machines~\citep{Lee09ICML},
and on Bayesian convolutional dictionary learning~\citep{Chen13deepCFA}, among others.
An important technique employed in these deep models is {\em pooling}, in which a contiguous block of features from the layer below are mapped to a single input feature for the layer above. The pooling step manifests robustness, by minimizing the effects of variations due to small shifts, and it has the advantage of reducing the number of features as one moves higher in the hierarchical representation (possibly mitigating over-fitting).
Methods that have been considered include average and maximum pooling, in which the single feature mapped as input to the layer above is respectively the average or maximum of the corresponding block of features below. Average pooling may introduce blur to learned filters~\citep{Zeiler13ICLR}, and use of the maximum (``max pooling'') is widely employed. Note that average and max pooling are deterministic.
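As a concrete illustration of the difference (a minimal sketch, not code from the papers cited): for non-overlapping pooling blocks, average and max pooling are deterministic functions of the block, while Zeiler-style stochastic pooling draws the pooled value from the block with probability proportional to the (assumed non-negative) activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def pool(feature_map, block=2, mode="max"):
    """Pool non-overlapping block x block regions of a 2D feature map."""
    H, W = feature_map.shape
    b = feature_map.reshape(H // block, block, W // block, block)
    b = b.transpose(0, 2, 1, 3).reshape(H // block, W // block, -1)
    if mode == "max":
        return b.max(axis=-1)
    if mode == "avg":
        return b.mean(axis=-1)
    # "stochastic": sample one activation per block with probability
    # proportional to its (assumed non-negative) magnitude.
    out = np.empty(b.shape[:2])
    for i in range(b.shape[0]):
        for j in range(b.shape[1]):
            blk = b[i, j]
            prob = (blk / blk.sum() if blk.sum() > 0
                    else np.full(blk.size, 1.0 / blk.size))
            out[i, j] = rng.choice(blk, p=prob)
    return out

x = rng.random((4, 4))
mx = pool(x, mode="max")
av = pool(x, mode="avg")
st = pool(x, mode="stochastic")
```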
Stochastic pooling proposed by~\cite{Zeiler13ICLR} and the probabilistic max-pooling used by~\cite{Lee09ICML} often improve the pooling process. The use of stochastic pooling is also attractive in the context of developing a generative model for the deep convolutional representation, as highlighted in this paper. Specifically, we develop a deep generative statistical model, which starts at the highest-level features, and maps these through a sequence of layers, until ultimately mapping to the data plane (e.g., an image). The feature at a given layer is mapped via a multinomial distribution to one feature in a block of features at the layer below (and all other features in the block at the next layer are set to zero).
This is analogous to the method in~\cite{Lee09ICML}, in the sense of imposing that there is {\em at most} one non-zero activation within a pooling block.
As we demonstrate, this yields a generative statistical model with which Bayesian inference may be readily implemented, with all layers analyzed jointly to fit the data.
We use bottom-up pretraining, in which initially we sequentially learn parameters of each layer one at a time, from bottom to top, based on the features at the layer below. However, in the refinement phase, all model parameters are learned jointly, top-down. Each consecutive layer in the model is locally conjugate in a statistical sense, so learning model parameters may be readily performed using sampling or variational methods. We here develop a Gibbs sampler for learning, with the goal of obtaining a maximum \emph{a posteriori} (MAP) estimate of the model parameters, as in the original paper on Gibbs sampling \citep{Geman1984} (we have found it unnecessary, and too expensive, to attempt an accurate estimate of the full posterior). The Gibbs sampler employed for parameter learning may be viewed as an alternative to typical optimization-based learning \citep{Lecun98gradient-basedlearning,Zeiler10CVPR}, making convenient use of the developed generative statistical model.
The work in \cite{Zeiler10CVPR,Chen13deepCFA} involves learning convolutional dictionaries, and at the testing phase one must perform a (generally) expensive nonlinear deconvolution step at each layer. In \cite{LeCun10NIPS} convolutional dictionaries are also learned at the training stage, but one also simultaneously learns a convolutional filterbank and nonlinear function. The convolutional filterbank can be implemented quickly at test (no nonlinear deconvolutional inversion) and, linked with the nonlinear function, this computationally efficient testing step is meant to approximate the deconvolutional network.
We propose an alternative approach to yield fast inversion at test, while still retaining an aspect of the nonlinear deconvolution operation. As detailed below, in the learning phase, we infer a deep hierarchy of convolutional dictionary elements, which if handled like in \cite{Zeiler10CVPR}, requires joint deconvolution at each layer when testing. However, leveraging our generative statistical model, the dictionary elements at the top of the hierarchy can be mapped through a sequence of linear operations to the image/data plane. At test, we only employ the features from the top layer in the hierarchy, mapped to the data plane, and therefore only a single layer of deconvolution need be applied. This implies that the test-time computational cost is independent of the number of layers employed during the learning phase.
This paper makes three contributions: ($i$) rather than employing beta-Bernoulli sparsity at each layer of the model separately, as in \cite{ICML2011Chen,Chen13deepCFA}, the sparsity is manifested via a multinomial process between layers, constituting stochastic pooling, and allowing coupling all layers of the deep model when learning; ($ii$) the stochastic pooling manifests a proper top-down generative model, allowing a new means of mapping high-level features to the data plane; and ($iii$) a novel form of testing is employed with deep models, with the top-layer features mapped to the data plane, and deconvolution only applied once, directly with the data. This methodology yields excellent performance on image-recognition tasks, as demonstrated in the experiments.
\vspace{-3mm}
\section{Modeling Framework}
\vspace{-2mm}
The proposed model is applicable to general data for which a convolutional dictionary representation is appropriate. One may, for example, apply the model to one-dimensional signals such as audio, or to two-dimensional imagery. In this paper we focus on imagery, and hence assume two-dimensional signals and convolutions. Gray-scale images are considered for simplicity, with straightforward extension to color.
\vspace{-3mm}
\subsection{Single-Layer Convolutional Dictionary Learning}
\vspace{-2mm}
Assume $N$ gray-scale images $\{{{\bf X}^{(n)}}\}_{n=1,N}$, with ${\bf X}^{(n)}\in\mathbb{R}^{N_x \times N_y}$; the images are analyzed jointly to learn the convolutional dictionary $\{{{\bf D}^{(k)}}\}_{k=1,K}$. Specifically
consider the model
\vspace{-2mm}
\begin{equation}\label{Eq:betabern}
{\bf X}^{(n)} = \sum_{k=1}^K {\bf D}^{(k)} \ast ({\bf Z}^{(n,k)}\odot {\bf W}^{(n,k)}) + {\bf E}^{(n)},
\vspace{-2mm}
\end{equation}
where $\ast$ is the convolution operator, $\odot$ denotes the Hadamard (element-wise) product, the elements of ${\bf Z}^{(n,k)}$ are in $\{0,1\}$, the elements of ${\bf W}^{(n,k)}$ are real, and ${\bf E}^{(n)}$ represents the residual. ${\bf Z}^{(n,k)}$ indicates which shifted version of ${\bf D}^{(k)}$ is used to represent ${\bf X}^{(n)}$.
Considering ${{\bf D}}^{(k)}\in {\mathbb R}^{n_{d_x} \times n_{d_y}}$ (typically $n_{d_x}\ll N_x$ and $n_{d_y}\ll N_y$), the corresponding weights ${\bf Z}^{(n,k)}\odot {\bf W}^{(n,k)}$ are of size $(N_x-n_{d_x} +1)\times(N_y - n_{d_y} +1)$.
Let $w_{i,j}^{(n,k)}$ and $z_{i,j}^{(n,k)}$ represent elements $(i,j)$ of ${\bf Z}^{(n,k)}$ and ${\bf W}^{(n,k)}$, respectively. Within a Bayesian construction, the priors for the model may be represented as
\citep{Paisley09ICML}:
\begin{eqnarray}
{ z}^{(n,k)}_{i,j} &\sim& {\rm Bernoulli}(\pi_{i,j}^{(n,k)}), \quad \quad \quad \pi_{i,j}^{(n,k)} \sim {\rm Beta}(a_0, b_0), \label{eq:beta-bern}\\
{ w}^{(n,k)}_{i,j} &\sim& {\cal N}(0, \gamma_w^{-1}), ~~\quad {\bf D}^{(k)}\sim {\cal N}(0, \gamma_d^{-1}{\bf I}), \quad {\bf E}^{(n)}\sim {\cal N}(0, \gamma_e^{-1}{\bf I}), \\
\gamma_w &\sim& {\rm Ga}(a_w, b_w), \quad\quad \gamma_d \sim {\rm Ga}(a_d, b_d), \quad\quad~~ \gamma_e \sim {\rm Ga}(a_e, b_e),
\end{eqnarray}
where $i= 1,\dots, N_x-n_{d_x} +1;~~j=1,\dots, N_y - n_{d_y} +1$, ${\rm Ga(\cdot)}$ denotes the gamma distribution, ${\bf I}$ represents the identity matrix, and $\{a_0,b_0, a_w,b_w, a_d, b_d, a_e, b_e\}$ are hyperparameters, for which default settings are discussed in \cite{Paisley09ICML,ICML2011Chen,Chen13deepCFA}. While the model may look somewhat complicated, local conjugacy admits Gibbs sampling or variational Bayes inference \citep{ICML2011Chen,Chen13deepCFA}.
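The generative (synthesis) direction of Eq. (\ref{Eq:betabern}) can be sketched directly; the sizes below are hypothetical, and only the forward sampling is shown, not the Gibbs inference.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)

# Hypothetical sizes (not from the paper): one 16x16 image, K=3
# dictionary elements of size 5x5.
Nx, Ny, K, nd = 16, 16, 3, 5
na_x, na_y = Nx - nd + 1, Ny - nd + 1   # activation-map size

gamma_d, gamma_w, gamma_e = 1.0, 1.0, 100.0
a0, b0 = 1.0, 50.0   # Beta hyperparameters favouring sparse activations

D = rng.normal(0.0, gamma_d ** -0.5, size=(K, nd, nd))  # D^(k) ~ N(0, 1/gamma_d)
X = np.zeros((Nx, Ny))
for kk in range(K):
    pi = rng.beta(a0, b0, size=(na_x, na_y))            # pi ~ Beta(a0, b0)
    Z = rng.random((na_x, na_y)) < pi                   # z ~ Bernoulli(pi)
    Wmap = rng.normal(0.0, gamma_w ** -0.5, size=(na_x, na_y))  # w ~ N(0, 1/gamma_w)
    X += convolve2d(D[kk], Z * Wmap, mode="full")       # D^(k) * (Z .* W)
X += rng.normal(0.0, gamma_e ** -0.5, size=(Nx, Ny))    # residual E
```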
In \cite{ICML2011Chen,Chen13deepCFA} a deep model was developed based on (\ref{Eq:betabern}), by using ${\bf S}^{(n,k)}\stackrel{\rm def}{=}{\bf Z}^{(n,k)}\odot{\bf W}^{(n,k)}$ as the input of the layer above. In order to do this, a pooling operation ({\em e.g.}, the max-pooling used in~\cite{Chen13deepCFA}) is employed, reducing the feature dimension as one moves to higher layers.
However, the model was learned by stacking layers upon each other, without subsequent overall refinement. This was because use of deterministic max pooling undermined development of a proper top-down generative model that coupled all layers; therefore, in \cite{Chen13deepCFA} the model in (\ref{Eq:betabern}) was used sequentially from bottom-up, but the overall model parameters were never coupled when learning.
To tackle this, we propose a {\em probabilistic pooling} procedure, yielding a top-down deep generative statistical structure, coupling all parameters when performing learning. As discussed when presenting results, this joint learning of all layers plays a critical role in improving model performance. The stochastic pooling applied here is closely related to that in \cite{Zeiler13ICLR,Lee09ICML}.
\vspace{-3mm}
\subsection{Pretraining \& Stochastic Pooling}
\vspace{-3mm}
Parameters of the deep model are learned by first analyzing one layer of the model at a time, starting at the bottom layer (touching the data), and sequentially stacking layers. The parameters of each layer of the model are learned separately, conditioned on parameters of the layers learned thus far (like in \cite{ICML2011Chen,Chen13deepCFA}). The parameters learned in this manner serve as {\em initializations} for the top-down refinement step, discussed in Sec.~\ref{sec:refine}, in which parameters at all layers of the deep model are learned jointly.
Assume an $L$-layer model, with layer $L$ the top layer, and layer 1 at the bottom, closest to the data. In the pretraining stage, the output of layer $l$ is the input to layer $l+1$, after pooling.
Layer $l\in\{1,\dots,L\}$ has $K_l$ dictionary elements, and we have:
\begin{eqnarray}
{\bf X}^{(n, l+1)} &=& \sum_{k_{l+1}=1}^{K_{l+1}} {\bf D}^{(k_{l+1}, l+1)} * \left({\bf Z}^{(n,k_{l+1},l+1)} \odot {\bf W}^{(n,k_{l+1}, l+1)}\right) + {\bf E}^{(n, l+1)} \label{Eq:x_lp1}\\
{\bf X}^{(n, l)} &=& \sum_{k_{l}=1}^{K_{l}} {\bf D}^{(k_{l}, l)} * \underbrace{\left({\bf Z}^{(n,k_{l},l)} \odot {\bf W}^{(n,k_{l}, l)}\right)}_{= {\bf S}^{(n,k_{l},l)}} + {\bf E}^{(n, l)} \label{Eq:x_l}
\end{eqnarray}
The expression ${\bf S}^{(n,k_l,l)}$ is a 2D (spatial) activation map, for image $n$, model layer $l$, dictionary element $k_l$. The expression ${\bf X}^{(n,l+1)}$ may be viewed as a 3D entity, with its $k_l$-th plane defined by a ``pooled'' version of ${\bf S}^{(n,k_l,l)}$ (pooling discussed next). The dictionary elements ${\bf D}^{(k_l,l)}$ and residual ${\bf E}^{(n,l)}$ are also three dimensional (each 2D plane of ${\bf D}^{(k_l,l)}$ and ${\bf E}^{(n,l)}$ is the spatial-dependent structure of the corresponding features), and the convolution is performed in the 2D spatial domain, simultaneously for each layer of the feature map.
We now discuss the relationship between ${\bf S}^{(n,k_l,l)}$ and layer $k_l$ of ${\bf X}^{(n,l+1)}$. The 2D activation map ${\bf S}^{(n,k_l,l)}$ is partitioned into $n_x\times n_y$ dimensional contiguous blocks (pooling blocks with respect to layer $l+1$ of the model); see the left part of Figure \ref{fig:max_pool}. Associated with each block of pixels in ${\bf S}^{(n,k_l,l)}$ is one pixel at layer $k_l$ of ${\bf X}^{(n,l+1)}$; the relative locations of the pixels in ${\bf X}^{(n,l+1)}$ are the same as the relative locations of the blocks in ${\bf S}^{(n,k_l,l)}$. Within each block of ${\bf S}^{(n,k_l,l)}$, either all $n_xn_y$ pixels are zero, or only one pixel is non-zero, with the position of that pixel selected stochastically via a multinomial distribution. Each pixel at layer $k_l$ of ${\bf X}^{(n,l+1)}$ equals the largest-amplitude element in the associated block of ${\bf S}^{(n,k_l,l)}$ ($i.e.$, max pooling). Hence, if all elements of a block of ${\bf S}^{(n,k_l,l)}$ are zero, the corresponding pixel in ${\bf X}^{(n,l+1)}$ is also zero. If a block of ${\bf S}^{(n,k_l,l)}$ has a (single) non-zero element, that non-zero element is the corresponding pixel value at the $k_l$-th layer of ${\bf X}^{(n,l+1)}$.
The bottom-up generative process for each block of ${\bf S}^{(n,k_l,l)}$ proceeds as follows (left part of Figure \ref{fig:max_pool}). The model first imposes that a given block of ${\bf S}^{(n,k_l,l)}$ is either all zero or has one non-zero element, and this binary question is modeled as the beta-Bernoulli representation of (\ref{Eq:x_l}). If a given block has a non-zero value, the position of that value in the associated $n_x\times n_y$ block is defined by a multinomial distribution, and its value is modeled as $w_{i,j}^{(n,k_l,l)}$ represented in (\ref{Eq:x_l}). The beta-Bernoulli step, followed by multinomial, are combined into one equivalent statistical representation, as discussed next.
\begin{figure}[tbp!]
\centering
\vspace{-3mm}
\includegraphics[scale=0.45]{Stackup4.pdf}
\vspace{-3mm}
\caption{\small{Schematic of the proposed generative process. Left: bottom-up pretraining, right: top-down refinement. (Zoom in for best visualization; a larger version can be found in the Supplementary Material.)}}
\vspace{-6mm}
\label{fig:max_pool}
\end{figure}
Let ${\bf z}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}}\in \{0,1\}^{n_xn_y}$ denote the $(i^{\prime}, j^{\prime})$-th block of ${\bf Z}^{(n,k_l,l)}$ at layer $l$, where $i^{\prime} = 1, \dots, \frac{N_x}{n_x}; j^{\prime} = 1,\dots, \frac{N_y}{n_y}$ assuming integer divisions.
We introduce a latent variable ${\bf c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}}\in\{0,1\}^{n_xn_y +1}$ to implement at most one non-zero element out of the $n_xn_y$ entries in $\{{ z}^{(n,k_l,l)}_{i^{\prime}, j^{\prime},m}\}_{m=1}^{n_xn_y}$ through
\begin{equation}
{z}^{(n,k_l,l)}_{i^{\prime}, j^{\prime},m} = { c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime},m}, \quad\quad
{\bf c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}} \sim {\rm Mult}(1; \boldsymbol{\theta}^{(n,k_l,l)}), \quad\quad
{\boldsymbol{\theta}}^{(n,k_l,l)} \sim {\rm Dir}\left(\frac{1}{n_x n_y +1}\right),
\end{equation}
where ${\rm Mult}(\cdot)$ and ${\rm Dir}(\cdot)$ denote the multinomial and Dirichlet distribution, respectively (the Dirichlet distribution has a {\em set} of parameters; here we imply that they are all equal, set to the value indicated in ${\rm Dir}(\cdot)$).
${\bf c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}}$ has $(n_xn_y +1)$ entries, of which only one is equal to 1. If the last element is 1, this means all $\{{ z}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}, m}\}_{m=1}^{n_x n_y} = 0$.
Since the $(i^{\prime}, j^{\prime})$-th block at layer $l$ corresponds to one element at layer $(l+1)$,
we have
\begin{equation} \label{Eq:Sxz}
s^{(n,k_{l}, l)}_{i^{\prime},j^{\prime},m}= {x}^{(n,k_l, l+1)}_{i^{\prime}, j^{\prime}} z^{(n,k_{l},l)}_{i^{\prime}, j^{\prime},m}, ~~\forall m=1,\dots, n_xn_y.
\end{equation}
Hence, if the last element of ${\bf c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}}$ is 1, all elements of block $(i^\prime,j^\prime)$ are zero; if not, the location of the non-zero element in the first $n_xn_y$ entries of ${\bf c}^{(n,k_l,l)}_{i^{\prime}, j^{\prime}}$ locates the position of the non-zero element in the corresponding block. The remaining parts of the model are represented as in (\ref{Eq:x_l}).
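To make the combined beta-Bernoulli/multinomial step concrete, the following NumPy sketch samples a single pooling block and its pooled pixel. This is purely illustrative: the function name `sample_pooling_block`, the stand-in weight value, and the toy block size are our assumptions, not the paper's (MATLAB) implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pooling_block(w, theta, rng):
    """Sample one n_x*n_y activation block of S and its pooled pixel.

    theta has n_x*n_y + 1 entries: the first n_x*n_y choose the position
    of the single non-zero element; the last entry selects the all-zero
    block. w stands in for the weight value placed at the chosen position.
    """
    c = rng.multinomial(1, theta)      # one-hot c over n_x*n_y + 1 outcomes
    z = c[:-1].astype(float)           # z: at most one non-zero indicator
    block = w * z                      # s = x * z, elementwise
    pooled = block.sum()               # the lone non-zero value, or 0
    return block, pooled

# A 3x3 pooling block (n_x = n_y = 3), with theta drawn from the
# symmetric Dirichlet with parameter 1/(n_x*n_y + 1), as in the model.
n = 9
theta = rng.dirichlet(np.full(n + 1, 1.0 / (n + 1)))
block, pooled = sample_pooling_block(2.5, theta, rng)
```

Because each block holds at most one non-zero entry, the sum over the block equals its largest-amplitude element, i.e., the max-pooled pixel.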
In the pretraining phase, we start with ${\bf X}^{(n,1)}$, which is the data ${\bf X}^{(n)}$. We learn $\{ {\bf S}^{(n,k_1,1)}\}_{k_1=1,K_1}$ using the blocked activation weights, via Gibbs sampling, where the multinomial distribution associates each non-zero element with a position in the corresponding block. The MAP Gibbs sample is then selected, defining model parameters for the layer under analysis. The ``stacked'' and pooled $\{{\bf S}^{(n,k_1,1)}\}_{k_1=1,K_1}$ are used to define ${\bf X}^{(n,2)}$, and the learning procedure then continues, learning dictionary elements ${\bf D}^{(k_2,2)}$ and activation maps $\{{\bf S}^{(n,k_2,2)}\}_{k_2=1,K_2}$, again via Gibbs sampling and MAP selection. This continues sequentially up to the $L$-th, or top, layer.
For the top layer, since no pooling is necessary, the beta-Bernoulli prior in (\ref{eq:beta-bern}) is used.
\vspace{-2mm}
\subsection{Model Refinement With Stochastic Pooling\label{sec:refine}}
\vspace{-3mm}
The learning performed with the top-down generative model (right part of Fig.~\ref{fig:max_pool}) constitutes a {\em refinement} of the parameters learned during pretraining, and the excellent initialization constituted by the parameters learned during pretraining is key to the subsequent model performance.
In the refinement phase, the equations are (almost) the same, but we now proceed top down, from (\ref{Eq:x_lp1}) to (\ref{Eq:x_l}). The generative process constitutes ${\bf D}^{(k_{l+1}, l+1)}$ and ${\bf Z}^{(n,k_{l+1},l+1)} \odot {\bf W}^{(n,k_{l+1}, l+1)}$, and after convolution ${\bf X}^{(n,l+1)}$ is manifested; the residual ${\bf E}^{(n,l)}$ is now absent at all layers, except layer $l=1$, at which the fit to the data is performed. Each element of ${\bf X}^{(n,l+1)}$ has an associated pooling {\em block} in ${\bf S}^{(n,k_l,l)}$. Via a multinomial distribution, as in pretraining, each element of ${\bf X}^{(n,l+1)}$ is mapped to one position in the corresponding block of ${\bf S}^{(n,k_l,l)}$, and all other elements in that $n_x\times n_y$ block are set to zero. Since ${\bf X}^{(n,l+1)}$ is manifested top-down as a convolution of ${\bf D}^{(k_{l+1}, l+1)}$ and ${\bf Z}^{(n,k_{l+1},l+1)} \odot {\bf W}^{(n,k_{l+1}, l+1)}$, ${\bf X}^{(n,l+1)}$ will in general have no elements exactly equal to zero (but many will be small, based on the pretraining). Hence, {\em each} block of ${\bf S}^{(n,k_l,l)}$ will have one non-zero element, with position defined by the multinomial\footnote{We also considered a model exactly as in pretraining, in which the pooling step allows a pixel in ${\bf X}^{(n,l+1)}$ to be mapped via the multinomial to an all-zero activation block at layer $l$; the results are essentially unchanged from the method discussed above.}.
During pretraining many blocks of ${\bf S}^{(n,k_l,l)}$ will be all-zero, since we imposed a sparse representation. During refinement this sparsity requirement is relaxed: in general each pooling block of ${\bf S}^{(n,k_l,l)}$ will have one non-zero element (the representation is still sparse), and this value is mapped via pooling to the corresponding pixel in ${\bf X}^{(n,l+1)}$. In pretraining the Dirichlet and multinomial distributions were of size $n_xn_y+1$, allowing the all-zero activation block; during refinement the multinomial and Dirichlet are of dimension $n_xn_y$. The corresponding $n_xn_y$ Dirichlet and multinomial parameters from pretraining are used as initializations for refinement.
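A minimal sketch of this top-down stochastic unpooling step follows; the function name `stochastic_unpool`, the toy sizes, and the flat multinomial parameters are illustrative assumptions, not the authors' code. Each layer-$(l+1)$ pixel is placed at one sampled position of its $n_x\times n_y$ block, with the remaining block entries set to zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_unpool(X, n_x, n_y, theta, rng):
    """Top-down refinement: each pixel of the layer-(l+1) map X is placed
    at one sampled position of its n_x-by-n_y block in S at layer l; all
    other entries of that block are zero. theta has n_x*n_y entries (no
    all-zero outcome during refinement)."""
    H, W = X.shape
    S = np.zeros((H * n_x, W * n_y))
    for i in range(H):
        for j in range(W):
            pos = rng.multinomial(1, theta).argmax()  # sampled position
            di, dj = divmod(pos, n_y)                 # row/col in block
            S[i * n_x + di, j * n_y + dj] = X[i, j]
    return S

X = rng.standard_normal((2, 2))   # a toy 2x2 layer-(l+1) map
theta = np.full(9, 1.0 / 9)       # flat multinomial over a 3x3 block
S = stochastic_unpool(X, 3, 3, theta, rng)
```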
\vspace{-4mm}
\subsection{Top-Level Features and Testing}
\vspace{-3mm}
In order to understand deep convolutional models, researchers have visualized dictionary elements mapped to the image level~\citep{Zeiler14ECCV}. One key challenge of this visualization is that one dictionary element at high layers can have {\em multiple} representations at the layer below, given different activations in each pooling block (in our model, this is manifested by the stochasticity associated with the multinomial-based pooling). \cite{Zeiler14ECCV} showed different versions of the same upper-layer dictionary element at the image level. Because of this capability of accurate dictionary localization at each layer, deep convolutional models perform well in classification.
However, also due to these multiple representations, during testing, one has to infer dictionary activations layer by layer (via deconvolution), which is computationally expensive.
In order to alleviate this issue, \cite{LeCun10NIPS} proposed an approximation method using convolutional filter banks (fast because there is no explicit deconvolution) followed by a nonlinear function.
Though efficient at test time, in the training step one must simultaneously learn deconvolutional dictionaries and associated filterbanks, and the choice of non-linear function is critical to the performance of the model. Moreover, in the context of the framework proposed here, it is difficult to integrate the approach of \cite{Kavukcuoglu08,LeCun10NIPS} into a Bayesian model.
We propose a new approach to accelerate testing.
After performing model learning (after refinement), we project top-layer dictionary elements down to the data plane. At test, deconvolution is only performed once, using the top-layer dictionary elements mapped to the data plane. The top-layer activation strengths inferred via this deconvolution are then used in a subsequent classifier. The different manifestations of a top-layer dictionary element mapped to the data plane are constituted by different (stochastic) pooling mappings via the multinomial. To select top-layer dictionary elements in the data plane, used for test, we employ maximum-likelihood (ML) dictionary elements, with ML performed across the different choices of the max pooling at each layer. Hence, after this ML-based top-layer dictionary selection, a pixel at layer $l+1$ is mapped to the same location in the associated layer $l$ block, for all convolutional shifts (same max-pooling map for all shifts at a given layer). Hence, the key approximation is that the stochastic pooling employed for each pixel at layer $l+1$ to a position in a block at layer $l$ is replaced by an ML-based \emph{deterministic} pooling (possibly a different deterministic map at each layer).
This simple approach has the advantage of \cite{Zeiler14ECCV} at test, in that we retain the deconvolution operation (unlike \cite{LeCun10NIPS}), but deconvolution must only be performed \emph{once} (not at each layer). In the experiments presented below, when visualizing inferred dictionary elements in the image plane, this ML-based dictionary selection is employed. More details on this aspect of the model are provided in the Supplementary Material.
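The ML-based deterministic pooling used at test time can be sketched as follows. Only the idea of replacing the per-pixel multinomial with a single argmax position, shared by all shifts at a given layer, is taken from the text; the function name `ml_unpool` and the toy parameters are hypothetical.

```python
import numpy as np

def ml_unpool(X, n_x, n_y, theta):
    """Test-time approximation: the stochastic per-pixel multinomial is
    replaced by one maximum-likelihood (ML) position, the argmax of theta,
    used for every pixel at this layer (a deterministic map, possibly
    different at each layer)."""
    pos = int(np.argmax(theta))        # ML pooling position within a block
    di, dj = divmod(pos, n_y)
    H, W = X.shape
    S = np.zeros((H * n_x, W * n_y))
    S[di::n_x, dj::n_y] = X            # same position in every block
    return S

theta = np.array([0.05] * 4 + [0.6] + [0.05] * 4)  # center of a 3x3 block
X = np.array([[1.0, 2.0], [3.0, 4.0]])
S = ml_unpool(X, 3, 3, theta)
```

With the center position winning the argmax, every pixel of `X` lands at the center of its $3\times 3$ block, so the same deterministic map is applied everywhere.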
\vspace{-5mm}
\section{Gibbs-Sampling-Based Learning and Inference}
\vspace{-4mm}
Due to local conjugacy at every component of the model, the local conditional posterior distribution for all parameters of our model is manifested in closed form, yielding efficient Gibbs sampling (see Supplementary Material for details). As in all previous convolutional models of this type, the FFT is leveraged to accelerate computation of the convolution operations, here within Gibbs update equations.
In the pre-training step, we select the ML sample from 500 collection samples, after first computing and discarding 1500 burn-in samples.
The same number of burn-in and collection samples, with ML selection, is performed for model refinement.
This ML selection of collection samples shares the same spirit as \cite{Geman1984}, in the sense of yielding a MAP solution (\emph{not} attempting to approximate the full posterior).
During testing, we select the ML sample across 200 deconvolutional samples, after first discarding 500 burn-in samples.
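The FFT acceleration mentioned above can be sketched with a minimal Fourier-domain 2D convolution; this is the standard device only, in illustrative form (the model's Gibbs updates embed such convolutions inside larger computations, and boundary handling there may differ).

```python
import numpy as np

def fft_conv2d(a, b):
    """Full linear 2D convolution computed in the Fourier domain:
    zero-pad both arrays to the full output size, multiply transforms,
    and invert."""
    H = a.shape[0] + b.shape[0] - 1
    W = a.shape[1] + b.shape[1] - 1
    A = np.fft.rfft2(a, s=(H, W))      # zero-padded real FFTs
    B = np.fft.rfft2(b, s=(H, W))
    return np.fft.irfft2(A * B, s=(H, W))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[1.0, 0.0], [0.0, 1.0]])
c = fft_conv2d(a, b)
```

For dictionary elements much smaller than the activation maps, this replaces an $O(N^2 n^2)$ spatial convolution with $O(N^2 \log N)$ transforms.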
\vspace{-4mm}
\section{Experimental Results}
\label{Sec:Exp}
\vspace{-4mm}
Here we apply our model to the MNIST and Caltech 101 datasets.
We compare dictionaries (viewed in the data plane) before and after refinement.
Classification results (average of 10 trials) using top-layer features are presented for both datasets.
As in \citep{Paisley09ICML}, the hyperparameters are set as $a_0=1/K, b_0=1-1/K$, where $K$ is the number of dictionary elements at the corresponding layer, and $a_w=b_w=a_d=b_d=a_e=b_e=10^{-6}$; these are standard hyperparameter settings \citep{Paisley09ICML} for such models, and no tuning or optimization was performed.
All code is written in MATLAB and executed on a desktop with a 3.8 GHz CPU and 24 GB of memory.
Model training including refinement with one class (30 images) of Caltech 101 takes about 40 CPU minutes, and
testing (deconvolution) for one image takes less than 1 second. These experiments were run on a single computer, for demonstration; acceleration via parallel implementation, GPUs~\citep{HintonNIPS2012}, and coding in C will be considered in the future, and the successes realized recently in accelerating convolution-based models of this type are transferable to our model.
\vspace{-3mm}
\paragraph{MNIST Dataset}
\begin{wraptable}{r}{0.55\textwidth}
\vspace{-6mm}
\caption{\small{Classification Error of MNIST data}}
\vspace{-4mm}
\centering
\small
\begin{tabular}{c|c}
Methods & Test error \\
\hline
DBN~\cite{Hinton06Science} & 1.20\% \\
\hline
CBDN~\cite{Lee09ICML} & 0.82\%\\
\hline
$\begin{array}{l}
\text{2-layer Conv. Net + 2-layer}\\
\text{Classifier~\cite{Jarrett09ICCV}} \end{array}$& 0.53\% \\
\hline
$\begin{array}{l}
\text{6-layer Conv. Net + 2-layer Classifier } \\
\text{+ elastic distortions~\cite{Ciresan11IJCAI}}
\end{array}$
& 0.35\% \\
\hline
MCDNN~\cite{ciresan2012multi} & 0.23\%\\
\hline
SPCNN~\cite{Zeiler13ICLR} & \\
Average Pooling & 0.83\% \\
Max Pooling & 0.55\% \\
Stochastic Pooling & 0.47\%\\
\hline
$\begin{array}{l}
\text{HBP~\cite{Chen13deepCFA},}\\
\text{2-layer cFA + 2-layer features} \end{array}$& \\
MCMC (10000 Training) & 0.89\% \\
Batch VB (10000 Training) & 0.95\% \\
online VB (60000 Training) & 0.96\% \\
\hline
Ours, 2-layer model + 1-layer features & \\
60000 Training & 0.42\% \\
10000 Training & 0.68\% \\
5000 Training & 1.02\% \\
2000 Training & 1.11\% \\
1000 Training & 1.66\%
\end{tabular}
\label{Table:Error_MNIST}
\vspace{-10mm}
\end{wraptable}
We first consider the widely studied MNIST data (\url{http://yann.lecun.com/exdb/mnist/}), which has 60,000 training and 10,000 testing images, each $28\times28$, for digits 0 through 9.
A two-layer model is used with dictionary size $8\times 8$ ($n_{d_x}=n_{d_y}=8$) at the first layer and $6\times 6$ at the second layer; the pooling size is $3\times 3$ ($n_x=n_y=3$) and the numbers of dictionary elements at layers 1 and 2 are $K_1=32$ and $K_2=160$, respectively.
We obtained these numbers of dictionary elements by setting the initial dictionary number to a relatively large value in the pre-training step and discarding infrequently used elements, as counted via the corresponding binary indicators ${\bf Z}$, {\em i.e.}, \emph{inferring} the number of needed dictionary elements, as in~\cite{Chen13deepCFA}.
Table~\ref{Table:Error_MNIST} summarizes the classification results of our model compared with related results on the MNIST data.
The second (top) layer features corresponding to the refined dictionary are sent to a nonlinear
support vector machine (SVM)~\citep{CC01a} with Gaussian kernel, in a one-vs-all multi-class classifier, with classifier parameters tuned via 5-fold cross-validation (no tuning on the deep feature learning).
Rather than concatenating features at all layers as in~\cite{Zeiler13ICLR,Chen13deepCFA}, we only use the top layer features as the input to the SVM (deconvolution is only performed with top-layer dictionary elements), which saves much computation time (as well as memory) in both inference and classification, since the feature size is small.
When the model is trained using all 60000 digits, we achieve an error rate of $0.42\%$ on testing, which is very close to the state-of-the-art, but with a relatively simpler model compared to~\cite{ciresan2012multi}; the error rate obtained using features learned after pretraining, before refinement, is similar to that in \cite{Chen13deepCFA} ($0.9\%$ error), underscoring the importance of the refinement step.
We further plot the testing error in
Fig.~\ref{Fig:MNIST_missing2} (bottom part) when the training size is reduced compared to the results reported in~\cite{Zeiler13ICLR}.
It can be seen that our model outperforms every approach in~\cite{Zeiler13ICLR}.
\begin{figure}[htbp!]
\centering
\vspace{-4mm}
\setcounter{subfigure}{0}
\subfloat[]{\label{Fig:MNIST_dict}\includegraphics[width=\textwidth, height=2.5cm]{MNIST_dict_all.pdf}}\\
\vspace{-4mm}
\subfloat[]{\label{Fig:MNIST_missing}\includegraphics[width=0.65\textwidth, height=5.3cm]{MNIST_missing_all_1.pdf}}
\quad~~
\subfloat[]{\label{Fig:MNIST_missing2}\includegraphics[width=0.3\textwidth,height=5.3cm]{MNIST_missing_all_2_joint.pdf}}
\vspace{-4mm}
\caption{\small{(a) Visualization of the dictionary learned by the proposed model. Note the refined dictionary (right) is much sharper than the dictionary before refinement (middle). (b) Missing data interpolation of digits.
(c) Upper part: a more challenging case for missing data interpolation of digits. Bottom part: testing error when training with reduced dataset sizes on MNIST.}
}
\vspace{-6mm}
\label{fig:reconSpec}
\end{figure}
In order to examine the properties of the learned model, in Fig.~\ref{Fig:MNIST_dict} we visualize trained dictionaries at layer 2 mapped down to the data level.
It is observed qualitatively that refinement improves the dictionary; the atoms after refinement are much sharper.
If the average pooling described in~\cite{Zeiler13ICLR} is used, the dictionaries are blurry (middle-left part of Fig.~\ref{Fig:MNIST_dict}).
When a threshold is imposed on the refined dictionary elements, they look like digits (rightmost part).
To further verify the efficacy of our model, we show in Fig.~\ref{Fig:MNIST_missing} the interpolation results of digits with half missing, as in~\cite{Lee09ICML}. A one-layer model cannot recover the digits, while a two-layer model provides a good recovery (bottom row of Fig.~\ref{Fig:MNIST_missing}). Furthermore, by using our refinement approach, the recovery is much clearer (comparing the bottom-left part and bottom-middle part of Fig.~\ref{Fig:MNIST_missing}).
Given this excellent performance, more challenging interpolation results are shown in Fig.~\ref{Fig:MNIST_missing2} (upper part), where we cannot identify any digits from the observations; even in this case, the model can provide promising reconstructions.
\begin{figure}[htbp!]
\centering
\vspace{-3mm}
\includegraphics[width=\textwidth]{face_dict_all3_2.pdf}
\vspace{-6mm}
\caption{\small{Dictionary elements in each layer trained with 64 ``face easy" images from Caltech 101}.}
\label{Fig:face_train}
\vspace{-4mm}
\end{figure}
\begin{figure}[htbp!]
\centering
\vspace{-1mm}
\includegraphics[width=\textwidth]{face_miss_2layer.pdf}
\vspace{-7mm}
\caption{\small{Face data interpolation using a 2-layer model. From left to right: truth, observed data, layer-1 recovery, layer-2 recovery.} }
\label{Fig:face_miss}
\vspace{-3mm}
\end{figure}
\vspace{-3mm}
\paragraph{Caltech 101 Dataset}
We next consider the Caltech 101 dataset.
First we analyze our model with images in the ``easy face" category;
64 images (after local contrast normalization~\citep{Jarrett09ICCV}) have been resized to $128\times 128$ and a three-layer deep model is used.
At layers 1, 2 and 3, the number of dictionary elements is set respectively to $K_1=16$, $K_2=24$ and $K_3=36$ (these inferred in the pretraining step, as discussed above), with dictionary sizes $17\times 17$, $9\times 9$ and $6\times 6$.
The pooling sizes are $4\times 4$ (layer 1 to layer 2) and $2\times 2$ (layer 2 to layer 3).
Example learned dictionary elements are mapped to the image level and shown in Fig.~\ref{Fig:face_train}. It can be seen that the first-layer dictionary extracts edges of the images,
while the second-layer dictionary elements look like a part of the face and the third-layer elements are almost entire faces.
We can see the improvement manifested by refinement by comparing the right two parts in Fig.~\ref{Fig:face_train} (the dictionaries after refinement are sharper).
Similar to the MNIST example, we also show in Fig.~\ref{Fig:face_miss} the interpolation results of face data with half missing, using a two-layer model (the dictionary sizes are $14\times 14$ and $13\times 13$ at layers 1 and 2, respectively, with max-pooling size $3\times 3$).
It can be seen that the missing parts are recovered progressively more accurately as we move from a one-layer to a two-layer model.
Though the background is a little noisy, each face is recovered in great detail by the second layer dictionary (a three-layer model gives similar results, omitted here for brevity).
\begin{figure}[htbp!]
\centering
\vspace{-3mm}
\includegraphics[width=\textwidth,height = 5cm]{101_more.pdf}
\vspace{-5mm}
\caption{\small{Trained dictionaries per class mapped to the data plane. Row 1-2: nautilus, revolver. Column 1-4: training images after local contrast normalization, layer-1 dictionary, layer-2 dictionary, layer-3 dictionary.}}
\label{Fig:SepDict}
\vspace{-5mm}
\end{figure}
We develop Caltech 101 dictionaries by learning on each data class in isolation, and then concatenate all (top-layer) dictionaries when learning the classifier.
In Figure~\ref{Fig:SepDict} we depict dictionary elements learned for two data classes, projected to the image level (more results are shown in the Supplementary Material). It can be seen that the layer-1 dictionary elements are similar for the two data classes, while the upper-layer dictionary elements are data-class dependent.
One problem of this parallel training is that the dictionary may be redundant across image classes (especially at the first layer). However, during testing, using the proposed approach, we only use top-layer dictionaries, which are typically distinct across data classes (for the data considered).
\begin{wraptable}{r}{0.56\textwidth}
\vspace{-4mm}
\caption{ \small{Classification Accuracy Rate of Caltech-101.}}
\vspace{-0.3cm}
\centering
\small
\begin{tabular}{c|c|c}
\# Training Images per Category & 15 & 30 \\
\hline
DN~\cite{Zeiler10CVPR} & 58.6 \% & 66.9\% \\
\hline
CBDN~\cite{Lee09ICML} & 57.7 \% & 65.4\% \\
\hline
HBP ~\cite{Chen13deepCFA} & 58\% & 65.7\% \\
\hline
ScSPM ~\cite{yang09CVPR} & 67 \% & 73.2\% \\
\hline
P-FV ~\cite{seidenari2014local} & 71.47\% & 80.13\% \\
\hline
R-KSVD ~\cite{li2013reference} & 79 \% & 83\% \\
\hline
Convnet~\cite{Zeiler14ECCV} & 83.8 \% & 86.5\% \\
\hline
Ours, 2-layer model + 1-layer features & 70.02\% & 80.31\% \\
\hline
Ours, 3-layer model + 1-layer features & 75.24\% & 82.78\%
\end{tabular}
\label{Table:accuracy_caltech101}
\vspace{-0.4cm}
\end{wraptable}
For Caltech 101 classification, we follow the setup in~\cite{yang09CVPR}, selecting 15 and 30 images per category for training, and testing on the rest.
The features of testing images are inferred based on the top-layer dictionaries and sent to a multi-class SVM; we again use a Gaussian kernel non-linear SVM with parameters tuned via cross-validation.
Ours and related results are summarized in Table~\ref{Table:accuracy_caltech101}.
For our model, we present results based on 2-layer and 3-layer models.
It can be seen that our model (the 3-layer one) provides results close to the state-of-the-art in~\cite{Zeiler14ECCV}, which used a much more complicated model ({\em i.e.}, a 7-layer convolutional network pretrained on the ImageNet dataset), and our results are also very close to the state-of-the-art results using hand-crafted features ({\em e.g.}, SIFT in \cite{li2013reference}).
Based on features learned by our model at the pretraining stage, our classification performance is comparable to that of the HBP model in \cite{Chen13deepCFA} (around 65\% accuracy for a 2-layer model, when training with 30 examples per class), with our results demonstrating a 17\% improvement in performance after model refinement.
\vspace{-3mm}
\section{Conclusions}
\vspace{-4mm}
A deep generative convolutional dictionary-learning model has been developed within a Bayesian setting, with efficient Gibbs-sampling-based MAP parameter estimation. The proposed framework enjoys efficient bottom-up and top-down probabilistic inference. A probabilistic pooling module has been integrated into the model, a key component to developing a principled top-down generative model, with efficient learning and inference.
Extensive experimental results demonstrate the efficacy of the model to learn multi-layered features from images.
A novel method has been developed to project the high-layer dictionary elements to the image level, and efficient single-layer deconvolutional inference is accomplished during testing. On the MNIST and Caltech 101 datasets, our results are very near the state of the art, but with relatively simple model complexity at test.
Future work includes performing deep feature learning and classifier design jointly. The algorithm will also be ported to a GPU-based implementation, allowing scaling to large-scale datasets.
After the failed Moscow Coup in August 1991, the following syndicated article, featuring Daiva Venckus, appeared on August 25, 1991. A One-Way Ticket To The Revolution references conversations with the journalist as expressed in the article.
http://articles.latimes.com/1991-08-25/opinion/op-1933_1_lithuanian-parliament
Soviet Showdown : And the Winner Is. . . : Personal Perspective : On the Line to Vilnius: While the World Watches Moscow, Lithuanians Face Their Test
August 25, 1991|Sherry Ricchiardi | Sherry Ricchiardi, formerly a reporter with the Des Moines Register, teaches journalism at Indiana University. She is co-author of "Women on Deadline" (Iowa State University Press)
WASHINGTON — The overseas operator offered little hope of getting through to Lithuania last Monday, due to "problems in the Soviet Union."
The "problems" included a coup and the ouster of Mikhail S. Gorbachev just 12 hours before. I urged her to keep trying.
Moments later, I heard the voice of Daiva Venckus, 25, a Los Angeles native who, along with dozens of others, was barricaded in the Lithuanian Parliament building in the capital city.
"They are occupying the communication centers. We are being cut off from the West," said Venckus, who arrived in Vilnius on Jan. 15, just two days after Soviet tanks rumbled into town, leaving 14 dead.
It was 8:30 p.m., Lithuanian time, day one of the coup. Venckus, an editor for the government Bureau of Information, was struggling against the odds to get news out.
Worried that we could be cut off, she poured out accounts of tank columns on the move, assaults on radio and TV towers, thousands of protesters in the streets. She feared that the Parliament building, a symbol of Lithuanian independence, was high on the Red Army's hit list.
"We don't know whether they'll try to kill us or force us out. If tanks come, I won't leave. I'll stay on the telephone as long as I can to tell the world what is happening," Venckus said over the crackling phone lines. Flashes of this fierce defiance had surfaced three weeks earlier when I met Venckus for an interview while I was traveling in the Baltic. She was in Lithuania to "join the struggle for independence. I believe I can make a difference here," she said.
She greeted me and another journalist at the entrance of the Parliament under the watchful eyes of security guards. She was pleased to find American journalists interested in the Baltics.
"Some are afraid to come here," she said, as she guided us through the maze of electrical wires, armed militia and blurry-eyed workers who, like herself, were putting in 12-hour days. Her attire–black blazer, blue jeans and silver-tipped motorcycle boots–would have been at home in a disco.
Venckus put her career on hold to stand up for her parents' homeland, earning 431 rubles (an equivalent of $15) per month. In July, she became a citizen of Lithuania as well as of the United States. "I am making up for what was taken away from my parents nearly 50 years ago," she explained.
As we departed, Venckus said, "If you need more information, call."
Now she was a voice on the telephone thousands of miles away, holed up behind cement barricades with tanks heading in her direction. There were at least two other Americans who vowed to stay.
By midnight, a crowd of 5,000 had gathered to serve as a shield in front of the Parliament. Simultaneously, Americans watched as civilians gathered around the Russian Parliament building in Moscow. Later, armed with flags, slogans and lofty ideals, they would stand off the tanks.
What drives human beings to fight guns with words? Venckus dreams of writing a book and going to graduate school. Instead, she is sitting on a powder keg. Many activists draw strength from knowing they are right. The repression of human beings cannot triumph, no matter what the price. Or maybe the courage to face tanks comes when there is nothing left to lose.
Ricardas Rimkus, a Vilnius schoolteacher who was blacklisted for voicing the wrong political views, told me that it's easier to stand up against the enemy when "you have been stripped of everything."
"The Communists and the KGB have made a hell for us here," he said.
In July, I interviewed journalists who had been forced out of their jobs by thugs who occupied communication centers throughout Vilnius. Two hundred are participating in a hunger strike. "We will continue this protest until we are allowed to operate a free press," said Skirmantas Valiulis, general director of Lithuanian television.
Valiulis knows he is high on the KGB hit list if the independence movement fails. The building where he once worked wears the scars of bullets fired the night of the takeover.
As for Venckus, she had been taking a break in the resort city of Niva, 180 miles from the capital city, when she learned of the coup this past week. "I ran to a phone to call Vilnius, but I couldn't get through," she recalled. She hailed the nearest cab and peeled off $61, all the American money she had, for a frantic drive through the countryside.
Back home in Los Angeles, her parents, Elena and Roma Venckus, scanned newspapers and television for news. "We notified the American Embassy that Daiva is in Vilnius," Elena Venckus said. "We left Lithuania so that we could be free. We don't want our daughter to die there."
Last Wednesday, day three of the coup, Daiva was on the telephone describing how a column of tanks approached the Parliament building. "We shut off all the lights and closed the curtains. Everybody had their gas masks ready," she said. But this time, it was psychological warfare. The tanks left without firing a shot.
Later that day, the world witnessed the triumph of the human spirit over tanks and guns in Moscow. But in Vilnius, the price still was being paid. Soviet troops advanced on the Lithuanian Parliament building. Gunfire left two wounded and a Lithuanian security guard dead. The TV reporter labeled it a "minor incident."
Mikhail Gorbachev's homecoming overshadowed the event.
When Daiva returned to America for a Christmas visit in December 1991, she was interviewed by the Los Angeles Daily News (www.dailynews.com).
For further information on A One-Way Ticket To The Revolution, please
© Copyright Daiva Venckus
The Crossrail we have to bear - Spear's Magazine
The Crossrail we have to bear
25th July 2017 by Alec Marsh
News that Crossrail 2 is firmly back on the agenda is welcome, but the question is when can we have Crossrails 3 and 4 – which London desperately needs, writes Alec Marsh
Having unceremoniously dropped Crossrail 2 from the recent Queen's Speech, that Transport Secretary Chris Grayling now says he's backing the £31 billion proposed underground route for London is to be welcomed in the strongest possible terms.
Welcome yet still further is that Mr Grayling has forged an agreement with the Labour Mayor of London, Sadiq Khan, on the matter. Indeed, in the words of one London infrastructure watcher, David Leam of London First: 'With this joint statement, Crossrail 2 has moved forward – from whether we do it to how we do it.'
And nowhere should this welcome be echoed more strongly than in the City of London and the capital's thriving financial services industry, which desperately needs a viable transportation system for its staff, underpinned by a broader housing strategy. But it's not just necessary for the capital's workers: don't forget that we also need a transport system worthy of the precious HNWs that we would like to welcome to these shores, too – and their dependants and loved ones.
Not so long ago the veteran Spear's contributor Alessandro Tome wrote about using the Tube for the first time in yonks: it read like the descent into hell as told by Dante.
So the government backing of Crossrail 2 is not before time: linking the glories of Hertfordshire to the north east of our sprawling metropolis to the shores of Surrey in the south west, Crossrail 2 will add capacity for 270,000 journeys at peak times and connect an array of useful transport hubs – including King's Cross and Victoria. It'll also relieve pressure on Waterloo.
Its supporters suggest that its impact will ape that of its nominative predecessor, the still as yet unfinished £15 billion-Crossrail – now officially designated the Elizabeth line. This was begun in 2009, will open in 2018, and will increase capacity of the grossly inundated underground network by ten per cent, making up around 200 million passenger journeys a year. It itself is the biggest increase in the underground's capacity since the Second World War – which tells you something about how much that conflict cost the country. Ironically, a north west to south line like Crossrail 2 was proposed back in 1944 – just 73 years ago. (We had to wait less time for a men's winner at Wimbledon.)
Moreover, the astonishingly good news is that Crossrail 2 is estimated to further increase the capacity of the Underground network by another ten per cent. If the terms of the National Infrastructure Commission's statement in 2016 are adhered to, then a bill should go before Parliament in 2019, to pave the way for construction to be completed by 2033. By then of course, London will probably need Crossrail 3 and 4: that's if teleporters haven't been invented.
For anyone who uses the London Underground now, especially in the summer, Crossrail and Crossrail 2 can't come soon enough. If you were to travel on the Central line, queuing in stations as escalators clear and barriers are moved through, you would be well within your rights to hazard that London doesn't just need a ten or 20 per cent capacity increase – it needs a 100 per cent capacity increase – and fast. And with the capital's population due to rise by more than a million to 10 million by 2029, the pressure will only mount.
So the question – and London knows it best – is certainly not if the capital needs it; it's when, and how many more. The only question outstanding is what to call Crossrail 2 – the Prince of Wales line, anyone?
Alec Marsh is editor of Spear's
Phrase of the Week: Tearfully Urge
Posted by Josh Rudolph | May 10, 2018
The Word of the Week comes from the Grass-Mud Horse Lexicon, a glossary of terms created by Chinese netizens and encountered in online political discussions. These are the words of China's online "resistance discourse," used to mock and subvert the official language around censorship and political correctness.
hánlèi quàngào 含泪劝告
Yu Qiuyu with a copy of his inflammatory essay. (Artist: Ah Niu 阿牛)
Infamous appeal to victims of the 2008 Sichuan earthquake to stop complaining about the shoddy construction of schools, which led to the deaths of thousands of school children.
The 7.9-magnitude earthquake that struck mountainous Wenchuan County, Sichuan on May 12, 2008 claimed the lives of over 5,000 children, whose poorly built "tofu dregs" schools collapsed on them. Government buildings weathered the quake far better. Angry parents were begged to stop protesting. Some were detained at the scene or at protests, while others called off memorials following official warnings.
A month after the quake, literary figure Yu Qiuyu penned the essay "A Tearful Request for the Earthquake Survivors" (含泪劝告请愿灾民), in which he implored parents to stop their protests. He suggested the parents were being used by those with "ulterior motives" and by "anti-Chinese forces." Many netizens accused Yu of trying to protect the corrupt politicians and contractors whose greed and negligence had led to so many deaths. Some "tearfully urged Yu Qiuyu to jump in a river" (含泪劝告余秋雨去投江).
Angered by the criticism and the parodies of his essay, Yu wrote a reply to his critics, "You Are Not Permitted to Continue Insulting the Chinese People" (不准继续侮辱中国人).
Blogger Huang Lin has written an exposition on the "fashion of the tearfully urging writing style."
Artist Ai Weiwei led an effort to collect the names of the schoolchildren, resulting in the performance piece "Commemoration." He and the volunteers who came with him to Sichuan faced police harassment. Ai also created an installation spelling out a quote from one grieving mother—"she lived happily on this earth for seven years"—using 9,000 student backpacks. The activist Tan Zuoren served a five-year sentence for "inciting subversion of state power" after he tried to compile a list of the children's names.
In the lead-up to the 10th anniversary of the deadly quake, Wenchuan County officials announced that May 12, 2018 would be declared a day of "thanksgiving" for government rebuilding efforts over the previous decade. This announcement drew scorn from Chinese web-users who felt that a day of memorial for the many thousands of lives lost would be far more appropriate. The New York Times' Tiffany May reports:
After Wenchuan County officials announced the day of thanksgiving to mark the anniversary on Saturday, the state news media described "beautiful, tidy buildings" that now populate the most ravaged disaster zone. The report noted that local residents often expressed their indebtedness for the "gushing springs of generosity" they had received — a sentimental adage.
[…] "Everyone knows that the earthquake killed tens of thousands of people on that day, and yet you call it 'Thanksgiving Day,'" a Weibo user said. "What do we give thanks for?"
"Can't it be called 'Memorial Day?'" another user asked. "Gratitude at the tip of the tongue is the most hypocritical way of giving thanks."
Others suggested alternative names for the anniversary: "Day of the Earthquake Victims," "Day of Suffering" and even "Day of Shame." [Source]
In an interview with CDT, historian Jeremy Brown explained that "handling" of grieving parents is a common pattern following "sudden incidents." After the deaths of 288 schoolchildren in the 1994 Karamay fire, for example, parents "became targeted as troublemakers and came under surveillance themselves."
The Party's definition of tufashijian includes three things: accidents, natural disasters, and protests, or political disturbances. The Party has decided to put these three things together. I think that's also a case of "lesson not learned" from all the targeting of families and accident victims that we've seen throughout the Mao period and continuing into today. There's an automatic assumption by putting those things together that these are threats to the stability of the Communist Party, and so they need to be handled in the same way. Instead of being transparent, or treating accident victims compassionately, the impulse is to cover it up, and target them for surveillance and crackdowns. Because that's what you do to a protest, right?
I think there's a recognition by putting natural disasters in the same category as accidents that the Party's going to be judged based on its handling of it, and that people are going to make the links between infrastructure and casualty patterns. [Source]
See also even the destruction is a blessing.
Can't get enough of subversive Chinese netspeak? Check out our latest ebook, "Decoding the Chinese Internet: A Glossary of Political Slang." Includes dozens of new terms and classic catchphrases, presented in a new, image-rich format. Available for pay-what-you-want (including nothing). All proceeds support CDT.
Pardon me! Couldn't get you?
Lovely lines with this beautiful butterfly. Thank you.
Beautiful, both the butterfly and the poem.
Although it doesn't feel like that at the time. If and when (you get used to what is happening to you) it's easier to deal with.
Yes, Transformation is always felt after the process is over, it is good, cause we human beings are scared of change, so ignorance becomes bliss. Thank you for commenting dear.
Lovely poem and beautiful image.
Great title, beautiful image and excellent writing. Keep writing and inspiring.
Thank you, Sri. Glad you liked it. Will write to make you visit Quill again.
Tyson McGuffin Pickleball 102 Clinics
Home/Tyson McGuffin Pickleball 102 Clinics
March 30, 2021 @ 9:00 am – 10:30 am
$90/3-days or $40/1-day
Classes: Pickleball
Pickleball 102 – This class is for 3.0-3.5 level players
This 102 class carries over from the 101. We will go over Pickleball fundamentals, technique, doubles tactics, court positioning and some mental toughness Q's. This class will have 3.0-3.5 curriculum, drills and games.
Tuesday, March 30th, April 13th, & April 27th
9:00am-10:30am
Tuesdays, May 4th, 11th, & 18th
3-days $90/person
or 1-day $40/person
The HUB's TM PB Coaching Professionals
KYLE MCKENZIE
Kyle McKenzie lives in Spokane, WA with his wife, Kali McKenzie, and their four young children. Originally from Sequim, WA, where his former high school tennis coach introduced him to pickleball. A former Division I tennis player, Kyle took to pickleball quickly and was competing and having success in pro-level events the following year. He joins TM camps with quite a bit of coaching/instruction experience. He has spent the last two years as a traveling instructor for LevelUP pickleball camps. He achieved a career-high singles ranking of #12 in the world in early 2020 (Global Pickleball Rankings).
Private Lessons $90/hr – 2 & Me $45/person – 3 & Me $40/person – 4 & Me $30/person
Contact: kyle@tysonmcguffin.com
RAFA HEWETT
Rafa Hewett comes from the small town of Lewiston, Idaho. He is a third-generation cattle farmer and works full time as a rancher and wine vintner. Rafa enjoyed a successful collegiate tennis career at Point University, where he is the all-time leader in career wins. After college, he picked up the sport of pickleball and quickly realized he had what it took to become a professional. He trains weekly in the Pacific Northwest with world #2 Tyson McGuffin and in just one short year has already risen to #17 in the world, with 5 career titles to his name.
Private Lessons $50/hr – 2 & Me $25/person – 3 & Me $20/person – 4 & Me $18/person
Q: While applying overlay images to a photo taken with the camera, my iOS app terminates. The app takes an image either from the gallery or from the camera and converts it into a pencil sketch; overlay images are then applied on top of the sketch. With a gallery image everything works fine, but with a camera image the app crashes due to a memory leak when the overlays are applied. Can anybody help?
thank you in advance
A: If you use UIImagePickerControllerEditedImage instead of UIImagePickerControllerOriginalImage, you can avoid this type of memory issue: the edited image is typically much smaller than the full-resolution camera original, which can exhaust memory during processing. Please refer to this tutorial for reference.
// In the imagePickerController:didFinishPickingMediaWithInfo: delegate
// callback, prefer the edited image over the full-resolution original:
UIImage *chosenImage = info[UIImagePickerControllerEditedImage];
Gaston Velle, born Gaston Balthazar Velle in Rome on 24 December 1868 and died in the 10th arrondissement of Paris on 8 January 1953, was a French stage magician and film director who worked for Louis and Auguste Lumière, for Pathé, and later in Italian cinema.
Biography
The son of the magician Joseph Velle and of Louise Joren, Gaston Velle was born in Rome in 1868. A few years after his father's death in 1889, he began performing as an illusionist himself, under the same stage name, Professeur Velle.
From 1903 onwards, he directed most of the fairy-tale and trick films made during the heyday of Pathé's production. He made more than 50 films there, up until 1913.
In 1906 he joined the Italian company Cines in Rome, where he served as artistic director. His return to Pathé in Vincennes at the end of 1907 sparked a heated controversy between the two competing firms, with Cines accusing Pathé of plagiarism. He retired in 1913, and almost nothing is known about his later output.
Gaston Velle died in Paris in 1953. He was buried in the Parisian cemetery of Pantin; in 1993 his remains were transferred to L'Isle-sur-la-Sorgue.
Filmography
1904 – Dévaliseurs nocturnes
1904 – La danse du Kickapoo
1904 – Le paravent mystérieux
1904 – La danse des apaches
1904 – Danses plastiques
1904 – Les dénicheurs d'oiseaux
1904 – La métamorphose du papillon
1904 – Japonaiseries
1904 – La Valise de Barnum
1904 – Métamorphose du roi de pique
1904 – Un drame dans les airs
1904 – Le Chapeau magique
1905 – La poule aux œufs d'or
1905 – Sidney, le clown aux échasses
1905 – La fée aux fleurs
1905 – Rêve à la lune or L'amant de la lune
1905 – Les cartes lumineuses or Les cartes transparentes
1905 – Ruche merveilleuse
1905 – Coiffes et coiffures
1905 – John Higgins, le roi des sauteurs
1905 – L'antre infernal
1905 – Un drame en mer
1905 – L'album merveilleux
1906 – Le Bazar du Père Noël (Il Bazar di Natale)
1906 – Onore rusticano
1906 – Il pompiere di servizio
1906 – Heures de la mondaine (Le ore di una mondana)
1906 – Enlèvement à bicyclette (Il ratto di una sposa in bicicletta)
1906 – Il dessert di Lulù
1906 – L'accordéon mystérieux (L'organetto misterioso)
1906 – Bicyclette présentée en liberté
1906 – Mariage tragique (Nozze tragiche)
1906 – Le garde fantôme
1906 – Les Invisibles
1906 – La Peine du talion
1906 – Les effets de la foudre
1906 – Cuore e patria
1906 – La fée aux pigeons
1906 – Quarante degrés à l'ombre (Quaranta gradi all'ombra)
1906 – Otello
1906 – Voyage autour d'une étoile
1906 – L'écrin du rajah
1906 – Les Fleurs animées
1907 – Petit Jules Verne
1907 – Le petit prestidigitateur
1907 – Le secret de l'horloger (Patto infernale)
1907 – Au pays des songes (Nel paese dei sogni)
1907 – Artista e pasticciere 1907 – Gitane (La gitana)
1907 – Pile électrique (La pila elettrica)
1907 – Petit Frégoli (Il piccolo Fregoli)
1907 – Fille du chiffonnier (La figlia del cenciaiolo)
1907 – Primavera senza sole
1907 – Triste jeunesse (Triste giovinezza)
1907 – Un moderno Sansone
1907 – Le crime du magistrat (Il delitto del magistrato)
1908 – Après le bal (Dopo un veglione)
1908 – Lapins du docteur (I conigli del dottore)
1908 – Confession par téléphone (La confessione per telefono)
1908 – Première nuit de noces (La prima notte)
1908 – Polichinelle (Le avventure di Pulcinella)
1908 – Le spectre(Lo spettro)
1908 – Dîner providentiel (Pranzo provvidenziale)
1908 – Le triple rendez-vous(Triplice convegno)
1910 – Isis
1910 – Cagliostro, aventurier, chimiste et magicien
1910 – Au temps des Pharaons
1910 – La rose d'or
1910 – Le charme des fleurs
1910 – Le fruit défendu
1910 – L'oracle des demoiselles
1910 – Rêve d'art
1911 – Fafarifla ou le fifre magique
1911 – L'armure de feu
1911 – Le cauchemar de Pierrot
1913 – La nuit rouge
\section{Introduction}
In most system identification problems, the input signal---that is, the
independent variable---is perfectly known~\cite{ljung1999system}. Often, the
input signal is the result of an identification experiment, where a signal with
certain characteristics is designed and applied to the system to measure its
response. However, in some applications, the hypothesis that the input signal is
known may be too restrictive. In this work, we propose a new model structure
that accounts for partial knowledge about the input signal and we show how many
classical system identification problems can be seen as problems of identifying
instances of this model structure.
The proposed model structure, which we call \emph{uncertain-input model}, is composed of a linear
time-invariant dynamical system (the \emph{linear system}) and of a signal
of which partial information is available (the \emph{unknown input}). In the
next section, we characterize formally the unknown input; before that, we give
some examples of classical models that can be seen as uncertain-input models.
The \emph{Hammerstein model} is a cascade composition of a static nonlinear
function followed by a linear time-invariant dynamical
system~\cite{bai1998optimal,giri2010block,risuleo2015kernel}. In the Hammerstein
model, the (perfectly known) input signal passes through the unknown static
nonlinear function. After the nonlinear transformation, the signal which
is fed to the linear system, is completely unknown.
However, some characteristics of the signal may be known; for instance, we may
known that the nonlinear function is smooth or we may have a set of candidate
basis functions to choose among. Another instance of a
model where the input is not perfectly known is the \emph{errors-in-variables}
model~\cite{soederstroem2010system}. In the errors-in-variables formulation, the
input in known up to noisy measurements. The noise in the input introduces many difficulties and
special techniques have been developed to deal with
it~\cite{soederstroem2003why,soederstroem2007errors}. Closely related to
errors-in-variables models, \emph{blind system identification} methods are
used when the input signal is completely unknown~\cite{abedmeraim1997blind}.
These are particularly useful in telecommunications, image reconstruction, and
biomedical
applications~\cite{nakajima1993blind,moulines1995subspace,mccombie2005laguerre}.
Blind system identification problems are generally ill posed, and certain
assumptions on the input signal are needed to recover a
solution~\cite{ahmed2014blind}. Similar to blind problems are the problems of
\emph{system identification with missing data}. In these cases, the missing
data are estimated, together with a description of the system, by making
hypotheses on the mechanism that generated the missing
data~\cite{wallin2014maximum,markovsky2013structured,pillonetto2009bayesian,risuleo2016kernel,linder2017identification}.
In all the applications we have outlined, we can identify the common thread of
a linear system fed by a signal about which we have limited \emph{prior
information}. This leads us naturally to consider a Bayesian
framework where we can use prior distributions to encode beliefs about the
unknown quantities~\cite[Section~2.4]{bernardo2000bayesian}. Within the vast
framework of Bayesian methods, we concentrate on Gaussian
processes~\cite{rasmussen2006gaussian}. These enable us to
compute many quantities in closed form and to reason about identification in
terms of a limited number of sufficient statistics. For these reasons,
Gaussian-process modeling has become a popular approach in system
identification~\cite{pillonetto2011new,frigola2014identification,svensson2017flexible}. Although Gaussian processes
are typically analytically convenient, the structure of the uncertain-input problem leads
to an intractable inference problem: even though we model the system and the
input as Gaussian processes, the output of the system depends on their
convolution and therefore does not admit a Gaussian description. To perform the
inference---that is find the posterior distribution of the unknowns given the
observations---we need approximation methods. We propose two different
approximation methods for the posterior distribution of the unknowns: one Markov
Chain Monte Carlo~(MCMC, see~\cite{gilks1996markov}) method and one variational
approximation~\cite{beal2003variational} method. In the MCMC
method, we use the Gibbs sampler~\cite{geman1984stochastic} to draw
particles from the posterior distribution and we approximate expectations as
averages computed with the particles. In the variational method, we find the
factorized distribution that best approximates the posterior distribution in
Kullback-Leibler distance.
To give flexibility to the model, we allow the Gaussian priors to depend on
certain parameters (called \emph{hyperparameters}) that need to be estimated
from data together with the measurement noise variances. To estimate these
parameters, we use the \emph{empirical Bayes method} which requires maximizing
the marginal distribution of the data (sometimes called \emph{evidence},
see~\cite{maritz1989empirical}). To
this end, we propose an iterative algorithm based on the
Expectation-Maximization
(EM) method~\cite{dempster1977maximum}. The EM method alternates between the
computation of the expected value of the joint likelihood of the data, of the
unknown system, and of the input (E-step), and the maximization of this expected
value with respect to the unknown parameters (M-step). We show that the E-step
can be computed using the same approximations of the posterior distributions
that are used in the inference and that the M-step consists in a series of
simple and independent optimization problems that can be solved easily.
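The EM scheme sketched above can be made concrete in a linear-Gaussian special case, where the E-step is exact. The following NumPy sketch (a toy illustration under assumed values, not the paper's algorithm, which requires the approximate E-steps discussed above) estimates the output-noise variance for a model $y = Wg + \varepsilon$ with $W$ known and $g$ given a Gaussian prior:

```python
import numpy as np

# Toy linear-Gaussian special case: y = W g + eps, W known,
# g ~ N(0, K), eps ~ N(0, sigma2 I). The E-step (posterior of g)
# is exact Gaussian; the M-step for sigma2 is closed form.
# All numerical values are illustrative assumptions.
rng = np.random.default_rng(2)
N = 60

w = rng.standard_normal(N)
W = np.array([[w[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])  # lower-triangular Toeplitz of w
idx = np.arange(1, N + 1)
K = 0.8 ** np.maximum.outer(idx, idx)  # a stable-spline-type prior

sigma2_true = 0.5
g_true = rng.multivariate_normal(np.zeros(N), K)
y = W @ g_true + np.sqrt(sigma2_true) * rng.standard_normal(N)

sigma2 = 5.0  # deliberately poor initial guess
for _ in range(100):
    # E-step: Gaussian posterior of g given y and current sigma2.
    P = np.linalg.inv(W.T @ W / sigma2 + np.linalg.inv(K))
    m = P @ (W.T @ y) / sigma2
    # M-step: maximize the expected complete-data log-likelihood,
    # giving sigma2 = E[||y - W g||^2] / N under the posterior.
    r = y - W @ m
    sigma2 = (r @ r + np.trace(W @ P @ W.T)) / N
```

Each iteration cannot decrease the marginal likelihood of the data, which is the monotonicity property that motivates using EM for the hyperparameter estimation.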
As mentioned above, the uncertain-input model encompasses several classical
model structures that have been object of research in the system-identification
community for decades. Two important contributions of this work are as follows.
\begin{enumerate}
\item We unify the problems of identifying systems that are usually regarded
as belonging to different model classes into a single identification
framework.
\item We formalize a method to apply the new tools of Gaussian processes and
Bayesian inference to classical system identification problems.
\end{enumerate}
To support the validity of the proposed methods, we present identification
experiments on synthetic datasets of cascaded linear systems and of
Hammerstein systems.
\subsection{Notation}
The notation $[A]{}_{i,j}$ indicates the element
of matrix $A$ in position $i,j$ (single subscripts are used for vectors).
``$\mathrm{T}_{N\times n}(v)$'' denotes the $N$ by
$n$ lower-triangular Toeplitz matrix of the $m$-dimensional vector $v$:
\begin{equation}
    \sbr{\mathrm{T}_{N\times n}(v)}_{i,j} = \begin{cases} v_{i-j+1} & 1\leq i-j+1\leq m\\0
    & \text{otherwise}\end{cases}
\end{equation}
If $v$ is a vector, then $V$ is the $N$ by $N$ Toeplitz matrix whose
elements are given by $v$. The notation ``$\|a\|^2_{M}$'' is shorthand for $a^T Ma$. The
notation ``$\mathcal{N}(\alpha,\Sigma)$'' indicates the Gaussian distribution with mean
vector $\alpha$ and covariance matrix $\Sigma$. The notation
``$\mathcal{GP}(\mu,\Sigma)$'' indicates a Gaussian process with mean function
$\mu$ and covariance function $\Sigma$. Random variables and their realizations
have the same symbol. The notation ``$x;\theta$'' indicates that the random variable
$x$ depends on the parameter $\theta$. If $x$ is a random variable, $\mathrm{p}(x)$
denotes its density. The symbol ``$\cong$'' indicates equality up to an
additive constant and ``$\delta$'' is the Dirac density.
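The Toeplitz operator defined above is straightforward to realize in code. Below is a minimal NumPy sketch (the function name is mine, not from the paper) that builds $\mathrm{T}_{N\times n}(v)$ with the stated banded structure:

```python
import numpy as np

def toeplitz_lt(N, n, v):
    """Build the N-by-n lower-triangular Toeplitz matrix T_{N x n}(v).

    Entry (i, j) (0-based) is v[i - j], i.e. v_{i-j+1} in the paper's
    1-based notation, and 0 outside the band covered by v.
    """
    m = len(v)
    T = np.zeros((N, n))
    for i in range(N):
        for j in range(n):
            k = i - j  # 0-based shift; valid when 0 <= k < m
            if 0 <= k < m:
                T[i, j] = v[k]
    return T
```

With this helper, `toeplitz_lt(N, N, w) @ g` computes the truncated discrete convolution of `w` and `g` used throughout the paper.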
\section{Uncertain-input systems}\label{sec:ui_systems}
In this work, we propose a new model structure called the \emph{uncertain-input
model}. Consider the block scheme in Figure~\ref{fig:ui_block_scheme}. Many system
identification tasks can be formulated as the identification of a linear system $S$,
subject to an input sequence $\{w_t\}$. In this work, we consider problems in
which we have partial information about the input sequence, and this partial
information depends on the specific problem at hand.
\begin{figure}[htb]
\centering
\includegraphics{blockscheme.pdf}
\caption{A block scheme of the general uncertain input system.}\label{fig:ui_block_scheme}
\end{figure}
We assume that the linear system $S$ is time invariant, stable, and causal. Therefore,
it is uniquely described by the sequence $\{g_t\}$ of its impulse response
samples, and the output of the system generated by an input $\{w_t\}$ can be
represented as the discrete convolution of the system impulse response with the
input signal---that is, at time $t$, the measurements of the output can be
written as the noise-corrupted discrete convolution
\begin{equation}\label{eq:model_y}
y_t = \del{w\ast g}_t + \varepsilon_t,
\end{equation}
where $\{\varepsilon_t\}$ is a stochastic process that describes additive
measurement noise, and where ``$\ast$'' denotes the discrete time convolution
\begin{equation}\label{eq:convolution}
\del{w\ast g}_t = \sum_{k=1}^\infty g_{k}w_{t-k}.
\end{equation}
In the uncertain-input model, we consider that the input signal is measured with
additive white noise described by a stochastic processes $\{\eta_t\}$. This
assumption allows us to write, for the input measurements, the model
\begin{equation}\label{eq:model_v}
v_t = w_t + \eta_t.
\end{equation}
We assume that the noise processes $\{\eta_t\}$ and $\{\varepsilon_t\}$ are independent
Gaussian white-noise processes. This means that every noise sample has a
Gaussian distribution,
\begin{equation}\label{eq:model_noise}
\eta_t \sim \mathcal{N}(0,\sigma_v^2),\quad
\varepsilon_t \sim \mathcal{N}(0,\sigma_y^2),
\end{equation}
and that $\varepsilon_t$ is independent of $\varepsilon_s$, for
$s\neq t$, and of $\eta_s$ for any $s$. To allow for models where some
observations are missing, we assume infinite variance for those noise
components that correspond to the missing samples.
To encode the prior information we have about the input signal and about the linear
system, we use Gaussian process models. We model the unknown input signal
and the impulse response of the linear system as a realization of a joint Gaussian
processes with suitable mean and covariance functions,
\begin{equation}\label{eq:gp_joint}
\sbr[4]{\begin{matrix}w\\g\end{matrix}}\!\!\sim \!\mathcal{GP} \!\del{\!\!
\sbr[4]{\begin{matrix}\mu_w(\cdot;\theta)\\
\mu_g(\cdot;\rho)\end{matrix}}\!\!,\!
\sbr[4]{\begin{matrix}K_w(\cdot;\theta) &
{K_{gw}(\cdot,\cdot;\rho,\theta)}^T\\
K_{gw}(\cdot,\cdot;\rho,\theta) &
K_{g}(\cdot,\cdot;\rho)
\end{matrix}}
}\!.
\end{equation}
The mean functions of the Gaussian processes, $\mu_g(\cdot\,;\theta)$ and
$\mu_w(\cdot\,;\rho)$, may depend on the parameter vectors $\theta$ and
$\rho$, called \emph{hyperparameter vectors}, which can be used to shape the prior
information to the specific application. The same goes for the covariance
functions $K_w(\,\cdot\,,\,\cdot\,;\theta)$,
$K_g(\,\cdot\,,\,\cdot\,;\rho)$, and $K_{gw}(\,\cdot\,,\,\cdot\,;\rho,\theta)$
which may depend on (possibly different) hyperparameters.
For notational convenience, we present the explicit computations in the case of
independent Gaussian process models for $g$ and $w$---that is, we consider the
case where
\begin{equation}\label{eq:gp_models}
\begin{split}
w &\sim
\mathcal{GP}\big(\mu_w(\,\cdot\,\,;\theta),K_w(\,\cdot\,,\,\cdot\,;\theta)\big),\\
g &\sim \mathcal{GP}\big(\mu_g(\,\cdot\,;\rho),K_g(\,\cdot\,,\,\cdot\,;\rho)\big),
\end{split}
\end{equation}
and the cross-covariance of processes
is zero. However, all results we show hold also in the more general case.
We assume that we have collected $N$ measurements of the processes $\{v_t\}$
and $\{y_t\}$ and, for the sake of simplicity, we also assume that $w_t=0$ for
$t<0$ (see~\cite{risuleo2015estimation} for a way to extend the proposed
framework to unknown initial conditions).
From~\eqref{eq:model_y}, we see that the output measurements only depend on the
values of the impulse response at the discrete time instants $t=1,2,\ldots,N$; therefore,
we can consider the joint distribution of the samples $g_t$ for $t=1,2,\ldots,N$.
From the Gaussian process model~\eqref{eq:gp_models}, we have that, if we
collect the samples of $\{g_t\}$ into an $N$-dimensional column vector $g$,
this vector has a joint Gaussian distribution given by
\begin{equation}\label{eq:model_g}
g \sim \mathcal{N}\big(\mu_g(\rho),K_g(\rho)\big),
\end{equation}
where we have defined the mean vector and the
covariance matrix induced by~\eqref{eq:gp_models} as
\begin{equation}
{\big[\mu_g(\rho)\big]}_j := \mu_g(j\,;\rho), \quad {\big[K_g(\rho)\big]}_{i,j} :=
K_g(i,j\,;\rho).
\end{equation}
From~\eqref{eq:model_v} and~\eqref{eq:model_y}, we have that the $N$
measurements of the input and output only depend on the samples $w_t$ for
$t=1,\ldots,N$; therefore, we can consider the joint distribution of these
samples, collected in an $N$-dimensional vector $w$.
This distribution is Gaussian, and it is given by
\begin{equation}\label{eq:model_w}
w \sim \mathcal{N}\big(\mu_w(\theta),K_w(\theta)\big),
\end{equation}
where we have defined the mean vector and the
covariance matrix induced by~\eqref{eq:gp_models} as
\begin{equation}
{\big[\mu_w(\theta)\big]}_j := \mu_w(j\,;\theta), \quad {\big[K_w(\theta)\big]}_{i,j} :=
K_w(i,j\,;\theta).
\end{equation}
Assembling the models for the different components, given
by~\eqref{eq:model_y},~\eqref{eq:model_v},~\eqref{eq:model_noise},~\eqref{eq:model_g},
and~\eqref{eq:model_w}, we arrive at the following definition
of the uncertain-input model:
\begin{equation}\label{eq:ui_model}
\left\lbrace
\begin{aligned}
y &= Wg + \varepsilon\\
v &= w + \eta\\
g & \sim \mathcal{N}\big(\mu_g(\rho),K_g(\rho)\big)\\
w & \sim \mathcal{N}\big(\mu_w(\theta),K_w(\theta)\big)\\
\varepsilon & \sim \mathcal{N}\big(0,\sigma_y^2 I_N\big)\\
\eta & \sim \mathcal{N}\big(0,\sigma_v^2 I_N\big)\\
g,\,&w,\,\varepsilon,\,\eta\text{ mutually independent}
\end{aligned}
\right.
\end{equation}
where we have collected the output measurements $\{y_t\}$ in a vector $y$ and
where $\varepsilon$ and $\eta$ are the vectors of the first $N$ input and output
noise samples. The matrix
$W$ is the $N\times N$ Toeplitz matrix of the input,
$W := \mathrm{T}_{N\times N}(w)$,
which represents the discrete-time convolution~\eqref{eq:convolution} as the
product $Wg$. If we define the $N\times N$ Toeplitz matrix of the impulse
response samples, $G := \mathrm{T}_{N\times N}(g)$, then we have the property
\begin{equation}\label{eq:toeplitz_property}
Wg = Gw.
\end{equation}
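The property $Wg = Gw$ can be checked numerically. The following NumPy sketch (an illustration with helper names of my own, not code from the paper) builds both Toeplitz matrices and verifies that each reproduces the truncated convolution computed by `np.convolve`:

```python
import numpy as np

def toeplitz_conv_matrix(x, N):
    """N-by-N lower-triangular Toeplitz matrix of the signal x, so that
    toeplitz_conv_matrix(w, N) @ g gives the first N samples of w * g."""
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            if i - j < len(x):
                T[i, j] = x[i - j]
    return T

rng = np.random.default_rng(0)
N = 50
w = rng.standard_normal(N)
g = rng.standard_normal(N)

W = toeplitz_conv_matrix(w, N)
G = toeplitz_conv_matrix(g, N)

# Convolution is symmetric in its arguments: W g = G w.
assert np.allclose(W @ g, G @ w)
# Both coincide with the first N samples of the full convolution.
assert np.allclose(W @ g, np.convolve(w, g)[:N])
```

This commutation is what allows the model to be written as linear in $g$ for fixed $w$, or linear in $w$ for fixed $g$, which the inference schemes exploit.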
In the next section, we give examples of some classical system identification
problems that can be cast as uncertain-input identification problems.
\section{Examples of uncertain-input models}
The uncertain-input framework is a generalization of many classical
system-identification problems. All these classical problems can be analyzed
using the tools of uncertain-input models; furthermore, under the right
conditions, the identification approach that we propose for uncertain-input
models reduces to classical system-identification approaches.
\subsection{Linear predictor model}
Consider the \emph{output-error} transfer-function model~\cite{ljung1999system},
\begin{equation}
y_t = \frac{B(q;\rho)}{F(q;\rho)} u_t + \varepsilon_t,
\end{equation}
where $B(q;\rho)$ and $F(q;\rho)$ are polynomials in the one-step shift operator
$q$ and $\varepsilon_t$ is Gaussian white noise. If we consider the parametric
predictor of the output-error model, we can write it as
\begin{equation}
\hat y_{t|t-1} = \frac{B(q;\rho)}{F(q;\rho)} u_t = \del{g(\rho)\ast u}_t,
\end{equation}
where $g_t(\rho)$ is the impulse response of the predictor transfer function.
We can see this model as a degenerate uncertain-input model with
\begin{equation}
\begin{aligned}
{[\mu_g(\rho)]}_i &= g_i(\rho), & {[K_g(\rho)]}_{i,j} &= 0,\\
{[\mu_w(\theta)]}_i &= u_i, & {[K_w(\theta)]}_{i,j} &= 0.
\end{aligned}
\end{equation}
We can also incorporate the framework of Bayesian identification of finite
impulse-response models with first order stable-spline kernels (for a survey,
see~\cite{pillonetto2014kernel}) with the choice
\begin{equation}\label{eq:stable-spline}
\begin{aligned}
{[\mu_g(\rho)]}_i &= 0, & {[K_g(\rho)]}_{i,j} &= \rho_1\,\rho_2^{\max(i,j)},\\
{[\mu_w(\theta)]}_i &= u_i, & {[K_w(\theta)]}_{i,j} &= 0,
\end{aligned}
\end{equation}
where $\rho_1\geq 0$ is a scaling parameter and $\rho_2\in \intcc{0,1}$
regulates the decay rate of $g$ (see,~\cite{pillonetto2010new}).
Note that, in this formulation, any kernel can be used to model $g$ (see, for
instance,~\cite{chen2014constructive,dinuzzo2015kernels}).
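As a concrete illustration of the first-order stable-spline kernel above, the sketch below (with hypothetical parameter values of my own choosing) builds the covariance matrix $K_g$ and checks that it is a valid symmetric positive-semidefinite covariance whose diagonal decays with the time index, as expected for a stable impulse response:

```python
import numpy as np

def stable_spline_kernel(N, rho1, rho2):
    """First-order stable-spline covariance K[i, j] = rho1 * rho2**max(i, j),
    with 1-based indices i, j = 1..N as in the paper."""
    idx = np.arange(1, N + 1)
    return rho1 * rho2 ** np.maximum.outer(idx, idx)

N = 40
K = stable_spline_kernel(N, rho1=1.0, rho2=0.8)

# A valid covariance matrix must be symmetric positive semidefinite.
assert np.allclose(K, K.T)
assert np.min(np.linalg.eigvalsh(K)) > -1e-8

# The prior variance of g_t is rho1 * rho2**t, so it decays with t,
# encoding the stability (decay) of the impulse response.
assert K[N - 1, N - 1] < K[0, 0]
```

A prior sample of $g$ can then be drawn with `rng.multivariate_normal(np.zeros(N), K)`, with $\rho_1$ controlling the overall scale and $\rho_2$ the decay rate.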
\subsection{Errors-in-variables system identification}
Errors-in-variables models are often described by the set of
equations~\cite{soederstroem2007errors},
\begin{align}
y_t &= \del{g \ast w}_t + \eta_t,\\
v_t &= w_t + \varepsilon_t.
\end{align}
It is clear that models of this type fit naturally into the
uncertain-input framework of~\eqref{eq:ui_model}.
In particular, we can consider the classical errors-in-variables
problem of identifying a parametric model of $S$ when $w_t$ is the realization of
a stationary stochastic signal with a rational
spectrum~\cite{castaldi1996identification}. In this case, we can
write $\{w_t\}$ as the filtered white noise process
\begin{equation}
w_t = \frac{C(q;\theta)}{D(q;\theta)}e_t,
\end{equation}
where $e_t$ is unit-variance Gaussian white noise,
and $C(q;\theta)$ and $D(q;\theta)$ are polynomials in the one-step
shift operator $q$.
From this expression, we see that $w$ is a Gaussian process with
zero mean and covariance matrix $\Sigma_w(\theta)$ that depends on the
parameterization of the input filter. Using a parametric model for the
system, we obtain an uncertain-input system with
\begin{equation}
\begin{aligned}
{[\mu_g(\rho)]}_i &= g_i(\rho), & {[K_g(\rho)]}_{i,j} &= 0,\\
{[\mu_w(\theta)]}_i &= 0, & {[K_w(\theta)]}_{i,j} &= {[\Sigma_w(\theta)]}_{i,j}.
\end{aligned}
\end{equation}
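Over a finite horizon, one way to approximate $\Sigma_w(\theta)$ is to truncate the impulse response $h$ of $C(q;\theta)/D(q;\theta)$ and write $w = He$, with $H$ the lower-triangular Toeplitz matrix of $h$, so that $\Sigma_w \approx HH^T$ for unit-variance $e$. The sketch below is ours and relies on this truncation assumption; the names are placeholders:

```python
import numpy as np

def toeplitz_lower(h):
    # Lower-triangular Toeplitz (convolution) matrix: (T(h) @ e)_t = (h * e)_t.
    n = len(h)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = h[i::-1]
    return T

def sigma_w(h):
    # Covariance of w = h * e for unit-variance white noise e: Sigma_w = H H^T.
    H = toeplitz_lower(np.asarray(h, dtype=float))
    return H @ H.T
```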
Alternatively, we could estimate all samples of the input signal
with the choice ${[\mu_w(\theta)]}_i = \theta_i$ and
$K_w(\theta)=0$, even though this may lead to nonidentifiability of the
model~\cite{soederstroem2003why,zhang2015errors,risuleo2016kernel}.
\subsection{Blind system identification}
Blind system identification can also be cast as the problem of identifying an
uncertain-input model by setting the input noise variance to $\sigma_v^2 =
\infty$ (this indicates that no input measurements are available). In this
case, different parameterizations of the input
lead to different models for the input process. For instance, we can consider
the parameterization of the input as a switching signal with known switching
instants $T_0<T_1<\cdots<T_p$; in this case we can choose
${[\mu_w(\theta)]}_i = h_i^T\theta$,
where $h_i$ is a selection vector whose only nonzero entry corresponds to the
interval containing the time index $i$:
\begin{equation}
{[h_i]}_j =
\begin{cases}
1 & \text{if}\;\;T_{j-1}< i \leq T_j,\\
0 & \text{otherwise}.
\end{cases}
\end{equation}
Models similar to this one were used, for instance, in~\cite{ohlsson2014blind}
and~\cite{bottegal2015blind}.
\subsection{Cascaded system identification}\label{sec:cascaded_systems}
In cascaded linear systems, the output of one linear system is used as the input
to a second linear system (see Figure~\ref{fig:cascade}).
\begin{figure}[htb]
\centering
\includegraphics{cascade_blockscheme.pdf}
\caption{Cascaded linear systems.}\label{fig:cascade}
\end{figure}
For the sake of argument, we consider nonparametric models for both linear systems
(the reasoning also holds for parametric models):
\begin{equation}
g_1 \sim \mathcal{N}\big(0,K_1(\theta)\big),\qquad g_2 \sim \mathcal{N}\big(0,K_2(\rho)\big).
\end{equation}
Because $g_1$ is a Gaussian vector, the intermediate variable $w$ is also a
Gaussian vector, with zero mean and covariance matrix given by
\begin{equation}\label{eq:cascaded_input}
K_w(\theta) = UK_1(\theta) U^T,
\end{equation}
where $U:=\mathrm{T}_{N\times N}(u)$ is the Toeplitz matrix of the input signal $u_t$. Therefore, we
can model the linear cascade as an uncertain-input model with input modeled as a
zero-mean process with covariance matrix given by~\eqref{eq:cascaded_input}
where, for instance, we use the first-order stable spline kernel introduced
in~\eqref{eq:stable-spline}. The same choice of kernel can be made for $K_2(\rho)$.
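The induced input covariance~\eqref{eq:cascaded_input} is straightforward to form once the Toeplitz matrix of $u$ is available. A minimal sketch (ours; the names are placeholders), with any valid kernel matrix supplied as $K_1$:

```python
import numpy as np

def toeplitz_conv(u):
    # U = T_{NxN}(u): lower-triangular Toeplitz matrix, so that U @ g = u * g.
    n = len(u)
    U = np.zeros((n, n))
    for i in range(n):
        U[i, : i + 1] = u[i::-1]
    return U

def cascade_input_kernel(u, K1):
    # K_w = U K1 U^T, the covariance induced on the intermediate signal w = u * g1.
    U = toeplitz_conv(np.asarray(u, dtype=float))
    return U @ K1 @ U.T
```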
\subsection{Hammerstein model identification}\label{sec:hammersteins}
The Hammerstein model is a cascade of a static nonlinear function followed by a
linear dynamical system (see Figure~\ref{fig:hammerstein}).
\begin{figure}[htb]
\centering
\includegraphics{hammerstein_blockscheme.pdf}
\caption{The Hammerstein model.}\label{fig:hammerstein}
\end{figure}
In the Hammerstein model, the intermediate variable $w_t$ is not observed (which,
symbolically, corresponds to an infinite $\sigma_v^2$). If we consider models
for the input block that are combinations of known basis
functions~\cite{bai1998optimal}, according to
\begin{equation}
w_t = \sum_{j=1}^p \theta_j\, \varphi_j(u_t)
\end{equation}
we can collect the basis functions evaluated at the inputs into a regressor matrix $\Phi$, so that
\begin{equation}
w = \Phi \theta,\qquad {[\Phi]}_{i,j} = \varphi_j(u_i).
\end{equation}
This can be modeled as an uncertain-input model with
$\mu_w(\theta) = \Phi\theta$ and $K_w(\theta) = 0$.
The uncertain-input framework also encompasses nonparametric models for the
input nonlinearity. For instance, we can model the Hammerstein cascade as the
uncertain-input model with the Gaussian radial-basis-function kernel as input model:
\begin{equation}\label{eq:rbf}
{[K_w(\theta)]}_{i,j} = \theta_1
\exp\left\{-\frac{1}{\theta_2}{(u_i-u_j)}^2\right\}.
\end{equation}
As for the linear system, we can use either parametric or nonparametric
modeling approaches (see~\cite{bai1998optimal,risuleo2015new}).
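Both input models of this subsection are easy to set up numerically. The sketch below is ours; the basis functions in the example are placeholders. It forms the regressor matrix $\Phi$ for the parametric case and the radial-basis-function kernel~\eqref{eq:rbf} for the nonparametric case:

```python
import numpy as np

def regressor_matrix(u, basis):
    # [Phi]_{i,j} = phi_j(u_i), so that w = Phi @ theta.
    return np.column_stack([phi(u) for phi in basis])

def rbf_input_kernel(u, theta1, theta2):
    # [K_w]_{i,j} = theta1 * exp(-(u_i - u_j)**2 / theta2).
    u = np.asarray(u, dtype=float)
    return theta1 * np.exp(-((u[:, None] - u[None, :]) ** 2) / theta2)
```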
\section{Estimation of uncertain-input models}\label{sec:estimation}
As discussed in Section~\ref{sec:ui_systems}, we suppose that we have collected
$N$ samples of the output $y_t$ and, possibly, $N$ samples of the noisy input
signal $v_t$ (in some applications, such as Hammerstein models and blind system
identification, these samples are not available). Whenever present, we assume
that the external input $u_t$ is completely known. We consider the following
identification problem.
\begin{prob}\label{prob:sysid}
Given the $N$-dimensional vectors of measurements $y$ and $v$, generated
according to~\eqref{eq:ui_model}, estimate the impulse response $g$, the
unknown input $w$, and the hyperparameters
$\tau = \{\rho,\theta,\sigma_y^2,\sigma_v^2\}$.
\end{prob}
Because we are using the Gaussian process model~\eqref{eq:gp_models}, we have
natural candidates for the estimates of $g$ and $w$. Interpreting~\eqref{eq:model_g} and~\eqref{eq:model_w} as prior distributions of the
unknowns, we know that the best estimates given the data (in the minimum
mean-square error sense) are the conditional expectations
\begin{equation}
g^\star = \mathbf{E}\big[ g\big| y,v\big],\quad w^\star = \mathbf{E}\big[w\big|y,v\big].
\end{equation}
However, these conditional expectations depend on the value of the
hyperparameter vector $\tau$. Because this value is not available, we follow an
empirical Bayes approach~\cite{maritz1989empirical} and we approximate
the true conditional expectations---that correspond to the true values of the
hyperparameters $\tau$---with the conditional expectations
\begin{equation}\label{eq:solution_posteriormean}
\hat g := \int g\, \mathrm{p}(g|y,v\,;\hat\tau) \dif g,\qquad
\hat w := \int w\, \mathrm{p}(w|y,v\,;\hat\tau) \dif w,
\end{equation}
where we are using estimated values $\hat \tau$ of the hyperparameters. In the
empirical Bayes approach, the estimates of the hyperparameters are chosen
by maximizing the marginal likelihood of the data,
\begin{equation}\label{eq:solution_marginal}
\hat \tau := \arg \max_{\tau} \log \mathrm{p}(y,v\,;\tau),
\end{equation}
where $\mathrm{p}(y,v\,;\tau)$ is the marginal distribution of the measurements according
to the model in~\eqref{eq:ui_model}.
Solving~\eqref{eq:solution_marginal} yields the marginal likelihood estimate of
the hyperparameters that can be used to find the empirical Bayesian estimates of
$g$ and $w$ in~\eqref{eq:solution_posteriormean}. However, this approach
requires distributions that, in general, are not available in closed form.
Furthermore,~\eqref{eq:solution_marginal} is possibly a
high-dimensional optimization problem that does not admit an analytical
solution. To address this last problem, we use the
EM method to derive an iterative algorithm that
solves~\eqref{eq:solution_marginal}. We start by rewriting the marginal
likelihood as
\begin{equation}
\mathrm{p}(y,v\,;\tau) = \int \mathrm{p}(y,v,g,w\,;\tau) \dif g \dif w.
\end{equation}
With this observation, we can see~\eqref{eq:solution_marginal} as a
maximum likelihood problem with latent variables, where the latent variables are
$g$ and $w$. Appealing to the theory of the EM method, we have that iterating
the two steps
\begin{description}
\item[E-step:] Given an estimate $\hat \tau^{(k)}$ of $\tau$, construct the
following lower bound of the marginal likelihood
\begin{equation}\label{eq:Qtrue}
\hspace{-1em} Q(\tau,\hat \tau^{(k)})\! = \!\int \!\log \mathrm{p}(y,v,g,w\,;\tau)
\mathrm{p}(g,w|y,v\,;\hat \tau^{(k)})\dif g \dif w;
\end{equation}
\item[M-step:] Update the hyperparameter estimates as
\begin{equation}
\hat \tau^{(k+1)} = \arg\max_{\tau} Q(\tau,\hat \tau^{(k)});
\end{equation}
\end{description}
from an arbitrary initial condition $\hat \tau^{(0)}$,
we obtain a sequence of estimates $\{\hat \tau^{(k)}\}$ of increasing
likelihood, which converges to a stationary point of the marginal likelihood of
the data. In practice, this stationary point will always be a local maximum:
saddle points are numerically unstable and minimal perturbations will drive the
sequence of updates away from them~\cite{mclachlan2007algorithm}.
Using the EM method, we have transformed the problem of maximizing the marginal
likelihood into a sequence of optimization problems. The whole point of the EM method is that
these problems should be simpler to solve than the original optimization
problem.
In addition to using the EM method to solve the marginal likelihood problem, we
can rewrite~\eqref{eq:solution_posteriormean} as
\begin{equation}\label{eq:posteriormeans}
\hat g := \int g\, \mathrm{p}(g,w|y,v\,;\hat\tau) \dif w\dif g,\qquad
\hat w := \int w\, \mathrm{p}(g,w|y,v\,;\hat\tau) \dif g\dif w.
\end{equation}
Comparing~\eqref{eq:posteriormeans} and the $Q$ function in the E-step, we see
that the solution of Problem~\ref{prob:sysid} using the procedure we have
described depends on expectations with respect to
the distribution $\mathrm{p}(g,w\,| y,v\,;\tau)$. This
distribution is, in general, not available in closed form. In the next section
we present three special cases in which this distribution can be computed in
closed form, together with the resulting estimation algorithms. In
Section~\ref{sec:approximations}, we show two different ways to approximate
this joint posterior distribution in the general case.
\section{Cases with degenerate prior distributions}
There are cases where the integrals~\eqref{eq:Qtrue}
and~\eqref{eq:posteriormeans}, required to estimate uncertain-input
systems, admit closed-form solutions. This happens when either the prior for $g$
or for $w$ (or both) are degenerate distributions. This means that,
symbolically, we let the covariances $K_g(\rho)$ and $K_w(\theta)$ go to zero
and, respectively,
\begin{equation}
\mathrm{p}(g;\rho) \to \delta\del{g-\mu_g(\rho)}, \quad
\mathrm{p}(w;\theta) \to \delta\del{w-\mu_w(\theta)}.
\end{equation}
From these expressions, we see that the models of the unknown quantities $g$ and
$w$ are uniquely determined by the parameter vector $\tau$ (there is no
uncertainty or variability): therefore, we refer to these kinds of models as
\emph{parametric models}. We now present three cases of parametric models that
admit closed form expressions for the EM algorithm.
\subsection{Semiparametric model}
The first model is called \emph{semiparametric}. It is obtained when
$K_w(\theta) \to 0$. This effectively means that the prior
density~\eqref{eq:model_w} collapses into the Dirac density centered around the
mean function, and the posterior distributions of the unknowns admit closed
form expressions:
\begin{lem}\label{lem:pigs_posterior_w}
Consider the uncertain-input system~\eqref{eq:ui_model}. In the
limit when $K_w(\theta)\to 0$, we have that
$\mathrm{p}(w|y,v;\tau) = \delta\del{w\!-\!\mu_w(\theta)}$.
\end{lem}
\begin{proof}\label{pf:pigs_posterior_w}
When $K_w(\theta)\to 0$, the prior density becomes the degenerate normal
distribution $\mathrm{p}(w;\theta) = \delta\big(w-\mu_w(\theta)\big)$.
From Bayes' rule, we have
\begin{equation}\label{eq:pf_pigs_posterior_w}
\mathrm{p}(w|y,v;\tau) = \frac{\mathrm{p}(y,v|w;\tau)\delta(w - \mu_w(\theta))}{\mathrm{p}(y,v;\tau)};
\end{equation}
in addition, the evidence becomes
\begin{equation}
\mathrm{p}(y,v;\tau) = \int \mathrm{p}(y,v|w;\tau)\mathrm{p}(w;\theta)\dif w =
\mathrm{p}(y,v|\mu_w(\theta);\tau).
\end{equation}
Plugging this expression into~\eqref{eq:pf_pigs_posterior_w} we have the
result.
\end{proof}
\begin{lem}\label{lem:pigs_posterior_g}
Consider the uncertain-input system~\eqref{eq:ui_model}. In the limit when
$K_w(\theta)\to 0$, the posterior distribution $\mathrm{p}(g|y,v;\tau)$ is Gaussian with
covariance matrix and mean vector given by
\begin{equation}\label{eq:pigs_posterior_g}
P_g = \del{\frac{1}{\sigma_y^2}{M_w(\theta)}^T M_w(\theta) +
{K_g(\rho)}^{-1}}^{-1},\qquad
m_g = P_g\del{\frac{1}{\sigma_y^2}{M_w(\theta)}^T
y+{K_g(\rho)}^{-1}\mu_g(\rho)},
\end{equation}
where $M_w(\theta):=\mathrm{T}_{N\times N}\big(\mu_w(\theta)\big)$.
\end{lem}
\begin{proof}
Note that $y|g,w,v;\tau$ is an affine transformation of the
Gaussian random variable $\varepsilon$; hence, it is Gaussian. By
Bayes' rule and ignoring terms independent of $g$, we
have that
\begin{equation}
\begin{aligned}
\log\, \mathrm{p}(g|w,y,v;\tau) &\cong \log \mathrm{p}(y|g,w;\tau) + \log \mathrm{p}(g;\rho)
\cong -\frac{1}{2\sigma_y^2}\enVert{y - M_w(\theta)g}^2 - \frac{1}{2}\enVert{g - \mu_g(\rho)}^2_{{K_g(\rho)}^{-1}}\\
&\cong -\frac{1}{2} \enVert{g}^2_{P_g^{-1}} + g^T P_g^{-1} m_g\cong - \frac{1}{2}\enVert{g - m_g}^2_{{P_g}^{-1}}
\end{aligned}
\end{equation}
where $P_g$ and $m_g$ are defined in~\eqref{eq:pigs_posterior_g}.
Because it is quadratic, the posterior distribution of $g$ is Gaussian,
with the indicated covariance matrix and mean vector.
\end{proof}
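The moments in~\eqref{eq:pigs_posterior_g} translate directly into code. A minimal sketch (ours; plain matrix inverses are used for clarity, although a Cholesky-based solve would be preferable numerically):

```python
import numpy as np

def posterior_g(y, Mw, Kg, mu_g, sig_y2):
    # P_g = (Mw' Mw / sig_y2 + Kg^{-1})^{-1}
    # m_g = P_g (Mw' y / sig_y2 + Kg^{-1} mu_g)
    Kg_inv = np.linalg.inv(Kg)
    Pg = np.linalg.inv(Mw.T @ Mw / sig_y2 + Kg_inv)
    mg = Pg @ (Mw.T @ y / sig_y2 + Kg_inv @ mu_g)
    return mg, Pg
```

As a sanity check, with $M_w = I$, $K_g = I$, $\mu_g = 0$, and $\sigma_y^2 = 1$, the posterior mean is $y/2$ with covariance $I/2$.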
Thanks to Lemma~\ref{lem:pigs_posterior_w} and
Lemma~\ref{lem:pigs_posterior_g}, the E-step can be computed analytically when
$K_w(\theta) = 0$, and
the function $Q(\tau, \hat \tau^{(k)})$ admits a closed-form expression. To
this end,
let $\Delta^{k}$ be the $N\times N$ matrix given by
\begin{equation}
{[\Delta^k]}_{i,j} =
\begin{cases}
1 & \text{if } i+j-1 = k,\\
0 & \text{otherwise},
\end{cases}
\end{equation}
and let
\begin{equation}\label{eq:R}
R = \begin{bmatrix}
\Delta^1 & \Delta^2 & \Delta^3 & \cdots & \Delta^N
\end{bmatrix}.
\end{equation}
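The selection structure of $R$ can be checked with a small sketch (ours; the function name is arbitrary): each block $\Delta^k$ picks out the anti-diagonal $i + j - 1 = k$:

```python
import numpy as np

def build_R(n):
    # R = [Delta^1 ... Delta^n], with [Delta^k]_{i,j} = 1 iff i + j - 1 = k
    # (1-based indices, as in the text); R has shape (n, n * n).
    blocks = []
    for k in range(1, n + 1):
        D = np.zeros((n, n))
        for i in range(1, n + 1):
            j = k - i + 1
            if 1 <= j <= n:
                D[i - 1, j - 1] = 1.0
        blocks.append(D)
    return np.hstack(blocks)
```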
Then, we have the following result.
\begin{thm}\label{thm:Qpigs}
Consider a semiparametric uncertain-input model with $K_w(\theta) = 0$.
Let $\hat \tau^{(k)}$ be estimates of the hyperparameters at the $k$th
iteration of the EM method and let $\hat g^{(k)}$ and $\hat P_g^{(k)}$ be the moments
in~\eqref{eq:pigs_posterior_g} when $\tau=\hat \tau^{(k)}$. Define
\begin{equation}
\begin{aligned}
\hat R_y(\theta) &= \enVert[1]{ y - \hat G^{(k)}\mu_w(\theta) }^2,\\
\hat R_v(\theta) &= \enVert[1]{v - \mu_w(\theta)}^2,\\
\hat S_g^{(k)} &= R^T \del[1]{I_N \otimes \hat P_g^{(k)}} R,
\end{aligned}
\end{equation}
where $\hat G^{(k)} := \mathrm{T}_{N\times N}\big(\hat g^{(k)}\big)$.
Then, the function $Q(\tau,\hat \tau^{(k)})$ is
given by
\begin{equation}\label{eq:Qpigs}
\begin{aligned}
Q(\tau,\hat \tau^{(k)}) &= -\frac{1}{2\sigma_v^2}\hat R_v(\theta) - \frac{N}{2}\log \sigma_v^2
-\frac{1}{2\sigma_y^2}\del{\hat R_y(\theta) + \enVert{\mu_w(\theta)}_{
\hat S_g^{(k)}}^2}
- \frac{N}{2}\log
\sigma_y^2\\
&- \frac{1}{2} \enVert{\hat g^{(k)} - \mu_g(\rho)}^2_{{K_g(\rho)}^{-1}} - \frac{1}{2} \trace\cbr{{K_g(\rho)}^{-1}\hat P_g^{(k)}} -
\frac{1}{2}\log\det K_g(\rho).
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
See Appendix~\ref{pf:Qpigs}.
\end{proof}
From~\eqref{eq:Qpigs}, we see that the optimization with respect to $\theta$ is
not independent of $\sigma_y^2$ and $\sigma_v^2$. Therefore, to update the
hyperparameter $\theta$, we use a conditional-maximization
step~\cite{meng1993maximum}, where we keep the noise variances fixed
to their values at the previous iterations. The use of the conditional
maximization step allows us to write the updates of the EM method in closed
form:
\begin{cor}\label{cor:Qpigs}
At the $k$th iteration of the EM method, the parameters can be updated as
\begin{equation}
\begin{aligned}
\hat \rho^{(k+1)} &= \arg\min_{\rho}
\trace\cbr[1]{{K_g(\rho)}^{-1}\hat P_g^{(k)}} + \log \det K_g(\rho)\\
&+ \enVert[1]{\hat g^{(k)}-\mu_g(\rho)}^2_{K_g^{-1}(\rho)},\\
\hat\theta^{(k+1)}& = \arg \min_\theta\!
\frac{\hat R_v(\theta)}{2\hat \sigma_v^{(k)\,2}} +
\frac{1}{2\hat\sigma_y^{(k)\,2}}\!\del[2]{\!
\hat R_y(\theta)\!+\! \enVert{\mu_w(\theta)}^2_{
\hat S_g^{(k)}}\!}\!,\\
\hat \sigma_y^{2\,(k+1)} &= \frac{1}{N}\del{\hat R_y(\hat \theta^{(k+1)}) +
\enVert[1]{\mu_w(\hat \theta^{(k+1)})}^2_{\hat S_g^{(k)}}},\\
\hat \sigma_v^{2\,(k+1)} &= \frac{1}{N}\hat R_v(\hat \theta^{(k+1)})\,.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
Follows from the two-step maximization of~\eqref{eq:Qpigs}: first, maximize
with respect to $\theta$ and $\rho$ keeping $\sigma_y^2$ and $\sigma_v^2$
fixed to their values at the previous iteration; then, maximize with respect
to $\sigma_y^2$ and $\sigma_v^2$ using the updated values of the
hyperparameters.
\end{proof}
Thanks to Corollary~\ref{cor:Qpigs}, we have a simple way to compute the EM
estimates of the kernel hyperparameters and of the noise variances for
semiparametric models with $K_w(\theta) = 0$: starting
from an initial value of the unknown parameters, we first update the
hyperparameters $\rho$ and
$\theta$, then we use the new values to update the noise variances $\sigma_y^2$
and $\sigma_v^2$. Under mild regularity conditions, this procedure yields a
sequence of estimates that converges to a local maximum of the marginal
likelihood~(it is a Generalized EM sequence, see~\cite{wu1983convergence}).
\begin{rem}
In this section, we have presented the case when $K_w(\theta)\to 0$. However,
thanks to the symmetry of the model assured by~\eqref{eq:toeplitz_property},
the same kind of algorithm works when $K_g(\rho)\to 0$ (by exchanging the
roles of $g$ and $w$).
\end{rem}
\subsection{Parametric model}
If we let both the input and the system covariance matrices go to zero,
all the variability in the model is removed, and we are left with classical
\emph{parametric} models. In this case, the marginal likelihood of the data collapses
into the likelihood where the impulse response and the input are
replaced with the parametric models $\mu_g(\rho)$ and $\mu_w(\theta)$:
\begin{equation}
\mathrm{p}(y,v;\rho,\theta,\sigma^2) = \int\!\!
\mathrm{p}(y,v|g,w;\sigma^2)\mathrm{p}(g;\rho)\mathrm{p}(w;\theta)\dif g\dif w
=\mathrm{p}(y,v|\,g\!=\!\mu_g(\rho),\,w\!=\!\mu_w(\theta);\,\sigma^2).
\end{equation}
In other words, the marginal likelihood of the data is the distribution of the
data conditioned on the events $g=\mu_g(\rho)$ and $w=\mu_w(\theta)$. This
distribution is given in closed form by
\begin{equation}\label{eq:Qparametric}
\log \mathrm{p}(y,v|\mu_g(\rho),\mu_w(\theta);\sigma^2) =
-\frac{1}{2\sigma_y^2}\enVert{y-M_w(\theta)\mu_g(\rho)}^2
-\frac{N}{2}\log\sigma_y^2
-\frac{1}{2\sigma_v^2}\enVert{v-\mu_w(\theta)}^2-\frac{N}{2}\log \sigma_v^2,
\end{equation}
where $M_w(\theta)$ is the Toeplitz matrix of $\mu_w(\theta)$.
In this parametric-model case, we have that the posterior means reduce to the
prior means and the maximum marginal-likelihood criterion collapses into the
classical maximum-likelihood or prediction-error estimation method. To
estimate the system, we first maximize~\eqref{eq:Qparametric} to find the
parameter values $\hat \tau$; then, we estimate the system with
\begin{equation}
\hat g = \mu_g(\hat\rho),\quad \hat w=\mu_w(\hat \theta).
\end{equation}
The strategy to maximize~\eqref{eq:Qparametric} depends on the specific
structure of the problem. In some applications, concentrated-likelihood or
integrated-likelihood approaches have been proposed (for a review,
see~\cite{berger1999integrated}). An interesting consistent approach, for the
parametric EIV case, has been proposed in~\cite{zhang2015errors}.
In~\cite{bai2004convergence}, the authors show that if $g$ and $w$ are linearly
parameterized, alternating between the estimation of $g$ and of $w$ leads to a
maximizer of~\eqref{eq:Qparametric}.
\begin{rem}
The EM based algorithm presented in Section~\ref{sec:estimation} cannot be
used in the parametric model case because of the impulsive posterior
distributions: during the M-step, the method is overconfident in the current
value of the parameters and no update occurs. However, the EM method can be
used in the parametric case by considering a covariance matrix that shrinks
toward zero at every iteration.
\end{rem}
\section{Approximations of the joint posterior
distribution}\label{sec:approximations}
In the previous section, we have shown three cases in which the collapse of the prior
distribution allows us to express the marginal likelihood of the data and the
posterior distributions in closed form. In general, however, these distributions do not
have a closed form expression. Therefore, in this section, we present two ways to
approximate the joint posterior distribution
$\mathrm{p}(g,w\,| y,v\,;\tau)$.
In the first, we make a particle approximation. The particles are drawn
from the joint posterior using an MCMC method. In the second, we make a
variational approximation of the joint posterior.
\subsection{Markov Chain Monte Carlo integration}\label{sec:mcmc}
Monte Carlo methods are built around the concept of \emph{particle
approximation}. In a particle approximation method, a density with a complicated
functional form is approximated with a set of point probabilities---that is, we
approximate a density $\mathrm{p}(x)$ according to
\begin{equation}
\mathrm{p}(x) \approx \frac{1}{M}\sum_{j=1}^M \delta(x-x_j).
\end{equation}
If the particle locations $x_j$ are drawn from $\mathrm{p}(x)$, and the number of
particles $M$ is large enough, the expectation of any measurable function $f(x)$
over any set can be approximated as
\begin{equation}\label{eq:monte_carlo_integral}
\mathbf{E}\{f(x)\} = \int\!f(x)\,\mathrm{p}(x) \dif x \approx \frac{1}{M}\sum_{j=1}^M f(x_j),
\end{equation}
where $\{x_j\}$ are drawn from $\mathrm{p}(x)$. This result comes directly from the sampling
property of the Dirac density $\delta(\,\cdot\,)$. From a different
perspective, we can see~\eqref{eq:monte_carlo_integral} as an estimation of the
true expectation. With this interpretation, we have that this estimator is
unbiased,
\begin{equation}
\mathbf{E}\Bigg\{ \frac{1}{M}\sum_{j=1}^M
f(x_j)\Bigg\} = \mathbf{E}\{f(x)\},
\end{equation}
and its covariance is inversely proportional to the number of samples used,
\begin{equation}
\mathbf{cov}\Bigg\{ \frac{1}{M}\sum_{j=1}^M
f(x_j)\Bigg\} = \frac{1}{M}\mathbf{cov}\{f(x)\}.
\end{equation}
In practice, the number of samples needed depends on the specific application: in
certain applications, few particles (say 10 or 20) may suffice; in other
applications, we might need a much larger number of particles (in the order of
thousands; for a complete treatment, see~\cite[Chapter~11]{bishop2006pattern}).
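These properties are easy to verify empirically. A minimal sketch (ours; the names are placeholders): the sample mean of $f(x_j)$ over $M$ draws approximates $\mathbf{E}\{f(x)\}$, with a spread that shrinks like $1/M$:

```python
import numpy as np

def mc_expectation(f, sampler, M):
    # Particle approximation of E[f(x)]: average f over M independent draws.
    return np.mean([f(sampler()) for _ in range(M)])

rng = np.random.default_rng(0)
# E[x^2] = 1 for x ~ N(0, 1); the estimate concentrates around 1 as M grows.
est = mc_expectation(lambda x: x * x, rng.standard_normal, 5000)
```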
When implementing Monte Carlo integrations, a common approach is MCMC\@. In
these methods, we set up a Markov chain whose stationary distribution is the
distribution we want to approximate and we run it to
collect samples~\cite{gilks1996markov}.
One convenient way to create a Markov chain is \emph{Gibbs sampling}. Using
this method, we obtain a particle approximation of a joint distribution (called
the \emph{target distribution}) by
sampling from all the \emph{full conditional} distributions---the distribution of
one random variable conditioned on all other variables---in sequence. This
procedure results in a Markov chain that has the target distribution as its
stationary distribution. Contrary to many other sampling methods, Gibbs
sampling does not include a rejection step; this means that the samples
proposed at every step are accepted as samples from the chain. This may lead to
faster mixing and decorrelation of the chain compared to other MCMC
methods~\cite[Chapter~11]{bishop2006pattern}.
The main drawback with Gibbs sampling is that we must sample the full
conditional distributions of all variables. Therefore, it is only applicable if
these distributions have a functionally convenient form. In the case at hand,
we have the following results.
\begin{lem}\label{lem:cond_g} Consider the uncertain-input model~\eqref{eq:ui_model}. The
density $\mathrm{p}(g|y,w;\tau)$ is Gaussian with covariance
matrix and mean vector given by
\begin{equation}\label{eq:conditional_g_pars}
P_g = {\left(\frac{W^T W}{\sigma_y^2} +
{K_g(\rho)}^{-1}\right)}^{-1},\qquad
m_g = P_g\left(\frac{W^T y}{\sigma_y^2} +
{K_g(\rho)}^{-1}\mu_g(\rho)\right).
\end{equation}
\end{lem}
\begin{proof}
The proof follows along the same line of reasoning as the proof of Lemma~\ref{lem:pigs_posterior_g}.
\end{proof}
\begin{lem}\label{lem:cond_w} Consider the uncertain-input model~\eqref{eq:ui_model}. The
density $\mathrm{p}(w|y,v,g;\tau)$ is Gaussian with covariance matrix
and mean given by
\begin{equation}\label{eq:conditional_w_pars}
P_w = {\left(\frac{G^T G}{\sigma_y^2} + \frac{I_N}{\sigma^2_v} +
{K_w(\theta)}^{-1}\right)}^{-1},\qquad
m_w =P_w\left(\frac{G^T y}{\sigma_y^2} + \frac{v}{\sigma_v^2} +
{K_w(\theta)}^{-1}\mu_w(\theta)\right).
\end{equation}
\end{lem}
\begin{proof}
Because $y$ and $v$ are conditionally independent given $w$ and $g$, we have
that
\begin{equation}
\begin{aligned}
&\log\mathrm{p}(w|y,v,g;\tau) \cong
\log\del{\mathrm{p}(y|g,w;\sigma_y^2)\mathrm{p}(v|w;\sigma_v^2)\mathrm{p}(w;\theta)}\\
&\cong -\frac{1}{2\sigma_y^2}\enVert{y \!-\! Gw}^2
\!\!-\!\frac{1}{2\sigma_v^2}\enVert{v \!-\! w}^2\! \!-\! \frac
{1}{2} \enVert{w \!-\! \mu_w(\theta)}^2_{{K_w(\theta)}^{-1}}\\
&\cong -\frac{1}{2}\enVert{w}^2_{P_w^{-1}}
+ w^T P_w^{-1} m_w \cong -\frac{1}{2} \enVert{w - m_w}_{P_w^{-1}}^2
\end{aligned}
\end{equation}
where $P_w$ and $m_w$ are given in~\eqref{eq:conditional_w_pars}. The
log-density of $w|y,v,g;\tau$ is quadratic and, hence, the distribution is Gaussian with
the indicated mean vector and covariance matrix.
\end{proof}
\begin{rem}\label{rem:general_mc}
In case we consider the more general Gaussian process
model~\eqref{eq:gp_joint}, where $g$ and $w$ are a priori dependent,
Lemma~\ref{lem:cond_g} and Lemma~\ref{lem:cond_w} still hold with slightly
modified expressions for the mean vectors and covariance matrices (to account
for the prior correlation). For instance, the conditional density of $g$ is
Gaussian with covariance matrix and mean vector given by
\begin{equation}
P_g \!=\! {\left(\frac{W^T W}{\sigma_y^2} +
\Lambda_g(\rho,\theta)\right)}^{-1},\qquad
m_g \!=\! P_g\!\left( \!\frac{W^T y }{\sigma_y^2} +
\Lambda_g(\rho,\theta)\mu_g(\rho) + \Lambda_{gw}(\rho,\theta)(w \!-\!
\mu_w(\theta))\!\right)\!,
\end{equation}
where $\Lambda_{gw}(\rho,\theta)$ and $\Lambda_{g}(\rho,\theta)$ are,
respectively, the lower-left and lower-right blocks of the inverse of the prior
covariance matrix.
\end{rem}
In view of Lemma~\ref{lem:cond_g} and Lemma~\ref{lem:cond_w}, we can easily
set up the Gibbs sampler to draw from the joint posterior distribution: from any
initialization of the impulse response $g^{(0)}$ and of the input signal $w^{(0)}$, we sample
\begin{equation}\label{eq:gibbs}
\begin{aligned}
g^{(j+1)}&|w^{(j)},y,v;\tau \sim \mathcal{N}(m_g^{(j)},P_g^{(j)}),\\
w^{(j+1)}&|g^{(j+1)},y,v;\tau \sim \mathcal{N}(m_w^{(j)},P_w^{(j)}),
\end{aligned}
\end{equation}
where $m_g^{(j)}$ and $P_g^{(j)}$ are the mean and covariance
in~\eqref{eq:conditional_g_pars} when $w=w^{(j)}$, and where $m_w^{(j)}$ and
$P_w^{(j)}$ are the mean and covariance in~\eqref{eq:conditional_w_pars} when
$g=g^{(j+1)}$.
Because~\eqref{eq:gibbs} defines a Markov chain, the samples it generates are
correlated, and the early samples retain memory of the initial conditions and
may be far from the stationary distribution (which is equal to the target
distribution). Therefore, we discard the first samples of the Markov chain, and
we only retain the $M$ samples after a \emph{burn-in} of $B$ samples:
\begin{equation}\label{eq:burnin}
\bar g^{(j)} = g^{(j+B)}, \quad
\bar w^{(j)} = w^{(j+B)}, \quad j = 1,\ldots,M.
\end{equation}
If the burn-in is large enough, the Markov chain has lost its memory
of the initial conditions and is producing samples that come from the
stationary distribution. The choice of the length of the burn-in is a
difficult problem, and some heuristic algorithms have been
proposed (see~\cite[Section~1.4.6]{gilks1996markov}).
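Putting the full conditionals of Lemma~\ref{lem:cond_g} and Lemma~\ref{lem:cond_w} together with the burn-in discard, a Gibbs sweep is a few lines of linear algebra. The sketch below is ours (plain inverses for clarity, hyperparameters held fixed; the names are placeholders):

```python
import numpy as np

def toep(x):
    # Lower-triangular Toeplitz matrix of x, so that toep(w) @ g = g * w.
    n = len(x)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, : i + 1] = x[i::-1]
    return T

def gibbs(y, v, mu_g, Kg, mu_w, Kw, sig_y2, sig_v2, M, B, seed=0):
    # Alternate the two Gaussian full conditionals; keep M samples after a
    # burn-in of B sweeps.
    rng = np.random.default_rng(seed)
    n = len(y)
    Kg_inv, Kw_inv = np.linalg.inv(Kg), np.linalg.inv(Kw)
    g, w = mu_g.copy(), mu_w.copy()
    gs, ws = [], []
    for j in range(B + M):
        W = toep(w)
        Pg = np.linalg.inv(W.T @ W / sig_y2 + Kg_inv)
        g = rng.multivariate_normal(Pg @ (W.T @ y / sig_y2 + Kg_inv @ mu_g), Pg)
        G = toep(g)
        Pw = np.linalg.inv(G.T @ G / sig_y2 + np.eye(n) / sig_v2 + Kw_inv)
        w = rng.multivariate_normal(
            Pw @ (G.T @ y / sig_y2 + v / sig_v2 + Kw_inv @ mu_w), Pw)
        if j >= B:
            gs.append(g)
            ws.append(w)
    return np.array(gs), np.array(ws)
```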
When we have drawn enough samples from
the Markov chain, we compute the Monte Carlo estimate of the function $Q$; in
other words, we replace the E-step in the EM method with a Monte Carlo E-step
(this is sometimes known as the MCEM method; see~\cite{wei1990monte}). We create the approximate lower
bound (at the $k$th iteration of the EM method) by setting
\begin{equation}
Q^{\text{\textsc{mc}}}(\tau,\hat \tau^{(k)}) = \frac{1}{M_k} \sum_{j=1}^{M_k} \log \mathrm{p}(y,v,\bar g^{(j,k)},
\bar w^{(j,k)};\tau)
\end{equation}
where $\bar g^{(j,k)}$ and $\bar w^{(j,k)}$ are samples from the stationary
distribution of~\eqref{eq:gibbs} at the $k$th iteration of the EM method. In
the uncertain-input case, the function $Q^{\text{\textsc{mc}}}$ is available in closed form as
a function of the sample moments of $g$ and $w$.
\begin{thm}\label{thm:Qmc}
Let $\cbr[1]{\bar g^{(j,k)}}_{j=1}^{M_k}$ and
$\cbr[1]{\bar w^{(j,k)}}_{j=1}^{M_k}$ be
samples from the stationary distribution of the Gibbs
sampler~\eqref{eq:gibbs} at the $k$th
iteration of the EM method and define
\begin{equation}\label{eq:mcem_moments}
\begin{aligned}
\hat g^{(k)} &= \frac{1}{M_k}\sum_{j=1}^{M_k} \bar g^{(j,k)},\quad \hat w^{(k)}
= \frac{1}{M_k}\sum_{j=1}^{M_k} \bar w^{(j,k)},\\
\hat P_g^{(k)} &= \frac{1}{M_k}\sum_{j=1}^{M_k} \del{\bar g^{(j,k)} -\hat g^{(k)}}\del{\bar g^{(j,k)} - \hat g^{(k)}}^T,\\
\hat P_w^{(k)} &= \frac{1}{M_k}\sum_{j=1}^{M_k} \del{\bar w^{(j,k)} - \hat w^{(k)}}\del{
\bar w^{(j,k)} - \hat w^{(k)}}^T,\\
\hat R_v^{(k)} &= \frac{1}{M_k}\sum_{j=1}^{M_k} \enVert{v - \bar w^{(j,k)}}^2,\\
\hat R_y^{(k)} &= \frac{1}{M_k}\sum_{j=1}^{M_k} \enVert{y - \bar G^{(j,k)}\bar w^{(j,k)}}^2.
\end{aligned}
\end{equation}
where $\bar G^{(j,k)} := \mathrm{T}_{N\times N}\big(\bar g^{(j,k)}\big)$.
Then, the function $Q^{\text{\textsc{mc}}}(\tau, \hat \tau^{(k)})$ is given by
\begin{equation}\label{eq:Qmc}
\mathmakebox[0.88\columnwidth][l]{\begin{aligned}
\!Q^{\text{\textsc{mc}}}(\tau,\hat \tau^{(k)}) &= \!- \frac{\hat R_v^{(k)}}{2\sigma_v^2}
\!-\! \frac{N}{2} \log
\sigma_v^2 \!-\!\frac{\hat R_y^{(k)}}{2\sigma_y^2} \!-\! \frac{N}{2}\log \sigma_y^2
\!-\! \frac{1}{2}\trace\cbr{{K_g(\rho)}^{-1} \hat P_g^{(k)}}\\
\!&-\!\frac{1}{2} \enVert{\hat g^{(k)} \!-\!
\mu_g(\rho)}^{2}_{{K_g(\rho)}^{-1}}
\!-\!\frac{1}{2} \trace\cbr{{K_w(\theta)}^{-1} \hat P_w^{(k)}}
\!-\!\frac{1}{2} \enVert{\hat w^{(k)} \!-\!
\mu_w(\theta)}^{2}_{{K_w(\theta)}^{-1}}\\
&\!-\!\frac{1}{2} \log \det K_g(\rho) \!-\!\frac{1}{2} \log \det K_w(\theta).
\end{aligned}}
\end{equation}
\end{thm}
\begin{proof}
See Appendix~\ref{pf:Qmc}.
\end{proof}
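Given the retained Gibbs draws, the sample moments in~\eqref{eq:mcem_moments} are plain averages. A sketch (ours; the names are placeholders), where the Toeplitz product $\bar G^{(j,k)} \bar w^{(j,k)}$ is computed as a truncated convolution:

```python
import numpy as np

def mcem_moments(gs, ws, y, v):
    # gs, ws: arrays of shape (M, N) holding the retained Gibbs samples.
    g_hat, w_hat = gs.mean(axis=0), ws.mean(axis=0)
    Pg_hat = np.cov(gs.T, bias=True)   # (1/M) sum of centered outer products
    Pw_hat = np.cov(ws.T, bias=True)
    Rv_hat = np.mean([np.sum((v - w) ** 2) for w in ws])
    Ry_hat = np.mean([np.sum((y - np.convolve(g, w)[: len(y)]) ** 2)
                      for g, w in zip(gs, ws)])
    return g_hat, w_hat, Pg_hat, Pw_hat, Rv_hat, Ry_hat
```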
In the M-step, we update the hyperparameters $\hat \tau^{(k)}$ by maximizing
the approximate lower bound of the marginal likelihood, $Q^{\text{\textsc{mc}}}$. Because of the closed
form expression in Theorem~\ref{thm:Qmc}, the M-step splits into the decoupled
optimization problems for the kernel hyperparameters and the noise variances
according to the following:
\begin{cor}\label{cor:Qmc}
At the $k$th iteration of the EM method, the kernel hyperparameters can be
updated as
\begin{equation}
\begin{aligned}
\hat \rho^{(k+1)} &= \arg\min_\rho
\enVert{\hat g^{(k)}-\mu_g(\rho)}^{2}_{{K_g(\rho)}^{-1}}
+ \trace\cbr{{K_g(\rho)}^{-1} \hat P^{(k)}_g} + \log \det K_g(\rho),\\
\hat \theta^{(k+1)} &= \arg\min_\theta
\enVert{\hat w^{(k)}-\mu_w(\theta)}^{2}_{{K_w(\theta)}^{-1}}
+ \trace\cbr{{K_w(\theta)}^{-1} \hat P^{(k)}_w} + \log \det K_w(\theta),
\end{aligned}
\end{equation}
and the noise variances can be updated as
\begin{equation}
\hat \sigma_v^{2\,(k+1)} = \frac{\hat R_v^{(k)}}{N},\qquad
\hat \sigma_y^{2\,(k+1)} = \frac{\hat R_y^{(k)}}{N}.
\end{equation}
\end{cor}
\begin{proof}
Follows from direct maximization of~\eqref{eq:Qmc}.
\end{proof}
Thanks to Theorem~\ref{thm:Qmc} and Corollary~\ref{cor:Qmc}, we have a simple
way to compute the MCEM estimates of the kernel hyperparameters and of the
noise variances; starting from an initial value of the hyperparameters, we iterate
the following three steps:
\begin{enumerate}
\item Run a Gibbs sampler according to~\eqref{eq:gibbs}.
\item Collect the samples according to~\eqref{eq:burnin} and compute the moments according
to~\eqref{eq:mcem_moments}.
\item Update the parameters according to Corollary~\ref{cor:Qmc}.
\end{enumerate}
Under mild regularity conditions, these iterations yield a sequence of
parameter estimates that converges to a stationary point of the marginal likelihood of the
data (under the condition that the number of particles $M_k$ at iteration $k$ is
such that $\sum_{k=1}^\infty M_k^{-1} = \infty$;
see~\cite{neath2013convergence}). Then, using the estimated
hyperparameters, we can run a new Gibbs sampler and approximate the integrals
in~\eqref{eq:posteriormeans} with averages over the samples:
\begin{equation}\label{eq:particle_means}
\hat g \approx \frac{1}{M} \sum_{j=1}^M \bar g^{(j)},\quad
\hat w \approx \frac{1}{M} \sum_{j=1}^M \bar w^{(j)}.
\end{equation}
\subsection{Variational Bayes approximation}\label{sec:vb}
The second method we present is a variational approximation method.
Instead of approximating the unknown joint posterior density using sampling, we
propose an analytically tractable family of distributions and we look for the best
approximation of the unknown posterior density within that family.
The variational Bayes method hinges on the fact that
\begin{equation}
\log\mathrm{p}(y,v,g,w;\tau) = \log\mathrm{p}(g,w|y,v;\tau) + \log\mathrm{p}(y,v;\tau).
\end{equation}
Hence, for any proposal distribution $\mathrm{q}$ in some family of
distributions $\mathcal{Q}$, we can write
\begin{equation}
\log \mathrm{p}(y,v;\tau) = \log \frac{\mathrm{p}(y,v,g,w;\tau)}{\mathrm{q}(g,w)} - \log
\frac{\mathrm{p}(g,w|y,v;\tau)}{\mathrm{q}(g,w)}.
\end{equation}
Taking the expectation with respect to $\mathrm{q}$ and observing that the left hand
side is independent of $g$ and $w$, we get that
\begin{equation}\label{eq:marginal_split}
\log \mathrm{p}(y,v;\tau) = L(\mathrm{q}) + KL(\mathrm{q}),
\end{equation}
where we have defined the functional
\begin{equation}
L(\mathrm{q}) = \int \log\left(\frac{\mathrm{p}(y,v,g,w;\tau)}{\mathrm{q}(g,w)}\right)\mathrm{q}(g,w)\dif
g \dif w,
\end{equation}
and the \emph{Kullback-Leibler (KL) distance}~\cite{kullback1951information}
\begin{equation}
KL(\mathrm{q}) = \int \log\left(\frac{\mathrm{q}(g,w)}{\mathrm{p}(g,w|y,v;\tau)}\right)\mathrm{q}(g,w)\dif
g \dif w.
\end{equation}
Although the KL distance is not a metric---it is not symmetric
and it does not satisfy the triangle inequality---it is a useful measure of
similarity between probability
distributions (see~\cite[Section~1.6.1]{bishop2006pattern}).
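For univariate Gaussians the KL distance has a well-known closed form, $KL\del{\mathcal{N}(\mu_1,\sigma_1^2)\,\|\,\mathcal{N}(\mu_2,\sigma_2^2)} = \log(\sigma_2/\sigma_1) + \del{\sigma_1^2 + (\mu_1-\mu_2)^2}/(2\sigma_2^2) - 1/2$, which makes the asymmetry easy to verify numerically; the sketch below is purely illustrative.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL(N(m1, s1^2) || N(m2, s2^2)) for univariate Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

a = kl_gauss(0.0, 1.0, 1.0, 2.0)  # KL(p || q)
b = kl_gauss(1.0, 2.0, 0.0, 1.0)  # KL(q || p): a different value
```

Swapping the arguments changes the value, confirming that the KL distance is not symmetric.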
Because the left hand side of~\eqref{eq:marginal_split} is independent of $\mathrm{q}$,
we can find the distribution $\mathrm{q}^\star$ with minimum distance
(in the KL sense) to the target distribution by maximizing
the functional $L(\mathrm{q})$ with respect to $\mathrm{q}\in \mathcal{Q}$,
\begin{equation}\label{eq:kl_optimization}
\mathrm{q}^\star(g,w) =\arg\min_{\mathrm{q} \in \mathcal{Q}} KL(\mathrm{q}) = \arg\max_{\mathrm{q}\in \mathcal{Q}} L(\mathrm{q}).
\end{equation}
This technique allows us to use the known functional $L(\mathrm{q})$ to find the $\mathrm{q}$
with minimum KL distance to the unknown joint posterior distribution.
To use the variational approximation, we need to fix a family of distributions
$\mathcal{Q}$ among which to look for $\mathrm{q}^\star$. In this work, we use a
\emph{mean-field approximation}, meaning that we look for an approximation of
the posterior distribution where $g$ and $w$ are independent given the data; in
other words we consider proposal distributions that factorize into two
independent factors according to
\begin{equation}
\mathrm{q}(g,w) = \mathrm{q}_g(g)\mathrm{q}_w(w).
\end{equation}
After choosing the family of proposal distributions, we need to find the best
approximation $\mathrm{q}^\star$ in terms of KL distance to the unknown
posterior distribution; in view of~\eqref{eq:kl_optimization}, the solution is
given by
\begin{equation}
\mathrm{q}^\star(g,w) = \arg\max_{\mathrm{q}_g,\,\mathrm{q}_w} L(\mathrm{q}_g\mathrm{q}_w).
\end{equation}
Consider first the factor $\mathrm{q}_g$. We have that
\begin{equation}
\begin{aligned}
L&(\mathrm{q}_g\mathrm{q}_w) = \int
\log\left(\frac{\mathrm{p}(y,v,g,w;\tau)}{\mathrm{q}_g(g)\mathrm{q}_w(w)}\right)
\mathrm{q}_g(g) \mathrm{q}_w(w)\dif g \dif w,\\
&\!\cong\!\int \!\sbr{\int\log\mathrm{p}(y,v,g,w;\tau)\mathrm{q}_w(w)\dif w - \log \mathrm{q}_g(g)
}\mathrm{q}_g(g)\dif g,
\end{aligned}
\end{equation}
ignoring terms independent of $\mathrm{q}_g$.
If we define the distribution $\mathrm{p}_w(y,v,g;\tau)$ such that
\begin{equation}
\log \mathrm{p}_w(y,v,g;\tau) = \int \log \mathrm{p}(y,v,g,w;\tau) \mathrm{q}_w(w)\dif w,
\end{equation}
we have that, again ignoring terms independent of $\mathrm{q}_g(g)$,
\begin{equation}
L(\mathrm{q}_g\mathrm{q}_w) \cong \int \log\del{\frac{\mathrm{p}_w(y,v,g;\tau)}{\mathrm{q}_g(g)}}\mathrm{q}_g(g)
\dif g,
\end{equation}
which is the negative KL distance between the factor $\mathrm{q}_g$ and the
density $\mathrm{p}_w(y,v,g;\tau)$. Because the KL distance is nonnegative,
by choosing $\mathrm{q}_g^\star(g)=\mathrm{p}_w(y,v,g;\tau)$ (where the KL distance is zero)
we are maximizing the functional $L$ with respect to
$\mathrm{q}_g$. Considering now $\mathrm{q}_w(w)$, we can trace the same argument and find that
the optimal choice is
\begin{equation}\label{eq:q_w_star}
\log \mathrm{q}^\star_w(w) = \int \log \mathrm{p}(y,v,g,w;\tau)\mathrm{q}^\star_g(g)\dif g,
\end{equation}
where $\mathrm{q}^\star_g(g)$ is the solution of
\begin{equation}\label{eq:q_g_star}
\log \mathrm{q}^\star_g(g) = \int \log \mathrm{p}(y,v,g,w;\tau)\mathrm{q}^\star_w(w)\dif w.
\end{equation}
The maximum of $L(\mathrm{q}_g\mathrm{q}_w)$ is, therefore, the simultaneous solution
of~\eqref{eq:q_w_star} and~\eqref{eq:q_g_star}. The solution can be found with
the following iterative procedure: from an initialization
$\mathrm{q}^{(0)}_g$ and $\mathrm{q}_{w}^{(0)}$ of the densities, compute
\begin{equation}\label{eq:variational_bayes}
\begin{aligned}
\!\log \mathrm{q}^{(j+1)}_w(w) &= \int \log \mathrm{p}(y,v,g,w;\tau)\mathrm{q}^{(j)}_g(g)\dif g,\\
\!\log \mathrm{q}^{(j+1)}_g(g) &= \int \log \mathrm{p}(y,v,g,w;\tau)\mathrm{q}^{(j+1)}_w(w)\dif w.
\end{aligned}
\end{equation}
This iterative procedure will converge to the
simultaneous solution of~\eqref{eq:q_g_star} and~\eqref{eq:q_w_star}
(see~\cite[Chapter~10]{bishop2006pattern}; see also~\cite{boyd2004convex}).
As was the case for the Gibbs sampler, which can be used only if it is easy to
sample from the full conditional distributions, the variational approximation of
the joint posterior is only useful if it is possible to compute the expectations
in~\eqref{eq:q_w_star} and~\eqref{eq:q_g_star}. In the uncertain-input case, we
have the following result.
\begin{thm}\label{thm:vb_gaussian}
Let $\mathrm{q}_g^\star\mathrm{q}^\star_w$ be the factorized density with minimum KL distance
to the posterior density $\mathrm{p}(g,w|y,v;\tau)$, for a fixed value of the
hyperparameters. Then, $\mathrm{q}^\star_g$ and $\mathrm{q}^\star_w$ are Gaussian distributions.
\end{thm}
\begin{proof}
See Appendix~\ref{pf:vb_gaussian}.
\end{proof}
Theorem~\ref{thm:vb_gaussian} allows us to compute expectations with respect to
$\mathrm{q}^\star_g$ and $\mathrm{q}^\star_w$ easily. In addition, at every iteration
of~\eqref{eq:variational_bayes} the approximating densities remain Gaussian.
This allows us to write the update~\eqref{eq:variational_bayes} in terms
of the first and second moments of the approximating densities:
\begin{cor}\label{cor:iterative_vb}
Let $ w^{(j)}$ and $ g^{(j)}$ be the mean vectors of $\mathrm{q}_w^{(j)}$
and $\mathrm{q}_g^{(j)}$ at the $j$th iteration of~\eqref{eq:variational_bayes}
and let $ P_w^{(j)}$ and $ P_g^{(j)}$ be the covariance matrices. Let
$ g^{(j+1)}$, $ w^{(j+1)}$, $ P^{(j+1)}_g$, and
$ P_w^{(j+1)}$ be the mean vectors and covariance matrices at the
$(j+1)$th iteration. Let
\begin{equation}
\begin{aligned}
T_g^{(j)}&= R\del{I_N \otimes \sbr{ P_g^{(j)}+ g^{(j)} {g^{(j)}}^T}}R^T,\\
T_w^{(j+1)}&= R\del{I_N \otimes \sbr{ P_w^{(j+1)}+ w^{(j+1)} {w^{(j+1)}}^T}}R^T,
\end{aligned}
\end{equation}
where the matrix $R$ is defined in~\eqref{eq:R}. Then,
\begin{equation}\label{eq:vb_moments}
\mathmakebox[0.88\columnwidth][l]{
\begin{aligned}
\!P_w^{(j+1)}&\!=\!\del[3]{\frac{1}{\sigma_y^2}T_g^{(j)} + \frac{1}{\sigma^2_v}I_n +
{K_w(\theta)}^{-1}}^{-1},\\
\!w^{(j+1)} &\!=\! P_w^{(j+1)}\!\del[3]{\frac{ G^{(j)\,T}}{\sigma_y^2}y + \frac{1}{\sigma_v^2}v +
{K_w(\theta)}^{-1}\mu_w(\theta)}\!,\\
\!P_g^{(j+1)} &= \del[3]{\frac{1}{\sigma_y^2}T_w^{(j+1)} +
{K_g(\rho)}^{-1}}^{-1},\\
\!g^{(j+1)} &= P_g^{(j+1)}\del[3]{\frac{ W^{(j+1)\,T}}{\sigma_y^2}y +
{K_g(\rho)}^{-1}\mu_g(\rho)}.
\end{aligned}}
\end{equation}
\end{cor}
\begin{proof}
See Appendix~\ref{pf:iterative_vb}.
\end{proof}
Thanks to Corollary~\ref{cor:iterative_vb}, we can iteratively update the
moments of the Gaussian factors, and the iterations will converge to the moments
of the optimal variational approximation of the joint posterior distribution.
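A scalar sketch of these coordinate-ascent updates, in the spirit of~\eqref{eq:vb_moments}, for a toy bilinear model $v = w + e_v$, $y = g\,w + e_y$ with zero-mean Gaussian priors $g\sim\mathcal{N}(0,k_g)$, $w\sim\mathcal{N}(0,k_w)$. This is not the paper's full kernel-based model; all values are illustrative.

```python
# Toy mean-field coordinate ascent: q_g and q_w stay Gaussian and their
# moments satisfy coupled fixed-point equations, iterated below.
y, v = 3.0, 1.5            # one "output" and one "input" observation
sy2, sv2 = 0.01, 0.01      # noise variances (assumed known here)
kg, kw = 10.0, 10.0        # prior variances (kernel scales)

mg, pg = 1.0, 1.0          # init q_g (a nonzero mean avoids a slow transient)
mw, pw = 0.0, kw
for _ in range(50):
    Eg2 = pg + mg**2                                # E[g^2] under q_g
    pw = 1.0 / (Eg2 / sy2 + 1.0 / sv2 + 1.0 / kw)   # update q_w variance
    mw = pw * (mg * y / sy2 + v / sv2)              # update q_w mean
    Ew2 = pw + mw**2                                # E[w^2] under q_w
    pg = 1.0 / (Ew2 / sy2 + 1.0 / kg)               # update q_g variance
    mg = pg * (mw * y / sy2)                        # update q_g mean
```

With data generated from $g=2$, $w=1.5$, the iteration settles at means close to those values, slightly shrunk toward the zero prior means.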
\begin{rem}
In case we consider the more general Gaussian process
model~\eqref{eq:gp_joint}, the results of
Theorem~\ref{thm:vb_gaussian} and of Corollary~\ref{cor:iterative_vb} still
hold with minor modifications (similarly to what is presented in
Remark~\ref{rem:general_mc}). However, the approximation of posterior
independence may not make sense when using a-priori dependent Gaussian
process models.
\end{rem}
Using the factorized approximation of the joint distribution,
we can approximate the E-step in the EM method with a variational
E-step (this is sometimes known as the VBEM method,
see~\cite{beal2003variational}). We
create the variational approximation of the lower bound (at the $k$th iteration
of the EM method) by setting
\begin{equation}
Q^{\text{\textsc{vb}}}(\tau,\hat\tau^{(k)}) \!:=\! \int\!
\log\mathrm{p}(y,v,g,w;\tau)\hat \mathrm{q}_g^{(k)}(g)\hat \mathrm{q}_w^{(k)}(w)\dif w\dif g,
\end{equation}
where $\hat \mathrm{q}_g^{(k)}$ and $\hat\mathrm{q}_w^{(k)}$ are the limits of the variational
Bayes iterations with the hyperparameters set to $\hat \tau^{(k)}$.
Because the complete-data likelihood is quadratic in $g$ and $w$, the
approximation $Q^\text{\textsc{vb}}$ admits a closed-form expression
as a function of the moments of $g$ and $w$.
\begin{thm}\label{thm:Qvb}
Let $\hat g^{(k)}$ and $\hat w^{(k)}$ be the mean vectors of $\hat\mathrm{q}_g^{(k)}$
and of $\hat \mathrm{q}_w^{(k)}$, respectively, and let $\hat P_g^{(k)}$ and
$\hat P_w^{(k)}$ be their covariance matrices. Define
\begin{equation}
\begin{aligned}
\hat S_w^{(k)} &= R \del[1]{I_n \otimes \hat P_w^{(k)}} R^T,
&\hat T_w^{(k)} &= \hat S_w^{(k)} \!+\! \hat W^{(k)T}\hat W^{(k)},\\
\hat R_v^{(k)} &= \enVert[1]{v - \hat w^{(k)}}^2,
&\hat R_y^{(k)} &= \enVert[1]{y - \hat W^{(k)} \hat g^{(k)}}^2,
\end{aligned}
\end{equation}
where $R$ is defined in~\eqref{eq:R}.
Then,
\begin{equation}\label{eq:Qvb}
\begin{aligned}
&Q^\text{\textsc{vb}}(\tau,\hat \tau^{(k)}) = -\frac{\hat R_v^{(k)}}{2\sigma_v^2}
-\frac{N}{2}\log \sigma_v^2 - \frac{N}{2}\log \sigma_y^2- \frac{1}{2\sigma_y^2}\del{\hat R_y^{(k)} +
\enVert[1]{\hat g^{(k)}}^2_{\hat S_w^{(k)}} + \trace\cbr{\hat T_w^{(k)} \hat P_g^{(k)}}}\\
&- \frac{1}{2}\trace\cbr{{K_g(\rho)}^{-1} \hat P_g^{(k)}} -\frac{1}{2} \enVert{\hat g^{(k)} -
\mu_g(\rho)}^{2}_{{K_g(\rho)}^{-1}}-\frac{1}{2}
\trace\cbr{{K_w(\theta)}^{-1} \hat P_w^{(k)}}\\
&-\frac{1}{2} \enVert{\hat w^{(k)} -
\mu_w(\theta)}^{2}_{{K_w(\theta)}^{-1}}-\frac{1}{2} \log \det K_g(\rho)
-\frac{1}{2} \log \det K_w(\theta).
\end{aligned}
\end{equation}
\end{thm}
\begin{proof}
See Appendix~\ref{pf:Qvb}.
\end{proof}
Thanks to the structure of the function $Q^\text{\textsc{vb}}(\tau,\hat \tau^{(k)})$,
the M-step splits into
decoupled optimization problems for the kernel hyperparameters and for the
noise variances.
\begin{cor}\label{cor:Qvb}
At the $k$th iteration of the EM method, the kernel hyperparameters can be
updated as
\begin{equation}\label{eq:vbem_update_rho}
\begin{aligned}
\hat \rho^{(k+1)} &= \arg\min_\rho
\enVert{\hat g^{(k)}-\mu_g(\rho)}^{2}_{{K_g(\rho)}^{-1}}+
\trace\cbr{{K_g(\rho)}^{-1} \hat P^{(k)}_g} + \log \det K_g(\rho),\\
\hat \theta^{(k+1)} &= \arg\min_\theta
\enVert{\hat w^{(k)}-\mu_w(\theta)}^{2}_{{K_w(\theta)}^{-1}}+ \trace\cbr{{K_w(\theta)}^{-1} \hat P^{(k)}_w} + \log \det K_w(\theta),
\end{aligned}
\end{equation}
and the noise variances can be updated as
\begin{equation}
\begin{aligned}
\hat \sigma_v^{2\,(k+1)} &= \frac{\hat R^{(k)}_v}{N},\\
\hat \sigma_y^{2\,(k+1)} &=
\frac{\hat R_y^{(k)}\!+\!
\enVert[1]{\hat g^{(k)}}^2_{\hat S_w^{(k)}} \!+\! \trace\cbr{\hat T_w^{(k)}
\hat P_g^{(k)}}}{N}.
\end{aligned}
\end{equation}
\end{cor}
\begin{proof}
Follows from direct maximization of~\eqref{eq:Qvb}.
\end{proof}
Thanks to Theorem~\ref{thm:Qvb} and Corollary~\ref{cor:Qvb}, we have a simple
iterative procedure to compute the VBEM estimates of the kernel hyperparameters
and of the noise variances; starting from an initial value of the
hyperparameters, we iterate the following two steps:
\begin{enumerate}
\item Compute the moments of the variational approximation according to
Corollary~\ref{cor:iterative_vb}.
\item Update the hyperparameters according to Corollary~\ref{cor:Qvb}.
\end{enumerate}
Under mild regularity conditions, these iterations yield a sequence of
parameter estimates that converges to a stationary point of the marginal
likelihood of the data (see~\cite[Section~2.2]{beal2003variational}).
Then, we can run the iterations in Corollary~\ref{cor:iterative_vb} again to find the
posterior mean estimates of $g$ and $w$.
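In the scalar, zero-mean case, the kernel-hyperparameter objectives in the corollaries reduce to minimizing $(m^2+p)/k + \log k$ over the scale $k>0$, where $m$ and $p$ are the current posterior mean and variance; the minimizer is $k = m^2 + p$. A quick numerical check (illustrative values only):

```python
import math

def mstep_scalar(m, p):
    """Scalar analogue of the kernel-scale M-step updates: minimize
    (m^2 + p)/k + log(k) over k > 0. Setting the derivative
    -(m^2 + p)/k^2 + 1/k to zero gives k = m^2 + p."""
    return m**2 + p

# Sanity check against a brute-force grid search.
m, p = 2.0, 0.5
obj = lambda k: (m**2 + p) / k + math.log(k)
k_grid = min((0.01 * i for i in range(1, 2000)), key=obj)
```

The grid minimizer agrees with the closed-form value $k = m^2 + p = 4.5$.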
\section{Simulations}
In this section, we evaluate the methods proposed on some problems that can be
cast as problems of identifying uncertain-input systems.
\subsection{Cascaded linear systems}
In this numerical experiment, we estimate cascaded systems with the structure
presented in Section~\ref{sec:cascaded_systems}. We perform a Monte Carlo
experiment consisting of 500 runs. In each run, we generate two systems by
randomly sampling 40 poles and 40 zeros, in complex conjugate pairs, using the
following technique. We sample the poles randomly, with magnitudes uniformly between 0.4
and 0.8 and phases uniformly between 0 and $\pi$. We sample the zeros randomly,
with magnitudes uniformly between 0 and 0.92 and phases uniformly between 0 and
$\pi$. All systems are generated with unitary static gain. The noise variances on the
input and output measurements are $1$ and $1/100$ times the variances
of the corresponding noiseless signals, respectively; this means that the
sensor at the output of $S_2$ is considerably more accurate than the sensor at
the output of $S_1$.
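The conjugate-pair pole/zero sampling described above can be sketched as follows (the helper name and the seed are illustrative):

```python
import cmath
import random

def sample_conjugate_pairs(n_pairs, rmin, rmax, rng):
    """Sample points in complex-conjugate pairs, with magnitudes uniform in
    [rmin, rmax] and phases uniform in [0, pi], as in the Monte Carlo setup."""
    pts = []
    for _ in range(n_pairs):
        r = rng.uniform(rmin, rmax)
        theta = rng.uniform(0.0, cmath.pi)
        z = cmath.rect(r, theta)   # r * exp(i * theta)
        pts.extend([z, z.conjugate()])
    return pts

rng = random.Random(1)
poles = sample_conjugate_pairs(20, 0.4, 0.8, rng)    # 40 poles in pairs
zeros = sample_conjugate_pairs(20, 0.0, 0.92, rng)   # 40 zeros in pairs
```

Keeping the magnitudes strictly inside the unit circle guarantees that the sampled transfer functions are stable and minimum phase.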
We simulate the responses of the systems with a Gaussian
white-noise input with variance 1. We collect $N=200$ samples of the output, from zero
initial conditions, and we estimate the samples of the impulse responses of the
two systems.
As described in Section~\ref{sec:cascaded_systems}, the systems are modeled as
zero-mean Gaussian processes with first order stable-spline kernels.
All the methods are initialized with the choices $\rho_1=\theta_1=1$ and
$\rho_2=\theta_2 = 0.6$. The noise variances are initialized from the
sample variances of the errors of the linear least squares estimates of $g_1$
and $g_2$ from the noisy data.
In the experiment, we compare the following estimators.
\begin{description}
\item[C-MCEM] The method described in Section~\ref{sec:mcmc}. It uses an
MCMC approximation of the joint posterior with $B=400$ and $M=2000$. The
EM iterations are stopped once the relative change in the parameter values
is below $10^{-2}$.
\item[C-VBEM] The method described in Section~\ref{sec:vb}. It uses a
variational approximation of the joint posterior. The
EM iterations are stopped once the relative change in the parameter values
is below $10^{-2}$.
\item[C-2Stage] A kernel-based two-stage method. First, it estimates the
first system in the cascade from $u$ and $v$. Then, it simulates the
intermediate signal $\hat w$ as the response of the estimated system to $u$
and uses $\hat w$ and $y$ to estimate the second system in the cascade.
\item[C-Naive] A naive kernel-based estimation method. It estimates the
first system in the cascade from $u$ and $v$ and the second system from
$v$ and $y$. It corresponds to using the noisy signal $v$ as if it were
the noiseless input to the second system in the cascade.
\end{description}
To evaluate the performance of the estimators, we use the following goodness-of-fit metric
\begin{equation}\label{eq:goodnessoffit}
\mathrm{Fit}^g_j = 1 - \frac{\enVert{g_j - \hat g_j}}{\enVert{g_j -
\mathrm{mean}(g_j)}},
\end{equation}
where $g_j$ is the impulse response of the system at the $j$th Monte Carlo run,
and $\hat g_j$ is an estimate of the same impulse response.
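The fit metric can be computed directly from its definition; a minimal sketch in plain Python, with no external dependencies:

```python
import math

def fit(g_true, g_hat):
    """Goodness of fit: 1 - ||g - g_hat|| / ||g - mean(g)||, using the
    Euclidean norm, as in the metric defined above."""
    m = sum(g_true) / len(g_true)
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(g_true, g_hat)))
    den = math.sqrt(sum((a - m) ** 2 for a in g_true))
    return 1.0 - num / den
```

A perfect estimate scores 1, and the trivial estimate equal to the mean of the true response scores 0.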
The results of the experiment are presented in
Figure~\ref{fig:boxplot_cascade}. The figure shows the boxplots of the fit of
the estimated impulse responses of the two blocks in the cascade over the
systems in the dataset.
\begin{figure}[htb]
\centering
\includegraphics{boxplot_cascade.pdf}
\caption{Results of the estimation of cascaded linear systems.}\label{fig:boxplot_cascade}
\end{figure}
From the figure, it appears that the proposed approximation methods are able to
reconstruct the cascaded model with higher accuracy than the alternative
approaches we have considered. Furthermore, there seems to be no clear
disadvantage in using the variational Bayes approximation compared with the
more exact sampling-based approximation. Regarding the performance of the
methods in estimating $g_1$, we see that the methods C-MCEM and C-VBEM perform
better than the other methods (which give the same result). Both C-2Stage and
C-Naive only use the information in $v$ to estimate $g_1$, whereas C-MCEM and
C-VBEM use the full joint distribution of $v$ and $y$ to estimate $g_1$. Given
that in our setting the noise on $y$ is much lower than the noise on $v$,
there is information in $y$ that the joint methods are able to
leverage to improve the estimate of $g_1$ (similar phenomena were already
observed in~\cite{hjalmarsson2009system}, and in~\cite{everitt2013geometric}).
This allows C-MCEM and C-VBEM to better estimate $g_1$.
\subsection{Hammerstein systems}
In this numerical experiment, we estimate Hammerstein systems with the
structure presented in Section~\ref{sec:hammersteins}. We perform four Monte
Carlo experiments consisting of 500 runs. In each run, we generate a stable
transfer-function model by sampling poles and zeros in the complex plane. We
sample the poles, uniformly in magnitude and phase, in the annulus of radii 0.4
and 0.8. We sample the zeros uniformly in the disk of radius 0.92. We generate
the nonlinear transformation as a finite combination of Legendre polynomials
defined as
\begin{equation}
\varphi_j(x) = 2^j\cdot \sum_{k=0}^j
x^k\binom{j}{k}\binom{\tfrac{j+k-1}{2}}{j}.
\end{equation}
We sample the coefficients of the combination independently and uniformly in
the interval $\intcc{-1,1}$.
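The finite-sum formula above can be evaluated with a generalized binomial coefficient, which must accept half-integer upper arguments; a sketch:

```python
def gen_binom(a, j):
    """Generalized binomial coefficient C(a, j) for real a and integer j >= 0:
    the product (a)(a-1)...(a-j+1) / j!."""
    out = 1.0
    for i in range(j):
        out *= (a - i) / (i + 1)
    return out

def legendre(j, x):
    """Legendre polynomial via the finite-sum formula above:
    P_j(x) = 2^j * sum_k x^k C(j, k) C((j + k - 1)/2, j)."""
    return 2.0 ** j * sum(
        x ** k * gen_binom(j, k) * gen_binom((j + k - 1) / 2.0, j)
        for k in range(j + 1)
    )
```

The same function can be used to build the regressor matrix $\Phi$ of the H-P estimator below, with $[\Phi]_{i,j} = \varphi_j(u_i)$, by evaluating it on the input samples.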
In each Monte Carlo experiment, we consider Hammerstein systems with
different orders for both the nonlinear system and the polynomial nonlinearity.
In Table~\ref{tab:hammerstein_orders}, we present the orders of the systems
considered in the various experiments.
\begin{table}[ht]
\centering
\caption{Orders of the Hammerstein systems used in the
simulations.}\label{tab:hammerstein_orders}
\begin{tabular}{ccc}
\toprule
Dataset & $S$ & $f(\cdot)$ \\
\midrule
LOLO (Low-Low) & $\{3,\ldots, 5\}$ & $\{5,\ldots, 10\}$\\
HILO (High-Low) & $\{9,\ldots, 20\}$ & $\{5,\ldots, 10\}$\\
LOHI (Low-High) & $\{3,\ldots, 5\}$ & $\{15,\ldots, 20\}$\\
HIHI (High-High) & $\{9,\ldots, 20\}$ & $\{15,\ldots, 20\}$\\
\bottomrule
\end{tabular}
\end{table}
We simulate the responses of the systems in the datasets to a uniform white
noise input in the interval $\sbr{-1,1}$. We collect $N=200$ samples of the
output, from zero initial conditions, and we estimate the static nonlinearity
and the impulse response.
As described in Section~\ref{sec:hammersteins}, the linear blocks are modeled
as zero-mean Gaussian processes with first order stable-spline kernels. We
consider both a parametric model and a nonparametric model for the static
nonlinearity. All the methods are initialized with $\rho_1 = 1$, $\rho_2=
0.6$. The noise variances are initialized from the prediction error of an
overparameterized least-squares estimate
(see~\cite{bai1998optimal,risuleo2015new}).
In the simulation, we compare the performance of the following estimators:
\begin{description}
\item[H-P] A semiparametric model for the Hammerstein
system. It uses the Legendre polynomial basis to construct a linear
parameterization (with the correct order) of the input:
\begin{equation}
\mu_w(\theta)=\Phi\theta,\qquad \sbr[1]{\Phi}_{i,j} = \varphi_j(u_i).
\end{equation}
The dynamical system is modeled as a zero-mean Gaussian process with
covariance matrix given by the first order stable-spline kernel.
\item[H-MCEM] A nonparametric model for the Hammerstein system with Gibbs
sampling from the joint posterior with $B=200$ and $M=500$. It uses the
radial-basis-function kernel~\eqref{eq:rbf} to model the input
nonlinearity. Note that, because the Hammerstein system is not
identifiable, we fix $\theta_1=1$ in the algorithm.
\item[H-VBEM] A nonparametric model for the Hammerstein system with
variational-Bayes approximation of the joint posterior. It uses the same
kernel as H-MCEM to model the input nonlinearity.
\item[NLHW] The parametric model estimated in Matlab with default parameters. It
corresponds to the maximum-likelihood estimator of the model with the
correct parameterization.
\end{description}
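The H-MCEM and H-VBEM estimators above model the input nonlinearity with a radial-basis-function kernel (eq.~\eqref{eq:rbf}, not reproduced in this section). A common form, used here purely for illustration with a hypothetical length scale $\ell$, is $k(x,x') = \exp\del{-(x-x')^2/(2\ell^2)}$:

```python
import math

def rbf_kernel(xs, ell=0.5):
    """Gram matrix of a radial-basis-function kernel
    k(x, x') = exp(-(x - x')^2 / (2 * ell^2)) over the input samples.
    The exact kernel in eq. (rbf) of the paper may differ; this is a
    common form shown for illustration, with an assumed length scale."""
    n = len(xs)
    return [[math.exp(-(xs[i] - xs[j]) ** 2 / (2 * ell ** 2))
             for j in range(n)] for i in range(n)]

K = rbf_kernel([-1.0, 0.0, 1.0])
```

The resulting Gram matrix is symmetric, has unit diagonal, and decays with the distance between inputs.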
In all methods, the EM iterations are stopped once the relative change in the
parameter values is below $10^{-2}$.
To evaluate the performance of the methods, we use the standard goodness-of-fit
criterion~\eqref{eq:goodnessoffit} for the impulse response of the linear
system. For the input nonlinearity, we compute the
estimated value $\hat w_j$ on a uniform grid of 300 values between -1 and 1 and
we compare it to the true value $w_j$ according to
\begin{equation}
\mathrm{Fit}^f_j = 1 - \frac{\enVert{w_j - \hat w_j}}{\enVert{w_j -
\mathrm{mean}(w_j)}},
\end{equation}
where $w_j$ is the vector of values of the true nonlinearity at the $j$th Monte
Carlo run, and $\hat w_j$ is an estimate of the same vector of values.
\begin{figure*}[htb]
\centering
\includegraphics{boxplot_hammerstein.pdf}
\caption{Results of the estimation of Hammerstein systems.}\label{fig:boxplot_hammerstein}
\end{figure*}
The results of the experiment are presented in
Figure~\ref{fig:boxplot_hammerstein}. The figure shows the boxplots of the fit
of the estimated impulse responses (upper pane) and of the static
nonlinearities (lower pane) over the systems in the datasets.
From this simulation, it appears that the proposed nonparametric models are
capable of recovering the system better than the fully parametric NLHW\@. In
addition, it appears that using the correct parametric model for the
input nonlinearity is beneficial in terms of accuracy. As was the case in the
cascaded-system estimation problem, the two approximation methods have comparable performance.
\section{Conclusions}
In this work, we have proposed a new model structure, which we have called the
\emph{uncertain-input model}. Uncertain-input models describe
linear systems subject to inputs about which we have limited information. To
encode the information we have available about the input and the system, we
have used Gaussian-process models.
We have shown how classical problems in system identification can be seen as
uncertain-input estimation problems. Among these applications we find classical
PEM, errors-in-variables and blind system-identification problems,
identification of cascaded linear systems, and identification of Hammerstein
models.
We have proposed an iterative algorithm to estimate the uncertain-input model.
We estimate the impulse response of the linear system and the input
nonlinearity as the posterior means of the Gaussian-process models given the
data. The hyperparameters of the Gaussian-process models are estimated using
the marginal-likelihood method. To solve the related optimization
problem, we have proposed an iterative method based on the EM method.
In the general formulation, the model depends on the convolution of two
Gaussian processes. Therefore, the joint distribution of the data is not
available in closed form. To circumvent this issue, we have proposed specialized
models, namely the semiparametric and the parametric models, for which the
integrals defining the posterior distributions are available. In the more
general case, we have proposed two approximation methods for the joint
posterior distribution. In the first method, we have used a particle
approximation of the posterior distribution. The particles are drawn using the
Gibbs sampler from Gaussian full-conditional distributions. In the second
method, we have used the variational-Bayes approach to approximate the
posterior distribution. Using a mean-field approximation, we have found that
the posterior distribution can be approximated as a product of two independent
Gaussian random variables.
We have tested the proposed model on two problems: the estimation of cascaded
linear systems and of Hammerstein models. In both cases, the proposed
uncertain-input formulation is able to capture the systems and to provide good
estimates.
Although hinged on the EM method (which is guaranteed to converge under certain
smoothness assumptions), the approximate methods we have proposed do
not have general convergence guarantees: in the formulation given
by~\eqref{eq:ui_model}, there may be instances of uncertain-input models for which
the assumptions required for convergence do not hold. In future publications,
we plan to analyze whether there exist general conditions on the
uncertain-input model such that the algorithms are guaranteed to converge to
optimal solutions.
In addition, the uncertain-input model can be nonidentifiable in certain
configurations (for instance, consider the general errors-in-variables
problem). We plan to further explore this nonidentifiability. Connections with
other problems sharing the same bilinear
structure~\cite{bai2005least,wang2009revisiting} outside of the system
identification framework are also under investigation.
Q: Mock BigQuery function in Kotlin unit test I am developing a unit test that involves com.google.api.services.bigquery.Bigquery. I mock the object with MockK's @SpyK, but the every{} block reports an error when the unit test starts. Details follow.
The dependency is:
"com.google.apis:google-api-services-bigquery:v2-rev20220326-1.32.1"
The exception is thrown in the setUp() function.
Code
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
@ExtendWith(MockKExtension::class, SpringExtension::class)
@EnableConfigurationProperties(ConfidentialTags::class)
@PropertySource(value = ["classpath:application.yml"], factory = YamlPropertySourceFactory::class)
class ResourceAccessServiceTest {
private val mockGoogleCredential = MockGoogleCredential.Builder().build()
@SpyK
private var olderBigQuery: Bigquery = Bigquery.Builder(
MockGoogleCredential.newMockHttpTransportWithSampleTokenResponse(),
mockGoogleCredential.jsonFactory,
mockGoogleCredential
).build()
@InjectMockKs
private lateinit var resourceAccessService: ResourceAccessService
@BeforeAll
fun setUp() {
//error reported here
every { olderBigQuery.RowAccessPolicies().list(any(), any(), any()).execute() } returns mockk() {
every { rowAccessPolicies } returns listOf()
}
}
@Test
fun queryRowAccessExistedPrincipals() {
val list = resourceAccessService.queryRowAccessExistedPrincipals(
TableId.of("proj", "ds", "tbl"),
RowFilter("region", listOf("hk"))
)
assert(list.isEmpty())
}
}
error log
java.net.MalformedURLException: no protocol: projects/2af774bb308513d5/datasets/-3e5359fe6b1a603f/tables/-28b905709f94ecc7/rowAccessPolicies
java.lang.IllegalArgumentException: java.net.MalformedURLException: no protocol: projects/2af774bb308513d5/datasets/-3e5359fe6b1a603f/tables/-28b905709f94ecc7/rowAccessPolicies
at com.google.api.client.http.GenericUrl.parseURL(GenericUrl.java:679)
at com.google.api.client.http.GenericUrl.<init>(GenericUrl.java:125)
at com.google.api.client.http.GenericUrl.<init>(GenericUrl.java:108)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequestUrl(AbstractGoogleClientRequest.java:373)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequest(AbstractGoogleClientRequest.java:404)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:514)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:455)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:565)
at com.airwallex.data.grpc.das.service.ResourceAccessServiceTest$setUp$1.invoke(ResourceAccessServiceTest.kt:67)
at com.airwallex.data.grpc.das.service.ResourceAccessServiceTest$setUp$1.invoke(ResourceAccessServiceTest.kt:67)
at io.mockk.impl.eval.RecordedBlockEvaluator$record$block$1.invoke(RecordedBlockEvaluator.kt:25)
at io.mockk.impl.eval.RecordedBlockEvaluator$enhanceWithRethrow$1.invoke(RecordedBlockEvaluator.kt:78)
at io.mockk.impl.recording.JvmAutoHinter.autoHint(JvmAutoHinter.kt:23)
at io.mockk.impl.eval.RecordedBlockEvaluator.record(RecordedBlockEvaluator.kt:40)
at io.mockk.impl.eval.EveryBlockEvaluator.every(EveryBlockEvaluator.kt:30)
at io.mockk.MockKDsl.internalEvery(API.kt:93)
at io.mockk.MockKKt.every(MockK.kt:98)
at com.airwallex.data.grpc.das.service.ResourceAccessServiceTest.setUp(ResourceAccessServiceTest.kt:67)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126)
at org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeAllMethod(TimeoutExtension.java:68)
at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllMethods$9(ClassBasedTestDescriptor.java:384)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllMethods(ClassBasedTestDescriptor.java:382)
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:196)
at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:78)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:136)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
The key error info is
java.net.MalformedURLException: no protocol: projects/2af774bb308513d5/datasets/-3e5359fe6b1a603f/tables/-28b905709f94ecc7/rowAccessPolicies
The exception is thrown in the every{} block in the setUp function. It seems the URL built inside the SDK is invalid, as it has no protocol header, but I don't know how to solve the problem.
A: So what you really want is a way to decouple the code under test from BigQuery. The better way to do that is to hide BigQuery behind your own abstraction (e.g. an interface) and make the code that needs to query depend only on that abstraction - look up the dependency inversion principle/dependency injection (I've got some answers here that show examples). That way, you can substitute a fake implementation in for testing that, say, queries against a collection in memory.
There is still one problem with this approach. You do at some point need to test against BigQuery for real. Otherwise, how will you prove that component works? I guess the possibilities here are constrained by what your company allows.
\section{Introduction \label{Intro}}
Following a series of experimental breakthroughs that took place around the turn of the millennium,\cite{reed1997conductance,park2000nanomechanical,park2002coulomb,liang2002kondo,cui2001reproducible,kergueris1999electron} the field of molecular electronics has seen two decades of rapid experimental and theoretical development. From the technological perspective, the focus has been largely put on proof-of-principle experiments. It has been shown, for instance, that electronic devices based on molecular junctions can act as transistors,\cite{park2000nanomechanical,perrin2015single,gehring2017distinguishing} rectifiers,\cite{elbing2005single,diez2009rectification,perrin2016gate} spintronic devices\cite{iacovita2008visualizing,sanvito2011molecular,bogani2008molecular} or thermoelectric materials.\cite{reddy2007thermoelectricity,cui2017perspective} These experimental studies were performed in a multitude of device geometries and on a plethora of molecular structures.
Currently, however, progress beyond such prototypical devices is also slowly being made. It has been demonstrated, for example, that it is possible to construct molecular diode devices based on self-assembled molecular monolayers which can achieve rectification ratios comparable to those of conventional rectifiers.\cite{chen2017molecular} Reproducibility of the molecular junctions continues, nevertheless, to be a problem.
In order to understand the experimentally-observed transport behavior, it is necessary to resort (at least on a qualitative level) to a particular transport theory, many of which have been developed over the last few decades.
The off-resonant transport regime (where the molecular energy levels lie outside of the bias window) is nowadays almost universally described using the non-interacting Landauer approach,\cite{nitzan2001electron} which characterizes conduction in terms of a transmission coefficient,\cite{engelkes2004length} and typically yields a good match between the observed and theoretically predicted behavior.\cite{lindsay2007molecular,jin2013energy}
Simultaneously, it has been repeatedly demonstrated that this non-interacting approach fails in the resonant regime where the effects of electron-vibrational and electron-electron interactions become important.\cite{secker2011resonant,burzuri2016sequential,thomas2019understanding,fung2019breaking}
Following the early work of Ulstrup, Kuznetsov and coworkers,\cite{friis1998situ,kuznetsov2000mechanisms,kuznetsov2002mechanisms,zhang2008single} as well as more recent studies by Migliore and Nitzan,\cite{migliore2011nonlinear,migliore2013irreversibility} Marcus theory has also become a popular framework to describe charge transport through molecular junctions at relatively high temperatures. This theory has been successfully applied in the resonant transport regime.\cite{thomas2019understanding,yuan2018transition,jia2016covalently}
As we shall demonstrate, due to the lack of lifetime broadening in the conventional Marcus theory, it may fail to correctly describe charge transport in the off-resonant regime.
Recently, however, a relatively simple theory (which we shall refer to as the generalized theory) unifying the Marcus and Landauer descriptions of charge transport has been developed\cite{sowa2018beyond} (see also Refs.~\onlinecite{sowa2020beyond} and \onlinecite{liu2020generalized}). In the present paper, we modify it so as to include entropic effects in the case of polar solvents. We also provide an intuitive derivation of this theory and apply it to study the transport behavior of molecular junctions.
Besides its perturbative nature (with respect to the molecule-lead interactions), the conventional Marcus theory also treats the vibrational degrees of freedom classically.\cite{marcus1985electron} Consequently, it fails to capture the effects of nuclear tunnelling which can still play an important role in overall charge transport characteristics, even at around room temperature when high frequency vibrations are involved, particularly in the `inverted' region.\cite{barbara1996contemporary,may2008charge} This inverted region has been recently observed experimentally in charge transport through molecular junctions.\cite{yuan2018transition,kang2020interplay}
Therefore, in the last part of this work, we demonstrate how lifetime broadening can be incorporated into the Marcus-Levich-Dogonadze-Jortner-type description of molecular conduction.
\section{Theory \label{model}}
We are interested in molecular junctions comprising a molecular system weakly coupled to two metallic electrodes.
At zero bias the molecular system within the junction is found in the $N$ charge state. As the bias is increased, the charging of the molecule -- populating the $N+1$ (or $N-1$) charge state -- will eventually become possible. For simplicity, we assume that each of the two considered charge states is non-degenerate, and ignore any excited electronic states.
Then, the molecular system in question can be modelled as a single energy level with energy $\varepsilon_0$ which corresponds to the chemical potential for the charging of the molecular system. We note that, generally, in the presence of electron-vibrational interactions and molecule-lead coupling, the position of the molecular energy level will be renormalized as compared to its gas-phase value. Since, experimentally, the position of the molecular level is typically an empirical parameter, here we simply absorb all these renormalizations into $\varepsilon_0$.
\begin{figure}[ht]
\centering
\includegraphics{fig1.eps}
\caption{(a) Artistic impression of a single-molecule junction. The effective rates of electron transfer on and off the molecular system are denoted by $k_\mathrm{L}$, $k_\mathrm{R}$ and $\bar{k}_\mathrm{L}$, $\bar{k}_\mathrm{R}$, respectively. (b) Schematic illustration of the rate-equation model considered here; $f_l(\epsilon)$ denotes the Fermi distribution in the lead $l$. $K_\pm(\epsilon)$ are the molecular densities of states. }
\label{fig1}
\end{figure}
In this work it will be sufficient to model charge transport through the junction using a rate equation approach.
As schematically shown in Fig.~\ref{fig1}, charge transport through the weakly-coupled single-molecule junction can be modelled as a series of electron transfers taking place at the left (L) and right (R) electrode.
In what follows, we will work within the wide-band approximation.\cite{galperin2006resonant,migliore2011nonlinear} We will assume that each of the leads has a constant density of states [$\varrho_l(\epsilon) = \mathrm{const.}$ where $l = \mathrm{L,R}$] and that the electronic coupling between the molecular energy level and a continuum of energy levels in the leads is also constant ($V_l= \mathrm{const.}$ where $V_l$ is the molecule-lead coupling matrix element).
The populations of the $N$ ($P_N$) and $N+1$ ($P_{N+1}$) charge states can be found by considering the following pair of rate equations:
\begin{align}
\dfrac{\mathrm{d} P_N}{\mathrm{d} t} &= -(k_\mathrm{L} + k_\mathrm{R}) P_N + (\bar{k}_\mathrm{L} + \bar{k}_\mathrm{R}) P_{N+1}~,\\
\dfrac{\mathrm{d} P_{N+1}}{\mathrm{d} t} &= - (\bar{k}_\mathrm{L} + \bar{k}_\mathrm{R}) P_{N+1} + (k_\mathrm{L} + k_\mathrm{R}) P_N~,
\end{align}
where $k_l$ and $\bar{k}_l$ are the rates of electron hopping on and off the molecular structure at the $l$ interface, respectively, as denoted in Fig.~\ref{fig1}.
In the steady-state limit, $\mathrm{d} P_N/{\mathrm{d} t} = {\mathrm{d} P_{N+1}}/{\mathrm{d} t} = 0$, this pair of equations has the solution
\begin{equation}
P_N = \dfrac{\bar{k}_\mathrm{L} + \bar{k}_\mathrm{R}}{k_\mathrm{L} + k_\mathrm{R} + \bar{k}_\mathrm{L} + \bar{k}_\mathrm{R}} ~,
\end{equation}
and $P_{N+1} = 1 - P_N$.
The current through the junction can be determined by considering either the left or the right molecule-lead interface. Considering, for instance, the left interface, the current through the junction is given by:
\begin{equation}
I = e \left[k_\mathrm{L} P_N - \bar{k}_\mathrm{L} P_{N+1}\right]~,
\end{equation}
which gives the well-known expression:\cite{migliore2011nonlinear,zhang2008single}
\begin{equation}\label{current}
I = e \dfrac{k_\mathrm{L} \bar{k}_\mathrm{R} - k_\mathrm{R} \bar{k}_\mathrm{L} }{k_\mathrm{L} + k_\mathrm{R} + \bar{k}_\mathrm{L} + \bar{k}_\mathrm{R}}~.
\end{equation}
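As a sanity check, the steady-state solution and Eq.~\eqref{current} can be evaluated numerically; the sketch below is an illustration only, with arbitrary rate values and $e = 1$:

```python
def steady_state(kL, kR, kbarL, kbarR):
    """Steady-state populations and current of the two-state rate model.

    kL, kR: rates of electron hopping onto the molecule (k_l);
    kbarL, kbarR: rates of hopping off (bar k_l).
    Returns (P_N, P_{N+1}, I) with the current I in units of e.
    """
    total = kL + kR + kbarL + kbarR
    P_N = (kbarL + kbarR) / total        # steady-state population of N
    P_N1 = 1.0 - P_N                     # population of N+1
    # Net current through the left interface: I = kL*P_N - kbarL*P_{N+1}
    return P_N, P_N1, kL * P_N - kbarL * P_N1

# Fully directional limit: electrons enter from L and leave to R only
P_N, P_N1, I = steady_state(kL=1.0, kR=0.0, kbarL=0.0, kbarR=1.0)
```

In this limit $P_N = P_{N+1} = 1/2$ and $I = 1/2$ (in units of $e$), and for arbitrary rates the function reproduces the closed-form expression of Eq.~\eqref{current}.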
Although in this work we shall consider a non-degenerate electronic level, the (spin) degeneracy of the electronic level in question can be relatively easily introduced into this model; see, for instance, Ref.~\cite{thomas2019understanding}.
The rates of electron transfers in Eq.~\eqref{current} are given by:\cite{chidsey1991free,sowa2018beyond,gerischer1969charge}
\begin{align}\label{rate1}
k_l &= \dfrac{2}{\hbar}\Gamma_l \int_{-\infty}^\infty \dfrac{\mathrm{d}\epsilon}{2\pi} f_l(\epsilon) K_+(\epsilon) ~,\\
\bar{k}_l &= \dfrac{2}{\hbar}\Gamma_l \int_{-\infty}^\infty \dfrac{\mathrm{d}\epsilon}{2\pi} [1-f_l(\epsilon)] K_-(\epsilon) ~, \label{rate2}
\end{align}
where $f_l(\epsilon) = 1/[\exp((\epsilon - \mu_l)/k_\mathrm{B}T) + 1]$ is the Fermi distribution, $\mu_l$ is the chemical potential of the lead $l$, and $\Gamma_l$ is the strength of the molecule-lead interaction:
\begin{equation}\label{gammal}
\Gamma_l = 2\pi \lvert V_l \rvert^2 \varrho_l~,
\end{equation}
where $\varrho_l$ is the (constant) density of states in the lead $l$ (we make use of this wide-band approximation throughout).
$K_\pm(\epsilon)$ are the molecular densities of states for the relevant processes.
As we shall demonstrate in Section \ref{deriv}, they are given by
\begin{multline}\label{gmm}
K_\pm (\epsilon) = \int_{-\infty}^\infty \mathrm{d} E \dfrac{1}{\sqrt{4\pi \lambda k_\mathrm{B} T}} \times \\ \exp\left(-\dfrac{[\lambda \pm (E - T \Delta S^\circ - \epsilon)]^2}{4\lambda k_\mathrm{B} T}\right) \dfrac{\Gamma}{(E-\varepsilon_0)^2 + \Gamma^2} ~,
\end{multline}
where $\lambda$ is the classical reorganisation energy, $\Gamma$ is the lifetime broadening, $\Gamma = (\Gamma_\mathrm{L} + \Gamma_\mathrm{R})/2$, and $\Delta S^\circ$ the entropy change associated with the considered heterogeneous electron transfer ($\Delta S^\circ$ typically takes negative values when charged species are produced in a polar solvent).
The entropic effects, which will be discussed below, arise from the presence of the $T \Delta S^\circ$ term in Eq.~\eqref{gmm} and, physically, stem predominantly from the changes in the solvent librational-rotational frequencies of the solvent which depend on the charge on the electroactive molecule in the junction (an effect omitted in all `spin-boson' treatments of electron transfer).\cite{marcus1985electron} We note that the entropic effects are therefore not accounted for in descriptions of molecular conduction which treat the nuclear environment quantum-mechanically. It is well-known however that they can play a significant role in electron transfer reactions in polar solvent environments.\cite{marcus1986relation,marcus1985electron,marcus1975electron}
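Equation~\eqref{gmm} is straightforward to evaluate by direct quadrature. The following sketch is our own illustration (energies in eV; the grid parameters `n` and `span` are numerical choices, not physics), and reproduces the Marcus and Landauer limiting behaviors discussed below:

```python
import math

def K_pm(eps, sign, eps0, lam, Gamma, kT, TdS=0.0, n=4001, span=12.0):
    """Eq. (gmm): Gaussian-Lorentzian convolution, by Riemann sum.

    sign = +1 gives K_+, sign = -1 gives K_-. The integral over E is
    taken over +/- span standard deviations of the Gaussian factor.
    """
    sigma = math.sqrt(2.0 * lam * kT)        # Gaussian std dev
    centre = eps + TdS - sign * lam          # where the exponent vanishes
    dE = 2.0 * span * sigma / (n - 1)
    norm = 1.0 / math.sqrt(4.0 * math.pi * lam * kT)
    total = 0.0
    for i in range(n):
        E = centre - span * sigma + i * dE
        gauss = norm * math.exp(
            -(lam + sign * (E - TdS - eps)) ** 2 / (4.0 * lam * kT))
        total += gauss * Gamma / ((E - eps0) ** 2 + Gamma ** 2) * dE
    return total
```

For $\Gamma \ll \sqrt{4\lambda k_\mathrm{B}T}$ the result approaches the Marcus Gaussian, while for $\lambda \to 0$ it collapses onto the Lorentzian of the Landauer picture.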
What is the physical meaning of the Eqs.~(\ref{rate1}) and (\ref{rate2})? The overall rate of electron transfer from the lead onto the molecule ($k_l$) can be understood as a sum of the rates for all the possible electron transfers from the continuum of donor states (the population of each of which is determined by the Fermi-Dirac distribution), and conversely for the rate of an electron transfer off the molecular system ($\bar{k}_l$).
$\Gamma_l$ in Eq.~\eqref{gammal}, in units of $\hbar$, is the well-known Golden Rule rate constant for electron transfer from the electronic state of the molecule into the electronic states of the lead, evaluated at the same energy. By microscopic reversibility the rate constant for the isoenergetic reverse step has the same value.
\subsection{Expression for the rate constant \label{deriv}}
In this section, we provide an intuitive derivation of the molecular densities of states $K_\pm(\epsilon)$ from the perspective of the classical theory of electron transfer. For a more rigorous derivation (which, however, omits the entropic term in Eq.~\eqref{gmm}) we refer the reader to our earlier work in Ref.~\onlinecite{sowa2018beyond}.
Let us consider a (non-adiabatic) electron transfer between a single band in a metallic lead $l$ (with electrochemical potential $\epsilon$) and the molecular level in question (with energy $\varepsilon_0$). According to the conventional theory of non-adiabatic electron transfer, the rate constant of this process is given by\cite{van1974nonadiabatic,ulstrup1975effect,kestner1974thermal,may2008charge}
\begin{equation} \label{ket}
k^\mathrm{ET} = \dfrac{2\pi}{\hbar} \lvert V_l \rvert^2 \ \mathrm{FCWD}~,
\end{equation}
where $V_l$ is the coupling matrix element and FCWD is the Franck-Condon-weighted density of states.
In what follows, we shall treat the vibrational dynamics classically as is done within Marcus theory\cite{marcus1956theory} (although a number of ways to include nuclear tunnelling in a Marcus-type description have been developed\cite{hopfield1974electron,jortner1976temperature,sowa2018beyond}). Later, we will also assume that the nuclear degrees of freedom are thermalized at all times. [We note that methods accounting for non-equilibrium vibrational effects in charge transfer and transport (while treating the vibrational environment classically) have also been developed.\cite{sumi1986dynamical,dou2018broadened,kirchberg2020charge}] In the classical limit, the FCWD is therefore given by:\cite{ulstrup1975effect,kestner1974thermal,may2008charge,marcus1985electron}
\begin{equation}
\mathrm{FCWD} = \dfrac{1}{\sqrt{4\pi\lambda k_\mathrm{B}T}} \exp \left(-\dfrac{[\lambda + (\Delta E - T \Delta S^\circ)]^2}{4\lambda k_\mathrm{B}T} \right) ~,
\end{equation}
where $\lambda$ is the reorganisation energy, and $\Delta E$ and $\Delta S^\circ$ are the energy and entropy differences between the `products' and the `reactants' of the considered process, respectively.
Here, we wish to account for the fact that due to the coupling to metallic leads, the state corresponding to the `products' has a finite lifetime (i.e.~is lifetime-broadened, see Fig.~\ref{origins}).
We therefore assume that the electronic state corresponding to the `products' comprises a continuum of states with the molecular density of states $\rho(E)$ such that:
\begin{equation}
\int_{-\infty}^\infty \mathrm{d}E \ \rho(E) = 1~.
\end{equation}
Then, the rate of electron transfer (between the single considered metallic band and the molecular energy level) is given by the integral:
\begin{equation} \label{knew}
k^\mathrm{ET} = \int_{-\infty}^\infty \mathrm{d}E \ \dfrac{2\pi}{\hbar} \ \lvert V_l \rvert^2 \ \mathrm{FCWD}(E) \ \rho(E)~,
\end{equation}
where the $\mathrm{FCWD}(E)$ is given by:
\begin{equation}
\mathrm{FCWD}(E) = \dfrac{1}{\sqrt{4\pi\lambda k_\mathrm{B}T}} \exp \left(-\dfrac{[\lambda + (E - T \Delta S^\circ - \epsilon)]^2}{4\lambda k_\mathrm{B}T} \right)~.
\end{equation}
In order to determine $\rho(E)$, let us consider the wavefunction $\psi(t)$ for the molecular energy level. It can be written (in units of $\hbar$) as:
\begin{equation}
\psi(t) = \theta(t) \left[ \exp(-\mathrm{i} \varepsilon_0 t)\right] \left[ \exp(-\Gamma t)\right] \psi(0) ~,
\end{equation}
where $\theta(t)$ is the Heaviside step function, $\Gamma/\hbar$ is the decay rate (inverse lifetime) of the state in question, and we assume that $\psi(0)$ is normalized.
In the energy space, the corresponding function can be obtained by means of a Fourier transform:
\begin{equation}
\phi(E) = \int_{-\infty}^\infty \mathrm{d} t \:\psi(t) \exp(\mathrm{i} E t) = \psi(0) /(\Gamma - \mathrm{i}(E-\varepsilon_0)) ~.
\end{equation}
The probability density $\rho(E)$ is proportional to $\lvert \phi(E)\rvert^2$, i.e.
\begin{equation}
\rho(E) = C^2 \lvert \phi(E)\rvert^2~,
\end{equation}
where $C$ is the normalisation factor. Since $\lvert \psi(0)\rvert^2 =1$, we obtain
\begin{equation}
\rho(E) = \dfrac{\Gamma}{ \pi[(E-\varepsilon_0)^2 + \Gamma^2]}~.
\end{equation}
The electron-transfer rate given in Eq.~\eqref{knew} therefore becomes
\begin{multline} \label{kkk}
k^\mathrm{ET} = \int_{-\infty}^\infty \mathrm{d}E \ \dfrac{2\pi}{\hbar} \ \lvert V_l \rvert^2 \ \dfrac{1}{\sqrt{4\pi\lambda k_\mathrm{B}T}} \times \\ \exp \left(-\dfrac{[\lambda + (E - T \Delta S^\circ - \epsilon)]^2}{4\lambda k_\mathrm{B}T} \right) \dfrac{\Gamma}{\pi[(E-\varepsilon_0)^2 + \Gamma^2]}~.
\end{multline}
The overall (effective) rate of electron transfer from the metallic electrode and onto the molecular level (or \textit{vice versa}) is simply a sum of the rates of individual electron transfers weighted by the Fermi distribution and the lead density of states, as described by Eqs.~\eqref{rate1} and \eqref{rate2}. From Eq.~\eqref{kkk} we therefore obtain the expression for $K_\pm(\epsilon)$ given in Eq.~\eqref{gmm}.
This result constitutes the basis of what we will refer to as a generalized theory (which shall be discussed in greater detail in Section \ref{gmarcus}).
\begin{figure}
\centering
\includegraphics{drawing233.eps}
\caption{Schematic illustration of the origins of the generalized theory. Parabolas describe the free energies of the reactants and products (of the considered electron transfer) as a function of the nuclear coordinate. Molecule-lead coupling results in broadening of the parabola corresponding to the $N+1$ charge state (M$^{-}$). Note that the shading does not show the dramatic effect that the Lorentzian tails can have on $K_\pm(\epsilon)$.}
\label{origins}
\end{figure}
\subsection{Landauer and Marcus limits \label{limitss}}
In this section we will demonstrate that the conventional Landauer and Marcus theories can be obtained as the limiting cases of the generalized theory.
As can be seen in Eq.~\eqref{gmm}, $K_\pm(\epsilon)$ in the generalized theory is given by a convolution of the Lorentzian and Gaussian profiles.
Let us first consider the case of vanishing reorganisation energy. Then, $\sqrt{4\lambda k_\mathrm{B}T}/\Gamma \rightarrow 0$, i.e.~the Gaussian profile in Eq.~\eqref{gmm} becomes very narrow as compared to the Lorentzian.
We also note that with vanishing reorganisation energy the $-T \Delta S^\circ$ term vanishes as well, since the polar solvent configurations have no time to change and contribute.
In this limit, therefore, the relevant Gaussian function becomes
\begin{equation}
\dfrac{1}{\sqrt{4\pi \lambda k_\mathrm{B} T}} \exp\left(-\dfrac{[\lambda \pm (E - T \Delta S^\circ - \epsilon)]^2}{4\lambda k_\mathrm{B} T}\right) \rightarrow \delta(E-\epsilon) ~,
\end{equation}
and Eq.~\eqref{gmm} simplifies to
\begin{equation} \label{lorentz}
K_\pm(\epsilon) = \dfrac{\Gamma}{(\epsilon - \varepsilon_0)^2 + \Gamma^2}~,
\end{equation}
where the molecular densities of states are identical for an electron transfer on and off the molecular system (microscopic reversibility). Inserting Eq.~\eqref{lorentz} into Eq.~\eqref{current} allows us to reduce the expression for electric current to the usual Landauer (Landauer-B\"uttiker) approach.\cite{landauer1957spatial,imry1999conductance,zimbovskaya2013transport,nitzan2006chemical,esposito2009transport} It becomes:
\begin{equation} \label{LB}
I = \dfrac{e}{\hbar} \int_{-\infty}^\infty \dfrac{\mathrm{d}\epsilon}{2\pi} \left( f_\mathrm{L}(\epsilon) - f_\mathrm{R}(\epsilon) \right) \mathcal{T}(\epsilon) ~,
\end{equation}
where $\mathcal{T}(\epsilon)$ is the transmission function, here given by a Breit-Wigner resonance:\cite{breit1936capture}
\begin{equation}
\mathcal{T}(\epsilon) = \dfrac{\Gamma_\mathrm{L} \Gamma_\mathrm{R}}{(\epsilon - \varepsilon_0)^2 + \Gamma^2}~.
\end{equation}
Furthermore, it is instructive to consider the Landauer approach in the limit of zero temperature and for a constant transmission function $\mathcal{T}(\epsilon) =\mathcal{T}$. Then, Eq.~\eqref{LB} becomes:
\begin{equation}
I = \dfrac{e}{h} (\mu_\mathrm{L} - \mu_\mathrm{R}) \mathcal{T} = \dfrac{e^2}{h} V_\mathrm{b} \mathcal{T} ~.
\end{equation}
Introducing an additional factor of two to account for the spin degeneracy of the considered level, we recover the celebrated Landauer formula for the electronic conductance:
\begin{equation}\label{glan}
G = \dfrac{\mathrm{d}I}{\mathrm{d}V_\mathrm{b}} = \dfrac{2e^2}{h} \mathcal{T}~,
\end{equation}
where $\mathcal{T}$ can vary between 0 and 1.\cite{landauer1957spatial,landauer1989conductance}
For completeness, an alternative derivation of Eq.~\eqref{glan} is given in the Appendix \ref{appL}.
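Numerically, with the SI-defined values of $e$ and $h$, the conductance quantum $2e^2/h$ appearing in Eq.~\eqref{glan} evaluates to roughly $77.5~\mu$S:

```python
# Conductance quantum 2e^2/h (SI units; e and h are exact defined constants)
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s
G0 = 2 * e ** 2 / h   # conductance of one fully open spin-degenerate channel, S
print(G0)             # ~7.748e-05 S, i.e. a resistance of ~12.9 kOhm
```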
Next, we consider Eq.~\eqref{gmm} in the limit when $\Gamma/\sqrt{4\lambda k_\mathrm{B}T} \rightarrow 0$, that is when the width of the Lorentzian profile is negligible compared to that of the Gaussian profile. Then,
\begin{equation}
\dfrac{\Gamma}{(E-\varepsilon_0)^2 + \Gamma^2} \rightarrow \pi \: \delta(E-\varepsilon_0)~,
\end{equation}
and $K_\pm(\epsilon)$ in Eq.~\eqref{gmm} take the familiar form:
\begin{equation} \label{marcus}
K_\pm(\epsilon) = \sqrt{\dfrac{\pi}{4\lambda k_{\mathrm{B}} T}} \exp\left( -\dfrac{[\lambda \pm ( \varepsilon_0 - T \Delta S^\circ - \epsilon)]^2}{4\lambda k_{\mathrm{B}} T}\right) ~.
\end{equation}
Together with Eqs.~\eqref{rate1} and \eqref{rate2}, Eq.~\eqref{marcus} constitutes Marcus (Marcus-Levich-Dogonadze-Hush-Chidsey-Gerischer) theory of transport.\cite{marcus1985electron,marcus1956theory,chidsey1991free,gosavi2000nonadiabatic,migliore2012relationship,migliore2011nonlinear}
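The Marcus rate of Eqs.~\eqref{rate1} and \eqref{marcus} can likewise be sketched numerically (our illustration; $\hbar = 1$, $\Delta S^\circ = 0$, energies in eV, and the window padding `span` is a numerical choice):

```python
import math

def marcus_rate_on(mu, eps0, lam, Gamma_l, kT, n=6001, span=1.5):
    """k_l of Eq. (rate1) with the Marcus density of states K_+ of Eq. (marcus).

    hbar = 1; the window covers mu and the Gaussian centre eps0 + lam.
    """
    lo = min(mu, eps0 + lam) - span
    hi = max(mu, eps0 + lam) + span
    de = (hi - lo) / (n - 1)
    pref = math.sqrt(math.pi / (4.0 * lam * kT))
    k = 0.0
    for i in range(n):
        eps = lo + i * de
        x = (eps - mu) / kT   # clamped Fermi factor to avoid exp() overflow
        f = 1.0 if x < -40.0 else (0.0 if x > 40.0 else 1.0 / (math.exp(x) + 1.0))
        K = pref * math.exp(-(lam + eps0 - eps) ** 2 / (4.0 * lam * kT))
        k += f * K * de
    return 2.0 * Gamma_l * k / (2.0 * math.pi)
```

Deep in the resonant regime ($\mu \gg \varepsilon_0 + \lambda$) the rate saturates at $\Gamma_l/\hbar$, consistent with the normalisation of $K_\pm$, while far below resonance it is exponentially suppressed.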
As was previously discussed,\cite{migliore2012relationship} Landauer and Marcus theories describe the opposite limits of charge transport mechanism. The former describes transport as a coherent process.
In the latter, meanwhile, it is assumed that before and following an electron transfer (from one of the metallic leads) the vibrational environment relaxes and the charge density localizes on the molecular system (until it tunnels out into the metallic lead).\footnote{It is interesting to note that a Gaussian profile is sometimes introduced \textit{ad hoc} into the Landauer framework in order to explain the experimentally-observed behavior.\cite{chen2017molecular}}
\subsection{Back to the generalized approach \label{gmarcus}}
Here, we return to the generalized theory derived in Section \ref{deriv} which, as we have shown above, unifies the conventional Marcus and Landauer theories of molecular conduction.
(We note that the performance of our generalized theory is yet to be validated in the intermediate regime, between the Landauer and Marcus limits, by a detailed comparison with exact quantum-mechanical calculation or experiment.)
As can be clearly seen in Eq.~\eqref{gmm}, the molecular densities of states $K_\pm(\epsilon)$ in the generalized theory are given by a Voigt function (a convolution of a Gaussian and a Lorentzian).\cite{armstrong1967spectrum}
It is instructive to consider Eq.~\eqref{gmm} far away from resonance, i.e.~when $\lvert \epsilon- \varepsilon_0 \rvert \gg \lambda, k_\mathrm{B}T, \Gamma$.
In this limit, the Lorentzian and Gaussian profiles in Eq.~\eqref{gmm} are centered very far apart from each other (on the $E$-axis) so that the wings of the Lorentzian are virtually constant over the width of the Gaussian profile. Consequently, the integral in Eq.~\eqref{gmm} returns simply the value of the Lorentzian profile (far away from the resonance).
Therefore, as we also more rigorously show in Appendix \ref{appA}, far away from resonance the $K_\pm(\epsilon)$ in Eq.~\eqref{gmm} can be approximated as:
\begin{equation} \label{approxmm}
K_\pm(\epsilon) \approx \dfrac{\Gamma}{(\epsilon- \varepsilon_0)^2}~.
\end{equation}
This is a significant result for several reasons. Firstly, we note that far from resonance $K_\pm (\epsilon)$ are independent of temperature. Furthermore, in the limit of $\lvert \epsilon- \varepsilon_0 \rvert \gg \Gamma$, the same expression can be obtained from the Landauer expression for $K_\pm(\epsilon)$ given in Eq.~\eqref{lorentz}.
Therefore, the generalized theory coincides with the Landauer approach not only for vanishing reorganisation energy (as we have previously discussed) but also far away from resonance:
in the deep off-resonant regime an interacting system can be approximated as a non-interacting one.
This result is in agreement with a multitude of experimental studies which, as discussed above, successfully modelled off-resonant transport using the Landauer approach.
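The quality of the approximation in Eq.~\eqref{approxmm} is easy to check numerically. In the sketch below (our illustration; $\Delta S^\circ = 0$, energies in eV) the parameters are chosen only so that $\lvert\epsilon - \varepsilon_0\rvert \gg \lambda, k_\mathrm{B}T, \Gamma$:

```python
import math

def K_plus(eps, eps0, lam, Gamma, kT, n=4001, span=12.0):
    """K_+ of Eq. (gmm) with Delta S = 0, by quadrature over the Gaussian."""
    sigma = math.sqrt(2.0 * lam * kT)
    centre = eps - lam                     # Gaussian centre for K_+
    dE = 2.0 * span * sigma / (n - 1)
    norm = 1.0 / math.sqrt(4.0 * math.pi * lam * kT)
    total = 0.0
    for i in range(n):
        E = centre - span * sigma + i * dE
        g = norm * math.exp(-(lam + E - eps) ** 2 / (4.0 * lam * kT))
        total += g * Gamma / ((E - eps0) ** 2 + Gamma ** 2) * dE
    return total

# 3 eV off resonance, with lam = 0.02 eV, Gamma = 5 meV, kT ~ 300 K
full = K_plus(3.0, 0.0, 0.02, 0.005, 0.0259)
approx = 0.005 / 3.0 ** 2    # Eq. (approxmm): Gamma / (eps - eps0)^2
```

For these parameters the full integral agrees with Eq.~\eqref{approxmm} to within a few per cent, and is essentially temperature-independent.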
Off-resonant charge transport is often the mechanism of conduction through molecular junctions especially at relatively low bias voltage and it is possible that it may also account for the long-range electron transport observed through DNA-based systems.\cite{beratan2017charge,kim2016intermediate,wierzbinski2013single,dauphin2019high}
In our previous work, we have studied the $IV$ characteristics and the thermoelectric response predicted by the generalized theory.\cite{sowa2018beyond,sowa2019marcus}
Here, we will explore the temperature-dependence of electric current predicted by this approach in various transport regimes, and compare it to that predicted by the conventional Landauer and Marcus approaches.
\begin{figure}
\centering
\includegraphics{rates1.eps}
\caption{Molecular densities of states $K_\pm(\epsilon)$ as present in the (i) Landauer approach [solid thick line], (ii) Marcus theory [solid lines], and (iii) generalized theory [dashed lines]. $K_\pm(\epsilon)$ were calculated for instructive values of $\lambda = 0.3$ eV and $\Gamma = 50$ meV at $T=300$ K. For simplicity, we also set $\Delta S^\circ = 0$.}
\label{fig2}
\end{figure}
\subsection{Some general remarks}
In summary, charge transport through a weakly-coupled molecular junction (modelled as a single electronic level) can be described as a series of electron transfers with the molecular densities of states taking a form of a Lorentzian (Landauer approach), Gaussian (Marcus theory), and Voigt functions (generalized theory), as in Fig.~\ref{fig2}.
We note that within all of these approaches\footnote{Naturally, this does not hold for the approximation of $K_\pm(\epsilon)$ given in Eq.~\eqref{approxmm}, which is valid only on part of the energy domain.}
\begin{equation} \label{norma}
2\int_{-\infty}^\infty \dfrac{\mathrm{d}\epsilon}{2\pi} K_\pm(\epsilon) = 1~,
\end{equation}
so that at very high bias $k_\mathrm{L} = \Gamma_\mathrm{L}/\hbar$, $\bar{k}_\mathrm{R} = \Gamma_\mathrm{R}/\hbar$, and $k_\mathrm{R}=\bar{k}_\mathrm{L} = 0$, or \textit{vice versa}.
Therefore, in the limit of very high bias voltage, we obtain the well-known value of electric current:
\begin{equation}
I = \dfrac{e}{\hbar} \: \dfrac{\Gamma_\mathrm{L}\Gamma_\mathrm{R}}{\Gamma_\mathrm{L}+\Gamma_\mathrm{R}}~,
\end{equation}
which is independent of the chosen theoretical approach (and so also of the strength of the vibrational coupling).
We again stress that all the theories discussed here assume the presence of only a single molecular electronic energy level (in each of the two considered charge states). They are therefore valid (in their presented form) at sufficiently low bias voltages such that the excited electronic states can be disregarded, and far away from the remaining charge degeneracy points (where populating charge states other than $N$ and $N+1$ becomes possible).
\subsection{Single-barrier model}
In the above, the molecular system within the junction was effectively modelled as a well potential with two tunnelling barriers -- one at each of the molecule-lead interfaces.
It is also worth mentioning another relatively simple theoretical model which is somewhat complementary to what has been discussed here. Namely, it is possible to approximately model the molecular junction as a single (typically trapezoidal) tunnelling barrier,\cite{choi2008electrical,beebe2006transition, wang2003mechanism, wold2001fabrication} and obtain the current-voltage characteristics using the Simmons model.\cite{simmons1963generalized} Within this approach, no additional charge density can localise on the molecule.
It does not therefore account for the reorganisation of the vibrational environment associated with the charging of the molecule in the junction and is typically justified only in a deep off-resonant regime.
This approach has been successfully used to account for the observed charge transport through molecular systems with high-lying molecular energy levels.\cite{choi2008electrical,beebe2006transition, wang2003mechanism, wold2001fabrication}
\section{Comparison of the conduction theories \label{3a}}
In this section, we explore the temperature-dependence of the electric current as predicted by the three approaches described above.
We first calculate the $IV$ characteristics for the energy level lying at $\varepsilon_0 = 0.5$ eV above the Fermi levels of the unbiased leads. Where appropriate, we set $\lambda = 0.3$ eV (cf.~Ref.~\cite{thomas2019understanding}), assume relatively weak and symmetric molecule-lead coupling: $\Gamma_\mathrm{L} = \Gamma_\mathrm{R} = 1$ meV and, for simplicity, set $T \Delta S^\circ = 0$. Experimentally, values of lifetime broadening from less than 1 $\mu$eV up to a few hundred meV have been observed.\cite{frisenda2016transition,thomas2019understanding,fung2019breaking,capozzi2015single} This large spread in the observed $\Gamma$ stems most likely from variations in the nature of molecule-lead contacts (the electronic coupling is typically assumed to decay exponentially with distance) as well as in the densities of states in the metallic electrodes (which depend on the exact atomic structure of the metallic tips).
The chemical potentials of the leads are determined by the applied bias voltage $V_\mathrm{b}$: $\mu_\mathrm{L} = -\lvert e\rvert \alpha V_\mathrm{b}$ and $\mu_\mathrm{R} = \lvert e\rvert (1-\alpha) V_\mathrm{b}$.
The parameter $\alpha$ accounts for how the potential difference is distributed between the left and right electrodes (and varies between 0 and 1), see Ref.~\cite{datta1997current} for a detailed discussion. In particular if $\alpha = 0.5$, the bias voltage is applied symmetrically resulting in a symmetric $IV$ curve. Otherwise, the bias is applied asymmetrically giving rise to current rectification (asymmetrical $IV$ characteristics).\cite{chen2017molecular,capozzi2015single}
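For example, with the parameter values used below ($\alpha = 0.9$, $\varepsilon_0 = 0.5$ eV), a bias of $V_\mathrm{b} = -0.8$ V gives $\mu_\mathrm{L} = 0.72$ eV and $\mu_\mathrm{R} = -0.08$ eV, placing the molecular level inside the bias window (resonant transport), whereas $V_\mathrm{b} = -0.4$ V gives $\mu_\mathrm{L} = 0.36$ eV, leaving the level outside it (near-resonant transport).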
We begin by calculating the $IV$ characteristics for $\alpha = 0.5$ and $\alpha =0.9$ in Fig.~\ref{IVs}(a) and (b), respectively. All of them exhibit the expected behavior (for a single-level model): a region of suppressed current at low bias voltage (where the molecular energy level is found outside of the bias window) followed by a rise in current and an eventual plateau in the deep resonant regime.
In the presence of electron-vibrational interactions (i.e.~within the Marcus and generalized approaches), we can observe lower values of current as the molecular energy level enters the bias window. This is fundamentally an example of a Franck-Condon blockade.\cite{koch2005franck,bevan2018relating}
Furthermore, due to the relatively small $\Gamma$, the Marcus and generalized theories predict seemingly very similar behavior. As we shall demonstrate (\textit{vide infra}), the differences between these approaches become appreciable in the off-resonant transport regime.
\begin{figure}[h]
\centering
\includegraphics{figure_4_newww.eps}
\caption{$IV$ characteristics calculated using the Landauer, Marcus and generalized approaches for: (a) $\alpha = 0.5$, and (b) $\alpha = 0.9$. We set the position of the molecular level above the Fermi level of the unbiased leads $\varepsilon_0 = 0.5$ eV, $\Gamma_\mathrm{L} = \Gamma_\mathrm{R} = 1$ meV, $\lambda = 0.3$ eV (in Marcus and generalized approaches), and $T=300$ K. The shaded area marks the off-resonant regime (when the molecular level lies outside of the bias window). Note that Marcus and generalized theory curves appear to closely overlap in the resonant transport regime. }
\label{IVs}
\end{figure}
We now turn to examine the temperature dependence of the electric current as predicted by the three approaches considered here.
This is done in Fig.~\ref{temp_dep} which shows the electric current as a function of temperature (on an Arrhenius plot) for different values of the bias voltage. We consider current at four different bias voltages [as marked by arrows in Fig.~\ref{IVs}(b)], initially disregarding the entropic effects ($\Delta S^o = 0$).
\begin{figure*}[ht]
\centering
\includegraphics{temp_dep2.eps}
\caption{Arrhenius plots of electric current [$\mathrm{log}_{10}(I)$ \textit{vs.}~$1/T$] at bias voltage $V_\mathrm{b}=\{1,-0.4,-0.8,-1.2\}$ V as a function of temperature. Other parameters as in Fig.~\ref{IVs}(b): $\alpha =0.9$, $\Gamma_\mathrm{L} = \Gamma_\mathrm{R} = 1$ meV, $\lambda = 0.3$ eV. Left panels schematically show the relative positions of the molecular energy level and the chemical potentials of the leads (for clarity, broadening of the Fermi distributions in the leads is not shown).}
\label{temp_dep}
\end{figure*}
Within the Landauer approach, the temperature dependence of the electric current stems solely from the temperature dependence of the Fermi distributions in the leads. Consequently, the electric current is almost independent of temperature when the molecular energy level lies far away from the bias window [Fig.~\ref{temp_dep}(a)], increases with temperature in the case of near-resonant transport [Fig.~\ref{temp_dep}(b)], and decreases with increasing temperature in the resonant transport regime [Figs.~\ref{temp_dep}(c) and (d)] although this effect can be relatively modest.
In contrast, within the Marcus approach, the temperature-dependence is determined by both the temperature dependence of the Fermi distributions in the leads and that of the Marcus rates in Eq.~\eqref{marcus}.
The latter contribution typically dominates and usually exhibits an exponential dependence on inverse temperature. Indeed, we observe an Arrhenius-type behavior in the far off-resonant scenario [Fig.~\ref{temp_dep}(a)]: electric current depends exponentially on inverse temperature and is greatly suppressed, as compared to that predicted by the Landauer theory. The same is true in the near-resonant case [Fig.~\ref{temp_dep}(b)].
In the resonant regime, the electric current increases (in an Arrhenius-type fashion) with temperature as long as the chemical potential of the left lead satisfies $\mu_\mathrm{L} < \varepsilon_0 + \lambda$ [Fig.~\ref{temp_dep}(c)].\cite{migliore2012relationship}
In the deep resonant regime (for $\mu_\mathrm{L} > \varepsilon_0 + \lambda$), broadening of both the Fermi distributions in the leads and the molecular densities of states $K_\pm(\epsilon)$ leads to a modest decrease in current with increasing temperature [Fig.~\ref{temp_dep}(d)].
Finally, we consider the generalized theory. Within this approach, the temperature dependence of electric current once again stems from the broadening of the Fermi distributions as well as temperature dependence of the electron transfer rates. The temperature dependence of $K_\pm(\epsilon)$ given in Eq.~\eqref{gmm} is, however, rather non-trivial.
In the deep off-resonant regime [Fig.~\ref{temp_dep}(a)], electric current is virtually independent of temperature and takes values similar to those predicted by the Landauer approach, see discussion in Section \ref{gmarcus}.
In the near-resonant case [Fig.~\ref{temp_dep}(b)], electric current generally increases with increasing temperature although in a non-linear fashion different from what is predicted by both the Landauer and Marcus transport theories.
Conversely, in the resonant regime the predictions of the generalized theory closely coincide with those of the conventional Marcus theory.
These results illustrate the fact that both the Landauer and Marcus theories can be used to describe charge transport through molecular junctions in their respective regimes of applicability. As discussed above, these different regimes may even correspond to different ranges of bias voltage for the same molecular junction.
\section{Entropic effects}
We next investigate the role of entropic effects in molecular conduction. In accordance with previous experimental studies of electron transfer in polar solvents,\cite{marrosu1990reaction,komaguchi1991entropy,svaan1984temperature} we set $\Delta S^o = -40$ J K$^{-1}$ mol$^{-1}$ (which corresponds to roughly -0.41 meV K$^{-1}$) unless stated otherwise.
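For reference, at $T = 300$ K this value corresponds to an entropic contribution of $T\Delta S^o \approx -0.41\ \mathrm{meV\,K^{-1}} \times 300\ \mathrm{K} \approx -0.12$ eV, i.e.~an effective shift of the molecular level amounting to a sizeable fraction of the reorganisation energy $\lambda = 0.3$ eV used here.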
First, in Fig.~\ref{fentropy}(a), using our generalized theory, we calculate the $IV$ characteristics obtained for $\Delta S^o = 0$ and $-40$ J K$^{-1}$ mol$^{-1}$ and at different temperatures. The current steps, present in the $IV$ characteristics when the molecular energy level falls into the bias window, are significantly shifted for non-zero $\Delta S^o$.
Furthermore, in the presence of entropic effects, the magnitudes of those shifts increase with temperature, while for $\Delta S^o = 0$ increasing temperature leads solely to a broadening of the $IV$ characteristics.
In the resonant (high-current) region, qualitatively identical behavior is also predicted by the conventional Marcus theory (not shown).
The origin of both of these effects can be understood using Eq.~\eqref{gmm}: the inclusion of entropic effects corresponds to an effective (and temperature-dependent) renormalization of the position of the molecular energy level. For negative $\Delta S^o$, this results in a shift of the current step toward higher values of bias voltage (a shift in the opposite direction will be observed in the case of transport through a level found below the Fermi level of the unbiased leads).
From Eqs.~\eqref{gmm} and \eqref{marcus}, it can be inferred that strong entropic effects should be expected when $\lambda + (\varepsilon_0 - \epsilon) =0$.
It can indeed be seen in Fig.~\ref{fentropy}(a) that the inclusion of negative $\Delta S^o$ leads to a negative temperature coefficient of the current (decreasing current with increasing temperature) in the resonant regime. An analogous negative temperature coefficient has been seen experimentally in charge recombination electron transfer reactions in polar liquids when the intrinsic barrier to reaction is small, and has been discussed in the literature.\cite{marcus1985electron,marcus1975electron} The decrease of electric current with increasing temperature can occur when $\Delta S^o$ is negative and the molecular energy level is found above the Fermi levels of the unbiased leads, or when $\Delta S^o$ is positive and the molecular energy level is found below the Fermi levels of the electrodes.
We also note that the qualitative behavior of the electric current in the resonant regime (as a function of temperature) could be used to experimentally determine the sign of $\Delta S^o$.
\begin{figure}
\centering
\includegraphics{entropyy.eps}
\caption{(a) $IV$ characteristics calculated at different temperatures for $\Delta S^o = -40$ and $0$ J K$^{-1}$ mol$^{-1}$. Other parameters as in Fig.~\ref{temp_dep}. (b, c) Temperature dependence of the electric current at $V_b = +0.5$ V calculated using the (b) generalized and (c) conventional Marcus theory.}
\label{fentropy}
\end{figure}
In Figs.~\ref{fentropy}(b) and (c), we further consider the temperature dependence of the electric current in the off-resonant regime using the generalized and conventional Marcus theory, respectively (we do not consider here the Landauer theory since it disregards the environmental interactions). In the presence of negative $\Delta S^o$, we observe lower values of electric current through the junction (once again due to the temperature-dependent shift of the effective position of the molecular level).
The electric current predicted by the generalized theory [Fig.~\ref{fentropy}(b)] exhibits only a fairly weak temperature dependence, in accordance with the previous discussion. In the case of non-zero $\Delta S^o$, the current, rather unusually, decreases with increasing $T$, as the temperature dependence is dominated by the entropic effect. This can again be explained by the effective renormalization of the position of the molecular level by the entropic term.
On the other hand, within the conventional Marcus theory [Fig.~\ref{fentropy}(c)], we once again observe Arrhenius-type characteristics. Unlike the magnitude of the current, its temperature-dependent behavior is not significantly affected by the entropic effects.
In summary, entropic effects (of a realistic magnitude) can result in an unusual temperature-dependent behavior of the electric current. A negative temperature coefficient, in particular, may serve as an indication of this phenomenon in experimental studies on solvated molecular junctions.
\section{Marcus-Levich-Dogonadze-Jortner description \label{jortner}}
Thus far, the entire vibrational environment was treated classically.
It is well-known, however, that the high-temperature assumption of Marcus theory is generally not valid at around room temperature for the high-frequency molecular modes. These modes should be treated quantum-mechanically in order to obtain a qualitative agreement with the experimental studies.\cite{miller1984effect,closs1986distance} This need motivated Jortner and coworkers to develop an extension of the classical Marcus theory, known as the Marcus-Levich-Dogonadze-Jortner theory.\cite{jortner1976temperature,ulstrup1975effect} Within this approach, the molecular vibrational environment is divided into two components: the low-frequency part typically associated with the outer-sphere environment, and the high-frequency part represented by a single effective mode of frequency $\omega_0$. This effective high-frequency mode typically represents molecular vibrational modes corresponding to carbon-carbon and carbon-oxygen double-bond stretches (ubiquitous to most organic structures) and has a frequency of roughly 190 meV ($\sim 1500$ cm$^{-1}$). Then, the rate of electron transfer is given by Eq.~\eqref{ket} with the Franck-Condon-weighted density of states\cite{jortner1976temperature}
\begin{multline}\label{fcwdj}
\mathrm{FCWD} = \dfrac{1}{\sqrt{4\pi \lambda_\mathrm{out} k_\mathrm{B} T}}\sum_{m=0}^\infty e^{-D} \dfrac{D^m}{m!} \\ \exp\left(- \dfrac{[\Delta E - T \Delta S^\circ + \lambda_\mathrm{out} + m \omega_0]^2}{4\lambda_\mathrm{out} k_\mathrm{B} T}\right)~,
\end{multline}
where $\lambda_\mathrm{out}$ is the outer-sphere reorganisation energy and $D$ is the Huang-Rhys parameter for the coupling to the effective high-frequency vibrational mode
\begin{equation}
D = \dfrac{\lambda_\mathrm{in}}{\omega_0}~,
\end{equation}
where $\lambda_\mathrm{in}$ is the corresponding reorganisation energy.
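For example, for the inner-sphere parameters used in Fig.~\ref{fig4} ($D = 1.9$ and $\omega_0 = 190$ meV), the corresponding inner-sphere reorganisation energy is $\lambda_\mathrm{in} = D\omega_0 \approx 0.36$ eV.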
Marcus-Levich-Dogonadze-Jortner theory (in its original formulation as well as its multi-mode extension) has become the most commonly used way to introduce nuclear tunnelling into the description of electron transfer.\cite{barbara1996contemporary}
We recall that in the conventional Levich-Dogonadze treatment, and in all similar quantum-mechanical treatments, the description of the medium in which the charges exist does not contain a $\Delta S^o$ term because of the assumptions tacitly made in treating the environment quantum mechanically.
It is also possible to adapt this theory in the transport setting considered here and incorporate lifetime broadening into this framework.
Using Eq.~\eqref{knew} and the $\mathrm{FCWD}$ factor given in Eq.~\eqref{fcwdj}, the relevant densities of states are given by:
\begin{multline} \label{jortnerlife}
K_\pm(\epsilon) = \sqrt{\dfrac{\pi}{4 \lambda_\mathrm{out} k_\mathrm{B}T}}\sum_{m=0}^\infty e^{-D} \dfrac{D^m}{m!} \times \int_{-\infty}^\infty \mathrm{d} E \\ \exp\left(-\dfrac{[(\lambda_\mathrm{out} + m \omega_0) \pm (E - T \Delta S^\circ - \epsilon)]^2}{4\lambda_\mathrm{out} k_\mathrm{B} T}\right) \times \dfrac{\Gamma}{(E - \varepsilon_0)^2 + \Gamma^2} ~.
\end{multline}
This constitutes what we shall refer to as the generalized Marcus-Levich-Dogonadze-Jortner (gMLDJ) theory.
\begin{figure}
\centering
\includegraphics{generalised_rates.eps}
\caption{Molecular densities of states $K_\pm(\epsilon)$ calculated for $\Gamma = 5$ meV, $\lambda_\mathrm{out} = 150$ meV, $D = 1.9$, $\omega_0 = 190$ meV (gMLDJ), and $\lambda = \lambda_\mathrm{out} + D \omega_0$ (generalized theory) at $T=300$ K. For simplicity, $\Delta S^\circ = 0$.}
\label{fig4}
\end{figure}
In Fig.~\ref{fig4}, we plot the molecular densities of states $K_\pm(\epsilon)$ obtained using the generalized Marcus and generalized MLDJ approaches. The latter clearly shows a set of equidistant peaks separated by $\omega_0$ which correspond to the excitations of the (effective) high-frequency molecular mode.
Since this high-frequency vibrational mode constitutes a somewhat phenomenological description of the inner-sphere environment (which in reality comprises a set of vibrational modes), the presence of these equally-spaced conductance peaks is an artefact of the Marcus-Levich-Dogonadze-Jortner approach.
Furthermore, the Marcus-Levich-Dogonadze-Jortner approach predicts a much larger magnitude of $K_\pm(\epsilon)$ (as compared to the classical Marcus rates) for both smaller and larger values of $\lvert\epsilon -\varepsilon_0\rvert$, a direct result of incorporating nuclear tunnelling in the Marcus-Levich-Dogonadze-Jortner theory.
All these aspects of Marcus-Levich-Dogonadze-Jortner theory have long been well-understood.\cite{barbara1996contemporary} We note that nuclear tunneling is much more important in the inverted regime than in the normal regime.
In analogy to what was discussed in Section \ref{limitss}, by setting $\lambda_\mathrm{out} = \lambda_\mathrm{in} = 0$ in the gMLDJ theory we again recover the Landauer description of transport.
Once again, lifetime broadening becomes especially relevant in the off-resonant regime of transport.
Qualitatively, the behavior predicted by this approach in the off-resonant regime coincides with that of the generalized theory:
the inclusion of lifetime broadening results in an increased electric current with a very weak temperature dependence, cf.~Section \ref{3a}.
Finally, we note that lifetime broadening can also be introduced in the multi-mode extension of Marcus-Levich-Dogonadze-Jortner theory (where it would normally be necessary to calculate the Huang-Rhys factor for each of the molecular modes).\cite{ulstrup1975effect,jortner1976temperature}
This modification would lead, however, to an even more complicated expression and we see little advantage in using such an approach in practical applications (as opposed to, for instance, the generalized-quantum-master-equation result of Ref.~\onlinecite{sowa2018beyond}).
\section{Concluding Remarks \label{end}}
In this work, we first focused on the recently-derived generalized theory.
We have presented an intuitive derivation of this approach, showed how entropic effects can be incorporated into that formalism, and demonstrated how the conventional Landauer and Marcus approaches can be obtained as limiting cases of this more general approach.
We have further demonstrated that (for relatively weak molecule-lead coupling) the predictions of the generalized theory coincide very well with those of Landauer and Marcus theories in the off-resonant and resonant regime, respectively.
Consequently, we believe that the generalized theory correctly describes transport properties of molecular junctions across the entire experimentally-accessible domain (i.e.~in both the resonant and off-resonant regime; provided the high-temperature assumption of Marcus theory is justified).
We have also studied the influence and identified experimental signatures of entropic effects in the molecular electronic conduction in different transport regimes.
Finally, in Section \ref{jortner}, we have shown how lifetime broadening can be introduced into Marcus-Levich-Dogonadze-Jortner theory.
The theory presented here can be also extended beyond the single-level model and thus introduce lifetime-broadening effects into the rate-equation descriptions\cite{migliore2011nonlinear} of multi-level molecular junctions.\footnote{However, an \textit{ad hoc} replacement of the molecule-lead hopping rates (in the conventional rate equation or quantum master equation approaches) with the expressions developed here will yield a theory that does not recover the exact Landauer result for molecular systems with more than one site.}
Our hope is that this work will inspire a wide use of the theory described here in experimental studies on molecular junctions as well as stimulate empirical exploration of entropic effects in these systems.
\begin{acknowledgements}
JKS thanks Hertford College, Oxford for financial support, and L. MacGregor for carefully reading the manuscript.
RAM thanks the Office of the Naval Research and the Army Research Office for their support of this research.
\end{acknowledgements}
\section*{Data Availability Statement}
The data that support the findings of this study are available within the article itself.
Q: Why can't I use public/private key authentication with ssh on Arch Linux? I have the following setup on an Ubuntu machine:
~/dotfiles/authorized_keys2
~/.ssh/authorized_keys2 -> /home/wayne/dotfiles/authorized_keys2
I had the same setup on my Arch machine, but when I connect with -v,
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /home/wayne/.ssh/id_rsa
debug1: Authentications that can continue: publickey,password
I found this page on the Arch Wiki, which has this line:
$ chmod 600 ~/.ssh/authorized_keys
So I added another symlink:
authorized_keys -> /home/wayne/dotfiles/authorized_keys2
And yet still, no dice. And yes, I have ensured that the correct key is present in authorized_keys.
Why can I not connect using my keys?
Edit:
My permissions are set correctly on my home and ssh folders (and key file):
drwxr-x--x 150 wayne family 13k Aug 27 07:38 wayne/
drwx------ 2 wayne family 4.1k Aug 27 07:24 .ssh/
-rw------- 1 wayne family 6.4k Aug 20 07:01 authorized_keys2
A: The permissions on your authorized_keys file and the directories leading to it must be sufficiently restrictive: they must be only writable by you or root (recent versions of OpenSSH also allow them to be group-writable if you are the single user in that group). See Why am I still getting a password prompt with ssh with public key authentication? for the full story.
In your case, authorized_keys is a symbolic link. As of OpenSSH 5.9 (I haven't checked other versions), in that case, the server checks the permissions leading to the ultimate target of the symbolic link, with all intermediate symbolic links expanded (the canonical path). Assuming that all components of /home/wayne/dotfiles/authorized_keys2 are directories except for the last one, which is a regular file, OpenSSH checks the permissions of /home/wayne, /home/wayne/dotfiles and /home/wayne/dotfiles/authorized_keys2.
If you have root access on the server, check the server logs for a message of the form bad ownership or modes for ….
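As a concrete sketch of the fix (the directory layout below mirrors the question; `$demo` stands in for `$HOME`, and the exact modes shown are one common safe choice, not the only valid one):

```shell
#!/bin/sh
# Demonstration on a scratch directory; in real use replace "$demo" with "$HOME".
demo=$(mktemp -d)
mkdir -p "$demo/dotfiles" "$demo/.ssh"
touch "$demo/dotfiles/authorized_keys2"
ln -s "$demo/dotfiles/authorized_keys2" "$demo/.ssh/authorized_keys"

# sshd checks every component of the canonical path, so tighten all of them:
chmod 755 "$demo"                             # home: not group/world-writable
chmod 700 "$demo/.ssh"
chmod 600 "$demo/dotfiles/authorized_keys2"   # the symlink target

# Verify: print the octal mode of each component (GNU stat)
stat -c '%a %n' "$demo" "$demo/.ssh" "$demo/dotfiles/authorized_keys2"
```

Note that the intermediate dotfiles directory counts as a component of the canonical path too, so it must not be group- or world-writable either.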
A: I had the same issue; it was resolved by changing the permissions of the /home/user directory, which were not correct. It should be chmod 755.
A: If SELinux is set to enforcing, and the canonical path to your authorized_keys file has a symlink for any of the directories, it will fail. You need to set SELinux to disabled.
Q: How to use D3.js to draw this graph I saw a graph that can efficiently present the relations between caller and callee (the functions that are called). The tree-like diagram makes the calling frequency and hierarchy much clearer.
... I know how to use D3 to draw a "Indented Tree" but don't know how to draw that tree.
I have some JSON data that contains "name"/"child"/"caller name", and "child" is a list that contains all children.
"name": "eval",
"caller": "__main__",
"children": [
{
"name": "zend_compile_string",
"caller": "eval",
"children": [],
"call_num": 1,
"all_index": "[17]"
}
I sincerely want to know how to draw a line between parent and child nodes in this graph. One line links one parent and one child, and the parent node is at the upper level while the child node is at the lower level.
Thanks!!!!!
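For what it's worth, the link-drawing logic can be sketched in plain JavaScript with no D3 dependency (the node names below come from the JSON above; the coordinates and spacing are arbitrary choices). In D3 itself, d3.hierarchy() plus d3.tree() compute equivalent x/y positions for you, after which each parent/child pair becomes one SVG line or path:

```javascript
// Assign each node a depth-based y coordinate and an in-order x coordinate,
// then emit one SVG <line> per parent/child pair.
var tree = {
  name: 'eval',
  caller: '__main__',
  children: [
    { name: 'zend_compile_string', caller: 'eval', children: [] }
  ]
};

var leafIndex = 0;
function layout(node, depth) {
  node.y = depth * 80;                      // one row per hierarchy level
  if (!node.children || node.children.length === 0) {
    node.x = leafIndex++ * 100;             // spread leaves horizontally
  } else {
    node.children.forEach(function (c) { layout(c, depth + 1); });
    // center a parent above its children
    node.x = (node.children[0].x + node.children[node.children.length - 1].x) / 2;
  }
}

function links(node, out) {
  (node.children || []).forEach(function (c) {
    out.push('<line x1="' + node.x + '" y1="' + node.y +
             '" x2="' + c.x + '" y2="' + c.y + '" stroke="#555"/>');
    links(c, out);
  });
  return out;
}

layout(tree, 0);
console.log(links(tree, []).join('\n'));
```

With D3 proper, the same links are typically drawn by binding root.links() (from d3-hierarchy) to line or path elements.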
Q: Difference between CMSIS and ASF I'm trying to make a C++ project using Atmel ASF.
The compiler has:
C:\Program Files (x86)\Atmel\Studio\7.0\Packs\atmel\SAM4E_DFP\1.1.57\ic.sam4e\include\component\....
but in ASF we have different files, which are
src\ASF\sam\utils\cmsis\sam4e\include\component\
At the beginning I thought they were the same, but now I'm confused. What is the difference?
Is one of them from the standard CMSIS stuff related to the Cortex-M0+ and the other something defined by Atmel? And in a C++ project do they just give me the original CMSIS? (If I don't make sense here then maybe I have a big misunderstanding of a lot of things.)
* I believe Atmel ASF is drivers and pre-made code, HALs etc.? That is, fairly high-level code. CMSIS is the low-level standardized "CRT" that sets up the basic environment out of reset. Often quite badly written. – Lundin
* CMSIS badly written? What do you mean? The paths above are low-level definitions of the registers of components, but they differ: the ones in CMSIS have reserved registers and the ones in ASF are detailed. – Hasan alattar
* It is quite common that they have a SystemInit which picks an on-chip RC oscillator as the default clock, and then run the rest of the setup, including time-consuming .data and .bss initialization, with that bad clock. And then the desired oscillator settings don't kick in before they are run from application code. That's an amateur setup. – Lundin
* Well, I think they leave that for the user to edit. They run the RC oscillator as it's built into the chip; that's something they know about, but they choose it so you can test your first code with a correct timing setup. But what I don't get is the components in CMSIS and ASF: they are different. – Hasan alattar
* @Lundin SystemInit is a vendor-specific call out of reset. Have a look at the startup files on the GitHub repository: it's called by Reset_Handler, but the content isn't part of CMSIS. – awjlogan

A: Basically, CMSIS is a vendor-independent interface for some ARM Cortex chips, while ASF is a restricted library for Atmel (now owned by Microchip) devices.

ARM Primer

ARM Cortex is the core processor inside microcontrollers from multiple vendors. It includes core components like SysTick, the NVIC (interrupt controller), and the Micro Trace Buffer (MTB). This core is licensed to vendors like Atmel that design microcontrollers around it, adding things like memory and peripherals (SERCOM, USB, etc.). The SAMD21 is an Atmel SAM microcontroller designed around an ARM Cortex-M0+ processor. You, as the programmer, have access to features of both.

CMSIS - Cortex Microcontroller Software Interface Standard

This is a hardware-abstraction layer (HAL) for Cortex-M and Cortex-A processors. It works with chips from Atmel, NXP, Freescale, etc. This standard includes low-level interfaces for using core components and lays out a template for creating hardware-dependent (per-vendor) libraries as well. The aim is to create an interface which enables code to be portable across numerous chips. In other words, you can reuse the same code between chips from any vendor.

Pros: supports vendor-independent, portable code
Cons: difficult to achieve true vendor independence, easy to screw up dependencies, tries to do too much and is fairly clunky and convoluted

ASF - Advanced (was Atmel) Software Framework

This is an enormous code library that (should) work with any of Atmel's AVR or ARM microcontrollers, as well as development boards like the Xplained Pro series. See the link for the supported devices. The point is to provide working code for chip peripherals so developers don't have to redesign the wheel each time, providing a (mostly) similar interface across numerous devices.

Pros: quicker development, familiar interfaces, lots of starter projects and examples
Cons: extremely bloated and error-prone code, trying too hard to be like Arduino

Crossover

Since the goal of CMSIS is to be vendor-independent over a few processor lines, but ASF is vendor-dependent yet product-independent across multiple processors, the two don't fit together well. Of course, ASF uses CMSIS libs when it needs to, but it doesn't follow the standard. The biggest problem I've had with ASF is that it is overly abstracted, especially for low-resource microcontrollers where you need to know what is going on in the metal.

Creating a new project in Atmel Studio to blink an LED will use an obscene amount of memory space, pulling in dozens of header and source files that do trivial operations, stringing them together in an extremely convoluted manner. Because the libraries need to do everything someone might want, they often do a lot of stuff you don't want them to be doing. Keep in mind, much of the code was developed by low-wage and inexperienced interns, and is not polished or efficient. You don't have to spend much time on AVR Freaks to encounter people with insight as to what goes on behind the scenes.

That said, if you want to design your own library, it's useful to look through the ASF example code to see how they initialize peripherals, as it's easy to overlook some step, especially with ARM chips that have multiple clocks, generators, and so on. You can then create a platform-abstraction layer interface that can, for example, enable and use the serial port on any chip by communicating with the vendor-specific code for a particular chip.

Consider these layers of code:

[ Application Programming Interface (API) ]
[ Vendor-Specific Implementation ]
[ Core Processor Implementation ]

The top layer (API) is what you put in your code. This would be things like

SERIAL_enable(port1, 9600, settings);
SERIAL_send(port1, "Hello, World!");

The middle layer defines those SERIAL functions, which likely need to talk to some core processor component which is defined in the bottom layer. If you are following CMSIS, the middle layer can be swapped out for different platforms like the Atmel SAMD21 and NXP LPC810M, as long as each has a serial port. This would typically be done by some Makefile magic, passing a defined architecture parameter during compilation and linking.

Keep in mind, abstraction is a fundamental principle of software development, but can be your biggest enemy when you need to create efficient code on low-resource devices.
'use strict';
var rest = require('restler');
require('shelljs/global');

// remove old repos, if any
mkdir('-p', process.env.HOME + '/Desktop/github');
rm('-rf', process.env.HOME + '/Desktop/github/*');

var api_url = 'https://api.github.com/users/' + process.env.GITHUB_USER + '/starred';
console.log('start cloning the repos...');

var args = {
  headers: {
    'User-Agent': 'eggcaker',
    'Content-Type': 'application/json',
    // the "star+json" media type includes the starred_at timestamp
    'Accept': 'application/vnd.github.v3.star+json'
  }
};

rest.get(api_url, args).on('complete', function(data) {
  // build a YYYY-MM-DD string for yesterday; subtracting a day in
  // milliseconds avoids the month/year rollover bug of getDate() - 1
  var ts_hms = new Date(Date.now() - 24 * 60 * 60 * 1000);
  var today = ts_hms.getFullYear() + '-' +
      ('0' + (ts_hms.getMonth() + 1)).slice(-2) + '-' +
      ('0' + ts_hms.getDate()).slice(-2);

  data.forEach(function(item) {
    // clone only the repos starred on that day
    if (item.starred_at.indexOf(today) > -1) {
      exec('cd ~/Desktop/github; hub clone ' + item.repo.full_name, function(status, output) {
        if (status !== 0) {
          console.log('error: ' + status);
        }
      });
    }
  });
});
_Vampire Vow_ copyright (C) Michael Schiefelbein, 2001
All rights reserved.
Published as an ebook in 2013 by Jabberwocky Literary Agency, Inc.
Cover art by John Fisk.
ISBN: 9781625670083
# For Gary
# TABLE OF CONTENTS
Title Page
Copyright
Dedication
Part I: Baptism Into Blood
Part II: The Cloister
Part III: Communications
Part IV: Clearing the Path
Part V: The Beloved
Part VI: The Proposal
Part VII: Revelation
Part VIII: The Storm
Part IX: Night Again
Also by Michael Schiefelbein
Thank You from JABberwocky
# I
Baptism Into Blood
# One
I wanted Jesus. That's how it started. Yes, the Jesus they built a religion on, the one they say rose from the dead. (I should be the last creature in the world to doubt that.)
There we were, on a quiet, stony hilltop overlooking the city, the stars above us like light through pinpricks in black velvet. Just he and I, years before the 12 dolts who formed his entourage. We huddled next to each other as we often did, and I finally asked him.
"Joshu," I whispered--that was my name for him. "Why resist this? You're always talking about the vanity of human law, about wanting to strike out against the old order."
He looked troubled, a young man of 23 still as idealistic as his disgustingly naive, dull Nazarene parents, who actually believed me when I told them I was a Jew--I, a Roman officer serving under Pilate!
"I'm troubled," he said. He leaned back on his hands. The moonlight washed over his lean form, his fine brow betraying his sensuality as much as it did his intelligence.
"What's to be troubled about? Love? That's what you rant about all the time, isn't it? Love must replace the legalism of the priests." I stroked his cheek, the smooth cheek of an unmarried Jew boy.
He took my hand and kissed it. "You know what place you have in my heart. You are the earth itself to me. But the earth is passing--"
"This is pious trash." I jerked my hand away. "The truth is you lack the boldness to act by your beliefs. You're not a man of action. You're a poet brainwashed by the Essene fanatics walled up in their caves. All this business about the end of this age, the plight of the complacent priests. Then when it comes to a radical move--"
"You're talking about forbidden relations."
"Forbidden to whom?" I asked. I could hear him weakening, see him eyeing my strong calves, my bulky thighs. It was hot and I had thrown off my tunic to tempt him. Nature had given me a square jaw, a cleft chin, a dark mane, eyes that could bring a vestal virgin to her knees--and a cock that could keep her there. My meaty physique came from years spent in the Emperor's training rooms.
"The point is," I said, "we've sworn allegiance to each other. We meet on this craggy hillside night after night. I listen to all these dreams of yours about a kingdom of god. Your god. The stuff of sedition, I might add. We race along the river, buck-naked. We even bathe each other! Ours is only the ultimate bond."
Right there, the Jew temple of Jerusalem beneath us, I swore to myself that I would finally enter him--the boy prophet, the ultimate challenge, my obsession. I would enter him the way I entered the Emperor's gates after a campaign: invincible, majestic. But hailed by his groans rather than by the cries of banner-waving masses.
I reached beneath his robe.
He pulled away. "No, Victor." His voice was not without regret. "I'm not ready to throw out the law of my fathers. This cannot happen."
This was the last time we met there.
Soon afterward I received a message from him: _I must give myself to no earthly man, only to my Father in heaven, for whose coming kingdom I must prepare. For the sake of this, we must meet no more._
I persisted. I who had taken whomever I chose until that moment. I followed Joshu, hounded him until he fled to the desert to live as a hermit.
I had never hesitated to use force with other subjects of the Empire, to beat, to wrench them into submission. With this one, though, force could not be mustered.
# Two
I met him a week after arriving in Palestine. After military training in Rome, my birthplace, and commanding troops in Gaul, I received a commission to serve in Judaea under Pilate, who'd replaced the puppet king, Archelaus, when he'd proved incompetent. My head swelled with the honor of it--living at Pilate's palatial headquarters on the Mediterranean coast, parading through the rabble of Jerusalem to remind the Jews whose empire they belonged to, spending the day wrestling naked soldiers and training in the governor's gymnasium. Besides quelling riots now and then and presiding over executions, all I had to do was bark orders and look good.
My hopes were crushed, however, as I and a fellow officer surveyed Jerusalem for the first time from horseback. Beggars squatted at the city gates, pissed out in the open. The marketplace stank with overripe fruit and animal dung. Urchins ran naked through the streets, and toothless hags scrubbed linens at the wells.
Rome was not without its own squalor, but I had been exposed to little of it. I'd been raised outside the city in a magnificent villa with gardens, vineyards, and a superb view of the Apennines. The city drew me only after dark, when I needed a whore, when the lurid faces I passed in the night only excited my wild, drunken desires. In Rome, colossal buildings of marble overshadowed the ghettos. Fountains in enormous squares, mansions on the Palatine, the Circus Maximus, the baths of Caracalla, aristocrats draped in purple--the splendors of the city made up for its unpleasant corners.
While Jerusalem--aside from the temple (a shack by Roman standards), Herod's palace, and a few mansions--had nothing to offer but rank slums.
I got drunk every night the first week, depressed about being stationed in Judaea for three years. To hell with standards of Roman discipline, I thought. I needed an escape from that hole. After carousing one night after Pilate had left for councils in Rome, I shook off the clingy Egyptian whore who couldn't get enough of me and rode to the hills overlooking the city. A cool wind blew. The sky already burned pink on the eastern horizon. I dismounted, threw a blanket on a smooth rock, and passed out.
When I opened my eyes, the sun was setting. I'd spent a whole day sleeping off my stupor, but my head still pounded. A voice rose over a ridge just above me, a voice as smooth and sensuous as a wooden flute, a young man's voice. I scaled the rocks to peer over the ridge. A stark-naked boy, who seemed barely 20, cavorted on a bluff, throwing up his hands, swirling his head, all the while chanting an eerie Oriental tune. His eyes were closed and I watched him freely for several minutes before he stopped directly in front of me and gazed down at the ledge where I stood, without a hint of shame in his handsome face.
"What's wrong, haven't you ever seen one before?"
He'd caught me admiring his dark circumcised cock, thick as a rope used to hoist building stones. "Not like that," I said. "You're my first naked Jew."
"It's a sign of the covenant," he said with pride, continuing to stand unabashedly in front of me with his arms crossed.
"Covenant?" His Latin was crude and I thought I'd misunderstood him.
"Yes, the covenant between us and our God."
"Demanding son-of-a-bitch, if you ask me."
He sized me up as though he might kick me in the face, then laughed instead, his taut belly quivering. He laughed until he coughed and wiped tears from his almond-shaped eyes.
"Help me up, damn it. I'm tired of balancing on this ledge."
Still smiling, he offered me his hand, and I scrambled up to the bluff. He handed me a skin of water, exactly what I needed after a night of liquor and a full day of sleeping in the sun. Then we hiked to a lower level, leading my thirsty horse to a pool of water that had collected in a cave. He seemed to know every niche of the mountain, stepping with confidence in his dusty sandals across gullies and jagged stones. Still naked. In the twilight shadows at the mouth of the cave, his sinewy form and his long, unkempt hair seemed to belong to a wild man who roamed the mountains.
"Are you one of the crazy Essene cave dwellers I've heard about?"
"What makes you ask?" He stroked the neck of my horse as it lapped the water.
"Don't tell me all Jews prance around on mountaintops with their peeled cocks flopping."
"No, just me."
"And why is that?"
"I'm strange, they say. I'm drawn to the mountains, up above Jerusalem. You can laugh, but I sense God here, even more than in the temple." He nodded in the direction of the city.
"Which god?"
"There's only one."
"Ah yes, a Jew idea. So you toss off your clothes when you feel the presence of your god?"
He smiled. "I go a little mad. Down there inside the temple walls, God seems confined. I know I feel confined."
"Watch out, Jew boy, you'll be stoned for heresy. I hear that's part of your religious code."
We secured my horse and climbed back to the summit where we'd met. It was too dark now to descend the mountain. We ate bread and lentils he'd brought in a pack, and stretched out on a homespun blanket to gaze at the lights in the city below. When the wind picked up, the boy slipped into his robe and lay close to me for warmth. He tantalized me more than any creature ever had, but I kept my hands to myself. Few, very few men could raise a sense of honor in me, but he--a Palestinian subject of the Empire, a Jew boy--managed somehow.
Over the next year, I became obsessed with him. We sailed on the Sea of Galilee, near his hometown, and hiked the rugged mountain terrain of Judaea. We feasted on roasted lamb, chugged his homemade wine until our vision blurred. Hot and drunk on the Mount of Olives, east of the city, we talked freely. He lampooned religious sects like the Pharisees and spun theories about the Elysian Fields, what he called the kingdom of Heaven.
In time he owned up to his heat for me--like I couldn't see it in his eyes. It frightened him. But it wasn't fear that kept him from grabbing my balls. He had religious qualms. He was destined to be some kind of eunuch for his god, a damned vestal of Palestine.
His power over me never relented. Not when I inspected with contempt the modest stone house of his family in a neighborhood of Nazarene artisans, not when I watched him join dirty swarms of Jews outside the temple, not when I nearly exploded in unrequited lust.
If he'd toyed with me, if he'd held me at bay to stir up my desire, I'd have taken him in an instant. But he didn't tease. Nor did he fear me. He wanted me but wouldn't succumb. And I wanted nothing less than his will.
Our last night together, I was horny as a satyr and thought, _To hell with it, I've been a Stoic long enough--and when he feels my cock inside him he'll forget his scruples._ When he refused me, and cut me off, my rage knew no bounds.
# Three
"I'm sick of hearing of it, Lieutenant," Pilate said. He was flat on his stomach on his massage table, naked and glistening with oil. The man working his shoulder muscles was a barrel-chested Egyptian with braided black hair. The bath chamber was heavy with columns, gilded tiles, draperies. "It was one thing to beat the tax collector for cheating us above the usual amounts: He was an example. The defiant drunkard, too. But a woman with her baby? By the gods, man, we'll have a mob on our hands. We've got enough trouble the way it is."
These words from a man who loved bearbaiting.
"She was a prostitute, Your Excellency." I stood at attention above his bald head, my crotch level with his eyes. (I'd heard he liked to carry on with his soldiers, though he'd never summoned me to his chamber.) "The people might have stoned her tomorrow for all we know, and she refused to obey my commands."
The governor signaled for the Egyptian to stop. The servant did so and brought him a sheet. Pilate wrapped himself in it and strolled to his couch. He stretched out on his side, eyeing the insignia on my tunic. His dark eyes were bold and cunning.
"She did not realize your rank, your position here?"
"Of course the bitch knew."
"And what did you command her to do?" He didn't bother trying to hide his curiosity.
"The usual things. I wanted to enter her backside. I wanted her tongue inside me. I wanted to bind her."
"And she resisted?"
"She tried. I tied her down, forced myself into her, with the brat looking on, bawling in his own shit. I backhanded him."
"Yes, well you might have spared the child." He motioned to a ewer of wine on a table and the Egyptian brought him a cup. "The Jews won't tolerate beating one of their children, even one that belongs to a prostitute. I've met with two contingents of their priests today."
"I'll restrain myself, Your Excellency."
"If you're hungry for blood, I can make you head executioner, Lieutenant." He eyed me impatiently over his cup.
"I will restrain myself."
"See to it that you do. I won't have another riot outside my walls." He waved me away.
In my dark room in the barracks, I removed my uniform and sandals and stretched out on my cot. The arched window framed a moon round and white as a discus. I knew I had gone too far. But since Joshu had dismissed me, fury had driven me to madness. What Pilate didn't know was that I'd knocked around other whores, as well as two or three Jew boys who'd tried to guard their circumcised shafts the way witless maidens guard their own treasures.
I had never wanted anyone like I wanted Joshu, a superior specimen of manhood--not just in his taut, athletic physique, but also in his thinking. He challenged not only the inane Jewish laws of ritual purity but, at least in private, elements of Roman civilization--the Emperor's title of divinity, social classes, the possession of slaves.
I shared his disdain. The difference between us was that I at least pretended to abide by the rules and obey the commands of my superiors. Such adherence had won me my rank, and I believed I was destined for greater things, perhaps direct service under Tiberius himself.
Now, however, my ruthlessness had spiraled out of control. My downfall was imminent if I could not restrain my animal urges.
# Four
Other officers had sworn by the tranquilizing potions of a seer named Tiresia. I'd resisted entering the cluttered slum along one of the city walls where she kept shop, but now I was desperate for an elixir that would temper my anger.
Made of stone blocks, all of its windows bricked up, the former inn was entered through a small portal leading to a long, cavernlike corridor built against the wall. I stumbled in the darkness past three or four cubicles toward a dull light. I found the wench staring into a mirror of polished metal, oil lamps glowing around her. She saw my reflection and smiled triumphantly.
"So, you've finally come, Lieutenant Victor Decimus. Welcome to my lair. Sit here." She patted a stool next to her. She wasn't the ancient hag I'd expected. She was not yet 40, her Ethiopian locks threaded with colored beads, her abundant bosom jiggling beneath her flowing robe with every movement she made.
"What, has another officer told you of me? Speak the truth, wench." I continued watching her in the mirror.
"The truth!" She laughed from deep in her chest. "I know nothing else. Nothing. If you please, my lieutenant, advance."
I approached the stool, removed my sword, and sat next to her.
"You're a handsome man, Lieutenant. You've a strong jaw, strong shoulders. Your eyes... they're bold, keen. You're a model client."
"I know what I look like. Enough of this prattle." I turned to her, but she averted her face from me.
"Please, my lord. Keep your eyes to the glass. If you want to be satisfied. Look only to the glass."
"If this is a game, I'll rip you apart, woman." I motioned to my sword and took a seat.
"You won't be sorry, my lord." Through the mirror, her heavy-lidded eyes again peered at me. She was a stunning beauty. "The remedy you seek, I have. I have something more, if you dare try it. However, it will stir up your passion rather than calm it."
"You must be a second-rate sorceress. The last thing I need is more of a temper."
"Ah, but what if you could be transported beyond the grounds of Pilate's headquarters? What if you could take whomever you pleased with no ill consequences?"
"Stop talking in riddles." I grabbed her arm, and she snapped her face away from me.
"I promise, Lieutenant, I would not resort to tricks. What I offer is too grave for games. But you must not look at me."
"Why? I doubt you're afraid of a man's eyes, from the looks of you. Why do you hide from the sunlight? Why is the shop as dark as a prison? Why do you cower here like a rat?"
"You must trust me, my Lord. There is no other way." She continued to face the darkness.
"Proceed." I released her arm and turned back to the mirror.
Without loss of poise she resumed her attitude before the glass. "What I have for you, you shall receive with no small pleasure. But again, you must trust me. I've waited long for you. I have seen you coming for an eternity. You're the first in an age. Without you I would shrivel and die. Stop!" She raised her hand when she perceived my impatience. Rings of rubies and polished jade glistened on her fingers. "You will understand soon enough. First, drink this." She reached for a silver ewer on a table and poured from it into a cup, which she handed me.
I sniffed it.
"There is nothing magical about it," she said. "It's merely a strong liquor to relax your resistance to my words. I speak the truth. Drink."
I did as she bid and very soon breathed more easily. The flame of a lamp transfixed me and I felt as though I lay in the sun after a swimming race in the sea. She slipped off her robe to show me her black breasts, round and full as silky pouches of spring water, waiting to be tasted by a parched desert nomad. Then she stood and let her garment drop to the floor. Her ebony body gleamed. Her ringed fingers touched my cheeks, guided me to her belly, which smelled of musk and sweet oil.
"Ah, this is all, wench? You want me to hump you?" I pulled her to the dirt floor. She moaned when my swollen cock sank into her warm, moist cunt. She moaned more as I pumped her, first slow, then hard, then slow to heighten my pleasure. All the while she kept her face turned from me.
The ride affected me like the opium I'd once taken in Rome, producing a trancelike calm together with a keen, excited euphoria spreading from my loins through my body the way the heat of liquor in the stomach finally flushes the face.
She guided my mouth to her breast. My heart pounded as the brown nipple stiffened. At first I imagined that milk flowed from the breast, then I tasted something else, something warm and potent, something as rich and red as the wine of Capri, as exhilarating as a slash made by an opponent's dagger. When the liquid streamed down my chin, when I saw the stained breasts of the seer, I swallowed greedily. There was no stopping. The euphoria mounted.
I felt myself soar over the walls of the city on a moonlit night, over wells and courtyards, over crumbling stone dwellings wedged together on filthy streets. Below, lithe girls beckoned to me. Fair boys called for me to mount them. Amid them stood Joshu.
"So you see me, Victor!" he called from a rooftop garden. "You've discovered the power. But I promise, it will only trap you on the wrong side of eternity."
I soared round the roof, tried to light on its tiles, but could not descend. I was like a granule of chaff battling a whirlwind, while the palms below remained immobile in the silvered moonlight.
"You can't approach me, Victor. Believe me. Turn from her!" Joshu was as clear and distinct as a crow on white sand. None of the haziness of dreams obscured him, and I felt no awareness--as we often do in dreams--that what I saw would vanish if I willed myself awake. I was there, with him.
"Don't believe him, my lord." The seer's words burst into my mind, but I saw nothing of her as I soared, though the sensation of our coupling continued and I continued to taste blood in my mouth. "I am the way," she said. "Remember that. I am the truth."
"She mocks me. She mocks God," Joshu called.
"I will have you," I said. "I feel it. I feel it."
"You feel her."
"Enough for now, my lord." She pushed my lips from her breast. "More later if you like."
I was drenched with sweat, while she remained dry. The stain had disappeared from her breasts. I rolled off her to catch my breath.
"There's more, my lord. Why worry about your rage? You can find him whenever you like. And you can have much more besides. Much more." She propped her head on her elbow. In the shadows she had no need to turn her face away.
"What do you mean?"
"Come here again when you are ready to die."
"Are you raving mad, sorceress bitch?" I reached over and grabbed her throat. "I could strangle you now and leave you in this pit."
Suddenly a vise seemed to crush my hand, though I saw no culprit. I cried out in pain and tried, to no avail, to release myself.
"Remember what I've said, Lieutenant. I am the way. When you are willing to leave the world of the living I will take you to a place where you can be master of all. Even him."
My hand was freed from the invisible grip. Tiresia stood and, without moving toward her garments, was clothed.
"I want to see him again." I lay on the cold dirt, nursing my hand.
"You will not return here until you are prepared to stay."
She seated herself once again before the mirror and watched me stand and grope my way back through the dark corridor.
# Five
For weeks I could not escape the vision of Joshu on the rooftop, the sweet, powerful sensation I'd felt. I became convinced that had I remained with him longer I could have reached him, embraced him. And that locked in my arms, he could not have resisted me. I wrestled with the officers in Pilate's game hall, I drank myself to oblivion, I worked the whores and the boys who attended me. At night I paced the labyrinth of Jerusalem's narrow streets. Nothing relieved the restlessness.
I approached Tiresia's shop several times, once intending to feign agreement with her demands, but always the force I had felt on my hand returned, causing me to wince and massage my fingers. Unable to shake the invisible grip, I retreated.
My temper flared again and again. When my secretary misplaced a scroll, I beat him until he wept. When I found my chamber pot unemptied after breakfast, I shoved my servant's face in it. I raised my sword to a harlot when I found bloody discharge draining from her slit and made her beg for her life before I retreated from her house.
As long as I vented my rage on servants and whores, I stayed in Pilate's graces. But my restlessness reached feverish heights, and finally, delirium obliterated good sense.
The boy was a Jew. I had spotted him during a military parade near Herod's palace. Long-limbed, like a gazelle, he sunned himself on a wall, captivated by the glinting silver and plumage of our uniforms.
"You called me, sir?" He had come to my chamber, brought by the soldier I'd sent to retrieve him.
"Approach me, boy." I lay on my couch, surveying his soft features framed by black ringlets. "Closer. That's right." I fondled his woolen cloak. "What are you called?"
"Benjamin, sir." He stared straight ahead, out the window, not daring to drop his eyes to mine.
"Take off your cloak."
"Sir?"
"It's warm enough in here with the fire. Remove your cloak. There, that's better."
He was lean but solid. His neck rose like a delicate pedestal from his robe.
"Are you breaking some religious edict by being here? Never mind. You shouldn't worry your conscience. You had no choice. If you hadn't come... Do you have a family of some kind?"
"Sir?" He suddenly looked anxious.
"Oh, come now. I'm only asking from interest. I won't butcher them." I caressed his arm. My sense of protectiveness excited me.
"I have a mother, sir. A widow. She spends most of the day in prayer in the temple courts."
"Some kind of holy woman, I suppose."
"Yes, sir."
"Who looks after you? You can't be more than 14."
"I stay with my father's people, sir. They are potters."
"Ah, you've got good hands for that trade." I turned his hands over and examined the palms. "Sit here on my couch."
"Please, sir. I couldn't."
"But I command it." I pulled him to me. "Do you know what I want you for?"
"Do you want pottery, sir?"
"Look at me. Look at me, damn you!"
He looked at me as though I were going to recircumcise him.
"Do I look like I want pottery?"
"I don't know, sir."
Once he did know, he submitted like a lamb, stupidly following my directions without a sign of struggle. To break a boy in, to rob him of what most Jewish men never imagine surrendering, invigorated me more than a winter swim. When I was satisfied, I told him to dress and summoned my man.
"You'll breathe a word of this to no one, boy. For your family's sake. I warn you."
"Yes, sir." The boy stared blankly at me, as though stunned by days of exposure in the Judaean desert.
"Escort him to the gate," I said to my man, who had learned discretion from me the hard way.
The boy kept his word well. When his cousins tried to pry information from him, he kept silent. He lost his appetite, grew thin, stopped sleeping, ruined the pitchers and vases he was creating, burnt himself on the kiln. Then one day his cousin found him dangling from a tree, like an old woman's rug thrown up to dry. He'd hanged himself after his mother, the wise bitch of a prophetess, guessed his shameful secret.
A delegation from the Sanhedrin stood before Pilate the next day. The results, I knew, would be imprisonment for me in Pilate's clammy cells, reserved for debauched or inept officers, followed by demotion, which would mean marching with the foot soldiers during military parades--to the satisfaction of the slimy priests on the sidelines.
Unbearable? Not for some. But I knew punishment like that would ignite my fury. Like the baited cheetah in the Roman circus, I'd tear apart my tormentors, dooming myself to longer imprisonment.
I fled the palace before Pilate could send his guards for me.
# Six
In the whorehouse where I hid, incense burned night and day to camouflage the piss splattered in the corridors and courtyard by drunk patrons. The women painted their faces at night and floated up and down the halls in transparent silks, like ghosts, ghosts who returned to their grave beds by day. Not that customers did not demand service in the hours of sunlight--even pious Jews felt safe, knowing none of their kind ever showed his face in a neighborhood that would leave him ritually unclean. Then a ghost would be summoned from her grave by the shriveled mistress and display herself at the door, pale and naked, her eyes glassy and ringed by shadow.
The life of the night suited me to a degree. Hooded and cloaked, I wandered freely through the black city and returned each dawn to pleasures of a new whore. But I was an officer, accustomed to showing off my physique, my badges, accustomed to dramatic displays of homage from miserable peons, accustomed to taking what I liked. My agitation became unbearable, as though I were subjected to a shrill, incessant flute. I considered escaping to Egypt or stowing away on a boat to Rome. But now I was a wanted man. News would reach Egypt and Rome before I was halfway to either place. In hiding or on the run, I could never live the life I deserved.
My thoughts wandered often to Tiresia's proposal. One night, after working a whore to a sweat, I interrogated her about the seer.
"I know nothing about her, sir. She's not one of us. We know all the competition." The girl was 16 at most, but already possessed of the jaded, weary expression that marked all whores. She sat on the floor, her head resting in her arms on the foot of the cot, after performing her finale. Candlelight flickered on her alabaster back.
"Don't tell me you've never seen the woman. Ethiopian or something. Black as the bottom of a cooking pot. She lives less than half a league from here." I lay naked on the cot, my hands behind my head.
"I swear, sir."
"What about the others? Surely they've seen her. Talked of her."
"It's possible, but they tell me nothing. I'm too new to be included."
"Yes. I'll bet they make piles of denarii on you." I touched her cheek with the ball of my foot. "I want you to talk to them at any rate. Get the information from your mistress. Tell her there's money in it for her."
"Yes, sir." The girl said it as though approaching her mistress meant abuse for her, regardless of profit to the proprietress.
The next night, I got an answer I hadn't counted on--a contingent of Pilate's crew bearing shackles for me. Incensed at my inquiries about what she believed must have been a rival brothel, the mistress had arranged for my arrest. Fortunately the girl whore had taken a perverted liking to me. She flung open my door just after midnight.
"They're coming for you. Run, sir." Her chest heaved. Horror widened her normally listless eyes. She disappeared across the courtyard, a whirl of white.
Grabbing my tunic and a sack of gear, I climbed out the window and hurdled a low gate. Drawn by a force as strong as lust, I sprinted through alleys and back streets to the seer's house. This time I met no resistance to my entry.
"Welcome, Lieutenant." Tiresia sat enthroned in her place before the mirror, her reflection glimmering softly in lamplight.
"Who are you?" I demanded, panting at the doorway of the shadowy chamber.
Tiresia laughed and stroked the colored beads in her hair. "Oh, Lieutenant. Your nature will serve you well on this side of the night."
"Face me!" I flung down my bag and approached her, but thought better of clutching her after the hand-crushing I'd received before.
"You are prepared now to join the league of the night? You've destroyed hopes for success in a mortal life."
"I'm here to hide, wench, and nothing more." I sank onto a bench and wiped the sweat from my brow. A rat scurried along the wall and vanished behind broken furniture and piles of rags.
She raised her head and studied me from beneath her heavy eyelids. "When you emerge from this house, Lieutenant, you will never hide again. You've only tasted what I can give you. Wait until you feel the full pleasure of complete power over mortals, the ability to travel, to soar over continents at the speed of thought. Wait until you can crush whom you please with impunity, command anyone you will."
"What about him? The one I have wanted to taste as I have wanted no other."
"Why do you want him?" She seemed displeased.
"Why? If you know so much, you must know why."
"I know all about him. But do you?"
"More riddles." Impatient, I rose and peered down the corridor. Then I turned to her. "He's told me I'll never approach him. He's told me you lie."
"Of course. He wishes to keep you away."
"But I desire him. I want him as my beloved. And he desires me in return. I know it."
"Exactly, my dear Lieutenant. He sees you as a test of his faith."
"Damn this god he imagines. A moment with me will turn him into an apostate."
Tiresia smiled. She dropped the robe from her raven shoulders. Her breasts, nipples purple in the lamplight, rose and fell as she breathed, like floats on a calm sea. "Come to me."
Heated by her charms, I stripped, spread my cloak over the cold earthen floor for her to lie on, and mounted her. Once again she turned her face to the shadows.
"Why me?" I asked in the midst of her moans. "Why not the others, the officers who've come for your potions?"
"Oh, my lieutenant." She clutched my buttocks to drive me more deeply into her. "There were no others. The words... were planted in their minds." Groaning with delight, she guided my lips to her breast. "Drink, my Victor. Drink."
The warm blood oozed from her nipple. The sight of it maddened me. I lapped it up like a starving dog. I sucked long and hard, until my whole body became as engorged with blood as my cock. The sensation I'd felt before returned, the strange sense of euphoria mixed with acute vision and heightened power. I could have strangled a bull with my bare hands.
"Yes, Victor. Keep drinking. You mustn't stop this time."
I had pulled away from her to get my breath, as though we'd been locked in a kiss of passion. She pulled me back to the wet teat.
"Drink and live."
Suddenly a pain shot through my skull and then concentrated in my eyes. They pounded. I thought they would explode in my head. But despite the torture, I clung to Tiresia's supple body. My loins continued to hammer against hers at a furious pace. Then, like a fountain, the seer's breast poured forth liquid that was no longer hot and salty, but cool and refreshing--blood still, but a chilled elixir that somehow dulled my pain.
My sense of strength redoubled. I felt like a man charged with superhuman energy in a time of disaster, mighty enough to lift a block of granite from a worker crushed beneath it. I soared as I had soared in our last encounter, above the tiled roofs, the palms and walls of Jerusalem. Higher and higher. Through clouds. Toward a sliver of moon. Toward blackness--not empty, but full of creatures, heads and hands and limbs. The beings peered at me from behind treelike shadows.
"You are approaching it, Victor." Tiresia whispered into my ear. "Ah, yes. You're almost there."
But where? By the gods, where?
The sweat that had soaked me as our mad coupling began suddenly evaporated. My skin tingled, cool and taut. I was aware of every inch of my body and at the same time attuned to the darkness around me as I soared.
Tiresia laughed. "This is the beginning of time, Lieutenant. My time, my birth into the night. You are returning me here to reign. I have proven myself. It has taken centuries, since the time of the great Sphinx, but it is finished. Now I join the court of darkness."
Lightning ripped through the night. I could see Tiresia distinctly next to me. Her garments fluttered in a whirlwind as she flew. Her beautiful face had become translucent. The beads in her hair shimmered like precious stones. Other figures, robed, crowned, surrounded her in the air. They caressed her face with long, tapered fingers. They deposited a crown upon her head and carried her away, fading into the blackness.
"Wait! Damn you, wait! What about me? What do I do?"
"Follow your instincts, Victor." Her cry rolled out in her wake. "Follow your instincts. You will know." The final word echoed through the dark vault, slowly fading.
Conscious of returning, in an instant, to Tiresia's shop, I rolled off her motionless, cold body. I lowered the oil lamp to examine her face. The shriveled features and sunken cheeks of a hag glowed in the light--the hideous face she had hidden from me. Before my eyes, her luscious breasts dried up and disintegrated.
That is how it began.
# Seven
Behind a false wall in Tiresia's chamber, I found her sarcophagus, lined with hieroglyphics chiseled in ancient Egypt. There I slept by day for the next eight years, at night glutting my thirst on the blood of beggars and rich merchants, boys and maidens. I even rampaged through Pilate's household, first unnerving his pathetic wife, who rambled about visions of black ghosts in her room, then sinking my fangs into the breasts of his best servant girls, their juice exploding in my mouth like plump fruit.
My cock swelled as I drank blood, and the sensation--the heat, the lusty ecstasy--transcended any erotic pleasure I'd ever had.
It didn't take long, however, for me to tire of satiating my thirst, of testing my new strength, of flight, and of preternatural vision. I, like all beings, craved company. But I existed in isolation. I knew others like me existed, but not in Jerusalem. We were assigned domains by the dark powers, and our provinces did not overlap. Strange voices told me this in dreams.
Whores and boys of the street entertained me at night, and vagabonds cast dice with me. I paid my debts to them with silver piled up in Tiresia's secret chamber, from what source I knew not.
But there was only one man I wanted.
After a month of refining my powers, I found Joshu bathing in the river after dusk. From behind the reeds, I watched his sinewy arms stretch as he scrubbed his back. I inspected his thighs, as strong as columns, his belly, taut as a drum. Light lingering on the horizon painted his angular features and left the shadow of his form on the shallow water.
"Who's there?" He scanned the shore as I rustled through the reeds.
"It is I, Joshu." I stepped into a clearing on the bank.
"I thought you were dead. Where have you been?"
For the first time the aura of strength about him, spiritual strength expressed in every firm muscle of his body, struck me as superhuman. For the first time a strange shudder of apprehension passed through me.
"Who are you, Joshu?"
"I have told you, Victor. I am my father's son."
I splashed through the water toward him. I stopped so close to him I could see the tiny scar on his temple that he'd told me about. A neighbor boy had struck him with a rock when he was six or seven. "Which god is your father?"
"I have told you. The only god." He stooped over and cupped his hands to drink.
I clutched his arm. "The Roman gods are much more powerful than this god of the Jews. They outnumber him after all."
"What have they done for you?" He straightened and faced me boldly. In the light of the rising moon, his eyes revealed his love for me.
"They've made me immortal."
"Have they?"
"It's what you always speak of, isn't it? An eternal realm. Well, I've found it."
He quietly contemplated my face.
I pressed my lips against his and for a moment he leaned toward me, but then drew back.
I snorted at his rebuff. "I thought you told me I could not approach you."
"When did I tell you that?"
"In the vision."
"I don't know of what you speak."
"Deny it if you like. But here's the truth. I've the power to take you. But why should I, Joshu, when you want me? Be my consort. I can grant you immortality."
"You can never possess me." Joshu started toward the shore.
I grabbed his arm and pulled his naked body to me. "I already possess you." I shoved him away and, with a flicker, sped through the night, as light and invisible as the wind.
Wherever he went, I hounded him. When he retreated to the desert like one of the crazy Essenes, I squatted next to his campfire. When he paced the temple courts beneath the starry sky, I blocked his path. When he slept at the homes of those who had begun to follow him, I boldly marched through their doors.
"Why are you doing this?" he screamed one night in the desert, where he'd gone to pray. I sensed I had not been the only devil to tempt him there in the darkness. "What do you want?"
"You know what I want." I stretched out on the blanket he had spread before the fire.
"Robbing my purity means that much to you? What if you did take my body? My soul would belong only to my Father." He looked weak and drawn in the rippling light.
"Save that pig fodder for the masses, Joshu. Piety means nothing to me. And as far as you're concerned--you want me too. I'll wage war on you until you weaken, my friend. Your body isn't enough for me. I can have any I choose. I want your soul."
He suddenly broke into a fit of laughter. "Oh, Victor, had you wanted my love, my company, my teaching.... But my soul?"
"Do you know who I am?" I sat up and glared at him.
"I know you belong to the night with the other demons. Except..."
"Except what?"
"For you there's still hope."
"Damn you!" I leaped up, ripped off his cloak, and threw him to the sand. I pinned back his arms and spread his legs with my knees. "Is there hope now?"
I tore at his throat with my fangs, but the second I tasted his blood a wave of nausea passed through me. I vomited the blood from my earlier victims into the sand.
"Who is protecting you? Who is keeping you from me?"
"I am free. No one commands my will. Not even my God. But I have surrendered to him." He sat up and rubbed his arms where I had gripped him. Then he reached for his cloak.
"They say you work miracles. Why don't you cast me away from you as you cast away the demons from the godforsaken creatures roaming the slums?"
His response was a prolonged gaze at me, as though he were actually considering using his powers. But his gaze also held compassion.
"You're a holy fool," I said, spitting out a remnant of the blood that had erupted from my throat. "You'll never rid yourself of me."
Indeed, I kept after him, hounding him until the very end of his divine crusade, when even his charisma couldn't save him. In the final days I nearly had him. Immortality in my company looked pretty good to someone condemned to die.
On the day when the sky darkened to the shade of midnight, I rose from my tomb and soared to Golgotha. Gazing lustfully at Joshu from the foot of his execution cross, fighting my urge to lap up the blood that dripped from his hands and feet, I invited him one last time. He listened to me but merely turned his eyes to the sky, invoked his god, and slipped into a death I will never know.
For two nights I ransacked Jerusalem's streets, torturing, murdering whoever came into my path, mangling limbs and tearing flesh with my teeth, without stopping to suck out the elixir after my first few victims.
Then, just before dawn that Sunday, as I hurried back to my refuge, it happened. I felt him.
"Joshu! What is this? What kind of spirit are you?"
I glanced around the street, gray in the harbinger glow of morning. "Show yourself to me."
He stood before me, naked and whole, not pale from death. The nail marks still on his feet and hands looked more like small tattoos than wounds.
"I am not a ghost, Victor. I live."
"As do I, my beloved." I stepped toward him, but the dawn was moments away and I had to flee. "Come with me, to safety."
But he stood immobile and I could not hesitate another second.
When I awoke later in the darkness, I felt as though I were the only being left on an annihilated earth. Joshu was gone from me. I knew this before his followers began babbling about a resurrection. He existed, was immortal, but not in a world of darkness like mine. I howled like a wolf who'd devastated a flock of sheep and was now left with nothing. It was only then that I believed in the god Joshu returned to, a god of light. And it was then that I vowed to avenge my loss on this pompous being who had deprived me of the only one I ever loved.
# II
The Cloister
# Eight
"Abbot Reginald of St. Sylvester speaks well of you, Brother Victor. He says you have much to offer us here at St. Thomas." Brother Matthew, a burly man, stood before a tall window in his office. He was abbot of St. Thomas, my new home, but he told me he didn't like titles. "Brother" suited him just fine.
The window looked out into the courtyard, where a gnarled old pine basked in the light of a full moon. The walls of the office bulged with thick, musty volumes. The oak floor and desk were blackened with age.
"I'm grateful for the abbot's recommendation, Brother Matthew." I assumed the tone of deference I'd perfected through centuries of monastery-hopping. Also robed in black, I sat across from him in a leather chair, venous with cracks.
"We Thomists have never accepted a monk from another order. But the accident was tragic--the demise of an ancient community stemming from the Dominicans, like ours. Order of the Divine Word--splendid name. What a pity." He shook his head.
"Yes." I studied the fat neck that rose from his black habit. His ripe jugular bulged as big as the infant snakes we ate as delicacies in Pilate's headquarters two millennia before.
"The fire was sudden?"
I nodded. "It was a medieval abbey, full of rotting wood. England is full of decrepit abbeys and convents. The unusually strong winds on the English heath didn't help. When the fire started, everyone was asleep. They weren't due to rise for matins for another hour. And the fieldwork had been grueling that day, especially for the older monks. Most everyone died of smoke inhalation. Through God's mercy I was taking my usual walk on the heath, safe from the sun's rays. My skin condition won't cause inconvenience, will it?"
"Of course not, Brother Victor." The abbot seated himself in the leather chair behind his desk. The lamp cast a rosy glow on his smooth cheeks. His eyes were small and closely set and full of the disgusting charity I never grew tired of loathing. "It's a strange condition, I admit. What a misfortune to be intolerant of the sun. Though the night has its own beauty."
"Indeed."
"And you can work in the night? It will be good to have a sentinel of sorts."
"The underground cell is not a problem?"
"Not if you don't mind the damp rooms beneath the chapel. It's where we said our private masses in the old days--a network of dank little chapels and storage rooms. We've furnished a cell for you down there. I promise you you'll not see a single ray of sun."
"Excellent. The crypt is around there too, I suppose? The one you mentioned to Abbot Reginald?"
"Yes. I hope that won't bother you--sleeping among our faithfully departed. They're good old souls." He smiled benevolently.
"Indeed not. I can pray better in that environment, reminded of mortality." I mustered a smile, and he, predictably, reciprocated. "I have a trunk full of books. They're out in the car."
"I'll get a brother to help you." Brother Matthew picked up the phone. "I'm surprised Brother Cyril didn't bring it in already."
"That's my fault. I told him I'd get the porter to help me after I spoke to you. He did carry my other bags to my cell."
The abbot shifted the receiver and gave directions to the porter I'd glimpsed as old Brother Cyril had pulled into the parking lot, a willowy boy of 19 or 20 who blushed when I saw him peering out the window.
I rose when he knocked at the door and the abbot called for him to enter. He was indeed a specimen--full lips, limpid eyes, a shock of blond hair, the bangs curling over his eyebrows. He smiled bashfully at me, then turned his attention to the abbot to receive his instructions.
"Brother Victor," Brother Matthew called as we were leaving. "Why exactly did Abbot Reginald survive the fire? He wasn't clear in his letter or phone call."
"I think he was out of the building too, Brother. He suffered from insomnia."
"I see. And he really won't accept our hospitality?" The abbot squinted in the lamplight.
"He's very old. His community is gone. He wants to be with his family in Brighton." I tried to restrain my impatience at his irritating concern.
"I see. Welcome to America, Brother, and to the monastery of St. Thomas. We'll try to help you not miss England too much. Make yourself at home."
"Yes, Brother. Thank you."
How many times over the ages had I introduced myself to an unsuspecting abbot? It was in the late 13th century that I entered the first cloister, a Dominican fortress in the Apennines full of boys hardly old enough to shoot their loads. That's when I set out in earnest on a calculated campaign to destroy the harems of Joshu's god. I'd only begun to hear about monasteries, although apparently the communities of monks had spread from Egypt into Italy 200 years before.
But let me start at the beginning.
After the death of Joshu, I spent a dozen frustrating years waiting, searching for his spirit among the old Palestine haunts. Then, despondent, I made my way back to Rome, where I spent four centuries feasting on the rich blood of patricians and the exotic blood of slaves from every corner of the empire. The Germanic invasions began in the fifth century. At first I enjoyed the excitement, the chaos. I could smell blood from the battlefields when I emerged from my hiding places after sunset. But depression gradually set in as I watched the collapse of civilization as I'd known it.
I left Rome in the eighth century and wandered through the Far East until the Barbarian raids were long over. Then I returned. More than a millennium had passed since I'd beheld Joshu. A millennium. I caught wind of these idealistic followers of Joshu huddled together, renouncing the world as he had. It seemed too good to be true. My nocturnal wanderings, glutting my appetites, had begun to bore me. I lacked a purpose, stable companionship, things I thought I could dispense with on the dark side of existence. The challenge of the secluded abbeys where pietistic young males wrestled with their carnal desires, where Joshu's spirit perhaps lingered, where I could most injure his god... that challenge baptized me into a new life. The night once again held promise.
The first millennium after Joshu's death was my adolescence as a predator, my youthful heyday. In the second millennium, as a monk, I enjoyed the fruits of experience: more finesse in my dealings with humans; more restraint over my cravings; more single-mindedness in my hunts; more concentration, hence ecstasy, in my feedings.
Between the 13th and 20th centuries, two dozen monasteries, in Italy, France, Spain, Germany, and the British Isles, harbored a creature of the grave unknowingly--until too late. With each abbey I left devastated, I cut a new notch in the belt squeezing my heart, the belt strapped there by Joshu. If I couldn't remove it, I could gloat over its defacement.
St. Thomas would make two dozen and one notches. And my first in the New World.
Leaving the abbot's office, I followed the boy monk, Brother Luke, down the dark hallway into the medieval-looking foyer of St. Thomas Abbey. Our feet thudded on heavy oak planks. High above us loomed thick rustic beams. The abbot had instructed the young porter to give me a tour of the buildings.
"Would you like to see the chapel first?"
Though I wasn't an expert on American accents, his drawl seemed rural. He pointed toward heavy doors surmounted by a carved image of Joshu on the abominable crucifix, the one I had stood before 2,000 years ago. In all that time I had not grown immune to the longing and bitterness the image evoked.
"Lead on, O Gabriel." I squeezed his shoulder when he beamed at the name of the angelic guide.
Lights from votive candles flickered before statues in side altars and near the sanctuary of the long, narrow chapel--an unimaginative variation on chapels since the beginning of monasticism. Choir stalls faced each other near the sanctuary, and pews lined the body of the church. The place smelled of incense and candle wax, old varnish and sickly-sweet flowers.
"You wanna offer a prayer?" The boy whispered as though the empty chapel were filled with meditating monks.
"Thank you. Please, come with me." I touched his back.
We walked side by side down the center aisle and knelt at the communion rail. The golden tabernacle doors were embossed with a predictable scene of the "last supper." The truth is, Joshu was sick during that final Passover and couldn't eat a thing. I teased him about it--and scoffed at the foot-washing ritual that embarrassed his foul-smelling men, their feet caked with filth. He had an irresistible penchant for degrading himself.
But he had enough sense to avoid ending up imprisoned in a bread box. The Blessed Sacrament indeed! And even if he was in the tabernacle, it wasn't as a wafer that I wanted to taste my Joshu.
Brother Luke amiably gabbed as he led me through the other buildings forming the sides of the large courtyard, thick with hedges and trees now dormant in the January cold: a high-vaulted library adjacent to the chapel was in one building; in another was a social hall, a kitchen, and a refectory with two long tables for the handful of monks who lived there; a third building contained a dormitory of tiny cells, administrative offices, and, adjacent to the foyer, a richly furnished parlor for receiving guests. A fairly new greenhouse had been constructed on the north side of the refectory, behind the buildings.
"There's a room of computers next to the library. But I ain't got the key." Boy Luke stood timidly behind a heavy chair. "They put in a lot of time in there, with their research and all."
"And you have no scholarly ambitions? I thought the Thomist order pledged to carry on the work of the great St. Thomas Aquinas." Through my research I had learned all about the 19th-century offshoot of the Dominicans. Most of the monks were scholars who did research at the monastery during extended university sabbaticals. Between the monastery's enormous library collections and its modern computer technology, they had all the resources a pinheaded professor could want. They would keep their noses buried in their books and stay well out of my way.
"Gotta have porters and groundskeepers, too." He smiled sheepishly.
"Ah, I see. You are responsible for the charming landscaping, then?"
"Me and Brother Michael. We tend the greenhouse, too."
"And is Brother Michael a stooped old farmer?"
Luke shook his head. "He ain't but five years older than me. Lot smarter though. Reads like a fiend. Just not interested in scholar stuff."
As the boy helped me unload my trunk from the car, I surveyed the landscape in the moonlight: the long dirt path to the monastery snaked through trees and down a hillside until it hit a country road, invisible from the promontory. Behind the buildings, acres of woods rose to the Appalachian peaks that now looked like bites taken from indigo paper. We were far from the city of Knoxville, from heavy settlement, but I smelled human blood in the cold night air, as I'd known I would.
My stay here would be short if I fed on monks again. This time I would resist the urge even longer than I had the last time. Once I started on monks--sick of feeding on drifters and prostitutes and other refuse no one missed--once I started on monks I couldn't control my appetite for their consecrated flesh, which carried me to orgiastic heights. I would grow careless, leaving trails of blood, missing the suppers I only pretended to eat anyway, missing compline--unable to stomach invocations of Joshu. In the bloody chaos, clues would point to me and I would have to flee once again, often destroying the monastery as I had recently done in England, when the good brothers of St. Sylvester discovered too much.
My cravings defeated my own purpose. I intended to steal the souls of devout boys, not their mortal lives. Controlling a boy thrilled me as terrifically as it had during my existence as a man. My cock still hardened. I could still take a boy, though it was the sight of him surrendering his will to me, not the friction of fucking, that triggered my orgasm.
I took no pleasure in barring the doors of St. Sylvester's, in torching the carpets and draperies, in razing the buildings that had given me security. But I had no choice, and no choice but to assume the identity of Abbot Reginald in order to make arrangements at St. Thomas. This time I would resist longer. I would stick to the indigenous food.
"People live in the mountains?" I asked, lugging one side of the trunk while Luke took the other.
"Some. Most is miners. But the mines done closed." Luke's breath steamed before him in the cold. "They get by on whatever they can shoot in the woods or pull out of the dirt. Michael takes them food and supplies."
"Indeed."
Inside the foyer, Luke unlocked a door that opened onto a dark and narrow stairway beneath the vestibule of the chapel. He flicked on the light and turned to take his side of the trunk.
"It'll be easier if I carry it myself." I picked up the trunk and he moved out of my way.
"My God, do you lift weights or something?"
I grinned, feeling his eyes admiring me from behind--for when I chose to I could feel any senses directed at me. "Natural, brute strength, my friend." He did not know the half of my powers.
Deep in the bowels of the church, we walked along a flagstone corridor, past alcoves made of brick, like burial niches in the Roman catacombs where I fed upon the neophyte Christians: the first martyrs--because of me, not the lions. The widely spaced incandescent bulbs along the walls shined upon marble altars within the alcoves.
"The ordained brothers used to say their masses down here." Luke was leading the way now. "Don't see how you can sleep so close to the graves. Gives me the creeps."
"With the Blessed Sacrament just above me? How can I be afraid, Luke?"
"Still..."
"Ah, I see we're coming to the crypt." Engraved marble tablets spaced six feet apart lined the walls. Six names were engraved on each of the tablets, which were embedded in the wall above an iron door that was soldered shut. I'd seen this sort of mausoleum many times: the coffins inside the small chambers were sealed in vaults, stacked in pairs. "How long has it been since a brother died?"
"That'd be Brother Raymond, last year. Ninety-two years old. There he is." Luke pointed to one of the last mausoleums. The soldering was shiny still.
I deposited my trunk on the floor of my cell, a small storage room beyond the crypt, which had been furnished with a bed, a desk, shelves, and a chest of drawers. In a room across from mine, plumbing had been installed for the priests who said their masses near the crypt. A duct from the boiler room directed meager heat into the entire subterranean space.
It was after midnight, and I was growing ravenous, not having fed since the night before when I broke out of the coffin being shipped on a flight from London. When Luke offered me a fraternal embrace to welcome me, I wanted to pierce his supple throat and drink. I clutched him as I clutch my prey, willing him to immobility. Within his stunned body, I felt him succumbing. In that second I could have ordered him to do anything, he was so pliable. Latent homosexuals--a category that obviously fit him--were always the easiest to control in monasteries, where such creatures flourish. But there was no hurry. Young Luke could wait. I released the boy and he pulled away, embarrassed but not sure why.
Minutes after he had ascended to his cell, I was once again in the night air. The scent of blood drifted through the dense woods. I tore through branches, over frozen soil and brittle leaves, my preternatural sight steering me through the darkness in the direction of my prey.
The wooden shack, its porch sagging and windows boarded, nestled into the mountainside near a frozen brook. A doorless old refrigerator and a heap of rusty cans and other garbage littered the ground beside it. Light bled through the windows. Inside, a baby cried. I stood on the porch, listening to a husky-voiced woman singing. Before she could finish her lullaby, I charged through the door.
The woman, seated on a kitchen chair, cradling the baby in her arms, screamed. Of course my appearance was horrible, as always when I fed. My skin took on the jaundiced hue of a new corpse. My fangs grew in an instant to the length some stalactites take a century to reach. Fire burned in my eyes. I panted like a rabid dog.
"Oh God, please. No!" She clutched her baby to her breast when I reached for it.
I snatched the brat from her, raised it to my mouth, and, shaking off the blanket, sank my teeth into its soft belly. Its blood squirted like the juice of a plump tomato in my mouth.
Shrieking hysterically, the woman grabbed at the baby, but I held it firmly, draining it and dropping the corpse on the dingy linoleum floor. She scrambled from her chair and threw herself on her dead child. Snatching a handful of her oily hair, I pulled her to her feet and ripped off her sweatshirt. She reeked, as though she hadn't bathed in a month. Her face was pockmarked. But her breasts, swollen with milk, enticed me. My fangs sliced into them. Her eyes rolled back. Her head dropped and her body went limp. When I'd had my fill, I let her crumple to the floor beside her child.
I hauled their bodies to the heart of the dark woods and flung them into a ditch, rolling a fallen tree over them. Then I returned to the shack and wiped up stray drops of blood. Anyone searching for the victims in their remote dwelling--whoever supplied their food and fuel--would assume they'd vacated and trudged to the warmth of the city.
Shortly before dawn I raced back to my cell. Stuffing pillows under the blanket in case anyone should look in on me during the day, I left the cold cubicle and hurried to the crypt. The iron door of old Brother Raymond's mausoleum gave easily under my strength. I slipped into the dark, cramped chamber, where I had to stoop like a humpback, and pulled the gate firmly shut behind me. The lingering smell of decay, no longer detectable to a mortal, wafted to my nostrils, at once familiar and repugnant. Tracing the odor to the top vault of the third pair of tombs, the one farthest from the door, I pried it open. The plain pine casket was perfectly intact, probably the only one in the whole crypt in such a condition. I opened the lid, scooped out the skeleton, still dressed in a habit, dumped it on the floor to be discarded later, and climbed into the coffin. Within seconds of closing the lid, I drifted off, sated and exhausted.
# Nine
I did not need to feed every night. Sometimes I could last a week or more if I'd imbibed enough blood. The woman and her baby had glutted me. I went for 10 days before killing again, this time a tramp who lived in a hovel of corrugated metal. In the first month at St. Thomas I fed on several more mountain people, careful to bury their remains in the woods to avoid raising alarm.
After centuries of inhabiting monasteries, I slipped easily into a routine. Rising just after sunset, I joined the brothers at table because it was expected of me, even though I always arranged to eat my own meals in the kitchen at times appropriate for a nocturnal schedule to keep hidden my inability to take normal food. While they swallowed their stew, the 23 men--most in their 50s and 60s with a few younger monks scattered about, all garbed in black, hooded robes--listened to readings from Thomas à Kempis's _Imitation of Christ_ or from the works of their fat hero, Aquinas. (I'd never lived with that medieval scholar's community, but I had met many monks who confirmed rumors that a semicircle had been carved into the dining table to accommodate his piggish girth.)
After dinner came a period of recreation. The monks could gather in the social hall or exercise in the basement gymnasium. Compline was chanted at 8 in the dimly lit choir stalls. Then the monks filed to their wretched holes. Fortunately, the chanting of matins had long ago been abandoned by the group; being in my assigned stall at 3 in the morning used to take some maneuvering, especially when my victims took me far from the grounds. I usually attended compline, as much as I loathed it--hearing Joshu's name again and again, the words about his blood, his body, at once a mockery of my own feedings and a reminder that I could not possess him. I was always half afraid that I would storm up to the altar and rip open the priest's throat.
But of course I did no such thing.
Instead, very easily, I exerted ever-increasing control over young Luke. I had him violating the Grand Silence within a few weeks. He would sneak down to my cell after compline, eager as a spaniel who wanted petting. He sobbed to me more than once about the mother who abandoned him after the death of his father, about the stern grandfather who raised him.
"Very sad," I said after one of his crying jags. "Come sit by me." I patted the cot I'd never slept in.
He rose from the desk chair and settled on the bed, nestling against my shoulder. His wet cheeks gleamed in the light of the candles burning on the bookshelves. I detested the overhead fluorescent tubes.
"That's better, isn't it?"
He nodded. "Victor...." He paused.
"Why have qualms?" I could hear his thoughts, like the confused voices of children on a playground. "What could be better? God has brought us together."
"Do you think so?" He sniffed.
I dug a handkerchief from my pocket. "Here, blow your nose."
He did as I directed, then said, "Brother Matthew talks about particular friendships. The _Imitation_ says they're the work of the devil."
"Has the good abbot spoken to you about us?"
He avoided my eyes. "No. In his sermons, I mean."
"What about the others?" I glared at him. "Do they talk of you and me?"
"Not really." He lowered his eyes once again.
"Tell me the truth."
"Well, Mike has asked a thing or two," he finally admitted, sheepishly.
"Brother Michael wanted information about me?"
"Not information, really. Sometimes when we're working in the greenhouse he asks things."
"What kinds of things?"
"Well, he asked about your order."
"And what did you tell him?" I stood and walked to the desk.
"Nothing. I don't really know nothing but what Brother Matthew told us all. You was in some other kind of Dominican order in England, Order of the Divine Word, and the monastery burned down and the other brothers died. And it was our duty to take you into our community. You never told me nothing else."
I leaned against the desk with my arms crossed, examining his childlike face. "Tell me, if Brother Michael's as intelligent as you say, why is he content to spend the day doing mindless work in a greenhouse?"
Luke twisted his mouth to one side and contemplated the question. "I expect it's a spiritual thing. He's damn holy. God comes first in everything. Maybe working the soil is good for his soul."
I pondered the crucifix above the bed. I'd left it there to avoid rousing suspicion. "Why didn't you tell me about Michael's curiosity?"
"I... I don't know. I never told him nothing, though. Not about breaking the Grand Silence or coming down here. I won't say nothing about you anymore."
"Never mind, Luke." I turned my eyes back on him. "Talk to him all you want about me. Now you'd better leave. It's nearly midnight."
# Ten
From then on I paid special attention to young Brother Michael, whose good looks and intensity I'd already noticed during compline, when he sang the psalms with such fervor one would have thought he'd already entered the courts of paradise--or that he feared losing paradise altogether. I liked the latter possibility much better.
I'd seen many monks like Michael in my time: cocky, full of themselves, aware of their brawn and handsome features, of their intelligence and charisma. Yet at odds with themselves because of their strange ideas about religious perfection. Why in hell they did not stop resisting their gifts and make the most of them... well, I didn't waste time puzzling over this question.
Such monks always posed the most intriguing challenge. I loved to watch them squirm against my magnetism, fasting, even flagellating their backs with cords until the flesh was raw and bleeding, and then inevitably succumbing to my powers. I possessed them--sexually, emotionally, spiritually. But when I considered making them like me--for I had come to understand this possibility--I despised them too much to have them as companions. Still, I never grew tired of seeking a companion or torturing my prey, especially those who were particularly unworthy of me.
Once I'd observed Michael long enough to know that invariably he lifted weights on Monday, Wednesday, and Friday evenings, I started showing up in the workout room at the same times. At first, I only nodded to him and went about my workout (a joke, really, since to me the free weights might have been made of cork rather than steel). But one night, after a couple of the other monks had finished their workouts and left the room, I spoke.
"You know," I said, turning my head from the bench press after replacing the barbell on its rest. "You should not worry so much about Luke. You can't keep him from worldly dangers."
Without looking at me, Michael continued working the leg lift, his calves bulging into balls of olive flesh. Though the basement was cool, he wore only shorts and a tank top, soaked with perspiration. His dark, normally unkempt hair was tied back, accentuating his noble forehead, classic nose, and sturdy chin. He reminded me of the best of the Roman gladiators, one fiercely flaring his nostrils, searing his gaze into the eyes of his opponent, and yet alert to sudden moves.
"I know you want to protect him," I persisted.
"Why should you care what I think?" He glanced coldly at me, then returned his attention to the exercise.
His spiritedness only aroused my admiration. I looked him over as though he were a rare jeweled goblet, while he pretended to ignore my attention.
"None of us is above sin. True, good Brother?"
"None of us." He stopped, glaring at me. "I'm not pretending to be superior."
"Of course not."
"Some people are impressionable, that's all. They should be left alone."
"I think you do consider yourself superior." I sat up on the bench and faced him. "You're even above this kind of conversation, aren't you? I'll wager you've never had one quite like it within the walls of this monastery."
"I can't say that I have." He resumed his lifts, straining to speak as he raised the bar with his feet. "You don't care much for our Rule, do you? Surely, your own community had a similar one. How could you have made your peace with it?"
I laughed. "Now I see. You like me. You really do. Yes, Michael, I am worldly. But it doesn't mean I belong to the world. Oh, it sounds like a contradiction, I know. To crave things of the flesh, but to be above them. But that's exactly my position. And I relish it."
"You mock the spirit of monasticism. We're to live in the world, but not be of it. That doesn't mean giving in to bodily cravings, or believing we're above them." His emotion fueled an acceleration in his lifts.
"I didn't say I was above the cravings. I'm above this world."
He stopped, his chest heaving from the workout, and swiveled around on his bench to face me, his dark eyes intense. "You believe that, don't you? You're not just trying to scandalize me. But you're wrong, Brother Victor. You do belong to this world. So much so that you frighten me."
"Frighten you?" I grinned. "I'm flattered."
"Don't be. It says a lot more about me than you." Michael stood and grabbed a towel from a shelf above the weight bench and wiped his face and neck. "I have to shower before compline." He started toward the door.
I grabbed his wrist. "You're hard on yourself," I said. "But not by nature. That's why I like you." I smiled.
He studied me apprehensively and left the room.
That night, long after midnight, I crept down the dark dormitory corridor to his cell. The heavy door creaked on its hinges as I entered, but he did not stir in his bed. For several minutes I gazed at his form in a darkness that, in my gifted vision, borrowed the shades of dusk, not moonless night. He lay enshrouded in shadows, breathing with the regular rhythm of one whose conscience is clear. He lay on his side, his arm outside the wool blanket, his hair splashing the white sheets with black. Stooping and touching his shoulder, I entered the dream, which, though buried far beneath his consciousness, played as vividly before my eyes as my own memories.
"Come to me, Michael," I whispered, in spite of myself. "Don't fear the darkness. What kind of eternity awaits you in the light but one on bended knee? Would you fight your pride beyond the end of time?"
He moaned, opened his eyes and peered into mine, then slipped into a profound sleep.
# III
Communications
# Eleven
Rivulets of blood from the thorns stream down his ashen face. He shifts his weight to his feet in order to breathe and, when his strength fails him, hangs from his hands once again, the nails tearing his flesh. I remember taking those hands in mine one day on a fishing boat. Then they were browned by the sun, calloused from what I called his quaint profession as a builder.
"Victor," he says, his head rolling until his pained gaze falls on me, "God can forgive you." This is the way he always begins our conversations, no matter what scene of our lives together we are revisiting.
"God be damned! What kind of god lets them do this to you? He delights in your torture." I am standing beneath the cross, alone, under a graphite sky, my cape whipped by the wind.
"No." He swallows and takes a breath, straining to support himself on his feet. "It's salvation. To give to the end. To give all, even your life."
"Why do you spew this rubbish at me, Joshu? You heard the rogue there next to you. Save yourself. Ask this god of yours for mercy, if that's what it takes. Don't pant and bleed here in front of me, with excrement running down your legs, telling me to ask for forgiveness from your god. Your god disgusts me. You disgust me."
"I... I love you, Victor."
"Yes, so you say. Every time we speak. Every time, damn you. And for the millionth time I answer, 'Come to me then.' I know you have a power. Not the power the mobs see, your flimsy healings and exorcisms. You know what I mean. The power derived from immortality. Power like mine. Why choose the world of light? What is its reward? Look what I have in the night. Eternity, mastery."
"Loneliness, longing."
"I satisfy my longings! Every one of them. And as for loneliness... company be damned when it takes the form of sniveling humans or a tyrant god who can't stand to share his power. But you, Joshu," I press my lips against his bloody feet and savor his taste, "you are different. We could spend eternity together in the night, having all we want."
Joshu gasps for breath and collects his strength to stand once again. "If you want me, you want the God of light. My God, my God." His eyes roll toward the threatening sky, ripped by lightning.
"No!" I scream. "Not again." I pound the cross with my fist. "Curse your god and live!"
"This is my body... body... body." The words reverberate through darkness now. The winds cease. I feel myself spinning. I open my eyes and once again find myself in the choir stall. The fool priest is consecrating the wafer. "This is my body, which will be given up for you," he says, with gravity. I want to storm to the altar and rip the host from his hands. I want to tear open his throat. But I restrain myself, forced to take part in the mockery until the host is in my mouth. Then I defile it by swallowing, letting it mix in my stomach with the blood of my victims, and vomiting it up before climbing into my coffin.
# Twelve
My visions of Joshu were nothing new. They'd haunted me for 20 centuries. I hoped, though, as always upon moving into a new monastery, that they would cease. The crucifixion vision, especially, ripped at my heart. Although I had caused countless deaths, lapping up the blood that surfaced on taut flesh like vintage wine streaming from a cask, to see Joshu's death, Joshu's blood, was to relive my loss.
As much as I wished the visions would cease, I also feared this possibility. After all, the visions were all I had of Joshu. In them I felt that he was present, speaking, and not just a shadow created by my supernatural mind, felt that the risen Joshu communicated through this medium.
That there had to be such a medium at all exasperated me beyond endurance. It was like speaking through glass: The same cold matter that brought him near to me also, cruelly, formed an impenetrable barrier. The nearness to intimacy wrenched my heart.
The visions were induced by the chants and scents of Mass, and by feeding on the exceptionally rich blood of those who ate large quantities of meat--in the case of the mountain people, squirrel and porcupine and raccoon.
The visions also sometimes occurred while I studied volumes from my collection of ancient texts about Joshu: Some people called them the Apocrypha, but I knew from the lips of Joshu himself that much of the lore was true, tales about Joshu as a boy, breathing life into clay doves, striking down neighborhood bullies with a glance, healing wounded pets.
During our mountain hikes and sailing adventures Joshu would laugh over his youthful impetuosity. Not sure of what to make of his extraordinary powers at the time--whether he invented them or truly possessed them--but disinclined to reflect overly much, I would clap his naked back and say that he should use his powers. If he restrained his true nature, it would come out in one way or another.
Indeed, his power did come out, in the fire behind his words to temple officials, in his fury over injustices to toothless widows and snotty-nosed children. Compassion was his power, at the very core of his nature. This is what intrigued me about him: His compassion was not the result of pious submission to his god, but, in some inexplicable way, his very nature.
The visions also overcame me as I studied occult texts, which for many centuries were banned by the Church, though they could always be gotten from devious monks. Now such volumes were readily available in most monastic libraries. Mythology and satanism were now deemed legitimate fields of research by the new, sophisticated, incorruptible brand of monk.
I can't say how the authors of these volumes learned of the world of darkness, whether they were inspired by court members like Tiresia, whether they themselves were vampires like me. But I was convinced of their accuracy. From them, I pieced together the components of my world, my history, my fate.
Certain fallen angels, followers of Lucifer, had escaped hell and fled to a kingdom near the moon where they thrived on their lust for power. A hierarchy was established, the angel Principia becoming queen, Copulus and Demitria her consorts, and the others her court.
These beings were driven to extend their kingdom beyond themselves to the mortal world, created in time. Principia, assuming the form of a buxom wench, charmed an Aryan chieftain. As he lay on her, sucking at her breast, he was transformed as I was transformed by Tiresia.
The chieftain lived, solitary and hungry for blood, as I lived, foraging by night among robust villagers for his food, creeping by day into a burial chamber.
All the while the court above watched him from their dark realm, waiting for him to pass his gifts to another and enter their ranks, only to have the new creature of darkness join them in his turn--if he so chose. For there was a choice.
I knew this not only because of Joshu's words to me, but from the ancient writers' stories about weak vampires who caved in to the demands of the possessive god of light. They had their tempting visions too, it seemed, long before Joshu. Which agents of the world of light appeared to them, I ever sought to discover in the books, though so far unsuccessfully. But I had learned much of the dark realm and delighted in discovering even more.
One fact in particular came to dominate my thoughts over the centuries. Bonds between members of the dark court were forged by mutual desire. When a vampire decided once and for all in favor of darkness, and created another soul of the night, he entered a world where he would no longer be alone. Long before I reached St. Thomas, I'd become determined to create my successor and move on to the dark realm, to be joined later, for eternity, by the vampire I'd created.
I had no delusions that Joshu could take that role. Once a spirit passed to the Kingdom of Light or the Kingdom of Darkness, the movement was final. So I lived for my communications with Joshu, as maddening as they were, until I could find one to replace him. Over the ages, as I toyed with pathetic young monks the way a cat cuffs and claws rodents, and as I defiled everything sacred with my lust and killings, I searched for a replica of Joshu--the one who obsessed me. Again and again my candidates disappointed me, as soon as they submitted to me. That ruined them, like a fissure ruins the blade of a sword.
Still, my obsession persisted.
# Thirteen
One spring evening after compline, I willed young Luke to my cell. His blond shaggy locks had tumbled endearingly over his brow as he'd chanted the psalm. His pale face and hands had seemed almost translucent under the lights. The effort it took to summon him was nothing, barely the amount of concentration required for lacing a shoe. Fifteen minutes after the Grand Silence began, he tapped on my door.
"Enter." Having pulled off my robe, I lay stretched out on the bed.
The lamplight threw his lean shadow on the wall as he closed the door behind him. With an impatient wave I bid him come to me. He timidly approached the bed.
"What took you so long, boy?"
"I came as fast as I could, Victor. Brother Matthew was walking up and down the dormitory hallway. I had to wait."
"Why are you lying?" I pulled him to the bed by his cowl and brought his face inches from mine.
"I'm not."
"Get out." I released his robe.
"No, please, Victor." He touched my cheek. "Don't make me go again. We ain't done nothing for weeks."
"Why is that, Luke?" I eyed the cowering boy with disgust. "Well?"
"My confession," he admitted. He lowered his eyes and then turned them imploringly on me. "But I told you, it's the guilt. When you sin against celibacy, it's like you're unfaithful to the Lord. I can't help confessing. The guilt just eats away at me. Old Brother Joseph ain't gonna tell anyone anyway. He'd never break the confessional seal. Besides, he's heard it all before."
"From your friends Peter and Gerard, no doubt." I'd written off the effete pair of monks as not worth raping the moment I set eyes on them.
"It's just common here, that's all."
"You've whored ever since you set foot in this place, haven't you, boy?"
"It was different. It was never like this."
"Like what?"
Luke's eyes gleamed and he smiled like a child dying to share a secret. He sat on the edge of the bed. I leaned back, my hands behind my head.
"I always knew I was different, Victor. When I was little, there was this farmhand of ours. He'd work in the field and come back all sweaty. He'd take his shirt off and wash himself down at the pump outside. I'd stand inside the screen door and look at him, my heart pumping away like an engine. I didn't know anything about what I was feeling except it was good. He'd bring me a stick of gum or some licorice. Sometimes he'd skip rocks with me across the creek on our place. My grandaddy was too old to do much with me and all my brothers was raised and married." Luke lifted my legs and, scooting back against the wall, laid them in his lap.
"So anyway. I took a shining to Bud--that was his name. Used to dream he was holding me in his big ol' hairy arms, kissing me. I knew that wasn't right. But I woke up feeling good all the same. Then he got work down at Chattanooga. Left us after harvest.
"I tell you, Victor, I liked to have died. Was sick to my stomach. Couldn't eat. My grandaddy sent me into town to see the doctor. He said there wasn't nothing wrong with me. Granddaddy about beat me senseless for pretending. Course I couldn't say nothing to nobody.
"I got over Bud in time, but the whole thing started me seeing things different. Here I was, 14 years old and never had no interest in girls. I noticed how good the other field hands started looking. Played around with one in the barn. Knew it couldn't be right so I went to confession."
"Your first mistake." The story was starting to bore me, but since he was working up to his paean to me, I humored him.
Luke sighed and shook his head. He rested it against the wall and stared at the ceiling. "Man, that priest gave it to me bad. Told me sleepin' with another man was worse than murder. Said if you died with that on your soul, you'd go to the deepest part of hell. Gave me a penance that would have worn out a saint."
"And you took it to heart, of course."
"Yes, sir, I took it to heart. Started praying that I'd shake off these feelings for men. Started going to morning Mass in town--had to walk two miles to get there, sometimes in snow and ice. Started talking to the priest about living a holy life. Before you knew it, I was in the novitiate here at St. Thomas--right out of high school. Then..." He hesitated and lowered his eyes as though saddened by what came next.
"Then you saw that the feelings got even stronger here. You were surrounded by beautiful men and thought you were in heaven." I had heard this confession many times.
"I got desperate. I ended up going to bed with a couple monks. But I fasted, prayed. Mike helped me."
"Brother Michael?" The boy had my attention now.
Luke nodded his head. "When I told him about the temptations, he said he had 'em too. Said we could help each other out."
"Yes, I'll bet."
"No, Victor. It wasn't like that. I thought he was good-looking, but we never fooled around. He was too strong. We'd pray together. We'd talk while we worked on the grounds and in the greenhouse. I dunno, it was like I could keep the feelings down then. Like they were channeled into a new path."
"Touching," I said.
"It was all going OK until you showed up. I wanted you bad. You was so confident and good-looking. You didn't seem to give a damn about rules. I never thought of things the way you did. Questioning, I mean. I never rebelled against the Church. It's like, when I'm with you, it don't seem like my feelings are bad. It's like God sent you for me."
I chuckled at this. "So that's why you run off to confession regularly."
Luke's face grew serious. "I ain't saying I've got it all under control. I ain't completely changed. It's mostly when we're apart, afterwards. Then I start worrying that maybe I have sinned."
"Maybe you have."
"What do you mean?"
I smiled at the panic in Luke's voice. "Maybe it is a sin. Maybe you'll wind up in hell."
"You don't really believe that, Victor. I know you don't."
"Think what you like. It's getting late. You better go." I wanted to feed and it was nearly midnight.
"Not yet." Luke grinned, got up on his hands and knees and buried his face in my crotch.
My cock hardened. Although I stopped ejaculating after my night existence began, sex gave me great pleasure, mostly the pleasure of having power over the human kneeling to lick my balls or take me in his ass. My whole body still shuddered with orgasms, but now it was my mind that exploded--in a dizzying euphoria of colors and sounds.
I stripped Luke of his clothes. I inspected his slight body, hairless and soft, his long slender cock protruding from a mass of blond fur. Throwing him face down on the rug, I mounted him. But I had barely entered him when I suddenly lost interest in the white body splayed before me. It was too easy. He wanted me too much. I felt no resistance in his will.
"Go to bed," I said, getting up and throwing on my robe.
"What's the matter, Victor?" He rolled over, still erect and badly wanting me.
"Another time, not tonight."
"Did I say something wrong? Do you want something else?" He clutched my arm like a beggar.
"Leave me."
Crestfallen, he dressed. I let him kiss my lips before he left the cell.
By now I was weak with hunger. In the heavy rain, I squished through the muddy grounds of the monastery toward the trees, invisible in the darkness to mortals, but to me a line of grayish branches, still leafless. Just inside the woods, I froze in place. I sensed a human within a stone's throw from me. My fangs instantly grew. My breathing became as excited and loud as that of a bull ready to charge. Someone lurked among the trees. I could stalk him, but as famished as I was it could only be to feed, and if the spy was a monk, I would endanger my position at St. Thomas. While I still had some power of discernment, I sped through the forest, as fast as my thoughts could carry me. My legs moved not at all now, but with the velocity of a jet I was carried bodily through the woods, my body dodging trees and brambles as though it carried its own radar system.
My senses guided me to a shack, like those of all my other victims in the mountains. The windows were dark, but the chimney smoked. The rotting door gave with a slight tug. Inside, the uneven floor moaned beneath my weight. The light from the fire painted the sticks of furniture orange. The room smelled of cat urine and damp upholstery.
"Who's out there?" an old woman demanded from the bedroom. "I got a gun in here, and I ain't afraid to use it."
Her delicious scent overwhelmed me. My nostrils dilated to catch her flavor. When I entered her door, she was standing near the bed, aiming a shotgun at me.
"Hello, Granny." I advanced, and she started trembling.
"Get out of my house else I'll blow you out."
"Now, that wouldn't do."
Before I could take another step, she fired. The bullet passed through my stomach and out my back, stunning me only momentarily. In an instant the pain dissipated, the wound closing as though my flesh were liquid and the projectile had only briefly parted the waters.
"Please!" She dropped the gun and sank to her knees. "Oh, please leave me be."
I approached her, lifting her chin to look into her rheumy eyes. "Good night, Granny." With a quick twist, I snapped her neck, a merciful gesture as I saw it, and picked her up to sink my fangs into her throat. It is only a myth that vampires cannot drink from corpses; they can, as long as the blood stays warm. Her loose skin tore easily and I fed on her rich old blood for nearly a quarter of an hour, I was so hungry.
After disposing of her body beneath a pair of fallen trees, I trekked up the mountainside to a clearing for a view of the valley. Below, the monastery rested in darkness, except for a few faint lights. Who was up at this time of night? Who had been watching me in the woods? Perhaps only Luke, who couldn't bear to leave me. But the presence I felt there was a stronger one than his, one I did not sense again as I crossed the monastery grounds but one I would attune myself to in the future.
# Fourteen
The balmy nights of May brought with them a keen loneliness, for during the whole month I was bereft of contact with Joshu. Luke, who had amused me for a while, now satisfied me less and less. Still, he proved useful for learning more about the monk who did intrigue me.
One night when I did not find Michael in the weight room I wandered to the greenhouse. The lights there annoyed me, but I shaded my eyes and strolled quietly toward voices that rose above the rows of plants grown to raise money for the monastery. (The plants were transported to a market in Knoxville.) Concealed behind a wall of tall ficus, I listened to the conversation between Michael and Luke.
"It is different, Mike. God, I can't explain it. It's not like the other times. Why don't you just lay off!"
"What are we about here, Luke? You can argue for that kind of love all you want. For the sake of argument, let's say you're right. Homosexual love is legitimate. Even so, you're a monk. You took a vow of celibacy."
"That's when I thought I'd go straight to hell if I didn't kill the urges. Hell, Mike, I don't wanna kill 'em when I'm with him. It's like it's a sign from God or something."
One of the two took a few steps, and I drew back behind the foliage.
"You want to know what a sign from God is?" Michael sounded impatient. "Good fruit. Remember when Jesus says you'll know them by their fruit?"
"Hell, yes. And that's exactly right. God is love, right? That's what we have, Victor and me. Where two or three are gathered and all that, you know. I understand it now."
"Attractions are deceptive, Luke. Attractions don't mean love. Why do you think you still feel guilty? You think if this little affair were such a good thing, you'd be heading to confession all the time?"
"You're pissing me off."
"I'm telling you the truth."
"It's just the old shit." Luke nearly shouted, but he seemed to collect himself, lowering his voice. "God don't send guilt trips. Brother Matthew said so himself. Lotsa times, they're from the devil. He gets a person all tied up in knots to do his dirty work."
"If he were good," Michael implored, "if he were good and if the love were pure... Luke, don't you see this is exactly how Satan works, not through what seems grotesque and repulsive, but through what seduces us. That's what I'm trying to tell you: The man has seduced you. He has no love to give you."
"You'd take away the only joy I ever had, wouldn't you?" Luke sounded choked up now. "Here I thought you was my friend."
"He doesn't give a damn about you. Open your eyes." Michael's voice quivered, perhaps as he shook Luke by the shoulders--the plants blocked my view.
"He wants me. Don't look at me like that, damn you. I ain't crazy. Don't you think I know when someone wants me?"
"For what, Luke? For what? It's some power game he's playing. He likes to have people under his thumb."
"You know what I think? I think you're as jealous as all get out. Someone else takes a shine to me, and you go bananas. That's what it is. You like to play like some high and mighty judge. But you're in the game too."
A definite hit on the part of young Luke, I thought. I waited for Michael to go on the defensive. His response was a pleasant surprise.
"Maybe I am jealous." He spoke softly, seriously. "Maybe I envy this intimacy you feel, the physical affection. Maybe I feel Victor's power myself. But that only confirms what I'm saying to you. You don't seduce someone you love. This is his mode of operating."
"So that makes him Satan? Maybe it's the only way he knows."
Michael sighed, defeated. "Maybe it is."
That night after compline, Michael remained kneeling in the dark. I remained, too. A cross-breeze from the opened windows carried in sweet smells of lilacs and irises blooming in the courtyard.
I willed him to cross the chapel to me, but he resisted. His own will pressed upon my chest like hands keeping me at bay. But his bowed head, his kneeling form betrayed no hint of struggle, he was so composed. Suddenly he looked up, recognized me in the shadows, and spoke.
"Where are you from?"
"Many places. England was the last place. Brother Matthew told the community all about me, didn't he?"
"Why did you become a monk?"
I smiled. "If it's a heart-to-heart you want, Michael, perhaps we should adjourn to my cell, if you don't mind violating the Grand Silence."
"Did you listen to the reading tonight? 'The Lord has given an opportunity of repentance to all who would return to him.' "
"Yes, Clement of Rome waxes on, doesn't he?"
"The words didn't faze you."
I crossed the aisle and stood before him where he knelt. "You're not a prig, Michael. And you're not as self-righteous as you sound. So why are you preaching to me? What is it that possesses you? Surely it's more than the love for your friend." I touched his hand. He pulled it away and stood. We were so close I could smell the wine from dinner on his breath.
"I have a responsibility to Luke. If he won't listen to reason, I'll have to go to the abbot. It's my duty. I'm telling you so you'll back off."
"He's like a brother to you, eh?"
"He is my brother. So are you."
"Funny, I don't feel like your brother."
He started to turn away, but I grabbed his firm arm. "You're intelligent, Michael. Surely you don't take all this too seriously." I waved toward the chapel, as though it were a world rather than a room. "Dogmatism, I mean. Human rules, penances, busy rituals. Surely for you spirituality is something more... how should I put it... elastic, expansive, spontaneous. God speaks to each soul in a different way, don't you think?"
"Yes, he does. And I know his voice when I hear it." He jerked his arm away.
"Ah. One of the fortunate ones. The chosen few. You can discern good from evil, like hot and cold. No warm for you. Perhaps my first assessment of you was wrong after all. Perhaps you are self-righteous." As he walked away from me, I called after him, "If you run into Luke, tell him I'll be waiting for him in my cell."
# Fifteen
My father was a true paterfamilias. As a Roman patriarch, he ruled his sons and his sons' families by the laws of practicality, reserving sentiment for private moments with my mother. Embedded in my memory is the sight of my youngest brother, Justin, sick with fever after serving in a provincial regiment. Moaning in pain, he lay naked on his cot, his youthful body still firm and muscular but writhing now, his eyes glazed, his face pallid as the linens. My father, a senator, was involved at the time with some important state matters that required careful attention. My mother, exhausted from attending Justin, was in danger of nervous collapse. I was home on leave from Palestine, but family business occupied my time. My other brothers, both officers, were stationed abroad.
"What are his chances?" My father had taken the physician and me outside my brother's room, away from my mother. Still fit and straight of carriage at the age of 60, my father was a fierce-looking man. His cold blue eyes, focused on the physician, gazed intensely from beneath a wide brow.
The old physician, a bald curmudgeon with a great beak of a nose and thin lips, scratched his chin. "I have seen this fever before in those returning from the provinces in the far south. If he survives it, his mind will be touched. If not, he will die within the week."
My father hesitated not one second. "In that case, it must end. Today, no later. Take care of the matter in haste. Let my wife learn nothing of it."
The physician bowed and disappeared. Immediately my mother, who'd been eavesdropping, rushed to the corridor and fell in hysterics at my father's feet, a bundle of white robes.
"No, Lucretius, I beg of you. Let him live. Do not take him from me." Justin, who had her fine features and high cheekbones, was her favorite son.
"What would you have, woman?" My father's face remained immobile. "Unbearable pain for him, madness if he's unlucky enough to live?"
"I will tend to him myself," she said, grabbing his legs, her reddened eyes turned up to him. "You needn't waste one moment here."
"You have duties, Lydia." Her emotion only made my father speak more sternly. "Now release me."
Mother collapsed into a heap, sobbing violently.
Following my father's lead, I ignored her and accompanied him to his rooms to discuss business matters needing urgent attention.
"She will recover," he said as we crossed the courtyard, which was gleaming in the midday sun. "And so shall we."
Recover he might, but for three days and three nights after my brother's funeral rites he locked himself in his rooms. As for myself, though I shed not a single tear in public, though I reprimanded my mother for her tears, I knew for the first time the emptiness that a final death, a mortal's death, brings. Especially when it is your own flesh and blood whom Charon ferries across the river Styx to the Underworld--a boy you taught to ride, to handle a sword, to swear admirably.
For some reason, after my chapel interview with Michael, my mind returned to the scene of my brother's fever and my father's pragmatic decision. Cold blood guaranteed survival, not only of an individual, but of a people. Certainly a vampire needed it.
I was not given to brooding, but in certain periods the isolation that was the fruit of such coldness, the isolation of centuries, came over me like a blanket of darkness that even my sight could not pierce. In moments like these, I rallied my spirit by dreaming of the time when, after entering the realm of night, I would be joined by whomever I had chosen on earth, a beloved who would wipe away all thoughts of Joshu.
In the meantime, I took comfort where I could. That night I willed Luke to my cell. With a flashlight to guide us--for Luke's sake--we hiked out to the woods, to a spot cleared by mountain dwellers, and rested against a stump overgrown with vines. Along the way he asked me questions about the Church with an interest, an ebullience, of one who never previously conceived of the possibility of questioning.
"What do you mean the Church's power is arbitrary?" he asked, cradling my head in his lap. Around us a warm breeze rustled the branches and cicadas whirred.
"I mean the Pope's a foolish old man who invents laws."
"But he's the Vicar of Christ."
"He's nothing. He knows nothing of Christ but the wives' tales passed on about him by ignorant fishermen and tent-makers."
"But didn't Christ call Peter to feed his sheep?"
"Your Christ, the Church's Christ, is a god made in the image of effete men who've never had a good fuck in their lives, or if they have, who've thrashed themselves with whips to relieve their guilt. They hate their own cocks so much they'd light votive candles to make them fall off, if it would do the trick. Good old Origen, revered Church Father that he was, castrated himself. Did you know that?"
"Shit!" Luke winced.
"Virgins. Celibates. What kind of god makes bodies and forbids you to use them? And makes you feel holy and superior when you do manage to turn into gutless, passionless stone?"
"But Jesus didn't--"
"He worshiped a demented god. He was deluded." I stood. "Let's get off this subject. Before I tear off someone's head."
"If you feel this way, Victor," Luke timidly pursued, "how come you ended up in a monastery?"
I picked up a fallen branch and cracked it against the stump with such force that the sound reported like shotgun fire. "It is my mission, damn you! We hear the gospel every day. Well, I have my own gospel, the true gospel. Not the one that serves a possessive, tyrannical god, but one that frees us from submission to him. I live it. Daily. Why grovel at his feet for eternity? Do you want an eternity on your goddamned knees?"
"Jesus said God is our father." The boy was frightened by my outburst. For all his delight in unorthodox talk, I had gone too far.
"I am your father." I fell on the boy, worked up his habit, and with only the dew for lubrication, forced myself into him. He groaned in pain at first, then in pleasure as I rode him.
In that moment I wanted his blood more than ever. Nuzzling my face against his throat, I heard the blood pumping through his jugular as loud as a bass drum. My fangs descended. I bared them. They grazed his soft skin, barely scratching its surface, before I jerked my head away, just as my thrusting body brought him to his climax.
"God, I love you, Victor." He panted the words.
"Go back. I need to walk. Alone." I got up and disappeared into the woods before he had time to protest. My urge for food had suddenly driven me to near-insanity. Spotting a raccoon nosing the carcass of an animal, I rushed to it, grasped its furry throat, and sank my fangs into its tough hide. I sucked every drop of gamy blood from its veins and hurled the hairy mass against a tree.
Just before dawn, tired from aimless tramping about the woods, I crawled into the mausoleum and settled into my pine bed. Exhausted, I drifted toward a welcome sleep, when my eyes snapped open. Someone lurked outside the tomb, someone strong and unafraid--I felt such a presence within my bowels. Then I smiled and closed my eyes.
# IV
Clearing the Path
# Sixteen
Brother Matthew stood before the communion rail, the saints in all their glory inspecting the crown of his balding head from their niches in the high altar. He removed his wire-frame spectacles and rubbed his close-set eyes. "This isn't the best time of day for serious news," he said, "but it is one of the few times we're all gathered together."
All 23 monks had remained in their stalls after compline, which had been moved to 10 o'clock to accommodate me during the summer months when the shadows of dusk didn't collect until after 9. A couple of the older bastards regularly nodded off before the final blessing, and they had to be nudged now.
"It looks like there've been some break-ins up in the hills. The police aren't sure what the motive is. As you know, the folks up there have no valuables to speak of. They're also as isolated as can be, so we're not even sure why anyone would wander around up there." Matthew replaced his glasses, clasped his hands, and rocked nervously back and forth. "It seems some people have abandoned their houses. Brother George and Brother Michael first noticed that several months ago."
I glanced across the aisle at Michael, whose dark hair, pulled back into a ponytail, gleamed after his post-workout shower. He was all intensity, scratching his new goatee, studying the abbot as though he were pondering metaphysics rather than the disappearance of hillbillies.
"As you know, it's nothing new for people to move down to the city's shelters during the winter months to get out of the cold, so at first no one was alarmed. Then relatives put in missing person reports. And the disappearances continued when the weather warmed up. The county police started to investigate. Of course you can imagine how seriously they pursued trudging through mountain thickets to investigate the cases of missing indigent people."
Several of the monks shook their heads or mumbled something about the shame of it.
"However," Brother Matthew continued, "earlier this week they found a body."
A rumble of voices echoed through the chapel.
"Brother George, perhaps you would like to take it from here." The abbot turned to Brother George, a short, middle-aged man who managed the community's finances and was second in command after Matthew. His silver hair, clipped close to his skull, fuzzed the outline of his squarish head. He stood to address the group, and the abbot rested against the communion rail.
"You all know that the monastery regularly sends food to some of the mountain folks." His voice was a deep, gravelly smoker's voice. "Well, Michael and I got worried when one of our regulars disappeared without a trace--an older woman with no family, practically lame in one foot and with no transportation. We searched the woods ourselves, thinking maybe she fell and hurt herself. At first we didn't see any sign of her. Then we noticed a trail of something--it could have been blood--leading up to a fallen tree, a big oak about five feet around. It was strange since how could she have gotten under it? But we got the police out here a couple days ago. They sent for a crew of workers with a crane. They found her body under it. Molly Spaker was her name. Nice old woman. Husband died last year. He was a miner from up in Kentucky."
"How did she get under the tree?" The question came from Brother Alfred, a swarthy man at St. Thomas to complete a book on some drivel in Aquinas's _Summa Theologiae._
"They don't have a clue," George answered. "But evidently her neck was broken. And, well, I might as well say it--her blood was drained."
The chapel buzzed with murmurs.
The abbot stood, looking grave. "The concern here is that some psychopath is on the loose in the woods. The police are continuing their search for the missing folks, with the help of our information about them. And they're hunting for the killer of Mrs. Spaker. In the meantime, we've got to tighten our own security measures since we're obviously sitting ducks for a killer at large."
"That means all outside doors stay locked," Brother George threw in. "Please use your keys. And keep windows facing the exterior of the monastery shut. If you open your transoms, the interior courtyard windows should keep your rooms cool."
"We probably have nothing to worry about," the abbot continued. "If someone had wanted one of us, they could have had him by now. Probably, we're talking about someone smart enough to pick victims who wouldn't be missed."
The monks for the most part were not gossips--scholars rarely are, content to keep the world at bay while they bury themselves in their studies--but several of the monks did whisper together in the foyer after we had prayed for the deceased and for the apprehension of the murderer, apparently feeling the Grand Silence warranted violation under the circumstances. The shadows of the robed men rose like spirits on the walls.
I retraced my steps through the chapel to get to the library, where I could bury myself in books and stew over the matters at hand. The antechamber to the stacks was a large room whose beamed ceiling rose to a height of 40 feet, the vast space corresponding to five levels of stacks on the other side of one wall. Against tall wainscoting, study desks of darkened oak were arranged around a block of reference shelves in the center of the room.
I took a seat at one of the desks against an external wall where two arched windows framed the woods and distant mountaintops in the daylight, but in the darkness formed two blank eyes. Yellow lamplight fell upon the volume I'd left on the desk, a study of the historical Jesus, the result of some ambitious theology professor's drive for tenure, and for me, the opportunity to dream of Joshu.
The words blurred as I read, the page becoming like a theater curtain of transparent gauze that melts away when the lights behind it reveal a set and movement on the stage. I saw the mangled old woman crumpled on the linoleum. _Nothing can be linked to me,_ I said to myself. _The woman's body, the blood, the superhuman effort to move the tree--none of these points a finger at me. But I must find new grounds for feedings. Down in the city perhaps, among Knoxville's poor neighborhoods, in the prison outside the town's limits. Less convenient of course, especially now when my lust for the blood of young Luke makes it hard to resist sinking my fangs into his throat. But necessary all the same._
"Good evening, Brother Victor." The voice of the abbot broke through my thoughts. He looked a bit uncomfortable. "I hate to break the Grand Silence, but I do have something important to discuss with you, something I've been putting off, and with all this horrible stuff going on--well, there's not much silence right now anyway. May I?" He nodded to a chair near the desk.
"Please."
"What is it you're reading?"
"Nothing of interest. What's on your mind?"
He shifted and removed his glasses, studying them as he spoke. "Something's been brought to my attention, regarding you and Brother Luke."
"What's that?"
"At the risk of taking our friend Thomas à Kempis too seriously"--he looked up now as though to say I ought not take him too seriously either--"it's about particular friendship."
"Becoming too close to one person in the community."
"Yes. It's an old-fashioned idea of course. But there's something to be said for it." Perhaps detecting the disdain in my voice, he became emboldened enough to look me in the eye.
"I'm all for old-fashioned ideas, Brother Matthew."
"That's good, because I do believe this is a serious matter. Luke is... what can I say. He's naive in the extreme. He's not bright either--very impressionable. You, well, I take it you've seen some of the world."
"What makes you think so?"
"Oh, I'm not sure. The way you carry yourself. The way you speak. I guess it's just an impression."
"I see."
"But at any rate, you're older. He's hardly out of his teens. Luke needs to be handled delicately."
"Protected, you mean? From more experienced people?"
Frustrated, the abbot rubbed his cheek. "Of course experienced people have a lot to teach a boy like Luke. That wisdom, the right way to handle feelings, etcetera, that's something he should learn about."
"The boy's infatuated, Brother. He's young. It happens. I'll take care of it."
"Very good." He stood to go, but turned back to me as though a bit dissatisfied with the turn our conversation had taken. "I hope you'll take my advice in the right spirit. I don't want to seem inhospitable to a brother who's been through a tragedy. I can see why you would reach out for a friend."
"You're kind, Brother. Good night." I turned my attention to my book. He hesitated a second, then left the room.
So Michael had carried out his threat. Of course, since our talk in the chapel, I had continued summoning Luke to my cell and we'd gone on with our nocturnal romps through the woods. For the most part young Luke's company amused me, took the edge off the solitude of the night. Granted, he offered me no challenge and never had. But before it got tedious, his kind of wide-eyed fawning entertained me for an hour or so. Besides, our liaison won me Michael's attention. Perhaps now, however, my strategy should change.
Later that week I made my move, after sucking on his tender cock in the woods--so excited by its engorgement that I would have pierced the nearly transparent membrane keeping me from the blood if I'd possessed an ounce less restraint. The full moon had already reached a western point in its arc back to the earth. Patches of its faint light lay on the rocks, vines, and bare earth the color of coffee grounds. Luke and I had thrown off our habits and, guided by a flashlight, trod naked to the familiar clearing where we conducted our rendezvous. In the hollow of a tree he'd stashed a bottle of wine he'd filched from the dusty collection in the cellar--the daring boy he'd now become. He'd chugged a good amount of it before our playing began. Now his face was the shade of the wine, but instead of breaking his energy the drink pumped it through him. His slim, naked form paced restlessly as he rambled on and on about the stalker roaming the mountainside.
"You think he escaped from a loony bin? I believe there's one in Knoxville, or maybe it's Nashville. Wherever the hell. You know, it's a damn scary thing. Someone sucking out blood like that. That's what the coroner guy said, ain't it?"
I shrugged, bored with the topic, and laid my head back against the stump. I imagined the dim moon, bright to my sight, was the sun and that once again I was basking in its heat.
"That ol' gal musta died from shock before he even started sucking. You think? Tell you what, I wouldn't want to be the detective that's gotta go poking around for the rest of the bodies."
"Maybe there are no other bodies."
"Hell, where there's one, I'll lay odds there's a dozen. Like rats in a barn. Yessir, betcha anything some crazy man read too many vampire stories. Got hisself some spikey teeth and ripped into her." He finally stopped pacing, peering through the trees as though on the watch for the killer, and sprawled out next to me after he tossed the drained bottle into the thicket.
"You've got a morbid imagination tonight."
"I expect so." He had calmed himself now and laid his head on my shoulder, his blonde locks soft against my cheek. "Anyway, with you here, I feel pretty damn safe."
"The abbot spoke to me the other night, Luke. About us."
"What about us?" He spoke drowsily now.
"He said our liaison had to end. The man could make trouble, weakling that he is."
It took a moment before Luke could register the news. Then he lifted his head, his spirit rallied by the threat. "To hell with him. To hell with St. Thomas. It's time for us to get out of this place, Victor. We could get out of this hick state and head to a big city. San Francisco maybe. Hell, half the city's gay, according to a magazine I read. We could get us a house. You might could teach. I could tend to lawns and such." The pupils of his blue eyes dilated with his excitement.
"Like a dream come true, eh?"
"Yessir. Exactly."
"I'm afraid you'll have to live your dream with someone else."
"What? What do you mean?" His forehead wrinkled as though he were trying to discern whether I was joking or not.
"This is the end, that's all. You'll get over me." I got up and put on my robe. As I started to walk away, Luke sprang up and grabbed my arm.
"Damn, Victor, you're serious, ain't you? Why? What use have you got for this place? You hate the fucking Church. Jesus, Victor, you can't just pitch me like garbage. I love you."
I stared coldly into his panic-stricken eyes, jerked my arm free, and resumed walking.
"Don't you love me, Victor?" he called after me. "Don't you love me?" His voice broke into a sob. "You bastard! Goddamn you. You hear me? Goddamn you!"
As usual, my appetite for blood overpowered me after coming so close to Luke's veins--one reason for my abruptness, which was perhaps more severe than usual. But I couldn't hunt in the mountains, not now with county police scouting the area, so I headed west toward the city. Attuned to the scent of blood, my body lifted and soared through the humid June night to a farmhouse several miles down the road, still far outside the city. The two-story abode rose on a hill, above fields of tobacco, a weather-beaten but sturdy structure. Inside a screened porch, I found the back door unlocked. A large calico cat, lounging on a dresser stripped of its drawers, watched me with curiosity as I entered.
The back door opened into a kitchen, where pots and pans lay piled on a drainboard and a table covered with a checkered cloth held a bowl of plastic fruit. Through a dining room, and a living room where a grandfather clock's pendulum ticked noisily, I followed the scent of blood. It became especially strong in the entrance hall. I mounted a staircase there, pausing at the top before a partially open door where I inhaled the rich odor of what I needed. But I advanced toward a second door where the smell was even stronger, more concentrated.
Bunk beds and another bed held three boys. The youngest, in the bottom bunk, couldn't have been more than three. The boy in the top bunk, his long lashes curling up against his cheeks, was probably 5 or 6. An older boy lay in the large bed, his sheet crumpled up at the foot, his tanned arms and legs dark against the linens. Through the open window, cicadas buzzed, but no breeze stirred the heavy summer air.
My chest was heaving now, so badly did I need blood. My fangs were ready to tear flesh. But which child should I take? The youngest and most tender? The oldest and biggest portion? A pity it would have been to have two brothers awaken in the morning to find the third drained of life. I could take all three; they were small enough. But the parents in the next room would be left with nothing. I cursed the speck of human softness surfacing now. "Worry be damned," I muttered, clapping my hand over the oldest boy's mouth. His eyes flashed open. He tried to scream, and flapped against the mattress like a fish in the bottom of a boat. Within seconds I had pierced his throat and, draining most of his blood, I quickly twisted his neck to end any lingering misery.
The youngest boy stirred, and for several seconds I remained frozen, inhaling the sweaty odor of my victim, the wet-dog scent that children get when they play outdoors. The child in the bottom bunk suddenly started to cry. I rushed to him and snapped his neck. With no time to drink in case the parents had stirred, and with my thirst already slaked, I bolted out the window and rode the thick, warm air back to the monastery.
# Seventeen
Restless and sullen, and now lonelier than ever, I cursed Joshu again and again during the next month, when he still failed to appear in a vision. I was sick of the inescapable fate of my feedings, sick of preying on drug addicts and prostitutes, people living on the streets in Knoxville. I resented being deprived of Luke, even though a better trophy required this sacrifice. I hadn't the patience for calculating ingenious ways to win Michael, and since he had kept a distance from me, I feared that my desire for an equal would go forever unmet.
Over the weeks Luke's reactions to my dismissal of him fluctuated as much as the whims of a Roman aristocrat's spoiled child. Initially, he slumped at the long dining room table with lowered eyes, mechanically moving his spoon but eating little. In the chapel he held his breviary in front of his face so no one could see that his lips weren't moving to the psalms. When we filed into the corridor after compline, he dragged his feet despondently.
Then several times he rallied himself to plead his case. The first time I was lying on my cot, turning the brittle pages of an ancient tome on the Dark Kingdom--a book I'd discovered half a millennium before. Some ambitious vampire had written the Latin text, I was sure of that. I'd lit a few candles to rest my eyes from what for me was the painful light of the chapel. Before Luke could knock, I felt his weak presence outside the door.
"Come in, Luke," I called. I was lying on my side, my elbow against the bed, my head propped on my hand.
The door opened. His eyes were red. He wore his habit, but his feet were in slippers.
"Well, don't stand there. Come in and shut the door."
He followed my orders but continued to hover sheepishly near the door. "I couldn't sleep."
"How unfortunate." I turned the page to a drawing of a voluptuous woman swathed in an ermine mantle, high priestess of the Dark Kingdom.
"Victor, if you're tired of me, I could try some new things. To make you feel really good. I know I ain't that experienced. But hell, I'd be willing to take a shot at anything. You're a temperamental type. Just like a horse I had once. Wouldn't let you near him for days and then would eat out of your hand like a puppy." He forced a smile. "I know that's all there is to it. The abbot... hell, you have him under your thumb. He ain't gonna do nothing if you wanna stick around here. Maybe sometime, though, you'll wanna go. We could go anywhere, right? Sky's the limit."
"Come here, Luke." I patted the mattress.
He eagerly obeyed, sitting down on the bed. His body left my book in shadow. I could smell tobacco on his habit. He'd started smoking, apparently to ease his misery.
"You're right. I am temperamental. I am weary of you. It can't be helped. There's nothing you can do. The best thing for you is to stop dreaming. Open your eyes. You're young. If you want to find love, get out of this perverse monastery and get yourself a lover. But don't expect anything from me. Now go to bed."
He started to tremble. His eyes welled and the tears tumbled down his face. "Damn, Victor. I can't put you out of my mind like you was some impure thought." He sobbed and then took a deep breath to collect himself. "I ain't never loved someone. I'd rather die than be without you." He grabbed my wrist.
"Then you'd better die, Luke. I'm telling you once and for all, you're a fool to hope."
He nodded his head dejectedly, rose, and slowly walked to the door.
"It's the best thing," I called after him.
There were a few more scenes like this one, then long letters blotched with tear stains, then angry outbursts during evening recreation, when in my boredom I would gravitate toward the common room. During one of those times he deliberately spilled his drink on me as I sat on the sofa, joking with one of the younger, better-looking brothers.
"Oh, I'm sorry, Brother," he said, with exaggerated remorse. "Hell, I'm clumsier than a blind old cat." He mopped my habit with his handkerchief.
"Never mind, Luke." I grabbed the handkerchief from him.
"Never mind? Hell no. I'm a damned good brother. I'm here to serve."
Two brothers seated near the piano interrupted their conversation at Luke's loud declaration, made, as best I could tell, with the help of a few glasses of wine. Michael, playing a board game with Brother George, the administrator, also glanced up and comprehended the situation at once. When Luke tried to tug the handkerchief away from me, his eyes filling with angry tears, Michael came over and reasoned with him.
"Luke, why don't you come help me a minute in the greenhouse."
"To hell with the greenhouse." Luke's blue eyes stood out against his flushed cheeks. "To hell with you." He turned, stumbling against a chair, and charged out of the room.
"Maybe you should go help him," I said to Michael.
His dark eyes peered at me with uncertainty and reserve, but also with something more. "No, he's best left alone for now."
After another similar scene, and after the abbot counseled Luke that the end of our intimacy was for his good, Luke rebelled again, calling me a cocksucker in chapel. When he ventured to my cell later that night, drunkenly remorseful and eager to plead for my affection, I clutched him by his slender throat. His eyes widened in horror. The acne on his cheeks stood out like blue match heads against his white face.
"Listen to me, damn you. I'll kill you if you don't shut your mouth and stay out of my way." I flung him to the floor.
"Kill me then!" he sobbed, rubbing his neck.
When I lifted my foot to kick him, I felt a twinge of conscience, even pity. I stepped over him and left my cell.
# Eighteen
The sheriff and his men found the remains of two more bodies, and their investigation turned up another 15 missing people. Although it had been a couple of months since I'd preyed on the mountain dwellers, the new findings alarmed the monks. The sheriff asked us to assemble so he could warn us to stay in after dark, when a culprit could lurk about with less danger of detection, and not to wander in the woods alone, even though no recent victims had been discovered.
That night I was taking some air at the edge of the woods, deliberating whether I should feed in the city or wait until the next night when the monks might be less alert. It was August, still warm, though I smelled rain in the heavy air and a breeze stirred the branches. A circle of light suddenly glowed near the monastery and grew larger as it approached me.
"Who's out there?" Michael called when he heard me snap a limb. He scanned the trees with his flashlight.
The darkness that shielded me from him did not, of course, shield him from me. I watched his athletic form, dressed like me in jeans and a T-shirt, hike with determined strides toward the forest.
As he neared, he inadvertently shined the torturous light in my eyes.
"You're blinding me, for God's sake," I said.
"Brother Victor?" He lowered the light. "What are you doing here?"
"Taking a walk. My usual midnight stroll."
"I see. You're not afraid of the lunatic roaming the mountainside." His tone was lighter, more agreeable than it had ever been with me.
"I'll take my chances. Where are you going?"
"I'm worried about a couple of kids up there. Their father went to the city to find work. They're alone. He called from Knoxville just before compline. Said people were talking about the new bodies and he got worried. I told him I'd check in on his kids."
"I thought the bodies they found had been there for quite a while. The killer probably has disappeared now."
"Most likely. But he's worried all the same." Michael switched off the flashlight. His eyes had evidently grown accustomed to the darkness. He gazed at me in the meager light of the moon, his eyes bold but no longer full of loathing.
"Do you plan to stay up there with them?" I asked.
He shook his head. "No. Just to walk them to a house about half a mile from theirs so they can stay with someone. The man there's a big guy with a lot of guns."
"I'll go with you."
"It's quite a trek. Up by that radio tower." He pointed to the north, maybe three miles from the spot.
"Good exercise," I said.
We wound through the thicket, tree frogs and cicadas clamoring, probably in anticipation of the coming shower. Lightning seared the sky in the direction we were heading, followed by a crack of thunder.
"Looks like we'll be soaked," Michael said, stopping to catch his breath. Guided by the flashlight's beam, we'd been steadily climbing toward a footpath.
Once we reached the path the hiking was easier, though we continued to move up the incline, through heavy growth. Big drops of rain sifted through the branches and then poured from the sky. Even under the partial shelter of the foliage we got drenched. But Michael forged on, not the least bit hindered by the storm. His stride was big, his muscular arms steadily swinging. I thought he could wrestle his god if he wanted, like Jacob of the Hebrews.
It took an hour and a half to reach our destination, a shack nestled in the thicket, not far from the path. No lights glowed in the house. The screenless windows beneath the shaky roof of the porch were wide open.
"Watch your step," Michael said, whisking his light across gaps in the rotting planks of the porch. He hammered on the door with his fist.
The white face of a little girl appeared at the window.
"It's Brother Michael, Dora Anne," he said.
"Ginny, Brother Michael's here!" The girl disappeared from the window and the door swung open. She was 5 or 6, with a missing front tooth, limp blond hair, and a pale face splattered with freckles. She hugged Michael's legs excitedly.
"Brother Michael?" A girl of 16 or 17 appeared in a T-shirt and cut-off jeans, folding her arms as though she were cold. Her short hair was tousled from sleeping. "What's a matter? Something wrong with Daddy?"
"No, no." Michael entered the cramped living room and I followed him. "He's just worried about you. He thinks you'd be safer staying with the Jacksons."
"They ain't found another body, have they?" Ginny said.
"I'm afraid so."
Ginny shivered and lit an oil lamp on a shelf. The pale light washed over a table with mismatched chairs, a sagging armchair, and two mattresses on the plywood floor, where water had puddled from our shoes. "Gives me the willies," she said. "We thought that crazy man done took off. Sheriff and his men been patrolling the woods."
"Did he cut off their heads?" Dora Anne looked up earnestly at Michael. "Ralph Jackson says he did. Then he stuck little needles into their body so they looked like porcupines."
"Ralph Jackson's just trying to scare you, Dora Anne. You just don't pay him any mind." Ginny turned to me, suspicion in her eyes. "You a brother too?"
"Yes. Brother Victor."
She nodded as though she had her doubts. "We'll be just fine, Brother Michael. Ain't no need to bother the Jacksons this time a night. I got Daddy's shotgun here and I can use it."
"How come you ain't never come up to see us before?" Dora Anne said to me.
"I'm new at the monastery." The little girl roused my appetite. Perhaps I would go down to the city to feed, I thought. If enough time remained.
"You wanna see a snake?" Dora said to me. "I got it in a jar."
"You can show him another time." Ginny grabbed Dora Anne's arm as she started toward the back door.
"Your father wants you to go to the Jacksons'," Brother Michael said to Ginny. "I think it's a good idea."
Ginny nodded. "Well, if you say so, Brother. Just hate to put 'em out. And Dora Anne ain't got nothing but flip-flops for walkin' in."
"I'll carry her," I said.
Michael looked at me with curiosity, as though I continued to surprise him with my benevolence.
"Yaaay! A piggyback ride!" Dora Anne jumped up and down, clapping her hands.
We waited until the rain slackened before heading for the Jacksons'. Dora Anne chattered the whole way, tugging at tree branches, squirming excitedly against my back, despite her sister's reprimands. Once we delivered them to the family, at nearly 2 in the morning, we tramped back to the monastery.
As I followed Michael, in answer to my question, he explained that many of the destitute mountain people chose to remain so far from civilization after the mines closed because of ignorance and fear of the city. And because of incestuous relationships they wanted to safeguard from the authorities. "I know one girl with three babies by her father," he said.
"Surely the sheriff must know about it?" Following Michael's lead, I stepped across a large puddle.
"He knows. But he also knows how life is here. The girl wouldn't leave her father if you paid her. And if they took her by force, she'd probably kill herself."
"That's a pity."
Michael reeled around. "You don't really mean that, do you?"
"No, I don't. Do you think it's a pity?"
Michael looked hard at me, despite the darkness. He could not make out my expression, but I could make out his: a gaze that recognized an affinity between us. "There are worse things," he said and turned away.
We made the rest of the trek in silence, me savoring our new intimacy, Michael no doubt pondering it. We spoke only to warn each other of a treacherous limb or gully. But when we reached the fringe of the forest, where we could walk side by side, he spoke again.
"Thank you for being stern with Luke. I'm sorry if I misjudged you." He kept his eyes forward.
"And what if you didn't misjudge me?"
"Then you deserve even more credit for cutting him free."
I smiled in the darkness, though he had spoken quite seriously. "Have you talked to him? Is he still desperate?"
"He'll get over it. It's just infatuation."
"You sound as though you speak from experience."
Michael stopped. We were on the monastery property now. Our sneakers were sopping wet from the marshy ground. Michael steadied himself on my arm to remove his shoes. "It's no secret that monastic life attracts homosexuals. Men get to live with men, with impunity, with praise, at least from the Catholic world. I've had my share of attractions. Thanks." His eyes turned to mine when he straightened up and then darted away in discomfort.
"But you are dedicated to celibacy?" The opportunity seemed ripe for pressing him.
"I'm dedicated to God."
"And what does that mean, to be dedicated to God?"
"I discover it, day to day, like everybody else."
I grabbed Michael by the arm and stopped in my tracks, turning him toward me. "Let's stop playing games now, Michael. The intensity between us is as palpable as this flesh." I squeezed his firm arms. "You accused me of seducing Luke, but I wouldn't even attempt to seduce you, I desire you far too much for that. I can see you struggling, not against me but against yourself."
"Does this amuse you?"
"It gives me hope. May I hope?"
"Do you know why I struggle? It's not against my attraction to you. It's against the evil I find in my soul, the same evil I see reflected in your eyes. Both of us are proud, rebellious, but it's not just that. It's a coldness, like the ice trapping Satan at the bottom of Dante's inferno. A coldness that cuts us off from everyone."
"You've cared for Luke."
"I have a duty toward him. That's different."
"We're not sentimental types. Our hearts bother with nothing short of passion."
He continued gazing steadily into my eyes. I pulled him to me and kissed him. His full lips responded, his body pressed against mine. For a moment we merged with the loftiest mountain peak behind us, now star-crowned in the clearing skies.
"We can leave this place," I whispered in his ear as I embraced him.
He made no answer.
We kissed again in the dark entrance hall of the monastery and separated. My soul blazed for him. I longed for blood to calm myself, but dawn was too close now and I retreated to my cell. A figure sat wrapped in shadows outside the door.
"What are you doing here, Luke?" I nudged him with my foot to awaken him.
He shook off his sleep and got to his feet. "Where did y'all go?" he said accusingly.
"Go to bed." I opened my door and he followed me into the dark chamber.
"You was with Mike. I seen him going out to you."
He grabbed my arm. His eyes were filled with desperation.
I shook off his grip. "I said go to bed."
"No, Victor, you ain't gonna get away with this. No sir, you traitor. Both of you. You ain't gonna do this to me. Know what I'll do?" He was nearly hysterical now, his voice trembling, his hands making fists at his sides. "I'll go to the abbot with everything. Hell, I'll confess everything about you and me. I'll tell him 'bout every time we fucked, every time you sucked my dick. You'll be outta here so fast you won't know what hit you."
I grabbed him by the shoulders. "Listen to me, boy. You'll keep your mouth shut or you'll be out of here too."
"No!" He tried to free himself from my grip. "I'll tell him, you bastard. You think I care what happens to me?"
I had no choice now. As Luke struggled in my hands, in the darkness of my cell, I plunged my fangs into his throat. For a moment, he melted into my arms, as though once again I were mounting him, and I felt his desire for me flare. But as I siphoned the warm, young blood, he collapsed, unconscious. I continued drinking until his heart, whose rhythm had moved from a frantic speed to the tempo of a solemn war drum, sounded a final beat.
Deep into the woods I carried his body, hiding it in the underbrush. I made it back to the monastery just as faint light rimmed the mountains. Quietly, I slipped into my tomb, as filled with dread as with the blood of my victim.
# V
The Beloved
# Nineteen
Tolling, tolling, tolling. Throughout my fitful sleep in the close coffin, the bass voice of a bell, like an ancient prophet's lament, intruded into the dreams that followed my killing of Luke. When my nerves finally registered the sinking of the sun, my eyes flashed open in the dark chamber and I thought I heard the bell still. But silence prevailed. Around me the dusty skeletons lay, complacent, I thought, in their immobility, their finality. There were brief moments when I envied them.
The taste of Luke's blood was in my mouth. I thought I would retch. To kill a stranger, in the heat of lust for blood, in dire need of it, was like running a sword through an enemy during combat. No matter how young, how beautiful, how brave the opponent, I plunged the sword up under the ribs as a matter of survival. But to kill a boy who fawned on me, though he hung like a stone around my neck, made me realize the darkness, the suffocating darkness of my life. The tight dimensions of the pine box, the close vault that held it, trapping awful shadows: How many times did I awake that night to see the chamber as a symbol of what I carried with me when, like a rodent, I crawled from its confines? But when upon my mind's screen were projected Michael's eyes, dark and intense, solemn and keen as a winter night's stormy sky, the darkness held hope, life that mocked the pious, insipid light of day.
I rose, eager to find Michael, stripping off my blood-splattered T-shirt and stuffing it under the mattress in my cell until I could bury it later. In a fresh habit, I ascended the stairs to the entry hall of the monastery. There, to my irritation, I found the sober-faced abbot awaiting me.
He greeted me with a nod. "Would you come into my office a moment, Brother Victor?"
I followed him, taking a seat in the leather chair across from his desk, where he seated himself. A pair of dim lamps glowed, leaving unbroken the shadows stretched across the books and high corners of the room.
"What is it, Brother Matthew?"
"Brother Luke has disappeared."
His unusual directness amused me. I imagined that, determined to overcome the intimidation he felt in my presence, he had rehearsed this confrontation.
"He did not show up for lauds this morning, and when one of the brothers went to check on him, he found Luke's bed still made. We searched the greenhouse, the grounds, finally the whole monastery. There was no sign of him."
"You think he ran away?" I asked. I leaned back and crossed my legs.
"What do you think, Brother Victor?" The abbot removed his glasses and managed to look directly into my eyes.
"How should I know, Brother Matthew? After our conversation in the library I did exactly as I promised. I ended my association with Brother Luke."
"He made no attempt to keep up your friendship?"
"Of course he tried. As I told you, he was infatuated. But I insisted. Did he leave a note?" In the brief time I'd had to dispose of Luke's body there was no opportunity to scribble an explanation. Besides, I couldn't forge his hand, nor could I print something from a computer since he didn't have access to one and therefore couldn't have done that himself.
"No. Nothing. I hoped he might have said something to you about his whereabouts. I thought he must have run away out of anger or desperation, and that he might have warned you."
I shook my head. "I'm sorry. I know nothing."
Brother Matthew searched the wall behind me, his brow furrowed in distress, his delicate finger pressing his lips. Then he leveled his troubled gaze at me.
"Did you hear or see anything last night, any unusual noises inside the monastery, or outside? Did you go outside at all last night?"
I knew this was a test. Only in that second did I understand how little the abbot trusted me. "Yes, I took a walk. I know the sheriff told us to stay in, but I can't stay cooped up. I ran into Brother Michael. I'm surprised he didn't say anything to you. He was worried about some children left alone. I went up with him to take them to a safer place."
"Brother Michael did mention it." The abbot's eyes fell for a moment and then turned back on me after he'd composed himself. "I assembled the brothers, except for you, of course, because it was daylight, to explain the situation and to ask if they heard or saw anything last night. Brother Michael told me you'd gone up the mountain and didn't see anything unusual. I just wondered if you had noticed anything before or after your trip."
"I see." I gazed at him until he lowered his eyes again. "No, I didn't notice anything. So you're concerned that Luke might have been attacked by the crazy man roaming the mountains?"
"It's a possibility we have to consider." He stood and went to the window. "Brother Luke took nothing with him, as far as we can tell. He had no means of transportation. To hitchhike into town in the middle of the night, on a country road... well, that seems pretty unlikely. At least the sheriff thinks so. He's had men searching the road into Knoxville all day."
"Are they searching the woods too?"
The abbot turned and rested his hands on the back of his chair. "Yes. If Luke did wander into the woods. Well, just pray for him, Brother."
I nodded.
"Sheriff Johnson will be here shortly to question you. He's already spoken to the others."
"I don't see that I can help him."
"Still, he wants to cover all the bases."
When the sheriff arrived, the abbot left us alone in his office. The stocky, 50ish man sat on the edge of a wing-back chair and, resting his elbows on his knees, jotted my responses on a clipboard. He looked exhausted, straining to see his own notes through his bifocals. He wore a khaki uniform with short sleeves and two buttons open at the neck in the heat. His reddish beard needed trimming.
"Now, Brother," he drawled, glancing up at me with steady gray eyes. "You say you was outside last night till what time?"
"Nearly dawn. It took us that long to get back from the Jacksons' house. Didn't Brother Michael tell you all this?" I made little attempt to hide my impatience.
"All a formality, Brother. Bear with me here." He jotted down something. "And you saw or heard nothing suspicious?"
"It was raining. We were talking. I didn't notice anything."
"Mmm-hmm." He scribbled again.
"You think there was foul play?"
"Too early to know." He straightened up to yawn, sat back in the chair, and crossed his legs as though we were having a very amiable conversation. "You and Brother Luke was pretty close, I hear." He removed his bifocals and put them in his shirt pocket.
"You could say that, I suppose." I stared steadily at him.
"You suppose? Were y'all on the outs?" He tapped his pen against a gold cap on a bottom tooth as he peered at me with interest.
"Luke was making a pest of himself. He followed me around like a dog. I told him to find another hero. I'm sure the abbot told you he advised me to talk to Luke. He probably also told you Luke was upset with me. Which explains why he ran away."
"If he did run away."
"Yes."
"You don't seem too bent out of shape about Brother Luke. I guess it's probably good to keep a level head when there's nothing you can do." He continued to eye me with interest.
"Exactly. If there is something I can do--"
"Well, in fact there is one thing you might help me with. You mind if I take a peek at your bedroom? Abbot said it was down in the cellar under the church."
"Yes. I have a skin condition. I can't take any amount of sunlight. Why do you want to see my cell?" Civility had never come natural to me, and I made little effort to hide my resentment now.
"Oh, just a formality. You know. Got to poke around the whole place."
The sheriff followed me down the dark stairs. When we got to the crypt he stopped to examine the tablets on the walls.
"Well, look at that. This 'un died pert near 100 years ago." He had put on his glasses to read the dates on the middle tablet. "I guess the one down here's the newest dearly departed." He advanced toward my tomb.
"Yes." I stepped forward. "Do you mind if we get this inspection over? I have some work to do."
"'Course, Brother. 'Course. Lead on."
He looked around my cell, under the bed, under the desk. "You mind?" He pointed to the dresser and opened each drawer.
He approached the bed again and my heart stirred. Under the mattress lay the bloody T-shirt. Boring into his mind with my glance, I willed him away from it. He stopped in his tracks, scratched his head as though he'd forgotten what he was doing, and glanced at his clipboard. "Well, that'll do her," he said.
After nosing around in the boiler room and the storage areas, he departed.
Compline was already over, and the monks had retired. I grabbed the bloody T-shirt and tossed it into my tomb. I went out into the night for air, but the humid atmosphere weighed me down as if the ocean itself pressed me to its depths.
# Twenty
After 2,000 years, I found mystery in few things. But Michael proved to be an enigma. After feeling our eyes, our souls connect, speaking without words as we traipsed through the woods, as we crossed the grounds behind the monastery the night I sucked the life from Luke, I expected to see him nightly, to hold secret rendezvous of sweet passion, to lead him away from St. Thomas, ultimately to the Kingdom of Darkness. But I walked the grounds, the woods, alone for more than a week. I saw no sign of him under the September moon.
In chapel he gazed steadily from his breviary to the high altar, as though he conducted telepathic communications with one of the plaster saints in the niches. He knelt when the others knelt, but not meekly with his head bowed. Even as he knelt, his keen eyes bored into some private apparition. I frequently drew his gaze to me with my own bold stare, and for many seconds we would survey one another as though we placidly watched our reflections in a mirror. But he did not linger after prayers, and I did not pursue him.
Until finally I noticed a change in his expression. His contemplation of me across the aisle warmed to desire, and I felt in an instant how much he craved me. That night I found him in the greenhouse, dark except for a lone bulb in the far corner. Outside, branches brushed wistfully against the glass roof. Passing tables of ivy twisting down to the slate floor and row upon row of bright annuals, I found Michael bent over a cart of herbs divided into small white cartons. He wore gym trunks and a tank top. His shoulders and arms were brown from the sun, his hair gathered into a ponytail. He glanced up as I approached, and then continued removing the herbs from their containers and inserting them into trays of dirt.
"You haven't come to me," I said. Humidifiers made the air practically unbearable even though I had changed out of my habit after the sheriff left and slipped into light clothing.
"I needed time." With a small trowel, he loosened a sprig of parsley from its container and planted it in the earth.
"Time? For what? To confirm my desire for you or yours for me?"
"Neither." He glanced up as he continued transplanting the herbs. "A very wise woman once taught me to wait, to listen."
"Listen? To what? To God?" I folded my arms.
"To the night, to the wind, to my fantasies, my nightmares. To spirits, too." He looked up again.
"And what kind of spirits speak to you?" I wasn't sure if he was playing with me.
"All kinds. Evil. Good. Spirits of ages past. Spirits speaking through the pages of books. The old woman, Jana, was my grandmother, my mother's mother, a Creole in New Orleans, where I grew up. She ran a tarot shop in the French Quarter, voodoo dolls on the walls, and crucifixes, altars for saints, candles burning everywhere. She had quite a clientele. Sometimes I sat in the corner while she read their tarot cards."
"Your grandmother raised you?"
"No. My father. A thick-headed Italian drunk. My mother died when I was a baby. But I spoke to her, through Jana. She burned spices at her mausoleum in St. Louis Cemetery."
"You are serious."
He looked up as though surprised I had doubted.
"Come walk with me, in the woods. Tell me about this sorceress grandmother of yours."
"Let me finish these first. I'll meet you on the grounds in half an hour."
Thirty minutes later, he strode up the incline from the buildings, cutting a confident, athletic silhouette in the moonlight. I led the way to a familiar path. When we reached it, I turned to him.
"Not a stickler about the Grand Silence, are you?"
"The Sabbath is made for man, not man for the Sabbath." He stooped to inspect a glittering stone, then hurled it into the trees.
"Tell me more about your adventures in spiritualism." I longed to take him in my arms, but his reserve stopped me.
He shrugged. "What's to tell? I spoke to my mother. I've done it more than once. I've spoken to Jana too, now that she's gone. I learned quite a bit from her."
"Such as?"
"Such as discerning forces at work around me, attuning myself to them." He spoke as though he referred to a power no more unusual than the ability to tell whether the moon was full.
"Forces? You mean evil spirits or some such thing?"
He looked at me with curiosity. "Evil and good."
"Not a very monklike thing, is it? Why enter a monastery if you want to tell fortunes in the French Quarter?"
"Why did I enter? I don't want to bore you with all of that."
I grabbed his arm. "You know nothing you could say would bore me."
"Yes, I know."
"Then tell me."
We had steadily climbed the hillside to the clearing where Luke and I used to come, about 100 meters from where I had disposed of his body. Michael's eyes had adjusted to the dark, and during our walk he'd turned off the flashlight he carried. But now he turned it back on and scanned the clearing.
"What are you looking for?"
"Just a place to sit. Over there." He pointed to the fallen tree.
We sat on the dew-dampened ground, leaning against the tree. Michael folded his legs yoga-style and leaned his head back to view the stars.
"This is my fifth year in these mountains," he said. "The life of a monk has fascinated me since childhood. The ritual, the silence, the solitude. Working with the earth. Poring through volumes on philosophy and mysticism. The sublime chants."
"And celibacy?"
"As a discipline, it has its place. It strengthens the soul."
"For what purpose? To overcome evil, I suppose."
He smiled at my cynical tone. "Wasn't it idealism that brought you to the monastery?"
"No. It was anger. Survival, too, and power."
He registered the passion in my words but made no response, only turning his eyes back to the sky. "I used to think I had to fight evil. But Jana corrected me. Evil, she said, lurks everywhere, even in your own soul--especially there. Never underestimate it. Don't pretend to banish it. Respect it. Listen to it, and even evil will speak to you."
"Don't tell me you have a closet full of voodoo dolls."
Michael laughed and slapped my leg.
I grasped his hand, shoved him gently to the ground, and, lying over him, kissed him. The heat of desire flashed through my veins. His heart pounded too, through his meaty chest. I felt him stiffen against my loins. So much blood, so close to my thirsty soul, pumping so mightily, like the raging waters behind a dam.
"No, not now," he said.
"I want you."
"Not now!" He pushed me off him.
I was furious. I wanted to shout, _Do you know who I am?_ The words echoed through my head, discipline alone restraining my tongue from speaking them.
But Michael's eyes told me he knew what I was thinking anyway. And his unspoken response, as clear and firm as his own voice, sounded in my brain: "The link between body and soul--it confuses me, Victor. I'm learning."
# Twenty-one
Autumn came and vanished, the oak tree in the courtyard surrendering its last leaves in mid-November when, by day, I knew, the sky grew ashen and more intrusive through the naked branches of the woods. With the passing months fire raged through my veins and my spirit marched toward the trophy I'd coveted for two millennia: life with an eternal comrade.
I met Michael every night, and while the others wasted their hours in mortal sleep, we trod the woods under skies moonlit or black. We discussed occult and mystical volumes in the shadows of the library's stacks, and embraced in the humid jungle of the greenhouse.
Yet both of us guarded our secrets: I, my predatory and preternatural nature; he, the reason for his caution, his reluctance to yield his body to me despite the passion he could not hide.
I longed to take him, wholly, lustfully, his soul along with his body. But a companion worthy of me must surrender freely. His strength, his mysterious mind, raised him higher and higher in my estimation. Still, as the energy between our souls intensified like the friction of pistons in an engine, my restraint threatened to explode.
In the meantime, after our nocturnal rendezvous, I continued to feed on undesirables in the city--prostitutes, vagrants, drug addicts holed up in condemned shacks. Driven by my desire for Michael, I tore at jugulars with a fury, lapped up warm blood from full breasts, sated myself to the point of drunkenness on a slew of victims in one night. The newspaper headlines flashed my rampages to the whole city, which was terrified even though I had restricted my prey to undesirables. The police had established that the murders took place between midnight and dawn. They kept surveillance not only in the red light district and the projects, but in the other urban neighborhoods, where half their fleet of cars patrolled the streets through the night.
The vast number of murders brought in federal agents to investigate--not only the killings in the city, but those in the mountains too. Yet again in my long life, the whole cursed area became notorious, the central subject of the local and national media. I knew that as technology advanced, investigators could easily trace my path of destruction across the globe. It was only a matter of time before they linked the blood feasts in Knoxville with those in the English village I'd escaped.
A group of monks gathered around the television one night to watch a national report on the massacres.
The expressionless newscaster, unnaturally tan, peered into the camera. "Federal agents still search for leads in what has now become an international crime. Scotland Yard believes a cult could be at work in the killings, most of which involved the draining of the victim's blood, usually through the jugular vein, vampire-style. In fact, U.S. investigators believe a satanic cult steeped in vampire lore is behind the massacres. We interviewed FBI Director Walter Searling today in Washington."
Here the camera flashed to a hallway in the FBI building. A lanky blond reporter held a microphone in front of the neat, mustachioed director.
"We have followed the pattern of killings," he said, "and we're certain that a series of murders in Boyshire, England, were committed by the same group as those in Knoxville. We've been working closely with Scotland Yard, and we are certain we will find the perpetrators."
"So you're certain that a group of people are responsible for the crimes?" The reporter took the microphone away from the director's lips just long enough to ask her question.
"No, we are not, although it would be more feasible considering the widespread nature of the killings. We might be trailing someone like the Boston Strangler or Jack the Ripper, but psychologists tell us that serial killers usually restrict themselves to specific geographical areas."
"What about Ted Bundy?"
"There are always exceptions. We're considering the possibility that one person is acting alone, but it's most likely a group."
"Any clues at all about the identity of the killer or killers?"
"We're putting together a profile of the perpetrators. It's just a matter of time. In the meantime, local police have increased surveillance in the Knoxville area."
Following the interview several residents of Knoxville recounted to another reporter the grisly scenes I'd left behind. The brothers leaned forward on their chairs or shook their heads.
"What a god-awful thing." Brother Raymond took a drink from a bottle of beer and wiped his lips.
"They shouldn't show this gruesome stuff on television." Brother Herbert, a big-jowled professor on sabbatical from a university in Europe, frowned as the camera panned across a bloody bed.
The others sighed and moaned, and for the rest of the social hour the killings formed the topic of conversations around the coffee table and the bar. Michael had watched the news program intensely, but I saw nothing in his expression suggesting he suspected the truth.
How long, I wondered, until the investigators found monasteries at the center of both massacres? How long before they came to hunt me down in the crypt of St. Thomas?
The moment was ripe for claiming a place in the Dark Kingdom. Once I'd secured my consort, I could leave the detestable life of feedings and tombs and flights from those who hunted me.
The silence of Joshu over the summer and autumn months went unbroken, a sign that I had finally found his replacement. But though no visions of Joshu visited me, other apparitions did. Often, as I slumbered in the dank mausoleum, Tiresia's eyes, full of malice and sensuality, teased me in my dreams. "What are you waiting for, Victor?" she would say. "Your world awaits you. The time has come." Her ebony limbs and breasts cut a silhouette into a full white moon. A sleek mare galloped across the sky and Tiresia's creator and consort, a barrel-chested, hairy soldier, dismounted and wrapped her in his embrace.
Horrible apparitions haunted me too, apparitions of Luke. As I slept in the coffin or became entranced by a demonic book, he would moan and call my name from the woods. His voice would come closer and closer and finally he would stand naked before me, the gash in his throat oozing blood that streamed down his pallid chest. His listless eyes would fall on me and, panting for air, he would speak:
"Let me be your consort, Victor. Take me from this hell."
"What hell?" I would demand.
"It's cold here. Like ice, Victor, like ice. I'm freezing." He would futilely rub his arms. "Take me up."
"You've passed to another world. It's too late. Go back, damn you."
During the vision I would will Luke to vanish, the way the dreamer tries to alter a nightmare just as the demon's hand reaches for him, but my mind had no effect. Plaintive Luke remained, gasping for air, repeating his speech, until I reached out to kill him once again, when he would bare his teeth at me and fade into nothing.
Michael, on the other hand, appeared to enjoy more comforting visits from the supernatural world. When he'd first told me of Jana and his dabblings in spiritualism, I was amused. Not that I doubted the communications he received: Every human has a sixth sense, as they say, though in most it goes undiscovered or ignored. However, the magnitude of his experiences and the identity of his visitor roused my interest and envy.
The first time I witnessed his ecstatic seizure was a December night during Advent, when purple cloths draped the altar and pulpit. When Michael failed to show up in the crypt at the appointed hour, I searched for him and found him in the dark chapel, kneeling before the crucifix on the high altar, completely naked, his hands stretched out as though he were crucified. Light from the vigil lamp suspended by a chain near the tabernacle cast a red glow across his face.
"Michael, what are you doing?"
I touched his shoulder but he gave no sign of recognizing me. His eyes, like the eyes of a corpse, stared ahead as though they focused on nothing at all. His body was as cold as a corpse, too. Giving up my attempt to shake him from this reverie, I sat on the sanctuary steps to observe him and, if I could, to enter into his strange communion with the world beyond.
For a quarter of an hour his muscular arms stayed frozen in place, his body as immobile as the statues above him. Then, as though riding a mighty jet of air, he rose, locked into the same position, until he was level with the crucifix mounted on the gabled apex of the reredos.
Then he chanted over and over "O Crux, ave spes unica," words from a hymn I particularly despised--the damned "placing hope in the cross." His clear voice reached a crescendo and then faded. Finally he struck his breast and muttered, "Eripe me, Domine, ab homine malo." _Who was the evil man he sought refuge from?_ I wondered. _I who could give him what heaven only pretended to give?_
Suddenly the pallor of the marble corpus of the crucifix melted away like a coat of paint, revealing the brown flesh, the true features of Joshu himself.
"Joshu!" I yelled, jumping to my feet. Levitating myself to the pair floating near the vaulted ceiling, I tried to grasp first Joshu, then Michael, but what seemed to be a wall of glass prevented me from making contact. I remained a spectator.
Now Michael's body relaxed and spun around toward me. His cock was erect. He licked his lips and sensuously caressed his chest. Joshu approached him and, flinging his arms around his waist, kissed his neck. In that moment I thought I beheld twin Joshus, so alike were their sinewy bodies, their strong features and dark coloring. Michael panted, Joshu's hands remaining fixed around him, and he finally moaned as though climaxing. But though his cock remained stiffened by the blood of passion, nothing spewed forth at the moment of orgasm.
Joshu released him and resumed his place on the reredos.
"No, Joshu!" I cried. "Come back, damn you!"
His placid face showed no sign of hearing my demand. The human color faded from his flesh and he solidified once again into the corpus that resembled a pious artist's fantasy, not the man who smelled of Hebrew wine and spices, of labor's musky sweat, of the desert and the sea.
Michael floated back to the sanctuary and landed in a heap near his crumpled habit. Descending to him, I crouched to touch his forehead, before like ice, now burning with fever. Unconscious and trembling, he moaned and called out Joshu's name. I scooped his naked body in my arms, along with his clothes, and carried him to his cell by way of the corridor along the library, a safer route than past the abbot's rooms. I lay him upon his narrow bed and, checking the dark corridor before closing the door, rinsed a washcloth with cool water from the basin in the corner and mopped his face and neck.
For more than an hour I sat next to his bed in the darkness, bathing his flesh, until finally the fever broke and his limbs calmed. Color filled his wan face again, as it had the marble cheeks of Joshu's corpus, and he breathed quietly in a sound sleep. Near dawn, while I sat lost in thought, his eyes opened and he turned his face toward me.
"Victor? Is that you?"
I leaned over and touched his cheek.
"Did I... was there an apparition? Where did you find me?"
"In the chapel. Raving like a lunatic. Do you remember what you saw?"
He shook his head and kissed my hand.
"What do the others think of your raptures?"
"No one knows. Not now. Luke did. He found me a couple of times, outside, near the woods. I told him it was epilepsy."
"Maybe it is."
"You know it's not. You know." He gazed at me admiringly.
"Victor..." He brushed my arm with his lips.
I crawled into the bed next to him and found his lips, heated now but not with fever. His kiss stirred my desire for him, the desire I'd struggled to restrain until his will moved, until now, when its motion surged like a flooded stream. I stripped off my clothes and into that stream cast my soul, taking him in my arms, kissing his neck, his chest, taking in my mouth his engorged cock as though I were a hungry infant at the breast.
When he moaned in pleasure, his chest rising and falling, not with the regularity I had seen in the weight room but with urgency and a quickening, erratic rhythm, with an abandonment of predictability and control, I entered him. The warm passage expanded without a hint of resistance, as though my full cock belonged there, wrapped in the warm, bloody membrane. I drilled into that blood, pumped seconds ago from the excited heart of my beloved. I felt the blood there, and smelled blood on his lips. I felt blood coursing through his every vein and artery, bearing in its red tide his very soul--to me, to me. I longed for the blood, longed to drink of that soul. My fangs, now extended to their full length, sought it, but as the spasms of consummation shook his sweaty body, I turned my head away. He must be my lover, not my victim.
# VI
The Proposal
# Twenty-two
On Christmas, fresh snow piled three inches on the branches and drifted against the stone walls on the east side of St. Thomas. Irrationally or not, I breathed more easily, thinking that by shrouding the ground, the snow would trap the ghost of Luke, who continued to haunt me. But it was no use. As dried as it now was, I could smell Luke's blood on the shirt I'd deposited in my tomb. Destroying it or burying it in the woods was out of the question now that federal agents were searching the area with renewed vigor. News reports only speculated as to why the agents had returned to the woods above the monastery, since the director would reveal nothing about the agency's motives. I knew it was because they needed Luke's body to link the crimes to St. Thomas. Even someone without my keen perceptions would sense the sheriff had suspected me, and the drill I was subjected to by one of the agents only confirmed this impression.
"Now, Brother Victor," he said, sitting behind the abbot's desk with a pad and an expensive fountain pen. "How would you classify your relationship with Brother Luke?" His arrogant manner heightened his attractiveness. He had a boyish face but a stocky, well-built frame. His collar fit tight around his thick neck.
"I've been over this with the sheriff." I stared straight into his eyes, but he betrayed no intimidation.
"Formality, Brother. When we need to go over the same ground, we do. Right now, I need for you to answer me."
"We were friends." I crossed my legs and tried to keep myself calm. "He was young, looking for a mentor. I was the mentor."
"Just a mentor?"
"What are you asking?"
"Are you a homosexual, Brother?" His eyes remained leveled with mine. The agency had chosen well when they hired this cold, direct official.
"No."
"I've heard different."
"So? It's not true. I'm a monk. Celibate, Mr. Andrews. What kind of a monastery do you think we have here?"
He recapped his fountain pen. "That's the exact question we're working on, Brother. When we piece it together, you'll be the first to know."
I left him in the office when he'd finished his interrogation. Which of the monks had reported my unnatural liaison with Luke, I wondered. The pathetic abbot? One of the head-in-the-clouds scholars who'd for once noticed something outside his own esoteric world? It didn't matter. Even if I did covet my monastic refuge, I never sought to please my foolish companions. No witch-hunts would be held within the sacred walls now anyway, no matter who in the monastery observed my movements. Creating a scandal in the world without could injure the reputation of the order, could result in the abbot's demotion. Perhaps the arrogant agent himself had surmised the truth. So be it. My time was coming.
A freak blizzard stopped the FBI's search for a whole day. That night, the wind howling in my ears, I sped through the air to the spot where I'd buried Luke's remains. A downed tree sank into the ravine where I'd dug a grave with one hand while I lifted the trunk with the other. Snow filled the entire ditch and camouflaged even the tree itself beneath a drift.
I'd no choice now. If the agents discovered Luke's remains, I'd have to flee, and flee without Michael. The time was close. Between us, close as the dawn, lay the pact, itself a dawn whose bright rays even I could bask in.
The icy flakes coated my brows, my hair by now. The cold stung my hands as I clawed beneath the tree, which I'd shifted with some difficulty, laden as it was with snow. Blinking in the storm, I kept an eye out for ambitious FBI interlopers while my hand groped for the remnants of my prey. I felt a shock of brittle hair, then pulled up Luke's skull, skinless now after all these months, and dropped it into the garbage bag I'd brought with me. His habit, his loose bones, his shoes came next, all embedded in the frozen earth, which seemed to tighten its grip the more I dug and pulled. But at last I had every scrap of him. I replaced the tree, scooped snow back into the ravine, and, pitting myself against the storm, flew back to the monastery.
Nightmares or not, the skeleton could be concealed in only one place: the mausoleum. The stream was frozen and would be dragged when it thawed--soon, given the usually mild Tennessee winters. Transporting the bones to another location, to a farm or beyond the city, involved the risk of being seen. Besides, I believed the key to conquering my nightmares was to face the ghost who haunted me. What had I to fear from a cluster of bones, I who had slept in the midst of bones for 2,000 years?
I trod quietly through the dark entryway with my treasure and stole down the stone stairs to the crypt. The mausoleum's gate whined as I opened it. I ducked into the tomb and jerked the lid from one of the vaults. On the neat skeleton of a monk, clad in the tatters of his habit, I dumped the new, jumbled bones and the habit and shoes of a youth who, if not for his idiocy, would still live.
When day arrived I slept like a baby.
# Twenty-three
Twice more I witnessed Michael's strange seizures. The first time, he repeated his conference with the transformed marble corpus of Joshu, chanting the same hymn, muttering the same Latin injunction about an evil man. Again I carried him to his cell and we made passionate love. The second fit overcame him in the woods in early February, after the agents had given up their futile search for Luke's remains. The snow had long ago melted. In the prematurely balmy air we were hiking along the path through the woods when Michael launched into a sprint. At first I thought he was playing.
"Just try to get away!" I called after him.
Waiting until he had disappeared beyond a slope, I flew at the speed of my thoughts, arriving at a place where the path divided, part of it disappearing into the trees and winding around an abandoned shack. After standing against an oak for a good time, I called to him.
"Is this a challenge? All right then, I accept." I laughed, believing he hid from me in the woods.
But as I tramped up the hill, a light caught my eye, fire blazing through the trees. I found him, a glimmering cloak draped around his naked shoulders, his eyes raised to a limb where an old woman rested. A turban like the headdresses worn by American slaves wrapped her small, dark head. A large, bold print splashed her robe with purple and green, visible to me despite the darkness.
"This is Jana, Victor." Michael threw his arm around me.
His eyes were glazed, as though he spoke in his sleep.
"Yes." I studied the shriveled hag.
"We meet at last, good Victor," she said. "The World of Darkness bids you greetings." She spoke with a heavy accent--Creole, I assumed. Her eyes were full of mischief.
"The Dark Kingdom? You reside there?"
"I reside nowhere. I flit about in the darkness."
"What kind of apparition are you?" I asked, doubtful.
"One conjured by this one." She glanced at Michael, who continued looking on in a daze.
"Ah, only a shadow then," I said.
"A projection of the shadow within Michael's own soul. He would know evil."
"For what purpose?"
"To reckon with it, to understand its power."
I laughed. "Understand it? Vanish, hag, there's nothing you can teach."
Her eyes returned to Michael. "Witness the power of evil, boy." She faded to a transparent image and then vanished altogether.
Michael collapsed. The cloak had disappeared along with the apparition, leaving him naked. His body could have been a corpse, it was so cold. I dressed him and lay with him near the fallen tree until he opened his eyes and inhaled deeply the balmy air, spiced with the clean scent of firs.
"It's very dark, Victor."
"Yes." I stroked his hair. "I've learned to love the night. With my illness, I have no choice. Do you remember your apparition?"
He shook his head. "Something sad. That's all I know."
After he regained his sense of orientation, we relaxed together against the fallen tree, and I asked him about his mystical experiences.
He stared thoughtfully toward the stars. "I'm not sure what to say. I've had them ever since I can remember. I know when they come. I wake up naked and confused. Sometimes I remember the content exactly. Sometimes it's just a feeling of doom or lightness, depending."
"What about this time?" I said, grasping his hand. "What did you feel?"
He shrugged. "Fear."
"Of what?"
"I don't know. Who can make sense of dreams?"
"But you take these spells to heart." I paused. "Do you fear me?"
"You asked me that before. In the weight room. Should I fear you, Victor?"
"No, I would never harm you. In fact, what I want..."
I turned my head to consider whether the moment was ripe for opening the door, at least an inch. "What I want is to give you something."
"What?" His dark eyes gazed at me intensely.
I released his hand and got up. I paced for a few moments with my hands behind my back and then I faced him. He had drawn up his legs to hug his knees.
"I want you to think of an image."
"An image?"
"Anything, a broom, a car, anything. Only focus on it as though you were projecting it on a screen in your mind. Go ahead."
Michael closed his eyes. Within seconds of concentrating on his thoughts, the image of a skull reproduced itself in my mind.
"Why a skull?" I demanded.
"You see it?" Michael opened his eyes in surprise.
"No." I sat down next to him. "Close your eyes again. Imagine the skull. That's right. Now watch." I once again directed my will to Michael's mind, where blood now streamed from the skull's eyes. "See the blood?"
"Yes."
"Keep focused on it. Move toward it. Do you feel it?"
His breathing grew more rapid. "Yes," he said, excitedly.
"Feel it. Feel the power, the strength. Feel the lust, the hunger."
"Yes."
I shook him at the height of his pleasure and he opened his eyes, still panting.
"I want to give you this passion as often as you want."
The pleasure vanished from his face and he glared at me in indignation. "How is it yours to give, Victor?"
"I can't tell you now."
"You can't tell me?" He jerked his hand away from me and stood. "You ask me about intimate visions, but you can't tell me how you get inside my head?"
I climbed to my feet and faced him. "Don't push me. Strong chains are forged slowly. Like the bond between us."
"You think I withhold secrets from you?"
"You've no reason to." I grabbed his arm when he shook his head in exasperation and started to walk away. "Wait, please. I have a long past, Michael. You've got to let me unfold it slowly. You've got to trust me. You're the one who believes in waiting, remember?"
He gazed steadily into my eyes as though he were trying to read my thoughts. Then he relaxed in resignation. "All right, Victor. I'll leave it to you."
He remained true to his word, his discipline taking over when his pride reared its head, no small feat since his ego matched my own. It was my own impatience that worried me. I burned for our consummation; had I followed my impulses, I would have revealed everything in an instant, but at the risk of taxing even Michael's courage to hear the truth. A gradual disclosure alone would ensure our future.
One night in the library's reading room, as we pored over so-called "apocryphal" volumes on the life of Joshu, Michael went to retrieve another book from the stacks. When he arrived at the fifth level, reached only by the flights of narrow stairs he'd climbed, I was waiting for him there, having willed myself to the spot. His gaze absorbed the significance of the accomplishment, but without a word he walked to the shelves for his book.
At other times I heaved a fallen tree from the forest path or crushed a stone into powder. In the refectory one night, I caught his eye during the reading and made a sign for him to watch while my eyes seared into the mind of fat Brother Athanasius as he read from the Lives of the Saints. Suddenly confused, Athanasius stopped and returned to the beginning of the passage. After he stopped and started twice more at my prompting, the abbot motioned for him to take a seat.
Michael registered my powers with gravity, but stayed true to his promise, demanding no explanation.
One night during Lent I decided to broach the subject of the Kingdom of Darkness. Michael had spent the day spreading mulch around trees on the property and we inspected the hedges near the buildings to determine how much more was needed. When we returned to my cell, he sprawled out on the bed, exhausted. I straddled the desk chair, facing him. The tired lines around his eyes showed up even in the soft lamplight. We chatted awhile about trivial matters, and after a long pause in the conversation I spoke.
"What do you think heaven is like?"
"It's 3 in the morning, Victor. I'm not in the mood for a catechism lesson."
"This isn't catechism."
The seriousness of my tone got his attention. He rolled over and propped his head on his hand. "Go on."
"I think people are goaded toward it by some strange lofty ideas. Blessed union with the creator. Eternal blessedness. What is that?"
"What happens in eternity is anybody's guess."
"Guessing's not the problem. It's spelled out by Scriptures. The Book of Revelation paints a nice picture--the masses kneeling at the throne of God, singing, praising. For eternity. Frozen forever on their knees. Sounds like hell to me."
"What about Dante? Paradise is contemplating God, according to him."
"Contemplation be damned. We're made for action, life, movement, pleasure." I pounded the back of the chair with my fist.
"From the way Dante describes hell, I don't think we'll find much pleasure there." He yawned.
"What if there were another option, Michael?" I got up and sat next to him on the bed, my elbows on my knees. "What if you could spend eternity laughing, making love, feasting, living like a god yourself?"
He rolled onto his back and clasped his hands behind his head. "That's what Lucifer was after, if I remember the story."
I shook my head. "Lucifer wanted to rule the same damned heaven. He lost the war, that's all."
"What are you saying? A third realm exists? We've all been brainwashed into being good so we can go to heaven?"
"It's true. The Dark Kingdom is a place for the elect, bold souls. A realm of gods." I paused. "It's the source of my powers. I can give you a glimpse of it."
The adrenaline pumped through him now. His eyes were keen, every nerve of his body ready to receive. But I had said enough for the time being.
"Stay patient," I said. "I'll show you everything soon." I kissed him good night and asked him to leave, despite his curiosity.
How did I know I could enter the Kingdom of Darkness? It was like an animal's instinct to mate, latent until the right moment, when it flared into the single impulse of its existence. My movements in the night, despite centuries of my own ignorance, had not been directionless, I now sensed. My mating dance had begun--the time to carry my prospective consort across the boundary of darkness for a taste of what awaited him once he served his term as a nocturnal predator.
After he left I rushed so rapidly through the night that my cheeks stung and my ears rang when I lighted on a dark Knoxville street to feed. Ravenous, I forced myself to move with caution since the police still patrolled the streets. Sweet, rich blood hung in the March air, so much that for a moment I felt disoriented, as though pulled in a hundred directions. "Focus, focus," I urged myself, and when I obeyed my own command the route to take traced itself in my mind, as vivid and red as the blood I sought.
A porch wrapped around a modest frame house, meticulously painted and adorned with shutters and lacy trim along the gable. I scanned the rest of the houses in the quiet cul-de-sac. Every window was dark. Still, to play it safe, I walked around back and through the gate of a picket fence. A decal on the back door announced "Proud to Be a Vietnam Vet" over an unfurled American flag. I pulled the knob of the locked door slowly and as quietly as possible until the bolt tore through the door frame.
The snug kitchen smelled of bay leaves. In the living room, afghans draped sagging armchairs. From a photo above the antiquated television gazed a shiny-faced young man in a dress Marine jacket and white cap. The boards squeaked under my step as I followed a scent down a hallway. I paused, listened. Someone turned in a bed and sighed.
The first door I came to was slightly ajar. I crept in toward a large four-poster bed in the center of the room, where I could discern a woman's weathered face. Snatching her robe from the foot of the bed, I pressed it over her mouth. Her eyes opened; she struggled and moaned; her flailing arm knocked a glass to the floor.
"Mama! You all right?" The man's voice came from the adjacent room.
I twisted the old woman's neck until it snapped, and hid behind the door as the hall light came on.
"You OK, Mama?"
When no response came from the bed, a man in a wheelchair rolled into the room, his long ponytail jiggling as he worked the wheels. "Mama?"
Before he could discover my deed, I grabbed his shoulders and plunged my fangs into his bearded throat. He clutched my hair as blood spurted from his jugular until he lost consciousness. I drank greedily for almost ten minutes, emptying every vein of his tobacco-laced blood.
After feeding, I surveyed the scene before me: one body slumped over in the wheelchair, the other still staring toward the door, arms stretched out like a cross. "Soon, now," I said to myself. "Soon I will stop creeping in the night, desperate for prey."
I sped back to the edge of the woods and walked across the grounds to the monastery. As I came around to the front of the building, I caught a glimpse of a white car disappearing down the drive without its lights. I was sure it stopped behind a cluster of trees, but with dawn only half an hour away I couldn't investigate.
My sleep was fitful. I dreamed that blood gushed again from my victim's throat, but every time I stooped to drink from the red jet, it ceased. When I sucked the musty, hairy throat, it yielded nothing. Thirst maddened me. I stalked victim after victim, but each time I caught my prey it turned into a half-familiar corpse I had drained long ago.
The scene changed then. A golden throne rose from the midst of four six-winged creatures bulging with eyes in front and back. In the throne glowed a translucent figure whose crown of jasper and carnelian seemed an extension of the throne itself. The ruler lifted a scroll, the creatures fell to their knees, a man approached. In a white robe, his skin now alabaster, his hair like bleached fleece, the man accepted the scroll.
"Joshu!" I screamed from the midst of a throng of people. But he did not hear.
"Worthy is the lamb that was slain!" the crowd chanted.
"Damn you, Joshu!" I screamed, immobilized by the bodies pressing around me. "Damn you then! Rehearse your coronation for eternity. I will live!"
# Twenty-four
The night after I'd preyed on the crippled man, Michael approached me after vespers and said to meet him in the weight room during the recreation period. When I asked why it couldn't wait until our usual meeting time in my cell he shook his head gravely and whispered, "We can't. I'll explain."
"Shut the door," he said, straining to lift a bar loaded with weights.
"They'll be suspicious." I nodded to the locker room, where several of the monks were undressing.
"Close it, damnit!" The biceps of his fully extended arms swelled under the weight. Sweat plastered the hair on his naked chest and abdomen. "Take this."
I placed the barbell on its rest and sat on a plastic chair, scuffed from use. Michael sat up and wiped his face with a white towel. I could smell the bleach in it, used heavily by the pathetic monk who did the laundry.
"The federal agent talked to me today." Michael spoke in a hushed voice despite the private room.
"Andrews?"
"Yes. He had all kinds of questions about you, and about the two of us."
"Why should he care about two homosexual monks? Isn't finding a serial killer enough for him?" Nonchalant, I hoisted my feet up on the other bench.
"It's got something to do with finding Luke. He apparently knows about your relationship with him."
"Yes, he asked me about it."
"He wanted to know if there was anything unusual about you."
"Unusual? In what way?"
Michael shrugged. "Seems like he wanted to know if you indulge in some form of sadism. He seems to suspect you."
"Of scaring Luke away from the monastery? I know that. Good agent Andrews made that clear in our chat."
"It's more than that." Michael scooted down the bench, closer to me. He leaned over on his knees. "This is incredible, but I think he suspects you of killing Luke."
"What!"
"He made me show him my arms and back to prove you'd never abused me. He asked about your weird ideas, about your occult beliefs. Victor, I think he bugged your cell. That's why I wanted to talk here."
My face and manner remained calm, while fury welled in me like electricity in a stormy sky ready to burst forth in lightning. The news confirmed my fears about the car I'd seen slip behind the trees. Michael and I were under surveillance.
"Victor, I want the truth." Michael's eyes were unflinching in their intensity. "Did you hurt Luke more than emotionally?"
"Why don't you ask your real question?" I shouted. "Did I kill Luke and cut him up in pieces? You think I'm the killer stalking the woods?"
Michael remained unintimidated by my rage. "I don't think anything. I only want to hear from your own lips that you didn't harm Luke."
I glared at him. "I thought you loved me. Do you?"
"Yes, God help me. But I understand you, the extremes of your passions. You're capable of anything. That's what drew me to you. And your strength."
I broke into a laugh. "I see what this is all about. I've always known it. You got Luke out of your way, just to have me for yourself. Now you have qualms of conscience. That's very unbecoming of you, Michael. I thought you were above petty scrupulosity."
My words hit my mark, as I knew they would. He calmly studied me. "I'm not above shame."
"Relax, Michael." I slapped his arm. "I'm not a psychopath. Unless my own life were at stake, I wouldn't waste my time killing anyone."
Brooding, Michael stood, pulled on his T-shirt, and opened the door.
I laughed again.
"What?" he said, irritated.
"You look like Judas Iscariot."
He dismissed me with a glance and walked out.
"It's not worth hanging yourself over," I shouted after him and broke into a fit of laughter.
Back in my cell, I turned over the bed and pulled the books from their shelves in search of the listening device. I stood on a chair to inspect the painted pipe that ran the length of the ceiling and rummaged through the desk drawers. I was just about to fling the crucifix down after searching the back of it when I noticed the tiny case behind the bent knees of the corpus.
The irony of my enemy's means of spying did not escape me. Not only had Joshu fled from me, but his image assisted my would-be captors.
In that moment I craved the blood of the arrogant agent, who even now listened to my excited breathing. My fangs shot forth in anticipation as I stormed up the crypt stairs and down the winding drive. The white sedan was exactly where I expected it to be. In wrinkled starched shirts, Andrews and another agent got out of the car when they saw me approaching, both with their hands near the guns on their hips.
"Evening, Brother Victor. Can we help you with something?" Andrews had been eating a sandwich. He dabbed his lips with a handkerchief.
Andrews's partner was 40ish and balding, but a large, well-built man. I could have lunged at them in an instant, their blood titillated me so.
"You've invaded my privacy, Mr. Andrews." I held out the recorder, then hurled it off into the trees. "If you want excitement, go get yourselves fucked."
"Something right up your alley, isn't it, Brother?" Andrews sneered at me.
His neck, rising firm and thick from his loosened collar, beckoned to me in that moment. My fangs, which had retracted, inched forward again, and only through enormous concentration could I control their growth. As I turned to avoid any more temptation, Andrews called me back.
"Where are you from, Brother Victor?" He emphasized the word "Brother" in a mocking tone.
I stopped and faced him.
"You think you'll hide your tracks long?" He leaned against the car now. The other agent had relaxed too.
"The abbot has my records. I'm sure you've already seen them."
"Records from a monastery that no longer exists? A letter from a dead abbot?"
I smiled at the cockiness behind his transparent ploy to goad me into desperation now that the FBI was on my trail. I swore to myself that I would have that man eventually, then retraced my steps to the monastery.
The next day the evening news covered my most recent slayings, neighbors only that day having discovered the bodies of the crippled man and his mother. Cameras scanned the scene, the empty bed and wheelchair, the bodies being carried out in black bags. The elderly couple who had found the corpses explained that when the son hadn't emerged for his usual spins around the block, they'd knocked on the door. That failing, they'd tried without luck to reach him by phone. Then, finding the broken door in back, they ventured in.
At this point the old woman, wearing a sweatshirt stamped with the words "Proud to be a Grandma," broke into tears. "Oh, sweet Jesus! I never seen such a thing in my life."
Her husband, pushing up a pair of thick glasses, wrapped his arm around her.
The lanky blond reporter asked him how long they had known the victims.
The old man shook his head. "Mr. and Mrs. Sanders lived here when we bought the house back in '48. Jimmy came along a couple years later. Always playing soldier out in the yard here. A good boy. Took care of his mama after his daddy died." The man was too choked up to go on.
The maudlin curiosity of the monks annoyed me, and I had no desire to watch the rest of the coverage. But to leave now, with the abbot entertaining fears about me because of the FBI's probings, would have drawn unnecessary attention to myself.
A shot of the man's Marine photograph flashed on the screen, followed by an interview of a Parris Island officer, who attested to the soldier's dedication in boot camp and then in Vietnam, where an exploding land mine left him a paraplegic.
When the reporter interviewed Andrews, I swore he was looking directly at me.
"Yes, we are following some leads," he said in response to the reporter's question. He wore a navy-blue suit and a tie with neat rows of print--straight and controlled. "But we are not at liberty to discuss our suspects. We want to assure the community that we are doing everything within our power to get this fiend."
"Is it true, Lieutenant Andrews, that the blood was drained from these victims? That the killer is a member of a vampire cult?"
"We are not at liberty to discuss details of the investigation." Andrews's boyish face remained expressionless, his tone official.
"Could you tell us anything about the monk from St. Thomas Monastery, Brother Luke McMahan? A missing person's report has been filed with the police. Do you suspect foul play in this case?" The eager reporter tucked her straight hair behind her ear.
Andrews started looking impatient now. "The local police are handling that case. There is no evidence of foul play at this point."
The subject stirred the monks to comment and sigh and chat over their cocktails. Two younger monks appeared to take out their frustration over Luke on a ping-pong ball, which they knocked mercilessly back and forth across the net. Michael glanced at me from the open window, where he'd been musing over an illuminated fountain of John the Baptist in the Jordan.
I wanted air. Before evening prayer I strolled out to the courtyard. Daffodils had already bloomed and hedges of forsythia clustered around the oak, which had started to bud. I sat on a stone bench, closed my eyes, and inhaled the cool air stirring the skirts of my habit. No moon glowed, but even so I felt its magnetic power upon my immortal blood and reveled in the vision of the realm ever bathed in its illumination.
This was the last time I'd be forced into a corner, waging battle against human enemies that snapped at me like dogs. I was weary of it. From Jerusalem to Athens to Nampo; from Mozambique to Brindisi in the heel of Italy to the villages of Alsace-Lorraine and the hamlets of England; I'd traveled across the globe for 2,000 years, powerful but hunted, and I was now ready for my eternal reward. This time I would not flee the region, but the mortal sphere itself.
# Twenty-five
Holy Week nauseated me. The scripture readings about Joshu's triumphant entry into Jerusalem (if you call 20 beggars and a handful of religious fanatics fluttering palm branches triumphant); his romanticized last supper with 12 laborers who reeked of fish; him sweating blood in Gethsemane (more palatable than saying he was ready to soil himself from fear). The early Christian movement had spun the whole train of mundane events into a myth surpassing the epic of Aeneas.
But Good Friday revolted me the most. The rituals and readings reveling in the gore dripping from Joshu's face, hands, and feet because with it flowed his life, the prerequisite of redemption; this maudlin sentimentality perverted real passion for the sinewy man, wasting the blood that should have been lapped up as the fuel of orgasm.
Who on this damnable earth, millennia after his death, could lust for the man Joshu?
When I crept from my tomb the night of Good Friday, the chapel above seemed to press like a weight on my back. On their knees, amidst clouds of incense, the monks had observed the sacred hours from noon to 3--when Joshu expired--loading the vaulted space with, for me, a tangible perversion of my love.
Now, tested and watched, I had to ascend and join their desecrating rituals.
Purple cloth swathed every statue in the dimly lit chapel. The tabernacle door stood wide open, the ciborium of communion wafers removed to commemorate Joshu's arrest and removal from Gethsemane. The marble altar had been stripped of its linen.
When evening service began the monks rose, black figures in the shadows. Swinging a censer by its chain, Michael led the procession down the center aisle. His bleached surplice set off his dark brows and hair, his olive complexion. His face showed his absorption in the ritual.
The abbot brought up the rear of the procession, lifting a crucifix that was three feet tall and shrouded in purple.
"Behold the wood of the cross," he chanted, "on which is hung our salvation."
"Come, let us adore," responded the monks, dropping to their knees.
The invocation and refrain were repeated twice more, each time the abbot exposing another arm or leg of the plaster corpus until, upon reaching the sanctuary, he hoisted up a crucifix completely bared.
When the monks filed from the stalls to kiss the feet of the corpus, I fought the impulse to rush forward and crush the plaster figure. When my turn came to genuflect before the cross, while the abbot held it on the sanctuary step, I shot a glance at Michael. He stood by with a linen napkin to wipe the feet after each kiss.
"Yes, Victor," his eyes said to me. Was he directing me to offer the ritual kiss, or was he expressing assent? If so, to what?
I took a breath and brushed my lips against the cold feet. By the time I returned to my stall, I knew the moment had come. Michael must tour the Dark Kingdom.
When he came to my cell just before midnight, I was sitting in the soft illumination of candles burning in all four corners of the room. Michael closed the door softly behind him and glanced around. The bed was stripped, my books packed back in the trunk. The crucifix rose from the wastebasket.
"What's going on, Victor?" Michael wore a black sweatshirt and black jeans, a modernized habit of sorts.
"I want to take you to the place now." I'd tipped back the chair to rest my feet on the bed.
"The Dark Kingdom?" Michael leaned against the desk. "Why now?"
"It's the right moment. Will you go?"
He paused and studied the wall. "Strange. I feel exactly the way I did the day I entered the monastery. A step into the unknown."
"Very appropriate." I lowered my feet and leaned forward to take his hands in mine. "Believe me, once you see it, your life will change. Immortality will mean something. Not pious garbage about eternal peace. You weren't made for peace and neither was I. Our souls are strong, violent, passionate."
"And evil?"
"Evil. What does that mean?" The word disgusted me. "Rebellion against the Christian god, who rules with an iron hand, twists souls into his own image and likeness, demands from them unyielding devotion to no one but him? Is that evil? What about black skies scorned by sailors as they approach the rocks, winds that strip a field of grain? Are they evil, or are they simply power? Why moralize about strength and passion?"
"Take me," Michael said in a whisper, his dark eyes willful, excited.
I stood and stripped off my T-shirt and sweatpants. "Come here."
I pulled off his shirt, caressed his exquisite pectorals and biceps. Our heated lips met, our tongues pressing past each other's lips like impatient serpents entering their lairs. Then I pulled him to the bed and directed his mouth to my nipple, as Tiresia had directed me to hers centuries before monks came into being. He sucked, tenderly at first, and then as blood squirted down his throat, greedily, wantonly.
My head spun. I abandoned myself to the movement, which took an upward turn, like the takeoff of a plane, gracefully piercing an ebony sky. Michael's position had changed now. From behind me he hugged my waist, his face against my neck, as I rocketed moonward. He moaned in delight, gasped for air.
When we entered a sea of silver light, our momentum slowed. We floated now, over a wall, over tiled roofs gleaming under the dreamy light.
"It's Rome," I cried. "Rome restored."
Below us the Tiber snaked through the spires and domes, past the round Castel San Angelo, its crenellated tower near the riverbank, where pines swayed in the warm current above the river. Along the water, lovers embraced, their nude bodies perfectly proportional, strong, supple.
The marble facing of the Coliseum, torn away by the invading Visigoths, again covered it like shiny pink armor, and the noble buildings and columns and squares of the Forum were risen from the melancholy rubble. Medieval churches were restored too, their facades shimmering in the moonlight along the narrow streets. Crowds of spectators roared over gladiators in the Circus Maximus, whose sculpted muscles were as beautiful as their movements when they dodged and thrust the sword. And from the Pantheon, the sphere of perfection, haunting mantras to the gods rippled across the Piazza Navona, whose elliptical perimeter enclosed sword-eaters, lute-players, men at game tables, boys splashing through the waters of Bernini's fountain, women laughing with abandon as they inspected jewels on display in canvas-covered booths.
The colossal structures, the marble columns and Egyptian obelisks claimed as battle trophies, the noble shapes and movements of bare human forms, as breathtaking as statues and yet as flexible and rapid as the Tiber as it empties into the Ostian harbor, the eerie light cast on the city and its vital inhabitants by a white moon, round and precious as a giant jewel lying on velvet--all these things lent an awesome sublimity to the world we surveyed as we glided through space.
"Did you know this?" Michael yelled into the wind, panting, sweating against my body. "Did you know it would be like this?"
I shook my head. "It's beyond my wildest dreams."
Laughter caught my ear and when I turned, Tiresia moved through the air next to me, her dark skin spangled with jewels, a crown of rubies and emeralds glittering on her head. "Yes, Victor, it's time. What more could you want?" She stretched her hands, arched her smooth back, and dove down to a courtyard, where she mounted a boy stretched out on the ground and soon convulsed in pleasure as she rode him.
"I've waited centuries for this!" I shouted. "Now, with you, I can take my rightful reward. Laughter, sensuality, beauty, action. Action, Michael, action. For eternity."
He tightened his grip around me as his body trembled spasmodically against mine.
A moment later, we lay entwined once again in my cell, our chests heaving as though we'd run a race and collapsed beyond the finish line.
# VII
Revelation
# Twenty-six
"Two millennia?" Michael strained to comprehend the length of my history.
"Since the reign of Caesar Augustus, yes."
It was the night following our journey, the first chance to discuss an experience that had exhausted us both. We walked through the woods, where buds swelled on the oaks and maples stretching their arms after winter's dormancy into the sweet air of spring.
For the first time in my nocturnal existence, I unfolded the entire story of my origins, earthly and supernatural--Rome, Jerusalem, Tiresia, the roaming of the world's monasteries and the obsession that took me within their walls until that very moment. My means of nourishment I left for later.
"Joshu," he said. "Because it's closer to the Hebrew." He had listened silently as we hiked, until now.
"I wanted him as much as a soul can want."
"In love with Jesus of Nazareth." Michael mused over the idea. "The Christ."
"That's what history has called him. I knew the boy, the man of flesh and blood. The man who laughed at my crude jokes. The man who raced me across the Jordan. The man with pimples on his back. It's taken me 2,000 years to replace him."
"Incarnation. Incarnation." Michael murmured the word.
"What? What are you talking about?" I grabbed his arm to stop him.
"Nothing. I don't know. This is too much to take in."
Impatient with his musings, I leveled my eyes with his. "The point, Michael, is that you must inherit my existence. Wander here a brief time, then join me in that kingdom we've inspected. That's all that matters. Our eternity together."
"What do you mean inherit your existence?" He seemed to snap out of his contemplation, his eyes now sharp as a lawyer's.
"I mean this." I raised my hands. "The night. To live in the night until it's time to join me."
"Never see the sun again?"
"No. To hell with the sun!" I let go of him and took a few steps away to cool myself before turning back to him. "You want mortality when you can live forever? You'll have the power to take what you want. Anything. Then you'll go on living, really living, for eternity."
"But I wouldn't be mortal here. I'd become... whatever kind of being you are. Those are the terms?"
"Yes. You would have my powers. You could communicate with me."
"You're not telling me everything."
"What!"
"Everything, Victor." He drew close to me and clasped my arms, his eyes inches from mine. "There's still a closed door. Once again, you expect me to surrender, but you put up a wall. I'll take nothing less than everything, or there's nothing more to discuss."
I pushed him away. "Damn you. You want to know what it's taken me centuries to learn, is that it? A shortcut? I'm telling you, you can't understand it all now. Not until you are what I am. I can only promise you a freedom and power you've never known. Isn't that enough?"
"It's you I want, only you."
I laughed deeply and kissed him hard on the lips.
Over the next week, I made arrangements for Michael's new existence. His affection for New Orleans directed my search for lodgings there, not far from the above-ground tombs of St. Louis Cemetery, where his beloved Jana was buried and where he could easily claim the coffin of a new corpse and a forgotten mausoleum as his sleeping chamber. From a Swiss account, I transferred funds to a local bank in the French Quarter. Over the centuries I'd added to Tiresia's treasure through robbery and investment.
As for monastery life, that was up to Michael. He wouldn't share my initial motive for seeking cloisters, though he was drawn to the secluded life and would find in them companionship of sorts, as I had.
How would he take the life of predator, ripping into throats of children, women, crippled war veterans? He would adjust. Survival was survival, wherever you stood in the food chain. I counted on his supernatural cravings, his philosophical perspective, his passionate nature--these could obliterate mortal sensibility, which focused on the petty, the particular. I counted on his union with my soul to stir a sublime storm whose winds would devastate the oppressive claims of conscience. But for now, he must know nothing of my bloody nourishment. Before its strangulation, his conscience could misguide him, shrinking into a narrow chink his mind's all-encompassing window.
# Twenty-seven
Easter lilies around the altar, the moist, clean smell of the season of love--it was the first time in a monastery that I'd caught the excitement of resurrection. Not Joshu's, which I'd cursed over the centuries, but stirrings, nonetheless, that only he had raised in me before now. A longing deep in my immortal core, echoed by warm wind when it rushed over the mountains and through the pines and maples, by spring rain falling like Chopin's melodies. A longing that promised completion, though it could never be truly completed.
We prolonged the time before my departure, the days before his death and new existence. If centuries should pass before he joined me, we wanted to spend them anticipating a continuation of an interrupted love affair. In balmy air or thunderstorms, under moonlit or raven skies, we haunted the woods together, laughed as we pitched stones at bats spilling from hillside caves, made heated love in abandoned shacks.
One night in early May, Michael challenged me to a race on the footpath where he'd disappeared before. The moon, directly overhead, washed the ground with pale light.
"Don't make me laugh," I said. "What are your chances against me?"
He surveyed me arrogantly. "Let's just see." He pulled off his shirt and deposited a stick on the path. "That's the starting line. Are you ready?"
We positioned ourselves, and at the count of three, shot forward.
At first I held back, giving him a chance to gain ground so I could pass him more impressively. His arms pumped furiously, his feet kicked up high behind him, his dark ponytail bounced.
When he'd covered a good half-mile, I launched forward, speeding past the trees, reaching the apex of the hill in a matter of seconds. But when I got to where Michael should have been, I spotted him in the distance, at a curve in the path. Then he vanished behind the foliage.
Accelerating to my full speed, I gained the bend in seconds and intended to tug his ponytail, but once again he was relentlessly sprinting far ahead of me.
When I reached a rocky ledge, he was sitting on the ground, waiting for me. Winded, I dropped down beside him.
"Is this a trick Jana taught you?" I asked when I'd finally caught my breath.
"How did you guess?" He grinned and leaned back on his elbows. "I've got a surprise for you, another trick. Watch the sky."
The sweep of the wooded gray valley rolling to the horizon was magnificent. The river glinted through the branches. The monastery buildings huddled in a clearing and to the south, a black train threaded in and out of the trees, its lonely whistle piercing the silence.
Suddenly the pale sky brightened, as though the sun rose from the west, just beyond the horizon. "What!" I shot to my feet, ready to fly back to my dark refuge.
"Wait!" Michael grabbed my hand. "It's not real. Come on, sit down and enjoy something you haven't seen in a few centuries."
Light suffused the sky now, a sky bluer and clearer than I had remembered. Birds twittered in the false morning light.
"Now look there." Michael pointed to the eastern sky, where a ball of light blazed, my enemy in all his glory. Heat bathed my face, my arms and hands, as though I were on a beach near Positano rather than a mountainside at midnight.
The flaming sun shrank and the blue sky changed to indigo and then to silver as the moon resumed her post.
"Impressive," I said.
"It's my pledge to you, Victor. I have to admit, no one has ever affected me like you do. I've always been a detached spirit, drawing what I could from every circle I found myself in. I don't want to call it coldness or fear. It's just that I felt satisfied--working with the soil, studying philosophical discourses, drinking in the waters of mysticism. But now," he hesitated, lowering his eyes, "now I breathe you like air. You're Patroclus to me."
"So you're Achilles? I like that." I grinned, and then clasped both his hands. "I've waited for this moment, night after night, century after century."
On the rocky ledge, the moon falling to the west, we entwined ourselves more like wrestlers than lovers, heaving, pressing, squeezing, and grunting, more primitive than the animals populating the woods around us. More than once, Michael's lips found my nipple, and each time the ecstasy his mouth brought to me almost numbed me to the danger of letting him drink too much of my blood.
"No," I finally said, each time, pushing his head away. "I'm not ready to leave you yet."
We stayed in the spot until the light of dawn bled into the sky above the mountains. I'd lost track of the time and now every cell in my body was alert to the imminent danger. His head on my chest, Michael had fallen asleep. I shook him.
"We've got to hurry. It's almost dawn." Scooping him up in my arms, I willed us to the monastery grounds. Branches gave way to us, the air formed a vacuum with the velocity of our movement.
As we raced to the entrance, Andrews's white sedan pulled up. He got out and motioned for us to wait for him.
"Go ahead," Michael said. "I'll talk to him."
With no other choice, I hurried to the crypt, my skin stinging from exposure to the predawn radiation.
# Twenty-eight
Because Andrews watched my every move--as he emphasized to Michael the night of the sun show--my feedings required caution, mostly to prevent him from knowing when I left the grounds. I had to initiate my flight from within the courtyard, where I also returned at the end of my hunts. I could give him no grounds for linking me to the crime scenes investigated day after day in a city frozen by fear.
One night in late April, I stole through a poorly lit neighborhood of housing projects in the city. Ducking into shadows whenever a patrol car whirled its searchlight, I sniffed for a large concentration of blood to keep me satisfied for a number of days, reducing the number of risky trips. Just when I caught the scent of a mass of blood in one of the dilapidated apartments, a searchlight shot out from a police car hidden behind a tree in an empty lot. I moaned in pain at the intensity of light, too stunned to move.
"Police! Put your hands up!" an officer shouted from the car.
I couldn't see him for the light, but I heard footsteps and voices on the street. The smell of blood laced with alcohol wafted from his direction. I waited with my hands raised as he'd commanded.
By the time he started frisking me, while his partner pointed his gun at me, my pain had subsided and I was able to concentrate. I had no choice now that they'd seen me. In two moves I'd flung them both to the ground. I'd snapped the neck of the first officer when the second recovered his gun and fired, hitting me in the shoulder. Before he could shoot again I grabbed the gun from him and snapped his neck. Between the two of them, both hefty men, I could drain more than enough blood. But the shot had roused the neighborhood. A siren wailed only streets away.
I reached the opposite end of town in seconds, lighting near a park. A colonial home across the street emitted the sanguinary scent of at least two people. With no time for a more leisurely hunt, I broke into a side door hidden by a trellis and found myself in a richly furnished parlor dominated by a grand piano. Before proceeding to the bedrooms, I sank into a white sofa to rest while I healed. The bullet wound of a vampire closes in minutes, the missile itself disintegrating as soon as it penetrates the skin.
The floor was littered with wrapping paper. Empty drink glasses cluttered the tables. I picked up a greeting card from a stack on the coffee table, wishing Diane and Paul a happy 25th wedding anniversary. I could exhibit mercy, but under the circumstances I needed to feed and flee the city. Once I'd recovered, I crept down a carpeted hallway and up a staircase. The scent of blood swelled as I approached the last door, which was shut. With my ear against the door, I listened to gentle groaning and squeaking springs within. At the moment of climax, I opened the door.
The man's white body covered his wife's. Her legs were wrapped around his. He couldn't make me out in the pitch-black room, though of course I could discern his panicked expression as he glanced toward the door.
"Jimmy? Wait a minute, son." He rolled off his wife and grabbed his robe. She pulled the sheets up.
Now the pungent scent of semen and female secretions joined the smell of blood. My fangs shot forth and I lunged greedily at his throat.
"No!" The woman screamed. Scampering naked from the bed, she bolted out the door. I cut her off at the bottom of the staircase, where I had willed myself.
"Please!" she sobbed, dropping to her knees. "Oh, God, please!"
Although she was probably in her late 40s, she was quite alluring. I stroked her long, soft hair and raised her chin to me, as her body continued to convulse. Her breasts were large and round, the nipples unusually large. I pulled her up by the arms and dragged her to the open hallway, where I fell upon her breasts as she cried for me to stop. Licking the nipples gently for a moment, I finally pierced through the tender flesh and lapped up the blood. Then I turned to her throat, but before I could drink, footsteps pounded down the stairs.
"Mom, where are you?" a boy called in the darkness. When he reached the hallway, he flicked on the light. He was 15 or 16, in a sweatshirt and shorts. "My God!" He disappeared into the sitting room and came out brandishing a fireplace poker.
"Get off her!" He drew back the poker as though he would run it through me. "Mom, are you all right?" His body trembled.
The unconscious woman remained motionless. I stood and stepped toward the boy. "She's just asleep."
When he jabbed at me with the poker, I wrenched it from his hands. I gripped his shoulders and drew his face up to mine and inhaled his luscious scent. His eyes were wide with horror.
"You're quite a specimen," I said. I kissed his full lips and sank my fangs into his neck. He instantly went limp.
When I had drunk every drop in his body, I returned to his mother to drain her. Between the two of them I was more than satiated. Even if the man upstairs had not already chilled to a dangerous point for drinking blood I wouldn't have touched him.
I left them all where they had died and exited through the same door I'd entered. Sirens howled throughout the city. Searchlights flashed up the street and scanned the park. Just as a patrol car turned the corner, I rose into the air and sped across the miles between Knoxville and the monastery.
It was nearly 3 o'clock when I returned. I crossed through the dark chapel on my way to the crypt. Exhausted from the killings, bloated with blood, I wanted to sleep, though dawn was still a few hours away. But when I crawled into my coffin in the close, dark mausoleum, I lay awake for a long time, disturbed by something--not the killings, but a presence, like the presence I had felt once before outside my tomb, a presence that had amused me then. Now it threatened my dreams.
# Twenty-nine
At dusk, as though electricity surged through me, my furious heart awakened me. He waited for me outside the tomb. The iron door squealed like a rat as I pushed it open to face Michael, who watched as intensely as the Roman sentinels outside the tomb of Joshu.
"Exactly dusk," he said as I emerged. He sat on the cold floor, against a stone pillar. The meager incandescent lights lacked the strength to wipe the shadow from his face.
"So, you know?" I stretched and rubbed my eyes.
"I watched you come in, just after Andrews sped into town. I couldn't sleep. I was out walking. He and the other agent had apparently been searching for you here. But they didn't look long before they gave up and took off down the drive. I waited until you got back. I knew you'd be coming."
"Michael, I've put off explaining--"
He raised his hand to interrupt me. "Everything was confirmed this evening on the news. Three corpses, two drained of blood. A patrol car cruising the area spotted a suspicious man, then checked the houses on the street."
I squinted to see his eyes. "Come to my cell. Let me explain it from the beginning."
"We can't miss dinner and vespers, Victor. The last thing you need is to stir up any more suspicion here. We can talk later."
I grabbed his arm as he started to get up. "You don't blame me, then."
"Nature is nature."
"I adore you, Michael." I shook his arm.
The evening dragged. We chanted the longest, dreariest psalms at vespers. At dinner, the reading from _Lives of the Saints_ detailed the martyrdom of St. Lawrence, who praised God while he was grilled to death. Talk of the killings dominated the social hour. Neighbors of the victims were interviewed on the news and Andrews once again assured the city that the murderer would be apprehended. When the reporter pressed him about the killer's identity, Andrews refused to share the details he claimed to have.
As much as Andrews suspected me in Luke's disappearance, surely by now he'd given up trying to pin the mountain and Knoxville murders on me. After all, he'd never seen me leave the monastery grounds. Or maybe he thought I had a partner in the city. But none of that mattered. I focused on setting things right with Michael and preparing for his baptism into the night.
Grave, intense, Michael fired question after question at me when we met in the library's stacks after the Grand Silence had begun. We sat on the age-darkened plank floor between shelves of books, the humid air full of their smell. Thunder rumbled above us.
"This author traces vampires to Satan." Michael picked up one of the volumes piled next to him. "Is it true?"
"Satan, the fallen angel? No, he took a different route than the founder of the Dark Kingdom. He wanted to usurp the role of heaven's god. He wanted an eternity of adoration--static, lifeless worship. The Dark Kingdom is a place of activity, as you've seen."
"Heaven's god? There's more than one?"
"Many powers rule the universe. Nations take their pick--Isis, Zeus, the Hebrew god. I'm no enemy of the gods. They are simply not relevant to me. They exist in spheres not accessible to me, just as mine is not accessible to them."
"Your food, Victor. All these people. The killings in the mountains too, I suppose. What is it like?" He spoke with the burning interest underlying all his quests for knowledge.
I shrugged. "Sometimes exhilarating, sometimes... sometimes unpleasant."
"You killed Luke, didn't you?"
"It was unavoidable. He threatened to expose me, to have me expelled. It was a matter of survival."
He looked away for a moment before speaking again, in nearly a whisper. "And you drank his blood?"
"Yes. It's how I live. How you will live, Michael. As you say, nature is nature. If you give in to petty scruples, see through mortal eyes, it's abhorrent. But look at the wolves. Look at humans killing for meat. Vampires live on blood."
"Vampires." Michael pondered the word. "Jana spoke of creatures of the night. Are there others? Do you have enemies, allies?"
He'd asked the question I dreaded most. To condemn him to a life of solitude, the most torturous aspect of my existence, how could he understand?
"Yes, there are others. But we operate alone, until we create a successor." I paused, then said, "What are decades or centuries, though, when compared to eternity?" I leaned forward and clasped his shoulders. "Besides, in the interim you can live on our passion, Michael. I lived without it all this time on earth. Now I think I could stay another 2,000 years, if I knew I would be joining you."
"Will I be able to communicate with you?"
"I can make no promises. I don't know enough."
"What if we both stay here? What if we leave this place and make a life somewhere else?"
"You mean until you die?"
"Yes. Can't I share your existence now? These books talk about large numbers of vampires."
"Forget the books!" I pounded a shelf with my fist. "I'm telling you it's not a choice. Other vampires exist, but all rule their own domains. We never come into contact. We can't. There's a barrier."
Michael gazed at me steadily, despite the excitement and fear I detected pulsing through him. "How can I live without you for centuries?"
"You are strong enough to do it. And there's no choice."
"And feeding on people--"
"You'll do it. We all do it. The blood: There's nothing like it, the sense of elation, the power. To think the life of a man is pouring into you. Think of communion, for God's sake. The hunger for blood takes people there. The blood of a victim. Remember the taste of my blood, Michael?"
"But you weren't a victim."
I laughed. "You're wrong. You held complete power over my soul."
As the thunder rolled and exploded, rain pelted the roof. Our niche among the books was a snug refuge that, in light of our impending separation, took on the romance of a dark bedroom the day before a battle.
Aware of our alliance in subversion, united in blood and the dark longings of our souls, we fell into each other's arms. I stripped off his shorts and T-shirt and buried my face in his musky crotch, licking the dark sack, the thick cock. My tongue traced the line of dark fur from that sweet meat to his chest, where it densely swirled. Our lips met, then our tongues. We wrestled playfully. He finally sat upon my belly, pinning back my arms, and impaled himself upon my rigid cock. Massaging his own, he groaned in his motion, and I, panting in the scent of his racing blood, groaned too, until we both exploded in orgasm.
# Thirty
In the midst of the relentless media coverage of the devastation wrought by the "vampire killer," as I was dubbed by the press, Luke's grandfather visited the monastery. The dotard had to see where his boy last lived, had to search the woods himself. Apparently, Brother Matthew had discouraged his coming, but to no avail.
During compline, he sat in Luke's choir stall, glum and distracted, not bothering to follow the psalms in his grandson's breviary, which he caressed as though it were a child. He was a tall, rawboned man in his mid-60s, with deep grooves in his tanned cheeks and close-cropped gray hair. He wore a short-sleeved plaid shirt and blue jeans--a farmer through and through. From dawn to dusk he'd trekked through the woods, Michael told me, without stopping to eat, and had only picked at his food during dinner.
When the moment came for petitions, he spoke up wearily, in what I by now recognized as a country accent.
"I wanna ask the Lord to lead me to my boy. If he's dead, I just wanna know. But I pray he's alive. He ain't never hurt nobody. Looks like a killer wouldn't have no use for him, but I don't guess that matters to a pervert. I know it ain't right to wish evil on nobody, but I ask the Lord to strike that monster down in his tracks, afore he can kill another soul."
After compline, Michael spoke to him, briefly clasping his shoulder. It was a mistake to allow this compassion. Why stir the waters? A general could not weep over the slain soldiers on the field, whether his own men or those of the enemy. He must harden himself against future losses, future massacres. But I knew for now it was asking too much. With time Michael would see the necessity of leaving uncompromised the detachment natural to him.
What concerned me more was the incredible susceptibility of Michael's soul to supernatural interference. When he failed to show up at my cell that night at the appointed hour, I ventured to his. Flames encased in red glass flickered throughout his room and sweet incense smoked densely. Michael lay naked on the floor, his arms outstretched. The shadow of the crucifix suspended in midair above him fell across his face. His cock was swollen with blood. As before, he invoked the cross, instrument of torture, as a source of hope.
"O crux, ave spes unica." His tone was insistent, as if he were leading a war cry, and he repeated the pronouncement again and again as though accompanied by a battle drum.
I wanted to interrupt the spell, shake him from his vision, but once again I could not penetrate the invisible wall between us. Frustrated, I stood near the door.
The chant quickened, gaining urgency, and then stopped abruptly, as though the enemies now stood eye to eye, ready to charge across the battlefield. He got up on his knees, extending his arms before the crucifix, apparently receiving from it a burden. Straining under the invisible weight he stood and approached the door, which swung open of its own accord when I stepped aside. I followed him down the dark corridor, alert to any movements behind the closed doors, particularly the door of the abbot's cell.
I followed him all the way down the stairs to the crypt and through the long, dank passage to my tomb, where he stopped.
He grimaced as if in great pain, sweat beading on his forehead. He moaned like a frustrated dumb man trying to communicate. It was the iron gate, I knew; he wanted it opened and I obliged him. That calmed him. Standing naked before the black mouth of the mausoleum, his arms still extended, he appeared to wait for someone to relieve him of his heavy treasure. I followed his cue, pantomiming a transfer of the burden from his arms to mine.
When he motioned to the tomb, I pretended to lay what must have been a corpse inside. Tears streamed down his face. Then he opened his eyes and the weeping ceased.
"He's here, Victor," he whispered, his eyes on the gaping tomb.
"Yes."
"He called out to me. Then I found him in the woods, white as chalk."
"This is a temptation, Michael." I laid my hand on his shoulder. "Joshu wants you for himself. But for what? So you can kneel with him before his father's throne for all eternity?"
He turned his head slowly until our eyes met. "Yes. There was something in the vision about Jesus. At first there were colors--red, yellow, black splashed against a white screen, and then a cross, a writhing body, and then Luke lay before me in the woods."
Securing the iron gate, I led Michael to my cell, where I directed him to wait until I could retrieve his clothes. When I returned with his shorts and T-shirt, he was rolling his head as if to relieve tension from his neck and shoulders.
"Let's take a walk," I said.
He nodded, alert now after the long trance.
The grass, taller after a week of rain, formed a soft carpet beneath our feet. Light-green leaves budded on the trees scattered across the grounds. The sour smell of mulch wafted on a warm breeze cascading over the mountains.
Once secure inside the woods, I took Michael's hand as we climbed through the leafing oaks, over stumps and fallen trunks. Only a slice of moon jeweled the sky, so it was up to me to guide him through the black thicket to our clearing.
"Your gloom's because of our parting," I said when we had settled on the ground against the log. "And once you're transformed the visions will stop."
"How do you know? How do you know these things?"
"Don't question me," I said firmly. "Give yourself over. A divided soul can't survive."
"Yes, I know." He laid his head back against the log and studied the sky. "This has to happen soon, Victor. I won't permit my soul to be used as a battleground for invisible forces. But there is one thing." He turned to me. "I want to sleep with you."
"Sleep?"
"Next to your coffin. I want to spend our final hours together. At dusk when we wake up this thing has to happen."
I studied his keen eyes, in search of ulterior motives, but without success. "Is this a test run? Do you want to make sure you can survive sleeping among the dead?"
"I've told you what I want."
"All right. At dawn we sleep."
When the sky lightened behind the mountains, we climbed down to the crypt. I entered the tomb first and made space for him near my coffin. He lay calmly, quietly, while I shut the gate. I loosened a brick in the back wall to let in air from a narrow channel between the mausoleums and the outer wall of the foundation. Then I climbed into my bed.
"How are you?" I asked.
"Fine. The dead don't frighten me."
"You know, in some medieval monasteries monks had to build their own coffins and sleep in them every night."
"Yes, I know, to remind them of their mortality."
"For you it's a promise of eternity."
I squeezed his hand and fell into the most peaceful sleep I'd had in two centuries.
Just before dusk, though, I dreamed of Luke. He stood over us in the tomb, naked, pale. He sobbed. Putrid blood dripped from his throat onto Michael's face. Michael's eyes snapped open. In a feverous delirium he struggled to escape from the tomb. I awoke and realized that it was no dream. Kneeling over my coffin, he pummeled my chest.
I seized his wrists. "Wake up. It's a dream."
"I am awake. I am." He stopped struggling against me and took a deep breath.
"Don't let Luke haunt you."
"It's not Luke. It's something else. I dreamed of heaven."
"What? Some fantasy?"
"No. I don't know." He settled back on his haunches. "I can't do it, Victor."
"What! What do you mean?"
"I can't."
"You're babbling like an idiot. Get out in the air and calm yourself."
"It won't help. My sense is strong. This can't be done."
"What, is your Jana putting crazy ideas in your head? Fuck her. You love me. That's what matters. A magnificent life is ahead of us. You won't back out, not if you listen to your own desires instead of a spirit's black magic."
But his gaze was one of resolution, not fear.
"It's useless to discuss it now," I said. "Get out. We'll talk tomorrow."
He said nothing as he crawled from the tomb.
# VIII
The Storm
# Thirty-one
That night was the darkest, longest I'd known in my centuries of flight from the sun. Restless to the point of madness, I heard the subtlest movements in the woods, smelled the decay of animals lying dead miles away. The nerves in my body formed a network of live wires, exposed, popping, near the point of conflagration. If now, with all my hopes, now when the moment to act announced itself like a giant bell tolling over the land, if now this partner who'd freely entered into a pact with me betrayed me, I knew not what heights my fury would reach. His flesh would rip, flesh as abhorrent to me in that moment as it had been enthralling for the past year. How could I bear to let him live? Impossible. And he knew it. He knew it but spoke still.
Yet couldn't he have been under the old woman's spell, as he'd been that night in the woods? Perhaps she, not he, spoke. Perhaps she was in league with Luke's spirit. Damn them both. Cowards of seduction.
When Michael met me in the shadows outside the crypt the following night, I already knew his position, even without reading his thoughts. Erect, determined, silent, he approached. He did not play up the pain he felt. All the same, I despised him as he uttered the words I expected to hear.
"I can't do it, Victor," he said solemnly. "I can't."
"Damn you!" I pinned him to the wall. "It was a test, sleeping in the tomb. You refused to trust me. I'll kill you, I swear."
"I'm not stopping you."
"Stop me!" I gripped his throat and would have broken his neck, but footsteps sounded on the stairs. I covered his mouth and pulled him behind a column. The intruder stopped, then retreated. I slammed Michael against the wall.
"So you want to ruin my plans, my happiness! When I can almost taste it, like blood?" My chest heaved with fury. "You like power, do you? You like to experiment with the forces of darkness?"
"No, Victor. I love you." His face was still red from the pressure of my hands on his throat.
I backhanded him. The sting brought tears to his eyes. "Traitor. I'll kill you here and now. No, wait. I'll have some amusement first. You want power? I'll show you power."
Gripping him by the arm, I took him outside and rose into the night while he clung to me. We lighted near a corner grocery store in Knoxville, just as the owner was locking the doors. I knocked on the glass, clutching Michael still. When the owner mouthed the word "closed," I shoved open the door.
"Hey, Mister, I said I'm closed." The man was 70 or so, with a widow's peak and a bulbous nose sprouting black hairs.
"So you are." I advanced a step, my fangs now protruding, and he ran to the counter. Before he could grab his gun, my arm was around his neck like a vise. "Behold power, Michael."
"Let him go, Victor, for God's sake." Michael pounded on the counter.
"For whose sake?" I grabbed a shock of gray hair, pulled the man's head back, and pierced his throat. Then I sucked slowly, so he would struggle a long while under Michael's gaze.
Michael came around the table and tried to tear the man away from me.
With one thrust, I hurled him into a shelf loaded with wine. Bottles crashed to the floor, staining the white linoleum as red as the blood I drank. As soon as he stood up, I bared my dripping fangs at him.
"Nature is nature," I said. "Here, try it." I reached for his arm and pulled his lips to the old man's throat.
He strained against my grip to move his face, now smeared with blood, away from the man's neck.
The old man, his eyes stretched wide in terror, groaned and struggled to breathe. Blood had drained down the collar of his shirt and collected around his pocket protector full of pens. Holding Michael with one hand and my victim with the other, I lunged at the old man's throat, sucking greedily until he collapsed behind the counter. Michael darted toward the door, but I intercepted him.
"I could use your help now." I slammed him into shelf after shelf of canned goods and cereals and laundry soap until the quaint market was as devastated as an earthquake would have left it. Then, pulling him to his feet, I licked the blood from a gash in his forehead. "The night's just begun."
Waiting inside the store until a patrol car sped down the street, I dragged Michael, bruised and limping, across an alley into a large Victorian home, now shabby and divided into apartments. The unlocked back door opened onto a long, poorly lit hallway flanked by several doors. The scent was strongest from an apartment at the front of the building.
"Don't do this, Victor. Just kill me now." His dark eyes were filled with scorn.
"You're asking me for mercy?" I laughed. "No, come with me, my beloved."
The flimsy door gave way with a firm shove. A small dog scampered to the entrance and yapped hysterically.
"What's wrong, Nipper?" A woman called from down the hall. "Come see Mommy."
When the dog continued to bark, I scooped him up by the throat, strangled him with one hand, and flung him on the rug. Michael looked away.
"Nipper? Whatsa matter, girl? Come see me."
Submissive under my grip, Michael accompanied me down the hallway to the steamy room at the end. By the time the woman heard us, it was too late to run. Young and shapely, she stood in an old-fashioned tub with claw feet, reaching for a towel. She screamed at the sight of us. I jerked the towel away from her and stuffed it into her mouth. Wrapping her in one arm and Michael in the other, I carried them both into a room with a canopy bed. Like those of a mad dog, my teeth tore at her throat, her naked breasts, until blood gushed from the wounds and she lost consciousness. When Michael vomited, I shoved him to the floor and dropped her on the bed.
"What do you think--should I let her bleed to death?"
"Feed on her, damn you!" He lay on his side, deathly pale. "Put her out of her misery, for chrissake."
"Not yet." I picked up her body and laid it next to him.
"She would have been a fine catch for someone, don't you think?" I got down on my haunches and stroked her chestnut hair. Her blood trailed down the uneven oak floor toward Michael and he tried to get away from it but I held him down until the blood saturated his shirt and he vomited again.
"So, you like to play with evil forces? But you didn't bargain for this, did you."
"Kill me, Victor." As he looked up his eyes started to roll back and he collapsed in his own vomit.
"No! I'm not through with you yet." I carried him to the bathroom and dunked his head in the tub until he came to. Then I dragged him to his feet and back to the bedroom for my finale. I gulped the blood spurting from the woman's throat and breasts and lapped up the floor. My lips now dripping, I kissed Michael, thrusting my tongue down his throat until he gagged and vomited once again.
"Damn you." Michael was exhausted and ill. I nearly had to carry him to the house next door toward the scent of more blood.
Just as I tried the door a police car stopped in the alley. An officer got out and shouted for us to raise our hands. In the time it takes me to break a neck, I rose, my arm wrapped around Michael. The officer fired several shots. I winced when one struck my shoulder blade, but continued on. By the time I cleared the treetops, I felt blood oozing from Michael's stomach.
"You've been hit!" I shouted against the force of the wind.
Grimacing in pain, he did not respond.
I landed with my burden in the forest clearing, laid him on the ground, and stripped off his soaked shirt to inspect the wound. The bullet had plunged deep into his bowels. With every beat of his heart, blood spurted. Blood which, linked as it was with jeopardy to his existence, gave me no delight. Wadding his shirt into a tourniquet, I pressed it against his stomach. His rib cage heaved as he struggled for breath. His skin was as hot as summer pavement.
"Michael, don't slip away. Fight this." I shook his shoulders gently until he opened his eyes. "There's still time. You can still drink." Holding the tourniquet in place, I stripped off my shirt and positioned my nipple near his mouth. "Drink, damn it!"
He shook his head, slowly but deliberately.
"Why? Why let yourself die when you can suck life from me? Power, eternity. Eternity with me, damn you." I brought his lips to my chest, but he would not suck.
I leaned down and spoke directly into his ear as he fought for consciousness. "This god that draws you, doesn't he command you to love? Isn't that what the Gospels teach? Then do this for love. I need you, do you hear me?"
He labored more and more to take in air as he struggled to speak. "I love you, Victor."
"Then come with me!" I whispered the words forcefully into his ear.
Desperate ideas to save him flashed through my mind--whisking him away to an emergency room, retrieving bandages to bind him, tearing out the bullet with my own hand. But only moments remained for him.
"What is this heaven you have seen--that you'd sacrifice eternal freedom for? Freedom and me. What can it offer that I can't? Tell me! If it's worth eternity, I'll follow you, by God!"
Michael turned his head. He worked to focus his listless eyes on me, as though he wanted to speak. He inhaled and released a shaky breath. He did not inhale again. His gaze froze on me.
"No!" I knelt over him and shook him violently. "No, damn you. You can't leave me alone. I'm through with the night. I want my reward." Hot tears ran from my eyes. I lifted his face and kissed his lips.
"Are you satisfied, Joshu?" I called into the night sky, whose dazzling stars seemed to mock me. "What else do you want from me? Another two millennia of torture, is that it? You won't have it. I swear you won't!"
Casting a last look on the eyes no longer mysterious and penetrating, I got up and sprinted into the woods, bounding over fallen trees, snapping branches, splashing through the brook, until I reached the ledge where Michael had shown me the sun after 2,000 years of darkness. I howled like a wild animal, howled until I thought I would explode in anger. The sound resonated in the valley below and was answered by the cries of wolves deep in the thicket.
Then I tore at my hair, and clawed my face until it bled. I rolled on the rocky ground and beat it until my fists were crimson with blood. I longed to kill now, but the night was far gone and the streets of Knoxville were infested with police. So I remained brooding on the ledge until just before dawn, when I fled back to the shelter of my tomb--the tomb where only a day before Michael had slept with me.
# Thirty-two
My sleep was fitful that day. Several times I awoke to Michael's voice and called out to him. In dreams I reenacted the killings I'd staged for him while he pleaded with me to stop. The old store-owner's face reddened as my arm tightened around his throat. The woman's white breasts spewed blood.
Late in the day voices awakened me, first distant but gradually louder until the speakers stood outside my grave.
"Here's where the blood stops." Andrews spoke. "Shit. He's in there."
I heard the sound of guns whisked from holsters.
"Don't kill him." Brother Matthew cried from a distance. "How can we be sure?"
"Get back up to your office, Brother. We'll handle this."
The abbot's footsteps grew more faint as he retreated from the crypt.
"All right, Brother Victor," Andrews shouted toward the mausoleum, "come out with your hands up, nice and slow."
For a moment I considered surrendering. When they took me up into the sunlight, I would disintegrate instantly. No more wandering, hiding, feeding. No more searching for a companion to make eternity worthwhile. But the temptation evaporated as suddenly as it had formed.
"You win, Andrews," I called. "Here I come."
I kicked open the iron gate. Facing me under the dim lights were Andrews and three other agents, all pointing revolvers at me. I approached one of the agents, a neophyte with smooth cheeks.
He backed up. "Don't take another step!"
When I did, he fired. Stunned only momentarily, I grabbed the gun from his hand and crashed it against his skull. As they fired on me, I slammed the other two agents against the wall.
Andrews hit me several times while I handled the two, but I only flinched and the wounds closed immediately.
"You finally got what you wanted, Andrews." I faced him, smiling. "The serial killer. The vampire of Knoxville. Big accomplishment for you."
Andrews continued pointing his gun at me, but he looked worried. Sweat beaded on his forehead. "Reinforcements are on the way," he said as I stepped toward him.
"Too bad for them."
I took another step and he fired.
I didn't wince. "How many more bullets? One?" I couldn't resist laughing.
"What kind of a monster are you?" He backed away toward the open mausoleum. Now his gun trembled.
"Let me show you." After he fired his final bullet, I grabbed the weapon from him and tossed it on the stone floor. He tried to run, but I clutched him by the neck, flinging him to the floor. Straddling him, I pinned back his arms and lowered my face until our eyes were inches away. His were full of terror.
"Don't kill me. Please. I've got a family."
"That's a pity." I nuzzled his throat for a moment and then plunged my fangs into his jugular vein. He struggled against me longer than any of my prey ever had. Even his blood seemed to resist the force of my lips, barely trickling from his throat. But when he eventually succumbed, the blood gushed into my mouth, as robust as fine wine.
Since the other agents were still breathing, I snapped their necks. Then I waited in the dank crypt until the sun set, itching to leave, energized by Andrews' blood.
When I finally opened the door to the foyer, all was quiet, the abbot nowhere to be seen. Like a submissive lamb, he'd followed orders perfectly. He must not have heard the gunshots through the thick stone walls of the monastery. I stood outside his office door and heard him explaining everything to Brother George, who responded in gravelly monosyllables. When I opened the door, they looked up, stunned.
"Brother Victor." The abbot stood behind his desk. The blood drained from his face. He glanced at Brother George, who sat in an armchair with his legs crossed, smoking as usual.
"Mr. Andrews sent me up. He asked me to give you something."
Brother Matthew stared in horror as I approached his desk. My clothes were covered with blood and my fury, no doubt, fired my glance. Brother Matthew backed away from me toward the window. Brother George put out his cigarette and got up to rescue him.
"Now, Victor," Brother George said in his gravelly voice. "They're on to you. There's no escape. If this is a mental illness--"
The moment he touched my sleeve, I grabbed his throat, my eyes still searing into the abbot's, and squeezed the breath out of him while he thrashed about. When I released him he slumped to the floor.
"Oh God, God." The abbot frantically surveyed the room for a means to escape. Beads of sweat formed on his pink scalp. "Please, please, Victor. In the name of God," he said as I moved around the desk.
I pressed him against the window, both hands on what little neck emerged from his habit. He pounded my chest as I strangled him until his eyes rolled back in his head.
"So much for the power of your damned god!"
Rather than satisfying me, the killings only fueled my desire to strike out against the god who stole from me the only two creatures I had ever loved. I stormed to the recreation room, where the handful of monks who were not away at universities gathered for the social hour. Four old monks watched the blaring television in a corner of the large room. Two monks played pool, while another looked on from the bar. Not a single head turned when I entered; evidently they knew nothing about my crimes or the FBI's chase.
"Like a drink, Victor?" the scrawny bald monk at the bar said. Then he noticed my bloody clothes and his mouth gaped. In an instant I grasped him by the throat and snapped his spine.
The brothers playing pool had been too absorbed in their game to notice anything. While one, short and olive-skinned, leaned over the table to make a shot, I pounded his head against it with such force that he was dead in an instant. Dazed, his curly-headed opponent tried to fend me off with his pool cue. I snatched it from him, knocked him down, and punctured his heart with it. Blood rushed from his black robe like a geyser.
The monks near the television stared at me in horror, frozen in their chairs. Silver-haired Brother Augustine stood and, gathering his courage, reasoned with me, his frightened eyes darting to the bodies on the floor as he spoke. The other three rose. Two edged toward the door. I grabbed them by the throat and strangled them. When Augustine came at me I did the same to him, before turning to the last monk, the obese cook. His face pale, he backed away, clutching his chest as though his heart was failing. His eyes widened as I squeezed his throat, and he tore at my hands until I crushed his windpipe and he went limp.
I was met at the door by a new young monk, Brother Stephen, who saw the carnage and tried to flee.
"You don't think you can get away, do you?"
His terrified blue eyes looked past me at the door. Barely 20, he'd grown a soft reddish moustache after the monks teased him about his youth.
I brushed it with my fingers. Then I tore off his habit and flung his naked body to the floor.
"Oh God. Let me go, Brother Victor. Don't do this."
But without mercy, I rammed my cock into him, while he screamed in pain. When I was through, I bit into his tender throat and guzzled his blood, sweet as honey.
Maddened by a rage that had only gained momentum with the massacre, I rushed to the chapel, leaped onto the high altar and flung statues from their niches. When piles of plaster lay on the floor, fine white dust sifting down, painted hands and heads scattered over the marble, I wrenched out the tabernacle itself and hurled it into the sanctuary. The bronze doors flew open and released the ciborium, which spilled its store of wafers.
Like a wild ape, I scrambled up the reredos, tore off the crucifix that had taken possession of Michael, and hurled it across the chapel. In that moment, the room shook as though the crash had caused an earthquake. I dropped to the floor and searched the darkness, struggling to keep my balance in the trembling room.
I sensed a presence. "Who is it?" I shouted. "Who is it?" The corpus from the crucifix moved, becoming supple, taking on the color of flesh.
"Joshu! So now you come to me. To hell with you!"
"It's not too late, Victor." The man who rose before me, though radiating an eerie, unnatural light, was the Joshu I had known from life. Wrapped in the loincloth he wore at his execution, blood streaming from the thorns digging into his scalp, he stood before me in all his strength and beauty. "Heaven is not beyond you. I am not beyond you."
"So you haunt me?" I blurted. "You torture souls to bring them to salvation?"
"There's still hope, Victor."
"To hell with you, Joshu. You're a traitor!"
He looked at me as he had once looked at me, not in piety, not in pity, but in devotion, attachment. Then he turned and started back to the cross.
"Joshu, no!"
I ran to him, but my hands passed through his form, as though he were vapor. He mounted the cross and solidified once more into the immobile figure who embodied a sentimental artist's conception of him.
He had jabbed my heart again. He had betrayed me with full knowledge. How many times would we repeat this scene? I would never accept his conditions. To show him, to rebel against his god... that vow I renewed then and there.
# IX
Night Again
# Epilogue
My view of the above-ground tombs was partially obstructed by the tall magnolia in the front yard of the mansion. But through the second-story window I could discern a row of the white mausoleums across the street, protruding like teeth from a black mouth. A crowd near the iron cemetery gates gathered around a bearded tour guide dressed in a turtleneck and jeans, who gestured toward the cemetery. I knew he was giving the speech he gave every night at the same hour. The day after my arrival in New Orleans I'd gone down to listen, hearing him explain the process of "falling through," whereby the decaying body, baked to ashes in the brick grave--in which the internal temperature often exceeded 200 degrees from the intense southern sun--gradually sifted through the stacked racks to the bottom of the grave, where it mingled with the dust of its ancestors. The skull and any bones still intact on the top shelf were swept to the floor when it came time to bury the next corpse. "Falling through" was the passage to an eternal family reunion of sorts.
I had risen an hour before from the mausoleum I had claimed in the cemetery, one belonging to a family whose line had died out, indicated by the last burial date (1935) engraved in the white door. No one would disturb my slumbers in that resting place in New Orleans's Garden District. But the hot chamber had made my sleep fitful, haunted by nightmares of Michael. Upon waking, rather than crossing to the antebellum mansion I'd purchased, I often strolled the streets of the District, brooding as I passed the grand porticos, the cypresses and oaks of other old estates.
Only a week had passed since my flight from the monastery, and I continued to replay in my mind the final scenes there--Michael's death, the massacre of monks, the demolition of the chapel's sanctuary. And of course, the apparition of Joshu, pale and bleeding, once again ready to comfort and to torture my soul.
Excited by his appearance, infuriated by the promise of renewed torment, I set out to conclude the ravage I had begun. With gasoline from the storeroom, I doused the volumes in the library and trailed a stream along the wooden floors throughout the buildings. Then I set the monastery ablaze. From the grounds I watched the flames shoot from exploding windows and lick the roof. Before long, the old rafters and supports burned and the structures, one by one, collapsed. The inferno heated the grounds like a desert sun.
Then, once more, I fled to a new grave, new nights, knowing not what cloister I would next find myself in, or even if I would seek another. But I swore to myself I would not abandon a predator's life until a companion promised me his eternity.
Having had enough of morbid ruminating, I wandered downstairs, out to the street, past the crowd touring the cemetery, and took the streetcar to the French Quarter. Although it was still early, Bourbon Street already reeked of piss and beer. Sleazy barkers beckoned to prospective customers outside the strip clubs. Dixieland and blues blared through open doors of restaurants and bars.
To escape the growing crowd and neon lights, I turned down a dark side street. A row of renovated shotgun houses with colorful shutters gave way to shabby brick buildings whose galleries cast shadows over the cracked, garbage-lined walks. The smell of the river rode on a breeze through the Quarter's maze of brick and wrought iron.
"Got a light?" A shirtless boy in low-waisted jeans stepped out of a doorway. His bleached hair fell to his shoulders. His smooth face, his lustrous, innocent eyes belied his profession.
I lit his cigarette. Then I took him around the corner, far from the nearest streetlight.
# ALSO BY MICHAEL SCHIEFELBEIN
_Blood Brothers*_
_Body and Blood*_
_Vampire Vow*_
_Vampire Thrall*_
_Vampire Transgression*_
_Vampire Maker*_
_*also available as a Jabberwocky ebook_
# THANK YOU FOR READING
_This ebook has been brought to you by JABberwocky Literary Agency, Inc._
Did you enjoy this JABberwocky ebook? Please consider leaving a review at the e-tailer of your choice! To see what other ebooks we have available, visit us at <http://awfulagent.com/ebooks/>.
_Help us make our ebooks better!_
We'd love to hear from you, whether it's just to say how much you liked it, or if you noticed any errors or formatting issues, or if you have any other comments about this title. Send us an email at ebooks@awfulagent.com.
Sincerely,
The JABberwocky Team
Q: HTTPS not working on JBoss AS 7.0.2 I am trying to set up an SSL certificate for one of our sites. We are using JBoss AS 7.0.2 for our application on Ubuntu Server 12.04. The application runs successfully over HTTP, but over HTTPS it does not respond. The server starts successfully without any exceptions. Kindly suggest steps to debug this problem.
<subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host">
<connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http" redirect-port="443"/>
<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" enable-lookups="true" secure="true">
<ssl password="******" certificate-key-file="/mnt/jboss/******" verify-client="false" certificate-file="/mnt/jboss/********.key"/>
</connector>
<virtual-server name="default-host" enable-welcome-root="true">
<alias name="localhost"/>
<alias name="example.com"/>
</virtual-server>
</subsystem>
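Not part of the original question, but a quick way to narrow this down is to distinguish "nothing is listening on the HTTPS port" from "the port is open but TLS is misconfigured". The sketch below is a generic diagnostic probe (host and port are placeholders, not values from the configuration above):

```python
import socket
import ssl

def check_https(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify an HTTPS endpoint: closed port, failed handshake, or working TLS."""
    try:
        raw = socket.create_connection((host, port), timeout=timeout)
    except OSError as exc:
        # Nothing listening: the connector never bound (check socket-binding / firewall).
        return f"port closed: {exc}"
    try:
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # diagnostic probe only, skip verification
        ctx.verify_mode = ssl.CERT_NONE
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # Connector is up and serving TLS; the certificate loaded correctly.
            return f"tls ok: {tls.version()}"
    except (ssl.SSLError, OSError) as exc:
        # Port is open but the handshake fails: typically a bad keystore path,
        # wrong password, or a plain-HTTP connector answering on the HTTPS port.
        raw.close()
        return f"handshake failed: {exc}"
```

If the probe reports `port closed`, the problem is the socket binding or a firewall, not the certificate; `handshake failed` points at the `<ssl .../>` element itself (key file path or password).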
Iain Bradley heads up Data Modernisation within DfE. There, he is responsible for the delivery of Compare School Performance ("performance tables") and Analyse School Performance. His area also owns the National Pupil Database, and is developing the solutions to modernise how data collections are completed. He has also spent 3 years as Chair of Governors at an infant school in Sheffield, where he has had to be a customer of his own services as well as build an awareness of the other data tools used across the sector for performance and improvement. Prior to joining DfE, Iain also worked as an analyst in the NHS and the Department for Work and Pensions.
\section{Introduction}
\label{Sec:Int}
Neutrinoless double beta ($0\nu\beta\beta$) is well-known as a sensitive probe
for lepton number violating (LNV) extensions of the standard model
(SM). Possible contributions to the $0\nu\beta\beta$ decay amplitude beyond
the minimal mass mechanism \footnote{Exchange of a Majorana neutrino
between two SM charged-current vertices leads to an amplitude ${\cal
A}^{0\nu\beta\beta} \propto \langle m_{\nu} \rangle \equiv \sum_i U_{ei}^2 m_i$, the
so-called mass mechanism.} have been discussed in the literature for
many models: Left-right (LR) symmetric extensions of the SM
\cite{Mohapatra:1980yp,Doi:1985dx}, R-parity violating supersymmetry,
both trilinear $R_P \hspace{-1.2em}/\;\:$ \cite{Mohapatra:1986su,Hirsch:1995zi} and
bilinear $R_P \hspace{-1.2em}/\;\:$ \cite{Faessler:1997db,Hirsch:1998kc}, leptoquarks
\cite{Hirsch:1996ye}, sterile neutrinos
\cite{Bamert:1994qh,Benes:2005hn}, composite neutrinos
\cite{Panella:1994nh}, Kaluza-Klein towers of neutrinos in models with
extra dimensions \cite{Bhattacharyya:2002vf}, colour octet scalars
\cite{Choubey:2012ux} or colour sextet diquarks
\cite{Brahmachari:2002xc,Gu:2011ak,Kohda:2012sr}. A recent review of
``exotics'' in $0\nu\beta\beta$ decay can be found, for example, in
\cite{Deppisch:2012nb}.
However, an observation of $0\nu\beta\beta$ decay will not easily be
interpreted as evidence for any specific model. Several ideas to
distinguish different contributions to $0\nu\beta\beta$ decay have been
discussed in the literature, among them are: (i) Measure the angular
distribution of the outgoing electrons
\cite{Doi:1985dx,Arnold:2010tu}; (ii) Compare rates in
$0\nu\beta^+/EC$ decays with $0\nu\beta^-\beta^-$ decays
\cite{Hirsch:1994es}; and (iii) Compare rates of $0\nu\beta^-\beta^-$
decays in different nuclei \cite{Deppisch:2006hb}. In principle, all
these three methods could serve to distinguish the long-range
right-handed current term (denoted $\epsilon^{V+A}_{V+A}$ in our
notation and $\langle\lambda\rangle$ in the notation of
\cite{Doi:1985dx}) from other contributions. However, distinguishing
among all the remaining contributions by measurements from $0\nu\beta\beta$
decay experiments only seems practically impossible, mainly due to the
large uncertainties in the nuclear matrix element calculations.
Contributions to the $0\nu\beta\beta$ decay rate can be divided into a
long-range~\cite{Pas:1999fc} and a short-range~\cite{Pas:2000vn} part.
In long-range contributions a light neutrino is exchanged between two
point-like vertices, not necessarily SM charged-current vertices.
This can lead, in those cases where one of the vertices contains a
violation of $L$ by $\Delta L=2$, to very stringent limits on the new
physics scale $\Lambda/(\lambda_{\mathrm{eff}}^{\rm LNV})\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}$
($100-1000$) TeV. Here, $\lambda_{\mathrm{eff}}^{\rm LNV}$ is some
effective LNV coupling depending on the
model under consideration. In the short-range part of the amplitude,
on the other hand, all exchanged particles are heavy. \footnote{From
the view-point of $0\nu\beta\beta$ decay, heavy means masses greater than
(only) a few GeV, since the scale to compare with is the nuclear Fermi
scale, $p_F\simeq (100-200)$ MeV.} $0\nu\beta\beta$ decay in this case behaves
as a true effective dimension-9 operator:
\begin{equation}\label{eq:effop}
\mathcal{O}^{0\nu\beta\beta}_{d=9} = \frac{c_9}{\Lambda^5}
{\bar u}{\bar u}d d {\bar e}{\bar e}.
\end{equation}
The general decomposition of $\mathcal{O}^{0\nu\beta\beta}_{d=9}$ has very
recently been given in \cite{Bonnet:2012kh}. Using the results of
\cite{Pas:2000vn} and \cite{Bonnet:2012kh}, one finds that current
limits on the $0\nu\beta\beta$ decay half-lives for $^{76}$Ge
\cite{KlapdorKleingrothaus:2000sn} and $^{136}$Xe
\cite{Auger:2012ar,Gando:2012zm}, both of the order of
$T_{1/2}^{0\nu\beta\beta}\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 10^{25}$ yr, correspond to roughly $\Lambda
\raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} (1.2-3.2) g_{eff}^{4/5}$ TeV, where $g_{eff}$ is some effective
coupling (see section \ref{Sec:Pheno}) depending on the exact
decomposition. Obviously, new physics at such scales should be
testable at the LHC.
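The power $g_{eff}^{4/5}$ in this estimate can be traced back to simple
dimensional counting: the short-range diagrams discussed below contain
four vertices, so the amplitude scales schematically as
${\cal A}^{0\nu\beta\beta} \propto g_{eff}^{4}/\Lambda^{5}$ and therefore
\begin{equation*}
\left(T^{0\nu\beta\beta}_{1/2}\right)^{-1} \propto
\left|{\cal A}^{0\nu\beta\beta}\right|^{2} \propto
\frac{g_{eff}^{8}}{\Lambda^{10}}
\qquad\Longrightarrow\qquad
\Lambda \propto g_{eff}^{4/5}\,
\left(T^{0\nu\beta\beta}_{1/2}\right)^{1/10} ,
\end{equation*}
i.e. the bound on $\Lambda$ depends only very weakly, through a tenth
root, on the experimental half-life limit.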
\begin{figure}[tbh]
\hskip-10mm\includegraphics[width=0.35\linewidth]{LNVLR_LHC.eps}
\hskip10mm\includegraphics[width=0.5\linewidth]{LHC0nbb_LR.eps}
\caption{Example diagram (left) and comparison of sensitivities of the LHC
experiments with $0\nu\beta\beta$ decay (right) for a (manifest) left-right symmetric
extension of the standard model. Shown are the expected contours for
the $0\nu\beta\beta$ decay half-life of $^{76}$Ge (contours for $^{136}$Xe are
very similar) in comparison with the excluded regions from two recent
experimental studies by ATLAS \cite{ATLAS:2012ak} and CMS
\cite{CMS:PAS-EXO-12-017} in the plane $m_{N}-m_{W_R}$, see text.}
\label{fig:LHCLR}
\end{figure}
The prototype example of a short-range contribution which has been
discussed both for double beta decay and at the LHC are diagrams
mediated by right-handed W-bosons arising in left-right-symmetric
extension of the Standard Model
\cite{Mohapatra:1980yp,Keung:1983uu}. Here, $W_R$ can be produced
resonantly on-shell at the LHC and will decay to a right-handed
neutrino plus charged lepton, see fig. (\ref{fig:LHCLR}). The
right-handed neutrino decays via an off-shell $(W_R)^*$; the signal
thus consists of both like-sign and opposite-sign dileptons with (at least)
two jets and no missing energy
\cite{Keung:1983uu,Ferrari:2000sp,Gninenko:2006br,Bansal:2009jx}.
Fig. (\ref{fig:LHCLR}) shows a comparison of sensitivities of the LHC
experiments with $0\nu\beta\beta$ decay within this framework. Contours show
the expected half-lives for $^{76}$Ge $0\nu\beta\beta$ decay using the nuclear
matrix elements of \cite{Hirsch:1996qw}. Note that contours for the
$0\nu\beta\beta$ decay of $^{136}$Xe are very similar. Also shown are the
excluded regions from two recent experimental studies by ATLAS
\cite{ATLAS:2012ak} and CMS \cite{CMS:PAS-EXO-12-017}. Both LHC
analyses assume that the coupling of the right-handed $W$ boson to
fermions has exactly the same strength as the SM $W$ boson coupling to
fermions (``manifest left-right symmetry'') and the $0\nu\beta\beta$ decay
half-lives have therefore been calculated with the same value of the
coupling. However, the $0\nu\beta\beta$ decay rate sums over all mass
eigenstates $m_{N_i}$, which couple to electrons, while the LHC
experiments assume that $m_N$ appears on-shell in the $W_R$
decay. Thus, this comparison is strictly valid only if (i) only one
heavy neutrino appears in the LHC decay chain and (ii) this neutrino
couples only to electron pairs (i.e. possible generation mixing is
neglected for simplicity here). Note that limits on $W_R$ combining
the electron and muon channels at the LHC are slightly more stringent
than the ones shown in fig. (\ref{fig:LHCLR})
\cite{CMS:PAS-EXO-12-017}.
We mention in passing that resonant slepton production in R-Parity
violating SUSY leads to the same like-sign dilepton signal
\cite{Dreiner:2000vf}. The connection of $R_P \hspace{-1.2em}/\;\:$ SUSY at the LHC with
double beta decay has been studied in
\cite{Allanach:2009iv,Allanach:2009xx}.
A variant of the diagram in
fig. \ref{fig:LHCLR}, but with SM $W_{L}$ bosons and a heavy sterile
neutrino $N$ mixing with the active ones, also represents a mechanism of
$0\nu\beta\beta$ decay, whose implications for the LHC have been studied in
\cite{Kovalenko:2009td}.
In this paper we will generalize this comparison between double beta
decay and LHC to the complete list of short-range decompositions
(``diagrams'') of topology-I (see next section) worked out in
\cite{Bonnet:2012kh}. We consider singly charged scalar bosons,
leptoquarks and diquarks, as well as the coloured fermions, which
appear in the general decomposition of the neutrinoless double beta
operator. We mention in passing that a brief summary of our main
results has been presented before at a conference \cite{Varzielas:2012as}
and in \cite{Helo:2013dla}.
The rest of this paper is organized as follows: In section
\ref{Sec:Dec} we briefly review the general decomposition of
$\mathcal{O}^{0\nu\beta\beta}_{d=9}$ developed in \cite{Bonnet:2012kh} and make
contact with the Lorentz-invariant parametrization of the decay rate
worked out in \cite{Pas:2000vn}. Section \ref{sect:xsect} discusses
the production cross sections for different scalars. A
numerical analysis, comparing current and future LHC sensitivities
with double beta decay, case-by-case for all possible scalar
contributions to the $0\nu\beta\beta$ decay rate, is then performed in section
\ref{Sec:Pheno}. We then turn to the question of whether different
models (or decompositions) could actually be distinguished at the LHC,
if a positive signal were found in the future. We discuss two types
of observables which make this possible. First, in section \ref{SubSec:CA}
we discuss the ``charge asymmetry'', i.e. the ratio of the number of
positron-like to electron-like dilepton (plus jets) events. We then
turn to the discussion of invariant mass peaks in \ref{SubSec:MP}.
A joint analysis of charge asymmetry and invariant mass peaks would
allow one to identify the dominant contribution to double beta decay.
Finally, we close with a short summary of our main results.
\section{Decomposition of the $d=9$ $0\nu\beta\beta$ decay operator}
\label{Sec:Dec}
In order to compare the sensitivities of the LHC and
$0\nu\beta\beta$ decay experiments, input from both particle and nuclear physics
is needed. In this section we will briefly recapitulate the main
results of two papers \cite{Pas:2000vn,Bonnet:2012kh}, which we will
use in the later parts of this work. In \cite{Bonnet:2012kh} a general
decomposition of the $d=9$ $0\nu\beta\beta$ decay operator was given. This work
allows one to identify all possible contributions to $0\nu\beta\beta$ decay from
the particle physics point of view. Moreover, it makes contact with
the general Lorentz-invariant parametrization of the $0\nu\beta\beta$ decay rate
of \cite{Pas:1999fc,Pas:2000vn}. These latter papers developed a
general formalism and gave numerical values for nuclear matrix
elements, which allow one to calculate the expected $0\nu\beta\beta$ decay
half-lives for any particle physics model.
\begin{figure}[tbh]
\hskip10mm\includegraphics[width=0.9\linewidth]{TopoIandII.eps}
\caption{\it \label{Fig:0nbbTopologies}The two basic tree-level
topologies realizing a $d=9$ $0\nu\beta\beta$ decay operator. External
lines are fermions; internal lines can be fermions (solid), scalars
(dashed) or vectors instead of scalars (not shown). For T-I there are
in total 3 possibilities classified as: SFS, VFS and VFV.}
\end{figure}
We start by recalling that there are only two basic topologies which
can generate the double beta decay operator at tree-level. These are
shown in fig. (\ref{Fig:0nbbTopologies}); for brevity we will call
them T-I and T-II in the following. While all outer particles in these
diagrams are fermions, internal particles can be scalars, fermions or
vectors. For topology-I (T-I) all three possible combinations (SFS,
VFS and VFV) can lead to models which give sizeable contributions to
$0\nu\beta\beta$ decay. Note that for T-II one derivative coupling (cases VVV
and SVV) or one dimensionful vertex (cases SSS and VVS) is needed.
We plan to deal with T-II, which requires a slightly more complicated
analysis, in a future publication \cite{Helo:2013xx}.
\begin{table}[h]
\begin{center}
\begin{tabular}{ccccc}
\hline \hline
&&
\multicolumn{3}{c}{Mediator $(Q_{\rm em}, SU(3)_{c})$}
\\
\# & Decomposition & $S$ or $V_{\rho}$ & $\psi$ & $S'$ or $V'_{\rho}$
\\
\hline
1-i
&
$(\bar{u} d) (\bar{e}) (\bar{e}) (\bar{u} d)$
&
$(+1, {\bf 1}\oplus{\bf 8})$
&
$(0, {\bf 1}\oplus{\bf 8}) $
&
$(-1, {\bf 1}\oplus{\bf 8})$
\\
1-ii-a
&
$(\bar{u} d) (\bar{u}) (d) (\bar{e} \bar{e})$
&
$ (+1, {\bf 1}\oplus{\bf 8}) $
&
$ (+ 5/3, {\bf 3})$
&
$ (+2, {\bf 1})$
\\
1-ii-b
&
$(\bar{u} d) (d) (\bar{u}) (\bar{e} \bar{e})$
&
$(+1, {\bf 1}\oplus{\bf 8}) $
&
$(+4/3, \overline{\bf 3})$
&
$(+2, {\bf 1})$
\\
\hline
2-i-a
&
$(\bar{u} d) (d) (\bar{e}) (\bar{u} \bar{e})$
&
$(+1, {\bf 1}\oplus{\bf 8}) $
&
$(+4/3, \overline{\bf 3})$
&
$(+1/3, \overline{\bf 3})$
\\
2-i-b
&
$(\bar{u} d) (\bar{e}) (d) (\bar{u} \bar{e})$
&
$(+1, {\bf 1}\oplus{\bf 8})$
&
$ (0,{\bf 1}\oplus{\bf 8})$
&
$ (+1/3, \overline{\bf 3})$
\\
2-ii-a
&
$(\bar{u} d) (\bar{u}) (\bar{e}) (d \bar{e})$
&
$(+1, {\bf 1}\oplus{\bf 8})$
&
$ (+5/3,{\bf 3})$
&
$ (+2/3, {\bf 3})$
\\
2-ii-b
&
$(\bar{u} d) (\bar{e}) (\bar{u}) (d \bar{e})$
&
$(+ 1,{\bf 1}\oplus{\bf 8})$
&
$ (0,{\bf 1}\oplus{\bf 8})$
&
$ (+ 2/3, {\bf 3})$
\\
2-iii-a
&
$(d \bar{e}) (\bar{u}) (d) (\bar{u} \bar{e})$
&
$ (- 2/3, \overline{\bf 3})$
&
$ (0, {\bf 1}\oplus{\bf 8})$
&
$ (+ 1/3, \overline{\bf 3})$
\\
2-iii-b
&
$(d \bar{e}) (d) (\bar{u}) (\bar{u} \bar{e})$
&
$ (- 2/3, \overline{\bf 3})$
&
$ (-1/3, {\bf 3_a}\oplus\overline{\bf 6_s}) $
&
$ (+ 1/3, \overline{\bf 3})$
\\
\hline
3-i
&
$(\bar{u} \bar{u}) (\bar{e})(\bar{e}) (dd)$
&
$ (+ 4/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$ (+1/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$(- 2/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s})$
\\
3-ii
&
$(\bar{u} \bar{u}) (d) (d) (\bar{e} \bar{e})$
&
$(+ 4/3, \overline{\bf 3}_{\bf a}\oplus {\bf 6_s}) $
&
$ (+5/3, {\bf 3})$
&
$(+2, {\bf 1}) $
\\
3-iii
&
$(dd) (\bar{u}) (\bar{u}) (\bar{e} \bar{e})$
&
$ (+ 2/3, {\bf 3}_{\bf a}\oplus\overline{\bf 6}_{\bf s}) $
&
$ (+4/3, \overline{\bf 3}) $
&
$ (+ 2, {\bf 1}) $
\\
\hline
4-i
&
$(d \bar{e}) (\bar{u}) (\bar{u}) (d \bar{e})$
&
$(- 2/3, \overline{\bf 3})$
&
$( 0, {\bf 1}\oplus{\bf 8}) $
&
$ (+ 2/3, {\bf 3}) $
\\
4-ii-a
&
$(\bar{u} \bar{u}) (d) (\bar{e}) (d \bar{e})$
&
$(+ 4/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$ (+5/3, {\bf 3})$
&
$ (+ 2/3, {\bf 3}) $
\\
4-ii-b
&
$(\bar{u} \bar{u}) (\bar{e}) (d) (d \bar{e})$
&
$ (+ 4/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$ (+1/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$ (+ 2/3, {\bf 3}) $
\\
\hline
5-i
&
$(\bar{u} \bar{e}) (d) (d) (\bar{u} \bar{e})$
&
$ (- 1/3, {\bf 3}) $
&
$(0, {\bf 1}\oplus{\bf 8}) $
&
$ (+ 1/3, \overline{\bf 3}) $
\\
5-ii-a
&
$(\bar{u} \bar{e}) (\bar{u}) (\bar{e}) (dd)$
&
$ (- 1/3, {\bf 3}) $
&
$ (+1/3,\overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
&
$ (- 2/3,\overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
\\
5-ii-b
&
$(\bar{u} \bar{e}) (\bar{e}) (\bar{u}) (dd)$
&
$ (- 1/3, {\bf 3}) $
&
$ (-4/3, {\bf 3})$
&
$(- 2/3, \overline{\bf 3}_{\bf a}\oplus{\bf 6_s}) $
\\
\hline \hline
\end{tabular}
\end{center}
\caption{\it \label{Tab:TopoI} General decomposition of the $d=9$
operator ${\bar u} {\bar u} dd{\bar e}{\bar e}$ for topology~I.
The chirality of the outer fermions is left unspecified; the mediators
are therefore characterized by their electric charge $Q_{\rm em}$ under
$U(1)_{\rm em}$ and their colour representation under $SU(3)_{c}$. The symbols $S$ and $S'$ denote
scalars, $V_\rho$ and $V_\rho'$ vectors, and $\psi$ a fermion. The
table follows the recent paper \cite{Bonnet:2012kh}, where more complete
tables including chiralities can be found.}
\end{table}
There exist a total of 18 possibilities for assigning the fermions to
the outer legs of T-I in fig. (\ref{Fig:0nbbTopologies}).
These are listed in table (\ref{Tab:TopoI}), together
with the electric charge and possible colour transformation properties
of the intermediate state particles. Note that in these tables the
chiralities of the fermions are not given, thus the hypercharge of the
mediators is not fixed. We will come back to this point below.
The table is valid for both scalars and vectors, although later
on we will concentrate on the scalar case. The results for vectors
are very similar (apart from some minor numerical factors), so we
will only briefly comment on these differences in our numerical
analysis.
Table (\ref{Tab:TopoI}) contains six decompositions in which the
intermediate state fermion has zero electric charge. All T-I like
contributions to $0\nu\beta\beta$ decay, discussed in the literature prior to
\cite{Bonnet:2012kh}, are variants of these six decompositions. Just
to mention two examples, T-I-1-i with vectors coupling to right-handed
fermions corresponds to the $W_R-N-W_R$ exchange diagram of the
left-right symmetric model, discussed briefly in the introduction,
while the up-squark diagram of trilinear $R$-parity breaking
supersymmetry \cite{Mohapatra:1986su,Hirsch:1995zi} is classified as
SFS of T-I-4-i with chirality $({\bar u}_Lu_R^c)({\bar e}_Ld_R)({\bar
e}_Ld_R)$. The remaining 12 decompositions all require fractionally
charged fermions with non-trivial colour transformation
properties. They also require the presence of either diquarks or
leptoquarks or both.
For most, but not all, possibilities listed in table (\ref{Tab:TopoI})
two choices for the colour of the intermediate states
exist. This is a straightforward consequence of the SU(3)
multiplication rules: ${\bf 3}\otimes {\bf \bar 3}= {\bf 1} + {\bf 8}$
and ${\bf 3}\otimes {\bf 3}= {\bf\bar 3}_a + {\bf 6}_{\bf s}$. The
exception is the case of scalar diquarks, where in all cases except
2-iii-b only the ${\bf 6}_{\bf s}$ contributes, since the (scalar)
anti-triplet coupling to two identical fermions vanishes
\cite{Han:2010rf}.
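As a quick cross-check of this dimension counting (a minimal sketch; the helper functions below are illustrative and not part of any standard package):

```python
# Dimension counting for products of SU(N) fundamentals: the symmetric
# part of n x n has dimension n(n+1)/2, the antisymmetric part n(n-1)/2.
def sym_dim(n: int) -> int:
    """Dimension of the symmetric product of two n-dimensional irreps."""
    return n * (n + 1) // 2

def antisym_dim(n: int) -> int:
    """Dimension of the antisymmetric product."""
    return n * (n - 1) // 2

# 3 x 3 = 3bar_a + 6_s: the antitriplet is the antisymmetric part,
# which is why a scalar antitriplet coupling to two identical fermions vanishes.
assert antisym_dim(3) == 3 and sym_dim(3) == 6

# 3 x 3bar = 1 + 8: the total dimensions match as well.
assert antisym_dim(3) + sym_dim(3) == 3 * 3 == 1 + 8
```

The asserts simply verify that the dimensions on both sides of the quoted multiplication rules agree.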
Fig. (\ref{fig:Diags}) shows some example diagrams, corresponding
to the decompositions (1-i) (diagram a); (2-iii-a) (diagram b);
(1-ii) (diagram c) and (3-i) (diagram d). These examples contain
at least one example for each of the six different scalars and the
four different fermions which appear in table (\ref{Tab:TopoI}).
Diagrams for all other decompositions can be straightforwardly derived
using the table. Note that assigning all outer fermions to be right-handed
and replacing $S_{+1}$ by a vector corresponds to the diagram for
the LR-symmetric model, discussed in the introduction.
\begin{figure}[htb]
\centering
\begin{tabular}{cc}
\includegraphics[width=1.0\linewidth]{ExaDcmp.eps}
\end{tabular}
\vskip-3mm
\caption{Example diagrams for short-range double beta decay, see text.}
\label{fig:Diags}
\end{figure}
In \cite{Pas:1999fc,Pas:2000vn} a general Lorentz-invariant description
of the $0\nu\beta\beta$ decay rate has been derived. The Lagrangian for the
short-range part of the amplitude can be written as \cite{Pas:2000vn}
\begin{eqnarray}
\mathcal{L}=\frac{G_F^2}{2}m_p^{-1}\left(
\epsilon_1 JJj+\epsilon_2 J^{\mu\nu}J_{\mu\nu}j
+\epsilon_3 J^{\mu}J_{\mu}j+\epsilon_4 J^{\mu}J_{\mu\nu}j^{\nu}
+\epsilon_5J^{\mu}Jj_{\mu}\right)\, .
\label{eps_short}
\end{eqnarray}
Here we omitted the chiral indices for clarity. However, for the case
of $\epsilon_3-\epsilon_5$, where chirality changes play a role in the value
of the neutrinoless double beta decay rate, the indices need to be kept.
\noindent
The low-energy hadronic and leptonic currents appearing
in eq.(\ref{eps_short}) are defined as:
\begin{eqnarray}\label{eq:Currents}
J^\mu_{V\pm A}&=&\overline{u}\gamma^{\mu}(1\pm\gamma_5)d\,, \ \ J_{S\pm P}=\overline{u}(1\pm\gamma_5)d\,, \ \
J^{\mu\nu}=\overline{u}\sigma^{\mu\nu} d\,,
\\ \nonumber
j^{\mu}_{A} &=& \overline{e}\gamma^{\mu} \gamma_5 e^c\,, \hspace{13mm} j_{S\pm P} =\overline{e}(1 \pm \gamma_5)e^c\,.
\end{eqnarray}
Note that the vectorial leptonic current $j_{V}^{\mu} =
\bar{e}\gamma^{\mu} e^{c}$ is identically zero. Also the quark tensor
operator $\bar{u} \sigma^{\mu\nu} \gamma_{5} d$ is not included in the
list above, since it is reducible to $J^{\mu\nu}$.
The hadronic currents in eq. (\ref{eq:Currents}) are expressed in terms
of standard operators $({\bar u}{\cal O}d)$, adequate for the description
of double beta decay, a low-energy process in which neutrons are converted
into protons in a nucleus. The decompositions of table
(\ref{Tab:TopoI}), on the other hand, are given
in terms of quark currents. The latter can be brought into the standard
form, eq. (\ref{eq:Currents}), by performing a Fierz transformation,
extracting the relevant colour singlet piece(s). The corresponding
calculations do depend on the chiralities of the outer fermions.
Once the coefficients for the basic operators of eq.(\ref{eps_short})
have been calculated for any given decomposition, one can write the
corresponding inverse half-life as a product of three distinct factors
\begin{equation}\label{eq:Tinv}
\left(T^{0\nu\beta\beta}_{1/2}\right)^{-1}
= G (\sum_i \epsilon_i {\cal M}_i)^2 .
\end{equation}
Here, $G$ is the leptonic phase space integral. Numerical values for
$G$ can be calculated accurately, see for example \cite{Doi:1985dx}.
${\cal M}_i$ are the nuclear matrix elements; they are different for
the different $\epsilon_i$.
Their numerical values for $^{76}$Ge can be
found in \cite{Pas:2000vn}, for other isotopes see \cite{Deppisch:2012nb}.
\footnote{For a recent review of the nuclear structure theory behind the
calculation of ${\cal M}_{i}$ see, for instance, Ref. \cite{Faessler:2012ku}
and references therein.}
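To make the factorization in eq. (\ref{eq:Tinv}) concrete, a minimal numerical sketch follows; all input numbers are purely illustrative placeholders, not the actual phase space factors or matrix elements of \cite{Doi:1985dx,Pas:2000vn}:

```python
# Illustrative evaluation of (T_1/2)^(-1) = G * (sum_i eps_i M_i)^2.
# All numerical inputs below are hypothetical placeholders.
def inverse_half_life(G, eps, M):
    """Inverse half-life from phase space G, effective couplings eps_i
    and nuclear matrix elements M_i (consistent units assumed)."""
    amplitude = sum(e * m for e, m in zip(eps, M))
    return G * amplitude ** 2

G_ps = 1.0e-14           # hypothetical phase-space factor, in 1/yr
eps = [1.0e-8, 0.0]      # hypothetical effective couplings eps_1, eps_2
M = [100.0, 50.0]        # hypothetical matrix elements M_1, M_2

T_half = 1.0 / inverse_half_life(G_ps, eps, M)   # about 1e26 yr with these inputs

# The rate is quadratic in the couplings: halving eps_1 quarters the rate.
ratio = inverse_half_life(G_ps, [0.5e-8, 0.0], M) / inverse_half_life(G_ps, eps, M)
assert abs(ratio - 0.25) < 1e-12
```

The quadratic dependence on the couplings is the reason why an order-of-magnitude improvement in the half-life limit tightens the bound on the $\epsilon_i$ only by about a factor of three.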
\section{Cross sections}
\label{sect:xsect}
In the following we discuss the production cross sections for charged
scalars ($S_{+1}$) and the two different cases each for diquarks
($S^{DQ}_{4/3}$ and $S^{DQ}_{2/3}$) and leptoquarks ($S^{LQ}_{1/3}$
and $S^{LQ}_{2/3}$) as well as their respective antiparticles. These
five cross sections, plus the corresponding ones for vectors, are in
principle sufficient to test all 18 decompositions of the double beta
decay operator in topology-I. The $S^{DQ}_{4/3}$ occurs in
decompositions 3-i, 3-ii, 4-ii. The $S^{DQ}_{2/3}$ occurs in 3-i,
3-iii, 5-ii. The leptoquark states $S^{LQ}_{1/3}$ (and $S^{LQ}_{2/3}$)
appear in 2-i, 2-iii, and all of 5 (and 2-ii, 2-iii and all of 4,
respectively). Finally, $S_{+1}$ appears in all of 1 and in 2-i and
2-ii. Examples of Feynman diagrams are shown in fig. \ref{fig:Diags}
and fig. \ref{fig:LQprod}. Note that the list of Feynman diagrams is
far from complete.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.5\linewidth]{Prod_LQ23_Dom.ps}
\includegraphics[width=0.5\linewidth]{Prod_LQ23_SubDom.ps} \\
\includegraphics[width=0.5\linewidth]{Prod_LQ13_Dom.ps}
\includegraphics[width=0.5\linewidth]{Prod_LQ13_SubDom.ps}
\end{tabular}
\vskip-3mm
\caption{Example diagrams for single leptoquark production, followed
by LNV decay, at the LHC. For discussion see text.}
\label{fig:LQprod}
\end{figure}
We have implemented the corresponding Lagrangian terms given in
Appendix A into CalcHEP \cite{Pukhov:2004ca} and MadGraph5
\cite{Alwall:2011uj} for the calculation of cross sections. Example
results are displayed in figures (\ref{fig:xsectTI}) and
(\ref{fig:xsectTILQ}).
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{XSec_TI_14TeV.eps}
&\includegraphics[scale=0.7]{XSec_TI_14TeV_CA.eps}
\end{tabular}
\end{center}
\caption{\label{fig:xsectTI} Production cross sections in pb at the
LHC with $\sqrt{s}=14$ TeV for five different scalars: $S^{DQ}_{4/3}$,
$S^{DQ}_{2/3}$, $S_{+1}$, $S^{LQ}_{1/3}$ and $S^{LQ}_{2/3}$. To the
left, the production of the dominantly produced charge state is
shown (compare the discussion of the charge asymmetry in
\ref{SubSec:CA}). Depicted on the right is the production cross section for
its anti-particle -- the scalar with the sub-dominantly produced
charge. For $S_{+1}$, $\sigma(pp\to S_{+1})$ is only a factor
$(2-3.5)$ larger than $\sigma(pp\to {\bar S_{+1}})$. For other cases
much larger ratios are found, for discussion see text.}
\end{figure}
Figure (\ref{fig:xsectTI}) shows cross sections in pb for five
different scalars at LHC c.m.s. energy of $\sqrt{s}=14$ TeV. We show
$\sigma(pp \to ee + jets)/(g^2 BR)$, where $g$ stands generically for
the coupling entering the production cross section of the scalar,
``jets'' stands generically for any number of jets, and $BR$ is the
branching ratio to the final LNV state. The cross sections shown are
for colour sextets in the case of diquarks and colour triplets in the
case of leptoquarks. Note that for scalar diquarks coupling to the same
generation of quarks, only the sextet coupling is non-zero. For the
charged scalar, $S_{+1}$, we show the result for the colour
singlet. The cross section for a singly charged colour octet is larger
by a colour factor of $n_c=4/3$.
In fig. (\ref{fig:xsectTI}) to the left we show the ``dominant-sign''
production cross section, while to the right ``wrong-sign'' charge
production cross sections are shown. In the case of $S_{+1}$, $S^{DQ}_{4/3}$
and $S^{LQ}_{1/3}$ the positive sign of the charge has the larger
cross section, while for the remaining cases of $S^{DQ}_{2/3}$ and
$S^{LQ}_{2/3}$ the negative sign is the dominant production mode.
The ratio of dominant to subdominant cross section is, however,
different for different scalars: For $S_{+1}$ it is in the range
of ($2-3.5$) in the mass range shown, while for the other cases
much larger ratios, strongly depending on the mass of the scalar
can be found. This ``asymmetry'' in cross sections forms the basis
of the observable ``charge asymmetry'', which we will discuss later
in this paper.
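In its simplest form, the observable alluded to here is just the ratio of dominant- to wrong-sign dilepton events; a minimal sketch, with purely hypothetical cross-section values:

```python
# Charge ratio of dominant- to subdominant-sign dilepton events,
# assuming equal acceptances and branching ratios for both charge states.
# The cross-section values below are hypothetical placeholders.
def charge_ratio(sigma_dominant, sigma_subdominant):
    """Expected ratio of dominant- to wrong-sign dilepton event counts."""
    return sigma_dominant / sigma_subdominant

# For S_{+1} the text quotes a ratio in the range (2-3.5); e.g. with
# hypothetical sigma(+) = 0.6 pb and sigma(-) = 0.2 pb:
r = charge_ratio(0.6, 0.2)   # roughly 3
assert 2.0 <= r <= 3.5
```

Since the mass dependence of this ratio differs strongly between the scalars, measuring it helps separate the candidate mediators.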
While the charged scalar and the diquark states can be singly produced
in an s-channel resonance, as shown in fig. \ref{fig:Diags}, thus
leading to large cross sections, in the case of leptoquarks the scalar LQ
is necessarily always produced in association with a lepton, see
fig. \ref{fig:LQprod}, explaining the much smaller cross sections seen
in fig. \ref{fig:xsectTI}. While the signal for diquarks (and the
charged scalar) is therefore the ``classical'' $eejj$-signal with a
mass peak in $m_{eejj}^2=m_{S_i}^2$, for LQs the signal is $ee$ with
at least three hard jets, a broader distribution in $m_{eejjj}^2$ and a
mass peak in $m_{e_2jjj}^2=m_{S^{LQ}_j}^2$, see also section \ref{SubSec:MP}.
As shown in fig. (\ref{fig:LQprod}) leptoquarks can be produced in
association with a standard model lepton (electron/positron) or
together with one of the exotic fermions, $\psi$, see table
\ref{Tab:TopoI}. We have calculated $\sigma(pp\to S^{LQ}_{q}+\psi)$
for several different values of $m_{\psi}$ for both types of LQs
(and both types of electric charge).
In fig. (\ref{fig:xsectTILQ})
the results for these
cross sections are compared with
$\sigma(pp\to S^{LQ}_{q}+e)$.
Usually, $\sigma(pp\to S^{LQ}_{q}+\psi)$ is smaller than
$\sigma(pp\to S^{LQ}_{q}+e)$, due to the kinematical price
of producing a heavy $\psi$ in addition to the heavy LQ. However,
for the particular case of $S^{LQ}_{2/3}$, if $m_{\psi}\ll m_{S^{LQ}_{2/3}}$
the cross section for LQ plus exotic fermion production can be as
large as (or slightly larger than) $\sigma(pp\to S^{LQ}_{q}+e)$, because
the proton contains twice as many valence up-quarks as down-quarks.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{AllLQ23Xsect.eps}\hskip5mm
&\includegraphics[scale=0.7]{AllLQ13Xsect.eps}
\end{tabular}
\end{center}
\caption{\label{fig:xsectTILQ} Production cross sections in pb at the
LHC with $\sqrt{s}=14$ TeV for $S^{LQ}_{2/3}$ (left) and
$S^{LQ}_{1/3}$ (right). We show separately cross sections leading to
$e^-e^-$ and $e^+e^+$ final states. For the case of $S^{LQ}_{q}+\psi$
production, we show cross sections for three different choices of
$m_{\psi}$: $m_{\psi}=0.5$ TeV - full lines; $m_{\psi}=1.0$ TeV -
dot-dashed lines; $m_{\psi}=2.0$ TeV - dotted lines. Usually,
$\sigma(pp\to S^{LQ}_{q}+\psi)$ is smaller than $\sigma(pp\to S^{LQ}_{q}+e)$,
except in the case of $S^{LQ}_{2/3}$ when $m_{\psi}\ll m_{S^{LQ}_{2/3}}$.}
\end{figure}
Note that $\sigma(pp\to S^{LQ}_{2/3}+e^-)$ contributes to $e^-e^-$
type of events, while $\sigma(pp\to S^{LQ}_{2/3}+\psi)$ contributes to
$e^+e^+$ type of events. (Both contribute to $e^-e^+$ events.) For the
$S^{LQ}_{1/3}$, the charges of the leptons are reversed, see
fig. (\ref{fig:LQprod}). This will be important for the charge asymmetry,
discussed in section \ref{SubSec:CA}.
The only boson appearing in the decompositions shown in table
\ref{Tab:TopoI}, for which the single production cross section at
the LHC is not included in fig. (\ref{fig:xsectTI}), is the doubly
charged scalar, $S_{+2}$. Note, however, that for all decompositions
in which $S_{+2}$ appears, the other boson in the diagram is one of the
five, for which cross sections are shown in fig. (\ref{fig:xsectTI}).
In fact, most of the decompositions of table \ref{Tab:TopoI} have two
different bosons in the left and right part of the diagrams and
thus, if both are within reach of the LHC, will lead to multiple
``bumps'' in the invariant mass distribution of $eejj$ (or $e_2jjj$),
see the discussion in section \ref{SubSec:MP}.
Finally, we have also calculated the pair production cross section for
$\sigma(pp \to \psi_{1/3}{\bar \psi_{1/3}})$. For coloured fermions at
the LHC, the production cross section is dominated by gluon-gluon
fusion, thus $\sigma(pp \to \psi_{4/3}{\bar \psi_{4/3}})$ and
also $\sigma(pp \to \psi_{5/3}{\bar \psi_{5/3}})$ have very similar
values, while for a charge-neutral colour octet cross sections are larger
than for the case $\sigma(pp \to \psi_{1/3}{\bar \psi_{1/3}})$ by a
corresponding colour factor. Pair production of
coloured fermions provides a different signal as a test for double beta
decay, since the minimal number of jets here is 4 (compared to 2 or 3
in all other cases). Cross sections are
larger than 1 fb up to masses of around 2 TeV and larger than 0.1 fb
up to 2.5 TeV. We will also briefly discuss invariant mass peaks for pair
production in section \ref{SubSec:MP}.
\section{Phenomenology}
\label{Sec:Pheno}
\subsection{Status of related LHC searches}
\label{subsect:lhcstat}
Both the ATLAS \cite{ATLAS:2012ak} and the CMS \cite{CMS:PAS-EXO-12-017}
collaborations have published searches for events with dilepton plus
jets (``$eejj$''). In both cases, limits on right-handed $W$-bosons
and heavy right-handed neutrinos, motivated by the left-right symmetric
extension of the standard model \cite{Pati:1974yy,Mohapatra:1974gc},
have been derived, see fig. (\ref{fig:LHCLR}). The search is based
on the assumption that an on-shell $W_R$ is produced, decaying to
an on-shell right-handed neutrino, i.e. $W_R \to l_1 N_l \to
l_1 l_2 W_R^* \to l_1 l_2 jj$ \cite{Keung:1983uu}, producing two
mass peaks in $m_{eejj}$ and $m_{e_2jj}$.
The ATLAS collaboration used $2.1$ fb$^{-1}$ of statistics at
$\sqrt{s}=7$ TeV and searched for both like-sign and opposite-sign
dileptons plus any number of jets. A number of cuts are applied to
the data, the most important ones for us are: Leptons have to be
isolated, with $p_T > 25$ GeV, and the dilepton invariant mass
$m_{ll}$ is required to be greater than 110 GeV. In addition,
at least one jet has to have $p_T>20$ GeV. For larger mass differences
between $W_R$ and $N$, the $N$ is significantly boosted, such that
the two jets from the decay $N\to ljj$ are identified as a single jet.
Such events are taken into account in the analysis and, according to
\cite{ATLAS:2012ak}, account for up to half of the signal events. Invariant
masses of the $m_{lljj}$ or $m_{llj}$ systems are then required to
be larger than 400 GeV.
The main backgrounds have been identified, partly by Monte Carlo
(MC) simulation and partly by data-driven methods, and depend on the final state
(like-sign (SS) versus opposite-sign (OS), as well as electrons versus
muons). For like-sign electrons the main background comes from ``fake
lepton events'', i.e. $W+j$, $t{\bar t}$ and QCD multi-jet production,
where one or more of the jets is misidentified as an electron. For OS
leptons, the main backgrounds are $(Z/\gamma)^*+j$ and $t{\bar t}$
events. The background for OS leptons is larger than for SS leptons by
a considerable factor ($\sim 5$), but since rough agreement between MC
and the observed number of events is found in both cases, the resulting upper
limits on signal cross sections are similar.
Unfortunately, \cite{ATLAS:2012ak} does not give upper limits on
$\sigma\times {\rm Br}(eejj)$ as a function of $m_{eejj}$, nor does
ATLAS provide individual data sets for $e^-e^-$ and $e^+e^+$. Results
are instead presented as excluded areas in the plane ($m_N,m_{W_R}$)
for SS+OS (called ``Majorana case''), see fig. (\ref{fig:LHCLR}), and
OS-only (``Dirac case''), combining muon-type and electron-type events
and assuming $g_R=g_L$. \footnote{The classification into ``Majorana''
and ``Dirac'' case is done, since ATLAS assumes in its analysis that
the fermion produced is a heavy neutrino. A Dirac neutrino will remember
its lepton number and thus produce only electrons (positrons) in its
decay, if the $W_R$ decayed to neutrino plus positron (electron). Thus,
for the Dirac case only opposite sign lepton events are produced. An
on-shell Majorana neutrino, on the other hand, will decay with 50 \%
branching ratio into electrons and positrons each, thus producing
both SS and OS events.}
The CMS analysis \cite{CMS:PAS-EXO-12-017} is based on 3.6 fb$^{-1}$
of data at $\sqrt{s}=8$ TeV. In their analysis, the leading lepton has
to have $p_T>60$ GeV, the subleading lepton $p_T>40$ GeV, jet
candidates $p_T>40$ GeV, as well as $m_{ll}>200$ GeV and
$m_{lljj}>600$ GeV. Events are separated into electron-like and
muon-like and separately analysed, but no charge separation within the
two sets is given; limits apply to the sum of events in the SS and OS
channels. Due to the stronger cuts on the invariant masses, absolute
background numbers in the CMS study \cite{CMS:PAS-EXO-12-017} are
similar or smaller than the corresponding background numbers in the
ATLAS study \cite{ATLAS:2012ak} despite the larger data sample.
Main backgrounds are again $(Z/\gamma)^*+j$ and $t{\bar t}$ events,
the number of events from misidentified leptons from QCD is much
smaller. The resulting limits in the plane ($m_N,m_{W_R}$) are
stronger than those given by \cite{ATLAS:2012ak}, mostly due to the
larger statistics (and also larger $\sqrt{s}$).
More important for us is that CMS presents \cite{CMS:PAS-EXO-12-017}
also upper limits on $\sigma\times {\rm Br}(eejj)$ as function of $m_{eejj}$,
separately for electrons and muons. These limits assume $m_N=\frac{1}{2}
m_{W_R}$. CMS notes that for this ratio of masses the signal acceptance is
of order (70-80) \% and drops to zero at low $m_N$, but no information
on the acceptance as function of $m_N$ is provided. The signal acceptance also
becomes small when $m_N$ approaches $m_{W_R}$, thus for approximately
$m_{W_R}-m_N \lesssim 100$ GeV the limits disappear. We will use these
upper limits in our analysis below. However, we will assume that for
the values of fermion masses shown, the acceptance is the
same as the one used in the plots shown by CMS. From fig.~2 of
\cite{CMS:PAS-EXO-12-017} one can deduce that this should be a good
approximation for fermion masses above $m_F \simeq (200-300)$ GeV.
Note that \cite{CMS:PAS-EXO-12-017} shows cross section limits only
for $m_{eejj} \ge 1$ TeV. For $m_{eejj} \gtrsim 1.7$ TeV the limits
are of the order of (2-3) fb.
In our analysis we will also use estimated sensitivity limits for the
future LHC run at $\sqrt{s}=14$ TeV. We will assume the LHC can
collect 300 fb$^{-1}$ of data. Excluding 3 signal events would then
optimistically allow one to establish an upper limit of $\sigma\times {\rm
Br}(eejj)\lesssim 0.01$ fb. Such low
values are, however, reachable only in regions of parameter space
where the background from standard model events is negligible, i.e. at the
highest values of $m_{eejj}$. For lower $m_{eejj}$, where already in
the published data a significant number of background events persists,
future data can improve the limits only by much weaker factors. We make
a simple estimate, which takes into account that the $t{\bar t}$ production cross
section is about a factor $3$ higher at $\sqrt{s}=14$ TeV than at
$\sqrt{s}=8$ TeV, so that backgrounds should also be higher by a similar
factor. Scaling the current limits with this larger background estimate
and with the square root of the statistics (300 fb$^{-1}$ in the
future, compared to roughly 3.6 fb$^{-1}$ used in \cite{CMS:PAS-EXO-12-017}),
one can estimate the future limit for the region of $m_{eejj}$ in
the range of ($1-2$) TeV very roughly as
$\sigma\times {\rm Br}(eejj)\lesssim 0.1$ fb.
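For orientation, the background-and-luminosity scaling used in this estimate can be sketched numerically; the 1 fb input value below is purely illustrative, and the sketch assumes the simple $\sigma_{\rm limit}\propto\sqrt{B}/L$ behaviour described above:

```python
import math

# Rescale a cross-section limit under sigma_limit ~ sqrt(B)/L, where the
# background B grows with both the luminosity ratio and the roughly 3x
# larger t-tbar cross section at 14 TeV.
def rescale_limit(sigma_now_fb, lumi_now_fb, lumi_future_fb, bkg_factor):
    lumi_ratio = lumi_future_fb / lumi_now_fb
    return sigma_now_fb * math.sqrt(bkg_factor * lumi_ratio) / lumi_ratio

# 300 fb^-1 at 14 TeV versus the 3.6 fb^-1 used in the 8 TeV analysis:
factor = rescale_limit(1.0, 3.6, 300.0, 3.0)
print(factor)  # ~0.19, i.e. limits improve by roughly a factor of 5
```

This back-of-the-envelope factor illustrates why, in the background-dominated region, future data improve the limits by much less than the naive luminosity ratio.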
Finally, as discussed in section \ref{sect:xsect} single LQ production
at the LHC leads to the final state $ee$ plus at least three hard
jets. This case is only partially covered by the searches presented by
ATLAS and CMS. In the experimental data sets the number of jets is
required to be larger or equal to one, including in principle events with
1, 2 and more jets. This is done simply because for large mass
hierarchies $m_{S_i} \gg m_{\psi}$, the fermion is boosted and thus
two jets coming from the decay of $\psi$ might be visible as a single
jet only. On the other hand, while events with more than two hard jets
are included in this data set, the system $ee$ plus any number of jets
does not form a mass peak in case of LQs, as already mentioned. Peaks
in $m_{e_2jjj}^2$, as expected for the LQs, have not been searched for
in \cite{ATLAS:2012ak,CMS:PAS-EXO-12-017}. However, one can assume
that such a search ($ee+3j$ with a peak in $m_{e_2jjj}^2$) should actually
have similar or smaller backgrounds, due to the larger number of jets,
than the searches presented in \cite{ATLAS:2012ak,CMS:PAS-EXO-12-017}.
In our analysis we will therefore assume that also in this case
limits of order $(0.1-1)$ fb will be reached in the future. More precise
numbers would require a full Monte Carlo simulation of signals and
backgrounds, which is beyond the scope of the present work. Instead,
we estimate in section \ref{Sec:Pheno} below how our limits
change as a function of the number of excluded events.
\subsection{Status and future of $0\nu\beta\beta$ limits}
\label{subsect:bbstat}
\begin{table}[h]
\begin{center}
\begin{tabular}{cccc}
\hline \hline
Decomposition \# & $S-S'$ & current limit & future limit
\\
\hline
1-i, 1-ii
&
$S_{+1}^{(1)}-S_{+1}^{(1)}/S_{+2}$
&
1.4
&
2.1
\\
\hline
1-i, 1-ii
&
$S_{+1}^{(8)}-S_{+1}^{(8)}/S_{+2}$
&
2.5-3.1
&
3.7-4.6
\\
\hline
2-i, 2-ii
&
$S_{+1}^{(1)}-S_{i}^{LQ}$
&
1.2-1.4
&
1.8-2.2
\\
\hline
2-i, 2-ii
&
$S_{+1}^{(8)}-S_{i}^{LQ}$
&
2.2-2.7
&
3.2-4.0
\\
\hline
2-iii
&
$S_{i}^{LQ}-S_{j}^{LQ}$
&
1.6-3.1
&
2.4-4.6
\\
\hline
3-i, 3-ii, 3-iii
&
$S_{i}^{DQ}-S_{j}^{DQ}/S_{+2}$
&
2.4-2.7
&
3.5-4.1
\\
\hline
4-i, 5-i
&
$S_{i}^{LQ}-S_{i}^{LQ}$
&
2.0-2.5
&
3.0-3.7
\\
\hline
4-ii, 5-ii
&
$S_{i}^{DQ}-S_{j}^{LQ}$
&
2.0-2.4
&
3.0-3.5
\\
\hline \hline
\end{tabular}
\end{center}
\caption{\it
\label{Tab:bbstat} Status and future of limits on short-range operators
from $0\nu\beta\beta$ decay experiments. Different decompositions result in
different limits and depend on the helicity of the outer fermions.
The first column gives the decomposition number, compare table
(\ref{Tab:TopoI}); the 2nd column indicates the exchanged scalars.
If within a certain (set of) decomposition(s) more than one operator
can appear, depending on the helicity assignments, for brevity we quote a
range for the limit corresponding to the largest and smallest operators
within this decomposition. ``Current limit'' refers to the limits assuming
$T_{1/2}^{0\nu\beta\beta}(^{136}Xe) \ge 1.6 \times 10^{25}$ yr \cite{Auger:2012ar},
while ``future limit'' corresponds to an assumed future limit of the
order of $10^{27}$ yr. The numbers quoted are limits on $M_{eff}$ in
TeV and scale as $g_{eff}^{(4/5)}$.}
\end{table}
As mentioned in the introduction, currently the best limits on $0\nu\beta\beta$
decay come from experiments on two isotopes, namely $^{76}Ge$ and
$^{136}Xe$. The Heidelberg-Moscow collaboration gives
$T^{0\nu\beta\beta}_{1/2}(^{76}{\rm Ge}) \ge 1.9 \cdot 10^{25}$ yr
\cite{KlapdorKleingrothaus:2000sn}, while the recent results from
EXO-200 and KamLAND-ZEN quote $T^{0\nu\beta\beta}_{1/2}(^{136}{\rm
Xe}) \ge 1.6 \cdot 10^{25}$ yr \cite{Auger:2012ar} and
$T^{0\nu\beta\beta}_{1/2}(^{136}{\rm Xe}) \ge 1.9 \cdot 10^{25}$ yr
\cite{Gando:2012zm}, both at the 90 \% CL. However, it is expected
that these limits will be improved within the near future. The GERDA
experiment \cite{Abt:2004yk,Ackermann:2012xja} will release first
$0\nu\beta\beta$ data in summer of 2013 and then move to ``phase-II'', aiming
for $T^{0\nu\beta\beta}_{1/2}(^{76}{\rm Ge})$ in excess of $10^{26}$
yr. An experiment using $^{130}Te$ in bolometers, named CUORE
\cite{Alessandria:2011rc}, with a sensitivity of order $10^{26}$ yr, is
currently under construction. Proposals for ton-scale next-to-next
generation $0\nu\beta\beta$ experiments claim that even sensitivities in excess
of $T^{0\nu\beta\beta}_{1/2} \sim 10^{27}$ yr can be reached for
$^{136}$Xe \cite{KamLANDZen:2012aa,Auty:2013:zz} and $^{76}$Ge
\cite{Abt:2004yk,Guiseppe:2011me}. For recent reviews and a list of
experimental references, see for example \cite{Barabash:1209.4241}.
In table (\ref{Tab:bbstat}) we therefore quote current and expected
future limits on $M_{eff}$ from double beta decay experiments using
$T^{0\nu\beta\beta}_{1/2}(^{136}{\rm Xe}) \ge 1.6 \cdot 10^{25}$ yr
(current) and $10^{27}$ yr (future). Here, $M_{eff}$ and $g_{eff}$
are simply defined as the effective mass and couplings, which
enter the $0\nu\beta\beta$ decay amplitude:
\begin{eqnarray}\label{eq:meff}
M_{eff} = (m_{S}^2m_{\psi}m_{S'}^2)^{(1/5)} \\ \nonumber
g_{eff} = (g_1g_2g_3g_4)^{(1/4)}
\end{eqnarray}
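These effective parameters are straightforward to evaluate; the following minimal sketch implements the two definitions above (the input values are purely illustrative, masses in TeV):

```python
# M_eff and g_eff as defined in eq. (eq:meff).
def m_eff(m_S, m_psi, m_Sprime):
    # (m_S^2 * m_psi * m_S'^2)^(1/5)
    return (m_S**2 * m_psi * m_Sprime**2) ** (1.0 / 5.0)

def g_eff(g1, g2, g3, g4):
    # (g1*g2*g3*g4)^(1/4)
    return (g1 * g2 * g3 * g4) ** (1.0 / 4.0)

print(m_eff(2.0, 1.5, 2.0))       # ~1.89 (TeV)
print(g_eff(1.0, 1.0, 1.0, 1.0))  # 1.0
```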
We show limits for the different decompositions assuming scalars are
exchanged. The limits on $M_{eff}$ are in TeV and scale as
$g_{eff}^{(4/5)}$. Within a given decomposition different operators
can appear in the calculation of the $0\nu\beta\beta$ decay half-life. If
within a given decomposition there is more than one operator
combination that appears for the different possible helicity states,
we quote a range of limits, corresponding to the operators with the
largest and smallest possible rate within this decomposition. Numbers
are calculated using the nuclear matrix elements of
\cite{Deppisch:2012nb} and the uncertainty on $M_{eff}$ scales as
$\Delta(M_{eff}) \propto (\Delta M_{\rm Nucl.})^{(1/5)}$, where $M_{\rm Nucl}$
stands generically for the nuclear matrix elements. Current
limits range from $M_{eff}\gtrsim 1.2-3.1$ TeV, while future sensitivities
of up to $M_{eff}\gtrsim 1.8-4.6$ TeV are expected.
\subsection{LHC vs. $0\nu\beta\beta$: Numerical analysis}
\label{Sec:Pheno}
In this section we compare the LHC and $0\nu\beta\beta$ sensitivities of the
different decompositions for $0\nu\beta\beta$. In the numerical analysis
developed in this section we will concentrate on the case where the
fermion mass is smaller than (one of) the scalar masses. The
case where the fermion mass is larger than
the scalar masses will be analysed in a future paper.
We can divide the discussion of all 18 decompositions into two groups
of cases. The first group corresponds to ``symmetric'' and
``like-symmetric'' decompositions. The former are simply those
decompositions in which a scalar with the same quantum numbers
appears twice in the diagram (1-i, 5-i, 4-i), while the latter are
those with two different scalars but of the same kind (two different
leptoquarks or two different diquarks: 2-iii, 3-i). The second group
corresponds to ``asymmetric'' decompositions. Those are
decompositions with $S_{+2}$ and either a $S_{+1}$ or a diquark (1-ii,
3-ii, 3-iii) and decompositions with a leptoquark and either a
$S_{+1}$ or a diquark (2-i, 2-ii, 4-ii, 5-ii).
First we will derive limits from existing LHC data at $\sqrt{s} = 8$
TeV, and then compare the discovery potential of the forthcoming $\sqrt
{s} = 14$ TeV phase of the LHC with the sensitivity of current and
future $0\nu \beta \beta$ decay experiments. We will begin our
discussion with the ``symmetric'' decomposition $(\bar u d)(\bar e
)(\bar e)(\bar u d)$.
As discussed in section \ref{subsect:lhcstat}, the most stringent current limits from
the LHC on like-sign lepton searches come from data taken by the CMS
collaboration at $\sqrt{s} = 8$ TeV \cite{CMS:PAS-EXO-12-017}. CMS
presents also upper limits on $\sigma \times Br(eejj)$ as a function
of $m_{eejj}$. These limits apply directly to the case of the
decomposition $(\bar u d)(\bar e )(\bar e)(\bar u d)$, which describes
at LHC a produced scalar $S_{+1}$, decaying to $S_{+1}\to \psi_0 e^+$,
followed by $\psi_0 \to e^+ \bar u d$, producing two mass peaks in
$m_{eejj}$ and $m_{e_2 j j}$.
The number of eejj-like events at the LHC in general depends on a
different combination of couplings and masses than the $0\nu\beta\beta$ decay
amplitude. The $0\nu\beta\beta$ half-life depends on the effective parameters
defined in Eq. (\ref{eq:meff}), while the cross section $\sigma \times
Br(eejj)$ is, in the narrow width approximation, proportional to
$g_{udS_{+1}}^2$ and to a non-trivial function $F_{S_{+1}}$ of the scalar mass
$m_{S_{+1}}$. We can then
write the number of events as:
\begin{eqnarray}\label{eq:SigBrS1}
\sigma \times Br(eejj) = \sigma(pp\to S)\times{\rm Br}(S\to eejj) =
F_{S}\left( m_{S} \right) g_{1}^2 \ {\rm Br}(S\to eejj),
\end{eqnarray}
defining
\begin{eqnarray}\label{FS+Def}
F_{S_{+1}}(m_{S_{+1}})=\sigma(pp\to S_{+1})/g_{udS_{+1}}^2.
\end{eqnarray}
The ${\rm Br}(S\to eejj)$ can be calculated from
eq. (\ref{Lag-S1-psi}) and is equal to
\begin{eqnarray}
\label{BR}
Br(S \to eejj) = \frac{f(m_{\psi}/m_S) g_2^2 }
{3 g_1^2 + f(m_{\psi}/m_S) g_2^2 } \times \frac{1}{2}.
\end{eqnarray}
Here $S=S_{+1}$, $g_1 = g_{ud S_{+1}}$, $g_2 = g_{e \psi_0 S_{+1}}$, $\psi = \psi_0$ and
$f(x)=(1-x^2)^2$.
Note that in the limit
where all couplings are equal (and $m_{\psi_0}=0$) ${\rm Br}(S_{+1}\to
e^+e^+jj)={\rm Br}(S_{+1}\to e^+e^-jj) \simeq 1/8$. We have used
CalcHEP \cite{Pukhov:2004ca} to calculate the production cross
sections for $S_{+1}$ at the LHC. We have plotted our results in
Fig. \ref{fig:xsectTI} and compared them with the literature
\cite{Ferrari:2000sp}, finding quite good agreement.
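The branching ratio formula above, together with the $1/8$ limit just quoted, can be checked with a few lines (a minimal sketch; coupling values are illustrative):

```python
# Br(S -> eejj) of eq. (BR), with the phase-space factor f(x) = (1 - x^2)^2.
def br_eejj(g1, g2, x):
    """g1 = g_{ud S}, g2 = g_{e psi S}, x = m_psi / m_S."""
    f = (1.0 - x**2) ** 2
    return f * g2**2 / (3.0 * g1**2 + f * g2**2) * 0.5

# Equal couplings and massless fermion: 1/(3+1) * 1/2 = 1/8 per charge channel.
print(br_eejj(1.0, 1.0, 0.0))  # 0.125
```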
In ``symmetric''
decompositions, such as $(\bar u d)(\bar e )(\bar e)(\bar u d)$, the
effective couplings and scalar boson masses are pairwise equal,
i.e. in Eq. (\ref{eq:meff}) $g_1=g_4$, $g_2=g_3$ and $m_{S}=
m_{S^\prime}$. Then, the effective parameters defined in
(\ref{eq:meff}) become:
\begin{eqnarray}
\label{Efective-S1}
M_{eff(S)} &=& (m_{S}^4 m_{\psi} )^{1/5}, \ \ g_{eff(S)} = (g_{1} g_{2} )^{1/2}.
\end{eqnarray}
Eq. (\ref{eq:SigBrS1}) depends on 4 variables: the couplings $g_1$,
$g_2$ and the masses $m_{S}, m_{\psi}$. For comparison with $0\nu\beta\beta$
we have expressed Eq. (\ref{eq:SigBrS1}), using
Eq. (\ref{Efective-S1}) and (\ref{BR}), in terms of 4 new variables:
the effective coupling and mass $g_{eff}, M_{eff}$, the fermion mass
$m_{\psi}$ and the $Br(S \to eejj)$. Then, using
Eq. (\ref{eq:SigBrS1}) expressed in terms of these 4 new variables, and
the current limits on $\sigma \times Br(eejj)$ presented by CMS
\cite{CMS:PAS-EXO-12-017}, we can plot LHC bounds in the plane $g_{eff}$
versus $M_{eff}$ for different values of the $Br(S_{+1}
\to e e j j )$ and the fermion mass $m_{\psi_0}$. We have drawn these
limits for $Br(S_{+1} \to e e j j ) = 10^{-1}$ (solid red lines) and
$Br(S_{+1} \to e e j j ) = 10^{-2}$ (dashed red lines) in
Fig. \ref{fig:Lim_8TeV} using different values of the fermion mass,
$m_{\psi_0} = 200$ GeV and $800$ GeV. For larger masses $m_{\psi_0}$
the LHC limits become more stringent, except for the region
$(m_{S_{+1}} - m_{\psi_0}) \lesssim 100$ GeV, where the LHC
sensitivity becomes very small, as we discussed in section
\ref{subsect:lhcstat}. Note that for the dashed lines the
part of the line shown dotted corresponds to values
of $1 \le g_1=g_{u d S_{+1}} \le 2$, i.e. close to values where
this coupling would become non-perturbative.
In addition, Fig. \ref{fig:Lim_8TeV} shows current and future limits
from $0\nu\beta\beta$ decay. The dark gray area is the currently excluded part
of parameter space from non-observation of $^{136}$Xe decay with
$T_{1/2}^{0\nu \beta \beta}\ge 1.6 \times 10^{25}$ yr
\cite{Auger:2012ar}, and the blue area corresponds to an assumed future
$0\nu\beta \beta$ decay sensitivity of $T_{1/2}^{0\nu \beta
\beta}\ge 10^{27}$ yr. We have used as a current limit $M_{eff}
> 1.2~{\rm TeV} \times g_{eff}^{4/5}$ and for future sensitivities up to
$M_{eff} > 4.6~{\rm TeV} \times g_{eff}^{4/5}$. These correspond to the
most pessimistic case for the current sensitivity of $0\nu\beta\beta$ decay and
the most optimistic reach for $0\nu\beta\beta$ decay in the foreseeable future
(see Table \ref{Tab:bbstat}). As we can see from
Fig. \ref{fig:Lim_8TeV}, the LHC is already competitive with $0\nu\beta\beta$ for
part of the parameter region of the decomposition $(\bar u d)(\bar e
)(\bar e)(\bar u d)$, especially for larger masses
$m_{\psi_0}$. However, this mechanism is not ruled out; quite the
contrary, most of the parameter region explored by future $0\nu\beta\beta$
decay experiments has not been covered yet.
\begin{figure}[h]
\includegraphics[scale=0.8]{Lim8TeVEff.eps}
\caption{ Current limits for the LHC at $\sqrt {s} = 8$ TeV for
production of scalars $S_{+1}$ compared with current and future
double beta decay experiments. The gray region in the top left
corner is ruled out by current $0\nu\beta\beta$ data. The blue region
represents the parameter region accessible in near future $0\nu\beta\beta$
experiments, whereas the red lines show current LHC limits for
production of scalars $S_{+1}$. Solid red lines were calculated
using $Br(S_{+1} \to e e j j ) = 10^{-1}$, while the dashed and dotted
red lines were calculated using $Br(S_{+1} \to e e j j ) =
10^{-2}$, for different values of the fermion mass, $m_{\psi_0} = 200$
GeV and $800$ GeV; see text.}
\label{fig:Lim_8TeV}
\end{figure}
Now we will analyze the discovery potential of the forthcoming
$\sqrt{s} = 14$ TeV phase of the LHC. We will start our discussion
with the first group of decompositions, i.e. ``symmetric'' and
``like-symmetric'' decompositions. Recall that for ``symmetric''
decompositions one can use Eqs. (\ref{eq:SigBrS1})-(\ref{Efective-S1})
to describe the cross section $\sigma \times Br(eejj)$ in terms of the
effective masses and couplings relevant for $0\nu\beta\beta$. In the LQ case,
the LQ is produced in association with a lepton, i.e. in
Eq. (\ref{eq:SigBrS1}) we calculate $\sigma(pp\to S^{LQ}+e)\times {\rm
Br}(S^{LQ}\to ejjj)$. For ``like-symmetric'' decompositions,
Eq. (\ref{Efective-S1}) is also a good approximation. This is because
both LQs or both diquarks can be produced at LHC and in turn will have
similar limits on the masses $m_{S}, m_{S^\prime}$ and couplings $g_1,
g_4$ and $g_2, g_3$. We have used CalcHEP \cite{Pukhov:2004ca} and
MadGraph 5 \cite{Alwall:2011uj} to calculate the production cross
sections for $S_{+1}$, $S^{LQ}$, and $S^{DQ}$ at the LHC. We have
plotted our results in Fig. \ref{fig:xsectTI} and compared them with
the literature \cite{Ferrari:2000sp,Belyaev:2005ew,Han:2010rf} and
found quite good agreement in all cases.
In Figs. \ref{fig:Lim_Sim} and \ref{fig:Lim_LQ} we then plot the
sensitivities of $0\nu\beta\beta$ decay and the LHC for five different cases in
the plane $g_{eff}$ versus $M_{eff}$. For the LHC we show the
expected sensitivity limits, assuming fewer than 3 signal events in 300
fb$^{-1}$ of statistics, for two values of ${\rm Br}(S\to
eejj)$, i.e. $10^{-2}$ (dashed lines) and $10^{-1}$ (solid lines),
and for two different values of $m_{\psi}$: $200$ GeV (left) and $1$
TeV (right). Again, for larger masses $m_{\psi}$ the LHC limits
become more stringent, except for the region $(m_{S} - m_{\psi})
\lesssim 100$ GeV, where the LHC sensitivity is low. The
different color codes correspond to the five different scalar bosons,
that can be singly produced at the LHC, namely $S_{+1}$ (red),
$S^{DQ}_{4/3} $ (black), $S^{DQ}_{2/3}$ (purple), $S^{LQ}_{2/3}$
(blue) and $S^{LQ}_{1/3}$ (orange). In Fig. \ref{fig:Lim_Sim} we have
plotted three cases which correspond to the scalars $S_{+1}$,
$S^{DQ}_{4/3}$ and $S^{DQ}_{2/3}$ while in Fig. \ref{fig:Lim_LQ} we
have plotted the remaining two leptoquark cases $S^{LQ}_{2/3}$ and
$S^{LQ}_{1/3}$. In addition, Figs. \ref{fig:Lim_Sim} and \ref{fig:Lim_LQ}
show four different cases for current and future limits from $0\nu\beta\beta$
decay. The dark gray area is, as in fig. \ref{fig:Lim_8TeV}, the
currently excluded part of parameter space from non-observation of
$^{136}$Xe decay with $T_{1/2}^{0\nu\beta\beta}\ge 1.6 \times 10^{25}$ yr
\cite{Auger:2012ar}, assuming $0\nu\beta\beta$ decay is caused by the
decomposition with the smallest rate (see Table \ref{Tab:bbstat}), and
thus corresponds to the most pessimistic case for the sensitivity of
$0\nu\beta\beta$ decay. The three blue areas are (from left to right): smallest
rate, but for a limit of $T_{1/2}^{0\nu\beta\beta}\ge 10^{26}$ yr; largest rate
with $T_{1/2}^{0\nu\beta\beta}\ge 10^{26}$ yr; and, finally, the largest rate
with $T_{1/2}^{0\nu\beta\beta}\ge 10^{27}$ yr. The lightest area to the right
therefore corresponds to the most optimistic reach for $0\nu\beta\beta$ decay
in the foreseeable future.
As can be seen from Figs. \ref{fig:Lim_Sim} and \ref{fig:Lim_LQ}, with
the exception of the LQ cases (Fig. \ref{fig:Lim_LQ}), the LHC at
$\sqrt{s}=14$ TeV will be more sensitive than $0\nu\beta\beta$ decay
experiments as a probe of LNV. For the LQ case, the LHC is more
sensitive than $0\nu\beta\beta$ decay in the pessimistic case for $0\nu\beta\beta$
(operators ${\cal O}_1$ and ${\cal O}_5$ in the notation of
\cite{Pas:2000vn}), but not for the one to which $0\nu\beta\beta$ decay is most
sensitive, namely ${\cal O}_2$. For the remaining operators
${\cal O}_3$ and ${\cal O}_4$, the $0\nu\beta\beta$ decay and LHC sensitivities are
very similar.
\begin{figure}[htbp]
\begin{minipage}[b]{.45\linewidth}
\includegraphics[width=\linewidth]{Lim14TeVEffmN200.eps}
\end{minipage}
\begin{minipage}[b]{.05\linewidth}
\hspace{1pt}
\end{minipage}
\begin{minipage}[b]{.45\linewidth}
\vspace{0pt}
\includegraphics[width=\linewidth]{Lim14TeVEffmN1000.eps}
\end{minipage}
\vspace{0.2 cm}
\caption{ Future limits for the LHC at $\sqrt {s} = 14$ TeV compared
with current and future double beta decay experiments. The gray
region in the top left corner is ruled out by $0\nu\beta\beta$. The blue
region represents the parameter region accessible in near future
$0\nu\beta\beta$ experiments, whereas the colored lines show sensitivity
limits for the LHC for production of three different scalar bosons:
$S_{+1}$ (red), $S_{2/3}^{DQ}$ (purple) and $S_{4/3}^{DQ}$
(black). Solid lines were calculated using $Br(S \to e e j j )
= 10^{-1}$, while dashed lines were calculated using $Br(S \to
e e j j ) = 10^{-2}$, for different values of the fermion mass,
$m_{\psi} = 200$ GeV (left) and $m_{\psi} = 1000$ GeV (right).
}
\label{fig:Lim_Sim}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[b]{.45\linewidth}
\includegraphics[width=\linewidth]{Lim14TeVEffmN200LQ.eps}
\end{minipage}
\begin{minipage}[b]{.05\linewidth}
\hspace{1pt}
\end{minipage}
\begin{minipage}[b]{.45\linewidth}
\vspace{0pt}
\includegraphics[width=\linewidth]{Lim14TeVEffmN1000LQ.eps}
\end{minipage}
\vspace{0.2 cm}
\caption{As fig. \ref{fig:Lim_Sim}, but for production of two
leptoquark scalars $S_{2/3}^{LQ}$ (blue) and $S_{1/3}^{LQ}$
(orange). Note that the dashed line for $Br(S \to e e j j ) =
10^{-2}$ in case of $S_{1/3}^{LQ}$ is very similar to $Br(S \to e e
j j ) = 10^{-1}$ for the case of $S_{2/3}^{LQ}$. }
\label{fig:Lim_LQ}
\end{figure}
Now we will discuss the second group of decompositions, which
corresponds to the ``asymmetric'' cases, with two different scalar
masses and all couplings different. In this case, the assumption
$g_1=g_4$, $g_2=g_3$ and $m_{S}= m_{S^\prime}$ in
Eq. (\ref{Efective-S1}) is violated, and the plane $g_1$ vs $m_S$ is
more adequate for a comparison of LHC and $0\nu\beta\beta$ decay sensitivities.
In Fig. \ref{fig:Lim_Asim} we then compare the sensitivities of
$0\nu\beta\beta$ decay and the LHC for three different cases, using
Eq. (\ref{eq:SigBrS1}). The different color codes correspond to the
three different scalar bosons that can be singly produced at the LHC,
namely $S_{+1}$ (red), $S^{DQ}_{4/3}$ (black) and $S^{DQ}_{2/3}$
(purple). For the LHC we show the expected sensitivity limits for
$Br(S \to e e j j ) = 10^{-1}$ (solid lines) in the plane $g_{1}$
versus $m_{S}$.
\begin{figure}[htbp]
\begin{minipage}[b]{.45\linewidth}
\includegraphics[width=\linewidth]{Lim_Asimetric_ge1.eps}
\end{minipage}
\begin{minipage}[b]{.05\linewidth}
\hspace{1pt}
\end{minipage}
\begin{minipage}[b]{.45\linewidth}
\vspace{0pt}
\includegraphics[width=\linewidth]{Lim_Asimetric_ge05.eps}
\end{minipage}
\vspace{0.2 cm}
\caption{ Future limits for the LHC at $\sqrt {s} = 14$ TeV compared
with future double beta decay experiments. The blue region
represents the parameter region accessible in near future $0\nu\beta\beta$
experiments, whereas the colored lines show sensitivity limits for
the LHC for production of three different scalar bosons: $S_{+1}$
(red), $S_{2/3}^{DQ}$ (purple) and $S_{4/3}^{DQ}$ (black). Solid
lines were calculated using $Br(S \to e e j j ) = 10^{-1}$, and the
blue region was calculated using $m_{\psi} = 1.5$ TeV,
$m_{S^\prime} = 2.0$ TeV, $g_2 = g_3 = g_4 = 1$ (left) and $g_2 =
g_3 = g_4 = 0.5$ (right). }
\label{fig:Lim_Asim}
\end{figure}
Fig. \ref{fig:Lim_Asim} shows future limits from $0\nu\beta\beta$ decay, which
correspond to the most optimistic reach for $0\nu\beta\beta$ decay in the
foreseeable future. Those limits were calculated using, in Eq.
(\ref{eq:meff}), $m_{\psi} = 1.5$ TeV, $m_{S^\prime} = 2.0$ TeV, $g_2
= g_3 = g_4 = 1$ (left) and $g_2 = g_3 = g_4 = 0.5$ (right). For
larger masses $m_{\psi}, m_{S^\prime}$ or smaller couplings $g_2, g_3,
g_4$ those limits become weaker. The choice of $m_{\psi} = 1.5$ TeV,
$m_{S^\prime} = 2.0$ TeV is reasonable, since all the ``asymmetric''
decompositions must have coloured fermions and these can be
constrained through pair production searches, which will yield
sensitivity limits on their masses of $(2-2.5)$ TeV. Moreover, in the
``asymmetric'' decompositions the scalar $S^{\prime}$ is a leptoquark or
a $S_{+2}$. Also for leptoquarks the LHC searches from pair and single
production \cite{CiezaMontalvo:1998sk} will have sensitivities around $2$ TeV,
and the doubly charged scalar $S_{+2}$ can also be searched for through pair
production (through a production graph with a virtual photon). As can
be seen from Fig. \ref{fig:Lim_Asim}, the LHC at $\sqrt{s}=14$ TeV will
be more sensitive than $0\nu\beta\beta$ decay experiments as a probe of LNV for
all the ``asymmetric'' decompositions.
Finally, we have compared in Fig. \ref{fig:Sens} sensitivity limits for
the ``symmetric'' decomposition $(\bar u d)(\bar e )(\bar e)(\bar u d)$
assuming 3, 10 and 30 events in 300 fb$^{-1}$ of statistics. As one can
see from Fig. \ref{fig:Sens}, even under the pessimistic
assumption that only 30 signal events can be excluded, our previous
limits calculated for the more optimistic situation of 3 events
(see Fig. \ref{fig:Lim_Sim}) suffer only minor changes (in this
linear plot) and the LHC is still more sensitive than $0\nu\beta\beta$.
More accurate numbers for the total number of events necessary to
claim discovery/exclusion would require a full detector Monte Carlo
simulation, which is outside the scope of this paper.
\begin{figure}[h]
\includegraphics[scale=0.8]{Lim_Simetric_mF1000_Sens.eps}
\caption{Comparison of expected sensitivity limits assuming fewer
than 3 (solid), 10 (dashed) and 30 (dotted) signal events in 300
fb$^{-1}$ of statistics at the LHC for the production of the scalar
boson $S_{+1}$. Red lines were calculated using $Br(S \to
e e j j ) = 10^{-1}$ and fermion mass $m_{\psi_0} = 1000$
GeV. The gray region in the top left corner is ruled out by
$0\nu\beta\beta$, whereas the blue region represents the parameter region
accessible in near future $0\nu\beta\beta$ experiments. See text for more
details.}
\label{fig:Sens}
\end{figure}
\section{Distinguishing LNV models at the LHC}
\label{Sec:Dst}
In the previous section we have compared the sensitivity of the LHC
with that of $0\nu\beta\beta$ decay. Here, we discuss the question of how the different
LNV decompositions could actually be distinguished using LHC data, if
a positive signal were to be found in the $\sqrt{s}=14$ TeV run. We
will consider two types of observables: (i) charge asymmetry
\footnote{This charge asymmetry has also been discussed in a
different context in the recent paper \cite{Durieux:2012gj}.} and
(ii) invariant mass peaks. Interestingly, the combination of the two
sets of observables is sufficient to distinguish among nearly all
decompositions. The only exceptions are the pairs of cases
(1-ii-a)-(1-ii-b) and (1-i)-(3-i), the latter, however, only in the
``mass-degenerate'' limit, see below.
Recall first that the scalars $S_{+1}$, $S^{DQ}_{4/3}$ and
$S^{DQ}_{2/3}$ are produced in the s-channel, while single leptoquarks are
always produced at the LHC in association with a lepton. The LQ final
state that we are interested in is therefore $eejjj$, different from
the other cases; see the discussion in the next subsection. We will therefore
separate the discussion here into ``LQ-like'' and other cases.
\subsection{Charge asymmetry}
\label{SubSec:CA}
In the dilepton event samples, there are three subsets of events with
different charges: $e^+e^+$, $e^+e^-$ and $e^-e^-$. From these three
numbers we can form two independent ratios:
\begin{eqnarray}\label{eq:ca}
x_{CA} = \#(e^+e^+)/\#(e^-e^-) \\ \nonumber
y_{CA} = \#(e^-e^+)/\#(e^+e^+) .
\end{eqnarray}
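A trivial numerical illustration of these two ratios, with invented event counts (a sketch only; the function name and input numbers are not from the analysis):

```python
# The two independent charge ratios of eq. (eq:ca).
def charge_ratios(n_pp, n_mm, n_pm):
    """n_pp = #(e+e+), n_mm = #(e-e-), n_pm = #(e+e-)."""
    x_ca = n_pp / n_mm  # x_CA
    y_ca = n_pm / n_pp  # y_CA
    return x_ca, y_ca

x, y = charge_ratios(40.0, 10.0, 20.0)
print(x, y)  # 4.0 0.5
```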
Consider the simpler case of $y_{CA}$ first. In the cases where the
fermion in the diagram is neutral, $\psi=\psi_0$, it is a Majorana
particle and at tree-level ${\rm Br}(\psi_0\to e^+ jj)={\rm
Br}(\psi_0\to e^- jj)$.
\footnote{The branching ratios equal $1/2$ in case there is no
generation mixing.} Thus, all decompositions with $\psi_0$ will
have $y_{CA}=1/2$, up to loop corrections. The situation is different
for decompositions with charged fermions. Here we can distinguish the
cases involving $\psi_{4/3}$ and $\psi_{5/3}$, on the one hand, and
$\psi_{1/3}$ on the other hand. Since $\psi_{4/3}$ and $\psi_{5/3}$
can decay only into $e^+e^+j$ and $e^-e^-j$, all decompositions involving
these fermions have $y_{CA}=0$. Finally, $\psi_{1/3}$ can decay into
both charge signs, but the branching ratios of $\psi_{+1/3}$ into positrons
and electrons involve different combinations of couplings (and masses)
and are therefore free numbers. $y_{CA}$ in this case is arbitrary,
but could be used to fix some combination of couplings experimentally.
We now turn to the discussion of $x_{CA}$. Define the ratio for
the LHC production cross section of one of our five scalars, relative
to the cross section for its charge conjugate state as:
\begin{equation}\label{eq:Rsig}
R_{\sigma}^{S_i} = \frac{\sigma(pp\to S_i)}{\sigma(pp\to {\bar S_i})}.
\end{equation}
Here, $S_i$ stands for any of $S_i=S_{+1},S^{DQ}_{4/3},S^{DQ}_{2/3},
S^{LQ}_{2/3},S^{LQ}_{1/3}$. We can divide the discussion of all 18
decompositions into three groups of cases. We put into the first
group the six decompositions without any leptoquark in the diagram,
i.e. all decompositions T-I-i and T-I-iii of table \ref{Tab:TopoI}.
Into the second group we put the decompositions with two leptoquarks,
i.e. 2-iii, 4-i and 5-i. The remaining 8 decompositions with one
leptoquark form the third group.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\linewidth,height=0.5\linewidth]{CA_DQandS1.eps}
\vskip-3mm
\caption{\label{fig:CASmplst}Charge asymmetry $x_{CA}$, see eq. (\ref{eq:ca}),
as a function of the boson mass for different kinds of scalars.
Shown are the cases with $S_{+1}$ or diquarks, for discussion see text.}
\end{figure}
We start the discussion with group-(1). As shown in
fig. (\ref{fig:xsectTI}), $R_{\sigma}^{S_i}$ is different for the
various scalars and moreover strongly dependent on the mass of the
scalar. \footnote{Note that $R_{\sigma}^{V_i}$ for vectors will behave
exactly as $R_{\sigma}^{S_i}$ discussed here.} This asymmetry in cross
sections will cause the charge asymmetry $x_{CA}$ to depend strongly
on the decomposition. The charge asymmetry $x_{CA}$ is shown in
fig. \ref{fig:CASmplst} for diquarks and for $S_{+1}$. Consider first
the case denoted $S_{+1}$ on the left. This corresponds both to case
1-i and to the two sub-cases 1-ii-a and 1-ii-b. The former is an
example of a symmetric decomposition, i.e. two of the four
couplings in the diagram are pairwise equal, namely $g_{S_{+1}{\bar
u}d}$ connecting two outer legs and $g_{S_{+1}{\bar e}\psi_0}$
connecting two propagators in either beta decay subprocess, to the left
and to the right. It is straightforward to show that upon calculating
$x_{CA}$ all couplings cancel out and $x_{CA}$ simply is $x_{CA}^{\rm
sym} = R_{\sigma}$. Thus, for symmetric decompositions, $x_{CA}$ at
any fixed mass of the scalar is simply a number, predicted by the
decomposition.\footnote{We note again that the classical case of LR
symmetry has the same $x_{CA}$ as shown here for $S_{+1}$.} The two
sub-cases 1-ii-a and 1-ii-b can be called ``absolutely asymmetric''
decompositions, since the $S_{+2}$ cannot be singly produced at the
LHC. In these cases only the couplings at the $S_{+1}$ vertices matter,
and these drop out again in the calculation of $x_{CA}$. Thus, the line
denoted $S_{+1}$ in fig. (\ref{fig:CASmplst}) is valid for both
decompositions, 1-i and 1-ii.
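For a symmetric decomposition the cancellation can be made explicit. Schematically (our notation; branching-ratio factors common to both charge channels are suppressed, and $\sigma^{\pm}$ denote the production cross sections feeding positive and negative like-sign dilepton events):

```latex
x_{CA}^{\rm sym}
  = \frac{N(e^{+}e^{+}jj)}{N(e^{-}e^{-}jj)}
  = \frac{\big|g_{S_{+1}{\bar u}d}\big|^{2}\,
          \big|g_{S_{+1}{\bar e}\psi_0}\big|^{2}\;\sigma^{+}}
         {\big|g_{S_{+1}{\bar u}d}\big|^{2}\,
          \big|g_{S_{+1}{\bar e}\psi_0}\big|^{2}\;\sigma^{-}}
  = R_{\sigma} .
```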
Consider DQs, i.e. decompositions 3-i, 3-ii and 3-iii. The lines
denoted $S^{DQ}_{4/3}$ and $S^{DQ}_{2/3}$ in fig. (\ref{fig:CASmplst})
correspond to decompositions 3-ii and 3-iii. In these two cases the
decomposition is ``completely asymmetric'' and therefore does not
depend on the values of individual couplings, but depends strongly on
the mass of the scalar.
For decomposition 3-i the discussion is slightly more
complicated. Here, one can distinguish the case where the two diquark
masses are degenerate ($m_{S^{DQ}_{4/3}}=m_{S^{DQ}_{2/3}}$) and the
non-degenerate case. In the non-degenerate case the distributions in
$m_{eejj}^2$ would show two distinct peaks, one having the $x_{CA}$
appropriate for $S^{DQ}_{4/3}$ while the other has $x_{CA}$ of
$S^{DQ}_{2/3}$. In fact, such a non-degenerate case is not only
``easy'' to resolve from the decomposition(s) involving $S_{+1}$,
having more than one peak in $m_{eejj}^2$ would actually allow to
probe for all four couplings entering the diagram and thus provide
more information than in other cases. In the mass degenerate limit,
however, both $S^{DQ}_{4/3}$ and $S^{DQ}_{2/3}$ contribute to the
number of events in the same peak. In this case, $x_{CA}$ depends on
the relative ratio of coupling of the two diquarks to
fermions. Fig. (\ref{fig:CASmplst}) shows $x_{CA}$ for this case,
$S^{DQ}_{4/3}+S^{DQ}_{2/3}$, in the limit where the diquark couplings
to fermions are equal. For arbitrary ratios of couplings (but
degenerate masses) $x_{CA}$ can vary between the two extreme limits
shown as $S^{DQ}_{4/3}$ and $S^{DQ}_{2/3}$. Measurement of $x_{CA}$
anywhere between those two extremes, therefore points toward
decomposition 3-i in case of DQs. The problematic case for
distinguishing between 3-i and decomposition 1-i is therefore the mass
degenerate case for decomposition 3-i, where the two pairs of diquark
couplings conspire to give a $x_{CA}$ equal (or very similar) to the
corresponding one for $S_{+1}$.
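In the mass degenerate limit the measured asymmetry interpolates between the two pure-diquark values. Schematically (our notation, not a formula from the text):

```latex
x_{CA}\Big|_{\rm deg}
  = \frac{w_{4/3}\, N^{+}_{S^{DQ}_{4/3}} + w_{2/3}\, N^{+}_{S^{DQ}_{2/3}}}
         {w_{4/3}\, N^{-}_{S^{DQ}_{4/3}} + w_{2/3}\, N^{-}_{S^{DQ}_{2/3}}}\ ,
```

where $N^{\pm}_{S}$ are the like-sign event numbers each diquark would produce at unit couplings and the weights $w_{q}$ are the corresponding products of production and decay couplings. For $w_{4/3}=w_{2/3}$ this gives the combined curve; for $w_{4/3}/w_{2/3}\to\infty$ (or $0$) it approaches the pure $S^{DQ}_{4/3}$ (or $S^{DQ}_{2/3}$) line.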
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\linewidth,height=0.5\linewidth]{CA_LQ13.ps}
\includegraphics[width=0.45\linewidth,height=0.5\linewidth]{CA_LQ23.ps}
\vskip-3mm
\caption{\label{fig:CALQ} Charge asymmetry $x_{CA}$ as a function of
the leptoquark mass, to the left $S^{LQ}_{1/3}$, to the right $S^{LQ}_{2/3}$.
The different lines show different cases: blue (left, dash-dotted) and
orange (right, dash-dotted) show $x_{CA}$ for $S^{LQ}_{1/3}+e$ and
$S^{LQ}_{2/3}+e$ production, respectively. The red lines in both plots
show $x_{CA}$ for $S^{LQ}_{q}+\psi$ production only. The purple lines
show in both cases $x_{CA}$ combining both production modes, assuming
the couplings are equal, $g_{eQ S^{LQ}_{q}}=g_{\psi Q' S^{LQ}_{q}}$.
Shown are three calculations for different values of $m_{\psi}$:
$m_{\psi}=0.5$ TeV - full lines; $m_{\psi}=1$ TeV - dot-dashed lines
and $m_{\psi}=2$ TeV - dotted lines. In all cases the calculation
includes a phase space suppression for Br($S^{LQ}_{Q}\to q+\psi$) as
described in the text.}
\end{figure}
We now turn to the discussion of $x_{CA}$ for the four decompositions
with two LQs. As mentioned previously, the final states for LQs
are $e^-e^-jjj$ and $e^+e^+jjj$. LNV with LQs can therefore,
in principle, be distinguished from DQs and $S_{+1}$. However,
in the discussion of $x_{CA}$ for LQs one more complication arises:
The final LNV states can be produced via two different intermediate
states, i.e. $S^{LQ}_{q}+e$ and $S^{LQ}_{q}+\psi$. In case of
$S^{LQ}_{2/3}$, for example, the main production diagram is
$d+g \to S^{LQ}_{2/3}+e^{-}$, contributing to $e^-e^-$, while
$u+g\to S^{LQ}_{2/3}+\psi$ will contribute to $e^+e^+$-like
events. For the case of $\sigma(pp\to S^{LQ}_{q}+\psi)$, the
cross section not only depends strongly on $m_{S^{LQ}_{q}}$,
but also depends on $m_{\psi}$, see fig. (\ref{fig:xsectTILQ}).
However, also in the case of $S^{LQ}_{q}+e$ production, the
mass of $\psi$ enters in the calculation of the total number of
events, since the branching ratio of Br($S^{LQ}_{q}\to \psi +z$),
where $z$ stands for all possible SM fermion states, suffers a
phase space suppression factor for large $m_{\psi}$:
\begin{equation}\label{eq:phsp}
f(m_S^2,m_{\psi}^2) = \frac{(m_S^2-m_{\psi}^2)^2}{m_S^4}.
\end{equation}
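As a quick numerical illustration of the suppression factor above (a minimal sketch, not part of the actual calculation; masses in TeV):

```python
def phase_space_factor(m_s, m_psi):
    """Phase space suppression f = (m_S^2 - m_psi^2)^2 / m_S^4 entering
    Br(S_LQ -> psi + z); it vanishes at threshold m_psi = m_S."""
    return (m_s**2 - m_psi**2) ** 2 / m_s**4

# A 1.5 TeV leptoquark with m_psi = 1 TeV: already a sizeable suppression
print(phase_space_factor(1.5, 1.0))   # -> ~0.309
# At threshold the branching ratio closes completely
print(phase_space_factor(1.5, 1.5))   # -> 0.0
```

The steep fall of this factor near threshold is what produces the sharp bends of the charge asymmetry curves at low leptoquark masses.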
The predicted charge asymmetry then depends on whether events
from $\sigma(pp\to S^{LQ}_{q}+\psi)$ can be separated from
$\sigma(pp\to S^{LQ}_{q}+e)$ or not. This separation can be
done, in principle, by looking at the invariant mass peaks
discussed in the next section. However, especially in case the
total number of events is low, such a separation will become
difficult (and inefficient). The charge asymmetry
measured will then be an averaged charge asymmetry of both
production modes.
In fig. (\ref{fig:CALQ}) we plot the calculated $x_{CA}$ as a function of
the leptoquark mass, to the left for $S^{LQ}_{1/3}$, to the right for
$S^{LQ}_{2/3}$, for a number of cases. The blue (left, dash-dotted) and
orange (right, dash-dotted) lines show $x_{CA}$ for $S^{LQ}_{1/3}+e$
and $S^{LQ}_{2/3}+e$ production, respectively. The red lines in both
plots show $x_{CA}$ for $S^{LQ}_{q}+\psi$ production only. Shown are
three calculations for different values of $m_{\psi}$: $m_{\psi}=0.5$
TeV - full lines; $m_{\psi}=1$ TeV - dot-dashed lines and $m_{\psi}=2$
TeV - dotted lines. These lines are the predicted $x_{CA}$ for the
case that events from $S^{LQ}_{q}+\psi$ can be separated completely
from those stemming from $S^{LQ}_{1/3}+e$. In the more conservative
case that averaging over both production modes has to be done, the
predicted $x_{CA}$'s are plotted as purple lines, again for three
different values of $m_{\psi}$. In this calculation we assumed for
simplicity that $g_{eQ S^{LQ}_{q}}=g_{\psi Q' S^{LQ}_{q}}$ and
included a phase space suppression factor in the calculation of events
for $S^{LQ}_{1/3}+e$ production to account for the reduced
Br($S^{LQ}_{q}\to \psi +z$). This phase space suppression, see
eq. (\ref{eq:phsp}), is responsible for the sharp bend in the lines at
low $m_{S^{LQ}_q}$. For larger or smaller ratios of $g_{eQ
S^{LQ}_{q}}$ to $g_{\psi Q' S^{LQ}_{q}}$ the corresponding
lines for $x_{CA}$ will change, thus $x_{CA}$ can vary in principle
between the extremes shown in fig. (\ref{fig:CALQ}) for arbitrary
values of the couplings. However, if $m_{S^{LQ}_q}$ and $m_{\psi}$
are known from measurement, the ratio of $g_{\psi Q' S^{LQ}_{q}}$
and $g_{eQ S^{LQ}_{q}}$ can in principle be fixed from a measurement
of Br($S^{LQ}_{q}\to \psi +z$).
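When the two production modes cannot be separated, the measured asymmetry is an event-weighted average of the per-mode asymmetries. The following sketch shows how such an average can be formed (the event numbers used are purely illustrative, not results of our calculation):

```python
def combined_xca(modes):
    """Combine per-mode charge asymmetries x = N+/N- into one measured
    x_CA. Each mode is given as (total_events, x_ca); per mode,
    N+ = N * x / (1 + x) and N- = N / (1 + x)."""
    n_plus = sum(n * x / (1.0 + x) for n, x in modes)
    n_minus = sum(n / (1.0 + x) for n, x in modes)
    return n_plus / n_minus

# Illustrative only: equal yields from S_LQ + e (x_CA = 1) and
# S_LQ + psi (x_CA = 3) production average to a value in between
print(combined_xca([(100.0, 1.0), (100.0, 3.0)]))   # -> ~1.667
```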
The above discussed results cover the decompositions 4-i and 5-i, in
which the two LQs in the diagram have the same quantum numbers. For
the case of decompositions 2-iii, on the other hand, both types of LQs
contribute and the resulting $x_{CA}$ will be the average of the
individual $x_{CA}$'s shown in fig. (\ref{fig:CALQ}) on the left and
right. For all couplings equal, the resulting $x_{CA}$ varies smoothly
from around $x_{CA}=2$ for $m_{S^{LQ}_q} = 1$ TeV to $x_{CA} \simeq 3$
for $m_{S^{LQ}_q} = 3$ TeV, for decomposition 2-iii-a.
There is, however, one subtle difference between decompositions
2-iii-a and 2-iii-b, since in these two the up and down quarks in the
initial state of $\sigma(pp\to S^{LQ}_{q}+\psi)$
production are interchanged. This leads to slightly lower values of
$x_{CA}$ in 2-iii-b compared to 2-iii-a.
Finally, we briefly discuss the remaining eight decompositions within
group-(3). In this case, in principle, mass peaks should show up in
$m_{eejj}^2$ {\em and} $m_{e_2jjj}^2$, clearly identifying the LQ and
the other scalar boson by the individual $x_{CA}$'s. However, this
statement assumes that cross sections for LQs are large enough that
for these decompositions both types of scalars are produced at the
LHC. Considering the large ratio of cross sections for $S_{+1}$ and
DQs relative to cross sections for LQs, this might be an overly optimistic
assumption. Thus, in case only one peak in $m_{eejj}^2$ is found,
only the ``leading'' boson of the decomposition can be identified and
there appears a degeneracy among the decompositions in group-(1) and
group-(3) in this observable.
\subsection{Invariant mass peaks}
\label{SubSec:MP}
We now turn to the discussion of differentiating between different
decompositions using peaks in the cross sections in different experimentally
measurable invariant mass systems. Again, we will divide this discussion
into different cases. First, we will discuss decompositions with at
least one $S_{+1}$, then decompositions with at least one diquark.
These two cases can be distinguished, in principle, by measuring the
charge asymmetry discussed in the last subsection. Finally, we will
discuss decompositions which contain only LQs.
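The peaks discussed below are located in invariant masses of lepton-jet subsystems. As a minimal, self-contained sketch (our own illustrative helper, not taken from any analysis code), the invariant mass of an arbitrary subsystem of four-vectors is:

```python
import math

def inv_mass(*p4s):
    """Invariant mass of a set of four-vectors given as (E, px, py, pz)."""
    e, px, py, pz = (sum(p[i] for p in p4s) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Two massless back-to-back particles of energy E form a system of mass 2E
print(inv_mass((1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, -1.0)))   # -> 2.0
```

For a decay chain with $m_S>m_{\psi}>m_{S'}$, accumulations then appear at $m(eejj)=m_S$ and, in the appropriate sub- and sub-subsystems, at $m_{\psi}$ and $m_{S'}$.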
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Case & $m_{S}$ & $m_{\psi}$ & $m_{S'}$ & Decomposition\\
\hline
A & $m(eejj)$ & $m(ejj)$ & $m(jj)$ & $(\bar u d)(\bar e)(\bar e)(\bar u d)$\\
\hline
B & $m(eejj)$ & $m(ejj)$ & $m(ej)$ &
$({\bar u}d)({\bar e})({\bar u})(d{\bar e})$ ;
$( \bar u d ) ( \bar e ) ( d ) ( \bar u \bar e )$ \\
\hline
C & $m(eejj)$ & $m(eej)$ & $m(ee)$ &
$(\bar u d ) ( \bar u ) ( d ) ( \bar e \bar e )$ ;
$( \bar u d ) ( d ) (\bar u ) ( \bar e \bar e )$ \\
\hline
D & $m(eejj)$ & $m(eej)$ & $m(ej)$ &
$(\bar u d ) ( \bar u ) ( \bar e ) ( d \bar e )$ ;
$( \bar u d ) ( d ) (\bar e ) ( \bar u \bar e )$ \\
\hline
\end{tabular}
\end{center}
\caption{\it Combinations of invariant mass distributions where peaks
in the cross sections arise, in case the mass ordering is
$m_S>m_{\psi}>m_{S'}$, for decompositions of $0\nu\beta\beta$ decay with charge
asymmetries that are ``$(\bar u d)$-like''. If $m_{\psi}\le m_{S'}$,
$m_{S'}$ cannot be measured and cases A=B and C=D cannot be distinguished
by this observable.
\label{InvMassUD}}
\end{table}
Table \ref{InvMassUD} shows the results of the analysis for
decompositions involving $S_{+1}$. With the exception of case (A),
where $S_{+1}$ appears twice in the diagram, the two scalars in the
decomposition are different particles. In case $m_S>m_{\psi}>m_{S'}$,
there are then two subsystems in the sample of $eejj$-events which
form mass peaks and we can distinguish cases (A)-(D), leaving only
``degeneracies'' in decompositions (1-ii-a)-(1-ii-b) (corresponding to
case B) and (2-iii-a)-(2-iii-b) (case C), where both mass peaks and
$x_{CA}$ are (pairwise) identical. However, it is possible that
$m_{\psi}\le m_{S'}$, in which case $m_{S'}$ cannot be measured and
cases A=B and C=D cannot be distinguished anymore.
\bigskip
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Case & $m_{S}$ & $m_{\psi}$ & $m_{S'}$ & Decomposition\\
\hline
A & $m(eejj)$ & $m(ejj)$ & $m(jj)$ &
$(\bar u \bar u) (\bar e)(\bar e) (d d)$\\
\hline
B & $m(eejj)$ & $m(ejj)$ & $m(ej)$ &
$(\bar u \bar u ) ( \bar e ) ( d ) ( d \bar e )$ ;
$( d d ) ( \bar e ) (\bar u ) ( \bar u \bar e )$\\
\hline
C & $m(eejj)$ & $m(eej)$ & $m(ee)$ &
$(\bar u \bar u ) ( d ) ( d ) ( \bar e \bar e )$ ;
$( d d ) ( \bar u ) (\bar u ) ( \bar e \bar e )$\\
\hline
D & $m(eejj)$ & $m(eej)$ & $m(ej)$ &
$(\bar u \bar u ) ( d ) ( \bar e ) ( d \bar e )$ ;
$( d d ) ( \bar u ) (\bar e ) ( \bar u \bar e )$\\
\hline
\end{tabular}
\end{center}
\caption{\it As above, for decompositions of $0\nu\beta\beta$ decay with charge
asymmetries that are ``$(\bar u \bar u)$ and $(d d)$-like''. If
$m_{\psi}\le m_{S'}$, $m_{S'}$ cannot be measured and cases A=B and
C=D cannot be distinguished with this observable.
\label{InvMassUU}}
\end{table}
Table \ref{InvMassUU} shows the results of the analysis for
decompositions involving DQs. Case (A) has the same invariant mass
peaks as case (A) in table \ref{InvMassUD}. Thus, in case
$m_{S^{DQ}_{4/3}}=m_{S^{DQ}_{2/3}}$ the decomposition (3-i) can
not be distinguished from (1-i), if also the diquark couplings ``conspire''
such that $x_{CA}$ agrees with the corresponding value for
$S_{+1}$. In this case, the only difference between (3-i) and
(1-i) is that (3-i) always requires an electrically charged
coloured fermion, which could show up in pair production.
Cases (B)-(D) in table \ref{InvMassUU} are also equal to
(B)-(D) in table \ref{InvMassUD}. However, in these cases
DQ decompositions and $S_{+1}$ decompositions can always
be distinguished by measuring $x_{CA}$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Case & $m_{S}$ & $m_{\psi}$ & $m_{S'}$ & Decomposition\\
\hline
A & $m(e_2jjj)$ & $m(e_2jj)$ & $m(jj)$ &
$(\bar u \bar e) ( d)(\bar e) (\bar u d)$ ;
$(\bar u \bar e) (\bar u)(\bar e) (d d)$ ;
$(d \bar e) (\bar u)(\bar e) (\bar u d)$ ;
$(d \bar e) (d)(\bar e) (\bar u \bar u)$\\
\hline
B & $m(e_2jjj)$ & $m(e_2jj)$ & $m(e_2j)$ &
$(\bar u \bar e) ( d)(\bar u) ( d \bar e )$ ;
$(\bar u \bar e) (d )( d) ( \bar u \bar e)$ ;
$(d \bar e) (\bar u)(\bar u) ( d \bar e)$ ;
$(d \bar e) (d)(\bar u) ( \bar u \bar e)$\\
\hline
C & $m(e_2jjj)$ & $m(jjj)$ & $m(jj)$ &
$(\bar u \bar e) ( \bar e)(\bar u) ( d d )$ ;
$(\bar u \bar e) ( \bar e )( d) ( \bar u d)$ ;
$(d \bar e) (\bar e)(\bar u) ( \bar u d)$ ;
$(d \bar e) ( \bar e)(d) ( \bar u \bar u)$\\
\hline
\end{tabular}
\end{center}
\caption{\it As above, for decompositions of $0\nu\beta\beta$ decay with charge
asymmetries that are ``$(\bar u \bar e)$ and $(d \bar e)$-like'',
i.e. for single leptoquark production. Recall that the complete
signal is ``$eejjj$'' without a peak in $m_{eejjj}^2$. If
$m_{\psi}\le m_{S'}$, cases A=B cannot be distinguished. Note that
case B, where both sides of the decomposition contain only
leptoquarks, will produce ``$eejjj$'' final states only. In all
other cases, also the ``$eejj$'' signal should arise.
\label{InvMassLQ}}
\end{table}
Finally, table \ref{InvMassLQ} shows combinations of invariant mass
distributions in case the mass ordering is $m_S>m_{\psi}>m_{S'}$, for
decompositions of $0\nu\beta\beta$ decay with charge asymmetries that are
"$(\bar u \bar e)$ and $(d \bar e)$ like", i.e. for single leptoquark
production. Here, it is assumed that events from $S^{LQ}_{q}+e$
production can be distingsuished from $S^{LQ}_{q}+\psi$
production. The table refers to the former. (Note again, if
$m_{\psi}\le m_{S'}$, cases A=B can not be distinguished, leaving a
degeneracy in the identification of the decomposition in that case.)
In case of events from $S^{LQ}_{q}+\psi$ production, the decay of the
LQ will lead to a peak in $m_{e_aj_x}^2=m_{S^{LQ}_{q}}^2$ and the
decay of $\psi$ to $m_{e_bj_yj_z}^2=m_{\psi}^2$. Note, however, that
in case of $S^{LQ}_{q}+e$ production, the mass peak for $m_{\psi}^2$
is formed by a subsystem of ``$e_2jjj$'' (which gives
$m_{S^{LQ}_{q}}^2$), whereas in $S^{LQ}_{q}+\psi$ production the two
mass peaks must come from different leptons and jets, i.e. $a \ne b$
and $y \ne x \ne z$. This feature can be used to separate
$S^{LQ}_{q}+e$ from $S^{LQ}_{q}+\psi$ production.
Before closing, we briefly mention pair production of coloured
fermions. Here, a signal $eejjjj$, i.e. at least four hard jets,
would test the different decompositions of double beta decay. We note
that decompositions with $\psi_0$ exist with a colour singlet fermion,
for which pair production at the LHC is negligible, while for all the
12 ``new'' decompositions, see discussion in section \ref{Sec:Dec} and
section \ref{sect:xsect}, pair production of the exotic fermions is
expected to probe the existence of such states with masses up to
roughly $m_{\psi} \sim 2-2.5$ TeV, depending on the final state
branching ratios. Note that pair-produced $\psi$ can also, depending
on the decomposition, produce like-sign dileptons in some cases. Also, if
a signal is found in $eejjjj$, there is a threshold for these events
at $m_{eejjjj}=2 m_{\psi}$. Different subsystems again form mass peaks
at $m_{\psi}$ and, in case this fermion is heavier than (one of)
the bosons, mass peaks in sub-subsystems will show up, providing
additional information. The possible combinations can be straightforwardly
derived from table \ref{Tab:TopoI}.
In summary, we have discussed two possible observables which
allow one to identify which of the decompositions of table \ref{Tab:TopoI}
is realized, if a positive LNV observation is made at the LHC.
The combination of both observables should be sufficient to
identify the correct decomposition unambiguously, apart from
two pairs: (A) 1-ii-a and 1-ii-b and (B) 1-i and 3-i (the latter
only in the mass degenerate case), which can lead to very similar
values in both observables.
\section{Summary}
\label{Sec:cncl}
In this paper we have compared the discovery potential of lepton
number violating signals at the LHC with the sensitivity of current
and future neutrinoless double beta decay experiments, assuming that
the decay rate is dominated by heavy ${\cal O}({\rm TeV})$ particle
exchange. We have treated the first of two possible topologies
contributing to both processes which contains one fermion and two
bosons in the intermediate state, and concentrated on the case where
the fermion mass is always smaller than the scalar or vector
masses. The topology considered corresponds to 18 possible
decompositions including scalar, leptoquark and diquark
mechanisms. With the exception of some leptoquark mechanisms a
$0\nu\beta\beta$ decay signal corresponding to a half life in the
range $10^{26}-10^{27}$ yrs should imply a positive LNV signal at the
LHC, and vice versa, the non-observation of a positive signal at the
LHC would rule out a short-range mechanism for neutrinoless double
beta decay in most cases. In summary, the LHC search provides
a complementary, and in many cases even superior, option to search for
$\Delta L= 2$ lepton number violation in this short-range case.
If $0\nu\beta\beta$ decay is triggered by light sub-eV scale Majorana
neutrinos, on the other hand, its LHC analogue will be unobservable.
In any case, though, an observation of either $0\nu\beta\beta$ decay or
its analogue at the LHC would also prove the light neutrinos to be
Majorana particles, by virtue of the four-loop contribution to
neutrino mass generation according to the Schechter-Valle theorem
\cite{Schechter:1981bd,Hirsch:2006yk}.
However, this 4-loop-induced Majorana mass, while bestowing the light
neutrinos with Majorana-ness, is too small to account for the mass
squared differences observed in neutrino oscillations
\cite{Duerr:2011zd}, implying
that differently generated masses, either of Dirac or of Majorana type,
will be the dominant contributions for at least the two heavier mass
eigenstates.
Moreover, we have discussed two possibilities to discriminate
different contributions to the $0\nu\beta\beta$ decay rate by using
LHC observables: First, the charge asymmetry corresponding to the
ratio of positive like-sign electron events to negative like-sign
electron events, which reflects the larger abundance of $u$ quarks
compared to $d$ quarks in the most simple cases but becomes a more
complicated function of masses and couplings in the general case. For
large masses of the resonantly produced particles this asymmetry can
vary by up to 7 orders of magnitude. And second, the resonance peaks
in the invariant mass distributions of the decay products of the heavy
particles produced on-shell. The various resonance peaks depend on
the mass ordering of the intermediate particles and on the exact
decomposition and can then be used to identify the intermediate
particles triggering the decay. Consequently, if an LNV signal
were found at the LHC, it should be possible to identify the dominant
contribution to $0\nu\beta\beta$ decay.
\medskip
\centerline{\bf Acknowledgements}
\medskip
We are grateful to Alfonso Zerwekh for useful discussions.
J.C.H. thanks the IFIC for hospitality during his stay. This work was
supported by UNILHC PITN-GA-2009-237920 and by the Spanish MICINN
grants FPA2011-22975, MULTIDARK CSD2009-00064, by the Generalitat
Valenciana (Prometeo/2009/091), by Fondecyt (Chile) under grants
11121557, 1100582 and by CONICYT (Chile) project 791100017. HP was
supported by DFG grant PA 803/6-1.
\setcounter{section}{0}
\def\theequation{\Alph{section}.\arabic{equation}}
\setcounter{equation}{0}
\section{Appendix A. Lagrangians }
\label{Sec:Lags}
Here we specify the Lagrangian terms used in our analysis.
As was discussed in section \ref{Sec:Dec} there are two possible
topologies for the tree-level diagrams fig.~\ref{Fig:0nbbTopologies}
constructed of renormalizable interactions which contribute to
$0\nu\beta\beta$-decay and production of like-sign dileptons in
pp-collisions. It is implied that gluons could be attached to any
colored external or internal line of these diagrams. In the present
paper we focus on the Topology I corresponding to
fig.~\ref{Fig:0nbbTopologies}(a). All the possible particles with
their SM assignments in the intermediate states of these diagrams are
listed in table \ref{Tab:TopoI} taken from ref. ~\cite{Bonnet:2012kh}.
These diagrams or their parts without the gluon insertions represent
mechanisms of $0\nu\beta\beta$-decay studied in the present paper.
Examples of Feynman diagrams are shown in figs. \ref{fig:Diags},
\ref{fig:LQprod}.
For our study it is sufficient to list renormalizable operators
corresponding to the vertices of these diagrams in the representation
with physical mass eigenstates after the electroweak symmetry
breaking. The SM gauge invariant representation in terms of the
electroweak interaction eigenstates can be found in
refs.~\cite{Bonnet:2012kh,Han:2010rf}. For the fields we adopt the
notation
\begin{eqnarray}\label{notation-1}
F^{(n)}_{Q_{em}}
\end{eqnarray}
where $n$ is the dimension of the $SU(3)_{c}$ representation to which
the field $F$ belongs and $Q_{em}$ is its electric charge. We use $F =
S$ and $F = V_{\mu}$ for the scalar and vector fields, respectively.
Below we specify the interactions of the scalar fields $S$. The
interactions of the vector fields can be readily derived from them by
the substitution $S \rightarrow S^{\mu}$ with the same charge and
$SU_{3C}$ assignment and by simultaneous insertion of $\gamma_{\mu}$
into the coupled fermionic current.
We start with the interactions of the scalar $S_{+1}^{(1,8)}$ fields participating in decompositions 1-i, 1-ii, and 2-i, 2-ii from table \ref{Tab:TopoI}. In our numerical analysis we use $S_{+1}$ for $S^{(1)}_{+1}$. These fields interact
with quarks, charged leptons and fermions $\psi_{0}^{(1,8)}$ and $\psi_{5/3}^{(3)}, \psi_{4/3}^{(\bar{3})}$ according to
the Lagrangian:
%
\begin{eqnarray}
\label{Lag-S1-psi}
{\cal L}_{S_{+1}} &=&
\nonumber
g^{(k)X}_{ud S_{+1}} \left( \bar u \ \hat{S}_{+1}^{(k)} P_X \ d\right) +
g^{(k)X}_{e^{c} \psi_{0} S_{+1}} \left( \overline{e^{C}} \ P_X \ \psi^{(k)}_{0}\right) S_{+1}^{(k)} +
g^{(k)X}_{e \psi_{0} S_{+1}} \left( \overline{e} \ P_X \ \psi^{(k)}_{0}\right) S_{+1}^{(k)\dagger} \\
%
&+&
g^{X}_{u \psi_{5/3} S_{+1}}\ \left( \bar u \ P_X \ \psi^{(3)}_{5/3} \right) S^{(1)}_{+1} +
g^{X}_{d \psi_{4/3} S_{+1}}\ \left( \bar d \ P_X \ \psi^{(\bar{3})^{C}}_{4/3} \right) S^{(1)}_{+1}
\\
\nonumber
&-& m^{(k)}_{\psi_{0}}\ \overline{\psi^{(k)}_{0}} \psi^{(k)}_{0} + \mbox{h.c.} .
\end{eqnarray}
The indices $X = L, R$ are independent in all the terms. The
generation indices $i,j=1,2,3$ of the quarks $u, d$ and charged
leptons $e$ are suppressed. We use the shorthand notations: $g^{(k)}
\hat{S}^{(k)} = g^{(1)}S^{(1)}$ {\bf I} $+ g^{(8)} S^{(8) A}
\lambda^{A}/2$ with the identity and Gell-Mann matrices in the color
space. Also $g^{(k)} \psi^{(k)} S^{(k)} = g^{(1)} \psi^{(1)} S^{(1)}
+ g^{(8)} \psi^{(8)A} S^{(8)A}$ and $m^{(k)} \psi^{(k)} \psi^{(k)} =
m^{(1)} \psi^{(1)} \psi^{(1)} + m^{(8)} \psi^{(8)A} \psi^{(8)A} $.
For the $\psi_{0}$ field we study an economical case with only one
independent chiral component, so that in the 4-component notation it
is represented by a Majorana field satisfying $\psi_{0}^{C} =
\psi_{0}$. Thus, its mass $m_{\psi_{0}}$ in the last term of
eq. (\ref{Lag-S1-psi}) is a $\Delta L = 2$ Majorana mass. We do not
show the ordinary complex scalar mass term for $S_{+1}$ and the Dirac
ones for $\psi_{5/3}$, $\psi_{4/3}$.
The Majorana field cannot have a definite lepton number, but for
convenience lepton numbers can be assigned to the chiral projections of
$\psi_{0}$. Of the two possible options we choose $L=1$ for $P_{L}
\psi_{0}$ and $L=-1$ for $P_{R} \psi_{0}$. The fields $\psi_{5/3},
\psi_{4/3}$ and $S_{+1}$ have $L=0$. Baryon number $B$ conservation
requires an assignment $B=1/3$ for $\psi_{5/3}$, $B= -1/3$ for
$\psi_{4/3}$ and $B=0$ for $S_{+1},\ \psi_{0}$. As seen from
eq. (\ref{Lag-S1-psi}) there are two sources of LNV: the second
interaction term with the coupling $g^{L}_{e^{c}\psi S}$ and the
Majorana masses $m^{(1)}_{\psi_{0}}$ as well as $m^{(8)}_{\psi_{0}}$.
The scalar $SU_{3C}$ singlet field $S_{+2}$ appears in decompositions
1-ii, 3-ii and 3-iii of table \ref{Tab:TopoI}. Its interactions are
given by:
\begin{eqnarray}\label{Lag-S2}
{\cal L}_{S_{+2}} &=&
g^{X}_{eeS_{+2}} \left( \overline{e^{C}} \ P_{X} \ e \right) S_{+2} +
g^{X}_{d \psi_{5/3} S_{+2}} \left( \overline{\psi^{(3)}_{5/3}} \ P_X \ d \right) S_{+2} \\
\nonumber
&+& g^{X}_{u \psi_{4/3} S_{+2}} \left( \overline{u} \ P_X \ \psi^{(\bar{3})^{C}}_{4/3} \right) S_{+2} +
\mbox{h.c.}
%
\end{eqnarray}
Here the only LNV source is the first $\Delta L = 2$ term.
The diquarks $S^{DQ(3, \bar{6})}_{2/3}$ and $S^{DQ(\bar{3}, 6)}_{4/3}$
appear in decompositions 3-i, 3-ii, 3-iii, 4-ii and 5-ii of table
\ref{Tab:TopoI}. These fields interact with quarks, charged leptons
and fermions $\psi^{(3)}_{5/3}, \psi^{(\bar{3})}_{4/3}$ and
$\psi^{(\bar{3}, 6)}_{1/3}$ in the following way:
%
\begin{eqnarray}\label{Lagrangian-S6}
%
{\cal L}_{DQ }&=&
%
g^{(6) X}_{uu S^{DQ}_{4/3}} \ (\bar{u} \ P_{X}\ \hat{S}^{DQ}_{4/3}\ u^{C} ) +
%
g^{(6) X}_{dd S^{DQ}_{2/3}} \ (\overline{d^{C}} \ P_{X}\ \hat{S}^{DQ}_{2/3} \ d ) \\
%
\nonumber
&+& g^{(3) X}_{d_{i}d_{j} S^{DQ}_{2/3}} \ \epsilon^{IJK} (\overline{d^{C}}_{i I} \ P_{X}\ d_{j J}) S^{DQ(3)}_{2/3 K} \\
\nonumber
&+& g^{(6) X}_{d\psi_{5/3} S^{DQ}_{4/3}} \ (\bar{d} \ P_{X}\ \hat{S}^{DQ}_{4/3} \ \psi^{(3)^{C}}_{5/3} ) +
g^{(6) X}_{u\psi_{4/3} S^{DQ}_{2/3}} \ (\overline{\psi^{(\bar{3})}_{4/3}} \ P_{X}\ \hat{S}^{DQ}_{2/3} \ u) \\
\nonumber
&+& g^{(3) X}_{u\psi_{4/3} S^{DQ}_{2/3}} \ \epsilon^{IJK} (\overline{\psi^{(\bar{3})}_{4/3 I}} \ P_{X}\ u_{J}) S^{DQ(3)}_{2/3 K} \\
\nonumber
&+& g^{X}_{e\psi_{1/3} S^{DQ}_{4/3}} \ (\overline{\psi^{(6)a}_{1/3}} \ P_{X}\ e) \ S^{DQ (6)}_{4/3 a} +
g^{(6) X}_{e\psi_{1/3} S^{DQ}_{2/3}} \ (\overline{e^{C}} \ P_{X}\ \psi^{(6)}_{1/3 a})\ S^{DQ(\bar{6}) a}_{2/3} \\
\nonumber
&+& g^{(3)X}_{e\psi_{1/3} S^{DQ}_{2/3}} \
(\overline{e^{C}} \ P_{X}\ \psi^{(\bar{3}) I}_{1/3})\
S^{DQ(3)}_{2/3\ I} \ .
\end{eqnarray}
Here $I,J,K=1-3$ and $a=1-6$ are the color triplet and sextet indices,
respectively. As before, the generation indices $i,j = 1,2,3$ of the
quarks $u, d$ and charged leptons $e$ are suppressed in all the terms
except for the third one, which vanishes if $i=j$. For convenience we
introduced notations $\hat{S}^{DQ}_{4/3} = S^{DQ (6)}_{4/3 a}
(T_{\bar{\bf 6}})^{a}_{IJ}$ and $\hat{S}^{DQ}_{2/3} = S^{DQ (\bar{6})
a}_{2/3} (T_{\bf 6})_{a}^{IJ}$. In the terms with these matrix
fields summation over the triplet indexes $I,J$ is implied. The
symmetric $3\times 3$ matrices $T_{\bf 6}$ and $T_{\bar{\bf 6}}$ can
be found in ref.~\cite{Bonnet:2012kh}. As seen from
eq. (\ref{Lagrangian-S6}) the sources of LNV in the diquark
interactions are given by the last two $\Delta L = 2$ terms with
$\psi_{1/3}$ fields. We assign to $\psi_{1/3}^{(\bar{3}, 6)}$ a lepton
number $L=1$.
The leptoquark $SU_{3C}$ 3-plet fields $S^{LQ}_{2/3}$ and $
S^{LQ}_{-1/3}$ participate in decompositions 2, 4 and 5 of table
\ref{Tab:TopoI}. We write their interactions in the form:
%
\begin{eqnarray}\label{Lagrangian-LQ}
%
{\cal L}_{LQ} &=&
g^{X}_{eu S^{LQ}_{-1/3}} \ (\bar{u}^{I} \ P_X \ e^{C} ) \ S^{LQ}_{-1/3\ I}
+ g^{X}_{ed S^{LQ}_{2/3}} \ (\bar{d}^{I} \ P_X\ e ) \ S^{LQ}_{2/3\ I} \\
\nonumber
&+& g^{(1) X}_{u \psi_{0} S^{LQ}_{2/3}} \ (\bar{u}^{I} \ P_X\ \psi_{0}^{(1)} ) \ S^{LQ}_{2/3\ I} +
g^{(8) X}_{u \psi_{0} S^{LQ}_{2/3}} \ (\bar{u} \ P_X\ \hat\psi_{0} ) \ S^{LQ}_{2/3} \\
\nonumber
&+& g^{(1) X}_{d \psi_{0} S^{LQ}_{-1/3}} \ (\bar{d}^{I} \ P_X\ \psi_{0}^{(1)}) \ S^{LQ}_{-1/3\ I} +
g^{(8) X}_{d \psi_{0} S^{LQ}_{-1/3}} \ (\bar{d} \ P_X\ \hat\psi_{0}) \ S^{LQ}_{-1/3} \\
\nonumber
&+& g^{(3) X}_{d \psi_{1/3} S^{LQ}_{2/3}} \ \epsilon_{IJK} (\bar{d}^{I} \ P_X\ \psi_{1/3}^{(\bar{3})J}) \ S^{LQ, K^{\dagger}}_{2/3} +
g^{(6) X}_{d \psi_{1/3} S^{LQ}_{2/3}} \ (\bar{d} \ P_X\ \hat\psi_{1/3}) \ S^{LQ^{\dagger}}_{2/3} \\
\nonumber
%
&+& g^{(3) X}_{u \psi_{1/3} S^{LQ}_{-1/3}} \ \epsilon_{IJK} (\bar{u}^{I} \ P_X\ \psi_{1/3}^{(\bar{3}) J}) \ S^{LQ,K^{\dagger}}_{-1/3}+
g^{(6) X}_{u \psi_{1/3} S^{LQ}_{-1/3}} \ (\bar{u} \ P_X\ \hat\psi_{1/3}) \ S^{LQ^{\dagger}}_{-1/3}\\
\nonumber
%
&+& g^{X}_{e \psi_{4/3} S^{LQ}_{-1/3}} \ (\overline{e^{C}} \ P_X\ \psi_{4/3}^{(\bar{3}) I}) \ S^{LQ}_{-1/3\ I} +
g^{X}_{e \psi_{5/3} S^{LQ}_{2/3}} \ (\overline{e^{C}} \ P_X\ \psi_{5/3\ I}^{(3)}) \ S^{LQ, I^{\dagger}}_{2/3} \ .
\end{eqnarray}
As before we introduce a short-hand notation $\hat\psi_{1/3} =
\psi^{(6)}_{1/3\ a} (T_{\bar{\bf 6}})^{a}_{IJ}$. Here $I,J= 1,2,3$ are
the color triplet indices. We adopt the following assignment of
lepton $L$ and baryon numbers to the leptoquarks: $L=1, \ B=1/3$ for
$S^{LQ}_{-1/3}$ and $L=-1, \ B=1/3$ for $S^{LQ}_{2/3}$. Checking the
total lepton number of each term in eq. (\ref{Lagrangian-LQ}) one
finds that the terms in the 2nd line with chirality $X=R$, in the 3rd
line with $X=L$, in the 4th and the last lines with any $X$ break
lepton number in two units.
The following comments on the structure of the $\Delta L=2$ amplitude
are in order. For the analysis of $0\nu\beta\beta$-decay we introduced
in eq. (\ref{eq:meff}) an effective mass $M_{eff}$ and an effective
coupling $g_{eff}$. The quantities $M_{eff}^{5}$ and $g_{eff}^{4}$
represent respectively the products of the particle masses originating
from their propagators and the products of four couplings, $g_{i}$, of
those operators from eqs. (\ref{Lag-S1-psi})-(\ref{Lagrangian-LQ})
which participate in the decomposition in question. Let us specify
possible characteristic cases for combinations of these masses and
couplings in $0\nu\beta\beta$ amplitude. Schematically one can
distinguish the following cases:
\begin{eqnarray}\label{Rules-Ampl}
{\cal A}(0\nu\beta\beta) &\sim&
g_{1} g_{2\psi_{0}}^{X} g_{3\psi_{0}}^{X} g_{4} \frac{m_{\psi_{0}}}{m^{2}_{S_{1}} m^{2}_{S_{2}} m^{2}_{\psi_{0}}}\ ,
\\
\nonumber
&\sim& g \hspace{-0.5em}/_{1} g_{2\psi_{Q}}^{X} g_{3\psi_{Q}}^{X} g_{4} \frac{m_{\psi_{Q}}}{m^{2}_{S_{1}} m^{2}_{S_{2}} m^{2}_{\psi_{Q}}}\ , \ \ \
g \hspace{-0.5em}/_{1} g_{2\psi}^{L} g_{3\psi}^{R} g_{4} \frac{\langle \gamma_{\mu} q^{\mu}\rangle}{m^{2}_{S_{1}} m^{2}_{S_{2}} m^{2}_{\psi}}\ , \\
\nonumber
&\sim&
g_{1} g \hspace{-0.5em}/_{2\psi_{0}}^{X} g \hspace{-0.5em}/_{3\psi_{0}}^{X} g_{4} \frac{m_{\psi_{0}}}{m^{2}_{S_{1}} m^{2}_{S_{2}} m^{2}_{\psi_{0}}} .
\end{eqnarray}
Here, $X = L,R$ and $g_{i\psi}$ are the couplings of the operators
involving $\psi$-field. These fields with nonzero charge $Q$ are
denoted as $\psi_{Q}$. Without this index they can be both charged
$\psi_{Q}$ and neutral $\psi_{0}$. The masses $m_{\psi_{Q}}$ of the
$\psi_{Q}$ fields are of Dirac $\Delta L= 0$ type while in the case of
the $\psi_{0}$ fields their masses $m_{\psi_{0}}$ are of Majorana
$\Delta L=2$ type. By $g \hspace{-0.5em}/_{i}$ we denote the coupling of the $\Delta
L = 2$ operators. In the case of only one slash, as in the second
line, it may appear in any of the four couplings, while the two
slashed couplings can only be of the $g \hspace{-0.5em}/_{i\psi_{0}}$-type as in the
last line. In the combination given in the first line the $\Delta L
=2$ is brought by the Majorana mass $m_{\psi_{0}}$ while in both cases
of the second line it is due to a single $g \hspace{-0.5em}/_{i}$ coupling. The
combination of the last line puts together three sources of $\Delta
L = 2$, resulting in a total $\Delta L = 2$. Note that the expressions in
eq. (\ref{Rules-Ampl}) imply that the masses of the intermediate
particles $m_{i} \gg |{\bf q}|$ where ${\bf q}$ are their momenta
whose mean value is about $\sim 100$ MeV. The numerator of the third
combination, $\langle \gamma_{\mu} q^{\mu} \rangle$, implies insertion
of $\gamma_{\mu}$ between the two electron or two quark
bispinors, depending on the considered decomposition. It is of the
order of ${\bf q}\sim 100$ MeV and, therefore, the third term,
corresponding to the LR chirality structure, is suppressed in
comparison with the remaining LL or RR terms by a factor of ${\bf
q}/m_{\psi}$. Thus, among all the possible cases specified in
eq. (\ref{Rules-Ampl}), only those with LL or RR chiralities survive,
leading to
\begin{eqnarray}\label{Summary}
{\cal A}(0\nu\beta\beta) \sim \frac{g_{eff}^{4}}{M_{eff}^{5}}
\end{eqnarray}
in terms of the effective quantities introduced in eq. (\ref{eq:meff}).
The decompositions leading to the third term in eq. (\ref{Rules-Ampl})
are very weakly constrained by $0\nu\beta\beta$-decay experiments. However,
they can be probed at the LHC in the way we discussed in the main
text.
Men sometimes draw the short straw when it comes to a little beauty treat. Rest assured knowing your manhood won't be compromised by these indulgent makeovers. Pedicures don't scream manliness, but beer does. Enjoy the best of both worlds with a Pint and Pedicure at So SPA by Sofitel, the perfect gift for a man who needs some persuasion to keep those toenails in shape. If you're having trouble keeping your bae's beard under control too, a Men's Traditional Wet Shave will do the trick. Let an expert take the shears and do all the hard work while you sit back, relax, and let the magic happen. Explore below for some great men's gift ideas or treat yourself to a little makeover.
// Constructor: a dummy task runner that stores its data payload.
function Code(data) {
  this.data = data;
}

// Express-style handler: responds with a greeting and the stored data.
Code.prototype.run = function (req, res, next) {
  res.send({
    message: 'Hi! I\'m taskmill-code-dummy, the dummy task runner.',
    data: this.data
  });
};

module.exports = Code;
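A minimal usage sketch of the runner above. The stubbed `res` object here is hypothetical, standing in for an Express-style response; only its `send` method is exercised, and the `{ job: 42 }` payload is an arbitrary example:

```javascript
// Minimal usage sketch: exercise Code.run with a stubbed response object.
function Code(data) {
  this.data = data;
}

Code.prototype.run = function (req, res, next) {
  res.send({
    message: 'Hi! I\'m taskmill-code-dummy, the dummy task runner.',
    data: this.data
  });
};

var task = new Code({ job: 42 });
var captured;

// The stub only implements `send`, capturing whatever payload is sent.
task.run({}, { send: function (payload) { captured = payload; } });

console.log(captured.data.job); // prints 42
```

Because `run` takes the payload from `this.data`, each `Code` instance echoes back whatever it was constructed with, which is all a dummy runner needs to do.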
Industrial Utility Efficiency

# Industries

## Danone recognized as global environmental leader with triple 'A' score given by CDP

Danone today announced, in December 2020, that it has been highlighted for the second year in a row as a world environmental leader by the international non-profit organization CDP, whose disclosure and scoring system is recognized as the gold standard of corporate environmental transparency.

## Colgate-Palmolive Earns 11th Consecutive ENERGY STAR® Partner of the Year Award

"Since the Colgate brand is in more homes than any other, we have the opportunity to help people build sustainable habits into their everyday lives," said Ann Tracy, Colgate's Chief Sustainability Officer. "We are honored to be recognized by ENERGY STAR® for Colgate's achievements in sustainability and ongoing efforts to lead action on climate change – from encouraging suppliers to reduce their energy consumption to making our operations even more energy efficient to helping consumers lead more sustainable lives through the use of our products."

## Firmenich Achieves 4th Consecutive CDP Triple "A" Recognition

"Companies' emissions don't end at the factory door. CDP data shows a company's supply chain emissions are, on average, over 11.4 times greater than its direct emissions. Meaningful corporate climate action means engaging with suppliers to reduce emissions across the value chain," said Sonya Bhonsle, Global Head of Value Chains, CDP. "We congratulate Firmenich for making it on to the CDP Supplier Engagement Leaderboard. This demonstrates that they are setting the pace in environmental management and their commitment to reduce emissions and lower environmental risks across their supply chain."

## Beiersdorf Group Sets Ambitious Climate Targets for 2025

The Group successfully cut its energy-related CO2 emissions by 60 percent in absolute terms between 2014 and 2019. Various energy-saving measures, the LEED seal-winning sustainable design of production and office locations, and the transformation to green logistics are just a few examples of the Beiersdorf Group's uncompromising climate protection program. Since the end of 2019, 100 percent of the electricity purchased worldwide comes from renewable energy sources.

## Carlsberg Opens Water Recycling Plant Becoming World's Most Water Efficient Brewery

"We have a goal of zero water waste globally in 2030. As a global company, we have a responsibility to support the UN Sustainable Development Goals, and as a brewery, we have a special responsibility to reduce water waste in our global production. The new water recycling plant in Fredericia will generate important learnings that can be implemented across our breweries in the rest of the world," says Carlsberg Group CEO, Cees 't Hart.

## Brewing Energy Conservation at Molson Coors Canada

In 2021, Compressed Air Best Practices® Magazine interviewed members of the Molson Coors Canada team, at their Toronto Brewery, to gain an understanding of the work being done to improve energy efficiency. The team members interviewed were Doug Dittburner (Chief Engineer), Antonio Mayne (Utilities Optimization Engineer) and Khalil Daniel (Engineering Intern).

## Schoeneck Containers Leaves No Stone Unturned with Compressed Air

When compressed air is essential to the production of up to one million plastic containers per day there's little room for error. That's why Schoeneck Containers, Inc. (SCI) leaves no stone unturned to ensure its compressed air systems run smoothly at all times and without fail at its bustling facilities in Wisconsin.
HANDLING _the_ TRUTH
_On the Writing of Memoir_
BETH KEPHART
GOTHAM BOOKS
Published by the Penguin Group
Penguin Group (USA) Inc., 375 Hudson Street,
New York, New York 10014, USA
USA | Canada | UK | Ireland | Australia New Zealand | India | South Africa | China
Penguin Books Ltd, Registered Offices: 80 Strand, London WC2R 0RL, England
For more information about the Penguin Group visit penguin.com.
Copyright © 2013 by Beth Kephart
All rights reserved. No part of this book may be reproduced, scanned, or distributed in any printed or electronic form without permission. Please do not participate in or encourage piracy of copyrighted materials in violation of the author's rights. Purchase only authorized editions.
Gotham Books and the skyscraper logo are trademarks of Penguin Group (USA) Inc.
"Eggshell" on page 80 copyright © 1998 by Gerald Stern, from _This Time: New and Selected Poems_ by Gerald Stern. Used by permission of W. W. Norton & Company, Inc.
LIBRARY OF CONGRESS CATALOGING-IN-PUBLICATION DATA
Kephart, Beth.
Handling the truth : on the writing of memoir / Beth Kephart.
pages cm
ISBN: 978-1-101-62018-2
1. Autobiography—Authorship. 2. Biography as a literary form. I. Title.
CT25.K37 2013
808.06'692—dc23 2012043517
Designed by Spring Hoteling
While the author has made every effort to provide accurate telephone numbers, Internet addresses, and other contact information at the time of publication, neither the publisher nor the author assumes any responsibility for errors or for changes that occur after publication. Further, the publisher does not have any control over and does not assume any responsibility for author or third-party websites or their content.
_for my students—essential, inspiriting, whole-making
for Amy Rennert, who was sure_
# **CONTENTS**
INTRODUCTION
**PART ONE**
DEFINITIONS, PRELIMINARIES, CAUTIONS
PREFATORY
MEMOIR IS NOT
MEMOIR IS
READ TO WRITE
GREAT EXPECTATIONS
CAREFUL, NOW
**PART TWO**
RAW MATERIAL
WRESTLING YOURSELF DOWN
TENSE?
FIND YOUR FORM
PHOTO SHOP
DO YOU LOVE?
WHETHER THE WEATHER
LANDSCAPE IT
THINK SONG
THE COLOR OF LIFE
I HEAR VOICES
TASTES LIKE
SOMETHING SMELLS... FISHY?
EMPTY YOUR POCKETS
TELLING DETAIL
LET ME CHECK ON THAT
FIRST MEMORY
REMAIN VULNERABLE
**PART THREE**
GET MOVING
WHAT'S IT ALL ABOUT?
BEGINNINGS
BLANK PAGE
**PART FOUR**
FAKE NOT AND OTHER LAST WORDS
FAKE NOT
EXERCISE EMPATHY
SEEK BEAUTY
MOST UNLONELY
**APPENDIX**
READ. PLEASE.
CHILDHOOD RELIVED
MOTHERS, FATHERS, CHILDREN
GRIEF
THE NATURAL WORLD
UNWELL
LEAVING AND RETURNING
RAPACIOUS MINDS
FUNNY BUSINESS
HELPFUL TEXTS
SOME ADDITIONALLY CITED SOURCES
ACKNOWLEDGMENTS
# INTRODUCTION
Throughout the 1990s, I was an unknown and in many ways unschooled writer who was deeply in love with her son. Love, for me, was the time I sat with this boy reading stories. It was the songs I sang to him at night. It was the walks we took and the hats we bought and the important things he taught me about language both received and given, and courage made essential. I wrote love one fragment at a time. I webbed it together until discrete essays became a binding narrative. I sent the manuscript out, a slush-pile writer. When this small book of mine—a family book, an intimate book—found readers beyond people I personally knew, I was utterly unprepared. I had been an outsider. I had written from the margins. I still had much to learn.
I would go on to write four more memoirs and a river book that I called _Flow_ that assumed the memoir's form. I would be asked to conduct workshops and give talks—in elementary schools, in middle schools, in high schools, at universities, in libraries and community centers. I would write about the writing life for publications great and small. I would chair juries for the National Book Awards and the PEN First Nonfiction Awards and serve on a jury panel for the National Endowment for the Arts. I would explore new genres—poetry, fable, young adult literature. I would—a brave experiment—begin to blog daily in memoiristic fashion. The important thing to me was this: I was still writing. I was still reading. I was still learning.
When the University of Pennsylvania asked me to teach creative nonfiction, I was not inclined to say yes. Raised up in memoir on my own, surrounded by my own huge but idiosyncratic memoir library, still in many ways making my outsider way into the book world, it wasn't at all clear to me that I would succeed within an Ivy League environment among faculty members who knew what teaching was. I hadn't grown up in the workshop system; how could I teach it? With the exception of three ten-day summer programs I enrolled in when I was already a mother, I had never taken a formal writing class. I was the true memoir autodidact, and this was Penn, where, as a student years before, I had studied the history and sociology of science, swerving clear of English.
As it turns out, teaching at Penn had been my calling all along. I eased into the responsibility—first mentoring a single student, then teaching a select advanced class, then taking on the teaching of Creative Nonfiction 135.302, which has become my favorite job of all. In teaching others memoir, I have taught it to myself—the language of expectations and critique, the exemplary work of others, the exercises that yield well-considered work, the morality of the business, the psychic cautions. Teaching memoir is teaching vulnerability is teaching voice is teaching self. Next to motherhood, it has been, for me, the greatest privilege.
It has also—perhaps inevitably—led to the writing of this book. _Handling the Truth_ is about the making of memoir, and the consequences. It's about why so many get it wrong, and about how to get it right. It's about the big questions: Is compassion teachable? Do half memories count? Are landscape, weather, color, taste, and music background or foreground? To whom does _then_ belong? And what rights do memoirists have, and how does one transcend particulars to achieve a universal tale, and how does a memoirist feel, once the label is attached, and what _is_ the language of truth? _Handling the Truth_ is about knowing ourselves. It's about writing, word after word, and if it swaggers a little, I hope it teaches a lot, providing a proven framework for teachers, students, and readers.
# ONE
DEFINITIONS, PRELIMINARIES, CAUTIONS
# PREFATORY
MAYBE the audacity of it thrills you. Maybe it's always been like this: You out on the edge with your verity serums, your odd-sized heart, your wet eyes, urging. Maybe this is what you are good for, after all, or good _at_ , though there, you've done it again: wanted proof, suggested the possibility. You teach memoir. You negotiate truth. Goodness doesn't matter here. Bearing witness does.
Memoir is a strut and a confession, a whisper in the ear, a scream. Memoir performs, then cedes. It is the work of thieves. It is a seduction and a sleight of hand, and the world won't rise above it.
Or you won't. You in the Victorian manse at the edge of the Ivy League campus, where you arrive early and sit in the attitude of prayer. You who know something not just of the toil but also of the psychic cost, the pummeling doubt, the lacerating regrets that live in the aftermath of public confession. You have written memoir in search of the lessons children teach and in confusion over the entanglements of friendships. You have written in despair regarding the sensational impossibility of knowing another, in defense of the imperiled imagination, and in the throes of the lonesome sink toward middle age. You have written quiet and expected quiet, and yet a terrible noise has hurried in—a churlish self-recrimination that cluttered the early hours when clear-minded nonmemoirists slept. You have learned from all that. You have decided. Memoir is, and will still be, but cautions must be taken.
Teaching memoir is teaching verge. It's teaching questions: Who are you? Where have you been? Where are you going? What do you believe in? What will you fight for? What is the sound of your voice? It's teaching _now_ against _then_ , and leave that out to put this in, and yes, maybe that happened, but what does it _mean_? An affront? You hope not. A calling? Probably.
You enter a classroom of students you have never seen before, and over the course of a semester you travel—their forgotten paraphernalia in the well of their backpacks, those tattoos on their wrists, those bio notes inked onto the palm of one hand. They will remember their mother's London broil, but not the recipe. They will proffer a profusion of umbrellas and a poor-fitting snowsuit, a pair of polka-dotted boots, red roses at a Pakistani grave, a white billiard ball, a pink-and-orange sari, a box with a secret bottom, Ciao Bella gelato. Someone will make a rat-a-tat out of a remembered list. Someone will walk you through the corridors of the sick or through the staged room of a movie set or beside the big bike that will take them far. Someone will say, _Teach me how to write like this_ , and someone will ask what good writing is, and you will read out loud from the memoirs you have loved, debunk (systematically) and proselytize (effusively), perform Patti Smith and Terrence Des Pres, Geoffrey Wolff and Mark Richard, Marie Arana and Mary Karr, William Fiennes and Michael Ondaatje, C. K. Williams and Natalie Kusz. You will play recordings of Sylvia Plath reciting "Lady Lazarus" and Etheridge Knight intoning "The Idea of Ancestry," and you will say, in a room made dark by encrusted velvet and mahogany stain, _You tell me good. You tell me why. Know your opinions and defend them._
These aspiring makers of memoir are who you believe and what you believe in—the smiley face tie he wears on Frat Rush Tuesdays, the cheerful interval between her two front teeth, the planks he carries in his dark-blue backpack, the accoutrements of power lifting. Enamored of the color red and hip-hop, declaring you their "galentine," impersonating Whitman, missing their mothers, missing their dead, they are, simply and complexly, human, and they may not trust themselves with truth, but they have to trust one another. You insist that they earn the trust of one another.
And so you will send them out into the world with cameras. And so you will sit them down with songs. And so you will ask them to retrieve what they lost and, after that, to leave aside the merely incidental. You will set a box of cookies on the table, some chocolate-covered berries, some salt-encrusted chips, and then (at last) get out of the way, for every memoir must in the end and on its own emerge and bleed and scab.
_Audacity_ was the wrong word; you see that now. The word, in fact, is _privilege_. Teaching, after all these years, is the marrow in your bones. Truth is your obsession.
# MEMOIR IS NOT
HERE are some of the things that memoir is not:
* A chronological, thematically tone-deaf recitation of everything remembered. That's autobiography, which should be left, in this twenty-first century, to politicians and celebrities. Oh, be honest: It should just be left.
* A typeset version of a diary scrawl—unfiltered, unshaped. There are remarkable diaries; _A Woman in Berlin_ (anonymous), for example, is artful, heartbreaking, essential. _New York Diaries: 1609 to 2009_ (Teresa Carpenter, editor) is a thrill. But the method of a diarist is to record events and thoughts _as they are happening_. A memoirist looks back.
* Exhibitionism for exhibitionism's sake. If nothing's been learned from a life, is it worth sharing? Or, if nothing's been learned _yet_ , shouldn't the story wait?
* An accusation, a retaliation, a big _take that!_ in type. Fights are waged in bedrooms and courthouses. A memoir is not a fight.
* A lecture, a lesson, a stew of information and facts. Memoirs illuminate and reveal, as opposed to justify and record. They connote and suggest but never insist.
* A self-administered therapy session. Memoirists speak to others and not just to themselves.
* An exercise in self-glorification; an ability—or refusal—to accept one's own culpability; a false allegiance to the idea that a life, any life, can be perfectly lived or faultlessly explained.
* An unwillingness to recognize—either explicitly or implicitly—that memory is neither machine nor uncontestable. Memory—our own and others'—is a tricky, fallible business.
* A trumped-up, fantastical idea of what an interesting life might have been, _if only_. A web of lies. A smudge. A mockery of reality. There is a separate (even equal) category for such things. It goes by the name of fiction.
# MEMOIR IS
IF you want to write memoir, you need to set caterwauling narcissism to the side. You need to soften your stance. You need to work through the explosives—anger, aggrandizement, injustice, misfortune, despair, fumes—toward mercy. Real memoirists, _literary_ memoirists, don't justify behaviors, decisions, moods. They don't ladder themselves up—high, high, high—so as to look down upon the rest of us. Real memoirists open themselves to self-discovery and, in the process, make themselves vulnerable—not just to the world but also to themselves. They yearn, and they are yearned with. They declare a want to know. They seek out loud. They quest. They lessen the distance. They lean toward.
Listen, for example, to Michael Ondaatje as he sets out to rediscover—and make sense of—his Sri Lankan childhood. From the opening pages of _Running in the Family_ :
I had already planned the journey back. During quiet afternoons I spread maps onto the floor and searched out possible routes to Ceylon. But it was only in the midst of this party, among my closest friends, that I realized I would be travelling back to the family I had grown from—those relations from my parents' generation who stood in my memory like frozen opera. I wanted to touch them into words. A perverse and solitary desire. In Jane Austen's _Persuasion_ I had come across the lines, "she had been forced into prudence in her youth—she learned romance as she grew older—the natural sequence of an unnatural beginning." In my mid-thirties I realized I had slipped past a childhood I had ignored and not understood.
Diane Keaton, a celebrity who wrote not autobiography but memoir with _Then Again_ , uses collage (letters written, journals plumbed, secrets exposed) to parse a question and to explore (beautifully, calmly, as a human being and not as a star) a thoughtfully articulated theme. It's all here in a single sentence—hardly easy gloss. It's the crinkly stuff of living and losing, and it sets the book in motion:
Comparing two women with big dreams who shared many of the same conflicts and also happened to be mother and daughter is partially a story of what's lost in success contrasted with what's gained in accepting an ordinary life.
Maybe memoir, for some, is the Queen of the Nasties—the medical horror story, the impossible loss story, the abuse story, the deprivation story, the I've-been-cheated story, the headline-making _you're kidding me_ s. But plot (which is to say the stuff of a life) is empty if it doesn't signify, and the unexamined tragedy—thank you, Socrates—isn't worth the trees it will be inked on or the screen that fingers will smudge. Some of the best memoirs are built not from sensate titillations but from the contemplation of universal questions within a framed perspective.
Annie Dillard, for example, is not a victim in her growing-up classic, _An American Childhood_. She's a woman looking back on what it meant to grow awake to the world.
I woke in bits, like all children, piecemeal over the years. I discovered myself and the world, and forgot them, and discovered them again. I woke at intervals until, by that September when Father went down the river, the intervals of waking tipped the scales, and I was more often awake than not. I noticed this process of waking, and predicted with terrifying logic that one of these years not far away I would be awake continuously and never slip back, and never be free of myself again.
Likewise, while loss frames C. K. Williams's _Misgivings_ , it's not the tragedy he's chasing. It's understanding.
My father dead, I come into the room where he lies and I say aloud, immediately concerned that he might still be able to hear me, _What a war we had!_ To my father's body I say it, still propped up on its pillows, before the men from the funeral home arrive to put him into their horrid zippered green bag to take him away, before his night table is cleared of the empty bottles of pills he wolfed down when he'd finally been allowed to end the indignity of his suffering, and had found the means to do it. Before my mother comes in to lie down beside him.
When my mother dies, I'll say to her, as unexpectedly, knowing as little that I'm going to, "I love you." But to my father, again now, my voice, as though of its own accord, blurts, _What a war!_ And I wonder again why I'd say that. It's been years since my father and I raged at each other the way we once did, violently, rancorously, seeming to loathe, despise, detest one another. Years since we'd learned, perhaps from each other, perhaps each in our struggles with ourselves, that conflict didn't have to be as it had been for so long the routine state of affairs between us.
In _Father's Day_ , Buzz Bissinger is in keen pursuit of understanding, too, though in this memoir about raising twin sons, one of whom suffers irreparable brain damage at birth, it's Bissinger's own inability to be at peace, to find solace, to be _okay_ that generates the tension, and the search.
It is strange to love someone so much who is still so fundamentally mysterious to you after all these years. _Strange_ is a lousy word, meaning nothing. It is the most terrible pain of my life. As much as I try to engage Zach, figure out how to make the flower germinate because there is a seed, I also run. I run out of guilt. I run because he was robbed and I feel I was robbed. I run because of my shame. I am not proud to feel or say this. But I think these things, not all the time, but too many times, which only increases the cycle of my shame. This is _my_ child. How can I look at him this way?
Marie Arana wrote _American Chica_ not to exploit a family or to out dark secrets, not to trump or to claim, but to somehow register how two exceptionally different people—her parents—could sustain a home.
A South American man, a North American woman—hoping against hope, throwing a frail span over the divide, trying to bolt beams into sand. There was one large lesson they had yet to learn as they strode into the garden with friends, hungry from rum and fried blood: There is a fundamental rift between North and South America, a flaw so deep it is tectonic. The plates don't fit. The earth is loose. A fault runs through. Earthquakes happen. Walls are likely to fall.
As I looked down at their fleeting radiance, I had no idea I would spend the rest of my life puzzling over them.
And then there's Jeanette Winterson, in _Why Be Happy When You Could Be Normal?_. She is writing not to abrade her mother—she might have, the material was there—but to report back out from a life of searching on the matter and the necessity of love.
Listen, we are human beings. Listen, we are inclined to love. Love is there, but we need to be taught how. We want to stand upright, we want to walk, but someone needs to hold our hand and balance us a bit, and guide us a bit, and scoop us up when we fall.
Listen, we fall. Love is there but we have to learn it—and its shapes and its possibilities. I taught myself to stand on my own two feet, but I could not teach myself how to love.
Beauty is born of urgency; that should be clear. Forced knowing is false knowing—self-evident, perhaps? Voice is tone and mood and attitude, and tense will make a difference. Makers of memoir shape what they have lived and what they have seen. They honor what they love and defend what they believe. They dwell with ideas and language and with themselves, countering complexity with clarity and manipulating (for the sake of seeing) time. They locate stories inside the contradictions of their lives—the false starts and the presumed victories, the epiphanies that rub themselves raw nearly as soon as they are stated. They write the stories once; they write them several times.
They take a breath.
They sing.
And when their voices are true, we hear them true. We trust them.
# READ TO WRITE
ONCE I had a friend. Yes. Once. It had occurred to her to write a book, a memoir in particular, and so she called, asking for help. _It should be fun_ , she said. I set to work creating a list of the memoirs my friend might read, for she hadn't read even so much as a single memoir yet, and I thought reading might be helpful. I sent the list and that was that—the end of the memoir, and of the friendship.
I don't mean to be insulting when I suggest that memoir writers should read memoir, but there they are—my annoying politics. Stories live inside the pages of memoirs, but so do strategies, tactics. Fine little experiments with points of view and tense. Daring reversals of structure. Elisions and white space. Italics pressed up against roman. I'm a little bit sorry, but the facts are the facts: You have to read memoir to write it.
I read in the earliest part of the day—before my husband stirs, before the glisten on the grass burns off, before anybody anywhere can suggest a different agenda. I read outside on the old chaise longue, or on the slatted, sloping deck, or on my side of the bed, turned toward the breeze and the clean pink morning light.
Reading is equally about exiting and entering, about going away and going nowhere. Reading early in the morning is like having one more dream, like lolling just a little longer in the strange, sweet gauze of sleep. If I were to draw myself in the morning reading, I would draw my head as a cloud—edgeless and capacious and shape-shifting and unbound, hovering near but never tethered to the bones and muscles of my body. I read, I am saying, and without moving anywhere I go—into the deep, wild, sometimes contradicting, mostly illuminating language and landscape of memoir. I learn (over and again) how memoir is made. I learn what memoirists teach.
With _Road Song_ , Natalie Kusz teaches the importance of selecting just the right details, and of giving them room on the page. With _Running in the Family_ , Michael Ondaatje commends the power of fragments and the integrity of not being entirely sure—or sane. With _Half a Life_ , Darin Strauss teaches white space. With _House of Prayer No. 2_ , Mark Richard teaches intimate second-person prose. With _Just Kids_ , Patti Smith teaches how much room memoir can make to preserve the integrity (and privacy) of others. With _The Duke of Deception_ , Geoffrey Wolff teaches forgiveness. With _Bone Black_ , bell hooks teaches the power of the returning refrain.
The good memoirs aren't just good stories. They are instructions on both life and form, considerations of shapes, shadows thrown up onto the wall. They are—they must be—works of art. It's fundamental, then, isn't it? You have to know what art is before you set out to write it. You have to have a dictionary of working terms, a means by which you can deliver up a verdict on your own sentences and their arrangements.
Buy the books. (There's an appendix back there to get you started.) Increase your shelf space. Go dirty and dog-eared; take an afternoon sprawl. "When you find only yourself interesting, you're boring," Grace Paley said. "True memoir is written, like all literature, in an attempt to find not only a self but a world," Patricia Hampl said.
Don't be boring.
Find the world.
And then (and only then) wedge yourself within it.
# GREAT EXPECTATIONS
IN the faces of my students I see the person I once was, though I was nearly twice their age, married, and a mother when I enrolled in my first writers' workshop. We'd flown to Spoleto, Italy, for a family vacation, and we'd climbed hills and slipped inside churches and sat beneath rooms where pianos were playing. There were nuns on the hills, ropes at their waists. There were market flowers wilted by sun. We'd arrived late at night and settled into a stranger's flat (the plates still draining by the kitchen sink, a cloud of smoky moon in the front window), and the next day I'd hauled myself up the stairs of a round-cornered building and sat in the back of the class.
I'd brought a blank book with gray pages, its cover hieroglyphically embossed. I'd read the works of our teachers, Reginald Gibbons and Rosellen Brown, and beyond the window, deep in the hills, was the Roman theater and the turreted castle, the Cathedral of Santa Maria Assunta, the shop of silver trinkets and cards from which my toddler son would soon (almost) catastrophically run as a Fiat hurtled by. The poisonous wasp that would balloon my husband's hand was out there. The pizza shop with the festoon of paper flowers at the base of the hill. The slinking arm of the aqueduct. The basilica in pale light, its beauty explained by my husband with two words: _forced perspective_. The cemetery where soon the class would go to imagine the lives of those whose names we'd find scratched out of headstones and buffed by a woman bearing (in broad daylight) a candle flame and a white handkerchief.
But at that moment there was only the classroom, the squeak-footed chairs, my blank book, the other students, Rosellen, and Reginald, and it was Reginald who began: "Every difference makes a difference." Word for word, I transcribed him. "The craft of writing is to describe something so that someone else can see it." Soon Reginald was quoting Henry James—"Be one of those on whom nothing is lost"—and then Rosellen was speaking: "I like the sentence that begins romantically, then de-romanticizes itself."
_The sentence that de-romanticizes itself._
I had been a closet writer nearly all my life—my poems stuffed in boxes, my short stories boomeranged back to me via return-envelope mail. I was taking my first lesson in craft, and what I learned in Spoleto, what I chose to value or come to believe about myself, would shape the way I thought about stories made and lived every thereafter day of my life. It would make me want to find a way to pass the knowing down.
Spoleto also began for me the process of examining, defining, and attempting to live up to my own literary expectations. What was I looking for in the writers I read? What was I hoping for from myself? Why hadn't I asked myself these questions before? Why had I left so much to hazy qualifiers? Why did I not yet have a standard that I was holding myself to? What does _good_ mean, after all? And what did I mean, when I said, simply, _I love it_?
Things had to change.
They did.
Of memoirists—I have learned as I have read, learned as I have taught, learned as I have reviewed my own work and the work of others—I expect deliberation with structure, ambition with language, compassion in tone, magnanimous reach, a refusal to presume that chronology alone teaches. And since I am so busy expecting that of others, I cop to expecting it of myself. Memoirs—their memoirs, my memoirs—must transcend not just the category and the particulars of the story but also, ultimately, the author herself.
My expectations, then. But what about yours? What about the expectations of my students? In that Victorian manse on the edge of my campus, extraordinary work emerges when I give my students 750 words each to express their expectations. The prompt question may seem simple enough: _What do you expect of others as you read, and what do you expect of yourself as a writer?_ The responses, however, have been remarkable, establishing for each writer not just a critical vocabulary and frame but also a contract of sorts. _This is what I'm looking for_ , each essay concludes, in its own fashion. _Look at me setting the bar._
I rarely know what to expect of my students' expectations essays. I have never—and this is deeply true—been disappointed. The essay fragments that I share here are meant to inspire you. Who am I kidding? They absolutely inspire me.
I expect, in a well-written piece, to be drawn in without my notice. I don't want awkwardly chosen words to fight for my attention. I want the attraction to feel effortless and instant, as if the writer doesn't even know she's being read. Or, even further: I want to imagine that a piece of writing is just an elegant, authorless, whole thought that had already existed before a writer nets it onto a page. Part of my fantasy is that the writer does not even care if the piece is read; this autonomous thing on the page is just fanning its wings and sunning itself, wholly innocent of me, the reader-voyeur. The writer is someone who has carelessly left a pair of glasses on the grass so that I can have a look. —Sara
Address me (first-person point-of-views are a good way to start, but not necessary) and acknowledge my presence. I want to know that you're writing for someone other than yourself: me. Write with intentionality. Labor over every sentence and every other word. Because at night, when I hold these bound pages you regard as your life's work, I want to read it with the trust that you have thought long and hard about the impact of your words on my mind. Because when I arrive at the destination your words have brought me to, I want to know that my journey is the result of your love. —Rachel
I hold the writers I read to the same standards as I hold myself. I enjoy writing that feels genuine because it allows me to trust the author and become more invested in his or her work. When I encounter writing that is pretentious or condescending, I put up a barrier that prevents me from getting anything out of the work at all. I get a great deal more out of reading when authors use imagery and description to draw me into their story and make it come alive. —Nabil
I also expect compassion for the people mentioned in a piece. We are all fallible and faulted. I expect fairness in a portrayal; very few people are flat characters, merely good or bad. Geoffrey Wolff's _The Duke of Deception_ was a great example of compassion. Though his father was a fraud, he still can say, "I had this from him always: compassion, care, generosity, and endurance." Another aspect of fairness is consideration of others' vulnerability: throughout our lives, others entrust us with their secrets. I wouldn't default on that trust without explicit permission. Not everything we know about the people in our lives is fair game. I want to be respectful of others in my writing. —Erin
What, then, is the stimulus for entertainment? Reading appeals to people from a voyeuristic perspective—the contrived intimacy of knowing others blithely and truly, with no repercussions, is the consummation of high human fantasy. We long for social connection at will. The mentioned aspects of literary entertainment entice the reader for their functional relationship to the voyeur's fulfillment: they simulate the closeness and familiarity while belying the actual vacuum between reader and character. In his essay, Seabrook shows us Schnabel the way his visitors see him—or the way that they might. The legitimacy of the reader's depiction is unimportant and personal—the facts that paint that picture remain true. DeLillo's encomia to contemporary commercialism and the academy place us in the confused mind of an intellectual, bridging conceptions with shifting tones, allowing the narrative to speak as much implicitly as it does explicitly. This is what I expect in writing, and what I expect to give. —Jonathan
Once I am committed to a book, I want to feel as though I am in an unhealthy serious relationship, the kind where you don't ever want to go anywhere without the significant other and it's all that is on your mind. I like to be able to know and empathize with the characters, so I can talk about them as if I were gossiping about a friend. While I can appreciate a poetic writer that crafts beautifully poised sentences, I tend to be more attracted to raw and honest writing, someone who can tell a good story without sounding pretentious. A good ending is pivotal; this doesn't mean every story has to have a fairy tale ending, but as I've learned in psychology, the "recency effect" claims that I'm most likely going to remember the last part I read. Thus a writer should want to leave the reader with anything but an inadequate closure; lingering questions are acceptable, but a weak and poorly cohesive conclusion will only leave a sour taste in my mouth. —Katie
I expect myself to surprise my reader by endowing my piece with that certain X factor that transforms a memory into a story. I want to do this on the linguistic level, by varying my sentence length and experimenting with punctuation (I find the dash to be very powerful when used correctly). Perhaps more importantly, I expect to surprise myself (or maybe I don't _expect_ this, because then it wouldn't be quite a surprise). But I can and do expect myself to be open to the possibility of surprise, and not to confine my memoir to a given framework within which it has no room to develop. I am excited to see how my writing and my voice will emerge. —Leah
From writers, I expect consideration for their readers, a balance of meaningful details, and a sense of destination for their own writing. I look for a certain kind of pensiveness and perceptiveness in writers that I rarely expect from normal people. I want writers who always ask "Why?" or "What is this, _really_?" and labor to figure it out. They know what they're looking for, and they take the readers with them on their search. Thinking about memoir, the importance of a mission, a framework is becoming so clear to me. We all have endless stories; our lives are impossible to summarize, and we shouldn't try! Writers should be able to parse out the golden threads from their own writing, know what to keep and what to scrap, and organize it all with the reader, and their own destination in mind. I also definitely expect writers to know _how to read._ —Andrea
Here then, after reading, is your first assignment. Know, for yourself, what draws you to the memoirs that you read. Know what you expect of you. Write it down and keep it close. Don't fail your own gold standards.
# CAREFUL, NOW
BEING out in the world with books of my own, I know the price of advice. I know the urgency behind the questions: _Read me? Teach me? Love me? Make me a writer?_ When you lean in the direction of another's work, you lean precariously out of your own. When you attend to the dreams and works of others, you are thrown from the path you had been on. Teaching is a succession of invasions and beginnings. And yet, of course, I lean.
But in reminding others to keep their hearts open, I remind myself. In teaching respect, I keep myself in check. When I recite the words of poet-novelist Forrest Gander—"Maybe the best we can do is try to leave ourselves unprotected..."—I wear my jacket just a little less snug on the long walk home. When I recite Edith Wharton—"One good heart-break will furnish the poet with many songs, and the novelist with a considerable number of novels. But they must have hearts that can break."—I ask myself, _At my old age, is my heart still capable of breaking?_
I am right there with my students as I teach my students, I'm saying. Whenever I teach memoir, when I contemplate it, when I have the urge to again write it, I live in the danger zone.
It's obvious, isn't it? Memoir making is a hazardous business. People are involved. Their feelings. Their reputations. Their relationships to you. Put somebody into a book you write, and you have changed—forever—the equation. I teach this to others. I teach it to myself. Over and over again.
Careful, now, I say. To them. To myself. Because it doesn't matter if you think your portraits flatter. It doesn't matter if you think the jokes are on you. It doesn't matter if you tag another as hero and escalate the praise. None of it matters. Memoir writers have no control over how their cast of characters—which is to say their mothers, their fathers, their siblings, their cousins, their early friends and late friends, their ancient lovers, their current partners, their neighbors, their teammates, their colleagues, their professors, their students, their children—will feel about what has taken up residency on the page. Call someone nice in a memoir and maybe she'll think you're chastising her as a bore. Accuse another of a kindness, and he may well think you've not paid homage to the thing that mattered more. Overtly accuse or overtly divulge, and it might—no, it will—get bloody. The war could last for years. The war could be unending.
And do not forget this. Learn it from me. People grow up. Children do. Memoirs freeze people in time. Sometimes that isn't the most loving thing to do. Others may forgive you, but will you forgive yourself?
Memoir making, the myth goes, is tenderness reserved for the book, intelligence transferred to the page, generosity given over to scene. But it is also, obviously, grand larceny, a form of plagiarism, a brand of stalking, and those who teach memoir have, I think, a moral responsibility to steady the student with terms, to caution her about consequences, to insist that he do it again, better, until the structure is solid and right, until the memoir can stand up against the ammunition hidden in the tall grass on the other side of the wall. Memoir making in the classroom is not a vanity operation. It is about melding the eye and the _I_ into something that actually matters—yes—while at the same time talking through the messiness of life. It's about giving the writer room to know himself, while making it clear (very clear) that those we have loved or warred against or not forgotten may be very happy living outside the public's big-eyed glare, thank you very much.
Real writers, I have said throughout these many years, do not write to trump or abolish. They write, instead, to rumble or howl, or because language is salvation, or because they've been alive, or because they have survived, or because they are determined to survive, tomorrow and the next day. Write for the right reasons, I implore. Write real. Write with the understanding, as Erin wrote in her expectations essay, that some lives or secrets do not belong to us. Write knowing that there are those who will inevitably walk away, and after that, there are those who will mock the form, who will dangle out their suspicions, who will attach the term _memoir_ (which has its roots in French and evokes _reminisce_ ) to the meanest list of accusations.
Memoir is lesser, you will read. Memoir is suspiciously easy. Consider Daniel Mendelsohn, writing in _The New Yorker_ , who proffered this "sounds like/looks like" list of labels for the genre: "unseemly self-exposures, unpalatable betrayals, unavoidable mendacity, a soupçon of meretriciousness." Consider Ben Yagoda's _Memoir: A History_ , where we learn that "memoir is to fiction as photography is to painting, also, in being easier to do fairly well. Only a master can create a convincing and compelling fictional world. Anyone with a moderate level of discipline, insight, intelligence, and editorial skill—plus a more than moderately interesting life—can write a decent memoir."
You might feel better after your memoir is written, in other words. But after it is read, after the critics have had their say, after you have overheard your neighbors at the block party whispering, after your sister has rebuked your way of remembering, how will you feel?
Be prepared. Be cognizant. Move forward, but with caution.
# TWO
# RAW MATERIAL
# WRESTLING YOURSELF DOWN
DICTION, the poet Mary Oliver says, is the atmosphere created by word choices—the sound of those words, their relative precision, their various and variant connotations.
_The atmosphere created by words._ Yes. Write memoir, and you are writing atmosphere.
But what _kind_ , exactly? Do you know? Do you know who you are, what you are capable of, how what you _choose_ to see speaks of who you are? Do you know what mood you leave behind, or could? Do you know what trembles in your wake, and what might turn its back on you? Are you lazy, jiving, elaborate, prepossessing, imagistic, postmodern, slaying, friendly, intricate, intimate on the page? Is your speaking voice like your memoir voice? How many personae can you fit inside your hat? How is persona not in itself a lie? Why is thinking about all this—and thinking about it early—so very necessary?
It's necessary because the indiscriminate _I_ is haphazard and poorly informed. It is often dull and deselective. It does not qualify, as Ander Monson writes in "Voir Dire," as art: "I guess I want awareness," Monson says, "a sense that the writer has reckoned with the self, the material, as well as what it means to reveal it, and how secrets are revealed, how stories are told, that it's not just being simply told. In short, it must make something of itself."
Yes, of course. But how?
Start small, I advise. Don't try to write The Memoir straight off; don't attack it heart first and head strong. You'll get lost in its bigness, its tufted landscapes, its endless contradictions, its minor and major assaults. You'll lose your way, or you'll lose your opportunity. Start small by making notes to yourself. Go out and buy yourself a blank-page journal. Prose poem your days. Launch a blog. Write about what is happening right now so that you can learn to write well about what happened yesterday.
The following three passages are all excerpted from _New York Diaries: 1609 to 2009_. Here, outside the confines of formal memoir, are writers observing the world around them—and experimenting with voice. Clouds are "exactly parallel to avenues." People are "prematurely uglied." A man smokes "volcanically." Not one of these diarists is merely recording the facts. The facts are being inhabited, transfigured. These sentences, to borrow from Monson, are making something of themselves:
this evening 2 (uptown): long strings of cirrus clouds are sparkling electric fire orange against a still-blue sky, the same calm attention, the clouds are exactly parallel to avenues, and i smile and tear in relief: the city is in the sky again, dazzled, i'm stunned by natural beauty for the first time all week, the second-nature beauty of my city moving again in time. —Chad the Minx, September 17, 2001
At the Warwick Hotel in a room full of crones for a movie. A collection of age and failure. I among them, prematurely uglied, am cast as an extra in a film I don't even know the name of. —Judith Malina, July 17, 1956
I was indefatigable this morning and tonight I stayed quietly at home, smoked volcanically, and read Burns, of whose writings I ought to know more than I do. —George Templeton Strong, January 6, 1842
Journal keeping, diary making, blogging—it's all a curious thing, and it isn't (I'll make the point again) memoir. But it's a start, an inroad, a gesture. It tells us something about ourselves, records the details of our living, puts dialogue somewhere safe so that we can retrieve it later, talks back to us about us. When I ask my students to journal daily, I ask them not to judge and not to filter. Just put it down, I say—whatever you think of, however you want. A week goes by, and I send along a copy of Joan Didion's short, classic essay "On Keeping a Notebook." Write three paragraphs about the notebook pages that you have been keeping, I say. What is the value of the notes you have kept? What did they teach you about yourself? How honest are the pages, and what do you expect they will mean to you ten or twenty years from now? What shouts back at you about your voice and the sentences you leave behind?
Joe, one of what has become a succession of supremely talented engineering students, discovered this:
I'm a fairly logical person, and my brain tends to move through a series of conclusions faster than I can control. This "quick thinking" is generally considered an asset. But speeding down a road makes it mighty hard to turn down any of the side streets or see what's outside the window. Writing down my thoughts forced my mind to spin its tires, bogged down by the physical limit of my writing speed. But by taking my time, and exploring many different options at each stage, I came to much more significant conclusions. In one case—Friday, I believe—I threw around a problem that had been frustrating me for a while. It seemed to be an impossible dilemma: how could my morally upstanding friend enjoy the college social life without compromising his integrity? I'd run it through a hundred times and was convinced it was a flaw in our society. It wouldn't be the first, after all. But then, in writing my assigned notebook entry, I articulated the problem in a few different ways. Looking at it in a different light, I realized that it would not necessarily be a compromise of his integrity, but rather a stage of growth and change. If there was anything holding him back, it was just as much pride as it was "integrity." To say that I totally changed my opinion would be an exaggeration, but I certainly have a new feeling towards the problem. Thanks to the notebook, I was driving slow enough to see it.
Notebook keeping seems an awkward exercise to some, but much is learned. What were they doing, the students ask me, ask themselves, obsessing so much about food? Why were they having trouble telling the truth? Why did they write in half sentences or in bullets or in the margins correcting themselves—for the sake of whom, exactly? Are they as anxious as they seem? Will they really never stop remembering that girlfriend? Why do they sit in the same exact chair every single evening?
_And do they really sound like that?_
_Sound like what?_ I ask in class, and they begin, they self-diagnose, they express surprise:
_I'm more matter-of-fact than I thought I was._
_I'm more secretive._
_I'm too trapped in my head._
_I write about being anxious without saying why I'm anxious._
_I've got to lighten up._
_Maybe I'm not the poet I thought I was._
_I swear I've got to stop thinking about food._
_I need to stop sounding like her._
_My sentences go nowhere._
_There's not an original metaphor in twenty pages._
There is the who they thought they were and the who they wrote down, the something lost and the something gained, the discrepancy, now easily measured, between the voice they hear in their heads and the voice they find on their paper. "Our notebooks give us away," Joan Didion observes. And they do. They also provide, to memoir makers, a shelf and a foundation, as she goes on to observe.
I think we are well advised to keep on nodding terms with the people we used to be, whether we find them attractive company or not. Otherwise they turn up unannounced and surprise us, come hammering on the mind's door at 4 a.m. of a bad night and demand to know who deserted them, who betrayed them, who is going to make amends. We forget all too soon the things we thought we could never forget. We forget the loves and the betrayals alike, forget what we whispered and what we screamed, forget who we were.
So keep a notebook, and take note. Use obsessions and anxieties to your advantage, plumb the details for metaphor, negotiate the distance between the story you have to tell and the voice with which you can tell it, and above all else, know yourself. Maybe there's work to do in aligning your ideas with your sound. Maybe tweeting all day has curbed your capacity for expansion. Maybe you need to let down the guard on your jokey self because there's only so far that funny can go; at some point funny goes hollow if it doesn't add up to more, if there isn't a visible soul behind the pranks. Maybe you, like Alison Bechdel in her graphic memoir _Fun Home_ , will need some time to figure out whether even your private, right-now words are true. In the following scene, Bechdel is depicting herself at work on her own teenage journal. She's up against the memoirist's dilemma.
It was a sort of epistemological crisis. How did I know that the things I was writing were absolutely, objectively true? My simple, declarative sentences began to strike me as hubristic at best, utter lies at worst. All I could speak for was my own perceptions, and perhaps not even those.
A journal is written so that a journal might be studied. A journal is where the work-in-progress writer begins to wrestle him- or herself down, begins to understand or tussle with his or her own authority and authenticity. How will you write toward the truth? How will you wade in, deep? How will you know what is superfluous and what matters? Where is the artist in you? What can you do to a sentence?
Put present time down.
Teach yourself the range of your own voice.
# TENSE?
SQUIRREL gymnastics on the roof, laughter over a bowl of soup, the scratch and smell of ink. The memoir makers are working, photographs on their knees, their pens pausing midsentence.
Bring a photograph of yourself at a childhood or adolescent turning point: That is the assignment. Write the story in present tense. What can you see, smell, hear—at eight years old, with your brother nearby; at sixteen, with your grandmother cooking; at thirteen, with your father at an implacable distance? Present tense is instinct, spontaneity, life gulped in, the primal. The senses are ripe, but there is no absolute knowing, not yet in this trembling moment, of what will teach or linger.
_It's my brother and me. The sun is glare and heat, and we're out walking._
_She's teaching me to cook—sharpening the blade, snapping the stems off the big-leafed parsley, and I am watching._
_We have come to the hard edge of the wide canyon, and the bugs are out. I'm angry._
With the present-tense fragments set aside, the makers of memoir begin again, the same photos on their knees—the same people, the same action, the same fusing of flame to wick. But now past tense is the method. Now wisdom is the privilege. Now the urgency isn't simply about the details but about the process of bridging the distance.
_I wasn't the brother I should have been._
_She died before I understood what it was she meant to teach me._
_I never asked him why he brought me there, what he thought that I might see._
Past or present? Present or past? It's going to make a difference. Tense announces predilection and instinct. On the one hand: sense and detail, anecdote, in-your-face, it's happening, you're with me. On the other: cogitation, meditation, speculation, consideration, the sense of something measured. The heart and the mind. The eye and the _I_. It's still, in some fashion, alive, or it was. One or the other is going to appeal. One or the other will be right.
"Half my life ago," Darin Strauss begins his memoir, _Half a Life_ , "I killed a girl." There's nothing but past tense for this, don't you agree? If you want proof, try writing the sentence in present tense. See how fast it falls apart. See what it does to the tone, the pretext, the moral authority of the author. Darin Strauss could not write his book in present tense. Not if he wanted our respect.
Gail Caldwell, too, had little choice in her memoir, _Let's Take the Long Way Home_ , but to tell her story from the perspective of right now. She has been ruminating. She has been thinking. It's not the death of her best friend that she wants to posit as the headline. It's the struggle afterward to come to terms.
It's an old, old story: I had a friend and we shared everything, and then she died and so we shared that, too.
The year after she was gone, when I thought I had passed through the madness of early grief, I was on the path at the Cambridge reservoir where Caroline and I had walked the dogs for years. It was a winter afternoon and the place was empty—there was a bend in the road, with no one ahead of or behind me, and I felt a desolation so great that for a moment my knees wouldn't work. "What am I supposed to do here?" I asked her aloud, by now accustomed to conversations with a dead best friend. "Am I just supposed to keep going?" My life had made so much sense alongside hers: For years we had played the easy, daily game that intimate connection implies. One ball, two gloves, equal joy in the throw and the return. Now I was in the field without her: one glove, no game. Grief is what tells you who you are alone.
Could Loren Eiseley's _All the Strange Hours_ , written late in life to examine a life, have any real meaning if it had been rendered in present tense? Wasn't he writing—and aren't we reading—so that we might know what age ultimately teaches?
It was a time of violence, a time of hate, a time of sharing, a time of hunger. It was all that every human generation believes it has encountered for the very first time in human history. Life is a journey and eventually a death. Mine was no different than those others. But this is in retrospect. At that time I merely lived, and each day, each night, was different.
Now look at what Mark Richard does with _House of Prayer No. 2_. Look at how he puts us right there, in the infinite strangeness of his childhood, where he is considered a "special child" with all that that phrase can connote, and where he gets sent (it's devastating) to fix that problem with his hips. It's just one of many wrenching, near-impossible scenes:
They wheel you down to the ward, and it's been cleft palate season. There are a lot of children running around with complicated black stitchery on their upper lips. Some look like little Hitlers, others look like black-whiskered cats. They put you on the big sunporch with some older black boys, and you're glad to find Michael Christian. Nurse Wilfong comes to see you and says how you've grown, must have been your mama's cooking, and you look toward the little sunporch and ask where Jerry is, and she holds your face in her hands and bends over and says, _Jerry died._
Flip this to past tense—the same details, the same sad dying—and you have a different story. You have the need to explain more than one wants to have explained. You put knowing over feeling.
Two more pairings, now, for you to appraise. Both are the first paragraphs of their respective books. Both take their inspiration from memories of a childhood home. Both set into motion explorations of growing up in a "different" kind of world. Both are designed to bring the outsider in. But look at what tense does—which doors it opens and which it shuts.
There is laughter. There is the sharp report of a slamming door and the staccato of high heels crossing the ceramic tiles of the atrium garden. There is the reveille shout to the servants' quarters, the slap of sandals making their way to the animal pens, the _skrawk_ of chickens as they are pulled from their cages, one by one, into the ink of night. It is three o'clock, before the light of day. —Marie Arana, _American Chica_
That our family's home was a school for the deaf did not seem in any way extraordinary to Reba, Andy, and me. Lexington School for the Deaf was simply where we came from. Our apartment was on the third floor of the southern wing of the building, above the nursery school and adjacent to the boys' dormitory. The walls and doors, incidental separations between our living space and the rest of the building, were routinely disregarded. Our father might be called away from the table in the middle of dinner; we children often played down the hall with the kids from the dorm. It wasn't until Reba, my older sister, proved at age six to be a sleepwalker—discovered one night riding the elevator in her pajamas—that our parents even thought to install a proper lock on the front door. —Leah Hager Cohen, _Train Go Sorry_
Marie Arana is telling her story in a vivid, emotive present tense. A slamming door. Staccato heels. Reveille shouts. Chicken _skrawk_. Leah Hager Cohen makes a different choice, writing, as she does, in a muted, more rational, more cerebral past tense. There are no sounds here, in this paragraph. No _skrawks_ , no shouts. Hager lived, after all, among the deaf. It is neither her task nor her purpose to insert her readers inside the noisy, immediate bustle of childhood.
The transuding present tense, then? The phrenic past? Sometimes, but not always. Because certainly past-tense memoirs or memoiristic scenes can be and often are as vivid, as intense, as suspenseful as those written in the tense of _now_. No one, for example, has ever claimed that Jeannette Walls in _The Glass Castle_ did not start out with a deeply visceral bang. Her past tense is electrifying, absorbing, frightening, and also essential here. It deepens our trust, reassures us, somehow, that these tales of a feral childhood have been sifted and sorted over time.
I was on fire.
It's my earliest memory. I was three years old, and we were living in a trailer park in a southern Arizona town whose name I never knew. I was standing on a chair in front of the stove, wearing a pink dress my grandmother had bought for me. Pink was my favorite color. The dress's skirt stuck out like a tutu, and I liked to spin around in front of the mirror, thinking I looked like a ballerina. But at that moment, I was wearing the dress to cook hot dogs, watching them swell and bob in the boiling water as the late-morning sunlight filtered in through the trailer's small kitchenette window.
Neither past tense nor present tense is generally wrong, then, nor generally right. And other tenses, too, can be and often are effectively deployed; future perfect is, for example, ripe with possibilities ( _I will have learned; I will have lost; he will have left me, but I don't know that yet_ ). And sometimes—indeed, often—multiple tenses appear in a single memoir as authors wend back and forth through the years. Time is the memoirist's salvation and sin. Time is the tease and the puzzle. Time is the trickster, the tormentor, the vexer. Time solved or resolved is memoir mostly mastered.
Mary Karr brilliantly deploys dual tenses in her classic memoir, _The Liars' Club_. She even confides her strategy: "My father comes into focus for me on a Liars' Club afternoon. He sits at a wobbly card table weighed down by a bottle. Even now the scene seems so real to me that I can't but write it in the present tense."
In _Eat, Pray, Love_ , Elizabeth Gilbert also makes fluid use of _it happened then, it's happening right now_. On a single page, in sections divided by white space, she addresses her readers with sentences like these: "I was with Luca the first time I ever tried eating the intestines of a newborn lamb," and "Sometimes I wonder what I'm doing here, I admit it." She was and she is. It's okay. We're not confused.
But we, your eager readers, will only get it if you have been deliberate and smart. Passage by passage, chapter by chapter, you must decide how you will use tense to your advantage. To constrain or to free. To mark yourself as a certain _kind_ of writer. To shape the story you wish to tell. To put aspects of your story _at stake_.
# FIND YOUR FORM
YOU noticed something back there, didn't you? It caught your eye. That Mark Richard fragment—that _you_ instead of _I_. But aren't we talking about the single-letter pronoun when we're talking about memoir? Aren't we talking about ourselves? Why would Richard tell his story with a _you_? And come on, let's be honest, call a spade a spade: Is he even _allowed_ to?
He does it because, in this case, the _you_ is more intimate, more forgiving, more moving than the _I_ ever will be. It enables Richard to say things about himself and his ungodly circumstance that would be otherwise unthinkable. Richard is, as we now know, a special child. There's something nearly ferocious about him, nearly feral, and besides, his legs don't work. His hips "click and pop." He is poor and from the South, and his father is wrecked by a cruel streak. We readers know what is coming. It helps if we can (just slightly) avert our eyes. Richard's second person allows us to do this. It gentles his story, yields something lush and kinder, less abrasive, and somehow (despite the many missing pieces) whole. Richard has done what he must do, first, to make his own story writable for himself and, second, to draw his readers near.
Are you still thinking that this is but a stylistic tic or trick? Have I failed to convince you? All right, then. Convince yourself. Take the passage that I quoted in the previous chapter and rewrite it in first person, as best you can. I asked my students to do this one semester. Here is what Beryl (who hit the assignment out of the park) wrote:
I made my way around the familiar ward. There were cleft palates everywhere, kids with black stitches all over their faces. Their faces used to make me cringe, but now they just made me giggle. For some, the stitches were concentrated in the center of the upper lip, just like Hitler's. Other stitches looked more like a cat's whiskers; thin black lines covering the outer edges. These stitches comforted me in a way, made me feel like I was home. As I made my way out onto the big sun porch, I scanned for familiar faces. A feeling of relief passed over me once I recognized Michael. Made me feel like I wasn't so much of a stranger, after all. When I saw Nurse Wilfong, a smile spread across her face. "My how you've grown!" She fussed and fussed, remarking on Mama's cooking and how big I had gotten and how much she had missed me. I had missed her too. As I continued to scan for familiar faces, I realized one was missing. "Where's Jerry?" A dark cloud passed over Nurse Wilfong's face. I knew what she was going to say before she even said it. A million thoughts passed through my mind—chess, the blue plastic plate, his legs. And as she uttered the words that I had predicted, I began to sob.
Decide for yourself. Which version of the story takes you deeper? Which puts you in the ward, with the heat, with the missing friend, with the despair? As Beryl reminds us by way of her alternative example, saying less is often more, indirection has its emotive appeal, and the overly self-assertive _I_ can feel like an ambush. The second person, in _House of Prayer No. 2_ , does not ask for empathy. It earns it.
bell hooks slides all across the grammatical spectrum to render _Bone Black_ —deploying first person singular ( _I_ ), first person plural ( _we_ ), third person plural ( _they_ ), and third person feminine singular ( _she_ ) to quilt together her account of growing up readerly, passionate, different, and black in the South. In sixty-one abbreviated chapters—prose poems, really—hooks's pronouns are an endless source of suspense, a slowly depuzzled tension. "Mama has given me a quilt from her hope chest," the book begins, a false promise of the familiar.
But like Mark Richard, hooks soon finds the need to put some distance between her now self and her then self, and a switch to a new pronoun gives her that freedom: "She was considered a problem child, a child intent on getting her own way." Throughout the book, hooks will transition from the choral _we_ to the lonely _I_ , often within a fluid paragraph:
We learn about color with crayons. We learn to tell the difference between white and pink and a color they call Flesh. The flesh-colored crayon amuses us. Like white it never shows up on the thick Manila paper they give us to draw on, or on the brown paper sacks we draw on at home. Flesh we know has no relationship to our skin, for we are brown and brown and brown like all good things. And we know that pigs are not pink or white like these flesh people. We secretly love pigs, especially me. I like to watch them lie in the mud, covering themselves in the cool red mud that is like clay, that is flaming red hot like dirt on fire.
hooks's memoir, first published in 1996, is imagistic and suggestive, far more about what was gleaned and felt than about the contour of events. hooks has a plan; she has a method. She explains herself in the foreword:
Sometimes memories are presented in the third person, indirectly, just as all of us sometimes talk about things that way. We look back as if we are standing at a distance. Examining life retrospectively we are there and not there, watching and watched. Evoking the mood and sensibility of moments, this is an autobiography of perceptions and ideas. The events described are always less significant than the impressions they leave on the mind and heart.
I offer these examples (and of course there are others, notably Ned Zeman's brilliant _The Rules of the Tunnel_ ) with the hope that you, taking note, will feel liberated. Memoir, the _I_ genre, need not box you in. Memoir calls for experimentation, leaps of faith, new takes on the old truth serums. Memoir writers should not automatically assume that the _I_ will be sufficiently bold and bright for their stories.
Nor should they assume—should _you_ assume—that words alone will be sufficient. Look at the ways that Orhan Pamuk uses photographs—historical photographs, family photographs, both art and documentary photographs—not just to illustrate his memoir, _Istanbul_ , but also to generate a state of melancholy, a very particular mood. Or consider Dorothy Allison's use of family snapshots in _Two or Three Things I Know for Sure_. Even as she warns us that she may be embroidering her history and her people, even as her two or three learned things overwrite one another and multiply and ascend, Allison grounds us, with those photographs, in the immutable real—real people, real postures, real poses, real losses. Allison's photographs are her hard-core, tack-sure facts. They are among the reasons that we, despite her own disclaimers, trust her.
It's entirely possible, finally, that words and photographs, melded tenses and unexpected pronouns won't be enough, that you'll still be out there floundering for a way to tell the truth. Maybe the complexity of all you have to say—the overt stories, the hidden ones, the surfeit of details, the unspeakable pangs—can only be captured illustratively, comic-book style. Perhaps you—like Art Spiegelman, Harvey Pekar, Alison Bechdel, Marjane Satrapi, and others—have a gift for doing more with a pencil than simply scratching out the alphabet.
If that's the case, pull Bechdel's graphic memoir _Fun Home_ off the shelf and study it for a very long time. Ask yourself what her illustrations do to deepen her story, to draw readers in, to re-create but not to persecute her terribly complicated father. How is an illustration different from a photograph? What degree of complexity is enabled by thought bubbles, gutters, captions, the signaled sound effect? What can a graphic memoir do, and what could you do with it?
Find your form. Work beyond the box. Secure a workable frame.
# PHOTO SHOP
WRITING indulges the myth of continuity. Photographs suggest the significance of the single instant. Ever since a fourth-grade teacher helped me turn a musty Quaker Oats box into a pinhole camera, I've been chasing photographs. Since I fell hard for words (the sound of them, their shape) at about the same time, I've been caught, eternally thereafter, within the seductive snare of both.
That's not a bad thing.
In fact, I'm not entirely sure that I would still be writing today if I didn't have—or, more important, didn't _take_ —photographs. So affixed an appendage is my Sony DSLR-A700 (I wear it loose, dangling from one shoulder; I wear it cradled in one hand; I wear it like a necklace on windy winter days) that I am known by some as the Crazy Lady. That camera-toting enthusiast.
But listen: The weight of the camera reminds me to see. It helps me decide against deciding that my world is overly familiar, already known. I look for cracks and fissures, for the new or newly announced. I look for water to run a different color in the stream, or for the sun to strike the pond in winter with delirious force. If I can't see, then I don't know, and if I don't know, I'm not writing, and while some may question the value of words, or of memoir in particular, I will again make this claim: Words rendered true spook and spur us. They expect _of_ us. They expect _for_ us. Photographs do the same thing: "Your photography is a record of your living, for anyone who really sees," said Paul Strand.
A little to the right, with a photograph, a little to the left, and everything changes. Zoom in, zoom out, and you have a new story. Consider what happens when you replace your wide-angle lens with a macro. Suddenly you go from the sweet blue bend of long horizons and bottom-bowled clouds to the bizarrely microscopic. Between rain bursts, you'll find me crouching in my garden with a macro, dialing in and out of temporal focus, catching the reflections off a puddled stamen, discovering the zebra stripes of an iris. I'll be thinking about how razor-edged the lily is, or how skirted and blurred the hydrangea seems, when photographed from above. I brace myself for the macro lens. I try not to breathe. I snap.
"I prowled the streets all day, feeling very strung-up and ready to pounce, determined to 'trap' life—to preserve life in the act of living. Above all, I craved to seize, in the confines of one single photograph, the whole essence of some situation that was in the process of unrolling itself before my eyes." The words of Henri Cartier-Bresson.
"To quote out of context is the essence of the photographer's craft. His central problem is a simple one: what shall he include, what shall he reject? The line of decision between in and out is the picture's edge. While the draughtsman starts with the middle of the sheet, the photographer starts with the frame." The words of John Szarkowski.
I ask my students to bring their cameras to class. I ask them to go out and snatch up ten pictures. _Photograph what interests you_ , I say. _And come back in a half hour._ And then I wait. Okay, I don't actually just sit there and wait. I go out with my own camera, photographing students taking photographs. Crazy Lady and whatnot.
Out of breath, trailing laughter, already scheming about the words they will put down for the assignment they are sure they will be given, the students return. Easy peasy, right? The Crazy Lady wants them to find the words to describe the best-slash-favorite-slash-most-iconic-slash-killer-interesting picture(s) taken? Nope. Too obvious. Too surface. The assignment I give goes something like this: Study the background of any chosen photograph. Not the foreground, the background. What's in the picture that you didn't see when you were snapping? What lies beyond the chosen subject—just to the right or to the left? How do the borderlands shadow and shape the subject? What does the startle of the once-unnoticed detail suggest to you? What would happen if this small thing—and not the obvious thing, the central thing, the thing easily seized and snatched—was the start of your story?
Look deeper.
See smarter.
Consider.
So that the plate of cookies I had brought to class, photographed as a student's tenth subject, is suddenly perceived as the wooden table beneath the plate—scarred and secretive. Or the photograph of the man on the bike is not a photograph of a man on a bike after all but instead an image of the pane of glass behind him, a peek inside an urban gym and the endless going-nowhere of treadmill athletes; the bike moves, the treadmill athletes do not, tell me the story. Or maybe the streetscape snapped in a windy hurry is actually a portrait of a traffic light, frozen red: _Stop. Now._ Or maybe that portrait of the campus compass tiled into Locust Walk is not a portrait of a compass after all but of a chicken, off in the distance—or, as Liz wrote, "a man in a chicken _costume_. Slight difference. My eyes trace up his body from the large orange feet, past the heinous yellow feathers, up to the face of a brunette boy with a backwards baseball cap. I smile to myself. _This is Penn_ , I think, _utterly ridiculous and entirely beautiful at the same time._ "
Too many people think when they are thinking (theoretically) about the memoir they feel bound to write that they know what the story is. "My memoir is about what happened to me," they will say, and then recount, in a sentence or two, the headline news. A child lost. An illness overcome. A wicked mother. A farm recovered. A month in Africa. A liar! A cheat! A scandal survived!
But the headlines are only the headlines, a blare. Or, as Vivian Gornick writes in _The Situation and the Story_ , they are merely situation. What readers want is meaning. They want a story so rich, complex, thought through, and learned from that it can't, in fact, be revealed by a headline or two; it can't be satisfactorily summarized. Readers want to be able to participate. They want to discover, with the writer, those images at the edge of the frame, or over to the side, or just a tad blurred that have, as it turns out, something rich to say. Something powerful and universal. Something extracted and framed as, in the words of Loren Eiseley in _All the Strange Hours_ , "the unique possession of a single life":
There are pictures that hang askew, pictures with outlines barely chalked in, pictures torn, pictures the artist has striven unsuccessfully to erase, pictures that only emerge and glow in a certain light. They have all been teleported, stolen, as it were, out of time. They represent no longer the sequential flow of ordinary memory. They can be pulled about on easels, examined within the mind itself. The act is not one of total recall like that of the professional mnemonist. Rather it is the use of things extracted from their context in such a way that they have become the unique possession of a single life.
Yours is a single life. It is _the_ single life of your memoir.
# DO YOU LOVE?
IN his letters to the nineteen-year-old Franz Xaver Kappus, Rainer Maria Rilke said this about love:
It is also good to love: because love is difficult. For one human being to love another human being: that is perhaps the most difficult task that has been entrusted to us, the ultimate task, the final test and proof, the work for which all other work is merely preparation. That is why young people, who are beginners in everything, are not yet capable of love: it is something they must learn. With their whole being, with all their forces, gathered around their solitary, anxious, upward-beating hearts, they must learn to love.
Not long ago, I read these words to a classroom full of eighth-grade girls. _True or false?_ I asked. Some heads shaking no. Some heads shaking yes. Then I made a request: _Write for five minutes about one thing that you are learning to love._
_Anything?_ one freckle-faced girl asked, after everyone else had started scribbling.
_Anything_ , I said.
She scrunched her nose. She scratched her head. She couldn't get a toehold. I told her a little about me, about how I've lived wanting, reaching, exuding, falling, again reaching and again wanting more. I loved the wild, reckless freedom of the ice, I said, explaining my youthful figure skating years. I loved watching two pools of watercolors merge and make a brand-new color. I loved my cat, a calico I'd rescued from a graveyard. I loved the fancy things that a guy named F. Scott Fitzgerald could do with words.
But still the girl pondered as other pencils scratched, and so I read her a poem about applesauce by Ted Kooser. It's not a poem about love, exactly. But it is about its cousin, like: about what apples do when boiling on a stove, about how a kitchen changes when suffused by the smell of warm apples.
Oh, the girl said. I don't like applesauce, she said. But I do like sunny afternoons when I'm homework-free, I like talking to friends on my pink iPhone, I like the ring I wear on my pinkie and (possibly) the person who gave me the ring. She smiled. She got to work. A fragment of a poem by the former poet laureate had given this still-learning adolescent a place to start. It had set her moving in the direction of love.
Do you love? Are you still learning to love? How hard is this love thing, for you? It's not a question reserved for the young. It's a question for all of us, and it's a question we must repeatedly ask ourselves, especially when we're writing memoir. If we don't know what we love—if we're not yet capable of it; if we're stuck in a stingy, fisted-up place; if we're still too angry to name the color of the sun—it is probably too soon to start the sorting and stacking and shaping that is memoir. Maybe we haven't learned enough yet. Maybe we haven't sufficiently tempered our disappointment with grace. Maybe we haven't stopped hurting long enough to look up and see the others who hurt with us, who stand in our (it only seems invisible) community. Maybe we only have words right now for our mighty wounds and our percolating scars. And if that's the case, let's step aside, for those words alone are the stuff of litanies, screeds, judgments, and declamations; they're the stuff of long and lonely writing rides. You'll be looking at you, talking through you, talking about you, talking at me, and it'll all be bump and grind.
Call me sentimental; others have. Remind me that the world is dark and ugly, that people are cruel, that injustice reigns, that children suffer, that the wrong people win, the wrong people triumph. I know. I have been there. I have seen. I have lost to the infidels once or twice myself, and that woman—that woman with the short auburn hair and the bright red lipstick who laughed at how I danced and moved and talked, who called me _old_ —she had _no_ business making me feel like that, doubt like that, stop sleeping. Seriously, she didn't. But no memoir is worth reading if it is not leavened with beauty and love. And no memoirist should start her work until she can, with authority, write about the things she loves.
So think about it. Put yourself in that half place between dream and story, and hover. Think about how the world leaks and scrambles out toward possibilities and how, between divisions, under stones, in the eyes of a child, in the spark of first sun on a river reflecting blue, passions get their running start—or should. Think about the smallest things that make you happy—Kooser's apples, maybe, or the backyard oak, or a full moon rising on a high tide, or your mother, after all, or the man you're actually glad you married, or the child you thought you'd never have, or the neighbor you so purposely ignored until his pear trees bloomed such a snow-fantastic white. Sit in a chair and conjure beauty and goodness, the stepping-stones of love. Make a list. Tangle up with metaphor. Practice gratitude. Rest assured you'll be given a chance to tell the _whole_ story soon. But start, for now, with love.
By the way: If anyone calls you sentimental while you sit there locking language to love, remind him that love is the hardest thing we do, the most complicated, riddled, as it is, with guilt and forgiveness, anxiety and insecurity, our supremely human need for redemption. Tell him that hate, anger, retribution, and clenched jaws are going-nowhere stories, unidirectional shouts. Tell him that love is where life stories start, no matter what one is writing about.
Maybe he'll remember the things he loves, too.
Maybe helping him remember is one of the many things that your memoir, when you write your memoir, can and must ultimately do. "Love, like light," Adam Gopnik said, "is a thing that is enacted better than defined: we know it afterward by the traces it leaves on paper."
# WHETHER THE WEATHER
I know what the weather was when I entered my husband's dorm room at Yale University to say, _Yes. Okay. I will marry you._ It was February's version of cold, and lonely birds squawked outside, and though it was late afternoon, the air was the color of morning fog. I had taken the train up, found my way to his room. I had had all those hours to decide whether this artist from El Salvador whose paintings I knew better than his ambitions, whose family I had never met, who favored the black and blue of night over my own peach-tinted dawn was, as they say, The One. I was twenty-three. I felt, at that time, old. I watched the weather through the scratched window of the Amtrak train and tried to read the signs in the air, translate the frantic cautions of the tossed and hieratic birds.
I know what the weather was when my mother died. I had spent the final difficult months at her side, had sung her songs, had placed and replaced the flowers and photographs, the potted lemon tree, the Bible in what would be her last room; I had heard (will never forget hearing) what she meant for me to hear. I had said good-bye because I knew it was her time, because I had somehow understood that she wished her dying to be a private thing, and so I was out, walking in the dark beneath a few bright stars when I felt the nudge of a breeze on one shoulder. A knock. And then a whoosh. "Mom," I said, for it was her, I knew. Her final earthly touch.
I know what the weather was on the night before we drove our son back to his second semester of college. We'd waited all day for the snow, and when it came the flakes were saucers—huge and slant, conjoined. We had had our time as a family of three, but the next day our boy would be headed back to the hills, to Literature and Advertising, to Probability and World Cultures, to a sound engineering booth and a dorm. So we drove through the night on backcountry roads—the snow falling, the moon rising, the world bright and wholly bittersweet, for what does one do with the deep, rutted, impossible love for children who grow, too, who emerge, like us, into the age they are becoming? What does one do but drive across roads and inside the shell of a heart-quelled silence, anticipating tomorrow? For that is what the weather was that night—a heart-quelled silence.
I remember weather. Do you? I am _convinced_ by it, so on mornings when I wake to the whisper-rush of snow, when I feel the roof heavying down, the silence deeper than the previous night, my sentences grow long, embedded, rounded. But then, on days that are blue sky and angling for warmth, my sentences take on the connotations of jive. They're all quick steps and electric slide. There is no escaping this, or there shouldn't be. Weather, and how we both live and write it, must enter—should enter—into the memories we make and resurrect.
Look outside, go outside, write this right now: The quality of breeze. The evidence of dew. The pile of clouds on the horizon. Find the words. It doesn't matter how tired you are. It doesn't matter if you think there's nothing new here, if weather has been done before, if weather isn't (to you, at least) the story. An alive sky is a whole soul; you must let it filter through you. Watermelon. Lilac. Gunmetal. Blue. Upticking fog. Rain as the sound. Sun as a caution sign. A moon that has gone fishing. A cranberry-colored landscape. Cold for August. Thunder like a jet just off the tarmac, hail the size of rock salt, the straight white nails of rain driving through, or just the gray pale pink before a storm, or, again, fog curl with a mind of its own.
Write the weather of this instance; find the words. Put yourself in a weather zone, and then let your mind drift back. Write the weather of your wedding day, now, or the weather of your first school day, or the weather of a funeral day, or the weather of a carefree day. What is on your page? Is it rhapsodic? Is it stark? Is it original? Is it true? Where do weather memories and weather words take you?
Too many people forget, when writing memoir, the power of context, the evocative tug of the broader tapestry. They'll focus on lines of action—on he said/she said/they did. They'll show you the crimes or craft a five-page monologue or slam you with the simmering gossip— _I was young; I lost my mind._ And in all this rendering of the facts as best as any facts can be recalled and subsequently rendered, the wider world gets lost, the extenuating circumstances, the reality that things are always bigger than you or me.
Don't lose the wider world. Carry more than the events themselves forward. Carry the images, the sensory shocks, the small interludes of cloud play, sun scream, the smell of rain, the yaw of an old birch branch, the scattering of sky.
It's possible—even probable—that there will be a lesson in all of this, that background will again become foreground, that what appeared to be inconsequential at the time was in fact a foreshadowing or a judgment. Pay attention to weather. Bend it into words. Ask yourself if it has a rightful place in the memoir you will be writing. "The range of a writer's metaphor is a measure of the range of his cognition," Leon Wieseltier once wrote, in a review of _Saul Bellow: Letters_. I am going to borrow that line and make a few substitutions: The range of a writer's weather vocabulary is a measure of the range of her perception.
I'll close this weather exhortation (rant?) with this: I have a friend named Alyson Hagy. We met years ago, thanks to a grant we'd both won, and she has gone on to do so many important things as both an author of stunning talent and a brilliant teacher at the University of Wyoming in Laramie. I learn important things from Alyson—about teaching, about writing, about the power of humility—and because she lives so many miles from where I do, our conversation is almost exclusively over e-mail. Nearly a thousand e-mails from Alyson now, and almost every one of them relates something of her weather. The early snows. The late thaws. The confusion of fish in hoary streams. The windy disruption of bird life.
"The hint of autumn was subtle," Alyson has written. "The way the clouds built in the morning—not as mountains of cumulonimbus but as layers of cumulus and cirrus. The early cry of the fledgling red-tailed hawks that have nested in the neighborhood and are now just out on their own—so wary, yet so dangerous (they have quieted the local crows). The way the dust on the breeze smelled cool instead of hot. It's usually 50 in the morning now instead of 55. The days are still glorious and bright, but you can see how the robins are hustling to plump up for departure."
There's story there, the tantalizing breath of memoir. There's Alyson yielding the wide, ungovernable world of her weather and—at the same time—her way of seeing, her patterns of perception. Tell me how you see your weather, and you will tell me something of yourself. I want to know not just _what_ you see but also _how_ you see, in every line that you call memoir.
# LANDSCAPE IT
ON the day that I turned forty-one I found myself at a pleasure garden some ten minutes down the road from where I live. I had gone alone. I was there just to be. I had written four books—four memoirs—and I was all done, I thought, with words. Sick of my own stories. Sick of my own responsibilities. Sick and tired and needing a world far more interesting, complex, mysterious, forgiving than I thought myself to be. You can worry yourself out as a writer. You can grow empty, redundant, spiritually thin. I had worried myself down to the bones.
Over the next two years I would visit that garden weekly during its open season. I would revel in all that I did not know, take small half steps toward knowledge. Gardeners would teach me. The weather would abrade me. The landscape would change the way I walked and saw. An old lady would ask me a question—_How do you see everything?_—and I would wonder my way toward an answer for days afterward. It was as if someone had taken a saw to my chest, split my rib cage, and made more room for my heart.
All throughout our lives, we move through, we move against, we move toward landscape. We dress for landscape. We sweat the hills. We take our children to the ocean's edge. We rise at three in the morning to see how a certain rock face will hold the moon. We nestle close to the spray of a violent waterfall. We roll down the hill just past the forest. We gather the wildflowers because we can't take the rocky path home. We hurry our friends along, or we sneak out alone. We are shaped by landscape, and we tug at, plow into, level it ourselves, exerting our own ideas upon it.
I conceived of the garden as a poem in stanzas. Each terrace contributes to the garden as a whole in the same way each stanza in a poem has a life of its own, and yet is part of a progressive whole as well.
The form provides some degree of repose, letting our mind rest in the comparatively manageable unit of the stanza, or terrace. Yet there is also a need to move on, to look beyond the stanza, into the poem as a whole. —Stanley Kunitz, _The Wild Braid: A Poet Reflects on a Century in the Garden_
The beauty of a broken fountain, an old ramshackle mansion, a ruined hundred-year-old gasworks, the crumbling wall of an old mosque, the vines and plane trees intertwining to shade the old blackened walls of a wooden house—these are accidental. But when I visited the city's backstreets as a child, these painterly tableaux were so numerous it was difficult, after a point, to see them as unintended: these sad (now vanished) ruins that gave Istanbul its soul. But to "discover" the city's soul in its ruins, to see these ruins as expressing the city's essence, you must travel down a long labyrinthine path strewn with historical accidents. —Orhan Pamuk, _Istanbul_
It is interesting, given all the seething power of both the rising and the ruined, how rarely landscape seeps convincingly into the work of aspiring memoirists. A country might be named, or a mountain peak, or a flower. But by and large, beginning memoirists tend to discard or forget to see the power of earth vents and lava flows, caldera and geysers, the frozen life inside the fossil, gorges and stalactite caves, ice margins and deltas, skyscrapers and central parks. They consign landscape to background, or render it as mere decoration. They say, _I was here, here, here, and here_, but they do not plumb here's depths.
No doubt some of my students would if the pieces we work on weren't constrained by a certain word count. Some of them would ultimately _get around_ to landscape, but getting around to landscape is not the same thing as deliberately mining it for metaphors and wisdoms, contours and sensibilities. Getting around to landscape does not honor landscape. It does not even begin to tap the possibilities that range within.
You don't need a geologist's vocabulary to write landscape. You don't need to go all textbook—kettle lake, oxbow lake, fault spring, graben lake. In the right circumstances, that kind of talk can take you and the reader somewhere. But landscape simply drawn tells stories, too.
Consider this passage by Debra Marquart in _The Horizontal World_. The author is returning home, to her father's funeral. She's revisiting familiar childhood terrain and seeing, in that patch of horizon, a magnetizing mythology.
On the morning of my father's funeral, as we came over the next rise, I saw we had three miles to go. This is Logan County. While it may be just another patch of flat horizon to someone driving through, to the people of my family it's the navel of the earth, the place from which all things flow and to which all things return in time.
For Susan Brind Morrow in _The Names of Things_, time spent observing the natural world leads to quiet reconciliation and deep insights. The first paragraph excerpted below is, absolutely, pure description. But it isn't long before these crabs, this white sand, those mangroves are yielding an understanding of language itself.
As I walk along the shore of the Red Sea at dawn a hundred pale pink crabs scuttle carefully back across and into the white sand. Behind a sharp crust of coral a rock crab, seaweed-green edged with red, pries the back off of a sand crab and feeds. It is not so easily frightened and merely watches me. There are tiny porcelain-blue crabs in the mangroves a few miles south, popping out of the dense muddy quicksand like living jewels.
In this harsh environment, life itself is a gorgeous miracle, coming out of the barren desert, out of the bitter sea: hals, the sea of salt....
Words begin as description. They are prismatic, vehicles of hidden, deeper shades of thought. You can hold them up at different angles until the light bursts through in an unexpected color. The word carries the living thing concealed across millennia.
In her classic memoir, _Refuge_ , Terry Tempest Williams takes us into a world of enormous beauty and troubling wreckage. Williams's mother is dying of ovarian cancer. A bird refuge is being threatened. Williams is losing the things she values most and struggling to come to terms with her grief. Here the natural world does not merely suggest metaphors, or offer escape, or divulge some previously foggy truth. Here Williams _becomes_ landscape. Landscape steadies her.
I know the solitude my mother speaks of. It is what sustains me and protects me from my mind. It renders me fully present. I am desert. I am mountains. I am Great Salt Lake. There are other languages being spoken by wind, water, and wings. There are other lives to consider: avocets, stilts, and stones. Peace is the perspective found in patterns. When I see ring-billed gulls picking on the flesh of decaying carp, I am less afraid of death. We are no more and no less than the life that surrounds us. My fears surface in my isolation. My serenity surfaces in my solitude.
There are writers, like Rick Bragg, who give us landscape first, as here in the opening scene of _All Over but the Shoutin'._
My mother and father were born in the most beautiful place on earth, in the foothills of the Appalachians along the Alabama-Georgia line. It was a place where gray mists hid the tops of low, deep-green mountains, where redbone and bluetick hounds flashed through the pines as they chased possums into the sacks of old men in frayed overalls, where old women in bonnets dipped Bruton snuff and hummed "Faded Love and Winter Roses" as they shelled purple hulls, canned peaches and made biscuits too good for this world.
There are writers, like Isabel Allende, in _My Invented Country_, who take us south, into the sun.
I recall that my family and I, loaded with bundles, climbed onto a train that traveled at a turtle's pace through the inclement Atacama Desert toward Bolivia. Sun, baked rocks, kilometers and kilometers of ghostly solitudes, from time to time an abandoned cemetery, ruined buildings of adobe and wood. It was a dry heat where not even flies survived. Thirst was unquenchable. We drank water by the gallon, sucked oranges, and had a hard time defending ourselves from the dust, which crept into every cranny. Our lips were so chapped they bled, our ears hurt, we were dehydrated.
There are those—Mary Morris, _Nothing to Declare_—who will take us, so persuasively, to San Miguel that I later followed in her footsteps.
You come to the old Mexico, a lawless land. It is a landscape that could be ruled by bandits or serve as a backdrop for the classic Westerns, where all you expect the Mexicans to say is " _hombre_ " and " _amigo_ " and " _sí, señor_." It is a land with colors. Desert colors. Sand and sienna, red clay and cactus green, scattered yellow flowers. The sky runs all the ranges of purple and scarlet and orange. You can see dust storms or rain moving toward you. Rainbows are frequent. The solitude is dramatic.
There are those—read Mary Karr's _The Liars' Club_—who render the poison and slick of an oil refinery into some otherworldly landscape.
In the fields of gator grass, you could see the ghostly outline of oil rigs bucking in slow motion. They always reminded me of rodeo riders, or of some huge servant creatures rising up and bowing down to nothing in particular. In the distance, giant towers rose from each refinery, with flames that turned every night's sky an odd, acid-green color. The first time I saw a glow-in-the-dark rosary, it reminded me of those five-story torches that circled the town at night. Then there were the white oil-storage tanks, miles of them, like the abandoned eggs of some terrible prehistoric insect.
But you don't have to go south or into the purging desert or the sand and sienna of San Miguel to have something to say about landscape. You don't even have to walk a garden for two years, or attempt to build one, or find ghostly beauty within noxious fumes. Because landscape can be the tadpole creek that ran behind your neighbors' houses. It can be the field where you found your first fleck of shiny mica. It can be the curvy path between the trees that you ran that day, alone (again), frightened (terrified) after a teacher had taken you aside to say, _Your mother's been in a bad accident._
Landscape can be your own backyard, or the unlanterned street with the single lit window of a hunched house. It can be big trees or skimpy trees, the rocks where the bobcats prowl or the golden fields of wheat historied by the old grain silo where the black crows make their home. It can be the milky blue fence of the horse show grounds or the round-bellied towers of a strange skyline or the planted rectangles of mobile homes in a concentric trailer park. It can be ruins. Have you, now contemplating memoir, contemplated ruins? Do you know what you are missing? Let Christopher Woodward tell you: "When we contemplate ruins, we contemplate our own future."
Stop reading this book; put it down. Pick up a pen and write what you can see from the nearest window—those fixed forms of your world. Not the weather; that's transient. Not the people; they'll come and go. Look for the bulwarks, the hollows, the cracks, the rounded masses, the chiseled, the pillared, the rising, the sunk, the missing. Geometry should factor in. Palettes and hues. Shades and pockmarks. Rough and smooth. Deliberate and accidental. Verging on gone. Write it all down so that when you send it to me or share it with your neighbor or blog it for the world, we can _see_.
Now close your eyes and find within yourself a landscape from long ago. Put this down, too, best as you can. Don't pretend to see what you cannot. Don't airbrush this exercise for the sake of faux completeness. Just put down what your memory gives you, as fragile or flimsy as that seems. Then ask yourself questions like these: Why _this_ landscape? Why its incompleteness? Why have you focused on the upright shafts and not on all of that which blunders horizontal? Why don't the colors come back, or if they do, why are they so loud and self-insistent? To what part of yourself, or your story, does this landscape return you?
And what do you know now that you couldn't have known then?
Where does landscape take you?
# THINK SONG
THE notebook in which I write these words is slick and sloganed: _I am fairly certain that, given a Cape and a nice tiara, I could save the world._ That's the front cover. Inside, two words and a period (I take special note of the period): _write love._
It would be impossible not to when writing of my students. It would be impossible not to _feel_. Because look at them—their heads bowed to the prayer of a memory teased forward by the music of Astor Piazzolla. They write to the tango, to the slow andante that spins in the old computer's tray and sifts through dusty speakers. They walk themselves back, their eyes half-closed, in a room crowded with tossed coats and fatigued bags, the Styrofoam crypt of abandoned French fries, the molder of snow that has collected in the treads of their boots.
It has been the winter of white skies and frozen slicks, but here they are, in a room of andante, shoulder to shoulder, remembering other temperatures, a different face of the sun, because that is what I have asked for here. Ten minutes spent remembering a childhood encounter with weather—a moment evoked by the Piazzolla song.
Watching them remember, I remember, too. The fog curl and cliff erosion of San Francisco. Lagoons drenched with dawn pink. The chill in the underskirt of an ocean current. The smell of Spanish moss after a torrent of afternoon rain. The split of a lightning-fractured sky. I watch, and as I watch, on torn pages and laptop screens, a storm breaks and clouds gather and elsewhere there is sun.
One sentence, or two. Bold. Unpredictable. True. Read aloud from what you've just written, I say, and the students do—and in their work I hear the dawning of new voices, new sounds, lines aided by song. "We will go where the wind takes us," Dascher writes, a beginning. Often the students are surprised by the sentences they produce. They didn't, they tell me, know they were capable of this—these collisions, these rhythms, these isolated or meshed or intravenous details. Something is happening. Something is new. Hold on, go further, see what you're capable of.
A small moment. A beat of silence. And now I trade the Piazzolla song for the Benedictine monks of Santo Domingo de Silos. I ask the students, again, to sit and listen, dream backward, reinvent. Another day of weather. Another month, another year. Where does the music take them? What language enwraps that then most fully? Devotion and lift. Ease and inner stillness. Anticipation, too. What do you hear? I ask. Where are you? Write it. Let your words uncover you. Let your words _prove_ you.
Across the campus, in darkened auditoriums, faculty pontificate, students take notes, and the business of actual learning goes on. But here in our room the monks are chanting and my students are sighing, tentative, wondering. They are going back in time, breaking the mold of the familiar in search of something equally true.
Maybe it doesn't sound all that Ivy League or résumé building to ask students to honor the smear of childhood or to heed the rhythms of remembered weather. Maybe I'm the only writing teacher spinning discs on ancient machines. And maybe it's a tad shy of rigorous to conduct a classroom full of eased-back kids—dreamers and window watchers, scribblers and flippers of pens, dismantlers of paper clips.
Maybe.
But I think not.
Because something always happens when I let foreign music spin. Shoulders drop. Postures settle. Words come out newly. Within the raw yelps of these music-infused exercises we discover, together, what the aspiring makers of memoir have within themselves to do, and to be. They haven't started writing true memoir yet. They haven't chosen a topic, delivered their proposals, made an explicit promise about form and meaning. All of that will come.
For now they are on speaking terms with a broader range of linguistic possibilities, and I'm going to keep them here for a while longer. There is still some not-yet-writing-memoir work to be done.
# THE COLOR OF LIFE
HOURS before the forty-ninth National Book Awards ceremony got under way, Alane Salierno Mason, the editor who had found my first memoir in a slush pile and called me on my birthday to offer me a contract, remembered a room I had to see; we went. A lion, an edifice, a swoop of stairs, and then there it was: big as a city block and skied with permanent weather. There were six-hundred-pound tables and a constellation of polished lamps, people enough for a subway station, though this was the New York Public Library, the newly splendored Rose Main Reading Room. I thought I heard a holy hush. I felt drawn out, thrown out of kilter by the hundreds hunkered down with books.
A while later, John Updike took the stage at the Marriott Marquis to accept the 1998 award for Distinguished Contribution to American Letters. His voice had a quiet, avuncular appeal, and in that darkened room he stepped his audience back into the library of his youth, the glamour of a typeface, the beauty of a book "in proportion to the human hand." There were stacks of books on every table, images of books hung like pendants on the walls. There were authors in the room, editors, publishers, agents, reviewers; there were readers, and we understood why we had come.
The media, the next day and for days to come, would write of dark horses, battlefields, upset victories, dueling styles. They would tally winners and losers as if the making of books were a gamble or a sport. They would declaim the event because their heroes had not been crowned, because somehow they had not deduced the final outcome. But what too many lost in their rush for the headline was the reality of what the National Book Awards is meant to be: a celebration of books. A communion of stories. A tribute to the humanity of words.
When I think back on the utterly unforeseen honor of being named a National Book Award finalist, I remember the bewilderment at having been noticed for such a personal book about love and courage and the distilled sheen of hours spent with family and friends. But from the haze also rises the unforgettable voice of Gerald Stern, a poet who had been nominated that year for his collection _This Time: New and Selected Poems_. Something happened inside my head as Stern read his work out loud over the course of that two-day event—and that same thing still happens, all these years later, whenever I return to his lines on the page. Gerald Stern's poetry cures my migraines. It corrects my blood pressure and shakes me clean and clear; it cracks whatever veneer has settled in for whatever reasons veneers always do settle in.
Which is precisely what I want for my students, for all writers of memoir. No posturing. No attitude. No working off history. No easy riding. No simple chapter two. Because it doesn't matter how many essays you've already written, or how many books. It doesn't matter what others have said or what the juries have decided. It doesn't matter if you're sitting in an Ivy League classroom or at home alone. If you are not awake to the world, if you do not approach the work as if it is the first thing you've ever written or the last words you'll ever say, you have no business writing. Writing is not a task; it is no job. Writing is a privilege.
I use a Gerald Stern poem called "Eggshell" to help my students rid themselves of predictable responses, merely passable language, B-plus muddle. I read from the poem's start:
The color of life is an almost pale white robin's green that once was bluer when it was in the nest, before the jay deranged the straw and warm flesh was in the shell....
That's what I read—the first sentence of an impeccable poem—and then I stop, hold the silence, read again. "What is the color of life?" I ask, and before anyone has had a chance to answer, I insist that they write it down instead. _Their_ color of life. _Their_ hues. The economics of _their_ relationships to things.
Tell me your color of life, and I will tell you who you are. But that isn't the point, not really. The point is for you to know who you are. The point is, once again, to stretch language. Here Nabil gives us red and blue and color reflected in faces. He gives us his colors. He gives us his soul.
The branches on the tree above me spread their hands and fingers outward, obscuring the sun from my eyes but giving me a first-class view of the brilliant blue sky beyond. A striking red cardinal fluttered in the breeze, and took off across my field of vision. Its movements and decisions are not random; they are calculated, deliberate, purposeful. The bird I saw had come from somewhere and was making its way to a place it decided it wanted to go. Each leaf of the tree, too, had seen things nothing else has ever seen, held secrets, experienced and lived the way nobody ever has before. The bird and the tree had stories worth knowing—stories that defined them, that nobody else could lay claim to. Life is the color reflected on people's faces as I walk by—a color that comes from within them, one that is shaped by their past and future. If we do not recognize our circumstances and our stories and hold them dear, then what do we have to recognize at all?
# I HEAR VOICES
A fellow memoirist and reviewer writes: "I'm reading a memoir now where the author has written four chapters full of dialogue for events that occurred when she was four years old. Over half the book occurs before she is ten and it's all about what people said and felt. I don't see how much of this could possibly be true."
My friend's got this right: Nothing makes a reader question memoir more indignantly than the things set aside by quotation marks. You remember that whole entire feminist monologue that your mother delivered when she found you (at age three) smeared with her lipstick, wearing her stilettos? Were you taking dialogue notes twenty years ago when your husband decided to leave you for the cyclist he met at his dance class? You knew the word _testosterone_ when you were five? You knew what _punk_ meant? You were capable of irony? You recollect, in long-paragraph format, the words your mother said upon the death of your (young) childhood pet?
"Why is so much lost?" Joyce Carol Oates asks in her memoir _A Widow's Story_. "Our aural memories are weak, unreliable. We have all heard friends repeating fragments of conversations inaccurately—yet emphatically; not only language is lost but the tone, the emphasis, the _meaning_."
Unless you walked around your entire life with a tape recorder in your pocket, dialogue will become one of the greatest moral and storytelling conundrums you will face when writing memoir. You may feel that you need some of it, a smattering at least, to round out characters, change the pace, dissect the rub between what was thought and what was actually said. You may need dialogue because, in life, people talk to one another and readers want to know what they said; they want to know the _sound_ of the relationships.
Dialogue isn't, strictly speaking, absolutely necessary in memoir. There's not a single chain of the stuff in all of _Bone Black_, the bell hooks memoir, for example. She doesn't need it because the anecdotal is not her concern; because her chapters are short; because the book, with all its invention and complexity, is never in need of a reprieve. Lucy Grealy and Elizabeth McCracken don't rely on dialogue to tell their remarkable stories (_Autobiography of a Face_ and _An Exact Replica of a Figment of My Imagination_); it appears, but only rarely. Other memoirists—most, in fact—can't go the distance without a she-said.
When it is done right, it feels essential; it seems to bring us closer to the story's heart. Consider the exchange that sits toward the end of Mark Doty's _Heaven's Coast_. Wally, Doty's lover, is deep into his journey with AIDS. Concessions must be made, but Wally's reluctant. We don't need a lot of dialogue to understand how much this hurts not just Doty, the story's teller, but also Wally, its subject. We don't even need quotation marks. But we do need to know some of what's been said.
The next morning, his anger is strenuous, and he's more passionate with refusal than I've seen him in months; this will _not_ do.
I say, let's give it a try.
He says, No, I won't have it, no.
Just for a week?
Silence.
Let's see if we can't make it better, and then in a week, if you still don't like it, back it goes.
Similarly, this exchange from Terry Tempest Williams's _Refuge_. The quoted words are essential, I find, to helping the reader understand just how a dying mother and a grieving daughter are coming to their very different, respective terms.
"What I have learned through all this," Mother says, "is that you just pick yourself up and go on." I rub her back while she talks.
"I have fought for so long and I have worked so hard to live through this summer, this fall, Christmas—and every minute has been worth it. And now, it feels good to give in. I am ready to go."
"Terry, you have accepted this, haven't you?"
"My soul has—but my mind has not."
Still, the act of writing dialogue for memoir feels just slightly akin to pinning the once-effervescent dragonfly to the black velvet backing in science class. You've got to be precise. You've got to spread the wings just right. You've got to protect those delicate saucer eyes, even if your hands are a tad sweaty and clumsy. You don't want to make it up, and you might not want to leave it out. Somebody help us with this.
Diaries and journals can be a boon. Transcripts are a blessing. Those who knowingly enter into an experience with the intention of writing memoir (a process known, among other names, as immersion memoir) can choose, and sometimes do choose, to bring a digital memory along. That was Buzz Bissinger's method as he wrote _Father's Day_. He took a trip. He brought his son. He recorded their conversations. Even so, Bissinger does not enclose his conversational threads inside quotation marks:
—Have you ever fallen in love with anybody?
—No.
—Have you ever had a girlfriend really?
—I think I like Shanna.
—Do you know what sex is?
—I've heard about it before.
—What is it?
—When you sleep together.
—Have you ever slept with anybody?
—No.
—Do you want to?
—No.
—Have you ever kissed anybody?
—No.
Somebody yells "Happy Birthday." The television screens show a catastrophic bridge collapse in Minneapolis. Nobody stops to watch.
—Do you hate it when I ask questions like that?
—A little.
—Why?
—'Cause I just do.
For the innumerable many who have not traveled with a juiced-up recorder on hand, other solutions must be considered and assessed. It's helpful to keep in mind that memoir is, first and foremost, a meditation and a quest. Conversational hints go a long way. So do suggestions. Readers don't want to plow through all those _um_s and all those pauses and all those repetitions in the service of "real life."
Nor do (most) readers want to be asked to believe that all those bons mots from childhood have been sitting somewhere, all these years, just waiting to be summoned and set down. It's disconcerting to read page upon page of conversation between a former third grader and her mother. _Really?_ we readers say. _We're meant to believe this?_ It's part of what gives memoir a bad rap. Readers want, at the very least, _proximity_ to truth. They're expecting the acknowledgment, often implicit, that memories about conversations are the least reliable memories around. Discretion in dialogue doesn't just make for more honest memoirs. It makes for better ones.
In _No Heroes_, Chris Offutt showcases the persuasion of the short and the finely snapped. The excerpt below echoes much of the book's frank Kentucky talk. There are no breathless diatribes or monologues. Just the back-and-forth patter of regular folk:
"How's your mom and dad?" he said.
"They're all right. And yours?"
"Same. I see your mom in town, but your daddy don't hardly leave the house, does he."
"Not much," I said.
"What's he do?"
"You'll have to ask him."
In _Misgivings_, C. K. Williams uses italics to evoke remembered phrases. His italics represent a pact, a peace treaty with truth. We're not meant to believe that this was all that was said or even, perhaps, precisely what was said. Conversation is being hinted at, as is something essential and true.
My mother and I, my mother and father and I, so much complexity, so much of what had been happiness become anguish, before it could become something else. Once, my father said to me, quietly, not unpleasantly, quite cordially in fact, coolly, in a tone as though he were complimenting me, _You're a bastard, just like your mother._
In writing her book _Are You Somebody?: The Accidental Memoir of a Dublin Woman_, Nuala O'Faolain draws upon letters to discover who she was and how she (and a few others) phrased their thoughts in days gone past. Letters, like diary entries, operate as another kind of transcript—another variety of proof of an authentic, talking self—and when used selectively, appropriately, they enlarge a memoir's scope.
Punctuation marks signal something of the author's intent as well. Drop the quotation marks altogether and you are making less of a claim. Deploy nontraditional quotation marks and you turn the reader's gaze toward all that happens on either side of the spoken word.
There are other ways to be nontraditional, and therefore more truthful. In _Girl, Interrupted_, Susanna Kaysen strategically deploys what she comes to call "representative" conversations to depict the nature of the dialogue with the medical staff of McLean Hospital, where the author spent nearly two years in a ward for teenage girls. Kaysen's book is fast-moving, clipped. Scenes are illustrative. Exchanges are emblematic. Time is not counted; seasons are rarely named. This is what it _felt_ like to be institutionalized, nearly unwittingly, at eighteen. This is how things generally sounded. Kaysen's "representative" conversations embody more truth than artificially reconstructed talk ever could.
For my own books, I have adopted the strategy best suited to the time period and topic about which I find myself writing. I don't tend to re-create childhood talk; I can't honestly say that I remember. At the same time, I don't live for insta-epiphanies—don't actively go out hunting for the topic of a next book or for memoir-worthy scenes. I don't believe that can be effectively done. But I do live paying close attention to my life, which for me entails recording my life—not just with that camera but also with diary entries and essays, with long and short blog posts, with words scribbled down on the back of a course curriculum or in the margins of a theater program. Before I ever sit down to write a book, therefore, I have within reach an accretion of both general observations and dialogue clips. I don't know why it's there, most of the time. I don't use most of it. But it exists, and I will use it if I find myself in need.
Have I always been well served by my pile of things? I am sorry to report that I have not. Have I sometimes needed conversation for a critical passage and found myself empty-handed, no recorded history in the house? Yes, of course I have. Three things have helped when such a quandary struck: the attention I pay to how people talk in general (Would she use that word? Does he talk in spurts or fluently? What does she rely on her hands to do when words fail her? Do I need to insert an _um_?), a preference for keeping most dialogue exchanges tight, and a commitment to asking those whom I am quoting to read the passages in my drafts so that they might tell me if my memory parallels theirs.
Does this produce absolute perfection? I won't claim that it does. But I do believe that if our intention is to be true, if we do all that we can to approximate the truth, if we use dialogue only when it will have the greatest impact, if we signal our relationship to dialogue (our faith in our own rendering) by the punctuation that we choose, then we are, with dialogue, doing the best that we can. And yes, at times, we memoirists are ridiculed for doing "the best that we can." Cue me in when you find a better alternative.
So we'll do the best we can. We'll practice listening. We'll practice putting talk down. Find someone to interview—your husband, your child, your neighbor, a friend—and then write a passage based on the words you wrote down. Ask them if you got it right—if you heard not just what they said but also the _way_ they said it. Listen, when you're alone, to the talk of strangers. Listen to kids in the schoolyard. Listen to one end of a cell phone call. Listen to your father tell that story again. Train your ear toward the patterns of speech. You'll be glad that you did when you sit down to write memoir.
# TASTES LIKE
MY mother cooked like no one else. It seemed easy for her—too easy. Her own mother had, the stories went, pitched her pies against the skinny kitchen wall on Guyer Avenue, and they had splintered spectacularly—shards, like glass. That would have never happened to my mother. Her cinnamon apples softened; her vented steam rose; her piecrust crenellated, browned, and crisped. I stole the nubs. They went down sweet.
My mother taught herself every kitchen thing—the mastery of pots, the tricks of temperature, the hidden places where bread would rise, the liquid expertise of basting brushes. She decorated her desserts with the fabric dolls that she'd sewn the night before, and if you were her child, and even if you weren't, she'd bake you three birthday cakes each year and cure you with her soup. Cooking was what my mother did because that's what she was sure mothers did. It came naturally to her, and because it did, she never recognized the colossal quality of her talent.
My mother would go to college after she raised the three of us. She'd publish her own pieces in newspapers, write books, lecture. She'd fund young artists and read thick books and collect a cavalcade of friends. All true. But when my mother died, I didn't think of that. There was only this chant in my head:
_I will never see her eyes again._
_I will never have the fun of choosing a gift for her again._
_I will never again sit at her table._
A way of eating passes away with your mother. How you held the sugar on your tongue. How you stirred the crumbled cheese into the oiled broth. How you savored the sweet grit of flour in the gravy pot, and the thick pink of the beef, and the heated pear with its nutmeg top, and the brownies with the confectioner's crust. You will dig through the freezer at your father's house, mad for one last frozen roll of checkerboard cookie dough, one Tupperware of thick red sauce, one crystallized slice of eggplant parmesan. You will burn your fingers with the cold. Your mother's cooking will be gone.
Or maybe it was your father who made you cream of wheat on Sunday mornings. Or maybe your sister had a trick she did with eggs. Or maybe the old man in the first-floor apartment sent you a bowl of his famous soup. Can you write beyond the gesture? Can you yield the texture and the taste? Do you know what snapped and what smooshed, what was laden down with salt, how the custard thickened, why the tomatoes off the vine were sweetest? Maybe Italian parsley means something to you. Maybe there's a story in the tip of a jalapeño pepper. Maybe the watermelon you carved into a juicy pink pig says something about the last time you saw your uncle. Maybe your neighbor's burgers, charred to a crisp, will, when you resurrect them, finally explain the shape of your nostalgia.
We can't live without eating. We're defined (they say) by what we eat. We remember the frayed cloth, the broken bread basket, the hardened cheese, the soup-thickened casserole for the hands that offered them, the way the cake was frosted, the people who gathered, the predilections and antagonisms that formed. Taste returns us to some primal part of ourselves. It sets the bridgework, the understanding, in motion. Proust taught us that. The story is familiar. No one describes it more economically than Sven Birkerts in _The Art of Time in Memoir_ :
What happened—at least according to literary legend—was that the author, revisiting as an adult one of the sites of his childhood, stopped to take tea. When he automatically dunked the crusty little cake—that famous _petite madeleine_ —into his tea, he found that his unpremeditated action released a stored association of overwhelming force. A single taste suddenly swamped him with the charged-up sensation of childhood, overpowering all factual ordering, and in the light of this visceral reaction his former approaches to his remembered experience came to seem irrelevant. The vital past, the living past, he realized, could not be systematically excavated; it lay distilled in the very details that had not been groomed into story, details that could only be fortuitously discovered. The _madeleine_ experience initiated for him a whole chain of association, and from this he achieved the eventual restoration of an entire vanished world.
Tastes are pathways, then. They lead us toward story. But a meal—or a kitchen—can do even more than that for us. Jeanette Winterson had a very difficult mother. Her childhood was steeped in deprivations, brittle cold, a nearly medieval dark. But once each year Winterson's mother yielded to a mysterious celebratory tug. In _Why Be Happy When You Could Be Normal?_ this sudden near-joy is gastronomically summoned. Food—its selection, its preparation—reveals character.
We swapped our goods for smoked eel, crunchy like grated glass, and for a pudding made in cloth—a pudding made the proper way, and hard like a cannonball and speckled with fruit like a giant bird's egg. It stayed in slices when you cut it, and we poured the cherry brandy over the top and set it on fire, my dad turning the light out while my mother carried it into the parlour.
In _House of Stone_ , Anthony Shadid sets a conversation about the Lebanese war against the making of a meal— _awarma_ —in the kitchen of a man named Dr. Khairalla. We read of "three kilograms of meat, a glistening red," which "were the shade of cooked beets." We read that the smell of the liquefied fat was "pungent, meaty, but staler, a bit distasteful." It's death that is being discussed in the good doctor's kitchen. Massacres. The bloody meat and the pungent smell are signifiers—base, elemental—of the intimate nature of war.
We eat, and we recall our past. We cook with others, and other stories percolate between the chopping and the stirring. We watch someone we love making a meal we hope we won't forget, and something happens to us, connections are made.
In his essay "Coming Home Again," Chang-rae Lee does one of the best jobs I've ever seen of recording his mother's gestures in the kitchen—and their effect on him. It's a scene worth quoting at length, writing that I study for its specificity and tenderness. I pay special attention to Lee's verbs. I catalog all the ways he elevates this scene beyond ingredients and instructions. I take note of the single line of dialogue, and how it matters.
I would enter the kitchen quietly and stand beside her, my chin lodging upon the point of her hip. Peering through the crook of her arm, I beheld the movements of her hands. For _kalbi_ , she would take up a butchered short rib in her narrow hand, the flinty bone shaped like a section of an airplane wing and deeply embedded in gristle and flesh, and with the point of her knife cut so that the bone fell away, though not completely, leaving it connected to the meat by the barest opaque layer of tendon. Then she methodically butterflied the flesh, cutting and unfolding, repeating the action until the meat lay out on her board, glistening and ready for seasoning. She scored it diagonally, then sifted sugar into the crevices with her pinched fingers, gently rubbing in the crystals. The sugar would tenderize as well as sweeten the meat. She did this with each rib, and then set them all aside in a large shallow bowl. She minced a half-dozen cloves of garlic, a stub of gingerroot, sliced up a few scallions, and spread it all over the meat. She wiped her hands and took out a bottle of sesame oil, and, after pausing for a moment, streamed the dark oil in two swift circles around the bowl. After adding a few splashes of soy sauce, she thrust her hands in and kneaded the flesh, careful not to dislodge the bones. I asked her why it mattered that they remain connected. "The meat needs the bone nearby," she said, "to borrow its richness." She wiped her hands clean of the marinade, except for her little finger, which she would flick with her tongue from time to time, because she knew that the flavor of a good dish developed not at once but in stages.
Virginia Woolf once wrote, in _A Room of One's Own_ , that novelists "have a way of making us believe that luncheon parties are invariably memorable for something very witty that was said, or for something very wise that was done. But they seldom spare a word for what was eaten." I feel the same way about memoirs. Not those memoirs written specifically about kitchens or about food, or amplified by recipes. Not, in other words, M. F. K. Fisher, Anthony Bourdain, Ruth Reichl, Gabrielle Hamilton, and all the others. I'm talking everything else—the memoirs about childhood, love, or grief; the memoirs about going away and coming back; the memoirs about loss or illness. Where are the kitchen smells and treats? Where is the oven's simmering heat, or the trail of excess flour, or the permanent char on the lip of the pan, or the basil snipped from the pot on the sill?
We have to slow down to remember those details. We have to trust that writing ourselves back to the dining room at midnight or the campfire at dusk or the charcoal grill across the fence is going to take us somewhere new, forge a bright and unexpected connection, be finally integral to the greater pattern that we are tracing.
When my mother passed away, my father bestowed two gifts: the photo album that she had given to him as an early Christmas present (the cover is wood, the pages are black, the black-and-white square photographs are captioned with white pencil) and her original recipe book. I treasure both things more than any other possible maternal thing. They are what I didn't lose when I lost her.
As I write this chapter, my mother's recipe book spills across my smudged glass desk—the jaws of its three-ring permanently agape, the punched holes of the notebook paper torn through, the brightly named divider pages—Preserving, Salads, Fruits, Meats, Vegetables—delivering nothing of the sort. There's no order here, and I suspect there never was. There is, instead, a profusion of cellophaned newspaper recipes, magazine clippings, coupons, grocery store recipe cards, handwritten notes (the ink now more gray than blue, most of it splattered and diffused), one invitation to a Sloan School Bon Voyage Brunch, and a single sheet of arithmetic problems she must have written out for me so that she could frost a cake or tenderize a sirloin. My math that day got done on a Sun Oil Company calculation sheet. Above the problems, she'd made this note: _Beth. 5 years old._
Pancake Crisps. Feathery Fudge Cake. Fresh-as-a-Daisy Cake. Date-Nut Coffee Cake. Cheese-Olive Rings. Old-Fashioned Spicy Oat Cake. Heavenly Pumpkin Pie. Saucy Cupcakes. Lovelight Teacakes. Orange Chiffon Pie. Picnic Spice Cake. Good Traveler Birthday Cake. Savory Broiled Flank Steak. Carnation Burgers-on-a-Stick. Rodeo Rings. Rolled Chicken Washington. Salmon Patties. Tiny Stuffed Tomatoes. Foil-Blanketed Franks. Sausage Ring with Scrambled Eggs. Snowflake Cake: Good/Moist. Careful not to crack the hardened tape or bruise the yellowed pages, I move through my mother's recipes like an archaeologist—trying to imagine but incapable of fully imagining all the hours she must have sat on her kitchen stool searching for clues to our next meal, our coming birthday extravaganza. My mother wanted to read. She wanted to write. But this, mostly, is what she did, what she felt herself responsible for. I suspect we didn't thank her nearly enough. I don't have a chance to thank her now.
_Brownies_. The page is handwritten. The page is tarnished.
1 c. sugar
3 tspns. cocoa
2 eggs, slightly beaten
1 tspn van
¾ c flour
6 ttspns butter, melted
Mix sugar cocoa. Add eggs and vanilla, beat until smooth, stir in flour, add butter and mix well. Bake in 8 x 8″. About 30 minutes.
There's no more than this—no hints about temperature, no indication as to whether that double _t_ in front of the butter is an in-haste error or a secret sign. Still, when my father comes to dinner, or when Thanksgiving hurries close, or if I'm missing my mother, I will take out this page and make these brownies, experimenting each time, just a little. My brownies never come out like her brownies came out. They are never as luscious or moist. The thin ceiling of chocolate cracks beneath the weight of white sugar. The edges stick to the pan. I sit down wanting to find, somehow, my mother, but I find only me looking for her.
I cannot make my mother's brownies like my mother made her brownies, but I can try. I cannot materially re-create a meal with words, but in sinking in with the attempt, in trying to locate the snatch of my childhood, I am shortening the distance between now and so many thens. I am rousing memory; I am working toward meaning. If we stand in the kitchen long enough, if we pursue the sugar trail of our early years, if we stop and notice now how the chocolate curls and the carrots whimper and the hand stirs and beats and frosts, we will have given ourselves the gift of greater content and deeper knowing. We will be made closer to whole, and closer to whole is nudging closer to having something true to say.
# SOMETHING SMELLS... FISHY?
NOT long ago, my father decided to take me back home, to the first house he'd ever owned. He'd bought the corner-lot split-level in 1957 for $14,000 with help from the GI Bill. He'd worked just a few miles away at an oil refinery where, as a chemical engineer, he helped thwart fires, build a catalytic converter, and manage sudden flares. There was no phone in that house; my mother walked miles to call her mother. There was a backyard sandbox; a neighbor whose house I remember for a single detail, blue; and a sidewalk that seemed to magnetize ants, especially when I sat upon it, chalking.
My father, my husband, and I set out on our adventure relying on my father's GPS, which never quite manages (in my father's car) to deliver the promised pronto magic. We went on and off the wrong highways until we got to the right one and turned into Ashbourne Hills, a neighborhood that, from above, might look like a child's Spirograph construction—bulbously looped with returns, all the houses the same except in the small ways that they are different.
_This is the one_ , my father finally announced, pulling up to a corner house and parking the car. I hadn't been to Ashbourne Hills since I'd left as a child of three, but it didn't feel right. I snapped a photograph, nevertheless, suspicious. Something about the angle seemed off. Something about the sun and the size of the front-yard tree. And something—how can I explain this?—about the way the old place _smelled_.
_Dad_ , I finally said. _This isn't home._
And it wasn't, as it turned out. Home was two blocks north, a similarly styled corner, an equidistant setback, a modest simulacrum of the other brick-based split-level. Home was (sure, I'm animal-crazy) a different smell. Something about the trees, perhaps, in the neighbor's yard. Some hint of honeysuckle coming.
They call it olfaction. They talk about sensory nerves, cilia, basal cells, odor receptors, and odorant molecules, but we know better. We know that it's the brackish marsh grass at the edge of the dune that levers us toward our memories of Stone Harbor. We know it's the slightly woodsy smell of the nut beads we find in a child's macramé bracelet that returns us to Sun Oil Day Camp the year our arm was in a cast. We know the smell of plaster takes us back to that full-arm, itchy-as-all-hell cast, and we know that it's the smell of bleach in a mop bucket that returns us to the hospital where we lay with our smashed arm, waiting for the surgeon to arrive. _What is that smell?_ we'll wonder, and somewhere deep within our dendrite morphology the lights go on, the electricity sparks, the chemistry goes zinging. Baby powder. Wet sticks. Car exhaust. Burned rubber. Gym mats. Locker rooms. An old wool hat. The asbestos lining in an attic. Pine needles. Lavender. Spray paint. The metallic charge between a hammer and a nail. Smell, like a lasso, takes hold. Sit with it awhile, and you'll know.
What can you smell? What smells transport you? What smells do you associate with childhood, and why, and how has time interceded, and how, in some mysterious way, has time not passed at all? Here are Dorothy Allison, Marie Arana, Eudora Welty, and Vladimir Nabokov on the trail of odorants. Think, as you read, about your own ol' factory.
Where I was born—Greenville, South Carolina—smelled like nowhere else I've ever been. Cut wet grass, split green apples, baby shit and beer bottles, cheap makeup and motor oil. Everything was ripe, everything was rotting. —Dorothy Allison, _Two or Three Things I Know for Sure_
The corridors of my skull are haunted. I carry the smell of sugar there. The odors of a factory—wet cane, dripping iron, molasses pits—are up behind my forehead, deep inside my throat. I'm reminded of those scents when children offer me candy from a damp palm, when the man I love sighs with wine upon his tongue, when I inhale the heartbreaking sweetness of rotting fruit and human waste that rises from garbage dwellers' camps along the road to Lima. —Marie Arana, _American Chica_
In a children's art class, we sat in a ring on kindergarten chairs and drew three daffodils that had just been picked out of the yard; and while I was drawing, my sharpened yellow pencil and the cup of the yellow daffodil gave off whiffs just alike. That the pencil doing the drawing should give off the same smell as the flower it drew seemed part of the art lesson—as shouldn't it be? —Eudora Welty, _One Writer's Beginnings_
Mademoiselle's room, both in the country and in town, was a weird place to me—a kind of hothouse sheltering a thick-leaved plant imbued with a heavy, enuretic odor. Although next to ours, when we were small, it did not seem to belong to our pleasant, well-aired home. In that sickening mist, reeking, among other woolier effluvia, of the brown smell of oxidized apple peel, the lamp burned low, and strange objects glimmered upon the writing desk: a lacquered box with licorice sticks, black segments of which she would hack off with her penknife and put to melt under her tongue; a picture postcard of a lake...—Vladimir Nabokov, _Speak, Memory_
Easy prompt for a stuck writing day: Choose a smell, and write into it your story. It's the cracked pepper on the fried egg. It's the smell of fresh tar on a driveway. It's the strawberry pendant above the fresh-shredded mulch. It's the rot of an orange in a bowl. It's cat pee on a rug. It's the smell inside your oldest book. It's the basement smell after rain. It's Crest toothpaste, lemon Lysol, Ivory soap, melting wax, your mother's perfume, the bottom of your knapsack, nail polish, sawdust, fly strips, coin collections, the cardboard molt of an actual old-time record, the carcass in the spiderweb. One of them is yours, or something else is.
There are smells out there that explain you. There are smells that take you home.
# EMPTY YOUR POCKETS
SOMETIMES I ask my students to empty their pockets and to put everything right out there, on the table. No pockets? They dig into their backpacks or purses. And if somehow (too strangely, not rightly) all they have in their possession are their laptops, I ask them to close their eyes and think for a moment about the things they left back in their rooms, stuffed in a desk drawer or perched on a windowsill.
What are you carrying forward? I ask them. What are you keeping close?
Here are but a few things that have been placed upon our altar:
* keys (of course)
* the essential modern-day paper and plastic (license, work ID, health insurance cards, library cards, the yogurt-shop card with the eight punch holes)
* every conceivable brand of smartphone
* bills and coins
* parking tickets
* a chipped seashell
* a pocket-crushed feather
* a fake pack of gum (don't ask me; it was there)
* a toothbrush (owner of which had a killer smile)
* a hairbrush (apparently unused)
* a nail file (needing replacement)
* a pale pink comb
* recipe cards
* photographs
* a pack of raisins
* books (dog-eared, even)
* sparkly glitter pens
* rub-on tattoos
I could go on. I won't. The point is what happens next. Choose the thing that matters most, I say. Choose the thing that is so much more than the sum of its parts, so much more than its self-apparent, pragmatic function; choose the thing freighted with meaning. Which one do you keep close _just because_? Which one, when push comes to shove, is _irreplaceable_? I wait. Deciding can take a while. But when it's clear that choices have been made, I get the students writing. Define _irreplaceable_, I say. Then tell me about irreplaceable by writing your chosen object's story.
The answers are moving, surprising, often funny. They tell me (they tell others) so much about the person sitting there, what he values and how he sees, how he measures things, one against the other. This is irreplaceable because my mother gave it to me. This is irreplaceable because no other comb that I've ever found has such soft, even subtle teeth. This is irreplaceable because it took me three years and six holidays' worth of trying to get my grandmother to reveal her tomato sauce secret, and if I lose the card I wrote her ingredients on, she will never trust me again in the kitchen. This shell is irreplaceable because my brother found it for me in the year I could not travel to the ocean. These photographs were found in a Ziploc bag after the flood in my parents' basement.
Now, I say, write about something that _has_ been lost. Something you didn't think you could live without. Write about absence. Write about search. Write about reconciling yourself to the idea of something eternally missing. What is the language of loss? How does one resurrect the thing that can no longer be held, touched, seen, corresponded with? Are words replacements? Can they be made to be? How is the approximating work of a writer deeply frustrating and deeply satisfying? Why does writing always feel like almost or nearly, and why do we keep trying anyway? What impels us?
What impels you?
Diversion by way of example: We moved to the house where I now live close to twenty years ago. In the course of planning the move, I set every material thing that actually mattered to me into two cardboard boxes. The dolls my father would bring home to me following his travels abroad. The collection of Hummel figurines I had acquired as a child, one by one, as I, too, began to see the world. The Venetian masks, leather wrought, that my husband and I had discovered in a corner store in a day of thick fog on a street we were never, in subsequent travels, in miles of walking, able to find again. My porcelain dogs, an inch high at most—substitutes for the real puppy I'd always wanted. A few long-maned china horses.
These were talismans; these were evidence. These were proof of being loved and having loved. I treasured these things deeply. Entombed them with newspaper and padding. Put them inside the sturdiest boxes. Asked the moving men to carry them forward in a special part of the truck. Waited, anxious, to see them again. I never saw them again. Somehow these things had gone missing.
It's been twenty years, as I have said, and I'm still not used to the idea of so much missing. I still find myself on my knees in the attic dust, digging through piles and papers and crates—as if looking hard enough will solve this crime of absence. It doesn't. My masks are still gone, and my puppies, the little Hummel girl with the frozen, windswept hair that I bought with all the money I had as a nine-year-old in a German shop. My special treasures never made it to the home where I live now. The home feels partial, less than.
I don't have the things, but I write toward them.
Just as I write toward the grandmother I lost, the uncle I lost, my mother. Just as I write toward—we all write toward—childhood. Just as, in my Salvador memoir, I wrote toward a place called Santa Tecla, my husband's home, which was dusted down to nothing by a seismic earthquake just as I was finishing the book. "Words are the weights that hold our histories in place," I wrote then and I believe ever more firmly now, and as memoirists our job is to understand not just what we are holding in place but also why. Why does it matter so much that we try? Will we be able to live with the foreordained imperfections? Can our quest to keep what cannot be kept signify for another?
Empty your pockets. Know what you value. Write it down so that I can see it, want it, nearly touch it, too. So that I will yearn with you, or so that I can mourn with you, because loss is now, or loss is coming, and loss is our shared human condition. You have never seen my Venetian masks, and I have never cooked your grandmother's red sauce. But if we both write most truly, we will enable each other's compassion.
Memoir commands us to engender compassion.
# TELLING DETAIL
I most likely would not have fallen so hard for memoir—or had the guts to try to write it—had I not happened upon Natalie Kusz's miraculous _Road Song_ in a Princeton, New Jersey, bookstore in 1990. The story of the author's long recovery from the ferocious attack of a pack of Alaskan dogs, _Road Song_ was, for me, the revelation of a form. Here was the past delivered with equanimity and respect. Here was a terrible tragedy gentled by words, a book in which the good is ever equal to the bad. Kusz wrote to comprehend, and not to condemn. She wrote her way back to herself, and as she did, she broadened the reader's perspective, disassembled bitterness, healed. _Road Song_ begins in the spirit of adventure, not with despair. It begins with an _our_ and not an _I_ and reverberates out, like a hymn. There is no selling out here. Just a hand reaching out across the page.
I blame Natalie Kusz.
Not yet published, entirely unschooled as a writer (so unschooled that I had yet to meet a _real_ writer, let alone converse with one), I wrote to Kusz at her publisher's address. This was during that great, unrivaled epoch of letters, stamps, blue-ink signatures. This was that time when hovering by my mailbox required going outside, into the weather, getting hot or getting wet. It was early December when Kusz's letter arrived. I have it right here in a frame. It reads, in part:
As I am sure you know... writers are in the business of attempting to expose the human condition in such a way that our description resonates in the souls of other humans, and it is extremely gratifying to hear about the one or two times when something we publish succeeds in this endeavor.
I have carried this letter everywhere. I have returned to it time and again. I hadn't read memoir, hadn't written it, and then there was Kusz unveiling its mystery for me, explaining, by way of a thank-you, what a book like hers was designed to do. _Writers are in the business of attempting to expose the human condition in such a way that our description resonates in the souls of other humans_.... Yes, I thought. I want to be in that business.
_Road Song_ was my first instruction. It is a book I hold incalculably dear. And it is a book that I sometimes read out loud to the students who come to that Victorian manse and sit around the thick, old, patient table. I read from the harrowing early pages, when Kusz, then a little girl, is walking home from school with her dog, Hobo. Her mother is not home, and so Kusz carries on, toward a neighbor. Dogs pace and growl between where she is and where she wants to be—huskies tethered to their chains. They're agitated, restless. The snow, in places, is taller than the girl herself. It is Alaska cold. Kusz calls out to quiet the dogs, calls out to find courage herself, and when the boy she is looking for also isn't home, when she turns and walks the slender spit of ground between the doghouses and the howling dogs, when she finally makes her way to the end of that dog madness and allows herself to feel grateful for her own courage, she, well, here—let Kusz tell you for herself:
I was walking past the last dog and I felt brave, and I forgave him and bent to lay my mitten on his head. He surged forward on a chain much longer than I thought, leaping at my face, catching my hair in his mouth, shaking it in his teeth until the skin gave way with a jagged sound. My feet were too slow in my boots, and as I blundered backward they tangled in the chain, burning my legs on metal. I called out at Paul's window, expecting rescue, angry that it did not come, and I beat my arms in front of me, and the dog was back again, pulling me down.
A hole was worn into the snow, and I fit into it, arms and legs drawn up in front of me. The dog snatched and pulled at my mouth, eyes, hair; his breath clouded the air around us, but I did not feel its heat, or smell the blood sinking down between hairs of his muzzle. I watched my mitten come off in his teeth and sail upward, and it seemed unfair then and very sad that one hand should freeze all alone; I lifted the second mitten off and threw it away, then turned my face back again, overtaken suddenly by loneliness. A loud river ran in my ears, dragging me under.
This passage remains for me one of the most devastating scenes in all of memoir. I can never read it aloud without pausing to catch my breath, to wipe away a tear. What happened to Natalie Kusz is, of course, a tragedy. With simple language, with supreme clarity, without self-pity, Kusz enables us to see and to feel the puncture of untamed teeth, the lonely assault.
But when I ask my students what makes this passage so searing, so perfect and perfectly terrifying, it is, of course, that mitten, tossed: "... it seemed unfair then and very sad that one hand should freeze all alone; I lifted the second mitten off and threw it away..." The mitten tells us everything. The mitten—which is not blood, which is not teeth, which is not pain, which is not, then, _overt_ —is the trembling heart of this devastating story. It is why we ultimately feel Kusz's pain as profoundly as we do.
The telling detail. We know one when we see one. We recognize the pattern. Foreground/background. Here it is, again. The main action of this story is howling dogs and a lonesome walk by a little girl. The signifier—the flag, the surrender—is the mitten.
"When you write, you lay out a line of words," Annie Dillard has written. "The line of words is a miner's pick, a woodcarver's gouge, a surgeon's probe. You wield it, and it digs a path you follow. Soon you find yourself deep in new territory. Is it a dead end, or have you located the real subject? You will know tomorrow, or this time next year."
Follow the line of your own words. Follow the lines set down by others. Hunt down the telling detail. Here is a passage from the not-quite-a-memoir known as _Say Her Name_ , by Francisco Goldman. You don't need to know anything more about it, just now, except that it reads like this:
I had a friend, Saqui, who'd covered more war than anyone my age I knew: Afghanistan, Africa, and the Middle East, as well as Central America. Saqui told me about walking out of his hotel on Avenida Reforma the night he got to Mexico City, two nights after the quake, the air thick with smog, pulverized cement, and acrid smoke, and how, when he was crossing the avenue, he saw, in one of the lanes closed off to traffic, a dead child laid out on the pavement, a little girl in sweatshirt, jeans, and sneakers, who looked like she'd been rolled in flour. There were two Mexican men standing over her, and my friend told me that they looked at him in a way that so sorrowfully but menacingly warned him not to come any closer that he swerved away as if they were pointing guns, not daring even to glance back until he'd crossed onto the opposite sidewalk, where he turned and saw the two men still standing over the little corpse as if they were waiting for a bus, and he thought it was the saddest, most terrible thing he'd ever seen.
Read it through again. Mark out those details that make this passage vivid, memorable. I'll wager that the reference to flour—so simple, so primal, so right—has been noted. I'll imagine that the way the two men stood by the girl, the way they looked "as if they were pointing guns" stopped you, too—made you more capable of imagining the scene. Imagination and empathy are near cousins. Writers who help us see clearly, who make room for us beside them, will likely earn our compassion, and our time. Neither Kusz nor Goldman relies on ornate gestures or complicated schemas. Their images are organic, grasped in an instant. Mitten as surrender. A child rolled in flour.
So close your eyes now, and lean back. Direct your thoughts toward the first childhood room that you can remember. Take stock. Where is the light coming from? What is in that toy box on the floor? Why are the picture books double-stacked, and what happened to the stuffed clown's nose, and why is there a half-dollar coin stuck in the piggy bank slot? Did you choose the white dresser for yourself, or was it borrowed? Did you write your own name on the wall? Did your brother break the wooden horse? Did your mother rock in that chair?
Take your time; nobody's rushing you. Let it all come back, as memories do. Faulty, surely. An estimate, of course. Still, and nonetheless, do what you can.
Then find a pencil.
Then find a page.
(Please walk away from the computer. Please?)
And write.
Write what you remember, what you feel as you remember, what you wish that you could see but can't. Then look at the words that you have laid down, in those first lines—the pick, the gouge, the probe—and isolate your most telling, most signifying, and therefore most complete detail. On a new page, with a sharpened pencil, write the detail better. Look for wasteland stretches that might be eradicated, flat horizons in need of sky, opportunities to turn complication into complexity. Ask yourself, _Is this the best that I can do?_ Do nothing less when writing memoir.
# LET ME CHECK ON THAT
RESEARCH is subterranean; it's submersion. It dirties you up, challenges your presumptions, broadens your spectrum. It offers a defense against the faultiness of memory and against critics such as Ben Yagoda, who in _Memoir_ delights in reminding the rememberers that their pastime is practically feckless: "Among the most lasting of Freud's many revolutionary insights concerned the capriciousness of memory.... In experiment after experiment, study after study, subsequent psychologists have gone a good deal farther, establishing that memory is by nature untrustworthy: contaminated not merely by gaps, but by distortions and fabrications that inevitably and blamelessly creep into it."
Our memories will fail us; there's no pretending they won't. We'll get things wrong, and not just the talk. We'll be contradicted, doubted, pilloried maybe. Research helps—not just in after-the-fact self-defense but also in the original priming of the story. Absent the stark scribble of case file documents, how could Susanna Kaysen have so effectively pieced together the life she'd lived in that mental institution? How could she, in _Girl, Interrupted_ , have convinced herself (first) and her readers (second) just what happened on the morning that the physician committed her to an inmate life in an asylum? We need to know who to believe—the doctor or Kaysen. By digging out the admission note of April 27, 1967, by reproducing it in the pages of her book, Kaysen finds her way to the most accurate possible rendering of that harrowing moment. She puts us on her side.
Where would Mary Karr have been without her sister's corrective memory (or the threat of it) as she set out to write _The Liars' Club_? What would we think (or care to believe) had Anthony Shadid, an acclaimed journalist, not paid such close attention to the researchable facts—about the war in and against the Middle East, about his Lebanese ancestors—even as he wrote his very personal story, _House of Stone_? Sometimes research _is_ the story, as memoir becomes the _investigation_ of one's life. Such was the case for Ned Zeman, the _Vanity Fair_ reporter who, following a succession of increasingly violent treatments for violent depression, finds himself an amnesiac fitting together the puzzle of his life, one frayed cardboard piece after another. _The Rules of the Tunnel_, like _House of Prayer No. 2_, is a "you" story. It is also proof of the power of the undercover-cop memoir. What happened to me? How did it happen?
From the prologue of _The Rules of the Tunnel_ :
The void stretched back for months, maybe a year, save for random bits (JetBlue potato chips) and pieces (tiny pink shoes) signifying nothing. The rest of the story would have to come by way of shoe leather and notepads. Which made you, in addition to the world's first amnesiac reporter, appreciative of why monkeys don't become airline pilots. You were the worst subject you'd ever interviewed (and the feeling was mutual). You felt deceived, stonewalled; you felt ambushed, persecuted. You wanted to sue yourself for libel.
Like Zeman, _New York Times_ columnist David Carr and _New York Post_ reporter Susannah Cahalan had to hunt for the small and large details of their own lives to concretize the facts. Carr was, by his own admission, a thug, an addict, and an abuser before he sobered up, plucked his children free of welfare, and became the _Times_ fixture he now is. Cahalan was living her life—a good job, a bright boyfriend—when an undiagnosed autoimmune disease radically redefined her health and put her future at risk. Neither Carr nor Cahalan could, independently, remember much of what happened. Reportage was their out. Their respective memoirs— _The Night of the Gun_ and _Brain on Fire_ —are unearthed memoirs. Research _is_ the story.
Of course, the scenario needn't be so extreme for research to elevate memoir. When I wrote memoir, I was, of course, writing my life. But I was also following the always persistent, hardly consistent, rarely well-tiled path of my insatiable curiosity. If I was writing about friendships—my own—I was also writing about, or at least wondering about, the history of friendship, the word as defined by Cicero and Montaigne and Francine du Plessix Gray, the rareness of the relationship, the conclusions other writers have drawn—in memoirs, in novels, in psychological research. I was writing about what I remembered and what I came to know. I was reaching for the greater world even as I told my personal story.
Say, for example, that I was writing about my marriage to a Salvadoran man, that I was pursuing the question _How well do we ever really know the people we've come to love?_ I had, in my head, the SparkNotes version of my one and only marriage romance, which is to say the mess and wild tussle of what it feels like to fall in love, be in love, fight to stay in love. I could—without ever leaving my chair, without tapping a single keyboard key, without picking up a book, without interviewing another person—give you the love goods. But if I had written a book like that, I'd have been a capital-M _Me_ speaking without the graces of enriched perspective. I'd have been, in other words, a narcissistic bore.
I wanted to write a story that mattered. I wanted, besides, to learn about my husband's country, El Salvador. The land itself, the coffee farms, the grandfather my husband loved, the guerrilla warfare that fractured his world, the divisions of earth and politics. There would be, I trusted, wisdom in all of that. Life lessons. Metaphors. I would grow not just as a writer but also as a person and as a wife. I would dig until I finally commanded some part of that Salvadoran family and world as my own. Until I, in some small ways, fit in. Until I lost my outsider status. I would come to think harder about bridges, cleaving, foreignness. I would find room for myself, and room for my readers, in a story about marriage, strangeness, and war. I would establish myself within the tangled life web and make some relatable sense of it all.
But it would take time. The old family photographs had to be found. The antique pamphlets on coffee farming. The textbooks on plate tectonics. The wildlife guides. The history of the Brazilian who brought coffee to El Salvador in the first place. The stained and crumbling newspaper stories. The political interviews. The Carolyn Forché poems. The photographs I took when hiking the jungle hills alone, or when walking lost near dusk along an estuary, or when escaping the bombs that I'd just been told would soon explode in the capital city. Even if I didn't know my husband's primary language, Spanish, I listened for all that lay within its rhythms and oscillating volume. I sought out the aunt who would speak English with me. I asked questions of the brothers. I matched the stories they told against the faded news journals that I read, and I built, small bit by small bit, the story.
What do your hard facts imply? What do they teach you about the story? What do they offer in terms of analogy and depth? I had to write the land of El Salvador—and the way the land itself was made—before I could truly come to understand all that separated my husband from me, all that makes him so exotically different, so artistically foreign, so finally lovely.
In the hooting, crawling, philandering shade of an overgrown jungle, high, near the sky, it is possible to imagine that the world is as the world always was. This is illusion, the chicanery of nature. For when it comes to Central America, to El Salvador, to St. Anthony's Farm, there was indeed a time, as the Maya say, when the sky seemed crashed against the earth, when there was darkness only, nothing at all. The land that forms Central America is erupted earth, the aftereffect of spectacular geological discontent, a land bridge suffering the wind and weather of two barely separated seas. Sixty million years ago, there was only ocean where the land bridge lies today. Eleven million years ago, there was but a single archipelago. Having risen from the volcanic sea in fits of calm and violence, that archipelago would be joined, over the course of many more millions of years, by additional by-products of glaciation and geological turbulence until the isthmian sill grew deeper and more and more land poked its nose up to the sky. The islands wouldn't connect, the pocked, swamped, peaking, dipping hissing isthmian barrier wouldn't be complete, until three million years ago. But the cross-pollination of North and South American life was already in the works, so Central America was from the first an incubator of the exotic and inexplicable. — _Still Love in Strange Places_
Research is alivedness. It is the rush of something new and unexpected. It keeps you engaged, in suspense, full of the unprotected _what if_ s. Research requires us to shed our comfortable conceits, to break the formulae, to scramble the math. You didn't see that metaphor coming? Fantastic news. You were brazenly sure that the front door on the old house on Guyer Avenue was red until somebody showed you the photographs? Good. What else don't you know, and why don't you know it, and is uncertainty part of your story? The big pile of papers that someone just sent you has messed with your sanguinity and self-assurance, your confidence, your frame? All right, then. You're getting close. That interview you just completed—the one that contradicts the interview you conducted last week, the one you thought for sure was the Final Word? Perfect. Yes. You see it now. This memoir business is messier than you thought. Messier and far more interesting.
Research will never, however, be fully compensatory. Research isn't, in the end, plenary. You could research for years, exhaust all the documents, relentlessly interview the myriad eyewitnesses, undergird your memoir with film reels and photographs, and you would still be confuted and resisted; you would be refuted; you would question yourself. I spent those fifteen years writing about marriage and El Salvador. I bought every book I thought there was. I talked to every family member who would speak to me. I picked coffee beneath jungle shade, watched the _campesinos_ sort red beans from green, asked my husband, again and again, to tell me that story one more time. I called my brother-in-law, the one then living in Dallas. I called the one who had made his family home in Spain. I asked my son, after we had traveled to El Salvador and back again, _Did you see what I saw?_
And as true as I believed my memoir to be—as true as I have been told, by all those family members, that it is—I not too long ago sat with my husband and his family while they searched through the old photo albums again. They brought the long gone back again, close. They told their stories as if for the first time. I sat chin to my knees across from that couch and listened as they hovered and exclaimed. It was the nuances that changed—the gift that had been brought to the party, perhaps, or the hour in the day. There were debates about who had seen what first, or who had hidden a secret for a day. _It wasn't like that, it was like this_ , they said among themselves, and I thought of my book on the shelf, the words fixed in their place. I thought of how stories mutate with time, and with the teller, even the stories confidently set down in ink.
Research is corroborating, substantiating, authenticating; it, too, bears witness. But certain facts will remain elusive, or they will change with time. At some point, we have to trust what we have and what we can make of what we have. We can be absolutely sure of just one thing in all of this: that our hearts are true throughout the making of our story.
# FIRST MEMORY
TAKE stock. You have opinions, now, about tense and form. You've sifted photographs, listened to talk, remembered kitchens, sunk into your stinky-sweet olfaction. You've emptied your pockets and written loss. You've resurrected a childhood room, a telling detail. You've come to know more about the ways you see and think, the ways you process and remember and auction off the facts, the ways you live weather, landscape, song, and hue. You've read memoir (please tell me you've read memoir) and clarified (at least a bit) what it is you expect from your memoir self. It's time for one more exercise. Relax. Nobody's looking.
I want you—you saw this coming—to write your first memory. I'm not going to lie: This won't be easy. You're going to flail. Let yourself flail. You're going to ask yourself: Is this true? Is this right? Does this matter? Go with your fears; memoir is nothing if not frightening. Turn off the phone; first memories shouldn't be interrupted. Find the old photographs, if you have them. Find that scrapbook your mother kept, or that fishing reel your father left you, or that box of toys that's still in your uncle's attic, or the ornaments from the early Christmas tree, or the book your grandmother read you. And if you don't have these things, it's all right. Exhale. You have your neurons and dendrites, your prions that some scientists believe mark out memories in the brain.
You have time. Roll through it.
Fear finds its way into my first memory. Fear and a blue-sky afternoon in that cul-de-sac of 1950s-era houses—my parents' first neighborhood, the one I only just recently (but barely) found. For the first time, I am bearing witness to a crime—an assault against the pretty plastic streamers on the handlebars of my older brother's bike. Like teeth, the streamers have been yanked out, one by one, and now nothing flies, nothing tinsels when the bike wheels go forward. Who would have done such a thing, and why? How could anybody dare hurt my brother? And where is he, and does he know yet, and can my mother fix this? I see myself—my feet planted on the snaking concrete-tile walk—incapable of moving, impossibly sad, distraught. I don't have the words for betrayal, not yet. I don't know what justice is, so how can I form thoughts of its opposite? But I am confused, and the confusion is electrifying, and so are my inchoate feelings for my brother: _This should not have happened to him._
Elias Canetti's first memory is, as he writes in _The Tongue Set Free_ , "dipped in red."
I come out of a door on the arm of a maid, the floor in front of me is red, and to the left a staircase goes down, equally red. Across from us, at the same height, a door opens, and a smiling man steps forth, walking towards me in a friendly way. He steps right up close to me, halts, and says: "Show me your tongue." I stick out my tongue, he reaches into his pocket, pulls out a jackknife, opens it, and brings the blade all the way to my tongue. He says: "Now we'll cut off his tongue." I don't dare pull back my tongue, he comes closer and closer, the blade will touch me any second. In the last moment, he pulls back the knife, saying: "Not today, tomorrow." He snaps the knife shut again and puts it back in his pocket.
Every morning, we step out of the door and into the red hallway, the door opens, and the smiling man appears. I know what he's going to say and I wait for the command to show my tongue. I know he's going to cut it off, and I get more and more scared each time. That's how the day starts, and it happens very often.
A first memory born of a raw, unsettling, ungraspable emotion. Another first memory born of color (and seductive terror). Here, in _Cakewalk_ , is an early memory born quite specifically of sweets. Kate Moses, the memoir's author, is "not quite four" when she heads across the street to play for the first time with a neighbor girl. A cake has been set out on the new friend's kitchen counter. It beckons, and no parents are in sight. Kate encourages her playmate to take "just one taste." Soon the entire cake is gone.
_What was that?_ I was thinking as I burst out the neighbor girl's front door and skittered across her lawn, her mother still on the phone shrieking to my mother, my sticky hair flying behind me and my stiff new dress flapping, my mother erupting out of our house across the street and running toward me, a look of abject mortification on her heart-shaped face.
I knew I had been very bad. I knew I was going to be punished, maybe even spanked. But I didn't care. Whatever it was, whatever that voluptuous thing was, it had been worth it. _What was it?_ I was still wondering later, after my father had come home. That baked thing, that glazed and golden and sumptuous thing—I wanted it again. And again. And again. I lay on my bed, my bottom sore, sucking the last ambrosial flavor from my candied hair.
In _Limbo_ , a book about a young pianist struck down by a mysterious muscle disorder, A. Manette Ansay suggests that her first memory "is of memory itself—and the fear of its loss, that vast outer dark."
One night, as I lay floating in the still, dark pond between wakefulness and sleep, a stray thought breached the surface like a fish. _You will forget this._ I opened my eyes. To my right, tucked under the covers beside me, was an eyeless Raggedy Ann doll. To my left, on top of the covers, was a large plastic spark plug—a display model that my father, a traveling salesman, had coaxed from some far dealership and presented to me. My father's gifts were unpredictable and strange: hotel ashtrays, pens with company slogans trailing down their sides, desiccated frogs and snakes he found along the highway, jaws pulled back in agonized smiles. These things populated the bedroom I shared with my two-year-old brother like the grasshoppers and pianos and clocks in a Dali painting, startling the eye from my mother's homemade curtains, the Infant of Prague night-light keeping watch on the bedside table, the child-size rocking chair. The spark plug was nearly three feet long; if you shook it, something mysterious rattled around inside. It was tied to a wooden spool and, during the day, I dragged it clattering after me, the way other girls carried dolls.
_You will forget this._
It was 1969. I was four years old, almost five. The thought swam back and forth in the darkness, gaining speed...
Something Wordsworthian factors into Priscilla Gilman's first memory. Thunder. A father. A conquering of fear. She tells the story in _The Anti-Romantic Child_ :
It was a summer night in Spain, I was a little over three, and an especially dramatic thunderstorm woke me, terrified, in the middle of the night. The memory begins with my father's voice in my ear and the two of us gazing out into the night. Framed by the large window, the scene before us was like a little theater: the familiar garden strangely unfamiliar, the sky an indigo blue lit periodically by silvery flashes. Narrating the scene, my father sounded like a madcap sportscaster. "There's a big lightning! There's a little one... oh a big one again!" he exclaimed as he held me firmly with one hand and gesticulated skyward with the other. I remember something disorienting becoming something glorious. I remember feeling so safe not because he protected me from fear but because he helped me to confront it.
In _All the Strange Hours_, Loren Eiseley recalls a conversation with W. H. Auden, in which the poet asked Eiseley what public event he remembered first from childhood. It's not the sinking of the _Titanic_, which Auden reveals to be his own spark point. It is, Eiseley reveals, a story that involves "a warden, a prison, and a blizzard." Time and again (in the book, in his life) Eiseley will return to this trope until, at last, he walks "like a ghost back into the past" to understand this wintry episode. What _are_ the facts about these prisoners and their murderous escape? Why does the tale of a convict named Tom Murry continue to haunt him? What has happened to the years? Eiseley reads the now-microfilmed coverage of the escape. He drives to the place where the event occurred. Time collapses until it isn't just now and then that are conflated, but also Eiseley and the convict Murry. It's all one moment. It's the brain playing tricks. It's the past as the place we all begin.
First memories are made not from the gloss of things, the one-day-just-like-another, the automaton response to life and its necessities. First memories are activated by some breed of shock to the system, some differentiating cause and effect, some Technicolor confusion or vivid confrontation between the norm and the new. First memories are a first awakening—emblematic, symbolic, telling. First memories are like DNA—as integral to the who of us as our green eyes and auburn hair.
But also this: What we recall about then, what we are capable of knowing about our childhood selves, how we tell ourselves the stories, how we tell them to others—this is all part and parcel of, inextricable from, who we are right now, how we filter the world, how we (again) _value_ it. My first memory is about the terrifying insistence of empathy, and a corresponding sense of powerlessness. Canetti's first memory is about fear, perhaps, but mostly about its tantalizations. Moses's first memory is about stolen sweets and the need to possess such deliciousness. Ansay's first memory is about the fear of forgetting, which is also the fear of losing, which is of course a pivotal life theme for a woman from whom mobility, not to mention piano song, will be taken.
Your first memory may not be the beginning of your memoir; certainly the story I just told about those purloined streamers does not stand at the beginning of any of my books. Your second, third, or fourth memories may not factor in, either, but that's all right. We're still a few pages away from our actual book work—still working, here, with raw material. So write your first memory, and then your second, and then your third, and after your pages are filled and your arm is aching, look back over the stories that you have called up, revealed. Look for the consistencies, story to story. Consider what they reveal about you. What are the half-buried themes, and what are the overt declarations? Do you know yourself better for having tramped around in the past?
In his memoir, _Speak, Memory_ , Vladimir Nabokov wrote this: "In probing my childhood (which is the next best thing to probing one's eternity) I see the awakening of consciousness as a series of spaced flashes, with the intervals between them gradually diminishing until bright blocks of perception are formed, affording memory a slippery hold."
I like that phrase "bright blocks of perception."
I like "probing one's eternity" even better.
# REMAIN VULNERABLE
I had never campaigned to teach at the University of Pennsylvania. It took me weeks, after the invitation arrived, to finally say yes. I was worried, first, about time: I run a business, I write books, I am a mother and a wife. I was worried, second, about yield. Did I know enough, had I learned enough, to be the teacher I would expect myself to be?
I had, it was true, been teaching all along—children and teens, midcareer adults and retirees. I'd traveled to universities and talked, joined the faculty of summer programs, mentored high school students, conducted workshops against the backdrop of gladioli and streams, taken on the mantle of writer-in-residence.
But to teach on an Ivy League campus for an entire semester is a different calling altogether. It is a marathon, a form of politics, a performance, and a contest of both popularity and wills. Students are, inevitably, in the know about which teachers are easily managed and which are hardly worth the maneuvers, which show up because they _want_ to teach and which because teaching is, finally, a job. A new name on a faculty roster will be evaluated primarily by her one hung shingle: the course description. I worked on mine for weeks, read it to my husband as if it were a poem:
"Maybe the best we can do is try to leave ourselves unprotected..." the poet-novelist Forrest Gander has written. "To approach each other and the world with as much vulnerability as we can possibly sustain." In this advanced nonfiction workshop, we will seek, and leverage, exposure. We'll be reading writers contemplating writing—Natalia Ginzburg, Larry Woiwode, Vivian Gornick, Terrence Des Pres, Annie Dillard. We'll be reading writers writing their own lives—Gretel Ehrlich, Anthony Doerr, Stanley Kunitz, Brooks Hansen, Jean-Dominique Bauby—as well as writers writing the lives of others—Frederick Busch on Terrence Des Pres, for example, Patricia Hampl on her parents, Michael Ondaatje on the utterly cinematic characters of his childhood. The point will be to get close to the bone of things.
"Sounds like an acquired taste," my husband said. (I put this in quotes. It is a fact. He said it.)
_Acquired taste._ I wavered, then went forward—an ungainly mix of recklessness and abject fear. Can you, in fact, teach vulnerability? Is that where memoir starts?
Semesters have since gone by. Students have entered my life and stayed. My family is big, and it is growing. And I can say now, with confidence: Leave yourself unprotected. Remain vulnerable. For this is where memoir begins.
I've written more than seventeen books in a half-dozen genres since I published my first memoir in 1998. I have written at least one blog post a day since that first intrepid post back in 2007. I say these things not to gloatingly quantify (perhaps I should be embarrassed; some say I should be ashamed) but to make this point: I still only write when the yearning is urgent. I'm dead-in-the-water boring otherwise. And so, of course, is my writing. I yearn a lot, I'm sorry to say. And so I'm always writing.
Urgency is born of vulnerability. Vulnerability makes room for surprise. Surprise must be exercised; it is a state of mind. You didn't see that coming? Good. You can't believe you cried? Thank God you're human. Beauty blazes through you, beauty makes you feel alive, you can't sleep sometimes because of beauty's ten-fingered grip, because of the shattering glory of the evening sky? It's all right. We insomniacs get you. You've wondered sometimes whether it's true that souls don't bleed, because you're pretty sure your soul has bled when you hugged your students good-bye, and when your son graduated, and when somebody played your mother's song and you couldn't turn to her and smile? You've wondered? I've wondered, too. The questions, the feelings, the hurt, the awe, the beguilement come at me, and because I have remained vulnerable, because I don't even know how to buckle the armor or shine the shield, I am affected (call it afflicted) and I see story.
I use music and movement to maintain this state of mind—taking long walks before I write to shake my muscles loose, to knock the ache out of my joints, to stretch, and then to see. Down the hill, past the church, around the bend, and _bam_ , there it is—the blue rope of a thin snake in the street, or the wide shell of a horny tortoise, or a briny-backed deer in the woods, or a marigold, or perhaps my friend Kathleen, eighty years old, here with a story about the circus. There's not an entire memoir in this; I wouldn't suggest the preposterous. But what there is—what I need—is that tremble or curiosity or déjà vu that sets a memory free.
Or maybe it's raining and I have the house to myself. Maybe I don't care if I never did buy curtains and any enterprising neighbor could see. I play Bruce Springsteen until the house is shaking—his river songs, his glory ballads. I play him until I have him in the hollow of my bones, until it is not my true self in the reflecting window glass but a phantom version—a ghostly, smoky mystery. Vulnerable? Yes. Cracked open, anticipating? That, too.
Always I make sure that, even as I teach, I don't neglect my role—my privilege—as a student of so many things. As a student of cultures, when I travel. As a student of photography. As a student of gardens or a student of rivers, or a student of the rumba, samba, waltz. I'll go to a studio and I'll submit to the instructions of a real dancer half my age. He'll tell me that I don't stand straight or that I don't let the music steep or that I have to stop fighting myself to master the _swoosh_ or the contagious quicks of the cha-cha. He will say, Do not be ashamed by your insatiable wanting, or your need for lyric and lift. Do not be ashamed. Dance it.
Our best teachers teach us more about life than about anything else. They give us the chance to be slightly better people. They listen to us so that we can start listening to ourselves, so that we can remove all the junk that lies between us and our own authority, our own capacity for remembering. It doesn't matter who you are or what you do: Don't lose your urgency. Don't yield to the suspicion that you know enough, have seen enough, have wanted enough, have danced the perfect rumba. Don't get yourself all pretty, perfect, and complete. Value imbalance. Remain vulnerable.
# THREE
GET MOVING
# WHAT'S IT ALL ABOUT?
Your memoir must negate chronology with wisdom, presumption with knowing, misty maybes with a more robust version of life as it was lived.
Do you know, yet, what you're writing about?
Do you know what is at stake?
Do you know what questions and hopes, suggestions and pervasions will ride like a sine curve behind your prose—sometimes overt, sometimes subtle, always implicating?
There are no right or wrong answers here. There is no instantaneous knowing. There is only the requirement that you think these matters through. Memoirists work like gardeners in spring—planting the seeds, clearing the weeds, harvesting the bright-headed crop, arranging the stems. Memoirists must be patient—not just with themselves but also with the mass of material, and with the impulse to tell. Memoirists must understand, as well, what it is, exactly, that has propelled them into this land of terrible beauty and great danger.
In a letter to Willa Cather, Sarah Orne Jewett suggests that what "belongs to Literature" is the stuff born of a long, nagging itch: "The thing that teases the mind over and over for years, and at last gets itself put down rightly on paper—whether little or great, it belongs to Literature." Terrence Des Pres (channeling Henry James) suggests in _Writing into the World_ , that we have little choice in the writing matter: "What we select to write about... isn't a matter of choice so much as being chosen. Writing of every kind begins, as Henry James said, with its _donnée_ , the something given, the one small thing that cannot be refused."
Memoir is active, it is alert, it is not lazy. It is about asking the right questions about the past and about the human condition. What leads to violence? What is the aftermath of abrasion? How does one survive loss? Why do we tell ourselves stories to protect ourselves from the chaos of experience? How are big things small and small things big? How do the refrains from the past shape the reality of our present? Who the hell was I? What was I thinking? And if it happened to me, does it happen to you? How does _my_ story get me closer to _us_?
Sometimes you can get at those questions obliquely, through structure and white space. Sometimes you do it by rubbing the now against the then. Sometimes you accentuate the terrible discrepancy. Sometimes you are writing toward forgiveness—of yourself, of others. This is the beauty of memoir. If all your memoir does is deliver story—no sediments, no tidewater, no ambiguity—readers have no reason to return. If you cannot embrace the messy tug of yourself, the inescapable contradictions, the ugly and the lovely, then you are not ready yet. If you can't make room for us, then please don't expect us to start making room for you.
Kim, my dark-haired student with the Cleopatra eyes, chose to write her memoir about luckiness, unluckiness, and love. My favorite paragraph:
Love makes you dependent; pain pushes you to the breaking point of self-actualization. My parents' support and the stability they provided for me is something I'm still trying to justify by replacing their hands with my own, finger by finger. Every day I lift a barricade to get through hermitage and extroversion, harmony and entropy, my mother's love and my mother's illness, innovation and inundation. I was lucky, I was born an American, I was born healthy, I was born into a loving home. I was unlucky, I was born judgmental, I have seen terror, I have seen desperate cries for life. So we continue: surprised, derisive, and awake by intuition.
Jonathan wrote about prayer as hobby, and about religious fanaticism:
Prayer was my new hobby, easily eating up an hour of every morning. My religious observance became systematic: I had to make sure experimental conditions were optimal. Experiments fail if they aren't perfectly calibrated—perhaps my prayer was similarly ineffective because I was ignoring some ritualistic detail. Scientific precision was giving way to religious fanaticism. I was too skeptical of reality to reject superstition so quickly—and I had so much to lose. For two years, I was blinded by minutiae. Then I found academic biblical analysis.
Gabe wrote about surviving a heart condition; more than that, though, he wrote to imagine what a son's illness means to a mother:
This was also probably what she begged for when, after I had gone unconscious in the hospital that day in February, the doctor spoke with her and told her that her son was very sick and that every effort was being made to save him. She had flown to Peru the night before to be with her father who was on his deathbed. She must have hung up the phone, heard the echo of the handset hitting the cradle resounding in her head, and felt her knees buckling beneath her. She somehow gathered strength, said what she thought was a last goodbye to her dying father, and boarded a plane towards Philadelphia. Those eight hours of flight must have been claustrophobically helpless. No jet plane could have flown fast enough to make this trip bearably short. No altitude could have brought her close enough to God so that she could scream loud enough in his ear to please save her son.
Responsibility—to one's self and to others—was the theme that engaged Stephanie.
How much of your life, the life you know, is actually your own? We all do things for others, stretching out limbs like a thigmotropic plant clinging to the structure of another to both give and receive life-sustaining supplements. But what do we do for ourselves that we do not do for others? What moments are we robbed of, what people do we give too much to? And when, if ever, are we truly independent?
No one can or should tell you what to write about. But if you don't know where the memoir impulse is coming from, if you can't trace it, can't defend it, can't articulate an answer when somebody asks "Why'd you want to write a memoir anyway?"—stop. Hold those memoir horses. Either the mind has been teased for years upon years, or there's that small thing that won't be refused, or there's something else genuine and worthy. But nobody wants to hear that you're writing memoir because you need some quick cash, or because you think it will make you famous, or because your boyfriend said there's a movie in this, or because you're just so mad and it's about time you get to tell your version.
So know why you're writing, and then know this: No memoir in the history of memoirs has ever written itself. Every recorded story, detail, metaphor, and pause represents a decision made. You will be writing a life story that leaves (by necessity) the vast majority of your life story off the page. You will use these elisions to your advantage, elevate details into symbols, find the heart of a story within the fringe of a vignette, shuffle the chapters of time in search of answers. You will remain vulnerable and tell the truth and still, somehow, make certain that the story you tell is yours to tell and not a violation of trust. You will get there.
But at first you're going to need to wallow around in early drafts. You're going to need to experiment. Write the you in present tense, the you in past tense, the landscape, the weather, the song, the color of your life, the self-analysis. Buy several notebooks. Suppress no urge, douse no flame, do not be shy. These are drafts, and when you draft, you keep your self-censor in the closet.
You'll know when you're ready. You'll know when it's time to turn and face the book. Time to define your frames and filters, identify your themes, and conscientiously—and artfully—proclaim (to yourself only, at first): My memoir is about navigating loss. My memoir is about second chances. My memoir is about the power of love. My memoir is about injustice overcome. My memoir is about coming to terms with middle age. My memoir is about foreignness. My memoir is about defining home. My memoir is about the power of the imagination.
All of which is not quite what the pseudo-memoirists say. My memoir is about the time my sister rode across the country on a horse and left me behind with a pitchfork, says the pseudo. My memoir is about how I had to work so hard so that my wife could stay home and whittle. My memoir is about the second aunt on my stepsister's side who ate green-pea soup every day, for lunch and also supper. My memoir is about a house I built. My memoir is about a fishing trip. My memoir is about how much I hate my mother. The pseudos haven't climbed out of their own small circles yet. The pseudos haven't connected with the larger world, or with their readers. The pseudos are confusing anecdote with memoir. The pseudos collect the critics' ire. They mess it up for the rest of us.
The pseudos just clearly aren't ready yet. They need more time with the material. They need to more profoundly _know_. It's not that we don't want their details, or the recipe for that green-pea soup, or a vivid image of that crafty wife's best whittling job. It's that we need to know what it all _means_, and how it relates to us. Simple truths. It's just not memoir without them.
"A memoir is a work of sustained narrative prose controlled by an idea of the self under obligation to lift from the raw material of life a tale that will shape experience, transform event, deliver wisdom," Vivian Gornick writes in her essential guide, _The Situation and the Story_. "Truth in a memoir is achieved not through a recital of actual events; it is achieved when the reader comes to believe that the writer is working hard to engage with the experience at hand. What happened to the writer is not what matters; what matters is the large sense that the writer is able to _make_ of what happened. For that the power of a writing imagination is required. As V. S. Pritchett once said of the genre, 'It's all in the art. You get no credit for living.'"
In _The Art of Time in Memoir_, Sven Birkerts speaks not about engaging experience but about redeeming it. He speaks of patterns. He speaks of memoir as having the power not just to showcase but also to explain: "The memoirist writes, above all else, to redeem experience, to reawaken the past, and to find its pattern; better yet, he writes to discover behind bygone events a dramatic explanatory narrative."
And if you're not yet convinced, read (I implore you) Patricia Hampl, especially "Memory and Imagination," which appears in the essay collection _I Could Tell You Stories._ I have never taught a memoir class without assigning these sweet, instructive seventeen pages. I never go far, and no memoirist should go far, without thinking of Hampl's words, quoted earlier: "True memoir is written, like all of literature, in an attempt to find not only a self but a world."
What world do you live in? And how will you bridge your world to mine? And what will you say when somebody asks you: _What is your memoir about?_
# BEGINNINGS
ENOUGH. You've got your question(s). You suspect your themes. You've got ideas about frame or, at the very least, a willingness to search for one. You've worked your voice onto the page in a way that sounds like you. You've got notebooks full of scenes and stories. The memoir must begin.
The memoir needs a beginning.
Beginnings, clearly, set the tone. They extend an invitation, issue a warning, throw down a bridge. They let readers know what might be at stake. They foreshadow a book's relative complications, making it clear, from the start, whether the story will be complex or direct, tangled or straight-shooting. Beginnings signal the quality of the author's voice—how trustworthy, how revealing, how descriptive, how matter-of-fact, how dependent on the actual or the imagined. Beginnings tell the reader in the bookstore; the reader with an iPad, a Nook, a Kindle; the reader at a book club meeting whether or not he will want to read more. And before any of that, beginnings declare a project's market value to agents and editors. (I hate to be crass, but there it is.)
Not always but often, memoirists introduce themselves to readers with a prologue. Think of the prologue as a buffer zone, as an easing toward, as prefatory. Consider all the room it offers to suggest theme and tone and frame, the percolating entanglements of the memoir's story lines. Consider how efficiently it can establish mood.
I advocate prologue in memoir. I find that it helps everyone involved—the writer, the reader—if certain early declarations are made. The thrill of literary memoir isn't bound up in plot, per se, and it shouldn't be bound up in gossip. The thrill of the genre—or, at least, one of its chief pleasures—is all webbed into just how well the author manages to answer the questions or explore the themes or concerns that lie at the story's heart. Coy doesn't work—or at least I don't think it does. The questions, themes, and concerns that fuel a memoir are often best enunciated at the start. And prologues (call these opening zones prefaces, if you like) are such fine, flexible containers. You can make them do whatever you want them to do. You can even give them different names.
Some examples. In _Into the Tangle of Friendship: A Memoir of the Things That Matter_, my second book, I was interested in understanding how the people who enter our lives shape who we become. I begin with a scene, but embedded in that scene are the questions that will rise and fall throughout the ensuing narrative:
Call the wooden climber in the center the seat of power. Call the sandbox and the swings and the splintered tables the hearts of commerce; the shade beneath the oaks, the church; the ravaged muddy creek beyond, this country's borderlands. It is spring—a puckering day. The kids—alone, in pairs, afraid, delighted, in cars, on foot, in a parade of rusty wagons, on the verge of brave entanglements—have finally come.
Out on the playground's edge, the sun at my back, I sit and wait and wonder. I watch. I know that the coming hours will shape the children's view of friendship and, consequently, their view of themselves. I know that there will be struggles, winners, losers, so many one-act plays, mysteries and parables. Who is the leader here, and who the disciple? Who will betray, who can be trusted? Who will be drawn in, who locked out? How will passions coalesce, what will be talked about, who will care? When will the accretion of events, hopes, revelations, gifts, become the stuff of memory and faith, a durable philosophy of friendship?
[A few pages later]... What do any of us know about friendship, isn't that the question here? What can we make of how it changes over time, how it is about wonder at first, then self-definition, then survival, how it is always about comfort, about simply being here, alive? How do we come to terms with the responsibilities and limitations, the possibility of schisms and despair? Because isn't it true that the more we let others into our lives, the safer we become and also the more endangered. Isn't it worth it nonetheless? Friendships matter; they rebut death, they tie us to this earth, and, when we're gone, they keep us here; our friends remember us. Looking back and looking forward we see that this is true: friendship stands as both a scaffolding and a bridge.
In _Seeing Past Z: Nurturing the Imagination in a Fast-Forward World_, I don't call the prologue "Prologue." I call it "Imagining Tomorrow." It ends with these words:
I want to raise my son to pursue wisdom over winning. I want him to channel his passions and talents and personal politics into rivers of his choosing. I'd like to take the chance that I feel it is my right to take on contentment over credentials, imagination over conquest, the idiosyncratic point of view over the standard-issue one. I'd like to live in a world where that's okay.
Some call this folly. Some make a point of reminding me of all the most relevant data: That the imagination has lost its standing in classrooms and families nationwide. That storytelling is for those with too much time. That winning early is one bet-hedging path toward winning later on. That there isn't time, as there once was time, for a child's inner life. That a mother who eschews competition for conversation is a mother who places her son at risk for second-class citizenry.
Perhaps. But I have this boy with these two huge dark eyes who thinks and plays and speculates. I have a boy who is emergent and hopeful, intuitive and funny, somewhere between childhood and adolescence. How will he define himself as the years unfold? What will he claim as his own? What will he craft of the past? What will he do with what he thinks, make of what he dreams, invent out of the stuff of all his passions? It is my right—it is my obligation, even—to sit with him for a while longer, imagining tomorrow.
I don't mean to sound extreme. Not every memoir prologue serves as the great question reveal; they don't all rely on the snaking question mark. Frank Conroy's classic memoir, _Stop-Time_ , a book about childhood and adolescence, begins with a page torn out of Conroy's adult life. It's a terrifying four paragraphs—that's it—about Conroy's trips to London "once or twice a week in a wild, escalating passion of frustration, blinded by some mysterious mixture of guilt, moroseness, and desire." Conroy doesn't pose his question outright. He doesn't even feel compelled to make the direct connection between those wild adult nights and the childhood story to come. We understand, implicitly, that who Conroy became is a function of who he was, and who his childhood allowed him to be. We have been introduced. And now we read.
Lucy Grealy, in _Autobiography of a Face_ , uses her prologue to tell a story about pony parties, to invite us in. "My friend Stephen and I used to do pony parties together," the book begins, and so with a sideways glance, with winning innocence, Grealy announces what she's up to here. She is not writing a story about her childhood cancer and its devastating effects to win our sympathy. No, indeed, she's going to keep that cancer at bay for as long as she can. She's going to write about beauty, its absence, a child's wisdom, an adolescent's struggle, the futility all of us feel, at times, about fitting in. To what? To whose standards? To what end result? Her prologue prepares us for the force of her intelligence.
In _Hiroshima in the Morning_, a memoir about, among other things, the consequences of forgetting and the dangers of solitude, Rahna Reiko Rizzuto uses her prologue to issue a warning: "I can tell you the story but it won't be true," she begins. "It won't be the facts as they happened exactly, each day, each footstep, each breath. Time elides, events shift; sometimes we shift them on purpose and forget that we did. Memory is just how we choose to remember." Those who wade into Rizzuto's memoiristic waters know at once what they are in for. An impressionistic book. A shift-and-slide book. A search, not a gung-ho plot.
In _Blue Nights_, a memoir about regrets, aging, a daughter's dying, Joan Didion gives us color as mood: "In certain latitudes there comes a span of time approaching and following the summer solstice, some weeks in all, when the twilights turn long and blue." Blue will sustain this heartbreaking narrative. Blue draws its curtains around it.
With _The Tender Bar_, J. R. Moehringer gives us a choral _we_ in a prologue that introduces his tale about being raised among tall drafts and beer talk, and in the absence of the person who was supposed to love him most. Everything to come, in this memoir, is here, in this first paragraph—sometimes explicit, sometimes metaphoric. We know, thanks to the prologue, just what we're in for.
We went there for everything we needed. We went there when thirsty, of course, and when hungry, and when dead tired. We went there when happy, to celebrate, and when sad, to sulk. We went there after weddings and funerals, for something to settle our nerves, and always for a shot of courage just before. We went there when we didn't know what we needed, hoping someone might tell us. We went there when looking for love, or sex, or trouble, or for someone who had gone missing, because sooner or later everyone turned up there. Most of all we went there when we needed to be found.
In her glorious memoir about her friendship with Robert Mapplethorpe, _Just Kids_, Patti Smith uses her opening pages (she titles them "Foreword") to walk us into her frame. This will be, it's clear, the story of a best friend vanished. This will be a meditation—the things that were set against the things that disappeared. This will be an accounting of a relationship that was always tipped toward imbalance. This will not be a judgment; it will be a blessing.
I was asleep when he died. I had called the hospital to say one more good night, but he had gone under, beneath layers of morphine. I held the receiver and listened to his labored breathing through the phone, knowing I would never hear him again.
Later I quietly straightened my things, my notebook and fountain pen. The cobalt inkwell that had been his. My Persian cup, my purple heart, a tray of baby teeth. I slowly ascended the stairs, counting them, fourteen of them, one after another. I drew the blanket over the baby in her crib, kissed my son as he slept, then lay down beside my husband and said my prayers. He is still alive, I remember whispering. Then I slept.
Here's how Meghan O'Rourke prepares us for her journey through loss—with a story stolen from childhood about a town on the banks of the Battenkill, a dog named Finn, and the mother O'Rourke will lose too young. Loss is open-ended. It cannot finally be reconciled. It leaves us bewildered. It leaves us yearning. The final paragraph from the lilting prologue to _The Long Goodbye_ is palpable with ache and desire:
When we are learning the world, we know things we cannot say how we know. When we are relearning the world in the aftermath of a loss, we feel things we had almost forgotten, old things, beneath the seat of reason. These memories in me of my mother are almost as deep as the memories that led Finn to flush and point. As the fireflies began to rise one summer evening, my mother called to us. _Look_, she said. _See them? Run and get a jar and a can opener._ And my brother and I ran in for jars and our mother poked holes in the lids and sent us across the lawn to catch the fireflies. The air was the temperature of our skin.
With his prologue to _No Heroes_, a memoir recounting his return to the hills of Kentucky, Chris Offutt establishes his outsiderliness from the very start. He tells you the facts—he's been gone twenty years—and he tells you how hard this chapter in his life is going to be. Offutt uses his prologue to put us on his side, to prepare us for all we will see. He asks the question, without employing the question mark, of whether any of us can really go back home again.
No matter how you leave the hills—the army, prison, marriage, a job—when you move back after twenty years, the whole country is carefully watching. They want to see the changes that the outside world put on you. They are curious to know if you've lost your laughter. They are worried that perhaps you've gotten above your raisings.
To reassure the community, you should dress down except when you have to dress up, then wear your Sunday-go-to-meeting clothes. Make sure you drive a rusty pickup truck that runs like a sewing machine, flies low on the straight stretch, and hauls block up a creek bed. Hang dice from the mirror and a gun rack in the back window. A rifle isn't necessary, but something needs to be there—a pool cue, a carpenter's level, an ax handle. Where the front plate should be, screw one on that says "American by birth, Kentuckian by the grace of God."
Finally, consider the power of italics on a first, untitled page. Consider, in other words, what Michael Ondaatje does at the very start of _Running in the Family_, a memoir built from fragments. Ondaatje is, with these opening lines, telegraphing his process, acknowledging the truth that recollecting childhood is hard and dangerous work. It will steal your dreams. It will unwrap time. It will not come easily, and the story will tumble, and for a while, at least, the boy that Ondaatje was will appear as a character, not just to us but also to him. Ondaatje doesn't need to name his prologue and, indeed, had the word appeared on the page it would have interfered with the organic quality of the prose. But this is prologue as poetry, prologue as song, prologue as the writer working near.
_Drought since December._
_All across the city men roll carts with ice clothed in sawdust. Later on, during a fever, the drought still continuing, his nightmare is that thorn trees in the garden send their hard roots underground towards the house climbing through windows so they can drink the sweat off his body, steal the last of the saliva off his tongue._
_He snaps on the electricity just before daybreak. For twenty-five years he has not lived in this country, though up to the age of eleven he slept in rooms like this—with no curtains, just delicate bars across the windows so no one could break in. And the floors of red cement polished smooth, cool against bare feet._
_Dawn through a garden. Clarity to leaves, fruit, the dark yellow of the King Coconut. This delicate light is allowed only a brief moment of the day. In ten minutes the garden will lie in a blaze of heat, frantic with noise and butterflies._
_Half a page—and the morning is already ancient._
Again: Maybe you'll decide to write a prologue (or a preface or an untitled italicized block) for your memoir. Maybe you won't. It is far from mandatory. What _is_ mandatory is that you spend real time thinking about and working with your beginning. Don't cop to mere chronology, unless chronology is a theme or a question. Don't merely explain; this isn't journalism. Don't simply plunge in, assuming we'll follow along; we're only following, I can promise you this, if you've been smart and deliberate about your proximities and patterns, the nearness and farness of your voice. If all I wanted was to know what happened to some stranger from earliest memory through most recent, I'd be reading autobiography. As far as I can tell, you're signing up for memoir here. Write considered first lines, provocative first lines, telling first lines, self-disclosing first lines, first lines that hold the entire book within themselves, like a seed.
Draw us in. Seduce us.
# BLANK PAGE
THAT'S right. That's what I have for you here. A blank page.
It's all yours.
Use it.
Perhaps you'll write your entire first draft in one fell swoop—your boyfriend bringing you cups of tea, your cat curling around your legs, your phone ringing incessant and unanswered, your Twitter feed silent. Perhaps you will take pieces of things, the fragments you've been writing all along—your weather, your color, your mother's kitchen—and lay them out on a narrow table, one beside the other, until the right interrelationships reveal themselves and prompt the story. Perhaps it's just one line that you have so far—one line, but it's a good one. Perhaps you'll work like an architect of old, laying sheets of trace paper above your typed prose to find the bend in time.
Perhaps.
# FOUR
FAKE NOT AND OTHER LAST WORDS
# FAKE NOT
TO write memoir is to enter, as we have seen, a war zone—with yourself, with the ones you love, with the critics you may never meet. It is to lay your life on a line, on several lines. You may be ridiculed, harassed, taken down in the court of public opinion. Worse, your aunt Mathilda may never speak to you again. You may be called upon to defend the form. You may feel the need. Your sole protection will be the work itself—its integrity, its artfulness, its originality, its capacity to entertain or seduce, its implicit recognition that you are not, in the end, the only person who ever had a story to tell, the only person worth listening to. What you are, if you're a memoirist, is a person who has been trusted to help us see, or help us think, or remind us that we (the rest of all us _me_s) are not alone.
Think you're ready? Feel immune? Have you coffee'd lately with Neil Genzlinger who, writing in the _New York Times Book Review_, revealed that he had just communed with the memoir listings on Amazon? Tens of thousands of titles, he reported, and the small minority of "memoir-eligible" authors were, in his words:
... lost in a sea of people you've never heard of, writing uninterestingly about the unexceptional, apparently not realizing how commonplace their little wrinkle is or how many other people have already written about it. Memoirs have been disgorged by virtually everyone who has ever had cancer, been anorexic, battled depression, lost weight. By anyone who has ever taught an underprivileged child, adopted an underprivileged child or been an underprivileged child. By anyone who was raised in the '60s, '70s or '80s, not to mention the '50s, '40s or '30s. Owned a dog. Run a marathon. Found religion. Held a job.
Genzlinger is just one voice in a choral crowd that has had it with, in his words, this "absurdly bloated genre." I've introduced you, already, to others. Perhaps you have started to marvel at my moxie and devotion—because I not only write and teach memoir; I also am _writing_ about writing and teaching memoir. Marvel on, I say. I'm used to that. I can spot a raised eyebrow a mile away.
There are those who suggest—overtly and otherwise—that the cure for memoir is the redefining, or perhaps undefining, of memoir. Let it lie a little more. Let it extravagantly cheat. Allow it to let down its tangled, rootsy hair. _Wink. Wink._ Give it more room. Absolute, uncontested, thoroughly documented truth, we've established, is impossible when the primary source or instigator is a fickle brain. And so there will be gaps; why not just fill them? And so there will be interpretations; why not just claim? We lie by omission. We lie by trying to be kind. We lie because we love. We lie because we hate. We lie because we're ashamed. What does memoir think it is, anyway? We're all just tumbling around here on Planet Earth. Who _can_ handle the truth?
If we can't remember everything, if our memories change every time we recall them, if your brother is sure (he'll bet you the house) that your blue is his pink, your river his stream, if Ben Yagoda thinks we're mostly self-promoting liars, if Neil Genzlinger is pretty close to certain that he's read your memoir before, at least twice since last Sunday, shouldn't we pull our chair up to another literary feast? Sign up for a different genre team? Declare our methodology in a manner that gives the work that follows meaning?
Many writers do. Take _A Heartbreaking Work of Staggering Genius_ from your shelf, and read Dave Eggers's preface: "For all the author's bluster elsewhere, this is not, actually, a work of pure nonfiction. Many parts have been fictionalized in varying degrees, for various purposes." Eggers informs us that the dialogue has "of course been almost entirely reconstructed." He tells us that the author, meaning himself, "had to change a few names, and further disguise these name-changed characters." He tells us that "there have been a few instances of location-switching." And he fesses up to omissions: "Some really great sex scenes were omitted, at the request of those who are now married or involved."
Eggers tells us what we're in for, in other words. He grasses up his fearless, fearsome playing field. You're going to like this, or you won't. You're going to play along or, if it's memoir you're seeking, you're going to find another book. However you, the reader, respond, Eggers, the writer, has been truthful. He has not led you blind into a false confessional.
By his own accounting, Eggers has not written unadulterated _memoir_. Neither has the self-proclaimed Bloggess, Jenny Lawson, who gives her memoir—_Let's Pretend This Never Happened_—a leeway-liberating subtitle: _A Mostly True Memoir_.
_All right. Wink._
But many writers offering steep disclaimers still shelve their books among nonfiction, a choice that puzzles me. _Reading Lolita in Tehran_ is, of course, an important book in so many ways, recounting as it does the two years the author gathered Iranian women in her home to read and share Western literature. This is a book with something to say, something to teach, something to reveal. But look at Azar Nafisi's author's note, which I reproduce in its entirety:
Aspects of characters and events in this story have been changed mainly to protect individuals, not just from the eye of the censor but also from those who read such narratives to discover who's who and who did what to whom, thriving on and filling their own emptiness through others' secrets. The facts of this story are true insofar as any memory is ever truthful, but I have made every effort to protect friends and students, baptizing them with new names and disguising them perhaps even from themselves, changing and interchanging facets of their lives so that their secrets are safe.
Now consider the description below, found early in the book. Beautifully written, of course. Evocative, absolutely. But because we have been warned that names and features and personal histories have all been thoroughly squished together and remixed, it's difficult to know what to do with these "facts." If these personal details—so lovingly drawn, so particular—aren't true, what else is? Or perhaps some of it _is_ true, but how are we to know? What should guide us?
Mahshid is proper in the true sense of the word: she has grace and a certain dignity. Her skin is the color of moonlight, and she has almond-shaped eyes and jet-black hair. She wears pastel colors and is soft-spoken. Her pious background should have shielded her, but it didn't. I cannot imagine her in jail.
Over the many years I have known Mahshid, she has rarely alluded to her jail experiences, which left her with a permanently impaired kidney....
Many secrets must be protected. Many people should be kept hidden from prying eyes. But when so much fictionalizing goes into making a book, it is no longer memoir. If you're changing all the names on purpose, if you're writing what might have been, if deliberate disguise is your method, if you are leaning hard on half truths, if your memoir feels like so much fiction _even to you_ , it's time to take your story—your still valid, still potent story—and set it free in another genre. Because nonfiction, as Sallie Tisdale has written, "is supposed to tell the truth—and telling the truth is what people _suppose_ us to do."
Don't ruin memoir for the rest of us. Don't discourage and unsettle us. Don't join James Frey ( _A Million Little Pieces_ ) and Margaret Seltzer ( _Love and Consequences_ ) and all the obvious scammers. You'll leave us feeling culpable for your lies, for buying into them. Or you'll leave _me_ feeling that way.
Here, from my personal treasure trove of shame, is an example: One cold rainy winter day, I carried a book I was reading to a friend of mine. I sat in his office and read it out loud. Said, through my teary, cracked green eyes: _Listen to the beauty of this._ I talked about the power of the book. I talked about the importance of the story. I talked about the talents of the author. I took my friend's _time_ , intoning as I turned the pages of the book:
I want to tell you about my son. That is why I am writing all this down in some mad, frenetic attempt to share him with you.
If I do _nothing_ else with what is left of my life, let me do this. Let me show you something extraordinarily unique, something more beautiful than _anything_ you have ever seen. Something mad. Mad to live.
Inside a dream.
The story of Awee is very much like the story of Awee running bases.
You know he's going to make it home, but you hold your breath as he slides through the human obstacles that stand directly in his way.
I could take his picture. He's standing there with his teammates. Their arms draped around him because they love him. But it really wouldn't tell you much about who Awee is.
...
Awee was eleven years old when he came to me. Adopted.
This _entire_ book (and then some) could be about his eyes.
I didn't find out until some time later that the memoir I'd read with such dramatic passion— _The Boy and the Dog Are Sleeping_ , which was acclaimed, by the way, which was award-winning—was a hoax. Its author, who had christened himself Nasdijj and had claimed to have been raised on a Navajo reservation by a white cowboy and a Navajo mother, was in fact Tim Barrus, a white man, a writer of gay erotica, a borrower of other people's styles and stories, a man who would soon become famously angry at any attempt to uncover his proxy.
It doesn't matter that I wasn't the only one duped by Tim Barrus—the prize givers, the gushing reviewers, the enthusing early readers stood right there with me. I still feel shame at having been among the believers. I still blush at the fervency of my faith in his story, at how I had been language seduced.
I'd let his language seduce me. I'd been fooled.
It's obvious, right? Don't lie on purpose when writing memoir. Don't appropriate other people's tragedies as your own, or turn yourself into some kind of sexy outlaw, if you are actually not. Don't say you are a half–Native American girl, a foster child even, a drug runner more so, a gang gal (come _on_ ) if you are white through and through and were raised in a wealthy Los Angeles neighborhood by your own attentive parents. Don't fabricate a boy and make us love him; we'd have loved him just as much (okay, maybe not _as_ much, but close) if you'd called him what he was, which was a fiction.
Try, instead, to get as close as possible to the what-actually-was. I have said it; I shall repeat myself; I want to be perfectly clear. We understand that what we remember dislodges and agitates during the very act of remembering. We recognize that the important stuff may lie in the glimmers and shadows, in the imprecisions, in the misremembered. We know that any dialogue that lives outside a transcript is iffy at best. We know that shaping a life means choosing a life means leaving a lot of it out. Memoir requires of us artistry. Sometimes life is anything but.
Still: Best not to pretend we affirmatively know when we don't know at all. Best not to polish up something quasi until it feels like, looks like, maybe could pass for something absolute. "Our lives are uncertain..." Sallie Tisdale also said in her pivotal essay, "Violation." "Make that uncertainty part of what you tell."
Take your uncertainty inspiration from, say, _The Liars' Club_. Time and again, Mary Karr confesses that her sister, Lecia, would tell the story differently. Time and again she says one version or another of: _I don't remember this part; it might have gone something like this._ Sometimes Karr goes so far as to share competing tales so that we readers might choose. We love Karr for this. We trust Karr for this. We get it, because we're human, too.
Here's an example, mid-scene. Lecia and Mary are stuffed into the back of their mother's car as she races out of town ahead of a projected killer storm. The mother is Nervous to begin with. She left town far too late. This endangered carload has gotten as far as a very steep bridge, and an accident is waiting to happen. Karr is roping out the scene—keeping it vivid, keeping it alive. That she stops to tell us that her memory isn't precisely what her sister's is does not slow this story down.
Lecia contends that at this point I started screaming, and that my screaming prompted Mother to wheel around and start grabbing at me, which caused what happened next. (Were Lecia writing this memoir, I would appear in one of only three guises: sobbing hysterically, wetting my pants in a deliberately inconvenient way, or biting somebody, usually her, with no provocation.)
I don't recall that Mother reached around to grab at me at all. And I flatly deny screaming. But despite my old trick of making my stomach into a rock, I did get carsick.
Do you want another example? Then look at Patti Smith in _Just Kids_ and her confession—perfectly understandable, reassuringly human—of hazy early memories. Nothing is lost in the not-quite-remembering. Indeed, the sentences are particulate and lush:
When I was very young, my mother took me for walks in Humboldt Park, along the edge of the Prairie River. I have vague memories, like impressions on glass plates, of an old boathouse, a circular band shell, an arched stone bridge.
Alice Ozma begins _The Reading Promise: My Father and the Books We Shared_ with a declaration, an inviolable-seeming truth: "It started on a train. I am sure of it. The 3,218-night reading marathon that my father and I call The Streak started on a train to Boston, when I was in third grade." We have no reason not to believe Ozma; we're not interested in wrestling with her record. It's Ozma herself who introduces uncertainty—a trust-inducing solution. Turn the page, and it's there:
If you ask my father, though, as many people recently have, he'll paint an entirely different picture.
"Lovie," he tells me, as I patiently endure his version of the story, "you're cracked in the head. Do you want to know what really happened or are you just going to write down whatever thing comes to mind?"
Orhan Pamuk, writing in _Istanbul_ , professes his tendency to exaggerate. There are those, he says, who disagree with his version of things. There is, for him, the leeway he gives himself by focusing not on accuracy but symmetry. I don't agree with Pamuk here about symmetry trumping accuracy. Both can be achieved, and both must. But I trust his account because he's copped to his process. He lets us know which lines to read between.
Later, when reminded of those brawls, my mother and my brother claimed no recollection of them, saying that, as always, I'd invented them just for the sake of something to write about, just to give myself a colorful and melodramatic past. They were so sincere that I was finally forced to agree, concluding that, as always, I'd been swayed more by my imagination than by real life. So anyone reading these pages should bear in mind that I am prone to exaggeration. But what is important for a painter is not a thing's reality but its shape, and what is important for a novelist is not the course of events but their ordering, and what is important for a memoirist is not the factual accuracy of the account but its symmetry.
Finally, consider this from _Half a Life_ by Darin Strauss, who has spent much of his life either running from or trying to re-create the moment when his car hit a girl on a bike. Most memories aren't continuous. Images come to us in spurts. We struggle to see; we can't quite see: This, too, is memoir. Write it down.
This moment has been, for all my life, a kind of shadowy giant. I'm able, tick by tick, to remember each second before it. Radio; friends; thoughts of mini-golf, another thought of maybe just going to the beach; the distance between the car and bicycle closing: anything could still happen. But I am powerless to see what comes next; the moment raises a shoulder, lowers its head, and slumps away.
I trust Loren Eiseley because of the many times he lets us know that there are white spaces around his memories, gaps—and because he does not try to fill them. I trust Alison Bechdel in her graphic memoir because she leaves evidence of her remembering on the page—her early, confused diary entries; the court report; the images she has drawn not just from memory but also from photographs. I trust Dorothy Allison in _Two or Three Things I Know for Sure_ not just because of the photographic record she binds in with her words but also because she issues cautions: "I'm a storyteller. I'll work to make you believe me."
Memoirs are never inferior because memory partially fails or because the journal of record goes suddenly blank or because the raw anecdote must be leavened with some poetry in order to make it psychically true. You are in danger of getting some of it wrong, and I understand, because I am human, too, and because my memory fails me, too, and because trying is the only thing we have, and because I have been wrong, plenty.
Just don't pretend that your story's airtight. Don't write as if there are no other versions. Don't make things up deliberately and hope that I won't notice. Don't assert an inviolable tale. The moment you claim _every word of this is true_ is the beginning of my lost faith in you. Take a page from David Carr as you think about your memoir. He tells you all he did to make his story. He tells you why it can't be perfect.
From the author's note for _The Night of the Gun_ :
The following book is based on sixty interviews conducted over three years, most of which were recorded on video and/or audio and then transcribed by a third party. The events represented are primarily the product of mutual recollection and discussion. Hundreds of medical files, legal documents, journals, and published reports were used as source material in reconstructing personal history. Every effort was made to corroborate memory with fact and in significant instances where that was not possible, it is noted in the text.... All of which is not to say that every word of this book is true—all human stories are subject to errors of omission, fact, or interpretation regardless of intent—only that it is as true as I could make it.
# EXERCISE EMPATHY
IT was a Tuesday. I was on my way. My bag was packed—books, lesson plans, camera. The skies were bright. I locked the house behind me and hurried past the garden edge, across the street, toward the old horse show grounds, to the stone train station. I slid into a window seat after the train rolled in. I watched the familiar landscape. Teaching is a ritual. The smudged SEPTA-train glass, the backyard views, the occasional cat or opossum, the stray grocery cart, the ballooning plastic bag, the solitary bike wheel, the hedge of violet-tinted flowers, the fissured fence, the suburban yielding to the borders of my city—it's all prelude and segue. I never touch the book on my lap.
On this particular midmorning, three stops shy of the Thirtieth Street Station, a kid took the seat next to mine.
"Crowded today," he said.
"Car show at the convention center," I told him.
"All these people for a car show?" he said, turning around and glancing back at the crowd of white-haired auto fanatics in the seats behind us. He had a nice face, a clean profile, Mediterranean skin, this kid. He had a backpack heavy with books, Penn insignia. We talked about his classes. I told him about mine.
"You write memoir or just teach it?" he asked.
"I've written it," I averred.
"What was it about? Your memoir?"
"Well, that depends," I said, wishing the answer were easier, less list-entailing. "I've written five."
" _Five_ memoirs?" The kid seemed startled, genuinely concerned. He felt the need to press. "Isn't five a lot? I mean, how much have you _lived_?"
There are plenty of reasons to write memoir. There are plenty of reasons not to. We've gone through all this. The decision is yours. I have just two more things to ask as you go about handling your truth: Exercise empathy. Seek beauty.
* * *
Empathy first, a theme I've touched on, an ideal upon which I would now like to dwell. Because it is that important. Because if you take nothing else from this book, take this. Please.
I'm not going to go all _Merriam-Webster's_. You know what empathy is. You must know why it matters. Not just for the sake of karma, divorce rates, guilt quotients, legal fees, and confession booths but also for the sake of the book—yours. Memoirists who lack empathy produce flat, self-heralding stuff; I hope I've made that clear. They demonstrate no skill for listening, no eye for nuance, no tolerance for opposing points of view. They prove no innate appreciation for the value of complexity or the many-sidedness of the Big Issues. They fail to speak to the ceaseless tug and release, tug and release that lives at the biological, philosophical, and relational heart of life itself. To write without empathy is to drone; it is to lecture; it is to be the only person talking in a crowded room. It is to accuse, and it is, therefore, not memoir.
If the only skin you can imagine is your own, if you cannot walk another's mile, if he is always wrong and you are always right, if it is all your mother's fault, if other people's histories are less important than your own, inhale big time and blow the whistle loud. Go live the weather of a garden. Go skip some stones. Go sift the photographs again. Background. Foreground. What is casting shadows?
Read _The Duke of Deception_ (I'm going to have to insist), then ask yourself what good that book would have been had Geoffrey Wolff been writing solely to humiliate or trump his father. Does Wolff pretty up his dad, pretend that things were not terribly hard? Of course not. That would be lying. Does Wolff pretty up himself? No, indeed. He neither lambasts his father nor posits himself as the hero of this heartbreaking childhood tale. Wolff grows up first suspecting, then knowing that his father has fabricated entire swaths of his personal history. He grows up answering to the high expectations of a man who has, in so many ways, failed himself. Writing the memoir affords Wolff the necessary distance—and intimacy—to write passages as deeply steeped in complexity—and humanity—and forgiveness—as this:
By now I knew my father was a phony. I wasn't dead sure about Yale, but I was sure he was a phony. My father's lesson had taken: he had tried to bring me up valuing precision of language and fact. So around him I became a tyrant of exactitude, not at all what he had meant me to be. Unable to face him down with the gross facts of his case I nattered at him about details, the _actual_ date of the Battle of Hastings, the world's coldest place, the distance between the moon and the sun, the number of vent-holes in a Buick Special. I became a small-print artist.
I was harder on my father after I had the goods on him than he had ever been on me. He had always had the goods on me. And he had never made cruel use of them.
Mira Bartók likewise could have written a thoroughly condemning account of her schizophrenic artist mother and had the court of public opinion in her favor. Thankfully, that wasn't her purpose with _The Memory Palace_. Nor was Bartók trying to gain her readers' pity as she related the story of the traumatic car accident that reconfigured the neural pathways of Bartók's own brain. Bartók's purpose was to contextualize, to understand her mother's compromised brain and, yes, her bizarre and often hurtful behavior through the lens of her own injured neurology. The result is measured, quiet—more questions than answers, not accusation but tender sadness, a telling decision to quote Nicolaus Steno in the book: "Beautiful is what we see. More beautiful is what we understand. Most beautiful is what we do not comprehend."
If Mary Karr had not worked so hard to get to the root of her mother's Nervous condition, her unsettlingly messy approach to motherhood, _The Liars' Club_ would still be a Very Good Book. But it would not be the extraordinary one that it is. We're not interested in jeering at, sneering over Karr's mother. That's middle school. That's hollow. We, like Karr, want to know _how_ a woman becomes this troubled, crazed, terrifying, seemingly out-of-touch, but always somehow near and somehow not not-loving mother. With wit, with poetry, with verve, with suspense, Karr works her way toward an answer.
Dani Shapiro had a difficult relationship with her beautiful mother. That is putting it mildly. Still, look at what Shapiro does toward the end of her memoir, _Devotion_. This is a perilous scene. This is a fearful moment. Shapiro retells it virtuously. She surrenders, to her mother, grace.
The tumors in my mother's brain looked like dust, sprinkled there on the black-and-white lunar landscape of the X-ray. The oncologist pointed to them with the tip of his pencil. "There," he said. "Do you see that? And there." He kept moving his pencil. Finally I began to see that the grayish blur he was showing us was actually dozens—maybe hundreds—of minuscule tumors.
"So how do we get rid of them?" asked my mother.
I sat next to her, close enough to touch. Her winter coat was folded in her lap, and her cane rested against the side of the oncologist's desk. For the first time, my mother looked brittle, as if her bones might break from a fall.
"We don't," the doctor answered. "We can treat them, but..." He trailed off, shrugging his shoulders as if to say that these specks were too much for him.
"Mostly, Mrs. Shapiro, what we can do at this point is make you comfortable."
"Comfortable," my mother repeated.
Finally, if Anthony Shadid had, in the course of rebuilding his great-grandfather's estate in Marjayoun, failed to put all of his difficult interactions with local craftsmen and construction crews, neighbors and detractors, into empathetic context, _House of Stone_ would be a thumbed nose of a book, an I'm-the-hero-and-they're-the-antagonists stomp across a war-torn part of the world. But this is Shadid, the Pulitzer Prize–winning journalist. Shadid, who was known, in all of his work, not just for his investigative talents, courage, and lyricism but also for his broad-minded compassion. He puts that to work, over and over, across the pages of his memoir:
In Isber's former domain, the ordinary has been, for nearly a century, interrupted by war, occupation, or what they often call in Arabic "the events." These are circumstances that stop time and postpone or conquer living. Traditions die. Everything normal is interrupted. Life is not lived in wartime, but how long does it take for the breaks in existence to be filled? How many generations? This is a nation in recovery from losses that cannot be remembered or articulated, but which are everywhere—in the head, behind the eyes, in the tears and footsteps and words. After life is bent, torn, exploded, there are shattered pieces that do not heal for years, if at all. What is left are scars and something else—shame, I suppose, shame for letting it all continue. Glances at the past where solace in tradition and myth prevailed only brings more shame over what the present is. We have lost the splendors our ancestors created, and we go elsewhere. People are reminded of that every day here, where an older world, still visible on every corner, fails to hide its superior ways.
Empathy doesn't soften you; it smartens you. Empathy gives you something to say. Empathy stops you—let it stop you—from deliberately hurting those who become essential to the story you feel you must tell.
Your scene centers on a best friend in a hospital, say. How far will you go— _should_ you go—in denuding her of her dignity? Are all those intravenous lines, all that blood spurting out of her nose, all that drool, all those excretions the point? Is your story bigger than that? Are you? Are you only writing this down because it feels, well, writerly and shocking to have collected all those details, or because you imagine book club readers exclaiming over your brave and detailed gruesomeness? Are you thinking that you can say what you want because your friend lost her battle with cancer? None of that is reason enough.
A brother appears heartbroken on your doorstep. You're angry at him, sure. You even have every right to be. But how much of his devastation belongs to you? How _broken_ must you render him for us to see your point? How, in fact, is he going to feel when he finds your words inside a book that his new girlfriend is also reading? (You never imagined she'd hear about this, you never thought...)
Your neighbors are fighting again, hurling insults at each other. It's atmosphere, sure. It says something about the condition, the weather of your own life. It's somehow integral to what you have to say—about suburban life, maybe, or about the permeable quality of overgrown azalea hedges. But how many of their heated words (their own words, their private anger) do you need to prove your point? How will you answer their red-faced questions when they read your book?
That teacher is ruining your child's fourth-grade year. She has given him a D in Science not because of what he knows (damn it, he knows it all!) but because of his handwriting. She has thrown his books to the floor in despair over his still developing organizational skills. She has been called to task for her behavior by the principal himself. This may be essential to your story. This may be true. She may be all wrong, in fact, when it comes to your son. Do her the favor of keeping her name out of your book, for perhaps she's already pledged to do better next time. Perhaps she's already trying.
Your first boyfriend duped you. You'll never forgive him. You lost confidence and gained thirty pounds. Everything bad that happened after that—everything you were denied, everything you lost—goes straight back to the yellow-teethed, pockmarked, bad-dressing, you-can't-believe-you-ever-loved-him asshole. That is your story. You plan to tell it. You have a World-Class Epiphany all set for the end. But. Weigh the evidence again. Think tug-and-release. Tug. Release. There may well be more to this story and if you search empathetically for that bigger _more_ , then your past—and that lousy, good-for-nothing, maybe-it-wasn't-in-fact-all-him dupe—may reveal itself newly. Your past may be more interesting. His character may be more complex. You'll have a better book.
I can't say it enough. To my students I never stop saying it: You can't know what is going to offend, or mark, another. You can't foresee the many ways a book—even a small book, even a self-published one distributed to a mere dozen friends—will carry forward through your life and through the lives of others—leaving indentations here, scars there, a trail of tears, an infiltrated reputation. I speak from experience—five memoirs. I speak as one who has systematically sought permission for every line in a book that relates to another, and who has failed nonetheless. Reviewers will say what they will about your book. Readers will draw their own conclusions. You can think you've locked it all up tight, but unforeseen winds will blow through. The only thing you can control when writing memoir is what you actually say and how you prepare the ones you love for the book's journey into the world.
Empathy, then. For their sake. For yours. And finally for the sake of your art.
# SEEK BEAUTY
I was the kid with the willow tree bark at her back and the blank-page book on her lap, writing sonnets to Zeus and his cohorts. I was cloud swept and purple saturated, tipped toward melody and song. I skated on ponds, and speed was a poem. I looked for meaning in the places where the hues of separate watercolors met and blurred. I sat on the bridge, watching the poor creek go by, desperate for something to say. Poetry was my ideal, a vast seduction. The way an image could be made to turn in on itself. The way sound itself was meaning. The economics of signifiers. The metering of intent. Complexity. Grace. Mystery. Intrigue. The fantastic smithereening and reconstituting of vocabulary itself. The inherent possibility that something original could be said. It was breathtaking. I wanted poetry for myself.
Just as I want memoir. Just as all that lives in a poem is possible—I believe this—with memoir. The endearing and the enduring. The subversive and the supposed. The ephemeral and the everlasting. The yelp and the yowl. The you within the me. This is what I love about memoir. This—like my son's face, like the ocean at dawn, like yellow wings in a blooming tree, like the cadence of a jungle hill, like a fleet of winter stars—is beauty to me.
_Teach me how to write like this._ It was my student Gabe who said it. He'd come to class early, carrying words a friend had written, and he'd asked me how sentences like those got made. Gabe was studying engineering. He knew process and mechanics. But here he was in my still-dark classroom, asking me to teach him beauty.
And so we moved through the opening lines of his friend's work, trying to discern where beauty lived. Where was momentum, and where was pause? Where were the oblique corners, the unexpected encounters, the phrases where the writing suggested mastery not just over topic but over harmony, too? Gabe had to know where the beauty lay—for him—before he could begin making beauty for himself. It's no different for you or for me.
Poetic beauty, Natalia Ginzburg wrote in the essay "My Craft," "is a composite of ruthlessness, arrogance, irony, carnal love, imagination and memory, of light and dark, and if we cannot achieve all of these together, our result will be impoverished, precarious, and scarcely alive." Do you agree?
Or does Larry Woiwode come closer to describing your ambitions for assimilating, fathoming, and writing life when he explains, in _A Step from Death_ :
My interweaving is on purpose, with the hope of holding you in one of its stopped-moments for a momentary glimpse of your own infinity. All experience is simultaneous, stilled and sealed in itself, and we manage daily by imagining we move from minute to minute, somehow always ahead. Our multiple selves collide at every second of intersection, one or the other vying for supremacy, the scars of the past flooding through the present texture of our personality, and maturity is knowing how to govern the best combination of them.
Or is beauty, for you, bound up in rivering rhythms, in cacophonic detail, in that one wise found word, in the use of punctuation? Is your beauty simple? Is your beauty complex? Does plainspokenness factor in, or a certain erudition?
At some point in nearly all of my classes, I will stop the conversation and read out loud to my students. "Memoirists on food," I'll say, and then read from Gabrielle Hamilton, Chang-rae Lee, Bich Minh Nguyen, M. F. K. Fisher, and whomever else I might have in my fat stack. "Memoirists on childhood"—I'll make the pronouncement then read the words of Annie Dillard, Orhan Pamuk, Elias Canetti. On other days, "Memoirists on dying." On still others, "Memoirists on fear." In between, "Memoirists on regret" and "Memoirists on knowing" and "Memoirists on packing for a trip" and "Memoirists who like lists" and "Jane Satterfield, memoirist, on mothers and daughters and distance." Student by student, I will ask for choices. Which passages appealed? Which sentences resonated? Was any of this, to your ear, beautiful?
On other days, I will pull out my battered copy of Lia Purpura's essay "Autopsy Report" and cycle it, passage by passage, around the room, for the students to read to each other.
I shall begin with the chests of drowned men, bound with ropes and diesel-slicked. Their ears sludge-filled. Their legs mud-smeared. Asleep below deck when a freighter hit and the river rose inside their tug. Their lashes white with river silt.
Do these words fit easily into your mouth? I ask. Can you imagine writing like this? What would you look for, where would your words go, were you to find yourself (alive) in a morgue?
Somber days, winter weather days, I have silenced the room and spun a disc—poets reading their own work, poets intoning. What resonates? I've asked simply. And then, complexly, Why? Asking the students, as Robert Pinsky says, "to hear language in a more conscious way."
Sometimes I invite the students to read their own work aloud. Sometimes I invite them to allow others to read their work for them. Do the words sound like they were meant to sound? Are the words conforming to the author's philosophy of beauty?
We cannot impose our ideas of beauty on another, though we will inevitably try. We cannot insist that others love what we do, though we can tantalize and seduce. We can only ask that beauty be considered, measured, weighed, pursued by all those writing life stories. We're talking about memoir—an art form, not a treatise. We're primed for the inhering and daring, for a story to feel new. We want to be placed in possession of the bold sprout of a cracked seed. We want to be trusted with it.
I leave you with Pablo Neruda. Perhaps you'll start your search for beauty here:
It is very appropriate, at certain times of day or night, to look deeply into objects at rest: wheels which have traversed vast dusty spaces, bearing great cargoes of vegetables or minerals, sacks from the coal yards, barrels, baskets, the handles and grips of the carpenter's tools. They exude the touch of man and the earth as a lesson to the tormented poet. Worn surfaces, the mark hands have left on things, the aura, sometimes tragic and always wistful, of these objects, lend to reality a fascination not to be taken lightly.
... That is the kind of poetry we should be after, poetry worn away as if by acid by the labor of hands, impregnated with sweat and smoke, smelling of lilies and of urine, splashed by the variety of what we do, legally or illegally.
A poetry as impure as old clothes, as a body, with its food stains and its shame; with wrinkles, observations, dreams, wakefulness, prophecies, declarations of love and hate, stupidities, shocks, idylls, political beliefs, negations, doubts, affirmations, taxes.
# MOST UNLONELY
I walk the campus every day before class—always a new direction, always some memory that I am stalking. One day I went the length of Locust Walk and out toward West Philadelphia, where a mod-looking bowling alley had been slipped inside a residential street and the dental school where I once worked had gained the face of new authority. One day, behind the medical school, I found a garden and a bridged-over pond and sat on its edge, recalling my freshman-year despair over failed titration labs, the defeating hugeness of biology lecture halls. I've haunted the Quad, where I once lived. I've studied the facades of fraternity houses. I've stood on the corner of Forty-second and Spruce, besieged by memories of the friend with whom I'd shared a passion for Russian history. I remember the room where he kept the books he sometimes stole. I remember the soup that he made from his mother's recipe. I remember his fascination with Tolstoy. I remember my betrayal. His. I walk, before I teach memoir, through the places I remember.
There was an afternoon, during my first year, when the air was dark and the sky was rain, and melancholy was my mood, Brillo my temper; I couldn't shake it. This was before I had taken up residency in that Victorian manse. This was the semester that I taught on the second floor of the cozy and intricately staired Kelly Writers House. I was remembering my mother, who had passed away not long before. I was imagining my son negotiating a university campus all his own. I was missing the book project that I'd set aside so that I might come to Penn and teach. My shoes were soaked through, and my umbrella was ineffectual, and the bag I carried across my shoulder was an ache upon my bones. I opened the door to find Jonathan waiting—a magazine on his lap and his long legs folded at liberal angles, like some Erector Set construction. He was, I thought, taking refuge from the storm. He rose when he saw me, began to climb the stairs beside me, and soon we were joined by another. When we reached the narrow hall just outside our classroom, we found Kim, who had arrived with a days-old kitten tucked into the collar beneath her chin. "They call him Wild Bill," she told us, "and I think he likes my bling," for this refugee from the streets of West Philly had dug his claws in deep to her necklace chain and was, it seemed, intent on staying, as most anyone who meets Kim is.
"We should just talk," Kim said, once we'd moved into our room. "At least to begin with." And because the room was dark and the skies were saturated, because Wild Bill had been whisked to some mysterious zone of safety, because the other students had, by now, come, we talked. About a recent campus suicide. About false diagnoses. About the places where the imagination lives. About reaching beyond the person one already is. We were each in our place, and we were holding on, for this is teaching, too—unquantifiable, and essential. And life is where memoir begins.
We turned, finally, to Terrence Des Pres, to his essays "Writing into the World" and "Accident and Its Scene: Reflections on the Death of John Gardner." Like so many before me, I am helplessly drawn to this man. His slender, groundbreaking exploration of community and generosity in Nazi death camps, _The Survivor_. The lovely lyric of his essays. His insistence that writers bear witness. Out loud I read Des Pres's reflections on John Gardner's ultimately inexplicable early death with a heart made heavy with the knowledge of Des Pres's own far-too-soon, and essentially unreadable, passing. I read about the faith Des Pres held in this strange and beautiful thing we writers do: "Few of us believe anymore that through art our sins shall be forgiven us, but perhaps it's not too much to think that through art a state of provisional grace can be gained, a kind of redemption renewed daily in the practice of one's craft." I read from "Ourselves or Nothing," the poem Carolyn Forché wrote to honor Des Pres, and these were our subjects this day. Provisional grace. Redemption renewed. The endless practice of our craft.
How does one speak of grief? I asked my students. To whom does such sadness belong? And is _knowing_ what matters most of all, or does _wanting_ to know matter more?
"Wanting," Kim said softly.
"Wanting," I agreed. " _Wanting_ to know matters more."
Those who teach do not create, we're told. Those who give back will not be remembered for their genius. Those who love too much get nowhere. Those who cede the stage are thrust aside. That day the rain kept falling and the skies grew darker and we kept talking, my students and me. About life and books and secrets and questions, Terrence Des Pres and John Gardner, Carolyn Forché and Nazi camps, and memoir. Long after the quitting hour we said good-bye, and it was darker and even wetter by the time the train rattled me home. By ten o'clock that night, I was back at my desk when an e-mail from Jonathan came in. A note and an attachment.
This is a brief article about my uncle that my mother forwarded to me this morning, and it seemed like a bizarre memoir of sorts—it's definitely not biographical, but more like a semi-connected series of anecdotes—all poorly translated into English. And even if it were translated properly, the tone is so distinctly foreign. At any rate, treated as a sort of memoir, I thought you might find it interesting. It seems almost like a very narrow window into a life that the author clearly has no understanding of, but is almost unconscious of that ignorance.
It was late, and I was tired. I opened the attachment and read. I was confused at first, thrust in, as had been promised, to a very foreign world. It wasn't until I reached the essay's end that I found this:
During the duration of his teaching, he was noted as the most unlonely teacher. In the eyes of students, he did not look exactly like a teacher, when we mixed around with him, all of us would completely forget his teacher position, and we really had established a true, honest and deep relationship.
"Most unlonely teacher." I read the words again. _Most unlonely_. For that is the privilege of teaching memoir, the privilege of reading memoir, the privilege of sometimes writing it—the community that rushes in as we tell stories that are true. "Aren't you tired of memoir yet?" I'll be asked, from time to time. "Don't you want to teach something else? Move on?"
"No," I'll say. "I'm not tired yet." Because I'm not done, and won't ever be. Because memoir will never be fully conquered. Because memoir is about life knowingly, thoughtfully lived. I will be the perpetual student and teacher of memoir until the last fleet of stars in the last night sky performs its light for me.
# APPENDIX: READ. PLEASE.
YOU can teach yourself (or others) to see beyond what is near, to spend time with what you're not, to bear in mind the symphonic construction of a passage, to wait for an original idea. You can teach process: _Don't hurry._ You can teach living: _Go out, adventure, return._ You can fracture safety zones.
But the job of a teacher, most of all (I think), is to know what others have written and what another must read, right now, this second, in the midst of the long journey. The job of a teacher is to share.
Ever since I brought Natalie Kusz's _Road Song_ home with me from a Princeton bookstore, I have had a bad memoir habit. Reading as many as I can, owning more than can fit on my shelves, dancing around untidy stacks. And complaining (my husband might say whining) that I have not read nearly what I want to read, nearly what _must_ be read, of memoir.
It would therefore be preposterous for me to suggest that what I have assembled here is _the_ categorical list of the best literary memoirs. I don't like the word _best_ to begin with. It would be equally preposterous and unhelpful to suggest that memoirs can be locked into hard-and-fast categories. Obviously, _House of Prayer No. 2_ is as much about childhood as it is about illness, but it is also about a man fleeing and then returning home. _Running in the Family_ and _An American Childhood_ are as much about how memories are assembled and how memoir gets made as they are about a certain place and time. Is _Father's Day_ a celebration of a son or a book about grief? It is both things, of course. Is it fair to slip _Mentor_ in with books about fathers, mothers, and children? I had to put it somewhere.
What follows, then, is a list of memoirs (and, only occasionally, memoiristic essay collections) from which I have learned, placed within categories that I hope will be generally useful. Memoirs that—whether I have agreed with their politics or utterly been won over by their perspective, whether they have gone too far or not far enough, whether I envy the life remembered or worry through it— _whether or not_ —don't just offer insight into what it is to be human and to quest and to yearn, but also suggest stylistic, thematic, or structural possibilities to all those seeking to wade into the genre with their own inky pens uncapped. Something in each of these books will rankle, no doubt. Something or many things will be questioned—the number of adverbs, the resiliency of verbs, the excess of ego, the expanses of dialogue, the degree of masking, the inconsiderate consideration of private lives, the conclusions drawn. Perfection is not promised in my suggested reading list for this excruciating but as lovely reason: Perfection is not possible.
You will find your own memoirs to love. You will wonder why some of your favorites are not on this list. You will complain to a friend. You will blog about my inevitable injustice.
But if you are doing that, you're reading.
That's what I want for you.
CHILDHOOD RELIVED
**ROAD SONG/Natalie Kusz**
The story of the author's long recovery from the ferocious attack of a pack of Alaskan dogs, _Road Song_ is a revelation of form. Here is the past delivered with equanimity and respect. Here is a terrible tragedy gentled by words, a book in which the good is ever present with the bad. Natalie Kusz writes to comprehend, and not to condemn. She writes her way back to herself, and as she does, she broadens the reader's perspective, disassembles bitterness, heals. _Road Song_ begins in the spirit of adventure, not with despair. _Road Song_ begins with an _our_ and not an _I_ and reverberates out, like a hymn.
**AN AMERICAN CHILDHOOD/Annie Dillard**
In her classic memoir _An American Childhood_ , Annie Dillard recounts the life she lived with an astonishing accretionary style. Starting with her tenth year, it seems, she remembers everything—the books she read "to delirium" and her youthful assessments (" _Native Son_ was good, _Walden_ was pretty good, _The Interpretation of Dreams_ was okay, and _The Education of Henry Adams_ was awful"); the rocks she collected and the lines they drew ("Yellow pyrite drew a black streak, black limonite drew a yellow streak"); even the faces of perfect strangers seen once but fleetingly ("A linen-suited woman in her fifties did meet my exultant eye"). Even as a child, Dillard felt the need to trap and remember—to record her life so that it wouldn't elude her, so that what she had lived would be eternally webbed to whom she would become. Readers of this memoir will be inspired to look back on their own lives and challenged to recollect and re-shimmer the signifying details.
**THE LIARS' CLUB: A MEMOIR/Mary Karr**
Mary Karr grows up living the hardest possible scrabble of a life, in a poisonous town, with a Nervous mother; with a daddy who excels at big, foggy stories among menfolk at the Liars' Club; with a grandmother whose dying from cancer is an unrelieved grotesquerie, and whose fake leg with its fake shoe terrifies a child who is herself ornery as anything, except when nobody's looking. Karr knows when to use the past tense and when to wing at us with the present. She understands that in a story this full of snakes and madness, accidents and fires, the crawl of sugar ants up the arm of a dying woman, she will only gain our trust by saying, sometimes, that maybe her memory has been fudged, or maybe her sister recalls it better, or maybe she'll just have to leave that part blank because her thoughts went blank at that particularly crucial moment. People write about _The Liars' Club_ , and they write about its funniness, its love, which is all quite a trick, if you ask me. It's quite a trick to look back on what Mary Karr looks back on—poverty, abuse, danger, hurting of every measure—and come up with a story written not to tattle on what was done, not to complain, not to suggest that the author had it hard up— _See?_ —but to try to understand what breed of sadness, heartache, or shatter might lie at the bottom of her mother's supreme but never evil oddness. Every sentence in this book is a poem, some daredevil twist on what we think language is. Essential.
**AUTOBIOGRAPHY OF A FACE/Lucy Grealy**
Lucy Grealy would not have wanted her memoir classified, and she especially would not have wanted it bucketed under "Unwell." And so I place this remarkable work here, for it is, in so many ways, a story about growing up and learning the scales and textures of a world, a story about seeing and transcending, a story about appearance and the accommodations one must make when disease—in this case a rare form of cancer discovered when Grealy was a child—reconfigures a face and restructures a life around dozens of surgeries and countless hospital stays. A large section of Lucy's jaw will be lost to the cancer. More than two years of her young life will be consumed by radiation and chemotherapy. Operations designed to restore symmetry to her face will fail, one way or the other, always. And Grealy will have to find a way—and she does find a way—to assert her beauty, her wisdom, her capacity for forgiveness, her writerly mind. Page after page of this masterful memoir has something to say, to all of us.
**STOP-TIME: A MEMOIR/Frank Conroy**
If one seeks proof of the power of scenes in memoir, one need look no further than _Stop-Time_ , the story of Frank Conroy's rugged childhood and uncertain adolescence, first published in 1967. The careful orchestration of both past-tense and present-tense storytelling gives Conroy distance—and room—to walk around in his own life, to discover the themes, to find a way to understand, or to at least meet halfway, the father who stopped living with the family when Conroy was "three or four." It gives him a means, as well, to engage us. Prologue and epilogue are put to exceedingly good use in _Stop-Time_. Dialogue is proportionate, and credible.
**SPEAK, MEMORY: AN AUTOBIOGRAPHY REVISITED/Vladimir Nabokov**
"... the individual mystery remains to tantalize the memoirist," Vladimir Nabokov writes in _Speak, Memory_. "Neither in environment nor in heredity can I find the exact instrument that fashioned me, the anonymous roller that pressed upon my life a certain intricate watermark whose unique design becomes visible when the lamp of art is made to shine through life's foolscap." No memoir is more richly sensed than Nabokov's, whose life story, episodically conveyed, confirms the power of images and patterns, symbols and color, unapologetic nostalgia and love.
**AMERICAN CHICA: TWO WORLDS, ONE CHILDHOOD/Marie Arana**
I love to read out loud from the start of _American Chica_ —to inhabit its rich landscape of sounds, its sensual memory. Arana is an adult looking back on a childhood defined by a South American man and a North American woman—her parents. She is trying to make sense of their inevitable divide and where she fits between them. Every story Arana tells, every meditative pause, is written with the hope of understanding and with the recognition that we will never fully know all the secret yearnings of those who feathered our childhood homes. "They were so different from each other, so obverse in every way. I did not know that however resolutely they built their bridge, I would only wander its middle, never quite reaching either side."
**THE GLASS CASTLE: A MEMOIR/Jeannette Walls**
At the start of every semester, I ask my memoir students to bring a "best example" of memoir to class. Jeannette Walls's meditation on childhood survived inevitably makes an appearance. In terms of readerly popularity it is the memoir of our age, though I argue, with my students, that this fine work of writing might better be classified as autobiography, as is Frank McCourt's _Angela's Ashes_ ; the line in this case is quite thin. Walls's story is a character study of a father equally vibrant and annihilating, a free-spirited mother, and a daughter (Jeannette) who somehow doesn't just get by but moves forward, too, toward a life threaded through with a glamorous career, a lovely husband, and an attentive Park Avenue doorman. It's the full-bodied quality of _The Glass Castle_ that escalated its popularity. Walls tells infinitely horrifying scenes without condemning the perpetrators. So that, for example, children idly experimenting with explosives or "nuclear fuel" and idly setting the walls around them on fire and finally running for their father will hear not yelling from their dad, not scolding threats, but rather a thoughtfully delivered lecture on "the boundary between turbulence and order."
**FUN HOME: A FAMILY TRAGICOMIC/Alison Bechdel**
Alison Bechdel's graphic memoir about growing up Addams Family style in central Pennsylvania—her father not just a funeral home director and an English teacher but also a man secretly loving younger men; her mother sequestered in shame and small-town theater; her home gothic and wildly floral (thanks to her father)—is multitiered and searingly smart. This is a mixed-up family. The father comes to a terrible, uncertain end. Bechdel will never know if her own newly declared sexual orientation helped precipitate her father's death; she will never know a lot of things. That doesn't stop her from creating a warm, vulnerable, deeply knowing, utterly literate account of her own growing up. Webbed within seven superbly choreographed chapters, this is a gorgeous graphic memoir in which every word, myth, thought bubble, and citation counts.
**BONE BLACK: MEMORIES OF GIRLHOOD/bell hooks**
How did bell hooks (born Gloria Jean Watkins) become bell hooks—a poor black girl who finds her purpose as a poet and writer? How did books, old people, a priest, and a writer named Rilke save her? What does she see when she looks back; what are the refrains? What steeled her and softened her and almost thwarted her, but didn't? hooks calls this memoir a crazy quilt, and it is—intensely melodic, experimental, devoid of quotation marks, fluent in many versions of the grammatical pronoun, interested in the ways that the mind works, the patterns that return, the seams we sew for ourselves.
**LIMBO: A MEMOIR/A. Manette Ansay**
_Limbo_ tells the story of A. Manette Ansay "learning to live" in the wake of a muscle disorder that suddenly (at the age of nineteen) renders every movement excruciating and a dream of becoming a concert pianist both implausible and painful. Ansay captures a childhood of great restlessness and yearning and transports her readers through the confusion of a religious upbringing that seeks to subdue the passions that Ansay knows to be true within herself. Time moves fluidly through this memoir. Wisdom is gained: "Point of view is the vantage point from which the world is observed, the story is told. If that vantage point changes, the point of view _shifts_ , and the story reshapes itself to accommodate the new perspective. One landscape is lost; another is gained. The distance between is called _vision_."
**STEALING BUDDHA'S DINNER: A MEMOIR/Bich Minh Nguyen**
In the spring of 1975 South Vietnam was a place overrun with rumors of reeducation camps, torture, and extinction—a place where rockets shattered neighborhood calm and hulking tanks overtook cityscapes. Simply surviving meant taking risks—putting your children on American rescue helicopters, or finding your way onto a plane, or escaping in the dark hold of a ship. Some brave souls stayed and hid from the feared Communists. Many died in the terrifying chaos. In the midst of all of this was Bich Minh Nguyen's family—a father, his mother, his brothers, and two very little girls whose mother, having never married the father, lived across town. At the height of the terror, the Nguyens made a fateful, irreversible decision: to flee toward the unknown on a crowded boat that had been docked on the Saigon River. The author was eight months old when her family fled Saigon and barely a toddler upon her arrival in the States. She recalls, in _Stealing Buddha's Dinner_ , the texture and the wonder of her new life in Grand Rapids and, in precise detail, the food. Nguyen understands the evocative possibilities of language, is fearless in asserting the specificities of memories culled from early childhood, and is herself an appealing character on the page.
MOTHERS, FATHERS, CHILDREN
**RUNNING IN THE FAMILY/Michael Ondaatje**
How can I convey to you just how exhilarated I am every time I sit down with Michael Ondaatje's _Running in the Family_ —a pastiche of a memoir, a cobbling together of poems and artifacts, remembered conversations and new ones, antics and antiques, the incredible and the incredulous. This re-creation of a Sri Lankan childhood is steeped in exotica. It is lush and close-leaning. It is ripe with language. It provides yet another example of how much beauty can burst forth—sheer wonder—when we choose not to judge the people who raised us oddly but rather to marvel at their own idiosyncratic bearings and pain. If you want to know what memoir can be, read _Running in the Family_. If you want to see how memoir gets made, sit down and read it again. A favorite line: "My Grandmother died in the blue arms of a jacaranda tree. She could read thunder."
**THE DUKE OF DECEPTION: MEMORIES OF MY FATHER/Geoffrey Wolff**
Few memoirs teach what memoir can be as gracefully as Geoffrey Wolff's _The Duke of Deception_. Leaving scolding and exhibitionism aside, evoking the past without summarizing it, _Duke_ is a father-son story, a forgiveness story, an adventure, a lesson. Geoffrey Wolff's father hardly ever told the truth, and he was a wreck, and he wrecked things. He was a shameful disappointment, and he died ignoble, and yet every word in this breathtaking book is written from a place of love. Essential and forever timely.
**ALL OVER BUT THE SHOUTIN'/Rick Bragg**
You never forget Rick Bragg once you read his Southern tale. You won't forget his mother, either, and that's mostly Bragg's point, mostly his purpose in going back and reporting on the woman who raised this Pulitzer winner to be someone even as she denied herself every imaginable comfort. There is no end of fascination in this tale—the drunk, bullying, primarily absent father haunted by the Korean War; the mother who does her best; the grandparents and the townsfolk of the glorious and strange northeastern Alabama. There are glorious sentences here, big ideas about life and love that Bragg modestly calls small. _Shoutin'_ is a love story for a deserving mother.
**WHY BE HAPPY WHEN YOU COULD BE NORMAL?/Jeanette Winterson**
_Why Be Happy When You Could Be Normal?_ is the story, at first, of Mrs. Winterson, Jeanette's apocalyptic, domineering, supremely lonely and lonesome-making adoptive mother. Mrs. Winterson is a big woman among small people in an industrial town in northern England. She is a death dreamer and Bible reader, a fearless deliverer of obscenely unkind punishments, a practiced hypocrite. She figures large. She wields not just a metaphoric big stick but also an actual revolver. She sets traps and insists on irreversible consequences. Jeanette grows up with this. She finds her way to books. She's scrappy and wild and falls in love with girls. She makes her way, miraculously, until things fall apart and love proves elusive. Late in life—after great success as an artist, after breakdowns big and small—Jeanette sets out to find her birth mother. Jeanette would like to know what real love is, and if she herself is capable not just of giving it but of receiving it, too. Her journey won't be binary. Her discoveries will not be pat. Any attempt to summarize any of this goes straight up against the honest search of the book. Equally about becoming a woman and becoming a writer, this episodic memoir passes no judgment, finally. It merely (and essentially) absolves.
**TOWNIE: A MEMOIR/Andre Dubus III**
_Townie_ is a long book, not one to be rushed through. It is a hard book, a tale about the fate of children growing up in the wake of an occasional dad—a talented man, a loving man, but a man who puts his impulses and his writing first. Andre Dubus II, the author's father, has, we come to realize (thanks to the son's telling), no real idea why his children are hungry or chased or being hurt in the world, or why his namesake son doesn't know how to throw a ball, or why that same son turns to beefing up and boxing and lashing out at the world that can do tremendous harm. Dubus III overcomes the embattled nature of his adolescent circumstance by clinging to his faith that the only defense he has (for himself, for his family) is muscle and fist. Throughout the book we see this kid (and, later, this man) throwing a sucker punch, knocking an enemy to the ground, riding in the back of a police car, sitting briefly behind bars, and hearing, later, that one of his victims was sent to the hospital, that one of the victim's friends is out to get him. Dubus has smashed through his own childhood. Urgently written, _Townie_ is a reckoning. It is an insider's look at how one moves past fists toward words, past heartbreak toward compassion, and past broken family to a wholeness of one's own.
**MISGIVINGS: MY MOTHER, MY FATHER, MYSELF/C. K. Williams**
_Misgivings_ offers an exploration of the spaces between the many unnamed things that happen in a life. There are no dates, no proper nouns, no specific locales divulged in _Misgivings_. There are no exhaustive reading lists, no revivals of perfect strangers, no cataloging of seasons or events, no maps drawn out of childhood homes, no allegiance to chronology. There isn't even a narrative arc. The dynamic here is that of memory and forgiveness, the way each acts upon the other to both restore and shatter. The plot is that of a man coming to terms with the parents that he had—the ways he loved them, the ways he did not, the ways he was shaped by who they were. An impeccable poet whose work has been awarded almost every conceivable honor, C. K. Williams does not back away from the life he's lived, does not hesitate to reveal the darker side of the legacy he inherited. And yet the brutal honesty of Williams's meditations does not negate the love he ultimately and so gorgeously finds for his parents, and between his parents, and among the three of them. Love is behind every word of this book. Love and Williams's final faith in it.
**MENTOR: A MEMOIR/Tom Grimes**
Tom Grimes's authoritative, unfancy, and bracingly honest memoir is about his relationship with Frank Conroy, who authored the classic and important memoir _Stop-Time_ and headed the Iowa Writers' Workshop. Grimes came into Conroy's orbit as a student—as a man waiting tables and writing at night, a man desperate to make a literary life. Grimes becomes, quite quickly, someone more—someone Conroy can drink with, talk to, and selflessly encourage. And oh, does Conroy selflessly encourage. He urges Grimes on; he connects him to possibilities; he celebrates Grimes's good moments and is there to buffer the bad. Many writers—too many writers—focus only on themselves, their own work, their own fame. Conroy clearly was not that sort, and Grimes's portrait of him is illuminating and restorative.
**THEN AGAIN/Diane Keaton**
Celebrities tend to write autobiographies. Diane Keaton didn't. She wants to understand who her mother is, how her mother shaped her, and what kind of mother she herself is now, and to do this, Keaton artfully poses the right questions and, taking risks, leaves aside that which does not matter. She is quiet, unassuming, funny, graceful, and one believes she is telling the truth. She did not write to entertain us, per se. She takes no easy potshots. She gives us the men she loved for the reasons she loved them. She gives her yearning, sometimes depressed, slowly fading mother the room for her own story. Keaton writes because she is one of us. She writes to find her way. This is not a book of quips or anecdotes or gossip. It's life, and it's beautifully rendered—a book that takes great structural risks to astonishingly moving effect.
**THE FLORIST'S DAUGHTER: A MEMOIR/Patricia Hampl**
"Middle-class, Midwestern, midcentury—middle everything"—that was Patricia Hampl's lot in life. Born to a Czech florist and his Irish wife, raised in St. Paul as the second child of two, Hampl grew up like so many of us did—looking for escape, circling right back around to home. She went fishing with her father. She whisked across slicked ice rinks. She listened to her mother's stories. She wanted out. She didn't go. "A son is a son until he takes a wife. A daughter is a daughter all her life"—that was her mother's mantra. That was Hampl's fate. It is also the subject of her memoir, _The Florist's Daughter_ , which is neither a settling of accounts nor a deification. Hampl isn't searching for heroes in _The Florist's Daughter_. She's listening for echoes, affixing shadows, taking a tour of her memory again, the photos again, the stories she'd been told again, and also the lies that she was fed and that she harbored. Hampl isn't on a hunt for pity. She's testing the limits of understanding. _The Florist's Daughter_ reminds us that we don't understand what it is to grow old until we are asked to take that journey with our parents. It yields perspective. It makes sitting, waiting, aching, and watching honorable, and restores our sense of purpose.
**TWO OR THREE THINGS I KNOW FOR SURE/Dorothy Allison**
To hold this memoir in your hand is to hold something birdlike, fierce, and fragile. The words are widely spaced. Photographs float across the ecru pages. Declarations are cried out; they vanish. There are two or three things that Dorothy Allison knows for sure about her Southern childhood; her wounded, wounding people; her own bold instincts; her refusal to bend to shame, to be put down by it. Two or three things, far more than two or three things, and this book is an effort to write them down, to keep them in place, to sing out loud, to work past the abuse she suffered in childhood. "When I make love I take my whole life in my hands, the damage and the pride, the bad memories and the good, all that I am or might be, and I do indeed love myself, can indeed do any damn thing I please," Allison writes. "I know the place where courage and desire come together, where pride and joy push lust through the bloodstream, right to the heart." _Two or Three Things_ is a chant, a rising, euphonic suite of epiphanies.
**FATHER'S DAY: A JOURNEY INTO THE MIND AND HEART OF MY EXTRAORDINARY SON/Buzz Bissinger**
Buzz Bissinger's book is a memoir about fatherhood and about a trip he took with his adult son Zach, a second-born twin who suffers from broad and not easily labeled differences. This is a book about not wanting and not getting, about bewilderment and exhilaration, about doing wrong and being wronged and loving hard and forever. It's raw and it's original, unafraid of the true impossible mess of life. On every page is proof of how an honest struggle, a desperate wrestling down, can at times yield a book that will be read for ages—not just for the story and for the wisdoms (which are many, accruing, and right), not just for the language (which is gorgeous as it both lances and limns), not just for the perfectly constructed asides that teach us the history of premature babies and savants, but also for the lessons it teaches about what can happen when we stop trying so hard to understand so that we might more simply live.
**THE GIFT OF AN ORDINARY DAY: A MOTHER'S MEMOIR/Katrina Kenison**
I'm simply going to share here what I wrote when I blurbed this book for my friend Katrina Kenison. I don't think anything more needs to be said: "With an honesty and intimacy rarely achieved in modern memoir, Katrina Kenison dissolves yearning into its complex, sensate parts. This is a book about midlife want and loss. It is also a most knowing book about a most gracious love—about the gifts that are returned to those who find beauty where it falls."
GRIEF
**JUST KIDS/Patti Smith**
_Just Kids_ is advertised primarily as the story of rock legend Patti Smith's relationship with the artist Robert Mapplethorpe, and that it is. But it is also the story of Smith's ascension through art—the years she spent choosing between buying a cheap meal or an old book, between being an artist or a writer, between being Mapplethorpe's lover or his best friend. She tells us about the conversations that generate ideas among artists and friends, about coincidences that set a life on its path, about the clothes she wore and the misimpressions she couldn't correct, about a kind of love that is bigger than any definition the world might want to latch onto it. She yields an entire era to us, and though her writing is all sinew, strength, and honesty, she does not once betray her friends, does not invite us to imagine privacies that should remain beyond the veil.
**AN EXACT REPLICA OF A FIGMENT OF MY IMAGINATION: A MEMOIR/Elizabeth McCracken**
A baby—planned for, deeply wanted, full term, dubbed (at least for that time being) Pudding—goes silent and still shortly before he is born. He will never see the sun; never know the parents who could not wait to begin their lives as a family of three; never see that part of France that had, until then, sustained his mother. Novelist Elizabeth McCracken will not write the story of Pudding's loss until Gus, her second son, is born. The coupling of these two stories—life denied and life given—is the substance and structure of _An Exact Replica of a Figment of My Imagination_. Grief—how we name it, how we tangle with it, how we look to others with need and unknowing, and how others turn back toward us, or do not—may saturate these pages, but it does not define them. McCracken goes deeper. With tremendous skill and honorable restraint, she evokes love. She tells us how it feels.
**LET'S TAKE THE LONG WAY HOME: A MEMOIR OF FRIENDSHIP/Gail Caldwell**
Gail Caldwell's finely crafted, thoroughly beautiful, absolutely heartbreaking _Let's Take the Long Way Home_ is the story of Caldwell's rare friendship with the writer Caroline Knapp—the story of long walks taken with beloved dogs; of the glass face of rowed-upon water; of pasts and imperfections and desires entrusted, one to the other; of a cancer diagnosis and a death, Caroline Knapp's, when she was at the prime of her life and the center, in so many ways, of Caldwell's world. _Home_ is a memoir filled with perfectly wrought particulars: "I often went out in early evening, when the wildlife had settled and the shoreline had gone from harsh brightness to Monet's gloaming, and then I would row back to the dock in golden light, the other scullers moving like fireflies across the water." _Home_ is not a tale about how Caldwell survived the loss of her best friend, though Caldwell has survived. It is instead both instruction and allegory on the power of kindness and small gestures, the fidelity of friendship and memory, the tenacity and tenuousness that make us our own complicated people in need of other complicated people.
**HEAVEN'S COAST: A MEMOIR/Mark Doty**
In April 1993, Mark Doty looked toward a future of loss; Wally, his longtime lover, was dying of AIDS. Their life together was ending, or would soon end, and Doty tried to conceive of the future—make some peace with it—while living the days the couple had left. How do you celebrate and mourn at the same time? How do you hold on when you can't? Doty's memoir is a beautiful exercise in near balance—lyrical, philosophical, raw: "The future's an absence, a dark space up ahead like the socket of a pulled tooth. I can't quite stay away from it, hard as I may try. The space opened up in the future insists on being filled with _something_: attention, tears, imagination, longing."
**HALF A LIFE: A MEMOIR/Darin Strauss**
_Half a Life_ is the story of an accidental death—the story of what happened one day when Strauss set out to play some "putt-putt" with his high school friends. He was eighteen, behind the wheel of his father's Oldsmobile. On the margin of the road, two cyclists pedaled forward. Of a sudden, there was a zag, a knock, a "hysterical windshield." A cyclist, a girl from Darin Strauss's school, lay dying on the road. She'd crossed two lanes of highway to reach Strauss's car. He braked, incapable of forestalling consequences. It was forever. It was always. A girl had died. A boy had lived. Strauss spent his college years, his twenties, his early thirties incapable of reconciling himself to the facts, of entrusting them to friends. There's much he can't remember perfectly. There are gaps, white space, breakage—all of which are rendered with utmost decency, the thoughts broken into small segments, big breaths (blank pages) taken in between.
**THE LONG GOODBYE: A MEMOIR/Meghan O'Rourke**
One expects from poets deeply lyrical, language-invested memoirs, especially when the topic is grief—or at least I do. But in _The Long Goodbye_, this very personal but also deliberately universal story about losing her fifty-five-year-old mother to cancer, the poet Meghan O'Rourke chooses language that is almost stark, rarely buoyed by metaphor, and frequently amplified by the words of experts to retrace her journey through loss. O'Rourke wasn't prepared for her mother's absence. Can anyone be? She has lost something, and she continues to look—avidly, stonily, ragingly, insistently. She reads the literature. She talks with friends. She risks bad behavior simply to find the prickle of living again. But O'Rourke, like all of us, has to live her grief alone, and the worst thing about grief, in the end, is this: There is no cure.
**BLUE NIGHTS/Joan Didion**
I was harder on Joan Didion's _The Year of Magical Thinking_ than many readers were. I thought it at times too self-consciously clinical, too reported, less felt. Many of my students at the University of Pennsylvania disagreed with me. I listened, wanting to be convinced. Thankfully, I do not feel disinclined toward _Blue Nights_. The jacket copy describes the book as "a work of stunning frankness about losing a daughter." It is that; in part it is. But it is also, mostly, as the jacket also promises, Didion's "thoughts, fears, and doubts regarding having children, illness, and growing old." A cry, in other words, in the almost dark. A mind doing what a mind does in the aftermath of grief and in the face of the cruelly ticking clock. _Blue Nights_ is language stripped to its most bare. It is the seeding and tilling of images grasped, lines said, recurring tropes—not always gently recurring tropes. It is a mind tracking time. It is questions. It isn't easy reading; it's hardly that. But it is stunning, sometimes stirring.
**THE TENDER LAND: A FAMILY LOVE STORY/Kathleen Finneran**
With _The Tender Land_ , Kathleen Finneran is asking vast, impossible questions about love and loss. She is restoring a long-lost brother to the page, a boy named Sean, who killed himself at the age of fifteen for reasons no one can fathom. Why did Sean swallow his father's heart medicine? Who was responsible for his sadness? What should Finneran herself have known to protect this brother from his fate? These are personal questions, certainly, very particular details, one family, one love, one loss. But as Finneran tells her story, she urges her readers deep into themselves, asks them to consider those whom they, too, love, and whether or not they have loved fully enough. Finneran's fine prose operates as a prayer—for both her brother and her readers.
**PAULA: A MEMOIR/Isabel Allende**
"Listen, Paula. I am going to tell you a story, so that when you wake up you will not feel so lost." So begins Isabel Allende's deeply affecting love letter—memoir—to her daughter, Paula, a young woman who has fallen into a coma at a time when life should be dazzled with possibility. Of course Allende's tale wends crooked and possibly untrue. Of course there are fantastical things and phantasmagoria. Of course the book is far more about Allende than it is about her daughter, but this is memoir, and this is what grief looks like to one of the most imaginative storytellers of our time. Paula speaks to Allende in dreams. Paula is slipping away. All the stories in the world can't save her, nor can Allende's confessions. What there is at the end is an act of love—a mother sitting near, bearing witness, to the hardest thing of all.
**DEVOTION: A MEMOIR/Dani Shapiro**
In the elegant prose of Dani Shapiro, _devotion_ is another word for _quest_. It is the journey to know—and to reckon with not knowing—how one lives in a world of risks, in a body aging, in the vessel of uncertainty. Having reached the middle-middle of her life, having left the city for the country, having raised a little boy who beat the odds of a rare and dangerous disorder, having achieved much as both a novelist and a memoirist (and also a screenwriter), Dani Shapiro wakes from her sleep full of worries and lists. Her jaw quakes. Her thoughts slide. She gets caught up in the stuff of life and then—and then—she worries. Shapiro was the child of a deeply religious household, and she doesn't know what she believes. She is the mother of a boy asking questions, simple, impossible questions about God and heaven and sin. She should know something, shouldn't she? She should have something definitive to offer. But what, in the end, is rock-solid, sure? What bolsters us, protects us, from vicissitudes and chance? Sentence by sentence, this is a beautiful book—considered and pure. Structurally, it is magnificent, scenes abutting scenes, time cutting into time, small threads woven into a greater tapestry. We never really do have more than one another, and that is what Shapiro comes to. Shapiro's book, itself, is a hand outstretched, an open door, a place to dwell.
**BEREFT: A SISTER'S STORY/Jane Bernstein**
Jane Bernstein, "a child who stood in doorways, heart beating hard," was always watchful, always a listener—and perhaps too obedient. When her older sister was murdered by a stranger, Jane, then seventeen, does what her mother advised by getting "right back into the swing of things." More than two decades pass before Jane's daughter asks a question about her aunt Laura—and sends Jane into a journey of discovery. What did happen to her sister? How had Laura's death shaped Jane's own life? What could be known, and what would never be? Part detective story, part reckoning, this complex memoir weaves across decades and through themes without ever losing its center.
THE NATURAL WORLD
Memoirs are about people first, but people arise from, move through, and finally return to landscapes. Many of the best writers about the natural world are personal essayists, masters of the short form. We come to know their stories episodically; we don't necessarily look for continuums. In constructing this list I chose to include a few titles that would more properly be shelved among essay collections. This is because these books contain passages and insights that are relevant to any memoirist working the long form.
**PILGRIM AT TINKER CREEK/Annie Dillard**
"I walk out; I see something, some event that would otherwise have been utterly missed and lost; or something sees me, some enormous power brushes me with its clean wing, and I resound like a beaten bell." That's the inimitable Annie Dillard writing. That's a single quotable passage out of countless quotable passages in her Pulitzer Prize–winning _Pilgrim at Tinker Creek_. This is a book about seeing, harboring, honoring—a book about being aware. What happens when we stop and look? Why are we all so afraid of letting "our eyes be blasted forever"?
**REFUGE: AN UNNATURAL HISTORY OF FAMILY AND PLACE/Terry Tempest Williams**
I found Terry Tempest Williams's _Refuge_ in the days just after 9/11, when I was looking for a way to mourn all those lost and for those who lost them. The skies above were blue and silent. The birds in nearby trees were stubborn, without song. I stared up into a lanky birch; it was a tower. I saw the bloom of a new cloud; it was a plume of smoke. How do we mourn? There are answers in _Refuge_, a book about how Williams comes to terms with her own mother's dying and with the cruel incursions of pollution in the natural landscape she loves. "Particles of sand skitter across my skin, fill my ears and nose," she writes. "I am aware only of breathing. The workings of my lungs are amplified. The wind picks up. I hold my breath. It massages me. A raven lands inches away. I exhale. The raven flies." In _Refuge_—infinitely worth reading for both its language and its intelligence—Williams reminds us that "peace is the perspective found in patterns.... My fears surface in my isolation. My serenity surfaces in my solitude."
**THE NAMES OF THINGS: LIFE, LANGUAGE, AND BEGINNINGS IN THE EGYPTIAN DESERT/Susan Brind Morrow**
Susan Brind Morrow's _The Names of Things_ is an exquisite example of the memoir form—a book of escape and discovery, exhaustion and surrender and relief. Morrow's book takes readers out far beyond where most have ever been—to the sands of Egypt, to the company of exotic beasts and plants—and somehow yields up passages that speak directly to the experience of humankind. "I thought of memory as a blanket," Morrow writes of her traveling days. "I could take a thing out of my mind and handle it as though it were part of some beautiful fabric I carried with me, things that had happened long ago, the faces of people I loved, the words of a poem I had long since forgotten I knew. This was something any nomad or illiterate peasant knew: the intangible treasure of memory, or memorized words." Morrow's readers don't have to go to Egypt to make this discovery. Morrow has made it for them, and has loved it with words, for their sake.
**THE GOOD GOOD PIG: THE EXTRAORDINARY LIFE OF CHRISTOPHER HOGWOOD/Sy Montgomery and Genine Lentine**
Sy Montgomery's adventures to exotic corners of the world have brought us news of pink dolphins and golden moon bears. With her memoir, Montgomery stayed close to home to tell us the story of her 750-pound pet pig named Christopher, whose heart is bigger than his belly. Endearing and effusive, this memoir is full of love for those furry or squawking or barking or snorting creatures with which we share the world. It's full of observations, too, about how to live among four-legged souls. "Christopher Hogwood came home on my lap in a shoe box," the memoir begins. And as Christopher grows, so does our affection for him.
**THE WILD BRAID: A POET REFLECTS ON A CENTURY IN THE GARDEN/Stanley Kunitz and Genine Lentine**
Stanley Kunitz was one of the nation's most beloved and honored poets, and he lived for a very long time. Written toward the very end of his life, _The Wild Braid_ is not just graceful and soulful; it is authentic, too, and with memoir, as we have observed, that matters. The book revolves around a house Kunitz bought back in 1962—a summer home the poet shared with his wife, Elise, until her passing. It revolves, more particularly, around the garden that, Kunitz tells us, was built from almost nothing, a "starkly barren area with nothing growing on it, not even grass." Building the garden entailed the digging in of seaweed and peat moss and manure, the construction of terraces, the planting and tending of flowers, and the constant management of the tension that pulses between faith and watchfulness. Garden and poetry—the two things are braided here, building, inexorably, to a greater understanding about life, a prayer.
**A PRIVATE HISTORY OF AWE/Scott Russell Sanders**
As he sorts through the first six decades of his life, Scott Russell Sanders trains his focus on those moments when something telling happened, something that advanced his understanding of the world and of that condition—"rapturous, fearful, bewildering"—known as awe. Sanders wants his readers alert to the power of thunderstorms and baby sighs, to "the holy shimmer at the heart of things." He remembers himself as a boy on a Tennessee farm, his needing to know the names and tastes and shapes of whatever was near. Those memories live within the reality of present time—an aged and confused mother, an intensely curious child, a still insatiable desire to know.
**WILD COMFORT: THE SOLACE OF NATURE/Kathleen Dean Moore**
"This is a book about the comfort and reassurance of wet, wild places," Kathleen Dean Moore tells us in the introduction to _Wild Comfort_. Moore had, she reveals, set out to write a book about happiness, but then people she loved died, and the world took on a different hue, and nature—its colors, breezes, infinite surprises—became an even more steady and significant companion. This is, properly speaking, a collection of memoiristic essays. It is also a way to begin to understand the natural world and our place in it. An example: "I doze on wet grass and imagine myself part of the mysterious unfolding of the universe, imagine that inflorescence. I fit in here. Literally. I am one unfolding among other interfoldings and enfoldings, the wrinkled lap and pucker of life in Earth, the vulture and the possum and the dew on the plums."
**THE RURAL LIFE/Verlyn Klinkenborg**
There's an old-fashioned elegance to the shape of Verlyn Klinkenborg's sentences, a steady, transcendent, unshowy intelligence that one instinctively leans toward and trusts. Klinkenborg chronicles life as he knows it: a small-town parade, a change in the weather, a conversation with two old brothers down the road. The anecdotal claims his attention, the nearly incidental. Baling twine and barn swallows. Mosquitoes and a pumpkin patch. The way people plan for winter or, conversely, plan for spring.
_The Rural Life_ is a compilation of the columns and essays he has published on such topics over the past several years. It's the last lines of these essays that readers should pay special attention to—lines I find to be arresting, even profound; lines that force us to look at the world in a new way.
UNWELL
**THE MUSIC ROOM: A MEMOIR/William Fiennes**
Memoir writers and teachers should know about the castle in which William Fiennes grew up, and how it shaped him. They should know about his brother Richard, who suffered from severe epilepsy, lived for Leeds United soccer games, exalted the flight of herons, and erupted with anger before he retreated—confused, ashamed? There were other siblings and other tragedies in this ancient place. There were also two parents who honored Richard for what he could be and gave him everything he was capable of receiving. There is a hush throughout this book—a tumble through past and present, a drift across _here we were_ and _here we are_, time in a collision with time. There are long slides of description regarding a castle that cannot be contained by words, or mapped, and then, embedded, are scenes of aching, particular precision—Richard tracks a heron, Richard skates on a frozen moat, Richard burns his mother with a frying pan, Richard sings, Richard smashes ancient glass, Richard accuses, Richard lays a heavy (loving) hand upon Mum, Richard will not bathe, Richard celebrates Leeds, Richard recites a poem from memory, Richard suffers, William is there, William watches, William wonders. Then, like marks of punctuation (something solid, something fixed), there are episodic histories of epilepsy science, the scarred and fuming brain revealed.
**THE DIVING BELL AND THE BUTTERFLY: A MEMOIR OF LIFE IN DEATH/Jean-Dominique Bauby**
A mere 132 large-type pages long, _The Diving Bell and the Butterfly_ is sensuous and steeped. Jean-Dominique Bauby, the former _Elle_ editor, rendered locked-in by a massive stroke and speaking through the blinking of one eye. Letters read off to him until he consents to one and then another. Words congealing. Story. Hope. Most of us are blessed with hands that grip pens, fingers that do our bidding on keyboards. And yet we are, perhaps, tempted to hurry through scenes for the love of writing the next one, or to subsume a detail not readily recalled, or to rely on a familiar turn of phrase because the melody is familiar. Bauby's book serves as a reminder of what a man blinking each letter into place can achieve with language and with heart. His book teaches, at the same time, the riveting effects of wisely manipulated past and present tense. A book about irreparable loss and the buoyancy of remembering.
**THE NIGHT OF THE GUN: A REPORTER INVESTIGATES THE DARKEST STORY OF HIS LIFE. HIS OWN./David Carr**
By his own admission, David Carr was a substance abuser of the very first order—a "maniac" who went from handling whiskey and cocaine (barely) to not handling crack to smacking women he loved with an open hand to raising twins while failing at rehab to carrying a gun he doesn't remember, or didn't remember until he started tracking down his own past. Like the scrupulous _New York Times_ reporter he miraculously became, Carr sought out and interviewed those whose lives intersected his during his wilderness years. He weighed his idea of things against police records and the recall of old friends. He sorted, sifted, and spun in an attempt to understand not just who he was, but who he is, and how the was and the is somehow survive inside the same knocked-about skin. It's fascinating reading, memoir painstakingly stitched. It has a lot to say about what truth is and what to do with all the stuff we can't rightly remember.
**THE MEMORY PALACE: A MEMOIR/Mira Bartók**
This is a daughter's story (for Mira Bartók is mostly, in her memoir, a daughter) about a brilliant, beautiful, mentally ill mother. It is a survival story, first and foremost—a deeply loving, never condemning return to a life spent looking for safety during a mother's unruly outbursts. This mother and her two daughters are poor to begin with; Mira's father abandons the family early on. But true poverty sinks in as their mother quickly loses her power to work and her ability to provide. The quiet days are the days when their mother is institutionalized. The terrifying days are the ones in which the mother leaves the girls stranded in places both foreign and familiar, or bangs on the other side of a door, demanding to know if the girls are whores. There is a grandmother nearby, but she has troubles of her own. There are neighbors and the occasional piano teacher or kind adult who step in, offering only temporary reprieves. Bartók, who is also a children's author, fills her story with allusions to myths and fantasy, softening the insufferable with flights of tremendous fancy. She writes at times quite simply and at times with a poet's stance. She blames no one, but always tries to understand. I admire her work enormously here—her empathy, her powers of recall—and if at times I felt that some of the tangents unnecessarily complicate the story, or take the tale as a whole more toward autobiography than memoir, I closed the book with respect for Bartók, not just as a writer but as a person, too.
**DARKNESS VISIBLE: A MEMOIR OF MADNESS/William Styron**
"In Paris on a chilly evening late in October of 1985 I first became fully aware that the struggle with the disorder in my mind—a struggle which had engaged me for several months—might have a fatal outcome." So begins William Styron's taut, harrowing exploration of a condition that will haunt him for years: depression. He doesn't favor the term. He fights against its stereotypes. But he also systematically works to understand what this disorder is doing to him, what it has done to others, and how the wrong "cures" can deepen despair.
**AN UNQUIET MIND: A MEMOIR OF MOODS AND MADNESS/Kay Redfield Jamison**
When Kay Redfield Jamison, a renowned expert on manic-depressive illness, released her memoir in 1995, she wasn't just helping others understand what this devastating disorder is and does; she was telling us what it's like to live through it—to survive. She had struggled with the condition herself as a teen. She grew up veering left to right, somber to blazing, nearly out of control, and so smart. It's a very personal book—a searing story. It is also an enormously helpful book for those trying to understand and support others whose brain chemistries rocket them off toward trembling heights and melancholic lows.
**GIRL, INTERRUPTED/Susanna Kaysen**
It would have been simple (perhaps) for Susanna Kaysen to reconstruct her nearly two years at McLean Hospital among young women diagnosed as sociopaths and schizophrenics, former addicts and depressives. It would have, for many, been enough. But Kaysen never set out to write autobiography—to merely summarize an untamed time, to offer an explanation for her trauma. She set out to make art—to produce a memoir of many effective parts. Case file reports. Artful impressions of fellow patients. Sobering personal assessments. Definitions lifted from a diagnostic manual. White space. Smart chapter titles. Overt challenges of societal prejudices. An extraordinary absence of self-pity. "People ask, How did you get in there?" Kaysen begins. "What they really want to know is if they are likely to end up in there as well. I can't answer the real question. All I can tell them is, It's easy."
**THE RULES OF THE TUNNEL: MY BRIEF PERIOD OF MADNESS/Ned Zeman**
A second-person tour de force, Ned Zeman's rumination on years scrambled by anxiety, depression, and mania; therapy, medication, and a trip to McLean; and, ultimately, treatment-triggered amnesia is a remarkable work of literature—raucous, ribald, fused together with profound insight and rare humor. It is also, as memoir must be, a conversation with the reader. This is what happened, writes Zeman, an award-winning journalist and contributing writer at _Vanity Fair_. And this is how.
**INTOXICATED BY MY ILLNESS/Anatole Broyard**
Anatole Broyard is dying, and he is, at first, oddly intoxicated by the notion. "Suddenly there was in the air a rich sense of crisis—real crisis, yet one that also contained echoes of ideas like the crisis of language, the crisis of literature, or of personality," he tells us up front. "It seemed to me that my existence, whatever I thought, felt, or did, had taken on a kind of meter, as in poetry or in taxis." He seeks a "literature of illness." He uses his writing to counteract his illness—or likes to believe that writing has that power. But as this narrative evolves, it is clear that Broyard will not win against his disease. What Broyard does win, however, is the sense that he has lived until the very end. Broyard's truthfulness as a man, his willingness to live honestly within his own skin, was, of course, questioned after his death. This memoir remains elucidating for all the reasons mentioned here.
LEAVING AND RETURNING
**HOUSE OF PRAYER NO. 2: A WRITER'S JOURNEY HOME/Mark Richard**
You grow saddened, out here in life, by all the humdrum and the done before, the standard issue, the colors shimmed off to gray. So when you pick up a book like _House of Prayer No. 2_, a Mark Richard memoir rendered in meaty second-person prose, you let a smile crawl across your face and stay. Of course, you have to have a life like his to tell a true story like this: poor and "special," hips whacked out, days lost to the hopeless heat of a hospital for crippled children, and, afterward, everything you hope your child doesn't do, doesn't get involved with, doesn't risk—all that done, by Richard, on his way to growing up, on his way to faith and writing.
**BROTHER, I'M DYING/Edwidge Danticat**
Memoirs that make room for family history and national politics challenge their writers structurally; they ask more from the words on the page. No false binding will do, no obvious superimpositions, no easy themes, no ready truths. There are higher stakes in memoirs like these. More is expected, more wanted. In _Brother, I'm Dying_, Edwidge Danticat forges a remarkable narrative, establishing herself as her memoir's maker and not its heroine—there is such an important difference.
Intelligent, researched, heartfelt, the book weaves together the story of the man who raised Danticat as a child in Bel Air, Haiti—her uncle—and the man who fled to Brooklyn in an effort to create for his whole family a better life—her father. Two brothers, then, two father figures, and two ultimately tragic trajectories as each man fights to survive impossible odds and this daughter fights hard not to lose them. In a single year, 2004, Danticat—now married, in Miami, pregnant with her daughter—will watch her world unravel. She will bear witness to what revolutionary upheaval and disease can do to the men who, for so much of her youth, were not just essential but also invincible. She will find a way to make of fragments a whole.
**A THREE DOG LIFE: A MEMOIR/Abigail Thomas**
There is a very good reason that Stephen King called this memoir the best he'd ever read. It manages, at the very same time, to be spare and to offer great depth, to be intimate and somehow not self-involved, to be harrowing and also humorous (and also uplifting), to be about loss and yet (always) about the glorious possibilities of life right now, with whatever we still have, as whoever we still are. Abigail Thomas's husband went out one night to walk their dog. He didn't return. He had been struck by a car and badly hurt. His brain would never be the same. For the next many years he would live at a residential center and come home one day each week, and what could Abigail do but go about living in the meantime, hunting down beauty in her world, taking care of her three dogs, listening for the odd pearls of dreams or premonitions that the man she loved would sometimes share? This isn't continuous narrative so much as interlude. This is a book about living gracefully. A book about loving true.
**MY INVENTED COUNTRY: A MEMOIR/Isabel Allende**
"I wrote my first book by letting my fingers run over the typewriter keys," Isabel Allende tells us toward the end of this memoir, "just as I am writing this, without a plan." So yes, this memoir rambles, and yes, its premise is suspect (the author vaguely suggests but never convincingly explores some sort of existential link between her exile from Chile and the tragedy at the World Trade Center), and yes, it treads familiar ground (Allende's famously eccentric and colorful family; Allende's notoriously stubborn and passionate personal life; Allende's fixation on Chile, where so much of her nostalgia is centered; Allende's views on the role of stories in our lives). But don't let any of that dissuade you from reading these earthy and seductive pages. Fabulously well endowed with detail and insight, _My Invented Country_ is willing to reckon with the ghosts and spirits that have inspired her oeuvre. "My tendency to transform reality, to invent memory, disturbs me," she writes. "I have no idea how far it may lead me.... Thanks to it, I found a voice and a way to overcome oblivion, which is the curse of vagabonds like me."
**DRINKING: A LOVE STORY/Caroline Knapp**
There isn't the bravado, in _Drinking_, of the Magnificent Survivor. There isn't the boast one sometimes hears in the recounting of harrowing tales—_Can you believe I was like that? Can you imagine I survived? I know it's nasty, I know I was a jerk, but secretly, really, wasn't it all kind of wondrous, in a twisted (I'll admit it) way?_ There isn't the sense that Caroline Knapp believes that her story—about living with drink, about being tormented by it, about recognizing the need for sobriety and being terrified of coming to terms with her sober self—trumps all other tales. There is only the sense that perhaps by telling her tale—by exploring the slide, the massive deceptions, the dangers, the heat and seeming loveliness of alcohol—she may be helpful to others. This is not memoir as exorcism or exhibitionism, in other words. It's not a memoir in which the rememberer pretends to remember any more than she actually does. It is a book that is moving and hopeful and sad. It is authentic, as memoir must be.
**THE MAP OF MY DEAD PILOTS: THE DANGEROUS GAME OF FLYING IN ALASKA/Colleen Mondor**
This is the story of the four years Colleen Mondor spent running operations for a bush commuter airline in Fairbanks, Alaska. It's about the planes that rose and fell, the pilots that went missing, the cargo no one would believe. It's about defying the odds, the weather, the smash wall of mountains until those things rise up and speak and refuse to be defied. It's about vanishing, about vanishing's speed. It's about a daughter who loses her father too soon and who, in the end, writes stories down in search of some salvation. It's a memoir, but it's a chorus. It's a we and a them on the rhythmic order of Tim O'Brien's _The Things They Carried_, a book that brings us into itself (and keeps us there, utterly absorbed). This is that other kind of memoir in which the author is not the heroine but the webber, the weaver, the voice for those who are no longer here to tell their own stories. That is not to suggest that there's any distance here, a single line that feels academic (though it has all been magnificently researched) or at emotive remove. Mondor's passion for those days and those people, her intimate knowing, is galvanizing. She's tough, and she's been toughened; she rarely puts her own self center stage. But when she appears, when she tells us something personal, the stories stick.
**HOUSE OF STONE: A MEMOIR OF HOME, FAMILY, AND A LOST MIDDLE EAST/Anthony Shadid**
_House of Stone_ is breathtaking—gigantic in ambition, equal to that ambition, combustible and yet right in its mix of country history, imagined (or imaginatively _supplemented_ ) familial history, personal yearning, poetry, politics, and passion flowers. It recounts the months Anthony Shadid spent rebuilding his great-grandfather's estate in old Marjayoun. It gives us Shadid, newly divorced and with a daughter far away, seeking to resurrect the idea of home. It introduces the sarcasm and suspicions and ironies and odd camaraderie of a band of Lebanese neighbors and fickle house builders. It memorializes a dying doctor who knows everything, it seems, about gardens. A book built of many parts, _House_ yet works, sweeping foreigners like myself toward its quiet, exotic heart. There is war, and there is the pickling of olives. There is dust, yet flowers grow. There are age-old accusations and cautions about war. There is a father working so far from the daughter he loves but choosing to believe in days yet to come. There is Shadid's own sadness over those who have died too soon—by horse, by weakened lungs. Yes, horse. Yes, weakened lungs. It is nearly unbearable to read these passages, but they are so beautiful and holy that we do.
**NO HEROES: A MEMOIR OF COMING HOME/Chris Offutt**
Chris Offutt's memoir is an odd composite—a gloriously funny and moving account of the year the author returned to his Kentucky roots to teach at his alma mater, interspersed with the stories of his parents-in-law, two Holocaust survivors. What binds the two narratives—more or less—is Offutt's desire to try to locate home, or at least to define it. What makes this memoir such a compelling read is Offutt's mastery of voice. Ironic, sardonic, ultimately tenderhearted Kentucky twang lives on every page of this book. Simple sentences. Walkabout sounds. Startling, original images that keep a reader reading.
**HIROSHIMA IN THE MORNING/Rahna Reiko Rizzuto**
When Rahna Reiko Rizzuto wins a grant that will take her to Japan for six months to complete research for a book, everything changes. She'll be leaving her husband and two toddler boys at home. She'll be entering a new landscape, be forced to negotiate, at least somewhat, in the language of her ancestors. She leaves New York City as a professional on a quest. She arrives in Japan as a woman with unprecedented freedoms. Everything can be questioned in this environment, and everything is. What is a mother? What is a wife? What is owed, and what must be taken? Whose side do we stand on when the question is survival? Built of letters, interview transcripts, travelogue entries, and questions of responsibility to self and others, _Hiroshima in the Morning_ wades into dangerous, even inflammatory territory and rocks easy assumptions about sacrifice and selfishness: "It is a question of time, and time is the question: How does one spend it? When does the part about living your life to the fullest begin to shift into just making do, and then into suffering, and how do any of us know where we are in this process?"
**NOTHING TO DECLARE: MEMOIRS OF A WOMAN TRAVELING ALONE/Mary Morris**
I traveled to San Miguel because of this book—because of the way it introduced me to a landscape, a culture, a desire. Mary Morris is young, on her own, and searching when she sets off for a journey south. She must choose what to trust and who to be as she interacts with strangers and strange places. This is an expat tale. It is also beautiful writing about remembering: "Women remember. Our bodies remember. Every part of us remembers everything that has ever happened. Every touch, every feel, everything is there in our skin, ready to be awakened, revived.... The water entered me and I could not tell where my body stopped and the sea began. My body was gone, but all the remembering was there."
RAPACIOUS MINDS
**ISTANBUL: MEMORIES AND THE CITY/Orhan Pamuk**
Some memoirs wind you back through the crowded streets of the hero's childhood. Some wend you through the neural pathways of the author's craving, omnivorous mind. _Istanbul_ , by the Nobel Prize–winning Orhan Pamuk, does both. Sebaldian in scope, suffused with gorgeous black-and-white photographs of historic Istanbul, this is an exploration of a city, a man, and a particularly rich, involving melancholic state known as _hüzün_. "The _hüzün_ of Istanbul is not just the mood evoked by its music and its poetry," writes Pamuk, "it is a way of looking at life that implicates us all, not only a spiritual state but a state of mind that is ultimately as life-affirming as it is negating." _Istanbul_ sprawls like the city sprawls. Its sentences can sometimes consume entire pages as they evoke landscapes and childhood rooms, gossip and history, painters and writers. Pamuk takes readers on a journey—his journey—as a boy in love with his mother, as a teen in love with his city, and as a young man who ultimately chooses writing over painting. Pamuk is tenderly and brilliantly tortured. He is obsessed with ruins and all the loss that ruins imply.
**NOTHING TO BE FRIGHTENED OF/Julian Barnes**
This is a helpful meditation on death and dying—on how people die (which is, of course, bound up with how people live) and on what people think along the way. Fear or acceptance? Defeat or glory? Ungainly irony or something worse? It is only partly memoir; it's equal parts wit and philosophy and literary biography. It's a chapterless not–outright diatribe, not–clinical exploration—perhaps _controlled rant_ is the term—that is nothing if not (and you know this matters to me) brilliantly choreographed. Julian Barnes assaults you. He appeases you. He is on your side and then he's all caught up with himself, as if he may be the only one facing ultimate extinction. No such luck, Barnes. It's a privilege to watch a mind like Barnes's work over, around, and through the inexplicableness of death. It's exhilarating, as a matter of fact, and has much to teach true memoir writers.
**A STEP FROM DEATH: A MEMOIR/Larry Woiwode**
Perhaps the hardest books to write are those that hold themselves accountable to no conventional boundaries or forms. Those that permit time to spill across their pages—backward, forward, a rush of movement, a sudden stilling, returns and retreats. Those in which one thought juts deeply into the core of another, in which elisions are story, in which one is at a loss to define a true end or beginning. Books like these cannot hold their readers, let alone survive themselves, unless they are perfectly calibrated—orchestrated as if by some higher power so that all the fragments do at last become a gleaming, self-sustaining whole. Regrets, wants, self-disgust, confessions—all of that is here in the mad, bold waters of this book. But _A Step from Death_ is meant to be so much more than that: a reconciliation with self, a bid to understand fathers and fatherhood. It tumbles and stonewalls and enthralls and wounds, roping readers through the thick braid of its sentences, its unapologetic instructions on how to read the book. One senses no precocity here, no purposeful manipulations. One senses, instead, a struggle to find the best way to say the hardest things, to put a life into context.
**UNCLE TUNGSTEN: MEMORIES OF A CHEMICAL BOYHOOD/Oliver Sacks**
I like to remind my students, every now and then, that one need not have had a childhood of hardship or terror, high adventure or deprivation to have within one the stuff of memoir. Oliver Sacks's quiet but lovely _Uncle Tungsten_ is a case in point. "Many of my childhood memories are of metals: these seemed to exert a power on me from the start," the book begins. "They stood out, conspicuous against the heterogeneousness of the world, by their shining, gleaming quality, their silveriness, their smoothness and weight. They seemed cool to the touch, and they rang when they were struck." Passion, then—its emergence, its evolution, its effect on the shape of a life—lies at the core of this book, the discovery and nurturing of one boy's purpose.
**ALL THE STRANGE HOURS: THE EXCAVATION OF A LIFE/Loren Eiseley**
By his own admission, Loren Eiseley appears "to know nothing of what I truly am: gambler, scholar, or fugitive." By external measures he was an archaeologist, an acclaimed author, a teacher at my own University of Pennsylvania and elsewhere. _All the Strange Hours_ , first published in 1975 and written in the late years of Eiseley's life, is a stunning attempt at reconciling the many fragments of his wildly variant experiences as the son of a harsh deaf woman, a young man riding the rails during the Great Depression, an archaeologist, and a writer in the making. His text melds memories vital and true. It blends visions and fears. Eiseley demonstrates an acute talent for finding meaning in landscape. He is brilliant on time. He has a striking gift for describing people. He writes sentences like these: "Oncoming age is to me a vast wild autumn country strewn with broken seedpods, hurrying cloud wrack, abandoned farm machinery, and circling crows. A place where things were begun on too grand a scale to complete."
FUNNY BUSINESS
Memoir—the life recalled and sifted not just for the sake of narrative but also for the sake of some transcendent knowing—isn't always an easy fit for humorists, who naturally gravitate toward fish stories and flights of fancy, hyperbole and all varieties of magnification. Sometimes—often—funny lives in the overestimated, the caricature, the stretch. Sometimes extreme coagulation or tricked-out inventory is the place where laughter starts. Short and to the point is frequently more effective than long, cohering, and flowy. Punch lines have been known to trounce epiphanies.
Humor memoirs, then, aren't always actual memoirs, but who doesn't want to laugh? My students often ask me what to do with their real-life funny stuff. When they do, I point them to these books.
**A GIRL NAMED ZIPPY: GROWING UP SMALL IN MOORELAND, INDIANA/Haven Kimmel**
Wielding a perfectly calibrated child's voice and reporting on her hometown of a mere 300 strange nonstrangers, Haven Kimmel never lets her foot off the funny pedal in _A Girl Named Zippy_. It's all gullibility and incredulity—one or the other, sometimes both at the same time. It's _what's going on here?_ ruminations on bad hair, chicken love, familial disputes, improbable alliances, odd neighbors, and bad card games. It's a wry science-fiction-loving mother, a deal-making father, a perfect sister (except when she isn't), and a very tall brother who turns out to be quite smart. And then there are all those neighbors. Kimmel makes us laugh without implicating us in ridicule. She writes lines like these about her famously unruly hair: "The really short haircut (the Pixie, as it was then called) was my favorite, and coincidentally, the most hideous. Many large, predatory birds believed I was asking for a date."
**A WALK IN THE WOODS: REDISCOVERING AMERICA ON THE APPALACHIAN TRAIL/Bill Bryson**
I'm not sure Bill Bryson would slot this memoiristic travelogue into the Funny Business pages of _Handling the Truth_ , but I do because of this: One night, sleepless, I grabbed _A Walk in the Woods_ , curled up on the couch, and then proceeded to fall to the floor twice thanks to uncontrollable laughter. I'd bought the book out of curiosity about the Appalachian Trail and the Great Smoky Mountains National Park, which my great-grandfather Horace Kephart helped to create. I'd hoped to learn about geology and last chances for an essential wilderness. Bryson delivers that, of course, along with wit, history, and regional color, making this a trail adventure with a non-self-glorifying greater purpose. But Bryson is also consistently hilarious, as with this description of his sidekick, Katz: "His posture brought to mind a shipwreck victim clinging to a square of floating wreckage on rough seas, or possibly someone who had been lifted unexpectedly into the sky on top of a weather balloon he was preparing to hoist—in any case, someone holding on for dear life in dangerous circumstances."
**BOSSYPANTS/Tina Fey**
I read _Bossypants_ with an odd sense of _You go, girl_ familiarity. Or maybe the pride I was feeling was pride in my own gender—the smartness of Tina Fey, the intelligence of her voice, the fluidity of her prose, the sense you get that she wrote this whole thing on her own, without the intercession of a hired pen. In _Bossypants_ , we get the down and dirty on Fey's growing up, her funny friends, her appealing parents. We see Fey at work as a young comedienne, as a young comedic writer, as the supernova force behind _30 Rock_. Amid all that is so funny, all that sings so smoothly along, we get Fey as the non-Celeb celebrity. She's just a person—kinda like you, kinda like me. She's wowed by her good fortune, she's annoyed by her critics, she's amused by Photoshopping, and she's not going to judge your parenting style, even if you choose to judge hers. It is by connecting with her readers—by reaching across what might have been the great divide—that Fey delivers memoir.
**ME TALK PRETTY ONE DAY/David Sedaris**
Shortly after I started reading David Sedaris's _Me Talk Pretty One Day_ , I started meeting the author in my dreams. It's true. There he'd be, wielding a purple feather duster and reviving a poignant, painful moment from his past—something, say, about his youthful ambitions to be a commercial jingle singer. A few nights later, I'd meet up with Sedaris again—this time in a Parisian café, where he'd regale me with tales of his many troubles learning the oversexed French language from an impatient instructor. I'd find myself waking up sore-throated and exhausted, the taste of the dream-induced laughter still on my tongue. Sedaris has been called a modern-day Mark Twain and Garrison Keillor's evil twin. He's been likened to J. D. Salinger and Dorothy Parker. But in fact he is an unassuming original, a middle-aged guy who remembers all too well how it felt to be young and gay and obsessed and inadequate, and whose acute vulnerability springboards his writing even now. Occasionally bawdy, giddily obscene, perpetually on the lookout for foibles and flaws, Sedaris somehow never fails to finally respect his subject matter. His sharp tongue achieves acuity, not cruelty. His humor pokes but never skewers.
**WHY I'M LIKE THIS: TRUE STORIES/Cynthia Kaplan**
Comprising twenty-one short tales, _Why I'm Like This_ begins with a hilarious send-up of Cynthia Kaplan's final year at "Queechy Lake Camp" and ends with a tour de force about the cruelly unfulfilled lives of truffle pigs. In between, we meet Kaplan's father, the unrepentant "Gadgeteer"; Kaplan's mother, who hopes against hope that the author will someday learn fashion; Kaplan's husband, who rescues Kaplan from a lifetime of loser boyfriends; Kaplan's pill-popping, self-absorbed, and disturbingly untidy therapist; Kaplan's wide-eyed newborn son; and, most touchingly, Kaplan's grandparents. In tale after tale, Kaplan yields fragments of a life, torn-out episodes, scrupulously polished set pieces that rise above their punch lines to achieve not just humor but also poignancy. Both honest and compassionate, refreshingly intelligent, fastidiously articulate, Kaplan writes not at the expense of others but at the expense of herself. Never the most popular girl in the class, often lonely, frequently waking up with the wrong kind of man, afraid of moths—afraid, indeed, of most things—Kaplan is all too aware of her conflicted dark side, her own desperate, fatalistic, worrisome, worrying self. With her unique brand of humane observation and wit, with her deadpan voice and her fearless honesty, Kaplan's book is one in which people will recognize some heretofore unearthed part of their own history or selves.
HELPFUL TEXTS
**THE SITUATION AND THE STORY: THE ART OF PERSONAL NARRATIVE/Vivian Gornick**
Vivian Gornick's _The Situation and the Story_ provides students (and teachers) with a broadened understanding of the task ahead. Writers of memoir, says Gornick, must develop a persona, must identify the telling situation(s), and must, essentially, locate the story inside the situation—the reason the situation matters, the why of it all. I frequently introduce Gornick into my lessons. I find that when I do, students begin to look at their own work and say, _Well, yes, I've described my situation brilliantly (perhaps), but I have no idea, still, what my story is._ Beyond Gornick's ideology lie her fascinating, in-depth assessments of many classic memoirs.
**I COULD TELL YOU STORIES: SOJOURNS IN THE LAND OF MEMORY/Patricia Hampl**
I never teach the same thing twice, but that doesn't mean I forsake the classics in favor of novelty. The one essay that I have carried forward into every memoir class is Patricia Hampl's "Memory and Imagination," found within _I Could Tell You Stories_. You just don't teach memoir without it, or at least I don't. You can't go far without words such as these:
We seek a means of exchange, a language which will renew these ancient concerns and make them wholly, pulsingly ours. Instinctively, we go to our store of private associations for our authority to speak of these weighty issues. We find, in our details and broken, obscured images, the language of symbol. Here memory impulsively reaches out and embraces imagination. That is the resort to invention. It isn't a lie, but an act of necessity, as the innate urge to locate truth always is.
Hampl believes that "the narrative self (the culprit who invented) wishes to be discovered by the reflective self, the self who wants to understand and make sense of a half-remembered moment about a nun sneezing in the sun."
**VANISHING POINT: NOT A MEMOIR/Ander Monson**
Ander Monson's _The Vanishing Point_ is interesting stuff—quotable, inventive, daggered, asterisked, _me_ -dominated and _me_ -avoidant, not quite memoir, though Monson himself would be the first to count all the sentences beginning with (or featuring) that single letter _I_. Monson is full of rue and half steps, full of self-disclosures that may or may not reveal the actual self. Full, most of all, of the questions: _Canthe actual self be revealed?_ _Can the we be known? Is the I a reliable story?_ Monson is thinking out loud, in these pages, about truths and dares, about how the technology we write with may or may not shape what we write. He is thinking about solipsisms and (magnificently) "assembloirs," and he gets us thinking, too. Playfully, insistently, self-defeatedly, self-aggrandizingly, Monson puts a lot at stake.
**THE ART OF TIME IN MEMOIR: THEN, AGAIN/Sven Birkerts**
With quiet intelligence, Sven Birkerts begins _The Art of Time in Memoir_ with a story about his own descent into the making of memoir. What did the memories he had ready access to mean? How were involuntary memories leading him away from the obvious "events" of his life toward a braided understanding of his life's meaning? What could the memoirs of Annie Dillard, Frank Conroy, Jo Ann Beard, Paul Auster, Virginia Woolf, and so many others teach? Birkerts is a first-rate critic. His reflections on classic memoirs are, I think, unparalleled, and his obsession with time is instructive. Finally, for those who remain unconvinced that there is a real and important difference between memoir and autobiography, Birkerts provides the best clarification I've seen. I'll quote at length here, because it matters. Autobiography, Birkerts begins by telling us, is "the line of one's own life." Memoirs,
by contrast are neither open ended nor provisional. For as the root of the word attests, they present not the line of the life but the life remembered. They are pledged not to an ostensibly detached accounting of events but to presentation of life as it is narratively reconstituted by memory. The memoirist is generally not after the sequenced account of his life so much as the story or stories that have given that life its internal shape.
**THE MADE-UP SELF: IMPERSONATION IN THE PERSONAL ESSAY/Carl H. Klaus**
If the titles designating the four parts of this slender paperback seem, at first, daunting—"Evocations of Consciousness," "Evocations of Personality," "Personae and Culture," and "Personae and Personal Experience"—there's a lot of good stuff in between. Ruminations on the poetics of self, the possibility/impossibility of tracking the mind at work, the grand seductions and sometimes promise of what Carl Klaus, the founding director of the University of Iowa's Nonfiction Writing Program, calls "the literature of interiority. The story of thought. The drama of mind in action," etc. We get satisfying reflections on Montaigne reflecting on Montaigne, pithy quotes from nonfiction masters, mind teases that force us to conclude (again and again) that writing (and reading) the personal essay is both a minefield and an irresistible enterprise. The personal essay and memoir are cousins, or can be. There are puzzles worth de-puzzling here.
**MEMOIR: A HISTORY/Ben Yagoda**
Reason number one to read Ben Yagoda's _Memoir: A History_ : You'll learn something about the wherefore and comeuppance of the personal story (I use the term _personal story_ because Yagoda devotes a considerable percentage of his book to autobiography, which is not—see Birkerts above—precisely the same as memoir). Reason number two: You'll steel yourself for many of the assaults that are now directed at this nervy enterprise. Yagoda's sweep through the generations touches on such disparate characters as Augustine, Rousseau, Ulysses S. Grant, Mark Twain, Helen Keller, Kathryn Harrison, and James Frey. It offers an up-to-date survey of the scientific work that debunks the possibility of foolproof remembered truth. Yagoda has fun cataloging memoir's many crimes. Still, he does, in the end, confess that the memoir boom "has been a net plus for the cause of writing."
**ON WRITING: A MEMOIR OF THE CRAFT/Stephen King**
Years ago, when my son was in sixth grade, I lay on his narrow bed while he sat propped up on the floor, reading aloud from Stephen King's _On Writing_. You would have thought we were reading some extraterrestrial, action-jammed tale the way we returned to this book each day, for that's the kind of book this is—the kind that you can't get enough of. It's King's personal writing tale, and in it he generously imparts lessons about what to write and where to write it; narration, description, and dialogue; drafting and redrafting; and just about anything else writers need to know to write well. Writers in any genre would benefit from reading this book. Memoirists, by extension, will, too.
**BIRD BY BIRD: SOME INSTRUCTIONS ON WRITING AND LIFE/Anne Lamott**
Finally, but of course, there's Anne Lamott's _Bird by Bird_. Wily, funny, honest, wholly empathetic, and self-confessional, this, like King's book, is designed for those working in any genre, meaning that there's plenty here for memoirists. Short assignments, perfectionism, morality, broccoli, jealousy—they're all here not just for the taking but also for the applying.
SOME ADDITIONALLY CITED SOURCES
_Anonymous,_ _A Woman in Berlin: Eight Weeks in the Conquered City—A Diary_
_Susannah Cahalan, Brain on Fire: My Month of Madness_
_Elias Canetti, The Tongue Set Free_
_Teresa Carpenter, editor, New York Diaries: 1609 to 2009_
_Leah Hager Cohen, Train Go Sorry: Inside a Deaf World_
_Terrence Des Pres, "Writing into the World" and "Accident and Its Scene: Reflections on the Death of John Gardner"_
_Joan Didion, "On Keeping a Notebook"_
_Dave Eggers, A Heartbreaking Work of Staggering Genius_
_Forrest Gander, As a Friend: A Novel_
_Elizabeth Gilbert, Eat, Pray, Love: One Woman's Search for Everything Across Italy, India and Indonesia_
_Priscilla Gilman, The Anti-Romantic Child: A Memoir of Unexpected Joy_
_Natalia Ginzburg, "My Craft"_
_Francisco Goldman, Say Her Name: A Novel_
_Ted Kooser, "Applesauce"_
_Jenny Lawson, Let's Pretend This Never Happened: A Mostly True Memoir_
_Chang-rae Lee, "Coming Home Again"_
_Debra Marquart, The Horizontal World: Growing Up Wild in the Middle of Nowhere_
_J. R. Moehringer, The Tender Bar: A Memoir_
_Kate Moses, Cakewalk: A Memoir_
_Azar Nafisi, Reading Lolita in Tehran: A Memoir in Books_
_Nasdijj, The Boy and the Dog Are Sleeping_
_Pablo Neruda, Absence and Presence_
_Joyce Carol Oates, A Widow's Story: A Memoir_
_Nuala O'Faolain, Are You Somebody?: The Accidental Memoir of a Dublin Woman_
_Alice Ozma, The Reading Promise: My Father and the Books We Shared_
_Lia Purpura, "Autopsy Report"_
_Rainer Maria Rilke, Letters to a Young Poet_
_Jane Satterfield, Daughters of Empire: A Memoir of a Year in Britain and Beyond_
_Gerald Stern, "Eggshell"_
_Sallie Tisdale, "Violation"_
_Eudora Welty, One Writer's Beginnings_
_Virginia Woolf, A Room of One's Own_
# ACKNOWLEDGMENTS
I begin with the students. All of the students. The youngest ones, who years ago came first to my house and then to a garden, with exhilarating faith in stories. The faraway ones, in Maryland and California, in Wisconsin. The ones whom I met for just a morning or an afternoon, the ones who added me to their rosters and stayed. Students teach the teachers how, and I have been outrageously blessed in my lessons. I have, over and again, fallen in love. For the questions asked, for the memories written, for the saturating joy of each one of you, unquantifiable thanks. To those who generously share their beauty here—Andrea Amanullah, Leah Apple, Rachel Au-Yong, Dascher Branch-Elliman, Kimberly Eisler, Katie Goldrath, Sara Kalkstein, Elizabeth Knight, Nabil Mehta, Erin Nigro, Jonathan Packer, Joseph Polin, Beryl Sanders, Gabriel Seidner, and Stephanie C. Trott—I am honored to carry your words forward.
Thank you to Gregory Djanikian, Al Filreis, and Mingo Reynolds, who have made room for me at the University of Pennsylvania, and to Greg, especially, for the conversation.
Thank you to Karen Rile, my indispensable and talented colleague at Penn, and to Alyson Hagy, Ivy Goodman, Lisa Zeidner, Rahna Reiko Rizzuto, and Elizabeth Mosier—exemplary writers and teachers and abiding friends who have been there, in so many ways, throughout this teaching journey. Thank you to Kelly Simmons, who calls when I need to laugh, or cry, and to Colleen Mondor and Katrina Kenison for the dialogue.
Thank you to Melissa Sarno, who, in an awesomely clever response to one of my occasional Facebook tirades about a certain non-memoir memoir, passed along a YouTube clip from an early Aaron Sorkin movie. And so a title was born.
Thank you to Elizabeth Taylor at the _Chicago Tribune_ , John Prendergast at the _Pennsylvania Gazette,_ Michael Pakenham formerly of _Baltimore Sun,_ Karen Templer formerly of _Readerville_ , as well as the editors of the _Philadelphia Inquirer_ , the _Washington Post, Book_ magazine, _Philadelphia_ magazine, _Salon,_ the _New York Times, Shelf Awareness, Publishing Perspectives, Publishers Weekly,_ and elsewhere, who have, through the years, given me room to write about books and the book life. Some of the books referenced in _Handling the Truth_ first made their way to me for review thanks to these editors, and at times, I borrow heavily from myself. It was a gift to appear first in your publications.
Thank you to the bloggers who, with outrageous generosity, have supported this writing dream of mine, and who have given me cause, every single day, to wake up and think and blog forward. Some of what appears in _Handling the Truth_ first appeared, in different form, on my blog. I have been enlightened and heartened by the conversation.
Thank you to the great memoirists, so many of them cited here, who have inspired, goaded, appeased, and made the world a smarter, gentler, more interesting place. Some of them are my enduring friends. These friendships matter hugely.
Thank you to William Shinker for his warm welcome to the fabulous Gotham Books. Thank you to Lauren Marino, my Gotham editor, for her immediate embrace of this book, for her great good humor, for her guiding touch, and for her friendship along the way. I would not have wanted to publish this book with any other. Thank you to Susan Barnes for all the big and small things I know you do, and for the sprightly e-mail conversations. Thank you to Mary Beth Constant, copy editor supreme, who took such deep interest, who read so closely, who saved me from my fast-typing self, who opined, and who made me laugh; your contribution to these pages is priceless. Thank you to Mimi Barke, who produced a timeless, original, and (I think) striking image for the cover of this book; I will be so proud to have this prettily outfitted thing sit on my shelf. Thank you to Spring Hoteling, who made this book so gorgeous, page by page, and to Lavina Lee, Erica Ferguson, and Dora Mak, who brought such care to the production process. Thank you to Beth Parker, Gotham publicist, who understood at once what mattered most about this book, and who generously made sure that others did as well. Thank you to Lisa Johnson and Jessica Chun, who so passionately launched this book into the world. Thank you to my first Penguin family—Tamra Tuller, Michael Green, Jessica Shoffel, and Jill Santopolo—for believing in my work, and in me, and for shoring up the foundations of my writing life.
Thank you—thank you—to Amy Rennert, who received this book on a Saturday and called me the very next day, a Sunday, certain. Shortly thereafter (a mere snap in time), Amy was calling to say that this book had found its perfect Gotham home. These are the calls a writer never forgets, and for her unflagging support of this book, for knowing how much these students mean to me, for understanding why I had to overtly handle truth—make these confessions, offer these cautions—thank you. And thanks, too, to Robyn Russell, for all the years.
Finally, I was a student at the University of Pennsylvania primarily because my father had been a student there before me. This learning life of mine begins, then, with him. Living itself would not be possible without the two men who rock my world: my husband, Bill, and our son, Jeremy. Countless times, over the past many years, I have consulted with my bright and beautiful son about living and about teaching. Always he has known what to say. When I have doubted myself, when I have regretted, when I have wanted to pull some of my own pages back, when I haven't trusted that my process would hold, Jeremy—pure-hearted and absolute, bedrock in his conviction that when our words can help others, we should use them, smarter than anyone I know—has stood firm. I am a teacher because I was a mother first. I am lucky in this life.
# ABOUT THE AUTHOR
Beth Kephart, a National Book Award finalist, is the author of five memoirs. Her other books include the autobiography of Philadelphia's Schuylkill River, _Flow_ ; the Spring 2010 IndieBound Pick _The Heart Is Not a Size_ ; the Autumn 2010 IndieBound Pick _Dangerous Neighbors_ ; and the critically acclaimed novels for young adults _Undercover, House of Dance, Nothing But Ghosts, You Are My Only_ , and _Small Damages_. Kephart is a winner of the Pennsylvania Council on the Arts fiction grant, a National Endowment for the Arts grant, a Leeway grant, a Pew Fellowships in the Arts grant, and the Speakeasy Poetry Prize, among other honors. Her essays are frequently anthologized, she has judged many competitions, she has written for numerous national magazines and newspapers, and she has taught workshops across the United States, to all ages. Kephart teaches creative nonfiction at the University of Pennsylvania and served as the inaugural readergirlz author-in-residence. She is the strategic writing partner in the boutique marketing communications firm Fusion. In 2014 Chronicle Books will release _We Could Be Heroes, Just for One Day_. Please visit Beth's blog, twice named a top author blog during Book Blogger Appreciation Week, at www.beth-kephart.blogspot.com.
2 editions of On the springing and adjusting of watches found in the catalog.
On the springing and adjusting of watches
F. J. Britten
being a description of the balance spring and the compensation balance with directions for applying the spring and adjusting for isochronism and temperature
by F. J. Britten
Published 1898 by E. & F. N. Spon, Spon & Chamberlain in London, New York.
Clocks and watches.
Statement by F. J. Britten.
Timepiece repairs can be expensive and often take a long time, so you generally want to avoid damaging a watch. It's obvious that throwing your watch against a wall, running over it with a Bentley, or smashing it with a hammer are things to avoid. However, not everything that can damage a watch is so obvious, so here I list five common things you may not be aware of that can damage a watch. This book by Anthony Whiten has a higher readability factor than other books on watchmaking. While Donald de Carle's "Practical Watch Repairing" is considered a great book for all, this book is another tool which is easier to use and without as much technical information.
Filed under: Clocks and watches -- Escapements. Watch and clock escapements; a complete study in theory and practice of the lever, cylinder and chronometer escapements, together with a brief account of the origin and evolution of the escapement in horology (Philadelphia: The Keystone), by Keystone (page images at HathiTrust). To adjust a deployment clasp, you'll need to unfold the hinged metal sections and set the length of the band. Once you've fitted the band to your wrist, there's no need to adjust it again—simply snap the deployment clasp open and closed each time you want to wear your watch.
A balance wheel, or balance, is the timekeeping device used in mechanical watches and some clocks, analogous to the pendulum in a pendulum clock. It is a weighted wheel that rotates back and forth, being returned toward its center position by a spiral torsion spring, the balance spring or hairspring. It is driven by the escapement, which transforms the rotating motion of the watch gear train into impulses delivered to the balance wheel. Contents include "The Chronometer Escapement" (p. 67), "Daniel's Independent Double-Wheel Escapement" (p. 72), and "The Double-Roller" (p. 79). Chapters 12 to 18 are similarly presented in order to introduce the reader to the logic behind the drawings, though these drawings are more involved and require some understanding of watch theory, such as lock, drop, and draw.
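As an aside, the balance described above behaves as a torsional harmonic oscillator: its period depends only on the wheel's moment of inertia and the hairspring's stiffness. The relation below is standard physics stated here for reference, not a formula quoted from any of these books:

```latex
% Period of a balance wheel modeled as an ideal torsional oscillator,
% where I is the balance's moment of inertia and \kappa is the
% torsional stiffness of the hairspring:
T = 2\pi \sqrt{\frac{I}{\kappa}}
```

Adjusting a watch's rate therefore comes down to trimming one of these two quantities: timing screws or weights change \(I\), while moving the regulator changes the effective spring length and hence \(\kappa\).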
On the springing and adjusting of watches, by F. J. Britten
On the Springing and Adjusting of Watches: Being a Description of the Balance Spring and the Compensation Balance With Directions for Applying the Spring and Adjusting for Isochronism and Temperature (Classic Reprint). Paperback. Author: Frederick James Britten.
Practical Course in Adjusting Comprising a Review of the Laws Governing the Motion of the Balance and Balance Spring in Watches and Chronometers, and Application of the Principles Deduced Therefrom in the Correction of Variations of Rate Arising From Want of.
Practical Watch Adjusting and Springing, by Donald De Carle. The Watch Adjuster's Manual: A Practical Guide for the Watch and Chronometer Adjuster in Making, Springing, Timing and Adjusting for Isochronism, Positions and Temperatures, by Charles Edgar Fritts.
This vintage book contains a complete guide to making, adjusting, springing, timing and adjusting a variety of watches.
This work has been selected by scholars as being culturally important, and is part of the knowledge base of civilization as we know it.
Declaring his book "brand agnostic", it is refreshing to find colour photographs of watches from over 90 brands instead of the round-up of usual suspects that tend to dominate the arena. In a nutshell, Ryan Schmidt is your trusted guide to the cosmos of watches. (Review by Rebecca Doulton.)
Books on Watch and Clock Repair and working with Watchmakers Lathes: Practical Watch Repair; Practical Clock Repair; Clock Repairing as a Hobby; The E.J. Swigart Co. Illustrated Manual of American Watch Movements; Repairing Old Clocks & Watches; Watch & Clock Encyclopedia; Turning and Milling in Horology; The Watchmaker's and Model Engineer's Lathe Book; The Watchmaker and his Lathe.
The first book on our list, "The Complete Price Guide to Watches," is published annually and is considered the authoritative source for information on vintage watch prices. Its focus is primarily American watches, but it has a section on European watches as well.
It is an excellent book with a wealth of information on vintage watches. This book's contents include: general condition of the movement; cleaning and oiling; balance pivots; fitting a flat balance spring; fitting a breguet balance spring; positional timing; further considerations when fitting a spring and observing the point of attachment; general notes on springing and timing.
Adjusting a Burlington (Illinois) 16s Pocket Watch to 6 Positions, Step by Step. The Burlington Watch Company is an interesting figure in the commercial history of watches. The Wristwatch Handbook is the gateway to informed wristwatch collecting that will open up the world of watches to you in a concise and enjoyable way.
This book brilliantly mixes vintage and modern watches in a visually accessible way with useful and informative reflections on what makes watches tick. In honor of Black Friday, one of the biggest shopping days of the year, the HODINKEE Shop is now stocking 15 books that are required reading for any watch lover, and we've put together recommendations based on some of our favorite straps, rolls, and tools.
Click through for the full list. How to fit a new watch winder; crown and stem replacement; broken winder (watch repair series, Watch Repair Channel). We have a wide selection of books on watchmaking and watch repair from knowledgeable experts who want to share their years of experience and secrets with you.
There are many people including collectors, watchmakers, and antique dealers who can find helpful information like watch pricing guides, watch repair guides, and identification of watches.
This is the sound an engine compensator makes when the spring pack behind it is worn out; the engine compensator in this video was used in certain Dyna and touring models. & Watches Ltd, Sectric House, Cricklewood, London, both of whom have been most helpful in supplying information and drawings.
It is worthy of note here that in my approach to the industry, I found an unexpected enthusiasm to help when it was known that the book was to be a hobby book. Whether you're an avid watch collector or just an individual interested in the history of the pocket watch, there's a long and interesting story behind how one of the world's most popular accessories came into existence. Pocket watches have been around since at least the 14th century, and over the years they have evolved into the timepiece we know today.
The repeater in Figure 7 uses two separate return springs. Spring 9 is the quarter-pallet return spring. The hour-pallet return spring is not shown, but it is inside the movement frame and acts on the hour-pallet return arm. As noted above, all three of the pins, 1, 2 and 3, protrude above the plate. The Practical Lubrication of Clocks and Watches (Version 3) describes drawing oil from the narrow neck of a bottle.
It can easily be sterilised before use to remove any particles of fibre or fluff by passing it through a flame.
All-in-one type of oil pot; types of oilers. Buy The History of Watches by David Thompson and Saul Peckham from Amazon's Book Store.
Everyday low prices and free delivery on eligible orders.
NOREE is a two-sided instant hood design. With only two loops, it is ready to wear without any pin or brooch. The flared drape and the flower-and-stone details at the ends of the cuts give a slimming illusion and an elegant, minimalist look. In only 15 seconds, the pinless piece drapes like a shawl. Ideal for those who prefer a minimalist style.
Suitable for the office and formal occasions; look glamorous and graceful.
package org.apache.guacamole.auth.jdbc.connectiongroup;
import com.google.inject.Inject;
import com.google.inject.Provider;
import java.util.Set;
import org.apache.guacamole.auth.jdbc.user.ModeledAuthenticatedUser;
import org.apache.guacamole.auth.jdbc.base.ModeledDirectoryObjectMapper;
import org.apache.guacamole.auth.jdbc.tunnel.GuacamoleTunnelService;
import org.apache.guacamole.GuacamoleClientException;
import org.apache.guacamole.GuacamoleException;
import org.apache.guacamole.GuacamoleSecurityException;
import org.apache.guacamole.GuacamoleUnsupportedException;
import org.apache.guacamole.auth.jdbc.base.ModeledGroupedDirectoryObjectService;
import org.apache.guacamole.auth.jdbc.permission.ConnectionGroupPermissionMapper;
import org.apache.guacamole.auth.jdbc.permission.ObjectPermissionMapper;
import org.apache.guacamole.net.GuacamoleTunnel;
import org.apache.guacamole.net.auth.ConnectionGroup;
import org.apache.guacamole.net.auth.permission.ObjectPermission;
import org.apache.guacamole.net.auth.permission.ObjectPermissionSet;
import org.apache.guacamole.net.auth.permission.SystemPermission;
import org.apache.guacamole.net.auth.permission.SystemPermissionSet;
import org.apache.guacamole.protocol.GuacamoleClientInformation;
/**
* Service which provides convenience methods for creating, retrieving, and
* manipulating connection groups.
*
* @author Michael Jumper, James Muehlner
*/
public class ConnectionGroupService extends ModeledGroupedDirectoryObjectService<ModeledConnectionGroup,
ConnectionGroup, ConnectionGroupModel> {
/**
* Mapper for accessing connection groups.
*/
@Inject
private ConnectionGroupMapper connectionGroupMapper;
/**
* Mapper for manipulating connection group permissions.
*/
@Inject
private ConnectionGroupPermissionMapper connectionGroupPermissionMapper;
/**
* Provider for creating connection groups.
*/
@Inject
private Provider<ModeledConnectionGroup> connectionGroupProvider;
/**
* Service for creating and tracking tunnels.
*/
@Inject
private GuacamoleTunnelService tunnelService;
@Override
protected ModeledDirectoryObjectMapper<ConnectionGroupModel> getObjectMapper() {
return connectionGroupMapper;
}
@Override
protected ObjectPermissionMapper getPermissionMapper() {
return connectionGroupPermissionMapper;
}
@Override
protected ModeledConnectionGroup getObjectInstance(ModeledAuthenticatedUser currentUser,
ConnectionGroupModel model) {
ModeledConnectionGroup connectionGroup = connectionGroupProvider.get();
connectionGroup.init(currentUser, model);
return connectionGroup;
}
@Override
protected ConnectionGroupModel getModelInstance(ModeledAuthenticatedUser currentUser,
final ConnectionGroup object) {
// Create new ModeledConnectionGroup backed by blank model
ConnectionGroupModel model = new ConnectionGroupModel();
ModeledConnectionGroup connectionGroup = getObjectInstance(currentUser, model);
// Set model contents through ModeledConnectionGroup, copying the provided connection group
connectionGroup.setParentIdentifier(object.getParentIdentifier());
connectionGroup.setName(object.getName());
connectionGroup.setType(object.getType());
connectionGroup.setAttributes(object.getAttributes());
return model;
}
@Override
protected boolean hasCreatePermission(ModeledAuthenticatedUser user)
throws GuacamoleException {
// Return whether user has explicit connection group creation permission
SystemPermissionSet permissionSet = user.getUser().getSystemPermissions();
return permissionSet.hasPermission(SystemPermission.Type.CREATE_CONNECTION_GROUP);
}
@Override
protected ObjectPermissionSet getPermissionSet(ModeledAuthenticatedUser user)
throws GuacamoleException {
// Return permissions related to connection groups
return user.getUser().getConnectionGroupPermissions();
}
@Override
protected void beforeCreate(ModeledAuthenticatedUser user,
ConnectionGroupModel model) throws GuacamoleException {
super.beforeCreate(user, model);
// Name must not be blank
if (model.getName() == null || model.getName().trim().isEmpty())
throw new GuacamoleClientException("Connection group names must not be blank.");
// Do not attempt to create duplicate connection groups
ConnectionGroupModel existing = connectionGroupMapper.selectOneByName(model.getParentIdentifier(), model.getName());
if (existing != null)
throw new GuacamoleClientException("The connection group \"" + model.getName() + "\" already exists.");
}
@Override
protected void beforeUpdate(ModeledAuthenticatedUser user,
ConnectionGroupModel model) throws GuacamoleException {
super.beforeUpdate(user, model);
// Name must not be blank
if (model.getName() == null || model.getName().trim().isEmpty())
throw new GuacamoleClientException("Connection group names must not be blank.");
// Check whether such a connection group is already present
ConnectionGroupModel existing = connectionGroupMapper.selectOneByName(model.getParentIdentifier(), model.getName());
if (existing != null) {
// If the specified name matches a DIFFERENT existing connection group, the update cannot continue
if (!existing.getObjectID().equals(model.getObjectID()))
throw new GuacamoleClientException("The connection group \"" + model.getName() + "\" already exists.");
}
// Verify that this connection group's location does not create a cycle
String relativeParentIdentifier = model.getParentIdentifier();
while (relativeParentIdentifier != null) {
// Abort if cycle is detected
if (relativeParentIdentifier.equals(model.getIdentifier()))
throw new GuacamoleUnsupportedException("A connection group may not contain itself.");
// Advance to next parent
ModeledConnectionGroup relativeParentGroup = retrieveObject(user, relativeParentIdentifier);
relativeParentIdentifier = relativeParentGroup.getModel().getParentIdentifier();
}
}
/**
* Returns the set of all identifiers for all connection groups within the
* connection group having the given identifier. Only connection groups
* that the user has read access to will be returned.
*
* Permission to read the connection group having the given identifier is
* NOT checked.
*
* @param user
* The user retrieving the identifiers.
*
* @param identifier
* The identifier of the parent connection group, or null to check the
* root connection group.
*
* @return
* The set of all identifiers for all connection groups in the
* connection group having the given identifier that the user has read
* access to.
*
* @throws GuacamoleException
* If an error occurs while reading identifiers.
*/
public Set<String> getIdentifiersWithin(ModeledAuthenticatedUser user,
String identifier)
throws GuacamoleException {
// Bypass permission checks if the user is a system admin
if (user.getUser().isAdministrator())
return connectionGroupMapper.selectIdentifiersWithin(identifier);
// Otherwise only return explicitly readable identifiers
else
return connectionGroupMapper.selectReadableIdentifiersWithin(user.getUser().getModel(), identifier);
}
/**
* Connects to the given connection group as the given user, using the
* given client information. If the user does not have permission to read
* the connection group, permission will be denied.
*
* @param user
* The user connecting to the connection group.
*
* @param connectionGroup
* The connectionGroup being connected to.
*
* @param info
* Information associated with the connecting client.
*
* @return
* A connected GuacamoleTunnel associated with a newly-established
* connection.
*
* @throws GuacamoleException
* If permission to connect to this connection is denied.
*/
public GuacamoleTunnel connect(ModeledAuthenticatedUser user,
ModeledConnectionGroup connectionGroup, GuacamoleClientInformation info)
throws GuacamoleException {
// Connect only if READ permission is granted
if (hasObjectPermission(user, connectionGroup.getIdentifier(), ObjectPermission.Type.READ))
return tunnelService.getGuacamoleTunnel(user, connectionGroup, info);
// The user does not have permission to connect
throw new GuacamoleSecurityException("Permission denied.");
}
}
package org.spongepowered.common.data.util;
import com.google.common.base.Optional;
import com.google.common.collect.ImmutableList;
import org.spongepowered.api.data.DataContainer;
import org.spongepowered.api.data.DataHolder;
import org.spongepowered.api.data.DataTransactionBuilder;
import org.spongepowered.api.data.DataTransactionResult;
import org.spongepowered.api.data.key.Key;
import org.spongepowered.api.data.manipulator.DataManipulator;
import org.spongepowered.api.data.manipulator.ImmutableDataManipulator;
import org.spongepowered.api.data.merge.MergeFunction;
import org.spongepowered.api.data.value.BaseValue;
import org.spongepowered.api.entity.EntityType;
import org.spongepowered.common.data.DataProcessor;
/**
 * A {@link DataProcessor} implementation that delegates each operation to an
 * ordered list of underlying processors, returning the first successful result.
 */
public final class DataProcessorDelegate<M extends DataManipulator<M, I>, I extends ImmutableDataManipulator<I, M>> implements DataProcessor<M, I> {
private final ImmutableList<DataProcessor<M, I>> processors;
public DataProcessorDelegate(ImmutableList<DataProcessor<M, I>> processors) {
this.processors = processors;
}
@Override
public int getPriority() {
return Integer.MAX_VALUE;
}
@Override
public boolean supports(DataHolder dataHolder) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
return true;
}
}
return false;
}
@Override
public boolean supports(EntityType entityType) {
return false;
}
@Override
public Optional<M> from(DataHolder dataHolder) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
final Optional<M> optional = processor.from(dataHolder);
if (optional.isPresent()) {
return optional;
}
}
}
return Optional.absent();
}
@Override
public Optional<M> fill(DataHolder dataHolder, M manipulator, MergeFunction overlap) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
final Optional<M> optional = processor.fill(dataHolder, manipulator, overlap);
if (optional.isPresent()) {
return optional;
}
}
}
return Optional.absent();
}
@Override
public Optional<M> fill(DataContainer container, M m) {
for (DataProcessor<M, I> processor : this.processors) {
final Optional<M> optional = processor.fill(container, m);
if (optional.isPresent()) {
return optional;
}
}
return Optional.absent();
}
@Override
public DataTransactionResult set(DataHolder dataHolder, M manipulator, MergeFunction function) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
final DataTransactionResult result = processor.set(dataHolder, manipulator, function);
if (!result.getType().equals(DataTransactionResult.Type.FAILURE)) {
return result;
}
}
}
return DataTransactionBuilder.failNoData();
}
@Override
public Optional<I> with(Key<? extends BaseValue<?>> key, Object value, I immutable) {
for (DataProcessor<M, I> processor : this.processors) {
final Optional<I> optional = processor.with(key, value, immutable);
if (optional.isPresent()) {
return optional;
}
}
return Optional.absent();
}
@Override
public DataTransactionResult remove(DataHolder dataHolder) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
final DataTransactionResult result = processor.remove(dataHolder);
if (!result.getType().equals(DataTransactionResult.Type.FAILURE)) {
return result;
}
}
}
return DataTransactionBuilder.failNoData();
}
@Override
public Optional<M> createFrom(DataHolder dataHolder) {
for (DataProcessor<M, I> processor : this.processors) {
if (processor.supports(dataHolder)) {
final Optional<M> optional = processor.createFrom(dataHolder);
if (optional.isPresent()) {
return optional;
}
}
}
return Optional.absent();
}
}
Q: Install Ubuntu to partition with Windows 7 Loader - erase partition with Win loader

Can I install Ubuntu 9.10 to the partition with the Windows 7 Loader? I.e., I want to completely erase this partition and have Ubuntu on it. Will GRUB boot Windows 7 without its loader?
Windows 7 Loader is on C: while the system itself is on E:.
A: Yes, you can install Ubuntu with Windows 7 pre-installed. Take a look at these notes on dual-booting Windows 7 and Linux and Adding Windows 7 to Linux Multi-boot.
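For reference, once Ubuntu's GRUB 2 is installed it can hand control to the Windows boot loader by chainloading the partition that holds it. A minimal sketch of such a menu entry follows; the device name `(hd0,1)` is an assumption, so adjust it to the partition actually containing the Windows boot files (on Ubuntu 9.10, running `update-grub` will normally generate an equivalent entry for you automatically):

```
# Sketch of a GRUB 2 menu entry that chainloads the Windows 7 loader.
# (hd0,1) is assumed to be the partition containing the Windows boot
# files -- adjust to your layout.
menuentry "Windows 7" {
    insmod ntfs          # load NTFS support to read that partition
    set root=(hd0,1)     # first disk, first partition (GRUB 2 numbering)
    chainloader +1       # jump to the boot sector of that partition
}
```

Note that the chainloader points at whichever partition actually contains the Windows boot files; if you erase the partition that currently holds only the loader, those files must first be restored to the Windows partition (for example, with a Windows 7 repair disc) before such an entry will boot Windows.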
# **FLIGHTS OF NO RETURN**
Aviation History's Most Infamous One-Way Tickets to Immortality
**STEVEN A. RUFFIN**
## CONTENTS
**PREFACE**
**ACKNOWLEDGMENTS**
**INTRODUCTION**
**PROLOGUE: FIRST TO FALL**
SECTION I
**WHEN LUCK RUNS OUT**
CHAPTER ONE
**A DEADLY SUNDAY DRIVE**
CHAPTER TWO
**DISAPPEARANCE OF THE MIDNIGHT GHOSTS**
CHAPTER THREE
**THE SPARK THAT ENDED AN ERA**
CHAPTER FOUR
**LADY LINDY'S FLIGHT TO ETERNITY**
CHAPTER FIVE
**THE BANDLEADER'S LAST GIG**
SECTION II
**LAPSES IN JUDGMENT**
CHAPTER SIX
**THE NIGHT CAMELOT ENDED**
CHAPTER SEVEN
**THE DEATH OF A BUTTERFLY**
CHAPTER EIGHT
**THE DAY THE BARON FLEW TOO LOW**
CHAPTER NINE
**A MADMAN'S JOURNEY TO NOWHERE**
CHAPTER TEN
**THE CONGRESSMEN WHO VANISHED**
SECTION III
**CRIMINAL AND OTHER POLITICALLY INCORRECT BEHAVIOR**
CHAPTER ELEVEN
**MAD FLIGHT TO OBLIVION**
CHAPTER TWELVE
**THE CRIME OF THE CENTURY**
CHAPTER THIRTEEN
**WHEN THE SKIES RAINED TERROR**
CHAPTER FOURTEEN
**GONE WITH THE WIND**
CHAPTER FIFTEEN
**A PLANE THAT FELL IN THE MOUNTAINS**
SECTION IV
**INTO THE TWILIGHT ZONE**
CHAPTER SIXTEEN
**A HAUNTING DISTRACTION**
CHAPTER SEVENTEEN
**LOST LADY OF THE DESERT**
CHAPTER EIGHTEEN
**THE GHOST BLIMP OF DALY CITY**
CHAPTER NINETEEN
**FATAL RENDEZVOUS WITH A UFO**
CHAPTER TWENTY
**THE CURIOUS CASE OF FLIGHT 19**
**EPILOGUE: BUT NOT LAST**
**BIBLIOGRAPHICAL REFERENCES**
**INDEX**
## **PREFACE**
I must have been a pioneer aviator in a previous life. This would explain my fascination with aviation history—and why I wrote this book about the most legendary "flights of no return." In a lifetime studying the history of flight, I have found that some of the most captivating true flying tales were those where one or more of the participants never returned. These "last flight" accounts pique my interest, not because of their grim endings, but because they so often form the basis of a more complex story.
I had two goals in writing this collection: I wanted the stories to be unique and interesting enough to appeal to readers of all types and all ages, and I wanted them to be as accurate as possible. A major challenge in achieving the first goal was deciding _which_ flights to chronicle. In more than two centuries of manned flight, the overwhelming majority of lost flights are easy to explain and involve little or no mystery: a pilot flew into bad weather and crashed into the side of a mountain, or he lost control of his aircraft and spun into the ground. All are tragic, but not necessarily compelling. However, a few of history's ill-fated flights—or the events associated with them—are fascinating in some particular way. They are mysterious, bizarre, or controversial, and they often involve someone famous. These flights provide all the adventure and drama necessary for good literature, and are therefore the ones I chose to describe.
My second goal—making this book about real historic events as accurate as possible—was equally challenging. Determining what happened before, during, and after each of these historic failed flights required a great deal of research and crosschecking. I spared no effort in describing each flight and each incident as faithfully as possible. I employed in my research as many primary sources as I could, including original documents and photos, official accident reports, maps, and first-person accounts. In the absence of these, I relied on information presented in books, articles, and other sources that were, to the best of my judgment, accurate. Only by meticulously evaluating each of these sources and comparing one to another could I feel confident of describing an event as closely as possible to the way it actually occurred.
Most of these legendary flights have been the subjects of books, articles, documentaries, websites, and movies. However, such an abundance of material is a double-edged sword, for within almost any mass of information exists an unknown quantity of _mis_ information. False "facts" are ubiquitous. They can arise, even in official documents, from typos or transcribing errors. Elsewhere, they result from unverified assertions, legends, rumors, and—too often—purposefully made fallacious claims. Although it was sometimes difficult to distinguish fact from fiction, I tried to weed out as many errors as possible. In so doing, I also eliminated some long-held misconceptions. Any untruths that may have survived are unintentional and my responsibility alone.
## **INTRODUCTION**
A celebrated millionaire—who was also the world's foremost aviator—lifted off in a small plane one clear morning in 2007 and disappeared. Was his loss an accident, or something more sinister?
The glamorous son of a beloved president took off on a hazy summer night in 1999 and plunged himself and two others into the Atlantic Ocean. Was an infamous "curse" to blame? Or did he simply make a bad decision?
In 1943, Nazi fighters deliberately shot down a civilian airliner carrying the famous movie star best known for his role in the epic movie _Gone with the Wind_. Was he the target? Or was this atrocity the result of mistaken identity?
A US Navy blimp landed one Sunday morning in 1942 in the middle of a street in Daly City, California, with no one aboard. What happened to the crew? Were they victims of enemy action, espionage—or, perhaps, even of their own government?
What is the real story behind the perplexing disappearance of Amelia Earhart? Did the Japanese execute her as a spy? Did she die of starvation and exposure on a remote Pacific island? Or did she live to a ripe old age as a housewife in New Jersey?
Some of these tragic flights terminated by accident. Others ended intentionally. A few ended for reasons that to this day remain unknown. What all the flights described in this book have in common, however, is that they are unique and compelling in their own ways, and ended unhappily for the occupants under unusual, mysterious, controversial—or, in some cases, downright spooky—circumstances.
Stories about failed flights precede manned flight itself. Perhaps the earliest ever recorded dates back to ancient Greek mythology. Daedalus and his son Icarus escaped from a fortress on the island of Crete, where they had been imprisoned, by flying from a tower using wings that Daedalus had fabricated from feathers held together by string and wax. Young Icarus ignored his father's warning not to fly too high and, in his youthful exuberance, ascended so close to the sun that it melted the wax holding his wings together. He crashed into the sea and perished.
While the story of Icarus' flight is mythical, man would eventually slip Earth's "surly bonds" and ascend into the heavens. And although the overwhelming majority of flights in aviation's magnificent history have been successful, too many have fallen victim to the unyielding demands of gravity.
The events described in this book cover the entire 230-year span of manned flight. They occurred during both war and peace, over land and sea, and in aircraft of all types: from balloons, blimps, and dirigibles to propeller-driven biplanes, triplanes, and monoplanes to jets and rocket planes. Readers will experience what the doomed occupants of these aircraft—some of them rich, famous, and/or powerful—experienced as they battled, unsuccessfully, a wide variety of deadly aerial adversaries. These include bad weather, bad judgment, enemy combatants, criminal activity, mysterious unknown forces, and a lethal dose of every aviator's nemesis: bad luck.
In the pages to come, you're about to discover true accounts of great aviation mysteries; ghosts and derelicts; aircraft—and the people in them—that seemingly vanished into thin air; political intrigue and conspiracy; an occasional sprinkling of the supernatural; and criminals who committed heinous acts in the air. All these factual events unfold here exactly as they occurred. They are aviation history and human drama at their best—fascinating, informative, poignant, and provocative.
## PROLOGUE
## **FIRST TO FALL**
**"SACRIFICES MUST BE MADE."**
### **First to Fly and First to Die**
On June 15, 1785, two pioneering French balloonists set out to make history. Early that morning, Jean-François Pilâtre de Rozier and Pierre Romain took off from the French coastal town of Boulogne-sur-Mer in a Rozière balloon—named after its inventor, Monsieur Jean-François Pilâtre de Rozier himself. To achieve buoyancy, this unique craft employed a highly incompatible hybrid of two separate gas chambers: one containing air heated in flight by an open flame suspended beneath it; and the other filled with combustible hydrogen gas. In this floating bomb, the two intrepid aeronauts hoped to be the first ever to fly across the English Channel from France to England.
De Rozier was already an aeronaut of great accomplishment. A year and a half earlier, on the afternoon of November 21, 1783, the young physicist, chemist, and inventor accompanied the Marquis François Laurent d'Arlandes to become the first humans ever to make an untethered balloon flight. On that day, they ascended to three thousand feet above the outskirts of Paris in a Montgolfier hot-air "fire balloon" built by brothers Étienne and Joseph Montgolfier. De Rozier and d'Arlandes remained aloft for twenty-five minutes and drifted a distance of more than five miles. It was only by stubborn persistence that the two men had been able to make this historic first flight. King Louis XVI considered it too dangerous for his loyal subjects, so he initially decreed that only convicted criminals could make the flight, in exchange for a pardon—if they survived. Only at the last minute did he acquiesce to de Rozier's and d'Arlandes's passionate pleas to allow them—men of status—the honor of being the first to fly.
_Mort de Pilâtre de Rozier et de Romain_ , an artistic rendering of Jean-François Pilâtre de Rozier and Pierre Romain after their balloon crashed on June 15, 1785, near Wimereux, France. They were the first humans ever to die in an aviation accident. _Library of Congress_
After this achievement, de Rozier continued to fly and, in so doing, helped establish several other early firsts in aeronautics. With his rapidly expanding résumé of accomplishments came international fame. If he could successfully complete the proposed thirty-mile flight across the forbidding English Channel, his celebrity and fortune would only increase. Two other aeronauts, Frenchman Jean-Pierre Blanchard and American expatriate Dr. John Jeffries, had managed to balloon their way eastward across the Channel, from England to France, earlier that year; however, no one had yet made the more difficult east-to-west flight against the prevailing winds.
The "Flying Man," Otto Lilienthal, in free flight after launching his glider from a German hillside. He died after his glider stalled and plunged to the ground on August 9, 1896. _Library of Congress_
German engineer and aviation pioneer, Otto Lilienthal. His groundbreaking work in aerodynamics greatly influenced others—most notably, the Wright brothers.
De Rozier's fellow flier, Romain, had an even more compelling reason to make the historic journey. Awaiting his arrival in Britain was a beautiful and wealthy young Englishwoman he intended to marry.
The two aeronauts had to wait for a favorable easterly wind to propel them across the Channel. When the conditions were finally right, they climbed into the basket suspended below the balloon and took off. They made, according to one contemporary newspaper account, "a fine appearance in the ascent, bidding fair for a prosperous voyage to England." Within minutes, the balloon had reached an altitude of several thousand feet and begun to drift westward, out over the Channel.
Soon after, however, the wind shifted and started pushing the two men and their balloon back toward the east. Before long, they were once again floating above the French landscape. Suddenly, onlookers below saw the balloon envelope collapse, and the two pioneers plunged with their deflated balloon to the earth. They fell near the town of Wimereux, three miles north of their launching point. Both died upon impact.
No one knew exactly what had gone wrong; however, the deadly proximity of the open flame and highly combustible hydrogen may have been a factor. This combination was, as fellow ballooning pioneer Jacques Charles had warned de Rozier, akin to "putting fire beside gunpowder."
De Rozier had once again made history—but not in the manner he intended. Instead, the man who made his mark as the first to fly also became the first to die. The world will always remember him and Romain as the first humans ever to die in an aviation accident, marking history's first flight of no return.
### **Winged Sacrifice**
Manned flight progressed significantly during the century following de Rozier and Romain's historic doomed flight. Lighter-than-air flight became almost commonplace—not only balloons, but also a wide variety of steerable, motorized dirigibles. Still, a few aviation pioneers began experimenting with an entirely different form of human flight, one that did not require buoyant gases for lift. One of these early visionaries was a German engineer born in 1848 named Karl Wilhelm Otto Lilienthal.
Otto Lilienthal's fascination with the concept of manned flight began at an early age. He carefully studied the flight of birds, and, while still a teenager, he began experimenting with winged gliders. From his observations, he developed his own designs, theories, and techniques. During the 1890s, his gliding feats earned him international fame—and the title Flying Man—due in part to the many photographs taken of him soaring through the air, suspended from his glider, that regularly appeared in publications throughout the world. Lilienthal's groundbreaking work with wing shapes and other principles of aerodynamics enabled him to make more than two thousand glider flights, during which he set many early flying records.
The scientifically minded engineer further bolstered his legacy by writing and publishing one of history's first aeronautical textbooks, _Der Vogelflug als Grundlage der Fliegekunst_ —later translated into English as _Birdflight as the Basis of Aviation_. Even today, well over a century later, the most recent edition of this classic is available. Given Lilienthal's flying achievements and scientific observations in the realm of winged flight, it is no surprise that many still consider him the world's first true aviator.
Otto Lilienthal achieved his many flying accomplishments in gliders of his own design, most of which were similar to today's hang gliders. He routinely made sustained and controlled flights for distances of up to several hundred feet. To get airborne, he launched from a hillside—often using an artificial slope that he constructed near his home in Lichterfelde, Germany. This knoll remains today a memorial to this important aviation pioneer.
Lilienthal's flying accomplishments were of great influence to other prospective aviators of his era, particularly two American bicycle-building brothers from Dayton, Ohio, who were also interested in manned flight. Wilbur and Orville Wright considered Lilienthal the most important aeronautical authority of the day. They carefully studied his writings, designs, and flight techniques as they worked on their own glider, which would be the precursor to the world's first true airplane, the Wright _Flyer_.
On August 9, 1896, Lilienthal flew his glider in the nearby Rhinower Hills. On his fourth flight of the day, he was soaring from a hilltop at a height of about fifty feet when a gust of wind caused his glider to stall. He tried to recover by shifting his weight, but was unable to do so in time. Crashing heavily to the ground, he sustained a fractured spine. Colleagues quickly transported him to Berlin for treatment, but he died the following day.
One of Otto Lilienthal's most quoted statements is " _Opfer müssen gebracht werden_ "—sacrifices must be made. His sacrifice made him humanity's first victim of winged flight.
### **The Airplane Joins the Club**
Based on the work of Lilienthal and others, it was only a matter of time before someone figured out how to attach an engine to a winged glider and fly it off the ground. The result would be the world's first airplane.
The first controlled and sustained manned flight of a powered, heavier-than-air machine occurred on December 17, 1903. On this day, Orville Wright made a twelve-second hop from the windswept sand dunes of Kitty Hawk, North Carolina. It was not overly impressive, but it was a first. Before day's end, he and his brother, Wilbur, would make three additional and progressively longer flights in their _Flyer_ , which they designed and built after years of painstaking experimentation.
Still, to many people, these first flights did not seem significant. Balloonists over the past 120 years had already accomplished much greater feats in the realm of human flight. Nevertheless, what the two bicycle makers had managed to do at Kitty Hawk was something special; though they did not invent flight itself, they did invent a _different kind_ of flight. The powered airplane would prove to be the way of the future. It would fly faster and higher and carry bigger payloads than any other type of flying machine; however, such exceptional performance would come at a cost.
The events of September 17, 1908, graphically illustrate this point. Orville Wright was at Fort Myer, Virginia, demonstrating his and Wilbur's newest flying machine, the Wright Model A, to the US Army. It had been nearly five years since their first flight, and they had yet to sell the US government on the concept of the airplane. However, Wilbur had begun demonstrating their flying machine in Europe at the famous automobile racetrack outside of Le Mans, France, and after only one flight had instantly converted the Europeans' widespread skepticism to wild enthusiasm. Now Orville had to make the case in his own country.
The Wright Model A at the US Army flight trials, Fort Myer, Virginia. During the first two weeks of September 1908, this machine—piloted by Orville Wright—would break most of the existing airplane flight records. Despite such success, the trials would end in disaster. _NASA_
Orville Wright flying his Wright Model A over the parade ground at Fort Myer, September 1908. _US Air Force_
On September 3, flying from the parade field at Fort Myer, Orville began putting his Model A through its paces. If successful, he would earn a $25,000 US Army contract for one of their airplanes. Enthusiastic crowds and skeptical US Army evaluators watched intently. They were not disappointed. Over the next two weeks, Orville broke—and broke yet again—several world records. On one flight, he managed to stay airborne for an incredible seventy-five minutes.
The army had several requirements for its first airplane. First, it had to be capable of carrying a passenger. On September 9, Orville took up Lt. Frank P. Lahm for a short hop, and three days later he did the same for Major George O. Squier. Shortly after 5:00 on the afternoon of September 17, it was Lt. Thomas E. Selfridge's turn.
The twenty-six-year-old Selfridge was a 1903 West Point graduate and already a rising star in the US Army. He was also anything but a neophyte when it came to aviation. After being assigned to the army's newly formed Aeronautical Division of the Signal Corps, he had learned to fly dirigibles, making him one of only three with this qualification in the US Army. In addition, the army had assigned him to work with a civilian aeronautical research group known as the Aerial Experiment Association, or AEA. Alexander Graham Bell, best known as the inventor of the first practical telephone, was the driving force behind this nonprofit scientific organization dedicated to building a "practical aeroplane" capable of carrying passengers. Over the preceding few months, Selfridge had completed several flights in aircraft that he and fellow AEA members had designed and built—making him the first member of the US military ever to solo an airplane. In view of Selfridge's unique expertise in the realm of aeronautics, it is not surprising that he was among the assigned army observers at the Wright airplane trials.
The suspicious and highly secretive Wright brothers viewed Bell and his AEA as a competitor—and, therefore, the enemy. Consequently, they had no great love for Lieutenant Selfridge either. Orville believed the young officer's real intent during the Fort Myer flight demonstration was to steal their secrets and use them for his and the AEA's own purposes. This led Orville to write to Wilbur during the Fort Myer trials, "I will be glad to have Selfridge out of the way. I don't trust him an inch." He would soon have reason to regret making that statement.
The September 17, 1908, crash that killed Lt. Thomas E. Selfridge. Orville Wright was piloting the Wright Model A over Fort Myer, when a propeller tip broke and sent the airplane out of control. _National Museum of the US Air Force_
The broken propeller tip that led to the death of Lt. Thomas E. Selfridge. It is displayed at the National Museum of the US Air Force. _Steven A. Ruffin_
Orville's takeoff that afternoon was uneventful, as Selfridge, seated next to him, waved to friends in the crowd of two thousand spectators. The Wright Model A climbed to about 150 feet and began flying circuits over the field. After a few minutes, however, something went terribly wrong; it was later determined that a propeller tip broke. It was the catalyst for a catastrophic series of events that rendered the airplane completely uncontrollable; it suddenly pitched downward and crashed headlong to the earth. Selfridge, according to Orville, only had time to utter a nearly inaudible "uh-oh."
The stunned group of onlookers made a wild dash across the parade field toward the crash, while mounted cavalry did their best to hold back the mob. A cloud of dust hovered over the wreckage as rescuers extricated the two men from it. Orville, face bloodied, was conscious but badly hurt with a fractured femur, broken ribs, and other injuries. Selfridge was even less fortunate. Those in attendance carried him from the field, unconscious and suffering from a fractured skull. He died three hours later. The crash occurred mere feet outside the western perimeter of Arlington National Cemetery, where Selfridge's body would soon lie.
September 17, 1908, was a day of tragedy and a day of firsts. On that afternoon, Lt. Thomas E. Selfridge had the unfortunate distinction of becoming the first person ever to die in an airplane accident—with none other than the world's first pilot at the controls.
The fateful last flights of de Rozier and Romain, Lilienthal, and Selfridge were only the first of many. Aeronautical science continued to advance in leaps and bounds—all the way to the moon and back—but always at a heavy cost. Albert Einstein once said, "Failure is success in progress." If so, then aviation's many sacrifices have greatly contributed to the advancement of aeronautics. This did not make them any less tragic for those fliers destined never to return.
Plaque dedicated to Lt. Thomas E. Selfridge. It is located on the parade ground at Fort Myer, Virginia, near where he died on September 17, 1908. He was the first military officer ever to pilot an airplane solo, and the first person ever to die in an airplane crash. _Steven A. Ruffin_
## CHAPTER ONE
## **A DEADLY SUNDAY DRIVE**
**"A MIND-BLOWING THING"**
On the morning of September 3, 2007, a sixty-three-year-old pilot took off on a routine local flight from a desert airstrip in western Nevada. The terrain over which he flew was familiar and the weather was perfect. His plane was a single-engine Bellanca Super Decathlon—a safe and extremely rugged little two-seater—and it was in good operating condition. After lifting off and heading south, man and machine gradually faded into the distance, never to return.
The pilot of the small plane was not, however, just any pilot. He was James Stephen Fossett—millionaire, sailor, and all-around adventurer. As the holder of nearly one hundred aviation world records, he was also one of the most famous and accomplished aviators of all time.
### **Record Setter Supreme**
His biography reads like an adventure novel. After graduating from Stanford and earning an MBA from Washington University, Fossett became wealthy selling commodities futures. Although a phenomenally successful business tycoon and self-made millionaire—more than enough to satisfy most high achievers—he had many other interests. Not surprisingly, he applied the same tenacity to his hobbies. During his life he managed to establish an incredible 115 world records in five different sports. Remarkably, he established most of these records _after_ he had reached the age of fifty.
Fossett's range of interests and accomplishments was extraordinary. He was a world-renowned sailor, having set an astounding twenty-three world records, many of which are still standing as of 2015. Perhaps his best-known feats were his transatlantic speed record in 2001 and around-the-world record in 2004. For his many sailing contributions, he won a number of prestigious awards, including recognition by the World Sailing Speed Record Council as "the world's most accomplished speed sailor."
He was also a competitive cross-country skier, Le Mans racecar driver, marathon runner, and Ironman triathlete. As a mountain climber, he had conquered both the Matterhorn and Mount Kilimanjaro. In addition, he competed in—and completed—the ultragrueling 1,165-mile Iditarod Trail Sled Dog Race; and on his fourth attempt, Fossett became the 270th person in history to swim the icy-cold English Channel.
For all of Fossett's accomplishments, his most significant occurred in aviation, placing him in a lofty class all his own. A thorough examination of the data published by the international keeper of aviation records, the _Fédération Aéronautique Internationale_ (FAI), reveals the magnitude of Fossett's achievements. During his relatively short active career, he set nearly a hundred world records for altitude, distance, duration, and speed in an unparalleled four different classes of aircraft: balloons, gliders, airships, and airplanes. This made him the first person in history to set world flight records in more than a single category. The most significant of these were major feats of endurance and skill that will be difficult to eclipse.
Fossett's aviation milestones are so numerous that it would be impossible to do them justice in one short chapter; however, a passing mention of the most important ones highlights the fateful irony that characterized the improbable end of his remarkable life.
**Balloon** — During the sixteen-day period from June 19 to July 4, 2002, Fossett became the first person in history to fly a hot-air balloon solo around the world. This flight, made in the Rozière-type craft named _Bud Light Spirit of Freedom_ , was a testament to his amazing perseverance, having failed on five previous attempts—in one case plunging into the shark-infested Coral Sea after his balloon ruptured during a thunderstorm. During his precedent-setting flight, he also set a balloon speed record, covering 3,186.8 miles during June 30 and July 1 for an average speed of 133 miles per hour. At one point, he rode the high-altitude winds to a top speed of 200 miles per hour, the fastest any human had ever flown in a balloon.
**Airship** — On October 27, 2004, along with copilot Hans-Paul Stroehle, Fossett set an FAI "absolute" airship speed record of 115 kilometers per hour (71.5 miles per hour). They accomplished this in a Zeppelin NT semirigid airship—appropriately enough, at Friedrichshafen, Germany, where history's first Zeppelin airship took flight on July 2, 1900.
**Glider** — From December 2002 to December 2004, Fossett and copilot Terry Delore dominated international glider competition, setting ten out of a possible twenty-one world speed and distance records. On August 30, 2006, Fossett was at it again, this time with copilot Einar Enevoldson, when they rode Argentinean Andes mountain waves to an unprecedented altitude for unpowered gliders of 50,699 feet—nearly ten miles above sea level. It was a world record Fossett had been pursuing for the previous five years on three different continents—another testament to his tenacity.
**Fixed-Wing** — Undoubtedly the greatest of all Fossett's many record-shattering achievements occurred while flying the jet-powered Virgin Atlantic GlobalFlyer. Between February 2005 and March 2006, he flew this unique Burt Rutan–designed craft nonstop and unrefueled solo around the world—not once, but on three different occasions.
Steve Fossett landing the Virgin Atlantic GlobalFlyer at NASA's Kennedy Space Center on January 12, 2006. On February 8, Fossett took off from here to make history's longest nonstop flight. _NASA_
The first flight began on February 28, 2005, when Fossett took off from Salina, Kansas. A little more than sixty-seven hours later, he touched down once again at Salina, having circumnavigated the earth for a total of 22,936 miles. Not only was he the first person ever to fly nonstop around the world solo, his average speed of 342.2 miles per hour during this world-record flight qualified it as history's fastest nonstop round-the-world flight.
Fossett's second solo circumnavigation in the GlobalFlyer took place a year later, from February 8 to 11, 2006, beginning at Kennedy Space Center, Florida. This time, however, after completing the first circuit of the earth, he continued east to Bournemouth, England. Here, he landed after flying nonstop and unrefueled for an unprecedented seventy-six hours, forty-two minutes, and fifty-five seconds, having covered a distance of 25,767 miles. This was—and still is—history's longest nonstop aircraft flight.
Finally, a month later, on March 14–17, Fossett completed the GlobalFlyer's record-setting trilogy. Again, he started and ended on the 12,300-foot runway at Salina, Kansas. This time, he flew a total of 25,294 miles, claiming the world record in the FAI category of "absolute distance over a closed circuit."
Fossett's three GlobalFlyer flights set three of the possible seven "absolute" records kept by the FAI: distance without landing, distance over a closed circuit, and speed around the world nonstop and nonrefueled.
These achievements and more marked him as one of history's greatest aviators. He received nearly all of the most prestigious awards available, including the Harmon trophy, the Gold Medal of the _Fédération Aéronautique Internationale_ , and induction into both the Balloon and Airship and the National Aviation Halls of Fame. He had more than earned his way into the ultraexclusive club reserved for such icons as Lindbergh, Doolittle, Yeager, and Armstrong.
So how could a pilot of such incredible skill go out for a joyride on a clear day and never return? It was akin to a NASCAR driver dying in a traffic accident on the way to the grocery store.
### **The Search**
Fossett departed at around 8:30 a.m. on September 3, 2007, from a private airstrip on a western Nevada desert plain, adjacent to the Flying M Ranch. This hunting club for sportsmen, located about seventy-five miles southeast of Reno, was a sort of private resort for celebrities and high-profile aviators. The wealthy aviation enthusiast William Barron Hilton—of the Hilton hotel chain—owned and operated both ranch and airport. The world's most famous pilots often gathered there to enjoy good food, good company, and—best of all—the unique collection of aircraft that Hilton maintained. It was a pilot's paradise for those few with the "right stuff" to merit an invitation.
Steve Fossett seated in the Virgin Atlantic GlobalFlyer at Kennedy Space Center's Shuttle Landing Facility, February 8, 2006. He is about to take off on his second solo, nonstop circumnavigation of the world without refueling. He will fly for more than three days straight and a distance of 25,767 miles to make history's longest nonstop aircraft flight. _NASA_
Steve Fossett before his historic round-the-world flight. _NASA_
A Super Decathlon sport aircraft similar to the one Steve Fossett was flying when he disappeared on September 3, 2007. _Adrian Pingstone_
By noon, there had been no word from Fossett, and the chief pilot of the Flying M was concerned; he had expected him back by midmorning. Eventually, it became obvious that the Decathlon must be out of fuel and no longer flying. Fossett probably landed somewhere and needed a ride home. That he could have crashed seemed inconceivable.
A search began immediately, with every expectation of quickly finding Fossett unharmed. It soon expanded into the largest rescue effort ever conducted for a single person in the United States, involving Civil Air Patrol search planes from at least a half dozen states, US military aircraft, and numerous private aircraft from all over western Nevada and eastern California. In addition, several of Fossett's well-known pilot friends responded to the call to form what the press dubbed the "Flying M Air Force." There were also several ground search and rescue teams in all-terrain vehicles combing a rugged search area twice the size of Massachusetts. Divers even probed the murky depths of nearby Walker Lake.
The search incorporated others besides those actually hunting for the missing pilot. Experts analyzed radar tracking images from the morning of Fossett's flight for clues, and thousands of Internet users from around the world—who had been following the constant stream of news updates—used satellite imagery to look for some trace of Fossett's downed airplane. Someone even thought to consult a psychic. In short, the search was unparalleled in scope; it could only be a matter of time before the missing millionaire superpilot would turn up—hopefully alive and well.
Days turned into weeks, however, and still there was nothing. Searchers were baffled at their inability to find even a trace of Fossett's airplane, a situation reminiscent of the search for Amelia Earhart seventy years earlier. The hunt was so thorough that it turned up several other lost aircraft from the past, but nothing of Fossett. Where could he be? And why was his Emergency Locator Transmitter (ELT) not emitting a signal that would lead searchers to him? Even if he had been unable to turn it on, the force of any impact should have automatically activated it. It seemed as though the earth had snatched him from the sky and rendered him and the Decathlon invisible. It was, as Fossett's friend and autobiography co-author, Will Hasley, described it, "a mind-blowing thing."
In February 2008, five months after he disappeared, a judge declared Fossett legally dead. It provided the official and legal closure the family needed, but emotional closure was still months away. The baffling disappearance of the famed aviator, who could not be recovered in spite of one of the most extensive searches in history, gave rise to the inevitable theories and rumors that began to dominate the blogs and tabloids. Perhaps Fossett was just another victim of the so-called "Nevada Triangle," a wild area that has claimed uncounted gold prospectors, hikers, and airmen without leaving a trace. As evidence of this area's remote and rugged nature, the body of a World War II flier had only recently turned up there after being missing for sixty years.
Or was there a more sinister explanation? Various sources alleged that Fossett had faked his own death to escape personal or financial problems; escaped to Argentina to reunite with an illegitimate son; and faked a crash to provide cover for the US government to search for a lost nuclear warhead. The fact that Fossett was not only a man of unlimited resources, but also a recognized survival expert, further fueled this wild speculation. For him, anything was possible. However, there was not a shred of hard evidence on which to base any of these theories. Still, the question remained: why had the massive search failed to turn up a single trace of the lost airman?
### **Mystery Solved... But More Questions**
Finally, on September 29, 2008—almost thirteen months after Fossett's disappearance—a man hiking near Mammoth Lakes, in a remote part of the Sierra Nevada Mountains of east-central California, came upon a pilot's license and other personal effects, and a tattered bundle of hundred-dollar bills. The license bore the name of James Stephen Fossett. A new search operation quickly ensued, and the wreckage of Fossett's plane was located a half mile from where the documents were found. Bits and pieces of the Decathlon were strewn all over the mountainside at an altitude of ten thousand feet, some sixty-five miles south of the Flying M Ranch. Later DNA tests on the few available bits of human tissue that were found—inexplicably, a half mile from the crash site—confirmed that they were Fossett's. His family and friends were finally able to lay the famed pilot's remains to rest, and with them went the lurid rumors that had circulated. Appropriately, an October 3, 2008, headline in the _Times_ of London announced: FACTS RUIN OTHERWISE-GOOD STEVE FOSSETT CONSPIRACY THEORIES.
Fossett's presumed flight path of September 3, 2007. After taking off from the Flying M Ranch airstrip, he crashed some sixty-five miles to the south, high in the Sierra Nevada Mountains, near Mammoth Lakes, California.
The badly fragmented and burned wreckage quickly revealed why the ELT had not functioned: it was scattered in pieces all across the debris field. As for the airplane itself, one witness stated, "there wasn't a piece big enough to cover a coffee table." Besides lying in such an unlikely location, the airplane had been atomized into a million pieces, rendering it nearly invisible from the air. This explained why search planes had flown over it a reported nineteen times during the operation without spotting it.
The National Transportation Safety Board (NTSB), which investigates the causes of accidents, detected no obvious problem with either the pilot or the airplane. Its report on the accident, finally published nearly two years after Fossett went missing, suggested that the probable cause was "the pilot's inadvertent encounter with downdrafts that exceeded the climb capability of the airplane." In other words, a vicious wind gust may have slammed his airplane down onto the side of the mountain. However, this explanation was only the NTSB's best guess as to why Fossett's airplane crashed into the side of the mountain with such force.
The official report left certain questions unspoken and unanswered. For example, why was Fossett flying at such a high altitude, in mountains more than sixty miles from home? His small plane was hardly suited for mountain flying. The crash location was especially puzzling, as Fossett had indicated before takeoff that he intended to investigate dry lakebeds for a possible future land-speed record attempt.
Also, why, as the NTSB reported, was Fossett's seat belt unbuckled? Severe crashes may rip or burn the webbing, but they seldom unbuckle metal latches; and it is extremely doubtful that this experienced pilot was flying without it securely buckled.
Another question left open for speculation was why the meager remains of Fossett's body were found scattered a half mile away from the crash site. Did wild animals drag them there? Or did he somehow survive the impact and crawl to that spot before dying and being partially devoured by animals?
Finally, was the conjectured powerful downdraft the only factor contributing to the crash? It seems that a pilot of Fossett's experience would have known better than to tempt the notoriously vicious Sierra Nevada air currents in a small aircraft. Was there a problem with either the pilot or plane? Or could it have even been intentional? The latter seems unlikely, given that he was carrying a roll of $100 bills, but no one will ever know.
Steve Fossett was a highly gifted pilot. It is the ultimate irony that a man who had flown balloons, airships, gliders, airplanes, and jets to their very limits, and who had flown solo nonstop around the world—not just once but three times—would be destined to die on a pleasure flight his wife characterized as "a Sunday drive."
The type of airplane he was flying that fateful day is one of the safest in existence—so safe that, as the saying goes, "it can just barely kill you." However, flight, by its nature, involves a degree of danger—regardless of the pilot's skills or the airplane's safety record. Steve Fossett accepted that risk and had repeatedly cheated death. However, on that September day in 2007, he came up short—perhaps for no better reason than that his luck finally ran out.
## CHAPTER TWO
## **DISAPPEARANCE OF THE MIDNIGHT GHOSTS**
**"THE EVEREST OF AVIATION MYSTERIES"**
At 5:18 on the Sunday morning of May 8, 1927, two famed and highly skilled French aviators took off from Le Bourget Field, near Paris, France. Their destination was New York City. If successful, they would be the first ever to fly nonstop between these two cities. The airplane they were flying was an all-white open-cockpit biplane, appropriately named _l'Oiseau Blanc_ —the white bird.
The two fliers, Charles Nungesser and François Coli, lifted off to the cheers of thousands of spectators. They proceeded northwest to the coast of Normandy and then out over the English Channel. After traversing southern England and Ireland, they disappeared into the haze over the Atlantic Ocean and were never seen again.
This internationally publicized world record attempt by two of the most glamorous and widely known aviators of the day remains one of history's most memorable flights of no return. Their still-unexplained disappearance is often compared to that of famed climbers George Leigh Mallory and Andrew Irvine, who vanished on Mount Everest in 1924—it is for this reason that some call the inexplicable loss of the two aviators "the Everest of aviation mysteries." Where they ended up has never been determined (unlike Mallory, whose body turned up in 1999), but evidence exists to suggest that Nungesser and Coli may have been the first ever to fly nonstop from Paris to North America.
Charles Nungesser, the much-decorated World War I French flying ace. He was one of the lucky few high-scoring aces to survive the war.
### **Nungesser and Coli**
Charles Nungesser, born March 15, 1892, first earned international fame as a World War I fighter pilot. With forty-five confirmed kills to his credit, he was France's third-ranking ace and among the top twenty aces from all nations. He was also a much publicized, hard-partying ladies' man, rumored to have consorted with the infamous Dutch exotic dancer and presumptive German spy Mata Hari.
Above all, the flamboyant Nungesser was admired for his grit and resilience. His numerous war wounds were so extensive that his case history could have provided an entire textbook on the treatment of traumatic injuries. Long before the war ended, the indomitable Nungesser was a certified physical wreck. Yet, for all the crippling pain and physical limitations these injuries imposed upon him, he continued to fly—often having to be lifted into and out of the cockpit—and wreak havoc upon the enemy.
His injuries were matched only by his numerous medals and awards, which he wore even when flying. They jangled together as he limped heavily around the airfield, making him sound, as _French Warbirds_ author Claude W. Sykes describes it, "rather like a walking ironmonger's shop." It is no wonder the public idolized him and that his much-photographed, battle-scarred face was known worldwide.
Charles Nungesser's macabre personal insignia, as it appeared on one of his wartime fighter planes. This logo also adorned _l'Oiseau Blanc_ , the airplane in which he and François Coli attempted, unsuccessfully, to fly nonstop from Paris to New York. _US Air Force_
German pilots also recognized Nungesser from afar by the taunting macabre personal insignia he boldly exhibited on all his fighter aircraft, consisting of a skull and crossbones, candles, and a coffin, all enclosed in a big black heart. It was the brash Frenchman's challenge to enemy pilots: "Here I am—whenever you are ready to die!"
After the war, Nungesser ran a flying school, flew in exhibitions, and eventually ended up in Hollywood. There he flew as a stunt pilot and even made a cameo appearance in the 1925 silent movie _Sky Raider_. However, this was insufficient to satisfy the lust for fame, glory—and danger—that fed the oversized ego of a man like Nungesser. His opportunity to leap back into the limelight came when he was approached by an unlikely looking fellow countryman with a rotund physique, a black patch over his right eye, and a big idea.
François Coli, eleven years older than Nungesser, never achieved the same level of fame as the young ace, although he too had been a successful wartime fighter pilot. Like Nungesser, he had sustained grievous injuries, including the loss of one of his eyes in a crash. In addition to being a pilot, Coli was an accomplished navigator, having been a seafaring man before the war. After the armistice, he maintained his flying and navigational skills by participating in a series of record-setting flights. In 1919, he and fellow pilot Henri Roget set an overseas distance record by making the first double crossing of the Mediterranean Sea. Other record-setting distance flights in the Mediterranean region followed, including a 1,400-mile flight from Paris to Morocco.
### **A Reward Worth the Risk**
In 1923, Coli began planning his biggest venture yet—a nonstop transatlantic flight from Paris to New York. The flight between these two cities had become for aviators on both sides of the Atlantic the ultimate goal. This was partly because of the challenge involved with making this difficult 3,600-mile flight, and partly because of the symbolic significance of linking the two important cities by air. It was also attractive because the first to succeed would collect the $25,000 Orteig Prize. New York hotelier Raymond Orteig had made this offer—equivalent to about $330,000 in present-day dollars—for the first nonstop flight between the two cities. Prize money aside, the fame and glory resulting from this accomplishment would pay a lifetime of dividends.
The route Nungesser and Coli planned to take from Paris to New York. Their last confirmed sighting was over the Irish coast. No one knows where their flight ended, but unconfirmed witness reports indicate they may have crashed somewhere south of Newfoundland or in the wilderness of southeastern Maine.
Orteig did not specify in which direction the flight had to be made. Most pilots favored riding the prevailing westerly winds from New York to Paris, but Coli preferred to buck the headwind and fly westward. This way, he calculated, more of the final third of the flight—the most critical part—would be over land. The additional safety outweighed the disadvantage of the headwind. He would follow the great circle route, which extends in a northerly arc between Ireland and Newfoundland. This would also keep as much dry land as possible beneath him—although it would also take him well north of the major sea-lanes, where any chance for rescue would be unlikely.
Coli's first choice for a pilot to accompany him was Paul Tarascon. But when Tarascon was injured in a crash in late 1926, Charles Nungesser moved to the top of the list. The dynamic ace flier and darling of the press was the obvious choice, and the adventurous Frenchman was happy to accept the challenge.
For their attempt they chose the Levasseur PL.8, a highly modified version of the French naval long-range PL.4 reconnaissance plane. A 450-horsepower Lorraine-Dietrich twelve-cylinder engine powered this open-cockpit biplane. However, its most remarkable feature was its ability to jettison its heavily reinforced, drag-inducing landing gear soon after takeoff. This would significantly increase the speed and fuel efficiency throughout the flight, after which the airplane would land on its belly in the water and—at least theoretically—stay afloat until the two French fliers could escape and climb onto a rescuing boat. They decided against taking any radio equipment. In 1927, radios were notoriously heavy and unreliable—and besides, over the North Atlantic there would be virtually no one with whom to communicate.
Nungesser and Coli in the flying attire they wore on their last flight.
Nungesser and Coli pictured above their Levasseur PL.8, appropriately named _l'Oiseau Blanc_ —the white bird. This widely distributed postcard carried a caption translating to "Nungesser and Coli – The heroes of the flight from Paris to New York." Though truly heroes, they never arrived in New York.
The intrepid fliers planned to make the biggest splash ever when they triumphantly swooped down into New York Harbor, escorted by a formation of US Army Air Corps fighters. The Frenchmen would land their beautiful white biplane—adorned with Nungesser's signature black-hearted wartime logo—in the water at the foot of the Statue of Liberty. It was a magnificent plan. All they had to do was execute it and collect the cash.
Capturing the Orteig Prize was, however, a much more difficult proposition than simply flying across the big pond. By 1927, several fliers had already conquered the Atlantic, but the Paris–New York flight was _twice_ that distance. This meant that twice as much fuel—as well as twice as much engine and crew endurance—was required. Equally important, the additional distance doubled the possibilities for weather, navigational, and mechanical problems. These considerations aside, the ever-treacherous North Atlantic remained by far the most challenging aspect of the flight. With its storms, fog, icing conditions, unpredictable winds, and cold waters, it was a fearsome adversary.
It was dangerous enough simply being airborne in the underpowered and fuel-laden airplanes of the era. On September 21, 1926, two crew members flying with former French ace René Fonck died when their Sikorsky S-35 crashed on takeoff during a New York-to-Paris attempt. And only days before Nungesser's and Coli's attempt, US Navy pilots Noel Davis and Stanton Wooster were killed taking off in the Keystone Pathfinder they were planning to fly from New York to Paris the following week. The 3,600-mile nonstop flight between the two cities would require all that aeronautical technology had to offer in 1927. It would also require more courage—and luck—than most rational people ever have.
It was a long shot, but the potential rewards were worth the risk, so Nungesser and Coli quickly readied themselves and their airplane. Time was of the essence, since several other efforts to capture the enticing Orteig prize were in the making. Three teams were already converging on Long Island, New York, awaiting optimal weather conditions for a flight to Paris. Clarence Chamberlin led one of these and Richard Byrd, recently returned from his record-breaking flight to the North Pole, another. The third American pilot intending to attempt the New York-to-Paris flight had not yet arrived in New York. He was hardly worthy of consideration anyway, since he was by far the least likely to succeed. He was a young, unknown US Air Mail pilot crazy enough to believe he could make the near-impossible trip to Paris flying solo. His name was Charles Lindbergh, and his airplane was a small, single-engine Ryan monoplane he called _Spirit of St. Louis_.
### **Missing!**
Nungesser and Coli lifted off on the morning of May 8, 1927, a week earlier than they had previously planned; the other teams breathing down their necks had forced the Frenchmen to expedite preparations. With only eighty days to build and test _l'Oiseau Blanc_ , work progressed at breakneck speed. The rushed arrangements paid off, however, and the two Frenchmen now had the edge.
Nungesser carefully coaxed their grossly overloaded biplane into the air from Le Bourget, using a half mile of runway. The fully laden airplane weighed in at five and a half tons and carried one thousand gallons of fuel—enough for forty hours of flight. Once airborne, the pilots evaluated the airplane's performance, and when all systems checked out, released the heavy landing gear. The aircraft—now a flying boat—was instantly 270 pounds lighter and several miles per hour faster. The two intrepid airmen then proceeded, accompanied by an escort of military and photographic airplanes, to the French coast. From there they headed out over the English Channel, toward their rendezvous with immortality.
Little else is known about the fateful last flight of _l'Oiseau Blanc_ , though not for lack of interest. It was the biggest event of the day, with profuse coverage in every newspaper on both sides of the Atlantic. Fans worldwide cheered on the two larger-than-life war heroes in their magnificent white biplane. Thousands of spectators had already started to converge around New York harbor in hopes of seeing history in the making. When the French newspaper _La Presse_ prematurely proclaimed Nungesser and Coli's successful arrival in New York, there was dancing in the streets all across France—that is, until the paper had to retract the story. The dancing quickly turned into riots, directed against the newspaper's sloppy journalism.
Exhaustive searches ensued on both sides of the Atlantic, and untold numbers of unsubstantiated reports from nearly every location along their route told of sightings, crashes, mysterious signals, messages in bottles, and the discovery of bodies and wreckage. But in the end, no traces of the big Levasseur or its renowned pilots emerged. The big white biplane never arrived.
It seems certain that _l'Oiseau Blanc_ made it safely across the English Channel and over the western Irish coast to the two-thousand-mile expanse of the Atlantic Ocean. What happened next is mostly conjecture. Did they encounter bad weather and crash into the sea? Or did they, as some have speculated, turn back and end up in the English Channel? Convincing evidence points to a third possibility.
### **The North American Theory**
Of the multiple theories proposed to explain the disappearance of Nungesser and Coli, one stands out as the most credible, based on evidence that emerged soon after the flight—but largely ignored and eventually forgotten. Recently, investigators have begun to put these scraps of information together, concluding that the two fliers actually reached North America.
**Newfoundland Reports** — There is no ironclad evidence to verify this claim, but plenty of documentation points to the possibility that Nungesser and Coli did reach North America. Well over a dozen credible witnesses near the eastern and southern coast of Newfoundland—an area known as the Avalon Peninsula—came forward soon after the flight to report hearing, and in a few cases actually seeing, an airplane overhead on the morning of May 9. Among these sources was the crew of a seagoing vessel, who reported seeing a white airplane matching the description of _l'Oiseau Blanc_ south of Newfoundland. A fisherman also reported hearing an airplane crash into the ocean in a dense fog near this location. For _any_ airplane to fly over such a remote area in 1927 would have been a rare and memorable event; therefore, if the witnesses truly did see one, it was in all likelihood _l'Oiseau Blanc_.
Recently, a French researcher named Bernard Decré discovered additional evidence in the US National Archives to support the Newfoundland crash theory. A pair of US Coast Guard dispatches revealed that on May 19, 1927, the Coast Guard picked up an airplane aileron in Long Island Sound's Napeague Bay. Three months later, they retrieved part of a white airplane wing floating off the coast of Norfolk, Virginia. One or both could have been from _l'Oiseau Blanc_. If so, these documents support the notion that Nungesser and Coli landed or crashed into the sea somewhere south of Newfoundland, where their airplane broke up and the two men drowned. The detached wings then drifted southward several hundred miles with the Labrador Current until finally spotted.
Why no one ever apparently made this information public is a mystery within a mystery, as is the question of what the Coast Guard did with the wreckage. Some have suggested that the US government withheld the findings to prevent casting a shadow on the American pilot who successfully completed the historic flight less than two weeks after Nungesser and Coli's attempt.
There were other allegations of conspiratorial tactics directed against the two unlucky French fliers. Soon after they disappeared, US meteorologists came under fire for allegedly giving them inaccurate weather information. Others have suggested that _l'Oiseau Blanc_ was shot down—either by the US Coast Guard or by Prohibition-era bootleggers. No one has ever proved any of these assertions.
**Witnesses in Maine** — Another possibility is that _l'Oiseau Blanc_ ended its flight in the United States. Late on the overcast afternoon of May 9, 1927, a man in a remote part of eastern Maine's Washington County claimed to have heard a sputtering airplane engine approaching from the northeast. According to him, the engine quit, followed by the sound of a crash. Unfortunately, he did not actually see the airplane in the low-hanging clouds, and he never felt sufficiently inclined to venture into the wilderness to investigate. This area was, however, very close to the two French fliers' original course, and the late-afternoon timeframe was compatible with the earlier sightings of the airplane over Newfoundland. Moreover, the sputtering engine corresponds almost exactly with when _l'Oiseau Blanc_ would have been burning its last drops of fuel.
Others in this area also reported hearing an airplane engine that day. Even more incredible, there was a report of what may have been the definitive evidence of the French fliers' arrival in North America: an old, corroded aircraft engine buried beneath the underbrush. Could it have been the big Lorraine-Dietrich from _l'Oiseau Blanc_? Sadly, the engine was never found. Subsequent searches by various aviation archeological teams have likewise failed to find any definitive traces of engine, airplane, or bodies. This apparent dead end is where the search for _l'Oiseau Blanc_ ended in Maine.
On May 21, 1927, only twelve days after _l'Oiseau Blanc_ went missing, Charles Lindbergh beat all the odds and succeeded where Nungesser and Coli had failed. He became the first person ever to fly nonstop between New York and Paris—and he did it solo. He landed at Le Bourget, the same field from which the two Frenchmen had departed, thirty-three and a half hours after taking off from New York. Not only did "Lucky Lindy" collect the coveted Orteig Prize, he also instantly became the most celebrated man of his era and an aviation icon of a magnitude never surpassed. If only some of Lindy's legendary luck had gone to Nungesser and Coli, these honors could have been theirs. Instead, as Lindbergh himself noted in his classic autobiography, _The Spirit of St. Louis_ , the two Frenchmen simply "vanished like midnight ghosts."
Perhaps these ghosts will reappear someday. Researchers have focused the search south of Newfoundland, near the islands of Saint-Pierre and Miquelon. Their prospects of finding anything tangible after nearly nine decades are slim, but if successful, they will have found what some consider the holy grail of missing airplanes. Until that happens, the only verified surviving piece of _l'Oiseau Blanc_ in existence is the landing gear that Nungesser and Coli dropped to the ground after takeoff. It made its way back to Le Bourget and resides there at France's Air and Space Museum—a silent testimony to a valiant attempt.
This café/bar, located in the heart of Paris, is another reminder of Nungesser and Coli's ill-fated last flight. According to local legend, this establishment—operating under a different name—was a popular 1920s-era watering hole for pilots—perhaps even Nungesser and Coli. _Steven A. Ruffin_
The only verifiable piece of _l'Oiseau Blanc_ still in existence is the landing gear Nungesser and Coli dropped from it soon after takeoff. It is displayed at the French Air and Space Museum, located at the Paris—Le Bourget airport on the outskirts of Paris. The weight and drag saved by shedding the bulky gear was considerable, but not enough to ensure a successful flight. _Steven A. Ruffin_
## CHAPTER THREE
## **THE SPARK THAT ENDED AN ERA**
**"OH, THE HUMANITY!... THIS IS THE WORST THING I'VE EVER WITNESSED."**
On the rainy evening of May 6, 1937, a crowd gathered on the sprawling grounds of Naval Air Station Lakehurst, New Jersey. The reporters, photographers, and a host of other onlookers had been waiting for hours to witness the world's greatest airship, the mighty _Hindenburg_ , complete that year's first flight from Germany to the United States. One of those in attendance was a young radio announcer named Herb Morrison, whom Chicago's WLS radio station had dispatched to Lakehurst to record the event. As the majestic floating palace finally drifted into sight and slowly inched its way to the seventy-five-foot-high mooring mast, Morrison began a standard description of the approaching aerial giant. Suddenly, his voice took on a frantic tone that quickly became hysterical:
It burst into flames, and it's falling, it's crashing... Oh, my, get out of the way, please... This is one of the worst catastrophes in the world. Oh, the humanity! And all the passengers screaming around here... I–I can't talk, ladies and gentlemen... This is the worst thing I've ever witnessed.
It took just thirty-four seconds for the giant flaming airship to crash to the ground, but Morrison's emotional narrative, along with the vivid, ghastly images, ensured that it would become one of history's most memorable aviation catastrophes. After the smoke had cleared, thirty-six people were dead or dying—thirteen of the ship's thirty-six passengers, twenty-two of its crew of sixty-one, and one ground crew member. The twisted aluminum alloy skeleton of the airship lay crumpled and smoldering on the ground, after the searing flames had consumed most of the remainder of the ship. Although sixty-two of the ninety-seven aboard somehow survived the inferno of exploding hydrogen gas and burning fuel oil, the name _Hindenburg_ had become, in a single flash, another synonym for "disaster." That same flash also brought the golden age of airship travel to a fiery end.
### **The Zeppelins**
The era of the big airships began on July 2, 1900, near Friedrichshafen, Germany. Here, Ferdinand _Graf_ (Count) von Zeppelin launched his first airship, which he designated _Luftschiff_ (airship) Zeppelin 1, or LZ-1. It was—as all its LZ descendants, including the _Hindenburg_ , would be—a rigid airship, having a fabric-covered metal framework with buoyant gas cells contained within. All of Zeppelin's airships used hydrogen gas for their buoyancy, while gasoline or diesel engines driving massive propellers powered them forward. These rigid airships, also called "dirigibles" because they were powered and steerable, shared one other characteristic: they were universally colossal in size. They had to be, since a battleship-size envelope full of hydrogen was required to lift the massive flying machine into the air.
Zeppelin continued his work through the next decade, building progressively bigger, faster, and more reliable airships with ever-increasing payloads. Soon, he started the world's first airline and passenger service. Between 1910 and 1914, Zeppelin airships carried thirty-four thousand passengers all over Germany without a single injury—an incredible accomplishment.
During the First World War, Germany's military forces pressed into service the _Graf_ 's airships—or Zeppelins, as they were called. From 1914 through 1918, they flew sea reconnaissance missions and bombing raids over Britain, France, and Italy. Flying at high altitudes in the dark of the night, they indiscriminately dropped bombs on whatever—and whomever—happened to be below them. These raids were not militarily effective, but they caused civilian casualties, including more than five hundred in Britain alone. As a result, the sinister giants became one of the most feared and hated terror weapons of the war.
_Graf_ von Zeppelin died in 1917, but _Luftschiffbau_ Zeppelin continued to operate. In October 1924, the head of the company, Hugo Eckener, flew his newest airship, the 656-foot LZ-126, an unprecedented five thousand miles from Germany to Lakehurst, New Jersey. Here, he personally delivered it to the US Navy, which redesignated it the ZR-3—and christened it the USS _Los Angeles_. This outstanding airship would prove to be the Navy's most successful and longest-serving airship.
In 1928, Zeppelin launched what would become the most successful passenger airship ever built, the LZ-127 _Graf Zeppelin_. Over its nine-year period of operation, it logged 590 flights, including, in 1929, history's first passenger-carrying circumnavigation of the earth. In all, it flew more than a million miles, safely carrying thousands of passengers and hundreds of tons of mail and freight all over the world.
The Zeppelin company was so encouraged by its success with the _Graf Zeppelin_ that it began construction in 1931 of an even more spectacular airship. For its massive framework, a huge amount of the aluminum alloy, Duralumin, was required. Much of it came from the wreckage of the British airship R-101, which had crashed in France on October 5, 1930, during its first commercial flight. Little did the Germans suspect the apparently jinxed metal would once again come crashing to the ground in a ball of flames.
### **Luxury Liner Supreme**
Superstitions notwithstanding, Zeppelin's newest product was by far the mother of all airships. It took more than four years to build—but was well worth the wait. The mammoth LZ-129, christened the _Hindenburg_ , launched in 1936. It is difficult, even in today's world of massive airliners, to imagine the sheer enormity and breathtaking presence of any of the great airships, and this is especially so for the _Hindenburg_. This behemoth was a beautiful, elegant, and complex engineering masterpiece, the ultimate in aeronautical and manufacturing technology, and the world's most luxurious airliner—all rolled into one. Some have called it "the Concorde of its day," but that simply does not do it justice.
It was powered by four state-of-the-art, sixteen-cylinder Daimler-Benz diesel engines, each capable of generating up to 1,320 horsepower and driving a colossal four-bladed propeller, twenty feet in diameter. The _Hindenburg_ had unprecedented endurance, carrying up to seventy-two tons of diesel fuel in its forty-two tanks—enough to fly eight thousand miles nonstop. It was also fast. With a reported cruise speed of seventy-five miles per hour, it could nudge eighty-five at full throttle. These features allowed it to travel from Germany to the United States in less than three days—half the time it took the fastest ocean liner, which was the only other option available to passengers. At this time, the _Graf Zeppelin_ and _Hindenburg_ were the only aircraft in the world flying passengers across the Atlantic.
The Zeppelin-built Airship LZ-129 _Hindenburg_ moored at Naval Air Station Lakehurst, New Jersey.
The _Hindenburg_ 's size was unprecedented—it remains, along with its short-lived successor, the _Graf Zeppelin II_ , the largest flying machine ever airborne. It stretched an incredible 804 feet in length—nearly a sixth of a mile—and would dwarf even the largest of today's jumbo liners; its internal gas volume of 7 million cubic feet was ninety percent larger than its predecessor, the _Graf Zeppelin_ , and gave it a lifting capacity of a whopping 236 tons. Its painted linen fabric covering, if laid out on the ground, would have covered eight acres of real estate. All of this enabled the _Hindenburg_ to carry, in complete luxury, up to seventy-two well-heeled passengers, each paying about $450 in Depression-era dollars—equivalent in buying power to $7,500 today—per one-way ticket.
The ride was fast, quiet, clean, and smooth—or as newspaperman Louis Lochner described it, like being "carried in the arms of angels." The _Hindenburg_ offered the world's most beautiful panorama, as seen through the slanted observation windows on its port and starboard promenade decks; and with its private staterooms, lounge, formal dining room, reading and smoking rooms, and custom-made aluminum grand piano, passengers enjoyed a degree of splendor that rivaled even the greatest ocean liners. This massive, elegant "luxury liner of the air," as it was called, could easily have qualified as the Eighth Wonder of the World—and even today ranks near the top of the list of the most magnificent examples of technology that humanity has ever produced.
The _Hindenburg_ (left) flying in formation with a US Coast Guard Douglas RD-4 flying boat over Naval Air Station Lakehurst on May 9, 1936. The mooring mast is visible just above the massive Hangar No. 1 seen here. Less than one year later, the giant airship would lie in ruins at the base of that mast. _US Coast Guard_
Most aviation experts of the day also considered the _Hindenburg_ one of the safest aircraft ever produced. After all, the Zeppelin company had been flying passengers in its airships for nearly three decades without a single injury. The potential for disaster, however, was inherent: the _Hindenburg_ relied on highly flammable hydrogen gas for its buoyancy. The explosive potential of this gas, when allowed to combine with air, was no secret; other airships from around the world had demonstrated this vividly and tragically by exploding in flames. Zeppelin fully recognized the danger and originally designed the _Hindenburg_ to use helium—a safe, inert gas—instead of hydrogen. However, in the 1930s, helium was available only in the United States, which refused to export it because of restrictions imposed by the Helium Control Act of 1927. Despite its best efforts, Zeppelin was unable to obtain any of this precious gas. Even if they could have circumvented the law, the Germans did not help their case by emblazoning huge black Nazi swastikas on the great airship's vertical tail fins. Highlighted on a white circle surrounded by a brilliant red rectangle, they were all too visibly emblematic of the distasteful regime already ill-reputed in the United States.
The Germans thus had no choice but to fill the cells of the _Hindenburg_ with the only buoyant gas they could get: hydrogen. Though not as safe as helium, it was cheaper and more buoyant. Most important, it was available.
Hugo Eckener was—to his credit—anything but a good Nazi. He had fallen into disfavor with his country's fascist leaders by naming his newest supership after Germany's beloved late President, Field Marshall Paul von Hindenburg. Nazi Propaganda Minister Joseph Goebbels had wanted to name it the _Adolf Hitler_ , after his _Führer_. Though Eckener prevailed over the name, he could not escape other Nazi demands for his airship—especially since the German government had helped finance its construction. The _Hindenburg_ would thus serve as an instrument of propaganda and would carry the swastika insignia of the Nazi party—and Nazi agents—everywhere it flew.
Hugo Eckener, famed airship commander and chairman of the German airship company, _Luftschiffbau_ Zeppelin. He was one of the few prominent anti-Nazi Germans to survive World War II and die of old age. _Library of Congress_
### **Tragedy at Lakehurst**
The _Hindenburg_ soon began transatlantic passenger travel. By the end of 1936, it had safely crossed the Atlantic thirty-four times. Its first 1937 flight to North America began on the evening of May 3, when it departed Frankfurt, Germany, under the command of Capt. Max Pruss. The flight was uneventful, except that strong headwinds extended the flight time an additional ten hours; then, thunderstorms passing through Lakehurst further delayed the airship's arrival. Finally, at just after 7:00 p.m. on May 6, the giant airship slowly descended and began maneuvering toward the mooring mast, located in the middle of Naval Air Station Lakehurst's vast open landing field. Shifting winds forced Pruss—by now, feeling significant pressure to get the much-delayed airship on the ground—to make two tight turns to keep it lined up. He did not want to make a second approach and cause further delay.
At about 7:25 p.m., the _Hindenburg_ dropped its mooring lines to the ground in preparation for landing. Suddenly, a tiny flame appeared just forward of the top tail fin. In seconds, explosive flames engulfed the entire tail and spread rapidly toward the front of the airship. The burning tail section crashed to the ground, tipping the giant airship almost vertically. It all came crashing to the ground "like a giant torch," as the next day's issue of the _New York Times_ put it, sending the 231 line handlers below scurrying for their lives. There had been no warning, no time for preparation, and no time to react. In thirty-four seconds, the world's greatest airship was reduced to a crumpled mass of wreckage. Within a minute and a half, the hydrogen had burned itself out, leaving nine tons of spilled diesel fuel to continue burning on the ground for hours more.
Many of those aboard perished when they jumped from the airship to escape the flames. Others died of burns, smoke inhalation, and trauma suffered during and after the crash. Several who remained aboard survived, including Capt. Pruss, who stayed at his station in the control car suspended beneath the ship. Their survival was in part thanks to the heroic efforts of rescuers on the ground.
### **Conspiracy or Accident?**
Why the world's greatest airship burst into flames at this particular moment was something that both German and American authorities very much wanted to know. No one either in the airship or on the ground had even the slightest clue as to what had gone wrong. The investigation board that convened to determine the cause eventually arrived at a conclusion, but it was no more than an educated guess. Almost eighty years later, it is still a matter of debate. Three possible explanations have received the most attention.
The _Hindenburg_, moments after exploding into flame on May 6, 1937, at Naval Air Station Lakehurst. It took only thirty-four seconds for the hydrogen-fed inferno to reduce the world's largest aircraft to a pile of smoldering ruins. This scene helped seal the fate of airship travel.
**Sabotage** — This was near the top of everyone's list. The aerial giant, which prominently displayed the Nazi colors everywhere it went, was a highly visible symbol of the world's most hated government—one with enemies everywhere, including within Germany itself. A. A. Hoehling, in his 1962 book _Who Destroyed the Hindenburg?_, contends that one of the airship's crew who died in the crash, a rigger named Eric Spehl, was associated with a woman with strong Communist and anti-Nazi connections. Hoehling speculates that Spehl, an amateur photographer, used flashbulbs and a dry cell battery to set the airship afire. This explanation gained greater public popularity when producers adopted it for the plot of the 1975 movie _The Hindenburg_. However, no one ever proved Spehl's guilt, and critics maintain that the dead rigger was little more than a convenient scapegoat. In the end, no definitive evidence of sabotage on the _Hindenburg_ ever emerged.
**Incendiary-Paint Theory** — In 1997, a former NASA scientist named Addison Bain proposed this theory. He argued that the fabric covering the _Hindenburg_'s Duralumin framework was the precipitating cause of the explosion—not the hydrogen inside. It is true that the fabric was coated with a flammable doping lacquer impregnated with iron oxide and aluminum powder. The fact that these two highly reactive substances can react explosively, and that they are often used in explosives and solid rocket fuels, has prompted some—including Bain himself—to overstate that the _Hindenburg_'s outer skin was "painted with rocket fuel." Critics argue that the amounts and ratios of these substances as used on the airship's fabric were not in any way dangerous, and that the fabric was simply not combustible enough to spontaneously ignite.
**Static Spark Theory** — This is probably the most plausible cause for the accident, and the one that both the investigation board and Eckener himself advocated. This theory holds that a spark ignited hydrogen that had leaked from one of the airship's sixteen gas chambers and combined with air to make an explosive mixture. Neither the source of the spark nor the leak was ever determined. The fact that the _Hindenburg_ was tail heavy as it approached the mooring mast suggests that some of the highly buoyant hydrogen gas may have been leaking from the rear of the airship. This would have resulted in a loss of lift there and caused the tail section to droop lower than the rest of the ship. Proponents of this theory hypothesize that the surface of the airship became electrically charged while passing through the humid air that evening. In flight this would not normally be a problem, but when the damp mooring lines contacted the rain-soaked earth, they grounded the metal frame of the airship. This caused a discharge of static electricity between the frame and skin, resulting in a spark that ignited the lethal hydrogen-air mixture. The gas leak could have resulted from an intentional hydrogen release in conjunction with the landing, a faulty gas valve, or perhaps a punctured gasbag. If the latter, some have speculated that structural wires snapped and punctured an envelope during the tight turns Capt. Pruss hurriedly made to get the airship aligned for landing.
No one will ever know whether the _Hindenburg_ crashed because of sabotage, reactive paint, a static spark igniting leaked hydrogen, or something else entirely. The German government retrieved its burnt and twisted metal skeleton and shipped it back to Germany, where it was melted down once again and used for yet another lost cause: warplanes for Adolf Hitler's growing Nazi aerial armada. The same fate also awaited the _Hindenburg_'s famous predecessor, the _Graf Zeppelin_, as well as its successor, the _Graf Zeppelin II_. Germany was, for the first time in forty years, out of the airship business—and so was the rest of the world.
The _Hindenburg_ disaster was not unique among the great airships of this era. The number of spectacular fatal airship mishaps occurring in the 1920s and 1930s is remarkable: _Akron, Dixmude, Shenandoah, Italia, Roma_, R-38/ZR-2, and R-101, to name a few. Surprisingly, the _Hindenburg_ was only the fifth most deadly—the US Navy airship _Akron_ was the worst, with 73 fatalities. However, the _Hindenburg_'s demise was the most visible. For this reason alone, it is still the best remembered—and as history would prove, the one that marked the end of the rigid airship.
The incredible size of the _Hindenburg_ is graphically demonstrated by this scale overlay diagram, which compares it to four of the largest winged aircraft ever built: the Hughes H-4 Hercules, Airbus A380-800, Boeing 747-8, and Antonov AN-225. _Clem Tillier_
The public's desire to ride in airships evaporated in the wake of the awful images and newsreels of the _Hindenburg_'s fiery crash. Besides, fixed-wing aircraft development had progressed to the point where large multi-engine airplanes and flying boats would soon make transoceanic air travel faster and more affordable. Thus, the great airships were no longer the future of air travel that many visionaries of the day predicted they would be, and never again would they cast their giant cloudlike shadows across the earth below.
Today, visitors at Naval Air Station Lakehurst can still see the massive Hangar No. 1 rising up above the landscape. The Navy sheltered and maintained the _Hindenburg_ in this structure during each of the German airship's ten previous visits to Lakehurst, and it is where the airship would have rested on the night of May 6, 1937. Out on the barren field in front of the hangar is a monument shaped like an airship gondola, marking the spot where the burning _Hindenburg_ crashed to the ground. A small bronze plaque there reads: ON THIS SITE – MAY 6 1937 – 7:25 P.M. 36 PEOPLE PERISHED
The airship landing area at Naval Air Station Lakehurst, as it appears today. In the foreground is a memorial marking where the gondola of the doomed _Hindenburg_ crashed to the ground. The large building in the background is Hangar No. 1, where the giant airship was sheltered and maintained during its visits to Lakehurst. _Steven A. Ruffin_
## CHAPTER FOUR
## **LADY LINDY'S FLIGHT TO ETERNITY**
**"I HAVE A FEELING THAT THERE IS JUST ABOUT ONE MORE GOOD FLIGHT LEFT IN MY SYSTEM..."**
On July 2, 1937, a twin-engine Lockheed Model 10E Electra took off from the airport at Lae, a city located on the eastern coast of New Guinea. The two crew members aboard—pilot and navigator—were aiming for a tiny uninhabited coral island in the middle of the vast central Pacific Ocean, 2,556 miles to the east. The flight, which roughly followed the line of the equator, was the longest leg of what the two fliers intended to be a record-breaking circumnavigation of the globe. Yet the airplane never arrived at its destination, and no one to this day knows what happened to it or its occupants.
More than three-quarters of a century later, this disappearance is as much a mystery as it was in 1937. It is also just as intriguing, for the pilot of the Electra was famed American pilot Amelia Earhart. Since her disappearance, dozens of books and films, and hundreds of articles have chronicled the last flight of the world's most famous aviator—and, at this writing, a Google search for "Amelia Earhart" produces more than 1.6 million entries. Clearly, her fate continues to be one of history's greatest and most compelling mysteries.
### **Lady Lindy**
They called her "Lady Lindy," and with good reason. Her unparalleled flying achievements qualified her as the counterpart to the world's most famous male pilot—the great Charles A. Lindbergh—who in 1927 became the first pilot to fly solo from New York to Paris. The two famed aviators even bore a striking physical resemblance to one another, making comparisons between them almost unavoidable.
Earhart was born in Atchison, Kansas, on July 24, 1897. After learning to fly in 1921, she bought her first airplane, a Kinner Airster. Within the next few years, she set nearly a dozen major aviation records, gaining her international acclaim and universal recognition.
Amelia Earhart was the rock star of her era. Wherever and whenever she appeared in public, crowds of adoring autograph seekers, photographers, and newshounds flocked around her. She was impossible to miss, since her face appeared nearly everywhere—in newsreels, newspapers, and magazines—almost on a daily basis. She inspired and endorsed product lines of clothing and luggage, and stylish women around the world tried their best to imitate her tomboyish "mop head" look. She was a bestselling author, an enthusiastic aviation advocate, and a vocal activist for women's rights. She hobnobbed regularly with the world's elite and was wined, dined, and courted on a regular basis by the rich and powerful. Eventually, she married her wealthy publisher, publicist, and promoter, George P. Putnam. He adored her and doted on her every whim, while his publicity machine made sure to keep the world up to date on her exploits. In just about every respect, "AE" had a star quality equal to that of the most glamorous screen idols of the day.
Earhart came by her fame honestly. Before the age of forty, she had accumulated a long list of flying accomplishments matched by no other woman—and only a few men—before or since. In June 1928, she became the first woman to cross the Atlantic Ocean in an airplane. However, she did not make the crossing as a pilot, but rather as a passenger... or as she put it later, "just baggage, like a sack of potatoes." On this flight, she rode from Newfoundland to Wales in the back of the Fokker F.VII trimotor _Friendship_, crewed by pilot Wilmer Stultz and navigator/mechanic Louis Gordon. Although Earhart's contribution to this successful crossing was slight, it was an important first that earned her international fame and a White House reception with President Calvin Coolidge. Amelia Earhart was now in the national spotlight—and on the radar for much bigger things to come.
In 1929, she placed third in the First Women's Air Derby, and in 1930 she set several women's speed records. On May 20, 1932, she once again crossed the Atlantic Ocean—this time as a pilot, flying solo. She took off from Harbour Grace, Newfoundland, in her all-red Lockheed Vega single-engine monoplane and landed fifteen hours later in a field in Northern Ireland. Later that same year, she became the first female pilot to fly solo across the North American continent and back—in the process, setting a women's transcontinental speed record. These feats earned her the Gimbel award as the "Most Outstanding Woman of America for 1932" and her election as the first president of the women's aviation group, the Ninety-Nines.
An informal shot of the photogenic Amelia Earhart, as she walks past her Lockheed Electra.
Earhart set yet another major milestone in January 1935, again flying her Vega. She became the first pilot—male or female—to fly solo from Honolulu, Hawaii, to Oakland, California. Three months later, she added her name once again to the record books by becoming the first person to fly from Los Angeles to Mexico City; and three weeks later, from Mexico City to Newark, New Jersey.
Within five years, Amelia Earhart had set a dozen speed, altitude, and distance records. She was at the top of her game.
### **Round the World**
Earhart now set her sights on even more distant horizons. Her new goal would be to fly completely around the world the longest way possible, following the equatorial route. She would complete the journey as a series of separate legs, the aggregate of which would be the longest airplane flight ever. For this, she would need the additional safety and power of a twin-engine airplane, the best possible navigational and radio equipment, and a top-notch navigator. She would also have to step up her own skills as a pilot.
Amelia Earhart was the most famous and accomplished female pilot in the world, but she was not necessarily the best, technically speaking. Some historians have suggested she was only average in ability and that her fame came mostly from her husband's promotional efforts. Over the course of her flying career, she had experienced several mishaps, and on a few occasions displayed a lack of either practical knowledge or judgment. In fairness to her, these accidents—all minor—were probably more or less par for anyone who flew as much as Earhart did during this pioneering era of aviation; nevertheless, they provided fodder for those who questioned her abilities as a pilot.
Amelia Earhart posing in front of her Lockheed 10E Electra. It was one of the most advanced civilian airplanes of its day. _NASA_
Earhart's Lockheed Model 10E Electra at Ford Island, Oahu, on March 20, 1937. Later that day, she lost control of it while taking off on the second leg of her first round-the-world attempt. She would attempt another circumnavigation two months later, with even worse consequences.
Amelia Earhart at Wheeler Field, Oahu, Hawaii, March 18, 1937. She had just completed the initial leg of her first round-the-world attempt.
The most famous of her mishaps occurred during her first unsuccessful attempt to fly around the world. She began the adventure on March 17, 1937, by flying without incident from Oakland, California, to Honolulu, Hawaii. The airplane she chose was a twin-engine Lockheed Electra Model 10E, one of the most powerful and advanced civilian flying machines of its day. Built to transport ten passengers, it was large enough to carry the fuel and equipment she required. With her were not one, but two, well-qualified navigators, Capt. Harry Manning (who was additionally a skilled radio operator) and Fred Noonan. Also aboard—though only for this first leg—was a high-profile technical advisor and copilot, famed Hollywood stunt pilot, Paul Mantz.
The four arrived safely at Wheeler Field early on the morning of March 18. Two days later, as Earhart and her two navigators attempted to take off from Ford Island's Luke Field on the second leg of their journey, the big, overloaded twin-engine Lockheed accelerated down the runway and skidded into a ground loop. The landing gear collapsed and the airplane ground to a halt amidst a shower of sparks. No one was injured, but the Electra sustained substantial damage. Once again, Earhart's competence became a topic of discussion. Whether a blown tire or other mechanical problem caused the accident or—as some, including Mantz, believed—she had simply lost control of the airplane was never conclusively established. Whatever the cause, she abandoned the attempt and shipped her airplane back to California for repairs.
Earhart's second and final shot at circumnavigating the earth began two months later. This time, instead of flying east to west, she would fly from west to east, hoping to optimize prevailing weather patterns. She would also carry only one navigator, Fred Noonan. Captain Manning might have been Earhart's first choice, but a three-month leave of absence from his job had expired. In addition, as he later confided to friends, he had lost confidence in Earhart's piloting skills. Noonan, though rumored to have a serious drinking problem, was nevertheless an excellent navigator and well qualified for the task.
Earhart and Noonan began their journey on May 21 by flying an unannounced first leg from Oakland, California, to Miami, Florida. Before leaving Miami, Earhart publicly stated, "I have a feeling that there is just about one more good flight left in my system, and I hope this is it."
Earhart's wrecked Electra after she lost control while taking off from Ford Island's Luke Field. Before day's end, Amelia was on a ship heading back to California. This accident caused some to question her competence as a pilot.
Flying more than twenty legs over the next month, with stops on the continents of South America, Africa, Asia, and Australia, they finally landed at Lae, New Guinea, on June 29, 1937. They had flown twenty-two thousand miles from one end of the earth to the other, and although they had "only" seven thousand more to go, they were exhausted. They also knew that the remaining miles would be the toughest—nearly all of them over a remote section of the Pacific Ocean, where precise navigation would be critical, and the chances for rescue in the event they had to ditch, slim.
### **Lost at Sea**
On the morning of July 2, 1937, Earhart and Noonan lifted off from the airport at Lae. The Electra was loaded to its maximum weight with radio and navigational equipment, emergency gear, and enough fuel and oil for approximately twenty hours of flight. After straining to become airborne, they pointed the sleek silver airplane towards their next refueling stop. It was a dot in a sea of blue on Noonan's map so small he could hardly see it: Howland Island.
The charismatic Amelia Earhart always stood out in a crowd, as she does here on November 5, 1928, during her tour of Langley Research Center in Hampton, Virginia. This was only five months after her historic transatlantic flight with Wilmer Stultz and Louis Gordon. Although she was only a passenger on this flight, it gave her the distinction of being the first woman to fly across the Atlantic Ocean. As evidenced by her VIP reception on this visit, it also provided her an important voice in aeronautical development. _NASA_
Earhart seated in the cockpit of her Electra, in which she and navigator Fred Noonan disappeared. They took off from Lae, New Guinea, on July 2, 1937, en route to Howland Island, but never arrived. _Library of Congress_
This tiny flat strip of land, which barely protrudes from the surface of the Pacific, is less than a square mile in size—just big enough to land an airplane. It is 2,556 miles northeast of New Guinea, and situated all alone in the vast ocean with few visible reference points around it. Finding it using the conventional navigational techniques that existed in 1937 was a daunting task. On a flight this long over open sea, where checkpoints were virtually nonexistent, celestial navigation was essential; even the tiniest error could translate into a miss of several miles, enough to be catastrophic. Earhart's and Noonan's lives depended on locating Howland Island, and there was no margin for error.
Both pilot and navigator fully realized the importance—and difficulty—of locating Howland. Consequently, they had taken extra measures to ensure it did not slip past them. Earhart had pulled strings to get the US Navy to position ships in the Pacific Ocean to assist, if needed, and the US Coast Guard cutter _Itasca_ was stationed close to Howland to transmit radio signals to guide her to her destination. In addition, she had installed in the Electra specialized radio equipment to improve communications.
The approximate course Earhart and Noonan followed from Lae, New Guinea, to Howland Island. The 2,556-mile journey was a difficult navigational undertaking. They may have ended up near Gardner Island or somewhere in the Marshall Islands, but no one yet knows for sure.
At about eight hundred miles into the flight, they overflew the Nukumanu Islands and radioed their last known position; soon after, the weather turned sour and further communications became sporadic. Eighteen hours into the flight, Earhart and Noonan approached where they thought Howland Island should be, but nothing appeared anywhere on the horizon.
By this time, the Electra was low on fuel and both pilot and navigator were undoubtedly bone tired. After eighteen hours of flying with no rest, it is nearly certain that they were not at full mental capacity at this most critical point in the flight—and it did not help that neither was particularly skilled in the relatively new art of radio navigation. Consequently, things quickly began to unravel for the two fliers. For some reason, Amelia apparently did not receive most of _Itasca_'s radio transmissions to her, even though some of hers reached _Itasca_ loud and clear. As far as she and Noonan could tell, they were transmitting into thin air.
The growing concern Earhart was feeling was apparent in her 7:42 a.m. transmission: "We must be on you, but cannot see you—but gas is running low. Have been unable to reach you by radio. We are flying at one thousand feet." A few minutes later, _Itasca_ received its strongest transmission from her, indicating she was close—very close. In response, _Itasca_ sent out signals to allow her to take a radio bearing. She received them but was unable to pinpoint the direction from which they came.
Her last intelligible transmission came at 8:43 a.m.: "We are on the line 157-337. We will repeat message. We will repeat this on 6210 kilocycles. Wait." Earhart and Noonan apparently believed they had drifted to either the north or south of Howland and had decided to turn and fly a north-south course perpendicular to their original one, hoping to intercept the island. _Itasca_ operators, however, had no way of knowing exactly where the 157-337-degree vector that they were flying was located, so they were unable to determine the Lockheed's location. The men of _Itasca_ sent up a smoke signal, desperately hoping the lost fliers might see it, but to no avail. When it became obvious that the Electra's fuel was exhausted, the Coast Guard officially listed them as lost at sea.
Newspaper headlines screamed, AMELIA EARHART MISSING. The loss of America's flying sweetheart dominated the news and conversations throughout the country for weeks to come. One of Amelia's more influential friends and admirers, President Franklin D. Roosevelt, immediately ordered a massive coordinated US Navy and Coast Guard search, consisting of nine ships, sixty-six aircraft, and four thousand men; however, no trace of them or their silver bird was found anywhere within the 250,000-square-mile search area. No wreckage or oil slicks ever appeared, and though radio operators from various locations picked up several SOS calls, they were unable to verify that any of them actually came from Earhart. George Putnam continued the search at his own expense after the Navy and Coast Guard called it quits on July 18, but he, too, eventually gave up. Amelia Earhart was declared legally dead on January 5, 1939.
### **A Debatable Disappearance**
So, what went wrong on Amelia Earhart's final flight? Some have suggested that her high-tech radio equipment did not live up to expectations—or that she and Noonan simply did not fully understand how to use it. The frequencies they chose may not have been the best for direction finding, and they may not have had the low-frequency equipment they really needed for _Itasca_ to get an accurate fix on her. Others contend that she had removed a critical antenna—accidentally or on purpose—before the flight. It is also possible that Earhart had not properly coordinated with the _Itasca_ the time zone and frequencies she would be using, thus further confusing communications between the two. Whatever the cause, the radio navigation system the two fliers relied upon to find Howland Island simply failed.
Obviously, there were also navigational problems. Perhaps undetected wind changes or a minor miscalculation threw the Electra off course, and the overcast conditions they encountered most likely prevented Noonan from taking accurate celestial fixes. Researcher Elgen Long, in his book _Amelia Earhart: The Mystery Solved_ , has even suggested that the particular map they were using was inaccurate. He contends that it showed Howland Island at a position several miles from its actual location. This alone could have been enough to seal their fate, even if Noonan's navigation had been perfect.
Many theories have emerged over the years, attempting to explain the fate of the two lost fliers. Most have elements of credibility, but none has yet been proven—or disproven.
**"Crash and Sank"** — The most widely held belief is that Earhart and Noonan searched for Howland Island until they ran out of fuel and crashed into the Pacific. Some theorize that after passing the Nukumanu Islands, the two got on a wrong heading that took them as much as one hundred miles northwest of Howland Island—much too far away from the tiny island to locate it visually. When they decided they had flown far enough east, they flew back and forth to the north and the south looking for the island that was just not there.
**Castaway** — Ric Gillespie of the International Group for Historic Aircraft Recovery (TIGHAR) has spent several years and many resources searching for Earhart. He has theorized that she and Noonan managed to land safely on an uninhabited island they came upon while searching to the north and south of their primary heading. Here, they apparently died while awaiting rescue, after which giant coconut crabs may have partially or completely consumed their bodies, leaving little for searchers to find. The SOS calls reported from various sources after the two fliers went missing support this supposition. Since the origin of these calls seemed to converge around Gardner Island (now called Nikumaroro), located about four hundred miles southeast of Howland, this could be where they ended up. Searches of the island from 1940 to the present have revealed metal scraps that may or may not have come from the Electra, a 1930s-era cosmetic cream jar and other artifacts, and even human bone fragments. In 2013, TIGHAR released sonar images they recorded off the coast of Nikumaroro that might be of Earhart's Electra, and in 2014, they claimed that a piece of metal found on Nikumaroro definitely came from Earhart's Electra. As of this writing, however, no one has yet provided definitive proof that any of these findings are linked to Earhart.
**Japanese Prisoners** — One of the most intriguing hypotheses that has survived over the years is the idea that Earhart and Noonan landed somewhere in the Marshall Islands and were imprisoned by Japanese military authorities. Fred Goerner, in his book _The Search for Amelia Earhart_, as well as other researchers, presented evidence from eyewitnesses who reported seeing the lost fliers land. According to them, they were taken into custody near Mili Atoll, located several hundred miles northwest of Howland. One witness even claimed to have treated the injuries of a male and female American aviator, the latter of which went by the name Amelia. Other witnesses place the two fliers in a prison on Saipan during World War II. One former US Marine even claimed to have found Earhart's passport in a Japanese military safe on Saipan, although it later mysteriously disappeared. Most of these witnesses agree that the ultimate fate of the two was death, either by execution or from disease. Goerner even alleged that Fleet Admiral Chester W. Nimitz, Commander in Chief of the US Navy in the Pacific, privately admitted that the Japanese really had captured the two fliers in the Marshall Islands. If this is true, however, then why all the secrecy?
**Earhart, the Spy** — Perhaps there was a reason for the Navy's apparent reluctance to discuss the disappearance of Earhart other than avoiding bad press. An even more bizarre version of the castaway theory, as has been proposed by several different researchers, contends that Earhart and Noonan's flight was actually a cover for a spy mission they were conducting for the US government, possibly in exchange for funding and logistical support. They may have even agreed to make an "emergency" landing in the Japanese-held Marshall Islands, in order to give the US Navy an excuse to come to her rescue. This would afford them the opportunity to gather as much intelligence as possible on the increasingly hostile Japanese presence. Instead of the planned rescue, however, the two were arrested as spies. Randall Brink, in his book _Lost Star: The Search for Amelia Earhart_, even gives credence to the rumor that Earhart broadcast Tokyo Rose–like anti-American propaganda during the war. The US Navy did ask the Japanese for permission to search the Marshall Islands, but never received it. Little else of this theory can be substantiated, although Earhart's own mother believed her daughter died while on a secret mission for the US government.
**Alive and Well in New Jersey** — As if the cloak-and-dagger mystery of Amelia Earhart's disappearance was not already far-fetched enough, there was yet another twist. In 1970, authors of the book _Amelia Earhart Lives_ claimed that Earhart survived the war and returned to the United States as a New Jersey housewife named Irene Bolam. Mrs. Bolam promptly sued them for this scandalous allegation and settled the case out of court. Bolam did resemble Earhart, but forensic comparisons proved inconclusive.
Amelia Earhart's name still crops up in the news on a regular basis. Marine explorers have spent a considerable amount of time, money, and energy in recent years trying to find evidence of her airplane on the bottom of the ocean using sonar devices. Likewise, archeological teams, including TIGHAR, are still following various leads on the islands around Howland in search of some trace of the lost fliers. To date, however, there is no ironclad evidence that could shed any definitive light on their fate.
Someone may eventually solve the mystery of Amelia Earhart—or she may remain forever lost. Either way, the life, career, and disappearance of Lady Lindy must certainly rank as one of the most fascinating stories in aviation history. Nearly eight decades after her disappearance, Earhart's legacy lives on.
A display at the National Air and Space Museum, in Washington, DC, showing a sampling of the theories, legends, and rumors surrounding the mysterious disappearance of Amelia Earhart. _Steven A. Ruffin_
She wrote in a final letter to her husband, the contents of which appeared in the 1996 book _Last Flight_, "Women must try to do things as men have tried. When they fail, their failure must be but a challenge to others." Women aviators have certainly met this challenge in the years since she wrote those words. Today, one can find women in the cockpits of every kind of flying machine in existence—from small single-engine trainers, to airliners, to fighter jets, to spaceships. From whatever island in the sky Amelia Earhart might be watching, she can only be proud of the advancements that resulted, in part, from her amazing example.
## CHAPTER FIVE
## **THE BANDLEADER'S LAST GIG**
**"WHAT'S THE MATTER... ? DO YOU WANT TO LIVE FOREVER?"**
On the foggy English afternoon of December 15, 1944, two US Army officers boarded a small, single-engine transport plane at an obscure Royal Air Force flying field north of London. One of the officers was a tall, slender, somewhat unmilitary-looking major, wearing glasses. The airplane, a Noorduyn UC-64A Norseman, also had a distinctly civilian look to it. The Canadian-built bush plane was one of roughly 750 of its type that the US Army Air Forces had drafted into wartime service.
Just before 2:00 p.m. the airplane, with two passengers and US Army Air Forces Flight Officer John R. S. Morgan at the controls, lifted off, bound for Paris, some 250 miles to the southeast. The Norseman climbed its way past London and toward the murky skies hovering over the English Channel. As it left the British coast and faded into the misty and overcast English Channel, air controllers lost contact with the plane. Neither they nor anyone else ever saw or heard from it again.
Given the enormity of US casualties throughout World War II, the loss of one light transport and three men normally would not have even raised eyebrows. However, this case was different. The tall major aboard the Norseman that dreary afternoon was Alton G. Miller—better known as Glenn Miller, the world-famous bandleader. The death of this American patriot and musical icon would have a terrible impact on not only the Allied troops with whom he served, but also his millions of fans back home.
While his disappearance was painful, it was also impossible to explain. Exactly what caused Miller, fellow passenger Lt. Col. Norman F. Baessell, and Flight Officer Morgan to vanish has remained a mystery. Given the far-reaching effects of his disappearance and the intrigue that still swirls around it, Glenn Miller's "last gig" remains one of history's most memorable unsolved mysteries.
### **King of the Big Band**
Glenn Miller was still at the peak of his career when he vanished. Born on an Iowa farm in 1904, the college dropout quickly gained fame as a trombonist, composer, and arranger. By 1938, he was the leader of one of the most popular swing bands of his day. The unique woodwind sound he developed with the Glenn Miller Orchestra propelled him, the band, and their songs to the top of the charts. From 1939 to 1942, they scored an incredible seventy Top Ten hits—thirty-one of these in 1940 alone—earning the bandleader a reported $800,000. These earnings—equivalent to more than $13 million today—were staggering. In 1942, he won the first gold record ever presented for his smash hit "Chattanooga Choo-Choo," which sold 1.2 million copies. Other huge hits included "In the Mood," "Moonlight Serenade," "Stardust," and numerous other tunes now considered classics. The band broke attendance records at ballrooms and concerts and topped the ratings in radio broadcasts that aired three times weekly. Miller and his band even appeared in Hollywood movies.
The December 7, 1941, Japanese attack on Pearl Harbor suddenly thrust the United States into war. As a result, Miller—along with thousands of other patriotic young Americans—rushed to volunteer for the nation's military forces. At the age of thirty-eight, he knew he was too old to fight, but he felt he could still serve his country in a meaningful way by doing what he did best: playing music. He sold himself to the Army by vowing, as he wrote in an August 12, 1942, letter to US Army Brig. Gen. Charles Young, to "put a little more spring into the feet of our marching men and a little more joy into their hearts." The US Army commissioned Miller a captain in the US Army Air Forces, and in short order, he formed the fifty-piece 418th Army Air Force Band, which he took to England in the summer of 1944.
Over the next six months, Miller—by now promoted to major—stayed busy. He and his band of select professional musicians gave dozens of live performances and teamed up with the British Broadcasting Company to send his special brand of music to Allied forces stationed throughout Europe. They even recorded propaganda programs for broadcast deep into the heart of Nazi Germany. The positive effect Miller and his band had while serving in England during the latter half of 1944 was immeasurable. General James H. "Jimmy" Doolittle—famed leader of the daring 1942 raid on Japan, and by 1944 a three-star general in command of the US Eighth Air Force in Europe—fully recognized Miller's contributions. On July 29, 1944, he publicly stated near the end of a Miller concert at High Wycombe, England, that "next to a letter from home, your music is the greatest morale builder in the European Theater of Operations."
Famed bandleader Glenn Miller as a major in the US Army Air Forces. _US Air Force_
Major Glenn Miller conducts his 418th Army Air Force band during a 1944 outdoor concert in England. _US Air Force_
Major Glenn Miller (the trombonist on the right) and his band entertain troops at Steeple Morden, Cambridgeshire, England, on August 12, 1944. _US Air Force_
### **A Bad Day to Fly**
But even this high praise from one of the nation's true heroes rolled off of the unassuming bandleader's back. He did not take his band to Europe to earn kudos; he went to serve the troops who were fighting and dying there. As Allied victories pushed the war further east, he wanted to move his band with it. He believed the best way to support the troops serving on the front lines was to be there near them. This philosophy prompted him to schedule a Christmas concert in the recently liberated Parisian music hall the Olympia. As soon as he had the date booked, he began looking for transportation to Paris. He needed to arrange for the band's arrival and upcoming concert.
In the hectic confusion of wartime England, securing a hop across the English Channel was no easy task. This was especially so when the weather was not conducive to flying—which was often. Therefore, on the evening of December 14, 1944, when Lieutenant Colonel Baessell—an army staff officer acquaintance of Miller's who had secured a flight to Paris the next day—offered Miller a seat on the Norseman, he jumped at the chance. Most routine flights had been grounded because of the prevailing poor visibility, so this was a lucky break for Miller, who was anxious to get concert preparations underway.
The next day, Miller and Baessell stood on the foggy Twinwood Farm RAF airfield, located in Bedfordshire, a few miles north of London. As they prepared to board the Norseman transport, bearing the tail number 470285, bystanders overheard Miller asking Baessell, "Hey, where the hell are the parachutes?" He was not a fan of flying in the first place, and the small Canadian bush plane did not seem like a safe bet over water, in wartime, and in weather conditions of near-zero visibility. For that matter, perhaps it was not such a good idea to be flying in _any_ airplane under those conditions. Baessell replied, "What's the matter, Miller? Do you want to live forever?"
Noorduyn UC-64A Norseman, similar to the one in which Maj. Glenn Miller and two other officers disappeared over the English Channel on December 15, 1944. _US Air Force_
The Noorduyn UC-64A displayed at the National Museum of the US Air Force. The Norseman was a Canadian bush plane prior to being drafted into the military. _National Museum of the US Air Force_
The US military did not launch a comprehensive search for the missing airplane. This might have caused an outcry, had it been made public—but it was not. Besides, authorities had their reasons for not going to great lengths to find it. The Norseman was certainly not the first aircraft to disappear mysteriously into the treacherous English Channel, so everyone assumed the worst when Miller's plane failed to arrive in Paris—or anywhere else. The chances of any of the three men surviving a Channel ditching in a high-wing, fixed-gear airplane such as the Norseman were remote. It would likely have flipped on impact and sunk before they could escape. And even if they survived the crash, their prospects of remaining alive in the icy December waters of the English Channel long enough for rescue were virtually nil. Moreover, with a war going on, the resources needed for a thorough operation were simply not available.
The approximate course of Miller's plane from Bedfordshire to Paris on the day he disappeared. It would have been a routine flight in good weather, but flying over the English Channel in limited visibility could be deadly.
An official document, dated December 20, 1944, confirming that Maj. Glenn Miller and two others had been missing since December 15. _US Air Force_
The day after the Norseman vanished, the massive and bloody Battle of the Bulge began in mainland Europe, diminishing further any inclination to mount a search for a single missing airplane. Miller, Baessell, and Morgan were just three of what were to be 47,000 Eighth Air Force casualties suffered during World War II. Consequently, a little more than a week after they disappeared, the Army officially listed Miller and his two flying companions as missing and presumed dead.
### **Conspiracy Theories Galore**
The official US Army Air Forces explanation for Miller's loss is still probably as reasonable as any other. It held that the Norseman went into the Channel after experiencing icing or engine failure, or after the pilot became disoriented in the dense fog and lost control of the airplane. The airplane would undoubtedly have disintegrated upon impact, and been dragged to the bottom of the Channel by the heavy engine, taking its occupants with it. To this day, nothing definitive has surfaced to refute this explanation.
This is not to say, however, that there is any shortage of alternative theories for Miller's mysterious demise. The official explanation may not tell the whole story. After all, no airplane parts or bodies ever turned up, and the Army conducted only a cursory search for the missing plane and its occupants. Not only might the official cause have been wrong, it could even have been a cover-up for something more sinister. Therefore, as is typical in such high-profile disappearances, the absence of facts has translated to a surplus of widely diverse theories. Each has attempted in its own unique way to explain Miller's disappearance.
**Knocked out of the Sky by Jettisoned Bombs** — Bombs dropped by Royal Air Force (RAF) bombers returning from an aborted mission may have knocked the low-flying Norseman out of the sky. On that day, 138 Lancaster bombers returning from an aborted mission to Germany jettisoned their bombs over a ten-mile circular area in the English Channel, designated the South Jettison Area. Bomber crews routinely employed this procedure to get rid of bombs they had been unable to drop on a target. This avoided the hazardous prospect of landing with armed bombs still on board. RAF navigator Fred Shaw, along with fellow crew members aboard one of the bombers that day, recorded in their flight logbooks seeing a small single-engine monoplane fitting the description of a Norseman tip over and crash into the water below. Had the Norseman inadvertently strayed in the foul weather into the forbidden jettison zone, and been hit outright by a falling bomb or blown out of the sky after one exploded upon impact with the water? Shaw and his fellow crew members failed to report their observation at the time, so there was no follow-up. The incident remained mostly unknown until 1984, when Shaw finally decided to report what he had observed on that mission to the British Ministry of Defense. The press quickly learned of it and his story appeared in newspapers worldwide. Historians and other authorities have generally taken this story seriously, although some have questioned certain aspects of it, such as the time and jettison area's location.
**Killed in a Crash on the Coast of France** — In 1999, another former military man provided a different explanation for Miller's death. Fred W. Atkinson Jr., in an article on his website entitled "A World War II Soldier's Insight Into the 'Mysterious Disappearance' of Glenn Miller," stated that not only did Miller's plane crash, the Army recovered his body. Atkinson served with the Paris-based 320th Air Transport Squadron, which operated Norseman aircraft like the one in which Miller disappeared. He indicated that the Army primarily intended them for short air evacuation flights. They were therefore not equipped with sophisticated navigational instruments and thus were generally grounded in bad weather. According to Atkinson, in spite of the dismal weather conditions, a high-ranking officer issued orders for a flight to bring Major Miller to Paris. The Norseman subsequently crashed near the coast of France, killing all aboard. Atkinson asserted that one of those killed was Miller, as verified by dog tags and identification papers found on the body. He further implied that because the order for Miller's flight in such weather conditions was tantamount to criminal negligence, the Army might have covered up the entire incident by leaving Miller listed as "missing in action."
**Shot Down by Antiaircraft Fire** — Another explanation appeared in a 2006 book written by Clarence B. Wolfe. The former US antiaircraft gunner serving with Battery D of the 134th Antiaircraft Battalion contends in _I Kept My Word_ that his gun battery accidentally downed Miller's plane near Folkestone, England, as it flew over. The only problem is that he claimed the downing occurred in September 1944—more than three months before Miller was declared missing. Did this really happen, but perhaps on a different date?
**Miller the Undercover Agent** — Another interesting explanation for Miller's disappearance came from a 2009 book written by a former US Army intelligence officer named Hunton Downs. In _The Glenn Miller Conspiracy_ , Downs contends that Miller's death was a cover-up for a failed secret mission authorized by Eisenhower. According to Downs, Miller's assignment was to convince key senior German officers to help expedite the war's end by collaborating with the Allies; instead, Nazi intelligence agents captured and executed him. Downs suggests that the US Army fabricated the story of Miller's disappearance over the English Channel to avoid embarrassment and prevent the revelation of sensitive information. This book—fifty years in the making—was a serious effort, although critics have questioned some of its facts, conclusions, and sources.
**Died of Cancer** — Another unsubstantiated but widely repeated version of Glenn Miller's death is attributed to his brother, Herb Miller. He allegedly revealed in 1983 that his brother did not die in a plane crash over the English Channel, but rather in a hospital from lung cancer. He contended that Glenn took off on the fateful flight as described in the official explanation, but later landed and entered a military hospital, where he died the following day. The crash story was fabricated so that the world would remember him as a fallen hero—not, as Herb allegedly said, someone who had died "in a lousy bed." This version, like some of the other alternative theories, would explain why authorities never conducted an extensive search for the downed airplane. There is other evidence, as well, that Glenn Miller was not in the best health during his time in England. His executive officer and band manager, Don Haynes, wrote that he was losing weight. Miller's radio director in England, George Voutsas, related that Miller had once said, "You know, George, I have an awful feeling you guys are going to go home without me."
**Other Theories** — Other explanations for Miller's disappearance, proposed by various authors over the years, range from far-fetched to utterly unbelievable. The Germans shot down his plane, and a horribly disfigured Miller remained hidden away in a hospital out of public view. He died in Ohio in 1945 after arriving there with gunshot wounds. His fellow passenger, Baessell, was involved in the black market, and after murdering Miller and Morgan, landed the airplane himself somewhere in France. The US high command discovered that Miller was a German spy and "eliminated" him. US agents killed him because he threatened to expose a group of homosexual US officers. A US military policeman in Paris accidentally shot him. A German assassin downed his airplane. Miller made the flight to France safely, but died in the arms of a French prostitute. Insufficient evidence exists to prove any of these explanations.
Glenn Miller's disappearance on that foggy December afternoon in 1944 is still a mystery. In all likelihood, it will remain that way. To this day, his memorial headstone at Arlington National Cemetery lists him as "MIA"—missing in action.
As for Miller's Army Air Forces Band, it soldiered on without its esteemed leader. Under the direction of Jerry Gray, it played the 1944 Paris Christmas concert Miller died trying to arrange, and the band continued playing until war's end. Its final performance was in Washington, DC, on November 13, 1945—just ten weeks after the war had ended and less than a year after Miller's disappearance. President Harry S. Truman, accompanied by generals Dwight D. Eisenhower and Henry H. "Hap" Arnold, was in attendance. These VIPs took the opportunity to honor the band—and specifically, its fallen leader—publicly. But the real tribute to Miller was the band itself.
Glenn Miller's legacy continued to grow in the years following his disappearance. It has included a Jimmy Stewart movie; a US postage stamp; numerous books, articles, and documentaries; a museum; and several additional musical awards including, in 2003, a Grammy Lifetime Achievement Award. However, the legacy he probably would have cherished the most is the modern-day descendant of his band: the acclaimed US Air Force jazz ensemble, Airmen of Note. The Air Force created the band in 1950 to continue the tradition of Miller's famous dance band, and it still entertains audiences today.
Glenn Miller left behind a wife, two children, a large fortune, a successful career, and a future full of promise. Though he is long gone, the man, the patriot, the bandleader—and of course, his unique sound—will never be forgotten.
Display at the National Museum of the US Air Force, honoring famed bandleader, Glenn Miller. _National Museum of the US Air Force_
Major Glenn Miller's summer uniform cap and spare eyeglasses, displayed at the National Museum of the US Air Force. _Steven A. Ruffin_
## CHAPTER SIX
## **THE NIGHT CAMELOT ENDED**
**"LIKE HIS FATHER, EVERY GIFT BUT LENGTH OF YEARS."**
On the dark, hazy Friday evening of July 16, 1999, a thirty-eight-year-old pilot took off from Essex County Airport, near Caldwell, New Jersey. He was flying a Piper PA-32R-301 Saratoga II—a sleek, dependable, retractable-gear, single-engine light plane. The pilot's wife and her sister also joined him on the flight. Their destination was Martha's Vineyard, the resort island off the southern coast of Massachusetts.
Just before takeoff, the pilot acknowledged tower clearance for a right downwind departure from runway twenty-two. After lifting off at about 8:40 p.m., the Piper climbed for altitude and headed out over Long Island Sound toward the Vineyard, some two hundred miles to the northeast. Air traffic control (ATC) received no further radio transmissions from the pilot, but everything appeared normal as his airplane vanished into the early evening haze.
The Saratoga II and its occupants should have reached their destination in just over an hour, but they never arrived. Soon, news agencies learned that an airplane was missing, and that the pilot was none other than the son of the thirty-fifth president of the United States—in his own right, one of the most famous men in the world—John Fitzgerald Kennedy Jr.
### **An Ill-Advised Flight**
When "John-John," the son of slain US President John F. Kennedy Sr., was only three years old, he became America's darling. On November 25, 1963, the brave little boy stood in front of the entire nation and saluted his father's flag-draped coffin. From that moment, he remained America's favorite "first son." Wealthy, intelligent, likeable, handsome, and the scion of a prominent American family, he was the complete package. It was a surprise to no one when _People_ magazine named him 1988's "Sexiest Man Alive." Well before the age of forty, he had established his own identity as an assistant district attorney and, later, co-founder of the political-culture magazine _George_. He was no longer John-John—he was John F. Kennedy Jr. With such a unique thoroughbred pedigree, it seemed certain that he was destined for greater things.
Kennedy had only recently earned his pilot's license, but he was not a total beginner. He had accumulated more than three hundred hours of flight time—far more than the license required—and fifty-five of them at night. However, the Federal Aviation Administration (FAA) classifies the Saratoga II that he flew—by virtue of its power, speed, retractable gear, and other advanced features—as both a "complex" and a "high-performance" airplane. Consequently, it is a lot of airplane for a relatively inexperienced pilot to manage. John had logged roughly thirty-six flight hours in it, but only three of these were without a flight instructor. Perhaps most significantly, he had less than one hour of solo time at night in the Saratoga II. Still, his instructor considered him competent enough to handle it and had recently signed him off.
A Piper Saratoga II, the type in which John Kennedy Jr. made his last flight. Its relatively high performance makes it a demanding airplane for an inexperienced pilot to fly. _Steven A. Ruffin_
John F. Kennedy Jr., during a visit to Kennedy Space Center. His uncle, Senator Ted Kennedy, later described him as having "every gift but length of years." _NASA_
Although Kennedy lacked experience in the airplane, he was familiar with the route he was flying that evening. He had flown it, in one direction or the other, some thirty-five times previously. Several of these flights were at night, though this was the first time he had flown it after dark in the Saratoga II. It was a warm, hazy summer evening with little or no overcast, and weather reports indicated that visibility met the FAA Visual Flight Rules (VFR) minimum of three miles; consequently, the flight did not legally require an instrument flight plan. Otherwise, Kennedy would have had to either cancel the flight or take along an instructor, since he did not have an instrument rating.
These facts considered, it seems safe to conclude that John had the requisite ratings, training, skills, and meteorological conditions to make this flight—at least from a technical and legal standpoint. Whether he had the experience and overall competence to complete it _safely_ under those conditions proved another matter.
After departing the Essex County Airport, Kennedy climbed to an altitude of 5,500 feet and headed northeast toward the Vineyard. Navigation was simple enough, as he followed the southern Connecticut and Rhode Island shoreline for all but the final leg of the trip. His wife, Carolyn Bessette Kennedy, and sister-in-law, Lauren Bessette, accompanied him. Kennedy intended to drop Lauren off at Martha's Vineyard and then fly with Carolyn the additional twenty-five miles up to Hyannis Port, where they planned to attend a wedding the next day.
The course Kennedy flew from Essex County Airport, New Jersey, before he crashed into the Atlantic Ocean on July 16, 1999. He was only a few miles short of his destination at Martha's Vineyard.
Kennedy eventually left the coastline and headed out over the dark thirty-mile stretch of the Atlantic Ocean off the western coast of Martha's Vineyard. As he began his descent for landing, things began to go wrong.
What happened next was only determined later from ATC radar images recorded that night. They showed his airplane making a series of unexplained maneuvers and altitude changes that culminated in a steep, high-speed descending turn. The rate of descent quickly accelerated to an alarming vertical drop of nearly 5,000 feet per minute. This scenario suggested only one thing: the airplane had fallen into a dangerous downward spiral. The last position recorded was at 9:40 p.m., when the airplane was about seven miles off the western coast of Martha's Vineyard, at an altitude of 1,100 feet. Then, it simply disappeared from radar.
Unknown to anyone at the time it was happening—perhaps even to the three doomed occupants themselves—was that Kennedy had lost control of the airplane. It hit nose first at a speed well in excess of two hundred miles per hour. The effect of hitting the water at that speed was the same as hitting a slab of granite. All three occupants almost certainly died instantly, even before the crumpled mass began to sink to the ocean floor.
When the airplane failed to arrive at Martha's Vineyard, members of the family became concerned and started making phone calls. The hope was that Kennedy had simply changed his plans at the last minute—or perhaps had some minor mechanical or weather-related problem—and landed elsewhere. After authorities ruled out that possibility, they pinpointed the area off the coast of Martha's Vineyard where radar had last tracked the Saratoga II, and initiated a search.
Family members of the missing fliers, along with just about everyone else in the country, anxiously watched and waited through the days to come. Hopes dwindled by the hour that the three might still somehow be clinging to a piece of wreckage floating in the ocean or sitting high and dry on some remote shore. On July 20—four days after they had gone missing—the American public finally learned the gut-wrenching truth: divers from the US Navy salvage ship USNS _Grasp_ had located the wreckage of Kennedy's airplane about 120 feet below the surface of the Atlantic Ocean. In short order, they recovered the bodies. It was all too obvious that for all aboard there had been no chance for survival.
### **A Deadly Dose of Disorientation**
As is often the case with aircraft accidents, the cause of the crash that killed John F. Kennedy Jr., his wife, and his sister-in-law was as much an educated guess as it was a scientific finding. The National Transportation Safety Board (NTSB) investigated the wreckage and reviewed stacks of documentation—reports, statements, and interview transcripts. Finding no obvious deficiencies in either the airplane or the pilot, they concluded that the probable cause of the accident was "the pilot's failure to maintain control of the airplane during a descent over water at night, which was a result of spatial disorientation." The haze and darkness, which prevailed at the time, were contributing factors.
Unfortunately, this demon—spatial disorientation—is an all-too-common occurrence in aviation. It has claimed countless other pilots since the earliest days of powered flight. Pilots learn, almost from day one of training, that when unable to see the horizon or other reference points around them—as can occur over water at night and in situations where visibility is limited—they _must_ trust only their instruments and not their senses. This is because the senses easily become confused and convey to the pilot false information, while instruments normally do not lie. John apparently lost his sense of spatial awareness on that hazy night when he could not see anything around him on which to focus; consequently, he literally lost the ability to distinguish the difference between up and down. Because of the various forces affecting him in his moving airplane, a dive might have felt like a climb, or a right climbing turn may have given the same sensation as flying straight and level. Due to his state of spatial confusion, he unknowingly fell into a rapidly descending turn—or spiral. The sound of his engine racing during the rapid descent was probably the only indication to him that things were not right. Had he relied on his instruments or autopilot, he could have avoided the situation or even recovered after getting into it, but he obviously failed to do that. The result was a rapidly accelerating and tightening spiral that ended only when the nose of the Saratoga II smacked, right wing low, into the dark waters of the Atlantic Ocean. It was an all-too-easy mistake for even a seasoned pilot to make, and nearly always a deadly one—hence the macabre nickname "graveyard spiral." Several factors may have contributed to John Kennedy Jr.'s tragic end:
**Visibility** — This was probably more of a concern that hot, hazy July evening than Kennedy's preflight weather briefing led him to believe. Although it was technically above the VFR minimum of three miles all along the route, it was not much more than that. Airports along the coast reported visibilities of between five and eight miles, while at least one other pilot flying over the water in the vicinity that evening reported no visual horizon at all because of the haze. Most prudent non-instrument-rated pilots would avoid flying at night in conditions such as those. John may have found himself in trouble before he even realized the danger he was in.
**Nighttime Experience in the Saratoga II** — Although Kennedy was legally qualified to fly his high-performance Piper that evening, the amount of flight time he had in it without an instructor at night was minimal—less than an hour—and he had only a single unsupervised nighttime landing in it. Perhaps additional night experience in this airplane would have made enough difference to save him and his passengers.
**Late Departure** — Kennedy had planned an earlier departure that day, but the heavier-than-usual Friday afternoon traffic that he and his passengers encountered on the way to the airport delayed takeoff. The original plan had been to leave at 6:00 p.m. and arrive at Martha's Vineyard before dark, but it was past sunset when they finally took to the air. This delay made the difference between day and night—and, as events proved, between life and death.
**Injured Ankle** — John had fractured his left ankle while paragliding about six weeks earlier, which required surgery to repair. On the evening of the flight, a witness observed him using crutches as he loaded his airplane. His physical therapist stated later that Kennedy's foot still did not have a full range of motion. In addition, two different flight instructors with whom he had recently flown stated that they had to assist him in manipulating the rudder pedals. This may have contributed to his inability to pull the airplane out of its death spiral.
**Distracted by Personal Problems** — It was widely reported that John and Carolyn were experiencing marital problems at the time of the flight. On top of that, his magazine, though initially successful, was no longer financially sound. Given such issues, it would be reasonable to suspect that John might have had other things on his mind that night, and was perhaps not as attentive to flight details as he should have been. Christopher Andersen, in his book _The Day John Died_ , lends credibility to this idea. He contends that Kennedy nearly collided with an American Airlines jet earlier that evening after he inadvertently flew into its path.
### **The Curse**
The nation was shocked. Yet again, fate had prematurely snatched one of its favorite sons. The so-called "Kennedy Curse" is legendary—and not without foundation. Many have cited as evidence of its existence the numerous tragic incidents that have afflicted this famous family over the latter part of the twentieth century. Foremost among these were the assassinations of President John F. Kennedy Sr. in 1963 and, five years later, that of his brother, senator and presidential candidate Robert F. "Bobby" Kennedy. The trend goes well beyond those two tragedies, however—from automobile, aircraft, and skiing accidents to stillbirths and serious illnesses to the tragic consequences of bad behavior.
The family's ill fortune in airplanes, however, was the aspect of the curse on which John should have focused. If he had, he might have reconsidered ever taking flying lessons.
**Joseph P. Kennedy Jr.** — US Navy Lt. Joseph Kennedy was President Kennedy's older brother and the uncle that John Jr. never met. Joe, a World War II naval aviator, had volunteered to participate in a dangerous program called Operation Aphrodite, which involved the use of one of the earliest precursors to today's unmanned aerial vehicles. On August 12, 1944, Joe took off from a base in England in a modified Consolidated B-24 Liberator bomber that was loaded to the gills with explosives. The plan was simple: soon after he had the big plane safely in the air, he and his copilot—the only two aboard—were to bail out. The flying bomb would then proceed to its heavily fortified target, guided by remote control, and crash directly into it. Instead, Kennedy's airplane exploded prematurely before he and his copilot could bail out, killing them both instantly. The cause of the spontaneous explosion was never definitively established.
**Kathleen Agnes Kennedy Cavendish** — Kathleen was President Kennedy's younger sister and another family member that John Jr. never had the chance to meet. On May 13, 1948, she and three others—including her fiancé, Peter Wentworth-Fitzwilliam, eighth Earl Fitzwilliam, who was not yet divorced from his previous wife—died when their airplane crashed in southern France during a storm. Kathleen was twenty-eight years old.
**Senator Edward M. Kennedy** — An airplane accident occurring only seven months after the assassination of the president, his older brother, was just one of several misadventures in the career of this resilient politician. On June 19, 1964, "Teddy" was winging his way from Washington, DC, to Westfield, Massachusetts. While approaching the municipal airport at Westfield to land, the twin-engine Aero Commander 680 in which he was riding crashed into an apple orchard. The pilot and one of Kennedy's aides died, but Teddy and two others survived. It was a narrow escape for the junior senator from Massachusetts. He would spend the next five months in the hospital recovering—and probably thanking his lucky stars for having survived his family's dreaded curse.
**Alexander Onassis** — John's stepbrother from the second marriage of his mother, Jacqueline Kennedy Onassis, died in a plane crash near Athens, Greece, on January 23, 1973. While not involving a Kennedy, it was just another manifestation of the curse—this time by sheer association. Christopher Andersen also alleges in _The Day John Died_ that Jacqueline Kennedy had a recurring premonition that her son would die piloting his own plane. For that reason, she did everything in her power, until her own death in 1994, to prevent him from becoming a pilot. It was therefore just that much easier for Kennedy Curse proponents to feel that the inevitable had happened once again. Andrew Ferguson, who operated the company that maintained Kennedy's airplane, explained his tragic death this way: "He wasn't reckless. He made a stupid mistake. It's like going through a stop sign. But when a Kennedy goes through a stop sign, there always seems to be an eighteen-wheeler coming from the other side."
### **Conspiracy Theories**
Curses notwithstanding, John Kennedy Jr.'s death unsurprisingly generated a greater-than-usual assortment of conspiracy theories. Although the NTSB report provided a logical and reasonable explanation for his crash, those looking for a more sinister cause can find plenty of ammunition.
Almost immediately after the crash, the tabloids went into high gear. They were encouraged in part by a fake FBI "Preliminary Report" that made the rounds, suggesting that his death was not accidental. Consequently, many publications and websites began to claim that Kennedy's death was the result of a conspiracy. They alleged that there had been a large-scale cover-up, evidence destroyed, and "facts" fabricated.
Many of these conspiracy advocates even went so far as to refer to his death as an assassination. Those supposedly implicated included a variety of high-profile potential political rivals and the secret society known as the Illuminati. Some even pointed a finger at the Israeli intelligence service, Mossad, which allegedly feared that the probing of John's magazine into the 1995 assassination of Israeli Prime Minister Yitzhak Rabin might reveal incriminating secrets. Rumors also circulated that US Navy pyrotechnics, or perhaps even a nuclear accelerator fired from Long Island, accidentally brought down Kennedy's airplane.
Whether any of the numerous alternate explanations for John F. Kennedy Jr.'s demise have even a shred of credibility is debatable. In the absence of hard evidence to the contrary, the NTSB report remains the most likely scenario: an inexperienced pilot flying a high-performance aircraft over water on a dark, hazy night simply became disoriented, lost control, and spiraled into the sea.
Regardless of whether John Kennedy Jr.'s death was the result of a curse, a sinister plot, lack of competence, bad judgment, or just plain bad luck, the result was the same: America had lost its beloved John-John. The nation mourned his death, perhaps not so much for who or what he was as for what he had represented to the American people—he was the living legacy of a charismatic young president who died tragically in service of his country.
The family held a memorial Mass for John, Carolyn, and Lauren on July 23, 1999, at Old Saint Patrick's Cathedral in New York. Besides family members, President Bill Clinton and numerous other dignitaries attended. There were an additional four thousand mourners both inside and outside of the packed church. John's Uncle Teddy—US Senator Edward M. Kennedy—gave the eulogy. In so doing, he wistfully noted that John had, "like his father, every gift but length of years."
The fateful decision John F. Kennedy Jr. made to take to the air that hazy July evening in 1999 resulted in the tragic end of three promising young lives. The loss of America's favorite prince also effectively ended any hope of the country ever returning to one of its most memorable eras—the enchanted Kennedy period known as "Camelot."
## CHAPTER SEVEN
## **THE DEATH OF A BUTTERFLY**
**"OBVIOUSLY A MAJOR MALFUNCTION."**
On the cold morning of January 28, 1986, America's next spacecraft to rocket its way into orbit sat majestically poised for launch at Kennedy Space Center, Florida. Space Shuttle _Challenger_, otherwise known by its Orbital Vehicle Designation, OV-099, was the second shuttle—after _Columbia_—that the US space agency, NASA, had launched into space. With nine missions and several significant firsts already under its belt, in less than three years the 115-ton winged orbiter had already more than proven itself. The highly advanced Rockwell-built space plane and its sister ships were the most high-tech manned flying machines ever built, the epitome of human scientific achievement.
Technical brilliance, however, was not enough. Only seventy-three seconds after launch, and in front of television viewers the world over, the pride of America's space program disintegrated into a million pieces and plunged, with its precious human cargo, into the Atlantic Ocean. It was the worst—and most graphically visible—in-flight catastrophe in the history of manned space flight. It was also a major black eye for NASA that would create years of negative fallout, and bring into question the agency's ethical culture and hard-earned reputation for safety. Worst of all, it was a tragic end for seven courageous human beings. For the first time in NASA history, failure had become an option. It was, by any measure, one of aviation history's most significant flights of no return.
### **A New Concept in Space Flight**
The Space Shuttle was the world's first space plane. The concept was the product of a NASA program created in the late 1960s called the Space Transportation System; hence the "STS" prefix for all shuttle missions. The idea was that a winged spacecraft capable of safely gliding back to the earth from orbit for reuse would significantly reduce the exorbitant cost of space missions. In addition, its spacious cargo compartment and hefty thirty-ton payload would facilitate the transport of satellites, interplanetary probes, scientific equipment, and other space hardware into orbit for future projects.
NASA operated six shuttles as reusable low-earth orbital spacecraft between 1981 and 2011: _Enterprise, Columbia, Challenger, Discovery, Atlantis_, and _Endeavour_. All flew space missions except for the prototype _Enterprise_, which NASA used only for flight-testing within Earth's atmosphere. The amazing accomplishments of the $200 billion Space Shuttle program are a matter of public record. During the program's thirty-year span, its five operational shuttles flew 355 different astronauts, ranging in age from twenty-eight to seventy-seven, on 135 missions. In so doing, they logged more than three and one-half _years_ in space, completed twenty-one thousand orbits of the earth, and carried 1,750 tons of cargo into orbit. All shuttle missions launched from Kennedy Space Center, Florida, and all but two ended with a conventional winged landing. The two exceptions were those missions in which both shuttle and crew perished: _Challenger_ mission STS-51-L and _Columbia_ mission STS-107.
The distinctive Space Shuttle launch stack consisted of three major components linked together—the delta-winged orbiter, which carried the crew and cargo; a massive external tank containing 790 tons of liquid oxygen and hydrogen for the orbiter's three main rocket engines; and two solid rocket boosters, which provided most of the thrust for the first two minutes after liftoff. Astronaut Story Musgrave, the only person to fly all five of NASA's operational shuttles into space, aptly described the setup as "bolting a very beautiful butterfly onto a bullet."
The orbiter jettisoned both boosters and the external fuel tank after they had served their purpose, so that only the orbiter itself escaped Earth's atmosphere. When its mission was over, it reentered the atmosphere and glided down to land at Edwards Air Force Base, California; Kennedy Space Center, Florida; or, in the case of STS-3 alone, White Sands, New Mexico.
By the time _Challenger_ was ready for its tenth mission in January 1986, many experts considered the Space Shuttle the safest spacecraft ever flown. _Columbia, Challenger, Discovery_, and _Atlantis_ had already successfully completed an aggregate of twenty-four missions with hardly a hiccup. In fact, the American public—and perhaps NASA officials too—had been lulled into a sense of complacency about space launches. This was a far cry from the early days of the space program, when gamblers placed bets on whether the next launch would end in a successful liftoff—or a massive explosion on the launch pad. As events would prove, however, launching even the Space Shuttle into orbit was still anything but safe.
A "butterfly" bolted to a "bullet." Successful launch of Space Shuttle _Challenger_ on mission STS-6, April 4, 1983. On January 28, 1986, it broke apart at T+73 seconds, killing all seven astronauts aboard. _NASA_
### **A Perilous Undertaking**
Space is, without a doubt, the most inhospitable and treacherous environment imaginable. Just getting there requires a wild ride atop a metal tube propelled skyward by a semicontrolled explosion of thousands of tons of highly unstable rocket fuel. On the way up, riders must endure bone-crushing gravitational forces as the rocket quickly accelerates them to a speed high enough to escape Earth's gravitational pull.
Once in space, they are exposed to a multitude of deadly conditions seen nowhere on Earth, starting with temperatures cold enough in the shade to convert the human body instantly into a block of ice, yet hot enough in the sunlight to just as quickly turn that same body into a lump of carbon. Then, there is the airless, body-wrecking total vacuum of outer space that is utterly incompatible with life. Finally, because an orbiting spacecraft is in perpetual free fall, space travelers must deal with the constant sensation of falling, nausea, and the many other discomforts associated with weightlessness.
Getting back into Earth's atmosphere is even more dangerous than escaping it. When it is time to come home, space travelers must once again withstand extremely high g-forces, this time from rapid deceleration; in addition, extreme precautions must be in place to prevent them from burning to a cinder in the three-thousand-degree heat generated by the friction of a seventeen-thousand-mile-per-hour slipstream of air screaming past the spacecraft. Finally, once safely slowed down and back in the earth's atmosphere, the now-powerless craft, designed primarily for space flight, somehow must find a way to bring its human cargo back to terra firma in one piece. Certainly, no human venture has ever been more technically demanding or fraught with peril than space travel.
Yet, surprisingly, in more than a half century of manned space flight, very few humans have failed to return home safely. This outstanding record is due, in part, to the obsessively meticulous care that space agencies have traditionally taken in producing spacecraft, training crew members, and planning missions. It is also due, at least to some extent, to that other ingredient that is always necessary for flight safety: luck. On those few occasions when either of these essential factors was lacking, the consequences were horrendous.
Apollo 1 astronauts Gus Grissom, Ed White, and Roger Chaffee in front of Launch Complex 34, on which the Saturn 1 launch rocket sits. The three later died in a preflight fire on the launch pad, making them the first US astronauts killed in action. _NASA_
Up until _Challenger_'s tenth and last mission, there had been only two fatal space missions. Both took place within the Soviet Union's ultra-secret space program. The first occurred in April 1967, when cosmonaut Vladimir Komarov of _Soyuz_ 1 crashed to his death in his space capsule. After a trouble-plagued eighteen-orbit flight in a spaceship he angrily called a "devil-machine," he finally managed with great difficulty to reenter Earth's atmosphere... only to have the capsule's life-saving parachute fail. He plummeted, unchecked, all the way to the ground. Then on June 30, 1971, three Soviet cosmonauts from _Soyuz_ 11 died during their return from the _Salyut_ space station. Their capsule developed an air leak and lost pressure, killing them instantly.
The US Space Program also had its disasters. On January 27, 1967, it lost three Apollo 1 astronauts in a preflight launch pad fire. And in 1970, it had a very close call during the Apollo 13 mission to the moon, when an oxygen tank exploded more than two hundred thousand miles from Earth. So far, however, NASA had managed to avoid any loss of life during a space mission. On January 28, 1986, that perfect record would end in a huge cloud of white smoke.
### **T+73 to Disaster**
A highly trained and extremely competent crew of five men and two women were strapped inside _Challenger_'s crew compartment on pad 39B. Sitting there 195 feet above the ground awaiting countdown were Mission Commander Francis R. Scobee; Pilot Michael J. Smith; Mission Specialists Ellison S. Onizuka, Judith A. Resnik, and Ronald E. McNair; and Payload Specialists Gregory B. Jarvis and Christa McAuliffe. This carefully selected group of highly accomplished Americans appropriately represented a diverse cross-section of the nation—not only in gender and ethnicity, but also in their professional and personal backgrounds.
It was rookie astronaut Sharon Christa Corrigan McAuliffe, however, who was the media darling of the crew. Although this was only her first mission, her special distinction was that she had been selected from a pool of eleven thousand applicants to participate in a new NASA program called the Teacher in Space Project. As the winner of this nationwide competition, the thirty-eight-year-old high school teacher, wife, and mother of two was to be the first professional educator ever shot into orbit. Because of the widespread publicity surrounding this program, McAuliffe had become NASA's best-known astronaut. Moreover, because she was attractive, articulate, and enthusiastic about space flight, she proved to be an excellent spokesperson for her own profession, as well as for the US Space Program. As she once said on a late-night TV appearance, "If you're offered a seat on a rocket ship, don't ask what seat. Just get on." This is exactly what she did.
The mission, designated STS-51-L, was routine—as space missions go. Mission planners had tasked the crew with a variety of duties, including deploying satellites and performing experiments in space; in addition, they had scheduled McAuliffe to conduct classroom lessons and experiments for the Teacher in Space Project while orbiting weightless 150 miles above the earth. Her first lesson, entitled "The Ultimate Field Trip," was to be a tour of the cabin and a description of daily life aboard the shuttle.
NASA had experienced a particularly difficult time getting STS-51-L off the launch pad. For various reasons, half a dozen delays in the liftoff had occurred in as many days. The latest holdup had happened just two hours before its final launch, due to a problem with the shuttle's fire detection system. Because these delays were complicating a very tight flight schedule, NASA officials were understandably frustrated and anxious to get _Challenger_ into orbit. The pressure was even greater due to the fanfare associated with the much-heralded launching of their first teacher-astronaut into space.
The crew of STS-51-L. _Back row, L to R_: Ellison S. Onizuka, Sharon Christa McAuliffe, Greg Jarvis, and Judy Resnik. _Front row, L to R_: Mike Smith, Dick Scobee, and Ron McNair. All died on January 28, 1986, in the _Challenger_ accident. _NASA_
NASA astronaut and Teacher in Space Project representative, Christa McAuliffe, experiences the sensation of weightlessness in a Boeing KC-135 Stratotanker. Pilots of the modified refueling aircraft fly a special parabolic pattern that provides brief periods of zero gravity for astronauts in training. It is nicknamed the "vomit comet" for obvious reasons. McAuliffe was destined never to experience this sensation in space. _NASA_
The graphically violent end of Space Shuttle _Challenger_, January 28, 1986. _NASA_
Finally, on Tuesday morning, January 28, 1986, all systems were "go" for launch. The countdown proceeded without interruption. At 11:38 a.m., the 2,250-ton mass of rocketry, explosive fuel, and orbiter filled with precious scientific and human cargo began to detach its umbilical cords from the 290-foot launch tower and inch slowly skyward. Millions of spectators watched on live TV, many of them children in their classrooms, as the first teacher in history headed into space. Thousands more had gathered on site to view the launch, some of them family members and friends of the crew. All gazed in awe as the two massive solid rocket fuel boosters and three main engines fired up simultaneously. The deafening roar, heat, vibration, concussion, and blinding light generated by the five engines, producing an aggregate of more than six million pounds of thrust, was something they would never forget.
The spectacle of the mountain of machinery slowly moving skyward appeared exactly as it should have to onlookers and controllers alike. They watched as it soared higher and higher into the blue Florida sky. Unknown to anyone, however, something terrible had already occurred. As a result, at T+73 seconds, the shuttle assembly—still accelerating, and now approaching a speed of Mach 2—suddenly appeared to explode. Dense white smoke appeared, contrasting sharply against the deep blue sky, and rocket parts seemed to spew off in a hundred different directions. Awe turned to horror.
TV news correspondent Tom Mintier was describing the launch for CNN—which by now was the only national news station still broadcasting live. When he saw the explosive eruption, he became silent and for the next few seconds was at a complete loss for words. Forty seconds later, NASA public affairs officer Steve Nesbitt, watching from Mission Control Center in Houston, publicly announced what would become one of the great understatements of all time: "Flight controllers here looking very carefully at the situation. Obviously a major malfunction. We have no downlink." A little later, he tersely added, "We have a report from the Flight Dynamics Officer that the vehicle has exploded." _Challenger_ and its crew were no more. What could have gone so terribly wrong in only seventy-three seconds?
### **What Went Wrong?**
Subsequent analysis revealed that _Challenger_ did not really explode—it broke apart. What no one noticed during the first second of liftoff was obvious enough on the launch film, which analysts later carefully scrutinized. Clearly visible were ominous little puffs of smoke coming from a field joint on the right solid rocket booster. It was inconspicuous enough at the time to go unnoticed, but by T+60 seconds, an intense plume of flaming gases had replaced the smoke. It soon burned a hole in the adjacent 535,000-gallon external liquid fuel tank. Within seconds, the solid rocket booster broke loose from the strut attaching it to the external tank, shoving the shuttle laterally into an unusual attitude. Since it was now traveling at Mach 1.9—well over one thousand miles per hour—this created a catastrophic aerodynamic load on the orbiter. Thus, at seventy-three seconds after launch and at an altitude of forty-eight thousand feet, _Challenger_ simply flew into pieces.
It was at that precise moment that Pilot Michael Smith uttered his last recorded words: "Uh-oh." It was the only recorded statement by any member of the crew indicating that they were even remotely aware of a problem. After _Challenger_ dissolved in a massive puff of white smoke, only the sturdily built crew cabin—with its human occupants still strapped inside—remained intact, as it shot outward and upward from the rest of the debris. It continued to ride its Mach 1.9 momentum skyward for another twenty-five seconds. After it peaked at an altitude of sixty-five thousand feet, it slowly began its agonizing twelve-mile death plummet toward the Atlantic Ocean below. The doomed crew had no way to escape the sealed compartment.
All seven astronauts were probably still alive during the entire two-minute, forty-five-second plunge to the sea. Whether or not they were conscious is another matter. If the cabin maintained atmospheric pressure after the break-up, they may well have been aware of their impending death right up to the moment when they smashed into the water at a speed of 207 miles per hour. The deceleration force of more than 200 _g_ they experienced upon impact far exceeded the limits of human survivability. The forces completely crushed the reinforced aluminum cabin, and anyone inside still alive died instantly.
Further investigation would soon determine the root cause of the accident, and accusing fingers of blame would point directly to the highest echelons of NASA leadership. Meanwhile, the shuttle program would remain in limbo for the next thirty-two months.
### **A Flawed Decision**
The key to _Challenger_'s destruction was determining the cause of the telltale gray-black wisps of smoke during liftoff. This, as it turned out, required very little detective work. The wisps resulted from a defect that NASA engineers had known about for nearly a decade. The puffs of smoke were indicative of a failed joint between the two lower segments of the right solid rocket booster. The joint failed because of two faulty quarter-inch-thick rubberlike O-rings. Though the innocuous-looking gaskets were anything but high-tech, their role was critical: they were the only barrier preventing burning gases from leaking through the joint when the rocket was firing. When they failed, a fatal seventy-three-second chain of events ensued that ended in the obliteration of _Challenger_.
Managers from both NASA and the company that built the solid rocket boosters, Morton-Thiokol, Inc., were well aware of the problem with these seals. Nevertheless, since the O-rings had not failed catastrophically in any of the previous missions, NASA decision makers continued to consider them an acceptable risk. For this particular launch, however, lower-level engineers had expressed a different view. In a two-hour teleconference on the evening before _Challenger_'s final launch, Morton-Thiokol engineers had argued passionately to NASA managers that the uncharacteristic freezing weather predicted for Kennedy Space Center on launch day would cause the O-rings to harden and lose their elasticity. This would greatly increase the chance of a catastrophic malfunction. For this reason, they urged NASA to halt the launch—something that impatient, and perhaps politically pressured, NASA bosses did not want to hear. One of these was George Hardy, the Deputy Director of Science and Engineering at Marshall Space Flight Center. He stated emphatically, according to witnesses present at the meeting, that he was "appalled" at their recommendation to delay the launch. Witnesses further testified that during that same meeting, another NASA bigwig—Lawrence Mulloy, who headed Marshall's Space Shuttle Solid Rocket Booster Program—exclaimed, "My God, Thiokol, when do you want me to launch, next April?" Both Hardy and Mulloy later complained that their remarks were taken out of context, but one glaringly incriminating fact remained: they ignored the warnings and the adamant recommendation to scrub the mission. Not surprisingly, no one outside of inner NASA circles learned of any of this until much later.
As predicted, the temperature early on the morning of the launch dipped to well below freezing, and by launch time it was still only thirty-six degrees Fahrenheit. This was fifteen degrees colder than any previous launch and even further below the contractor-recommended minimum of fifty-three degrees. However, NASA officials had a schedule they felt compelled to keep, so they pressed on with the launch.
After the accident, questions remained that NASA officials could not or would not answer. What was the cause of this catastrophe and how could they have prevented it? Was it safe to proceed with the shuttle program? When it became obvious that answers were not forthcoming, President Ronald Reagan appointed a special commission to investigate the accident. He chose former Secretary of State William P. Rogers to head it up.
NASA officials at first did their best to gloss over the circumstances leading up to the disaster. Some at the space agency, however, took exception to this. One of these was Richard C. Cook, the lead resource analyst for the solid rocket boosters. He even went so far as to accuse the agency of an orchestrated cover-up. To prove his point, he leaked documents to the _New York Times_ that engineers had sent to NASA management warning of the dangers of launching _Challenger_. Management's disregard of these warnings, along with the other incriminating findings of the Rogers Commission, incited a storm of controversy.
The commission was scathingly critical of NASA officials for making what its report called a "flawed" decision to launch on that day, even though they knew there was a significant risk. NASA spin masters, in turn, did their best—using an array of self-serving rationalizations—to justify their decision; but with the damning evidence presented and seven dead astronauts, their excuses had an exceedingly hollow ring.
In the end, NASA got the message and made more than two hundred changes to the shuttle during the thirty-two-month flight suspension that followed the _Challenger_ accident. These included the addition of an escape system—a feature that might have saved the _Challenger_ crew had it been available to them. In addition, the agency grudgingly examined its own safety and ethical culture. To improve its decision making, it adopted a more stringent, safety-based flight preparation process for all future flights. Both George Hardy and Lawrence Mulloy voluntarily retired from NASA within months following the disaster.
Finally, on September 29, 1988, the Space Shuttle program resumed operations with the launch of _Discovery_ on mission STS-26. With the new operating procedures in place, space flight would be safer.
In spite of the hard lessons learned from the loss of _Challenger_ , history was destined to repeat itself on February 1, 2003. Space Shuttle _Columbia_ was reentering the earth's atmosphere at the completion of mission STS-107. Only minutes shy of landing at Kennedy Space Center, it broke up over eastern Texas and western Louisiana. Once again, seven astronauts were lost... and once again, NASA had allowed a fatal accident to occur because of a problem they already knew about—and should have corrected. The loss of _Columbia_ and its crew resulted from a piece of thermal insulation foam from the external tank that had broken loose during launch. It blew back, damaging the orbiter's left wing, which during reentry led to the disintegration of the entire orbiter.
Like the defective O-rings that destroyed _Challenger_ , pieces of insulation foam breaking off during launch had been a known and persistent problem of which NASA engineers and management were well aware. This issue caused severe damage to the Space Shuttle _Atlantis_ in 1988 during STS-27. But as with the O-rings, the foam had not yet destroyed a shuttle, so officials considered it an acceptable flight risk—or in NASA lingo, an "expected anomaly"—that those in management were willing to accept. The price for this policy of Russian roulette once again proved to be devastatingly high.
With both _Challenger_ and _Columbia_ , NASA had—in the words of its own official history—"overlooked the obvious, allowing two tragedies to unfold on the public stage."
Divers finally recovered the mortal remains of the _Challenger_ crew on April 21, 1986—nearly three months after they fell into the Atlantic Ocean and sank in ninety feet of water. Americans will always justifiably remember them as heroes who gave their lives for the cause of space exploration. If not for two faulty O-rings and one "flawed" decision, they might all have lived to tell their grandchildren about the day they rode a butterfly into space.
The remains of the seven _Challenger_ crewmembers being transferred to a Lockheed C-141 Starlifter transport at Kennedy Space Center's Shuttle Landing Facility. _NASA_
## CHAPTER EIGHT
## **THE DAY THE BARON FLEW TOO LOW**
**"I THINK HE HAS SEEN DEATH TOO OFTEN."**
On the morning of April 21, 1918, a youthful German pilot strapped himself into the cockpit of his fighter plane in preparation for a combat patrol. His mount was an all-red Fokker Dr.I triplane, bearing the serial number 425/17, and he was flying from a large open field on the outskirts of the French town of Cappy. It was less than seven months before World War I would finally end, but on that day, the brutal four-year conflagration still raged as violently as ever.
According to legend, as the young _Flieger_ prepared to take off, a member of the ground crew hesitantly stepped up to the cockpit and said, "Herr _Rittmeister_, may I have an autograph?" Although the timing was odd, the request was understandable. This particular pilot with the close-cropped blond hair also happened to be Germany's greatest hero and the world's most successful fighter ace. He was _Rittmeister_ Manfred _Freiherr_ von Richthofen. The famous ace laughed good-naturedly at the unlikely autograph-seeker and asked, "What's the hurry? Are you afraid I won't come back?"
This incident may or may not have actually occurred, but if so, one could only hope that the _Rittmeister_ signed the autograph. It would be of great historical value, for he would never sign another. In a matter of minutes, the pilot known worldwide as the "Red Baron" lay dead on the ground—after making one of history's most controversial flights of no return.
_Rittmeister_ Manfred _Freiherr_ von Richthofen. The eighty-victory German ace wears at his throat the coveted _Pour le Mérite_ ("Blue Max"), the German Empire's highest award for bravery. _National Museum of the US Air Force_
Manfred von Richthofen, in cold-weather flying gear. In the background is a Fokker Dr.I (_Dreidecker_) triplane. _Courtesy Peter Kilduff_
### **The Red Baron**
Manfred von Richthofen was born in 1892 near Breslau, Lower Silesia (now Wrocław, Poland). The young aerial warrior was a Prussian nobleman—hence the title _Freiherr_, or Baron. He was also the highly esteemed commander of Germany's first and foremost fighter wing, _Jagdgeschwader_ I. The German ace's greatest claim to fame, however, was his unprecedented success against enemy aircraft: with eighty official kills to his credit, he was the highest-scoring and most highly regarded fighter ace of World War I.
The bold and innovative _Rittmeister_—a rank that equates to a captain in the cavalry, where Richthofen began his career—also had a flair for the dramatic. To make himself more visible to both friend and foe, he flew an airplane painted from stem to stern a brilliant blood-colored red. This earned the notorious ace the nickname by which most people remember him today.
Because of his accomplishments, he was—at the age of twenty-five—a living legend, the superstar of his day. Respected by friend and enemy alike, he was mobbed by admirers everywhere he went, and his image appeared in newspapers, magazines, and films and even on postcards. He already had all of the German Empire's highest decorations, including the most coveted of them all: the _Pour le Mérite_ , also known as the Blue Max. Accounts of his many thrilling exploits appeared everywhere, including in a bestselling autobiography he had recently completed; and as his country's preeminent war hero, he received daily bags full of letters and packages from adoring fans throughout Germany. Some were perfumed and lace-decorated epistles from love-stricken _Fräulein_ , both young and old. A few even boldly sent to the handsome and heroic aristocratic ace enticing pieces of highly personal apparel, along with offers of marriage—and anything else he might desire. It was good to be a famous flying ace in the First World War. That is, until the day when the Grim Reaper made his unwelcome appearance.
### **A Very Long Shot**
For Manfred von Richthofen, the angel of death came on the Sunday morning of April 21, 1918. As to what ultimately happened to the celebrated ace, there is no mystery. He took off that morning, accompanied by pilots of _Jagdstaffel_ ( _Jasta_ ) 11, one of the four fighter squadrons he commanded. A few minutes after takeoff, they became embroiled in a fierce dogfight with an aggressive formation of Sopwith Camel biplane fighters from British Royal Air Force No. 209 Squadron. During this aerial "dance of death," Richthofen managed to maneuver his aircraft onto the tail of an inexperienced Canadian pilot, Lt. Wilfrid R. "Wop" May. The young Canadian glanced behind and saw the dreaded all-red Fokker triplane attached like a leech to his tail. Though still a rookie, he immediately grasped the gravity of the situation. He was in deep trouble.
Lieutenant Wilfrid R. "Wop" May, the young Canadian pilot that Richthofen was furiously pursuing on the morning of his death. May survived the war to enjoy a long and distinguished aviation career.
The Canadian frantically began throwing his highly maneuverable little Sopwith all over the sky. He somehow had to avoid the deadly hot steel slugs that would soon be streaming from the triplane's twin Spandau machine guns—and possibly boring into his unprotected back. His maneuvers were so extreme, they would have been comical under different circumstances, but they accomplished their intended purpose. Richthofen—though a marksman second to none—was unable to get a good bead on the terrified novice who was flying in such an unorthodox manner. Uncharacteristically, the German ace—undoubtedly becoming frustrated—stubbornly stayed with his quarry, even though they had by now crossed over into Allied lines and were getting dangerously low to the ground. This risky scenario was one that even the Red Baron normally avoided. The chance of being hit by enemy ground fire was too great and, if forced to land in enemy territory, his career would end as a prisoner of war.
Meanwhile, Lieutenant May's commander, fellow Canadian and former schoolmate Capt. Arthur Roy Brown, was watching intently from above, all too aware of the fledgling pilot's deadly dilemma. Brown quickly turned his Camel toward the red Fokker seemingly glued to the tail of May's twisting Sopwith. Unlike May, he was no beginner. An experienced and highly accomplished nine-victory ace, he knew exactly what he was doing. As he dived from above and behind, he began to overtake the red triplane.
Fokker triplane Dr.I 425/17, the fighter in which the Red Baron died on April 21, 1918. To this day, no one knows who shot him down. _Courtesy Peter Kilduff_
Sopwith F.1 Camel of the type Lt. "Wop" May and Capt. Roy Brown were flying on the day that Richthofen died. _National Museum of the US Air Force_
Brown fully realized that May was running out of time. If he did not do something immediately, his young friend and compatriot was doomed. For this reason, Brown—though still well out of range for an accurate shot—squeezed off a long and erratic burst of fire from the two Vickers machine guns mounted in front of his cockpit. The bullets streamed like water from a hose in the direction of the determined German ace. From that distance, Brown could not hope to hit anything vital, but perhaps the arrows of smoke in the sky created by his tracers would at least distract the German away from May.
Royal Air Force Capt. Arthur Roy Brown, the only person ever officially credited with shooting down Manfred von Richthofen. Despite this, most historians believe that a gunner on the ground fired the fatal bullet. Brown died in Ontario, Canada, in 1944.
As luck would have it, Brown's strategy worked. In fact, it worked far better than he could have imagined. The red Fokker abruptly disengaged from the terrified Lieutenant May and descended to a crash landing in a clearing below, near the town of Corbie. The relieved Captain Brown probably marveled at how he had managed to score a decisive hit on the red triplane. It was, without a doubt, a lucky shot; but any combat pilot knew that luck was a big part of success—and he was not complaining.
The blood-red triplane glided—apparently still under control—into the ground directly in front of the Allied troops fighting there. The men cheered loudly as the small fighter hit hard and skidded to a halt in a rough open field. They ran over to the airplane to capture the seemingly uninjured enemy pilot but instead found him taking his last breath. He died of what was later determined to be a single .303-caliber slug through the chest.
The news spread like wildfire: The Red Knight of Germany was dead! Captain Brown put in his claim and quickly received official credit for downing the famous ace. Lucky shot or not, the airplane at which Brown had been firing had gone down. Numerous witnesses both on the ground and in the air saw him shooting at Richthofen just before his red triplane crashed in Allied territory. No one could dispute that Brown was the victor—especially since there was no one else in the vicinity to counter his claim. Or was there?
It is at this point that the controversy begins. It soon became apparent that there had been others—dozens of others—shooting at the bright red airplane, but they were not Camel pilots. They were the Australian riflemen and machine gunners stationed in the vicinity of where Richthofen had skimmed the ground in pursuit of the hapless Lieutenant May. At the same time Brown had been shooting _down_ at the low-flying red triplane, the Australian ground gunners were furiously firing _up_ at him. The ammunition they used was .303 caliber—identical to Captain Brown's. At least three of these ground gunners believed that they had the most legitimate claim to the biggest prize of the war, and they continued to believe it for the rest of their lives.
The field, as it appeared recently, where the Fokker triplane flown by _Rittmeister_ Manfred _Freiherr_ von Richthofen came to rest on April 21, 1918. _Steve Miller_
### **Who Fired the Silver Bullet?**
The question has remained unanswered for nearly a century: Who shot down the Red Baron? Like all good mysteries, the solution is complicated and the truth anything but clear-cut. Practically everyone with any interest in the topic has an opinion—and opinions vary widely—but the truth is that no one really knows. This is in spite of the great lengths to which numerous historical and analytical researchers have gone since 1918, attempting to arrive at the correct answer.
Captain Brown immediately comes to the forefront as the fatal shooter. He was, after all, the only pilot to claim credit for shooting down the German ace. For that matter, he was, in all likelihood, the only flier even to fire at Richthofen that morning. Both Brown and May were convinced that he had fired the fatal shot, and he is still the only person ever officially credited with killing the Red Baron.
Assignment of official credit, however, did nothing to settle the controversy. Doubters have long claimed that Brown's desperate attempt was—both literally and figuratively—a long shot. Brown, in his frantic attempt to distract Richthofen from blasting the struggling Lieutenant May out of the sky, fired all of his rounds not only from well out of range, but also while in a steep high-speed dive. Under the circumstances, it would have been little short of a miracle had he actually gotten a slug into the famed German ace.
But in aerial combat, unlikely occurrences are not unknown. Richthofen himself had already fallen victim to one such fluke only nine months earlier. On July 6, 1917, an anxious gunner in a British observation aircraft opened up on the Baron when he was still ridiculously far out of range. As Richthofen derisively watched the frightened observer's pitiful waste of ammunition, he received a sudden, unanticipated, and crushing blow to the head that knocked him senseless. One of the gunner's slugs had somehow found its way to Richthofen's skull. It was a glancing blow, but a serious one, and he barely got down alive. It was proof positive that even a chance hit from a mile away could be dangerous. Perhaps Brown had also been lucky enough to fire off a "silver bullet" that day, which managed to find its way to the unfortunate German ace.
Perhaps, but probably not. Unfortunately for Brown, the forensic evidence suggested otherwise. The trajectory of the fatal bullet wound in Richthofen's body indicates that it hit him not from above and behind, where Brown was blasting away, but rather from the lower right. Just as important, Richthofen continued flying apparently unfazed for some time after Brown fired at him. With the severity of his wound—a bullet ripping through the chest from right to left, destroying vital organs and major blood vessels—he could probably not have remained conscious for more than a few seconds. These two facts would seem to eliminate even a very lucky Roy Brown as the fatal shooter.
The preponderance of evidence indicates that it was someone on the ground who killed the Red Baron. While chasing Lieutenant May, Richthofen hedgehopped at ground zero for a considerable distance—at one point narrowly missing the church steeple in the French village of Vaux-sur-Somme. As he flew over the Morlancourt Ridge overlooking the river Somme, dozens of enemy ground troops were firing up at him. Any one of these might have proper claim to the Baron's scalp.
One of the most interesting and novel attempts to determine the real shooter was a re-enactment conducted by the producers of Discovery Channel's popular series _Unsolved History_. They employed a team of experts from several different specialty areas to determine who shot the Red Baron. Using computer flight simulation—along with data collected from an authentic .303-caliber rifle, Vickers machine gun, and rotary engine—they reconstructed gunfire trajectories from Brown's Camel to Richthofen's triplane. The researchers diligently factored in rate of fire, relative speeds and distances, engine vibration, and aerodynamic considerations. Their conclusion: the likelihood of Brown scoring any hits at all was somewhere between slim and none.
In the second part of their experiment, they traveled to France to the actual site of Richthofen's last flight. Substituting a modern airplane of similar speed to Richthofen's, and laser beams for real bullets, they re-enacted the last two minutes of the famous dogfight, down to the minutest detail. As the airplane flew over, replicating Richthofen's exact flight path, the investigators on the ground "shot" at it from the known positions that Australian machine gunners had occupied. After comparing their findings to pertinent documents, they concluded that a gunner named William "Snowy" Evans had most likely fired the fatal shot. This conclusion disagreed with that of earlier researchers, who had favored machine gunners Cedric Popkin or Robert Buie as the most likely shooters. Though this realistic re-enactment was compelling, any conclusive answer to the mystery remains as elusive as ever.
### **Fatal Decision**
Another remaining question is why Richthofen chose to ignore his own rule and follow Lieutenant May down to the deck behind enemy lines. One of the key points in the air-combat manual he himself had written was that a pilot should never stubbornly pursue an adversary far behind enemy lines. He had always followed that rule, and he strictly forbade his own pilots from such practice. Still, for reasons unknown, he did just that... and paid for the transgression with his life.
Manfred von Richthofen was a complex, rather peculiar individual. Although the many books, articles, and documentaries about him would fill a library, little is really known about his inner self. Undoubtedly, he was confident, charismatic, and generally well liked; however, he tended to be somewhat distant in nature. Few, if any, of Richthofen's comrades qualified as close friends other than perhaps his own brother and fellow ace, Lothar. Nor did he apparently have any close female relationships, in spite of the unlimited opportunities his unique status afforded him—and in spite of rumors to the contrary.
One thing that seems certain is that Richthofen was suffering from what amounted to combat fatigue—or, in modern terminology, post-traumatic stress disorder (PTSD). Three and a half years of dangerous wartime service, the latter half of which involved almost continuous aerial combat, had stretched his nerves to the breaking point. He was tired, depressed, and in need of a long rest. In addition, he was still suffering from the severe head wound he had received the previous July. Part of the bone from his skull was still exposed, causing him severe pain. Given his physical and mental state, it seems clear that he should not have even been on flying status the day he died. As translated by Frank McGuire in his book _The Many Deaths of the Red Baron_ , Richthofen grimly wrote shortly before his death:
I feel terrible after every air battle, probably an after-effect of my head wound. When I again set foot on the ground I withdraw to my quarters and don't want to see anybody or hear anything. I think of the war as it really is, not "with a hurrah and a roar" as the people at home imagine it; it is much more serious, bitter.
Just as relevant, though less obvious, was Richthofen's unique cultural mindset. His attitude was perhaps typical of most aristocratic young Prussian military officers of the early twentieth century—especially those who had begun their military careers as early as the age of eleven, as had Richthofen. The importance of courage, perseverance, devotion to duty—and whatever else _Gott und Vaterland_ required—was ingrained in him to his very core. Certainly, Richthofen was not one to shirk his duties. Again, as related in McGuire's book, he tellingly wrote near the end of his life:
Higher authority has suggested that I should quit flying before it catches up with me. But I should despise myself if, now that I am famous and heavily decorated, I consented to live on as a pensioner of my honor, preserving my precious life for the nation while every poor fellow in the trenches, who is doing his duty no less than I am doing mine, has to stick it out.
In spite of his unhealed wound and fatigued mental state, Richthofen would "stick it out" until he could no longer. It mattered little to him that he would have been just as useful commanding and advising from the ground, or that he had already given everything he had for his _Vaterland_. Research psychologists Thomas Hyatt and Daniel Orme, in their 2004 article "Baron Manfred von Richthofen—DNIF (Duties Not Including Flying)," theorized that Richthofen's head wound caused him to suffer from a type of brain dysfunction that compelled him to persist in his flying, even knowing it would soon kill him. Such "mental rigidity" may have also caused the target fixation he experienced when he chased Lieutenant May down low on his fatal foray into enemy territory. Richthofen's mother, however, saw her beloved son in a simpler light. She wrote after her last visit with him, "I think he has seen death too often."
After nearly a century of controversy, no one will ever know conclusively who shot down the legendary German ace. It was most likely a machine gunner on the ground... or maybe it was Capt. Roy Brown after all. Perhaps even some unknown rifleman hit the jackpot and never even suspected it. So, despite all efforts to dispel the mystery of Manfred von Richthofen's unforgettable last flight, the mystery lives on—and with it, the remarkable legend of the equally remarkable man they called the Red Baron.
## CHAPTER NINE
## **A MADMAN'S JOURNEY TO NOWHERE**
**"A STORY UNLIKE ANY OTHER IN THE HISTORY OF AERIAL EXPLORATION."**
On July 11, 1897, three adventurous Swedish explorers took off in a balloon from an uninhabited Norwegian island in the Arctic Ocean. Their grand intention was to fly all the way to the North Pole. If successful, they would be not only the first humans to fly to this frozen sea of ice that constitutes Earth's northernmost region, but also the first to lay eyes on it.
Soon after lifting off, they drifted out of sight into the gray northern sky. No one would ever see them alive again. It would take thirty-three years for unexpected events to reveal their tragic fate. People the world over would then finally learn where and under what circumstances the three adventurers met their end. They would also have the unique opportunity to relive—through the words and eyes of the long-dead explorers themselves—their ill-fated flight and desperate three-month death march to nowhere. This historic flight was only the beginning of a story unlike any other in the history of aerial exploration.
### **An Impossible Dream**
In 1897, getting to the North Pole was about as easy as going to the moon. Over the centuries, explorers had made one unsuccessful attempt after another to reach the mysteriously elusive pole. There were plenty of reasons for their consistent lack of success. The North Pole lies some five hundred miles from the closest human habitation, so virtually nothing was known about it in the late nineteenth century. No one even suspected that, unlike its diametric opposite, the South Pole, it is not a landmass at all. Rather, this geographical northernmost point on the earth is merely a moving accumulation of sea ice floating in the midst of the Arctic Ocean. The only way to get there was to sail through the icy waters as far as possible, and then travel the remaining few hundred miles on foot. This meant weeks of pulling heavy sledges loaded with life-sustaining provisions over the treacherous, ever-changing icy terrain—in some of the worst weather conditions that exist on Earth.
The explorers who were willing to go to such lengths to get to one of the world's most inaccessible and inhospitable places all had their reasons. Some went in search of a so-called Northwest Passage, a northerly trade route connecting Europe to the Far East via the Arctic Ocean; others sought a new world and the possible riches contained therein. But the underlying reason they were willing to risk everything to get to the North Pole—including their very lives—was simply to be the first.
As of 1897, reaching the pole had proven not only impossible, but also extremely deadly. Hundreds of men had already perished in various past attempts. Some were lost sailing into the dangerous, icy waters encircling the North Pole; others simply disappeared after trudging into the uncharted expanse of the polar ice cap. It seemed that there was simply no way to get to the pole—unless one was able to grow wings and fly there.
### **To the Pole by Air**
Human flight began long before the Wright Brothers' first powered flight of December 17, 1903. Since the first manned balloon flight in 1783, balloons and airships filled with buoyant gases—hot air, hydrogen, or helium—had made hundreds of long-distance flights, soared to great altitudes, and achieved other significant aerial accomplishments. By the end of the nineteenth century, balloon flight had become nearly routine.
It was for this reason that Swedish engineer Salomon August Andrée chose a novel approach to becoming the first man to the North Pole. He would sail there—not through the hazardous icy waters—but above them, soaring on air currents. Andrée, born in 1854, was a graduate of Sweden's Royal Institute of Technology and an official at the Swedish Patent Office. His interest in exploration was aroused in 1882 with his participation in a scientific expedition led by meteorologist Nils Ekholm to the Norwegian island of Spitsbergen. After this, the lust for adventure was in his blood.
Andrée's fascination with ballooning began during a visit to the United States, where he met the famed American aeronaut John Wise. Andrée eventually learned to fly, and in 1893, he acquired a balloon for his own use. In it, he made nine rather eventful solo flights. In one case, a fierce wind swept him from Sweden across the Baltic Sea and all the way to Finland. He accumulated some forty hours aloft, experimenting with different ballooning techniques and recording a variety of meteorological observations. With his newly developed aeronautical skills, he now directed his attention to his ultimate goal: the conquest of the North Pole.
Andrée calculated that a flight to the pole would require a balloon large enough to carry three men and enough equipment and provisions to survive for four months on the ice. It would have to inflate just before launch, with hydrogen gas manufactured onsite, stay aloft for up to thirty days, and be steerable. These demanding specifications were unprecedented, but Andrée maintained they were possible. In such a balloon, he could be the first human to the North Pole.
He would launch in the summer, when Arctic temperatures were at their most moderate, averaging around thirty-two degrees Fahrenheit. This also gave him the advantage of constant daylight—summer being the season of the midnight sun in the northern latitudes. He and his fellow fliers would drift with the wind northward, while controlling the balloon with a system of guide ropes and sails.
Andrée christened the balloon _Örnen_ , or eagle. Built in France, this highly innovative craft featured an envelope made of layers of silk varnished together. It measured about a hundred feet in height and sixty-seven feet in diameter. Andrée designed the balloon with a series of heavy ropes hanging beneath, which would serve both as ballast and as anchors when allowed to drag across the ice. The balloon's other important feature was its three sails, which the aeronauts could angle in flight. When the heavy ropes dragging across the ice slowed the balloon to below wind speed, these sails would allow the breeze to push the balloon sideways in a slightly different direction than that of the wind. With this unproven—and somewhat questionable—system, Andrée hoped to regulate the balloon's ascent, descent, speed, and direction.
Andrée chose as his two flight companions Nils Ekholm—the meteorologist who had led the 1882 to 1883 expedition to Spitsbergen—and Nils Strindberg. Andrée assigned the latter, a handsome twenty-four-year-old physicist at Stockholm University, the task of photographically documenting the expedition. Having just completed balloon training in Paris, Strindberg would also assist in the aeronautical duties.
Even with the best planning and equipment, the mission was by any measure exceedingly bold, perilous, and—in the opinion of many at the time—outright foolhardy. Many prominent scientists of the day scoffed at Andrée's proposed attempt as a stunt, rather than a serious scientific expedition. They did not believe such a mission was possible, given then-known Arctic meteorological conditions and the limitations of existing balloon technology. One called it "a madman's journey." Andrée, the consummate engineer, was determined to prove to these naysayers that, with technology, all things were possible.
The Andrée expedition launching point at Dane's Island (Danskøya). This uninhabited Norwegian island lies just off the coast of Spitsbergen. Both are islands in the Svalbard archipelago, located in the Arctic Ocean some six hundred miles north of Norway. _Library of Congress_
### **Up and Away!**
Andrée's first attempt to fly to the pole in the summer of 1896 never got off the ground. An uninhabited Norwegian island in the Svalbard archipelago called Dane's Island (Danskøya) was his launching point. This tiny island, located in the Arctic Ocean more than six hundred miles north of the coast of Norway, was as close to the pole as practically possible. As Andrée and his fifty-one-member expedition departed Sweden, a cheering crowd of forty thousand saw them off, expecting great things from the newly appointed national heroes.
When the 1,800-mile voyage to Dane's Island was complete, Andrée disembarked with his team and began constructing housing for the balloon. Here they would assemble, inflate, and service the balloon in a protected environment. Accompanying his expedition was a sizable entourage of reporters and tourists, which gave the scientific event an almost festive atmosphere.
When the balloon was ready for flight, the three aeronauts waited for the wind that would carry them northward to the pole. After six weeks, however, the wind never came. At summer's end, the disgraced would-be explorers packed up and returned to Sweden, as one cynic wrote in a contemporary newspaper article, "with their balloon tucked between their legs." They would have to wait for the following summer.
The three members of the ill-fated Andrée expedition. _From L to R_ : Knut Frænkel, Salomon August Andrée, and Nils Strindberg.
Andrée faced more problems after returning to Sweden. Crew member Ekholm had begun to express doubts about the mission. Among other concerns, he questioned whether the balloon could retain enough hydrogen to complete the flight. Eventually, he backed out altogether. To replace him, Andrée chose an athletic twenty-seven-year-old engineer named Knut Frænkel. Like his fellow crew member Strindberg, Frænkel prepared for the upcoming flight by taking instruction in the art of ballooning. Even so, the three men who would attempt to balloon their way across the Arctic Ocean to the North Pole had among them a total of only twenty-seven flights.
In June 1897, the Andrée expedition returned to Dane's Island for the second—and last—attempt. Once more, they inflated the completely untested balloon, prepared for flight, and waited for the wind. It had to be strong enough to push them northward all the way to the pole, but not so strong as to destroy the balloon during the launch. Andrée estimated, perhaps naively, that with the right conditions they might arrive at the pole within two to three days; but in case it took longer, he assured the public the balloon was capable of remaining aloft for at least a month. While the aeronauts waited, they continually applied varnish to plug recurring gas leaks in the balloon—a red flag they seemed unwilling to acknowledge. They had come too far to back out now.
Finally, on July 11, 1897, a favorable—though sporadic—wind arose over the waiting expedition. Andrée was not convinced, but upon urging from his two impatient young traveling companions, he finally agreed that conditions were minimally acceptable. He realized this might be the only chance they would ever get, and he did not intend to return to Sweden, as before, without at least an attempt. Consequently, the three balloonists donned their flying clothes, loaded last-minute items, and climbed into the basket suspended below _Örnen_ 's inflated envelope. At 2:30 that afternoon, Andrée ordered the restraining ropes cut. The balloon rose from the shelter and up into the wind.
Observers saw it rise rapidly to several hundred feet and then just as quickly descend back to the water over which it was flying. The basket in which the men were riding dipped into the bay while the three occupants furiously dumped sand ballast. The balloon abruptly ascended again, shearing off the ends of the all-important guide ropes, and drifted away. The three intrepid aeronauts were last seen fading into the northeastern sky. It had been a very rocky launch—an omen of things to come—but they were finally on their way to the North Pole.
### **Lost!**
Days passed while followers the world over held their breath. One of the carrier pigeons Andrée had taken along turned up four days after their departure, landing in the rigging of a seal-hunting ship operating near their launch site. The boat's captain, anticipating fresh fowl for his evening meal, shot the bird—but it fell into the water, so he proceeded on without it. When he later learned it might have been one of Andrée's pigeons, he sailed back and retrieved the dead bird still floating on the surface of the water. The attached message, which Andrée had written two days after the balloon's departure, stated that all was well. His coordinates indicated they had traveled 170 miles northeast at an average speed of about four miles per hour. This proved to be the only one of Andrée's thirty-six pigeons that ever appeared.
In the ensuing months, false reports about the three missing explorers came from all corners of the world. "Andrée pigeons" seemed to turn up everywhere—someone even reported one in downtown Chicago. There were balloon sightings and word of strange noises on deserted islands in the Arctic area. It was variously reported that the explorers had crossed the North Pole; landed in Siberia; died at the hands of hostile Eskimos; or had fallen into an abyss at the pole and were now at the center of the earth.
Searchers combed all the likely areas that they were able to access but found no trace of the lost balloonists. It was as if they actually had fallen to the center of the earth. After several months, it became obvious that regardless of where they landed or how well equipped they were, they must by now be dead. The years rolled past, and in time, Andrée and his ill-fated expedition faded into distant memory... that is, until a fortuitous discovery three decades later put his name back in the headlines.
### **Ghosts Arisen from the Past**
On August 6, 1930, walrus hunters from the Norwegian ship _Bratvaag_ climbed onto an uninhabited strip of ice and rock in the Arctic Ocean called White Island (Kvitøya). This tiny, barren easternmost member of the Svalbard archipelago lies approximately 250 miles east of Dane's Island, where Andrée and his two comrades had launched thirty-three years earlier. What the hunters discovered there was far more significant than walrus—on a rocky clearing on the southwestern corner of the island, they stumbled upon what had once been a human campsite. It soon became apparent to the hunters that they had discovered the final resting place of the long-lost Andrée expedition.
The hunters found a multitude of artifacts strewn around the area: tools, scientific instruments, guns, ammunition, items of clothing, a boat, sledges, a still-functional camp stove containing fuel, and a variety of written records. The most significant find, however, was the frozen skeletal remains of two human bodies—Andrée's and Strindberg's, as it turned out. At least it was now known where two of the three doomed Swedish explorers had ended their lives, but other questions remained: How did they die? Under what circumstances did they land their balloon and arrive at this depressing little spot in the middle of nowhere? And where was the third member of the expedition, Knut Frænkel?
It did not take long to find the answer to the last question. A group of journalists on the island investigating the campsite discovered the body of Frænkel, lying frozen into the ice some distance from the other two bodies. At least now, they could account for all three members of the expedition. But what about the other unanswered questions?
The artifacts and remains of the three lost explorers were brought home on the _Svenskund_—the same ship that had taken them to Dane's Island in 1897. It arrived in Stockholm, escorted by a flotilla of warships, two hundred civilian boats, and an aerial formation. The bells tolled throughout the city as Sweden's King Gustav V personally accepted their bodies "in the name of the Swedish nation." Four days of military salutes and memorials followed, after which, the mortal remains of the three heroes were cremated. By now, news of the discovery of the lost Andrée expedition had made headlines throughout the world.
Among the various records the three men kept of their activities were logbooks, letters, maps, journals, and diaries. The most descriptive of these was the account Andrée faithfully maintained to the very end. Another auspicious find supplemented this: several canisters of undeveloped film containing images that Strindberg had shot during the expedition. Amazingly, after thirty-three years in the harsh Arctic environment, photo experts were able to develop ninety-three of these exposures. The shadowy, ghostly images that appeared, together with the lost explorers' own written words, effectively brought them back from the dead. What emerged was a story unlike any other ever known.
### **Ordeal on the Ice**
After their tumultuous takeoff at Dane's Island, the three balloonists drifted northeastward. Eventually they entered fog, which cooled the balloon's buoyant hydrogen gas and caused it to descend to the ice. The balloonists dumped sand, anchors, tools, even food—but for hours, their basket continued to bump heavily across the uneven ice through the fog, tossing the occupants about mercilessly.
By the morning of the third day, July 14, the flight was finished. The slowly deflating balloon had alternated between flight and bumping along the ice for sixty-five and one-half hours; still, it was three hundred miles short of its polar destination. When the three men stepped out onto the ice, they sadly accepted the realization that they were as close to the North Pole as they would ever be. They had failed, and now—hundreds of miles from anywhere—they had to find some way to survive. They took the next few days to reorganize and pack their remaining 1,600 pounds of gear into sledges. On July 22, they began manhandling them southeastward toward a pre-arranged depot in the Franz Josef Land archipelago.
Slowly and laboriously, the three grounded aeronauts began what was to be their death march, dragging their sledges—each weighing several hundred pounds—across what Andrée described as "dreadful terrain." They struggled through frigid temperatures, snow, rain, and fog, and over the slowly drifting, but ever-changing and treacherously uneven, ice floes. They alternately skirted around, climbed, or hacked their way through walls of ice, and crossed the endless leads of water running between them. Repeatedly, they had to fish one another out of the water after falling in. On some days, they managed only a few hundred yards. They slept on the ice and supplemented their diet whenever possible with meat from the bears, seals, and gulls they were able to shoot.
The summer days were literally endless, with the sun still hanging above the horizon at midnight. This, however, would not continue much longer. After a few weeks, the cold unending days would give way to much colder and equally endless nights. Racing against the oncoming winter, the three men had to find a safe haven. Meanwhile, they suffered bitterly as they inched southward, fighting for their lives against snow blindness, skin sores, frostbite, diarrhea, bruises and dislocated joints, viral infections, and muscle cramps.
After two weeks of this torturous travel, they discovered to their utter dismay that the ice on which they were traveling was drifting west faster than they were walking east. They therefore changed directions and headed southwest toward a different cache of provisions on Seven Islands (Sjuøyane) at the northernmost part of the Svalbard archipelago. Again, however, the unpredictable movement of the ice thwarted their plan. It began carrying them away from their only lifeline. With life-saving provisions located to the east and west, they were traveling almost due south—and ever deeper into no man's land. On September 1, Andrée saw the sun "touch the horizon at midnight." The polar summer was about to end; a long, cold, dark—and depressing—winter would soon begin.
Salomon Andrée and Knut Frænkel, soon after their balloon _Örnen_ (eagle) was forced down onto the ice. The third member of the expedition, Nils Strindberg, snapped the picture. The quality of Strindberg's photographs is remarkable, given that the film from which they were developed lay hibernating in the polar wilderness for thirty-three years.
Two weeks later, the three men—by now half-dead from injury and exhaustion—sighted land for the first time in sixty-eight days. It was White Island. It took until October 5 for the ice on which they drifted to bring them close enough to struggle ashore. They started to set up camp and began collecting driftwood to build a shelter but got no further. On October 6, Andrée made his last legible diary entry, and within days, all three were dead.
It remains a mystery why the three explorers, who had so tenaciously managed to travel this far, suddenly gave up. They had made it to dry land and still had enough life-sustaining provisions to survive the winter. Researchers have suggested various scenarios for their demise: attacks from marauding bears; lead or carbon monoxide poisoning; vitamin A overdose from eating toxic bear livers; trichinosis contracted from bear meat; botulism from tainted seal meat; scurvy—and even murder-suicide. Any of these is possible, but perhaps the real reason was something less clinical. Maybe they simply surrendered to the mental depression and utter exhaustion they felt after nearly three months in the frozen hell, the prospect of a long winter of mind-numbing cold and eternal darkness, and the hopeless despair of imminent doom that must have pervaded their very souls. No one will ever know.
Frænkel and Strindberg posing with a polar bear they had just shot. Fresh game would be an important source of nourishment for the three lost explorers. It may have also been a contributor to their demise.
An excellent view of how the three downed balloonists spent their last days on earth. For more than two months, they dragged sledges like this one, each packed with hundreds of pounds of life-saving provisions, across treacherous ice floes and over and through walls of ice. Soon after arriving at White Island (Kvitøya), all three men died—for reasons still unknown.
Andrée's native country honored him as a hero. His audacious attempt stimulated nationalistic pride like few other events in Sweden ever have. More recently, however, some have come to view his ill-advised attempt in another light: Andrée, though undoubtedly a courageous visionary, was also a victim of self-deceit. He almost certainly realized that the expedition had little if any chance to succeed. He was well aware of the permeability of the balloon's envelope, the unproven nature of its steering mechanism, and the unpredictability of the Arctic winds. However, in light of his previous failure, the hero's sendoff his nation had given him, and the many prominent contributors—including Sweden's King Oscar II—who had helped finance the expedition, he felt obligated to make the attempt, hopeless or not. Lacking the moral courage to do the only sensible thing and cancel the expedition, he elected instead to save face, thereby dooming himself and his two young comrades to a needlessly cruel and premature death.
The ill-fated voyage of the Salomon Andrée expedition. The three men drifted an undetermined distance northeast from Dane's Island in their balloon before coming to rest on the ice, well short of the North Pole. Their death march southward ended on White Island.
A few years after the failed Andrée expedition, two different explorers claimed the honor of being the first to the North Pole: Frederick Cook in 1908 and Robert Peary in 1909. Significantly, neither of them did it by air. It is also noteworthy that there still exists some doubt that either actually succeeded in achieving what they claimed.
No one disputes, however, that Salomon Andrée and his two intrepid comrades were the first to attempt the North Pole by air. They paid for it with their lives. But their "madman's journey" remains one of aviation history's earliest and most memorable one-way flights to eternity.
## CHAPTER TEN
## **THE CONGRESSMEN WHO VANISHED**
**"SOMETHING TERRIBLE IS GOING TO HAPPEN."**
On the foggy morning of October 16, 1972, a small twin-engine airplane with four men aboard lifted off from Anchorage International Airport, Alaska. Their destination was the state capital, Juneau, some six hundred miles to the southeast. It was a long flight for a light aircraft, especially considering the remote and unfriendly terrain below. But by Alaskan standards, it was routine. In fact, the only unusual thing about this particular flight was that one of America's most powerful political leaders was among the three passengers.
Shortly after takeoff, the pilot of the chartered Pan Alaska Airways Cessna 310C radioed his flight plan to the Anchorage Flight Service Station. He intended to follow the commonly used navigational airway that roughly paralleled the shoreline of the Gulf of Alaska to the coastal town of Yakutat. From there, he would fly a direct course into Juneau International Airport. He estimated the total flying time at three and one-half hours. Those on the ground at Anchorage watched as the small twin slowly faded into the southeastern sky. Neither the plane, nor the four unsuspecting men aboard were ever to be seen again.
### **A Premonition**
Aboard the airplane were Nicholas J. Begich, a first-term Democratic Representative from Alaska, and Thomas Hale Boggs Sr., a Congressional Representative from Louisiana. The latter also happened to be the reigning majority leader of the US House of Representatives—which made him the second-ranking Democrat in the House. The third passenger aboard the aircraft was Begich's aide, Russell L. Brown. Brown felt fortunate to be on the flight, since another one of Begich's assistants had given up his seat as a favor. They were on their way to a Democratic fundraiser for Begich, who was actively campaigning for a second term.
Thomas Hale Boggs Sr. was a fourteen-term veteran congressional representative from Louisiana. As majority leader of the US House of Representatives, he was the second-ranking Democrat in Congress. He and three other men disappeared during a routine flight from Anchorage to Juneau on October 16, 1972. _Collection of the US House of Representatives_
The forty-year-old Begich was an up-and-coming young politician with a bright political future. He was also a family man, with a wife and six children ranging in age from four to fourteen. He was undoubtedly thrilled and honored to have the prestigious Boggs stumping with him on the campaign trail. It was a powerful endorsement for the well-connected majority leader to travel all the way to Alaska to assist in his campaign.
While Begich's election seemed a sure bet, one of his closest friends and supporters was not so optimistic. Margaret Pojhola had a premonition after seeing him on the evening before his last flight. She confided to her husband that she felt "something terrible is going to happen."
The venerable Boggs was a fourteen-term veteran of Congress. Through the years, he had worked his way up through the congressional ranks to his present position as House majority leader. As such, the fifty-eight-year-old legislator was the most likely candidate to become the next Speaker of the House of Representatives, a position that would have placed him second only to the vice president in the line of succession to the presidency. Boggs was, by any measure, one of the most prominent political figures of his day.
The pilot of the charter plane was thirty-eight-year-old Don Edgar Jonz. He also happened to be the president, chief pilot, and sole stockholder of Pan Alaska Airways, Ltd. By all appearances and credentials, he was an accomplished and highly qualified pilot who possessed all the necessary aeronautical ratings. He was not only a licensed commercial airline transport pilot, but also a certified flight and instrument instructor. From his years of flying as a bush pilot, Jonz had accumulated an impressive seventeen thousand flight hours. As further evidence of his expertise, he had recently authored two magazine articles on flying in the adverse weather conditions he so often encountered in the unpredictable Alaskan skies. There was really only one legitimate criticism of the blond-haired, athletic young pilot: his attitude. He had gained a reputation with colleagues for being arrogant, taking unnecessary risks, and sometimes neglecting to fly "by the book." Overall, however, he seemed eminently qualified to pilot his highly placed passengers anywhere—including the Anchorage-to-Juneau leg over the treacherous Alaskan landscape.
A Cessna 310C, of the type in which congressional representatives Hale Boggs and Nicholas Begich, congressional aide Russell Brown, and pilot Don Jonz disappeared on October 16, 1972. No trace of the men or airplane ever turned up. _Cessna Aircraft_
The airplane Jonz selected that day was also up to the task. A twin-engine Cessna 310C—the type made famous by "Sky King" in the classic TV series of the same name—was a popular plane with an excellent safety reputation. This particular aircraft, which carried the registration number N1812H, was more than ten years old. But it was well maintained, in good operating condition, and considered completely airworthy.
However, Jonz had not equipped this particular airplane with certain items of safety equipment, as verified later by the NTSB investigation. The Cessna did not have an autopilot or an anti-icing system. While neither was required by air regulations, they were useful when flying in the often-inclement Alaskan weather. Also missing were two other safety items that _were_ required. The Cessna was not carrying the survival equipment or Emergency Locator Transmitter (ELT) mandated by a newly enacted Alaska state law. In the event of a crash or forced landing in a remote area, the ELT would send out radio signals that rescuers could trace back to the downed aircraft. The need for survival equipment when flying over the remote and rugged Alaskan terrain was self-evident. The importance of these items was such that when Jonz radioed ten minutes after takeoff to file his flight plan, the Anchorage Flight Service Station specialist specifically asked him if he had the required equipment aboard, to which Jonz replied, "Affirmative."
The reason why Jonz lied about having the required emergency equipment is self-evident: to do otherwise would have been admitting that he was flying illegally. The more difficult question is why he failed to take this equipment with him in the first place. These were items he already possessed, so all he had to do was load them onto the airplane. Perhaps he forgot them, or maybe he felt he could not afford the extra weight—the NTSB later estimated that the small airplane, when loaded with the three passengers and their luggage, already exceeded its maximum recommended takeoff weight. No one will ever know for sure why Jonz failed to comply with this safety regulation.
### **Lost in the Wilderness**
Just what happened to the four men and their aircraft after takeoff has never been determined. They may have gone down over mountains, glaciers, or water, since their intended route took them over all of this terrain. It seemed that they had simply vanished into the vast Alaskan wilderness. Alaska is almost two and one-half times the size of the next largest US state, Texas—but it has only three percent as many people. Not only is it mostly uninhabited, the weather is often not conducive to flying and the terrain is typically hostile to even the most basic human existence. The state truly lives up to its official nickname, "the Last Frontier."
The route that pilot Don Jonz intended to follow from Anchorage to Juneau. No one knows where—or why—he and his three passengers ended their flight.
Many airplanes have disappeared in Alaska's remote skies, waters, and landmasses, never to return. This has prompted some to refer to its vast spaces as Alaska's Bermuda Triangle—a reference to the area of sea lying off the eastern coast of Florida that has similarly claimed numerous ships and aircraft.
Perhaps the earliest well-publicized aviation disappearance in Alaska's immense zone of the unknown occurred in 1937. That year, famed aviator Sigismund Levanevsky and his five crew members disappeared near the Alaska North Slope. Levanevsky, often called "Russia's Lindbergh," was attempting to fly from Moscow to New York City via the North Pole. The final resting place of the six intrepid Soviet fliers and their four-engine aircraft remains to this day a mystery. Likewise, finding Boggs and his missing Cessna in such a massive region made looking for a needle in a haystack seem easy.
### **An Unprecedented Search**
By 1:15 p.m. on October 16, the airplane carrying the four men was forty-five minutes overdue at Juneau. Accordingly, airport authorities notified the US Air Force Rescue Coordination Center at Elmendorf Air Force Base, near Anchorage. Personnel there quickly initiated a search operation, which ultimately became one of the most intensive air, land, and sea searches ever conducted. Though hampered at first by ground fog covering most of southeastern Alaska, dozens of civilian and military ships, airplanes, helicopters, and even high-performance jets, began the search for the missing Cessna. The Civil Air Patrol, Army, Navy, Air Force, and Coast Guard all participated. The Air Force even took the unprecedented action of sending its highly sophisticated and top-secret Lockheed SR-71 Blackbird reconnaissance jet to assist. It was the first time it had ever employed this phenomenal high-altitude spy plane for a search and rescue operation. From an altitude of fifteen miles, its powerful camera could photograph an incredible ninety thousand square miles every hour at extraordinarily high resolution, making it ideal for this particular mission.
The hunt continued for thirty-nine days—more than five and one-half weeks. It covered all possible routes that pilot Jonz might have taken. Air searchers flew more than a thousand sorties and logged 3,600 flight hours, as they crisscrossed nearly 326,000 square miles of the rugged terrain. However, in spite of its duration, thoroughness, and intensity, the massive multiservice search operation turned up nothing. Not a single trace of the orange and white Cessna, nor any of its occupants ever appeared, and not a single trace has surfaced to this day. On November 24, 1972, authorities ended the search. All those aboard the ill-fated charter plane were presumed dead.
The NTSB conducted a thorough investigation of the disappearance but was "unable to determine the probable cause of this accident from the evidence presently available." It declined even to venture a serious guess as to what had happened to the lost airplane and its occupants. The report further noted that the investigation would resume whenever the downed aircraft was located. However, as the years unfolded, not so much as a screw, sliver of aluminum, or shard of Plexiglas from the vanished airplane ever turned up.
### **A Mysterious Tip**
In nearly all unexplained high-profile disappearances—such as the Boggs case—the mysterious shadow of intrigue eventually materializes. This one was no exception, although it took twenty years to make itself apparent. In 1992, a Freedom of Information Act request brought to light some amazing and seemingly incriminating FBI documents. These cryptic messages revealed that authorities had received an apparently credible tip revealing the location of Boggs' downed plane only hours after its disappearance.
The telexes from the FBI's Los Angeles office disclosed information never before made public. They revealed that a man in the Long Beach area had contacted authorities to report what he believed was the location of the downed Cessna. He had some undefined connection with a top-secret private electronics firm that specialized in "highly sophisticated, experimental, electronic surveillance." The directions he gave were quite specific, pointing to a spot halfway between Anchorage and Juneau near Yakutat Bay and Alaska's largest ice field, Malaspina Glacier. The informant further revealed that the firm had detected, and was tracking, at least two survivors who had apparently departed the downed plane!
FBI officials considered this information "significant," and forwarded it to their headquarters in Washington, DC. A follow-up telex later verified that they considered the informant—though evasive about his background and the firm that had obtained the information—to be reliable and the information very possibly authentic. All pertinent names in the documents released in 1992 were blacked out, so it has been impossible to learn the identity of this mystery man or the company with which he was associated.
One popular theory alleges that the FBI intentionally kept this potentially life-saving information secret to prevent Boggs' rescue. The FBI actually did have a score to settle with the missing Majority Leader. On April 5, 1971, he had—on the floor of Congress, no less—compared the FBI's surveillance methods to those used by the Soviet Union's KGB and Nazi Germany's Gestapo. He ended his speech by calling for the removal of then-FBI Director J. Edgar Hoover. Hoover died a year later—five months prior to Boggs' disappearance—so he could hardly be blamed for squelching the anonymous tip; however, there probably was enough residual acrimony against Boggs within the bureau for the conspiracy spinners to make their case.
Part of the heavily redacted FBI telex describing the anonymous tip received about Boggs' missing plane. The mystery caller defined in specific terms where the downed aircraft was located. The next page of the telex states that this information was "immediately furnished [to the] US Coast Guard." Did the Coast Guard act on it? _FBI Freedom of Information Record_
The biggest problem with this theory is that its premise is false. Both FBI and US Coast Guard documents clearly verify that the FBI _did_ pass the anonymous tip on to the Coast Guard. Therefore, allegations that the feds kept the information to themselves or ignored it are false. What the Coast Guard did to follow up on the tip is unclear, but in the end, it did not matter. The thirty-nine-day search operation covered all the bases. Tip or no tip, Boggs' plane would almost certainly have been sighted had it been in any way visible. That it remained lost suggests that it was sitting under water or ice, or perhaps spread in a million small pieces across an icy hillside. As for the anonymous tip, no one has yet verified or refuted its legitimacy.
There is at least one other widely held conspiracy theory relating to Boggs' disappearance. It refers to his participation in the highly controversial Warren Commission—the panel tasked to investigate the assassination of President John F. Kennedy. The commission ultimately concluded that there was no conspiracy and that gunman Lee Harvey Oswald acted alone when he fired the fatal shots. Some evidence suggests that Boggs privately had reservations about the commission and its report, and that he may have even been considering reopening the investigation. This, in turn, marked him for assassination by whatever dark unknown force may have been behind the killing of President Kennedy.
In truth, no evidence exists to prove that anyone sabotaged Boggs' flight. No one can rule it out, since the plane was never found, but even Boggs' daughter, noted political analyst Cokie Roberts, has expressed her disbelief that there was ever any conspiracy against her powerful father. However, others see it differently. At least one member of the Begich family has publicly stated that there may have been foul play. It is unlikely that anyone will ever know for sure—at least until the missing Cessna turns up.
### **A Bad Case of "Attitude"**
Yet another scenario for the four men's demise seems more likely. It involves the abysmal weather into which Jonz flew. It failed to meet even the minimum criteria for the Visual Flight Rules (VFR) flight plan he filed soon after takeoff. There was dense fog and drizzle with limited visibility. Jonz filed a VFR plan instead of the more appropriate Instrument Flight Rules (IFR) plan for one simple reason: legally, he could not file IFR. In order to do this for a flight carrying paying passengers, regulations required Jonz to have either a copilot or an autopilot, and he had neither. Still, he was an experienced instrument-rated pilot who was otherwise qualified to fly in such weather, so this may or may not have been a contributing factor.
There was, however, another—and more serious—meteorological challenge for Jonz to contend with that day: icing. This insidious killer strikes fear into the hearts of even the most grizzled aviators. An aircraft flying in icing conditions can simply fall out of the sky. The accumulating ice adds unsustainable weight to the airplane and simultaneously reduces the wings' lifting ability. Failure to initiate effective anti-icing procedures or to escape the condition by changing altitude or direction will quickly result in disaster.
Jonz had recently written an article on this very topic for the highly regarded aviation magazine _Flying_. Ironically, it hit the newsstands in October 1972—the very month he disappeared. The article, entitled "Ice Without Fear," revealed the author's remarkably cavalier attitude towards this deadly hazard. His lead paragraph said it all: "The thought of in-flight structural icing inspires the crazies in a lot of airmen. In my opinion, most of it is a crock." He went on to write, "If you are sneaky, smart and careful, you can fly 350 days a year and disregard 99 percent of the BS you hear about icing."
It is likely that Jonz did encounter icing conditions on his last flight. At least one other pilot flying a similar route that day reported that his airplane had started to collect ice, and that he barely escaped it by quickly climbing for altitude. It is entirely possible that the dangerous condition of which Jonz was so disdainful—coupled with his failure to outfit his airplane properly with emergency equipment—could have been his undoing. The unfortunate decisions this experienced but devil-may-care pilot made that day may have teamed up against him—and cost him and his passengers their lives.
Ultimately, just about the only thing known with certainty about the disappearance of Cessna 310C N1812H and its highly placed passengers is that there still remains plenty of uncertainty.
Both of the missing congressional representatives were re-elected to office in the election that followed less than a month after their disappearance. Their official tenure was, by necessity, short. On December 31, 1972, a presumptive death hearing in Anchorage legally declared Nicholas Begich dead. Three days later, House Resolution 1 of the Ninety-Third Congress officially accepted Boggs' death, as well. A special election that followed placed his widow, Corinne "Lindy" Boggs, into the Louisiana Second Congressional District seat her husband had filled for the previous twenty-eight years. She occupied that position for an additional eighteen years, and later served as US Ambassador to the Vatican.
It is appropriate that the state flower of Alaska is the forget-me-not. Forty years have passed since Majority Leader Boggs and Representative Begich vanished with two other men during a routine trans-Alaskan flight. Yet, there are many from that state—and others, as well—who have still not forgotten their unexplained disappearance in the Alaskan wilderness. It is unlikely they ever will.
## CHAPTER ELEVEN
## **MAD FLIGHT TO OBLIVION**
**"ONE OF THE MOST MYSTERIOUS AFFAIRS OF WAR."**
Shortly after 11:00 p.m. on May 10, 1941, one of the most bizarre and controversial flights of all time came to an abrupt end in a remote area of western Scotland. On this otherwise quiet evening, a loud explosion startled a farm couple who lived a few miles south of Glasgow, near the village of Eaglesham. When David McLean ran to his window, he was shocked to see the flames of a crashed aircraft blazing on the ground nearby. As he looked up into the darkened sky, he could just make out a parachute drifting down onto a nearby moor.
Armed with a pitchfork, the bewildered but wary farmer ran out to the parachutist and demanded to know, "Are ye a Nazi enemy, or are ye one o' ours?" The downed flier, grimacing from the ankle injury he had just suffered, replied in German-accented English that he was not a "Nazi enemy," but instead, a friend of Britain. He further informed McLean that he had an important message for the Duke of Hamilton, whose estate—Dungavel Castle—was located nearby.
The unsuspecting farmer helped the injured pilot into his house, where—as dictated by proper British etiquette—his wife offered him a cup of tea. The middle-aged pilot with bushy eyebrows identified himself as German Air Force _Hauptmann_ (Captain) Alfred Horn. The McLeans had no way of knowing it, but the visitor who had so abruptly dropped in on them was in reality the third-ranking leader of Britain's mortal enemy—Nazi Germany. "_Hauptmann_ Horn" was none other than Adolf Hitler's right-hand man and the Deputy _Führer_ of the Nazi party, Walter Richard Rudolf Hess.
Thus ended one of history's strangest wartime missions. No bombs were dropped and no bullets fired, but the bizarre and controversial stunt had far-reaching historical and political implications. To this day, Rudolf Hess's infamous flight of no return is one of history's most fascinating.
### **The Rise of Rudolf Hess**
Rudolf Hess was born in 1894 in Alexandria, Egypt, into a wealthy family of German merchants. During World War I, he served with distinction in the Imperial German Army—first as a foot soldier, and then, after recovering from a serious chest wound, as a fighter pilot with the Royal Bavarian squadron, _Jasta_ 35b. After Germany's capitulation, the disillusioned Hess joined the _Freikorps von Epp,_ a right-wing, anti-Communist paramilitary organization. Before long, he crossed paths with a charismatic young fellow reactionary named Adolf Hitler. The two ex-soldiers became friends and fellow members of the ultra-extreme fascist organization _Nationalsozialistische Deutsche Arbeiterpartei_ —the Nazi Party.
In November 1923, Hitler attempted to overthrow the Bavarian government in Munich with his famous "Beer Hall Putsch." After the somewhat amateurish coup attempt failed, both he and Hess ended up imprisoned together in a fortress located in Landsberg, Bavaria. While incarcerated, the faithful Hess assisted Hitler in writing his infamous blueprint of hate, _Mein Kampf_. After their release, Hess remained with Hitler as his personal secretary. When the Nazis finally came to power in 1933, Hess assumed the position of Deputy _Führer_ of the Nazi Party. By 1939, he had become the third-ranking Nazi in all of Germany—answering only to Hitler and his flamboyant air minister, _Reichsmarschall_ Hermann Göring.
In spite of Hess's high official ranking in the Nazi party, his influence had been on the wane for years. By 1941, other highly positioned Nazis had already displaced him in all but title. Consequently, Hess may have felt compelled to perform some spectacular feat in order to regain his status with his _Führer_.
Hess's bizarre plan, if successful, would certainly have restored Hitler's confidence in him. Germany was at war with Great Britain, a country that Hitler had never really wanted to fight and one that was proving difficult to defeat. Hess decided—apparently unilaterally—that it would be in both Germany's and Britain's best interests to join forces against what he considered the common enemy of all Western Europe: the Soviet Union. Surely, he thought, Britain would also see the wisdom of such an alliance.
### **An Old Eagle Takes Flight Again**
Rudolf Hess was an aviator at heart, so his secret plan would involve a flight so ambitious it would have challenged the skills of any of the world's greatest pilots. He would fly an unarmed high-performance combat aircraft from the heart of Germany across Northern Europe, over the North Sea, and into the heart of his well-defended enemy, the British Isles. He would pass over more than a thousand miles of mostly unfamiliar territory, and he would do it completely alone, at night, and in total secrecy—even from his own country.
British military personnel examining the wreckage of Hess's downed Messerschmitt Bf 110 fighter. He parachuted from it, rather than attempt a risky night landing.
A formal pose of Deputy _Führer_ Rudolf Hess in his Nazi uniform.
Rudolf Hess stands on the right next to his leader and comrade, Adolf Hitler. Nazi Minister of Propaganda Joseph Goebbels is visible in the background at the far left.
The entire flight would take place over hostile territory. Since even the _Luftwaffe_ was unaware of his plan, its pilots would also be doing their best to shoot him down. He would therefore have to slip past a double-deadly gauntlet of antiaircraft defenses and night fighter interceptors without being blown out of the sky. Then, if lucky enough to arrive at his destination, he had to find a way to get himself onto the ground safely, in enemy territory, and in the dead of night. If successful, Hess then intended to score a diplomatic coup that would change the course of human history. To improve his chances, the superstitious Nazi even consulted his personal astrologer in order to select a night for the flight on which an optimal alignment of the planets would occur. Though he did not lack confidence or ambition, he would need all the help he could muster.
Hess was an accomplished pilot. He had maintained his flying skills in the years since World War I—winning in 1934 an air race around Germany's highest peak, the Zugspitze. Therefore, all that was necessary for him to fly one of the _Luftwaffe_ 's fast, modern fighters was a few hours of advanced training. This was not a problem for the Deputy _Führer_ of the Third Reich: no one—except for Hitler and Göring—had the authority to deny him this access. And in all likelihood, neither had even a clue as to what he was up to.
Consequently, Hess surreptitiously began visiting Willi Messerschmitt's _Bayerische Flugzeugwerke_ (Bavarian Aircraft Works), located at Augsburg, where he could learn to fly one of Germany's premiere high-performance, long-range combat aircraft: the twin-engine Messerschmitt Bf 110 _Zerstörer_ (Destroyer). A fast and highly versatile two-seat fighter, it was also widely used for bombing, ground attack, and reconnaissance duties.
Hess made numerous flights from the factory airfield near Augsburg over the course of several months leading up to his secret mission. After making his first few hops with an instructor, he began flying solo. After he had mastered the necessary skills, he claimed one of the fast Messerschmitts, an E/2-N model, as his own personal aircraft. It sat in its hangar, under constant guard and untouched, except by Hess himself.
Messerschmitt Bf 110G-2 at Britain's Royal Air Force Museum, Hendon. This is similar to the aircraft Rudolf Hess used for his infamous flight of May 10, 1941. _Steven A. Ruffin_
Eventually, he had his fighter fitted with wing fuel tanks, which extended its range to well over a thousand miles. He then began making practice flights with the aircraft fully fueled—a tricky undertaking even for the highly experienced test pilots who routinely flew for Messerschmitt.
### **A Most Mysterious Affair**
Hess made a series of false starts in pursuit of his intended diplomatic mission before finally succeeding. Each time, he had his fuel tanks topped off for what appeared to be a long journey. No one at the airfield could even guess where he might be going, and no one dared question the Deputy _Führer_. In each of these attempts, he aborted soon after takeoff for various reasons.
Finally, at a little before 6:00 p.m. on May 10, 1941, Hess once again fueled up his long-range, twin-engine fighter and took off for points unknown to anyone else. This time, he did not turn back. He headed northwest across Germany at more than two hundred miles per hour through the rapidly darkening sky, carefully flying well clear of German fighter interceptor bases, as well as other observation, antiaircraft, and defense stations in and around the Ruhr Valley.
Shortly after Hess's secret takeoff that evening, word of his apparently unauthorized flight somehow leaked out. German _Luftwaffe_ ace and fighter group commander Adolf Galland received an urgent phone call at his base in western France. As he relates in his autobiography, _The First and the Last_ , the frantic caller at the other end of the line exclaimed, "The Deputy _Führer_ has gone mad and is flying to England in a Messerschmitt. He must be brought down!"
The phone call was from none other than the _Reichsmarschal_ himself—Hermann Göring, Hitler's number two in command and Galland's ultimate superior. Galland, a distinguished flier and fighter—and not a politician—wondered to himself if perhaps Göring was the one who had gone stark raving mad. Why in the world had Nazi Germany's air minister just personally ordered him to shoot Hitler's own deputy out of the sky?
Galland halfheartedly scrambled a few fighters he had available. It was a token gesture at complying with an incomprehensible order. Meanwhile, he pondered further the question of why Göring would want Hess killed. After all, the "mad" pilot in question was one of Adolf Hitler's oldest and best friends and a powerful and highly regarded leader in the Nazi regime. It made no sense, but it really did not matter, anyway. As night approached, his fighters had little chance of intercepting Hess and even less of shooting him down. Whatever the secretive Deputy _Führer_ 's destination, no one in the Third Reich could stop him now. It was, as Galland wrote, "one of the most mysterious affairs of war."
### **Prisoner of War**
As Hess winged his way across Germany and Holland, he carefully evaded the Nazi air defense network. When he reached the Dutch coast, he veered northeast to avoid known enemy air defenses before turning west and heading into British airspace, toward Scotland. Remarkably, he managed to thread his way through this lethal maze of obstacles, and at approximately 11:00 p.m., he arrived at his destination in Scotland.
As he searched the darkness in vain for the small private landing strip situated on the Duke of Hamilton's estate, he began to run low on fuel. Finally, with no alternatives remaining, he unbuckled his seat belt, opened the glass canopy, rolled his fighter upside down, and dropped into the dark unknown. As he floated to earth in his parachute, he watched his trusty Messerschmitt plow into the Scottish moor below him. Before long, he also landed hard, not far from his burning airplane. In a few minutes, he made his acquaintance with the pitchfork-wielding McLean.
After Hess finished his tea, the now-congenial farmer turned his uninvited visitor over to the authorities. While the high-ranking Nazi was in the process of passing from one official to the next, he finally divulged his true identity and dramatically proclaimed that he had come in the name of humanity. He explained to his skeptical captors that he was there as a personal envoy of his _Führer_ , Adolf Hitler, to seek a peace settlement between Germany and Britain.
Rudolf Hess's route from Augsburg, Germany, to Dungavel Castle, Scotland. This solo night flight was a considerable achievement in navigation and airmanship. Before even attempting to penetrate Britain's highly capable defenses, he first had to escape those of Nazi Germany.
British authorities, however, decided for reasons of their own that the best course of action was to ignore Hess and his strange airborne peace offering. On May 17, after intensive interrogation and minimal public comment, they locked him in the Tower of London. Three days later, they moved him to Mytchett Place in Surrey, and finally to a prison in South Wales, where he remained a solitary prisoner for the duration of the war. In spite of Hess's adamant demands, British Prime Minister Winston Churchill steadfastly refused even to meet with the captured Nazi leader. The resolute prime minister had no desire to make peace with a regime he wanted only to destroy.
### **Man without a Country**
Back in Germany, Hitler publicly proclaimed that the secret mission of his now ex-Deputy _Führer_ was completely unauthorized. Not only did he refuse to sanction the flight in any way, he even went so far as to brand his former friend insane and suffering from "pacifist delusions." German radio immediately announced, "Rudolf Hess, Deputy _Führer_ , has flown to England in pursuit of an idea of madness. The _Führer_ has removed him from all posts and excluded him from the Party."
The possibility that Hitler may have been in on Hess's risky plan is still a matter of debate. Some historians believe that the two had previously discussed it and that Hitler may even have privately sanctioned the mission. He certainly had plenty of incentive to make peace with Great Britain: he secretly planned to invade the Soviet Union the following month. An alliance with Great Britain would free him from the war on the Western Front and greatly enhance his prospects of defeating the Soviet power to the east. However, no written records have yet materialized that substantiate any such discussion between Hess and Hitler.
Hess had apparently devised his plan—with or without Hitler's endorsement—in order to reach out to a group of influential British citizens he believed to be pro-German. Some of these, who may have even included members of the British Royal Family, were in a position to oppose—or perhaps even depose—the pugnacious Churchill. The man Hess was attempting to contact, the Duke of Hamilton, was a fellow aviator whom he may have briefly met during the 1936 Olympics in Berlin. The Duke was an influential former member of Parliament and current air commodore in Britain's Royal Air Force. Hess apparently considered him the best person to connect him with the right people.
How Hess arrived at such a notion is debatable. Some historians have suggested that the doltish Deputy _Führer_ was the hapless victim of an elaborate British plot to lure him to Britain. Others contend that the clandestine Hess affair was not a plot, but rather a sincere attempt by both countries to negotiate a peaceful solution. Unfortunately, after Hess's noisy and somewhat messy arrival in Scotland became a front-page story, no one on either side wanted any more to do with it.
Was Hess really a legitimate part of a plot to overthrow the British government? Or was he simply a British intelligence dupe? Was he Hitler's personal "winged messenger of peace?" Or was Churchill correct when he called him a deranged madman bent on "an act of benevolent lunacy"? It looks as though the jury will remain out pending further credible information.
For reasons unknown, the British government still has many of the documents most pertinent to the Hess affair under lock and key. There they will remain—at least as it stands at this writing—until the year 2017. Until then, the truth behind his historic flight will continue to be one of history's most closely guarded secrets. However, the fact that the British government is still withholding information after nearly three-quarters of a century suggests there may be more to the affair than either lunacy or clumsy diplomacy. It could even be, as John Costello asserts in his book _Ten Days to Destiny_ , that Hess's historic mission was "an interlocking sequence of secret British and German peace maneuvers that can be tracked right back to the summer of 1940."
Rudolf Hess (far right) at the Nuremberg Trials with fellow Nazis, former _Reichsmarschal_ Hermann Göring (left) and _Grossadmiral_ Karl Dönitz (center). Hess and Dönitz ended up in Spandau Prison, while Göring avoided his appointment with the hangman by committing suicide. _US Army Signal Corps_
Rudolf Hess left behind a wife, a three-year-old son, and a number of letters. One letter was addressed to Hitler, stating, "My _Führer_ , should you not agree with what I have done, simply call me a madman." Hitler was quick to oblige him—and he wasted no time in signing Hess's death warrant in the event he ever again set foot in Germany. Hess's note to his wife explained that she would probably not see him again for a long time. It was an understatement.
Rudolf Hess, languishing in prison in 1945. From the evening of his ill-conceived flight to the end of his controversial life more than forty-six years later, he never enjoyed a single day of freedom. _US Army Signal Corps_
After the war ended, the unrepentant and still sadly misguided Hess stated publicly at the Nuremberg International Military Tribunal: "I am happy to know that I have done my duty toward my people, my duty as a German, as a National Socialist, as a loyal follower of my _Führer_. I regret nothing." Not surprisingly, the tribunal found him guilty of war crimes. Instead of sending him to the hangman, however—like most of his other high-ranking Nazi colleagues—it sentenced him to a lifetime of solitary confinement at Berlin's Spandau Allied Military Prison. Here, he languished for the remaining forty-one years of his life, the latter half of these as the fortress's only guest.
On August 17, 1987, the ninety-three-year-old Hess committed suicide by hanging himself. At least this was the official explanation. Hess's son, Wolf Rüdiger Hess, among others, has alleged that the former Reich Minister did not commit suicide at all, but that agents of the British government murdered him. Others contend that the man who died at Spandau after spending nearly half a century there in solitary confinement, was not really Hess at all, but an imposter.
There is still little certainty regarding Rudolf Hess's fantastic flight, but in the end, his futile mission failed to change the course of the war in any way. In fact, it accomplished nothing at all—other than to terminate his own misguided political career and his freedom, forever.
## CHAPTER TWELVE
## **THE CRIME OF THE CENTURY**
**"NO FUNNY STUFF OR I'LL DO THE JOB."**
On Thanksgiving eve, November 24, 1971, a tall, thin, well-dressed man boarded Northwest Orient Airlines Flight 305, in Portland, Oregon. The scheduled 2:45 p.m. flight was just a short thirty-minute hop due north to its destination of Seattle. The olive-complexioned man wore a suit, overcoat, and dark sunglasses, and he appeared to be in his midforties. He sat near the rear of the cabin in a row by himself.
Soon after the Boeing 727-51 jetliner took off, the mysterious passenger calmly presented a neatly printed note to flight attendant Florence Schaffner. She initially thought it was a request for a date or some other sort of proposal—not an uncommon occurrence for the attractive twenty-three-year-old woman. When she read it, however, she immediately realized it was anything but an innocent flirtation. It read, "Miss, I have a bomb. Come sit by me." When she did, he opened an attaché case he was carrying and briefly displayed its contents. Plainly visible inside were red cylindrical sticks and a battery connected by an electrical wire. It may or may not have been the real thing, but it looked authentic enough for Schaffner to take him seriously. He indicated that he wanted $200,000 in unmarked, "negotiable" bills, four parachutes (two back main chutes and two front reserve chutes), and a fuel truck ready and waiting when they landed at the Seattle-Tacoma airport. The man impressed Schaffner as being polite—and generous: he told her to keep the change after paying for a two-dollar drink with a twenty-dollar bill. Before she left to take the note to the pilot, however, he added ominously, "No funny stuff or I'll do the job."
Northwest Orient Airlines Boeing 727-51 N467US, from which hijacker D. B. Cooper made his famous escape on November 24, 1971. This photograph was taken in March 1967 at Cleveland's Hopkins International Airport. Cooper chose the 727 because of its distinctive rear boarding steps (which are retracted in this photo). _Bob Garrard, with permission_
Thus begins the strange saga of skyjacker "D. B. Cooper." The high-altitude scheme he orchestrated for himself that Thanksgiving eve was either the heist of the century—or the biggest blunder in criminal history. To this day, no one knows which.
### **Out into the Night**
Nearing Seattle, the crew of Flight 305 immediately notified authorities on the ground of the hijacker's demands. FBI agents quickly assembled ten thousand twenty-dollar bills and four parachutes while the airliner circled above Seattle, waiting to land. The bills were unmarked, as specified by the hijacker, but with serial numbers prerecorded to facilitate future identification. Mercifully, throughout the unfolding drama, the other passengers on the flight remained oblivious to the crime in progress. The pilot explained the delay by announcing that it was due to a minor mechanical problem.
After landing, the mysterious sky pirate accepted delivery of his two main and two reserve chutes, and a large bundle of greenbacks. He then allowed all the passengers, along with flight attendants Schaffner and Alice Hancock, to exit the aircraft. Soon, he ordered the remaining four crew members, Capt. William Scott, 1st Officer William Rataczak, 2nd Officer Harold Anderson, and flight attendant Tina Mucklow, to take off. His instructions were to head the now-empty airliner toward Mexico City, via Reno, Nevada, for another refueling stop. Soon after takeoff, for reasons unknown at the time, the hijacker ordered the pilots to climb to an altitude of ten thousand feet, drop the landing gear, and lower the flaps to an angle of fifteen degrees.
When the crew had complied with these instructions, the skyjacker ordered all four of them into the cockpit and then lowered the rear outside stairway of the jet. He had obviously selected the Boeing 727 as the vehicle for his crime specifically because of this distinctive feature. Donning a main and reserve parachute, Cooper then tethered the heavy parcel of money to his body, using some nylon rope from one of the extra parachutes. He did this in order to keep it from being lost during what was sure to be a wild descent into the equally wild night. As he was doing that, he probably wished he had specified hundred-dollar bills. The bulky bag of twenties weighed twenty-one pounds, while C-notes would have weighed only four.
At about 8:00 p.m., a cockpit warning light alerted the pilots that the airliner's rear door was ajar. When they asked the skyjacker via the intercom if he needed any assistance, they got a resounding "No!" It was the last word they—or anybody else—heard the mysterious criminal utter. A few minutes later, the audacious jumper made his way down the airliner's rear stairway, parachutes and money attached, and jumped into the two-hundred-mile-per-hour slipstream and the dark unknown that lay beyond. He was still wearing nothing more substantial than his loafers and business suit. All he left behind were the two unused parachutes, a thin black J. C. Penney tie with a pearl tie tack, eight cigarette butts, and a few mostly unreadable fingerprints.
The FBI later pieced together what little they could about this eventful four-hour period. The man in question wore sunglasses, drank bourbon and soda, and chain-smoked filter-tipped Raleigh cigarettes. In the forty-some years since the enigmatic man who came to be known as "D. B. Cooper" hijacked Flight 305 and jumped into immortality, almost nothing else has been learned about him.
### **A Legend Is Born**
To this day, one of the few certainties about this legendary mystery man is that, ironically, he never referred to himself as "D. B. Cooper." The only time he ever mentioned a name at all was while purchasing his airline ticket from Portland to Seattle. He paid for it with a twenty-dollar bill and gave the name "Dan Cooper" for the passenger manifest. By most accounts, the origin of the misnomer, "D. B. Cooper"—by which he is universally remembered—stemmed from a mistake. One of the early suspects included a Portland man named Daniel B.—or "D. B."—Cooper. The press keyed on this name and widely reported it as that of the hijacker. Authorities soon cleared the real D. B. Cooper of any wrongdoing, but the catchy name stuck. The world would forever afterward remember the infamous skyjacker—incorrectly—as "D. B. Cooper."
Just as this was not his real name, it is almost equally certain that it was not Dan Cooper, either. In 1971, proof of identity was not required to buy an airline ticket, so he could have given any name he wished. In all likelihood, "Dan Cooper" was a pseudonym. It also happened to be the name of a European comic book action hero popular in the 1960s. Some have conjectured that the skyjacker may have been a fan of this cartoon character and decided to adopt his name as a personal joke. To this day, no one has the slightest idea what his real name was.
Equally unknown was where Cooper landed. The exact location was impossible to determine, since the crew—all of whom were up front in the flight compartment—had no way of knowing exactly when he had exited the aircraft. For that matter, they did not know that he actually _had_ jumped until the airliner landed in Reno. There, FBI agents, who doubted that the hijacker would really be crazy enough to jump, waited to arrest him. Pilots of US Air Force aircraft, who had scrambled to observe and assist, were equally clueless. Even though they shadowed the airliner throughout its flight from Seattle to Reno, they failed to see anyone jump that dark and rainy night. Based on when the rear door light came on, the best guess was that Cooper had jumped at about 8:10 p.m., at which time the plane was flying over a remote area of the lower Cascade Mountains in southwestern Washington.
A portion of the aeronautical chart used by the FBI showing the flight path of Cooper's airliner, as it made its way south from Seattle to Reno, Nevada. Cooper jumped at about 8:10 p.m., over the town of Ariel, Washington. The only part of his loot that ever turned up was a bundle of twenty-dollar bills, found more than eight years later—and more than twenty miles southwest of Ariel. _FBI (labels added by author)_
One of the most intense manhunts in history began on Thanksgiving morning and continued for eighteen days. The elaborate operation—which ended up costing far more than the $200,000 ransom—focused on an area approximately twenty miles north of Portland, near the isolated town of Ariel, Washington. An armada of airplanes, helicopters, and hundreds of law enforcement and military personnel turned up nothing. There was no trace of Cooper, his parachute, or the money. Even now—after nearly a half century of dead-end leads, a thousand suspects, an estimated one hundred thousand interviews, a case file that is dozens of volumes thick, and untold numbers of false accusations and crackpot confessions—authorities are no closer to solving this crime than they were in 1971. To this day, there are only these indisputable truths: no one knows where Cooper landed, what happened to him—or even who the mysterious hijacker really was. It remains the greatest aviation crime mystery of all time.
### **D. B. Cooper, Cult Hero**
This case, which the FBI code-named "NORJAK" (for Northwest Hijacking), has been a never-ending source of aggravation for the nation's top law-enforcement agency. Retired FBI agent Ralph Himmelsbach spent much of his career unsuccessfully chasing down Cooper leads. Even after forty-plus years, his frustration is still apparent when he speaks about Cooper, calling the fugitive outlaw just another "sleazy, rotten criminal... a loser." Other agents, however—some of whom are still tinkering, at least unofficially, with the case—view Cooper more respectfully, as a more sophisticated and educated man.
FBI composite sketch of the skyjacker known as "D. B. Cooper." On Thanksgiving eve, 1971, he jumped with $200,000 from the back of a Boeing 727 into a stormy night, ten thousand feet above the wilderness of southwestern Washington. The case remains unsolved, and to this day, no one knows who he really was or if he survived the jump. _FBI_
Regardless, it seems that the more the authorities searched for the elusive skyjacker, the more he gained in public popularity. His sheer boldness, ingenuity, and prowess in fooling the nation's foremost law enforcement organization made him a sort of cult hero. He committed the perfect crime without hurting anyone else, and he got away with it. Not even famed 1930s gangster John Dillinger—whom many struggling, Depression-era Americans secretly admired—managed to do that. Cooper's crime, occurring as it did near the end of the Vietnam era, successfully bucked the system at a time when "sticking it to the man" was a popular idea. Consequently, the gutsy skyjacker and his daring crime became a part of American folklore.
Over the years, numerous books, songs, TV documentaries, and even a Robert Duvall movie, have honored the legend of D. B. Cooper. In addition, several towns and establishments around the country still hold D. B. Cooper parties. The most popular of these occurs in Ariel, Washington, near where Cooper supposedly landed. Every Thanksgiving, the tiny town celebrates "Cooper's Capers" in honor of its most famous nonresident's only known nonvisit.
### **Slim Chance of Survival**
It is still anyone's guess as to where Cooper landed and what happened to him. Many are convinced that he died during the jump. Some even consider his attempt so desperate that it was tantamount to suicide. It is not difficult to arrive at that conclusion; after all, he jumped while flying over very hostile terrain, from an aircraft traveling at ten thousand feet altitude and two hundred miles per hour, into a dark, cold, and wet night, wearing only a business suit. If not killed outright when his unprotected body slammed into the blast of icy wind, he may have died of exposure during or after the descent or from injuries he received when he blindly parachuted into the trees, rocks, or water below.
If Cooper did somehow beat all the odds and survive the jump, how then could he possibly have escaped to safety in one of the country's most rugged and remote areas and in the midst of one of the most intense manhunts ever conducted? He was obviously not dressed, or in any other way prepared, for a nighttime hike through dense forests and mountains. Logically, the chances that he survived seem nearly nonexistent. Even the FBI eventually admitted that he probably died during the attempt, stating publicly that there was little chance that even an experienced parachutist—which, they concluded, Cooper probably was not—could have survived such a jump.
Still, the FBI never stopped looking for the elusive Mr. Cooper. If he died during the jump, then what happened to his body? The widespread search for Cooper turned up at least two corpses from the past, but neither of these was his. Moreover, the extensive manhunt failed to find even one of the ten thousand missing twenty-dollar bills. Out of the many that must have eventually broken loose and drifted across the countryside, surely some of them would have turned up somewhere—unless the hijacker managed to survive and hang onto them.
Proof that Cooper could have survived this seemingly impossible jump came on April 7, 1972, when a copycat skyjacker named Richard McCoy successfully reproduced the feat. He parachuted from the back of a United Airlines Boeing 727 over Utah in the same manner as Cooper, except he took with him a more substantial ransom of $500,000. The FBI thought that they might have found their man, Cooper, but when agents captured McCoy a few days later, they quickly ruled that possibility out. McCoy had an airtight alibi for the night Cooper committed his crime. McCoy went to prison for air piracy—and later died in a police shootout after escaping. Meanwhile, the indomitable D. B. Cooper remained at large.
Not surprisingly, these two skyjackings prompted airports around the country to institute more stringent security measures. In addition, Boeing retrofitted its 727 jetliners with a lock on the back door to prevent anyone else from opening it in flight. This so-called "Cooper Vane" put a stop to any further sky-jumping hijinks.
### **A Significant Breakthrough**
More than eight years later, just when the legend of D. B. Cooper was beginning to fade, there was an amazing break in the case. In February 1980, eight-year-old Brian Ingram was camping with his family on the Columbia River near Vancouver, Washington, when he uncovered in the sandy bank some bundles of weathered and moldy twenty-dollar bills—290 of them to be exact. A comparison of serial numbers quickly verified that the $5,800 was part of the Cooper loot. This electrifying discovery prompted authorities to initiate a renewed search operation near where the bills were located. Finally, it seemed, the solution to the D. B. Cooper case might be at hand.
However, in spite of a search almost as extensive as the original, Cooper's body was still missing and none of the remaining $194,200 could be found. Young Ingram's find proved to be the only bills connected with the Cooper case ever to appear, either in or out of circulation.
Part of the "Cooper Cash" an eight-year-old boy discovered in 1980, buried in the sand near Vancouver, Washington. The 290 twenty-dollar bills were the only ones ever found. _FBI_
So, just what did the fortuitous find—more than twenty miles southwest of where authorities originally assumed that Cooper landed—mean? Did the package of money separate from Cooper during or after the jump, fall into the water, and wash downstream? Or did Cooper purposely toss some of the money away, hoping to throw authorities off his track? No one knows. As for the lucky boy who found the bundles of bills in the sand, he received a cash reward in addition to several of the original bills to keep as souvenirs. One of these eventually sold at an auction for more than $6,500.
### **Other Leads**
No more hard evidence for the Cooper case ever surfaced. Over the decades since the crime was committed, law enforcement authorities received thousands of tips from people who thought they had information about Cooper. One was a 1996 claim made by a woman in Florida, who related that her husband had confessed on his deathbed that he was the infamous skyjacker, D. B. Cooper. Duane Weber, a seventy-year-old antique dealer, did resemble sketches of the skyjacker, and there were several other bits of compelling, though inconclusive, circumstantial evidence. In the end, however, his DNA did not match that obtained from Cooper's tie.
Then, in 2007, a man revealed that his late brother, Kenneth Christiansen, might have been D. B. Cooper. Before the Cooper skyjacking, Kenneth had been a rather disgruntled employee of Northwest Orient. Then, about a year after the skyjacking, he bought a house and paid for it in cash—an unusual transaction for a person of his means. He also happened to be a bourbon drinker, a smoker, and a former paratrooper—but most important, his photo closely matched flight attendant Schaffner's recollection of Cooper. Christiansen is still the most likely D. B. Cooper suspect, but authorities never definitively connected him to the crime.
In 2011, D. B. Cooper was still making headlines. At that time, the FBI announced it had a "credible" lead on the case. A woman in Oklahoma, who claimed to be the skyjacker's niece, provided a fingerprint investigators hoped might match a possible Cooper print found on the airliner. However, this lead was just like all the ones before it—inconclusive.
After more than forty years, the D. B. Cooper case is still unsolved; in all likelihood, it will remain that way. No one will ever know whether he survived the ordeal and lived in comfort to a ripe old age, or whether he died a quick death the instant he stepped off the extended rear stairway of the high-flying airliner.
Either way, no one today doubts that his actions on that dark and rainy Thanksgiving eve two miles above the wilderness of the Pacific Northwest were worthy of the legend they created. His famous jump into immortality remains the only major unsolved domestic skyjacking in US history. Accordingly, this notorious "novel without a final chapter," as one FBI agent recently described it, remains an open case. To this day, the FBI has a standing statement on its official website, requesting anyone with information to contact its Seattle field office.
## CHAPTER THIRTEEN
## **WHEN THE SKIES RAINED TERROR**
**"ARE YOU GUYS READY? LET'S ROLL."**
On the cloudless morning of September 11, 2001, four US airliners took off from three different airports located on the Eastern Seaboard. All of the flights were regularly scheduled nonstop transcontinental hops to the West Coast. They were normal in all respects—except that distributed among the total 232 passengers were 19 suicidal terrorists from the al-Qaeda Islamist militant group. They were armed with both knives and incapacitating chemical agents.
During a two-hour period of utter horror, the fanatical Middle Eastern terrorists, bent on killing themselves and as many innocent Americans as possible, launched four separate but highly coordinated aerial attacks against high-profile targets in the United States. They hijacked and intentionally crashed four commercial jet airliners, loaded with hundreds of passengers and thousands of gallons of explosive jet fuel. Three of them hit America's two most iconic cities, New York City and Washington, DC. In so doing, they either destroyed or substantially damaged two of the nation's most important symbols of wealth and power. Worst of all, they took the lives of three thousand innocent people.
These four flights combined to form history's most devastating terrorist attack. The day we now call simply 9/11 truly was, as President George W. Bush told the American public on that evening, the day "our nation saw evil."
### **Falling Airliners**
The terrorists intentionally chose transcontinental flights, knowing the airliners would be heavily loaded with fuel, and they picked a Tuesday because it is typically a light flying day. These two factors would translate into maximum explosive power upon impact and minimal passenger resistance. At least one terrorist on each flight was a pilot who had trained at a US civilian flight school.
No one can ever know the horror that occurred within the four doomed airliners during these flights, but a great deal of information can be gleaned from evidence recovered on the ground, radio transmissions, and phone calls made by those aboard. The image that emerges from each of the hijacked planes is the same: stark terror, panic, confusion, and heartbreaking sorrow.
**Attack No. 1: American Airlines Flight 11** — This flight was the first of the four to take off and the first to crash. The Los Angeles-bound Boeing 767-200ER left Boston's Logan International Airport at 7:59 a.m. with eighty-one passengers, a crew of eleven, and approximately ten thousand gallons of fuel. About half of the available seats were empty.
Approximately fifteen minutes after takeoff, five terrorists seated near the front of the cabin in the business and first-class sections violently seized control of the aircraft. Two of the flight attendants on board, Betty Ong and Madeline "Amy" Sweeney, had the presence of mind to call American Airlines ground offices and inform them of the situation. The terrorists, using knives and a Mace-like incapacitating agent, had stabbed two flight attendants and slashed the throat of a passenger before forcing their way into the cockpit and overpowering both pilots. Mohamed Atta, the thirty-three-year-old tactical leader of the 9/11 operation and son of an Egyptian lawyer, then sat down in the pilot's seat and took over the controls.
Air traffic control (ATC) personnel first became suspicious when the plane's pilots stopped responding to radio messages. Before long, its identifier on the radar screen disappeared, indicating that someone had turned off its transponder. No doubt remained after Atta inadvertently transmitted publicly a message he intended only for the passengers: "We have some planes. Just stay quiet and we'll be OK. We are returning to the airport.... Nobody move. Everything will be OK. If you try to make any moves, you'll endanger yourself and the airplane. Just stay quiet."
West of Albany, New York, Atta turned the airliner away from its northwesterly course, and headed south, toward New York City. A few minutes later, the final transmission: "Nobody move, please. We are going back to the airport. Don't try to make any stupid moves."
An ATC official contacted Otis Air National Guard Base, located on Cape Cod, Massachusetts, and stated, "We have a hijacked aircraft headed towards New York and we need you guys to... scramble some F-16s or something up there to help us out." After considerable delay, two McDonnell Douglas F-15 Eagle fighter jets launched, each armed with air-to-air missiles and a 20mm cannon. They went supersonic—reaching a speed of Mach 1.4—in an attempt to intercept the hijacked airliner, but it was all for nothing: Flight 11 ceased to exist before they were even airborne. Even if they could have intercepted the speeding Boeing, the pilots lacked at that time the authorization—or sufficient reason—to shoot down an unarmed airliner full of passengers.
The last communication with Flight 11 came from flight attendant Sweeney: "Something is wrong. We are in a rapid descent... we are all over the place.... We are flying low. We are flying very, very low. We are flying way too low.... Oh, my God, we are way too low!" The phone call ended abruptly.
At 8:46:40 a.m., American 11, flying at a speed of 440 miles per hour, slammed into the 93rd through 99th floors of the 110-story North Tower of New York City's World Trade Center complex.
**Attack No. 2: United Airlines Flight 175** — At 8:14 a.m., another Boeing 767-200ER took off from Logan International Airport, also bound for Los Angeles. Aboard were fifty-six passengers and a crew of nine. Two-thirds of the available seats were empty. The scenario for this second hijacking of the morning was almost identical to the previous one: about thirty minutes after takeoff, five Arabic-speaking terrorists seated in the business and first-class sections violently took control of the airliner.
The captain and first officer of Flight 175 were already aware that something was amiss with Flight 11, which had taken off from Logan fifteen minutes before them. ATC had advised Flight 175 to be on the lookout for Flight 11 and to steer clear of it; in addition, the Flight 175 crew had heard terrorist Atta's ominous radio transmission. In spite of this forewarning, they were themselves overpowered by the five terrorists onboard, who then turned the jetliner south toward New York City. By now, trackers on the ground were starting to realize that they had not just one, but two, hijackings in progress. Still, no one knew anything about the hijackers' intentions, their destination, the chaos they were creating inside of each plane—or the unspeakable horror that was yet to come.
A Coast Guard rescue team on its way to the scene of the World Trade Center attack, September 11, 2001. _US Coast Guard/PA2 Tom Sperduto_
The Pentagon as it appeared three days after the devastating 9/11 terrorist attack. The airliner hit the west wall traveling 530 miles per hour. All 64 people aboard the airliner died, along with 125 military and civilian personnel inside the Pentagon. _DOD/TSgt. Cedric H. Rudisill_
A closer view of the damage done to the Pentagon's west-facing wall, as it appeared the day after the attack. Some conspiracy theorists asserted that a missile caused the damage, and not an airliner. Security film and numerous eyewitnesses prove otherwise. _DOD/R. D. Ward_
As with Flight 11, some of those on Flight 175 made phone calls, revealing that the five terrorist attackers carried knives and an incapacitating agent. They had stabbed at least one flight attendant and apparently killed both of the pilots. Then, twenty-three-year-old Marwan al-Shehhi, a citizen of the United Arab Emirates, took over the controls. Passenger Brian Sweeney revealed in a call that he and other passengers were considering storming the cockpit of the erratically flown jet. Another passenger, Peter Hanson, told his father:
It's getting bad, Dad. A stewardess was stabbed.... Passengers are throwing up and getting sick. The plane is making jerky movements.... I think we are going down. I think they intend to go to Chicago or someplace and fly into a building. Don't worry, Dad. If it happens, it'll be very fast.... My God... my God!
The airliner continued southwest, passing to the west of New York City. After narrowly avoiding collision with two other airliners, it turned and approached Lower Manhattan. It descended very rapidly from twenty-eight thousand feet, and at 9:03:02 a.m.—just over sixteen minutes after Flight 11 had hit the North Tower—it flew straight into the South Tower, killing all sixty-five people aboard. It crashed into the 77th through 85th floors, while in a left-banking turn, flying at a speed of 540 miles per hour. Flight 175 had the dubious distinction of being the only one of the four ill-fated 9/11 airliners to crash on live TV, as the entire world watched in horror.
It was now devastatingly clear to everyone that these two horrendous crashes were not accidents. The United States was under attack—and there was more to come.
**Attack No. 3: American Airlines Flight 77** — The third doomed aircraft that morning was a Boeing 757-200, which—like the previous two hijacked airliners—was bound for Los Angeles. As on Flight 175, two-thirds of the seats were empty. Only fifty-eight passengers were aboard, along with a crew of six. Again, five armed terrorists sat near the front of the aircraft.
The airliner departed Washington Dulles International Airport at 8:20 a.m. and proceeded west. Just over thirty-one minutes into the flight, the terrorists took over the airliner and turned it back toward Washington, DC, with its transponder switched off. By this time, Flight 11 had already crashed into the World Trade Center and it was obvious that hijackers were in control of Flight 175; it therefore seemed certain to everyone on the ground that Flight 77 had suffered a similar fate. Terrorist Hani Hanjour, a twenty-nine-year-old Saudi Arabian who had obtained a commercial pilot's license in the United States two years earlier, was at the controls.
As with the two other commandeered airliners, some of those aboard Flight 77 were surreptitiously able to make telephone calls from either their personal cell phones or the airliner's seatback phones. Flight attendant Renee May informed her mother that her plane had been hijacked, and that everyone had been moved to the rear of the aircraft. She asked her to notify American Airlines. Passenger Barbara Olson called her husband, US Solicitor General Theodore Olson. She told him terrorists armed with knives and box cutters had hijacked her plane. All he could do was tell her the upsetting news that two airliners had crashed into the World Trade Center.
At 9:32 a.m., ATC observed "a primary radar target" heading toward Washington, DC, at a high rate of speed. It was Flight 77. Controllers advised the Secret Service that it might be heading toward the White House. Fighter jets from Langley Air Force Base, Virginia, were scrambled, but in the confusion, they were vectored in the wrong direction. It did not matter anyway: it was too late.
At 9:37:46 a.m., Flight 77, traveling at full power in a 530-mile-per-hour dive, slammed into the west-facing wall of the Pentagon, located just outside of Washington, DC. All 64 people aboard perished, along with another 125 inside the Pentagon. The withering fire—fed by 7,500 gallons of jet fuel—reached an estimated two thousand degrees Fahrenheit and took several days to extinguish.
**Attack No. 4: United Airlines Flight 93** — This was the fourth and final jetliner deliberately crashed on 9/11. It too was sparsely booked, with passengers in only one-fifth of the seats. However, it differed from the other three hijacked flights in that there were only four terrorists aboard, and its destination was San Francisco. It was also destined to be the only jetliner not to hit any structure on the ground. This was because a group of passengers onboard had the time and courage to rise up against the terrorists and thwart their malevolent plan.
The Boeing 757-200 took off at 8:42 a.m. from Newark Liberty International Airport after a twenty-five-minute delay. The delay could have been a lifesaver for everyone aboard, for by this time, authorities knew about the Flight 11 hijacking. Unfortunately, they neglected to take any precautionary action. Had they immediately stopped all takeoffs, Flight 93 would never have left the ground. Instead, thirty-seven passengers and a crew of seven took off, never to return. Four minutes after Flight 93 was airborne, Flight 11 crashed into the World Trade Center, followed sixteen minutes later by Flight 175.
After Flight 93 reached its cruising altitude of thirty-one thousand feet, the pilots received a message from a United Airlines flight dispatcher, advising them to "beware any cockpit intrusion." It further stated that two aircraft had hit the World Trade Center.
Whatever precautions the Flight 93 crew may have taken were not enough. The aircraft suddenly descended seven hundred feet, after which a strange radio transmission emanated from the jetliner: sounds of struggling, with a voice shouting, "Mayday!" and "Hey, get out of here!"
Controllers were unsuccessfully trying to contact the airliner, now over eastern Ohio, when they heard, "Ladies and gentlemen: Here the captain, please sit down keep remaining sitting. We have a bomb onboard. So, sit."
The airliner, by now over Cleveland, turned back toward the southeast and gained altitude. At least a dozen passengers and crew members managed to make phone calls revealing that knife-wielding terrorists wearing red bandanas had hijacked the plane. They had killed or injured at least three people and had forced everyone to the rear of the aircraft.
One of the passengers, Tom Burnett, called his wife in California and said, "The plane has been hijacked. We're in the air. They've already knifed a guy. There's a bomb on board. Call the FBI." Other passengers and crew members made similar calls, confirming to the world the horror occurring high in the skies over Ohio and Pennsylvania.
By this time, the FAA knew beyond any doubt that terrorists had hijacked four airliners. Consequently, it took the unprecedented action of ordering all 4,500 aircraft in the skies over the United States to land, regardless of destination. United Flight 93, now under the control of terrorists, was one of the few that ignored this order. Terrorist Ziad Jarrah, a twenty-six-year-old from a wealthy Lebanese family, pointed the big Boeing toward Washington, DC.
The phone calls from those aboard Flight 93 indicate that some of them were formulating a plan to retake the aircraft from the terrorists. Knowing that terrorists had already intentionally crashed other airliners, they resolved to go down fighting. Tom Burnett told his wife, "A group of us is going to do something." When she protested, he replied, "If they're going to crash the plane into the ground, we have to do something. We can't wait for the authorities. We have to do something now."
Another passenger, thirty-two-year-old software manager Todd Beamer, spoke and prayed for several minutes on his seatback phone with a telephone supervisor in Chicago. Near the end, she overheard Beamer through the still-open phone line utter perhaps the most memorable words of this tragic day: "Are you guys ready? Let's roll."
Moments later, flight attendant Sandy Bradshaw told her husband, "Everyone's running to first class. I've got to go. Bye." Another caller yelled, "They're doing it! They're doing it! They're doing it!" Then the line went dead.
At 9:57 a.m., the passenger assault on the terrorists began. The cockpit voice recorder revealed the sounds of the uprising—shouts, thumps, grunts, glass breaking. The struggle continued, while terrorist pilot Jarrah desperately rolled the airplane back and forth trying to knock the attacking passengers off balance. He instructed a fellow jihadist to block the cockpit door while he continued to throw the jetliner around the sky, but the determined impromptu assault continued. At 10:02:23 a.m., with the passengers only seconds away from entering the cockpit, Jarrah rolled the jetliner on its back, while his comrade shouted in Arabic, " _Allahu Akbar! Allahu Akbar!_ "—Allah is the greatest! Allah is the greatest!
At 10:03:11 a.m., Flight 93 plunged into a remote field near Shanksville, Pennsylvania—only twenty minutes short of Washington, DC. It hit the ground in a forty-degree inverted dive, at a speed of 563 miles per hour. No one on the ground was injured. The intrepid group of passengers, who fought to the final second, did not succeed in saving their own lives; but by forcing the terrorists to dive the jet into the ground prematurely, they saved countless additional innocent lives and prevented the loss of yet another precious national asset.
### **A Day of Infamy**
Within a space of seventy-seven minutes, four jetliners had been turned into manned, guided missiles and crashed, killing all 265 people aboard. The first three of these had hit targets on the ground, instantly killing many more. However, even this was not tragedy enough for this day of horror. The intense heat generated by the combined twenty thousand gallons of burning jet fuel from American Airlines Flight 11 and United Airlines Flight 175 weakened the structures of the two World Trade Center towers such that at 9:59 a.m., the South Tower collapsed, followed at 10:28 a.m. by the North Tower. Before this occurred, 200 desperate human beings jumped from the tops of the Twin Towers to certain death more than a thousand feet below. The total disintegration of the two skyscrapers, with thousands of people still trapped inside, brought the total death toll for the day to approximately 3,000. Among the dead were more than 400 heroic emergency rescue workers.
These are the known fatalities from 9/11. No one will ever know how many additional deaths occurred later—and are still occurring—as a direct result of the massive clouds of lethal toxins and carcinogens released by the collapse of the Twin Towers. Equally immeasurable are the pain and sorrow that families and friends of the victims have had to endure ever since that day.
### **A Failure of Imagination**
The events of 9/11 may well be the best-documented terrorist attack in history: audio transcripts, photos, video recordings, and eyewitness reports; innumerable books, articles, and TV documentaries; and the voluminous findings of a major governmental investigatory commission. Even with this unprecedented wealth of evidence, the inevitable conspiracy theorists soon emerged to present their alternative theories of what "really" happened that day.
Most center around the strange notion that the attacks were all part of a vast US government conspiracy. The hijacked airliners were really remotely flown military aircraft; the Twin Towers were brought down by explosive charges on the ground; Flight 93 was actually shot down by another airplane; Air Force fighter jets failed to intervene because they were ordered to stand down; the Pentagon was not struck by an airliner at all, but by a missile.
The conspiracy vendors never proved any of these allegations, nor did they ever make a strong case for why the US government would commit such atrocities against its own people. Instead, the best-documented—and most scathing—criticism came from the nation's own self-evaluation, which is summarized in the 9/11 Commission's Final Report. The attack, it concluded, was "a failure of policy, management, capability, and above all, a failure of imagination."
The facts are indisputable. A close examination of the reams of hard evidence indicates that the terrorist attacks of 9/11 were exactly what they appeared to be—a well-planned sneak attack by a fanatically suicidal enemy on an unsuspecting nation.
On that day when airliners full of innocent people fell from the skies—only to kill more innocents on the ground—a military officer monitoring the attacks was heard to say, "This is a new type of war...." It was also a new type of _world_ , in which no one was safe anymore. The war on terror continues, and probably will for years to come. Even so, Americans will never forget—or forgive—the criminals responsible for those four tragic, life-ending flights that caused what many still remember as "the saddest day in US history."
## CHAPTER FOURTEEN
## **GONE WITH THE WIND**
**"'PIMPERNEL' HOWARD HAS MADE HIS LAST TRIP."**
British Overseas Airways Corporation (BOAC) Flight 777-A lifted off from Lisbon, Portugal, on the morning of June 1, 1943, en route to England. The scheduled one-thousand-mile, seven-hour flight would take the Douglas DC-3, its thirteen passengers, and its crew of four in a northerly direction across the Atlantic Ocean's Bay of Biscay. Its destination was Whitchurch Airport, located near the southwestern English city of Bristol. Among the more prominent passengers aboard were a Jewish leader with important connections to the British government, a Reuters news correspondent, and two highly placed corporate executives. A slender, fragile-looking, blond-haired Englishman named Leslie Stainer was also aboard the flight; however, just about everyone knew Stainer best by his other name, Leslie Howard, the internationally acclaimed actor who had starred in the blockbuster movie _Gone with the Wind_.
The flight proceeded without incident as the twin-engine transport headed north, up the western coasts of Portugal and Spain, and out over the open waters of the Bay of Biscay. The world was at war, so no area of the European sky was completely safe. This was, however, a regularly scheduled civilian flight over international waters, and it had originated from a neutral country. The pilots hoped that as long as they steered clear of the French coast, hostile aircraft would leave them unmolested.
Just before 11:00 a.m., when the airliner was about two hundred miles off the northern coast of Spain, ground dispatchers in England received a radio distress signal from the crew of Flight 777-A: "I am being followed by strange aircraft. Putting on best speed.... We are being attacked. Cannon shells and tracers are going through the fuselage. Wave-hopping and doing my best." Then, only silence.
A BOAC-operated Douglas DC-3 airliner sitting on the tarmac at Gibraltar airport, circa 1943. Wartime searchlights silhouette it while the Rock of Gibraltar looms in the background. This could be the same plane that Leslie Howard boarded in Lisbon. _Royal Air Force_
### **Murder in the Air**
It was not until later that the world learned how the events of that day unfolded. A German _Luftwaffe_ flight of fighter-bombers, on its way home from a submarine escort mission, intercepted the airliner. The DC-3 was over the Bay of Biscay, heading north at an altitude of between seven and ten thousand feet. The eight twin-engine Junkers Ju 88C-6s belonged to 14 _Staffel_ of _Gruppe_ V/ _Kampfgeschwader_ 40, based in occupied France, near Bordeaux. Almost immediately, two of the Nazi warplanes took aim at the defenseless airliner and opened fire. They quickly set the DC-3 ablaze, after which it plunged into the Atlantic Ocean. The German pilots lingered only long enough to document their victory by circling and taking photographs of the wreckage in the water before it sank. The attack was little more than an aerial firing squad, a mass execution. Those aboard the airliner had no chance to defend themselves and no possibility of surviving. Leslie Howard and sixteen other innocent civilians were dead.
A captured German Junkers Ju 88D photographed sometime after the war at Wright-Patterson Air Force Base, Ohio. On June 1, 1943, a flight of eight German fighters similar to this one attacked the airliner in which Leslie Howard was riding. _American Aviation Historical Society_
The Lisbon–Bristol flight was a regularly scheduled shuttle that BOAC had been operating since September 1940. The airline employed a half dozen Dutch crews and aircraft that had managed to escape to Britain before the May 1940 German invasion of the Netherlands. The ex-KLM Royal Dutch Airlines craft and crews had successfully completed more than five hundred trips over the previous three years, but the long over-water flights were anything but uneventful. Wartime airspace was constantly contested by Great Britain and Germany—and heavily patrolled by marauding fighters from both sides. Twice in the previous six months, _Luftwaffe_ fighters had attacked the same plane in which Howard and his fellow fliers ultimately died on June 1. The DC-3-194, bearing the name _Ibis_ and the registration designation G-AGBB, had in both cases sustained significant damage but managed to arrive at its destination. The passengers were undoubtedly aware of this when they boarded the airliner. These were dangerous skies. On that day, they would be deadly.
When _Ibis_ failed to arrive at Whitchurch, Royal Australian Air Force 461 Squadron sent out two Short Sunderland flying boats to search for the airplane and any possible survivors. They found nothing—no wreckage, no bodies, no oil slick. The ocean had apparently sucked the DC-3 and its occupants into its depths without a trace. A further search the following day—during which a Sunderland was forced to fend off an attack by another _Schwarm_ of eight German Ju 88s—also failed to turn up any sign of the missing airliner. Consequently, all those aboard, including Leslie Howard, were presumed dead. Shortly after, the Germans made it official when they announced that their fighters had shot down a transport plane over the Bay of Biscay. It could only have been _Ibis_.
Both Allied and neutral powers decried the unprovoked attack as a war crime, and BOAC suspended any further daytime flights. Why they had allowed this particular flight to continue in broad daylight without any protection after the previous attacks is a question that remains unanswered. Equally open to conjecture was why the Germans decided to shoot down this particular airliner—which had flown this route on a regular basis for the past three years—on this particular day. Was it a coincidence, or were they specifically targeting one of the VIPs aboard?
The Douglas DC-3 _Ibis_ , as it appeared when operated by KLM. Here, it sits on the tarmac with engines running, date and place unknown. In 1940, the British Overseas Airways Corporation (BOAC) leased this aircraft from KLM and painted it in brown and green camouflage colors. On June 1, 1943, Leslie Howard and sixteen other innocent civilians died in it when German fighters shot it down into the Bay of Biscay.
Map showing the intended route of Flight 777-A from Lisbon, Portugal, to Bristol, England. German fighters downed the unarmed Douglas DC-3 about two hundred miles off the northern coast of Spain.
### **A Talented Patriot**
Leslie Steiner—later anglicized to "Stainer"—was born in London on April 3, 1893. After graduating from an exclusive London boys' school, he worked as a bank clerk until the outbreak of World War I, during which he served in the trenches as a subaltern in the British Army's Northamptonshire Yeomanry. After suffering a severe case of shell shock, the Army invalided him out of the service.
Soon, he began to pursue acting, initially as a form of therapy recommended by one of his doctors. Before long, the talented and handsome young British performer, who by now had adopted the stage name Leslie Howard, had become a star of both stage and silent screen in England. It was only a matter of time before he migrated to the United States, where he found even greater success—first on Broadway, and later, in movies. During the 1930s, he made twenty-two films, eighteen of them in Hollywood. Critics still regard several of these, including _The Scarlet Pimpernel_ and _The Petrified Forest_ , as classics. His work in Hollywood culminated in 1939 with a costarring role, opposite Vivien Leigh and Clark Gable, in the immortal Civil War epic _Gone with the Wind_. Over the course of his illustrious career, Howard received many awards and two Academy Award nominations for Best Actor.
Yet, Howard was not only a talented actor and well-known star—he was a patriot. When Europe went to war again in 1939, he returned at the age of forty-six to his native England and began serving his country by producing, directing, starring in, and even financing anti-Nazi propaganda films. These included _"Pimpernel" Smith, In Which We Serve, The Lamp Still Burns_ , and _The First of the Few_. In the latter movie, which was probably the best of Howard's wartime films, he portrayed R. J. Mitchell, the cancer-stricken designer of Britain's most famous fighter, the Supermarine Spitfire.
Leslie Howard's patriotic activities extended well beyond filmmaking. He participated in wartime fundraising activities and, aided by his fluency in German, made numerous anti-Nazi radio broadcasts. His effectiveness in this regard—along with his Jewish background—made him a focal point of hatred from anti-Semitic Nazi leaders. Nazi Propaganda Minister Joseph Goebbels, whom Howard had personally lampooned in one of his movies, particularly despised him. This loathing was so great that the notorious British traitor and Nazi propagandist William Joyce—better known to the world as "Lord Haw-Haw"—publicly announced that Goebbels intended to execute Howard if he ever got him in his grasp.
Leslie Howard signs autographs for fans on February 15, 1943, while visiting US Eighth Air Force personnel at Watford, England. A few months later, the Hollywood idol, propagandist, and patriot would be dead. _US Army Signal Corps_
Many have speculated that this political aspect of Howard's career led to his death. They believe he was the primary target of the German aerial marauders that blasted his airliner out of the sky. There is no evidence he really was a spy, as the Germans alleged, but his activities in support of the British government were no secret. The British Council, an organization promoting British culture, had asked Howard in 1943 to travel to neutral Spain and Portugal on a "goodwill tour." When he hesitated, British Foreign Secretary Anthony Eden personally convinced him to go. Officially, he was visiting the two neutral countries to promote his movies, but his underlying purpose was to drum up support for the Allied cause against the fascists. Consequently, he spent the entire month of May traveling throughout the Iberian Peninsula making public appearances. His efforts were highly successful, even though Spain—technically a neutral country—was controlled by fascist dictator Gen. Francisco Franco. Not only did nearly a thousand cinemas in Spain and Portugal willingly show Howard's anti-Nazi propaganda movies, but Portuguese viewers also selected his _"Pimpernel" Smith_ as film of the year. Howard may also have used his connections to meet with General Franco himself, to lobby for Spain's continued neutrality.
German agents shadowed Howard everywhere he went in Iberia and reported on his activities. Goebbels was furious at the headway the British actor was making in direct opposition to his own Nazi propaganda efforts in the peninsula's two neutral countries. It seems likely that the Nazis were simply biding their time until they could exact revenge on Howard.
Their opportunity came when Howard suddenly appeared with his manager at Lisbon's Portela Airport and bumped two lower-priority passengers to board Flight 777-A, setting the stage for the aerial assassination that was to follow. It is generally assumed that the German Ju 88s intentionally attacked Howard's unarmed airliner and killed all seventeen people aboard just to eliminate him. But was he the real target?
### **A Case of Mistaken Identity?**
The Nazis were exuberant at Howard's tragic death. Goebbels boasted in his propagandist newspaper _Der Angriff_ , "'Pimpernel' Howard has made his last trip." Howard's propaganda work had done great damage to the Nazi cause, so the Germans were happy to be rid of him.
But had they done so on purpose? According to another compelling theory, the Germans may have had a different and more important target in mind. Howard's business partner and manager, Alfred T. Chenhalls, was with Howard at Lisbon's Portela Airport prior to their last flight. This busy wartime flying field, located in one of the few neutral European countries that remained, was a major crossroads for international travelers and spies alike. For this reason, Howard and Chenhalls were in plain view of many interested observers as they waited to board their flight. Chenhalls was a portly, middle-aged, cigar-smoking chap who just happened to bear a strong physical resemblance to Prime Minister Churchill. Thus some have speculated that Leslie Howard was not the primary target in the infamous shoot-down at all, but rather Winston Churchill.
Proponents of this theory hypothesize that German agents or sympathizers observed Chenhalls in the airport terminal with Howard and misidentified him as the bulldog-jawed "blood, sweat, and tears" prime minister of Great Britain. Leslie Howard even bore a striking resemblance to Churchill's bodyguard, Detective Inspector Walter H. Thompson, so Howard and Chenhalls may have been mistaken for the prime minister and his bodyguard. The fact that Churchill was known to be in the region at the time, touring North Africa, made it all the more believable and bolstered the possibility that he might have been traveling back to England via Lisbon. In fact, British agents may have intentionally planted this disinformation to mislead their German counterparts. Thus, the mistaken identity theory is not as far-fetched as it may seem. Certainly, the Nazis would have jumped at the chance to intercept the prime minister's aircraft. They would have stopped at nothing—including the atrocity of shooting down an unarmed civilian airliner full of innocent people—if there was the slightest possibility of killing their most hated and most dangerous enemy.
This theory has its drawbacks. First, it seems unlikely that trained German agents would have mistaken a man as well-known as Winston Churchill. Equally important, many—including Churchill himself in his memoirs—have questioned why anyone would even suspect that a powerful world leader, with a great navy and air force at his disposal, would risk his life flying in an unprotected commercial aircraft in broad daylight over enemy-patrolled territory. In truth, the prime minister flew back to England from Gibraltar four days later in his own personal aircraft, an Avro Type 685 York four-engine transport. He was never anywhere near Lisbon.
### **Other Likely Suspects**
Others have speculated that the target for the attack on Flight 777-A was neither Leslie Howard nor Winston Churchill. The Nazis would undoubtedly have liked to send to the bottom of the sea certain other passengers aboard the doomed BOAC transport. Topping the list was Wilfrid B. Israel, a wealthy British Jew and ardent Zionist with important connections within the British government. He was returning to Britain from a two-month investigation into the plight of Jewish refugees in the Iberian Peninsula. His plan for enlisting the British government to help provide them aid and safe passage to Palestine no doubt rankled many of those in the anti-Semitic Nazi regime.
The Nazis might also have been targeting other passengers on Flight 777-A—or even someone bumped from the flight at the last minute. Wartime flights were mostly limited to high-ranking diplomats and government officials, military members, and VIPs who had special clearance to fly. For that reason, most BOAC flights between Lisbon and the United Kingdom provided a target-rich environment for enemy assassins. However, if Israel or anyone else on Flight 777-A, other than Leslie Howard, was the object of the heinous attack, no evidence has ever surfaced to prove it.
There remains yet another possible explanation for the downing of Leslie Howard's airliner. Perhaps it was nothing more sinister than a tragic misjudgment on the part of the attacking German airmen. They were five hundred miles from home and getting low on fuel, so the _Luftwaffe_ commander of the enemy formation, _Oberleutnant_ Herbert Hintze, had to make a quick decision: either shoot the airliner down or let it go. The airliner was civilian operated and carrying civilian passengers, but it was also en route to Britain, the country with which his nation was at war. In addition, the DC-3 carried wartime camouflage colors to make it less visible to other aircraft in the sky. This paint scheme was perhaps a logical wartime precaution, but it may also have suggested to Hintze that the airliner had a military purpose that made it fair game.
The German pilots involved in the Howard shootdown, including Hintze, claimed that this was indeed the case. After the war, they contended—to a man—that the Flight 777-A DC-3 appeared to be an enemy military aircraft, and therefore a legitimate target. They expressed regret at having shot down a plane filled with innocent civilians and claimed outrage at not having been informed of the scheduled civilian flight from Lisbon to England that day. Had they known, they insisted, they would never have attacked it.
Not everyone accepts the German pilots' story. Their version may have been a rationalization to ease their guilty consciences—or an out-and-out fabrication, intended to ward off accusations of war crimes. Ultimately, no hard evidence exists to prove that this despicable action was an intentional attempt to assassinate Leslie Howard, Winston Churchill, Wilfrid Israel, or anyone else; nor can anyone say for sure that the shootdown was anything more than a tragic mistake.
The circumstances surrounding the killing of Leslie Howard and the other passengers of Flight 777-A remain unclear, even after seven decades of research that have resulted in numerous books, articles, and documentaries. One thing is certain: it was an appalling tragedy for the victims, their families, and friends. It was an especially painful blow to Howard's fans worldwide, who felt a personal bond with the talented and sophisticated idol of stage and screen.
## CHAPTER FIFTEEN
## **A PLANE THAT FELL IN THE MOUNTAINS**
**"THEY SURVIVED BY BEING RESOURCEFUL."**
On October 13, 1972, a Uruguayan Air Force-chartered transport with forty-five souls aboard took off from the airport at Mendoza, Argentina. Its destination was Santiago, Chile, on the western side of the formidable Andes Mountains. Nothing appeared amiss with the Fairchild-Hiller FH-227D twin-engine turboprop as it took off and faded into the distant Andean skies. Yet, it never reached its destination. No one saw where it went, and no one received a single emergency call from it. It simply disappeared. The ensuing search operation, though extensive, failed to turn up any trace of the missing plane. It was as if a towering mountain peak had snatched it and its human cargo from the sky and swallowed it whole.
It would take ten long weeks for the mountains to give up the secret of the missing transport and its passengers, which would prove to be one of history's most incredible survival exploits. The widespread celebration of this inspiring tale of human endurance was, however, short-lived. Admiration would soon turn to revulsion.
### **A Difficult Flight**
Fifteen members of the amateur Old Christians rugby team from Montevideo, Uruguay, had chartered the military transport to take them to a tournament in Santiago, Chile. For the young players, most of whom were also students, the trip would be an exciting, fun-filled excursion to a foreign country—a new experience for most of them. They chose the Uruguayan Air Force ( _Fuerza Aérea Uruguaya_ ) transport because the charter fee, equivalent at the time to $1,600 US, was considerably cheaper than flying commercial—provided they could fill the seats with paying passengers. To help with that, they enlisted family members and friends to come along. On the day of departure, the practically new American-built airliner was at near capacity with forty passengers and a Uruguayan Air Force crew of five.
A Fairchild-Hiller FH-227D, painted in the colors of Uruguayan Air Force Flight 571 for the movie _Alive_. It is almost identical in appearance to the one that crashed in the Andes Mountains on October 13, 1972.
The transport, designated Flight 571, left Montevideo on October 12 for Santiago, some 850 miles due west. However, bad weather ahead prompted the pilots to divert to Mendoza, Argentina, a city in the eastern foothills of the massive Andean mountain range, 120 miles northeast of Santiago. Here, they prudently decided to wait overnight until mountain conditions had improved.
The experienced captain of the transport, Col. Julio César Ferradas, was well aware of the dangers inherent in flying through the Andes. The mighty mountain chain had claimed more than its share of fliers who dared to challenge its supremacy. It is not only the world's longest continental mountain range, but also the second highest. Because the chain of mountain peaks, or _cordillera_ , in this area reaches upward to well above twenty thousand feet, Ferradas felt it advisable to navigate his heavily loaded propeller-driven aircraft from Mendoza to Santiago at lower altitude through one of the established mountain passes. This required weaving below and through a seventy-five-mile wall of seemingly impenetrable snow-covered peaks. Obviously, this could only be accomplished on the clearest of days. Even with optimal weather conditions, Ferradas had yet another hazard with which to contend: the notoriously treacherous turbulent air currents so characteristic of this mighty mountain chain. Some of his impatient young passengers considered him overly cautious for diverting to Mendoza, but the veteran pilot knew things they did not.
The next day, as Ferradas checked weather reports, one of the impatient young rugby players badgered him and his copilot, Lt. Col. Dante Héctor Lagurara, to go ahead and take off. Ferradas' joking reply was tragically prophetic: "Do you want your parents to read in the papers that forty-five Uruguayans are lost in the _cordillera_?"
The young passenger's taunts aside, Ferradas knew that he had to make a decision. Argentinean law required his Uruguayan military aircraft to vacate the country that day, so he had to either continue the journey as planned or return to Uruguay. The latter would mean a sizeable loss in revenue for the already financially strapped Uruguayan Air Force, so he chose to go on. Fortunately, weather reports indicated that one of the aerial trails through the Andes, the Planchón Pass, would be clear enough to navigate by early afternoon. This left only the turbulence to contend with—though at that time of day, it would be at its worst. Still, the experienced military pilot felt he could safely complete the flight. Consequently, at 2:18 p.m. the big transport and its passengers departed the airport at Mendoza. The date was October 13—a Friday.
### **A Plan Gone Wrong**
The flight plan called for a southerly heading from Mendoza to the Planchón Pass. Here the big, high-winged turboprop would turn west and fly through the mountain pass to the town of Curicó, Chile, which lay on the western side of the Andes. At this point, Ferradas and Lagurara would turn north and begin their descent for a landing at Santiago's Pudahuel International Airport, some one hundred miles to the north.
The plan was relatively straightforward, but executing it would be considerably more complicated. As Flight 571 flew into the pass, a thick layer of clouds still completely obscured the terrain below; but it was too late to turn back. After the pilots weaved their way through the peaks to the location they believed to be the all-important checkpoint at Curicó, they turned and began a blind descent into the clouds. What they could not know is that the tailwind they had last recorded had since become a headwind. With no visible checkpoints to verify groundspeeds or positions, they could only guess where they were. Instead of being over Curicó, they were still in the mountains well east of it. Thus, when they began their descent, they were diving directly into the rocky peaks of the Andes Mountains.
Intended route of Flight 571 from Montevideo, Uruguay, to Santiago, Chile. After diverting overnight to Mendoza, Argentina, the military transport flew south to the Planchón Pass and then west through the mountains toward the town of Curicó, Chile. Here, it would have turned north and descended for landing at Santiago, but it crashed in the mountains before reaching Curicó.
The first indication that anything was wrong was the fierce turbulence that suddenly took away the breath of the plane's occupants as they descended into the mountains below. One unnamed young passenger, in a lame attempt at humor, grabbed the cabin microphone and announced, "Ladies and gentlemen, please put on your parachutes. We are about to land in the _cordillera_." Before anyone could laugh, the aircraft hit a vicious downdraft and plunged several hundred feet, scaring the shtick out of the jokester and everyone else aboard. When this happened yet again, the struggling transport suddenly dropped below the cloud layer and into the clear.
The scene that burst into the view of those looking out their windows was nothing short of nightmarish. Instead of the expected fertile Chilean fields far below, they saw jagged pieces of rock screaming past only ten feet below the wingtip. The pilots immediately shoved the throttles forward for full power and attempted to climb, but it was too late. The right wing smashed into the rocks and immediately sheared off. As the severed wing ripped loose from the plane, it sliced through the tail, tearing it off the fuselage. With it went two crew members and three passengers, all still strapped in their seats. By this time, the left wing too had broken off, leaving the wingless metal canister to plunge like a missile into the snow and rock below. It hit the mountainside at well over two hundred miles per hour, sucking out two more passengers from the now-open rear of the plane. Many of the remaining seats broke loose, crushing bodies and filling the air with a deafening cacophony—not least of which was the screams of fear and agony from those still alive. The fuselage continued to careen down the side of the mountain, like a huge metallic sledge, until it screeched to a stop in a small valley. The air was suddenly very frigid inside the fuselage and for a moment, everything was strangely quiet. Then, all anyone could hear were the hysterical screams. The real ordeal was about to begin.
### **Situation: Hopeless**
Of the forty-five people aboard, thirty-four survived the immediate crash. That _anyone_ lived through the wingless airliner's high-speed collision with the mountainside was truly a miracle, let alone three-quarters of the passengers and crew. Those who somehow escaped serious injury eventually regained their senses. They pulled themselves from the wreckage and began to do what they could to help those who had been less fortunate. Many were in shock and suffering agonizing pain from their injuries.
The dead included pilot Ferradas, the seven sucked out of the back of the plane, and three others in the main cabin who died on impact. Another with a severed leg died of blood loss within minutes. Copilot Lagurara was alive, but wedged in the crushed cockpit next to the pilot. He too would soon die.
The surviving passengers assessed their situation. They were stranded—surrounded by dead, dying, and severely injured people—in an open airliner fuselage on a snowy mountain in one of the most remote spots on Earth. They had no idea where they were, and all they could see in any direction was snow and mountain peaks. According to the airplane's altimeter, they were sitting at nearly twelve thousand feet—more than two miles—above sea level. In this air, any exertion caused shortness of breath and fatigue, and the temperatures could dip to well below zero degrees Fahrenheit. With no source of heat, it was as bitterly frigid inside the thin aluminum walls as it was outside. To add to the misery, there was little of use in the tailless, wingless aircraft cabin. Virtually no food, water, medical supplies, fuel, blankets, or extra clothing were available—and none of those aboard had dressed for anything cooler than an evening stroll in downtown Santiago.
Yet another casualty of the crash was the plane's radio transmitter. This meant the survivors had no way of contacting the outside world for help. Finally, as a crowning touch to the perfectly miserable state of affairs, snow began to fall minutes after the crash. The rapidly accumulating flakes on and around the light-colored fuselage shell rendered it all the more invisible from the air. It was in all respects an utterly hopeless situation for the survivors of Flight 571. They had every reason to envy those who had died instantly and suffered no more.
As darkness began to fall, the crash survivors settled in for the longest, coldest, and most painful night of their lives. All suffered from varying degrees of emotional and physical injury and all were freezing in the subzero, high-altitude environment. They huddled together and took turns pummeling each other to keep their blood circulating. Even so, many of them suffered debilitating injury from frostbite. The only relief they might have had was sleep, but circumstances denied them even that. The bitter cold and the screams of the injured did not permit it. A few were delirious, while others were fortunate enough to be unconscious. Those who escaped injury included three medical students who tried to help those in pain, but without even the most basic medical supplies, they could do very little. If there was any merciful aspect of these darkest hours, it was that none of the survivors had any inkling of what was to come. They were better off not knowing how many more hellish days and nights they would have to endure before rescue—or death—freed them from their misery. Death claimed three more by morning, and many others would perish in the days to come.
Obtaining drinking water was the first order of business for the survivors. As they gasped to breathe in the arid, high-altitude environment, they quickly became thirsty. Fortunately, there were mountains full of H2O all around them, in the form of snow—but they quickly found that sucking on ice while freezing to death was a distinctly unpleasant way to stay hydrated. They soon devised a way to melt the snow in a makeshift container atop the sun-heated metal fuselage. They now had all the drinking water they needed.
A much bigger problem was the almost total lack of food. When the already-hungry survivors inventoried their meager supplies, they found only a few snack items. Even with the most severe rationing, these would last only a few days. Their new high-altitude home was in reality a subzero desert, completely unfit for human or animal habitation. It offered nothing but several feet of snow under which were only a few inedible lichens. It was an environment so inhospitable that even wild animals avoided it. Even had there been any other living creatures available to serve as food, the survivors had no way to catch them.
The Chilean Aerial Rescue Service launched a major search operation immediately after the Uruguayan transport failed to arrive in Santiago. It continued, as weather permitted, for ten days. The desperate survivors repeatedly saw and heard planes in the sky, but help never came. The light-colored, snow-covered metal fuselage was invisible to those overhead. The stranded passengers had been able to get bits and pieces of information on a transistor radio that had survived the crash, and when they eventually heard the devastating news that authorities had suspended the search, their spirits sank. They resigned themselves to remaining on the snowy mountain until they died of exposure or starvation.
The days continued to pass slowly and painfully. The starving survivors grew weaker, more irritable, and more depressed. Some were dangerously close to hysteria. They were also steadily becoming fewer in number. One after another of the more severely injured died, in most cases after a protracted period of excruciating pain.
Then, with the situation seemingly as bad as it could possibly get, disaster struck again. On October 29, the seventeenth night of their ordeal, a freak avalanche fell onto the open-ended fuselage and its beleaguered inhabitants, dumping tons of snow on them as they slept. Eight of them smothered to death before the others could disinter them from their snowy grave. The living now numbered only nineteen. In the days to come, this number would dwindle further, leaving a final contingent of sixteen frightened young men.
### **Dying for Something to Eat**
It had become obvious to the survivors that help from the outside world was out of the question. They resolved somehow to take control of their fate. This, however, would require physical strength and endurance, which in turn required adequate nourishment. Most of the few food items they had salvaged from the crash were long gone, and nothing edible existed in their frozen, lifeless world. As they grew weaker and more desperate by the day, only one solution remained. One of the starving survivors, Fernando Parrado, was the first to verbalize it. He stated perhaps jokingly that if necessary, he would "cut meat from one of the dead pilots—after all, they got us into this mess." Before long, the discussion took on a more serious tone.
The starving survivors could no longer ignore the obvious. They needed food, not only to stay alive, but also to maintain enough strength to find help. Yet, there was only one source of protein available to them—the flesh of their dead comrades, lying preserved in the snow and ice all around them. Though sickened by the thought, they eventually came to realize, one by one, that the unspeakable was their only salvation. They began eating the raw flesh of their deceased fellow passengers.
Cannibalism is, with a few exceptions, taboo among the human race. However, "survival cannibalism"—eating human flesh out of necessity to remain alive—is more common. One of the most notorious instances of this occurred in the American West during the winter of 1846 to 1847. A group of eighty-one American pioneers, known as the Donner Party, became stranded in the snowy Sierra Nevada Mountains while traveling by wagon train to California. To avoid starvation, some of them resorted to devouring the flesh of those who had died. The Flight 571 survivors found themselves in a similar predicament. Not surprisingly, they responded in a similar fashion.
### **Expedition to Civilization**
The days and weeks in the freezing aluminum fuselage shell passed slowly for the remaining sixteen survivors as they learned to cope with the brutal conditions. Thanks to their grim new source of nutrition, they slowly regained some of their lost strength. They elected a few of the fittest and strongest to venture out on an expedition to find help. Those selected received extra rations and an exemption from their daily duties so that they might rest and gain strength.
Finally, on December 11, traveling conditions were right. It had been nearly two full months since the unscheduled arrival of Flight 571 at its final resting place on the mountain. Even the supply of human food was running low. It was now or never. They had previously launched a number of local excursions, none of which turned up any signs of civilization. This time, however, the selected team of three resolved to keep going until they either found help or died trying.
A few hours after departing, the three explorers decided that one of them should hand over his provisions to the other two and return to the airliner. The extra food the remaining two carried would increase their range significantly—perhaps enough to make all the difference. The remaining two, Roberto Canessa and Fernando Parrado, continued on doggedly. They climbed, waded through snow up to their hips, and slipped and slid up one mountain slope and down another. One of the peaks they traversed towered to an altitude of nearly fifteen thousand feet. With nothing to lose, they kept going.
After seven days, they staggered over a ridge and saw before them a river and patches of green. As they continued on, they came upon cows, an empty soup can, and a horseshoe—all signs of human habitation.
Finally, after ten brutal days and forty-three miles of slogging up and down icy peaks, they made contact with Chilean cattlemen on the far side of a small stream. The note Parrado wrote and tossed over to them began, "I came from a plane that fell in the mountains..."
After ten weeks of icy hell on earth, the ordeal was almost over. Authorities quickly initiated a coordinated rescue operation, and by December 23, all remaining survivors were back from the mountain and reunited with their families. The press dubbed it a "Christmas Miracle." Seventy-two days had elapsed since the doomed military transport crashed onto the mountainside. Of the forty-five people originally aboard, only sixteen were still alive.
### **Aftermath**
The world embraced the young heroes who had returned from the dead, but the sixteen survivors carefully avoided any discussion of the act they had committed to stay alive. The fact that nearly all required immediate hospitalization was fortunate, in that it served to isolate them from the throngs of onlookers, well-wishers, and reporters.
All the young men were dirty, unshaven, and emaciated—having lost anywhere from thirty to eighty pounds each. Many of their crash injuries still had not healed, and they suffered from a variety of ailments: hypertension, irregular heartbeat, skin infections, sunburn, conjunctivitis, and more. Still, it was evident to their doctors that after ten weeks with no food, they should have been in even worse condition. The truth finally emerged that they had survived by eating human flesh.
This shocking news created a furor. How could educated young Christian men have committed such a horrible sin? One Chilean newspaper even printed their story under the headline, "May God Forgive Them." Others, however, understood the necessity of what they had done. Among these was the Catholic Church, to which most of the survivors belonged. Church leaders compassionately decreed that the survivors' cannibalistic practice was no more of a sin than receiving an organ transplant—it was just another way for the dead to donate tissue to the living.
Noted archeologist Dr. Julie Schablitsky wrote of the cannibalistic Donner Party, "They didn't survive by eating each other; they survived by being resourceful." The same is true of the Andes crash survivors. These young men, with a burning desire to live, did what they had to do. They survived because of their resilience, and because of the self-discipline, teamwork, and ingenuity they displayed under the worst conditions imaginable. As for the act they committed to remain alive: would anyone else, suddenly and unexpectedly thrust into a similar situation, behave any differently?
Flight 571 crash site memorial, as it appeared in 2006.
Regardless of how one chooses to view the actions of the Andes survivors during their brutal ten-week ordeal of death and misery, theirs was one of the most amazing feats of human survival ever recorded. The fact that they _did_ survive to live full lives was a testament to them and a tribute to the twenty-nine who remained forever on the mountain.
## CHAPTER SIXTEEN
## **A HAUNTING DISTRACTION**
**"HEY, WHAT'S HAPPENING HERE?"**
On the dark, clear evening of December 29, 1972, Eastern Airlines Flight 401 departed New York's JFK International Airport for Miami. The nearly new Lockheed L-1011 TriStar wide-body carried 163 passengers and 13 crew members. The two-hour flight was without incident until 11:34 p.m., when it approached Miami. The captain ordered the landing gear lowered, and immediately noticed the glaring absence of the green nose gear indicator light. This meant one of two very different things: either the indicator light had simply malfunctioned, or the nose gear was actually not down and locked. The pilot-in-command, Capt. Robert Loft, a thirty-year veteran with nearly thirty thousand flight hours, now had to determine whether he had a true in-flight emergency or simply a burned-out light bulb. After advising Miami Approach Control of the situation, he climbed out of the pattern to two thousand feet and turned onto a westerly heading away from the airport to troubleshoot the problem.
At 11:41, the Miami controller became concerned. He noticed that the airliner had descended from its already low altitude to only nine hundred feet above the ground. He immediately radioed the airliner, asking, "[H]ow are things comin' out there?" Capt. Loft, apparently satisfied by this time that their only problem was a faulty indicator light, advised that they were returning to the airport to land. Seconds later, the big Lockheed vanished from the radar screen. Unknown to the controller, it had just plunged into the swampy Florida Everglades and shredded into a million pieces, killing or fatally injuring 101 of the 176 people aboard.
Eastern Airlines Lockheed L-1011-385-1, registration number N310EA. On December 29, 1972, this airliner crashed into the Florida Everglades, killing or fatally injuring 101 of the 176 people aboard. Eastern had only acquired the new "Whisperliner" the previous August. _© Jon Proctor Collection, with permission_
The crash of Flight 401, though horrendously tragic, was far from the deadliest in US history. It is, however, unique because it was one of the most preventable aviation accidents ever to occur—and it initiated a series of ghostly in-flight airline encounters that rocked the commercial airline industry.
### **A Moment of Inattention**
NTSB investigators were fortunate in having a wealth of information relating to the crash of Flight 401. At their disposal were records of all radio communications, survivor interviews, evidence from the crash scene, and data collected from the cockpit voice recorder and flight data recorder. From all this, they were able to piece together an exceptionally clear picture of the events leading up to the crash. That picture was not a pretty one.
The Flight 401 crew faced a serious issue. They had to determine whether the TriStar's nose gear was down and locked, as it should have been. The answer to this question meant the difference between a routine landing and a nose-down, potentially deadly belly slide to a screeching stop.
Determining the gear's status was difficult since it was not visible from any vantage point inside the jetliner. The only way to check it was for a crew member to climb down into the forward avionics bay below the flight deck and visually verify, through an optical sighting device, the proper alignment of the nose gear indices. Captain Loft had assigned the flight engineer, Second Officer Don Repo, to that task. Unfortunately, it was so dark down there that Repo was unable to make a definitive determination. Therefore, the copilot, First Officer Albert Stockstill, began working on the other possible culprit—the nose gear light indicator assembly. If it was found to be defective, they would assume that the gear was functioning properly and proceed with a normal landing.
Thus, two of the three crew members responsible for flying the big jet, skimming just above the ground at a speed of 227 miles per hour, were engaged in other activities. Only Captain Loft, who was becoming increasingly impatient at the delay, was still at the controls. Soon, he placed the big jet on autopilot and turned his attention, as well, to the stubborn indicator light with which Stockstill was still struggling. Now, no human was in control of the giant airliner streaking just two thousand feet over the Florida Everglades on that moonless night. Consequently, no one noticed the altitude-warning chime that sounded in the cockpit at 11:40:38. If either of the pilots had heard it, they would have known that the jet had just dropped 250 feet. Now, they were flying at an altitude of only 1,750 feet... and still heading down.
Eventually, the captain and first officer decided that the light was indeed malfunctioning, as they had suspected, and that the gear was in all likelihood down and locked. Exactly one second after the pilots advised approach control that they were heading back to the airport to land, Stockstill was recorded as saying: "We did something to the altitude," to which Loft replied, "What?"
Stockstill then asked, "We're still at two thousand, right?" Loft then spoke the last words of his life: "Hey, what's happening here?" He then shoved on full throttles.
Five seconds later, at 11:42:12, the giant airliner crashed into the marshy ground. It hit with sufficient force to scatter the wreckage over eleven acres. Littered among the pieces were its 176 former occupants, 101 of whom were dead or dying. Among the latter were Captain Loft, First Officer Stockstill, and Second Officer Repo.
### **A Most Unsterile Cockpit**
The question with which investigators concerned themselves was relatively straightforward: why did the big Lockheed descend to the ground undetected when its Autoflight autopilot system had been set for two thousand feet? The NTSB crash investigators considered several possibilities and concluded that when Captain Loft turned to speak with Second Officer Repo, he may have inadvertently bumped against the control yoke. This would have disengaged the "altitude hold" function of the jet's Autoflight system and put the aircraft into a slow descent toward the ground. By the time the distracted pilots noticed that they had been losing altitude, it was too late.
As for the nose gear issue that precipitated the fatal series of events, post-crash analysis verified that the nose gear warning light lens assembly had malfunctioned. Investigators never determined if the nose gear had been in a down and locked position, but in all likelihood it was in perfect working order, and the landing would have been uneventful. The entire tragedy thus began because of nothing more significant than a faulty $12 light bulb. It was the inattentiveness of the crew—coupled with a series of unlucky coincidences—that doomed the giant airliner and the lives of 101 people.
That an entire flight crew of seasoned professionals could become so distracted from their primary duties as to allow their aircraft to fly into the ground seems too unlikely to believe. However, this was exactly what happened in the case of Flight 401. The highly experienced and otherwise competent crew members temporarily forgot the cardinal rule of airmanship that instructors drill into every student pilot from day one of flight training: "First, fly the airplane!" This means that no matter what other issues are occurring in the cockpit, the pilot must above all else maintain control of the aircraft. Failure to heed this rule renders all other issues instantly and completely irrelevant.
Human factors researchers from the FAA and NASA have long considered flight-deck distractions an important area of concern. Their focus over the past decades on this subtle killer was inspired in large part by the Flight 401 tragedy—a crash involving a crew so distracted that they allowed their perfectly airworthy aircraft to crash. One of the regulations resulting from this research was the so-called "Sterile Cockpit Rule," enacted by the FAA in 1981. This prohibits flight crews from engaging in nonessential activities during takeoffs, landings, and at other critical times during flights.
Unfortunately, even with this and other such rules in place, cockpit distractions continue to plague the aviation industry. One highly publicized example of this occurred on October 21, 2009, when a Northwest Airlines Airbus A320 flying out of San Diego with 149 people aboard overflew its destination of Minneapolis–Saint Paul by 150 miles. The two pilots of Flight 188 had apparently become so involved with their laptop computers that they lost track of time and of where they were. Air traffic controllers were unable to contact the pilots, making the entire scenario seem suspiciously like a hijacking. Consequently, aviation authorities alerted the White House, while the Air National Guard readied fighter jets for takeoff to intercept the wayward airliner. The very embarrassed pilots only turned the Airbus back to the airport when controllers finally managed to contact them and advise them of their blunder. The plane eventually arrived safely at its destination—seventy-four minutes late—but both pilots were suspended and their licenses revoked. The captain and first officer of this aircraft had more than thirty thousand flight hours between them, proving once again that experience is not enough to protect a pilot from distractions in the cockpit.
An excerpt from the NTSB report on the Eastern Airlines Flight 401 crash. The impeccable qualifications of the three flight deck crewmembers were not enough to prevent this tragedy. _NTSB_
The events that occurred in the cockpit of Flight 401 that dark night cost the lives of 101 people, including all three members of the flight crew whose responsibility it had been to prevent such a horrendous accident. The blame for this needless tragedy rested squarely on their shoulders. It could even be that the intense sense of guilt and regret, which they would undoubtedly have felt had they lived, caused two of them to return and try to make amends... in the afterlife.
### **Flying Ghosts**
In the months after the crash, numerous reliable witnesses reported seeing extremely lifelike ghostly images of Captain Loft and Second Officer Repo. Most of the sightings occurred on Eastern L-1011 airliners fitted with parts salvaged from the wreckage of Flight 401. One senior Eastern Airlines captain reported seeing Repo "sitting there clear as day" in the jump seat of his airliner during a flight. When the captain turned away and then looked back, the apparition was gone. On another occasion, a passenger called a flight attendant to check on a man sitting next to her wearing an Eastern Airlines pilot uniform who appeared to be ill. He disappeared before her eyes, but she later identified him from a photo as Repo.
Passengers and crew reported other sightings of Repo, in some instances with him performing tasks aboard the aircraft and warning of impending trouble. Credible witnesses also reported similar sightings of Captain Loft's ghost. They saw his image appear on different occasions and then vanish. These alleged sightings circulated as rumors throughout the airline community. When word leaked out, they quickly became legendary—and the subject of a bestselling book and made-for-TV movie. They even inspired the song "The Ghost of Flight 401," which appeared on the 1979 Bob Welch hit album _Three Hearts_.
Author John G. Fuller collected a great deal of anecdotal evidence from seemingly legitimate sources in support of these sightings and presented them in his book _The Ghost of Flight 401_. Not everyone, however, was receptive to such paranormal activities occurring on Eastern Airlines aircraft. Company president Frank Borman reportedly called it "a bunch of crap," and the company threatened employees with either dismissal or referral for psychological evaluation if caught spreading ghostly rumors. Likewise, some family members of the alleged ghosts were not pleased. One of them sued author Fuller over some of the assertions made in his book.
As is the case with just about all of history's ghost sightings to date, the validity of the Flight 401 apparitions can never be proven—or disproven. However, reported ghost sightings in aviation did not originate with those of Flight 401. The world of flight has long had its fair share of ghostly appearances, unexplained events, phantom aircraft, and messages from the beyond. One of the earliest and most enduring of these originated at a desolate Royal Flying Corps airfield near Montrose, Scotland. There a young RFC pilot died in 1913 when his sloppily repaired airplane fell apart in the air. Over the ensuing years, numerous pilots stationed there routinely reported seeing his image in various places around the base.
Another eerie and much-publicized series of supernatural events occurred in relation to the October 5, 1930, first—and last—passenger-carrying flight of the British airship R-101. On its maiden voyage, the 777-foot-long dirigible crashed and burned in France, killing forty-eight of the fifty-four people aboard. As related by John G. Fuller in another of his books, _The Airmen Who Would Not Die_, famed psychic Eileen Garrett had, before R-101's last flight, a vision of a great airship crashing in flames. She voiced her premonition to the Director of Civil Aviation, Sir William Sefton Brancker, who laughed it off as nonsense. He of all people should have listened, as he was one of those destined to die in the crash. A second warning about R-101 came, purportedly, from the spirit of a famed British pilot named Raymond Hinchcliffe, who had disappeared in a 1928 transatlantic attempt. The dead pilot's spirit reportedly appeared during a séance, to warn one of his friends serving on R-101 that the new airship had serious structural problems.
The final R-101 paranormal incident was the most startling of all. Two days after the crash, the spirit of the airship's dead captain, Royal Air Force Flight Lt. H. Carmichael Irwin, unexpectedly broke into a séance that psychic Garrett was conducting. She had been trying to contact the spirit of the recently deceased author, Sir Arthur Conan Doyle. One of those attending the séance recorded Irwin's ghostly yet vivid description of the problems he encountered on his last flight. His narrative was amazingly complete, with accurate and highly detailed technical facts of which Garrett, ostensibly, could have had no knowledge.
There is an ironic epilogue to the series of supernatural occurrences surrounding R-101—the dead airship was reincarnated. Zeppelin, the German airship company, salvaged several tons of the burnt and twisted metal remains of R-101. They melted down the valuable Duralumin alloy and recycled it for use in another great airship—the _Hindenburg_.
The ghosts of Flight 401 may or may not have been real except to those claiming to have seen them. However, the legend remains. The ghosts from this ill-fated flight, as well as aviation's other supernatural events reported over the past century, remain real enough in some circles to start a lively discussion.
Flight 401 has become a textbook example of how even seasoned professionals can be lulled into a dangerous in-flight complacency. The environment in which pilots operate is one that tolerates no such inattentiveness. The deadly distraction that occurred in the cockpit of Flight 401 that dark evening over the Everglades was only the latest version of a scenario that had played out many times in the past—and that can still occur in any cockpit at any time.
In 1900, Wilbur Wright, three years prior to his and brother Orville's historic first powered flight on the sands of Kitty Hawk, wrote: "I have learned that carelessness and overconfidence are usually far more dangerous than deliberately accepted risks." His wise observation remains as true today as when he wrote it. Only strict and constant attention to the demands of airmanship can prevent future reoccurrences of such needless flights of no return.
## CHAPTER SEVENTEEN
## **LOST LADY OF THE DESERT**
**"A FATAL COMBINATION OF INEXPERIENCE AND BAD LUCK."**
On the afternoon of April 4, 1943, twenty-five US Army Air Forces B-24D Liberators lifted off from their base in North Africa. Their mission was a high-altitude bomb run to Naples, Italy. As the loose formation proceeded toward the target, the bombers drifted further apart, until one of them lost all contact with the rest. The bomber and its nine-man crew never returned, and they left no clue as to their fate.
The big bomber's unexplained disappearance was unusual but not unheard of. It was just one of thousands of aircraft in World War II to depart for a mission and never come back. What makes this story unique is what happened _after_ the flight ended. The tragic facts surrounding the disappearance of this airplane and crew would not surface until fifteen years later, when—almost literally, out of nowhere—they reemerged onto the public scene in spectacular fashion.
### **A Mission Destined to Fail**
The bomber was B-24D number AC41-24301, bearing a big white "64" on its nose along with the name _Lady Be Good_. The name may have originated from a 1924 Gershwin Brothers Broadway musical starring the rising young performer Fred Astaire, but its significance to this bomber is a secret lost to history. The airplane was brand new, having recently rolled out of the Consolidated Aircraft Company's San Diego factory. The US Army Air Forces accepted the ship on December 8, 1942, and it arrived at Soluch Airfield just in time for the April 4 mission.
Ill-fated crew of _Lady Be Good_. From left: Hatton, Toner, Hays, Woravka, Ripslinger, LaMotte, Shelley, Moore, and Adams. On April 4, 1943, they became lost on their very first mission and had to bail out at night over the Libyan Desert. It would take more than fifteen years for the rest of the world to learn their fate. _US Air Force_
The airfield at Soluch was a makeshift US Army Air Forces bomber base located thirty miles southeast of Benghazi, Libya. Here, the new Liberator found its way to the 514th Squadron of the Ninth Air Force's 376th Bomb Group.
The crew designated to fly _Lady Be Good_ on the April 4 mission was as new as the bomber, having recently arrived from stateside. They were: pilot 1st Lt. William J. Hatton; copilot 2nd Lt. Robert F. Toner; navigator 2nd Lt. "Dp" Hays; bombardier 2nd Lt. John S. Woravka; engineer Tech. Sgt. Harold J. Ripslinger; radio operator Tech. Sgt. Robert E. LaMotte; and gunners Staff Sgts. Guy E. Shelley Jr., Vernon L. Moore, and Samuel E. Adams. All were in their early to mid-twenties, just out of training, untested, and inexperienced. It would therefore be the first wartime mission for both bomber and crew. It would also be their last.
US Army Air Forces B-24D Liberator, built by the Consolidated Aircraft Company. This is the same model as _Lady Be Good_, which was lost on April 5, 1943, in the Libyan Desert during its first, last, and only wartime mission. _US Air Force_
_Lady Be Good_ was one of twenty-five B-24 Liberators that took off on Mission 109 the afternoon of April 4. Their objective was Naples Harbor, 750 miles to the northwest, on the far side of the Mediterranean Sea. The plan was to hit the target at dusk and then scoot home under cover of darkness. The challenging navigational aspect of this 1,500-mile overwater roundtrip increased the difficulty of the mission, but the B-24 was well suited for it. Though an ugly duckling compared to its sleek older sibling, the Boeing B-17 Flying Fortress, the thirty-ton, high-wing Liberator was equally capable. Powered by four 1,200-horsepower Pratt & Whitney turbo-supercharged radial engines, the twin-tailed B-24 could top three hundred miles per hour, reach altitudes up to thirty-five thousand feet, and fly as far as 3,000 miles nonstop. If attacked, its crews could defend themselves with a bristling armada of a dozen or more .50-caliber machine guns. Most important, bombardiers on this outstanding airplane could employ its top-secret Norden bombsight to drop with devastating precision—at least for 1943—a four-ton payload of high explosives. The Liberator's value to the US war effort is best gauged by the approximately 18,500 that were built—more than any other US combat airplane in history.
_Lady Be Good_ and the other twenty-four bombers accomplished their mass takeoff from Soluch that day with difficulty, in the midst of a sandstorm. Several of the big bombers had to return with engines fouled by blowing desert sand sucked into their inner workings. The rest headed out in a scattered formation over the Mediterranean toward Italy. _Lady Be Good_ was among the last of a second wave of thirteen bombers to take off. She struggled off the graded sand runway at 3:10 p.m., more than ninety minutes after the first wave of twelve bombers had departed. By this time, they had long since disappeared into the distance. Meanwhile, nine of those in the second wave turned back with mechanical problems, leaving only _Lady Be Good_, with her rookie crew, and three other bombers to find their way to the target and back alone. It was a bad start, and things would only get worse.
Portion of map believed to have been used in the _Lady Be Good_ search operation. It shows the bomber's course from its base at Soluch, Libya, to its target at Naples, Italy, and back. _US Air Force (labels added by author)_
No one knows exactly what happened during _Lady Be Good_'s first and last mission. If the bomber actually reached Naples, the crew probably did not have sufficient visibility to drop their bombs with any accuracy. With no clear target in sight, and possibly other problems plaguing them as well, they would have turned back toward home and eventually jettisoned their bombs into the sea. By this time, they had definitely separated from the other three bombers with which they started the mission. Failing to hit the target was bad enough, but the neophyte crew now had to make the formidable overwater flight back home alone, in total darkness, and under strict wartime conditions of blackout and radio silence.
Some have suggested that the plane's navigator, Lieutenant "Dp" Hays, was simply not up to the task. The twenty-three-year-old former bank clerk—whose first name really was "Dp," or sometimes, "D.P."—was perhaps too inexperienced to tackle a navigational problem as complex as the one he faced on his first mission. Had things gone as expected, he would have had little to do; his pilot would simply have followed the rest of the formation to the target and back. Now, he found himself alone and a long way from home, at night, and with virtually no visible checkpoints or other navigational aids to guide him. For this mission, the bomber carried only slightly more than enough fuel to get home, so there was not much room for error.
But did Hays fail? At around midnight that night, the men anxiously waiting for plane No. 64 back at Soluch heard the unmistakable deep throb of a Liberator's big radial engines high over the airfield. They fired off flares to alert it, but it continued to drone on until the sound faded into the southeastern sky. This was undoubtedly _Lady Be Good_ since, by now, they had accounted for all the other bombers on that day's mission. Apparently, Hays had managed to guide his plane back home, after all. But why did they not land? And why, apparently, had no one aboard seen the flares as the plane passed over the field?
The likely answer is that they were not looking for either the airfield or the flares. Navigator Hays probably had no idea he was anywhere near Soluch when they overflew it. As far as he knew, they were still well out over the Mediterranean. To his credit, he had maintained the proper course, but a strong tailwind had carried them much faster and further than anticipated. Consequently, they were still at relatively high altitude and probably not even looking down when they passed over the airfield. Any flares sent up therefore went unnoticed in the dusty haze below, as _Lady Be Good_ droned ever deeper into the Sahara Desert.
None of this was clear to anyone at the time—either the nine men inside _Lady Be Good_ or those at Soluch listening to the bomber fly over. The fate of the plane and crew was a complete mystery. In the ensuing days, the military authorities conducted a cursory search, both in the Mediterranean and the desert, but to no avail. When no traces of bomber or crew appeared, the Army listed them as "missing in action," and that officially closed the book on _Lady Be Good_.
### **Desert Discovery**
That is, until fifteen years later. On May 16, 1958, the mystery of the lost bomber resurfaced in the most dramatic way imaginable. A British civilian oil exploration team flying across the desolate and mostly uncharted Libyan Sahara Desert—an area nearly twice the size of Texas—spotted something on the sand below that looked very much out of place. It was a derelict airplane. They noted its approximate location before continuing. Unknown to them, sitting there in the sand four hundred miles southeast of Soluch was the broken body of the long-lost _Lady Be Good_.
Map, also thought to have been used in the _Lady Be Good_ search operation, showing the lost bomber's course after overflying Soluch and crashing in the desert four hundred miles to the southeast. Had the crew walked southwest instead of northwest after bailing out, they would have found the downed bomber, with its life-saving supplies and still-functioning radio. _US Air Force (labels added by author)_
_Lady Be Good_ after lying unmolested and undetected in the desert for more than a decade and a half. _US Air Force_
The bomber was in remarkably good shape after its pilotless 1943 crash landing. _US Air Force_
US and British military officials learned of the derelict, but showed no immediate inclination to investigate it. It was just another of the many unidentified wrecks that littered this desert where so much fighting had occurred only a few years earlier. This particular one was sitting hundreds of miles deep into the desert, in an area said to be so forbidding that even the tough and fearless native Bedouins refused to enter it. Consequently, military authorities did not consider this unidentified airplane important enough to warrant further investigation.
On February 27, 1959, an oil ground-survey team came upon the downed airplane previously sighted from the air. They found the derelict, which they recognized as a US B-24 bomber, lying broken on a desert plateau in an area of the Libyan Desert known as the Calanscio Sand Sea. It had obviously belly-landed and skidded nearly seven hundred yards to its present location. The painted number "64" was still clearly visible on the nose. The case of the missing _Lady Be Good_ had just been reopened.
Though damaged by the crash, the bomber was in remarkably good condition. The crew might have even walked away from it. However, since no parachutes were aboard, it seemed more likely that the men had bailed out and left the bomber to crash. But instead of crashing, the pilotless Liberator somehow executed a belly landing that night that would have made any Army pilot proud. Many questions remained. Where did this misplaced mystery bomber from the past come from? When and how did it end up here? And most important, what happened to the men who had been aboard it?
The oilmen spent a considerable amount of time exploring the derelict, taking photos, and collecting souvenirs. By all appearances, they were the first humans to lay eyes on the bomber since the day it had crashed. It was amazingly well preserved; although it had been baking in the sun for nearly sixteen years, it looked as though it could have arrived there only yesterday. The arid atmosphere had maintained the big bomber as well as any climate-controlled museum could have. There was practically no corrosion, and most of the paint covering the bomber's aluminum skin was intact; two of its three tires were still fully inflated. Inside, nothing was disturbed—navigational instruments, neatly hung items of clothing, cigarette butts, food, water, chewing gum. Even the coffee remaining in a thermos still tasted like coffee. In addition, most of the Liberator's equipment appeared to function as well as the day it rolled out of the factory; when one of the oilmen pulled the trigger on a .50-caliber machine gun, it fired. The mysterious bomber in the sand was about to become an aviation legend, but the most dramatic discoveries were still to come.
### **Hell on Earth**
When US military authorities learned the specifics of the derelict in the desert and determined its identity, they finally became interested. The unexpected find had generated an international whirlwind of media attention, and family members of the lost fliers demanded answers. Its discovery also fueled some wild speculation: some conjectured that marauding nomads had captured the "Ghost Bomber's" missing crew and sold them into slavery. If so, perhaps they were still alive.
The Army thought otherwise. They quickly dispatched a mortuary team to initiate a search for the remains of the lost crew. From July through October 1959, the team, with support from the US Air Force at Wheelus Air Base near Tripoli, examined the area for miles around the wreckage. The extensive air and ground expedition eventually turned up a trail of discarded crew equipment, including boots and parachutes formed into crude arrows pointing northward—a ghostly message from the past. The search team's physician and resident survival expert estimated that no man could cover a total of more than twenty-five miles under such harsh conditions as existed in this desert; therefore, their bodies had to be nearby. However, after three exhaustive months of searching, the team reluctantly gave up, empty-handed, and accepted the obvious conclusion: the nine crew members lay buried somewhere in their final resting places beneath a decade and a half of blowing sand. No one would ever know where.
Then, on February 11, 1960, events spectacularly disproved this assumption. Another oil team working its way through the desert stumbled upon the skeletal remains of five humans. They were lying close together in what one member of the team described as a "pathetic little camp," littered with equipment that identified them as airmen. All had apparently died at about the same time. Though the bodies were located some eighty miles northwest of where _Lady Be Good_ had come to rest, there could be no doubt: they were the missing crew members from the unlucky bomber.
Among the personal effects found was a diary belonging to the copilot, 2nd Lt. Robert Toner. This terse narrative related some of the details of the doomed crew's last mission. It also graphically described the ordeal they suffered in their final days of life:
One of the four engines from _Lady Be Good_. Its pilotless belly landing in the sand damaged it, but even after years of baking in the sun, much of the paint is still readily evident. _Steven A. Ruffin_
The nose wheel from _Lady Be Good_ displayed at the National Museum of the US Air Force. The tire still held air and, even today, appears unaffected by its years in the desert sun. _Steven A. Ruffin_
_James Whitmore/The LIFE Picture Collection/Getty Images_
**Sunday, Apr. 4, 1943**
Naples–28 planes–things pretty well mixed up–got lost returning, out of gas, jumped, landed in desert at 2:00 in morning, no one badly hurt, can't find John, all others present.
**Monday 5**
Start walking N.W., still no John. A few rations, 1/2 canteen of water, 1 cap full per day. Sun fairly warm, good breeze from N.W. Nite very cold, no sleep. Rested & walked.
**Tuesday 6**
Rested at 11:30, sun very warm, no breeze, spent P.M. in hell, no planes, etc. rested until 5:00 P.M. Walked & rested all nite. 15 min on, 5 off.
**Wednesday, Apr. 7, 1943**
Same routine, everyone getting weak, can't get very far, prayers all the time, again P.M. very warm, hell. Can't sleep. Everyone sore from ground.
**Thursday 8**
Hit Sand Dunes, very miserable, good wind but continuous blowing of sand, every[one] now very weak, thought Sam & Moore were all done. La Motte eyes are gone, everyone else's eyes are bad. Still going N.W.
**Friday 9**
Shelly [sic], Rip, Moore seperate [sic] & try to go for help, rest of us all very weak, eyes bad, not any travel, all want to die. still very little water. nites are about 35°, good n wind, no shelter, 1 parachute left.
**Saturday, Apr. 10, 1943**
Still having prayer meetings for help. No signs of _anything_, a couple of birds; good wind from N.–Really weak now, can't walk, pains all over, still all want to die. Nites very cold. No sleep.
**Sunday 11**
Still waiting for help, still praying. eyes bad, lost all our wgt. aching all over, could make it if we had water; just enough left to put our tongue to, have hope for help very soon, no rest, still same place.
**Monday 12**
No help yet, very cold nite
The diary ended here. It was now clear that the crew bailed out of their fuel-starved bomber at around 2:00 a.m. on April 5. All except for missing bombardier John Woravka formed up and headed northwest on foot. This was the direction from which they had flown, and therefore, where they hoped they would find help. Little did they know how utterly futile their efforts would be; there was nothing but desert for hundreds of miles in that direction. Having only a half canteen of water between them, they trudged through the sand until crew members Hatton, Toner, Hays, LaMotte, and Adams could go no further. There they succumbed to dehydration and the harsh desert environment of sandstorms, 130-degree days, and freezing nights.
The three remaining men—Sergeants Shelley, Moore, and Ripslinger—grimly continued on, in hopes that help would be just over the next sand dune. There was nothing ahead, however, but more sand, more suffering—and ultimately, death.
Tragically, the men's still-intact Liberator had bellied in only a few miles southwest of where they had parachuted and initially assembled. Had they elected to head in that direction, they would have found the bomber—and in it, shelter, provisions, and a working radio. Of course, they had no way of knowing where—or in what kind of shape—it had ended its flight.
The dramatic discovery of the five bodies spawned a second US expedition to find the four still-missing crew members. The search continued through May 1960, but once again, civilian oilmen came to the rescue. On May 12, 1960, members of a work team discovered the remains of Guy Shelley. His body lay baking in the sand more than twenty miles further northwest from where he, Ripslinger, and Moore had left their five comrades to die. Then, five days later, a US helicopter crew member spotted Harold Ripslinger's sun-dried remains lying, like all the others, on top of the sand in plain view. He had travelled several miles beyond even Shelley. Incredibly, both men had somehow managed to wade through the sand for more than a hundred miles and survive for an entire week—with virtually no food or water, in the worst environment on Earth. It was far more than anyone thought humanly possible—an amazing feat of strength and determination—but sadly, one that ended no less tragically. They were still three hundred miles away from civilization. The body of Vernon Moore, the third man who had pressed on with Shelley and Ripslinger, was never found.
On August 11, 1960, oilmen made what was to be the final discovery relevant to the mystery of _Lady Be Good_. They found the skeleton of missing bombardier John Woravka embedded in the sand only a few hundred yards from where the rest of the crew had landed and assembled. The discovery finally answered the question of why he had not joined his comrades that fateful night: he died during the parachute jump. Ironically, he may have been the luckiest of all, considering the unimaginable suffering his eight comrades endured before joining him in death.
The tragic series of events associated with _Lady Be Good_ and crew prompted some to label it a "jinx" or "ghost" plane. This idea found support in the disastrous fate of other aircraft using parts salvaged from the lost bomber. These included an Air Force C-47 using a radio receiver from _Lady_ : it ditched in the Mediterranean, killing the pilot. A C-54 using parts from the "jinx" bomber experienced propeller problems and barely avoided catastrophe. And most significantly, in January 1960, a US Army U-1A Otter fitted with an armrest from _Lady Be Good_ went down off the coast of Libya in the Gulf of Sidra. None of the ten people aboard the Otter were ever found, but one of the few bits of aircraft debris that eventually washed ashore was the ill-fated armrest.
The US Army and Air Force salvaged several parts from the crashed bomber and brought them back to the States for further evaluation, but what remained of the derelict continued to languish in the desert for decades until the Libyan government finally decided to retrieve it and haul it back to civilization. At last report, _Lady Be Good_ lay in pieces at a Libyan government compound near Tobruk. Fortunately, many of its parts and artifacts originally brought back to the United States are still on display in various museums across the country. As for the bodies of her crew, all—except for that of the still-missing Vernon Moore—made their belated trip back to the United States for burial.
A portion of the stained glass window from the chapel at the former Wheelus US Air Base, Libya. It memorializes the nine crewmembers of _Lady Be Good_ who perished in the Sahara desert in 1943. _Steven A. Ruffin_
Seven decades have passed since the first, last, and only mission of _Lady Be Good_ , but her legend lives on. She is undoubtedly the best-known B-24 Liberator ever built—equal in notoriety to her luckier Boeing B-17 Flying Fortress counterpart, _Memphis Belle_. However, while the iconic Belle and her crew, operating from a base in England, were widely feted as the war's first to complete twenty-five deadly bombing sorties intact, _Lady Be Good_ and her crew failed to complete a single mission. In spite of the celebrity surrounding the strange reincarnation of the doomed _Lady_ , nothing can detract from the tragedy of nine young lives snuffed out in the cruelest way imaginable, after a mission that accomplished nothing—all because of a fatal combination of inexperience and bad luck.
## CHAPTER EIGHTEEN
## **THE GHOST BLIMP OF DALY CITY**
**"NOTHING... HAS GIVEN A SATISFACTORY EXPLANATION OF WHAT HAPPENED."**
On the lazy Sunday morning of August 16, 1942, the citizens of Daly City, California, gazed upward at a curious sight. Drifting silently in from the sea with the incoming breeze was a big silver blimp with the large black letters "US Navy" stenciled on its sides. It was floating at low altitude with its cabin door propped open and both engines off. It was so low that two swimmers at the beach tried—without success—to grab the drifting airship and pull it down. Golfers, sunbathers, and hikers watched open-mouthed as it touched down and rose slightly. Then, it drifted into a rocky crag rising up over the beach, causing a cylindrical object to fall from the airship and crash to the ground. The object was a 325-pound depth charge designed to destroy enemy submarines; fortunately, it did not explode.
The errant blimp, now short one of its two heavy bombs and sagging in the middle, rose from the hilly peak that had snagged it. It continued to drift inland and over the town, descending slowly toward the ground—so low that it bumped along the tops of buildings and pulled down power lines along the way. Eventually, it touched down and scraped to a halt in the middle of the street at 444 Bellevue Avenue.
The partially deflated and crewless L-8, as it appeared over Daly City, California, on August 16, 1942. _US Navy_
The L-8 "Ghost Blimp" soon after it came to rest on Bellevue Avenue, Daly City, California. Rescuers looking for the missing crewmembers damaged the airship's envelope, but the navy would soon return the gondola to service. _US Navy via Otto Gross_
An opposite view of the grounded L-8. _US Navy via Otto Gross_
This was something different for the citizens of this quiet coastal community on the southern outskirts of San Francisco. It was not every Sunday morning that a navy blimp landed in the middle of their town. But the biggest surprise was yet to come.
Onlookers cautiously approached the now-stationary airship sitting silently in the street. They were justifiably apprehensive—the United States was at war. Perhaps the blimp was booby-trapped, or maybe enemy agents were inside. Soon, however, their curiosity got the best of them. Like a scene from a vintage science fiction movie, the rapidly growing crowd of Daly City citizens slowly approached the motionless gondola sitting mysteriously, like an alien spaceship, in the middle of the street. The metal and glass car extending below the huge, sagging envelope was resting on the pavement with the door to the crew compartment wide open. When they cautiously peered inside, they found to their astonishment that there was no one aboard. The bystanders searched high and low, and even ripped open the huge air bag to make sure there was no one trapped inside; however, it soon became obvious that the strange object they had before them was a derelict US Navy blimp. So, where was the crew who should have been at the controls?
The mystery of the US Navy blimp, designated L-8, is one of the most enduring and perplexing missing-persons cases to come out of World War II. The explanation for the disappearance of its two-man crew is as elusive now as the day it came to rest on Bellevue Avenue. This strange mission remains one of history's most intriguing.
### **US Navy Blimp L-8**
At 6:03 a.m. on the morning of August 16, 1942, US Navy blimp L-8 lifted off from the Treasure Island Naval Air Facility, located in the middle of San Francisco Bay. Its mission, designated Patrol Flight 101, was to reconnoiter the harbor entrance and other Pacific waters off the California coast.
It was a routine patrol, but an important one. Only eight months had passed since the devastating December 7, 1941, Japanese sneak attack on Pearl Harbor. The danger of enemy submarines lurking off the western coast of the United States was real, and potentially deadly. Less than six months earlier, on February 23, 1942, the Imperial Japanese Navy submarine I-17 had the audacity to surface and shell a US oil refinery just north of Santa Barbara, California. The damage was insignificant, but the public hysteria resulting from it was anything but. Then, on June 21, Japanese submarine I-25 fired on Fort Stevens, a coastal defense installation in Oregon. These attacks sparked widespread fear of an impending Japanese invasion of the West Coast.
Alert levels were justifiably high. No one wanted another sunrise surprise attack at the hands of the Japanese Empire, and navy blimps were ideal for patrolling the coastal waters for marauding submarines. They carried enough fuel to remain aloft for up to twelve hours, they could fly high or low as needed, and they could hover indefinitely above anything of interest in the waters below.
L-8 was one of twenty-two "L-class" nonrigid airships built by the Goodyear Tire & Rubber Company. The navy accepted the blimp at Moffett Field, California, on March 5, 1942, and assigned it to Airship Patrol Squadron 32 (ZP-32). The blimp was then shuttled over to Treasure Island, thirty miles to the northwest, where it began flying operational missions.
L-8—like all in its class—was relatively small, as airships go. Still, it was half as long as a football field and its rubberized cloth envelope held 123,000 cubic feet of buoyant, inert helium gas. A gondola, suspended below, housed the flight deck and crew. Attached to the gondola were the two 145-horsepower Warner radial engines that propelled the blimp to speeds of up to sixty miles per hour. Its respectable payload of more than a ton included a crew of two or three men, 150 gallons of fuel, a .30-caliber machine gun with ammunition, and two Mark 17 aircraft depth charges. Thus, L-8 could not only locate enemy submarines, it could attack and destroy them.
The Goodyear-built US Navy Airship L-8 was one of twenty-two of its type. Powered by two 145-horsepower radial engines, it could carry a crew of up to three, two depth charges, and a .30-caliber machine gun. These features made it exceptionally well suited for antisubmarine patrol work. _US Navy_
The triangle represents the planned patrol route for L-8 on August 16, 1942. Lieutenant Cody had completed the first leg, from Treasure Island to just east of the Farallon islands, when he reported a "suspicious oil slick." No further transmissions were ever received from the blimp's crew.
### **A Highly Qualified Crew**
As the helium-filled navy blimp slowly ascended into the cool air above San Francisco Bay that Sunday morning, its pilot set a course toward the Farallon Islands, located about thirty-five miles west of Treasure Island. The patrol routine then called for a turn northward to Point Reyes, followed by a southeasterly course along the California coastline to Montara, and finally back to Treasure Island. What actually happened on this mission, however, was far from routine and more strangely tragic than anyone could possibly have imagined.
Navy Blimp L-8 normally carried a crew of three, but on the morning of August 16, 1942, only the command pilot, Lieutenant Ernest DeWitt Cody, and his passenger, Ensign Charles E. Adams, were on board. The latter, a newly assigned member of ZP-32, was aboard L-8 that morning for a familiarization flight. Both were highly competent and reliable officers.
Cody, a twenty-seven-year-old native of Mayville, Michigan, was a ten-year veteran of the navy and a 1938 graduate of the US Naval Academy at Annapolis. Having accumulated 758 flight hours as an airship pilot, nearly 400 of which were in L-type ships, he was one of the squadron's senior aviators. Only four months previously, on April 3, he had distinguished himself in L-8 when he piloted it—with the help of air crewman Chief Boatswain's Mate Desmond—to a tricky and highly secret Pacific Ocean rendezvous with the US Navy aircraft carrier _Hornet_ , soon after the ship's departure from San Francisco harbor. The purpose of the blimp's mission that day was to deliver a three-hundred-pound package of parts for the sixteen US Army Air Forces B-25B bombers sitting on the carrier's deck. Unknown to anyone, except those at the highest levels, these bombers were the aircraft in which the famed Doolittle Raiders would soon make history. Their April 18, 1942, surprise bombing attack on mainland Japan, led by Lt. Col. James H. Doolittle, would strike America's first real blow of the war. Lieutenant Cody and L-8 had helped make this iconic mission possible.
The entry for Midshipman Ernest DeWitt Cody from the US Naval Academy yearbook, the _Lucky Bag_. His peers considered him "like his name, earnest and sincere, and moreover a true friend." On August 16, 1942, he and Ensign Charles E. Adams disappeared during a routine patrol flight in US Navy Blimp L-8. _US Navy_
Ensign Adams, on the other hand, was thirty-eight years old and an unrated pilot. He was, however, a seasoned lighter-than-air veteran, having served in navy airships for several years as an enlisted man. Prior to his recent commissioning as an officer, he had achieved the well-deserved senior enlisted rank of chief petty officer. During his action-filled fifteen years in the navy, he had repeatedly found himself in critical situations that, in retrospect, make him a sort of Forrest Gump of the Navy Airship Service—he seemed to always be wherever things were happening. On June 24, 1931, he was present to help extinguish a fire aboard the US Navy airship _Los Angeles_ , and on February 12, 1935, he was aboard the navy airship _Macon_ when it crashed into the Pacific Ocean. Then, on May 6, 1937, he was a member of the ground crew at Naval Air Station Lakehurst, New Jersey, when the airship _Hindenburg_ erupted into flame and crashed before his eyes. Finally, on December 7, 1941—only eight months before his fateful flight on L-8—Adams happened to be aboard the US Navy destroyer _Henley_ during the Japanese attack at Pearl Harbor. There he helped down an attacking enemy airplane and assisted in dropping depth charges on a midget submarine. Whether Adams was cursed for having so many dangerous experiences or blessed to have survived them all, he was, by any measure, an experienced and capable man.
US Navy Blimp L-8, piloted by Lt. j.g. Ernest D. Cody, delivering important cargo to the USS _Hornet_ on April 3, 1942. The carrier, with sixteen US Army Air Forces North American B-25B Mitchell bombers parked on its flight deck, had just departed San Francisco for an undisclosed destination. The bombers, commanded by Lt. Col. Jimmy Doolittle, would soon take off from that very deck and deliver America's first real blow to the Empire of Japan. _US Navy_
The third crew member scheduled for that morning's mission in L-8 was Aviation Machinist's Mate 3rd Class James Riley Hill, a mechanic. Lieutenant Cody scrubbed him at the last minute because the blimp was—for reasons unknown—two hundred pounds overweight. Depending on the circumstances surrounding Adams and Cody's later disappearance, this may have been a very lucky day for Hill. However, had he been aboard, his presence might have made all the difference in the outcome of the ill-fated flight.
### **A Suspicious Oil Slick**
Nothing appeared amiss that August morning as L-8 drifted out over the Golden Gate Bridge and faded into the western sky. At 7:42 a.m.—a little over an hour and a half into the flight—Lieutenant Cody radioed back to Wing Control, "Am investigating suspicious oil slick—stand by." He was at this time about four miles east of the Farallon Islands. Such an observation could indicate, among other things, the presence of an enemy submarine lurking beneath.
The cause of the oil slick remains to this day unknown, however, because this was the last transmission received from L-8. Wing Control tried repeatedly to contact the crew for follow-up information, but received no response. Not until three hours later, at just before 11:00 a.m., did other aircraft in the area report seeing L-8. It was near the coastline north of Daly City and appeared to be in no distress. In reality, however, it was in trouble—something had gone terribly wrong. At approximately 11:15, the unmanned blimp made its dramatic appearance in Daly City.
Navy officials soon received the alarming report that one of their blimps had dropped a depth charge onto a golf course and then crash-landed in the middle of Daly City with no one aboard. One can only imagine the consternation this news must have caused. They immediately dispatched a team to secure the downed blimp, and then launched a search operation to locate the missing men.
Searchers first focused on the land over which the blimp had flown. When this failed to turn up any leads, they accepted that Adams and Cody had—for reasons unknown—gone into the water. Two life jackets were missing from the blimp's cabin, so it seemed certain that the two men were wearing them when whatever happened to them had occurred. They were probably still alive and floating in the Pacific Ocean. With luck, perhaps someone had already picked them up, and they were on their way back home with a very embarrassing story to tell. However, no bodies, alive or otherwise, ever turned up. The two men simply vanished—seemingly into thin air—leaving L-8 to find its own way back home.
### **An Inexplicable Occurrence**
What could possibly have caused two experienced US Navy officers to abandon a perfectly good airship in midair? An investigation of the gondola revealed that in addition to the door being wide open, both engine ignition switches were still in the "on" position; and although the engines were no longer running, the fuel valves were still open and there was plenty of fuel remaining in the tanks. The helium gas valves were in their proper setting, and all components—radio included—were in good working order. The only notable discrepancy was the ship's drained battery, which any number of things could have caused. There was no evidence of foul play, and it was ascertained that the gondola never contacted saltwater at any time during its flight. All rescue equipment, including a life raft and parachutes, was still safely stowed and available for use. Even the classified codebooks aboard were undisturbed. The only things missing, in fact, were the two crew members and the life jackets they routinely wore as a safety precaution.
The condition of L-8 was such that the only reason it had descended at all was that it had automatically released some of its helium. A sudden rise to a higher altitude—possibly, when the two men exited the craft—caused the gas to expand and increase in pressure, resulting in an automatic helium release. Consequently, the partially deflated blimp slowly descended to the ground.
The crews of a fishing vessel and a Liberty ship were the only known eyewitnesses to L-8, as it circled for more than an hour above the oil slick. They reported that it descended to a very low altitude and dropped a smoke flare before eventually ascending and heading back toward San Francisco. They saw nothing else falling from the blimp—including human bodies. In short, Navy investigators failed to find anything about the abandoned airship or its mission that seemed suspiciously amiss. Their official statement reflected this: "Nothing the Navy knows now has given a satisfactory explanation of what happened." It was one of the most perplexing occurrences imaginable and has remained so ever since.
### **Espionage or Accident?**
Over the years, various theorists have proposed explanations for this strange incident. Disregarding those involving alien abductions, time warps, and black holes, one of the more believable scenarios is that a Japanese submarine might somehow have captured the two officers from L-8. This, however, also seems unlikely. There is simply no evidence to support this, and it is difficult to imagine how anyone _could_ have captured them without first bringing the airship down. Furthermore, logic would dictate that Cody or Adams would have immediately radioed back to their base had they encountered the enemy.
Others have speculated that the two men perhaps had some sort of dispute, during which they fell out of the airship and into the water. Once again, there is nothing to support such a bizarre occurrence; moreover, it seems safe to assume that officers of their caliber were professional enough to save such a confrontation for a later time and place.
Another interesting theory came from the commander of Airship Patrol Squadron 32, Lieutenant Commander George F. Watson. He stated under oath that he was puzzled as to why the blimp was two hundred pounds overweight that morning. Others had speculated, perhaps unrealistically, that moisture accumulating on the blimp overnight might have caused it, but Watson doubted this and suggested that an enemy agent might have stowed away on the ship. He could have killed or disabled the two officers, dumped them overboard, and then escaped to a waiting submarine. To support this idea, he testified that only two weeks prior to the L-8 incident, an armed intruder had attempted to break into the hangars at Treasure Island. A guard had exchanged shots with the intruder during a running gun battle. Watson conceded, however, that there was virtually no place in the tiny gondola for a stowaway to hide. Moreover, since none of the classified materials aboard were missing, it seems unlikely that the two men's demise came at the hands of a spy.
Probably the most logical explanation for this bizarre incident is that the two men simply fell out of the gondola into the ocean. Admittedly, it seems somewhat preposterous, but there are many possibilities as to how this could have happened. Perhaps while cruising with the door open for better viewing below, a gust of turbulence catapulted them both out of the cabin; or maybe they fell from outside the gondola while working on a stalled engine. If they did fall, they almost certainly died on impact with the water far below. However, it is puzzling why no bodies ever turned up, given the heavy ship traffic in the area at the time—especially since both men were almost certainly wearing life jackets that would have kept them afloat indefinitely.
The original L-8 gondola in 1942 US Navy colors, as displayed at the National Naval Aviation Museum, Naval Air Station Pensacola, Florida. _Steven A. Ruffin_
A close-up of the left engine of L-8. Crewmembers occasionally had to climb out of the cabin door and onto the engine struts, while in flight, to service the engines. Did either Lieutenant Cody or Ensign Adams—or both—fall from the blimp while doing just that? _Steven A. Ruffin_
Speculation aside, the facts remain unchanged: to this day, no one knows what happened to the two missing crew members of US Navy blimp L-8. Whether they encountered a spy, an enemy submarine, body-snatching aliens, or just bad luck, the result was the same: two good men were gone forever.
A final ironic twist occurred five years later that made the whole affair seem even weirder. On August 22, 1947, Lieutenant Cody's widow, Helen, wrote a letter to the US Navy Bureau of Personnel that ended up in Cody's Department of Defense personnel file. In this letter, she stated that her mother had recently seen her son-in-law, Ernest DeWitt Cody, in Phoenix, Arizona!
Helen explained that her mother, who had known Cody very well, described him as looking "peculiar, as though he were suffering from shock, or a mental illness." For that reason, as well as the fact that Helen had since remarried, she declined to approach him. Helen ended the letter by requesting that the navy look into the matter. However, it is unlikely that they had any inclination to do this.
Did Cody's mother-in-law see a ghost, or did she simply mistake someone else for Cody? And if it really was Cody, how did he survive, and why did he never return home to his wife? Could his disappearance have been the result of some secret mission on which he and Ensign Adams were engaged? As with just about every other aspect of this baffling incident, it appears that no one will ever know.
On August 17, 1943, exactly one year and one day after Cody and Adams disappeared, the secretary of the navy officially listed them as "presumed dead." As for the Ghost Blimp itself, the navy retrieved it from the Daly City street on which it landed, repaired it, and put it back into operation. It served honorably for the remainder of the war before returning, like most other veterans, to civilian life. Goodyear regained possession of it, but kept it in mothballs for the next two decades.
Finally, in 1966, the company decided to restore the gondola, and in 1969, the indomitable blimp returned to service—this time as the Goodyear blimp N10A, _America_. The reincarnated L-8 gondola once again became famous, as millions of people over the next decade watched it droning above major sporting events, flashing advertisements and providing aerial TV coverage to the networks. Finally, in 1982, the war-weary blimp retired for the last time.
However, like a cat with nine lives, the remains of the old L-8 rose once again—at least figuratively. In 2003, Goodyear donated the historic gondola to the National Naval Aviation Museum, located at Naval Air Station Pensacola, Florida. Here, it was restored to its original US Navy configuration and placed on display. It remains there today for visitors to peruse and to contemplate its dark secret from the past.
As the ghostly gondola sits there in eerie silence, who knows what stories it would tell if only it could speak. However, after seventy-plus years—and counting—the world's most famous Ghost Blimp remains silent. Whatever happened during that fateful August 16, 1942, patrol will forever remain a secret.
## CHAPTER NINETEEN
## **FATAL RENDEZVOUS WITH A UFO**
**"IT APPEARS TO BE A METALLIC OBJECT.... I'M TRYING TO CLOSE IN FOR A BETTER LOOK."**
On the afternoon of January 7, 1948, an eerie series of events occurred in the skies over southern Kentucky. They culminated in the death of an American fighter pilot and an escalating sense of public anxiety nationwide. On that day, a Kentucky Air National Guard pilot crashed to his death in a field near Franklin, Kentucky. While tragic, that alone was not the reason for the widespread consternation it caused. Rather, it was due to the highly unusual mission he was performing when he died: the hot pursuit of an unidentified flying object.
### **A Call for Help**
On that fateful day, Capt. Thomas F. Mantell Jr. was leading a flight of four North American F-51 Mustang fighters. They had taken off from Marietta Army Airfield, Georgia, and were ferrying the aircraft to Standiford Field in Louisville, Kentucky. Mantell, though only twenty-five years old, was a seasoned World War II combat pilot with more than two thousand hours of flight time. His unit was the 165th Fighter Squadron, 123rd Fighter Group of the newly activated Kentucky Air National Guard, based at Standiford Field.
At about 2:45 p.m., Mantell's flight was passing over Godman Army Air Field. Located on Fort Knox, approximately twenty-five miles southwest of Standiford, Godman was Mantell's last navigational checkpoint before landing. At that time, he received an unusual radio message from the Godman control tower. The controller informed Mantell that there was an object of unknown identity flying in the vicinity and asked him to investigate it.
A flight of three Kentucky Air National Guard F-51 Mustang fighters. The US military had only recently adopted the "F" designation—it was previously called the P-51. This formation presents a similar image to the January 7, 1948, flight that Capt. Thomas F. Mantell Jr. was leading, while in hot pursuit of an unidentified flying object. _US Air Force_
Witnesses had first sighted the object in the sky some fifty-five miles southeast of Fort Knox. Local law enforcement officers had then alerted authorities at the army base, who ruled out any known military explanation for the object. At around 1:45 p.m., Godman tower personnel sighted the object high in the sky—by then, to their southwest. The observers in the tower, who included the base commander, collectively described the object as being several hundred feet in diameter, white with a red border, and resembling an ice cream cone or a parachute. It appeared to be stationary or moving very slowly. All agreed that they had never seen anything like it before.
The men in the tower had been watching the strange object for nearly an hour when Mantell and his flight of four Mustang fighters roared in from the south. When the tower operator requested their assistance, Mantell first checked his fuel gauge and then agreed to investigate. He immediately turned back to the southwest and began climbing toward the unknown object lurking somewhere in front of him. Two of the three other members of his flight accompanied him, while the remaining pilot continued on to Standiford, low on fuel.
The three fighters continued to climb southwesterly in pursuit of the mysterious object. Mantell had no idea what they were chasing, but at least he, too, could now see it. At fifteen thousand feet, he radioed: "The object is directly ahead of and above me now, moving at about half my speed. It appears to be a metallic object... and it is of tremendous size. I'm still climbing. I'm trying to close in for a better look."
Mantell continued to climb toward the object, but at some point his two wingmen leveled off and lost sight of their leader, still climbing ahead of them. They heard him say in a somewhat garbled transmission that he intended to go up to twenty-five thousand feet for ten minutes and then come back down. Exactly what happened next is a matter of conjecture—because no one ever heard from Mantell again.
A few minutes later, a witness on the ground near Franklin, Kentucky, a small town eighty-five miles southwest of Godman Field, heard Mantell's airplane in the sky, still so high that it was barely visible. He observed as the fighter circled three times and then fell into a power dive. The engine screamed as the sleek fighter accelerated in the dive to a fantastic speed, probably sending both its engine tachometer and airspeed indicator off the dial. As the man watched, he heard an explosion and then saw the fighter break apart in midair, before crashing to the ground in pieces. Although the wreckage covered an area a half mile in diameter, Mantell's body was still strapped into what was left of the cockpit. His shattered wristwatch marked his death precisely at 3:18 p.m. He had just become the Kentucky Air National Guard's first flying fatality—and aviation history's first victim of an unidentified flying object. The town of Franklin, where Mantell crashed, was also the town in which he had been born on June 30, 1922.
### **The Flying Saucer Phenomenon**
The term "unidentified flying object," or "UFO," did not become a part of the public vernacular until 1952. However, human sightings of unknown objects in the sky date back almost to the beginning of time. More recently, Allied aircrew members flying in both the European and Pacific theaters of operations during World War II routinely reported unexplainable objects and lights in the sky. There were so many of these sightings that they gave the objects a name: "foo fighters." What they were, no one knows even today.
Only a few months before Mantell's encounter, two highly publicized otherworldly events had occurred, and these had a profound effect on the public's attitude toward these mysterious objects. First, on June 24, 1947, a civilian pilot in Washington State named Kenneth Arnold reported seeing nine "saucer-shaped" objects traveling at humanly impossible speeds while he was flying near Mount Rainier. He was widely regarded as a reliable witness, and his descriptions of these objects resulted in a term that quickly became a household word: "flying saucer." Then, just two weeks later, the most notorious of all out-of-this-world incidents occurred near Roswell, New Mexico. On July 8, 1947, an officer at Roswell Army Air Field announced that they had recovered a crashed "flying disk." Furthermore, there were rumors that they had found an alien body in the wreckage. The military quickly retracted the announcement and said that the object was really just a weather balloon—but the damage was done. Flying saucers—and aliens from other worlds—were now on everyone's radar.
Because of these recent sightings, people the world over became obsessed with UFOs and extraterrestrial beings. When news of the eerie events surrounding Mantell's death made headlines, it was just more evidence that aliens were about to take over Earth. Mantell's case, however, was much more sinister than previous sightings: now, a flying saucer had apparently attacked and killed an American fighter pilot. What were these strange craft and what were the intentions of the beings inside them? Was an alien invasion imminent? A public mass hysteria was in the making, in large part because the authorities refused to provide credible answers.
The sensational worldwide news coverage of the Mantell incident did little to help the situation. The next day's edition of the _New York Times_ proclaimed in its headlines, FLIER DIES CHASING "FLYING SAUCER" and PLANE EXPLODES OVER KENTUCKY AS THAT AND NEAR STATES REPORT STRANGE OBJECT. Dramatic, perhaps, but also true. For the first time, a military aviator had died trying to intercept an unidentified object in the sky. His Mustang fighter—one of the best and strongest airplanes ever built—had exploded or disintegrated in the air for no known reason, and the pilot had not even attempted to bail out. It was strange.
Moreover, if the facts were ominous, the rumors were downright frightening. Some falsely alleged that Mantell had radioed fantastic statements about what he saw: "My God, I can see people in this thing!" Others suggested that the alien spacecraft had shot his plane out of the sky, or that the occupants had abducted him from his airplane, or that Mantell's body had been shot full of strange holes. None of this was true, but it stoked the growing fires of public hysteria.
What really happened to Mantell is a matter of conjecture, but there is no doubt that he died under very unusual circumstances. No one ever positively identified the mysterious object he was chasing, but it was definitely more than a figment of everyone's imagination: many reliable witnesses, including Mantell himself, saw it. The absence of any definitive answers makes this infamous incident a compelling mystery. However, a logical explanation exists that even the most ardent "ufologist" can accept.
### **Postmortem**
Capt. Thomas Mantell was by all accounts an experienced, careful, and skilled pilot who normally did not take unnecessary risks. He was also a courageous man, as evidenced by his many wartime citations and by the unhesitating manner in which he went in pursuit of the unknown object. If he had any limitation in his résumé at all, it was that most of his flight hours had been logged flying transports at low altitudes—and not as a fighter pilot. Thus, his experience in high-altitude, high-performance fighters was surprisingly limited: only sixty-seven hours.
Since Mantell intended his flight that day to be at low level, none of the three Mustangs was carrying an adequate supply of oxygen. Therefore, military regulations strictly limited the pilots on this mission to a maximum altitude of 14,000 feet. Above that, the air is simply too thin to safely pilot an aircraft without supplemental oxygen. Mantell undoubtedly knew this, but obviously felt that the situation warranted the risk to pursue a potential enemy. His two wingmen, who may have had more high-altitude experience than Mantell, chose the more cautious approach. They broke off the chase when they had gone as high as they felt they could safely go.
Mantell's unrestrained charge upward may have been—both literally and figuratively—his downfall. As his airplane ascended to higher altitudes, perhaps eventually as high as thirty thousand feet, he almost certainly passed out from lack of oxygen. It is difficult to understand why he believed he could remain at twenty-five thousand feet for ten minutes without supplemental oxygen, as he had indicated in his last radio transmission. In reality, he could only have remained conscious for three to five minutes, at most. These actions may have simply reflected his inexperience with high-altitude flight in a non-pressurized aircraft like the Mustang.
Air Force investigators postulated that after Mantell passed out, his aircraft, trimmed for climb, likely continued upward on its own until it finally fell out of control. It ended up in an uncontrolled power dive that probably exceeded the speed of sound—a deadly and all-too-common occurrence for World War II-era high-performance aircraft. Eventually, the sturdy fighter, with a still-unconscious pilot at its controls, broke apart under the tremendous forces. This sudden structural failure—or perhaps a sonic boom resulting from the high-speed dive—was in all likelihood the "explosion" that the ground witness reported. Mantell died instantly when he hit the ground—that is, if not already dead from oxygen starvation or the excessive forces he experienced on the way down.
This logical explanation seems to put to rest the cause of Mantell's death. He did not die at the hands of an alien spaceship. He simply failed to abide by an important safety regulation—and paid for it with his life.
### **The Mysterious Object**
Just what it was that Mantell and his two wingmen were chasing is not so easy to answer. The official explanation was that he was erroneously chasing the planet Venus that winter afternoon. Military authorities went to great pains to explain that the planet was, at the time, visible in the southwestern sky and that it set at about the same time the object reportedly disappeared from sight. Still, this explanation strained the imagination. It seemed ridiculous—even insulting—to propose that the faint appearance of a planet in broad daylight could have fooled hundreds of people on the ground, the experienced and reliable observers in the Godman tower, and a pilot of Mantell's ability. Because of the seemingly preposterous nature of this explanation, many accused military investigators of covering up the real story—that Mantell had actually died at the hands of an alien spacecraft.
What _was_ the "real" story? No one knows for sure what Mantell was chasing, but even some dedicated UFO proponents question that the object in the sky that day was an alien spaceship, or anything else of extraterrestrial design. It simply did not fit the pattern of most of the sightings they consider legitimate. Rather, the most logical explanation is that the unidentified flying object causing all the commotion that day was something decidedly earthly.
Specifically, it may have been a massive high-altitude US Navy research balloon developed for a program known as Project Skyhook. Very few people at the time knew about this extremely specialized unmanned balloon. The navy had launched the first of these on September 25, 1947, only three months before Mantell's death. The colossal helium-filled craft could carry a payload of scientific and photographic equipment to heights exceeding one hundred thousand feet.
Because of the secrecy surrounding the research these balloons were conducting, none of the observers in the Mantell incident—either on the ground or in the air—had ever seen one or even heard about them. It is now a matter of public record, however, that these balloons were consistent in size and appearance with most of the descriptions of the object observed in the sky over Kentucky that day. Made from reflective polyethylene plastic, Skyhook balloons could expand at very high altitudes to an amazing size—several hundred feet in diameter and as high as a skyscraper; their maximum gas volume could expand to at least _twice_ that of one of the largest airships ever built, the _Hindenburg_. In short, at high altitude one of these gigantic balloons would have been clearly visible to observers on the ground for, perhaps, as far away as a hundred miles in any direction. Additionally, it would have appeared as a white, shimmering parachute or ice cream cone hanging in the sky—just as the Godman Field control tower personnel described it.
This Skyhook balloon could fly at altitudes exceeding one hundred thousand feet and expand to an enormous size. With its ice cream cone shape and polyethylene plastic surface shimmering in the sunlight, it may have been what Captain Mantell and others saw in the skies over Kentucky on the afternoon of January 7, 1948. _US Navy_
Unfortunately, the navy kept the Skyhook launches so secret that it is still difficult to establish whether any of these balloons were actually floating over Kentucky that day. There is evidence to suggest that a Skyhook was launched on the previous day from a base in Minnesota. However, there are no records proving that it or any other balloon flew over Kentucky on January 7, 1948. Consequently, no one can say for sure what the mysterious object in the sky that day really was.
Later that year two additional incidents involving credible witnesses added yet more fuel to the flames of worldwide flying saucer paranoia. On July 24, 1948, two Eastern Airlines pilots flying a Douglas DC-3 over Alabama nearly collided with a silent, wingless, torpedo-shaped craft they could not identify. Three months later, on October 1, a North Dakota Air National Guard pilot—who, like Mantell, was flying an F-51 Mustang fighter—had a protracted encounter with a UFO. In this case, now widely known as the "Gorman Dogfight," Lieutenant George F. Gorman actually flew a series of combat maneuvers against an unidentified object he confronted in the sky.
In the years since 1948, there have been hundreds more UFO sightings throughout the United States and the rest of the world. Along with these have been a number of claims of alien contact of one type or another, including alien sightings and abductions, alien attacks on people and animals, and UFO crashes. Some of these sightings and claims were little more than overly imaginative minds misinterpreting natural phenomena or futuristic-looking secret military aircraft. Others have been out-and-out hoaxes perpetrated by misguided individuals looking for fame, fortune, or amusement. However, a few of these incidents seem to be credible and are still unexplained.
Of all the many UFO sightings that have occurred over the centuries, the Mantell incident was one of the most provocative ever recorded. After all, he died a mysterious death while in pursuit of an unidentified object in the sky. He was the first ever known to die in this manner—but he would not be the last. At least three other fliers met a similar fate while encountering UFOs. On November 23, 1953, US Air Force pilot Lt. Felix E. Moncla Jr. and his radar operator, Lt. Robert L. Wilson, disappeared in their Northrop F-89C Scorpion jet over Lake Superior while attempting to intercept a UFO. Twenty-five years later, on October 21, 1978, a young Australian civilian pilot named Frederick Valentich disappeared in his Cessna 182 over Bass Strait, after radioing that he was being harassed by a large, lighted metallic object hovering in the sky over him. Many people have proposed logical explanations for each of these incidents, but they remain unsolved.
A roadside marker honoring Capt. Thomas F. Mantell Jr. It is located just south of Franklin, Kentucky, at the intersection of I-65 and US Highway 31W. He crashed and died near here on January 7, 1948, while pursuing an unidentified flying object. Coincidentally, Mantell was also born in Franklin. _Steven A. Ruffin_
## CHAPTER TWENTY
## **THE CURIOUS CASE OF FLIGHT 19**
**"...AS IF THEY HAD FLOWN TO MARS."**
At 2:10 p.m. on December 5, 1945, a formation of five US Navy torpedo bombers took off from Naval Air Station Fort Lauderdale, Florida. The mission for Flight 19, so named because it was the nineteenth flight on the roster that day, was routine. It was a bombing and navigational training run over a three-hundred-mile triangular course. Aboard the planes, in addition to the five pilots, were nine enlisted crew members.
The lumbering single-engine Grumman TBM Avengers lifted off and headed east in formation. As they slowly disappeared into the haze over the Atlantic Ocean, no one could have imagined that the five planes or the fourteen men aboard would never return. This infamous flight was destined to become one of history's most intriguing and memorable aviation legends.
### **Navigation Problem No. 1**
The task for the flight's experienced instructor pilot, US Navy Lt. Charles Taylor, was straightforward. He was to accompany his four advanced student pilots—one navy officer and three marines—on a standard training mission. Almost the entire flight would be over the waters of the Atlantic Ocean. The three-legged course they had planned ran 123 miles east, 73 miles north, and then 120 miles southwest back on the final leg to the Naval Air Station. Four of the Avengers carried a crew of three men each—the pilot and two enlisted members—while the fifth carried only two.
Flight of five US Navy Grumman TBM Avengers, assigned to Naval Air Station Fort Lauderdale, Florida. Flight 19 must have looked much like this as it departed Fort Lauderdale for the last time on December 5, 1945. The final destination of the aircraft and crew is as much a mystery today as it was then. _US Navy via NAS Fort Lauderdale Museum_
Each of the student pilots was near the end of his advanced training and already wore the "Wings of Gold," designating him as a full-fledged naval aviator. As a group, they averaged more than 360 flight hours apiece. Consequently, for them, this mission—officially called Navigation Problem No. 1—was not particularly challenging. As they headed out over the Atlantic Ocean, they were probably already thinking about the cold drinks they would be sipping at the officers' club in less than three hours. Unfortunately, those drinks would never be poured.
The subsequent disappearance of the five TBMs and their crews should have been sufficient to satisfy the greedy gods of the sea, but it was not. A Martin PBM-5 Mariner patrol aircraft, sent out later that evening to search for the overdue Flight 19, also vanished with its thirteen-man crew. Despite an extensive five-day search operation, no definitive trace of any of the missing men or aircraft ever turned up. It was, as one US Navy investigator later put it, "as if they had flown to Mars."
US Navy Martin PBM-5, similar to the one that disappeared with thirteen men aboard while searching for lost Flight 19. Did it succumb to the malevolent powers of the Bermuda Triangle, as so often suggested? Or did it simply experience a fuel leak and explode in midair? _US Navy_
### **The Bermuda Triangle**
Exactly what happened to the Flight 19 Avengers and the Martin Mariner flying boat remains a mystery. Innumerable books, articles, and documentaries have appeared over the years, describing these strange and tragic disappearances and attempting to explain them. Unfortunately, much of this is fraught with inaccuracy, hyperbole, and wild speculation. Yet, the fact remains that, to this day, no one knows exactly what happened to the twenty-seven men and six aircraft that vanished that December afternoon in 1945.
Vanishing airplanes and ships are not particularly uncommon. Disappearances of craft on, above, or beneath the ocean have taken place with appalling regularity ever since man first ventured out onto the unforgiving seas. Unpredictable weather systems, strong currents, and the sheer vastness of Earth's great oceans have always provided more than enough cause for maritime disasters. These conditions apply to all large bodies of water, but there are those who believe that some are more treacherous than others—and one of the most infamous of these purportedly cursed ocean regions, known as the Bermuda Triangle, lies beneath the airspace in which Flight 19 and the Mariner disappeared. Different authors have defined this four-hundred-thousand-square-mile area of nautical intrigue in various ways. Most, however, describe it roughly as the area of the Atlantic Ocean within a triangle formed by Bermuda, Puerto Rico, and Miami, Florida.
The name given this region dates back to an article appearing in the February 1964 issue of _Argosy_ magazine, entitled "The Deadly Bermuda Triangle." Its author, Vincent F. Gaddis, coined the term—and it stuck. However, the area's many recorded idiosyncrasies date back long before. Since Christopher Columbus himself, seafarers sailing through these regions have described anomalies, such as rogue waves, strange aquatic life, lights in the sky, powerful currents, and magnetic compass deviations.
These disarmingly enchanting Caribbean waters, which supposedly possess such mysterious forces, have also established a reputation for danger. The so-called "graveyard of the Atlantic," infamous for its extreme meteorological conditions and powerful currents, has devoured hundreds of ships and aircraft. Some of these have vanished without a trace and for reasons still unknown. Various authors have suggested an innate supernatural evil associated with the Bermuda Triangle—a sinister force that greedily snatches up ships, aircraft, and unsuspecting human travelers at will. Others contend that this important high-traffic area experiences no higher losses, proportionally, than any of the earth's other heavily traveled ocean areas.
Unquestionably, however, some of the most publicized and puzzling ship and aircraft disappearances ever recorded have occurred in this area. Bizarre as many of these tragedies were, the strangest vanishing act of all was that of Flight 19. This incident, more than any other, solidified the legend of the Bermuda Triangle.
### **An Explosion in the Sky**
The disappearances of the Flight 19 Avengers and the Martin Mariner sent out to look for them are in many ways as baffling now as they were on the day they occurred. Some of the many theories proposed to explain their loss seem credible, while others are considerably less so. Different authors have hypothesized about huge, deadly methane gas bubbles escaping from the sea and interfering with the men and aircraft; or electromagnetic anomalies that disabled the planes' navigational equipment; or even that the unlucky aircraft fell into a time warp or a black hole.
Some have even suggested that aliens abducted the planes and men—as portrayed in the 1977 Steven Spielberg movie _Close Encounters of the Third Kind_. In the exciting opening segment of the film, viewers experience the inexplicable discovery of what appears to be the Flight 19 Avengers sitting completely undamaged in the middle of a remote desert—more than thirty years after their disappearance. Later in the movie, the men—ostensibly from the lost flight—reappear as guests of the aliens who have come to Earth. This science fiction classic is undoubtedly more fiction than science, but it illustrates the widespread mystical aura that has hovered for all these decades over the Flight 19 incident.
No matter what happened to these aircraft, those experienced in search operations agree on one point: rarely does anything ever simply "disappear," even in the ocean. When a ship or aircraft goes down, it nearly always leaves behind traces, usually in the form of floating debris and an oil slick.
The Mariner search plane may have left just such a calling card. At 7:50 p.m.—only twenty-three minutes after the Mariner took off from its base at Banana River, Florida—the crew of the tanker SS _Gaines Mills_, cruising nearby off the coast of Florida, witnessed what appeared to be an airplane explode in flame and crash into the sea. They later steamed through a pool of oil floating on the water, but found no bodies or anything else to confirm that it was—or was not—from the Mariner.
Many popular accounts suggested that the big flying boat was, like Flight 19, a victim of the malevolent forces of the Bermuda Triangle; this, however, is highly debatable, since the oil pool was near the location where the airplane disappeared from ground radar. Thus, at least circumstantially, there is a rational explanation as to the fate of the Mariner and crew. However, this did not explain the loss of Flight 19. Even after the ensuing five-day search, an entire flight of five US Navy aircraft and fourteen men were missing without any trace—and everyone wanted to know where they had gone.
### **A Case of Directional Disorientation**
As is so often the case, the most likely explanation for an unusual occurrence is not necessarily the most exciting. What _probably_ happened to the five lost US Navy torpedo bombers is not mysterious at all. Logic dictates that instead of aliens, black holes, time warps, and supernatural vortices from other dimensions, the blame should go to something more believable—like human error.
The most plausible explanation for the tragedy of Flight 19 is that the pilots simply became lost, ran out of fuel, crashed into the sea, and sank before searchers could locate them. The evidence is compelling. At the heart of the issue was the instructor pilot, Lieutenant Taylor. Although he was the leader of the flight, and a veteran with more than 2,500 flight hours, he had recently transferred to Fort Lauderdale from Naval Air Station Miami. Because of this, he was relatively unfamiliar with the assigned mission area.
To complicate things, Taylor had—for reasons still not clear—tried to excuse himself from flying that day. Some suggest he had a premonition, while others assert he simply had a hangover from the night before or otherwise did not feel up to par. Whatever the case, no replacement pilot was available, so he had no choice but to fly. This delayed the scheduled 1:45 p.m. takeoff time by almost half an hour. The mission was already off to a bad start.
The first indication that Flight 19 was having problems occurred at 3:50 p.m., less than two hours after the five torpedo bombers departed Fort Lauderdale. At this time, Lt. Robert Cox, another US Navy instructor pilot assigned there, was flying south of Fort Lauderdale. He picked up part of a radio conversation between Lieutenant Taylor and one of his student pilots, Marine Capt. Edward Powers. Cox heard Powers report, "I don't know where we are. We must have got lost after that last turn." When Cox heard this, he radioed Taylor and asked what the problem was. Taylor responded, "Both my compasses are out and I'm trying to find Fort Lauderdale, Florida. I'm over land but it's broken. I'm sure I'm in the Keys but I don't know how far down and I don't know how to get to Fort Lauderdale."
US Navy Lt. Charles Taylor, before promotion to full lieutenant. How did a pilot of his considerable experience get his entire flight of five Grumman TBM Avengers so hopelessly lost on a routine training mission? _US Navy via NAS Fort Lauderdale Museum_
Why _both_ of Taylor's compasses were apparently malfunctioning is a mystery in itself. Typically, the Avenger carried, like most other aircraft at that time, two types of compasses: one magnetic, the other gyroscopic. Since these operate on completely different principles, it is highly unlikely that both would go haywire at the same time. Was Taylor simply confused, or was some unknown force affecting his navigational gear?
Since Taylor had stated that he was in "the Keys," which lie off the southern coast of Florida, Cox advised him to "put the sun on your port wing... and fly up the coast until you get to Miami." This was sound advice, provided the formation actually was over the Keys. Minutes later, Taylor transmitted, "We have just passed over a small island. We have no other land in sight." This could only mean that he had somehow missed the entire peninsula of Florida—which was unlikely—or else that he was somewhere entirely different from where he thought he was. Unfortunately for the men of Flight 19, they almost certainly were not over the Florida Keys. They were probably not even near the Keys; nor, for that matter, should they have been. Being over the Keys would have put them well over a hundred miles southwest of their planned route, which seems highly unlikely. Instead, they were probably more or less on course, over a group of islands near the Bahamas that could easily be mistaken for the Florida Keys.
The triangle marks the intended course of Flight 19 on December 5, 1945. However, Lieutenant Taylor incorrectly believed, for reasons unknown, that he had strayed down into the Florida Keys. The white circle marks the approximate location of the flight's last radio fix. This also may be where they crashed into the sea.
As the minutes ticked away, those on the ground became increasingly concerned. The navy alerted area coastal radar stations, ships, and the Coast Guard. At 5:03 p.m., Lieutenant Taylor, still flying north and still convinced he had been south of Florida over the Keys, instructed his flight to turn east. He apparently had decided that they must somehow be _west_ of Florida, in the Gulf of Mexico. However, since he was in reality already more than a hundred miles _east_ of Florida and north of the Bahamas, heading in that direction only took him and his doomed flight further out to sea.
Ironically, Taylor, the experienced veteran and leader of the flight, may have been the only pilot in the formation so completely confused. When he gave the order to turn east, two of his student pilots expressed their disagreement. One said, "Dammit, if we could just fly west we would get home; head west...!" Not until thirteen minutes later, at 5:16 p.m., did Taylor finally heed his students' advice and turn his flight—correctly—back toward the west, and mainland Florida. They would continue, he advised, "until we hit the beach or run out of gas."
By then, however, it was too late. Hitting the beach was no longer an option. Flight 19 was now not only well out to sea, totally lost, and getting low on fuel—they were suddenly also facing rapidly deteriorating flying conditions. The lost flight of Avengers was heading into a storm, with gusty winds and rain, decreased visibility, and rough seas, and it was getting dark. The final curtain was about to fall.
### **Gone Forever**
By 6:00 p.m., ground radio stations finally managed to obtain a fix, which definitively placed Flight 19 north of the Bahamas, halfway up the eastern coast of Florida. Instead of flying west for the past hour as Taylor indicated they were going to do, the five Avengers had apparently—for reasons unknown—been flying north. The ground radio stations failed to pass this information on to Lieutenant Taylor—although by now, it would not have helped anyway. At approximately 6:20 p.m., Taylor radioed his pilots: "All planes close up tight... we'll have to ditch unless landfall... when the first plane drops below ten gallons, we all go down together." This was the last transmission from Flight 19 ever heard. Based on projected fuel exhaustion, the formation could no longer have been airborne by 8:00 p.m. They had either ditched or crashed into the sea.
Many researchers have puzzled over how five torpedo bombers and fourteen men managed to disappear so completely without a trace. Part of the explanation could be in the conditions they encountered near the end of their flight. Their visibility had become very poor on what was by now a dark and stormy night at sea. It is reasonable to assume that when they ran out of fuel and went into the rough sea that evening, they all went straight to Davy Jones's locker before anyone could escape. Even if they managed to make a relatively controlled ditching into the stormy sea, there was little chance of survival. The Grumman Aircraft Company did not build its seven-ton "Iron Birds," as Avengers were affectionately called, to float on water—especially after crashing into the rolling waves at a hundred miles per hour. In all likelihood, when the formation went into the drink, all five aircraft—and the men in them—went straight to the bottom. If they were far enough out to sea when they ditched, the nearly limitless depth of the ocean and the northeasterly flowing current of the powerful Gulf Stream would have erased any traces of the planes and rendered future discovery all but impossible.
The official navy board of inquiry into the matter ended in 1946, after a thorough investigation and a four-hundred-page report. It concluded that Lieutenant Taylor's misidentification of the Bahamas as the Florida Keys "plagued his future decisions and confused his reasoning.... He was directing his flight to fly east... even though he was undoubtedly east of Florida." Later that year, however, in response to a legal challenge from Taylor's mother—who took serious exception to her lost son bearing the responsibility for the tragedy—the Board for Correction of Naval Records officially removed the blame from Lieutenant Taylor and attributed the loss of Flight 19 to "causes or reasons unknown." That cryptic official conclusion is the final verdict as it stands today.
### **Recent Developments**
In 1986, it appeared that proof of the fate of Flight 19 might be at hand. In the aftermath of the tragic loss of the Space Shuttle _Challenger_, divers combing the ocean floor thirty-five miles off the coast of Cape Canaveral for fragments of the doomed spacecraft unexpectedly came upon an airplane sitting on the sandy floor at a depth of 390 feet. Further examination confirmed that the airplane was a TBM Avenger. Had the final resting place of at least one of the missing Flight 19 aircraft at last been located? It seemed likely, given that this airplane was almost exactly where a researcher named Jon Myhre had previously calculated that Flight 19 probably ditched. Hopes of finally learning the secret of the infamous lost flight were deep-sixed, however, when divers were unable to positively identify this airplane or find others near it.
Five years later, in 1991, an underwater salvage crew made an even more compelling find. This team discovered a cluster of five Avengers lying six hundred feet below the ocean's surface off the eastern coast of Florida. At last, they had solved the mystery of Flight 19... or had they? Closer examination revealed discrepancies that ruled out any possibility that they were the aircraft of Flight 19. Neither the model type nor the serial numbers matched those of the lost flight, but it seemed utterly impossible that _another_ formation of five TBMs had simultaneously ditched in this same area. Experts explained this by concluding that these five aircraft had not crashed into the sea together but individually over a period of years near a low-level practice-bombing target.
In 2010, researcher Gian J. Quasar proposed a novel alternative explanation for the loss of Flight 19: the formation may not have crashed into the ocean at all. He argues rather convincingly that, based on all available evidence, Flight 19 made landfall and crashed into southern Georgia's Okefenokee Swamp. Unfortunately, this theory is impossible to test. The area in question is a national wildlife refuge, making any search for sunken torpedo bombers strictly forbidden.
After more than seven decades of theorizing, conjecture, and calculation—with at least two false sightings thrown in for good measure—no one knows what happened to Flight 19. The final resting place of the men and their aircraft remains a matter of ongoing speculation, and chances grow slimmer that anyone will ever find them in the nearly infinite expanse of the mighty Atlantic Ocean or the murky depths of a Georgia swamp. With still no secrets revealed, the mystery of this most curious flight to nowhere remains unsolved.
## EPILOGUE
## **BUT NOT LAST**
**"GOOD NIGHT. MALAYSIAN THREE-SEVEN-ZERO."**
### **Questions... and More Questions**
At forty-one minutes and forty-three seconds past midnight on March 8, 2014, a Malaysian Airlines Boeing 777 took off from Kuala Lumpur International Airport, Malaysia. The flight, designated MH370, was bound for Beijing, China, nearly three thousand miles to the northeast. Aboard were 227 passengers and a crew of 12. Within minutes, Lumpur radar cleared the airliner to climb to thirty-five thousand feet and, at 1:19:24 a.m., handed it off to Ho Chi Minh Air Traffic Control. The flight crew acknowledged this with a routine signoff, "Good night. Malaysian three-seven-zero." Two minutes later, the big airliner disappeared from the radar screen, and was never seen again.
Satellite analysis later indicated that the jet continued to fly for at least another six hours in total radio silence and—with neither of its two transponders apparently sending out signals—effectively invisible to radar. Instead of landing on time at Beijing Capital International Airport, the incomprehensible journey of MH370 is believed to have ended somewhere over the southern part of the Indian Ocean—thousands of miles off course and in the opposite direction from its intended destination. Here, in all likelihood, it ran out of fuel and crashed into the sea. No one knows why it lost contact or why it deviated from its course and proceeded to meander across a major portion of the Eastern Hemisphere. Nor does anyone know what terrible things must have transpired inside the cabin and cockpit of MH370 during those final hours.
Malaysian Airlines Boeing 777-2H6ER, registration number 9M-MRO, at Amsterdam Airport Schiphol, May 5, 2013. On March 8, 2014, this airliner disappeared after taking off from Kuala Lumpur International Airport, Malaysia. Lost with it were 227 passengers and a crew of 12. _Bernhard Ebner_
The ensuing search and recovery operation became the largest multinational air-sea search ever conducted. It eventually involved eighty-two aircraft, eighty-four surface vessels, and various support services from at least twenty-six countries. The vaguely defined search area was also the largest in history—nearly three million square miles—and the operation continued in full force until April 28. After more than seven weeks of general confusion, numerous false sightings, dead-end leads, baseless speculation, and ratings-driven 24/7 coverage by the cable TV news networks, it was apparent that any chance of finding debris still floating on the surface was nil. Consequently, aerial and surface search operations were suspended.
Malaysian Airlines Flight MH370 took off from Kuala Lumpur bound for Beijing, China, some three thousand miles to the northeast. Instead, it veered west and then south to points unknown. The white circle marks the area where it may have ended its flight.
Even after this unprecedented effort, no one had so much as a clue as to the whereabouts of the lost jetliner or the reason it went missing in the first place. Much conjecture ensued and dozens of possibilities were suggested, but none had even a shred of evidence to support it. Most of the more plausible theories seemed to center around one of two broad scenarios.
The first is that there was a catastrophic onboard event—a fire, explosion, sudden high-altitude decompression, or other problem causing an electrical failure. This probably would have knocked out all of the jet's communications and avionics equipment and incapacitated everyone aboard. Eventually, according to this theory, the airliner ran out of fuel and crashed into the sea.
The second scenario hinges on criminal activity—that one or more of the passengers or crew perpetrated a terrorist attack, hijacking, or other wrongdoing. This led to the destruction of the aircraft or a forced landing in some undisclosed place. Since no one apparently made any ransom demands, the latter possibility is doubtful, at best. The jetliner almost certainly crashed.
The list of less probable scenarios is almost endless. For example, one writer suggested in a widely cited April 18, 2014, OpEdNews.com article that the US might have shot the airliner down—either intentionally or unintentionally—and then covered it up by failing to disclose known traces of the wreckage. However, the best source of unsubstantiated explanations came from TV news commentators, who found themselves with too much airtime and no real news to report. Some augmented their never-ending coverage by quizzing guest experts on such matters as whether the big Boeing might have fallen into a black hole or been the victim of alien abduction. Such ridiculous speculation was so widespread that people actually started to believe it. A May 2014 CNN poll indicated that nine percent of Americans—one out of every eleven—believed that "space aliens or beings from another dimension" were somehow involved in the disappearance of MH370.
However, despite the comprehensive search and abundance of theories, the facts of this mysterious incident have remained as conspicuously absent as the airliner itself. It seems at this writing that unless someone comes forward with useful information, the only hope of ever finding answers to the many questions about Malaysian Airlines Flight 370 rests on future searches of the ocean's floor. Meanwhile, friends and families of the 239 victims can only grieve, wonder where and why their loved ones were taken, and imagine the terrible ways in which they might have met their end.
Eventually, the world may learn the secret of Flight MH370. Until then, it will remain—with the likes of Amelia Earhart, Nungesser and Coli, and Flight 19—one of aviation's great mysteries.
Though unique in circumstances, the disappearance of MH370 is but a recent example of a scenario that has played out thousands of times and in many ways over the past 230-plus years of manned flight. Yet, flying has never been more reliable. Improved aircraft and engine design, coupled with major technological advances in navigation and weather prediction, have made it the safest of all forms of transportation—safer even than walking. Arnold Barnett, MIT professor of statistics, calculated that the risk of being on a fatal commercial aircraft flight in the United States was one in forty-five million. Given those odds, a person could—statistically speaking—fly every day for an average of 123,000 years before being involved in a fatal crash.
In spite of odds so overwhelmingly in favor of air travelers, it is also a statistical fact that the next flight a person takes _could_ be his last. This is merely a reflection of the uncertainty of life in general. In the end, a flier's chances of arriving at his destination boil down to something far less scientifically definable than statistical analysis. Renowned aviation author Ernest K. Gann called it "fate." In his autobiographical novel _Fate Is the Hunter_ , his conclusion about two fatal aircraft accidents he had just described could probably apply to all such occurrences: "In both incidents the official verdict was 'pilot error,' but since their passengers who were innocent of the controls also failed to survive, it seemed that fate was the hunter. As it had been and would be."
Aviation technology will continue to advance, and aircraft will fly through the skies faster, higher, and farther. With these advancements, aviation safety and reliability will also improve. However, since human beings will always be a part of the process, the possibility of error or intentional wrongdoing will continue to exist. For that reason, flights of no return will unfortunately always be a small but tragic part of aviation.
## BIBLIOGRAPHICAL REFERENCES
#### **Prologue**
Crouch, Tom D. _The Bishop's Boys: A Life of Wilbur and Orville Wright_. New York: W. W. Norton & Company, 1989.
Hennessy, Juliette A. _The United States Army Air Arm, April 1861 to April 1917_. Washington, DC: Office of Air Force History, 1985.
Jackson, Donald Dale. _The Aeronauts_. Alexandria, VA: Time-Life Books, Inc., 1980.
Kelly, Fred C. _The Wright Brothers: A Biography Authorized by Orville Wright_. New York: Harcourt, Brace and Company, 1943.
Lilienthal, Otto. _Birdflight as the Basis of Aviation: A Contribution Towards a System of Aviation_. Hummelstown, PA: Markowski International Publishers, 2000.
"Monsieurs Rozier and Romain, the World's First Deaths in an Air Crash – 15 June 1785." The British Newspaper Archive, posted June 14, 2013. _blog.britishnewspaperarchive.co.uk/monsieurs-rozier-and-romain-the-worlds-first-deaths-in-an-air-crash-15-june-1785_.
Moolman, Valerie. _The Road to Kitty Hawk_. Alexandria, VA: Time-Life Books, Inc., 1980.
Prendergast, Curtis. _The First Aviators_. Alexandria, VA: Time-Life Books, Inc., 1980.
Rolt, L. T. C. _The Aeronauts: A History of Ballooning, 1783-1903_. London: Longmans, 1966.
Villard, Henry Serrano. _Contact! The Story of the Early Birds: Man's First Decade of Flight from Kitty Hawk to World War I_. New York: Thomas Y. Crowell Company, 1968.
#### **Chapter One**
Behar, Michael. "The Search for Steve Fossett: One Tough Job for the US Civil Air Patrol." _Air & Space_, March 2008.
Bingham, John. "Steve Fossett: Conspiracy Theories Challenged by Discovery of Human Remains." _Telegraph_ , October 3, 2008.
_Fédération Aéronautique Internationale_ (FAI), International Air Sports Federation. _www.fai.org/records_. Accessed 2013.
Fossett, Steve, with Will Hasley. _Chasing the Wind: The Autobiography of Steve Fossett_. London: Virgin Books, Ltd., 2006.
Garrison, Peter. "Why Steve Fossett Crashed." _Los Angeles Times_ , July 27, 2009.
Irvine, Chris. "Adventurer Steve Fossett May Have 'Faked His Own Death.'" _Telegraph_ , July 27, 2008.
National Transportation Safety Board. Factual Report, Aviation, NTSB Identification SEA07FA277, Washington, DC, September 3, 2007.
National Transportation Safety Board. Probable Cause, NTSB Identification SEA07FA277, Washington, DC, July 9, 2009.
Vlahos, James. "Steve." _The New York Times Magazine_ , New York edition, December 23, 2008, MM48.
#### **Chapter Two**
Bak, Richard. _The Big Jump: Lindbergh and the Great Atlantic Air Race_. Hoboken, NJ: John Wiley & Sons, Inc., 2011.
Cussler, Clive. _The Sea Hunters II: More True Adventures with Famous Shipwrecks_ , Part 11, "L'Oiseau Blanc." New York: G.P. Putnam's Sons, 2002.
Hansen, Gunnar. "The Unfinished Flight of the White Bird." _Yankee Magazine_ , June 1980.
"Looking for l'Oiseau Blanc." _whitebird.over-blog.net/_. Accessed 2013.
Moffett, Sebastian. "Charles Lindbergh Won the Prize, but Did His Rival Get There First?" _Wall Street Journal_ , September 6, 2011. _online.wsj.com/news/articles/SB10001424053111904480904576498061491234304_.
"Nungesser and Coli Disappear Aboard l'Oiseau Blanc – May 1927." French Ministry of Transport, General Inspector of Civil Aviation and Meteorology Report, June 1984. _tighar.org/Projects/PMG/FrenchReport.htm_ , 2013.
TIGHAR, The International Group for the Recovery of Historic Aircraft. "Project Midnight Ghost: The Search for History's Most Important Missing Airplane." _tighar.org/Projects/PMG/PMG.html_. Accessed 2013.
#### **Chapter Three**
Archbold, Rick. _Hindenburg: An Illustrated History_. New York: Chartwell Books, 2005.
Bain, A. "The Hindenburg Disaster: A Compelling Theory of Probable Cause and Effect." _Proceedings: National Hydrogen Association_ , 8th Annual Hydrogen Meeting, Alexandria, VA, March 11–13, 1997. 125–128.
Botting, Douglas. _Dr. Eckener's Dream Machine: The Great Zeppelin and the Dawn of Air Travel_. New York: Henry Holt & Co., 2001.
"Commerce Department Accident Report on the Hindenburg Disaster," _Air Commerce Bulletin_ , US Dept. of Commerce, Vol. 9, No. 2, August 15, 1937.
"Destruction of Airship Hindenburg," Federal Bureau of Investigation, File Number 70-396, Washington, DC, June 17, 1937.
Dick, Harold G. and Douglas H. Robinson. _The Golden Age of the Great Passenger Airships Graf Zeppelin & Hindenburg_. Washington, DC: Smithsonian Books, 1992.
Duggan, John. _LZ 129 "Hindenburg": The Complete Story_. Ickenham, UK: Zeppelin Study Group, 2002.
Grossman, Dan. "Airships: The Hindenburg and Other Zeppelins." _www.airships.net/zeppelins_. Accessed 2013.
Hoehling, A. A. _Who Destroyed the Hindenburg?_ Boston: Little, Brown and Company, 1962.
Hoehling, A. A. and Martin Mann, "The Biggest Birds That Ever Flew," _Popular Science_ , May 1962, pp. 85–95.
"'LZ-129,' the Latest Airship." _Popular Mechanics Magazine_ , June 1935, 846–847, 138A.
Majoor, Mireille. _Inside the Hindenburg_. Boston: Little, Brown and Company, 2000.
Mooney, Michael. _The Hindenburg_. New York: Dodd, Mead & Company, 1972.
Robinson, Douglas Hill. _Giants in the Sky: A History of the Rigid Airship_. Seattle: University of Washington Press, 1973.
Toland, John. _The Great Dirigibles: Their Triumphs & Disasters_. Mineola, NY: Dover Publications, Inc., 1972.
#### **Chapter Four**
Briand, Paul. _Daughter of the Sky_. New York: Duell, Sloan, Pearce, 1960.
Brink, Randall. _Lost Star: The Search for Amelia Earhart_. New York: W.W. Norton & Co. Inc., 1993.
Butler, Susan. _East to the Dawn: The Life of Amelia Earhart_. Reading, MA: Addison-Wesley, 1997.
Devine, Thomas E. _Eyewitness: The Amelia Earhart Incident_. Frederick, CO: Renaissance House, 1987.
Earhart, Amelia. _20 Hrs., 40 Min.: Our Flight in the Friendship_. New York: Harcourt, Brace and Company, 1928.
——. _Last Flight_. New York: Crown Publishing Group, 1996.
Goerner, Fred. _The Search for Amelia Earhart_. Garden City, NY: Doubleday & Company, Inc., 1966.
Jourdan, David W. _The Deep Sea Quest for Amelia Earhart_. Ocellus, 2010.
Klaas, Joe and Joseph Gervais. _Amelia Earhart Lives: A Trip through Intrigue to Find America's First Lady of Mystery_. New York: McGraw-Hill, 1970.
Long, Elgen M. _Amelia Earhart: The Mystery Solved_. New York: Simon & Schuster, 1999.
Loomis, Vincent V. _Amelia Earhart, the Final Story_. New York: Random House, 1985.
Lovell, Mary S. _The Sound of Wings_. New York: St. Martin's Press, 1989.
Rich, Doris L. _Amelia Earhart: A Biography_. Washington DC: Smithsonian Institution Press, 1989.
Strippel, Dick. _Amelia Earhart - The Myth and the Reality_. New York: Exposition Press, 1972.
TIGHAR, The International Group for the Recovery of Historic Aircraft. "The Earhart Project." _tighar.org/Projects/Earhart/AEdescr.html_. Accessed 2013.
#### **Chapter Five**
"The Airmen of Note." United States Air Force Band. _www.usafband.af.mil/ensembles/BandEnsembleBio.asp?EnsembleID=58_. Accessed 2013.
Atkinson, Fred W. Jr., "A World War II Soldier's Insight Into the 'Mysterious Disappearance' of Glenn Miller," _www.mishmash.com/glennmiller_. Accessed 2013.
British Broadcasting Company. "The Mysterious Disappearance of Glenn Miller." July 20, 2004. _www.bbc.co.uk/dna/place-lancashire/plain/A2654822_.
Butcher, Geoffrey. _Next to a Letter from Home: Major Glenn Miller's Wartime Band_. North Pomfret, VT: Trafalgar Square Publishing Co., 1997.
Downs, Hunton. _The Glenn Miller Conspiracy: The Never-Before-Told Story of His Life—and Death_. Beverly Hills, CA: Global Book Publishers, 2009.
Lennon, Peter. "Glenn Miller 'Died Under Hail of British Bombs.'" _Guardian_ , December 15, 2001. _www.guardian.co.uk/uk/2001/dec/15/humanities.research_.
Missing Air Crew Report, relating to the disappearance of Glenn Miller's airplane, War Department, Headquarters US Army Air Forces, Washington, DC, December 22, 1944.
"Noorduyn UC-64 Norseman," National Museum of the US Air Force Fact Sheet, _www.nationalmuseum.af.mil/factsheets/factsheet.asp?id=515_ , 2013.
Official Site of Glenn Miller. _www.glennmiller.com_ , 2013.
Simon, George T., _Glenn Miller & His Orchestra_. New York: T.Y. Crowell Co., 1974.
Wolfe, Clarence B., and Susan Goodrich Giffin. _I Kept My Word: The Personal Promise between a World War II Army Private and His Captain about What Really Happened to Glenn Miller_. Bloomington, IN: AuthorHouse, 2006.
#### **Chapter Six**
Andersen, Christopher. _The Day John Died_. New York: Avon Books, 2000.
Landau, Elaine. _John F. Kennedy Jr._ Brookfield, CT: Twenty-First Century Books, 2000.
Levine, Alan, Kevin Johnson, and Deborah Sharp, "Pilot Kennedy was 'Conscientious Guy,'" _USA Today_ , July 21, 1999.
National Transportation Safety Board. NTSB Identification NYC99MA178, Washington, DC, July 6, 2000.
Terenzio, RoseMarie. _Fairy Tale Interrupted: A Memoir of Life, Love, and Loss_. New York: Gallery Books, 2012.
#### **Chapter Seven**
"Astronaut Stories: The World's First Spaceplane," _AirSpaceMag.com_ , February 28, 2011.
Bredeson, Carmen. _The_ Challenger _Disaster: Tragic Space Flight_. Berkeley Hts., NJ: Enslow Publishers, 1999.
Cook, Richard C. _Challenger Revealed: An Insider's Account of How the Reagan Administration Caused the Greatest Tragedy of the Space Age_. New York: Thunder's Mouth Press, 2006.
Dunar, Andrew J., and Stephen P. Waring. _Power to Explore: History of Marshall Space Flight Center 1960–1990_ , Chapter IX, "The Challenger Accident." Washington, DC: National Aeronautics and Space Administration, NASA History Office. Also available online at: _history.msfc.nasa.gov/book/chptnine.pdf_ , 2013.
Feynman, Richard P., and Ralph Leighton. _What Do You Care What Other People Think? Further Adventures of a Curious Character_. New York: W.W. Norton & Company, 2001.
McDonald, Allan J. with James R. Hansen. _Truth, Lies, and O-Rings: Inside the Space Shuttle_ Challenger _Disaster_. Gainesville, FL: University Press of Florida, 2009.
National Aeronautics and Space Administration. _www.nasa.gov_. Accessed 2013.
——. "Report of the Presidential Commission on the Space Shuttle Challenger Accident," NASA History Office, Washington, DC, June 6, 1986. _history.nasa.gov/rogersrep/genindex.htm_. Accessed 2013.
——. "The Space Shuttle Decision: NASA's Search for a Reusable Space Vehicle." NASA History Office, Washington, DC, 1999.
——. "Report from Joseph P. Kerwin, Biomedical Specialist from the Johnson Space Center in Houston, Texas, Relating to the Deaths of the Astronauts in the Challenger Accident, July 28, 1986." NASA History Office. _history.nasa.gov/kerwin.html_. Accessed 2013.
Vaughan, Diane. _The_ Challenger _Launch Decision: Risky Technology, Culture, and Deviance at NASA_. Chicago: University of Chicago Press, 1996.
#### **Chapter Eight**
Bickers, Richard Townshend. _Von Richthofen: The Legend Evaluated_. Annapolis, MD: Naval Institute Press, 1996.
Bodenschatz, Karl, and Jan Hayzlett, trans. _Hunting with Richthofen. The Bodenschatz Diaries: Sixteen Months of Battle with JG Freiherr von Richthofen No. 1_. London: Grub Street, 1996.
Burrows, William E. _Richthofen: A True History of the Red Baron_. New York: Harcourt, Brace & World, Inc., 1969.
Carisella, P. J., and James W. Ryan. _Who Killed the Red Baron?_ Greenwich, CT: Fawcett Publications, Inc., 1969.
"Death of the Red Baron." _Unsolved History_ , Discovery Communications, Inc., 2002.
Fischer, Suzanne Hayes. _Mother of Eagles: The War Diary of Baroness von Richthofen_. Atglen, PA: Schiffer Publishing, Ltd., 2001.
Franks, Norman, and Alan Bennett. _The Red Baron's Last Flight: A Mystery Investigated_. London: Grub Street, 1997.
Gibbons, Floyd. _The Red Knight of Germany: The Story of Baron von Richthofen_. Garden City, NY: Garden City Publishing Co., Inc., 1927.
Hyatt, T. L., and D. R. Orme, "Baron Manfred von Richthofen—DNIF (Duties Not Including Flying)," _Human Factors and Aerospace Safety_ , Vol. 4, No. 1 (2004): 67–81.
Kilduff, Peter. _Richthofen: Beyond the Legend of the Red Baron_. New York: John Wiley & Sons, Inc., 1993.
McGuire, Frank. _The Many Deaths of the Red Baron: The Richthofen Controversy 1918–2000_. Calgary, AB: Bunker to Bunker Publishing, 2001.
Schurmacher, Emile C. _Richthofen: The Red Baron_. New York: Paperback Library, 1970.
Titler, Dale. _The Day the Red Baron Died: A Full Account of the Death of Baron Manfred von Richthofen_. New York: Ballantine Books, 1970.
Ulanoff, Stanley M., ed., and Peter Kilduff, trans. _The Red Baron: The Autobiography of Manfred von Richthofen_. New York: Barnes & Noble Books, 1995.
Von Richthofen, Captain Manfred Freiherr (translated by J. Ellis Barker). _The Red Battle Flyer_. New York: Robert M. McBride & Co., 1918.
"Who Killed the Red Baron?" _NOVA_ , Corporation for Public Broadcasting, 2003.
#### **Chapter Nine**
Adams-Ray, Edward. _The Andrée Diaries, Being the Diaries and Records of S.A. Andrée, Nils Strindberg and Knut Frænkel_.... London: John Lane, The Bodley Head Ltd., 1931.
"Andrée Expeditionen," Grenna Museum, The Andrée Expedition Polar Centre, _www.grennamuseum.se/_ , 2013.
Martinsson, Tyrone. _Nils Strindberg, en biografi om fotografen på Andrées Polarexpedition_ [Nils Strindberg: A Biography of the Photographer on Andrée's Polar Expedition]. Lund, Sweden: Historical Media, 2006.
——. "Recovering the Visual History of the Andrée Expedition: A Case Study in Photographic Research," _Research Issues in Art Design and Media_ , ISSN 1474–2365, Issue 6, Summer 2004. _www.biad.bcu.ac.uk/research/rti/riadm/issue6/abstract.htm_ , 2013.
Putnam, George Palmer. _Andrée: The Record of a Tragic Adventure_. New York: Brewer & Warren, Inc., 1930.
Sollinger, Günther, "S.A. Andrée: the Beginning of Polar Aviation 1895–1897," _The Geographical Journal_ , 172(4), December 2006, p. 350.
Stefansson, Vilhjalmur. _Unsolved Mysteries of the Arctic_. New York: Macmillan Co., 1972.
Sundman, Per Olaf. _The Flight of the Eagle_. New York: Pantheon, 1970.
Wilkinson, Alec. _The Ice Balloon: S.A. Andrée and the Heroic Age of Arctic Exploration_. New York: Alfred A. Knopf, 2011.
#### **Chapter Ten**
Begich, Tom, and Dr. Nick Begich Jr., "Alaska's Bermuda Triangle or Lack of Government Accountability?" _www.freedomwriter.com/issue17/ak1.htm_ , July 12, 2001.
"Boggs, Thomas Hale Sr.," Biographical Directory of the United States Congress, 1774 to Present, _bioguide.congress.gov/scripts/biodisplay.pl?index=B000594_ , 2013.
Gibson, Dirk C. "Hale Boggs on J. Edgar Hoover: Rhetorical Choice and Political Denunciation." _Southern Speech Communication Journal_ 47, Fall 1981, pp. 54–66.
Fensterwald, Bernard. _Coincidence or Conspiracy?_ New York: Kensington Publishing Corp., 1977.
"Hale Boggs," Check-Six.com. _www.check-six.com/lib/Famous_Missing/Boggs.htm_ , 2013.
"History's Mysteries: Alaska's Bermuda Triangle," The History Channel, 2005.
Jonz, Don, "Ice Without Fear," _Flying_ , October 1972, pp. 66–68, 123.
——. "Light Planes and Low Temperatures," _National Pilots Association Service Bulletin_ , Vol. XII, No. 1, January 1972; and Vol. XII, No. 2, February 1972.
National Transportation Safety Board Aircraft Accident Report, "Pan Alaska Airways, Ltd. Cessna 310C, N1812H, Missing between Anchorage and Juneau, Alaska, October 16, 1972," Report Number NTSB-AAR-73-1, File No. 3-0604, Washington, DC, January 31, 1973.
#### **Chapter Eleven**
Allen, Martin. _The Hitler-Hess Deception_. London: HarperCollins Publishers, 2003.
Allen, Peter. _The Windsor Secret: New Revelations of the Nazi Connection_. New York: Stein and Day, 1984.
Costello, John. _Ten Days to Destiny: The Secret Story of the Hess Peace Initiative and British Efforts to Strike a Deal with Hitler_. New York: William Morrow, 1991.
Galland, Adolf. _The First and the Last_. New York: Bantam Books, Inc., 1979.
Harris, John. _Rudolf Hess: The British Illusion of Peace_. Northampton, UK: Jema Publications, 2010.
Harris, John, and M. J. Trow. _Hess: The British Conspiracy_. London: Andre Deutsch, Ltd. 2011.
Harris, John, and Richard Wilbourn. _Rudolf Hess: A New Technical Analysis of the Hess Flight, May 1941_. Staplehurst, UK: Spellmount, 2014.
Hess, Wolf Rüdiger. _My Father, Rudolf Hess_. London: W. H. Allen, 1986.
Kilzer, Louis C. _Churchill's Deception: The Dark Secret That Destroyed Nazi Germany_. New York: Simon & Schuster, 1994.
Masters, Anthony. _The Man Who Was M: The Life of Charles Henry Maxwell Knight_. London: Grafton Books, 1986.
Nesbit, Roy Conyers, and Georges Van Acker. _The Flight of Rudolf Hess: Myths and Reality_. Stoud, UK: Sutton Publishing, Ltd., 2002.
Padfield, Peter. _Hess: Flight for the Fuhrer_. London: Weidenfeld & Nicolson, 1991.
Picknett, Lynn, Clive Prince, and Steven Prior with additional historical research by Robert Brydon. _Double Standards: The Rudolf Hess Cover-Up_. London: Little, Brown, and Co., 2001.
Schwärzwaller, Wulf. _Rudolf Hess, the Deputy_. London: Quartet Books, 1988.
Smith, Alfred. _Rudolf Hess and Germany's Reluctant War, 1939–41_. Sussex, UK: The Book Guild Ltd., 2001.
Stafford, David, ed. _Flight From Reality: Rudolf Hess and His Mission to Scotland, 1941_. London: Pimlico, 2002.
Thomas, W. Hugh. _The Murder of Rudolf Hess_. New York: Harper & Row, 1979.
#### **Chapter Twelve**
"A Byte out of History: The D. B. Cooper Mystery." Federal Bureau of Investigation Website, November 24, 2006. _www.fbi.gov/news/stories/2006/november/dbcooper_112406_. Accessed 2013.
"D. B. Cooper," FBI Records: The Vault. Federal Bureau of Investigation. _vault.fbi.gov/D-B-Cooper%20_. Accessed 2013.
"D. B. Cooper Redux: Help Us Solve This Enduring Mystery." Federal Bureau of Investigation, December 31, 2007. _www.fbi.gov/news/stories/2007/december/dbcooper_123107_. Accessed 2013.
Forman, Pat and Ron. _The Legend of D. B. Cooper: Death by Natural Causes_. Borders Personal Publishing, 2008.
Gates, David, with Mark Kirchmeier, "D. B. Cooper, Where Are You?" _Newsweek_ , December 26, 1983.
Gilmore, Susan. "D. B. Cooper Puzzle: The Legend Turns 30," _Seattle Times_ , Nov. 22, 2001.
Gorney, Cynthia. "Vanishing Act: The Hunt for D. B. Cooper," _Washington Post_ , February 18, 1980.
Gray, Geoffrey. _Skyjack: The Hunt for D. B. Cooper_. New York: Crown Publishing Group, 2012.
——. "Unmasking D. B. Cooper." _New York Magazine_ , October 21, 2007. _nymag.com/news/features/39593_.
Gunther, Max. _D.B. Cooper: What Really Happened_. Chicago: Contemporary Books, 1985.
Himmelsbach, Ralph P., and Thomas K. Worcester. _NORJAK: The Investigation of D. B. Cooper_. West Linn, OR: Norjak Project, 1986.
Martz, Ron. "D. B. Cooper Is Alive: The Legend Won't Let Him Die." _Chicago Tribune_ , December 5, 1985.
Olson, Kay Melchisedech. _D. B. Cooper Hijacking: Vanishing Act_. Mankato, MN: Compass Point Books, 2010.
Pasternak, Douglas, "Skyjacker at Large: A Florida Widow Thinks She Has Found Him," _U.S. News and World Report_ , June 24, 2000.
Porteous, Skipp, and Robert Blevins. _Into the Blast: The True Story of D. B. Cooper_. Seattle: Adventure Books of Seattle, 2010.
Rhodes, Bernie, and Russell Calame. _D. B. Cooper: The Real McCoy_. Salt Lake City: University of Utah Press, 1991.
Seven, Richard. "D. B. Cooper—Perfect Crime or Perfect Folly?" _Seattle Times_ , Nov. 17, 1996.
——. "Man Still Trying to Track Legendary Hijacker D. B. Cooper," _Seattle Times_ , Nov. 20, 1996.
Skolnik, Sam, "30 Years Ago, D. B. Cooper's Night Leap Began a Legend," _Seattle Post-Intelligencer_ , Nov. 22, 2001.
"Sluggo's Northwest 305 Hijacking Research Site." _n467us.com/index.htm_.
_The Skyjacker That Got Away_. Edge West, Inc. with National Geographic Television for National Geographic Channel, 2009.
Tosaw, Richard T. _D. B. Cooper: Dead or Alive? The True Story of the Legendary Skyjacker_. Tosaw Publishing Company, 1984.
Vartabedian, Ralph, "A New Lead in the D. B. Cooper Mystery." _Los Angeles Times_ , August 1, 2011.
#### **Chapter Thirteen**
"A New Kind of War: The Story of the FAA and NORAD Response to the September 11, 2001 Attacks." _Rutgers Law Review_ , _www.rutgerslawreview.com/2011/a-new-type-of-war_ , 2013.
"Debunking the 9/11 Myths: Special Report – The Planes." _Popular Mechanics_ , March 2005. _www.popularmechanics.com/technology/military/news/1227842_. Accessed 2013.
Elias, Barbara, ed. "Complete Air-Ground Transcripts of Hijacked 9/11 Flight Recordings Declassified." National Security Archive Electronic Briefing Book No. 196, August 11, 2006. _www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB196/index.htm_.
"Full 9/11 Audio Transcript." _Rutgers Law Review_ , _www.rutgerslawreview.com/2011/full-audio-transcript_. Accessed 2013.
National Commission on Terrorist Attacks Upon the United States, "The 9/11 Commission Report," _www.9-11commission.gov/_. Accessed 2013.
#### **Chapter Fourteen**
Churchill, Winston S. _The Hinge of Fate_. New York: Houghton-Mifflin, 1950.
Colvin, Ian. _Flight 777: The Mystery of Leslie Howard_. London: Evans Brothers, 1957.
Eforgan, Estel. _Leslie Howard: The Lost Actor_. London: Vallentine Mitchell Publishers, 2010.
Goss, Chris. _Bloody Biscay: The Story of the Luftwaffe's Only Long Range Maritime Fighter Unit, V Gruppe/Kampfgeschwader 40, and Its Adversaries 1942–1944_. London: Crécy Publishing, 2001.
Howard, Leslie Ruth. _A Quite Remarkable Father: A Biography of Leslie Howard_. New York: Harcourt Brace and Co., 1959.
Howard, Ronald. _In Search of My Father: A Portrait of Leslie Howard_. London: St. Martin's Press, 1984.
Nesbit, Roy Conyers. _Failed to Return: Mysteries of the Air 1939–1945_. Wellingborough, Northamptonshire, UK: Patrick Stephens Limited, 1988.
Rosevink, Ben and Lt. Col. Herbert Hintze, "Flight 777," _FlyPast_ , Issue 120, July 1991.
#### **Chapter Fifteen**
"Accident Description." Aviation Safety Network, Flight Safety Foundation, _aviation-safety.net/database/record.php?id=19721013-0_. Accessed 2013.
Andes Accident Official Website. _www.viven.com.uy/571/eng/default.asp_. Accessed 2013.
Parrado, Nando, and Vince Rause. _Miracle in the Andes: 72 Days on the Mountain and My Long Trek Home_. New York: Three Rivers Press, 2006.
Read, Piers Paul. _Alive: The Story of the Andes Survivors_. New York: Lippincott, 1974.
#### **Chapter Sixteen**
Caidin, Martin. _Ghosts of the Air: True Stories of Aerial Hauntings_. Lakeville, MN: Galde Press, Inc., 1994.
Currie, Jack. _Echoes in the Air: A Chronicle of Aeronautical Ghost Stories_. Manchester, UK: Crécy Publishing Ltd., 1998.
"Eastern Air Lines Flight 401." _sites.google.com/site/eastern401_. Accessed 2013.
"Eastern Flight 401: The Story of the Crash." Miami Herald Media Company, December 2007. _www.miamiherald.com/multimedia/news/flight401_.
Elder, Rob and Sarah. _Crash_. New York: Atheneum Press, 1977.
Fuller, Elizabeth, _My Search for the Ghost of Flight 401_. New York: Berkley Books, 1978.
Fuller, John G. _The Airmen Who Would Not Die_. New York: Putnam, 1979.
——. _The Ghost of Flight 401_. New York: Berkley Books, 1983.
Job, Macarthur. "Hey—What's Happening Here?" _Air Disaster, Volume 1_ , 98–111. Fyshwick ACT, Australia: Aerospace Publications Pty, Ltd., 1994.
Kilroy, Chris. "Special Report: Eastern Air Lines Flight 401." _www.airdisaster.com/special/special-ea401.shtml_. Accessed 2013.
McKee, Alexander. _Great Mysteries of Aviation_. New York: Stein and Day, 1981.
Monan, W. P. "Distraction—A Human Factor in Air Carrier Hazard Events," _NASA Aviation Safety Reporting System: Ninth Quarterly Report_ , 2–23. Moffett Field, CA: National Aeronautics and Space Administration, 1978.
National Transportation Safety Board. Aircraft Accident Report Number NTSB-AAR-73-14, Washington, DC, June 14, 1972. _www.airdisaster.com/reports/ntsb/AAR73-14.pdf_. Accessed 2013.
Titler, Dale M. _Wings of Mystery: True Stories of Aviation History_. New York: Tower Publications, Inc., 1962.
#### **Chapter Seventeen**
Ali Mohamed, Dr. Fadel. "The Return of the _Lady Be Good_." _After the Battle_ , Issue 89, 28–31.
Fuller, Captain Myron C., Jr., and Wesley A. Neep. "Report of Investigation, US Army Quartermaster Mortuary System, Europe, Case: B-24 Bomber Lost 4/5 April 1943 and the 1959 Libyan Desert Search for the Nine Missing Crewmembers." November 17, 1959.
——. "Report of Investigation, US Army Quartermaster Mortuary System, Europe, Case: Final Search for Four Unrecovered Airmen of B-24 Bomber _Lady Be Good_ Lost April 1943 in the Libyan Desert." June 20, 1960.
Hanna, William, "The Ordeal of the _Lady Be Good_." _American History Illustrated_ , Vol. 16(7), November 1981, pp. 8-15.
Holder, William G. "Epitaph to the _Lady_ —30 Years After." _Air University Review_ , Vol. 9(3), March–April 1973, 41–50. _www.airpower.au.af.mil/airchronicles/aureview/1973/mar-apr/holder.html_. Accessed 2013.
" _The Lady Be Good_." _After the Battle_ , Issue 25, 26–49.
"Lady Be Good." National Museum of the US Air Force Factsheet. _www.nationalmuseum.af.mil/factsheets/factsheett.asp?id=2475_. Accessed 2013.
"Lady Be Good." US Army Quartermaster Foundation, Fort Lee, Virginia, _www.qmfound.com/lady_be_good_b-24_bomber_recovery.htm_. Accessed 2013.
"Lady Be Good.net: A repository for online information about World War II's Ghost Bomber." _www.ladybegood.net_. Accessed 2013.
Martinez, Mario. _Lady's Men: The Saga of Lady Be Good and Her Crew_. Annapolis, MD: Naval Institute Press, 1995.
McClendon, Dennis E. _The_ Lady Be Good _: Mystery Bomber of World War II_. New York: Day, 1962. (Reissued by Aero Publishers, Inc., Fallbrook, CA with a new epilogue in 1982.)
"North African Desert Gives Up Its Secret: 17-Year-Old Mystery of the 'Lady Be Good' and Her Crew Is Finally Solved." _LIFE_ magazine, March 7, 1960 (Vol. 48, No. 9), 20–27.
"The Truth about the Ship That Vanished in the Desert." _www.ladybegood.com_. Accessed 2013.
Walker, James W. "Lady Be Good." 376th Heavy Bomb Group website, _376hbgva.com/aircraft/ladybegood.html_. Accessed 2013.
——. _The Liberanos: World War II History of the 376th Bomb Group_. 219–281. Waco, TX: 376th Vets Association, 1994.
#### **Chapter Eighteen**
"Airship Accident, All West Coast." US Navy Publication, March 1944. Lighter-Than-Air Library, Naval Air Warfare Center, Aircraft Division, Warminster, PA.
"Airship Accidents of World War II." US Navy Publication, September 1945. Lighter-Than-Air Library, Naval Air Warfare Center, Aircraft Division, Warminster, PA.
Cook, Jeffrey, "The Flying Dutchman: the Mystery of the L-8." _The Noon Balloon_ , Official publication of the Naval Airship Association, Inc. Issue 74, Summer 2007, 14–17.
Gross, Otto K., "L-8: The Ghost Blimp." _links.ghostblimp.com_ and _ghostblimp.blogspot.com_. Accessed 2013.
Hansen, Zenon C.R. _The Goodyear Airships_. Bloomington, IL: Airship International Press, 1979. Updated in 2005 by James R. Shock and David R. Smith.
"History of Blimp Squadron 32." Official declassified US Navy document. _www.warwingsart.com/LTA/ZP-32%20Squadron%20Diary.pdf_. Accessed September 12, 2014.
"Record of Proceedings of a Board of Investigation Convened at the US Naval Air Station Moffett Field, California, by Order of Commander, Western Sea Frontier, San Francisco, California, to Inquire into the Accident to the US Navy Non-rigid Airship L-8 on August 16, 1942." Official US Navy document dated August 18, 1942.
Shock, James R. _US Navy Airships 1915–1962: A History by Individual Airship_. Edgewater, FL: Atlantis Productions, 1992.
Vaeth, J. Gordon. _Blimps & U-Boats: US Navy Airships in the Battle of the Atlantic_. Annapolis, MD: Naval Institute Press, 1992.
#### **Chapter Nineteen**
Beaty, David. _Strange Encounters: Mysteries of the Air_. New York: Atheneum, 1984.
Blundell, Nigel, and Roger Boar. _The World's Greatest UFO Mysteries_. London: Bounty Books, 1991.
Brookesmith, Peter. _UFO: The Complete Sightings_. New York: Barnes & Noble Books, 1995.
Clark, Jerome. _The UFO Book: Encyclopedia of the Extraterrestrial_. Canton, MI: Visible Ink Press, 1998.
Emenegger, Robert. _UFO's Past, Present & Future_. New York: Ballantine Books, 1974.
Jacobs, David M. _The UFO Controversy in America_. Bloomington, IN: Indiana University Press, 1975.
Keyhoe, Donald E. _The Flying Saucers Are Real_. New York: Fawcett Publications, 1950.
Lorenzen, Coral E. _Flying Saucers: The Startling Evidence of the Invasion from Outer Space_. New York: New American Library Signet Books, 1966.
"Mantell Accident Report and Mantell Case File No. 136." _Project Blue Book_ files, 1948. National Archives: Washington, DC, Microfilm Roll T-1206-2.
Peebles, Curtis. _Watch the Skies! A Chronicle of the Flying Saucer Myth_. Washington, DC: Smithsonian Institution, 1994.
Randle, Kevin D. "An Analysis of the Thomas Mantell UFO Case." _www.nicap.org/docs/mantell/analysis_mantell_randle.pdf_. Accessed 2013.
Randle, Kevin D. _Project Blue Book—Exposed_. New York: Marlowe and Co., 1997.
——. _The UFO Casebook_. New York: Warner Books, 1989.
Ruppelt, Edward J. _The Report on Unidentified Flying Objects_. Garden City, NY: Doubleday and Co., 1956.
Steiger, Brad, ed. _Project Blue Book_. New York: Ballantine Books, 1987.
Story, Ronald D. _The Encyclopedia of UFOs_. Garden City, NY: Doubleday and Co., 1980.
Stringfield, Leonard H. _Situation Red, the UFO Siege!_ Garden City, NY: Doubleday and Co., 1977.
Wilkins, Harold T. _Flying Saucers on the Attack_. New York: Ace Books, 1954.
#### **Chapter Twenty**
Berlitz, Charles. _The Bermuda Triangle: The Incredible Saga of Unexplained Disappearances_. New York: Doubleday, 1974.
Berlitz, Charles. _Without a Trace: New Information From the Triangle_. New York: Doubleday, 1977.
_Bermuda Triangle Exposed_. Discovery Channel, 2010.
Gaddis, Vincent H., "The Deadly Bermuda Triangle," _Argosy_ , February 1964.
——. _Invisible Horizons: True Mysteries of the Sea_. Philadelphia: Chilton Books, 1965.
Kusche, Lawrence D. _The Bermuda Triangle Mystery – Solved_. New York: Harper & Row, 1975.
——. _The Disappearance of Flight 19_. New York: Harper & Row, 1980.
McDonell, Michael. "Lost Patrol." _Naval Aviation News_ , June 1973, 8–16.
MacGregor, Rob, and Bruce Gernon. _The Fog: A Never Before Published Theory of the Bermuda Triangle Phenomenon_. Woodbury, MN: Llewellyn Worldwide, Ltd., 2005.
Myhre, Jon H. _Discovery of Flight 19: A 30-Year Search for the Lost Patrol in the Bermuda Triangle_. Orange, CA: The Paragon Agency, 2012.
Quasar, Gian J. _They Flew Into Oblivion: The Disappearance of Flight 19—A True Story of Mystery, Irony, and Infrared_. Lulu Enterprises, Inc., 2010.
Spencer, John Wallace. _Limbo of the Lost_. New York: Bantam Books, 1975.
"US Navy Board of Investigation to Inquire into the Loss of the 5 TBM Avengers in Flight 19 and PBM Aircraft." Microfilm reel, NRS 1983-37, Operational Archives Branch, Naval History & Heritage Command, Washington, DC. Selected excerpts available at: _www.ibiblio.org/hyperwar/USN/rep/Flight19/index.html_. Accessed September 12, 2014.
Winer, Richard. _The Devil's Triangle_. New York: Bantam Books, 1974.
#### **Epilogue**
"10 Theories about Missing Flight MH370." _New York Post_, March 19, 2014 (originally appearing in News.com.au). _nypost.com_
Brumfield, Ben, and Holly Yan. "MH370 Report: Mixed Messages Ate Up Time Before Official Search Initiated," CNN.com, May 2, 2014.
Chuckman, John. "The Second Mystery around Malaysia Airlines Flight MH370." OpEdNews.com, April 18, 2014.
"Flight MH370 Conspiracy Theories: What Happened to the Missing Plane?" _The Week_, _www.theweek.co.uk_, May 9, 2014.
Gann, Ernest K. _Fate Is the Hunter_ , London: Hodder & Stoughton Ltd., 1961.
"Missing Plane MH370 Conspiracy Theory Goes Viral: Was Passenger Jet Shot Down by American Military Forces?" _Huffington Post UK_ , April 23, 2014. _www.huffingtonpost.co.uk_
Mouawad, Jad, and Christopher Drew. "Airline Industry at Its Safest Since the Dawn of the Jet Age." _New York Times_ , February 11, 2013.
"MH 370 Preliminary Report." Office of the Chief Inspector of Air Accidents, Ministry of Transport, Malaysia, Serial 03/2014, April 9, 2014.
Neuman, Scott. "Search For Flight MH370 Reportedly Largest in History." _The Two-Way_ , March 17, 2014. _www.npr.org/blogs/thetwo-way._
Sanchez, Ray. "Nearly 80% of Americans Think No One Survived Flight 370, CNN Poll Finds." CNN.com, May 7, 2014. _edition.cnn.com_.
"The Search for Flight MH370," BBC News Asia, April 11, 2014. _www.bbc.com/news/world-asia-26514556._
Yan, Holly, and Elizabeth Joseph. "Flight 370 Search Chief: Hunt for Plane Is the Most Difficult in History." CNN World, May 12, 2014. _www.cnn.com._
## **INDEX**
_Page numbers in italics indicate a photograph or illustration_
Adams, Charles E., –216, ,
Adams, Samuel E., , __ ,
airplanes, commercial
American Airlines Flight , –160,
American Airlines Flight , –164
Eastern Airlines Flight 401, –193, __
BOAC Flight 777A ( _Ibis_ ), –170, __ , –177
Malaysian Airlines Flight 370, –244, __ , –247
Northwest Orient Airlines Flight 305, , __ , –151,
United Airlines Flight , –166
United Airlines Flight , , ,
airplanes, military
Flight 19 (TBM Avengers), , __
Hess, Rudolf, –147
_Lady Be Good_ , –200, , –204, –209
Mantell, Thomas F., Jr., –228,
Miller, Glenn, –68, , –76
the Red Baron, , –106, , , –113
Uruguayan Air Force Flight 571, –182
airplanes, private
Earhart, Amelia, , , –65
_l'Oiseau Blanc_ , , __ , –40
Fossett, Steve, –24, __ , –29
Kennedy, John F., Jr., , –83
Wright Model A (Fort Myer trial), _–16_, , –19, __
airships
_Hindenburg_ , –43,
L-8, , _–212_, –214, _, _, , –221
R-101, –195
Andes flight disaster. _See_ Uruguayan Air Force Flight 571
Andrée, Salomon August, –117, __ , –123, __ ,
Andrée balloon expedition
disappearance of, ,
failed launch, ,
fate of, –123,
plan to reach North Pole, –117
Baessell, Norman F., , , ,
Bain, Addison,
balloons
_Örnen_ , , , , __
Project Skyhook, , __ ,
Rozière balloon, ,
Begich, Nicholas J., ,
Bermuda Triangle, –236
blimps. _See_ airships
British Overseas Airways Corporation (BOAC) Flight 777A _Ibis_ , –170, __ , –177
Boggs, Thomas Hale, Sr., , __ , –134
Brown, Arthur Roy, , __ , –111
Brown, Russell L.,
_Challenger_ , –90, __ , , , –100
Churchill, Winston, , –176
Cody, Ernest DeWitt, –216, __ , , ,
Coli, François, , –34, _–36_, –40
Cooper, D. B.
alternate leads,
composite sketch, __
hijacking of Northwest Orient flight, , –151
hunt for, –155
recovered cash and, –156
crashes
Eastern Airlines Flight 401, –193
Kennedy, John F., Jr., , –83
_Lady Be Good_ (B-24 Liberator), –200, __ , –209
Hess, Rudolf, –147
Uruguayan Air Force Flight 571 (FH-227D), –182
Wright Model A (Fort Myer trial), _–16_, , –19, __
_See also_ disappearances; explosions; flights shot down; hijackings
de Rozier, Jean-François Pilâtre, –11, __ ,
disappearances
Andrée, Salomon August expedition, ,
Boggs/Begich party, , –135
Earhart, Amelia, , , –65
Flight 19,
Fossett, Steve, –24, __ , –29
_Lady Be Good_ , –200
_l'Oiseau Blanc_ , , __ , –40
Malaysian Airlines Flight 370, –244
Miller, Glenn, –68, , –76
PBM-5 Mariner, , –237
_See also_ crashes; explosions; flights shot down; hijackings
Earhart, Amelia
career of, –56
circumnavigation attempts, , –59
disappearance of, , , –65
pictured, _, , , _
Eastern Airlines Flight 401, –194, __
Eckener, Hugo, __ , ,
Ekholm, Nils, –116,
explosions
_Challenger_ , , , –100
_Hindenburg_ , –43, , __ , –51,
_See also_ crashes; disappearances; flights shot down; hijackings
Ferradas, Julio César, –180,
Flight 19 (TBM Avengers)
Bermuda Triangle theory, –236
disappearance of, , –241
disorientation explanation, –238, –241
Navigation Problem No. mission, –234
PBM-5 Mariner disappearance,
possible wreckage sites, –242
flights shot down
BOAC Flight 777A ( _Ibis_ ), –170, –177
Red Baron, , –106, , , –113
_See also_ crashes; disappearances; explosions; hijackings
Fossett, James Stephen "Steve," –24, __ , –29
Frænkel, Knut, _, _
Galland, Adolf, –143
Goebbels, Joseph, –175
Göring, Hermann, , __
Hatton, William J., __ , ,
Hays, "Dp," _197_ ,
Hess, Walter Richard Rudolf
career of,
crash of, –138
motivations for, –146
pictured, _, , , _
as prisoner of war, , –147
secret mission of, , –143
hijackings
Northwest Orient Airlines Flight 305 (D. B. Cooper), , __ , –151,
September 11, –160, –167
_See also_ crashes; disappearances; explosions; flights shot down
_Hindenburg_ ,
explosion of, –43, , __ ,
features of, –46
pictured, _, , _
public reaction to, –52
theories about, , –51
Hintze, Herbert,
Hitler, Adolf, ,
Howard, Leslie, –169, –175, __
_Ibis. See_ British Overseas Airways Corporation Flight 777A
Israel, Wilfrid B.,
Jarvis, Gregory B., , __
Jonz, Don Edgar, –130, –135
Kennedy, John F., Jr.
conspiracy theories about, –88
fatal crash of, –83
flight experience of, ,
"Kennedy Curse" and, –87
likely causes of crash of, –85
pictured, __
_Lady Be Good_ , –200, __ , –204, __ , –209
Lagurara, Dante Héctor, ,
LaMotte, Robert E., , __ ,
Lilienthal, Karl Wilhelm Otto, , __ , –14
Lilienthal glider,
Lindbergh, Charles, ,
Loft, Robert, –191, –194
_l'Oiseau Blanc_ , , __ , –40
Malaysian Airlines Flight 370, –244, __ , –247
Manning, Harry,
Mantell, Thomas F., Jr. (Mantell UFO incident), –228,
May, Wilfrid R. "Wop," , __ , –111
McAuliffe, Sharon Christa Corrigan, , __
McNair, Ronald E., , __
Miller, Alton Glenn
career of, ,
disappearance of, –68, ,
legacy of,
pictured, _–70_
theories about, –76
Moore, Vernon L., , __ , –208
Morgan, John R. S., –68, ,
NASA
Apollo casualties,
_Challenger_ explosion, , , , –100
Space Shuttle _Columbia_ disaster and, –101
Space Shuttle Program and, ,
Noonan, Fred, –59, –65
Nungesser, Charles, , __ , –34, _–36_, –38
Onizuka, Ellison S., , __
_Örnen_ , , , , __
Pentagon, __ , –164,
Pruss, Max,
Putnam, George P., ,
Red Baron. _See_ Richthofen, Manfred von
Repo, Don, , –194
Resnik, Judith A., , __
Richthofen, Manfred von (the Red Baron)
career of,
controversy over credit, –112
pictured, _–104_
shooting down of, , –106, , , –113
Ripslinger, Harold J., , __ , –208
Romain, Pierre, , __ ,
Rozière balloon, ,
Selfridge, Thomas E., –19
September 11 attacks, –160, –167
Shelley, Guy E., Jr., , __ , –208
Smith, Michael J., , __ ,
Space Shuttle _Challenger_ , –90, __ , , , –100
Stainer, Leslie. _See_ Howard, Leslie
Stockstill, Albert,
Strindberg, Nils, , _, _
Taylor, Charles, , –238, __ ,
Toner, Robert F., , __ , , –207
unidentified flying objects (UFOs), , –227, , __ , –231
Uruguayan Air Force Flight 571 (FH-227D), –186
US Navy blimp L-8
abandonment of, –220
appearance in Daly City, ,
crew of, –216,
original mission of, –214
pictured, _–212, , , _
theories about, –221
use after Daly City,
Woravka, John S., , __ , –208
World Trade Center, –160, __ , , ,
Wright, Orville and Wilbur, –15, __ , –18,
Wright Model A (Fort Myer trial), _–16_, , –19, __
## **ACKNOWLEDGMENTS**
It is impossible to complete a work as comprehensive as this without a great deal of help from others, and I would like to acknowledge those who assisted me in making it possible. First, I am indebted to my editors at Zenith Press for helping to make this work the best it could possibly be. Their many perceptive comments and suggestions were invaluable. Colonel Walter Boyne—friend, colleague, and mentor—is the "ace of aces" in matters having to do with aviation history. I very much appreciated his encouragement and advice. Terry Irwin, certified flight instructor and friend for more years than seems possible, is also one of the most knowledgeable aviation authorities anywhere. His review of the manuscript and astute observations were exceedingly helpful. Lieutenant Dan Ruffin is a US Navy F/A-18 Super Hornet weapons systems officer—and the smartest young fellow I was ever blessed to have as a son. His technical perspective on this work was vital. Paula Ronald, whom I was lucky enough to have as a sister and personal librarian, has also been a lifetime coach, critic, fan, and friend. She painstakingly reviewed every word of the manuscript and expertly dissected some of my dangling participles, split infinitives, and run-on sentences. My good friend and comrade-in-arms, Col. Steve Robison, showed unparalleled fortitude by laboriously wading through the manuscript. I thank him for his insightful suggestions and much-needed encouragement.
Others who kindly provided assistance included Heather Bourk of the US House of Representatives Collection, the Cessna Aircraft Company, Stephen Miller, Bob Garrard, Minerva Bloom of the NAS Fort Lauderdale Museum, Peter Kilduff, Jon Proctor, Bernhard Ebner, and Otto Gross. Finally, I wish to acknowledge my wife, Janet, and daughter, Katie, both of whom supported and encouraged me in various ways throughout this long process. Many thanks to you, one and all.
**Steve Ruffin**
Virginia Beach, Virginia
January 2015
**Dedication**
This book is dedicated to all those who, since the beginning of manned flight, have taken off into the wild blue yonder and never returned.
First published in 2015 by Zenith Press, an imprint of Quarto Publishing Group USA Inc., 400 First Avenue North, Suite 400, Minneapolis, MN 55401 USA
© 2015 Quarto Publishing Group USA Inc.
Text © 2015 Steven A. Ruffin
All rights reserved. With the exception of quoting brief passages for the purposes of review, no part of this publication may be reproduced without prior written permission from the Publisher.
The information in this book is true and complete to the best of our knowledge. All recommendations are made without any guarantee on the part of the author or Publisher, who also disclaims any liability incurred in connection with the use of this data or specific details.
We recognize, further, that some words, model names, and designations mentioned herein are the property of the trademark holder. We use them for identification purposes only. This is not an official publication.
Zenith Press titles are also available at discounts in bulk quantity for industrial or sales-promotional use. For details write to Special Sales Manager at Quarto Publishing Group USA Inc., 400 First Avenue North, Suite 400, Minneapolis, MN 55401 USA.
To find out more about our books, visit us online at www.zenithpress.com.
Digital edition: 978-1-62788-872-1
Hardcover edition: 978-0-7603-4792-8
Library of Congress Cataloging-in-Publication Data
Ruffin, Steven A.
Flights of no return : aviation history's most infamous one-way tickets to immortality / Steven A. Ruffin.
pages cm
Includes bibliographical references.
ISBN 978-0-7603-4792-8 (hc w/jacket)
1. Aircraft accidents--History. I. Title.
TL553.5.R84 2015
363.12'409--dc23
2014049416
Acquisitions Editor: Elizabeth Demers
Project Manager: Madeleine Vasaly
Art Director: James Kegley
Cover Designer: Faceout Studios
Page Designer: Carol Holtz
Layout Designer: Simon Larkin
On the front cover: Amelia Earhart posing in front of her Lockheed 10E Electra. It was one of the most advanced civilian airplanes of its day. _NASA_
Q: Node/Express/Formidable multipart form - request body is *absent* on server

I am new to Node/Express (but not to development). It was all going swimmingly - but I have hit a real roadblock.
I am trying to get a framework in place for an 'SPA' HTML game.
I'm trying to POST multipart/form-data - as eventually I will want to do file uploads.
I'm using Fetch and the FormData object for POSTS - as I want to 'Ajax' HTML fragments/JSON into my SPA
My POST looks ok (to me) client-side -
Post data headers/payload/client-side
They have a multipart payload
but the request body is entirely absent server-side (which (presumably) causes Formidable to return an empty set of fields)
No request.body
const form = formidable({ multiples: true });
let outFields={}
form.parse(cmd.request,(err,fields,files)=>{outFields=fields})
I swear at one point - the request had a body property, but it was an empty object{} which isn't very useful either
Things I have tried:-
*
*List item
Faffed about with CORS
//CORS - without this we cannot accept multipart forms (or do several other things later - this really wants locking down before production)
app.use((req, res, next:Function) => {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
res.setHeader('Access-Control-Allow-Methods', 'POST, GET'); // Add other methods here
next();
});
*
*List item
Put in, Taken, out (and shaken all about) the middleware:--
app.use(logger('dev'));
//app.use(express.json());
//app.use(bodyParser.urlencoded({extended:true}))
//app.use(bodyParser.json())
(maybe that's where the empty body came in)
*
*List item
'Funded' and reached out for paid support at https://www.npmjs.com/package/formidable (no reply yet)
*
*List item
Messed with the form (added an enctype and method attribute) - to no avail
*
*List item
Pulled quite a lot of my own hair out
*
*Stepped a fair way into the library - but TBH this is beyond my expertise
So to Boil it down:-
here's my form:-
<form id='signIn' method="post" enctype="multipart/form-data">
<p>Enter your player name <input type='text' name='pName'></p>
<p>Enter your password <input type='password' name='password'></p>
</form>
<!--(method, url,elementId,mode,formDataObj){ -->
<button onclick="ajax('POST','signIn','main','i',new FormData($('#signIn')[0]))">Sign In</button>
</form>
Here's the ajax helper method (not very pretty yet - sorry):-
//Fetch is the modern, native way - it is well described here https://javascript.info/fetch
function ajax(method, url,elementId,mode,formDataObj){
//we return a promise so we can await (completion if we want to)
return new Promise((resolve,reject)=>{resolve(fetch(url, {method:method,body:formDataObj,headers:{'Accept':'text/html','Content-Type':'multipart/form-data'}}).then(response=>fillElement(elementId,response,mode)))}) //response is a promise
}
and (not that it's very relevant) - here is the router/"controller" code
router.get('/*', (req:e.Request, res:e.Response, next:e.NextFunction)=> {processRequest(req,res,next)}) //res.render("main",{player:} }) //
function processRequest(req:e.Request,res:e.Response,next:e.NextFunction){
//siginIn,signUp,signOut
const action=req.path.split('/')[1] //we're going to see if the controller object has a function matching the (first segment of) the path - NB: path[0] is what before the first slash (i.e. - nothing)
if(controller.hasOwnProperty(action)){ //see if the controller object has a function matching the (first segment of) the path
const game:Game=global["game"] //get a type safe reference to the game
let player:Player = game.playerFromCookie(req.cookies.pid)
console.log (req.path)
//construct a parameters object to pass useful info into the controller[action] method
//export interface Params {readonly game:Game,readonly player:Player,request:e.Request,response:e.Response}
let params:controller.Params = {game:game, player:player, request:req , response:res}
//Invoke the method (action) with tha handy parameters
let output:{template:string,data:any} = controller[action](params)
res.render(output.template, {player:output.data})
Thanks in advance for your time
A: Your form has two </form> closing tags, which gives you invalid markup and effectively two forms.
Also
let outFields={}
form.parse(cmd.request,(err,fields,files)=>{outFields=fields})
is not going to do what it looks like: form.parse is asynchronous, so outFields is still {} at the point where you read it - the callback only runs after parsing finishes. I recommend you learn more about how callbacks work in general, and when you do it will be obvious why this code is sloppy.
And your ajax code shows the same confusion about Promise: fetch already returns a promise, so wrapping it in new Promise(...) is pointless. Read the docs and good luck.
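For what it's worth, there is another problem that would keep the body empty even with everything else fixed: the ajax helper forces 'Content-Type': 'multipart/form-data' onto the fetch call. When you pass a FormData body, fetch builds that header itself and appends a per-request boundary parameter; formidable needs that boundary to split the parts, and overriding the header throws it away. A minimal sketch of the difference (assumes Node 18+ or a modern browser; the URL is just a placeholder):

```javascript
const fd = new FormData();
fd.append('pName', 'alice');

// Let fetch derive the Content-Type from the body -- the boundary survives:
const auto = new Request('http://localhost/signIn', { method: 'POST', body: fd });

// Force the header by hand -- the boundary parameter is lost:
const forced = new Request('http://localhost/signIn', {
  method: 'POST',
  body: fd,
  headers: { 'Content-Type': 'multipart/form-data' },
});

console.log(auto.headers.get('content-type'));   // includes a "boundary=..." parameter
console.log(forced.headers.get('content-type')); // bare type, no boundary
```

So drop the Content-Type entry from the headers option in your ajax helper (the Accept header is harmless) and let fetch fill it in.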
Sensory fiction is a design fiction about future ways of experiencing and telling stories. Traditionally, fiction creates and induces emotions and empathy through words and images. In a fictional future dominated by wearable technology, people suppress their physiological emotional responses, as these are being recorded and analyzed. To regain those physical symptoms of emotion, they use a vest with different actuators that trigger emotions networked with the story of the book. The project is inspired by 'The Diamond Age' by Neal Stephenson, more specifically the seemingly magical book called the Primer, as well as the short story 'The Girl Who Was Plugged In' by James Tiptree Jr.
\section{Introduction}
\noindent
The need for nonperturbative methods in quantum physics is obvious
considering the many problems where strong coupling and/or binding
effects
render perturbation theory inadequate. In nonrelativistic
many-body physics variational methods are widely used
under these circumstances while this is {\it not} the case in
relativistic field theory. However, Feynman's successful treatment
of the polaron problem \cite{Fey} shows that variational
methods may also be used for a nonrelativistic field theory
provided that the fast degrees of freedom can be integrated out and
their effect properly taken into account in the trial action.
In a previous paper \cite{RoSchr} (henceforth referred to as (I) )
we have extended the polaron variational method to the simplest
scalar field theory
which describes heavy particles (``nucleons'') interacting by the
exchange of light particles (``mesons''). In euclidean space-time the
Lagrangian of the Wick-Cutkosky model is given by
\begin{equation}
{\cal L} = \frac{1}{2} \left ( \partial_{\mu} \Phi \right )^2 +
\frac{1}{2}
M_0^2 \Phi^2 + \frac{1}{2} \left ( \partial_{\mu} \varphi \right )^2
+ \frac{1}{2}m^2 \varphi^2 - g \Phi^2 \varphi \> .
\end{equation}
Here $M_0$ is the bare mass of the nucleon, $m$ is the mass of the
meson and $g$ is the (dimensionfull) coupling constant of the Yukawa
interaction
between them. In the quenched approximation the meson field can be
integrated out and one ends up with an effective action for the
nucleons only
\begin{equation}
S_{\rm eff} \> [x(\tau)] = \int_0^{\beta} d \tau \> \frac{1}{2}
\dot x^2
- \frac{g^2}{2} \int_0^{\beta} d \tau_1 \> \int_0^{\beta} d \tau_2 \>
\int \frac{d^4 q}{(2 \pi)^4} \> \frac{1}{q^2 + m^2} \>
e^{ \> i q \cdot ( \>x(\tau_1) - x(\tau_2) \> ) } \> .
\label{eff action}
\end{equation}
Note that this is formulated in terms of trajectories $ x(\tau) $
of the
heavy particle (``particle representation'') which are parametrized by
the proper time $ \tau $ and obey the boundary conditions
$ x(0) = 0 $ and $ x(\beta) = x$. To obtain the 2-point function one
has to perform the path integral over all trajectories and
to integrate over $ \beta $ from zero to infinity with a certain
weight.
It is, of course, impossible to perform this path integral exactly
and, again following Feynman, we have approximated it variationally
by a retarded quadratic two-time action.
In (I) we have proposed various parameterizations for the
retardation function which enters this trial action and
derived variational equations for the most general case when its
form was left free.
The purpose of the present paper is to investigate numerically these
parametrizations as well as to solve the variational equations. This
fixes
the variational parameters which will be used to calculate physical
observables in forthcoming applications. One quantity which we
evaluate
in the present paper is the residue on the pole of the propagator.
Another one is related to the well-known instability \cite{Baym}
of the Wick-Cutkosky
model : although the effective action (\ref{eff action})
is very similar to the one in the
polaron model the ground state of the system is only metastable.
This does not show up in any order of perturbation theory but, as
we have demonstrated in (I),
the variational approach knows about it. Indeed
an approximate solution of the variational equations
has revealed that there are no real solutions beyond a certain
critical coupling.
In the present paper we will find exact numerical values for
this critical coupling and calculate
the width of the unstable particle for couplings beyond it.
This paper is organized as follows: The essential points of the
polaron variational approach are collected
in Section \ref{sec: polaron},
while Section \ref{sec: numerical} is devoted to the numerical
methods and
results. In Section \ref{sec: width} we investigate the instability
of the Wick-Cutkosky model in our variational method and
determine analytically and numerically the width of the dressed
particle. The variational principle can also be applied away from
the pole, which is explored in Section \ref{sec: var 2point off pole}
and used to calculate the residue
at the nucleon pole. The main results of this work are
summarized in the last Section.
\section{Polaron Variational Approach}
\label{sec: polaron}
\noindent
Following Feynman's treatment of the polaron problem we have
performed in (I) a variational calculation of the 2-point function
with the quadratic trial action
\begin{equation}
S_t[x] \> = \> \int_0^{\beta} d\tau \> \frac{1}{2} \dot x^2 \> + \>
\int_0^{\beta} d\tau_1 \> \int_0^{\tau_1} d\tau_2 \>
f ( \tau_1 - \tau_2 ) \> \left [ \> x(\tau_1) \> - \> x(\tau_2) \>
\right ]^2 \> .
\label{general x Feynman action}
\end{equation}
Here $f ( \tau_1 - \tau_2 ) $ is an undetermined `retardation
function' which takes into account the time lapse occurring when
mesons are emitted and absorbed on the nucleon. In actual
calculations we rather have used the Fourier space form
\begin{equation}
S_t = \> \sum_{k=0}^{\infty} A_k \> b_k^2 \> \>,
\label{Feynman Fourier action}
\end{equation}
where the $b_k$ are the Fourier components of the path $x(\tau)$ and
the Fourier
coefficients $A_k$ are considered as variational parameters.
The variational treatment is based on the decomposition of the action
$ S_{\rm eff} $
into $ S_{\rm eff} = S_t \> + \Delta S $ and on Jensen's inequality
\begin{equation}
< e^{ - \Delta S } > \> \> \ge \> \> e^{ - < \Delta S > } \> .
\label{jensen}
\end{equation}
Near $ p^2 = - M_{\rm phys}^2 $ the 2-point function should behave
like
\begin{equation}
G_2(p^2) \longrightarrow
\>\> \frac{Z}{p^2 + M_{\rm phys}^2} \> .
\label{pole of 2point(p)}
\end{equation}
where $ 0 < Z < 1 $ is the residue. As was shown in (I) this requires
the proper time $\beta$ to tend to infinity. One then obtains the
following inequality
\begin{equation}
M_{\rm phys}^2 \le \> \frac{M_1^2}{2 \lambda} \> + \>
\frac{\lambda}{2} \> M_{\rm phys}^2 \> + \> \frac{1}{\lambda}
\> \left (\Omega \> + \> V \right )\>.
\label{var inequality for Mphys}
\end{equation}
where
\begin{equation}
M_1^2 = M_0^2 \> - \> \frac{g^2}{4 \pi^2} \> \ln \frac{\Lambda^2}{m^2}
\label{finite mass}
\end{equation}
is a finite mass into which the divergence of the self-energy
has been absorbed and $\lambda$ a variational parameter. For
$\beta \to \infty$ all discrete sums over Fourier modes $A_k$
turn into integrals over the `profile function'
$ A( E = k \pi /\beta ) $ and one finds
\begin{equation}
\Omega \> = \> \frac{2}{\pi} \> \int_0^{\infty} dE \>
\left [ \> \ln A(E) \> + \> \frac{1}{A(E)} \> - \> 1 \> \right ] \> ,
\label{Omega by A(E)}
\end{equation}
as well as
\begin{equation}
V \> = \> - \> \frac{g^2}{8 \pi^2} \> \int_0^{\infty} d\sigma\>
\int_0^1 du \>
\Biggl [ \> \frac{1}{\mu^2(\sigma)} e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right)
\> - \> \frac{1}{\sigma} \> e \> ( m \sqrt{\sigma},0,u)
\> \Biggr ] \> .
\label{pot}
\end{equation}
Here we use the abbreviations
\begin{equation}
e(s,t,u) =
\exp \left ( - \> \frac{s^2}{2} \> \frac{1-u}{u} \> - \>
\frac{t^2}{2}\> u \> \right )
\end{equation}
and
\begin{equation}
\mu^2(\sigma) \> =
\> \frac{4}{\pi} \int_0^{\infty} dE \> \frac{1}{A(E)} \>
\frac{\sin^2 (E \sigma/ 2)}{E^2} \>.
\label{amu2(sigma)}
\end{equation}
Because $ \mu^2(\sigma) $ behaves like $ \sigma $ and $ \sigma/A_0 $
for small and large $ \sigma $, respectively,
we have called it a `pseudotime'. Note that in Eq. (\ref{pot}) the
particular renormalization point $\mu_0 = 0$ has been used to
regularize
the small-$\sigma$ behaviour of the integrand. As we have shown in (I)
the total result is, of course, independent of $\mu_0$.
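These limits are easily verified from Eq. (\ref{amu2(sigma)}): for small
$\sigma$ the integral is dominated by large $E$, where $A(E) \to 1$, so that
the standard integral
\begin{equation}
\int_0^{\infty} dE \> \frac{\sin^2 (E \sigma/2)}{E^2} \> = \>
\frac{\pi \sigma}{4}
\end{equation}
gives $\mu^2(\sigma) \simeq \sigma$, while for large $\sigma$ the integrand
is concentrated at small $E$, so that $A(E)$ may be replaced by
$A(0) \equiv A_0$ and one obtains $\mu^2(\sigma) \simeq \sigma/A_0$.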
\noindent
The profile function $A(E)$ is linked to the retardation function
$f(\sigma)$ by
\begin{equation}
A( E ) =\> 1 \> + \> \frac{8}{E^2} \> \int_0^{\infty}
d\sigma \> f(\sigma) \> \sin^2 \frac{E \sigma}{ 2 } \> .
\label{A(E)}
\end{equation}
In (I) we have studied the following parametrizations
\vspace{0.5cm}
\noindent
{\it`Feynman' parametrization:}
\begin{equation}
f_F(\sigma) \> = \> \frac{w}{4} \> \left ( v^2 - w^2 \right ) \> e^{-w \sigma} \> ,
\label{Feynman retard func}
\end{equation}
which leads to
\begin{equation}
A_F ( E ) \> = \> \frac{v^2 \> + \> E^2}{w^2 \> + \> E^2}\;\;\;.
\label{Feynman A(E)}
\end{equation}
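As an aside (this check is not part of the original analysis), for this
profile the pseudotime integral (\ref{amu2(sigma)}) can be done in closed
form,
\begin{equation}
\mu_F^2(\sigma) \> = \> \frac{w^2}{v^2} \> \sigma \> + \>
\frac{v^2 - w^2}{v^3} \> \left ( 1 - e^{-v \sigma} \right ) \> ,
\end{equation}
which exhibits the limits $\mu_F^2(\sigma) \to \sigma$ for $\sigma \to 0$
and $\mu_F^2(\sigma) \to \sigma \, w^2/v^2 = \sigma/A_F(0)$ for
$\sigma \to \infty$. A short script, assuming a standard Python/NumPy
environment, confirms this against direct quadrature:

```python
# Illustrative check: compare a brute-force trapezoidal quadrature of the
# pseudotime integral
#     mu^2(sigma) = (4/pi) * int_0^inf dE (1/A(E)) sin^2(E sigma/2) / E^2
# for the 'Feynman' profile A_F(E) = (v^2 + E^2)/(w^2 + E^2)
# with the closed form quoted above.
import numpy as np

def mu2_numeric(sigma, v, w, emax=4000.0, n=2_000_000):
    """Trapezoidal quadrature with a large-E cutoff (tail falls off as 1/E)."""
    E = np.linspace(1e-8, emax, n)
    inv_A = (w**2 + E**2) / (v**2 + E**2)          # 1/A_F(E)
    f = inv_A * np.sin(0.5 * E * sigma)**2 / E**2
    dE = E[1] - E[0]
    return 4.0 / np.pi * (f.sum() - 0.5 * (f[0] + f[-1])) * dE

def mu2_closed(sigma, v, w):
    """Closed-form pseudotime for the Feynman profile."""
    return w**2 / v**2 * sigma + (v**2 - w**2) / v**3 * (1.0 - np.exp(-v * sigma))

v, w = 2.0, 1.0
for sigma in (0.1, 1.0, 5.0):
    print(f"sigma={sigma}: numeric={mu2_numeric(sigma, v, w):.5f}, "
          f"closed form={mu2_closed(sigma, v, w):.5f}")
```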
\noindent
{\it`Improved' parametrization:}
\begin{equation}
f_I(\sigma) \> = \> \frac{v^2 - w^2}{2 w} \frac{1}{\sigma^2} \>
e^{ - w \sigma} \> ,
\label{improved retard func}
\end{equation}
which entails
\begin{equation}
A_I( E ) = \> 1 + 2 \> \frac{ v^2 - w^2}{ w E} \> \left [ \arctan
\frac{E}{w} -\> \frac{w}{2 E} \ln \left ( 1 + \frac{E^2}{w^2}
\right ) \> \right ] \> .
\label{improved A(E)}
\end{equation}
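As a check, Eq. (\ref{improved A(E)}) indeed follows from inserting
(\ref{improved retard func}) into Eq. (\ref{A(E)}) and using the elementary
integral
\begin{equation}
\int_0^{\infty} \frac{d\sigma}{\sigma^2} \> e^{-w \sigma} \>
\sin^2 \frac{E \sigma}{2} \> = \> \frac{1}{2} \left [ \> E \arctan
\frac{E}{w} \> - \> \frac{w}{2} \ln \left ( 1 + \frac{E^2}{w^2} \right )
\> \right ] \> .
\end{equation}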
\noindent
In both cases $ v, w $ are variational parameters whose values have to
be determined by minimizing Eq. (\ref{var inequality for Mphys}).
\vspace{0.5cm}
\noindent
Besides the above parametrizations, it was also possible not to
impose any specific form for the retardation function
but to determine it by varying Eq. (\ref{var inequality for Mphys})
with respect
to $\lambda$ and $A(E)$. This gave the following relations
\begin{equation}
\frac{1}{\lambda} = \> 1 + \frac{g^2}{8 \pi^2} \> \int_0^{\infty}
d\sigma\> \frac{\sigma^2}{\mu^4(\sigma)} \>
\int_0^1 du \> u \> e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right)
\label{var eq for lambda}
\end{equation}
\begin{eqnarray}
A(E) \> = \> 1 + \frac{g^2}{4 \pi^2} \frac{1}{E^2}
\int_0^{\infty} d\sigma \>
\frac{\sin^2 (E \sigma /2)}{\mu^4(\sigma)} \> \int_0^1 &du& \left [ 1
+ \frac{m^2}{2}
\mu^2(\sigma) \frac{1-u}{u} -\frac{\lambda^2 M^2_{\rm phys} \sigma^2}
{2 \mu^2(\sigma)} u \right ] \nonumber \\
&& \hspace{1 cm} \cdot \> e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right) \> .
\label{var eq for A(E)}
\end{eqnarray}
Together with Eq. (\ref{amu2(sigma)}) they constitute a system
of coupled variational equations which have to be solved.
Assuming $\mu^2(\sigma) \simeq \sigma$ and $ m \simeq 0$, we
found in (I) an approximate
solution whose retardation function
had the same form as in the `improved'
parametrization and which
exhibited the instability of the system beyond a critical coupling
constant. In the general case we can read off the variational
retardation function from Eq. (\ref{var eq for A(E)})
\begin{equation}
f_{\rm var} (\sigma) = \frac{g^2}{32 \pi^2} \> \frac{1}
{\mu^4(\sigma)} \> \int_0^1 du \left [ 1 + \frac{m^2}{2}
\mu^2(\sigma) \frac{1-u}{u} -\frac{\lambda^2 M^2_{\rm phys} \sigma^2}
{2 \mu^2(\sigma)} u \right ] \> e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right) \> .
\label{var retardation function}
\end{equation}
Obviously it
has the same $1/\sigma^2$-behaviour for small
relative times as the `improved' parametrization
(\ref{improved retard func}). Finally we mention that by means of the
variational Eq. (\ref{var eq for A(E)}),
one can find the following expression for the `kinetic term'
$\Omega$ defined in Eq. (\ref{Omega by A(E)})
\begin{eqnarray}
\Omega_{\rm var} = \frac{g^2}{8 \pi^2}
\int_0^{\infty} &d\sigma& \> \int_0^1 du \> \left [ 1
+ \frac{m^2}{2}
\mu^2(\sigma) \frac{1-u}{u} -\frac{\lambda^2 M^2_{\rm phys} \sigma^2}
{2 \mu^2(\sigma)} u \right ] \nonumber \\
&\cdot& e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right) \>
\frac{\partial}{\partial \sigma} \left ( \frac{\sigma}
{\mu^2(\sigma)} \right ) \> .
\label{Omega var expressed by g^2}
\end{eqnarray}
This is demonstrated in the Appendix and will be used in Chapter
\ref{sec: width}.
That the kinetic term $\Omega$ can be combined with the
`potential term' $V$
is a consequence of the virial theorem for a two-time action
\cite{AlRo} which the variational approximation fulfills.
\section{Numerical Results}
\label{sec: numerical}
\noindent
In this Section we will compare numerically the various
parametrizations for the retardation function.
Because we are primarily interested in an
eventual application in pion-nucleon physics, we have chosen the
masses and coupling constants appropriately. Of course, the model
does not really give a
realistic description of the pion-nucleon interaction as spin- and
isospin degrees of freedom as well as chiral symmetry are missing.
In short, we use
\begin{eqnarray}
m \> = \> 140 \>\>\> {\rm MeV}
\label{pion mass} \\
M_{\rm phys} \> = \> 939 \>\>\> {\rm MeV}
\label{nucleon mass}
\end{eqnarray}
and the
results are presented as a function of the dimensionless coupling
constant
\begin{equation}
\alpha \> = \> \frac{g^2}{4 \pi} \frac{1}{M^2_{\rm phys}} \> \> \> .
\label{alpha}
\end{equation}
The relevant quantity for the physical situation is the
strength of the Yukawa potential between two nucleons
due to one-pion exchange \cite{BjDr}, which is approximately given by
(depending on the spin-isospin channel)
\begin{equation}
f^2 = \frac{g'^2}{4 \pi} \> \left ( \frac{m}{2 M_{\rm phys}}
\right )^2 \> \cong \> 0.08\;\;\;,
\label{pion-nucleon coupling}
\end{equation}
where $ \> g'^2/4 \pi \cong 14 \> $ is the pion-nucleon coupling.
In the Wick-Cutkosky scalar model the corresponding strength is just
the dimensionless coupling constant $ \alpha$ that
we have defined in Eq. (\ref{alpha}).
It should also be remembered
that a Yukawa potential only supports a bound state~\cite{ErWe} if
\begin{equation}
\alpha \> > \> 1.680 \> \frac{m}{M} \> = 0.2505 \> .
\label{critical binding for Yukawa}
\end{equation}
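These numbers are easy to check; the following minimal sketch (Python, using only the values quoted above) reproduces both the effective pion-nucleon strength and the critical Yukawa coupling:

```python
import math

# Values quoted in the text: pion and nucleon masses (MeV) and the
# pion-nucleon coupling g'^2 / (4 pi) ~ 14.
m, M = 140.0, 939.0
gp2_over_4pi = 14.0

# Effective Yukawa strength f^2 = (g'^2 / 4 pi) (m / 2M)^2
f2 = gp2_over_4pi * (m / (2.0 * M)) ** 2

# Critical coupling for binding in a Yukawa potential: alpha > 1.680 m/M
alpha_crit_yukawa = 1.680 * m / M

print(round(f2, 3))                 # 0.078, i.e. the quoted f^2 ~ 0.08
print(round(alpha_crit_yukawa, 4))  # 0.2505
```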
\noindent
We have minimized (cf. Eq. (\ref{var inequality for Mphys}))
\begin{equation}
- \> M_1^2 \> \le \> (\lambda^2 - 2 \lambda) \> M^2_{\rm phys}
\> + \> 2 \left ( \Omega \> + \> V \right )
\label{minimization}
\end{equation}
with the `Feynman' ansatz (\ref{Feynman A(E)}) and the `improved'
ansatz (\ref{improved A(E)}). This minimization was performed
numerically with respect to the
parameters $\lambda, v, w$ by using the standard CERN program MINUIT.
The numerical integrations were done with typically $2 \times 72$
Gauss-Legendre points after
mapping the infinite-range integrals to finite range. For the
`improved' retardation function we had to calculate $\mu^2(\sigma)$
and $\Omega$ numerically. Tables~\ref{table: var Feyn}
and~\ref{table: var improved} give the
results of these calculations. We also include the value of $M_1$
although it does not have a physical meaning: finite terms (which,
for example, arise when a different renormalization point is chosen)
can be grouped either with $M_1$ or with $V$. However, from the
variational inequality (\ref{minimization}) we see that
$M_1$ is a measure of the quality of the variational approximation:
the larger $M_1$, the better the approximation.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
~~$ \alpha $~~ & $ \sqrt v $ \ [MeV] &$ \sqrt w $ \ [MeV] &
$ \lambda $ & $ M_1 $ \ [MeV]& $ A(0) $ \\ \hline
0.1 & 1850 & 1845 & ~0.97300~ & 890.23 & ~1.0120~ \\
0.2 & 1805 & 1794 & ~0.94400~ & 839.73 & ~1.0257~ \\
0.3 & 1756 & 1739 & ~0.91250~ & 787.29 & ~1.0417~ \\
0.4 & 1702 & 1678 & ~0.87773~ & 732.69 & ~1.0606~ \\
0.5 & 1641 & 1608 & ~0.83843~ & 675.70 & ~1.0838~ \\
0.6 & 1569 & 1527 & ~0.79223~ & 616.09 & ~1.1142~ \\
0.7 & 1477 & 1424 & ~0.73355~ & 553.93 & ~1.1582~ \\
0.8 & 1325 & 1254 & ~0.63714~ & 490.60 & ~1.2485~ \\
\hline
\end{tabular}
\end{center}
\caption{Variational calculation for the nucleon self-energy in
the Wick-Cutkosky model using the `Feynman' parametrization
(\protect\ref{Feynman A(E)}) for the profile function.
The parameters $v,w$ obtained
from minimizing Eq. (\protect\ref{minimization}) are given as
well as $\lambda$ and the intermediate renormalized mass $M_1$
(see Eq. (\protect\ref{finite mass})). The last column lists
$ A(0) = v^2/w^2$.}
\label{table: var Feyn}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
~~$\alpha$ ~~ &$\sqrt v$\ [MeV]&$\sqrt w$\ [MeV]
&$\lambda$&$M_1$ \ [MeV] & $ A(0) $ \\ \hline
0.1 & 677.2 & 674.6 & ~0.97297~ & 890.25 & ~1.0158~ \\
0.2 & 661.4 & 656.0 & ~0.94390~ & 839.78 & ~1.0338~ \\
0.3 & 640.8 & 632.3 & ~0.91223~ & 787.43 & ~1.0548~ \\
0.4 & 613.7 & 601.9 & ~0.87715~ & 732.97 & ~1.0808~ \\
0.5 & 596.7 & 581.2 & ~0.83741~ & 676.20 & ~1.1109~ \\
0.6 & 570.4 & 550.7 & ~0.79040~ & 616.97 & ~1.1514~ \\
0.7 & 534.3 & 509.2 & ~0.72996~ & 555.45 & ~1.2118~ \\
0.8 & 468.2 & 434.5 & ~0.62429~ & 493.44 & ~1.3482~ \\
\hline
\end{tabular}
\end{center}
\caption{
Same as in Table \protect\ref{table: var Feyn} but
using the `improved' parameterization
(\protect\ref{improved A(E)}) for the profile function. }
\label{table: var improved}
\end{table}
Although the values of the parameters $v$ and $w$ are rather different
for the `Feynman' and the `improved' parametrizations, the parameter
$\lambda$ and the value of the profile function at $E = 0$ are very
close. This becomes understandable when we study the behaviour of these
quantities under
a {\it reparametrization} of the particle path: it can be shown that
a rescaling of the proper time $\beta \to \beta/\kappa$
leaves the variational functional invariant if
\begin{equation}
A^{(\kappa)} \left ( \frac{E}{\kappa} \right ) \> = \>
A^{(\kappa = 1)} ( E ) \> .
\label{repar invariance}
\end{equation}
We are working in the `proper time gauge' $ \kappa = 1 $. In a
general `gauge'
$ \kappa $ the variational parameters $ v, w$ are obviously
different (see Eqs. (\ref{Feynman A(E)}, \ref{improved A(E)}))
\begin{equation}
v^{(\kappa)} \> = \> \kappa \> v \> , \hspace{2cm}
w^{(\kappa)} \> = \> \kappa \> w \> ,
\end{equation}
but $A(0) = v^2 / w^2 $ and $\lambda$ are gauge-invariant.
For both parametrizations no minimum of Eq. (\ref{minimization}) was
found beyond
\begin{equation}
\alpha \> > \> \alpha_c
\end{equation}
where
\begin{eqnarray}
\alpha_c \> = \> \left \{ \begin{array}{ll}
0.824 &\hspace{1 cm}
\mbox{( `Feynman' )} \\
0.817 &\hspace{1 cm}
\mbox{( `improved' )} \> .
\end{array}
\right .
\end{eqnarray}
This value of the critical coupling is surprisingly close
to the value $ \alpha_c \simeq \pi/4 $ which
we obtained from the approximate solution of the variational
equations in (I).
On the other hand, when the parameter $\lambda$ is fixed to
$\lambda = 1$, i.e. a less general trial action for
``momentum averaging'' (see (I)) is used, then
a minimum is found for {\it all} values of $\alpha$. This points to
the important role played by this parameter. Indeed, in the
approximate solution of the variational equations found in (I) the
branching of the real solutions into complex
ones is most clearly seen in the approximate solution for $\lambda$.
We can also trace the instability to the inequality
(\ref{var inequality for Mphys}) for the physical mass: a clear
minimum as a function of $\lambda$ exists only as long as
the coefficient of $1/\lambda$,
i.e. $M_1^2/2 + \Omega + V$, stays positive. However, with increasing
coupling $M_1$ shrinks and $V$ becomes more negative until
at the critical coupling the collapse finally occurs.
We have also solved the coupled nonlinear variational equations
(\ref{var eq for lambda}), (\ref{var eq for A(E)})
together with (\ref{amu2(sigma)}) numerically\footnote{Note that the
variational solution is also reparametrization
invariant: Eqs. (\ref{var eq for A(E)}) and (\ref{amu2(sigma)}) are
consistent with the condition (\ref{repar invariance}).}.
This was done
by the following {\em iterative} method: we first mapped variables
with infinite range to finite range, e.g.
\begin{eqnarray}
E &=& M^2_{\rm phys} \> \tan \theta \\
\sigma &=& \frac{1}{M^2_{\rm phys}} \> \tan \psi
\end{eqnarray}
and then discretized the integrals by the standard
Gauss-Legendre integration scheme, with typically 72 or 96 Gaussian
points per integral.
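The mapping can be illustrated with a small self-contained sketch (Python with NumPy; we set $M_{\rm phys} = 1$ and test the scheme on an integral with a known value rather than on the actual variational kernels):

```python
import numpy as np

# Map the infinite-range E-integration to theta in (0, pi/2) via
# E = tan(theta) (M_phys = 1 for illustration) and discretize with
# Gauss-Legendre points, as described in the text.
n = 72
x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on (-1, 1)
theta = 0.25 * np.pi * (x + 1.0)            # linear map onto (0, pi/2)
w_theta = 0.25 * np.pi * w

E = np.tan(theta)
jacobian = 1.0 / np.cos(theta) ** 2         # dE = d theta / cos^2(theta)

# Sanity check on an integral with known value: int_0^infty e^{-E} dE = 1
val = float(np.sum(w_theta * jacobian * np.exp(-E)))
print(val)   # ~1.0
```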
The functions $A(\theta), \mu^2(\psi)$
as given by the variational equations were then tabulated at the
Gaussian points
using as input the values of $\lambda, A(\theta), \mu^2(\psi) $ from
the previous iteration. We started with the perturbative values
\begin{eqnarray}
\lambda^{(0)} &=& A (\theta_i)^{(0)} = 1 \nonumber \\
\mu^2(\psi_i)^{(0)} &=& \frac{1}{M^2_{\rm phys}} \> \tan \psi_i
\end{eqnarray}
and monitored the convergence with the help of
the largest relative deviation
\begin{equation}
\Delta_n = {\rm Max} \left ( \frac{ | \lambda^{(n)} -
\lambda^{(n-1)} | }
{\lambda^{(n)}} \> , \> \frac{ |A (\theta_i)^{(n)} -
A (\theta_i)^{(n-1)} | }
{A (\theta_i)^{(n)}}\> , \> \frac{ | \mu^2(\psi_i)^{(n)} -
\mu^2(\psi_i)^{(n-1)} | }
{\mu^2(\psi_i)^{(n)}} \right )\> , \>\>\> n \> = \> 1, 2, \ldots
\label{max dev}
\end{equation}
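The bookkeeping of this iteration can be sketched as follows (Python; the actual variational updates are replaced by a toy contraction purely to illustrate the loop structure and the convergence measure $\Delta_n$):

```python
import numpy as np

def iterate(update, state, tol=1e-10, nmax=200):
    """Iterate state -> update(state), recording the largest relative
    deviation Delta_n between successive iterates (cf. Eq. (max dev))."""
    deltas = []
    for _ in range(nmax):
        new = update(state)
        delta = float(np.max(np.abs(new - state) / np.abs(new)))
        deltas.append(delta)
        state = new
        if delta < tol:
            break
    return state, deltas

# Toy stand-in for the tabulated quantities lambda, A(theta_i), mu^2(psi_i):
# a vector driven to the fixed point of cos, starting from 'perturbative' 1's.
state0 = np.ones(8)
state, deltas = iterate(np.cos, state0)
print(len(deltas), deltas[-1])   # converged: last Delta_n below tolerance
```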
Some numerical results are given in Table~\ref{table: var equations}.
Comparing with Table~\ref{table: var improved} we observe a remarkable
agreement with the values from the `improved' parametrization.
It is only near $\alpha = 0.8$ that the variational solution is
appreciably better as demonstrated by the
numerical value of $M_1$ which measures the quality of the
corresponding approximation.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
~~$\alpha$~~ & $\lambda$ & $M_1$ \ [MeV] & $ A(0) $ \\ \hline
0.1 & ~0.97297~ & 890.25 & ~1.0151~ \\
0.2 & ~0.94389~ & 839.78 & ~1.0322~ \\
0.3 & ~0.91223~ & 787.43 & ~1.0520~ \\
0.4 & ~0.87718~ & 732.97 & ~1.0755~ \\
0.5 & ~0.83738~ & 676.20 & ~1.1044~ \\
0.6 & ~0.79030~ & 616.98 & ~1.1421~ \\
0.7 & ~0.72968~ & 555.47 & ~1.1972~ \\
0.8 & ~0.62262~ & 493.55 & ~1.3188~ \\ \hline
\end{tabular}
\end{center}
\caption{
The variational parameter $\lambda$, the renormalized mass $M_1$
and the value of the profile function
at $ E = 0 $ from the solution of the variational equations.}
\label{table: var equations}
\end{table}
This may also be seen in Figs.~\ref{fig: A(E) for alpha small},
\ref{fig: mu2 for alpha small} and \ref{fig: alpha big}
where the different profile functions and pseudotimes
are plotted for $\alpha = 0.2$ and $\alpha = 0.8$. One can also
confirm from
the graphs that the numerical results indeed have the limits for
$\sigma, E$ either small or large which we
expect from the analytical analysis. Furthermore, it
is clear that the `improved' parametrization of the trial action
is in general extremely close to the `variational' one, while the
`Feynman' parametrization deviates much more. Finally, it is
interesting to note that the profile
function of the `variational' calculation has a small dip near $E = 0$
which is a result of the additional terms in the retardation function
(\ref{var retardation function}). These rather innocent looking
deviations will become important if an analytic continuation back
to Minkowski space is performed in which physical scattering
processes take place.
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,145){\makebox(110,65)
{\psfig{figure=aplot2.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Profile function $A(E)$ as function of $E$ for the
`Feynman' parameterization (\protect\ref{Feynman A(E)}) (dotted line), the
`improved'
parameterization (\protect\ref{improved A(E)}) (dashed line) and the
`variational'
solution (solid line). The dimensionless coupling constant is $\alpha = 0.2$.}
\label{fig: A(E) for alpha small}
\end{figure}
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,145){\makebox(110,65)
{\psfig{figure=musplot2.ps,height=300mm,width=500mm}}}
\put(348,200){\makebox(40,40)
{\psfig{figure=musplot2lim.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Ratio of pseudotime $\mu^2(\sigma)$ to proper time
$\sigma$ for $\alpha = 0.2$. The labeling of the curves is as in Fig.
\protect\ref{fig: A(E) for alpha small}. An expanded view of
the small-$\sigma$
region is shown in the inset.}
\label{fig: mu2 for alpha small}
\end{figure}
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,145){\makebox(110,65)
{\psfig{figure=plot8.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{$A(E)$ and $\mu^2(\sigma)$ for $\alpha = 0.8$. The labeling of the
curves is as in Fig. \protect\ref{fig: A(E) for alpha small}.}
\label{fig: alpha big}
\end{figure}
Examples for the convergence of the iterative scheme
are shown in Fig.~\ref{fig: convergence}. It is seen that for small
coupling constant we have rapid convergence which becomes slower
and slower as the critical value
\begin{equation}
\alpha_c \> = \> 0.815 \hspace{1 cm} \mbox{( `variational' )}
\end{equation}
is reached. Finally, for $\alpha > \alpha_c$ only a limited
relative accuracy can be reached, and the deviations increase
again with additional iterations.
\begin{figure}
\unitlength1mm
\begin{picture}(110,70)
\put(370,140){\makebox(110,65)
{\psfig{figure=convergeplot.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Convergence of the iterative solution of the
variational equations as a function of the number of iterations $n$.
The convergence measure $\Delta_n$ is defined in Eq.~(\protect\ref{max dev}).}
\label{fig: convergence}
\end{figure}
How the critical coupling depends on the meson mass is shown in
Fig.~\ref{fig: alphacrit vs m}. It turns out that the good agreement
of the approximate value $\alpha_c \approx \pi/4$ with the
numerical value obtained for $m = 140$ MeV was accidental:
at $m = 0$
we have $\alpha_c = 0.641$. There is also a surprisingly strong
but nearly linear $m$-dependence which we cannot reproduce from an
approximate solution of the variational equations when taking
$m \neq 0$ but still assuming $\mu^2(\sigma) \approx \sigma$.
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,135){\makebox(110,65)
{\psfig{figure=massplot.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Critical coupling constant as a function of the meson mass $m$.
The nucleon mass is fixed at $M = 939$ MeV. The crosses indicate the points
at which the critical coupling has been calculated, the line through them being
drawn to guide the eye.}
\label{fig: alphacrit vs m}
\end{figure}
\section{Instability and Width of the Dressed Particle}
\label{sec: width}
\noindent
In all parametrizations of the profile function $A(E)$ which we
investigated
numerically in the previous section it turned out to be impossible to
find a (real) solution of the variational equations or the variational
inequality for coupling constants above a critical value $\alpha_c$.
This is a signal of the instability of the model which is already seen
in the classical ``potential''
\begin{equation}
V^{(0)}(\Phi,\varphi) = \frac{1}{2} M_0^2 \Phi^2 \> + \> \frac{1}{2}
m^2 \varphi^2 - g \Phi^2 \> \varphi
\label{classical potential}
\end{equation}
and tells us
that the physical mass of the dressed particle becomes complex
\begin{equation}
M_{\rm phys} \> = \> M \> - \> i \> \frac{\Gamma}{2} \> .
\label{complex mass}
\end{equation}
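The instability can be made explicit by minimizing $V^{(0)}$ over $\varphi$ at fixed $\Phi$: the minimum lies at $\varphi = g \Phi^2/m^2$, leaving $V = M_0^2 \Phi^2/2 - g^2 \Phi^4/(2 m^2)$, which has only a finite barrier and is unbounded below. A minimal numerical illustration (with $m = M_0 = g = 1$ purely for convenience):

```python
import numpy as np

m, M0, g = 1.0, 1.0, 1.0   # illustrative units only

def V_ridge(Phi):
    # V^(0) evaluated along the valley varphi = g Phi^2 / m^2
    return 0.5 * M0**2 * Phi**2 - g**2 * Phi**4 / (2.0 * m**2)

Phi_barrier = m * M0 / (np.sqrt(2.0) * g)   # where dV/dPhi = 0

print(V_ridge(0.0))               # 0: the metastable minimum
print(V_ridge(Phi_barrier))       # barrier height m^2 M0^4 / (8 g^2) = 0.125
print(V_ridge(10.0))              # large negative: unbounded below
```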
In the following we take the real part of the nucleon mass to be
$ \> M = 939 \> $ MeV and try to determine the width $\Gamma$.
Note that in a perturbative calculation no sign of the instability
shows up: the one-loop self-energy
\begin{equation}
\Sigma(p^2) \> = \> - \frac{g^2}{4 \pi^2} \> \ln \frac{\Lambda^2}{m^2}
\> + \> \frac{g^2}{4 \pi^2} \> \int_0^1 du \>
\ln \left [ 1 + \frac{p^2}{m^2} u + \frac{M_0^2}{m^2} \frac{u}{1-u} \>
\right ]
\label{perturb self energy}
\end{equation}
is perfectly well-behaved. Also the one-loop effective potential
\cite{ItZu,IIM} is not very indicative: in quenched approximation it
is given by
\begin{equation}
V_{\rm eff}^{(1)}(\Phi,\varphi) \> = \>
\frac{1}{2} \int \frac{d^4 p}{(2 \pi)^4} \> \ln \left [ \> 1
\> - \> \frac{4 g^2 \Phi^2}{p^2 + m^2} \> \frac{1}{p^2 + M_0^2 -
2 g \varphi - i \epsilon} \right ] \> .
\label{one loop Veff}
\end{equation}
A detailed analysis shows that the quantum corrections lower the
barrier which makes the ground state metastable in
$V^{(0)}(\Phi,\varphi)$
but do not remove it. In addition,
the one-loop effective potential develops an imaginary part, but it
is easy to see that Im $V_{\rm eff}^{(1)}$ vanishes for
$M_0^2 - 2 g \varphi \> > \> 4 g^2 \Phi^2 /m^2 $, i.e. it contains a
(non-analytic) step function. Therefore all proper one-loop
Green functions (which are generated by the effective action)
carry no sign of the instability.
\noindent
In contrast, the variational approach for the two-point function
{\it knows} about the instability if we allow the parameter $\lambda$
in the trial action to vary.
Since the approximate solution of the variational
equation for $\lambda$ in (I) clearly showed that no real solution
can be obtained beyond $ \> \alpha_c \> $,
we will first study the width
of the state with similar approximate methods before turning to the
exact numerical evaluation.
\subsection{Approximate analytical treatment}
\noindent
In order to discuss complex solutions of the variational equations
it is useful to introduce the complex quantity
\begin{equation}
\zeta \> = \> \lambda \> M_{\rm phys}
\label{def xi}
\end{equation}
and to write it in the form
\begin{equation}
\zeta \> = \> \zeta_0 \> e^{- i \chi} \> .
\label{def xi0 and chi}
\end{equation}
It is a phase $ \chi \neq 0 $ which will lead to the complex pole
(\ref{complex mass}) of the two-point function.
With the same approximations $ m = 0$ and
$ \mu^2(\sigma) \> \approx \> \sigma$ as before,
we now obtain
\begin{equation}
\frac{1}{\lambda} \> = \> 1 \> + \> \frac{\alpha}{\pi} \>
\frac{M^2}{\zeta^2} \> .
\label{approx eq lambda xi}
\end{equation}
Here the dimensionless coupling constant is defined in terms of the
real part of the physical mass
\begin{equation}
\alpha \equiv \> \frac{g^2}{4 \pi M^2} \> .
\label{def alpha for complex mass}
\end{equation}
Due to Eq. (\ref{Omega var expressed by g^2}) the kinetic term
vanishes under the same approximation
\begin{equation}
\Omega_{\rm var} \> \approx \> 0
\label{Omega approx}
\end{equation}
and the potential term becomes
\begin{eqnarray}
V &\approx& - \frac{g^2}{8 \pi^2} \int_0^1 du \> \int_0^{\infty}
d\sigma
\> \frac{1}{\sigma} \> \left [ \> \exp \left( - \frac{m^2}{2}
\frac{1-u}{u} \sigma - \frac{\zeta^2}{2} u \sigma \right ) -
\exp \left( - \frac{m^2}{2} \frac{1-u}{u} \sigma \right ) \> \right ]
\nonumber \\
&=& - \frac{g^2}{8 \pi^2} \int_0^1 du \> \ln \left [ \> 1 +
\frac{\zeta^2}{m^2} \frac{u^2}{1-u} \> \right ] \> .
\label{V approx}
\end{eqnarray}
These are rather drastic simplifications but the exact numerical
calculations show that the imaginary part of $\Omega$ is indeed
smaller (by a factor of five) than Im $V$.
Note that $V$ is not infrared stable, i.e. it diverges if the
meson mass $m$ is set to zero.
With the above approximations the stationarity equation
(\ref{var inequality for Mphys})
then reads
\begin{equation}
M_1^2 \> = \> \left ( \frac{2}{ \lambda} - 1 \right ) \> \zeta^2 +
\frac{\alpha}{ \pi} M^2 \> \int_0^1 du \> \ln
\left [ 1 + \frac{\zeta^2}{m^2} \frac{u^2}{1-u} \right ] \> .
\end{equation}
Using Eq. (\ref{approx eq lambda xi}) this is equivalent to
\begin{equation}
\zeta^2 = M_1^2 - \frac{2 \alpha}{\pi} M^2 +
\frac{\alpha}{ \pi} M^2 \> \int_0^1 du \> \ln
\left [ 1 + \frac{\zeta^2}{m^2} \frac{u^2}{1-u} \right ] \> .
\label{approx eq for xi}
\end{equation}
If we take the {\it imaginary} part of this equation it is possible
to set $m = 0$ and we obtain
\begin{equation}
\zeta_0^2 \> \sin 2 \chi = \frac{2 \alpha}{\pi} M^2 \> \chi \>.
\label{eq xi0 chi}
\end{equation}
How do we determine the width of the unstable state? We take the
defining equation (\ref{def xi})
for $\> \zeta \> $, eliminate $\lambda$ by means of
Eq. (\ref{approx eq lambda xi}) and use Eq. (\ref{complex mass}).
This gives
\begin{equation}
M \> - i \> \frac{\Gamma}{2} \> = \> \zeta \> + \> \frac{\alpha}{\pi}
\frac{M^2}{\zeta} \>.
\end{equation}
The real and imaginary parts of this equation allow us to express
$\zeta_0$ and the width as a function of the
phase $\chi$. A simple calculation gives
\begin{equation}
\zeta_0 = \frac{M}{2 \cos \chi} \> \left [ \> 1 +
\sqrt{1 - \frac{4 \alpha}{\pi} \cos^2 \chi} \> \> \right ]
\label{xi0 as function of chi}
\end{equation}
and the width is
\begin{equation}
\Gamma \> = \> 2 M \tan \chi \> \sqrt{1 - \frac{4 \alpha}{\pi}
\cos^2 \chi} \>.
\label{Gamma expressed by chi and alpha}
\end{equation}
We have chosen the root which results in a positive width for
$ \> 0 \le \chi \le \pi/2 \> $.
Finally, substituting Eq. (\ref{xi0 as function of chi})
into Eq. (\ref{eq xi0 chi}) gives
the transcendental equation which determines the phase $\chi$.
After some algebraic transformations we obtain it in the form
\begin{equation}
\alpha = \pi \> \> \frac{2 \chi \> \sin 2 \chi}
{ (2 \chi + \sin 2 \chi)^2 \> \cos^2 \chi}
\> \equiv \> \frac{\pi}{4} \> h (\chi) \> .
\label{transc eq for chi}
\end{equation}
It is easy to see that the function $h (\chi)$ grows monotonically
from $ h(0) = 1$ to $h(\pi/2) = \infty$. Solutions $\chi(\alpha)$
therefore only exist for
\begin{equation}
\alpha \> > \> \alpha_c \> = \> \frac{\pi}{4} \> ,
\end{equation}
which is the same critical value of the coupling constant at which
previously the real (approximate) solutions of the variational
equations ceased to exist.
It is also easy to solve the transcendental equation
(\ref{transc eq for chi}) for small $\chi$: from $ h(\chi) = 1 +
\chi^2 + {\cal O}(\chi^4) $ we find
\begin{equation}
\chi \> \approx \> \sqrt{ \frac{\alpha - \alpha_c}{\alpha_c}}
\end{equation}
where, of course, $\alpha_c = \pi /4 $ should be used.
Since the expression (\ref{Gamma expressed by chi and alpha}) for
the width can be transformed into
\begin{equation}
\Gamma = 2 M \> \tan \chi \> \> \frac{ 2 \chi - \sin 2 \chi}
{2 \chi + \sin 2 \chi}
\label{Gamma expressed by chi}
\end{equation}
we obtain the following {\it nonanalytic} dependence of the width
on the coupling constant
\begin{equation}
\Gamma \> \approx \> \frac{2}{3} M \> \left ( \frac{\alpha - \alpha_c}
{\alpha_c}
\right )^{3/2} \> .
\label{Gamma for small chi}
\end{equation}
This should be valid near the critical coupling constant.
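These formulas are easy to evaluate; the sketch below (Python, with an illustrative value of $\alpha$ slightly above $\alpha_c$) solves Eq. (\ref{transc eq for chi}) for $\chi$ by bisection and compares the width (\ref{Gamma expressed by chi}) with the small-$\chi$ estimate (\ref{Gamma for small chi}):

```python
import math

M = 939.0                      # MeV (real part of the nucleon mass)
alpha_c = math.pi / 4.0

def h(chi):
    # h(chi) from the transcendental equation alpha = (pi/4) h(chi)
    s = math.sin(2.0 * chi)
    return 8.0 * chi * s / ((2.0 * chi + s) ** 2 * math.cos(chi) ** 2)

def chi_of_alpha(alpha, lo=1e-8, hi=1.5):
    # h grows monotonically from h(0) = 1, so bisection suffices
    target = 4.0 * alpha / math.pi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

alpha = 0.82                   # illustrative value just above alpha_c ~ 0.785
chi = chi_of_alpha(alpha)
gamma_exact = 2.0 * M * math.tan(chi) \
    * (2.0 * chi - math.sin(2.0 * chi)) / (2.0 * chi + math.sin(2.0 * chi))
gamma_small = (2.0 / 3.0) * M * ((alpha - alpha_c) / alpha_c) ** 1.5
print(chi, gamma_exact, gamma_small)   # the two widths agree to a few percent
```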
\subsection{Numerical results}
\noindent
For the numerical solution of the complex variational equations
we follow the approximate analytical solution as closely as possible.
However, some of the relations used previously do not hold exactly.
For example, the quantity
\begin{equation}
L = \frac{\zeta^2}{2} \> \int_0^{\infty} d \sigma \> \frac{\sigma^2}
{\mu^4(\sigma)} \> \int_0^1 du \> u \>
e \left ( m \mu(\sigma), \frac{\zeta \sigma}{\mu(\sigma)}, u \right )
\label{def L}
\end{equation}
would be unity for $m = 0, \> \mu^2(\sigma) = \sigma $ but has some
complex value in the exact treatment. Similarly, $ \Omega \neq 0 \> $
and $ V $ deviates from the approximate value (\ref{V approx}).
Without invoking the simplifying assumptions
Eq. (\ref{approx eq for xi}) changes to
\begin{equation}
\zeta^2 = M_1^2 - \frac{2 \alpha}{\pi} M^2 L \> + \>
2 \> ( \> \Omega \> + \> V \> ) \> .
\label{eq xi}
\end{equation}
Following the same steps as in the approximate treatment we obtain
\begin{equation}
\zeta_0 = \frac{M}{2 \cos \chi} \left [ 1 +
\sqrt{1 - \frac{4 \alpha}{\pi} \cos \chi \> {\rm Re} \> ( L
e^{i \chi} ) \> } \> \right ]
\label{xi0}
\end{equation}
which replaces Eq. (\ref{xi0 as function of chi}) and
\begin{equation}
\alpha = \pi \> \frac{K}{ \left ( \> {\rm Re} \> ( L e^{i \chi} )
\> +\> K \> \cos \chi \> \right )^2 }
\label{eq for alpha}
\end{equation}
which supersedes Eq. (\ref{transc eq for chi}). Here
\begin{equation}
K = \frac{2}{\sin 2 \chi} \> {\rm Im} \> \left [ \> L -
\frac{\pi}{\alpha} \frac{1}{M^2} \> ( \Omega + V ) \> \right ] \> .
\label{def K}
\end{equation}
Instead of Eq. (\ref{Gamma expressed by chi}) one can show that the
width now has the exact form
\begin{equation}
\Gamma = 2 M \> \frac{K \> \sin \chi \> - \> {\rm Im} \> ( L
e^{i \chi} ) } {K \> \cos \chi + {\rm Re} \> ( L e^{i \chi} ) } \> .
\label{Gamma exact}
\end{equation}
We have solved the coupled complex equations by specifying a value for
the phase $\chi$ and determining the corresponding value of the
coupling constant $ \alpha $ by means of Eq. (\ref{eq for alpha}).
Of course, this could be done only iteratively by starting with
\begin{eqnarray}
L^{(0)} &=& 1 \> , \> \> \> \> K^{(0)} = 2 \chi / \sin 2 \chi \> ,
\nonumber \\
\mu^{(0) \> 2} (\sigma) &=& \sigma \>, \> \> \> A^{(0)}(E) = 1 \> .
\nonumber
\end{eqnarray}
Typically 20 -- 25 iterations were needed to get a relative accuracy
of better than $10^{-5}$.
Table~\ref{table: width}
gives the results of our calculations. It is seen that
the width grows rapidly after the coupling constant exceeds the
critical value. In Fig.~\ref{fig: width as function of alpha}
this is shown together with the approximate (small-$\chi$)
behaviour predicted by Eq. (\ref{Gamma for small chi}). After the
critical coupling constant
in this formula has been shifted to the precise value, one observes
satisfactory agreement with the exact result.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
$\chi $ & $ \alpha $ &$ ~~\Gamma $\ [MeV]~~ & $ A(0) $ \\ \hline
~0.05~ & ~0.818~ & 0.13 & ~1.405 + 0.045 $i$~ \\
~0.10~ & ~0.827~ & 1.05 & ~1.396 + 0.088 $i$~ \\
~0.15~ & ~0.843~ & 3.54 & ~1.382 + 0.130 $i$~ \\
~0.20~ & ~0.865~ & 8.42 & ~1.362 + 0.169 $i$~ \\
~0.25~ & ~0.893~ & 16.5~ & ~1.338 + 0.205 $i$~ \\
~0.30~ & ~0.929~ & 28.6~ & ~1.309 + 0.236 $i$~ \\
~0.35~ & ~0.972~ & 45.5~ & ~1.277 + 0.263 $i$~ \\
~0.40~ & ~1.024~ & 68.2~ & ~1.243 + 0.285 $i$~ \\
~0.45~ & ~1.084~ & 97.6~ & ~1.207 + 0.301 $i$~ \\
~0.50~ & ~1.153~ & 134~~~ & ~1.171 + 0.313 $i$~ \\
~0.55~ & ~1.232~ & 180~~~ & ~1.134 + 0.319 $i$~ \\
~0.60~ & ~1.323~ & 235~~~ & ~1.099 + 0.320 $i$~ \\
~0.65~ & ~1.425~ & 301~~~ & ~1.065 + 0.316 $i$~ \\
~0.70~ & ~1.540~ & 379~~~ & ~1.032 + 0.307 $i$~ \\
\hline
\end{tabular}
\end{center}
\caption{
The width $\Gamma $ of the unstable state from the complex solution of the
variational equations for $ \alpha \> > \> \alpha_c = 0.815 $. The width
is given
as a function of the phase $\chi$ which determines the corresponding
coupling constant $\alpha$ according to Eq.~(\protect\ref{eq for alpha}).
The complex value of the profile function at $ E = 0 $ is also listed.}
\label{table: width}
\end{table}
\begin{figure}
\unitlength1mm
\begin{picture}(110,70)
\put(370,140){\makebox(110,65)
{\psfig{figure=figwidth.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Width of the unstable state as a function of the coupling constant
as obtained from the solution of the complex variational equations (see Table
\protect\ref{table: width} ). The dashed line shows the approximate solution
(\protect\ref{Gamma for small chi}). }
\label{fig: width as function of alpha}
\end{figure}
Finally Figs.~\ref{fig: real part of A and mu2}
and~\ref{fig: im part of A and mu2} depict the complex profile
function $ A(E) $
and the
complex pseudotime $ \mu^2(\sigma) $ for $ \> \chi = 0.5 \> $, i.e.
$ \> \alpha = 1.153 \> $. Compared to the real solutions below
$ \> \alpha_c \> $ (cf.
Figs.~\ref{fig: A(E) for alpha small}~-~\ref{fig: alpha big}) one
does not notice any
qualitative
changes in the real part of $ A ( E ) $ as one crosses the critical
coupling.
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,145){\makebox(110,65)
{\psfig{figure=real.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Real part of the profile function and of the ratio of pseudotime to
proper time for $\alpha = 1.153$. }
\label{fig: real part of A and mu2}
\end{figure}
\begin{figure}
\unitlength1mm
\begin{picture}(110,65)
\put(370,145){\makebox(110,65)
{\psfig{figure=imag.ps,height=300mm,width=500mm}}}
\end{picture}
\caption{Imaginary part of the profile function and of the ratio of pseudotime
to proper time for $\alpha = 1.153$. }
\label{fig: im part of A and mu2}
\end{figure}
\section{The Two-point Function Away from the Pole}
\label{sec: var 2point off pole}
\noindent
Up to now we have only determined the variational parameters on the
nucleon pole. However, the variational principle also applies to
$ p^2 \ne - M^2_{\rm phys}$. This forces us to consider
subasymptotic values of the proper time $\beta$.
We first deal with the residue at the pole which gives us the
probability to find the bare nucleon in the dressed particle.
\subsection{The residue}
\label{sec: resi}
\noindent
To calculate the residue it is most convenient to use the ``momentum
averaging'' scheme developed in (I) because in this approach there
are only a few subasymptotic terms. To be more specific, the
quantity $\tilde \mu^2(\sigma,T)$ introduced in Eq. (I.98)
has an additional term which
exactly cancels the $1/\beta$ term which arises from application of
the Poisson summation formula.
With exponential accuracy we therefore have
\begin{equation}
\tilde \mu^2(\sigma,T) \> \simeq
\> \frac{4}{\pi} \int_0^{\infty} dE \> \frac{1}{A(E)} \>
\frac{\sin^2 (E \sigma/ 2)}{E^2} \>.
\label{tilde amu2(sigma,T) Poisson}
\end{equation}
This is a big advantage as we do not have to expand the potential
term $ \> \ll S_1 \gg \> $ in Eq. (I.97) in inverse powers of
$\beta$. The only source of subasymptotic terms in
$ \> \ll S_1 \gg \> $ is then from the
$T$-integration from $ \> \sigma/2 \> $ to $ \> \beta - \sigma/2 \> $
which simply gives a factor $ \beta - \sigma$.
Applying the Poisson formula to the kinetic term
$\tilde \Omega$ defined in Eq. (I.100) we obtain, again with
exponential accuracy
\begin{equation}
\tilde \Omega(\beta) \> = \> \frac{2}{\pi} \> \int_0^{\infty} dE \>
\left [ \> \ln A(E) \> + \> \frac{1}{A(E)} \> - \> 1 \>
\right ] \> + \frac{1}{\beta}
\> \left [ \> \ln A(0) \> + \> \frac{1}{A(0)} \> - \> 1 \>
\right ] \> .
\label{tilde Omega(beta) Poisson}
\end{equation}
We recall from (I) that the two-point function may be written near
the pole as
\begin{equation}
G_2(p) \simeq \frac{1}{2} \> \int_0^{\infty} d\beta \>
\exp \left [ -\frac{\beta}{2} F(\beta,p^2) \right ] \>.
\label{2point func as beta int over F}
\end{equation}
Collecting all non-exponential terms, the function $F(\beta,p^2)$
therefore has the large-$\beta$ expansion
\begin{equation}
F(\beta,p^2) \> \simeq \> F_0(p^2) \> + \> \frac{2}{\beta} \>
F_1(p^2) \> ,
\label{F(beta,p^2) large beta}
\end{equation}
where
\begin{equation}
F_0(p^2) \> = \> p^2 + M_0^2 - p^2 (1-\lambda)^2 + \Omega
- \frac{g^2}{4 \pi^2} \int_0^{\infty} d\sigma \> \int_0^{1} du \>
e\left( m \mu(\sigma), \frac{-i \lambda p \sigma}{\mu(\sigma)},u
\right )
\label{F0}
\end{equation}
is what we have used before on the nucleon pole
($p = i M_{\rm phys}$) and
\begin{equation}
F_1(p^2) = \ln A(0) + \frac{1 - A(0)}{A(0)}
+ \frac{g^2}{8 \pi^2} \int_0^{\infty} d\sigma \>
\frac{\sigma}{\mu^2(\sigma)} \int_0^{1} du \>
e\left( m \mu(\sigma), \frac{-i \lambda p \sigma}
{\mu(\sigma)},u \right ) \> .
\label{F1}
\end{equation}
Note that the potential term in $F_0(p^2)$ develops a
small-$\sigma$ singularity which renormalizes the bare mass $M_0$,
whereas $F_1(p^2)$ is finite.
Neglecting the exponentially suppressed terms and performing the
proper time integration we thus obtain the following
expression for the two-point function
\begin{equation}
G_2(p^2) \> \simeq \> \frac{ e^{-F_1(p^2)}}{F_0(p^2)} \> =
\>\exp \left[ \> - \ln F_0(p^2) - F_1(p^2) \> \right] \> .
\label{2point expressed by F0,F1}
\end{equation}
It is now very easy to calculate the residue $Z$ at the pole
(see Eq. (\ref{pole of 2point(p)}) )
by expanding
around the point $ \> p^2 = - M^2_{\rm phys} \> $ where $F_0$
vanishes. We obtain
\begin{equation}
Z \> = \> \frac{\exp\left[ -F_1(-M_{\rm phys}^2) \right ]}
{ F_0'(-M_{\rm phys}^2)}
\label{residue expressed by F0,F1}
\end{equation}
where the prime denotes differentiation with respect to $p^2$.
Explicitly we find
\begin{equation}
Z \> = \> \frac{ N_0 \> N_1}{D}
\label{residue explicit}
\end{equation}
where
\begin{eqnarray}
N_0 &=& \exp \left ( - \ln A(0) + 1 - \frac{1}{A(0)} \right )
\label{residue prefactor}\\
N_1 &=& \exp\left[ \> - \frac{g^2}{8 \pi^2} \int_0^{\infty}
d\sigma \> \frac{\sigma}{\mu^2(\sigma)} \int_0^{1} du \>
e\left( m \mu(\sigma), \frac{\lambda \sigma M_{\rm phys}}
{\mu(\sigma)},u \right ) \right ]
\label{residue numerator} \\
D &=& \> 1 - (1-\lambda)^2
\> - \frac{g^2}{8 \pi^2} \> \lambda^2 \int_0^{\infty} d\sigma \>
\frac{\sigma^2}{\mu^4(\sigma)} \int_0^{1} du \> u \>
e\left( m \mu(\sigma), \frac{ \lambda \sigma M_{\rm phys}}
{\mu(\sigma)},u \right ) \nonumber \\
&=& \lambda \> .
\label{residue denominator}
\end{eqnarray}
In the last line the stationarity Eq. (\ref{var eq for lambda}) for
$\lambda$ was used to simplify the denominator $D$. Note that
this also applies to the case where one parametrizes
the profile function $A(E)$.
This demonstrates that
\begin{equation}
Z = \frac{ N_0 \> N_1}{\lambda}
\label{Z > 0}
\end{equation}
is always positive. It seems to be more difficult to prove in general
that $ Z \le 1 $ although all numerical calculations clearly
give this result.
Finally, it is again useful to check the variational result in
perturbation theory. With $A(0) = 1 + {\cal O}(g^2)$ one sees that
$N_0 = 1 + {\cal O}(g^4)$. Similarly
$ (1-\lambda)^2 = 1 + {\cal O}(g^4)$. Expanding $ N_1$ and $1/\lambda$
to order $g^2$ we obtain
\begin{eqnarray}
Z \> &=& \> 1 - \frac{g^2}{8 \pi^2} \> \int_0^{\infty} d\sigma \>
\int_0^{1} du \> (1 - u ) \>
\exp \left( - \frac{\sigma m^2}{2} \frac{1-u}{u} -
\frac{\sigma M^2_{\rm phys}}{2}u \right ) + {\cal O}(g^4) \nonumber \\
&=& \> 1 - \frac{g^2}{8 \pi^2} \> \int_0^{1} du \>
\frac{u (1 - u) } {M^2_{\rm phys} u^2 + m^2 (1 - u) } +
{\cal O}(g^4) \> .
\label{residue perturb check}
\end{eqnarray}
This coincides with what one obtains from the perturbative result
for the self-energy (\ref{perturb self energy}) in the usual way.
Table~\ref{table: residue} contains the numerical values of the
residue obtained with
the different parametrizations as well as the perturbative result
from Eq. (\ref{residue perturb check})
\begin{equation}
Z_{ \rm perturb } \> = \> 1 \> - \> 0.38004 \> \alpha \> .
\label{Zpert}
\end{equation}
It is seen that for $\alpha$ near the critical value
appreciable deviations from the perturbative result
occur. For example, at $\alpha = 0.8$ perturbation theory says that
there is a probability of nearly $70 \%$ to find the bare particle
in the dressed one whereas the variational results estimate this
probability to be only about $50 \%$. It should also be noted that
the residue is {\it not} an infrared-stable quantity, i.e. $Z$
also vanishes for $m \to 0 $.
From the variational equations one can deduce that
\begin{equation}
Z \> \buildrel m \to 0 \over \longrightarrow \> {\rm const.} \> \>
m^{\kappa}
\end{equation}
with $\kappa = \alpha/( \pi \lambda^2)$.
For massless mesons
the residue at the nucleon pole must vanish because
it is well known (e.g. from Quantum Electrodynamics) that in this case
the two-point function does not develop a pole but rather
a branchpoint at $p^2 = - M_{\rm phys}^2$.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
~~$\alpha$~~ & ~`Feynman'~ & ~`improved'~ & ~`variational'~
& ~perturbative~ \\ \hline
0.1 & 0.96090 & 0.96087 & 0.96087 & 0.96200 \\
0.2 & 0.91934 & 0.91914 & 0.91918 & 0.92399 \\
0.3 & 0.87467 & 0.87418 & 0.87428 & 0.88599 \\
0.4 & 0.82600 & 0.82494 & 0.82521 & 0.84798 \\
0.5 & 0.77184 & 0.76996 & 0.77036 & 0.80998 \\
0.6 & 0.70940 & 0.70610 & 0.70672 & 0.77198 \\
0.7 & 0.63216 & 0.62597 & 0.62697 & 0.73397 \\
0.8 & 0.51086 & 0.49187 & 0.49284 & 0.69597 \\
\hline
\end{tabular}
\end{center}
\caption{Residue at the pole of the two-point function for the different
parametrizations of the profile function. The heading `Feynman'
gives the result in the Feynman parametrization whereas
`improved' refers to the improved parametrization from Eq.
(\protect\ref{improved A(E)}). The residue calculated with the solution of
the variational equations is denoted by
`variational'. For comparison the perturbative result is also given.}
\label{table: residue}
\end{table}
\subsection{Variational equations for the off-mass-shell case}
\label{sec: var eq off mass shell}
It is also possible to apply the variational principle away from
the pole of the two-point function by varying Eq.
(\ref{2point expressed by F0,F1}). This gives
\begin{equation}
\delta \; F_0 (p^2) \> + \> F_0 (p^2) \> \delta F_1 (p^2) \> = \> 0
\> .
\label{var eq off}
\end{equation}
Note that on mass-shell where $ F_0 $ vanishes the previous
variational equations follow. We will not elaborate on
Eq. (\ref{var eq off}) further
but only point out that the perturbative self-energy
(\ref{perturb self energy}) is {\em not} obtained from the off-shell
variational equations (\ref{var eq off}). In the limit
$\mu^2(\sigma) \to \sigma, A(E) \to 1, \lambda \to 1$ one rather finds
\begin{equation}
\Sigma_{\rm var}(p^2) \> \to \> - \frac{g^2}{4 \pi^2} \>
\ln \frac{\Lambda^2}{m^2} \> + \> \frac{g^2}{4 \pi^2} \> \int_0^1
du \> \ln \left [ 1 - \frac{p^2}{m^2} \frac{u^2}{1-u} \>
\right ] + \frac{g^2}{4 \pi^2} \> \int_0^1 du \> u \>
\frac{p^2 + M_1^2}{m^2 (1-u) - p^2 u^2}
\end{equation}
which is an expansion of Eq. (\ref{perturb self energy})
around $ p^2 = - M_0^2$ (or $-M_1^2$, which is the same in lowest-order
perturbation theory). The reason for this somewhat unexpected result
is the neglect of exponentially suppressed terms in deriving
Eq. (\ref{2point expressed by F0,F1}). Indeed it is easy to see
that one obtains the correct perturbative self-energy only if the
upper limit of the $\sigma$-integral
is kept at $\beta$ and not extended to infinity as we have done in
deriving Eq. (\ref{2point expressed by F0,F1}). The difference is
one of the many exponentially suppressed terms which we have
neglected. Thus, the off-shell variational equations
(\ref{var eq off}) only hold in the vicinity
of the nucleon pole and in order to investigate variationally the
two-point function far away (say near the meson production threshold
$p^2 = - ( M_{\rm phys} + m)^2$ ) one has to include consistently
all terms which are exponentially suppressed in $\beta$. This is
beyond the scope of the present work.
\section{Discussion and Summary}
\noindent
In the present work we have performed variational calculations
for the `Wick-Cutkosky polaron' following the approach which was
developed previously \cite{RoSchr}. We have determined
different parametrizations as well as the full variational solution
for the retardation function which enters
the trial action.
Since the
nucleon mass is fixed on the pole of the 2-point function the value
of the functional which we minimize is of no physical significance
but only a measure of the quality of the corresponding ansatz.
This is in contrast to the familiar quantum-mechanical case where an
upper limit to the ground-state energy of the system is obtained.
However, our calculation fixes the variational parameters with which
we then can calculate other observables of physical interest.
One of these quantities was the residue on the pole of the propagator
for which we have compared numerically the results of the variational
calculations to first-order perturbation theory in Table~\ref{table: residue}.
For small couplings all results for the residue agree,
since in this case
the variational approach necessarily reduces to perturbation theory
independent of the value of the variational parameters.
What is rather remarkable is that for
larger couplings the three parametrizations of the profile function
in our variational approach
yield rather similar results, which are now of course different from
the perturbative calculation. As
we have seen, the `improved' and `variational' actions have the
same singularity behaviour, for small relative times, as
the true action, so here one might expect some similarity in the
results. This is, however, not true for the
`Feynman' parametrization, which has a rather different form, so its
agreement with the other two is not
preordained. This similarity is also exhibited in
Tables~\ref{table: var Feyn} and~\ref{table: var improved} for
$\lambda$ and $A(0)$, but of course not for the parameters $v$ and
$w$ which enter the respective profile functions and which are
`gauge' (i.e. reparametrization)-dependent quantities.
Also the critical coupling at
which real solutions ceased to exist was nearly identical in all
three parametrizations.
The similarity of the results for the different ans\"atze presumably
indicates that these results are not too
far away from the exact ones.
We were not only able to determine the critical coupling but also
to deduce qualitatively and quantitatively the width which the
particle acquires beyond the critical coupling. This was achieved
by finding {\it complex} solutions of
the variational equations first approximately by an analytic approach
and then exactly by an iterative method which closely followed
the analytic procedure. Although the present
approach does not describe tunneling (which we expect to render the
system unstable even at small coupling constants but with
exponentially small width \cite{AfDel} ) the polaron variational
method is clearly superior to any perturbative treatment in this
respect.
We have concentrated mostly, although not exclusively, on the on-shell
2-point function, i.e. the nucleon propagator.
This corresponds to the limit where the proper time goes
towards infinity. It is possible, however, to
go beyond the on-shell limit. This was necessary, for example,
for the calculation of the residue of the
2-point function in Section \ref{sec: resi}.
Nevertheless, the residue is a quantity
which is calculated at the pole and thus only
requires off-shell information from an infinitesimal region around
it. This has the effect that the variational
parameters for the calculation of the residue are the same as the
on-shell ones. As one moves a finite distance away
from the pole the variational parameters themselves become a
function of the off-shellness $p^2$
(see Section \ref{sec: var eq off mass shell}).
In conclusion, we think that the present variational approach
has yielded nonperturbative numerical results which look very
reasonable and are encouraging.
We therefore believe it worthwhile to try to extend it in several
ways. First, in a sequel to this work we will generalize
the present approach to the case with $n$ external mesons
and thereby study physical processes like meson production or meson
scattering from a nucleon.
This can be done by employing the quadratic trial function whose
parameters have been determined in the present work
on the pole of the 2-point function. Such a
`zeroth order' calculation is similar in spirit to a quantum
mechanical calculation in which
wave functions determined from minimizing the energy functional
are used to evaluate other observables. More demanding
is the consistent `first-order' variational calculation of
higher-order Green functions as this requires the amputation of
precisely the non-perturbative nucleon propagators which have been
determined in the present work. That this is indeed possible
will be demonstrated in another paper in this series.
Of course, finally we would like to apply these non-perturbative
techniques to theories which are of a more physical nature.
Among these one may mention scalar QED, the Walecka model
\cite{SeWa,Se} or QED. The latter two will require introduction of
Grassmann variables in order to deal with
spin in a path integral. As such, this should not pose a
fundamental problem. A greater challenge, however, is to extend
such an approach beyond the quenched approximation or
to nonabelian theories where the light degrees of freedom
cannot be integrated out analytically.
\vspace{2cm}
\noindent
{\bf Note added}\\
After completion of this work we became aware of the
pioneering work by K. Mano \cite{Mano} in which
similar methods are applied to the Wick-Cutkosky model with zero
meson mass. Mano uses the proper
time formulation, the quenched approximation and the Feynman
parametrization for the retardation function to derive a
variational function for the self-energy of a scalar nucleon
(the expression following his Eq. (6.18))
which is identical with our Eq. (\ref{var inequality for Mphys})
after proper identification of quantities
is made. However, for minimizing the variational function Mano sets
(in our nomenclature) $ v = w (1 + \epsilon )$, expands to second
order in $\epsilon$ and finds an instability of the ground state for
$ g_{\rm Mano}^2/ 8 \pi M^2 > 0.34 $. Note that
$ g_{\rm Mano} = \sqrt{\pi} \> g$ so that this translates into a
critical coupling
$\alpha_c \approx 0.22$ which is much smaller than the value which
we obtain from the exact minimization.
In addition, in the present work we consider non-zero meson masses,
employ more general retardation functions, and calculate residue and
width of the dressed particle.
\vspace{3cm}
\noindent{\bf Acknowledgements}
\noindent
We would like to thank Dina Alexandrou and Yang Lu for many helpful
discussions and Geert Jan van Oldenborgh for encouragement and
a careful reading of the manuscript.
\newpage
\noindent
{\Large\bf Appendix : An alternative expression for
$\Omega_{\rm var}$}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\vspace{0.5cm}
\noindent
Here we derive Eq. (\ref{Omega var expressed by g^2}) for the
kinetic term $\Omega$ when the variational equations are fulfilled.
We first perform an integration by parts
in the definition (\ref{Omega by A(E)}) of $\Omega$. The slow fall-off
of the variational profile function with $E$
\begin{eqnarray}
A(E) \> \buildrel E \to \infty \over \longrightarrow \> &1& + \> \>
\frac{g^2}{4 \pi^2} \frac{1}{E^2} \int_0^{\infty} d\sigma \>
\frac{\sin^2(E \sigma/2)}{\sigma^2} \> + \> ... \nonumber \\
= \> &1& + \> \>
\frac{g^2}{16 \pi} \frac{1}{E} \> + \> ...
\label{var A(E) for large E}
\end{eqnarray}
leads to a contribution at $E = \infty$
\begin{equation}
\Omega_{\rm var} = \frac{g^2}{8 \pi^2} + \frac{2}{\pi}
\int_0^{\infty} dE \> \left [ - E \> \frac{A'(E)}{A(E)} +
\frac{1 - A(E)}{A(E)} \> \right ] \> .
\label{Omega with part int}
\end{equation}
We then write the variational equation (\ref{var eq for A(E)}) for
$A(E)$ in the form
\begin{equation}
\frac{1}{A(E)} - 1 \> = \> - \frac{g^2}{4 \pi^2} \> \int_0^{\infty}
d\sigma \> \frac{\sin^2(E\sigma/2)}{E^2 A(E)} \>
\frac{1}{\mu^4(\sigma)} \> X(\sigma)
\end{equation}
where
\begin{equation}
X(\sigma) \> = \> \int_0^1 du \> \left [ 1
+ \frac{m^2}{2}
\mu^2(\sigma) \frac{1-u}{u} -\frac{\lambda^2 M^2_{\rm phys} \sigma^2}
{2 \mu^2(\sigma)} u \right ] \> e \> \left ( m \mu(\sigma),
\frac{\lambda M_{\rm phys} \sigma}{ \mu(\sigma)}, u \right) \>.
\label{X(sigma)}
\end{equation}
The integration over $E$ can now be performed giving a factor
$\pi \mu^2(\sigma)/4 $ due to Eq. (\ref{amu2(sigma)}). Therefore we
have
\begin{equation}
\int_0^{\infty} dE \> \left [\frac{1}{A(E)} - 1 \>\right ] =
\> - \frac{g^2}{16 \pi} \>
\int_0^{\infty} d\sigma \> \frac{1}{\mu^2(\sigma)} \> X(\sigma)
\end{equation}
which is just one term in the expression (\ref{Omega with part int})
for $\Omega$. To get the other one we differentiate the variational
equation for $A(E)$ with respect to $E$ and observe that
\begin{equation}
\frac{\partial}{\partial E} \> \sin^2 \left( \frac{E \sigma}{2}
\right ) \> = \> \frac{\sigma}{E} \frac{\partial}{\partial \sigma }
\> \sin^2 \left( \frac{E \sigma}{2} \right ) \>.
\end{equation}
One has to be careful not to interchange the $E$-integration and the
$\sigma$-differentiation. We therefore perform an integration by parts
and obtain
\begin{eqnarray}
- \> \int_0^{\infty} dE \> E \> \frac{A'(E)}{A(E)} \> &=&
\frac{g^2}{4 \pi^2}
\int_0^{\infty} dE \> \frac{1}{E^2 A(E)} \> \Biggl [ \> \sigma
X(\sigma)
\frac{\sin^2(E \sigma/2)}{\mu^4(\sigma)} \> \Biggl |^{\infty}_0
\nonumber \\
&+& \int_0^{\infty} d\sigma \> \frac{\sin^2(E \sigma/2)}
{\mu^4(\sigma)}
\> \left (2 + \frac{\partial}{\partial \sigma} \sigma \> \right )
\> X(\sigma) \> \Biggr ]\nonumber \\
&=& \frac{g^2}{16 \pi} \left [ \> - \lim_{\sigma \to 0}
\frac{\sigma X(\sigma)}{\mu^2(\sigma)} +
\int_0^{\infty} d\sigma \> \frac{1}{\mu^2(\sigma)} \> \left (
2 + \frac{\partial}{\partial \sigma} \sigma \right ) \> X(\sigma)
\right ] \> .
\end{eqnarray}
Note that the boundary term at $\sigma = 0$ gives a contribution
because of $ X(0) = 1 $. This contribution exactly cancels the
term $g^2/ 8 \pi^2$ in Eq. (\ref{Omega with part int}).
Combining both terms for $\Omega$
(which do not exist separately due to the slow
fall-off of $A(E)$ ) we obtain
\begin{eqnarray}
\int_0^{\infty} dE \> \left [ - E \frac{ A'(E)}{A(E)} \> + \>
\frac{1 - A(E)}{A(E)} \right ] &=& \frac{g^2}{16 \pi} \left [ - 1 +
\int_0^{\infty} d\sigma \> \frac{1}{\mu^2(\sigma)} \> \left (
\> 1 - \frac{\partial}{\partial \sigma}\sigma \> \right ) \> \>
X(\sigma) \right ] \nonumber \\
&=&
\frac{g^2}{16 \pi} \left [ - 1 +
\int_0^{\infty} d\sigma \> X(\sigma) \left ( 1 + \sigma
\frac{\partial}{\partial \sigma} \> \right ) \frac{1}{\mu^2(\sigma)}
\> \> \right ]
\end{eqnarray}
from which Eq. (\ref{Omega var expressed by g^2}) follows. In the
last line again an integration by parts has been performed but this
time there is no contribution from the boundary terms.
\newpage
package com.company; //Daniel Huan Nguyen
// The purpose of this program is to output all possible solutions to a Morse code segment
// if the user is not given any spaces.
// This first version is a brute-force algorithm. I will later make use of a look-up table of some sort.

import java.lang.String;
import java.io.*;
import java.util.Scanner;

public class MorseCodeDecoder {
    public static int solutionCounter = 0;
    public static int counter = 0;

    public static void main(String[] args) throws IOException {
        Scanner cin = new Scanner(System.in);
        MorseTree decoder = new MorseTree();

        // Here is where the user inputs the Morse code. Use only "." and "-".
        System.out.println("Please enter the morse code. Use only \".\" and \"-\". No spaces");
        String code = cin.next();

        // The filename is where all the possible solutions are stored.
        String filename = CreateFilename(code) + ".txt";

        // If the input was not valid, the program will stop and not create the file.
        if (!filename.equals(".txt")) {
            String empty = "";
            // time will keep track of how long the program takes to decode all possible solutions.
            long time;

            // Open and immediately close the file so that it exists and is truncated.
            FileOutputStream fout = null;
            try {
                fout = new FileOutputStream(filename);
            } finally {
                if (fout != null) {
                    fout.close();
                }
            }

            // Start the timer here.
            long start = System.nanoTime();
            MorseCode(code, empty, filename, decoder);
            // End the timer here.
            time = System.nanoTime() - start;
            // Output the time taken
            //System.out.println(time + " nanoseconds");

            FileWriter writer = new FileWriter(filename, true);
            BufferedWriter bfw = new BufferedWriter(writer);
            //bfw.write(time + " nanoseconds\n");
            bfw.write("Solutions : " + solutionCounter);
            bfw.newLine();
            bfw.write("Counter = " + counter);
            bfw.newLine();
            bfw.close();

            System.out.println("Solutions : " + solutionCounter);
            System.out.println("Counter : " + counter);
            System.out.println("Program Finished");
        } else {
            System.out.println("Please check that you inputted the Morse code correctly");
        }
    }

    // This function is where all of the Morse code segmentation occurs.
    public static void MorseCode(String code, String answer, String filename, MorseTree decoder) throws IOException {
        if (code.length() > 0) {
            String currentCode = ""; // The prefix of `code` currently being tried as one character.
            String tempAnswer;       // The answer so far plus the decoded version of currentCode.
            for (int i = 0; i < 5; i++) { // The loop bound is 5 because the maximum Morse code length is 5.
                if (code.length() > i) {  // Make sure the code segment is long enough to append another symbol.
                    currentCode = currentCode + code.substring(i, i + 1);
                    if (!(decoder.decode(currentCode).equals(""))) { // If currentCode is a valid character, decode it.
                        tempAnswer = answer + decoder.decode(currentCode);
                        counter++;
                        // Recurse on the rest of the undecoded Morse code along with the current answer.
                        MorseCode(code.substring(i + 1, code.length()), tempAnswer, filename, decoder);
                    }
                } else break;
            }
        } else {
            // A complete decoding has been found; append it to the output file.
            FileWriter writer = new FileWriter(filename, true);
            BufferedWriter bfw = new BufferedWriter(writer);
            bfw.write(answer);
            bfw.newLine();
            bfw.close();
            solutionCounter++;
        }
    }

    // Converts "." and "-" to "O" and "A" to create a text-file name;
    // returns "" if the input contains any other character.
    public static String CreateFilename(String code) {
        int length = code.length();
        String temp = "";
        for (int i = 0; i < length; i++) {
            if (code.substring(i, i + 1).equals(".")) {
                temp += "O";
            } else if (code.substring(i, i + 1).equals("-")) {
                temp += "A";
            } else
                return "";
        }
        return temp;
    }
}
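As a complement to the brute-force Java version above, here is a hypothetical Python sketch built on the look-up table idea mentioned in the header comment. The `MORSE` dictionary stands in for the `MorseTree` class; it covers only the 26 letters (codes of at most four symbols, rather than the five the Java loop allows for digits), and decodings are returned as a list instead of being written to a file.

```python
# Hypothetical table-driven counterpart of the Java decoder above.
# MORSE maps a code to its letter for the 26 standard letters A-Z.
MORSE = {
    '.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E',
    '..-.': 'F', '--.': 'G', '....': 'H', '..': 'I', '.---': 'J',
    '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N', '---': 'O',
    '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
    '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y',
    '--..': 'Z',
}

def decodings(code):
    """Return every way to split an unspaced Morse string into valid letters."""
    if not code:
        return ['']
    out = []
    for i in range(1, min(4, len(code)) + 1):  # letter codes have at most 4 symbols
        letter = MORSE.get(code[:i])
        if letter is not None:
            out.extend(letter + rest for rest in decodings(code[i:]))
    return out

print(sorted(decodings('.-.')))   # → ['AE', 'EN', 'ETE', 'R']
```

On the input `.-.` there are four decodings; assuming `MorseTree` encodes the same letters-only table, the Java program's `solutionCounter` would agree on this input.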
\section{Introduction}
The primary area of interest in this article is the study of patterns in
permutations. We will denote the set of length $n$ permutations by
$\SG_{n}$. Let $a_{1} a_{2} \ldots a_{k}$ be a sequence of $k$ distinct real
numbers. The \emph{reduction} of this sequence, which is denoted by
$\red(a_{1} \ldots a_{k})$, is the length $k$ permutation $\pi_{1} \ldots
\pi_{k} \in \SG_{k}$ such that order-relations are preserved (i.e., $\pi_{i}
< \pi_{j}$ if and only if $a_{i} < a_{j}$ for every $i$ and $j$). Given a
(permutation) pattern $\tau \in \SG_{k}$, we say that a permutation $\pi =
\pi_{1} \ldots \pi_{n} \in \SG_{n}$ \emph{contains} the pattern $\tau$ if
there exists $1 \leq i_{1} < i_{2} < \ldots < i_{k} \leq n$ such that
$\red(\pi_{i_{1}} \pi_{i_{2}} \ldots \pi_{i_{k}}) = \tau$. Each such
subsequence in $\pi$ will be called an \emph{occurrence} of the pattern
$\tau$. If $\pi$ contains no such subsequence, it is said to \emph{avoid}
the pattern $\tau$. Additionally, we will denote the number of occurrences
of the pattern $\tau$ in permutation $\pi$ by $N_{\tau}(\pi)$ (e.g., $\pi$
avoids the pattern $\tau$ if and only if $N_{\tau}(\pi) = 0$).
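These definitions translate directly into a brute-force computation of $N_{\tau}(\pi)$, convenient for checking small cases. A short sketch (it inspects all $\binom{n}{k}$ subsequences, so it is only practical for small $n$):

```python
from itertools import combinations, permutations

def reduction(seq):
    """red(a_1 ... a_k): replace each entry by its rank (1-based) within the sequence."""
    order = sorted(seq)
    return tuple(order.index(a) + 1 for a in seq)

def occurrences(pi, tau):
    """N_tau(pi): the number of subsequences of pi whose reduction equals tau."""
    k = len(tau)
    return sum(1 for idx in combinations(range(len(pi)), k)
               if reduction([pi[i] for i in idx]) == tuple(tau))

def avoiders(n, tau):
    """S_n(tau): all permutations of length n that avoid tau."""
    return [p for p in permutations(range(1, n + 1))
            if occurrences(p, tau) == 0]

print(occurrences((2, 3, 1), (1, 2)))   # → 1 (only the subsequence 2, 3)
print(len(avoiders(4, (1, 3, 2))))      # → 14, the Catalan number C_4
```

For instance, `len(avoiders(n, (1, 3, 2)))` returns the $n$-th Catalan number, in line with the classical enumeration of $132$-avoiders.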
For any pattern $\tau$ and integer $n \geq 0$, we define the set
\begin{equation}
\SG_{n}(\tau) := \{ \pi \in \SG_{n} \; : \; \pi \text{ avoids the pattern } \tau \}
\end{equation}
and also define $s_{n}(\tau) := | \SG_{n}(\tau) |$. The patterns $\sigma$ and $\tau$ are said to be \emph{Wilf-equivalent} if $s_{n}(\sigma) = s_{n}(\tau)$ for all $n \geq 0$. We may also consider the more general set
\begin{equation}
\SG_{n}(\tau,r) := \{ \pi \in \SG_{n} \; : \; \pi \text{ contains exactly } r \text{ occurrences of } \tau \}.
\end{equation}
We will analogously define $s_{n}(\tau,r) := | \SG_{n}(\tau,r) |$.
A classical problem in this area is to find an enumeration for these sets or, at the least, to study properties of the generating function encoding the enumerating sequence (for example, is it rational, algebraic, or holonomic?). However, it is not even known whether these generating functions are always holonomic. In general, the enumeration problem gets very difficult very quickly. Patterns up to length $3$ are well-understood, but there are basic unresolved questions even for length $4$ patterns. For example, it is known that there are three Wilf-equivalence classes for length $4$ patterns: $1234$, $1324$, and $1342$. While the enumeration problems have been solved for $1234$ and $1342$, no exact enumeration (or even asymptotics) is known for $1324$.
A (probabilistic) variation of this problem was posed by Joshua Cooper \cite{Cooper}: Given two (permutation) patterns $\sigma$ and $\tau$, what is the expected number of copies of $\sigma$ in a permutation chosen uniformly at random from $\SG_{n}(\tau)$? We note that if the enumeration of $\SG_{n}(\tau)$ is known, this question is equivalent to counting the total number of occurrences of $\sigma$ in permutations from $\SG_{n}(\tau)$, or put more precisely, to compute
\begin{equation}
T_{n}(\sigma, \tau) := \mathop{\sum} \limits_{\pi \in \SG_{n}(\tau)} {N_{\sigma}(\pi)}.
\end{equation}
B{\'o}na first addressed the question for $\tau = 132$ when $\sigma$ is either the increasing or decreasing permutation in \cite{Bona2}. He shows how to derive the generating functions for $T_{n}(1 2 \ldots k, 132)$ and $T_{n}(k \ldots 2 1, 132)$, the total number of occurrences of $1 2 \ldots k$ in $\SG_{n}(\tau)$ and occurrences of $k \ldots 2 1$ in $\SG_{n}(\tau)$, respectively. In \cite{Bona4}, B{\'o}na also shows that $T_{n}(213, 132) = T_{n}(231, 132) = T_{n}(312, 132)$ for all $n$ and provides an explicit formula for them. Rudolph \cite{Rudolph} also proves some conditions on when two patterns, say $p$ and $q$, occur equally frequently in $\SG_{n}(132)$ (i.e., $T_{n}(p,132) = T_{n}(q,132)$ for all $n$).
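B{\'o}na's equalities are easy to confirm by brute force for small $n$; the sketch below recomputes $T_{n}(\sigma, 132)$ directly from the definition for $\sigma \in \{213, 231, 312\}$ (it is exhaustive over $\SG_{n}$, so it is only feasible for small $n$):

```python
from itertools import combinations, permutations

def red(seq):
    order = sorted(seq)
    return tuple(order.index(a) + 1 for a in seq)

def N(pi, tau):
    # number of occurrences of the pattern tau in pi
    k = len(tau)
    return sum(red([pi[i] for i in c]) == tau
               for c in combinations(range(len(pi)), k))

def T(n, sigma, tau):
    # total occurrences of sigma over all tau-avoiding permutations of length n
    return sum(N(p, sigma) for p in permutations(range(1, n + 1))
               if N(p, tau) == 0)

for n in range(1, 7):
    vals = [T(n, s, (1, 3, 2)) for s in ((2, 1, 3), (2, 3, 1), (3, 1, 2))]
    print(n, vals)   # the three totals agree for every n, as the theorem asserts
```

For $n \le 6$ the three totals agree, in line with B{\'o}na's result.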
In \cite{Homberger}, Homberger answers the analogous question when $\tau = 123$ and shows that there are three non-trivial cases to consider: $T_{n}(132, 123)$, $T_{n}(231,123)$, and $T_{n}(321,123)$. He finds generating functions and explicit formulas for each one.
We will consider a more general problem. Given the pattern $\tau$, suppose that a permutation $\pi$ is chosen uniformly at random from $\SG_{n}(\tau)$. Given another pattern $\sigma$, we define the random variable $X_{\sigma}(\pi) := N_{\sigma}(\pi)$, the number of copies of $\sigma$ in $\pi$. Observe that $\mathbb{E}[X_{\sigma}]$, the expected value of $X_{\sigma}$ (i.e., the first moment of the random variable), equals $T_{n}(\sigma, \tau) / s_{n}(\tau)$. The focus of this paper is to study higher moments for $X_{\sigma}$ as well as mixed moments between two such random variables that count different patterns. We will consider the case where the permutation $\pi$ is randomly chosen from $\SG_{n}$ as well as some cases where $\pi$ is chosen from $\SG_{n}(\tau)$ (for various patterns $\tau$).
In this paper, we approach the problem from two different angles. On the one hand, we present (human-derived) results proving that the random variables are jointly asymptotically normal when the permutations are chosen at random from $\SG_{n}$. Unfortunately, the techniques do not naturally extend to the scenario where the permutations are chosen from $\SG_{n}(\tau)$. On the other hand, we present a computational approach that can quickly and easily compute many empirical moments for the general case (permutations chosen from $\SG_{n}(\tau)$). In addition, for the case where permutations are chosen from $\SG_{n}$, the computational approach can rigorously produce closed-form formulas for quite a few moments and mixed moments of the random variables.
This paper is organized as follows. In Section~\ref{SECfunceqn}, we review and outline the functional equations enumeration approach developed in \cite{BN-GWILF2, NZ-GWILF}. In Section~\ref{SECmoments}, we derive both rigorous results and empirical values for higher order moments and mixed moments for various random variables $X_{\sigma}$. In Section~\ref{SECasymom}, we show that the random variables are jointly asymptotically normal when the permutations are randomly chosen from $\SG_{n}$. In Section~\ref{SECconcl}, we conclude with some final remarks and observations.
\section{Enumerating with functional equations}\label{SECfunceqn}
For various patterns $\tau$, functional equations were derived for enumerating permutations with $r$ occurrences of $\tau$ in \cite{BN-GWILF2, NZ-GWILF, NoonZeil}. These functional equations were then used to derive enumeration algorithms. We briefly review the relevant results here. The curious reader can see \cite{BN-GWILF2, NZ-GWILF, NoonZeil} for more details.
\subsection{Functional equations for single patterns}
Given a (fixed) pattern $\tau$ and non-negative integer $n$, we define the polynomial:
\begin{equation}
f_{n}(\tau; \; t) := \mathop{\sum} \limits_{\pi \in \SG_{n}} {t^{N_{\tau}(\pi)}} .
\end{equation}
Recall that the coefficient of $t^{r}$ is exactly $s_{n}(\tau, r)$. For certain patterns $\tau$, a multi-variate polynomial $P_{n}(\tau; \; t; \; x_{1}, \ldots, x_{n})$ was defined so that $P_{n}(\tau; \; t; \; 1, \ldots, 1) = f_{n}(\tau; \; t)$ and that functional equations could be derived for the $P_{n}$ polynomial.
The pattern $\tau = 123$ was considered in \cite{NZ-GWILF, NoonZeil}, and the polynomial $P_{n}$ was defined to be:
\begin{equation}
P_{n}(123; \; t; \; x_{1}, \ldots, x_{n}) := \mathop{\sum} \limits_{\pi \in \SG_{n}} { \left( t^{N_{123}(\pi)} \mathop{\prod} \limits_{i=1}^{n} {x_{i}^{| \{ (a,b) \; : \; \pi_{a}=i<\pi_{b}, \; 1 \leq a < b \leq n \} |}} \right)} .
\end{equation}
It was shown that this $P_{n}$ satisfies the functional equation:
\begin{thm}
For the pattern $\tau = 123$,
\begin{equation}
P_{n}(123; \; t; \; x_{1}, \ldots, x_{n}) = \mathop{\sum} \limits_{i=1}^{n} {x_{i}^{n-i} \cdot P_{n-1}(123; \; t; \; x_{1}, \ldots, x_{i-1}, t x_{i+1}, \ldots, t x_{n})}. \tag{FE123} \label{FE123}
\end{equation}
\end{thm}
\noindent Since $P_{1}(123; \; t; \; x_{1}) = 1$, the functional equation can be used to recursively compute our desired quantity $P_{n}(123; \; t; \; 1, \ldots, 1) = f_{n}(123; \; t)$.
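Although the $P_{n}$ are polynomials in many variables, the recursion (\ref{FE123}) can also be exercised numerically: evaluating it at $x_{1} = \cdots = x_{n} = 1$ and a concrete value of $t$ yields the number $f_{n}(123; \; t)$, which can be compared against a direct count over $\SG_{n}$. A small sketch of this sanity check:

```python
from itertools import combinations, permutations

def P(args, t):
    # One step of (FE123): the i-th summand carries x_i^{n-i}, drops x_i,
    # and multiplies every later argument by t; P_1 = 1 is the base case.
    n = len(args)
    if n <= 1:
        return 1
    return sum(args[i] ** (n - 1 - i)
               * P(args[:i] + [t * a for a in args[i + 1:]], t)
               for i in range(n))

def f_recursive(n, t):
    return P([1] * n, t)

def f_direct(n, t):
    # sum over S_n of t^{N_123(pi)}, counted by brute force
    total = 0
    for p in permutations(range(n)):
        occ = sum(1 for c in combinations(range(n), 3)
                  if p[c[0]] < p[c[1]] < p[c[2]])
        total += t ** occ
    return total

for n in range(1, 6):
    assert f_recursive(n, 2) == f_direct(n, 2)
print(f_recursive(4, 0))   # → 14 = s_4(123), the Catalan number C_4
```

Setting $t = 0$ recovers the avoidance counts $s_{n}(123)$, the Catalan numbers.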
Similarly, in \cite{BN-GWILF2}, the polynomial $P_{n}$ was defined for the pattern $\tau = 132$ so that it satisfied the functional equation:
\begin{thm}
For the pattern $\tau = 132$,
\begin{equation}
P_{n}(132; \; t; \; x_{1}, \ldots, x_{n}) = \mathop{\sum} \limits_{i=1}^{n} {x_{1} x_{2} \ldots x_{i-1} \cdot P_{n-1}(132; \; t; \; x_{1}, \ldots, x_{i-1}, t x_{i+1}, \ldots, t x_{n})}. \tag{FE132} \label{FE132}
\end{equation}
\end{thm}
\noindent Again $P_{1}(132; \; t; \; x_{1}) = 1$, so the functional equation can be used to recursively compute our desired quantity $P_{n}(132; \; t; \; 1, \ldots, 1) = f_{n}(132; \; t)$.
The same was also done for the pattern $\tau = 231$ in \cite{BN-GWILF2}. Although $f_{n}(231; \; t) = f_{n}(132; \; t)$, redeveloping the approach directly for the pattern $231$ allows us to consider the patterns $132$ and $231$ simultaneously. For $231$, the polynomial $P_{n}$ was defined so that it satisfies the functional equation:
\begin{thm}
For the pattern $\tau = 231$,
\begin{equation}
P_{n}(231; \; t; \; x_{1}, \ldots, x_{n}) = \mathop{\sum} \limits_{i=1}^{n} {x_{1}^{0} x_{2}^{1} \ldots x_{i}^{i-1} \cdot P_{n-1}(231; \; t; \; x_{1}, \ldots, x_{i-1}, t x_{i} x_{i+1}, x_{i+2}, \ldots, x_{n})}. \tag{FE231} \label{FE231}
\end{equation}
\end{thm}
\noindent We again have that $P_{1}(231; \; t; \; x_{1}) = 1$, so the functional equation can be used to recursively compute our desired quantity $P_{n}(231; \; t; \; 1, \ldots, 1) = f_{n}(231; \; t)$.
The approach for the pattern $123$ was also extended to the pattern $\tau = 1234$ in \cite{NZ-GWILF}. The polynomial $P_{n}(1234; \; t; \; x_{1}, \ldots, x_{n}; \; y_{1}, \ldots, y_{n})$ was defined so that $P_{n}(1234; \; t; \; 1 \text{ [n times]}; \; 1 \text{ [n times]}) = f_{n}(1234; \; t)$ and in such a way that it satisfies the functional equation:
\begin{thm}
For the pattern $\tau = 1234$,
\begin{gather}
P_{n}(1234; \; t; \; x_{1}, \ldots, x_{n}; \; y_{1}, \ldots, y_{n}) =\notag \\
\mathop{\sum} \limits_{i=1}^{n} {y_{i}^{n-i} \cdot P_{n-1}(1234; \; t; \; x_{1}, \ldots, x_{i-1}, t x_{i+1}, \ldots, t x_{n}; \; y_{1}, \ldots, y_{i-1}, x_{i} y_{i+1}, \ldots, x_{i} y_{n})}. \tag{FE1234} \label{FE1234}
\end{gather}
\end{thm}
\noindent Since $P_{1}(1234; \; t; \; x_{1}; \; y_{1}) = 1$, the functional equation can be used to recursively compute our desired quantity $P_{n}(1234; \; t; \; 1 \text{ [n times]}; \; 1 \text{ [n times]}) = f_{n}(1234; \; t)$.
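The exponent-memoization idea extends to \eqref{FE1234} with two exponent tuples, since the substitutions $x_{j} \mapsto t x_{j}$ and $y_{j} \mapsto x_{i} y_{j}$ keep every argument a power of $t$. A Python sketch (our own names, not the package {\tt PDAV1234}):

```python
from functools import lru_cache

def add(p, q):
    """Add two polynomials in t, represented as coefficient lists."""
    if len(p) < len(q):
        p, q = q, p
    return [c + (q[i] if i < len(q) else 0) for i, c in enumerate(p)]

@lru_cache(maxsize=None)
def P1234(a, b):
    """P_n(1234; t; t^{a_1},...,t^{a_n}; t^{b_1},...,t^{b_n}) as coefficients in t."""
    n = len(a)
    if n <= 1:
        return (1,)
    total = [0]
    for i in range(n):
        ra = a[:i] + tuple(e + 1 for e in a[i + 1:])     # x_j -> t x_j   for j > i
        rb = b[:i] + tuple(e + a[i] for e in b[i + 1:])  # y_j -> x_i y_j for j > i
        shift = b[i] * (n - 1 - i)                       # the prefactor y_i^{n-i}
        total = add(total, [0] * shift + list(P1234(ra, rb)))
    return tuple(total)

def f1234(n):
    """f_n(1234; t) as a coefficient list in t."""
    return list(P1234((0,) * n, (0,) * n))
```

The constant term of `f1234(n)` recovers the enumeration $1, 2, 6, 23, 103, \ldots$ of $1234$-avoiding permutations.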
\subsection{Merging functional equations for multiple patterns}
It is also straightforward to consider multiple patterns simultaneously if their corresponding functional equations are known, as shown in \cite{BN-GWILF2}. For example, suppose that we want to consider the two patterns $\sigma = 123$ and $\tau = 132$ simultaneously. We can extend the $f_{n}$ polynomial in the natural way to:
\begin{equation}
f_{n}(\sigma, \tau; \; s, t) := \mathop{\sum} \limits_{\pi \in \SG_{n}} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)} }.
\end{equation}
In \cite{BN-GWILF2}, the polynomial $P_{n}(123, 132; \; s, t; \; x_{1}, \ldots, x_{n}; \; y_{1}, \ldots, y_{n})$ was defined so that
\begin{equation}
P_{n}(123, 132; \; s, t; \; 1 \text{ [n times]}; \; 1 \text{ [n times]}) = f_{n}(123, 132; \; s, t).
\end{equation}
The following functional equation was then derived:
\begin{thm} \label{THM123n132}
For the patterns $\sigma = 123$ and $\tau = 132$,
\begin{gather*}
P_{n}(123, 132; \; s, t; \; x_{1}, \ldots, x_{n}; \; y_{1}, \ldots, y_{n}) = \\
\mathop{\sum} \limits_{i=1}^{n} {x_{i}^{n-i} \cdot y_{1} y_{2} \ldots y_{i-1} \cdot P_{n-1}(123, 132; \; s, t; \; x_{1}, \ldots, x_{i-1}, s x_{i+1}, \ldots, s x_{n}; \; y_{1}, \ldots, y_{i-1}, t y_{i+1}, \ldots, t y_{n})}.
\end{gather*}
\end{thm}
\noindent Observe that we combined the functional equations for the individual patterns $123$ and $132$ by re-labeling the $x_{i}$ variables for $132$ to $y_{i}$, merging the reductions in the $P_{n-1}$ in the natural way, and multiplying the coefficient terms for the $P_{n-1}$ within the summands. We again have that $P_{1}(123, 132; \; s, t; \; x_{1}; \; y_{1}) = 1$, so the functional equation can be used to recursively compute our desired quantity $P_{n}(123, 132; \; s, t; \; 1 \text{ [n times]}; \; 1 \text{ [n times]}) = f_{n}(123, 132; \; s, t)$.
More generally, we can similarly extend $f_{n}(\tau; \; t)$ to $k$ different patterns $\tau_{1}, \tau_{2}, \ldots, \tau_{k}$ and the corresponding variables $t_{1}, t_{2}, \ldots, t_{k}$ as:
\begin{equation}
f_{n}(\tau_{1}, \tau_{2}, \ldots, \tau_{k}; \; t_{1}, t_{2}, \ldots, t_{k}) := \mathop{\sum} \limits_{\pi \in \SG_{n}} {t_{1}^{N_{\tau_{1}}(\pi)} t_{2}^{N_{\tau_{2}}(\pi)} \ldots t_{k}^{N_{\tau_{k}}(\pi)}}.
\end{equation}
\noindent The generalized polynomials $P_{n}$ can be similarly defined and analogous functional equations can be derived.
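For small $n$, the defining sum can also be evaluated by brute force, which is useful for testing the output of the functional equations. A naive Python sketch (helper names are ours; patterns are written as tuples):

```python
from itertools import combinations, permutations

def occurrences(pi, tau):
    """Number of occurrences of the pattern tau in the permutation pi."""
    k = len(tau)
    shape = tuple(sorted(tau).index(v) for v in tau)     # the reduction of tau
    return sum(1 for sub in combinations(pi, k)
               if tuple(sorted(sub).index(v) for v in sub) == shape)

def f(n, patterns):
    """f_n(tau_1, ..., tau_k; t_1, ..., t_k) as a dict {exponent tuple: coefficient}."""
    poly = {}
    for pi in permutations(range(1, n + 1)):
        e = tuple(occurrences(pi, tau) for tau in patterns)
        poly[e] = poly.get(e, 0) + 1
    return poly
```

For instance, in `f(4, ((1,2,3), (1,3,2)))` the coefficient of $s^{0} t^{0}$ is $8 = 2^{4-1}$, the number of permutations of length $4$ avoiding both $123$ and $132$.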
For example, suppose that we want to consider all length three patterns simultaneously. We will consider the patterns in lexicographical order (i.e., $\tau_{1} = 123, \; \tau_{2} = 132, \; \ldots, \; \tau_{6} = 321$). Our $f_{n}$ polynomial now becomes:
\begin{equation}
f_{n}(123, 132, \ldots, 321; \; t_{1}, t_{2}, \ldots, t_{6}) := \mathop{\sum} \limits_{\pi \in \SG_{n}} {t_{1}^{N_{123}(\pi)} t_{2}^{N_{132}(\pi)} \ldots t_{6}^{N_{321}(\pi)}}. \label{FS3}
\end{equation}
\noindent For notational convenience, the polynomial $f_{n}(123, 132, \ldots, 321; \; t_{1}, t_{2}, \ldots, t_{6})$ will be denoted by $f_{n}(\SG_{3}; \; t_{1}, \ldots, t_{6})$. In \cite{BN-GWILF2}, we discuss how to extend this to the generalized polynomial $P_{n}$ and derive analogous functional equations.
The previous polynomial could also be refined further to consider all length three patterns and the pattern $1234$ simultaneously. We will again consider the length three patterns in lexicographical order. Our $f_{n}$ polynomial now becomes:
\begin{equation}
f_{n}(1234, \SG_{3}; \; s, t_{1}, t_{2}, \ldots, t_{6}) := \mathop{\sum} \limits_{\pi \in \SG_{n}} {s^{N_{1234}(\pi)} t_{1}^{N_{123}(\pi)} t_{2}^{N_{132}(\pi)} \ldots t_{6}^{N_{321}(\pi)}}. \label{FI4S3}
\end{equation}
Just like the previous case, this polynomial can be extended to the analogous generalized polynomial $P_{n}$ and similar functional equations can be derived.
\subsection{Adapting multi-pattern functional equations}
The previously described $f_{n}$ polynomials (and their corresponding generalized $P_{n}$ polynomials and functional equations) can be easily specialized to consider a variety of scenarios. This allows us to quickly extract functional equations (and fast enumeration algorithms) in a number of cases.
The polynomial $f_{n}(\SG_{3}; \; t_{1}, \ldots, t_{6})$ (in Eq.~\ref{FS3}) can be specialized to consider any subset of $\SG_{3}$ by setting some $t_{i}$ variables to $1$. For example, $f_{n}(\SG_{3}; \; t_{1}, t_{2}, 1, 1, 1, 1)$ would give us the polynomial tracking $123$ and $132$ simultaneously. Setting $t_{i} = 1$ for $3 \leq i \leq 6$ in the generalized polynomial $P_{n}$ and its functional equation would reproduce Theorem~\ref{THM123n132}. This approach actually allows us to quickly compute the bi-variate polynomial
\begin{equation}
f_{n}(\sigma, \tau; \; s, t) = \mathop{\sum} \limits_{\pi \in \SG_{n}} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)}}
\end{equation}
\noindent for any patterns $\sigma, \tau \in \SG_{3}$ (with $\sigma \neq \tau$).
The polynomial $f_{n}(\SG_{3}; \; t_{1}, \ldots, t_{6})$ can actually be specialized in other ways. Suppose that we wanted to compute the bi-variate polynomial
\begin{equation}
\mathop{\sum} \limits_{\pi \in \SG_{n}(132)} {s^{N_{123}(\pi)} t^{N_{321}(\pi)}}.
\end{equation}
Observe that this is exactly $f_{n}(\SG_{3}; \; s, 0, 1, 1, 1, t)$. In other words, we may find the coefficient of $t_{2}^{0}$ in $f_{n}(\SG_{3}; \; t_{1}, \ldots, t_{6})$ and then set $t_{3} = t_{4} = t_{5} = 1$ and $t_{1} = s, t_{6} = t$. The same approach can be used to compute the polynomial
\begin{equation}
\mathop{\sum} \limits_{\pi \in \SG_{n}(132)} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)}}
\end{equation}
\noindent for any patterns $\sigma, \tau \in \SG_{3} \backslash \{ 132 \}$ (with $\sigma \neq \tau$).
The analogous specialization can be done to quickly compute
\begin{equation}
\mathop{\sum} \limits_{\pi \in \SG_{n}(123)} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)}}
\end{equation}
\noindent for any patterns $\sigma, \tau \in \SG_{3} \backslash \{ 123 \}$ (with $\sigma \neq \tau$). In general, for any $p \in \SG_{3}$, we can quickly compute
\begin{equation}
\mathop{\sum} \limits_{\pi \in \SG_{n}(p)} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)}}
\end{equation}
\noindent for any patterns $\sigma, \tau \in \SG_{3} \backslash \{ p \}$ (with $\sigma \neq \tau$).
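This specialization can be sanity-checked by brute force for small $n$: extract the coefficient of $t_{2}^{0}$ from the six-variable polynomial and compare it against direct enumeration of $\SG_{n}(132)$. A self-contained Python sketch (our own helper names):

```python
from itertools import combinations, permutations

def occurrences(pi, tau):
    """Number of occurrences of the pattern tau in the permutation pi."""
    k = len(tau)
    shape = tuple(sorted(tau).index(v) for v in tau)
    return sum(1 for sub in combinations(pi, k)
               if tuple(sorted(sub).index(v) for v in sub) == shape)

# S_3 in lexicographic order, so t_1 tracks 123, t_2 tracks 132, ..., t_6 tracks 321
S3 = [(1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1)]

def f_S3(n):
    """f_n(S_3; t_1, ..., t_6) as a dict {(e_1, ..., e_6): coefficient}."""
    poly = {}
    for pi in permutations(range(1, n + 1)):
        e = tuple(occurrences(pi, tau) for tau in S3)
        poly[e] = poly.get(e, 0) + 1
    return poly

def over_132_avoiders(n):
    """Sum over S_n(132) of s^{N_123} t^{N_321}: keep the coefficient of t_2^0,
    set t_3 = t_4 = t_5 = 1, and retain t_1 (= s) and t_6 (= t)."""
    poly = {}
    for e, c in f_S3(n).items():
        if e[1] == 0:                       # coefficient of t_2^0: 132-avoiders only
            key = (e[0], e[5])
            poly[key] = poly.get(key, 0) + c
    return poly
```

Setting $s = t = 1$ in `over_132_avoiders(n)` (i.e., summing the coefficients) recovers the Catalan number $|\SG_{n}(132)|$.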
We can also adapt the polynomial $f_{n}(1234, \SG_{3}; \; s, t_{1}, t_{2}, \ldots, t_{6})$ (from Eq.~\ref{FI4S3}) similarly. In particular, we can quickly compute the polynomial
\begin{equation}
\mathop{\sum} \limits_{\pi \in \SG_{n}(1234)} {s^{N_{\sigma}(\pi)} t^{N_{\tau}(\pi)}}
\end{equation}
\noindent for any patterns $\sigma, \tau \in \SG_{3}$ (with $\sigma \neq \tau$) by setting $s = 0$ (i.e.~extracting the coefficient of $s^{0}$) and setting the appropriate $t_{i}$'s to $1$ in $f_{n}(1234, \SG_{3}; \; s, t_{1}, t_{2}, \ldots, t_{6})$.
The previously discussed functional equation approaches have been implemented in the Maple packages {\tt PDSn}, {\tt PDAV132}, {\tt PDAV123}, and {\tt PDAV1234}.
\section{Computing moments for random permutations}\label{SECmoments}
\subsection{Moments for random permutations from $\SG_{n}$}\label{SECmomsSn}
The previously discussed functional equations approach allows us to compute both rigorous and empirical statistical properties of permutations.
For some fixed $n$ and fixed pattern $\sigma \in \SG_{k}$, suppose that a permutation $\pi \in \SG_{n}$ is chosen uniformly at random. Let the random variable $X = X_{\sigma}(\pi)$ be the number of occurrences of the pattern $\sigma$ in $\pi$. It is not hard to compute the expected value (i.e., the first moment of the random variable $X$): $\mathbb{E}[X] = {n \choose k}/k!$. More generally, it was shown in \cite{DZ-SMC1} that each of the higher moments of $X$ is a polynomial in $n$. In particular, the $r$-th moment about the mean of $X$, which is $\mathbb{E}[(X - \mathbb{E}[X])^{r}]$, is a polynomial of degree $\left\lfloor r(k - 1/2) \right\rfloor$ for $r \geq 2$.\footnote{This corrects a minor inaccuracy in \cite{DZ-SMC1}.}
For the patterns $\sigma$ that were discussed in the previous section, the functional equations approach allows us to quickly compute $f_{n}(\sigma; \; t)$ for any desired $n$. Observe that $f_{n}(\sigma; \; t)/n!$ gives us the polynomial where the coefficient of $t^{i}$ is the probability that a randomly chosen $\pi \in \SG_{n}$ will have exactly $i$ copies of $\sigma$. The important point is that we can (rigorously) find a closed-form expression (in $n$) for the higher order moments of $X$ by computing sufficiently many terms to fit the polynomial.
For example, it was shown in \cite{DZ-SMC1} that the exact expression for the second moment (about the mean) of the random variable $X_{123}$ (over $\SG_{n}$) is:
\begin{equation}
\frac{n (n-1) (n-2) (39 n^{2} + 102 n - 157)}{21600}
\end{equation}
\noindent and that the third moment (about the mean) of the random variable $X_{123}$ (over $\SG_{n}$) is:
\begin{equation}
\frac{n (n-1) (n-2) (1437 n^{4} + 5592 n^{3} - 11277 n^{2} - 33990 n + 34082)}{6350400}
\end{equation}
Similarly, the exact expression for the second moment (about the mean) of the random variable $X_{132}$ (over $\SG_{n}$) is:
\begin{equation}
\frac{n (n-1) (n-2) (21 n^{2} + 78 n + 77)}{21600}
\end{equation}
\noindent and that the third moment (about the mean) of the random variable $X_{132}$ (over $\SG_{n}$) is:
\begin{equation}
\frac{n (n-1) (n-2) (129 n^{4} + 3705 n^{3} + 5355 n^{2} + 8655 n + 11356)}{12700800}
\end{equation}
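These closed forms can be verified for small $n$ by exhaustive computation with exact rational arithmetic. A Python sketch (our own helper names, independent of the Maple packages):

```python
from fractions import Fraction
from itertools import combinations, permutations

def pattern_counts(n, tau):
    """The list of N_tau(pi) over all pi in S_n."""
    k = len(tau)
    shape = tuple(sorted(tau).index(v) for v in tau)
    counts = []
    for pi in permutations(range(1, n + 1)):
        counts.append(sum(1 for sub in combinations(pi, k)
                          if tuple(sorted(sub).index(v) for v in sub) == shape))
    return counts

def central_moment(counts, r):
    """Exact r-th moment about the mean of a uniformly random entry of counts."""
    mean = Fraction(sum(counts), len(counts))
    return sum((c - mean) ** r for c in counts) / len(counts)
```

For instance, `central_moment(pattern_counts(3, (1, 2, 3)), 2)` returns `Fraction(5, 36)`, agreeing with the first displayed formula at $n = 3$.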
We may also consider mixed moments for two patterns $\sigma$ and $\tau$. Suppose that a permutation $\pi$ is chosen uniformly at random from $\SG_{n}$, and again let the random variable $X_{\sigma}(\pi)$ be the number of occurrences of the pattern $\sigma$ in $\pi$ (and similarly for $X_{\tau}(\pi)$). It was shown in \cite{DZ-SMC1} that the mixed moments of the random variables $X_{\sigma}$ and $X_{\tau}$ (about their respective means) are also polynomials in $n$. This allows us to rigorously find closed-form expressions (in $n$) for the higher order mixed moments by computing enough terms to find the polynomial.
For example, the covariance of the two random variables $X_{123}$ and $X_{132}$ is:
\begin{equation}
\frac{n (n-1) (n-2) (18 n^{2} - 51 n - 109)}{21600}
\end{equation}
\noindent while the covariance of the two random variables $X_{123}$ and $X_{312}$ is:
\begin{equation}
- \frac{n (n-1) ( n-2) (39 n^{2} - 48 n - 7)}{43200}
\end{equation}
\noindent and the covariance of the two random variables $X_{123}$ and $X_{321}$ is:
\begin{equation}
- \frac{n (n-1) (n-2) (9 n^{2} + 12 n - 92)}{5400}
\end{equation}
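These covariance formulas can likewise be checked exhaustively for small $n$ with exact rational arithmetic; a Python sketch (helper names are ours):

```python
from fractions import Fraction
from itertools import combinations, permutations

def occ(pi, pat):
    """Number of occurrences of the pattern pat in the permutation pi."""
    k = len(pat)
    shape = tuple(sorted(pat).index(v) for v in pat)
    return sum(1 for sub in combinations(pi, k)
               if tuple(sorted(sub).index(v) for v in sub) == shape)

def covariance(n, sigma, tau):
    """Exact Cov(X_sigma, X_tau) for a uniformly random permutation in S_n."""
    pairs = [(occ(pi, sigma), occ(pi, tau)) for pi in permutations(range(1, n + 1))]
    N = len(pairs)
    mx = Fraction(sum(x for x, _ in pairs), N)
    my = Fraction(sum(y for _, y in pairs), N)
    return sum((x - mx) * (y - my) for x, y in pairs) / N
```

At $n = 3$ both $\mathrm{Cov}(X_{123}, X_{132})$ and $\mathrm{Cov}(X_{123}, X_{321})$ equal $-1/36$, matching the formulas above.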
\noindent Similar results for other random variables can be derived using the Maple packages available on the authors' website.
\subsection{Moments for random permutations from $\SG_{n}(\tau)$}
There has been a flurry of recent activity studying occurrences of patterns in the set of permutations avoiding specific patterns. Many of the recent articles focus on counting the total number of occurrences of a pattern in $\SG_{n}(132)$ or in $\SG_{n}(123)$. Some examples (as previously mentioned) include \cite{Bona2, Bona4, Homberger, Rudolph}. It is important to note that finding the total number of occurrences of pattern $\sigma$ in the set $\SG_{n}(\tau)$ is equivalent to picking a permutation uniformly at random from $\SG_{n}(\tau)$ and finding the expected value $\mathbb{E}[X_{\sigma}]$ (assuming that the enumeration of $\SG_{n}(\tau)$ is known).
In the previous section, we were able to rigorously derive closed-form
expressions for moments of the random variable $X_{\sigma}(\pi)$ when the
permutation $\pi$ was randomly chosen from $\SG_{n}$. While we currently
cannot derive similar rigorous results for random permutations from
$\SG_{n}(\tau)$, we can still compute numerical moments for a variety of
cases. Interestingly, a number of such random variables appear to \emph{not}
be asymptotically normal (as opposed to when $\pi \in \SG_{n}$, where
Mikl\'os B\'ona showed that such random variables are asymptotically normal
\cite{Bona}, see also Section \ref{SECasymom}).
\FloatBarrier
\subsubsection{Permutations from $\SG_{n}(132)$}
Suppose a permutation is chosen uniformly at random from $\SG_{n}(132)$. Using the Maple packages that accompany this article, we can compute many empirical moments. The expected values of the random variables $X_{123}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t132mom1}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$123$ & $0$ & $0$ & $0.200$ & $0.714$ & $1.619$ & $2.970$ & $4.809$ & $7.171$ & $10.083$ & $13.570$\\
\hline
$312$ & $0$ & $0$ & $0.200$ & $0.786$ & $1.929$ & $3.790$ & $6.513$ & $10.244$ & $15.115$ & $21.253$\\
\hline
$321$ & $0$ & $0$ & $0.200$ & $0.929$ & $2.595$ & $5.667$ & $10.653$ & $18.097$ & $28.572$ & $42.672$\\
\hline
\end{tabular}
\caption{Expected values (first moments) of $X_{123}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(132)$.}
\label{tab:t132mom1}
\end{table}
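For small $n$, entries of this table can be reproduced by direct enumeration. A Python sketch (our own helper names, not the accompanying Maple packages):

```python
from fractions import Fraction
from itertools import combinations, permutations

def occ(pi, pat):
    """Number of occurrences of the pattern pat in the permutation pi."""
    k = len(pat)
    shape = tuple(sorted(pat).index(v) for v in pat)
    return sum(1 for sub in combinations(pi, k)
               if tuple(sorted(sub).index(v) for v in sub) == shape)

def expected_in_avoiders(n, avoided, pat):
    """Exact E[X_pat(pi)] for pi drawn uniformly from S_n(avoided)."""
    avoiders = [pi for pi in permutations(range(1, n + 1)) if occ(pi, avoided) == 0]
    return Fraction(sum(occ(pi, pat) for pi in avoiders), len(avoiders))
```

For example, `expected_in_avoiders(3, (1, 3, 2), (1, 2, 3))` returns `Fraction(1, 5)`, matching the $n = 3$ column: exactly one of the five $132$-avoiders of length $3$ contains $123$, and it contains it once.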
\medskip
The second moments (about the mean) of the random variables $X_{123}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t132mom2}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$123$ & $0$ & $0$ & $0.160$ & $1.204$ & $4.617$ & $12.757$ & $28.933$ & $57.463$ & $103.720$ & $174.140$\\
\hline
$312$ & $0$ & $0$ & $0.160$ & $1.026$ & $3.733$ & $10.213$ & $23.392$ & $47.403$ & $87.787$ & $151.710$\\
\hline
$321$ & $0$ & $0$ & $0.160$ & $1.352$ & $6.003$ & $19.101$ & $49.313$ & $110.180$ & $221.360$ & $409.960$\\
\hline
\end{tabular}
\caption{Second moments (about the mean) of $X_{123}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(132)$.}
\label{tab:t132mom2}
\end{table}
\medskip
Data for the higher moments can be found on the authors' websites. For example, the $r$-th standardized moments for $X_{312}$ when $3 \leq r \leq 6$ and $15 \leq n \leq 20$ can be found in Table~\ref{tab:t132nonnorm}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$r$-th moment & \cheader{$n=15$} & \cheader{$n=16$} & \cheader{$n=17$} & \cheader{$n=18$} & \cheader{$n=19$} & \cheader{$n=20$}\\
\hline
$r=3$ & $0.41867$ & $0.42461$ & $0.43073$ & $0.43690$ & $0.44303$ & $0.44906$\\
\hline
$r=4$ & $2.92652$ & $2.95682$ & $2.98412$ & $3.00889$ & $3.03152$ & $3.05231$\\
\hline
$r=5$ & $3.59958$ & $3.69377$ & $3.78619$ & $3.87633$ & $3.96389$ & $4.04860$\\
\hline
$r=6$ & $14.79293$ & $15.24562$ & $15.66679$ & $16.06007$ & $16.42853$ & $16.77483$\\
\hline
\end{tabular}
\caption{$r$-th standardized moments for $X_{312}(\pi)$ for $3 \leq r \leq 6$, where $\pi$ is chosen uniformly at random from $\SG_{n}(132)$.}
\label{tab:t132nonnorm}
\end{table}
\medskip
It is interesting to note that the random variable $X_{312}$ does not appear to be asymptotically normal since the $3$-rd and $5$-th standardized moments appear to be increasing (as opposed to going to $0$ as a normal distribution would) and the $6$-th moment appears to be larger than $15$ (the value for a normal distribution).
This approach can also be used to consider the mixed $(i,j)$ moments. For example, the mixed $(i,j)$ moments of the random variables $X_{123}$ and $X_{321}$ for $3 \leq n \leq 10$ can be found in Table~\ref{tab:t132mixmom}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$(i,j)$ & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$(1,1)$ & $-0.040$ & $-0.663$ & $-3.392$ & $-11.162$ & $-28.714$ & $-62.970$ & $-123.370$ & $-222.180$\\
\hline
$(1,2)$ & $-0.024$ & $-0.350$ & $-1.445$ & $-0.404$ & $21.587$ & $127.800$ & $478.610$ & $1417.300$\\
\hline
$(2,1)$ & $-0.024$ & $-0.644$ & $-6.657$ & $-38.272$ & $-154.230$ & $-491.000$ & $-1322.000$ & $-3140.400$\\
\hline
$(2,2)$ & $0.011$ & $1.288$ & $33.666$ & $382.200$ & $2650.400$ & $13264.000$ & $52628.000$ & $175500.000$\\
\hline
\end{tabular}
\caption{Mixed $(i,j)$ moments of $X_{123}(\pi)$ and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(132)$.}
\label{tab:t132mixmom}
\end{table}
\medskip
Analogous data and outputs can be found on the authors' websites.
\FloatBarrier
\subsubsection{Permutations from $\SG_{n}(123)$}
Suppose a permutation is chosen uniformly at random from $\SG_{n}(123)$. Using the Maple packages that accompany this article, we can compute many empirical moments. The expected values of the random variables $X_{132}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t123mom1}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$132$ & $0$ & $0$ & $0.200$ & $0.643$ & $1.357$ & $2.364$ & $3.678$ & $5.314$ & $7.281$ & $9.589$\\
\hline
$312$ & $0$ & $0$ & $0.200$ & $0.786$ & $1.929$ & $3.788$ & $6.513$ & $10.244$ & $15.115$ & $21.253$\\
\hline
$321$ & $0$ & $0$ & $0.200$ & $1.143$ & $3.429$ & $7.697$ & $14.618$ & $24.884$ & $39.208$ & $58.317$\\
\hline
\end{tabular}
\caption{Expected values (first moments) of $X_{132}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(123)$.}
\label{tab:t123mom1}
\end{table}
\medskip
The second moments (about the mean) of the random variables $X_{132}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t123mom2}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$132$ & $0$ & $0$ & $0.160$ & $0.801$ & $2.468$ & $5.959$ & $12.344$ & $22.978$ & $39.506$ & $63.877$\\
\hline
$312$ & $0$ & $0$ & $0.160$ & $0.740$ & $2.114$ & $4.804$ & $9.532$ & $17.303$ & $29.501$ & $48.000$\\
\hline
$321$ & $0$ & $0$ & $0.160$ & $1.122$ & $4.293$ & $12.423$ & $30.287$ & $65.419$ & $128.910$ & $236.250$\\
\hline
\end{tabular}
\caption{Second moments (about the mean) of $X_{132}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(123)$.}
\label{tab:t123mom2}
\end{table}
\medskip
Data for the higher moments can be found on the authors' websites. For example, the $r$-th standardized moments for $X_{132}$ when $3 \leq r \leq 6$ and $15 \leq n \leq 20$ can be found in Table~\ref{tab:t123nonnorm}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$r$-th moment & \cheader{$n=15$} & \cheader{$n=16$} & \cheader{$n=17$} & \cheader{$n=18$} & \cheader{$n=19$} & \cheader{$n=20$}\\
\hline
$r=3$ & $1.53492$ & $1.54020$ & $1.54458$ & $1.54823$ & $1.55129$ & $1.55385$\\
\hline
$r=4$ & $6.28717$ & $6.33967$ & $6.38469$ & $6.42356$ & $6.45735$ & $6.48687$\\
\hline
$r=5$ & $23.59568$ & $23.99423$ & $24.34048$ & $24.64315$ & $24.90923$ & $25.14433$\\
\hline
$r=6$ & $108.90240$ & $111.90699$ & $114.55548$ & $116.90184$ & $118.99022$ & $120.85698$\\
\hline
\end{tabular}
\caption{$r$-th standardized moments for $X_{132}(\pi)$ for $3 \leq r \leq 6$, where $\pi$ is chosen uniformly at random from $\SG_{n}(123)$.}
\label{tab:t123nonnorm}
\end{table}
\medskip
It is interesting to note that the random variable $X_{132}$ does not appear to be asymptotically normal since the $3$-rd and $5$-th standardized moments appear to be increasing (as opposed to going to $0$ as a normal distribution would), the $4$-th moment appears to be larger than $3$ (the value for a normal distribution), and the $6$-th moment appears to be substantially larger than $15$ (the value for a normal distribution).
This approach can also be used to consider the mixed $(i,j)$ moments. For example, the mixed $(i,j)$ moments of the random variables $X_{132}$ and $X_{312}$ for $3 \leq n \leq 10$ can be found in Table~\ref{tab:t123mixmom}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$(i,j)$ & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$(1,1)$ & $-0.040$ & $-0.219$ & $-0.641$ & $-1.362$ & $-2.332$ & $-3.326$ & $-3.890$ & $-3.269$\\
\hline
$(1,2)$ & $-0.024$ & $-0.099$ & $-0.039$ & $0.841$ & $3.917$ & $11.254$ & $25.372$ & $48.890$\\
\hline
$(2,1)$ & $-0.024$ & $-0.386$ & $-2.261$ & $-8.566$ & $-24.874$ & $-60.099$ & $-126.620$ & $-239.570$\\
\hline
$(2,2)$ & $0.011$ & $0.551$ & $6.309$ & $39.592$ & $172.880$ & $592.420$ & $1709.800$ & $4350.100$\\
\hline
\end{tabular}
\caption{Mixed $(i,j)$ moments of $X_{132}(\pi)$ and $X_{312}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(123)$.}
\label{tab:t123mixmom}
\end{table}
\medskip
Analogous data and outputs can be found on the authors' websites.
\FloatBarrier
\subsubsection{Permutations from $\SG_{n}(1234)$}
Suppose a permutation is chosen uniformly at random from $\SG_{n}(1234)$. Using the Maple packages that accompany this article, we can compute many empirical moments. The expected values of the random variables $X_{123}$, $X_{132}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t1234mom1}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$123$ & $0$ & $0$ & $0.167$ & $0.522$ & $1.049$ & $1.739$ & $2.592$ & $3.611$ & $4.796$ & $6.153$\\
\hline
$132$ & $0$ & $0$ & $0.167$ & $0.696$ & $1.709$ & $3.279$ & $5.457$ & $8.283$ & $11.789$ & $16.004$\\
\hline
$312$ & $0$ & $0$ & $0.167$ & $0.696$ & $1.796$ & $3.684$ & $6.575$ & $10.679$ & $16.202$ & $23.341$\\
\hline
$321$ & $0$ & $0$ & $0.167$ & $0.696$ & $1.942$ & $4.335$ & $8.344$ & $14.466$ & $23.223$ & $35.158$\\
\hline
\end{tabular}
\caption{Expected values (first moments) of $X_{123}(\pi)$, $X_{132}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(1234)$.}
\label{tab:t1234mom1}
\end{table}
\medskip
The second moments (about the mean) of the random variables $X_{123}$, $X_{132}$, $X_{312}$, and $X_{321}$ for $1 \leq n \leq 10$ can be found in Table~\ref{tab:t1234mom2}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
Pattern & \cheader{$n=1$} & \cheader{$n=2$} & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$123$ & $0$ & $0$ & $0.139$ & $0.510$ & $1.172$ & $2.236$ & $3.863$ & $6.257$ & $9.654$ & $14.324$\\
\hline
$132$ & $0$ & $0$ & $0.139$ & $0.820$ & $2.828$ & $7.332$ & $15.959$ & $30.863$ & $54.767$ & $91.002$\\
\hline
$312$ & $0$ & $0$ & $0.139$ & $0.820$ & $2.667$ & $6.524$ & $13.484$ & $24.911$ & $42.468$ & $68.157$\\
\hline
$321$ & $0$ & $0$ & $0.139$ & $0.994$ & $3.764$ & $10.566$ & $24.936$ & $52.338$ & $100.740$ & $181.280$\\
\hline
\end{tabular}
\caption{Second moments (about the mean) of $X_{123}(\pi)$, $X_{132}(\pi)$, $X_{312}(\pi)$, and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(1234)$.}
\label{tab:t1234mom2}
\end{table}
\medskip
Data for the higher moments can be found on the authors' websites. For example, the $r$-th standardized moments for $X_{123}$ when $3 \leq r \leq 6$ and $13 \leq n \leq 18$ can be found in Table~\ref{tab:t1234nonnorm}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$r$-th moment & \cheader{$n=13$} & \cheader{$n=14$} & \cheader{$n=15$} & \cheader{$n=16$} & \cheader{$n=17$} & \cheader{$n=18$}\\
\hline
$r=3$ & $1.14140$ & $1.16076$ & $1.17518$ & $1.18585$ & $1.19365$ & $1.19926$\\
\hline
$r=4$ & $5.14732$ & $5.21356$ & $5.26297$ & $5.29971$ & $5.32683$ & $5.34656$\\
\hline
$r=5$ & $16.61123$ & $17.07925$ & $17.43934$ & $17.71522$ & $17.92523$ & $18.08348$\\
\hline
$r=6$ & $74.59126$ & $77.40043$ & $79.60569$ & $81.33022$ & $82.67201$ & $83.70841$\\
\hline
\end{tabular}
\caption{$r$-th standardized moments for $X_{123}(\pi)$ for $3 \leq r \leq 6$, where $\pi$ is chosen uniformly at random from $\SG_{n}(1234)$.}
\label{tab:t1234nonnorm}
\end{table}
\medskip
It is interesting to note that the random variable $X_{123}$ does not appear to be asymptotically normal since the $3$-rd and $5$-th standardized moments appear to be increasing (as opposed to going to $0$ as a normal distribution would), the $4$-th moment appears to be larger than $3$ (the value for a normal distribution), and the $6$-th moment appears to be substantially larger than $15$ (the value for a normal distribution).
This approach can also be used to consider the mixed $(i,j)$ moments. For example, the mixed $(i,j)$ moments of the random variables $X_{123}$ and $X_{321}$ for $3 \leq n \leq 10$ can be found in Table~\ref{tab:t1234mixmom}.
\begin{table}[!h]
\centering
\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|}
\hline
$(i,j)$ & \cheader{$n=3$} & \cheader{$n=4$} & \cheader{$n=5$} & \cheader{$n=6$} & \cheader{$n=7$} & \cheader{$n=8$} & \cheader{$n=9$} & \cheader{$n=10$}\\
\hline
$(1,1)$ & $-0.028$ & $-0.363$ & $-1.298$ & $-3.258$ & $-6.892$ & $-13.121$ & $-23.171$ & $-38.611$\\
\hline
$(1,2)$ & $-0.019$ & $-0.266$ & $-1.674$ & $-5.958$ & $-15.301$ & $-31.716$ & $-55.546$ & $-82.648$\\
\hline
$(2,1)$ & $-0.019$ & $-0.166$ & $-0.505$ & $-1.531$ & $-4.798$ & $-13.664$ & $-34.352$ & $-77.387$\\
\hline
$(2,2)$ & $0.007$ & $0.386$ & $4.969$ & $33.937$ & $159.600$ & $593.990$ & $1880.700$ & $5274.100$\\
\hline
\end{tabular}
\caption{Mixed $(i,j)$ moments of $X_{123}(\pi)$ and $X_{321}(\pi)$, where $\pi$ is chosen uniformly at random from $\SG_{n}(1234)$.}
\label{tab:t1234mixmom}
\end{table}
\medskip
Analogous data and outputs can be found on the authors' websites.
\FloatBarrier
\section{Asymptotic normality of pattern counts}\label{SECasymom}
In this section we let $\pi$ be a permutation
chosen uniformly at random from $\SG_n$
(without any condition) and we study the joint distribution of the random
variables $\xn\gs:=X_\gs(\pi)$, the number of copies of $\gs$ in $\pi$, for
different patterns $\gs\in\SG_* :=\bigcup_{k=1}^\infty \SG_k$.
We consider asymptotics as $n\to\infty$ for (one or several) fixed $\gs$.
Each $\xn\gs$ has an asymptotic normal distribution, as was shown by B\'ona
\cite{Bona} (see also \cite{Bona3}).
We give another (perhaps simpler) proof of this; moreover, we extend the
result to joint asymptotic normality for several patterns $\gs$.
The asymptotic variances and covariances depend on the patterns in a
slightly complicated way, so we begin with some definitions.
For $k\ge 1$ and $1\le i \le k$,
define
\begin{equation}\label{g}
g_{k,i}(x) := \binom{k-1}{i-1} x^{i-1} (1-x)^{k-i}.
\end{equation}
For a permutation $\gs\in\SG_k$, define
\begin{equation}\label{G}
G_\gs(x,y) := \frac{1}{(k-1)!^2}
\left(\sum_{i=1}^k g_{k,i}(x) g_{k,\gs(i)}(y) -\frac 1k \right).
\end{equation}
Let $Z_\gs$, $\gs\in\SG_*$, be jointly normal random variables with
$\E Z_\gs=0$ and (co)variances
\begin{equation}\label{gS}
\Cov(Z_\gs,Z_\gt) =
\gS_{\gs,\gt}:=
\innprod{G_\gs,G_\gt}_{L^2(\oi^2)}
:=\intoi\intoi G_\gs(x,y) G_\gt(x,y)\dd x\dd y.
\end{equation}
(Such normal random variables exist since the matrix
$(\gS_{\gs,\gt})_{\gs,\gt}$ is non-negative definite. As is well
known, the joint distribution is uniquely defined by the means and covariances.)
We denote the length of a permutation $\gs$ by $|\gs|$, and let $\dto$ denote
convergence in distribution of random variables.
\begin{theorem}\label{T1}
For every pattern $\gs\in\SG_*$, as $n\to\infty$,
\begin{equation}\label{t1a}
\frac{\xn\gs-\E\xn\gs}{n^{|\gs|-1/2} }
=
\frac{\xn\gs-\frac1{|\gs|!}\binom{n}{|\gs|}}{n^{|\gs|-1/2} }
\dto Z_\gs.
\end{equation}
Moreover, this holds jointly for any finite family of patterns $\gs$.
Furthermore, all (joint) moments converge;
in particular, for any permutations $\gs,\gt$
\begin{equation}\label{t1b}
\frac{ \Cov(\xn\gs,\xn\gt)}{n^{|\gs|+|\gt|-1}} \to \gS_{\gs,\gt}.
\end{equation}
\end{theorem}
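The covariances $\gS_{\gs,\gt}$ can be evaluated exactly from \eqref{gS} using the beta-integral identities \eqref{intg} and \eqref{intgg}; the resulting double-binomial expression (our own restatement of that computation) can be cross-checked against Section~\ref{SECmomsSn}. A Python sketch with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def Sigma(sigma, tau):
    """The asymptotic covariance Sigma_{sigma,tau} of Theorem T1, in the closed
    double-binomial form obtained from the identities (intg) and (intgg).
    Patterns are 1-based tuples; k = |sigma|, l = |tau|."""
    k, l = len(sigma), len(tau)
    s = Fraction(0)
    for i in range(1, k + 1):
        for j in range(1, l + 1):
            s += (comb(i + j - 2, i - 1) * comb(k + l - i - j, k - i)
                  * comb(sigma[i - 1] + tau[j - 1] - 2, sigma[i - 1] - 1)
                  * comb(k + l - sigma[i - 1] - tau[j - 1], k - sigma[i - 1]))
    return s / factorial(k + l - 1) ** 2 - Fraction(k * l, (factorial(k) * factorial(l)) ** 2)
```

For instance, `Sigma((1, 2, 3), (1, 2, 3))` returns `Fraction(13, 7200)` $= 39/21600$, the leading coefficient (after dividing by $n^{5}$) of the exact variance polynomial for $X_{123}$ in Section~\ref{SECmomsSn}, exactly as \eqref{t1b} predicts; likewise `Sigma((1, 2, 3), (1, 3, 2))` returns `Fraction(1, 1200)` $= 18/21600$, matching the covariance polynomial of $X_{123}$ and $X_{132}$.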
Before giving the proof we give some comments.
First, as noted above,
if $\gs$ has length $|\gs|=k$,
\begin{equation}
\E \xn\gs = \binom nk \frac1{k!}
\sim \frac1{k!^2} n^k,
\qquad
\text{as $n\to\infty$}.
\end{equation}
The asymptotic covariances $\gS_{\gs,\gt}$ can be computed explicitly.
By a beta integral,
\begin{equation}\label{intg}
\int_0^1 g_{k,i}(x)\, dx
=
\binom{k-1}{i-1} \frac{\Gamma(i)\Gamma(k-i+1)}{\Gamma(k+1)}
=\frac 1k,
\end{equation}
and similarly, for any $k,\ell\ge1$ and $1\le i\le k$, $1\le j\le\ell$,
\begin{equation} \label{intgg}
\begin{split}
\int_0^1 g_{k,i}(x) g_{\ell,j}(x)\, dx
&=
\binom{k-1}{i-1} \binom{\ell-1}{j-1}
\frac{\Gamma(i+j-1)\Gamma(k+\ell-i-j+1)}{\Gamma(k+\ell)}
\\&
= \frac{(k-1)!\, (\ell-1)!}{(k+\ell-1)!}
\binom{i+j-2}{i-1} \binom{k+\ell-i-j}{k-i} .
\end{split}
\end{equation}
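Both identities \eqref{intg} and \eqref{intgg} can be verified symbolically for small $k$ and $\ell$ by exact polynomial integration; a self-contained Python sketch (helper names are ours):

```python
from fractions import Fraction
from math import comb

def g_coeffs(k, i):
    """Coefficient list of g_{k,i}(x) = C(k-1, i-1) x^{i-1} (1-x)^{k-i}."""
    c = [Fraction(0)] * k                   # g_{k,i} has degree k - 1
    for m in range(k - i + 1):              # binomial expansion of (1-x)^{k-i}
        c[i - 1 + m] += comb(k - 1, i - 1) * comb(k - i, m) * (-1) ** m
    return c

def multiply(p, q):
    """Product of two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for a, pa in enumerate(p):
        for b, qb in enumerate(q):
            out[a + b] += pa * qb
    return out

def integrate01(c):
    """Exact integral over [0, 1] of the polynomial with coefficient list c."""
    return sum(coef / (d + 1) for d, coef in enumerate(c))
```

For example, `integrate01(g_coeffs(k, i))` returns `Fraction(1, k)` for every $1 \leq i \leq k$, in accordance with \eqref{intg}.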
It follows from \eqref{intg} that, if $|\gs|=k$,
\begin{equation}
\begin{split}
\intoi\intoi
\sum_{i=1}^k g_{k,i}(x) g_{k,\gs(i)}(y)
\dd x\dd y
= \frac{k}{k^2}=\frac1{k}
\end{split}
\end{equation}
which implies, using \eqref{intgg} twice, if further $|\gt|=\ell$,
\begin{multline*}
\intoi\intoi
\left(\sum_{i=1}^k g_{k,i}(x) g_{k,\gs(i)}(y) -\frac 1k \right)
\left(\sum_{j=1}^\ell g_{\ell,j}(x) g_{\ell,\gt(j)}(y) -\frac 1\ell \right)
\dd x\dd y
\\
=
\intoi\intoi
\sum_{i=1}^k g_{k,i}(x) g_{k,\gs(i)}(y)
\sum_{j=1}^\ell g_{\ell,j}(x) g_{\ell,\gt(j)}(y)
\dd x\dd y -\frac{1}{k\ell}
\\
=
\sum_{i=1}^k \sum_{j=1}^\ell
\intoi
g_{k,i}(x)
g_{\ell,j}(x)
\dd x
\intoi
g_{k,\gs(i)}(y)
g_{\ell,\gt(j)}(y)
\dd y -\frac{1}{k\ell}
\\
=
\sum_{i=1}^k \sum_{j=1}^\ell
\frac{(k-1)!^2\, (\ell-1)!^2}{(k+\ell-1)!^2}
\binom{i+j-2}{i-1} \binom{k+\ell-i-j}{k-i}
\binom{\gs(i)+\gt(j)-2}{\gs(i)-1} \binom{k+\ell-\gs(i)-\gt(j)}{k-\gs(i)}
-\frac{1}{k\ell}.
\end{multline*}
Consequently, by \eqref{gS} and \eqref{G},
if $|\gs|=k$ and $|\gt|=\ell$, then
{\multlinegap=0pt
\begin{multline}\label{gS2}
\gS_{\gs,\gt}
=
\frac{1}{(k+\ell-1)!^2}
\sum_{i=1}^k \sum_{j=1}^\ell
\binom{i+j-2}{i-1} \binom{k+\ell-i-j}{k-i}
\binom{\gs(i)+\gt(j)-2}{\gs(i)-1} \binom{k+\ell-\gs(i)-\gt(j)}{k-\gs(i)}
-\frac{k\ell}{k!^2\,\ell!^2}.
\end{multline}}
\begin{proof}[Proof of Theorem \ref{T1}]
Let $U_1,\dots,U_n$ be independent and identically distributed (i.i.d.)\
random variables with a uniform distribution on $\oi$.
It is a standard trick that (by symmetry) the reduction
$\red(U_1,\dots,U_n)$ is a
uniformly random permutation in $\SG_n$ (note that $U_1,\dots,U_n$ almost
surely are distinct), so we can take this as our random $\pi$ and obtain the
representation, with $k=|\gs|$,
\begin{equation}
\label{rep1}
\xn\gs = X_\gs(\pi)=\sum_{i_1<\dots<i_k}\ett{\red(U_{i_1},\dots,U_{i_k})=\gs}.
\end{equation}
This is an example of an asymmetric $U$-statistic, and
(a rather simple instance of)
the general theory in
\cite[Section 11.2]{SJIII}
can be used to show the theorem.
However, the details are a bit technical, in particular to calculate the
asymptotic covariances, so we will instead use another, more symmetric
representation. (See \cite[Remark 11.21]{SJIII}.)
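For experimentation, the reduction map and the pattern-count statistic $X_\gs(\pi)$ in \eqref{rep1} can be sketched in a few lines (the function names `red` and `X` mirror the paper's notation; the code is mine, not the paper's):

```python
# Illustrative sketch (my code): the reduction red(...) of a sequence of
# distinct reals, and the pattern count X_sigma(pi) summed over increasing
# index k-tuples as in the representation above.
from itertools import combinations, permutations

def red(seq):
    """Relative-order pattern of distinct numbers, as a tuple over 1..k."""
    order = sorted(range(len(seq)), key=lambda t: seq[t])
    pattern = [0] * len(seq)
    for rank, pos in enumerate(order, start=1):
        pattern[pos] = rank
    return tuple(pattern)

def X(sigma, pi):
    """Number of occurrences of the pattern sigma in the permutation pi."""
    k = len(sigma)
    return sum(1 for idx in combinations(range(len(pi)), k)
               if red([pi[t] for t in idx]) == sigma)

pi = (3, 1, 4, 2)
assert X((2, 1), pi) == 3                                    # inversions of 3142
assert sum(X(s, pi) for s in permutations((1, 2, 3))) == 4   # = C(4,3)
```

Every length-$k$ index set realizes exactly one pattern, which is the reason the counts over all $\gs\in\SG_k$ sum to $\binom nk$.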
Let $V_1,\dots,V_n$ be another sequence of i.i.d.\ random variables,
uniformly distributed on \oi{} and independent of $U_1,\dots,U_n$.
Let $\pi'$ be the permutation that sorts these
numbers such that $V_{\pi'(1)}<\dots<V_{\pi'(n)}$
and let $\pi$ be the reduction of $U_{\pi'(1)},\dots,U_{\pi'(n)}$.
Then $\pi$ is still uniformly random, and it is easy to see that
\begin{equation}
\label{rep2}
\begin{split}
\xn\gs = X_\gs(\pi)
&:=
\sum_{i_1<\dots<i_k}
\ett{\red(U_{\pi'(i_1)},\dots,U_{\pi'(i_k)})=\gs}
\\&\phantom:
=
\sumx_{j_1,\dots,j_k}
\ett{\red(U_{j_1},\dots,U_{j_k})=\gs}\cdot\ett{V_{j_1}<\dots<V_{j_k}},
\end{split}
\end{equation}
where $\sumx$ denotes summation over all distinct indices
$j_1,\dots,j_k$.
This representation, while in some ways more complicated than \eqref{rep1},
has the great advantage that we sum over all ordered $n$-tuples of
distinct indices;
this is thus an example of a $U$-statistic, and we can apply the
basic central limit theorem by Hoeffding \cite[Theorem 7.1]{Hoeffding},
see also \cite{Rubin-Vitale} and \cite[Section 11.1]{SJIII}.
In order to compute the (co)variances, we follow the path of
Hoeffding's proof.
The main idea of Hoeffding's proof of his
central limit theorem is to use a projection.
In our case we let $W_j:=(U_j,V_j)\in\oi^2$ and write \eqref{rep2} as
\begin{equation}\label{rep3}
\xn\gs = \sumx_{j_1,\dots,j_k} f_\gs(W_{j_1},\dots,W_{j_k}),
\end{equation}
for a certain (indicator) function $f_\gs$.
We then take the conditional expectation of $f_\gs(W_1,\dots,W_k)$ given one
of the variables $W_i$:
\begin{equation}\label{fgs}
f_{\gs;i}(x,y) := \E \bigpar{f_\gs(W_1,\dots,W_k)\mid W_i=(x,y)};
\end{equation}
we also take the expectation
\begin{equation}
\mu := \E f_\gs(W_1,\dots,W_k) = \E f_{\gs;i}(W_i).
\end{equation}
Hoeffding then shows that if we replace $f_\gs$ by
$f'_\gs(W_1,\dots,W_k):=\mu+\sum_{i=1}^k (f_{\gs;i}(W_i)-\mu)$,
then the resulting error for the sum in \eqref{rep3} will have variance
$O(n^{2k-2})$, which is negligible with the normalization used in
\refT{T1}.
Thus we can approximate
$\xn\gs-\E\xn\gs$ by
\begin{equation}
\sumx_{j_1,\dots,j_k} \sum_{i=1}^k \bigpar{f_{\gs;i}(W_{j_i})-\mu}
= \sum_{i=1}^k (n-1)\fall{k-1}\sum_{j=1}^n\bigpar{f_{\gs;i}(W_{j})-\mu}
=(n-1)\fall{k-1}\sum_{j=1}^n F_\gs(W_j),
\end{equation}
where $(n-1)\fall{k-1}=(n-1)\dotsm(n-k+1)$ and
\begin{equation}\label{F}
F_\gs(x,y):=\sum_{i=1}^k \bigpar{f_{\gs;i}(x,y)-\mu}.
\end{equation}
The asymptotic normality of $\xn\gs$ now
follows by the standard central limit theorem
for the i.i.d.\ random variables $F_\gs(W_j)$,
which yields $(\xn\gs-\E\xn\gs)/n^{k-1/2}\dto N\bigpar{0,\gS_{\gs,\gs}}$
where
\begin{equation}
\label{gSF}
\gS_{\gs,\gs}:=\E \bigpar{F_{\gs}(W_1)^2}
= \intoi\intoi F_{\gs}(x,y)^2\dd x\dd y.
\end{equation}
Joint normality for several patterns $\gs$ (possibly of different lengths)
follows in the same way, with the asymptotic covariances
\begin{equation}
\label{gSF2}
\gS_{\gs,\gt}:=\E \bigpar{F_{\gs}(W_1)F_\gt(W_1)}
= \intoi\intoi F_{\gs}(x,y)F_\gt(x,y)\dd x\dd y.
\end{equation}
It remains to compute the functions $F_\gs$ defined in \eqref{F}.
By \eqref{fgs}, since the variables $U_j$ and $V_j$ are independent,
\begin{equation}\label{sw}
f_{\gs;i}(x,y)
= \PP\bigpar{\red(U_1,\dots,U_k)=\gs \mid U_i=x}
\PP\bigpar{V_1<\dots<V_k \mid V_i=y}.
\end{equation}
For the second probability in \eqref{sw}
we require that $V_1,\dots,V_{i-1}<y$ and
$V_{i+1},\dots,V_k>y$, and furthermore that these two sets of variables are
increasing; since the variables are independent and uniformly distributed,
the probability is, recalling the notation \eqref{g},
\begin{equation}
\frac{y^{i-1}}{(i-1)!} \frac{(1-y)^{k-i}}{(k-i)!} = \frac1{(k-1)!}g_{k,i}(y).
\end{equation}
Similarly, for the first probability in \eqref{sw} we require that the
$\gs(i)$:th smallest of $U_1,\dots,U_k$ is $x$, and that the others come in
the order specified by $\gs$, and the probability of this is
$(k-1)!^{-1}g_{k,\gs(i)}(x)$.
Consequently,
\begin{equation}\label{f1}
f_{\gs;i}(x,y) = \frac1{(k-1)!^2}\,g_{k,\gs(i)}(x)g_{k,i}(y).
\end{equation}
Furthermore,
\begin{equation}\label{mu2}
\mu = \E f_{\gs}(W_1,\dots,W_k) =
\PP\bigpar{\red(U_{1},\dots,U_{k})=\gs}
\PP\bigpar{V_{1}<\dots<V_{k}}
=\frac{1}{k!^2}.
\end{equation}
It follows from \eqref{F}, \eqref{f1}, \eqref{mu2} and \eqref{G} that
$F_\gs(x,y)=G_\gs(y,x)$. Hence \eqref{gSF}--\eqref{gSF2} agree with
\eqref{gS}, and Hoeffding's theorem yields \eqref{t1a}.
Hoeffding's theorem (and its proof sketched above) yields also the
convergence \eqref{t1b} of the covariances. To see that moment convergence
holds also for higher moments, let $m$ be a positive integer.
By \eqref{rep3},
\begin{equation}\label{mom}
\E\bigpar{\xn\gs-\E\xn\gs}^m
=\sumx_{j_{11},\dots,j_{k1}} \dotsm \sumx_{j_{1m},\dots,j_{km}}
\E\prod_{i=1}^m \bigpar{f_\gs(W_{j_{1i}},\dots,W_{j_{ki}})-\mu}
\end{equation}
where the expectation on the right-hand side vanishes unless each index set
\set{j_{1i},\dots,j_{ki}} contains at least one index shared by another such
set. In this case, however, there are at most $mk-m/2$ distinct indices, and
it follows that the moment \eqref{mom} is a polynomial in $n$ of degree
at most $mk-m/2$.
In particular, the normalized central moment
$\E\bigpar{(\xn\gs-\E\xn\gs)/n^{k-1/2}}^m=O(1)$.
If $m$ is an even integer, this implies, by standard results on uniform
integrability, that all moments of lower order converge to the corresponding
moments of the limit $Z_\gs$, and the same holds for joint moments. Since $m$ is arbitrary, this shows convergence of all moments.
\end{proof}
\begin{example}\label{E1}
The case $k=1$ is trivial, with $\xn1=n$ deterministic. Indeed,
\eqref{g}--\eqref{G} yield $g_{1,1}(x)=1$ and $G_{1,1}(x,y)=0$.
\end{example}
\begin{example}\label{E2}
The simplest non-trivial example is $k=2$, where $X_{21}(\pi)$ is the
\emph{number of inversions} in $\pi$. The distribution of this random
variable, for $\pi$ uniformly at random in $\SG_n$, is called the
\emph{Mahonian distribution}, and it is well-known that it is asymptotically
normal, see e.g.\ \cite[Section X.6]{FellerI}.
(See \cite{CJZ-mahonian} for the case of permutations of multi-sets; it
would be interesting to obtain similar results for other patterns in
multi-set permutations.)
A simple calculation using \eqref{g}--\eqref{G} yields
\begin{equation}
G_{21,21}(x,y)=-2\bigpar{x-\tfrac12}\bigpar{y-\tfrac12}
\end{equation}
and \eqref{gS} or \eqref{gS2} yields $\gS_{21,21}=1/36$.
Hence \refT{T1} in this case yields the well-known
\begin{equation}
\frac{\xn{21}-\frac12\binom{n}2}{n^{3/2}}
\dto N\bigpar{0,1/36}.
\end{equation}
\end{example}
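As a cross-check (my code, not the paper's), the value $\gS_{21,21}=1/36$ can be recovered with exact rational arithmetic from the double-sum expression for $\gS_{\gs,\gt}$ computed above, and compared with the exact Mahonian variance $n(n-1)(2n+5)/72\sim n^3/36$ obtained by enumerating $\SG_n$ for small $n$:

```python
# Exact-rational check (mine) of Sigma_{21,21} = 1/36 and of the Mahonian
# variance n(n-1)(2n+5)/72, whose leading term is n^3/36.
from fractions import Fraction
from itertools import combinations, permutations
from math import comb, factorial

def Sigma(sigma, tau):
    # my transcription of the double-sum covariance expression, exact rationals
    k, l = len(sigma), len(tau)
    s = sum(comb(i + j - 2, i - 1) * comb(k + l - i - j, k - i)
            * comb(sigma[i - 1] + tau[j - 1] - 2, sigma[i - 1] - 1)
            * comb(k + l - sigma[i - 1] - tau[j - 1], k - sigma[i - 1])
            for i in range(1, k + 1) for j in range(1, l + 1))
    return (Fraction(s, factorial(k + l - 1) ** 2)
            - Fraction(k * l, factorial(k) ** 2 * factorial(l) ** 2))

assert Sigma((2, 1), (2, 1)) == Fraction(1, 36)

def inversions(pi):
    return sum(1 for a, b in combinations(pi, 2) if a > b)

for n in range(2, 7):
    vals = [inversions(p) for p in permutations(range(n))]
    mean = Fraction(sum(vals), len(vals))
    var = Fraction(sum(v * v for v in vals), len(vals)) - mean ** 2
    assert var == Fraction(n * (n - 1) * (2 * n + 5), 72)   # ~ n^3/36
```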
Example \ref{E1} is the only case when
the limit $Z_\gs$ in \refT{T1} vanishes, as we show next.
\begin{theorem}
\label{T0}
If $k>1$, then $\gS_{\gs,\gs}>0$ and thus $Z_\gs$ is non-degenerate,
for every $\gs\in\SG_k$.
\end{theorem}
\begin{proof}
By \eqref{g},
\begin{equation}
\sum_{i=1}^k g_{k,i}(x) = 1.
\end{equation}
Hence \eqref{G} may be written, using Kronecker's delta $\gd_{i,j}$,
\begin{equation}\label{G2}
G_\gs(x,y) := \frac{1}{(k-1)!^2}
\sum_{i=1}^k \sum_{j=1}^k
\Bigpar{\gd_{j,\gs(i)}-\frac1k}g_{k,i}(x) g_{k,j}(y).
\end{equation}
For a fixed $k$, the polynomials $g_{k,i}$, $1\le i\le k$, are linearly
independent (and form a basis of the $k$-dimensional vector space of
polynomials of degree $\le k-1$).
Hence the $k^2$ tensor products $g_{k,i}(x)g_{k,j}(y)$ are
linearly independent in $L^2(\oi^2)$, and it follows from \eqref{G2} and
\eqref{gS} that if
$k\ge2$, then $G_\gs$ is not identically 0 and thus
$\gS_{\gs,\gs}=\iint G_\gs (x,y)^2>0$.
\end{proof}
For a given $k$ we have $k!$ patterns $\gs\in\SG_k$ and thus $k!$ limit
variables $Z_\gs$. We have just seen that (if $k>1$)
these are all non-degenerate; however, they are not linearly independent.
For example, the sum $\sum_{\gs\in\SG_k}X_\gs(\pi)=\binom nk$ for every $\pi$,
so the sum is deterministic and it follows that $\sum_{\gs\in \SG_k} Z_\gs = 0$.
Many non-trivial linear combinations vanish too, as is seen by the following
theorem.
\begin{theorem}\label{T2}
Let $k\ge1$. The $k!$ limit random variables $Z_\gs$, $\gs\in\SG_k$, span
a linear space of dimension $(k-1)^2$.
\end{theorem}
\begin{proof}
By the definition \eqref{gS}, this linear space, $V$ say, is isomorphic
(and isometric for the appropriate $L^2$-norms)
to the linear space $V_1$ spanned by the functions $G_\gs$ on $\oi^2$.
Furthermore, by \eqref{G2} and the comments after it, $V_1$ is
isomorphic to the linear space $V_2$ of $k\times k$ matrices spanned by the
matrices
$A_{\gs} := \bigpar{\gd_{j,\gs(i)}-\frac1k}_{i,j=1}^k$.
Let $V_3$ be the space of all $k\times k$ matrices
with all row sums and column sums 0.
Then each matrix $A_{\gs}\in V_3$ and thus $V_2\subseteq V_3$.
Conversely, it is easily seen that each matrix in $V_3$ is a linear
combination of matrices $A_\gs$, for example using the well-known fact that
every doubly stochastic matrix is a convex combination of permutation
matrices.
Hence $V_2=V_3$. Finally, $\dim(V_3)=(k-1)^2$ since a matrix in $V_3$ is
uniquely determined by its upper left corner $(k-1)\times(k-1)$ submatrix
obtained by deleting the last row and column, and conversely this submatrix
may be chosen arbitrarily.
\end{proof}
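The dimension count in Theorem \ref{T2} can be verified mechanically for small $k$; the sketch below (my code) computes the rank of the span of the matrices $A_\gs$ by exact Gaussian elimination over the rationals:

```python
# Mechanical check (mine) that the matrices A_sigma = (delta_{j,sigma(i)} - 1/k)
# span a space of dimension (k-1)^2, for small k.
from fractions import Fraction
from itertools import permutations

def rank(rows):
    # exact Gaussian elimination over the rationals
    rows = [list(r) for r in rows]
    rk, col, ncols = 0, 0, len(rows[0])
    while rk < len(rows) and col < ncols:
        piv = next((r for r in range(rk, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(rk + 1, len(rows)):
            f = rows[r][col] / rows[rk][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[rk])]
        rk += 1
        col += 1
    return rk

for k in range(1, 5):
    mats = [[Fraction(int(sigma[i] == j + 1), 1) - Fraction(1, k)
             for i in range(k) for j in range(k)]       # A_sigma, flattened
            for sigma in permutations(range(1, k + 1))]
    assert rank(mats) == (k - 1) ** 2
```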
\begin{example}\label{E3}
There are 6 patterns of length $k=3$.
Taking them in lexicographic order 123, 132, 213, 231, 312, 321,
and using Maple to calculate the covariance matrix of the
limit variables $Z_{\gs}$ by \eqref{g}--\eqref{gS}, we find
\begin{equation}\label{cov3}
\bigpar{\Cov(Z_\gs,Z_\gt)}_{\gs,\gt\in\SG_3}
=\bigpar{\gS_{\gs,\gt}}_{\gs,\gt\in\SG_3}
=
\frac{1}{5!^2}
\begin {pmatrix}
26&12&12&-13&-13&-24\\
12&14&-1&-6&-6&-13\\
12&-1&14&-6&-6&-13\\
-13&-6&-6&14&-1&12\\
-13&-6&-6&-1&14&12\\
-24&-13&-13&12&12&26
\end{pmatrix}
.
\end{equation}
We note that the asymptotic variances differ between different patterns;
they are $13/7200$ (for 123 and 321) or $7/7200$ (for the other patterns).
The eigenvectors of the covariance matrix \eqref{cov3} are
\begin{equation}\label{evv3}
\newcommand\x{\phantom{-}}
\begin{pmatrix}
\x2\\ \x1\\ \x1\\ -1\\ -1\\ -2
\end{pmatrix}
,\quad
\begin{pmatrix}
\x0\\ \x1\\ -1\\ \x0\\ \x0\\ \x0
\end{pmatrix}
,\quad
\begin{pmatrix}
\x0\\ \x0\\ \x0\\ \x1\\ -1\\ \x0
\end{pmatrix}
,\quad
\begin{pmatrix}
\x2\\ -1\\ -1\\ -1\\ -1\\ \x2
\end{pmatrix}
,\quad
\begin{pmatrix}
\x1\\ -1\\ -1\\ \x1\\ \x1\\ -1
\end{pmatrix}
,\quad
\begin{pmatrix}
1\\1\\1\\1\\1\\1
\end{pmatrix}
.
\end{equation}
\end{example}
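The matrix \eqref{cov3} can be reproduced independently (my code, exact rationals) from the double-sum expression for $\gS_{\gs,\gt}$ derived before the proof of Theorem \ref{T1}:

```python
# Independent reproduction (mine) of the 6x6 covariance matrix for k = 3,
# up to the common factor 1/5!^2, using exact rational arithmetic.
from fractions import Fraction
from itertools import permutations
from math import comb, factorial

def Sigma(sigma, tau):
    # my transcription of the double-sum covariance expression
    k, l = len(sigma), len(tau)
    s = sum(comb(i + j - 2, i - 1) * comb(k + l - i - j, k - i)
            * comb(sigma[i - 1] + tau[j - 1] - 2, sigma[i - 1] - 1)
            * comb(k + l - sigma[i - 1] - tau[j - 1], k - sigma[i - 1])
            for i in range(1, k + 1) for j in range(1, l + 1))
    return (Fraction(s, factorial(k + l - 1) ** 2)
            - Fraction(k * l, factorial(k) ** 2 * factorial(l) ** 2))

patterns = list(permutations((1, 2, 3)))   # lexicographic: 123, 132, ..., 321
M = [[Sigma(s, t) for t in patterns] for s in patterns]
expected = [[26, 12, 12, -13, -13, -24],
            [12, 14, -1, -6, -6, -13],
            [12, -1, 14, -6, -6, -13],
            [-13, -6, -6, 14, -1, 12],
            [-13, -6, -6, -1, 14, 12],
            [-24, -13, -13, 12, 12, 26]]
assert M == [[Fraction(e, factorial(5) ** 2) for e in row] for row in expected]
```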
\begin{remark}
The last eigenvector in \eqref{evv3} corresponds to the trivial fact
mentioned above that the sum of all $Z_\gs$ vanishes.
The fifth eigenvector, also with eigenvalue 0, says that
\begin{equation}\label{Z=0}
Z_{123}+Z_{231}+Z_{312}-Z_{132}-Z_{213}-Z_{321} =0.
\end{equation}
Let $Y(\pi)$ be the corresponding number
\begin{equation}\label{Y}
Y(\pi):=
X_{123}(\pi)+X_{231}(\pi)+X_{312}(\pi)-X_{132}(\pi)-X_{213}(\pi)-X_{321}(\pi),
\end{equation}
and let $Y_n:=Y(\pi)$ with $\pi$ chosen uniformly at random in $\SG_n$.
(Note that $Y(\pi)$ is the sum of the signs of the $\binom n3$ permutations
$\red\bigpar{\pi_{i_1}\pi_{i_2}\pi_{i_3}}$.)
In particular, the leading term of $\Var(Y_n)$ is $\frac1{18}n^4$,
i.e., of order $n^{2k-2}$ instead of
$n^{2k-1}$ as in the cases when \refT{T1} yields a non-degenerate limit.
In such cases, one can use a more advanced version of Hoeffding's argument
above and show that there is an asymptotic distribution that can be
represented as an (infinite) polynomial of degree 2 in normal random
variables; this polynomial can further be diagonalized as a linear
combination of squares of independent normal variables, see e.g.{}
\cite{Rubin-Vitale} and \cite[Section 11.1]{SJIII}.
In the present case this leads to
\begin{equation}\label{Y*}
n^{-2} Y_n \dto Y^* = \sum_{\substack{\ell,m=-\infty\\\ell,m\neq0}}^\infty
\frac1{2\pi^2 \ell m}\bigpar{\xi_{\ell,m}^2-1},
\end{equation}
where $\xi_{\ell,m}$ are i.i.d.\ standard normal random variables.
(We omit the details but note that the bilinear form in
\cite[Corollary 11.5(iii)]{SJIII}
in this case after some calculation
turns out to correspond to
the convolution operator on $L^2(\mathbb T^2)$ given by convolution with
$H(x,y)=\frac16(2x-1)(2y-1)$ (where we identify the group $\mathbb T$ with
$[0,1)$);
hence its eigenvalues are the Fourier coefficients
$\widehat H(\ell,m) = -1/(6\pi^2\ell m)$, which yields the coefficients in
\eqref{Y*}.)
Note that, since $\Var(\xi_{\ell,m}^2)=2$,
\begin{equation}\label{Y*v}
\Var Y^* = \sum_{\substack{\ell,m=-\infty\\\ell,m\neq0}}^\infty
\frac2{4\pi^4 \ell^2 m^2} =\frac1{18},
\end{equation}
in accordance with the asymptotic formula $\Var(Y_n)\sim n^4/18$.
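A quick numeric confirmation of \eqref{Y*v} (my code): the double sum factorizes as $(2\zeta(2))^2/(2\pi^4)$, and a truncated $\zeta(2)$ already reproduces $1/18$ to high accuracy.

```python
# Numeric sanity check (mine): the double sum of 2/(4 pi^4 l^2 m^2) over
# nonzero l, m equals (2*zeta(2))^2 / (2 pi^4) = 1/18.
import math

zeta2 = sum(1.0 / n ** 2 for n in range(1, 200001))   # truncated zeta(2) ~ pi^2/6
total = (2 * zeta2) ** 2 / (2 * math.pi ** 4)
assert abs(total - 1 / 18) < 1e-4
```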
Furthermore, the representation \eqref{Y*} of the limit $Y^*$ yields its
moment generating function: pairing the terms for $\pm\ell$ and $\pm m$ and
using the product formula for the sine function gives, for $|t|<\pi^2$,
\begin{equation}\label{Y*mgf}
\E e^{tY^*}
= \prod_{\ell=1}^\infty \frac{t/(\pi \ell)}{\sin\bigpar{t/(\pi \ell)}}.
\end{equation}
This type of limit is typical of the degenerate cases that can occur for
certain linear combinations of pattern counts.
It is also possible to obtain higher degeneracies in special cases, with
variance of still lower order and a limit that is a polynomial of higher
degree in infinitely many normal variables; one example is to generalize
\eqref{Y} by taking, for any fixed $k\ge3$, the sum of the signs of the
$\binom nk$ patterns of length $k$ occurring in $\pi$. It can be seen that
for this example, $\Var(Y_n)$ is a polynomial in $n$ of degree $k+1$ only
(instead of the typical $2k-1$),
because all higher order terms cancel in this highly symmetric example.
\end{remark}
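The interpretation of $Y(\pi)$ in \eqref{Y} as a sum of signs can be checked by brute force for small $n$ (illustrative code, mine); note also that $\sum_{\pi\in\SG_n}Y(\pi)=0$, consistent with $\E Y_n=0$.

```python
# Brute-force check (my code): Y(pi) as the alternating pattern-count sum
# equals the sum of signs of the C(n,3) reduced triples, and Y sums to 0
# over S_n.
from itertools import combinations, permutations

def red(seq):
    # relative-order pattern of a sequence of distinct numbers
    order = sorted(range(len(seq)), key=lambda t: seq[t])
    out = [0] * len(seq)
    for rank, pos in enumerate(order, start=1):
        out[pos] = rank
    return tuple(out)

def sign3(p):
    # sign of a length-3 pattern via its inversion parity
    inv = sum(1 for a, b in combinations(p, 2) if a > b)
    return -1 if inv % 2 else 1

def Y(pi):
    # sum of the signs of the C(n,3) reduced triples of pi
    return sum(sign3(red([pi[t] for t in idx]))
               for idx in combinations(range(len(pi)), 3))

even = {(1, 2, 3), (2, 3, 1), (3, 1, 2)}     # the even patterns of length 3
for n in (3, 4, 5):
    Sn = list(permutations(range(1, n + 1)))
    for pi in Sn:
        alt = sum(1 if red([pi[t] for t in idx]) in even else -1
                  for idx in combinations(range(n), 3))
        assert alt == Y(pi)                   # matches the alternating sum
    assert sum(Y(pi) for pi in Sn) == 0       # consistent with E Y_n = 0
```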
\begin{example}
There are 24 patterns of length $k=4$.
A calculation as in Example \ref{E3} of the covariance matrix yields a
$24\times24$ matrix
of rank $(4-1)^2=9$, in accordance with \refT{T2}.
Similarly, for $k=5$ the covariance matrix is a $120\times120$ matrix with
the $4^2=16$ non-zero eigenvalues
\begin{equation}\label{ev5}
\frac{30}{9!^2}
\bigpar{7056,3024,3024,1296,756,756,324,324,84,84,81,36,36,9,9,1}.
\end{equation}
The fact that the non-zero eigenvalues in these cases, such as those in \eqref{ev5},
all are simple rational numbers suggests that there is a general structure
(valid for all $k$)
for these eigenvalues, and presumably also for the corresponding
eigenvectors;
it would be interesting to know more about this.
\end{example}
\section{Conclusion}\label{SECconcl}
In this article, we studied the moments and mixed moments of the random variables $X_{\sigma}(\pi)$ for a number of patterns $\sigma$, where $\pi$ may be chosen from $\SG_{n}$ or from a pattern avoiding set $\SG_{n}(\tau)$. In addition, we proved that for any two patterns, the corresponding random variables are jointly asymptotically normal when the permutations are drawn from $\SG_{n}$. The complementary computational approach can compute a number of moments and mixed moments as well as derive (rigorous) formulas for the lower moments. We anticipate that this approach could be extended to provide an alternative proof of the joint asymptotic normality of multiple random variables, but we leave this as future work.
In the setting where the permutations are chosen from the pattern avoiding set $\SG_{n}(\tau)$ (for some fixed pattern $\tau$), much less is known. Others have recently studied the total number of occurrences of a pattern in these sets, which is equivalent to the expected value (i.e., the first moment) of the random variable $X_{\sigma}$, generally in the case where both $\sigma, \tau \in \SG_{3}$. Our approach allows us to quickly compute many empirical moments, far beyond the first moment. We expect that a more thorough analysis of these higher moments will uncover interesting properties and that in some cases, these higher moments will also have closed-form formulas. In addition, the random variables for some patterns appear \emph{not} to be asymptotically normal (whereas in the case where permutations are drawn from $\SG_{n}$, they are asymptotically normal for every pattern \cite{Bona}). It would be interesting to understand which patterns (if any) have corresponding random variables that are asymptotically normal when permutations are drawn from $\SG_{n}(\tau)$.
/*
* Manager.h
*
* Created on: 23 Oct 2015
* Author: hieu
*/
#pragma once
#include <queue>
#include <cstddef>
#include <string>
#include <deque>
#include "../ManagerBase.h"
#include "Stacks.h"
#include "InputPaths.h"
#include "Misc.h"
namespace Moses2
{
namespace SCFG
{
class SymbolBind;
class TargetPhraseImpl;
class SymbolBindElement;
class Manager: public Moses2::ManagerBase
{
public:
Manager(System &sys, const TranslationTask &task, const std::string &inputStr,
long translationId);
virtual ~Manager();
void Decode();
std::string OutputBest() const;
std::string OutputNBest();
std::string OutputTransOpt();
const InputPaths &GetInputPaths() const {
return m_inputPaths;
}
QueueItemRecycler &GetQueueItemRecycler() {
return m_queueItemRecycler;
}
const Stacks &GetStacks() const {
return m_stacks;
}
protected:
Stacks m_stacks;
SCFG::InputPaths m_inputPaths;
void InitActiveChart(SCFG::InputPath &path);
void Lookup(SCFG::InputPath &path);
void LookupUnary(SCFG::InputPath &path);
void Decode(SCFG::InputPath &path, Stack &stack);
void ExpandHypo(
const SCFG::InputPath &path,
const SCFG::SymbolBind &symbolBind,
const SCFG::TargetPhraseImpl &tp,
Stack &stack);
bool IncrPrevHypoIndices(
Vector<size_t> &prevHyposIndices,
size_t ind,
const std::vector<const SymbolBindElement*> &ntEles);
// cube pruning
Queue m_queue;
SeenPositions m_seenPositions;
QueueItemRecycler m_queueItemRecycler;
void CreateQueue(
const SCFG::InputPath &path,
const SymbolBind &symbolBind,
const SCFG::TargetPhrases &tps);
};
}
}